A Semantics for FAIR Data Interoperability Course

Introduction
FAIR Data is data that is Findable, Accessible, Interoperable and Reusable by computers. To achieve this, data should go through a FAIRification process. This introductory course is the first about the FAIRification of (research) data as part of FAIR good practice in Data Stewardship. The course is aimed at information technology and data experts currently employed at research institutions and data-intensive organisations and companies. The FAIR data approach has already influenced both academic and industrial research practices, and has been endorsed by the European Commission, the World Economic Forum, the G7 and numerous national organisations.

Course date: 14-19 May 2018

What this course delivers
This course provides a solid introduction to semantics in FAIR. It starts with the basics of ontology-driven conceptual modelling, providing the foundational knowledge for creating good conceptual models/ontologies. Once the ontologies have been created and validated using a foundational ontology language, participants will learn how to make them computer-actionable using Semantic Web technologies such as OWL and RDF. Participants will then learn how to semantically enrich data and make it linkable by applying the ontologies to the data. In addition, the course teaches how to define validation rules for datasets, allowing the data to be validated against both its syntax and its semantics. Lastly, the course introduces the FAIR metadata approach, so that FAIR/RDF data are well described by their metadata.
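To give a flavour of what "computer-actionable" means here, the minimal sketch below (illustrative only, not course material) uses Python's rdflib to type a data record with a term from a domain ontology and attach a typed value; all URIs in it are hypothetical examples.

```python
# Minimal sketch: expressing a single observation as RDF with rdflib, so that
# its meaning is carried by ontology terms rather than ad hoc column names.
# All URIs below are hypothetical examples.
from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import XSD

EX = Namespace("http://example.org/data/")        # hypothetical dataset namespace
ONTO = Namespace("http://example.org/ontology/")  # hypothetical domain ontology

g = Graph()
g.bind("ex", EX)
g.bind("onto", ONTO)

# State that patient-42 is an instance of the ontology class Patient,
# and link a typed literal via an ontology-defined property.
g.add((EX["patient-42"], RDF.type, ONTO.Patient))
g.add((EX["patient-42"], ONTO.hasBodyTemperature,
       Literal("37.5", datatype=XSD.decimal)))

print(g.serialize(format="turtle"))
```

Because the statements use shared ontology terms and datatypes, a machine can interpret, validate and link them without knowledge of the original spreadsheet or database layout.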

Who should take the course
This course is aimed at data scientists in interdisciplinary teams within large organisations who are dealing with the ever-growing complexity of data integration. Data technicians/ICT staff currently spend between 70 and 80 percent of their time on data wrangling: dealing with format issues, identifiers, ontologies and the like, massaging the data so that it is ready for big data analysis. For larger organisations choosing to GO FAIR, integration and re-use of datasets becomes less labour intensive, leaving more time to dive into more complex data analysis. However, the FAIR data approach requires tight collaboration between domain experts, computer scientists and FAIR data experts.

This course is intended for members of such teams in data-intensive organisations, or for consultants working in change management or big-data trajectories: for example, domain experts, ICT specialists/programmers, statisticians, data scientists, data modellers, data architects, systems architects, IT executives, project managers, AI specialists and database architects.

Background and context
FAIR Data aims at improving the findability, accessibility, interoperability and re-use of data. Due to the increasing volume and complexity of data, researchers and data analysts rely on automated support to integrate and analyse these data in order to answer complex (research) questions.

A key aspect in this whole process is automated data interoperability, even when data were created in very different formats, in different languages, and (as is more often the case) in different research domains. During data integration, we need to identify the integration points, i.e., the parts of one dataset that can be related to other datasets, so that the end result is a combined dataset in which information contributed from the different sources presents a more comprehensive understanding of a given subject. To achieve general, automated interoperability of data, we make the intended meaning of data elements, relations and constraints explicit, using semantic approaches fortified with GO FAIR community-based standards.
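As an illustration of such an integration point, the sketch below (hypothetical data fragments and properties, not a prescribed workflow) merges two small RDF fragments that refer to the same entity by a shared identifier, after which a single SPARQL query spans both sources.

```python
# Minimal sketch: two datasets become interoperable when they refer to the
# same entity by a shared identifier. Merging the graphs and querying across
# them then needs no bespoke record-matching code.
from rdflib import Graph

# Hypothetical fragments from two independently produced datasets,
# both using the same URI for the same gene.
dataset_a = """
@prefix ex: <http://example.org/> .
<http://identifiers.org/hgnc/1100> ex:associatedWith ex:BreastCancer .
"""
dataset_b = """
@prefix ex: <http://example.org/> .
<http://identifiers.org/hgnc/1100> ex:expressedIn ex:MammaryTissue .
"""

g = Graph()
g.parse(data=dataset_a, format="turtle")
g.parse(data=dataset_b, format="turtle")  # same graph: triples simply accumulate

# One SPARQL query now spans both sources via the shared identifier.
results = g.query("""
    PREFIX ex: <http://example.org/>
    SELECT ?disease ?tissue WHERE {
        ?gene ex:associatedWith ?disease ;
              ex:expressedIn ?tissue .
    }
""")
for disease, tissue in results:
    print(disease, tissue)
```

The shared URI is the integration point: because both fragments name the gene identically, the combined graph answers a question that neither dataset could answer alone.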

Go to the course website