Interoperability Maturity Indicators

Read how to apply the FAIR Maturity Indicators to measure the INTEROPERABILITY of the data and metadata.

  • The interoperability of your data is compared with your FAIR objectives so that improvements can be identified and made iteratively

Overview

Implementation of the FAIR guidelines is measured by a framework of metrics, now termed Maturity Indicators (MIs) [1, 2, 3]. This method focuses on the maturity indicators that measure the interoperability of the data and metadata. It is a questionnaire for manual evaluation of the FAIR MIs for Interoperability, which are recorded in the FAIRsharing registry [4, 5]. It is important to understand that FAIR is intended to be aspirational: any FAIR evaluation MIs are used to understand how to improve the FAIRness of the data.

The FAIR MIs have now reached second-generation maturity as a result of community feedback and the need to automate FAIR evaluation; an automated evaluator is available as a public demonstrator developed by Mark Wilkinson and collaborators [6, 7]. The second-generation MIs have been adopted for this method to prepare for automated evaluation once it is ready for production use by industry. The Research Data Alliance has compared all currently available FAIR evaluation tools and services [8].

The MIs for Interoperability are set out in the questionnaire below.

How To

This questionnaire enables manual evaluation of MIs to test for Interoperability.

  1. Is a knowledge representation language being used that provides any kind of structured information?
    • Indicates the use of a formal, accessible, shared, and broadly applicable language for knowledge representation, under a relaxed interpretation.
  2. Is a knowledge representation language being used that provides machine-resolvable ontological formats?
    • Indicates the use of a formal, accessible, shared, and broadly applicable language for knowledge representation, under a strict interpretation.
  3. Do the data and metadata make relaxed use of ontologies and vocabularies that are themselves FAIR?
    • Indicates relaxed use of vocabularies that resolve to a human-readable page. FAIR ontologies and their vocabularies can be evaluated against the OBO principles [9, 10].
  4. Do the data and metadata make strict use of ontologies and vocabularies that are themselves FAIR?
    • Indicates strict use of vocabularies that resolve to machine-readable linked identifiers. FAIR ontologies and their vocabularies can be evaluated against the OBO principles [9, 10].
  5. Does the metadata for the data contain links that resolve to different data sources?
    • Indicates whether the metadata for the data contains links to different sources, i.e. links that do not point back to the same source.
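Several of the questions above can be approximated programmatically. The sketch below runs lightweight, offline checks against a small hypothetical JSON-LD metadata record (all IRIs are illustrative, not real records): parsing the record demonstrates structured information (question 1), a context that maps prefixes to HTTP IRIs is a simple proxy for machine-resolvable terms (question 2), and links whose host differs from the record's own host count as links to different sources (question 5). This is a sketch of the idea, not an official evaluator.

```python
import json
from urllib.parse import urlparse

# Hypothetical JSON-LD metadata record, for illustration only.
RECORD = """
{
  "@context": {
    "dct": "http://purl.org/dc/terms/",
    "obo": "http://purl.obolibrary.org/obo/"
  },
  "@id": "https://example.org/dataset/42",
  "dct:subject": {"@id": "obo:GO_0008150"},
  "dct:references": {"@id": "https://other-repository.example.net/record/7"}
}
"""

def is_http_iri(value):
    """Proxy for a globally machine-resolvable identifier (questions 2/4)."""
    parts = urlparse(value)
    return parts.scheme in ("http", "https") and bool(parts.netloc)

doc = json.loads(RECORD)

# Question 1: the record parses as structured JSON-LD.
has_structure = "@context" in doc

# Question 2: vocabulary prefixes expand to machine-resolvable HTTP IRIs.
context_ok = all(is_http_iri(iri) for iri in doc["@context"].values())

# Question 5: at least one link points to a host other than the record's own.
own_host = urlparse(doc["@id"]).netloc
links = [v["@id"] for v in doc.values() if isinstance(v, dict) and "@id" in v]
external = [l for l in links if is_http_iri(l) and urlparse(l).netloc != own_host]

print(has_structure, context_ok, len(external) > 0)  # → True True True
```

Note that the compact term "obo:GO_0008150" is excluded from the external-link count because it is a CURIE, not an expanded HTTP IRI; a fuller evaluator would expand it via the context before testing resolvability.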

References and Resources

  1. The FAIR metrics group repository on GitHub at fairmetrics.org
  2. Wilkinson et al 2018 A design framework and exemplar metrics for FAIRness. Scientific Data volume 5, Article number: 180118 (DOI: 10.1038/sdata.2018.118).
  3. Supplementary information for Wilkinson et al 2018: https://github.com/FAIRMetrics/Metrics/tree/master/Evaluation_Of_Metrics
  4. FAIR Maturity Indicators and Tools: https://github.com/FAIRMetrics/Metrics/tree/master/MaturityIndicators
  5. FAIRsharing registry search for FAIR metrics (https://fairsharing.org/standards/?q=FAIR+maturity+indicator)
  6. Second generation Maturity Indicators tests: https://github.com/FAIRMetrics/Metrics/tree/master/MaturityIndicators/Gen2
  7. A public demonstration server for The FAIR Evaluator: https://w3id.org/AmIFAIR
  8. Research Data Alliance 2020 Results of an Analysis of Existing FAIR assessment tools https://preview.tinyurl.com/yausl4s4
  9. The OBO Foundry Principles Overview: http://www.obofoundry.org/principles/fp-000-summary.html
  10. Smith et al 2007 The OBO Foundry: coordinated evolution of ontologies to support biomedical data integration. Nat Biotechnol. 2007 Nov; 25(11): 1251 https://doi.org/10.1038/nbt1346

At a Glance

Related methods
Setting
  • Evaluation of interoperability to improve the FAIRness of the data and metadata
Team
  • Scientist generating or collecting the data and metadata
  • Data steward for advice and guidance
Timing
  • 0.5 day to answer the questions; faster if evaluation is automated
  • Additional time will depend on implementation of the FAIR improvements
Difficulty
  • High
Resources

Top Tips

  • How FAIR are your data? Checklist by Sara Jones & Marjan Grootveld
  • Knowledge representation languages, vocabularies and ontologies that are themselves “grounded” in the FAIR principles are recommended for data interoperability.
  • FAIR ontologies and vocabularies can be evaluated against community standards such as the Open Biological and Biomedical Ontologies (OBO) principles [9, 10].
  • Relaxed maturity indicators are sufficient for manual evaluation, whereas stricter indicators are necessary for automated evaluation.
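The relaxed-versus-strict distinction in the last tip can be made concrete with HTTP content negotiation. The sketch below (assumed logic for illustration, not the official FAIR Evaluator implementation) builds the two kinds of request an automated evaluator might send for a vocabulary term: a relaxed check accepts any human-readable HTML page, while a strict check asks the server for a machine-readable RDF serialisation. No network call is made here; the example only constructs the requests.

```python
from urllib.request import Request

# RDF media types a strict evaluator might accept (an illustrative list).
RDF_TYPES = "text/turtle, application/rdf+xml, application/ld+json"

def relaxed_request(term_iri):
    # Relaxed: any response a human could read is enough.
    return Request(term_iri, headers={"Accept": "text/html"})

def strict_request(term_iri):
    # Strict: the term must resolve to a linked-data serialisation.
    return Request(term_iri, headers={"Accept": RDF_TYPES})

req = strict_request("http://purl.obolibrary.org/obo/GO_0008150")
print(req.get_header("Accept"))
```

To complete a real check, an evaluator would send each request (e.g. with urllib.request.urlopen) and inspect the Content-Type of the response: an RDF media type passes the strict indicator, while any successful HTML response satisfies only the relaxed one.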