Revision as of 18:25, 26 July 2016
Definition
1. Data quality refers to the level of quality of data.
2. The fitness for use of information; information that meets the requirements of its authors, users, and administrators. (Martin Eppler)
Abbreviation
Synonyms
Information Quality
Superterms
Data
Subterms
Sources
https://en.wikipedia.org/wiki/Data_quality; http://iaidq.org/main/glossary.shtml#I
Data Quality and Quality Criteria
Data, datasets, descriptive data and metadata have to meet "quality criteria, such as accuracy, completeness, consistency, and actuality. [...] Additionally, Open Data credibility and transparency are crucial in order for Open Data to be trustworthy. Origin, originality, and data changes must be traceable and quality features testable." (Fraunhofer FOKUS)
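Two of the criteria named above, completeness and actuality, lend themselves to simple automated checks. The following is a minimal sketch, not part of any actual tooling; the field names (`title`, `source`, `license`, `last_update`) are hypothetical:

```python
# Hypothetical metadata record checks: completeness (are all required
# fields present and non-empty?) and actuality (is the record recent?).
from datetime import date

REQUIRED_FIELDS = {"title", "source", "license", "last_update"}

def completeness_issues(metadata):
    """Return the required fields that are missing or empty."""
    return {f for f in REQUIRED_FIELDS if not metadata.get(f)}

def is_current(metadata, max_age_days=365):
    """Actuality check: last update is no older than max_age_days."""
    last = date.fromisoformat(metadata["last_update"])
    return (date.today() - last).days <= max_age_days

record = {"title": "Wind power plants", "source": "Example agency",
          "license": "CC-BY-4.0", "last_update": "2016-07-01"}

print(completeness_issues(record))  # set() -> all required fields present
```

Criteria such as accuracy or traceability of changes are harder to test mechanically and typically need provenance records or review processes instead.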
Data Quality in the open energy database (oedb)
The database set-up is designed to support users in achieving good data quality:
- The interfaces (API and web interface) of the database structure the data and the metadata. In this way the database helps ensure, for instance, that all relevant metadata belonging to a dataset is submitted. Another example is the automatically generated primary key, which ensures that every row in a table has a unique identifier. For further information see also DatabaseRules.
- Plausibility and integration tests are applied to identify mistakes in the data, e.g. the geometrical correctness test, which makes manual testing with SQL functions obsolete.
- When the number of users becomes large enough, a third level of data testing and rating based on user evaluations might be implemented.
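The first two levels above can be illustrated with a small sketch. It uses Python's built-in sqlite3 as a stand-in for the actual oedb backend, and the table and column names are hypothetical; the coordinate-range check merely stands in for the geometrical correctness test mentioned above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Level 1: the schema itself enforces structure -- every row gets an
# automatically generated primary key, and NOT NULL constraints make
# the submission of required (meta)data mandatory.
cur.execute("""
    CREATE TABLE power_plant (
        id   INTEGER PRIMARY KEY AUTOINCREMENT,  -- unique row identifier
        name TEXT NOT NULL,
        lat  REAL NOT NULL,
        lon  REAL NOT NULL
    )
""")

# Level 2: a plausibility test run over the submitted data, here a
# simple coordinate-range check on latitude and longitude.
def implausible_rows(cursor):
    cursor.execute("""
        SELECT id, name FROM power_plant
        WHERE lat NOT BETWEEN -90 AND 90
           OR lon NOT BETWEEN -180 AND 180
    """)
    return cursor.fetchall()

cur.execute("INSERT INTO power_plant (name, lat, lon) VALUES (?, ?, ?)",
            ("Plant A", 52.5, 13.4))
cur.execute("INSERT INTO power_plant (name, lat, lon) VALUES (?, ?, ?)",
            ("Plant B", 95.0, 13.4))  # implausible latitude
conn.commit()

print(implausible_rows(cur))  # [(2, 'Plant B')]
```

Constraints catch structural errors at insertion time, while plausibility queries catch values that are well-formed but unlikely to be correct.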
Users can contribute to data quality, for instance by following the DatabaseRules. For further reading and guidelines on data management and publication, see for instance: Open Knowledge Foundation, Open Data Foundation and Software Carpentry (e.g. here).