(platform.openmod-initiative.org/data)
Use Case of all users:
Finding and downloading data (web)
As before, the user finds an overview (like e.g. here; replacing the current open_eGo-project-internal Data Inventory in the Open Database Redmine) of data and their sources for different kinds of data (replacing the wiki page Data), as well as an overview of data and their sources for grid data (replacing the wiki page Transmission network datasets), including data models (sometimes also called grid models). The latter are understood as collections of all relevant information about a grid / network, separated from the tools for network calculations / analysis. Are these tools models that are presented in fact sheets?
In contrast to the existing openmod wiki, the OpenEnergy Platform additionally provides a database that links to and, where possible, includes data of general interest for energy system modellers. This database is called oedb (open energy database) and contains original open data collected from different sources as well as processed data. The datasets might be grouped / tagged as follows:
- climate data
- existing power stations (Kraftwerksliste and EEG-Anlagenregister: installed capacities and efficiencies)
- demand data / energy consumption
- transmission network
- flexibility options
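As a minimal illustration of how such tags could be used to group datasets, the sketch below stores tags as simple string lists per dataset; the tag names are taken from the list above, but the storage format is an assumption, not the actual oedb schema:

```python
# Sketch of tag-based filtering over dataset metadata.
# Dataset names are hypothetical examples; tags follow the list above.
datasets = [
    {"name": "dwd_temperature", "tags": ["climate data"]},
    {"name": "eeg_anlagenregister", "tags": ["existing power stations"]},
    {"name": "entsoe_load", "tags": ["demand data / energy consumption"]},
]

def find_by_tag(datasets, tag):
    """Return the names of all datasets carrying the given tag."""
    return [d["name"] for d in datasets if tag in d["tags"]]

find_by_tag(datasets, "climate data")  # -> ["dwd_temperature"]
```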
In addition, the user finds two schemata for every model (called app) in the working database (wdb) on the OpenEnergy Platform, which the model developers create and fill with model-specific content.
The user searches for certain data (using the data tags named above), assumptions or results (using provided tags like scenario input or scenario results etc.). She/he also uses the data view functions, which give an overview of the data itself in the form of a table, a map or graphs. Doing so, the user might limit the data to a certain geographical area using the map provided for the selection. The user might also download the (selected) data as a text, csv or json file.
Implementation:
Assuming that the openmod database includes or links to all relevant data, the lists on the wiki pages are no longer necessary. On overview pages, the database entries might be ranked by access frequency in order to make the database less overwhelming to browse.
CKAN with a faceted search is installed on top of the PostgreSQL database.
Use Case of model users and model developers:
Creating model specific schemata
In addition to the activities of non-modellers, the modellers of a model can make model-specific configurations and store model-specific data and assumptions in the xy_app_wd schema and results in the xy_app_res schema.
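The two schemata per model could be set up as sketched below. The xy_app_wd / xy_app_res naming pattern follows the text, but the SQL statements themselves are an assumption about how the schemata would be created in PostgreSQL:

```python
# Sketch: build CREATE SCHEMA statements for a model's two schemata.
# The SQL is illustrative; the platform's actual provisioning may differ.
def schema_statements(model):
    """Return statements for a model's working-data and results schemata."""
    return [
        f"CREATE SCHEMA IF NOT EXISTS {model}_app_wd;",   # model-specific data and assumptions
        f"CREATE SCHEMA IF NOT EXISTS {model}_app_res;",  # model results
    ]

statements = schema_statements("xy")
# These statements would then be executed against the wdb by the model developer.
```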
Downloading data (API)
Modellers can use the API for data exchange between the databases (oedb and wdb) and the model. Models written in different languages can communicate with the database using a function or module that generates and sends http queries, for instance:
“GET openmod.org/data/oedb/schema/table?fields=id,name,date”. A Python model might, for instance, use a function based on urllib.
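A minimal sketch of such a urllib-based helper is shown below. The base URL and endpoint layout are taken from the example query above; they are assumptions about the eventual API, not its final specification:

```python
# Sketch of a client helper for the (hypothetical) OpenEnergy REST API.
from urllib.parse import urlencode
import urllib.request

API_BASE = "http://openmod.org/data"  # assumed base URL from the example query

def build_query_url(db, schema, table, fields):
    """Build a GET URL such as .../oedb/schema/table?fields=id,name,date."""
    query = urlencode({"fields": ",".join(fields)}, safe=",")
    return f"{API_BASE}/{db}/{schema}/{table}?{query}"

url = build_query_url("oedb", "schema", "table", ["id", "name", "date"])
# A model would then fetch the rows, e.g.:
# with urllib.request.urlopen(url) as resp:
#     rows = resp.read()
```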
Uploading data (API and Web-GUI)
A user can add new rows to the tables of the database as well as new tables. To upload the data she/he can use a function / module that sends http PUT queries to the database API (see also “Downloading data (API)”), or the data importer (GUI) on the online presence. In a first step the importer can cope with data in csv format. The user can submit data as a batch and edit (or delete) the tables manually before she/he sends them. When the user accidentally leaves the input page while data is still pending, she/he will be warned and can cancel the exit.
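The PUT-based upload path can be sketched as follows. The endpoint, the JSON payload shape and the table names are assumptions for illustration, not the actual API contract:

```python
# Sketch: wrap new rows in an http PUT request to the (hypothetical) API.
import json
import urllib.request

def build_put_request(base, db, schema, table, rows):
    """Serialise rows to JSON and wrap them in a PUT request object."""
    body = json.dumps({"rows": rows}).encode("utf-8")
    return urllib.request.Request(
        url=f"{base}/{db}/{schema}/{table}",
        data=body,
        method="PUT",
        headers={"Content-Type": "application/json"},
    )

req = build_put_request("http://openmod.org/data", "oedb", "climate", "stations",
                        [{"id": 1, "name": "station_a"}])
# urllib.request.urlopen(req) would send it; the submitted rows then go through
# the regular test cycle before appearing in the oedb.
```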
Before these changes become available in the oedb, they are saved in a test database and must pass tests, which a test server runs regularly. Data made available by Open Power System Data (OPSD) is checked and saved in the database in the same way.
Changing data
Documenting changes to the data in the database means saving an enormous amount of information. Therefore, changing data in the database must be organised very carefully.
Implementation:
The API and the data importer will be implemented following the RESTful paradigm.
Nightly tests check whether the new data fulfil the requirements, work with the models and deliver plausible results.
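One such nightly check might look like the sketch below. The column name, the 10 GW threshold and the row format are illustrative assumptions, not the platform's actual test suite:

```python
# Hedged sketch of a nightly plausibility check on power station data.
def check_capacity_rows(rows):
    """Flag rows whose installed capacity is missing or implausible."""
    problems = []
    for row in rows:
        cap = row.get("capacity_mw")
        if cap is None:
            problems.append((row.get("id"), "missing capacity"))
        elif cap < 0 or cap > 10_000:  # assumption: no single unit above 10 GW
            problems.append((row.get("id"), "implausible capacity"))
    return problems

check_capacity_rows([{"id": 1, "capacity_mw": 500},
                     {"id": 2, "capacity_mw": -3}])
# -> [(2, "implausible capacity")]
```

A test server would run such checks against the test database and only promote data to the oedb when no problems are reported.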