5 Databases and SDMX
5.1 Scope of this Chapter
5.2 Introduction
So far this guide has concentrated on the various SDMX constructs – structural metadata, data and metadata sets – which enable applications to understand and process the data and reference metadata. The next few chapters explain how these are used in practical situations, starting with the database: the database holds the data that are to be reported, collected, or disseminated, and is at the core of any statistical system.
This Chapter explains ways that you can read and write SDMX formatted data files to and from a database, and how you can process an SDMX REST query for data. It also describes how you can use a Data Structure Definition to create database tables.
5.3 Database and DSD Mapping
Note that in the data reporting use case it is probable that the coding system used in the data reporter's database is not the same as that defined in the DSD (which is usually the one used by the data collector), and that the database column names differ from the Dimension and Attribute Ids in the DSD used for data reporting. Even for the data collector, the coding system in the DSD may not be the same as that used in the collector's database. In these cases there is a need for a generic mapping mechanism. Depending on the chosen method for reading and writing data to/from the database, this mapping can be performed from within the database application or external to it (i.e. before passing the data to the database application, or after the database application has written the data).
This mechanism provides a generic metadata-driven way for the database application to map between the local structural metadata present in the data provider's system and the structural metadata provided with the DSD.
To explain this process, the following real-life example will be used.
The Eurostat SODI (SDMX Open Data Interchange) project covers a subset of the STS (short-term statistics) indicators defined by EU statistical legislation. This project implements a data-sharing architecture using the pull mode (although the push mode is also supported). The majority of the involved data producers already have their data stored in a database and described using different local structural metadata. This is the case for the Italian National Institute of Statistics (ISTAT), which disseminates those data through its short-term statistical databank ConIstat¹.
Inside ConIstat, data are stored in a database using local structural metadata. A simplified snapshot of the database schema is provided in Figure 7.
Figure 7: Database schema for ConIstat
The schema is mainly based on two database tables: METADATA and DATA. The other tables can be considered lookup tables used to store code lists. In addition, two tables named DOMAIN and SUBDOMAIN allow data to be categorized into statistical subject-matter domains.
The main concepts used in order to describe each time-series are the following:
- Category: short-term statistical indicator
- Type: adjustment indicator
- Vs: stock/flow
- Classification: NACE, SEC95, other classifications
- Freq: frequency
- Um: unit of measure + unit multiplier + base year
- Start_Period: start period of the time-series
- End_Period: last available period of the time-series
Each time-series is identified through a row in the METADATA table, and each field in that table has a correspondence in a particular lookup table representing a code list. So the time-series Monthly, neither seasonally nor working day adjusted, Production in industry index base 2000, Mining and quarrying is described in the following way:
- Category: 11 (index of industrial production)
- Type: g (neither seasonally nor working day adjusted)
- Vs: R (flow)
- Classification: C (Mining and quarrying)
- Freq: 12 (monthly)
- Um: PE (index number, base 2000)
- Start_Period: 01_1990
- End_Period: 08_2008
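As a sketch, the simplified METADATA table and the example series above might look as follows. This is illustrative only: the column types, and the omission of the lookup-table keys, are assumptions rather than the real ConIstat schema.

```python
import sqlite3

# Simplified sketch of the ConIstat METADATA table; column types (and the
# absence of the lookup-table foreign keys) are assumptions.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE METADATA (
        Category       TEXT,
        Type           TEXT,
        Vs             TEXT,
        Classification TEXT,
        Freq           TEXT,
        Um             TEXT,
        Start_Period   TEXT,
        End_Period     TEXT
    )
""")

# The example series: monthly, neither seasonally nor working day adjusted,
# production in industry index (base 2000), Mining and quarrying.
conn.execute(
    "INSERT INTO METADATA VALUES (?, ?, ?, ?, ?, ?, ?, ?)",
    ("11", "g", "R", "C", "12", "PE", "01_1990", "08_2008"),
)

row = conn.execute("SELECT Category, Freq, Um FROM METADATA").fetchone()
print(row)  # ('11', '12', 'PE')
```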
The mapping process can be achieved by storing the resulting information in a special repository outside or inside the native database. In this example a repository inside the native database was chosen, without changing anything in the original tables. For this purpose, the following tables were added to the existing schema:
- STS_METADATA: used to describe STS time-series (in order to describe other domains it would be necessary to add other tables, for example ESA_METADATA for National Accounts and so on);
- some lookup tables used to store, within the local database, certain SDMX artefacts from the related DSD (for example, labels or descriptions for concepts, code lists and dataflows).
The table STS_METADATA is where the mapping process stores the mapping information. It inherits the base structure from METADATA, and some fields were added in order to cover all the concepts expressed in the SDMX DSD. The resulting database schema after adding the new tables needed for the mapping process is shown in Figure 8.
Figure 8: Database schema with additional tables for mapping
In order to perform the mapping process correctly, it is necessary to consider two types of mapping: mapping of concepts and mapping of codes².
5.3.1.1 Mapping of concepts
The first step is to identify all the statistical concepts involved in the exercise. The following circumstances can occur:
- one concept in the DSD can be linked up with a single local concept. A typical example is the measured value in the data provider database that corresponds to the Primary measure in the STS DSD used for SODI;
- one local concept must be linked up with two or more concepts in the DSD. For example, the local concept named Um contains the element "one million of Euro"; in the related STS DSD this corresponds to two concepts: Unit (Euro) and Unit multiplier (one million);
- one concept in the DSD is not directly linked up with any local concept. This is typically the case for the concept "Reference area", which is generally not used within a national organisation because it takes a single default value (Italy);
- one concept in the DSD is linked up with two or more local concepts. For example the DSD concept "Adjustment" has no 1-to-1 correspondence with any single local concept; it is split into two different local concepts, "DAYADJ" (calendar adjusted) and "SEASADJ" (adjusted for periodic variations during the measurement period), each of which has a Boolean value (true/false).
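The four circumstances above can be sketched in code. The fragment below is illustrative only: the concept names, the Um parsing table and the ADJUSTMENT codes are assumptions made for the sake of the example, not the actual STS DSD.

```python
# Illustrative concept mapping covering the four circumstances above.
# Concept names, the Um parsing table and the ADJUSTMENT codes are
# assumptions, not the actual STS DSD.

def split_um(um):
    # one local concept -> two DSD concepts: Um carries both the unit
    # and the unit multiplier (hypothetical parsing table)
    table = {"one million of Euro": ("EUR", "6")}  # 6 = 10^6
    return table[um]

def map_concepts(local):
    dsd = {}
    # one DSD concept <-> one local concept: the measured value
    dsd["OBS_VALUE"] = local["value"]
    # one local concept -> two DSD concepts
    dsd["UNIT"], dsd["UNIT_MULT"] = split_um(local["Um"])
    # DSD concept with no local counterpart: supplied as a default
    dsd["REF_AREA"] = "IT"
    # two local concepts -> one DSD concept: two Booleans collapse
    # into a single adjustment code
    if local["DAYADJ"] and local["SEASADJ"]:
        dsd["ADJUSTMENT"] = "Y"   # both adjustments applied
    elif local["DAYADJ"]:
        dsd["ADJUSTMENT"] = "W"   # working day adjusted only
    elif local["SEASADJ"]:
        dsd["ADJUSTMENT"] = "S"   # seasonally adjusted only
    else:
        dsd["ADJUSTMENT"] = "N"   # neither
    return dsd

result = map_concepts({"value": 101.4, "Um": "one million of Euro",
                       "SEASADJ": False, "DAYADJ": False})
print(result["ADJUSTMENT"], result["UNIT_MULT"])  # N 6
```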
5.3.1.2 Mapping of codes
The second step is the mapping of the codes. A concept within a DSD can take either a code enumerated in a code list or a free value; the same is true of a local concept. When the concept used in the DSD and the corresponding local concept, used in the data provider's database, are both described using code lists, it may be possible to map each code in the first code list to a code in the second. The following example shows two such code lists:
Local code list:

CODE  DESCRIPTION
1     Annual
12    Monthly
365   Daily
4     Quarterly
52    Weekly

DSD code list:

CODE  DESCRIPTION
A     Annual
M     Monthly
D     Daily
Q     Quarterly
W     Weekly
H     Half-yearly
B     Business

The mapping process will produce the following result:
DSD CODE  Local CODE  DESCRIPTION
A         1           Annual
M         12          Monthly
D         365         Daily
Q         4           Quarterly
W         52          Weekly
H         (none)      Half-yearly
B         (none)      Business

Often the mapping process can be helped by rules. For example, consider the CL_STS_ACTIVITY code list and the NACE Rev. 1.1 classification. The rule is: remove all dots from the NACE code and add as many zeros as necessary in order to reach four digits, then add the prefix N1 (or NS in the case of special codes).
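The NACE rule just described can be written as a small function. The function name is hypothetical; the rule itself, and the expected codes, come from the text and Table 5.3.1 (where, for example, NACE 31.1 corresponds to N13110, implying that the padding zeros are appended on the right).

```python
# The NACE -> CL_STS_ACTIVITY rule from the text: remove all dots, pad
# with zeros to four digits, then prefix with N1 (or NS for special codes).
# The function name is hypothetical.
def nace_to_sts_activity(nace_code, special=False):
    digits = nace_code.replace(".", "")   # remove all dots
    digits = digits.ljust(4, "0")         # append zeros to reach four digits
    return ("NS" if special else "N1") + digits

print(nace_to_sts_activity("30"))    # N13000 (NACE 30, office machinery)
print(nace_to_sts_activity("31.1"))  # N13110
```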
After applying the above steps, the result of the mapping process in ConIstat can be set out as in Table 5.3.1, in which columns represent both DSD concepts and local concepts, while rows represent a combination of their codes. The scheme shown here reflects the way in which the mapping tables are set up at ISTAT, which was chosen for performance reasons; the mapping table could be organised in other ways.
Table 5.3.1: Mapping result example
CATEGORY  TYPE  CLASSIFICATION  FREQ  UM  DATAFLOW       STS_INDICATOR  STS_ACTIVITY  UNIT       BASE_YEAR  ADJUSTMENT  FREQUENCY
18        G     DL300           12    PE  SSTSIND_ORD_M  ORDT           N13000        PURE_NUMB  2000       N           M
18        G     DL31            12    PE  SSTSIND_ORD_M  ORDT           N13100        PURE_NUMB  2000       N           M
18        G     DL311           12    PE  SSTSIND_ORD_M  ORDT           N13110        PURE_NUMB  2000       N           M

For example:
- the concept named CATEGORY that assumes the code 18 (Index of total orders), from the related local code list, is mapped with the concept named STS_INDICATOR that in the STS code list is represented by the code ORDT;
- the concept named TYPE that assumes the code G (neither seasonally nor working day adjusted), from the related local code list, is mapped with the concept named ADJUSTMENT that in the STS code list is represented by the code N;
- the concept named FREQ that assumes the code 12 (Monthly), from the related local code list, is mapped with the concept named FREQUENCY that in the STS code list is represented by the code M;
- the concept named UM that assumes the code PE (index base=2000), from the related local code list, is mapped with the two concepts: UNIT that in the SDMX code list is represented by the code PURE_NUMB and BASE_YEAR that in the STS code list is represented by the code 2000;
- the concept named CLASSIFICATION that assumes the code DL300 (Manufacture of office machinery and computers), from the related local code list, is mapped with the concept STS_ACTIVITY that in the STS code list is represented by the code N13000.
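The lookups in Table 5.3.1 can be represented as simple mapping dictionaries, as sketched below. The dictionaries reproduce only the codes mentioned in the example; a real mapping repository would be complete and metadata-driven rather than hard-coded.

```python
# Code-mapping lookups derived from Table 5.3.1 (example codes only).
CATEGORY_TO_STS_INDICATOR = {"18": "ORDT"}
TYPE_TO_ADJUSTMENT = {"G": "N"}
FREQ_TO_FREQUENCY = {"12": "M"}
UM_TO_UNIT_AND_BASE = {"PE": ("PURE_NUMB", "2000")}   # one local -> two DSD
CLASSIFICATION_TO_STS_ACTIVITY = {"DL300": "N13000",
                                  "DL31": "N13100",
                                  "DL311": "N13110"}

def map_series_key(local):
    # Translate a local series description into DSD concept codes.
    unit, base_year = UM_TO_UNIT_AND_BASE[local["UM"]]
    return {
        "STS_INDICATOR": CATEGORY_TO_STS_INDICATOR[local["CATEGORY"]],
        "ADJUSTMENT": TYPE_TO_ADJUSTMENT[local["TYPE"]],
        "STS_ACTIVITY": CLASSIFICATION_TO_STS_ACTIVITY[local["CLASSIFICATION"]],
        "FREQUENCY": FREQ_TO_FREQUENCY[local["FREQ"]],
        "UNIT": unit,
        "BASE_YEAR": base_year,
    }

key = map_series_key({"CATEGORY": "18", "TYPE": "G",
                      "CLASSIFICATION": "DL300", "FREQ": "12", "UM": "PE"})
print(key["STS_ACTIVITY"])  # N13000
```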
5.4 Reading and Writing SDMX to and from a Database
5.4.1 Mechanisms
Database applications may need to read or write different versions of SDMX data. This can, of course, be solved in many ways. Two basic ways are:
- the application reads or writes a specific data format (which could be a specific SDMX format) that is pre- or post-processed by a transformation tool, which transforms the input/output to the desired format;
- the desired format is read or written directly by the application.
Figure 9: External Transformation of SDMX Formats
If the application reads/writes a specific form of SDMX, then there are transformation tools readily available that will convert the data to/from different SDMX formats. Some rely on reading the entire file into memory to undertake the transformation, which may limit the practicality of this approach (certainly if performance is an issue). Whilst a separate transformation process is a simple approach which is well understood (and as such is not discussed further in this Chapter), it does mean reading or writing the data twice.
The mechanism discussed here is concerned with what may seem to be a more complex approach, but it has two advantages over the separate transformation approach:
- The database application need have no knowledge of the SDMX data set syntax.
- The data set is read or written only once, and can be streamed directly between the database application and the SDMX read/write application, which means there is no size-limiting factor.
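The streaming hand-off just described can be sketched with a generator. The reader interface below is hypothetical (examples of the real interfaces are shown in Annex 3), and FakeReader stands in for a component that parses the SDMX syntax; the point is that series and observations flow one at a time, so data set size never becomes a limit.

```python
# Streaming hand-off between an SDMX data reader and a database loader.
# The reader interface is hypothetical; FakeReader stands in for a
# component that parses the actual SDMX syntax.
def read_series(reader):
    # The application sees only Information Model constructs
    # (series keys and observations), never SDMX syntax.
    while reader.move_to_next_series():
        key = reader.current_series_key()
        for period, value in reader.observations():
            yield key, period, value   # one observation at a time

class FakeReader:
    def __init__(self, series):
        self._series, self._i = series, -1
    def move_to_next_series(self):
        self._i += 1
        return self._i < len(self._series)
    def current_series_key(self):
        return self._series[self._i][0]
    def observations(self):
        return iter(self._series[self._i][1])

reader = FakeReader([({"FREQ": "M"},
                      [("2008-01", 101.4), ("2008-02", 102.0)])])
rows = list(read_series(reader))
print(len(rows))  # 2
```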
5.4.2 SDMX Information Model
The SDMX Information Model for data recognizes the following fundamental structures for data:
- Group Key - comprising Dimensions
- Series Key - comprising Dimensions
- Observation – possibly including time
- Attribute
It is practical to read or write an SDMX data file without the need for the database read/write application to know anything about SDMX. This can be done in two main ways:
- By means of a data reader and data writer software component
- By means of a data and structure mapping tool
5.5 Data Reader and Data Writer
5.5.1 Schematic
Figure 10: Reading and Writing SDMX Data
5.5.2 SDMX Data Writer Interface
An example of interfaces that will enable an application to read and write any type of SDMX data file is shown in Annex 3.
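As an illustration of what such an interface might look like, the sketch below defines an abstract writer plus a trivial implementation that merely records the calls; a real implementation would emit one of the SDMX formats. The method names here are illustrative, not the actual API shown in Annex 3.

```python
from abc import ABC, abstractmethod

# Hypothetical data-writer interface in the spirit of the one in Annex 3.
# Method names are illustrative only.
class DataWriter(ABC):
    @abstractmethod
    def start_dataset(self, dataflow_id): ...
    @abstractmethod
    def start_series(self, series_key): ...
    @abstractmethod
    def write_observation(self, period, value): ...
    @abstractmethod
    def close(self): ...

# Trivial implementation that records the calls; a real implementation
# would serialise them to an SDMX data format instead.
class RecordingWriter(DataWriter):
    def __init__(self):
        self.events = []
    def start_dataset(self, dataflow_id):
        self.events.append(("dataset", dataflow_id))
    def start_series(self, series_key):
        self.events.append(("series", series_key))
    def write_observation(self, period, value):
        self.events.append(("obs", period, value))
    def close(self):
        self.events.append(("close",))

w = RecordingWriter()
w.start_dataset("SSTSIND_ORD_M")
w.start_series({"FREQUENCY": "M", "STS_INDICATOR": "ORDT"})
w.write_observation("2008-08", 104.7)
w.close()
print(len(w.events))  # 4
```

Because the database application talks only to this interface, swapping the output format means swapping the implementation, not changing the application.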
5.5.3 Data Mapping Tool
5.6 Database Table Structure
The Data Structure Definition can be used to create a relational table structure in a database. The simplest type of structure is shown below:
Figure 11: Schematic of a Database Schema Derived from a DSD
With this structure it is easy to implement the SDMX REST web services for data. Note that this type of structure does not store any of the structural metadata; this must be made available from another web service, such as an SDMX Registry, or from additional (structural metadata) structures in the database.
The structural metadata from the ECB-EXR1 Data Structure Definition that is relevant to the database is shown below.
An example set of database tables from the ECB-EXR1 Data Structure Definition is shown below.
Figure 12: Schematic of a Database Schema Derived from the ECB_EXR1 DSD
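Such a schema can be generated mechanically from the DSD's dimension list. The sketch below uses the ECB_EXR1 dimension names; the SERIES_ID surrogate key, the column types and the sample values are assumptions made for illustration.

```python
import sqlite3

# Sketch of the "series key + observation" table pair derivable from a DSD.
# Dimension names follow the ECB_EXR1 example; the SERIES_ID surrogate key,
# column types and sample values are assumptions.
DIMENSIONS = ["FREQ", "CURRENCY", "CURRENCY_DENOM", "EXR_TYPE", "EXR_SUFFIX"]

conn = sqlite3.connect(":memory:")
dim_cols = ", ".join(f"{d} TEXT NOT NULL" for d in DIMENSIONS)
conn.execute(f"CREATE TABLE SERIES (SERIES_ID INTEGER PRIMARY KEY, {dim_cols})")
conn.execute("""
    CREATE TABLE OBSERVATION (
        SERIES_ID   INTEGER REFERENCES SERIES(SERIES_ID),
        TIME_PERIOD TEXT NOT NULL,
        OBS_VALUE   REAL,
        PRIMARY KEY (SERIES_ID, TIME_PERIOD)
    )
""")

# One series key row and one observation (illustrative values)
conn.execute("INSERT INTO SERIES VALUES (1, 'M', 'USD', 'EUR', 'SP00', 'A')")
conn.execute("INSERT INTO OBSERVATION VALUES (1, '2008-08', 1.4975)")

row = conn.execute("""
    SELECT FREQ, CURRENCY, TIME_PERIOD, OBS_VALUE
    FROM SERIES JOIN OBSERVATION USING (SERIES_ID)
""").fetchone()
print(row)  # ('M', 'USD', '2008-08', 1.4975)
```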
5.7 Processing Queries
The SDMX Information Model for data recognizes the following constructs that are relevant to a database system for reading or writing SDMX, and for processing SDMX REST data queries:
For structure
For data
- Series Key
- Observation
A database can be made “SDMX Web Services enabled” in a similar way to the Data Reader and Data Writer described in Annex 4 – Data Reader and Data Writer Functions. This is shown schematically in the diagram below.
Figure 13: SDMX Query Reader
An example interface for the Query Reader API is shown in Annex 3. As for the Data Reader and Data Writer, it is the interface that is the important asset here. The interface is structured using the constructs of the SDMX Information Model, which are implemented in some way by any of the actual query formats (see Annex 3); the database application need not be concerned with this.
It will be seen from Chapter 6 that this interface presents the content of the SDMX REST data query in a way that is easy for the database application to process, without the need to know the syntax of the REST query (or any of the other possible query formats).
The database result set is output to SDMX using the relevant implementation of the Data Writer Interface. The actual implementation that the database application uses will depend upon the query response format requested by the user. However, again this technicality is hidden from the database application, which is concerned solely with the methods of the Data Writer interface.
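As a sketch of query processing, the key part of an SDMX REST data URL can be translated into an SQL filter over a table whose columns are the DSD dimensions. The "." separator and "+" OR-operator follow the SDMX REST key syntax; the table and column names below (a SERIES table with the ECB_EXR1 dimensions) are assumptions for illustration.

```python
# Translate the key part of an SDMX REST data query (e.g. "M.USD+GBP..SP00.A")
# into a parameterised SQL filter. Dimension order follows the DSD; an empty
# key position is a wildcard, and "+" separates alternative codes.
DIMENSIONS = ["FREQ", "CURRENCY", "CURRENCY_DENOM", "EXR_TYPE", "EXR_SUFFIX"]

def key_to_sql(key):
    clauses, params = [], []
    for dim, part in zip(DIMENSIONS, key.split(".")):
        if part == "":                  # empty position = wildcard
            continue
        codes = part.split("+")         # "+" = OR over several codes
        placeholders = ", ".join("?" * len(codes))
        clauses.append(f"{dim} IN ({placeholders})")
        params.extend(codes)
    where = " AND ".join(clauses) or "1=1"
    return f"SELECT SERIES_ID FROM SERIES WHERE {where}", params

sql, params = key_to_sql("M.USD+GBP..SP00.A")
print(params)  # ['M', 'USD', 'GBP', 'SP00', 'A']
```

The returned series identifiers would then drive the Data Writer to stream the matching observations back to the client.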
¹ ConIstat: http://con.istat.it/
² For further explanations of the usage of concepts and codes, see chapters XXXXXX