= 1 Introduction =

The [[Statistical Data and Metadata Exchange>>doc:sdmx:Glossary.Statistical data and metadata exchange.WebHome]] ([[SDMX>>doc:sdmx:Glossary.Statistical data and metadata exchange.WebHome]]) initiative (https:~/~/www.sdmx.org) sets standards that can facilitate the exchange of statistical data and metadata using modern information technology.

The [[SDMX Technical Specifications>>doc:sdmx:Glossary.SDMX Technical Specification.WebHome]] are organised into several discrete sections.

The following are published on the [[SDMX>>doc:sdmx:Glossary.Statistical data and metadata exchange.WebHome]] website ([[__https:~~/~~/www.sdmx.org__>>https://www.sdmx.org]]):

**Section 1** **Framework for SDMX Technical Standards** – this document, which provides an introduction to the technical standards.

* //**Provision Agreement (Metadata Provision Agreement):**// The set of information which describes the way in which data sets and metadata sets are provided by a data/metadata provider. A provision agreement can be constrained in much the same way as a data or metadata flow definition. Thus, a data provider can express the fact that it provides a particular data flow covering a specific set of countries and topics. Importantly, the actual source of registered data or metadata is attached to the provision agreement (in terms of a URL). The term “agreement” is used because this information can be understood as the basis of a “service-level agreement”. In SDMX, however, this is informational metadata to support the technical systems, as opposed to any sort of contractual information (which is outside the scope of a technical specification). In version 3.0, metadata provision agreement and data provision agreement are two separate artefacts.
* //**Data Constraint:**// Used to restrict content (such as enumerations); data constraints are used by provision agreements, data flows and data structure definitions in order to provide a set of reporting restrictions in the context of a collection.
* //**Metadata Constraint:**// Used to restrict content (such as enumerations); metadata constraints are used by metadata provision agreements, metadata flows and metadata structure definitions in order to provide a set of reporting restrictions in the context of a collection.
* //**Available Data Constraint:**// Used to report the set of Component values that have data reported against them in the context of a Data Query. This structure allows a user to know what valid filters can be applied to a cube of data, such that the resulting cube will contain data.
* //**Structure Map:**// Structure maps describe a mapping between data structure definitions or dataflows for the purpose of transforming a data set into a different structure. The mapping rules are defined using one or more component maps, each of which describes how one or more components from the source data structure definition map to one or more components in that of the target. Representation maps act as lookup tables, and specific provision is made for mapping dates and times (a minimal sketch follows this list).
* //**Representation Map:**// Representation maps describe mappings between source value(s) and target value(s), where the values are restricted to those in a code list or value list, or must be of a certain type such as integer or string.
* //**Item Scheme Map:**// An item scheme map describes mapping rules between any item schemes, with the exception of code lists and value lists, which use representation maps. The version 3.0 information model provides four item scheme maps: organisation scheme map, concept scheme map, category scheme map and reporting taxonomy map. Organisation scheme map and reporting taxonomy map have been omitted from the information model schematic in Figure 1.
* //**Reporting Taxonomy:**// A reporting taxonomy allows an organisation to link (possibly in a hierarchical way) a number of cube or data flow definitions which together form a complete “report” of data or metadata. This supports primary reporting which often comprises multiple cubes of heterogeneous data, but may also support other collection and reporting functions. It also supports the specification of publications such as a yearbook, in terms of the data or metadata contained in the publication.
* //**Process:**// The process class provides a way to model statistical processes as a set of interconnected //process steps//. Although not central to the exchange and dissemination of statistical data and metadata, having a shared description of processing allows for the interoperable exchange and dissemination of reference metadata sets which describe process-related concepts.
* //**Hierarchy:**// Describes complex code hierarchies principally for data discovery purposes. The codes themselves are referenced from the code lists in which they are maintained.
* //**Hierarchy Association:**// A hierarchy association links a hierarchy to an object that uses it, such as a dimension. Furthermore, the linking can be specified in the context of another object, such as a dimension in the context of a dataflow. Thus, a dimension in a data structure definition could have different hierarchies depending on the dataflow.
* //**Transformation Scheme:**// A transformation scheme is a set of Validation and Transformation Language (VTL) transformations aimed at obtaining some meaningful results for the user (e.g., the validation of one or more data sets). The set of transformations is meant to be executed together (in the same run) and may contain any number of transformations in order to produce any number of results. Thus, a transformation scheme can be considered a VTL ‘program’.
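To make the mapping artefacts above concrete, the following is a minimal, illustrative sketch of a structure map whose component maps use a representation map as a lookup table. The class names and the transformation API shown here are hypothetical conveniences for the example; SDMX defines the artefacts themselves, not this Python API.

{{code language="python"}}
# Illustrative only: hypothetical classes sketching how a structure map
# applies component maps, one of which uses a representation map as a
# lookup table. This is not an official SDMX API.
from dataclasses import dataclass
from typing import Optional


@dataclass
class RepresentationMap:
    """Lookup table from source values to target values."""
    rules: dict  # e.g. {"DE": "DEU"}: one code list mapped onto another

    def map_value(self, source: str) -> str:
        return self.rules[source]


@dataclass
class ComponentMap:
    """Maps one source component to one target component."""
    source: str
    target: str
    representation_map: Optional[RepresentationMap] = None

    def apply(self, observation: dict) -> tuple:
        value = observation[self.source]
        if self.representation_map is not None:
            value = self.representation_map.map_value(value)
        return self.target, value


@dataclass
class StructureMap:
    """Transforms an observation keyed by the source DSD into the target DSD."""
    component_maps: list

    def transform(self, observation: dict) -> dict:
        return dict(cm.apply(observation) for cm in self.component_maps)


# Example: rename COUNTRY to REF_AREA, re-coding ISO alpha-2 to alpha-3,
# and carry the frequency dimension over unchanged.
iso2_to_iso3 = RepresentationMap({"DE": "DEU", "FR": "FRA"})
mapping = StructureMap([
    ComponentMap("COUNTRY", "REF_AREA", iso2_to_iso3),
    ComponentMap("FREQ", "FREQ"),
])
print(mapping.transform({"COUNTRY": "DE", "FREQ": "A"}))
# -> {'REF_AREA': 'DEU', 'FREQ': 'A'}
{{/code}}

The point the sketch illustrates is the separation of concerns in the model: the representation map holds only the value lookup, while the component map decides which source component feeds which target component.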
== 3.5 SDMX Registry Services ==

In order to provide visibility into the large amount of data and metadata which exists within the SDMX model of statistical exchange, an architecture based on a set of registry services is potentially useful. A “registry” – as understood in web services terminology – is an application which maintains and stores metadata for querying, and which can be used by any other application in the network with sufficient access privileges (though note that the mechanism of access control is outside the scope of the SDMX standard). It can be understood as the index of a distributed database or metadata repository which is made up of all the data providers’ data sets and reference metadata sets within a statistical community, located across the Internet or a similar network.
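As an illustration of how a client might use such a registry, the sketch below queries a registry’s structure endpoint for the dataflows registered by an agency, assuming the registry exposes the SDMX REST API. The host name and agency identifier are placeholders, and a real registry’s own documentation is authoritative for its endpoints and media types.

{{code language="python"}}
# Illustrative only: discover the dataflows an agency has registered,
# assuming the registry exposes the SDMX REST structure API.
# The registry host and the agency ID below are placeholders.
import urllib.request

REGISTRY = "https://registry.example.org/sdmx/v2"  # placeholder host
AGENCY = "EXAMPLE"                                 # placeholder agency ID

request = urllib.request.Request(
    f"{REGISTRY}/structure/dataflow/{AGENCY}",
    headers={"Accept": "application/vnd.sdmx.structure+json"},
)
with urllib.request.urlopen(request) as response:
    print(response.read().decode("utf-8"))  # structure message listing dataflows
{{/code}}

Discovery then proceeds from the returned structure message to the provision agreements, which carry the URLs of the registered data sources.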