
Provides easy access to German publicly available regional statistics

Project description


The package provides easy access to German publicly available regional statistics. It does so by providing a wrapper for the GraphQL API of the Datenguide project.

Features

Overview of available statistics and regions:

The package provides DataFrames with the available statistics and regions, which the user can query without needing expert knowledge of regional statistics or the documentation of the underlying GraphQL API.

Build and Execute Queries:

The package provides the user an object-oriented interface to build queries that fetch certain statistics and return the results as a pandas DataFrame for further analysis.

Automatic inclusion of relevant metadata

Queries automatically retrieve some metadata along with the actual data, giving the user more convenient access to the statistics without having to worry about too many technical details.

Full fidelity data

The package provides full-fidelity data access to the Datenguide API. This allows all use cases to use precisely the data they need, if it is available. It also means that most data cleaning has to be done by the user.

Quick Start

Install

To use the package, install it from the command line:

pip install datenguidepy

Minimal example

To see the package in action and obtain a DataFrame containing some statistics, the following constitutes a minimal example.

from datenguidepy import Query

q = Query.region('01')          # build a query for the region with id '01'
q.add_field('BEV001')           # add the statistic BEV001 to the query
result_df = q.results()         # run the query and return the results as a pandas DataFrame
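The returned result_df is an ordinary pandas DataFrame, so the usual pandas tools can be used to inspect it, for example:

result_df.head()       # first rows of the statistic BEV001 for region '01'
result_df.columns      # columns returned, including the automatically added metadata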

Complex examples

These examples are intended to illustrate many of the package's features at the same time. The idea is to give an impression of some of the possibilities. A more detailed explanation of the functionality can be found in the rest of the documentation.

q = Query.region(['02','11'])            # Hamburg ('02') and Berlin ('11')
stat = q.add_field('BEVSTD')             # population statistic
stat.add_args({'year' : [2011,2012]})
stat2 = q.add_field('AI1601')            # disposable income per inhabitant
stat2.add_args({'year' : [2011,2012]})
q.results(
    verbose_statistics = True,           # use descriptive statistic names as column names
    add_units = True,                    # add a unit column for each statistic
).iloc[:,:7]

   id     name  year  Verfügbares Einkommen je Einwohner (AI1601)  AI1601_unit  Bevölkerungsstand (BEVSTD)  BEVSTD_unit
0  02  Hamburg  2011                                         22695          EUR                     1718187       Anzahl
1  02  Hamburg  2012                                         22971          EUR                     1734272       Anzahl
0  11   Berlin  2011                                         18183          EUR                     3326002       Anzahl
1  11   Berlin  2012                                         18380          EUR                     3375222       Anzahl

q = Query.region('11')                   # Berlin
stat = q.add_field('BEVSTD')
stat.add_args({
    'GES' : 'GESW',                      # restrict to the female population
    'statistics' : 'R12411',             # pick the underlying source statistic explicitly
    'NAT' : 'ALL',                       # break the result down by all nationality values
    'year' : [1995,1996]
})
stat.add_field('GES')
stat.add_field('NAT')
q.results(verbose_enums = True).iloc[:,:6]   # verbose_enums resolves enum codes to readable labels

   id    name       GES               NAT  year   BEVSTD
0  11  Berlin  weiblich  Ausländer(innen)  1995   191378
1  11  Berlin  weiblich          Deutsche  1995  1605762
2  11  Berlin  weiblich            Gesamt  1995  1797140
3  11  Berlin  weiblich          Deutsche  1996  1590407
4  11  Berlin  weiblich  Ausländer(innen)  1996   195301
5  11  Berlin  weiblich            Gesamt  1996  1785708
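Because the results are plain pandas DataFrames, any further reshaping is left to the user (see the full-fidelity note above). As a sketch, the enum columns from the last query can be pivoted with standard pandas to compare the groups side by side; the column names are the ones shown in the table above:

df = q.results(verbose_enums = True)
# one row per year, one column per nationality value (NAT), filled with BEVSTD
df.pivot_table(index = 'year', columns = 'NAT', values = 'BEVSTD')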

Get information on fields and metadata

Get information on region ids

from datenguidepy import get_regions

get_regions()

Use pandas query() functionality to get specific regions. E.g., if you want to get the IDs of all "Bundesländer" (federal states), use the query below. For more information on "nuts" levels, see Wikipedia.

get_regions().query("level == 'nuts1'")
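Other columns of the regions DataFrame can be filtered in the same way, for instance to look up a region by name. This sketch assumes a name column; the available columns can be checked first:

regions = get_regions()
regions.columns                        # inspect the available columns
regions.query("name == 'Hamburg'")     # filter by region name instead of level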

Get information on statistic shortnames

from datenguidepy import get_statistics

get_statistics()
# return statistic descriptions in English
get_statistics(target_language = 'en')
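Since the statistic short names form the index of the returned DataFrame, a single statistic can be looked up directly, for example the population statistic used in the examples above:

stats = get_statistics(target_language = 'en')
stats.loc[['BEVSTD']]    # English description of the population statistic BEVSTD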

Get information on single fields

You can get further information about the description, possible arguments, fields, and enum values of a field you added to a query.

q = Query.region("01")
stat = q.add_field("BEV001")
stat.get_info()

Further information

For detailed examples see the notebooks within the use_case folder.

For a detailed documentation of all statistics and fields see the Datenguide API.

Credits

All this builds on the great work of Datenguide and their GraphQL API (datenguide/datenguide-api).

The data is retrieved via the Datenguide API from the “Statistische Ämter des Bundes und der Länder”. Data being used via this package has to be credited according to the “Datenlizenz Deutschland – Namensnennung – Version 2.0”.

This package was created with Cookiecutter and the audreyr/cookiecutter-pypackage project template.

History

0.1.0 (2019-10-07)

  • First release on PyPI.

0.1.1 (2019-10-09)

  • Cleanup of the first release regarding naming, authors and docs.

0.2.0 (2020-11-30)

  • Added functionality to use metadata for displaying descriptive statistic names and enum values

0.2.1 (2020-05-17)

  • Added functionality to display the units of a statistic along with the numerical value.

  • Internally split the metadata extraction into technical metadata and metadata about the statistics. Implemented new defaults for the statistics metadata in order to account for changes in the Datenguide API.

0.2.2 (2020-05-24)

  • Fixed a critical bug in the package data that essentially caused the PyPI version to stop working completely.

  • Fixed a bug related to the incorrectly displayed version number of the package.

0.3.0 (2020-06-24)

  • Renamed get_all_regions to get_regions in accordance with get_statistics

  • Changed the index column name of the DataFrame returned by all_regions from id to region_id

  • Made the statistics column name the index in the DataFrame returned by get_statistics and renamed it to statistic

  • Added functionality to obtain a stored auto-translated version of the get_statistics descriptions (default is German; a machine translation is now available in English)

  • Introduced a new helper function get_availability_summary containing a (pre-calculated) summary of available data for region_id, statistic pairs down to nuts3 level.

0.3.1 (2020-07-14)

  • Introduced better error messages for queries that are run without a statistic

  • Bug fixes related to enums and auto join functionality

0.4.0 (2021-01-23)

  • Introduced better error messages in case of invalid regions

  • Introduced duplicate removal as an option for standard query results
    * The new default is to remove duplicates, but this can be turned off with an argument.
    * Auto-joining of multiple statistics should work better now, as duplicates are removed before the join.
    * The purpose is only to remove duplicates that may exist for technical API reasons, not to filter the data for content.
    * Rows are only counted as duplicates if everything, including the data source, is identical.

0.4.1 (2021-08-01)

  • Bug fixes.
