
SSB Fag-fellesfunksjoner i Python


A place for "loose, small functionality" produced at Statistics Norway in Python. Functionality might start here if it is to be used widely within the organization, and later be moved to bigger packages if it grows out of proportion.

Team: ssb-pythonistas

We are a team of statisticians who curate and generalize functionality that arises from specific needs in specific production environments. We take responsibility for making this functionality generalized and available to all of Statistics Norway through this package.

Pypi-account Github-team

Contributing

Please contact one of our team members to see if you can join the team, or to learn how to submit a PR for approval into the package.

Installing

poetry add ssb-fagfunksjoner

Usage

Environment / Pathing

Check if you are on Dapla or in prodsone.

from fagfunksjoner import check_env


check_env()

Navigate to the root of your project and back again. Run code while at the root, like importing local functions.

from fagfunksjoner import ProjectRoot


with ProjectRoot():
    ... # Do your local imports here...
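Conceptually, such a context manager just changes the working directory on entry and restores it on exit. A minimal sketch of the idea (not the actual ProjectRoot implementation, which also locates the project root for you):

```python
import os
from contextlib import contextmanager


@contextmanager
def project_root(path):
    """Minimal sketch: chdir to `path`, and always chdir back afterwards."""
    old_cwd = os.getcwd()
    os.chdir(path)
    try:
        yield
    finally:
        os.chdir(old_cwd)  # restored even if the body raises
```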

SAS files

Setting up a password with saspy

from fagfunksjoner.prodsone import saspy_ssb


saspy_ssb.set_password() # Follow the instructions to set the password
saspy_ssb.saspy_df_from_path("path")

Logger that follows SSB standards

import logging

from fagfunksjoner import StatLogger


# Creating a StatLogger "hijacks" the ordinary root logger
root_logger = StatLogger(log_file="custom_log_file.log")
# Assigning it to a variable also keeps it from being cleaned up by Python

logger = logging.getLogger(__name__)
logger.info("This is an info message")
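Under the hood, this amounts to configuring the root logger, which every module-level logger propagates its records to. A rough sketch of the mechanism with plain `logging` (StatLogger adds SSB-standard formatting on top; the format string below is just an illustration):

```python
import logging

# Attach a file handler to the root logger; loggers created with
# logging.getLogger(__name__) propagate their records up to the root,
# so log messages from any module end up in this one file.
handler = logging.FileHandler("custom_log_file.log", mode="w")
handler.setFormatter(logging.Formatter("%(asctime)s %(name)s %(levelname)s %(message)s"))
root = logging.getLogger()
root.addHandler(handler)
root.setLevel(logging.INFO)

logging.getLogger(__name__).info("This is an info message")
```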

Export XMLs that can be imported into the KLASS UI

from fagfunksjoner import make_klass_xml_codelist


make_klass_xml_codelist(
    path="kjoenn.xml",
    codes=["1", "2"],
    names_bokmaal=["Mann", "Kvinne"],
)

Round data UP

import pandas as pd

from fagfunksjoner import round_up


print(round(2.5, 0), round_up(2.5, 0))

round_up(pd.Series([1.5, 2.5, 3.5]), 0)  # The datatype becomes Int64 when rounding to 0 decimals
round_up(pd.Series([1.15, 2.15, 3.15]), 1)  # The datatype becomes Float64 when rounding to more than 0 decimals

df = pd.DataFrame(
    {"col1": [1.5, 2.5, 1.2345, 1.2355],
    "col2": [3.5, 4.5, 5.6789, 6.7891]}
    ).astype({"col1": "Float64", "col2": "Float64"})
rounded = round_up(df, decimal_places=0, col_names="col1")  # Rounds only col1, changing its datatype to Int64

rounded2 = round_up(df, col_names={"col1": 1, "col2": 2})  # Rounds col1 to 1 decimal, col2 to 2 decimals
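The difference from the built-in `round` is that Python uses banker's rounding (round half to even), while `round_up` rounds halves away from zero. The same "half up" behaviour can be illustrated with the standard library's `decimal` module (a conceptual sketch, not how the package implements it):

```python
from decimal import Decimal, ROUND_HALF_UP


def half_up(x: float, decimals: int = 0) -> float:
    """Round halves away from zero, e.g. 2.5 -> 3, unlike the built-in round."""
    exp = Decimal(1).scaleb(-decimals)  # quantization step, e.g. 0.1 for 1 decimal
    return float(Decimal(str(x)).quantize(exp, rounding=ROUND_HALF_UP))


print(round(2.5), half_up(2.5))  # 2 3.0  (banker's rounding vs half up)
```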

Aggregation / Categories

Aggregate on all exclusive combinations of codes in certain columns (perhaps before sending to Statbank? Like PROC MEANS in SAS?).

from fagfunksjoner import all_combos_agg


ialt_koder = {
    "skolefylk": "01-99",
    "almyrk": "00",
    "kjoenn_t": "0",
    "sluttkomp": "00",
}
kolonner = list(ialt_koder.keys())
tab = all_combos_agg(vgogjen,
                     groupcols=kolonner,
                     aggargs={'antall': sum},
                     fillna_dict=ialt_koder)
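Conceptually, "all exclusive combinations" means aggregating once per subset of the group columns and filling the columns left out of each subset with their total code. A stdlib-only sketch of the idea on hypothetical data (not the package's implementation, which works on DataFrames):

```python
from collections import defaultdict
from itertools import combinations

rows = [
    {"fylke": "01", "kjoenn": "1", "antall": 10},
    {"fylke": "01", "kjoenn": "2", "antall": 20},
    {"fylke": "02", "kjoenn": "1", "antall": 30},
]
groupcols = ["fylke", "kjoenn"]
totalcodes = {"fylke": "01-99", "kjoenn": "0"}  # like fillna_dict above

result = []
for r in range(len(groupcols) + 1):
    for combo in combinations(groupcols, r):
        # aggregate over this subset of the group columns
        sums = defaultdict(int)
        for row in rows:
            key = tuple(row[c] for c in combo)
            sums[key] += row["antall"]
        for key, total in sums.items():
            out = dict(zip(combo, key))
            # columns left out of this combination get their total code
            for c in groupcols:
                out.setdefault(c, totalcodes[c])
            out["antall"] = total
            result.append(out)
```

The empty combination yields the grand total row, where every group column holds its total code.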

To aggregate on NON-EXCLUSIVE combinations of codes in certain columns, use the slightly less efficient all_combos_agg_inclusive:

from fagfunksjoner import all_combos_agg_inclusive


category_mappings = {
    "Alder": {
        "15-24": range(15, 25),
        "25-34": range(25, 35),
        "35-44": range(35, 45),
        "45-54": range(45, 55),
        "55-66": range(55, 67),
        "15-21": range(15, 22),
        "22-30": range(22, 31),
        "31-40": range(31, 41),
        "41-50": range(41, 51),
        "51-66": range(51, 67),
        "15-30": range(15, 31),
        "31-45": range(31, 46),
        "46-66": range(46, 67),
    },
    "syss_student": {
        "01": ["01", "02"],
        "02": ["03", "04"],
        "03": ["02"],
        "04": ["04"],
    },
    "Kjonn": {
        "Menn": ["1"],
        "Kvinner": ["2"],
    }
}

totalcodes = {
    "Alder": "Total",
    "syss_student": "Total",
    "Kjonn": "Begge"
}
all_combos_agg_inclusive(
    synthetic_data,
    groupcols=[],
    category_mappings=category_mappings,
    totalcodes=totalcodes,
    valuecols=["n"],
    aggargs={"n": "sum"},
    grand_total=True,
)

"Formats" like in SAS

Perform mapping using SsbFormat. It behaves like a dictionary, with added functionality for mapping ranges and an 'other' category, and for detecting different types of NaN values. It does not handle non-exclusive/overlapping categories; use it only for exclusive categories.

from fagfunksjoner import SsbFormat


age_frmt = {
    'low-18': '-18',
    '19-25': '19-25',
    '26-35': '26-35',
    '36-45': '36-45',
    '46-55': '46-55',
    '56-high': '56+',
    'other': 'missing'
}

# convert dictionary to SsbFormat
ssb_age_frmt = SsbFormat(age_frmt)

# perform mapping of age using ranges in format.
df['age_group'] = df['age'].map(ssb_age_frmt)

print(df['age_group'].value_counts())

# save format
from fagfunksjoner.formats import store_format


store_format(path+'format_name_p2025-02.json')

# or
# NB! After performing range mapping with SsbFormat, the dictionary will be long.
# You should save a short version; inspect the dictionary before saving/storing.
ssb_age_frmt.store(path + 'format_name_p2025-02.json', force=True)

# read format/import format (dictionary saved as .json) as SsbFormat
from fagfunksjoner.formats import get_format


some_frmt = get_format(path+'format_name.json')
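The range keys work roughly like this: a key such as '19-25' covers every integer from 19 to 25, and 'low'/'high' stand for open ends. A conceptual sketch of such a lookup (not SsbFormat's actual implementation, which also caches expanded keys and handles the various NaN types):

```python
def lookup_range(fmt: dict, value) -> str:
    """Illustrative range lookup over keys like 'low-18', '19-25', '56-high'."""
    try:
        num = int(value)
    except (TypeError, ValueError):
        return fmt.get("other")  # unmappable values fall back to 'other'
    for key, label in fmt.items():
        if key == "other" or "-" not in key:
            continue
        lo, _, hi = key.partition("-")
        low = float("-inf") if lo == "low" else int(lo)
        high = float("inf") if hi == "high" else int(hi)
        if low <= num <= high:
            return label
    return fmt.get("other")


age_frmt = {
    "low-18": "-18",
    "19-25": "19-25",
    "56-high": "56+",
    "other": "missing",
}
print(lookup_range(age_frmt, 17))  # -18
```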

Opening archive files based on the Datadok API in prodsone

We have "flat files" that are not comma separated. These need metadata to be opened correctly; in SAS this is done with a "lastescript". Since there is an API to the old Datadok in prodsone, these functions let you just specify a path and attempt to open the flat file directly into pandas, with the metadata also available.

from fagfunksjoner import open_path_datadok


archive_object = open_path_datadok("$TBF/project/arkiv/filename/g2022g2023")
# The object now has several important attributes
archive_object.df  # The Dataframe of the archived data
archive_object.metadata_df  # Dataframe representing metadata
archive_object.codelist_df  # Dataframe representing codelists
archive_object.codelist_dict  # Dict of codelists
archive_object.names  # Column names in the archived data
archive_object.datatypes  # The datatypes the archive data ended up with
archive_object.widths  # Width of each column in the flat file
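The names and widths are what make a flat file parseable at all: each record is a fixed-width line, sliced column by column. A small illustration with made-up widths (the real metadata comes from the Datadok API):

```python
# Hypothetical metadata for a three-column flat file
names = ["fylke", "aar", "antall"]
widths = [2, 4, 3]

line = "032022 15"  # one fixed-width record: "03" + "2022" + " 15"

# Slice the record into fields, column by column
fields = {}
pos = 0
for name, width in zip(names, widths):
    fields[name] = line[pos:pos + width].strip()
    pos += width

print(fields)  # {'fylke': '03', 'aar': '2022', 'antall': '15'}
```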

Operations against an Oracle database

Remember that credential values for the database should never be stored in our code. Consider using the python-dotenv package to make this easier.
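A `.env` file is just `KEY=VALUE` lines that `load_dotenv()` reads into the process environment, keeping secrets out of the code and out of version control. A minimal sketch of the mechanism (the real python-dotenv package also handles quoting, comments and interpolation):

```python
import os

# Contents of a hypothetical .env file (never commit this file!):
env_text = "DB_PASSWORD=my-secret-password\nDB_NAME=mydb"

# Roughly what load_dotenv() does: put each KEY=VALUE into os.environ
for line in env_text.splitlines():
    if line and not line.startswith("#"):
        key, _, value = line.partition("=")
        os.environ.setdefault(key.strip(), value.strip())

print(os.getenv("DB_PASSWORD"))  # my-secret-password
```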

Example of a normal select query where we do not expect too many records:

import os

import pandas as pd
from dotenv import load_dotenv

from fagfunksjoner.prodsone import Oracle


load_dotenv()

query = "select vare, pris from my_db_table"

ora = Oracle(pw=os.getenv("my-secret-password"),
             db=os.getenv("database-name"))

df = pd.DataFrame(ora.select(sql=query))

ora.close()

Example of a select query that may return many records:

import os

import pandas as pd
from dotenv import load_dotenv

from fagfunksjoner.prodsone import Oracle


load_dotenv()

query = "select vare, pris from my_db_table"

ora = Oracle(pw=os.getenv("my-secret-password"),
             db=os.getenv("database-name"))

df = pd.DataFrame(ora.selectmany(sql=query, batchsize=10000))

ora.close()

Example of inserting new records into the database (note that the ordering of the columns in the SQL statement and in the data matters):

import os

import pandas as pd
from dotenv import load_dotenv

from fagfunksjoner.prodsone import Oracle


load_dotenv()

df = pd.DataFrame(
    {
        "vare": ["banan", "eple"],
        "pris": [11, 10]
    }
)

data = list(df.itertuples(index=False, name=None))

query = "insert into my_db_table(vare, pris) values(:vare, :pris)"

ora = Oracle(pw=os.getenv("my-secret-password"),
             db=os.getenv("database-name"))

ora.insert_or_update(sql=query, update=data)

ora.close()

Example of updating records in the database (note that the ordering of the columns in the SQL statement and in the data matters. Also make sure the query doesn't update other records than intended; having some kind of ID for the records is very useful!):

import os

import pandas as pd
from dotenv import load_dotenv

from fagfunksjoner.prodsone import Oracle


load_dotenv()

df = pd.DataFrame(
    {
        "id": ["12345", "54321"]
        "vare": ["banan", "eple"],
        "pris": [11, 10]
    }
)

data = list(df[["vare", "pris", "id"]].itertuples(index=False, name=None))

query = "update my_db_table set vare = :vare, pris = :pris where id = :id"

ora = Oracle(pw=os.getenv("my-secret-password"),
             db=os.getenv("database-name"))

ora.insert_or_update(sql=query, update=data)

ora.close()

It also supports use as a context manager. This is handy when working with big data, where you have to work more lazily, or when you want to do multiple operations on several tables without closing the connection in between. An easy case: reading large data from the database and writing it to a parquet file in batches:

import os

import pyarrow as pa
import pyarrow.parquet as pq
from dotenv import load_dotenv

from fagfunksjoner.prodsone import Oracle, OraError


load_dotenv()

select_query = "select vare, pris from my_db_table"
parquet_write_path = "write/to/path/datafile.parquet"

pqwriter = None
try:
    # entering the context manager takes you straight to a cursor
    with Oracle(pw=os.getenv("my-secret-password"),
                db=os.getenv("database-name")) as concur:
        concur.execute(select_query)
        cols = [c[0].lower() for c in concur.description]
        while True:
            rows = concur.fetchmany(10_000)  # 10,000 rows per batch
            if not rows:
                break
            data = [dict(zip(cols, row)) for row in rows]
            tab = pa.Table.from_pylist(data)
            if pqwriter is None:
                # ParquetWriter needs a schema; create it from the first batch
                pqwriter = pq.ParquetWriter(parquet_write_path, tab.schema)
            # this writes one row group per batch
            pqwriter.write_table(tab)
except OraError as error:
    raise error
finally:
    if pqwriter is not None:
        pqwriter.close()

Contributing

Contributions are very welcome. To learn more, see the Contributor Guide.

License

Distributed under the terms of the MIT license, SSB Fagfunksjoner is free and open source software.

Issues

If you encounter any problems, please file an issue along with a detailed description.

Credits

This project was generated from Statistics Norway's SSB PyPI Template.
