
Location based social network (LBSN) data structure format & transfer tool



A Python package that uses the common location based social network (LBSN) data structure (ProtoBuf) to import, transform and export social media data from networks such as Twitter and Flickr.

Figure: Illustration of functions


The goal is to provide a common interface for handling social media data, without the need to individually adapt to the myriad API endpoints available. As an example, consider the ProtoBuf spec lbsn.Post, which can be a tweet on Twitter, a photo shared on Flickr, or a post on Reddit. Despite coming from different networks, all of these objects share a common set of attributes, which is reflected in the lbsnstructure.
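
To illustrate the idea of one structure for many networks, here is a hypothetical, heavily simplified stand-in for lbsn.Post as a plain Python dataclass. The field names and origin codes are illustrative only and do not reproduce the actual lbsnstructure ProtoBuf schema:

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class Post:
    """Illustrative stand-in for lbsn.Post - not the real schema."""
    origin_id: int                 # network of origin (codes here are illustrative)
    post_guid: str                 # network-specific unique id
    post_body: str                 # tweet text, photo description, reddit post body
    hashtags: List[str] = field(default_factory=list)
    post_latlng: Optional[Tuple[float, float]] = None  # shared geo attribute

# A tweet and a Flickr photo map onto the same structure:
tweet = Post(origin_id=3, post_guid="t-123",
             post_body="Sunset at the river #elbe",
             hashtags=["elbe"], post_latlng=(51.05, 13.74))
photo = Post(origin_id=2, post_guid="f-456",
             post_body="Evening walk", post_latlng=(51.03, 13.73))
```

Both objects carry the same attribute set, so downstream code can process them uniformly regardless of the source network.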

The tool is based on a 4-Facet conceptual framework for LBSN, introduced in a paper by Dunkel et al. (2018).

The GDPR explicitly requires social media network operators to allow users to transfer their accounts and data between services. While there are attempts by Google, Facebook etc. (e.g. see the data-transfer-project), this is not currently possible. With the lbsnstructure, a primary motivation is to systematically characterize LBSN data aspects in a common, cross-network data scheme that enables privacy-by-design for connected software, data handling and database design.


This tool enables data import from a Postgres database, JSON, or CSV, and export to CSV, LBSN ProtoBuf, or the hll and raw versions of the LBSN-prepared Postgres databases. The tool maps social media endpoints (e.g. Twitter tweets) to a common LBSN Interchange Structure format in ProtoBuf. LBSNTransform can be used from the command line (CLI) or imported into other Python projects with import lbsntransform, for on-the-fly conversion.

Quick Start

The recommended way to install lbsntransform, for both Linux and Windows, is through the conda package manager.

  1. Create a conda env using environment.yml

First, create an environment with the dependencies for lbsntransform, using the environment.yml that is provided in the root of the repository.

git clone
cd lbsntransform
# not necessary, but recommended:
conda config --env --set channel_priority strict
conda env create -f environment.yml
  2. Install lbsntransform without dependencies

Afterwards, install lbsntransform using pip, without dependencies.

conda activate lbsntransform
pip install lbsntransform --no-deps --upgrade
# or locally, from the latest commits on master
# pip install . --no-deps --upgrade
  3. Import data using a mapping

For each data source, a mapping must be provided that defines how data is mapped to the lbsnstructure.

The default mapping is lbsnraw.

Additional mappings can be dynamically loaded from a folder.

We provide two example mappings, for the Flickr YFCC100M dataset (CSV) and for Twitter (JSON).
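
To convey what a mapping conceptually does, here is a hypothetical, minimal sketch that maps a raw Twitter JSON record to a common post dict. Real lbsntransform mappings are Python modules loaded from --mappings_path with a richer interface; only the origin code 3 for Twitter is taken from the examples in this document, everything else (function name, field choices) is illustrative:

```python
import json

def map_tweet(record: dict) -> dict:
    """Hypothetical mapping: raw Twitter JSON -> common post attributes."""
    return {
        "origin_id": 3,  # 3 = Twitter (the --origin value used below)
        "post_guid": record["id_str"],
        "post_body": record.get("text", ""),
        "hashtags": [h["text"]
                     for h in record.get("entities", {}).get("hashtags", [])],
    }

# A minimal Twitter-style JSON record:
raw = json.loads('{"id_str": "123", "text": "Hello #lbsn", '
                 '"entities": {"hashtags": [{"text": "lbsn"}]}}')
post = map_tweet(raw)
```

Each supported data source gets its own such mapping, so the rest of the pipeline only ever sees the common structure.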

For example, to import the first 1000 records of Twitter JSON data to the lbsn rawdb, clone the example mappings to a local folder ./resources/mappings/, start up the Docker rawdb container, and use:

lbsntransform --origin 3 \
              --mappings_path ./resources/mappings/ \
              --file_input \
              --file_type "json" \
              --dbpassword_output "sample-key" \
              --dbuser_output "postgres" \
              --dbserveraddress_output "" \
              --dbname_output "rawdb" \
              --dbformat_output "lbsn" \
              --transferlimit 1000

.. with the above input args, the tool will:

  • read local json from ./01_Input/
  • and store lbsn records to the lbsn rawdb.

Conversely, to import data directly into the privacy-aware version of the lbsnstructure, called hlldb, start up the Docker container and use:

lbsntransform --origin 3 \
              --mappings_path ./resources/mappings/ \
              --file_input \
              --file_type "json" \
              --dbpassword_output "sample-key" \
              --dbuser_output "postgres" \
              --dbserveraddress_output "" \
              --dbname_output "hlldb" \
              --dbformat_output "hll" \
              --dbpassword_hllworker "sample-key" \
              --dbuser_hllworker "postgres" \
              --dbserveraddress_hllworker "" \
              --dbname_hllworker "hlldb" \
              --include_lbsn_objects "origin,post" \
              --include_lbsn_bases hashtag,place,date,community \
              --transferlimit 1000

.. with the above input args, the tool will:

  • read local json from ./01_Input/
  • and store lbsn records to the privacy-aware lbsn hlldb
  • by converting only lbsn objects of type origin and post
  • and updating the HyperLogLog (HLL) target tables hashtag, place, date and community
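
The privacy gain of hlldb comes from storing HyperLogLog (HLL) sketches instead of raw identifiers: a sketch supports approximate distinct counts and lossless unions, but individual user or post ids cannot be read back out of it. As an illustrative sketch of the technique (not hlldb's actual postgresql-hll implementation), a minimal HLL in Python:

```python
import hashlib

def _hash64(item: str) -> int:
    # 64-bit hash derived from SHA-1; any well-mixed hash works for HLL
    return int.from_bytes(hashlib.sha1(item.encode()).digest()[:8], "big")

class SimpleHLL:
    """Minimal HyperLogLog sketch for illustration only."""

    def __init__(self, b: int = 8):
        self.b = b
        self.m = 1 << b            # number of registers (256 here)
        self.registers = [0] * self.m

    def add(self, item: str) -> None:
        h = _hash64(item)
        idx = h & (self.m - 1)     # low b bits select a register
        w = h >> self.b            # remaining 56 bits
        # rank = position of the leftmost 1-bit in w (1-based)
        rho = (64 - self.b) - w.bit_length() + 1
        self.registers[idx] = max(self.registers[idx], rho)

    def union(self, other: "SimpleHLL") -> None:
        # Sketches merge losslessly via element-wise max of registers
        self.registers = [max(a, b)
                          for a, b in zip(self.registers, other.registers)]

    def count(self) -> float:
        # Raw HLL estimator (no small/large-range corrections)
        alpha = 0.7213 / (1 + 1.079 / self.m)
        return alpha * self.m * self.m / sum(2.0 ** -r for r in self.registers)

hll = SimpleHLL()
for i in range(1000):
    hll.add(f"user-{i}")
# hll.count() approximates the true distinct count (1000),
# yet no "user-..." id can be recovered from the 256 registers.
```

This is why the hashtag, place, date and community target tables can be updated and merged across imports without ever persisting raw identifiers.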

A full list of possible input and output args is available in the documentation.

Built With

  • lbsnstructure - A common, language-independent and cross-network social media data scheme
  • protobuf - Google's data interchange format
  • psycopg2 - Python-PostgreSQL Database Adapter
  • ppygis3 - A PPyGIS port for Python
  • shapely - Geometric objects processing in Python
  • emoji - Emoji handling in Python


Authors

  • Alexander Dunkel - Initial work

See also the list of contributors.


License

This project is licensed under the GNU GPLv3 or any later version - see the license file for details.


Download files

Download the file for your platform.

Source Distribution

lbsntransform-0.26.0.tar.gz (208.8 kB)

Built Distribution

lbsntransform-0.26.0-py3-none-any.whl (90.5 kB)
