
A set of tools to support downloading GDELT data


Loading GDELT data into MongoDB

This is a set of programs for loading the GDELT 2.0 data set into MongoDB.

Quick Start

Install the latest version of Python from python.org. You need at least version 3.6 for this program; many pre-installed versions of Python are 2.7, which will not work.

Now install gdelttools

pip install gdelttools

Now get the master file of all the GDELT files.

gdeltloader --master

This will generate a file named something like gdelt-master-file-04-19-2022-19-33-56.txt

Downloading the master data set

To download the master data set associated with GDELT (the export files) you can combine these steps:

gdeltloader --master --download --overwrite

This will get the master file, parse it, extract the list of CSV files, download them, and unzip them. The full GDELT 2.0 database runs to several terabytes of data, so downloading all of it is not recommended.

To limit the amount you download you can specify --last to define how many days of data you want to download:

gdeltloader --master --download --overwrite --last 20

This will download the most recent 20 days of data.
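gdeltloader's own implementation is not shown here, but the date filtering can be sketched as below. It assumes master-file lines of the form `<size> <md5> <url>` and that each URL's filename starts with a `YYYYMMDDHHMMSS` timestamp, both of which hold for GDELT 2.0 master files:

```python
from datetime import datetime, timedelta

def last_n_days(master_lines, n_days, now=None):
    """Return URLs from GDELT master-file lines newer than n_days ago."""
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=n_days)
    keep = []
    for line in master_lines:
        parts = line.split()
        if len(parts) != 3:                  # expect: size md5 url
            continue
        url = parts[2]
        stamp = url.rsplit("/", 1)[-1][:14]  # e.g. 20220419184500
        try:
            when = datetime.strptime(stamp, "%Y%m%d%H%M%S")
        except ValueError:
            continue                         # skip malformed names
        if when >= cutoff:
            keep.append(url)
    return keep
```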

GDELT 2.0 Encoding and Structure

The GDELT dataset is a large dataset of news events that is updated in real time. GDELT stands for Global Database of Events, Language, and Tone. The format of records in a GDELT data file is defined by the GDELT 2.0 Cookbook.

Each record uses an encoding method called CAMEO coding which is defined by the CAMEO cookbook.

Once you understand the GDELT recording structure and the CAMEO encoding you will be able to decode a record. To fully decode a record you may need the TABARI dictionaries from which the CAMEO encoding is derived.
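To give a flavour of what decoding looks like, here is a minimal sketch. The handful of root-code descriptions below is an illustrative subset and should be verified against the CAMEO cookbook:

```python
# A few two-digit CAMEO root codes (illustrative subset; verify against
# the CAMEO cookbook before relying on these descriptions).
CAMEO_ROOTS = {
    "01": "MAKE PUBLIC STATEMENT",
    "02": "APPEAL",
    "04": "CONSULT",
    "19": "FIGHT",
}

def root_description(event_code: str) -> str:
    """A full CAMEO event code such as '043' refines the root code '04'."""
    return CAMEO_ROOTS.get(event_code[:2], "unknown root")
```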

How to download GDELT 2.0 data

The gdeltloader script can download GDELT data and unzip the files so that they can be loaded into MongoDB.

usage: gdeltloader [-h] [--host HOST] [--master] [--update]
                   [--database DATABASE] [--collection COLLECTION]
                   [--local LOCAL] [--overwrite] [--download] [--metadata]
                   [--filefilter {export,gkg,mentions,all}] [--last LAST]

optional arguments:
  -h, --help            show this help message and exit
  --host HOST           MongoDB URI
  --master              GDELT master file [False]
  --update              GDELT update file [False]
  --database DATABASE   Default database for loading [GDELT]
  --collection COLLECTION
                        Default collection for loading [events_csv]
  --local LOCAL         load data from local list of zips
  --overwrite           Overwrite files when they exist already
  --download            download zip files from master or local file
  --metadata            grab meta data files
  --filefilter {export,gkg,mentions,all}
                        download a subset of the data, the default is the
                        export data
  --last LAST           how many recent days of data to download [365]

Version: 0.06a

To operate, first get the master and update lists of event files.

gdeltloader --master --update

Now grab the subset of files you want. For our purposes, let's grab the most recent 365 export files. There are three types of file in the master and update lists:

150383 297a16b493de7cf6ca809a7cc31d0b93
318084 bb27f78ba45f69a17ea6ed7755e9f8ff
10768507 ea8dde0beb0ba98810a92db068c0ce99

Export files contain event data. Mentions files contain other mentions of the initial news event in the current 15-minute cycle. GKG files contain the global knowledge graph.

We just want the event exports, so we use the master file to pick out the most recent 365 export files like so (note that GDELT publishes new files every 15 minutes, so this covers only a few days of events, not a full year):

$ grep export gdelt_master-file-04-08-2019-14-13-28.txt | tail -n 365 > last_365_days.txt
$ wc last_365_days.txt
  365  1095 38847 last_365_days.txt

Now download the data.

gdeltloader --download --local last_365_days.txt 

The --host option names the MongoDB instance used to store the data we download; the --local argument gives the location of the file list on disk. This command will download all the associated zip files and unpack them into uncompressed .CSV files.

Now import the CSV files with mongoimport.

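A sketch of what that import might look like is below. GDELT 2.0 export files are tab-separated with no header row, so the field names must be supplied from the GDELT 2.0 Cookbook; the file names here are illustrative, and `gdelt_event_fields.txt` is a file you would create yourself:

```shell
# gdelt_event_fields.txt: one field name per line, taken from the
# GDELT 2.0 Cookbook (GlobalEventID, Day, MonthYear, ...).
mongoimport --db GDELT --collection events_csv \
    --type tsv --fieldFile gdelt_event_fields.txt \
    20220419184500.export.CSV
```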

Transforming the data

You can generate GeoJSON points from the existing geo-location lat/long fields by using gdelttools/

usage: [-h] [--host HOST] [--database DATABASE] [-i INPUTCOLLECTION] [-o OUTPUTCOLLECTION]

optional arguments:
  -h, --help            show this help message and exit
  --host HOST           MongoDB URI [mongodb://localhost:27017]
  --database DATABASE   Default database for loading [GDELT]
  -i INPUTCOLLECTION    Default collection for input [events_csv]
  -o OUTPUTCOLLECTION   Default collection for output [events]

This program reads from and writes to a database called GDELT by default. The default input collection is events_csv and the default output collection is events.
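The transformation itself is straightforward. A minimal sketch is below; the field names ActionGeo_Lat and ActionGeo_Long are assumptions taken from the GDELT 2.0 event schema, so verify them against the Cookbook:

```python
def to_geojson_point(doc, lat_field="ActionGeo_Lat", lon_field="ActionGeo_Long"):
    """Attach a GeoJSON Point built from a record's lat/long fields.

    GeoJSON coordinate order is [longitude, latitude]. The default field
    names are assumptions based on the GDELT 2.0 Cookbook.
    """
    lat, lon = doc.get(lat_field), doc.get(lon_field)
    if lat in (None, "") or lon in (None, ""):
        return doc                      # no location on this record
    doc["geo"] = {"type": "Point", "coordinates": [float(lon), float(lat)]}
    return doc
```

Note the swapped order: MongoDB's 2dsphere indexes expect longitude first, which is the most common mistake when building GeoJSON by hand.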

To transform the collections run:

python gdelttools/
Processed documents total : 247441

If you run it again on the same dataset it will overwrite the existing records; each new dataset is merged into the previously loaded collection of documents.
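This overwrite-and-merge behaviour amounts to an upsert keyed on the event ID. A toy sketch of the semantics, where a plain dict stands in for the MongoDB collection and the key name GlobalEventID is illustrative (exact spelling per the Cookbook):

```python
def merge_events(store: dict, new_docs, key="GlobalEventID"):
    """Upsert semantics: an existing key is overwritten, a new key is added."""
    for doc in new_docs:
        store[doc[key]] = doc
    return store
```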

