
Downloads various SEC EDGAR files, converting many of them to CSV files

# EDGARquery

## Installation

pip install edgarquery

## License

edgarquery is distributed under the terms of the MIT license.

## Usage

## Required environment variable

EQEMAIL - required by the SEC to download some of the files with curl.
It is used as the User-Agent in the URL requests made by the scripts.
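
For example, in a POSIX shell (the address below is a placeholder; the SEC expects a real contact address):

```shell
# EQEMAIL is sent as the User-Agent header on every EDGAR request.
# The address here is a placeholder - substitute your own.
export EQEMAIL="jane.doe@example.com"
```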

These commands retrieve various data from SEC EDGAR. They use a
CIK, or Central Index Key, to identify entities such as companies or
insiders - company officers or large stockholders.
Use edgartickerstocsv and edgarcikperson to find CIKs by name
or ticker, and then use that CIK to gather the data of interest.
To display facts for a company aggregated by the SEC, invoke
edgarcompanyfactshow.
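
A sketch of that workflow, assuming edgarquery is installed and EQEMAIL is set; the ticker is illustrative, the output directory is arbitrary, and the CIK shown is Apple Inc.'s:

```shell
# Download the ticker-to-CIK table as CSV, then look a ticker up in it.
edgartickerstocsv --directory /tmp/edgar
grep -i 'aapl' /tmp/edgar/*.csv        # locate the CIK for a ticker

# Use the CIK found above with the data commands.
CIK=0000320193                         # Apple Inc., as an example
edgarcompanyfactshow --cik "$CIK" --directory /tmp/edgar --show
```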

## edgartickerstocsv

usage: edgartickerstocsv [-h] [--directory DIRECTORY]

collect EDGAR companyticker json files and convert them to csv

options:
-h, --help show this help message and exit
--directory DIRECTORY
where to deposit the files

## edgarcikperson

usage: edgarcikperson [-h] [--cikpersondb CIKPERSONDB] [--file FILE]

extract CIK and person names from form345 zip files

options:
-h, --help show this help message and exit
--cikpersondb CIKPERSONDB
full path to the sqlite3 database - default in memory
--file FILE where to store the output - default stdout

## edgarcompanyconcepttocsv

usage: edgarcompanyconcepttocsv [-h] --file FILE [--directory DIRECTORY]

Parse an SEC EDGAR companyconcepts json file after it has been altered to deal
with its multipart character and generate a csv file from its contents

options:
-h, --help show this help message and exit
--file FILE json file to process
--directory DIRECTORY
where to deposit the files

## edgarcompanyfactshow

usage: edgarcompanyfactshow [-h] --cik CIK [--directory DIRECTORY] [--show]

parse EDGAR company facts for a cik and display them in a browser

options:
-h, --help show this help message and exit
--cik CIK Central Index Key for the company
--directory DIRECTORY
where to store the html file to display
--show display the html in your browser

## edgarcompanyfactstocsv

usage: edgarcompanyfactstocsv [-h] --file FILE [--directory DIRECTORY]

Parse an SEC EDGAR companyfacts json file after it has been altered to deal
with its multipart character and generate CSV files from its content

options:
-h, --help show this help message and exit
--file FILE json file to process
--directory DIRECTORY
where to deposit the csv files

## edgarcompanyfactsziptocsv

usage: edgarcompanyfactsziptocsv [-h] --zipfile ZIPFILE [--directory DIRECTORY]
[--files FILES]

Extract one or more json files from an SEC EDGAR companyfacts.zip file and
convert to CSV

options:
-h, --help show this help message and exit
--zipfile ZIPFILE companyfacts.zip file to process. It can be downloaded
with edgarquery.query
--directory DIRECTORY
where to deposit the output
--files FILES comma separated(no spaces) content file(s) to process
a subset of the files in the zip file

## edgarquery

usage: edgarquery [-h] [--cik CIK] [--cy CY] [--frame FRAME] [--units UNITS]
[--fact FACT] [--directory DIRECTORY] [--file FILE]
[--companyconcept] [--companyfacts] [--xbrlframes]
[--companyfactsarchivezip] [--submissionszip]
[--financialstatementandnotesdataset]

query SEC EDGAR site NOTE that EQEMAIL env variable is required and must
contain a valid User-Agent such as your email address

options:
-h, --help show this help message and exit
--cik CIK 10-digit Central Index Key
--cy CY calendar year e.g. CY2023, CY2023Q1, CY2023Q4I
--frame FRAME reporting frame e.g us-gaap, ifrs-full, dei, srt
--units UNITS USD or shares
--fact FACT fact to collect e.g AccountsPayableCurrent, USD-per-
shares
--directory DIRECTORY
directory to store the output
--file FILE file in which to store the output argument allowed for
each query type if --directory is not provided, it
should be the full path
--companyconcept returns all the XBRL disclosures from a single company
--cik required --frame - default us-gaap --fact -
default USD-per-shares
--companyfacts returns all the company concepts data for a CIK --cik
required
--xbrlframes aggregates one fact for each reporting entity that is
last filed that most closely fits the calendrical
period requested --cy required
--companyfactsarchivezip
returns daily companyfacts index in a zip file
--submissionszip returns daily index of submissions in a zip file
--financialstatementandnotesdataset
returns zip file with financial statement and notes
summaries --cy required
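
For instance, one quarterly XBRL frame can be pulled like this; the fact and period are chosen only as an illustration:

```shell
# One quarter of AccountsPayableCurrent, in USD, across all filers.
# Assumes EQEMAIL is already exported.
CY=CY2023Q1
edgarquery --xbrlframes --cy "$CY" --frame us-gaap \
    --fact AccountsPayableCurrent --units USD --directory /tmp/edgar
```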

## edgarlatest10K

usage: edgarlatest10K [-h] --cik CIK [--link] [--directory DIRECTORY]

find the most recent 10-K for cik

options:
-h, --help show this help message and exit
--cik CIK 10-digit Central Index Key
--link return the url for the latest 10-K
--directory DIRECTORY
directory to store the output

## edgarlatestsubmissions

usage: edgarlatestsubmissions [-h] --cik CIK [--directory DIRECTORY]
[--file FILE]

find the most recent submissions for cik

options:
-h, --help show this help message and exit
--cik CIK 10-digit Central Index Key
--directory DIRECTORY
directory to store the output
--file FILE json file to process

## edgarsubmissions

usage: edgarsubmissions [-h] --cik CIK [--year YEAR] [--file FILE]
[--directory DIRECTORY]

find submissions for cik, for the current year unless --year is given

options:
-h, --help show this help message and exit
--cik CIK 10-digit Central Index Key
--year YEAR year to search for submissions if not current year
--file FILE store the output in this file
--directory DIRECTORY
store the output in this directory

## edgarsubmissionsziptocsv

usage: edgarsubmissionsziptocsv [-h] [--zipfile ZIPFILE] [--directory DIRECTORY]
[--files FILES]

Extract one or more json files from an SEC EDGAR submissions.zip file and
convert to CSV

options:
-h, --help show this help message and exit
--zipfile ZIPFILE submissions.zip file to process - required
--directory DIRECTORY
where to deposit the output
--files FILES comma separated(no spaces) content file(s) to process
a subset of the files in the zip file

## edgarxbrlframestocsv

usage: edgarxbrlframestocsv [-h] --file FILE [--directory DIRECTORY]

Parse an SEC EDGAR xbrlframes json file after it has been altered to deal with
its multipart character and generate a csv file from its contents

options:
-h, --help show this help message and exit
--file FILE xbrl frames json file to process
--directory DIRECTORY
where to deposit the output

