Fastpat
USPTO patent data fetcher and parser
Fetch and parse patent application, grant, assignment, and maintenance info from USPTO Bulk Data. Fastpat handles all of the various USPTO file formats and outputs everything to plain CSV. It also clusters patents by firm name: candidate name pairs are first filtered using locality-sensitive hashing, then grouped into the connected components induced by a Levenshtein distance threshold (a sketch of this technique follows below).
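To make the clustering idea concrete, here is a minimal, self-contained sketch of the general technique, using the same libraries listed under the requirements below. This is illustrative only, not fastpat's actual implementation; all function names and parameters (shingle size, number of hashes, bands, distance threshold) are hypothetical choices.

```python
from itertools import combinations

import editdistance
import networkx as nx
import xxhash

def shingles(name, k=3):
    # character k-grams of a lowercased name (the whole name if shorter than k)
    name = name.lower()
    return {name[i:i + k] for i in range(max(len(name) - k + 1, 1))}

def signature(shings, n_hashes=16):
    # MinHash signature: for each seed, take the minimum hash over all shingles
    return [min(xxhash.xxh64(s, seed=i).intdigest() for s in shings)
            for i in range(n_hashes)]

def candidate_pairs(names, n_hashes=16, bands=8):
    # LSH banding: names whose signatures agree on any band become candidates,
    # so only a small fraction of all O(n^2) pairs is ever compared exactly
    rows = n_hashes // bands
    buckets = {}
    for idx, name in enumerate(names):
        sig = signature(shingles(name), n_hashes)
        for b in range(bands):
            key = (b, tuple(sig[b * rows:(b + 1) * rows]))
            buckets.setdefault(key, []).append(idx)
    pairs = set()
    for group in buckets.values():
        pairs.update(combinations(group, 2))
    return pairs

def cluster_names(names, max_dist=2):
    # connect candidate pairs within the Levenshtein threshold, then take
    # connected components as the firm clusters
    graph = nx.Graph()
    graph.add_nodes_from(range(len(names)))
    for i, j in candidate_pairs(names):
        if editdistance.eval(names[i].lower(), names[j].lower()) <= max_dist:
            graph.add_edge(i, j)
    return [[names[i] for i in comp] for comp in nx.connected_components(graph)]

names = ["Acme Corp", "ACME Corp.", "Acme Corporation", "Globex LLC"]
print(cluster_names(names))
```

The banding step trades recall against the number of candidate pairs: more, narrower bands catch more near-duplicates at the cost of more exact Levenshtein comparisons, which then weed out the false positives.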
Requirements
In general, you'll need the `fire` library. For parsing, you'll need `numpy`, `pandas`, and `lxml`. For firm clustering, you'll additionally need `xxhash`, `editdistance`, `networkx`, and `Cython`. All of these are available through both `pip` and `conda`. You can install all of the requirements with `pip` by running `pip install -r requirements.txt`.
Usage
Most common tasks can be executed through the `fastpat` command. For more advanced usage, you can also directly call the functions in the library itself. When using `fastpat`, you have to specify the data directory, either by passing the `--datadir` flag directly or by setting the environment variable `FASTPAT_DATADIR`. If you've cloned the repository locally, run `python3 -m fastpat` instead of `fastpat`.
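For example, the following two approaches are equivalent, assuming you want to keep everything under a local `data` directory (flag placement here is illustrative):

fastpat fetch grant --datadir data

export FASTPAT_DATADIR=data
fastpat fetch grant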
Downloading Data
The following USPTO data sources are supported:
- `grant`: patent grants
- `apply`: patent applications
- `assign`: patent reassignments
- `maint`: patent maintenance events
- `tmapply`: trademark applications (preliminary)
To download the files for data source `SOURCE`, run the command
fastpat fetch SOURCE
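For example, `fastpat fetch grant` downloads all of the patent grant files.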
This library ships with a list of source files for each type; however, this will become out of date over time. As such, you can also specify your own metadata path containing these files, either by passing the `--metadir` flag directly or by setting the `FASTPAT_METADIR` environment variable. If you've cloned this repository locally, you can also update the files in `fastpat/meta`.
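For example, to fetch grants using your own metadata directory (`my-meta` is a hypothetical path, and flag placement is illustrative):

fastpat fetch grant --metadir my-meta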
Parsing Data
Parsing works similarly to fetching. Simply run
fastpat parse SOURCE
for one of the sources listed above.
Firm Clustering
This step is a bit more bespoke, and you may want to change things to suit your needs. In general, though, there are four subcommands you can pass to `fastpat firms`:
- `assign`: eliminates duplicate or redundant patent transfers from the reassignment data
- `cluster`: groups firm names into common entities using locality-sensitive matching and Levenshtein distance
- `cites`: aggregates citation data to the patent level
- `merge`: brings it all together into a firm-year panel

The simplest approach is to run these subcommands in order, as shown below.
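Run in order on the full set of sources, that sequence looks like the following (adjust the `--sources` list to whichever sources you've actually parsed):

fastpat firms assign
fastpat firms cluster --sources apply,grant,assign,maint
fastpat firms cites
fastpat firms merge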
Example
Suppose you just want to parse patent grants. To do this, you would go through the following steps:
- Set up the environment with `export FASTPAT_DATADIR=data`
- Fetch the grant data with `fastpat fetch grant`
- Parse the grant data with `fastpat parse grant`
- Cluster firm names with `fastpat firms cluster --sources grant`
- Process citations with `fastpat firms cites`
If you want to work with applications, grants, reassignments, and maintenance, you can run the following:
- Set up the environment with `export FASTPAT_DATADIR=data`
- Fetch all the data with `fastpat fetch SOURCE` for each `SOURCE` in `apply`, `grant`, `assign`, `maint` (four separate commands)
- Parse all the data with `fastpat parse SOURCE` for each `SOURCE` in `apply`, `grant`, `assign`, `maint` (four separate commands)
- Prune the reassignment data with `fastpat firms assign`
- Cluster firm names with `fastpat firms cluster --sources apply,grant,assign,maint`
- Process citations with `fastpat firms cites`
- Merge into a firm-year panel with `fastpat firms merge`
Data Updates
Continual data updating works well for applications and grants: only new files will be downloaded and unzipped. The way the patent office constructs the assignment data means that you'll have to delete it and re-download it roughly once a year. Similarly, maintenance information is stored in a single file, so to update it you'll need to delete the data file `raw/maint/MaintFeeEvents.zip` and rerun the fetch command.
The parsing code will also only parse new files. If you wish to rerun the parsing step for a given file, either delete its outputs (in the `parsed` data directory) or pass the `--overwrite` flag (this works for the fetching step too). The clustering and merging steps must be rerun for any update to propagate the changes throughout; they take about the same amount of time even for small updates, as they perform global computations. Every command is idempotent, meaning it can be rerun without breaking anything.
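For example, to force a refresh of the maintenance data, delete the single data file and fetch again (paths here assume a local `data` directory, and flag placement is illustrative):

rm data/raw/maint/MaintFeeEvents.zip
fastpat fetch maint

And to redo the parsing of already-processed grant files in place:

fastpat parse grant --overwrite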
Migration
If you've been using older versions of this repository, note that the new data layout is slightly different. To avoid having to re-download everything, you can move the contents of your `data` directory to `data/raw` and use `data` as the data directory path that you pass to `fastpat`. It's probably best to then re-parse everything and remove the old `parsed` and `tables` directories.
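A minimal sketch of that migration in the shell, assuming your old `parsed` and `tables` outputs lived at the top level of the old `data` directory:

mv data raw
mkdir data
mv raw data/
rm -rf data/raw/parsed data/raw/tables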