academicdb: An academic database builder

Why build your CV by hand when you can create it programmatically? This package uses a set of APIs (including Scopus, ORCID, CrossRef, and PubMed) to generate a database of academic achievements, and provides a tool to render them into a professional-looking CV. Perhaps more importantly, it builds a database of collaborators, which can be used to generate the notorious NSF collaborators spreadsheet.

Installing academicdb

To install the current version:

pip install academicdb

In addition to the Python packages required by academicdb (which should be installed automatically), you will also need a MongoDB server to host the database. There are two relatively easy alternatives:

  • install and run a local MongoDB server on your own machine
  • use a hosted cloud MongoDB service (such as MongoDB Atlas)

The former is easier, but I prefer the latter because it allows the database to be accessed from any system.
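
If you choose the local option and have Docker installed, one easy way to start a server is to run the official MongoDB image (a sketch; any standard MongoDB installation will work):

docker run -d --name academicdb-mongo -p 27017:27017 mongo

This exposes the server on the default port 27017 on localhost.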

Rendering the CV after building the database requires that LaTeX be installed on your system and available from the command line. There are various LaTeX distributions depending on your operating system.

Configuring academicdb

To use academicdb, you must first set up some configuration files, which will reside in ~/.academicdb. The most important is config.toml, which contains all of the details about you that are used to retrieve your information. Here are the contents of mine as an example:

[researcher]
lastname = "poldrack"
middlename = "alan"
firstname = "russell"
email = "russpold@stanford.edu"
orcid = "0000-0001-6755-0259"
query = "poldrack-r"
url = "http://poldrack.github.io"
twitter = "@russpoldrack"
github = "http://github.com/poldrack"
phone = "650-497-8488"
scholar_id = "RbmLvDIAAAAJ"
scopus_id = "7004739390"
address = [
    "Stanford University",
    "Department of Psychology",
    "Building 420",
    "450 Jane Stanford Way",
    "Stanford, CA, 94305-2130",
]

Most of this should be self-explanatory. There are several identifiers that you need to specify:

  • ORCID: This is a unique identifier for researchers. If you don't already have an ORCID, you can get one at https://orcid.org. You will need to enter information about your education, employment, invited positions and distinctions, and memberships and service into your ORCID account, since that is where academicdb looks for that information.
  • Google Scholar: You will also need to retrieve your Google Scholar ID. Once you have set up your profile, go to the "My Profile" page. The URL of that page contains your ID: for example, my URL is https://scholar.google.com/citations?user=RbmLvDIAAAAJ&hl=en and the ID is RbmLvDIAAAAJ.
  • Scopus: Scopus is a service run by Elsevier. I know that they are the bad guys, but Scopus provides a service that is not available anywhere else: for each reference, it provides a set of unique identifiers for the coauthors, which can be used to retrieve their affiliation information. This is essential for generating the NSF collaborators spreadsheet.

Cloud MongoDB setup

If you are going to use a cloud MongoDB server, you will need to add the following lines to your config.toml:

[mongo]
CONNECT_STRING = 'mongodb+srv://<username>:<password>@<server>'

Your cloud provider will give you the connection string to paste into this variable.
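
Before running a full build, you can sanity-check the connection string with the pymongo driver; this is a minimal sketch, with the placeholder string standing in for the one from your provider:

from pymongo import MongoClient

# paste the connection string from your cloud provider here
connect_string = 'mongodb+srv://<username>:<password>@<server>'
client = MongoClient(connect_string)

# 'ping' raises an exception if the server cannot be reached
client.admin.command('ping')
print('connected successfully')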

Obtaining an API key for Scopus

You will need an API key to access the Scopus database, which you can obtain from http://dev.elsevier.com/myapikey.html. This key is used by the pybliometrics package to access the APIs; note that there are weekly limits on the number of records that can be retrieved without a subscription. If your institution has a subscription and you are on its network, you may be able to bypass these limits.

The first time you use the package, you will be asked by pybliometrics to enter your API key, which will be stored in ~/.pybliometrics/config.ini for reuse.
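
Once the key is stored, you can verify that pybliometrics can reach Scopus by retrieving your own author profile. A minimal sketch (note that recent pybliometrics versions require an explicit init() call, while older versions configure themselves on import):

import pybliometrics
from pybliometrics.scopus import AuthorRetrieval

# required on recent pybliometrics versions; reads the API key
# from ~/.pybliometrics/config.ini
pybliometrics.scopus.init()

# retrieve an author profile by Scopus ID (this one is the
# scopus_id from the example config above)
author = AuthorRetrieval('7004739390')
print(author.indexed_name)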

Specifying additional information

Several pieces of information are difficult to obtain reliably from ORCID or other APIs, so they must be specified in a set of text files saved in the base directory given when the dbbuilder command line tool is run. See the examples directory for examples of each of these.

  • editorial.csv: information about editorial roles
  • talks.csv: information about non-conference talks at other institutions
  • conference.csv: information about conference talks
  • teaching.csv: information about teaching
  • funding.csv: information about grant funding

In addition, there may be references (including books, book chapters, and published conference papers) that are not detected by the automated search and need to be added by hand, using a file called additional_pubs.csv in the base directory.

Finally, there is a file called links.csv that lets you specify links related to individual publications, such as OSF repositories, shared code, and shared data. These links are rendered in the CV alongside the publications.
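
Purely as a hypothetical illustration (the column names here are made up; check the examples directory for the actual layout), a links.csv might pair each publication's DOI with a link type and URL:

type,DOI,url
Data,10.1234/example.doi,https://osf.io/abcde
Code,10.1234/example.doi,https://github.com/example/repo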

Building the database

To build the database, you use the dbbuilder command line tool. The simplest usage is:

dbbuilder -b <base directory for files and output>

The full usage for the script is:

usage: dbbuilder [-h] [-c CONFIGDIR] -b BASEDIR [-d] [-o] [--no_add_pubs] [--no_add_info] [--nodb] [-t] [--bad_dois_file BAD_DOIS_FILE]

optional arguments:
  -h, --help            show this help message and exit
  -c CONFIGDIR, --configdir CONFIGDIR
                        directory for config files
  -b BASEDIR, --basedir BASEDIR
                        base directory
  -d, --debug           log debug messages
  -o, --overwrite       overwrite existing database
  --no_add_pubs         do not get publications
  --no_add_info         do not add additional information from csv files
  --nodb                do not write to database
  -t, --test            test mode (limit number of publications)
  --bad_dois_file BAD_DOIS_FILE
                        file with bad dois to remove
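
For example, to rebuild an existing database from scratch with debug logging enabled (with ~/my_cv standing in for your own base directory):

dbbuilder -b ~/my_cv -o -d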

Rendering the CV

To render the CV after building the database, use the render_cv command line tool. The simplest usage is:

render_cv

This will create a LaTeX version of the CV and then render it using xelatex.

The full usage is:

usage: render_cv [-h] [-c CONFIGDIR] [-f FORMAT] [-d OUTDIR] [-o OUTFILE] [--no_render]

optional arguments:
  -h, --help            show this help message and exit
  -c CONFIGDIR, --configdir CONFIGDIR
                        directory for config files
  -d OUTDIR, --outdir OUTDIR
                        output dir
  -o OUTFILE, --outfile OUTFILE
                        output file stem
  --no_render           do not render the output file (only create .tex)
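
For example, to generate only the .tex file, writing it to a chosen directory and file stem (cv_output and mycv are placeholders):

render_cv -d cv_output -o mycv --no_render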

Creating the NSF collaborators spreadsheet

The database builder script will create a database collection called coauthors that contains the relevant information. The script to convert this collection to a spreadsheet is currently TBD; PRs welcome!
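
Until then, here is a minimal sketch of what such an export could look like, using pymongo and the standard csv module; the database name, collection fields, and output columns are assumptions rather than the package's actual schema:

import csv
from pymongo import MongoClient

# assumed database name and document fields -- inspect your own
# database for the actual schema before relying on this
client = MongoClient('mongodb://localhost:27017')
coauthors = client['academicdb']['coauthors']

with open('coauthors.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['name', 'affiliation'])
    for record in coauthors.find():
        writer.writerow([record.get('name'), record.get('affiliation')])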
