A Bibliometric and Scientometric Library Powered with Artificial Intelligence Tools

Project description

pyBibX

Introduction

A Bibliometric and Scientometric Python library that works with the raw files exported by the Scopus (.bib or .csv files), WOS (Web of Science) (.bib files), and PubMed (.txt files) scientific databases. It is also powered with advanced AI technologies for analyzing bibliometric and scientometric outcomes and textual data.

To export the correct file formats from Scopus, Web of Science, and PubMed, follow these steps:

  • a) Scopus: search, select articles, click "Export", choose "BibTeX" (or "CSV"), select all fields, then click "Export" again.
  • b) WoS: search, select articles, click "Export", choose "Save to Other File Formats", select "BibTeX", select all fields, then click "Send".
  • c) PubMed: search, select articles, click "Save", choose the "PubMed" format, then click "Save" to download a .txt file.

General Capabilities:

  • a) Works with Scopus (.bib files or .csv files), WOS (.bib files) and PubMed (.txt files) databases
  • b) Identification and Removal of duplicates
  • c) Identification of documents per type
  • d) Generates a Health Report to evaluate the quality of the .bib/.csv file
  • e) Generates an EDA (Exploratory Data Analysis) Report: Publications Timespan, Total Number of Countries, Total Number of Institutions, Total Number of Sources, Total Number of References, Total Number of Languages (and also the number of docs for each language), Total Number of Documents, Average Documents per Author, Average Documents per Institution, Average Documents per Source, Average Documents per Year, Total Number of Authors, Total Number of Authors Keywords, Total Number of Authors Keywords Plus, Total Single-Authored Documents, Total Multi-Authored Documents, Average Collaboration Index, Max H-Index, Total Number of Citations, Average Citations per Author, Average Citations per Institution, Average Citations per Document, Average Citations per Source
  • f) Creates an ID (Identification) for each Document, Authors, Sources, Institutions, Countries, Authors' Keywords, Keywords Plus. The IDs can be used in graphs/plots to obtain a cleaner visualization
  • g) Creates a WordCloud from the Abstracts, Titles, Authors Keywords or Keywords Plus
  • h) Creates a N-Gram Bar Plot (interactive plot) from the Abstracts, Titles, Authors Keywords or Keywords Plus
  • i) Creates a Projection (interactive plot) of the documents based on the Abstracts, Titles, Authors Keywords or Keywords Plus
  • j) Creates an Evolution Plot (interactive plot) based on Abstracts, Titles, Sources, Authors Keywords or Keywords Plus
  • k) Creates an Evolution Plot Complement (interactive plot) based on Abstracts, Titles, Sources, Authors Keywords or Keywords Plus
  • l) Creates a Sankey Diagram (interactive plot) with any combination of the following keys: Authors, Countries, Institutions, Journals, Authors_Keywords, Keywords_Plus, and/or Languages
  • m) Creates a TreeMap from the Authors, Countries, Institutions, Journals, Authors_Keywords, or Keywords_Plus
  • n) Creates an Authors Productivity Plot (interactive plot). For each year, it shows the documents (IDs) published by each author
  • o) Creates a Countries Productivity Plot (interactive plot). For each year, it shows the documents (IDs) published by each country (each author's country)
  • p) Creates a Bar Plot for the following statistics: Documents per Year, Citations per Year, Past Citations per Year, Lotka's Law, Sources per Documents, Sources per Citations, Authors per Documents, Authors per Citations, Authors per H-Index, Bradford's Law (Core Sources 1, 2 or 3), Institutions per Documents, Institutions per Citations, Countries per Documents, Countries per Citations, Language per Documents, Keywords Plus per Documents and Authors' Keywords per Documents
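Several of the bar-plot statistics above are derived from per-author citation counts. As an illustration of the idea only (this is not pyBibX's internal code), the H-index used in the "Authors per H-Index" plot can be computed like this:

```python
def h_index(citations):
    # An author has index h if h of their papers have at least h citations each.
    cites = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(cites, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))   # -> 4 (four papers with >= 4 citations)
print(h_index([25, 8, 5, 3, 3]))   # -> 3
```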

Network Capabilities:

  • a) Collaboration Plot between Authors, Countries, Institutions, Authors' Keywords or Keywords Plus
  • b) Citation Analysis (interactive plot) between Documents (Blue Nodes) and Citations (Red Nodes). Documents and Citations can be highlighted for better visualization
  • c) Collaboration Analysis (interactive plot) between Authors, Countries, Institutions or Adjacency Analysis (interactive plot) between Authors' Keywords or Keywords Plus. Collaboration and Adjacency can be highlighted for better visualization
  • d) Similarity Analysis (interactive plot) can be performed using coupling or cocitation methods
  • e) World Map Collaboration Analysis (interactive plot) between Countries in a Map
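The collaboration plots above connect entities that appear together on the same document, with edge weights equal to the number of shared documents. A minimal sketch of that underlying idea, using toy author lists rather than pyBibX's parsed data (the names are hypothetical):

```python
from itertools import combinations
from collections import Counter

# Toy data: each document is represented by its list of authors.
documents = [
    ['Silva', 'Pereira'],
    ['Silva', 'Costa', 'Pereira'],
    ['Costa', 'Lima'],
]

# Count each co-authored pair; the counts become edge weights in the network.
edges = Counter()
for authors in documents:
    for a, b in combinations(sorted(authors), 2):
        edges[(a, b)] += 1

print(edges[('Pereira', 'Silva')])  # -> 2 (they co-authored two documents)
```

The same counting scheme applies to countries, institutions, or keywords; only the entity extracted from each document changes.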

Artificial Intelligence Capabilities:

  • a) Topic Modelling using BERTopic to cluster documents by topic
  • b) Visualizes the topic distribution
  • c) Visualizes topics by their most representative words
  • d) Visualizes document projection and clusterization by topic
  • e) Visualizes a topics heatmap
  • f) Finds the most representative documents of each topic
  • g) Finds the most representative topics according to a word
  • h) Creates W2V Embeddings from Abstracts
  • i) Finds documents based on given words
  • j) Calculates the cosine similarity between two words
  • k) Performs operations between W2V Embeddings
  • l) Visualizes W2V Embeddings operations
  • m) Creates Sentence Embeddings from Abstracts, Titles, Authors Keywords or Keywords Plus
  • n) Abstractive Text Summarization using PEGASUS on a set of selected documents or all documents
  • o) Abstractive Text Summarization using ChatGPT on a set of selected documents or all documents. Requires the user to have an API key (https://platform.openai.com/account/api-keys)
  • p) Extractive Text Summarization using BERT on a set of selected documents or all documents
  • q) Ask ChatGPT to analyze the following results: EDA Report, WordCloud, N-Grams, Evolution Plot, Sankey Diagram, Authors Productivity Plot, Bar Plots, Citation Analysis, Collaboration Analysis, Similarity Analysis, and World Map Collaboration Analysis (consult Example 08). Requires the user to have an API key (https://platform.openai.com/account/api-keys)
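Item j above reports the cosine similarity between two word embeddings. The measure itself is independent of the W2V model pyBibX trains; on plain vectors it looks like this:

```python
import math

def cosine_similarity(u, v):
    # cos(theta) = (u . v) / (|u| * |v|); ranges from -1 (opposite) to 1 (identical direction).
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

print(round(cosine_similarity([1.0, 0.0], [1.0, 0.0]), 3))  # -> 1.0
print(round(cosine_similarity([1.0, 0.0], [0.0, 1.0]), 3))  # -> 0.0
```

In practice the vectors would be the learned embeddings of the two words, so similar words (as used in the corpus of abstracts) score close to 1.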

Correction and Manipulation Capabilities:

  • a) Filter the .bib, .csv or .txt file by Year, Sources, Bradford's Law Cores, Countries, Languages and/or Abstracts (Documents with Abstracts)
  • b) Merge Authors, Institutions, Countries, Languages and/or Sources that have multiple entries
  • c) Merge different or the same database files one at a time. The preference for information preservation is given to the old database, so the order of merging matters (consult Examples 04 and 05)
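Item b merges entries that refer to the same entity under different spellings. Conceptually this is a mapping from variant strings to one canonical form; a toy sketch of that idea (the names and the `merge` helper are hypothetical illustrations, not pyBibX's API):

```python
# Hypothetical variant -> canonical mapping; pyBibX exposes its own merge methods.
canonical = {
    'univ. of goias': 'Federal University of Goias',
    'ufg': 'Federal University of Goias',
}

def merge(name):
    # Normalize the raw string, then look it up; unknown names pass through unchanged.
    key = name.strip().lower()
    return canonical.get(key, name)

print(merge('UFG'))  # -> Federal University of Goias
```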

Usage

  1. Install:
pip install pyBibX
  2. Try it in Colab:

Acknowledgement

This section acknowledges the libraries that inspired pyBibX, and all the people who helped to improve or correct the code. Thank you very much!

  • Fabio Ribeiro von Glehn (29.DECEMBER.2022) - UFG - Federal University of Goias (Brazil)

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

pyBibX-4.0.1.tar.gz (101.2 kB)

Uploaded Source

Built Distribution

pyBibX-4.0.1-py3-none-any.whl (98.6 kB)

Uploaded Python 3

File details

Details for the file pyBibX-4.0.1.tar.gz.

File metadata

  • Download URL: pyBibX-4.0.1.tar.gz
  • Upload date:
  • Size: 101.2 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.0 CPython/3.10.9

File hashes

Hashes for pyBibX-4.0.1.tar.gz:

  • SHA256: 4f4886d3035b9fef7518c2b326a87464647f2056b75ca83f33dca269122c4834
  • MD5: 826e7b5b5dd8a0928c78bdfcd31883a4
  • BLAKE2b-256: 0663a10d3253d4445f2b3b590d11147407a3f90b86704760bac5246e11f9758e

See more details on using hashes here.

File details

Details for the file pyBibX-4.0.1-py3-none-any.whl.

File metadata

  • Download URL: pyBibX-4.0.1-py3-none-any.whl
  • Upload date:
  • Size: 98.6 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.0 CPython/3.10.9

File hashes

Hashes for pyBibX-4.0.1-py3-none-any.whl:

  • SHA256: 100c9161dee03ec48fb8d39269a9404644069db6a4b6449324f831c2b1b1ead3
  • MD5: ac9f22ba5f1417262f8825b45b4d3e42
  • BLAKE2b-256: 9f75574f2b66ea5628efd3d3a2111b1e08674f4cdb303fafe5dbbf8559693d4c
