
# Recommendation of Scholarly Works

[![Build Status](](
[![Dependency Status](](
[![Zenodo DOI for github](](

This software was built to address the need for a proper
recommendation system for publicly available scholarly/research works.

It classifies documents and combines personalization features with a content-based
algorithm to recommend similar works likely to be of interest to the user.
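As a rough illustration of the content-based approach described above (the function names below are illustrative, not part of the scholarec API; shown in Python 3), documents can be compared by the cosine similarity of their term-frequency vectors and ranked against a query document:

```python
import math
from collections import Counter

def term_vector(text):
    """Build a term-frequency vector from plain text."""
    return Counter(text.lower().split())

def cosine_similarity(vec_a, vec_b):
    """Cosine similarity between two term-frequency vectors."""
    common = set(vec_a) & set(vec_b)
    dot = sum(vec_a[t] * vec_b[t] for t in common)
    norm_a = math.sqrt(sum(v * v for v in vec_a.values()))
    norm_b = math.sqrt(sum(v * v for v in vec_b.values()))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

# Rank candidate documents against a query document.
query = term_vector("neural networks for text classification")
docs = {
    "paper-1": term_vector("text classification with neural networks"),
    "paper-2": term_vector("quantum chromodynamics lattice simulation"),
}
ranked = sorted(docs, key=lambda d: cosine_similarity(query, docs[d]), reverse=True)
```

Here `paper-1` ranks first because it shares most of its terms with the query, while `paper-2` shares none.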

_Note:_ Currently, full functionality is offered by combining this package with another,
Django-based package that provides the web interface:
- [arcolife/django-scholarec]( "django-scholarec")

> *Inspired by an older project:* [researchlei]( "BSD Licensed")



```
$ git clone
$ cd scholarec/
$ sh
```

* See INSTALL for detailed instructions.


* Optionally, to verify the installation, look for a description when executing:

```
$ python -m scholarec
```

* To check that the test scripts run without error:

```
$ ./tests/
$ ./tests/
```


* To use the module in a Python script, simply import it:

```python
import scholarec
```

* To check a sample run's output, open log/sample_run.txt

* To do a sample run:

```
$ ./tests/query_parse
```

Note: To build a small database from arXiv, run the query_parse
script and accept the "Extract PDF" option. This extracts the
related PDFs, converts them to plain text, and extracts the interesting
words that are later used for recommendations and suggestions.
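The keyword-extraction step above can be sketched as follows (a minimal Python 3 illustration; the stopword list and function name are assumptions, not scholarec's actual routine): filter out stopwords and very short tokens, then rank the remaining words by frequency.

```python
import re
from collections import Counter

# A tiny illustrative stopword list; scholarec's actual filtering may differ.
STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "for", "is", "on", "with"}

def interesting_words(plain_text, top_n=5):
    """Return the top_n most frequent non-stopword tokens in plain text."""
    tokens = re.findall(r"[a-z]+", plain_text.lower())
    counts = Counter(t for t in tokens if t not in STOPWORDS and len(t) > 2)
    return [word for word, _ in counts.most_common(top_n)]

sample = "Recommendation of scholarly works: recommendation systems rank scholarly documents."
print(interesting_words(sample))
```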

* A simple arXiv API call can be made by executing the following sample code (Python 2):

```python
import scholarec
from scholarec.base.arxiv import DocumentArXiv
from urllib2 import urlopen

url = ""
query_xml = urlopen(url)
doc = DocumentArXiv(query_xml)
data_dict = doc.extract_tags()
for entry_id in data_dict.keys():
    print "ID: %s" % entry_id
    print data_dict[entry_id], "\n"
```
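Independently of scholarec, the arXiv API returns Atom XML; a minimal Python 3 sketch of pulling entry IDs and titles out of such a response with only the standard library (the sample XML below is fabricated for illustration, though it follows the standard Atom namespace):

```python
import xml.etree.ElementTree as ET

# A minimal, made-up Atom response in the shape the arXiv API returns.
ATOM = """<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <entry>
    <id>http://arxiv.org/abs/1234.5678v1</id>
    <title>An Example Paper</title>
  </entry>
</feed>"""

NS = {"atom": "http://www.w3.org/2005/Atom"}

root = ET.fromstring(ATOM)
entries = {
    e.findtext("atom:id", namespaces=NS): e.findtext("atom:title", namespaces=NS)
    for e in root.findall("atom:entry", NS)
}
print(entries)
```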



Q. What data interchange file formats are used?

A. Data is exchanged in XML and also converted from XML to JSON.
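A generic XML-to-JSON conversion of this kind can be sketched with the Python 3 standard library (an illustrative converter, not scholarec's actual routine):

```python
import json
import xml.etree.ElementTree as ET

def xml_to_dict(element):
    """Recursively convert an ElementTree element to a plain dict."""
    children = list(element)
    if not children:
        return element.text
    return {child.tag: xml_to_dict(child) for child in children}

xml = "<entry><id>1234.5678</id><title>Example</title></entry>"
as_json = json.dumps({"entry": xml_to_dict(ET.fromstring(xml))})
print(as_json)
```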

Q. What are the data sources?

A. The dataset is currently taken from arXiv; DBLP and Google Scholar are planned.

Q. How is the data handled?

A. Elasticsearch/MongoDB for search and storage.
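For the Elasticsearch side, documents are typically indexed via its REST API; a Python 3 sketch of building a newline-delimited payload for the `_bulk` endpoint (the index name and field names are assumptions, not scholarec's schema):

```python
import json

# Illustrative documents; field names are assumptions, not scholarec's schema.
docs = [
    {"arxiv_id": "1234.5678", "title": "An Example Paper"},
    {"arxiv_id": "2345.6789", "title": "Another Example"},
]

# The _bulk format is an action line followed by the document source,
# one pair per document, terminated by a trailing newline.
lines = []
for d in docs:
    lines.append(json.dumps({"index": {"_index": "scholarec", "_id": d["arxiv_id"]}}))
    lines.append(json.dumps(d))
payload = "\n".join(lines) + "\n"
print(payload)
```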



[![GPL V3](](

Download Files

Source distribution (22.4 kB), uploaded Jun 2, 2014.
