LSHVec pre-trained models and its Python bindings

Summary

This repository provides several pre-trained models built with JLSHVec (a rewritten Java version of LSHVec). See the Remark section for technical details.

Python code and examples for using these models are also provided.

Requirements

  1. Python 3.6
  2. cython >= 0.28.5
  3. jnius >= 1.1.0
  4. Java >= 1.8
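
A quick way to check that the Java bridge works (a minimal sketch; it assumes a JDK is installed and discoverable by jnius):

# sanity check for the Python <-> JVM bridge; importing jnius fails if no JVM is found
import jnius

# query the JVM version via java.lang.System
System = jnius.autoclass('java.lang.System')
print(System.getProperty('java.version'))  # e.g. '1.8.0_292'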

Install

Build from source

git clone https://github.com/Lizhen0909/PyLSHvec.git && cd PyLSHvec && python setup.py install

or use pip

pip install pylshvec

or use docker

docker pull lizhen0909/pylshvec

How to use

Put simply, just

from pylshvec import *

# the JLSHVec jar file is needed here; download it first (see Download)
set_lshvec_jar_path("/mnt/jlshvec-assembly-0.1.jar")

# since the vector model is usually large, setting a generous Java memory limit is preferred
add_java_options("-Xmx32G")

# the model file and LSH function file are needed here; download them first
# use help(model) to see all the methods and constructor options
model = LSHVec(model_file="/mnt/refdb_viruses_model_gs_k23_l3000_rand_model_299",
               hash_file="/mnt/lsh_nt_NonEukaryota_k23_h25.crp")

reads = ['ACGTACGT.....', 'ACGTACGT.....', 'ACGTACGT.....', 'ACGTACGT.....', ....]

predicts = model.predict(reads)
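
In practice, reads usually come from a FASTA file rather than string literals. Below is a minimal sketch with a dependency-free FASTA reader (the file path is hypothetical, and it assumes predict() returns one NCBI taxonomy id per read, per the notes in Download):

# minimal FASTA reader; the input path is hypothetical
def read_fasta(path):
    seqs, current = [], []
    with open(path) as f:
        for line in f:
            if line.startswith('>'):
                if current:
                    seqs.append(''.join(current))
                    current = []
            else:
                current.append(line.strip())
        if current:
            seqs.append(''.join(current))
    return seqs

reads = read_fasta('/mnt/sample_reads.fasta')
predicts = model.predict(reads)

# pair each read with its predicted NCBI taxonomy id
for read, taxid in zip(reads, predicts):
    print(taxid, read[:50])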

For more complete examples, please see the notebooks (see the Download section for minimum memory requirements):

example_use_virus_classfication_model.ipynb

example_use_bacteria_classfication_model.ipynb

example_use_vectors_in_bacteria_classfication_model.ipynb

example_use_Illumina_bacteria_classfication_model.ipynb

example_use_Pacbio_bacteria_classfication_model.ipynb

Download

JLSHVec jar file

The pre-trained models were trained with JLSHVec, a rewritten Java version of LSHVec. The assembly jar file is needed to load the models.

Download jlshvec-assembly-0.1.jar

md5sum: aeb207b983b3adc27e14fd9c431e2130
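
To verify the download against the checksum above, a quick check with Python's standard hashlib (the local path is an assumption):

# compare the jar's md5 with the published checksum; the path is an assumption
import hashlib

md5 = hashlib.md5()
with open('/mnt/jlshvec-assembly-0.1.jar', 'rb') as f:
    for chunk in iter(lambda: f.read(1 << 20), b''):
        md5.update(chunk)
assert md5.hexdigest() == 'aeb207b983b3adc27e14fd9c431e2130'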

Pre-trained models

Be warned that, like all machine learning models, these models cannot perform well beyond the data they were trained on. If your data differs significantly from the training data, training your own model is preferred.

Here are some issues to be aware of:

  • Some NCBI taxonomy ids may never be predicted, since not all ids have training data.
  • The data is not balanced. Some ids (e.g. a particular species) have much more data than others, so predictions may be biased toward the data-rich ids.
  • Strain-level (and even some species-level) prediction is poor. Don't expect it to work.

RefDB viruses classification model

Trained with 9.3k virus assemblies from RefDB. Minimum Java memory: 16G.

RefDB bacteria classification model

Trained with 42k bacteria assemblies from RefDB. Minimum Java memory: 32G.

GenBank bacteria and viruses classification model (Illumina simulation)

Trained with 560k assemblies from GenBank. Only one assembly was sampled per species. Because the virus data is very small compared to the bacteria data, the model rarely predicts viruses; treat it as a bacteria model.

art_illumina was used to simulate paired-end reads with a read length of 150, a mean fragment size of 270, and a standard deviation of 27.
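
For reference, an invocation with those parameters might look like the sketch below (run here via Python's subprocess; the paths and the fold coverage are assumptions, not the exact command used):

# illustrative art_illumina call: paired-end, length 150, mean fragment 270, sd 27
import subprocess

subprocess.run([
    'art_illumina',
    '-i', 'assembly.fa',   # input assembly (hypothetical path)
    '-p',                  # paired-end reads
    '-l', '150',           # read length
    '-m', '270',           # mean fragment size
    '-s', '27',            # fragment size standard deviation
    '-f', '10',            # fold coverage (an assumption)
    '-o', 'sim_reads',     # output prefix (hypothetical)
], check=True)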

Minimum Java memory: 48G.

GenBank bacteria and viruses classification model (PacBio simulation)

Trained with 560k assemblies from GenBank. Only one assembly was sampled per species. Because the virus data is very small compared to the bacteria data, the model rarely predicts viruses; treat it as a bacteria model.

pbsim was used to simulate PacBio reads with the Continuous Long Read (CLR) profile, a mean length of 3000, and a standard deviation of 1000.

Minimum Java memory: 16G.

Sample data

Remark

What is JLSHVec? Why JLSHVec instead of LSHVec?

JLSHVec is a rewrite of LSHVec in Java.

When we used LSHVec with big datasets (e.g. GenBank, RefDB), we found that it struggles with data at that scale.

The reason is that LSHVec, which inherits from FastText, requires its input as whitespace-separated text and loads all of that text into memory. This is acceptable for natural languages, where the data size is at most tens of GB.

However, in LSHVec k-mers are used instead of words. Suppose we want to train a k-mer embedding of simulated Illumina reads over the RefDB bacteria assemblies (about 500 GB of genomic data). The number of k-mers is about D*n, where D is the assembly data size and n is the coverage. In our case, with n=10 and k=23, that is about 5T k-mers, which as text requires about 125 TB of disk space and tens of TB of memory, unrealistic even for most HPC systems.
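
The arithmetic behind that estimate, as a small worked example (the ~25 bytes per k-mer assumes the k-mer text plus separators):

# back-of-envelope estimate for the FastText-style text input size
D = 500e9            # assembly data size: ~500 GB of bases
n = 10               # simulated read coverage
k = 23               # k-mer length

num_kmers = D * n                # ~5e12, i.e. about 5T k-mers
bytes_per_kmer = k + 2           # ~25 bytes: k characters plus separators
disk_tb = num_kmers * bytes_per_kmer / 1e12

print(f'{num_kmers:.1e} k-mers, ~{disk_tb:.0f} TB as whitespace-separated text')
# -> 5.0e+12 k-mers, ~125 TB as whitespace-separated text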

How were JLSHVec pre-trained models trained ?

First we prepared a RocksDB database for the reference sequences (e.g. all RefDB bacteria assemblies).

Then several nodes train the model: one node (the train node) trains the vectors, while the others (hash nodes) generate and hash k-mers. The nodes exchange protocol-buffer messages through a Redis server.

A hash node randomly reads reference sequences from RocksDB, simulates reads (e.g. Illumina, PacBio, or Gold Standard simulations), generates and hashes k-mers, and feeds the hashed-k-mer sequences to a Redis queue.

The train node reads from the Redis queue and performs the embedding or classification training. Our training code supports hierarchical softmax using the NCBI taxonomy tree, which is essential for the mixed multi-label (an instance can have a label for each rank) and multi-class (an instance can have only one label per rank) classification model.
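
A rough sketch of that producer/consumer flow with a Redis list as the queue (redis-py, the queue name, and the toy hash are illustrative assumptions, not the actual JLSHVec protocol):

# illustrative hash-node / train-node exchange over Redis
import redis

r = redis.Redis(host='localhost', port=6379)

def enqueue_read(read, k=23, buckets=1 << 25):
    # hash node: extract k-mers, hash each one, enqueue the hashed sequence
    kmers = [read[i:i + k] for i in range(len(read) - k + 1)]
    hashed = [str(hash(km) % buckets) for km in kmers]  # stand-in for the LSH
    r.rpush('hashed_reads', ' '.join(hashed))

def dequeue_read():
    # train node: block until a hashed-k-mer sequence arrives
    _, payload = r.blpop('hashed_reads')
    return payload.decode().split()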

Citation

Please cite:

A Vector Representation of DNA Sequences Using Locality Sensitive Hashing

License

License: GPL v3
