
Dataset-creator

Dataset creator for phylogenetic software


Dataset-Creator - an easy way to create phylogenetic datasets in many formats

Documentation: dataset-creator.readthedocs.org

Takes SeqRecordExpanded objects and creates datasets for phylogenetic software such as MrBayes, TNT, BEAST, RAxML, MEGA, etc.

Features

  • Creates datasets in the following formats: FASTA, GenBankFASTA, NEXUS, TNT, MEGA and Phylip.

  • Can generate datasets of DNA and aminoacid sequences.

  • Can generate datasets of degenerated sequences.

  • It can partition datasets by codon positions or by gene.

Quick start

First:

pip install dataset_creator

Then build the list of SeqRecordExpanded objects, sorted by gene_code first and then by voucher_code:

>>> from seqrecord_expanded import SeqRecord
>>> from dataset_creator import Dataset
>>>
>>> # `table` is the Translation Table code based on NCBI
>>> seq_record1 = SeqRecord('ACTACCTA', reading_frame=2, gene_code='RpS5',
...                         table=1, voucher_code='CP100-10',
...                         taxonomy={'genus': 'Aus', 'species': 'bus'})
>>>
>>> seq_record2 = SeqRecord('ACTACCTA', reading_frame=2, gene_code='RpS5',
...                         table=1, voucher_code='CP100-10',
...                         taxonomy={'genus': 'Aus', 'species': 'bus'})
>>>
>>> seq_record3 = SeqRecord('ACTACCTA', reading_frame=2, gene_code='wingless',
...                         table=1, voucher_code='CP100-10',
...                         taxonomy={'genus': 'Aus', 'species': 'bus'})
>>>
>>> seq_record4 = SeqRecord('ACTACCTA', reading_frame=2, gene_code='wingless',
...                         table=1, voucher_code='CP100-10',
...                         taxonomy={'genus': 'Aus', 'species': 'bus'})
>>>
>>> seq_records = [
...    seq_record1, seq_record2, seq_record3, seq_record4,
... ]
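The required ordering (gene_code first, then voucher_code) can be produced with a standard sort key. This is a plain-Python sketch: the Record dataclass below is only a stand-in for a real SeqRecordExpanded object.

```python
from dataclasses import dataclass
from operator import attrgetter

# Minimal stand-in for a SeqRecordExpanded object (illustration only;
# the real class comes from the seqrecord-expanded package).
@dataclass
class Record:
    gene_code: str
    voucher_code: str

records = [
    Record('wingless', 'CP100-11'),
    Record('RpS5', 'CP100-10'),
    Record('wingless', 'CP100-10'),
]

# Sort by gene_code first, then by voucher_code, as Dataset expects.
records.sort(key=attrgetter('gene_code', 'voucher_code'))

print([(r.gene_code, r.voucher_code) for r in records])
```

The same `attrgetter('gene_code', 'voucher_code')` key works directly on real SeqRecordExpanded objects, since both attributes exist on them.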

>>> # codon positions can be 1st, 2nd, 3rd, 1st-2nd, ALL (default)
>>> dataset = Dataset(seq_records, format='TNT', partitioning='by codon position',
...                   codon_positions='ALL')

>>> dataset = Dataset(seq_records, format='PHYLIP', partitioning='1st-2nd, 3rd',
...                   codon_positions='ALL')

>>> dataset = Dataset(seq_records, format='NEXUS', partitioning='by gene',
...                   codon_positions='1st')

>>> dataset = Dataset(seq_records, format='NEXUS', partitioning='by gene',
...                   codon_positions='ALL', aminoacids=True)

>>> # Produce a dataset of degenerated sequences using the 'S' method:
>>> dataset = Dataset(seq_records, format='NEXUS', partitioning='by gene',
...                   codon_positions='ALL', degenerate='S')

>>> print(dataset.dataset_str)
#NEXUS
blah blah ...
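Since dataset_str is a plain string, saving the result is just a matter of writing it to a file. A minimal sketch (the NEXUS text below stands in for a real dataset.dataset_str value; the output path is arbitrary):

```python
import os
import tempfile

# Stand-in for dataset.dataset_str, which is an ordinary Python string.
nexus_text = "#NEXUS\nBEGIN DATA;\n...\nEND;\n"

# Write the dataset to disk so it can be fed to MrBayes, RAxML, etc.
out_path = os.path.join(tempfile.gettempdir(), "dataset.nex")
with open(out_path, "w") as handle:
    handle.write(nexus_text)
```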

Further documentation can be found at dataset-creator.readthedocs.org

Development

To run all the tests, run:

tox

Changelog

0.4.0 (2020-06-28)

  • Dropped support for Python 2.

  • Added support for long taxon names in generated dataset files.

0.3.20 (2018-01-07)

  • Updated seqrecord-expanded.

0.3.19 (2018-01-06)

  • Fixed the version of seqrecord-expanded in setup.py.

0.3.18 (2018-01-06)

  • Support lineages for GenBankFASTA files.

0.3.17 (2018-01-06)

  • Avoid raising an exception when translating sequences containing dashes.

0.3.16 (2017-10-01)

  • Fixed creating dataset with 1st, 2nd or 3rd codon positions.

0.3.14 (2016-09-11)

  • Upgraded seqrecord-expanded.

0.3.13 (2016-08-27)

  • Fixed bug that did not replace all whitespace with underscores in taxon names when building datasets. Because of taxon names containing whitespace, the NEXUS interpreter assumed that part of the name was actually part of the sequence, rendering the sequence invalid.

  • Added some dependencies to requirements.

0.3.11 (2016-06-25)

  • Upgraded seqrecord-expanded requirement.

0.3.10 (2015-12-01)

  • Fixed bug that produced FASTA sequences with underscores. Now all voucher codes will have their dashes replaced by underscores.

0.3.9 (2015-11-06)

  • Create datasets using the GenBankFASTA format. This format has the following extra info in the description of sequences: >Aus_aus_CP100-10 [org=Aus aus] [Specimen-voucher=CP100-10] [note=ArgKin gene, partial cds.] [Lineage=]

0.3.8 (2015-10-30)

  • Fixed making dataset as aminoacid seqs for MEGA format.

  • Fixed making dataset as degenerated seqs for MEGA format.

  • Fixed making dataset as degenerated seqs for TNT format.

  • Fixed making dataset as aa seqs with specified outgroup for TNT format.

  • Raise ValueError when asked to degenerate seqs that will go to partitioning based on codon positions.

  • Dataset creator returns warnings if translated sequences have stop codons ‘*’.

  • Cannot generate MEGA datasets with partitioning.

0.3.7 (2015-10-30)

  • Fixed 2nd, 3rd codon positions bug that returned empty FASTA datasets.

0.3.6 (2015-10-30)

  • Fixed 3rd codon positions bug that returned FASTA datasets with 3rd codon positions even if they were not needed.

0.3.5 (2015-10-29)

  • If the user provides an outgroup, TNT datasets will place its sequences first in the dataset blocks.

0.3.4 (2015-10-02)

  • Fixed bug that did not show DATATYPE=PROTEIN in Nexus files when aminoacid sequences were requested by user.

0.3.3 (2015-10-02)

  • Fixed bug that raised an exception when SeqRecordExpanded objects did not have data in the taxonomy field.

0.3.2 (2015-10-01)

  • Fixed bug that raised an exception when user wanted partitioned dataset as 1st-2nd and 3rd codon positions of only one codon.

0.3.1 (2015-10-01)

  • Fixed bug that raised an exception when user wanted partitioned dataset by codon positions of only one codon.

0.3.0 (2015-10-01)

  • Accepts voucher code as string that will be used to generate the outgroup string needed for NEXUS and TNT files.

0.2.0 (2015-09-30)

  • Creates datasets as degenerated sequences using the method by Zwick et al.

0.1.1 (2015-09-30)

  • Issues errors when reading frames are not specified but are strictly necessary to build the dataset (i.e. when datasets need to be partitioned by codon positions).

  • Added documentation using sphinx-doc

  • Creates datasets as aminoacid sequences.

0.1.0 (2015-09-23)

  • Creates Nexus, Tnt, Fasta, Phylip and Mega dataset formats.

0.0.1 (2015-06-10)

  • First release on PyPI.
