
Basic Informatics and Gene Statistics from Unnormalized Reads, a feature selection tool for scRNA-seq

Project description

BigSur

BigSur is a package for principled, robust single-cell transcriptomics normalization, feature selection, and correlation calculation. This README gives a quick summary of what BigSur can be used for, along with small code examples to get started.

What is BigSur?

Basic Informatics and Gene Statistics from Unnormalized Reads (BigSur) is an analytical model of single-cell transcriptomics (scRNA-seq) data. This model can be used to select features and calculate correlation, taking into account the biological and technical noise inherent in scRNA-seq.

  • The importance of feature selection, along with results showing that BigSur performs as well as or better than Seurat and scanpy feature selection, is shown in Dollinger et al. 2025.
  • The pitfalls of using Pearson's Correlation Coefficients (PCCs) to calculate correlations in scRNA-seq data and the corrections made to PCCs to account for the noise and sparsity in these data are shown in Silkwood et al. 2023.

Updates

10/15/25

The GitHub repository now includes the code to calculate correlations. See below for the quickstart. The tutorial for the correlations will be uploaded soon. The pip package does not currently include the correlations code.

Installation

The easiest way to install bigsur is via pip:

conda create -n bigsur_env python pip
conda activate bigsur_env
pip install bigsur

Alternatively, you can clone the GitHub repo. We've included an environment file for conda environment installation; the only packages we require that aren't installed with scanpy are mpmath and numexpr. For example:

In terminal:

cd bigsur_dir # directory to clone into

git clone https://github.com/landerlabcode/BigSur.git

conda env create -f environment.yml -n bigsur

A note about the virtual environment

This environment contains all packages required to reproduce the results of the paper. If you want a lightweight conda environment (or if the environment file is causing issues), you can create a sufficient conda environment as follows:

In terminal:

conda create -n bigsur -c conda-forge scanpy mpmath numexpr ipykernel python-igraph leidenalg

Usage

Feature selection

Usage for feature selection is detailed in the example notebook.

TL;DR:

import sys

sys.path.append(bigsur_dir) # directory where git repo was cloned, not necessary if BigSur was installed using pip

from BigSur.feature_selection import mcfano_feature_selection as mcfano

Replace sc.pp.highly_variable_genes(adata) in your pipeline with mcfano(adata, layer='counts'), where the UMIs are in adata.layers['counts'].

And that's it! You can read more about how to use BigSur for feature selection, and in particular how to optimize cutoffs for a given dataset, in the example notebook.
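To build intuition for what mcfano ranks, note that BigSur's statistic builds on the classic Fano factor (variance divided by mean), which is approximately 1 for pure Poisson counting noise, so genes with larger values show excess cell-to-cell variability. The sketch below computes plain, uncorrected Fano factors on a toy count matrix; it is only an illustration of the underlying idea, not BigSur's modified corrected statistic, which additionally accounts for technical noise and sparsity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy count matrix: 100 cells x 5 genes. Genes 3 and 4 get extra
# cell-to-cell variability to mimic biologically variable genes.
counts = rng.poisson(5.0, size=(100, 5)).astype(float)
counts[:, 3] *= rng.choice([0.2, 3.0], size=100)
counts[:, 4] *= rng.choice([0.1, 4.0], size=100)

# Plain Fano factor per gene: variance / mean. For pure Poisson noise
# this is ~1; larger values suggest excess (biological) variability.
mean = counts.mean(axis=0)
fano = counts.var(axis=0) / mean

# Rank genes by Fano factor, most variable first.
ranked = np.argsort(fano)[::-1]
print(fano.round(2))
print(ranked)
```

On this toy data, the two genes with injected variability rise to the top of the ranking while the purely Poisson genes sit near a Fano factor of 1.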

Correlations

To calculate correlations on data contained within an adata, where the UMIs are stored in adata.layers['counts'], run the following commands:

import sys

sys.path.append(bigsur_dir) # directory where git repo was cloned

from BigSur.correlations import calculate_correlations

calculate_correlations(adata, layer='counts', cv=None, verbose=2, write_out=write_out_folder, previously_run=False, store_intermediate_results=True)

By default, the function stores the mcPCCs and the BH-corrected $p$-values in adata.varm. Both these matrices are lower-triangular and sparse. Given the potential size of these files, we recommend saving the mcPCCs and BH-corrected $p$-values to disk, by specifying a folder to write to, using the write_out parameter. See the docstring for more details.
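Because the stored matrices are lower-triangular, you may want to mirror them into a full symmetric gene-by-gene matrix before downstream use. The sketch below shows that step on a standalone scipy sparse matrix standing in for the stored mcPCCs (the actual adata.varm key names are not shown here; check the docstring for those):

```python
import numpy as np
from scipy import sparse

# Illustrative lower-triangular sparse matrix standing in for the
# mcPCC matrix (gene x gene, correlations below the diagonal only).
lower = sparse.csr_matrix(np.tril(np.array([
    [0.0,  0.0, 0.0],
    [0.8,  0.0, 0.0],
    [0.1, -0.3, 0.0],
])))

# Mirror the lower triangle to obtain the full symmetric matrix.
# (The diagonal is zero here, so no double-counting correction is needed.)
full = lower + lower.T

print(full.toarray())
```

The result stays sparse, which matters at real scale: a 20,000-gene correlation matrix has hundreds of millions of entries.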

Since the correlations $p$-value calculation can take a long time to run and can require a lot of memory, we've included optional parameters to ensure that intermediate results are saved to disk if the application runs out of memory. The store_intermediate_results parameter tells the function whether to store intermediate results, such as cumulants or coefficients, in the write_out folder. The previously_run parameter tells the function to look in that folder for any intermediate results that were previously generated. If it is likely that the application will run out of memory, we suggest storing the intermediate results; however, some of these files are not sparse matrices and therefore can take a lot of storage space.
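The previously_run / store_intermediate_results flags follow a standard compute-or-load checkpoint pattern. The sketch below illustrates that general pattern with a hypothetical helper and file name; it is not BigSur's internal code:

```python
import os
import tempfile
import numpy as np

def compute_or_load(path, compute_fn):
    """Load a cached array if present; otherwise compute and save it.

    This mirrors the idea behind previously_run/store_intermediate_results:
    expensive intermediates survive a crash and are reused on rerun."""
    if os.path.exists(path):
        return np.load(path)
    result = compute_fn()
    np.save(path, result)
    return result

write_out_folder = tempfile.mkdtemp()
cache = os.path.join(write_out_folder, "cumulants.npy")  # hypothetical file name

calls = {"n": 0}
def expensive():
    calls["n"] += 1
    return np.arange(4, dtype=float)

# First call computes and saves; the second loads from disk instead.
cumulants = compute_or_load(cache, expensive)
again = compute_or_load(cache, expensive)
print(calls["n"])  # the expensive computation ran only once
```

The trade-off noted above applies here too: cached intermediates cost disk space in exchange for not recomputing after a failure.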


