
too-many-cells (à la Python)


It's Scanpy friendly!

A Python package for spectral clustering based on the powerful suite of tools named too-many-cells. In essence, you can use toomanycells to partition a data set, given as a matrix of integers or floating point numbers, into clusters whose members are similar to each other. The rows represent observations and the columns the features.

However, sometimes just knowing the clusters is not sufficient. Often we are interested in the relationships between the clusters, and this tool can help you visualize the clusters as leaf nodes of a tree, where the branches illustrate the trajectories that have to be followed to reach a particular cluster. Initially, this tool partitions your data set into two subsets (each subset is a node of the tree), trying to maximize the differences between the two. Subsequently, it reapplies the same criterion to each subset (node) and continues bifurcating until the modularity of the node about to be partitioned falls below a given threshold ($10^{-9}$ by default), implying that the elements belonging to the current node are fairly homogeneous and suggesting that further partitioning is not warranted. Thus, when the process finishes, you end up with a tree structure of your data set, where the leaves represent the clusters.

As mentioned earlier, you can use this tool with any kind of data. However, a common application is to classify cells, and therefore you can provide an AnnData object. You can read about this application in this Nature Methods paper.
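To make the procedure concrete, here is a minimal sketch of the recursive bisection idea in plain Python. This is only an illustration of the algorithm described above, not the package's actual implementation, and the bisect helper is hypothetical.

    def build_tree(cells, bisect, threshold=1e-9):
        # bisect(cells) -> (left, right, modularity) is a hypothetical
        # helper that splits a node into two subsets, maximizing the
        # differences between them.
        left, right, modularity = bisect(cells)
        if modularity < threshold:
            # The node is fairly homogeneous: stop and declare a leaf.
            return {"leaf": cells}
        # Otherwise, keep bifurcating each child node.
        return {
            "modularity": modularity,
            "children": [
                build_tree(left, bisect, threshold),
                build_tree(right, bisect, threshold),
            ],
        }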

Dependencies

Make sure you have installed the graph visualization library Graphviz. For example, if you want to use conda, then do the following.

conda install anaconda::graphviz

If you are using a Debian-based Linux distribution, you can do

sudo apt install libgraphviz-dev

And if you are using home-manager with Nix, do not forget to include

pkgs.graphviz

in your home.packages

Installation

Just type

pip install toomanycells

in your home or custom environment. If you want to upgrade to the latest version, then use the following flag.

pip install toomanycells -U

Run the previous command again whenever you want to make sure you have the latest version.

Quick run

If you want to see a concrete example of how to use toomanycells, check out the Jupyter notebook demo.

Usage

  1. First import the module as follows

    from toomanycells import TooManyCells as tmc
    
  2. If you already have an AnnData object A loaded into memory, then you can create a TooManyCells object with

    tmc_obj = tmc(A)
    

    In this case the output folder will be called tmc_outputs. However, if you want the output folder to be a particular directory, then you can specify the path as follows

    tmc_obj = tmc(A, output_directory)
    
  3. If instead of providing an AnnData object you want to provide the directory where your data is located, you can use the syntax

    tmc_obj = tmc(input_directory, output_directory)
    
  4. If your input directory has a file in the matrix market format, then you have to specify this information by using the following flag

    tmc_obj = tmc(input_directory,
                  output_directory,
                  input_is_matrix_market=True)
    

In this scenario, the input_directory must contain a .mtx file, a barcodes.tsv file (the observations), and a genes.tsv file (the features).

  5. Once your data has been loaded successfully, you can start the clustering process with the following command
    tmc_obj.run_spectral_clustering()
    

On my desktop computer, processing a data set with ~90K cells (observations) and ~30K genes (features) took a little less than 6 minutes (1809 iterations). For a larger data set like the Tabula Sapiens, with 483,152 cells and 58,870 genes (14.51 GB in zip format), the total time was about 50 minutes on the same computer.

[Figure: progress bar example]

  6. At the end of the clustering process the .obs data frame of the AnnData object should have two columns named ['sp_cluster', 'sp_path'] which contain the cluster labels and the path from the root node to the leaf node, respectively.
    tmc_obj.A.obs[['sp_cluster', 'sp_path']]
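Since .obs is a regular pandas data frame, standard pandas operations apply. For instance, to count how many cells fall in each cluster:

    # Number of cells per cluster.
    tmc_obj.A.obs["sp_cluster"].value_counts()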
    
  7. To generate the outputs, just call the function
    tmc_obj.store_outputs()
    

This call will generate a graphical representation of the tree (output_graph.svg), a DOT file containing the nodes and edges of the graph (graph.dot), one CSV file that describes the cluster information (clusters.csv), another CSV file containing the information of each node (node_info.csv), and two JSON files: one relates cells to clusters (cluster_list.json), and the other holds the full tree structure (cluster_tree.json). You will need this last file for too-many-cells-interactive (TMCI).

If you have trouble installing pygraphviz, you can still store the main outputs using the call

tmc_obj.store_outputs(store_tree_svg=False)

Note that in this case you will not be able to generate the output_graph.svg and graph.dot files. However, the cluster_tree.json file, which is the most important file, will still be generated, and you can continue working with this tutorial.
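For a quick sanity check, the JSON outputs can be inspected with the Python standard library. A small sketch, assuming the default tmc_outputs folder:

    import json
    from pathlib import Path

    # Load the full tree structure produced by store_outputs().
    with open(Path("tmc_outputs") / "cluster_tree.json") as f:
        tree = json.load(f)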

  8. If you already have a DOT file you can load it with
    tmc_obj.load_graph(dot_fname="some_path")
    

or plot it with

tmc_obj.plot_radial_tree_from_dot_file(
   dot_fname="some_path")
  9. If you want to visualize your results in a dynamic platform, I strongly recommend the tool too-many-cells-interactive. To use it, first make sure that you have Docker and Docker Compose. One simple way of getting both is to install Docker Desktop. If you use Nix, simply add the packages pkgs.docker and pkgs.docker-compose to your configuration or home.nix file and run
home-manager switch
  10. If you installed Docker Desktop, you probably don't need to follow this step. However, under some distributions the following two commands have proven to be essential. Use
sudo dockerd

to start the daemon service for docker containers and

sudo chmod 666 /var/run/docker.sock

to let Docker read and write to that location.

  11. Now clone the repository
git clone https://github.com/schwartzlab-methods/too-many-cells-interactive.git

and store the path to the too-many-cells-interactive folder in a variable, for example path_to_tmc_interactive. Also, you will need to identify a column in your AnnData.obs data frame that has the labels for the cells. Let's assume that the column name is stored in the variable cell_annotations. Lastly, you can provide a port number to host your visualization, for instance port_id=1234. Then, you can call the function

tmc_obj.visualize_with_tmc_interactive(
         path_to_tmc_interactive,
         cell_annotations,
         port_id)

The following visualization corresponds to the data set with ~90K cells (observations).

[Figure: visualization example]

And this is the visualization for the Tabula Sapiens data set with ~480K cells.

[Figure: visualization example]

What is the time complexity of toomanycells (à la Python)?

To answer that question, we created the following benchmark. We tested the performance of toomanycells on 20 data sets having the following numbers of cells: 6360, 10479, 12751, 16363, 23973, 32735, 35442, 40784, 48410, 53046, 57621, 62941, 68885, 76019, 81449, 87833, 94543, 101234, 107809, 483152. The range goes from thousands of cells to almost half a million.

[Figure: benchmark results (time versus number of cells)]

As you can see, the program behaves linearly with respect to the size of the input. In other words, the observations fit the model $T = k\cdot N^p$, where $T$ is the time to process the data set, $N$ is the number of cells, $k$ is a constant, and $p$ is the exponent. In our case $p\approx 1$. Nice!
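If you want to estimate the exponent $p$ from your own timings, a least-squares fit in log-log space is enough. The timings below are made up for illustration only:

    import numpy as np

    # T = k * N^p  =>  log T = p * log N + log k
    n_cells = np.array([6360, 23973, 94543, 483152])
    seconds = np.array([25.0, 95.0, 340.0, 3000.0])  # hypothetical timings
    p, log_k = np.polyfit(np.log(n_cells), np.log(seconds), 1)
    print(f"estimated exponent p = {p:.2f}")  # ~1 means linear scaling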

Cell annotation

CellTypist

When visualizing the tree, we are often interested in observing how different cell types distribute across the branches of the tree. In case your AnnData object lacks a cell annotation column in the obs data frame, or if you already have one but want to try a different method, we have created a wrapper function that calls CellTypist. Simply write

   tmc_obj.annotate_with_celltypist(
           column_label_for_cell_annotations,
   )

and the obs data frame of your AnnData object will gain a column named after the string stored in the column_label_for_cell_annotations variable. By default we use the Immune_All_High CellTypist model, which contains 32 cell types. If you want to use another model, simply write

   tmc_obj.annotate_with_celltypist(
           column_label_for_cell_annotations,
           celltypist_model,
   )

where celltypist_model specifies which CellTypist model to use. For example, if this variable is equal to Immune_All_Low, then the number of possible cell types increases to 98. For a complete list of all the models, see the following list. Lastly, if you want to exploit the fact that transcriptionally similar cells are likely to cluster together, you can assign the cell type labels on a cluster-by-cluster basis rather than a cell-by-cell basis. To activate this feature, use the call

   tmc_obj.annotate_with_celltypist(
           column_label_for_cell_annotations,
           celltypist_model,
           use_majority_voting = True,
   )

Median absolute deviation classification

Work in progress...

Heterogeneity quantification

Imagine you want to compare the heterogeneity of cell populations belonging to different branches of the toomanycells tree. By branch we mean all the nodes that derive from a particular node, including the node that defines the branch in question. For example, suppose we want to compare branch 1183 against branch 2.

[Figure: heterogeneity example]

One way to do this is by comparing the modularity distribution and the cumulative modularity for all the nodes that belong to each branch. We can do that using the following calls. First, for branch 1183

   tmc_obj.quantify_heterogeneity(
      list_of_branches=[1183],
      use_log_y=True,
      tag="branch_A",
      show_column_totals=True,
      color="blue",
      file_format="svg")


And then for branch 2

   tmc_obj.quantify_heterogeneity(
      list_of_branches=[2],
      use_log_y=True,
      tag="branch_B",
      show_column_totals=True,
      color="red",
      file_format="svg")


Note that you can include multiple nodes in the list of branches. From these figures we observe that the higher cumulative modularity of branch 1183 with respect to branch 2 suggests that the former has a higher degree of heterogeneity. However, relying on modularity alone could lead to a misleading interpretation. For example, consider the following scenario, where the numbers within the nodes indicate the modularity at that node.

[Figure: two modularity scenarios]

In this case, scenario A has a larger cumulative modularity, but we note that scenario B is more heterogeneous. For that reason, we recommend also computing additional diversity measures. First, we need some notation. For all the branches passed in the list_of_branches argument of quantify_heterogeneity, let $C$ be the set of leaf nodes that belong to those branches. We consider each leaf node as a separate species, and we call the whole collection of cells an ecosystem. For $c_i \in C$, let $\#(c_i)$ be the number of cells in $c_i$ and $\#(C) = \sum_i \#(c_i)$ the total number of cells contained in the given branches. If we let $p_i = \frac{\#(c_i)}{\#(C)}$, then we define the following diversity measure

$$D(q) = \left(\sum_{i=1}^{n} p_i^q \right)^{\frac{1}{1-q}}. $$

In general, the larger the value of $D(q)$, the more diverse the collection of species. Note that $D(q=0)$ equals the total number of species; we call this quantity the richness of the ecosystem. When $q=1$, $D$ is the exponential of the Shannon entropy

$$H = -\sum_{i=1}^{n}p_i \ln(p_i).$$

When $q=2$, $D$ is the inverse of Simpson's index

$$S = \sum_{i=1}^{n} (p_i)^2,$$

which represents the probability that two cells picked at random belong to the same species. Hence, the higher Simpson's index, the less diverse the ecosystem. Lastly, when $q=\infty$, $D$ is the inverse of the largest proportion, $\max_i p_i$.
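These definitions are easy to compute directly. Here is a small NumPy sketch of $D(q)$ given the cell counts of the leaf nodes; it merely illustrates the formulas above and is not the package's internal code:

    import numpy as np

    def diversity(counts, q):
        # counts[i] = number of cells in leaf node c_i (assumed positive).
        p = np.asarray(counts, dtype=float)
        p = p / p.sum()
        if q == 0:
            return float(len(p))                          # richness
        if q == 1:
            return float(np.exp(-np.sum(p * np.log(p))))  # exp(Shannon)
        if np.isinf(q):
            return float(1.0 / p.max())                   # 1 / max proportion
        return float(np.sum(p**q) ** (1.0 / (1.0 - q)))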

In the above example, for branch 1183 we obtain

               value
Richness  460.000000
Shannon     5.887544
Simpson     0.003361
MaxProp     0.010369
q = 0     460.000000
q = 1     360.518784
q = 2     297.562094
q = inf    96.442786

and for branch 2 we obtain

               value
Richness  280.000000
Shannon     5.500414
Simpson     0.004519
MaxProp     0.010750
q = 0     280.000000
q = 1     244.793371
q = 2     221.270778
q = inf    93.021531

After comparing the results using two different measures, namely, modularity and diversity, we conclude that branch 1183 is more heterogeneous than branch 2.

Similarity functions

So far we have assumed that the similarity matrix $S$ is computed by calculating the cosine of the angle between each pair of observations. Concretely, if the matrix of observations is $B$ ($m\times n$), the $i$-th row of $B$ is $x = B(i,:)$, and the $j$-th row of $B$ is $y=B(j,:)$, then the similarity between $x$ and $y$ is

$$S(x,y)=\frac{x\cdot y}{||x||_2\cdot ||y||_2}.$$
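As a standalone illustration of this formula (not the package's internals), the full similarity matrix of a small toy matrix can be computed in a couple of NumPy lines:

    import numpy as np

    B = np.random.default_rng(0).normal(size=(5, 3))  # toy observations
    U = B / np.linalg.norm(B, axis=1, keepdims=True)  # unit-norm rows
    S = U @ U.T  # S[i, j] = cosine of the angle between rows i and j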

However, this is not the only way to compute a similarity matrix. We will list all the available similarity functions and how to call them.

Cosine (sparse)

If your matrix is sparse, i.e., the number of nonzero entries is proportional to the number of samples ($m$), and you want to use the cosine similarity, then use the following instruction.

tmc_obj.run_spectral_clustering(
   similarity_function="cosine_sparse")

By default we use the Halko-Martinsson-Tropp algorithm to compute the truncated singular value decomposition. However, the ARPACK library (written in Fortran) is also available.

tmc_obj.run_spectral_clustering(
   similarity_function="cosine_sparse",
   svd_algorithm="arpack")

If $B$ has negative entries, it is possible to get negative entries for $S$. This could in turn produce negative row sums for $S$. If that is the case, the convergence to a solution could be extremely slow. However, if you use the non-sparse version of this function, we provide a reasonable solution to this problem.

Cosine

If your matrix is dense, and you want to use the cosine similarity, then use the following instruction.

tmc_obj.run_spectral_clustering(
   similarity_function="cosine")

The same comment about negative entries applies here. However, there is a simple solution. While shifting the matrix of observations can drastically change the interpretation of the data, because each column lives in a different (gene) space, shifting the similarity matrix is a reasonable way to remove negative entries. The reason is that similarities live in an ordered space, and shifting by a constant is an order-preserving transformation: if the similarity between $x$ and $y$ is less than the similarity between $u$ and $w$, i.e., $S(x,y) < S(u,w)$, then $S(x,y)+s < S(u,w)+s$ for any constant $s$. The raw data have no natural order, but similarities do. To shift the (dense) similarity matrix by $s=1$, use the following instruction.

tmc_obj.run_spectral_clustering(
   similarity_function="cosine",
   shift_similarity_matrix=1)

Note that since the range of the cosine similarity is $[-1,1]$, the shifted range for $s=1$ becomes $[0,2]$. The shift transformation can also be applied to any of the subsequent similarity matrices.

Laplacian

The similarity matrix is

$$S(x,y)=\exp(-\gamma\cdot ||x-y||_1)$$

This is an example:

tmc_obj.run_spectral_clustering(
   similarity_function="laplacian",
   similarity_gamma=0.01)

This function is very sensitive to $\gamma$; an inadequate choice can produce poor clusters or prevent convergence. If you obtain unsatisfactory results, try a smaller value of $\gamma$.

Gaussian

The similarity matrix is

$$S(x,y)=\exp(-\gamma\cdot ||x-y||_2^2)$$

This is an example:

tmc_obj.run_spectral_clustering(
   similarity_function="gaussian",
   similarity_gamma=0.001)

As before, this function is very sensitive to $\gamma$. Note that the norm is squared, so large differences between $x$ and $y$ are mapped to very small similarities.
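For intuition, the Gaussian similarity matrix of a dense toy matrix can be built with SciPy; this is an illustration only, not what the package does internally:

    import numpy as np
    from scipy.spatial.distance import cdist

    B = np.random.default_rng(1).normal(size=(6, 4))  # toy observations
    gamma = 0.001
    S = np.exp(-gamma * cdist(B, B, metric="sqeuclidean"))  # ||x-y||_2^2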

Divide by the sum

The similarity matrix is

$$S(x,y)=1-\frac{||x-y||_p}{||x||_p+||y||_p},$$

where $p =1$ or $p=2$. The rows of the matrix are normalized (unit norm) before computing the similarity. This is an example:

tmc_obj.run_spectral_clustering(
   similarity_function="div_by_sum")

Normalization

TF-IDF

If you want to use the inverse document frequency (IDF) normalization, then use

tmc_obj.run_spectral_clustering(
   similarity_function="some_sim_function",
   use_tf_idf=True)

If you also want to normalize the frequencies to unit norm with the $2$-norm, then use

tmc_obj.run_spectral_clustering(
   similarity_function="some_sim_function",
   use_tf_idf=True,
   tf_idf_norm="l2")

If instead you want to use the $1$-norm, then replace "l2" with "l1".
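As a rough analogy of what TF-IDF does to a count matrix, scikit-learn's transformer can be applied to a toy example; this is only an illustration, as toomanycells has its own implementation:

    import numpy as np
    from sklearn.feature_extraction.text import TfidfTransformer

    counts = np.array([[3, 0, 1],
                       [2, 1, 0],
                       [0, 4, 1]])  # toy observation-by-feature counts
    tfidf = TfidfTransformer(norm="l2").fit_transform(counts)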

Simple normalization

Sometimes normalizing your matrix of observations can improve the performance of some routines. To normalize the rows, use the following instruction.

tmc_obj.run_spectral_clustering(
   similarity_function="some_sim_function",
   normalize_rows=True)

By default, the $2$-norm is used. To use any other $p$-norm, use

tmc_obj.run_spectral_clustering(
   similarity_function="some_sim_function",
   normalize_rows=True,
   similarity_norm=p)

Gene expression along a path

Introduction

Imagine you have the following tree structure after running toomanycells.

[Figure: tree path]

Further, assume that the colors denote different classes satisfying specific properties. We want to know how the expression of two genes, for instance Gene S and Gene T, fluctuates as we move from node $X$ (lower left side of the tree), which is rich in Class B, to node $Y$ (upper left side of the tree), which is rich in Class C. To compute such quantities, we first need to define the distance between nodes.

Distance between nodes

Assume we have a (parent) node $P$ with two children nodes $C_1$ and $C_2$. Recall that the modularity of $P$ indicates the strength of separation between the cell populations of $C_1$ and $C_2$. A large modularity indicates strong connections, i.e., high similarity, within each cluster $C_i$, and weak connections, i.e., low similarity, between $C_1$ and $C_2$. If the modularity at $P$ is $Q(P)$, we define the distance between $C_1$ and $C_2$ as

$$d(C_1,C_2) = Q(P).$$

We also define $d(C_i,P) = Q(P)/2$. Note that with those definitions we have that

$$d(C_1,C_2)=d(C_1,P) +d(P,C_2)=Q(P)/2+Q(P)/2=Q(P),$$

as expected. Now that we know how to calculate the distance between a node and its parent or child, let $X$ and $Y$ be two distinct nodes, and let ${(N_{i})}_{i=0}^{n}$ be the sequence of nodes that describes the unique path between them satisfying:

  1. $N_0 = X$,
  2. $N_n=Y$,
  3. $N_i$ is a direct relative of $N_{i+1}$, i.e., $N_i$ is either a child or parent of $N_{i+1}$,
  4. $N_i \neq N_j$ for $i\neq j$.

Then, the distance between $X$ and $Y$ is given by

$$d(X,Y) = \sum_{i=0}^{n-1} d(N_{i},N_{i+1}).$$
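Assuming we already have the node sequence and, for every node, its parent and modularity (the dictionaries below are hypothetical), the distance is a one-pass sum:

    def path_distance(path, parent, modularity):
        # Each consecutive pair (N_i, N_{i+1}) is a parent/child pair,
        # and each step contributes Q(parent)/2 to the distance.
        total = 0.0
        for a, b in zip(path, path[1:]):
            p = a if parent.get(b) == a else b  # identify the parent
            total += modularity[p] / 2.0
        return total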

Gene expression

We define the expression of Gene G at a node $N$, $Exp(G,N)$, as the mean expression of Gene G over all the cells that belong to node $N$. Hence, given the sequence of nodes

$$(N_{i})_{i=0}^{n}$$

we can compute the corresponding gene expression sequence

$$(E_{i})_{i=0}^{n}, \quad E_i = Exp(G,N_i).$$
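With an AnnData object, each $E_i$ is just a mean over the node's cells. A sketch, assuming cells_in_node holds the observation names of node $N_i$ and "GeneG" is a hypothetical gene present in var_names:

    # Mean expression of one gene over the cells of a node; .mean()
    # works for both dense and sparse .X.
    E_i = float(tmc_obj.A[cells_in_node, "GeneG"].X.mean())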

Cumulative distance

Lastly, since we are interested in plotting the gene expression as a function of the distance with respect to the node $X$, we define the sequence of real numbers

$$(D_{i})_{i=0}^{n}, \quad D_{i} = d(X,N_{i}).$$

Summary

  1. The sequence of nodes between $X$ and $Y$ $${(N_{i})}_{i=0}^{n}$$
  2. The sequence of gene expression levels between $X$ and $Y$ $${(E_{i})}_{i=0}^{n}$$
  3. And the sequence of distances with respect to node $X$ $${(D_{i})}_{i=0}^{n}$$

The final plot is simply $E_{i}$ versus $D_{i}$. An example is given in the following figure.

Example

[Figure: gene expression along the path]

Note how the expression of Gene A is high relative to that of Gene B at node $X$, and as we move toward node $Y$ the trend inverts: Gene B becomes highly expressed relative to Gene A.
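To produce such a plot from the two sequences, a minimal matplotlib sketch (with made-up values) could look like this:

    import matplotlib.pyplot as plt

    D = [0.0, 0.1, 0.25, 0.4, 0.6]  # hypothetical cumulative distances
    E = [2.5, 2.1, 1.4, 0.9, 0.3]   # hypothetical mean expression values
    plt.plot(D, E, marker="o")
    plt.xlabel("distance from node X")
    plt.ylabel("mean gene expression")
    plt.show()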

Acknowledgments

I would like to thank the Schwartz lab (GW) for letting me explore different directions and also Christie Lau for providing multiple test cases to improve this implementation.

