Package for pre- and post-processing of images and data for working with ilastik-software
Project description
caactus
caactus (cell analysis and counting tool using ilastik software) is a collection of Python scripts that provide a streamlined workflow for the ilastik software, including data preparation, processing and analysis. It aims to provide biologists with an easy-to-use tool for counting and analyzing cells from a large number of microscopy images.
Introduction
The goal of this script collection is to provide an easy-to-use complement to the Boundary-based Segmentation with Multicut workflow in ilastik.
This workflow allows for the automation of cell counting from messy microscopy images with different (touching) cell types for biological research.
For convenience, commands are provided in grey code boxes with one-click copy & paste.
Installation
Install miniconda, create an environment and install Python and vigra
- Download and install miniconda for your respective operating system according to the instructions.
- Miniconda provides a lightweight package and environment manager. It allows you to create isolated environments so that Python versions and package dependencies required by caactus do not interfere with your system Python or other projects.
- Once installed, create an environment for using `caactus` with the following command from your command line:

```bash
conda create -n caactus-env -c conda-forge python=3.12 vigra
```

Install caactus
- Activate the `caactus-env` environment from the command line:

```bash
conda activate caactus-env
```
- To install `caactus` plus the needed dependencies inside your environment, use:

```bash
pip install caactus
```
- During the steps described below that call the caactus scripts, make sure to have the `caactus-env` activated.
Install ilastik
- Download and install ilastik for your respective operating system.
- Please note, we developed the pipeline on ilastik 1.4.0. For optimal user experience, we recommend installing ilastik 1.4.0. For this, scroll down to "Previous stable versions" on the ilastik download webpage.
Quick Overview of the workflow
- Culture the organism of interest in a 96-well plate.
- Acquire images of cells via microscopy.
- Create project directory
- Rename files with the caactus script `renaming`.
- Convert files to HDF5 format with the caactus script `tif2h5py`.
- Train a pixel classification model in ilastik and later run it in batch mode.
- Train a Boundary-based Segmentation with Multicut model in ilastik and later run it in batch mode.
- Remove the background from the images using `background_processing`.
- Train an object classification model in ilastik and later run it in batch mode.
- Pool all csv tables from the individual images into one global table with `csv_summary`.
- output generated:
- "df_clean.csv"
- Summarize the data with `summary_statistics`.
- output generated:
- a) "df_summary_complete.csv": csv table that also contains the "not usable" category
- b) "df_refined_complete.csv": csv table without the "not usable" category
- c) "counts.csv": dataframe used in PLN modelling
- d) bar graph ("barchart.png")
- Model the count data with `pln_modelling`.
- output generated:
- a) "correlation_circle.png"
- b) "pca_plot.png"
Sample Dataset
- A sample dataset to quickly test the workflow can be accessed via Zenodo.
- To showcase the functionalities, the ilastik steps have been pre-trained. Use caactus in batch mode.
Detailed Description of the Workflow
1. Culturing
- Culture your cells in a flat-bottom plate of your choice, according to the needs of the organism being researched.
2. Image acquisition
- In your microscopy software environment, save the images of interest in `.tif` format.
- From the image metadata, copy the pixel size.
3. Data Preparation
3.1 Create Project Directory
- For portability of the ilastik projects, create the directory with the following structure (please note: the example below already includes examples of resulting files in each sub-directory).
- This allows you to copy an already trained workflow and reuse it with new datasets.
```
project_directory (main folder)
├── 1_pixel_classification.ilp
├── 2_boundary_segmentation.ilp
├── 3_object_classification.ilp
├── renaming.csv
├── conif.toml
├── 0_1_original_tif_training_images
│   ├── training-1.tif
│   ├── training-2.tif
│   └── ...
├── 0_2_original_tif_batch_images
│   ├── image-1.tif
│   ├── image-2.tif
│   └── ...
├── 0_3_batch_tif_renamed
│   ├── strain-xx_day-yymmdd_condition1-yy_timepoint-zz_parallel-1.tif
│   ├── strain-xx_day-yymmdd_condition1-yy_timepoint-zz_parallel-2.tif
│   └── ...
├── 1_images
│   ├── training-1.h5
│   ├── training-2.h5
│   └── ...
├── 2_probabilities
│   ├── strain-xx_day-yymmdd_condition1-yy_timepoint-zz_parallel-1-data_Probabilities.h5
│   ├── strain-xx_day-yymmdd_condition1-yy_timepoint-zz_parallel-2-data_Probabilities.h5
│   └── ...
├── 3_multicut
│   ├── strain-xx_day-yymmdd_condition1-yy_timepoint-zz_parallel-1-data_Multicut Segmentation.h5
│   ├── strain-xx_day-yymmdd_condition1-yy_timepoint-zz_parallel-2-data_Multicut Segmentation.h5
│   └── ...
├── 4_objectclassification
│   ├── strain-xx_day-yymmdd_condition1-yy_timepoint-zz_parallel-1-data_Object Predictions.h5
│   ├── strain-xx_day-yymmdd_condition1-yy_timepoint-zz_parallel-1-data_table.csv
│   ├── strain-xx_day-yymmdd_condition1-yy_timepoint-zz_parallel-2-data_Object Predictions.h5
│   ├── strain-xx_day-yymmdd_condition1-yy_timepoint-zz_parallel-2-data_table.csv
│   └── ...
├── 5_batch_images
│   ├── strain-xx_day-yymmdd_condition1-yy_timepoint-zz_parallel-1.h5
│   ├── strain-xx_day-yymmdd_condition1-yy_timepoint-zz_parallel-2.h5
│   └── ...
├── 6_batch_probabilities
│   ├── strain-xx_day-yymmdd_condition1-yy_timepoint-zz_parallel-1-data_Probabilities.h5
│   ├── strain-xx_day-yymmdd_condition1-yy_timepoint-zz_parallel-2-data_Probabilities.h5
│   └── ...
├── 7_batch_multicut
│   ├── strain-xx_day-yymmdd_condition1-yy_timepoint-zz_parallel-1-data_Multicut Segmentation.h5
│   ├── strain-xx_day-yymmdd_condition1-yy_timepoint-zz_parallel-2-data_Multicut Segmentation.h5
│   └── ...
├── 8_batch_objectclassification
│   ├── strain-xx_day-yymmdd_condition1-yy_timepoint-zz_parallel-1-data_Object Predictions.h5
│   ├── strain-xx_day-yymmdd_condition1-yy_timepoint-zz_parallel-1-data_table.csv
│   ├── strain-xx_day-yymmdd_condition1-yy_timepoint-zz_parallel-2-data_Object Predictions.h5
│   ├── strain-xx_day-yymmdd_condition1-yy_timepoint-zz_parallel-2-data_table.csv
│   └── ...
└── 9_data_analysis
```
3.2 Getting started
- Open the caactus Graphical User Interface (GUI) from the command line (Unix) or Anaconda Powershell/Prompt (Windows).
- Make sure you have the caactus environment activated:

```bash
conda activate caactus-env
```

- Now simply type `caactus` and hit `enter` to start the graphical user interface:

```bash
caactus
```
- At the top, enter the path to your main folder.
- For steps where it is relevant, choose between training and batch mode.
- The subdirectories have default naming according to 3.1. You can rename them.
- When all information has been entered, click `Run`.
- Processing messages will appear at the bottom.
- The output can be accessed by inspecting the respective subdirectory from your main folder.
4. Training
To facilitate cross-platform reusability of the ilastik models, make sure to store Raw Data, Probabilities and Prediction Maps as relative links. This allows for portability of the models to other storage locations.
In case an absolute file path is selected, right-click on the location and select `edit properties`; under `storage` the path logic can be modified.
4.1. Selection of Training Images and Conversion
4.1.1 Selection of Training data
- Select a set of images that best represent the different experimental conditions.
- Store them in `0_1_original_tif_training_images`.
4.1.2 Conversion
- Go to the `tif2h5py` tab. Select `Training` from the dropdown menu.
- The script in the background will convert `.tif` files to `.h5` format.
- The `.h5` format allows for better performance when working with ilastik.
- When the file paths are correct, click `Run`.
4.2. Pixel Classification
- When first training a pixel classification model in ilastik, open ilastik.
- Create a new project and select "Pixel Classification" as the workflow.
- Save it as `1_pixel_classification.ilp` inside the main project directory.
- Under Raw Data, add the `.h5` files from the `1_images` folder.
- Feature selection: select the features you want to use for training. It is recommended to use all features.
- For working with neighbouring/touching cells, it is suggested to create three classes: 0 = interior, 1 = background, 2 = boundary (this follows Python's 0-indexing logic, where counting starts at 0).
- Annotate the classes by drawing on the images.
- Export the predictions. In prediction export, change the settings to:
  - Convert to Data Type: integer 8-bit
  - Renormalize from 0.00 1.00 to 0 255
  - File: `{dataset_dir}/../2_probabilities/{nickname}_{result_type}.h5`
- For more information, consult the documentation for pixel classification with ilastik.
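The placeholders in the export path above are expanded by ilastik itself. To make the resulting layout concrete, here is a small sketch (the helper `expand_export_pattern` is ours, purely for illustration) of how a pattern maps to an output file next to the input directory:

```python
def expand_export_pattern(pattern, dataset_dir, nickname, result_type):
    """Illustrate how ilastik's export placeholders expand.

    ilastik performs this substitution internally; this helper only
    shows the path that results from a given pattern.
    """
    return (pattern
            .replace("{dataset_dir}", dataset_dir)
            .replace("{nickname}", nickname)
            .replace("{result_type}", result_type))
```

For an image `training-1.h5` in `1_images`, the pattern therefore writes `training-1_Probabilities.h5` into the sibling `2_probabilities` directory.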
4.3 Boundary-based Segmentation with Multicut
- When first training a Boundary-based Segmentation model in ilastik, open ilastik.
- Create a new project and select "Boundary-based Segmentation with Multicut" as the workflow.
- Save it as `2_boundary_segmentation.ilp` inside the main project directory.
- Under Raw Data, add the `.h5` files from the `1_images` folder.
- Under Probabilities, add the `data_Probabilities.h5` files from the `2_probabilities` folder.
- In DT Watershed, use the input channel that corresponds to the order you used under project setup (in this case input channel = 2).
- Annotate the edges by clicking on the edges between cells. Annotate the background by clicking on the background.
- Export the Multicut Segmentation. In prediction export, change the settings to:
  - Convert to Data Type: integer 8-bit
  - Renormalize from 0.00 1.00 to 0 255
  - Format: compressed hdf5
  - File: `{dataset_dir}/../3_multicut/{nickname}_{result_type}.h5`
- For more information, follow the documentation for boundary-based segmentation with Multicut.
4.4 Background Processing
For further processing in the object classification, the background needs to be eliminated from the multicut datasets. For this, the next script sets the numerical value of the largest region to 0, so that it is shown as transparent in the next step of the workflow. This operation is performed in place on all `*data_Multicut Segmentation.h5` files in `project_directory/3_multicut/`.
- Select the `background-processing` tab in the GUI.
- Select `Training` mode from the dropdown menu.
- When the file paths are correct, click `Run`.
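The core idea of this step, setting the most frequent (i.e. largest) region label to 0, can be sketched as follows. This is only an illustration of the labeling logic on a flat list of labels; the actual caactus script applies it to the pixel arrays inside the `data_Multicut Segmentation.h5` files.

```python
from collections import Counter

def zero_largest_region(labels):
    """Set the most frequent label (assumed to be the background) to 0.

    `labels` is a flat sequence of integer region labels from a
    segmentation; the label covering the most pixels is treated as
    background and replaced by 0.
    """
    background, _ = Counter(labels).most_common(1)[0]
    return [0 if lab == background else lab for lab in labels]
```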
4.5. Object Classification
- When first training an object classification model in ilastik, open ilastik.
- Create a new project and select "Object Classification [Inputs: Raw Data, Pixel Prediction Map]" as the workflow.
- Save it as `3_object_classification.ilp` inside the main project directory.
- Under "Raw Data", add the `.h5` files from the `1_images` folder.
- Under "Segmentation Image", add the `data_Multicut Segmentation.h5` files from the `3_multicut` folder.
- Define your cell types plus an additional category for "not-usable" objects, e.g. cell debris and cut-off objects at the edge of the images. Please note: the default cell names in caactus are `resting`, `swollen`, `germling`, `hyphae`, `notusable` (and `mycelium` for EUCAST steps). You are welcome to change the names. Make sure to also change the names in the caactus GUI when performing the analysis steps below.
- Annotate the objects by clicking on them and assigning them to the respective class.
- Export the Object Predictions. In `Choose Export Image Settings`, change the settings to:
  - Convert to Data Type: integer 8-bit
  - Renormalize from 0.00 1.00 to 0 255
  - Format: compressed hdf5
  - File: `{dataset_dir}/../4_objectclassification/{nickname}_{result_type}.h5`
- Export the object `data_table.csv` files. In `Configure Feature Table Export General`, change the settings to:
  - Format: `.csv`
  - File: `{dataset_dir}/../4_objectclassification/{nickname}.csv`
- Select your features of interest for exporting.
- For more information, follow the documentation for object classification.
5. Batch Processing
- Once you have successfully trained all three ilastik models, you are ready to process large image datasets with the caactus pipeline.
- Store the images you want to process in the `0_2_original_tif_batch_images` directory.
- Perform steps 4.1 to 4.5 in batch mode, as explained in detail below (5.1 to 5.5).
- Where relevant, select batch mode in the dropdown menu in the caactus GUI.
- For more information, follow the documentation for batch processing.
5.1 Rename Files
- Rename the `.tif` files so that they contain information about your cells and experimental conditions.
- Create a csv file that contains the information you need in columns. Each row corresponds to one image. Follow the same order in which your image files are stored in the respective directory.
- The script will rename your files in the following format: `columnA-value1_columnB-value2_columnC_etc.tif`. E.g., as seen in the example below, picture 1 (well A1 from our plate) will be named `strain-ATCC11559_date-20241707_timepoint-6h_biorep-A_techrep-1.tif`.
- Select the Renaming tab in the caactus GUI. When the file paths are correct, click `Run`.
CAVE: Do not use underscores or dashes in the column names or values, as they are used as delimiters in the new file names.
CAVE: The only hardcoded column names needed are "biorep" and "techrep". They are needed in downstream analysis for calculating averages.
CAVE: After successfully renaming the files, we recommend deleting the content of `0_2_original_tif_batch_images` in order to save disk space.
5.2 Conversion
- Go to the `tif2h5py` tab. Select `Batch` from the dropdown menu.
- The script in the background will convert `.tif` files to `.h5` format.
- The `.h5` format allows for better performance when working with ilastik.
- When the file paths are correct, click `Run`.
CAVE: After successfully converting the files, we recommend deleting the content of `0_3_batch_tif_renamed` in order to save disk space.
5.3 Batch Processing Pixel Classification
- Open ilastik.
- Open your trained ilastik pixel classification project (e.g. `1_pixel_classification.ilp`).
CAVE: DO NOT CHANGE anything in 1. Input Data, 2. Feature Selection, and 3. Training when running Batch Processing!
- Under `4. Prediction Export`, select `Export predictions` and set the output File to: `{dataset_dir}/../6_batch_probabilities/{nickname}_{result_type}.h5`
- Go to the `5. Batch processing` tab.
- Under `Raw data`, add the `.h5` files from the `5_batch_images` folder.
- Now click `Process all files`.
- The output will be saved as `_Probabilities.h5` files in the output folder.
5.4 Batch Processing Multicut Segmentation
- Open ilastik.
- Open your trained ilastik boundary-segmentation project (e.g. the `2_boundary_segmentation.ilp` project file).
CAVE: DO NOT CHANGE anything in 1. Input Data, 2. DT Watershed, and 3. Training and Multicut when running Batch Processing!
- Under `4. Data Export`, select `Choose Export Image Settings` and choose a folder for the output (e.g. `7_batch_multicut`).
- Under `Choose Export Image Settings`, change the export File to: `{dataset_dir}/../7_batch_multicut/{nickname}_{result_type}.h5`
- Go to `5. Batch processing`.
- Under `Raw data`, add the `.h5` files from the `5_batch_images` folder.
- Under `Probabilities`, add the `data_Probabilities.h5` files from the `6_batch_probabilities` folder.
- Click `Process all files`.
- The output will be saved as `_Multicut Segmentation.h5` files in the output folder.
5.5 Background Processing
For further processing in the object classification, the background needs to be eliminated from the multicut datasets. For this, the next script sets the numerical value of the largest region to 0, so that it is shown as transparent in the next step of the workflow. This operation is performed in place on all `*data_Multicut Segmentation.h5` files in the multicut directory (in batch mode, `project_directory/7_batch_multicut/`).
- Select the `background-processing` tab in the GUI.
- Select `Batch` mode from the dropdown menu.
- When the file paths are correct, click `Run`.
5.6 Batch processing Object classification
- Open ilastik.
- Open your trained ilastik object classification project (`3_object_classification.ilp`).
CAVE: DO NOT CHANGE anything in 1. Input Data, 2. Object Feature Selection, and 3. Object Classification when running Batch Processing!
- Under `4. Object Information Export`, choose `Export Image Settings` and change the export File to: `{dataset_dir}/../8_batch_objectclassification/{nickname}_{result_type}.h5`
- Under `4. Object Information Export`, choose `Configure Feature Table Export` with the following settings:
  - In `Configure Feature Table Export General`, choose format `.csv` and change the output File to: `{dataset_dir}/../8_batch_objectclassification/{nickname}.csv`
  - Choose `Features` to select the features you are interested in exporting.
- Go to the `5. Batch Processing` tab.
- Under `Raw data`, add the `.h5` files from the `5_batch_images` folder.
- Under `Segmentation Image`, add the `data_Multicut Segmentation.h5` files from the `7_batch_multicut` folder.
- Click `Process all files`.
- The output will be saved as `data_Object Predictions.h5` and `data_table.csv` files in the output folder.
6. Post-Processing and Data Analysis
- Please be aware that the last two scripts, `summary_statistics.py` and `pln_modelling.py`, are at this stage written for the analysis and visualization of two independent variables.
6.1 Merging Data Tables and Table Export
The next script will combine all tables from all images into one global table for further analysis. Additionally, the information stored in the file name will be added as columns to the dataset.
- Technically, from this point on you can continue with whatever software or workflow is easiest for you for subsequent data analysis.
- Go to the `CSV summary` tab in the caactus GUI.
- Enter the pixel size for cell size calculation.
- When the file paths are correct, click `Run`.
- The output generated will be `df_clean.csv`.
- This spreadsheet unites all feature tables produced in 5.6 Object Classification in one spreadsheet.
- You can now use this spreadsheet to continue with analysis in the software of your choice.
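Part of this step is recovering the experimental metadata that was encoded into each file name in 5.1. A minimal sketch of that parsing is shown below; the real `csv_summary` script additionally reads and concatenates the per-image tables, and the function name `parse_filename_metadata` is our own.

```python
def parse_filename_metadata(stem):
    """Recover the key-value metadata encoded in a caactus file name.

    'strain-xx_timepoint-zz' -> {'strain': 'xx', 'timepoint': 'zz'}.
    Tokens without a '-' separator are ignored.
    """
    meta = {}
    for token in stem.split("_"):
        key, sep, value = token.partition("-")
        if sep:
            meta[key] = value
    return meta
```

Each parsed key becomes a column in the global table, so one row of `df_clean.csv` carries both the measured object features and the conditions of the image it came from.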
6.2 Creating Summary Statistics
- This script processes EUCAST data and generates summary statistics and a stacked bar plot of the predicted cell classes.
- If working with EUCAST antifungal susceptibility testing, use the `Summary Statistics EUCAST` tab.
- For the stacked bar plot, it groups data by the two variables that you enter.
- It computes the average count and percentage of each predicted class, across replicates (technical and biological), for each combination of the two grouping variables.
- It visualizes the distribution of classes across different conditions in stacked bar plots.
- The first variable you enter will be displayed on the x-axis (e.g. incubation temperature), and the second variable will be used for faceting (e.g. timepoint).
- This will create separate subplots for each level of that variable.
- The plot will show the percentage distribution of predicted classes for each condition, allowing you to compare how the classes are distributed across different experimental conditions defined by the two grouping variables.
- The colors of the bars correspond to the predicted classes, as defined in your color mapping.
- By default, the IBM color-blind-friendly palette is used, but you can customize the colors by providing HEX color codes.
- Go to the `Summary Statistics` tab in the caactus GUI.
- When the file paths are correct, click `Run`.
- The output generated will be:
  - a) "df_summary_complete.csv": csv table that also contains the "not usable" category
  - b) "df_refined_complete.csv": csv table without the "not usable" category
  - c) "counts.csv": dataframe used in PLN modelling
  - d) bar graph ("barchart.png")
- CAVE: all fields contain default values. You may change them to your needs. E.g. edit `Variable names` to enter the variables you are interested in for analysis. Edit `Class order` to match your cell morphotype names and ordering. Change the `Color mapping` according to the logic of `Class order`.
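The percentage computation behind the stacked bar plot can be sketched as follows. This is a simplified illustration, not the `summary_statistics` script: it counts objects per class for each combination of the two grouping variables and converts the counts to percentages, while the real script additionally averages over `biorep` and `techrep` replicates. The function name `class_percentages` is our own.

```python
from collections import defaultdict

def class_percentages(records, group_vars=("condition1", "condition2")):
    """Percentage of each predicted class per grouping-variable combination.

    `records` is a list of dicts, one per counted object, each carrying
    the grouping variables and a 'Predicted Class' key.
    """
    counts = defaultdict(lambda: defaultdict(int))
    for rec in records:
        key = tuple(rec[v] for v in group_vars)
        counts[key][rec["Predicted Class"]] += 1
    # Convert raw counts to percentages within each group.
    return {key: {cls: 100.0 * n / sum(cls_counts.values())
                  for cls, n in cls_counts.items()}
            for key, cls_counts in counts.items()}
```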
6.3 PLN Modelling
- This script runs ZIPln modelling on the input data with a dynamic design and generates PCA visualizations and a correlation circle plot.
- The two grouping variables you enter will be used in the model formula and for visualizing the PCA results.
- They will be combined into a single factor for the model, and the PCA plot will show the latent-variable projections colored by this combined category.
- The correlation circle plot shows how the original variables relate to the latent dimensions, helping you interpret the PCA results in terms of the original grouping variables.
- CAVE: the limit of categories for display in the PCA plot is n = 15.
- Go to the `PLN modelling` tab in the caactus GUI.
- When the file paths are correct, click `Run`.
- The output generated will be:
  - a) "correlation_circle.png"
  - b) "pca_plot.png"
- CAVE: all fields contain default values. You may change them to your needs. E.g. edit `Variable names` to enter the variables you are interested in for analysis. Edit `Class order` to match your cell morphotype names and ordering.
7. Tutorial
7.1 Download Sample Data
- Go to Zenodo to download the sample data.
- Unpack the `.zip` file into your project folder. The path to where you unpacked the sample data will be your main folder.
- To showcase the functionalities, the ilastik steps have been pre-trained. Use caactus in batch mode for the following steps. Please note, we intentionally left some subdirectories empty for the tutorial. The intent of the tutorial is that potential users learn how to run the batch mode with pre-trained models. The subdirectory `0_1_original_tif_training_images` is empty and will stay empty. The other empty subdirectories will get filled with data once the user follows the steps explained below.
- Make sure you have caactus installed (see Installation above).
- Make sure you have the caactus environment activated:

```bash
conda activate caactus-env
```

- Now simply type `caactus` and hit `enter` to start the graphical user interface:

```bash
caactus
```

- At the top, enter the path to your main folder.
- We recommend working with two screens. This allows you to follow the instructions in the caactus GUI while performing the steps in ilastik and to quickly switch back to the caactus steps for fast completion of the pipeline.
7.2 Renaming
- Inspect the `renaming.csv` spreadsheet to see how it is constructed and filled.
- Go to the renaming tab inside the caactus GUI.
- Enter the main folder path from 7.1.
- Click `Run`.
7.3 Batch Pixel Classification
- Open ilastik.
- Open the pre-trained ilastik pixel classification project from the sample data in the main folder (`1_pixel_classification.ilp`).
CAVE: DO NOT CHANGE anything in 1. Input Data, 2. Feature Selection, and 3. Training when running Batch Processing!
- Under `4. Prediction Export`, select `Export predictions` and set the output File to: `{dataset_dir}/../6_batch_probabilities/{nickname}_{result_type}.h5`
- Go to the `5. Batch processing` tab.
- Under `Raw data`, add the `.h5` files from the `5_batch_images` folder.
- Now click `Process all files`.
- The output will be saved as `_Probabilities.h5` files in the output folder.
7.4 Batch Processing Multicut Segmentation
- In ilastik, open the next project file.
- Open the pre-trained ilastik boundary-segmentation project from the sample data in the main folder (`2_boundary_segmentation.ilp`).
CAVE: DO NOT CHANGE anything in 1. Input Data, 2. DT Watershed, and 3. Training and Multicut when running Batch Processing!
- Under `4. Data Export`, select `Choose Export Image Settings` and choose a folder for the output (e.g. `7_batch_multicut`).
- Under `Choose Export Image Settings`, change the export File to: `{dataset_dir}/../7_batch_multicut/{nickname}_{result_type}.h5`
- Go to `5. Batch processing`.
- Under `Raw data`, add the `.h5` files from the `5_batch_images` folder.
- Under `Probabilities`, add the `data_Probabilities.h5` files from the `6_batch_probabilities` folder.
- Click `Process all files`.
- The output will be saved as `_Multicut Segmentation.h5` files in the output folder.
- Close the `2_boundary_segmentation.ilp` project file in ilastik.
7.5 Batch Background Processing
- Switch back to the caactus GUI.
- Select the `background-processing` tab in the GUI.
- Select `Batch` mode from the dropdown menu.
- When the file paths are correct, click `Run`.
- The background has now been deleted and you can continue with object classification in ilastik.
7.8 Batch Object classification
- Switch back to ilastik.
- Open your trained ilastik object classification project (`3_object_classification.ilp`).
CAVE: DO NOT CHANGE anything in 1. Input Data, 2. Object Feature Selection, and 3. Object Classification when running Batch Processing!
- Under `4. Object Information Export`, choose `Export Image Settings` and change the export File to: `{dataset_dir}/../8_batch_objectclassification/{nickname}_{result_type}.h5`
- Under `4. Object Information Export`, choose `Configure Feature Table Export` with the following settings:
  - In `Configure Feature Table Export General`, choose format `.csv` and change the output File to: `{dataset_dir}/../8_batch_objectclassification/{nickname}.csv`
  - Choose `Features` to select the features you are interested in exporting.
- Go to the `5. Batch Processing` tab.
- Under `Raw data`, add the `.h5` files from the `5_batch_images` folder.
- Under `Segmentation Image`, add the `data_Multicut Segmentation.h5` files from the `7_batch_multicut` folder.
- Click `Process all files`.
- The output will be saved as `data_Object Predictions.h5` and `data_table.csv` files in the output folder.
- Now you have performed all steps in ilastik. You can close ilastik.
7.9 CSV summary
- Switch back to the caactus GUI.
- Go to the `CSV summary` tab in the caactus GUI.
- You can leave the default pixel size for cell size calculation.
- When the file paths are correct, click `Run`.
- Inspect the generated results. The output generated will be `df_clean.csv`.
- This spreadsheet unites all feature tables produced in 5.6 Object Classification in one spreadsheet.
- You can now use this spreadsheet to continue with analysis in the software of your choice.
7.10 Summary Statistics
- Go to the `Summary Statistics` tab in the caactus GUI.
- Change the variable names to `['condition1','condition2']`.
- When the file paths are correct, click `Run`.
- Inspect the generated results. The output generated will be:
  - a) "df_summary_complete.csv": csv table that also contains the "not usable" category
  - b) "df_refined_complete.csv": csv table without the "not usable" category
  - c) "counts.csv": dataframe used in PLN modelling
  - d) bar graph ("barchart.png"), with condition1 on the x-axis, the percentage of morphotypes ("Predicted Class") on the y-axis, and condition2 as the faceting variable in rows. You can play around by putting 'condition2' first and 'condition1' second to see how it changes the plot.
- You may also change the colors: change the default `{'resting': '#FE6100', 'swollen': '#648FFF', 'germling': '#785EF0', 'hyphae': '#DC267F'}` to `{'resting': 'yellow', 'swollen': 'cyan', 'germling': 'blue', 'hyphae': 'magenta'}`.
- Similarly, you may change the morphotype names. Open `df_clean.csv` in a spreadsheet software (e.g. Excel). Replace all `resting` with `dormant` (use `Ctrl+F`). Now re-do step 7.10 Summary Statistics. Before you click `Run`, make sure you replace `resting` with `dormant` in both the `Class order` and `Color mapping` fields.
7.11 PLN modelling
- Go to the `PLN modelling` tab in the caactus GUI.
- Change the variable names to `['condition1','condition2']`.
- When the file paths are correct, click `Run`.
- Inspect the generated results. The output generated will be:
  - a) "correlation_circle.png": shows that PCA1, accounting for 57.465% of the variance, primarily separated samples by condition2, whereas PCA2 accounted for 24.57% of the variance based on condition1.
  - b) "pca_plot.png": the PCA plot shows how the images group together in 2D space based on the combined category of condition1 and condition2 (the categorical levels are combined).
Project details
Release history
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
Source Distribution
Built Distribution
File details
Details for the file caactus-0.2.8.tar.gz.
File metadata
- Download URL: caactus-0.2.8.tar.gz
- Upload date:
- Size: 2.0 MB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.14.0
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `478f5c4b499d71e25531a825fd04a7d197fdd7353488b47d4dbfe2e3e105e7b4` |
| MD5 | `64ce77d714c8ba99c6ab5640a8aa3639` |
| BLAKE2b-256 | `347662eb42245c15d64b812188504b16b74f96c8de251eeb973c1e6ed85f15f0` |
File details
Details for the file caactus-0.2.8-py3-none-any.whl.
File metadata
- Download URL: caactus-0.2.8-py3-none-any.whl
- Upload date:
- Size: 2.0 MB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.14.0
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `4587849896b8acff1ffd3634a8d570f1e22a3a90caa11fcf226caef049468bbd` |
| MD5 | `b92b5793a0864278a2ea85a93fe72af0` |
| BLAKE2b-256 | `d05c8ec995bdec3274765cdbdf9f5d928a5fecc6d56462c3ad6b9a3187e37c5f` |