Thunderfit fitting code
Project description
Quick install
Python version >=3.6 needed for this package
Mac/Linux:
Using pip:
First have pip installed: https://www.makeuseof.com/tag/install-pip-for-python/ Be sure to install python >=3.6
- (optional - recommended):
Create a new Python environment so that your existing packages aren't corrupted. Maintenance of this package won't be frequent, so dependencies are pinned to specific releases.
Choose a directory to store your Python environment in; somewhere convenient to access is recommended.
python3 -m pip install --user virtualenv
python3 -m virtualenv thunder
When it comes time to use your environment (when installing the package or when using it):
source path_to_env/thunder/bin/activate
(Do this step whenever you want to use thunderfit.) To deactivate when finished, just type: deactivate
Now with your environment active (if using one), type:
pip install thunderfit
You can check the correct script for ramananalyse (or any other script in future releases) is present by typing:
command -v ramananalyse
- To use ramananalyse::
ramananalyse --param_file_path path_to_param_file --datapath path_to_data_file
Using anaconda:
ANACONDA NOT CURRENTLY SUPPORTED.
Windows:
Install python: https://www.python.org/downloads/ Be sure to install python >=3.6. pip should be installed automatically.
- (optional - recommended):
Create a new Python environment so that your existing packages aren't corrupted. Maintenance of this package won't be frequent, so dependencies are pinned to specific releases.
Choose a directory to store your Python environment in; somewhere convenient to access is recommended.
py -m pip install virtualenv
py -m virtualenv thunder
When it comes time to use your environment (when installing the package or when using it):
.\path_to_env\thunder\Scripts\activate.bat
(Do this step whenever you want to use thunderfit.) To deactivate when finished, just type: deactivate
Now with your environment active (if using one), type:
py -m pip install thunderfit
On Windows, scripts install as .exe files inside the Scripts folder of the environment created above. Type the path to the .exe followed by any arguments as usual.
Using windows subsystem for linux (WSL):
Follow instructions for Mac/Linux
Using Thunderfit
To create a thunderfit object, call the Thunder class in the thundobj module with the correct inputs. See the code for details.
Alternatively, a thunderfit object can be created by passing an existing thunderfit object to the Thunder class; all attributes will be copied into the new object.
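The copy-construction behaviour described above can be illustrated with a minimal stand-in class (this is not the real Thunder implementation, just a sketch of the pattern):

```python
# Minimal stand-in for the pattern described above; the real Thunder
# class (in the thundobj module) takes the package's parsed inputs.
class ThunderSketch:
    def __init__(self, source):
        if isinstance(source, ThunderSketch):
            # Passing an existing object copies all its attributes
            # into the new object.
            self.__dict__.update(vars(source))
        else:
            # Otherwise treat `source` as a dict of inputs.
            self.__dict__.update(source)

original = ThunderSketch({"x_ind": 2, "y_ind": 3})
copy = ThunderSketch(original)
print(copy.x_ind, copy.y_ind)  # 2 3
```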
The param file
The param file is in json format and an example is below::
{
    "x_ind": 2,
    "y_ind": 3,
    "x_coord_ind": 1,
    "y_coord_ind": 0,
    "map": true,
    "background": "no",
    "clip_data": true,
    "clips": [3100, 1100],
    "method": "leastsq",
    "tol": 0.001,
    "no_peaks": 4,
    "peak_info_dict": {
        "type": ["LorentzianModel", "LorentzianModel", "LorentzianModel", "PowerLawModel", "LinearModel"],
        "center": [1350, 1590, 2700],
        "sigma": [15, 10, 30],
        "height": [100, 1000, 500],
        "amplitude": [null, null, null, 0],
        "exponent": [null, null, null, 1],
        "slope": [null, null, null, null, 0],
        "intercept": [null, null, null, null, 0]
    },
    "bounds": {
        "center": [[1330, 1370], [1570, 1610], [2680, 2730]],
        "sigma": [[8, 40], [10, 30], [30, 60]],
        "amplitude": [[0.0001, null], [0.0001, null], [0.0001, null]]
    }
}
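Because the param file is plain JSON, it can be loaded and sanity-checked with Python's standard json module before use (an illustrative sketch, not the package's own parser; the contents here are a cut-down version of the example above):

```python
import json

# A cut-down param file, as it might appear on disk:
param_text = """
{
    "x_ind": 2,
    "y_ind": 3,
    "map": true,
    "clips": [3100, 1100],
    "peak_info_dict": {"type": ["LorentzianModel", "PowerLawModel"]}
}
"""
params = json.loads(param_text)  # raises ValueError if the JSON is malformed
print(params["clips"])                      # [3100, 1100]
print(params["peak_info_dict"]["type"][0])  # LorentzianModel
```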
Possible Arguments are:
x_ind - the data must currently be in csv format only. x_ind specifies which column of the csv is the x data
y_ind - specifies which column of the csv is the y data
e_ind - (optional) specifies which column of the csv is the error data. If not specified then only x and y data will be loaded
x_coord_ind - which column of the map has the x coordinates
y_coord_ind - which column of the map has the y coordinates
map - is this a mapscan? defaults to no
background - either “SCARF” or “no” to subtract either a scarf generated background or no background before fitting (note using e.g. linear models and powerlaw models is a good way to do a background simultaneously with the peaks)
clip_data - true or false. should the data be clipped? defaults to false
clips - if the data is being clipped this will be read. should be a list, e.g. [10,20] where the two elements are the left clip and right clip of the data. Note the order is important and if the data file has x read in backwards then the first number should be the right clip
method - what type of fitting method to use. uses same names as lmfit methods
tol - what tolerance to use. currently defaults to same as lmfit and tol is set for xtol and ftol
no_peaks - how many peaks to fit (will be deprecated soon)
peak_info_dict - this is a dictionary of information about the models to fit. at a minimum it must include "type" as a key. pass in the format {"key": value}. the value for each key should be a comma-separated list []. note that the element index corresponds to the model. if a parameter is not appropriate for a given model, type null. the default is to not set parameters for models unless specified
type - a key specifying which models to use. currently most of lmfit's built-in models are supported; the expression model and split Lorentzian currently aren't
model parameters - see lmfit built in models to see which parameters can be passed
bounds - this has the same format as peak_info_dict except the values should be a list of list, with each sublist being two elements for a lower and upper bound on that parameter
a.model parameters - [[low,upp],[low,upp]] replace low and upp with numerical bound values
datapath - the relative path to the data. Data should be in csv format. note any nan rows will be removed. if passed on the command line then that always takes precedence.
scarf_params - a dictionary containing parameters for the “SCARF” background method. if null then it will launch an interactive procedure for choosing the parameters which could be passed in here.
rad - a number which corresponds to the radius of the rolling ball
b - a number which corresponds to the shift in the background generated by rolling ball method
window_length - a parameter for Savgol filter (current implementation uses scipy savgol_filter from signal)
poly_order - a parameter for Savgol filter (current implementation uses scipy savgol_filter from signal)
normalise - bool - should the data be normalised
bg_first_only - bool - if finding a background with a user-guided routine, should the routine only run for the first spectrum
bounds_first_only - bool - if finding bounds interactively should do this for only first
peakf_first_only - bool - find peaks interactively only for first
find_peaks - find peaks interactively
adj_params - should the parameters be adjusted for each spectrum, e.g. by peak finding, to slightly move the guess and improve convergence time?
find_bounds - interactively find the bounds
make_gif - make a gif of all the fits
peak_finder_type - what type of peak finding should be performed?
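To make the null conventions in peak_info_dict and bounds concrete, here is a hedged sketch (not the package's actual implementation) of how the per-model lists might be unpacked, with null/None meaning "leave unset" for a parameter or "unbounded" for a bound:

```python
# Illustrative unpacking of peak_info_dict (not the package's real code).
peak_info = {
    "type": ["LorentzianModel", "PowerLawModel"],
    "center": [1350, None],   # null -> leave unset for that model
    "exponent": [None, 1],
}

models = []
for i, mtype in enumerate(peak_info["type"]):
    # Collect only the parameters given (non-null) for model i; lists may
    # be shorter than the type list, as in the example param file above.
    params = {k: v[i] for k, v in peak_info.items()
              if k != "type" and i < len(v) and v[i] is not None}
    models.append((mtype, params))

print(models[0])  # ('LorentzianModel', {'center': 1350})
print(models[1])  # ('PowerLawModel', {'exponent': 1})
```

The bounds dictionary follows the same indexing, except each element is a [low, upp] pair where null means that side is unbounded.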
Scripts
The below scripts install with Thunderfit by default. They are useful for analysing a single Raman spectrum or a mapscan, or for generating a parameters file with a user-guided routine.
The ramananalyse script
At a minimum, the user must supply the param file location.
Currently this script processes user inputs and parses everything, then creates a new directory in the current directory named analysed_{time}, which will contain all the analysis data. It then creates a Thunder object based on the inputs and params file. The background and the background-subtracted data are saved as variables on the object. Peaks are then fitted to the background-subtracted y data using the peak information and the bounds information. The original data, fitted peaks, background, fit sum and the uncertainties on the fitted peaks are all plotted using matplotlib and the plot object returned. A fit report is then generated. The plots, the fit report and the Thunder object (serialised using dill) are then saved in the directory created earlier.
The map_scan script
Run in the same way as ramananalyse.
Further details coming soon. Run like:
mapscan --param_file_path ../bag_params.txt --datapath './map.txt'