Deep and machine learning for atom-resolved data
What is AtomAI
AtomAI is a simple Python package for machine learning-based analysis of experimental atomic-scale and mesoscale data from electron and scanning probe microscopes, and it requires no advanced knowledge of Python (or machine learning). It is the next iteration of the AICrystallographer project.
How to use it
AtomAI has two main modules: atomnet and atomstat. atomnet is used for training neural networks (with just one line of code) and for applying trained models to find atoms and defects in image data. atomstat takes the atomnet predictions and performs statistical analysis on the local image descriptors associated with the identified atoms and defects (e.g., principal component analysis of atomic distortions in a single image, or computation of Gaussian mixture model components with transition probabilities for movies).
Quickstart: AtomAI in the Cloud
The easiest way to start using AtomAI is via Google Colab.
Below is an example of how one can train a neural network for atom/particle/defect finding with essentially one line of code:
```python
import numpy as np
from atomai import atomnet

# Load your training/test data (as numpy arrays or lists of numpy arrays)
dataset = np.load('training_data.npz')
images_all, labels_all, images_test_all, labels_test_all = dataset.values()

# Train a model
trained_model = atomnet.train_single_model(
    images_all, labels_all, images_test_all, labels_test_all,  # train and test data
    gauss_noise=True, zoom=True,  # on-the-fly data augmentation
    training_cycles=500, swa=True)  # train for 500 iterations with stochastic weight averaging at the end
```
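The `.npz` file above is simply four NumPy arrays stored together. A minimal sketch of assembling such a file with synthetic data (the shapes and the thresholding used to fake pixel-wise masks are assumptions for illustration only):

```python
import numpy as np

# Synthetic "experimental" images and pixel-wise label masks.
# Illustrative shapes: (n_images, height, width)
images_all = np.random.rand(20, 256, 256).astype(np.float32)
labels_all = (images_all > 0.5).astype(np.int64)  # stand-in for atom/defect masks
images_test_all = np.random.rand(4, 256, 256).astype(np.float32)
labels_test_all = (images_test_all > 0.5).astype(np.int64)

# Save in the same order the training snippet unpacks them
np.savez('training_data.npz',
         images_all=images_all, labels_all=labels_all,
         images_test_all=images_test_all, labels_test_all=labels_test_all)
```

Loading this file and calling `dataset.values()` then yields the four arrays in the order shown in the training snippet.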
One can also train an ensemble of models instead of a single model. The averaged ensemble prediction is usually more accurate and reliable than that of a single model, and it additionally provides an uncertainty estimate for each pixel of the prediction.
```python
# Initialize ensemble trainer
etrainer = atomnet.ensemble_trainer(
    images_all, labels_all, images_test_all, labels_test_all,
    rotation=True, zoom=True, gauss_noise=True,  # on-the-fly data augmentation
    strategy="from_baseline", swa=True, n_models=30, model="dilUnet",
    training_cycles_base=1000, training_cycles_ensemble=100)

# Train a deep ensemble of models
ensemble, amodel = etrainer.run()
```
Prediction with trained model(s)
A trained model can then be used to find atoms/particles/defects in experimental data it has not seen before:
```python
# Load new experimental data (as a 2D or 3D numpy array)
expdata = np.load('expdata.npy')

# Initialize a predictive object (can be reused for other datasets)
spredictor = atomnet.predictor(trained_model, use_gpu=True, refine=False)

# Get the model's "raw" prediction, atomic coordinates and classes
nn_output, coord_class = spredictor.run(expdata)
```
One can also make a prediction with uncertainty estimates using the ensemble of models:
```python
epredictor = atomnet.ensemble_predictor(
    amodel, ensemble, calculate_coordinates=True, eps=0.5)
(out_mu, out_var), (coord_mu, coord_var) = epredictor.run(expdata)
```
(Note: in some cases it may be easier to get the coordinates by simply running `atomnet.locator(*args, **kwargs).run(out_mu)` on the mean "raw" prediction of the ensemble.)
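Conceptually, the ensemble mean and variance are simply per-pixel statistics over the stack of individual model outputs. A minimal NumPy sketch, where the stacked `predictions` array is a synthetic stand-in for the outputs of the 30 ensemble members:

```python
import numpy as np

# Stand-in for outputs of n_models ensemble members on one image:
# shape (n_models, height, width), values in [0, 1]
rng = np.random.default_rng(0)
predictions = rng.random((30, 64, 64))

out_mu = predictions.mean(axis=0)   # averaged prediction (more reliable)
out_var = predictions.var(axis=0)   # per-pixel uncertainty estimate

# High-variance pixels are the ones the ensemble members disagree on
uncertain = out_var > np.percentile(out_var, 95)
print(out_mu.shape, out_var.shape, uncertain.sum())
```

Pixels where `out_var` is large are regions where the prediction should be trusted less, e.g., near sample edges or contamination.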
The information extracted by atomnet can be further used for statistical analysis of raw and "decoded" data. For example, for a single atom-resolved image of a ferroelectric material, one can identify domains with different ferroic distortions:
```python
from atomai import atomstat

# Get local descriptors
imstack = atomstat.imlocal(nn_output, coordinates, window_size=32, coord_class=1)

# Compute distortion "eigenvectors" with associated loading maps and plot results
pca_results = imstack.imblock_pca(n_components=4, plot_results=True)
```
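To see what this distortion analysis amounts to, here is a hedged sketch of PCA over a stack of local subimages using plain NumPy (synthetic data; AtomAI's `imblock_pca` may differ in its preprocessing and plotting details):

```python
import numpy as np

# Stand-in for subimages centered on the identified atoms:
# shape (n_atoms, window_size, window_size)
rng = np.random.default_rng(1)
subimages = rng.random((500, 32, 32))

# Flatten each subimage into a feature vector and center the data
X = subimages.reshape(len(subimages), -1)
Xc = X - X.mean(axis=0)

# PCA via SVD: rows of Vt are the principal components
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
n_components = 4
eigenimages = Vt[:n_components].reshape(n_components, 32, 32)  # distortion "eigenvectors"
loadings = Xc @ Vt[:n_components].T  # per-atom loading values

print(eigenimages.shape, loadings.shape)
```

Plotting each atom's loading value at its (x, y) coordinate produces the loading maps that reveal ferroic domains.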
For movies, one can extract trajectories of individual defects and calculate the transition probabilities between different classes:
```python
# Get local descriptors (such as subimages centered around impurities)
imstack = atomstat.imlocal(nn_output, coordinates, window_size=32, coord_class=1)

# Calculate Gaussian mixture model (GMM) components
components, imgs, coords = imstack.gmm(n_components=10, plot_results=True)

# Calculate GMM components and transition probabilities for different trajectories
transitions_dict = imstack.transition_matrix(n_components=10, rmax=10)

# and more
```
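The transition probabilities above can be understood with a simple count-and-normalize scheme. A minimal sketch on a synthetic trajectory of per-frame class labels (the helper function and the toy trajectory are illustrative, not AtomAI's implementation):

```python
import numpy as np

def transition_matrix(labels, n_classes):
    """Row-normalized matrix of class-to-class transition counts."""
    T = np.zeros((n_classes, n_classes))
    for a, b in zip(labels[:-1], labels[1:]):
        T[a, b] += 1  # count each observed transition a -> b
    row_sums = T.sum(axis=1, keepdims=True)
    # Normalize rows so each gives a probability distribution over next classes
    return np.divide(T, row_sums, out=np.zeros_like(T), where=row_sums > 0)

# Synthetic trajectory: GMM class of one defect in each movie frame
traj = [0, 0, 1, 1, 2, 0, 1, 2, 2, 0]
P = transition_matrix(traj, n_classes=3)
print(P)
```

Entry `P[i, j]` is the estimated probability that a defect in class `i` switches to class `j` in the next frame.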
In addition to multivariate statistical analysis, one can also use variational autoencoders (VAEs) in AtomAI to find, in an unsupervised fashion, the most effective reduced representation of the system's local descriptors. The VAEs can be applied to both raw data and NN output, but typically work better with the latter.
```python
from atomai import atomstat, utils

# Get a stack of subimages from a movie
imstack, com, frames = utils.extract_subimages(decoded_imgs, coords, window_size=32)

# Initialize and train a rotationally invariant VAE
rvae = atomstat.rVAE(imstack, latent_dim=2, training_cycles=200)
rvae.run()

# Visualize the learned manifold
rvae.manifold2d()
```
First, install PyTorch. Then install AtomAI via

```
pip install atomai
```
| Filename | Size | File type | Python version |
|---|---|---|---|
| atomai-0.5.2-py3-none-any.whl | 72.0 kB | Wheel | py3 |
| atomai-0.5.2.tar.gz | 65.0 kB | Source | None |