Tools for the generation and analysis of dislocation distributions.
Project
This repository provides tools for the analysis of crystals containing dislocations by X-ray diffraction. It is part of a project conducted during a research internship at the materials and structural sciences laboratory of the École Nationale Supérieure des Mines de Saint-Étienne.
Features
The tools developed can be used to:
- generate dislocation distributions according to different models
- export the distributions in standardized files for input to an X-ray diffraction simulation program
- export dislocation maps of the distributions
- export a spatial analysis of the distributions
Physical aspects
A dislocation is defined by:
- a Burgers vector
- a position
Two geometries are proposed (membership tests are sketched below):
- circle (intersection of a plane with a cylinder), centered at (0, 0)
- square (intersection of a plane with a cuboid), with its bottom-left corner at (0, 0)
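As an illustration (a minimal sketch, not part of the package), membership tests for the two regions could be written as follows, assuming a circle of radius r and a square of side s:
def in_circle(x, y, r):
    """Return True if the point (x, y) lies in the circular region."""
    return x**2 + y**2 <= r**2

def in_square(x, y, s):
    """Return True if the point (x, y) lies in the square region."""
    return 0 <= x <= s and 0 <= y <= s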
A distribution is characterized by the following elements:
- the geometry of the region of interest
- the model used for the random generation of dislocations
- the generated dislocations
Abbreviations
Some abbreviations are used in the program:
Models
- RDD: random dislocation distribution
- RRDD: restrictedly random dislocation distribution
- RCDD: random cell dislocation distribution
Model variants
- R: randomly distributed Burgers vectors
- E: evenly distributed Burgers vectors
- D: dipolar Burgers vectors
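To illustrate the variants (a simplified sketch under assumed conventions, not the package's implementation), the senses of the Burgers vectors of n dislocations could be drawn as follows:
import numpy as np

rng = np.random.default_rng()

def senses(n, variant):
    """Draw n Burgers vector senses (+1 or -1); n is assumed even."""
    if variant == 'R':  # randomly distributed senses
        return rng.choice([1, -1], size=n)
    if variant == 'E':  # evenly distributed: as many +1 as -1
        s = np.array([1, -1] * (n // 2))
        rng.shuffle(s)
        return s
    if variant == 'D':  # dipolar: consecutive dislocations form +/- pairs
        return np.tile([1, -1], n // 2)
    raise ValueError("unknown variant: " + variant)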
Boundary conditions
- PBCG: periodic boundary conditions applied when generating the distribution
- PBCR: periodic boundary conditions applied when running the simulation
- IDBC: image dislocations boundary conditions
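As an intuition for PBCG (an illustrative sketch with a hypothetical helper, not the package's API): with periodic boundary conditions, each generated dislocation is replicated in the image cells surrounding a square region of side s, so that the region tiles the plane:
import numpy as np

def replicate_pbc(positions, s, order=1):
    """Return the dislocation positions (array of shape (n, 2)) together
    with their periodic images in the (2*order+1)**2 - 1 surrounding
    cells of side s (order=1: first ring of images)."""
    shifts = [(i * s, j * s)
              for i in range(-order, order + 1)
              for j in range(-order, order + 1)]
    return np.concatenate([positions + np.array(sh) for sh in shifts])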
User guide
Installation
The project is indexed on PyPI and installable directly via pip.
pip install -U lpa-input
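The installation can be checked by importing the modules used in the examples below:
# quick check that the package and its documented modules import correctly
from lpa.input import sets, models, maps, data, analyze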
Generation
To create a random dislocation distribution with evenly distributed Burgers vectors in a cylindrical geometry with a radius of 1000 nm:
from lpa.input import sets
from lpa.input.models import RDD
r = {'d': 0.03, 'v': 'E'}  # model parameters: dislocation density 'd' and variant 'v' ('E': evenly distributed Burgers vectors)
d = sets.Distribution('circle', 1000, RDD, r)  # geometry, radius (nm), model, model parameters
To create a sample of 500 random dislocation distributions with evenly distributed Burgers vectors in a cylindrical geometry with a radius of 1000 nm:
from lpa.input import sets
from lpa.input.models import RDD
r = {'d': 0.03, 'v': 'E'}  # same model parameters as above
s = sets.Sample(500, 'circle', 1000, RDD, r)  # number of distributions, geometry, radius (nm), model, model parameters
Exportation
To export a dislocation map of a distribution d:
from lpa.input import maps
maps.export(d)
To make standardized files for input to an X-ray diffraction simulation program from a sample s:
from lpa.input import data
data.export(s)
To make a spatial analysis of a sample s:
from lpa.input import analyze
analyze.export(s)
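Putting these calls together, a complete workflow, from generation to exportation, looks like this (using the parameter values from the examples above):
from lpa.input import sets, maps, data, analyze
from lpa.input.models import RDD

r = {'d': 0.03, 'v': 'E'}  # model parameters
d = sets.Distribution('circle', 1000, RDD, r)  # one distribution
s = sets.Sample(500, 'circle', 1000, RDD, r)   # a sample of 500 distributions

maps.export(d)     # dislocation map of the distribution
data.export(s)     # standardized files for the simulation program
analyze.export(s)  # spatial analysis of the sample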
Parallelization
To parallelize the spatial analysis of distributions on a supercomputer running the Slurm Workload Manager, two files are needed. The first is the Python script executed on each core.
slurm.py
#!/usr/bin/env python
# coding: utf-8
"""
This script is executed on each core during a parallel analysis.
"""
import time
from lpa.input import sets
from lpa.input import parallel
import settings
n = 1  # number of distributions per core
p = [
    [n, *settings.circle, *settings.rrdde13],
    [n, *settings.circle, *settings.rrdde14],
]
if parallel.rank == parallel.root:
    t1 = time.time()
for args in p:
    s = sets.Sample(*args)
    if parallel.rank == parallel.root:
        print("- analysis of "+s.fileName()+" ", end="")
        t2 = time.time()
    parallel.export(s)
    if parallel.rank == parallel.root:
        print("("+str(round((time.time()-t2)/60))+" min)")
if parallel.rank == parallel.root:
    print("total time: "+str(round((time.time()-t1)/60))+" min")
The second file is used to submit the task to Slurm.
slurm.job
#!/bin/bash
#SBATCH --job-name=disldist
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=2
#SBATCH --time=10:00:00
#SBATCH --partition=intensive.q
ulimit -l unlimited
###unset SLURM_GTIDS
SCRIPT=slurm.py
echo ------------------------------------------------------
echo number of nodes in the job resource allocation: $SLURM_NNODES
echo nodes allocated to the job: $SLURM_JOB_NODELIST
echo directory from which sbatch was invoked: $SLURM_SUBMIT_DIR
echo hostname of the computer from which sbatch was invoked: $SLURM_SUBMIT_HOST
echo id of the job allocation: $SLURM_JOB_ID
echo name of the job: $SLURM_JOB_NAME
echo name of the partition in which the job is running: $SLURM_JOB_PARTITION
echo number of nodes requested: $SLURM_JOB_NUM_NODES
echo number of tasks requested per node: $SLURM_NTASKS_PER_NODE
echo ------------------------------------------------------
echo generating hostname list
COMPUTEHOSTLIST=$( scontrol show hostnames $SLURM_JOB_NODELIST |paste -d, -s )
echo ------------------------------------------------------
echo creating scratch directories on nodes $SLURM_JOB_NODELIST
SCRATCH=/scratch/$USER-$SLURM_JOB_ID
srun -n$SLURM_NNODES mkdir -m 770 -p $SCRATCH || exit $?
echo ------------------------------------------------------
echo transferring files from frontend to compute nodes $SLURM_JOB_NODELIST
srun -n$SLURM_NNODES cp -rvf $SLURM_SUBMIT_DIR/$SCRIPT $SCRATCH || exit $?
echo ------------------------------------------------------
echo loading packages
module load anaconda/python3
python3 -m pip install -U lpa-input
echo ------------------------------------------------------
echo running MPI program
cd $SCRATCH
mpirun --version
mpirun -np $SLURM_NTASKS -npernode $SLURM_NTASKS_PER_NODE -host $COMPUTEHOSTLIST python3 $SLURM_SUBMIT_DIR/$SCRIPT
echo ------------------------------------------------------
echo transferring result files from compute nodes to frontend
srun -n$SLURM_NNODES cp -rvf $SCRATCH $SLURM_SUBMIT_DIR || exit $?
echo ------------------------------------------------------
echo deleting scratch from nodes $SLURM_JOB_NODELIST
srun -n$SLURM_NNODES rm -rvf $SCRATCH || exit 0
echo ------------------------------------------------------
Finally, to start the simulation, enter the following command.
sbatch slurm.job
File details
Details for the file lpa-input-0.9.2.tar.gz.
File metadata
- Download URL: lpa-input-0.9.2.tar.gz
- Upload date:
- Size: 23.6 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/3.4.2 importlib_metadata/4.6.3 pkginfo/1.7.1 requests/2.26.0 requests-toolbelt/0.9.1 tqdm/4.62.0 CPython/3.9.6
File hashes
Algorithm | Hash digest
---|---
SHA256 | ea0473993d14815fceed499ce781f2e50142041370f483f703276aeda900a722
MD5 | 6f6b38f3bee84aed8bdd54e3cbe4fc47
BLAKE2b-256 | bfed6d3550bd9f905801218c08f4528f451a0792086d5b65ded305b58422ce25
File details
Details for the file lpa_input-0.9.2-py3-none-any.whl.
File metadata
- Download URL: lpa_input-0.9.2-py3-none-any.whl
- Upload date:
- Size: 25.0 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/3.4.2 importlib_metadata/4.6.3 pkginfo/1.7.1 requests/2.26.0 requests-toolbelt/0.9.1 tqdm/4.62.0 CPython/3.9.6
File hashes
Algorithm | Hash digest
---|---
SHA256 | f786c5700140268826d9da07f7770ec3f096ee45f461d1152b1df65c32db5b14
MD5 | 3a78f8fb9161181f67ac82a6a3323752
BLAKE2b-256 | 6acc364de139c43a0d80f6fb7218a2324bbe63c4dcde8b7c687cb20eecc02f34