
Tools for the generation and analysis of dislocation distributions.



Project

This repository provides tools for the analysis, by X-ray diffraction, of crystals containing dislocations. It is part of a project conducted during a research internship at the materials and structures science laboratory of the École Nationale Supérieure des Mines de Saint-Étienne.

Features

The tools developed can be used to:

  • generate dislocation distributions according to different models
  • export the distributions as standardized files for input to an X-ray diffraction simulation program
  • export the distributions as dislocation maps
  • export a spatial analysis of the distributions

Physical aspects

A dislocation associates:

  • a Burgers vector
  • a position

Two geometries are proposed:

  • circle (intersection of a plane with a cylinder) centered at (0, 0)
  • square (intersection of a plane with a cuboid) with its bottom-left corner at (0, 0)

A distribution is characterized by the following elements (see the sketch after this list):

  • the geometry of the region of interest
  • the model used for the random generation of dislocations
  • the generated dislocations
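
These three elements correspond directly to the arguments of the Distribution constructor used in the user guide below; a minimal sketch, using the values from the examples in the guide:

from lpa.input import sets
r = {'density': 0.03, 'variant': 'e'}  # model parameters: density and Burgers vector variant
d = sets.Distribution('circle', 1000, 'urdd', r)  # geometry, size (nm), generation model, model parameters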

Abbreviations

The following abbreviations are used in the program; an example of their use follows the lists below:

Models

  • urdd: uniformly random dislocation distribution
  • rrdd: restrictedly random dislocation distribution
  • rcdd: random cell dislocation distribution

Model variants

  • r: randomly distributed Burgers vectors
  • e: evenly distributed Burgers vectors
  • d: dipolar Burgers vectors

Boundary conditions

  • pbcg: periodic boundary conditions applied when generating the distribution
  • pbcr: periodic boundary conditions applied when running the simulation
  • idbc: image dislocations boundary conditions
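
The model and variant codes are passed as plain strings when building a distribution. A minimal sketch, assuming the same parameter dictionary applies to each Burgers vector variant (how boundary conditions are selected is not covered here):

from lpa.input import sets
# three Burgers vector variants of the uniformly random model, with the density used in the guide
dr = sets.Distribution('circle', 1000, 'urdd', {'density': 0.03, 'variant': 'r'})
de = sets.Distribution('circle', 1000, 'urdd', {'density': 0.03, 'variant': 'e'})
dd = sets.Distribution('circle', 1000, 'urdd', {'density': 0.03, 'variant': 'd'})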

User guide

Installation

The project is indexed on PyPI and installable directly via pip.

pip install lpa-input
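
To upgrade an existing installation to the latest version:

pip install -U lpa-input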

Generation

To create a uniformly random dislocation distribution with evenly distributed Burgers vectors in a cylindrical geometry with a radius of 1000 nm:

from lpa.input import sets
r = {'density': 0.03, 'variant': 'e'}
d = sets.Distribution('circle', 1000, 'urdd', r)

To create a sample of 500 uniformly random dislocation distributions with evenly distributed Burgers vectors in a cylindrical geometry with a radius of 1000 nm:

from lpa.input import sets
r = {'density': 0.03, 'variant': 'e'}
s = sets.Sample(500, 'circle', 1000, 'urdd', r)

Exportation

To export a dislocation map of a distribution d:

from lpa.input import maps
maps.export(d)

To make standardized files for input to an X-ray diffraction simulation program from a sample s:

from lpa.input import data
data.export(s)

To make a spatial analysis of a sample s:

from lpa.input import analyze
analyze.export(s)
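
Putting the previous steps together, a minimal end-to-end sketch combining only the calls shown above (a distribution for the map, a sample for the data files and the spatial analysis):

from lpa.input import sets, maps, data, analyze
r = {'density': 0.03, 'variant': 'e'}
d = sets.Distribution('circle', 1000, 'urdd', r)  # single distribution
s = sets.Sample(500, 'circle', 1000, 'urdd', r)   # sample of 500 distributions
maps.export(d)     # dislocation map
data.export(s)     # standardized input files for the simulation program
analyze.export(s)  # spatial analysis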

Parallelization

To parallelize the spatial analysis of distributions on a supercomputer equipped with the Slurm Workload Manager, two files are needed. The first is the Python script executed on each core.

slurm.py

#!/usr/bin/env python
# coding: utf-8

"""
This script is executed on each core during a parallel analysis.
"""

import time
from lpa.input import sets
from lpa.input import parallel
import settings

n = 1 # number of distributions per core

# argument sets unpacked into sets.Sample (geometry and model presets come from settings.py)
p = [
    [n, *settings.circle, *settings.urdde14],
    [n, *settings.circle, *settings.rrdde14],
]

if parallel.rank == parallel.root:
    t1 = time.time()
for args in p:
    s = sets.Sample(*args)
    if parallel.rank == parallel.root:
        print("- analysis of "+s.fileName()+" ", end="")
        t2 = time.time()
    parallel.export(s)
    if parallel.rank == parallel.root:
        print("("+str(round((time.time()-t2)/60))+" mn)")
if parallel.rank == parallel.root:
    print("total time: "+str(round((time.time()-t1)/60))+" mn")
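
The script above imports a settings module that is not listed in this guide. A minimal sketch of what it could contain, inferred from the Sample arguments used in the user guide (the names circle, urdde14 and rrdde14 come from slurm.py; the actual values, in particular the density, are assumptions):

settings.py

#!/usr/bin/env python
# coding: utf-8

"""
Argument presets unpacked into sets.Sample by slurm.py (hypothetical values).
"""

# geometry: shape and size (nm)
circle = ('circle', 1000)

# generation model and model parameters (the density value is an assumption)
urdde14 = ('urdd', {'density': 0.03, 'variant': 'e'})
rrdde14 = ('rrdd', {'density': 0.03, 'variant': 'e'})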

The second file is used to submit the task to Slurm.

slurm.job

#!/bin/bash
#SBATCH --job-name=disldist
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=2
#SBATCH --time=10:00:00
#SBATCH --partition=intensive.q
ulimit -l unlimited
###unset SLURM_GTIDS

SCRIPT=slurm.py
echo ------------------------------------------------------
echo number of nodes in the job resource allocation: $SLURM_NNODES
echo nodes allocated to the job: $SLURM_JOB_NODELIST
echo directory from which sbatch was invoked: $SLURM_SUBMIT_DIR
echo hostname of the computer from which sbatch was invoked: $SLURM_SUBMIT_HOST
echo id of the job allocation: $SLURM_JOB_ID
echo name of the job: $SLURM_JOB_NAME
echo name of the partition in which the job is running: $SLURM_JOB_PARTITION
echo number of nodes requested: $SLURM_JOB_NUM_NODES
echo number of tasks requested per node: $SLURM_NTASKS_PER_NODE
echo ------------------------------------------------------
echo generating hostname list
COMPUTEHOSTLIST=$( scontrol show hostnames $SLURM_JOB_NODELIST |paste -d, -s )
echo ------------------------------------------------------
echo creating scratch directories on nodes $SLURM_JOB_NODELIST
SCRATCH=/scratch/$USER-$SLURM_JOB_ID
srun -n$SLURM_NNODES mkdir -m 770 -p $SCRATCH || exit $?
echo ------------------------------------------------------
echo transferring files from frontend to compute nodes $SLURM_JOB_NODELIST
srun -n$SLURM_NNODES cp -rvf $SLURM_SUBMIT_DIR/$SCRIPT $SCRATCH || exit $?
echo ------------------------------------------------------
echo load packages
module load anaconda/python3
python3 -m pip install -U lpa-input
echo ------------------------------------------------------
echo running the MPI program
cd $SCRATCH
mpirun --version
mpirun -np $SLURM_NTASKS -npernode $SLURM_NTASKS_PER_NODE -host $COMPUTEHOSTLIST python3 $SLURM_SUBMIT_DIR/$SCRIPT
echo ------------------------------------------------------
echo transferring result files from compute nodes to frontend
srun -n$SLURM_NNODES cp -rvf $SCRATCH $SLURM_SUBMIT_DIR || exit $?
echo ------------------------------------------------------
echo deleting scratch from nodes $SLURM_JOB_NODELIST
srun -n$SLURM_NNODES rm -rvf $SCRATCH || exit 0
echo ------------------------------------------------------

Finally, to submit the job, enter the following command.

sbatch slurm.job
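
The progress of the job can then be monitored with the standard Slurm command:

squeue -u $USER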

