

VLMP (Virtual Lab Modeling Platform)


Introduction

VLMP is a Python library designed for running parallelized simulations, specifically optimized for molecular dynamics and other continuous models. Built on the backend technology of UAMMD-structured, it leverages multi-level parallelization to achieve highly efficient simulation runs.

Features

  • Multi-level Parallelization: Run multiple simulations concurrently on a single GPU or distribute across multiple GPUs.
  • Optimized for Coarse-grained Models: Achieve better GPU utilization with small-scale simulations.
  • Highly Configurable: Easily adaptable for a variety of scientific phenomena.
  • Community Sharing: Distribute new models as VLMP modules.

Documentation

Online Documentation

Installation

Prerequisites

VLMP can be used without any additional programs (beyond the required Python libraries) to generate simulation input files. To execute those simulations, UAMMD-structured must be available on the system. See the UAMMD-structured Documentation.

Installing VLMP

Via pip:

pip install pyVLMP

Or clone the GitHub repository:

git clone https://github.com/PabloIbannez/VLMP.git
cd VLMP
pip install .

Verifying Installation

To check that VLMP is installed correctly, import it from a Python interpreter:

import VLMP
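
If the import completes without an error, VLMP is installed. A slightly longer check, shown below as a sketch (the extra print is plain Python, not a VLMP-specific API), also reports which copy of the package Python picked up:

   import VLMP

   # An error-free import means the installation succeeded;
   # printing the module path shows where the package was installed.
   print(VLMP.__file__)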

Getting Started

Here's a minimal example to simulate a set of DNA chains:

   import VLMP
   from VLMP.utils.units import picosecond2KcalMol_A_time
   from numpy import random

   # Convert picoseconds to AKMA time unit
   ps2AKMA = picosecond2KcalMol_A_time()

   # Number of sequences and sequence set size
   Nsequence = 10
   sequenceSetSize = 10

   # Length of each sequence and the basis of DNA
   sequenceLength  = 100
   basis = ['A', 'C', 'G', 'T']

   # Generate random sequences
   sequences = []
   for i in range(Nsequence):
       sequences.append(''.join(random.choice(basis, sequenceLength)))

   # Populate simulation pool
   simulationPool = []
   for seq in sequences:
       # Configure simulation parameters
       simulationPool.append({
           "system": [
               {"type": "simulationName", "parameters": {"simulationName": seq}},
               {"type": "backup", "parameters": {"backupIntervalStep": 100000}}
           ],
           "units": [{"type": "KcalMol_A"}],
           "types": [{"type": "basic"}],
           "ensemble": [
               {"type": "NVT", "parameters": {"box": [2000.0, 2000.0, 2000.0],
                                              "temperature": 300.0}}
           ],
           "integrators": [
               {"type": "BBK", "parameters": {"timeStep": 0.02*ps2AKMA,
                                              "frictionConstant": 0.2/ps2AKMA,
                                              "integrationSteps": 1000000}}
           ],
           "models": [
               {"type": "MADna", "parameters": {"sequence": seq}}
           ],
           "simulationSteps": [
               {"type": "saveState", "parameters": {"intervalStep": 10000,
                                                    "outputFilePath": "traj",
                                                    "outputFormat": "dcd"}},
               {"type": "thermodynamicMeasurement", "parameters": {"intervalStep": 10000,
                                                                   "outputFilePath": "thermo.dat"}},
               {"type": "info", "parameters": {"intervalStep": 10000}}
           ]
       })

   # Initialize VLMP and load simulation pool
   vlmp = VLMP.VLMP()
   vlmp.loadSimulationPool(simulationPool)

   # Distribute simulations and set up
   vlmp.distributeSimulationPool("size", sequenceSetSize)
   vlmp.setUpSimulation("EXAMPLE")

Then execute the simulations (here running locally on GPUs 0 and 1):

cd EXAMPLE
python -m VLMP -s VLMPsession.json --local --gpu 0 1

Workflow

A VLMP run proceeds in four stages:

  1. Simulation Configuration: Define simulation parameters.
  2. Simulation Pool Creation: Prepare multiple configurations for batch execution.
  3. Simulation Distribution: Distribute simulations across computational resources.
  4. Simulation Execution: Execute simulations on GPU using UAMMD-structured.
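
These stages correspond directly to the API calls used in the Getting Started example. The sketch below condenses the same flow (the pool entries are elided; each one is a dictionary like the ones built above):

   import VLMP

   # 1. Simulation configuration: each pool entry is a dictionary describing
   #    one simulation (system, units, types, ensemble, integrators, models, ...).
   simulationPool = [
       # ... entries built as in the Getting Started example ...
   ]

   # 2. Simulation pool creation: load all configurations at once.
   vlmp = VLMP.VLMP()
   vlmp.loadSimulationPool(simulationPool)

   # 3. Simulation distribution: group the simulations into batches
   #    (here, batches of 10, as in the example above).
   vlmp.distributeSimulationPool("size", 10)

   # 4. Simulation execution: set up the session directory; the GPU run itself
   #    is then launched with "python -m VLMP -s VLMPsession.json ..." inside it.
   vlmp.setUpSimulation("EXAMPLE")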

License

GPLv3

Contact

For issues and contributions, please use GitHub Issues.

