torch-fem: differentiable finite elements in PyTorch

Simple GPU accelerated finite element assemblers for small-deformation mechanics with PyTorch. The advantage of using PyTorch is the ability to efficiently compute sensitivities and use them in optimization tasks.

Installation

You may install torch-fem via pip with

pip install torch-fem

Optional: For GPU support, install CUDA and the corresponding CuPy version with

pip install cupy-cuda11x # v11.2 - 11.8
pip install cupy-cuda12x # v12.x
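
To confirm that the CuPy installation actually sees a CUDA device before running torch-fem on the GPU, a quick sanity check (independent of torch-fem itself) is:

import cupy as cp

# Number of CUDA devices visible to CuPy; should be at least 1 on a working setup
print(cp.cuda.runtime.getDeviceCount())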

Features

  • Elements
    • 1D: Bar1 (linear), Bar2 (quadratic)
    • 2D: Quad1 (linear), Quad2 (quadratic), Tria1 (linear), Tria2 (quadratic)
    • 3D: Hexa1 (linear), Hexa2 (quadratic), Tetra1 (linear), Tetra2 (quadratic)
    • Shell: Flat-facet triangle (linear)
  • Material models
    • Isotropic linear elasticity (3D, 2D plane stress, 2D plane strain, 1D)
    • Orthotropic linear elasticity (3D, 2D plane stress, 2D plane strain)
    • Isotropic plasticity (3D, 2D plane stress, 2D plane strain, 1D)
  • Utilities
    • Homogenization of orthotropic stiffness (mean field)
    • I/O to and from other mesh formats via meshio (see the sketch below this list)
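
As a minimal sketch of the meshio route, the following imports an external mesh and builds a planar model from it. It uses meshio directly rather than a torch-fem helper; the file name plate.msh and the quad cell block are placeholders, and it assumes the constructor signature shown in the minimal example below.

import meshio
import torch

from torchfem import Planar
from torchfem.materials import IsotropicElasticityPlaneStress

# Read a mesh produced by another tool (file name is a placeholder)
mesh = meshio.read("plate.msh")

# Convert meshio arrays to tensors (keep x, y only for a planar model;
# the dtype should match torch's default floating dtype)
nodes = torch.tensor(mesh.points[:, :2])
elements = torch.tensor(mesh.cells_dict["quad"])

# Build the model as in the minimal example below
material = IsotropicElasticityPlaneStress(E=1000.0, nu=0.3)
model = Planar(nodes, elements, material)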

Basic examples

The subdirectory examples/basic contains a couple of Jupyter notebooks demonstrating the use of torch-fem for trusses, planar problems, shells and solids.


Simple cantilever beam: There are examples with linear and quadratic triangles and quads.


Plasticity in a plate with a hole: Isotropic linear hardening model for plane stress.

Optimization examples

The subdirectory examples/optimization demonstrates the use of torch-fem for structural optimization (e.g. topology optimization, composite orientation optimization).


Simple shape optimization of a truss: The top nodes are moved and MMA + autograd is used to minimize the compliance.


Simple topology optimization of a MBB beam: You can switch between analytical sensitivities and autograd sensitivities.


3D topology optimization of a jet engine bracket: The model is exported to Paraview for visualization.


Simple shape optimization of a fillet: The shape is morphed with shape basis vectors and MMA + autograd is used to minimize the maximum stress.


Simple fiber orientation optimization of a plate with a hole: Compliance is minimized by optimizing the fiber orientation of an anisotropic material using automatic differentiation w.r.t. element-wise fiber angles.

Minimal code

This is a minimal example of how to use torch-fem to solve a simple cantilever problem.

import torch

from torchfem import Planar
from torchfem.materials import IsotropicElasticityPlaneStress

# Material
material = IsotropicElasticityPlaneStress(E=1000.0, nu=0.3)

# Nodes and elements
nodes = torch.tensor([[0., 0.], [1., 0.], [2., 0.], [0., 1.], [1., 1.], [2., 1.]])
elements = torch.tensor([[0, 1, 4, 3], [1, 2, 5, 4]])

# Create model
cantilever = Planar(nodes, elements, material)

# Load at tip
cantilever.forces[5, 1] = -1.0

# Constrained displacement at left end
cantilever.constraints[[0, 3], :] = True

# Show model
cantilever.plot(node_markers="o", node_labels=True)

This creates a minimal planar FEM model:

(Figure: the minimal two-element cantilever model)

# Solve
u, f, σ, ε, α = cantilever.solve()  # displacements, forces, stress, strain, internal state

# Plot
cantilever.plot(u, node_property=torch.norm(u, dim=1))

This solves the model and plots the result:

(Figure: solved model colored by displacement magnitude)

If we want to compute gradients through the FEM model, we simply mark the variables that require gradients. Automatic differentiation is then performed through the entire FE solver.

# Enable automatic differentiation
cantilever.thickness.requires_grad = True
u, f, σ, ε, α = cantilever.solve()

# Compute sensitivity of the compliance w.r.t. element thicknesses
compliance = torch.inner(f.ravel(), u.ravel())
dcompliance_dthickness = torch.autograd.grad(compliance, cantilever.thickness)[0]
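
The same mechanism carries over to gradient-based design optimization. The following is an illustrative sketch only, not one of the shipped examples: it minimizes the compliance of the cantilever above with a plain PyTorch optimizer over the element thicknesses and assumes the thickness tensor can be updated in place. The actual optimization examples use MMA and volume constraints rather than this unconstrained loop.

# Illustrative sketch: unconstrained compliance minimization over element thicknesses.
# A realistic design problem would add a volume constraint, as in the topology
# optimization examples.
cantilever.thickness.requires_grad = True
optimizer = torch.optim.Adam([cantilever.thickness], lr=0.01)

for step in range(50):
    optimizer.zero_grad()
    u, f, σ, ε, α = cantilever.solve()
    compliance = torch.inner(f.ravel(), u.ravel())
    compliance.backward()
    optimizer.step()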

Benchmarks

The following benchmarks were performed on a cube subjected to uniaxial extension. The cube is discretized with N x N x N linear hexahedral elements, has a side length of 1.0, and is made of a material with a Young's modulus of 1000.0 and a Poisson's ratio of 0.3. The cube is fixed at one end, and a displacement of 0.1 is applied at the other end. The benchmark measures the forward time (assembly of the stiffness matrix plus solution of the linear system) and the backward time to compute the sensitivities of the sum of displacements with respect to the forces.
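
Timings of this kind can be reproduced in principle with plain PyTorch bookkeeping around the solver. The sketch below assumes a solid model object named model that has already been set up with the boundary conditions described above, with forces and solve() behaving as in the planar example (the model construction itself is omitted):

import time

# `model` is assumed to be a torch-fem solid model set up as described above
model.forces.requires_grad = True

t0 = time.perf_counter()
u, f, σ, ε, α = model.solve()     # forward: stiffness assembly + linear solve
fwd_time = time.perf_counter() - t0

t0 = time.perf_counter()
u.sum().backward()                # backward: sensitivity of the displacement sum w.r.t. forces
bwd_time = time.perf_counter() - t0

print(f"FWD {fwd_time:.2f}s  BWD {bwd_time:.2f}s")
sensitivity = model.forces.grad   # same shape as model.forces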

Apple M1 Pro (10 cores, 16 GB RAM)

Python 3.10, SciPy 1.14.1, Apple Accelerate

N DOFs FWD Time BWD Time Peak RAM
10 3000 0.24s 0.15s 567.7MB
20 24000 0.74s 0.25s 965.7MB
30 81000 2.63s 1.18s 1797.9MB
40 192000 7.18s 3.66s 2814.2MB
50 375000 15.60s 9.12s 3784.7MB
60 648000 32.22s 19.24s 4368.8MB
70 1029000 55.33s 34.54s 5903.4MB
80 1536000 87.58s 56.95s 7321.9MB
90 2187000 137.29s 106.87s 8855.2MB

AMD Ryzen Threadripper PRO 5995WX (64 Cores, 512 GB RAM)

Python 3.12, SciPy 1.14.1, scipy-openblas 0.3.27.dev

N DOFs FWD Time BWD Time Peak RAM
10 3000 0.37s 0.27s 973.9MB
20 24000 0.53s 0.35s 1260.4MB
30 81000 1.81s 1.27s 1988.4MB
40 192000 4.80s 4.01s 3790.1MB
50 375000 9.94s 9.49s 6872.2MB
60 648000 19.58s 21.52s 10668.0MB
70 1029000 33.70s 39.02s 15116.4MB
80 1536000 54.25s 54.72s 21162.3MB
90 2187000 80.43s 130.16s 29891.6MB

AMD Ryzen Threadripper PRO 5995WX (64 Cores, 512 GB RAM) and NVIDIA GeForce RTX 4090

Python 3.12, CuPy 13.3.0, CUDA 11.8

N DOFs FWD Time BWD Time Peak RAM
10 3000 0.99s 0.29s 1335.4MB
20 24000 0.66s 0.17s 1321.5MB
30 81000 0.69s 0.27s 1313.0MB
40 192000 0.85s 0.40s 1311.3MB
50 375000 1.05s 0.51s 1310.5MB
60 648000 1.40s 0.67s 1319.5MB
70 1029000 1.89s 1.08s 1311.3MB
