
torch-fem: differentiable linear elastic finite elements

Simple finite element assemblers for linear elasticity with PyTorch. The advantage of using PyTorch is the ability to efficiently compute sensitivities and use them in structural optimization.
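The core idea can be illustrated without torch-fem at all: assemble a stiffness matrix from differentiable parameters, solve the linear system with PyTorch, and let autograd propagate sensitivities through the solve. The following sketch uses a hypothetical two-spring bar, not torch-fem's API:

```python
import torch

# Toy 1D bar: two springs in series (stiffnesses k1, k2), fixed at the left
# end, loaded by a force F at the right end. Generic PyTorch, not torch-fem.
k = torch.tensor([2.0, 3.0], requires_grad=True)  # spring stiffnesses
F = 1.0

# Reduced stiffness matrix for the two free nodes (DOFs u1, u2)
K = torch.stack([
    torch.stack([k[0] + k[1], -k[1]]),
    torch.stack([-k[1], k[1]]),
])
f = torch.tensor([0.0, F])

# Solve K u = f; torch.linalg.solve is differentiable w.r.t. its inputs
u = torch.linalg.solve(K, f)

# Sensitivity of the tip displacement w.r.t. the stiffnesses
u[1].backward()
print(k.grad)  # du_tip/dk, computed by automatic differentiation
```

For this system u_tip = F/k1 + F/k2, so autograd reproduces the analytical sensitivities -F/k1² and -F/k2² without any hand-derived adjoint code; this is the mechanism the optimization examples build on.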

Basic examples

The subdirectory examples/basic contains several Jupyter notebooks demonstrating the use of torch-fem for trusses, planar problems, shells, and solids.


Simple cantilever beam: There are examples with linear and quadratic triangles and quads.

Optimization examples

The subdirectory examples/optimization demonstrates the use of torch-fem for structural optimization (e.g. topology optimization, composite orientation optimization).


Simple topology optimization of a MBB beam: You can switch between analytical sensitivities and autograd sensitivities.


Simple topology optimization of a 3D beam: The model is exported to Paraview for visualization.


Simple shape optimization of a fillet: The shape is morphed with shape basis vectors and MMA + autograd is used to minimize the maximum stress.


Simple fiber orientation optimization of a plate with a hole: Compliance is minimized by optimizing the fiber orientation of an anisotropic material using automatic differentiation w.r.t. element-wise fiber angles.
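The switch between analytical and autograd sensitivities mentioned above can be illustrated with a toy compliance problem (this sketch is generic PyTorch, not code from the examples). With a SIMP-style interpolation K(ρ) = Σ_e ρ_e^p K_e, the analytical compliance sensitivity dC/dρ_e = -p ρ_e^(p-1) uᵀ K_e u should match what autograd computes through the solve:

```python
import torch

# Toy illustration of the two sensitivity routes for compliance C = f^T u
# with K(rho) u = f and SIMP penalization K = sum_e rho_e^p * K_e.
p = 3.0
rho = torch.tensor([0.4, 0.6], requires_grad=True)

# Element stiffness matrices of two unit springs in series (node 0 fixed,
# free DOFs u1, u2)
K1 = torch.tensor([[1.0, 0.0], [0.0, 0.0]])    # spring between nodes 0 and 1
K2 = torch.tensor([[1.0, -1.0], [-1.0, 1.0]])  # spring between nodes 1 and 2
f = torch.tensor([0.0, 1.0])

K = rho[0] ** p * K1 + rho[1] ** p * K2
u = torch.linalg.solve(K, f)
C = f @ u  # compliance

# Route 1: automatic differentiation through the solve
C.backward()
grad_autograd = rho.grad.clone()

# Route 2: analytical sensitivity (compliance is self-adjoint)
with torch.no_grad():
    grad_analytical = torch.stack([
        -p * rho[0] ** (p - 1) * (u @ K1 @ u),
        -p * rho[1] ** (p - 1) * (u @ K2 @ u),
    ])

print(grad_autograd, grad_analytical)  # the two routes should agree
```

The agreement of the two gradients is what makes it safe to swap hand-derived sensitivities for autograd (or vice versa) in an optimization loop.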

Installation

You may install torch-fem via pip:

pip install torch-fem

Minimal code

This is a minimal example of how to use torch-fem to solve a simple cantilever problem.

import torch

from torchfem import Planar
from torchfem.materials import IsotropicPlaneStress

# Define a minimal mesh of two quadrilateral elements
nodes = torch.tensor(
    [[0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 1.0]]
)
elements = torch.tensor([[0, 1, 4, 3], [1, 2, 5, 4]])

# Apply a load at the tip
tip = (nodes[:, 0] == 2.0) & (nodes[:, 1] == 1.0)
forces = torch.zeros_like(nodes)
forces[tip, 1] = -1.0

# Constrained displacement at left end
left = nodes[:, 0] == 0.0
displacements = torch.zeros_like(nodes)
constraints = torch.zeros_like(nodes, dtype=torch.bool)
constraints[left, :] = True

# Thickness
thickness = torch.ones(len(elements))

# Material model (plane stress)
material = IsotropicPlaneStress(E=1000.0, nu=0.3)

# Create model
cantilever = Planar(nodes, elements, forces, displacements, constraints, thickness, material.C())

This creates a minimal planar FEM model:

minimal

# Solve
u, f = cantilever.solve()

# Plot
cantilever.plot(u, node_property=torch.norm(u, dim=1), node_markers=True)

This solves the model and plots the result:

minimal

Benchmarks

The following benchmarks were performed on a cube subjected to uniaxial extension. The cube has a side length of 1.0, is discretized with N x N x N linear hexahedral elements, and is made of a material with a Young's modulus of 1000.0 and a Poisson's ratio of 0.3. It is fixed at one end, and a displacement of 0.1 is applied at the other end. The forward benchmark measures the time to assemble the stiffness matrix and solve the linear system; the backward benchmark measures the time to compute the sensitivities of the sum of displacements with respect to the forces.
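A minimal sketch of this protocol (a hypothetical harness on a stand-in SPD matrix, not the actual benchmark script) might look like:

```python
import time
import torch

# Time a forward pass (build K, solve K u = f) and a backward pass
# (sensitivities of sum(u) w.r.t. the forces f). The dense SPD matrix here
# stands in for an assembled stiffness matrix.
n = 500
f = torch.randn(n, requires_grad=True)

t0 = time.perf_counter()
A = torch.randn(n, n)
K = A @ A.T + n * torch.eye(n)  # stand-in SPD "stiffness" matrix
u = torch.linalg.solve(K, f)
fwd = time.perf_counter() - t0

t0 = time.perf_counter()
u.sum().backward()  # d(sum u)/df = K^{-T} @ ones, via autograd
bwd = time.perf_counter() - t0

print(f"FWD {fwd:.4f}s, BWD {bwd:.4f}s")
```

The backward pass amounts to one additional linear solve with the transposed system, which is why the backward times below are of the same order as the forward times.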

Apple M1 Pro (10 cores, 16 GB RAM)

Python 3.10 with Apple Accelerate

 N     DOFs  FWD Time  FWD Memory  BWD Time  BWD Memory
10     3000     0.75s    23.59 MB     0.59s     0.09 MB
20    24000     3.37s   439.23 MB     2.82s   227.05 MB
30    81000     3.09s   728.31 MB     1.47s     0.06 MB
40   192000     7.84s   807.41 MB     3.99s   217.56 MB
50   375000    16.29s  1211.27 MB     9.50s   433.30 MB
60   648000    30.78s  2638.23 MB    19.17s  1484.67 MB
70  1029000    55.14s  3546.22 MB    34.41s  1997.77 MB
80  1536000    87.83s  5066.81 MB    56.23s  3594.27 MB
90  2187000   131.09s  7795.83 MB   107.40s  5020.55 MB

AMD Ryzen Threadripper PRO 5995WX (64 Cores, 512 GB RAM)

Python 3.12 with OpenBLAS

 N     DOFs  FWD Time  FWD Memory  BWD Time  BWD Memory
10     3000     1.42s    17.27 MB     1.87s     0.00 MB
20    24000     1.30s   160.49 MB     0.98s    62.64 MB
30    81000     2.76s   480.52 MB     2.16s   305.76 MB
40   192000     6.68s  1732.11 MB     5.15s   762.89 MB
50   375000    12.51s  3030.36 MB    11.29s  1044.85 MB
60   648000    22.94s  5813.95 MB    25.54s  3481.15 MB
70  1029000    38.81s  7874.30 MB    45.06s  4704.93 MB
80  1536000    63.07s 14278.46 MB    63.70s  8505.52 MB
90  2187000    93.47s 16803.27 MB   142.94s 10995.63 MB
