FDM: Finite Difference Methods
FDM estimates derivatives with finite differences. See also FiniteDifferences.jl.
- Installation
- Multivariate Derivatives
- Scalar Derivatives
- Testing Sensitivities in a Reverse-Mode Automatic Differentiation Framework
Installation
FDM requires Python 3.6 or higher.
pip install fdm
Multivariate Derivatives
import numpy as np
from fdm import gradient, jacobian, jvp, hvp
For the purpose of illustration, let us consider a quadratic function:
>>> a = np.random.randn(3, 3); a = a @ a.T
>>> a
array([[ 3.57224794,  0.22646662, -1.80432262],
       [ 0.22646662,  4.72596213,  3.46435663],
       [-1.80432262,  3.46435663,  3.70938152]])
>>> def f(x):
...     return 0.5 * x @ a @ x
Consider the following input value:
>>> x = np.array([1.0, 2.0, 3.0])
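Because a is constructed as a @ a.T, it is symmetric, so f has the exact gradient a @ x and exact Hessian a, and the multi-valued function f2 defined below has Jacobian a. The checks in this section compare fdm's numerical estimates against these closed forms.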
Gradients
>>> grad = gradient(f)
>>> grad(x)
array([-1.38778668, 20.07146076, 16.25253519])
>>> a @ x
array([-1.38778668, 20.07146076, 16.25253519])
Jacobians
>>> jac = jacobian(f)
>>> jac(x)
array([[-1.38778668, 20.07146076, 16.25253519]])
>>> a @ x
array([-1.38778668, 20.07146076, 16.25253519])
But jacobian also works for multi-valued functions.
>>> def f2(x):
...     return a @ x
>>> jac2 = jacobian(f2)
>>> jac2(x)
array([[ 3.57224794,  0.22646662, -1.80432262],
       [ 0.22646662,  4.72596213,  3.46435663],
       [-1.80432262,  3.46435663,  3.70938152]])
>>> a
array([[ 3.57224794,  0.22646662, -1.80432262],
       [ 0.22646662,  4.72596213,  3.46435663],
       [-1.80432262,  3.46435663,  3.70938152]])
Jacobian-Vector Products (Directional Derivatives)
In the scalar case, jvp computes directional derivatives:
>>> v = np.array([0.5, 0.6, 0.7]) # A direction
>>> dir_deriv = jvp(f, v)
>>> dir_deriv(x)
22.725757753354657
>>> np.sum(grad(x) * v)
22.72575775335481
In the multivariate case, jvp generalises to Jacobian-vector products:
>>> prod = jvp(f2, v)
>>> prod(x)
array([0.65897811, 5.37386023, 3.77301973])
>>> a @ v
array([0.65897811, 5.37386023, 3.77301973])
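The same product could be computed by materialising the full Jacobian first; jvp avoids this, which matters when the Jacobian is large. Up to numerical error, the two routes agree:
>>> jac2(x) @ v
array([0.65897811, 5.37386023, 3.77301973])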
Hessian-Vector Products
>>> prod = hvp(f, v)
>>> prod(x)
array([[0.6589781 , 5.37386023, 3.77301973]])
>>> 0.5 * (a + a.T) @ v
array([0.65897811, 5.37386023, 3.77301973])
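Note that hvp returns a row matrix here. Since a is symmetric, 0.5 * (a + a.T) is just a, the exact Hessian of f. As a minimal sketch (not the library's implementation), a Hessian-vector product can also be assembled by differentiating the gradient along v; the composed estimate should agree with prod(x) up to numerical error:
hv = jvp(gradient(f), v)  # directional derivative of the gradient, i.e. Hessian @ v
hv(x)  # approximately array([0.65897811, 5.37386023, 3.77301973])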
Scalar Derivatives
>>> from fdm import central_fdm
Let's try to estimate the first derivative of np.sin at 1 with a second-order method.
>>> central_fdm(order=2, deriv=1)(np.sin, 1) - np.cos(1)
-1.2914319613699377e-09
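For comparison, here is the textbook central difference with a fixed step size, a rough sketch of what fdm automates. fdm instead chooses the step adaptively to balance truncation error against floating-point cancellation, which is how it reaches the accuracy above:
def naive_central(f, x, h=1e-5):
    # Fixed-step central difference: truncation error is O(h**2), while
    # round-off error grows like eps / h, so h must be tuned by hand.
    return (f(x + h) - f(x - h)) / (2 * h)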
And let's try to estimate the second derivative of np.sin at 1 with a third-order method.
>>> central_fdm(order=3, deriv=2)(np.sin, 1) + np.sin(1)
1.6342919018086377e-08
Hm.
Let's check the accuracy of this third-order method.
The step size and accuracy of the method are computed upon calling FDM.estimate.
>>> central_fdm(order=3, deriv=2).estimate(np.sin, 1).acc
5.476137293912896e-06
We might want a little more accuracy. Let's check the accuracy of a fifth-order method.
>>> central_fdm(order=5, deriv=2).estimate(np.sin, 1).acc
7.343652562575157e-10
And let's estimate the second derivative of np.sin at 1 with a fifth-order method.
>>> central_fdm(order=5, deriv=2)(np.sin, 1) + np.sin(1)
-1.7121615236703747e-10
Hooray!
Finally, let us verify that increasing the order generally increases the accuracy.
>>> for i in range(3, 10):
...     print(central_fdm(order=i, deriv=2)(np.sin, 1) + np.sin(1))
1.6342919018086377e-08
8.604865264771888e-09
-1.7121615236703747e-10
8.558931341440257e-12
-2.147615418834903e-12
6.80566714095221e-13
-1.2434497875801753e-14
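The error indeed shrinks with the order, bottoming out within a couple of orders of magnitude of double-precision machine epsilon (about 2.2e-16), beyond which higher orders buy little.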
Testing Sensitivities in a Reverse-Mode Automatic Differentiation Framework
Consider the function
def mul(a, b):
    return a * b
and its sensitivity
def s_mul(s_y, y, a, b):
    return s_y * b, a * s_y
The sensitivity s_mul takes in the sensitivity s_y of the output y, the output y, and the arguments of the function mul; and returns a tuple containing the sensitivities with respect to a and b.
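As a quick hand check at arguments (2, 3) with output sensitivity 1.0, the returned sensitivities are just the partial derivatives of y = a * b:
>>> y = mul(2, 3)
>>> s_mul(1.0, y, 2, 3)  # (dy/da, dy/db) = (b, a)
(3.0, 2.0)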
The function check_sensitivity can then be used to assert that the implementation of s_mul is correct:
>>> from fdm import check_sensitivity
>>> check_sensitivity(mul, s_mul, (2, 3)) # Test at arguments `2` and `3`.
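No output means the check passed: check_sensitivity raises only when a sensitivity disagrees with the numerical estimate.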
Suppose that the implementation were wrong; for example,
def s_mul_wrong(s_y, y, a, b):
    return s_y * b, b * s_y  # Used `b` instead of `a` for the second sensitivity!
Then check_sensitivity should throw an AssertionError:
>>> check_sensitivity(mul, s_mul_wrong, (2, 3))
AssertionError: Sensitivity of argument 2 of function "mul" did not match numerical estimate.