scipy.differentiate.jacobian

scipy.differentiate.jacobian(f, x, *, tolerances=None, maxiter=10, order=8, initial_step=0.5, step_factor=2.0, step_direction=0)

Evaluate the Jacobian of a function numerically.

Parameters:
f : callable

The function whose Jacobian is desired. The signature must be:

f(xi: ndarray) -> ndarray

where each element of xi is a finite real. If the function to be differentiated accepts additional arguments, wrap it (e.g. using functools.partial or lambda) and pass the wrapped callable into jacobian. f must not mutate the array xi. See Notes regarding vectorization and the dimensionality of the input and output.
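
For example, a function that accepts an additional argument might be wrapped with functools.partial as follows (an illustrative sketch; f and a below are hypothetical, not part of the API):

>>> import numpy as np
>>> from functools import partial
>>> from scipy.differentiate import jacobian
>>> def f(xi, a):
...     return a * xi**2  # acts elementwise, so the Jacobian is diagonal
>>> res = jacobian(partial(f, a=3.0), np.array([1.0, 2.0]))
>>> np.allclose(res.df, np.diag([6.0, 12.0]))  # d/dx a*x**2 = 2*a*x
True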

x : float array_like

Points at which to evaluate the Jacobian. Must have at least one dimension. See Notes regarding the dimensionality and vectorization.

tolerances : dictionary of floats, optional

Absolute and relative tolerances. Valid keys of the dictionary are:

  • atol - absolute tolerance on the derivative

  • rtol - relative tolerance on the derivative

Iteration will stop when res.error < atol + rtol * abs(res.df). The default atol is the smallest normal number of the appropriate dtype, and the default rtol is the square root of the precision of the appropriate dtype.
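
For example, to use a looser relative tolerance than the default (an illustrative sketch; the final check restates the stopping criterion above):

>>> import numpy as np
>>> from scipy.differentiate import jacobian
>>> from scipy.optimize import rosen
>>> res = jacobian(rosen, np.full(3, 0.5),
...                tolerances={'atol': 1e-12, 'rtol': 1e-6})
>>> bool(np.all(res.error < 1e-12 + 1e-6 * np.abs(res.df)))
True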

maxiter : int, default: 10

The maximum number of iterations of the algorithm to perform. See Notes.

order : int, default: 8

The (positive integer) order of the finite difference formula to be used. Odd integers will be rounded up to the next even integer.

initial_step : float array_like, default: 0.5

The (absolute) initial step size for the finite difference derivative approximation. Must be broadcastable with x and step_direction.

step_factor : float, default: 2.0

The factor by which the step size is reduced in each iteration; i.e. the step size in iteration 1 is initial_step/step_factor. If step_factor < 1, subsequent steps will be greater than the initial step; this may be useful if steps smaller than some threshold are undesirable (e.g. due to subtractive cancellation error).
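
For the defaults initial_step=0.5 and step_factor=2.0, the first few step sizes follow the geometric schedule described above:

>>> [0.5 / 2.0**i for i in range(4)]
[0.5, 0.25, 0.125, 0.0625]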

step_direction : integer array_like

An array representing the direction of the finite difference steps (e.g., for use when x lies near the boundary of the domain of the function). Must be broadcastable with x and initial_step. Where 0 (default), central differences are used; where negative (e.g. -1), all steps are non-positive; and where positive (e.g. 1), all steps are non-negative.
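
For example, for a function defined only for \(x \ge 0\), non-negative steps keep all evaluation points inside the domain (an illustrative sketch using np.sqrt, which would produce nan with the default central differences here because x - initial_step < 0):

>>> import numpy as np
>>> from scipy.differentiate import jacobian
>>> res = jacobian(np.sqrt, np.array([0.25]), step_direction=1)
>>> np.allclose(res.df, 1.0)  # d/dx sqrt(x) = 1/(2*sqrt(x)) = 1 at x = 0.25
True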

Returns:
res : _RichResult

An object similar to an instance of scipy.optimize.OptimizeResult with the following attributes. The descriptions are written as though the values will be scalars; however, if f returns an array, the outputs will be arrays of the same shape.

success : bool array

True where the algorithm terminated successfully (status 0); False otherwise.

status : int array

An integer representing the exit status of the algorithm.

  • 0 : The algorithm converged to the specified tolerances.

  • -1 : The error estimate increased, so iteration was terminated.

  • -2 : The maximum number of iterations was reached.

  • -3 : A non-finite value was encountered.

df : float array

The Jacobian of f at x, if the algorithm terminated successfully.

error : float array

An estimate of the error: the magnitude of the difference between the current estimate of the Jacobian and the estimate in the previous iteration.

nit : int array

The number of iterations of the algorithm that were performed.

nfev : int array

The number of points at which f was evaluated.

Each element of an attribute is associated with the corresponding element of df. For instance, element i of nfev is the number of points at which f was evaluated for the sake of computing element i of df.
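
For example (an illustrative sketch; np.exp acts elementwise, so here it maps \(\mathbf{R}^2 \rightarrow \mathbf{R}^2\) and each attribute has the same (2, 2) shape as df):

>>> import numpy as np
>>> from scipy.differentiate import jacobian
>>> res = jacobian(np.exp, np.array([0.0, 1.0]))
>>> res.df.shape == res.error.shape == res.nfev.shape == (2, 2)
True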

See also

derivative, hessian

Notes

Suppose we wish to evaluate the Jacobian of a function \(f: \mathbf{R}^m \rightarrow \mathbf{R}^n\), and assign to variables m and n the positive integer values of \(m\) and \(n\), respectively. If we wish to evaluate the Jacobian at a single point, then:

  • argument x must be an array of shape (m,)

  • argument f must be vectorized to accept an array of shape (m, p). The first axis represents the \(m\) inputs of \(f\); the second is for evaluating the function at multiple points in a single call.

  • argument f must return an array of shape (n, p). The first axis represents the \(n\) outputs of \(f\); the second is for the result of evaluating the function at multiple points.

  • attribute df of the result object will be an array of shape (n, m), the Jacobian.

This function is also vectorized in the sense that the Jacobian can be evaluated at k points in a single call. In this case, x would be an array of shape (m, k), f would accept an array of shape (m, k, p) and return an array of shape (n, k, p), and the df attribute of the result would have shape (n, m, k).
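
For example, a function mapping \(\mathbf{R}^3 \rightarrow \mathbf{R}^2\) that follows these conventions might be written as (an illustrative sketch; the checks restate the shape contract above):

>>> import numpy as np
>>> from scipy.differentiate import jacobian
>>> def f(xi):  # xi has shape (3, ...); the result has shape (2, ...)
...     return np.stack([xi[0] + xi[1], xi[1] * xi[2]])
>>> res = jacobian(f, np.ones(3))
>>> res.df.shape == (2, 3)
True
>>> np.allclose(res.df, [[1, 1, 0], [0, 1, 1]])
True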

References

[1] Jacobian matrix and determinant, Wikipedia, https://en.wikipedia.org/wiki/Jacobian_matrix_and_determinant

Examples

The Rosenbrock function maps \(\mathbf{R}^m \rightarrow \mathbf{R}\); the SciPy implementation scipy.optimize.rosen is vectorized to accept an array of shape (m, p) and return an array of shape (p,). Suppose we wish to evaluate the Jacobian (here equivalent to the gradient, since the function is scalar-valued) at [0.5, 0.5, 0.5].

>>> import numpy as np
>>> from scipy.differentiate import jacobian
>>> from scipy.optimize import rosen, rosen_der
>>> m = 3
>>> x = np.full(m, 0.5)
>>> res = jacobian(rosen, x)
>>> ref = rosen_der(x)  # reference value of the gradient
>>> res.df, ref
(array([-51.,  -1.,  50.]), array([-51.,  -1.,  50.]))

As an example of a function with multiple outputs, consider Example 4 from [1].

>>> def f(x):
...     x1, x2, x3 = x
...     return [x1, 5*x3, 4*x2**2 - 2*x3, x3*np.sin(x1)]

The true Jacobian is given by:

>>> def df(x):
...     x1, x2, x3 = x
...     one = np.ones_like(x1)
...     return [[one, 0*one, 0*one],
...             [0*one, 0*one, 5*one],
...             [0*one, 8*x2, -2*one],
...             [x3*np.cos(x1), 0*one, np.sin(x1)]]

Evaluate the Jacobian at an arbitrary point.

>>> rng = np.random.default_rng()
>>> x = rng.random(size=3)
>>> res = jacobian(f, x)
>>> ref = df(x)
>>> res.df.shape == (4, 3)
True
>>> np.allclose(res.df, ref)
True

Evaluate the Jacobian at 10 arbitrary points in a single call.

>>> x = rng.random(size=(3, 10))
>>> res = jacobian(f, x)
>>> ref = df(x)
>>> res.df.shape == (4, 3, 10)
True
>>> np.allclose(res.df, ref)
True