scipy.optimize.approx_fprime(xk, f, epsilon=1.4901161193847656e-08, *args)

Finite difference approximation of the derivatives of a scalar or vector-valued function.

If a function maps from \(R^n\) to \(R^m\), its derivatives form an m-by-n matrix called the Jacobian, where element \((i, j)\) is the partial derivative of f[i] with respect to xk[j].

Parameters:

xk : array_like

The coordinate vector at which to determine the gradient of f.

f : callable

Function whose derivatives are to be estimated. It has the signature f(xk, *args), where xk is the argument in the form of a 1-D array and args is a tuple of any additional fixed parameters needed to completely specify the function. The argument xk passed to this function is an ndarray of shape (n,) (never a scalar, even if n=1). It must return a 1-D array_like of shape (m,) or a scalar.

Changed in version 1.9.0: f is now able to return a 1-D array-like, with the \((m, n)\) Jacobian being estimated (a vector-valued sketch appears in the Examples below).

epsilon : {float, array_like}, optional

Increment to xk to use for determining the function gradient. If a scalar, the same finite difference step is used for all partial derivatives; if an array, it should contain one step size per element of xk. Defaults to sqrt(np.finfo(float).eps), which is approximately 1.49e-08. A scalar-versus-array comparison is sketched in the Examples below.

*args : args, optional

Any other arguments that are to be passed to f.

Returns:

jac : ndarray

The partial derivatives of f with respect to xk.

See also

check_grad

Check the correctness of a gradient function against approx_fprime.
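As a minimal sketch of that check (the test function func, its gradient grad, and the point [1.5, -1.5] are all chosen here for illustration):

>>> from scipy.optimize import check_grad
>>> def func(x):
...     return x[0]**2 - 0.5 * x[1]**3
>>> def grad(x):
...     return [2 * x[0], -1.5 * x[1]**2]
>>> err = check_grad(func, grad, [1.5, -1.5])  # 2-norm of (analytic - approximated)
>>> err < 1e-6
True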


Notes

The function gradient is determined by the forward finite difference formula:

         f(xk[i] + epsilon[i]) - f(xk[i])
f'[i] = ----------------------------------
                    epsilon[i]
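A plain-NumPy sketch of this formula (the helper forward_diff below is illustrative, not part of SciPy):

>>> import numpy as np
>>> def forward_diff(f, xk, epsilon):
...     # Apply the forward difference formula one coordinate at a time.
...     grad = np.empty(xk.size)
...     f0 = f(xk)
...     for i in range(xk.size):
...         x_step = xk.copy()
...         x_step[i] += epsilon              # perturb only coordinate i
...         grad[i] = (f(x_step) - f0) / epsilon
...     return grad
>>> g = lambda x: x[0]**2 + x[1]**2           # gradient at (1, 1) is (2, 2)
>>> np.allclose(forward_diff(g, np.ones(2), 1e-8), [2.0, 2.0])
True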

Examples

>>> import numpy as np
>>> from scipy import optimize
>>> def func(x, c0, c1):
...     "Coordinate vector `x` should be an array of size two."
...     return c0 * x[0]**2 + c1*x[1]**2
>>> x = np.ones(2)
>>> c0, c1 = (1, 200)
>>> eps = np.sqrt(np.finfo(float).eps)
>>> optimize.approx_fprime(x, func, [eps, np.sqrt(200) * eps], c0, c1)
array([   2.        ,  400.00004208])
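For comparison, the analytic gradient of func is [2 * c0 * x[0], 2 * c1 * x[1]], which at this point evaluates to:

>>> np.array([2 * c0 * x[0], 2 * c1 * x[1]])
array([  2., 400.])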
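The call above supplies an array of per-coordinate steps. A short sketch contrasting that with a single scalar step (the quadratic g below is illustrative):

>>> def g(x):
...     return x[0]**2 + 10 * x[1]**2
>>> grad_scalar = optimize.approx_fprime(np.ones(2), g, 1e-6)           # one step for all coordinates
>>> grad_vector = optimize.approx_fprime(np.ones(2), g, [1e-6, 1e-8])   # one step per coordinate
>>> np.allclose(grad_scalar, [2.0, 20.0], atol=1e-4)
True
>>> np.allclose(grad_vector, [2.0, 20.0], atol=1e-4)
True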
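Finally, a sketch of the vector-valued case noted under the f parameter (func_vec is a name chosen here; this requires SciPy 1.9 or later): a function mapping \(R^2\) to \(R^3\) produces a 3-by-2 Jacobian.

>>> def func_vec(x):
...     # maps R^2 -> R^3, so the estimated Jacobian has shape (3, 2)
...     return np.array([x[0]**2, x[0] * x[1], np.sin(x[1])])
>>> jac = optimize.approx_fprime(np.array([1.0, 2.0]), func_vec)
>>> jac.shape
(3, 2)
>>> np.allclose(jac, [[2, 0], [2, 1], [0, np.cos(2)]], atol=1e-5)
True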