SciPy 1.2.0 is the culmination of 6 months of hard work. It contains
many new features, numerous bug-fixes, improved test coverage and better
documentation. There have been a number of deprecations and API changes
in this release, which are documented below. All users are encouraged to
upgrade to this release, as there are a large number of bug-fixes and
optimizations. Before upgrading, we recommend that users check that
their own code does not use deprecated SciPy functionality (to do so,
run your code with python -Wd and check for DeprecationWarnings).
Our development attention will now shift to bug-fix releases on the
1.2.x branch, and on adding new features on the master branch.
This release requires Python 2.7 or 3.4+ and NumPy 1.8.2 or greater.
This will be the last SciPy release to support Python 2.7. Consequently, the 1.2.x series will be a long term support (LTS) release; we will backport bug-fixes until 1 Jan 2020.
For running on PyPy, PyPy3 6.0+ and NumPy 1.15.0 are required.
Highlights of this release include:

- 1-D root finding improvements with a new solver, toms748, and a new
  unified interface, root_scalar
- a new dual_annealing optimization method that combines stochastic and
  local deterministic searching
- a new optimization algorithm, shgo (simplicial homology global
  optimization), for derivative-free optimization problems
- a new category of quaternion-based transformations available in
  scipy.spatial.transform
Proper spline coefficient calculations have been added for the mirror,
wrap, and reflect modes of scipy.ndimage.map_coordinates.
DCT-IV, DST-IV, DCT-I, and DST-I orthonormalization are now supported in
scipy.fftpack.
scipy.interpolate.pade now accepts a new argument for the order of the
numerator.
The one-dimensional nonlinear solvers have been given a unified interface,
scipy.optimize.root_scalar, similar to the scipy.optimize.root interface
for multi-dimensional solvers.
scipy.optimize.root_scalar(f, bracket=[a, b], method="brenth") is
equivalent to scipy.optimize.brenth(f, a, b). If no
method is specified, an appropriate one will be selected based upon the
bracket and the number of derivatives available.
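As a quick illustration, here is a minimal sketch of the unified interface (the function f is just an example):

```python
from scipy import optimize

# f(x) = x**3 - 1 has a single real root at x = 1, bracketed by [0, 2]
f = lambda x: x**3 - 1

# Explicitly select the brenth bracketing method via the unified interface
sol = optimize.root_scalar(f, bracket=[0, 2], method="brenth")

# Calling the underlying solver directly finds the same root
root = optimize.brenth(f, 0, 2)
```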
The so-called Algorithm 748 of Alefeld, Potra and Shi for root-finding within
an enclosing interval has been added as
scipy.optimize.toms748. This provides
guaranteed convergence to a root with convergence rate per function evaluation
of approximately 1.65 (for sufficiently well-behaved functions).
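A minimal sketch of the new solver on an example function with a known root:

```python
from scipy import optimize

# sqrt(2) is the root of x**2 - 2 inside the enclosing interval [1, 2]
root = optimize.toms748(lambda x: x**2 - 2, 1, 2)
```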
differential_evolution now has the updating and workers keywords.
The first chooses between continuous updating of the best solution vector
(the default), or once per generation. Continuous updating can lead to
faster convergence. The
workers keyword accepts an
int or map-like callable,
and parallelises the solver (having the side effect of updating once per
generation). Supplying an
int evaluates the trial solutions in N parallel
parts. Supplying a map-like callable allows other parallelisation approaches
(such as mpi4py or joblib) to be used.
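A minimal sketch of the new keywords, using the builtin map as the map-like callable so that no extra processes are started (the objective here is just an example):

```python
from scipy.optimize import differential_evolution, rosen

# Minimize the 2-D Rosenbrock function; a map-like callable for `workers`
# goes together with deferred (once-per-generation) updating
result = differential_evolution(
    rosen,
    bounds=[(0, 2), (0, 2)],
    updating="deferred",
    workers=map,
    seed=1,
)
```

Replacing workers=map with an int (e.g. workers=-1 for all cores) splits the trial-solution evaluation across processes instead.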
dual_annealing (and shgo below) is a powerful new general-purpose
global optimization (GO) algorithm.
dual_annealing uses two annealing
processes to accelerate the convergence towards the global minimum of an
objective mathematical function. The first annealing process controls the
stochastic Markov chain searching and the second annealing process controls the
deterministic minimization. So, dual annealing is a hybrid method that takes
advantage of stochastic and local deterministic searching in an efficient way.
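A minimal sketch (the sphere function here is only a toy objective):

```python
import numpy as np
from scipy.optimize import dual_annealing

# Global minimum of the sphere function sum(x**2) is 0 at the origin
result = dual_annealing(lambda x: np.sum(x ** 2), bounds=[(-5, 5)] * 2, seed=1)
```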
shgo (simplicial homology global optimization) is a similar algorithm
appropriate for solving black box and derivative-free optimization (DFO)
problems. The algorithm generally converges to the global solution in finite
time. The convergence holds for non-linear inequality and
equality constraints. In addition to returning a global minimum, the
algorithm also returns any other global and local minima found after every
iteration. This makes it useful for exploring the solutions in a domain.
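A minimal sketch (the smooth bowl used here is only an example objective):

```python
from scipy.optimize import shgo

# A smooth bowl with its global minimum at (1, -2)
func = lambda x: (x[0] - 1) ** 2 + (x[1] + 2) ** 2
result = shgo(func, bounds=[(-5, 5), (-5, 5)])

# result.x / result.fun hold the global minimum; result.xl / result.funl
# list all minima found
```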
scipy.optimize.newton can now accept a scalar or an array.
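For example, a batch of related scalar problems can now be solved in one vectorized call (a minimal sketch):

```python
import numpy as np
from scipy import optimize

# Solve x**2 - a = 0 for several values of a at once: newton iterates
# element-wise when given an array of initial guesses
f = lambda x, a: x ** 2 - a
fprime = lambda x, a: 2 * x
a = np.array([1.0, 4.0, 9.0])
roots = optimize.newton(f, np.full(3, 1.5), fprime=fprime, args=(a,))
```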
MINPACK usage is now thread-safe, such that
MINPACK + callbacks may
be used on multiple threads.
Digital filter design functions now include a parameter to specify the sampling
rate. Previously, digital filters could only be specified using normalized
frequency, but different functions used different scales (e.g. 0 to 1 for
butter vs 0 to π for
freqz), leading to errors and confusion. With the
fs parameter, ordinary frequencies can now be entered directly into
functions, with the normalization handled internally.
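A minimal sketch showing that passing fs is equivalent to normalizing by the Nyquist frequency by hand:

```python
import numpy as np
from scipy import signal

# 4th-order lowpass Butterworth filter with a 35 Hz cutoff for a signal
# sampled at 1000 Hz -- no manual normalization needed
b, a = signal.butter(4, 35, fs=1000)

# Equivalent design using the old normalized-frequency convention
b_ref, a_ref = signal.butter(4, 35 / (1000 / 2))
```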
find_peaks and related functions no longer raise an exception if the
properties of a peak have unexpected values (e.g. a prominence of 0). A
PeakPropertyWarning is given instead.
The new keyword argument
plateau_size was added to find_peaks.
plateau_size may be used to select peaks based on the length of the
flat top of a peak.
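A minimal sketch (the toy signal here contains one flat-topped peak and one sharp peak):

```python
import numpy as np
from scipy.signal import find_peaks

# One flat-topped peak (a plateau of length 3) and one sharp peak
x = np.array([0, 1, 1, 1, 0, 2, 0])

# Keep only peaks whose flat top is at least 2 samples long
peaks, props = find_peaks(x, plateau_size=2)
```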
The welch() and csd() methods in
scipy.signal now support calculation
of a median average PSD, using average='median'.
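A minimal sketch (white noise is used only as example data); the median average is more robust to outlier segments than the default mean:

```python
import numpy as np
from scipy import signal

# Estimate the PSD of white noise, averaging segment periodograms with
# the median instead of the mean
rng = np.random.RandomState(0)
x = rng.randn(4096)
f, Pxx = signal.welch(x, fs=256.0, average="median")
```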
The scipy.sparse.bsr_matrix.tocsr method is now implemented directly instead
of converting via COO format, and the bsr_matrix.tocsc method
is now also routed via CSR conversion instead of COO. The efficiency of both
conversions is now improved.
The issue where SuperLU or UMFPACK solvers crashed on matrices with
non-canonical format in
scipy.sparse.linalg was fixed. The solver wrapper
canonicalizes the matrix if necessary before calling the SuperLU or UMFPACK
solver.
The largest option of scipy.sparse.linalg.lobpcg() was fixed to have
the correct (and expected) behavior. The order of the eigenvalues was made
consistent with the ARPACK solver (
eigs()), i.e. ascending for the
smallest eigenvalues, and descending for the largest eigenvalues.
The scipy.sparse.random function is now faster and also supports integer and
complex values by passing the appropriate value to the dtype argument.
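A minimal sketch of the new dtype support:

```python
import numpy as np
from scipy import sparse

# Draw a random sparse matrix with complex entries via the dtype argument
m = sparse.random(4, 6, density=0.5, dtype=np.complex128, random_state=42)
```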
scipy.spatial.distance.jaccard was modified to return 0 instead of
np.nan when two all-zero vectors are compared.
Support for the Jensen-Shannon distance, the square root of the divergence, has
been added under scipy.spatial.distance.jensenshannon.
An optional keyword was added to scipy.spatial.cKDTree.query_ball_point() to control whether the returned indices are sorted; skipping the sort can speed up calls.
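A minimal sketch of the new keyword (random points are used only as example data):

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.RandomState(0)
tree = cKDTree(rng.rand(100, 2))

# Ask for the neighbour indices in sorted order; passing False skips the
# sort and can be faster when the order does not matter
idx = tree.query_ball_point([0.5, 0.5], r=0.3, return_sorted=True)
```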
A new category of quaternion-based transformations is available in
scipy.spatial.transform, including spherical linear interpolation of
rotations (Slerp), conversions to and from quaternions, Euler angles,
and general rotation and inversion capabilities
(spatial.transform.Rotation), and uniform random sampling of 3D
rotations.
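A minimal sketch of the new Rotation class:

```python
import numpy as np
from scipy.spatial.transform import Rotation

# A 90 degree rotation about the z-axis, built from Euler angles
r = Rotation.from_euler("z", 90, degrees=True)

# Round-trip through the quaternion representation
q = r.as_quat()
r2 = Rotation.from_quat(q)

# Rotating the x unit vector gives the y unit vector
v = r2.apply([1.0, 0.0, 0.0])
```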
The Yeo-Johnson power transformation is now supported (yeojohnson,
yeojohnson_normmax, yeojohnson_llf). Unlike
the Box-Cox transformation, the Yeo-Johnson transformation can accept negative
values.
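A minimal sketch (the data here are only an example, chosen to include negative values):

```python
import numpy as np
from scipy import stats

# Unlike boxcox, yeojohnson handles negative inputs directly
data = np.array([-2.0, -1.0, 0.0, 1.0, 3.0, 5.0])
transformed, lmbda = stats.yeojohnson(data)
```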
Added a general method to sample random variates based on the density only, in
the new function rvs_ratio_uniforms.
The Yule-Simon distribution (
yulesimon) was added; this is a new
discrete probability distribution.
scipy.stats and scipy.stats.mstats now have access to a new regression method,
siegelslopes, a robust linear regression algorithm.
scipy.stats.gaussian_kde now has the ability to deal with weighted samples,
and should have a modest improvement in performance.
Levy Stable parameter estimation, PDF, and CDF calculations are now supported
for scipy.stats.levy_stable.
The Brunner-Munzel test is now available as scipy.stats.brunnermunzel.
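A minimal sketch of the new test (the two samples are only example data):

```python
from scipy import stats

# Compare two small samples without assuming equal variances
x = [1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 2, 4, 1, 1]
y = [3, 3, 4, 3, 1, 2, 3, 1, 1, 5, 4]
statistic, pvalue = stats.brunnermunzel(x, y)
```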
LAPACK version 3.4.0 or later is now required. Building with Apple Accelerate is no longer supported.
scipy.linalg.subspace_angles(A, B) now gives correct
results for all angles. Previously, the function only returned
correct values for angles greater than π/4.
Support for the Bento build system has been removed. Bento had not been maintained for several years, and did not have good Python 3 or wheel support, hence it was time to remove it.
The required signature of the scipy.optimize.linprog
callback function has changed. Before iteration begins, the simplex solver
first converts the problem into a standard form that does not, in general,
have the same variables or constraints
as the problem defined by the user. Previously, the simplex solver would pass a
user-specified callback function several separate arguments, such as the
current solution vector
xk, corresponding to this standard-form problem.
Unfortunately, the relationship between the standard-form problem and the
user-defined problem was not documented, limiting the utility of the
information passed to the callback function.
In addition to numerous bug-fix changes, the simplex solver now passes a
user-specified callback function a single
OptimizeResult object containing
information that corresponds directly to the user-defined problem. In the
future, the OptimizeResult object may be expanded to include additional
information, such as variables corresponding to the standard-form problem and
information concerning the relationship between the standard-form and
user-defined problems.
The implementation of
scipy.sparse.random has changed, and this affects the
numerical values returned for both sparse.rand and sparse.random for
some matrix shapes and a given seed.
scipy.optimize.newton will no longer use Halley’s method in cases where it
negatively impacts convergence.