scipy.stats.gaussian_kde

class scipy.stats.gaussian_kde(dataset, bw_method=None, weights=None)[source]

Representation of a kernel-density estimate using Gaussian kernels.
Kernel density estimation is a way to estimate the probability density function (PDF) of a random variable in a nonparametric way.
gaussian_kde works for both univariate and multivariate data. It includes automatic bandwidth determination. The estimation works best for a unimodal distribution; bimodal or multimodal distributions tend to be oversmoothed.

Parameters
dataset : array_like
    Datapoints to estimate from. In case of univariate data this is a 1-D array, otherwise a 2-D array with shape (# of dims, # of data).
bw_method : str, scalar or callable, optional
    The method used to calculate the estimator bandwidth. This can be ‘scott’, ‘silverman’, a scalar constant or a callable. If a scalar, this will be used directly as kde.factor. If a callable, it should take a gaussian_kde instance as only parameter and return a scalar. If None (default), ‘scott’ is used. See Notes for more details.
weights : array_like, optional
    Weights of datapoints. This must be the same shape as dataset. If None (default), the samples are assumed to be equally weighted.
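As a sketch of how these parameters interact, the snippet below constructs estimators with each accepted form of bw_method and with explicit weights (the sample data here is arbitrary, chosen only for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = rng.normal(size=200)  # univariate data: a 1-D array

# Default bandwidth: Scott's rule
kde_scott = stats.gaussian_kde(data)

# Named rule: Silverman's rule
kde_silver = stats.gaussian_kde(data, bw_method='silverman')

# A scalar is used directly as kde.factor
kde_fixed = stats.gaussian_kde(data, bw_method=0.3)

# A callable receives the gaussian_kde instance and must return a scalar
kde_callable = stats.gaussian_kde(data, bw_method=lambda kde: 0.3)

# Weighted samples: weights must have the same shape as dataset
w = rng.uniform(0.5, 1.5, size=200)
kde_weighted = stats.gaussian_kde(data, weights=w)

assert np.isclose(kde_fixed.factor, 0.3)
assert np.isclose(kde_callable.factor, 0.3)
```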
Notes
Bandwidth selection strongly influences the estimate obtained from the KDE (much more so than the actual shape of the kernel). Bandwidth selection can be done by a “rule of thumb”, by cross-validation, by “plug-in methods” or by other means; see [Ra3a8695506c73], [Ra3a8695506c74] for reviews.
gaussian_kde uses a rule of thumb; the default is Scott’s Rule.

Scott’s Rule [Ra3a8695506c71], implemented as scotts_factor, is:

    n**(-1./(d+4)),

with n the number of data points and d the number of dimensions. In the case of unequally weighted points, scotts_factor becomes:

    neff**(-1./(d+4)),

with neff the effective number of datapoints.

Silverman’s Rule [Ra3a8695506c72], implemented as silverman_factor, is:

    (n * (d + 2) / 4.)**(-1. / (d + 4)),

or in the case of unequally weighted points:

    (neff * (d + 2) / 4.)**(-1. / (d + 4)).
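The two rules above can be checked numerically against the factors the class computes; scotts_factor and silverman_factor are methods of the instance, as noted above (the data here is arbitrary, chosen only to exercise the formulas):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
data = rng.normal(size=(2, 500))  # d = 2 dimensions, n = 500 points

kde = stats.gaussian_kde(data)  # Scott's rule is the default
n, d = kde.n, kde.d

# Scott's rule: n**(-1./(d+4))
assert np.isclose(kde.scotts_factor(), n**(-1. / (d + 4)))

# Silverman's rule: (n * (d + 2) / 4.)**(-1. / (d + 4))
assert np.isclose(kde.silverman_factor(),
                  (n * (d + 2) / 4.)**(-1. / (d + 4)))

# The selected rule determines kde.factor
assert np.isclose(kde.factor, kde.scotts_factor())
```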
Good general descriptions of kernel density estimation can be found in [Ra3a8695506c71] and [Ra3a8695506c72]; the mathematics for this multi-dimensional implementation can be found in [Ra3a8695506c71].
With a set of weighted samples, the effective number of datapoints neff is defined by:

    neff = sum(weights)^2 / sum(weights^2)

as detailed in [Ra3a8695506c75].
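This definition can be verified directly against the neff attribute (the weights below are arbitrary illustration values):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
data = rng.normal(size=100)
weights = rng.uniform(0.1, 1.0, size=100)

kde = stats.gaussian_kde(data, weights=weights)

# Effective sample size: sum(weights)^2 / sum(weights^2)
neff = weights.sum()**2 / (weights**2).sum()
assert np.isclose(kde.neff, neff)

# Equal weights recover the ordinary sample size
kde_eq = stats.gaussian_kde(data, weights=np.ones(100))
assert np.isclose(kde_eq.neff, 100)
```

Note that neff is invariant to rescaling the weights, so normalized and unnormalized weights give the same effective sample size.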
References
 Ra3a8695506c71(1,2,3)
D.W. Scott, “Multivariate Density Estimation: Theory, Practice, and Visualization”, John Wiley & Sons, New York, Chichester, 1992.
 Ra3a8695506c72(1,2)
B.W. Silverman, “Density Estimation for Statistics and Data Analysis”, Vol. 26, Monographs on Statistics and Applied Probability, Chapman and Hall, London, 1986.
 Ra3a8695506c73
B.A. Turlach, “Bandwidth Selection in Kernel Density Estimation: A Review”, CORE and Institut de Statistique, Vol. 19, pp. 1-33, 1993.
 Ra3a8695506c74
D.M. Bashtannyk and R.J. Hyndman, “Bandwidth selection for kernel conditional density estimation”, Computational Statistics & Data Analysis, Vol. 36, pp. 279-298, 2001.
 Ra3a8695506c75
Gray P. G., 1969, Journal of the Royal Statistical Society. Series A (General), 132, 272
Examples
Generate some random two-dimensional data:
>>> import numpy as np
>>> from scipy import stats
>>> def measure(n):
...     "Measurement model, return two coupled measurements."
...     m1 = np.random.normal(size=n)
...     m2 = np.random.normal(scale=0.5, size=n)
...     return m1+m2, m1-m2

>>> m1, m2 = measure(2000)
>>> xmin = m1.min()
>>> xmax = m1.max()
>>> ymin = m2.min()
>>> ymax = m2.max()
Perform a kernel density estimate on the data:
>>> X, Y = np.mgrid[xmin:xmax:100j, ymin:ymax:100j]
>>> positions = np.vstack([X.ravel(), Y.ravel()])
>>> values = np.vstack([m1, m2])
>>> kernel = stats.gaussian_kde(values)
>>> Z = np.reshape(kernel(positions).T, X.shape)
Plot the results:
>>> import matplotlib.pyplot as plt
>>> fig, ax = plt.subplots()
>>> ax.imshow(np.rot90(Z), cmap=plt.cm.gist_earth_r,
...           extent=[xmin, xmax, ymin, ymax])
>>> ax.plot(m1, m2, 'k.', markersize=2)
>>> ax.set_xlim([xmin, xmax])
>>> ax.set_ylim([ymin, ymax])
>>> plt.show()
Attributes

dataset : ndarray
    The dataset with which gaussian_kde was initialized.
d : int
    Number of dimensions.
n : int
    Number of datapoints.
neff : int
    Effective number of datapoints.

    New in version 1.2.0.
factor : float
    The bandwidth factor, obtained from kde.covariance_factor, with which the covariance matrix is multiplied.
covariance : ndarray
    The covariance matrix of dataset, scaled by the calculated bandwidth (kde.factor).
inv_cov : ndarray
    The inverse of covariance.
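The relationships among these attributes can be inspected on a fitted instance; as a sketch (the data is arbitrary):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
values = rng.normal(size=(2, 300))   # shape (# of dims, # of data)

kde = stats.gaussian_kde(values)

# d and n mirror the dataset's shape
assert kde.d == 2 and kde.n == 300
assert kde.dataset.shape == (2, 300)

# covariance is a d x d matrix, and inv_cov is its inverse
assert kde.covariance.shape == (2, 2)
assert np.allclose(kde.inv_cov @ kde.covariance, np.eye(2))
```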
Methods

evaluate(self, points)
    Evaluate the estimated pdf on a set of points.
__call__(self, points)
    Evaluate the estimated pdf on a set of points.
integrate_gaussian(self, mean, cov)
    Multiply estimated density by a multivariate Gaussian and integrate over the whole space.
integrate_box_1d(self, low, high)
    Computes the integral of a 1D pdf between two bounds.
integrate_box(self, low_bounds, high_bounds)
    Computes the integral of a pdf over a rectangular interval.
integrate_kde(self, other)
    Computes the integral of the product of this kernel density estimate with another.
pdf(self, x)
    Evaluate the estimated pdf on a provided set of points.
logpdf(self, x)
    Evaluate the log of the estimated pdf on a provided set of points.
resample(self[, size, seed])
    Randomly sample a dataset from the estimated pdf.
set_bandwidth(self[, bw_method])
    Compute the estimator bandwidth with given method.
covariance_factor(self)
    Computes the coefficient (kde.factor) that multiplies the data covariance matrix to obtain the kernel covariance matrix.
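A brief sketch of the most common methods in use (evaluation, integration, and resampling; the data is arbitrary illustration data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
data = rng.normal(size=1000)
kde = stats.gaussian_kde(data)

# evaluate, __call__, and pdf are equivalent ways to get density values
x = np.linspace(-3, 3, 7)
assert np.allclose(kde(x), kde.evaluate(x))
assert np.allclose(kde(x), kde.pdf(x))
assert np.allclose(np.log(kde(x)), kde.logpdf(x))

# The estimated density integrates to 1 over the whole line
total = kde.integrate_box_1d(-np.inf, np.inf)
assert np.isclose(total, 1.0)

# Draw new samples from the estimated pdf; the result has shape (d, size)
samples = kde.resample(size=500, seed=42)
assert samples.shape == (1, 500)
```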