scipy's interpn for interpolating high-N data - python

I'm trying to interpolate some data using scipy.interpolate.interpn. It might not be the right function, so please advise me if it's not. I need to interpolate over 3 variables, each with 2 values (8 values in total), down to a single point.
Here is a working example for N=2 (I think).
from scipy.interpolate import interpn
import numpy as np
points = np.zeros((2, 2))
points[0, 1] = 1
points[1, 1] = 1
values = np.array(([ 5.222, 6.916], [6.499, 4.102]))
xi = np.array((0.108, 0.88))
print(interpn(points, values, xi)) # Gives: 6.462
But when I try to use it for a higher dimension, it breaks. I have a feeling it is because of how my arrays are constructed.
p2 = np.zeros((2, 2, 2))
p2[0, 0, 1] = 1
p2[0, 1, 1] = 1
p2[1, 0, 1] = 1
p2[1, 1, 1] = 1
v2 = np.array([[[5.222, 4.852],
                [6.916, 4.377]],
               [[6.499, 6.076],
                [4.102, 5.729]]])
x2 = np.array((0.108, 0.88, 1))
print(interpn(p2, v2, x2))
This gives me the following error message:
/usr/local/lib/python2.7/dist-packages/scipy/interpolate/interpolate.pyc in interpn(points, values, xi, method, bounds_error, fill_value)
   1680         if not np.asarray(p).ndim == 1:
   1681             raise ValueError("The points in dimension %d must be "
-> 1682                              "1-dimensional" % i)
   1683         if not values.shape[i] == len(p):
   1684             raise ValueError("There are %d points and %d values in "

ValueError: The points in dimension 0 must be 1-dimensional
How do I fix my code? Keep in mind I need to interpolate over 3 variables with 2 values in each (v2.shape = (2, 2, 2)).
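For what it's worth, the N=2 example only works by accident: iterating over the (2, 2) points array yields two 1-D rows, which is exactly what interpn expects - one 1-D array of grid coordinates per dimension. For three variables it wants a sequence of three 1-D arrays, not a (2, 2, 2) array. A minimal sketch of that fix, reusing the v2 and x2 above:

from scipy.interpolate import interpn
import numpy as np

# One 1-D coordinate array per dimension; each axis has the grid points 0 and 1.
p2 = (np.array([0.0, 1.0]), np.array([0.0, 1.0]), np.array([0.0, 1.0]))
v2 = np.array([[[5.222, 4.852],
                [6.916, 4.377]],
               [[6.499, 6.076],
                [4.102, 5.729]]])   # shape (2, 2, 2): one value per grid corner
x2 = np.array((0.108, 0.88, 1))     # the single point to interpolate at
print(interpn(p2, v2, x2))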


Subtracting Two dimensional arrays using numpy broadcasting

I'm new to numpy in general, so this is probably an easy question, but I'm clueless as to how to solve it.
I'm trying to implement the K nearest neighbor algorithm for classification of a data set.
There are two arrays named new_points and points that respectively have the shapes (30,4) and (120,4) (with 4 being the total number of properties of each element).
I'm trying to calculate the distance between each new point and all old points using numpy broadcasting:
def calc_no_loop(new_points, points):
    return np.sum((new_points - points)**2, axis=1)
# doesn't work; here is the log:
ValueError: operands could not be broadcast together with shapes (30,4) (120,4)
However, as per the rules of broadcasting, two arrays of shapes (30,4) and (120,4) are incompatible, so I would appreciate any insight on how to solve this (using .reshape perhaps - not sure).
Please note that I have already implemented the same function using one and two loops, but I can't implement it without any loop:
def calc_two_loops(new_points, points):
    m, n = len(new_points), len(points)
    d = np.zeros((m, n))
    for i in range(m):
        for j in range(n):
            d[i, j] = np.sum((new_points[i] - points[j])**2)
    return d

def calc_one_loop(new_points, points):
    m, n = len(new_points), len(points)
    d = np.zeros((m, n))
    print(d)
    for i in range(m):
        d[i] = np.sum((new_points[i] - points)**2, axis=1)  # sum over the 4 properties
    return d
Let's create a smaller example:
nNew = 3; nOld = 5 # Number of new / old points
# New points
new_points = np.arange(100, 100 + nNew * 4).reshape(nNew, 4)
# Old points
points = np.arange(10, 10 + nOld * 8, 2).reshape(nOld, 4)
To compute the differences alone, run:
dfr = new_points[:, np.newaxis, :] - points[np.newaxis, :, :]
So far we have differences in each property of each point (every new point with every old point).
The shape of dfr is (3, 5, 4):
first dimension: the index of the new point,
second dimension: the index of the old point,
third dimension: the difference in each property.
Then, to sum squares of differences by points, run:
d = np.power(dfr, 2).sum(axis=2)
and this is your result.
For my sample data, the result is:
array([[31334, 25926, 21030, 16646, 12774],
       [34230, 28566, 23414, 18774, 14646],
       [37254, 31334, 25926, 21030, 16646]], dtype=int32)
So you have 30 new points and 120 old points; if I understand you correctly, you want a shape (120, 30) array of distances as the result.
You could do
import numpy as np
points = np.random.random(120*4).reshape(120,4)
new_points = np.random.random(30*4).reshape(30,4)
def calc_no_loop(new_points, points):
    res = np.zeros([len(points[:, 0]), len(new_points[:, 0])])
    for idx in range(len(points[:, 0])):
        res[idx, :] = np.sum((points[idx, :] - new_points)**2, axis=1)
    return np.sqrt(res)

test = calc_no_loop(new_points, points)
print(np.shape(test))
print(test)
Which gives
(120, 30)
[[0.67166838 0.78096694 0.94983683 ... 1.00960301 0.48076185 0.56419991]
[0.88156338 0.54951826 0.73919191 ... 0.87757896 0.76305462 0.52486626]
[0.85271938 0.56085692 0.73063341 ... 0.97884167 0.90509791 0.7505591 ]
...
[0.53968258 0.64514941 0.89225849 ... 0.99278462 0.31861253 0.44615026]
[0.51647526 0.58611128 0.83298535 ... 0.86669406 0.64931403 0.71517123]
[1.08515826 0.64626221 0.6898687 ... 0.96882542 1.08075076 0.80144746]]
But from your function name I get the notion that you do not want any loop at all. In that case you could do this instead:
def calc_no_loop(new_points, points):
    new_points1 = np.repeat(new_points[np.newaxis, ...], len(points), axis=0)
    points1 = np.repeat(points[:, np.newaxis, :], len(new_points), axis=1)
    return np.sqrt(np.sum((new_points1 - points1)**2, axis=2))

test = calc_no_loop(new_points, points)
print(np.shape(test))
print(test)
which has output:
(120, 30)
[[0.67166838 0.78096694 0.94983683 ... 1.00960301 0.48076185 0.56419991]
[0.88156338 0.54951826 0.73919191 ... 0.87757896 0.76305462 0.52486626]
[0.85271938 0.56085692 0.73063341 ... 0.97884167 0.90509791 0.7505591 ]
...
[0.53968258 0.64514941 0.89225849 ... 0.99278462 0.31861253 0.44615026]
[0.51647526 0.58611128 0.83298535 ... 0.86669406 0.64931403 0.71517123]
[1.08515826 0.64626221 0.6898687 ... 0.96882542 1.08075076 0.80144746]]
i.e. the same result. Note that I added the np.sqrt() to the result, which you may have forgotten in your example above (without it you get squared distances).
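As an aside, the two np.repeat calls are not strictly required: inserting length-1 axes and letting broadcasting expand them builds the same (120, 30, 4) difference array without materializing the repeated copies. A sketch of that variant, with the same inputs as above:

def calc_no_loop_broadcast(new_points, points):
    # (120, 1, 4) - (1, 30, 4) broadcasts to (120, 30, 4)
    diff = points[:, np.newaxis, :] - new_points[np.newaxis, :, :]
    return np.sqrt(np.sum(diff**2, axis=2))

For large point sets, scipy.spatial.distance.cdist(points, new_points) computes the same Euclidean distance matrix in compiled code.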

ValueError: x and y must have same first dimension, but have shapes (41,) and (1, 41)

I'm trying to plot a vs kappa_inv and I keep getting the error: ValueError: x and y must have same first dimension, but have shapes (41,) and (1, 41).
I saw a previous post about changing the plt.plot square brackets to round ones, but the error still occurs. Can anyone see what I'm doing wrong?
import numpy
L = [20,20, 20,20,20,20,20,20,20,20,20,20,20,20,20,20,20,20,20,20,20,20,20,20,20,20,20,20,20,20,20,20,20,20,20,20,20,20,20,20,20]
L = numpy.array(L)
delta = [0.5, 0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5]
delta = numpy.array(delta)
x = L/delta
a =[-0.5,0, 0.5, 1, 1.5, 2, 2.5, 3, 3.5, 4, 4.5, 5, 5.5,6,6.5,7,7.5,8,8.5,9,9.5,10,10.5,11,11.5,12,12.5,13,13.5,14,14.5,15,15.5,16,16.5,17,17.5,18,18.5,19,19.5]
numpy.array(a)
#Force
F = 100 #kN
#calc sigma
y = 250 #mm
E = 32800 #MPa
I =1.837E9 #mm4
sig = y/(E*I)
print (sig)
kappa = []
b = []
y = 20
while y >= 0:
    b.append(y)
    y = y - 0.5
numpy.array(b)
for val in a:
    val = "{:.1f}".format(val)
    val = float(val)
    fraction = b/L
    kappa_i = fraction * val
    kappa.append(kappa_i)
    b = b - delta
N = 4
Length = len(kappa)
pad_kappa = numpy.pad(kappa,(0,N),'constant', constant_values = 0)
print(pad_kappa)
#Calc bending moment list
BM = []
for k in range(0, Length):
    bendingMoment = (pad_kappa[k]*F) + (pad_kappa[k+3]*F)
    BM.append(bendingMoment)
print(BM)
Strain = []
for j in range(0, len(BM)):
    strain = (BM[j] * sig) * 10E6
    Strain.append(strain)
kappa_inv = [ -x for x in kappa]
numpy.array(kappa_inv)
import matplotlib.pyplot as plt
plt.plot(a,kappa_inv)
plt.ylabel('KAPPA')
plt.xlabel('LENGTH ALONG BEAM')
plt.show()
#E = BM*10E6 * sigma
strainCalcReverse = []
for s in Strain:
    bendYourMomLOL = s/sig * (1/10E6)
    bendYourMomLOL.append(strainCalcReverse)
print(strainCalcReverse)
There's a lot of messy stuff in your code.
Lines like:
numpy.array(a)   # doesn't change list a
numpy.array(b)   # same
but
fraction = b/L   # only works if b is an array, not a list
Looks like this is trying to turn all elements of a into floats, but that's not how a Python loop works:
for val in a:
    val = "{:.1f}".format(val)
    val = float(val)
a = np.array(a) will produce a float array, so there's no need for this loop.
Anyways, it looks like kappa_i is an array. If so then the following demonstrates your error:
In [311]: kappa = []
In [312]: kappa.append(np.arange(3))
In [313]: kappa
Out[313]: [array([0, 1, 2])]
In [314]: plt.plot([1,2,3], kappa)
Traceback (most recent call last):
  File "<ipython-input-314-e15645b5613f>", line 1, in <module>
    plt.plot([1,2,3], kappa)
  File "/usr/local/lib/python3.8/dist-packages/matplotlib/pyplot.py", line 2988, in plot
    return gca().plot(
  File "/usr/local/lib/python3.8/dist-packages/matplotlib/axes/_axes.py", line 1605, in plot
    lines = [*self._get_lines(*args, data=data, **kwargs)]
  File "/usr/local/lib/python3.8/dist-packages/matplotlib/axes/_base.py", line 315, in __call__
    yield from self._plot_args(this, kwargs)
  File "/usr/local/lib/python3.8/dist-packages/matplotlib/axes/_base.py", line 501, in _plot_args
    raise ValueError(f"x and y must have same first dimension, but "
ValueError: x and y must have same first dimension, but have shapes (3,) and (1, 3)
By using that list append, you made a list with one array element. When passed to plot, that produces a (1, n) array.
Correct your code, whether it's the actual code or the copy in the question. And pay closer attention to whether variables are lists or arrays, and, for arrays, what their shape and dtype are.
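For the toy case above, a minimal sketch of the fix: flatten the list of arrays (or index into it) before plotting, so y has a first dimension matching x:

import numpy as np
import matplotlib.pyplot as plt

kappa = []
kappa.append(np.arange(3))      # a list holding one (3,) array
y = np.asarray(kappa).ravel()   # (1, 3) -> (3,)
plt.plot([1, 2, 3], y)          # first dimensions now both 3
plt.show()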

How to make a contour/error surface plot of 3d array?

My script calculates the location error using a set of equations for different values of x and y and stores the output in an initially empty list t_error. However, there are two issues that need to be resolved:
1: How to store the output in a 20-by-20 matrix instead of a 400-by-1 array.
2: How to make a contour plot (error surface) using x, y, and the output parameter, which is t_error in our case.
The sample script is as below:
import pandas as pd
import numpy as np
import math

ev_loc = pd.read_csv("test_grid.txt", sep='\t', header=None)
x = np.array(ev_loc[1])
y = np.array(ev_loc[0])
v = 3.5
t_error = []
for s in x:
    for t in y:
        for i, j, k in [[73.9, 33.1, 1.268571], [73.5, 33.1, 1.268571], [73.4, 33.1, 2.854286], [73.7, 33.2, 0.317143], [73.7, 33.0, 0.317143]]:
            u = ((np.sqrt((t-j)**2 + (s-i)**2)/v)*111 - k)
            v = u*u
            t_error.append(float(v))

df_hr = pd.DataFrame(t_error)
numbers = np.array(df_hr)
window_size = 5
i = 0
moving_averages = []
while i < len(numbers) - window_size + 1:
    this_window = numbers[i : i + window_size]
    window_average = sum(this_window)
    moving_averages.append(window_average)
    i += 5

Error = pd.DataFrame(moving_averages)
Error.to_csv('test_total_error.csv')
print(Error)
The data of test_grid.txt is generated as below:
x1 = np.linspace(73, 75, num=41)
y1 = np.linspace(33, 35, num=41)
v = 3.5
t_error = []
for i, j, k in [[71.91500, 33.82850, 57.2], [72.32200, 33.16267, 38.28], [72.57900, 33.61317, 37.48], [73.44883, 33.83300, 27.8], [71.52967, 33.15267, 58.8],
                [73.27017, 33.65167, 18.44], [73.14017, 33.75200, 29.97], [72.46550, 32.63183, 39.98], [73.22900, 32.99867, 14.77], [72.67167, 31.92100, 48.71],
                [71.91817, 32.53983, 54.73], [71.92333, 33.04400, 49.67], [71.74417, 32.79617, 57.39]]:
    u = ((np.sqrt((y1-j)**2 + (x1-i)**2)/v)*111 - k)
    c = np.sum(u)
    t_error.append(c)
plt.plot(t_error)
plt.show()
What is the error supposed to show?
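Setting aside what the error represents, the two numbered issues have direct mechanical answers: reshape the flat list with numpy, then pass the coordinate grid to plt.contourf. A minimal sketch with placeholder data (the 20-by-20 grid size and the coordinate ranges here are assumptions based on the question):

import numpy as np
import matplotlib.pyplot as plt

nx, ny = 20, 20                               # assumed grid dimensions
t_error = np.arange(nx * ny, dtype=float)     # placeholder for the 400 computed errors

err_grid = t_error.reshape(nx, ny)            # issue 1: 400-by-1 -> 20-by-20
X, Y = np.meshgrid(np.linspace(73, 75, ny), np.linspace(33, 35, nx))
plt.contourf(X, Y, err_grid)                  # issue 2: error surface as filled contours
plt.colorbar(label='t_error')
plt.show()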

sklearn: bins must be monotonically increasing or decreasing

I have a Python list containing continuous values (from 0 to 1020) that I'd like to discretize into ordinal values from 0 to 5 using the K-Means strategy.
I have used the new class sklearn.preprocessing.KBinsDiscretizer to do that:
def descritise_kmeans(python_arr, num_bins):
    X = np.array(python_arr).reshape(-1, 1)
    est = KBinsDiscretizer(n_bins=num_bins, encode='ordinal', strategy='kmeans')
    est.fit(X)
    Xt = est.transform(X)
    return Xt
When running this method, I get the following error:
/usr/local/Cellar/python3/3.6.3/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/sklearn/preprocessing/_discretization.py in transform(self, X)
    262         atol = 1.e-8
    263         eps = atol + rtol * np.abs(Xt[:, jj])
--> 264         Xt[:, jj] = np.digitize(Xt[:, jj] + eps, bin_edges[jj][1:])
    265         np.clip(Xt, 0, self.n_bins_ - 1, out=Xt)
    266

ValueError: bins must be monotonically increasing or decreasing
Looking closely at this, it seems like the numpy.digitize method is the one that throws the error. This looks like a bug in the sklearn library.
When the number of bins n_bins is 6, the error is thrown. However, when n_bins is 5, it works.
I faced a similar problem and found my mistake in the values I set for the bins. My code is simple:
bins = np.array([0.0, .33, 66, 1])
data = [0.1, .2, .4, .5, .7, 8]
inds = np.digitize(data, bins, right=False)
I missed a dot before 66, so my bins were not monotonic. While it may not be the source of the problem in this question, I hope it helps someone.
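A quick sanity check along those lines can catch the problem before np.digitize raises; a sketch:

import numpy as np

bins = np.array([0.0, .33, 66, 1])   # the mistyped bins from above
if not np.all(np.diff(bins) > 0):
    print("bins are not strictly increasing:", bins)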
Makeshift solution:
Edit sklearn's source code, replacing the transform function in sklearn/preprocessing/_discretization.py. It is at line 237 as of version '0.20.2'.
def transform(self, X):
    """Discretizes the data.

    Parameters
    ----------
    X : numeric array-like, shape (n_samples, n_features)
        Data to be discretized.

    Returns
    -------
    Xt : numeric array-like or sparse matrix
        Data in the binned space.
    """
    check_is_fitted(self, ["bin_edges_"])
    Xt = check_array(X, copy=True, dtype=FLOAT_DTYPES)
    n_features = self.n_bins_.shape[0]
    if Xt.shape[1] != n_features:
        raise ValueError("Incorrect number of features. Expecting {}, "
                         "received {}.".format(n_features, Xt.shape[1]))

    def ensure_monotonic_increase(array):
        """Add small noise to bin_edges[i] when bin_edges[i] is not
        greater than bin_edges[i-1]."""
        noise_overlay = np.zeros(array.shape)
        for i in range(1, len(array)):
            if not array[i] > array[i-1]:
                noise_overlay[i] = abs(array[i-1] * 0.0001)
        return array + noise_overlay

    bin_edges = self.bin_edges_
    for jj in range(Xt.shape[1]):
        # Values which are close to a bin edge are susceptible to numeric
        # instability. Add eps to X so these values are binned correctly
        # with respect to their decimal truncation. See documentation of
        # numpy.isclose for an explanation of ``rtol`` and ``atol``.
        rtol = 1.e-5
        atol = 1.e-8
        eps = atol + rtol * np.abs(Xt[:, jj])
        old_bin_edges = bin_edges[jj][1:]
        try:
            Xt[:, jj] = np.digitize(Xt[:, jj] + eps, old_bin_edges)
        except ValueError:
            new_bin_edges = ensure_monotonic_increase(old_bin_edges)
            # print(old_bin_edges)
            # print(new_bin_edges)
            Xt[:, jj] = np.digitize(Xt[:, jj] + eps, new_bin_edges)
    np.clip(Xt, 0, self.n_bins_ - 1, out=Xt)
    if self.encode == 'ordinal':
        return Xt

    return self._encoder.transform(Xt)
The issue (that I encountered): the bin edges were too close to each other. Possibly due to some kind of floating-point error, the prior bin edge ended up equal to or larger than the next bin edge.
When printing the edges (uncomment the print statements in the function above), the first two bin edges were observably equal to each other. The printed bin_edges were:
[-0.1025641 -0.1025641 0.82793522] # ValueError
[-0.1025641 -0.10255385 0.82793522] # After fix
[0.2075 0.2075 0.88798077] # ValueError
[0.2075 0.20752075 0.88798077] # After fix
[ 0.7899066 0.7899066 24.31967669] # ValueError
[ 0.7899066 0.78998559 24.31967669] # After fix
[5.47545572e-18 5.47545572e-18 2.36842105e-01] # ValueError
[5.47545572e-18 5.47600326e-18 2.36842105e-01] # After fix
[5.47545572e-18 5.47545572e-18 2.82894737e-01] # ValueError
[5.47545572e-18 5.47600326e-18 2.82894737e-01] # After fix
[-0.46762302 -0.46762302 -0.00969465] # ValueError
[-0.46762302 -0.46757626 -0.00969465] # After fix

Python: binned_statistic_2d mean calculation ignoring NaNs in data

I am using scipy.stats.binned_statistic_2d to bin irregular data onto a uniform grid by finding the mean of points within every bin.
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

x, y = np.meshgrid(np.sort(np.random.uniform(0, 1, 100)),
                   np.sort(np.random.uniform(0, 1, 100)))
z = np.sin(x * y)
statistic, xedges, yedges, binnumber = stats.binned_statistic_2d(
    x.ravel(), y.ravel(), values=z.ravel(), statistic='mean',
    bins=[np.arange(0, 1.1, .1), np.arange(0, 1.1, .1)])

plt.figure(1)
plt.pcolormesh(x, y, z, vmin=0, vmax=1)
plt.figure(2)
plt.pcolormesh(xedges, yedges, statistic, vmin=0, vmax=1)
Produces these plots, as expected:
Scattered data:
Gridded data:
But the data I want to grid has NaNs in it. This is what the result is like when I add NaNs:
x,y = np.meshgrid(sort(np.random.uniform(0,1,100)),sort(np.random.uniform(0,1,100)))
z = np.sin(x*y)
z[50:55,50:55] = np.nan
statistic, xedges, yedges, binnumber = binned_statistic_2d(x.ravel(), y.ravel(), values=z.ravel(), statistic='mean',bins=[np.arange(0,1.1,.1), np.arange(0,1.1,.1)])
plt.figure(3)
plt.pcolormesh(x,y,z, vmin = 0, vmax = 1)
plt.figure(4)
plt.pcolormesh(xedges,yedges,statistic, vmin = 0, vmax = 1)
Scattered:
Gridded:
Obviously, if a bin is entirely filled with NaNs, the resulting mean of that bin should still be NaN. However, I would like bins that are not entirely filled with NaNs to just give the mean of the non-NaN numbers.
I've tried replacing the statistic argument in sp.stats.binned_statistic_2d with np.nanmean. This works, but it goes very, very slowly when I use it on large datasets. I've tried digging into the underlying code of sp.stats.binned_statistic_2d, but I can't figure out exactly how it is calculating the mean, or how to make it ignore NaNs in its calculation.
Any ideas?
I had the same problem, so I changed the definition of binned_statistic_dd in scipy.stats and saved a local copy so that it won't be overwritten when scipy is updated.
I added 'nanmean' to the list of known_stats and:
elif statistic == 'nanmean':
    result.fill(np.nan)
    for i in np.unique(binnumbers):
        for vv in builtins.range(Vdim):
            result[vv, i] = np.nanmean(values[vv, binnumbers == i])
Full new definition:
def binned_statistic_dd(sample, values, statistic='mean',
                        bins=10, range=None, expand_binnumbers=False,
                        binned_statistic_result=None):
    """
    Compute a multidimensional binned statistic for a set of data.

    This is a generalization of a histogramdd function. A histogram divides
    the space into bins, and returns the count of the number of points in
    each bin. This function allows the computation of the sum, mean, median,
    or other statistic of the values within each bin.

    Parameters
    ----------
    sample : array_like
        Data to histogram passed as a sequence of N arrays of length D, or
        as an (N,D) array.
    values : (N,) array_like or list of (N,) array_like
        The data on which the statistic will be computed. This must be
        the same shape as `sample`, or a list of sequences - each with the
        same shape as `sample`. If `values` is such a list, the statistic
        will be computed on each independently.
    statistic : string or callable, optional
        The statistic to compute (default is 'mean').
        The following statistics are available:

          * 'mean' : compute the mean of values for points within each bin.
            Empty bins will be represented by NaN.
          * 'median' : compute the median of values for points within each
            bin. Empty bins will be represented by NaN.
          * 'count' : compute the count of points within each bin. This is
            identical to an unweighted histogram. `values` array is not
            referenced.
          * 'sum' : compute the sum of values for points within each bin.
            This is identical to a weighted histogram.
          * 'std' : compute the standard deviation within each bin. This
            is implicitly calculated with ddof=0. If the number of values
            within a given bin is 0 or 1, the computed standard deviation
            value will be 0 for the bin.
          * 'min' : compute the minimum of values for points within each bin.
            Empty bins will be represented by NaN.
          * 'max' : compute the maximum of values for point within each bin.
            Empty bins will be represented by NaN.
          * function : a user-defined function which takes a 1D array of
            values, and outputs a single numerical statistic. This function
            will be called on the values in each bin. Empty bins will be
            represented by function([]), or NaN if this returns an error.

    bins : sequence or positive int, optional
        The bin specification must be in one of the following forms:

          * A sequence of arrays describing the bin edges along each dimension.
          * The number of bins for each dimension (nx, ny, ... = bins).
          * The number of bins for all dimensions (nx = ny = ... = bins).
    range : sequence, optional
        A sequence of lower and upper bin edges to be used if the edges are
        not given explicitly in `bins`. Defaults to the minimum and maximum
        values along each dimension.
    expand_binnumbers : bool, optional
        'False' (default): the returned `binnumber` is a shape (N,) array of
        linearized bin indices.
        'True': the returned `binnumber` is 'unraveled' into a shape (D,N)
        ndarray, where each row gives the bin numbers in the corresponding
        dimension.
        See the `binnumber` returned value, and the `Examples` section of
        `binned_statistic_2d`.
    binned_statistic_result : binnedStatisticddResult
        Result of a previous call to the function in order to reuse bin edges
        and bin numbers with new values and/or a different statistic.
        To reuse bin numbers, `expand_binnumbers` must have been set to False
        (the default)

        .. versionadded:: 0.17.0

    Returns
    -------
    statistic : ndarray, shape(nx1, nx2, nx3,...)
        The values of the selected statistic in each two-dimensional bin.
    bin_edges : list of ndarrays
        A list of D arrays describing the (nxi + 1) bin edges for each
        dimension.
    binnumber : (N,) array of ints or (D,N) ndarray of ints
        This assigns to each element of `sample` an integer that represents
        the bin in which this observation falls. The representation depends
        on the `expand_binnumbers` argument. See `Notes` for details.

    See Also
    --------
    numpy.digitize, numpy.histogramdd, binned_statistic, binned_statistic_2d

    Notes
    -----
    Binedges:
    All but the last (righthand-most) bin is half-open in each dimension. In
    other words, if `bins` is ``[1, 2, 3, 4]``, then the first bin is
    ``[1, 2)`` (including 1, but excluding 2) and the second ``[2, 3)``. The
    last bin, however, is ``[3, 4]``, which *includes* 4.

    `binnumber`:
    This returned argument assigns to each element of `sample` an integer
    that represents the bin in which it belongs. The representation depends
    on the `expand_binnumbers` argument. If 'False' (default): The returned
    `binnumber` is a shape (N,) array of linearized indices mapping each
    element of `sample` to its corresponding bin (using row-major ordering).
    If 'True': The returned `binnumber` is a shape (D,N) ndarray where
    each row indicates bin placements for each dimension respectively. In
    each dimension, a binnumber of `i` means the corresponding value is
    between (bin_edges[D][i-1], bin_edges[D][i]), for each dimension 'D'.

    .. versionadded:: 0.11.0

    Examples
    --------
    >>> from scipy import stats
    >>> import matplotlib.pyplot as plt
    >>> from mpl_toolkits.mplot3d import Axes3D

    Take an array of 600 (x, y) coordinates as an example.
    `binned_statistic_dd` can handle arrays of higher dimension `D`. But a
    plot of dimension `D+1` is required.

    >>> mu = np.array([0., 1.])
    >>> sigma = np.array([[1., -0.5], [-0.5, 1.5]])
    >>> multinormal = stats.multivariate_normal(mu, sigma)
    >>> data = multinormal.rvs(size=600, random_state=235412)
    >>> data.shape
    (600, 2)

    Create bins and count how many arrays fall in each bin:

    >>> N = 60
    >>> x = np.linspace(-3, 3, N)
    >>> y = np.linspace(-3, 4, N)
    >>> ret = stats.binned_statistic_dd(data, np.arange(600), bins=[x, y],
    ...                                 statistic='count')
    >>> bincounts = ret.statistic

    Set the volume and the location of bars:

    >>> dx = x[1] - x[0]
    >>> dy = y[1] - y[0]
    >>> x, y = np.meshgrid(x[:-1]+dx/2, y[:-1]+dy/2)
    >>> z = 0

    >>> bincounts = bincounts.ravel()
    >>> x = x.ravel()
    >>> y = y.ravel()

    >>> fig = plt.figure()
    >>> ax = fig.add_subplot(111, projection='3d')
    >>> with np.errstate(divide='ignore'):   # silence random axes3d warning
    ...     ax.bar3d(x, y, z, dx, dy, bincounts)

    Reuse bin numbers and bin edges with new values:

    >>> ret2 = stats.binned_statistic_dd(data, -np.arange(600),
    ...                                  binned_statistic_result=ret,
    ...                                  statistic='mean')
    """
    known_stats = ['mean', 'median', 'count', 'sum', 'std', 'min', 'max',
                   'nanmean']
    if not callable(statistic) and statistic not in known_stats:
        raise ValueError('invalid statistic %r' % (statistic,))

    try:
        bins = index(bins)
    except TypeError:
        # bins is not an integer
        pass
    # If bins was an integer-like object, now it is an actual Python int.

    # NOTE: for _bin_edges(), see e.g. gh-11365
    if isinstance(bins, int) and not np.isfinite(sample).all():
        raise ValueError('%r contains non-finite values.' % (sample,))

    # `Ndim` is the number of dimensions (e.g. `2` for `binned_statistic_2d`)
    # `Dlen` is the length of elements along each dimension.
    # This code is based on np.histogramdd
    try:
        # `sample` is an ND-array.
        Dlen, Ndim = sample.shape
    except (AttributeError, ValueError):
        # `sample` is a sequence of 1D arrays.
        sample = np.atleast_2d(sample).T
        Dlen, Ndim = sample.shape

    # Store initial shape of `values` to preserve it in the output
    values = np.asarray(values)
    input_shape = list(values.shape)
    # Make sure that `values` is 2D to iterate over rows
    values = np.atleast_2d(values)
    Vdim, Vlen = values.shape

    # Make sure `values` match `sample`
    if statistic != 'count' and Vlen != Dlen:
        raise AttributeError('The number of `values` elements must match the '
                             'length of each `sample` dimension.')

    try:
        M = len(bins)
        if M != Ndim:
            raise AttributeError('The dimension of bins must be equal '
                                 'to the dimension of the sample x.')
    except TypeError:
        bins = Ndim * [bins]

    if binned_statistic_result is None:
        nbin, edges, dedges = _bin_edges(sample, bins, range)
        binnumbers = _bin_numbers(sample, nbin, edges, dedges)
    else:
        edges = binned_statistic_result.bin_edges
        nbin = np.array([len(edges[i]) + 1 for i in builtins.range(Ndim)])
        # +1 for outlier bins
        dedges = [np.diff(edges[i]) for i in builtins.range(Ndim)]
        binnumbers = binned_statistic_result.binnumber

    result = np.empty([Vdim, nbin.prod()], float)

    if statistic == 'mean':
        result.fill(np.nan)
        flatcount = np.bincount(binnumbers, None)
        a = flatcount.nonzero()
        for vv in builtins.range(Vdim):
            flatsum = np.bincount(binnumbers, values[vv])
            result[vv, a] = flatsum[a] / flatcount[a]
    elif statistic == 'std':
        result.fill(0)
        flatcount = np.bincount(binnumbers, None)
        a = flatcount.nonzero()
        for vv in builtins.range(Vdim):
            for i in np.unique(binnumbers):
                # NOTE: take std dev by bin, np.std() is 2-pass and stable
                binned_data = values[vv, binnumbers == i]
                # calc std only when binned data is 2 or more for speed up.
                if len(binned_data) >= 2:
                    result[vv, i] = np.std(binned_data)
    elif statistic == 'count':
        result.fill(0)
        flatcount = np.bincount(binnumbers, None)
        a = np.arange(len(flatcount))
        result[:, a] = flatcount[np.newaxis, :]
    elif statistic == 'sum':
        result.fill(0)
        for vv in builtins.range(Vdim):
            flatsum = np.bincount(binnumbers, values[vv])
            a = np.arange(len(flatsum))
            result[vv, a] = flatsum
    elif statistic == 'median':
        result.fill(np.nan)
        for i in np.unique(binnumbers):
            for vv in builtins.range(Vdim):
                result[vv, i] = np.median(values[vv, binnumbers == i])
    elif statistic == 'min':
        result.fill(np.nan)
        for i in np.unique(binnumbers):
            for vv in builtins.range(Vdim):
                result[vv, i] = np.min(values[vv, binnumbers == i])
    elif statistic == 'max':
        result.fill(np.nan)
        for i in np.unique(binnumbers):
            for vv in builtins.range(Vdim):
                result[vv, i] = np.max(values[vv, binnumbers == i])
    elif statistic == 'nanmean':
        result.fill(np.nan)
        for i in np.unique(binnumbers):
            for vv in builtins.range(Vdim):
                result[vv, i] = np.nanmean(values[vv, binnumbers == i])
    elif callable(statistic):
        with np.errstate(invalid='ignore'), suppress_warnings() as sup:
            sup.filter(RuntimeWarning)
            try:
                null = statistic([])
            except Exception:
                null = np.nan
        result.fill(null)
        for i in np.unique(binnumbers):
            for vv in builtins.range(Vdim):
                result[vv, i] = statistic(values[vv, binnumbers == i])

    # Shape into a proper matrix
    result = result.reshape(np.append(Vdim, nbin))

    # Remove outliers (indices 0 and -1 for each bin-dimension).
    core = tuple([slice(None)] + Ndim * [slice(1, -1)])
    result = result[core]

    # Unravel binnumbers into an ndarray, each row the bins for each dimension
    if expand_binnumbers and Ndim > 1:
        binnumbers = np.asarray(np.unravel_index(binnumbers, nbin))

    if np.any(result.shape[1:] != nbin - 2):
        raise RuntimeError('Internal Shape Error')

    # Reshape to have output (`result`) match input (`values`) shape
    result = result.reshape(input_shape[:-1] + list(nbin-2))

    return BinnedStatisticddResult(result, edges, binnumbers)
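If editing scipy is undesirable, a lighter-weight alternative for the original 2-D problem is to drop the NaN samples before binning: bins whose samples were all NaN then receive no data at all and still come out as NaN under statistic='mean'. A minimal sketch:

import numpy as np
from scipy import stats

x, y = np.meshgrid(np.sort(np.random.uniform(0, 1, 100)),
                   np.sort(np.random.uniform(0, 1, 100)))
z = np.sin(x * y)
z[50:55, 50:55] = np.nan

xf, yf, zf = x.ravel(), y.ravel(), z.ravel()
mask = ~np.isnan(zf)   # keep only finite samples
statistic, xedges, yedges, binnumber = stats.binned_statistic_2d(
    xf[mask], yf[mask], values=zf[mask], statistic='mean',
    bins=[np.arange(0, 1.1, .1), np.arange(0, 1.1, .1)])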
