Calculating wind divergence of u and v using Python, np.gradient

I'm very new to Python and am currently trying to replicate plots etc. that I previously made with GrADS. I want to calculate the divergence at each grid box using u and v wind fields (which are just scaled by specific humidity, q) from a netCDF climate model file.
From endless searching I know I need to use some combination of np.gradient and np.sum, but I can't find the right combination. I just know that, done 'by hand', the calculation would be
divg = dqu/dx + dqv/dy
I know the below is wrong, but it's the best I've got so far...
from netCDF4 import Dataset
import numpy as np

nc = Dataset(ifile)
q = np.array(nc.variables['hus'][0,:,:])
u = np.array(nc.variables['ua'][0,:,:])
v = np.array(nc.variables['va'][0,:,:])
lon=nc.variables['lon'][:]
lat=nc.variables['lat'][:]
qu = q*u
qv = q*v
dqu/dx, dqu/dy = np.gradient(qu, [dx, dy])
dqv/dx, dqv/dy = np.gradient(qv, [dx, dy])
divg = np.sum(dqu/dx, dqv/dy)
This gives the error 'SyntaxError: can't assign to operator'.
Any help would be much appreciated.

Try something like:
dqu_dx, dqu_dy = np.gradient(qu, [dx, dy])
dqv_dx, dqv_dy = np.gradient(qv, [dx, dy])
You cannot assign to the result of an operation in Python; all of these are syntax errors:
a + b = 3
a * b = 7
# or, in your case:
a / b = 9
UPDATE
Following Pinetwig's comment: a/b is not a valid identifier name; it is (the return value of) an operator.
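Even with valid names, note that np.gradient(qu, [dx, dy]) will not be interpreted as per-axis spacings; spacings are passed to np.gradient as separate scalar arguments, one per axis. A minimal sketch of the whole divergence calculation, assuming q, u and v from the question have been loaded, and using placeholder scalar grid spacings dx and dy in metres (not values from the question):
import numpy as np

dx = 2.5e5  # hypothetical zonal grid spacing in metres
dy = 2.5e5  # hypothetical meridional grid spacing in metres

qu = q * u
qv = q * v

# np.gradient returns one derivative array per axis, in axis order;
# for fields dimensioned (lat, lon) that is d/dy first, then d/dx
dqu_dy, dqu_dx = np.gradient(qu, dy, dx)
dqv_dy, dqv_dx = np.gradient(qv, dy, dx)

# The divergence is an element-wise sum of the two derivative fields;
# np.sum is not wanted here, as it would reduce over the whole array
divg = dqu_dx + dqv_dy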

Try removing the [dx, dy].
[dqu_dx, dqu_dy] = np.gradient(qu)
[dqv_dx, dqv_dy] = np.gradient(qv)
Also, a note if you are recreating plots: np.gradient changed between NumPy 1.8.2 and 1.9. This mattered when recreating MATLAB plots in Python, as the 1.8.2 behaviour matched MATLAB's method. I am not sure how this relates to GrADS. Here is the wording for both versions.
1.8.2
"The gradient is computed using central differences in the interior
and first differences at the boundaries. The returned gradient hence has
the same shape as the input array."
1.9
"The gradient is computed using second order accurate central differences in the interior and either first differences or second order accurate one-sides (forward or backwards) differences at the boundaries. The returned gradient hence has the same shape as the input array."
The gradient function for 1.8.2 is shown below.
def gradient(f, *varargs):
    """
    Return the gradient of an N-dimensional array.

    The gradient is computed using central differences in the interior
    and first differences at the boundaries. The returned gradient hence has
    the same shape as the input array.

    Parameters
    ----------
    f : array_like
        An N-dimensional array containing samples of a scalar function.
    `*varargs` : scalars
        0, 1, or N scalars specifying the sample distances in each direction,
        that is: `dx`, `dy`, `dz`, ... The default distance is 1.

    Returns
    -------
    gradient : ndarray
        N arrays of the same shape as `f` giving the derivative of `f` with
        respect to each dimension.

    Examples
    --------
    >>> x = np.array([1, 2, 4, 7, 11, 16], dtype=np.float)
    >>> np.gradient(x)
    array([ 1. ,  1.5,  2.5,  3.5,  4.5,  5. ])
    >>> np.gradient(x, 2)
    array([ 0.5 ,  0.75,  1.25,  1.75,  2.25,  2.5 ])
    >>> np.gradient(np.array([[1, 2, 6], [3, 4, 5]], dtype=np.float))
    [array([[ 2.,  2., -1.],
            [ 2.,  2., -1.]]),
     array([[ 1. ,  2.5,  4. ],
            [ 1. ,  1. ,  1. ]])]

    """
    f = np.asanyarray(f)
    N = len(f.shape)  # number of dimensions
    n = len(varargs)
    if n == 0:
        dx = [1.0]*N
    elif n == 1:
        dx = [varargs[0]]*N
    elif n == N:
        dx = list(varargs)
    else:
        raise SyntaxError(
            "invalid number of arguments")

    # use central differences on interior and first differences on endpoints
    outvals = []

    # create slice objects --- initially all are [:, :, ..., :]
    slice1 = [slice(None)]*N
    slice2 = [slice(None)]*N
    slice3 = [slice(None)]*N

    otype = f.dtype.char
    if otype not in ['f', 'd', 'F', 'D', 'm', 'M']:
        otype = 'd'

    # Difference of datetime64 elements results in timedelta64
    if otype == 'M':
        # Need to use the full dtype name because it contains unit information
        otype = f.dtype.name.replace('datetime', 'timedelta')
    elif otype == 'm':
        # Needs to keep the specific units, can't be a general unit
        otype = f.dtype

    for axis in range(N):
        # select out appropriate parts for this dimension
        out = np.empty_like(f, dtype=otype)

        slice1[axis] = slice(1, -1)
        slice2[axis] = slice(2, None)
        slice3[axis] = slice(None, -2)
        # 1D equivalent -- out[1:-1] = (f[2:] - f[:-2])/2.0
        out[slice1] = (f[slice2] - f[slice3])/2.0

        slice1[axis] = 0
        slice2[axis] = 1
        slice3[axis] = 0
        # 1D equivalent -- out[0] = (f[1] - f[0])
        out[slice1] = (f[slice2] - f[slice3])

        slice1[axis] = -1
        slice2[axis] = -1
        slice3[axis] = -2
        # 1D equivalent -- out[-1] = (f[-1] - f[-2])
        out[slice1] = (f[slice2] - f[slice3])

        # divide by step size
        outvals.append(out / dx[axis])

        # reset the slice object in this dimension to ":"
        slice1[axis] = slice(None)
        slice2[axis] = slice(None)
        slice3[axis] = slice(None)

    if N == 1:
        return outvals[0]
    else:
        return outvals

If your grid is Gaussian and the wind variables in the file are named "u" and "v", you can also calculate divergence directly using cdo:
cdo uv2dv in.nc out.nc
See https://code.mpimet.mpg.de/projects/cdo/embedded/index.html#x1-6850002.13.2 for more details.


Adding two 1D arrays is giving me a 2D array (python)

I'm having an issue where I'm adding two 4x1 arrays and the result is a 4x4 array where the first column is repeated 4 times. The result I need is a 4x1 array.
I've initialized an array as such (m = 4): z = np.zeros((m, len(t)))
Later in my code I pass this array into a function as z[:,k+1], so the dimensionality becomes a 4x1 array. (Note that when I print this array to my terminal it shows up as a row vector and not a column vector: [0. 0. 0. 0.]; I'm not sure why this is either.) The array that I'm trying to add to z has the following structure when printed to my terminal:
[[#]
[#]
[#]
[#]]
Clearly the addition is pulling the above array into each element of z instead of adding their respective components together, but I'm not sure why as they should both be column vectors. I'd appreciate any help with this.
EDIT: I have a lot of code, so I've included a condensed version that hopefully gets the idea across.
n = 4 # Defines number of states
m = 4 # Defines number of measurements
x = np.zeros((n, len(t)), dtype=np.float64) # Initializes states
z = np.zeros((m, len(t)), dtype=np.float64) # Initializes measurements
u = np.zeros((1, len(t)), dtype=np.float64) # Initializes input
...
C = np.eye(m) # Defines measurement matrix
...
for k in range(len(t)-1):
    ...
    x_ukf[:,k+1], P_ukf[k+1,:,:] = function_call(x_ukf[:,k], z[:,k+1], u[:,k], P_ukf[k,:,:], C, Q, R, T)  # Calls UKF function
This then leads to the function where the following occurs (note that measurement_matrix = C (a 4x4 matrix), X is a 4x9 matrix, and W a 1x9 row vector):
Z = measurement_matrix @ X  # Calculates measurements based on sigma points
zhat = Z @ W.T
...
state_vec = state_vec + K @ (measurement_vec - zhat)  # Updates state estimates
The issue I'm having is with the expression (measurement_vec - zhat). This is where the result should be a 4x1 vector, but I'm getting a 4x4 matrix.
This is sometimes called broadcasting:
a, b = np.arange(4), np.arange(8,12)
c = a + b[:,None]
Output:
array([[ 8,  9, 10, 11],
       [ 9, 10, 11, 12],
       [10, 11, 12, 13],
       [11, 12, 13, 14]])
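In the question's terms, zhat comes out of Z @ W.T as a (4, 1) column while measurement_vec is a flat (4,) array, so subtracting them broadcasts to (4, 4). A minimal sketch of one fix, flattening the column before subtracting (the array contents here are placeholders):
import numpy as np

measurement_vec = np.arange(4.0)        # shape (4,)
zhat = np.arange(4.0).reshape(-1, 1)    # shape (4, 1), e.g. from Z @ W.T

# Flattening zhat makes both operands shape (4,), so the subtraction
# is element-wise instead of broadcast to (4, 4)
residual = measurement_vec - zhat.ravel()
print(residual.shape)  # (4,)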

Python Optimization: Using vector technique to find power of each matrix in a numpy array

A 3D numpy array A contains a series (in this example, I am choosing 3) of copies of a 2D numpy array D of shape 2 x 2. The matrix D is as follows:
D = np.array([[1,2],[3,4]])
A is initialized and assigned as below:
idx = np.arange(3)
A = np.zeros((3,2,2))
A[idx,:,:] = D  # This gives A = [[[1,2],[3,4]],[[1,2],[3,4]],
                #                  [[1,2],[3,4]]]
                # In mathematical notation: A = {D, D, D}
Now, essentially what I require after the execution of the code is:
Mathematically, A = {D^0, D^1, D^2} = {D0, D1, D2}
where D0 = [[1,0],[0,1]], D1 = [[1,2],[3,4]], D2=[[7,10],[15,22]]
Is it possible to apply power to each matrix element in A without using a for-loop? I would be doing larger matrices with more in the series.
I had defined n = np.array([0,1,2]) (corresponding to powers 0, 1 and 2) and tried
Result = np.power(A, n), but I do not get the desired output.
Is there an efficient way to do it?
Full code:
D = np.array([[1,2],[3,4]])
idx = np.arange(3)
A = np.zeros((3,2,2))
A[idx,:,:] = D  # This gives A = [[[1,2],[3,4]],[[1,2],[3,4]],
                #                  [[1,2],[3,4]]]
                # In mathematical notation: A = {D, D, D}
n = np.array([0,1,2])
Result = np.power(A,n) # ------> Not the desired output.
A cumulative product exists in numpy, but not for matrices. Therefore, you need to make your own 'matcumprod' function. You can use np.dot for this, but np.matmul (or @) is specialized for matrix multiplication.
Since you state your powers always go from 0 to some_power, I suggest the following function:
def matcumprod(D, upto):
    Res = np.empty((upto, *D.shape), dtype=D.dtype)
    Res[0, :, :] = np.eye(D.shape[0])  # D^0 is the identity
    for i in range(1, upto):
        Res[i, :, :] = Res[i-1, :, :] @ D
    return Res
By the way, a loop oftentimes outperforms a built-in numpy function if the latter uses a lot of memory, so don't fret over it if your powers stay within bounds...
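Usage with the matrix from the question, reproducing the expected {D^0, D^1, D^2} (output shown as comments):
D = np.array([[1, 2], [3, 4]])
print(matcumprod(D, 3))
# [[[ 1  0]
#   [ 0  1]]
#
#  [[ 1  2]
#   [ 3  4]]
#
#  [[ 7 10]
#   [15 22]]]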
Alright, I spent a lot of time on this problem but could not seem to find a vectorized solution of the kind you'd like. So I would like to first propose a basic solution, and then an optimization for the case where you require consecutive powers.
The function you're looking for is called numpy.linalg.matrix_power
import numpy as np

D = np.array([[1,2],[3,4]])
idx = np.arange(3)
A = np.zeros((3,2,2))
A[idx,:,:] = D  # This gives A = [[[1,2],[3,4]],[[1,2],[3,4]],
                #                  [[1,2],[3,4]]]
                # In mathematical notation: A = {D, D, D}
n = np.array([0,1,2])
result = [np.linalg.matrix_power(D, i) for i in n]
np.array(result)
#Output:
array([[[ 1,  0],
        [ 0,  1]],

       [[ 1,  2],
        [ 3,  4]],

       [[ 7, 10],
        [15, 22]]])
However, if you notice, you end up calculating multiple powers of the same base matrix. We could instead reuse the intermediate results, using numpy.linalg.multi_dot:
def all_powers_arr_of_matrix(A):
    result = np.zeros(A.shape)
    result[0] = np.linalg.matrix_power(A[0], 0)
    for i in range(1, A.shape[0]):
        result[i] = np.linalg.multi_dot([result[i - 1], A[i]])
    return result
result = all_powers_arr_of_matrix(A)
#Output:
array([[[ 1.,  0.],
        [ 0.,  1.]],

       [[ 1.,  2.],
        [ 3.,  4.]],

       [[ 7., 10.],
        [15., 22.]]])
Also, we can avoid creating the matrix A entirely, saving some time.
def all_powers_matrix(D, *rangeargs):  # end exclusive
    '''Expects 2D matrix.
    Use as all_powers_matrix(D, end) or
    all_powers_matrix(D, start, end)
    '''
    if len(rangeargs) == 1:
        start = 0
        end = rangeargs[0]
    elif len(rangeargs) == 2:
        start = rangeargs[0]
        end = rangeargs[1]
    else:
        print("incorrect args")
        return None

    result = np.zeros((end - start, *D.shape))
    result[0] = np.linalg.matrix_power(D, start)
    for i in range(1, end - start):
        result[i] = np.linalg.multi_dot([result[i - 1], D])
    return result
result = all_powers_matrix(D, 3)
#Output:
array([[[ 1.,  0.],
        [ 0.,  1.]],

       [[ 1.,  2.],
        [ 3.,  4.]],

       [[ 7., 10.],
        [15., 22.]]])
Note that you'd need to add error handling if you decide to use these functions as-is.
To calculate powers of the matrix D, one way is to find its eigenvalues and right eigenvectors with np.linalg.eig, raise the resulting diagonal matrix to each power (which is easy), and then, after some manipulation, use two np.einsum calls to calculate A:
# get eigenvalues and eigenvectors
eigval, eigvect = np.linalg.eig(D)

# to check how it works, you can do:
print(np.dot(eigvect*eigval, np.linalg.inv(eigvect)))
# [[1. 2.]
#  [3. 4.]]
# so you get back D

# use power as an outer ufunc with n on the eigenvalues to get all the powers you want
arrp = np.power.outer(eigval, n).T

# apply_along_axis to create the diagonal matrix along the last axis
diagp = np.apply_along_axis(np.diag, axis=-1, arr=arrp)

# finally use two np.einsum calls with the right subscripts to get what you want
A = np.einsum('lij,jk -> lik',
              np.einsum('ij,kjl -> kil', eigvect, diagp),
              np.linalg.inv(eigvect)).round()
print (A)
print (A.shape)
# [[[ 1.  0.]
#   [-0.  1.]]
#
#  [[ 1.  2.]
#   [ 3.  4.]]
#
#  [[ 7. 10.]
#   [15. 22.]]]
#
# (3, 2, 2)
I don't have a full solution, but there are some things I wanted to mention which are a bit too long for the comments.
You might first look into addition-chain exponentiation if you are computing big powers of big matrices. This is basically asking how many matrix multiplications are required to compute A^k for a given k. For instance A^5 = A(A^2)^2, so you need only three matrix multiplies: A^2, then (A^2)^2, then A(A^2)^2. This might be the simplest way to gain some efficiency, but you will probably still have to use explicit loops.
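As a concrete sketch of that A^5 example (three multiplications instead of the four a naive loop would use):
import numpy as np

A = np.array([[1, 2], [3, 4]])

A2 = A @ A    # 1st multiply: A^2
A4 = A2 @ A2  # 2nd multiply: (A^2)^2 = A^4
A5 = A @ A4   # 3rd multiply: A * A^4 = A^5

assert np.array_equal(A5, np.linalg.matrix_power(A, 5))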
Your question is also related to the problem of computing Ax, A^2x, ..., A^kx for a given A and x. This is an active area of research right now (search "matrix powers kernel"), since computing such a sequence efficiently is useful for parallel/communication-avoiding Krylov subspace methods. If you're looking for a very efficient solution to your problem, it might be worth looking into some of the results about this.

Python: weighted percentile for each row of array

I would like to calculate the weighted median of each row of a pandas dataframe.
I found this nice function (https://stackoverflow.com/a/29677616/10588967), but I don't seem to be able to pass a 2d array.
def weighted_quantile(values, quantiles, sample_weight=None, values_sorted=False, old_style=False):
    """ Very close to numpy.percentile, but supports weights.
    NOTE: quantiles should be in [0, 1]!
    :param values: numpy.array with data
    :param quantiles: array-like with many quantiles needed
    :param sample_weight: array-like of the same length as `array`
    :param values_sorted: bool, if True, then will avoid sorting of initial array
    :param old_style: if True, will correct output to be consistent with numpy.percentile.
    :return: numpy.array with computed quantiles.
    """
    values = numpy.array(values)
    quantiles = numpy.array(quantiles)
    if sample_weight is None:
        sample_weight = numpy.ones(len(values))
    sample_weight = numpy.array(sample_weight)
    assert numpy.all(quantiles >= 0) and numpy.all(quantiles <= 1), 'quantiles should be in [0, 1]'
    if not values_sorted:
        sorter = numpy.argsort(values)
        values = values[sorter]
        sample_weight = sample_weight[sorter]
    weighted_quantiles = numpy.cumsum(sample_weight) - 0.5 * sample_weight
    if old_style:
        # To be convenient with numpy.percentile
        weighted_quantiles -= weighted_quantiles[0]
        weighted_quantiles /= weighted_quantiles[-1]
    else:
        weighted_quantiles /= numpy.sum(sample_weight)
    return numpy.interp(quantiles, weighted_quantiles, values)
Using the code from the link, the following works:
weighted_quantile([1, 2, 9, 3.2, 4], [0.0, 0.5, 1.])
However, this does not work:
values = numpy.random.randn(10,5)
quantiles = [0.0, 0.5, 1.]
sample_weight = numpy.random.randn(10,5)
weighted_quantile(values, quantiles, sample_weight)
I receive the following error:
weighted_quantiles = np.cumsum(sample_weight) - 0.5 * sample_weight
ValueError: operands could not be broadcast together with shapes (250,) (10,5,5)
Question
Is it possible to apply this weighted quantile function in a vectorized manner on a dataframe, or can I only achieve this using .apply()?
Many thanks for your time!
np.cumsum(sample_weight)
returns a 1D array, so you would want to reshape it to (10,5,5) using
np.cumsum(sample_weight).reshape(10,5,5)
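If a fully vectorized version proves elusive, the .apply() route from the question does work. A minimal sketch, assuming the weighted_quantile function above is in scope (the DataFrame and weights here are placeholders; the function refers to the plain 'numpy' name, so that import is needed too):
import numpy
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.randn(10, 5))
weights = np.abs(np.random.randn(10, 5))  # weights should be non-negative

# Weighted median of each row: apply the 1-D function along axis=1,
# pairing each row with its row of weights via the row's index (row.name)
medians = df.apply(
    lambda row: weighted_quantile(row.values, 0.5,
                                  sample_weight=weights[row.name]),
    axis=1,
)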
Try the quantile function in my handy repo:
https://github.com/syrte/handy/blob/773a1500a9e10dd28eb0704fded94d6105a84374/stats.py#L239
I copy the docstring here so you can see what it can do. Please go to the link for the complete function (which is pretty long...).
def quantile(a, weights=None, q=None, nsig=None, origin='middle',
             axis=None, keepdims=False, sorted=False, nmin=0,
             nanas=None, shape='stats'):
    '''Compute the quantile of the data.

    Be careful when q is very small or many numbers repeat in a.

    Parameters
    ----------
    a : array_like
        Input array.
    weights : array_like, optional
        Weighting of a.
    q : float or float array in range of [0,1], optional
        Quantile to compute. One of `q` and `nsig` must be specified.
    nsig : float, optional
        Quantile in unit of standard deviation.
        Ignored when `q` is given.
    origin : ['middle'| 'high'| 'low'], optional
        Control how to interpret `nsig` to `q`.
    axis : int, optional
        Axis along which the quantiles are computed. The default is to
        compute the quantiles of the flattened array.
    sorted : bool
        If True, the input array is assumed to be in increasing order.
    nmin : int or None
        Return `nan` when the tail probability is less than `nmin/a.size`.
        Set `nmin` if you want to make result more reliable.
        - nmin = None will turn off the check.
        - nmin = 0 will return NaN for q not in [0, 1].
        - nmin >= 3 is recommended for statistical use.
        It is *not* well defined when `weights` is given.
    nanas : None, float, 'ignore'
        - None : do nothing. Note default sorting puts `nan` after `inf`.
        - float : `nan`s will be replaced by given value.
        - 'ignore' : `nan`s will be excluded before any calculation.
    shape : 'data' | 'stats'
        Put which axes first in the result:
            'data'  - the shape of data
            'stats' - the shape of `q` or `nsig`
        Only works for case where axis is not None.

    Returns
    -------
    quantile : scalar or ndarray
        The first axes of the result corresponds to the quantiles,
        the rest are the axes that remain after the reduction of `a`.

    See Also
    --------
    numpy.percentile
    conflevel

    Examples
    --------
    >>> np.random.seed(0)
    >>> x = np.random.randn(3, 100)
    >>> quantile(x, q=0.5)
    0.024654858649703838
    >>> quantile(x, nsig=0)
    0.024654858649703838
    >>> quantile(x, nsig=1)
    1.0161711040272021
    >>> quantile(x, nsig=[0, 1])
    array([ 0.02465486,  1.0161711 ])
    >>> quantile(np.abs(x), nsig=1, origin='low')
    1.024490097937702
    >>> quantile(-np.abs(x), nsig=1, origin='high')
    -1.0244900979377023
    >>> quantile(x, q=0.5, axis=1)
    array([ 0.09409612,  0.02465486, -0.07535884])
    >>> quantile(x, q=0.5, axis=1).shape
    (3,)
    >>> quantile(x, q=0.5, axis=1, keepdims=True).shape
    (3, 1)
    >>> quantile(x, q=[0.2, 0.8], axis=1).shape
    (2, 3)
    >>> quantile(x, q=[0.2, 0.8], axis=1, shape='stats').shape
    (3, 2)
    '''
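Given the weights, q, and axis parameters in that docstring, the row-wise weighted median from the question would presumably be a one-liner (a sketch, not tested against the repo; the data here are placeholders):
values = np.random.randn(10, 5)
weights = np.abs(np.random.randn(10, 5))

# Weighted median of each row: reduce over the column axis
row_medians = quantile(values, weights=weights, q=0.5, axis=1)
print(row_medians.shape)  # (10,)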

Get solution to overdetermined linear homogeneous system numpy

I'm trying to find the solution to an overdetermined linear homogeneous system (Ax = 0) using numpy, in order to get the least-squares solution for a linear regression.
This is the code I am using to generate the linear regression:
N = 100
x_data = np.linspace(0, N-1, N)
m = +5
n = -5
y_model = m*x_data + n
y_noise = y_model + np.random.normal(0, +5, N)
I want to recover m and n from y_noise. In other words, I want to solve the homogeneous system (Ax = 0), where "x = (m, n)" and "A = (x_data | 1 | -y_noise)". So I convert the non-homogeneous system (Ax = y) into a homogeneous one (Ax = 0) using this code:
A = np.array(np.vstack((x_data, np.ones(N), -y_noise)).T)
I know I could solve the non-homogeneous system using np.linalg.lstsq((x_data | 1), y_noise), but I want to get the solution for the homogeneous system. I am finding a problem with this function, as it only returns the trivial solution (x = 0):
x = np.linalg.lstsq(A, np.zeros(N))[0] => array([ 0., 0., 0.])
I was thinking about using eigenvectors to get the solution, but it does not seem to work:
A_T_A = np.dot(A.T, A)
eigen_values, eigen_vectors = np.linalg.eig(A_T_A)
# eigenvectors
[[ -2.03500000e-01 4.89890000e+00 5.31170000e+00]
[ -3.10000000e-03 1.02230000e+00 -2.64330000e+01]
[ 1.00000000e+00 1.00000000e+00 1.00000000e+00]]
# eigenvectors normalized
[[ -0.98365497700 -4.744666220 1.0] # (m1, n1, 1)
[ 0.00304878118 0.210130914 1.0] # (m2, n2, 1)
[ 25.7752417000 -5.132910010 1.0]] # (m3, n3, 1)
None of these fit the model parameters (m=+5, n=-5).
How can I find (m, n) correctly? Thanks!
I have already found how to fix it: the problem was how I was interpreting the output of the np.linalg.eig function, but the approach using eigenvectors is right. In spite of that, @Stelios is right when he says that np.linalg.lstsq returns the trivial solution (x = 0) because matrix A is full column rank.
I was assuming the output of np.linalg.eig was:
[[m1 n1 1]
[m2 n2 1]
[m3 n3 1]]
But it is not, the correct format is:
[[m1 m2 m3]
[n1 n2 n3]
[ 1 1 1]]
So if we want to get the solution that best fits the model parameters (m, n), we have to choose the eigenvector with the smallest eigenvalue and normalize it:
A_T_A = np.dot(A_homo.T, A_homo)
eigen_values, eigen_vectors = np.linalg.eig(A_T_A)
# eigenvectors
[[ 1.96409304e-01 9.48763118e-01 -2.47531678e-01]
[ 2.94608003e-04 2.52391765e-01 9.67625088e-01]
[ -9.80521952e-01 1.90123494e-01 -4.92925776e-02]]
# MIN eigenvector
eigen_vector_min = eigen_vectors[:, np.argmin(eigen_values)]
[-0.24753168 0.96762509 -0.04929258]
# MIN eigenvector normalized
[ 5.02168258 -19.63023915 1. ] # [m, n, 1]
Finally we get m = 5.02 and n = -19.6; m is a pretty good approximation, though n is further off.
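For completeness: the eigenvector of A^T A with the smallest eigenvalue is the same as the right-singular vector of A with the smallest singular value, so the identical answer falls out of an SVD without forming A^T A. A minimal sketch, assuming A is built as in the question:
# np.linalg.svd returns singular values in descending order,
# so the last row of Vt pairs with the smallest singular value
_, _, Vt = np.linalg.svd(A)
solution = Vt[-1]

# Normalize so the last component is 1, giving [m, n, 1]
solution = solution / solution[-1]
m_est, n_est = solution[0], solution[1]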

Filtering histogram edges and counts

Consider a histogram calculation of a numpy array that returns percentages:
# 500 random numbers between 0 and 10,000
values = np.random.uniform(0,10000,500)
# Histogram using e.g. 200 buckets
perc, edges = np.histogram(values, bins=200,
                           weights=np.zeros_like(values) + 100/values.size)
The above returns two arrays:
perc containing the % (i.e. percentages) of values within each pair of consecutive edges[ix] and edges[ix+1] out of the total.
edges of length len(perc)+1
Now, say that I want to filter perc and edges so that I only end up with the percentages and edges for values contained within a new range [m, M].
That is, I want to work with the sub-arrays of perc and edges corresponding to the interval of values within [m, M]. Needless to say, the new array of percentages would still refer to the total fraction count of the input array. We just want to filter perc and edges to end up with the correct sub-arrays.
How can I post-process perc and edges to do so?
The values of m and M can be any number of course. In the example above, we can assume e.g. m = 0 and M = 200.
m = 0; M = 200
mask = (m < edges) & (edges < M)
>>> edges[mask]
array([  37.4789683 ,   87.07491593,  136.67086357,  186.2668112 ])
Let's work on a smaller dataset so that it is easier to understand:
np.random.seed(0)
values = np.random.uniform(0, 100, 10)
values.sort()
>>> values
array([ 38.34415188, 42.36547993, 43.75872113, 54.4883183 ,
54.88135039, 60.27633761, 64.58941131, 71.51893664,
89.17730008, 96.36627605])
# Histogram using e.g. 10 buckets
perc, edges = np.histogram(values, bins=10,
                           weights=np.zeros_like(values) + 100./values.size)
>>> perc
array([ 30., 0., 20., 10., 10., 10., 0., 0., 10., 10.])
>>> edges
array([ 38.34415188, 44.1463643 , 49.94857672, 55.75078913,
61.55300155, 67.35521397, 73.15742638, 78.9596388 ,
84.76185122, 90.56406363, 96.36627605])
m = 0; M = 50
mask = (m <= edges) & (edges < M)
>>> mask
array([ True, True, True, False, False, False, False, False, False,
False, False], dtype=bool)
>>> edges[mask]
array([ 38.34415188, 44.1463643 , 49.94857672])
>>> perc[mask[:-1]][:-1]
array([ 30., 0.])
m = 40; M = 60
mask = (m < edges) & (edges < M)
>>> edges[mask]
array([ 44.1463643 , 49.94857672, 55.75078913])
>>> perc[mask[:-1]][:-1]
array([ 0., 20.])
Well, you might need some mathematics for this. The bins are equally spaced, so you can determine which bin is the first to include and which is the last by using the width of each bin:
bin_width = edges[1] - edges[0]
Now compute the first and last valid bin:
first = math.floor((m - edges[0]) / bin_width) + 1 # How many bins from the left
last = math.floor((edges[-1] - M) / bin_width) + 1 # How many bins from the right
(Ignore the +1 for both if you want to include the bin containing m or M - but then be careful that you don't end up with negative values for first and last!)
Now you know how many bins to include:
valid_edges = edges[first:-last]
valid_perc = perc[first:-last]
This excludes first bins from the left end and last bins from the right end.
It might be that I haven't paid enough attention to rounding and there is an "off by one" error included, but I think the idea is sound. :-)
You probably need to catch special cases like M > edges[-1], but for readability I haven't included these.
Or if the bins are not equally spaced use boolean masks instead of the calculation:
first = edges[edges < m].size + 1
last = edges[edges > M].size + 1
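Putting the equally spaced pieces together as one helper (a sketch; the off-by-one and out-of-range caveats above still apply, and m, M are assumed to lie strictly inside the histogram range so that first and last are both at least 1):
import math

def filter_hist(perc, edges, m, M):
    # Bins are equally spaced, so locate the cut points arithmetically
    bin_width = edges[1] - edges[0]
    first = math.floor((m - edges[0]) / bin_width) + 1   # bins dropped on the left
    last = math.floor((edges[-1] - M) / bin_width) + 1   # bins dropped on the right
    return perc[first:-last], edges[first:-last]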
