How to calculate average value of items in a 3D array? - python

I am trying to get an average value for the parameters to then plot with a given function. I think I have to somehow fill a 3-column array and then take the average of the values of that array. I want to create 1000 values for popt[0], popt[1], and popt[2], take the average of all those values, and then plot them.
for n in range(0,1000):
    params = np.zeros(3,1000)
    y3 = y2 + np.random.normal(loc=0.0, scale=0.1*y2)
    popt, pcov = optimize.curve_fit(fluxmeasureMW, bands, y3)
    params.append(popt[0], popt[1], popt[2])
a_avg = st.mean(params[0:])
b_avg = st.mean(params[1:])
e_avg = st.mean(params[2:])
The final goal is to plot:
fluxmeasureMW(bands,a_avg,b_avg,e_avg)
I am just not sure how to iterate the fitting function to then output 1000 values. 1000 is arbitrary, I just want a good sample size. The values for y2 and bands are already defined and can be plotted without issue, as well as the function fluxmeasureMW.

Say your function is like this
def fluxmeasureMW(x, f, g, h):
    return result_of_calc
Just run the fit in a loop; accumulate the popts in a list then take the mean
from scipy import optimize
import numpy as np
n = 1000
t = []
for i in range(n):
    y3 = y2 + np.random.normal(loc=0.0, scale=.1*y2)
    popt, pcov = optimize.curve_fit(fluxmeasureMW, bands, y3)
    t.append(popt)
f,g,h = np.mean(t,0)
t will be a list of lists...
[[f,g,h],
[f,g,h],
...]
np.mean(t,0) will average the values over the columns.
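For instance, a minimal sketch with made-up numbers (not fitted parameters) showing the column-wise mean:
t = [[1.0, 2.0, 3.0],
     [3.0, 4.0, 5.0]]
np.mean(t, 0)   # -> array([2., 3., 4.]); one mean per parameter column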
You could also use
import statistics
a = [[0, 1, 2],
     [1, 2, 3],
     [2, 3, 4],
     [3, 4, 5]]

for column in zip(*a):
    # print(column)
    print(statistics.mean(column))

Related

Replace all values in array with their enumerated counterparts Numpy

I have an array with the following structure
[[distance_1,intensity_1],[distance_2,intensity_2]...]
These distances have many decimal places, are unordered, and are not unique. I want to replace each distance with an integer from 0 to the number of unique distance values.
An example:
array = [[-1.13243,3],[-0.71229,2],[-2.314532,9],[2.34235,4],[1.342545,4],[-1.13243,2]]
By enumerating all unique distance values I get the following mapping
enumerated_distances = np.array(list(enumerate(np.unique(array[:,0]))))
[[-2.314532,0],[-1.13243,1],[-0.71229,2],[1.342525,3],[2.34235,4]]
Now, what I want to do, is to replace all distance values with their enumerated counterparts, so the original array ends up like this:
[[1,3],[2,2],[0,9],[4,4],[3,4],[1,2]]
Is there a way of doing this efficiently in numpy, without manually finding each value and replacing it with its enumerated counterpart?
Performance is key, as this will be integrated into a system running in real time. In my example, there is only one distance (x), but in reality it will be three dimensional (x,y,z).
As @Epsi95 points out, this is just np.unique(*, return_inverse=True):
_, inv = np.unique(array[:,0], return_inverse = True)
enumerated_out = np.stack([inv, array[:, 1]], axis = -1).astype(int)
enumerated_out
Out[]:
array([[1, 3],
       [2, 2],
       [0, 9],
       [4, 4],
       [3, 4],
       [1, 2]])
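Since the question mentions that the real data will be three-dimensional (x, y, z), here is a sketch of the same idea applied row-wise; it assumes a hypothetical array arr whose first three columns hold the coordinates and whose last column holds the intensity, and uses np.unique with axis=0:
import numpy as np

# hypothetical (x, y, z, intensity) rows, not from the original question
arr = np.array([[-1.1, 0.2, 3.0, 3],
                [-0.7, 0.2, 3.0, 2],
                [-1.1, 0.2, 3.0, 9]])

# enumerate the unique (x, y, z) triples; inv maps each row to its triple's index
_, inv = np.unique(arr[:, :3], axis=0, return_inverse=True)
out = np.column_stack([inv, arr[:, 3].astype(int)])
# out -> [[0, 3], [1, 2], [0, 9]]   (the triple (-1.1, 0.2, 3.0) gets index 0)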

How can I use Numpy to obtain a function that represents the relationship between a pair of values and another?

I am not a mathematician but I think that what I am after is called a "multiple linear regression"; please correct me if I am wrong.
I use numpy.polyfit and numpy.poly1d on a series of angle/pulse_width values from a servo motor, to obtain a function, angles_to_pulsewidths.
angles_to_pulsewidths is a polynomial function that models the servo and represents a line of good fit for the series. Given an angle value, it returns a corresponding pulse_width.
I am now trying to do a similar thing but instead of a single angle value in my series, I have pair of x/y co-ordinates for each pulse_width. I want to obtain a function that given an x/y pair, returns a corresponding pulse_width.
This is my code for creating my angles_to_pulsewidths function:
import numpy
angles_and_pulsewidths = [
    [-162, 2490],
    [-144, 2270],
    [-126, 2070],
    [-108, 1880],
]
angles_values_array = numpy.array(angles_and_pulsewidths)[:, 0]
pulsewidths_values_array = numpy.array(angles_and_pulsewidths)[:, 1]
coefficients = numpy.polyfit(
    angles_values_array,
    pulsewidths_values_array,
    3,
)
angles_to_pulsewidths = numpy.poly1d(coefficients)
I have been trying to modify this so that instead of providing a one-dimensional array of angles I will provide a two-dimensional array of x/y values:
xy_values = [[1, 2], [3, 4], [5, 6], [6, 7]]
pulse_widths = [2490, 2270, 2070, 1880]
However in this case, I can't use polyfit, because that takes only a one-dimensional array for its x parameter.
I can use numpy.linalg.lstsq instead, but I can't work out what to do with the results it gives me.
I'm also not even sure if I am on the right track; am I? I have read numerous related questions here, and have found numerous clues, but not enough to get me to the next step.
It is possible to use scipy's curve_fit for this.
If you know the general format of the function, perhaps you think it will be something of the form:
a*x^2 + b*x*y + c*y^2 + d*x + e*y + f
then you can use scipy's curve_fit to estimate what I will refer to as "parameters": a, b, c, d, e, f.
First we need to define the general form of our function:
def func(variables, a, b, c, d, e, f):
    x, y = variables
    return a * x ** 2 + b * x * y + c * y ** 2 + d * x + e * y + f
Note that our function has 6 parameters. To be able to demonstrate how this works we need more data points than parameters, so I'm extending your example data set to 7 pairs of xy values and 7 pulse widths:
xy_values = [[1, 2], [3, 4], [5, 6], [6, 7], [8, 9], [10, 11], [12, 13]]
pulse_widths = [2490, 2270, 2070, 1880, 2000, 500, 600]
(If you do not have more data than parameters, then you should probably choose a general form of your function with fewer parameters.)
We need to reshape our xy_values so that it is not pairs of values but a single pair of two sets of values (the xs and the ys). To do this I'm choosing to create a numpy array and "transpose" it:
xy_values = np.array(xy_values).T
We can now call our func on our array:
func(variables=xy_values, a=0, b=0, c=0, d=0, e=0, f=4)
Which gives:
array([4, 4, 4, 4, 4, 4, 4])
We can now actually use our data and curve_fit to estimate the best parameters:
from scipy.optimize import curve_fit
popt, pcov = curve_fit(f=func, xdata=xy_values, ydata=pulse_widths)
pcov contains information about how good the fit is and popt is the actual values of the parameters which we can directly see and use:
popt
gives:
array([ -25.61043682, -106.84636863,  119.10145249, -374.6200899 ,
        230.65326227, 2141.55126789])
and we can call the function with it on some new value of x and y:
func([0, 5], *popt)
which gives:
6272.353891536915
Choosing the correct general form of the function you want to fit is case dependent. If there is any knowledge of the problem at hand (perhaps you expect there to be some trigonometric relationship) then you can use it; otherwise it's a case of trial and error and finding a relationship that's "good enough" for your use case.
EDIT: Your original suggestion of needing to use multiple linear regression (MLR) is not completely incorrect. The solution approach I've described allows you to do MLR but it just assumes a specific type of func: one where all the terms are linear.
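For example, a purely linear form would look like the following sketch (assuming the same xy_values, pulse_widths and curve_fit import as above):
def linear_func(variables, a, b, c):
    # multiple linear regression: every term is linear in x and y
    x, y = variables
    return a * x + b * y + c

popt_lin, pcov_lin = curve_fit(f=linear_func, xdata=xy_values, ydata=pulse_widths)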

Sum up data on specific (multiple) ranges

I'm certain there's a good way to do this but I'm blanking on the right search terms to google, so I'll ask here instead. My problem is this:
I have two 2-dimensional arrays, both with the same dimensions. One array (array 1) is the accumulated precipitation at (x,y) points. The other (array 2) is the topographic height of the same (x,y) grid. I want to sum up array 1 between specific heights of array 2, and create a bar graph with topographic height bins on the x-axis and total accumulated precipitation on the y-axis.
So I want to be able to declare a list of heights (say [0, 100, 200, ..., 1000]) and for each bin, sum up all precipitation that occurred within that bin.
I can think of a few complicated ways to do this, but I'm guessing there's probably an easier way that I'm not thinking of. My gut instinct is to loop through my list of heights, mask anything outside of that range, sum up remaining values, add those to a new array, and repeat.
I'm wondering is if there's a built-in numpy or similar library that can do this more efficiently.
This code shows what you're asking for, some explanation in comments:
import numpy as np

def in_range(x, lower_bound, upper_bound):
    # returns whether x is between lower_bound (inclusive) and upper_bound (exclusive)
    return lower_bound <= x < upper_bound

# vectorize allows you to easily 'map' the function to a numpy array
vin_range = np.vectorize(in_range)

# representing your rainfall
rainfall = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])

# representing your height map
height = np.array([[1, 2, 1], [2, 4, 2], [3, 6, 3]])

# the bands of height you're looking to sum
bands = [[0, 2], [2, 4], [4, 6], [6, 8]]

# computing the actual results you'd want to chart
result = [(band, sum(rainfall[vin_range(height, *band)])) for band in bands]
print(result)
The next to last line is where the magic happens. vin_range(height, *band) uses the vectorized function to create a numpy array of boolean values, with the same dimensions as height, that has True if a value of height is in the range given, or False otherwise.
By using that array to index the array with the target values (rainfall), you get an array that only has the values for which the height is in the target range. Then it's just a matter of summing those.
In more steps than result = [(band, sum(rainfall[vin_range(height, *band)])) for band in bands] (but with the same result):
result = []
for lower, upper in bands:
    include = vin_range(height, lower, upper)
    values_to_include = rainfall[include]
    sum_of_rainfall = sum(values_to_include)
    result.append(([lower, upper], sum_of_rainfall))
You can use np.bincount together with np.digitize. digitize creates an array of bin indices from the height array height and the bin boundaries bins. bincount then uses the bin indices to sum the data in array rain.
# set up
rain = np.random.randint(0,100,(5,5))/10
height = np.random.randint(0,10000,(5,5))/10
bins = [0,250,500,750,10000]
# compute
sums = np.bincount(np.digitize(height.ravel(),bins),rain.ravel(),len(bins)+1)
# result
sums
# array([ 0. , 37. , 35.6, 14.6, 22.4, 0. ])
# check against direct method
[rain[(height>=bins[i]) & (height<bins[i+1])].sum() for i in range(len(bins)-1)]
# [37.0, 35.6, 14.600000000000001, 22.4]
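To get the bar graph the question asks for, here is a minimal matplotlib sketch on top of the sums computed above (an addition of mine; note that the first and last bincount slots collect values below and above the outermost bin edges and are dropped here):
import matplotlib.pyplot as plt

labels = [f"{bins[i]}-{bins[i+1]}" for i in range(len(bins) - 1)]
plt.bar(labels, sums[1:len(bins)])  # sums[0] and sums[-1] lie outside the bin edges
plt.xlabel("topographic height bin")
plt.ylabel("total accumulated precipitation")
plt.show()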
An example using the numpy ma module which allows to make masked arrays. From the docs:
A masked array is the combination of a standard numpy.ndarray and a mask. A mask is either nomask, indicating that no value of the associated array is invalid, or an array of booleans that determines for each element of the associated array whether the value is valid or not.
which seems what you need in this case.
import numpy as np
pr = np.random.randint(0, 1000, size=(100, 100)) #precipitation map
he = np.random.randint(0, 1000, size=(100, 100)) #height map
bins = np.arange(0, 1001, 200)
values = []
for vmin, vmax in zip(bins[:-1], bins[1:]):
    # creating the masked array; here the minimum is included in the bin, the maximum excluded
    maskedpr = np.ma.masked_where((he < vmin) | (he >= vmax), pr)
    values.append(maskedpr.sum())
values is the list of values for each bin, which you can plot.
The numpy.ma.masked_where function returns an array masked where condition is True. So you need to set the condition to be True outside the bins.
The sum() method performs the sum only where the array is not masked.

Efficient Histogram of Differences for sparse Data

I want to compute a histogram of the differences between all the elements in one array A with all the elements in another array B.
So I want to have a histogram of the following data:
Delta1 = A1-B1
Delta2 = A1-B2
Delta3 = A1-B3
...
DeltaN = A2-B1
DeltaN+1 = A2-B2
DeltaN+2 = A2-B3
...
The point of this calculation is to show that these data are correlated, even though not every data point has a "partner" in the other array and the correlation is rather noisy in practice.
The problem is that these files are in practice very large, several GB, and all entries of the vectors are 64-bit integers with very large differences.
It seems unfeasible to me to convert these data to binary arrays in order to be able to use correlation functions and Fourier transforms to compute this.
Here is a small example to give a better taste of what I'm looking at.
This implementation with numpy's searchsorted in a for loop is rather slow.
import numpy as np
import matplotlib.pyplot as plt
timetagsA = [668656283, 974986989, 1294941174, 1364697327,
             1478796061, 1525549542, 1715828978, 2080480431, 2175456303, 2921498771, 3671218524,
             4186901001, 4444689281, 5087334517, 5467644990, 5836391057, 6249837363, 6368090967, 8344821453,
             8933832044, 9731229532]
timetagsB = [13455, 1294941188, 1715828990, 2921498781, 5087334530, 5087334733, 6368090978, 9731229545, 9731229800, 9731249954]

max_delta_t = 500
nbins = 10000

histo = np.zeros((nbins, 2), dtype=float)
histo[:, 0] = np.arange(0, nbins)

for i in range(len(timetagsA)):
    delta_t = 0
    j = np.searchsorted(timetagsB, timetagsA[i])
    while np.round(delta_t) < max_delta_t and j < len(timetagsB):
        delta_t = timetagsB[j] - timetagsA[i]
        if delta_t < max_delta_t:
            histo[int(delta_t), 1] += 1
        j = j + 1

plt.plot(histo[0:50, 1])
plt.show()
It would be great if someone could help me to find a faster way to compute this. Thanks in advance!
EDIT
The solution below assumes that your data is so huge that you cannot simply use np.subtract.outer and then slice out the values you want to keep:
arr_diff = np.subtract.outer(arrB, arrA)
print(arr_diff[(0 < arr_diff) & (arr_diff < max_delta_t)])
# array([ 14, 12, 10, 13, 216, 11, 13, 268], dtype=int64)
With your example data this works, but not with a data set that is too large.
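If memory is the only obstacle, one option (my addition, not part of the original answer) is to process the outer difference in chunks of arrA and accumulate the counts, assuming the arrA/arrB arrays defined in the original solution below:
counts = np.zeros(max_delta_t, dtype=np.int64)
chunk = 10000  # tune to the available memory
for start in range(0, len(arrA), chunk):
    d = np.subtract.outer(arrB, arrA[start:start + chunk])
    d = d[(0 < d) & (d < max_delta_t)]
    counts += np.bincount(d, minlength=max_delta_t)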
ORIGINAL SOLUTION
Let's first suppose your max_delta_t is smaller than the smallest difference between two successive values in timetagsB, for an easy way of doing it (then we can try to generalize it).
# create the arrays instead of lists
arrA = np.array(timetagsA)
arrB = np.array(timetagsB)
max_delta_t = np.diff(arrB).min() - 1  # here it's 202, just for the explanation
You can use np.searchsorted in a vectorized way:
# create the array of search
arr_search = np.searchsorted(arrB, arrA) # the position of each element of arrA in arrB
print (arr_search)
# array([1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 4, 4, 4, 4, 6, 6, 6, 6, 7, 7, 7],dtype=int64)
You can calculate the difference between the element of arrB corresponding to each element of arrA by slicing arrB with arr_search
# calculate the difference
arr_diff = arrB[arr_search] - arrA
print(arr_diff[arr_diff < max_delta_t])  # find the ones smaller than max_delta_t
# array([14, 12, 10, 13, 11, 13], dtype=int64)
So what you are looking for is then calculated by np.bincount
arr_bins = np.bincount(arr_diff[arr_diff<max_delta_t])
#to make it look like histo but not especially necessary
histo = np.array([range(len(arr_bins)),arr_bins]).T
Now the problem is that some difference values between arrA and arrB cannot be obtained with this method when max_delta_t is bigger than the gap between two successive values in arrB. Here is one way to handle any value of max_delta_t; it is maybe not the most efficient, depending on the values of your data.
# need an array with the number of elements in arrB for each element of arrA
# within a max_delta_t range
arr_diff_search = np.searchsorted(arrB, arrA + max_delta_t) - np.searchsorted(arrB, arrA)

# do a loop to calculate all the values you are interested in
list_arr = []
for i in range(arr_diff_search.max() + 1):
    arr_diff = arrB[(arr_search + i) % len(arrB)][arr_diff_search >= i] - arrA[arr_diff_search >= i]
    list_arr.append(arr_diff[(0 < arr_diff) & (arr_diff < max_delta_t)])
Now you can np.concatenate the list_arr and use np.bincount such as:
arr_bins = np.bincount(np.concatenate(list_arr))
histo = np.array([range(len(arr_bins)),arr_bins]).T
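As a quick sanity check (assuming the arrays and list_arr from the snippets above are in scope), the concatenated differences should match the brute-force np.subtract.outer result from the EDIT, up to ordering:
brute = np.subtract.outer(arrB, arrA)
brute = brute[(0 < brute) & (brute < max_delta_t)]
assert np.array_equal(np.sort(np.concatenate(list_arr)), np.sort(brute))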

How to find linearly independent rows from a matrix

How to identify the linearly independent rows from a matrix? For instance, consider the matrix
[[0, 1, 0, 0],
 [0, 0, 1, 0],
 [0, 1, 1, 0],
 [1, 0, 0, 1]]
The 4th row is independent.
First, your 3rd row is linearly dependent on the 1st and 2nd rows. However, your 1st and 4th columns are linearly dependent.
Two methods you could use:
Eigenvalue
If one eigenvalue of the matrix is zero, its corresponding eigenvector is linearly dependent. The documentation for eig states that the returned eigenvalues are repeated according to their multiplicity and not necessarily ordered. However, assuming the eigenvalues correspond to your row vectors, one method would be:
import numpy as np

matrix = np.array(
    [
        [0, 1, 0, 0],
        [0, 0, 1, 0],
        [0, 1, 1, 0],
        [1, 0, 0, 1]
    ])

lambdas, V = np.linalg.eig(matrix.T)
# The linearly dependent row vectors
print(matrix[lambdas == 0, :])
Cauchy-Schwarz inequality
To test linear dependence of vectors and figure out which ones, you could use the Cauchy-Schwarz inequality. Basically, if the inner product of the vectors is equal to the product of the norm of the vectors, the vectors are linearly dependent. Here is an example for the columns:
import numpy as np

matrix = np.array(
    [
        [0, 1, 0, 0],
        [0, 0, 1, 0],
        [0, 1, 1, 0],
        [1, 0, 0, 1]
    ])

print(np.linalg.det(matrix))

for i in range(matrix.shape[0]):
    for j in range(matrix.shape[0]):
        if i != j:
            inner_product = np.inner(
                matrix[:, i],
                matrix[:, j]
            )
            norm_i = np.linalg.norm(matrix[:, i])
            norm_j = np.linalg.norm(matrix[:, j])

            print('I: ', matrix[:, i])
            print('J: ', matrix[:, j])
            print('Prod: ', inner_product)
            print('Norm i: ', norm_i)
            print('Norm j: ', norm_j)

            if np.abs(inner_product - norm_j * norm_i) < 1E-5:
                print('Dependent')
            else:
                print('Independent')
Testing the rows is a similar approach.
Then you could extend this to test all combinations of vectors, but I imagine this solution scales badly with size.
With sympy you can find the linearly independent rows using sympy.Matrix.rref:
>>> import sympy
>>> import numpy as np
>>> mat = np.array([[0,1,0,0],[0,0,1,0],[0,1,1,0],[1,0,0,1]]) # your matrix
>>> _, inds = sympy.Matrix(mat).T.rref() # to check the rows you need to transpose!
>>> inds
[0, 1, 3]
Which basically tells you that rows 0, 1 and 3 are linearly independent, while row 2 isn't (it's a linear combination of rows 0 and 1).
Then you could keep only the independent rows (dropping the dependent ones) with slicing:
>>> mat[inds]
array([[0, 1, 0, 0],
[0, 0, 1, 0],
[1, 0, 0, 1]])
This also works well for rectangular (not only for square) matrices.
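For instance, a quick sketch with a made-up 3x4 matrix whose second row is twice the first (in recent SymPy versions rref returns the pivot indices as a tuple, hence the list() before indexing):
>>> rect = np.array([[1, 2, 3, 4],
...                  [2, 4, 6, 8],
...                  [1, 0, 0, 1]])
>>> _, inds = sympy.Matrix(rect).T.rref()
>>> inds
(0, 2)
>>> rect[list(inds)]
array([[1, 2, 3, 4],
       [1, 0, 0, 1]])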
I edited the code for the Cauchy-Schwarz inequality so that it scales better with dimension: the inputs are the matrix and its dimension, while the output is a new rectangular matrix which contains along its rows the linearly independent columns of the starting matrix. This works on the assumption that the first column is never null, but it can be readily generalized to handle that case too. Another thing I observed is that 1e-5 seems to be a "sloppy" threshold, since some particular pathological vectors were found to be linearly dependent in that case; 1e-4 doesn't give me the same problems. I hope this is of some help: it was pretty difficult for me to find a really working routine to extract linearly independent vectors, so I'm willing to share mine. If you find a bug, please report it!
from numpy import absolute, dot, zeros
from numpy.linalg import matrix_rank, norm

def find_li_vectors(dim, R):
    r = matrix_rank(R)
    index = zeros(r, dtype=int)  # this will save the positions of the li columns in the matrix
    counter = 0
    index[0] = 0  # without loss of generality we pick the first column as linearly independent
    j = 0         # therefore the second index is simply 0

    for i in range(R.shape[0]):  # loop over the columns
        if i != j:  # if the two columns are not the same
            inner_product = dot(R[:, i], R[:, j])  # compute the scalar product
            norm_i = norm(R[:, i])  # compute norms
            norm_j = norm(R[:, j])

            # inner product and the product of the norms are equal only if the two vectors are parallel
            # therefore we are looking for the ones which exhibit a difference which is bigger than a threshold
            if absolute(inner_product - norm_j * norm_i) > 1e-4:
                counter += 1        # counter is incremented
                index[counter] = i  # index is saved
                j = i               # j is refreshed
                # do not forget to refresh j: otherwise you would compute only the vectors li with the first column!!

    R_independent = zeros((r, dim))
    i = 0
    # now save everything in a new matrix
    while i < r:
        R_independent[i, :] = R[index[i], :]
        i += 1

    return R_independent
I know this was asked a while ago, but here is a very simple (although probably inefficient) solution. Given an array, the following finds a set of linearly independent vectors by progressively adding a vector and testing if the rank has increased:
from numpy.linalg import matrix_rank

def LI_vecs(dim, M):
    LI = [M[0]]
    for i in range(dim):
        tmp = []
        for r in LI:
            tmp.append(r)
        tmp.append(M[i])                # set tmp=LI+[M[i]]
        if matrix_rank(tmp) > len(LI):  # test if M[i] is linearly independent from all (row) vectors in LI
            LI.append(M[i])             # note that matrix_rank does not need to take in a square matrix
    return LI                           # return set of linearly independent (row) vectors

# Example
mat = [[1,2,3,4], [4,5,6,7], [5,7,9,11], [2,4,6,8]]
LI_vecs(4, mat)
I interpret the problem as finding rows that are linearly independent from other rows.
That is equivalent to finding rows that are linearly dependent on other rows.
Gaussian elimination, treating numbers smaller than a threshold as zeros, can do that. It is faster than finding the eigenvalues of a matrix, testing all combinations of rows with the Cauchy-Schwarz inequality, or singular value decomposition.
See:
https://math.stackexchange.com/questions/1297437/using-gauss-elimination-to-check-for-linear-dependence
Problem with floating point numbers:
http://numpy-discussion.10968.n7.nabble.com/Reduced-row-echelon-form-td16486.html
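A minimal sketch of that idea (my own illustration, not taken from the linked threads): reduce a float copy of the matrix row by row, treat entries below a tolerance as zero, and keep the indices of the rows that supply a pivot:
import numpy as np

def independent_rows(A, tol=1e-10):
    # incremental Gaussian elimination: each candidate row is reduced against the
    # pivot rows kept so far; if anything above the tolerance survives, the row is
    # independent and becomes a new pivot row
    A = np.asarray(A, dtype=float)
    pivots = []  # (pivot_column, normalized_reduced_row) pairs kept so far
    keep = []    # indices of the independent rows in the original matrix
    for idx, row in enumerate(A):
        row = row.copy()
        for col, prow in pivots:
            row -= row[col] * prow  # eliminate this pivot column
        nz = np.flatnonzero(np.abs(row) > tol)
        if nz.size:  # something survived, so the row is independent
            pivots.append((nz[0], row / row[nz[0]]))
            keep.append(idx)
    return keep

M = np.array([[0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 1, 1, 0],
              [1, 0, 0, 1]])
print(independent_rows(M))  # -> [0, 1, 3]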
With regards to the following discussion:
Find dependent rows/columns of a matrix using Matlab?
from sympy import *
A = Matrix([[1,1,1],[2,2,2],[1,7,5]])
print(A.nullspace())
It is obvious that the first and second rows are multiples of each other.
If we execute the above code we get [-1/3, -2/3, 1]. The indices of the zero elements in the null space show independence. But why is the third element here not zero? If we multiply the A matrix with the null space, we get a zero column vector. So what's wrong?
The answer which we are looking for is the null space of the transpose of A.
B = A.T
print(B.nullspace())
Now we get [-2, 1, 0], which shows that the third row is independent.
Two important notes here:
Consider whether we want to check the row dependencies or the column dependencies.
Notice that the null space of a matrix is not equal to the null space of the transpose of that matrix unless it is symmetric.
You can basically find the vectors spanning the column space of the matrix by using the SymPy library's columnspace() method of the Matrix object. These are automatically the linearly independent columns of the matrix.
import sympy as sp
import numpy as np
M = sp.Matrix([[0, 1, 0, 0],
               [0, 0, 1, 0],
               [1, 0, 0, 1]])

for i in M.columnspace():
    print(np.array(i))
    print()

# The output is the following:
# [[0]
#  [0]
#  [1]]
# [[1]
#  [0]
#  [0]]
# [[0]
#  [1]
#  [0]]
