Modify keys in ndarray - python

I have a ndarray that looks like this:
array([[-2.1e+00, -9.89644000e-03],
       [-2.2e+00,  0.00000000e+00],
       [-2.3e+00,  2.33447000e-02],
       [-2.4e+00,  5.22411000e-02]])
What's the most Pythonic way to add the integer 2 to the first column, to give:
array([[-0.1e+00, -9.89644000e-03],
       [-0.2e+00,  0.00000000e+00],
       [-0.3e+00,  2.33447000e-02],
       [-0.4e+00,  5.22411000e-02]])

Edit:
To add 2 to the first column only, do
>>> import numpy as np
>>> x = np.array([[-2.1e+00, -9.89644000e-03],
...               [-2.2e+00,  0.00000000e+00],
...               [-2.3e+00,  2.33447000e-02],
...               [-2.4e+00,  5.22411000e-02]])
>>> x[:,0] += 2  # : selects all rows, 0 selects first column
>>> x
array([[-0.1       , -0.00989644],
       [-0.2       ,  0.        ],
       [-0.3       ,  0.0233447 ],
       [-0.4       ,  0.0522411 ]])
By contrast, x + 2 adds 2 to every element. Note that the snippet above modified x in place, so we rebuild it first:
>>> import numpy as np
>>> x = np.array([[-2.1e+00, -9.89644000e-03],
...               [-2.2e+00,  0.00000000e+00],
...               [-2.3e+00,  2.33447000e-02],
...               [-2.4e+00,  5.22411000e-02]])
>>> x + 2
array([[-0.1       ,  1.99010356],
       [-0.2       ,  2.        ],
       [-0.3       ,  2.0233447 ],
       [-0.4       ,  2.0522411 ]])
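If you need to keep the original array around, note that x[:, 0] += 2 modifies x in place; a minimal sketch of a copy-based alternative, continuing the session above:
>>> y = x.copy()  # independent copy; x stays untouched
>>> y[:, 0] += 2
>>> x[:, 0]       # the original first column is unchanged
array([-2.1, -2.2, -2.3, -2.4])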
Perhaps the Numpy Tutorial may help you.


Python - add 1D-array as column of 2D

I want to add a vector as the first column of my 2D array, which looks like:
[[ 1. 0. 0. nan]
[ 4. 4. 9.97 1. ]
[ 4. 4. 27.94 1. ]
[ 2. 1. 4.17 1. ]
[ 3. 2. 38.22 1. ]
[ 4. 4. 31.83 1. ]
[ 3. 4. 41.87 1. ]
[ 2. 1. 18.33 1. ]
[ 4. 4. 33.96 1. ]
[ 2. 1. 5.65 1. ]
[ 3. 3. 40.74 1. ]
[ 2. 1. 10.04 1. ]
[ 2. 2. 53.15 1. ]]
I want to add an array of 13 elements as the first column of the matrix. I tried np.column_stack and np.append, but they either only work for 1D vectors or don't work because I can't choose axis=1 and can only do np.append(peak_values, results).
I have a very simple option for you using NumPy:
x = np.array([[ 3.9427767, -4.297677 ],
              [ 3.9427767, -4.297677 ],
              [ 3.9427767, -4.297677 ],
              [ 3.9427767, -4.297677 ],
              [ 3.942777 , -4.297677 ],
              [ 3.9427767, -4.297677 ],
              [ 3.9427767, -4.297677 ],
              [ 3.9427767, -4.297677 ],
              [ 3.9427767, -4.297677 ],
              [ 3.9427772, -4.297677 ]])
b = np.arange(10).reshape(1, -1)    # row vector of shape (1, 10)
np.concatenate((b.T, x), axis=1)    # transpose it into a column, then concatenate
Output:
array([[ 0. , 3.9427767, -4.297677 ],
[ 1. , 3.9427767, -4.297677 ],
[ 2. , 3.9427767, -4.297677 ],
[ 3. , 3.9427767, -4.297677 ],
[ 4. , 3.942777 , -4.297677 ],
[ 5. , 3.9427767, -4.297677 ],
[ 6. , 3.9427767, -4.297677 ],
[ 7. , 3.9427767, -4.297677 ],
[ 8. , 3.9427767, -4.297677 ],
[ 9. , 3.9427772, -4.297677 ]])
Improving on this answer by removing the unnecessary transposition: you can indeed use reshape(-1, 1) to turn the 1D array you'd like to prepend into a 2D array with a single column. At that point the arrays differ in shape only along the second axis, and np.concatenate accepts them:
>>> import numpy as np
>>> a = np.arange(12).reshape(3, 4)
>>> b = np.arange(3)
>>> a
array([[ 0,  1,  2,  3],
       [ 4,  5,  6,  7],
       [ 8,  9, 10, 11]])
>>> b
array([0, 1, 2])
>>> b.reshape(-1, 1)  # preview the reshaping...
array([[0],
       [1],
       [2]])
>>> np.concatenate((b.reshape(-1, 1), a), axis=1)
array([[ 0,  0,  1,  2,  3],
       [ 1,  4,  5,  6,  7],
       [ 2,  8,  9, 10, 11]])
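For completeness, the same result can be had without the explicit reshape; np.insert and np.column_stack handle the column logic for you (standard NumPy functions, shown with the same a and b):
>>> np.insert(a, 0, b, axis=1)  # insert b as a new column at index 0
array([[ 0,  0,  1,  2,  3],
       [ 1,  4,  5,  6,  7],
       [ 2,  8,  9, 10, 11]])
>>> np.column_stack((b, a))     # stacks 1D arrays as columns directly
array([[ 0,  0,  1,  2,  3],
       [ 1,  4,  5,  6,  7],
       [ 2,  8,  9, 10, 11]])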
For the simplest answer, you probably don't even need numpy.
Try the following:
new_array = []
new_array.append(your_array)
That's it.
I would suggest using NumPy; it will allow you to do what you want easily. Here is an example that squares the even values in a list; you can index elements with something like nums[0].
nums = [0, 1, 2, 3, 4]
even_squares = [x ** 2 for x in nums if x % 2 == 0]
print(even_squares)  # Prints "[0, 4, 16]"
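For reference, a NumPy version of that list comprehension might look like this (a small sketch using a boolean mask):
import numpy as np

nums = np.array([0, 1, 2, 3, 4])
even_squares = nums[nums % 2 == 0] ** 2  # boolean mask keeps the even values
print(even_squares)                      # [ 0  4 16]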

numpy: get indices value in a separate column

How do I get the index values in a separate column, as efficiently as possible? I know how to do this in a loop, but I wonder what other ways there are?
from this ndarray
[[ 0.71587892  0.72278279]
 [ 0.72225173  0.73340414]
 [ 0.7259692   0.72862454]]
to this
[[ 0  0.71587892  0.72278279]
 [ 1  0.72225173  0.73340414]
 [ 2  0.7259692   0.72862454]]
How about
np.column_stack((np.arange(len(a)), a))
where a is your initial array?
Take a look at the following IPython session:
In [1]: import numpy as np
In [2]: a = np.array([[ 0.71587892,  0.72278279],
   ...:               [ 0.72225173,  0.73340414],
   ...:               [ 0.7259692 ,  0.72862454]])
In [3]: np.column_stack((np.arange(len(a)), a))
Out[3]:
array([[ 0.        ,  0.71587892,  0.72278279],
       [ 1.        ,  0.72225173,  0.73340414],
       [ 2.        ,  0.7259692 ,  0.72862454]])
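One caveat worth noting: a NumPy array has a single dtype, so the integer index column is upcast to float64 along with the data, as the trailing decimal points above show. If you need true integer indices, keep them in a separate array. Continuing the session:
In [4]: np.column_stack((np.arange(len(a)), a)).dtype
Out[4]: dtype('float64')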

Filter a numpy array based on largest value

I have a NumPy array holding 4-dimensional vectors of the form (x, y, z, w).
The array has shape N x 4 (one vector per row). The (x, y, z) components are spatial locations, and w holds a particular measurement at that location. There can be multiple measurements (stored as floats) associated with the same (x, y, z) position.
What I would like to do is filter the array, so that I get a new array containing the maximum measurement for each (x, y, z) position.
So if my data is like:
x, y, z, w1
x, y, z, w2
x, y, z, w3
where w1 is greater than w2 and w3, the filtered data would be:
x, y, z, w1
So more concretely, say I have data like:
[[ 0.7732126 0.48649481 0.29771819 0.91622924]
[ 0.7732126 0.48649481 0.29771819 1.91622924]
[ 0.58294263 0.32025559 0.6925856 0.0524125 ]
[ 0.58294263 0.32025559 0.6925856 0.05 ]
[ 0.58294263 0.32025559 0.6925856 1.7 ]
[ 0.3239913 0.7786444 0.41692853 0.10467392]
[ 0.12080023 0.74853649 0.15356663 0.4505753 ]
[ 0.13536096 0.60319054 0.82018125 0.10445047]
[ 0.1877724 0.96060999 0.39697999 0.59078612]]
This should return
[[ 0.7732126 0.48649481 0.29771819 1.91622924]
[ 0.58294263 0.32025559 0.6925856 1.7 ]
[ 0.3239913 0.7786444 0.41692853 0.10467392]
[ 0.12080023 0.74853649 0.15356663 0.4505753 ]
[ 0.13536096 0.60319054 0.82018125 0.10445047]
[ 0.1877724 0.96060999 0.39697999 0.59078612]]
This is convoluted, but it is probably as good as you are going to get using numpy only...
First, we use lexsort to put all entries with the same coordinates together. With a being your sample array:
>>> perm = np.lexsort(a[:, 3::-1].T)
>>> a[perm]
array([[ 0.12080023,  0.74853649,  0.15356663,  0.4505753 ],
       [ 0.13536096,  0.60319054,  0.82018125,  0.10445047],
       [ 0.1877724 ,  0.96060999,  0.39697999,  0.59078612],
       [ 0.3239913 ,  0.7786444 ,  0.41692853,  0.10467392],
       [ 0.58294263,  0.32025559,  0.6925856 ,  0.05      ],
       [ 0.58294263,  0.32025559,  0.6925856 ,  0.0524125 ],
       [ 0.58294263,  0.32025559,  0.6925856 ,  1.7       ],
       [ 0.7732126 ,  0.48649481,  0.29771819,  0.91622924],
       [ 0.7732126 ,  0.48649481,  0.29771819,  1.91622924]])
Note that by reversing the column order (np.lexsort treats its last key as the primary one), we are sorting by x, breaking ties with y, then z, then w.
Because it is the maximum we are looking for, we just need to take the last entry in every group, which is a pretty straightforward thing to do:
>>> a_sorted = a[perm]
>>> last = np.concatenate((np.any(a_sorted[:-1, :3] != a_sorted[1:, :3], axis=1),
...                        [True]))
>>> a_unique_max = a_sorted[last]
>>> a_unique_max
array([[ 0.12080023, 0.74853649, 0.15356663, 0.4505753 ],
[ 0.13536096, 0.60319054, 0.82018125, 0.10445047],
[ 0.1877724 , 0.96060999, 0.39697999, 0.59078612],
[ 0.3239913 , 0.7786444 , 0.41692853, 0.10467392],
[ 0.58294263, 0.32025559, 0.6925856 , 1.7 ],
[ 0.7732126 , 0.48649481, 0.29771819, 1.91622924]])
If you would rather not have the output sorted, but keep the rows in the order in which they appeared in the original array, you can also get that with the aid of perm:
>>> a_unique_max[np.argsort(perm[last])]
array([[ 0.7732126 , 0.48649481, 0.29771819, 1.91622924],
[ 0.58294263, 0.32025559, 0.6925856 , 1.7 ],
[ 0.3239913 , 0.7786444 , 0.41692853, 0.10467392],
[ 0.12080023, 0.74853649, 0.15356663, 0.4505753 ],
[ 0.13536096, 0.60319054, 0.82018125, 0.10445047],
[ 0.1877724 , 0.96060999, 0.39697999, 0.59078612]])
This will only work for the maximum, and it comes as a by-product of the sorting. If you are after a different function, say the product of all same-coordinates entries, you could do something like:
>>> first = np.concatenate(([True],
...                         np.any(a_sorted[:-1, :3] != a_sorted[1:, :3], axis=1)))
>>> a_unique_prods = np.multiply.reduceat(a_sorted, np.nonzero(first)[0])
And you will have to play around a little with these results to assemble your return array.
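For instance, to reduce only the w column and then reattach the unique coordinates, a sketch along the same lines (my addition, reusing a_sorted and first from above) could be:
>>> starts = np.nonzero(first)[0]
>>> np.column_stack((a_sorted[first, :3],
...                  np.multiply.reduceat(a_sorted[:, 3], starts)))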
I see that you already got the pointer towards pandas in the comments. FWIW, here's how you can get the desired behavior, assuming you don't care about the final sort order since groupby changes it up.
In [14]: arr
Out[14]:
array([[ 0.7732126 , 0.48649481, 0.29771819, 0.91622924],
[ 0.7732126 , 0.48649481, 0.29771819, 1.91622924],
[ 0.58294263, 0.32025559, 0.6925856 , 0.0524125 ],
[ 0.58294263, 0.32025559, 0.6925856 , 0.05 ],
[ 0.58294263, 0.32025559, 0.6925856 , 1.7 ],
[ 0.3239913 , 0.7786444 , 0.41692853, 0.10467392],
[ 0.12080023, 0.74853649, 0.15356663, 0.4505753 ],
[ 0.13536096, 0.60319054, 0.82018125, 0.10445047],
[ 0.1877724 , 0.96060999, 0.39697999, 0.59078612]])
In [15]: import pandas as pd
In [16]: pd.DataFrame(arr)
Out[16]:
          0         1         2         3
0  0.773213  0.486495  0.297718  0.916229
1  0.773213  0.486495  0.297718  1.916229
2  0.582943  0.320256  0.692586  0.052413
3  0.582943  0.320256  0.692586  0.050000
4  0.582943  0.320256  0.692586  1.700000
5  0.323991  0.778644  0.416929  0.104674
6  0.120800  0.748536  0.153567  0.450575
7  0.135361  0.603191  0.820181  0.104450
8  0.187772  0.960610  0.396980  0.590786
In [17]: pd.DataFrame(arr).groupby([0,1,2]).max().reset_index()
Out[17]:
          0         1         2         3
0  0.120800  0.748536  0.153567  0.450575
1  0.135361  0.603191  0.820181  0.104450
2  0.187772  0.960610  0.396980  0.590786
3  0.323991  0.778644  0.416929  0.104674
4  0.582943  0.320256  0.692586  1.700000
5  0.773213  0.486495  0.297718  1.916229
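If you need a plain ndarray back rather than a DataFrame, .values converts the result (df.to_numpy() is the modern equivalent):
In [18]: result = pd.DataFrame(arr).groupby([0, 1, 2]).max().reset_index().values  # back to a plain ndarray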
You can start off by lex-sorting the input array to bring entries with identical first three elements next to each other. Then create another 2D array to store the last-column entries, such that the elements corresponding to each duplicate triplet go into the same row. Next, find the max along axis=1 of this 2D array, which gives the final max output for each unique triplet. Here's the implementation, assuming A as the input array -
# Lex sort A
sortedA = A[np.lexsort(A[:,:-1].T)]
# Mask of start of unique first three columns from A
start_unqA = np.append(True,~np.all(np.diff(sortedA[:,:-1],axis=0)==0,axis=1))
# Counts of unique first three columns from A
counts = np.bincount(start_unqA.cumsum()-1)
mask = np.arange(counts.max()) < counts[:,None]
# Group A's last column into rows based on uniqueness from first three columns
grpA = np.empty(mask.shape)
grpA.fill(np.nan)
grpA[mask] = sortedA[:,-1]
# Concatenate unique first three columns from A and
# corresponding max values for each such unique triplet
out = np.column_stack((sortedA[start_unqA,:-1],np.nanmax(grpA,axis=1)))
Sample run -
In [75]: A
Out[75]:
array([[ 1, 1, 1, 96],
[ 1, 2, 2, 48],
[ 2, 1, 2, 33],
[ 1, 1, 1, 24],
[ 1, 1, 1, 94],
[ 2, 2, 2, 5],
[ 2, 1, 1, 17],
[ 2, 2, 2, 62]])
In [76]: sortedA
Out[76]:
array([[ 1, 1, 1, 96],
[ 1, 1, 1, 24],
[ 1, 1, 1, 94],
[ 2, 1, 1, 17],
[ 2, 1, 2, 33],
[ 1, 2, 2, 48],
[ 2, 2, 2, 5],
[ 2, 2, 2, 62]])
In [77]: out
Out[77]:
array([[ 1., 1., 1., 96.],
[ 2., 1., 1., 17.],
[ 2., 1., 2., 33.],
[ 1., 2., 2., 48.],
[ 2., 2., 2., 62.]])
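A more compact variant of the same approach (my sketch, not part of the original answer) skips the NaN-padded 2D array and reduces the last column segment-wise with np.maximum.reduceat:
# Lex sort A as before
sortedA = A[np.lexsort(A[:,:-1].T)]
# Mask of the start of each group of unique first-three-column triplets
start_unqA = np.append(True, np.any(np.diff(sortedA[:,:-1], axis=0) != 0, axis=1))
# Reduce the last column over each group with the maximum ufunc
out = np.column_stack((sortedA[start_unqA,:-1],
                       np.maximum.reduceat(sortedA[:,-1], np.nonzero(start_unqA)[0])))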
You can use logical indexing.
I will use a subset of your data for the example:
>>> myarr = np.array([[ 0.7732126 ,  0.48649481,  0.29771819,  0.91622924],
...                   [ 0.58294263,  0.32025559,  0.6925856 ,  0.0524125 ],
...                   [ 0.3239913 ,  0.7786444 ,  0.41692853,  0.10467392],
...                   [ 0.12080023,  0.74853649,  0.15356663,  0.4505753 ],
...                   [ 0.13536096,  0.60319054,  0.82018125,  0.10445047],
...                   [ 0.1877724 ,  0.96060999,  0.39697999,  0.59078612]])
To get the row or rows where the last column is the greatest, do this:
>>> greatest = myarr[myarr[:, 3]==myarr[:, 3].max()]
>>> print(greatest)
[[ 0.7732126 0.48649481 0.29771819 0.91622924]]
What this does: it takes the last column of myarr, finds the maximum of that column, builds a boolean mask of the elements equal to that maximum, and then selects the corresponding rows.
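To make the intermediate step explicit, the boolean mask for the data above looks like this:
>>> mask = myarr[:, 3] == myarr[:, 3].max()
>>> mask
array([ True, False, False, False, False, False])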
You can use np.argmax
x[np.argmax(x[:,3]),:]
>>> x = np.random.random((5,4))
>>> x
array([[ 0.25461146, 0.35671081, 0.54856798, 0.2027313 ],
[ 0.17079029, 0.66970362, 0.06533572, 0.31704254],
[ 0.4577928 , 0.69022073, 0.57128696, 0.93995176],
[ 0.29708841, 0.96324181, 0.78859008, 0.25433235],
[ 0.58739451, 0.17961551, 0.67993786, 0.73725493]])
>>> x[np.argmax(x[:,3]),:]
array([ 0.4577928 , 0.69022073, 0.57128696, 0.93995176])

Sample with replacement from existing array

I have a matrix A with 1.6M rows and 400 columns.
One of the columns in A (call it the output column) has binary values (0, 1) with a predominance of 0's.
I want to create a new matrix B (same shape as A) by sampling rows of A with replacement, such that the distribution of 0's and 1's in B's output column becomes 50/50.
What is an efficient way to do this using Python/NumPy?
You could do this by:
Creating a list of all rows with 0 in the "output column" (called outputZeros), and a list of all rows with 1 in the output column (called outputOnes); then,
Sampling with replacement from outputZeros and outputOnes 1.6M times.
Here's a small example. It's not clear to me whether you want the rows of B in any particular order, so here the 0s come first, then the 1s.
In [1]: import numpy as np, random
In [2]: A = np.random.rand(10, 2)
In [4]: A[:7, 1] = 0
In [5]: A[7:, 1] = 1
In [6]: A
Out[6]:
array([[ 0.70126052, 0. ],
[ 0.51161067, 0. ],
[ 0.76511966, 0. ],
[ 0.91257144, 0. ],
[ 0.97024895, 0. ],
[ 0.55817776, 0. ],
[ 0.55963466, 0. ],
[ 0.6318139 , 1. ],
[ 0.90176108, 1. ],
[ 0.76033151, 1. ]])
In [7]: outputZeros = np.where(A[:, 1] == 0)[0]
In [8]: outputZeros
Out[8]: array([0, 1, 2, 3, 4, 5, 6])
In [9]: outputOnes = np.where(A[:, 1] == 1)[0]
In [10]: outputOnes
Out[10]: array([7, 8, 9])
In [11]: B = np.zeros((10, 2))
In [12]: for i in range(10):
   ....:     if i < 5:
   ....:         B[i, :] = A[random.choice(outputZeros), :]
   ....:     else:
   ....:         B[i, :] = A[random.choice(outputOnes), :]
   ....:
In [13]: B
Out[13]:
array([[ 0.97024895, 0. ],
[ 0.97024895, 0. ],
[ 0.76511966, 0. ],
[ 0.76511966, 0. ],
[ 0.51161067, 0. ],
[ 0.90176108, 1. ],
[ 0.76033151, 1. ],
[ 0.6318139 , 1. ],
[ 0.6318139 , 1. ],
[ 0.76033151, 1. ]])
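With 1.6M rows, the Python-level loop above will be slow. A vectorized sketch of the same two-step idea (the function name, the out_col parameter, and the use of NumPy's newer default_rng API are my own assumptions, not from the answer):
import numpy as np

def balanced_resample(A, out_col, seed=None):
    """Sample len(A) rows of A with replacement so the output column
    comes out roughly 50/50 zeros and ones."""
    rng = np.random.default_rng(seed)
    zeros = np.flatnonzero(A[:, out_col] == 0)  # row indices holding 0
    ones = np.flatnonzero(A[:, out_col] == 1)   # row indices holding 1
    n = len(A)
    idx = np.concatenate((rng.choice(zeros, n // 2),       # with replacement
                          rng.choice(ones, n - n // 2)))
    rng.shuffle(idx)  # optional: interleave the two classes
    return A[idx]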
I would create a new 1D NumPy array of row indices filled with values from numpy.random.random_integers(low, high=None, size=None) and use it to build the new array from the old one.

Scipy's fftpack dct and idct

Say you use the dct function, do no manipulation of the data, and then apply the inverse transform; shouldn't the inverted data be the same as the pre-transformed data? Why the floating-point issue? Is it a reported issue, or is it normal behavior?
In [21]: a = [1.2, 3.4, 5.1, 2.3, 4.5]
In [22]: b = dct(a)
In [23]: b
Out[23]: array([ 33. , -4.98384545, -4.5 , -5.971707 , 4.5 ])
In [24]: c = idct(b)
In [25]: c
Out[25]: array([ 12., 34., 51., 23., 45.])
Does anyone have an explanation why? Of course, a simple c * 10**-1 would do the trick here, but if you repeat the call to apply the function over several dimensions, the error gets bigger:
In [37]: a = np.random.rand(3,3,3)
In [38]: d = dct(dct(dct(a).transpose(0,2,1)).transpose(2,1,0)).transpose(2,1,0).transpose(0,2,1)
In [39]: e = idct(idct(idct(d).transpose(0,2,1)).transpose(2,1,0)).transpose(2,1,0).transpose(0,2,1)
In [40]: a
Out[40]:
array([[[ 0.48709809, 0.50624831, 0.91190972],
[ 0.56545798, 0.85695062, 0.62484782],
[ 0.96092354, 0.17453537, 0.17884233]],
[[ 0.29433402, 0.08540074, 0.18574437],
[ 0.09942075, 0.78902363, 0.62663572],
[ 0.20372951, 0.67039551, 0.52292875]],
[[ 0.79952289, 0.48221372, 0.43838685],
[ 0.25559683, 0.39549153, 0.84129493],
[ 0.69093533, 0.71522961, 0.16522915]]])
In [41]: e
Out[41]:
array([[[ 105.21318703, 109.34963575, 196.97249887],
[ 122.13892469, 185.10133376, 134.96712825],
[ 207.55948396, 37.69964085, 38.62994399]],
[[ 63.57614855, 18.44656009, 40.12078466],
[ 21.47488098, 170.42910452, 135.35331646],
[ 44.00557341, 144.80543099, 112.95260949]],
[[ 172.69694529, 104.15816275, 94.69156014],
[ 55.20891593, 85.42617016, 181.71970442],
[ 149.2420308 , 154.48959477, 35.68949734]]])
Here is a link to the docs.
It looks like dct and idct do not normalize by default. Define dct to call fftpack.dct with norm='ortho', and do the same for idct:
In [12]: from scipy import fftpack
In [13]: dct = lambda x: fftpack.dct(x, norm='ortho')
In [14]: idct = lambda x: fftpack.idct(x, norm='ortho')
Once that's done, you will get back the original values after performing the two transforms.
In [19]: import numpy
In [20]: a = numpy.random.rand(3,3,3)
In [21]: d = dct(dct(dct(a).transpose(0,2,1)).transpose(2,1,0)).transpose(2,1,0).transpose(0,2,1)
In [22]: e = idct(idct(idct(d).transpose(0,2,1)).transpose(2,1,0)).transpose(2,1,0).transpose(0,2,1)
In [23]: a
Out[23]:
array([[[ 0.51699637, 0.42946223, 0.89843545],
[ 0.27853391, 0.8931508 , 0.34319118],
[ 0.51984431, 0.09217771, 0.78764716]],
[[ 0.25019845, 0.92622331, 0.06111409],
[ 0.81363641, 0.06093368, 0.13123373],
[ 0.47268657, 0.39635091, 0.77978269]],
[[ 0.86098829, 0.07901332, 0.82169182],
[ 0.12560088, 0.78210188, 0.69805434],
[ 0.33544628, 0.81540172, 0.9393219 ]]])
In [24]: e
Out[24]:
array([[[ 0.51699637, 0.42946223, 0.89843545],
[ 0.27853391, 0.8931508 , 0.34319118],
[ 0.51984431, 0.09217771, 0.78764716]],
[[ 0.25019845, 0.92622331, 0.06111409],
[ 0.81363641, 0.06093368, 0.13123373],
[ 0.47268657, 0.39635091, 0.77978269]],
[[ 0.86098829, 0.07901332, 0.82169182],
[ 0.12560088, 0.78210188, 0.69805434],
[ 0.33544628, 0.81540172, 0.9393219 ]]])
I am not sure why no normalization was chosen as the default. But when using ortho, dct and idct each seem to normalize by a factor of 1/sqrt(2 * N) or 1/sqrt(4 * N). There may be applications where the normalization is needed for dct but not for idct, and vice versa.
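As a quick sanity check of the scaling (a sketch continuing the session above): the unnormalized round trip comes back multiplied by 2 * N, which is exactly the factor of 10 the question observed for N = 5.
In [25]: import numpy as np
In [26]: a = np.array([1.2, 3.4, 5.1, 2.3, 4.5])
In [27]: np.allclose(fftpack.idct(fftpack.dct(a)), 2 * len(a) * a)
Out[27]: True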
