Inconsistent spacing in numpy array elements - python

In Python, when I write np.array([1,2,3]), the result is
array([1, 2, 3])
but when I write np.array([11,22,3]) the result is
array([11, 22,  3])
which has two spaces before '3', unlike '22' which has only one space before it. Later I use a map function to read this result back from a CSV file that I write with Pandas:
appended_data.append({'array': numpyarray})
OutputDataFrame = pd.DataFrame(appended_data).ix[:, columns]
OutputDataFrame.to_csv('name.csv', index=False)
I need the spacing to be consistent. Is there any way to do so?

The default display for arrays is a uniform field width per element, not a uniform spacing:
In [30]: x=np.array([11,223,3])
In [31]: x
Out[31]: array([ 11, 223,   3])
In [32]: x.tolist() # list display with uniform spacing
Out[32]: [11, 223, 3]
In effect numpy uses a format like:
In [35]: fmt = ' '.join(['%3d','%3d','%3d'])
In [36]: fmt
Out[36]: '%3d %3d %3d'
In [37]: fmt%tuple(x)
Out[37]: ' 11 223   3'
np.savetxt does just that, using the fmt and delimiter that you provide.
csv stands for 'comma separated values'. Tabs are also used as delimiters. If whitespace is used, good readers are just as happy with one, two, or more blanks. Such tables are usually formatted to keep the columns aligned, not to keep the space between numbers constant.
A 3 row array with mixed number sizes:
In [39]: x=np.array([[1,123,32],[34,1,2],[0,23,1000]])
In [40]: x
Out[40]:
array([[   1,  123,   32],
       [  34,    1,    2],
       [   0,   23, 1000]])
Fixed width csv formatting:
In [41]: np.savetxt('test.csv',x,fmt='%5d', delimiter=',')
In [42]: cat test.csv
    1,  123,   32
   34,    1,    2
    0,   23, 1000
delimited reading:
In [43]: np.genfromtxt('test.csv',delimiter=',',dtype=None)
Out[43]:
array([[   1,  123,   32],
       [  34,    1,    2],
       [   0,   23, 1000]])
The default mode for Python string split uses generalized white space:
In [44]: ' 11 223   3'.split()
Out[44]: ['11', '223', '3']
Here's an example of a csv with constant spacing (and variable width)
In [45]: np.savetxt('test.csv',x,fmt='%d', delimiter=' ')
In [46]: cat test.csv
1 123 32
34 1 2
0 23 1000
np.genfromtxt('test.csv',dtype=None) reads it just fine.

You could convert the numpy array to a list first:
np.array([11, 22, 3]).tolist()
will give you
[11, 22, 3]
Also, when you map the numpy array, each individual value passed to the function will not have spacing so you don't have to worry about it.
See hpaulj's answer above as it's much more comprehensive than mine.
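As a minimal sketch of the CSV workflow from the question (the column name and file name are simply the ones the question uses), storing the list rather than the array keeps the separators uniform:
import numpy as np
import pandas as pd

numpyarray = np.array([11, 22, 3])
# a Python list's repr uses exactly one space after each comma: "[11, 22, 3]"
appended_data = [{'array': numpyarray.tolist()}]
OutputDataFrame = pd.DataFrame(appended_data)
OutputDataFrame.to_csv('name.csv', index=False)
# the 'array' column in name.csv is written as the quoted string "[11, 22, 3]"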

What does '...' mean in a python slice [duplicate]

What is the meaning of x[...] below?
a = np.arange(6).reshape(2,3)
for x in np.nditer(a, op_flags=['readwrite']):
x[...] = 2 * x
While the proposed duplicate What does the Python Ellipsis object do? answers the question in a general python context, its use in an nditer loop requires, I think, added information.
https://docs.scipy.org/doc/numpy/reference/arrays.nditer.html#modifying-array-values
Regular assignment in Python simply changes a reference in the local or global variable dictionary instead of modifying an existing variable in place. This means that simply assigning to x will not place the value into the element of the array, but rather switch x from being an array element reference to being a reference to the value you assigned. To actually modify the element of the array, x should be indexed with the ellipsis.
That section includes your code example.
So in my words, the x[...] = ... modifies x in-place; x = ... would have broken the link to the nditer variable, and not changed it. It's like x[:] = ... but works with arrays of any dimension (including 0d). In this context x isn't just a number, it's an array.
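A minimal sketch of that difference (not part of the original answer), using the same doubling loop:
import numpy as np

a = np.arange(6).reshape(2, 3)

# rebinding the loop variable does nothing to the array
for x in np.nditer(a, op_flags=['readwrite']):
    x = 2 * x
print(a)        # unchanged: [[0 1 2] [3 4 5]]

# writing through the 0d view modifies the array in place
for x in np.nditer(a, op_flags=['readwrite']):
    x[...] = 2 * x
print(a)        # doubled: [[ 0  2  4] [ 6  8 10]]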
Perhaps the closest thing to this nditer iteration, without nditer is:
In [667]: for i, x in np.ndenumerate(a):
...: print(i, x)
...: a[i] = 2 * x
...:
(0, 0) 0
(0, 1) 1
...
(1, 2) 5
In [668]: a
Out[668]:
array([[ 0,  2,  4],
       [ 6,  8, 10]])
Notice that I had to index and modify a[i] directly. I could not have used x = 2*x. In this iteration x is a scalar, and thus not mutable.
In [669]: for i,x in np.ndenumerate(a):
...: x[...] = 2 * x
...
TypeError: 'numpy.int32' object does not support item assignment
But in the nditer case x is a 0d array, and mutable.
In [671]: for x in np.nditer(a, op_flags=['readwrite']):
...: print(x, type(x), x.shape)
...: x[...] = 2 * x
...:
0 <class 'numpy.ndarray'> ()
4 <class 'numpy.ndarray'> ()
...
And because it is 0d, x[:] cannot be used instead of x[...]
----> 3 x[:] = 2 * x
IndexError: too many indices for array
A simpler array iteration might also give insight:
In [675]: for x in a:
...: print(x, x.shape)
...: x[:] = 2 * x
...:
[ 0 8 16] (3,)
[24 32 40] (3,)
this iterates on the rows (1st dim) of a. x is then a 1d array, and can be modified with either x[:]=... or x[...]=....
And if I add the external_loop flag from the next section, x is now a 1d array, and x[:] = would work. But x[...] = still works and is more general. x[...] is used in all the other nditer examples.
In [677]: for x in np.nditer(a, op_flags=['readwrite'], flags=['external_loop']):
...: print(x, type(x), x.shape)
...: x[...] = 2 * x
[ 0 16 32 48 64 80] <class 'numpy.ndarray'> (6,)
Read and experiment with this nditer page all the way through to the end. By itself, nditer is not that useful in Python. It does not speed up iteration - not until you port your code to cython. np.ndindex is one of the few non-compiled numpy functions that uses nditer.
The ellipsis ... means as many : as needed.
For people who don't have time, here is a simple example:
In [64]: X = np.reshape(np.arange(9), (3,3))
In [67]: Y = np.reshape(np.arange(2*3*4), (2,3,4))
In [70]: X
Out[70]:
array([[0, 1, 2],
       [3, 4, 5],
       [6, 7, 8]])
In [71]: X[:,0]
Out[71]: array([0, 3, 6])
In [72]: X[...,0]
Out[72]: array([0, 3, 6])
In [73]: Y
Out[73]:
array([[[ 0,  1,  2,  3],
        [ 4,  5,  6,  7],
        [ 8,  9, 10, 11]],

       [[12, 13, 14, 15],
        [16, 17, 18, 19],
        [20, 21, 22, 23]]])
In [74]: Y[:,0]
Out[74]:
array([[ 0,  1,  2,  3],
       [12, 13, 14, 15]])
In [75]: Y[...,0]
Out[75]:
array([[ 0,  4,  8],
       [12, 16, 20]])
In [76]: X[0,...,0]
Out[76]: array(0)
In [77]: Y[0,...,0]
Out[77]: array([0, 4, 8])
This makes it easy to manipulate only one dimension at a time.
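As a hypothetical helper (not from the answer above), the same expression then works for an input of any dimensionality:
import numpy as np

def zero_first_along_last_axis(a):
    # a[..., 0] expands to a[:, 0], a[:, :, 0], ... depending on a.ndim
    a[..., 0] = 0
    return a

zero_first_along_last_axis(np.ones((3, 3)))        # zeros column 0 of a 2d array
zero_first_along_last_axis(np.ones((2, 3, 4, 5)))  # zeros index 0 along the last axis of a 4d array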
One thing - you can have only one ellipsis in any given indexing expression; otherwise the expression would be ambiguous about how many : should be put in each place.
I believe a very good parallel (that most people are maybe used to) is to think of it this way:
import numpy as np
random_array = np.random.rand(2, 2, 2, 2)
In this case, [:, :, :, 0] and [..., 0] are the same.
You can use it to work on only a specific dimension. Say you have a batch of 50 RGB images of size 128x128 (shape (50, 3, 128, 128)); if you want to slice a piece of every image in every color channel, you could do either image[:, :, 50:70, 20:80] or image[..., 50:70, 20:80].
Just be aware that you can't use it more than once in a single indexing expression; something like [..., 0, ...] is invalid.
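A quick check of that rule (the exact error message may differ between NumPy versions):
import numpy as np

Y = np.arange(24).reshape(2, 3, 4)
Y[..., 0]        # fine
Y[..., 0, ...]   # IndexError: an index can only have a single ellipsis ('...')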

Masking out some rows of numpy array and recover back

I have a mask mask_re of shape (8781288, 1) containing ones and zeros, a label file y_lbl of shape (8781288, 1), and a feature vector feat_re of shape (8781288, 64). I need to take only those rows of the feature vector and label file where the mask is 1. How can I do this, and how can I apply the opposite action of putting (recovering back) the prediction values ypred into the masked label file, at the elements where the mask is one?
For example, in Matlab this can be done easily with X=feat_re(mask_re==1), and recovered back with new_lbl(mask_re==1)=ypred, where new_lbl=zeros(8781288, 1). I tried to do a similar thing in Python:
X=feat_re[np.where(mask_re==1),:]
X.shape
(2, 437561, 64)
EDITED (SOLVED): According to what @hpaulj suggested, the problem was with the shape of my mask file. Once I changed it to mask_new=mask_re.reshape((8781288)), it solved my issue, and then
X=feat_re[mask_new==1,:]
(437561, 64)
In [182]: arr = np.arange(12).reshape(3,4)
In [183]: mask = np.array([1,0,1], bool)
In [184]: arr[mask,:]
Out[184]:
array([[ 0,  1,  2,  3],
       [ 8,  9, 10, 11]])
In [185]: new = np.zeros_like(arr)
In [186]: new[mask,:] = np.array([10,12,14,16])
In [187]: new
Out[187]:
array([[10, 12, 14, 16],
       [ 0,  0,  0,  0],
       [10, 12, 14, 16]])
I suspect your error comes from the shape of mask:
In [188]: mask1 = mask[:,None]
In [189]: mask1.shape
Out[189]: (3, 1)
In [190]: arr[mask1,:]
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
<ipython-input-190-6317c3ea0302> in <module>
----> 1 arr[mask1,:]
IndexError: too many indices for array
Remember, numpy can have 1d and 0d arrays; it doesn't force everything to be 2d.
With where (aka nonzero):
In [191]: np.nonzero(mask)
Out[191]: (array([0, 2]),) # 1 element tuple
In [192]: np.nonzero(mask1)
Out[192]: (array([0, 2]), array([0, 0])) # 2 element tuple
In [193]: arr[_191] # using the mask index
Out[193]:
array([[ 0,  1,  2,  3],
       [ 8,  9, 10, 11]])
You can use boolean indexing for masking, like below:
X = feat_re[mask_re==1, :]
X = X.reshape(2, -1, 64)
This selects the rows of feat_re where (mask_re==1) is True. Then you can reshape X using the reshape function, and use reshape again to get back to the same array shape. The -1 in reshape indicates that the size should be calculated by numpy.
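Putting the question's round trip together (a scaled-down sketch; ypred is just a stand-in for the real predictions):
import numpy as np

feat_re = np.arange(12.0).reshape(6, 2)              # stand-in for the (8781288, 64) features
mask_re = np.array([[1], [0], [1], [1], [0], [1]])   # (N, 1) mask of ones and zeros

mask_new = mask_re.reshape(-1)                       # flatten to (N,) so boolean indexing works
X = feat_re[mask_new == 1, :]                        # keep only the rows where the mask is 1

ypred = X[:, 0] * 10                                 # pretend predictions, one per selected row

new_lbl = np.zeros((feat_re.shape[0], 1))            # recover back: scatter into a full-length vector
new_lbl[mask_new == 1, 0] = ypred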

Map a number to an id in python

Suppose I have a numpy array like [11, 30, 25]. These numbers represent categories of the objects corresponding to the indices. I know there are just 20 categories, but for some reason they are numbered from 11 to 30. I'd like to convert them to numbers in 0:19 and back. What would be a pythonic way to do this? Preferably in numpy.
EDIT: this is just a small example of a bigger problem, where the number of categories is in the thousands and some categories are never represented, so the maximum id should be the number of unique existing categories.
Let's say arr is the input array of categories.
Forward Process/Encoding : From categories to IDs
To perform the encoding, use np.unique along with its optional return_inverse argument to get IDs with values from 0 to N-1, where N is the number of categories present in arr, like so -
unq,idx = np.unique(arr,return_inverse=True)
Backward Process/Decoding : From IDs to categories
To go back to the original categories from the IDs (idx), just index into the unique categories saved earlier as unq, like so -
arr_out = unq[idx]
Sample run -
In [40]: arr # Input array of categories
Out[40]: array([7, 1, 1, 3, 8, 2, 7, 7, 0, 2])
In [41]: unq,idx = np.unique(arr,return_inverse=True)
In [42]: idx # ID array with values from 0 to 5 (6 categories)
Out[42]: array([4, 1, 1, 3, 5, 2, 4, 4, 0, 2])
In [43]: unq[idx] # Get back original array of categories
Out[43]: array([7, 1, 1, 3, 8, 2, 7, 7, 0, 2])
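Applied to the numbers from the question (only the categories that actually occur get an ID, which also covers the edit about categories that are never represented):
import numpy as np

arr = np.array([11, 30, 25])
unq, idx = np.unique(arr, return_inverse=True)
# unq      -> array([11, 25, 30])   the distinct categories, sorted
# idx      -> array([0, 2, 1])      IDs in 0..N-1
# unq[idx] -> array([11, 30, 25])   the original categories recovered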
To be able to easily convert back-and-forth, I would use the sklearn.preprocessing module LabelEncoder:
In [7]: from sklearn.preprocessing import LabelEncoder
In [8]: encoder = LabelEncoder()
In [9]: encoder.fit(range(11,31))
Out[9]: LabelEncoder()
In [10]: encoder.transform([11,30,25])
Out[10]: array([ 0, 19, 14])
In [11]: encoder.inverse_transform([18, 1, 15])
Out[11]: array([29, 12, 26])

Change a 1D NumPy array from (implicit) row major to column major order

I have a 1D array in NumPy that implicitly represents some 2D data in row-major order. Here's a trivial example:
import numpy as np
# My data looks like [[1,2,3,4], [5,6,7,8]]
a = np.array([1,2,3,4,5,6,7,8])
I want to get a 1D array in column-major order (i.e. b = [1,5,2,6,3,7,4,8] in the example above).
Normally, I would just do the following:
mat = np.reshape(a, (-1,4))
b = mat.flatten('F')
Unfortunately, the length of my input array is not an exact multiple of the row length I want (i.e. a = [1,2,3,4,5,6,7]), so I can't call reshape. I want to keep that extra data, though, which might be quite a lot since my rows are pretty long. Is there any straightforward way to do this in NumPy?
The simplest way I can think of is not to try and use reshape with methods such as ravel('F'), but just to concatenate sliced views of your array.
For example:
>>> cols = 4
>>> a = np.array([1,2,3,4,5,6,7])
>>> np.concatenate([a[i::cols] for i in range(cols)])
array([1, 5, 2, 6, 3, 7, 4])
This works for any length of array and any number of columns:
>>> cols = 5
>>> b = np.arange(17)
>>> np.concatenate([b[i::cols] for i in range(cols)])
array([ 0, 5, 10, 15, 1, 6, 11, 16, 2, 7, 12, 3, 8, 13, 4, 9, 14])
Alternatively, use as_strided to reshape. The fact that the array a is too small to fit the (2, 4) shape doesn't matter: you'll just get junk (i.e. whatever's in memory) in the last place:
>>> np.lib.stride_tricks.as_strided(a, shape=(2, 4))
array([[        1,         2,         3,         4],
       [        5,         6,         7, 168430121]])
>>> _.flatten('F')[:7]
array([1, 5, 2, 6, 3, 7, 4])
In the general case, given an array b and a desired number of columns cols you can do this:
>>> x = np.lib.stride_tricks.as_strided(b, shape=(len(b)//cols + 1, cols)) # reshape to min 2d array needed to hold array b
>>> np.concatenate((x[:,:len(b)%cols].ravel('F'), x[:-1, len(b)%cols:].ravel('F')))
This unravels the "good" part of the array (those columns not containing junk values) and the bad part (except for the junk values which lie in the bottom row) and concatenates the two unraveled arrays. For example:
>>> cols = 5
>>> b = np.arange(17)
>>> x = np.lib.stride_tricks.as_strided(b, shape=(len(b)//cols + 1, cols))
>>> np.concatenate((x[:,:len(b)%cols].ravel('F'), x[:-1, len(b)%cols:].ravel('F')))
array([ 0, 5, 10, 15, 1, 6, 11, 16, 2, 7, 12, 3, 8, 13, 4, 9, 14])
Pad the array with a null value so that its length is a multiple of the row length you want to split on. If casting to float is acceptable, you can use NaNs to represent the added null elements. Then reshape to 2D, transpose, reshape back to 1D, and finally eliminate the nulls.
import numpy as np
a = np.array([1,2,3,4,5,6,7])                # input
b = np.concatenate((a, [np.nan]))            # add a NaN to make the length 8 = 2x4
c = b.reshape(2,4).transpose().reshape(8,)   # reshape to 2x4, transpose, reshape back to 1D
d = c[~np.isnan(c)]                          # remove the NaNs
print(d)
[ 1.  5.  2.  6.  3.  7.  4.]
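The same idea for an arbitrary column count, sketched with np.pad instead of concatenate:
import numpy as np

a = np.array([1, 2, 3, 4, 5, 6, 7])
cols = 4
pad = (-len(a)) % cols                           # elements needed to reach a multiple of cols
b = np.pad(a.astype(float), (0, pad), constant_values=np.nan)
c = b.reshape(-1, cols).T.ravel()                # read the padded 2D array in column-major order
result = c[~np.isnan(c)]                         # drop the padding -> [1. 5. 2. 6. 3. 7. 4.]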

I need the N minimum (index) values in a numpy array

Hi, I have an array with X amount of values in it, and I would like to locate the indices of the ten smallest values. In this link they calculated the maximum effectively: How to get indices of N maximum values in a numpy array?
However, I can't comment on links yet, so I'm having to repost the question.
I'm not sure which indices I need to change to achieve the minimum and not the maximum values.
This is their code
In [1]: import numpy as np
In [2]: arr = np.array([1, 3, 2, 4, 5])
In [3]: arr.argsort()[-3:][::-1]
Out[3]: array([4, 3, 1])
If you call
arr.argsort()[:3]
It will give you the indices of the 3 smallest elements.
array([0, 2, 1], dtype=int64)
So, for n, you should call
arr.argsort()[:n]
Since this question was posted, numpy has been updated to include a faster way of selecting the smallest elements from an array, using argpartition. It was first included in NumPy 1.8.
Using snarly's answer as inspiration, we can quickly find the k=3 smallest elements:
In [1]: import numpy as np
In [2]: arr = np.array([1, 3, 2, 4, 5])
In [3]: k = 3
In [4]: ind = np.argpartition(arr, k)[:k]
In [5]: ind
Out[5]: array([0, 2, 1])
In [6]: arr[ind]
Out[6]: array([1, 2, 3])
This will run in O(n) time because it does not need to do a full sort. If you need your answers sorted (Note: in this case the output array was in sorted order but that is not guaranteed) you can sort the output:
In [7]: sorted(arr[ind])
Out[7]: [1, 2, 3]
This runs in O(n + k log k) because the sorting takes place on the smaller output list.
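If you want the indices themselves ordered by their values (a small sketch building on the argpartition result above):
import numpy as np

arr = np.array([1, 3, 2, 4, 5])
k = 3
ind = np.argpartition(arr, k)[:k]        # indices of the k smallest values, in arbitrary order
ind_sorted = ind[np.argsort(arr[ind])]   # the same indices, ordered by their values
# ind_sorted -> array([0, 2, 1]); arr[ind_sorted] -> array([1, 2, 3])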
I don't guarantee that this will be faster, but a better algorithm would rely on heapq.
import heapq
# indices of the 10 smallest values, found with a heap over the array's indices
indices = heapq.nsmallest(10, range(len(arr)), key=arr.__getitem__)
This should work in approximately O(N) operations, whereas using argsort would take O(N log N) operations. However, the latter is pushed into highly optimized C code, so it might still perform better. To know for sure, you'd need to run some tests on your actual data.
Just don't reverse the sort results.
In [164]: a = numpy.random.random(20)
In [165]: a
Out[165]:
array([ 0.63261763,  0.01718228,  0.42679479,  0.04449562,  0.19160089,
        0.29653725,  0.93946388,  0.39915215,  0.56751034,  0.33210873,
        0.17521395,  0.49573607,  0.84587652,  0.73638224,  0.36303797,
        0.2150837 ,  0.51665416,  0.47111993,  0.79984964,  0.89231776])
Sorted:
In [166]: a.argsort()
Out[166]:
array([ 1,  3, 10,  4, 15,  5,  9, 14,  7,  2, 17, 11, 16,  8,  0, 13, 18,
       12, 19,  6])
First ten:
In [168]: a.argsort()[:10]
Out[168]: array([ 1, 3, 10, 4, 15, 5, 9, 14, 7, 2])
This code saves the indices of the 20 largest elements of split_list in Twenty_Maximum:
Twenty_Maximum = split_list.argsort()[-20:]
while this code saves the indices of the 20 smallest elements of split_list in Twenty_Minimum:
Twenty_Minimum = split_list.argsort()[:20]
