Suppose you have:
data = np.array([
    [0, 1],
    [0, 1]
], dtype='int64')
calling
data[:, 1]
yields
[1 1]
however
data[(slice(None,None,None), slice(1,2,None))]
yields
[[1]
 [1]]
how come?
How would I explicitly write the slice object to get the equivalent of [:, 1]?
Indexing with a slice object has different semantics from indexing with an integer. Indexing with a single integer collapses / removes the corresponding dimension, whereas indexing with a slice never does so.
If you want the behavior of indexing with a single integer along a certain axis, you could emulate it with a slice plus some reshaping logic afterwards; but the least convoluted solution is to just replace that slice with the corresponding integer.
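A minimal sketch of both options, written with explicit slice objects as in the question:

import numpy as np

data = np.array([
    [0, 1],
    [0, 1]
], dtype='int64')

# Integer index: collapses the second axis.
print(data[(slice(None, None, None), 1)])                              # [1 1], shape (2,)

# Slice index plus a reshape afterwards: same values, one extra step.
print(data[(slice(None, None, None), slice(1, 2, None))].reshape(-1))  # [1 1]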
Alternatively, there is np.squeeze. Absolutely never use it without the axis keyword, since that is a guaranteed recipe for code that produces unintentional behavior. But squeezing only those axes that you sliced will have the effect that you seem to be after.
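For instance, a quick sketch showing why the axis keyword keeps the squeeze targeted:

import numpy as np

data = np.array([
    [0, 1],
    [0, 1]
], dtype='int64')

sliced = data[:, 1:2]                # shape (2, 1)
print(np.squeeze(sliced, axis=1))    # [1 1]; only the axis you sliced is removed
# np.squeeze(sliced, axis=0) would raise, since axis 0 is not singleton.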
If you will permit me a mini-rant: on a higher level, I suspect that what you are after is very much a numpy antipattern. If I had to formulate the first rule of numpy, it would be: do not squeeze axes just because they happen to be singleton. Embrace ndarrays and realize that more axes do not make your code more complicated; they allow you to express meaningful semantics of your data. The fact that you are slicing this axis in the first place suggests that the size of this axis isn't fundamentally 1; it just happens to be so in a particular case. Squeezing out that singleton dimension is going to make it near impossible for any downstream code to be written in a numpythonic and bug-free manner. If you absolutely must squeeze an array, say before passing it to a plot function, treat it like you would a stack-allocated variable in C++ and don't let any reference to it leak out of the current scope, because you will have garbled the semantics of that array, and downstream code no longer knows what it is looking at.
Seems like you need to remove singleton dimensions yourself after using a slice object. Or there is some underlying implementation detail that I don't understand.
import numpy as np
data = np.array([
    [0, 1],
    [0, 1]
], dtype='int64')
print("desired result")
print(data[:, 1])
print("first try")
print(data[(slice(None,None,None), slice(1,2,None))])
print("solution")
print(data[(slice(None,None,None), slice(1,2,None))].squeeze(axis=1))  # index with a tuple, not a list; squeeze only the sliced axis
output:
desired result
[1 1]
first try
[[1]
[1]]
solution
[1 1]
Related
I'm having trouble getting used to Numpy arrays (I'm a Matlab user). When I try to select just a range of values from an array, I see the resulting array has an extra dimension:
ioi = np.nonzero((self.data_array[0,:] >= range_start) & (self.data_array[0,:] <= range_end))
print("self.data_array.shape = {0}".format(self.data_array.shape))
print("self.data_array.shape[:,ioi] = {0}".format(self.data_array[:,ioi].shape))
The result is:
self.data_array.shape = (5, 50000)
self.data_array[:,ioi].shape = (5, 1, 408)
I also see that ioi is a tuple. I don't know if that has anything to do with it.
What is happening here to create that extra dimension and what should I do, in the most direct way, to get an array shape of (5,408) in this case?
The simplest and most efficient thing would be to get rid of the np.nonzero call and use logical indexing, just as one would in Matlab. Here's an example. (I'm using random data of a similar shape, FYI.)
>>> data = np.random.randn(5, 5000)
>>> start, end = -0.5, 0.5
>>> ioi = (data[0] > start) & (data[0] < end)
>>> print(ioi.shape)
(5000,)
>>> print(ioi.sum())
1900
>>> print(data[:, ioi].shape)
(5, 1900)
The np.nonzero call is not usually needed. Just like Matlab's find function, it's slow compared with logical indexing, and usually one's goal can be more efficiently accomplished with logical indexing. np.nonzero, just like find, should mostly be used only when you need the actual index values themselves.
As you suspected, the reason for the extra dimension is that tuples are handled differently from other types of indexing arrays in NumPy. This is to allow more flexible indexing, such as with slices, ellipses, etc. See this useful page for an in-depth explanation, especially the last section.
There are at least two other options to solve the problem. One is to use the ioi tuple, as returned from np.nonzero, directly as your only index to the data array, as in self.data_array[ioi]. Part of why you have an extra dimension is that you actually have two sets of indices in your call: the slice (:) and the tuple ioi. np.nonzero is guaranteed to return a tuple exactly for this reason, so that its output can always be used to directly index the source array.
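A small sketch of that tuple behaviour (the names are illustrative, following the example above):

import numpy as np

data = np.random.randn(5, 5000)
mask = (data[0] > -0.5) & (data[0] < 0.5)   # 1-D boolean mask
ioi = np.nonzero(mask)                      # a 1-tuple of index arrays
print(len(ioi), ioi[0].shape)               # 1 (n,)

# Used directly, the tuple indexes the matching axes with no extra dimension:
print(data[0][ioi].shape)                   # (n,)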
The last option is to call np.squeeze on the returned array, but I'd opt for one of the above first.
For numpy ndarray, there are no append and insert methods as there are for native Python lists.
a = np.array([1, 2, 3])
a.append(5) # this does not work
a = np.append(a, 5) # this is the only way
Whereas for native python lists,
a = [1, 2, 3]
a.append(4) # this modifies a
a # [1, 2, 3, 4]
Why was numpy ndarray designed to be this way? I'm writing a subclass of ndarray; is there any way of implementing "append" like native Python lists?
NumPy makes heavy use of views, a feature that Python lists do not support. A view is an array that uses the memory of another object rather than owning its own memory; for example, in the following snippet
import numpy
a = numpy.arange(5)
b = a[1:3]
b is a view of a.
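To see the sharing in action (a quick sketch):

import numpy

a = numpy.arange(5)
b = a[1:3]        # b reuses a's memory; nothing is copied
b[0] = 99         # writing through the view...
print(a)          # [ 0 99  2  3  4]  ...changes a as well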
Views would interact very poorly with an in-place append or other in-place size-changing operations. Arrays would suddenly not be views of arrays they should be views of, or they would be views of deallocated memory, or it would be unpredictable whether an append on one array would affect an array it was a view of, or all sorts of other problems. For example, what would a look like after b.append(6)? Or what would b look like after a.clear()? And what kind of performance guarantees could you make? Probably not the amortized constant time guarantee of list.append.
If you want to append, you probably shouldn't be using NumPy arrays; you should use a list, and build an array from the list when you're done appending.
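The usual pattern looks like this (a minimal sketch):

import numpy as np

values = []                 # lists support cheap amortized append
for i in range(10):
    values.append(i ** 2)
arr = np.array(values)      # convert once, when you're done appending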
An ndarray is created with a fixed-size data buffer - just big enough to hold the bytes representing the elements:
arr.nbytes == arr.itemsize * arr.size
arr.resize can change the array in place. But read its docs to see the limitations, especially about owning its own data. It's one of the few in-place operations, and not used that often.
In contrast a Python list stores object pointers in a buffer. The buffer has some growth room allowing for efficient append. It just has to add a new pointer to the buffer. When the buffer fills up, it allocates a new larger buffer and copies the pointers.
For a 1d array the buffers for ndarray and list will be similar, at least for 4 or 8 byte numeric dtypes. But for multidimensional arrays, the data buffer can be very large (the product of all dimensions), while the top buffer of an equivalent nested list just contains pointers to the outer layer of lists (the 'rows').
Object dtype arrays store pointers like a list, but the databuffer still has the fixed size (no growth space). Performance lies between numeric arrays and lists.
I can imagine writing an inplace append that uses the resize method, followed by copying the new value(s) to the 0 fills.
In [96]: arr = np.array([[1,3],[2,7]])
In [97]: arr.resize(3,2)
In [98]: arr
Out[98]:
array([[1, 3],
[2, 7],
[0, 0]])
In [99]: arr[-1,:] = 10,11
In [100]: arr
Out[100]:
array([[ 1, 3],
[ 2, 7],
[10, 11]])
But notice what happens to values when we resize an inner axis:
In [101]: arr = np.array([[1,3],[2,7]])
In [102]: arr.resize(2,3)
In [103]: arr
Out[103]:
array([[1, 3, 2],
[7, 0, 0]])
So this kind of append is quite limited compared to concatenate (and all of its 'stack' derivatives).
Have you looked at the code for np.append? After making sure the arguments are arrays, and tweaking their shapes, it does:
concatenate((arr, values), axis=axis)
In other words, it is just an alternative way of calling concatenate. It's probably best for adding a single value to a 1d array. It shouldn't be used repeatedly in a loop, precisely because it returns a new array, and thus is relatively expensive. Otherwise it trips up many users: some ignore the axis parameter; others have problems creating a correct 'empty' array to start with. concatenate also has those problems, but at least users have to consciously deal with the issue of matching shapes.
np.insert is much more complicated. It does different things depending on whether the index argument (obj) is a number, a slice or a list of numbers. One approach is to create a target array of the right size, copy slices from the original, and insert the values into the right slots. Another is to use a boolean mask to copy values into the right locations. Both have to accommodate multiple dimensions - it inserts along one axis, but must use the appropriate slice(None) for the other dimensions. This is much more complicated than the list insert, which inserts one object (pointer) at one location in 1d.
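A simplified 1-D sketch of that first approach (the real np.insert additionally handles axes, slices and boolean masks; insert_1d is a hypothetical helper):

import numpy as np

def insert_1d(arr, idx, values):
    values = np.asarray(values, dtype=arr.dtype)
    out = np.empty(arr.size + values.size, dtype=arr.dtype)
    out[:idx] = arr[:idx]                   # leading slice of the original
    out[idx:idx + values.size] = values     # new values dropped into place
    out[idx + values.size:] = arr[idx:]     # trailing slice of the original
    return out

print(insert_1d(np.array([1, 2, 3]), 1, [9, 9]))   # [1 9 9 2 3]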
I have trouble with numpy ndarray when I'm indexing multiple dimensions at the same time:
> a = np.random.random((25,50,30))
> b = a[0,:,np.arange(30)]
> print(b.shape)
Here I expected the result to be (50,30), but the real result is (30,50)!
Can someone explain it to me please I don't get it and this feature introduces tons of bugs in my code. Thank you :)
Additional information :
Indexing in one dimension works perfectly:
> b = a[0,:,:]
> print(b.shape)
(50,30)
And the values match up to a transposition:
> (a[0,:,0] == b[0,:]).all()
True
From the numpy docs:
The easiest way to understand the situation may be to think in terms of the result shape. There are two parts to the indexing operation, the subspace defined by the basic indexing (excluding integers) and the subspace from the advanced indexing part. Two cases of index combination need to be distinguished:
The advanced indexes are separated by a slice, ellipsis or newaxis. For example x[arr1, :, arr2].
The advanced indexes are all next to each other. For example x[..., arr1, arr2, :] but not x[arr1, :, 1] since 1 is an advanced index in this regard.
In the first case, the dimensions resulting from the advanced indexing operation *come first* in the result array, and the subspace dimensions after that. In the second case, the dimensions from the advanced indexing operations are inserted into the result array at the same spot as they were in the initial array (the latter logic is what makes simple advanced indexing behave just like slicing).
(my emphasis) The highlighted first case is exactly what applies to your
b = a[0,:,np.arange(30)]
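Worked out on the question's array (a quick sketch):

import numpy as np

a = np.random.random((25, 50, 30))
# 0 and np.arange(30) are both advanced indexes here, separated by the
# slice ':', so their broadcast shape (30,) moves to the front of the result.
print(a[0, :, np.arange(30)].shape)    # (30, 50)
# Splitting the indexing avoids that: a[0] is (50, 30), and a single
# advanced index stays in its original spot.
print(a[0][:, np.arange(30)].shape)    # (50, 30)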
When you use a list or array of integers to index a numpy array, you're using something known as Fancy Indexing. The rules for Fancy Indexing are not as straightforward as one might think. This is the reason your array has the wrong dimensions. To avoid surprises, I'd recommend you stick with slicing. So you should change your code to:
a = np.random.random((25,50,30))
b = a[0,:,:]
print(b.shape)
On Python2.4, the single colon slice operator : works as expected on Numeric matrices, in that it returns all values for the dimension it was used on. For example all X and/or Y values for a 2-D matrix.
On Python2.6, the single colon slice operator seems to have a different effect in some cases: for example, on a regular 2-D MxN matrix, m[:] can result in zeros(<some shape tuple>, 'l') being returned as the resulting slice. The full matrix is what one would expect - which is what one gets using Python2.4.
Using either a double colon :: or 3 dots ... in Python2.6, instead of a single colon, seems to fix this issue and return the proper matrix slice.
After some guessing, I discovered you can get the same zeros output when inputting 0 as the stop index. e.g. m[<any index>:0] returns the same "zeros" output as m[:]. Is there any way to debug what indexes are being picked when trying to do m[:]? Or did something change between the two Python versions (2.4 to 2.6) that would affect the behavior of slicing operators?
The version of Numeric being used (24.2) is the same between both versions of Python. Why does the single colon slicing NOT work on Python 2.6 the same way it works with version 2.4?
Python2.6:
>>> a = array([[1,2,3],[4,5,6]])
>>> a[:]                  # <-- unexpected: should be the full array
zeros((0, 3), 'l')
>>> a[::]
array([[1,2,3],[4,5,6]])
>>> a[...]
array([[1,2,3],[4,5,6]])
Python2.4:
>>> a = array([[1,2,3],[4,5,6]])
>>> a[:]                  # <-- works as expected
array([[1,2,3],[4,5,6]])
>>> a[::]
array([[1,2,3],[4,5,6]])
>>> a[...]
array([[1,2,3],[4,5,6]])
(I typed the "code" up from scratch, so it may not be fully accurate syntax or printout-wise, but shows what's happening)
It seems the problem is an integer overflow issue. In the Numeric source code, the matrix data structure being used is in a file called MA.py. The specific class is called MaskedArray. There is a line at the end of the class that sets the "array()" function to this class. I had much trouble finding this information but it turned out to be very critical.
There is also a __getslice__(self, i, j) method in the MaskedArray class that takes in the start/stop indices and returns the proper slice. After finding this and adding debug for those indices, I discovered that under the good case with Python2.4, when doing a slice for an entire array, the start/stop indices automatically passed in are 0 and 2^31-1, respectively. But under Python2.6, the automatically passed stop index changed to 2^63-1.
Somewhere, probably in the Numeric source/library code, there are only 32 bits available to store the stop index when slicing arrays. Hence the 2^63-1 value was overflowing (as would any value greater than 2^31-1). The output slice in these bad cases ends up being equivalent to slicing from start 0 to stop 0, i.e. an empty matrix. When you slice from [0:-1] you do get a valid slice. I think (2^63-1) interpreted as a 32 bit number would come out to -1. I'm not quite sure why slicing from 0 to 2^63-1 gives the same output as slicing from 0 to 0 (an empty matrix), and not the same as slicing from 0 to -1 (where you get at least some output).
Although, if I input ending slice indexes that would overflow (i.e. greater than 2^31), but the lower 32 bits were a valid positive non-zero number, I would get a valid slice back. E.g. a stop index of 2^33+1 would return the same slice as a stop index of 1, because the lower 32 bits are 1 in both cases.
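The bit arithmetic behind that theory checks out (a quick sanity sketch in plain Python):

stop = 2**63 - 1
low32 = stop & 0xFFFFFFFF                       # keep only the low 32 bits
signed = low32 - 2**32 if low32 >= 2**31 else low32
print(signed)                                   # -1 as a signed 32-bit value
print((2**33 + 1) & 0xFFFFFFFF)                 # 1, matching the a[0:2**33+1] observation
print((2**31 - 1) & 0xFFFFFFFF)                 # 2147483647, the Python2.4 stop value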
Python 2.4 Example code:
>>> a = array([[1,2,3],[4,5,6]])
>>> a[:] # (which actually becomes a[0:2^31-1])
[[1,2,3],[4,5,6]] # correct, expect the entire array
Python 2.6 Example code:
>>> a = array([[1,2,3],[4,5,6]])
>>> a[:] # (which actually becomes a[0:2^63-1])
zeros((0, 3), 'l') # incorrect b/c of overflow, should be full array
>>> a[0:0]
zeros((0, 3), 'l') # correct, b/c slicing range is null
>>> a[0:2**33+1]
[ [1,2,3]] # incorrect b/c of overflow, should be full array
# although it returned some data b/c the
# lower 32 bits of (2^33+1) = 1
>>> a[0:-1]
[ [1,2,3]] # correct, although I'm not sure why "a[:]" doesn't
# give this output as well, given that the lower 32
# bits of 2^63-1 equal -1
I think I was using 2.4 10 years ago. I used numpy back then, but may have added Numeric for its NETCDF capabilities. But the details are fuzzy. And I don't have any of those versions now for testing.
Python documentation back then should be easy to explore. numpy/Numeric documentation was skimpier.
I think Python has always had the basic : slicing for lists. alist[:] to make a copy, alist[1:-1] to slice of the first and last elements, etc.
I don't know when the step was added, e.g. alist[::-1] to reverse a list.
Python started to recognize indexing tuples at the request of numeric developers, e.g. arr[2,4], arr[(2,4)], arr[:, [1,2]], arr[::-1, :]. But I don't know when that appeared.
Ellipsis is also mainly of value for multidimensional indexing. The Python interpreter recognizes ..., but lists don't handle it. Around the same time, the : notation was formally implemented as the slice object, e.g.
In Python 3.5, we can reverse a list with a slice object:
In [6]: list(range(10)).__getitem__(slice(None,None,-1))
Out[6]: [9, 8, 7, 6, 5, 4, 3, 2, 1, 0]
I would suggest a couple of things:
make sure you understand numpy (and list) indexing/slicing in a current system
try the same things in the older versions; ask SO questions with concrete examples of the differences. Don't count on any of us to have memories of the old code.
study the documentation to find when suspected features were changed or added.
I have a problem with some numpy stuff. I need a numpy array to behave in an unusual manner by returning a slice as a view of the data I have sliced, not a copy. So here's an example of what I want to do:
Say we have a simple array like this:
a = array([1, 0, 0, 0])
I would like to update consecutive entries in the array (moving left to right) with the previous entry from the array, using syntax like this:
a[1:] = a[0:3]
This would get the following result:
a = array([1, 1, 1, 1])
Or something like this:
a[1:] = 2*a[:3]
# a = [1,2,4,8]
To illustrate further I want the following kind of behaviour:
for i in range(len(a) - 1):
    a[i+1] = a[i]
Except I want the speed of numpy.
The default behavior of numpy is to take a copy of the slice, so what I actually get is this:
a = array([1, 1, 0, 0])
I already have this array as a subclass of the ndarray, so I can make further changes to it if need be, I just need the slice on the right hand side to be continually updated as it updates the slice on the left hand side.
Am I dreaming or is this magic possible?
Update: This is all because I am trying to use Gauss-Seidel iteration to solve a linear algebra problem, more or less. It is a special case involving harmonic functions. I was trying to avoid going into this because it's really not necessary and likely to confuse things further, but here goes.
The algorithm is this:
while not converged:
    for i in range(len(u[:,0])):
        for j in range(len(u[0,:])):
            # skip over boundary entries, i,j == 0 or len(u)
            u[i,j] = 0.25*(u[i-1,j] + u[i+1,j] + u[i,j-1] + u[i,j+1])
Right? But you can do this two ways. Jacobi involves updating each element with its neighbours without considering updates you have already made until the while loop cycles; to do it with loops, you would copy the array and then update one array from the copied array. Gauss-Seidel, however, uses information you have already updated for each of the i-1 and j-1 entries, so no copy is needed; the loop essentially 'knows', since the array is re-evaluated after each single element update. That is to say, every time we look up an entry like u[i-1,j] or u[i,j-1], the information calculated in the previous loop iteration will be there.
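To make the contrast concrete, here is a minimal sketch of the two update orders just described:

import numpy as np

def jacobi_step(u):
    # Jacobi: every read comes from the old array, so a copy is required.
    new = u.copy()
    new[1:-1, 1:-1] = 0.25*(u[:-2, 1:-1] + u[2:, 1:-1]
                            + u[1:-1, :-2] + u[1:-1, 2:])
    return new

def gauss_seidel_step(u):
    # Gauss-Seidel: reads pick up values already updated in this sweep.
    for i in range(1, u.shape[0] - 1):
        for j in range(1, u.shape[1] - 1):
            u[i, j] = 0.25*(u[i-1, j] + u[i+1, j] + u[i, j-1] + u[i, j+1])
    return u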
I want to replace this slow and ugly nested loop situation with one nice clean line of code using numpy slicing:
u[1:-1,1:-1] = 0.25*(u[:-2,1:-1] + u[2:,1:-1] + u[1:-1,:-2] + u[1:-1,2:])
But the result is Jacobi iteration, because evaluating a right-hand side like u[:-2,1:-1] copies the data into a temporary, so the expression is not aware of any updates made. Now numpy still loops, right? It's not parallel; it's just a faster way to loop that looks like a parallel operation in python. I want to exploit this behaviour by sort of hacking numpy to return a pointer instead of a copy when I take a slice. Right? Then every time numpy loops, that slice will 'update', or really just replicate whatever happened in the update. To do this I need slices on both sides of the array to be pointers.
Anyway, if there is some really, really clever person out there, that's awesome, but I've pretty much resigned myself to believing the only answer is to loop in C.
Late answer, but this turned up on Google, so I'll point to the doc the OP probably wanted. Your problem is clear: when using NumPy slices, temporaries are created. Wrap your code in a quick call to weave.blitz to get rid of the temporaries and have the behaviour you want.
Read the weave.blitz section of PerformancePython tutorial for full details.
accumulate is designed to do what you seem to want; that is, to propagate an operation along an array. Here's an example:
from numpy import *
a = array([1,0,0,0])
a[1:] = add.accumulate(a[0:3])
# a = [1, 1, 1, 1]
b = array([1,1,1,1])
b[1:] = multiply.accumulate(2*b[0:3])
# b = [1 2 4 8]
Another way to do this is to explicitly specify the result array as the input array. Here's an example:
c = array([2,0,0,0])
multiply(c[:3], c[:3], c[1:])
# c = [ 2 4 16 256]
Just use a loop. I can't immediately think of any way to make the slice operator behave the way you're saying you want it to, except maybe by subclassing numpy's array and overriding the appropriate method with some sort of Python voodoo... but more importantly, the idea that a[1:] = a[0:3] should copy the first value of a into the next three slots seems completely nonsensical to me. I imagine that it could easily confuse anyone else who looks at your code (at least the first few times).
It is not the correct logic.
I'll try to use letters to explain it.
Imagine array = abcd, with a, b, c, d as elements.
Now, array[1:] means from the element in position 1 (starting from 0) onwards, in this case: bcd. And array[0:3] means from the character in position 0 up to the third character (the one in position 3-1), in this case: abc.
Writing something like:
array[1:] = array[0:3]
means: replace bcd with abc
To obtain the output you want, in Python you should use something like:
a[1:] = a[0]
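This works because a[0] is a scalar that numpy broadcasts across the whole slice (quick check):

import numpy as np

a = np.array([1, 0, 0, 0])
a[1:] = a[0]        # scalar broadcast over the slice
print(a)            # [1 1 1 1]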
It must have something to do with assigning a slice. Operators, however, as you may already know, do follow your expected behavior:
>>> a = numpy.array([1,0,0,0])
>>> a[1:]+=a[:3]
>>> a
array([1, 1, 1, 1])
If you already have zeros in your real-world problem where your example does, then this solves it. Otherwise, at added cost, set them to zero first, either by multiplying by zero or assigning zero (whichever is faster).
edit:
I had another thought. You may prefer this:
numpy.put(a,[1,2,3],a[:3])
Numpy must be checking whether the target array is the same as the input array when doing the set-item call. Luckily, there are ways around it. First, I tried using numpy.put instead:
In [46]: a = numpy.array([1,0,0,0])
In [47]: numpy.put(a,[1,2,3],a[0:3])
In [48]: a
Out[48]: array([1, 1, 1, 1])
And then from the documentation of that, I gave using flatiters a try (a.flat)
In [49]: a = numpy.array([1,0,0,0])
In [50]: a.flat[1:] = a[0:3]
In [51]: a
Out[51]: array([1, 1, 1, 1])
But this doesn't solve the problem you had in mind
In [55]: a = np.array([1,0,0,0])
In [56]: a.flat[1:] = 2*a[0:3]
In [57]: a
Out[57]: array([1, 2, 0, 0])
This fails because the multiplication is done before the assignment, not in parallel as you would like.
Numpy is designed for repeated application of the exact same operation in parallel across an array. To do something more complicated, unless you can find a way to decompose it in terms of functions like numpy.cumsum and numpy.cumprod, you'll have to resort to something like scipy.weave or writing the function in C. (See the PerformancePython page for more details.) (Also, I've never used weave, so I can't guarantee it will do what you want.)
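For instance, the b = [1, 2, 4, 8] doubling example above does decompose into a cumprod (a minimal sketch):

import numpy as np

# a[0] = 1 and a[i] = 2*a[i-1] is just a running product of factors:
factors = np.array([1, 2, 2, 2])
print(np.cumprod(factors))    # [1 2 4 8]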
You could have a look at np.lib.stride_tricks.
There is some information in these excellent slides:
http://mentat.za.net/numpy/numpy_advanced_slides/
with stride_tricks starting at slide 29.
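For reference, a small stride_tricks sketch of the kind of overlapping views those slides describe (handle with care; the windows alias the same memory):

import numpy as np
from numpy.lib.stride_tricks import as_strided

a = np.arange(6)
# Four overlapping length-3 windows over a's buffer; no data is copied.
windows = as_strided(a, shape=(4, 3), strides=(a.itemsize, a.itemsize))
print(windows)    # [[0 1 2] [1 2 3] [2 3 4] [3 4 5]]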
I'm not completely clear on the question though so can't suggest anything more concrete - although I would probably do it in cython or fortran with f2py or with weave. I'm liking fortran more at the moment because by the time you add all the required type annotations in cython I think it ends up looking less clear than the fortran.
There is a comparison of these approaches here:
www.scipy.org/PerformancePython
with an example that looks similar to your case.
In the end I ran into the same problem as you. I had to resort to using Jacobi iteration and weave:
while (iter_n < max_time_steps):
    expr = "field[1:-1, 1:-1] = (field[2:, 1:-1] "\
           "+ field[:-2, 1:-1] +"\
           "field[1:-1, 2:] +"\
           "field[1:-1, :-2])/4."
    weave.blitz(expr, check_size=0)
    # Toroidal conditions
    field[:, 0] = field[:, self.flow.n_x - 2]
    field[:, self.flow.n_x - 1] = field[:, 1]
    iter_n = iter_n + 1
It works and is fast, but it is not Gauss-Seidel, so convergence can be a bit tricky. The only option for doing Gauss-Seidel is a traditional loop with indexes.
I would suggest Cython instead of looping in C. There might be some fancy numpy way of getting your example to work using a lot of intermediate steps... but since you already know how to write it in C, just write that quick little bit as a Cython function and let Cython's magic make the rest of the work easy for you.