For example (from https://numpy.org/doc/stable/reference/generated/numpy.nditer.html):
def iter_add_py(x, y, out=None):
    addop = np.add
    it = np.nditer([x, y, out], [],
                   [['readonly'], ['readonly'], ['writeonly', 'allocate']])
    with it:
        for (a, b, c) in it:
            addop(a, b, out=c)
    return it.operands[2]
In the case out=None, the only way to access the result of the operation is via it.operands[2]. Numba's nditer supports None as an operand and allocates the correct empty array, but the result is inaccessible because Numba doesn't support the operands attribute. Speed is critical in my solution.
Things I've tried:
Passing a new, pre-allocated empty array (which we can reference) as out is the seemingly obvious solution, but then you'd need to compute the broadcast shape, dtype, etc. manually. I couldn't find a way to do this, with or without NumPy, that was fast enough to justify ignoring that nditer already does the job.
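For reference, in plain NumPy (outside a jitted function) the broadcast shape and promoted dtype can be computed up front; whether this is fast enough for the use case above is exactly the open question. A minimal sketch (np.broadcast_shapes needs NumPy >= 1.20; the helper name allocate_out is mine):

```python
import numpy as np

def allocate_out(x, y):
    # Compute the broadcast shape and promoted dtype without
    # performing the operation itself.
    shape = np.broadcast_shapes(np.shape(x), np.shape(y))
    dtype = np.result_type(x, y)
    return np.empty(shape, dtype=dtype)

out = allocate_out(np.ones((3, 1)), np.ones((1, 4), dtype=np.float32))
# out.shape == (3, 4), out.dtype == float64 (float64 wins the promotion)
```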
Even if I could, Numba's nditer doesn't support flags in its constructor, making the iterator read-only. The best workaround I found is to enumerate the nditer and assign to out.flat inside the loop, which is also too slow (and a little gross).
Unsurprisingly, a jitted function can neither take nor return nditer objects. I'm new to Numba, so I could be wrong about this, but I couldn't find anything to the contrary. I can only iterate inside the function.
There's no Numba support for np.broadcast, so that won't work.
I should mention that numba.vectorize isn't an option, for a host of irrelevant reasons.
Any other ideas?
Related
I am fairly new to cython and I am wondering why the following takes very long:
cpdef test(a):
    cdef np.ndarray[dtype=int] b
    for i in range(10):
        b = a

a = np.array([1, 2, 3], dtype=int)
t = timeit.Timer(functools.partial(test.test, a))
print(t.timeit(1000000))
-> 0.5446977 seconds
If I comment out the cdef declaration, this runs in no time. If I declare a as np.ndarray in the function header, nothing changes. Also, id(a) == id(b), so no new objects are created.
Similar behaviour can be observed when calling a function that takes many ndarrays as args, e.g.
cpdef foo(np.ndarray a, np.ndarray b,np.ndarray c, ..... )
Can anybody help me? What am I missing here?
Edit:
I noticed the following:
This is slow:
cpdef foo(np.ndarray[dtype=int, ndim=1] a, np.ndarray[dtype=int, ndim=1] b, np.ndarray[dtype=int, ndim=1] c):
    return
This is faster:
def foo(np.ndarray[dtype=int, ndim=1] a, np.ndarray[dtype=int, ndim=1] b, np.ndarray[dtype=int, ndim=1] c):
    return
This is the fastest:
cpdef foo(a, b, c):
    return
The function foo() is called very frequently (many millions of times) in my project from many different locations, and does some calculus with the three NumPy arrays (it doesn't change their content, however).
I basically need the speed of knowing the data type inside the arrays while also keeping the function-call overhead very low. What would be the most adequate solution for this?
b = a generates a bunch of type checking that needs to identify whether the type of a is actually an ndarray and makes sure it exports the buffer protocol with an appropriate element type. In exchange for this one-off cost you get fast indexing of single elements.
If you're not doing indexing of single elements then typing as np.ndarray is literally pointless and you're pessimizing your code. If you are doing this indexing then you can get significant optimizations.
If I comment out the cdef declaration this runs in no time.
This is often a sign that the C compiler has realized the entire function does nothing and optimized it out completely. And therefore your measurement may be meaningless.
cpdef foo(np.ndarray a, np.ndarray b,np.ndarray c, ..... )
Just specifying the type as np.ndarray without specifying the element dtype usually gains you very little, and is probably not worthwhile.
If you have a function that you're calling millions of times then it is likely that the input arrays come from somewhere, and can be pre-typed, probably with less frequency. For example they might come by taking slices from a larger array?
The newer memoryview syntax (int[:]) is quick to slice, so for example if you already have a 2D memoryview (int[:,:] x), it's very quick to generate a 1D memoryview from it (e.g. x[:, 0]), and it's quick to pass existing memoryviews into a cdef function with memoryview arguments. (Note that (a) I'm just unsure whether all of this applies to np.ndarray too, and (b) setting up a fresh memoryview is likely to cost about the same as an np.ndarray, so I'm only suggesting them because I know slicing is quick.)
Therefore my main suggestion is to move the typing outwards to try to reduce the number of fresh initializations of these typed arrays. If that isn't possible then I think you may be stuck.
I tried giving numba a go, as I was told it works very well for numerical/scientific computing applications. However, it seems that I've already run into a problem in the following scenario:
I have a function that computes a 12x12 Jacobian matrix, represented by a numpy array, and then returns this Jacobian. However, when I attempt to decorate said function with @numba.njit, I get the following error:
This is not usually a problem with Numba itself but instead often caused by
the use of unsupported features or an issue in resolving types.
As a basic example of my usage, the following code tries to declare a 12x12 numpy zero matrix, but it fails:
import numpy as np
import numba

@numba.njit
def numpy_matrix_test():
    A = np.zeros([12, 12])
    return A

A_out = numpy_matrix_test()
print(A_out)
Since I assumed declaring numpy arrays in such a way was common enough that numba would be able to handle them, I'm quite surprised.
The assumption that the functions called inside a Numba-jitted function are the same functions as when called outside one is understandable, but wrong. In reality, Numba (behind the scenes) delegates to its own implementations instead of using the "real" NumPy functions.
So it's not really np.zeros that is called in the jitted function; it's Numba's own version, and some differences between Numba and NumPy are unavoidable.
For example, you cannot use a list for the shape; it has to be a tuple (lists and arrays produce the exception you've encountered). So the correct syntax would be:
@numba.njit
def numpy_matrix_test():
    A = np.zeros((12, 12))
    return A
Something similar applies to the dtype argument. It has to be a real NumPy/Numba type; a Python type cannot be used:
@numba.njit
def numpy_matrix_test():
    A = np.zeros((12, 12), dtype=int)  # to make it work, use numba.int64 instead of int here
    return A
Even if "plain" NumPy allows it:
np.zeros((12, 12), dtype=int)
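For completeness, the tuple-shape, explicit-NumPy-dtype form below is the one that works identically in plain NumPy and, as far as I can tell, is also what Numba accepts (shown here as plain NumPy only):

```python
import numpy as np

# Tuple shape plus an explicit NumPy dtype: this form works in plain
# NumPy and is also the form Numba's np.zeros implementation accepts.
A = np.zeros((12, 12), dtype=np.int64)
# A is a 12x12 matrix of int64 zeros
```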
Do you perhaps mean numpy.zeros((12,12)), because you want a shape of 12 rows and 12 columns?
Numpy Zeros reference
For doing repeated operations in numpy/scipy, there's a lot of overhead because most operations return a new object.
For example
for i in range(100):
    x = A*x
I would like to avoid this by passing a preallocated output to the operation, like you would in C:
for i in range(100):
    np.dot(A, x, x_new)  # x_new now stores the result of the multiplication
    x, x_new = x_new, x
Is there any way to do this? I would like this not just for multiplication but for all operations that return a matrix or a vector.
See Learning to avoid unnecessary array copies in IPython Books. From there, note e.g. these guidelines:
a *= b
will not produce a copy, whereas:
a = a * b
will produce a copy. Also, flatten() always copies, while ravel() copies only if necessary and otherwise returns a view (and thus should generally be preferred). reshape() likewise returns a view when possible, copying only when the data cannot be reinterpreted with the new shape.
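These copy/view rules are easy to check directly with np.shares_memory; a small sketch:

```python
import numpy as np

a = np.arange(6).reshape(2, 3)

b = a * 2                      # allocates a new array
assert not np.shares_memory(a, b)

buf = a
a *= 2                         # in-place: writes into the same object
assert a is buf

# ravel() on a contiguous array returns a view; flatten() always copies.
assert np.shares_memory(a, a.ravel())
assert not np.shares_memory(a, a.flatten())

# reshape() also returns a view when the memory layout allows it.
assert np.shares_memory(a, a.reshape(3, 2))
```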
Furthermore, as @hpaulj and @ali_m noted in their comments, many numpy functions support an out parameter, so have a look at the docs. From the numpy.dot() docs:
out : ndarray, optional
Output argument.
This must have the exact kind that would be returned if it was not used. In particular, it must have the right type, must be C-contiguous, and its dtype must be the dtype that would be returned for dot(a,b). This is a performance feature. Therefore, if these conditions are not met, an exception is raised, instead of attempting to be flexible.
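Putting it together, the loop from the question can be written with a pre-allocated output buffer that is reused on every iteration (small fixed values here just for illustration):

```python
import numpy as np

A = np.array([[0.5, 0.1],
              [0.2, 0.4]])
x = np.array([1.0, 2.0])
x0 = x.copy()                # keep the starting vector for checking
x_new = np.empty_like(x)     # allocated once, reused every iteration

for i in range(100):
    np.dot(A, x, out=x_new)  # writes into x_new, no new allocation
    x, x_new = x_new, x      # swap buffers instead of copying

# x now holds A^100 @ x0
```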
I'm trying to subclass numpy.complex64 in order to make use of the way numpy stores the data (contiguous, alternating real and imaginary parts), but use my own __add__, __sub__, ... routines.
My problem is that when I make a numpy.ndarray with dtype=mysubclass, I get a numpy.ndarray with dtype=numpy.complex64 instead, which results in numpy not using my own functions for additions, subtractions and so on.
Example:
import numpy as np

class mysubclass(np.complex64):
    pass

a = mysubclass(1+1j)
A = np.empty(2, dtype=mysubclass)

print type(a)
print repr(A)
Output:
<class '__main__.mysubclass'>
array([ -2.07782988e-20 +4.58546896e-41j, -2.07782988e-20 +4.58546896e-41j], dtype=complex64)
Does anyone know how to do this?
Thanks in advance - Soren
The NumPy type system is only designed to be extended from C, via the PyArray_RegisterDataType function. It may be possible to access this functionality from Python using ctypes, but I wouldn't recommend it; better to write an extension in C or Cython, or subclass ndarray as @seberg describes.
There's a simple example dtype in the NumPy source tree: newdtype_example/floatint.c. If you're into Pyrex, reference.pyx in the pytables source may be worth a look.
Note that scalars and arrays are quite different in numpy. np.complex64 is a scalar type (and a single-precision one: two 32-bit floats, not double precision). You will not be able to change the array's behaviour by subclassing the scalar; you need to subclass the array instead and then override its __add__ and __sub__.
If that is all you want to do, it should just work; otherwise look at http://docs.scipy.org/doc/numpy/user/basics.subclassing.html, since subclassing an array is not that simple.
However, if you want to use this type as a scalar too (for example, to index scalars out), it gets more difficult, at least currently. You can get a little further by defining __array_wrap__ to convert scalars to your own scalar type for some reduce functions; for indexing to work in all cases, it appears to me that you currently have to define your own __getitem__.
In all cases with this approach, you still use the complex datatype, and all functions that are not explicitly overridden will still behave the same. @ecatmur mentioned that you can create new datatypes from the C side, if that is really what you want.
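A minimal sketch of the subclass-ndarray route described above (the class and method bodies are mine, not from the question; the __add__ override here just delegates to NumPy, but any custom routine could be substituted):

```python
import numpy as np

class MyComplexArray(np.ndarray):
    """complex64 array with an overridable __add__."""

    def __new__(cls, data):
        # View an ordinary complex64 array as this subclass; the
        # contiguous real/imag storage is unchanged.
        return np.asarray(data, dtype=np.complex64).view(cls)

    def __add__(self, other):
        # Custom addition hook: drop to plain ndarrays, compute,
        # then re-wrap the result as this subclass.
        result = np.add(np.asarray(self), np.asarray(other))
        return result.view(type(self))

a = MyComplexArray([1 + 1j, 2 - 2j])
b = a + a  # dispatches through MyComplexArray.__add__
```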
The code below best illustrates my problem:
The output to the console (NB it takes ~8 minutes to run even the first test) shows the 512x512x512x16-bit array allocations consuming no more than expected (256MByte for each one), and looking at "top" the process generally remains sub-600MByte as expected.
However, while the vectorized version of the function is being called, the process expands to enormous size (over 7GByte!). Even the most obvious explanation I can think of to account for this - that vectorize is converting the inputs and outputs to float64 internally - could only account for a couple of gigabytes, even though the vectorized function returns an int16, and the returned array is certainly an int16. Is there some way to avoid this happening ? Am I using/understanding vectorize's otypes argument wrong ?
import numpy as np
import subprocess

def logmem():
    subprocess.call('cat /proc/meminfo | grep MemFree', shell=True)

def fn(x):
    return np.int16(x*x)

def test_plain(v):
    print "Explicit looping:"
    logmem()
    r = np.zeros(v.shape, dtype=np.int16)
    for z in xrange(v.shape[0]):
        for y in xrange(v.shape[1]):
            for x in xrange(v.shape[2]):
                r[z, y, x] = fn(v[z, y, x])
    print type(r[0, 0, 0])
    logmem()
    return r

vecfn = np.vectorize(fn, otypes=[np.int16])

def test_vectorize(v):
    print "Vectorize:"
    logmem()
    r = vecfn(v)
    print type(r[0, 0, 0])
    logmem()
    return r

logmem()
s = (512, 512, 512)
v = np.ones(s, dtype=np.int16)
logmem()
test_plain(v)
test_vectorize(v)
v = None
logmem()
I'm using whichever versions of Python/numpy are current on an amd64 Debian Squeeze system (Python 2.6.6, numpy 1.4.1).
It is a basic problem of vectorisation that all intermediate values are also vectors. While this is a convenient way to get a decent speed enhancement, it can be very inefficient with memory usage, and will be constantly thrashing your CPU cache. To overcome this problem, you need to use an approach which has explicit loops running at compiled speed, not at python speed. The best ways to do this are to use cython, fortran code wrapped with f2py or numexpr. You can find a comparison of these approaches here, although this focuses more on speed than memory usage.
You can read the source code of vectorize(): it converts the array's dtype to object and calls np.frompyfunc() to create a ufunc from your Python function; that ufunc returns an object array, and finally vectorize() converts the object array to an int16 array.
An array uses much more memory when its dtype is object.
Using a Python function for element-wise calculation is slow, even when it's converted to a ufunc by frompyfunc().
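The intermediate object array is easy to observe directly; this is a small sketch of what vectorize() does internally (a tiny 4x4 input rather than 512^3, for obvious reasons):

```python
import numpy as np

def fn(x):
    return np.int16(x * x)

ufn = np.frompyfunc(fn, 1, 1)       # 1 input, 1 output
v = np.ones((4, 4), dtype=np.int16)

intermediate = ufn(v)               # object array of boxed int16 scalars
result = intermediate.astype(np.int16)  # the final cast vectorize() performs
```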