Numpy passing input array as `out` argument to ufunc - python

Is it generally safe to provide the input array as the optional out argument to a ufunc in numpy, provided the type is correct? For example, I have verified that the following works:
>>> import numpy as np
>>> arr = np.array([1.2, 3.4, 4.5])
>>> np.floor(arr, arr)
array([ 1., 3., 4.])
The array type must be compatible with, or identical to, the output type (which is a float for numpy.floor()), or this happens:
>>> arr2 = np.array([1, 3, 4], dtype = np.uint8)
>>> np.floor(arr2, arr2)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: ufunc 'floor' output (typecode 'e') could not be coerced to provided output parameter (typecode 'B') according to the casting rule ''same_kind''
So, given an array of the proper type, is it generally safe to apply ufuncs in-place? Or is floor() an exceptional case? The documentation does not make it clear, and neither do the following two threads that have tangential bearing on the question:
Numpy modify array in place?
Numpy Ceil and Floor "out" Argument
EDIT:
As a first-order guess, I would assume it is often, but not always, safe, based on the tutorial at http://docs.scipy.org/doc/numpy/user/c-info.ufunc-tutorial.html. There does not appear to be any restriction on using the output array as a temporary holder for intermediate results during the computation. While something like floor() and ceil() may not require temporary storage, more complex functions might. That being said, the entire existing library may be written with that in mind.

The out parameter of a numpy function is the array where the result is written. The main advantage of using out is avoiding the allocation of new memory where it is not necessary.
Is it safe to write the output of a function into the same array that was passed as input? There is no general answer: it depends on what the function is doing.
Two examples
Here are two examples of ufunc-like functions:
In [1]: def plus_one(x, out=None):
   ...:     if out is None:
   ...:         out = np.zeros_like(x)
   ...:     for i in range(x.size):
   ...:         out[i] = x[i] + 1
   ...:     return out
In [2]: x = np.arange(5)
In [3]: x
Out[3]: array([0, 1, 2, 3, 4])
In [4]: y = plus_one(x)
In [5]: y
Out[5]: array([1, 2, 3, 4, 5])
In [6]: z = plus_one(x, x)
In [7]: z
Out[7]: array([1, 2, 3, 4, 5])
Function shift_one:
In [11]: def shift_one(x, out=None):
    ...:     if out is None:
    ...:         out = np.zeros_like(x)
    ...:     n = x.size
    ...:     for i in range(n):
    ...:         out[(i+1) % n] = x[i]
    ...:     return out
In [12]: x = np.arange(5)
In [13]: x
Out[13]: array([0, 1, 2, 3, 4])
In [14]: y = shift_one(x)
In [15]: y
Out[15]: array([4, 0, 1, 2, 3])
In [16]: z = shift_one(x, x)
In [17]: z
Out[17]: array([0, 0, 0, 0, 0])
For the function plus_one there is no problem: the expected result is obtained when the parameters x and out are the same array. But the function shift_one gives a surprising result when the parameters x and out are the same array, because the array is read and written during the same loop: x[0] is copied into x[1] before x[1] is ever read, so the overwritten value propagates and the output is all zeros. A safe variant is sketched below.
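One way to make shift_one safe under aliasing is to take a defensive copy of the input whenever it overlaps the output. This is a sketch of my own (the name shift_one_safe and the np.shares_memory check are additions, not part of the original answer):
import numpy as np

def shift_one_safe(x, out=None):
    if out is None:
        out = np.zeros_like(x)
    # Copy the input if it shares memory with the output, so that
    # reads are never clobbered by earlier writes.
    src = x.copy() if np.shares_memory(x, out) else x
    n = src.size
    for i in range(n):
        out[(i + 1) % n] = src[i]
    return out

x = np.arange(5)
print(shift_one_safe(x, x))   # [4 0 1 2 3], even though out is x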
Discussion
For functions of the form out[i] := some_operation(x[i]), such as plus_one above, but also floor, ceil, sin, cos, tan, log, conj, etc., it is, as far as I know, safe to write the result into the input array using the parameter out.
It is also safe for functions taking two input parameters, of the form out[i] := some_operation(x[i], y[i]), such as the numpy functions add, multiply, and subtract.
For the other functions, it is case-by-case. As illustrated below, matrix multiplication is not safe:
In [18]: a = np.arange(4).reshape((2,2))
In [19]: a
Out[19]:
array([[0, 1],
[2, 3]])
In [20]: b = (np.arange(4) % 2).reshape((2,2))
In [21]: b
Out[21]:
array([[0, 1],
[0, 1]], dtype=int32)
In [22]: c = np.dot(a, b)
In [23]: c
Out[23]:
array([[0, 1],
[0, 5]])
In [24]: d = np.dot(a, b, out=a)
In [25]: d
Out[25]:
array([[0, 1],
[0, 3]])
Last remark: if the implementation is multithreaded, the result of an unsafe function may even be non-deterministic, because it depends on the order in which the array elements are processed.

This is an old question, but there is an updated answer:
Yes, it is safe. In the Numpy documentation, we see that as of v1.13:
Operations where ufunc input and output operands have memory overlap are defined to be the same as for equivalent operations where there is no memory overlap. Operations affected make temporary copies as needed to eliminate data dependency. As detecting these cases is computationally expensive, a heuristic is used, which may in rare cases result in needless temporary copies. For operations where the data dependency is simple enough for the heuristic to analyze, temporary copies will not be made even if the arrays overlap, if it can be deduced copies are not necessary. As an example, np.add(a, b, out=a) will not involve copies.
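A quick sketch illustrating the guarantee (my own example, not taken from the documentation):
import numpy as np

a = np.arange(5)
# Input and output overlap (a reversed view against a itself); since
# NumPy 1.13 this is defined to behave as if there were no overlap,
# with a temporary copy made internally when needed.
np.add(a, a[::-1], out=a)
print(a)   # [4 4 4 4 4] -- each element plus its mirror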

Related

Find and use the transpose of a linear operator in python

I have a complicated linear system $y = Ax$ where I cannot specify the matrix $A$ explicitly; I can, however, write a function that computes $Ax$, and I have made this into a linear operator.
I need to find $A^T$.
I have tried finding $A^T$ by hand but it is becoming tricky.
I found that scipy has a built-in method .transpose(). I have tried using it on a simple example:
import numpy as np
from scipy.sparse.linalg import LinearOperator

def mv(v):
    return np.array([2*v[0] - v[1], 3*v[1]])

A = LinearOperator((2,2), matvec=mv)
C = A.transpose()
but then when I try to use this it doesn't seem to work. I tried comparing the results
A.matvec(np.ones(2))
array([1., 3.])
C.rmatvec(np.ones(2))
array([1., 3.])
but the results are the same. I'm not sure why this is; surely the second result should be [2, 2]?
For a 2d array, M:
In [28]: M = np.array([[1,3,2],[2,1,2],[5,2,1]]); x=np.array([1,2,3])
The transpose lets us switch the order:
In [29]: M@x
Out[29]: array([13, 10, 12])
In [30]: x@M.T
Out[30]: array([13, 10, 12])
Is that what your C does? It implements the rmatvec in place of A.matvec.
Trying to use matvec on C produces an error. C itself is a LinearOperator, one that (somehow) references the operations defined for A.
In [40]: C.matvec([1,1])
---------------------------------------------------------------------------
NotImplementedError Traceback (most recent call last)
Cell In[40], line 1
----> 1 C.matvec([1,1])
File ~\miniconda3\lib\site-packages\scipy\sparse\linalg\_interface.py:232, in LinearOperator.matvec(self, x)
229 if x.shape != (N,) and x.shape != (N,1):
230 raise ValueError('dimension mismatch')
--> 232 y = self._matvec(x)
234 if isinstance(x, np.matrix):
235 y = asmatrix(y)
File ~\miniconda3\lib\site-packages\scipy\sparse\linalg\_interface.py:583, in _TransposedLinearOperator._matvec(self, x)
581 def _matvec(self, x):
582 # NB. np.conj works also on sparse matrices
--> 583 return np.conj(self.A._rmatvec(np.conj(x)))
File ~\miniconda3\lib\site-packages\scipy\sparse\linalg\_interface.py:535, in _CustomLinearOperator._rmatvec(self, x)
533 func = self.__rmatvec_impl
534 if func is None:
--> 535 raise NotImplementedError("rmatvec is not defined")
536 return self.__rmatvec_impl(x)
NotImplementedError: rmatvec is not defined
The key is that for C, matvec is implemented with a A.rmatvec:
np.conj(self.A._rmatvec(np.conj(x)))
I haven't worked a lot with this LinearOperator class, but I view it as an abstract class that can be used in iterative solvers in much the same way as a 'conventional' 2d array, except that all it has to define is one or more operations like matvec. In your case that's the mv function.
From the LinearOperator docs:
shape : tuple
Matrix dimensions (M, N).
matvec : callable f(v)
Returns A * v.
rmatvec : callable f(v)
Returns A^H * v, where A^H is the conjugate transpose of A.
Contrived example
Define a rmatvec for A:
In [51]: def mv(v):
    ...:     return np.array([2*v[0]- v[1], 3*v[1]])
    ...: def rmv(v):
    ...:     return np.array([2*v[1]- v[0], 3*v[0]])
    ...: A = linalg.LinearOperator((2,2), matvec=mv, rmatvec=rmv)
    ...: C = A.transpose()
In [52]: A.matvec((1,6))
Out[52]: array([-4, 18])
In [53]: A.rmatvec((1,6))
Out[53]: array([11, 3])
In [54]: A.rmatvec((6,1))
Out[54]: array([-4, 18])
Notice how C just switches the roles of matvec and rmatvec:
In [55]: C.matvec((1,6))
Out[55]: array([11, 3])
In [56]: C.rmatvec((1,6))
Out[56]: array([-4, 18])
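For a concrete cross-check (my own addition, using scipy.sparse.linalg.aslinearoperator): when the operator wraps an explicit matrix, scipy derives rmatvec automatically, and transpose() then agrees with the dense M.T:
import numpy as np
from scipy.sparse import linalg

M = np.array([[2.0, -1.0], [0.0, 3.0]])   # the matrix behind the OP's mv
A = linalg.aslinearoperator(M)
C = A.transpose()

v = np.ones(2)
print(A.matvec(v))   # M @ v    -> [1. 3.]
print(C.matvec(v))   # M.T @ v  -> [2. 2.], the result the OP expected
print(M.T @ v)       # [2. 2.]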
transpose will also change the shape of the operator. If A takes (2,) and returns a (3,), it has shape (3,2); C has shape (2,3), consistent with being a right-hand operator:
In [58]: def mv(v):
    ...:     return np.array([2*v[0]- v[1], 3*v[1], 0])
    ...: A = linalg.LinearOperator((3,2), matvec=mv)
    ...: C = A.transpose()
In [59]: A
Out[59]: <3x2 _CustomLinearOperator with dtype=float64>
In [60]: C
Out[60]: <2x3 _TransposedLinearOperator with dtype=float64>
To find the transpose of a general function you would need to use automatic differentiation, for which Python has built-in tooling. In the end, I managed to find the transpose of my linear operator by hand.

Numpy broadcasting and structured dtype: how to handle a vector as an entity?

I'm fairly new to NumPy, so it's quite possible that I'm missing something fundamental. Don't hesitate to ask "stupid" questions about "basic" things!
I'm trying to write some functions that manipulate vectors. I'd like them to work on single vectors, as well as on arrays of vectors, like most of NumPy's ufuncs:
import math
import numpy
def func(scalar, x, vector):
    # arbitrary function
    # I'm NOT looking to replace this with numpy.magic_sum_multiply()
    # I'm trying to understand broadcasting & dtypes
    return scalar * x + vector

print(func(
    scalar=numpy.array(2),
    x=numpy.array([1, 0, 0]),
    vector=numpy.array([1, 0, 0]),
))
# => [3 0 0], as expected

print(func(
    scalar=numpy.array(2),
    x=numpy.array([1, 0, 0]),
    vector=numpy.array([[1, 0, 0], [0, 1, 0]]),
))
# => [[3 0 0], [2 1 0]], as expected. x & scalar are broadcasted out to match the multiple vectors
However, when trying to use multiple scalars, things go wrong:
print(func(
    scalar=numpy.array([1, 2]),
    x=numpy.array([1, 0, 0]),
    vector=numpy.array([1, 0, 0]),
))
# => ValueError: operands could not be broadcast together with shapes (2,) (3,)
# expected: [[2 0 0], [3 0 0]]
I'm not entirely surprised by this. After all, NumPy has no idea that my vectors are single entities rather than just another array dimension.
I can solve this ad-hoc with some expand_dims() and/or squeeze() to add/remove axes, but that feels hacky...
So I figured that, since I'm working with vectors that are a single "entity", dtypes may be what I'm looking for:
vector_dtype = numpy.dtype([
    ('x', numpy.float64),
    ('y', numpy.float64),
    ('z', numpy.float64),
])
_ = numpy.array([(1, 0, 0), (0, 1, 0)], dtype=vector_dtype)
print(_.shape) # => (2,), good, we indeed have 2 vectors!
_ = numpy.array((1, 0, 0, 7), dtype=vector_dtype)
# Good, basic checking that I'm staying in 3D
# => ValueError: could not assign tuple of length 4 to structure with 3 fields.
However, I seem to lose basic math capabilities:
print(2 * _)
# => TypeError: The DTypes <class 'numpy.dtype[void]'> and <class 'numpy.dtype[uint8]'> do not have a common DType. For example they cannot be stored in a single array unless the dtype is `object`.
So my main question is: How do I solve this?
Is there some numpy.magic_broadcast_that_understands_what_I_mean() function?
Can I define math-operators (such as addition, ...) on the vector-dtype?
How do I solve this?
You are after a version of func vectorized in its first argument; let's call it vfunc. (This is not "vectorization" stricto sensu, since the looping is done inside the function.)
def vfunc(scalars, x, vector):   # note the first parameter: scalars, plural
    return numpy.vstack([        # assuming that's the shape you want
        scalar * x + vector for scalar in scalars
    ])
print(vfunc(
    scalars=[2],   # no need for an array instance actually
    x=numpy.array([1, 0, 0]),
    vector=numpy.array([1, 0, 0]),
))
# => [3 0 0], as expected

print(vfunc(
    scalars=[2],
    x=numpy.array([1, 0, 0]),
    vector=numpy.array([[1, 0, 0], [0, 1, 0]]),
))
# => [[3 0 0], [2 1 0]], as expected

print(vfunc(
    scalars=[1, 2],
    x=numpy.array([1, 0, 0]),
    vector=numpy.array([1, 0, 0]),
))
# => [[2 0 0], [3 0 0]], as expected
[...] dtypes may be what I'm looking for
No it is not.
Is there some numpy.magic_broadcast_that_understands_what_I_mean()
Yes. It is called numpy.vectorize but it is not worth it.
As it reads in the documentation:
The vectorize function is provided primarily for convenience, not for performance. The implementation is essentially a for loop.
ufuncs obey the same broadcasting rules as the operators, and your own function, written with numpy operators and ufuncs, has to work with those as well. Your function could tweak the dimensions to translate inputs into something that works with the rest of numpy. (Writing your own ufuncs is an advanced topic.)
In [64]: scalar=numpy.array([1, 2])
...: x=numpy.array([1, 0, 0])
...: vector=numpy.array([1, 0, 0])
In [65]: scalar * x + vector
Traceback (most recent call last):
File "<ipython-input-65-ad4a73833616>", line 1, in <module>
scalar * x + vector
ValueError: operands could not be broadcast together with shapes (2,) (3,)
The problem is the multiplication; regardless of what you call it, scalar is a (2,) shape array, which does not work with a (3,) array.
In [68]: scalar*x
Traceback (most recent call last):
File "<ipython-input-68-0d21729ffa15>", line 1, in <module>
scalar*x
ValueError: operands could not be broadcast together with shapes (2,) (3,)
But what do you expect to happen? What shape should the result have?
If scalar is a (2,1) shaped array, then by broadcasting this result is (2,3) - taking the 2 from scalar and 3 from the other arrays:
In [76]: scalar[:,None] * x + vector
Out[76]:
array([[2, 0, 0],
[3, 0, 0]])
This is standard numpy broadcasting, and there's nothing "hacky" about it.
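If you want func itself to accept either a single scalar or a batch of scalars, it can do the axis-tweaking internally. A sketch (my own variant of the OP's func, not from the original answer):
import numpy as np

def func(scalar, x, vector):
    scalar = np.asarray(scalar)
    if scalar.ndim == 1:
        # a batch of scalars: add a trailing axis so it broadcasts
        # against the vector axis instead of colliding with it
        scalar = scalar[:, None]
    return scalar * x + vector

print(func(np.array([1, 2]), np.array([1, 0, 0]), np.array([1, 0, 0])))
# [[2 0 0]
#  [3 0 0]]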
I don't know what you mean by calling scalar a 'single entity'.
A structured array is a convenient way of putting arrays with diverse dtypes into one structure, or of accessing "columns" by convenient names.
But you can't perform math across the fields of such an array.
In [70]: z=np.array([(1, 0, 0), (0, 1, 0)], dtype=vector_dtype)
In [71]: z
Out[71]:
array([(1., 0., 0.), (0., 1., 0.)],
dtype=[('x', '<f8'), ('y', '<f8'), ('z', '<f8')])
In [72]: z.shape
Out[72]: (2,)
In [73]: z.dtype
Out[73]: dtype([('x', '<f8'), ('y', '<f8'), ('z', '<f8')])
In [74]: z['x']
Out[74]: array([1., 0.])
In [75]: 2*z['x'] # math on a single field
Out[75]: array([2., 0.])
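If you do want ordinary math across all fields, one option (my addition; available since NumPy 1.16) is to convert the structured array to a plain 2-d array first with numpy.lib.recfunctions.structured_to_unstructured:
import numpy as np
from numpy.lib.recfunctions import structured_to_unstructured

vector_dtype = np.dtype([('x', np.float64), ('y', np.float64), ('z', np.float64)])
z = np.array([(1, 0, 0), (0, 1, 0)], dtype=vector_dtype)

v = structured_to_unstructured(z)   # plain (2, 3) float64 array
print(2 * v)
# [[2. 0. 0.]
#  [0. 2. 0.]]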
note
There is a np.vectorize function. It takes a function that accepts only (true) scalar arguments, and applies array arguments according to the standard broadcasting rules. So even if your func was implemented with it, you'd still have to massage the arguments as I did in [76] above. Sometimes it's convenient, but it's better to use standard numpy functions and operators where possible - better and much faster.
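One caveat to the scalars-only restriction: np.vectorize also accepts a signature argument, which handles exactly this case, though it still loops in Python. A sketch (my addition):
import numpy as np

def func(scalar, x, vector):
    return scalar * x + vector

# '(),(n),(n)->(n)': per call, scalar is 0-d and x and vector are
# length-n vectors; np.vectorize loops over any leading batch dims.
vfunc = np.vectorize(func, signature='(),(n),(n)->(n)')
print(vfunc(np.array([1, 2]), np.array([1, 0, 0]), np.array([1, 0, 0])))
# [[2 0 0]
#  [3 0 0]]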

Vectorization and matrix multiplication by scalars

I am new to python/numpy.
I need to do the following calculation:
for an array of discrete times t, calculate $e^{At}$ for a $2\times 2$ matrix $A$
What I did:
def calculate(t_, x_0, v_0, omega_0, c):
    # define A
    a_11, a_12, a_21, a_22 = 0, 1, -omega_0^2, -c
    A = np.matrix([[a_11, a_12], [a_21, a_22]])
    print A
    # use vectorization
    temps = np.array(t_)
    A_ = np.array([A for k in range(1, n+1, 1)])
    temps*A_
    x_ = scipy.linalg.expm(temps*A)
    v_ = A*scipy.linalg.expm(temps*A)
    return x_, v_
n=10
omega_0=1
c=1
x_0=1
v_0=1
t_ = [float(5*k*np.pi/n) for k in range (1,n+1,1)]
x_, v_ = calculate(t_,x_0,v_0,omega_0,c)
However, I get this error when multiplying A_ (an array containing n copies of A) and temps (containing the times for which I want to calculate exp(At)):
ValueError: operands could not be broadcast together with shapes (10,) (10,2,2)
As I understand vectorization, each element in A_ would be multiplied by the element at the same index in temps; but I think I don't get it right.
Any help/ comments much appreciated
A pure numpy calculation of t_ is (creates an array instead of a list):
In [254]: t = 5*np.arange(1,n+1)*np.pi/n
In [255]: t
Out[255]:
array([ 1.57079633, 3.14159265, 4.71238898, 6.28318531, 7.85398163,
9.42477796, 10.99557429, 12.56637061, 14.13716694, 15.70796327])
In [256]: a_11,a_12, a_21, a_22=0,1,-omega_0^2,-c
In [257]: a_11
Out[257]: 0
In [258]: A = np.array([[a_11,a_12], [a_21, a_22]])
In [259]: A
Out[259]:
array([[ 0, 1],
[-3, -1]])
(Note that a_21 comes out as -3 rather than -1: in Python ^ is bitwise XOR, not exponentiation, so -omega_0^2 does not compute $-\omega_0^2$; the power operator is **.)
In [260]: t.shape
Out[260]: (10,)
In [261]: A.shape
Out[261]: (2, 2)
In [262]: A_ = np.array([A for k in range (1,n+1,1)])
In [263]: A_.shape
Out[263]: (10, 2, 2)
A_ is a np.ndarray. I made A a np.ndarray as well; yours is a np.matrix, but your A_ will still be a np.ndarray, since np.matrix can only be 2d, whereas A_ is 3d.
So temps * A_ will be array elementwise multiplication, hence the broadcasting error: (10,) with (10,2,2).
To do that elementwise multiplication right you need something like
In [264]: result = t[:,None,None]*A[None,:,:]
In [265]: result.shape
Out[265]: (10, 2, 2)
But if you want matrix multiplication of the (10,) with (10,2,2), then einsum does it easily:
In [266]: result1 = np.einsum('i,ijk', t, A_)
In [267]: result1
Out[267]:
array([[ 0. , 86.39379797],
[-259.18139392, -86.39379797]])
np.dot can't do it because its rule is 'last with 2nd to last'. tensordot can, but I'm more comfortable with einsum.
But that einsum expression makes it obvious (to me) that I can get the same thing from the elementwise *, by summing on the 1st axis:
In [268]: (t[:,None,None]*A[None,:,:]).sum(axis=0)
Out[268]:
array([[ 0. , 86.39379797],
[-259.18139392, -86.39379797]])
Or (t[:,None,None]*A[None,:,:]).cumsum(axis=0) to get a 2x2 for each time.
This is what I would do.
import numpy as np
from scipy.linalg import expm

A = np.array([[1, 2], [3, 4]])
for t in np.linspace(0, 5*np.pi, 20):
    print(expm(t*A))
No attempt at vectorization here. The expm function applies to one matrix at a time, and it surely takes the bulk of computation time. No need to worry about the cost of multiplying A by a scalar.
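If the loop does matter, newer SciPy releases (1.9+, if I recall correctly; check your version) let expm act on a stack of matrices directly, which pairs nicely with the broadcasting shown above. A sketch, with the question's A fixed to use ** instead of ^:
import numpy as np
from scipy.linalg import expm

omega_0, c, n = 1, 1, 10
A = np.array([[0.0, 1.0], [-omega_0**2, -c]])   # ** is power; ^ would be XOR
t = 5 * np.arange(1, n + 1) * np.pi / n

tA = t[:, None, None] * A   # stack of t[k]*A, shape (10, 2, 2)
x = expm(tA)                # one matrix exponential per matrix in the stack
print(x.shape)              # (10, 2, 2)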

Write to a masked array in numpy

Let's say I have an array x and a mask for the array mask. I want to use np.copyto to write to x using mask. Is there a way I can do this? Just trying to use copyto doesn't work, I suppose because the masked x is not writeable.
x = np.array([1,2,3,4])
mask = np.array([False,False,True,True])
np.copyto(x[mask],[30,40])
x
# array([1, 2, 3, 4])
# Should be array([1, 2, 30, 40])
As commented, index assignment works:
In [16]: x[mask]=[30,40]
In [17]: x
Out[17]: array([ 1, 2, 30, 40])
You have to be careful when using x[mask]. That is 'advanced indexing', so it creates a copy, not a view of x. With direct assignment that isn't an issue, but with copyto, x[mask] is passed as an argument to the function.
In [19]: y=x[mask]
In [21]: np.copyto(y,[2,3])
changes y, but not x.
Checking its docs, I see that copyto does accept a where parameter, which can be used as:
In [24]: np.copyto(x,[0,0,31,41],where=mask)
In [25]: x
Out[25]: array([ 1, 2, 31, 41])
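Another option in the same spirit (my addition, not from the original answer): np.place writes a list of values into the True positions of the mask, in order:
import numpy as np

x = np.array([1, 2, 3, 4])
mask = np.array([False, False, True, True])

np.place(x, mask, [30, 40])   # fills the masked slots in order
print(x)                      # [ 1  2 30 40]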

generalized cumulative functions in NumPy/SciPy?

Is there a function in numpy or scipy (or some other library) that generalizes the idea of cumsum and cumprod to arbitrary functions? For example, consider the (theoretical) function
cumf(func, array)
func is a function that accepts two floats, and returns a float. Particular cases
lambda x,y: x+y
and
lambda x,y: x*y
are cumsum and cumprod respectively. For example, if
func = lambda x, prev_x: x**2 * prev_x
and I apply it to:
cumf(func, np.array([1, 2, 3]))
I would like
np.array([1, 4, 9*4])
The ValueError shown in the answer below is still a bug as of Numpy 1.20.1 (with Python 3.9.1).
Luckily a workaround was discovered that uses casting:
https://groups.google.com/forum/#!topic/numpy/JgUltPe2hqw
import numpy as np
uadd = np.frompyfunc(lambda x, y: x + y, 2, 1)
uadd.accumulate([1,2,3], dtype=object).astype(int)
# array([1, 3, 6])
Note that since the custom operation works on an object dtype, it won't benefit from numpy's efficient memory management, so for extremely large arrays the operation may be slower than one that didn't need the cast to object.
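An alternative sketch (my addition): itertools.accumulate in the standard library takes an arbitrary binary function. It is plain Python, so there is no NumPy speed benefit, but it avoids the object-dtype round-trip:
import numpy as np
from itertools import accumulate

# accumulate calls func(accumulated_so_far, next_element)
func = lambda prev, x: x**2 * prev
result = np.fromiter(accumulate([1, 2, 3], func), dtype=float)
print(result)   # [ 1.  4. 36.]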
NumPy's ufuncs have accumulate():
In [22]: np.multiply.accumulate([[1, 2, 3], [4, 5, 6]], axis=1)
Out[22]:
array([[ 1, 2, 6],
[ 4, 20, 120]])
Unfortunately, calling accumulate() on a frompyfunc()'ed Python function fails with a strange error:
In [32]: uadd = np.frompyfunc(lambda x, y: x + y, 2, 1)
In [33]: uadd.accumulate([1, 2, 3])
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
ValueError: could not find a matching type for <lambda> (vectorized).accumulate,
requested type has type code 'l'
This is using NumPy 1.6.1 with Python 2.7.3.
