Numpy function to get shape of added arrays

tl;dr: How do I predict the shape returned by numpy broadcasting across several arrays without having to actually add the arrays?
I have a lot of scripts that make use of numpy (Python) broadcasting rules so that essentially 1D inputs result in a multiple-dimension output. For a basic example, the ideal gas law (pressure = rho * R_d * temperature) might look like
import numpy as np

def rhoIdeal(pressure, temperature):
    rho = np.zeros_like(pressure + temperature)
    rho += pressure / (287.05 * temperature)
    return rho
It's not necessary here, but in more complicated functions it's very useful to initialize the array with the right shape. If pressure and temperature have the same shape, then rho also has that shape. If pressure has shape (n,) and temperature has shape (m,), I can call
rhoIdeal(pressure[:,np.newaxis], temperature[np.newaxis,:])
to get rho with shape (n,m). This lets me make plots with multiple values of temperature without having to loop over rhoIdeal, while still allowing the script to accept arrays of the same shape and compute the result element-by-element.
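For concreteness, here is a minimal sketch of that call pattern (the sample values are made up for illustration):
import numpy as np

pressure = np.linspace(9.0e4, 1.1e5, 4)     # shape (4,)
temperature = np.linspace(250.0, 300.0, 3)  # shape (3,)

rho = rhoIdeal(pressure[:, np.newaxis], temperature[np.newaxis, :])
print(rho.shape)  # (4, 3)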
My question is: Is there a built-in function to return the shape compatible with several inputs? Something that behaves like
def returnShape(list_of_arrays):
    return np.zeros_like(sum(list_of_arrays)).shape
without actually having to sum the arrays? If there's no built-in function, what would a good implementation look like?

You could use np.broadcast. This function returns an object encapsulating the result of broadcasting two or more arrays together. No actual operation (e.g. addition) is performed - the object simply has some of the same attributes that an array produced by means of other operations would have (shape, ndim, etc.).
For example:
x = np.array([1,2,3]) # shape (3,)
y = x.reshape(3,1) # shape (3, 1)
z = np.ones((5,1,1)) # shape (5, 1, 1)
Then you can check what the shape of the array returned by broadcasting x, y and z would be by inspecting the shape attribute:
>>> np.broadcast(x, y, z).shape
(5, 3, 3)
This means that you could implement your function simply as follows:
def returnShape(*args):
    return np.broadcast(*args).shape
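For example, using returnShape as defined above:
import numpy as np

p = np.zeros((4, 1))
t = np.zeros((1, 3))
print(returnShape(p, t))  # (4, 3)

On NumPy 1.20 and later there is also np.broadcast_shapes, which takes the shapes themselves instead of the arrays:
print(np.broadcast_shapes((4, 1), (1, 3)))  # (4, 3)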

Related

Jax vectorization: vmap and/or numpy.vectorize?

What are the differences between jax.numpy.vectorize and jax.vmap?
Here is a small snippet:
import jax
import jax.numpy as jnp
def f(x):
    return jnp.exp(-x) * jnp.sin(x)
gf = jax.grad(f)
x = jnp.arange(0,1,0.1)
jax.vmap(gf)(x)
jnp.vectorize(gf)(x)
Both computations give the same results:
DeviceArray([ 1.        ,  0.80998397,  0.63975394,  0.4888039 ,
              0.35637075,  0.24149445,  0.14307144,  0.05990037,
             -0.00927836, -0.06574923], dtype=float32)
How should I decide which one to use, and is there a difference in terms of performance?
jax.vmap and jax.numpy.vectorize have quite different semantics, and only happen to be similar in the case of a single 1D input as in your example.
The purpose of jax.vmap is to map a function over one or more inputs along a single explicit axis, as specified by the in_axes parameter. On the other hand, jax.numpy.vectorize maps a function over one or more inputs along zero or more implicit axes according to numpy broadcasting rules.
To see the difference, let's pass two 2-dimensional inputs and print the shape within the function:
import jax
import jax.numpy as jnp
def print_shape(x, y):
    print(f"x.shape = {x.shape}")
    print(f"y.shape = {y.shape}")
    return x + y
x = jnp.zeros((20, 10))
y = jnp.zeros((20, 10))
_ = jax.vmap(print_shape)(x, y)
# x.shape = (10,)
# y.shape = (10,)
_ = jnp.vectorize(print_shape)(x, y)
# x.shape = ()
# y.shape = ()
Notice that vmap only maps along the first axis, while vectorize maps along both input axes.
And notice also that the implicit mapping of vectorize means it can be used much more flexibly; for example:
x2 = jnp.arange(10)
y2 = jnp.arange(20).reshape(20, 1)
def add(x, y):
    # vectorize always maps over all axes, such that the function is applied elementwise
    assert x.shape == y.shape == ()
    return x + y
jnp.vectorize(add)(x2, y2).shape
# (20, 10)
vectorize will iterate over all axes of the inputs according to numpy broadcasting rules. On the other hand, vmap cannot handle this by default:
jax.vmap(add)(x2, y2)
# ValueError: vmap got inconsistent sizes for array axes to be mapped:
# arg 0 has shape (10,) and axis 0 is to be mapped
# arg 1 has shape (20, 1) and axis 0 is to be mapped
# so
# arg 0 has an axis to be mapped of size 10
# arg 1 has an axis to be mapped of size 20
To accomplish this same operation with vmap requires more thought, because there are two separate mapped axes, and some of the axes are broadcast. But you can accomplish the same thing this way:
jax.vmap(jax.vmap(add, in_axes=(None, 0)), in_axes=(0, None))(x2, y2[:, 0]).shape
# (20, 10)
This latter nested vmap is essentially what is happening under the hood when you use jax.numpy.vectorize.
As for which to use in any given situation:
if you want to map a function across a single, explicitly specified axis of the inputs, use jax.vmap (see the sketch after this list)
if you want a function's inputs to be mapped across zero or more axes according to numpy's broadcasting rules as applied to the input, use jax.numpy.vectorize.
in situations where the transforms are identical (for example when mapping a function of 1D inputs) lean toward using vmap, because it more directly does what you want to do.
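As a small sketch of the explicit-axis style (the function and values here are made up for illustration):
import jax
import jax.numpy as jnp

def scale(row, s):
    return row * s

mat = jnp.ones((4, 3))
scales = jnp.arange(4.0)

# in_axes names the mapped axis explicitly: axis 0 of mat, axis 0 of scales.
out = jax.vmap(scale, in_axes=(0, 0))(mat, scales)
print(out.shape)  # (4, 3)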

How to write a function that DIRECTLY outputs a 2D Numpy array from two 1D array?

I created two numpy 1D arrays
x1 = np.linspace(0, 1, 5)
x2 = np.linspace(0, 10, 5)
I wrote a function
def myfoo(x1, x2):
    return x1**2 + x1*x2 + x2**2
To get a 2D numpy array, I use the following code:
y = np.empty((x1.size, x2.size))
for a in range(0, x2.size):
    y[a] = myfoo(x1, x2[a])
I would like to know if it is possible to write a function that outputs this 2D array DIRECTLY. I simply wonder whether I can write y = myfoo2(x1, x2) instead of the three lines of code above.
I know I can move those lines into the function, as suggested in the comments. But I wonder whether NumPy or Python has "something" (a function, an operator, ...) like the mathematical dyadic product of two vectors (i.e. an operation that takes two 1D vectors of sizes m and n and gives a matrix of size m x n).
myfoo(x1[:, None], x2)
Broadcasting x1[:, None] (shape (5, 1)) against x2 (shape (5,)) produces a (5, 5) array.
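Put together as a runnable sketch:
import numpy as np

x1 = np.linspace(0, 1, 5)
x2 = np.linspace(0, 10, 5)

def myfoo(x1, x2):
    return x1**2 + x1*x2 + x2**2

y = myfoo(x1[:, None], x2)  # x1[:, None] has shape (5, 1)
print(y.shape)              # (5, 5)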

Does np.dot automatically transpose vectors?

I am trying to calculate the first and second order moments for a portfolio of stocks (i.e. expected return and standard deviation).
expected_returns_annual
Out[54]:
ticker
adj_close  CNP     0.091859
           F      -0.007358
           GE      0.095399
           TSLA    0.204873
           WMT    -0.000943
dtype: float64
type(expected_returns_annual)
Out[55]: pandas.core.series.Series
weights = np.random.random(num_assets)
weights /= np.sum(weights)
returns = np.dot(expected_returns_annual, weights)
So normally the expected return is calculated by
(x1,...,xn' * (R1,...,Rn)
where x1,...,xn are weights with the constraint that all the weights sum to 1, and ' means the vector is transposed.
Now I am wondering a bit about the numpy dot function, because
returns = np.dot(expected_returns_annual, weights)
and
returns = np.dot(expected_returns_annual, weights.T)
give the same results.
I tested also the shape of weights.T and weights.
weights.shape
Out[58]: (5,)
weights.T.shape
Out[59]: (5,)
The shape of weights.T should be (,5) and not (5,), but numpy displays them as equal (I also tried np.transpose, with the same result).
Does anybody know why numpy behaves this way? My impression is that np.dot automatically shapes the vectors the right way so that the vector product works. Is that correct?
The semantics of np.dot are not great
As Dominique Paul points out, np.dot has very heterogeneous behavior depending on the shapes of the inputs. Adding to the confusion, as the OP points out in his question, given that weights is a 1D array, np.array_equal(weights, weights.T) is True (array_equal tests for equality of both value and shape).
Recommendation: use np.matmul or the equivalent @ instead
If you are someone just starting out with Numpy, my advice to you would be to ditch np.dot completely. Don't use it in your code at all. Instead, use np.matmul, or the equivalent operator @. The behavior of @ is more predictable than that of np.dot, while still being convenient to use. For example, you would get the same dot product for the two 1D arrays you have in your code like so:
returns = expected_returns_annual @ weights
You can prove to yourself that this gives the same answer as np.dot with this assert:
assert expected_returns_annual @ weights == expected_returns_annual.dot(weights)
Conceptually, @ handles this case by promoting the two 1D arrays to appropriate 2D arrays (though the implementation doesn't necessarily do this). For example, if you have x with shape (N,) and y with shape (M,), if you do x @ y the shapes will be promoted such that:
x.shape == (1, N)
y.shape == (M, 1)
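A quick check of that mental model (noting that for 1D @ 1D the two lengths must match):
import numpy as np

x = np.arange(3)  # shape (3,)
y = np.arange(3)  # shape (3,)

# 1D @ 1D yields a scalar; the explicit 2D promotion yields a (1, 1)
# array holding the same value.
assert x @ y == (x[np.newaxis, :] @ y[:, np.newaxis])[0, 0]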
Complete behavior of matmul/@
Here's what the docs have to say about matmul/@ and the shapes of inputs/outputs:
If both arguments are 2-D they are multiplied like conventional matrices.
If either argument is N-D, N > 2, it is treated as a stack of matrices residing in the last two indexes and broadcast accordingly.
If the first argument is 1-D, it is promoted to a matrix by prepending a 1 to its dimensions. After matrix multiplication the prepended 1 is removed.
If the second argument is 1-D, it is promoted to a matrix by appending a 1 to its dimensions. After matrix multiplication the appended 1 is removed.
Notes: the arguments for using @ over dot
As hpaulj points out in the comments, np.array_equal(x.dot(y), x @ y) for all x and y that are 1D or 2D arrays. So why do I (and why should you) prefer @? I think the best argument for using @ is that it helps to improve your code in small but significant ways:
@ is explicitly a matrix multiplication operator. x @ y will raise an error if y is a scalar, whereas dot will assume you actually just wanted elementwise multiplication. This can potentially result in a hard-to-localize bug in which dot silently returns a garbage result (I've personally run into that one). Thus, @ allows you to be explicit about your own intent for the behavior of a line of code (see the sketch after this list).
Because @ is an operator, it has some nice short syntax for coercing various sequence types into arrays, without having to explicitly cast them. For example, [0,1,2] @ np.arange(3) is valid syntax.
To be fair, while [0,1,2].dot(arr) is obviously not valid, np.dot([0,1,2], arr) is valid (though more verbose than using @).
When you do need to extend your code to deal with many matrix multiplications instead of just one, the ND cases for @ are a conceptually straightforward generalization/vectorization of the lower-D cases.
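To illustrate the first point, a minimal sketch of the scalar case:
import numpy as np

x = np.arange(3)

print(np.dot(x, 2))  # [0 2 4] -- dot silently falls back to elementwise
                     # multiplication by the scalar
try:
    x @ 2            # matmul rejects scalar operands
except (TypeError, ValueError) as e:
    print(e)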
I had the same question some time ago. It seems that when one of your arrays is one-dimensional, numpy figures out automatically what you are trying to do.
The documentation for the dot function has a more specific explanation of the logic applied:
If both a and b are 1-D arrays, it is inner product of vectors
(without complex conjugation).
If both a and b are 2-D arrays, it is matrix multiplication, but using
matmul or a @ b is preferred.
If either a or b is 0-D (scalar), it is equivalent to multiply and
using numpy.multiply(a, b) or a * b is preferred.
If a is an N-D array and b is a 1-D array, it is a sum product over
the last axis of a and b.
If a is an N-D array and b is an M-D array (where M>=2), it is a sum
product over the last axis of a and the second-to-last axis of b:
In NumPy, a transpose .T reverses the order of dimensions, which means that it doesn't do anything to your one-dimensional array weights.
This is a common source of confusion for people coming from Matlab, in which one-dimensional arrays do not exist. See Transposing a NumPy Array for some earlier discussion of this.
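A two-line check of that (a minimal sketch):
import numpy as np

w = np.random.random(5)
print(w.shape)    # (5,)
print(w.T.shape)  # (5,) -- .T on a 1D array is a no-op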
np.dot(x,y) has complicated behavior on higher-dimensional arrays, but its behavior when it's fed two one-dimensional arrays is very simple: it takes the inner product. If we wanted to get the equivalent result as a matrix product of a row and column instead, we'd have to write something like
np.asscalar(x @ y[:, np.newaxis])
adding a trailing dimension to y to turn it into a "column", multiplying, and then converting our one-element array back into a scalar. But np.dot(x,y) is much faster and more efficient, so we just use that.
Edit: actually, this was dumb on my part. You can, of course, just write matrix multiplication x @ y to get equivalent behavior to np.dot for one-dimensional arrays, as tel's excellent answer points out.
The shape of weights.T should be (,5) and not (5,),
suggests some confusion over the shape attribute. shape is an ordinary Python tuple, i.e. just a set of numbers, one for each dimension of the array. That's analogous to the size of a MATLAB matrix.
(5,) is just the way of displaying a 1-element tuple. The , is required because of Python's older use of () as simple grouping.
In [22]: tuple([5])
Out[22]: (5,)
Thus the , in (5,) does not have a special numpy meaning, and
In [23]: (,5)
File "<ipython-input-23-08574acbf5a7>", line 1
(,5)
^
SyntaxError: invalid syntax
A key difference between numpy and MATLAB is that arrays can have any number of dimensions (up to 32). MATLAB has a lower bound of 2.
The result is that a 5-element numpy array can have shapes (5,), (1,5), (5,1), (1,5,1), etc.
The handling of a 1d weight array in your example is best explained by the np.dot documentation. Describing it as an inner product seems clear enough to me. But I'm also happy with the
sum product over the last axis of a and the second-to-last axis of b
description, adjusted for the case where b has only one axis.
(5,) with (5,n) => (n,) # 5 is the common dimension
(n,5) with (5,) => (n,)
(n,5) with (5,1) => (n,1)
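Those three cases, verified in code (a quick sketch):
import numpy as np

a = np.ones(5)       # (5,)
B = np.ones((5, 3))  # (5, 3)
C = np.ones((4, 5))  # (4, 5)

print(np.dot(a, B).shape)           # (3,)
print(np.dot(C, a).shape)           # (4,)
print(np.dot(C, a[:, None]).shape)  # (4, 1)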
In:
(x1,...,xn' * (R1,...,Rn)
are you missing a )?
(x1,...,xn)' * (R1,...,Rn)
And the * means matrix product? Not elementwise product (.* in MATLAB)? (R1,...,Rn) would have size (n,1), (x1,...,xn)' size (1,n), and the product size (1,1).
By the way, that raises another difference. MATLAB expands dimensions to the right, (n,1,1,...); numpy expands them to the left, (1,1,n) (if needed by broadcasting). The initial dimensions are the outermost ones. That's not as critical a difference as the lower bound of 2 dimensions, but it shouldn't be ignored.
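A minimal sketch of that left-side expansion:
import numpy as np

a = np.ones(3)       # shape (3,)
b = np.ones((4, 3))  # shape (4, 3)

# Broadcasting pads a's shape on the left: (3,) -> (1, 3) -> (4, 3)
print((a + b).shape)  # (4, 3)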

Force 2-dimensionality in vector

When I do p = np.zeros((3,1)) I get an array with shape (3, 1).
Sometimes when I am working with NumPy arrays that are nx1, however, I get that their shape is (3,).
How can I make these (3,) shaped arrays into (3,1)?
i.e. here is a minimum runnable program:
>>> a = np.random.randn(3)
>>> a.shape
(3,)
I want it to be (3,1). I know I could just call with arguments 3,1 but this is just an example, sometimes I can't control the generative process but only manipulate the output.
Just check the shape and add another axis if needed:
if len(a.shape) == 1:
    a = a[..., np.newaxis]
# or this, if you need more generality:
a = a.reshape(a.shape + (1,) * (desired_dimensions - len(a.shape)))
There's an np.atleast_2d function, but it would produce a 1-by-3 array instead of 3-by-1.
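A few equivalent ways to get the (3, 1) shape, including transposing the atleast_2d result (a quick sketch):
import numpy as np

a = np.random.randn(3)
print(a[..., np.newaxis].shape)  # (3, 1)
print(a.reshape(-1, 1).shape)    # (3, 1)
print(np.atleast_2d(a).T.shape)  # (3, 1) -- transpose the 1-by-3 result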

How to assign a 1D numpy array to 2D numpy array?

Consider the following simple example:
X = numpy.zeros([10, 4]) # 2D array
x = numpy.arange(0,10) # 1D array
X[:,0] = x # WORKS
X[:,0:1] = x # returns ERROR:
# ValueError: could not broadcast input array from shape (10) into shape (10,1)
X[:,0:1] = (x.reshape(-1, 1)) # WORKS
Can someone explain why numpy has vectors of shape (N,) rather than (N,1) ?
What is the best way to do the casting from 1D array into 2D array?
Why do I need this?
Because I have code that inserts a result x into a 2D array X, and the size of x changes from time to time, so I use X[:, idx1:idx2] = x, which works if x is 2D too but not if x is 1D.
Do you really need to be able to handle both 1D and 2D inputs with the same function? If you know the input is going to be 1D, use
X[:, i] = x
If you know the input is going to be 2D, use
X[:, start:end] = x
If you don't know the input dimensions, I recommend switching between one line and the other with an if, though there might be some indexing trick I'm not aware of that would handle both identically (one reshape-based workaround is sketched below).
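One way to avoid the if is to normalize x to 2D before assigning. assign_cols here is a hypothetical helper, a sketch rather than a recommendation:
import numpy as np

def assign_cols(X, start, x):
    x = np.asarray(x).reshape(X.shape[0], -1)  # a 1D (N,) input becomes (N, 1)
    X[:, start:start + x.shape[1]] = x

X = np.zeros((10, 4))
assign_cols(X, 0, np.arange(10))     # 1D input fills one column
assign_cols(X, 1, np.ones((10, 2)))  # 2D input fills two columns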
Your x has shape (N,) rather than shape (N, 1) (or (1, N)) because numpy isn't built for just matrix math. ndarrays are n-dimensional; they support efficient, consistent vectorized operations for any non-negative number of dimensions (including 0). While this may occasionally make matrix operations a bit less concise (especially in the case of dot for matrix multiplication), it produces more generally applicable code for when your data is naturally 1-dimensional or 3-, 4-, or n-dimensional.
I think you have the answer already included in your question. Numpy allows arrays to be of any dimensionality (while afaik MATLAB prefers two dimensions where possible), so you need to be careful with this (and always distinguish between (n,) and (n,1)). By giving one number as one of the indices (like 0 in the 3rd line), you reduce the dimensionality by one. By giving a range as one of the indices (like 0:1 in the 4th line), you don't reduce the dimensionality.
Line 3 makes perfect sense to me and I would assign to the 2-D array this way.
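The distinction in one glance (a quick sketch):
import numpy as np

X = np.zeros((10, 4))
print(X[:, 0].shape)    # (10,)  -- an integer index drops the axis
print(X[:, 0:1].shape)  # (10, 1) -- a slice keeps the axis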
Here are two tricks that make the code a little shorter.
X = numpy.zeros([10, 4])  # 2D array
x = numpy.arange(0, 10)   # 1D array
X.T[:1, :] = x            # assign through a transposed view: row 0 of X.T is column 0 of X
X[:, 2:3] = x[:, None]    # add a trailing axis so x broadcasts as a (10, 1) column
