I am trying to create a new column using an np.where condition on other columns of the DataFrame.
My code
df5['RiskSubType'] = np.where(
    new_df['Snow_Risk'] == 1,
    ' Heavy Snow forecasted at ' + df5.LOCATION.mask(new_df.LOCATION == '', df5.LOCATION_CITY),
    np.where(
        df5['Wind_Risk'] == 1,
        ' Heavy Wind forecasted at ' + df5.LOCATION.mask(df5.LOCATION == '', df5.LOCATION_CITY),
        np.where(
            df5['Precip_Risk'] == 1,
            ' Heavy Rain forecasted at ' + df5.LOCATION.mask(df5.LOCATION == '', df5.LOCATION_CITY),
            "No Risk Identified")))
Error
ValueError: operands could not be broadcast together with shapes
How can I fix this, or is there an alternative way to do it?
First of all, your code is really hard to read; you should think about simplifying it.
Your problem occurs because you are trying to mix strings and arrays of different shapes inside the np.where calls. The documentation says:
numpy.where(condition[, x, y])
Return elements chosen from x or y depending on condition.
Parameters:
condition : array_like, bool
Where True, yield x, otherwise yield y.
x, y : array_like
Values from which to choose. x, y and condition need to be broadcastable to some shape.
Returns:
out : ndarray
An array with elements from x where condition is True, and elements from y elsewhere.
As you can see, x and y need to be broadcastable to some shape. Looking at the documentation on broadcasting:
6.4. Broadcasting
Another powerful feature of Numpy is broadcasting. Broadcasting takes
place when you perform operations between arrays of different shapes.
For instance
>>> a = np.array([
...     [0, 1],
...     [2, 3],
...     [4, 5],
... ])
>>> b = np.array([10, 100])
>>> a * b
array([[  0, 100],
       [ 20, 300],
       [ 40, 500]])
The shapes of a and b don’t match. In order to proceed, Numpy will
stretch b into a second dimension, as if it were stacked three times
upon itself. The operation then takes place element-wise.
One of the rules of broadcasting is that only dimensions of size 1 can
be stretched (if an array only has one dimension, all other dimensions
are considered for broadcasting purposes to have size 1). In the
example above b is 1D, and has shape (2,). For broadcasting with a,
which has two dimensions, Numpy adds another dimension of size 1 to b.
b now has shape (1, 2). This new dimension can now be stretched three
times so that b’s shape matches a’s shape of (3, 2).
The other rule is that dimensions are compared from the last to the
first. Any dimensions that do not match must be stretched to become
equally sized. However, according to the previous rule, only
dimensions of size 1 can stretch. This means that some shapes cannot
broadcast and Numpy will give you an error:
>>> c = np.array([
...     [0, 1, 2],
...     [3, 4, 5],
... ])
>>> b = np.array([10, 100])
>>> c * b
ValueError: operands could not be broadcast together with shapes (2,3) (2,)
What happens here is that Numpy, again, adds a dimension to b, making
it of shape (1, 2). The sizes of the last dimensions of b and c (2 and
3, respectively) are then compared and found to differ. Since none of
these dimensions is of size 1 (therefore, unstretchable) Numpy gives
up and produces an error.
The solution to multiplying c and b above is to specifically tell
Numpy that it must add that extra dimension as the second dimension of
b. This is done by using None to index that second dimension. The
shape of b then becomes (2, 1), which is compatible for broadcasting
with c:
>>> c = np.array([
...     [0, 1, 2],
...     [3, 4, 5],
... ])
>>> b = np.array([10, 100])
>>> c * b[:, None]
array([[  0,  10,  20],
       [300, 400, 500]])
A good visual description of these rules, together with some advanced
broadcasting applications can be found in this tutorial of Numpy broadcasting rules.
So the problem is that you are trying to broadcast an (n,) array (the first where) against a scalar (the first string), against an (m,) array (the second where), against another scalar (the second string), against a (k,) array (the third where), and so on. Since you pull columns from both new_df and df5, n != m != k can and will be the case, the dimensions to be stretched do not match, and the broadcasting fails.
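Here is a minimal sketch that reproduces the error, using made-up arrays whose lengths stand in for new_df and df5 having different numbers of rows:
import numpy as np

cond = np.array([True, False, True])   # length 3, like a column of new_df
vals = np.array(['a', 'b'])            # length 2, like a column of df5
np.where(cond, vals, 'No Risk Identified')
# ValueError: operands could not be broadcast together with shapes (3,) (2,) ()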
Please provide a small sample of your data, something like this:
d = {'LOCATION': ['?', '?'],
     'LOCATION_CITY': ['?', '?'],
     'Wind_Risk': [1, 0],
     'Precip_Risk': [1, 0],
     'Snow_Risk': [1, 0]}
df = pd.DataFrame(data=d)
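Once everything lives in one DataFrame (so all lengths match), a cleaner pattern than nesting np.where is np.select. A minimal sketch against a sample frame like the one above; the column values here are made up:
import numpy as np
import pandas as pd

d = {'LOCATION': ['Plant A', ''],
     'LOCATION_CITY': ['Boston', 'Chicago'],
     'Wind_Risk': [1, 0],
     'Precip_Risk': [1, 0],
     'Snow_Risk': [0, 1]}
df = pd.DataFrame(data=d)

# Fall back to the city when LOCATION is empty.
place = df.LOCATION.mask(df.LOCATION == '', df.LOCATION_CITY)
conditions = [df.Snow_Risk == 1, df.Wind_Risk == 1, df.Precip_Risk == 1]
choices = [' Heavy Snow forecasted at ' + place,
           ' Heavy Wind forecasted at ' + place,
           ' Heavy Rain forecasted at ' + place]
df['RiskSubType'] = np.select(conditions, choices, default='No Risk Identified')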
Related
I am trying to slice a tensor using an indices tensor. For this purpose I am trying to use tf.gather.
However, I am having a hard time understanding the documentation and can't get it to work as I would expect it to:
I have two tensors. An activations tensor with a shape of [1,240,4] and an ids tensor with the shape [1,1,120]. I want to slice the second dimension of the activations tensor with the indices provided in the third dimension of the ids tensor:
downsampled_activations = tf.gather(activations, ids, axis=1)
I have given it the axis=1 option since that is the axis in the activations tensor I want to slice.
However, this does not render the expected result and only gives me the following error:
tensorflow.python.framework.errors_impl.InvalidArgumentError: indices[0,0,1] = 1 is not in [0, 1)
I have tried various combinations of the axis and batch_dims options, but to no avail so far, and the documentation doesn't really help me on my path. If anybody could explain the parameters in more detail, or with the example above, that would be very helpful!
Edit:
The IDs are precomputed before runtime and come in through an input pipeline as such:
features = tf.io.parse_single_example(
    serialized_example,
    features={'featureIDs': tf.io.FixedLenFeature([], tf.string)})
They are then reshaped into the previous format:
feature_ids_raw = tf.decode_raw(features['featureIDs'], tf.int32)
feature_ids_shape = tf.stack([batch_size, (num_neighbours * 4)])
feature_ids = tf.reshape(feature_ids_raw, feature_ids_shape)
feature_ids = tf.expand_dims(feature_ids, 0)
Afterwards they have the previously mentioned shape (batch_size = 1 and num_neighbours = 30 -> [1,1,120]) and I want to use them to slice the activations tensor.
Edit2: I would like the output to be [1,120,4]. (So I would like to gather the entries along the second dimension of the activations tensor in accordance with the IDs stored in my ids tensor.)
You can use:
downsampled_activations = tf.gather(activations, tf.squeeze(ids), axis=1)
downsampled_activations.shape # [1, 120, 4]
In most cases, tf.gather needs 1-D indices, and that is true in your case: instead of 3-D indices of shape (1, 1, 120), a 1-D tensor of shape (120,) is sufficient. tf.gather will look at the given axis (= 1) and return the elements at each index provided by the indices tensor.
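A scaled-down, runnable sketch of the same fix (the shapes here are made-up, smaller stand-ins for [1,240,4] and [1,1,120]):
import tensorflow as tf

activations = tf.reshape(tf.range(24), (1, 6, 4))  # stand-in for [1, 240, 4]
ids = tf.constant([[[0, 2, 5]]])                   # stand-in for [1, 1, 120]
out = tf.gather(activations, tf.squeeze(ids), axis=1)
print(out.shape)                                   # (1, 3, 4)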
tf.gather: "Gather slices from params axis axis according to indices."
Granted, the documentation is not the most expressive, and the emphasis should be placed on slices (you index slices from the axis, not elements, which is what I suppose you mistakenly took it for).
Let's take a much smaller example:
activations_small = tf.convert_to_tensor([[[1, 2, 3, 4], [11, 22, 33, 44]]])
print(activations_small.shape) # [1, 2, 4]
Let's picture this tensor as a (1, 2, 4) box: along axis 1 there are two slices, [1, 2, 3, 4] and [11, 22, 33, 44], with 1 and 11 on the front face and 2, 3, 4 and 22, 33, 44 stacked behind them along the last axis.
tf.gather(activations_small, [0, 0], axis=1) will return
<tf.Tensor: shape=(1, 2, 4), dtype=int32, numpy=
array([[[1, 2, 3, 4],
        [1, 2, 3, 4]]], dtype=int32)>
What tf.gather did was look from axis 1 and pick up index 0 (twice, i.e. [0, 0]). If you were to run tf.gather(activations_small, [0, 0, 0, 0, 0], axis=1).shape, you'd get TensorShape([1, 5, 4]).
Your Error
Now let's try to trigger the error that you're getting.
tf.gather(activations_small, [0, 2], axis=1)
InvalidArgumentError: indices[1] = 2 is not in [0, 2) [Op:GatherV2]
What happened here is that, looking from the axis 1 perspective, tf.gather finds no item (column, if you will) with index = 2.
I guess this is what the documentation is hinting at by
param:<indices> The index Tensor. Must be one of the following types: int32, int64. Must be in range [0, params.shape[axis]).
Your (potential) solution
Judging from the dimensions of your indices and of the expected result in your question, I am not sure the above was very obvious to you.
tf.gather(activations, indices=[0, 1, 2, 3], axis=2) or anything with indices within the range of indices in [0, activations.shape[2]) i.e. [0, 4) would work. Anything else would give you the error that you're getting.
There's a verbatim answer below in case that's your expected result.
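And since the question also asks about batch_dims: it selects different indices per batch entry, which is not what is needed here, but a tiny sketch of its behavior (values made up):
import tensorflow as tf

params = tf.reshape(tf.range(8), (2, 4))   # [[0, 1, 2, 3], [4, 5, 6, 7]]
indices = tf.constant([[0, 3], [2, 1]])    # one index list per batch entry
print(tf.gather(params, indices, batch_dims=1))
# [[0 3]
#  [6 5]]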
Related: Transposing a 1D NumPy array
Let's consider a as a 1D row/horizontal array:
import numpy as np
N = 10
a = np.arange(N) # array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
a.shape # (10,)
Now I want b to be a 1D column/vertical array, the transpose of a:
b = a.transpose() # array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
b.shape # (10,)
but the .transpose() method returns an identical ndarray with the exact same shape!
What I expected to see was
np.array([[0], [1], [2], [3], [4], [5], [6], [7], [8], [9]])
which can be achieved by
c = a.reshape(a.shape[0], 1) # or c = a; c.shape = (c.shape[0], 1)
c.shape # (10, 1)
and to my surprise, it has a shape of (10, 1) instead of (1, 10).
In Octave/Scilab I could do:
N = 10
b = 0:(N-1)
a = b'
size(b) % ans = 1 10
size(a) % ans = 10 1
I understand that numpy ndarrays are not matrices (as discussed here), but the behavior of numpy's transpose function just doesn't make sense to me! I would appreciate it if you could help me understand how this behavior makes sense and what I am missing here.
P.S. So what I have understood so far is that b = a.transpose() is the equivalent of b = a; b.shape = b.shape[::-1] which if you had a "2D array" of (N, 1) would return a (1, N) shaped array, as you would expect from a transpose operator. However, numpy seems to treat the "1D array" of (N,) as a 0D scalar. I think they should have named this method something else, as this is very misleading/confusing IMHO.
To understand the numpy array better, you should take a look at this review paper: The NumPy array: a structure for efficient numerical computation
In short, numpy ndarrays have this attribute called the stride, which is
the number of bytes to skip in memory to proceed to the next element.
For a (10, 10) array of bytes, for example, the strides may be (10,
1), in other words: proceed one byte to get to the next column and ten
bytes to locate the next row.
For your ndarray a, a.strides == (8,), which shows that it is only 1-dimensional, and that to get to the next element along this single dimension, you need to advance 8 bytes in memory (each int is 64 bits).
Strides are useful for representing transposes:
By modifying strides, for example, an array can be transposed or
reshaped at zero cost (no memory needs to be copied).
So if there were a 2-dimensional ndarray, say b = np.ones((3, 5)) for example, then b.strides == (40, 8), while b.transpose().strides == (8, 40). So, as you see, a transposed 2D ndarray is simply the exact same array whose strides have been reordered. And since your 1D ndarray has only one dimension, swapping the values of its strides (i.e. taking its transpose) doesn't do anything.
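A quick sketch to see this for yourself (the exact byte counts assume the default 64-bit dtypes):
import numpy as np

a = np.arange(10)
print(a.strides)              # (8,)  -> one axis, nothing to swap
b = np.ones((3, 5))
print(b.strides)              # (40, 8)
print(b.transpose().strides)  # (8, 40) -> same memory, strides reordered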
As you already mentioned, numpy arrays are not matrices. The definition of the transpose function is as follows:
Permute the dimensions of an array.
Which means that numpy's transpose method permutes the axes of an array. As a 1D array has only one axis, there is no other axis to swap it with, so you need to add a dimension before transpose has any effect. This behavior also keeps transpose consistent with higher-dimensional (3D, 4D, ...) arrays.
There is a clean way to achieve what you want:
N = 10
a = np.arange(N)
a[:, np.newaxis]   # shape (10, 1)
I'm trying to set up back propagation for my neural network using numpy, but for some reason, when I set up the gradient descent equation for the matrix that holds my output weights, two of the matrices in the equation, with shapes (2,5) and (5,1), are not broadcasting together. Am I doing this wrong?
I've tried to dissect the equation into different parts to see if anything else might be causing this, but so far I've pinpointed it down to specifically the entire matrix in the numerator and the entire matrix in the denominator (the gradient descent equation is a fraction). I've also thought that it might be happening between the original output weights and the gradient descent equation, but that is also false, because the matrix for the output weights is (5,2), not (2,5). I've also tried functions other than numpy.divide, like using numpy.dot to multiply the first equation by the second to the power of -1.
dissected code
self.outputWeights = self.outputWeights - l * (
    -numpy.divide(
        # numerator
        (2 * (numpy.dot(y.reshape(self.outputs, 1),
                        (1 + numpy.power(e, -n - b))
                        ).reshape(self.neurons, self.outputs) - w)
         ).reshape(self.outputs, self.neurons),
        # denominator
        (numpy.power(1 + numpy.power(e, -n - b), 2)).reshape(self.neurons, 1)))
actual code
n = self.HIDDEN[self.layers]
b = self.bias[self.layers]
w = self.outputWeights
self.outputWeights = self.outputWeights - l * ( -numpy.divide((2 * (numpy.dot(y.reshape(self.outputs, 1), (1+numpy.power(e, -n-b))).reshape(self.neurons, self.outputs)-w)).reshape(self.outputs, self.neurons), (numpy.power(1+ numpy.power(e, -n-b), 2)).reshape(self.neurons, 1)))
I expected that, because the columns of the first matrix and the rows of the second matrix are the same size, there wouldn't be a problem.
With a matrix product, dot, the rule is that the last dim of A pairs with the 2nd-to-last dim of B:
In [136]: x=np.arange(10).reshape(5,2); y=np.arange(2)[:,None]
In [137]: x.shape, y.shape
Out[137]: ((5, 2), (2, 1))
In [138]: x.dot(y)
Out[138]:
array([[1],
       [3],
       [5],
       [7],
       [9]])
In [139]: _.shape
Out[139]: (5, 1)
The inner 2's match, and the result is (5,1).
But with elementwise operations, such as * (multiply), divide, and sum, those dimensions don't work:
In [140]: x*y
---------------------------------------------------------------------------
ValueError: operands could not be broadcast together with shapes (5,2) (2,1)
A transpose of y works:
In [141]: x*y.T
Out[141]:
array([[0, 1],
       [0, 3],
       [0, 5],
       [0, 7],
       [0, 9]])
That's because y.T has shape (1,2). By broadcasting rules that can pair with (5,2) to produce a (5,2) array. The size 1 dimension can be expanded to match the 5 of x.
I am working with a deep learning model in which I am trying to concatenate a label with dimensions (1,2) with a numpy array of shape (25,25). I'm not really sure if it is possible to get a dimension of (627,0); however, the model summary says that is the input shape it expects.
I've tried to concatenate them, but I get the error "all the input array dimensions except for the concatenation axis must match exactly", as expected.
x = np.concatenate((X[1], to_categorical(Y_train[1])))
where X is (25,25) and Y_train is (1,0), making to_categorical(Y_train[1]) equal to (2,1).
Is there a way to get this (627, 0) dimension with these dimensions?
@Psidom has a great answer to this:
Let's say you have a 1-d and a 2-d array
You can use numpy.column_stack:
np.column_stack((array_1, array_2))
Which converts the 1-d array to 2-d implicitly, and thus equivalent to np.concatenate((array_1, array_2[:,None]), axis=1).
a = np.arange(6).reshape(2,3)
b = np.arange(2)
a
#array([[0, 1, 2],
# [3, 4, 5]])
b
#array([0, 1])
np.column_stack((a, b))
#array([[0, 1, 2, 0],
# [3, 4, 5, 1]])
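Applied to the shapes in the question itself, flattening the (25, 25) array first is likely what's needed, since 25 * 25 + 2 = 627. A hedged sketch, with a plain one-hot array standing in for keras' to_categorical:
import numpy as np

image = np.zeros((25, 25))     # stand-in for X[1]
label = np.array([0.0, 1.0])   # stand-in for to_categorical(Y_train[1])
x = np.concatenate((image.ravel(), label))
print(x.shape)                 # (627,)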
I would like a numpy-sh way of vectorizing the calculation of eigenvalues, such that I can feed it a matrix of matrices and it would return a matrix of the respective eigenvalues.
For example, in the code below, B is the block 6x6 matrix composed of 4 copies of the 3x3 matrix A.
C is what I would like to see as output, i.e. an array of dimension (2,2,3) (because A has 3 eigenvalues).
This is of course a very simplified example, in the general case the matrices A can have any size (although they are still square), and the matrix B is not necessarily formed of copies of A, but different A1, A2, etc (all of same size but containing different elements).
import numpy as np
A = np.array([[0, 1, 0],
              [0, 2, 0],
              [0, 0, 3]])
B = np.bmat([[A, A], [A,A]])
C = np.array([[np.linalg.eigvals(B[0:3,0:3]), np.linalg.eigvals(B[0:3,3:6])],
              [np.linalg.eigvals(B[3:6,0:3]), np.linalg.eigvals(B[3:6,3:6])]])
Edit: if you're using a version of numpy >= 1.8.0, then np.linalg.eigvals operates over the last two dimensions of whatever array you hand it, so if you reshape your input to an (n_subarrays, nrows, ncols) array you'll only have to call eigvals once:
import numpy as np
A = np.array([[0, 1, 0],
              [0, 2, 0],
              [0, 0, 3]])
# the input needs to be an array, since matrices can only be 2D.
B = np.repeat(A[np.newaxis,...], 4, 0)
# for arbitrary input arrays you could do something like:
# B = np.vstack([a[np.newaxis,...] for a in input_arrays])
# but for this to work it will be necessary for each element in
# 'input_arrays' to have the same shape
# eigvals will operate over the last two dimensions of the array and return
# a (4, 3) array of eigenvalues
C = np.linalg.eigvals(B)
# reshape this output so that it matches your original example
C.shape = (2, 2, 3)
If your input arrays don't all have the same dimensions, e.g. input_arrays[0].shape == (2, 2), input_arrays[1].shape == (3, 3) etc. then you could only vectorize this calculation across subsets with matching dimensions.
If you're using an older version of numpy then unfortunately I don't think there's any way to vectorize the calculation of the eigenvalues over multiple input arrays - you'll just have to loop over your inputs in Python instead.
You could just do something like this
C = np.array([[np.linalg.eigvals(B[i:i+3, j:j+3])
               for i in range(0, B.shape[0], 3)]
              for j in range(0, B.shape[1], 3)])
Perhaps a nicer approach is to use the block_view function from https://stackoverflow.com/a/5078155/1352250:
B_blocks = block_view(B)
C = np.array([[np.linalg.eigvals(m) for m in v] for v in B_blocks])
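For reference, here is a minimal sketch of such a block_view, built on numpy's stride tricks; it assumes the array's shape is an exact multiple of the block shape (and pass a plain ndarray, e.g. np.asarray(B), since np.bmat returns a matrix):
import numpy as np
from numpy.lib.stride_tricks import as_strided

def block_view(A, block=(3, 3)):
    # Zero-copy (n_block_rows, n_block_cols, 3, 3) view of A.
    shape = (A.shape[0] // block[0], A.shape[1] // block[1]) + block
    strides = (block[0] * A.strides[0], block[1] * A.strides[1]) + A.strides
    return as_strided(A, shape=shape, strides=strides)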
Update
As ali_m points out, this method is a form of syntactic sugar that will not reduce the overhead incurred from calling eigvals a large number of times. While this overhead should be small if each matrix it is applied to is large-ish, for the 6x6 matrices that the OP is interested in, it is not trivial (see the comments below; according to ali_m, there might be a factor of three difference between the version I give above, and the version he posted that uses Numpy >= 1.8.0).