Numpy where not giving expected shape - python

The problem is fairly simple. Given a 500x500 grayscale image, I want to return a color image with colors based on a threshold.
So I'm thinking:
img = whatever  # 2D array of floats representing the grayscale image
threshold = 0.5
color1 = [1.0, 0.0, 0.0]
color2 = [0.0, 0.0, 1.0]
newImg = np.where(img > threshold, color1, color2)
Yet I get the infamous:
"ValueError: operands could not be broadcast together with shapes (500,500) (3,) (3,)"
Huh? I was really expecting that to give an array shaped (500,500,3). Why didn't it combine them??

You're misunderstanding how numpy.where works. It looks like you might be thinking that for True cells of img>threshold, where picks the entirety of color1 as a value, and for False cells, it picks the entirety of color2. That's not how it works.
numpy.where broadcasts the arguments together, and then for each cell of the first argument, picks the corresponding cell of either the second or third argument. To produce a shape-(500, 500, 3) result, the arguments would have to broadcast together to a shape of (500, 500, 3). Your inputs aren't broadcasting-compatible with each other at all.
One way to make the broadcasting work would be to add an extra length-1 dimension to the end of img>threshold:
newImg = np.where((img > threshold)[..., None], color1, color2)
If you're new to broadcasting, it may help to use numpy.broadcast_arrays to see what the result of broadcasting multiple arrays together looks like.
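For example, here is a minimal sketch (with a random array standing in for the real image) that shows both the broadcast shapes and the fix:
import numpy as np
img = np.random.rand(500, 500)  # stand-in for the grayscale image
threshold = 0.5
color1 = [1.0, 0.0, 0.0]
color2 = [0.0, 0.0, 1.0]
# broadcast_arrays shows what shape the three operands take together
a, b, c = np.broadcast_arrays((img > threshold)[..., None], color1, color2)
print(a.shape, b.shape, c.shape)  # (500, 500, 3) for all three
newImg = np.where((img > threshold)[..., None], color1, color2)
print(newImg.shape)               # (500, 500, 3)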

EDIT: I realize that I originally misinterpreted the original array dimensions as user2357112 pointed out.
To add an additional solution to your original problem that does not require numpy, use:
newImg = [[color1 if (cell > threshold) else color2 for cell in row] for row in img]
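Note that this builds a nested Python list rather than an array; if you need an array afterwards, wrap it:
newImg = np.array(newImg)  # shape (500, 500, 3)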

Related

ValueError: all the input array dimensions for the concatenation axis must match exactly, but along dimension 2, the array at index 0 has size 3

I am getting the error ValueError: all the input array dimensions for the concatenation axis must match exactly, but along dimension 2, the array at index 0 has size 3 and the array at index 1 has size 1 while running the code below.
for i in range(6):
    print('current b', current_batch)
    current_pred = model.predict(current_batch)[0]
    print('current pred', current_pred)
    test_predictions.append(current_pred)
    print('current batch', current_batch)
    print('current batch => ', current_batch[:,1:,:])
    current_batch = np.append(current_batch[:,1:,:], [[current_pred]], axis=1)
Can anyone please explain why this is happening?
Thanks.
Basically, NumPy is telling you that the shapes of the concatenated matrices must align. For example, it is possible to concatenate a 3x4 matrix with a 3x5 matrix to get a 3x9 matrix (we concatenated along axis 1).
The problem here is that NumPy is telling you the axes don't align. In my example, that would be like trying to concatenate a 3x4 matrix with a 10x10 matrix. This is not possible because the shapes are not aligned.
This usually means that you are trying to concatenate the wrong things. If you are sure the data is right, though, try the np.reshape function, which will change the shape of one of the matrices so that they can be concatenated.
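A small sketch of the shape rule, using the same example sizes:
import numpy as np
a = np.zeros((3, 4))
b = np.zeros((3, 5))
print(np.concatenate([a, b], axis=1).shape)  # (3, 9): sizes along axis 1 add up
c = np.zeros((10, 10))
# np.concatenate([a, c], axis=1) raises ValueError: axis-0 sizes (3 vs 10) differ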
As the traceback shows, np.append is actually using np.concatenate. Did you read (study) the docs for either function? Understand what they say about dimensions?
From the display, [[current_pred]] converted to an array will have shape (1,1,1). Do you understand that?
current_batch[:,1:,:] is, as best I can tell from the small image, (1,5,3).
You are asking to join on axis 1, where the sizes are 1 and 5; that's fine. But it's saying that the last dimension, axis 2, doesn't match: 1 does not equal 3. Do you understand that?
List append, as you do with test_predictions.append(current_pred), works well in an iteration.
np.append does not. Even when it works, it is slow. And here it doesn't work, because you aren't taking sufficient care to match dimensions.
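To make this concrete, here is a sketch with shapes reconstructed from the error message (the (1, 6, 3) layout is an assumption; the real arrays come from your model):
import numpy as np
current_batch = np.zeros((1, 6, 3))       # assumed (batches, timesteps, features)
current_pred = np.zeros(1)                # model.predict(...)[0], shape (1,)
print(current_batch[:, 1:, :].shape)      # (1, 5, 3)
print(np.array([[current_pred]]).shape)   # (1, 1, 1)
# Joining on axis 1 requires axes 0 and 2 to match, and 3 != 1: hence the error.
# One way to make the shapes compatible (whether it is meaningful depends on
# what your model predicts) is to broadcast the prediction across the feature axis:
step = np.broadcast_to(current_pred, (1, 1, 3))
current_batch = np.concatenate([current_batch[:, 1:, :], step], axis=1)
print(current_batch.shape)                # (1, 6, 3)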

How to apply numpy functions on a slice?

I have a numpy array of shape (100, 30, 3). I want to apply a function that transforms the second dimension (N=30) based on a slice of the third dimension.
For example, suppose I am doing machine learning and my shape is (Samples, 1D Pixels, Color Channels). Now I want to apply np.log on the 2nd color channel. Something like np.log(x, axis=1, slice_axis=2, slice_index=1) to apply the log on (:,:,1). How?
For applying operations like np.log in place, you can use the out parameter. For the problem you mentioned: np.log(x[:, :, 1], out=x[:, :, 1]).
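A minimal runnable sketch (with random data in place of yours): because x[:, :, 1] is basic slicing, it is a view into x, so out= writes the result back into the original array.
import numpy as np
x = np.random.rand(100, 30, 3) + 1.0   # +1 keeps log() away from log(0)
before = x.copy()
np.log(x[:, :, 1], out=x[:, :, 1])     # modify channel 1 in place
assert np.allclose(x[:, :, 1], np.log(before[:, :, 1]))
assert np.array_equal(x[:, :, 0], before[:, :, 0])  # other channels untouched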

Numpy : how to use np.where in a multidimensional array with a given test condition?

Edit: I have reduced this to a minimal problem, since my first question was probably too messy.
When I use np.where with a condition on scalar cells, things work fine:
new_array = np.where(old_array==6, rempl_array, old_array)
but if I want my condition to work on a full dimension of the array:
new_array = np.where((old_array == [1, 2, 3]).all(axis=-1), rempl_array, old_array)
it does not work any more, due to a dimension mismatch.
But I can't figure out how to turn the 2D boolean (old_array == [1, 2, 3]).all(axis=-1) into a suitable 3D boolean for where.
Here was the initial post :
I have a 3D array that I created from a picture (so the dimensions are height, width and RGB value). I want to change colors according to a given condition.
submap = np.any([(carr == [pr["red"], pr["green"], pr["blue"]]).all(axis=-1) for pr in list_areas], axis=0)
The condition works fine, returning a 2D array with True for pixels where the condition is met and False otherwise.
However, when I try to build a new 3D array where I change colors according to that condition:
new_carr = np.where(submap, new_color, carr)
I get a shape mismatch error:
ValueError: operands could not be broadcast together with shapes (2048,5632) (3,) (2048,5632,3)
The problem seems not to be only the fact that my new_color has shape (3,), since the problem still holds when I replace it with an array of shape (2048,5632,3), but the fact that my condition is 2D while my initial array is 3D. But how could this condition not be 2D by definition, and how could I make this work?
Thanks for your help
Starting with this posterised image of Paddington:
I think you want to use np.where() as follows to make all red areas into magenta and all other areas into yellow:
#!/usr/bin/env python3
from PIL import Image
import numpy as np
# Load PIL Image and ensure RGB rather than palette based, then make into Numpy array
pi = Image.open('paddington.png').convert('RGB')
na = np.array(pi)
# Now make 2 images same size, one magenta, one yellow
magenta = np.zeros_like(na) + [255,0,255]
yellow = np.zeros_like(na) + [255,255,0]
# Anywhere paddington is red, make him magenta. Anywhere else, make him yellow.
result = np.where((na==[255,0,0]).all(axis=-1)[...,None], magenta, yellow)
# Save result
Image.fromarray(result.astype(np.uint8)).save('result.png')
Of course, it was not necessary to make full-size magenta and yellow images; I just did that to match your original code. You could have used a single pixel and saved memory, making him green and blue like this:
result = np.where((na==[255,0,0]).all(axis=-1)[...,None], [0,255,0], [0,0,255])
Actually, I have solved my problem in a very ugly way:
submap = np.array([[[b, b, b] for b in x] for x in submap.tolist()])
But boy that seems inefficient. There should be a way to do that with arrays only.
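There is: the same trick as in the answer above works here. Give the 2D mask a trailing axis so it broadcasts against the color axis, with no Python-level loops (a sketch with a tiny stand-in mask):
import numpy as np
submap = np.array([[True, False], [False, True]])  # stand-in 2D mask
new_carr = np.where(submap[..., None], [0, 255, 0], [0, 0, 255])
print(new_carr.shape)  # (2, 2, 3)
# Or, if you really need the 3-channel mask itself:
submap3 = np.repeat(submap[:, :, None], 3, axis=2)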

At what situation points need to be reshaped like reshape(-1,1,2) in python-opencv?

I'm new to python-opencv, so I read this tutorial and found some sample code like this:
pts = np.array([[10,5],[20,30],[70,20],[50,10]], np.int32)
pts = pts.reshape((-1,1,2))
cv2.polylines(img,[pts],True,(0,255,255))
It's about how to draw polygons using the cv2.polylines function. Most of it is easy to understand; what I'm curious about is:
pts = pts.reshape((-1,1,2))  # I understand this makes pts.shape (4,1,2)
I tried removing this line and found that it didn't make any difference; it works fine either way. Before this reshape operation, pts's shape is (4,2), which is intuitive enough for me. Besides, when I write code like this:
convex_pts = cv2.convexHull(pts)  # get a convex hull from pts; pts's shape is (4,2)
print(convex_pts) #[[[72,20]],[[20,30]],[[10,5]],[[50,10]]]
print(convex_pts.shape) #(4,1,2)
it seems to me that python-opencv insists on "unsqueezing" it so that it has a shape like (x,1,2). That's weird to me, because when the shape is (x,2) it works fine in the tutorial case. Why is this necessary? I searched around and found nothing helpful; am I missing something? So I want to know why, and in what situations, points need to be reshaped like this.
pts = pts.reshape((-1,1,2))
This changes the shape from (4,2) to (4,1,2), which is consistent with the kind of shape several cv2 functions use. For example, if you wanted to find contours using findContours, the output contours have shape (x,1,y).
If you debug the first line of code which says:
pts = np.array([[10,5],[20,30],[70,20],[50,10]], np.int32)
and then use pts.shape, you will get (4, 2), which means that pts at this point has 4 rows and 2 columns. Now it may be that the function taking this matrix expects the input in some other format, which in your case seems to be (4, 1, 2): 4 rows, 1 column, and each element holding 2 sub-elements. To convert the (4, 2) shape into the (4, 1, 2) shape, we use the following line of code:
pts = pts.reshape((-1,1,2))
What the above line means is: I need a matrix with an unknown number of rows, but a single column, and each element holding 2 sub-elements. NumPy internally calculates the unknown size and creates the matrix for you. It is just a fancy way of doing pts.reshape((4,1,2)).
Also quoting the documentation:
One shape dimension can be -1. In this case, the value is inferred from the length of the array and remaining dimensions.
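A quick check of the shapes involved, using the tutorial's points:
import numpy as np
pts = np.array([[10, 5], [20, 30], [70, 20], [50, 10]], np.int32)
print(pts.shape)   # (4, 2)
pts = pts.reshape((-1, 1, 2))
print(pts.shape)   # (4, 1, 2): the -1 was inferred as 4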

Concatenate matrixes to tensor

I have two (or sometimes more) matrices that I want to combine into a tensor. The matrices have, for example, shape (100, 400), and when they are combined they should have the dimensions (2, 100, 400).
How do I do that? I tried it the same way I created matrices from vectors, but that didn't work:
tensor = numpy.concatenate(list_of_matrixes, axis=0)
Probably you want
tensor = np.array(list_of_matrices)
np.array([...]) just loves to combine its inputs into a new array along a new axis. In fact, it takes some effort to prevent that. :)
To use concatenate, you need to add an axis to your arrays. axis=0 means 'join on the current 1st axis', so with your (100, 400) inputs it would produce a (200, 400) array.
np.concatenate([arr1[None,...], arr2[None,...]], axis=0)
would do the trick, or, more generally,
np.concatenate([arr[None,...] for arr in list_arr], axis=0)
If you look at the code for dstack, hstack, vstack you'll see that they do this sort of dimension adjustment before passing the task to concatenate.
The np.array solution is easy, but the concatenate solution is a good learning opportunity.
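A sketch comparing the two approaches on dummy data of the shapes from the question:
import numpy as np
a = np.zeros((100, 400))
b = np.ones((100, 400))
t1 = np.array([a, b])  # stacks along a new leading axis
t2 = np.concatenate([arr[None, ...] for arr in [a, b]], axis=0)
print(t1.shape, t2.shape)      # (2, 100, 400) (2, 100, 400)
print(np.array_equal(t1, t2))  # True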
