I have a 3-channel NumPy array, i.e. an image, and I want to mask out some areas and then calculate the mean over the unmasked areas. When I convert my NumPy array to a masked array, I always get the following error:
raise MaskError(msg % (nd, nm))
numpy.ma.core.MaskError: Mask and data not compatible: data size is 325080, mask size is 108360.
My array (image) shape is (301, 360, 3), for reference. I create my mask by making a duplicate array of zeros and then drawing a polygon of 1's (True) onto it.
My code is:
mask = np.zeros((src.shape[0], src.shape[1], 1), dtype='uint8')
cv2.drawContours(mask, [np.array(poly)], -1, (1,), -1)
msrc = np.ma.array(src, mask=mask, dtype='uint8') # error on this line
mean = np.ma.mean(msrc)
What am I doing wrong and how can I fix it to successfully create a masked array in numpy?
As stated in the comments, NumPy has no notion of images; it's just math on arrays. OpenCV abstracts that math into easy image manipulations.
To mask an image using OpenCV, you can use
masked_img = cv2.bitwise_and(src, src, mask=mask).
(docs)
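If you do want a NumPy masked array rather than the OpenCV route, the original error goes away once the mask has the same size as the data. A minimal sketch (with random stand-in data and a hypothetical rectangular region instead of the polygon) that repeats a single-channel mask across the three channels:

```python
import numpy as np

# Stand-in for the 3-channel image from the question (values are arbitrary)
src = np.random.randint(0, 256, (301, 360, 3), dtype=np.uint8)

# Single-channel mask: 1 marks the region to exclude from the mean
mask = np.zeros((301, 360), dtype=np.uint8)
mask[50:100, 50:100] = 1

# Repeat the 2-D mask across the 3 channels so mask size matches data size
mask3 = np.repeat(mask[:, :, np.newaxis], 3, axis=2)

msrc = np.ma.array(src, mask=mask3)
mean = msrc.mean()  # mean over the unmasked pixels only
```

Note that in numpy.ma, True in the mask means "exclude this element", which matches drawing 1's over the areas you want masked out.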
So here is the incomplete code
import cv2
import numpy as np
arr = np.zeros((30,30))
# bw_image = <missing part>
cv2.imshow("BW Image",bw_image)
cv2.waitKey(0)
cv2.destroyAllWindows()
#please use arr at 4th line to complete the code
I am actually new to this and don't know how to convert a given 2D array into a binary image using OpenCV.
Please use the name "arr" for the missing part; in my actual code the array is not all zeros, but has random values of 0 and 255 with shape 400x400.
I think you want a Numpy array of random integers:
arr = np.random.randint(0, 256, (400,400), dtype=np.uint8)
If your question is actually about thresholding, maybe you want:
_, bw_image = cv2.threshold(arr, 128,255,cv2.THRESH_BINARY)
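For reference, cv2.THRESH_BINARY sets every pixel strictly above the threshold to the given max value and everything else to 0, so (if OpenCV isn't strictly required for this step) the same binarisation can be sketched in plain NumPy:

```python
import numpy as np

# Random 0-255 array standing in for the question's 400x400 data
arr = np.random.randint(0, 256, (400, 400), dtype=np.uint8)

# Equivalent of: _, bw_image = cv2.threshold(arr, 128, 255, cv2.THRESH_BINARY)
bw_image = np.where(arr > 128, 255, 0).astype(np.uint8)
```

The result can then be shown with cv2.imshow exactly as in the question's snippet.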
Let's say I have a NumPy array of shape (100, 100, 3) that represents an image in RGB encoding. How do I iterate over the individual pixels of this image?
Specifically I want to map this image with a function.
Note, I got that array from opencv.
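One way to sketch this, using a hypothetical per-pixel function (here simply inverting the colours): an explicit double loop does iterate pixel by pixel, but a vectorised NumPy expression is usually both shorter and much faster:

```python
import numpy as np

# Random stand-in for the (100, 100, 3) image from the question
image = np.random.randint(0, 256, (100, 100, 3), dtype=np.uint8)

# Explicit iteration over individual pixels
out = np.empty_like(image)
for y in range(image.shape[0]):
    for x in range(image.shape[1]):
        r, g, b = image[y, x]
        out[y, x] = (255 - r, 255 - g, 255 - b)

# Vectorised equivalent of the same per-pixel mapping
out_vec = 255 - image
```

Whenever the mapping can be written as array arithmetic, prefer the vectorised form; the loop is only worth it for logic that genuinely cannot be expressed elementwise.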
I'm trying to mask a 3D array (RGB image) with numpy.
However, my current approach reshapes the masked array (output below).
I have tried to follow the approach described in the scikit-image crash course.
I have looked on Stack Overflow, and a similar question has been asked, but with no accepted answer (similar question here).
What is the best way to accomplish masking like this?
Here is my attempt:
import numpy as np
import matplotlib.pyplot as plt

# create some random numbers to fill array
tmp = np.random.random((10, 10))
# create a 3D array to be masked
a = np.dstack((tmp, tmp, tmp))
# create a boolean mask of zeros
mask = np.zeros_like(a, bool)
# set a few values in the mask to true
mask[1:5,0,0] = 1
mask[1:5,0,1] = 1
# Try to mask the original array
masked_array = a[:,:,:][mask == 1]
# Check that masked array is still 3D for plotting with imshow
print(a.shape)
(10, 10, 3)
print(mask.shape)
(10, 10, 3)
print(masked_array.shape)
(8,)
# plot original array and masked array, for comparison
plt.imshow(a)
plt.imshow(masked_array)
plt.show()
NumPy broadcasting allows you to use a mask with a different shape than the image. E.g.,
import numpy as np
import matplotlib.pyplot as plt
# Construct a random 50x50 RGB image
image = np.random.random((50, 50, 3))
# Construct mask according to some condition;
# in this case, select all pixels with a red value > 0.3
mask = image[..., 0] > 0.3
# Set all masked pixels to zero
masked = image.copy()
masked[mask] = 0
# Display original and masked images side-by-side
f, (ax0, ax1) = plt.subplots(1, 2)
ax0.imshow(image)
ax1.imshow(masked)
plt.show()
After finding the following post on loss of dimensions HERE, I have found a solution using numpy.where:
masked_array = np.where(mask == 1, a, 0)
This appears to work well.
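A short sketch of the difference, using the same setup as the question: boolean indexing collapses the result to 1-D, while np.where preserves the 3-D shape that imshow needs:

```python
import numpy as np

# Same construction as in the question
a = np.dstack([np.random.random((10, 10))] * 3)
mask = np.zeros_like(a, bool)
mask[1:5, 0, 0] = 1
mask[1:5, 0, 1] = 1

flat = a[mask]                            # boolean indexing collapses to 1-D
masked_array = np.where(mask == 1, a, 0)  # shape-preserving alternative
```

The flattening happens because boolean indexing selects individual elements, not whole pixels, so it cannot keep the original layout.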
When I load an image with PIL and convert it into a NumPy array:
image = Image.open("myimage.png")
pixels = np.asarray(image)
The data is stored as [x][y][channel]. I.e., the value of pixels[3, 5, 0] will be the red component of the pixel at (3, 5).
However, I am using a library which requires the image to be in the format [channel][x][y]. Therefore, I am wondering how I can do this conversion?
I know that NumPy has a reshape function, but this doesn't actually allow you to "swap" over the dimensions as I want.
Any help? Thanks!
In order to get the dimensions in the order that you want, you could use the transpose method as follows:
image = Image.open("myimage.png")
pixels = np.asarray(image).transpose(2,0,1)
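As a quick check of the axis order (with random stand-in data rather than a real PNG), and noting that np.moveaxis performs the same reordering and can read more clearly:

```python
import numpy as np

# Stand-in for np.asarray(Image.open("myimage.png")): a 100x100 RGB array
pixels = np.random.randint(0, 256, (100, 100, 3), dtype=np.uint8)

chw = pixels.transpose(2, 0, 1)    # [x][y][channel] -> [channel][x][y]
chw2 = np.moveaxis(pixels, -1, 0)  # same reordering, different spelling
```

Both return views, not copies, so the conversion is cheap even for large images.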
I would like to add two 3D NumPy arrays (RGB image arrays) using a 2D mask generated by some algorithm on a greyscale image. What is the best way to do this?
As an example of what I am trying to do:
from PIL import Image, ImageChops, ImageOps
import numpy as np
img1=Image.open('./foo.jpg')
img2=Image.open('./bar.jpg')
img1Grey=ImageOps.grayscale(img1)
img2Grey=ImageOps.grayscale(img2)
# Some processing for example:
diff=ImageChops.difference(img1Grey,img2Grey)
mask=np.ma.masked_array(img1,diff>1)
img1Array=np.asarray(img1)
img2Array=np.asarray(img2)
imgResult=img1Array+img2Array[mask]
I was thinking:
1) break up the RGB image and do each color separately
2) duplicate the mask into a 3D array
or is there a more pythonic way to do this?
Thanks in advance!
Wish I could add a comment instead of an answer. Anyhow:
masked_array is not for making masks. It's for including only the unmasked data in calculations such as sum, mean, etc., for scientific and statistical applications. It consists of an array plus the mask for that array.
It's probably NOT what you want.
You probably just want a normal boolean mask, as in:
mask = diff>1
Then you'll need to modify the shape so numpy broadcasts in the correct dimension, then broadcast it into the 3rd dimension:
mask.shape = mask.shape + (1,)
mask = np.broadcast_arrays(img1Array, mask)[1]
After that, you can just add the pixels:
img1Array[mask] += img2Array[mask]
A further point of clarification:
imgResult=img1Array+img2Array[mask]
That could never work. You are saying 'add some of the pixels from img2Array to all of the pixels in img1Array' 6_9
If you want to apply a ufunc between two or more arrays, they must be either the same shape, or broadcastable to the same shape.
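Putting the steps above together as a runnable sketch (the arrays here are random stand-ins for the two images and the greyscale difference):

```python
import numpy as np

# Hypothetical stand-ins for img1Array, img2Array and the greyscale diff
img1Array = np.random.randint(0, 128, (50, 50, 3), dtype=np.int32)
img2Array = np.random.randint(0, 128, (50, 50, 3), dtype=np.int32)
diff = np.random.randint(0, 5, (50, 50))

mask = diff > 1                  # plain 2-D boolean mask
mask = mask[..., np.newaxis]     # add a trailing axis for broadcasting
mask = np.broadcast_arrays(img1Array, mask)[1]  # now shaped (50, 50, 3)

before = img1Array.copy()
img1Array[mask] += img2Array[mask]  # add only the masked pixels
```

Note that np.asarray on a PIL image can return a read-only array, so you may need to copy it before the in-place addition.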