masked RGB image does not appear masked with imshow - python

I noticed that displaying an RGB masked image does not work as I would expect, i.e. the resulting image is not masked when displayed. Is this normal, and is there a workaround?
The example below shows the observed behaviour:
import numpy as np
from matplotlib import pyplot as plt
img=np.random.normal(0,10,(20,20)) # create a random image
mask=img>0
ma_img=np.ma.masked_where(mask, img) # create a masked image
img_rgb=np.random.uniform(0,1,(20,20,3)) # create a random RGB image
mask_rgb=np.broadcast_to(mask[...,np.newaxis],img_rgb.shape) # extend the mask so that it matches the RGB image shape
ma_img_rgb=np.ma.masked_where(mask_rgb, img_rgb) # create a masked RGB image
## Display:
fig, ax=plt.subplots(2,2)
ax[0,0].imshow(img)
ax[0,0].set_title('Image')
ax[0,1].imshow(ma_img)
ax[0,1].set_title('Masked Image')
ax[1,0].imshow(img_rgb)
ax[1,0].set_title('RGB Image')
ax[1,1].imshow(ma_img_rgb)
ax[1,1].set_title('Masked RGB Image')
Interestingly, when the mouse passes over masked pixels in the masked RGB image, the pixel value does not appear in the lower right corner of the figure window.

It seems the mask is ignored for RGB arrays, see also this question.
From the doc the input for imshow() can be:
(M, N): an image with scalar data. The values are mapped to colors using normalization and a colormap. See parameters norm, cmap, vmin, vmax.
(M, N, 3): an image with RGB values (0-1 float or 0-255 int).
(M, N, 4): an image with RGBA values (0-1 float or 0-255 int), i.e. including transparency.
Therefore one option would be to use ~mask as alpha values for the RGB array:
img_rgb = np.random.uniform(0, 1, (20, 20, 3))
ma_img_rgb = np.concatenate([img_rgb, ~mask[:, :, np.newaxis]], axis=-1)
# ma_img_rgb = np.dstack([img_rgb, ~mask]) # Jan Kuiken
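A minimal, self-contained sketch of this workaround (regenerating the random data, so the values differ from the question's): the inverted mask becomes the alpha channel, so masked pixels are rendered fully transparent.

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.normal(0, 10, (20, 20))
mask = img > 0                                  # True where pixels should be hidden
img_rgb = rng.uniform(0, 1, (20, 20, 3))

# Use the inverted mask as an alpha channel: masked pixels -> alpha 0 (transparent)
ma_img_rgb = np.dstack([img_rgb, (~mask).astype(float)])
print(ma_img_rgb.shape)  # (20, 20, 4)
```

Passing this (M, N, 4) array to imshow then shows the unmasked pixels only.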


How can I plot a normalized RGB map

I have a numpy array where each element has 3 values (RGB) from 0 to 255, and it spans from [0, 0, 0] to [255, 255, 255] with 256 elements evenly spaced. I want to plot it as a 16 by 16 grid but have no idea how to map the colors (as the numpy array) to the data to create the grid.
import numpy as np
# create an evenly spaced RGB representation as integers
all_colors_int = np.linspace(0, (255 << 16) + (255 << 8) + 255, dtype=int)
# convert the evenly spaced integers to RGB representation
rgb_colors = np.array(tuple(((((255<<16)&k)>>16), ((255<<8)&k)>>8, (255)&k) for k in all_colors_int))
# data to fit the rgb_colors as colors into a plot as a 16 by 16 numpy array
data = np.array(tuple((k,p) for k in range(16) for p in range(16)))
So, how do I map the rgb_colors as colors onto the data array in a grid plot?
There's quite a bit going on here, and I think it's valuable to talk about it.
linspace
I suggest you read the linspace documentation.
https://numpy.org/doc/stable/reference/generated/numpy.linspace.html
If you want a 16x16 grid, then you should start by generating 16x16 = 256 values. However, if you inspect the shape of the all_colors_int array, you'll notice that it contains only 50 values, which is the default for linspace's num argument.
all_colors_int = np.linspace(0, (255 << 16) + (255 << 8) + 255, dtype=int)
print(all_colors_int.shape) # (50,)
Make sure you specify this third 'num' argument to generate the correct quantity of RGB pixels.
As a further side note, (255 << 16) + (255 << 8) + 255 is equivalent to (2^24)-1. The 2^N-1 formula is usually what's used to fill the first N bits of an integer with 1's.
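For example, a corrected call (note the explicit num=256, and the equivalent 2**24 - 1 upper bound):

```python
import numpy as np

# 256 evenly spaced integers covering the full 24-bit RGB range
all_colors_int = np.linspace(0, 2**24 - 1, num=256, dtype=int)
print(all_colors_int.shape)  # (256,)
```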
numpy is faster
On your next line, your for loop manually iterates over all of the elements in python.
rgb_colors = np.array(tuple(((((255<<16)&k)>>16), ((255<<8)&k)>>8, (255)&k) for k in all_colors_int))
While this might work, this isn't considered the correct way to use numpy arrays.
You can directly perform bitwise operations to the entire numpy array without the python for loop. For example, to extract bits [16, 24) (which is usually the red channel in an RGB integer):
# Shift over so the 16th bit is now bit 0, then select only the first 8 bits.
RedChannel = (all_colors_int >> 16) & 255
Building the grid
There are many ways to do this in numpy, however I would suggest this approach.
Images are usually represented with a 3-dimensional numpy array, usually of the form
(HEIGHT, WIDTH, CHANNELS)
First, reshape your numpy int array into the 16x16 grid that you want.
reshaped = all_colors_int.reshape((16, 16))
Again, the numpy documentation is really great, give it a read:
https://numpy.org/doc/stable/reference/generated/numpy.reshape.html
Now, extract the red, green and blue channels, as described above, from this reshaped array. If you operate directly on the numpy array, you won't need a nested for-loop to iterate over the 16x16 grid, numpy will handle this for you.
RedChannel = (reshaped >> 16) & 255
GreenChannel = ... # TODO
BlueChannel = ... # TODO
And then finally, we can convert our 3, 16x16 grids, into a 16x16x3 grid, using the numpy stack function
https://numpy.org/doc/stable/reference/generated/numpy.stack.html
grid_rgb = np.stack((
RedChannel,
GreenChannel,
BlueChannel
), axis=2).astype(np.uint8)
Notice two things here:
When we 'stack' arrays, we create a new dimension. The axis=2 argument tells numpy to add this new dimension at index 2 (i.e. the third axis). Without this, the shape of our grid would be (3, 16, 16) instead of (16, 16, 3).
The .astype(np.uint8) casts all of the values in this numpy array to the uint8 data type. This makes the grid compatible with other image manipulation libraries, such as OpenCV and PIL.
Show the image
We can use PIL for this.
If you want to use OpenCV, then remember that OpenCV interprets images as BGR not RGB and so your channels will be inverted.
# Show Image
from PIL import Image
Image.fromarray(grid_rgb).show()
If you've done everything right, you'll see an image... And it's all gray.
Why is it gray?
There are over 16 million possible colours. Selecting only 256 of them just so happens to select only pixels with the same R, G and B values which results in an image without any color.
If you want to see some colours, you'll need to either show a bigger image (e.g. 256x256), or alternatively, you can use a dimension that's not a power of two. For example, try a prime number, as this will add a small amount of pseudo-randomness to the RGB selection, e.g. try 17.
Best of luck.
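For reference, here is the whole pipeline assembled into one runnable sketch, with the green and blue TODOs filled in (skip this if you'd rather work them out yourself):

```python
import numpy as np

# 256 evenly spaced 24-bit RGB integers, reshaped into the 16x16 grid
all_colors_int = np.linspace(0, 2**24 - 1, num=256, dtype=int)
reshaped = all_colors_int.reshape((16, 16))

# Extract each 8-bit channel with shifts and masks
RedChannel = (reshaped >> 16) & 255
GreenChannel = (reshaped >> 8) & 255
BlueChannel = reshaped & 255

# Stack into a (16, 16, 3) uint8 image
grid_rgb = np.stack((RedChannel, GreenChannel, BlueChannel), axis=2).astype(np.uint8)
print(grid_rgb.shape)  # (16, 16, 3)
```

Checking grid_rgb confirms the "why is it gray" observation: every pixel ends up with equal R, G and B values.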
Based solely on the title 'How to plot a normalized RGB map' rather than the approach you've provided, it appears that you'd like to plot a colour spectrum in RGB.
The following approach can be taken to manually construct this.
import cv2
import matplotlib.pyplot as plt
import numpy as np
h = np.repeat(np.arange(0, 180), 180).reshape(180, 180)
s = np.ones((180, 180))*255
v = np.ones((180, 180))*255
hsv = np.stack((h, s, v), axis=2).astype('uint8')
rgb = cv2.cvtColor(hsv, cv2.COLOR_HSV2RGB)
plt.imshow(rgb)
Explanation:
It's generally easier to construct (and decompose) a colour palette using the HSV (hue, saturation, value) colour scale; where hue is the colour itself, saturation can be thought of as the intensity and value as the distance from black. Therefore, there's really only one value to worry about, hue. Saturation and value can be set to 255, for 'full intensity'.
cv2 is used here to simply convert the constructed HSV colourscale to RGB and matplotlib is used to plot the image. (I didn't use cv2 for plotting as it doesn't play nicely with Jupyter.)
The actual spectrum values are constructed in numpy.
Breakdown:
Create the colour spectrum of hue and plug 255 in for the saturation and value. Why is 180 used? OpenCV stores hue for 8-bit images in the range 0-179: the hue angle (0-360 degrees) is halved so it fits in a uint8.
h = np.repeat(np.arange(0, 180), 180).reshape(180, 180)
s = np.ones((180, 180))*255
v = np.ones((180, 180))*255
Stack the three channels H+S+V into a 3-dimensional array, convert the array values to unsigned 8-bit integers, and have cv2 convert from HSV to RGB for us, to be lazy and save us working out the math.
hsv = np.stack((h, s, v), axis=2).astype('uint8')
rgb = cv2.cvtColor(hsv, cv2.COLOR_HSV2RGB)
Plot the RGB image.
plt.imshow(rgb)

Write my own image function that takes a greyscale (0-255) to a r,g,b, alpha heat map

I have a 300*500 image. It is in grayscale and ranges from 0-255. I want to iterate value by value and apply a heat map (say viridis, but it doesn't matter) to each value.
My final heatmap image is in Red, Blue, Green and Alpha. I imagine the specific heat map function would take the grayscale values and output three values for each Red, Blue, Green and their appropriate weights.
f(0-255) = weightr(Red), weightb(Blue), weightg(Green).
My ending image would have dimensions (300, 500, 4). The four channels are R, G, B and an alpha channel.
What is the function that would achieve this? Almost certain it's going to be highly dependent on the specific heat map. Viridis is what I'm after, but I want to understand the concept as well.
The code below reads in a random image (the fact it's from unsplash does not matter) and turns it into a (300,500), 0-255 image called imgarray. I know matplotlib defaults to viridis, but I included the extra step to show what I would like to achieve with my own function.
import matplotlib.pyplot as plt
import numpy as np
import requests
from PIL import Image
from io import BytesIO
img_src = 'https://unsplash.it/500/300'
response = requests.get(img_src)
imgarray = Image.open(BytesIO(response.content))
imgarray = np.asarray(imgarray.convert('L'))
from matplotlib import cm
print(cm.viridis(imgarray))
plt.imshow(cm.viridis(imgarray))
Matplotlib defines the viridis colormap as 256 RGB colors (one for each 8-bit gray scale value), where each color channel is a floating point value in [0, 1]. The definition can be found on GitHub. The following code demonstrates how matplotlib applies the viridis colormap to a gray scale image.
import matplotlib.pyplot as plt
from matplotlib import cm
from matplotlib._cm_listed import _viridis_data # the colormap look-up table
import numpy as np
import requests
from PIL import Image
from io import BytesIO
img_src = 'https://unsplash.it/id/767/500/300'
response = requests.get(img_src)
imgarray = Image.open(BytesIO(response.content))
imgarray = np.asarray(imgarray.convert('L'))
plt.imshow(cm.viridis(imgarray))
plt.show()
# look-up table: from grayscale to RGB
viridis_lut = np.array(_viridis_data)
print(viridis_lut.shape) # (256, 3)
# convert grayscale to RGB using the LUT
img_viridis = viridis_lut.take(imgarray, axis=0, mode='clip')
plt.imshow(img_viridis)
plt.show()
# add an alpha channel
alpha = np.full(imgarray.shape + (1,), 1.) # shape: (300, 500, 1)
img_viridis_alpha = np.concatenate((img_viridis, alpha), axis=2)
assert (cm.viridis(imgarray) == img_viridis_alpha).all() # are both equal
Produces the following image:
The actual magic happens in the np.take(a, indices) method, which takes values from array a (the viridis LUT) at the given indices (gray scale values from 0..255 from the image). To get the same result as from the cm.viridis function, we just need to add an alpha channel (full of 1. = full opacity).
For reference, the same conversion happens around here in the matplotlib source code.
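As a tiny illustration of how the take lookup works, here is a hypothetical 4-entry LUT in place of the 256-entry viridis one:

```python
import numpy as np

# hypothetical 4-entry LUT: black, red, green, blue
lut = np.array([[0., 0., 0.],
                [1., 0., 0.],
                [0., 1., 0.],
                [0., 0., 1.]])
gray = np.array([[0, 3],
                 [2, 1]])          # "gray" values index into the LUT
rgb = lut.take(gray, axis=0, mode='clip')
print(rgb.shape)  # (2, 2, 3)
```

Each gray value is replaced by the LUT row it indexes, exactly as the viridis LUT replaces each 0..255 gray value with an RGB triple.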

How to digitise and use a color scale to analyse areas of an RGB image with Numpy

I have an RGB picture in DICOM format, extracted as a 3-dimensional numpy array with the pydicom package.
The image looks like this one:
I would like to quantify the mean RGB value of one of the squares in the image, based on the color scale to the right (embedded in the image).
I have a general idea about the approach:
build a RGB profile from the color scale and match it to the represented measure (say from -10 to 30 in this example)
get the average RGB value from the square of interest
compare the average RGB value to the scale to get the measure it is closest to.
I found a few examples of color analyser scripts but nothing doing this exactly. Has anyone a suggestion or an example I could look at?
Just in case this is useful to someone, here was what I ended up doing:
import cv2
import matplotlib.pyplot as plt
import numpy as np
img = cv2.cvtColor(cv2.imread('data/Untitled.jpg'), cv2.COLOR_BGR2RGB) # shape (640, 796, 3)
color_bar = img[139:477, 695:715, :] # coordinates retrieved manually for the example
roi = img[212:301, 262:352, :] # region of interest, a square on the figure
# average colors over the bar width to get unique rgb values over the scale length
color_profile = color_bar.mean(axis=1, dtype=int)
# array of values matching color bar (values to measure in the image)
real_values = np.linspace(30, -14, color_profile.shape[0])#.reshape(color_profile.shape[0], 1)
mean_rgb = roi.mean(axis=(0, 1), dtype=int)
# What follows was adapted from https://stackoverflow.com/a/55464480/13147488
# distances between mean RGB value in ROI and RGB values in color bar
distances = np.sqrt(np.sum((mean_rgb - color_profile) ** 2, axis=-1))
# indices of closest corresponding RGB value in color bar
min_index = distances.argmin()
mapped_value = real_values[min_index]
An alternative is to map each pixel's RGB value to its corresponding "real value" and calculate the mean of those in the end:
# distances between RGB values of each pixel in ROI and RGB values in color bar
distances = np.sqrt(np.sum((roi[:, :, np.newaxis, :] - color_profile) ** 2, axis=3))
min_indices = distances.argmin(2)
mapped_values = real_values[min_indices]
mean = mapped_values.mean(axis=(0, 1))

Colormap of a RGB image converted from HSV colorspace

TL;DR
I have an RGB image (i.e. an ndarray of shape (nlines, ncolumns, 3)) whose colors are the ones of the HSV colormap (matplotlib.cm.hsv).
How do I transform such RGB image so that the values correspond to a different colormap?
MORE DETAILS
I have two 2D numpy arrays that represent the phase (in -pi, pi) and magnitude of some data. I want the phase values 'modulated' by the magnitude, so I create a matrix like this:
HSV_matrix[..., 0] = phase_scaled # rescaled in (0,1)
HSV_matrix[..., 1] = np.ones_like(phase_scaled)
HSV_matrix[..., 2] = magnitude_scaled # rescaled in (0,1)
which means that the phase is associated to the HUE and the magnitude to the VALUE.
Then I input such matrix to hsv_to_rgb():
rgb_image = matplotlib.colors.hsv_to_rgb(hsv_matrix)
rgb_image is now an ndarray of shape (nlines, ncolumns, 3) and represents the RGB values of my image, where the colors are associated with the phase and the brightness with the magnitude.
At this point the colormap is already defined because, by definition, the values of rgb_image indicate the Red Green and Blue values for every pixel.
After hsv_to_rgb, the colormap of rgb_image is matplotlib.cm.hsv, hence to produce a consistent colorbar:
cmap = cm.ScalarMappable(cmap=cm.hsv, norm=plt.Normalize(vmin=hmin, vmax=hmax))
plt.colorbar(cmap)
where hmin and hmax are the min and max value of whatever I have in my HUE (hsv_matrix[...,0]).
NOTE:
Matplotlib imshow ignores the cmap parameter if the input is RGB(A) [1], hence I cannot change the colormap with imshow
[1] https://matplotlib.org/3.1.1/api/_as_gen/matplotlib.pyplot.imshow.html
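A minimal runnable sketch of the construction described above, with synthetic phase and magnitude arrays (the array size and variable names are illustrative):

```python
import numpy as np
from matplotlib.colors import hsv_to_rgb

rng = np.random.default_rng(0)
phase = rng.uniform(-np.pi, np.pi, (64, 64))      # synthetic phase data
magnitude = rng.uniform(0, 5, (64, 64))           # synthetic magnitude data

# rescale both into (0, 1)
phase_scaled = (phase + np.pi) / (2 * np.pi)
magnitude_scaled = magnitude / magnitude.max()

hsv_matrix = np.empty(phase.shape + (3,))
hsv_matrix[..., 0] = phase_scaled                  # HUE <- phase
hsv_matrix[..., 1] = np.ones_like(phase_scaled)    # full saturation
hsv_matrix[..., 2] = magnitude_scaled              # VALUE <- magnitude

rgb_image = hsv_to_rgb(hsv_matrix)
print(rgb_image.shape)  # (64, 64, 3)
```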

Get RGB colors from color palette image and apply to binary image

I have a color palette image like this one and a binarized image in a numpy array, for example a square such as this:
img = np.zeros((100,100), dtype=bool)
img[25:75,25:75] = 1
(The real images are more complicated of course.)
I would like to do the following:
Extract all RGB colors from the color palette image.
For each color, save a copy of img in that color with a transparent background.
My code so far (see below) can save the img as a black object with transparent background. What I am struggling with is a good way of extracting the RGB colors so I can apply them to the image.
# Create an MxNx4 array (RGBA)
img_rgba = np.zeros((img.shape[0], img.shape[1], 4), dtype=bool)
# Fill R, G and B with inverted copies of the image
# Note: This creates a black object; instead of this, I need the colors from the palette.
for c in range(3):
    img_rgba[:,:,c] = ~img
# For alpha just use the image again (makes background transparent)
img_rgba[:,:,3] = img
# Save image
imsave('img.png', img_rgba)
You can use a combination of a reshape and np.unique to extract the unique RGB values from your color palette image:
# Load the color palette
import os
import numpy as np
from skimage import io
palette = io.imread(os.path.join(os.getcwd(), 'color_palette.png'))
# Use `np.unique` following a reshape to get the RGB values
palette = palette.reshape(palette.shape[0]*palette.shape[1], palette.shape[2])
palette_colors = np.unique(palette, axis=0)
(Note that the axis argument for np.unique was added in numpy version 1.13.0, so you may need to upgrade numpy for this to work.)
Once you have palette_colors, you can pretty much use the code you already have to save the image, except you now add the different RGB values instead of copies of ~img to your img_rgba array.
for p in range(palette_colors.shape[0]):
    # Create an MxNx4 array (RGBA)
    img_rgba = np.zeros((img.shape[0], img.shape[1], 4), dtype=np.uint8)
    # Fill R, G and B with appropriate colors
    for c in range(3):
        img_rgba[:,:,c] = img.astype(np.uint8) * palette_colors[p,c]
    # For alpha just use the image again (makes background transparent)
    img_rgba[:,:,3] = img.astype(np.uint8) * 255
    # Save image
    imsave('img_col'+str(p)+'.png', img_rgba)
(Note that you need to use np.uint8 as datatype for your image, since binary images obviously cannot represent different colors.)
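Putting both snippets together, here is a self-contained sketch that swaps in a hypothetical in-memory palette (three solid colour blocks) for color_palette.png and skips the file I/O:

```python
import numpy as np

# hypothetical stand-in for the palette image: three solid colour columns
palette = np.zeros((2, 3, 3), dtype=np.uint8)
palette[:, 0] = (255, 0, 0)
palette[:, 1] = (0, 255, 0)
palette[:, 2] = (0, 0, 255)

# reshape to a list of pixels, then keep the unique RGB rows
palette_colors = np.unique(palette.reshape(-1, 3), axis=0)

# binary image as in the question
img = np.zeros((100, 100), dtype=bool)
img[25:75, 25:75] = 1

# one coloured RGBA copy per palette colour
for p in range(palette_colors.shape[0]):
    img_rgba = np.zeros(img.shape + (4,), dtype=np.uint8)
    img_rgba[..., :3] = img[..., None].astype(np.uint8) * palette_colors[p]
    img_rgba[..., 3] = img.astype(np.uint8) * 255
```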
