I'm using the colorsys lib of Python:
import colorsys
colorsys.rgb_to_hsv(64, 208, 61)
Output: (0.16666666666666666, 0, 208)
But this output is wrong: an online RGB to HSV converter gives a different result (converter screenshot omitted).
What's going on?
colorsys takes its values in the range 0 to 1:
Coordinates in all of these color spaces are floating point values. In the YIQ space, the Y coordinate is between 0 and 1, but the I and Q coordinates can be positive or negative. In all other spaces, the coordinates are all between 0 and 1.
You need to divide each of the values by 255.0 to get the expected output:
>>> colorsys.rgb_to_hsv(64/255., 208/255., 61/255.)
(0.3299319727891157, 0.7067307692307692, 0.8156862745098039)
To avoid errors like this you can use colorir. It uses colorsys under the hood but allows for format specification:
>>> from colorir import sRGB
>>> sRGB(64, 208, 61).hsv(round_to=1)
HSV(118.8, 0.7, 0.8)
Related
I have a numpy array where each element has 3 values (RGB) from 0 to 255, and it spans from [0, 0, 0] to [255, 255, 255] with 256 elements evenly spaced. I want to plot it as a 16 by 16 grid but have no idea how to map the colors (as the numpy array) to the data to create the grid.
import numpy as np
# create an evenly spaced RGB representation as integers
all_colors_int = np.linspace(0, (255 << 16) + (255 << 8) + 255, dtype=int)
# convert the evenly spaced integers to RGB representation
rgb_colors = np.array(tuple(((((255<<16)&k)>>16), ((255<<8)&k)>>8, (255)&k) for k in all_colors_int))
# data to fit the rgb_colors as colors into a plot as a 16 by 16 numpy array
data = np.array(tuple((k,p) for k in range(16) for p in range(16)))
So, how do I map rgb_colors as colors onto data to create the grid plot?
There's quite a bit going on here, and I think it's valuable to talk about it.
linspace
I suggest you read the linspace documentation.
https://numpy.org/doc/stable/reference/generated/numpy.linspace.html
If you want a 16x16 grid, then you should start by generating 16x16 = 256 values. However, if you inspect the shape of the all_colors_int array, you'll notice that it contains only 50 values, which is the default of linspace's num argument.
all_colors_int = np.linspace(0, (255 << 16) + (255 << 8) + 255, dtype=int)
print(all_colors_int.shape) # (50,)
Make sure you specify this third 'num' argument to generate the correct quantity of RGB pixels.
As a further side note, (255 << 16) + (255 << 8) + 255 is equivalent to (2^24)-1. The 2^N-1 formula is usually what's used to fill the first N bits of an integer with 1's.
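For example, a corrected call (a sketch; num=256 gives exactly one integer per cell of a 16x16 grid) could look like:
import numpy as np

# 256 evenly spaced integers covering the full 24-bit RGB range
all_colors_int = np.linspace(0, (1 << 24) - 1, num=256, dtype=int)
print(all_colors_int.shape)  # (256,)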
numpy is faster
On your next line, your for loop manually iterates over all of the elements in python.
rgb_colors = np.array(tuple(((((255<<16)&k)>>16), ((255<<8)&k)>>8, (255)&k) for k in all_colors_int))
While this might work, this isn't considered the correct way to use numpy arrays.
You can perform bitwise operations directly on the entire numpy array, without the python for loop. For example, to extract bits [16, 24) (which is usually the red channel in an RGB integer):
# Shift over so the 16th bit is now bit 0, then select only the first 8 bits.
RedChannel = (all_colors_int >> 16) & 255
Building the grid
There are many ways to do this in numpy; however, I would suggest the following approach.
Images are usually represented with a 3-dimensional numpy array, usually of the form
(HEIGHT, WIDTH, CHANNELS)
First, reshape your numpy int array into the 16x16 grid that you want.
reshaped = all_colors_int.reshape((16, 16))
Again, the numpy documentation is really great, give it a read:
https://numpy.org/doc/stable/reference/generated/numpy.reshape.html
Now, extract the red, green and blue channels, as described above, from this reshaped array. If you operate directly on the numpy array, you won't need a nested for-loop to iterate over the 16x16 grid; numpy will handle this for you.
RedChannel = (reshaped >> 16) & 255
GreenChannel = ... # TODO
BlueChannel = ... # TODO
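In case you want to check your work, the remaining two channels follow exactly the same pattern as the red one:
GreenChannel = (reshaped >> 8) & 255
BlueChannel = reshaped & 255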
And then finally, we can convert our three 16x16 grids into a single 16x16x3 grid using the numpy stack function:
https://numpy.org/doc/stable/reference/generated/numpy.stack.html
grid_rgb = np.stack((
RedChannel,
GreenChannel,
BlueChannel
), axis=2).astype(np.uint8)
Notice two things here:
When we 'stack' arrays, we create a new dimension. The axis=2 argument tells numpy to add this new dimension at index 2 (i.e. the third axis). Without this, the shape of our grid would be (3, 16, 16) instead of (16, 16, 3).
The .astype(np.uint8) casts all of the values in this numpy array into a uint8 data type. This is so the grid is compatible with other image manipulation libraries, such as OpenCV and PIL.
Show the image
We can use PIL for this.
If you want to use OpenCV, then remember that OpenCV interprets images as BGR not RGB and so your channels will be inverted.
# Show Image
from PIL import Image
Image.fromarray(grid_rgb).show()
If you've done everything right, you'll see an image... And it's all gray.
Why is it gray?
There are over 16 million possible colours (2^24). Selecting only 256 evenly spaced values between 0 and 2^24 - 1 happens to land exactly on multiples of 65793 = 0x010101, i.e. on pixels whose R, G and B values are all equal, which results in an image without any colour.
If you want to see some colours, you'll need to either show a bigger image (e.g. 256x256) or use a grid dimension that's not a power of two. For example, try a prime number, as this adds a small amount of pseudo-randomness to the RGB selection, e.g. try 17; see the sketch below.
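Pulling the steps together, here's a minimal sketch of the 17x17 version (reusing the variable names and the PIL import from above):
n = 17  # doesn't divide the 24-bit range evenly, so the selected integers are no longer grey
all_colors_int = np.linspace(0, (1 << 24) - 1, num=n * n, dtype=int)
reshaped = all_colors_int.reshape((n, n))
grid_rgb = np.stack((
    (reshaped >> 16) & 255,  # red
    (reshaped >> 8) & 255,   # green
    reshaped & 255,          # blue
), axis=2).astype(np.uint8)
Image.fromarray(grid_rgb).show()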
Best of luck.
Based solely on the title 'How to plot a normalized RGB map' rather than the approach you've provided, it appears that you'd like to plot a colour spectrum in RGB.
The following approach can be taken to manually construct this.
import cv2
import matplotlib.pyplot as plt
import numpy as np
h = np.repeat(np.arange(0, 180), 180).reshape(180, 180)
s = np.ones((180, 180))*255
v = np.ones((180, 180))*255
hsv = np.stack((h, s, v), axis=2).astype('uint8')
rgb = cv2.cvtColor(hsv, cv2.COLOR_HSV2RGB)
plt.imshow(rgb)
Explanation:
It's generally easier to construct (and decompose) a colour palette using the HSV (hue, saturation, value) colour scale; where hue is the colour itself, saturation can be thought of as the intensity and value as the distance from black. Therefore, there's really only one value to worry about, hue. Saturation and value can be set to 255, for 'full intensity'.
cv2 is used here to simply convert the constructed HSV colourscale to RGB and matplotlib is used to plot the image. (I didn't use cv2 for plotting as it doesn't play nicely with Jupyter.)
The actual spectrum values are constructed in numpy.
Breakdown:
Create the colour spectrum of hue and plug 255 in for the saturation and value. Why is 180 used? Because OpenCV stores hue for 8-bit images in the range 0-179 (degrees divided by two) so that it fits in a uint8.
h = np.repeat(np.arange(0, 180), 180).reshape(180, 180)
s = np.ones((180, 180))*255
v = np.ones((180, 180))*255
Stack the three channels H+S+V into a 3-dimensional array, convert the array values to unsigned 8-bit integers, and have cv2 convert from HSV to RGB for us, to be lazy and save us working out the math.
hsv = np.stack((h, s, v), axis=2).astype('uint8')
rgb = cv2.cvtColor(hsv, cv2.COLOR_HSV2RGB)
Plot the RGB image.
plt.imshow(rgb)
I am new to OpenCV and Python. I want to apply both a Gaussian filter and a median filter after first adding noise to the image. I have got successful output for the Gaussian filter, but I could not get the median filter to work. Can anyone please explain how to perform median filtering in OpenCV with Python on a noisy image? Following is my code:
import numpy as np
import cv2
img = cv2.imread('lizard.jpg').astype(np.float32)
gaussian_blur = np.array([[1,2,1],[2,4,2],[1,2,1]],dtype=np.float32)
gaussian_blur = gaussian_blur/np.sum(gaussian_blur)
img_noise = img + np.random.uniform(-20,20,size=np.shape(img))
cv2.imwrite('gt3_plus_noise.jpg',img_noise)
median = cv2.medianBlur(img_noise.astype(np.float32),(3),0)
cv2.imshow('Median Blur',median)
cv2.waitKey()
cv2.destroyAllWindows()
img_blur_g = cv2.filter2D(img_noise.astype(np.float32), -1,gaussian_blur)
cv2.imwrite('gt3_noise_filtered_gaussian.jpg',img_blur_g)
Output images (omitted): noise filtered gaussian, noise image, median filter image.
OpenCV has the function medianBlur in Python and C++ to perform median filtering. You can get details from here: http://docs.opencv.org/2.4/modules/imgproc/doc/filtering.html#medianblur
To use this function, follow this code snippet:
n = 3  # n*n is the size of the filter
output_image = cv2.medianBlur(input_image, n)
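If you are starting from the float image in the question, one approach (a sketch reusing the img_noise array from your code) is to clip and convert back to uint8 before filtering, since 8-bit input is the most widely supported case for medianBlur:
img_noise_u8 = np.clip(img_noise, 0, 255).astype(np.uint8)  # clamp to 0-255 and cast to 8-bit
median = cv2.medianBlur(img_noise_u8, 3)                     # 3x3 median filter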
When OpenCV has a float image, it assumes that the range is between 0 and 1. However, your image still has values between 0 and 255 (and maybe a little above and below that). This is fine for manipulation, but in order to view your image you'll either need to normalize it to the range 0 to 1, or convert back to a uint8 image and saturate the values. Currently your image is just overflowing past 1, which is the assumed max value for a float image. The colors only show in the darker regions of the image because those values are very small; specifically, less than 1.
Saturating the values for a uint8 image means anything below 0 is fixed at 0 and anything above 255 is fixed at 255. Normal numpy operations do not saturate values; they overflow and roll over (so np.array(-1).astype(np.uint8) ==> 255, meaning any dark values that have some bit subtracted off will turn bright). See here for more about saturation.
This problem isn't too hard to solve, and there are a number of solutions. An explicit way is to simply fix the values greater than 255 at 255 and fix the values less than 0 to 0 and convert to a uint8 image:
>>> img = np.array([[150, 0], [255, 150]], dtype=np.float32)
>>> noise = np.array([[20, -20], [20, -20]], dtype=np.float32)
>>> noisy_img = img+noise
>>> noisy_img
array([[ 170., -20.],
[ 275., 130.]], dtype=float32)
>>> noisy_img[noisy_img>255] = 255
>>> noisy_img[noisy_img<0] = 0
>>> noisy_img = np.uint8(noisy_img)
>>> noisy_img
array([[170, 0],
[255, 130]], dtype=uint8)
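np.clip does the same clamping in one step, if you prefer (reusing img and noise from above):
>>> np.clip(img + noise, 0, 255).astype(np.uint8)
array([[170,   0],
       [255, 130]], dtype=uint8)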
You can also use cv2.convertScaleAbs() to cast with saturation, which is simpler but less explicit. Note that it takes the absolute value of negative inputs rather than clipping them to zero, which is why the -20 below ends up as 20 rather than 0:
>>> img = np.array([[150, 0], [255, 150]], dtype=np.float32)
>>> noise = np.array([[20, -20], [20, -20]], dtype=np.float32)
>>> noisy_img = img+noise
>>> cv2.convertScaleAbs(noisy_img)
array([[170, 20],
[255, 130]], dtype=uint8)
I am trying to make a tracking program that takes an image and displays where the object with the specified color is:
example: https://imgur.com/a/8LR40
To do this I am using RGB right now, but it is really hard to work with, so I want to convert it into a hue so it is easier to work with. I am trying to use colorsys, but after doing some research I have no idea what parameters it expects and what it returns. I have tried to get a match using colorizer.org but I get some nonsense.
>>> import colorsys
>>> colorsys.rgb_to_hsv(45,201,18)
(0.3087431693989071, 0.9104477611940298, 201)
Already colorsys is not acting as documented, because https://docs.python.org/2/library/colorsys.html says that the output is always a float between 0 and 1, but the value is 201. That is also impossible, as in standard HSV the value is between 0 and 100.
My questions are:
What does colorsys expect as input?
How do I convert the output to standard HSV? (hue = 0-360, saturation = 0-100, value = 0-100)
Coordinates in all of these color spaces are floating point values. In the YIQ space, the Y coordinate is between 0 and 1, but the I and Q coordinates can be positive or negative. In all other spaces, the coordinates are all between 0 and 1.
https://docs.python.org/3/library/colorsys.html
You must scale from 0-255 down to 0-1, i.e. divide your RGB values by 255. If using Python 2, make sure not to do floor division (divide by 255.0).
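For example (a sketch using the RGB values from your question), normalize first and then rescale the result to the conventional ranges:
>>> import colorsys
>>> h, s, v = colorsys.rgb_to_hsv(45 / 255., 201 / 255., 18 / 255.)
>>> h * 360, s * 100, v * 100  # approximately (111.1, 91.0, 78.8)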
I am having the following problem: I am saving 16-bit tiff images with a microscope and I need to analyze them. I want to do that with numpy and matplotlib, but when I want to do something as simple as plotting the image in green (I will later need to superpose other images), it fails.
Here is an example when I try to plot the image either as a RGB array, or with the default jet colormap.
import numpy as np
import matplotlib.pyplot as plt
import cv2
imageName = 'image.tif'
# image as luminance
img1 = cv2.imread(imageName,-1)
# image as RGB array
shape = (img1.shape[0], img1.shape[1], 3)
img2 = np.zeros(shape,dtype='uint16')
img2[...,1] += img1
fig = plt.figure(figsize=(20,8))
ax1 = fig.add_subplot(1,2,1)
ax2 = fig.add_subplot(1,2,2)
im1 = ax1.imshow(img1,interpolation='none')
im2 = ax2.imshow(img2,interpolation='none')
fig.show()
Which to me yields the following figure:
I am sorry if the question is too basic, but I have no idea why the right plot is showing these artifacts. I would like the green-scale image to look like the left plot does (ImageJ also yields something similar to the left plot).
Thank you very much for your collaboration.
I find the right plot much more artistic...
matplotlib is rather complicated when it comes to interpreting images. It goes roughly as follows:
if the image is a NxM array of any type, it is interpreted through the colormap (autoscale, if not indicated otherwise). (In principle, if the array is a float array scaled to 0..1, it should be interpreted as a grayscale image. This is what the documentation says, but in practice this does not happen.)
if the image is a NxMx3 float array, the RGB components are interpreted as RGB components between 0..1. If the values are outside of this range, they are taken with positive modulo 1, i.e. 1.2 -> 0.2, -1.7 -> 0.3, etc.
if the image is a NxMx3 uint8 array, it is interpreted as a standard image (0..255 components)
if the image is NxMx4, the interpretation is as above, but the fourth component is the opacity (alpha)
So, if you give matplotlib a NxMx3 array of integers other than uint8 or float, the results are not defined. However, by looking at the source code, the odd behaviour can be understood:
if A.dtype != np.uint8:
    A = (255*A).astype(np.uint8)
where A is the image array. So, if you give it uint16 values 0, 1, 2, 3, 4..., you get 0, 255, 254, 253, ... Yes, it will look very odd. (IMHO, the interpretation could be a bit more intuitive, but this is how it is done.)
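A quick demonstration of that rollover:
>>> import numpy as np
>>> a = np.arange(5, dtype=np.uint16)  # 0, 1, 2, 3, 4
>>> (255 * a).astype(np.uint8)
array([  0, 255, 254, 253, 252], dtype=uint8)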
In this case the easiest solution is to divide the array by 65535., and then the image should be as expected. Also, if your original image is truly linear, then you'll need to make the reverse gamma correction:
img1_corr = (img1 / 65535.)**(1/2.2)
Otherwise your middle tones will be too dark.
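A minimal sketch applying this to the green-channel image from the question (assuming img1 is the uint16 array loaded with cv2.imread, and reusing ax2 from the question's code):
img1_corr = (img1 / 65535.)**(1/2.2)          # normalize to 0..1 and gamma-correct
img2_float = np.zeros(img1.shape + (3,))      # float RGB array
img2_float[..., 1] = img1_corr                # data goes into the green channel only
ax2.imshow(img2_float, interpolation='none')  # values now interpreted as 0..1 RGB components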
I approached this by normalising the image by the maximum value of the given datatype, which, as DrV said, is 65535 for uint16. The helper function would look something like:
def normalise_bits(img):
    bits = 1.0  # catch all
    try:
        # Test integer value, e.g. np.uint16
        bits = np.iinfo(img.dtype).max
    except ValueError:
        # Try float maximum, e.g. np.float32
        bits = np.finfo(img.dtype).max
    return (img / bits).astype(float)
Then the image can be handled by matplotlib as a float in [0.0, 1.0].
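A quick usage example (assuming img1 is the uint16 image loaded in the question):
plt.imshow(normalise_bits(img1), cmap='gray')  # or build a green RGB array from the normalized values
plt.show()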
I have a number between 0 and 1, and I would like to convert it into the corresponding RGB components in a blue scale.
So, when the number is 0, I would like to obtain [255,255,255] (i.e. white), while if the number is 1 I would like to obtain [0,0,255] (blue).
Do you know a formula or how this can be implemented with Python?
It would be nice to know also how to convert the 0-1 number to other color scales, for example green (in the sense that 0 corresponds again to [255,255,255], while 1 corresponds to [0,255,0]).
Thanks in advance for any help you can provide!
def green(brightness):
    brightness = round(255 * brightness)  # convert from 0.0-1.0 to 0-255
    return [255 - brightness, 255, 255 - brightness]

def blue(brightness):
    brightness = round(255 * brightness)  # convert from 0.0-1.0 to 0-255
    return [255 - brightness, 255 - brightness, 255]

# etc., green(1.0) -> [0, 255, 0]
That is a strange "scale", since it goes from white to blue, rather than from black ([0, 0, 0]) to blue.
Anyway, it's just a linear interpolation:
def make_blue(alpha):
    invalpha = 1 - alpha
    scaled = int(255 * invalpha)
    return [scaled, scaled, 255]
Extending this to other color components should be obvious.
See also this answer for the general question of blending between two colors. In your case, one of the colors is white, the other one is blue for this case, but you also wondered about red and green.
Use the HSL color space to get the correct value. Define the color with the hue; for pure blue that is 240/360 = 2/3 (a value like 0.6 is close, but slightly cyan). The saturation can stay at its maximum of 1, and your number controls the lightness: 0 means black, 0.5 is the pure color, and 1 is white, so we only use the range from 0.5 to 1 (your 0 maps to lightness 1 for white, your 1 maps to lightness 0.5 for blue). After you have defined your HSL color, convert it to the RGB color space.
Putting it all together:
import colorsys
colorsys.hls_to_rgb(2 / 3.0, 1 - YOURNUMBER * 0.5, 1)  # arguments are (hue, lightness, saturation)
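A fuller sketch (the function name blue_scale is just illustrative) that also scales the result back to 0-255 integers:
import colorsys

def blue_scale(x):
    # x in [0, 1]: 0 -> white [255, 255, 255], 1 -> blue [0, 0, 255]
    r, g, b = colorsys.hls_to_rgb(2 / 3.0, 1 - 0.5 * x, 1)
    return [int(round(255 * r)), int(round(255 * g)), int(round(255 * b))]

print(blue_scale(0))   # [255, 255, 255]
print(blue_scale(1))   # [0, 0, 255]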