Find darkest color in image with python

I was just wondering if you could find the darkest colour in a greyscale image with Python. I would prefer using Pillow, but OpenCV would be fine.
I found this but couldn't make sense of it.
If this is simple just say.
Thanks
EDIT
The issue I have is that, given the rest of the script, the darkest colour is very unlikely to be black.

So in the image, after you read it using OpenCV or Pillow, and because it is grayscale, the darkest possible "color" is 0, which is black. Pixel values range from 0 to 255. If you want to find the darkest value actually present, you can just take the minimum of the pixel array.
import cv2

# Read the image in grayscale mode (flag 0)
img = cv2.imread("2.png", 0)

# The smallest pixel value is the darkest shade actually in the image
print(img.min())
"For grayscale images, the pixel value is a single number that represents the brightness of the pixel. The most common pixel format is the byte image, where this number is stored as an 8-bit integer giving a range of possible values from 0 to 255. Typically zero is taken to be black, and 255 is taken to be white."
Reference: https://homepages.inf.ed.ac.uk/rbf/HIPR2/value.htm
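Since you mentioned preferring Pillow, here is a minimal sketch of the same idea using Pillow and NumPy (assuming the same file name; convert("L") forces 8-bit grayscale in case the file is stored as RGB):
import numpy as np
from PIL import Image

# Open the image and force 8-bit grayscale ("L" = luminance)
img = np.asarray(Image.open("2.png").convert("L"))

# The minimum pixel value is the darkest shade actually present
print(img.min())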

Related

Evaluating Dots and Noise in an Image

I have an image that consists of small black dots, and in each dot's vicinity there is some noise that appears as a grayish smudge.
I'm trying to use some sort of image processing in Python in order to find both the number of (correct) dots and the number of noise smudges, as well as calculate their parameters (i.e. size).
I was thinking of using some sort of contour detection with a certain threshold, since the dots' borders are more distinct, but perhaps there's a better way that I'm not familiar with.
Thank you for your help!
Use the Pillow module to analyze each pixel, classifying it by the sum of its RGB values (assuming the image contains only black, white, and shades of grey):
Black: 0
Grey: 1-764
White: 765
Hope that helps
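A minimal sketch of that classification, assuming a hypothetical file dots.png and that "grey" means any sum strictly between pure black (0) and pure white (765):
from PIL import Image

# Hypothetical input file with black dots and grey noise
img = Image.open("dots.png").convert("RGB")

counts = {"black": 0, "grey": 0, "white": 0}
for r, g, b in img.getdata():
    total = r + g + b
    if total == 0:
        counts["black"] += 1
    elif total == 765:  # 255 * 3
        counts["white"] += 1
    else:
        counts["grey"] += 1

print(counts)
Note that counting pixels this way won't separate individual dots from smudges on its own; for that, the contour-detection idea from the question (e.g. thresholding followed by cv2.findContours) is the more usual route.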

is there a coding way to check whether a color palette follows the color schemes of a color wheel, like analogous, monochromatic, triadic, etc.?

I am eager to build a system which tells whether my color palette follows any color scheme from the color wheel, i.e., monochromatic, analogous, complementary, split complementary, triadic, square, or rectangle (tetradic).
I have been thinking about this problem for the last few days but couldn't come up with anything.
I don't have many clues about where to start; I would appreciate some initial ideas on how to proceed using Python.
I'm no colour-theorist and will happily remove my suggestion if someone with knowledge/experience contributes something more professional.
I guess you want to convert to HSL/HSV colourspace, which you can do with ImageMagick or OpenCV very simply.
For monochromatic, you'd look at the Hue channel (angle) and see if all the Hues are within say 10-15 degrees of each other.
For complementary, you'd be looking for 2 groups of Hue angles around 180 degrees apart.
For triadic, 3 clusters of angles separated by around 120 degrees. And so on. I don't know the more exotic schemes.
You can get HSV with OpenCV like this:
import cv2
# Open image as BGR
im = cv2.imread(XXX)
# Convert to HSV
HSV = cv2.cvtColor(im, cv2.COLOR_BGR2HSV)
Bear in mind that, for 8-bit images, OpenCV stores Hue angles at half their conventional values so that the 0..360 range fits into an unsigned 8-bit integer as 0..180. For float images, the range is the conventional 0..360.
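To illustrate the angle-based idea, here is a small sketch (not a full classifier; the 15-degree tolerance is an arbitrary choice, and it uses the standard-library colorsys module rather than OpenCV) that tests whether two palette colours are complementary:
import colorsys

def hue_degrees(rgb):
    # Hue of an (R, G, B) tuple with 0-255 channels, as an angle in degrees
    r, g, b = (c / 255.0 for c in rgb)
    h, _, _ = colorsys.rgb_to_hsv(r, g, b)
    return h * 360.0

def circular_distance(a, b):
    # Smallest angle between two hues on the colour wheel
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)

def is_complementary(rgb1, rgb2, tolerance=15.0):
    # Complementary colours sit roughly 180 degrees apart on the wheel
    gap = circular_distance(hue_degrees(rgb1), hue_degrees(rgb2))
    return abs(gap - 180.0) <= tolerance

print(is_complementary((255, 0, 0), (0, 255, 255)))  # red vs cyan -> True
The other schemes follow the same pattern: cluster the palette's hues, then compare the angular gaps between clusters against the scheme's expected spacing.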
Here is a little example with a colour wheel, where I split the image into Hue, Saturation and Value with ImageMagick and then lay out the channels beside each other with Hue on the left, Saturation in the centre and Value on the right:
magick colorwheel.png -colorspace HSV -separate +append separated.png
Input Image
Separated Image
Hopefully you can see that the Hue values go around the colour wheel in the left panel, that the Saturation decreases as you move away from the centre and that the Value increases as you move out radially.
You can hover a colour picker over, say, the green tones and see that they all have a similar Hue. So, if you were looking for complementary colours, hover over the two and see if they are 180 degrees of Hue apart, for example.

Scaling Image in Python makes it darker

Good day,
I have an image which I generate through a deep learning process. The image is RGB, and the values of the image range from -0.28 to 1.25. Typically I would rescale the image so that the values are floating point between 0 and 1, or integers between 0 and 255. However I have found that in my current experiment doing this has made my images much darker. The image type is np.array (float64).
If I plot the image using matplotlib.pyplot then the values of the original image get clipped, but the image is not darkened.
The problem with this is that I am unable to save this version of the image. plt.imsave('image.png', art) gives an error.
When I scale the image I get the below output which is dark. This image can be saved using plt.imsave().
Here is my scaling function:
def scale(img):
    # Stretch pixel values linearly to the full [0, 255] range
    return (img - img.min()) / (img.max() - img.min()) * 255
My questions:
1) Why am I not able to save the first (bright) version of my image? If scaling is the problem, then:
2) Why does scaling make the image dark?
Help is much appreciated.
1) Why am I not able to save my image in the first (bright) image?
It's hard to answer this without seeing the specific error you're getting, but my guess is it might have to do with the range of values in your image. Maybe negative values are an issue, or the fact that you have both negative floats and floats larger than 1.
If I create some fake RGB image data in the range [-0.28, 1.25] and try to save it with plt.imsave(), I get the following error:
ValueError: Floating point image RGB values must be in the 0..1 range.
2) Why does scaling make the image dark?
Scaling your image's pixel values will likely change the appearance.
Imagine you had a light image, where the values ranged from [200, 255]. When you scale, you stretch those values across [0, 255], so pixels that were previously bright (around 200) get mapped to black (0). If you have a generally bright image, it will seem darker after scaling. This seems to be the case for you.
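A tiny worked example of that remapping, with made-up values:
import numpy as np

bright = np.array([200.0, 220.0, 255.0])
scaled = (bright - bright.min()) / (bright.max() - bright.min()) * 255

# A previously bright pixel (200) is now black (0)
print(scaled)  # [  0.          92.72727273 255.        ]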
As a side note: I would suggest using Pillow or OpenCV rather than Matplotlib if you're doing lots of image-related work :)
EDIT
As @alkasm pointed out in a comment, when you use plt.imshow() to display the image, the values are clipped. This means that the first image has all negative values mapped to 0 and all values greater than 1 mapped to 1. The first image is clipped and saturated, making it appear that there are more dark and bright pixels than there should be.
So it's not that the second image is darker, it's that the first image isn't displayed properly.
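If you want to save exactly what plt.imshow() displays, one sketch (assuming art is the float64 RGB array from the question) is to apply the same clipping explicitly before saving:
import numpy as np
import matplotlib.pyplot as plt

# Stand-in for the generated image: float64 RGB values in roughly [-0.28, 1.25]
art = np.random.uniform(-0.28, 1.25, size=(64, 64, 3))

# Reproduce what plt.imshow() does implicitly: clip to the valid [0, 1] range
clipped = np.clip(art, 0.0, 1.0)

# A float RGB image in [0, 1] is valid input for imsave()
plt.imsave('image.png', clipped)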

how to extract the relative colour intensity in a black and white image in python?

Suppose I have got a black and white image, how do I convert the colour intensity at each point into a numerical value that represents its relative intensity?
I checked somewhere on the web and found the following:
Intensity = np.asarray(PIL.Image.open('test.jpg'))
What's the difference between asarray and array?
Besides, the shape of the array Intensity is '181L, 187L, 3L'. The size of the image test.jpg is 181x187, so what does the extra '3' represent?
And are there any other better ways of extracting the colour intensity of an image?
thank you.
The image is being opened as a color image, not as a black and white one. That is why the shape is 181x187x3: the 3 is there because each pixel is an RGB value. Black and white images are quite often actually stored in an RGB format. For an image array image, if np.all(image[:,:,0] == image[:,:,1]) (and likewise for the third channel), then you can just use any one of the channels (e.g. image[:,:,0]). Alternatively, you could take the mean with np.mean(image, axis=2). (As for asarray versus array: np.array copies its input by default, while np.asarray returns the input itself, without copying, when it is already an array of the right type.)
Note too that the range of values will depend on the format, and so depending upon what you mean by color intensity, you may need to normalize them. In the case of a jpeg, they are probably uint8s, so you may want image[:,:,0].astype('float')/255 or something similar.
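Putting those pieces together, a minimal sketch using Pillow's own grayscale conversion rather than picking a channel by hand:
import numpy as np
from PIL import Image

# convert("L") collapses RGB to a single 8-bit luminance channel
intensity = np.asarray(Image.open('test.jpg').convert('L'))

# Normalize to floats in [0, 1] as a relative intensity measure
relative = intensity.astype('float') / 255.0
print(relative.shape, relative.min(), relative.max())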

opencv, BGR2HSV creates lots of artifacts

This image is just an example. Top right is the original image, top left is the hue, bottom left the saturation and bottom right the value. As can easily be seen, both H and S are filled with artifacts. I want to reduce the brightness, so the result picks up a lot of these artifacts.
What am I doing wrong?
My code is simply:
import cv2

vc = cv2.VideoCapture(0)
# inside a loop, checking that ret is True
ret, frame = vc.read()
frame_hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
cv2.imshow("h", frame_hsv[:,:,0])
cv2.imshow("s", frame_hsv[:,:,1])
cv2.imshow("v", frame_hsv[:,:,2])
I feel there is a misunderstanding in your question. While the answer of Boyko Peranov is certainly true, there are no problems with the images you provided. The logic behind it is the following: your camera takes pictures in the RGB color space, which is by definition a cube. When you convert it to the HSV color space, all the pixels are mapped to the following cone:
The Hue (first channel of HSV) is the angle on the cone, the Saturation (second channel of HSV, called Chroma in the image) is the distance to the center of the cone and the Value (third channel of HSV) is the height on the cone.
The Hue channel is usually defined over 0-360 degrees, starting with red at 0 (in the case of 8-bit images, OpenCV uses the 0-180 range to fit an unsigned char, as stated in the documentation). But the thing is, two pixels with Hue values 0 and 359 are really close together in color. This is easier to see when flattening the HSV cone by taking only its outer surface (where Saturation is maximal):
Even if these values are perceptually close (perfectly red at 0 and red with a tiny bit of purple at 359), the two numbers are far apart. This is the cause of the "artifacts" you describe in the Hue channel: when OpenCV shows the channel to you in grayscale, it maps black to 0 and white to 359 (or 179 for 8-bit images). The colors are, in fact, really similar, but when mapped to grayscale they are displayed far apart. There are two ways to circumvent this counter-intuitive fact: you can re-cast the H channel into RGB space with a fixed saturation and value, which gives a representation closer to our perception (see the sketch below), or you can use a color space based on perception (such as the Lab color space), which won't give you these mathematical side effects.
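Here is a sketch of that first workaround; the random frame is just a stand-in so the example is self-contained:
import cv2
import numpy as np

# Stand-in for a captured frame
frame = np.random.randint(0, 256, (240, 320, 3), dtype=np.uint8)
frame_hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

# Keep the Hue channel but force Saturation and Value to their maximum,
# then convert back so hues 0 and 179 display as near-identical reds
hue_only = frame_hsv.copy()
hue_only[:, :, 1] = 255
hue_only[:, :, 2] = 255
hue_as_bgr = cv2.cvtColor(hue_only, cv2.COLOR_HSV2BGR)

cv2.imshow("hue as colour", hue_as_bgr)
cv2.waitKey(0)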
The reason why these artifact patches are square is explained by Boyko Peranov. JPEG compression works by replacing patches of pixels with bigger squares that approximate the patch they replace. If you set the compression quality really low when you create the jpg, you can see these squares appear even in the RGB image; the lower the quality, the bigger and more visible the squares. The mean value of such a square is a single value which, for tints of red, may end up being between 0 and 5 (displayed as black) or 355 and 359 (displayed as white). That explains why the "artifacts" are square-shaped.
We may also ask why more JPEG compression artifacts are visible in the hue channel. This is because of chroma subsampling: perceptual studies have shown that our eyes are less sensitive to rapid variations in color than to rapid variations in intensity, so when compressing, JPEG deliberately discards chroma information that we wouldn't notice anyway.
The story is similar for the varying white spots in the Saturation channel (your bottom left image). Those pixels are nearly black (on the tip of the cone), so their Saturation value can vary a lot without affecting the color much: it will always be near black. This is also a side effect of the HSV color space not being purely based on perception.
The conversion between RGB (or BGR for OpenCV) and HSV is (in theory) lossless. You can convince yourself of this: re-convert your HSV image back into RGB, and you get the exact same image you began with, with no artifacts added.
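A quick sketch to check the round-trip claim; note that for 8-bit images the Hue quantization to 0..180 can introduce small rounding differences, so the check reports the largest per-channel deviation rather than demanding exact equality:
import cv2
import numpy as np

# Stand-in for a captured frame
frame = np.random.randint(0, 256, (240, 320, 3), dtype=np.uint8)

hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
back = cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)

# Largest absolute per-channel difference after the round trip
print(cv2.absdiff(frame, back).max())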
You are working with a lossy compressed image, hence the rectangular artifacts. With video you have low exposure time, possible bandwidth limitations, etc., so the overall picture quality degrades. You can:
Use a series of still shots by using Capture instead of VideoCapture or
Extract 5-10 video frames, and average them.
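A sketch of the second option (assuming a working camera at index 0; averaging in float avoids uint8 overflow):
import cv2
import numpy as np

vc = cv2.VideoCapture(0)

# Grab a handful of frames and average them to suppress noise
frames = []
for _ in range(10):
    ret, frame = vc.read()
    if ret:
        frames.append(frame.astype(np.float32))
vc.release()

averaged = np.mean(frames, axis=0).astype(np.uint8)
cv2.imwrite("averaged.png", averaged)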
