I'm just looking for an idea/concept to solve my problem.
I need to CHECK that the color of a surface does not exceed a certain gray level. So I thought of calculating its luminance.
The problem is that a color like #BCB0F5 gives me an acceptable gray level, yet the surface must not look that blue to the human eye. It must look, to human eyes, just like a certain gray level (black and white).
How can I solve this problem?
Thank you for any hints.
In a perceptual model of colour, we can talk of a particular colour's luminance and chromaticity (its "brightness" and its "quality"). By converting your samples from RGB to CIELAB (via ColorPy, say), you can filter out colours that are brighter than your desired grey (L_sample > L_grey) and whose distance from the white point is greater than a just-noticeable difference (e.g. sqrt(a_sample**2 + b_sample**2) > 2.3).
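Here is a minimal sketch of that filter, assuming scikit-image for the RGB-to-CIELAB conversion (ColorPy, as suggested above, would work the same way); the target L and the JND threshold are illustrative:

import numpy as np
from skimage.color import rgb2lab

def is_acceptable_grey(hex_colour, max_l=50.0, jnd=2.3):
    # Parse '#RRGGBB' into 0-255 components
    r, g, b = (int(hex_colour[i:i+2], 16) for i in (1, 3, 5))
    # rgb2lab expects floats in [0, 1], shaped as an image
    L, a, bb = rgb2lab(np.array([[[r, g, b]]]) / 255.0)[0, 0]
    chroma = np.hypot(a, bb)   # distance from the neutral (grey) axis
    return L <= max_l and chroma <= jnd

print(is_acceptable_grey('#BCB0F5'))  # False: too bright and visibly bluish
print(is_acceptable_grey('#6E6E6E'))  # True: a mid grey below L=50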
I have an image that consists of small black dots, and in each dot's vicinity there is some noise that appears as a grayish smudge.
I'm trying to use some sort of image processing in Python to find both the number of (correct) dots and the number of noise smudges, as well as calculate their parameters (e.g. size).
I was thinking of using some sort of contour detection with a certain threshold, since the dots' borders are more distinct, but perhaps there's a better way that I'm not familiar with.
Thank you for your help!
Use the Pillow module to analyze each pixel and compare the sum of its RGB values (assuming the image is only black and white) against the ranges below; a short sketch follows the list:
Black: 0
Grey: 1-764
White: 765
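A minimal sketch of this sum check with Pillow; the file name is a placeholder:

from PIL import Image

im = Image.open('dots.png').convert('RGB')   # placeholder file name
counts = {'black': 0, 'grey': 0, 'white': 0}
for r, g, b in im.getdata():
    s = r + g + b
    if s == 0:
        counts['black'] += 1
    elif s == 765:
        counts['white'] += 1
    else:
        counts['grey'] += 1
print(counts)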
Hope that helps
Regarding the following cv2.inRange(...) invocation:
mask = cv2.inRange(quantized_img, color, color)
Must the 'quantized_img' and 'color' arguments be strictly in HSV, or is it OK to have an RGB image and an RGB 'color'? RGB seems to work for me, but all the examples I could find are HSV-based, so I'm concerned about correct usage.
Thanks!
In general, use whatever color space you like. RGB/BGR is fine, HSV is fine, something completely made up (with cv.transform) is fine too.
inRange spans a "cube".
Think about it. Imagine a 3D plot with R, G, B axes, or with H, S, V axes. In RGB space, the faces of the cube are aligned with those RGB axes. In HSV space, the faces of the cube are aligned with those axes instead.
Now, a cube spanned in RGB space, when transformed into HSV space, is not aligned with the axes in HSV space. In fact it's not even a cube anymore, but likely some kind of torus or section of a cone or something. Same goes the other way around.
If the region of values you're interested in, in whatever space you choose, is flat or even stick-shaped (rather than a mostly spherical cloud), the cube you have to span may align very badly with that region and would have to include a lot of values you aren't interested in.
So you move into another color space where your values of interest are somewhat better aligned with the axes in that space. Then the cube spanned by inRange fits your purpose better.
Imagine a "stick" in RGB space going from the black corner to the white corner. It represents "colors" with no saturation to them (because colors are in the other six corners of the cube). Try spanning a cube over that area. Doesn't fit well.
In HSV space however, it's trivial. Usually it's visualized as a cylinder/inverted cone though... span a thin cylinder in the center: any Hue (angle), any Value (height), with very low Saturation (close to the center axis). If you took HSV as a cube, you'd span a thin wall instead. And it all would fit very well.
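To make the "thin cylinder" concrete, here is a minimal sketch that selects near-greys by spanning any Hue and Value but only low Saturation; the file name and the saturation cutoff are illustrative assumptions:

import cv2
import numpy as np

bgr = cv2.imread('input.png')                  # placeholder file name
hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
# any Hue (0..180 in 8-bit OpenCV), low Saturation, any Value
lower = np.array([0, 0, 0], dtype=np.uint8)
upper = np.array([180, 40, 255], dtype=np.uint8)
grey_mask = cv2.inRange(hsv, lower, upper)
# The same region in RGB is a diagonal stick from black to white,
# which no axis-aligned inRange cube can fit tightly.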
The explanation given by @Christoph Rackwitz is completely correct. I'd just like to add a few tips I have observed.
HSV and Lab color spaces are the best ones for color segmentation.
Keep BGR color space as probably the last option.
Do not just blindly start hunting for your color's range in HSV or Lab. Look at other methods too.
Other methods include:
Visualize each color channel of HSV and Lab separately as a grayscale image. You might spot a pattern there.
One thing that helped in my case: I applied Otsu's thresholding to the "Hue" and "Saturation" channels of my image and then performed a bitwise OR on their outputs (see the sketch below). The final image had everything I needed, without any errors. Experiment on your own input images to find such patterns; it helps a lot.
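A minimal sketch of that Otsu trick, assuming an input image on disk (the file name is a placeholder):

import cv2

bgr = cv2.imread('input.png')                  # placeholder file name
hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
h, s, v = cv2.split(hsv)

# Otsu picks the threshold automatically for each channel
_, h_mask = cv2.threshold(h, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
_, s_mask = cv2.threshold(s, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
mask = cv2.bitwise_or(h_mask, s_mask)
cv2.imwrite('mask.png', mask)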
I want to build a system that tells whether my color palette follows any of the color schemes from the color wheel, i.e. monochromatic, analogous, complementary, split-complementary, triadic, square, or rectangle (tetradic).
I have been thinking about this problem for the last few days but couldn't come up with anything.
I don't have many clues about where to start, so I would appreciate some initial ideas on how to proceed using Python.
I'm no colour-theorist and will happily remove my suggestion if someone with knowledge/experience contributes something more professional.
I guess you want to convert to HSL/HSV colourspace, which you can do with ImageMagick or OpenCV very simply.
For monochromatic, you'd look at the Hue channel (angle) and see if all the Hues are within say 10-15 degrees of each other.
For complementary, you'd be looking for 2 groups of Hue angles around 180 degrees apart.
For triadic, 3 clusters of angles separated by around 120 degrees. And so on. I don't know the more exotic schemes.
You can get HSV with OpenCV like this:
import cv2
# Open image as BGR
im = cv2.imread(XXX)
# Convert to HSV
HSV = cv2.cvtColor(im, cv2.COLOR_BGR2HSV)
Bear in mind that, for 8-bit images, OpenCV stores Hue angles at half their conventional values so that the 0..360 range fits into an unsigned 8-bit integer as 0..180. When dealing with floats, the range is the conventional 0..360.
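For the scheme tests themselves, here is a minimal sketch, assuming the palette is a small list of RGB tuples; the tolerance is an illustrative assumption:

import colorsys

def hue_deg(rgb):
    r, g, b = (c / 255.0 for c in rgb)
    return colorsys.rgb_to_hsv(r, g, b)[0] * 360.0

def is_complementary(palette, tol=15.0):
    hues = [hue_deg(c) for c in palette]
    for h in hues[1:]:
        d = abs(h - hues[0]) % 360.0
        d = min(d, 360.0 - d)        # shortest angular distance
        if not (d <= tol or abs(d - 180.0) <= tol):
            return False
    return True

print(is_complementary([(255, 0, 0), (0, 255, 255)]))  # red vs cyan: True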
Here is a little example with a colour wheel, where I split the image into Hue, Saturation and Value with ImageMagick and then lay out the channels beside each other with Hue on the left, Saturation in the centre and Value on the right:
magick colorwheel.png -colorspace HSV -separate +append separated.png
Input Image
Separated Image
Hopefully you can see that the Hue values go around the colour wheel in the left panel, that the Saturation decreases as you move away from the centre and that the Value increases as you move out radially.
You can hover a colour picker over, say, the green tones and see that they all have a similar Hue. So, if you were looking for complementary colours, hover over the two and see if they are 180 degrees of Hue apart, for example.
I have multiple grayscale images in which each image shows the sun's reflection, also known as glare, as a bright glaring spot. It looks like a bright white blob which I want to remove. It is basically the front portion of a car image, where the sun's reflection falls on the front steel grille of the car.
I want to remove this reflection as much as possible. I would appreciate it if anyone could point me to a good algorithm, preferably in Python, that I can leverage to remove this glare and pre-process the images as much as possible.
I tried applying a threshold to the image pixels and then setting anything above 200 to a value of 128. It doesn't work very well because other parts of the image also contain white, and those get affected too.
Do not forget to add some sample images...
I would first identify the sun spot by intensity and by the shape of the graph intensity = f(distance from spot middle); it may have a distinct shape that could be used to identify the spot more reliably.
After that I would bleed colours from the spot's outer area into its inside, recolouring the spot with its surrounding colour:
find all spot pixels that are next to non-spot pixels
recolour them to the average of their neighbouring non-spot pixels
clear them in the spot mask
loop until no spot pixel is left in the mask
[notes]
Without any input images to test, this is just a theory. Also, if you have the source RGB image, not just gray, the colour patterns may be used to help identify the spots as well, by checking for saturated whites and/or some rainbow-like pattern. A sketch of the bleeding loop follows.
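A minimal sketch of the bleeding loop with OpenCV; the glare threshold and file name are illustrative assumptions, and OpenCV's cv2.inpaint (INPAINT_TELEA) implements much the same idea if you prefer a ready-made routine:

import cv2
import numpy as np

gray = cv2.imread('car.png', cv2.IMREAD_GRAYSCALE)   # placeholder file name
_, spot = cv2.threshold(gray, 220, 255, cv2.THRESH_BINARY)

result = gray.copy()
kernel = np.ones((3, 3), np.uint8)
while True:
    eroded = cv2.erode(spot, kernel)
    border = cv2.subtract(spot, eroded)   # spot pixels next to non-spot ones
    if cv2.countNonZero(border) == 0:
        break
    for y, x in zip(*np.nonzero(border)):
        y0, x0 = max(y - 1, 0), max(x - 1, 0)
        neigh = result[y0:y + 2, x0:x + 2][spot[y0:y + 2, x0:x + 2] == 0]
        if neigh.size:
            result[y, x] = int(neigh.mean())   # average of non-spot neighbours
    spot = eroded                              # clear recoloured pixels from mask

cv2.imwrite('deglared.png', result)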
This image is just an example. Top right is the original image, top left is the hue, bottom left the saturation, and bottom right the value. As can easily be seen, both H and S are full of artifacts. I want to reduce the brightness, but the result picks up a lot of these artifacts.
What am I doing wrong?
My code is simply:
import cv2

vc = cv2.VideoCapture(0)
while True:
    ret, frame = vc.read()
    if not ret:
        break
    frame_hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    cv2.imshow("h", frame_hsv[:, :, 0])
    cv2.imshow("s", frame_hsv[:, :, 1])
    cv2.imshow("v", frame_hsv[:, :, 2])
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
I feel there is a misunderstanding in your question. While the answer of Boyko Peranov is certainly true, there are no problems with the images you provided. The logic behind it is the following: your camera takes pictures in the RGB color space, which is by definition a cube. When you convert it to the HSV color space, all the pixels are mapped to the following cone:
The Hue (first channel of HSV) is the angle on the cone, the Saturation (second channel of HSV, called Chroma in the image) is the distance to the center of the cone and the Value (third channel of HSV) is the height on the cone.
The Hue channel is usually defined between 0-360 and starts with red at 0 (in the case of 8-bit images, OpenCV uses the 0-180 range to fit an unsigned char, as stated in the documentation). But the thing is, two pixels with Hue values 0 and 359 are really close together in color. This can be seen more easily by flattening the HSV cone and taking only its outer surface (where Saturation is maximal):
Even though these values are perceptually close (perfectly red at 0 and red with a tiny bit of purple at 359), the two numbers are far apart. This is the cause of the "artifacts" you describe in the Hue channel. When OpenCV shows the channel to you in grayscale, it maps black to 0 and white to 359. They are, in fact, really similar colors, but mapped to grayscale they are displayed far apart. There are two ways to circumvent this counter-intuitive fact: you can re-cast the H channel into RGB space with fixed saturation and value, which gives a representation closer to our perception, or you can use another color space based on perception (such as the Lab color space), which won't give you these mathematical side effects.
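A minimal sketch of the suggested re-casting: display the Hue channel with fixed, maximal Saturation and Value so that 0 and 179 look alike (the file name is a placeholder):

import cv2
import numpy as np

bgr = cv2.imread('input.png')                  # placeholder file name
hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
h = hsv[:, :, 0]
full = np.full_like(h, 255)                    # fixed S and V
hue_vis = cv2.cvtColor(cv2.merge([h, full, full]), cv2.COLOR_HSV2BGR)
cv2.imwrite('hue_vis.png', hue_vis)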
The reason why these artifact patches are square is explained by Boyko Peranov. JPEG compression works by replacing blocks of pixels with squares that approximate the patch they replace. If you set the compression quality really low when you create the JPEG, you can see these squares appear even in the RGB image; the lower the quality, the bigger and more visible the squares. The mean value of such a square is a single value which, for tints of red, may end up between 0 and 5 (displayed as black) or between 355 and 359 (displayed as white). That explains why the "artifacts" are square-shaped.
We may also ask ourselves why there are more JPEG compression artifacts visible in the Hue channel. This is because of chroma subsampling: perceptual studies have shown that our eyes are less sensitive to rapid variations in color than to rapid variations in intensity, so during compression JPEG deliberately discards chroma information, because we won't notice it anyway.
The story is similar for the varying white spots in the Saturation channel (your bottom-left image). You are looking at pixels that are nearly black (at the tip of the cone). There the Saturation value can vary a lot without affecting the color of the pixel much: it will always be near black. This is another side effect of the HSV color space not being purely based on perception.
The conversion between RGB (or BGR for OpenCV) and HSV is, in theory, lossless. You can convince yourself of this: re-convert your HSV image into RGB, and you get back exactly the image you began with, with no artifacts added.
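You can check the round trip yourself (differences, if any, come only from 8-bit rounding; the file name is a placeholder):

import cv2
import numpy as np

bgr = cv2.imread('input.png')                  # placeholder file name
hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
back = cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)
print(np.max(cv2.absdiff(bgr, back)))          # 0, or tiny rounding errors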
You are working with a lossy-compressed image, hence the rectangular artifacts. With video you have short exposure times, possible bandwidth limitations, etc., so the overall picture quality degrades. You can:
Use a series of still shots, by using Capture instead of VideoCapture, or
Extract 5-10 video frames and average them (sketched below).
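A minimal sketch of the averaging suggestion; the frame count is an illustrative assumption:

import cv2
import numpy as np

vc = cv2.VideoCapture(0)
frames = []
while len(frames) < 10:
    ret, frame = vc.read()
    if not ret:
        break
    frames.append(frame.astype(np.float32))
vc.release()

# Averaging suppresses per-frame noise and compression artifacts
avg = (sum(frames) / len(frames)).astype(np.uint8)
cv2.imwrite('averaged.png', avg)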