How to determine if a scene is dark using OpenCV? - python

Many of my photos are dark, and these photos make no sense to the viewer.
I want to use OpenCV to identify them. How can I do that with OpenCV?
Any Python source example would be great :)

Perhaps you can transform the image to the HSV color space and use the V channel to measure the amount of light in the scene. Extract a histogram of the V values and compare it between light and dark images to see the differences.
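A minimal sketch of that idea; the filename and the brightness cutoff of 60 are my assumptions, not part of the question:

import cv2

# Load the photo (placeholder filename)
img = cv2.imread('scene.jpg')

# The V channel of HSV encodes brightness
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
v = hsv[:, :, 2]

# Mean brightness as a simple darkness score on a 0-255 scale
# (the cutoff of 60 is an assumption; tune it on your own photos)
print('dark scene' if v.mean() < 60 else 'bright enough')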

Your question is a little unclear about what you want to do. Can you give an example of the scene? I'd like to see how dark it is before drawing a conclusion.
But if you just want to determine whether an image is dark, as in your title, it's simple: draw a histogram. Dark images tend to have peaks on the left-hand side of the histogram, where pixel intensities fall roughly in the range 0 to 20, perhaps up to 40 or 50; the right-hand side of the histogram should be nearly empty if the image is really as dark as you describe. From there, apply a threshold to decide whether the image counts as dark.
If you want to make the image clearer to the human eye, you can apply histogram normalization, but that depends on how bad the image really is.
Is that what you are looking for?
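A minimal sketch of that histogram check, using the 0-50 "dark" range described above; the filename and the 0.8 cutoff are assumptions to tune:

import cv2

# Read the image as grayscale (placeholder filename)
gray = cv2.imread('scene.jpg', cv2.IMREAD_GRAYSCALE)

# Histogram of pixel intensities: 256 bins over 0-255
hist = cv2.calcHist([gray], [0], None, [256], [0, 256]).ravel()

# Fraction of pixels in the dark end (0-50, as described above)
dark_fraction = hist[:50].sum() / hist.sum()

# If most pixels pile up on the left, call the image dark
# (the 0.8 cutoff is an assumption; tune it on labeled examples)
print('dark image' if dark_fraction > 0.8 else 'not dark')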

Related

clustering pixels to create a solid shape

I am working on a script that detects the bottom of a cooking pot, given a picture taken from above at a slightly slanted angle. This is a tough task due to lighting, the lack of edges, and the glare of the metal pot.
This is a sample image I am working with:
[sample image]
import cv2

# Load the photo and convert it to grayscale
img = cv2.imread('img.jpg')
gray_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
The bottom of the pot is visible, but hard to detect.
So far, I was able to produce this image using:
# Adaptive mean threshold over an 11x11 neighbourhood with constant C = 2
th2 = cv2.adaptiveThreshold(gray_img, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 11, 2)
And this image using:
# Canny edge detection with hysteresis thresholds 30 and 60
edges = cv2.Canny(img, 30, 60)
The solution seems intuitive, since in both images the base can be detected easily, but I am not able to figure out the implementation.
My intuition tells me that I should cluster the white pixels starting from the center of the image, and trace the border of the segmentation produced by the clustering. I am not quite sure how to go about that. Any advice would be greatly appreciated.
WORK IN PROGRESS
What is your overall goal? Do you want to detect something at the bottom, count bubbles or something like that, or do you want to check for defects?
As you already mentioned, it's hard because of the reflections, so you could start by using less direct light and maybe a diffuse source.
No single light bulb, no direct sun; maybe use a white canvas, or stretch a thin piece of white cloth between the current light source and the camera.
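As a rough starting point for the clustering idea in the question, here is a minimal sketch rather than a definitive solution, assuming OpenCV 4's findContours signature: morphological closing merges nearby white pixels of the adaptively thresholded image into solid blobs, and the contour whose centroid lies closest to the image center is taken as the candidate base (the 15x15 kernel and the center heuristic are my assumptions):

import cv2
import numpy as np

img = cv2.imread('img.jpg')  # filename as in the question
gray_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
th2 = cv2.adaptiveThreshold(gray_img, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                            cv2.THRESH_BINARY, 11, 2)

# Close small gaps so nearby white pixels merge into solid blobs
# (the 15x15 kernel is an assumption; tune it to the pot size)
kernel = np.ones((15, 15), np.uint8)
closed = cv2.morphologyEx(th2, cv2.MORPH_CLOSE, kernel)

# Keep the contour whose centroid lies closest to the image center
contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
h, w = gray_img.shape
cx, cy = w / 2, h / 2

def distance_to_center(c):
    m = cv2.moments(c)
    if m['m00'] == 0:
        return float('inf')
    return ((m['m10'] / m['m00'] - cx) ** 2 + (m['m01'] / m['m00'] - cy) ** 2) ** 0.5

if contours:
    best = min(contours, key=distance_to_center)
    cv2.drawContours(img, [best], -1, (0, 255, 0), 2)  # outline the candidate base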

Histogram equalization (or stretching) of RGB picture in python

I am automatically creating JPG pictures from multispectral data. The created pictures are very dark, so I thought the best idea would be to change the brightness (like Image.Enhance in PIL). But there was a problem, because some pictures need more brightness than others.
So the next idea was to try linear stretching of the histogram. I created a script that iterates over the RGB tuples and computes a new intensity for each pixel. There was very little difference, probably because the range of values was always 0-255. Then I tried histogram equalization (ImageOps) on R, G and B separately, but the result was not good; please see the middle part of the picture. I found on the internet that this is not a good approach because the colors can change dramatically, which is probably my case.
The best idea seems to be to convert the RGB array to HSL and then change the luminance, but I can't use a constant to maximize the luminance because the pictures are different and each needs a different constant. Should I use histogram equalization on the luminance, or what is the best approach: stretching, or probably better, histogram equalization of my picture?
I am looking for something like Image/Auto adjust colors in IrfanView; in some software it is called Linear Normalization...
I hope the picture helps you understand my problem. I probably chose a bad way to achieve my goal.
Thank you for any answer; I will be very glad.
EDIT
Left image available for download.
I can upload the other images later today.
I would suggest proceeding with the same approach as you have stated, with a slight modification (a short sketch follows the steps below):
Convert the RGB image to a LAB image.
Apply localized histogram equalization to the L channel.
Merge it back with the other channels.
Convert it back to an RGB image.
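A minimal sketch of those four steps using OpenCV's CLAHE for the localized equalization; the filename and the CLAHE parameters are assumptions:

import cv2

# Load the dark picture (placeholder filename)
img = cv2.imread('dark.jpg')

# Step 1: convert BGR (OpenCV's channel order) to LAB and split the channels
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
l, a, b = cv2.split(lab)

# Step 2: localized histogram equalization (CLAHE) on the L channel only
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
l_eq = clahe.apply(l)

# Steps 3 and 4: merge the channels back and return to BGR
lab_eq = cv2.merge((l_eq, a, b))
result = cv2.cvtColor(lab_eq, cv2.COLOR_LAB2BGR)
cv2.imwrite('result.jpg', result)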
You can check my answer for this in a different question here:
The code I have there is written for OpenCV using Python. You can modify it for the C language if you wish.
Let me know if it has helped you!
I am not sure if this applies, and I have not tried it myself, but I was reading this article about underwater contrast stretching:
http://www.iaeng.org/IJCS/issues_v34/issue_2/IJCS_34_2_12.pdf
What it suggests might help:
"In order to address the issues discussed above, we propose
an approach based on slide stretching. Firstly, we use contrast
stretching of RGB algorithm to equalize the colour contrast in
the images. Secondly, we apply the saturation and intensity
stretching of HSI to increase the true colour and solve the
problem of lighting"
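If it helps, here is a minimal sketch of that two-step idea, using HSV as a stand-in for HSI and percentile-based linear stretching; the filename, the percentiles, and the HSV substitution are my assumptions, not the paper's exact method:

import cv2
import numpy as np

def stretch_channel(ch, low_pct=1, high_pct=99):
    # Linearly stretch one channel between its low/high percentiles
    lo, hi = np.percentile(ch, (low_pct, high_pct))
    scaled = (ch.astype(np.float32) - lo) * 255.0 / max(hi - lo, 1e-6)
    return np.clip(scaled, 0, 255).astype(np.uint8)

img = cv2.imread('input.jpg')  # placeholder filename

# First pass: contrast-stretch each RGB channel (BGR order in OpenCV)
bgr = cv2.merge([stretch_channel(c) for c in cv2.split(img)])

# Second pass: stretch saturation and value in HSV, leaving hue alone
hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
h, s, v = cv2.split(hsv)
result = cv2.cvtColor(cv2.merge([h, stretch_channel(s), stretch_channel(v)]),
                      cv2.COLOR_HSV2BGR)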

How to remove sun's reflection on a photo using image processing

I have multiple grayscale images, each of which contains the sun's reflection, also known as glare, as a bright glaring spot. It looks like a bright white blob that I want to remove. It is basically the front portion of a car image, where the sun's reflection falls on the car's front steel grille.
I want to remove this reflection as much as possible, and I would appreciate it if anyone could point me to a good algorithm, preferably in Python, that I can leverage to remove this glare and pre-process as much as possible.
I tried the approach of applying a threshold to the image pixels and then setting anything above 200 to a value of 128. It doesn't work very well because there are other parts of the image that contain white, and those get affected.
Do not forget to add some sample images...
I would first try to identify the sun spot by intensity and by the shape of the graph intensity = f(distance from spot middle); it may have a distinct shape that could be used to identify the spot more reliably.
After that, I would bleed colors from the area just outside the spot into its inside, recoloring the spot with its surrounding color: find all spot pixels that are next to non-spot pixels, recolor them to the average of their neighboring non-spot pixels, clear them in the spot mask, and loop until no spot pixel is left in the mask.
[notes]
Without any input images to test, this is just a theory. Also, if you have the source RGB image, not just grayscale, the color patterns may also help identify the spots, by checking for saturated white and/or a rainbow-like pattern.
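A minimal sketch of that bleed-from-the-border idea; OpenCV's inpainting fills masked pixels from their non-masked neighbors in much the same spirit as the loop described above (the filename and the 240 threshold are assumptions):

import cv2
import numpy as np

img = cv2.imread('glare.jpg', cv2.IMREAD_GRAYSCALE)  # placeholder filename

# Build the spot mask: very bright pixels are glare candidates
# (the 240 threshold is an assumption; tune it per image)
_, mask = cv2.threshold(img, 240, 255, cv2.THRESH_BINARY)

# Grow the mask slightly so the bleed starts from true non-glare pixels
mask = cv2.dilate(mask, np.ones((5, 5), np.uint8))

# Fill the masked pixels from their non-masked neighbors
result = cv2.inpaint(img, mask, inpaintRadius=5, flags=cv2.INPAINT_TELEA)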

Create silhouette of an image using PIL

I have an image like this that I'm trying to create a dark silhouette of, such as this.
I've tried using image.filter(ImageFilter.BLUR) with varying radii, but the intensity gets reduced a lot. Is there any other way to do this?
FYI, your first link doesn't work. If by intensity you mean the darkness of the color, you might want to try using ImageFilter.UnsharpMask. That might mix less of what I assume to be a white background into the middle of the logo, while giving the soft edges you are looking for.
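A minimal sketch of that suggestion with PIL; the filename, the 200 threshold, and the UnsharpMask parameters are assumptions to tune for your background:

from PIL import Image, ImageFilter

img = Image.open('logo.png').convert('L')  # placeholder filename

# Sharpen rather than blur, so the dark fill is not washed out
# by the (assumed) white background
sharpened = img.filter(ImageFilter.UnsharpMask(radius=2, percent=150, threshold=3))

# Push every non-background pixel to black for a solid silhouette
# (the 200 cutoff is an assumption; tune it for your background)
silhouette = sharpened.point(lambda p: 0 if p < 200 else 255)
silhouette.save('silhouette.png')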

How to solarize an image

I am trying to write a function that will solarize an image, but before I can do that I need to know how solarization works.
According to Wiki:
"the image recorded on a negative or on a photographic print is wholly or partially reversed in tone. Dark areas appear light or light areas appear dark"
I am using a Python framework that allows me to access all the pixels in an image and their RGB values. Given that, does anyone have any suggestions on how to achieve this?
Here's some good information on how that effect is digitally recreated
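In digital form, that tonal reversal amounts to inverting every pixel value above a threshold. A minimal sketch with PIL, which also ships the operation built in as ImageOps.solarize (the filename and the 128 threshold are assumptions):

from PIL import Image, ImageOps

img = Image.open('photo.jpg')  # placeholder filename

# Solarization: invert every pixel value at or above a threshold
# (128 is a common default, but it is an assumption here)
threshold = 128
solarized = img.point(lambda p: 255 - p if p >= threshold else p)

# PIL ships the same operation built in:
solarized_builtin = ImageOps.solarize(img, threshold=128)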
