How to remove sun's reflection on a photo using image processing - python

I have multiple grayscale images in which each image has the sun's reflection, also known as glare, as a bright glaring spot. It looks like a bright white blob that I want to remove. It is basically the front portion of a car image where the sun's reflection falls on the front steel grille of the car.
I want to remove this reflection as much as possible. I would appreciate it if anyone can point me to a good algorithm, preferably in Python, that I can leverage to remove this glare and pre-process the image as much as possible.
I tried applying a threshold to the image pixels and then setting anything above 200 to a value of 128. It doesn't work very well because other parts of the image also contain white, and those get affected.
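For reference, the thresholding attempt described above amounts to a couple of lines of NumPy (the tiny array here is purely illustrative), and it shows why legitimate bright regions get clobbered along with the glare:

```python
import numpy as np

# img is a grayscale image as a 2-D uint8 array (illustrative values)
img = np.array([[250, 40],
                [210, 255]], dtype=np.uint8)

# clamp every "bright" pixel, glare or not -- this is why
# legitimate white areas get damaged as well
img[img > 200] = 128
```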

Do not forget to add some sample images.
I would first try to identify the sun spot
by its intensity and by the shape of the graph intensity = f(distance from spot middle);
it may have a distinct shape that could be used to identify the spot more reliably.
After that I would bleed colors
from the spot's outer area to its inside,
recoloring the spot with its surrounding color:
find all spot pixels that are next to non-spot pixels,
recolor them to the average of their neighboring non-spot pixels,
and clear them from the spot mask,
looping until no spot pixel is left in the mask.
[notes]
But without any input images to test on, this is just a theory.
Also, if you have the source RGB image, not just grayscale, the color patterns may also help identify the spots,
by checking for saturated white and/or a rainbow-like pattern.
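A minimal sketch of the bleeding loop described above, assuming the spot mask has already been found (4-neighbors and a plain average are used here; both are implementation choices):

```python
import numpy as np

def bleed(img, mask):
    """Recolor masked (glare) pixels from the outside in: repeatedly
    average the non-masked 4-neighbors of each boundary spot pixel,
    then clear that pixel from the mask, until the mask is empty."""
    img = img.astype(np.float64)
    mask = mask.copy()
    while mask.any():
        updates = []
        for y, x in zip(*np.nonzero(mask)):
            vals = []
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if (0 <= ny < img.shape[0] and 0 <= nx < img.shape[1]
                        and not mask[ny, nx]):
                    vals.append(img[ny, nx])
            if vals:  # boundary pixel: has at least one non-spot neighbor
                updates.append((y, x, sum(vals) / len(vals)))
        for y, x, v in updates:  # apply all updates at once per pass
            img[y, x] = v
            mask[y, x] = False
    return img.astype(np.uint8)
```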

Related

Image analysis help needed with identifying hot pixels in a set of images

I conducted an experiment to see radioactivity using a camera sensor, so I captured long-exposure images (grayscale) in a dark environment and saw that there is a pixel pattern (bright pixels) repeating across my image data set (I think they are called hot pixels). I need to identify these pixels and ignore them when counting the radiation interactions observed in each image. A radiation interaction would also appear as a bright pixel or a collection of a couple of pixels.
sample image
I am using Python to analyze the image data set. I am a newbie to Python programming and do not have much knowledge about handling problems at this level, or about the libraries/functions to be used in an analysis like this, so I thought it would be a good project to learn a bit of image analysis.
I don't know how to code this problem, so I definitely need help with that. However, I thought of an algorithm that could possibly help me achieve my goal:
- Since hot pixels and radiation interactions appear as white pixels/spots on a black/dark background, I would apply a suitable pixel-value threshold to set the background pixels to 0 (min) and the white pixels to 255 (max).
- Then I would check each pixel value in all 100 images and identify the pixel positions that have the same value in every image (e.g., if the pixel at position (1,1) is 255 in all 100 images, I would note that position as a hot pixel).
- Next, I would set the pixel value at those positions to 0, so I am left with bright pixels from radiation events only.
- Sometimes a radiation event spans more than one pixel (but those pixels will be next to each other), so I need a method to count them as one event.
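The steps above could be sketched roughly like this (the threshold and the tiny frames are illustrative; adjacent pixels are merged into one event with a small flood fill):

```python
import numpy as np

def count_events(frames, thresh=128):
    """frames: list of 2-D uint8 arrays of identical shape.
    Returns (hot-pixel mask, per-frame event counts)."""
    binary = [np.asarray(f) > thresh for f in frames]
    hot = np.logical_and.reduce(binary)   # bright in *every* frame -> hot pixel
    counts = []
    for b in binary:
        b = b & ~hot                      # ignore the hot pixels
        seen = np.zeros(b.shape, dtype=bool)
        n = 0
        for y, x in zip(*np.nonzero(b)):
            if seen[y, x]:
                continue
            n += 1                        # new event: flood-fill all its pixels
            stack = [(y, x)]
            while stack:
                cy, cx = stack.pop()
                if seen[cy, cx] or not b[cy, cx]:
                    continue
                seen[cy, cx] = True
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ny, nx = cy + dy, cx + dx
                    if 0 <= ny < b.shape[0] and 0 <= nx < b.shape[1]:
                        stack.append((ny, nx))
        counts.append(n)
    return hot, counts
```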
I would truly appreciate it if you can help me solve this problem in a much more efficient manner.

Evaluating Dots and Noise in an Image

I have an image that consists of small black dots, and in each dot's vicinity there is some noise that appears as grayish smudge.
I'm trying to use some sort of image processing in Python in order to find both the number of (correct) dots and the number of noise smudges, as well as calculate their parameters (i.e. size).
I was thinking of using some sort of contour detection with a certain threshold, since the dots' borders are more distinct, but perhaps there's a better way that I'm not familiar with.
Thank you for your help!
Use the Pillow module to analyze each pixel's color and classify it by its RGB values added together (assuming the image is only black, white, and grey):
Black: 0
Grey: 1-764
White: 765
Hope that helps
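A sketch of that per-pixel classification with Pillow (the 2x2 image here is generated in memory purely for illustration):

```python
from PIL import Image

def classify(img):
    """Count pixels by the sum of their RGB channels:
    0 -> black, 765 -> white, anything between -> grey."""
    counts = {"black": 0, "grey": 0, "white": 0}
    for r, g, b in img.convert("RGB").getdata():
        s = r + g + b
        if s == 0:
            counts["black"] += 1
        elif s == 765:
            counts["white"] += 1
        else:
            counts["grey"] += 1
    return counts

# tiny illustrative image: one black, one white, two grey pixels
img = Image.new("RGB", (2, 2))
img.putdata([(0, 0, 0), (255, 255, 255), (128, 128, 128), (40, 40, 40)])
```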

Extract number of pixels from a cropped region

I was hoping to find the number of pixels in a specific region of a pretty dark image, but can't find the right way. Can I please get some help?
I was trying to crop the image along the two red boundaries, then draw a white line over the red line and count the nonzero pixels along it. But I think it's a pretty ugly approach.
My end goal: to find the pixel-to-mm ratio (1 block between two red lines is 1.891 mm).
P.S. It may sound a little silly to ask, but I had a question too: if I zoom in to the image, does the number of pixels in a fixed region change from the original state (I believe it doesn't, but I'm not sure), while resizing does affect the number of pixels in a region, right?
Thank you so much in advance.
The red line stands out from the background. Pick its color and mark every pixel of a similar color. This should result in a binary image with only the scale.
Then follow the horizontal line from left to right, preferring horizontal moves over diagonal ones. Then determine the local width by means of vertical displacements. The maxima of the width will reveal the positions of the tick marks.
You can probably refine the measurements and resort to subpixel analysis, but this is another story.
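The first step, marking every pixel close to the picked red color, might look like this (the reference color and tolerance are illustrative guesses):

```python
import numpy as np

def red_mask(img, ref=(200, 30, 30), tol=60):
    """img: H x W x 3 uint8 RGB array. Mark pixels whose color is
    within `tol` of the picked reference red, per channel, giving
    a binary image containing only the scale."""
    diff = np.abs(img.astype(np.int16) - np.array(ref, dtype=np.int16))
    return (diff < tol).all(axis=2)
```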

How to determine the scene is dark by opencv?

Many of my photos are dark, and these photos do not make sense to the viewer.
So I want to use OpenCV to identify them; how do I do that with OpenCV?
Any Python source example would be good :)
Perhaps you can transform the image to the HSV color space and use the V values for measuring the amount of light in the scene. Extract a histogram of the V values and compare it between light and dark images to see the differences.
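A minimal sketch of that idea, using the fact that the HSV value channel is just the per-pixel maximum over R, G and B (the darkness threshold and fraction are illustrative):

```python
import numpy as np

def is_dark(img, v_thresh=50, frac=0.8):
    """img: H x W x 3 uint8 RGB array. The HSV V channel is the
    per-pixel maximum over the R, G, B channels; call the scene
    dark when most pixels have a low V value."""
    v = img.max(axis=2)
    return (v < v_thresh).mean() >= frac
```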
Your question is a little unclear as to what you wish to do. Can you give an example of the scene? I'd like to see how dark it is before coming to a clear conclusion.
But if you just want to determine whether an image is dark, as per your title, it's simple: draw a histogram. Dark images tend to have peaks on the left-hand side of the histogram, with pixel intensities mostly in the range 0 to 20, or maybe up to 40/50; the right-hand side of the histogram should have pretty much nothing if the image is really dark, as you mentioned. Following that, apply a threshold to decide whether the image is dark or not.
If you want to make the image clearer to the human eye, you can do histogram normalization. But that depends on how bad the image really is.
Was that what you are looking for?
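The histogram check described above could be sketched as follows (the cutoff and fraction are illustrative choices):

```python
import numpy as np

def dark_by_histogram(gray, cutoff=50, frac=0.9):
    """gray: 2-D uint8 array. Build a 256-bin histogram and call
    the image dark when almost all mass sits left of `cutoff`."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    return hist[:cutoff].sum() / hist.sum() >= frac
```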

Determine height of Coffee in the pot using Python imaging

We have a web-cam in our office kitchenette focused at our coffee maker. The coffee pot is clearly visible. Both the location of the coffee pot and the camera are static. Is it possible to calculate the height of coffee in the pot using image recognition? I've seen image recognition used for quite complex stuff like face-recognition. As compared to those projects, this seems to be a trivial task of measuring the height.
(That's my best guess and I have no idea of the underlying complexities.)
How would I go about this? Would this be considered a very complex job to partake? FYI, I've never done any kind of imaging-related work.
Since the coffee pot position is stationary, get a sample frame and locate a single column of pixels where the minimum and maximum coffee quantities can easily be seen, in a spot where there are no reflections. Check the green vertical line segment in the following picture:
(source: nullnetwork.net)
The easiest way is to have two frames, one with the pot empty and one with the pot full (obviously under the same lighting conditions, which typically would be the case), convert to grayscale (apply colorsys.rgb_to_hsv to each RGB pixel and keep only the v (third) component), and sum the luminosity of all pixels in the chosen line segment. Let's say the pot-empty case reaches a sum of 550 and the pot-full case a sum of 220 (coffee is dark). By comparing an input frame's sum to these two sums, you can get a rough estimate of the percentage of coffee in the pot.
I wouldn't bet my life on the accuracy of this method, though, and the fluctuations even from second to second might be wild :)
N.B: in my example, the green column of pixels should extend to the bottom of the pot; I just provided an example of what I meant.
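A sketch of the comparison, with the empty/full sums hard-coded from the example figures above (550 and 220):

```python
import numpy as np

def coffee_fraction(column, empty_sum=550.0, full_sum=220.0):
    """column: 1-D array of grayscale values along the chosen
    vertical line segment. Linearly interpolate between the
    pot-empty and pot-full luminosity sums, clamped to [0, 1]."""
    s = float(np.sum(column))
    frac = (empty_sum - s) / (empty_sum - full_sum)
    return min(1.0, max(0.0, frac))
```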
Steps that I'd try:
Convert the image to grayscale.
Binarize the image, leaving only the coffee. You can discover a good threshold manually through experimentation.
Blob extraction. The blob's area (number of pixels) is one way to calculate the height, i.e. area / width.
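Those steps might be sketched as follows (the threshold is illustrative, and the coffee is assumed to be the dark, below-threshold blob):

```python
import numpy as np

def coffee_height(gray, thresh=80):
    """gray: 2-D uint8 grayscale array. Binarize so the dark coffee
    is True, then estimate height as blob area divided by blob width."""
    blob = gray < thresh
    if not blob.any():
        return 0.0
    cols = np.nonzero(blob.any(axis=0))[0]   # columns touched by the blob
    width = cols[-1] - cols[0] + 1
    return blob.sum() / width
```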
First do thresholding, then segmentation. Then you can more easily detect edges.
You're looking for edge detection. But you only need to do it between the brown/black of the coffee and the color of the background behind the pot.
Take pictures of the pot with different levels of coffee in it.
Downsample the images to maybe 4x10 pixels.
Do the same in a loop for each new live picture.
Calculate the difference of each pixel's value compared to the reference images.
Take the reference image with the least difference sum and you get the state of your coffee machine.
You might experiment to see whether a grayscale version, or only the red or green channel, gives better results.
If different light settings cause problems, this approach is useless; just buy a spotlight for the coffee machine, or lighten or darken each picture until the sum of all pixels reaches a reference value.
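The reference-matching loop could be sketched like this (simple block averaging stands in for proper downsampling; the reference images are illustrative):

```python
import numpy as np

def downsample(img, shape=(4, 10)):
    """Block-average a 2-D grayscale array down to `shape`.
    Assumes the image dimensions are multiples of the target."""
    h, w = shape
    bh, bw = img.shape[0] // h, img.shape[1] // w
    return img[:h * bh, :w * bw].reshape(h, bh, w, bw).mean(axis=(1, 3))

def closest_state(live, references):
    """Return the index of the reference image with the smallest
    summed per-pixel absolute difference to the live picture."""
    small = downsample(live)
    diffs = [np.abs(small - downsample(r)).sum() for r in references]
    return int(np.argmin(diffs))
```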
