I have an image that consists of small black dots, and in each dot's vicinity there is some noise that appears as a grayish smudge.
I'm trying to use some sort of image processing in Python in order to find both the number of (correct) dots and the number of noise smudges, as well as calculate their parameters (i.e. size).
I was thinking of using some sort of contour detection with a certain threshold, since the dots' borders are more distinct, but perhaps there's a better way that I'm not familiar with.
Thank you for your help!
Use the Pillow module to read each pixel and classify it by the sum of its RGB values (assuming the image contains only black, grey, and white):
Black: 0
Grey: 1-764
White: 765
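A minimal sketch of that check in plain Python (the function names are illustrative; with Pillow you would feed it the pixel sequence from Image.getdata()):

```python
# Classify each pixel by the sum of its RGB channels, following the
# scheme above: 0 = black, 765 (= 255 * 3) = white, in between = grey.

def classify(rgb):
    s = sum(rgb[:3])          # ignore an alpha channel if present
    if s == 0:
        return "black"
    if s == 765:
        return "white"
    return "grey"

def count_colors(pixels):
    counts = {"black": 0, "grey": 0, "white": 0}
    for px in pixels:
        counts[classify(px)] += 1
    return counts

# With Pillow this would be something like:
#   from PIL import Image
#   counts = count_colors(Image.open("dots.png").convert("RGB").getdata())
sample = [(0, 0, 0), (255, 255, 255), (120, 120, 120), (0, 0, 0)]
print(count_colors(sample))   # {'black': 2, 'grey': 1, 'white': 1}
```

In practice you may want to widen the "black" and "white" bands a little (e.g. sums below 30 count as black) so that sensor noise does not push every pixel into "grey".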
Hope that helps
My code receives many small images with digits in them. I try to compare them with my templates and extract the right data. It worked:
import cv2
import numpy as np

original = cv2.imread('im/10.png')
sought = (254, 254, 254)
result = np.count_nonzero(np.all(original == sought, axis=2))
As you can see, in these cases the white digits are at different corners of the pics, and there is no problem: the results (quantity of white pixels) are equal, and I got 18 = 18.
But now there are new pics.
First of all, the digits here aren't (254,254,254). I think the image is dimmer, or of lower quality, so I tried:
result = np.count_nonzero(np.all(original > 200, axis=2))
Unfortunately, it gives me different results: two pics with 13 inside aren't equal.
What I want:
A method of picking out the white digits from the dark background, excluding the thin white circle around them (in the 13 pics). The circle isn't a problem: if I crop the image at the center to get rid of the circle, my results don't change - 13 still != 13.
Maybe it is possible by converting the image to grayscale, maybe to HSV, or maybe just by making it brighter or more contrasted.
Pixel counting is good for me; it is quite fast and usually accurate.
I'll repeat: right now my code sees two pics with 13 as different (they differ in color, brightness, or black/white ratio, I don't know exactly). I want to get rid of this problem.
I propose that you first threshold the image, meaning that pixels belonging to the digits will have value 255, and the rest 0. This can be done with OpenCV's Adaptive Threshold function (see this answer).
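As a rough sketch of the idea behind adaptive thresholding (a naive local-mean variant in plain numpy; in practice cv2.adaptiveThreshold does this properly and much faster, and the window and offset values here are made up):

```python
import numpy as np

def adaptive_threshold(gray, window=3, offset=10):
    """Naive local threshold: a pixel becomes 255 when it is brighter
    than the mean of its (window x window) neighbourhood by more than
    `offset`, else 0."""
    h, w = gray.shape
    pad = window // 2
    padded = np.pad(gray.astype(float), pad, mode="edge")
    out = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            local_mean = padded[y:y + window, x:x + window].mean()
            if gray[y, x] > local_mean + offset:
                out[y, x] = 255
    return out

# A dim "digit" stroke still binarizes to 255 because it is judged
# against its local surroundings, not a global cutoff:
img = np.full((5, 5), 40, dtype=np.uint8)
img[1:4, 2] = 120                     # a faint vertical stroke
binary = adaptive_threshold(img)
print(binary[2, 2], binary[0, 0])     # 255 0
```

This is why it helps with your dim pics: a fixed cutoff like `original > 200` fails when the whole image darkens, but a local threshold adapts along with it.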
Then, you will need a better way to determine the numbers. Simply counting the number of white pixels is not robust enough. For example, the numbers 13 and 31 should have the same number of white pixels. This can break your algorithm.
The best solution here is AI. You can look into Python Tesseract for example, but there are a lot of Python packages and tutorials. Just Google this, and you get to helpful answers like this one. There are even tutorials like this one.
If you don't want to use AI, then your algorithm will always break somewhere.
I have an input image which consists of 3 colors. These colors are circular in shape and nested.
The image is similar to this :
https://www.google.ie/search?q=red+yellow+blue+nested+circles&client=ms-unknown&tbm=isch&tbs=rimg:CbQTsOKsM7yhIkCaDOdJHzqnN2Xk-DhItFHm0Zqt6wMB32Tm1CzyzQ7wrXERbVqngEyMBzO57J8UuHLak9WPqWfjV7kgvdJ47BJlKhIJmgznSR86pzcR8SW2ldYWlqIqEgll5Pg4SLRR5hG-6WlMFrVBvioSCdGaresDAd9kEVFfCyyB-AgqKhIJ5tQs8s0O8K0RT790ELynuK8qEglxEW1ap4BMjBHBPar4Jd2NtioSCQczueyfFLhyEY7iP_13IGcsOKhIJ2pPVj6ln41cRTMOWeqZE5oYqEgm5IL3SeOwSZREray5kAy-dzw%3D%3D&tbo=u&ved=0ahUKEwjWzfSC6IbXAhXFbBoKHW3GBrUQuIIBCCM#imgrc=5tQs8s0O8K32hM:
I will be working with several images like this. The image is always of the same thing; however, due to camera, lighting, and even printer differences, the actual colors can vary, although they will always be red, yellow, green in the order shown. By using HSV and thresholds I can easily determine upper and lower values for each color. However, if I change to a different set of images, these values are no longer functional.
My idea to overcome this is to look for contours first in the image.
For each contour I would like to get an upper and lower threshold. Using a combination of Canny, Gaussian blur, and contour detection I am able to draw a contour around each color; from testing, this seems general enough for the purpose.
Where I'm stuck is getting the threshold values from within the contours. Is this possible? Or is there simpler logic I am overlooking to achieve this?
At present I'm using Python, but the language is secondary.
Forget about the contours, they will make things harder than necessary.
A better approach could be to classify the pixels using k-means. Initialize with at least three clusters, centered around green, yellow, and red. Maybe add one centered on white for the background.
After convergence you should have the exact colors, together with segmentation.
https://en.wikipedia.org/wiki/K-means_clustering#Standard_algorithm
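A hand-rolled numpy sketch of that pixel clustering (in practice cv2.kmeans or scikit-learn would be used; the seed and "true" colors below are made-up examples):

```python
import numpy as np

def kmeans_pixels(pixels, centers, iters=20):
    """Tiny k-means: assign each pixel to its nearest center, then move
    each center to the mean of its assigned pixels."""
    centers = centers.astype(float).copy()
    for _ in range(iters):
        # distance matrix, shape (n_pixels, n_centers)
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for k in range(len(centers)):
            if np.any(labels == k):
                centers[k] = pixels[labels == k].mean(axis=0)
    return centers, labels

# Noisy pixels scattered around three unknown "actual" print colors:
rng = np.random.default_rng(0)
true = np.array([[200, 40, 40],      # a camera-shifted red
                 [220, 210, 60],     # a camera-shifted yellow
                 [60, 180, 80]],     # a camera-shifted green
                dtype=float)
pixels = np.concatenate([c + rng.normal(0, 10, (100, 3)) for c in true])

# Seed the clusters at ideal red, yellow, green, as suggested above:
seeds = np.array([[255, 0, 0], [255, 255, 0], [0, 255, 0]], dtype=float)
centers, labels = kmeans_pixels(pixels, seeds)
print(np.round(centers))   # converged centers land near the actual colors
```

The converged centers give you the per-image color values you were trying to hard-code, and `labels` is the segmentation for free.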
I have multiple grayscale images in which each image has the sun's reflection, also known as glare, as a bright glaring spot. It looks like a bright white blob which I want to remove. It is basically the front portion of a car image where the sun's reflection falls on the front steel grille of the car.
I want to remove this reflection as much as possible. I would appreciate it if anyone could point me to a good algorithm, preferably in Python, that I can leverage to remove this glare and pre-process the image as much as possible.
I tried the approach of applying a threshold to the image pixels and then setting anything that is above 200 to a value of 128. It doesn't work very well because there are other parts of the image that contain white, and those get affected.
Do not forget to add some sample images.
I would first try to identify the sun spot by intensity and by the shape of the graph intensity = f(distance from spot middle); it may have a distinct shape that could be used to identify the spot more reliably.
After that I would bleed colors from the spot's outer area into its inside, recoloring the spot with its surrounding color:
- find all spot pixels that are next to non-spot pixels
- recolor them to the average of their neighboring non-spot pixels
- clear them in the spot mask
- loop this until no spot pixel is left in the mask
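A minimal numpy sketch of that bleeding loop (the image and spot mask here are made-up examples; the mask would come from the intensity-based detection above):

```python
import numpy as np

def bleed_fill(gray, mask):
    """Iteratively recolor spot pixels (mask == True) that border
    non-spot pixels with the average of their non-spot 4-neighbours,
    clearing them from the mask, until the mask is empty."""
    gray = gray.astype(float).copy()
    mask = mask.astype(bool).copy()
    h, w = gray.shape
    while mask.any():
        border = []
        for y, x in zip(*np.nonzero(mask)):
            vals = [gray[ny, nx]
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                    if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]]
            if vals:
                border.append((y, x, sum(vals) / len(vals)))
        if not border:
            break                     # safety: whole image masked
        for y, x, v in border:        # recolor the ring, then shrink the mask
            gray[y, x] = v
            mask[y, x] = False
    return gray

# Toy example: a 3x3 glare blob of 255 inside a uniform 100 background.
img = np.full((5, 5), 100, dtype=np.uint8)
spot = np.zeros((5, 5), dtype=bool)
img[1:4, 1:4] = 255
spot[1:4, 1:4] = True
filled = bleed_fill(img, spot)
print(filled[2, 2])   # → 100.0, the glare pixel now matches its surroundings
```

Note that this is essentially image inpainting; for real photos, OpenCV's cv2.inpaint with the same mask is a ready-made alternative to hand-rolling the loop.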
[notes]
Without any input images to test on, this is just a theory.
Also, if you have the source RGB image, not just the gray one, the color patterns may also help identify the spots: check for saturated whites and/or a rainbow-like pattern.
I'm looking just for an idea/conception to resolve my problem.
I need to CHECK if the color of a surface does not exceed a certain gray level. So I thought to calculate its luminance.
The problem is that a color like #BCB0F5 will give me an acceptable gray level; however, the surface must not look that blue to the human eye. To human eyes it must look just like a certain gray level (black and white).
How can I resolve this problem ?
Thank you for any hints.
In a perceptual model of colour, we can talk of a particular colour's luminance and chromaticity (its "brightness" and its "quality"). By converting your samples from RGB to CIELAB (via ColorPy, say), you can filter out colours which are brighter than your desired grey (L_sample > L_grey) and whose distance from the white point is greater than a JND (e.g. sqrt(a_sample**2 + b_sample**2) > 2.3).
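As a self-contained sketch of that filter (using the standard sRGB-to-CIELAB formulas with a D65 white point; the helper names are illustrative):

```python
def srgb_to_lab(r, g, b):
    """Convert 8-bit sRGB to CIELAB (D65 white point)."""
    def lin(c):                       # undo the sRGB gamma curve
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    rl, gl, bl = lin(r), lin(g), lin(b)
    # linear RGB -> XYZ
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl
    # XYZ -> Lab
    def f(t):
        return t ** (1 / 3) if t > 0.008856 else 7.787 * t + 16 / 116
    fx, fy, fz = f(x / 0.95047), f(y / 1.0), f(z / 1.08883)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

def looks_grey(r, g, b, jnd=2.3):
    """True when the colour's chroma (distance from the neutral axis in
    the a/b plane) is below one JND, i.e. it is indistinguishable from
    a grey of the same lightness."""
    _, a, b2 = srgb_to_lab(r, g, b)
    return (a * a + b2 * b2) ** 0.5 <= jnd

print(looks_grey(128, 128, 128))     # True  - a true grey
print(looks_grey(0xBC, 0xB0, 0xF5))  # False - #BCB0F5 is visibly blue
```

This captures exactly the #BCB0F5 case from the question: its luminance is acceptable, but its chroma is far above one JND, so the check rejects it.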
I have an image with a solid background and three types of objects on it. The three objects are of different (but slightly varying) sizes and of different colors.
For example, if there is a 40-60 pixel group that is black in color, that grouping would be put in group A; if there is a 40-60 pixel group that is white in color, it would be put in group B; and if there is a 90-110 pixel group that is black in color, it would be put in group C. Then the algorithm would need to count the number of objects in each group.
How would I go about creating an algorithm to do this (preferably using python)?
Thanks in advance.
The answer to your problem is the Canny edge detector.
You can find a Python implementation of it here.
Good luck!
A way that works is to binarize the image according to color first.
For example, in your case, to detect object A you would make all pixels that are not black white. Then you get an image with only two colors (black and white).
You can then use connected component detection to find all the groups of black pixels. If the number of pixels in a group is larger than 40, you can consider it an object A.
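A minimal sketch of that connected-component step in plain Python (a BFS flood fill; in practice OpenCV's connectedComponentsWithStats would be much faster, and the tiny test image here is made up):

```python
from collections import deque

def connected_components(binary):
    """Return the pixel count of every 4-connected group of 1s in a
    2D list of 0/1 values."""
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    sizes = []
    for sy in range(h):
        for sx in range(w):
            if binary[sy][sx] and not seen[sy][sx]:
                size, q = 0, deque([(sy, sx)])
                seen[sy][sx] = True
                while q:
                    y, x = q.popleft()
                    size += 1
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and binary[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                sizes.append(size)
    return sizes

# Two separate black blobs, binarized so black = 1:
binary = [[1, 1, 0, 0, 0],
          [1, 1, 0, 1, 1],
          [0, 0, 0, 1, 1],
          [0, 0, 0, 0, 1]]
sizes = connected_components(binary)
print(sorted(sizes))   # [4, 5]

# Bucketing by the size ranges from the question (40-60 px -> group A):
group_a = sum(1 for s in sizes if 40 <= s <= 60)
```

Running this once per binarization (black-as-1, then white-as-1) and filtering the sizes into the 40-60 and 90-110 ranges gives the per-group counts directly.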
You may want to check out my MATLAB code Detect-Gray-Square.