I am learning OpenCV for a project and need to detect edges in a very bad image. I want to mark the edge between the light gray and the dark gray area (marked yellow). My problem is finding the right combination of operators to do this job.
So far I have tried simple thresholding, but the gray value on the right circle ranges from 30 to 100, which occurs everywhere else in the image as well. The image is also very noisy, and filters (median, Gaussian) are not very useful. I am also familiar with Sobel edge detection, but the wanted edge is not strong enough to sort out from the edges caused by the noise.
I also tried the Canny operator, but even it detects many edges on the right side of the image because of the many white and black pixels.
Does anyone have an idea?
I am trying to find the Australian Aboriginal flag in a collection of historic artwork.
Aboriginal Flag: https://i.imgur.com/sGsnLkv.png
Example Artworks: https://imgur.com/a/t6RisPx
I am new to OpenCV but have been programming for a long time.
I am confident I can code the solution but I can't seem to figure out what an easier solution would look like.
Should I:
Grayscale, find contours, and look for circles inside rectangles where the circle's area is a certain percentage of the rectangle's area?
Find red, yellow, and black shapes in close proximity?
Grayscale, find circles inside rectangles, then check whether the circle is yellow in the original?
Anyone have any cool ideas on how to make this easy?
I plan on running this over 2.8 million images ahaha
My original reason is here -> https://news.ycombinator.com/item?id=24187794
I need some advice on a computer vision project I am working on. I am trying to extract a corner in the image below. The edge I am searching for is marked yellow in the right image. Edge detection keeps failing because the edge is too blurred in the middle.
I run this process with OpenCV and Python.
I started by removing the white dots with a threshold method, followed by a big median blur (kernel size 31-53), and then an adaptive threshold method to separate the areas left and right of the corner. But the separation is always bad because the edge is barely visible.
Is there some other way to extract this edge, or do I have to try a better camera?
Thanks for your help.
First, do you have another dataset? It is hard to discuss from just one input.
Here are a couple of things you can do.
The best option is to change the camera or imaging technique to get a better, clearer edge.
When that is hard to do, try model-based fitting. If your images are repeatable across the whole class, this can work: I can observe some circles on the right and two sharp straight-line edges on the left, and the soft edge you want (marked red) lies between those two apparent features. That arrangement can serve as a model. Because the two features are easy to detect, you can then apply other techniques only to the pixels between the two regions. Those techniques include, but are not limited to, histogram equalization, high-pass filtering, and even wavelet transforms.
The worst option is parameter fitting. What you want to segment is neither a strong edge nor a smooth plane, so you could tweak the Canny edge detector to pick up edges that are not so strong. I do not recommend this method; try it only if you really have no other choice and no other images.
The last option is a deep-learning-based method that is trained to segment this part automatically. It might work, but it needs hundreds if not thousands of images and labels.
Regards
Shenghai Yuan
I am working on a script that detects the bottom of a cooking pot, given a picture from above at a slightly slanted angle. This is a tough task due to lighting, lack of edges, and the glare of the metal pot.
This is a sample image I am working with:
image
img = cv2.imread('img.jpg')
gray_img = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
The bottom of the pot is visible, but hard to detect.
So far, I was able to produce this image using:
th2 = cv2.adaptiveThreshold(gray_img,255,cv2.ADAPTIVE_THRESH_MEAN_C,cv2.THRESH_BINARY,11,2)
And this image using:
edges = cv2.Canny(img,30,60)
The solution seems intuitive since in both images the base can be detected easily, but I am not able to figure out the implementation.
My intuition tells me that I should cluster the white pixels starting from the center of the image, and then trace the border of the segmentation generated by the clustering. I am not quite sure how to go about that. Any advice would be greatly appreciated.
WORK IN PROGRESS
What is your overall goal? Do you want to detect something at the bottom, count bubbles or something like that, or check for defects?
As you already mentioned, it's hard because of the reflections, so you could start by using less direct light, and maybe a diffuse source.
No single light bulb, no direct sun; maybe use a white canvas, or span a thin piece of white cloth between the current light source and the camera.
There are many photos that are dark. These photos do not make sense to the viewer.
So I want to use OpenCV to identify them. How can I do that with OpenCV?
Any Python source example would be good :)
Perhaps you can transform the image to the HSV color space and use the V values for measuring the amount of light in the scene. Extract a histogram of the V values and compare it between light and dark images to see the differences.
Your question is a little unclear as to what you wish to do. Can you give an example of the scene? I would like to see how dark it is before coming to a clear conclusion.
But if you just want to determine whether an image is dark, as per your title, it's simple: draw a histogram. Dark images tend to have peaks on the left-hand side of the histogram, where the pixel intensities are mostly in the range 0 to 20, or maybe up to 40/50; the right-hand side of the histogram should have pretty much nothing if the image is really dark, as you mentioned. Following that, apply a threshold to decide whether the image is dark or not.
If you want to make the image clearer to the human eye, you can do histogram normalization. But that depends on how bad the image really is.
Was that what you are looking for?