Edge detection from an image using Python libraries and drawing contours - python

Hello everyone,
I am trying very hard to extract edges from a specific image. I have tried many, many approaches, including:
grayscale conversion, blurring (Laplacian, Gaussian, averaging, etc.), gradients (Sobel, Prewitt, Canny)
with morphological transformations
even thresholding with different combinations
even HSV conversion and masking and then thresholding
using contour methods with area thresholding
On top of all this, I have tried different combinations of the above, but none of them gave a good result. The main problem is still too many edges/lines. The image is an orthomosaic 2D photo of a marble wall. I will upload the image. Does anyone have any ideas?
P.S. The final result should be an image that contains only the "skeleton"/shape of the marble blocks.
Wall.tif
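Not part of the original question, but to make the attempted pipeline concrete, here is a minimal sketch of one of the combinations described above (Canny edges, a morphological close, then contour filtering by area). The filename Wall.tif comes from the question; the blur kernel, Canny thresholds and area limit are illustrative assumptions that would need tuning:

```python
import cv2
import numpy as np

# Load the orthomosaic and convert to grayscale
img = cv2.imread("Wall.tif")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Smooth first to suppress marble texture before edge detection
blur = cv2.GaussianBlur(gray, (7, 7), 0)

# Canny edges (thresholds are illustrative and need tuning)
edges = cv2.Canny(blur, 50, 150)

# Morphological closing to join broken edge fragments
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
closed = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)

# Keep only contours above an area threshold to drop small texture edges
contours, _ = cv2.findContours(closed, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)  # OpenCV 4.x
outline = np.zeros_like(gray)
for c in contours:
    if cv2.contourArea(c) > 500:  # illustrative area threshold
        cv2.drawContours(outline, [c], -1, 255, 1)

cv2.imwrite("wall_outline.png", outline)
```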

Related

Different ways to detect an LED screen in an image

image
I want to extract the LED screen from the image above. Some approaches that I have tried include:
I first converted the image to HSV and made a trackbar GUI through which I noted at what HSV values our mask filters out our ROI.
Using Canny edge detection, contour extraction, and selecting the contour with 4 vertices and an area greater than 100 (sketched below).
Both of the solutions do work.
The problem with the first approach is that it only works on a pinkish screen. The second approach is more generic, but both approaches need a lot of fine-tuning to get the required result.
My question is whether there is any other, more generalised approach that I can try.
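Not from the question itself, but a minimal sketch of the second approach (Canny, contour extraction, then selecting a quadrilateral contour). The filename led_screen.jpg, the Canny thresholds and the 0.02 approximation epsilon are illustrative assumptions:

```python
import cv2

# Hypothetical filename; the thresholds below are assumptions to tune
img = cv2.imread("led_screen.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray, (5, 5), 0)
edges = cv2.Canny(blur, 50, 150)

contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
screen = None
for c in sorted(contours, key=cv2.contourArea, reverse=True):
    peri = cv2.arcLength(c, True)
    approx = cv2.approxPolyDP(c, 0.02 * peri, True)
    # Look for a large quadrilateral: 4 vertices and area greater than 100
    if len(approx) == 4 and cv2.contourArea(c) > 100:
        screen = approx
        break

if screen is not None:
    x, y, w, h = cv2.boundingRect(screen)
    cv2.imwrite("led_screen_crop.jpg", img[y:y + h, x:x + w])
```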

How to draw contours around black areas in a pixelated image?

I am quite new to Python and I am trying to write some code for image analysis.
Here is my initial image:
Initial image
After splitting the image into the RGB channels, converting to a gradient, applying a threshold and merging them back together, I get the following image:
Gradient/Threshold
Now I have to draw contours around the black areas and get the size of the enclosed areas. I just don't know how to do it, since my attempts with findContours/drawContours in OpenCV have not been successful at all.
Maybe someone also knows an easier way to get this from the initial image.
Hope someone can help me here!
I am coding in Python 3.
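Not part of the original question, but for the findContours/contourArea step being asked about, here is a minimal sketch; the filename gradient_threshold.png is a hypothetical stand-in for the merged gradient/threshold image shown above:

```python
import cv2

# Hypothetical filename standing in for the merged gradient/threshold image
binary = cv2.imread("gradient_threshold.png", cv2.IMREAD_GRAYSCALE)

# findContours treats white pixels as foreground, so invert to target the black areas
inverted = cv2.bitwise_not(binary)

contours, _ = cv2.findContours(inverted, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# Draw each contour and report the size of the enclosed area in pixels
output = cv2.cvtColor(binary, cv2.COLOR_GRAY2BGR)
for i, c in enumerate(contours):
    cv2.drawContours(output, [c], -1, (0, 0, 255), 1)
    print(f"Area of region {i}: {cv2.contourArea(c):.0f} px")

cv2.imwrite("contours_with_areas.png", output)
```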
Try adaptive thresholding on the grayscale version of the input image.
Also play with the last two parameters of adaptive thresholding (the block size and the constant C). You will get good results, as I have shown in the image. (Tip: create trackbars and play with the values; this is a quick and easy way to find the best values for these parameters.)
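A minimal sketch of that trackbar tip (the filename initial.png is a hypothetical stand-in for the input image; the block size and C ranges are arbitrary starting points):

```python
import cv2

gray = cv2.imread("initial.png", cv2.IMREAD_GRAYSCALE)  # hypothetical filename

def update(_):
    # Block size must be odd and >= 3 for cv2.adaptiveThreshold
    block = max(3, cv2.getTrackbarPos("block size", "adaptive") | 1)
    c = cv2.getTrackbarPos("C", "adaptive")
    thresh = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                   cv2.THRESH_BINARY, block, c)
    cv2.imshow("adaptive", thresh)

cv2.namedWindow("adaptive")
cv2.createTrackbar("block size", "adaptive", 11, 51, update)
cv2.createTrackbar("C", "adaptive", 2, 20, update)
update(0)
cv2.waitKey(0)
```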

Extract Data from an Image with Python/OpenCV/Tesseract?

I'm trying to extract some content from a cropped image. I tried pytesseract and OpenCV template matching, but the results are very poor. OpenCV template matching sometimes fails due to the poor quality of the icons, and Tesseract gives me a line of text with false characters.
I'm trying to grab the values like this:
0:26 83 1 1
Any thoughts or techniques?
A technique you could use would be to blur your image. From what it looks like, the image is already fairly low-res and blurry, so you wouldn't need to blur it very hard. Whenever I need a blur function in OpenCV, I normally choose the Gaussian blur, since its way of blurring each pixel together with its surrounding pixels works well. Once the image is blurred, I would threshold, or adaptive-threshold, the image. At that point the image should be mostly hard lines with little bits of short lines mixed in between. Afterwards, dilate the thresholded image just enough to connect the regions where there are a lot of hard edges. Once the dilation has been performed, find the contours of that image and sort them based on their height within the image. Since I assume the position of those numbers won't change, you only have to sort your contours by their vertical position. Finally, create bounding boxes around them and read the text from there (see the sketch after the method lists below).
However, if you want to do this the quick and dirty way, you can always just manually create your own ROIs around each area you want to read and do it that way.
First Method
Gaussian blur the image
Threshold the image
Dilate the image
Find Contours
Sort Contours based on height
Create bounding boxes around relevant contours
Second Method
Manually create ROIs around the areas you want to read text from
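Not the answerer's original code, but a minimal sketch of the first method; the filename stats.png, the kernel size and the Tesseract --psm setting are illustrative assumptions:

```python
import cv2
import pytesseract

img = cv2.imread("stats.png")  # hypothetical filename for the cropped screenshot
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# 1-3: Gaussian blur, Otsu threshold, then dilate to connect nearby characters
blur = cv2.GaussianBlur(gray, (3, 3), 0)
_, thresh = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 3))
dilated = cv2.dilate(thresh, kernel, iterations=2)

# 4-5: find contours and sort them top-to-bottom by their y position
contours, _ = cv2.findContours(dilated, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contours = sorted(contours, key=lambda c: cv2.boundingRect(c)[1])

# 6: bounding boxes around each text region, then OCR each crop
for c in contours:
    x, y, w, h = cv2.boundingRect(c)
    roi = gray[y:y + h, x:x + w]
    text = pytesseract.image_to_string(roi, config="--psm 7")  # single text line
    print(text.strip())
```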

How to create a boundary mask around an object?

I have some processed images that have noise (background pixels) around the boundaries. Is there a way to detect only the boundary of the object itself and create a mask to remove the background pixels around the boundaries?
I'm a beginner with OpenCV, so any code samples would help.
Example:
Original Image
Processed Image
Expected Output
I have tried the findContours method, but it creates a mask that includes the noisy pixels as well.
I have also tried the erode method, but it does not give the same results for different image sizes, so that is not the solution I'm looking for.
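Not an answer from the thread, but one common refinement of the findContours attempt mentioned above is to keep only the largest external contour and fill it into a mask; a minimal sketch, assuming the processed image is stored in a hypothetical file processed.png:

```python
import cv2
import numpy as np

# Hypothetical filename for the processed (noisy) image
img = cv2.imread("processed.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Keep only the largest external contour, assuming it belongs to the object itself
contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
largest = max(contours, key=cv2.contourArea)

# Fill the largest contour into a mask and remove everything outside it
mask = np.zeros_like(gray)
cv2.drawContours(mask, [largest], -1, 255, thickness=cv2.FILLED)
result = cv2.bitwise_and(img, img, mask=mask)

cv2.imwrite("masked.png", result)
```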

OpenCV Thresholding adaptive to different lighting conditions

For a school project I am trying to write a program in Python that tracks the movement of the pupil. In order to do that I am using OpenCV.
After looking up some tutorials on the internet, I noticed that almost everyone uses thresholding to achieve this, since a binary image is necessary for almost every step further down the road (e.g. Hough Circle Transformation, contours). However, from my understanding, thresholding is extremely light-sensitive, so such an approach would only return good results in optimal lighting conditions.
So here comes my question: is there any alternative or better approach than just thresholding the image? Or is my understanding of thresholding in OpenCV wrong in the first place?
Here is an example image:
The purpose of thresholding is to segment the desired objects from the background, after which you can perform additional processing (applying morphological operations) and then contour filtering to further isolate the desired objects. Instead of applying image processing techniques to a BGR (3-channel) image or a grayscale (1-channel) image with range [0...255], thresholding allows us to obtain a binary image where every pixel is either 0 or 1, which makes distinguishing objects easier. Depending on your situation, there are many ways to obtain a binary image; here are several methods (see the sketch after this list):
cv2.Canny - Canny edge detection which uses a minVal and maxVal to determine edges
cv2.threshold - Simple thresholding with user selected arbitrary global threshold value
cv2.threshold + cv2.THRESH_OTSU - Otsu's thresholding to automatically calculate the threshold value.
cv2.adaptiveThreshold - Adaptive thresholding for when the image has different lighting conditions in different areas. Essentially it automatically calculates a threshold value for different regions of the image, which gives better results for images with varying illumination.
cv2.inRange - Color segmentation. The idea is to use lower and upper threshold ranges to obtain a binary image. Useful when trying to isolate a single color range.
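A minimal sketch of these options on a single image (the filename eye.png and every threshold or range value below are illustrative assumptions to tune):

```python
import cv2
import numpy as np

# Hypothetical filename; every threshold and range value below is illustrative
img = cv2.imread("eye.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Canny edge detection with minVal/maxVal
edges = cv2.Canny(gray, 100, 200)

# Simple global threshold with an arbitrary value
_, simple = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)

# Otsu's threshold: the threshold value is computed automatically
_, otsu = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Adaptive threshold: per-region thresholds for uneven illumination
adaptive = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                 cv2.THRESH_BINARY, 21, 5)

# Color segmentation with lower/upper bounds in HSV (dark pixels here)
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv, np.array([0, 0, 0]), np.array([180, 255, 60]))
```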
