I am quite new to Python and I am trying to write some code for image analysis.
Here is my initial image:
Initial image
After splitting the image into the RGB channels, converting them into gradients, applying a threshold and merging them back together, I get the following image:
Gradient/Threshold
Now I have to draw contours around the black areas and get the size of the enclosed regions. I just don't know how to do it, since my attempts with findContours/drawContours in OpenCV have not been successful at all.
Maybe someone also knows an easier way to get that from the initial image.
Hope someone can help me here!
I am coding in Python 3.
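A minimal sketch of the findContours/contourArea route being described here (assumptions: the thresholded image is binary, it is inverted first so the black areas become white blobs that findContours can pick up, OpenCV 4.x, and the file names are placeholders):

```python
import cv2

# Load the thresholded image (placeholder file name) as grayscale
img = cv2.imread("threshold.png", cv2.IMREAD_GRAYSCALE)

# findContours looks for white blobs, so invert: black areas become white
inverted = cv2.bitwise_not(img)

# Extract external contours of the (formerly black) regions (OpenCV 4.x return signature)
contours, _ = cv2.findContours(inverted, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# Draw each contour and print the enclosed area of each region
output = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
for i, cnt in enumerate(contours):
    area = cv2.contourArea(cnt)
    print(f"Region {i}: area = {area} px")
    cv2.drawContours(output, [cnt], -1, (0, 0, 255), 2)

cv2.imwrite("contours.png", output)
```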
Try adaptive thresholding on a grayscale version of the input image.
Also play with the last two parameters of the adaptive thresholding. You will find good results, as shown in the image. (Tip: create trackbars and play with the values; this is a quick and easy way to find the best values for these parameters, as sketched below.)
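A rough sketch of that tip (the window name, trackbar ranges and input file are placeholders, and OpenCV 4.x is assumed):

```python
import cv2

img = cv2.imread("input.png")               # placeholder file name
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

def update(_):
    # blockSize must be an odd number >= 3; C is a small constant subtracted from the mean
    block = cv2.getTrackbarPos("blockSize", "adaptive") * 2 + 3
    c = cv2.getTrackbarPos("C", "adaptive")
    thresh = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                   cv2.THRESH_BINARY, block, c)
    cv2.imshow("adaptive", thresh)

cv2.namedWindow("adaptive")
cv2.createTrackbar("blockSize", "adaptive", 5, 50, update)
cv2.createTrackbar("C", "adaptive", 2, 20, update)
update(0)
cv2.waitKey(0)
cv2.destroyAllWindows()
```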
image
I want to extract the LED screen from the image above. Some approaches that I have tried include:
I first converted the image to HSV and made a trackbar GUI, through which I noted the HSV values at which the mask isolates our ROI.
Using Canny edge detection, contour extraction, and selecting the contour with 4 vertices and an area greater than 100.
Both solutions do work.
The problem with the first approach is that it only works on a pinkish screen. The second approach is more generic, but both approaches need a lot of fine-tuning to get the required result.
My question is whether there is a more generalised approach that I can try.
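For reference, a minimal sketch of the second approach described above (Canny, contour extraction, keeping a four-vertex contour with area greater than 100); the Canny thresholds, the approximation epsilon and the file names are assumptions:

```python
import cv2

img = cv2.imread("screen.jpg")              # placeholder file name
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)            # assumed thresholds

contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

screen = None
for cnt in contours:
    peri = cv2.arcLength(cnt, True)
    approx = cv2.approxPolyDP(cnt, 0.02 * peri, True)   # assumed epsilon
    # Keep the contour that approximates to 4 vertices and is large enough
    if len(approx) == 4 and cv2.contourArea(approx) > 100:
        screen = approx
        break

if screen is not None:
    cv2.drawContours(img, [screen], -1, (0, 255, 0), 3)
    cv2.imwrite("screen_detected.jpg", img)
```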
I have some processed images that have noise (background pixels) around the boundaries. Is there a way to detect only the boundary of the object itself and create a mask to remove the background pixels around the boundaries?
I'm a beginner with OpenCV, so any code samples would help.
Example:
Original Image
Processed Image
Expected Output
I have tried the findContours method but it creates a mask that includes the noisy pixels as well.
I have also tried the erode method, but it does not give the same results for different image sizes, so that is not the solution I'm looking for.
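For reference, a minimal sketch of the findContours-based masking attempt described above (assumptions: the processed image is binary, the mask is built by filling the detected contours, and the file names are placeholders):

```python
import cv2
import numpy as np

# Placeholder file name; assumed to be the already-processed binary image
img = cv2.imread("processed.png", cv2.IMREAD_GRAYSCALE)

# Find external contours of the white regions
contours, _ = cv2.findContours(img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# Build a mask by filling every detected contour
mask = np.zeros_like(img)
cv2.drawContours(mask, contours, -1, 255, thickness=cv2.FILLED)

# Apply the mask to the image (noise that touches the object still survives here)
result = cv2.bitwise_and(img, img, mask=mask)
cv2.imwrite("masked.png", result)
```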
Hello everyone,
I am trying very hard to extract edges from a specific image. I have tried many, many approaches, including:
Grayscale, blurring (Laplacian, Gaussian, averaging, etc.), gradients (Sobel, Prewitt, Canny)
Morphological transformations
Thresholding with different combinations
HSV conversion, masking and then thresholding
Contour methods with area thresholding
On top of that, I have tried different combinations of all of the above (one such combination is sketched below), but none of them gave an excellent result. The main problem is still too many edges/lines. The image is an orthomosaic 2D photo of a marble wall. I will upload the image. Does anyone have any ideas?
P.S. The final result should be an image that contains only the "skeleton" or shape of the marbles.
Wall.tif
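For reference, a minimal sketch of one of the combinations mentioned above (grayscale, Gaussian blur, Canny, morphological closing, then contour filtering by area); all thresholds and kernel sizes are placeholder assumptions:

```python
import cv2
import numpy as np

img = cv2.imread("Wall.tif")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Blur to suppress the fine texture of the marble
blurred = cv2.GaussianBlur(gray, (5, 5), 0)

# Edge detection (thresholds are assumptions)
edges = cv2.Canny(blurred, 50, 150)

# Morphological closing to join broken edge fragments
kernel = np.ones((5, 5), np.uint8)
closed = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)

# Keep only contours above an (assumed) area threshold
contours, _ = cv2.findContours(closed, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
skeleton = np.zeros_like(gray)
for cnt in contours:
    if cv2.contourArea(cnt) > 500:          # placeholder area threshold
        cv2.drawContours(skeleton, [cnt], -1, 255, 1)

cv2.imwrite("wall_skeleton.png", skeleton)
```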
I am a newbie to computer vision, image processing and OCR. As part of a task, I need to perform OCR on the attached emails. However, the problem is that the ROI is not constant across images. For example, in all images we need to extract the information related to patient Yadav.
This needs to be achieved with Python and OpenCV. Please help. I have already tried the approach given below:
Change DPI
Grayscale
Binarize
However, I am facing problems with layout/zone analysis.
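For reference, that preprocessing looks roughly like this (a sketch; the resize factor, the Otsu binarization and the file names are assumptions, not the exact settings I used):

```python
import cv2

img = cv2.imread("email.png")               # placeholder file name

# "Change DPI": upscale the scan (factor is a placeholder)
img = cv2.resize(img, None, fx=2, fy=2, interpolation=cv2.INTER_CUBIC)

# Grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Binarize with Otsu's threshold (assumed; a fixed threshold would also work)
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

cv2.imwrite("email_preprocessed.png", binary)
```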
Thanks for help.
Welcome to the computer vision world.
I think you're not understanding your problem well enough. Just trying things and checking whether they work will not get you far.
First, read how Tesseract (the OCR engine) recommends improving accuracy at https://github.com/tesseract-ocr/tesseract/wiki/ImproveQuality
OK, so what makes OCR fail on your data? As you can see, your data is clean and there is hardly any noise. The issue is that the border of the printed email (the "ROI" in your question) does not line up correctly.
So how can we make the ROI line up correctly? We could use a perspective transformation. I took an example from https://www.pyimagesearch.com/2014/08/25/4-point-opencv-getperspective-transform-example/
So how do we perform the perspective transformation? We have to find the 4 corners of the ROI and map them back to the corners of the image. To find the 4 corners of the ROI, you could find some contours.
In summary, here are the steps (a rough sketch follows below):
1. Find the ROI (white color) using color segmentation
2. Find the contour that covers the ROI
3. Find 4 corners of the ROI's contour
4. Apply perspective transform
5. Run Tesseract on the transformed image
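A minimal sketch of steps 1-4, loosely following the pyimagesearch four-point example; the HSV range for the white page, the output size and the file names are assumptions:

```python
import cv2
import numpy as np

def order_corners(pts):
    # Order points as top-left, top-right, bottom-right, bottom-left
    rect = np.zeros((4, 2), dtype="float32")
    s = pts.sum(axis=1)
    d = np.diff(pts, axis=1).ravel()
    rect[0] = pts[np.argmin(s)]   # top-left: smallest x + y
    rect[2] = pts[np.argmax(s)]   # bottom-right: largest x + y
    rect[1] = pts[np.argmin(d)]   # top-right: smallest y - x
    rect[3] = pts[np.argmax(d)]   # bottom-left: largest y - x
    return rect

img = cv2.imread("email.png")                 # placeholder file name

# Step 1: color segmentation for the white page (HSV range is an assumption)
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv, (0, 0, 180), (180, 60, 255))

# Step 2: take the largest contour as the page
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
page = max(contours, key=cv2.contourArea)

# Step 3: approximate the contour with 4 corners
# (assumes the approximation really does come back with 4 points)
peri = cv2.arcLength(page, True)
approx = cv2.approxPolyDP(page, 0.02 * peri, True).reshape(-1, 2).astype("float32")
corners = order_corners(approx)

# Step 4: warp those corners to an upright rectangle (output size is assumed)
w, h = 1000, 1400
dst = np.array([[0, 0], [w - 1, 0], [w - 1, h - 1], [0, h - 1]], dtype="float32")
M = cv2.getPerspectiveTransform(corners, dst)
warped = cv2.warpPerspective(img, M, (w, h))

cv2.imwrite("email_warped.png", warped)
# Step 5 would be running Tesseract (e.g. pytesseract.image_to_string) on "warped"
```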
Hope that helps.
I am currently trying very hard to figure out a way to make these four trapezoid images into one nice image. The final image should look something like this (I used Photoshop to make it):
The image above will be composed from four of these images:
The problem is that when I try to rotate and combine these images, the black surroundings come into the final image as well, like this:
How am I supposed to get rid of the blacked-out area or make it transparent? I've tried using a mask, but that only makes the black area white instead. I have also tried using the alpha channel, but that didn't work (although maybe I was doing it wrong). Any ideas on what I can do in OpenCV?
I did actually figure it out. I did it with these steps:
Create two SAME SIZED black backgrounds with numpy zeros
Put one image in each background where you want them (for me, it was left and top)
Then all you need to do is cv.add(first, second)
The reason it works is that black pixels are (0,0,0), so adding one to a pixel that is, say, (25,62,34) doesn't change that pixel, and the black corners therefore disappear.
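A minimal sketch of those steps (the canvas size, offsets and file names are placeholders):

```python
import cv2 as cv
import numpy as np

# Two trapezoid pieces (placeholder file names)
left = cv.imread("left_piece.png")
top = cv.imread("top_piece.png")

H, W = 800, 800                              # placeholder canvas size

# Step 1: two SAME SIZED black backgrounds
canvas_left = np.zeros((H, W, 3), dtype=np.uint8)
canvas_top = np.zeros((H, W, 3), dtype=np.uint8)

# Step 2: place each piece where it belongs on its own canvas (placeholder offsets)
h1, w1 = left.shape[:2]
canvas_left[0:h1, 0:w1] = left
h2, w2 = top.shape[:2]
canvas_top[0:h2, 0:w2] = top

# Step 3: add them; black (0,0,0) pixels leave the other image untouched
combined = cv.add(canvas_left, canvas_top)
cv.imwrite("combined.png", combined)
```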