The role of this program is to draw Hough lines in an external window when lines are detected in a game. However, after I implemented an ROI to avoid detecting useless lines, the border of the ROI itself is detected as a line (which is to be expected).
As shown in the image below, the arrow points to the line detected along the ROI border.
How can I make it ignore the lines along the border?
I tried overlapping the ROI with other lines, but OpenCV still detects the borders as lines.
I am using Python.
Thanks!
Solved it; it was an amateur mistake. If anyone runs into this problem, the fix is to apply the region-of-interest code after converting the image to grayscale and running edge detection. In short: convert the image to grayscale -> run Canny/Sobel (edge detection) -> apply the ROI mask.
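A minimal sketch of that order of operations, assuming a polygonal ROI and the probabilistic Hough transform (the polygon coordinates, thresholds, and file name below are placeholders):

import cv2
import numpy as np

# Placeholder ROI polygon; replace with the region that fits your game window
ROI_POINTS = np.array([[(100, 400), (300, 200), (500, 200), (700, 400)]], dtype=np.int32)

frame = cv2.imread('frame.png')

# 1. Convert the image to grayscale
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# 2. Run edge detection on the full grayscale image
edges = cv2.Canny(gray, 50, 150)

# 3. Only now mask to the ROI, so the mask border never reaches Canny
mask = np.zeros_like(edges)
cv2.fillPoly(mask, ROI_POINTS, 255)
roi_edges = cv2.bitwise_and(edges, mask)

# Hough only sees edges inside the ROI, so the border is never detected
lines = cv2.HoughLinesP(roi_edges, 1, np.pi / 180, threshold=50,
                        minLineLength=40, maxLineGap=5)
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        cv2.line(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)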
The first image is the original; the second is the processed image. The image shows a test tube with some precipitate at the bottom. I am trying to crop out only the precipitate area of the image.
To achieve this, I processed the original image so that the upper edge of the precipitate is detected; you can see the detected upper edge in the processed image.
My question is: how do I crop the image from that upper edge down to the bottom of the test tube, using OpenCV (Python) or perhaps some other library? I am open to any ideas and code that can help. Thanks!
Original image
Processed image
The surface of the precipitate is bright.
This is what you can get by taking the saturation component, applying Gaussian filtering horizontally, and binarizing.
Horizontal delimitation of the ROI should not be difficult.
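A sketch of that pipeline; the kernel width, the Otsu binarization, and the final crop step are my own choices, and the values would need tuning for your images:

import cv2
import numpy as np

img = cv2.imread('tube.jpg')                     # placeholder file name

# Saturation component of HSV
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
sat = hsv[:, :, 1]

# Horizontal Gaussian filtering: wide kernel in x, none in y
blur = cv2.GaussianBlur(sat, (51, 1), 0)

# Binarize; Otsu picks the threshold automatically
_, binary = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Crop from the first bright row (the precipitate surface) to the bottom
rows = np.where(binary.max(axis=1) > 0)[0]
if rows.size:
    crop = img[rows[0]:, :]
    cv2.imwrite('precipitate.png', crop)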
I want to detect dust inside the circle in the image, but edges are detected even in areas that are not actually defective. I am wondering whether handling the border region would help.
I would like to know how to clean up the pixels in those partial regions.
I am quite new to Python, and I am trying to write some code for image analysis.
Here is my initial image:
Initial image
After splitting the image into its RGB channels, converting them to gradients, applying a threshold, and merging them back together, I get the following image:
Gradient/Threshold
Now I have to draw contours around the black areas and get the size of the enclosed areas. I just don't know how to do it, since my attempts with findContours/drawContours in OpenCV have not been successful at all.
Maybe someone also knows an easier way to get that from the initial image.
Hope someone can help me here!
I am coding in Python 3.
Try adaptive thresholding on the grayscale version of the input image.
Also play with the last two parameters of adaptive thresholding (blockSize and C). You will find good results, as I have shown in the image. (Tip: create trackbars and play with the values; this is a quick and easy way to find the best values for these parameters.)
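For example, something along these lines; the window and trackbar names are arbitrary, and the ranges are just reasonable starting points:

import cv2

img = cv2.imread('input.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

def on_change(_):
    # blockSize must be odd and >= 3; C may be negative
    block = max(3, cv2.getTrackbarPos('blockSize', 'adaptive') | 1)
    c = cv2.getTrackbarPos('C', 'adaptive') - 20
    out = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                cv2.THRESH_BINARY, block, c)
    cv2.imshow('adaptive', out)

cv2.namedWindow('adaptive')
cv2.createTrackbar('blockSize', 'adaptive', 11, 51, on_change)
cv2.createTrackbar('C', 'adaptive', 22, 40, on_change)
on_change(0)
cv2.waitKey(0)
cv2.destroyAllWindows()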
I am trying to identify numbers in images. I am using the cv2.findContours function to "separate" the digits in the photo. But even after several modifications to the image, the function finds arbitrary contours, even in the very top-left corner, although the final modified image has a completely black background with only the digits in white. Why is that? Complete source code, along with the photo and everything else:
https://github.com/tanmay-edgelord/HandwrittenDigitRecognition
Source code:
https://github.com/tanmay-edgelord/HandwrittenDigitRecognition/blob/master/performRecognition.ipynb
In this code I am using the line rects = [cv2.boundingRect(ctr) for ctr in ctrs] to get bounding boxes for the contours returned by the function. On printing it out, I found that many of the bounding rectangles are (0, 0, 0, 0). If any further details/clarification are required, please comment.
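Without having run the repository code, a common workaround is to keep only external contours and filter out tiny ones before taking bounding boxes; the file name, threshold, and area cutoff here are arbitrary examples:

import cv2

img = cv2.imread('photo.jpg', cv2.IMREAD_GRAYSCALE)   # placeholder path
_, thresh = cv2.threshold(img, 90, 255, cv2.THRESH_BINARY)

# RETR_EXTERNAL keeps only outermost contours (OpenCV 4.x returns two values)
ctrs, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

MIN_AREA = 50        # arbitrary cutoff; tune to your digit size
rects = [cv2.boundingRect(ctr) for ctr in ctrs
         if cv2.contourArea(ctr) >= MIN_AREA]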
When we use some image processing library to rotate an image, the rotated image will always contain some black areas. For example, I use the following Python code to rotate an image:
from scipy import misc

img = misc.imread('test.jpg')      # load the image
img = misc.imrotate(img, 15)       # rotate by 15 degrees; corners become black
misc.imsave('rotated.jpg', img)    # imsave needs the image as its second argument
The image is as follows:
My question is: how can I rotate an image without producing black areas? I believe there exists some interpolation method to compensate for the missing area, which would make the image look more natural.
I would appreciate it if anyone could provide Python code to achieve this.
If you want to 'clone' or 'heal' the missing areas based on some part of the background, that's a complex problem, usually done with user intervention (in tools like Photoshop or GIMP).
Alternatives would be to fill the background with a calculated average colour - or just leave the original image. Neither will look 'natural' though.
The only approach that will work for all images is to crop the rotated image to the largest rectangle within the rotated area. That achieves your objective of having no black areas and looking natural, but at the cost of reducing the image size.
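A sketch of that crop, using OpenCV for the rotation and the standard geometry for the largest axis-aligned rectangle inside a rotated one:

import math
import cv2

def max_crop_size(w, h, angle):
    # Width/height of the largest axis-aligned rectangle inside a
    # w x h rectangle rotated by angle (radians)
    width_is_longer = w >= h
    side_long, side_short = (w, h) if width_is_longer else (h, w)
    sin_a, cos_a = abs(math.sin(angle)), abs(math.cos(angle))
    if side_short <= 2.0 * sin_a * cos_a * side_long or abs(sin_a - cos_a) < 1e-10:
        # Long thin rectangle (or 45 degrees): crop touches two opposite corners
        x = 0.5 * side_short
        wr, hr = (x / sin_a, x / cos_a) if width_is_longer else (x / cos_a, x / sin_a)
    else:
        cos_2a = cos_a * cos_a - sin_a * sin_a
        wr = (w * cos_a - h * sin_a) / cos_2a
        hr = (h * cos_a - w * sin_a) / cos_2a
    return wr, hr

img = cv2.imread('test.jpg')
h, w = img.shape[:2]
angle = 15.0

# Rotate about the centre, keeping the original canvas size
M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
rotated = cv2.warpAffine(img, M, (w, h))

# Crop the largest black-free rectangle from the centre
wr, hr = max_crop_size(w, h, math.radians(angle))
x0, y0 = int((w - wr) / 2), int((h - hr) / 2)
cv2.imwrite('rotated_cropped.jpg', rotated[y0:y0 + int(hr), x0:x0 + int(wr)])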
Isn't there a simple paint-fill function in your "some image library"? Simply apply it at all four corner pixels and make them white or so.
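For example, with OpenCV's floodFill; the white fill colour and the tolerance (needed because JPEG compression leaves the corners not perfectly black) are arbitrary choices:

import cv2
import numpy as np

img = cv2.imread('rotated.jpg')
h, w = img.shape[:2]

# floodFill requires a mask two pixels larger than the image
mask = np.zeros((h + 2, w + 2), np.uint8)

# Fill the dark wedge at each of the four corners with white
for seed in [(0, 0), (w - 1, 0), (0, h - 1), (w - 1, h - 1)]:
    cv2.floodFill(img, mask, seed, (255, 255, 255),
                  loDiff=(5, 5, 5), upDiff=(5, 5, 5))

cv2.imwrite('filled.jpg', img)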