I have a binary image in which the mask always appears as a horizontal band. I can use cv2.findContours() to find the boundary around the mask, but I am only interested in the top line of the mask, as shown in the image https://i.stack.imgur.com/tZm1a.png (the line is hand drawn, so it is not perfect). My question is specifically how to draw just the top line and not the lower part.
Using: OpenCV and Python
If you can detect both the top and bottom borders of the object, you can tell which one is the top line by comparing their pixel positions. You could apply a morphological gradient and cv2.connectedComponents() to detect the two borders, then average the pixel positions of each border to see which one lies higher.
I do not know whether your problem is also in drawing the straight line, but that should be easy once you have the two points (the left and right crossings): use cv2.line(image, start_point, end_point, color, thickness), where start_point and end_point are the two ends of the top part of the mask.
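Here is a minimal sketch of that idea. It assumes the band spans the full image width, so the top and bottom edges come out as separate connected components; the file names are placeholders.

import cv2
import numpy as np

# Hypothetical input: a binary mask (0/255) containing a horizontal band
mask = cv2.imread('mask.png', cv2.IMREAD_GRAYSCALE)

# Morphological gradient keeps only the border pixels of the band
kernel = np.ones((3, 3), np.uint8)
border = cv2.morphologyEx(mask, cv2.MORPH_GRADIENT, kernel)

# Split the border into connected components (top edge, bottom edge, ...)
num_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(border)

# Skip label 0 (background) and pick the component whose centroid is highest,
# i.e. has the smallest y coordinate
top_label = min(range(1, num_labels), key=lambda i: centroids[i][1])

# Draw only that component on a colour copy of the mask
result = cv2.cvtColor(mask, cv2.COLOR_GRAY2BGR)
result[labels == top_label] = (0, 0, 255)
cv2.imwrite('top_line.png', result)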
If you post the original image, it should help even more.
Hope it works.
Related
I am trying to get coordinates from a Zoom image. Suppose there are 20 "boxes" inside the Zoom image. Is there any way to get the coordinates of each box (upper left, upper right, lower left, lower right)?
I have tried different methods like Canny edge detection and erosion with Python OpenCV, but they also pick up the "content" inside each box, which I don't need. I only need the "red circle" (see the image).
Thanks
Alex
I'm trying to remove uneven white borders from different sets of pictures. They all look like these:
What I'm doing right now is just drawing a rectangle around the picture in the hope that it covers the white area:
h, w = img.shape
cv2.rectangle(img, (0,0), (w,h), (0,0,0), 2)
Depending on the picture, this may or may not work. Since there are many pictures in a similar situation, I'm looking for a more systematic solution that applies to all pictures with this kind of issue.
I think your approach is right, but it does not know whether the rectangle overlays any figures (you may increase the thickness if you know no figures fall within that margin), and the desired thickness is unknown.
You may use cv2.findContours(): find the "thick" figures (if you expect particular metrics, as in the picture), sort their extreme coordinates, and add some margin; that sets the maximum depth of the border.
In that case it is better to draw a line per side rather than a single rectangle, in case some figures are very close to the border.
Another option: first draw concentric black rectangles (or lines per side) to clear the unevenness, then draw the white lines/rectangle with the desired thickness.
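A minimal sketch of the contour-based variant, assuming dark figures on a light page; the file names, the area limit and the margin are placeholder values.

import cv2
import numpy as np

# Hypothetical input: dark figures on a light page with an uneven white border
img = cv2.imread('page.png', cv2.IMREAD_GRAYSCALE)

# An inverted threshold turns the dark figures into white blobs and drops
# both the page background and the white border
_, figures = cv2.threshold(img, 200, 255, cv2.THRESH_BINARY_INV)

# Keep only the "thick" figures; the area limit is a guess for this example
contours, _ = cv2.findContours(figures, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
big = [c for c in contours if cv2.contourArea(c) > 500]

# Extreme coordinates of all figures, plus a safety margin
pts = np.vstack([c.reshape(-1, 2) for c in big])
margin = 5
x0, y0 = pts.min(axis=0) - margin
x1, y1 = pts.max(axis=0) + margin

# Black out each side beyond that region instead of drawing one thin rectangle
img[:max(y0, 0), :] = 0   # top strip
img[y1:, :] = 0           # bottom strip
img[:, :max(x0, 0)] = 0   # left strip
img[:, x1:] = 0           # right strip
cv2.imwrite('cleaned.png', img)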
EDIT: This is a deeper explanation of a question I asked earlier, which is still not solved for me.
I'm currently trying to write some code that can extract data from some uncommon graphs in a book. I scanned the pages of the book, and using OpenCV I would like to detect some features of the graphs in order to convert them into usable data. In the left graph I'm looking for the height of the "triangles", and in the right graph for the distance from the center to the points where the dotted lines intersect the gray area. In both cases I would like to convert these values into numeric data for further use.
For the left graph, I thought of detecting all the individual colors and computing the area of each sector by counting the number of pixels of that color. Once I have the areas of the sectors, I can easily calculate their heights with basic math. The following code snippet shows how far I've gotten with identifying different colors. However, I can't manage to make this work accurately: it always seems to pick up some colors of other sectors as well, or to miss some pixels of the sector I want. I think it has something to do with the boundaries I'm using, but I can't quite figure out how to set them. Does someone know how I can determine these values?
import numpy as np
import cv2

# Read the scanned graph
img = cv2.imread('images/test2.jpg')

# BGR bounds for the colour of one sector
lower = np.array([0, 0, 100])
upper = np.array([50, 56, 150])

# Keep only the pixels whose BGR values fall inside the bounds
mask = cv2.inRange(img, lower, upper)
output = cv2.bitwise_and(img, img, mask=mask)

cv2.imshow('img', img)
cv2.imshow('mask', mask)
cv2.imshow('output', output)
cv2.waitKey(0)
cv2.destroyAllWindows()
For the right graph, I still have no idea how to extract data. I thought of identifying the center by detecting all the dotted lines, and then measuring the distance between the center and the points where these dotted lines intersect the gray area. However, I haven't yet figured out how to do this properly, since it sounds quite complex. The following code snippet shows how far I've gotten with the line detection. In this case too, the detection is far from accurate. Does someone have an idea how to tackle this problem?
import numpy as np
import cv2

# Read the image
img = cv2.imread('test2.jpg')

# Convert the image to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Apply edge detection
edges = cv2.Canny(gray, 50, 150, apertureSize=3)

# Line detection
lines = cv2.HoughLinesP(edges, 1, np.pi/180, 100, minLineLength=50, maxLineGap=20)

# Draw the detected line segments on the original image
for line in lines:
    x1, y1, x2, y2 = line[0]
    cv2.line(img, (x1, y1), (x2, y2), (0, 0, 255), 2)

cv2.imwrite('linesDetected.jpg', img)
For the left image, with your approach, try looking at the RGB histogram: if you want to use the relative areas of the segments, the sector colors should show up as significant peaks.
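A small sketch of that check, assuming the same file name as in your snippet; the number of reported peaks is arbitrary.

import cv2
import numpy as np

img = cv2.imread('images/test2.jpg')

# One histogram per BGR channel; distinct sector colours should appear as peaks
for i, channel in enumerate(('blue', 'green', 'red')):
    hist = cv2.calcHist([img], [i], None, [256], [0, 256]).ravel()
    peaks = np.argsort(hist)[-5:][::-1]   # the five strongest bins
    print(channel, 'peak intensities:', peaks, 'counts:', hist[peaks].astype(int))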
Another alternative could be to use Hough Circle Transform, which should work on circle segments. See also here.
For the right image ... let me think ...
You could create an "empty" diagram with no data inside; you know the locations of the circle segments ("cake pieces"). Then you could identify the area where the data is (the dark regions), either with a grey threshold, an RGB threshold, cv2.findContours(), or a watershed / distance-transform approach.
In the end the idea is to make a boolean overlay between the cleared image and the segments (your data) that were found. Then you can determine what share of each circle segment is covered, or, knowing the center, find the farthest data point from the center.
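A rough sketch of the farthest-point part, assuming the centre is already known (for example from the empty template); the file name, the threshold and the centre coordinates are placeholders.

import cv2
import numpy as np

img = cv2.imread('test2.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Grey-level threshold that keeps the darker "data" region; the value is a guess
_, data = cv2.threshold(gray, 180, 255, cv2.THRESH_BINARY_INV)

# Assumed centre of the diagram, e.g. taken from the empty template
cy, cx = 250, 250

# Distance from the centre to every data pixel; the maximum is the farthest point
ys, xs = np.nonzero(data)
dist = np.hypot(xs - cx, ys - cy)
print('farthest data point from the centre:', dist.max(), 'pixels')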
I need to get rectangular shapes from a noisy color-segmented image.
The problem is that sometimes the object isn't uniformly the correct color, which causes holes in the mask, and sometimes a reflection of the object in the background produces noise/false positives in the color segmentation.
The object can be in any position in the image and of any unknown rectangular size; the holes can occur anywhere inside the object, and the noise can appear on any side of it.
The only known constant is that the object is rectangular in shape.
What's the best way to filter out the noise to the left of the object and get a bounding box around the object?
Using erosion would remove detail from the bottom of the object and make the bounding box the wrong size.
I can't comment because of my rep, but I think you could try analysing the image in other color spaces. Create an upper and a lower bound for the color you want until it selects the object, leaving you with less noise, which you can then filter with erosion/dilation/opening/closing.
For example, in my project I wanted to find the bounding box of a color-changing green rectangle, so I tried a lot of different color spaces with a lot of different upper/lower bounds until I finally got something that worked. Here is a nice read about what I'm talking about: Docs
You can also try filtering the object by area after dilating it: dilation connects the closer points to one another, while the more distant ones (the noise) stay separate, leaving one big rectangle plus small noise blobs; you then keep only the components with a large area.
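A minimal sketch of that idea; the file names and the HSV bounds are placeholders you would tune for your own object.

import cv2
import numpy as np

img = cv2.imread('segmented.png')
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Assumed HSV bounds for the object colour
lower = np.array([40, 50, 50])
upper = np.array([80, 255, 255])
mask = cv2.inRange(hsv, lower, upper)

# Dilate so nearby fragments of the object merge while the noise stays small
kernel = np.ones((9, 9), np.uint8)
mask = cv2.dilate(mask, kernel, iterations=2)

# Keep the component with the biggest area and take its bounding box
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
biggest = max(contours, key=cv2.contourArea)
x, y, w, h = cv2.boundingRect(biggest)
cv2.rectangle(img, (x, y), (x + w, y + h), (0, 0, 255), 2)
cv2.imwrite('bounding_box.png', img)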
One method is to take histogram projections on the horizontal and vertical axes and select the intersection of the ranges that have high projections.
The projections are just totals of object pixels in each row and each column. When you are looking for only one rectangle, the values indicate the probability of a row/column belonging to the rectangle.
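A short sketch of the projection idea; the file name and the 0.5 cut-off are assumptions.

import cv2
import numpy as np

# 'mask' is the noisy binary segmentation of the object
mask = cv2.imread('segmented_mask.png', cv2.IMREAD_GRAYSCALE)
obj = (mask > 0).astype(np.uint8)

# Row and column projections: number of object pixels per row / per column
rows = obj.sum(axis=1)
cols = obj.sum(axis=0)

# Rows/columns belonging to the rectangle have high counts
good_rows = np.where(rows > 0.5 * rows.max())[0]
good_cols = np.where(cols > 0.5 * cols.max())[0]

y0, y1 = good_rows.min(), good_rows.max()
x0, x1 = good_cols.min(), good_cols.max()
print('bounding box:', (x0, y0), (x1, y1))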
I'm stumped. It's a simple white rectangle on a black background. HoughLines can't find the top line. It can find all the others, just not the top.
Anyone?
https://www.screencast.com/t/bNu4sptcS3a
Make sure that the top edge appears in the result of Canny edge detection.
Dilate the image to make the edge fatter, so that it can collect more votes.
Make sure that the resolution of the rho parameter is 1, so that the detector does not miss the line. See here for a description of the parameters.
Decrease the threshold value for voting, just in case. While the top and bottom lines should get an equal number of votes, this may differ in practice.
The value of rho should be lower for the top horizontal line than the bottom horizontal line.
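A sketch that applies these suggestions; the file name and the threshold value are assumptions for a white rectangle on a black background.

import cv2
import numpy as np

img = cv2.imread('rectangle.png', cv2.IMREAD_GRAYSCALE)

# Check that the top edge survives Canny
edges = cv2.Canny(img, 50, 150)

# Dilate so each edge is fatter and collects more votes
edges = cv2.dilate(edges, np.ones((3, 3), np.uint8))

# rho resolution of 1 pixel and a deliberately low vote threshold
lines = cv2.HoughLines(edges, 1, np.pi / 180, 80)

if lines is not None:
    for rho, theta in lines[:, 0]:
        # The top horizontal line should come out with the smaller rho
        print('rho = %.1f, theta = %.2f rad' % (rho, theta))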