I am trying to get the coordinates from a Zoom image.
Suppose there are 20 "boxes" inside the Zoom image. Is there any way to
get each box's coordinates (upper left, upper right, lower left, lower right)?
I have tried different methods like Canny edge detection and erosion with Python OpenCV,
but they also pick up the "content" inside each box, which I don't want to use.
I only need the "red circle" (see the image).
Thanks
Alex
The first image is the original image; the second is the processed image. The image shows a test tube containing some amount of precipitate at the bottom. I am trying to crop out only the precipitate area of the image.
To achieve this, I have processed the original image so that the upper edge of the precipitate is detected. I am able to detect the upper edge of the precipitate, as you can see in the processed image.
My question is: how do I crop the image from that upper edge down to the bottom of the test tube using OpenCV (Python) or perhaps some other library? I am open to ideas and code that can help. Thanks!
Original image
Processed image
The surface of the precipitate is bright.
This is what you get by taking the saturation component, applying Gaussian filtering horizontally, and binarizing.
Horizontal delimitation of the ROI should not be difficult.
I have a binary mask that always appears as a horizontal band in the binary image. I can use cv2.findContours() to find the boundary around the mask, but I am only interested in the top line of the mask, as shown in the image https://i.stack.imgur.com/tZm1a.png (the line is hand-drawn, so it is not perfect). My question, specifically, is how to draw just the top line and not the lower part.
Using: OpenCV and Python
If you are able to detect both the top and bottom parts of the object, you can tell which line is on top by looking at their pixel positions. You could apply a morphological gradient and cv2.connectedComponents() to detect the two borders, then average their pixel rows to determine which border is higher.
I do not know whether drawing the straight line is also part of your problem, but that should be easy once you have the two points (the left and right ends), using cv2.line(image, start_point, end_point, color, thickness), where start_point and end_point are the limits of the top part of the mask.
If you post the original image, it should help even more.
Hope it works.
I'm trying to do a perspective transform on a video of a football pitch. I have found many resources covering the case where all four corners of the pitch are visible, but how can I do this when not all corners are visible? Maybe there is a way of extrapolating beyond the video frame?
Instead of using the four corners of the field, use the part of the field that is visible. In this image, the field is visible only to the 50 yard line.
Two corners and two midpoints are used to warp the image: instead of warping the source image to the four corners of the destination image, it is warped onto two corners and two midpoints.
If 75% of the field is visible, warp the source image onto that percentage of the destination image.
image source: https://www.coloradoan.com/story/sports/csu/football/2016/12/21/story-behind-boise-states-blue-football-field/95712824/
I'm stumped. It's a simple white rectangle on a black background. Houghlines can't find the top line. It can find all the others, just not the top.
Anyone?
https://www.screencast.com/t/bNu4sptcS3a
Make sure that the top edge appears in the result of Canny edge detection.
Dilate the image to make the edges fatter, so that they can collect more votes.
Make sure that the resolution of the rho parameter is 1, so that the detector does not miss the line. See here for a description of the parameters.
Decrease the voting threshold, just in case. While the top and bottom lines are supposed to get an equal number of votes, this may differ in practice.
Note that rho is measured from the image origin at the top left, so its value will be smaller for the top horizontal line than for the bottom one.
I honestly have no idea why this doesn't work. The rotation causes the image to scale up and down constantly. I have looked around and haven't found a solution to my problem.
Main http://tinypaste.com/1c5025fa
Module http://tinypaste.com/f42f9c58
Also, can someone explain why this program's box abruptly stops rotating?
Etc 'http://tinypaste.com/82b3b30e' (remove the quotes, I'm not allowed to post more than 2 hyperlinks)
From what I can tell, the scaling you're seeing is an artifact of how the rotation operation works. As a rectangle is rotated, its bounding box necessarily becomes larger than the original rectangle. See, for example, the blue rectangle in the image below. R is the radius of the rectangle, so when it is rotated, the rectangle sweeps out the area covered by the red circle in the second image. The bounding box for the rotation is now the gray rectangle. pygame has to fill in both the red area and the gray area. What color does pygame use to fill in the padding area?
The pygame.transform.rotate docs say...
"Unless rotating by 90 degree increments, the image will be padded larger to hold the new size. If the image has pixel alphas, the padded area will be transparent. Otherwise pygame will pick a color that matches the Surface colorkey or the topleft pixel value."
So, the solution is to explicitly set the color key or alpha value for the image (in your case, when you construct your saved_image surface). Then, when saved_image is rotated, the newly produced image will have its padding area filled with the appropriate color.
Give it a go and see if that works.
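A minimal sketch of the fix, using a stand-in surface for the question's saved_image (the size and colors are placeholders):

```python
import pygame

# Stand-in for the question's saved_image surface
saved_image = pygame.Surface((40, 30))
saved_image.fill((255, 0, 0))
# Declare black as the transparent color BEFORE rotating, so pygame
# fills the rotation padding with the colorkey instead of guessing
saved_image.set_colorkey((0, 0, 0))

rotated = pygame.transform.rotate(saved_image, 45)
# The rotated surface is larger, but its padding corners are now
# colorkey-transparent when blitted
```

When blitting, re-center the rotated surface's rect on the original center (e.g. `rect = rotated.get_rect(center=old_center)`) so the sprite does not appear to jump as its bounding box grows and shrinks.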