How to get box around contour using skimage.segmentation.felzenszwalb? - python

I'm trying to get a bounding box around a segmented object at the edge of the image, i.e. an object that is only partially inside the image region, so there is no closed contour around it.
I use skimage.segmentation.felzenszwalb together with find_boundaries, clear_border, and regionprops. However, regionprops does not return those edge cases:
import cv2
from skimage.segmentation import felzenszwalb, clear_border
from skimage.measure import label, regionprops

segments_fz = felzenszwalb(cv2.cvtColor(image, cv2.COLOR_BGR2RGB), scale=300, sigma=0.5, min_size=50)
cleared = clear_border(segments_fz)
label_image = label(cleared)
regions = regionprops(label_image)
Desired outcome: a box around a segmented object near the edge of the image region.

You shouldn't use clear_border: it removes exactly the border-touching objects you are after. Without it, objects on the border are treated like any other. The bbox property of each region returned by regionprops gives you a bounding box for your object of interest, while find_boundaries and mark_boundaries let you get or visualise the boundaries between segments.
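For illustration, a minimal sketch of reading bbox from regionprops without clear_border; a toy mask stands in for your felzenszwalb output, so the bbox values here are specific to this toy example:

```python
import numpy as np
from skimage.measure import label, regionprops

# Toy mask with an object clipped by the top-left edge of the image,
# standing in for a real segmentation result.
mask = np.zeros((6, 6), dtype=int)
mask[0:3, 0:3] = 1

label_image = label(mask)
for region in regionprops(label_image):
    # bbox is (min_row, min_col, max_row, max_col), valid even for
    # objects touching the border.
    min_row, min_col, max_row, max_col = region.bbox
```

The border-touching object still gets a well-defined bounding box, (0, 0, 3, 3) in this toy case.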

Related

Get real GPS coordinates out of known edges values on python

I'm trying to find a way to convert pixel positions into real-world coordinates. I have an image whose corner (GPS) values are known.
Top left = 43.51281, -70.46223
Top right = 43.51279, -70.46213
Bottom left = 43.51272, -70.46226
Bottom right = 43.51270, -70.46215
Image with known edges values
I have another script that prints the pixel coordinates of a click on an image. Is there any way to declare the value of each corner so that it prints the real coordinates of where I clicked?
For example: the image shape is [460, 573]; when I click somewhere on it, the pixel position of that click is shown, but I want real-world coordinates instead.
Example
An option is to use OpenCV's getPerspectiveTransform() function, see this for an intuitive explanation of how the function maps real world coordinates to coordinates on another image (which in your case would be mapping the GPS values to the pixel values within the image):
https://towardsdatascience.com/how-to-track-football-players-using-yolo-sort-and-opencv-6c58f71120b8
And these for an example of the function being used:
Python Open CV perspectiveTransform()
https://www.geeksforgeeks.org/perspective-transformation-python-opencv/

rectangle coordinates of Abbyy Cloud Ocr (Xml output)

I'm trying to extract data from invoices using ABBYY Cloud OCR. I got the output as an XML file. Now I want to look for a piece of text, take its rectangle coordinates, then find the closest rectangle and take its value.
To do that I need the rectangle coordinates. The XML file does return coordinates, but I can't make sense of them.
Here is an example of the XML output (unneeded text replaced with '....'):
<line baseline="2062" l="2037" t="2033" r="2206" b="2064">....</line>
<line baseline="2101" l="295" t="2070" r="588" b="2097">....</line>
These are two different rectangles. I checked the documentation, and this is what it says:
baseline — the distance from the base line to the top edge of the page
l — the coordinate of the left border of the surrounding rectangle,
t — the coordinate of the top border of the surrounding rectangle
r — the coordinate of the right border of the surrounding rectangle
b — the coordinate of the bottom border of the surrounding rectangle
What does "the coordinate of the left border of the surrounding rectangle" mean?
Shouldn't rectangle coordinates be in the format [[x1,y1],[x2,y2],[x3,y3],[x4,y4]]?
Can you explain what these coordinates mean, or how I can use them?
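The l/t/r/b attributes describe the same rectangle more compactly: they are the x coordinates of the left/right edges and the y coordinates of the top/bottom edges, in pixels from the top-left of the page. A small sketch converting them into the four-corner format the question expects:

```python
def rect_corners(l, t, r, b):
    """Convert ABBYY's l/t/r/b attributes to four corner points,
    ordered top-left, top-right, bottom-right, bottom-left.
    Coordinates are pixels with the origin at the page's top-left."""
    return [[l, t], [r, t], [r, b], [l, b]]

# First <line> element from the question
corners = rect_corners(l=2037, t=2033, r=2206, b=2064)
```

For "closest rectangle" searches you can compare centres, e.g. ((l + r) / 2, (t + b) / 2) for each line element.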

Using Simple ITK to find bounding box

How can I get the bounding box of a 3D mask using SimpleITK in Python?
ITK has a bounding box function, but I couldn't find a similar function in SimpleITK.
You need a LabelShapeStatisticsImageFilter; after calling Execute you can get the BoundingBox for each label value.
If there are several masks, you can iterate over range(1, labelimfilter.GetNumberOfLabels() + 1).
(The range starts at 1 because the bounding box cannot be computed for the background value 0.)
import SimpleITK as sitk

labelimfilter = sitk.LabelShapeStatisticsImageFilter()
labelimfilter.Execute(yourmaskimage)
bbox = []
for i in range(1, labelimfilter.GetNumberOfLabels() + 1):
    box = labelimfilter.GetBoundingBox(i)
    bbox.append(box)
This returns the bounding box coordinates in [xstart, ystart, zstart, xsize, ysize, zsize] order.

pixel image to tile map

I am working on a game in pygame/python, and I am wondering how to turn an image into a map.
The idea is simple. The image is colored by tile type. When the program loads the image, I want the color (example) #ff13ae to be matched to a certain grass tile, and the color (example) #ff13bd to a different tile. Now, I know that I may very well have to convert from hexcodes to rgb, but that is trivial. I just want to know the way I would go about this, mainly because all my other games don't do anything of this sort.
Use pygame.PixelArray:
The PixelArray wraps a Surface and provides direct access to the surface's pixels.
[...]
pxarray = pygame.PixelArray(surface)
# Check, if the first pixel at the topleft corner is blue
if pxarray[0, 0] == surface.map_rgb((0, 0, 255)):
...
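Building on that, a minimal sketch of the colour-to-tile lookup the question describes; the hex colours come from the question, but the tile names and the exact mapping are placeholder assumptions:

```python
import pygame

# Placeholder mapping: colour-coded pixels -> tile type names
COLOR_TO_TILE = {
    (0xFF, 0x13, 0xAE): "grass",
    (0xFF, 0x13, 0xBD): "dirt",
}

def image_to_tilemap(surface):
    """Build a 2D list of tile names from a colour-coded map image."""
    pxarray = pygame.PixelArray(surface)
    width, height = surface.get_size()
    tilemap = []
    for y in range(height):
        row = []
        for x in range(width):
            # PixelArray yields mapped integers; unmap_rgb recovers the colour
            c = surface.unmap_rgb(pxarray[x, y])
            row.append(COLOR_TO_TILE.get((c.r, c.g, c.b), "unknown"))
        tilemap.append(row)
    return tilemap
```

You would load the map image with pygame.image.load() and pass the resulting Surface to image_to_tilemap() once at level-load time, then render from the tile grid instead of the image.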

PIL - Identifying an object with a virtual box

I have an image (sorry cannot link it for copyright purposes) that has a character outlined in a black line. The black line that outlines the character is the darkest thing on the picture (planned on using this fact to help find it). What I need to do is obtain four coordinates that draw a virtual box around the character. The box should be as small as possible while still keeping the outlined character inside its contents. I intend on using the box to help pinpoint what would be the central point of the character's figure by using the center point of the box.
I started with trying to identify parts of the outline. Since it's the darkest line on the image, I used getextrema() to obtain at least one point on the outline, but I can't figure out how to get more points and then combine those points to make a box.
Any insight into this problem is greatly appreciated. Cheers!
EDIT *
This is what I have now:
from PIL import Image

im = Image.open("pic.jpg")
im = im.convert("L")
lo, hi = im.getextrema()
im = im.point(lambda p: p == lo)  # 1 where the pixel equals the darkest value, 0 elsewhere
rect = im.getbbox()
x = 0.5 * (rect[0] + rect[2])
y = 0.5 * (rect[1] + rect[3])
It is fairly consistent at landing inside the figure, but it's really not that close to the center. Any idea why?
1. Find an appropriate threshold that separates the outline from the rest of the image, perhaps using the extrema you already have. If the contrast is big enough this shouldn't be too hard; just add some value to the minimum.
2. Threshold the image with the value you found, see this question. You want the dark part to become white in the binary thresholded image, so use a smaller-than threshold (lambda p: p < T).
3. Use thresholdedImage.getbbox() to get the bounding box of the outline.
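Putting the three steps together, a minimal sketch; the margin of 10 is an assumed contrast offset, tune it for your image:

```python
from PIL import Image

def outline_center(im, margin=10):
    """Threshold near the darkest value and return the centre of the
    bounding box of everything at least that dark."""
    im = im.convert("L")
    lo, hi = im.getextrema()
    # Dark outline -> white (255); getbbox() looks for non-zero pixels
    binary = im.point(lambda p: 255 if p < lo + margin else 0)
    left, top, right, bottom = binary.getbbox()
    return 0.5 * (left + right), 0.5 * (top + bottom)
```

Usage on the question's setup would be `x, y = outline_center(Image.open("pic.jpg"))`. Note this still gives the centre of the outline's bounding box, not the figure's centre of mass, so an asymmetric outline can pull it off-centre.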
