There is a Python library that does flood fill: the PIL.ImageDraw.floodfill function colors part of an image using the flood fill algorithm.
I want to write a Python application that lets me attach memos to specific areas on a map. The map input is an image; when you click an area on the map, a window pops up so you can write a memo about that area.
The problem comes when retrieving the memos written about a particular area: I want the program to show all memos related to the clicked area.
To do this, I decided to save each memo with its coordinate information so I can look it up with a flood fill algorithm. In order to do this, I need flood fill to select the area, not color it.
By "select" I mean keeping track of an area so you can decide whether or not a given coordinate lies inside it.
So what should I do to select the area? And are there any further improvements to my program design?
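One way to get a selection instead of a recoloring is OpenCV's cv2.floodFill, which can write the filled area into a separate mask via the FLOODFILL_MASK_ONLY flag. This is a minimal sketch under that substitution (PIL.ImageDraw.floodfill itself only repaints pixels); the file name, click position, and tolerance values are assumptions:

```python
import cv2
import numpy as np

img = cv2.imread("map.png")                   # hypothetical filename
h, w = img.shape[:2]
mask = np.zeros((h + 2, w + 2), np.uint8)     # floodFill requires a 1px border around the mask

def select_area(seed_x, seed_y):
    """Flood-fill from the clicked pixel and return a boolean mask of the
    selected area; the image itself is left untouched."""
    mask[:] = 0
    # 4-connectivity, write 255 into the mask, do not modify the image
    flags = 4 | cv2.FLOODFILL_MASK_ONLY | (255 << 8)
    cv2.floodFill(img, mask, (seed_x, seed_y), (0, 0, 0),
                  loDiff=(10, 10, 10), upDiff=(10, 10, 10), flags=flags)
    return mask[1:-1, 1:-1].astype(bool)

area = select_area(120, 80)                   # hypothetical click position
print("click is inside selected area:", area[80, 120])   # note [y, x] indexing
```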
I was able to detect contours with the solution from this post. Then we can simply traverse the list of contours and check whether the click coordinates fall inside one of the polygons.
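A sketch of that contour approach, assuming OpenCV 4 and a map image whose regions are light areas separated by dark boundary lines (file name and click position are placeholders):

```python
import cv2

img = cv2.imread("map.png", cv2.IMREAD_GRAYSCALE)   # hypothetical filename
_, binary = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)
contours, _ = cv2.findContours(binary, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)

def region_index(x, y):
    """Return the index of the contour that contains the click, or None."""
    for i, cnt in enumerate(contours):
        # pointPolygonTest: > 0 inside, 0 on the edge, < 0 outside
        if cv2.pointPolygonTest(cnt, (float(x), float(y)), False) >= 0:
            return i
    return None

# Memos can then be stored per region index instead of per pixel.
memos = {}
idx = region_index(120, 80)                  # hypothetical click position
memos.setdefault(idx, []).append("example memo")
```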
I'm trying to build an indoor navigation system, and I need an indoor map that a robot can navigate automatically. I'm thinking of using an image with a different color for each place (section), and I want to know how to get the coordinates of each color so that I can assign places to the colored areas using those coordinates. I am currently using PyCharm.
How can I get the coordinates of the pink, purple, and yellow parts?
The RGB codes of the colors are pink (255, 128, 255), yellow (255, 255, 0), and purple (128, 128, 255).
This is the image that I'll use.
The solution to your problem will involve two main parts:
Detecting the colors in the input image
Converting each blob to a single coordinate.
Let's take the first problem. You can use cv2.inRange() with each of the colors to get a binary mask for each of the marked squares in the input image.
Now you can use cv2.findContours on the binary mask(s) to detect the largest contour and then take its mid-point, for example via the contour's moments.
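A sketch of both steps, assuming the image stores exactly the RGB values from the question (OpenCV loads images in BGR order, so the tuples below are reordered); widen the inRange bounds if compression or anti-aliasing blurs the colors, and treat the file name as a placeholder:

```python
import cv2
import numpy as np

img = cv2.imread("floor_plan.png")           # hypothetical filename

# Colors from the question, converted from RGB to OpenCV's BGR order
colors = {
    "pink":   (255, 128, 255),
    "yellow": (0, 255, 255),
    "purple": (255, 128, 128),
}

centers = {}
for name, bgr in colors.items():
    lo = hi = np.array(bgr, dtype=np.uint8)
    mask = cv2.inRange(img, lo, hi)          # exact color match
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        c = max(contours, key=cv2.contourArea)               # largest blob
        M = cv2.moments(c)
        if M["m00"]:
            centers[name] = (int(M["m10"] / M["m00"]), int(M["m01"] / M["m00"]))

print(centers)   # e.g. {"pink": (x, y), ...}
```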
I'm trying to isolate the main art area of Pokemon cards and crop it out. For example,
I want to extract the main area, as seen in the bounding box below.
I want to use OpenCV in Python for the task. I've tried shape detection and corner detection, but I can't make them work as intended; they seem to pick up everything but the main area I want to extract.
I can't hard-code the bounding box because I want to process many cards, and the position of the main area differs from card to card.
What are the steps needed to extract the main area and save it as a PNG file?
If the area of interest is always contained inside a reasonably contrasted, quasi-rectangular frame, you can try your luck with a Hough transform: keep the long horizontal and vertical edges (obtained separately) and try to reconstitute that frame.
To begin, process a small number of cards, observe the Hough results, and figure out which systematic rules you could use to select the right segments by length, position, alignment, or embedding in a larger frame...
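As an illustration of the first step only, a rough sketch using OpenCV's probabilistic Hough transform; the file name and all thresholds are assumptions to be tuned per scan:

```python
import cv2
import numpy as np

img = cv2.imread("card.png")                 # hypothetical filename
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)

lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                        minLineLength=img.shape[1] // 3, maxLineGap=10)

horizontal, vertical = [], []
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        if abs(y2 - y1) < 5:                 # long, nearly horizontal segment
            horizontal.append((x1, y1, x2, y2))
        elif abs(x2 - x1) < 5:               # long, nearly vertical segment
            vertical.append((x1, y1, x2, y2))

# Next step (per the answer above): pick the two horizontal and two vertical
# segments that form the art frame and intersect them to get the crop box.
```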
I'm creating a GUI with Python's tkinter to visualize road scenarios (the main vehicle and nearby vehicles). I draw lines on the canvas to give a top view of the road (as in the picture below).
The user can insert a rectangle (a vehicle) and then move it freely on the canvas.
What I want is this: after the user moves the rectangle wherever they want, the rectangle's y coordinate should snap to the nearest lane, so that the final PNG looks tidy.
My thoughts on it:
Divide the canvas into regions (each region represents a lane).
Create a function that detects when the rectangle has finished moving, then adjusts its y coordinate to the nearest region (lane).
I'm not sure how to implement this in code, though. Any useful canvas functions or alternative approaches are much appreciated.
The approach I mentioned in the question worked for me.
Create a list identifying the y-axis boundaries of each region.
After creating the items you need, give them all a common tag.
Choose which part of an item to treat as its reference point (used later as the item's current location). Canvas.bbox(CURRENT) is sufficient for that.
Detect when an item enters a region by checking whether the item's current location falls within the boundaries of that region.
Use the Canvas.coords() or Canvas.move() methods to move the item to the middle of the region it has entered.
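A runnable sketch of those steps; the lane boundaries, canvas size, and rectangle dimensions are made-up values. The rectangle is dragged with the left button and snaps to its lane on release:

```python
import tkinter as tk

LANE_BOUNDS = [(0, 60), (60, 120), (120, 180)]   # hypothetical y-range per lane

root = tk.Tk()
canvas = tk.Canvas(root, width=400, height=180, bg="grey")
canvas.pack()

# lane separators (top view of the road)
for _, bottom in LANE_BOUNDS[:-1]:
    canvas.create_line(0, bottom, 400, bottom, fill="white", dash=(8, 6))

# the vehicle, identified by a common tag
canvas.create_rectangle(20, 15, 80, 45, fill="blue", tags="vehicle")

def drag(event):
    # move the rectangle so its centre follows the mouse
    canvas.coords("vehicle", event.x - 30, event.y - 15, event.x + 30, event.y + 15)

def snap_to_lane(event):
    # on release, find which region the rectangle's centre is in and
    # recentre it vertically within that lane
    x1, y1, x2, y2 = canvas.bbox("vehicle")
    cy = (y1 + y2) / 2
    for top, bottom in LANE_BOUNDS:
        if top <= cy < bottom:
            canvas.move("vehicle", 0, (top + bottom) / 2 - cy)
            break

canvas.tag_bind("vehicle", "<B1-Motion>", drag)
canvas.tag_bind("vehicle", "<ButtonRelease-1>", snap_to_lane)
root.mainloop()
```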
I would like to extract the bounding box of an SVG drawing.
Since Python is already available on the system and is used for other tasks as well, I don't want to use JavaScript or any other language. My understanding is that the bounding box of a single element can be calculated, though I don't know how.
The bounding box of the whole drawing is then just the minimum and maximum x and y values over all elements, so the bounding boxes of all elements probably need to be calculated.
I am a Python beginner, but svgwrite is probably not the right module, and so far I have been put off by the installation of rsvg on a Windows system.
Thank you for any hints and for pointing me in the right direction.
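One possible starting point on the Python side is the third-party svgpathtools package (pip install svgpathtools): take the union of the per-element bounding boxes. This is only a sketch with an assumed file name; it sees path-like elements (not text), and transform attributes may not be applied, so check the result against your drawings:

```python
from svgpathtools import svg2paths

paths, _attributes = svg2paths("drawing.svg")    # hypothetical filename

xmin = ymin = float("inf")
xmax = ymax = float("-inf")
for path in paths:
    # bbox() returns (xmin, xmax, ymin, ymax) for a single element
    x0, x1, y0, y1 = path.bbox()
    xmin, xmax = min(xmin, x0), max(xmax, x1)
    ymin, ymax = min(ymin, y0), max(ymax, y1)

print("drawing bounding box:", xmin, ymin, xmax, ymax)
```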
I'm trying to make a simple scanner program that takes an image of a piece of paper and creates a binary image from it. An example of what I am trying to do is below:
However, as you can see, the program uses only the four corners of the paper to create the image, which means it doesn't take the curvature of the paper into account.
Is there any way to "warp" the image using more than four points? By this I mean: find the bounding rectangle, and where the contour line lies outside the rectangle, shrink that row of pixels; where it lies inside, stretch it.
I feel like this should exist in some shape or form, but if it doesn't, it may be time to delve into the depths of OpenCV :D
Thanks!
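There isn't a single ready-made call for this, but one way to express the row-by-row shrink/stretch idea from the question is cv2.remap. This sketch assumes you have already traced the page contour and know the left and right page edge for every row (left_edge and right_edge are hypothetical inputs):

```python
import cv2
import numpy as np

def flatten_rows(img, left_edge, right_edge):
    """Remap each row so the detected left/right page edges land on the
    image borders; left_edge and right_edge hold one x position per row."""
    h, w = img.shape[:2]
    map_x = np.zeros((h, w), dtype=np.float32)
    map_y = np.zeros((h, w), dtype=np.float32)
    for y in range(h):
        # stretch or shrink the pixels between the two edges to the full width
        map_x[y] = np.linspace(left_edge[y], right_edge[y], w)
        map_y[y] = y
    return cv2.remap(img, map_x, map_y, interpolation=cv2.INTER_LINEAR)

# The same idea applied column-by-column (top/bottom edges) handles vertical
# curvature; a thin plate spline transform (available in OpenCV's contrib
# modules) is another option for warping with many control points.
```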