[picture example]
I have recently started learning Python with the Spyder IDE and I'm a bit lost, so I'm asking for advice.
The thing is that I need to program an algorithm that, given a random image representing a board with black spots on it (the picture I uploaded shows a 4x5 board), recognizes the edges properly and draws an AxB grid on it. I also need to save each cell separately so as to work with them.
I know that OpenCV can process images, and I have even tried auto_canny, but I don't really know how to solve this problem. Can anybody give me some pointers, please?
As I understand your question, you need as output the grid dimensions of the board in your picture (e.g. 4x3) and each cell as a separate image.
This is the way I would approach this problem (a rough sketch follows the steps):
Use Canny edge detection plus corner detection to get the intersections of the grid lines
With the coordinates of the corners you can form your regions of interest, crop each individually and save it as a new image
For the grid you can check the X's and the Y's of the coordinates. For example, if you have something like ((50, 30), (50, 35), (50, 40)), you can tell that these 3 points lie along the same grid line, since they share the same first coordinate. I would encourage you to set an error margin, as the points might not all land on exactly the same coordinate, but they should not differ by much.
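A minimal sketch of those steps, assuming Shi-Tomasi corner detection (cv2.goodFeaturesToTrack) as the corner detector; the filename and all parameters are placeholders to tune:

```python
import cv2
import numpy as np

img = cv2.imread("board.png")  # hypothetical filename
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Step 1: corners at the intersections of the grid lines.
corners = cv2.goodFeaturesToTrack(gray, maxCorners=200,
                                  qualityLevel=0.01, minDistance=10)
pts = corners.reshape(-1, 2)

# Step 3: cluster X's and Y's with an error margin, since corners on
# the same grid line will not share exactly the same coordinate.
def cluster_1d(values, tol=5):
    groups = []
    for v in sorted(values):
        if groups and v - groups[-1][-1] <= tol:
            groups[-1].append(v)
        else:
            groups.append([v])
    return [int(sum(g) / len(g)) for g in groups]

xs = cluster_1d(pts[:, 0])
ys = cluster_1d(pts[:, 1])
print(f"grid: {len(xs) - 1} x {len(ys) - 1} cells")

# Step 2: crop each cell between consecutive grid lines and save it.
for i in range(len(ys) - 1):
    for j in range(len(xs) - 1):
        cell = img[ys[i]:ys[i + 1], xs[j]:xs[j + 1]]
        cv2.imwrite(f"cell_{i}_{j}.png", cell)
```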
Good luck!
I was wondering if there is a way to get the contour of a region determined by several points. For instance, in the image below I show a collection of points as a background (in gray), but it does not look very nice, so I would like to automatically determine the edges, or contour, that the points delimit, and plot just the shape of the background instead of plotting thousands of points.
Edit: As kindly pointed out by @heltonbiker, the χ-shape described in http://www.geosensor.net/papers/duckham08.PR.pdf would do the job perfectly; however, I still have no clue how to implement it. Any help would be highly appreciated!
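Not the χ-shape itself, but a sketch of its simpler relative, the alpha shape, built from a scipy Delaunay triangulation: triangles with a large circumradius are discarded, and the edges used by exactly one surviving triangle trace the concave outline.

```python
import numpy as np
from scipy.spatial import Delaunay

def alpha_shape_edges(points, alpha):
    """Boundary edges of the alpha shape of an (N, 2) point array.
    Triangles whose circumradius exceeds 1/alpha are discarded."""
    tri = Delaunay(points)
    edges = set()
    for ia, ib, ic in tri.simplices:
        pa, pb, pc = points[ia], points[ib], points[ic]
        a = np.linalg.norm(pb - pa)
        b = np.linalg.norm(pc - pb)
        c = np.linalg.norm(pa - pc)
        s = (a + b + c) / 2.0
        area = max(s * (s - a) * (s - b) * (s - c), 1e-12) ** 0.5
        if a * b * c / (4.0 * area) < 1.0 / alpha:
            for i, j in ((ia, ib), (ib, ic), (ic, ia)):
                # Toggling removes edges shared by two kept triangles,
                # so only the outline edges survive.
                edges ^= {(min(i, j), max(i, j))}
    return edges
```

Plotting the returned index pairs as line segments gives the outline; alpha controls how tightly the shape hugs the points.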
I've been struggling with this problem for a while, so I would appreciate it if somebody could help me out with this.
I'm trying to create a physical robot that solves a puzzle. An image of the completed puzzle is provided, along with a picture of the scattered pieces.
[Scattered pieces picture]
I've gotten OpenCV to find contours, single out each piece, and rotate the pieces so they are all parallel to the horizontal axis (all "diamond" or "diagonal" pieces are rotated so they look like squares).
I've been using SIFT to match a bunch of small square pieces to the complete picture.
[Comparing an un-rotated square piece to the full picture]
The problem is that the match is not in the correct orientation. How would I go about finding out whether I need to rotate the piece by 90, 180, or 270 degrees?
Another problem I have is determining which quadrant (non-adrant?) of the full picture the piece is in. For example, this piece belongs in the bottom right corner. Is there a function that takes where the majority of the matched keypoints fall and classifies the piece into one of the nine regions?
Since SIFT features are designed to be rotation-invariant, it is a good sign that the features match even though the piece is rotated.
To determine how much rotation you need, you generally need your camera calibration parameters in order to unproject the picture into a top-down view. For your robot, it looks like the pictures are already top-down.
If this assumption holds, you can perform a regression to figure out what angle you need to rotate your piece. If you also know that your pieces are always square, you only have 4 choices to choose from. In that case, you can try all 4 and see which one is "closest" to your extracted patch (matched via SIFT to the big picture).
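For the square-piece case, a minimal brute-force sketch (assumes grayscale images and a plain sum-of-squared-differences error; both are placeholders you can swap out):

```python
import cv2
import numpy as np

def best_rotation(piece, patch):
    """Try the four 90-degree rotations of the (square) piece and keep
    the one with the smallest pixel difference to the matched patch."""
    patch = cv2.resize(patch, (piece.shape[1], piece.shape[0]))
    best_k, best_err = 0, float("inf")
    for k in range(4):
        rotated = np.rot90(piece, k)
        err = np.mean((rotated.astype(np.float32) -
                       patch.astype(np.float32)) ** 2)
        if err < best_err:
            best_k, best_err = k, err
    return 90 * best_k  # counter-clockwise degrees to rotate the piece
```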
Determining the quadrant the matched piece is in can be done by looking at the coordinates of the matched points; their distances to the corners of the full picture should give you what you need.
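There is no single built-in function for this, as far as I know, but a few lines over the matched keypoint coordinates suffice. A sketch using the centroid (the median would be more robust to stray matches):

```python
import numpy as np

def region_of(matched_points, width, height):
    """Map the centroid of the matched keypoints (in full-picture
    coordinates) to one of nine regions of a 3x3 partition."""
    cx, cy = np.mean(matched_points, axis=0)
    col = min(int(3 * cx / width), 2)
    row = min(int(3 * cy / height), 2)
    names = [["top left", "top", "top right"],
             ["left", "center", "right"],
             ["bottom left", "bottom", "bottom right"]]
    return names[row][col]
```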
I'm trying to make a simple scanner program which takes an image of a piece of paper and creates a binary image based on it. An example of what I am trying to do is below:
However, as you can see, the program uses only the 4 corners of the paper to create the image, which means it doesn't take the curvature of the paper into account.
Is there any way to "warp" the image using more than four points? By this I mean: find the bounding rectangle, and where the contour line falls outside the rectangle, shrink that row of pixels, and where it falls inside, stretch it?
I feel like this should exist in some way or form, but if it doesn't, it may be time to delve into the depths of OpenCV :D
Thanks!
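One option that does exist: OpenCV's shape module has a thin-plate spline transformer that warps with arbitrarily many point correspondences. A minimal sketch, where src_pts would be points sampled along the curved paper edges and dst_pts where each should land on the flattened page (the argument order follows the usual convention that warpImage applies backward mapping):

```python
import cv2
import numpy as np

def tps_warp(img, src_pts, dst_pts):
    """Warp img so each src point lands on its dst point, with a
    smooth thin-plate-spline interpolation everywhere in between."""
    src = np.asarray(src_pts, np.float32).reshape(1, -1, 2)
    dst = np.asarray(dst_pts, np.float32).reshape(1, -1, 2)
    matches = [cv2.DMatch(i, i, 0) for i in range(src.shape[1])]
    tps = cv2.createThinPlateSplineShapeTransformer()
    # Target shape first: warpImage samples backwards from the source.
    tps.estimateTransformation(dst, src, matches)
    return tps.warpImage(img)
```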
I am trying to find a repeatable process to find the coordinates of grid intersection points in an image. The image is a montage of many smaller images. Each 'tile' of the montage has inconsistent contrast, so my naive methods are failing (the tile boundaries are being selected). A small example:
I have made minor progress with the ideas explained in "How to remove convexity defects in a Sudoku square?" and "Grid detection in matlab".
However, the grid lines are NOT necessarily straight over the entire image, so I cannot approximate them as a grid of straight lines. I am familiar with ImageJ and Gatan DigitalMicrograph software, if anyone knows of a simple solution; otherwise MATLAB or Python OpenCV would be useful.
My first idea: write a script to chop your image into tiles, and apply some contrast normalization such as CLAHE to each one. Then reassemble the tiles using the Stitching plugin with the Linear Blending option on, to avoid the sharp tile lines. After that, segmenting the grid will become much easier; see ImageJ's Segmentation page for an introduction.
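If you end up in Python/OpenCV rather than ImageJ, the contrast-normalization step might look like this (hypothetical filename; since CLAHE already operates on a tile grid internally, choosing a grid that roughly matches the montage tiling may spare you the chop-and-reassemble step entirely):

```python
import cv2

img = cv2.imread("montage.png", cv2.IMREAD_GRAYSCALE)
# clipLimit and tileGridSize are starting points to tune.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
cv2.imwrite("montage_normalized.png", clahe.apply(img))
```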
This is the kind of image analysis problem that is better discussed on the ImageJ Forum, where people can throw ideas and script snippets back and forth to converge on a solution.
I need some help developing code that segments a binary image into components of a certain pixel density. I've been researching OpenCV algorithms, but before developing my own algorithm for this, I wanted to ask around to make sure it hasn't been done already.
For instance, I have code that imports this picture as a binary image. However, is there a way to segment the objects from the lines? I would need to segment out nodes (corners) and objects (the circle in this case), though the object does not necessarily have to be a regular shape.
The solution I thought of was to use pixel density: most of the picture is made up of lines, and the objects have a greater pixel density than the lines. Is there a way to segment them out?
Below is a working example of the task.
Original Picture:
Resulting Images after Segmentation of Nodes (intersection of multiple lines) and Components (Electronic components like the Resistor or the Voltage Source in the picture)
You can use an integral image to quickly compute the density of black pixels in a rectangular region. Detection of regions with high density can then be performed with a moving window in varying scales. This would be very similar to how face detection works but using only one super-simple feature.
It might be beneficial to thin all edges with something like skeletonization before computing the integral image, to make the result insensitive to line width.
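A minimal sketch of the integral-image density map; the window size is an assumption you would vary per scale:

```python
import cv2
import numpy as np

def density_map(binary, win=32):
    """Fraction of foreground pixels in every win x win window,
    computed in O(1) per window from an integral image."""
    ii = cv2.integral((binary > 0).astype(np.uint8))  # shape (h+1, w+1)
    # Four-corner lookup: sum over each window in one vectorized step.
    s = (ii[win:, win:] - ii[:-win, win:]
         - ii[win:, :-win] + ii[:-win, :-win])
    return s.astype(np.float32) / (win * win)
```

Thresholding this map at a few window sizes, face-detection style, picks out the dense blobs (components) while the thin lines stay below threshold.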
OpenCV has some functionality for finding contours that can arrange the contours in a hierarchy. It might be what you are looking for; if not, please add some more information about your expected output!
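A sketch of the hierarchy retrieval (hypothetical filename; note that OpenCV 3.x returns an extra first value from findContours):

```python
import cv2

binary = cv2.imread("circuit.png", cv2.IMREAD_GRAYSCALE)
# RETR_TREE builds the full nesting hierarchy: for contour i,
# hierarchy[0][i] = [next, previous, first_child, parent].
contours, hierarchy = cv2.findContours(binary, cv2.RETR_TREE,
                                       cv2.CHAIN_APPROX_SIMPLE)
```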
If I understand correctly, you want to detect the lines and the circle in your image, right?
If that is the case, have a look at the Hough line transform and the Hough circle transform.
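A starting-point sketch of both transforms on a hypothetical image of the schematic; every threshold here is a guess to tune:

```python
import cv2
import numpy as np

img = cv2.imread("circuit.png", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(img, 50, 150)

# Probabilistic Hough transform: returns line segments as (x1, y1, x2, y2).
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                        minLineLength=30, maxLineGap=5)

# Hough circle transform works on the grayscale image directly.
circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1, minDist=20,
                           param1=100, param2=30, minRadius=5, maxRadius=60)
```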