contour edges determined by points in matplotlib - python

I was wondering if there is a way to get the shape of the contour of a region determined by several points. For instance, in the image below, I show a collection of points as a background (in gray), but it does not look very nice, so I would like to automatically determine the edges or contour that the points delimit, and plot just that shape as the background instead of plotting thousands of points.
Edit: As kindly pointed out by @heltonbiker, the χ-shape described in http://www.geosensor.net/papers/duckham08.PR.pdf would do the job perfectly; however, I still have no clue how to implement it. Any help would be highly appreciated!
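The χ-shape in the linked paper works by pruning long boundary edges from a Delaunay triangulation; the closely related alpha shape is easier to sketch with scipy and may already be enough here. A minimal sketch — the alpha value is a tuning assumption, and the random points stand in for your data:

```python
import numpy as np
from scipy.spatial import Delaunay

def alpha_shape_edges(points, alpha):
    """Return boundary edges of the alpha shape of a 2-D point set.

    Triangles whose circumradius exceeds 1/alpha are discarded;
    edges that then belong to exactly one surviving triangle
    form the concave outline.
    """
    tri = Delaunay(points)
    edge_count = {}
    for ia, ib, ic in tri.simplices:
        pa, pb, pc = points[ia], points[ib], points[ic]
        a = np.linalg.norm(pb - pc)
        b = np.linalg.norm(pa - pc)
        c = np.linalg.norm(pa - pb)
        s = (a + b + c) / 2.0
        area = max(s * (s - a) * (s - b) * (s - c), 1e-12) ** 0.5
        circum_r = a * b * c / (4.0 * area)
        if circum_r < 1.0 / alpha:
            for e in ((ia, ib), (ib, ic), (ic, ia)):
                key = tuple(sorted(e))
                edge_count[key] = edge_count.get(key, 0) + 1
    # edges used by exactly one kept triangle lie on the outline
    return [e for e, count in edge_count.items() if count == 1]

rng = np.random.default_rng(0)
pts = rng.random((200, 2))           # stand-in for your point cloud
boundary = alpha_shape_edges(pts, alpha=3.0)
```

Each returned pair indexes two points of an outline segment, so the contour can be drawn with one `plt.plot` call per edge instead of thousands of scatter points.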

Related

Get a transparent colormap in 3D with Python

I have been looking for a Python library that would allow me, from a set of points in 3D and an RGB color associated with each point, to get a transparent surface (i.e., one with some degree of transparency). The idea would be to be able to display (and manipulate/rotate) things similar to the image below:
The atoms and bonds are inside a 3D surface that is smooth and constructed from a series of points each with a RGB color.
I could get a rough Poisson reconstruction running with Mayavi, but the colors appeared very pixelated and I couldn't find a way to make the surface transparent. I could obtain a lot of the features I wanted with Open3D (I actually place these objects inside crystal structures, so I need to represent bonds, atoms, crystal edges, axes and so on), but here again I couldn't find a Poisson reconstruction algorithm to recreate the smooth surface from the points, nor any functionality to make a surface transparent. Any suggestion would be appreciated.
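Two hedged pointers: recent Open3D releases do expose `o3d.geometry.TriangleMesh.create_from_point_cloud_poisson` (worth checking against the installed version), and for the transparency part specifically, matplotlib's `Poly3DCollection` accepts per-face RGBA colours, so any reconstructed mesh can be drawn translucent. A rough sketch, using a convex hull as a stand-in for a real Poisson surface — the point cloud, colour mapping, and alpha value are all made up for illustration:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless; drop this line for an interactive window
import matplotlib.pyplot as plt
from scipy.spatial import ConvexHull
from mpl_toolkits.mplot3d.art3d import Poly3DCollection

# hypothetical point cloud with one RGB colour per point
rng = np.random.default_rng(1)
pts = rng.normal(size=(300, 3))
colors = (pts - pts.min(axis=0)) / np.ptp(pts, axis=0)  # coords -> RGB in [0, 1]

hull = ConvexHull(pts)        # stand-in for a smoother reconstructed mesh
faces = pts[hull.simplices]   # (n_faces, 3, 3) triangle vertices
# average the vertex colours per face and append alpha = 0.3 (transparency)
face_rgba = np.concatenate(
    [colors[hull.simplices].mean(axis=1),
     np.full((len(hull.simplices), 1), 0.3)],
    axis=1,
)

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.add_collection3d(Poly3DCollection(faces, facecolors=face_rgba, edgecolor="none"))
ax.auto_scale_xyz(pts[:, 0], pts[:, 1], pts[:, 2])
```

The atoms and bonds could then be drawn into the same axes, showing through the translucent shell.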

How to use opencv functions on roi

My project is a REM sleep detector, and the provided pictures show the contour of my eyelid. As my eye looks in different directions, this contour moves in a distinct way. For lack of a better solution, my first attempt is to draw a grid of ROIs on my video stream; with that in place I want to use the countNonZero function or blob detection on the ROIs. Depending on which ROIs in the grid change values, movement and direction are detected. (I am sure there is a better way.)
Problem: I cannot specify one or several ROIs of my choice; the functions always work on the entire image only. How do I retrieve values from each ROI specifically? The ROIs are set up by means of multiple rectangle functions. The code is in Python. Any help greatly appreciated.
Contour of eyelid:
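For what it's worth, this usually comes down to the fact that OpenCV images in Python are numpy arrays, so an ROI is just an array slice, and countNonZero (or numpy's own count_nonzero) can be applied to each slice independently. A sketch with plain numpy — the grid size and blob position are illustrative, not taken from the question:

```python
import numpy as np

def roi_grid_counts(frame, rows, cols):
    """Split a single-channel frame into a rows x cols grid and count
    the nonzero pixels in each cell (what cv2.countNonZero would
    return for each ROI)."""
    h, w = frame.shape
    counts = np.zeros((rows, cols), dtype=int)
    for r in range(rows):
        for c in range(cols):
            # slicing the array IS the ROI; no copy of the image needed
            cell = frame[r * h // rows:(r + 1) * h // rows,
                         c * w // cols:(c + 1) * w // cols]
            counts[r, c] = np.count_nonzero(cell)
    return counts

frame = np.zeros((60, 90), dtype=np.uint8)
frame[5:15, 5:15] = 255                      # a blob in the top-left cell
counts = roi_grid_counts(frame, rows=3, cols=3)
```

Comparing `counts` between consecutive frames then tells which grid cells changed, which is the movement signal described above.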

edge detection of an image and saving cells of a grid

picture example
I have recently started learning Python with Spyder IDE and I'm a bit lost so I ask for advice.
The thing is that I need to program an algorithm that, given a random image representing a board with black spots in it (in the picture I uploaded it is a 4x5 board), recognizes the edges properly and draws an AxB grid on it. I also need to save each cell separately so as to work with them.
I know that OpenCV handles images, and I have even tried auto_canny, but I don't really know how to solve this problem. Can anybody give me some pointers, please?
As I understand from your question, you need as output the grid dimensions of the matrix in your picture (e.g. 4x3) and each cell as a separate image.
This is the way I would approach this problem:
Use Canny edge detection + corner detection to get the intersections of the lines
With the coordinates of the corners you can form your regions of interest, crop each individually and save it as a new image
For the grid you can check the X's and the Y's of the coordinates; for example, you will have something like ((50, 30), (50, 35), (50, 40)), and from this you can tell that there are 3 points on the horizontal axis. I would encourage you to set an error margin, as the points might not all land on exactly the same coordinate, but they should not differ by much.
Good luck!
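The error-margin step above can be sketched as a 1-D clustering pass over the detected coordinates; the tolerance value is a tuning assumption, and the corner coordinates are made up for illustration:

```python
def cluster_1d(values, tol):
    """Group sorted coordinates that lie within `tol` of each other,
    returning one representative per cluster. Applying this to the
    corners' x values gives the number of grid columns; to the y
    values, the number of rows."""
    groups = []
    for v in sorted(values):
        if groups and v - groups[-1][-1] <= tol:
            groups[-1].append(v)    # within the error margin: same line
        else:
            groups.append([v])      # too far away: a new grid line
    return [sum(g) / len(g) for g in groups]

# x-coordinates from a hypothetical corner detector, slightly noisy
xs = [50, 51, 120, 122, 190]
cols = cluster_1d(xs, tol=5)        # -> three distinct column positions
```

The cluster centres can then be paired up to crop each cell with a plain array slice and save it as its own image.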

How to generate bounding box data from heatmap data of an image?

I have a group of images and some separate heatmap data which (imperfectly) indicates where the subject of each image is. The heatmap data is in a numpy array with shape (224, 224, 3). I would like to generate bounding box data from this heatmap data.
The heatmaps are not always perfect, so I'm wondering if anyone can think of an intelligent way to do this.
Here are some examples of what happens when I apply the heatmap data to the image:
I found a solution to this in matlab, but I have no idea how to read this code! I am a python programmer, unfortunately.
https://github.com/metalbubble/CAM/tree/master/bboxgenerator
Anyone have any ideas about how to approach something like this?
I am not quite sure how exactly the heatmap data of your project looks, but it seems to me that you could use something like Selective Search. You could also have a look at this interesting paper. Maybe you can use this approach on your dataset.
I'm attempting a similar method for automating the creation of bounding boxes (since, let's face it, creating boxes manually takes a long time).
This other Stack Overflow post covers a similar idea:
EDIT: (I originally linked to the current post by mistake 🤦; here is the post I was actually referring to)
Generating bounding boxes from heatmap data
The problem I see is that heatmaps can be fragmented and a bit arbitrary. The solution that comes to mind initially is setting a threshold on the heatmap: in the case of the example heatmap images, the bounding box would cover all regions that are yellow/orange/red rather than, say, green/blue.
It depends on how many bounding boxes you need. You can set a threshold and have multiple bounding boxes for each of the highly activated regions, or try connecting the regions (by a morphological operation maybe) and calculate a single bounding box for connected activated pixels.
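Both suggestions — threshold the map, then connect the activated regions and take one box per connected component — can be sketched with `scipy.ndimage`. The threshold value and the synthetic heatmap below are assumptions for illustration:

```python
import numpy as np
from scipy import ndimage

def heatmap_to_boxes(heat, threshold):
    """Threshold a 2-D heatmap, connect activated pixels
    (4-connectivity by default), and return one bounding box
    (x_min, y_min, x_max, y_max) per connected region."""
    mask = heat >= threshold
    labeled, n_regions = ndimage.label(mask)
    boxes = []
    for ys, xs in ndimage.find_objects(labeled):
        boxes.append((xs.start, ys.start, xs.stop, ys.stop))
    return boxes

heat = np.zeros((224, 224))
heat[30:60, 40:80] = 0.9       # one strongly activated region
heat[150:170, 100:130] = 0.7   # another, weaker one
boxes = heatmap_to_boxes(heat, threshold=0.5)
```

To merge nearby fragments into a single box first, a morphological closing (`ndimage.binary_closing`) on `mask` before labeling does the "connecting the regions" variant; keeping them separate gives one box per activated blob.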

Image Segmentation based on Pixel Density

I need some help developing some code that segments a binary image into components of a certain pixel density. I've been doing some research in OpenCV algorithms, but before developing my own algorithm to do this, I wanted to ask around to make sure it hasn't been made already.
For instance, in this picture, I have code that imports it as a binary image. However, is there a way to segment the objects from the lines? I would need to segment nodes (corners) and objects (the circle in this case). However, an object does not necessarily have to be a regular shape.
The solution I thought of was to use pixel density. Most of the picture is made up of lines, and the objects have a greater pixel density than the lines. Is there a way to segment them out?
Below is a working example of the task.
Original Picture:
Resulting Images after Segmentation of Nodes (intersection of multiple lines) and Components (Electronic components like the Resistor or the Voltage Source in the picture)
You can use an integral image to quickly compute the density of black pixels in a rectangular region. Detection of regions with high density can then be performed with a moving window in varying scales. This would be very similar to how face detection works but using only one super-simple feature.
It might be beneficial to make all edges narrow with something like skeletonizing before computing the integral image to make the result insensitive to wide lines.
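The integral-image idea can be sketched in a few lines of numpy; the window positions and sizes below are illustrative, not tuned for the circuit picture:

```python
import numpy as np

def integral_image(binary):
    """Summed-area table: ii[y, x] holds the number of set pixels
    in binary[:y, :x], so any rectangle sum costs four lookups."""
    ii = np.zeros((binary.shape[0] + 1, binary.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = np.cumsum(np.cumsum(binary, axis=0), axis=1)
    return ii

def window_density(ii, y, x, h, w):
    """Fraction of set pixels in the h x w window at (y, x), in O(1)."""
    total = ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]
    return total / (h * w)

img = np.zeros((100, 100), dtype=np.uint8)
img[20:40, 20:40] = 1                        # a dense component-like block
ii = integral_image(img)
dense = window_density(ii, 20, 20, 20, 20)   # window over the block
sparse = window_density(ii, 60, 60, 20, 20)  # window over empty background
```

Sliding `window_density` over the image at a few scales and keeping windows above a density threshold gives candidate component regions, while thin lines score low — the face-detection-style scan described above.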
OpenCV has some functionality for finding contours that is able to put the contours in a hierarchy. It might be what you are looking for. If not, please add some more information about your expected output!
If I understand correctly, you want to detect the lines and the circle in your image, right?
If it is the case, have a look at the Hough line transform and Hough circle transform.
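For intuition, the voting scheme behind cv2.HoughLines can be illustrated with a small numpy accumulator — a toy sketch, not a substitute for OpenCV's implementation, and the test image is made up:

```python
import numpy as np

def strongest_line(edges, n_theta=180):
    """Minimal Hough line transform: each edge pixel votes for every
    (rho, theta) line through it; the cell with the most votes is the
    dominant line, with rho = x*cos(theta) + y*sin(theta)."""
    h, w = edges.shape
    diag = int(np.ceil(np.hypot(h, w)))        # max possible |rho|
    thetas = np.deg2rad(np.arange(n_theta))
    acc = np.zeros((2 * diag, n_theta), dtype=int)
    ys, xs = np.nonzero(edges)
    for x, y in zip(xs, ys):
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(n_theta)] += 1
    r, t = np.unravel_index(acc.argmax(), acc.shape)
    return r - diag, np.rad2deg(thetas[t])

edges = np.zeros((50, 50), dtype=np.uint8)
edges[:, 25] = 1                               # a vertical line at x = 25
rho, theta_deg = strongest_line(edges)
```

In practice the ready-made route is `cv2.HoughLines` / `cv2.HoughLinesP` for the wires and `cv2.HoughCircles` for the round component, run on a Canny edge map.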
