Plotting heatmaps in python

I am using heatmap.py to plot a heatmap in Python. The documentation (same page, in the details section) says that 'points' is "an iterable list of tuples, where the contents are the x,y coordinates to plot. e.g., [(1, 1), (2, 2), (3, 3)]".
So we can specify which (x, y) points to color, but how can the intensity of each point be specified?

You do not specify the intensity directly; it is inferred from the number of points you place at any given coordinate. From the documentation:
The dot is placed into the output image for each input point at the translated output image coordinate. […] Dots are blended into the output image with an additive process: as points are placed on top of each other, they become darker. After all input points have been blended into the output image, the output image is colored based on the darkness of each pixel.
So you can make an area of the heat map more intense by adding more points that lie in that area.
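For example, if one location should read three times as intense as another, repeat its tuple three times in the input list. A minimal sketch, assuming the Heatmap class and PIL-image return value described in heatmap.py's docs (the weights are made up):

import heatmap

# (x, y) points, each paired with a desired relative intensity.
weighted = [((1, 1), 1), ((2, 2), 3), ((3, 3), 5)]

# Duplicate each point in proportion to its intensity; the additive
# blending then darkens those coordinates correspondingly more.
pts = [xy for xy, w in weighted for _ in range(w)]

hm = heatmap.Heatmap()
img = hm.heatmap(pts)  # returns a PIL image
img.save("out.png")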

Related

How to create a mesh with a void using coordinates?

I have the coordinates of four 3D points representing the boundary of a plane, shown in red. I want to generate a mesh with a void whose boundary is the green points. I've searched many methods, such as openmesh, but they all require the faces to be defined vertex by vertex. I want to do something like Delaunay triangulation, but in 3D, so that the input is something like:
plane_boundaries = [...]  # red point coordinates
void_boundaries = [...]   # green point coordinates
Can anyone help?
(Images omitted: a sample of what I found, the input coordinates, and the desired output.)
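One possible approach, since the red and green points are coplanar: do a constrained 2D triangulation with a hole, e.g. with the triangle package (a Python wrapper around Shewchuk's Triangle). A hedged sketch with made-up coordinates; project the 3D points onto their plane first, then lift the mesh back:

import numpy as np
import triangle  # pip install triangle

# Made-up 2D coordinates: project the coplanar 3D points onto their
# plane first, then lift the triangulated mesh back to 3D.
plane_boundaries = np.array([(0, 0), (4, 0), (4, 4), (0, 4)], dtype=float)
void_boundaries = np.array([(1.5, 1.5), (2.5, 1.5), (2.5, 2.5), (1.5, 2.5)], dtype=float)

def ring(offset, n):
    # A closed loop of boundary segments: 0-1, 1-2, ..., (n-1)-0.
    return [(offset + i, offset + (i + 1) % n) for i in range(n)]

A = dict(
    vertices=np.vstack([plane_boundaries, void_boundaries]),
    segments=ring(0, len(plane_boundaries))
             + ring(len(plane_boundaries), len(void_boundaries)),
    holes=[[2.0, 2.0]],  # any point strictly inside the void
)
mesh = triangle.triangulate(A, "p")  # 'p': respect the segments and holes
print(mesh["triangles"])             # faces as vertex-index triples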

Density- or area-based heatmap or contour map

I have an image, given here, and the centroid and area of every small and big defect in it. For example, I have three lists x, y, and area, where x and y are the coordinates of the centroid of each defect (every yellow object counts as a defect) and area is the defect's area computed from its contour. I want to show a density map or heatmap on this image in which a defect with a larger area clearly has a higher peak than a defect with a smaller area. How can I do this in Python? For reference, I have attached one more image from a paper, given here; based on the KDE and weighted KDE of the image, it clearly shows the bigger defect (big yellow circle) as having more weight.
So you are trying to draw a heatmap superimposed on an image to represent what you are calling the "defects" in the image (it's not clear from your explanation what those are; maybe deviations from a reference image?). This sounds like it would be VERY confusing for a viewer to interpret, having to mentally separate the heatmap pixels from the pixels of the image itself. Much better would be to create a new blank image with the same dimensions as the original, then plot points in it whose center (x, y) represents the location in the original image and whose radius/color represents the area.
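A minimal sketch of that suggestion (the image size, centroids, and areas below are placeholders):

import numpy as np
import matplotlib.pyplot as plt

h, w = 480, 640                       # dimensions of the original image
x = np.array([100, 250, 500])         # centroid x coordinates
y = np.array([120, 300, 400])         # centroid y coordinates
area = np.array([20.0, 150.0, 60.0])  # contour areas of the defects

fig, ax = plt.subplots()
ax.set_xlim(0, w)
ax.set_ylim(h, 0)                     # image coordinates: y grows downward
sc = ax.scatter(x, y, s=area, c=area, cmap="hot")
fig.colorbar(sc, ax=ax, label="defect area")
plt.show()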

How to get width given by a set of points that are not aligned to axis?

I have a depth image, but I am manipulating it like a (2D) grayscale image to analyze the shape of the figures in it.
I am trying to get the width (distance) of a shape, as shown in this image. The width is marked by the red line, which also follows the direction of vector v2.
The vectors shown in the image are the result of a 2-component PCA used to find the direction of the shape (the shape in the picture is cropped, since I only need the width, in red, of this part of the shape).
I have no clue how to rotate the points to the origin, or how to project the points onto the line and then calculate the width, perhaps as the Euclidean distance from the minimum to the maximum projection.
I managed it using a rotated bounding box from cv2, as described by this solution.
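For reference, a minimal sketch of the rotated-bounding-box approach (the point set is made up):

import numpy as np
import cv2

# Placeholder 2D points extracted from the shape.
pts = np.array([[10, 12], [40, 30], [35, 55], [8, 40]], dtype=np.float32)

# minAreaRect returns ((cx, cy), (w, h), angle): the tightest rotated
# rectangle enclosing the points, regardless of axis alignment.
(cx, cy), (w_box, h_box), angle = cv2.minAreaRect(pts)
width = min(w_box, h_box)  # take the shorter side as the shape's width
print(width)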

Matplotlib Patch Size in Points

How can I draw shapes in matplotlib using point/inch dimensions?
I've gone through the patch/transform documentation, so I understand how to work in pixel, data, axes, or figure coordinates, but I cannot figure out how to dimension a rectangle in points/inches.
Ideally I would like to position a rectangle in data coordinates but set its size in points, much like how line markers work.
Here is an example of the plot I am trying to create. I currently position the black and red boxes in (data, axes) coordinates. This works when the graph is a known size, but fails when it gets rescaled, as the boxes become smaller even though the text size stays constant.
Ended up figuring it out with help from this question: How do I offset lines in matplotlib by X points
There is no built-in way to specify patch dimensions in points, so you have to manually calculate a ratio of axes or data coordinates to inches/points. This ratio will of course vary with the figure/axes size.
This is accomplished by running the (0, 0) and (1, 1) corners through the axes transform and seeing where they end up in pixel coordinates. Pixels can then be converted to inches or points via the figure DPI.
# Pixel coordinates of the axes' lower-left and upper-right corners.
t = axes.transAxes.transform([(0, 0), (1, 1)])
# Axes-fraction units per point: (dpi / 72) pixels per point, divided
# by the axes height in pixels.
t = axes.get_figure().get_dpi() / 72 / (t[1, 1] - t[0, 1])
height = 18 * t  # a height of 18 points, in axes coordinates
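Here is a self-contained sketch of the same trick (not from the original answer; the sizes and data are arbitrary), using a blended transform so x is in data coordinates and y in axes coordinates:

import matplotlib.pyplot as plt
import matplotlib.transforms as mtransforms
from matplotlib.patches import Rectangle

fig, ax = plt.subplots()
ax.plot([0, 10], [0, 5])

# Axes height in pixels, from the display positions of its corners.
corners = ax.transAxes.transform([(0, 0), (1, 1)])
axes_per_point = fig.get_dpi() / 72 / (corners[1, 1] - corners[0, 1])

# x/width in data coordinates, y/height in axes coordinates.
trans = mtransforms.blended_transform_factory(ax.transData, ax.transAxes)
ax.add_patch(Rectangle((4, 0.5), width=2, height=18 * axes_per_point,
                       transform=trans, facecolor="red", edgecolor="black"))
plt.show()

As noted above, the ratio is only valid for the figure size at the time it is computed, so it must be recomputed if the figure is resized.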

Using 3D perception in opencv2

Can anyone please explain if it is possible, and if so how, to work with cv2.getPerspectiveTransform().
I have 3D information about my image: I know the lengths of a and b, and also the different heights of c, d, e, f, and g. I made the heights different to get more 3D information, but if that isn't needed, so much the better.
Ultimately I need to know where the pink dot really is in the rectangle after applying the transform to the [x, y] position I get from the camera feed.
If you denote by C, D, E, F the positions of the four corners of the black polygon in the original image (each of them a 2D point), and by C', D', E', F' the positions of the corresponding points in your target image (probably (0, 0), (a, 0), (a, b), (0, b)), then M = cv2.getPerspectiveTransform(src, dst), with src = [C, D, E, F] and dst = [C', D', E', F'] given as 4×2 float32 arrays, is the perspective transformation from one polygon to the other.
Given the position G of the vertical projection of g onto the black polygon in the original image, you can compute its position in the target image as cv2.transform(G, M). This will return a point (x,y,z), where the last coordinate z is a normalizing term. This z is zero when your point would be "at infinity" in the target image. If z is not zero, the point you are looking for is (x/z, y/z).
If z is zero, your point is at infinity, in the direction of the support of vector (x, y) (think of the case where G would be at the intersection of the supporting lines of two opposite sides of the black polygon in the source image).
If you know that the heights of c,d,e,f,g are equal, these points are also coplanar, and the exact same method applies to c,d,e,f,g instead of C,D,E,F,G.
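A minimal sketch of this recipe (all coordinates are placeholders; note that cv2.transform returns the unnormalized homogeneous result, unlike cv2.perspectiveTransform, which performs the division for you):

import numpy as np
import cv2

a, b = 4.0, 3.0  # known side lengths of the rectangle
src = np.float32([[120, 80], [520, 95], [540, 400], [100, 380]])  # C, D, E, F
dst = np.float32([[0, 0], [a, 0], [a, b], [0, b]])                # C', D', E', F'

M = cv2.getPerspectiveTransform(src, dst)

G = np.float32([[[300, 240]]])       # projection of g in the image
x, y, z = cv2.transform(G, M)[0, 0]  # unnormalized homogeneous coordinates
if z != 0:
    print((x / z, y / z))            # the pink dot's position in the rectangle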
