How can I draw shapes in matplotlib using point/inch dimensions?
I've gone through the patch/transform documentation so I understand how to work in pixel, data, axes or figure coordinates but I cannot figure out how to dimension a rectangle in points/inches.
Ideally I would like to position a rectangle in data coordinates but set its size in points, much like how line markers work.
Here is an example of the plot I am trying to create. I currently position the black and red boxes in (data, axes) coordinates. This works when the graph is a known size, but fails when it gets rescaled: the boxes become smaller even though the text size stays constant.
Ended up figuring it out with help from this question: How do I offset lines in matplotlib by X points
There is no built-in way to specify patch dimensions in points, so you have to manually calculate a ratio of axes or data coordinates to inches/points. This ratio will of course vary with the figure/axes size.
This is accomplished by running the points (0, 0) and (1, 1) through the axes transform and seeing where they end up in pixel coordinates. Pixels can then be converted to inches or points via the figure dpi.
# Transform the axes corners (0, 0) and (1, 1) into pixel coordinates
t = axes.transAxes.transform([(0, 0), (1, 1)])
# Axes-fraction units per point:
# (pixels per inch) / (axes height in pixels) / (points per inch)
t = axes.get_figure().get_dpi() / (t[1, 1] - t[0, 1]) / 72
# Height = 18 points
height = 18 * t
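Putting this together, a minimal self-contained sketch (the figure size, dpi, and rectangle position are arbitrary choices of mine, not from the original answer):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt
from matplotlib.patches import Rectangle

fig, ax = plt.subplots(figsize=(6, 4), dpi=100)

# Pixel positions of the axes corners (0, 0) and (1, 1)
t = ax.transAxes.transform([(0, 0), (1, 1)])
# Axes-fraction units per point: dpi / (axes height in pixels) / 72
axes_per_point = fig.get_dpi() / (t[1, 1] - t[0, 1]) / 72

# An 18-point-tall rectangle, positioned in axes coordinates
height = 18 * axes_per_point
ax.add_patch(Rectangle((0.1, 0.5), 0.2, height, transform=ax.transAxes))
fig.savefig("sized_in_points.png")
```

Resizing the figure changes `axes_per_point`, so the calculation has to be redone after any rescale for the box to stay 18 points tall.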
I have two images, which I represent as 2D Arrays. Both images are the same, but with a different resolution. Say the original image shows a 10m x 10m area.
Image 1: 1 pixel per meter, size = 10 x 10
Image 2: 10 pixel per meter, size = 100 x 100
If I do
ax.imshow(my_image, origin="lower")
I will get two different plots. The first image will have values 0-10 on each axis and the second one 0-100 on each axis.
How can I set the scale / unit of the axes so that in both cases the values run from 0 to 10?
I have seen the extent keyword on https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.imshow.html. But I don't think that would be a good idea for two reasons:
I do not want to stretch the image itself, but rather keep its resolution
I have additional points in image coordinates that I want to plot. It would be easier to plot them directly rather than translating them as well
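For reference, a sketch of what the extent keyword actually does under these assumptions (random arrays stand in for the two images): it relabels the axes without resampling the pixel data, so the stored resolution is unchanged.

```python
import matplotlib
matplotlib.use("Agg")  # headless backend
import matplotlib.pyplot as plt
import numpy as np

img1 = np.random.rand(10, 10)    # 1 pixel per meter
img2 = np.random.rand(100, 100)  # 10 pixels per meter

fig, (ax1, ax2) = plt.subplots(1, 2)
# extent maps both images onto the same 0-10 m data range;
# the underlying pixel arrays are not resampled
im1 = ax1.imshow(img1, origin="lower", extent=[0, 10, 0, 10])
im2 = ax2.imshow(img2, origin="lower", extent=[0, 10, 0, 10])

print(im1.get_array().shape, im2.get_array().shape)
```

Extra points given in pixel coordinates would still need to be divided by the pixels-per-meter factor before plotting, which is the second concern raised above.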
I have an HxWx3 array where the 3 denotes x, y, z coordinates (so it's HxW 3D points, organised in a 2D array of height H and width W). Actually I only care about the angle that (x, y, z) makes to the origin. I want to turn this into an informative RGB image of HxW. Informative means that angles close to one another should always have similar colors, and angles far from one another should have different colors.
For context to anyone who works with computer vision: I want to do a colormap of the normal obtained from a depth map.
EDIT - If it makes it any easier I only need one hemisphere, not a whole sphere. So circular consistency only needs to happen in one dimension, not two (I think).
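This question has no answer in the thread, but one common sketch (my own, not from the original post) maps the azimuth of each direction to hue, which wraps around just like the angle does, and the elevation to saturation:

```python
import numpy as np
from matplotlib.colors import hsv_to_rgb

def normals_to_rgb(n):
    """Map an HxWx3 array of (x, y, z) directions to an HxW RGB image.

    Azimuth (angle in the xy-plane) -> hue, which is circular in HSV,
    so angles near the -pi/pi wrap still get nearly identical colors.
    Elevation (angle from the z-axis) -> saturation. Value is fixed at 1.
    """
    x, y, z = n[..., 0], n[..., 1], n[..., 2]
    azimuth = np.arctan2(y, x)              # in [-pi, pi]
    hue = (azimuth + np.pi) / (2 * np.pi)   # in [0, 1], circular
    norm = np.linalg.norm(n, axis=-1)
    elev = np.arccos(np.clip(z / norm, -1.0, 1.0)) / np.pi  # in [0, 1]
    hsv = np.stack([hue, elev, np.ones_like(hue)], axis=-1)
    return hsv_to_rgb(hsv)
```

With only one hemisphere (z >= 0), elevation stays in [0, 0.5], so the circular consistency really only has to hold in the azimuth dimension, matching the edit above.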
I have a depth image but I am manipulating it like a (2d) grayscale image to analyze the shape of the figures I have.
I am trying to get the width (distance) of a shape, as given by this image. The width is shown by the red line, which also follows the direction of vector v2.
I have the vectors shown in the image, resulting of a 2-components PCA to gather the direction of the shape (the shape in the picture is cropped, since I just need the width on red, on this part of the shape).
I have no clue how to rotate the points to the origin, or how to project the points onto the line and then calculate the width, e.g. via the Euclidean distance from the minimum to the maximum projection.
How to get width given by a set of points that are not aligned to axis?
I managed it using a rotated bounding box from cv2, as described by this solution.
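Alternatively, the projection idea from the question can be done directly in numpy without cv2. A sketch, assuming the points form an (N, 2) array and the PCA direction v2 is given:

```python
import numpy as np

def width_along(points, direction):
    """Width of a 2D point set along a given direction vector.

    Projects each point onto the normalized direction and returns the
    spread between the extreme projections (max - min).
    """
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)
    proj = np.asarray(points, dtype=float) @ d  # scalar projection per point
    return proj.max() - proj.min()

# e.g. the corners of an axis-aligned 4x2 box, measured along the x-axis
pts = np.array([[0, 0], [4, 0], [4, 2], [0, 2]])
print(width_along(pts, [1, 0]))  # → 4.0
```

No rotation of the point cloud is needed: projecting onto v2 and taking max minus min gives the width along that direction directly.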
I have a binary image with dots, which I obtained using OpenCV's goodFeaturesToTrack, as shown on Image1.
Image1 : Cloud of points
I would like to fit a grid of 4*25 dots on it, such as the one shown on Image2 (not all points are visible on the image, but it is a regular 4*25 point rectangle).
Image2 : Model grid of points
My model grid of 4*25 dots is parametrized by:
1 - The position of the top left corner
2 - The inclination of the rectangle with the horizon
The code below shows a function that builds such a model.
This problem seems to be close to a chessboard corner problem.
I would like to know how to fit my model cloud of points to the input image and get the position and angle of the cloud.
I can easily measure a distance between the two images (the input one and the one with the model grid), but I would like to avoid having to check every pixel and angle on the image to find the minimum of this distance.
import numpy as np

def ModelGrid(pos, angle, shape):
    # Initialization of output image of size shape
    table = np.zeros(shape)

    # Parameters
    size_pan = [32, 20]  # Pixels
    nb_corners = [4, 25]
    index = np.ndarray([nb_corners[0], nb_corners[1], 2], dtype=np.dtype('int16'))
    angle = angle * np.pi / 180

    # Creation of the table
    for i in range(nb_corners[0]):
        for j in range(nb_corners[1]):
            index[i, j, 0] = pos[0] + j*int(size_pan[1]*np.sin(angle)) + i*int(size_pan[0]*np.cos(angle))
            index[i, j, 1] = pos[1] + j*int(size_pan[1]*np.cos(angle)) - i*int(size_pan[0]*np.sin(angle))
            if 0 < index[i, j, 0] < table.shape[0]:
                if 0 < index[i, j, 1] < table.shape[1]:
                    table[index[i, j, 0], index[i, j, 1]] = 1
    return table
A solution I found, which works relatively well is the following :
First, I create an index of positions of all positive pixels, just going through the image. I will call these pixels corners.
I then use this index to compute an average angle of inclination :
For each of the corners, I look for others close enough in certain areas to define a cross: for each pixel I manage to find the ones directly to its left, right, top and bottom.
I use this cross to calculate an inclination angle, and then use the median of all obtained inclination angles as the angle for my model grid of points.
Once I have this angle, I simply build a table using this angle and the positions of each corner.
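The angle-estimation step above can be sketched as follows (the neighbor pairs here are hypothetical toy inputs; the real code derives them from the cross search):

```python
import numpy as np

def median_inclination(pairs):
    """Median inclination angle (degrees) from (left, right) neighbor pairs.

    Each pair is two (row, col) pixel positions expected to lie on the
    same grid row; the angle of the segment joining them estimates the
    grid's inclination, and the median is robust to a few bad pairs.
    """
    angles = []
    for (r0, c0), (r1, c1) in pairs:
        angles.append(np.degrees(np.arctan2(r1 - r0, c1 - c0)))
    return float(np.median(angles))

# toy example: three nearly horizontal neighbor pairs
pairs = [((0, 0), (1, 20)), ((10, 0), (11, 20)), ((20, 0), (20, 20))]
print(round(median_inclination(pairs), 2))  # → 2.86
```

Taking the median rather than the mean keeps one badly matched pair from skewing the grid angle.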
The optimization function measures the number of coincident pixels on both images, and returns the best position.
This approach works fine for most examples, but the returned 'best position' has to be one of the corners, which does not guarantee that it is the true best position, mainly when the top-left corner of the grid is missing from the cloud of corners.
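The coincidence measure used by the optimization can be sketched like this (a simplification of the step described above, on toy binary images):

```python
import numpy as np

def coincidence(model, image):
    """Number of pixels that are 'on' in both binary images."""
    return int(np.logical_and(model > 0, image > 0).sum())

# toy example: two 5x5 binary images sharing two 'on' pixels
a = np.zeros((5, 5)); a[1, 1] = a[2, 3] = a[4, 4] = 1
b = np.zeros((5, 5)); b[1, 1] = b[2, 3] = 1
print(coincidence(a, b))  # → 2
```

The optimization then evaluates this score with the model grid anchored at each candidate corner and keeps the position with the highest count, which is why the result is restricted to the set of detected corners.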
I have a problem controlling the size of objects in network plots done by igraph. The documentation of the plot command says:
bbox: The bounding box of the plot. This must be a tuple containing the desired width and height of the plot. The default plot is 600 pixels wide and 600 pixels high.
arrow_size: Size (length) of the arrowhead on the edge if the graph is directed, relative to 15 pixels.
vertex_size: Size of the vertex in pixels
So to my understanding all these arguments represent numbers of pixels.
Therefore, multiplying all of them by, say, a factor of 2, I would expect the image to scale uniformly by that factor.
Consider this following minimal example in python:
from igraph import Graph, plot

def visualize(res=1.0):
    g = Graph([(0, 1), (1, 0)], directed=True)
    layout = g.layout_fruchterman_reingold()
    plot(g, target='plot.png',
         layout=layout,
         bbox=(120*res, 120*res),
         vertex_size=5*res,
         arrow_size=10*res)
This plots a trivial graph. However, going from res=1.0 to res=2.0, the arrows and vertices become smaller relative to the image size.
How is that possible?
Just a wild guess, but could the stroke width account for the difference? The default stroke width is 1 unit, and you don't seem to scale it. Try setting vertex_frame_width=res in the call to plot().