What's wrong with the image scaling in igraph? - python

I have a problem in controlling the size of objects in network plots done by igraph. The documentation of the plot command says:
bbox: The bounding box of the plot. This must be a tuple containing the desired width and height of the plot. The default plot is 600 pixels wide and 600 pixels high.
arrow_size: Size (length) of the arrowhead on the edge if the graph is directed, relative to 15 pixels.
vertex_size: Size of the vertex in pixels
So, to my understanding, all these arguments represent numbers of pixels.
Therefore, if I multiply all of them by, say, a factor of 2, I would expect the image to scale uniformly with that factor.
Consider the following minimal example in Python:
from igraph import Graph, plot

def visualize(res=1.0):
    g = Graph([(0, 1), (1, 0)], directed=True)
    layout = g.layout_fruchterman_reingold()
    plot(g, target='plot.png',
         layout=layout,
         bbox=(120 * res, 120 * res),
         vertex_size=5 * res,
         arrow_size=10 * res)
This plots a trivial graph. However, comparing the output for res=1.0 and res=2.0, the arrows and vertices become smaller relative to the image size at the higher resolution.
How is that possible?

Just a wild guess, but could the stroke width account for the difference? The default stroke width is 1 unit, and you don't seem to scale the stroke width. Try setting vertex_frame_width=res in the call to plot().
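A minimal sketch of that suggestion, keeping the question's call unchanged apart from the added vertex_frame_width (edge_width is an extra assumption, in case the edge stroke also needs to scale):

from igraph import Graph, plot

def visualize(res=1.0):
    g = Graph([(0, 1), (1, 0)], directed=True)
    layout = g.layout_fruchterman_reingold()
    plot(g, target='plot.png',
         layout=layout,
         bbox=(120 * res, 120 * res),
         vertex_size=5 * res,
         vertex_frame_width=res,   # scale the vertex outline (stroke) together with everything else
         arrow_size=10 * res,
         edge_width=res)           # assumption: the edge stroke may also need to scale with res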

Related

Matplotlib axis scale

I have two images, which I represent as 2D Arrays. Both images are the same, but with a different resolution. Say the original image shows a 10m x 10m area.
Image 1: 1 pixel per meter, size = 10 x 10
Image 2: 10 pixels per meter, size = 100 x 100
If I do
ax.imshow(my_image, origin="lower")
I will get two different plots. The first image will have values 0-10 on each axis and the second one 0-100 on each axis.
How can I set the scale/unit of the axes so that the values are 0 to 10 in both cases?
I have seen the extent keyword on https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.imshow.html (a minimal sketch of it is shown after this list). But I don't think that would be a good idea for two reasons:
I don't want to stretch the image itself, but rather keep the resolution.
I have additional points in image coordinates I want to plot. It would be easier to just plot them normally and not have to translate them as well.
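For context, a rough sketch of the extent keyword mentioned above, using placeholder data (the asker would rather avoid this route); note that extent only relabels the data coordinates of the image, it does not resample the underlying arrays:

import matplotlib.pyplot as plt
import numpy as np

# Placeholder data: the same 10 m x 10 m area at two resolutions
img_coarse = np.random.rand(10, 10)    # 1 pixel per meter
img_fine = np.random.rand(100, 100)    # 10 pixels per meter

fig, (ax1, ax2) = plt.subplots(1, 2)
# extent=(left, right, bottom, top) labels both axes in meters, 0 to 10
ax1.imshow(img_coarse, origin="lower", extent=(0, 10, 0, 10))
ax2.imshow(img_fine, origin="lower", extent=(0, 10, 0, 10))
plt.show()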

How to get the width given by a set of points that are not aligned to an axis?

I have a depth image but I am manipulating it like a (2d) grayscale image to analyze the shape of the figures I have.
I am trying to get the width (distance) of a shape, as given by this image. The width is shown by the red line, which also follows the direction of vector v2.
I have the vectors shown in the image, resulting from a 2-component PCA used to get the direction of the shape (the shape in the picture is cropped, since I only need the width marked in red on this part of the shape).
I have no clue how to rotate the points to the origin, or how to project the points onto the line and then calculate the width, presumably as the Euclidean distance from the minimum to the maximum projection.
I managed it using a rotated bounding box from cv2, as described by this solution.
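A short sketch of that approach, assuming points is an N x 2 array of the shape's pixel coordinates (the coordinates below are just illustrative):

import cv2
import numpy as np

# Illustrative points: N x 2 array of (x, y) pixel coordinates belonging to the shape
points = np.array([[10, 12], [40, 15], [38, 30], [12, 28]], dtype=np.float32)

# Fit a rotated (minimum-area) bounding box: ((cx, cy), (w, h), angle)
(center, (w, h), angle) = cv2.minAreaRect(points)

# Take the shorter side of the rotated box as the width of the shape
width = min(w, h)
print(width)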

Python GetDist library: issue with a shift between line contours and filled contours

I am now facing a new problem using the GetDist library, available on the GetDist home page. Examples are given in the getdist plot gallery.
This is a tool to plot joint distributions for a set of covariance matrices.
Everything works fine except for one detail that bothers me: if I zoom in very deeply, I notice a slight shift between the filled contours and the line contours. I illustrate this with the following zoomed figure (the smallest contour refers to the 1 sigma uncertainty and the largest to 2 sigma), representing the ellipses of 2 covariance matrices.
In this figure, I zoom in very deeply on a subplot. Typically, if I zoom back out, I get this kind of image:
The relevant section that generates the triplot is :
# Call triangle_plot
g.triangle_plot([matrix1, matrix2],
                names,
                filled=True,
                legend_labels=[],
                contour_colors=['darkblue', 'red'],
                line_args=[{'lw': 2, 'color': 'darkblue'},
                           {'lw': 2, 'color': 'red'}],
                )
I don't understand why the filled areas (red and dark blue) slightly exceed the lines of the corresponding contours.
Maybe it is related to my computation of the limits of the ellipses along the x and y coordinates (done so that each subplot is fully filled) and to rounding errors. I tried to modify these parameters without success.
I haven't looked into the code, but what I can see from the image is that the border is half inset and half outset. I assume the border has the same transparency as the shape's fill color, so it produces the effect of a shifted dark border, while this is really just the part where the transparent border and the transparent fill overlap.
The following example shows two circles with a fill color of rgba(0,0,0,0.5). The border on circle A is fully opaque, rgba(0,0,0,1), while on circle B the border color matches the fill color (so 50% opacity, rgba(0,0,0,0.5)).
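A rough matplotlib sketch of that two-circle comparison (colors, alpha values and line widths are illustrative, not taken from GetDist):

import matplotlib.pyplot as plt
from matplotlib.patches import Circle

fig, ax = plt.subplots()
ax.set_xlim(0, 4)
ax.set_ylim(0, 2)
ax.set_aspect('equal')

# Circle A: semi-transparent fill, fully opaque edge -> the edge reads as a crisp dark ring
ax.add_patch(Circle((1, 1), 0.8, facecolor=(0, 0, 0, 0.5),
                    edgecolor=(0, 0, 0, 1.0), linewidth=6))

# Circle B: the edge uses the same semi-transparent color as the fill -> the inner half of
# the edge overlaps the fill and looks darker, which reads as a slightly shifted border
ax.add_patch(Circle((3, 1), 0.8, facecolor=(0, 0, 0, 0.5),
                    edgecolor=(0, 0, 0, 0.5), linewidth=6))

plt.show()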

Calculating how much area of an ellipse is covered by a certain pixel in Python

I am working with Python and currently trying to figure out the following: if I place an ellipse, of which the semi-axes, the centre's location and the orientation are known, on a pixel map, and the ellipse is large enough to cover multiple pixels, how do I figure out which pixel covers which percentage of the total area of the ellipse? As an example, let's take a map of 10*10 pixels (i.e. an interval of [0,9]) and an ellipse with its centre at (6.5, 6.5), semi-axes of (0.5, 1.5) and an orientation angle of 30° between the horizontal and the semi-major axis. I honestly have no idea, so any help is appreciated.
edit: To clarify, the pixels (or cells) have an area. I know the area of the ellipse, its position and its orientation, and I want to find out how much of its area is located within pixel 1, how much within pixel 2, etc.
Following the equation of an ellipse with centre (x0, y0), semi-axes a and b, and rotation angle alpha between the horizontal and the a axis:

((x - x0)*cos(alpha) + (y - y0)*sin(alpha))^2 / a^2 + (-(x - x0)*sin(alpha) + (y - y0)*cos(alpha))^2 / b^2 <= 1

The easiest way to find which pixels from your mesh are inside and which are outside is to plug each pixel's (x, y), together with alpha, into the above equation.
If the result is <= 1, the pixel is inside. Otherwise, it is outside.
You can then count the pixels.
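A small sketch of that pixel test, extended with simple supersampling to estimate the fraction of each pixel's area covered by the ellipse (the grid size, sampling density and pixel convention are illustrative assumptions):

import numpy as np

def ellipse_coverage(nx, ny, cx, cy, a, b, angle_deg, samples=10):
    """Approximate, per pixel, the fraction of the pixel's area inside the ellipse.
    a is the semi-axis along the direction given by angle_deg (the semi-major axis here)."""
    t = np.radians(angle_deg)
    cover = np.zeros((ny, nx))
    for j in range(ny):
        for i in range(nx):
            # Sample points inside pixel (i, j); assumption: the pixel spans [i, i+1) x [j, j+1)
            xs = i + (np.arange(samples) + 0.5) / samples
            ys = j + (np.arange(samples) + 0.5) / samples
            X, Y = np.meshgrid(xs, ys)
            # Rotate into the ellipse's frame and evaluate the ellipse equation
            u = (X - cx) * np.cos(t) + (Y - cy) * np.sin(t)
            v = -(X - cx) * np.sin(t) + (Y - cy) * np.cos(t)
            inside = (u / a) ** 2 + (v / b) ** 2 <= 1.0
            cover[j, i] = inside.mean()
    return cover

# Example from the question: 10x10 map, centre (6.5, 6.5), semi-major 1.5 at 30 degrees, semi-minor 0.5
cov = ellipse_coverage(10, 10, 6.5, 6.5, a=1.5, b=0.5, angle_deg=30)
# Share of the ellipse's total area per pixel (pixel area is 1, ellipse area is pi*a*b)
share = cov / (np.pi * 1.5 * 0.5)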
This is a math problem. Try Mathematics Stack Exchange rather than Stack Overflow.
I suggest you transform the plane: a translation to put the centre at the origin, a rotation to align the ellipse's axes with the x and y axes, and a dilation along x to turn it into a circle. Then work with a circle on rhombus-shaped tiles.
Your problem won't be more or less tractable in the new formulation, but the math and code you have to work with will be slightly lighter.
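A minimal sketch of that plane transformation (the helper name and the example values are hypothetical; computing the exact circle/tile intersection areas afterwards is left out):

import numpy as np

def to_unit_circle_frame(points, center, a, b, angle_rad):
    """Map points so that the ellipse (center, semi-axes a and b, rotation angle_rad)
    becomes the unit circle; a is the semi-axis along the direction of angle_rad."""
    p = np.atleast_2d(points).astype(float) - np.asarray(center, dtype=float)  # translation
    c, s = np.cos(-angle_rad), np.sin(-angle_rad)
    rot = np.array([[c, -s], [s, c]])   # rotating by -angle aligns the ellipse axes with x/y
    p = p @ rot.T
    p /= np.array([a, b], dtype=float)  # per-axis dilation turns the ellipse into the unit circle
    return p

# A point lies inside the original ellipse iff its transformed radius is <= 1
pts = to_unit_circle_frame([(6.5, 7.0), (8.0, 8.0)], center=(6.5, 6.5),
                           a=1.5, b=0.5, angle_rad=np.radians(30))
inside = np.linalg.norm(pts, axis=1) <= 1.0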

Matplotlib Patch Size in Points

How can I draw shapes in matplotlib using point/inch dimensions?
I've gone through the patch/transform documentation so I understand how to work in pixel, data, axes or figure coordinates but I cannot figure out how to dimension a rectangle in points/inches.
Ideally I would like to position a rectangle in data coordinates but set its size in points, much like how line markers work.
Here is an example of the plot I am trying to create. I currently position the black and red boxes in (data, axes) coordinates. This works when the graph is a known size, but fails when it gets rescaled, as the boxes become smaller even though the text size stays constant.
Ended up figuring it out with help from this question: How do I offset lines in matplotlib by X points
There is no built-in way to specify patch dimensions in points, so you have to manually calculate the ratio of axes or data coordinates to inches/points. This ratio will of course vary depending on the figure/axes size.
This is accomplished by running a (1, 1) point through the axes transform and seeing where it ends up in pixel coordinates. Pixels can then be converted to inches or points via the figure dpi.
# Transform the axes corners (0, 0) and (1, 1) into display (pixel) coordinates
t = axes.transAxes.transform([(0, 0), (1, 1)])
# Axes units per point: dpi converts pixels to inches, 72 converts inches to points
t = axes.get_figure().get_dpi() / (t[1, 1] - t[0, 1]) / 72
# Height = 18 points, expressed in axes coordinates
height = 18 * t
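A self-contained sketch of that approach (the figure setup and the 18-point height are illustrative; the rectangle is positioned in axes coordinates here just to keep the example short):

import matplotlib.pyplot as plt
from matplotlib.patches import Rectangle

fig, ax = plt.subplots()
ax.plot([0, 10], [0, 10])

# Axes height in display pixels, from transforming the axes corners
corners = ax.transAxes.transform([(0, 0), (1, 1)])
axes_height_px = corners[1, 1] - corners[0, 1]

# Axes units per point: pixels -> inches via dpi, inches -> points via 72
axes_per_point = fig.get_dpi() / axes_height_px / 72

# A rectangle that is 18 points tall at the current figure size
height = 18 * axes_per_point
ax.add_patch(Rectangle((0.4, 0.5), width=0.2, height=height,
                       transform=ax.transAxes, facecolor="red"))
plt.show()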
