Matplotlib axis scale - python

I have two images, which I represent as 2D Arrays. Both images are the same, but with a different resolution. Say the original image shows a 10m x 10m area.
Image 1: 1 pixel per meter, size = 10 x 10
Image 2: 10 pixel per meter, size = 100 x 100
If I do
ax.imshow(my_image, origin="lower")
I will get two different plots. The first image will have values 0-10 on each axis and the second one 0-100 on each axis.
How can I set the scale / unit of the axes so that in both cases the values run from 0 to 10?
I have seen the extent keyword on https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.imshow.html. But I don't think that would be a good idea for two reasons:
I don't want to stretch the image itself; I'd rather keep its resolution.
I have additional points in image coordinates that I want to plot. It would be easier to plot them as they are instead of translating them as well.
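One way to get 0-10 labels in both cases without touching the data coordinates (a sketch, not an answer from the thread; pixels_per_meter and the random image are assumed stand-ins) is to relabel the ticks with a FuncFormatter, so extra points can still be plotted in image coordinates:
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.ticker import FuncFormatter

pixels_per_meter = 10                  # assumed resolution of image 2
my_image = np.random.rand(100, 100)    # stand-in for the real image

fig, ax = plt.subplots()
ax.imshow(my_image, origin="lower")

# Divide the tick values by the resolution so the labels read in meters
# while the underlying data stays in pixel coordinates.
formatter = FuncFormatter(lambda px, _: f"{px / pixels_per_meter:g}")
ax.xaxis.set_major_formatter(formatter)
ax.yaxis.set_major_formatter(formatter)
plt.show()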

Related

How can I resize an image without using .resize() in python PIL

I am given the enlargement ratio, e.g. two times in width and three times in height.
Here's my idea: go through each pixel in the image, scale its position by the ratio, and place it into a newly created picture.
Well, your idea is the first step, but there are a few more. The most important one is interpolation.
For example, say you want to enlarge an image that is 2 pixels in width and 2 pixels in height to 4 pixels in width and height. Following your approach, you would set
new_img[0][0] = old_img[0][0]
new_img[2][0] = old_img[1][0]
new_img[0][2] = old_img[0][1]
new_img[2][2] = old_img[1][1]
But only 4 pixels' values are assigned; what are the values of the other 12 pixels? You have to provide an interpolation algorithm.
You can find more information from Wikipedia.
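As an illustration, a minimal nearest-neighbour enlargement (the simplest interpolation choice; enlarge_nearest is a hypothetical helper, not from the thread) could look like this:
import numpy as np

def enlarge_nearest(old_img, ratio_w, ratio_h):
    # Enlarge a 2D image by integer ratios using nearest-neighbour interpolation.
    h, w = old_img.shape
    new_img = np.empty((h * ratio_h, w * ratio_w), dtype=old_img.dtype)
    for y in range(new_img.shape[0]):
        for x in range(new_img.shape[1]):
            # Map each new pixel back to the closest original pixel.
            new_img[y, x] = old_img[y // ratio_h, x // ratio_w]
    return new_img

# Example: enlarge a 2 x 2 image by 3x in width and 2x in height.
old = np.array([[1, 2], [3, 4]])
print(enlarge_nearest(old, ratio_w=3, ratio_h=2))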

How to crop and resize images without changing the original coordinates

I have a bunch of images, each with X and Y coordinates that serve as markers for training. I have to resize the images to 500 x 500. How can I change the size without affecting the original marker positions?
I have tried tf.image.resize_with_crop_or_pad, but when I plot the dots on the cropped image using their x and y values, they end up in the wrong place (shifted toward the top left).
newImage = tf.image.resize_with_crop_or_pad(
image,
500,
500
)
I want the original image to keep its original coordinates and just gain padding around it, so that when I plot the original points on the image they still line up.
You can move the dots with
x += (500 - image.shape[1]) / 2  # shape[1] is the width (x axis)
y += (500 - image.shape[0]) / 2  # shape[0] is the height (y axis)
This should work because the function always crops and pads the image keeping it centered. You might have to check how it works when the original image has an odd size because this would give a float number and it could be half a pixel away from the intended point.
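A quick end-to-end sketch of that idea (the sizes, marker values, and tensor below are made up for illustration):
import tensorflow as tf

image = tf.zeros((300, 400, 3))   # height 300, width 400
x, y = 120.0, 80.0                # a marker in the original image

padded = tf.image.resize_with_crop_or_pad(image, 500, 500)

# The function centres the image, so the top/left padding is half the
# difference between target and original size (floored for odd differences).
x_shifted = x + (500 - image.shape[1]) // 2
y_shifted = y + (500 - image.shape[0]) // 2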

How to find the mins and maxs of an array of pixels in opencv

I'm working on a project that converts a graph into datapoints. I converted the image to grayscale and found the pixels that were only the graph from this:
pixels = np.argwhere(gray == 138)
I recreated the image so it only shows the graph in black and white.
My issue now is how to deal with the NumPy array. I converted it into a two-row array, and I have between 10 and 30 y values for each x value. I need an algorithm to decide which is the 'right' y value. I've tried taking the average of all the y's, but the results aren't great. In most places the graph is just a block of pixels, and there I want the max/min as the y value.
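One possible sketch (not an answer from the thread) is to group the argwhere output by x column and keep the maximum (or minimum) y per column; gray here is a dummy array so the snippet runs on its own:
import numpy as np

gray = np.random.randint(0, 256, size=(200, 300))   # stand-in for the real image

# np.argwhere returns (row, col) pairs, so column 0 is y and column 1 is x.
pixels = np.argwhere(gray == 138)
ys, xs = pixels[:, 0], pixels[:, 1]

# Collapse the candidate y values in each x column to a single value.
curve = {}
for x in np.unique(xs):
    column = ys[xs == x]
    # Take the maximum y per column; use .min() instead if the curve
    # should follow the other edge of a block of pixels.
    curve[int(x)] = int(column.max())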

What's wrong with the image scaling in igraph?

I have a problem in controlling the size of objects in network plots done by igraph. The documentation of the plot command says:
bbox: The bounding box of the plot. This must be a tuple containing the desired width and height of the plot. The default plot is 600 pixels wide and 600 pixels high.
arrow_size: Size (length) of the arrowhead on the edge if the graph is directed, relative to 15 pixels.
vertex_size: Size of the vertex in pixels
So to my understanding all these arguments represent numbers of pixels.
Therefore, multiplying all of them, say, by a factor of 2, I would expect the images to scale completely with this factor.
Consider the following minimal example in Python:
from igraph import Graph, plot

def visualize(res=1.0):
    g = Graph([(0, 1), (1, 0)], directed=True)
    layout = g.layout_fruchterman_reingold()
    plot(g, target='plot.png',
         layout=layout,
         bbox=(120 * res, 120 * res),
         vertex_size=5 * res,
         arrow_size=10 * res)
This plots a trivial graph. However, comparing res=1.0 and res=2.0, the arrows and vertices become smaller relative to the image size.
How is that possible?
Just a wild guess, but could the stroke width account for the difference? The default stroke width is 1 unit, and you don't seem to scale it. Try setting vertex_frame_width=res in the call to plot().
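A sketch of that suggestion applied to the question's example (the per-resolution filename is just a tweak to keep both outputs around):
from igraph import Graph, plot

def visualize(res=1.0):
    g = Graph([(0, 1), (1, 0)], directed=True)
    layout = g.layout_fruchterman_reingold()
    plot(g, target=f'plot_{res}.png',
         layout=layout,
         bbox=(120 * res, 120 * res),
         vertex_size=5 * res,
         arrow_size=10 * res,
         vertex_frame_width=res)   # scale the vertex stroke width too

visualize(1.0)
visualize(2.0)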

Matplotlib Patch Size in Points

How can I draw shapes in matplotlib using point/inch dimensions?
I've gone through the patch/transform documentation so I understand how to work in pixel, data, axes or figure coordinates but I cannot figure out how to dimension a rectangle in points/inches.
Ideally I would like to position a rectangle in data coordinates but set its size in points, much like how line markers work.
Here is an example of the plot I am trying to create. I currently position the black and red boxes in (data, axes) coordinates. This works when the graph is a known size, but fails when it gets rescaled: the boxes become smaller even though the text size stays constant.
Ended up figuring it out with help from this question: How do I offset lines in matplotlib by X points
There is no built in way to specify patch dimensions in points, so you have to manually calculate a ratio of axes or data coordinates to inches/points. This ratio will of course vary depending on figure/axes size.
This is accomplished by running the (0, 0) and (1, 1) corners of the axes through the axes transform and seeing how far apart they end up in pixel coordinates. Pixels can then be converted to inches or points via the figure dpi.
# Pixel positions of the axes corners (0, 0) and (1, 1).
t = axes.transAxes.transform([(0, 0), (1, 1)])
# Axes units per point: dpi / (axes height in pixels) / 72 points per inch.
t = axes.get_figure().get_dpi() / (t[1, 1] - t[0, 1]) / 72
# Height of the patch: 18 points, expressed in axes coordinates.
height = 18 * t
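A self-contained sketch of the same idea (the figure, data, and rectangle position are made up; the ratio has to be recomputed if the figure is resized):
import matplotlib.pyplot as plt
from matplotlib.patches import Rectangle

fig, ax = plt.subplots()
ax.plot([0, 10], [0, 5])

# Axes units per point, from the axes' pixel height and the figure dpi.
corners = ax.transAxes.transform([(0, 0), (1, 1)])
axes_per_point = fig.get_dpi() / (corners[1, 1] - corners[0, 1]) / 72

# A rectangle positioned in axes coordinates with an 18-point height.
height = 18 * axes_per_point
rect = Rectangle((0.2, 0.5), width=0.1, height=height,
                 transform=ax.transAxes, color="red")
ax.add_patch(rect)
plt.show()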
