I'm generating a dataset from 1D signals using a 2D transformation that produces 2D pictures. I'm saving the output with matplotlib using the following line:
plt.savefig(name_s, dpi='figure', bbox_inches='tight', pad_inches=0.0, format='png')
I'm having some trouble understanding whether a high numerical dpi value is needed or whether it is just a rescale, particularly since with the dpi='figure' parameter the function outputs a lower-resolution image.
Moreover, if there is a specific format I can use to generate or save better images, or any advice in general, it is gladly accepted.
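For reference, a minimal sketch of how the saved pixel size follows from figsize times dpi (the signal and file names below are made up, not from the original code). With dpi='figure', savefig reuses the figure's own dpi (fig.dpi, 100 by default), which is why the output can come out smaller than expected; a larger dpi simply re-rasterises the same drawing at more pixels per inch.
import numpy as np
import matplotlib.pyplot as plt

# placeholder signal and file names, just to show the size arithmetic
fig, ax = plt.subplots(figsize=(4, 3))              # 4 x 3 inches
ax.plot(np.sin(np.linspace(0, 10, 500)))

print(fig.get_size_inches(), fig.dpi)               # e.g. [4. 3.] 100.0

fig.savefig('out_figure_dpi.png', dpi='figure')     # ~ 4*100 x 3*100 = 400 x 300 px
fig.savefig('out_300dpi.png', dpi=300)              # ~ 1200 x 900 px, same drawing
In other words, for a figure built from vector artists the dpi only controls how finely the same drawing is rasterised.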
I've been working on temperature maps and have been trying to save images using the matplotlib colourmap viridis. Originally, I was using the following code to create images and then save them:
import matplotlib.pyplot as plt
from matplotlib.colors import Normalize

# define normalisation and colourmap
norm = Normalize(vmin=2, vmax=42)
cmap = plt.get_cmap('viridis').copy()
cmap.set_bad('white', 1.)
# apply colourmap to normalised data
map = cmap(norm(map))
# create and save figure
plt.imshow(map, cmap=cmap)
plt.axis('off')
plt.savefig(imgpath, bbox_inches='tight', pad_inches=0)
plt.cla()
This method was giving me the following image:
[image: result using plt.imshow() and plt.savefig()]
I quickly noticed that plt.savefig() does not save the image at the original pixel resolution, but at a resolution based on my screen. So instead I used plt.imsave, which preserves the original size of the array. Because I couldn't find a cmap flag for plt.imsave, I applied the colourmap directly, as follows:
plt.imsave(imgpath, arr=cmap(norm(map)), format='png')
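As a quick sanity check of the claim that plt.imsave writes one pixel per array element, a minimal sketch with made-up data and a hypothetical file name:
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import Normalize

data = np.random.uniform(2, 42, size=(60, 80))      # stand-in temperature map
norm = Normalize(vmin=2, vmax=42)
cmap = plt.get_cmap('viridis')

plt.imsave('check.png', arr=cmap(norm(data)), format='png')
print(plt.imread('check.png').shape)                 # (60, 80, 4): one pixel per cell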
However, using plt.imsave gave me a different map: [image: result using plt.imsave()]. The same thing happens when cmap is applied directly with plt.imshow().
I can't figure out what I'm doing wrong. I've tried maps from different areas, and I'm sure that I've been using exactly the same instance of the colourmap and normalisation in both methods.
Can anyone tell me what the difference between the two methods is?
Is there an easy way to make different axes scale logarithmically? I am using the Matplotlib2DGridContourViewer and I managed to make the plotted data scale logarithmically using fipy.Viewer(vars=somevariable, log=True), but I couldn't find anything regarding the axes scaling. In my case I just need the y axis to be logarithmic.
Also I have another question about the aspect ratio of the viewer. In the documentation of Matplotlib2DGridContourViewer there is a property called figaspect:
figaspect (float, optional) – desired aspect ratio of figure. If a number, use that aspect ratio. If auto, the aspect ratio will be determined from the vars’s mesh.
I work in Jupyter Notebook, and if I set a number as the aspect ratio, e.g. 0.5, it doesn't change the ratio of the lengths of the axes but rather the aspect ratio of the whole viewer/figure area, which means the data doesn't become more readable; the viewer area just gets squeezed while the plot aspect ratio stays unchanged. The reason I want to change the axes length ratio is that I have a 2D mesh with 1000x1000 cells, and for some reason the default aspect ratio is not determined by that (1:1) but rather by the set maximum coordinates for mesh.x and mesh.y. This way, if I want to examine a 1:100 mesh, I get a basically unreadable, very long plot. (I understand why it is implemented this way, but I'm using the 2D mesh for plotting time dependency on a 1D mesh, so the time and space coordinates are not even close.)
I guess my question is: is there any way to make figaspect work the way I want, or is there any other relatively easy way to set the ratio of the axes lengths? If I could tie the aspect ratio to the number of mesh cells, that would also be acceptable.
Matplotlib2DGridContourViewer wraps matplotlib's contourf(), which does not appear to offer any direct option for log scaling; however, all MatplotlibViewer subclasses have an .axes property. You should be able to call viewer.axes.set_yscale() to get log scaling.
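A minimal sketch of that suggestion (the mesh and variable here are placeholders):
import fipy as fp

mesh = fp.Grid2D(nx=100, dx=1.0, ny=100, dy=1.0)
var = fp.CellVariable(mesh=mesh, name=r"$\phi$")

viewer = fp.Matplotlib2DGridContourViewer(vars=var)
viewer.axes.set_yscale('log')   # .axes is the underlying matplotlib Axes, so any Axes method works
viewer.plot()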
As to figaspect, a concrete example of what you're trying to do would be helpful, but I think I understand. figaspect controls the aspect ratio of the figure. It is used to generate the figsize= argument to matplotlib figure(). If you set figaspect='auto', then FiPy tries to set the aspect ratio of the figure, including the colorbar, to respect the aspect ratio of the Mesh. I don't know what it does when you set the figaspect to something else and try to view only a subset of the Mesh.
It does appear that Matplotlib2DGridContourViewer doesn't respect the aspect ratio of the Mesh when the aspect ratio becomes very large. Matplotlib2DGridViewer works as expected. Compare
import fipy as fp
mesh = fp.Grid2D(nx=100, dx=0.5, ny=100, dy=1)
var = fp.CellVariable(mesh=mesh, name=r"$\phi$")
cviewer = fp.Matplotlib2DGridContourViewer(vars=var)
gviewer = fp.Matplotlib2DGridViewer(vars=var)
to
import fipy as fp
mesh = fp.Grid2D(nx=100, dx=0.01, ny=100, dy=1)
var = fp.CellVariable(mesh=mesh, name=r"$\phi$")
cviewer = fp.Matplotlib2DGridContourViewer(vars=var)
gviewer = fp.Matplotlib2DGridViewer(vars=var)
I don't know why this is. I neither use Matplotlib2DGridContourViewer nor model on crazy aspect ratio meshes. I've filed a ticket so this doesn't get lost.
Ultimately, I'm not convinced that using a 2D CellVariable to store 1D+time data is a great idea. I'd think you'd be better off extracting the 1D data into a numpy array and then using matplotlib directly to render whatever you're interested in. The whole point of a CellVariable and of a Viewer is to store and render data that lies on the geometry and topology of a Mesh. None of it is designed to deal with Mesh dimensions that aren't spatial.
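A rough sketch of that alternative, assuming the stored values follow the usual Grid2D ordering (x-fastest); the sizes and coordinates below are placeholders:
import numpy as np
import matplotlib.pyplot as plt
import fipy as fp

nx, ny = 1000, 100                       # space cells, time steps
mesh = fp.Grid2D(nx=nx, dx=1.0, ny=ny, dy=1.0)
var = fp.CellVariable(mesh=mesh, name=r"$\phi$")

# Grid2D numbers its cells x-fastest, so each reshaped row is one time level
data = np.asarray(var.value).reshape(ny, nx)

x = np.arange(nx)                        # space coordinate (cell index)
t = np.arange(1, ny + 1)                 # time coordinate (must be > 0 for a log axis)

fig, ax = plt.subplots(figsize=(6, 4))   # the aspect ratio is now whatever you choose
ax.pcolormesh(x, t, data, shading='auto', cmap='viridis')
ax.set_xlabel('space cell')
ax.set_ylabel('time step')
ax.set_yscale('log')                     # logarithmic time axis
plt.show()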
I want to increase the dpi of plots in matplotlib, but the window that displays the plot gets far too large when deviating from the default of 100. I've been using
import matplotlib
matplotlib.rcParams['figure.dpi'] = 300
matplotlib.rcParams['figure.figsize'] = (6.4, 4.8)
to increase the dpi of all plots shown and force them to have the default size, but it still has the size issue. I would like all displayed plots to be uniform in size and dpi without having to set this individually for every figure. Is there any way to do this?
I think this won't work the way you wish. The resolution (given in dpi) determines how many points an inch has, and the size defines how many inches the figure should have, but neither sets the number of pixels your monitor should display per inch. The thing is that matplotlib and Python do not resize plots (only images). So if you save the plot as an image, open it again (with any image viewer) and click "show me 100% size", the figure will behave as you intended. But while drawing the pixels of a plot (which is what matplotlib does when you call matplotlib.pyplot.draw()), it needs to draw every pixel, which is why one might think that figure size and dpi both result in a larger plot in matplotlib. Essentially, the figure size tells the image viewer how to resize the image when displaying it.
I found this post particularly useful for explaining the different behaviour of size and resolution.
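A small sketch of that point: the pixel count of the saved file is just the figure size in inches times the dpi, while the preview window grows with both (the file names are placeholders):
import matplotlib
import matplotlib.pyplot as plt

matplotlib.rcParams['figure.figsize'] = (6.4, 4.8)
matplotlib.rcParams['figure.dpi'] = 300

fig, ax = plt.subplots()
ax.plot([0, 1], [0, 1])

fig.savefig('plot_300dpi.png')                 # ~ 6.4*300 x 4.8*300 = 1920 x 1440 px
fig.savefig('plot_100dpi.png', dpi=100)        # ~ 640 x 480 px, same drawing rescaled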
I'm looking for a way to directly convert my 2D array to the RGB data of matplotlib's matshow() method. What I've gathered from the source code is that it uses the imshow() method, which sets some parameters and then calls add_image(), which, based on https://github.com/matplotlib/matplotlib/blob/1722bfd6ae4fac707811c8e8dca171138cb5d2a6/lib/matplotlib/axes/_base.py, calls append(image). That's where I'm stuck.
So, is there any way to directly map a raw 2D array to the RGB image array that matshow() would produce (with the colormap applied), without calling the plotting?
Edit: In case my explanation above is hard to understand: I have a 2D matrix (not a grayscale image array). I'm going to plot it using matshow() with a certain colormap and vmin & vmax values. I can extract the image pixel values as a 3D array using fig.canvas.show() and np.fromstring(), as in here. However, my application has a very strict time constraint, and plotting the data takes too much time (and is also very unstable). So instead of plotting (which sequentially calls figure(), subplot(), matshow()...), I want to get the 3D image data directly (through some mapping) from my original 2D matrix. I believe it is possible if I understand how pyplot maps the data, but unfortunately I couldn't find the solution in the source code yet.
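A sketch of the per-element colour mapping that the image machinery applies before any rendering: normalise with vmin/vmax, then look the values up in the colormap (the array, vmin and vmax here are made up):
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.cm import ScalarMappable
from matplotlib.colors import Normalize

data = np.random.rand(64, 64) * 10.0             # placeholder 2D matrix

norm = Normalize(vmin=0.0, vmax=10.0)
cmap = plt.get_cmap('viridis')

rgba = cmap(norm(data))                          # float RGBA, shape (64, 64, 4)
rgb8 = (rgba[..., :3] * 255).astype(np.uint8)    # 8-bit RGB, if that's what is needed

# the same mapping via ScalarMappable, returning uint8 RGBA directly
rgba_bytes = ScalarMappable(norm=norm, cmap=cmap).to_rgba(data, bytes=True)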
I'm trying to create a pdf using Python's reportlab module.
I generated a PNG with matplotlib and saved it in the PDF file using reportlab's canvas.drawImage method.
My problem is that the generated PNG file is very fuzzy. I specified the size in inches with plt.figure(figsize=(20,10)) and saved the picture with the plt.savefig method.
This works out perfectly (except for the fuzzy quality of the picture).
But when I increase the dpi within the savefig method, the size of the picture increases.
Is there any way to improve the dpi without changing the picture size?
Or is there a way to resize it to the predefined values?
Thanks!
f = df.plot()                     # pandas .plot() returns a matplotlib Axes
fig = f.get_figure()              # get the parent Figure
fig.set_size_inches((2, 2))       # keep the physical size small...
fig.savefig('C:/temp/foo.png', bbox_inches='tight', dpi=1500)  # ...and rasterise at high dpi
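If the goal is to keep the picture at a fixed size on the PDF page while raising its resolution, one option (a sketch; the page coordinates and file names are placeholders) is to give drawImage an explicit width and height, so the high-dpi PNG is simply drawn into that box:
from reportlab.lib.units import inch
from reportlab.pdfgen import canvas

c = canvas.Canvas('C:/temp/report.pdf')
# place the high-dpi PNG into a fixed 2 x 2 inch box; the extra pixels
# add sharpness without changing the drawn size
c.drawImage('C:/temp/foo.png', x=1 * inch, y=7 * inch,
            width=2 * inch, height=2 * inch, preserveAspectRatio=True)
c.showPage()
c.save()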