FiPy viewer logarithmic axes scaling and aspect ratio - python

Is there an easy way to make different axes scale logarithmically? I am using the Matplotlib2DGridContourViewer, and I managed to make the plotted data scale logarithmically using fipy.Viewer(vars=somevariable, log=True), but I couldn't find anything regarding the scaling of the axes themselves. In my case I just need the y-axis to be logarithmic.
Also I have another question about the aspect ratio of the viewer. In the documentation of Matplotlib2DGridContourViewer there is a property called figaspect:
figaspect (float, optional) – desired aspect ratio of figure. If a number, use that aspect ratio. If auto, the aspect ratio will be determined from the vars’s mesh.
I work in Jupyter Notebook, and if I set a desired number as the aspect ratio, e.g. 0.5, it doesn't change the ratio of the axis lengths; it changes the aspect ratio of the whole viewer/figure area. The data doesn't become more readable, the viewer area just gets squeezed while the plot's aspect ratio stays unchanged. The reason I want to change the axis length ratio is that I have a 2D mesh with 1000x1000 cells, and for some reason the default aspect ratio is not determined by that (1:1), but by the maximum coordinates set for mesh.x and mesh.y. This way, if I want to examine a 1:100 mesh, I get a basically unreadable, very long plot. (I understand why it is implemented this way, but I'm using the 2D mesh to plot time dependency on a 1D mesh, so the time and space coordinates are not even close.)
I guess my question is whether there is any way to make figaspect work the way I want, or whether there is any other relatively easy way to set the ratio of the axis lengths. If I could tie the aspect ratio to the number of mesh cells, that would also be acceptable.

Matplotlib2DGridContourViewer wraps matplotlib's contourf(), which does not appear to offer any direct option for log scaling. However, all MatplotlibViewer subclasses have an .axes property, so you should be able to call viewer.axes.set_yscale('log') to get log scaling.
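For example, a minimal sketch (untested; the mesh and variable names are just illustrative, and log= is the keyword from your own snippet):
import fipy as fp
mesh = fp.Grid2D(nx=100, ny=100)
var = fp.CellVariable(mesh=mesh, name=r"$\phi$")
viewer = fp.Matplotlib2DGridContourViewer(vars=var, log=True)  # log=True scales the data
viewer.axes.set_yscale('log')  # .axes is the underlying matplotlib Axes
viewer.plot()
Note that a log axis needs strictly positive limits, so you may also have to adjust viewer.axes.set_ylim() if your mesh starts at y = 0.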
As to figaspect, a concrete example of what you're trying to do would be helpful, but I think I understand. figaspect controls the aspect ratio of the figure; it is used to generate the figsize= argument to matplotlib's figure(). If you set figaspect='auto', then FiPy tries to set the aspect ratio of the figure, including the colorbar, to respect the aspect ratio of the Mesh. I don't know what it does when you set figaspect to something else and try to view only a subset of the Mesh.
It does appear that Matplotlib2DGridContourViewer doesn't respect the aspect ratio of the Mesh when the aspect ratio becomes very large. Matplotlib2DGridViewer works as expected. Compare
import fipy as fp
mesh = fp.Grid2D(nx=100, dx=0.5, ny=100, dy=1)  # 50 x 100 domain: a mild 1:2 aspect ratio
var = fp.CellVariable(mesh=mesh, name=r"$\phi$")
cviewer = fp.Matplotlib2DGridContourViewer(vars=var)
gviewer = fp.Matplotlib2DGridViewer(vars=var)
to
import fipy as fp
mesh = fp.Grid2D(nx=100, dx=0.01, ny=100, dy=1)  # 1 x 100 domain: an extreme 1:100 aspect ratio
var = fp.CellVariable(mesh=mesh, name=r"$\phi$")
cviewer = fp.Matplotlib2DGridContourViewer(vars=var)
gviewer = fp.Matplotlib2DGridViewer(vars=var)
I don't know why this is. I neither use Matplotlib2DGridContourViewer nor model on crazy aspect ratio meshes. I've filed a ticket so this doesn't get lost.
Ultimately, I'm not convinced that using a 2D CellVariable to store 1D+time data is a great idea. I think you'd be better off extracting the 1D data into a numpy array and then using matplotlib directly to render whatever you're interested in. The whole point of a CellVariable and of a Viewer is to store and render data that lies on the geometry and topology of a Mesh. None of it is designed to deal with Mesh dimensions that aren't spatial.
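For instance, something along these lines (a sketch; the array shapes and time axis are stand-ins for however you accumulate your timesteps):
import numpy as np
import matplotlib.pyplot as plt
nt, nx = 500, 1000                       # hypothetical: 500 timesteps on 1000 cells
history = np.random.rand(nt, nx)         # stand-in for var.value saved at each step
x = np.linspace(0., 1., nx)              # stand-in for mesh.cellCenters[0]
t = np.linspace(1e-3, 10., nt)           # hypothetical time axis (positive, for the log scale)
fig, ax = plt.subplots(figsize=(6, 4))   # the figure aspect is entirely up to you here
pcm = ax.pcolormesh(x, t, history, shading='auto')
ax.set_yscale('log')                     # log time axis, if desired
ax.set_xlabel('x'); ax.set_ylabel('t')
fig.colorbar(pcm)
plt.show()
With matplotlib driving the plot directly, the axis lengths, scales, and units are completely decoupled from the Mesh geometry.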

Related

Description of arguments in surface plot function in Python

Hi, I am writing code which uses ax.plot_surface() to plot data on a unit sphere for theta = linspace(0,pi,100) and phi = linspace(0,2*pi,100).
For some reason my image is distorted, in the sense that the sphere sits in front of the axes. Does anyone have any idea why this would be the case?
3D plotting isn't necessarily a good place to start learning how to use plotting libraries; the fundamentals are more often explained in simpler 2d plots. But to get started,
read the 3d tutorial
use the 3d examples for reference
experiment! Produce the same figure with different parameter settings.
The specific parameters you asked about:
linewidth is not relevant for plot_surface, but it does make a big difference in the closely related plot_wireframe. See this example and experiment with the linewidth value; 1 is the default.
alpha refers to the transparency of a graphical element. Any value < 1 means it is possible to see other lines etc. even directly behind it. This example uses alpha=0.3 in 3d.
antialiased controls whether the rendering is done with anti-aliasing or not. It is more expensive to use, but it reduces visual distortions. See https://stackoverflow.com/a/8750463 which also links this explanation of the method.
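For instance, a minimal sphere where you can toggle all three parameters (a sketch using the same theta/phi grids as the question):
import numpy as np
import matplotlib.pyplot as plt
theta, phi = np.meshgrid(np.linspace(0, np.pi, 100),
                         np.linspace(0, 2 * np.pi, 100))
# unit sphere in spherical coordinates
x = np.sin(theta) * np.cos(phi)
y = np.sin(theta) * np.sin(phi)
z = np.cos(theta)
ax = plt.figure().add_subplot(projection='3d')
# vary linewidth, alpha, and antialiased to see each parameter's effect
ax.plot_surface(x, y, z, linewidth=1, alpha=0.3, antialiased=True)
plt.show()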

Python adaptive histogram widths

I am currently working on a project where I have to bin up to 10-dimensional data. This works totally fine with numpy.histogramdd; however, I have run into one serious obstacle:
My parameter space is pretty large, but only a fraction of it is actually inhabited by data (say, maybe a few % or so...). In those regions the data is quite rich, so I would like to use relatively small bin widths. The problem, however, is that the RAM usage totally explodes. I see usage of 20 GB+ for only 5 dimensions, which is already completely impractical. I tried defining the grid myself, but the problem persists...
My idea would be to manually specify the bin edges, where I just use very large bin widths for empty regions of the data space. Only in regions where I actually have data would I need to go to a finer scale.
I was wondering if anyone here knows of such an implementation already which works in arbitrary numbers of dimensions.
thanks 😊
I think you should first remap your data, then create the histogram, and then interpret the histogram knowing the values have been transformed. One possibility would be to tweak the histogram tick labels so that they display mapped values.
One possible way of doing it, for example, would be:
Sort one dimension of data as an unidimensional array;
Integrate this array, so you have a cumulative distribution;
Find the steepest part of this distribution, and choose a horizontal interval corresponding to a "good" bin size for the peak of your histogram - that is, a size that gives you good resolution;
Find the size of this same interval along the vertical axis. That will give you a bin size to apply along the vertical axis;
Create the bins using the vertical span of that bin - that is, "draw" horizontal, equidistant lines to create your bins, instead of the most common way of drawing vertical ones;
That way, you'll have lots of bins where the data is dense, and fewer bins where the data is sparse.
Two things to consider:
The mapping function is the cumulative distribution of the sorted values along that dimension. This can be quite arbitrary. If the distribution resembles some well known algebraic function, you could define it mathematically and use it to perform a two-way transform between actual value data and "adaptive" histogram data;
This applies to only one dimension. Care must be taken as to how this would work if the histograms from multiple dimensions are to be combined.
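In numpy terms, the steps above amount to placing the bin edges at equally spaced quantiles of each dimension, which numpy.histogramdd accepts directly as a list of edge arrays. A sketch (the sample data here is made up):
import numpy as np
rng = np.random.default_rng(0)
data = rng.normal(size=(100_000, 5))    # hypothetical 5-D sample
nbins = 20
# equally spaced quantiles per dimension: each 1-D bin holds roughly the
# same number of points, so the bins are fine where the data is dense and
# coarse where it is sparse
edges = [np.quantile(data[:, d], np.linspace(0, 1, nbins + 1))
         for d in range(data.shape[1])]
hist, _ = np.histogramdd(data, bins=edges)
print(hist.shape)                       # (20, 20, 20, 20, 20)
Because the bin counts per dimension stay modest, the histogram array stays small even though the effective resolution in the populated regions is high.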

How to make data points in a 3D python scatter plot look like "discs" instead of "spheres"

In a standard 3D python plot, each data point is, by default, represented as a sphere in 3D. For the data I'm plotting, the z-axis is very sensitive, while the x and y axes are very general, so is there a way to make each point on the scatter plot spread out over the x and y direction as it normally would with, for example, s=500, but not spread at all along the z-axis? Ideally this would look like a set of stacked discs, rather than overlapping spheres.
Any ideas? I'm relatively new to python and I don't know if there's a way to make custom data points like this with a scatter plot.
I actually was able to do this using the matplotlib.patches library, creating a patch for every data point, and then making it whatever shape I wanted with the help of mpl_toolkits.mplot3d.art3d.
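A sketch of that approach (the data and disc radius here are made up; art3d.pathpatch_2d_to_3d is the helper that flattens a 2D patch into a plane in 3D):
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.patches import Circle
from mpl_toolkits.mplot3d import art3d
ax = plt.figure().add_subplot(projection='3d')
xs, ys, zs = np.random.rand(3, 20)      # hypothetical data
for px, py, pz in zip(xs, ys, zs):
    disc = Circle((px, py), radius=0.05, alpha=0.7)
    ax.add_patch(disc)
    # keep the patch flat in the plane z = pz, normal to the z-axis
    art3d.pathpatch_2d_to_3d(disc, z=pz, zdir='z')
ax.set_xlim(0, 1); ax.set_ylim(0, 1); ax.set_zlim(0, 1)
plt.show()
The discs spread freely in x and y (via the radius) but have zero extent along z, which gives the stacked-disc look.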
You might look for something called "jittering". Take a look at
Matplotlib: avoiding overlapping datapoints in a "scatter/dot/beeswarm" plot
It works by adding random noise to your data.
Another way might be to reduce the variance of the data on your z-axis (e.g. by applying a log function) or to adjust the scale. You could do that with ax.set_zscale("log"). It is documented here: http://matplotlib.org/mpl_toolkits/mplot3d/api.html#mpl_toolkits.mplot3d.axes3d.Axes3D.set_zscale
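A quick jitter sketch, for reference (the noise scale is something you would tune to your data):
import numpy as np
import matplotlib.pyplot as plt
rng = np.random.default_rng(0)
x, y, z = rng.random((3, 100))          # hypothetical data
ax = plt.figure().add_subplot(projection='3d')
# spread the points in x and y only; leave the sensitive z values alone
ax.scatter(x + rng.normal(0, 0.02, x.size),
           y + rng.normal(0, 0.02, y.size),
           z)
plt.show()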

Equal bin sizes including border using mayavi imshow?

I am doing a very simple task of plotting a 2D numpy histogram and displaying it with
mayavi.mlab.imshow(my2dhistogram, interpolate=False)
For a 5x5 array, the output (screenshot omitted) shows the bins along the border drawn smaller than those in the center.
I would like the bins along the border to be the same size as the ones in the center. I understand the logic of what mayavi is doing but for this application I absolutely need the bins to be equal size. This is for a scientific visualization where each bin represents a measurement on a detector surface.
Any suggestions?
I don't know how to do this the right way (it seems like it would be very difficult to get right from what I know about imshow), but I have a conceptual suggestion.
Represent your NxN matrix of items on the surface with an (N+2)x(N+2) matrix and set the border entries to -1. Then make a customized colormap such that your desired colormap is contained between 0 and 1, with all other entries as (0,0,0,0). I'm not exactly sure how to do that -- iirc mayavi modules don't allow you to set up discontinuous color tables, but you could still hack it this way. Let me know if the part about the color table is confusing, and I can provide some code to make it work.
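A sketch of that padding idea (the LUT manipulation below follows the usual mayavi pattern of overwriting the lookup table array; I haven't verified the exact transparency behaviour under imshow, so treat it as a starting point):
import numpy as np
from mayavi import mlab
my2dhistogram = np.random.rand(5, 5)    # hypothetical data
padded = np.full((7, 7), -1.0)          # (N+2)x(N+2) with a -1 border
padded[1:-1, 1:-1] = my2dhistogram
im = mlab.imshow(padded, interpolate=False)
# grab the colormap's lookup table and make its lowest entry (where the
# -1 border lands) fully transparent
lut = im.module_manager.scalar_lut_manager.lut.table.to_array()
lut[0, -1] = 0
im.module_manager.scalar_lut_manager.lut.table = lut
mlab.show()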
Also, is there a reason you need to use mayavi's imshow as opposed to say matplotlib for this essentially 2D problem?

transforming coordinates in matplotlib

I'm trying to plot a series of rectangles and lines based on a tab-delimited text file in matplotlib. The coordinates in the data are quite large and should be drawn to scale -- except scaled down by some factor X -- in matplotlib.
What's the easiest way to do this in matplotlib? I know that there are transformations, but I am not sure how to define my own transformation (i.e. where the origin is and what the scale factor is) in matplotlib and have it easily convert between "data space" and "plot space". Can someone please show a quick example or point me to the right place?
If you simply use matplotlib's plot function, the plot will be autoscaled to fit the window, so you don't really need to 'rescale' explicitly. Linear rescaling is pretty easy; if you include a code sample showing the format of your data, somebody can help you translate the origin and scale the coordinates.
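A minimal sketch of linear rescaling (the scale factor and rectangle coordinates are placeholders):
import matplotlib.pyplot as plt
from matplotlib.patches import Rectangle
SCALE = 1.0 / 1000.0                    # hypothetical factor X
# (x, y, width, height) in the original, large data coordinates
rects = [(125000, 30000, 40000, 8000),
         (210000, 52000, 15000, 8000)]
fig, ax = plt.subplots()
for x, y, w, h in rects:
    ax.add_patch(Rectangle((x * SCALE, y * SCALE), w * SCALE, h * SCALE))
ax.autoscale_view()                     # patches update the data limits, so this fits the view
plt.show()
Dividing everything by one factor keeps the geometry to scale while bringing the numbers into a convenient range.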
