This question explains how to change the "camera position" of a 3D plot in matplotlib by specifying the elevation and azimuth angles, for example ax.view_init(elev=10, azim=20).
Is there a similar way to specify the zoom of the figure numerically -- i.e. without using the mouse?
The only relevant question I could find is this one, but the accepted answer to that involves installing another library, which then also requires using the mouse to zoom.
EDIT:
Just to be clear, I'm not talking about changing the figure size (using fig.set_size_inches() or similar). The figure size is fine; the problem is that the plotted stuff only takes up a small part of the figure:
The closest solution to view_init is setting ax.dist directly. According to the docs for get_proj, "dist is the distance of the eye viewing point from the object point". The initial value is currently hardcoded as dist = 10. Lower values (above 0!) will result in a zoomed-in plot.
Note: This behavior is not really documented and may change. Changing the limits of the axes to plot only the relevant parts is probably a better solution in most cases. You could use ax.autoscale(tight=True) to do this conveniently.
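A minimal sketch of that axis-limit alternative (it reuses the same test data as the interactive example below); tight autoscaling trims the empty margin around the data instead of moving the camera:
from mpl_toolkits.mplot3d import axes3d
import matplotlib.pyplot as plt

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
X, Y, Z = axes3d.get_test_data(0.05)
ax.plot_wireframe(X, Y, Z, rstride=10, cstride=10)
ax.autoscale(tight=True)  # shrink the axes limits to the plotted data
plt.show()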
Working IPython/Jupyter example:
%matplotlib inline
from IPython.display import display
from mpl_toolkits.mplot3d import axes3d
import matplotlib.pyplot as plt
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
# Grab some test data.
X, Y, Z = axes3d.get_test_data(0.05)
# Plot a basic wireframe.
ax.view_init(90, 0)
ax.plot_wireframe(X, Y, Z, rstride=10, cstride=10)
plt.close()
from ipywidgets import interact
@interact(dist=(1, 20, 1))
def update(dist=10):
    ax.dist = dist
    display(fig)
Output (images in the original answer): the plot at dist = 10 (the default) and at dist = 5 (zoomed in).
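Outside a notebook, the same zoom can be obtained by assigning to ax.dist directly before showing the figure; a minimal sketch (assuming a matplotlib version in which Axes3D.dist is still writable, per the caveat above):
from mpl_toolkits.mplot3d import axes3d
import matplotlib.pyplot as plt

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
X, Y, Z = axes3d.get_test_data(0.05)
ax.plot_wireframe(X, Y, Z, rstride=10, cstride=10)
ax.view_init(90, 0)
ax.dist = 5  # smaller than the default of 10 -> zoomed in
plt.show()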
Related
I would like to plot a series of curves in the same Axes, each having a constant y offset from each other. Because the data I have needs to be displayed in log scale, simply adding a y offset to each curve (as done here) does not give the desired output.
I have tried using matplotlib.transforms to achieve the same, i.e. artificially shifting the curve in Figure coordinates. This achieves the desired result, but requires adjusting the Axes y limits so that the shifted curves are visible. Here is an example to illustrate this, though such data would not require log scale to be visible:
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
fig, ax = plt.subplots(1,1)
for i in range(1,19):
    x, y = np.arange(200), np.random.rand(200)
    dy = 0.5*i
    shifted = mpl.transforms.offset_copy(ax.transData, y=dy, fig=fig, units='inches')
    ax.set_xlim(0, 200)
    ax.set_ylim(0.1, 1e20)
    ax.set_yscale('log')
    ax.plot(x, y, transform=shifted, c=mpl.cm.plasma(i/18), lw=2)
The problem is that to make all the shifted curves visible, I would need to adjust the ylim to a very high number, which compresses all the curves so that the features visible because of the log scale cannot be seen anymore.
Since the displayed y axis values are meaningless to me, is there any way to artificially extend the Axes limits to display all the curves, without having to make the Figure very large? Apparently this can be done with seaborn, but if possible I would like to stick to matplotlib.
EDIT:
This is the kind of data I need to plot (an X-ray diffraction pattern varying with temperature):
I am working with matplotlib to plot a heat map with some information, and I want to move the xticks and the yticks to the center of each square. I have searched Stack Overflow for previous questions, but I couldn't find one that fits this problem. I attach my code and the image that I get:
import matplotlib.pyplot as plt
from matplotlib import colors
import numpy as np
def plot():
    intensity = np.random.rand(10, 10)
    matrix_intensity = np.matrix(intensity)
    max_intensity = matrix_intensity.max()
    min_intensity = matrix_intensity.min()
    for e in range(len(intensity)):
        for i in range(len(intensity[e])):
            intensity[e][i] = float(intensity[e][i]) / float(max_intensity)
    np.random.seed(101)
    cmap = colors.ListedColormap(['white', 'khaki', 'goldenrod', 'yellowgreen', 'mediumseagreen',
                                  'darkcyan', 'tomato', 'indianred', 'sienna', 'maroon'])
    bounds = np.linspace(min_intensity/max_intensity, 1, 11).tolist()
    norm = colors.BoundaryNorm(bounds, cmap.N)
    img = plt.imshow(intensity, interpolation='none', origin='lower',
                     extent=[0, len(intensity), 0, len(intensity)],
                     cmap=cmap, norm=norm)
    cb = plt.colorbar(img, fraction=0.1, cmap=cmap, norm=norm, boundaries=bounds, format='%.2f')
    cb.set_label(label='Ratio', fontsize=12, labelpad=10)
    plt.ylabel('Origin', fontsize=11)
    plt.xlabel('Destination', fontsize=11)
    plt.title('Best route:', fontsize=10)
    plt.suptitle('Best Solution:', fontsize=10)
    plt.xticks(range(1, len(intensity)+1))
    plt.yticks(range(1, len(intensity)+1))
    plt.savefig('images/hello.png')
    plt.show()
The point is that I would like the x and y ticks to mark the center of every square, because otherwise it doesn't make sense to plot the squares. Does somebody know how to fix this? Maybe this question is obvious, but the matplotlib documentation is sometimes difficult to understand.
The obvious solution would probably be to use a different extent, namely to let the image live in the range between 0.5 and len(intensity)+0.5.
extent=[.5, len(intensity)+.5, .5, len(intensity)+.5]
img = plt.imshow(intensity, interpolation='none', origin='lower', extent=extent,
                 cmap=cmap, norm=norm)
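With this extent, the integer ticks set by plt.xticks(range(1, len(intensity)+1)) already fall at the centers of the squares. A minimal self-contained sketch of that variant (random data for illustration only):
import numpy as np
import matplotlib.pyplot as plt

intensity = np.random.rand(10, 10)
N = len(intensity)
extent = [.5, N + .5, .5, N + .5]  # each cell is 1 unit wide, centered on an integer
plt.imshow(intensity, interpolation='none', origin='lower', extent=extent)
plt.xticks(range(1, N + 1))
plt.yticks(range(1, N + 1))
plt.colorbar()
plt.show()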
Alternatively, keeping the original extent, you can change the xtick and ytick locations and labels as below so that the ticks sit at the centers of the squares:
plt.xticks([x-0.5 for x in list(range(1,len(intensity)+1))], range(1,len(intensity)+1))
plt.yticks([x-0.5 for x in list(range(1,len(intensity)+1))], range(1,len(intensity)+1))
Output:
The other answers are both good; however, I would like to provide a more general implementation that also doesn't alter the default ticks: a function that can be used to calculate the axis limits and set them as in @ImportanceOfBeingErnest's answer.
import numpy as np

def span_from_pixels(p, n=None):
    """From positions of pixel centers p, return the range from side to side.
    Useful to adjust the plot extent in imshow.
    Alternatively, p can be provided as a range and n as the number of pixels.
    Note that np.linspace has the flag retstep to return the step size."""
    if n is None:
        n = len(p)
    dx = (np.max(p) - np.min(p)) / (n - 1)
    return (np.min(p) - dx/2, np.max(p) + dx/2)

def test_span_from_pixels():
    print(span_from_pixels([0, 3], 4))            # (-0.5, 3.5)
    print(span_from_pixels([0, 2], 3))            # (-0.5, 2.5)
    print(span_from_pixels([0, 1, 2]))            # (-0.5, 2.5)
    print(span_from_pixels([0, 0.5, 1, 1.5, 2]))  # (-0.25, 2.25)
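For example, the returned tuples can feed the extent argument of imshow directly; a hedged usage sketch (x, y and data below are hypothetical pixel-center coordinates and values, not from the question):
import numpy as np
import matplotlib.pyplot as plt

x = np.arange(1, 11)   # hypothetical pixel centers along x
y = np.arange(1, 11)   # hypothetical pixel centers along y
data = np.random.rand(10, 10)

# extent = [xmin, xmax, ymin, ymax], built from the pixel centers
plt.imshow(data, interpolation='none', origin='lower',
           extent=list(span_from_pixels(x)) + list(span_from_pixels(y)))
plt.xticks(x)
plt.yticks(y)
plt.show()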
Please let me know if something doesn't work; this is tested in my code, but I made some changes to remove dependencies and cannot re-test it right now, so I assume I didn't break anything.
I'm trying to plot a surface in 3D from a set of data which specifies the z-values. I get some weird transparency artefact though, where I can see through the surface, even though I set alpha=1.0.
The artefact is present both when plotting and when saved to file (both as png and pdf):
I have tried changing the line width and changing the stride from 1 to 10 (in the latter case the surface is not really visible, though, because the resolution is too coarse).
Q: How can I get rid of this transparency?
Here is my code:
import sys
import numpy as np
import numpy.ma as ma
import matplotlib as mpl
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
y_label = r'x'
x_label = r'y'
z_label = r'z'
x_scale = 2.0*np.pi
y_scale = 2.0*np.pi
y_numPoints = 250
x_numPoints = 250
def quasiCrystal(x, y):
    z = 0
    for i in range(0, 5):
        z += np.sin(x * np.cos(float(i)*np.pi/5.0) +
                    y * np.sin(float(i)*np.pi/5.0))
    return z
x = np.linspace(-x_scale, x_scale, x_numPoints)
y = np.linspace(-y_scale, y_scale, y_numPoints)
X,Y = np.meshgrid(x,y)
Z = quasiCrystal(X, Y)
f = plt.figure()
ax = f.gca(projection='3d')
surf = ax.plot_surface(X, Y, Z,
                       rstride=5, cstride=5,
                       cmap='seismic',
                       alpha=1,
                       linewidth=0,
                       antialiased=True,
                       vmin=np.min(Z),
                       vmax=np.max(Z)
                       )
ax.set_zlim3d(np.min(Z), np.max(Z))
f.colorbar(surf, label=z_label)
ax.set_xlabel(x_label)
ax.set_ylabel(y_label)
ax.set_zlabel(z_label)
plt.show()
Here is another picture of my actual data where it is easier to see the artefact:
Matplotlib is not a "real" 3D engine. This is a very well known problem, and once in a while a similar question to yours appears (see this and this). The problem is that the same artefact can give rise to problems that look quite different. I believe such is the case for you.
Before going on with my recommendations, let me just quote this information from the matplotlib website:
My 3D plot doesn’t look right at certain viewing angles
This is probably the most commonly reported issue with mplot3d. The problem is
that – from some viewing angles – a 3D object would appear in front of
another object, even though it is physically behind it. This can
result in plots that do not look “physically correct.”
Unfortunately, while some work is being done to reduce the occurrence
of this artifact, it is currently an intractable problem, and can not
be fully solved until matplotlib supports 3D graphics rendering at its
core.
The problem occurs due to the reduction of 3D data down to 2D +
z-order scalar. A single value represents the 3rd dimension for all
parts of 3D objects in a collection. Therefore, when the bounding
boxes of two collections intersect, it becomes possible for this
artifact to occur. Furthermore, the intersection of two 3D objects
(such as polygons or patches) can not be rendered properly in
matplotlib’s 2D rendering engine.
This problem will likely not be solved until OpenGL support is added
to all of the backends (patches are greatly welcomed). Until then, if
you need complex 3D scenes, we recommend using MayaVi.
It seems that Mayavi has finally moved on to Python 3, so it's certainly a possibility. If you want to stick with matplotlib for this kind of plot, my advice is to work with the rstride and cstride values to see which ones produce a plot that is satisfactory to you.
surf = ax.plot_surface(X, Y, Z,
                       rstride=10, cstride=10,  # try different stride values here
                       cmap='jet',
                       alpha=1,
                       linewidth=0,
                       antialiased=True,
                       vmin=np.min(Z),
                       vmax=np.max(Z)
                       )
Another possibility is to see whether other kinds of 3D plots do better. Check plot_trisurf, contour or contourf. I know it's not ideal, but in the past I also managed to circumvent other types of artefacts using 3D polygons.
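For instance, a 2D filled-contour view of the same data avoids the z-ordering problem entirely, at the cost of the 3D perspective; a sketch reusing X, Y, Z and the label variables from the question:
import matplotlib.pyplot as plt

fig2, ax2 = plt.subplots()
cf = ax2.contourf(X, Y, Z, levels=50, cmap='seismic')  # drawn in 2D, so no z-order artefacts
fig2.colorbar(cf, label=z_label)
ax2.set_xlabel(x_label)
ax2.set_ylabel(y_label)
plt.show()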
Sorry for not having a more satisfactory answer. Perhaps other SO users have better solutions for this. Best of luck.
I ran into some similar issues and found that they were antialiasing artifacts and could be fixed by setting antialiased=False in plot_surface.
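For reference, that amounts to a one-keyword change to the plot_surface call from the question (a sketch):
surf = ax.plot_surface(X, Y, Z,
                       rstride=5, cstride=5,
                       cmap='seismic',
                       alpha=1,
                       linewidth=0,
                       antialiased=False,  # turn off antialiasing to avoid the see-through seams
                       vmin=np.min(Z),
                       vmax=np.max(Z))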
I am developing some code to produce an arbitrary number of 2D plots (maps and simple contour plots) on a figure. The matplotlib subplots routine works great for this. In the simplified example below, everything works as it should. However, in my real application - which uses the exact same commands for subplots, contourf and colorbar, only that these are dispersed across several routines - the labels on the colorbars are not showing up (the color patches seem to be ok though).
Even after hours of reading documentation and searching the web, I don't even have a clue where I could start looking for what the problem is. If I have my colorbar instance (cbar), I should be able to find out if the ticklabel position makes sense, if the ticklabels are set to visible, if my font settings make sense, etc. But how do I actually check these properties? Has anyone encountered similar problems already? (And even better: found a solution?)
Oh yes: if I manually create a new figure and axes in the actual plotting routine (where the contourf command is issued), then it will work again. But that means losing all control over the figure layout etc. Could it be that I am not passing my axes instance correctly? Here is what I do:
fig, ax = plt.subplots(nrows, ncols)
row, col = getCurrent(...)
plotMap(x, y, data, ax=ax[row,col], ...)
Then, inside plotMap:
c = ax.contourf(x, y, data, ...)
ax.figure.colorbar(c, ax=ax, orientation="horizontal", shrink=0.8)
As said above, the example below with simplified plots and artificial data works fine:
import numpy as np
import matplotlib.pyplot as plt
x = np.arange(0.,360.,5.)*np.pi/180.
y = np.arange(0.,360.,5.)*np.pi/180.
data = np.zeros((y.size, x.size))
for i in range(x.size):
    data[:,i] = np.sin(x[i]**2*y**2)
fig, ax = plt.subplots(2,1)
contour = ax[0].contourf(x, y, data)
cbar = ax[0].figure.colorbar(contour, ax=ax[0], orientation='horizontal', shrink=0.8)
contour = ax[1].contourf(x, y, data, levels=[0.01,0.05,0.1,0.5])
cbar = ax[1].figure.colorbar(contour, ax=ax[1], orientation='horizontal', shrink=0.8)
plt.show()
Thanks for any help!
Addition after some further poking around:
for t in cbar.ax.get_xticklabels():
    print t.get_position(), t.get_text(), t.get_visible()
shows me the correct text and visible=True, but all positions are (0.,0.). Could this be a problem?
BTW: axis labels are also missing sometimes... and I am using matplotlib version 1.1.1 with Python 2.7.3 on Windows.
OK - I could track it down: matplotlib is working as it should!
The error was embedded in a utility routine that adds some finishing touches to each page (=figure) once the given number of plot panels has been produced. In this routine I wanted to hide empty plot panels (i.e. on the last page) and I did this with
ax = fig.axes
for i in range(axCurrent, len(ax)):
    ax[i].set_axis_off()
However, axCurrent was already reset to zero when the program entered this routine for any page but the last, hence the axes were switched off for all axes in the figure. Adding
if axCurrent > 0:
before the for i... solves the problem.
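Put together, the corrected snippet looks roughly like this (axCurrent and fig are the names used above):
ax = fig.axes
if axCurrent > 0:  # only the last, partially filled page has panels to hide
    for i in range(axCurrent, len(ax)):
        ax[i].set_axis_off()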
Sorry if I stole anyone's time. Thanks anyway to everyone who considered helping!
I was wondering if there is a way to plot a data cube in Python. I mean that I have three coordinates for every point:
x=part.points[:,0]
y=part.points[:,1]
z=part.points[:,2]
And for every point I have a scalar field t(x,y,z).
I would like to plot a 3D data cube showing the positions of the points, with each point colored in proportion to the value of the scalar field t at that point.
I tried with histogramdd but it didn't work.
You can use matplotlib.
Here you have a working example (that moves!):
import random
from matplotlib import pyplot
from mpl_toolkits.mplot3d import Axes3D
mypoints = []
for _ in range(100):
    mypoints.append([random.random(),           # x
                     random.random(),           # y
                     random.random(),           # z
                     random.randint(10, 100)])  # scalar

data = list(zip(*mypoints))  # plain zip(*mypoints) is enough on Python 2
fig = pyplot.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(data[0], data[1], data[2], c=data[3])
pyplot.show()
You probably have to customize the relation of your scalar values with the corresponding colors.
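A minimal sketch of such a customization, continuing the example above (fig, ax and data already exist): normalize the scalars explicitly so their range maps onto a chosen colormap, and add a colorbar for reference.
from matplotlib import cm, colors

# data[3] holds the scalar values from the example above
norm = colors.Normalize(vmin=min(data[3]), vmax=max(data[3]))
sc = ax.scatter(data[0], data[1], data[2], c=data[3], cmap=cm.viridis, norm=norm)
fig.colorbar(sc, label='t(x, y, z)')
pyplot.show()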
Matplotlib has a very nice look, but it can be slow at drawing and moving these 3D plots when you have many points. In those cases I used to use Gnuplot controlled by gnuplot.py. Gnuplot can also be used directly as a subprocess, as shown here and here.
Another option is a dot plot produced by MathGL, a GPL plotting library. Additionally, it doesn't need much memory if you save to a bitmap format (PNG, JPEG, GIF and so on).