I'm trying to make a 3D plot that consists of a series of 2D planes through an RGB stack, like this:
I know that it's possible to do this using mpl_toolkits.mplot3d by passing the x, y, z coordinates and the RGB(A) colours of each pixel to plot_surface:
import numpy as np
from matplotlib import pyplot as pp
from mpl_toolkits.mplot3d.axes3d import Axes3D
def plot_stack_slices(rgbstack, scale=(1., 1., 1.), z_interval=10):
    fig, ax = pp.subplots(1, 1, subplot_kw={'projection': '3d'})
    ax.invert_zaxis()
    sx, sy, sz = scale
    nz, ny, nx, nc = rgbstack.shape
    stack_xyz = np.mgrid[:nx*sx:nx*1j, :ny*sy:ny*1j, :nz*sz:nz*1j]
    slices = rgbstack[::-z_interval]
    slice_xyz = np.rollaxis(stack_xyz, 3, 0)[::-z_interval]
    surflist = []
    for img, xyz in zip(slices, slice_xyz):
        x, y, z = xyz
        s = ax.plot_surface(x, y, z, facecolors=img**0.75,
                            rstride=50, cstride=50)
        surflist.append(s)
    return fig, ax, surflist
Unfortunately this becomes extremely slow if I set rstride=1, cstride=1 in order to display the textures at full resolution.
I'm also aware that Mayavi can easily handle displaying multiple 2D textures at full resolution:
from mayavi import mlab
def plot_stack_slices2(stack, scale=(1., 1., 20.), z_interval=10):
    mfig = mlab.figure(bgcolor=(1,)*3)
    sx, sy, sz = scale
    nz, ny, nx = stack.shape
    slices = stack[::-z_interval]
    slice_z = np.linspace(0, nz*sz, nz)[::z_interval]
    surflist = []
    for img, z in zip(slices, slice_z):
        im = mlab.imshow(img.T, colormap='gray', figure=mfig)
        im.actor.scale = [sx, sy, sz]
        im.actor.position = [0, 0, z]
        surflist.append(im)
    return mfig, surflist
However, the problem now is that there does not seem to be any way of displaying true-colour RGB textures using Mayavi - according to the docs I can only specify either a single (R, G, B) tuple, or a pre-defined colourmap.
Does anyone know of a better way to display true-colour 2D RGB textures in a 3D plot?
Given enough time I could probably figure out how to do this in VTK or even pure OpenGL if necessary, but I'm really hoping that there are existing libraries that will do the job.
Big thanks to aestrivex for providing working solutions using Mayavi/VTK - it's useful info that I may need for doing more complicated things in the future.
In the end I actually chose to go with cgohlke's suggestion of using visvis, which turned out to be a lot simpler to implement:
import visvis as vv
vv.use('wx')

import numpy as np
from matplotlib.image import imread
from matplotlib.cbook import get_sample_data

imgdata = imread(get_sample_data('lena.png'))

nr, nc = imgdata.shape[:2]
x, y = np.mgrid[:nr, :nc]
z = np.ones((nr, nc))

for ii in xrange(5):
    vv.functions.surf(x, y, z*ii*100, imgdata, aa=3)
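One caveat from my own testing rather than from the original answer: depending on the backend, the figure may close as soon as the script ends unless the GUI event loop is running. With visvis the usual pattern is to keep the App object that vv.use returns and start its loop at the end:

app = vv.use('wx')   # vv.use returns an App object for the chosen backend

# ... plotting code as above ...

app.Run()            # start the GUI event loop so the figure stays open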
I don't know about other libraries -- volshow looks neat but I haven't tested it -- but you can do this in vtk.
I have been working on doing this generally in mayavi (see How to directly set RGB/RGBA colors in mayavi) but for certain image sources mayavi structures the vtk pipeline in a way that was not designed to deal with this at all. My efforts to convert a 2D vtk.ImageData to true color starting with mlab.imshow were met with resistance at every step, but I managed it.
First, here is how I have managed to do it in mayavi using mlab. This is far too hacky and "magic"-reliant even for my standards:
from mayavi import mlab
import numpy as np
from tvtk.api import tvtk
k = mlab.imshow(np.random.random((10, 10)), colormap='bone')
colors = tvtk.UnsignedCharArray()
colors.from_array(np.random.randint(256, size=(100, 3)))
k.mlab_source.dataset.point_data.scalars = colors
k.actor.input.point_data.scalars = colors
# the latter set of scalars is what is actually used in the VTK pipeline in
# this case, but if they don't play nice with the mayavi source then tvtk
# will complain because we are circumventing the structure it expects
k.actor.input.scalar_type = 'unsigned_char'
k.actor.input.number_of_scalar_components = 3
k.image_map_to_color.lookup_table = None
k.actor.input.modified()
mlab.draw()
# this draw fails. As it fails, there is an interaction here, somewhere deep
# in tvtk, causing the ImageData to partially reset. I have not been able to
# track it down yet; ignore the error output
k.actor.input.scalar_type = 'unsigned_char'
k.actor.input.number_of_scalar_components = 3
# now after we reset these back to what they should be, it works
mlab.draw()
mlab.show()
But in pure tvtk it's not nearly so bad:
import numpy as np
from tvtk.api import tvtk
colors = np.random.randint(256, size=(100, 3))

an_image = tvtk.ImageData()
an_image.number_of_scalar_components = 3
an_image.scalar_type = 'unsigned_char'
an_image.point_data.scalars = tvtk.UnsignedCharArray()
an_image.point_data.scalars.from_array(colors)
an_image.dimensions = np.array((10, 10, 1))

an_actor = tvtk.ImageActor()
an_actor.input = an_image
an_actor.interpolate = False

ren = tvtk.Renderer()
renWin = tvtk.RenderWindow()
renWin.add_renderer(ren)
ren.add_actor2d(an_actor)
iren = tvtk.RenderWindowInteractor()
iren.render_window = renWin
iren.interactor_style = tvtk.InteractorStyleTrackballCamera()
renWin.render()
iren.start()
Of course, doing it in vtk is more work, but you could wrap it up in a small helper so that it's reasonably convenient to use, as sketched below.
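For instance, here is a minimal sketch of such a wrapper, following the same tvtk pipeline as above (show_rgb is a hypothetical helper name; it assumes an (ny, nx, 3) uint8 RGB array):

import numpy as np
from tvtk.api import tvtk

def show_rgb(img):
    # display an (ny, nx, 3) uint8 RGB array with tvtk (hypothetical helper)
    ny, nx = img.shape[:2]
    an_image = tvtk.ImageData()
    an_image.number_of_scalar_components = 3
    an_image.scalar_type = 'unsigned_char'
    an_image.point_data.scalars = tvtk.UnsignedCharArray()
    # VTK wants one RGB triple per point, x-fastest, so flatten to (ny*nx, 3)
    an_image.point_data.scalars.from_array(img.reshape(-1, 3))
    an_image.dimensions = np.array((nx, ny, 1))

    an_actor = tvtk.ImageActor()
    an_actor.input = an_image
    an_actor.interpolate = False

    ren = tvtk.Renderer()
    renWin = tvtk.RenderWindow()
    renWin.add_renderer(ren)
    ren.add_actor2d(an_actor)
    iren = tvtk.RenderWindowInteractor()
    iren.render_window = renWin
    iren.interactor_style = tvtk.InteractorStyleTrackballCamera()
    renWin.render()
    iren.start()

show_rgb(np.random.randint(256, size=(10, 10, 3)).astype(np.uint8))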
I want to fix mayavi to handle this properly, but as you can see from my snippet it is not straightforward and could take a while.
I am having difficulties accessing (the right) data when using holoviews/bokeh, either for connected plots showing a different aspect of the dataset, or just customising a plot with dynamic access to the data as plotted (say a tooltip).
TL;DR: how do I add a projection plot of my dataset (a different set of dimensions, linked to the main plot, like a marginal distribution but not restricted to a histogram or distribution)? A similar solution would probably also answer a related question I asked here on SO.
Let me exemplify (straight from an ipynb, so it should be quite reproducible):
import numpy as np
import random, pandas as pd
import bokeh
import datashader as ds
import holoviews as hv
from holoviews import opts
from holoviews.operation.datashader import datashade, shade, dynspread, spread, rasterize
hv.extension('bokeh')
With imports set up, let's create a dataset (N target 10e12 ;) to use with datashader. Besides the key dimensions, I really need some value dimensions (here z and z2).
import numpy as np
import pandas as pd
N = int(10e6)
x_r = (0,100)
y_r = (100,2000)
z_r = (0,10e8)
x = np.random.randint(x_r[0]*1000,x_r[1]*1000,size=(N, 1))
y = np.random.randint(y_r[0]*1000,y_r[1]*1000,size=(N, 1))
z = np.random.randint(z_r[0]*1000,z_r[1]*1000,size=(N, 1))
z2 = np.ones((N,1)).astype(int)
df = pd.DataFrame(np.column_stack([x,y,z,z2]), columns=['x','y','z','z2'])
df[['x','y','z']] = df[['x','y','z']].div(1000, axis=0)
df
Now I plot the data, rasterised, and also activate the tooltip to see the defaults. Sure, x/y is trivial, but as I said, I care about the value dimensions. The tooltip shows z2 as "x_y z2". I have a related question here on SO about value-dimension access for tooltips with the same sort of data.
from matplotlib.cm import get_cmap
palette = get_cmap('viridis')
# palette_inv = palette.reversed()
p = hv.Points(df, ['x','y'], ['z','z2'])
P = rasterize(p, aggregator=ds.sum("z2"), x_range=(0,100)).opts(cmap=palette)
P.opts(tools=["hover"]).opts(height=500, width=500, xlim=(0,100), ylim=(100,2000))
Now I can add a histogram or a marginal distribution, which is pretty close to what I want, but there are issues with this soon past the trivial defaults. (E.g.: P << hv.Distribution(p, kdims=['y']) or P.hist(dimension='y',weight_dimension='x_y z',num_bins = 2000,normed=True))
Both are close approaches, but they do not give me the other value dimension I'd like to visualise. If I try to access the other value dimension ('x_y z') this fails. Also, the 'x_y z2' way seems very clumsy; is there a better way?
When I do something like this, my browser/notebook-extension blows up, of course.
transformed = p.transform(x=hv.dim('z'))
P << hv.Curve(transformed)
So how do I access all my data in the right way?
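The closest I have come so far is to pre-aggregate the projection with pandas, so the browser only ever sees the binned curve instead of the raw points, though it feels like a workaround (the 200-bin choice is arbitrary, and I'm not sure a plain Curve gets rotated in the adjoint layout the way a Histogram does):

# sum z over binned y with pandas, then adjoin the result as a Curve so the
# browser receives 200 points rather than the raw 10M
binned = df.groupby(pd.cut(df['y'], 200))['z'].sum()
mids = [interval.mid for interval in binned.index]
P << hv.Curve((mids, binned.values), 'y', 'sum_z')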
I am trying to create a rectangular grid out of x/y arrays using the following code with Matplotlib. My nx = 12720 and ny = 10117.
import numpy as np
from matplotlib.path import Path

nx, ny = range_samples, azimuth_lines  # 12720, 10117
x, y = np.meshgrid(np.arange(nx), np.arange(ny))  # MEMORY ERROR
x, y = x.flatten(), y.flatten()
points = np.vstack((x, y)).T
import ipdb
ipdb.set_trace()
path = Path(sar_ver)
grid = path.contains_points(points)
Is there any other library I can use to do this operation? Any ideas on fixing it will be really appreciated!
Actually, I fixed it by killing heavy running processes; one process was using a lot of memory. It was nothing related to Matplotlib. Let me know if I should delete this question. It was a little silly on my part!
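That said, if memory does become the bottleneck again at this scale (12720 x 10117 is roughly 128M points), one workaround is to test the grid against the path in row blocks instead of materializing every point at once. A sketch (contains_points_chunked is a made-up name; sar_ver is the polygon from the question):

import numpy as np
from matplotlib.path import Path

def contains_points_chunked(path, nx, ny, chunk_rows=256):
    # evaluate path.contains_points one block of rows at a time to cap peak memory
    mask = np.empty((ny, nx), dtype=bool)
    xs = np.arange(nx)
    for r0 in range(0, ny, chunk_rows):
        r1 = min(r0 + chunk_rows, ny)
        x, y = np.meshgrid(xs, np.arange(r0, r1))
        pts = np.column_stack((x.ravel(), y.ravel()))
        mask[r0:r1] = path.contains_points(pts).reshape(r1 - r0, nx)
    return mask

# usage, assuming sar_ver holds the polygon vertices as in the question:
# grid = contains_points_chunked(Path(sar_ver), 12720, 10117)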
I'm trying to interpolate data from a regular input grid, and came across this in the documentation for scipy.interpolate.interp2d:
See also RectBivariateSpline: much faster 2D interpolation if your input data is on a grid
So I tried using scipy.interpolate.RectBivariateSpline instead of interp2d. Docs for both functions seem very similar, so I expected this to produce similar results:
import numpy as np
from scipy.interpolate import RectBivariateSpline, interp2d
from .constants import data
x_coords = y_coords = np.arange(data.shape[0]) # Square array
interp_fun = interp2d(x_coords, y_coords, data)
bivar_fun = RectBivariateSpline(x_coords, y_coords, data)
data[250, 60] # 76.1451873779
interp_fun(60, 250) # 76.14518738
bivar_fun(60, 250, grid=False) # 345.24444
Am I calling this wrong? I have no idea why the interpolation based on RectBivariateSpline is so far off.
I did suspect that maybe RectBivariateSpline operates on a Cartesian grid, so I inverted the y-axis of the input data, but still no luck.
Right, just before submitting this I thought I should try calling bivar_fun(y, x) instead of bivar_fun(x, y), and things suddenly work:
data[250, 60] # 76.1451873779
interp_fun(60, 250) # 76.14518738
bivar_fun(250, 60, grid=False) # [ 76.14518738]
Still not quite sure why, because the first arguments to interp_fun and bivar_fun should be the same:
RectBivariateSpline.__call__(x, y, mth=None, dx=0, dy=0, grid=True)
RectBivariateSpline Docs
interp2d.__call__(x, y, dx=0, dy=0)
Interp2d Docs
There's also a related issue on Github: https://github.com/scipy/scipy/issues/3164
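For the record, the explanation appears to be the axis convention: RectBivariateSpline(x, y, z) expects z to have shape (x.size, y.size), so its first call argument runs along the first axis (rows) of the data, whereas interp2d's x runs along the columns. A minimal sketch of the difference:

import numpy as np
from scipy.interpolate import RectBivariateSpline

rows = np.arange(4)
cols = np.arange(5)
data = np.arange(20, dtype=float).reshape(4, 5)  # shape (rows.size, cols.size)

# RectBivariateSpline's first argument indexes the FIRST axis of data,
# so the call order is (row, col), not (x, y) in the interp2d sense
spline = RectBivariateSpline(rows, cols, data, kx=1, ky=1)
print(data[2, 3])                # 13.0
print(spline(2, 3, grid=False))  # 13.0 as well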
I am trying to create four gabor patches, very similar to those below.
I don't need them to be identical to the pictures below, but similar.
Despite a bit of tinkering, I have been unable to reproduce these images...
I believe they were created in MATLAB originally. I don't have access to the original MATLAB code.
I have the following code in python (2.7.10):
import numpy as np
from scipy.misc import toimage # One can also use matplotlib*
data = gabor_fn(sigma = ???, theta = 0, Lambda = ???, psi = ???, gamma = ???)
toimage(data).show()
*graphing a numpy array with matplotlib
gabor_fn, from here, is defined below:
import numpy

def gabor_fn(sigma, theta, Lambda, psi, gamma):
    sigma_x = sigma
    sigma_y = float(sigma) / gamma

    # Bounding box
    nstds = 3
    xmax = max(abs(nstds*sigma_x*numpy.cos(theta)), abs(nstds*sigma_y*numpy.sin(theta)))
    xmax = numpy.ceil(max(1, xmax))
    ymax = max(abs(nstds*sigma_x*numpy.sin(theta)), abs(nstds*sigma_y*numpy.cos(theta)))
    ymax = numpy.ceil(max(1, ymax))
    xmin = -xmax
    ymin = -ymax
    (y, x) = numpy.meshgrid(numpy.arange(ymin, ymax+1), numpy.arange(xmin, xmax+1))

    # Rotation
    x_theta = x*numpy.cos(theta) + y*numpy.sin(theta)
    y_theta = -x*numpy.sin(theta) + y*numpy.cos(theta)

    gb = numpy.exp(-.5*(x_theta**2/sigma_x**2 + y_theta**2/sigma_y**2)) * numpy.cos(2*numpy.pi/Lambda*x_theta + psi)
    return gb
As you may be able to tell, the only difference (I believe) between the images is contrast. So gabor_fn would likely need to be altered to allow for this (unless I misunderstand one of the params)... I'm just not sure how.
UPDATE:
from math import pi
from matplotlib import pyplot as plt

data = gabor_fn(sigma=5., theta=pi/2., Lambda=12.5, psi=90, gamma=1.)
unit = #From left to right, unit was set to 1, 3, 7 and 9.
bound = 0.0009/unit
fig = plt.imshow(data,
                 cmap='gray',
                 interpolation='none',
                 vmin=-bound,
                 vmax=bound)
plt.axis('off')
The problem you are having is a visualization problem (although I also think you are choosing too-large parameters).
By default, matplotlib (and scipy's toimage) uses bilinear (or similar) interpolation, depending on your matplotlib configuration. That's why your image looks so smooth: your pixel values are being interpolated, and you are not displaying the raw kernel you have just calculated.
Try using matplotlib with no interpolation:
from matplotlib import pyplot as plt
plt.imshow(data, 'gray', interpolation='none')
plt.show()
For the following parameters:
data = gabor_fn(sigma=5.,theta=pi/2.,Lambda=25.,psi=90,gamma=1.)
You get this output:
If you reduce Lambda to 15, you get something like this:
Additionally, the sigma you choose changes the strength of the smoothing. Adding the parameters vmin=-1 and vmax=1 to imshow (similar to what @kazemakase suggested) will give you the desired contrast.
Check this guide for sensible values for (and ways to use) Gabor kernels:
http://scikit-image.org/docs/dev/auto_examples/plot_gabor.html
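For example, a quick sketch using scikit-image's gabor_kernel (the parameter values here are arbitrary; frequency plays the role of 1/Lambda):

import matplotlib.pyplot as plt
from skimage.filters import gabor_kernel

# gabor_kernel returns a complex kernel; the real part is the even-symmetric patch
kernel = gabor_kernel(frequency=0.08, theta=0).real
bound = abs(kernel).max()
plt.imshow(kernel, cmap='gray', interpolation='none', vmin=-bound, vmax=bound)
plt.axis('off')
plt.show()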
It seems like toimage scales the input data so that the min/max values are mapped to black/white.
I do not know what amplitudes to reasonably expect from gabor patches, but you should try something like this:
toimage(data, cmin=-1, cmax=1).show()
This tells toimage what range your data is in. You can try to play around with cmin and cmax, but make sure they are symmetric (i.e. cmin=-x, cmax=x) so that a value of 0 maps to grey.
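As an aside, toimage has since been removed from scipy; with current versions you can get the same symmetric mapping by scaling manually and handing the array to Pillow, roughly like this (assuming data holds the kernel):

import numpy as np
from PIL import Image

# map [-1, 1] to [0, 255] so that 0 lands on mid-grey, mimicking
# toimage(data, cmin=-1, cmax=1)
scaled = (np.clip(data, -1, 1) + 1) / 2
Image.fromarray((scaled * 255).astype(np.uint8)).show()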
I am working in image processing in Python, using numpy and scipy all the time. I have a piece of code that can enlarge an image, but I am not sure how it works.
Could some scipy/numpy expert explain it to me line by line? I am always eager to learn.
import numpy as N
import os.path
import scipy.signal
import scipy.interpolate
import matplotlib.pyplot as plt
import matplotlib.cm as cm
def enlarge(img, rowscale, colscale, method='linear'):
    x, y = N.meshgrid(N.arange(img.shape[1]), N.arange(img.shape[0]))
    pts = N.column_stack((x.ravel(), y.ravel()))
    xx, yy = N.mgrid[0.:float(img.shape[1]):1/float(colscale),
                     0.:float(img.shape[0]):1/float(rowscale)]
    large = scipy.interpolate.griddata(pts, img.flatten(), (xx, yy), method).T
    large[-1,:] = large[-2,:]
    large[:,-1] = large[:,-2]
    return large
Thanks a lot.
First, a grid of x and y coordinates is created, with one point per pixel.
x, y = N.meshgrid(N.arange(img.shape[1]), N.arange(img.shape[0]))
The coordinates of the image pixels are then stacked into the variable pts, which will be needed later.
pts = N.column_stack((x.ravel(), y.ravel()))
After that, it creates a mesh grid with one point per pixel of the enlarged image; if the original image was 200x400 (rows x columns), with colscale set to 4 and rowscale set to 2, the mesh grid would have (400*4)x(200*2), i.e. 1600x400 points: the first mgrid axis runs over the columns scaled by colscale, the second over the rows scaled by rowscale.
xx, yy = N.mgrid[0.:float(img.shape[1]):1/float(colscale),
                 0.:float(img.shape[0]):1/float(rowscale)]
Using scipy, the image values at the points in pts are interpolated onto the larger grid. Interpolation is the process of estimating values at the new, in-between points from the known surrounding values. The trailing .T transposes the result back into row-major image order.
large = scipy.interpolate.griddata(pts, img.flatten(), (xx, yy), method).T
The last two lines most likely patch up the boundary: the enlarged grid extends slightly past the largest original coordinates (e.g. the column positions run up to 399.75 when the original columns only go to 399), and griddata marks points outside the convex hull of the input points as NaN, so the final row and column are overwritten with copies of their neighbours.
large[-1,:] = large[-2,:]
large[:,-1] = large[:,-2]
return large
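To see the shapes involved, here is a tiny check, assuming the imports and the enlarge function from the question are in scope:

import numpy as N

img = N.arange(9, dtype=float).reshape(3, 3)
big = enlarge(img, 2, 2)
print(big.shape)   # (6, 6): rows scaled by rowscale, columns by colscale
print(big[0, :3])  # [0.  0.5 1. ] -- linear interpolation along the first row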