Basic 3D voxel grid in Mayavi - python

I'm trying to visualize a 3D array through Mayavi in Python. I simply want to create a structured 3D voxel grid in which I can show some pre-specified, voxel-space-filling points. I do not think that I want a smooth iso-surface rendering.
The only example I can find that seems relevant is this MRI example. The following code gives me a somewhat workable result:
import numpy as np
from mayavi import mlab
# Build a 100x100x100 volume and mark two blocks of voxels with value 1.
shape = (100, 100, 100)
data = np.zeros(shape)
data[0:50, 50:70, 0:50] = 1
data[0:50, 0:20, 0:50] = 1
# Wrap the array in a scalar field and extract an iso-surface.
src = mlab.pipeline.scalar_field(data)
outer = mlab.pipeline.iso_surface(src)
mlab.show()
This generates the images below: as you can see, not all sides of the boxes are rendered, even though those points have the same value as the sides that are rendered.
Is there a way to visualize every single point in the numpy array that has value equal to 1? I am fine if there is no iso-surface visualization -- in fact, I would prefer some Minecraft-esque blocky voxel visualization.

You can draw each voxel as a cube glyph with points3d:
import mayavi.mlab
import numpy
shape = (100, 100, 100)
data = numpy.zeros(shape)
data[0:50, 50:70, 0:50] = 1
data[0:50, 0:20, 0:50] = 1
# Indices of every voxel whose value is 1.
xx, yy, zz = numpy.where(data == 1)
# One cube glyph per voxel.
mayavi.mlab.points3d(xx, yy, zz,
                     mode="cube",
                     color=(0, 1, 0),
                     scale_factor=1)
mayavi.mlab.show()
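numpy.where returns the coordinates of every voxel equal to 1, and mode="cube" with scale_factor=1 places a unit cube at each integer coordinate, so adjacent voxels tile seamlessly into exactly the blocky, Minecraft-esque look you describe.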

Related

What is the best way/method to digitize the data of a 3D surface into a grid of pixels with smaller resolution in Python?

I want to digitize (i.e., average out over cells) photon count data into pixels given by a grid that specifies how they are aligned. The photon count data is stored in a 2D array. I want to split that data into cells, each of which would correspond to a pixel. The idea is basically the same as changing an HD image to a smaller resolution. I'd like to achieve this in Python.
The digitizing function I've written:
import numpy as np
def digitize(function_data, grid_shape):
    """
    function_data: 2D array of function values of some 3D shape,
    e.g. exp(-(x^2 + y^2)) -> want to digitize this
    grid_shape: an array of length 2 which contains the dimensions of the smaller resolution
    """
    l = len(function_data)
    pixel_len_x = int(l / grid_shape[0])
    pixel_len_y = int(l / grid_shape[1])
    digitized_data = np.empty((grid_shape[0], grid_shape[1]))
    for i in range(grid_shape[0]):      # row index of pixel in smaller-resolution grid
        for j in range(grid_shape[1]):  # column index of pixel in smaller-resolution grid
            hd_pixel = []
            for k in range(pixel_len_y):
                hd_pixel.append(function_data[k][j:j * pixel_len_x])
            hd_pixel = np.ravel(hd_pixel)  # flatten 2D to 1D to compute the average
            pixel_avg = np.average(hd_pixel)
            digitized_data[i][j] = pixel_avg
    return digitized_data
In theory, this function should do what I want to achieve, but when tested it doesn't yield the expected results. Either a completed version of my function or any other method that achieves my goal would be extremely helpful.
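For reference, the block averaging itself can be done without explicit loops by reshaping; below is a minimal sketch (my own, not a fix of the function above) that trims the input so the output grid divides it evenly:
import numpy as np

def block_average(function_data, grid_shape):
    # Trim so each output pixel covers an equal number of input samples.
    ny, nx = function_data.shape
    gy, gx = grid_shape
    trimmed = function_data[:ny - ny % gy, :nx - nx % gx]
    # Split into (gy, block_y, gx, block_x) blocks and average each block.
    return trimmed.reshape(gy, ny // gy, gx, nx // gx).mean(axis=(1, 3))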
You could also use an interpolation function, if you can use SciPy. Here we use one of the gridded-data interpolators, RectBivariateSpline, to upsample your function; you can find numerous other examples on this and other sites.
import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import RectBivariateSpline as rbs
# Sampling coordinates
x = np.linspace(-2, 2, 20)
y = np.linspace(-2, 2, 30)
# Your function
f = np.exp(-(x[:, None]**2 + y**2))
# Interpolator
interp = rbs(x, y, f)
# Higher-resolution coordinates
x_hd = np.linspace(x.min(), x.max(), x.size * 5)
y_hd = np.linspace(y.min(), y.max(), y.size * 5)
# New higher-resolution function
f_hd = interp(x_hd, y_hd, grid=True)
# Some plots
fig, ax = plt.subplots(ncols=2)
ax[0].imshow(f)
ax[1].imshow(f_hd)
plt.show()
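The same interpolator can of course also be evaluated on a coarser grid, which is closer to the downsampling asked about; a quick sketch (the _ld names are mine):
# Evaluate the spline on a coarser grid to downsample instead
x_ld = np.linspace(x.min(), x.max(), x.size // 2)
y_ld = np.linspace(y.min(), y.max(), y.size // 2)
f_ld = interp(x_ld, y_ld, grid=True)
Note that this samples the smooth spline rather than averaging blocks, so for noisy data the block-average approach above may be preferable.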

Generating a 3D object (e.g. via Mayavi) and exporting it as 3D image stack (e.g. tiff)

My current task is to generate a 3D image space containing 3D objects (iso-surfaces) that I designed, and to export it as an image stack (numpy or tiff).
I settled on Mayavi to generate the 3D iso-surfaces. I know Mayavi is primarily designed to provide 3D visualizations on its own, but I would like to find a way to export a 3D object to a 3D image stack as a numpy array of shape (z, y, x). My initial idea was to iteratively take snapshots of the sliced volume from the Mayavi mlab object along the z-axis, but I am not sure there is any option to save a sliced image of an iso-surface as a snapshot.
The best-case scenario would be to export a 3D image stack (tiff) of exactly what I see in a Mayavi window. Otherwise, I will take any suggestions to carry out this task in general.
Here's some example code:
import numpy as np
from mayavi import mlab
# Produce some nice data.
n_mer, n_long = 6, 11
pi = np.pi
dphi = pi/1000.0
phi = np.arange(0.0, 2*pi + 0.5*dphi, dphi, 'd')
mu = phi*n_mer
x = np.cos(mu)*(1+np.cos(n_long*mu/n_mer)*0.5)
y = np.sin(mu)*(1+np.cos(n_long*mu/n_mer)*0.5)
z = np.sin(n_long*mu/n_mer)*0.5
# Init plot
source = mlab.points3d(x, y, z)
You might go for the vtk class vtkImplicitModeller.
E.g.:
import numpy as np
from vedo import Points, Volume
n_mer, n_long = 6, 11
dphi = np.pi/1000.0
phi = np.arange(0.0, 2*np.pi + 0.5*dphi, dphi, 'd')
mu = phi*n_mer
x = np.cos(mu)*(1+np.cos(n_long*mu/n_mer)*0.5)
y = np.sin(mu)*(1+np.cos(n_long*mu/n_mer)*0.5)
z = np.sin(n_long*mu/n_mer)*0.5
source = Points([x, y, z], r=4)
modl = source.implicitModeller(
    distance=0.15,
    res=(60, 60, 30),
    bounds=(-1.8, 1.8, -1.8, 1.8, -0.7, 0.7),
)
modl.smoothLaplacian().computeNormals()
modl.c("blue9").lw(1).lighting("metallic").show(axes=1)
#######################################################
# Equivalent, using the vtk class directly:
import vtk
imp = vtk.vtkImplicitModeller()
imp.SetInputData(source.polydata())
imp.SetSampleDimensions(50, 50, 30)
imp.SetModelBounds(-1.8, 1.8, -1.8, 1.8, -0.7, 0.7)
imp.Update()
vol = Volume(imp.GetOutput())
arr = np.clip(vol.getDataArray(), 0, 1.2)
print(arr.shape)
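To get from that array to the tiff stack asked about, one option is the tifffile package; here is a minimal sketch, assuming the volume array comes back in (x, y, z) order (the transpose and the filename are my own):
import tifffile
# Reorder to (z, y, x) and write a multi-page TIFF, one page per z-slice.
stack = np.transpose(arr, (2, 1, 0)).astype(np.float32)
tifffile.imwrite('stack.tif', stack)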

Spline surface fit on noisy data using scipy's RectBivariateSpline and SmoothBivariateSpline

I am trying to do a 2D surface fit on some imaging data: basically a 1014 x 1014 array with a substantial amount of noise (see the attached example image). Some patches of this array are invalid data, which I masked and set to NaN, shown in yellow in the example image. As you can see, there is a background gradient from left (brighter) to right (dimmer) that I am trying to remove. The gradient cannot be fitted well by a polynomial, hence my goal is a 2D bivariate spline surface fit, so I can subtract the gradient off.
I have tried a number of approaches in scipy, but most of them do not return an ideal result.
To start with, I tried RectBivariateSpline (see: Bivariate structured interpolation of large array with NaN values or mask), but since my image has NaNs in it, running RectBivariateSpline gives only an output of NaNs.
I also tried SmoothBivariateSpline, which is the scattered-data version of the task. I omitted the pixels that have NaN values and converted the rest into 1D arrays as input. But it failed because the array size is too big. I then tried to chop my array and run it on smaller chunks, but it gives the following error and quits with a segmentation fault, which I do not know how to interpret:
fitpack2.py:1044: UserWarning:
Error on entry, no approximation returned. The following conditions
must hold:
xb<=x[i]<=xe, yb<=y[i]<=ye, w[i]>0, i=0..m-1
If iopt==-1, then
xb
I then tried first filling in the NaN patches in my image using griddata with linear interpolation. Since the patches are huge, the interpolation is not ideal, but at least it gave me an array without NaN. I then used this array to run RectBivariateSpline again. But the output array is still all NaNs.
I suspect that the noise in my image is screwing up the behaviour of both tasks, so I also tried first running a Gaussian kernel over my image to smooth it, then filling in the NaN patches with griddata, then running RectBivariateSpline or SmoothBivariateSpline, but they still give me arrays of NaN values as output.
I am not sure that I understand the manual of both tasks correctly, so I attach the following script:
#!/usr/bin/python
import matplotlib
matplotlib.use('qt5agg')
import matplotlib.pyplot as plt
import numpy as np
import astropy.io.fits as pyfits
import scipy.interpolate as sp
from astropy.convolution import convolve
from astropy.convolution import Gaussian2DKernel
#------------------------------------------------------------
#Read in the arrays
hdulistorg = pyfits.open('icmj01jrq_flt.fits')
hdulistorg.info()
hdulist = pyfits.open('jrq_sci_nan_deep.fits')
hdulist.info()
dataarrorg = np.swapaxes(hdulist[0].data, 0,1) #image array
errarrorg = np.swapaxes(hdulistorg[1].data, 0,1) #error array
#Flag some of the problematic values, turn NaNs into 0 for easier handling
dataarr = np.copy(dataarrorg)
w=np.isnan(dataarr)
ww=np.where(dataarr == 0)
www=np.where(dataarr > 100)
wwww=np.where(dataarr < 0)
errarr = 1.0 / (np.copy(errarrorg)+1e-5) # Try to use 1/error as the estimate for weight below
errarr[w] = 0
errarr[ww] = 0
errarr[www] = 0
errarr[wwww]=0
dataarr[w]= 0
dataarr[ww]= 0
dataarr[www]=0
dataarr[wwww]=0
#Make a gaussian kernel smoothed data
maskarr = np.copy(errarr) #For masking the nan regions so they dun get smoothed
maskarr[:]=0
maskarr[w]=1
maskarr[ww]=1
maskarr[www]=1
maskarr[wwww]=1
gauss = Gaussian2DKernel(stddev=5)
condataarr = convolve(dataarr,gauss,normalize_kernel=True,boundary='extend',mask=maskarr)
condataarr[w]=0
conerrarr = np.copy(errarr)
#Setting x,y arrays for the Spline functions
nx, ny = (1014,1014)
x = np.linspace(0, 1013, nx)
y = np.linspace(0, 1013, ny)
xv, yv = np.meshgrid(x, y)
#Make an 1D version of these 2D arrays
dataarrflat = np.ravel(condataarr[0:200,0:200]) #Try only a small chunk!
xvflat = np.ravel(xv[0:200,0:200])
yvflat = np.ravel(yv[0:200,0:200])
errarrflat = np.ravel(conerrarr[0:200,0:200])
notnanloc = np.where(dataarrflat != 0) #Not NaNs
#SmoothBivariateSpline!
rect_S_spline = sp.SmoothBivariateSpline(xvflat[notnanloc], yvflat[notnanloc], dataarrflat[notnanloc],w=errarrflat[notnanloc], kx=3, ky=3)
#Also try using grid data to fix the grid?
gddataarr = np.copy(condataarr)
gddataarrflat = np.ravel(gddataarr)
gdloc = np.where(gddataarrflat != 0) #Not NaNs
gdxvflat = np.ravel(xv)
gdyvflat = np.ravel(yv)
xyarr = np.c_[gdxvflat[gdloc],gdyvflat[gdloc]]
x_grid, y_grid = np.mgrid[0:1013:1014j,0:1013:1014j]
grid_z2 = sp.griddata(xyarr, gddataarrflat[gdloc], (x_grid, y_grid), method='linear')
plt.imshow(grid_z2.T)
#plt.show()
#RectBivariateSpline
rect_B_spline = sp.RectBivariateSpline(x, y, grid_z2.T)
#Result grid (same as input for now)
xnew = np.arange(0, 1013, 1)
ynew = np.arange(0, 1013, 1)
znewS = rect_S_spline(xnew, ynew)
znewB = rect_B_spline(xnew, ynew)
print('znewS', znewS)
print('znewB', znewB)
#Write FITS files
condataarr = np.swapaxes(condataarr, 0, 1)
hdu2 = pyfits.PrimaryHDU(condataarr)
hdulist2 = pyfits.HDUList([hdu2])
hdulist2.writeto('contest.fits',overwrite=True)
hdulist2.close()
hdu3 = pyfits.PrimaryHDU(znewS)
hdulist3 = pyfits.HDUList([hdu3])
hdulist3.writeto('Stest.fits',overwrite=True)
hdulist3.close()
I cannot solve your problem exactly, but I have some code that interfaces a Fortran interpolation routine with Python. You can call the routines directly from Python; no Fortran knowledge is needed.
You can find the code and a description of it at this github page
https://github.com/haakoan/inter
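Another thing worth trying for the NaN patches: astropy's convolve can fill NaN pixels from their neighbours during the smoothing pass itself, which would give RectBivariateSpline a NaN-free input. A minimal sketch (image is placeholder data; newer astropy versions name the kernel width x_stddev):
import numpy as np
from astropy.convolution import convolve, Gaussian2DKernel
image = np.random.rand(100, 100)
image[40:60, 40:60] = np.nan  # an invalid patch
kernel = Gaussian2DKernel(x_stddev=5)
# nan_treatment='interpolate' replaces NaNs with values interpolated
# from the surrounding pixels while convolving.
smoothed = convolve(image, kernel, nan_treatment='interpolate', boundary='extend')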

Python/Matplotlib: Randomly select "sample" scatter points for different marker

Pretty much exactly what the question states, but a little context:
I'm creating a program to plot a large number of points (~10,000, but it will be more later on). This is being done using matplotlib's plt.scatter. This command is part of a loop that saves the figure, so I can later animate it.
What I want to be able to do is randomly select a small portion of these particles (say, maybe 100?) and give them a different marker than the rest, even though they're part of the same data set. This is so I can use them as placeholders to see the motion of individual particles, as well as the bulk material.
Is there a way to use a different marker for a small subset of the same data?
For reference, the particles are uniformly distributed just using the numpy random sampler, but my code for that is:
for i in range(N):  # N = number of particles
    particle_position[i] = np.random.uniform(0, xmax)  # initialize in spatial domain
    particle_velocity[i] = np.random.normal(0, 5)      # initialize in velocity space
for i in range(maxtime):
    plt.scatter(particle_position, particle_velocity, s=1, c=norm_xvel, cmap=br_disc, lw=0)
The position and velocity change on each iteration of the main loop (there's quite a bit of code), but these are the main initialization and plotting routines.
I had an idea that perhaps I could randomly select a bunch of i values from range(N), and use an ax.scatter() command to plot them on the same axes?
Here is a possible solution to have a subset of your points identified with a different marker:
import matplotlib.pyplot as plt
import numpy as np
SIZE = 100
SAMPLE_SIZE = 10
def select_subset(seq, size):
    """selects a subset of the data using ...
    """
    return seq[:size]
points_x = np.random.uniform(-1, 1, size=SIZE)
points_y = np.random.uniform(-1, 1, size=SIZE)
plt.scatter(points_x, points_y, marker=".", color="blue")
plt.scatter(select_subset(points_x, SAMPLE_SIZE),
            select_subset(points_y, SAMPLE_SIZE),
            marker="o", color="red")
plt.show()
It uses plt.scatter twice: once on the full data set, once on the sample points.
You will have to decide how you want to select the sample of points; that choice is isolated in the select_subset function.
You could also extract the sample points from the data set to prevent marking them twice, but numpy is rather inefficient at deleting or resizing arrays.
Maybe a better method is to use a mask? A mask has the advantage of leaving your original data intact and in order.
Here is a way to proceed with masks:
import matplotlib.pyplot as plt
import numpy as np
import random
SIZE = 100
SAMPLE_SIZE = 10
def make_mask(data_size, sample_size):
    mask = np.array([True] * sample_size + [False] * (data_size - sample_size))
    np.random.shuffle(mask)
    return mask
points_x = np.random.uniform(-1, 1, size=SIZE)
points_y = np.random.uniform(-1, 1, size=SIZE)
mask = make_mask(SIZE, SAMPLE_SIZE)
not_mask = np.invert(mask)
plt.scatter(points_x[not_mask], points_y[not_mask], marker=".", color="blue")
plt.scatter(points_x[mask], points_y[mask], marker="o", color="red")
plt.show()
As you can see, scatter is called once on the points not selected in the sample, and a second time on the sampled subset, drawing each subset with its own marker. It is efficient and leaves the original data intact and in order.
The code below does what you want. I select a random set v_sub_index of N_sub indices in the correct range (0 to N) and draw those points (suffix _sub) from the larger samples particle_position and particle_velocity. Note that you don't have to loop to generate random samples; numpy has great functionality for that without for loops.
import numpy as np
import matplotlib.pyplot as pl
N = 100
xmax = 1.
v_sigma = 2.5 / 2. # 95% of the samples contained within 0, 5
v_mean = 2.5 # mean at 2.5
N_sub = 10
v_sub_index = np.random.randint(0, N, N_sub)
particle_position = np.random.rand(N) * xmax
particle_velocity = v_mean + v_sigma * np.random.randn(N)
particle_position_sub = np.array(particle_position[v_sub_index])
particle_velocity_sub = np.array(particle_velocity[v_sub_index])
particle_position_nosub = np.delete(particle_position, v_sub_index)
particle_velocity_nosub = np.delete(particle_velocity, v_sub_index)
pl.scatter(particle_position_nosub, particle_velocity_nosub, color='b', marker='o')
pl.scatter(particle_position_sub , particle_velocity_sub , color='r', marker='^')
pl.show()
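One caveat with the sketch above: np.random.randint can draw the same index twice, so the subset may contain duplicates. If you need a unique sample, np.random.choice with replace=False is a drop-in alternative:
v_sub_index = np.random.choice(N, N_sub, replace=False)  # unique indices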

How to remove the red frame and white arrow in scalar_cut_plane when using mayavi?

I want to use mayavi to visualize data on a structured grid in a cut plane.
To exemplify this, I have the following code, obtained from http://docs.enthought.com/mayavi/mayavi/auto/example_structured_grid.html and written by Eric Jones:
#!/usr/bin/env python
import numpy as np
from numpy import cos, sin, pi
from tvtk.api import tvtk
from mayavi import mlab
def generate_annulus(r=None, theta=None, z=None):
    # Find the x values and y values for each plane.
    x_plane = (cos(theta)*r[:,None]).ravel()
    y_plane = (sin(theta)*r[:,None]).ravel()
    # Allocate an array for all the points. We'll have len(x_plane)
    # points on each plane, and we have a plane for each z value, so
    # we need len(x_plane)*len(z) points.
    points = np.empty([len(x_plane)*len(z), 3])
    # Loop through the points for each plane and fill them with the
    # correct x, y, z values.
    start = 0
    for z_plane in z:
        end = start + len(x_plane)
        # Slice out a plane of the output points and fill it
        # with the x, y, and z values for this plane. The x, y
        # values are the same for every plane. The z value
        # is set to the current z.
        plane_points = points[start:end]
        plane_points[:,0] = x_plane
        plane_points[:,1] = y_plane
        plane_points[:,2] = z_plane
        start = end
    return points
# Make the data.
dims = (51, 25, 25)
# The coordinates
theta = np.linspace(0, 2*np.pi, dims[0])
# 'y' corresponds to varying 'r'
r = np.linspace(1, 10, dims[1])
z = np.linspace(0, 5, dims[2])
pts = generate_annulus(r, theta, z)
# Make the grid
sgrid = tvtk.StructuredGrid(dimensions=dims)
sgrid.points = pts
s = np.sqrt(pts[:,0]**2 + pts[:,1]**2 + pts[:,2]**2)
sgrid.point_data.scalars = np.ravel(s.copy())
sgrid.point_data.scalars.name = 'scalars'
d = mlab.pipeline.add_dataset(sgrid)
mlab.pipeline.scalar_cut_plane(d)
mlab.show()
However, I would like to get rid of the annoying red frame and the white arrow when saving the plot. How do I do this?
I first tried using the module mlab.pipeline.scalar_field instead, but I got an error saying that I needed to specify the data as an array.
I've also searched the GUI for somewhere to turn this off, but I cannot seem to find it.
You can simply disable the widget. Note, however, that this means you can no longer drag the plane around (but it sounds like you do not want that functionality anyway).
In the last line, change
mlab.pipeline.scalar_cut_plane(d)
with
cut = mlab.pipeline.scalar_cut_plane(d)
cut.implicit_plane.widget.enabled = False
It is also possible to do this in the GUI.
Go to the ScalarCutPlane in the pipeline menu, then disable the widget by unchecking "enable" in the "ImplicitPlane" tab.
...and there you go
You can make it nicer by adding a prompt before the widget is disabled:
cut = mlab.pipeline.scalar_cut_plane(d)
input('Press Enter to snap in . . .')
cut.implicit_plane.widget.enabled = False
This way you can drag the plane into the desired position first.
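If the goal is just a clean saved image, you can also write the figure right after disabling the widget; a small sketch (the filename is a placeholder):
cut.implicit_plane.widget.enabled = False
mlab.savefig('cut_plane.png')  # hypothetical filename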
