Plotting a numpy array in healpy - python

I am attempting to produce a beam on a healpix map, using healpy. For starters, I would like to be able to produce a 2D gaussian in a mollweide projection, but I really don't know where to begin.
I can define a 2D gaussian:
import numpy as np
def gaussian_2D(x, y, mu_x=0., mu_y=0., sig_x=1., sig_y=1.):
    return np.exp(-0.5*(((x - mu_x) / sig_x)**2 + ((y - mu_y) / sig_y)**2))
such that I can build up a 3D X, Y, Z space like:
delta = 0.025
x = np.arange(-4, 4, delta)
y = np.arange(-4, 4, delta)
X, Y = np.meshgrid(x,y)
Z = gaussian_2D(X,Y)
but from here I'm pretty lost, and can't track down much useful documentation concerning how and/or what to project. Any suggestions for a direction of attack would be much appreciated!

Here is how I do this, using a small trick: I insert a point at the desired Gaussian centre, then use smoothing ("smearing") to create a Gaussian with some sigma.
Here is an example:
#!/usr/bin/env python
import numpy as np
import healpy as hp
import pylab as pl
NSIDE=512 # the map granularity
m_sm=np.zeros(hp.nside2npix(NSIDE)) # creates the map, all pixel values set to zero
theta=np.radians(80.) # coordinates for the gaussian
phi=np.radians(20.)
indx=hp.pixelfunc.ang2pix(NSIDE,theta,phi) # getting the index of the point corresponding to the coordinates
m_sm[indx]=1. # setting that point value to 1.
gmap=hp.smoothing(m_sm, sigma=np.radians(20.),verbose=False,lmax=1024) # creating a new map, a smeared version of m_sm
hp.mollview(gmap, title="Gaussian Map") #draw it
pl.show()
Now, if you want to do that by hand, you would use a function for a Gaussian:
1) you feed it some coordinates
2) you retrieve the index corresponding to those coordinates using:
indx=hp.pixelfunc.ang2pix(NSIDE,theta,phi)
3) you set the value of that pixel to the value of your Gaussian function, i.e.:
my_healpy_map[indx]=my_gauss(theta, phi, mean_theta, mean_phi, sigma_theta, sigma_phi)
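Putting steps 1)-3) together, here is a minimal sketch of my own for the by-hand route. It is simplified to a symmetric Gaussian in angular distance from the centre (rather than separate theta/phi widths), and evaluates the function at every pixel instead of a single point:
#!/usr/bin/env python
import numpy as np
import healpy as hp
import pylab as pl

NSIDE = 512
npix = hp.nside2npix(NSIDE)

theta0, phi0 = np.radians(80.), np.radians(20.)  # centre of the Gaussian
sigma = np.radians(20.)                          # its angular width

# unit vectors of every pixel and of the centre
vecs = np.array(hp.pix2vec(NSIDE, np.arange(npix)))  # shape (3, npix)
centre = hp.ang2vec(theta0, phi0)                    # shape (3,)

# angular distance of each pixel from the centre
ang = np.arccos(np.clip(centre @ vecs, -1., 1.))

gmap = np.exp(-0.5*(ang/sigma)**2)  # evaluate the Gaussian at every pixel
hp.mollview(gmap, title="Gaussian by hand")
pl.show()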

Related

Getting coordinates of surface nodes using pyvista

I'm wondering if anyone could help me figure out how to apply pyvista to extract the surface nodes of a 3D object. For example, suppose I have a collection of points that builds out a sphere, including 'interior' and 'surface' points:
import numpy as np
import matplotlib.pyplot as plt
N = 50
max_rad = 1
thetavec = np.linspace(0,np.pi,N)
phivec = np.linspace(0,2*np.pi,2*N)
[th, ph] = np.meshgrid(thetavec,phivec)
R = np.random.rand(*th.shape) * max_rad
x = R*np.sin(th)*np.cos(ph)
y = R*np.sin(th)*np.sin(ph)
z = R*np.cos(th)
ax = plt.axes(projection='3d')
ax.plot(x.flatten(), y.flatten(), z.flatten(), '*')
plt.show()
Now I'd like to apply pyvista's extract_surface to locate the 'nodes' that live on the surface, together with their coordinates. That is, I'd like extract_surface to return an array or dataframe of the coordinates of the surface points. I've tried to build a PolyData object just with the vertices above (see link and the section 'Initialize with just vertices').
Any help is much appreciated. Thanks!
Since you've confirmed in a comment that you're looking for a convex hull, you can do this using the delaunay_3d() filter. The output of the triangulation is an UnstructuredGrid that contains a grid of tetrahedra filling the convex hull of your mesh. Calling extract_surface() on this space-filling mesh will give you the actual exterior, i.e. the convex hull:
import numpy as np
import pyvista as pv
# your example data
N = 50
max_rad = 1
thetavec = np.linspace(0,np.pi,N)
phivec = np.linspace(0,2*np.pi,2*N)
[th, ph] = np.meshgrid(thetavec,phivec)
R = np.random.rand(*th.shape) * max_rad
x = R*np.sin(th)*np.cos(ph)
y = R*np.sin(th)*np.sin(ph)
z = R*np.cos(th)
# create a PyVista point cloud (in a PolyData)
points = np.array([x, y, z]).reshape(3, -1).T # shape (n_points, 3)
cloud = pv.PolyData(points)
# extract surface by Delaunay triangulation to get the convex hull
convex_hull = cloud.delaunay_3d().extract_surface() # contains faces
surface_points = convex_hull.cast_to_pointset() # only points
# check what we've got
surface_points.plot(
    render_points_as_spheres=True,
    point_size=10,
    background='paleturquoise',
    scalar_bar_args={'color': 'black'},
)
(On older PyVista versions where PolyData.cast_to_pointset() is not available, one can use convex_hull.extract_points(range(convex_hull.n_points)) instead.)
Plotting this and playing around with it interactively, it's obvious that the result only contains points from the convex hull (i.e. it doesn't contain interior points).
Also note the colouring: the scalars used are called 'vtkOriginalPointIds', and they are exactly what the name suggests: the index of each point in the original point cloud. So we can use these scalars to recover the indices of the surface points within the original point cloud:
# grab original point indices
surface_point_inds = surface_points.point_data['vtkOriginalPointIds']
# confirm that the indices are correct
print(np.array_equal(surface_points.points, cloud.points[surface_point_inds, :]))
# True
Of course if you don't need to identify the surface points in the original point cloud then you can just use surface_points.points or even convex_hull.points to get a standalone array of convex hull point coordinates.

Is there a python friendly function to calculate moving average for 3D data?

I am trying to create a smooth contour map by using a simple moving average filter. I have a .CSV with three columns: x and y (which are positions) and z, which is the heat at a given (x, y). Each column has 23,236 values. I have tried a 1D moving average separately for x and then for y. Given the size of the data it has been neither fast nor effective at smoothing. I would appreciate any help. The code below plots the contour map.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("F1_amp.csv")
# pivot the long-format (x, y, z) rows into a 2D grid of z values
Z = df.pivot_table(index='x', columns='y', values='z').T.values
X_unique = np.sort(df.x.unique())
Y_unique = np.sort(df.y.unique())
X, Y = np.meshgrid(X_unique, Y_unique)
plt.contourf(X, Y, Z, 20, cmap='hot')
plt.colorbar()
plt.show()
You can do this via convolution. This approach is fast and easy to generalize.
For 2D convolution you can use scipy.signal.convolve2d(). You can vary the size and the values of the kernel; as an example, I used a constant 3x3 kernel.
import numpy as np
import scipy.signal as sg

kernel_shape = (3, 3)
kernel = np.full(kernel_shape, 1/np.prod(kernel_shape))  # uniform 3x3 averaging kernel
z = sg.convolve2d(z, kernel, mode='valid')  # z is the 2D grid of heat values (Z in the question)
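If you want the smoothed grid to keep the same shape as the input (so the X and Y arrays from the question can be reused directly), here is a small sketch, assuming Z, X and Y are the arrays built in the question above:
import matplotlib.pyplot as plt
# 'same' with symmetric padding preserves the grid shape
Z_smooth = sg.convolve2d(Z, kernel, mode='same', boundary='symm')
plt.contourf(X, Y, Z_smooth, 20, cmap='hot')
plt.colorbar()
plt.show()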

Discrete fourier transformation from a list of x-y points

What I'm trying to do is, from a list of x-y points that has a periodic pattern, calculate the period. With my limited mathematics knowledge I know that Fourier Transformation can do this sort of thing.
I'm writing Python code.
I found a related answer here, but it uses an evenly-distributed x axis, i.e. dt is fixed, which isn't the case for me. Since I don't really understand the math behind it, I'm not sure if it would work properly in my code.
My question is, does it work? Or, is there some method in numpy that already does my work? Or, how can I do it?
EDIT: All values are Pythonic float (i.e. double-precision)
For samples that are not evenly spaced, you can use scipy.signal.lombscargle to compute the Lomb-Scargle periodogram. Here's an example, with a signal whose dominant frequency is 2.5 rad/s.
from __future__ import division
import numpy as np
from scipy.signal import lombscargle
import matplotlib.pyplot as plt
np.random.seed(12345)
n = 100
x = np.sort(10*np.random.rand(n))
# Dominant periodic signal
y = np.sin(2.5*x)
# Add some smaller periodic components
y += 0.15*np.cos(0.75*x) + 0.2*np.sin(4*x+.1)
# Add some noise
y += 0.2*np.random.randn(x.size)
plt.figure(1)
plt.plot(x, y, 'b')
plt.xlabel('x')
plt.ylabel('y')
plt.grid()
dxmin = np.diff(x).min()
duration = x.ptp()
freqs = np.linspace(1/duration, n/duration, 5*n)
periodogram = lombscargle(x, y, freqs)
kmax = periodogram.argmax()
print("%8.3f" % (freqs[kmax],))
plt.figure(2)
plt.plot(freqs, np.sqrt(4*periodogram/(5*n)))
plt.xlabel('Frequency (rad/s)')
plt.grid()
plt.axvline(freqs[kmax], color='r', alpha=0.25)
plt.show()
The script prints 2.497 (the recovered dominant frequency) and plots the signal and its periodogram.
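One small addition of mine: since the question ultimately asks for a period and lombscargle works in angular frequency (rad/s), convert the peak via T = 2*pi/omega:
period = 2*np.pi / freqs[kmax]  # 2*pi/2.497 ~ 2.516, close to the true 2*pi/2.5 ~ 2.513
print("%8.3f" % period)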
As a starting point:
(I assume all coordinates are positive integers; otherwise map them to a reasonable range like 0..4095)
1) find the max coordinates xMax, yMax in the list
2) make a 2D array with dimensions (yMax, xMax) and fill it with zeros
3) walk through your list, setting the array elements corresponding to each coordinate to 1
4) make a 2D Fourier transform of the array
5) look for peculiarities (peaks) in the FT result
This page from SciPy covers the basics of how the discrete Fourier transform works:
http://docs.scipy.org/doc/numpy-1.10.0/reference/routines.fft.html
It also documents the DFT API; for your case, you should look at how to use fft2.
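A rough, purely illustrative sketch of those steps (the points list here is made up):
import numpy as np

# hypothetical input: a list of (x, y) integer coordinates
points = [(0, 0), (5, 3), (10, 6), (15, 9)]
xs, ys = zip(*points)

img = np.zeros((max(ys) + 1, max(xs) + 1))
img[ys, xs] = 1                      # mark each point in the 2D array

spectrum = np.abs(np.fft.fft2(img))
spectrum[0, 0] = 0                   # suppress the DC component
peak = np.unravel_index(spectrum.argmax(), spectrum.shape)
print(peak)                          # indices of the strongest frequency bin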

How to set a maximum distance between points for interpolation when using scipy.interpolate.griddata?

I have a spatial set of data with Z values I want to interpolate using some matplotlib or scipy module. My XY points have a concave shape and I don't want interpolated values in the empty zone. Is there a method that easily allows the user to set a maximum distance between points, to avoid interpolation in the empty zone?
I struggled with the same question and found a workaround by re-using the kd-tree implementation that scipy itself uses for nearest-neighbour interpolation, masking the interpolated result array based on the distances returned by the kd-tree query.
Consider the example code below:
import numpy as np
import scipy.interpolate
import matplotlib.pyplot as plt
# Generate some random data
xy = np.random.random((2**15, 2))
z = np.sin(10*xy[:,0]) * np.cos(10*xy[:,1])
grid = np.meshgrid(
np.linspace(0, 1, 512),
np.linspace(0, 1, 512)
)
# Interpolate
result1 = scipy.interpolate.griddata(xy, z, tuple(grid), 'linear')
# Show
plt.figimage(result1)
plt.show()
# Remove rectangular window
mask = np.logical_and.reduce((xy[:,0] > 0.2, xy[:,0] < 0.8, xy[:,1] > 0.2, xy[:,1] < 0.8))
xy, z = xy[~mask], z[~mask]
# Interpolate
result2 = scipy.interpolate.griddata(xy, z, tuple(grid), 'linear')
# Show
plt.figimage(result2)
plt.show()
This generates two images; the second shows strong interpolation artefacts caused by the missing rectangular window in the centre of the data.
Now if we run the code below on the same example data, the masked result no longer shows these artefacts.
THRESHOLD = 0.01
from scipy.interpolate.interpnd import _ndim_coords_from_arrays
from scipy.spatial import cKDTree
# Construct kd-tree, functionality copied from scipy.interpolate
tree = cKDTree(xy)
xi = _ndim_coords_from_arrays(tuple(grid), ndim=xy.shape[1])
dists, indexes = tree.query(xi)
# Copy the original result and mask missing values with NaNs
result3 = result2.copy()  # note: result2[:] would be a view, not a copy
result3[dists > THRESHOLD] = np.nan
# Show
plt.figimage(result3)
plt.show()
I realize it may not be exactly the visual effect you're after. In particular, if your dataset is not very dense, you'll need a high distance threshold so that legitimately interpolated data is not masked. If your data is dense enough, you might get away with a relatively small radius, or maybe come up with a smarter cut-off function. Hope that helps.
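If you need to choose THRESHOLD from the data itself, one possible heuristic (my own suggestion, not part of the approach above) is to base it on the nearest-neighbour spacing reported by the same kd-tree:
# k=2 because each input point's nearest hit is the point itself (distance 0)
nn_dists, _ = tree.query(xy, k=2)
THRESHOLD = 3 * np.median(nn_dists[:, 1])  # a few times the typical spacing; tune the factor to taste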

How to display a volume with non-cubic voxels correctly in mayavi

I'm using mayavi (3.3.2) to display volume isosurfaces.
Generally, my volumes do not have cubic voxels; for example, the sampling grid might be 1mm x 1mm in X and Y, but 1.4mm in the Z direction.
How can I get such volumes to display with the correct spatial proportions using mayavi's mlab.contour3d or mlab.pipeline.iso_surface ? I'd really prefer to not resample the volumes to a cubic grid.
Another way of stating the problem: what can I do to get the code below to display a sphere instead of a flattened ellipsoid (taking the volume with its intended 1:1:2 aspect-ratio voxels as a given, and without regenerating or resampling the volume)?
import numpy as np
from enthought.mayavi import mlab
def sqr(x): return x*x
s=64
x,y,z = np.ogrid[0:s,0:s,0:s//2]  # s//2 keeps the slice bounds integer
volume = np.sqrt(sqr(x-s/2)+sqr(y-s/2)+sqr(2*z-s/2))
isos = mlab.contour3d(volume,contours=[5,15,25],transparent=True)
mlab.show()
I'm guessing there ought to be some way of getting at the underlying VTK graphics pipeline (its transforms etc) and inserting the appropriate anisotropic scaling (if there isn't some way of doing it more directly through the mlab API).
For this, it's easiest to explicitly create a scalar_field object from the input data.
I actually do this quite frequently, as we like to put things in depth (where positive is downwards) in geology. That means that you need a negative increment in the z-direction. It would be nice if it were just an argument to the various mlab functions, but it's still not too hard to do.
from mayavi import mlab
import numpy as np
s=64
x,y,z = np.ogrid[0:s,0:s,0:s//2]
data = np.sqrt((x-s/2)**2 + (y-s/2)**2 + (2*z-s/2)**2)
grid = mlab.pipeline.scalar_field(data)
grid.spacing = [1.0, 1.0, 2.0]
contours = mlab.pipeline.contour_surface(grid, contours=[5, 15, 25],
                                         transparent=True)
mlab.show()
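As an alternative sketch (my own, not from the answer above): the mlab functions also accept explicit coordinate arrays, so the anisotropy can be baked into the coordinates themselves:
from mayavi import mlab
import numpy as np

s = 64
# mgrid rather than ogrid: contour3d expects full 3D coordinate arrays
x, y, z = np.mgrid[0:s, 0:s, 0:s//2]
data = np.sqrt((x - s/2)**2 + (y - s/2)**2 + (2*z - s/2)**2)

# stretch the z axis by a factor of 2 directly in the coordinates
mlab.contour3d(x, y, 2*z, data, contours=[5, 15, 25], transparent=True)
mlab.show()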
