Apply rotation to HEALPix map in healpy - python

I have a HEALPix map that I have read in using healpy, however it is in galactic coordinates and I need it in celestial/equatorial coordinates. Does anybody know of a simple way to convert the map?
I have tried using healpy.Rotator to convert from (l,b) to (phi,theta) and then using healpy.ang2pix to reorder the pixels, but the map still looks strange.
It would be great if there were a function similar to Rotator that you could call like: map = AnotherRotator(map, coord=['G','C']). Does anybody know of such a function?
Thanks,
Alex
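
For reference, recent healpy versions appear to provide exactly this on the Rotator object itself, via rotate_map_pixel (and rotate_map_alms). A minimal sketch, assuming the map was read with hp.read_map (the file name is just a placeholder):

import healpy as hp

m_gal = hp.read_map("my_map.fits")      # hypothetical file, map in Galactic coords
rot = hp.Rotator(coord=['G', 'C'])      # Galactic -> Celestial/equatorial
m_equ = rot.rotate_map_pixel(m_gal)     # pixel-space rotation of the whole map
# rot.rotate_map_alms(m_gal) performs the same rotation in harmonic space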

I realise that this was asked ages ago, but I was having the same problem myself this week and found your post. I've found a couple of potential solutions, so I'll share them in case someone else comes across this and finds them useful.
Solution 1: This sort of depends on the format your data is coming in. Mine came in a (theta, phi) grid.
import numpy as np
import healpy as H

map = <your original map>
nside = <your map resolution, mine=256>
npix = H.nside2npix(nside)
pix = np.arange(npix)
t, p = H.pix2ang(nside, pix)  # theta, phi for each pixel
r = H.Rotator(deg=True, rot=[<THETA ROTATION>, <PHI ROTATION>])
map_rot = np.zeros(npix)
for i in pix:
    trot, prot = r(t[i], p[i])
    tpix = int(trot * 180. / np.pi)  # my data came in a (theta, phi) grid -- this finds its location there
    ppix = int(prot * 180. / np.pi)
    map_rot[i] = map[ppix, tpix]  # this being the right way round may need double-checking
Solution 2: Haven't quite finished testing this, but just came across it AFTER doing the annoying work above...
map_rot = H.mollview(map,deg=True,rot=[<THETA>,<PHI>], return_projected_map=True)
which gives a 2D numpy array. I'm interested to know how to convert this back into a healpix map...

I found another potential solution, after searching off and on for a few months. I haven't tested it very much yet, so please be careful!
Saul's Solution 2, above, is the key (great suggestion!)
Basically, you combine the functionality of healpy.mollview (gnomview, cartview, and orthview work as well) with that of the reproject_to_healpix function in the reproject package (http://reproject.readthedocs.org/en/stable/).
The resulting map is suitable for my angular scales, but I can't say how accurate the transformation is compared to other methods.
-----Basic Outline----------
Step 1: Read in the map and make the rectangular array via cartview. As Saul indicated above, this is also one way to do the rotation. If you're just doing a standard rotation/coordinate transformation, then all you need is the coord keyword. From Celestial to Galactic coordinates, set coord = ['C','G']
map_Gal = hp.cartview(map_Cel, coord=['C','G'], return_projected_map=True, xsize=desired_xsize, norm='hist',nest=False)
Step 2: Write a template all-sky FITS header (as in the example below). I wrote mine to have the same average pixel-scale as my desired HEALPix map.
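For instance, a template header along these lines (an illustrative sketch using astropy.io.fits, not the exact header from the linked notebook; it describes an all-sky plate carrée (CAR) grid in Galactic coordinates, roughly 0.075 deg/pixel here, and its size should match the array returned by cartview):

from astropy.io import fits

xsize, ysize = 4800, 2400           # match the cartview output; 360/4800 = 0.075 deg/pix
target_header = fits.Header()
target_header['NAXIS']  = 2
target_header['NAXIS1'] = xsize
target_header['NAXIS2'] = ysize
target_header['CTYPE1'] = 'GLON-CAR'
target_header['CRPIX1'] = xsize / 2.0 + 0.5
target_header['CRVAL1'] = 0.0
target_header['CDELT1'] = -360.0 / xsize
target_header['CUNIT1'] = 'deg'
target_header['CTYPE2'] = 'GLAT-CAR'
target_header['CRPIX2'] = ysize / 2.0 + 0.5
target_header['CRVAL2'] = 0.0
target_header['CDELT2'] = 180.0 / ysize
target_header['CUNIT2'] = 'deg'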
Step 3: Use reproject.reproject_to_healpix
reproject includes a function for mapping a "normal" array (or FITS file) into the HEALPix projection. Combine that with the ability to return the array created by healpy.mollview/cartview/orthview/gnomview, and you can rotate a HEALPix map of one coordinate system (Celestial) into another coordinate system (Galactic).
map_Gal_HP, footprint_Gal_HP = rp.reproject_to_healpix((map_Gal, target_header), coord_system_out= 'GALACTIC', nside=nside, nested=False)
It comes down, essentially, to those two commands. However, you'll have to make a template header giving the pixel scale and size corresponding to the intermediate all-sky map you want to make.
-----Full Working Example (iPython notebook format + FITS sample data)------
https://github.com/aaroncnb/healpix_coordtrans_example/tree/master
The code there should run very quickly, but that's because the maps are heavily degraded. I did the same for my NSIDE 1024 and 2048 maps, and it took about an hour.
------Before and After Images------

This function seems to do the trick (reasonably slow, but it should be better than the for loop):
import numpy as np
import healpy as hp

def rotate_map(hmap, rot_theta, rot_phi):
    """
    Take hmap (a healpix map array) and return another healpix map array
    which is ordered such that it has been rotated in (theta, phi) by the
    amounts given.
    """
    nside = hp.npix2nside(len(hmap))
    # Get theta, phi for non-rotated map
    t, p = hp.pix2ang(nside, np.arange(hp.nside2npix(nside)))  # theta, phi
    # Define a rotator
    r = hp.Rotator(deg=False, rot=[rot_phi, rot_theta])
    # Get theta, phi under rotated co-ordinates
    trot, prot = r(t, p)
    # Interpolate map onto these co-ordinates
    rot_map = hp.get_interp_val(hmap, trot, prot)
    return rot_map
Using this on data from PyGSM gives the following:
hp.mollview(np.log(rotate_map(gsm.generated_map_data, 0,0)))
Upon rotation of phi:
hp.mollview(np.log(rotate_map(gsm.generated_map_data, 0,np.pi)))
Or rotating theta:
hp.mollview(np.log(rotate_map(gsm.generated_map_data, np.pi/4,0)))
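For the original Galactic-to-equatorial question, the same interpolation trick works with a coordinate-transforming Rotator instead of fixed rotation angles. A sketch along the lines of the function above (untested against the posted data; the coord ordering is worth double-checking against the healpy docs):

import numpy as np
import healpy as hp

def coordtrans_map(hmap, coord=['C', 'G']):
    """Re-grid hmap between coordinate systems, e.g. coord=['C','G']
    takes a map stored in Galactic coordinates onto a Celestial pixel grid."""
    nside = hp.npix2nside(len(hmap))
    t, p = hp.pix2ang(nside, np.arange(hp.nside2npix(nside)))
    r = hp.Rotator(coord=coord)
    # For each output pixel, find the corresponding direction in the
    # input frame and interpolate the input map there.
    trot, prot = r(t, p)
    return hp.get_interp_val(hmap, trot, prot)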

Related

Build Shapely point objects from .TIF

I would like to convert an image (.tiff) into Shapely points. There are 45 million pixels, so I need a way to accomplish this without a loop (it is currently taking 15+ hours).
For example, I have a .tiff file which, when opened, is a 5000x9000 array. The values are pixel values (colors) that range from 1 to 215.
I open the tif with rasterio.open(xxxx.tif).
The desired EPSG is 32615.
I need to preserve the pixel value but also attach geospatial positioning. This is so I can sjoin over a polygon to see whether the points are inside it. I can handle the transform after processing, but I cannot figure out a way to accomplish this without a loop. Any help would be greatly appreciated!
If you just want a boolean array indicating whether the points are within any of the geometries, I'd dissolve the shapes into a single MultiPolygon then use shapely.vectorized.contains. The shapely.vectorized module is currently not covered in the documentation, but it's really good to know about!
Something along the lines of:
import shapely.ops
import shapely.vectorized

# for a gridded dataset with 2-D arrays lats, lons
# and a list of shapely polygons/multipolygons all_shapes
XX = lons.ravel()
YY = lats.ravel()
single_multipolygon = shapely.ops.unary_union(all_shapes)
in_any_shape = shapely.vectorized.contains(single_multipolygon, XX, YY)
If you're looking to identify which shape the points are in, use geopandas.points_from_xy to convert your x, y point coordinates into a GeometryArray, then use geopandas.sjoin to find the index of the shape corresponding to each (x, y) point:
import geopandas

geoarray = geopandas.points_from_xy(XX, YY)
points_gdf = geopandas.GeoDataFrame(geometry=geoarray)
shapes_gdf = geopandas.GeoDataFrame(geometry=all_shapes)
shape_index_by_point = geopandas.sjoin(
    shapes_gdf, points_gdf, how='right', predicate='contains',
)
This is still a large operation, but it's vectorized and will be significantly faster than a looped solution. The geopandas route is also a good option if you'd like to convert the projection of your data or use other geopandas functionality.
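For completeness, a hedged sketch of how one might build the XX, YY coordinate arrays from the raster itself without a Python loop (the file name is a stand-in; this assumes the raster is already in, or has been reprojected to, the desired EPSG:32615):

import numpy as np
import rasterio

with rasterio.open("xxxx.tif") as src:
    band = src.read(1)            # pixel values (1-215), shape (5000, 9000)
    transform = src.transform     # affine transform of the raster

rows, cols = np.meshgrid(
    np.arange(band.shape[0]), np.arange(band.shape[1]), indexing="ij"
)
# The affine transform maps (col, row) -> (x, y); add 0.5 for pixel centres.
xs = transform.c + transform.a * (cols + 0.5) + transform.b * (rows + 0.5)
ys = transform.f + transform.d * (cols + 0.5) + transform.e * (rows + 0.5)

XX, YY = xs.ravel(), ys.ravel()
values = band.ravel()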

2D interpolate list of many points [duplicate]

So, I have three numpy arrays which store latitude, longitude, and some property value on a grid -- that is, I have LAT(y,x), LON(y,x), and, say temperature T(y,x), for some limits of x and y. The grid isn't necessarily regular -- in fact, it's tripolar.
I then want to interpolate these property (temperature) values onto a bunch of different lat/lon points (stored as lat1(t), lon1(t), for about 10,000 t...) which do not fall on the actual grid points. I've tried matplotlib.mlab.griddata, but that takes far too long (it's not really designed for what I'm doing, after all). I've also tried scipy.interpolate.interp2d, but I get a MemoryError (my grids are about 400x400).
Is there any sort of slick, preferably fast way of doing this? I can't help but think the answer is something obvious... Thanks!!
Try the combination of inverse-distance weighting and scipy.spatial.KDTree described in the SO answer inverse-distance-weighted-idw-interpolation-with-python. Kd-trees work nicely in 2d, 3d, ...; inverse-distance weighting is smooth and local; and k (the number of nearest neighbours) can be varied to trade off speed against accuracy.
There is a nice inverse distance example by Roger Veciana i Rovira, along with some code using GDAL to write to GeoTIFF if you're into that.
This interpolates to a regular grid, of course, but it applies here too, assuming you first project the data to a pixel grid with pyproj or something similar, all the while being careful about which projection is used for your data.
A copy of his algorithm and example script:
from math import pow
from math import sqrt
import numpy as np
import matplotlib.pyplot as plt

def pointValue(x, y, power, smoothing, xv, yv, values):
    nominator = 0
    denominator = 0
    for i in range(0, len(values)):
        dist = sqrt((x - xv[i])*(x - xv[i]) + (y - yv[i])*(y - yv[i]) + smoothing*smoothing)
        # If the point is really close to one of the data points,
        # return the data point value to avoid singularities
        if dist < 0.0000000001:
            return values[i]
        nominator = nominator + (values[i] / pow(dist, power))
        denominator = denominator + (1 / pow(dist, power))
    # Return NODATA if the denominator is zero
    if denominator > 0:
        value = nominator / denominator
    else:
        value = -9999
    return value

def invDist(xv, yv, values, xsize=100, ysize=100, power=2, smoothing=0):
    valuesGrid = np.zeros((ysize, xsize))
    for x in range(0, xsize):
        for y in range(0, ysize):
            valuesGrid[y][x] = pointValue(x, y, power, smoothing, xv, yv, values)
    return valuesGrid

if __name__ == "__main__":
    power = 1
    smoothing = 20
    # Creating some data, with each coordinate and the values stored in separate lists
    xv = [10, 60, 40, 70, 10, 50, 20, 70, 30, 60]
    yv = [10, 20, 30, 30, 40, 50, 60, 70, 80, 90]
    values = [1, 2, 2, 3, 4, 6, 7, 7, 8, 10]
    # Creating the output grid (100x100, in the example)
    ti = np.linspace(0, 100, 100)
    XI, YI = np.meshgrid(ti, ti)
    # Creating the interpolation function and populating the output matrix value
    ZI = invDist(xv, yv, values, 100, 100, power, smoothing)
    # Plotting the result
    plt.subplot(1, 1, 1)
    plt.pcolor(XI, YI, ZI)
    plt.scatter(xv, yv, 100, values)
    plt.title('Inv dist interpolation - power: ' + str(power) + ' smoothing: ' + str(smoothing))
    plt.xlim(0, 100)
    plt.ylim(0, 100)
    plt.colorbar()
    plt.show()
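For the ~10,000 scattered query points in the question, the KD-tree route suggested above can be sketched with scipy.spatial.cKDTree (a rough illustration of the idea, not the code from the linked answer; LON, LAT, T, lon1 and lat1 are the question's arrays). Distances here are in raw degrees; projecting first, as suggested above, would be more accurate.

import numpy as np
from scipy.spatial import cKDTree

# Flatten the (possibly irregular) model grid into scattered points.
grid_pts = np.column_stack([LON.ravel(), LAT.ravel()])
grid_vals = T.ravel()
tree = cKDTree(grid_pts)

# k nearest grid points for each of the ~10,000 query locations.
query_pts = np.column_stack([lon1, lat1])
dist, idx = tree.query(query_pts, k=8)

# Inverse-distance weights (guard against zero distances).
w = 1.0 / np.maximum(dist, 1e-12)**2
T_interp = np.sum(w * grid_vals[idx], axis=1) / np.sum(w, axis=1)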
There are a bunch of options here; which one is best will depend on your data. However, I don't know of an out-of-the-box solution for you.
You say your input data comes from tripolar data. There are three main cases for how this data could be structured:
1. Sampled from a 3D grid in tripolar space, projected back to 2D LAT, LON data.
2. Sampled from a 2D grid in tripolar space, projected into 2D LAT, LON data.
3. Unstructured data in tripolar space, projected into 2D LAT, LON data.
The easiest of these is 2. Instead of interpolating in LAT LON space, "just" transform your point back into the source space and interpolate there.
Another option that works for cases 1 and 2 is to search for the cell in tripolar space that covers your sample point (you can use a BSP or grid-type structure to speed up this search), pick one of the cells, and interpolate inside it.
Finally, there's a heap of unstructured interpolation options... but they tend to be slow.
A personal favourite of mine is to use linear interpolation of the nearest N points; finding those N points can again be done with gridding or a BSP. Another good option is to Delaunay-triangulate the unstructured points and interpolate on the resulting triangular mesh.
Personally if my mesh was case 1, I'd use an unstructured strategy as I'd be worried about having to handle searching through cells with overlapping projections. Choosing the "right" cell would be difficult.
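As a concrete illustration of the Delaunay/unstructured route mentioned above, here is a minimal sketch using scipy.interpolate.griddata, which triangulates the scattered points internally (assuming the question's LON(y,x), LAT(y,x), T(y,x) arrays and target points lon1(t), lat1(t)):

import numpy as np
from scipy.interpolate import griddata

# Treat the model grid as scattered points; griddata Delaunay-triangulates
# them and does linear interpolation inside each triangle.
points = np.column_stack([LON.ravel(), LAT.ravel()])
values = T.ravel()

T_at_targets = griddata(points, values, (lon1, lat1), method='linear')

# Points outside the convex hull come back as NaN; a nearest-neighbour
# pass can fill them if needed.
mask = np.isnan(T_at_targets)
T_at_targets[mask] = griddata(points, values, (lon1[mask], lat1[mask]),
                              method='nearest')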
I suggest taking a look at the interpolation features of GRASS (an open-source GIS package) (http://grass.ibiblio.org/gdp/html_grass62/v.surf.bspline.html). It's not in Python, but you can reimplement it or interface with the C code.
Am I right in thinking your data grids look something like this (red is the old data, blue is the new interpolated data)?
(image: http://www.geekops.co.uk/photos/0000-00-02%20%28Forum%20images%29/DataSeparation.png)
This might be a slightly brute-force-ish approach, but what about rendering your existing data as a bitmap? OpenGL will do simple interpolation of colours for you with the right options configured, and you could render the data as triangles, which should be fairly fast. You could then sample pixels at the locations of the new points.
Alternatively, you could sort your first set of points spatially and then find the closest old points surrounding your new point and interpolate based on the distances to those points.
There is a FORTRAN library called BIVAR, which is very well suited to this problem. With a few modifications you can make it usable from Python using f2py.
From the description:
BIVAR is a FORTRAN90 library which interpolates scattered bivariate data, by Hiroshi Akima.
BIVAR accepts a set of (X,Y) data points scattered in 2D, with associated Z data values, and is able to construct a smooth interpolation function Z(X,Y), which agrees with the given data, and can be evaluated at other points in the plane.

Changing the coordinate map in function of its coordinates (gravitational lensing)

I'm trying to compute the optical phenomenon called gravitational lensing. In simple words, it occurs when a massive object sits between me as an observer and a star or some other light source. Because of its large mass, the light bends, and to us it appears to come from a location other than its real position. There is a particular (and simpler) case where we suppose the mass is spherical, so from our perspective it is circular in a 2D plane (or photo).
My idea for the code was to change the coordinates of a 2D plane as a function of where my light source is. In other words, if I have a spherical light source that is far from my massive object, its image will not change, but if it is close to the spherical mass it will (in fact, if it is exactly behind the massive object, I as an observer will see the so-called Einstein ring).
To compute that, I first write a mapping for this function. I take the approximation a = x + sin(t)/exp(x), b = y + cos(t)/exp(y). So when the light source is far from the mass, the sin(t)/exp(x) and cos(t)/exp(y) terms are approximately zero, and if it is just behind the mass the source coordinates are (0,0), so the image returns (sin(t), cos(t)), the Einstein circle I expected to get.
I coded it this way. First, I define my approximation:
def coso1(x, y):
    t = arange(0, 2*pi, .01)
    a = x + sin(t)/exp(x)
    b = y + cos(t)/exp(y)
    plt.plot(a, b)
    plt.show()
Then I try to plot that to see how the coordinate map is changing:
from numpy import *
from matplotlib.pyplot import *
x=linspace(-10,10,10)
y=linspace(-10,10,10)
y = y.reshape(y.size, 1)
x = x.reshape(x.size, 1)
plot(coso1(x,y))
And I get this plot.
Graphic
Notice that it looks that way because of the interval I chose for the x and y values. If I instead take the "frontier" case where x={-1,0,1} and y={-1,0,1}, it will show how the space has been deformed (or I'm guessing that's what I'm seeing).
I then have a few questions. An easy one, for which I haven't found an easy answer, is whether I can manipulate this transformation interactively (rotate with the mouse to appreciate the deformation, a controller for how x or y change). And the two hard questions: can I plot the contour lines to see exactly how the topography of my map changes at each level of x (suppose I hold y constant)? And: if this is my "new" map of how the coordinates behave, can I use it as a tool so that any image I project gets distorted according to this "new" map? Something analogous to how cameras work with a fisheye lens effect.

Transforming Simplex Map for UV mapping on a sphere

I would like to dynamically generate a simplex-based image that will map nicely onto a sphere in three.js; however, I'm having problems with distortion at the poles. Here is the initial map code:
def generate_map(seed, width=WIDTH, height=HEIGHT, xoffset=0.0, yoffset=0.0, zoom=1.0):
    """ Return a simple matrix of simplex noise from 0-255."""
    mapdata = []
    random.seed(seed)
    zoom = zoom * 100.0
    for x in xrange(height):
        row = []
        for y in xrange(width):
            xparam = merc_x(float((x + xoffset) / zoom))
            yparam = merc_y(float((y + yoffset) / zoom))
            noisevalue = snoise2(xparam, yparam, NOISEOCTAVES, 0.52, 2.0, height/zoom*2, width/zoom, float(seed))
            # convert 1.0...-1.0 to 255...0
            pixel = int((noisevalue + 1) / 2 * PIXEL_DEPTH - 1)
            cell = {'height': pixel, 'x': x, 'y': y}
            row.append(cell)
        mapdata.append(row)
    return mapdata
This generates data which creates a map that tiles sideways, but not at the top and bottom (which is ideal).
The problem arises when this is mapped on a sphere in three.js:
return new THREE.Mesh(
    new THREE.SphereGeometry(radius, segments, segments),
    new THREE.MeshPhongMaterial({
        map: THREE.ImageUtils.loadTexture('/worldmap.png?seed=' + seed),
    })
);
You end up with pinching near the poles due to the UV mapping that takes place.
I suspect the simplest way to solve the pinching issues is to use a mercator transformation on the source image, but for the life of me, I can't figure out how to implement it.
Does anyone have any suggestions? The full source of the project can be found here if you want to give it a try.
Update:
Looking at the handy UV Mapping graph on wikipedia in combination with the Three.JS sphere mapping example, I think these are a good place to start...
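
For what it's worth, a common way to avoid pole pinching altogether is to skip the 2-D noise plus projection and sample 3-D noise directly on the sphere: map each output pixel's (u, v) to longitude/latitude, convert that to a point on the unit sphere, and feed it to snoise3. A rough sketch of that idea (not from the original post; WIDTH, HEIGHT, PIXEL_DEPTH and NOISEOCTAVES are the question's own constants, and generate_spherical_map is a hypothetical name):

from math import cos, sin, pi
from noise import snoise3

def generate_spherical_map(seed, width=WIDTH, height=HEIGHT, scale=1.0):
    """Sample simplex noise on the unit sphere so the texture wraps
    seamlessly in longitude and has no pinching at the poles."""
    mapdata = []
    for y in xrange(height):
        lat = pi * (0.5 - float(y) / height)       # +pi/2 .. -pi/2
        row = []
        for x in xrange(width):
            lon = 2.0 * pi * float(x) / width      # 0 .. 2*pi
            px = scale * cos(lat) * cos(lon)
            py = scale * cos(lat) * sin(lon)
            pz = scale * sin(lat)
            # seed is folded in as a simple coordinate offset here
            noisevalue = snoise3(px + seed, py, pz, NOISEOCTAVES, 0.52, 2.0)
            pixel = int((noisevalue + 1) / 2 * PIXEL_DEPTH - 1)
            row.append({'height': pixel, 'x': x, 'y': y})
        mapdata.append(row)
    return mapdata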

How can I use scipy.ndimage.interpolation.affine_transform to rotate an image about its centre?

I am perplexed by the API to scipy.ndimage.interpolation.affine_transform. And judging by this issue I'm not the only one. I'm actually wanting to do more interesting things with affine_transform than just rotating an image, but a rotation would do for starters. (And yes I'm well aware of scipy.ndimage.interpolation.rotate, but figuring out how to drive affine_transform is what interests me here).
When I want to do this sort of thing in systems like OpenGL, I think in terms of computing the transform which applies a 2x2 rotation matrix R about a centre c, and therefore think of points p being transformed as (p-c)R+c = pR+c-cR, which gives a c-cR term to be used as the translation component of the transform. However, according to the issue above, scipy's affine_transform does "offset first", so we actually need to compute an offset s such that (p-c)R+c=(p+s)R, which with a bit of rearrangement gives s=(c-cR)R' where R' is the inverse of R.
If I plug this into an ipython notebook (pylab mode; code below maybe needs some additional imports):
img=scipy.misc.lena()
#imshow(img,cmap=cm.gray);show()
centre=0.5*array(img.shape)
a=15.0*pi/180.0
rot=array([[cos(a),sin(a)],[-sin(a),cos(a)]])
offset=(centre-centre.dot(rot)).dot(linalg.inv(rot))
rotimg=scipy.ndimage.interpolation.affine_transform(
img,rot,order=2,offset=offset,cval=0.0,output=float32
)
imshow(rotimg,cmap=cm.gray);show()
I get an image which unfortunately isn't rotated about the centre.
So what's the trick I'm missing here?
Once treddy's answer got me a working baseline, I managed to get a better working model of affine_transform. It's not actually as odd as the issue linked in the original question hints.
Basically, each point (coordinate) p in the output image is transformed to pT+s where T and s are the matrix and offset passed to the function.
So if we want point c_out in the output to be mapped to and sampled from c_in in the input image, with rotation R and (possibly anisotropic) scaling S, we need pT+s = (p-c_out)RS+c_in, which can be rearranged to yield s = c_in - c_out T (with T=RS).
For some reason I then need to pass transform.T to affine_transform but I'm not going to worry about that too much; probably something to do with row-coordinates with transforms on the right (assumed above) vs column-coordinates with transforms on the left.
So here's a simple test rotating a centred image:
src=scipy.misc.lena()
c_in=0.5*array(src.shape)
c_out=array((256.0,256.0))
for i in xrange(0,7):
    a=i*15.0*pi/180.0
    transform=array([[cos(a),-sin(a)],[sin(a),cos(a)]])
    offset=c_in-c_out.dot(transform)
    dst=scipy.ndimage.interpolation.affine_transform(
        src,transform.T,order=2,offset=offset,output_shape=(512,512),cval=0.0,output=float32
    )
    subplot(1,7,i+1);axis('off');imshow(dst,cmap=cm.gray)
show()
Here it is modified for a different image size:
src=scipy.misc.lena()[::2,::2]
c_in=0.5*array(src.shape)
c_out=array((256.0,256.0))
for i in xrange(0,7):
    a=i*15.0*pi/180.0
    transform=array([[cos(a),-sin(a)],[sin(a),cos(a)]])
    offset=c_in-c_out.dot(transform)
    dst=scipy.ndimage.interpolation.affine_transform(
        src,transform.T,order=2,offset=offset,output_shape=(512,512),cval=0.0,output=float32
    )
    subplot(1,7,i+1);axis('off');imshow(dst,cmap=cm.gray)
show()
And here's a version with anisotropic scaling to compensate for the anisotropic resolution of the source image.
src=scipy.misc.lena()[::2,::4]
c_in=0.5*array(src.shape)
c_out=array((256.0,256.0))
for i in xrange(0,7):
    a=i*15.0*pi/180.0
    transform=array([[cos(a),-sin(a)],[sin(a),cos(a)]]).dot(diag(([0.5,0.25])))
    offset=c_in-c_out.dot(transform)
    dst=scipy.ndimage.interpolation.affine_transform(
        src,transform.T,order=2,offset=offset,output_shape=(512,512),cval=0.0,output=float32
    )
    subplot(1,7,i+1);axis('off');imshow(dst,cmap=cm.gray)
show()
Based on the insight from #timday that matrix and offset are defined in the output coordinate system, I would offer the following reading of the issue, which fits with standard notation in linear algebra and also helps in understanding the scaling of images. I use T.inv = T^-1 as pseudo-Python notation for the inverse of a matrix, and * for the dot product.
For each point o in the output image, affine_transform finds the corresponding point i in the input image as i=T.inv*o+s, where matrix=T.inv is the inverse of the 2x2 transformation matrix that one would use to define the forward affine transformation and offset=s is the translation defined in the output coordinates. For a pure rotation T=R=[[cos,-sin],[sin,cos]], and in this special case matrix=T.inv=T.T, which is the reason why #timday had to apply the transposition still (alternatively one could just use the negative angle).
The value for the offset s is found exactly the way described by #timday: if c_in is supposed to be positioned, after the affine transformation, at c_out (e.g. the input centre should be placed at the output centre), then c_in=T.inv*c_out+s, or s=c_in-T.inv*c_out (note the conventional mathematical order of the matrix product used here, matrix*vector, which is why #timday, who used the reverse order, didn't need a transposition at this point in his code).
If one wants a scaling S first and then a rotation R, it holds that T=R*S and therefore T.inv=S.inv*R.inv (note the reversed order). For example, if one wants to make the image twice as wide in the columns direction ('x'), then S=diag((1, 2)), hence S.inv=diag((1, 0.5)).
src = scipy.misc.lena()
c_in = 0.5 * array(src.shape)
dest_shape = (512, 1028)
c_out = 0.5 * array(dest_shape)
for i in xrange(0, 7):
    a = i * 15.0 * pi / 180.0
    rot = array([[cos(a), -sin(a)], [sin(a), cos(a)]])
    invRot = rot.T
    invScale = diag((1.0, 0.5))
    invTransform = dot(invScale, invRot)
    offset = c_in - dot(invTransform, c_out)
    dest = scipy.ndimage.interpolation.affine_transform(
        src, invTransform, order=2, offset=offset, output_shape=dest_shape, cval=0.0, output=float32
    )
    subplot(1, 7, i + 1);axis('off');imshow(dest, cmap=cm.gray)
show()
If the image is to be first rotated, then stretched, the order of the dot product needs to be reversed:
invTransform = dot(invRot, invScale)
Just doing some quick & dirty testing I noticed that taking the negative value of your offset seems to rotate about the centre.
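To tie the answers together, here is a small self-contained sketch using the modern scipy.ndimage.affine_transform entry point (scipy.misc.lena has since been removed from SciPy, so a synthetic test image stands in; the offset follows the s = c_in - T.inv * c_out rule derived above):

import numpy as np
from scipy import ndimage
import matplotlib.pyplot as plt

# Synthetic test image (a bright rectangle off-centre) in place of lena().
src = np.zeros((512, 512), dtype=np.float32)
src[100:250, 150:400] = 1.0

a = 30.0 * np.pi / 180.0
rot = np.array([[np.cos(a), -np.sin(a)],
                [np.sin(a),  np.cos(a)]])
inv_rot = rot.T                      # inverse of a pure rotation

c_in = 0.5 * np.array(src.shape)
c_out = 0.5 * np.array(src.shape)    # rotate about the image centre
offset = c_in - inv_rot.dot(c_out)   # s = c_in - T.inv * c_out

dst = ndimage.affine_transform(src, inv_rot, offset=offset,
                               order=2, cval=0.0, output_shape=src.shape)

plt.subplot(1, 2, 1); plt.axis('off'); plt.imshow(src, cmap='gray')
plt.subplot(1, 2, 2); plt.axis('off'); plt.imshow(dst, cmap='gray')
plt.show()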
