How can I design a simple function to automatically quantify a 2D rough surface geometrically, based on given scatter points? For example, using a number r: r=0 for a smooth surface, r=1 for a very rough surface, and 0 < r < 1 for a surface somewhere in between.
To illustrate the question more explicitly, the attached figure below shows several sketches of 2D rough surfaces. The dots are the scatter points with given coordinates. Every two adjacent dots can be connected, and a normal vector of each segment can be computed (marked with arrows). I would like to design a function like
def roughness(x, y):
    ...
    return r
where x and y are sequences of the coordinates of each scatter point. For example, in case (a), x=[0,1,2,3,4,5,6], y=[0,1,0,1,0,1,0]; in case (b), x=[0,1,2,3,4,5], y=[0,0,0,0,0,0]. When we call roughness(x, y), we should get r=1 (very rough) for case (a) and r=0 (smooth) for case (b), and maybe r=0.5 (medium) for case (d). The question thus refines to: what appropriate components do we need to put inside the function roughness?
Some initial thoughts:
Roughness of a surface is a local concept, considered only within a specific range, i.e. with several local points around the location of interest. Use the mean of the local normal vectors? This may fail: (a) and (b) have the same mean, (0, 1), but (a) is a rough surface and (b) is a smooth one. Use the variance of the local normal vectors? This may also fail: (c) and (d) have the same variance, but (c) is rougher than (d).
Maybe something like this:

import numpy as np

def roughness(x, y):
    # angles of the segments between successive points
    t = np.arctan2(np.diff(y), np.diff(x))
    # sine of the turn angle between successive segments, via
    # sin(a - b) = sin(a)cos(b) - cos(a)sin(b)
    ts = np.sin(t)
    tc = np.cos(t)
    dt = ts[1:] * tc[:-1] - tc[1:] * ts[:-1]
    # mean of the squared turn-angle sines
    return np.sum(dt**2) / len(dt)

This would give you something like what you're asking for.
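As a quick sanity check against the question's examples (the coordinate lists are taken from there), this returns exactly the endpoints asked for:

print(roughness([0, 1, 2, 3, 4, 5, 6], [0, 1, 0, 1, 0, 1, 0]))  # case (a): 1.0
print(roughness([0, 1, 2, 3, 4, 5], [0, 0, 0, 0, 0, 0]))        # case (b): 0.0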
Maybe you should consider a protocol definition:
1) Start with a geometric definition of the surface.
2) Grant that geometric surface intrinsic properties.
2.a) A step function can be based on a quadratic curve between two peaks or two troughs, with their concatenation point as the focus of the 'roughness quadratic', using the slope to define roughness, in analogy to the science behind road speed bumps.
2.b) Elliptical objects can be defined by a combination of deformation analysis with circles centred on the incongruity within the body. This can be solved in many ways analogous to step functions.
2.c) Flat lines: select points that deviate from the mean and do a Newton-style local fit around them with a window of 5-20 concatenated points, or whatever is clever.
3) Define a proper threshold that fits whatever intuition you are defining as "roughness", or apply the conventions of any professional field to your liking.
This branched approach might be quicker to program, but I am certain this solution can be refactored into a Euclidean construct of 3-point ellipticals, if someone is up for a geometry problem.
The mathematical definitions of many surface parameters can be found here; they can easily be implemented in numpy:
https://www.keyence.com/ss/products/microscope/roughness/surface/parameters.jsp
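For instance, two of the standard profile parameters from references like that one, the arithmetical mean deviation Ra and the root-mean-square deviation Rq, are near one-liners in numpy (a minimal sketch, assuming y holds sampled profile heights):

import numpy as np

def Ra(y):
    z = np.asarray(y, dtype=float)
    z = z - z.mean()               # deviations from the mean line
    return np.mean(np.abs(z))      # arithmetical mean deviation

def Rq(y):
    z = np.asarray(y, dtype=float)
    z = z - z.mean()
    return np.sqrt(np.mean(z**2))  # root-mean-square deviation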
Image (d) shows a challenge: basically, you want to flatten the shape before doing the calculation. This requires prior knowledge of the type of geometry you want to fit. I found an app, Gwyddion, that can do this in 3D, but it can only interface with Python 2.7, not 3.
If you know which base shape lies underneath (see the sketch after this list):
1) fit the known shape
2) calculate the arc distance between each two points
3) remap the numbers by subtracting 1) from the original data and assigning new coordinates according to 2)
4) perform normal 2D/3D roughness calculations
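For a polynomial base shape, a minimal sketch of those four steps might look like this (the polynomial model and deg are assumptions; roughness is whatever 1D measure you settle on, e.g. the function suggested above):

import numpy as np

def roughness_detrended(x, y, deg=2):
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    # 1) fit the (assumed polynomial) base shape
    base = np.polyval(np.polyfit(x, y, deg), x)
    # 2) arc distance along the fitted base, used as the new abscissa
    s = np.concatenate(([0.0], np.cumsum(np.hypot(np.diff(x), np.diff(base)))))
    # 3) remap: residual heights after subtracting the base shape
    resid = y - base
    # 4) normal roughness calculation on the flattened profile
    return roughness(s, resid)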
I'm trying to compute the optical phenomenon called gravitational lensing. In simple words, it occurs when a massive object sits between me, as an observer, and a star or some other class of light source. Because of its huge mass, the light bends, so for us it appears to come from a different location than its real position. There is a particular (and simpler) case where we assume the mass is spherical, so from our perspective it is circular in a 2D plane (or photo).
My idea was to write code that changes the coordinates of a 2D plane as a function of where my light source is. In other words, if I have a spherical light source far from the massive object, its image will show no change, but if it is close to the spherical mass it will change (in fact, if it is exactly behind the massive object, I as the observer will see the so-called Einstein ring).
To compute that, I first write a mapping for this function. I take the approximation a = x + sin(t)/exp(x), b = y + cos(t)/exp(y). When the light source is far from the mass, the exponential correction is approximately zero, and when it is just behind the mass the source coordinates are (0,0), so the image returns (sin(t), cos(t)), the Einstein circle I expected to get.
I coded it this way. First I define my approximation:
import numpy as np
import matplotlib.pyplot as plt

def coso1(x, y):
    t = np.arange(0, 2 * np.pi, .01)
    a = x + np.sin(t) / np.exp(x)
    b = y + np.cos(t) / np.exp(y)
    plt.plot(a, b)
    plt.show()
Then I try to plot that to see how the coordinate map is changing:
x = np.linspace(-10, 10, 10).reshape(-1, 1)
y = np.linspace(-10, 10, 10).reshape(-1, 1)
coso1(x, y)  # coso1 plots internally; plot(coso1(x, y)) would just plot its None return value
And I get this plot.
Notice that it looks that way because of the interval I chose for the x and y coordinate values. If instead I take the "frontier" case where x={-1,0,1} and y={-1,0,1}, it shows how the space is being deformed (or I'm guessing that's what I'm seeing).
I then have a few questions. An easy one, to which I haven't found an easy answer: can I manipulate this transformation interactively (rotate it with the mouse to appreciate the deformation, with a controller for how x or y change)? And the two hard questions: can I plot the contour lines to see exactly how the topography of my map changes at every level of x (suppose I hold y constant)? And if this is the "new" way my map acts, can I use this new coordinate map as a tool, so that if I project any image onto it, the image gets distorted according to the "new" map, analogous to how cameras produce a fish-eye lens effect?
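For the contour question, a minimal sketch of what I have in mind (the grid range, the fixed t, and contouring the displacement magnitude are all assumptions of mine):

import numpy as np
import matplotlib.pyplot as plt

xx, yy = np.meshgrid(np.linspace(-3, 3, 200), np.linspace(-3, 3, 200))
t = 0.0                                 # hold the angular parameter fixed
a = xx + np.sin(t) / np.exp(xx)
b = yy + np.cos(t) / np.exp(yy)
plt.contourf(xx, yy, np.hypot(a - xx, b - yy), levels=20)  # |displacement|
plt.colorbar()
plt.show()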
I have a HEALPix map that I have read in using healpy, however it is in galactic coordinates and I need it in celestial/equatorial coordinates. Does anybody know of a simple way to convert the map?
I have tried using healpy.Rotator to convert from (l,b) to (phi,theta) and then using healpy.ang2pix to reorder the pixels, but the map still looks strange.
It would be great if there were a function similar to Rotator that you could call like map = AnotherRotator(map, coord=['G','C']). Does anybody know of such a function?
Thanks,
Alex
I realise that this was asked ages ago, but I was having the same problem myself this week and found your post. I've found a couple of potential solutions, so I'll share them in case someone else comes upon this and finds them useful.
Solution 1: This sort of depends on the format your data is coming in. Mine came in a (theta, phi) grid.
import numpy as np
import healpy as H

map = <your original map>
nside = <your map resolution, mine=256>
npix = H.nside2npix(nside)
pix = np.arange(npix)
t, p = H.pix2ang(nside, pix)  # theta, phi of every pixel
r = H.Rotator(deg=True, rot=[<THETA ROTATION>, <PHI ROTATION>])
map_rot = np.zeros(npix)
for i in pix:
    trot, prot = r(t[i], p[i])
    tpix = int(trot * 180. / np.pi)  # my data came in a theta, phi grid -- this finds its location there
    ppix = int(prot * 180. / np.pi)
    map_rot[i] = map[ppix, tpix]  # this being the right way round may need double-checking
Solution 2: Haven't quite finished testing this, but just came across it AFTER doing the annoying work above...
map_rot = H.mollview(map,deg=True,rot=[<THETA>,<PHI>], return_projected_map=True)
which gives a 2D numpy array. I'm interested to know how to convert this back into a healpix map...
I found another potential solution, after searching off and on for a few months. I haven't tested it very much yet, so please be careful!
Saul's Solution 2, above, is the key (great suggestion!)
Basically, you combine the functionality of healpy.mollview (gnomview, cartview, and orthview work as well) with that of the reproject_to_healpix function in the reproject package (http://reproject.readthedocs.org/en/stable/).
The resulting map is suitable for my angular scales, but I can't say how accurate the transformation is compared to other methods.
-----Basic Outline----------
Step 1: Read-in the map and make the rectangular array via cartview. As Saul indicated above, this is also one way to do the rotation. If you're just doing a standard rotation/coordinate transformation, then all you need is the coord keyword. From Celestial to Galactic coordinates, set coord = ['C','G']
map_Gal = hp.cartview(map_Cel, coord=['C','G'], return_projected_map=True, xsize=desired_xsize, norm='hist',nest=False)
Step 2: Write a template all-sky FITS header (as in the example below). I wrote mine to have the same average pixel-scale as my desired HEALPix map.
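A hedged sketch of such a template header, built with astropy (every value here is an assumption: match the projection to the healpy view used in Step 1, e.g. plate carree 'CAR' for cartview, and choose the pixel scale to roughly match your target NSIDE):

from astropy.io import fits

target_header = fits.Header()
target_header['NAXIS'] = 2
target_header['NAXIS1'] = 4800           # all-sky width in pixels (assumed)
target_header['NAXIS2'] = 2400           # all-sky height in pixels (assumed)
target_header['CTYPE1'] = 'GLON-CAR'     # Galactic longitude, plate carree
target_header['CRPIX1'] = 2400.5
target_header['CRVAL1'] = 0.0
target_header['CDELT1'] = -0.075         # deg/pixel (assumed)
target_header['CUNIT1'] = 'deg'
target_header['CTYPE2'] = 'GLAT-CAR'     # Galactic latitude, plate carree
target_header['CRPIX2'] = 1200.5
target_header['CRVAL2'] = 0.0
target_header['CDELT2'] = 0.075          # deg/pixel (assumed)
target_header['CUNIT2'] = 'deg'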
Step 3: Use reproject.reproject_to_healpix
reproject includes a function for mapping a "normal" array (or FITS file) into the HEALPix projection. Combine that with the ability to return the array created by healpy.mollview/cartview/orthview/gnomview, and you can rotate a HEALPix map of one coordinate system (Celestial) into another coordinate system (Galactic).
map_Gal_HP, footprint_Gal_HP = rp.reproject_to_healpix((map_Gal, target_header), coord_system_out= 'GALACTIC', nside=nside, nested=False)
It comes down, essentially, to those two commands. However, you'll have to make a template header giving the pixel scale and size corresponding to the intermediary all-sky map you want to make.
-----Full Working Example (iPython notebook format + FITS sample data)------
https://github.com/aaroncnb/healpix_coordtrans_example/tree/master
The code there should run very quickly, but that's because the maps are heavily degraded. I did the same for my NSIDE 1024 and 2048 maps, and it took about an hour.
------Before and After Images------
This function seems to do the trick (reasonably slow, but should be better than the for loop):
import numpy as np
import healpy as hp

def rotate_map(hmap, rot_theta, rot_phi):
    """
    Take hmap (a healpix map array) and return another healpix map array
    which is ordered such that it has been rotated in (theta, phi) by the
    amounts given.
    """
    nside = hp.npix2nside(len(hmap))
    # Get theta, phi for non-rotated map
    t, p = hp.pix2ang(nside, np.arange(hp.nside2npix(nside)))  # theta, phi
    # Define a rotator
    r = hp.Rotator(deg=False, rot=[rot_phi, rot_theta])
    # Get theta, phi under rotated co-ordinates
    trot, prot = r(t, p)
    # Interpolate map onto these co-ordinates
    rot_map = hp.get_interp_val(hmap, trot, prot)
    return rot_map
Using this on data from PyGSM gives the following:
hp.mollview(np.log(rotate_map(gsm.generated_map_data, 0,0)))
Upon rotation of phi:
hp.mollview(np.log(rotate_map(gsm.generated_map_data, 0,np.pi)))
Or rotating theta:
hp.mollview(np.log(rotate_map(gsm.generated_map_data, np.pi/4,0)))
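Coming back to the original Galactic-to-Celestial question: the same interpolation trick works with a Rotator built from the coord keyword instead of explicit angles, and newer healpy releases (1.12+, if I remember right) ship it directly as Rotator.rotate_map_pixel, which is essentially the AnotherRotator function asked for (do double-check the direction convention on your own data):

import healpy as hp

r = hp.Rotator(coord=['G', 'C'])       # Galactic -> Celestial/equatorial
map_equ = r.rotate_map_pixel(map_gal)  # map_gal being your input HEALPix map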
I am perplexed by the API to scipy.ndimage.interpolation.affine_transform, and judging by this issue I'm not the only one. I actually want to do more interesting things with affine_transform than just rotating an image, but a rotation will do for starters. (And yes, I'm well aware of scipy.ndimage.interpolation.rotate, but figuring out how to drive affine_transform is what interests me here.)
When I want to do this sort of thing in systems like OpenGL, I think in terms of computing the transform which applies a 2x2 rotation matrix R about a centre c, and therefore of points p being transformed to (p-c)R+c = pR+c-cR, which gives a c-cR term to use as the translation component of the transform. However, according to the issue above, scipy's affine_transform does "offset first", so we actually need to compute an offset s such that (p-c)R+c = (p+s)R, which with a bit of rearrangement gives s = (c-cR)R', where R' is the inverse of R.
If I plug this into an IPython notebook (pylab mode; the snippet below includes the extra imports it needs):
import scipy.misc
import scipy.ndimage
from pylab import *  # pylab-mode names: array, cos, sin, pi, linalg, imshow, cm, show, float32

img = scipy.misc.lena()
#imshow(img, cmap=cm.gray); show()
centre = 0.5 * array(img.shape)
a = 15.0 * pi / 180.0
rot = array([[cos(a), sin(a)], [-sin(a), cos(a)]])
offset = (centre - centre.dot(rot)).dot(linalg.inv(rot))
rotimg = scipy.ndimage.interpolation.affine_transform(
    img, rot, order=2, offset=offset, cval=0.0, output=float32
)
imshow(rotimg, cmap=cm.gray); show()
I get
which unfortunately isn't rotated about the centre.
So what's the trick I'm missing here?
Once treddy's answer got me a working baseline, I managed to get a better working model of affine_transform. It's not actually as odd as the issue linked in the original question hints.
Basically, each point (coordinate) p in the output image is transformed to pT+s where T and s are the matrix and offset passed to the function.
So if we want point c_out in the output to be mapped to, and sampled from, c_in in the input image, with rotation R and (possibly anisotropic) scaling S, we need pT+s = (p-c_out)RS+c_in, which with T=RS can be rearranged to yield s = c_in - c_out T.
For some reason I then need to pass transform.T to affine_transform, but I'm not going to worry about that too much; it's probably something to do with row coordinates with transforms on the right (assumed above) vs column coordinates with transforms on the left.
So here's a simple test rotating a centred image:
src = scipy.misc.lena()
c_in = 0.5 * array(src.shape)
c_out = array((256.0, 256.0))
for i in xrange(0, 7):
    a = i * 15.0 * pi / 180.0
    transform = array([[cos(a), -sin(a)], [sin(a), cos(a)]])
    offset = c_in - c_out.dot(transform)
    dst = scipy.ndimage.interpolation.affine_transform(
        src, transform.T, order=2, offset=offset, output_shape=(512, 512), cval=0.0, output=float32
    )
    subplot(1, 7, i + 1); axis('off'); imshow(dst, cmap=cm.gray)
show()
Here it is modified for different image sizes:
src = scipy.misc.lena()[::2, ::2]
c_in = 0.5 * array(src.shape)
c_out = array((256.0, 256.0))
for i in xrange(0, 7):
    a = i * 15.0 * pi / 180.0
    transform = array([[cos(a), -sin(a)], [sin(a), cos(a)]])
    offset = c_in - c_out.dot(transform)
    dst = scipy.ndimage.interpolation.affine_transform(
        src, transform.T, order=2, offset=offset, output_shape=(512, 512), cval=0.0, output=float32
    )
    subplot(1, 7, i + 1); axis('off'); imshow(dst, cmap=cm.gray)
show()
And here's a version with anisotropic scaling to compensate for the anisotropic resolution of the source image.
src = scipy.misc.lena()[::2, ::4]
c_in = 0.5 * array(src.shape)
c_out = array((256.0, 256.0))
for i in xrange(0, 7):
    a = i * 15.0 * pi / 180.0
    transform = array([[cos(a), -sin(a)], [sin(a), cos(a)]]).dot(diag([0.5, 0.25]))
    offset = c_in - c_out.dot(transform)
    dst = scipy.ndimage.interpolation.affine_transform(
        src, transform.T, order=2, offset=offset, output_shape=(512, 512), cval=0.0, output=float32
    )
    subplot(1, 7, i + 1); axis('off'); imshow(dst, cmap=cm.gray)
show()
Based on the insight from @timday that matrix and offset are defined in the output coordinate system, I would offer the following reading of the issue, which fits with standard notation in linear algebra and allows one to understand the scaling of images as well. I use T.inv=T^-1 as pseudo-Python notation for the inverse of a matrix, and * for the dot product.
For each point o in the output image, affine_transform finds the corresponding point i in the input image as i=T.inv*o+s, where matrix=T.inv is the inverse of the 2x2 transformation matrix one would use to define the forward affine transformation, and offset=s is the translation defined in the output coordinates. For a pure rotation, T=R=[[cos,-sin],[sin,cos]], and in this special case matrix=T.inv=T.T, which is why @timday still had to apply the transposition (alternatively one could just use the negative angle).
The value for the offset s is found exactly the way @timday described: if c_in is supposed to be positioned, after the affine transformation, at c_out (e.g. the input centre should be placed at the output centre), then c_in=T.inv*c_out+s, or s=c_in-T.inv*c_out (note the conventional mathematical order of the matrix product used here, matrix*vector, which is why @timday, who used the reverse order, didn't need a transposition at this point in his code).
If one wants a scaling S first and then a rotation R, it holds that T=R*S and therefore T.inv=S.inv*R.inv (note the reversed order). For example, if one wants to make the image twice as wide in the column direction ('x'), then S=diag((1, 2)), hence S.inv=diag((1, 0.5)).
src = scipy.misc.lena()
c_in = 0.5 * array(src.shape)
dest_shape = (512, 1028)
c_out = 0.5 * array(dest_shape)
for i in xrange(0, 7):
    a = i * 15.0 * pi / 180.0
    rot = array([[cos(a), -sin(a)], [sin(a), cos(a)]])
    invRot = rot.T
    invScale = diag((1.0, 0.5))
    invTransform = dot(invScale, invRot)
    offset = c_in - dot(invTransform, c_out)
    dest = scipy.ndimage.interpolation.affine_transform(
        src, invTransform, order=2, offset=offset, output_shape=dest_shape, cval=0.0, output=float32
    )
    subplot(1, 7, i + 1); axis('off'); imshow(dest, cmap=cm.gray)
show()
If the image is to be first rotated, then stretched, the order of the dot product needs to be reversed:
invTransform = dot(invRot, invScale)
Just doing some quick and dirty testing, I noticed that taking the negative value of your offset seems to rotate about the centre.
I've been tasked with writing a python based plugin for a graph drawing program that generates an STL model of a graph. A graph being an object made up of vertices and edges, where a vertex is represented by a 3D ball (a tessellated icosahedron), and an edge is represented with a cylinder that connects with two balls at either end. The end result of the 3D model is that it will get dumped out to an STL file for 3D printing. I'm able to generate the 3D models for the balls and cylinders without any issues, but I'm having some issues generating the overall model, and getting the balls and cylinders to connect properly.
My original idea was to create the tessellated icosahedrons at the origin, then translate them out to the positions of the vertices. This works fine. Then, for each edge, I would create a cylinder at the origin, rotate it so that it points in the correct direction, and translate it to the midpoint between the two vertices so that the ends of the cylinder are embedded in the icosahedrons. This is where things go wrong: I'm having some difficulty getting the rotations correct. To calculate the rotations, I do the following:
First, I find the angle between the two points as follows (where source and target are both vertices in the graph, belonging to the edge that I'm currently processing):
deltaX = source.x - target.x
deltaY = source.y - target.y
deltaZ = source.z - target.z
xyAngle = math.atan2(deltaX, deltaY)
xzAngle = math.atan2(deltaX, deltaZ)
yzAngle = math.atan2(deltaY, deltaZ)
The angles being calculated seem reasonable and, as far as I can tell, do actually represent the angle between the vertices. For example, if I have a vertex at (1, 1, 0) and another at (3, 3, 0), the edge connecting them does show up as a 45-degree angle between the two vertices (that, or -135 degrees, depending on which vertex is the source and which is the target).
Once I have the angles calculated, I create a cylinder and rotate it by the angles that have been calculated, like so, using some other classes that I've created:
c = cylinder()
c.createCylinder(edgeThickness, edgeLength)
c.rotateX(-yzAngle)
c.rotateY(xzAngle)
c.rotateZ(-xyAngle)
c.translate(edgePosition.x, edgePosition.y, edgePosition.z)
(Where edgePosition is the midpoint between the two vertices in the graph, edgeThickness is the radius of the cylinder being created, and edgeLength is the distance between the two vertices).
As mentioned, it's the rotating of the cylinders that doesn't work as expected. It seems to do the correct rotation in the x/y plane, but as soon as an edge's vertices differ in all three components (x, y, and z), the rotation fails. Here's an example of a graph whose edges differ in the x and y components, but not in the z component:
And here's the resulting STL file, as seen in Makerware (which is used to send the 3D models to the 3D printer):
(The extra cylinder looking bit in the bottom left is something I've currently left in for testing purposes - a cylinder that points in the direction of the z axis, located at the origin).
If I take that same graph and move the middle vertex out in the z axis, so now all the edges involve angles in all three axis, I get a result something like the following:
As shown in the app:
The resulting STL file, as shown in Makerware:
...and that same model as viewed from the side:
As you can see, the cylinders definitely aren't meeting up with the balls like I thought they would. My question is this: Is my approach to doing this flawed, or is it some small but critical mistake that I'm making somewhere in my rotations? I'm pretty sure it isn't a problem with the rotation functions themselves, as I've been able to independently verify that they work as expected. I also tried creating a rotate function that takes in a yaw, pitch, and roll and does all three at once, and it seemed to generate the same result, like so:
c.rotateYawPitchRoll(xzAngle, -yzAngle, -xyAngle)
So... anyone have any ideas on what I might be doing wrong?
UPDATE: As joojaa pointed out, it was a combination of calculating the correct angles and the order in which they were applied. To get things working, I first calculate the rotation about the x axis, as follows:
zyAngle = math.atan2(deltaVector.z, deltaVector.y)
where deltaVector is the difference between the target and source vectors. This rotation is not yet applied, though! The next step is to calculate the rotation about the y axis, as follows:
angle = vector.angleBetweenVectors(vector(target.x - source.x, target.y - source.y, target.z - source.z), vector(target.x - source.x, target.y - source.y, 0.0))
Once both rotations are calculated, they are applied in the reverse order of how they were calculated: first the y, then the x:
c.rotateY(angle)
c.rotateX(-zyAngle) #... where c is a cylinder object
There still seems to be a few bugs, but this seems to at least work for a simple test case.
Rotations happen in successive order, so the angles affect each other; it is not possible to use an Euler model to rotate about all axes at once. This is why you cannot just calculate the rotations from the initial, static situation. Just imagine turning a cube so that it is standing upright on its corner: yes, the first rotation is 45 degrees, but the second is not, since the cube has already been turned by that time (draw each step of the sequence and see what happens). Spatial rotations aren't trivial.
So you need to rotate by one angle, then recalculate the second angle, and so forth. This is also why your first rotation works right. You only need 2 rotations, unless you are interested in making sure the rotation around the shaft has a certain direction.
I would suggest you use axis angles or matrices instead. Mainly because with axis angles this is trivial: the angle is the arccosine of the dot product between the cylinder's original and target along-tube vectors, and the axis is the cross product of those two. You can then convert to Euler angles if you need to, but you can probably just use the matrix directly. For ideas on the conversions, and on how the rotation could be calculated directly, see transformations.py by Christoph Gohlke. Also see the accompanying C source.
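A minimal sketch of that axis-angle recipe (the concrete vectors are just example values of mine):

import numpy as np

v0 = np.array([0.0, 0.0, 1.0])                 # cylinder's original axis
v1 = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)  # desired along-tube direction
axis = np.cross(v0, v1)                        # rotation axis
angle = np.arccos(np.clip(np.dot(v0, v1), -1.0, 1.0))  # rotation angle (radians)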
I think I need to expand this answer a bit.
There is a really easy way out of this question that sidesteps your problems and many other people's: do not use Euler angle rotations. I've used a lot of brainpower trying to explain Euler rotations for problems that are ultimately solved more easily without them. To justify this I will leave just one reason; if you want more, you can think up several yourself.
The main reason people reach for Euler rotation sequences is probably that they don't understand Euler angles. There are in fact only a handful of situations where they are a good fit. No self-respecting programmer uses Euler rotations to solve this problem. What you do instead is use vector math.
So you have the direction vector from the source to target which is usually calculated:
along = normalize(target-source)
This is simply one of your matrix rows (or columns; the notation is up to the model maker), the one that corresponds to your cylinder's original direction (the rows are just x, y, z, w). Then you need another vector perpendicular to this one: choose an arbitrary vector such as up (or left, if your along vector points close to up), and take the cross product of this up vector with your along vector to get the second row's direction. Finally, put your source as the last row, with 1 in the last column. Done: a fully formed affine matrix describing the cylinder's position. It is much easier to understand, since you can draw the vectors.
There are shorter ways but this one is easy to understand.
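Here is a sketch of that construction in numpy (row-vector convention; the fallback test and helper names are my own choices):

import numpy as np

def cylinder_matrix(source, target, up=(0.0, 0.0, 1.0)):
    # the along-tube direction: the row the cylinder's own axis maps onto
    along = np.asarray(target, dtype=float) - np.asarray(source, dtype=float)
    along /= np.linalg.norm(along)
    up = np.asarray(up, dtype=float)
    if abs(np.dot(along, up)) > 0.99:       # along nearly parallel to up:
        up = np.array([1.0, 0.0, 0.0])      # fall back to a different helper
    side = np.cross(up, along)              # second row, perpendicular to along
    side /= np.linalg.norm(side)
    new_up = np.cross(along, side)          # third row completes the triad
    m = np.eye(4)
    m[0, :3] = side
    m[1, :3] = new_up
    m[2, :3] = along                        # cylinder's original axis -> along
    m[3, :3] = np.asarray(source, dtype=float)  # translation row; last column stays 1
    return m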