Normalising angled Earth magnetic field - python

My team and I are participating in the ESA Astro Pi challenge. Our program will run on the ISS for 3 hours; we will then get our results back and analyze them.
We want to investigate the connection between the magnetic intensity measurements from the Sense HAT magnetometer and the predictions of the World Magnetic Model (WMM), in order to assess how accurate the Sense HAT magnetometer is.
The program will get raw magnetometer data (X, Y and Z) in microteslas from the Sense HAT and calculate the values H and F as described in the British Geological Survey's article (section 2.1). It will then save them to a CSV file, along with a timestamp and the location calculated with ephem.
We will then compare the Z, H and F values from the ISS with those from the WMM, create maps of our data and of the differences (like figures 6, 8 and 10), and investigate how accurate the Sense HAT magnetometer data are.
We want to compare our data with data from the WMM to see how accurate the Sense HAT magnetometer is, but the problem is that the orientation of the magnetometer will always be different. Because of that, our data will always be (very) different from the WMM, so we won't be able to compare them correctly.
We talked with the Astro Pi support team and they suggested we "normalise the angled measurements so it looks like they were taken by a device aligned North/South".
Unfortunately, neither we nor they know how to do this, so they suggested asking this question on Stack Exchange. I asked it on Math Stack Exchange, Physics Stack Exchange and the Raspberry Pi Forums. Unfortunately, it didn't receive any answers there, so I am asking it here again.
How can we do this? We have data for the timestamp, the ISS location (latitude, longitude, elevation), the magnetic field (X, Y and Z) and also the direction from North.
We want to normalise our data so that we can correctly compare them with data from the WMM.
Here is the part of our program that calculates the magnetometer values (it currently receives non-normalised data):
from math import sqrt, atan2, degrees
from sense_hat import SenseHat

sense = SenseHat()
compass = sense.get_compass_raw()
try:
    # Get raw data (axes are swapped because the Sense HAT on the ISS
    # is mounted in a different position)
    # x: northerly intensity
    # y: easterly intensity
    # z: vertical intensity
    x = float(compass['z'])
    y = float(compass['y'])
    z = float(compass['x'])
except (ValueError, KeyError) as err:
    # Write error to log (excluded from this snippet)
    pass
try:
    # h: horizontal intensity
    # f: total intensity
    # d: declination
    # i: inclination
    h = sqrt(x ** 2 + y ** 2)
    f = sqrt(h ** 2 + z ** 2)
    d = degrees(atan2(y, x))  # atan2 avoids division by zero and picks the right quadrant
    i = degrees(atan2(z, h))
except (TypeError, ValueError, ZeroDivisionError) as err:
    # Write error to log (excluded from this snippet)
    pass
There is also a simple simulator available with our code: https://trinket.io/library/trinkets/cc87813ce7
Here is part of an email from the Astro Pi team about the location and position of the magnetometer:
Z is going down through the middle of the Sense Hat.
X runs between the USB ports and SD card slot.
Y runs across from the HDMI port to the 40 way pin header.
On the ISS the AstroPi orientation is that the Ethernet + USB ports face the deck and the SD card slot is towards the sky.
So that's basically a rotation around the Y axis from flat: you keep the Y axis the same and swap Z and X around.
It can help to look at the Google Street View of the interior of the ISS Columbus module to get a better idea of how the AstroPi is positioned:
https://www.google.com/streetview/#international-space-station/columbus-research-laboratory
If you pan the camera down and to the right, you'll see a green light - that's the AstroPi. The direction of travel for the whole space station is towards the inflatable Earth ball you can see on the left.
So, broadly speaking, the SD card slot points towards the zenith, i.e. away from the centre of the Earth (so that is the X axis).
The LED matrix is facing the direction of travel of the space station (the Z axis).
Because of the orbital path of the ISS the Z and Y axes will continually change direction relative to the poles as it moves around the Earth.
So, I am guessing you want to normalise the angled measurements so it looks like they were taken by a device aligned North/South?

I think you need to create a local reference coordinate system similar to NEH (North, East, Height/Altitude/Up), something like the one in
Representing Points on a Circular Radar Math approach.
It is commonly used in aviation as a reference frame (heading is derived from it), so your reference frame is computed from your geo-location, with its axes pointing North, East and Up.
Now the problem is: what do "aligned North/South" and "normalising" actually mean?
If the reference device measures just a projection, then you would need to do something like this:
dot(measured_vector, reference_unit_direction)
where the reference direction would be the North direction as a unit vector.
If the reference device measures the full 3D vector too, then you need to transform both the reference and the measured data into the same coordinate system. That is done by using
transform matrices
So a simple matrix * vector multiplication will do... Only then compute the values H, F, Z, which I do not know the meaning of (too lazy to go through the papers); I would expect E, H or B vectors instead.
However, if you do not have the geo-location at the moment of measurement, then you only have the North direction with respect to the ISS in the form of Euler angles, so you cannot construct a 3D reference frame at all (unless you have 2 known vectors instead of just one, like Up). In such a case you need to go with option 1, projection (using the dot product and the North direction vector), so you will handle just scalar values instead of 3D vectors afterwards.
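A minimal numpy sketch of option 1, with made-up numbers (in practice B would come from the magnetometer and north from your attitude data):
import numpy as np

# Hypothetical measured field in sensor coordinates (microtesla)
B = np.array([12.0, -30.0, 18.0])
# Hypothetical direction towards North, expressed in the same sensor frame
north = np.array([0.4, 0.9, 0.1])
north /= np.linalg.norm(north)           # make sure it is a unit vector
projection = float(np.dot(B, north))     # scalar component of B along North
The same scalar would then be computed from the WMM vector before comparing the two.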
[Edit1]
From your link:
The geomagnetic field vector, B, is described by the orthogonal
components X (northerly intensity), Y (easterly intensity) and Z
(vertical intensity, positive downwards);
This is not my field of expertise, so I might be wrong here, but this is how I understand it:
B(Bx,By,Bz) - magnetic field vector
a(ax,ay,az) - acceleration vector
Now F is the magnitude of B, so it is invariant under rotation:
F = |B| = sqrt( Bx*Bx + By*By + Bz*Bz )
You need to compute the X, Y, Z values of B in the NED reference frame (North, East, Down), so you need the basis vectors first:
Down = a/|a| // gravity points down
North = B/|B| // north is close to the B direction
East = cross(Down,North) // East is perpendicular to Down and North
North = cross(East,Down) // North is perpendicular to Down and East; this projects North onto the horizontal plane
You should render them to visually check that they point in the correct directions; if not, negate them by reordering the cross operands (I might have the order wrong, as I am used to using an Up vector instead). Now just convert B to NED:
X = dot(North,B)
Y = dot(East,B)
Z = dot(Down,B)
And now you can compute the H
H = sqrt( X*X +Y*Y )
You will find the vector math needed for this in the transform matrix link above.
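A minimal numpy sketch of the construction above, with made-up readings (on the Sense HAT they would come from get_compass_raw() and get_accelerometer_raw(); note the caveat about accelerations just below):
import numpy as np

# Hypothetical raw readings in the sensor frame
B = np.array([12.0, -30.0, 18.0])    # magnetic field (microtesla)
a = np.array([0.05, 0.10, -0.99])    # accelerometer reading (g)

F = np.linalg.norm(B)                # total intensity, rotation-invariant

down = a / np.linalg.norm(a)         # gravity points down
north = B / np.linalg.norm(B)        # B points roughly north (plus a vertical part)
east = np.cross(down, north)
east /= np.linalg.norm(east)         # perpendicular to Down and North
north = np.cross(east, down)         # forced into the horizontal plane

X = np.dot(north, B)                 # northerly intensity
Y = np.dot(east, B)                  # easterly intensity
Z = np.dot(down, B)                  # vertical intensity, positive downwards
H = np.hypot(X, Y)                   # horizontal intensity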
Beware: this will work only if no accelerations are present (the sensor is not on a robotic arm during its operation, the ISS is not doing a burn, ...). Otherwise you need to obtain the NED frame differently (for example from onboard systems).
If this does not work correctly, then you can compute NED from your ISS position, but for that you would need to know the exact orientation and displacement of the sensor with respect to the simulation model that provides your location. I do not know what rotations the ISS does, so I would not touch that subject unless as a last resort.
I am afraid I will not have time for coding for a while... Anyway, coding without sample input data, coordinate system explanations and all the input/output variables is insanity... a single negated axis would invalidate the whole thing, and there are a lot of ambiguities along the way; to cover all of them you would end up with many, many versions to try.
Apps should be built up incrementally, but I am afraid that without access to the simulation or the real hardware that is not possible. And there is a whole bunch of things that could go wrong, making even simple programs a magnitude harder to code... I would first check F, as it does not require any "normalisation", to see whether the results are off or not. If they are off, it might suggest different units or who knows what...

Related

Taking the coordinates of an object and creating a formula to drag an arrow

I am using OpenCV to triangulate the position of an object, and am trying to create a formula to pass the coordinates I obtain through, in order to drag a pull arrow that casts a fishing rod. I tried polynomial regression to a very high degree, but it is still inaccurate because the regression cannot map an (x,y) input to an (x,y) output, only an x input to an x output and so on. I have attached screenshots below for clarity, alongside the formulas obtained from the regression. Any help/ideas/suggestions would be appreciated, thanks.
Edit:
The x-y coordinates are organized from the landing position to the position the arrow was pulled to for the bobber to land there. This is because the fishing blob is the input, and the arrow-pull end location is derived from the blob location. I am using OpenCV to obtain the x,y coordinates, which I believe are just coordinates on the 2D screen.
The avatar position is locked, and the button to cast the rod is located at an absolute position of (957,748).
The camera position is locked with no rotation or movement.
I believe that the angle the rod is cast at is likely a 1:1 opposite of where it is pulled to, e.g. if the rod is pulled to 225 degrees it will cast at 45 degrees. I am not 100% sure, but I think the strength is linear; I used linear regression partly because I was not sure of this. There is no altitude difference/slope/wind that affects the cast. The only factor affecting the landing position is where the arrow is dragged to. The arrow will not drag past the 180/360 degree position sideways (relative to the cast button) and will simply lock the cast angle in the x direction if held there.
The x-y data was collected with a simple program that moves the mouse to the same position (957,748) and drags the arrow with different strengths/positions, to create a line of best fit for a general casting formula. The triang_x and triang_y functions below are what the x and y coordinates were run through, respectively, to triangulate the ending drag coordinate for the arrow. This does not work very well, because matching x-to-x and y-to-y doesn't account for both x and y in each formula.
Left column is the fishing spot coordinates; right column is where the arrow is dragged to in order to hit that fishing spot.
(1133,359) to (890,890)
(858,334) to (886, 900)
(755,579) to (1012,811)
(1013,255) to (933,934)
(1166,469) to (885,855)
(1344,654) to (855,794)
(804,260) to (1024,939)
(1288,287) to (822,918)
(624,422) to (1075,869)
(981,460) to (949,851)
(944,203) to (963,957)
(829,367) to (1005,887)
(1129,259) to (885,932)
(773,219) to (1036,949)
(1052,314) to (919,908)
(958,662) to (955,782)
(1448,361) to (775,906)
(1566,492) to (751,837)
(1275,703) to (859,764)
(1210,280) to (852,926)
(668,513) to (1050,836)
(830,243) to (1011,939)
(688,654) to (1022,792)
(635,437) to (1072,864)
(911,252) to (976,935)
(1499,542) to (785,825)
(793,452) to (1017,860)
(1309,354) to (824,891)
(1383,522) to (817,838)
(1262,712) to (867,758)
(927,225) to (980,983)
(644,360) to (1097,919)
(1307,648) to (862,798)
(1321,296) to (812,913)
(798,212) to (1026,952)
(1315,460) to (836,854)
(700,597) to (1028,809)
(868,573) to (981,811)
(1561,497) to (758,838)
(1172,588) to (896,816)
(Screenshot: shows the bot actions taken within the function and how the formula is used.)
import numpy as np

coeffs_x = np.float64([
-7.9517089428836911e+005,
4.1678460255861210e+003,
-7.5075555590709371e+000,
4.2001528427460097e-003,
2.3767929866943760e-006,
-4.7841176483548307e-009,
6.1781765539212100e-012,
-5.2769581174002655e-015,
-4.3548777375857698e-019,
2.5342561455214514e-021,
-1.4853535063513160e-024,
1.5268121610772846e-027,
-2.9667978919426497e-031,
-9.5670287721717018e-035,
-2.0270490020866057e-037,
-2.8248895597371365e-040,
-4.6436110892973750e-044,
6.7719507722602512e-047,
7.1944028726480678e-050,
1.2976299392064562e-052,
7.3188205383162127e-056,
-6.3972284918241943e-059,
-4.1991571617797430e-062,
2.5577340340980386e-066,
-4.3382682133956009e-068,
1.5534384486024757e-071,
5.1736875087411699e-075,
7.8137258396620031e-078,
2.6423817496804479e-081,
2.5418438527686641e-084,
-2.8489136942892384e-087,
-2.3969101111450846e-091,
-3.3499890707855620e-094,
-1.4462592756075361e-096,
6.8375394909274851e-100,
-2.4083095685910846e-103,
7.0453288171977301e-106,
-2.8342463921987051e-109
])
triang_x = np.polynomial.Polynomial(coeffs_x)
coeffs_y = np.float64([
2.6215449742035207e+005,
-5.7778572049616614e+003,
5.1995066291482431e+001,
-2.3696608508824663e-001,
5.2377319234985116e-004,
-2.5063316505492962e-007,
-9.2022083686040928e-010,
3.8639053124052189e-013,
2.7895763914453325e-015,
7.3703786336356152e-019,
-1.3411964395287408e-020,
1.5532055573746500e-023,
-6.9719956967963252e-027,
1.9573598517734802e-029,
-3.3847482160483597e-032,
-5.5368209294319872e-035,
7.1463648457003723e-038,
4.6713369979545088e-040,
-7.5070219026265008e-043,
-4.5089676791698693e-047,
-3.2970870269153785e-049,
1.6283636917056585e-051,
-1.4312555782661719e-054,
7.8463441723355399e-058,
1.9439588820918080e-060,
2.1292310369635749e-063,
-1.4191866473449773e-065,
-2.1353539347524828e-070,
2.5876946863828411e-071,
-1.6182477348921458e-074
])
triang_y = np.polynomial.Polynomial(coeffs_y)
First you need to clarify a few things:
the x-y data
Is it the position of the object you want to hit, or the position you actually hit when specific input data were used (input which is missing in that case)? In what coordinate system?
what position is your avatar?
how is the view defined?
Is it fully 3D with 6DOF, or just fixed (no rotation or movement) relative to the avatar?
what is the physics/logic of your rod casting?
Is it an angle (one or two) plus strength? Is the strength linear with distance? Does throwing account for the altitude difference between avatar and target? Does ground elevation (slope) play a role? Are there any other factors, like wind or type of rod?
You shared the x-y data, but what do you want to correlate it against or derive a formula for? On its own it does not make sense; you have obviously forgotten to add something, like the conditions under which each position was taken.
I would solve this as follows (no further details until you clarify the points above):
transform the targets' x-y into a player-relative coordinate system aligned to the ground
compute the azimuth angle (geometrically)
a simple atan2(y,x) will do, but you need to take your coordinate system conventions into account.
compute the elevation angle and strength (geometrically)
simple ballistic physics should apply; however, it depends on the physics used by the game or whatever you are writing this for.
adjust for additional factors
for example, wind can slightly change your angle and strength.
If you have real physics and data, you can do #3 and #4 at the same time. See this similar question:
C++ intersection time of 2 bullets
[Edit1] putting your data into your image
OK, your coordinates obviously do not match your screenshot, because the image was scaled. After some intuition I rescaled it and drew your data into the image in C++ so they match again; here is the result:
I converted your Cartesian points:
int ava_x=957,ava_y=748; // avatar
int data[]= // target(x1,y1) , drag(x0,y0)
{
1133,359,890,890,
858,334,886,900,
755,579,1012,811,
1013,255,933,934,
1166,469,885,855,
1344,654,855,794,
804,260,1024,939,
1288,287,822,918,
624,422,1075,869,
981,460,949,851,
944,203,963,957,
829,367,1005,887,
1129,259,885,932,
773,219,1036,949,
1052,314,919,908,
958,662,955,782,
1448,361,775,906,
1566,492,751,837,
1275,703,859,764,
1210,280,852,926,
668,513,1050,836,
830,243,1011,939,
688,654,1022,792,
635,437,1072,864,
911,252,976,935,
1499,542,785,825,
793,452,1017,860,
1309,354,824,891,
1383,522,817,838,
1262,712,867,758,
927,225,980,983,
644,360,1097,919,
1307,648,862,798,
1321,296,812,913,
798,212,1026,952,
1315,460,836,854,
700,597,1028,809,
868,573,981,811,
1561,497,758,838,
1172,588,896,816,
};
into polar coordinates relative to (ava_x, ava_y), using atan2 and the 2D distance formula, and simply printed the angular difference +180deg and the ratio between line lengths (the yellow text at the left of the screenshot: first the ordinal number, then the angle difference [deg], then the ratio between line lengths).
As you can see, the angle difference is within +/-10.6deg and the length ratio is within <2.5, 3.6>, probably because of inaccuracies in the OpenCV detections and some randomness in the fishing rod casts from the game logic itself.
As you can see, polar coordinates are best for this. For starters, you could simply do this:
// wanted target in polar (obtained by CV)
x = target_x-ava_x;
y = target_y-ava_y;
a = atan2(y,x);
l = sqrt((x*x)+(y*y));
// aiming drag in polar
a += 3.1415926535897932384626433832795; // +=180 deg
l /= 3.0; // "avg" ratio between line sizes
// aiming drag in cartesian
aim_x = ava_x + l*cos(a);
aim_y = ava_y + l*sin(a);
You can optimize it to:
aim_x = ava_x - ((target_x-ava_x)/3);
aim_y = ava_y - ((target_y-ava_y)/3);
Now, to improve precision, you could measure the dependency of the line ratio on the line size (it might not be linear); also, the angular difference might be bigger for longer lines...
Also note that the second cast (ordinal 2) is probably a bug (wrongly detected x,y by CV): if you render the two lines you will see they do not match, so you should discard that sample from the dataset.
Also note that I code in C++, so my trigonometric functions use radians (Python's math module uses radians too), and the equations might need some additional tweaking for your coordinate system (negate y?).
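For completeness, a direct Python transliteration of the aiming snippet above (a sketch; the avatar position and the first target pair are taken from the data in the question):
import math

ava_x, ava_y = 957, 748                # avatar / cast button position
target_x, target_y = 1133, 359         # first fishing spot from the data above

# wanted target in polar (relative to the avatar)
x = target_x - ava_x
y = target_y - ava_y
a = math.atan2(y, x)
l = math.hypot(x, y)

# aiming drag in polar: opposite direction, roughly a third of the length
a += math.pi
l /= 3.0

# aiming drag back in Cartesian
aim_x = ava_x + l * math.cos(a)
aim_y = ava_y + l * math.sin(a)
print(round(aim_x), round(aim_y))      # ~ (898, 878); the recorded drag was (890, 890)
Python's math.atan2, math.sin and math.cos already work in radians, so no conversion is needed.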

Changing the coordinate map in function of its coordinates (gravitational lensing)

I'm trying to compute the optical phenomenon called gravitational lensing. In simple words, it occurs when a massive object sits between me as an observer and a star or some other class of light source. Because of its huge mass, the light bends, so for us it apparently comes from a location other than its real position. There is a particular (and simpler) case where we suppose the mass is spherical, so from our perspective it is circular in a 2D plane (or photo).
My idea was to write code that changes the coordinates of a 2D plane as a function of where my light source is. In other words, if I have a spherical light source that is far from my massive object, its image will not change, but if it is close to the spherical mass it will change (in fact, if it is exactly behind the massive object, I as an observer will see the so-called Einstein ring).
To compute that, I first wrote a mapping for this function. I take the approximation a = x + sin(t)/exp(x), b = y + cos(t)/exp(y). So when the light source is far from the mass, the exponential terms will be approximately zero, and if it is just behind the mass the source coordinates will be (0,0), so the mapping will return (sin(t), cos(t)), the Einstein circle I expected to get.
I coded it this way; first I define my approximation:
from numpy import *
import matplotlib.pyplot as plt

def coso1(x, y):
    t = arange(0, 2*pi, .01)
    a = x + sin(t)/exp(x)
    b = y + cos(t)/exp(y)
    plt.plot(a, b)
    plt.show()
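As a quick check of the limit described above: with the source exactly behind the mass, exp(0) = 1 and the mapping collapses to the unit circle, i.e. the Einstein ring:
coso1(0, 0)   # a = sin(t), b = cos(t): the Einstein circle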
Then I try to plot it to see how the coordinate map changes:
x = linspace(-10, 10, 10)
y = linspace(-10, 10, 10)
y = y.reshape(y.size, 1)
x = x.reshape(x.size, 1)
coso1(x, y)  # coso1 plots and shows itself; it returns None, so don't wrap it in plot()
And I get this plot:
[Graphic]
Notice that it looks that way because of the interval I chose for the x and y values. If I instead take the "frontier" case where x={-1,0,1} and y={-1,0,1}, it shows how the space is being deformed (or at least I'm guessing that's what I'm seeing).
I then have a few questions. An easy one that I haven't found an easy answer to: can I manipulate this transformation interactively (rotate it with the mouse to appreciate the deformation, a controller for how x or y change)? And the two hard ones: can I plot the contour lines to see exactly how the topography of my map changes at every level of x (suppose I hold y constant)? And: if this is my "new" coordinate map, can I use it as a tool so that any image I project through it is distorted according to this "new" map, analogous to how cameras produce the fish-eye lens effect?

Difficulties with RA/Dec and Alt/Azi conversions with pyEphem

I'm trying to go from alt/azi to RA/Dec for a point on the sky at a fixed location, trying out PyEphem. I've tried a couple of different ways and I get sort of the right answer, within a couple of degrees, but I'm expecting better and I can't figure out where the problem lies.
I've been using Canopus as a test case (I'm not after stars specifically, so I can't use the built-in catalogue). So in my case, I know that at
stn = ephem.Observer()
# yalgoo station, wa
stn.long = '116.6806'
stn.lat = '-28.3403'
stn.elevation = 328.0
stn.pressure = 0 # no refraction correction.
stn.epoch = ephem.J2000
stn.date = '2014/12/15 14:32:09' #UTC
Stellarium, cross-checked with other web sites, tells me Canopus should be at
azi, alt = '138:53:5.1', '56:09:52.6', or in equatorial coordinates RA 6h 23m 57.09s / Dec -52deg 41' 44.6"
but trying:
cano = ephem.FixedBody()
cano._ra = '6:23:57.1'
cano._dec = '-52:41:44.0'
cano._epoch = ephem.J2000
cano.compute(stn)
print(cano.az, cano.alt)
>>> (53:22:44.0, 142:08:03.0)
about 3 degrees out. I've also tried the reverse:
ra, dec = stn.radec_of('138:53:5.1', '56:09:52.6')
>>> (6:13:18.52, -49:46:09.5)
where I'm expecting 6:23, not 6:13. Turning on refraction correction makes a small difference, but not enough, and I've always understood aberration and nutation to be much smaller effects than this offset.
As a follow-up, I've tried manual calculations based on 'Practical Astronomy with your calculator'; so for dec:
import math

LAT = math.radians(-28.340335)
LON = math.radians(116.680621667)
ALT = math.radians(56.16461)
AZ = math.radians(138.88475)
sinDEC = (math.sin(LAT)*math.sin(ALT)
          + math.cos(LAT)*math.cos(ALT)*math.cos(AZ))
DEC = math.asin(sinDEC)
DEC_deg = math.degrees(DEC)
print('dec = ', DEC_deg)
>>> ('dec = ', -49.776032754148986)
again, quite different from the expected -52:41:44.6, but reasonably close to PyEphem - so now I'm thoroughly confused! I now suspect the problem is my understanding rather than PyEphem. Could someone enlighten me about the correct way to do RA-Dec/Alt-Azi conversions, and why things are not lining up?
First some notes:
Atmospheric refraction and the relative speed between observer and object
have a maximal error (near the horizon) of up to 0.6 degrees, which is nowhere near your error.
how can altitude be over 90 degrees?
You have swapped the azimuth and altitude data.
I put your observer data into my program and the result was similar to yours,
but I searched for that star visually instead of entering its coordinates. The result was also about 3-4 degrees off in the RA axis:
RA = 6.4 h, Dec = -52.6 deg
azi = 142.4 deg, alt = 53.9 deg
My engine is in C++, using Kepler's equation.
Now, what can be wrong:
my stellar catalog could be wrongly converted
rotated with some margin of error, but I strongly doubt it is 3 degrees. Also, perspective transforms can add some error while rendering at 750 AU distance from the observer. I have never tested the southern sky (it is not visible from my place).
we are using a different Earth reference frame than the data you are comparing to
I found out that some sites like NASA Horizons use a different reference frame which does not correspond with my observations. Look here:
calculate the time when the sun is X degrees below/above the Horizon
at the start of that answer is a link to 2 sites with different reference frames; when you compare their results, they are off. The second one corresponds with my observations; the rest deals (source code included) with a Kepler's-equation-based Solar system simulation. The other sub-links are also worth looking into.
I could have a bug in my simulation/data
I have referenced my data against this engine, which could partially hide computation errors from my observer position, so take all of the text above with that in mind.
you could be using a wrong time / Julian date to sidereal time conversion
if your time is off, then the angles will not match...
How to resolve this?
Pick up your telescope, set up an equatorial coordinate system/mount on it, and measure RA/Dec and Azi/Alt for a known (distant) object in reality, then compare with the computed positions. Only this way can you decide which value is good or wrong (for the reference frame you are using). Do this on a star, not a planet! Do this at high altitude angles, not near the horizon!
How to transform between azimuthal and equatorial coordinates
I compute a transform matrix Earth representing the Earth's coordinate system (upper right in the figure) within the heliocentric coordinate system used as the global one (left), then I compute another matrix NEH representing the observer on the Earth's surface (North, East, High/Altitude; lower right).
After this it is just a matter of matrix and vector multiplications and conversions between Cartesian and spherical coordinate systems; look here:
Representing Points on a Circular Radar Math approach
for more insight into azimuthal coordinates. If you use just a simple equation like the one in your example, then you do not account for many things... The Earth's position is computed by Kepler's equation; the rotation is given by the daily rotation, with nutation and precession included.
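For reference, here is a minimal Python sketch of the simple spherical-trigonometry conversion the question attempts, completed to return both RA and Dec using the 'Practical Astronomy with your calculator' formulas. As noted above, it ignores refraction, nutation, precession and so on, and the local sidereal time has to come from elsewhere (e.g. PyEphem's Observer.sidereal_time()):
import math

def altaz_to_radec(alt_deg, az_deg, lat_deg, lst_hours):
    # Azimuth measured from North, increasing towards East; angles in degrees.
    alt, az, lat = (math.radians(v) for v in (alt_deg, az_deg, lat_deg))
    sin_dec = math.sin(alt)*math.sin(lat) + math.cos(alt)*math.cos(lat)*math.cos(az)
    dec = math.asin(sin_dec)
    cos_H = (math.sin(alt) - math.sin(lat)*sin_dec) / (math.cos(lat)*math.cos(dec))
    H = math.acos(max(-1.0, min(1.0, cos_H)))  # hour angle in radians
    if math.sin(az) > 0:                       # object east of the meridian
        H = 2*math.pi - H
    ra_hours = (lst_hours - math.degrees(H)/15.0) % 24.0
    return ra_hours, math.degrees(dec)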
I use 64-bit floating point values, which can create rounding errors, but not that large...
I use the geometric North Pole as the observer reference (this could add some serious error near the poles).
The biggest thing that can affect this is the speed of light, but that matters for near-Earth 'moving' objects like planets, not stars (except the Sun), because their computed position becomes visible only after some time... For example, the Sun-Earth distance is about 8 light minutes, so we see the Sun where it was 8 minutes ago. If the ephemerides data is geometric only (does not account for this), then this can lead to large errors if not computed properly.
Newer ephemerides models use gravity integration instead of Kepler, so their data must be geometric, and the final output is then corrected by the time shift...

Plotting the path through a 2d slice of an electric field

I have a piece of code that randomly scatters a number of point charges on the xy plane at z=0, and then determines the electric field above the charges at z=1. I won't include the code, as it is long and fairly cumbersome, but for visualisation purposes here is a plot of a random assortment of 4 charges:
The vector field is stored as two 2D arrays, Ex and Ey. What I need to do next is plot the path of a negative charge randomly thrown into the plot above. I have done some random walks in the past, and I think the syntax would have to go something like this (in pseudocode for now):
Randomly generate an x and y coordinate within the relevant limits
for i in range(1, number_of_steps):
    walkpathx[i] = walkpathx[i-1] + Ex_at_that_coordinate
    walkpathy[i] = walkpathy[i-1] + Ey_at_that_coordinate
The issue I'm having is getting Ex_at_that_coordinate: the random initial position will give me an arbitrary point, but my vector field is discrete, so I'm not sure how to map my coordinates onto the E-field at that point. Any help would be much appreciated, and apologies if I've formatted anything wrong. Some important points: we ignore any z components and imagine the charge is stuck in the xy plane at z=1, and we ignore inertia and mass effects; it's literally just meant to be a walk following the field, starting at a random point.
For a given configuration of the charges and a random initial position of the particle not on the grid, the simplest choice is to compute the field ex novo:
ex, ey = E(x, y, P) # where P = [(x0, y0, c0), (x1, y1, c1), ...]
If you're not allowed to do this, you have to resort to interpolation. Which kind of interpolation? In my opinion, that is a matter that does not pertain to SO.
In general you have to choose the four adjacent points in the grid, pick the field values there and suitably combine them, as sketched below. If you're in trouble with that, edit your question accordingly (or start a new one).
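As an illustration of the 'four adjacent points' idea, here is a minimal bilinear-interpolation sketch; it assumes a rectilinear grid with Ex and Ey indexed as F[i, j] (i along x) and 1D coordinate arrays xs, ys, so adjust to your own array layout:
import numpy as np

def field_at(x, y, Ex, Ey, xs, ys):
    # Locate the grid cell containing (x, y)
    i = int(np.clip(np.searchsorted(xs, x) - 1, 0, len(xs) - 2))
    j = int(np.clip(np.searchsorted(ys, y) - 1, 0, len(ys) - 2))
    # Fractional position inside the cell
    tx = (x - xs[i]) / (xs[i+1] - xs[i])
    ty = (y - ys[j]) / (ys[j+1] - ys[j])
    def lerp2(F):
        # Weighted combination of the four surrounding grid values
        return ((1-tx)*(1-ty)*F[i, j] + tx*(1-ty)*F[i+1, j]
                + (1-tx)*ty*F[i, j+1] + tx*ty*F[i+1, j+1])
    return lerp2(Ex), lerp2(Ey)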

3D rotations to connect balls and cylinders

I've been tasked with writing a python based plugin for a graph drawing program that generates an STL model of a graph. A graph being an object made up of vertices and edges, where a vertex is represented by a 3D ball (a tessellated icosahedron), and an edge is represented with a cylinder that connects with two balls at either end. The end result of the 3D model is that it will get dumped out to an STL file for 3D printing. I'm able to generate the 3D models for the balls and cylinders without any issues, but I'm having some issues generating the overall model, and getting the balls and cylinders to connect properly.
My original idea was to create tessellated icosahedrons at the origin, then translate them out to the positions of the vertices. This works fine. Then, for each edge, I would create a cylinder at the origin, rotate it to the correct angle so that it points in the correct direction, then translate it to the midpoint between the two vertices so that the ends of the cylinder are embedded in the icosahedrons. This is where things go wrong. I'm having some difficulty getting the rotations correct. To calculate the rotations, I do the following:
First, I find the angle between the two points as follows (where source and target are both vertices in the graph, belonging to the edge that I'm currently processing):
deltaX = source.x - target.x
deltaY = source.y - target.y
deltaZ = source.z - target.z
xyAngle = math.atan2(deltaX, deltaY)
xzAngle = math.atan2(deltaX, deltaZ)
yzAngle = math.atan2(deltaY, deltaZ)
The angles being calculated seem reasonable and, as far as I can tell, do actually represent the angle between the vertices. For example, if I have a vertex at (1, 1, 0) and another at (3, 3, 0), the edge connecting them does show up as a 45 degree angle between the two vertices (that, or -135 degrees, depending on which vertex is the source and which is the target).
Once I have the angles calculated, I create a cylinder and rotate it by the angles that have been calculated, like so, using some other classes that I've created:
c = cylinder()
c.createCylinder(edgeThickness, edgeLength)
c.rotateX(-yzAngle)
c.rotateY(xzAngle)
c.rotateZ(-xyAngle)
c.translate(edgePosition.x, edgePosition.y, edgePosition.z)
(Where edgePosition is the midpoint between the two vertices in the graph, edgeThickness is the radius of the cylinder being created, and edgeLength is the distance between the two vertices).
As mentioned, it's the rotation of the cylinders that doesn't work as expected. It seems to do the correct rotation in the x/y plane, but as soon as an edge has vertices that differ in all three components (x, y and z), the rotation fails. Here's an example of a graph whose edges differ in the x and y components, but not in the z component:
And here's the resulting STL file, as seen in Makerware (which is used to send the 3D models to the 3D printer):
(The extra cylinder looking bit in the bottom left is something I've currently left in for testing purposes - a cylinder that points in the direction of the z axis, located at the origin).
If I take that same graph and move the middle vertex out in the z axis, so now all the edges involve angles in all three axis, I get a result something like the following:
As show in the app:
The resulting STL file, as show in Makerware:
...and that same model as viewed from the side:
As you can see, the cylinders definitely aren't meeting up with the balls as I thought they would. My question is this: is my approach flawed, or is there some small but critical mistake I'm making somewhere in my rotations? I'm pretty sure it isn't a problem with the rotation functions themselves, as I've been able to independently verify that they work as expected. I also tried creating a rotate function that takes a yaw, pitch and roll and does all three at once, and it seemed to generate the same result:
c.rotateYawPitchRoll(xzAngle, -yzAngle, -xyAngle)
So... anyone have any ideas on what I might be doing wrong?
UPDATE: As joojaa pointed out, it was a combination of calculating the correct angles and the order in which they were applied. To get things working, I first calculate the rotation about the x axis, as follows:
zyAngle = math.atan2(deltaVector.z, deltaVector.y)
where deltaVector is the difference between the target and source vectors. This rotation is not yet applied, though! The next step is to calculate the rotation about the y axis, as follows:
angle = vector.angleBetweenVectors(vector(target.x - source.x, target.y - source.y, target.z - source.z), vector(target.x - source.x, target.y - source.y, 0.0))
Once both rotations are calculated, they are applied... in the reverse order: first the y, then the x:
c.rotateY(angle)
c.rotateX(-zyAngle) #... where c is a cylinder object
There still seem to be a few bugs, but this at least works for a simple test case.
Rotations happen in successive order, so the angles affect each other. It is not possible to use an Euler model to rotate to the target in one go, which is why you cannot just calculate the rotations from the initial static situation. Just imagine turning a cube so that it stands upright on its corner: yes, the first rotation is 45 degrees, but the second is not, since the cube has already been turned by that time (draw each step of the sequence and see what happens). Spatial rotations aren't trivial.
So you need to rotate by one angle, then recalculate the second angle, and so forth. This is also why your first rotation works correctly. You only need 2 rotations, unless you are interested in making sure the rotation around the shaft has a certain direction.
I would suggest you use axis-angle or matrices instead. Mainly because with axis-angle this is trivial: the angle is the acos of the dot product between the cylinder's original direction and the target direction, and the axis is the cross product of those two vectors. You can then convert to Euler angles if you need to, but you can probably just use the matrix directly. For ideas on the conversions and on how the rotation can be calculated directly, see transformations.py by Christoph Gohlke. Also see the accompanying C source.
I think I need to expand this answer a bit.
There is a really easy way out of this question that sidesteps your problems, and many other people's. The answer is: do not use Euler angle rotations. I've used a lot of brainpower trying to explain Euler rotations for problems that are ultimately solved more easily without them. To justify this, I will leave just one reason; if you want more, think some up yourself.
The most common reason to use Euler rotation sequences is that you probably don't understand Euler angles. There are in fact only a handful of situations where they are good. No self-respecting programmer uses Euler rotations to solve this kind of issue. What you do instead is use vector math.
So you have the direction vector from the source to the target, which is usually calculated as:
along = normalize(target - source)
This is simply one of your matrix rows (or columns; the notation is up to the model maker), the one that corresponds to the cylinder's original direction (the rows are just x, y, z, w). Then you need another vector perpendicular to this one: choose an arbitrary vector like up (or left, if your along points close to up). Cross this up vector with your along for the second row direction, and finally put your source as the last row, with 1 in the last column. Done: a fully formed affine matrix describing the cylinder's position. Much easier to understand, since you can draw the vectors.
There are shorter ways, but this one is easy to understand. A sketch follows below.
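A minimal numpy sketch of that construction (an illustration, not the plugin's actual classes; it assumes a row-vector convention with the cylinder modelled along its local z axis; the text above puts the source in the translation row, while the question's code translates to the edge midpoint instead):
import numpy as np

def cylinder_matrix(source, target, up=(0.0, 0.0, 1.0)):
    # Affine matrix mapping a unit cylinder modelled along +z
    # onto the segment from source to target (row-vector convention).
    source = np.asarray(source, dtype=float)
    target = np.asarray(target, dtype=float)
    along = target - source
    along /= np.linalg.norm(along)
    ref = np.asarray(up, dtype=float)
    if abs(np.dot(along, ref)) > 0.99:   # along nearly parallel to up:
        ref = np.array([1.0, 0.0, 0.0])  # fall back to another reference
    side = np.cross(ref, along)
    side /= np.linalg.norm(side)
    new_up = np.cross(along, side)       # third axis, already unit length
    M = np.identity(4)
    M[0, :3] = side                      # local x
    M[1, :3] = new_up                    # local y
    M[2, :3] = along                     # local z: the cylinder's axis
    M[3, :3] = source                    # translation row
    return M
Under this convention a local point p maps to world space as np.append(p, 1.0) @ M.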
