Animated flight path plot in Python

I would like to write a program for plotting an animated 3D ribbon image in python. The ribbon follows the flight path as provided by a log containing the following information:
Timestamp, pitch, roll, altitude, latitude, longitude, heading, turn rate.
The final figure should look something like this image of aircraft manoeuvres (without the aircraft figure).
Please let me know how this can be done.
Edit:
Heading is the angle between the flight direction and north. It is a scalar quantity measured in degrees.
Turn rate is the rate of change of heading.
Pitch is the up/down rotation and roll is the rotation from rolling over onto either side. They are measured in degrees.
Below is a sample of the data:
Time: 801.475 alt (ft): 12599.88668 lat(deg): 63.94230675 lon(deg): -22.72656178 pitch(deg): 39.60080719 roll(deg): 40.49394608 heading(deg): 344.7094606 turnspeed: 8.104816363
These are all from the inertial frame of reference, which can also serve as axes for this plot.
Thank you

This is not a complete answer.
First, you need to convert your data to x, y, z coordinates.
Let's say that you use feet; then you have to convert latitude and longitude to x, y.
This will not work if the plane has moved a long distance, because the Earth is not flat.
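As a rough illustration, here is a minimal flat-earth conversion sketch, assuming a short track and an arbitrary reference point (lat0, lon0) taken from the log; the feet-per-degree constant is approximate:

import math

FT_PER_DEG_LAT = 364000.0   # ~ feet per degree of latitude (approximate)

def to_xyz(lat, lon, alt_ft, lat0, lon0):
    x = (lon - lon0) * FT_PER_DEG_LAT * math.cos(math.radians(lat0))  # east
    y = (lat - lat0) * FT_PER_DEG_LAT                                 # north
    z = alt_ft                                                        # up
    return x, y, z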
Your pitch, roll and heading data are used to calculate the (x, y, z) coordinates of the tips of the wings (i.e. the borders of the ribbon) relative to the (x, y, z) of the centre of the plane.
You need quaternions or geometric algebra to do that. You can also do it with basic trigonometry, but you risk gimbal-lock bugs.
Then you need to interpolate your data for the timestep you choose.
With your interpolated data, you get 2 arrays of (x, y, z), one for each ribbon border, which you can plot as lines in 3D.
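For example, a minimal resampling sketch with NumPy, assuming t_log, x (and likewise y, z) are arrays built from the log; note that naively interpolating heading across the 0/360 wrap will misbehave:

import numpy as np

t_uniform = np.arange(t_log[0], t_log[-1], 0.1)   # 0.1 s step, assumed
x_i = np.interp(t_uniform, t_log, x)              # likewise for y_i, z_i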
To do the plot you need to choose a position for the viewer and a direction in which they are looking.
If you want to do an animation, you use the same data; look up how to do animations (e.g. matplotlib.animation) once you have managed to do the interpolations.
When the position of the plane at times t and t+1 is
position[t] = [x, y, z]
position[t+1] = [x1, y1, z1]
the direction in which the plane is moving is
velocity[t+1/2] = (position[t+1] - position[t]) / ((t+1) - t) = [x1-x, y1-y, z1-z] / Δt
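In NumPy that finite difference is a couple of lines (a sketch, assuming t_uniform, x_i, y_i, z_i from the interpolation sketch above):

import numpy as np

pos = np.column_stack([x_i, y_i, z_i])
vel = np.diff(pos, axis=0) / np.diff(t_uniform)[:, None]  # midpoint velocities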
From Position[t], you have to calculate the position of the right wing tip WR[t] and the left wing tip WL[t], which are the [x, y, z] coordinates of the ribbon borders at time t.
Now, following this convention from Wikipedia:
If we say that the viewer is looking at the North, X is positive towards the East, Y is positive towards the North, and Z is positive away from the centre of the Earth (that depends on how you converted latitude/longitude to x, y).
If the length of a wing is L, then the coordinate of the right wing tip, relative to the centre of the plane Position[t], would be [L, 0, 0] when the plane is level and aiming at the North.
At time t the right wing tip WR[t] should be at coordinates:
WR[t] = Position[t] + RotateVectorByRoll(RotateVectorByPitch(RotateVectorByHeading([L,0,0], Θ_H), Θ_P), Θ_R)
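A minimal sketch of that rotation chain with scipy.spatial.transform.Rotation, assuming the ENU axes above; the axis order and the sign of the heading (compass headings run clockwise, Z-up rotations counter-clockwise) should be verified against your data:

import numpy as np
from scipy.spatial.transform import Rotation as R

def wing_tips(position, heading_deg, pitch_deg, roll_deg, L=50.0):
    # Intrinsic rotations: heading about Z, then pitch about the wing (X)
    # axis, then roll about the fuselage (Y) axis.
    rot = R.from_euler('ZXY', [-heading_deg, pitch_deg, roll_deg], degrees=True)
    WR = np.asarray(position) + rot.apply([ L, 0.0, 0.0])
    WL = np.asarray(position) + rot.apply([-L, 0.0, 0.0])
    return WR, WL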


Taking the coordinates of an object and creating a formula to drag an arrow

I am using OpenCV to triangulate the position of an object, and am trying to create some kind of formula to pass the coordinates that I obtain through, to drag a pull arrow and cast a fishing rod. I tried using polynomial regression to a very high degree, but it is still inaccurate, because the regression cannot map an (x,y) input to an (x,y) output, just an x input to an x output, etc. I have attached screenshots below for clarity, alongside the formulas obtained from the regression. Any help/ideas/suggestions would be appreciated, thanks.
Edit:
The xy coordinates are organised as pairs: the landing position, and the position the arrow was pulled to for the bobber to land there. The fishing blob is the input, and the arrow-pull end location is derived from the blob location. I am using OpenCV to obtain the x,y coordinates, which I believe are just coordinates on the 2D screen.
The avatar position is locked, and the button to cast the rod is located at an absolute position of (957,748).
The camera position is locked with no rotation or movement.
I believe that the angle the rod is cast at is likely a 1:1 opposite of where it is pulled to, e.g. if the rod was pulled to 225 degrees it would cast at 45 degrees. I am not 100% sure, but I think the strength is linear; I used linear regression partly because I was not sure about this. There is no altitude difference/slope/wind that affects the cast. The only factor affecting the landing position is where the arrow is dragged to. The arrow will not drag past the 180/360 degree position sideways (relative to the cast button) and will simply lock the cast angle in the x direction if it is held there.
The x-y data was collected with a simple program that moves the mouse to the same position (957,748) and drags the arrow to cast the rod with different drag strengths/positions, to create some kind of line of best fit for a general casting formula. The triang_x and triang_y functions included are what the x and y coordinates were run through, respectively, to triangulate the ending drag coordinate for the arrow. This does not work very well, because matching x-to-x and y-to-y doesn't account for both x and y data in each formula, just x-to-x etc.
Left column is the fishing spot coordinates, right column is where the arrow is dragged to in order to hit that fishing spot.
(1133,359) to (890,890)
(858,334) to (886, 900)
(755,579) to (1012,811)
(1013,255) to (933,934)
(1166,469) to (885,855)
(1344,654) to (855,794)
(804,260) to (1024,939)
(1288,287) to (822,918)
(624,422) to (1075,869)
(981,460) to (949,851)
(944,203) to (963,957)
(829,367) to (1005,887)
(1129,259) to (885,932)
(773,219) to (1036,949)
(1052,314) to (919,908)
(958,662) to (955,782)
(1448,361) to (775,906)
(1566,492) to (751,837)
(1275,703) to (859,764)
(1210,280) to (852,926)
(668,513) to (1050,836)
(830,243) to (1011,939)
(688,654) to (1022,792)
(635,437) to (1072,864)
(911,252) to (976,935)
(1499,542) to (785,825)
(793,452) to (1017,860)
(1309,354) to (824,891)
(1383,522) to (817,838)
(1262,712) to (867,758)
(927,225) to (980,983)
(644,360) to (1097,919)
(1307,648) to (862,798)
(1321,296) to (812,913)
(798,212) to (1026,952)
(1315,460) to (836,854)
(700,597) to (1028,809)
(868,573) to (981,811)
(1561,497) to (758,838)
(1172,588) to (896,816)
[screenshot: bot actions taken within the function and how the formula is used]
import numpy as np

coeffs_x = np.float64([
-7.9517089428836911e+005,
4.1678460255861210e+003,
-7.5075555590709371e+000,
4.2001528427460097e-003,
2.3767929866943760e-006,
-4.7841176483548307e-009,
6.1781765539212100e-012,
-5.2769581174002655e-015,
-4.3548777375857698e-019,
2.5342561455214514e-021,
-1.4853535063513160e-024,
1.5268121610772846e-027,
-2.9667978919426497e-031,
-9.5670287721717018e-035,
-2.0270490020866057e-037,
-2.8248895597371365e-040,
-4.6436110892973750e-044,
6.7719507722602512e-047,
7.1944028726480678e-050,
1.2976299392064562e-052,
7.3188205383162127e-056,
-6.3972284918241943e-059,
-4.1991571617797430e-062,
2.5577340340980386e-066,
-4.3382682133956009e-068,
1.5534384486024757e-071,
5.1736875087411699e-075,
7.8137258396620031e-078,
2.6423817496804479e-081,
2.5418438527686641e-084,
-2.8489136942892384e-087,
-2.3969101111450846e-091,
-3.3499890707855620e-094,
-1.4462592756075361e-096,
6.8375394909274851e-100,
-2.4083095685910846e-103,
7.0453288171977301e-106,
-2.8342463921987051e-109
])
triang_x = np.polynomial.Polynomial(coeffs_x)
coeffs_y = np.float64([
2.6215449742035207e+005,
-5.7778572049616614e+003,
5.1995066291482431e+001,
-2.3696608508824663e-001,
5.2377319234985116e-004,
-2.5063316505492962e-007,
-9.2022083686040928e-010,
3.8639053124052189e-013,
2.7895763914453325e-015,
7.3703786336356152e-019,
-1.3411964395287408e-020,
1.5532055573746500e-023,
-6.9719956967963252e-027,
1.9573598517734802e-029,
-3.3847482160483597e-032,
-5.5368209294319872e-035,
7.1463648457003723e-038,
4.6713369979545088e-040,
-7.5070219026265008e-043,
-4.5089676791698693e-047,
-3.2970870269153785e-049,
1.6283636917056585e-051,
-1.4312555782661719e-054,
7.8463441723355399e-058,
1.9439588820918080e-060,
2.1292310369635749e-063,
-1.4191866473449773e-065,
-2.1353539347524828e-070,
2.5876946863828411e-071,
-1.6182477348921458e-074
])
triang_y = np.polynomial.Polynomial(coeffs_y)
First you need to clarify a few things:
the xy data
Is it the position of the object you want to hit, or the position actually hit for some specific input data (which would be missing in that case)? In what coordinate system?
what position is your avatar at?
how is the view defined?
Is it fully 3D with 6DOF, or just fixed (no rotation or movement) relative to the avatar?
what is the physics/logic of your rod casting?
Is it an angle (one or two) plus strength? Is the strength linear with distance? Does throwing account for altitude difference between avatar and target? Does ground elevation (slope) play a role? Are there any other factors like wind, type of rod, etc.?
You shared the xy data, but what do you want to correlate it against or make a formula for? As posted it does not make sense; you have obviously forgotten to add something, like the conditions under which each position was taken.
I would solve this as follows (no further details before you clarify the points above):
1. transform the target's xy into a player-relative coordinate system aligned to the ground
2. compute the azimuth angle (geometrically)
a simple atan2(y,x) will do, but you need to take your coordinate system conventions into account.
3. compute the elevation angle and strength (geometrically)
simple ballistic physics should apply; however, it depends on the physics used by the game or whatever you are writing this for.
4. adjust for additional factors
for example, wind can slightly change your angle and strength.
If you have real physics and data, you can do #3 and #4 at the same time. See this similar question:
C++ intersection time of 2 bullets
[Edit1] putting your data into your image
OK, your coordinates obviously do not match your screenshot, as the posted image is scaled. After some intuition I rescaled it and drew the data into the image in C++ until it matched again; here is the result:
I converted your Cartesian points:
int ava_x=957,ava_y=748; // avatar
int data[]= // target(x1,y1) , drag(x0,y0)
{
1133,359,890,890,
858,334,886,900,
755,579,1012,811,
1013,255,933,934,
1166,469,885,855,
1344,654,855,794,
804,260,1024,939,
1288,287,822,918,
624,422,1075,869,
981,460,949,851,
944,203,963,957,
829,367,1005,887,
1129,259,885,932,
773,219,1036,949,
1052,314,919,908,
958,662,955,782,
1448,361,775,906,
1566,492,751,837,
1275,703,859,764,
1210,280,852,926,
668,513,1050,836,
830,243,1011,939,
688,654,1022,792,
635,437,1072,864,
911,252,976,935,
1499,542,785,825,
793,452,1017,860,
1309,354,824,891,
1383,522,817,838,
1262,712,867,758,
927,225,980,983,
644,360,1097,919,
1307,648,862,798,
1321,296,812,913,
798,212,1026,952,
1315,460,836,854,
700,597,1028,809,
868,573,981,811,
1561,497,758,838,
1172,588,896,816,
};
into polar coordinates relative to (ava_x, ava_y) using atan2 and the 2D distance formula, and simply printed the angular difference +180 deg and the ratio between line lengths (that is the yellow text at the left of the screenshot): first the ordinal number, then the angle difference [deg], and then the ratio between the line lengths.
As you can see, the angle difference is within +/-10.6 deg and the length ratio is in the range <2.5, 3.6>, probably because of inaccuracy in the OpenCV detections and some randomness in the fishing rod casts from the game logic itself.
As you can see, polar coordinates are best for this. For starters you could simply do this:
// wanted target in polar (obtained by CV)
x = target_x-ava_x;
y = target_y-ava_y;
a = atan2(y,x);
l = sqrt((x*x)+(y*y));
// aiming drag in polar
a += 3.1415926535897932384626433832795; // +=180 deg
l /= 3.0; // "avg" ratio between line sizes
// aiming drag in cartesian
aim_x = ava_x + l*cos(a);
aim_y = ava_y + l*sin(a);
You can optimize it to:
aim_x = ava_x - ((target_x-ava_x)/3);
aim_y = ava_y - ((target_y-ava_y)/3);
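Since the question is Python/OpenCV based, here is the same rule as a Python sketch, assuming the avatar anchor (957, 748) and the roughly 1/3 length ratio measured above:

AVA_X, AVA_Y = 957, 748   # cast button / avatar anchor

def aim_point(target_x, target_y, ratio=3.0):
    # Drag opposite the target, at 1/ratio of the target distance.
    aim_x = AVA_X - (target_x - AVA_X) / ratio
    aim_y = AVA_Y - (target_y - AVA_Y) / ratio
    return aim_x, aim_y

print(aim_point(1133, 359))   # first data pair; the measured drag was (890, 890)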
Now, to improve precision, you could measure the dependency between the line ratio and the line size (it might not be linear); the angular difference might also be bigger for longer lines.
Also note that the second cast (ordinal 2) is probably a bug (wrongly detected x,y by CV); if you render the 2 lines you will see they do not match, so you should not account for that pair and should throw it out of the dataset.
Also note that I code in C++, so my trigonometric functions use radians (Python's math functions use radians too); the equations might also need some additional tweaking for your coordinate system (negate y?).

Projected Area calculation of a cube

I am working on measuring the projected area of a cube facing the sun for my spacecraft coursework. The cube has 1x1x1 m dimensions and constantly rotates due to its orbit. Using a program called STK, data for the angular shift relative to a reference was obtained. So now I have the shift in orientation of the cube every 30 minutes, but I need to calculate how much projected area will be exposed to the sun (I can assume the sunlight comes from a single direction).
I need to be able to translate the coordinate shift in orientation of the cube to how much of a projected area will be facing the sun at each interval of time. Let me give you an example:
At the initial time, the cube is facing you (you are the sun...because you are my sunshine ;) ) and no shift has occurred, hence the projected area will be 1 m^2.
After 30 mins, there has been a shift only on the x axis of 45 degrees. Now the projected area is 1.4142 m^2 (since cos 45 * 1 = 0.7071 and now you have 2 faces facing you).
After 60 mins, only a shift in the y axis occurs (45 degrees). Now you have 3 partial faces of the cube facing you and possess a projected area of 1.707 m^2.
This isn't too hard to do with a few shifts, but I need to do it for many (more than 100). I am thinking of writing a Python program that rotates a 3D object and measures the projected area at each interval. Any recommendations on libraries that allow 3D body definition and rotation? Or libraries that can measure the area of projected surfaces?
Establish a unit vector perpendicular to each face of the cube. Depending on the output of your rotation program, you may apply angular rotations from the base axes, or you can take the vector cross product of 2 edges of each face (be careful with the right-hand rule to ensure the result faces outward). Then, as in the sketch after this list:
take the dot product of each of the resulting 6 vectors individually with a vector pointing to the sun
drop any negative results (faces pointing away from the sun)
sum the remainder
Unit vectors will suffice because the surface area of each face is 1 square unit.
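A minimal Python sketch of those steps, assuming the orientation at each interval is available as a scipy Rotation and the sun direction is given as a vector:

import numpy as np
from scipy.spatial.transform import Rotation as R

# Outward unit normals of the 6 faces of a unit cube.
NORMALS = np.array([[1,0,0],[-1,0,0],[0,1,0],[0,-1,0],[0,0,1],[0,0,-1]], float)

def projected_area(rotation, sun_dir):
    sun = np.asarray(sun_dir, float)
    sun /= np.linalg.norm(sun)
    dots = rotation.apply(NORMALS) @ sun   # cosine of each face normal to the sun
    return dots[dots > 0].sum()            # faces tilted away contribute nothing

# 45 deg shift about x gives ~1.4142 m^2, matching the example in the question.
print(projected_area(R.from_euler('x', 45, degrees=True), [0, 0, 1]))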

Normalising angled Earth magnetic field

My team and I are participating in the ESA Astro Pi challenge. Our program will run on the ISS for 3 hours; we will then get our results back and analyse them.
We want to investigate the connection between the magnetic intensity measurements from the Sense HAT magnetometer and predictions from the World Magnetic Model (WMM), to research the accuracy of the Sense HAT magnetometer.
The program will get raw magnetometer data (X, Y and Z) in microteslas from the Sense HAT and calculate the values H and F as described in the British Geological Survey's article (section 2.1). It will then save them to a CSV file, along with a timestamp and the location calculated with ephem.
We will then compare the values Z, H and F from the ISS and from the WMM and create maps of our data and the differences (like figures 6, 8 and 10), to see how accurate the Sense HAT magnetometer data are.
We want to compare our data with data from the WMM to see how accurate the Sense HAT magnetometer is, but we have a problem: the orientation of the magnetometer will always be different, so our data will always be (very) different from the WMM and we won't be able to compare them correctly.
We talked with the Astro Pi support team and they suggested that we "normalise the angled measurements so it looks like they were taken by a device aligned North/South".
Unfortunately, we (and they) don't know how to do this, so they suggested asking this question on Stack Exchange. I asked it on Math Stack Exchange, Physics Stack Exchange and the Raspberry Pi Forums. Unfortunately, it didn't receive any answers, so I am asking it again.
How can we do this? We have data for timestamp, ISS location (latitude, longitude, elevation), magnetic data (X, Y and Z) and also direction from the North.
We want to normalise our data so we will be able to correctly compare them with data from WMM.
Here is the part of our program that calculates the magnetometer values (it receives non-normalised data):
from math import sqrt, atan, degrees
from sense_hat import SenseHat

sense = SenseHat()
compass = sense.get_compass_raw()
try:
    # Get raw data (values are swapped because the Sense HAT on the ISS is mounted in a different position)
    # x: northerly intensity
    # y: easterly intensity
    # z: vertical intensity
    x = float(compass['z'])
    y = float(compass['y'])
    z = float(compass['x'])
except (ValueError, KeyError) as err:
    # Write error to log (excluded from this snippet)
    pass
try:
    # h: horizontal intensity
    # f: total intensity
    # d: declination
    # i: inclination
    h = sqrt(x ** 2 + y ** 2)
    f = sqrt(h ** 2 + z ** 2)
    d = degrees(atan(y / x))   # note: atan2(y, x) would handle quadrants and x == 0 more robustly
    i = degrees(atan(z / h))
except (TypeError, ValueError, ZeroDivisionError) as err:
    # Write error to log (excluded from this snippet)
    pass
There is also some simple simulator available with our code: https://trinket.io/library/trinkets/cc87813ce7
Part of an email from the Astro Pi team about the location and position of the magnetometer:
Z is going down through the middle of the Sense Hat.
X runs between the USB ports and SD card slot.
Y runs across from the HDMI port to the 40 way pin header.
On the ISS the AstroPi orientation is that the Ethernet + USB ports face the deck and the SD card slot is towards the sky.
So, that's basically a rotation around the Y axis from flat. So you keep the Y axis the same and swap around Z and X.
It can help to look at the Google Street view of the interior of the ISS Columbus module to get a better idea how the AstroPi is positioned;
https://www.google.com/streetview/#international-space-station/columbus-research-laboratory
If you pan the camera down and to the right, you'll see a green light - that's the AstroPi. The direction of travel for the whole space station is towards the inflatable Earth ball you can see on the left.
So, broadly speaking, the SD card slot points towards azimuth as in away from the centre of the Earth (so the X axis).
The LED matrix is facing the direction of travel of the space station (the Z axis).
Because of the orbital path of the ISS the Z and Y axes will continually change direction relative to the poles as it moves around the Earth.
So, I am guessing you want to normalise the angled measurements so it looks like they were taken by a device aligned North/South?
I think you need to create a local reference coordinate system similar to NEH (north, east, height/altitude/up), something like in
Representing Points on a Circular Radar Math approach.
It is commonly used in aviation as a reference frame (heading is derived from it); the reference frame is computed from your geo location, with its axes pointing North, East and Up.
Now the problem is what "aligned North/South" and "normalising" mean here.
If the reference device measures just a projection, then you would need to do something like this:
dot(measured_vector, reference_unit_direction)
where the direction would be the North direction as a unit vector.
If the reference device measures full 3D too, then you need to transform both the reference and the measured test data into the same coordinate system. That is done by using
transform matrices
So a simple matrix * vector multiplication will do... Only then compute the values H, F, Z (I do not know what they are and am too lazy to go through the papers; I would expect E, H or B vectors instead).
However, if you do not have the geo location at the moment of measurement, then you have just the North direction with respect to the ISS in the form of Euler angles, so you cannot construct a 3D reference frame at all (unless you have 2 known vectors instead of just one, like Up). In such a case you need to go with option 1, projection (using the dot product and the North direction vector), and you will then handle just scalar values instead of 3D vectors.
[Edit1]
From your link:
The geomagnetic field vector, B, is described by the orthogonal
components X (northerly intensity), Y (easterly intensity) and Z
(vertical intensity, positive downwards);
This is not my field of expertise so I might be wrong here, but this is how I understand it:
B(Bx,By,Bz) - magnetic field vector
a(ax,ay,az) - acceleration
Now F is the magnitude of B, so it is invariant under rotation:
F = |B| = sqrt( Bx*Bx + By*By + Bz*Bz )
You need to compute the X, Y, Z components of B in the NED reference frame (North, East, Down), so you need the basis vectors first:
Down = a/|a| // gravity points down
North = B/|B| // north is close to B direction
East = cross(Down,North) // East is perpendicular to Down and North
North = cross(East,Down) // north is perpendicular to Down and East, this should convert North to the horizontal plane
You should render them to visually check that they point in the correct directions; if not, negate them by reordering the cross operands (I might have the order wrong; I am used to using an Up vector instead). Now just convert B to NED:
X = dot(North,B)
Y = dot(East,B)
Z = dot(Down,B)
And now you can compute H:
H = sqrt( X*X +Y*Y )
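Here is the above recipe as a small NumPy sketch, assuming B is the raw magnetometer vector and a the raw accelerometer vector (both 3-vectors in the same sensor frame):

import numpy as np

def b_to_ned(B, a):
    B = np.asarray(B, float)
    down = np.asarray(a, float)
    down /= np.linalg.norm(down)      # gravity points down
    north = B / np.linalg.norm(B)     # B points roughly north (plus dip)
    east = np.cross(down, north)
    east /= np.linalg.norm(east)      # East is perpendicular to Down and North
    north = np.cross(east, down)      # force North into the horizontal plane
    X, Y, Z = north @ B, east @ B, down @ B
    H = np.hypot(X, Y)                # horizontal intensity
    F = np.linalg.norm(B)             # total intensity (rotation-invariant)
    return X, Y, Z, H, F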
The vector math needed for this you will find in the transform matrix link above.
Beware: this will work only if no accelerations other than gravity are present (the sensor is not on a robotic arm during its operation, the ISS is not doing a burn...). Otherwise you need to obtain the NED frame differently (e.g. from onboard systems).
If this does not work correctly, then you can compute NED from your ISS position, but for that you would need to know the exact orientation and displacement of the sensor with respect to the simulation model that provides your location. I do not know what rotations the ISS performs, so I would not touch that subject except as a last resort.
I am afraid I will not have time for coding for a while... anyway, coding without sample input data, the coordinate system explanations, and all the input/output variables is insanity... a single negated axis will invalidate the whole thing, and there are a lot of ambiguities along the way; to cover them all you would end up with many, many versions to try.
Apps should be built up incrementally, but I am afraid that without access to the simulation or the real HW that is not possible. And there is a whole bunch of things that could go wrong... making even simple programs a magnitude harder to code... I would first check F, as it does not require any "normalisation", to see whether the results are off or not. If they are off, it might suggest different units or who knows what...

How to plot an orbit with pyephem/matplotlib

With pyephem I can take orbital elements from the Minor Planet Center and add them to an ephem.EllipticalBody().
To plot the orbit I've been computing the sun distance, heliocentric latitude and longitude over the period of the orbit and plotting that:
import numpy as np
from matplotlib.pyplot import figure

dt = body._epoch
period = np.sqrt(a ** 3)          # a: semi-major axis in AU; period in years
timespace = np.linspace(dt - (period * 365) / 2,
                        dt + (period * 365) / 2, 720)
theta_e = []
r_e = []
phi_e = []
for t in timespace:
    body.compute(t)
    theta_e.append(body.hlon)
    phi_e.append(body.hlat)
    r_e.append(body.sun_distance)

subplot = figure(figsize=(20, 20)).add_subplot(111, polar=True)
subplot.scatter(theta_e, np.array(r_e) * np.cos(phi_e), s=0.5)
This approach works OK for minor planets that have short periods (a few years). However, when I go to the extreme and plot, say, Sedna, I get:
http://i.stack.imgur.com/wxRCW.png
So it looks as though pyephem is being clever and taking apsidal precession into account, which isn't something I need here.
Any suggestions on how I can plot the orbit from the orbital elements, or on what I'm doing wrong here? Ideally I'd like to be able to do plots from different 'views' to show the inclination as well as the eccentricity, or perhaps a live mayavi scene.
I suspect that the phenomenon here is not apsidal precession, because the libastro library beneath PyEphem uses simple Keplerian orbits when given elliptical coordinates.
Instead, my guess is that you are seeing the Earth’s polar precession, which is moving the heliocentric latitude and longitude system out from under you as you are trying to plot. For most objects the effect is negligible, but an 11,000-year orbit is enough time for the Earth’s pole to make it a good bit of the way around the sky.
The only coordinates that libastro produces that are likely to be immune to the effect (assuming that a compute(…, epoch=J2000) does not make the heliocentric longitude stable?) are the a_ra and a_dec coordinates, so you might have to pull those for both the Sun and Sedna, convert them into vectors, and subtract.
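A sketch of that subtraction, assuming body and an ephem.Sun() have been compute()d for the same date t (e.g. from the loop above): astrometric RA/Dec plus distance are turned into Cartesian vectors and differenced.

import numpy as np
import ephem

def radec_to_vec(ra, dec, dist):
    # ephem angles behave as radians when used as floats
    return dist * np.array([np.cos(dec) * np.cos(ra),
                            np.cos(dec) * np.sin(ra),
                            np.sin(dec)])

sun = ephem.Sun()
sun.compute(t)
body.compute(t)
geo_body = radec_to_vec(body.a_ra, body.a_dec, body.earth_distance)
geo_sun = radec_to_vec(sun.a_ra, sun.a_dec, sun.earth_distance)
helio = geo_body - geo_sun   # heliocentric vector in equatorial J2000 axes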

Difficulties with RA/Dec and Alt/Azi conversions with pyEphem

I'm trying to go from alt/azi to RA/Dec for a point on the sky at a fixed location, trying out pyEphem. I've tried a couple of different ways, and I get sort of the right answer, within a couple of degrees, but I'm expecting better, and I can't figure out where the problems lie.
I've been using Canopus as a test case (I'm not after stars specifically, so I can't use the in-built catalogue). So in my case, I know that at
stn = ephem.Observer()
# yalgoo station, wa
stn.long = '116.6806'
stn.lat = '-28.3403'
stn.elevation = 328.0
stn.pressure = 0 # no refraction correction.
stn.epoch = ephem.J2000
stn.date = '2014/12/15 14:32:09' #UTC
Stellarium, checked against other web sites, tells me Canopus should be at
azi, alt = '138:53:5.1', '56:09:52.6', or in equatorial coordinates RA 6h 23m 57.09s / Dec -52deg 41' 44.6"
but trying:
cano = ephem.FixedBody()
cano._ra = '6:23:57.1'
cano._dec = '-52:41:44.0'
cano._epoch = ephem.J2000
cano.compute(stn)
print(cano.az, cano.alt)
>>> (53:22:44.0, 142:08:03.0)
about 3 degrees out. I've also tried the reverse:
ra, dec = stn.radec_of('138:53:5.1', '56:09:52.6')
>>> (6:13:18.52, -49:46:09.5)
where I'm expecting 6:23, not 6:13. Turning on refraction correction makes a small difference, but not enough, and I've always understood aberration and nutation to be much smaller effects than this offset as well.
As a follow-up, I've tried manual calculations based on 'Practical Astronomy with your calculator'; so for dec:
import math

LAT = math.radians(-28.340335)
LON = math.radians(116.680621667)
ALT = math.radians(56.16461)
AZ = math.radians(138.88475)
sinDEC = (math.sin(LAT) * math.sin(ALT)
          + math.cos(LAT) * math.cos(ALT) * math.cos(AZ))
DEC = math.asin(sinDEC)
DEC_deg = math.degrees(DEC)
print('dec = ', DEC_deg)
>>> ('dec = ', -49.776032754148986)
again, quite different from the expected -52:41:44.6, but reasonably close to pyEphem; so now I'm thoroughly confused! I'm now suspecting the problem is my understanding rather than pyEphem. Could someone enlighten me about the correct way to do RADEC/ALTAZI conversions, and why things are not lining up?!
First some notes:
Atmospheric refraction and the relative speed between observer and object
have a maximal error (near the horizon) of up to 0.6 degrees, which is nowhere near your error.
How can altitude be over 90 degrees?
You have swapped the data for azimuth and altitude.
I put your observer data into my program and the result was similar to yours,
but I visually searched for that star instead of entering the coordinates. The result was also about 3-4 degrees off in the RA axis:
RA = 6.4h, Dec = -52.6 deg
azi = 142.4 deg, alt = 53.9 deg
My engine is in C++, using Kepler's equation.
Now what can be wrong:
my stellar catalog could be wrongly converted
It could be rotated by some margin, but I strongly doubt that it amounts to 3 degrees. Perspective transforms can also add some error while rendering at 750 AU distance from the observer. I have never tested the Southern sky (it is not visible from my location).
we are using a different Earth reference frame than the data you are comparing to
I found out that some sites like NASA Horizons use a different reference frame which does not correspond with my observations. Look here:
calculate the time when the sun is X degrees below/above the Horizon
At the start of that answer is a link to 2 sites with different reference frames; when you compare their results they are off. The second link corresponds with my observations; the rest deals (source code included) with a Solar system simulation based on Kepler's equation. The other sublinks are also worth looking into.
I could have a bug in my simulation/data
I have only verified this engine's data from my own observer position, which could partially hide computation errors, so keep that in mind for all of the above.
you could be using wrong time / Julian date to sidereal time conversions
If your time is off, then the angles will not match...
How to resolve this?
Pick up your telescope, set up an equatorial coordinate system/mount on it, and measure RA/Dec and Azi/Alt for a known (distant) object in reality, then compare with the computed positions. Only this way can you decide which value is good or wrong (for the reference frame you are using). Do this on a star, not a planet! Do this at high altitude angles, not near the horizon!
How to transform between azimuthal and equatorial coordinates:
I compute a transform matrix Earth representing Earth's coordinate system in the heliocentric coordinate system taken as the global coordinate system, then I compute another matrix NEH representing the observer on Earth's surface (North, East, High/Altitude).
After this it is just a matter of matrix and vector multiplications and conversions between Cartesian and spherical coordinate systems; look here:
Representing Points on a Circular Radar Math approach
for more insight into azimuthal coordinates. If you use just a simple equation like in your example, then you do not account for many things... The Earth position is computed by Kepler's equation; the rotation is given by the daily rotation, with nutation and precession included.
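For reference, here is the simple-equation route written out in full: a Python sketch of the textbook Alt/Az to RA/Dec conversion in the style of 'Practical Astronomy with your calculator'. It ignores refraction, nutation, aberration and precession, and it assumes you can supply the local sidereal time, which is exactly why it cannot be expected to match a full simulation:

import math

def altaz_to_radec(alt, az, lat, lst_hours):
    # alt, az, lat in radians; lst_hours = local sidereal time in hours.
    sin_dec = (math.sin(lat) * math.sin(alt)
               + math.cos(lat) * math.cos(alt) * math.cos(az))
    dec = math.asin(sin_dec)
    cos_ha = ((math.sin(alt) - math.sin(lat) * sin_dec)
              / (math.cos(lat) * math.cos(dec)))
    ha = math.acos(max(-1.0, min(1.0, cos_ha)))  # hour angle, radians
    if math.sin(az) > 0:                         # object east of the meridian
        ha = 2.0 * math.pi - ha
    ra_hours = (lst_hours - math.degrees(ha) / 15.0) % 24.0
    return ra_hours, math.degrees(dec)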
I use 64-bit floating point values, which can create rounding errors, but not that large...
I use the geometric North Pole as the observer reference (this could add some serious error near the poles).
The biggest thing that can affect this is the speed of light, but that matters for near-Earth 'moving' objects like planets, not stars (except the Sun), because their computed position is seen only after some delay... For example, the Sun-Earth distance is about 8 light-minutes, so we see the Sun where it was 8 minutes ago. If the ephemerides data are geometric only (do not account for this), then this can lead to large errors if not computed properly.
Newer ephemerides models use gravity integration instead of Kepler, so their data must be geometric, and the final output is then corrected for the time shift...
