I am using OpenCV to triangulate the position of an object, and I am trying to create a formula that maps the coordinates I obtain to the end position of a pull-arrow drag that casts a fishing rod. I tried polynomial regression to a very high degree, but it is still inaccurate because the regression cannot map an (x,y) input to an (x,y) output; it only maps x to x and y to y independently. I have attached screenshots below for clarity, alongside the formulas I obtained from the regression. Any help/ideas/suggestions would be appreciated, thanks.
Edit:
The coordinate pairs run from the landing position to the position the arrow was pulled to in order for the bobber to land there: the fishing blob is the input, and the arrow-pull end location is derived from the blob location. I am using OpenCV to obtain the x,y coordinates, which I believe are just x,y coordinates on the 2D screen.
The avatar position is locked, and the button to cast the rod is located at an absolute position of (957,748).
The camera position is locked with no rotation or movement.
I believe the angle the rod is cast at is a 1:1 opposite of where it is pulled to; e.g. if the rod is pulled to 225 degrees it casts at 45 degrees. I am not 100% sure, but I think the strength is linear; I used linear regression partly because I was unsure about this. There is no altitude difference/slope/wind that affects the cast. The only factor affecting the landing position is where the arrow is dragged to. The arrow cannot be dragged past the 180/360 degree position sideways (relative to the cast button); holding it there simply locks the cast angle in the x direction.
The x-y data was collected with a simple program that moves the mouse to the same position (957,748) and drags the arrow with different strengths/positions to build a line of best fit for a general casting formula. The triang_x and triang_y functions included below are what the x and y coordinates were run through, respectively, to compute the ending drag coordinate for the arrow. This does not work well because matching x-to-x and y-to-y ignores the other coordinate in each formula.
Left column is the fishing spot coordinate; right column is where the arrow is dragged to in order to hit that spot.
(1133,359) to (890,890)
(858,334) to (886, 900)
(755,579) to (1012,811)
(1013,255) to (933,934)
(1166,469) to (885,855)
(1344,654) to (855,794)
(804,260) to (1024,939)
(1288,287) to (822,918)
(624,422) to (1075,869)
(981,460) to (949,851)
(944,203) to (963,957)
(829,367) to (1005,887)
(1129,259) to (885,932)
(773,219) to (1036,949)
(1052,314) to (919,908)
(958,662) to (955,782)
(1448,361) to (775,906)
(1566,492) to (751,837)
(1275,703) to (859,764)
(1210,280) to (852,926)
(668,513) to (1050,836)
(830,243) to (1011,939)
(688,654) to (1022,792)
(635,437) to (1072,864)
(911,252) to (976,935)
(1499,542) to (785,825)
(793,452) to (1017,860)
(1309,354) to (824,891)
(1383,522) to (817,838)
(1262,712) to (867,758)
(927,225) to (980,983)
(644,360) to (1097,919)
(1307,648) to (862,798)
(1321,296) to (812,913)
(798,212) to (1026,952)
(1315,460) to (836,854)
(700,597) to (1028,809)
(868,573) to (981,811)
(1561,497) to (758,838)
(1172,588) to (896,816)
(Screenshot: bot actions taken within the function and how the formula is used.)
import numpy as np

coeffs_x = np.array([
-7.9517089428836911e+005,
4.1678460255861210e+003,
-7.5075555590709371e+000,
4.2001528427460097e-003,
2.3767929866943760e-006,
-4.7841176483548307e-009,
6.1781765539212100e-012,
-5.2769581174002655e-015,
-4.3548777375857698e-019,
2.5342561455214514e-021,
-1.4853535063513160e-024,
1.5268121610772846e-027,
-2.9667978919426497e-031,
-9.5670287721717018e-035,
-2.0270490020866057e-037,
-2.8248895597371365e-040,
-4.6436110892973750e-044,
6.7719507722602512e-047,
7.1944028726480678e-050,
1.2976299392064562e-052,
7.3188205383162127e-056,
-6.3972284918241943e-059,
-4.1991571617797430e-062,
2.5577340340980386e-066,
-4.3382682133956009e-068,
1.5534384486024757e-071,
5.1736875087411699e-075,
7.8137258396620031e-078,
2.6423817496804479e-081,
2.5418438527686641e-084,
-2.8489136942892384e-087,
-2.3969101111450846e-091,
-3.3499890707855620e-094,
-1.4462592756075361e-096,
6.8375394909274851e-100,
-2.4083095685910846e-103,
7.0453288171977301e-106,
-2.8342463921987051e-109
])
triang_x = np.polynomial.Polynomial(coeffs_x)
coeffs_y = np.array([
2.6215449742035207e+005,
-5.7778572049616614e+003,
5.1995066291482431e+001,
-2.3696608508824663e-001,
5.2377319234985116e-004,
-2.5063316505492962e-007,
-9.2022083686040928e-010,
3.8639053124052189e-013,
2.7895763914453325e-015,
7.3703786336356152e-019,
-1.3411964395287408e-020,
1.5532055573746500e-023,
-6.9719956967963252e-027,
1.9573598517734802e-029,
-3.3847482160483597e-032,
-5.5368209294319872e-035,
7.1463648457003723e-038,
4.6713369979545088e-040,
-7.5070219026265008e-043,
-4.5089676791698693e-047,
-3.2970870269153785e-049,
1.6283636917056585e-051,
-1.4312555782661719e-054,
7.8463441723355399e-058,
1.9439588820918080e-060,
2.1292310369635749e-063,
-1.4191866473449773e-065,
-2.1353539347524828e-070,
2.5876946863828411e-071,
-1.6182477348921458e-074
])
triang_y = np.polynomial.Polynomial(coeffs_y)
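For reference, this is roughly how the two fits get applied, as a minimal usage sketch (the sample coordinate is the first row of the table above); each output coordinate only ever sees one input coordinate, which is exactly where the accuracy breaks down:
fish_x, fish_y = 1133, 359    # detected fishing-spot coordinate (first data row)
drag_x = triang_x(fish_x)     # x output depends on x input only
drag_y = triang_y(fish_y)     # y output depends on y input only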
First you need to clarify a few things:
the xy data
Is it the position of the object you want to hit, or the position you actually hit for specific input data (which is missing in that case)? In what coordinate system?
what position is your avatar at?
how is the view defined?
is it fully 3D with 6DOF, or just fixed (no rotation or movement) relative to the avatar?
what is the physics/logic of your rod casting?
is it an angle (one or two) plus strength? Is the strength linear with distance? Does the throw account for altitude difference between avatar and target? Does ground elevation (slope) play a role? Are there any other factors, like wind or type of rod?
You shared the xy data, but what do you want to correlate it against or derive a formula for? As posted it does not make sense; you apparently forgot to add something, like the conditions under which each position was taken.
I would solve this as follows (no further details until you clarify the points above):
transform the target's xy into a player-relative coordinate system aligned to the ground
compute the azimuth angle (geometrically)
a simple atan2(y,x) will do, but you need to take your coordinate system conventions into account.
compute the elevation angle and strength (geometrically)
simple ballistic physics should apply; however, it depends on the physics used by the game or whatever you are writing this for.
adjust for additional factors
For example, wind can slightly change your angle and strength.
If you have real physics and data, you can do #3 and #4 at the same time. See similar:
C++ intersection time of 2 bullets
[Edit1] putting your data into your image
OK, your coordinates obviously do not match your screenshot, because the posted image is scaled. After some trial and error I rescaled it and drew your data into the image in C++ so they match again; here is the result:
I converted your Cartesian points:
int ava_x=957,ava_y=748; // avatar
int data[]= // target(x1,y1) , drag(x0,y0)
{
1133,359,890,890,
858,334,886, 900,
755,579,1012,811,
1013,255,933,934,
1166,469,885,855,
1344,654,855,794,
804,260,1024,939,
1288,287,822,918,
624,422,1075,869,
981,460,949,851,
944,203,963,957,
829,367,1005,887,
1129,259,885,932,
773,219,1036,949,
1052,314,919,908,
958,662,955,782,
1448,361,775,906,
1566,492,751,837,
1275,703,859,764,
1210,280,852,926,
668,513,1050,836,
830,243,1011,939,
688,654,1022,792,
635,437,1072,864,
911,252,976,935,
1499,542,785,825,
793,452,1017,860,
1309,354,824,891,
1383,522,817,838,
1262,712,867,758,
927,225,980,983,
644,360,1097,919,
1307,648,862,798,
1321,296,812,913,
798,212,1026,952,
1315,460,836,854,
700,597,1028,809,
868,573,981,811,
1561,497,758,838,
1172,588,896,816,
};
into polar coordinates relative to (ava_x, ava_y) using atan2 and the 2D distance formula, and simply printed the angular difference +180 deg and the ratio between line lengths (the yellow text at the left of the screenshot): first the ordinal number, then the angle difference [deg], then the ratio between the line lengths ...
As you can see, the angle difference stays within +/-10.6 deg and the length ratio within <2.5,3.6>, probably because of inaccuracy in the OpenCV detections and some randomness in the game's rod-casting logic itself.
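If you want to reproduce that check yourself, here is a minimal Python sketch (only the first few samples of the data[] array are inlined; the remaining rows follow the same pattern):
import math

ava_x, ava_y = 957, 748
pairs = [
    (1133, 359, 890, 890),   # target(x1,y1), drag(x0,y0)
    (858, 334, 886, 900),
    (755, 579, 1012, 811),
    # ... rest of the dataset
]
for i, (x1, y1, x0, y0) in enumerate(pairs, 1):
    a1 = math.atan2(y1 - ava_y, x1 - ava_x)        # target angle
    a0 = math.atan2(y0 - ava_y, x0 - ava_x)        # drag angle
    da = math.degrees(a1 - a0) % 360.0 - 180.0     # deviation from exact 180 deg
    ratio = math.hypot(x1 - ava_x, y1 - ava_y) / math.hypot(x0 - ava_x, y0 - ava_y)
    print(i, round(da, 1), round(ratio, 2))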
As you can see polar coordinates are best for this. For starters you could do simply this:
// wanted target in polar (obtained by CV)
x = target_x-ava_x;
y = target_y-ava_y;
a = atan2(y,x);
l = sqrt((x*x)+(y*y));
// aiming drag in polar
a += 3.1415926535897932384626433832795; // +=180 deg
l /= 3.0; // "avg" ratio between line sizes
// aiming drag in cartesian
aim_x = ava_x + l*cos(a);
aim_y = ava_y + l*sin(a);
You can optimize it to:
aim_x = ava_x - ((target_x-ava_x)/3);
aim_y = ava_y - ((target_y-ava_y)/3);
Now, to improve precision, you could measure the dependency of the line-length ratio on the line size (it might not be linear); the angular difference might also grow for bigger lines ...
Also note that the second cast (ordinal 2) is probably a glitch (x,y wrongly detected by CV): if you render the two lines you will see they do not match, so you should discard that sample from the dataset.
Also note that I code in C++, so my goniometrics use radians (Python's math module also uses radians, so no conversion is needed there). The equations might also need some additional tweaking for your coordinate systems (negate y?).
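For completeness, here is a direct Python sketch of the same aiming math (both forms are algebraically identical, since input and output share the same screen coordinate system):
import math

AVA_X, AVA_Y = 957, 748   # avatar / cast-button position
RATIO = 3.0               # "avg" target-to-drag length ratio from the data

def aim(target_x, target_y):
    # convert the target to polar relative to the avatar
    x, y = target_x - AVA_X, target_y - AVA_Y
    a = math.atan2(y, x) + math.pi    # drag goes opposite to the target
    l = math.hypot(x, y) / RATIO
    # back to Cartesian screen coordinates
    return AVA_X + l * math.cos(a), AVA_Y + l * math.sin(a)

def aim_simple(target_x, target_y):
    # the optimized/simplified form
    return (AVA_X - (target_x - AVA_X) / RATIO,
            AVA_Y - (target_y - AVA_Y) / RATIO)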
I am quite new to the field of orbital mechanics and am currently struggling a bit with the following problem, which should be quite easy to solve with Skyfield, yet I am a bit overwhelmed by all the different coordinate systems and the translations between them.
I have a Topos location on Earth and a Topos location of a LEO satellite, and I am considering the line of sight between them. I want to determine the latitude and longitude of the position along this path where it intersects a specific layer of the atmosphere.
An example would be the mesosphere and an existing dataset on its properties at around 100 km, given by latitude and longitude. The intersection would allow me to better understand the effect these properties have on the communication with the satellite.
I tried doing it with Skyfield directly, but I only end up with an Apparent object that I cannot convert back to a latitude and longitude on Earth. First, I trigonometrically determined the distance from the Earth position to the point where the height of 100 km is reached.
Then, I took the position on Earth and used the unchanged elevation and azimuth to keep the direction of the path, and finally added the calculated distance to arrive at this position. I think I need to get a Geocentric object so that I can use subpoint() to get the desired latitude and longitude of this location.
This is what I have so far:
from skyfield.api import load, Distance
from skyfield.toposlib import Topos
import numpy as np
ts = load.timescale()
earth_position = Topos('52.230039 N', '4.842402 E', elevation_m=10)
space_position = Topos('51.526200 N', '5.347795 E', elevation_m=625 * 1000)
difference = (space_position - earth_position).at(ts.now()).altaz()
distance_to_height = 100 / np.sin(difference[0].radians)
position = earth_position.at(ts.now()).from_altaz(alt_degrees=difference[0].degrees, az_degrees=difference[1].degrees, distance=Distance(km=distance_to_height))
I have gone through the documentation multiple times, and stumbled upon frame_latlon(frame) for Generic ICRF objects, but I am not sure how to proceed further.
Trying it completely trigonometrically with the latitudes and longitudes didn't yield the desired results either.
Unfortunately, I do not really have any validated results that could be used to check a solution against. Imagining it trigonometrically again, it is obvious that increasing the altitude of the satellite position would move the lat/lon of the intersection closer to the position on Earth, while decreasing the altitude would move the intersection closer to the satellite.
That is an interesting problem, which Skyfield’s API provides no easy way to ask about; if you could outline the larger problem that will be solved by knowing the intersection of the line-of-sight with a particular altitude, then it is possible that a routine addressing that problem could be written for future users tackling the same question.
In the meantime:
To get your script running I had to import Distance from api.
The name dis was not recognized, so I replaced it with distance_to_height, hoping that it was the name intended.
Calling ts.now() is giving you a slightly different date and time on each call. While the script runs so fast that it probably does not matter, I have for clarity pivoted to calling now() only once at the beginning of the script, which is also slightly faster than calling it repeatedly. (Actually in this case it’s much faster, because the rotation matrices only get computed once rather than having to be computed over again for each separate time object, but that’s a hidden detail that’s not easy to see.)
I suspect a problem with your geometry: the 100 / sin() maneuver would only work if the Earth were flat, I think? But maybe you are always dealing with nearly-overhead satellites and so the error is manageable? (Or maybe I am mis-imagining the geometry; feel free to provide a diagram if the math is in fact correct.)
For readability I’ll give the components of altaz() names rather than numbers.
With those tweaks in place,
I think the answer is that you need to manually construct a Geocentric
position by adding together the position of the observer
and the relative vector you have created
between the observer and the kind-of-100km point along the line of sight.
Having to take a manual step like this suggests a possible area
where Skyfield can improve.
Here is how it looks in code:
from skyfield.api import load, Distance
from skyfield.positionlib import Geocentric
from skyfield.toposlib import Topos
import numpy as np
ts = load.timescale()
t = ts.now()
earth_position = Topos('52.230039 N', '4.842402 E', elevation_m=10)
space_position = Topos('51.526200 N', '5.347795 E', elevation_m=625 * 1000)
alt, az, distance = (space_position - earth_position).at(t).altaz()
distance_to_height = 100 / np.sin(alt.radians)
e = earth_position.at(t)
p = e.from_altaz(alt_degrees=alt.degrees, az_degrees=az.degrees, distance=Distance(km=distance_to_height))
g = Geocentric(e.position.au + p.position.au, t=t)
s = g.subpoint()
print(s)
print(s.elevation.km, '<- warning: 100/sin() did not produce exactly 100')
The result I see is:
Topos 52deg 06' 30.0" N 04deg 55' 51.7" E
100.02752954478532 <- warning: 100/sin() did not produce exactly 100
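If the flat-Earth approximation ever matters for your use case, here is a sketch of the exact geometry under the assumption of a spherical Earth of radius R (ignoring the observer's 10 m elevation): the law of cosines on the geocentric triangle gives (R + h)^2 = R^2 + d^2 + 2 R d sin(alt), which you can solve for the slant distance d and use in place of the 100 / sin() line:
import numpy as np

R = 6371.0  # assumed mean Earth radius in km

def slant_range_km(alt_radians, h_km=100.0):
    # solve (R + h)^2 = R^2 + d^2 + 2 R d sin(alt) for d
    s = np.sin(alt_radians)
    return -R * s + np.sqrt((R * s) ** 2 + 2.0 * R * h_km + h_km ** 2)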
And for the future,
I have added some thoughts to the Skyfield TODO.rst file
that together might move towards unlocking a more idiomatic way
to perform this kind of calculation in the future —
though I suspect that a few more steps even beyond these will be necessary:
https://github.com/skyfielders/python-skyfield/commit/ba1172a0ccfef84473436d9d7b8a7d7011344cbd
My team and I are participating in the ESA Astro Pi challenge. Our program will run on the ISS for 3 hours, and we will then get our results back and analyze them.
We want to investigate the connection between the magnetic intensity measurements from the Sense HAT magnetometer and the predictions of the World Magnetic Model (WMM), in order to research the accuracy of the Sense HAT magnetometer.
The program will get raw magnetometer data (X, Y and Z) in microteslas from the Sense HAT and calculate the values H and F as described in the British Geological Survey's article (section 2.1). It will then save them to a CSV file, along with a timestamp and the location calculated with ephem.
We will then compare the values Z, H and F from the ISS with those from the WMM and create maps of our data and the differences (like figures 6, 8 and 10), to research how accurate the Sense HAT magnetometer data are.
The problem is that the orientation of the magnetometer will always be different, so our data will always be (very) different from the WMM values and we won't be able to compare them correctly.
We talked with the Astro Pi support team, and they suggested we "normalise the angled measurements so it looks like they were taken by a device aligned North/South".
Unfortunately, we (and they) don't know how to do this, so they suggested asking this question on Stack Exchange. I asked it on Math Stack Exchange, Physics Stack Exchange and the Raspberry Pi Forums. Unfortunately, it didn't receive any answers there, so I am asking it again here.
How can we do this? We have data for the timestamp, the ISS location (latitude, longitude, elevation), the magnetic data (X, Y and Z) and also the direction from North.
We want to normalise our data so we will be able to correctly compare them with data from WMM.
Here is the part of our program that calculates the magnetometer values (from the non-normalised data):
from math import sqrt, atan, degrees
from sense_hat import SenseHat

sense = SenseHat()
compass = sense.get_compass_raw()
try:
# Get raw data (values are swapped because Sense HAT on ISS is in different position)
# x: northerly intensity
# y: easterly intensity
# z: vertical intensity
x = float(compass['z'])
y = float(compass['y'])
z = float(compass['x'])
except (ValueError, KeyError) as err:
# Write error to log (excluded from this snippet)
pass
try:
# h: horizontal intensity
# f: total intensity
# d: declination
# i: inclination
h = sqrt(x ** 2 + y ** 2)
f = sqrt(h ** 2 + z ** 2)
d = degrees(atan(y / x))
i = degrees(atan(z / h))
except (TypeError, ValueError, ZeroDivisionError) as err:
# Write error to log (excluded from this snippet)
pass
There is also some simple simulator available with our code: https://trinket.io/library/trinkets/cc87813ce7
Part of an email from the Astro Pi team about the location and orientation of the magnetometer:
Z is going down through the middle of the Sense Hat.
X runs between the USB ports and SD card slot.
Y runs across from the HDMI port to the 40 way pin header.
On the ISS the AstroPi orientation is that the Ethernet + USB ports face the deck and the SD card slot is towards the sky.
So, that's basically a rotation around the Y axis from flat. So you keep the Y axis the same and swap around Z and X.
It can help to look at the Google Street view of the interior of the ISS Columbus module to get a better idea how the AstroPi is positioned;
https://www.google.com/streetview/#international-space-station/columbus-research-laboratory
If you pan the camera down and to the right, you'll see a green light - that's the AstroPi. The direction of travel for the whole space station is towards the inflatable Earth ball you can see on the left.
So, broadly speaking, the SD card slot points towards the zenith, as in away from the centre of the Earth (so the X axis).
The LED matrix is facing the direction of travel of the space station (the Z axis).
Because of the orbital path of the ISS the Z and Y axes will continually change direction relative to the poles as it moves around the Earth.
So, I am guessing you want to normalise the angled measurements so it looks like they were taken by a device aligned North/South?
I think you need to create a local reference coordinate system similar to NEH (north, east, height/altitude/up), something like in:
Representing Points on a Circular Radar Math approach.
It is commonly used in aviation as a reference frame (heading is derived from it), so your reference frame is computed from your geo location, with its axes pointing North, East and Up.
Now the problem is: what do "aligned North/South" and "normalising" actually mean here?
If the reference device measures just a projection, then you would need to do something like this:
dot(measured_vector,reference_unit_direction)
where the direction would be the North direction as a unit vector.
If the reference device measures in full 3D too, then you need to transform both the reference and the tested measured data into the same coordinate system. That is done by using
transform matrices
So a simple matrix * vector multiplication will do ... Only then compute the values H, F, Z; I do not know what those are without going through the papers (I would expect E, H or B vectors instead).
However, if you do not have the geo location at the moment of measurement, then you only have the North direction relative to the ISS in the form of Euler angles, so you cannot construct a 3D reference frame at all (unless you have 2 known vectors instead of just one, such as Up). In that case you need to go with option 1, the projection (using the dot product and the north direction vector), so you will handle just scalar values instead of 3D vectors afterwards.
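For example, option 1 boils down to a single dot product; here is a tiny sketch (the vectors are made-up sample values, just to show the operation):
import numpy as np

measured = np.array([12.0, -30.0, 25.0])   # raw B in sensor axes, microtesla (sample values)
north = np.array([0.6, 0.8, 0.0])          # hypothetical North direction in sensor axes
north_unit = north / np.linalg.norm(north)

projection = np.dot(measured, north_unit)  # scalar component of B along North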
[Edit1]
From your link:
The geomagnetic field vector, B, is described by the orthogonal
components X (northerly intensity), Y (easterly intensity) and Z
(vertical intensity, positive downwards);
This is not my field of expertise, so I might be wrong here, but this is how I understand it:
B(Bx,By,Bz) - magnetic field vector
a(ax,ay,az) - acceleration
Now, F is the magnitude of B, so it is invariant under rotation:
F = |B| = sqrt( Bx*Bx + By*By + Bz*Bz )
You need to compute the X, Y, Z values of B in the NED reference frame (North, East, Down), so you need the basis vectors first:
Down = a/|a| // gravity points down
North = B/|B| // north is close to B direction
East = cross(Down,North) // East is perpendicular to Down and North
North = cross(East,Down) // north is perpendicular to Down and East, this should convert North to the horizontal plane
You should render them to visually check that they point in the correct directions; if not, negate them by reordering the cross operands (I might have the order wrong, as I am used to using an Up vector instead). Now just convert B to NED:
X = dot(North,B)
Y = dot(East,B)
Z = dot(Down,B)
And now you can compute H:
H = sqrt( X*X +Y*Y )
You will find the vector math needed for this in the transform matrix link above.
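Here is a minimal NumPy sketch of the whole recipe above, assuming the accelerometer vector a and magnetometer vector b are given as 3-vectors in the same sensor axes (sample values are made up; as noted, you may need to flip cross operands for your axis conventions):
import numpy as np

def b_to_ned(a, b):
    # build the NED basis from gravity (a) and the field (b)
    down = a / np.linalg.norm(a)       # gravity points down
    north = b / np.linalg.norm(b)      # B points roughly north
    east = np.cross(down, north)
    east /= np.linalg.norm(east)       # perpendicular to down and north
    north = np.cross(east, down)       # forces north into the horizontal plane
    # project B onto the basis; note Y is ~0 by construction,
    # since north itself is derived from B
    X, Y, Z = np.dot(north, b), np.dot(east, b), np.dot(down, b)
    H = np.hypot(X, Y)
    F = np.linalg.norm(b)              # invariant under rotation
    return X, Y, Z, H, F

X, Y, Z, H, F = b_to_ned(np.array([0.1, -9.7, 1.2]), np.array([18.0, -35.0, 20.0]))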
Beware: this will work only if no extra accelerations are present (the sensor is not on a robotic arm during its operation, the ISS is not doing a burn, ...). Otherwise you need to obtain the NED frame differently (e.g. from onboard systems).
If this does not work correctly, you can compute NED from your ISS position instead, but for that you would need to know the exact orientation and displacement of the sensor with respect to the simulation model that provides your location. I do not know what rotations the ISS performs, so I would not touch that subject except as a last resort.
I am afraid I will not have time for coding for some time ... and anyway, coding without sample input data, coordinate system explanations and all the input/output variables is insanity ... a simple negation of an axis will invalidate the whole thing, and there are a lot of ambiguities along the way; to cover all of them you would end up with many, many versions to try.
Apps should be built up incrementally, but I am afraid that without access to the simulation or the real hardware that is not possible. And there is a whole bunch of things that could go wrong, making even simple programs a magnitude harder to code ... I would first check F, as it does not require any "normalisation", to see whether the results are off. If they are off, it might suggest different units or who knows what ...