Finding a GPS location at a certain time given two points? - Python

If I have two known locations and a known speed, how can I calculate the current position at distance d (in km)?
For example, given:
Two GPS locations in EPSG:4326 (WGS 84):
37.783333, -122.416667 # San Francisco
32.715, -117.1625 # San Diego
Traveling at 1km/min in a straight line (ignoring altitude)
How can I find the gps coordinate at a certain distance? A similar SO question uses VincentyDistance in geopy to calculate the next point based on bearing and distance.
I guess, more specifically:
How can I calculate the bearing between two gps points using geopy?
Using VincentyDistance to get the next GPS point by bearing and distance, how do I know whether I have arrived at my destination or should keep going? It doesn't need to be exactly on the destination to count as arrived; say, any point within a radius of 0.5 km of the destination is considered 'arrived'.
i.e.,
import geopy

POS1 = (37.783333, -122.416667)  # origin
POS2 = (32.715, -117.1625)       # destination

def get_current_position(d):
    # use geopy to calculate the bearing between POS1 and POS2,
    # then use VincentyDistance to get the next coordinate
    return gps_coord_at_distance_d

# If the current position is within 0.5 km of the destination, consider it 'arrived'
def has_arrived(curr_pos):
    return True/False

d = 50  # 50 km
print(get_current_position(d))
print(has_arrived(get_current_position(d)))

Ok, figured I'd come back to this question and give it my best shot, given that it hasn't seen any other solutions. Unfortunately I can't test code right now, but I believe your problem can be solved using both geopy and geographiclib. Here goes.
From the terminal (possibly with sudo)
pip install geographiclib
pip install geopy
Now with Python
Get Current Position
from geographiclib.geodesic import Geodesic
import geopy
import geopy.distance

# Inverse() returns a dict; 'azi1' is the initial azimuth (bearing) from point 1 to point 2
bearing = Geodesic.WGS84.Inverse(37.783333, -122.416667, 32.715, -117.1625)['azi1']
# Now we use geopy to step along that bearing, 1 km per minute of travel
dist = geopy.distance.VincentyDistance(kilometers=1)
san_fran = geopy.Point(37.783333, -122.416667)
print(dist.destination(point=san_fran, bearing=bearing))
Has Arrived
def has_arrived(curr_pos):
    return geopy.distance.vincenty(curr_pos, (32.715, -117.1625)).kilometers < .5
Like I said, I unfortunately can't test this, but I believe it is correct. It's possible there will be some unit differences in the bearing calculation: geographiclib measures the azimuth from North, as seen here. Sorry if this isn't exactly right, but since the question hadn't received a response I figured I may as well throw in what I know.
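For completeness, the whole pipeline (distance, interpolated position, arrival check) can also be sketched with nothing but the standard library, using a spherical-Earth model. It is accurate to roughly 0.5%, which is plenty for a 0.5 km arrival radius; the function names here are my own, not geopy's:

```python
import math

R = 6371.0  # mean Earth radius in km (spherical model)

def haversine_km(p1, p2):
    """Great-circle distance in km between two (lat, lon) points in degrees."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p1, *p2))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * R * math.asin(math.sqrt(a))

def point_at_distance(p1, p2, d_km):
    """The point d_km along the great circle from p1 toward p2 (p1 != p2)."""
    total = haversine_km(p1, p2)
    f = d_km / total                 # fraction of the way there (can exceed 1)
    lat1, lon1, lat2, lon2 = map(math.radians, (*p1, *p2))
    delta = total / R                # angular distance
    a = math.sin((1 - f) * delta) / math.sin(delta)
    b = math.sin(f * delta) / math.sin(delta)
    # Interpolate on the unit sphere in Cartesian coordinates, then convert back.
    x = a * math.cos(lat1) * math.cos(lon1) + b * math.cos(lat2) * math.cos(lon2)
    y = a * math.cos(lat1) * math.sin(lon1) + b * math.cos(lat2) * math.sin(lon2)
    z = a * math.sin(lat1) + b * math.sin(lat2)
    return (math.degrees(math.atan2(z, math.hypot(x, y))),
            math.degrees(math.atan2(y, x)))

def has_arrived(curr_pos, dest, radius_km=0.5):
    return haversine_km(curr_pos, dest) < radius_km

POS1 = (37.783333, -122.416667)  # San Francisco
POS2 = (32.715, -117.1625)       # San Diego
current = point_at_distance(POS1, POS2, 50)   # position after 50 km (50 min at 1 km/min)
arrived = has_arrived(current, POS2)
```

Since the traveler covers 1 km per minute, `d_km` is just minutes elapsed, and the arrival check runs against the same haversine helper.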

Related

Travelling Salesperson with scipy.optimize.dual_annealing [duplicate]

How do I solve a Travelling Salesman Problem in Python? I did not find any library; there should be a way using scipy's optimization functions, or other libraries.
My hacky, extremely lazy, "pythonic" brute-force solution is:
tsp_solution = min((sum(Dist[i] for i in zip(per, per[1:])), n, per)
                   for n, per in enumerate(permutations(range(Dist.shape[0]))))[2]
where Dist (numpy.array) is the distance matrix.
If Dist is too big this will take forever.
Suggestions?
The scipy.optimize functions are not constructed to allow straightforward adaptation to the traveling salesman problem (TSP). For a simple solution, I recommend the 2-opt algorithm, which is a well-accepted algorithm for solving the TSP and relatively straightforward to implement. Here is my implementation of the algorithm:
import numpy as np
# Calculate the Euclidean distance in n-space of the route r traversing cities c, ending at the path start.
path_distance = lambda r,c: np.sum([np.linalg.norm(c[r[p]]-c[r[p-1]]) for p in range(len(r))])
# Reverse the order of all elements from element i to element k in array r.
two_opt_swap = lambda r,i,k: np.concatenate((r[0:i],r[k:-len(r)+i-1:-1],r[k+1:len(r)]))
def two_opt(cities,improvement_threshold): # 2-opt Algorithm adapted from https://en.wikipedia.org/wiki/2-opt
    route = np.arange(cities.shape[0]) # Make an array of row numbers corresponding to cities.
    improvement_factor = 1 # Initialize the improvement factor.
    best_distance = path_distance(route,cities) # Calculate the distance of the initial path.
    while improvement_factor > improvement_threshold: # If the route is still improving, keep going!
        distance_to_beat = best_distance # Record the distance at the beginning of the loop.
        for swap_first in range(1,len(route)-2): # From each city except the first and last,
            for swap_last in range(swap_first+1,len(route)): # to each of the cities following,
                new_route = two_opt_swap(route,swap_first,swap_last) # try reversing the order of these cities
                new_distance = path_distance(new_route,cities) # and check the total distance with this modification.
                if new_distance < best_distance: # If the path distance is an improvement,
                    route = new_route # make this the accepted best route
                    best_distance = new_distance # and update the distance corresponding to this route.
        improvement_factor = 1 - best_distance/distance_to_beat # Calculate how much the route has improved.
    return route # When the route is no longer improving substantially, stop searching and return the route.
Here is an example of the function being used:
# Create a matrix of cities, with each row being a location in 2-space (function works in n-dimensions).
cities = np.random.RandomState(42).rand(70,2)
# Find a good route with 2-opt ("route" gives the order in which to travel to each city by row number.)
route = two_opt(cities,0.001)
And here is the approximated solution path shown on a plot:
import matplotlib.pyplot as plt
# Reorder the cities matrix by route order in a new matrix for plotting.
new_cities_order = np.concatenate((np.array([cities[route[i]] for i in range(len(route))]),np.array([cities[0]])))
# Plot the cities.
plt.scatter(cities[:,0],cities[:,1])
# Plot the path.
plt.plot(new_cities_order[:,0],new_cities_order[:,1])
plt.show()
# Print the route as row numbers and the total distance travelled by the path.
print("Route: " + str(route) + "\n\nDistance: " + str(path_distance(route,cities)))
If the speed of algorithm is important to you, I recommend pre-calculating the distances and storing them in a matrix. This dramatically decreases the convergence time.
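As a sketch of that precalculation (`dist_matrix` and `path_distance_fast` are my names, not part of the code above): once the pairwise distances are tabulated, the route length reduces to fancy-indexed table lookups.

```python
import numpy as np

# The original per-call route length, which recomputes every norm each time.
path_distance = lambda r, c: np.sum([np.linalg.norm(c[r[p]] - c[r[p-1]]) for p in range(len(r))])

cities = np.random.RandomState(42).rand(70, 2)

# Precompute all pairwise Euclidean distances once...
diff = cities[:, None, :] - cities[None, :, :]
dist_matrix = np.sqrt((diff ** 2).sum(axis=-1))

# ...so the closed-tour length becomes a single fancy-indexed sum:
# pairs (r[p], r[p-1]) for every p, exactly as in path_distance above.
path_distance_fast = lambda r, m: m[r, np.roll(r, 1)].sum()
```

Passing `dist_matrix` instead of `cities` into the 2-opt inner loop replaces O(n) norm computations per candidate route with array lookups, which is where the convergence-time savings come from.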
Edit: Custom Start and End Points
For a non-circular path (one which ends at a location different from where it starts), edit the path distance formula to
path_distance = lambda r,c: np.sum([np.linalg.norm(c[r[p+1]]-c[r[p]]) for p in range(len(r)-1)])
and then reorder the cities for plotting using
new_cities_order = np.array([cities[route[i]] for i in range(len(route))])
With the code as it is, the starting city is fixed as the first city in cities, and the ending city is variable.
To make the ending city the last city in cities, restrict the range of swappable cities by changing the range of swap_first and swap_last in two_opt() with the code
for swap_first in range(1,len(route)-3):
    for swap_last in range(swap_first+1,len(route)-1):
To make both the starting and ending cities variable, instead expand the range of swap_first and swap_last with
for swap_first in range(0,len(route)-2):
    for swap_last in range(swap_first+1,len(route)):
I recently found this option for using linear optimization for the TSP:
https://gist.github.com/mirrornerror/a684b4d439edbd7117db66a56f2483e0
Nonetheless I agree with some of the other comments; this is just a reminder that there are ways to use linear programming for this problem.
Some academic publications include the following:
http://www.opl.ufc.br/post/tsp/
https://phabi.ch/2021/09/19/tsp-subtour-elimination-by-miller-tucker-zemlin-constraint/

How do I calculate GPS distance accurately in Python

I am trying to track the distance traveled between GPS coordinates. My problem is that the GPS can report small movements while sitting still. I currently append the new coordinates to a list every second, calculate the distance between each point and the previous one, append those distances to another list, and add them all together.
The issue is that the small movements while standing still keep accumulating. Does anyone know the proper way to do this?
self.breadcrumbs = []
# Calculate linear distance from GPS fixes
while 1:
    report = gpsp.get_current_value()  # Retrieves GPS values
    try:
        self.lat = report.lat
        self.lon = report.lon
        self.latlon = (self.lat, self.lon)    # Put lat/lon into a tuple
        self.breadcrumbs.append(self.latlon)  # Append lat/lon to breadcrumb list
        breadcrumb_distances = []  # Holds distances between lat/lon data points
        for i, b in enumerate(self.breadcrumbs):
            if i == 0:
                continue  # no previous point to measure from
            current_location = b
            last_location = self.breadcrumbs[i - 1]
            miles = geodesic(current_location, last_location).miles
            feet = miles * 5280  # convert to feet
            breadcrumb_distances.append(feet)
        cumulative_distance = round(sum(breadcrumb_distances), 2)
        print(cumulative_distance)
    except Exception as e:
        print(e)
    sleep(1)
There is no single "right" way to do this. The problem is that when you are moving very slowly, the "erroneous" movement overwhelms what the user would perceive as the actual movement. It becomes a tradeoff between taking data that is in error and dropping data that represents actual motion. The problem increases the faster you take data and the slower the velocity.
One method is to set a minimum distance that will cause you to log new data. If the new point is within some distance ϵ of the previous one, drop the point. For good choices, this will ignore motion when actually stopped. You then need to not care about the time between data points, or you need to log the time for points (or somehow indicate the location and duration of the gaps). If the problem is due to stopped periods, this may be the best.
Another method is to reduce the logging frequency. For some applications, backing off to 5s or similar may be sufficient.
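A minimal sketch of the minimum-distance idea, with a plain haversine helper instead of geopy's geodesic so it stands alone; the 10-foot default threshold is an illustrative guess to be tuned per receiver, and the class name is mine:

```python
import math

EARTH_RADIUS_FEET = 20_902_231  # 6371 km expressed in feet

def haversine_feet(p1, p2):
    """Great-circle distance in feet between two (lat, lon) points in degrees."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p1, *p2))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * EARTH_RADIUS_FEET * math.asin(math.sqrt(a))

class DistanceTracker:
    """Accumulate traveled distance, dropping sub-threshold GPS jitter."""
    def __init__(self, min_step_feet=10.0):
        self.min_step = min_step_feet
        self.last_fix = None
        self.total_feet = 0.0

    def update(self, latlon):
        if self.last_fix is None:
            self.last_fix = latlon
            return self.total_feet
        step = haversine_feet(self.last_fix, latlon)
        if step >= self.min_step:       # ignore motion smaller than the threshold
            self.total_feet += step
            self.last_fix = latlon      # only advance the anchor on accepted points
        return self.total_feet
```

As a side benefit, keeping a running total and only the last accepted fix avoids recomputing the whole breadcrumb list every second, which the original loop does.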

Fast great circle for multiple points - Python geopy

Is it possible to speed up the great_circle(pos1, pos2).miles from geopy if using it for multiple thousand points?
I want to create something like a distance matrix and at the moment my machine needs 5 seconds for 250,000 calculations.
Actually, pos1 is always the same, if that helps.
Another "restriction" in my case is that I only want the points pos2 whose distance is less than a constant x (the exact distance doesn't matter).
Is there a fast method? Do I need a faster but less accurate function than great_circle, or is it possible to speed it up without losing accuracy?
Update
In my case the question is whether a point is inside a circle, so it is easy to first check whether the point is inside the bounding square.
start = geopy.Point(mid_point_lat, mid_point_lon)
d = geopy.distance.VincentyDistance(miles=radius)
p_north_lat = d.destination(point=start, bearing=0).latitude
# check whether the given point lat is > p_north_lat
# and so on for east, south and west
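Beyond that bounding-box pre-check: since pos1 is fixed and only circle membership matters, all 250,000 comparisons can be done in one vectorized NumPy pass (a sketch using a spherical-Earth haversine, which is accurate enough when only membership within x is needed; the function name is mine):

```python
import numpy as np

def within_radius(center, points, radius_miles):
    """Boolean mask of which points lie within radius_miles of center.

    center: (lat, lon) in degrees; points: array of shape (n, 2) in degrees.
    Spherical approximation (~0.5% error), fine for a membership test.
    """
    R = 3958.8  # mean Earth radius in miles
    lat0, lon0 = np.radians(center)
    lat, lon = np.radians(points).T
    a = (np.sin((lat - lat0) / 2) ** 2
         + np.cos(lat0) * np.cos(lat) * np.sin((lon - lon0) / 2) ** 2)
    return 2 * R * np.arcsin(np.sqrt(a)) < radius_miles
```

One vectorized call over a (250000, 2) array replaces the quarter-million individual great_circle calls, and the comparison against the radius never needs the exact distances.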

Find third coordinate of (right) triangle given 2 coordinates and ray to third

I'll start explaining my problem from quite far back, so you can suggest completely different approaches and understand my custom objects and functions.
Over the years I have recorded many bicycle GPS tracks (.gpx). I decided to merge these (mostly overlapping) tracks into one large graph, merging or removing most of the track points. So far I have managed to simplify the tracks (a feature of the gpxpy module that removes about 90% of track points while preserving the positions of corners) and load them into my current program.
My current Python 3 program loads the gpx tracks and optimises the graph in four scans. Here are the planned steps:
Import points from gpx (working)
Join points located close to each other (working)
Merge edges under small angles (the problem is with this step)
Remove points on straights (where the angle between both edges is over 170 degrees). Looks like it is working.
Clean-up by resetting unique indexing of points (working)
Final checking of all edges in graph.
In my program I started counting the steps from 0, because the first one simply opens and parses the file; Stack Overflow doesn't let me start the numbering from 0.
To store the graph I have a dictionary punktid (Estonian for "points"), where each punkt (point) object is stored under the key uid/ui (unique ID). The unique ID is also stored in the point itself. The weight attribute is used in the 2nd and 3rd steps to average points while taking earlier merges into account.
class punkt:
    def __init__(self, lo, la, idd, edge=None, ele=0, wei=1):
        self.lng = lo     # Longitude
        self.lat = la     # Latitude
        self.uid = idd    # Unique ID
        self.edges = set() if edge is None else edge  # Set of neighbour nodes
                          # (None default avoids a shared mutable default argument)
        self.att = ele    # Elevation
        self.weight = wei # Used to get weighted average
>>> punktid
{1: <__main__.punkt object at 0x0000006E9A9F7FD0>,
2: <__main__.punkt object at 0x0000006E9AADC470>, 3: ...}
>>> punktid[1].__dict__
{'weight': 90, 'uid': 9000, 'att': 21.09333333333333, 'lat': 59.41757, 'lng': 24.73907, 'edges': {1613, 1218, 1530}}
As you can see, there was a minor bug in the clean-up, where uid was not updated. I have fixed it by now, but I left the output in so you can see the scale of the graph: the largest index in punktid was 1699/11787.
Getting to core problem
Let's say I have 3 points: A, B and C (i, lyhem[2] and lyhem[0] respectively in the following code slice). A has a common edge with B and with C, but B and C might not share an edge. C is closer to A than B is. To reduce the size of the graph, I want to move C closer to edge AB (while respecting the weights of B and C) and redirect AB through C.
The solution I came up with is to find a temporary point D on AB that is closest to C, then take the weighted average of D and C, save it as E, and redirect all of C's edges and AB to it. See the simplified figure; note that E=(C+D)/2 is not completely accurate. I cannot add more than two links, but I have 2 additional images illustrating my problem.
The biggest problem was finding the coordinates of D. I found a possible solution on the Mathematica site, but it contains a ± sign, because the construction yields two candidate coordinates, whereas I already know the line the point lies on. In any case, I don't know how to implement it correctly, and my code has become quite messy:
# 2nd run: merge edges under small angles
for i in set(punktid.keys()):
    try:
        naabrid1 = frozenset(punktid[i].edges)  # naabrid / neighbours
        for e in naabrid1:
            t = set(naabrid1)
            t.remove(e)
            for u in t:
                try:
                    a = nurk_3(punktid[i], punktid[e], punktid[u])  # Returns angle EIU in degrees. 0<=a<=180
                    if a < 10:
                        de = ((punktid[i].lat - punktid[e].lat)**2 +
                              ((punktid[i].lng - punktid[u].lng))*2 **2)  # distance i-e
                        du = ((punktid[i].lat - punktid[u].lat)**2 +
                              ((punktid[i].lng - punktid[u].lng)*2) **2)  # distance i-u
                        b = radians(a)
                        if du < de:
                            lyhem = [u, du, e]  # lühem in English is shorter,
                        else:                   # but currently it should be lähem/closer
                            lyhem = [e, de, u]
                        if sin(b)*lyhem[1] < r:
                            lr = abs(sin(b)*lyhem[1])
                            ml = tan(nurk_coor(punktid[i], punktid[lyhem[0]]))  # Lühema tõus / slope of the closer one (C)
                            mp = tan(nurk_coor(punktid[i], punktid[lyhem[2]]))  # Pikema / ...the farther one / B
                            mr = -1/ml  # Ristsirge / perpendicular ...BD
                            p1 = (punktid[i].lng + lyhem[1]*(1/(1 + ml**2)**0.5),
                                  punktid[i].lat + lyhem[1]*(ml/(1 + ml**2)**0.5))
                            p2 = (punktid[i].lng - lyhem[1]*(1/(1 + ml**2)**0.5),
                                  punktid[i].lat - lyhem[1]*(ml/(1 + ml**2)**0.5))
                            d1 = ((punktid[lyhem[0]].lat - p1[1])**2 +
                                  ((punktid[lyhem[0]].lng - p1[0])*2)**2)**0.5  # distance i-e
                            d2 = ((punktid[lyhem[0]].lat - p2[1])**2 +
                                  ((punktid[lyhem[0]].lng - p2[0])*2)**2)**0.5  # distance i-u
                            if d1 < d2:    # I experimented with one idea,
                                x = p1[0]  # but it made things worse.
                                y = p1[1]  # Originally I simply used p1 coordinates
                            else:
                                x = p2[0]
                                y = p2[1]
                            lo = punktid[lyhem[2]].weight*p2[0]  # Finding weighted average
                            la = punktid[lyhem[2]].weight*p2[1]
                            la += punktid[lyhem[0]].weight*punktid[lyhem[0]].lat
                            lo += punktid[lyhem[0]].weight*punktid[lyhem[0]].lng
                            kaal = punktid[lyhem[2]].weight + punktid[lyhem[0]].weight  # kaal = weight
                            c = (la/kaal, lo/kaal)
                            punktid[ui] = punkt(c[1], c[0], ui, punktid[lyhem[0]].edges,
                                                punktid[lyhem[0]].att, kaal)
                            punktid[i].edges.remove(lyhem[2])
                            punktid[lyhem[2]].edges.remove(i)
                            try:
                                for n in punktid[ui].edges:  # In all neighbours
                                    try:  # remove the link to the old point
                                        punktid[n].edges.remove(lyhem[0])
                                    except KeyError:
                                        pass  # if it doesn't link to the current one
                                    punktid[n].edges.add(ui)  # and add the new point
                                    if log:
                                        printf(punktid[n].edges, 'naabri ' + str(n) + ' edges')
                            except KeyError:  # If a neighbour itself has been removed
                                pass          # (in the same merge), ignore it
                            punktid[ui].edges.add(lyhem[2])
                            punktid[lyhem[2]].edges.add(ui)
                            punktid.pop(lyhem[0])
                            ui += 1
                except KeyError:  # u has been removed
                    pass
    except KeyError:  # i has been removed
        pass
This is a code segment and will likely not run after copy-pasting because of missing variables and functions. The new point is calculated inside the third if-statement, from `if sin(b)*lyhem[1] < r` down to `punktid[ui] = ...`; after that comes the redirecting of the old edges to the new node.
Stating the question clearly: how can I find a point on ray AB, given the two coordinates of line segment AC and the angles at those points (angle ACB should be 90 degrees)? And how do I implement this in Python 3.5?
PS (meta): if somebody needs the full source, how should I provide it (uploading a single text file without registration)? Pastebin, or pasting (spamming) it here? And if I upload it to another site, how do I provide the link, given that new users are limited to two?
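For what it's worth, the foot of the perpendicular D from C onto ray AB has a direct vector-projection formula with no ± ambiguity, treating lng/lat as plane coordinates over these short edges (a sketch; the names are mine, not from the program above):

```python
def foot_of_perpendicular(a, b, c):
    """Project point c onto the ray from a through b.

    Points are (x, y) tuples, e.g. (lng, lat) over short distances.
    Returns D = A + t*(B - A); t is clamped to t >= 0 so D stays on
    the ray, and the angle ADC is 90 degrees by construction.
    """
    abx, aby = b[0] - a[0], b[1] - a[1]
    acx, acy = c[0] - a[0], c[1] - a[1]
    t = (acx * abx + acy * aby) / (abx ** 2 + aby ** 2)  # scalar projection
    t = max(t, 0.0)  # stay on the ray starting at A
    return (a[0] + t * abx, a[1] + t * aby)

def weighted_average(d, c, w_d, w_c):
    """E = weighted mean of D and C, the candidate merge point."""
    w = w_d + w_c
    return ((w_d * d[0] + w_c * c[0]) / w,
            (w_d * d[1] + w_c * c[1]) / w)
```

Because the projection is computed directly from the dot product, there is no need to choose between two roots the way the slope-based construction with ± does.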

Using pyephem to calculate when a satellite crosses a Longitude

I am having a hard time figuring out how to calculate when a satellite crosses a specific Longitude. It would be nice to able to provide a time period and a TLE and be able to return all the times at which the satellite crosses a given longitude during the specified time period. Does pyephem support something like this?
There are so many possible circumstances that users might ask about — when a satellite crosses a specific longitude; when it reaches a specific latitude; when it reaches a certain height or descends to its lowest altitude; when its velocity is greatest or least — that PyEphem does not try to provide built-in functions for all of them. Instead, it provides a newton() function that lets you find the zero-crossing of whatever comparison you want to make between a satellite attribute and a pre-determined value of that attribute that you want to search for.
Note that the SciPy Python library contains several very careful search functions that are much more sophisticated than PyEphem's newton() function, in case you are dealing with a particularly poorly-behaved function:
http://docs.scipy.org/doc/scipy/reference/optimize.html
Here is how you might search for when a satellite — in this example, the ISS — passes a particular longitude, to show the general technique. This is not the fastest possible approach — the minute-by-minute search, in particular, could be sped up if we were very careful — but it is written to be very general and very safe, in case there are other values besides longitude that you also want to search for. I have tried to add documentation and comments to explain what is going on, and why I use znorm instead of returning the simple difference. Let me know if this script works for you, and explains its approach clearly enough!
import ephem
line0 = 'ISS (ZARYA) '
line1 = '1 25544U 98067A 13110.27262069 .00008419 00000-0 14271-3 0 6447'
line2 = '2 25544 51.6474 35.7007 0010356 160.4171 304.1803 15.52381363825715'
sat = ephem.readtle(line0, line1, line2)
target_long = ephem.degrees('-83.8889')
def longitude_difference(t):
    '''Return how far the satellite is from the target longitude.

    Note carefully that this function does not simply return the
    difference of the two longitudes, since that would produce a
    terrible jagged discontinuity from 2pi to 0 when the satellite
    crosses from -180 to 180 degrees longitude, which could happen to be
    a point close to the target longitude.  So after computing the
    difference in the two angles we run degrees.znorm on it, so that the
    result is smooth around the point of zero difference, and the
    discontinuity sits as far away from the target position as possible.
    '''
    sat.compute(t)
    return ephem.degrees(sat.sublong - target_long).znorm
t = ephem.date('2013/4/20')
# How did I know to make jumps by minute here? I experimented: a
# `print` statement in the loop showing the difference showed huge jumps
# when looping by a day or hour at a time, but minute-by-minute results
# were small enough steps to bring the satellite gradually closer to the
# target longitude at a rate slow enough that we could stop near it.
#
# The direction that the ISS travels makes the longitude difference
# increase with time; `print` statements at one-minute increments show a
# series like this:
#
# -25:16:40.9
# -19:47:17.3
# -14:03:34.0
# -8:09:21.0
# -2:09:27.0
# 3:50:44.9
# 9:45:50.0
# 15:30:54.7
#
# So the first `while` loop detects if we are in the rising, positive
# region of this negative-positive pattern and skips the positive
# region, since if the difference is positive then the ISS has already
# passed the target longitude and is on its way around the rest of
# the planet.
d = longitude_difference(t)
while d > 0:
    t += ephem.minute
    sat.compute(t)
    d = longitude_difference(t)
# We now know that we are on the negative-valued portion of the cycle,
# and that the ISS is closing in on our longitude. So we keep going
# only as long as the difference is negative, since once it jumps to
# positive the ISS has passed the target longitude, as in the sample
# data series above when the difference goes from -2:09:27.0 to
# 3:50:44.9.
while d < 0:
    t += ephem.minute
    sat.compute(t)
    d = longitude_difference(t)
# We are now sitting at a point in time when the ISS has just passed the
# target longitude. The znorm of the longitude difference ought to be a
# gently sloping zero-crossing curve in this region, so it should be
# safe to set Newton's method to work on it!
tn = ephem.newton(longitude_difference, t - ephem.minute, t)
# This should be the answer! So we print it, and also double-check
# ourselves by printing the longitude to see how closely it matches.
print('When did ISS cross this longitude?', target_long)
print('At this specific date and time:', ephem.date(tn))
sat.compute(tn)
print('To double-check, at that time, sublong =', sat.sublong)
The output that I get when running this script suggests that it has indeed found the moment (within reasonable tolerance) when the ISS reaches the target longitude:
When did ISS cross this longitude? -83:53:20.0
At this specific date and time: 2013/4/20 00:18:21
To double-check, at that time, sublong = -83:53:20.1
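As an aside, the znorm normalization that longitude_difference relies on can be illustrated on its own, without PyEphem (a stdlib sketch; this `znorm` is my re-implementation of the same idea, not ephem's own function):

```python
import math

def znorm(angle):
    """Normalize an angle difference in radians into the range (-pi, pi]."""
    return -((math.pi - angle) % (2 * math.pi) - math.pi)

# A satellite moving from longitude 179 deg to -179 deg really advances
# by 2 degrees, but the raw difference jumps by nearly a full circle:
raw = math.radians(-179) - math.radians(179)  # about -6.25 rad: jagged
smooth = znorm(raw)                            # about +0.035 rad (+2 deg): smooth
```

This is why Newton's method can be trusted near the zero crossing: the normalized difference is a gently sloping curve there, with the wrap-around discontinuity pushed half a circle away from the target.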
There is a difference between the time the program calculates for the pass over the longitude and the real time. I've checked it against NASA's LIS system (which is aboard the ISS) for finding lightning.
I discovered that over Europe, on some orbits, the time the program calculates for the pass is 30 seconds ahead of the real time, and over Colombia, on some orbits, it is about 3 minutes ahead (perhaps because 1 degree of longitude in Colombia spans more kilometres than 1 degree of longitude in Europe). But this problem only happens on 2 particular orbits: the one that passes over France and comes down over Sicily, and the one that passes over the USA and comes down over Cuba.
Why could this happen?
In my opinion there may be some mistake in the ephem.newton algorithm, or with the TLE (normally it reads the one created at 00:00:00 when the day changes, not the current one, even though the ISS generates 3-4 TLEs per day), or perhaps the sat.sublong function calculates a wrong nadir for the satellite.
Does anyone have an idea or an explanation for this problem?
PS: I need to check this carefully because I need to know when the ISS crosses an area (in order to detect the lightning inside it). If the time the program calculates on some orbits is ahead of the real time, then sat.sublong places the satellite outside the area (it calculates that it hasn't arrived yet) while the program shows it is inside the area; so on some occasions the real time doesn't match the one the program calculates.
Thanks a lot for your time!
