Python route finder for aviation

I made an app that, among other things, tries to find a valid route between 2 airports.
I have all the required data in a SQLite3 database, which I query and plot live (via signals) on a Basemap embedded in PyQt5.
My problem is that I can't find an algorithm that generates all possible variations (disqualifying some along the way, as the full set of possibilities is enormous) and stores them so I can output the final valid routes.
I don't think Dijkstra's algorithm can be applied here, since a route can reach a dead end at any time.
My main problem is the algorithm and its implementation, not the data, so don't hesitate to ask for any data required by any possible algorithm.
The algorithm hints are:
I have a starting waypoint.
I find all routes that include this starting point, disqualifying opposite headings (each route has various waypoints).
Find the next waypoint for each route.
Now this waypoint can be connected to another route, and so on.
Routes are then tested and disqualified by various criteria or by reaching dead ends.
Continue until you reach the final (target) waypoint.
Output the route(s), stored somehow.
What I've got so far (it has stack issues):
# module-level imports needed by this snippet
import math
import time

## finding base direction ##
# atan2 takes (y, x); here the latitude difference is the y component
base_radians = math.atan2(self.dest_coord[0] - self.dep_coord[0],
                          self.dest_coord[1] - self.dep_coord[1])
base_degrees = math.degrees(base_radians)
print(base_degrees)
if base_degrees < 0:
    base_heading = 'W'
else:
    base_heading = 'E'

### finding all routes connected to the first waypoint ###
self.cursor.execute("select DISTINCT ats_ident,seq_num from dafif_ats where wpt1_ident = ? AND ats_icao = ? AND direction = ? ORDER BY ats_ident,seq_num ASC", ('ATV', 'LGGG', base_heading))
sub_ats_idents = self.cursor.fetchall()

#### for each route find the next waypoints ###
for i in sub_ats_idents:
    self.cursor.execute("select wpt1_ident,wpt2_ident from dafif_ats where ats_ident = ? and ats_icao = ? and direction = ? and seq_num >= ? ORDER BY seq_num ASC", (i[0], 'LGGG', base_heading, i[1]))
    each_wpt_combo = self.cursor.fetchall()
    #### for each next waypoint find possible routes ###
    for x in each_wpt_combo:
        self.cursor.execute("select DISTINCT ats_ident,seq_num from dafif_ats where wpt1_ident = ? AND ats_icao = ? AND direction = ? ORDER BY ats_ident,seq_num ASC", (x[0], 'LGGG', base_heading))
        each_ats = self.cursor.fetchall()
        print(each_ats)
        #### for each subroute plot its waypoints ###
        for z in each_ats:
            self.cursor.execute("select wpt1_dlon,wpt1_dlat,wpt2_dlon,wpt2_dlat from dafif_ats where wpt1_ident = ? AND ats_icao = ? AND direction = ? ORDER BY ats_ident,seq_num ASC", (x[0], 'LGGG', base_heading))
            plot_var = self.cursor.fetchall()
            self.route_sender.emit(plot_var)
            time.sleep(0.1)
Any material or examples to read would be super.
Thanks in advance.

For future readers: the A* algorithm with heuristics is the solution for this kind of problem.
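For something concrete, here is a minimal A* sketch over a waypoint graph. Everything here is hypothetical glue: `neighbours` would be backed by queries like the ones in the question, `coords` maps waypoint idents to (lat, lon) pairs, and the heuristic is plain straight-line distance to the target as a lower bound.
import heapq
import math

def heuristic(a, b, coords):
    # straight-line distance to the target: an admissible lower bound
    (lat1, lon1), (lat2, lon2) = coords[a], coords[b]
    return math.hypot(lat2 - lat1, lon2 - lon1)

def a_star(start, goal, neighbours, coords):
    # neighbours(wpt) -> iterable of (next_wpt, segment_cost)
    open_heap = [(0.0, start)]     # entries are (g + h, waypoint)
    g_cost = {start: 0.0}          # best known cost from start
    came_from = {}
    while open_heap:
        _, current = heapq.heappop(open_heap)
        if current == goal:
            route = [current]      # rebuild the route by walking parent links
            while current in came_from:
                current = came_from[current]
                route.append(current)
            return route[::-1]
        for nxt, cost in neighbours(current):
            new_g = g_cost[current] + cost
            if new_g < g_cost.get(nxt, float('inf')):
                g_cost[nxt] = new_g
                came_from[nxt] = current
                heapq.heappush(open_heap, (new_g + heuristic(nxt, goal, coords), nxt))
    return None                    # every branch dead-ended: no valid route
Dead ends are handled naturally: a branch that cannot be extended simply stops producing candidates, and the search continues from the next-best open waypoint.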

Related

Simple REST API Python script takes a lot of time (or doesn't end)

For a business process I need to calculate the driving distance between 1 origin and 30k destinations.
I get both origin and destination coordinates from a Google Sheet. Destinations is a matrix (approx. 100 x 30).
I'm using the HERE API to calculate the distance.
The result should be the same destinations matrix, but with the distances (in the same order as the destination coordinates).
This is the part of the script that calculates the distances and, I think, the one that takes longest:
import json
import urllib.request
from urllib.parse import urlencode
import pandas as pd

distance = pd.DataFrame()
for row in destinations.itertuples():
    a = row[1:]
    distance1 = []
    for column in a:
        try:
            args = {'waypoint0': 'geo!' + origins, 'waypoint1': 'geo!' + column, 'mode': 'fastest;truck'}
            qstr = urlencode(args)
            # note the '&' separating the apiKey from the other query parameters
            url = "https://route.ls.hereapi.com/routing/7.2/calculateroute.json?apiKey=xxxx&" + qstr
            response = urllib.request.urlopen(url)
            dist = json.loads(response.read())['response']['route'][0]['leg'][0]['length'] / 1000
        except Exception:
            dist = 10000  # sentinel for failed requests
        distance1.append(dist)
    distance2 = pd.DataFrame(distance1).T
    # note: DataFrame.append is deprecated in newer pandas; pd.concat is the modern spelling
    distance = distance.append(distance2)
Can anyone think of a better way to make the script actually finish?
Thanks!!
The logic looks pretty much correct. If you need to limit the loop count, please check whether the Large-Scale Matrix Routing API aligns with the use case.
The Large-Scale Matrix Routing service is an HTTP JSON API for calculating routing matrices with a large number of start and destination points (e.g. 10,000 x 10,000).
For more details, please refer to the following doc:
https://developer.here.com/documentation/large-matrix/api-reference-swagger.html
Note: please remove the apiKey from the shared code snippet.
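If the per-destination requests have to stay, another option (not HERE-specific, just a standard technique) is to run them concurrently with a thread pool, since the loop is network-bound. A sketch, assuming `origins` is the single origin string from the question and `flat_destinations` is a hypothetical flat list of destination coordinate strings:
import json
import urllib.request
from urllib.parse import urlencode
from concurrent.futures import ThreadPoolExecutor

def fetch_distance(origin, dest):
    # one routing request; falls back to the same sentinel on any failure
    args = {'waypoint0': 'geo!' + origin,
            'waypoint1': 'geo!' + dest,
            'mode': 'fastest;truck'}
    url = ("https://route.ls.hereapi.com/routing/7.2/calculateroute.json"
           "?apiKey=xxxx&" + urlencode(args))
    try:
        with urllib.request.urlopen(url, timeout=30) as response:
            data = json.loads(response.read())
        return data['response']['route'][0]['leg'][0]['length'] / 1000
    except Exception:
        return 10000

# run up to 20 requests at a time; map() preserves input order
with ThreadPoolExecutor(max_workers=20) as pool:
    distances = list(pool.map(lambda d: fetch_distance(origins, d), flat_destinations))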

Pygmo2: migration between islands in an archipelago during evolution

I'm trying to use the Python library Pygmo2 (https://esa.github.io/pagmo2/index.html) to parallelize an optimization problem.
To my understanding, parallelization can be achieved with an archipelago of islands (in this case, mp_island).
As a minimal working example, one of the tutorials from the official site can serve: https://esa.github.io/pagmo2/docs/python/tutorials/using_archipelago.html
I extracted the code:
import pygmo as pg

class toy_problem:
    def __init__(self, dim):
        self.dim = dim

    def fitness(self, x):
        return [sum(x), 1 - sum(x * x), -sum(x)]

    def gradient(self, x):
        return pg.estimate_gradient(lambda x: self.fitness(x), x)

    def get_nec(self):
        return 1

    def get_nic(self):
        return 1

    def get_bounds(self):
        return ([-1] * self.dim, [1] * self.dim)

    def get_name(self):
        return "A toy problem"

    def get_extra_info(self):
        return "\tDimensions: " + str(self.dim)

a_cstrs_sa = pg.algorithm(pg.cstrs_self_adaptive(iters=1000))
p_toy = pg.problem(toy_problem(50))
p_toy.c_tol = [1e-4, 1e-4]
archi = pg.archipelago(n=32, algo=a_cstrs_sa, prob=p_toy, pop_size=70)
print(archi)
archi.evolve()
print(archi)
Looking at the documentation of the old version of the library (http://esa.github.io/pygmo/documentation/migration.html), migration between islands seems to be an essential feature of the island parallelization model.
Also, to my understanding, optimization algorithms like evolutionary algorithms could not work without it.
However, nowhere in the documentation of Pygmo2 can I find how to perform migration.
Is it happening automatically in an archipelago?
Does it depend on the selected algorithm?
Is it not yet implemented in Pygmo2?
Is the documentation on this still missing, or did I just not find it?
Can somebody enlighten me?
pagmo2 has implemented migration since v2.11; the PR has been completed and merged into master. Almost all capabilities present in pagmo 1.x are restored. We will still add more topologies in the future, but they can already be implemented manually. Refer to the docs here: https://esa.github.io/pagmo2/docs/cpp/cpp_docs.html
A tutorial and example are missing and will be added in the near future (help is welcome).
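For illustration, a hedged sketch of what this looks like in pygmo >= 2.11, assuming the archipelago constructor accepts a topology via the `t` keyword and that `pg.ring()` is one of the built-in topologies; with a connected topology set, migration happens automatically while the islands evolve:
import pygmo as pg

# toy_problem as defined in the question above
a_cstrs_sa = pg.algorithm(pg.cstrs_self_adaptive(iters=1000))
p_toy = pg.problem(toy_problem(50))
p_toy.c_tol = [1e-4, 1e-4]

# islands connected in a ring: individuals migrate between neighbours
archi = pg.archipelago(n=32,
                       t=pg.topology(pg.ring()),
                       algo=a_cstrs_sa,
                       prob=p_toy,
                       pop_size=70)
archi.evolve()
archi.wait_check()   # re-raises any exception from the asynchronous evolutions
print(archi)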
The migration framework has not been fully ported from pagmo1 to pagmo2 yet. There is a long-standing PR open here:
https://github.com/esa/pagmo2/pull/102
We will complete the implementation of the migration framework in the next few months, hopefully by the beginning of the summer.
IMHO, the PyGMO2/pagmo documentation confirms that the migration feature is present:
The archipelago class is the main parallelization engine of pygmo. It essentially is a container of island able to initiate evolution (optimization tasks) in each island asynchronously while keeping track of the results and of the information exchange (migration) between the tasks ...
With the exception of thread_island-s (where some automated inference may take place and enforce them for thread-safe UDI-s), all other island types, { mp_island | ipyparallel_island }-s, do create a GIL-independent form of parallelism, yet the computing is performed via an async-operated .evolve() method.
In original PyGMO, the archipelago class was auto .__init__()-ed with attribute topology = unconnected(), unless specified explicitly, as documented in PyGMO, having a tuple of call-interfaces for archipelago.__init__() method ( showing just the matching one ):
__init__( <PyGMO.algorithm> algo,
          <PyGMO.problem>   prob,
          <int>             n_isl,
          <int>             n_ind [, topology            = unconnected(),
                                     distribution_type   = point_to_point,
                                     migration_direction = destination
                                     ]
          )
But, adding that, one may redefine the default, so as to meet one's PyGMO evolutionary process preferences:
topo = topology.erdos_renyi( nodes = 100,
                             p     = 0.03
                             )  # Erdos-Renyi ( random ) topology
or
set a Clustered Barabási-Albert graph topology with ageing vertices:
topo = topology.clustered_ba( m0    =    3,
                              m     =    3,
                              p     =  0.5,
                              a     = 1000,
                              nodes =    0
                              )  # clustered Barabasi-Albert,
                                 # with ageing vertices topology
or:
topo = topology.watts_strogatz( nodes = 100,
                                p     = 0.1
                                )  # Watts-Strogatz ( circle + links ) topology
and finally, set it by assignment into the class-instance attribute:
archi = pg.archipelago( n        = 32,
                        algo     = a_cstrs_sa,
                        prob     = p_toy,
                        pop_size = 70
                        )      # constructs an archipelago
archi.topology = topo          # sets the topology to the above selected, pre-defined <topo>

LINQ, Python or SQL: need advice for TSS/WSS/BSS calculation

Hi, I am making statistical software in C++ with Qt.
I need to make many calculations over a table with the output of a multivariate cluster analysis:
Var1, Var2, Var3, ..., VarN, k2, k3, k4, ..., kn
where Var1 to VarN are the variables of study,
and k2 to kn are the cluster classifications.
Table Example:
Var1,Var2,Var3,Var4,k2,k3,k4,k5,k6
3464.57,2992.33,2688.33,504.79,2,3,2,3,2
2895.32,3365.35,2824.35,504.86,1,2,3,2,6
2249.32,3300.19,2382.19,504.92,2,1,4,3,4
3417.81,3311.04,2426.04,504.97,1,2,2,5,2
3329.66,3497.14,2467.14,505.03,2,2,1,4,2
3087.85,3653.53,2296.53,505.09,2,1,2,3,4
The C++ storage will be defined like:
struct Record
{
    QList<double> vars;
    QList<int> cluster;
};
QList<Record> table;
I need to calculate the total, the within-group, and the between-group sums of squares.
https://en.wikipedia.org/wiki/F-test
So, for example, to calculate the WSS for Var1 and k2 I need to (in pseudocode):
get the size of every group:
count(*) group by (k2),
calculate the mean of every group:
sum(Var1) group by (k2), then divide each sum by the corresponding count,
compute the difference:
pow((xgroup1 - xmeangroup1), 2),
and many other operations...
Which alternative would make the coding easiest and most powerful:
1) Create a MySQL table on the fly and use SQL operations.
2) Use LINQ, but I don't know if Qt has a QtLinq class.
3) Try to use Python equivalents of LINQ methods
(how is the interaction between Qt and Python? I see that QGIS has many plugins written in Python).
My app also needs to make many other calculations.
I hope this is clear.
Greetings
After some time I am answering my own question:
the solution was made in Python with pandas.
This link is very useful:
"Iterating through groups" in http://pandas.pydata.org/pandas-docs/stable/groupby.html
Also the book "Python for Data Analysis" by Wes McKinney, page 255.
This video shows how to make the calculation:
ANOVA 2: Calculating SSW and SSB (total sum of squares within and between) | Khan Academy
https://www.youtube.com/watch?v=j9ZPMlVHJVs
[code]
import numpy as np
import pandas as pd

def getDFrameFixed2D():
    y = np.array([3, 2, 1, 5, 3, 4, 5, 6, 7])
    k = np.array([1, 1, 1, 2, 2, 2, 3, 3, 3])
    clusters = pd.DataFrame([[a, b] for a, b in zip(y, k)], columns=['Var1', 'K2'])
    # print(clusters.head()); print("shape(0):", clusters.shape[0])
    return clusters

X2D = getDFrameFixed2D()
MainMean = X2D['Var1'].mean(0)
print("Main mean:", MainMean)
grouped = X2D['Var1'].groupby(X2D['K2'])
print("-----Iterating Over Groups-------------")
Wss = 0
Bss = 0
for name, group in grouped:
    print("Group key:", name)
    groupmean = group.mean(0)
    groupss = sum((group - groupmean) ** 2)
    print(" groupmean:", groupmean)
    print(" groupss:", groupss)
    Wss += groupss
    Bss += ((groupmean - MainMean) ** 2) * len(group)
print("----------------------------------")
print("Wss:", Wss)
print("Bss:", Bss)
print("T=B+W:", Bss + Wss)
# the original used an undefined X here; the total sum of squares is over Var1
Tss = np.sum((X2D['Var1'] - X2D['Var1'].mean(0)) ** 2)
print("Tss:", Tss)
print("----------------------------------")
[/code]
I am sure this could be done with aggregates (lambda functions) or apply,
but I can't figure out how
(if somebody knows, please post it here).
Greetings
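For future readers, here is one way the same WSS/BSS computation can be written with groupby plus apply/agg, using the same toy data as the snippet above (a sketch, not the only idiom):
[code]
import numpy as np
import pandas as pd

y = np.array([3, 2, 1, 5, 3, 4, 5, 6, 7])
k = np.array([1, 1, 1, 2, 2, 2, 3, 3, 3])
clusters = pd.DataFrame({'Var1': y, 'K2': k})

grand_mean = clusters['Var1'].mean()
grouped = clusters['Var1'].groupby(clusters['K2'])

# WSS: squared deviations from each group's own mean, summed over groups
Wss = grouped.apply(lambda g: ((g - g.mean()) ** 2).sum()).sum()

# BSS: group size times squared deviation of the group mean from the grand mean
stats = grouped.agg(['mean', 'size'])
Bss = (((stats['mean'] - grand_mean) ** 2) * stats['size']).sum()

Tss = ((clusters['Var1'] - grand_mean) ** 2).sum()
print(Wss, Bss, Tss)   # Tss == Wss + Bss
[/code]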

Elman Network in Pybrain

I'm trying to make an Elman network (a.k.a. Simple Recurrent Network) with PyBrain. I think the code should look something like this:
from pybrain.structure import RecurrentNetwork, LinearLayer, TanhLayer, FullConnection, IdentityConnection

n = RecurrentNetwork()
n.addInputModule(LinearLayer(5, name='in'))
n.addModule(TanhLayer(10, name='hidden'))
n.addModule(LinearLayer(10, name='context'))
n.addOutputModule(LinearLayer(5, name='out'))
n.addConnection(FullConnection(n['in'], n['hidden'], name='in_to_hidden'))
n.addConnection(FullConnection(n['hidden'], n['out'], name='hidden_to_out'))
n.addConnection(IdentityConnection(n['hidden'], n['context'], name='hidden_to_context'))
n.addConnection(IdentityConnection(n['context'], n['hidden'], name='context_to_hidden'))
My problem is that I don't know how to get the context nodes (at time t) to keep the values of the hidden nodes from the last iteration (at time t-1) in order to feed them to the hidden nodes in this iteration (at time t), and how to fix the weights in hidden_to_context to be 1. As it is right now, I get an error saying there is a "loop" in the net (and indeed there is one). Any help would be much appreciated. Thank you very much.
Cheers,
Bruno
I would look at this section:
http://pybrain.org/docs/tutorial/netmodcon.html#using-recurrent-networks
In particular,
The RecurrentNetwork class has one additional method, .addRecurrentConnection(), which looks back in time one timestep.
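Translated to the network in the question, a sketch might look like the following. Note this is not a literal Elman context layer with frozen identity weights: the hidden-to-hidden recurrent connection is PyBrain's closest idiom, carrying the hidden activations from time t-1 back into the hidden layer at time t (with trainable rather than fixed weights).
from pybrain.structure import RecurrentNetwork, LinearLayer, TanhLayer, FullConnection

n = RecurrentNetwork()
n.addInputModule(LinearLayer(5, name='in'))
n.addModule(TanhLayer(10, name='hidden'))
n.addOutputModule(LinearLayer(5, name='out'))
n.addConnection(FullConnection(n['in'], n['hidden'], name='in_to_hidden'))
n.addConnection(FullConnection(n['hidden'], n['out'], name='hidden_to_out'))
# the recurrent connection looks back one timestep, replacing the explicit context layer
n.addRecurrentConnection(FullConnection(n['hidden'], n['hidden'], name='hidden_to_hidden'))
n.sortModules()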

Python function not recursing properly (adding nodes to graph)

I'm having a rare honest-to-goodness computer science problem (as opposed to the usual how-do-I-make-this-language-I-don't-write-often-enough-do-what-I-want problem), and really feeling my lack of a CS degree for a change.
This is a bit messy, because I'm using several dicts of lists, but the basic concept is this: a Twitter-scraping function that adds retweets of a given tweet to a graph, node-by-node, building outwards from the original author (with follower relationships as edges).
for t in RTs_list:
    g = nx.DiGraph()
    followers_list = collections.defaultdict(list)
    level = collections.defaultdict(list)
    hoppers = collections.defaultdict(list)
    retweets = []
    retweeters = []
    try:
        u = api.get_status(t)
        original_tweet = u.retweeted_status.id_str
        print original_tweet
        ot = api.get_status(original_tweet)
        node_adder(ot.user.id, 1)
        # Can't paginate -- can only get about ~20 RTs max. Need to work on small data here.
        retweets = api.retweets(original_tweet)
        for r in retweets:
            retweeters.append(r.user.id)
        followers_list["0"] = api.followers_ids(ot.user.id)[0]
        print len(retweets), "total retweets"
        level["1"] = ot.user.id
        g.node[ot.user.id]['crossover'] = 1
        if g.node[ot.user.id]["followers_count"] < 4000:
            bum_node_adder(followers_list["0"], level["1"], 2)
        for r in retweets:
            rt_iterator(r, retweets, 0, followers_list, hoppers, level)
    except:
        print ""
def rt_iterator(r, retweets, q, followers_list, hoppers, level):
    q = q + 1
    if r.user.id in followers_list[str(q - 1)]:
        hoppers[str(q)].append(r.user.id)
        node_adder(r.user.id, q + 1)
        g.add_edge(level[str(q)], r.user.id)
        try:
            followers_list[str(q)] = api.followers_ids(r.user.id)[0]
            level[str(q + 1)] = r.user.id
            if g.node[r.user.id]["followers_count"] < 4000:
                bum_node_adder(followers_list[str(q)], level[str(q + 1)], q + 2)
            crossover = pull_crossover(followers_list[str(q)], followers_list[str(q - 1)])
            if q < 10:
                for r in retweets:
                    rt_iterator(r, retweets, q, followers_list, hoppers, level)
        except:
            print ""
There are some other function calls in there, but they're not related to the problem. The main issue is how Q counts when going from (e.g.) a 2-hop node to a 3-hop node. I need it to build out to the maximum depth (10) for every branch from the center, whereas right now I believe it's just building out to the maximum depth for the first branch it tries. Hope that makes sense. If not, typing it up here has helped me; I think I'm just missing a loop in there somewhere, but it's tough for me to see.
Also, ignore that various dicts refer to Q+1 or Q-1; that's an artifact of how I implemented this before I refactored to make it recurse.
Thanks!
I'm not totally sure what you mean by "the center" but I think you want something like this:
def rt_iterator(depth, other_args):
    # store whatever info you need from this point in the tree
    if depth >= MAX_DEPTH:
        return
    # look at the nodes you want to expand from here
    for each node, in the order you want them expanded:
        rt_iterator(depth + 1, other_args)
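As a self-contained illustration of that pattern (with a hypothetical followers_of adjacency dict standing in for the Twitter API calls): each recursive call carries its own depth, so every branch is explored to MAX_DEPTH rather than only the first one.
MAX_DEPTH = 10

def expand(node, depth, followers_of, graph):
    # record this node's children, then recurse one level deeper per branch
    if depth >= MAX_DEPTH:
        return
    for child in followers_of.get(node, []):
        graph.setdefault(node, []).append(child)
        expand(child, depth + 1, followers_of, graph)

followers_of = {'a': ['b', 'c'], 'b': ['d'], 'c': []}   # toy adjacency
graph = {}
expand('a', 0, followers_of, graph)
print(graph)   # {'a': ['b', 'c'], 'b': ['d']}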
I think I've fixed it... this way Q isn't incremented when it shouldn't be.
def rt_iterator(r, retweets, q, depth, followers_list, hoppers, level):
    def node_iterator(r, retweets, q, depth, followers_list, hoppers, level):
        for r in retweets:
            if r.user.id in followers_list[str(q - 1)]:
                hoppers[str(q)].append(r.user.id)
                node_adder(r.user.id, q + 1)
                g.add_edge(level[str(q)], r.user.id)
                try:
                    level[str(q + 1)] = r.user.id
                    if g.node[r.user.id]["followers_count"] < 4000:
                        followers_list[str(q)] = api.followers_ids(r.user.id)[0]
                        bum_node_adder(followers_list[str(q)], level[str(q + 1)], q + 2)
                    crossover = pull_crossover(followers_list[str(q)], followers_list[str(q - 1)])
                    if q < 10:
                        node_iterator(r, retweets, q + 1, depth, followers_list, hoppers, level)
                except:
                    print ""
    depth = depth + 1
    q = depth
    if q < 10:
        rt_iterator(r, retweets, q, depth, followers_list, hoppers, level)
