Good evening, StackOverflow.
Lately, I've been wrestling with a Python program which I'll try to outline as briefly as possible.
In essence, my program plots (and then fits a function to) graphs. Consider this graph.
The graph plots just fine, but I'd like it to do a little more than that: since the data is periodic over an interval OrbitalPeriod (1.76358757), I'd like it to start at our first x value, plot all of the points that fall within one OrbitalPeriod of it, and then do exactly the same thing over the next region of length OrbitalPeriod.
I know that there is a way to slice lists in Python of the form
croppedList = List[a:b]
where a is the index of the first element you'd like to include and b is the index just past the last. However, I have no idea what the indices are going to be for each of the values, or how many values fall within each OrbitalPeriod-sized interval.
What I want to do in pseudo-code looks something like this.
croppedList = fullList on the domain [a + (N * OrbitalPeriod), a + ((N + 1) * OrbitalPeriod)]
where a is the x-value of the first meaningful data point.
If you have a workaround for this or a cropping method that would accept values instead of indices as arguments, please let me know. Thanks!
If you are working with NumPy, you can use a boolean mask inside the brackets (this assumes List is a NumPy array):
m = x
M = x + OrbitalPeriod
croppedList = List[m <= List]                # keep the values >= m
croppedList = croppedList[croppedList < M]   # then keep the values < M
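If you need every OrbitalPeriod-sized window rather than just the first one, a minimal sketch of the same idea (assuming xdata is a NumPy array of the x values and a is the x value of the first meaningful data point; plot_and_fit is a hypothetical stand-in for your plotting/fitting routine):

import numpy as np

def split_into_periods(xdata, a, period):
    # Yield one boolean-masked slice of xdata per period-sized window.
    n_windows = int(np.ceil((xdata.max() - a) / period))
    for N in range(n_windows):
        lo = a + N * period
        hi = a + (N + 1) * period
        yield xdata[(lo <= xdata) & (xdata < hi)]

# for window in split_into_periods(xdata, xdata[0], 1.76358757):
#     plot_and_fit(window)   # hypothetical plotting/fitting routine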
I have two lists of coordinates which should have some overlap (within a certain range) and I'm trying to output a new list which contains all of the coords which are contained in one list but not the other. See first image below for plot of these lists.
Points in each list might be slightly different so I'm allowing for a small amount around each point.
So far I have something like what's shown below, which outputs the opposite of what I want: all of the points which are in common between the two lists.
tol = 0.33  # renamed from `range` so the built-in range() still works
different_points = [[], []]
for i in range(len(All_points[0])):
    for j in range(len(Initial_points[0])):
        if (Initial_points[0][j] - tol <= All_points[0][i] <= Initial_points[0][j] + tol and
                Initial_points[1][j] - tol <= All_points[1][i] <= Initial_points[1][j] + tol):
            different_points[0].append(All_points[0][i])
            different_points[1].append(All_points[1][i])
I'm struggling to see how to find the opposite list, or whether there's a much simpler way of doing this as a whole which I'm missing.
Thanks in advance for the help.
Use sets. In particular, intersection and difference.
Either it will help you, or I misunderstood your question completely.
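A minimal sketch of that idea, assuming each list has the [xs, ys] structure from the question and snapping every point to a grid of size tol so that nearly-equal points compare equal (the snapping step is my own assumption, not part of the original answer):

tol = 0.33

def snap(points, tol):
    # Round each (x, y) pair to the nearest multiple of tol so that close points collide.
    xs, ys = points
    return {(round(x / tol), round(y / tol)) for x, y in zip(xs, ys)}

all_set = snap(All_points, tol)
initial_set = snap(Initial_points, tol)

common = all_set & initial_set        # intersection: points present in both lists
only_in_all = all_set - initial_set   # difference: points in All_points but not in Initial_points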
I feel this should be simple but I'm stuck on finding a neat solution. The code I have provided works, and gives the output I expect, but I don't feel it is Pythonic and it's getting on my nerves.
I have produced three sets of coordinates, X, Y & Z, using 'griddata' from a base data set. The coordinates are evenly spaced over an unknown total area / shape (not necessarily square / rectangular), which produces NaN results at the boundaries of each list that I want to ignore. The list should be traversed from the 'bottom left' (in a coordinate system), across the x axis, up one space in the y direction, then right to left, before continuing. There could be an odd or even number of rows.
The operation to be performed on each point is the same no matter the direction, and it is guaranteed that for every point which exists in X, a point exists in Y and Z, as can be seen in the code below.
Arrays (lists?) are of the format DataPoint[rows][columns].
k = 0
for i in range(len(x)):
    if k % 2 == 0:  # cut left to right, then right to left
        for j in range(len(x[i])):
            if not numpy.isnan(x[i][j]):
                file.write(f'X{x[i][j]} Y{y[i][j]} Z{z[i][j]}')
    else:
        for j in reversed(range(len(x[i]))):
            if not numpy.isnan(x[i][j]):
                file.write(f'X{x[i][j]} Y{y[i][j]} Z{z[i][j]}')
    k += 1
One solution I could think of would be to reverse every other row in each of the lists before running the loop. It would save me a few lines, but probably wouldn't make sense from a performance standpoint - anyone have any better suggestions?
Expected route through list:
End════<══════╗
╔══════>══════╝
╚══════<══════╗
Start══>══════╝
Here's a variant:
for i, (x_row, y_row, z_row) in enumerate(zip(x, y, z)):
    if i % 2:
        x_row = reversed(x_row)
        y_row = reversed(y_row)
        z_row = reversed(z_row)
    row_strs = list()
    for x_elem, y_elem, z_elem in zip(x_row, y_row, z_row):
        if not numpy.isnan(x_elem):
            row_strs.append(f"X{x_elem} Y{y_elem} Z{z_elem}")
    file.write("".join(row_strs))
Considerations:
There is no recipe for an optimization that will always perform better than every other; it also depends on the data that the code handles. Here's a list of things that I could think of, without knowing what the data looks like:
- for index in range(len(sequence)): is not a Pythonic way of iterating. Here, the foreach idiom is used. If the index is required, [Python 3.Docs]: Built-in Functions - enumerate(iterable, start=0) can be used
- This no longer applies because of the previous bullet, but reversed(range(n)) is the same as range(n - 1, -1, -1). I don't know whether the latter is faster, but it looks like it would be
- Iterate over multiple iterables at once, using [Python 3.Docs]: Built-in Functions - zip(*iterables)
- There's no need for k; you already have i
- In general, when working with files, it's better to read / write bigger chunks of data fewer times than smaller chunks of data many times (files generally reside on disk, and disk operations are slow). However, buffering occurs by default (at the Python and OS levels), so this is less of an issue than it sounds, but as always it's a trade-off between resources (time, memory, ...). I chose to write to the file once per line (rather than once per element, as it was originally). Of course, there's a third possibility of writing everything at once, but I imagined that for larger data sets it wouldn't be the best solution
- Probably, some optimizations could also happen at the NumPy level (as it would handle bulk data much faster than Python-level iteration does), but I'm not an expert in that area, nor do I know what the data looks like; a rough sketch of that idea follows below
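To illustrate that last bullet, here is one possible NumPy-level sketch (my own assumption: x, y and z are 2-D NumPy arrays of equal shape); it flips every other row in bulk before formatting, which is roughly what the question itself suggested:

import numpy as np

def serpentine_lines(x, y, z):
    # Yield one formatted line per row, flipping odd rows so the path snakes.
    for i, (xr, yr, zr) in enumerate(zip(x, y, z)):
        if i % 2:                      # flip every other row in one slicing step
            xr, yr, zr = xr[::-1], yr[::-1], zr[::-1]
        keep = ~np.isnan(xr)           # drop the NaN boundary points in bulk
        yield "".join(f"X{a} Y{b} Z{c}" for a, b, c in zip(xr[keep], yr[keep], zr[keep]))

# with open("out.txt", "w") as file:
#     file.writelines(serpentine_lines(x, y, z))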
I agree with @Prune, your code looks readable and does what it should do. You could compress it a bit by precomputing the indices, like so (note that this starts from the top left):
import numpy as np

# generate some sample data
x = np.arange(100).reshape(10, 10)

# precompute both directions (materialize the reversed range as a list,
# otherwise the iterator would be exhausted after the first odd row)
fancyranges = (
    list(range(len(x[0, :]))),
    list(range(len(x[0, :])))[::-1],
)

for a in range(x.shape[0]):
    # call the appropriate direction
    for b in fancyranges[a % 2]:
        # do things
        print(x[a, b])
You can move the repeated code into a sub_func so that further changes happen in one place:
def func():
    def sub_func():
        # repeatable code
        if not numpy.isnan(x[i][j]):
            print(f'X{x[i][j]}...')

    k = 0
    for i in range(len(x)):
        if k % 2 == 0:  # cut left to right, then right to left
            for j in range(len(x[i])):
                sub_func()
        else:
            for j in reversed(range(len(x[i]))):
                sub_func()
        k += 1

func()
I have a list of t values. My code for finding the minima values is as follows;
for i in np.arange(0, 499, 1):
    if t[i] < t[i-1] and t[i] < t[i+1]:
        t_min.append(t[i])
My t values change every time, so it may happen that one of the minima occurs at the beginning or the end; in that case this code would not work. So I need general code which will work for any range of t values.
You can loop around the end using the % operator and adding one to the length of the iterator. This treats your array 'as a circle', which is what you really want.
t_min = []
for i in range(len(t)):
    if t[i] < min(t[i - 1], t[(i + 1) % len(t)]):
        t_min.append(t[i])
Edit: fixed the range of values i takes so that the first element isn't checked twice; thanks to @Jasper for pointing this out.
Instead of looping over the array, I suggest using scipy.signal.argrelmin which finds all local minima. You can pick two you like most from those.
from scipy.signal import argrelmin
import numpy as np
t = np.sin(np.linspace(0, 4*np.pi, 500))
relmin = argrelmin(t)[0]
print(relmin)
This outputs [187 437].
To treat the array as wrapping around, use argrelmin(t, mode='wrap').
Without wrap-around, argrelmin does not recognize the beginning and end of an array as candidates for local minimum. (There are different interpretations of "local minimum": one allows the endpoints, the other does not.) If you want the endpoints to be included when the function achieves minimum there, do it like this:
if t[0] < t[1]:
    relmin = np.append(relmin, 0)
if t[-1] < t[-2]:
    relmin = np.append(relmin, len(t) - 1)
Now the output is [187 437 0].
I want to build an algorithm in Python to flip linestrings (arrays of coordinates) in a linestring collection which represents segments along a road, so that I can merge all coordinates into a single array in which the coordinates are ordered monotonically along the road.
So my segmentCollection looks something like this:
segmentCollection = [['1,1', '1,3', '2,3'],
['4,3', '2,3'],
['4,3', '7,10', '5,5']]
EDIT: So the structure is a list of lists of 2D cartesian coordinate tuples ('1,1' for example is a point at x=1 and y=1, '7,10' is a point at x=7 and y=10, and so on). The whole problem is to merge all these lists into one list of coordinate tuples which are ordered in the sense of following a road in one direction. In fact these are segments which I get from a road network routing service, but I only get segments, where each segment is directed the way it was digitized in the database, not in the direction you have to drive. I would like to get a single polyline for the navigation route out of it.
So:
- I can assume, that all segments are in the right order
- I cannot assume that the Coordinates of each segment are in the right order
- Therefore I also cannot assume that the first coordinate of the first segment is the beginning
- And I also cannot assume that the last coordinate of the last segment is the end
- (EDIT) Even though I know where the start and end points of my navigation request are located, these do not have to be identical to one of the coordinate tuples in these lists, because they only have to be somewhere near a routing graph element.
The algorithm should iterate through every segment, flip it if necessary, and then append it to the resulting array. For the first segment, the challenge is to find the starting point (the point which is NOT connected to the next segment). All other segments are then connected by one point to the last segment in the order (a directed graph).
I wonder if there isn't some kind of sorting data structure (a sorting tree or anything) which does exactly that. Could you please give me some ideas? After messing around for a while with loops and array comparisons, my brain is knocked out, and I just need a kick in the right direction, in the true sense of the word.
If I understand correctly, you don't even need to sort things. I just translated your English text into Python:
def joinSegments(s):
    if s[0][0] == s[1][0] or s[0][0] == s[1][-1]:
        s[0].reverse()
    c = s[0][:]
    for x in s[1:]:
        if x[-1] == c[-1]:
            x.reverse()
        c += x
    return c
It still contains duplicate points, but removing those should be straightforward.
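For completeness, one way to drop those duplicates (a small sketch of my own, assuming the duplicated points only ever appear consecutively, at the joins):

def drop_consecutive_duplicates(coords):
    # Keep each point only if it differs from the point right before it.
    cleaned = coords[:1]
    for p in coords[1:]:
        if p != cleaned[-1]:
            cleaned.append(p)
    return cleaned

# merged = drop_consecutive_duplicates(joinSegments(segmentCollection))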
def merge_seg(s):
    index_i = 0
    while index_i + 1 < len(s):
        index_j = index_i + 1
        while index_j < len(s):
            if s[index_i][-1] == s[index_j][0]:
                s[index_i].extend(s[index_j][1:])
                del s[index_j]
            elif s[index_i][-1] == s[index_j][-1]:
                # list.reverse() returns None, so extend with a reversed copy instead
                s[index_i].extend(s[index_j][::-1][1:])
                del s[index_j]
            else:
                index_j += 1
        index_i += 1
    result = []
    s.reverse()
    for seg_index in range(len(s) - 1):
        result += s[seg_index][:-1]  # use [:-1] to drop the duplicated join points
    result += s[-1]
    return result
In the inner while loop, every successive segment of s[index_i] is appended to s[index_i]; then index_i is incremented until every segment has been processed.
Therefore it is easy to prove that after these while loops s[0][0] == s[1][-1], s[1][0] == s[2][-1], etc., so just reverse the list and put the pieces together, and finally you will get your result.
Note: it is the simplest and most straightforward way, but not the most time-efficient.
For more algorithms see: http://en.wikipedia.org/wiki/Sorting_algorithm
You say that you can assume that all segments are in the right order, which means that independently of the coordinates order, your problem is basically to merge sorted arrays.
You would have to flip a segment if it's not defined in the right order, but this doesn't have any impact on the main algorithm.
Simply define this reordering function:
def reorder(seg):
    s1 = min(seg)
    e1 = max(seg)
    return (s1, e1)
and this comparison function:
def compare_segments(seg1, seg2):
    # renamed so it no longer shadows (and infinitely recurses into) Python 2's built-in cmp
    return cmp(reorder(seg1), reorder(seg2))
and you are all set, just run a typical merge algorithm:
http://en.wikipedia.org/wiki/Merge_algorithm
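As a side note, Python 3 dropped the built-in cmp, so there a key-based sort is the more natural spelling of the same idea; a small sketch, assuming segments is the segment list from the question:

# Sort (or merge) the segments by their order-independent endpoints.
# (functools.cmp_to_key can adapt a cmp-style comparison if you prefer that form.)
segments_sorted = sorted(segments, key=reorder)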
And in case I didn't really understand your problem statement, here's another idea:
Use a segment tree, which is a structure made exactly to store segments :)
Having not worked with cartesian graphs since high school, I have actually found a need for them in real life. It may be a strange need, but I have to allocate data to points on a cartesian graph, which will be accessible by their cartesian coordinates. There need to be infinite points on the graph. For example:
^
[-2-2,a ][ -1-2,f ][0-2,k ][1-2,p][2-2,u]
[-2-1,b ][ -1-1,g ][0-1,l ][1-1,q][2-1,v]
<[-2-0,c ][ -1-0,h ][0-0,m ][1-0,r][2-0,w]>
[-2--1,d][-1--1,i ][0--1,n][1--1,s][2--1,x]
[-2--2,e][-1--2,j ][0--2,o][1--2,t][2--2,y]
v
The actual values aren't important. But say I am on variable m; this would be 0-0 on the cartesian graph. I need to calculate the cartesian coordinates for moving up one space, which would leave me on l.
Theoretically, say I have a Python variable which equals "0-1": I believe I need to split it at the '-', which would leave x=0, y=1. Then I would need to perform int(y) + 1, and then re-attach x to y with a '-' in between.
What I want to be able to do is call a function with the argument (x+1,y+0), and for the program to perform the above, and then return the cartesian coordinate it has calculated.
I don't actually need to retrieve the value of the space, just the cartesian coordinate. I imagine I could utilise re.sub(), however I am not sure how to format this function correctly to split around the '-', and I'm also not sure how to perform the calculation correctly.
How would I do this?
To represent an infinite lattice, use a dictionary which maps tuples (x,y) to values.
grid = {}            # start with an empty dict and add entries as needed
grid[(0, 0)] = m     # m and l here are whatever data you store at those points
grid[(0, 1)] = l
print(grid[(0, 0)])
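Building on that, a small sketch (the names are mine) of the 'move one space' operation the question asks for, done with tuple arithmetic instead of string surgery:

def move(coord, dx, dy):
    # Return the coordinate dx steps right and dy steps up from coord.
    x, y = coord
    return (x + dx, y + dy)

print(move((0, 0), 0, 1))   # (0, 1) -- the cell holding l in the example grid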
I'm not sure I fully understand the problem but I would suggest using a list of lists to get the 2D structure.
Then to look up a particular value you could do coords[x-minX][y-minY] where x,y are the integer indices you want, and minX and minY are the minimum values (-2 in your example).
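A tiny sketch of that offset-indexing idea (the bounds and sample values are placeholders of mine):

minX, minY = -2, -2
# coords[x - minX][y - minY] holds the value for the point (x, y)
coords = [[None] * 5 for _ in range(5)]      # a 5x5 block covering x, y in [-2, 2]
coords[0 - minX][0 - minY] = 'm'             # store something at (0, 0)
print(coords[0 - minX][1 - minY] is None)    # (0, 1) hasn't been filled yet -> True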
You might also want to look at NumPy which provides an n-dim object array type that is much more flexible, allowing you to 'slice' each axis or get subranges. The NumPy documentation might be helpful if you are new to working with arrays like this.
EDIT:
To split a string like 0-1 into the constituent integers you can use:
s = '0-1'
[int(x) for x in s.split('-')]
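One caveat (my own addition, not part of the answer above): a plain split('-') breaks for negative coordinates such as '-2--1'; a small regular expression handles both cases:

import re

def parse_coord(s):
    # Parse strings like '0-1' or '-2--1' into an (x, y) tuple of ints.
    x, y = re.match(r'^(-?\d+)-(-?\d+)$', s).groups()
    return int(x), int(y)

print(parse_coord('0-1'))    # (0, 1)
print(parse_coord('-2--1'))  # (-2, -1)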
You want to create a bidirectional mapping between the variable names and the coordinates, then you can look up coordinates by variable name, apply your function to it, then find the next variable using the new set of coordinates produced by your function.
Mapping between the numeric tuples you can apply your function to and strings usable as keys in a dict, and back, is easy.
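A minimal sketch of that bidirectional mapping (the concrete names and sample data are mine):

# Two dicts kept in sync: one from name to coordinate, one from coordinate to name.
name_to_coord = {'m': (0, 0), 'l': (0, 1)}
coord_to_name = {coord: name for name, coord in name_to_coord.items()}

def neighbour(name, dx, dy):
    # Look up a variable's coordinate, shift it, and return the name found there (if any).
    x, y = name_to_coord[name]
    return coord_to_name.get((x + dx, y + dy))

print(neighbour('m', 0, 1))  # 'l' -- one space up from m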