Arrange line segments consecutively to make a polygon - python

I'm trying to arrange line segments to create a closed polygon with Python. At the moment I've managed to solve it, but it gets really slow when the number of segments increases (it works like a bubble sort, but on the end points of the segments). I'm attaching a sample file of coordinates (the real ones are much more complex, but this one is useful for testing purposes). The file contains the coordinates for the segments of two separate closed polygons. The image below is the result of the coordinates I've attached.
This is my code for joining the segments. The file 'Curve' is in the dropbox link above:
from ast import literal_eval as make_tuple
from random import shuffle
from Curve import Point, Curve, Segment


def loadFile():
    # Each line of the file is a tuple of two (x, y) coordinate pairs.
    print 'Loading File'
    file = open('myFiles/coordinates.txt', 'r')
    for line in file:
        pairs.append(make_tuple(line))
    file.close()


def sortSegment(segPairs):
    # Brute force: repeatedly scan the remaining segments for one that can be
    # appended to the open curve, until the curve closes.
    polygons = []
    segments = segPairs
    while len(segments) > 0:
        counter = 0
        closedCurve = Curve(Point(segments[0][0][0], segments[0][0][1]), Point(segments[0][1][0], segments[0][1][1]))
        segments.remove(segments[0])
        still = True
        while still:
            startpnt = Point(segments[counter][0][0], segments[counter][0][1])
            endpnt = Point(segments[counter][1][0], segments[counter][1][1])
            seg = Segment(startpnt, endpnt)
            if closedCurve.isAppendable(seg):
                if closedCurve.isClosed(seg):
                    still = False
                    polygons.append(closedCurve.vertex)
                    segments.remove(segments[counter])
                else:
                    closedCurve.appendSegment(seg)
                    segments.remove(segments[counter])
                    counter = 0
            else:
                counter += 1
                if len(segments) <= counter:
                    counter = 0
    return polygons


def toTupleList(list):
    curveList = []
    for curve in list:
        pointList = []
        for point in curve:
            pointList.append((point.x, point.y))
        curveList.append(pointList)
    return curveList


def convertPolyToPath(polyList):
    # Flatten each vertex list into path commands: ['M', p0, 'L', p1, ..., 'z']
    path = []
    for curves in polyList:
        curves.insert(1, 'L')
        curves.insert(0, 'M')
        curves.append('z')
        path = path + curves
    return path


if __name__ == '__main__':
    pairs = []
    loadFile()
    polygons = sortSegment(pairs)
    polygons = toTupleList(polygons)
    polygons = convertPolyToPath(polygons)

Assuming that you are only looking for the approach and not the code, here is how I would attempt it.
While you read the segment coordinates from the file, keep adding them to a dictionary, with one coordinate of each segment (in string form) as the key and the other coordinate as the value. At the end, it should look like this:
{
'5,-1': '5,-2',
'4,-2': '4,-3',
'5,-2': '4,-2',
...
}
Now pick any key-value pair from this dictionary. Next, pick the key-value pair whose key is the same as the value of the previous pair. So if the first key-value pair is '5,-1': '5,-2', next look for the key '5,-2' and you will get '5,-2': '4,-2'. Then look for the key '4,-2', and so on.
Keep removing the key-value pairs from the dictionary, so that once one polygon is complete, you can check whether any elements are left, which would mean there are more polygons.
Let me know if you need the code as well.
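For reference, here is a minimal sketch of the dictionary-chaining idea described above. It assumes the segments are consistently oriented, i.e. each segment's end point is the start point of exactly one other segment, and it uses the coordinate tuples directly as keys (they are hashable, so the string conversion mentioned above is optional). The function name is just illustrative.
def chain_segments(segments):
    # segments: list of ((x1, y1), (x2, y2)) pairs describing closed, oriented rings.
    nxt = {start: end for start, end in segments}   # start vertex -> end vertex
    polygons = []
    while nxt:
        start, cur = nxt.popitem()                  # pick any remaining segment
        ring = [start]
        while cur != start:                         # follow the chain until it closes
            ring.append(cur)
            cur = nxt.pop(cur)
        polygons.append(ring)
    return polygons

# Example (a triangle); the starting vertex of each ring depends on dict order:
# chain_segments([((0, 0), (1, 0)), ((1, 0), (0, 1)), ((0, 1), (0, 0))])
# -> [[(0, 1), (0, 0), (1, 0)]] (or a rotation of it)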

I had to do something similar. I needed to turn coastline segments (that were not ordered properly) into polygons. I used NetworkX to arrange the segments into connected components and order them using this function.
It turns out that my code will work for this example as well. I use geopandas to display the results, but that dependency is optional for the original question here. I also use shapely to turn the lists of segments into polygons, but you could just use CoastLine.rings to get the lists of segments.
I plan to include this code in the next version of PyRiv.
from shapely.geometry import Polygon
import geopandas as gpd
import networkx as nx


class CoastLine(nx.Graph):
    def __init__(self, *args, **kwargs):
        """
        Build a CoastLine object.

        Parameters
        ----------

        Returns
        -------
        A CoastLine object
        """
        super(CoastLine, self).__init__(*args, **kwargs)

    @classmethod
    def read_shp(cls, shp_fn):
        """
        Construct a CoastLine object from a shapefile.
        """
        dig = nx.read_shp(shp_fn, simplify=False)
        return cls(dig)

    def connected_subgraphs(self):
        """
        Get the connected component subgraphs. See the NetworkX
        documentation for `connected_component_subgraphs` for more
        information.
        """
        return nx.connected_component_subgraphs(self)

    def rings(self):
        """
        Return a list of rings. Each ring is a list of nodes. Each
        node is a coordinate pair.
        """
        rings = [list(nx.dfs_preorder_nodes(sg)) for sg in self.connected_subgraphs()]
        return rings

    def polygons(self):
        """
        Return a list of `shapely.Polygon`s representing each ring.
        """
        return [Polygon(r) for r in self.rings()]

    def poly_geodataframe(self):
        """
        Return a `geopandas.GeoDataFrame` of polygons.
        """
        return gpd.GeoDataFrame({'geometry': self.polygons()})
With this class, the original question can be solved:
edge_list = [
((5, -1), (5, -2)),
((6, -1), (5, -1)),
((1, 0), (1, 1)),
((4, -3), (2, -3)),
((2, -2), (1, -2)),
((9, 0), (9, 1)),
((2, 1), (2, 2)),
((0, -1), (0, 0)),
((5, 0), (6, 0)),
((2, -3), (2, -2)),
((6, 0), (6, -1)),
((4, 1), (5, 1)),
((10, -1), (8, -1)),
((10, 1), (10, -1)),
((2, 2), (4, 2)),
((5, 1), (5, 0)),
((8, -1), (8, 0)),
((9, 1), (10, 1)),
((8, 0), (9, 0)),
((1, -2), (1, -1)),
((1, 1), (2, 1)),
((5, -2), (4, -2)),
((4, 2), (4, 1)),
((4, -2), (4, -3)),
((1, -1), (0, -1)),
((0, 0), (1, 0)) ]
eG = CoastLine()
for e in edge_list:
    eG.add_edge(*e)
eG.poly_geodataframe().plot()
This will be the result:
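If you only want the ordered vertex lists and not the geopandas/shapely objects (as mentioned above), the rings() method can be used directly; for example:
for ring in eG.rings():
    print(ring)   # one list of (x, y) vertices per polygon; the order comes from the DFS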

Related

Find common union groups among tuples in a set

I need help to write a function that:
takes as input a set of tuples
returns the number of groups of tuples that are linked by common numbers
Example 1:
# input:
{(0, 1), (3, 4), (0, 0), (1, 1), (3, 3), (2, 2), (1, 0)}
# expected output: 3
The expected output is 3, because:
(3,4) and (3,3) contain common numbers, so this counts as 1
(0, 1), (0, 0), (1, 1), and (1, 0) all count as 1
(2, 2) counts as 1
So, 1+1+1 = 3
Example 2:
# input:
{(0, 1), (2, 1), (0, 0), (1, 1), (0, 3), (2, 0), (0, 2), (1, 0), (1, 3)}
# expected output: 1
The expected output is 1, because all tuples are related to other tuples by containing numbers in common.
This may not be the most efficient algorithm for it, but it is simple and looks nice.
from functools import reduce

def unisets(iterables):
    def merge(fsets, fs):
        if not fs:
            return fsets
        unis = set(filter(fs.intersection, fsets))
        return {reduce(type(fs).union, unis, fs), *fsets - unis}
    return reduce(merge, map(frozenset, iterables), set())

us = unisets({(0, 1), (3, 4), (0, 0), (1, 1), (3, 3), (2, 2), (1, 0)})
print(us)       # {frozenset({3, 4}), frozenset({0, 1}), frozenset({2})}
print(len(us))  # 3
Features:
Input can be any kind of iterable, whose elements are iterables (any length, mixed types...)
Output is always a well-behaved set of frozensets.
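As a quick, made-up illustration of the mixed-input point: tuples, lists and strings can be combined, since everything is converted to frozensets first.
mixed = [(0, 1), [1, 2.5], 'ab', ('b', 'c'), (7,)]
groups = unisets(mixed)
print(len(groups))  # 3 -> {0, 1, 2.5}, {'a', 'b', 'c'} and {7} (set order may vary)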
This code works for me, but please check it, there may be edge cases. What do you think of this solution?
def count_groups(marked):
    temp = set(marked)
    save = set()
    for pair in temp:
        if pair[1] in save or pair[0] in save:
            marked.remove(pair)
        else:
            save.add(pair[1])
            save.add(pair[0])
    return len(marked)

Paths in Python/Sage

I've been working on this problem (https://imgur.com/a/nJEMfM9), which asks me to plot all lattice paths in an n x n grid, for the last week, and I have no idea how to proceed.
This is about as far as I've been able to get:
def NE_lattice_paths(x, y):
    Vn = vector([0, 1])
    Ve = vector([1, 0])
    plot(Vn) + plot(Ve, start=Vn)
I know I have to use vectors, and I have to use the "def" command to make a function, but how would I make a function that can plot every path and know to take a different one each time? What I wrote doesn't really make sense, but I could use some guidance on how to proceed. Thank you!
You can get all the paths with a nested for loop (or a list comprehension), so this will give all the paths:
def NE_lattice_paths(x, y):
    paths = []
    for i in range(x):
        path = []
        for j in range(y):
            path.append((i, j))
        paths.append(path)
    return paths
result = NE_lattice_paths(5,3)
print(result)
Result:
[[(0, 0), (0, 1), (0, 2)], [(1, 0), (1, 1), (1, 2)], [(2, 0), (2, 1), (2, 2)], [(3, 0), (3, 1), (3, 2)], [(4, 0), (4, 1), (4, 2)]]
I will leave it as an exercise for the OP to do the animation...
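If the goal is to enumerate every monotone path made of unit N and E steps from (0, 0) to (x, y), a plain recursive sketch could look like the following (function name and structure are just illustrative; plotting, e.g. with Sage's line(path), is left out):
def ne_paths(x, y, start=(0, 0)):
    # Yield every path from start to (x, y) as a list of lattice points,
    # moving only east (+1, 0) or north (0, +1).
    px, py = start
    if (px, py) == (x, y):
        yield [start]
        return
    if px < x:                      # take an east step
        for rest in ne_paths(x, y, (px + 1, py)):
            yield [start] + rest
    if py < y:                      # take a north step
        for rest in ne_paths(x, y, (px, py + 1)):
            yield [start] + rest

paths = list(ne_paths(2, 2))
print(len(paths))   # 6, i.e. binomial(4, 2)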

geopandas difference only if a column's value is greater

Initialize data:
import pandas as pd
import geopandas as gpd
from shapely.geometry import Polygon

geoms = gpd.GeoSeries([
    Polygon([(0, 0), (2, 0), (2, 2), (0, 2)]),
    Polygon([(1, 1), (3, 1), (3, 3), (1, 3)]),
    Polygon([(0, 0), (3, 0), (3, 3), (0, 3)]),
])
gdf = gpd.GeoDataFrame(geometry=geoms)
gdf["value"] = [3, 2, 1]
gdf.plot(cmap='tab10', alpha=0.5)
(figure: original)
Then I want to make holes into the polygons where values are greater than the current row.
gdf_list = []
for value in gdf["value"]:
    gdf_equal_value = gdf.loc[gdf["value"] == value, "geometry"]
    gdf_above_value = gdf.loc[gdf["value"] > value, "geometry"]
    gdf_list.append(
        (value, gdf_equal_value.difference(gdf_above_value.unary_union))
    )
import matplotlib.pyplot as plt

for value, geom in gdf_list:
    geom.plot()
    plt.xlim(0, 3)
    plt.ylim(0, 3)
    plt.title(value)
(figure: holes)
Since I have many more unique values in my actual dataset, is there a way to optimize this (e.g. not have to loop through each one)?
As I mentioned in my comment, I'm not 100% sure I understand what you want your final product to look like. Please consider editing your question to make that clearer.
In your original question, your final product was a list of (value, geodataframe) pairs, and the geodataframe contained the rows of the original gdf differenced with respect to a dissolved polygon of the gdf elements whose values were larger than the reference value.
Is that exactly what you want?
Here's a quick solution to get to something similar, but not exactly identical.
import numpy as np
import pandas as pd
import geopandas as gpd
from shapely.geometry import Polygon
import matplotlib.pyplot as plt

geoms = gpd.GeoSeries([
    Polygon([(0, 0), (2, 0), (2, 2), (0, 2)]),
    Polygon([(1, 1), (3, 1), (3, 3), (1, 3)]),
    Polygon([(0, 0), (3, 0), (3, 3), (0, 3)]),
])
gdf = gpd.GeoDataFrame(geometry=geoms)
gdf["value"] = [3, 2, 1]

gdf_list = []
for value in gdf["value"].unique():
    gdf['classif'] = np.select(
        condlist=[(gdf['value'] == value), (gdf['value'] > value)],
        choicelist=['Equal', 'Larger'],
        default=np.nan)
    gdf_diss = gdf.dissolve(by='classif', dropna=True).reset_index()
    if gdf_diss['classif'].isin(['Equal', 'Larger']).sum() == 2:
        gdf_list.append(
            (value, gdf_diss.iloc[0]['geometry'].difference(gdf_diss.iloc[1]['geometry']))
        )
In this case, gdf_list contains (value, Polygon) pairs. The Polygons are the result of the difference between two other polygons:
A) the dissolved polygon of all the rows whose value in the value column is equal to the reference value, and
B) the dissolved polygon of all the rows whose value in the value column is larger than the reference value.
Note that the result isn't a GeoDataFrame of the differences - for each value, it's a single Polygon.
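To look at those pairs, the plotting loop from the question can be adapted; a sketch (gpd and plt are already imported above, and the bare shapely geometry is wrapped in a GeoSeries so it can be plotted):
for value, geom in gdf_list:
    gpd.GeoSeries([geom]).plot()
    plt.xlim(0, 3)
    plt.ylim(0, 3)
    plt.title(value)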
While this might not be exactly what you were looking for, I hope the tricks I used (dissolving instead of subsetting) might help what you're trying to do.

Getting the correct max value from a list of tuples

My list of tuples look like this:
[(0, 0), (3, 0), (3, 3), (0, 3), (0, 0), (0, 6), (3, 6), (3, 9), (0, 9), (0, 6), (6, 0), (9, 0), (9, 3), (6, 3), (6, 0), (0, 3), (3, 3), (3, 6), (0, 6), (0, 3)]
It has the format of (X, Y) where I want to get the max and min of all Xs and Ys in this list.
It should be min(X)=0, max(X)=9, min(Y)=0, max(Y)=9
However, when I do this:
min(listoftuples)[0], max(listoftuples)[0]
min(listoftuples)[1], max(listoftuples)[1]
...for the Y values, the maximum value shown is 3, which is incorrect.
Why is that?
for the Y values, the maximum value shown is 3
because max(listoftuples) returns the tuple (9, 3), so max(listoftuples)[0] is 9 and max(listoftuples)[1] is 3.
By default, sequences are compared lexicographically: first by the element at index 0, then by the element at index 1, and so on.
If you want to find the tuple with the maximum value at the second index, you need to use a key function:
from operator import itemgetter
li = [(0, 0), (3, 0), ... ]
print(max(li, key=itemgetter(1)))
# or max(li, key=lambda t: t[1])
outputs
(3, 9)
Here is a simple way to do it using list comprehensions:
min([arr[i][0] for i in range(len(arr))])
max([arr[i][0] for i in range(len(arr))])
min([arr[i][1] for i in range(len(arr))])
max([arr[i][1] for i in range(len(arr))])
In this code, I have used a list comprehension to create a list of all X and all Y values and then found the min/max for each list. This produces your desired answer.
The first two lines are for the X values and the last two lines are for the Y values.
Tuples are ordered by their first value, then in case of a tie, by their second value (and so on). That means max(listoftuples) is (9, 3). See How does tuple comparison work in Python?
So to find the highest y-value, you have to look specifically at the second elements of the tuples. One way you could do that is by splitting the list into x-values and y-values, like this:
xs, ys = zip(*listoftuples)
Or if you find that confusing, you could use this instead, which is roughly equivalent:
xs, ys = ([t[i] for t in listoftuples] for i in range(2))
Then get each of their mins and maxes, like this:
x_min_max, y_min_max = [(min(L), max(L)) for L in (xs, ys)]
print(x_min_max, y_min_max) # -> (0, 9) (0, 9)
Another way is to use NumPy to treat listoftuples as a matrix.
import numpy as np
a = np.array(listoftuples)
x_min_max, y_min_max = [(min(column), max(column)) for column in a.T]
print(x_min_max, y_min_max) # -> (0, 9) (0, 9)
(There's probably a more idiomatic way to do this, but I'm not super familiar with NumPy.)
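For what it's worth, NumPy can also reduce along axis 0 directly, which gives the column-wise (x, y) minima and maxima:
print(a.min(axis=0), a.max(axis=0))  # [0 0] [9 9]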

Strange interference between the heapq module and a dictionary

On one hand, I have a grid defaultdict that stores the neighboring nodes of each node on a grid and its weight (all 1 in the example below).
# node: [(weight, nbr_node), ...]
grid = { 0: [(1, -5), (1, -4), (1, -3), (1, -1), (1, 1), (1, 3), (1, 4), (1, 5)],
1: [(1, -4), (1, -3), (1, -2), (1, 0), (1, 2), (1, 4), (1, 5), (1, 6)],
2: [(1, -3), (1, -2), (1, -1), (1, 1), (1, 3), (1, 5), (1, 6), (1, 7)],
3: [(1, -2), (1, -1), (1, 0), (1, 2), (1, 4), (1, 6), (1, 7), (1, 8)],
...
}
On the other, I have a Dijkstra function that computes the shortest path between 2 nodes on this grid. The algorithm uses the heapq module and works perfectly fine.
from heapq import heappush, heappop

def Dijkstra(s, e, grid):  # startpoint, endpoint, grid
    visited = set()
    distances = {s: 0}
    p = {}
    queue = [(0, s)]
    while queue != []:
        weight, node = heappop(queue)
        if node in visited:
            continue
        visited.add(node)
        for n_weight, n_node in grid[node]:
            if n_node in visited:
                continue
            total = weight + n_weight
            if n_node not in distances or distances[n_node] > total:
                distances[n_node] = total
                heappush(queue, (total, n_node))
                p[n_node] = node
Problem: when calling the Dijkstra function multiple times, heappush is... adding new keys to the grid dictionary for no reason!
Here is an MCVE:
import random
from collections import defaultdict

# Creating the dictionary
grid = defaultdict(list)
N = 4
kernel = (-N-1, -N, -N+1, -1, 1, N-1, N, N+1)
for i in range(N*N):
    for n in kernel:
        if i > N and i < (N*N) - 1 - N and (i%N) > 0 and (i%N) < N - 1:
            grid[i].append((1, i+n))

# Calling Dijkstra multiple times
keys = list(range(N*N))
while keys:
    k1, k2 = random.sample(keys, 2)
    Dijkstra(k1, k2, grid)
    keys.remove(k1)
    keys.remove(k2)
The original grid defaultdict:
dict_keys([5, 6, 9, 10])
...and after calling the Dijkstra function multiple times:
dict_keys([5, 6, 9, 10, 4, 0, 1, 2, 8, 3, 7, 11, 12, 13, 14, 15])
When calling the Dijkstra function multiple times without heappush (just commenting out the heappush at the end):
dict_keys([5, 6, 9, 10])
Question:
How can I avoid this strange behavior ?
Please note that I'm using Python 2.7 and can't use numpy.
I could reproduce and fix it. The problem is in the way you are building grid: it contains values that do not exist as keys (from -4 to 0 and from 16 to 20 in the example). So you push those nonexistent nodes onto the heap, and later pop them.
You then end up executing for n_weight, n_node in grid[node]: where node does not (yet) exist in grid. Since grid is a defaultdict, a new key is automatically inserted with an empty list as its value.
The fix is trivial (at least for the example data): it is enough to ensure that every node added as a value in grid also exists as a key, e.g. with a modulo:
for i in range(N*N):
    for n in kernel:
        grid[i].append((1, (i+n + N + 1) % (N*N)))
But even for real data it should not be very hard to ensure that all nodes appearing in grid values also exist as keys...
BTW, if grid had been a simple dict, the error would have been immediate with a KeyError on grid[node].
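The defaultdict side effect described above is easy to reproduce in isolation:
from collections import defaultdict

d = defaultdict(list)
d[5].append((1, 6))   # key 5 is created and gets a value
unused = d[99]        # merely reading a missing key also inserts it, with []
print(sorted(d))      # [5, 99]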
