How to scale polygon using shapely? - python

I'm trying to scale one shape into a larger one, like this:
Here is an example:
import shapely.geometry

poly_context = {'type': 'MULTIPOLYGON',
                'coordinates': [[[[1, 2], [2, 1], [4, 3], [3, 4]]]]}
poly_shape = shapely.geometry.asShape(poly_context)
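For the scaling itself, shapely.affinity.scale is the usual tool. A minimal sketch, assuming Shapely 2.x (where asShape() has been removed in favour of shape()) and an illustrative factor of 2 about the centroid:

import shapely.geometry
from shapely import affinity

poly_context = {'type': 'MULTIPOLYGON',
                'coordinates': [[[[1, 2], [2, 1], [4, 3], [3, 4]]]]}
# shape() accepts the same GeoJSON-like mapping that asShape() used to
poly_shape = shapely.geometry.shape(poly_context)

# scale 2x in both directions about the centroid; if origin is omitted,
# scaling is done about the center of the bounding box
bigger = affinity.scale(poly_shape, xfact=2, yfact=2, origin='centroid')
print(bigger.bounds)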

If your polygon is not convex, the scale method may not give you the desired output. For example:
import geopandas as gpd
from shapely import Polygon
from shapely import affinity
vertices = [(0, 0), (1, 1), (2, 0.5), (2.5, 2), (0.5, 2.5)]
# Create the polygon
polygon = Polygon(vertices)
scaled_polygon = affinity.scale(polygon, xfact=1.2, yfact=1.2)
gdf = gpd.GeoDataFrame({'geometry': [scaled_polygon, polygon]})
gdf.plot(column='geometry')
So, maybe the method you want is buffer instead of scale.
Example:
buffered_polygon = polygon.buffer(0.2, join_style=2)
gdf = gpd.GeoDataFrame({'geometry': [buffered_polygon, polygon]})
gdf.plot(column='geometry')
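A note on the join_style=2 argument: it selects a mitre join, which keeps corners sharp so the buffered outline stays parallel to the original edges (1 is round, 3 is bevel). Shapely also exposes named constants for this; a small sketch using them (the buffer distance is just illustrative):

from shapely.geometry import JOIN_STYLE, Polygon

polygon = Polygon([(0, 0), (1, 1), (2, 0.5), (2.5, 2), (0.5, 2.5)])

# JOIN_STYLE.round == 1, JOIN_STYLE.mitre == 2, JOIN_STYLE.bevel == 3
buffered_polygon = polygon.buffer(0.2, join_style=JOIN_STYLE.mitre)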

Related

Extrude a concave, complex polygon in PyVista

I wish to take a concave and complex (containing holes) polygon and extrude it 'vertically' into a polyhedron, purely for visualisation. I begin with a shapely Polygon, like below:
from shapely.geometry import Polygon

poly = Polygon(
    [(0, 0), (10, 0), (10, 10), (5, 8), (0, 10), (1, 7), (0, 5), (1, 3)],
    holes=[
        [(2, 2), (4, 2), (4, 4), (2, 4)],
        [(6, 6), (7, 6), (6.5, 6.5), (7, 7), (6, 7), (6.2, 6.5)]])
which I plot correctly (reorienting the exterior coordinates to be clockwise and the hole coordinates to be counterclockwise) in matplotlib as:
I then seek to render this polygon extruded out-of-the-page (along z) using PyVista. There are a few hurdles: PyVista doesn't directly support concave (or complex) input to its PolyData type. So we first create an extrusion of simple (hole-free) concave polygons, as per this discussion.
import pyvista

def extrude_simple_polygon(xy, z0, z1):
    # force counter-clockwise ordering, so PyVista interprets the polygon correctly
    xy = _reorient_coords(xy, clockwise=False)
    # remove duplication of first & last vertex
    xyz0 = [(x, y, z0) for x, y in xy]
    if xyz0[0] == xyz0[-1]:
        xyz0.pop()
    # explicitly set edge_source
    base_vert = [len(xyz0)] + list(range(len(xyz0)))
    base_data = pyvista.PolyData(xyz0, base_vert)
    base_mesh = base_data.delaunay_2d(edge_source=base_data)
    vol_mesh = base_mesh.extrude((0, 0, z1 - z0), capping=True)
    # force triangulation, so PyVista allows boolean_difference
    return vol_mesh.triangulate()
Observe this works when extruding the outer polygon and each of its internal polygons in turn:
extrude_simple_polygon(list(poly.exterior.coords), 0, 5).plot()
extrude_simple_polygon(list(poly.interiors[0].coords), 0, 5).plot()
extrude_simple_polygon(list(poly.interiors[1].coords), 0, 5).plot()
I reasoned that to create an extrusion of the original complex polygon, I could compute the boolean_difference. Alas, the result of
outer_vol = extrude_simple_polygon(list(poly.exterior.coords), 0, 5)
for hole in poly.interiors:
    hole_vol = extrude_simple_polygon(list(hole.coords), 0, 5)
    outer_vol = outer_vol.boolean_difference(hole_vol)
outer_vol.plot()
is erroneous:
The docs advise inspecting the normals via plot_normals, which reveals that all extruded volumes have inward-pointing (or otherwise unexpected) normals:
The extrude doc says nothing about the orientation of the extruded surface normals, nor about the expected orientation of the original object (in this case, a polygon).
We could be forgiven for guessing that our polygons must be clockwise, so we set clockwise=True in the first line of extrude_simple_polygon and try again. Alas, PolyData now misinterprets our base polygon; calling base_mesh.plot() reveals (what should look like our original blue outer polygon):
and, with extrusion:
Does PyVista always expect counter-clockwise polygons?
Why does extrude create volumes with inward-pointing surface normals?
How can I correct the extruded surface normals?
Otherwise, how can I make PyVista correctly visualise what should be a very simple extrusion of a concave, complex polygon?
You're very close. What you have to do is use a single call to delaunay_2d() with all three polygons (i.e. the enclosing one and the two holes) as edge source (loop source?). It's also important to have faces (rather than lines) from each polygon; this is what makes it possible to enforce the holeyness of the holes.
Here's a complete example for your input (where I manually flipped the orientation of the holes; you seem to have a _reorient_coords() helper that you should use instead):
import pyvista as pv
# coordinates of enclosing polygon
poly_points = [
    (0, 0), (10, 0), (10, 10), (5, 8), (0, 10), (1, 7), (0, 5), (1, 3),
]
# hole point order hard-coded here; use your _reorient_coords() function
holes = [
    [(2, 2), (4, 2), (4, 4), (2, 4)][::-1],
    [(6, 6), (7, 6), (6.5, 6.5), (7, 7), (6, 7), (6.2, 6.5)][::-1],
]
z0, z1 = 0.0, 5.0
def is_clockwise(xy):
    # signed-area (shoelace) test: a positive sum means clockwise
    value = 0
    for i in range(len(xy)):
        x1, y1 = xy[i]
        x2, y2 = xy[(i + 1) % len(xy)]
        value += (x2 - x1) * (y2 + y1)
    return value > 0

def reorient_coords(xy, clockwise):
    if is_clockwise(xy) == clockwise:
        return xy
    return xy[::-1]

def points_2d_to_poly(xy, z, clockwise):
    """Convert a sequence of 2d coordinates to a polydata with a polygon face."""
    # ensure the requested orientation and drop a repeated closing vertex
    xy = reorient_coords(xy, clockwise)
    if xy[0] == xy[-1]:
        xy = xy[:-1]
    xyz = [(x, y, z) for x, y in xy]
    faces = [len(xyz), *range(len(xyz))]
    data = pv.PolyData(xyz, faces=faces)
    return data
# bounding polygon (counter-clockwise)
polygon = points_2d_to_poly(poly_points, z0, clockwise=False)
# add all holes (opposite orientation to the outer ring)
for hole_points in holes:
    polygon += points_2d_to_poly(hole_points, z0, clockwise=True)
# triangulate poly with all three subpolygons supplying edges
# (relative face orientation is critical here)
polygon_with_holes = polygon.delaunay_2d(edge_source=polygon)
# extrude
holey_solid = polygon_with_holes.extrude((0, 0, z1 - z0), capping=True)
holey_solid.plot()
Here's the top view of the polygon pre-extrusion:
plotter = pv.Plotter()
plotter.add_mesh(polygon_with_holes, show_edges=True, color='cyan')
plotter.view_xy()
plotter.show()
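As for the sub-question about correcting inward-pointing normals on an already-extruded mesh: PyVista's compute_normals filter can consistently re-orient them. A minimal sketch (not from the original answer), assuming the poly and extrude_simple_polygon defined in the question:

# re-orient the normals of one of the question's extruded volumes
vol_mesh = extrude_simple_polygon(list(poly.exterior.coords), 0, 5)
fixed = vol_mesh.compute_normals(auto_orient_normals=True)
fixed.plot_normals(mag=0.5)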

Visualize a distance matrix as a graph

For the following distance matrix:
∞, 1, 2
∞, ∞, 1
∞, ∞, ∞
I would need to visualise the following graph:
That's how it should look
I tried with the following code:
import networkx as nx
import numpy as np
import string
dt = [('len', float)]
A = np.array([(0, 1, None, 3, None),
              (2, 0, 4, 1, None),
              (5, None, 0, 3, None),
              (None, None, None, 0, None),
              (None, None, None, 2, 0),
              ])*10  # note: the None entries make this an object array, so the *10 raises a TypeError
A = A.view(dt)
G = nx.from_numpy_matrix(A)
G = nx.drawing.nx_agraph.to_agraph(G)
G.node_attr.update(color="red", style="filled")
G.edge_attr.update(color="blue", width="2.0")
G.draw('out.png', format='png', prog='neato')
but I cannot seem to input infinity (∞) to show that there is no connection. I tried with None, -1, and even ∞ but nothing seems to work right, so if anyone has any idea how I can visualise that distance matrix, please let me know.
It's not immediately obvious if this is what you are after, but one option is to use np.inf to denote the infinity. Below is a snippet where edges with value np.inf are removed, but whether this makes sense will depend on the context:
import networkx as nx
import numpy as np
A = np.array(
    [
        (0, 1, np.inf),
        (2, 0, 4),
        (5, np.inf, 0),
    ],
    dtype="float",
)
# treat np.inf (no connection) as "no edge" by zeroing those entries
A[A == np.inf] = 0
G = nx.from_numpy_matrix(A, create_using=nx.DiGraph)
G = nx.drawing.nx_agraph.to_agraph(G)
G.node_attr.update(color="red", style="filled")
G.edge_attr.update(color="blue", width="0.3")
G.draw("out.png", format="png", prog="neato")

How to retain node ordering when converting graph from networkx to pytorch geometric?

Question: How to retain the node ordering/labels when converting a graph from networkx to pytorch geometric?
Code: (to be run in Google Colab)
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import networkx as nx
import torch
from torch.nn import Linear
import torch.nn.functional as F
torch.__version__
# install pytorch geometric
!pip install torch-scatter torch-sparse torch-cluster torch-spline-conv torch-geometric -f https://data.pyg.org/whl/torch-1.10.0+cpu.html
from torch_geometric.nn import GCNConv
from torch_geometric.utils.convert import to_networkx, from_networkx
# Make the networkx graph
G = nx.Graph()
# Add some cars
G.add_nodes_from([
    ('Ford', {'y': 0, 'Name': 'Ford'}),
    ('Lexus', {'y': 1, 'Name': 'Lexus'}),
    ('Peugot', {'y': 2, 'Name': 'Peugot'}),
    ('Mitsubushi', {'y': 3, 'Name': 'Mitsubishi'}),
    ('Mazda', {'y': 4, 'Name': 'Mazda'}),
])
# Relabel the nodes
remapping = {x[0]: i for i, x in enumerate(G.nodes(data = True))}
G = nx.relabel_nodes(G, remapping, copy=False)
# Add some edges --> A = [(0, 1, 0, 1, 1), (1, 0, 1, 1, 0), (0, 1, 0, 0, 1), (1, 1, 0, 0, 0), (1, 0, 1, 0, 0)] as the adjacency matrix
G.add_edges_from([
    (0, 1), (0, 3), (0, 4),
    (1, 2), (1, 3),
    (2, 1), (2, 4),
    (3, 0), (3, 1),
    (4, 0), (4, 2)
])
# Convert the graph into PyTorch geometric
pyg_graph = from_networkx(G)
pyg_graph.edge_index
When I print the edge indices in the last line of the code, I get a different answer each time I run it. Most importantly, I want to consistently get the same (correct) answer, in which the node numbering from networkx is retained:
tensor([[0, 0, 1, 1, 1, 2, 2, 3, 3, 4, 4, 4],
[4, 2, 4, 2, 3, 0, 1, 1, 4, 0, 1, 3]])
The form of this edge index tensor is:
the first list contains the node ids of the source nodes
the second list contains the node ids of the target nodes
For the node ids to be retained, we would expect node 0 to appear three times in the first (source) list instead of just twice.
Is there any way for me to force PyTorch Geometric to copy over the node ids?
Thanks
[EDIT] One possible work-around I have is to use the following bit of code, which produces edge index and weight tensors for PyTorch Geometric:
# Create a dictionary of the mappings from company --> node id
mapping_dict = {x: i for i, x in enumerate(list(G.nodes()))}
# Get the number of nodes
num_nodes = len(mapping_dict)
# Now create a source, target, and edge list for PyTorch geometric graph
edge_source_list = []
edge_target_list = []
edge_weight_list = []
# iterate through all the edges
for e in G.edges():
    # first element of tuple is appended to source edge list
    edge_source_list.append(mapping_dict[e[0]])
    # last element of tuple is appended to target edge list
    edge_target_list.append(mapping_dict[e[1]])
    # add the edge weight to the edge weight list
    edge_weight_list.append(1)
# now create full edge lists for pytorch geometric - undirected edges need to be defined in both directions
full_source_list = edge_source_list + edge_target_list # full source list
full_target_list = edge_target_list + edge_source_list # full target list
full_weight_list = edge_weight_list + edge_weight_list # full edge weight list
print(len(edge_source_list), len(edge_target_list), len(full_source_list))
# now convert these to torch tensors
edge_index_tensor = torch.LongTensor( np.concatenate([ [np.array(full_source_list)], [np.array(full_target_list)]] ))
edge_weight_tensor = torch.FloatTensor(np.array(full_weight_list))
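If it helps, these tensors can be packed into a torch_geometric.data.Data object directly; a minimal sketch, assuming the edge_index_tensor, edge_weight_tensor and num_nodes built above (storing the weights under edge_weight is a common convention rather than a requirement):

from torch_geometric.data import Data

# hypothetical assembly of the workaround's tensors into a PyG graph object
data = Data(edge_index=edge_index_tensor,
            edge_weight=edge_weight_tensor,
            num_nodes=num_nodes)
print(data)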
It seems this issue was resolved in the comments (the solution proposed by @Sparky05 is to use copy=True, which is the default for nx.relabel_nodes), but below is an explanation of why the node order changes.
When copy=False is passed, nx.relabel_nodes re-adds the nodes to the graph in the order in which they appear in the set of keys of the remapping dict. The relevant lines in the code are here:
def _relabel_inplace(G, mapping):
    old_labels = set(mapping.keys())
    new_labels = set(mapping.values())
    if len(old_labels & new_labels) > 0:
        # ... code for overlapping label sets skipped ...
    else:
        # non-overlapping label sets
        nodes = old_labels
    # ... lines skipped ...
    for old in nodes:  # this is now in the set order
Because a set is used, the order of the nodes is modified; to preserve the order, the non-overlapping branch should instead be:
    else:
        # non-overlapping label sets
        nodes = mapping.keys()
A related PR is submitted here.
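As noted above, keeping the default copy=True sidesteps the reordering entirely. A minimal sketch under that assumption, reusing the G and from_networkx from the question's code:

# with copy=True (the default), relabel_nodes builds a new graph and keeps
# the nodes in their original insertion order
remapping = {name: i for i, name in enumerate(G.nodes())}
G = nx.relabel_nodes(G, remapping, copy=True)

pyg_graph = from_networkx(G)
print(pyg_graph.edge_index)  # node ids now match the insertion order above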

How to assign graph label for graph in pytorch geometric?

Question: How can we assign a graph-level label to a graph made in PyTorch geometric?
Example: Let us say we create an undirected graph in PyTorch geometric and now we want to label that graph according to its class (can use a numerical value). How could we now assign a class label for the whole graph, such that it can be used for graph classification tasks? Furthermore, how could we collect a bunch of graphs with labels to form our dataset?
Code: (to be run in Google Colab)
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import networkx as nx
import torch
from torch.nn import Linear
import torch.nn.functional as F
torch.__version__
# install pytorch geometric
!pip install torch-scatter torch-sparse torch-cluster torch-spline-conv torch-geometric -f https://data.pyg.org/whl/torch-1.10.0+cpu.html
from torch_geometric.nn import GCNConv
from torch_geometric.utils.convert import to_networkx, from_networkx
# Make the networkx graph
G = nx.Graph()
# Add some cars
G.add_nodes_from([
    ('Ford', {'y': 0, 'Name': 'Ford'}),
    ('Lexus', {'y': 1, 'Name': 'Lexus'}),
    ('Peugot', {'y': 2, 'Name': 'Peugot'}),
    ('Mitsubushi', {'y': 3, 'Name': 'Mitsubishi'}),
    ('Mazda', {'y': 4, 'Name': 'Mazda'}),
])
# Relabel the nodes
remapping = {x[0]: i for i, x in enumerate(G.nodes(data = True))}
G = nx.relabel_nodes(G, remapping, copy=True)
# Add some edges --> A = [(0, 1, 0, 1, 1), (1, 0, 1, 1, 0), (0, 1, 0, 0, 1), (1, 1, 0, 0, 0), (1, 0, 1, 0, 0)] as the adjacency matrix
G.add_edges_from([
    (0, 1), (0, 3), (0, 4),
    (1, 2), (1, 3),
    (2, 1), (2, 4),
    (3, 0), (3, 1),
    (4, 0), (4, 2)
])
# Convert the graph into PyTorch geometric
pyg_graph = from_networkx(G)
Now, how could we give this graph a label of 0 (for a class, e.g. cars)? And if we did that for lots of graphs, how could we bunch them together to form a dataset?
Thanks
The pyg_graph object has type torch_geometric.data.Data.
Inspecting the source code of the Data class, you can see that it defines the dunder methods __setattr__ and __setitem__.
Thanks to __setattr__, you can assign the label with the line
pyg_graph.label = 0
or you can instead use __setitem__ doing
pyg_graph["label"] = 0
The two notations perform the same action internally, so they can be used interchangeably.
To create a batch of graphs and labels, you can simply do
>>> batch = torch_geometric.data.Batch.from_data_list([pyg_graph, pyg_graph])
>>> batch.label
tensor([0, 0])
and PyG takes care of the batching of all attributes automatically.
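For assembling a whole dataset for graph classification, a common pattern is to store the graph-level label under y and hand a plain list of Data objects to a PyG DataLoader, which batches them for you. A minimal sketch, assuming the pyg_graph from above (batch size and label are illustrative):

import torch
from torch_geometric.loader import DataLoader

# conventional: store the graph-level label in `y` as a 1-element tensor
pyg_graph.y = torch.tensor([0])

dataset = [pyg_graph, pyg_graph, pyg_graph]  # a list of labelled graphs
loader = DataLoader(dataset, batch_size=2, shuffle=True)

for batch in loader:
    print(batch.y)  # one graph-level label per graph in the batch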

geopandas difference only if a column's value is greater

Initialize data:
import pandas as pd
import geopandas as gpd
from shapely.geometry import Polygon

geoms = gpd.GeoSeries([
    Polygon([(0, 0), (2, 0), (2, 2), (0, 2)]),
    Polygon([(1, 1), (3, 1), (3, 3), (1, 3)]),
    Polygon([(0, 0), (3, 0), (3, 3), (0, 3)]),
])
gdf = gpd.GeoDataFrame(geometry=geoms)
gdf["value"] = [3, 2, 1]
gdf.plot(cmap='tab10', alpha=0.5)
original
Then I want to cut holes in the polygons wherever they are overlapped by rows whose value is greater than the current row's.
gdf_list = []
for value in gdf["value"]:
    gdf_equal_value = gdf.loc[gdf["value"] == value, "geometry"]
    gdf_above_value = gdf.loc[gdf["value"] > value, "geometry"]
    gdf_list.append(
        (value, gdf_equal_value.difference(gdf_above_value.unary_union))
    )
import matplotlib.pyplot as plt

for value, geom in gdf_list:
    geom.plot()
    plt.xlim(0, 3)
    plt.ylim(0, 3)
    plt.title(value)
holes
Since I have much more unique values in my actual dataset, is there a way to optimize this (e.g. not have to loop through each one)?
As I mentioned in my comment, I'm not 100% sure I understand what you want your final product to look like. Please consider editing your question to make that clearer.
In your original question, your final product was a list of (value, geodataframe) pairs, and the geodataframe contained the rows of the original gdf differenced with respect to a dissolved polygon of the gdf elements whose values were larger than the reference value.
Is that exactly what you want?
Here's a quick solution to get to something similar, but not exactly identical.
import numpy as np
import pandas as pd
import geopandas as gpd
from shapely.geometry import Polygon
import matplotlib.pyplot as plt

geoms = gpd.GeoSeries([
    Polygon([(0, 0), (2, 0), (2, 2), (0, 2)]),
    Polygon([(1, 1), (3, 1), (3, 3), (1, 3)]),
    Polygon([(0, 0), (3, 0), (3, 3), (0, 3)]),
])
gdf = gpd.GeoDataFrame(geometry=geoms)
gdf["value"] = [3, 2, 1]

gdf_list = []
for value in gdf["value"].unique():
    gdf['classif'] = np.select(
        condlist=[(gdf['value'] == value), (gdf['value'] > value)],
        choicelist=['Equal', 'Larger'],
        default=np.nan)
    gdf_diss = gdf.dissolve(by='classif', dropna=True).reset_index()
    if gdf_diss['classif'].isin(['Equal', 'Larger']).sum() == 2:
        gdf_list.append(
            (value, gdf_diss.iloc[0]['geometry'].difference(gdf_diss.iloc[1]['geometry']))
        )
In this case, the gdf_list contains (value, Polygon) pairs. The Polygons are the result of the difference between two other polygons:
A) The dissolved polygon of all the rows whose value in the value column is equal to the reference value.
B) The dissolved polygon of all the rows whose value in the value column is larger than the reference value.
Note that the result isn't a GeoDataFrame of the differences - for each value, it's a single Polygon.
While this might not be exactly what you were looking for, I hope the tricks I used (dissolving instead of subsetting) might help what you're trying to do.
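For reference, one alternative sketch that avoids re-dissolving for every value: sort by value descending and keep a running union of everything already seen, so each geometry is differenced once against the union of all strictly larger values. This is a different approach from the dissolve-based one above; it reuses the question's gdf, and the output mirrors the question's (value, GeoSeries) pairs:

from shapely.geometry import Polygon
from shapely.ops import unary_union

seen_union = Polygon()  # union of all geometries with a strictly larger value
gdf_list = []
# sweep the unique values from largest to smallest
for value, group in gdf.sort_values("value", ascending=False).groupby("value", sort=False):
    gdf_list.append((value, group.geometry.difference(seen_union)))
    seen_union = unary_union([seen_union, group.geometry.unary_union])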
