For the following distance matrix:
∞, 1, 2
∞, ∞, 1
∞, ∞, ∞
I would need to visualise the following graph:
That's how it should look.
I tried with the following code:
import networkx as nx
import numpy as np
import string
dt = [('len', float)]
A = np.array([ (0, 1, None, 3, None),
(2, 0, 4, 1, None),
(5, None, 0, 3, None),
(None, None, None, 0, None),
(None, None, None, 2, 0),
])*10
A = A.view(dt)
G = nx.from_numpy_matrix(A)
G = nx.drawing.nx_agraph.to_agraph(G)
G.node_attr.update(color="red", style="filled")
G.edge_attr.update(color="blue", width="2.0")
G.draw('out.png', format='png', prog='neato')
but I cannot seem to input infinity (∞) to show that there is no connection. I tried None, -1, and even ∞, but nothing seems to work, so if anyone has any idea how I can visualise that distance matrix, please let me know.
It's not immediately obvious if this is what you are after, but one option is to use np.inf to denote infinity. Below is a snippet where edges with value np.inf are removed, but whether this makes sense will depend on the context:
import networkx as nx
import numpy as np
A = np.array(
[
(0, 1, np.inf),
(2, 0, 4),
(5, np.inf, 0),
],
dtype="float",
)
# if edge is np.inf replace with zero
A[A == np.inf] = 0
G = nx.from_numpy_matrix(A, create_using=nx.DiGraph)
G = nx.drawing.nx_agraph.to_agraph(G)
G.node_attr.update(color="red", style="filled")
G.edge_attr.update(color="blue", width="0.3")
G.draw("out.png", format="png", prog="neato")
I'm trying to scale one shape to a larger one, like this:
I have an example here:
import shapely.geometry

poly_context = {'type': 'MULTIPOLYGON',
                'coordinates': [[[[1, 2], [2, 1], [4, 3], [3, 4]]]]}
poly_shape = shapely.geometry.asShape(poly_context)
If your polygon is not convex, the scale method may not give you the desired output. For example:
import geopandas as gpd
from shapely import Polygon
from shapely import affinity
vertices = [(0, 0), (1, 1), (2, 0.5), (2.5, 2), (0.5, 2.5)]
# Create the polygon
polygon = Polygon(vertices)
scaled_polygon = affinity.scale(polygon, xfact=1.2, yfact=1.2)
gdf = gpd.GeoDataFrame({'geometry': [scaled_polygon, polygon]})
gdf.plot(column='geometry')
So maybe the desired method should be buffer instead of scale.
Example:
buffered_polygon = polygon.buffer(0.2, join_style=2)
gdf = gpd.GeoDataFrame({'geometry': [buffered_polygon, polygon]})
gdf.plot(column='geometry')
Question: How to retain the node ordering/labels when converting a graph from networkx to pytorch geometric?
Code: (to be run in Google Colab)
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import networkx as nx
import torch
from torch.nn import Linear
import torch.nn.functional as F
torch.__version__
# install pytorch geometric
!pip install torch-scatter torch-sparse torch-cluster torch-spline-conv torch-geometric -f https://data.pyg.org/whl/torch-1.10.0+cpu.html
from torch_geometric.nn import GCNConv
from torch_geometric.utils.convert import to_networkx, from_networkx
# Make the networkx graph
G = nx.Graph()
# Add some cars
G.add_nodes_from([
('Ford', {'y': 0, 'Name': 'Ford'}),
('Lexus', {'y': 1, 'Name': 'Lexus'}),
('Peugot', {'y': 2, 'Name': 'Peugot'}),
('Mitsubushi', {'y': 3, 'Name': 'Mitsubishi'}),
('Mazda', {'y': 4, 'Name': 'Mazda'}),
])
# Relabel the nodes
remapping = {x[0]: i for i, x in enumerate(G.nodes(data = True))}
G = nx.relabel_nodes(G, remapping, copy=False)
# Add some edges --> A = [(0, 1, 0, 1, 1), (1, 0, 1, 1, 0), (0, 1, 0, 0, 1), (1, 1, 0, 0, 0), (1, 0, 1, 0, 0)] as the adjacency matrix
G.add_edges_from([
(0, 1), (0, 3), (0, 4),
(1, 2), (1, 3),
(2, 1), (2, 4),
(3, 0), (3, 1),
(4, 0), (4, 2)
])
# Convert the graph into PyTorch geometric
pyg_graph = from_networkx(G)
pyg_graph.edge_index
When I print the edge indices in the last line of the code, I get different answers each time I run it. Most importantly, I am looking to consistently get the same (correct) answer, in which the node numbering from networkx is retained:
tensor([[0, 0, 1, 1, 1, 2, 2, 3, 3, 4, 4, 4],
[4, 2, 4, 2, 3, 0, 1, 1, 4, 0, 1, 3]])
The form of this edge index tensor is:
- the first list contains the node ids of the source nodes
- the second list contains the node ids of the target nodes
For the node ids to be retained, we would expect node 0 to appear three times in the first (source) list instead of just twice.
Is there any way for me to force PyTorch Geometric to copy over the node ids?
Thanks
[EDIT] One possible work-around I have is the following bit of code, which produces edge index and weight tensors for PyTorch Geometric:
# Create a dictionary of the mappings from company --> node id
mapping_dict = {x: i for i, x in enumerate(list(G.nodes()))}
# Get the number of nodes
num_nodes = len(mapping_dict)
# Now create a source, target, and edge list for PyTorch geometric graph
edge_source_list = []
edge_target_list = []
edge_weight_list = []
# iterate through all the edges
for e in G.edges():
    # first element of tuple is appended to source edge list
    edge_source_list.append(mapping_dict[e[0]])
    # last element of tuple is appended to target edge list
    edge_target_list.append(mapping_dict[e[1]])
    # add the edge weight to the edge weight list
    edge_weight_list.append(1)
# now create full edge lists for pytorch geometric - undirected edges need to be defined in both directions
full_source_list = edge_source_list + edge_target_list # full source list
full_target_list = edge_target_list + edge_source_list # full target list
full_weight_list = edge_weight_list + edge_weight_list # full edge weight list
print(len(edge_source_list), len(edge_target_list), len(full_source_list))
# now convert these to torch tensors
edge_index_tensor = torch.LongTensor( np.concatenate([ [np.array(full_source_list)], [np.array(full_target_list)]] ))
edge_weight_tensor = torch.FloatTensor(np.array(full_weight_list))
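For what it's worth, these tensors can then be wrapped in a PyG Data object; a minimal sketch (the attribute name edge_weight is my choice, not something PyG requires):
from torch_geometric.data import Data

# pack the tensors built above into a graph object;
# num_nodes is passed explicitly because there are no node features here
data = Data(edge_index=edge_index_tensor,
            edge_weight=edge_weight_tensor,
            num_nodes=num_nodes)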
It seems this issue was resolved in the comments (the solution proposed by @Sparky05 is to use copy=True, which is the default for nx.relabel_nodes), but below is the explanation for why the node order is changed.
When copy=False is passed, nx.relabel_nodes re-adds the nodes to the graph in the order they appear in the set of keys of the remapping dict. The relevant lines in the code are here:
def _relabel_inplace(G, mapping):
    old_labels = set(mapping.keys())
    new_labels = set(mapping.values())
    if len(old_labels & new_labels) > 0:
        ...  # code for overlapping label sets skipped
    else:
        # non-overlapping label sets
        nodes = old_labels
    ...  # lines skipped
    for old in nodes:  # this is now in the set order
Because a set is used, the order of the nodes is modified, so to preserve the order, the non-overlapping case should instead be treated as:
    else:
        # non-overlapping label sets
        nodes = mapping.keys()
A related PR is submitted here.
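As a quick sanity check of the fix from the comments, here is a minimal sketch (reusing the question's relabeling, with copy=True left at its default) showing that the node order is preserved:
import networkx as nx

G = nx.Graph()
G.add_nodes_from([
    ('Ford', {'y': 0}), ('Lexus', {'y': 1}), ('Peugot', {'y': 2}),
    ('Mitsubushi', {'y': 3}), ('Mazda', {'y': 4}),
])
remapping = {x: i for i, x in enumerate(G.nodes())}
# copy=True (the default) builds a new graph, adding nodes in the original
# insertion order, so the integer labels come out as 0..4
G = nx.relabel_nodes(G, remapping, copy=True)
print(list(G.nodes()))  # [0, 1, 2, 3, 4]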
Question: How can we assign a graph-level label to a graph made in PyTorch geometric?
Example: Let us say we create an undirected graph in PyTorch geometric and now we want to label that graph according to its class (can use a numerical value). How could we now assign a class label for the whole graph, such that it can be used for graph classification tasks? Furthermore, how could we collect a bunch of graphs with labels to form our dataset?
Code: (to be run in Google Colab)
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import networkx as nx
import torch
from torch.nn import Linear
import torch.nn.functional as F
torch.__version__
# install pytorch geometric
!pip install torch-scatter torch-sparse torch-cluster torch-spline-conv torch-geometric -f https://data.pyg.org/whl/torch-1.10.0+cpu.html
from torch_geometric.nn import GCNConv
from torch_geometric.utils.convert import to_networkx, from_networkx
# Make the networkx graph
G = nx.Graph()
# Add some cars
G.add_nodes_from([
('Ford', {'y': 0, 'Name': 'Ford'}),
('Lexus', {'y': 1, 'Name': 'Lexus'}),
('Peugot', {'y': 2, 'Name': 'Peugot'}),
('Mitsubushi', {'y': 3, 'Name': 'Mitsubishi'}),
('Mazda', {'y': 4, 'Name': 'Mazda'}),
])
# Relabel the nodes
remapping = {x[0]: i for i, x in enumerate(G.nodes(data = True))}
G = nx.relabel_nodes(G, remapping, copy=True)
# Add some edges --> A = [(0, 1, 0, 1, 1), (1, 0, 1, 1, 0), (0, 1, 0, 0, 1), (1, 1, 0, 0, 0), (1, 0, 1, 0, 0)] as the adjacency matrix
G.add_edges_from([
(0, 1), (0, 3), (0, 4),
(1, 2), (1, 3),
(2, 1), (2, 4),
(3, 0), (3, 1),
(4, 0), (4, 2)
])
# Convert the graph into PyTorch geometric
pyg_graph = from_networkx(G)
Now how could we give this graph a label = 0 (for a class, e.g. cars)? Then if we did that for lots of graphs, how could we bunch them together to form a dataset?
Thanks
The pyg_graph object has type torch_geometric.data.Data.
Inspecting the source code of Data class, you can see that it defines the dunder methods __setattr__ and __setitem__.
Thanks to __setattr__, you can assign the label with the line
pyg_graph.label = 0
or you can instead use __setitem__ doing
pyg_graph["label"] = 0
The two notations perform the same action internally, so they can be used interchangeably.
To create a batch of graphs and labels, you can simply do
batch = torch_geometric.data.Batch.from_data_list([pyg_graph, pyg_graph])
>>> batch.label
tensor([0, 0])
and PyG takes care of the batching of all attributes automatically.
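To collect many labeled graphs into a dataset for graph classification, a simple option is to keep the Data objects in a Python list and let PyG's DataLoader batch them. A sketch (reusing pyg_graph from above purely for illustration):
from torch_geometric.loader import DataLoader

# hypothetical dataset: in practice each entry would be a different graph,
# each labeled via e.g. `pyg_graph.label = 0` as shown above
dataset = [pyg_graph, pyg_graph, pyg_graph, pyg_graph]

loader = DataLoader(dataset, batch_size=2, shuffle=True)
for batch in loader:
    # graph-level labels are collated per batch, e.g. tensor([0, 0])
    print(batch.label)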
So I look at this sample and I want to create a mesh/surface grid that fills the space between the two surfaces. How can I do such a thing in PyVista?
What I tried, and what I cannot seem to bridge with Andras Deak's beautiful answer:
import pyvista as pv
import numpy as np
import matplotlib.pyplot as plt
from scipy.spatial import KDTree
import PVGeo
from PVGeo import interface
from PVGeo.filters import BuildSurfaceFromPoints
b = pv.read('./top.vtk') # PolyData
t = pv.read('./bottom.vtk') # PolyData
dim = (int(b.bounds[1]-b.bounds[0]), int(b.bounds[3]-b.bounds[2]), 1)
z_range = np.arange(b.bounds[4], b.bounds[5] )
bottom = BuildSurfaceFromPoints().apply(b)
top = BuildSurfaceFromPoints().apply(t)
grid_2d = top.points.reshape(dim[:-1] + (3,), order='F')[..., :-1]
That sadly fails on the grid_2d line with:
ValueError: cannot reshape array of size 1942464 into shape (30150, 26750, 3)
I don't know if there's a built-in way to interpolate between two surfaces, but it's not very hard to do so using just numpy.
Here's an example that uses Perlin noise to generate two sheets of data on the same grid in two different heights. The actual code for the answer comes after.
import numpy as np
import pyvista as pv
# generate two sheets of input data
noise = pv.perlin_noise(2, (0.2, 0.2, 0.2), (0, 0, 0))
bounds_2d = (-10, 10, -10, 10)
dim = (40, 50, 1)
bottom, top = [
pv.sample_function(noise, dim=dim, bounds=bounds_2d + (z, z)).warp_by_scalar()
for z in [-5, 5]
]
# actual answer starts here
# the top and bottom sheets are named `top` and `bottom`
# and they share the same 2d grid
# rebuild grid points
grid_2d = top.points.reshape(dim[:-1] + (3,), order='F')[..., :-1]
values_x = grid_2d[:, 0, 0]
values_y = grid_2d[0, :, 1]
# generate full grid with equidistant interpolation in each (x, y)
nz = 10
scale = np.linspace(0, 1, nz)
scale_z = scale[:, None] * [0, 0, 1] # shape (nz, 3)
scale_z_inv = (1 - scale[:, None]) * [0, 0, 1] # shape (nz, 3)
z_bottom = bottom.points.reshape(dim[:-1] + (3,), order='F')[..., -1] # shape (nx, ny)
z_top = top.points.reshape(dim[:-1] + (3,), order='F')[..., -1] # shape (nx, ny)
interpolated_z = scale * z_bottom[..., None] + (1 - scale) * z_top[..., None] # shape (nx, ny, nz)
grid_2d_in_3d = np.pad(grid_2d, [(0, 0), (0, 0), (0, 1)]) # shape (nx, ny, 3)
final_grid = grid_2d_in_3d[..., None, :] + interpolated_z[..., None] * [0, 0, 1] # shape (nx, ny, nz, 3)
mesh = pv.StructuredGrid(*final_grid.transpose())
# plot the two sheets and the interpolated grid
pv.set_plot_theme('document')
plotter = pv.Plotter()
plotter.add_mesh(bottom, show_scalar_bar=False)
plotter.add_mesh(top, show_scalar_bar=False)
plotter.add_mesh(mesh, style='wireframe')
plotter.show()
I want to implement the forward kinematics of a robot with TensorFlow, mainly to gain automatic differentiation and to plug this module into larger network architectures.
In general I have a bunch of 4x4 transformation matrices, defined by the dh-parameters (d, theta, a, alpha) and the joint angle q:
[[cos(theta+q),            -sin(theta+q),             0,           a           ],
 [sin(theta+q)*cos(alpha),  cos(theta+q)*cos(alpha), -sin(alpha), -sin(alpha)*d],
 [sin(theta+q)*sin(alpha),  cos(theta+q)*sin(alpha),  cos(alpha),  cos(alpha)*d],
 [0,                        0,                         0,           1          ]]
My robot has 10 different joints, all connected sequentially.
I thought it would be smart to precompute sine and cosine.
q = tf.keras.layers.Input((10,))
sin_q = tf.sin(q)
cos_q = tf.cos(q)
Let's look at the transformation at the first joint with the specific set of dh-parameters (d=0.1055, theta=0, a=0, alpha=0):
m0 = [[cos(q0), -sin(q0), 0, 0      ],
      [sin(q0),  cos(q0), 0, 0      ],
      [0,        0,       1, 0.10550],
      [0,        0,       0, 1      ]]
My first problem is how to build something like this with TensorFlow?
In numpy I would initialize the matrix and fill in the nonzero values.
m_shape = tf.TensorShape((batch_size,4,4))
m0 = tf.zeros(m_shape)
m0[..., 0, 0] = cos_q[..., 0]
m0[..., 0, 1] = -sin_q[..., 0]
m0[..., 1, 0] = sin_q[..., 0]
m0[..., 1, 1] = cos_q[..., 0]
m0[..., 2, 3] = 0.10550
m0[..., 3, 3] = 1
Error -> 'Tensor' object does not support item assignment
But TensorFlow doesn't allow item assignment on tensors.
It seems that the way to go is via tf.stack(). I need to create a vector of ones of the same size as my unspecified batch_size, then stack and reshape.
(Note: in the general case there are fewer zero values.)
e = tf.ones_like(q[..., 0])
m0 = tf.stack([cos_q[..., 0], -sin_q[..., 0], 0*e, 0*e,
sin_q[..., 0], cos_q[..., 0], 0*e, 0*e,
0*e, 0*e, 1*e, 0.10550*e,
0*e, 0*e, 0*e, 1*e], axis=-1)
m0 = tf.keras.layers.Reshape((4, 4))(m0)
Is this correct or is there a smarter way to build such general transformations in TensorFlow?
As the final result, I am interested in the transformation at the end of the kinematic chain. I want to put in an array of different joint configurations (?, 10) and get the transformation at the end effector (?, 4, 4).
m_end = m0 @ m1 @ m2 @ ... @ m10
forward_net = tf.keras.Model(inputs=[q], outputs=[m_end])
result = forward_net.predict(np.random.random((100, 10)))
This works, but it's neither elegant nor fast.
The speed is my bigger problem; the same implementation in numpy is 150x faster.
How can I improve the speed? I thought TensorFlow should excel at tasks like this.
Should I build it as a Model and use predict to calculate the results? There is nothing to learn here, so I am not sure what to use.
If you want to build 4x4 rotation matrices from an angle, or from the sine and cosine of an angle, you can do it like this:
import tensorflow as tf
def make_rotation(alpha, axis):
    return make_rotation_sincos(tf.math.sin(alpha), tf.math.cos(alpha), axis)

def make_rotation_sincos(sin, cos, axis):
    axis = axis.strip().lower()
    zeros = tf.zeros_like(sin)
    ones = tf.ones_like(sin)
    if axis == 'x':
        rot = tf.stack([
            tf.stack([ ones, zeros, zeros], -1),
            tf.stack([zeros,   cos,  -sin], -1),
            tf.stack([zeros,   sin,   cos], -1),
        ], -2)
    elif axis == 'y':
        rot = tf.stack([
            tf.stack([  cos, zeros,   sin], -1),
            tf.stack([zeros,  ones, zeros], -1),
            tf.stack([ -sin, zeros,   cos], -1),
        ], -2)
    elif axis == 'z':
        rot = tf.stack([
            tf.stack([  cos,  -sin, zeros], -1),
            tf.stack([  sin,   cos, zeros], -1),
            tf.stack([zeros, zeros,  ones], -1),
        ], -2)
    else:
        raise ValueError('Invalid axis {!r}.'.format(axis))
    last_row = tf.expand_dims(tf.stack([zeros, zeros, zeros], -1), -2)
    last_col = tf.expand_dims(tf.stack([zeros, zeros, zeros, ones], -1), -1)
    return tf.concat([tf.concat([rot, last_row], -2), last_col], -1)
About computing the forward kinematic chain, you can do that with tf.scan. For example, assuming the initial shape (?, 10):
# Make rotation matrices
rots = make_rotation(...)
rots_t = tf.transpose(rots, (1, 0, 2, 3))
out = tf.scan(tf.matmul, rots_t)[-1]
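A short usage sketch under the question's shapes, assuming for illustration that every joint rotates about 'z' (the DH translation terms are left out here):
q = tf.random.uniform((100, 10))           # (batch, 10) joint angles
rots = make_rotation(q, 'z')               # (batch, 10, 4, 4) per-joint transforms
rots_t = tf.transpose(rots, (1, 0, 2, 3))  # (10, batch, 4, 4) for scanning over joints
m_end = tf.scan(tf.matmul, rots_t)[-1]     # (batch, 4, 4) transform at the end effector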