Given a connected graph and a list of N assigned vertices, I want to find an efficient way to create N subgraphs, each containing one of the assigned vertices.
To achieve that, we can prune edges. However, we should prune as little total edge weight as possible.
For example, let's start with the following graph. We want to obtain three subgraphs, each containing one of the three red vertices.
The result should look like the following:
Right now, I'm using a heuristic, but it does not work well in some edge cases and has O(n^2) complexity in the number of vertices. The idea is to compute the shortest path between two of the assigned vertices, remove the lightest edge on that path, and repeat until the two vertices are disconnected.
Here is my code:
import pandas as pd
import igraph as ig

ucg_df = pd.DataFrame(
    [
        [0, 1, 100],
        [0, 2, 110],
        [2, 3, 70],
        [3, 4, 100],
        [3, 1, 90],
        [0, 3, 85],
        [5, 7, 90],
        [0, 8, 100],
        [3, 6, 10],
        [2, 5, 60],
    ],
    columns=["nodeA", "nodeB", "weight"],
)
ucg_graph = ig.Graph.DataFrame(ucg_df, directed=False)
ig.plot(
    ucg_graph,
    target='stack1.pdf',
    edge_label=ucg_graph.es["weight"],
    vertex_color=['red'] * 3 + ['green'] * (ucg_graph.vcount() - 3),
    vertex_label=ucg_graph.vs.indices,
)
def generate_subgraphs_from_vertexes(g, vertex_list):
    for i, vertex in enumerate(vertex_list):
        for j in range(i + 1, len(vertex_list)):
            while True:
                path = g.get_shortest_paths(vertex_list[i], vertex_list[j], mode='ALL',
                                            output='epath', weights='weight')[0]
                if len(path) == 0:
                    break
                edge_2_drop = min(g.es[path], key=lambda x: x['weight'])
                edge_2_drop.delete()
    return g
graph = generate_subgraphs_from_vertexes(ucg_graph, ucg_graph.vs[0, 1, 2])
ig.plot(
    graph,
    target='stack2.pdf',
    edge_label=graph.es["weight"],
    vertex_color=['red'] * 3 + ['green'] * (graph.vcount() - 3),
    vertex_label=graph.vs.indices,
)
What kind of algorithm could I use to better solve this problem?
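For the pairwise disconnection step, note that igraph can compute a minimum weighted s-t cut directly, which removes the provably lightest edge set separating one pair of vertices. Below is a minimal sketch (an added illustration, not the original heuristic); it works on a fresh copy of the graph and uses the 'weight' attribute as capacity:

# disconnect vertices 0 and 1 with the lightest possible set of edges;
# pairwise cuts are optimal per pair, but chaining them over three or more
# terminals is not guaranteed to be globally optimal
g2 = ig.Graph.DataFrame(ucg_df, directed=False)  # fresh copy of the graph
cut = g2.mincut(source=0, target=1, capacity="weight")
print(cut.value)          # total weight removed by this cut
g2.delete_edges(cut.cut)  # cut.cut holds the ids of the cut edges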
I am not familiar with igraph in Python, but below is my attempt in R. Hope you can get some hints from it.
I think your problem can be reformulated as an assignment problem, since the key part is assigning "red" vertices to associated "green" vertices to maximize the cost.
library(igraph)
library(lpSolve)

# red vertices
vred <- V(g)[V(g)$color == "red"]

# subgraph that contains vred
sg <- induced.subgraph(
  g,
  unique(unlist(ego(g, 1, vred)))
)

# green vertices in sg
vgreen <- V(sg)[V(sg)$color == "green"]

# cost matrix
cost.mat <- get.adjacency(sg, attr = "label", sparse = FALSE)[vred, ][, vgreen]
p <- lp.assign(cost.mat, "max")
idx <- which(p$solution > 0, arr.ind = TRUE)

# edge list for max assignment
el1 <- cbind(names(vred[idx[, 1]]), names(vgreen[idx[, 2]]))

# all edges associated with vred
el <- get.edgelist(g)
el2 <- el[rowSums(matrix(el %in% names(vred), ncol = 2)) > 0, ]

# remove edges that are not obtained for the max assignment
rmEls <- do.call(
  paste,
  c(
    data.frame(
      el2[!apply(el2, 1, function(x) toString(sort(x))) %in% apply(el1, 1, function(x) toString(sort(x))), ]
    ),
    sep = "|"
  )
)
out <- g %>%
  delete.edges(rmEls)
When running plot(out, layout = layout_nicely(g)), you will see:
Data
df <- data.frame(
  from = c(0, 0, 2, 3, 3, 0, 5, 0, 3, 2),
  to = c(1, 2, 3, 4, 1, 3, 7, 8, 6, 5),
  weight = c(100, 110, 70, 100, 90, 85, 90, 100, 10, 60)
)

# original graph object
g <- df %>%
  graph_from_data_frame(directed = FALSE) %>%
  set_edge_attr(name = "label", value = df$weight) %>%
  set_vertex_attr(name = "color", value = ifelse(names(V(.)) %in% c("0", "1", "2"), "red", "green"))
Inspired by Find rows of matrix which contain rows of another matrix, I found the following, assuming the graph is undirected:
mtch <- matrix(match(el2, el1), ncol = 2)
idx <- which(abs(mtch[,1] - mtch[,2]) == nrow(el1))
rmEls <- get.edge.ids(g, t(el2[-idx,]))
rmEls
## [1] 1 2 3 6
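For readers who want the assignment idea in Python rather than R, here is a rough sketch (a translation of the matching step only, not part of the original answer) using scipy.optimize.linear_sum_assignment in place of lpSolve, with the red vertices 0, 1, 2 from the question:

import numpy as np
from scipy.optimize import linear_sum_assignment

edges = np.array([
    [0, 1, 100], [0, 2, 110], [2, 3, 70], [3, 4, 100], [3, 1, 90],
    [0, 3, 85], [5, 7, 90], [0, 8, 100], [3, 6, 10], [2, 5, 60],
])
red = [0, 1, 2]
green = sorted(set(edges[:, :2].ravel().tolist()) - set(red))

# cost[i, j] = weight of the edge between red[i] and green[j], 0 if absent
cost = np.zeros((len(red), len(green)))
for a, b, w in edges:
    if a in red and b in green:
        cost[red.index(a), green.index(b)] = w
    elif b in red and a in green:
        cost[red.index(b), green.index(a)] = w

# maximum-weight assignment of red vertices to distinct green neighbours;
# the matched edges are the ones to keep
rows, cols = linear_sum_assignment(cost, maximize=True)
for r, c in zip(rows, cols):
    print(f"keep edge {red[r]} - {green[c]} (weight {cost[r, c]:.0f})")

The deletion of the remaining red-incident edges would still have to be done on the graph side, as in the R code above.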
I'm able to calculate a rolling correlation coefficient for a 1D-array (data against [0, 1, 2, 3, 4]) using a loop.
I'm looking for a smarter solution using numpy (not pandas).
Here is my current code:
import numpy as np
data = np.array([10,5,8,9,15,22,26,11,15,16,18,7,4,8,-2,-3,-4,-6,-2,0,10,0,5,8])
x = np.zeros_like(data).astype('float32')
length = 5
for i in range(length, data.shape[0]):
    x[i] = np.corrcoef(data[i - length:i], np.arange(length))[0, 1]
print(x)
x gives:
[ 0. 0. 0. 0. 0. 0.607 0.959 0.98 0.328 -0.287
-0.61 -0.314 -0.18 -0.8 -0.782 -0.847 -0.811 -0.825 -0.869 -0.283
0.566 0.863 0.643 0.454]
Any solution without the loop please?
Use a numpy.lib.stride_tricks.sliding_window_view (available in numpy v1.20.0+)
swindow = np.lib.stride_tricks.sliding_window_view(data, (length,))
which gives a view on the data array that looks like so:
array([[10, 5, 8, 9, 15],
[ 5, 8, 9, 15, 22],
[ 8, 9, 15, 22, 26],
[ 9, 15, 22, 26, 11],
[15, 22, 26, 11, 15],
[22, 26, 11, 15, 16],
[26, 11, 15, 16, 18],
[11, 15, 16, 18, 7],
[15, 16, 18, 7, 4],
[16, 18, 7, 4, 8],
[18, 7, 4, 8, -2],
[ 7, 4, 8, -2, -3],
[ 4, 8, -2, -3, -4],
[ 8, -2, -3, -4, -6],
[-2, -3, -4, -6, -2],
[-3, -4, -6, -2, 0],
[-4, -6, -2, 0, 10],
[-6, -2, 0, 10, 0],
[-2, 0, 10, 0, 5],
[ 0, 10, 0, 5, 8]])
Now, we want to apply the correlation coefficient calculation to each row of this array. Unfortunately, np.corrcoef doesn't take an axis argument; it applies the calculation to the entire matrix and doesn't provide a way to do so for each row/column.
However, the correlation coefficient of two vectors x and y is quite simple to compute:

r = sum((x - mean(x)) * (y - mean(y))) / sqrt(sum((x - mean(x))**2) * sum((y - mean(y))**2))

Applying that here:
def vec_corrcoef(X, y, axis=1):
    Xm = np.mean(X, axis=axis, keepdims=True)
    ym = np.mean(y)
    n = np.sum((X - Xm) * (y - ym), axis=axis)
    d = np.sqrt(np.sum((X - Xm)**2, axis=axis) * np.sum((y - ym)**2))
    return n / d
Now, call this function with our array and arange:
cc = vec_corrcoef(swindow, np.arange(length))
which gives the desired result:
array([ 0.60697698, 0.95894955, 0.98 , 0.3279521 , -0.28709766,
-0.61035663, -0.31390158, -0.17995394, -0.80041656, -0.78192905,
-0.84702587, -0.81091772, -0.82464375, -0.86892667, -0.28347335,
0.56568542, 0.86304424, 0.64326752, 0.45374261, 0.38135638])
To get your x, just set the appropriate indices of a zeros array of the correct size.
Note: I think your x should contain nonzero values starting at index 4 (because that's where the sliding window first becomes full) instead of starting at index 5.
x = np.zeros(data.shape)
x[-len(cc):] = cc
If you are sure that your values should start at index 5, then you can do:
x = np.zeros(data.shape)
x[length:] = cc[:-1] # Ignore the last value in cc
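As a quick sanity check (an added snippet, reusing data, length and cc from above), the vectorized result matches the original loop:

# recompute x with the loop from the question and compare
x_loop = np.zeros_like(data).astype('float32')
for i in range(length, data.shape[0]):
    x_loop[i] = np.corrcoef(data[i - length:i], np.arange(length))[0, 1]
print(np.allclose(x_loop[length:], cc[:-1], atol=1e-5))  # True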
Comparing the runtimes of your original approach with those suggested in the answers here:
f_OP_loopy is your approach, which implements a sliding window using a loop
f_PH_numpy is my approach, which uses the sliding_window_view and the vectorized function for row-wise calculation of the vector correlation coefficient
f_RA_numpy is Rontogiannis's approach, which tiles the arange, calculates the correlation coefficient for the entire matrices, and only selects the first len(data) - length rows of the last column
f_RA_recur is Rontogiannis's recursive approach, but I didn't time this because it misses out on the last correlation coefficient.
Unsurprisingly, the numpy-only solution is faster than the loopy approach.
My numpy solution, which computes the row-wise correlation coefficient, is faster than that shown by Rontogiannis below, because the extra work involved in tiling the vector input and calculating the correlation of the entire matrix, only to discard the unwanted elements, is avoided by my approach.
As the input data size increases, this "extra work" in Rontogiannis's approach increases so much that its runtime is worse even than the loopy approach! I am unsure if this extra time is in the np.corrcoef calculation or in the np.tile operation.
Note: This plot was obtained on my 2.2GHz i7 Macbook Air with 8GB RAM, Python 3.10.7 and numpy 1.23.3. Similar results were obtained on Google Colab
If you're interested in the timing code, here it is:
import timeit
import numpy as np
from matplotlib import pyplot as plt


def time_funcs(funcs, sizes, arg_gen, N=20):
    times = np.zeros((len(sizes), len(funcs)))
    gdict = globals().copy()
    for i, s in enumerate(sizes):
        args = arg_gen(s)
        print(args)
        for j, f in enumerate(funcs):
            gdict.update(locals())
            try:
                times[i, j] = timeit.timeit("f(*args)", globals=gdict, number=N) / N
                print(f"{i}/{len(sizes)}, {j}/{len(funcs)}, {times[i, j]}")
            except ValueError:
                print(f"ERROR in {f}, with args=", *args)
    return times


def plot_times(times, funcs):
    fig, ax = plt.subplots()
    for j, f in enumerate(funcs):
        ax.plot(sizes, times[:, j], label=f.__name__)
    ax.set_xlabel("Array size")
    ax.set_ylabel("Time per function call (s)")
    ax.set_xscale("log")
    ax.set_yscale("log")
    ax.legend()
    ax.grid()
    fig.tight_layout()
    return fig, ax


#%%
def arg_gen(n):
    return [np.random.randint(-100, 100, (n,)), 5]


#%%
def f_OP_loopy(data, length):
    x = np.zeros_like(data).astype('float32')
    for i in range(length - 1, data.shape[0]):
        x[i] = np.corrcoef(data[i - length + 1:i + 1], np.arange(length))[0, 1]
    return x


def f_PH_numpy(data, length):
    swindow = np.lib.stride_tricks.sliding_window_view(data, (length,))
    cc = vec_corrcoef(swindow, np.arange(length))
    x = np.zeros(data.shape)
    x[-len(cc):] = cc
    return x


def f_RA_recur(data, length):
    return np.concatenate((
        np.zeros([length, ]),
        rolling_correlation_recurse(data, 0, length)
    ))


def f_RA_numpy(data, length):
    n = len(data)
    cc = np.corrcoef(
        np.lib.stride_tricks.sliding_window_view(data, length),
        np.tile(np.arange(length), (n - length + 1, 1))
    )[:n - length + 1, -1]
    x = np.zeros(data.shape)
    x[-len(cc):] = cc
    return x


#%%
def rolling_correlation_recurse(data, i, length):
    assert i + length < data.size
    left = np.array([np.corrcoef(data[i:i + length], np.arange(length))[0, 1]])
    if i + length + 1 == data.size:
        return left
    right = rolling_correlation_recurse(data, i + 1, length)
    return np.concatenate((left, right))


def vec_corrcoef(X, y, axis=1):
    Xm = np.mean(X, axis=axis, keepdims=True)
    ym = np.mean(y)
    n = np.sum((X - Xm) * (y - ym), axis=axis)
    d = np.sqrt(np.sum((X - Xm)**2, axis=axis) * np.sum((y - ym)**2))
    return n / d


#%%
if __name__ == "__main__":
    #%% Set up sim
    sizes = [5, 10, 50, 100, 500, 1000, 5000, 10_000]  #, 50_000, 100_000]
    funcs = [f_OP_loopy,  #f_RA_recur,
             f_PH_numpy, f_RA_numpy]

    #%% Run timing
    time_fcalls = np.zeros((len(sizes), len(funcs))) * np.nan
    time_fcalls = time_funcs(funcs, sizes, arg_gen)
    fig, ax = plot_times(time_fcalls, funcs)
    ax.set_xlabel("Input size")
    plt.show()
    input("Enter x to exit")
Ask and you shall receive. Here is a solution that uses recursion:
import numpy as np

data = np.array([10,5,8,9,15,22,26,11,15,16,18,7,4,8,-2,-3,-4,-6,-2,0,10,0,5,8])
length = 5

def rolling_correlation_recurse(data, i, length):
    assert i + length < data.size
    left = np.array([np.corrcoef(data[i:i + length], np.arange(length))[0, 1]])
    if i + length + 1 == data.size:
        return left
    right = rolling_correlation_recurse(data, i + 1, length)
    return np.concatenate((left, right))

def rolling_correlation(data, length):
    return np.concatenate((
        np.zeros([length,]),
        rolling_correlation_recurse(data, 0, length)
    ))

print(rolling_correlation(data, length))
Edit: here is a numpy solution too:
n = len(data)
print(np.corrcoef(
    np.lib.stride_tricks.sliding_window_view(data, length),
    np.tile(np.arange(length), (n - length + 1, 1))
)[:n - length + 1, -1])
I am working with graph data defined as a 2D array of edges, i.e.
[[1, 0],
[2, 5],
[1, 5],
[3, 4],
[1, 4]]
This defines a directed graph: every element is a node id, there are no self-loops, and no value in one column also appears in the other column.
Now to the question:
I need to select all edges where both nodes occur more than once in the list.
How do I do that in a quick way? Currently I am iterating over each edge and looking at the nodes individually. It feels like a really bad way to do this.
Current dumb/slow solution
edges = []
for edge in graph:
    src, dst = edge[0], edge[1]
    # check how often src and dst occur anywhere in the edge list
    src_fan = np.count_nonzero(graph == src, axis=1).sum()
    dst_fan = np.count_nonzero(graph == dst, axis=1).sum()
    if src_fan >= 2 and dst_fan >= 2:
        # both nodes occur more than once: keep the edge
        edges.append(edge)
I am also not entirely sure this way is even correct...
# Obtain the unique nodes and their counts (a is the edge array from the question)
from_nodes, from_counts = np.unique(a[:, 0], return_counts=True)
to_nodes, to_counts = np.unique(a[:, 1], return_counts=True)
# Obtain the duplicated nodes
dup_from_nodes = from_nodes[from_counts > 1]
dup_to_nodes = to_nodes[to_counts > 1]
# Obtain the edges whose nodes are duplicated
a[np.in1d(a[:, 0], dup_from_nodes) & np.in1d(a[:, 1], dup_to_nodes)]
Out[297]: array([[1, 4]])
A solution using networkx:
import networkx as nx
edges = [[1, 0],
[2, 5],
[1, 5],
[3, 4],
[1, 4]]
G = nx.DiGraph()
G.add_edges_from(edges)
print([node for node in G.nodes if G.degree[node]>1])
Edit:
print([edge for edge in G.edges if (G.degree[edge[0]]>1) & (G.degree[edge[1]]>1)])
import numpy as np

graph = np.array([[1, 0],
                  [2, 5],
                  [1, 5],
                  [3, 4],
                  [1, 4]])

# get a 1d array of all nodes
array = graph.reshape(-1)
# count the occurrences of each element
occurrences = np.sum(np.equal(array, array[:, np.newaxis]), axis=0)
# reshape back to graph shape
occurrences = occurrences.reshape(graph.shape)
# check if both nodes of each edge occur more than once
mask = np.all(occurrences > 1, axis=1)
# select the masked elements
edges = graph[mask]
Based on my test this method is almost 2 times faster than the accepted answer.
Test:
import timeit
import numpy as np

graph = np.array([[1, 0],
                  [2, 5],
                  [1, 5],
                  [3, 4],
                  [1, 4]])

# accepted answer
def method1(a):
    # Obtain the unique nodes and their counts
    from_nodes, from_counts = np.unique(a[:, 0], return_counts=True)
    to_nodes, to_counts = np.unique(a[:, 1], return_counts=True)
    # Obtain the duplicated nodes
    dup_from_nodes = from_nodes[from_counts > 1]
    dup_to_nodes = to_nodes[to_counts > 1]
    # Obtain the edges whose nodes are duplicated
    return a[np.in1d(a[:, 0], dup_from_nodes) & np.in1d(a[:, 1], dup_to_nodes)]

# this answer
def method2(graph):
    # get a 1d array of all nodes
    array = graph.reshape(-1)
    # count the occurrences of each element, then reshape back to graph shape
    occurrences = np.sum(np.equal(array, array[:, np.newaxis]), axis=0).reshape(graph.shape)
    # check if both nodes of each edge occur more than once
    mask = np.all(occurrences > 1, axis=1)
    # select the masked elements
    edges = graph[mask]
    return edges

print('method1 (accepted answer): ', timeit.timeit(lambda: method1(graph), number=10000))
print('method2 (this answer): ', timeit.timeit(lambda: method2(graph), number=10000))
Output:
method1 (accepted answer): 0.20238440000000013
method2 (this answer): 0.06534320000000005
The sample data is as follows:
unique_list = ['home0', 'page_a0', 'page_b0', 'page_a1', 'page_b1',
'page_c1', 'page_b2', 'page_a2', 'page_c2', 'page_c3']
sources = [0, 0, 1, 2, 2, 3, 3, 4, 4, 7, 6]
targets = [3, 4, 4, 3, 5, 6, 8, 7, 8, 9, 9]
values = [2, 1, 1, 1, 1, 2, 1, 1, 1, 1, 2]
Using the sample code from the documentation:
fig = go.Figure(data=[go.Sankey(
    node = dict(
        pad = 15,
        thickness = 20,
        line = dict(color = "black", width = 0.5),
        label = unique_list,
        color = "blue"
    ),
    link = dict(
        source = sources,
        target = targets,
        value = values
    ))])
fig.show()
This outputs the following Sankey diagram:
However, I would like to get all the values which end in the same number in the same vertical column, just like how the leftmost column has all of its nodes ending with a 0. I see in the docs that it is possible to move the node positions; however, I was wondering if there was a cleaner way to do it other than manually inputting x and y values. Any help appreciated.
In go.Sankey() set arrangement='snap' and adjust x and y positions in x=<list> and y=<list>. The following setup will place your nodes as requested.
Plot:
Please note that the y-values are not explicitly set in this example. As soon as there is more than one node for a common x-value, the y-values will be adjusted automatically so that all such nodes are displayed in the same vertical column. If you do want to set all positions explicitly, just set arrangement='fixed'.
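For illustration, here is a minimal self-contained sketch (with made-up labels, positions, and links, not taken from the answer above) that pins every node explicitly with arrangement='fixed':

import plotly.graph_objects as go

# hypothetical three-node example; x and y are fractions of the plot area in (0, 1)
fig = go.Figure(data=[go.Sankey(
    arrangement='fixed',
    node=dict(
        label=['a0', 'b0', 'c1'],
        x=[0.1, 0.1, 0.9],  # 'a0' and 'b0' share the left column
        y=[0.2, 0.8, 0.5]
    ),
    link=dict(source=[0, 1], target=[2, 2], value=[1, 1])
)])
fig.show()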
Edit:
I've added a custom function nodify() that assigns identical x-positions to label names that have a common ending, such as '0' in ['home0', 'page_a0', 'page_b0']. Now, if you as an example change page_c1 to page_c2, you'll get this:
Complete code:
import plotly.graph_objects as go

unique_list = ['home0', 'page_a0', 'page_b0', 'page_a1', 'page_b1',
               'page_c1', 'page_b2', 'page_a2', 'page_c2', 'page_c3']
sources = [0, 0, 1, 2, 2, 3, 3, 4, 4, 7, 6]
targets = [3, 4, 4, 3, 5, 6, 8, 7, 8, 9, 9]
values = [2, 1, 1, 1, 1, 2, 1, 1, 1, 1, 2]

def nodify(node_names):
    # unique name endings
    ends = sorted(list(set([e[-1] for e in node_names])))
    # intervals
    steps = 1 / len(ends)
    # x-values for each unique name ending, for input as node position
    nodes_x = {}
    xVal = 0
    for e in ends:
        nodes_x[str(e)] = xVal
        xVal += steps
    # x and y values in list form
    x_values = [nodes_x[n[-1]] for n in node_names]
    y_values = [0.1] * len(x_values)
    return x_values, y_values

nodified = nodify(node_names=unique_list)

# plotly setup
fig = go.Figure(data=[go.Sankey(
    arrangement='snap',
    node=dict(
        pad=15,
        thickness=20,
        line=dict(color="black", width=0.5),
        label=unique_list,
        color="blue",
        x=nodified[0],
        y=nodified[1]
    ),
    link=dict(
        source=sources,
        target=targets,
        value=values
    ))])
fig.show()
I'm using Python and Graphviz to draw a cluster graph consisting of nodes.
I want to assign a different color to each node, dependent on an attribute, e.g. its x-coordinate.
Here's how I produce the graph:
def add_nodes(graph, nodes):
    for n in nodes:
        if isinstance(n, tuple):
            graph.node(n[0], **n[1])
        else:
            graph.node(n)
    return graph
A = [[517, 1, [409], 10, 6],
     [534, 1, [584], 10, 12],
     [614, 1, [247], 11, 5],
     [679, 1, [228], 13, 7],
     [778, 1, [13], 14, 14]]

nodesgv = []
for node in A:
    nodesgv.append((str(node[0]), {'label': str(node[0]), 'color': ???, 'style': 'filled'}))

graph = functools.partial(gv.Graph, format='svg', engine='neato')
add_nodes(graph(), nodesgv).render('img/test')
And now I want to assign a color to each node based on the ordering of the first value of each node.
More specifically what I want is:
a red node (517)
a yellow node (534)
a green node (614)
a blue node (679)
and a purple node (778)
I know how to assign colors to the graph, but what I'm looking for is something similar to the c=x argument in matplotlib:
plt.scatter(x, y, c=x, s=node_sizes)
The problem is that I don't know the number of nodes (clusters) beforehand, so for example if I've got 7 nodes, I still want a graph with 7 nodes that start from a red one and end with a purple one.
So is there any attribute in Graphviz that can do this?
Or can anyone tell me how the colormap in matplotlib works?
Sorry for the lack of clarity. T^T
Oh, I figured out a way to get what I want.
Just for the record, and for anyone else who may have the same problem: you can rescale a colormap and assign the corresponding color (by index) to each node.
import functools
import colorsys

import graphviz as gv
import matplotlib as mpl
from matplotlib import cm

def add_nodes(graph, nodes):
    for n in nodes:
        if isinstance(n, tuple):
            graph.node(n[0], **n[1])
        else:
            graph.node(n)
    return graph

A = [[517, 1, [409], 10, 6],
     [534, 1, [584], 10, 12],
     [614, 1, [247], 11, 5],
     [679, 1, [228], 13, 7],
     [778, 1, [13], 14, 14]]

nodesgv = []
Arange = [a[0] for a in A]
# normalize the first value of each node onto [0, 1] and map it through a colormap
norm = mpl.colors.Normalize(vmin=min(Arange), vmax=max(Arange))
cmap = cm.jet
for index, i in enumerate(A):
    x = i[0]
    m = cm.ScalarMappable(norm=norm, cmap=cmap)
    mm = m.to_rgba(x)
    # Graphviz accepts colors given as "H, S, V" strings
    M = colorsys.rgb_to_hsv(mm[0], mm[1], mm[2])
    nodesgv.append((str(i[0]), {'label': str(i[1]), 'color': "%f, %f, %f" % (M[0], M[1], M[2]), 'style': 'filled'}))

graph = functools.partial(gv.Graph, format='svg', engine='neato')
add_nodes(graph(), nodesgv).render('img/test')
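As a side note (an assumption on my part, not from the original answer): Graphviz also accepts hex color strings, so the colorsys conversion can be skipped by formatting the RGBA value with matplotlib.colors.to_hex:

from matplotlib.colors import to_hex

# e.g. inside the loop above; a string like '#ff0000' is a valid Graphviz color
hex_color = to_hex(m.to_rgba(x))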
I am looking for a Python library which supports mesh queries. For now I have looked at openmesh, but I am a bit afraid it would be overkill for my small master's thesis project. The features I need are:
iterate over the vertices around a given vertex
iterate over all edges, faces, vertices
easily associate function values with each vertex, face, edge (I picture these geometric entities as indexed)
And if I am really successful, I might also need to:
change the topology of the mesh, e.g. add or remove a vertex
Is it possible to do this with numpy so I can keep my dependency list small? For now I plan to generate the initial mesh with distmesh (pydistmesh). Does it have parts which could be useful for my mesh queries?
These kinds of queries become quite easy and efficient with the improved face-based data structure which is used by CGAL. Here I have implemented code to walk around one specific vertex:
# The demonstration of the improved face based data structure
from numpy import array

triangles = array([[ 5,  7, 10],
                   [ 7,  5,  6],
                   [ 4,  0,  3],
                   [ 0,  4,  6],
                   [ 4,  7,  6],
                   [ 4,  9, 10],
                   [ 7,  4, 10],
                   [ 0,  2,  1],
                   [ 2,  0,  6],
                   [ 2,  5,  1],
                   [ 5,  2,  6],
                   [ 8,  4,  3],
                   [ 4, 11,  9],
                   [ 8, 11,  4],
                   [ 9, 11,  3],
                   [11,  8,  3]], dtype=int)

points = array([[ 0.95448092,  0.45655774],
                [ 0.86370317,  0.02141752],
                [ 0.53821089,  0.16915935],
                [ 0.97218064,  0.72769053],
                [ 0.55030382,  0.70878147],
                [ 0.34692982,  0.08765148],
                [ 0.46289581,  0.29827649],
                [ 0.21159925,  0.39472549],
                [ 0.61679844,  0.79488884],
                [ 0.4272861 ,  0.93375762],
                [ 0.12451604,  0.54267654],
                [ 0.45974728,  0.91139648]])

import pylab as plt

fig = plt.figure()
plt.triplot(points[:, 0], points[:, 1], triangles)
for i, tri in enumerate(triangles):
    v1, v2, v3 = points[tri]
    vavg = (v1 + v2 + v3) / 3
    plt.text(vavg[0], vavg[1], i)
#plt.show()

## constructing the improved face based data structure

def edge_search(v1, v2, skip):
    """
    Which triangle has an edge with vertices v1 and v2 and isn't triangle <skip>?
    """
    neigh = -1
    for i, tri in enumerate(triangles):
        if (v1 in tri) and (v2 in tri):
            if i == skip:
                continue
            else:
                neigh = i
                break
    return neigh

def triangle_search(vertex):
    """
    For the given vertex index, return any triangle from its neighborhood.
    """
    for i, tri in enumerate(triangles):
        if vertex in tri:
            return i

# for each triangle, the indices of its three edge-neighbors (-1 on the boundary)
neighborhood = []
for i, tri in enumerate(triangles):
    v1, v2, v3 = tri
    t3 = edge_search(v1, v2, i)
    t1 = edge_search(v2, v3, i)
    t2 = edge_search(v3, v1, i)
    neighborhood.append([t1, t2, t3])
neighborhood = array(neighborhood, dtype=int)

# for each vertex, one incident triangle
faces = []
for vi, _ in enumerate(points):
    faces.append(triangle_search(vi))

## Now walking over the first ring can be implemented

def triangle_ring(vertex):
    tri_start = faces[vertex]
    tri = tri_start
    ## with the assumption that the vertex is not on the boundary
    for i in range(10):
        yield tri
        boolindx = triangles[tri] == vertex
        # masks selecting the current, clockwise and counter-clockwise vertex
        w = boolindx[[0, 1, 2]]
        cw = boolindx[[2, 0, 1]]
        ccw = boolindx[[1, 2, 0]]
        ct = neighborhood[tri][cw][0]
        if ct == tri_start:
            break
        else:
            tri = ct

for i in triangle_ring(6):
    print(i)

## Using it for drawing lines on the plot
vertex = 6
ring_points = []
for i in triangle_ring(vertex):
    vi = triangles[i]
    cw = (vi == vertex)[[2, 0, 1]]
    print("v={}".format(vi[cw][0]))
    ring_points.append(vi[cw][0])

data = array([points[i] for i in ring_points])
plt.plot(data[:, 0], data[:, 1], "ro")
#plt.savefig("topology.png")
plt.show()
input("Press Enter to continue...")
plt.close("all")