On the one hand, I have a grid defaultdict that stores, for each node on a grid, its neighboring nodes and their weights (all 1 in the example below).
# node: [(weight, nbr_node), ...]
grid = { 0: [(1, -5), (1, -4), (1, -3), (1, -1), (1, 1), (1, 3), (1, 4), (1, 5)],
         1: [(1, -4), (1, -3), (1, -2), (1, 0), (1, 2), (1, 4), (1, 5), (1, 6)],
         2: [(1, -3), (1, -2), (1, -1), (1, 1), (1, 3), (1, 5), (1, 6), (1, 7)],
         3: [(1, -2), (1, -1), (1, 0), (1, 2), (1, 4), (1, 6), (1, 7), (1, 8)],
         ...
       }
On the other, I have a Dijkstra function that computes the shortest path between two nodes on this grid. The algorithm uses the heapq module and works perfectly fine.
from heapq import heappush, heappop

def Dijkstra(s, e, grid):  # start point, end point, grid
    visited = set()
    distances = {s: 0}
    p = {}
    queue = [(0, s)]
    while queue != []:
        weight, node = heappop(queue)
        if node in visited:
            continue
        visited.add(node)
        for n_weight, n_node in grid[node]:
            if n_node in visited:
                continue
            total = weight + n_weight
            if n_node not in distances or distances[n_node] > total:
                distances[n_node] = total
                heappush(queue, (total, n_node))
                p[n_node] = node
Problem: when calling the Dijkstra function multiple times, heappush is... adding new keys to the grid dictionary for no apparent reason!
Here is an MCVE:
import random
from collections import defaultdict

# Creating the dictionary
grid = defaultdict(list)
N = 4
kernel = (-N-1, -N, -N+1, -1, 1, N-1, N, N+1)
for i in range(N*N):
    for n in kernel:
        if i > N and i < (N*N) - 1 - N and (i%N) > 0 and (i%N) < N - 1:
            grid[i].append((1, i+n))

# Calling Dijkstra multiple times
keys = [*range(N*N)]
while keys:
    k1, k2 = random.sample(keys, 2)
    Dijkstra(k1, k2, grid)
    keys.remove(k1)
    keys.remove(k2)
The original grid defaultdict:
dict_keys([5, 6, 9, 10])
...and after calling the Dijkstra function multiple times:
dict_keys([5, 6, 9, 10, 4, 0, 1, 2, 8, 3, 7, 11, 12, 13, 14, 15])
When calling the Dijkstra function multiple times without heappush (just commenting out the heappush at the end):
dict_keys([5, 6, 9, 10])
Question:
How can I avoid this strange behavior?
Please note that I'm using Python 2.7 and can't use numpy.
I could reproduce and fix it. The problem is in the way you are building grid: it contains values that are not in the keys (from -4 to 0 and from 16 to 20 in the example). So you push those nonexistent nodes onto the heap, and later pop them.
You then end up executing for n_weight, n_node in grid[node]: where node does not (yet) exist in grid. As grid is a defaultdict, the missing node is automatically inserted with an empty list as its value.
The fix is trivial (at least for the example data): it is enough to ensure that all nodes added as values in grid also exist as keys, with a modulo:
for i in range(N*N):
    for n in kernel:
        grid[i].append((1, (i+n + N + 1)%(N*N)))
But even for real data it should not be very hard to ensure that all nodes appearing in grid values also exist as keys...
BTW, if grid had been a plain dict, the error would have surfaced immediately as a KeyError on grid[node].
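To see the mechanism in isolation, here is a minimal sketch (separate from the question's code) of how a defaultdict silently grows on a plain lookup, while a plain dict raises:

```python
from collections import defaultdict

grid = defaultdict(list)
grid[0].append((1, 5))

# Merely *reading* a missing key inserts it with an empty list as value
_ = grid[99]
print(sorted(grid.keys()))  # [0, 99]

# A plain dict raises instead of silently growing
plain = {0: [(1, 5)]}
try:
    plain[99]
except KeyError:
    print("KeyError on missing node")  # this branch runs
```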
I need help to write a function that:
takes as input a set of tuples
returns the number of tuples that have unique numbers (tuples sharing a number count together as one)
Example 1:
# input:
{(0, 1), (3, 4), (0, 0), (1, 1), (3, 3), (2, 2), (1, 0)}
# expected output: 3
The expected output is 3, because:
(3,4) and (3,3) contain common numbers, so this counts as 1
(0, 1), (0, 0), (1, 1), and (1, 0) all count as 1
(2, 2) counts as 1
So, 1+1+1 = 3
Example 2:
# input:
{(0, 1), (2, 1), (0, 0), (1, 1), (0, 3), (2, 0), (0, 2), (1, 0), (1, 3)}
# expected output: 1
The expected output is 1, because all tuples are related to other tuples by containing numbers in common.
This may not be the most efficient algorithm for it, but it is simple and looks nice.
from functools import reduce

def unisets(iterables):
    def merge(fsets, fs):
        if not fs: return fsets
        unis = set(filter(fs.intersection, fsets))
        return {reduce(type(fs).union, unis, fs), *fsets-unis}
    return reduce(merge, map(frozenset, iterables), set())

us = unisets({(0,1), (3,4), (0,0), (1,1), (3,3), (2,2), (1,0)})
print(us)      # {frozenset({3, 4}), frozenset({0, 1}), frozenset({2})}
print(len(us)) # 3
Features:
Input can be any kind of iterable, whose elements are iterables (any length, mixed types...)
Output is always a well-behaved set of frozensets.
This code works for me, but check it; there may be edge cases. How about this solution?
def count_groups(marked):
    temp = set(marked)
    save = set()
    for pair in temp:
        if pair[1] in save or pair[0] in save:
            marked.remove(pair)
        else:
            save.add(pair[1])
            save.add(pair[0])
    return len(marked)
I am filtering a subset of edges so I can iterate through them. In this case, I am excluding the "end edges", which are the final edges along a chain:
import networkx as nx

graph = nx.Graph()
graph.add_edges_from([(0, 1), (1, 2), (2, 3), (3, 4)])
end_nodes = [n for n in graph.nodes if nx.degree(graph, n) == 1]
end_edges = graph.edges(end_nodes)
print(f"end edges: {end_edges}")
for edge in graph.edges:
    if edge not in end_edges:
        print(f"edge {edge} is not an end edge.")
    else:
        print(f"edge {edge} is an end edge.")
However, when you run this code, you get the following output:
end edges: [(0, 1), (4, 3)]
edge (0, 1) is an end edge.
edge (1, 2) is an end edge.
edge (2, 3) is an end edge.
edge (3, 4) is an end edge.
Edges (1, 2) and (2, 3) are not in end_edges, yet the conditional edge not in end_edges evaluates to False for them (seeming to imply that they are in fact included, when they appear not to be).
What is going on, and how can I filter this properly?
Python version is 3.7, NetworkX is 2.4.
You can convert end_edges to a set of frozensets, keeping each edge unordered.
>>> graph = nx.Graph()
>>> graph.add_edges_from([(0, 1), (1, 2), (2, 3), (3, 4)])
>>> end_nodes = [n for n in graph.nodes if nx.degree(graph, n) == 1]
>>> end_edges = set(map(frozenset, graph.edges(end_nodes)))
>>> end_edges
{frozenset({3, 4}), frozenset({0, 1})}
>>> for edge in graph.edges:
... print(edge, frozenset(edge) in end_edges)
...
(0, 1) True
(1, 2) False
(2, 3) False
(3, 4) True
import networkx as nx

graph = nx.Graph()
graph.add_edges_from([(0, 1), (1, 2), (2, 3), (3, 4)])
end_nodes = [n for n in graph.nodes if nx.degree(graph, n) == 1]
# Normalize each edge's node order: the view may report (4, 3) while
# iteration yields (3, 4), and those tuples compare unequal
end_edges = [tuple(sorted(e)) for e in graph.edges(end_nodes)]
print(f"end edges: {end_edges}")
for edge in graph.edges:
    if tuple(sorted(edge)) not in end_edges:
        print(f"edge {edge} is not an end edge.")
    else:
        print(f"edge {edge} is an end edge.")
This should return what you ask for. Note that materializing the view into a plain list is not enough on its own: the view may report an end edge as (4, 3) while iteration yields (3, 4), so the node order has to be normalized as well.
I want to permute the nodes of a graph while keeping the network structure intact. I think this can be done with relabel_nodes, but how can I create a mapping that permutes the nodes? Currently I am rebuilding the graph with a shuffled set of nodes, which doesn't seem the most efficient way to go about things:
import networkx as nx
import random

n = 10
nodes = []
for i in range(0, n):
    nodes.append(i)
G = nx.gnp_random_graph(n, .5)
newG = nx.empty_graph(n)
shufflenodes = nodes
random.shuffle(shufflenodes)
for i in range(0, n-1):
    for j in range(i+1, n):
        if G.has_edge(i, j):
            newG.add_edge(shufflenodes[i], shufflenodes[j])
Anyone have any ideas how to speed this up?
What you can do is build a random mapping and use relabel_nodes.
Code:
# create a random mapping old label -> new label
node_mapping = dict(zip(G.nodes(), sorted(G.nodes(), key=lambda k: random.random())))
# build a new graph
G_new = nx.relabel_nodes(G, node_mapping)
Example:
>>> G.nodes()
NodeView((0, 1, 2, 3, 4))
>>> G.edges()
EdgeView([(0, 1), (0, 2), (0, 3), (1, 2), (3, 4)])
>>> node_mapping
{0: 2, 1: 0, 2: 3, 3: 4, 4: 1}
>>> G_new.nodes()
NodeView((2, 0, 3, 4, 1))
>>> G_new.edges()
EdgeView([(2, 0), (2, 3), (2, 4), (0, 3), (4, 1)])
I am working on my PhD and I am stuck at this step. The problem consists of implementing a finite element mesh merging algorithm, and maybe my solution is not the best, so if you think of a better one I am open to suggestions.
Regarding the problem: I have a finite element mesh, which is composed of QUAD elements (squares with 4 nodes) and TRIA elements (triangles with 3 nodes). These elements are connected on edges, an edge is defined by 2 nodes (edge=[node1,node2]). I have a list of edges that I do not want to merge, but for the rest of the edges I want the program to merge the elements with the common edge.
As a simple example: assume I have 4 elements A,B,C and D (QUAD elms, defined by 4 nodes). The mesh looks something like this
1--------------2----------------3
| | |
| A | B |
| | |
4--------------5----------------6
| | |
| C | D |
| | |
7--------------8----------------9
These elements are defined in a dictionary:
mesh_dict={'A': [1,2,5,4], 'B':[2,3,6,5], 'C':[4,5,8,7],'D':[5,6,9,8]}
I also have a dictionary for the node position with values for X,Y,Z coordinates. Let's say I want to merge on edge [4,5] and [5,6].
My solution is the following: I start iterating through the elements in mesh_dict, I find the neighbors of the element with a function get_elm_neighbors(element), I check the angle between elements with the function check_angle(elm1,elm2,angle) (I need the angle between elements to be below a certain threshold), then I check which edge should be merged with get_edge_not_bar(), and then I have a function which updates the nodes of the first element to complete the merge.
for e in mesh_dict:
    if e not in delete_keys:
        neighbors = get_elm_neighbors(e)
        for key, value in neighbors.items():
            check = check_angle(e, key, 0.5)
            if check:
                nodes = get_edge_not_bar(value)
                if nodes:
                    new_values = merge_elms(e, key, nodes)
                    d = {e: new_values}
                    mesh_dict_merged.update(d)
                    mesh_dict.update(d)
                    delete_keys.append(key)
My problem is that I need to delete the elements that remain after the merging. For example, in the above case I start on element A and merge on the edge [4,5]; after that the element A definition will be 'A': [1,2,8,7], and then I need to delete element C and proceed with the iteration.
My solution was to create a duplicate dictionary mesh_dict_merged in which I update the values for the elements and then delete the ones I don't want, while iterating through the original dict but taking the deleted elements (the delete_keys list) into account so as not to visit them again.
I guess my question is whether there is a way to iterate through the dictionary, updating values and deleting keys while doing so? Or is there a better solution to approach this problem, maybe iterating through nodes instead of elements?
EDIT: changed 'A': [1,2,4,5] to 'A': [1,2,5,4]
It can be done by updating the elements on the fly, but I would not recommend it, because your algorithm would then depend on the order in which you iterate the elements and may not be deterministic. This means that two meshes with identical geometry and topology could give different results depending on the labels you use.
The recommendation is:
1. Compute all dihedral angles in your mesh. Store those that are under your merge threshold.
2. Find the minimum angle and merge the two elements that share that edge.
3. Update the dihedral angles around the new element. This includes removing angles from elements that have merged, and optionally adding new angles for the new element.
4. Repeat from step 2 until every angle is over the threshold, or until the number of elements is the desired one.
The optional part in step 3 lets you tune the aggressiveness of the method. Sometimes it is better not to add new angles and instead repeat the complete process several times, to avoid focusing the reduction too much in one zone.
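The loop in steps 2-4 can be sketched with a heap plus lazy invalidation. The angle values, threshold, and element names below are toy data I made up; in a real mesh the angles would be computed from the node coordinates:

```python
import heapq

# Toy data: (dihedral angle, shared edge, the two elements sharing it)
angles = [
    (0.1, (4, 5), ('A', 'C')),
    (0.3, (5, 6), ('B', 'D')),
    (0.8, (2, 5), ('A', 'B')),  # above threshold, never merged
]
THRESHOLD = 0.5

heapq.heapify(angles)
alive = {'A', 'B', 'C', 'D'}   # elements not yet merged away
merged_pairs = []

while angles:
    angle, edge, (e1, e2) = heapq.heappop(angles)
    if angle > THRESHOLD:
        break                  # smallest remaining angle too large: done
    if e1 not in alive or e2 not in alive:
        continue               # stale entry: an element was already merged
    merged_pairs.append((e1, e2))
    alive.discard(e2)          # e2 is absorbed into e1

print(merged_pairs)  # [('A', 'C'), ('B', 'D')]
```

After each merge, real code would recompute the angles around the new element and push them onto the heap; the continue branch discards stale entries that reference merged-away elements.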
I thought about how to find adjacent elements by finding elements that share the same edge, but for that I had to represent edges as pairs of node indices in sorted order.
I could then work out touches (this should work for triangle elements too).
I introduce dont_merge as a set of ordered edge indices that cannot be merged away, then merge into merged_ordered_edges, and finally convert back to the mesh format of your original, with edges going around each element.
I have commented out a call to check_angle(name1, name2), which you would have to add in; as the comment indicates, I assume the check would succeed every time.
# -*- coding: utf-8 -*-
"""
Finite element mesh merge algorithm
https://stackoverflow.com/questions/59079755/how-to-merge-values-from-dictionary-on-different-keys-while-iterating-through-it
Created on Thu Nov 28 21:59:07 2019
#author: Paddy3118
"""
#%%
mesh_dict={'A': [1,2,5,4], 'B':[2,3,6,5], 'C':[4,5,8,7],'D':[5,6,9,8]}
#
ordered_edges = {k: {tuple(sorted(endpoints))
                     for endpoints in zip(v, v[1:] + v[:1])}
                 for k, v in mesh_dict.items()}
# = {'A': {(1, 2), (1, 4), (2, 5), (4, 5)},
# 'B': {(2, 3), (2, 5), (3, 6), (5, 6)},
# 'C': {(4, 5), (4, 7), (5, 8), (7, 8)},
# 'D': {(5, 6), (5, 8), (6, 9), (8, 9)}}
#%%
from collections import defaultdict

touching = defaultdict(list)
for name, edges in ordered_edges.items():
    for edge in edges:
        touching[edge].append(name)
touches = {edge: names
           for edge, names in touching.items()
           if len(names) > 1}
# = {(2, 5): ['A', 'B'],
# (4, 5): ['A', 'C'],
# (5, 6): ['B', 'D'],
# (5, 8): ['C', 'D']}
#%%
dont_merge = set([(4, 5), (23, 24)])

for edge, (name1, name2) in touches.items():
    if (edge not in dont_merge
            and ordered_edges[name1] and ordered_edges[name2]
            #and check_angle(name1, name2)
            ):
        # merge
        ordered_edges[name1].update(ordered_edges[name2])
        ordered_edges[name1].discard(edge)  # that edge is merged away
        ordered_edges[name2] = set()  # gone

merged_ordered_edges = {}
for name, edges in ordered_edges.items():
    if edges:
        merged_ordered_edges[name] = sorted(edges)
        edges.clear()  # Only one name of shared object used
# = {'A': [(1, 2), (1, 4), (2, 3), (3, 6), (4, 5), (5, 6)],
# 'C': [(4, 5), (4, 7), (5, 6), (6, 9), (7, 8), (8, 9)]}
## You would then need a routine to change the ordered edges format
## back to your initial mesh_dict format that goes around the periphery
## (Or would you)?
#%%
def ordered_to_periphery(edges):
    """
    In [124]: ordered_to_periphery([(1, 2), (1, 4), (2, 3), (3, 6), (4, 5), (5, 8), (6, 9), (8, 9)])
    Out[124]: [(1, 2), (2, 3), (3, 6), (6, 9), (9, 8), (8, 5), (5, 4), (4, 1)]
    """
    p = [edges.pop(0)] if edges else []
    last = p[-1][-1] if p else None
    while edges:
        for n, (i, j) in enumerate(edges):
            if i == last:
                p.append((i, j))
                last = j
                edges.pop(n)
                break
            elif j == last:
                p.append((j, i))
                last = i
                edges.pop(n)
                break
    return p
#%%
merged_mesh = {name: ordered_to_periphery(edges)
               for name, edges in merged_ordered_edges.items()}
# = {'A': [(1, 2), (2, 3), (3, 6), (6, 5), (5, 4), (4, 1)],
# 'C': [(4, 5), (5, 6), (6, 9), (9, 8), (8, 7), (7, 4)]}
P.S. Any chance of a mention if you use this?
This is probably a beginner question at best, but I have been playing with graphs and implementing BFS searches in various exercises. I can't quite figure out how to actually keep track of the weight of the edges I have visited in order to create a minimum complete spanning of the graph. My graph is in the format:
{0: [(1, 1), (2, 1)], 1: [(0, 1), (2, 1)], 2: [(1, 1), (0, 1)]}
Where the first vertex is 0, with adjacent vertices 1 and 2 and weights of 1 and 1 respectively. So in clearer terms, the keys in the graph dictionary represent vertices, and each tuple in a key's value represents a (vertex, weight) pair.
So what I have in my BFS function is:
def bfs(graph, start):
    """returns total weight needed to visit
    each vertex in the graph with the minimum
    overall weight possible"""
    if [] in graph.values():
        return "Not Possible"
    weight = 0
    visited, queue = set(), [start]
    while queue:
        vertex = queue.pop(0)
        if vertex not in visited:
            visited.add(vertex)
            for node in graph[vertex]:
                queue.append(node[0])
                weight += node[1]
    return weight
At the moment, with my original graph, this function returns 6 where it should be 2. I think this is because it iterates over each vertex and adds the adjacent weights, even though those vertices have already been visited.
This also wouldn't actually choose the minimum-weight path; it only keeps track of the weight of whatever path it has taken. How can I address this?
A longer example:
{0: [(1, 5), (2, 7), (3, 12)], 1: [(0, 5), (2, 9), (4, 7)], 2: [(0, 7), (1, 9), (3, 4), (4, 4), (5, 3)], 3: [(0, 12), (2, 4), (5, 7)], 4: [(1, 7), (2, 4), (5, 2), (6, 5)], 5: [(2, 3), (3, 7), (4, 2), (6, 2)], 6: [(4, 5), (5, 2)]}
This produces a weight of 134 where the correct answer should be 23
Is there some algorithm I am missing that can keep track of the weighted edges and choose the best path from this?
I am aware of Dijkstra's algorithm, but as far as I know it is suited to finding a path between a designated start and end, not a complete span of the graph?
Dijkstra's algorithm and BFS are useful for finding a minimum path between two vertices. However, if you want to find a minimum spanning tree, check out Kruskal's algorithm instead.
Here is the link:
https://en.wikipedia.org/wiki/Kruskal%27s_algorithm
Pseudocode:
KRUSKAL(G):
1 A = ∅
2 foreach v ∈ G.V:
3 MAKE-SET(v)
4 foreach (u, v) in G.E ordered by weight(u, v), increasing:
5 if FIND-SET(u) ≠ FIND-SET(v):
6 A = A ∪ {(u, v)}
7 UNION(u, v)
8 return A
It is implemented using the union-find (disjoint-set) data structure.
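As a sketch of how that pseudocode maps to Python (the function name and structure are mine, not from a library), using the adjacency format and the longer example graph from the question:

```python
def kruskal_weight(graph):
    """Total MST weight via Kruskal's algorithm with union-find.
    graph uses the question's format: {vertex: [(neighbor, weight), ...]}."""
    # Collect each undirected edge once, sorted by increasing weight
    edges = sorted((w, u, v)
                   for u, nbrs in graph.items()
                   for v, w in nbrs
                   if u < v)
    parent = {u: u for u in graph}      # MAKE-SET for every vertex

    def find(u):                        # FIND-SET with path halving
        while parent[u] != u:
            parent[u] = parent[parent[u]]
            u = parent[u]
        return u

    total = 0
    for w, u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:                    # different trees: take the edge
            parent[ru] = rv             # UNION
            total += w
    return total

graph = {0: [(1, 5), (2, 7), (3, 12)], 1: [(0, 5), (2, 9), (4, 7)],
         2: [(0, 7), (1, 9), (3, 4), (4, 4), (5, 3)],
         3: [(0, 12), (2, 4), (5, 7)], 4: [(1, 7), (2, 4), (5, 2), (6, 5)],
         5: [(2, 3), (3, 7), (4, 2), (6, 2)], 6: [(4, 5), (5, 2)]}
print(kruskal_weight(graph))  # 23
```

On the question's first graph, {0: [(1, 1), (2, 1)], 1: [(0, 1), (2, 1)], 2: [(1, 1), (0, 1)]}, this returns 2, the expected answer.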