Python: Depth-first search 'maximum recursion depth exceeded'

I have a recursive depth-first search algorithm which takes in a black/white mask image and outputs the x largest clusters of white pixels in that mask as separate masks. The function is below:
import numpy as np
from PIL import Image

def split_mask_into_x_biggest_clusters(input_mask, x):
    def generate_neighbours(point):
        # 8-connected neighbourhood offsets
        neighbours = [
            (1, -1), (1, 0), (1, 1),
            (0, -1), (0, 1),
            (-1, -1), (-1, 0), (-1, 1)
        ]
        for neigh in neighbours:
            yield tuple(map(sum, zip(point, neigh)))

    def find_regions(p, points):
        reg = []
        seen = set()

        def dfs(point):
            if point not in seen:
                seen.add(point)
                if point in points:
                    reg.append(point)
                    points.remove(point)
                    for n in generate_neighbours(point):
                        dfs(n)

        dfs(p)
        return reg

    region = []
    data = np.array(input_mask)[:, :, 0]
    wpoint = np.where(data == 255)
    points = set((x, y) for x, y in zip(*wpoint))
    while points:
        cur = next(iter(points))
        reg = find_regions(cur, points)
        region.append(reg.copy())

    areas = {idx: area for idx, area in enumerate(map(len, region))}
    areas = sorted(areas.items(), key=lambda x: x[1], reverse=True)

    num = x
    masks = []
    for idx, area in enumerate(areas[:num]):
        input_mask = np.zeros((512, 512, 3))
        for x, y in region[area[0]]:
            input_mask[x, y] = [255, 255, 255]
        input_mask = input_mask.astype(np.uint8)
        masks.append(Image.fromarray(input_mask))
    return masks
My problem is that when I run it I get the following error: maximum recursion depth exceeded. Experimentally, I have tried increasing the recursion limit to 2000 then to 10000(!):
import sys
sys.setrecursionlimit(10000)
This fixes the problem sometimes, but not always (it still fails when the clusters of white pixels are bigger).
What can I do to fix this problem? Thank you for any help.

For big images you will always end up with this error: the recursion depth needed by a recursive flood fill grows with the size of the cluster.
You can either change your implementation to an iterative DFS (which doesn't use recursion), or use BFS.
Update:
An implementation can be found here (for iterative DFS)
BFS implementation
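As a minimal sketch of the iterative variant (assuming the question's generate_neighbours helper, and that the recursive dfs only expands neighbours of points inside the mask), the call stack is replaced by an explicit list:

def find_regions(p, points):
    # Same interface as the recursive helper, but large clusters can no
    # longer overflow the interpreter's recursion limit.
    reg = []
    seen = set()
    stack = [p]
    while stack:
        point = stack.pop()
        if point in seen:
            continue
        seen.add(point)
        if point in points:
            reg.append(point)
            points.remove(point)
            stack.extend(generate_neighbours(point))
    return reg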


Calculate the Laplacian matrix of a graph object in NetworkX

I am writing my own function that calculates the Laplacian matrix for any directed graph, and am struggling with filling the diagonal entries of the resulting matrix. The convention I use, where e_ij represents an edge from node i to node j, is L_ij = w(e_ij) for the off-diagonal entries, and L_ii = -sum_j w(e_ji), i.e. each diagonal entry is the negative sum of the weights of all edges pointing into node i.
I am creating graph objects with NetworkX (https://networkx.org/). I know NetworkX has its own Laplacian function for directed graphs, but I want to be 100% sure I am using a function that carries out the correct computation for my purposes. The code I have developed thus far is shown below, for the following example graph:
# Create a simple example of a directed weighted graph
G = nx.DiGraph()
G.add_nodes_from([1, 2, 3])
G.add_weighted_edges_from([(1, 2, 1), (1, 3, 1), (2, 1, 1), (2, 3, 1), (3, 1, 1), (3, 2, 1)])

# Put node, edge, and weight information into Python lists
node_list = []
for item in G.nodes():
    node_list.append(item)
edge_list = []
weight_list = []
for item in G.edges():
    weight_list.append(G.get_edge_data(item[0], item[1])['weight'])
    item = (item[0] - 1, item[1] - 1)
    edge_list.append(item)
print(edge_list)
> [(0, 1), (0, 2), (1, 0), (1, 2), (2, 0), (2, 1)]

# Fill in the non-diagonal entries of the Laplacian
num_nodes = len(node_list)
num_edges = len(edge_list)
J = np.zeros(shape=(num_nodes, num_nodes))
for x in range(num_edges):
    i = edge_list[x][0]
    j = edge_list[x][1]
    J[i, j] = weight_list[x]
I am struggling to figure out how to fill in the diagonal entries. edge_list is a list of tuples. To perform the computation in the above equation for L(G), for each node i I need to loop through the tuples, collect the weights of the edges whose second entry is i into a temporary list, sum over all the elements of that temporary list, and finally store the negative of the sum in the correct diagonal entry of L(G).
Any suggestions would be greatly appreciated, especially if there are steps above that can be done more efficiently or elegantly.
I adjusted the networkx.laplacian_matrix function for undirected graphs a little bit:
import networkx as nx
import scipy.sparse
G = nx.DiGraph()
G.add_nodes_from([1, 2, 3])
G.add_weighted_edges_from([(1, 2, 1), (1, 3, 1), (2, 1, 1), (2, 3, 1), (3, 1, 1), (3, 2, 1)])
nodelist = list(G)
A = nx.to_scipy_sparse_matrix(G, nodelist=nodelist, weight="weight", format="csr")
n, m = A.shape
diags = A.sum(axis=0) # 1 = outdegree, 0 = indegree
D = scipy.sparse.spdiags(diags.flatten(), [0], m, n, format="csr")
print((A - D).todense())
# [[-2 1 1]
# [ 1 -2 1]
# [ 1 1 -2]]
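One caveat worth noting: in NetworkX 3.x, to_scipy_sparse_matrix has been removed in favour of to_scipy_sparse_array, so on a recent install the conversion line would read (everything else unchanged):

A = nx.to_scipy_sparse_array(G, nodelist=nodelist, weight="weight", format="csr")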
I will deviate a little from your method, since I prefer to work with Numpy if possible :P.
In the following snippet, I generate test data for a network of n=10 nodes; that is, I generate an array of tuples V to populate with random nodes, and also a (n,n) array A with the values of the edges between nodes. Hopefully the code is somewhat self-explanatory and is correct (let me know otherwise):
from random import sample
import numpy as np

# Number and list of nodes
n = 10
nodes = list(np.arange(n))  # random.sample needs a list

# Test array of linked nodes
# V[i] is a tuple with all nodes the i-node connects to.
V = np.zeros(n, dtype=tuple)
for i in range(n):
    nv = np.random.randint(5)  # Random number of edges from node i
    # To avoid self-loops (do not know if it is your case - comment out if necessary)
    itself = True
    while itself:
        cnodes = sample(nodes, nv)  # samples nv elements from the nodes list w/o repetition
        itself = i in cnodes
    V[i] = cnodes

# Test matrix of weighted edges (from i-node to j-node)
A = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if j in V[i]:
            A[i, j] = np.random.random() * 5

# Laplacian of network
J = np.copy(A)  # This already sets the non-diagonal elements
for i in range(n):
    J[i, i] = -np.sum(A[:, i]) - A[i, i]
Thank you all for your suggestions! I agree that numpy is the way to go. As a rudimentary solution that I will optimize later, this is what I came up with:
def Laplacian_all(edge_list, weight_list, num_nodes, num_edges):
    J = np.zeros(shape=(num_nodes, num_nodes))
    # Off-diagonal entries: J[i, j] = w(e_ij)
    for x in range(num_edges):
        i = edge_list[x][0]
        j = edge_list[x][1]
        J[i, j] = weight_list[x]
    # Diagonal entries: negative sum of the weights of incoming edges
    for i in range(num_nodes):
        temp = []
        for x in range(num_edges):
            if i == edge_list[x][1]:
                temp.append(weight_list[x])
        temp_sum = -1 * sum(temp)
        J[i, i] = temp_sum
    return J
I have yet to test this on different graphs, but this was what I was hoping to figure out for my immediate purposes.
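For reference, a possible vectorized version of the same computation (a sketch only, assuming the same edge_list/weight_list inputs and the incoming-edge convention above):

import numpy as np

def laplacian_vectorized(edge_list, weight_list, num_nodes):
    J = np.zeros((num_nodes, num_nodes))
    rows, cols = zip(*edge_list)
    J[rows, cols] = weight_list  # off-diagonal entries in one shot
    # Each diagonal entry is minus the total incoming weight (column sum)
    J[np.diag_indices(num_nodes)] = -J.sum(axis=0)
    return J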

Find neighbors given a specific coordinate in a 2D array

I am trying to solve a problem using Python, and this is my first time writing Python, so I hope you can help me out. I have a 2D array whose values are -1, 0, and 1. What I want to do is take the coordinates of a specific element and get the coordinates of all the adjacent elements:
Matrix = [[ 1,-1, 0],
[ 1, 0, 0],
[-1,-1, 1]]
For example, given (0, 0) the function should return (0, 1) and (1, 0).
Since you want to work from the coordinates, a simple way I can think of is to define a grid graph using NetworkX and to look for the neighbours:
import networkx as nx
import numpy as np
a = np.array([[ 1, -1,  0],
              [ 1,  0,  0],
              [-1, -1,  1]])
G = nx.grid_2d_graph(*a.shape)
list(G.neighbors((0,0)))
# [(1, 0), (0, 1)]
Or for the "coordinates" of the middle value for instance:
list(G.neighbors((1,1)))
# [(0, 1), (2, 1), (1, 0), (1, 2)]
If you want to use them to index the array, transpose the list of (row, col) pairs into separate row and column index sequences:
ix = list(G.neighbors((0, 0)))
rows, cols = zip(*ix)
a[rows, cols]
# array([ 1, -1])
It's not the best solution, but it can help if you don't want to import any lib:
def get_neighbors(matrix, x, y):
    positions = []
    positions.append(get_neighbor(matrix, x, y - 1))
    positions.append(get_neighbor(matrix, x, y + 1))
    positions.append(get_neighbor(matrix, x - 1, y))
    positions.append(get_neighbor(matrix, x + 1, y))
    return list(filter(None, positions))

def get_neighbor(matrix, x, y):
    # Bounds check: x indexes the row, y the column
    if 0 <= x < len(matrix) and 0 <= y < len(matrix[0]):
        return (x, y)

get_neighbors(your_matrix, x_position, y_position)
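With the Matrix from the question, this returns the two expected neighbours:

get_neighbors(Matrix, 0, 0)
# [(0, 1), (1, 0)]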

How can I find duplicates in a list, and delete all duplicates except for a certain one?

I have a list dataPts that is sorted based on the angle each point makes with the minimum-Y point minY in dataPts, such as [(0, 0), (10, 10), (20, 20), ...], (0, 0) being minY.
Then I create a new list angles, which is a list of all those angles, for instance [0, 45, 45, ...].
You will notice that angles contains duplicate values, for instance 45, 45. What I want to do is locate the points in dataPts that share the same angle. I then want to delete those points, EXCEPT the one that is the furthest distance from minY, using a function that returns that distance.
For example, (10, 10) and (20, 20) both have the corresponding value 45 in angles. How can I pick out the value with the greater distance to minY, which is (20, 20), and delete (10, 10)?
You could create a dict using the angles as keys, where the values are all of the elements with a given angle, then choose the max based on your distance function.
i.e. something like:
from collections import defaultdict

d = defaultdict(list)
for angle, pt in zip(angles, dataPts):
    d[angle].append(pt)
result = [max(pts, key=my_dist_func) for angle, pts in d.items()]
Given the ymin and distance function you're describing, I think this works:
from collections import defaultdict

dataPts = [(0, 0), (10, 10), (20, 20)]
angles = [0, 45, 45]
ymin = min(p[1] for p in dataPts)

d = defaultdict(list)
for angle, pt in zip(angles, dataPts):
    d[angle].append(pt)
result = [max(pts, key=lambda p: p[1] - ymin) for angle, pts in d.items()]
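On Python 3.7+, where dicts preserve insertion order, this yields result == [(0, 0), (20, 20)]: angle 0 only has (0, 0), and for angle 45 the point (20, 20) maximizes p[1] - ymin.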
Try this:
angles1 = [(0, 0), (10, 10), (20, 20)]
angles = [0, 45, 45]
dumy = {}
duplicates = []
for i, items in enumerate(angles):
    if items not in dumy:
        dumy[items] = ""
    else:
        duplicates.append(i)
        if angles[i - 1] == items and i - 1 not in duplicates:
            duplicates.append(i - 1)
# Delete from the highest index down so earlier indices stay valid
for i in sorted(duplicates, reverse=True):
    del angles1[i]
If you want to remove only the later duplicates (keeping the first occurrence), collect their indices first and again delete from the end:
dumy = {}
duplicates = []
for i, items in enumerate(angles):
    if items not in dumy:
        dumy[items] = ""
    else:
        duplicates.append(i)
for i in sorted(duplicates, reverse=True):
    del angles1[i]

How to index a Cartesian product

Suppose that the variables x and theta can take the possible values [0, 1, 2] and [0, 1, 2, 3], respectively.
Let's say that in one realization, x = 1 and theta = 3. The natural way to represent this is by a tuple (1,3). However, I'd like to instead label the state (1,3) by a single index. A 'brute-force' method of doing this is to form the Cartesian product of all the possible ordered pairs (x,theta) and look it up:
import numpy as np
import itertools

N_x = 3
N_theta = 4
np.random.seed(seed=1)
x = np.random.choice(range(N_x))
theta = np.random.choice(range(N_theta))

def get_box(x, N_x, theta, N_theta):
    states = list(itertools.product(range(N_x), range(N_theta)))
    inds = [i for i in range(len(states)) if states[i] == (x, theta)]
    return inds[0]

print(x, theta)
box = get_box(x, N_x, theta, N_theta)
print(box)
This gives (x, theta) = (1,3) and box = 7, which makes sense if we look it up in the states list:
[(0, 0), (0, 1), (0, 2), (0, 3), (1, 0), (1, 1), (1, 2), (1, 3), (2, 0), (2, 1), (2, 2), (2, 3)]
However, this 'brute-force' approach seems inefficient, as it should be possible to determine the index beforehand without looking it up. Is there any general way to do this? (The number of states N_x and N_theta may vary in the actual application, and there might be more variables in the Cartesian product).
If you always store your states lexicographically, and the possible values for x and theta are always the complete range from 0 up to some maximum as your example suggests, you can use the formula
index = x * N_theta + theta
where (x, theta) is one of your tuples. For the example above, (x, theta) = (1, 3) gives index = 1 * 4 + 3 = 7, matching the brute-force lookup.
This generalizes in the following way to higher dimensional tuples: If N is a list or tuple representing the ranges of the variables (so N[0] is the number of possible values for the first variable, etc.) and p is a tuple, you get the index into a lexicographically sorted list of all possible tuples using the following snippet:
index = 0
skip = 1
for dimension in reversed(range(len(N))):
    index += skip * p[dimension]
    skip *= N[dimension]
This might not be the most Pythonic way to do it but it shows what is going on: You think of your tuples as a hypercube where you can only go along one dimension, but if you reach the edge, your coordinate in the "next" dimension increases and your traveling coordinate resets. The reader is advised to draw some pictures. ;)
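For what it's worth, NumPy ships exactly this mixed-radix computation as np.ravel_multi_index, which can serve as a cross-check for the snippet above:

import numpy as np

np.ravel_multi_index((1, 3), (3, 4))        # 7, as in the 2-D example
np.ravel_multi_index((1, 3, 2), (3, 4, 3))  # 23, the 3-D example below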
I think it depends on the data you have. If it is sparse, the best solution is a dictionary, and this works for tuples of any dimension.
import itertools
import random

n = 100
m = 100
l1 = [i for i in range(n)]
l2 = [i for i in range(m)]
a = {}
prod = [element for element in itertools.product(l1, l2)]
for i in prod:
    a[i] = random.randint(1, 100)
A very good source about the performance is in this discussion.
For the sake of completeness I'll include my implementation of Julian Kniephoff's solution, get_box3, with a slightly adapted version of the original implementation, get_box2:
# 'Brute-force' method
def get_box2(p, N):
    states = list(itertools.product(*[range(n) for n in N]))
    return states.index(p)

# 'Analytic' method
def get_box3(p, N):
    index = 0
    skip = 1
    for dimension in reversed(range(len(N))):
        index += skip * p[dimension]
        skip *= N[dimension]
    return index

p = (1, 3, 2)  # Tuple characterizing the total state of the system
N = [3, 4, 3]  # List of the number of possible values for each state variable
print("Brute-force method yields %s" % get_box2(p, N))
print("Analytical method yields %s" % get_box3(p, N))
Both the 'brute-force' and 'analytic' method yield the same result:
Brute-force method yields 23
Analytical method yields 23
but I expect the 'analytic' method to be faster. I've changed the representation to p and N as suggested by Julian.

Check if some elements in a matrix are cohesive

I have to write a very little Python program that checks whether some group of coordinates are all connected together (by a line, not diagonally). The two example pictures (not reproduced here) show what I mean: in the left picture all colored groups are cohesive, in the right picture they are not.
I've already made this piece of code, but it doesn't seem to work and I'm quite stuck. Any ideas on how to fix this?
def cohesive(container):
    co = container.pop()
    container.add(co)
    return connected(co, container)

def connected(co, container):
    done = {co}
    todo = set(container)
    while len(neighbours(co, container, done)) > 0 and len(todo) > 0:
        done = done.union(neighbours(co, container, done))
    return len(done) == len(container)

def neighbours(co, container, done):
    output = set()
    for i in range(-1, 2):
        if i != 0:
            if (co[0] + i, co[1]) in container and (co[0] + i, co[1]) not in done:
                output.add((co[0] + i, co[1]))
            if (co[0], co[1] + i) in container and (co[0], co[1] + i) not in done:
                output.add((co[0], co[1] + i))
    return output
This is some reference material that should return True:
cohesive({(1, 2), (1, 3), (2, 2), (0, 3), (0, 4)})
and this should return False:
cohesive({(1, 2), (1, 4), (2, 2), (0, 3), (0, 4)})
Both tests work, but when I try to test it with different numbers the functions fail.
You can just take an element and attach its neighbors while it is possible.
def dist(A, B):
    return abs(A[0] - B[0]) + abs(A[1] - B[1])

def grow(K, E):
    return {M for M in E for N in K if dist(M, N) <= 1}

def cohesive(E):
    K = {min(E)}  # an element
    L = grow(K, E)
    while len(K) < len(L):
        K, L = L, grow(L, E)
    return len(L) == len(E)
grow(K, E) returns the neighborhood of K in E.
In [1]: cohesive({(1, 2), (1, 3), (2, 2), (0, 3), (0, 4)})
Out[1]: True
In [2]: cohesive({(1, 2), (1, 4), (2, 2), (0, 3), (0, 4)})
Out[2]: False
Usually, to check if something is connected, you need to use disjoint-set data structures; the more efficient variations include weighted quick-union and weighted quick-union with path compression.
Here's an implementation, http://algs4.cs.princeton.edu/15uf/WeightedQuickUnionUF.java.html which you can modify to your needs. Also, the implementation found in the book "The Design and Analysis of Computer Algorithms" by A. Aho allows you to specify the name of the group that you add 2 connected elements to, so I think that's the modification you're looking for. (It just involves using 1 extra array which keeps track of group numbers.)
As a side note, since disjoint sets usually apply to arrays, don't forget that you can represent an N by N matrix as an array of size N*N.
EDIT: I just realized that it wasn't clear to me what you were asking at first; you also mentioned that diagonal components aren't connected. In that case the algorithm is as follows (see the sketch after the worked example):
0) Check if all elements refer to the same group.
1) Iterate through the array of pairs that represent coordinates in the matrix in question.
2) For each pair make a set of pairs that satisfies the following formula:
|entry.x - otherEntry.x| + |entry.y - otherEntry.y| = 1,
where 'entry' refers to the element that the outer for loop is referring to.
3) Check if all of the sets overlap. That can be done by "unioning" the sets you're looking at; at the end, if you get more than 1 set, then the elements are not cohesive.
The complexity is O(n^2 + n^2 * log(n)).
Example:
(0,4), (1,2), (1,4), (2,2), (2,3)
0) check that they are all in the same group:
all of them belong to group 5.
1) make sets:
set1: (0,4), (1,4)
set2: (1,2), (2,2)
set3: (0,4), (1,4) // here we suppose that sets are sorted, other than that it
should be (1,4), (0,4)
set4: (1,2), (2,2), (2,3)
set5: (2,2), (2,3)
2) check for overlap:
set1 overlaps with set3, so we get:
set1' : (0,4), (1,4)
set2 overlaps with set4 and set 5, so we get:
set2' : (1,2), (2,2), (2,3)
as you can see set1' and set2' don't overlap, hence you get 2 disjoint sets that are in the same group, so the answer is 'false'.
Note that this is inefficient, but I have no idea how to do it more efficiently, but this answers your question.
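For concreteness, here is a minimal sketch of weighted quick-union with path compression in Python (a generic disjoint set, not specific to this grid problem): union every pair of 4-adjacent cells, then the cells are cohesive exactly when a single root remains.

class UnionFind:
    def __init__(self, elements):
        self.parent = {e: e for e in elements}
        self.size = {e: 1 for e in elements}

    def find(self, e):
        while self.parent[e] != e:
            self.parent[e] = self.parent[self.parent[e]]  # path compression
            e = self.parent[e]
        return e

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.size[ra] < self.size[rb]:  # weighting: attach the smaller tree
            ra, rb = rb, ra
        self.parent[rb] = ra
        self.size[ra] += self.size[rb]

def cohesive(cells):
    uf = UnionFind(cells)
    for x, y in cells:
        for n in ((x + 1, y), (x, y + 1)):  # right/down cover all 4-adjacencies
            if n in cells:
                uf.union((x, y), n)
    return len({uf.find(c) for c in cells}) == 1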
The logic in your connected function seems wrong. You make a todo variable, but then never change its contents. You always look for neighbours around the same starting point.
Try this code instead:
def connected(co, container):
    done = {co}
    todo = {co}
    while len(todo) > 0:
        co = todo.pop()
        n = neighbours(co, container, done)
        done = done.union(n)
        todo = todo.union(n)
    return len(done) == len(container)
todo is a set of all the points we are still to check.
done is a set of all the points we have found to be 4-connected to the starting point.
I'd tackle this problem differently... if you're looking for five exactly, that means:
Every coordinate in the line has to be neighbouring another coordinate in the line, because anything less means that coordinate is disconnected.
At least three of the coordinates have to be neighbouring another two or more coordinates in the line, because anything less and the groups will be disconnected.
Hence, you can just get the coordinate's neighbours and check whether both conditions are fulfilled.
Here is a basic solution:
def cells_are_connected(connections):
    return all(c > 0 for c in connections)

def groups_are_connected(connections):
    return len([1 for c in connections if c > 1]) > 2

def cohesive(coordinates):
    connections = []
    for x, y in coordinates:
        neighbours = [(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)]
        connections.append(len([1 for n in neighbours if n in coordinates]))
    return cells_are_connected(connections) and groups_are_connected(connections)

print(cohesive([(1, 2), (1, 3), (2, 2), (0, 3), (0, 4)]))  # True
print(cohesive([(1, 2), (1, 4), (2, 2), (0, 3), (0, 4)]))  # False
No need for a general-case solution or union logic. :) Do note that it's specific to the five-in-a-line problem, however.
