I'm trying to solve the leetcode question https://leetcode.com/problems/open-the-lock/ with BFS and came up with the approach below.
def openLock(self, deadends: List[str], target: str) -> int:
    def getNeighbors(node) -> List[str]:
        neighbors = []
        dirs = [-1, 1]
        for i in range(len(node)):
            for direction in dirs:
                x = (int(node[i]) + direction) % 10
                neighbors.append(node[:i] + str(x) + node[i+1:])
        return neighbors

    start = '0000'
    if start in deadends:
        return -1
    queue = deque()
    queue.append((start, 0))
    turns = 0
    visited = set()
    deadends = set(deadends)
    while queue:
        state, turns = queue.popleft()
        visited.add(state)
        if state == target:
            return turns
        for nb in getNeighbors(state):
            if nb not in visited and nb not in deadends:
                queue.append((nb, turns+1))
    return -1
However, this code runs into a Time Limit Exceeded error and only passes 2/48 of the test cases. For example, with a test case where the deadends are ["8887","8889","8878","8898","8788","8988","7888","9888"] and the target is "8888" (the start is always "0000"), my solution TLEs. I notice that if I essentially change one line and add neighboring nodes to the visited set in my inner loop, my solution passes that test case and all the other test cases.
for nb in getNeighbors(state):
    if nb not in visited and nb not in deadends:
        visited.add(nb)
        queue.append((nb, turns+1))
My while loop then becomes
while queue:
    state, turns = queue.popleft()
    if state == target:
        return turns
    for nb in getNeighbors(state):
        if nb not in visited and nb not in deadends:
            visited.add(nb)
            queue.append((nb, turns+1))
Can someone explain why the first approach runs into TLE while the second doesn't?
Yes. Absolutely. You need to add the item to visited when it gets added to the queue the first time, not when it gets removed from the queue. Otherwise your queue is going to grow exponentially.
Let's say you're searching for 1111. Your code is going to add 1000, 0100, 0010, 0001 (and others) to the queue. Then it is going to add each of 1100, 1010, 1001, 0110, 0101, 0011 (and others) twice, because they are neighbors of two elements already in the queue and they haven't been visited yet. And then you add each of 1110, 1101, 1011, 0111 several times over. If instead you're searching for 5555, you'll have many, many duplicates on your queue.
Change the name of your visited variable to seen. Notice that once an item has been put on the queue, it never needs to be put on the queue again.
Seriously, try searching for 5555 with no dead ends. When you find it, look at the queue and see how many elements are on the queue, versus how many distinct elements are on the queue.
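To see this concretely without waiting for the full search, here is a small sketch (my own addition, not part of either answer) that runs both bookkeeping styles with no dead ends but stops expanding after six BFS levels, since letting the unfixed version run all the way to 5555 would take far too long. It compares how many states were ever pushed onto the queue with how many distinct states that represents.
from collections import deque

def get_neighbors(node):  # same wheel-turning logic as getNeighbors above
    out = []
    for i in range(len(node)):
        for d in (-1, 1):
            out.append(node[:i] + str((int(node[i]) + d) % 10) + node[i+1:])
    return out

def count_pushes(mark_on_enqueue, max_turns=6):
    queue = deque([('0000', 0)])
    visited = {'0000'} if mark_on_enqueue else set()
    pushes = ['0000']
    while queue:
        state, turns = queue.popleft()
        if not mark_on_enqueue:
            visited.add(state)       # the original bookkeeping: mark on dequeue
        if turns == max_turns:
            continue                 # cap the depth so the demo finishes quickly
        for nb in get_neighbors(state):
            if nb not in visited:
                if mark_on_enqueue:
                    visited.add(nb)  # the fixed bookkeeping: mark on enqueue
                queue.append((nb, turns + 1))
                pushes.append(nb)
    return len(pushes), len(set(pushes))

print(count_pushes(True))   # total pushes equals the number of distinct states: no duplicates
print(count_pushes(False))  # total pushes far exceeds the number of distinct states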
You are adding next states to try (neighboring states) to the queue, but since you only check whether new states have already been visited or are dead ends, you end up adding states you haven't tried yet to the queue many times (i.e., you have no way of telling whether a new state was already queued).
Basically, you're getting a bit too fancy here. A simpler implementation of the same idea:
def openLock(dead_ends: list[str], target: str, start: str = '0000') -> int:
    # you could create locks with hexadecimal, alphabetic etc. disks
    symbols = list(map(str, [*range(10)]))
    if start in dead_ends or target in dead_ends:
        return -1
    elif start == target:
        return 0
    assert all(l == len(start) == len(target) for l in map(len, dead_ends)), \
        'length of start, target and dead ends must match'
    assert all(all(s in symbols for s in code) for code in dead_ends + [start, target]), \
        f'all symbols in start, target and dead ends must be in {symbols}'
    try_neighbors = [start]
    tries = {start: 0}
    while try_neighbors:
        state = try_neighbors.pop(0)
        for i, s in enumerate(state):
            for step in (-1, +1):
                neighbor = state[:i] + symbols[(symbols.index(s) + step) % len(symbols)] + state[i+1:]
                if neighbor in tries:
                    continue
                if neighbor == target:
                    return tries[state] + 1
                elif neighbor not in dead_ends:
                    tries[neighbor] = tries[state] + 1
                    try_neighbors.append(neighbor)
    return -1
# Provides the answer `-1` because it cannot be solved.
print(openLock(["8887","8889","8878","8898","8788","8988","7888","9888"], "8888"))
# Provides the answer `8` steps
print(openLock(["8887","8889","8878","8898","8788","8988","7888"], "8888"))
Your own solution with the problems fixed:
from collections import deque

def openLock(deadends: list[str], target: str) -> int:
    def getNeighbors(node) -> list[str]:
        neighbors = []
        dirs = [-1, 1]
        for i in range(len(node)):
            for direction in dirs:
                x = (int(node[i]) + direction) % 10
                neighbors.append(node[:i] + str(x) + node[i + 1:])
        return neighbors

    start = '0000'
    if start in deadends:
        return -1
    queue = deque()
    queue.append((start, 0))
    # keep track of all you've seen
    seen = set()
    deadends = set(deadends)
    while queue:
        state, turns = queue.popleft()
        seen.add(state)
        for nb in getNeighbors(state):
            # why wait for it to come up in the queue?
            if nb == target:
                return turns + 1
            if nb not in seen and nb not in deadends:
                queue.append((nb, turns + 1))
                # add everything you've queued up to seen
                seen.add(nb)
    return -1
print(openLock(["8887","8889","8878","8898","8788","8988","7888","9888"], "8888"))
This is my code to find the height of a tree with up to 10^5 nodes. May I know why I get the following error?
Warning, long feedback: only the beginning and the end of the feedback message is shown, and the middle was replaced by " ... ". Failed case #18/24: time limit exceeded
Input:
100000
Your output:
stderr:
(Time used: 6.01/3.00, memory used: 24014848/2147483648.)
Is there a way to speed up this algo?
This is the exact problem description:
Problem Description
Task. You are given a description of a rooted tree. Your task is to compute and output its height. Recall
that the height of a (rooted) tree is the maximum depth of a node, or the maximum distance from a
leaf to the root. You are given an arbitrary tree, not necessarily a binary tree.
Input Format. The first line contains the number of nodes n. The second line contains n integers from -1 to n - 1, the parents of the nodes. If the i-th one of them (0 ≤ i ≤ n - 1) is -1, node i is the root; otherwise it is the 0-based index of the parent of the i-th node. It is guaranteed that there is exactly one root. It is guaranteed that the input represents a tree.
Constraints. 1 ≤ n ≤ 10^5.
Output Format. Output the height of the tree.
# python3
import sys, threading
from collections import deque, defaultdict

sys.setrecursionlimit(10**7)  # max depth of recursion
threading.stack_size(2**27)   # new thread will get stack of such size

class TreeHeight:
    def read(self):
        self.n = int(sys.stdin.readline())
        self.parent = list(map(int, sys.stdin.readline().split()))

    def compute_height(self):
        height = 0
        nodes = [[] for _ in range(self.n)]
        for child_index in range(self.n):
            if self.parent[child_index] == -1:
                # child_index = child value
                root = child_index
                nodes[0].append(root)
                # do not add to index
            else:
                parent_index = None
                counter = -1
                updating_child_index = child_index
                while parent_index != -1:
                    parent_index = self.parent[updating_child_index]
                    updating_child_index = parent_index
                    counter += 1
                nodes[counter].append(child_index)
                # nodes[self.parent[child_index]].append(child_index)
        nodes2 = list(filter(lambda x: x, nodes))
        height = len(nodes2)
        return height

def main():
    tree = TreeHeight()
    tree.read()
    print(tree.compute_height())

threading.Thread(target=main).start()
First, why are you using threading? Threading isn't good: it is a source of potentially hard-to-find race conditions and confusing complexity. Plus, in Python, thanks to the GIL, you often don't get any performance win.
That said, your algorithm essentially looks like this:
for each node:
    travel all the way to the root
    record its depth
If the tree is entirely unbalanced and has 100,000 nodes, then for each of 100,000 nodes you have to visit an average of 50,000 other nodes for roughly 5,000,000,000 operations. This takes a while.
What you need to do is stop constantly traversing the tree back to the root to find the depths. Something like this should work.
import sys

class TreeHeight:
    def read(self):
        self.n = int(sys.stdin.readline())
        self.parent = list(map(int, sys.stdin.readline().split()))

    def compute_height(self):
        height = [None for _ in self.parent]
        todo = list(range(self.n))
        while 0 < len(todo):
            node = todo.pop()
            if self.parent[node] == -1:
                height[node] = 1
            elif height[node] is None:
                if height[self.parent[node]] is None:
                    # We will try again after doing our parent
                    todo.append(node)
                    todo.append(self.parent[node])
                else:
                    height[node] = height[self.parent[node]] + 1
        return max(height)

if __name__ == "__main__":
    tree = TreeHeight()
    tree.read()
    print(tree.compute_height())
(Note: I switched to a standard indent, and made that indent 4 spaces. See this classic study for evidence that an indent in the 2-4 range is better for comprehension than an indent of 8. And, of course, the PEP 8 standard for Python specifies 4 spaces.)
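If you want to reproduce the difference locally, here is a quick sketch (my own addition, not part of the answer) that writes the degenerate chain input described above to a file; the file name is arbitrary. The original walk-to-the-root approach crawls on this input, while the iterative version above finishes almost instantly.
# Generate the worst case: node 0 is the root and every node i > 0 has parent i - 1.
n = 100000
parents = [-1] + list(range(n - 1))
with open('worst_case.txt', 'w') as f:   # hypothetical file name for local testing
    f.write(str(n) + "\n")
    f.write(" ".join(map(str, parents)) + "\n")
# Redirect this file into either program; the expected height is 100000.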
Here is the same code showing how to handle accidental loops, and hardcode a specific test case.
import sys

class TreeHeight:
    def read(self):
        self.n = int(sys.stdin.readline())
        self.parent = list(map(int, sys.stdin.readline().split()))

    def compute_height(self):
        height = [None for _ in self.parent]
        todo = list(range(self.n))
        in_redo = set()
        while 0 < len(todo):
            node = todo.pop()
            if self.parent[node] == -1:
                height[node] = 1
            elif height[node] is None:
                if height[self.parent[node]] is None:
                    if node in in_redo:
                        # This must be a cycle, lie about its height.
                        height[node] = -100000
                    else:
                        in_redo.add(node)
                        # We will try again after doing our parent
                        todo.append(node)
                        todo.append(self.parent[node])
                else:
                    height[node] = height[self.parent[node]] + 1
        return max(height)

if __name__ == "__main__":
    tree = TreeHeight()
    # tree.read()
    tree.n = 5
    tree.parent = [-1, 0, 4, 0, 3]
    print(tree.compute_height())
To solve problem 83 of Project Euler I tried to use the A* algorithm. The algorithm works fine for the given problem and I get the correct result. But when I visualized the algorithm, I realized that it seems to check way too many possible nodes. Is it because I didn't implement the algorithm properly, or am I missing something else? I tried using two different heuristic functions, which you can see in the code below, but the output didn't change much.
Are there any tips to make the code more efficient?
import heapq
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
from matplotlib import animation
import numpy as np

class PriorityQueue:
    def __init__(self):
        self.elements = []

    def empty(self):
        return not self.elements

    def put(self, item, priority):
        heapq.heappush(self.elements, (priority, item))

    def get(self):
        return heapq.heappop(self.elements)[1]

class A_star:
    def __init__(self, data, start, end):
        self.data = data
        self.start = start
        self.end = end
        self.a = len(self.data)
        self.b = len(self.data[0])

    def h_matrix(self):
        elements = sorted([self.data[i][j] for j in range(self.b) for i in range(self.a)])
        n = self.a + self.b - 1
        minimum = elements[:n]
        h = []
        for i in range(self.a):
            h_i = []
            for j in range(self.b):
                h_i.append(sum(minimum[:(n-i-j-1)]))
            h.append(h_i)
        return h

    def astar(self):
        h = self.h_matrix()
        open_list = PriorityQueue()
        open_list.put(self.start, 0)
        came_from = {}
        cost_so_far = {}
        came_from[self.start] = None
        cost_so_far[self.start] = self.data[0][0]
        checked = []
        while not open_list.empty():
            current = open_list.get()
            checked.append(current)
            if current == self.end:
                break
            neighbors = [(current[0]+x, current[1]+y) for x, y in {(-1,0), (0,-1), (1,0), (0,1)}
                         if 0 <= current[0]+x < self.a and 0 <= current[1]+y < self.b]
            for next in neighbors:
                new_cost = cost_so_far[current] + self.data[next[0]][next[1]]
                if next not in cost_so_far or new_cost < cost_so_far[next]:
                    cost_so_far[next] = new_cost
                    priority = new_cost + h[next[0]][next[1]]
                    open_list.put(next, priority)
                    came_from[next] = current
        return came_from, checked, cost_so_far[self.end]

    def reconstruct_path(self):
        paths = self.astar()[0]
        best_path = [self.end]
        while best_path[0] is not None:
            new = paths[best_path[0]]
            best_path.insert(0, new)
        return best_path[1:]

    def minimum(self):
        return self.astar()[2]

if __name__ == "__main__":
    liste = [[131, 673, 234, 103, 18], [201, 96, 342, 965, 150], [630, 803, 746, 422, 111], [537, 699, 497, 121, 956], [805, 732, 524, 37, 331]]
    path = A_star(liste, (0,0), (4,4))
    print(path.astar())
    #print(path.reconstruct_path())
    path.plot_path(speed=200)  # visualization method not shown in the question
Here you can see my visualization for the 80x80 matrix given in the problem. Blue are all the points in checked and red is the optimal path. From my understanding it shouldn't be the case that every point in the matrix is in checked i.e. blue.
https://i.stack.imgur.com/LKkdh.png
My initial guess would be that my heuristic function is not good enough. If I choose h=0, which amounts to Dijkstra's algorithm, the length of my checked list is 6400. In contrast, if I use my custom h, the length is 6455. But how can I improve the heuristic function for an arbitrary matrix?
Let me start at the end of your post:
Marking cells as checked
if I use my custom h the length is 6455.
You should not have a checked collection whose size exceeds the number of cells. So let me first suggest an improvement for that: instead of using a list, use a set, and skip anything popped from the priority queue that is already in the set. The relevant code will then look like this:
checked = set()
while not open_list.empty():
    current = open_list.get()
    if current in checked:  # don't repeat the same search
        continue
    checked.add(current)
And if in the end you need the list version of checked, just return it that way:
return came_from, list(checked), cost_so_far[self.end]
Now to the main question:
Improving heuristic function
From my understanding it shouldn't be the case that every point in the matrix is in checked i.e. blue. My initial guess would be that my heuristic function is not good enough.
That is the right explanation. Combine that with the fact that the given matrix has many paths whose total costs come quite close to each other, so much of the matrix stays in competition.
how can I improve the heuristic function for an arbitrary matrix?
One idea is to consider the following. A path must include at least one element from each "forward" diagonal (/). So if we work with the minimum value on each such diagonal and create running sums (backwards -- starting from the target), we'll have a workable value for h.
Here is the code for that idea:
def h_matrix(self):
    min_diagonals = [float("inf")] * (self.a + self.b - 1)
    # For each diagonal get the minimum cost on that diagonal
    for i, row in enumerate(self.data):
        for j, cost in enumerate(row):
            min_diagonals[i+j] = min(min_diagonals[i+j], cost)
    # Create a running sum in backward direction
    for i in range(len(min_diagonals) - 2, -1, -1):
        min_diagonals[i] += min_diagonals[i + 1]
    min_diagonals.append(0)  # Add an entry for allowing the next logic to work
    # These sums are a lower bound for the remaining cost towards the target
    h = [
        [min_diagonals[i + j + 1] for j in range(self.b)]
        for i in range(self.a)
    ]
    return h
With these improvements, we get these counts:
len(cost_so_far) == 6374
len(checked) == 6339
This represents still a large portion of the matrix, but at least a few cells were left out.
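As a quick sanity check (my own addition, assuming the h_matrix above has been substituted into the A_star class from the question): on the 5x5 example matrix, the per-diagonal minima are 131, 201, 96, 103, 18, 150, 111, 37, 331, so the heuristic at the start cell is the sum of everything after the first diagonal.
# 201 + 96 + 103 + 18 + 150 + 111 + 37 + 331 = 1047, which is safely below the
# true remaining cost 2297 - 131 = 2166 for this example, so the heuristic stays admissible.
liste = [[131, 673, 234, 103, 18], [201, 96, 342, 965, 150],
         [630, 803, 746, 422, 111], [537, 699, 497, 121, 956],
         [805, 732, 524, 37, 331]]
print(A_star(liste, (0, 0), (4, 4)).h_matrix()[0][0])  # 1047 with the improved heuristic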
Dear computer science enthusiasts,
I have stumbled upon an issue when trying to implement the Dijkstra algorithm to determine the shortest path between a starting node and all other nodes in a graph.
To be precise, I will provide you with as many code snippets and as much information as I consider useful to the case. However, should you be missing anything, please let me know.
I implemented a PQueue class to handle Priority Queues of each individual node and it looks like this:
class PQueue:
    def __init__(self):
        self.items = []

    def push(self, u, value):
        self.items.append((u, value))
        # insertion sort
        j = len(self.items) - 1
        while j > 0 and self.items[j - 1][1] > value:
            self.items[j] = self.items[j - 1]  # Move element 1 position backwards
            j -= 1
        # node u now belongs to position j
        self.items[j] = (u, value)

    def decrease_key(self, u, value):
        for i in range(len(self.items)):
            if self.items[i][0] == u:
                self.items[i][1] = value
                j = i
                break
        # insertion sort
        while j > 0 and self.items[j - 1][1] > value:
            self.items[j] = self.items[j - 1]  # Move element 1 position backwards
            j -= 1
        # node u now belongs to position j
        self.items[j] = (u, value)

    def pop_min(self):
        if len(self.items) == 0:
            return None
        self.items.__delitem__(0)
        return self.items.index(min(self.items))
In case you're not too sure about what the Dijkstra-algorithm is, you can refresh your knowledge here.
Now to get to the actual problem, I declared a function dijkstra:
def dijkstra(self, start):
    # init
    totalCosts = {}   # {"node": cost, ...}
    prevNodes = {}    # {"node": prevNode, ...}
    minPQ = PQueue()  # [[node, cost], ...]
    visited = set()

    # start init
    totalCosts[str(start)] = 0
    prevNodes[str(start)] = start
    minPQ.push(start, 0)
    # set for all other nodes cost to inf
    for node in range(self.graph.length):  # #nodes
        if node != start:
            totalCosts[str(node)] = np.inf

    while len(minPQ.items) != 0:  # Main loop
        # remove smallest item
        curr_node = minPQ.items[0][0]  # get index/number of curr_node
        minPQ.pop_min()
        visited.add(curr_node)

        # check neighbors
        for neighbor in self.graph.adj_list[curr_node]:
            # check if visited
            if neighbor not in visited:
                # check cost and put it in totalCost and update prev node
                cost = self.graph.val_edges[curr_node][neighbor]  # update cost of curr_node -> neighbor
                minPQ.push(neighbor, cost)
                totalCosts[str(neighbor)] = cost  # update cost
                prevNodes[str(neighbor)] = curr_node  # update prev

                # calc alternate path
                altpath = totalCosts[str(curr_node)] + self.graph.val_edges[curr_node][neighbor]
                # val_edges is an adj_matrix with values for the connecting edges
                if altpath < totalCosts[str(neighbor)]:  # check if new path is better
                    totalCosts[str(neighbor)] = altpath
                    prevNodes[str(neighbor)] = curr_node
                    minPQ.decrease_key(neighbor, altpath)
In my eyes this should solve the problem mentioned above (the optimal path from a starting node to every other node), but it does not. Can someone help me clean up this mess that I have been trying to debug for a while now? Thank you in advance!
Assumption:
In fact, I realized that the dictionary used to store the previously visited nodes (prevNodes) and the one where I save the corresponding total cost of visiting a node (totalCosts) end up with different lengths, and I do not understand why.
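For reference, here is a compact sketch of the textbook heapq-based formulation of the same idea. This is my own sketch, not the poster's code; it assumes the adjacency structures used above (adj_list giving each node's neighbors, val_edges giving edge weights) and keys the dictionaries by the node itself rather than by str(node).
import heapq

def dijkstra_reference(adj_list, val_edges, start):
    total_costs = {start: 0}
    prev_nodes = {start: start}
    heap = [(0, start)]
    while heap:
        cost, node = heapq.heappop(heap)
        if cost > total_costs.get(node, float('inf')):
            continue  # stale entry left behind instead of a decrease-key
        for neighbor in adj_list[node]:
            alt = cost + val_edges[node][neighbor]
            if alt < total_costs.get(neighbor, float('inf')):
                total_costs[neighbor] = alt
                prev_nodes[neighbor] = node
                heapq.heappush(heap, (alt, neighbor))
    return total_costs, prev_nodes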
I have a fun little problem.
I need to count the number of 'groups' of characters in a file. Say the file is...
..##.#..#
##..####.
.........
###.###..
##...#...
The code will then count the number of groups of #'s. For example, the above would be 3. Diagonals count as adjacent. Here is my code so far:
build = []
height = 0
with open('file.txt') as i:
    build.append(i)
    height += 1
length = len(build[0])
dirs = {'up':(-1, 0), 'down':(1, 0), 'left':(0, -1), 'right':(0, 1), 'upleft':(-1, -1), 'upright':(-1, 1), 'downleft':(1, -1), 'downright':(1, 1)}

def find_patches(grid, length):
    queue = []
    queue.append((0, 0))
    patches = 0
    while queue:
        current = queue.pop(0)
        line, cell = path[-1]
        if ## This is where I am at. I was making a pathfinding system.
Here's a naive solution I came up with. Originally I just wanted to loop through all the elements once and check, for each one, whether I can put it into an existing group. That didn't work, however, as some groups are only combined later (e.g. the first # in the second row would not belong to the big group until the second # in that row is processed). So I started working on a merge algorithm and then figured I could just do that from the beginning.
So how this works now is that I put every # into its own group. Then I keep looking at combinations of two groups and check if they are close enough to each other that they belong to the same group. If that's the case, I merge them and restart the check. If I have looked at all possible combinations and could not merge any more, I know that I'm done.
from itertools import combinations, product

def canMerge(g, h):
    for i, j in g:
        for x, y in h:
            if abs(i - x) <= 1 and abs(j - y) <= 1:
                return True
    return False

def findGroups(field):
    # initialize one-element groups
    groups = [[(i, j)] for i, j in product(range(len(field)), range(len(field[0]))) if field[i][j] == '#']
    # keep joining until no more joins can be executed
    merged = True
    while merged:
        merged = False
        for g, h in combinations(groups, 2):
            if canMerge(g, h):
                g.extend(h)
                groups.remove(h)
                merged = True
                break
    return groups

# initialize field
field = '''\
..##.#..#
##..####.
.........
###.###..
##...#...'''.splitlines()

groups = findGroups(field)
print(len(groups)) # 3
I'm not exactly sure what your code is trying to do. Your with statement opens a file, but all you do is append the file object to a list before the with ends and the file gets closed (without its contents ever being read). I suspect this is not what you intended, but I'm not sure what you were aiming for.
If I understand your problem correctly, you are trying to count the connected components of a graph. In this case, the graph's vertices are the '#' characters, and the edges are wherever such characters are adjacent to each other in any direction (horizontally, vertically or diagonally).
There are pretty simple algorithms for solving that problem. One is to use a disjoint set data structure (also known as a "union-find" structure, since union and find are the two operations it supports) to connect groups of '#' characters together as they're read in from the file.
Here's a fairly minimal disjoint set I wrote to answer another question a while ago:
class UnionFind:
    def __init__(self):
        self.rank = {}
        self.parent = {}

    def find(self, element):
        if element not in self.parent:  # leader elements are not in `parent` dict
            return element
        leader = self.find(self.parent[element])  # search recursively
        self.parent[element] = leader  # compress path by saving leader as parent
        return leader

    def union(self, leader1, leader2):
        rank1 = self.rank.get(leader1, 1)
        rank2 = self.rank.get(leader2, 1)
        if rank1 > rank2:  # union by rank
            self.parent[leader2] = leader1
        elif rank2 > rank1:
            self.parent[leader1] = leader2
        else:  # ranks are equal
            self.parent[leader2] = leader1  # favor leader1 arbitrarily
            self.rank[leader1] = rank1 + 1  # increment rank
And here's how you can use it for your problem, using x, y tuples for the nodes:
nodes = set()
groups = UnionFind()

with open('file.txt') as f:
    for y, line in enumerate(f):            # iterate over lines
        for x, char in enumerate(line):     # and characters within a line
            if char == '#':
                nodes.add((x, y))           # maintain a set of node coordinates
                # check for neighbors that have already been read
                neighbors = [(x-1, y-1),    # up-left
                             (x,   y-1),    # up
                             (x+1, y-1),    # up-right
                             (x-1, y)]      # left
                for neighbor in neighbors:
                    if neighbor in nodes:
                        my_group = groups.find((x, y))
                        neighbor_group = groups.find(neighbor)
                        if my_group != neighbor_group:
                            groups.union(my_group, neighbor_group)

# finally, count the number of unique groups
number_of_groups = len(set(groups.find(n) for n in nodes))
I am working on implementing an iterative deepening depth-first search to find solutions for the 8-puzzle problem. I am not interested in finding the actual search paths themselves, but rather just in timing how long the program takes to run. (I have not yet implemented the timing function.)
However, I am having some issues trying to implement the actual search function (scroll down to see). I pasted all the code I have so far, so if you copy and paste this, you can run it as well; that may be the best way to describe the problems I'm having. I'm just not understanding why I'm getting infinite loops during the recursion, e.g. in the test for puzzle 2 (p2), where the first expansion should yield a solution. I thought it might have something to do with not adding a "return" in front of one of the lines of code (it's commented below). When I add the return, I can pass the test for puzzle 2, but something more complex like puzzle 3 fails, since it appears that the code then only expands the leftmost branch...
Been at this for hours, and giving up hope. I would really appreciate another set of eyes on this, and if you could point out my error(s). Thank you!
#Classic 8 puzzle game
#Data Structure: [0,1,2,3,4,5,6,7,8], which is the goal state. 0 represents the blank
#We also want to ignore "backward" moves (reversing the previous action)

p1 = [0,1,2,3,4,5,6,7,8]
p2 = [3,1,2,0,4,5,6,7,8]
p3 = [3,1,2,4,5,8,6,0,7]

def z(p): #returns the location of the blank cell, which is represented by 0
    return p.index(0)

def left(p):
    zeroLoc = z(p)
    p[zeroLoc] = p[zeroLoc-1]
    p[zeroLoc-1] = 0
    return p

def up(p):
    zeroLoc = z(p)
    p[zeroLoc] = p[zeroLoc-3]
    p[zeroLoc-3] = 0
    return p

def right(p):
    zeroLoc = z(p)
    p[zeroLoc] = p[zeroLoc+1]
    p[zeroLoc+1] = 0
    return p

def down(p):
    zeroLoc = z(p)
    p[zeroLoc] = p[zeroLoc+3]
    p[zeroLoc+3] = 0
    return p

def expand1(p): #version 1, which generates all successors at once by copying parent
    x = z(p)
    #p[:] will make a copy of parent puzzle
    s = [] #set s of successors
    if x == 0:
        s.append(right(p[:]))
        s.append(down(p[:]))
    elif x == 1:
        s.append(left(p[:]))
        s.append(right(p[:]))
        s.append(down(p[:]))
    elif x == 2:
        s.append(left(p[:]))
        s.append(down(p[:]))
    elif x == 3:
        s.append(up(p[:]))
        s.append(right(p[:]))
        s.append(down(p[:]))
    elif x == 4:
        s.append(left(p[:]))
        s.append(up(p[:]))
        s.append(right(p[:]))
        s.append(down(p[:]))
    elif x == 5:
        s.append(left(p[:]))
        s.append(up(p[:]))
        s.append(down(p[:]))
    elif x == 6:
        s.append(up(p[:]))
        s.append(right(p[:]))
    elif x == 7:
        s.append(left(p[:]))
        s.append(up(p[:]))
        s.append(right(p[:]))
    else: #x == 8
        s.append(left(p[:]))
        s.append(up(p[:]))
    #returns set of all possible successors
    return s

goal = [0,1,2,3,4,5,6,7,8]

def DFS(root, goal): #iterative deepening DFS
    limit = 0
    while True:
        result = DLS(root, goal, limit)
        if result == goal:
            return result
        limit = limit + 1

visited = []

def DLS(node, goal, limit): #limited DFS
    if limit == 0 and node == goal:
        print "hi"
        return node
    elif limit > 0:
        visited.append(node)
        children = [x for x in expand1(node) if x not in visited]
        print "\n limit =", limit, "---", children #for testing purposes only
        for child in children:
            DLS(child, goal, limit - 1) #if I add "return" in front of this line, p2 passes the test below, but p3 will fail (only the leftmost branch of the tree is getting expanded...)
    else:
        return "No Solution"

#Below are tests
print "\ninput: ", p1
print "output: ", DFS(p1, goal)

print "\ninput: ", p2
print "output: ", DLS(p2, goal, 1)
#print "output: ", DFS(p2, goal)

print "\ninput: ", p3
print "output: ", DLS(p3, goal, 2)
#print "output: ", DFS(p2, goal)
The immediate issue you're having with your recursion is that you're not returning anything when you hit your recursive step. However, unconditionally returning the value from the first recursive call won't work either, since the first child isn't guaranteed to be the one that finds the solution. Instead, you need to test to see which (if any) of the recursive searches you're doing on your child states is successful. Here's how I'd change the end of your DLS function:
        for child in children:
            child_result = DLS(child, goal, limit - 1)
            if child_result != "No Solution":
                return child_result
    # note, "else" removed here, so you can fall through to the return below
    return "No Solution"
A slightly more "pythonic" (and faster) way of doing this would be to use None as the sentinel value rather than the "No Solution" string. Then your test would simply be if child_result: return child_result and you could optionally leave off the return statement for the failed searches (since None is the default return value of a function).
There are some other issues going on with your code that you'll run into once this recursion issue is fixed. For instance, using a global visited variable is problematic, unless you reset it each time you restart another recursive search. But I'll leave those to you!
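To tie the two suggestions together, here is a sketch (mine, not the answerer's exact code) of the end of DLS using None as the sentinel and taking visited as a parameter so it can be reset for every depth-limited search:
def DLS(node, goal, limit, visited):   # visited passed in, so the driver can reset it per search
    if limit == 0 and node == goal:
        return node
    elif limit > 0:
        visited.append(node)
        for child in [x for x in expand1(node) if x not in visited]:
            child_result = DLS(child, goal, limit - 1, visited)
            if child_result:           # None is falsy, so failed subtrees fall through
                return child_result
    # falling off the end returns None, the "no solution" sentinel
The driver would then call DLS(root, goal, limit, []) and test the result with "is not None".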
Use classes for your states! This should make things much easier. I don't want to post the whole solution right now, but here is something to get you started.
#example usage
cur = initialPuzzle
for k in range(0,5): # for 5 iterations. this will cycle through, so there is some coding to do
    allsucc = cur.succ() # get all successors as puzzle instances
    cur = allsucc[0] # expand first
    print 'expand ', cur

import copy

class puzzle:
    '''
    orientation
    [0, 1, 2
     3, 4, 5
     6, 7, 8]
    '''
    def __init__(self, p):
        self.p = p

    def z(self):
        ''' returns the location of the blank cell, which is represented by 0 '''
        return self.p.index(0)

    def swap(self, a, b):
        self.p[a] = self.p[b]
        self.p[b] = 0

    def left(self):
        self.swap(self.z(), self.z()+1) #FIXME: raise exception if not allowed

    def up(self):
        self.swap(self.z(), self.z()+3)

    def right(self):
        self.swap(self.z(), self.z()-1)

    def down(self):
        self.swap(self.z(), self.z()-3)

    def __str__(self):
        return str(self.p)

    def copyApply(self, func):
        cpy = self.copy()
        func(cpy)
        return cpy

    def makeCopies(self, s):
        ''' some bookkeeping '''
        flist = list()
        if 'U' in s:
            flist.append(self.copyApply(puzzle.up))
        if 'L' in s:
            flist.append(self.copyApply(puzzle.left))
        if 'R' in s:
            flist.append(self.copyApply(puzzle.right))
        if 'D' in s:
            flist.append(self.copyApply(puzzle.down))
        return flist

    def succ(self):
        # return all successor states for this puzzle state
        # short hand of allowed success states
        m = ['UL','ULR','UR','UDR','ULRD','UDL','DL','LRD','DR']
        ss = self.makeCopies(m[self.z()]) # map them to copies of puzzles
        return ss

    def copy(self):
        return copy.deepcopy(self)

# some initial state
p1 = [0,1,2,3,4,5,6,7,8]

print '*'*20
pz = puzzle(p1)

print pz
a, b = pz.succ()
print a, b
print a,b