I am trying to write a chess engine in Python. I can find the best move in a given position, but I am struggling to collect the principal variation from that position. Here is what I have tried so far:
def alphabeta(board, alpha, beta, depth, pvtable):
    if depth == 0:
        return evaluate.eval(board)
    for move in board.legal_moves:
        board.push(move)
        score = -alphabeta(board, -beta, -alpha, depth - 1, pvtable)
        board.pop()
        if score >= beta:
            return beta
        if score > alpha:
            alpha = score
            pvtable[depth-1] = str(move)
    return alpha
I am using pvtable[depth - 1] = str(move) to record moves, but in the end pvtable contains an inconsistent jumble of moves, e.g. ['g1h3', 'g8h6', 'h3g5', 'd8g5'] for the starting position.
I know similar questions have been asked before, but I still haven't figured out how to solve this problem.
I think your moves are overwritten when the search reaches the same depth again (in a different branch of the game tree).
This site explains quite well how to retrieve the principal variation: https://web.archive.org/web/20071031100114/http://www.brucemo.com:80/compchess/programming/pv.htm
Applied to your code example, it should be something like this (I didn't test it):
def alphabeta(board, alpha, beta, depth, pline):
    line = []
    if depth == 0:
        return evaluate.eval(board)
    for move in board.legal_moves:
        board.push(move)
        score = -alphabeta(board, -beta, -alpha, depth - 1, line)
        board.pop()
        if score >= beta:
            return beta
        if score > alpha:
            alpha = score
            pline[:] = [str(move)] + line
    return alpha
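For completeness, calling this from the root could look like the following (an untested sketch; I'm assuming python-chess, which board.push/board.pop/legal_moves suggest you are using, plus your own evaluate module):

import chess  # python-chess

board = chess.Board()
pv = []  # receives the principal variation as move strings
score = alphabeta(board, -100000, 100000, 3, pv)
print(score, pv)  # the root score followed by a consistent line of moves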
I am trying to simulate the following problem:
Given a 2D random walk (on a lattice grid) starting from the origin, what is the average waiting time to hit the line y = 1 - x?
import numpy as np
from tqdm import tqdm
N = 5*10**3
results = []
for _ in tqdm(range(N)):
    current = [0, 0]
    step = 0
    while (current[1] + current[0] != 1):
        step += 1
        a = np.random.randint(0, 4)
        if (a == 0):
            current[0] += 1
        elif (a == 1):
            current[0] -= 1
        elif (a == 2):
            current[1] += 1
        elif (a == 3):
            current[1] -= 1
    results.append(step)
This code is slow even for N < 10**4, and I am not sure how to optimize it or change it to simulate the problem properly.
Instead of simulating a bunch of random walks sequentially, let's simulate many paths at the same time and track the probability of each. For instance, we start at position 0 with probability 1:
states = {0+0j: 1}
and the possible moves along with their associated probabilities would be something like this:
moves = {1+0j: 0.25, 0+1j: 0.25, -1+0j: 0.25, 0-1j: 0.25}
# moves = {1: 0.5, -1: 0.5} # this would basically be equivalent
With this construct we can compute the new states by iterating over every combination of state and move, updating the probabilities accordingly:
def simulate_one_step(current_states):
    newStates = {}
    for cur_pos, prob_of_being_here in current_states.items():
        for movement_dist, prob_of_moving_this_way in moves.items():
            newStates.setdefault(cur_pos + movement_dist, 0)
            newStates[cur_pos + movement_dist] += prob_of_being_here * prob_of_moving_this_way
    return newStates
Then we just iterate this, popping out all winning states at each step:
for stepIdx in range(1, 100):
    states = simulate_one_step(states)
    winning_chances = 0
    # use set(...) to make a copy so we can delete cases out of states as we go.
    for pos, prob in set(states.items()):
        # if y = 1-x
        if pos.imag == 1 - pos.real:
            winning_chances += prob
            # we no longer consider this a state that propagated because the path stops here.
            del states[pos]
    print(f"probability of winning after {stepIdx} moves is: {winning_chances}")
You would also be able to look at states to get an idea of the distribution of possible positions, although totalling it in terms of distance from the line simplifies the data. Anyway, the final step is to weight the number of steps taken by the probability of taking that many steps and see whether the total converges:
total_average_num_moves += stepIdx * winning_chances
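Putting the pieces together (same states, moves and simulate_one_step as above, restarted from the origin), the whole thing is just:

states = {0+0j: 1}
total_average_num_moves = 0
cumulative_win_probability = 0
for stepIdx in range(1, 100):
    states = simulate_one_step(states)
    winning_chances = 0
    for pos, prob in set(states.items()):
        if pos.imag == 1 - pos.real:  # hit the line y = 1-x
            winning_chances += prob
            del states[pos]  # the path stops here
    cumulative_win_probability += winning_chances
    total_average_num_moves += stepIdx * winning_chances

print(cumulative_win_probability, total_average_num_moves)

cumulative_win_probability creeps towards 1, while total_average_num_moves keeps growing as you raise the step limit, which is the first hint that the average does not converge.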
But we might be able to gather more insight by using symbolic variables! (Note: I'm simplifying this to a 1D problem; how is described at the bottom.)
import sympy
x = sympy.Symbol("x") # will sub in 1/2 later
moves = {
1: x, # assume x is the chances for us to move towards the target
-1: 1-x # and therefore 1-x is the chance of moving away
}
This, with the exact code as written above, gives us this sequence:
probability of winning after 1 moves is: x
probability of winning after 2 moves is: 0
probability of winning after 3 moves is: x**2*(1 - x)
probability of winning after 4 moves is: 0
probability of winning after 5 moves is: 2*x**3*(1 - x)**2
probability of winning after 6 moves is: 0
probability of winning after 7 moves is: 5*x**4*(1 - x)**3
probability of winning after 8 moves is: 0
probability of winning after 9 moves is: 14*x**5*(1 - x)**4
probability of winning after 10 moves is: 0
probability of winning after 11 moves is: 42*x**6*(1 - x)**5
probability of winning after 12 moves is: 0
probability of winning after 13 moves is: 132*x**7*(1 - x)**6
And if we ask the OEIS what the sequence 1, 2, 5, 14, 42, 132, ... means, it tells us those are the Catalan numbers, with the formula (2n)!/(n!(n+1)!), so we can write a function for the non-zero terms in that series as:
f(n,x) = (2n)! / (n! * (n+1)!) * x^(n+1) * (1-x)^n
or in actual code:
import math
def probability_of_winning_after_2n_plus_1_steps(n, prob_of_moving_forward=0.5):
    return (math.factorial(2*n)/math.factorial(n)/math.factorial(n+1)
            * prob_of_moving_forward**(n+1) * (1-prob_of_moving_forward)**n)
which now gives us an essentially instant way of calculating the relevant probabilities for any length, or, more usefully, of asking Wolfram Alpha what the average would be (it diverges).
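As a quick numerical check with the function above (keeping n modest, because the factorial ratio overflows a float for large n):

total_prob = 0.0
partial_expectation = 0.0
for n in range(100):
    p = probability_of_winning_after_2n_plus_1_steps(n)  # chance the first hit is at step 2n+1
    total_prob += p
    partial_expectation += (2*n + 1) * p

print(total_prob)  # slowly approaches 1 as more terms are added
print(partial_expectation)  # keeps growing with more terms, i.e. the average diverges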
Note that we can simplify this to a 1D problem by considering x + y as one variable: we start at x + y = 0, each move increases or decreases x + y by 1 with equal chance, and we are interested in when x + y = 1 (the line y = 1 - x). This means we can consider the 1D case by substituting z = x + y.
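As a quick sanity check of that reduction, a small vectorised sketch of the 1D walk (just numpy; note the sample mean is very noisy because the hitting time is heavy-tailed):

import numpy as np

rng = np.random.default_rng()

def first_hit_time_1d(max_steps=10**5):
    # first step at which z reaches 1 in a +/-1 random walk, or -1 if it never does
    z = np.cumsum(rng.choice([1, -1], size=max_steps))
    hits = np.flatnonzero(z == 1)
    return int(hits[0]) + 1 if hits.size else -1

samples = [first_hit_time_1d() for _ in range(1000)]
hit = [s for s in samples if s > 0]
print(len(hit) / len(samples), np.mean(hit))  # fraction that hit, and a (noisy) mean hitting time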
Vectorisation results in much faster code, roughly 90K times faster. Here is a function that returns the step at which a walk starting from (0, 0) first hits the line y = 1 - x, along with the trajectory generated on the 2D grid with unit steps.
import numpy as np

def _random_walk_2D(sim_steps):
    """ Walk on 2D unit steps
        return x_sim, y_sim, trajectory, number_of_steps_first_hit to y=1-x """
    random_moves_x = np.insert(np.random.choice([1, 0, -1], sim_steps), 0, 0)
    random_moves_y = np.insert(np.random.choice([1, 0, -1], sim_steps), 0, 0)
    x_sim = np.cumsum(random_moves_x)
    y_sim = np.cumsum(random_moves_y)
    trajectory = np.array((x_sim, y_sim)).T
    y_hat = 1 - x_sim  # checking if hit y=1-x
    y_hit = y_hat - y_sim
    hit_steps = np.where(y_hit == 0)
    number_of_steps_first_hit = -1
    if hit_steps[0].shape[0] > 0:
        number_of_steps_first_hit = hit_steps[0][0]
    return x_sim, y_sim, trajectory, number_of_steps_first_hit
If number_of_steps_first_hit is -1, it means the trajectory did not hit the line.
A longer simulation with many repeats would give the average behaviour; the following run suggests that, if the walk does not escape to infinity, it hits the line after ~84 steps on average.
sim_steps = 5*10**3  # 5K steps

# Repeat
nrepeat = 40000
hit_step = [_random_walk_2D(sim_steps)[3] for _ in range(nrepeat)]
hit_step = [h for h in hit_step if h > -1]
np.mean(hit_step)  # ~84 steps
Much longer sim_steps will change the result though.
PS:
Good exercise. I hope this wasn't homework; if it was, please cite this answer if you use it.
Edit
As discussed in the comments, the current _random_walk_2D allows 8-directional (diagonal) moves. To restrict it to the cardinal directions, we could apply the following filtering:
cardinal_x_y = [(t[0], t[1]) for t in zip(random_moves_x, random_moves_y)
if np.abs(t[0]) != np.abs(t[1])]
random_moves_x = [t[0] for t in cardinal_x_y]
random_moves_y = [t[1] for t in cardinal_x_y]
Though this would slow the function down a bit, it will still be much faster than the for-loop solution.
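Alternatively, the cardinal moves can be drawn directly instead of filtered afterwards, which keeps the number of steps intact and stays vectorised (a sketch reusing sim_steps from above):

unit_moves = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]])  # the four cardinal moves
choices = np.random.randint(0, 4, sim_steps)
random_moves_x = np.insert(unit_moves[choices, 0], 0, 0)
random_moves_y = np.insert(unit_moves[choices, 1], 0, 0)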
I've been trying to implement a transition from one amount of spacing to another, similar to acceleration and deceleration, but I failed and the only thing I got was an infinite stack of circles. Here is a screenshot showing this in action:
You can see a very black circle here, which is in reality something like 100 or 200 circles stacked on top of each other.
I reached this result using this piece of code:
def Place_circles(curve, circle_space, cs, draw=True, screen=None):
    curve_acceleration = []
    if type(curve) == tuple:
        curve_acceleration = curve[1][0]
        curve_intensity = curve[1][1]
        curve = curve[0]
    #print(curve_intensity)
    #print(curve_acceleration)
    Circle_list = []
    idx = [0, 0]
    for c in reversed(range(0, len(curve))):
        for p in reversed(range(0, len(curve[c]))):
            user_dist = circle_space[curve_intensity[c]] + curve_acceleration[c] * p
            dist = math.sqrt(math.pow(curve[c][p][0] - curve[idx[0]][idx[1]][0], 2) + math.pow(curve[c][p][1] - curve[idx[0]][idx[1]][1], 2))
            if dist > user_dist:
                idx = [c, p]
                Circle_list.append(circles.circles(round(curve[c][p][0]), round(curve[c][p][1]), cs, draw, screen))
This places circles depending on the intensity of the current curve (a random number between 0 and 2), which maps to an amount of spacing (let's say between 20 and 30 here: 20 being index 0, 30 being index 2, and a number between these two being index 1).
This creates the stack you see above, which isn't what I want. I also came to the conclusion that I cannot use acceleration, since the time to move between two points depends on the number of circles I need to click; there are multiple circles between each pair of points, and not being able to determine how many leaves me unable to use the classic acceleration formula.
So I'm running out of options and ideas on how to transition from one amount of spacing to another.
Any ideas?
PS: I scrapped the idea above and switched back to my master branch, but the code for this is still available in the branch I created here: https://github.com/Mrcubix/Osu-StreamGenerator/tree/acceleration .
So now I'm back with my normal code, which has no acceleration or deceleration.
TL;DR: I can't use acceleration since I don't know how many circles will be placed between two points, which makes the travel time vary (for example, I need to click circles at 180 BPM, i.e. one circle every 0.333 s), so I'm looking for another way to generate gradually changing spacing.
First, I took my function that generates the intensity for each curve, in [0; 2].
Then I scrapped the acceleration formula, as it's unusable.
Now I'm using a basic algorithm to determine the maximum number of circles I can place on a curve.
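Roughly, that comes down to dividing the curve length by the spacing. A small sketch of the idea (hypothetical helper, assuming a curve given as a list of (x, y) points):

import math

def max_circles_on_curve(curve, spacing):
    # hypothetical helper: how many circles fit on a polyline at a fixed spacing
    length = sum(math.dist(curve[i], curve[i + 1]) for i in range(len(curve) - 1))
    return int(length // spacing)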
Now, the way my script works is the following:
I first generate a stream (multiple circles that need to be clicked at a high BPM).
This way I obtain the length of each curve (or segment) of the polyline.
I generate an intensity for each curve using the following function:
def generate_intensity(Circle_list: list = None, circle_space: int = None, Args: list = None):
    curve_intensity = []
    if not Args or Args[0] == "NewProfile":
        prompt = True
        while prompt:
            max_duration_intensity = input("Choose the maximum amount of curve the change in intensity will occur for: ")
            if max_duration_intensity.isdigit():
                max_duration_intensity = int(max_duration_intensity)
                prompt = False
        prompt = True
        while prompt:
            intensity_change_odds = input("Choose the odds of occurrence for changes in intensity (1-100): ")
            if intensity_change_odds.isdigit():
                intensity_change_odds = int(intensity_change_odds)
                if 0 < intensity_change_odds <= 100:
                    prompt = False
        prompt = True
        while prompt:
            min_intensity = input("Choose the lowest amount of spacing a circle will have: ")
            if min_intensity.isdigit():
                min_intensity = float(min_intensity)
                if min_intensity < circle_space:
                    prompt = False
        prompt = True
        while prompt:
            max_intensity = input("Choose the highest amount of spacing a circle will have: ")
            if max_intensity.isdigit():
                max_intensity = float(max_intensity)
                if max_intensity > circle_space:
                    prompt = False
        prompt = True
    if Args:
        if Args[0] == "NewProfile":
            return [max_duration_intensity, intensity_change_odds, min_intensity, max_intensity]
        elif Args[0] == "GenMap":
            max_duration_intensity = Args[1]
            intensity_change_odds = Args[2]
            min_intensity = Args[3]
            max_intensity = Args[4]
    circle_space = ([min_intensity, circle_space, max_intensity] if not Args else [Args[0][3], circle_space, Args[0][4]])
    count = 0
    for idx, i in enumerate(Circle_list):
        if idx == len(Circle_list) - 1:
            if random.randint(0, 100) < intensity_change_odds:
                if random.randint(0, 100) > 50:
                    curve_intensity.append(2)
                else:
                    curve_intensity.append(0)
            else:
                curve_intensity.append(1)
        if random.randint(0, 100) < intensity_change_odds:
            if random.randint(0, 100) > 50:
                curve_intensity.append(2)
                count += 1
            else:
                curve_intensity.append(0)
                count += 1
        else:
            if curve_intensity:
                if curve_intensity[-1] == 2 and not count + 1 > max_duration_intensity:
                    curve_intensity.append(2)
                    count += 1
                    continue
                elif curve_intensity[-1] == 0 and not count + 1 > max_duration_intensity:
                    curve_intensity.append(0)
                    count += 1
                    continue
                elif count + 1 > 2:
                    curve_intensity.append(1)
                    count = 0
                    continue
                else:
                    curve_intensity.append(1)
            else:
                curve_intensity.append(1)
    curve_intensity.reverse()
    if curve_intensity.count(curve_intensity[0]) == len(curve_intensity):
        print("Intensity didn't change")
        return circle_space[1]
    print("\n")
    return [circle_space, curve_intensity]
With this, I obtain two lists: one with the spacings I specified, and a second with the randomly generated intensities.
From there I call another function that takes as arguments the polyline, the previously specified spacings and the generated intensities:
def acceleration_algorithm(polyline, circle_space, curve_intensity):
    new_circle_spacing = []
    for idx in range(len(polyline)):  # repeat 4 times
        spacing = []
        Length = 0
        best_spacing = 0
        for p_idx in range(len(polyline[idx])-1):  # repeat 1000 times / p_idx in [0 ; 1000]
            # Create multiple lists containing spacings going from circle_space[curve_intensity[idx]] to circle_space[curve_intensity[idx+1]]
            spacing.append(np.linspace(circle_space[curve_intensity[idx]], circle_space[curve_intensity[idx+1]], p_idx).tolist())
            # Sum distances to find the length of the curve
            Length += abs(math.sqrt((polyline[idx][p_idx+1][0] - polyline[idx][p_idx][0]) ** 2 + (polyline[idx][p_idx+1][1] - polyline[idx][p_idx][1]) ** 2))
        for s in range(len(spacing)):  # probably has 1000 lists in 1 list
            length_left = Length  # Make sure to reset length for each iteration
            for dist in spacing[s]:  # subtract the spacings in spacing[s]
                length_left -= dist
            if length_left > 0:
                best_spacing = s
            else:  # Since length < 0, use the previous working index (best_spacing); could also just do `s-1`
                if spacing[best_spacing] == []:
                    new_circle_spacing.append([circle_space[1]])
                    continue
                new_circle_spacing.append(spacing[best_spacing])
                break
    return new_circle_spacing
With this, I obtain a list with the spacing between each of the circles that are going to be placed.
From there, I can call Place_circles() again and obtain the new stream:
def Place_circles(polyline, circle_space, cs, DoDrawCircle=True, surface=None):
    Circle_list = []
    curve = []
    next_circle_space = None
    dist = 0
    for c in reversed(range(0, len(polyline))):
        curve = []
        if type(circle_space) == list:
            iter_circle_space = iter(circle_space[c])
            next_circle_space = next(iter_circle_space, circle_space[c][-1])
        for p in reversed(range(len(polyline[c])-1)):
            dist += math.sqrt((polyline[c][p+1][0] - polyline[c][p][0]) ** 2 + (polyline[c][p+1][1] - polyline[c][p][1]) ** 2)
            if dist > (circle_space if type(circle_space) == int else next_circle_space):
                dist = 0
                curve.append(circles.circles(round(polyline[c][p][0]), round(polyline[c][p][1]), cs, DoDrawCircle, surface))
                if type(circle_space) == list:
                    next_circle_space = next(iter_circle_space, circle_space[c][-1])
        Circle_list.append(curve)
    return Circle_list
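For context, the whole pipeline described above is wired together roughly like this (a sketch that skips the early return when the intensity doesn't change; Circle_list, polyline, circle_space, Args and cs come from the rest of the script):

circle_space_list, curve_intensity = generate_intensity(Circle_list, circle_space, Args)
new_spacing = acceleration_algorithm(polyline, circle_space_list, curve_intensity)
Circle_list = Place_circles(polyline, new_spacing, cs)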
The result is a stream with varying spacing between circles (so accelerating or decelerating). The only issue left to fix is pygame not updating the screen with the new set of circles after I call Place_circles(), but that's an issue I'm either going to try to fix myself or ask about in another post.
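For reference on that last point, the usual pygame pattern is to clear, redraw and flip once per frame (a generic sketch, not the repo's actual code; the draw call on the circle objects is hypothetical):

import pygame

def redraw(screen, Circle_list):
    screen.fill((0, 0, 0))  # wipe the previous frame
    for circle in Circle_list:
        circle.draw(screen)  # hypothetical: however the circle objects draw themselves
    pygame.display.flip()  # push the new frame to the window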
The final code for this feature can be found in my repo: https://github.com/Mrcubix/Osu-StreamGenerator/tree/Acceleration_v02
I'm evaluating chess positions; the implementation isn't really relevant. I've inserted print statements to see how many paths I'm able to prune, but nothing is printed, meaning I don't prune anything.
I've understood the algorithm and followed the pseudo-code to the letter. Does anyone have any idea what's going wrong?
def alphabeta(self,node,depth,white, alpha,beta):
    ch = Chessgen()
    if(depth == 0 or self.is_end(node)):
        return self.stockfish_evaluation(node.board)
    if (white):
        value = Cp(-10000)
        for child in ch.chessgen(node):
            value = max(value, self.alphabeta(child,depth-1,False, alpha,beta))
            alpha = max(alpha, value)
            if (alpha >= beta):
                print("Pruned white")
                break
        return value
    else:
        value = Cp(10000)
        for child in ch.chessgen(node):
            value = min(value, self.alphabeta(child,depth-1,True, alpha,beta))
            beta = min(beta,value)
            if(beta <= alpha):
                print("Pruned black")
                break
        return value
What is your pseudo-code?
The one I found gives slightly different code.
As I do not have your full code, I cannot run this:
def alphabeta(self,node,depth,white, alpha,beta):
    ch = Chessgen() ### can you do the init somewhere else to speed up the code ?
    if(depth == 0 or self.is_end(node)):
        return self.stockfish_evaluation(node.board)
    if (white):
        value = Cp(-10000)
        for child in ch.chessgen(node):
            value = max(value, self.alphabeta(child,depth-1,False, alpha,beta))
            if (value >= beta):
                print("Pruned white")
                return value
            alpha = max(alpha, value)
        return value
    else:
        value = Cp(10000)
        for child in ch.chessgen(node):
            value = min(value, self.alphabeta(child,depth-1,True, alpha,beta))
            if(value <= alpha):
                print("Pruned black")
                return value
            beta = min(beta,value)
        return value
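For reference, pruning also depends on how the search is started: the root call should use the widest window so the bounds can tighten as siblings are searched. A sketch (engine, root_node and the depth are placeholders; Cp is your evaluation type):

# nothing can be cut off at depth 1; cutoffs first become possible at depth 2 or more,
# once the first root child has tightened alpha
best = engine.alphabeta(root_node, 4, True, Cp(-10000), Cp(10000))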
A full working simple chess program can be found here:
https://andreasstckl.medium.com/writing-a-chess-program-in-one-day-30daff4610ec
For a project I'm working on, I'm trying to write some code to detect collisions between non-point particles in a 2D space. My goal is to detect collisions for at least a few thousand particles a few times per time step, which I know is a tall order for Python. I've followed this blog post, which implements a quadtree to significantly reduce the number of pairwise checks I need to make. Where I believe I'm running into issues is this function:
def get_index(self, particle):
    index = -1
    bounds = particle.aabb
    v_midpoint = self.bounds.x + self.bounds.width/2
    h_midpoint = self.bounds.y + self.bounds.height/2
    top_quad = bounds.y < h_midpoint and bounds.y + bounds.height < h_midpoint
    bot_quad = bounds.y > h_midpoint
    if bounds.x < v_midpoint and bounds.x + bounds.width < v_midpoint:
        if top_quad:
            index = 1
        elif bot_quad:
            index = 2
    elif bounds.x > v_midpoint:
        if top_quad:
            index = 0
        elif bot_quad:
            index = 3
    return index
From my initial profiling, this function is the bottleneck, and because of its high call count I need it to be blisteringly fast. Originally I was just supplying an object's axis-aligned bounding box, which worked at almost the speed I needed, but then I realized I had no way of determining which particles might actually be colliding. So now I'm passing a list of particles to my quadtree constructor and using the class attribute aabb to get my bounds.
Is there some way I could pass something analogous to an object pointer instead of the whole object? Additionally, are there other recommendations for optimizing the code above?
Don't know if they'll help, but here are a few ideas:
v_midpoint and h_midpoint are re-calculated for every particle added to the quadtree. Instead, calculate them once when a Quad is initialized, then access them as attributes.
I don't think the and is needed in calculating top_quad: bounds.y + bounds.height < h_midpoint is sufficient on its own. The same goes for the left check, where bounds.x + bounds.width < v_midpoint is sufficient.
Do the simpler checks first and only do the longer one if necessary: bounds.x > v_midpoint vs. bounds.x + bounds.width < v_midpoint
bounds.x + bounds.width is calculated multiple times for most particles. Maybe bounds.left and bounds.right can be calculated once as attributes of each particle.
No need to calculate bot_quad if top_quad is True, or vice versa.
Maybe like this:
def get_index(self, particle):
    bounds = particle.aabb
    # right
    if bounds.x > self.v_midpoint:
        # bottom
        if bounds.y > self.h_midpoint:
            return 3
        # top
        elif bounds.y + bounds.height < self.h_midpoint:
            return 0
    # left
    elif bounds.x + bounds.width < self.v_midpoint:
        # bottom
        if bounds.y > self.h_midpoint:
            return 2
        # top
        elif bounds.y + bounds.height < self.h_midpoint:
            return 1
    return -1
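For this version to work, the midpoints have to exist as attributes, computed once when the node is created, e.g. (a sketch, assuming a bounds object with x, y, width and height):

class Quad:
    def __init__(self, bounds):
        self.bounds = bounds
        # computed once per node instead of once per get_index call
        self.v_midpoint = bounds.x + bounds.width / 2
        self.h_midpoint = bounds.y + bounds.height / 2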
I developed my own program in Python for solving the 8-puzzle. Initially I used "blind" or uninformed search (basically brute-forcing), generating and exploring all possible successors and using breadth-first search. When it finds the "goal" state, it back-tracks to the initial state and delivers what I believe is the most optimized sequence of steps to solve it. Of course, there were initial states where the search would take a lot of time and generate over 100,000 states before finding the goal.
Then I added the heuristic - Manhattan distance. The solutions started coming dramatically faster and with far fewer explored states. But my confusion is that sometimes the optimized sequence generated was longer than the one reached using blind or uninformed search.
What I am doing is basically this:
For each state, look at all possible moves (up, down, left and right) and generate the successor states.
Check whether a state is a repeat. If yes, ignore it.
Calculate the Manhattan distance for the state.
Pick out the successor(s) with the lowest Manhattan distance and add them to the end of the list.
Check for the goal state. If found, break the loop.
I am not sure whether this would qualify as greedy best-first search or A*.
My question is: is it an inherent flaw of the Manhattan distance heuristic that it sometimes does not give the optimal solution, or am I doing something wrong?
Below is the code. I apologize that it is not very clean, but being mostly sequential it should be simple to understand. I also apologize for its length - I know I need to optimize it. I would also appreciate any suggestions or guidance for cleaning it up. Here it is:
import numpy as np
from copy import deepcopy
import sys
# calculate Manhattan distance for each digit as per goal
def mhd(s, g):
    m = abs(s // 3 - g // 3) + abs(s % 3 - g % 3)
    return sum(m[1:])

# assign each digit the coordinate to calculate Manhattan distance
def coor(s):
    c = np.array(range(9))
    for x, y in enumerate(s):
        c[y] = x
    return c
#################################################
def main():
    goal = np.array( [1, 2, 3, 4, 5, 6, 7, 8, 0] )
    rel = np.array([-1])
    mov = np.array([' '])
    string = '102468735'
    inf = 'B'
    pos = 0
    yes = 0
    goalc = coor(goal)
    puzzle = np.array([int(k) for k in string]).reshape(1, 9)
    rnk = np.array([mhd(coor(puzzle[0]), goalc)])
    while True:
        loc = np.where(puzzle[pos] == 0)  # locate '0' (blank) on the board
        loc = int(loc[0])
        child = np.array([], int).reshape(-1, 9)
        cmove = []
        crank = []
        # generate successors on possible moves - new states no repeats
        if loc > 2:  # if 'up' move is possible
            succ = deepcopy(puzzle[pos])
            succ[loc], succ[loc - 3] = succ[loc - 3], succ[loc]
            if ~(np.all(puzzle == succ, 1)).any():  # repeat state?
                child = np.append(child, [succ], 0)
                cmove.append('up')
                crank.append(mhd(coor(succ), goalc))  # manhattan distance
        if loc < 6:  # if 'down' move is possible
            succ = deepcopy(puzzle[pos])
            succ[loc], succ[loc + 3] = succ[loc + 3], succ[loc]
            if ~(np.all(puzzle == succ, 1)).any():  # repeat state?
                child = np.append(child, [succ], 0)
                cmove.append('down')
                crank.append(mhd(coor(succ), goalc))
        if loc % 3 != 0:  # if 'left' move is possible
            succ = deepcopy(puzzle[pos])
            succ[loc], succ[loc - 1] = succ[loc - 1], succ[loc]
            if ~(np.all(puzzle == succ, 1)).any():  # repeat state?
                child = np.append(child, [succ], 0)
                cmove.append('left')
                crank.append(mhd(coor(succ), goalc))
        if loc % 3 != 2:  # if 'right' move is possible
            succ = deepcopy(puzzle[pos])
            succ[loc], succ[loc + 1] = succ[loc + 1], succ[loc]
            if ~(np.all(puzzle == succ, 1)).any():  # repeat state?
                child = np.append(child, [succ], 0)
                cmove.append('right')
                crank.append(mhd(coor(succ), goalc))
        for s in range(len(child)):
            if (inf in 'Ii' and crank[s] == min(crank)) \
                    or (inf in 'Bb'):
                puzzle = np.append(puzzle, [child[s]], 0)
                rel = np.append(rel, pos)
                mov = np.append(mov, cmove[s])
                rnk = np.append(rnk, crank[s])
                if np.array_equal(child[s], goal):
                    print()
                    print('Goal achieved!. Successors generated:', len(puzzle) - 1)
                    yes = 1
                    break
        if yes == 1:
            break
        pos += 1
    # generate optimized steps by back-tracking the steps to the initial state
    optimal = np.array([], int).reshape(-1, 9)
    last = len(puzzle) - 1
    optmov = []
    rank = []
    while last != -1:
        optimal = np.insert(optimal, 0, puzzle[last], 0)
        optmov.insert(0, mov[last])
        rank.insert(0, rnk[last])
        last = int(rel[last])
    # show optimized steps
    optimal = optimal.reshape(-1, 3, 3)
    print('Total optimized steps:', len(optimal) - 1)
    print()
    for s in range(len(optimal)):
        print('Move:', optmov[s])
        print(optimal[s])
        print('Manhattan Distance:', rank[s])
        print()
    print()

################################################################
# Main Program
if __name__ == '__main__':
    main()
Here are some initial states and the number of optimized steps calculated, if you would like to check (the code above gives the option to choose between blind and informed search):
Initial states
- 283164507 Blind: 19 Manhattan: 21
- 243780615 Blind: 15 Manhattan: 21
- 102468735 Blind: 11 Manhattan: 17
- 481520763 Blind: 13 Manhattan: 23
- 723156480 Blind: 16 Manhattan: 20
I have deliberately chosen examples where the results come quickly (within seconds or a few minutes).
Your help and guidance would be much appreciated.
Edit: I have made some quick changes and managed to remove some 30+ lines. Unfortunately I can't do much more at this time.
Note: I have hardcoded the initial state and the blind vs. informed choice. Please change the value of the variable "string" for the initial state, and the variable "inf" [I/B] for informed/blind. Thanks!