As I understand it, when implementing iterative deepening, the best move found at one depth should be used to order moves at higher depths. I have one issue with this: say I get move m as my best move at depth n. When searching at depth n + 1, should the move orderer prioritize m only at the root of the search, or at every ply where m is legal?
My current implementation of iterative deepening:
Search:
pvLine = None
for depth in range(1, self.maxDepth):
    self.auxSearch(board, depth, initalHash)
    # find the principal variation from the TT
    pvLine = self.getPVLine(board, initalHash)
    bestMove = pvLine[0][0]
    bestValue = pvLine[0][1]
    self.ordering.setBestMove(bestMove, depth + 1)
    print(f'{depth=} | {bestValue=} | {bestMove=} | {pvLine=}')
return pvLine
Move ordering:
if (move, depth) == self.bestMove:
    priority += self.BESTMOVE_BONUS
setBestMove function:
def setBestMove(self, move: chess.Move, depth: int) -> None:
    self.bestMove = (move, depth)
self.BESTMOVE_BONUS is a very big number, so the move will have the highest priority.
Currently, I am making sure that the move orderer only prioritizes the best move from the previous, shallower search at the root of the current search. I am not sure whether this approach is correct.
Move ordering will give you a much faster search than having no ordering at all, and it is fairly easy to implement. You can read more about it here: https://www.chessprogramming.org/Move_Ordering.
I suggest you do as you are doing now and put the best move from the previous iteration first. The best move sequence from the previous depth (the "principal variation") should be tried first at every ply it covers, not only at the root. So if you get the sequence of moves a1, b1, and c1 from depth 3, then at depth 4 you try a1 first at ply 1, b1 first at ply 2, and c1 first at ply 3.
Second, you should try good capture moves, often found with MVV-LVA (Most Valuable Victim - Least Valuable Attacker). Capturing a queen with a pawn is usually a good move, but the other way around could be bad if the pawn is protected.
Other easy-to-implement techniques are killer moves and history moves, also described in the link above.
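To make the ordering concrete, here is a rough sketch (not the asker's code, and not the only way to do it) of how these pieces can fit together with python-chess: the principal-variation move for the current ply first, then MVV-LVA captures, then everything else. The PIECE_VALUES table and the pv_move parameter are illustrative assumptions.

import chess

PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}

def order_moves(board: chess.Board, pv_move: chess.Move = None):
    def score(move: chess.Move) -> int:
        s = 0
        if pv_move is not None and move == pv_move:
            s += 1_000_000  # the PV move for this ply gets the highest priority
        if board.is_capture(move):
            victim = board.piece_at(move.to_square)      # None for en passant
            attacker = board.piece_at(move.from_square)
            if victim is not None and attacker is not None:
                # MVV-LVA: prefer capturing valuable pieces with cheap ones
                s += 10 * PIECE_VALUES[victim.piece_type] - PIECE_VALUES[attacker.piece_type]
        return s
    return sorted(board.legal_moves, key=score, reverse=True)

The important point for the original question is that pv_move here would be the PV move for the current ply, not only the root move.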
I was working on a medium-level LeetCode question, 11. Container With Most Water. Besides the brute-force solution with O(n^2) complexity, there is an optimal O(n) solution that uses two pointers starting from the left and right ends of the container. I am a little confused about why this "two pointers" method is guaranteed to find the optimal solution. Does anyone know how to prove the correctness of this algorithm mathematically? This is an algorithm I didn't know about. Thank you!
The original question is:
You are given an integer array height of length n. There are n vertical lines drawn such that the two endpoints of the ith line are (i, 0) and (i, height[i]).
Find two lines that together with the x-axis form a container, such that the container contains the most water. Return the maximum amount of water a container can store. Notice that you may not slant the container.
A brute-force solution for this question is (O(n^2)):
def maxArea(self, height: List[int]) -> int:
    length = len(height)
    volumn = 0
    # calculate all possible combinations, and compare one by one:
    for position1 in range(0, length):
        for position2 in range(position1 + 1, length):
            if min(height[position1], height[position2]) * (position2 - position1) >= volumn:
                volumn = min(height[position1], height[position2]) * (position2 - position1)
            else:
                volumn = volumn
    return volumn
The optimal solution approach; the code I wrote is like this (O(n)):
def maxArea(self, height: List[int]) -> int:
    pointerOne, pointerTwo = 0, len(height) - 1
    maxVolumn = 0
    # Move left or right pointer one step for whichever is smaller
    while pointerOne != pointerTwo:
        if height[pointerOne] <= height[pointerTwo]:
            maxVolumn = max(height[pointerOne] * (pointerTwo - pointerOne), maxVolumn)
            pointerOne += 1
        else:
            maxVolumn = max(height[pointerTwo] * (pointerTwo - pointerOne), maxVolumn)
            pointerTwo -= 1
    return maxVolumn
Does anyone know why this "two pointers" method can find the optimal solution? Thanks!
Based on my understanding the idea is roughly:
Start from the widest pair of bars (i.e. the first and last bar) and then narrow the width to find potentially better pairs.
Steps:
We need the ability to loop over all 'potential' candidates (the candidates that could be better than what we have on hand, rather than all candidates as in your brute-force solution); starting from the outermost bars ensures no inner pair will be missed.
If a better inner pair exists, its height must be greater than the bars we have on hand, so you should not just #Move left or right pointer one step but rather move the left or right pointer to the next taller bar (see the sketch below).
Why #Move left or right pointer for whichever is smaller? Because the smaller bar limits the container and doesn't fulfill the 'potential' of the taller bar.
The core idea behind the steps is: start from a range that is guaranteed to contain the optimal solution (step 1), then with each step move toward a potentially better solution than what you have on hand (steps 2 and 3), until you finally reach the optimal solution.
One question left for you to think about: what makes sure the optimal solution is not missed when you execute the steps above? :)
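For what it's worth, here is a rough sketch of the "jump to the next taller bar" variant described in step 2 (my own illustration, not the asker's code). It is still O(n) overall, since each pointer only ever moves inward.

from typing import List

def max_area(height: List[int]) -> int:
    left, right = 0, len(height) - 1
    best = 0
    while left < right:
        shorter = min(height[left], height[right])
        best = max(best, shorter * (right - left))
        # skip every bar that cannot beat the current shorter side
        if height[left] <= height[right]:
            while left < right and height[left] <= shorter:
                left += 1
        else:
            while left < right and height[right] <= shorter:
                right -= 1
    return best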
An informal proof could go something like this: imagine we are at some position in the iteration before reaching the optimal pair:
|
|
|~~~~~~~~~~~~~~~~~~~~~~~|
|~~~~~~~~~~~~~~~~~~~~~~~|
|~~~~~~~~~~~~~~~~~~~~~~~|
|~~~~~~~~~~~~~~~~~~~~~~~|
^ ^
A B
Now let's fix A (the shorter vertical line) and consider all of the choices left of B that we could pair with it. Clearly all of them yield a container holding no more water than we currently have between A and B: the width shrinks while the height is still capped by A.
Since we have stated that we have yet to reach the optimal solution, and everything outside of A and B has already been discarded by this same argument in earlier iterations, A cannot be one of the lines contributing to it. Therefore, we move its pointer.
Q.E.D.
I'm working through MIT6.0002 on OpenCourseWare (https://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-0002-introduction-to-computational-thinking-and-data-science-fall-2016/assignments/) and I am stumped on Part B of Problem Set 1. The problem, which is presented as a version of the knapsack problem, is stated as follows:
[The Aucks have found a colony of geese that lay golden eggs of various weights] They want to carry as few eggs as possible on their trip as they don’t have a lot of space
on their ships. They have taken detailed notes on the weights of all the eggs that geese can lay
in a given flock and how much weight their ships can hold.
Implement a dynamic programming algorithm to find the minimum number of eggs needed to
make a given weight for a certain ship in dp_make_weight. The result should be an integer
representing the minimum number of eggs from the given flock of geese needed to make the
given weight. Your algorithm does not need to return what the weight of the eggs are, just the
minimum number of eggs.
Assumptions:
All the eggs weights are unique between different geese, but a given goose will always lay the same size egg
The Aucks can wait around for the geese to lay as many eggs as they need [ie there is an infinite supply of each size of egg].
There are always eggs of size 1 available
The problem also states that the solution must use dynamic programming. I have written a solution (in Python) which I think finds the optimal solution, but it does not use dynamic programming, and I fail to understand how dynamic programming is applicable. It was also suggested that the solution should use recursion.
Can anybody explain to me what the advantage is of using memoization in this case, and what I would gain by implementing a recursive solution?
(Apologies if my question is too vague or if the solution is too obvious for words; I'm a relative beginner to programming, and to this site).
My code:
#================================
# Part B: Golden Eggs
#================================
# Problem 1
def dp_make_weight(egg_weights, target_weight, memo = {}):
    """
    Find number of eggs to bring back, using the smallest number of eggs. Assumes there is
    an infinite supply of eggs of each weight, and there is always a egg of value 1.

    Parameters:
    egg_weights - tuple of integers, available egg weights sorted from smallest to largest value (1 = d1 < d2 < ... < dk)
    target_weight - int, amount of weight we want to find eggs to fit
    memo - dictionary, OPTIONAL parameter for memoization (you may not need to use this parameter depending on your implementation)

    Returns: int, smallest number of eggs needed to make target weight
    """
    egg_weights = sorted(egg_weights, reverse=True)
    eggs = 0
    while target_weight != 0:
        while egg_weights[0] <= target_weight:
            target_weight -= egg_weights[0]
            eggs += 1
        del egg_weights[0]
    return eggs
# EXAMPLE TESTING CODE, feel free to add more if you'd like
if __name__ == '__main__':
    egg_weights = (1, 5, 10, 25)
    n = 99
    print("Egg weights = (1, 5, 10, 25)")
    print("n = 99")
    print("Expected ouput: 9 (3 * 25 + 2 * 10 + 4 * 1 = 99)")
    print("Actual output:", dp_make_weight(egg_weights, n))
    print()
The problem here is a classic DP situation where greediness can sometimes give optimal solutions, but sometimes not.
The situation in this problem is similar to the classic DP problem coin change where we wish to find the fewest number of different valued coins to make change given a target value. The denominations available in some countries such as the USA (which uses coins valued 1, 5, 10, 25, 50, 100) are such that it's optimal to greedily choose the largest coin until the value drops below it, then move on to the next coin. But with other denomination sets like 1, 3, 4, greedily choosing the largest value repeatedly can produce sub-optimal results.
Similarly, your solution works fine for certain egg weights but fails on others. If we choose our egg weights to be 1, 6, 9 and give a target weight of 14, the algorithm chooses 9 immediately and is then unable to make progress on 6. At that point, it slurps a bunch of 1s and ultimately thinks 6 is the minimal solution. But that's clearly wrong: if we intelligently ignore the 9 and pick two 6s first, then we can hit the desired weight with only 4 eggs.
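You can reproduce this by running the greedy dp_make_weight from the question on that case (the comment records what tracing it by hand gives):

print(dp_make_weight((1, 6, 9), 14))   # greedy: one 9 plus five 1s = 6 eggs, but 6 + 6 + 1 + 1 only needs 4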
This shows that we have to consider the fact that at any decision point, taking any of our denominations might ultimately lead us to a globally optimal solution. But we have no way of knowing in the moment. So, we try every denomination at every step. This is very conducive to recursion and could be written like this:
def dp_make_weight(egg_weights, target_weight):
    least_taken = float("inf")
    if target_weight == 0:
        return 0
    elif target_weight > 0:
        for weight in egg_weights:
            sub_result = dp_make_weight(egg_weights, target_weight - weight)
            least_taken = min(least_taken, sub_result)
    return least_taken + 1
if __name__ == "__main__":
    print(dp_make_weight((1, 6, 9), 14))
For each call, we have 3 possibilities:
Base case target_weight < 0: return something to indicate no solution possible (I used infinity for convenience).
Base case target_weight == 0: we found a candidate solution. Return 0 to indicate no step was taken here and give the caller a base value to increment.
Recursive case target_weight > 0: try taking every available egg_weight by subtracting it from the total and recursively exploring the path rooted at the new state. After exploring every possible outcome from the current state, pick the one that took the least number of steps to reach the target. Add 1 to count the current step's egg taken and return.
So far, we've seen that a greedy solution is incorrect and how to fix it but haven't motivated dynamic programming or memoization. DP and memoization are purely optimization concepts, so you can add them after you've found a correct solution and need to speed it up. Time complexity of the above solution is exponential: for every call, we have to spawn len(egg_weights) recursive calls.
There are many resources explaining DP and memoization and I'm sure your course covers it, but in brief, our recursive solution shown above re-computes the same results over and over by taking different recursive paths that ultimately lead to the same values being given for target_weight. If we keep a memo (dictionary) that stores the results of every call in memory, then whenever we re-encounter a call, we can look up its result instead of re-computing it from scratch.
def dp_make_weight(egg_weights, target_weight, memo={}):
    least_taken = float("inf")
    if target_weight == 0:
        return 0
    elif target_weight in memo:
        return memo[target_weight]
    elif target_weight > 0:
        for weight in egg_weights:
            sub_result = dp_make_weight(egg_weights, target_weight - weight)
            least_taken = min(least_taken, sub_result)
        memo[target_weight] = least_taken + 1
    return least_taken + 1
if __name__ == "__main__":
    print(dp_make_weight((1, 6, 9, 12, 13, 15), 724))  # => 49
Since we're using Python, the "Pythonic" way to do it is probably to decorate the function. In fact, there's a builtin memoizer called lru_cache, so going back to our original function without any memoization, we can add memoization (caching) with two lines of code:
from functools import lru_cache

@lru_cache(maxsize=None)
def dp_make_weight(egg_weights, target_weight):
    # ... same code as the top example ...
Memoizing with a decorator has the downside of increasing the size of the call stack proportional to the wrapper's size so it can increase the likelihood of blowing the stack. That's one motivation for writing DP algorithms iteratively, bottom-up (that is, start with the solution base cases and build up a table of these small solutions until you're able to build the global solution), which might be a good exercise for this problem if you're looking for another angle on it.
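To give a flavor of that bottom-up angle, here is a minimal sketch (my own illustration, not the course's reference solution): build a table where best[w] is the fewest eggs needed to make weight w, starting from best[0] = 0.

def dp_make_weight_bottom_up(egg_weights, target_weight):
    best = [0] + [float("inf")] * target_weight   # best[w] = fewest eggs summing to w
    for w in range(1, target_weight + 1):
        for weight in egg_weights:
            if weight <= w:
                best[w] = min(best[w], best[w - weight] + 1)
    return best[target_weight]

if __name__ == "__main__":
    print(dp_make_weight_bottom_up((1, 6, 9, 12, 13, 15), 724))  # => 49 again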
I'm currently working on a python problem:
Given a number line from -infinity to +infinity. You start at 0 and can go either to the left or to the right. The condition is that in i’th move, you take i steps. In the first move take 1 step, second move 2 steps and so on.
Hint: 3 can be reached in 2 steps (0, 1) (1, 3). 2 can be reached in 3 steps (0, 1) (1,-1) (-1, 2)
a) Find the optimal number of steps to reach position 1000000000 and -1000000000.
I have managed to code the following:
import sys

def steps(source, step, dest):
    if abs(source) > dest:
        return sys.maxint
    if source == dest:
        return step
    pos = steps(source+step+1, step+1, dest)
    neg = steps(source-step-1, step+1, dest)
    return min(pos, neg)
The problem is that even though this function gives me the correct answer, it cannot extend to the range asked of me. Is there a work around for this or would I have to go about a different method of solving the question?
I think this can actually be solved with pen, paper and a calculator. I don't want to spoil the entire puzzle (sounds like homework), but I'll give some hints.
Imagine we just walk in the direction of 1000000000, i.e. we take steps to the right each time.
Can you come up with a closed formula that tells you where you are after n steps?
From there, can you compute how many steps it'll take until you "overshoot" past 1000000000? This is obviously a lower bound for the required number of steps, because with fewer steps we simply can't cover the distance.
Where exactly do you end up at the moment you overshoot?
Finally, can you modify your path in such a way that you end up exactly on target, in the same amount of steps?
To avoid recursion you can save the states to process in a queue; let me explain.
from Queue import Queue   # "from queue import Queue" in Python 3
import sys

def steps(source, step, dest):
    q = Queue()
    q.put((source, step))
    while not q.empty():
        source, step = q.get()
        if source == dest:
            return step
        if abs(source) > dest:
            continue   # this position overshot, drop it and keep searching
        q.put((source + step + 1, step + 1))
        q.put((source - step - 1, step + 1))
    return sys.maxint
Doing so you save in the queue each position to check, and every time you check a position which is not the final one you add the two new positions at the end of the queue. Because the queue is processed in order of the number of steps taken, this method also guarantees that the first solution found is the shortest.
To improve speed even more you could keep a record of the numbers already visited, which lets you stop exploring whenever you find the same number again; this will speed up the search a lot.
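A small sketch of that idea (my own variant: I skip repeats of the whole (position, step) state, which is the safe thing to deduplicate on here):

from Queue import Queue   # "from queue import Queue" in Python 3

def steps_bfs(dest):
    q = Queue()
    q.put((0, 0))
    seen = set()
    while not q.empty():
        source, step = q.get()
        if source == dest:
            return step
        if abs(source) > dest or (source, step) in seen:
            continue   # already explored this state, or it overshot
        seen.add((source, step))
        q.put((source + step + 1, step + 1))
        q.put((source - step - 1, step + 1))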
I'm using a version of Dijkstra's algorithm written in Python which I found online, and it works great. But because this is for bus routes, changing 10 times might be the shortest route, but probably not the quickest and definitely not the easiest. I need to modify it somehow to return the path with the least number of changes, regardless of distance to be honest (obviously if 2 paths have equal number of changes, choose the shortest one). My current code is as follows:
from priodict import priorityDictionary

def Dijkstra(stops, start, end=None):
    D = {}  # dictionary of final distances
    P = {}  # dictionary of predecessors
    Q = priorityDictionary()  # est. dist. of non-final vertices
    Q[start] = 0

    for v in Q:
        D[v] = Q[v]
        print v
        if v == end: break
        for w in stops[v]:
            vwLength = D[v] + stops[v][w]
            if w in D:
                if vwLength < D[w]:
                    raise ValueError, "Dijkstra: found better path to already-final vertex"
            elif w not in Q or vwLength < Q[w]:
                Q[w] = vwLength
                P[w] = v

    return (D, P)

def shortestPath(stops, start, end):
    D, P = Dijkstra(stops, start, end)
    Path = []
    while 1:
        Path.append(end)
        if end == start: break
        end = P[end]
    Path.reverse()
    return Path
stops = MASSIVE DICTIONARY WITH VALUES (7800 lines)
print shortestPath(stops,'Airport-2001','Comrie-106')
I must be honest - I ain't no mathematician, so I don't quite understand the algorithm fully, despite all my research on it.
I have tried changing a few things but I don't get even close.
Any help? Thanks!
Here is a possible solution:
1) Run breadth-first search from the start vertex. It will find the path with the least number of changes, but not the shortest among such paths. Let's assume that after running breadth-first search, dist[i] is the distance between the start and vertex i.
2) Now run Dijkstra's algorithm on a modified graph (keep only those edges from the initial graph which satisfy the condition dist[from] + 1 == dist[to]). The shortest path in this graph is the one you are looking for.
P.S. If you don't want to use breadth-first search, you can run Dijkstra's algorithm after making all edge weights equal to 1.
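A rough sketch of that two-phase idea (my own illustration; it assumes stops is a dict of dicts mapping stop -> {neighbour: distance}, as in the question):

from collections import deque
import heapq

def fewest_changes_shortest(stops, start, end):
    # phase 1: BFS gives dist[v] = minimum number of edges from start to v
    dist = {start: 0}
    queue = deque([start])
    while queue:
        v = queue.popleft()
        for w in stops[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)

    # phase 2: Dijkstra restricted to edges with dist[v] + 1 == dist[w],
    # i.e. edges that lie on some fewest-changes path
    best = {start: 0}
    prev = {}
    heap = [(0, start)]
    while heap:
        d, v = heapq.heappop(heap)
        if v == end:
            break
        if d > best.get(v, float('inf')):
            continue
        for w, length in stops[v].items():
            if dist[v] + 1 != dist.get(w):
                continue
            if d + length < best.get(w, float('inf')):
                best[w] = d + length
                prev[w] = v
                heapq.heappush(heap, (best[w], w))

    # rebuild the path from the predecessor map
    path = [end]
    while path[-1] != start:
        path.append(prev[path[-1]])
    path.reverse()
    return path

The BFS levels guarantee the fewest changes; the restricted Dijkstra then picks the shortest path among those with the fewest changes.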
What I would do is add an offset to the actual costs if you have to change the line. For example, if your edge weights represent the time needed between two stations, I would add the average waiting time between Line1 and Line2 at station X (e.g. 0.5*maxWaitingTime) during the search process. Of course this is a heuristic solution to the problem. If your timetables are known, you can calculate an "exact" solution, or at least a solution that satisfies the model, because in reality you can't assume that every bus is always on time.
The solution is simple: instead of using the distances as weights, use a weight of 1 for each stop. Dijkstra's algorithm will then minimize the number of changes as you requested (the total path weight is the number of rides, which is the number of changes + 1). If you want to use the distance to break ties, use something like
vwLength = D[v] + 1 + alpha*stops[v][w]
where alpha << 1, e.g. alpha = 0.0001
Practically, I think your approach is exaggerated. You don't want to fly from Boston to Toronto through Paris even if two flights is the minimum number of changes. I would play with alpha to get an approximation of total traveling time, which is probably what matters.
I'm trying to write the minimax algorithm in Python with one for loop (yes, I know Wikipedia says the min and max players are often treated separately), and I'm using the variable turn to keep track of whether the min or max player is currently exploring options. I think, however, that at present the code wrongly evaluates for X when it is the O player's turn and for O when it is the X player's turn.
Here's the source (p12) : http://web.cs.wpi.edu/~rich/courses/imgd4000-d10/lectures/E-MiniMax.pdf
Things you might be wondering about:
b is a list of lists; 0 denotes an available space
evaluate is used both for checking for a victory (by default) and for scoring the board for a particular player (it looks for runs of that player's piece on the board).
makeMove returns the row of the column the piece is placed in (used for subsequent removal)
Any help would be very much appreciated. Please let me know if anything is unclear.
def minMax(b, turn, depth=0):
    player, piece = None, None
    best, move = None, -1

    if turn % 2 == 0:  # even player is max player
        player, piece = 'max', 'X'
        best, move = -1000, -1
    else:
        player, piece = 'min', 'O'
        best, move = 1000, -1

    if boardFull(b) or depth == MAX_DEPTH:
        return evaluate(b, False, piece)

    for col in range(N_COLS):
        if possibleMove(b, col):
            row = makeMove(b, col, piece)
            turn += 1  # now the other player's turn
            score = minMax(b, turn, depth+1)
            if player == 'max':
                if score > best:
                    best, move = score, col
            else:
                if score < best:
                    best, move = score, col
            reset(b, row, col)
    return move
@seaotternerd Yes, I was wondering about that. But I'm not sure that is the problem. Here is one printout. As you can see, X has been dropped in the fourth column by the AI, but the board is being evaluated from the min player's perspective (it counts 2 O units in the far right column).
Here's what the evaluate function determines, depending on piece:
if piece == 'O':
    return best * -25
return best * 25
You are incrementing turn every time that you find a possible move and not undoing it. As a result, when control returns to a given minMax call, turn is 1 greater than it was before. Then, the next time your program finds a possible move, it increments turn again. This will cause the next call to minMax to select the wrong player as the current one. Overall, I believe this will result in the board getting evaluated for the wrong player approximately half the time. You can fix this by adding 1 to turn in the recursive call to minMax(), rather than by changing the value stored in the variables:
row = makeMove(b, col, piece)
score = minMax(b, turn+1, depth+1)
EDIT: Digging deeper into your code, I'm finding a number of additional problems:
1) MAX_DEPTH is set to 1. This will not allow the AI to see its own next move, instead forcing it to make decisions solely based on getting in the way of the other player.
2) minMax() returns the score if it has reached MAX_DEPTH or a win condition, but otherwise it returns a move. This breaks propagation of the score back up the recursion tree.
3) This is not critical, but it's something to keep in mind: your board evaluation function only takes into account how long a given player's longest string is, ignoring how the other player is doing and any other factors that may make one placement better than another. This mostly just means that your AI won't be very "smart."
EDIT 2: A big part of the problem with the way that you're keeping track of min and max is in your evaluation function. You check to see if each piece has won. You are then basing the score of that board off of who the current player is, but the point of having a min player and a max player is that you don't need to know who the current player is to evaluate the board. If max has won, the score is infinity. If min has won, the score is -infinity.
def evaluate(b, piece):
    if evaluate_aux(b, True, 'X'):
        return 100000
    if evaluate_aux(b, True, 'O'):
        return -100000
    return evaluate_aux(b, False, piece)
In general, I think there is a lot you could do to make your code cleaner and easier to read, which would make it much easier to detect errors. For instance, if you are saying that "X" is always max and "O" is always min, then you don't need to bother keeping track of both player and piece. Additionally, having evaluate_aux sometimes return a boolean and sometimes return an int is confusing. You could just have it count the number of each piece in a row, for instance, with contiguous "X"s counting positive and contiguous "O"s counting negative, and sum the scores; an evaluation function isn't supposed to be from one player's perspective or the other. Obviously you would still need a check for win conditions in there. This would also address point 3.
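As a rough sketch of that last suggestion (my own illustration; it assumes b is a list of rows containing 'X', 'O', or 0 as described in the question, it only scans horizontal runs, and squaring the run lengths is an arbitrary choice):

def evaluate_symmetric(b):
    score = 0
    for row in b:
        run_piece, run_length = 0, 0
        for cell in list(row) + [0]:          # trailing 0 flushes the last run
            if cell == run_piece and cell != 0:
                run_length += 1
                continue
            # a run just ended: contiguous X's count positive, O's negative
            if run_piece == 'X':
                score += run_length ** 2
            elif run_piece == 'O':
                score -= run_length ** 2
            run_piece, run_length = cell, 1
    return score

A real evaluator would also scan columns and diagonals and keep a separate check for win conditions, as mentioned above.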
It's possible that there are more problems, but like I said, this code is not particularly easy to wade through. If you fix the things that I've already found and clean it up, I can take another look.