Recursive loops with increasing size/arguments (Hamiltonian Paths?) in Python

I have code that takes n inputs and computes the shortest route between them without ever revisiting the same point twice. I think this is the same as the Hamiltonian Path problem.
My code takes n addresses as inputs and iterates over all possible combinations without repeating. Right now I have the 'brute force' method, where each loop grabs the start/end location, calculates the distance, excludes repeated locations, and adds only those paths that visit every point to my DataFrame. Since there are 5 locations, the 5th nested for loop writes the sequence and distance to the DataFrame.
DF with values:
Index Type start_point
0 Start (38.9028613352942, -121.339977998194)
1 A (38.8882610961556, -121.297759)
2 B (38.9017768701178, -121.328815149117)
3 C (38.902337877551, -121.273244306122)
4 D (38.8627754142291, -121.313577618114)
5 E (38.882338375, -121.277366625)
My code goes like:
from geopy.distance import vincenty
import pandas as pd

master = pd.DataFrame()
master['locations'] = ''
master['distance'] = ''
n = 0
df1a = source[source.Type != source.loc[0, 'Type']]
df1a = df1a.reset_index(drop=True)
for i1a in df1a.index:
    i1_master = vincenty(source.loc[0, 'start_point'], df1a.loc[i1a, 'start_point']).miles
    for i2 in df1a.index:
        df2a = df1a[df1a.Type != df1a.loc[i2, 'Type']]
        df2a = df2a.reset_index(drop=True)
        for i2a in df2a.index:
            if df1a.loc[i1a, 'Type'] == df2a.loc[i2a, 'Type']:
                break
            else:
                i2_master = i1_master + vincenty(df1a.loc[i1a, 'start_point'], df2a.loc[i2a, 'start_point']).miles
                for i3 in df2a.index:
                    df3a = df2a[df2a.Type != df2a.loc[i3, 'Type']]
                    df3a = df3a.reset_index(drop=True)
                    for i3a in df3a.index:
                        if df1a.loc[i1a, 'Type'] == df3a.loc[i3a, 'Type']:
                            break
                        if df2a.loc[i2a, 'Type'] == df3a.loc[i3a, 'Type']:
                            break
                        else:
                            i3_master = i2_master + vincenty(df2a.loc[i2a, 'start_point'], df3a.loc[i3a, 'start_point']).miles
                            for i4 in df3a.index:
                                df4a = df3a[df3a.Type != df3a.loc[i4, 'Type']]
                                df4a = df4a.reset_index(drop=True)
                                for i4a in df4a.index:
                                    if df1a.loc[i1a, 'Type'] == df4a.loc[i4a, 'Type']:
                                        break
                                    if df2a.loc[i2a, 'Type'] == df4a.loc[i4a, 'Type']:
                                        break
                                    if df3a.loc[i3a, 'Type'] == df4a.loc[i4a, 'Type']:
                                        break
                                    else:
                                        i4_master = i3_master + vincenty(df3a.loc[i3a, 'start_point'], df4a.loc[i4a, 'start_point']).miles
                                        # This innermost loop is special: it also calculates the distance back to the start.
                                        for i5 in df4a.index:
                                            df5a = df4a[df4a.Type != df4a.loc[i5, 'Type']]
                                            df5a = df5a.reset_index(drop=True)
                                            for i5a in df5a.index:
                                                if df1a.loc[i1a, 'Type'] == df5a.loc[i5a, 'Type']:
                                                    break
                                                if df2a.loc[i2a, 'Type'] == df5a.loc[i5a, 'Type']:
                                                    break
                                                if df3a.loc[i3a, 'Type'] == df5a.loc[i5a, 'Type']:
                                                    break
                                                if df4a.loc[i4a, 'Type'] == df5a.loc[i5a, 'Type']:
                                                    break
                                                else:
                                                    i5_master = i4_master + vincenty(df4a.loc[i4a, 'start_point'], df5a.loc[i5a, 'start_point']).miles
                                                    master.loc[n, 'locations'] = (source.loc[0, 'Type'] + '_' + df1a.loc[i1a, 'Type'] + '_' +
                                                                                  df2a.loc[i2a, 'Type'] + '_' + df3a.loc[i3a, 'Type'] + '_' +
                                                                                  df4a.loc[i4a, 'Type'] + '_' + df5a.loc[i5a, 'Type'] + '_' +
                                                                                  source.loc[0, 'Type'])
                                                    # leg back to the starting point
                                                    master.loc[n, 'distance'] = i5_master + vincenty(df5a.loc[i5a, 'start_point'], source.loc[0, 'start_point']).miles
                                                    n = n + 1
Is there a way to use recursive code to build this structure? As a Chemical Engineer I am out of my league ;)
For example: the number of if statements (to check for sequentially repeated start_points) increases in each section and changes in terms of its arguments.
Any other pointers are appreciated.

This is a special case of the Travelling Salesman Problem, which is perhaps the most famous example of an intractable problem - one which cannot be solved in sensible time for any sizable input. Brute force (recursive or otherwise) takes O(N!) time (and O(N!) memory if, like your code, you keep every path), which is only viable, even on modern systems, for small numbers of inputs (< 10, maybe).
If you are willing to sacrifice perfect solutions for the sake of resources, check out some sub-optimal heuristic solutions here: http://www.math.tamu.edu/~mpilant/math167/Notes/Chapter2.pdf
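If you do want the exact answer without hand-rolling the recursion, note that itertools.permutations enumerates the same orderings your nested loops do. A minimal sketch using the question's coordinates, held in a plain dict for simplicity (geodesic is the replacement for vincenty, which was removed in geopy 2.0):
from itertools import permutations
from geopy.distance import geodesic

points = {
    'Start': (38.9028613352942, -121.339977998194),
    'A': (38.8882610961556, -121.297759),
    'B': (38.9017768701178, -121.328815149117),
    'C': (38.902337877551, -121.273244306122),
    'D': (38.8627754142291, -121.313577618114),
    'E': (38.882338375, -121.277366625),
}

def tour_length(order):
    # Total Start -> ... -> Start distance, in miles.
    stops = ('Start',) + order + ('Start',)
    return sum(geodesic(points[a], points[b]).miles
               for a, b in zip(stops, stops[1:]))

others = [k for k in points if k != 'Start']
best = min(permutations(others), key=tour_length)  # O(n!) orderings
print('_'.join(('Start',) + best + ('Start',)), tour_length(best))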

Related

8 Queens on a chessboard | PYTHON | Memory Error

I came across this question where 8 queens should be placed on a chessboard such that none can kill each other. This is how I tried to solve it:
import itertools

def allAlive(position):
    qPosition = []
    for i in range(8):
        qPosition.append(position[2*i:(2*i)+2])
    hDel = list(qPosition)  # Horizontal
    for i in range(8):
        a = hDel[0]
        del hDel[0]
        l = len(hDel)
        for j in range(l):
            if a[:1] == hDel[j][:1]:
                return False
    vDel = list(qPosition)  # Vertical
    for i in range(8):
        a = vDel[0]
        l = len(vDel)
        for j in range(l):
            if a[1:2] == vDel[j][1:2]:
                return False
    cDel = list(qPosition)  # Cross
    for i in range(8):
        a = cDel[0]
        l = len(cDel)
        for j in range(l):
            if abs(ord(a[:1]) - ord(cDel[j][:1])) == 1 and abs(int(a[1:2]) - int(cDel[j][1:2])) == 1:
                return False
    return True
chessPositions=['A1','A2','A3','A4','A5','A6','A7','A8','B1','B2','B3','B4','B5','B6','B7','B8','C1','C2','C3','C4','C5','C6','C7','C8','D1','D2','D3','D4','D5','D6','D7','D8','E1','E2','E3','E4','E5','E6','E7','E8','F1','F2','F3','F4','F5','F6','F7','F8','G1','G2','G3','G4','G5','G6','G7','G8','H1','H2','H3','H4','H5','H6','H7','H8']
qPositions = [''.join(p) for p in itertools.combinations(chessPositions, 8)]
for i in qPositions:
    if allAlive(i) == True:
        print(i)
Traceback (most recent call last):
qPositions=[''.join(p) for p in itertools.combinations(chessPositions,8)]
MemoryError
I'm still a newbie. How can I overcome this error? Or is there any better way to solve this problem?
What you are trying to do is impossible ;)!
qPositions=[''.join(p) for p in itertools.combinations(chessPositions,8)]
means that you will get a list of length 64 choose 8 = 4426165368, since len(chessPositions) = 64, which you cannot store in memory. Why not? Combining what I stated in the comments and what @augray said in his answer, the result of the above operation would be a list which would take
(64 choose 8) * 2 * 8 bytes ~ 66GB
of RAM, since it will have 64 choose 8 elements, each element will have 8 substrings like 'A1', and each such substring consists of 2 characters. One character takes 1 byte.
You have to find another way. I am not answering that part, because that is your job. The n-queens problem is classically solved with backtracking, not brute-force enumeration. I suggest you google 'n queens problem python' and search for an answer, then try to understand the code and the backtracking idea behind it.
I did the searching for you; take a look at this video. As suggested by @Jean-François Fabre: backtracking. Your job is now to watch the video once, twice, ... as many times as it takes until you understand the solution. Then open up your favourite editor (mine is Vi :D) and code it down!
This is one case where it's important to understand the "science" (or more accurately, math) part of computer science as much as it is important to understand the nuts and bolts of programming.
From the documentation for itertools.combinations, we see that the number of items returned is n! / r! / (n-r)!, where n is the length of the input collection (in your case the number of chess positions, 64) and r is the length of the subsequences you want returned (in your case 8). As @campovski has pointed out, this results in 4,426,165,368. Each returned subsequence will consist of 8*2 characters, each of which is a byte (not to mention the overhead of the other data structures needed to hold these and calculate the answer). Each character is 1 byte, so in total, just counting the memory consumption of the resulting subsequences gives 4,426,165,368*2*8 = 70,818,645,888. Dividing this by 1024^3 gives the number of gigs of memory held by these subsequences, about 66GB.
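A quick sanity check of that arithmetic (math.comb needs Python 3.8+):
from math import comb

print(comb(64, 8))                    # 4426165368 subsequences
print(comb(64, 8) * 8 * 2 / 1024**3)  # ~65.9, i.e. about 66 GiB of characters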
I'm assuming you don't have that much memory :-) . Calculating the answer to this question will require a well thought out algorithm, not just "brute force". I recommend doing some research on the problem- Wikipedia looks like a good place to start.
As the other answers stated, you can't fit every combination in memory, and you shouldn't use brute force because the speed will be slow. However, if you want to use brute force anyway, you can constrain the problem by eliminating common rows and columns up front and only checking the diagonals:
from itertools import permutations
#All possible letters
letters = ['a','b','c','d','e','f','g','h']
#All possible numbers
numbers = [str(i) for i in range(1,len(letters)+1)]
#All possible permutations given rows != to eachother and columns != to eachother
r = [zip(letters, p) for p in permutations(numbers,8)]
#Formatted for your function
points = [''.join([''.join(z) for z in b]) for b in r]
Also, as a note, this line of code attempts to first find all of the combinations and then feed your function, which is a waste of memory:
qPositions=[''.join(p) for p in itertools.combinations(chessPositions,8)]
If you decided you do want to use a brute-force method, it is possible: take the pure-Python recipe for itertools.combinations and, instead of collecting everything it yields into a list, feed each combination to your check function one at a time.
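For reference, here is a minimal backtracking sketch of the kind the other answers point toward: place one queen per row and prune the moment two queens attack each other, so nothing close to 64-choose-8 combinations is ever generated:
def queens(n=8):
    # Yields solutions as tuples: index = row, value = column.
    def place(cols):
        row = len(cols)
        if row == n:
            yield cols
            return
        for col in range(n):
            # Safe if no earlier queen shares this column or a diagonal.
            if all(col != c and abs(col - c) != row - r
                   for r, c in enumerate(cols)):
                for solution in place(cols + (col,)):
                    yield solution
    return place(())

print(next(queens()))  # first of the 92 solutions: (0, 4, 7, 5, 2, 6, 1, 3)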

Code finding the first triangular number with more than 500 divisors will not finish running

Okay, so I'm working on Euler Problem 12 (find the first triangular number with a number of factors over 500) and my code (in Python 3) is as follows:
factors = 0
y = 1

def factornum(n):
    x = 1
    f = []
    while x <= n:
        if n % x == 0:
            f.append(x)
        x += 1
    return len(f)

def triangle(n):
    t = sum(list(range(1, n)))
    return t

while factors <= 500:
    factors = factornum(triangle(y))
    y += 1
print(y-1)
Basically, a function goes through all the numbers below the input number n, checks if they divide into n evenly, and if so add them to a list, then return the length in that list. Another generates a triangular number by summing all the numbers in a list from 1 to the input number and returning the sum. Then a while loop continues to generate a triangular number using an iterating variable y as the input for the triangle function, and then runs the factornum function on that and puts the result in the factors variable. The loop continues to run and the y variable continues to increment until the number of factors is over 500. The result is then printed.
However, when I run it, nothing happens - no errors, no output, it just keeps running and running. Now, I know my code isn't the most efficient, but I left it running for quite a bit and it still didn't produce a result, so it seems more likely to me that there's an error somewhere. I've been over it and over it and cannot seem to find an error.
I'd merely request that a full solution or a drastically improved one isn't given outright but pointers towards my error(s) or spots for improvement, as the reason I'm doing the Euler problems is to improve my coding. Thanks!
You have a very inefficient algorithm. Since you asked for pointers rather than a full solution, the main ones are:
There is a more efficient way to calculate the next triangular number: there is an explicit formula in the wiki (T_n = n(n+1)/2), and if you are generating the whole sequence it is cheaper still to just add the next n to the previous number. (Sidenote: the list in sum(list(range(1,n))) makes no sense to me at all; if you want to keep this approach anyway, sum(xrange(1, n)) will be much more efficient, as it doesn't require materializing the range.)
There are much more efficient ways to factorize numbers.
There is a more efficient way to count a number's divisors: from the prime factorisation n = p1^a1 * ... * pk^ak, the divisor count is (a1+1)(a2+1)...(ak+1); see the sketch below.
Generally, Project Euler problems (as in many other programming competitions) are not supposed to be solvable by sheer brute force. You should come up with some formula and/or a more efficient algorithm first.
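As a sketch of that divisor-counting pointer (deliberately not a full solution to the Euler problem - the triangular-number part is still up to you):
def divisor_count(n):
    count = 1
    p = 2
    while p * p <= n:
        if n % p == 0:
            exp = 0
            while n % p == 0:
                n //= p
                exp += 1
            count *= exp + 1
        p += 1
    if n > 1:       # one leftover prime factor
        count *= 2
    return count

print(divisor_count(28))  # 6 divisors: 1, 2, 4, 7, 14, 28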
As far as I can tell your code will work, but it will take a very long time to calculate the number of factors. For 150 factors, it takes on the order of 20 seconds to run, and that time will grow dramatically as you look for higher and higher number of factors.
One way to reduce the processing time is to reduce the number of calculations that you're performing. If you analyze your code, you're calculating n%1 every single time, which is an unnecessary calculation because you know every single integer will be divisible by itself and one. Are there any other ways you can reduce the number of calculations? Perhaps by remembering that if a number is divisible by 20, it is also divisible by 2, 4, 5, and 10?
I can be more specific, but you wanted a pointer in the right direction.
From the looks of it the code works fine, it's just not the best approach. A simple way of optimizing is to only test divisors up to half of the number, for example. Also, try thinking about how you could do this using prime factors; it might lead to another solution. Best of luck!
First you have to define a factor function:
from functools import reduce

def factors(n):
    step = 2 if n % 2 else 1
    return set(reduce(list.__add__,
                      ([i, n // i] for i in range(1, int(pow(n, 0.5)) + 1, step)
                       if n % i == 0)))
This will create a set containing all of the factors of the number n.
Second, use a while loop until you get more than 500 factors:
a = 1
x = 1
while len(factors(a)) < 501:
    x += 1
    a += x
This loop stops at the first a for which len(factors(a)) exceeds 500.
A simple print(a) then gives you your answer.

Sum of primes below 2,000,000 in python

I am attempting problem 10 of Project Euler, which is the summation of all primes below 2,000,000. I have tried implementing the Sieve of Eratosthenes using Python, and the code I wrote works perfectly for numbers below 10,000.
However, when I attempt to find the summation of primes for bigger numbers, the code takes too long to run (finding the sum of primes up to 100,000 took 315 seconds). The algorithm clearly needs optimization.
Yes, I have looked at other posts on this website, like Fastest way to list all primes below N, but the solutions there had very little explanation as to how the code worked (I am still a beginner programmer) so I was not able to actually learn from them.
Can someone please help me optimize my code, and clearly explain how it works along the way?
Here is my code:
primes_below_number = 2000000  # number to find summation of all primes below number
numbers = range(1, primes_below_number + 1, 2)  # creates a list excluding even numbers
pos = 0  # index position
sum_of_primes = 0  # total sum
number = numbers[pos]
while number < primes_below_number and pos < len(numbers) - 1:
    pos += 1
    number = numbers[pos]  # moves to next prime in list numbers
    sum_of_primes += number  # adds prime to total sum
    num = number
    while num < primes_below_number:
        num += number
        if num in numbers[:]:
            numbers.remove(num)  # removes multiples of prime found
print sum_of_primes + 2
As I said before, I am new to programming, therefore a thorough explanation of any complicated concepts would be deeply appreciated. Thank you.
As you've seen, there are various ways to implement the Sieve of Eratosthenes in Python that are more efficient than your code. I don't want to confuse you with fancy code, but I can show how to speed up your code a fair bit.
Firstly, searching a list isn't fast, and removing elements from a list is even slower. However, Python provides a set type which is quite efficient at performing both of those operations (although it does chew up a bit more RAM than a simple list). Happily, it's easy to modify your code to use a set instead of a list.
Another optimization is that we don't have to check for prime factors all the way up to primes_below_number, which I've renamed to hi in the code below. It's sufficient to just go to the square root of hi, since if a number is composite it must have a factor less than or equal to its square root.
We don't need to keep a running total of the sum of the primes. It's better to do that at the end using Python's built-in sum() function, which operates at C speed, so it's much faster than doing the additions one by one at Python speed.
# number to find summation of all primes below number
hi = 2000000

# create a set excluding even numbers
numbers = set(xrange(3, hi + 1, 2))

for number in xrange(3, int(hi ** 0.5) + 1):
    if number not in numbers:
        # number must have been removed because it has a prime factor
        continue
    num = number
    while num < hi:
        num += number
        if num in numbers:
            # remove multiples of prime found
            numbers.remove(num)

print 2 + sum(numbers)
You should find that this code runs in a few seconds; it takes around 5 seconds on my 2GHz single-core machine.
You'll notice that I've moved the comments so that they're above the line they're commenting on. That's the preferred style in Python since we prefer short lines, and also inline comments tend to make the code look cluttered.
There's another small optimization that can be made to the inner while loop, but I let you figure that out for yourself. :)
First, removing numbers from the list will be very slow. Instead of this, make a list of flags:
primes = [True] * primes_below_number
primes[0] = False
primes[1] = False
Now in your loop, when you find a prime p, change primes[k*p] to False for all suitable k. (You wouldn't actually do the multiplication, you'd continually add p, of course.)
At the end,
primes = [n for n in range(primes_below_number) if primes[n]]
This should be a great deal faster.
Second, you can stop looking once your find a prime greater than the square root of primes_below_number, since a composite number must have a prime factor that doesn't exceed its square root.
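Putting those two points together, a minimal sketch of the boolean-list sieve (not tuned, but it finishes in a few seconds):
def sum_primes_below(hi):
    primes = [True] * hi
    primes[0] = primes[1] = False
    for p in range(2, int(hi ** 0.5) + 1):
        if primes[p]:
            for multiple in range(p * p, hi, p):
                primes[multiple] = False
    return sum(n for n in range(hi) if primes[n])

print(sum_primes_below(2000000))  # 142913828922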
Try using numpy; it should make things faster. Replacing range with xrange may also help.
Here's an optimization for your code:
import itertools

primes_below_number = 2000000
numbers = list(range(3, primes_below_number, 2))
pos = 0
while pos < len(numbers) - 1:
    number = numbers[pos]
    numbers = list(
        itertools.chain(
            itertools.islice(numbers, 0, pos + 1),
            itertools.ifilter(
                lambda n: n % number != 0,
                itertools.islice(numbers, pos + 1, len(numbers))
            )
        )
    )
    pos += 1
sum_of_primes = sum(numbers) + 2
print sum_of_primes
The optimizations here are:
Moved the sum outside of the loop.
Instead of removing elements from a list we can just create another one; memory is not an issue here (I hope).
When creating the new list, we create it by chaining two parts: the first part is everything before the current number (we already checked those), and the second part is everything after the current number, but only if it is not divisible by the current number.
Using itertools can make things faster, since we'd be using iterators instead of looping through the whole list more than once.
Another solution would be to not remove parts of the list but to disable them, like @saulspatz said.
And here's the fastest way I was able to find: http://www.wolframalpha.com/input/?i=sum+of+all+primes+below+2+million 😁
Update
Here is the boolean method:
import itertools

primes_below_number = 2000000
numbers = [v % 2 != 0 for v in xrange(primes_below_number)]
numbers[0] = False
numbers[1] = False
numbers[2] = True
number = 3
while number < primes_below_number:
    n = number * 3  # We already excluded even numbers
    while n < primes_below_number:
        numbers[n] = False
        n += number
    number += 1
    while number < primes_below_number and not numbers[number]:
        number += 1
sum_of_numbers = sum(itertools.imap(lambda index_n: index_n[1] and index_n[0] or 0, enumerate(numbers)))
print(sum_of_numbers)
This executes in seconds (took 3 seconds on my 2.4GHz machine).
Instead of storing a list of numbers, you can instead store an array of boolean values. This use of a bitmap can be thought of as a way to implement a set, which works well for dense sets (there aren't big gaps between the values of members).
An answer on a recent python sieve question uses this implementation python-style. It turns out a lot of people have implemented a sieve, or something they thought was a sieve, and then come on SO to ask why it was slow. :P Look at the related-questions sidebar from some of them if you want more reading material.
Finding the element that holds the boolean that says whether a number is in the set or not is easy and extremely fast. array[i] is a boolean value that's true if i is in the set, false if not. The memory address can be computed directly from i with a single addition.
(I'm glossing over the fact that an array of boolean might be stored with a whole byte for each element, rather than the more efficient implementation of using every single bit for a different element. Any decent sieve will use a bitmap.)
Removing a number from the set is as simple as setting array[i] = false, regardless of the previous value. No searching, no comparison, no tracking of what happened; just one memory operation. (Well, two for a bitmap: load the old byte, clear the correct bit, store it. Memory is byte-addressable, but not bit-addressable.)
An easy optimization of the bitmap-based sieve is to not even store the even-numbered bytes, because there is only one even prime, and we can special-case it to double our memory density. Then the membership-status of i is held in array[i/2]. (Dividing by powers of two is easy for computers. Other values are much slower.)
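As a sketch, that odds-only layout looks like this (using a bytearray, one byte per odd number, rather than a true bitmap):
hi = 2000000
sieve = bytearray([1]) * (hi // 2)  # sieve[i // 2] holds odd number i
sieve[0] = 0                        # 1 is not prime
for p in range(3, int(hi ** 0.5) + 1, 2):
    if sieve[p // 2]:
        for multiple in range(p * p, hi, 2 * p):  # odd multiples only
            sieve[multiple // 2] = 0
print(2 + sum(2 * k + 1 for k in range(len(sieve)) if sieve[k]))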
An SO question, Why is Sieve of Eratosthenes more efficient than the simple "dumb" algorithm?, has many links to good stuff about the sieve. This one in particular has some good discussion about it, in words rather than just code. (Never mind the fact that it's talking about a common Haskell implementation that looks like a sieve, but actually isn't. They call this the "unfaithful" sieve in their graphs, and so on.)
Discussion on that question brought up the point that trial division may be faster than big sieves, for some uses, because clearing the bits for all multiples of every prime touches a lot of memory in a cache-unfriendly pattern. CPUs are much faster than memory these days.

Filtering out conditioned arrays from a matrix

In Sweden there is a football (soccer) betting game where you try to find the outcome of 13 matches. Since each match can have a home win, a draw or an away team win this leads to 3**13=1594323 possible outcomes. You win if you have 10 to 13 matches correct. You don't want this to occur when a lot of other people also have high scores since the prize sum is divided among all winners. This is the background to the more generic question that I'm looking for an answer to: how to find all arrays that differ by at least x elements from a given array within the matrix (in this case 1594323*13).
The first obvious idea that came to my mind was to have 13 nested for loops and compare one array at a time. However, I'm using this problem as a training session to teach myself Python programming. Python is not the optimal tool for this kind of task, and I could turn to C to get a faster program, but I'm interested in the best algorithm as such.
In Python I tried the nested for loop method up to 10 matches, then the execution time got too long, 5 seconds on the netbook I'm using. For each added match, the execution time went up tenfold.
Another approach would be to use a database and that might be the solution but I'm curious what the fastest way of solving this kind of problem is. I haven't been successful googling this problem, maybe because it's hard to use the correct description of the problem in a short search.
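For concreteness, the brute-force filter I have in mind can be stated in a few lines (my_row is just a made-up example entry; this enumerates all 3**13 rows, so it is simple but not fast):
from itertools import product

def at_least_x_different(row, x):
    return [c for c in product(range(3), repeat=len(row))
            if sum(a != b for a, b in zip(c, row)) >= x]

my_row = (1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1)
print(len(at_least_x_different(my_row, 10)))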
Here's a recursive solution that terminates in about 5 seconds on my machine, when given an x value of 0 (the worst case). For x values of 10 and above it's nearly instantaneous.
def arrays_with_differences(arr, x):
    if x > len(arr):
        return []
    if len(arr) == 0:
        return [[]]
    smallColl1 = arrays_with_differences(arr[:len(arr)-1], x)
    smallColl2 = arrays_with_differences(arr[:len(arr)-1], x-1)
    last = arr[len(arr)-1]
    altLast1 = (last + 1) % 3
    altLast2 = (last + 2) % 3
    result = [smallArr + [last] for smallArr in smallColl1]
    result.extend([smallArr + [altLast1] for smallArr in smallColl2])
    result.extend([smallArr + [altLast2] for smallArr in smallColl2])
    return result

result = arrays_with_differences([1,0,1,0,1,1,1,1,1,1,1,1,1], 4)
print(len(result))

How to solve the "Mastermind" guessing game?

How would you create an algorithm to solve the following puzzle, "Mastermind"?
Your opponent has chosen four different colours from a set of six (yellow, blue, green, red, orange, purple). You must guess which they have chosen, and in what order. After each guess, your opponent tells you how many (but not which) of the colours you guessed were the right colour in the right place ["blacks"] and how many (but not which) were the right colour but in the wrong place ["whites"]. The game ends when you guess correctly (4 blacks, 0 whites).
For example, if your opponent has chosen (blue, green, orange, red), and you guess (yellow, blue, green, red), you will get one "black" (for the red), and two whites (for the blue and green). You would get the same score for guessing (blue, orange, red, purple).
I'm interested in what algorithm you would choose, and (optionally) how you translate that into code (preferably Python). I'm interested in coded solutions that are:
Clear (easily understood)
Concise
Efficient (fast in making a guess)
Effective (least number of guesses to solve the puzzle)
Flexible (can easily answer questions about the algorithm, e.g. what is its worst case?)
General (can be easily adapted to other types of puzzle than Mastermind)
I'm happy with an algorithm that's very effective but not very efficient (provided it's not just poorly implemented!); however, a very efficient and effective algorithm implemented inflexibly and impenetrably is not of use.
I have my own (detailed) solution in Python which I have posted, but this is by no means the only or best approach, so please post more! I'm not expecting an essay ;)
Key tools: entropy, greediness, branch-and-bound; Python, generators, itertools, decorate-undecorate pattern
In answering this question, I wanted to build up a language of useful functions to explore the problem. I will go through these functions, describing them and their intent. Originally, these had extensive docs, with small embedded unit tests tested using doctest; I can't praise this methodology highly enough as a brilliant way to implement test-driven-development. However, it does not translate well to StackOverflow, so I will not present it this way.
Firstly, I will be needing several standard modules and future imports (I work with Python 2.6).
from __future__ import division # No need to cast to float when dividing
import collections, itertools, math
I will need a scoring function. Originally, this returned a tuple (blacks, whites), but I found output a little clearer if I used a namedtuple:
Pegs = collections.namedtuple('Pegs', 'black white')

def mastermindScore(g1, g2):
    matching = len(set(g1) & set(g2))
    blacks = sum(1 for v1, v2 in itertools.izip(g1, g2) if v1 == v2)
    return Pegs(blacks, matching - blacks)
To make my solution general, I pass in anything specific to the Mastermind problem as keyword arguments. I have therefore made a function that creates these arguments once, and use the **kwargs syntax to pass it around. This also allows me to easily add new attributes if I need them later. Note that I allow guesses to contain repeats, but constrain the opponent to pick distinct colours; to change this, I only need change G below. (If I wanted to allow repeats in the opponent's secret, I would need to change the scoring function as well.)
def mastermind(colours, holes):
    return dict(
        G = set(itertools.product(colours, repeat=holes)),
        V = set(itertools.permutations(colours, holes)),
        score = mastermindScore,
        endstates = (Pegs(holes, 0),))

def mediumGame():
    return mastermind(("Yellow", "Blue", "Green", "Red", "Orange", "Purple"), 4)
Sometimes I will need to partition a set based on the result of applying a function to each element in the set. For instance, the numbers 1..10 can be partitioned into even and odd numbers by the function n % 2 (odds give 1, evens give 0). The following function returns such a partition, implemented as a map from the result of the function call to the set of elements that gave that result (e.g. { 0: evens, 1: odds }).
def partition(S, func, *args, **kwargs):
    partition = collections.defaultdict(set)
    for v in S: partition[func(v, *args, **kwargs)].add(v)
    return partition
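For example, a quick check against the evens/odds example above:
parts = partition(range(1, 11), lambda n: n % 2)
print parts[0]  # set([2, 4, 6, 8, 10]) -- the evens
print parts[1]  # set([1, 3, 5, 7, 9])  -- the odds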
I decided to explore a solver that uses a greedy entropic approach. At each step, it calculates the information that could be obtained from each possible guess, and selects the most informative guess. As the numbers of possibilities grow, this will scale badly (quadratically), but let's give it a try! First, I need a method to calculate the entropy (information) of a set of probabilities. This is just -∑p log p. For convenience, however, I will allow input that are not normalized, i.e. do not add up to 1:
def entropy(P):
    total = sum(P)
    return -sum(p*math.log(p, 2) for p in (v/total for v in P if v))
So how am I going to use this function? Well, for a given set of possibilities, V, and a given guess, g, the information we get from that guess can only come from the scoring function: more specifically, how that scoring function partitions our set of possibilities. We want to make a guess that distinguishes best among the remaining possibilities — divides them into the largest number of small sets — because that means we are much closer to the answer. This is exactly what the entropy function above is putting a number to: a large number of small sets will score higher than a small number of large sets. All we need to do is plumb it in.
def decisionEntropy(V, g, score):
    return entropy(collections.Counter(score(gi, g) for gi in V).values())
Of course, at any given step what we will actually have is a set of remaining possibilities, V, and a set of possible guesses we could make, G, and we will need to pick the guess which maximizes the entropy. Additionally, if several guesses have the same entropy, prefer to pick one which could also be a valid solution; this guarantees the approach will terminate. I use the standard python decorate-undecorate pattern together with the built-in max method to do this:
def bestDecision(V, G, score):
    return max((decisionEntropy(V, g, score), g in V, g) for g in G)[2]
Now all I need to do is repeatedly call this function until the right result is guessed. I went through a number of implementations of this algorithm until I found one that seemed right. Several of my functions will want to approach this in different ways: some enumerate all possible sequences of decisions (one per guess the opponent may have made), while others are only interested in a single path through the tree (if the opponent has already chosen a secret, and we are just trying to reach the solution). My solution is a "lazy tree", where each part of the tree is a generator that can be evaluated or not, allowing the user to avoid costly calculations they won't need. I also ended up using two more namedtuples, again for clarity of code.
Node = collections.namedtuple('Node', 'decision branches')
Branch = collections.namedtuple('Branch', 'result subtree')

def lazySolutionTree(G, V, score, endstates, **kwargs):
    decision = bestDecision(V, G, score)
    branches = (Branch(result, None if result in endstates else
                       lazySolutionTree(G, pV, score=score, endstates=endstates))
                for (result, pV) in partition(V, score, decision).iteritems())
    yield Node(decision, branches)  # Lazy evaluation
The following function evaluates a single path through this tree, based on a supplied scoring function:
def solver(scorer, **kwargs):
    lazyTree = lazySolutionTree(**kwargs)
    steps = []
    while lazyTree is not None:
        t = lazyTree.next()  # Evaluate node
        result = scorer(t.decision)
        steps.append((t.decision, result))
        subtrees = [b.subtree for b in t.branches if b.result == result]
        if len(subtrees) == 0:
            raise Exception("No solution possible for given scores")
        lazyTree = subtrees[0]
    assert(result in endstates)
    return steps
This can now be used to build an interactive game of Mastermind where the user scores the computer's guesses. Playing around with this reveals some interesting things. For example, the most informative first guess is of the form (yellow, yellow, blue, green), not (yellow, blue, green, red). Extra information is gained by using exactly half the available colours. This also holds for 6-colour 3-hole Mastermind — (yellow, blue, green) — and 8-colour 5-hole Mastermind — (yellow, yellow, blue, green, red).
But there are still many questions that are not easily answered with an interactive solver. For instance, what is the most number of steps needed by the greedy entropic approach? And how many inputs take this many steps? To make answering these questions easier, I first produce a simple function that turns the lazy tree of above into a set of paths through this tree, i.e. for each possible secret, a list of guesses and scores.
def allSolutions(**kwargs):
    def solutions(lazyTree):
        return ((((t.decision, b.result),) + solution
                 for t in lazyTree for b in t.branches
                 for solution in solutions(b.subtree))
                if lazyTree else ((),))
    return solutions(lazySolutionTree(**kwargs))
Finding the worst case is a simple matter of finding the longest solution:
def worstCaseSolution(**kwargs):
    return max((len(s), s) for s in allSolutions(**kwargs))[1]
It turns out that this solver will always complete in 5 steps or fewer. Five steps! I know that when I played Mastermind as a child, I often took longer than this. However, since creating this solver and playing around with it, I have greatly improved my technique, and 5 steps is indeed an achievable goal even when you don't have time to calculate the entropically ideal guess at each step ;)
How likely is it that the solver will take 5 steps? Will it ever finish in 1, or 2, steps? To find that out, I created another simple little function that calculates the solution length distribution:
def solutionLengthDistribution(**kwargs):
    return collections.Counter(len(s) for s in allSolutions(**kwargs))
For the greedy entropic approach, with repeats allowed: 7 cases take 2 steps; 55 cases take 3 steps; 229 cases take 4 steps; and 69 cases take the maximum of 5 steps.
Of course, there's no guarantee that the greedy entropic approach minimizes the worst-case number of steps. The final part of my general-purpose language is an algorithm that decides whether or not there are any solutions for a given worst-case bound. This will tell us whether greedy entropic is ideal or not. To do this, I adopt a branch-and-bound strategy:
def solutionExists(maxsteps, G, V, score, **kwargs):
    if len(V) == 1: return True
    partitions = [partition(V, score, g).values() for g in G]
    maxSize = max(len(P) for P in partitions) ** (maxsteps - 2)
    partitions = (P for P in partitions if max(len(s) for s in P) <= maxSize)
    return any(all(solutionExists(maxsteps-1, G, s, score) for l, s in
                   sorted((-len(s), s) for s in P)) for i, P in
               sorted((-entropy(len(s) for s in P), P) for P in partitions))
This is definitely a complex function, so a bit more explanation is in order. The first step is to partition the remaining solutions based on their score after a guess, as before, but this time we don't know what guess we're going to make, so we store all partitions. Now we could just recurse into every one of these, effectively enumerating the entire universe of possible decision trees, but this would take a horrifically long time. Instead I observe that, if at this point there is no partition that divides the remaining solutions into more than n sets, then there can be no such partition at any future step either. If we have k steps left, that means we can distinguish between at most n^(k-1) solutions before we run out of guesses (on the last step, we must always guess correctly). Thus we can discard any partitions that contain a score mapped to more than this many solutions. This is what the next two lines of code do.
The final line of code does the recursion, using Python's any and all functions for clarity, and trying the highest-entropy decisions first to hopefully minimize runtime in the positive case. It also recurses into the largest part of the partition first, as this is the most likely to fail quickly if the decision was wrong. Once again, I use the standard decorate-undecorate pattern, this time to wrap Python's sorted function.
def lowerBoundOnWorstCaseSolution(**kwargs):
    for steps in itertools.count(1):
        if solutionExists(maxsteps=steps, **kwargs):
            return steps
By calling solutionExists repeatedly with an increasing number of steps, we get a strict lower bound on the number of steps needed in the worst case for a Mastermind solution: 5 steps. The greedy entropic approach is indeed optimal.
Out of curiosity, I invented another guessing game, which I nicknamed "twoD". In this, you try to guess a pair of numbers; at each step, you get told if your answer is correct, if the numbers you guessed are no less than the corresponding ones in the secret, and if the numbers are no greater.
Comparison = collections.namedtuple('Comparison', 'less greater equal')

def twoDScorer(x, y):
    return Comparison(all(r[0] <= r[1] for r in zip(x, y)),
                      all(r[0] >= r[1] for r in zip(x, y)),
                      x == y)

def twoD():
    G = set(itertools.product(xrange(5), repeat=2))
    return dict(G = G, V = G, score = twoDScorer,
                endstates = set([Comparison(True, True, True)]))
For this game, the greedy entropic approach has a worst case of five steps, but there is a better solution possible with a worst case of four steps, confirming my intuition that myopic greediness is only coincidentally ideal for Mastermind. More importantly, this has shown how flexible my language is: all the same methods work for this new guessing game as did for Mastermind, letting me explore other games with a minimum of extra coding.
What about performance? Obviously, being implemented in Python, this code is not going to be blazingly fast. I've also dropped some possible optimizations in favour of clear code.
One cheap optimization is to observe that, on the first move, most guesses are basically identical: (yellow, blue, green, red) is really no different from (blue, red, green, yellow), or (orange, yellow, red, purple). This greatly reduces the number of guesses we need consider on the first step — otherwise the most costly decision in the game.
However, because of the large runtime growth rate of this problem, I was not able to solve the 8-colour, 5-hole Mastermind problem, even with this optimization. Instead, I ported the algorithms to C++, keeping the general structure the same and employing bitwise operations to boost performance in the critical inner loops, for a speedup of many orders of magnitude. I leave this as an exercise to the reader :)
Addendum, 2018: It turns out the greedy entropic approach is not optimal for the 8-colour, 4-hole Mastermind problem either, with a worst-case length of 7 steps when an algorithm exists that takes at most 6 steps!
I once wrote a "Jotto" solver which is essentially "Master Mind" with words. (We each pick a word and we take turns guessing at each other's word, scoring "right on" (exact) matches and "elsewhere" (correct letter/color, but wrong placement).)
The key to solving such a problem is the realization that the scoring function is symmetric.
In other words if score(myguess) == (1,2) then I can use the same score() function to compare my previous guess with any other possibility and eliminate any that don't give exactly the same score.
Let me give an example: the hidden word (target) is "score" ... the current guess is "fools" --- the score is 1,1 (one letter, 'o', is "right on"; another letter, 's', is "elsewhere"). I can eliminate the word "guess" because score("guess") (against "fools") returns (1,0) (the final 's' matches, but nothing else does). So the word "guess" is not consistent with "fools" scoring (1,1) against some unknown target.
So I now can walk through every five letter word (or combination of five colors/letters/digits etc) and eliminate anything that doesn't score 1,1 against "fools." Do that at each iteration and you'll very rapidly converge on the target. (For five letter words I was able to get within 6 tries every time ... and usually only 3 or 4). Of course there's only 6000 or so "words" and you're eliminating close to 95% for each guess.
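In code, that elimination step is just a filter (a sketch; score is whichever scoring function you use, such as the ones later in this answer):
def consistent(candidates, guess, observed, score):
    # Keep only targets that would have produced the observed score.
    return [w for w in candidates if score(guess, w) == observed]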
Note: for the following discussion I'm talking about five letter "combination" rather than four elements of six colors. The same algorithms apply; however, the problem is orders of magnitude smaller for the old "Master Mind" game ... there are only 1296 combinations (6**4) of colored pegs in the classic "Master Mind" program, assuming duplicates are allowed. The line of reasoning that leads to the convergence involves some combinatorics: there are 20 non-winning possible scores for a five element target (n = [(a,b) for a in range(5) for b in range(6) if a+b <= 5] to see all of them if you're curious. We would, therefore, expect that any random valid selection would have a roughly 5% chance of matching our score ... the other 95% won't and therefore will be eliminated for each scored guess. This doesn't account for possible clustering in word patterns but the real world behavior is close enough for words and definitely even closer for "Master Mind" rules. However, with only 6 colors in 4 slots we only have 14 possible non-winning scores so our convergence isn't quite as fast).
For Jotto the two minor challenges are: generating a good word list (awk 'length($0) == 5' /usr/share/dict/words or similar on a UNIX system) and what to do if the user has picked a word that is not in our dictionary (generate every letter combination, 'aaaaa' through 'zzzzz' --- which is 26 ** 5 ... or ~11.9 million). A trivial combination generator in Python takes about 1 minute to generate all those strings ... an optimized one should do far better. (I can also add a requirement that every "word" have at least one vowel ... but this constraint doesn't help much --- 5 vowels * 5 possible locations for that, and then multiplied by 26 ** 4 other combinations.)
For Master Mind you use the same combination generator ... but with only 4 or 5 "letters" (colors). Every 6-color combination (15,625 of them) can be generated in under a second (using the same combination generator as I used above).
If I was writing this "Jotto" program today, in Python for example, I would "cheat" by having a thread generating all the letter combos in the background while I was still eliminated words from the dictionary (while my opponent was scoring me, guessing, etc). As I generated them I'd also eliminate against all guesses thus far. Thus I would, after I'd eliminated all known words, have a relatively small list of possibilities and against a human player I've "hidden" most of my computation lag by doing it in parallel to their input. (And, if I wrote a web server version of such a program I'd have my web engine talk to a local daemon to ask for sequences consistent with a set of scores. The daemon would keep a locally generated list of all letter combinations and would use a select.select() model to feed possibilities back to each of the running instances of the game --- each would feed my daemon word/score pairs which my daemon would apply as a filter on the possibilities it feeds back to that client).
(By comparison I wrote my version of "Jotto" about 20 years ago on an XT using Borland TurboPascal ... and it could do each elimination iteration --- starting with its compiled-in list of a few thousand words --- in well under a second. I built its word list by writing a simple letter combination generator (see below) ... saving the results to a moderately large file, then running my word processor's spell check on that with a macro to delete everything that was "mis-spelled" --- then I used another macro to wrap all the remaining lines in the correct punctuation to make them valid static assignments to my array, which was a #include file to my program. All that let me build a standalone game program that "knew" just about every valid English 5 letter word; the program was a .COM --- less than 50KB if I recall correctly.)
For other reasons I've recently written a simple arbitrary combination generator in Python. It's about 35 lines of code and I've posted that to my "trite snippets" wiki on bitbucket.org ... it's not a "generator" in the Python sense ... but a class you can instantiate to an infinite sequence of "numeric" or "symbolic" combination of elements (essentially counting in any positive integer base).
You can find it at: Trite Snippets: Arbitrary Sequence Combination Generator
For the exact match part of our score() function you can just use this:
def score(this, that):
    '''Simple "Master Mind" scoring function'''
    exact = len([x for x, y in zip(this, that) if x == y])
    ### Calculating "other" (white pegs) goes here:
    ### ...
    ###
    return (exact, other)
I think this exemplifies some of the beauty of Python: zip() up the two sequences, keep any that match, and take the length of the result.
Finding the matches in "other" locations is deceptively more complicated. If no repeats were allowed then you could simply use sets to find the intersections.
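In the no-repeats case that set version is a one-liner (reusing the names above):
other = len(set(this) & set(that)) - exact  # common colours minus exact hits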
[In my earlier edit of this message, when I realized how I could use zip() for exact matches, I erroneously thought we could get away with other = len([x for x,y in zip(sorted(x), sorted(y)) if x==y]) - exact ... but it was late and I was tired. As I slept on it I realized that the method was flawed. Bad, Jim! Don't post without adequate testing! (I had only tested several cases that happened to work.)]
In the past the approach I used was to sort both lists and compare the heads of each: if the heads are equal, increment the count and pop new items from both lists; otherwise pop the lesser of the two heads and try again. Break as soon as either list is empty.
This does work; but it's fairly verbose. The best I can come up with using that approach is just over a dozen lines of code:
other = 0
x = sorted(this)  # Implicitly converts to a list!
y = sorted(that)
while len(x) and len(y):
    if x[0] == y[0]:
        other += 1
        x.pop(0)
        y.pop(0)
    elif x[0] < y[0]:
        x.pop(0)
    else:
        y.pop(0)
other -= exact
Using a dictionary I can trim that down to about nine:
other = 0
counters = dict()
for i in this:
    counters[i] = counters.get(i, 0) + 1
for i in that:
    if counters.get(i, 0) > 0:
        other += 1
        counters[i] -= 1
other -= exact
(Using the new "collections.Counter" class (Python3 and slated for Python 2.7?) I could presumably reduce this a little more; three lines here are initializing the counters collection).
It's important to decrement the "counter" when we find a match and it's vital to test for counter greater than zero in our test. If a given letter/symbol appears in "this" once and "that" twice then it must only be counted as a match once.
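For what it's worth, here is a sketch of that Counter-based reduction (Python 2.7+). Counter & Counter keeps the minimum count of each symbol, which is exactly the count-each-match-once rule:
from collections import Counter

def score(this, that):
    exact = sum(1 for x, y in zip(this, that) if x == y)
    other = sum((Counter(this) & Counter(that)).values()) - exact
    return (exact, other)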
The first approach is definitely a bit trickier to write (one must be careful to avoid boundaries). Also, in a couple of quick benchmarks (testing a million randomly generated pairs of letter patterns) the first approach takes about 70% longer than the one using dictionaries. (Generating the million pairs of strings using random.shuffle() took over twice as long as the slower of the scoring functions, on the other hand.)
A formal analysis of the performance of these two functions would be complicated. The first method has two sorts, so that would be 2 * O(n log(n)) ... and it iterates through at least one of the two strings and possibly has to iterate all the way to the end of the other string (best case O(n), worst case O(2n)) -- of course I'm mis-using big-O notation here, but this is just a rough estimate. The second case depends entirely on the performance characteristics of the dictionary. If we were using b-trees then the performance would be roughly O(n log(n)) for creation, and finding each element from the other string therein would be another O(n log(n)) operation. However, Python dictionaries are very efficient and these operations should be close to constant time (very few hash collisions). Thus we'd expect a performance of roughly O(2n) ... which of course simplifies to O(n). That roughly matches my benchmark results.
Glancing over the Wikipedia article on "Master Mind" I see that Donald Knuth used an approach which starts similarly to mine (and 10 years earlier) but he added one significant optimization. After gathering every remaining possibility he selects whichever one would eliminate the largest number of possibilities on the next round. I considered such an enhancement to my own program and rejected the idea for practical reasons. In his case he was searching for an optimal (mathematical) solution. In my case I was concerned about playability (on an XT, preferably using less than 64KB of RAM, though I could switch to .EXE format and use up to 640KB). I wanted to keep the response time down in the realm of one or two seconds (which was easy with my approach but which would be much more difficult with the further speculative scoring). (Remember I was working in Pascal, under MS-DOS ... no threads, though I did implement support for crude asynchronous polling of the UI which turned out to be unnecessary)
If I were writing such a thing today I'd add a thread to do the better selection, too. This would allow me to give the best guess I'd found within a certain time constraint, to guarantee that my player didn't have to wait too long for my guess. Naturally my selection/elimination would be running while waiting for my opponent's guesses.
Have you seen Raymond Hettinger's attempt? It certainly matches up to some of your requirements.
I wonder how his solution compares to yours.
There is a great site about MasterMind strategy here. The author starts off with very simple MasterMind problems (using numbers rather than letters, and ignoring order and repetition) and gradually builds up to a full MasterMind problem (using colours, which can be repeated, in any order, even with the possibility of errors in the clues).
The seven tutorials that are presented are as follows:
Tutorial 1 - The simplest game setting (no errors, fixed order, no repetition)
Tutorial 2 - Code may contain blank spaces (no errors, fixed order, no repetition)
Tutorial 3 - Hints may contain errors (fixed order, no repetition)
Tutorial 4 - Game started from the middle (no errors, fixed order, no repetition)
Tutorial 5 - Digits / colours may be repeated (no errors, fixed order, each colour repeated at most 4 times)
Tutorial 6 - Digits / colours arranged in random order (no errors, random order, no repetition)
Tutorial 7 - Putting it all together (no errors, random order, each colour repeated at most 4 times)
Just thought I'd contribute my 90-odd lines of code. I've built upon @Jim Dennis's answer, mostly taking up the hint on symmetric scoring. I've implemented the minimax algorithm as described in the Mastermind Wikipedia article by Knuth, with one exception: I restrict my next move to the current list of possible solutions, as I found performance deteriorated badly when taking all possible solutions into account at each step. The current approach leaves me with a worst case of 6 guesses for any combination, each found in well under a second.
It's perhaps important to note that I make no restriction whatsoever on the hidden sequence, allowing for any number of repeats.
from itertools import product, tee
from random import choice

COLORS = 'red', 'green', 'blue', 'yellow', 'purple', 'pink'  #, 'grey', 'white', 'black', 'orange', 'brown', 'mauve', '-gap-'
HOLES = 4

def random_solution():
    """Generate a random solution."""
    return tuple(choice(COLORS) for i in range(HOLES))

def all_solutions():
    """Generate all possible solutions."""
    for solution in product(*tee(COLORS, HOLES)):
        yield solution

def filter_matching_result(solution_space, guess, result):
    """Filter solutions for matches that produce a specific result for a guess."""
    for solution in solution_space:
        if score(guess, solution) == result:
            yield solution

def score(actual, guess):
    """Calculate score of guess against actual."""
    result = []
    # Black pin for every color at right position
    actual_list = list(actual)
    guess_list = list(guess)
    black_positions = [number for number, pair in enumerate(zip(actual_list, guess_list)) if pair[0] == pair[1]]
    for number in reversed(black_positions):
        del actual_list[number]
        del guess_list[number]
        result.append('black')
    # White pin for every color at wrong position
    for color in guess_list:
        if color in actual_list:
            # Remove the match so we can't score it again for duplicate colors
            actual_list.remove(color)
            result.append('white')
    # Return a tuple, which is suitable as a dictionary key
    return tuple(result)

def minimal_eliminated(solution_space, solution):
    """For solution calculate how many possibilities from S would be eliminated for each possible colored/white score.
    The score of the guess is the least of such values."""
    result_counter = {}
    for option in solution_space:
        result = score(solution, option)
        if result not in result_counter.keys():
            result_counter[result] = 1
        else:
            result_counter[result] += 1
    return len(solution_space) - max(result_counter.values())

def best_move(solution_space):
    """Determine the best move in the solution space, being the one that restricts the number of hits the most."""
    elim_for_solution = dict((minimal_eliminated(solution_space, solution), solution) for solution in solution_space)
    max_eliminated = max(elim_for_solution.keys())
    return elim_for_solution[max_eliminated]

def main(actual=None):
    """Solve a game of mastermind."""
    # Generate random 'hidden' sequence if actual is None
    if actual == None:
        actual = random_solution()
    # Start the game off by choosing n unique colors
    current_guess = COLORS[:HOLES]
    # Initialize solution space to all solutions
    solution_space = all_solutions()
    guesses = 1
    while True:
        # Calculate current score
        current_score = score(actual, current_guess)
        #print '\t'.join(current_guess), '\t->\t', '\t'.join(current_score)
        if current_score == tuple(['black'] * HOLES):
            print guesses, 'guesses for\t', '\t'.join(actual)
            return guesses
        # Restrict solution space to exactly those hits that have current_score against current_guess
        solution_space = tuple(filter_matching_result(solution_space, current_guess, current_score))
        # Pick the candidate that will limit the search space most
        current_guess = best_move(solution_space)
        guesses += 1

if __name__ == '__main__':
    print max(main(sol) for sol in all_solutions())
Should anyone spot any possible improvements to the above code than I would be very much interested in your suggestions.
To work out the "worst" case: instead of using entropy, I look at the partition that has the maximum number of elements, then select the guess that minimizes this maximum => this gives me the minimum number of remaining possibilities when I am not lucky (which is what happens in the worst case).
This always solves the standard case in 5 attempts, but it is not a full proof that 5 attempts are really needed, because it could happen that at the next step a bigger set of possibilities would have given a better result than a smaller one (because it is easier to distinguish between its members).
Though for the "standard game" with 1680 possibilities I have a simple formal proof:
For the first step, the guess that gives the minimum for the partition with the maximum number is 0,0,1,1: 256. Playing 0,0,1,2 is not as good: 276.
For each subsequent guess there are 14 outcomes (3 placed and 1 misplaced is impossible), and 4 placed gives a partition of size 1. This means that in the best case (all partitions the same size) we will get a maximum partition that is at least (number of possibilities - 1)/13 (rounded up, because we are dealing with integers, so necessarily some partitions will be smaller and others bigger, and the maximum is rounded up).
If I apply this:
After first play (0,0,1,1) I am getting 256 left.
After second try: 20 = (256-1)/13
After third try : 2 = (20-1)/13
Then I have no choice but to try one of the two left for the 4th try.
If I am unlucky a fifth try is needed.
This proves we need at least 5 tries (but not that this is enough).
Here is a generic algorithm I wrote that uses numbers to represent the different colours. Easy to change, but I find numbers to be a lot easier to work with than strings.
You can feel free to use any whole or part of this algorithm, as long as credit is given accordingly.
Please note I'm only a Grade 12 Computer Science student, so I am willing to bet that there are definitely more optimized solutions available.
Regardless, here's the code:
import random

def main():
    userAns = raw_input("Enter your tuple, and I will crack it in six moves or less: ")
    play(ans=eval("("+userAns+")"), guess=(0,0,0,0), previousGuess=[])

def play(ans=(6,1,3,5), guess=(0,0,0,0), previousGuess=[]):
    if(guess==(0,0,0,0)):
        guess = genGuess(guess, ans)
    else:
        checker = -1
        while(checker==-1):
            guess, checker = genLogicalGuess(guess, previousGuess, ans)
    print guess, ans
    if not(guess==ans):
        previousGuess.append(guess)
        base = check(ans, guess)
        play(ans=ans, guess=base, previousGuess=previousGuess)
    else:
        print "Found it!"

def genGuess(guess, ans):
    guess = []
    for i in range(0, len(ans), 1):
        guess.append(random.randint(1,6))
    return tuple(guess)

def genLogicalGuess(guess, previousGuess, ans):
    newGuess = list(guess)
    count = 0
    # Generate guess
    for i in range(0, len(newGuess), 1):
        if(newGuess[i]==-1):
            newGuess.insert(i, random.randint(1,6))
            newGuess.pop(i+1)
    for item in previousGuess:
        for i in range(0, len(newGuess), 1):
            if((newGuess[i]==item[i]) and (newGuess[i]!=ans[i])):
                newGuess.insert(i, -1)
                newGuess.pop(i+1)
                count += 1
    if(count>0):
        return guess, -1
    else:
        guess = tuple(newGuess)
        return guess, 0

def check(ans, guess):
    base = []
    for i in range(0, len(zip(ans,guess)), 1):
        if not(zip(ans,guess)[i][0] == zip(ans,guess)[i][1]):
            base.append(-1)
        else:
            base.append(zip(ans,guess)[i][1])
    return tuple(base)

main()
Here's a link to pure Python solver for Mastermind(tm): http://code.activestate.com/recipes/496907-mastermind-style-code-breaking/ It has a simple version, a way to experiment with various guessing strategies, performance measurement, and an optional C accelerator.
The core of the recipe is short and sweet:
import random
from itertools import izip, imap

digits = 4
fmt = '%0' + str(digits) + 'd'
searchspace = tuple([tuple(map(int, fmt % i)) for i in range(0, 10**digits)])

def compare(a, b, imap=imap, sum=sum, izip=izip, min=min):
    count1 = [0] * 10
    count2 = [0] * 10
    strikes = 0
    for dig1, dig2 in izip(a, b):
        if dig1 == dig2:
            strikes += 1
        count1[dig1] += 1
        count2[dig2] += 1
    balls = sum(imap(min, count1, count2)) - strikes
    return (strikes, balls)

def rungame(target, strategy, verbose=True, maxtries=15):
    possibles = list(searchspace)
    for i in xrange(maxtries):
        g = strategy(i, possibles)
        if verbose:
            print "Out of %7d possibilities. I'll guess %r" % (len(possibles), g),
        score = compare(g, target)
        if verbose:
            print ' ---> ', score
        if score[0] == digits:
            if verbose:
                print "That's it. After %d tries, I won." % (i+1,)
            break
        possibles = [n for n in possibles if compare(g, n) == score]
    return i+1

def strategy_allrand(i, possibles):
    return random.choice(possibles)

if __name__ == '__main__':
    hidden_code = random.choice(searchspace)
    rungame(hidden_code, strategy_allrand)
Here is what the output looks like:
Out of 10000 possibilities. I'll guess (6, 4, 0, 9) ---> (1, 0)
Out of 1372 possibilities. I'll guess (7, 4, 5, 8) ---> (1, 1)
Out of 204 possibilities. I'll guess (1, 4, 2, 7) ---> (2, 1)
Out of 11 possibilities. I'll guess (1, 4, 7, 1) ---> (3, 0)
Out of 2 possibilities. I'll guess (1, 4, 7, 4) ---> (4, 0)
That's it. After 5 tries, I won.
My friend was considering a relatively simple case - 8 colors, no repeats, no blanks.
With no repeats there's no need for the max-entropy consideration: all guesses have the same entropy, and first-guess or random guessing both work fine.
Here's the full code to solve that variant:
# SET UP
import random
import itertools

colors = ('red', 'orange', 'yellow', 'green', 'blue', 'indigo', 'violet', 'ultra')

# ONE FUNCTION REQUIRED
def EvaluateCode(guess, secret_code):
    key = []
    for i in range(0, 4):
        for j in range(0, 4):
            if guess[i] == secret_code[j]:
                key += ['black'] if i == j else ['white']
    return key

# MAIN CODE
# choose secret code
secret_code = random.sample(colors, 4)
print('(shh - secret code is: ', secret_code, ')\n', sep='')
# create the full list of permutations
full_code_list = list(itertools.permutations(colors, 4))
N_guess = 0
while True:
    N_guess += 1
    print('Attempt #', N_guess, '\n-----------', sep='')
    # make a random guess
    guess = random.choice(full_code_list)
    print('guess:', guess)
    # evaluate the guess and get the key
    key = EvaluateCode(guess, secret_code)
    print('key:', key)
    if key == ['black', 'black', 'black', 'black']:
        break
    # remove codes from the code list that don't match the key
    full_code_list2 = []
    for i in range(0, len(full_code_list)):
        if EvaluateCode(guess, full_code_list[i]) == key:
            full_code_list2 += [full_code_list[i]]
    full_code_list = full_code_list2
    print('N remaining: ', len(full_code_list), '\n', full_code_list, '\n', sep='')

print('\nMATCH after', N_guess, 'guesses\n')
