I'm trying to implement a simple Markov Chain Monte Carlo in Python 2.7, using numpy. The goal is to find the solution to the "Knapsack Problem": given a set of m objects of value v_i and weight w_i, and a bag with holding capacity b, find the greatest value of objects that can fit into your bag, and which objects those are. I started coding in the summer, and my knowledge is extremely lopsided, so I apologize if I'm missing something obvious; I'm self-taught and have been jumping all over the place.
The code for the system is as follows (I split it into parts in an attempt to figure out what's going wrong).
import numpy as np
import random

def flip_z(sackcontents):
    ##This picks a random object, and changes whether it's been selected or not.
    pick=random.randint(0,len(sackcontents)-1)
    clone_z=sackcontents
    np.put(clone_z,pick,1-clone_z[pick])
    return clone_z

def checkcompliance(checkedz,checkedweights,checkedsack):
    ##This checks to see whether a given configuration is overweight
    weightVector=np.dot(checkedz,checkedweights)
    weightSum=np.sum(weightVector)
    if (weightSum > checkedsack):
        return False
    else:
        return True

def getrandomz(length):
    ##I use this to get a random starting configuration.
    ##It's not really important, but it does remove the burden of choice.
    z=np.array([])
    for i in xrange(length):
        if random.random() > 0.5:
            z=np.append(z,1)
        else:
            z=np.append(z,0)
    return z

def checkvalue(checkedz,checkedvalue):
    ##This checks how valuable a given configuration is.
    wealthVector= np.dot(checkedz,checkedvalue)
    wealthsum= np.sum(wealthVector)
    return wealthsum

def McKnapsack(weights, values, iterations,sack):
    z_start=getrandomz(len(weights))
    z=z_start
    moneyrecord=0.
    zrecord=np.array(["error if you see me"])
    failures=0.
    for x in xrange(iterations):
        current_z= np.array([])
        current_z=flip_z(z)
        current_clone=current_z
        if (checkcompliance(current_clone,weights,sack))==True:
            z=current_clone
            if checkvalue(current_z,values)>moneyrecord:
                moneyrecord=checkvalue(current_clone,values)
                zrecord= current_clone
        else:
            failures+=1
    print "The winning knapsack configuration is %s" %(zrecord)
    print "The highest value of objects contained is %s" %(moneyrecord)

testvalues1=np.array([3,8,6])
testweights1= np.array([1,2,1])
McKnapsack(testweights1,testvalues1,60,2.)
What should happen is the following: with a maximum carrying capacity of 2, it should randomly switch between the different potential bag configurations, of which there are 2^3 = 8 with the test weights and values I've fed it, with each one or zero in the z representing having or not having a given item. It should discard the options with too much weight, while keeping track of the ones with the highest value and acceptable weight. The correct answer would be 1,0,1 as the configuration, with 9 as the maximized value. I get 9 for the value every time when I use even moderately high numbers of iterations, but the configurations seem completely random, and somehow break the weight rule. I've double-checked my checkcompliance function with a lot of test arrays, and it seems to work. How are these faulty, overweight configurations getting past my if statements and into my zrecord?
The trick is that z (and therefore also current_z and also zrecord) end up always referring to the exact same object in memory. flip_z modifies this object in-place via np.put.
Once you find a new combination that increases your moneyrecord, you set a reference to it -- but then in the subsequent iteration you go ahead and change the data at that same reference.
In other words, lines like
current_clone=current_z
zrecord= current_clone
do not copy, they only make yet another alias to the same data in memory.
One way to fix this is to explicitly copy that combination once you find it's a winner:
if checkvalue(current_z, values) > moneyrecord:
    moneyrecord = checkvalue(current_clone, values)
    zrecord = current_clone.copy()
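The aliasing is easy to reproduce in isolation (a minimal sketch; the variable names here are illustrative, not from the question's code):

```python
import numpy as np

# `alias` is the very same array object as `z`, so an in-place np.put
# through either name changes both; .copy() breaks the link.
z = np.array([0., 1., 0.])
alias = z                  # no copy: two names, one array
np.put(alias, 0, 1)        # in-place write through the alias
# z is now [1., 1., 0.] as well, because z and alias share memory

snapshot = z.copy()        # an independent copy of the data
np.put(z, 1, 0)            # further in-place writes...
# ...leave snapshot untouched: it still holds [1., 1., 0.]
```

The same reasoning applies to flip_z: because it mutates its argument in place, every name bound to that array sees the flip.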
I have been seriously trying to create a genetic program that will evolve to play tic-tac-toe in an acceptable way. I'm using the genome to generate a function that will then take the board as input and output the result... But it's not working.
Can this program be written in less than 500 lines of code (including blank lines and documentation)? Perhaps my problem is that I'm generating AIs that are too simple.
My research
A Genetic Algorithm for Tic-Tac-Toe (very different from my approach).
http://www.tropicalcoder.com/GeneticAlgorithm.htm (too abstract).
In quite a good number of websites there are references to 'neural networks'. Are they really required?
Important disclaimers
This is NOT homework of any kind, just a personal project for the sake of learning something cool.
This is NOT a 'give me the codz plz', I am looking for high level suggestions. I explicitly don't want ready-made solutions as the answers.
Please give me some help and insight into this 'genetic programming' concept applied to this specific simple problem.
@OnABauer: I think that I am using genetic programming because, quoting Wikipedia:
In artificial intelligence, genetic programming (GP) is an
evolutionary algorithm-based methodology inspired by biological
evolution to find computer programs that perform a user-defined task.
And I am trying to generate a program (in this case a function) that will perform the task of playing tic-tac-toe. You can see this because the output of the most important function, genetic_process, is a genome that is then converted to a function; if I understood correctly, this is genetic programming because the output is a function.
Program introspection and possible bugs/problems
The code runs with no errors or crashes. The problem is that in the end what I get is an incompetent AI that attempts to make illegal moves and is punished by losing each and every time. It is no better than random.
Possibly it is because my AI function is so simple: just making calculations on the stored values of the squares with no conditionals.
High level description
What is your chromosome meant to represent?
My chromosome represents a list of functions that will then be used to reduce over the array of the board stored as trinary. OK, let me make an example:
* Chromosome is: amMMMdsa (length of chromosome must be 8).
1. The first step is converting this to functions following the lookup at the top called LETTERS_TO_FUNCTIONS, which gives the functions: [op.add,op.mul,op.mod,op.mod,op.mod,op.floordiv,op.sub,op.add]
2. The second step is converting the board to a trinary representation. So let's say the board is "OX  XOX  "; we will get [2, 3, 1, 1, 3, 2, 3, 1, 1]
3. The third step is reducing the trinary representation using the functions obtained above. That is best explained by the function down below:
def reduce_by_all_functions(numbers_list,functions_list):
    """
    Applies all the functions in a list to a list of numbers.

    >>> reduce_by_all_functions([3,4,6],[op.add,op.sub])
    1
    >>> reduce_by_all_functions([6,2,4],[op.mul,op.floordiv])
    3
    """
    if len(functions_list) != len(numbers_list) - 1:
        raise ValueError("The functions must be exactly one less than the numbers")
    result = numbers_list[0]
    for index,n in enumerate(numbers_list[1:]):
        result = functions_list[index](result,n)
    return result
Thus yielding the result 0, which means the AI decided to go in the first square.
What is your fitness function?
Luckily this is easy to answer.
def ai_fitness(genome,accuracy):
    """
    Returns how good an ai is by letting it play against a random ai many times.
    The higher the value, the better the ai.
    """
    ai = from_genome_to_ai(genome)
    return decide_best_ai(ai,random_ai,accuracy)
How does your mutation work?
The child inherits 80% of the genes from the father and 20% of the genes from the mother. There is no random mutation beyond that.
And how is that reduce_by_all_functions() being used? I see that it
takes a board and a chromosome and returns a number. How is that
number used, what is it meant to represent, and... why is it being
returned modulo 9?
reduce_by_all_functions() is used to actually apply the functions previously obtained from the chromosome. The number is the square the AI wants to take. It is taken modulo 9 because it must be between 0 and 8, since the board has 9 squares.
My code so far:
import doctest
import random
import operator as op

SPACE = ' '
MARK_OF_PLAYER_1 = "X"
MARK_OF_PLAYER_2 = "O"
EMPTY_MARK = SPACE

BOARD_NUMBERS = """
The moves are numbered as follows:
0 | 1 | 2
---------
3 | 4 | 5
---------
6 | 7 | 8
"""

WINNING_TRIPLETS = [ (0,1,2), (3,4,5), (6,7,8),
                     (0,3,6), (1,4,7), (2,5,8),
                     (0,4,8), (2,4,6) ]

LETTERS_TO_FUNCTIONS = {
    'a': op.add,
    'm': op.mul,
    'M': op.mod,
    's': op.sub,
    'd': op.floordiv
}

def encode_board_as_trinary(board):
    """
    Given a board, replaces the symbols with the numbers
    1,2,3 in order to make further processing easier.

    >>> encode_board_as_trinary("OX  XOX  ")
    [2, 3, 1, 1, 3, 2, 3, 1, 1]
    >>> encode_board_as_trinary("   OOOXXX")
    [1, 1, 1, 2, 2, 2, 3, 3, 3]
    """
    board = ''.join(board)
    board = board.replace(MARK_OF_PLAYER_1,'3')
    board = board.replace(MARK_OF_PLAYER_2,'2')
    board = board.replace(EMPTY_MARK,'1')
    return list((int(square) for square in board))

def create_random_genome(length):
    """
    Creates a random genome (that is a sequence of genes, from which
    the ai will be generated). It consists of random letters taken
    from the keys of LETTERS_TO_FUNCTIONS.

    >>> random.seed("EXAMPLE")
    # Test is not possible because even with the same
    # seed it gives different results each run...
    """
    letters = [letter for letter in LETTERS_TO_FUNCTIONS]
    return [random.choice(letters) for _ in range(length)]

def reduce_by_all_functions(numbers_list,functions_list):
    """
    Applies all the functions in a list to a list of numbers.

    >>> reduce_by_all_functions([3,4,6],[op.add,op.sub])
    1
    >>> reduce_by_all_functions([6,2,4],[op.mul,op.floordiv])
    3
    """
    if len(functions_list) != len(numbers_list) - 1:
        raise ValueError("The functions must be exactly one less than the numbers")
    result = numbers_list[0]
    for index,n in enumerate(numbers_list[1:]):
        result = functions_list[index](result,n)
    return result

def from_genome_to_ai(genome):
    """
    Creates an AI following the rules written in the genome (the same as DNA does).
    Each letter corresponds to a function as written in LETTERS_TO_FUNCTIONS.
    The resulting ai will reduce the board using the functions obtained.

    >>> ai = from_genome_to_ai("amMaMMss")
    >>> ai("XOX OXO")
    4
    """
    functions = [LETTERS_TO_FUNCTIONS[gene] for gene in genome]
    def ai(board):
        return reduce_by_all_functions(encode_board_as_trinary(board),functions) % 9
    return ai

def take_first_empty_ai(board):
    """
    Very simple example ai for tic-tac-toe
    that takes the first empty square.

    >>> take_first_empty_ai(' OX O XXO')
    0
    >>> take_first_empty_ai('XOX O XXO')
    3
    """
    return board.index(SPACE)

def get_legal_moves(board):
    """
    Given a tic-tac-toe board returns the indexes of all
    the squares in which it is possible to play, i.e.
    the empty squares.

    >>> list(get_legal_moves('XOX O XXO'))
    [3, 5]
    >>> list(get_legal_moves('X O XXO'))
    [1, 2, 3, 5]
    """
    for index,square in enumerate(board):
        if square == SPACE:
            yield index

def random_ai(board):
    """
    The simplest possible tic-tac-toe 'ai': it just
    chooses a random legal move.

    >>> random.seed("EXAMPLE")
    >>> random_ai('X O XXO')
    3
    """
    legal_moves = list(get_legal_moves(board))
    return random.choice(legal_moves)

def printable_board(board):
    """
    User Interface function:
    returns an easy to understand representation
    of the board.
    """
    return """
{} | {} | {}
---------
{} | {} | {}
---------
{} | {} | {}""".format(*board)

def human_interface(board):
    """
    Allows the user to play tic-tac-toe.
    Shows him the board, the board numbers and then asks
    him to select a move.
    """
    print("The board is:")
    print(printable_board(board))
    print(BOARD_NUMBERS)
    return(int(input("Your move is: ")))

def end_result(board):
    """
    Evaluates a board returning:
        0.5 if it is a tie
        1 if MARK_OF_PLAYER_1 won # default to 'X'
        0 if MARK_OF_PLAYER_2 won # default to 'O'
    else, if nothing of the above applies, return None.

    >>> end_result('XXX OXO')
    1
    >>> end_result(' O X X O')
    None
    >>> end_result('OXOXXOXOO')
    0.5
    """
    if SPACE not in board:
        return 0.5
    for triplet in WINNING_TRIPLETS:
        if all(board[square] == 'X' for square in triplet):
            return 1
        elif all(board[square] == 'O' for square in triplet):
            return 0

def game_ended(board):
    """
    Small syntactic sugar function to tell if the game has ended,
    i.e. a tie or a win occurred.
    """
    return end_result(board) is not None

def play_ai_tic_tac_toe(ai_1,ai_2):
    """
    Plays a game between two different ai-s, returning the result.
    It should be noted that this function can also be used to let the user
    play against an ai, just call it like: play_ai_tic_tac_toe(random_ai,human_interface)

    >>> play_ai_tic_tac_toe(take_first_empty_ai,take_first_empty_ai)
    1
    """
    board = [SPACE for _ in range(9)]
    PLAYER_1_WIN = 1
    PLAYER_1_LOSS = 0
    while True:
        for ai,check in ( (ai_1,MARK_OF_PLAYER_1), (ai_2,MARK_OF_PLAYER_2) ):
            move = ai(board)
            # If the move is invalid you lose
            if board[move] != EMPTY_MARK:
                if check == MARK_OF_PLAYER_1:
                    return PLAYER_1_LOSS
                else:
                    return PLAYER_1_WIN
            board[move] = check
            if game_ended(board):
                return end_result(board)

def loop_play_ai_tic_tac_toe(ai_1,ai_2,games_number):
    """
    Plays games_number games between ai_1 and ai_2.
    """
    return sum(( play_ai_tic_tac_toe(ai_1,ai_2)) for _ in range(games_number))

def decide_best_ai(ai_1,ai_2,accuracy):
    """
    Returns the number of times the first ai is better than the second:
    ex. if the output is 1.4, the first ai is 1.4 times better than the second.

    >>> decide_best_ai(take_first_empty_ai,random_ai,100) > 0.80
    True
    """
    return sum((loop_play_ai_tic_tac_toe(ai_1,ai_2,accuracy//2),
                loop_play_ai_tic_tac_toe(ai_2,ai_1,accuracy//2))) / (accuracy // 2)

def ai_fitness(genome,accuracy):
    """
    Returns how good an ai is by letting it play against a random ai many times.
    The higher the value, the better the ai.
    """
    ai = from_genome_to_ai(genome)
    return decide_best_ai(ai,random_ai,accuracy)

def sort_by_fitness(genomes,accuracy):
    """
    Syntactic sugar for sorting a list of genomes based on the fitness.
    High accuracy will yield a more accurate ordering but at the cost of more
    computation time.
    """
    def fit(genome):
        return ai_fitness(genome,accuracy)
    return list(sorted(genomes, key=fit, reverse=True))
    # probable bug-fix because high fitness means better individual

def make_child(a,b):
    """
    Returns a mix of chromosome a and chromosome b.
    There is a bias towards chromosome a because I think that
    a too-weird son is going to be bad.
    """
    result = []
    for index,char_a in enumerate(a):
        char_b = b[index]
        if random.random() > 0.8:
            result.append(char_a)
        else:
            result.append(char_b)
    return result

def genetic_process(population_size,generation_number,accuracy,elite_number):
    """
    A full genetic process yielding a good tic-tac-toe ai. # not yet

    # Parameters:
    #     population_size: the number of ai-s that you allow to be alive at once
    #     generation_number: the number of generations of the genetic process
    #     accuracy: how well the ai-s are ordered;
    #         low accuracy means that a good ai may be considered bad or
    #         viceversa. High accuracy is computationally costly.
    #     elite_number: the number of best programmes that get to reproduce
    #         at each generation.
    # Return:
    #     A genome for a tic-tac-toe ai
    """
    pool = [create_random_genome(9-1) for _ in range(population_size)]
    for generation in range(generation_number):
        best_individuals = sort_by_fitness(pool,accuracy)[:elite_number]
        the_best = best_individuals[0]
        for good_individual in best_individuals:
            pool.append(make_child(the_best,good_individual))
        pool = sort_by_fitness(pool,accuracy)[:population_size]
    return the_best

def _test():
    """
    Tests all the script by running the
    >>> 2 + 2 # code formatted like this
    4
    """
    doctest.testmod()

def main():
    """
    A simple demo to let the user play against a genetic opponent.
    """
    print("The genetic ai is being created... please wait.")
    genetic_ai = from_genome_to_ai(genetic_process(50,4,40,25))
    play_ai_tic_tac_toe(genetic_ai,human_interface)

if __name__ == '__main__':
    main()
First and foremost, I am obligated to say that Tic Tac Toe is really too simple a problem to reasonably attack with a genetic program. You simply don't need the power of a GP to win Tic Tac Toe; you can solve it with a brute force lookup table, or a simple game tree.
That said, if I understand correctly, your basic notion is this:
1) Create chromosomes of length 8, where each gene is an arithmetic operation, and the 8-gene chromosome acts on each board as a board evaluation function. That is, a chromosome takes in a board representation, and spits out a number representing the goodness of that board.
It's not perfectly clear that this is what you're doing, because your board representations are each 9 integers (1, 2, 3 only) but your examples are given in terms of the "winning triplets", which are 3 integers (0 through 8).
2) Start the AI up and, on the AI's turn it should get a list of all legal moves, evaluate the board per its chromosome for each legal move and... take that number, modulo 9, and use that as the next move? Surely there's some code in there to handle the case where that move is illegal....
3) Let a bunch of these chromosome representations either play a standard implementation, or play each other, and determine the fitness based on the number of wins.
4) Once a whole generation of chromosomes has been evaluated, create a new generation. It's not clear to me how you are selecting the parents from the pool, but once the parents are selected, a child is produced by just taking individual genes from the parents by an 80-20 rule.
Your overall high level strategy is sound, but there are a lot of conceptual and implementation flaws in the execution. First, let's talk about fully observable games and simple ways to make AIs for them. If the game is very simple (such as Tic Tac Toe) you can simply make a brute force minimax game tree, such as this. TTT is simple enough that even your cell phone can go all the way to the bottom of the tree very quickly. You can even solve it by brute force with a look up table: Just make a list of all board positions and the response to each one.
When the games get larger-- think checkers, chess, go-- that is no longer true, and one of the ways around this is to develop what's called a board evaluation function. It is a function which takes a board position and returns a number, usually with higher being better for one player and lower being better for the other. One then executes a search to certain acceptable depth and aims for the highest (say) board evaluation function.
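As a toy illustration of the concept (not code from the question), a hand-written tic-tac-toe evaluation function might count the winning lines still open to each player:

```python
# A toy hand-written evaluation function for tic-tac-toe: score a board by
# the marks X has on lines O hasn't blocked, minus the symmetric count for O.
# (This is an illustration of the board-evaluation idea, not the OP's code.)
LINES = [(0,1,2), (3,4,5), (6,7,8),
         (0,3,6), (1,4,7), (2,5,8),
         (0,4,8), (2,4,6)]

def evaluate(board):
    """board is a 9-char string of 'X', 'O', ' '. Higher favors X."""
    score = 0
    for line in LINES:
        cells = [board[i] for i in line]
        if 'O' not in cells:      # line still winnable by X
            score += cells.count('X')
        if 'X' not in cells:      # line still winnable by O
            score -= cells.count('O')
    return score
```

A search would then prefer moves leading to boards with a higher (for X) or lower (for O) score.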
This raises the question: how do we come up with the board evaluation function? Originally, one asked experts at the game to develop these functions for you. There is a great paper by Chellapilla and Fogel which is similar to what you want to do, for checkers -- they use neural networks to determine the board evaluation functions, and, critically, these neural networks are encoded as genomes and evolved. They are then used in depth-4 search trees. The end results are very competitive against human players.
You should read that paper.
What you are trying to do, I think, is very similar, except instead of coding a neural network as a chromosome, you're trying to code up a very restricted algebraic expression, always of the form:
((((((((arg1 op1 arg2) op2 arg3) op3 arg4) op4 arg5) op5 arg6) op6 arg7) op7 arg8) op8 arg)
... and then you're using it mod 9 to pick a move.
Now let's talk about genetic algorithms, genetic programs, and the creation of new children. The whole idea in evolutionary techniques is to combine the best attributes of two hopefully-good solutions in the hopes that they will be even better, without getting stuck in a local maximum.
Generally, this is done by tournament selection, crossover, and mutation. Tournament selection means choosing each parent as the winner of a small random tournament, so that fitter individuals are selected more often. Crossover means dividing the chromosomes into two usually contiguous regions and taking one region from one parent and the other region from the other parent. (Why contiguous? Because of Holland's Schema Theorem.) Mutation means occasionally changing a gene, as a means of maintaining population diversity.
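Those three operators can be sketched for fixed-length string genomes like the ones in the question (the gene alphabet, rates, and function names here are mine, for illustration):

```python
import random

GENES = "amMsd"  # the question's gene alphabet, reused for illustration

def tournament_select(pool, fitness, k=3):
    # pick k genomes at random and keep the fittest:
    # fitter individuals win tournaments more often
    return max(random.sample(pool, k), key=fitness)

def crossover(a, b):
    # one-point crossover: contiguous halves, per the schema theorem
    point = random.randint(1, len(a) - 1)
    return a[:point] + b[point:]

def mutate(genome, rate=0.05):
    # occasionally replace a gene to keep the population diverse
    return "".join(random.choice(GENES) if random.random() < rate else g
                   for g in genome)

def next_generation(pool, fitness):
    # each child: two tournament-selected parents, crossover, then mutation
    return [mutate(crossover(tournament_select(pool, fitness),
                             tournament_select(pool, fitness)))
            for _ in pool]
```

The exact rates and tournament size are tuning knobs; the structure is what matters.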
Now let's look at what you're doing:
1) Your board evaluation function-- the function that your chromosome turns into, which acts on the board positions-- is highly restricted and very arbitrary. There's not much rhyme or reason to assigning 1, 2, and 3 as those numbers, but that might possibly be okay. The bigger flaw is that your functions are a terribly restricted part of the overall space of functions. They are always the same length, and the parse tree always looks the same.
There's no reason to expect anything useful to be in this restrictive space. What's needed is to come up with a scheme which allows for a much more general set of parse trees, including crossover and mutation schemes. You should look up some papers or books by John Koza for ideas on this topic.
Note that Chellapilla and Fogel have fixed forms of functions as well, but their chromosomes are substantially larger than their board representations. A checkers board has 32 playable spaces, and each space can have 5 states. But their neural network had some 85 nodes, and the chromosome comprised the connection weights of those nodes-- hundreds, if not thousands, of values.
2) Then there's this whole modulo 9 thing. I don't understand why you're doing that. Don't do that. You're just scrambling whatever information might be in your chromosomes.
3) Your function to make new children is bad. Even as a genetic algorithm, you should be dividing the chromosomes in two (at random points) and taking part of one parent from one side, and the other part from the other parent on the other side. For genetic programming, which is what you're doing, there are analogous strategies for doing crossovers on parse trees. See Koza.
You must include mutation, or you will almost certainly get sub-optimal results.
4a) If you evaluate the fitness by playing against a competent AI, then realize that your chromosomes will never, ever win. They will lose, or they will draw. A competent AI will never lose. Moreover, it is likely that your AIs will lose all the time, and initial generations may all come out as equally (catastrophically) poor players. It's not impossible to get yourself out of that hole, but it will be hard.
4b) On the other hand, if, like Chellapilla and Fogel, you play the AIs against themselves, then you'd better make certain that the AIs can play either X or O. Otherwise you're never going to make any progress at all.
5) Finally, even if all these concerns are addressed, I'm not convinced this will get great results. Note that the checkers example searches to a depth of 4, which is not a big horizon in a game of checkers that might last 20 or 30 moves.
TTT can only ever last 9 moves.
If you don't do a search tree and just go for the highest board evaluation function, you might get something that works. You might not. I'm not sure. If you search to depth 4, you might as well skip to a full search to level 9 and do this conventionally.
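For reference, the conventional full-depth search mentioned above fits in a few lines as a negamax recursion (a sketch; the helper names are mine, not from the question's code):

```python
# Full-depth negamax for tic-tac-toe. `board` is a 9-char string of
# 'X', 'O', ' '; returns (score, move) from `player`'s point of view,
# with 1 = forced win, 0 = draw, -1 = forced loss.
LINES = [(0,1,2), (3,4,5), (6,7,8),
         (0,3,6), (1,4,7), (2,5,8),
         (0,4,8), (2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def negamax(board, player):
    w = winner(board)
    if w is not None:
        return (1 if w == player else -1), None
    moves = [i for i, sq in enumerate(board) if sq == ' ']
    if not moves:
        return 0, None                       # board full: draw
    best = (-2, None)
    other = 'O' if player == 'X' else 'X'
    for m in moves:
        child = board[:m] + player + board[m+1:]
        score, _ = negamax(child, other)     # opponent's best reply
        if -score > best[0]:
            best = (-score, m)
    return best
```

Searching the whole tree like this is exactly the "full search to level 9" option: it plays perfectly, with no evolved evaluation function needed.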
I am solving the homework-1 of Caltech Machine Learning Course (http://work.caltech.edu/homework/hw1.pdf) . To solve ques 7-10 we need to implement a PLA. This is my implementation in python:
import sys,math,random

w=[]       # stores the weights
data=[]    # stores the vector X(x1,x2,...)
output=[]  # stores the output(y)

# returns 1 if dot product is more than 0
def sign_dot_product(x):
    global w
    dot=sum([w[i]*x[i] for i in xrange(len(w))])
    if(dot>0):
        return 1
    else :
        return -1

# checks if a point is misclassified
def is_misclassified(rand_p):
    return (True if sign_dot_product(data[rand_p])!=output[rand_p] else False)

# loads data in the following format:
# x1 x2 ... y
# In the present case for d=2
# x1 x2 y
def load_data():
    f=open("data.dat","r")
    global w
    for line in f:
        data_tmp=([1]+[float(x) for x in line.split(" ")])
        data.append(data_tmp[0:-1])
        output.append(data_tmp[-1])

def train():
    global w
    w=[ random.uniform(-1,1) for i in xrange(len(data[0]))] # initializes w with random weights
    iter=1
    while True:
        rand_p=random.randint(0,len(output)-1) # randomly picks a point
        check=[0]*len(output) # check is a list. The ith location is 1 if the ith point is correctly classified
        while not is_misclassified(rand_p):
            check[rand_p]=1
            rand_p=random.randint(0,len(output)-1)
            if sum(check)==len(output):
                print "All points successfully satisfied in ",iter-1," iterations"
                print iter-1,w,data[rand_p]
                return iter-1
        sign=output[rand_p]
        w=[w[i]+sign*data[rand_p][i] for i in xrange(len(w))] # changing weights
        if iter>1000000:
            print "greater than 1000"
            print w
            return 10000000
        iter+=1

load_data()

def simulate():
    #tot_iter=train()
    tot_iter=sum([train() for x in xrange(100)])
    print float(tot_iter)/100

simulate()
The problem: according to the answer to question 7, it should take around 15 iterations for the perceptron to converge on a training set of this size, but my implementation takes an average of 50000 iterations. The training data is supposed to be randomly generated, but I am generating data for simple lines such as x=4, y=2, etc. Is this the reason why I am getting the wrong answer, or is something else wrong? A sample of my training data (separable using y=2):
1 2.1 1
231 100 1
-232 1.9 -1
23 232 1
12 -23 -1
10000 1.9 -1
-1000 2.4 1
100 -100 -1
45 73 1
-34 1.5 -1
It is in the format x1 x2 output(y)
It is clear that you are doing a great job learning both Python and classification algorithms with your effort.
However, some stylistic inefficiencies in your code make it difficult to help you, and they create a chance that part of the problem is a miscommunication between you and the professor.
For example, does the professor wish for you to use the Perceptron in "online mode" or "offline mode"? In "online mode" you should move sequentially through the data point and you should not revisit any points. From the assignment's conjecture that it should require only 15 iterations to converge, I am curious if this implies the first 15 data points, in sequential order, would result in a classifier that linearly separates your data set.
By instead sampling randomly with replacement, you might be causing yourself to take much longer (although, depending on the distribution and size of the data sample, this is admittedly unlikely since you'd expect roughly that any 15 points would do about as well as the first 15).
The other issue is that after you detect a correctly classified point (cases when not is_misclassified evaluates to True) if you then witness a new random point that is misclassified, then your code will kick down into the larger section of the outer while loop, and then go back to the top where it will overwrite the check vector with all 0s.
This means that the only way your code will detect that it has correctly classified all the points is if the particular random sequence that it evaluates them (in the inner while loop) happens to be a string of all 1's except for the miraculous ability that on any particular 0, on that pass through the array, it classifies correctly.
I can't quite formalize why I think that will make the program take much longer, but it seems like your code is requiring a much stricter form of convergence, where it sort of has to learn everything all at once on one monolithic pass way late in the training stage after having been updated a bunch already.
One easy way to check if my intuition about this is crappy would be to move the line check=[0]*len(output) outside of the while loop all together and only initialize it one time.
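Alternatively, the bookkeeping can be dropped entirely by testing convergence directly: rescan for misclassified points after each update and stop when none remain. A sketch of that structure, with toy data of my own (not the assignment's):

```python
import random

def sign(x):
    return 1 if x > 0 else -1

def pla(data, output, max_updates=100000):
    # Start from zero weights; any start works for separable data.
    w = [0.0] * len(data[0])
    updates = 0
    while updates < max_updates:
        # Direct convergence test: scan the whole set for misclassified points.
        bad = [i for i in range(len(data))
               if sign(sum(w[j] * data[i][j] for j in range(len(w)))) != output[i]]
        if not bad:
            return updates, w          # converged: nothing misclassified
        i = random.choice(bad)         # classic PLA update on one bad point
        w = [w[j] + output[i] * data[i][j] for j in range(len(w))]
        updates += 1
    return updates, w

# Toy data separable by the line x2 = 2, with a bias term prepended
# as in the question's load_data:
points = [(1, 3), (2, 4), (-1, 1), (0, 0), (2, 1)]
data = [[1.0, x1, x2] for x1, x2 in points]
output = [1 if x2 > 2 else -1 for _, x2 in points]
updates, w = pla(data, output)
```

The full scan per update costs more per step, but it counts exactly the quantity the homework asks about: the number of weight updates until convergence.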
Some general advice to make the code easier to manage:
Don't use global variables. Instead, let the function that loads and preps the data return things.
There are a few places where you say, for example,
return (True if sign_dot_product(data[rand_p])!=output[rand_p] else False)
This kind of thing can be simplified to
return sign_dot_product(data[rand_p]) != output[rand_p]
which is easier to read and conveys what criteria you're trying to check for in a more direct manner.
I doubt efficiency plays an important role since this seems to be a pedagogical exercise, but there are a number of ways to refactor your use of list comprehensions that might be beneficial. And if possible, just use NumPy which has native array types. Witnessing how some of these operations have to be expressed with list operations is lamentable. Even if your professor doesn't want you to implement with NumPy because she or he is trying to teach you pure fundamentals, I say just ignore them and go learn NumPy. It will help you with jobs, internships, and practical skill with these kinds of manipulations in Python vastly more than fighting with the native data types to do something they were not designed for (array computing).
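For instance, the dot products and updates above collapse into one-liners with NumPy arrays (a sketch; the array names and toy data are mine):

```python
import numpy as np

# The same perceptron pieces with NumPy arrays: rows of X carry a
# leading 1 as the bias term, y holds the +1/-1 labels.
X = np.array([[1, 1, 3], [1, 2, 4], [1, -1, 1], [1, 0, 0], [1, 2, 1]],
             dtype=float)
y = np.array([1, 1, -1, -1, -1])

w = np.zeros(X.shape[1])
for _ in range(1000):
    preds = np.where(X @ w > 0, 1, -1)   # every dot product in one expression
    bad = np.flatnonzero(preds != y)     # indices of misclassified points
    if bad.size == 0:
        break                            # converged
    w += y[bad[0]] * X[bad[0]]           # perceptron update on one bad point
```

The whole-dataset prediction `X @ w` replaces the per-point list-comprehension dot product, and the update is a single vector addition.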
I have been working on this project for a couple months right now. The ultimate goal of this project is to evaluate an entire digital logic circuit similar to functional testing; just to give a scope of the problem. The topic I created here deals with the issue I'm having with performance of analyzing a boolean expression. For any gate inside a digital circuit, it has an output expression in terms of the global inputs. EX: ((A&B)|(C&D)^E). What I want to do with this expression is then calculate all possible outcomes and determine how much influence each input has on the outcome.
The fastest way that I have found was building a truth table as a matrix and looking at certain rows (I won't go into the specifics of that algorithm as it's off-topic). The problem with that is that once the number of unique inputs goes above 26-27 or so, the memory usage is well beyond 16GB (the most my computer has). You might say "buy more RAM", but every increase of one input doubles the memory usage. Some of the expressions I analyze have well over 200 unique inputs...
The method I use right now uses compile to parse the expression from a string. Then I create an array of all the inputs found by compile. Next I generate, row by row, a list of "True"/"False" values randomly chosen from the sample of possible values (this way it is equivalent to rows in a truth table if the sample size is the same as the full range, and it lets me limit the sample size when things take too long to calculate). These values are then zipped with the input names and used to evaluate the expression, giving the initial result. After that I go column by column through the random boolean list, flip one boolean, zip the row with the input names again, and evaluate again to determine whether the result changed.
So my question is this: is there a faster way? I have included the code that performs the work. I have tried regular expressions to find and replace, but that is always slower (from what I've seen). Take into account that the inner for loop will run N times, where N is the number of unique inputs, and I limit the outer for loop to 2^15 iterations when N > 15. So this turns into eval being executed Min(2^N, 2^15) * (1 + N) times...
As an update to clarify what I am asking exactly (Sorry for any confusion). The algorithm/logic for calculating what I need is not the issue. I am asking for an alternative to the python built-in 'eval' that will perform the same thing faster. (take a string in the format of a boolean expression, replace the variables in the string with the values in the dictionary and then evaluate the string).
#value is expression as string
comp = compile(value.strip(), '-', 'eval')
inputs = comp.co_names
control = [0] * len(inputs)
#Sequences of random boolean values to be used
random_list = gen_rand_bits(len(inputs))
for row in random_list:
    valuedict = dict(zip(inputs, row))
    answer = eval(comp, valuedict)
    for column in range(len(row)):
        row[column] = ~row[column]
        newvaluedict = dict(zip(inputs, row))
        newanswer = eval(comp, newvaluedict)
        row[column] = ~row[column]
        if answer != newanswer:
            control[column] = control[column] + 1
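As an aside, one common way to shrink the per-row cost in code like the above is to pay the eval price only once: build an ordinary function from the expression, then call it per row instead of re-running eval. This is only a sketch under assumed names (`expression` and `inputs` stand in for the compiled string and its co_names), not the asker's exact setup:

```python
# Sketch: compile the expression into a lambda once, then evaluate each
# row with a plain (much cheaper) function call instead of eval-per-row.
expression = '(A & B) | C'          # placeholder for the real expression
inputs = ('A', 'B', 'C')            # placeholder for comp.co_names

# eval() runs exactly once, at build time.
func = eval('lambda %s: %s' % (', '.join(inputs), expression))

row = (True, False, True)
print(func(*row))  # True
```

The flip-and-compare inner loop then becomes repeated calls to `func` with one argument inverted, with no dict construction or eval overhead per evaluation.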
My question:
Just to make sure that I understand this correctly: Your actual problem is to determine the relative influence of each variable within a boolean expression on the outcome of said expression?
OP answered:
That is what I am calculating but my problem is not with how I calculate it logically but with my use of the python eval built-in to perform evaluating.
So, this seems to be a classic XY problem. Your actual problem is to determine the relative influence of each variable within a boolean expression. You have attempted to solve this in a rather ineffective way, and now that you actually “feel” the inefficiency (in both memory usage and run time), you are looking for ways to improve your solution instead of looking for better ways to solve your original problem.
In any case, let’s first look at how you are trying to solve this. I’m not exactly sure what gen_rand_bits is supposed to do, so I can’t really take that into account. But still, you are essentially trying out every possible combination of variable assignments and checking whether flipping the value of a single variable changes the outcome of the formula. “Luckily”, these are just boolean variables, so you are “only” looking at 2^N possible combinations, which means the run time is exponential. Now, O(2^N) algorithms are in theory very, very bad, while in practice it’s often somewhat okay to use them (because most have an acceptable average case and execute fast enough). However, being exhaustive, the algorithm actually has to look at every single combination and can’t take shortcuts. Plus, the compilation and evaluation using Python’s eval are apparently not fast enough to make the inefficient algorithm acceptable.
So, we should look for a different solution. Looking only at your solution, one might say that something more efficient is not really possible; but looking at the original problem, we can argue otherwise.
You essentially want to do something similar to what compilers do as static analysis: look at the source code and analyze it from there, without ever having to evaluate it. As the language you are analyzing is highly restricted (a boolean expression with very few operators), this isn’t really that hard.
Code analysis usually works on the abstract syntax tree (or an augmented version of that). Python offers code analysis and abstract syntax tree generation with its ast module. We can use this to parse the expression and get the AST. Then based on the tree, we can analyze how relevant each part of an expression is for the whole.
Now, evaluating the relevance of each variable can get quite complicated, but you can do it all by analyzing the syntax tree. I will show you a simple evaluation that supports all boolean operators but will not further check the semantic influence of expressions:
import ast

class ExpressionEvaluator:
    def __init__(self, rawExpression):
        self.raw = rawExpression
        self.ast = ast.parse(rawExpression)

    def run(self):
        return self.evaluate(self.ast.body[0])

    def evaluate(self, expr):
        if isinstance(expr, ast.Expr):
            return self.evaluate(expr.value)
        elif isinstance(expr, ast.Name):
            return self.evaluateName(expr)
        elif isinstance(expr, ast.UnaryOp):
            if isinstance(expr.op, ast.Invert):
                return self.evaluateInvert(expr)
            else:
                raise Exception('Unknown unary operation {}'.format(expr.op))
        elif isinstance(expr, ast.BinOp):
            if isinstance(expr.op, ast.BitOr):
                return self.evaluateBitOr(expr.left, expr.right)
            elif isinstance(expr.op, ast.BitAnd):
                return self.evaluateBitAnd(expr.left, expr.right)
            elif isinstance(expr.op, ast.BitXor):
                return self.evaluateBitXor(expr.left, expr.right)
            else:
                raise Exception('Unknown binary operation {}'.format(expr.op))
        else:
            raise Exception('Unknown expression {}'.format(expr))

    def evaluateName(self, expr):
        return { expr.id: 1 }

    def evaluateInvert(self, expr):
        return self.evaluate(expr.operand)

    def evaluateBitOr(self, left, right):
        return self.join(self.evaluate(left), .5, self.evaluate(right), .5)

    def evaluateBitAnd(self, left, right):
        return self.join(self.evaluate(left), .5, self.evaluate(right), .5)

    def evaluateBitXor(self, left, right):
        return self.join(self.evaluate(left), .5, self.evaluate(right), .5)

    def join(self, a, ratioA, b, ratioB):
        d = { k: v * ratioA for k, v in a.items() }
        for k, v in b.items():
            if k in d:
                d[k] += v * ratioB
            else:
                d[k] = v * ratioB
        return d

expr = '((A&B)|(C&D)^~E)'
ee = ExpressionEvaluator(expr)
print(ee.run())
# > {'A': 0.25, 'C': 0.125, 'B': 0.25, 'E': 0.25, 'D': 0.125}
This implementation will essentially generate a plain AST for the given expression and then recursively walk through the tree and evaluate the different operators. The big evaluate method just delegates the work to the type-specific methods below; it’s similar to what ast.NodeVisitor does, except that we return the analysis results from each node here. One could augment the nodes instead of returning results, though.
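For reference, the raw tree that such a walk traverses can be inspected directly with the same ast module; a quick sketch (the exact dump format varies between Python versions, so no output is shown):

```python
import ast

# Parse a small boolean expression in 'eval' mode and dump its tree.
# The top-level node is a BitOr whose right operand is a nested BitAnd.
tree = ast.parse('A | (A & B)', mode='eval')
print(ast.dump(tree.body))
```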
In this case, the evaluation is just based on occurrence in the expression. I don’t explicitly check for semantic effects. So for an expression A | (A & B), I get {'A': 0.75, 'B': 0.25}, although one could argue that semantically B has no relevance at all to the result (making it {'A': 1} instead). This is however something I’ll leave to you. As of now, every binary operation is handled identically (each operand getting a relevance of 50%), but that can of course be adjusted to introduce some semantic rules.
Either way, it will not be necessary to actually test variable assignments.
Instead of reinventing the wheel and running into the performance and security risks you are already facing, it is better to look for well-accepted, industry-ready libraries.
The Logic module of sympy will do exactly what you want to achieve without resorting to evil, ohh, I meant eval. More importantly, since the boolean expression is not a string, you don't have to care about parsing the expression, which generally turns out to be the bottleneck.
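To illustrate, here is a minimal sketch of what the sympy logic module offers (assuming sympy is installed; it also covers the redundant-variable case discussed above):

```python
from sympy import symbols, simplify_logic
from sympy.logic.inference import satisfiable

A, B = symbols('A B')

# B is semantically irrelevant in A | (A & B); sympy simplifies it away.
print(simplify_logic(A | (A & B)))   # A

# Satisfiability checks come for free as well.
print(satisfiable(A & ~A))           # False
```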
You don't have to prepare a static table to compute this. Python is a dynamic language, so it is able to interpret and run code by itself at runtime.
In your case, I would suggest a solution like this:
import random, re, time

#Step 1: Input your expression as a string
logic_exp = "A|B&(C|D)&E|(F|G|H&(I&J|K|(L&M|N&O|P|Q&R|S)&T)|U&V|W&X&Y)"

#Step 2: Retrieve all the variable names.
# You can design a rule for naming, and use a regex to retrieve them.
# Here, for example, I treat every single capital letter as a variable.
name_regex = re.compile(r"[A-Z]")

#Step 3: Replace each variable with its value.
# You could get the values from files or keyboard input.
# Here, for example, I just use a random 0 or 1.
for name in name_regex.findall(logic_exp):
    logic_exp = logic_exp.replace(name, str(random.randrange(2)))

#Step 4: Replace the operators. Python uses 'and', 'or' instead of '&', '|'
logic_exp = logic_exp.replace("&", " and ")
logic_exp = logic_exp.replace("|", " or ")

#Step 5: Interpret the expression with eval(exp) and output its value.
print "expression =", logic_exp
print "expression output =", eval(logic_exp)
This would be very fast and take very little memory. As a test, I ran the example above with 25 input variables:
expression = 1 or 1 and (1 or 1) and 0 or (0 or 0 or 1 and (1 and 0 or 0 or (0 and 0 or 0 and 0 or 1 or 0 and 0 or 0) and 1) or 0 and 1 or 0 and 1 and 0)
expression output = 1
computing time: 0.000158071517944 seconds
According to your comment, I see that you are computing all the possible combinations rather than the output for one given set of input values. If so, it becomes a typical NP-complete boolean satisfiability problem. I don't think there's any algorithm that can do it with complexity lower than O(2^N). I suggest searching with the keywords "fast algorithm to solve SAT problem"; you will find a lot of interesting things.
Given the final score of a basketball game, how can I count the number of possible scoring sequences that lead to that final score?
Each score can be a 1, 2, or 3 point score by either the visiting or home team. For example:
basketball(3,0)=4
Because these are the 4 possible scoring sequences:
V3
V2, V1
V1, V2
V1, V1, V1
And:
basketball(88,90)=2207953060635688897371828702457752716253346841271355073695508308144982465636968075
Also, I need to do it recursively and without any global variables (a dictionary is allowed and is probably the way to solve this).
Also, the function can take only the final score as its arguments (basketball(m, n)).
For those who asked, here is the solution:
basketballDic = {}

def basketball(n, m):
    count = 0
    if n == 0 and m == 0:
        return 1
    if (n, m) in basketballDic:
        return basketballDic[(n, m)]
    if n >= 3:
        count += basketball(n - 3, m)
    if n >= 2:
        count += basketball(n - 2, m)
    if n >= 1:
        count += basketball(n - 1, m)
    if m >= 3:
        count += basketball(n, m - 3)
    if m >= 2:
        count += basketball(n, m - 2)
    if m >= 1:
        count += basketball(n, m - 1)
    basketballDic[(n, m)] = count
    return count
When you're considering a recursive algorithm, there are two things you need to figure out.
What is the base case, where the recursion ends.
What is the recursive case. That is, how can you calculate one value from one or more previous values?
For your basketball problem, the base case is pretty simple. When there's no score, there's exactly one possible set of baskets that has happened to get there (it's the empty set). So basketball(0,0) needs to return 1.
The recursive case is a little more tricky to think about. You need to reduce a given score, say (M,N), step by step until you get to (0,0), counting up the different ways to get each score on the way. There are six possible ways for the score to have changed to get to (M,N) from whatever it was previously (1, 2 and 3-point baskets for each team) so the number of ways to get to (M,N) is the sum of the ways to get to (M-1,N), (M-2,N), (M-3,N), (M,N-1), (M,N-2) and (M,N-3). So those are the recursive calls you'll want to make (after perhaps some bounds checking).
You'll find that a naive recursive implementation takes a very long time to solve for high scores. This is because it calculates the same values over and over (for instance, it may calculate that there's only one way to get to a score of (1,0) hundreds of separate times). Memoization can help prevent the duplicate work by remembering previously calculated results. It's also worth noting that the problem is symmetric (there are the same number of ways of getting a score of (M,N) as there are of getting (N,M)) so you can save a little work by remembering not only the current result, but also its reverse.
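The base case, recursive case, and memoization described above can be sketched in a few lines with functools.lru_cache (Python 3; note this version does not exploit the (M,N)/(N,M) symmetry):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def basketball(m, n):
    # Base case: exactly one way (the empty sequence) to reach 0-0.
    if m == 0 and n == 0:
        return 1
    # Recursive case: sum the ways to reach each possible previous score,
    # one basket (1, 2, or 3 points, either team) earlier.
    return (sum(basketball(m - s, n) for s in (1, 2, 3) if s <= m)
            + sum(basketball(m, n - s) for s in (1, 2, 3) if s <= n))

print(basketball(3, 0))  # 4
```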
There are two ways this can be done, and neither comes close to matching your specified outputs. The less relevant one is to count the maximum possible number of scoring plays: since basketball has 1-point scores, this is always equal to the sum of both inputs to our basketball() function. The second is counting the minimum number of scoring plays. That can be done trivially with recursion, like so:
def team(x):
    if x:
        score = 3
        if x < 3:
            score = 2
        if x < 2:
            score = 1
        return 1 + team(x - score)
    else:
        return 0

def basketball(x, y):
    return team(x) + team(y)
Can this be done more tersely and even more elegantly? Certainly, but this should give you a decent starting point for the kind of stateless, recursive solution you are working on.
I tried to reduce from the given result (every possible play: 1, 2, or 3 points) using recursion until I get to 0, but for that I need a global variable, and I can't use one.
Maybe this is where you reveal what you need. You can avoid a global by passing the current count and/or returning the used count (or remaining count) as needed.
In your case I think you would just pass the points to the recursive function and have it return the counts. The return values are added, so the final total rolls up as the recursion unwinds.
Edit
I wrote a function that was able to generate correct results. This question is tagged "memoization"; using it gives a huge performance boost, since without it the same sub-sequences are processed again and again. I used a decorator to implement memoization.
I liked @Maxwell's separate handling of teams, but that approach will not generate the numbers you are looking for. (Probably because your original wording was not at all clear; I've since rewritten your problem description.) I wound up processing the six home and visitor scoring possibilities in a single function.
My counts were wrong until I realized that what I needed to count was the number of times I hit the terminal condition.
Solution
Other solutions have been posted. Here's my (not very readable) one-liner:
def bball(vscore, hscore):
    return 1 if vscore == 0 and hscore == 0 else \
        sum([bball(vscore-s, hscore) for s in range(1,4) if s <= vscore]) + \
        sum([bball(vscore, hscore-s) for s in range(1,4) if s <= hscore])
Actually, I also have this line just before the function definition:
@memoize
I used Michele Simionato's decorator module and its memoize sample. As @Blckknght mentioned, the function is commutative, so you could customize memoize to take advantage of this.
While I like the separation of concerns provided by generic memoization, I'm also tempted to initialize the cache with (something like):
cache = {(0, 0): 1}
and remove the special case check for 0,0 args in the function.
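Putting both ideas together, here is a sketch (with hypothetical helper names) of a memoize variant that is pre-seeded with the (0, 0) base case and folds (m, n) and (n, m) into one cache entry:

```python
from functools import wraps

def memoize_scores(func):
    # Hypothetical memoize variant: the cache is pre-seeded with the
    # (0, 0) base case, so the wrapped function needs no special check.
    cache = {(0, 0): 1}
    @wraps(func)
    def wrapper(m, n):
        # The count is symmetric, so store each pair under a sorted key.
        key = (m, n) if m <= n else (n, m)
        if key not in cache:
            cache[key] = func(m, n)
        return cache[key]
    return wrapper

@memoize_scores
def bball(vscore, hscore):
    return sum(bball(vscore - s, hscore) for s in range(1, 4) if s <= vscore) + \
           sum(bball(vscore, hscore - s) for s in range(1, 4) if s <= hscore)

print(bball(3, 0))  # 4
```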