For a series of algorithms I'm implementing I need to simulate things like sets of coins being weighed or pooled blood samples. The overriding goal is to identify a sparse set of interesting items in a set of otherwise identical items. This identification is done by testing groups of items together. For example, the classic problem is to find a light counterfeit coin in a group of 81 (otherwise identical) coins, using as few weighings of a pan balance as possible. The trick is to split the 81 coins into three groups and weigh two groups against each other. You then repeat this on the group containing the counterfeit until you have isolated it.
The key point in the discussion above is that the set of interesting items is sparse in the wider set - the algorithms I'm implementing all outperform binary search etc. for this type of input.
What I need is a way to test the entire vector for the presence of one or more 1s, without scanning the vector componentwise.
I.e. a way to return the Hamming weight of the vector as an O(1) operation - this will accurately simulate pooling blood samples/weighing groups of coins in a pan balance.
It's key that the vector isn't scanned - the output should simply indicate that there is at least one 1 in the vector. By scanning I mean looking at the vector with algorithms such as binary search, or looking at each element in turn. That is, I need to simulate pooling groups of items (such as blood samples) and a single test on the group which indicates the presence of a 1.
I've implemented this 'vector' as a list currently, but this needn't be set in stone. The task is to determine, by testing groups of the sublist, where the 1s in the vector are. An example of the list is:
sparselist = [0]*100000
sparselist[1024] = 1
But this could equally well be a long/set/something else as suggested below.
Currently I'm using any() as the test but it's been pointed out to me that any() will scan the vector - defeating the purpose of what I'm trying to achieve.
Here is an example of a naive binary search using any() to test the groups:
def binary_search(inList):
    low = 0
    high = len(inList)
    while low < high:
        mid = low + (high - low) // 2
        upper = inList[mid:high]
        lower = inList[low:mid]
        if any(lower):
            high = mid
        elif any(upper):
            low = mid + 1
        else:
            # Neither side has a 1
            return -1
    return mid
I apologise if this code isn't production quality. Any suggestions to improve it (beyond the any() test) will be appreciated.
I'm trying to come up with a better test than any() as it's been pointed out that any() will scan the list - defeating the point of what I'm trying to do. The test needn't return the exact Hamming weight - it merely needs to indicate that there is (or isn't!) a 1 in the group being tested (i.e. upper/lower in the code above).
I've also thought of using a binary XOR, but I don't know how to use it in a way that isn't componentwise.
Here is a sketch:
class OrVector(list):
    def __init__(self):
        self._nonzero_counter = 0
        list.__init__(self)

    def append(self, x):
        list.append(self, x)
        if x:
            self._nonzero_counter += 1

    def remove(self, x):
        if x:
            self._nonzero_counter -= 1
        list.remove(self, x)

    def hasOne(self):
        return self._nonzero_counter > 0
v = OrVector()
v.append(0)
print v
print v.hasOne()
v.append(1);
print v
print v.hasOne()
v.remove(1);
print v
print v.hasOne()
Output:
[0]
False
[0, 1]
True
[0]
False
The idea is to inherit from list and add a single variable which stores the number of nonzero entries. The crucial functionality is delegated to the base list class, while at the same time you monitor the number of nonzero entries in the list, and can query it in O(1) time using the hasOne() member function.
HTH.
any will only scan the whole vector if it does not find what you're after before the end of the "vector".
From the docs it is equivalent to:
def any(iterable):
    for element in iterable:
        if element:
            return True
    return False
This does make it O(n). If you have things sorted (in your "binary vector") you can use bisect.
e.g. position = index(myVector, value)
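That index is presumably the searching recipe from the bisect module's documentation (it is not a built-in), reproduced here so the example is self-contained:

from bisect import bisect_left

def index(a, x):
    # locate the leftmost value exactly equal to x
    i = bisect_left(a, x)
    if i != len(a) and a[i] == x:
        return i
    raise ValueError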
Ok, maybe I will try an alternative answer.
You cannot do this without any prior knowledge of your data. The only thing you can do is make a test and cache the results. You can design a data structure that will help you determine the result of any subsequent tests if your data structure is mutable, or a data structure that will be able to determine the answer in better time on a subset of your vector.
However, your question does not indicate this. At least it did not at the time of writing this answer. For now you want to make one test on a vector, for the presence of a particular element, with no prior knowledge about the data, in time complexity less than O(log n) in the average case or O(n) in the worst. This is not possible.
Also keep in mind you need to load the vector at some point, which takes O(n) operations, so if you are interested in performing one test over a set of elements you won't lose much. In the average case with more elements, the loading time will take much more than the testing.
If you want to perform a set of tests you can design an algorithm that will "build up" some knowledge during the subsequent tests, which will help it determine results in better time. However, that holds only if you make more than one test!
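To illustrate the preprocessing idea concretely (my own sketch, not part of the original answer): one O(n) pass over sparselist builds prefix counts, after which any group test of the kind binary_search performs becomes O(1):

prefix = [0]
for bit in sparselist:
    prefix.append(prefix[-1] + (1 if bit else 0))

def group_has_one(lo, hi):
    # O(1): the number of ones in sparselist[lo:hi] is prefix[hi] - prefix[lo]
    return prefix[hi] - prefix[lo] > 0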
Related
While solving Leetcode problems I've been trying to make my answers as easily intelligible as possible, so I can quickly glance at them later and make sense of them. Toward that end I assigned variable names to indices of interest in a 2D list. When I see "matrix[i][j+1]" and variations thereof repeatedly, I sometimes lose track of what I'm dealing with.
So, for this problem: https://leetcode.com/problems/maximal-square/
I wrote this code:
class Solution:
    def maximalSquare(self, matrix: List[List[str]]) -> int:
        maximum = 0
        for y in range(len(matrix)):
            for x in range(len(matrix[0])):
                # convert to integer from string
                matrix[y][x] = int(matrix[y][x])
                # use variable for readability
                current = matrix[y][x]
                # build largest square counts by checking neighbors above and to left
                # so, skip anything in first row or first column
                if y != 0 and x != 0 and current == 1:
                    # assign variables for readability. We're checking adjacent squares
                    left = matrix[y][x-1]
                    up = matrix[y-1][x]
                    upleft = matrix[y-1][x-1]
                    # have to use matrix directly to set new value
                    matrix[y][x] = current = 1 + min(left, up, upleft)
                # reevaluate maximum
                if current > maximum:
                    maximum = current
        # return maximum squared, since we're looking for largest area of square, not largest side
        return maximum**2
I don't think I've seen people do this before and I'm wondering if it's a bad idea, since I'm sort of maintaining two versions of a value.
Apologies if this is a "coding style" question and therefore just a matter of opinion, but I thought there might be a clear answer that I just haven't found yet.
It is very hard to give a straightforward answer, because it varies from person to person. Let me start from your queries:
When I see "matrix[i][j+1]" and variations thereof repeatedly, I sometimes lose track of what I'm dealing with.
It depends. People with moderate programming knowledge should not be confused by seeing a 2-D matrix in matrix[x-pos][y-pos] form. Again, if you don't feel comfortable, you can use the approach you have shared here. But you should also try to get familiar with this kind of common convention in parallel.
I don't think I've seen people do this before and I'm wondering if it's a bad idea, since I'm sort of maintaining two versions of a value.
It is not a bad idea at all. It is okay as long as you are doing this for your own comfort. But if you'd like to share your code with others, it might not be a very good idea to add names for things that are already obvious; it might reduce the readability of your code to others. You should not worry about maintaining two versions of a value, as long as the extra memory is constant.
Apologies if this is a "coding style" question and therefore just a matter of opinion, but I thought there might be a clear answer that I just haven't found yet.
You are absolutely fine asking this question. As you mentioned, it is really just a matter of opinion. You can follow a standard language guideline like the Google Python Style Guide. It is always recommended to follow some standard for this kind of coding-style concern. Always keep in mind that a piece of good code is self-documenting, and unnecessary comments can make it tedious.
Here I have shared my version of your code. Feel free to comment if you have any questions.
# Time: O(m*n)
# Space: O(1)
class Solution:
    def maximalSquare(self, matrix: List[List[str]]) -> int:
        """Given an m x n binary matrix filled with 0's and 1's,
        find the largest square containing only 1's and return its area.

        Args:
            matrix: An (m x n) string matrix.

        Returns:
            Area of the largest square containing only 1's.
        """
        maximum = 0
        for x in range(len(matrix)):
            for y in range(len(matrix[0])):
                # convert current matrix cell value from string to integer
                matrix[x][y] = int(matrix[x][y])
                # build largest square side by checking neighbors from up-row and left-column
                # so, skip the cells from the first-row and first-column
                if x != 0 and y != 0 and matrix[x][y] != 0:
                    # update current matrix cell w.r.t. the left, up and up-left cell values respectively
                    matrix[x][y] = 1 + min(matrix[x][y-1], matrix[x-1][y], matrix[x-1][y-1])
                # re-evaluate maximum square side
                if matrix[x][y] > maximum:
                    maximum = matrix[x][y]
        # returning the area of the largest square
        return maximum**2
I know Python isn't the best choice for writing software of this nature. My reasoning is to use this type of algorithm in a Raspberry Pi 3's decision making (still unsure how that will go), and the libraries and APIs that I'll be using (Adafruit motor HATs, Google services, OpenCV, various sensors, etc.) all play nicely for importing in Python, not to mention I'm just more comfortable in this environment for the rPi specifically. I've already cursed it, as object-oriented languages such as Java or C++ just make more sense to me, but I'd rather deal with its inefficiencies and focus on the bigger picture of integration for the rPi.
I won't explain the code here, as it's pretty well documented in the comment sections throughout the script. My question is as stated above: can this be considered a basic genetic algorithm? If not, what must it have to be a basic AI or genetic code? Am I on the right track for this type of problem solving? I know usually there are weighted variables and functions to promote "survival of the fittest", but that can be popped in as needed, I think.
I've read up quite a bit of forums and articles about this topic. I didn't want to copy someone else's code that I barely understand and start using it as a base for a larger project of mine; I want to know exactly how it works so I'm not confused as to why something isn't working out along the way. So, I just tried to comprehend the basic idea of how it works, and write how I interpreted it. Please remember I'd like to stay in Python for this. I know rPi's have multiple environments for C++, Java, etc., but as stated before, most hardware components I'm using have only Python APIs for implementation. If I'm wrong, explain at the algorithmic level, not just with a block of code (again, I really want to understand the process). Also, please don't nitpick code conventions unless it's pertinent to my problem; everyone has a style and this is just a sketch for now. Here it is, and thanks for reading!
# Created by X3r0, 7/3/2016
# Basic genetic algorithm utilizing a two dimensional array system.
# the 'DNA' is the larger array, and the 'gene' is a smaller array as an element
# of the DNA. There exists no weighted algorithms, or statistical tracking to
# make the program more efficient yet; it is straightforwardly random and solves
# its problem randomly. At this stage, only the base element is iterated over.
# Basic Idea:
# 1) User inputs constraints onto array
# 2) Gene population is created at random given user constraints
# 3) DNA is created with randomized genes ( will never randomize after )
# a) Target DNA is created with loop control variable as data (basically just for some target structure)
# 4) CheckDNA() starts with base gene from DNA, and will recurse until gene matches the target gene
# a) Randomly select two genes from DNA
# b) Create a candidate gene by splicing both parent genes together
# c) Check candidate gene against the target gene
# d) If there exists a match in gene elements, a child gene is created and inserted into DNA
# e) If the child gene in DNA is not equal to target gene, recurse until it is
import random

DNAsize = 32
geneSize = 5
geneDiversity = 9
geneSplit = 4
numRecursions = 0
DNA = []
targetDNA = []

def init():
    global DNAsize, geneSize, geneDiversity, geneSplit, DNA
    print("This is a very basic form of genetic software. Input variable constraints below. "
          "Good starting points are: DNA strand size (array size): 32, gene size (sub array size): 5, "
          "gene diversity (randomized 0 - x): 5, gene split (where to split gene array for splicing): 2")
    DNAsize = int(input('Enter DNA strand size: '))
    geneSize = int(input('Enter gene size: '))
    geneDiversity = int(input('Enter gene diversity: '))
    geneSplit = int(input('Enter gene split: '))
    # initializes the gene population, and kicks off
    # checkDNA recursion
    initPop()
    checkDNA(DNA[0])

def initPop():
    # builds an array of smaller arrays
    # given DNAsize
    for x in range(DNAsize):
        buildDNA()
        # builds the goal array with a recurring
        # numerical pattern, in this case just the loop
        # control variable
        buildTargetDNA(x)

def buildDNA():
    newGene = []
    # builds a smaller array (gene) using a given geneSize
    # and randomized with values 0 - [given geneDiversity]
    for x in range(geneSize):
        newGene.append(random.randint(0, geneDiversity))
    # append the built array to the larger array
    DNA.append(newGene)

def buildTargetDNA(x):
    # builds the target array, iterating with x as a loop
    # control from the call in init()
    newGene = []
    for y in range(geneSize):
        newGene.append(x)
    targetDNA.append(newGene)

def checkDNA(childGene):
    global numRecursions
    numRecursions = numRecursions + 1
    gene = DNA[0]
    targetGene = targetDNA[0]
    parentGeneA = DNA[random.randint(0, DNAsize-1)]  # randomly selects an array (gene) from larger array (DNA)
    parentGeneB = DNA[random.randint(0, DNAsize-1)]
    pos = random.randint(geneSplit-1, geneSplit+1)   # randomly selects a position to split gene for splicing
    candidateGene = parentGeneA[:pos] + parentGeneB[pos:]  # spliced gene given split from parentA and parentB
    print("DNA Splice Position: " + str(pos))
    print("Element A: " + str(parentGeneA))
    print("Element B: " + str(parentGeneB))
    print("Candidate Element: " + str(candidateGene))
    print("Target DNA: " + str(targetDNA))
    print("Old DNA: " + str(DNA))
    # iterates over the candidate gene and compares each element to the target gene
    # if the candidate gene element hits a target gene element, the resulting child
    # gene is created
    for x in range(geneSize):
        #if candidateGene[x] != targetGene[x]:
        #    print("false ")
        if candidateGene[x] == targetGene[x]:
            #print("true ")
            childGene.pop(x)
            childGene.insert(x, candidateGene[x])
    # if the child gene isn't quite equal to the target, and recursion hasn't reached
    # a max (apparently 900), the child gene is inserted into the DNA. Recursion occurs
    # until the child gene equals the target gene, or max recursion depth is exceeded
    if childGene != targetGene and numRecursions < 900:
        DNA.pop(0)
        DNA.insert(0, childGene)
        print("New DNA: " + str(DNA))
        print(numRecursions)
        checkDNA(childGene)

init()
print("Final DNA: " + str(DNA))
print("Number of generations (recursions): " + str(numRecursions))
I'm working with evolutionary computation right now, so I hope my answer will be helpful for you. Personally, I work with Java, mostly because it is one of my main languages and for the portability, because I've tested it on Linux, Windows and Mac. In my case I work with permutation encoding, but if you are still learning how GAs work, I strongly recommend binary encoding. This is what I called my InitialPopulation. I'll try to describe my program's workflow:
1-. Set my main variables
These are PopulationSize, IndividualSize, MutationRate, CrossoverRate. You also need to create an objective function and decide on the selection method you'll use. For this example, let's say that my PopulationSize equals 50, the IndividualSize is 4, MutationRate is 0.04, CrossoverRate is 90% and the selection method will be roulette wheel.
My objective function just checks whether my Individuals are capable of representing the number 15 in binary, so the best individual must be 1111.
2-. Initialize my Population
For this I create 50 individuals (50 is given by my PopulationSize) with random genes.
3-. Loop starts
For each Individual in the Population you need to:
Evaluate fitness according to the objective function. If an Individual is represented by the characters 00100, this means that its fitness is 1. As you can see, this is a simple fitness function. You can create your own while you are learning, like fitness = 1/numberOfOnes. You also need to assign the sum of all the fitnesses to a variable called populationFitness; this will be useful in the next step.
Select the best individuals. For this task there are a lot of methods you can use, but we will use the roulette wheel method as we said before. In this method, you assign a value to every individual inside your population. This value is given by the formula: (fitness/populationFitness) * 100. So, if your population fitness is 10, and a certain individual's fitness is 3, this means that this individual has a 30% chance of being selected to make a crossover with another individual. Also, if another individual has a fitness of 4, its value will be 40%.
Apply crossover. Once you have the "best" individuals of your population, you need to create a new population. This new population is formed from other individuals of the previous population. For each individual you create a random number from 0 to 1. If this number is in the range of 0.9 (since our crossoverRate = 90%), this individual can reproduce, so you select another individual. Each new individual has these 2 parents, from whom it inherits its genes. For example:
Let's say that parentA = 1001 and parentB = 0111. We need to create a new individual with these genes. There are a lot of methods to do this: uniform crossover, single point crossover, two point crossover, etc. We will use single point crossover. In this method we choose a random point between the first gene and the last gene. Then, we create a new individual from the first genes of parentA and the last genes of parentB. In a visual form:
parentA = 1001
parentB = 0111
crossoverPoint = 2
newIndividual = 1011
As you can see, the new individual shares his parents' genes.
Once you have a new population with new individuals, you apply the mutation. In this case, for each individual in the new population, generate a random number between 0 and 1. If this number is in the range of 0.04 (since our mutationRate = 0.04), you apply the mutation to a random gene. In binary encoding, mutation just changes 1's to 0's or vice versa. In a visual form:
individual = 1011
randomPoint = 3
mutatedIndividual = 1010
Get the best individual.
If this individual has reached the solution, stop. Else, repeat the loop.
End
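To make the steps above concrete, here is a minimal Python sketch of that workflow (binary encoding, roulette-wheel selection, single point crossover, bit-flip mutation, target 1111). All names and parameter values are illustrative, not from any particular library:

import random

POP_SIZE = 50         # PopulationSize
IND_SIZE = 4          # IndividualSize; the optimum is [1, 1, 1, 1]
CROSSOVER_RATE = 0.9
MUTATION_RATE = 0.04

def fitness(ind):
    # objective function: number of 1 bits (1111 scores 4)
    return sum(ind)

def roulette_select(population):
    # pick an individual with probability proportional to its fitness
    total = sum(fitness(ind) for ind in population) or 1
    r = random.uniform(0, total)
    acc = 0.0
    for ind in population:
        acc += fitness(ind)
        if r <= acc:
            return ind
    return population[-1]

def crossover(a, b):
    # single point crossover: head of a, tail of b
    point = random.randint(1, IND_SIZE - 1)
    return a[:point] + b[point:]

def mutate(ind):
    # with probability MUTATION_RATE, flip one random bit
    if random.random() < MUTATION_RATE:
        i = random.randrange(IND_SIZE)
        ind[i] = 1 - ind[i]
    return ind

population = [[random.randint(0, 1) for _ in range(IND_SIZE)]
              for _ in range(POP_SIZE)]
generation = 0
while max(fitness(ind) for ind in population) < IND_SIZE:
    new_population = []
    for _ in range(POP_SIZE):
        parent_a = roulette_select(population)
        if random.random() < CROSSOVER_RATE:
            child = crossover(parent_a, roulette_select(population))
        else:
            child = parent_a[:]   # no crossover: copy the parent
        new_population.append(mutate(child))
    population = new_population
    generation += 1

print("Reached 1111 after %d generations" % generation)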
As you can see, my English is not very good, but I hope you understand the basic idea of a genetic algorithm. If you are truly interested in learning this, you can check the following links:
http://www.obitko.com/tutorials/genetic-algorithms/
This link explains in a clearer way the basics of a genetic algorithm
http://natureofcode.com/book/chapter-9-the-evolution-of-code/
This book also explain what a GA is, but also provide some code in Processing, basically java. But I think you can understand.
Also I would recommend the following books:
An Introduction to Genetic Algorithms - Melanie Mitchell
Evolutionary algorithms in theory and practice - Thomas Bäck
Introduction to genetic algorithms - S. N. Sivanandam
If you have no money, you can easily find all these books as PDFs.
Also, you can always search for articles in scholar.google.com
Almost all are free to download.
Just to add a bit to Alberto's great answer, you need to watch out for two issues as your solution evolves.
The first one is over-fitting. This basically means that your solution is complex enough to "learn" all the samples, but it is not applicable outside the training set. To avoid this, you need to make sure that the "amount" of information in your training set is a lot larger than the amount of information that can fit in your solution.
The second problem is plateaus. There are cases where you arrive at mediocre solutions that are nonetheless good enough to "outcompete" any emerging solution, so your progress stalls (one way to see this: your fitness gets "stuck" at a certain, less than optimal number). One method for dealing with this is extinctions: track the rate of improvement of your best solution, and if the improvement has been 0 for the last N generations, just nuke your population (that is, delete your population and the list of optimal individuals and start over). Randomness will make it so that eventually the solutions will surpass the plateau. A sketch of this idea follows.
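Here is a rough sketch of that extinction mechanism, kept in Python for consistency with the rest of this thread; evolve_one_generation, random_population and fitness are hypothetical stand-ins for your own GA step:

N_STALL = 50             # generations without improvement before a "nuke"
MAX_GENERATIONS = 10000

population = random_population()    # hypothetical: fresh random individuals
best_fitness, stalled = None, 0

for generation in range(MAX_GENERATIONS):
    population = evolve_one_generation(population)   # hypothetical GA step
    current_best = max(fitness(ind) for ind in population)
    if best_fitness is None or current_best > best_fitness:
        best_fitness, stalled = current_best, 0      # progress: reset the counter
    else:
        stalled += 1
    if stalled >= N_STALL:
        # plateau detected: extinction event, restart from scratch
        population = random_population()
        best_fitness, stalled = None, 0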
Another thing to keep in mind is that the default Random class is really bad at randomness. I have had solutions improve dramatically by simply using something like the Mersenne Twister random generator or a hardware entropy generator.
I hope this helps. Good luck.
I'm trying to implement a simple Markov Chain Monte Carlo in Python 2.7, using numpy. The goal is to find the solution to the "Knapsack Problem": given a set of m objects of value v_i and weight w_i, and a bag with holding capacity b, find the greatest value of objects that can be fit into your bag, and what those objects are. I started coding in the Summer, and my knowledge is extremely lopsided, so I apologize if I'm missing something obvious; I'm self-taught and have been jumping all over the place.
The code for the system is as follows (I split it into parts in an attempt to figure out what's going wrong).
import numpy as np
import random

def flip_z(sackcontents):
    ## This picks a random object, and changes whether it's been selected or not.
    pick = random.randint(0, len(sackcontents)-1)
    clone_z = sackcontents
    np.put(clone_z, pick, 1-clone_z[pick])
    return clone_z

def checkcompliance(checkedz, checkedweights, checkedsack):
    ## This checks to see whether a given configuration is overweight
    weightVector = np.dot(checkedz, checkedweights)
    weightSum = np.sum(weightVector)
    if (weightSum > checkedsack):
        return False
    else:
        return True

def getrandomz(length):
    ## I use this to get a random starting configuration.
    ## It's not really important, but it does remove the burden of choice.
    z = np.array([])
    for i in xrange(length):
        if random.random() > 0.5:
            z = np.append(z, 1)
        else:
            z = np.append(z, 0)
    return z

def checkvalue(checkedz, checkedvalue):
    ## This checks how valuable a given configuration is.
    wealthVector = np.dot(checkedz, checkedvalue)
    wealthsum = np.sum(wealthVector)
    return wealthsum

def McKnapsack(weights, values, iterations, sack):
    z_start = getrandomz(len(weights))
    z = z_start
    moneyrecord = 0.
    zrecord = np.array(["error if you see me"])
    failures = 0.
    for x in xrange(iterations):
        current_z = np.array([])
        current_z = flip_z(z)
        current_clone = current_z
        if (checkcompliance(current_clone, weights, sack)) == True:
            z = current_clone
            if checkvalue(current_z, values) > moneyrecord:
                moneyrecord = checkvalue(current_clone, values)
                zrecord = current_clone
        else:
            failures += 1
    print "The winning knapsack configuration is %s" % (zrecord)
    print "The highest value of objects contained is %s" % (moneyrecord)

testvalues1 = np.array([3,8,6])
testweights1 = np.array([1,2,1])
McKnapsack(testweights1, testvalues1, 60, 2.)
What should happen is the following: with a maximum carrying capacity of 2, it should randomly switch between the different potential bag-carrying configurations, of which there are 2^3 = 8 with the test weights and values I've fed it, with each one or zero in the z representing having or not having a given item. It should discard the options with too much weight, while keeping track of the ones with the highest value and acceptable weight. The correct answer would be to see 1,0,1 as the configuration, with 9 as the maximized value. I get nine for the value every time when I use even moderately high numbers of iterations, but the configurations seem completely random, and somehow break the weight rule. I've double-checked my checkcompliance function with a lot of test arrays, and it seems to work. How are these faulty, overweight configurations getting past my if statements and into my zrecord?
The trick is that z (and therefore also current_z and also zrecord) end up always referring to the exact same object in memory. flip_z modifies this object in-place via np.put.
Once you find a new combination that increases your moneyrecord, you set a reference to it -- but then in the subsequent iteration you go ahead and change the data at that same reference.
In other words, lines like
current_clone=current_z
zrecord= current_clone
do not copy, they only make yet another alias to the same data in memory.
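A minimal standalone illustration of that aliasing behaviour (example names of my own, not the question's variables):

import numpy as np

a = np.array([0, 1, 0])
alias = a            # no copy: both names refer to the same buffer
snapshot = a.copy()  # an independent copy

np.put(a, 0, 1)      # in-place modification, as flip_z does
print(alias)         # [1 1 0] -- the alias sees the change
print(snapshot)      # [0 1 0] -- the copy is unaffected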
One way to fix this is to explicitly copy that combination once you find it's a winner:
if checkvalue(current_z, values) > moneyrecord:
    moneyrecord = checkvalue(current_clone, values)
    zrecord = current_clone.copy()
I am looking for the most efficient way to randomly draw n elements from a list, given a list of probabilities stating the probability of each element to be picked.
aList = [3,4,2,1,4,3,5,7,6,4]
MyProba = [0.1,0.1,0.2,0,0.1,0,0.2,0,0.2,0.1]
It means that at each draw, the first element (which is 3) has a probability of 0.1 of being drawn. Of course,
sum(MyProba) == 1 # always returns True
len(aList) == len(MyProba) # always returns True
Up to now I did the following:
import random

def random_pick(some_list, proba):
    x = random.uniform(0, 1)
    cumulative_proba = 0.0
    for item, item_proba in zip(some_list, proba):
        cumulative_proba += item_proba
        if x < cumulative_proba:
            break
    return item

nb_draws = 10
list_of_drawn_elements = []
for one_draw in range(nb_draws):
    list_of_drawn_elements.append(random_pick(aList, MyProba))
It works but it is terribly slow for long lists and big values of nb_draws. How can I improve the speed of this process?
Note: In the special case I am facing, nb_draws always equals the length of aList.
The general idea (as outlined by others' answers as well) is that your method is inefficient because the preprocessing (the calculation of the cumulative distribution) is done every time you draw a sample, although it would be enough to do it once before the sampling and then use the preprocessed data to do the sampling.
The preprocessing and sampling can be done efficiently with Walker's alias method. I implemented it a while ago; take a look at the source code. (Sorry for the external link, but I think it's too long to post here.) My version requires NumPy; if you don't want to use NumPy, there is a NumPy-free alternative as well (on which my version is based).
Edit: the explanation of Walker's alias method is to be found in the first link I provided. In a nutshell, imagine that you somehow managed to construct a rectangular "darts board" that is subdivided into parts such that each part corresponds to one of your original items, and the area of each part is proportional to the desired probability of selecting the corresponding element. You can then start throwing darts at random at the darts board (by generating two random numbers that specify the horizontal and vertical coordinate of where the dart ended up) and check which areas the darts hit. The items corresponding to the areas will be the items you have selected. Walker's alias method is simply a linear-time preprocessing that constructs the dart board. Drawing each element can then be done in constant time. In the end, drawing m elements out of n will have a cost of O(n) for preprocessing and O(m) for generating the samples, yielding a total complexity of O(n + m).
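Since external links can rot, here is a rough NumPy-free sketch of the alias-table construction (Vose's variant of Walker's method); it assumes the probabilities sum to 1 and is illustrative rather than production code:

import random

def build_alias_table(probs):
    # O(n) preprocessing: rescale, then pair each "small" column with a "large" one
    n = len(probs)
    scaled = [p * n for p in probs]
    prob = [0.0] * n
    alias = [0] * n
    small = [i for i, p in enumerate(scaled) if p < 1.0]
    large = [i for i, p in enumerate(scaled) if p >= 1.0]
    while small and large:
        s, l = small.pop(), large.pop()
        prob[s] = scaled[s]      # chance of keeping item s in its own column
        alias[s] = l             # otherwise the dart belongs to item l
        scaled[l] -= 1.0 - scaled[s]
        (small if scaled[l] < 1.0 else large).append(l)
    for i in small + large:      # leftovers stem from floating point rounding
        prob[i] = 1.0
    return prob, alias

def alias_draw(prob, alias):
    # O(1) per sample: pick a column, then choose the item or its alias
    i = random.randrange(len(prob))
    return i if random.random() < prob[i] else alias[i]

With the question's data this would be used as prob, alias = build_alias_table(MyProba), followed by aList[alias_draw(prob, alias)] for each draw.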
Here's my lazy method: build a list with the expected number of occurrences of each value under the desired distribution, and use random.choice() to pick values from it.
>>> import random
>>>
>>> aList = [3,4,2,1,4,3,5,7,6,4]
>>> MyProba = [0.1,0.1,0.2,0,0.1,0,0.2,0,0.2,0.1]
>>> # zip values with probabilities directly (a dict would collapse the duplicate values)
>>> expected_dist = sum([[v] * int(p * 100) for v, p in zip(aList, MyProba)], [])
>>> random.choice(expected_dist)
You might try to precalculate the cumulative probability range for each element and make a tree from these intervals. Then you will get logarithmic complexity for looking up the element corresponding to the generated probability, instead of the linear complexity you have now.
You're calculating cumulative_proba each time you call random_pick. I suggest calculating it outside the method, and using a better data structure to store it, like a binary search tree, which will reduce the time complexity from O(n) to O(log n).
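Both of the previous two answers amount to the same idea; a minimal sketch using the standard bisect module on a precomputed cumulative list (function names are illustrative):

import random
from bisect import bisect_right

def make_cumulative(proba):
    # one-time O(n) preprocessing: running totals of the probabilities
    cumulative, total = [], 0.0
    for p in proba:
        total += p
        cumulative.append(total)
    return cumulative

def random_pick_fast(some_list, cumulative):
    # O(log n) per draw: binary search for the generated number
    x = random.uniform(0, cumulative[-1])
    i = min(bisect_right(cumulative, x), len(some_list) - 1)
    return some_list[i]

cumulative = make_cumulative(MyProba)
list_of_drawn_elements = [random_pick_fast(aList, cumulative) for _ in range(nb_draws)]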
Given the final score of a basketball game, how can I count the number of possible scoring sequences that lead to the final score?
Each score can be one of: a 3-point, 2-point, or 1-point score by either the visiting or home team. For example:
basketball(3,0)=4
Because these are the 4 possible scoring sequences:
V3
V2, V1
V1, V2
V1, V1, V1
And:
basketball(88,90)=2207953060635688897371828702457752716253346841271355073695508308144982465636968075
Also, I need to do it in a recursive way and without any global variables (a dictionary is allowed, and is probably the way to solve this).
Also, the function can get only the result as an argument (basketball(m,n)).
For those who asked for the solution:
basketballDic = {}

def basketball(n, m):
    count = 0
    if n == 0 and m == 0:
        return 1
    if (n, m) in basketballDic:
        return basketballDic[(n, m)]
    if n >= 3:
        count += basketball(n-3, m)
    if n >= 2:
        count += basketball(n-2, m)
    if n >= 1:
        count += basketball(n-1, m)
    if m >= 3:
        count += basketball(n, m-3)
    if m >= 2:
        count += basketball(n, m-2)
    if m >= 1:
        count += basketball(n, m-1)
    basketballDic[(n, m)] = count
    return count
When you're considering a recursive algorithm, there are two things you need to figure out.
What is the base case, where the recursion ends.
What is the recursive case. That is, how can you calculate one value from one or more previous values?
For your basketball problem, the base case is pretty simple. When there's no score, there's exactly one possible set of baskets that has happened to get there (it's the empty set). So basketball(0,0) needs to return 1.
The recursive case is a little more tricky to think about. You need to reduce a given score, say (M,N), step by step until you get to (0,0), counting up the different ways to get each score on the way. There are six possible ways for the score to have changed to get to (M,N) from whatever it was previously (1, 2 and 3-point baskets for each team) so the number of ways to get to (M,N) is the sum of the ways to get to (M-1,N), (M-2,N), (M-3,N), (M,N-1), (M,N-2) and (M,N-3). So those are the recursive calls you'll want to make (after perhaps some bounds checking).
You'll find that a naive recursive implementation takes a very long time to solve for high scores. This is because it calculates the same values over and over (for instance, it may calculate that there's only one way to get to a score of (1,0) hundreds of separate times). Memoization can help prevent the duplicate work by remembering previously calculated results. It's also worth noting that the problem is symmetric (there are the same number of ways of getting a score of (M,N) as there are of getting (N,M)) so you can save a little work by remembering not only the current result, but also its reverse.
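As a sketch of that symmetry trick (my own variant of the posted solution, not code from this answer): store each computed count under both (n, m) and (m, n), roughly halving the number of entries that ever need computing:

_memo = {}

def basketball_sym(n, m):
    # same recursion as the posted solution
    if n == 0 and m == 0:
        return 1
    if (n, m) in _memo:
        return _memo[(n, m)]
    count = sum(basketball_sym(n - s, m) for s in range(1, 4) if s <= n)
    count += sum(basketball_sym(n, m - s) for s in range(1, 4) if s <= m)
    # the problem is symmetric, so record the mirrored score as well
    _memo[(n, m)] = _memo[(m, n)] = count
    return count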
There are two ways this can be done, and neither comes close to matching your specified outputs. The less relevant way would be to count the maximum possible number of scoring plays. Since basketball has 1-point scores, this will always be equal to the sum of both inputs to our basketball() function. The second technique is counting the minimum number of scoring plays. This can be done trivially with recursion, like so:
def team(x):
    if x:
        score = 3
        if x < 3:
            score = 2
        if x < 2:
            score = 1
        return 1 + team(x - score)
    else:
        return 0

def basketball(x, y):
    return team(x) + team(y)
Can this be done more tersely and even more elegantly? Certainly, but this should give you a decent starting point for the kind of stateless, recursive solution you are working on.
I tried to reduce from the given result (every possible play: 1, 2, 3 points) using recursion until I get to 0, but for that I need a global variable and I can't use one.
Maybe this is where you reveal what you need. You can avoid a global by passing the current count and/or returning the used count (or remaining count) as needed.
In your case I think you would just pass the points to the recursive function and have it return the counts. The return values would be added, so the final total would roll up as the recursion unwinds.
Edit
I wrote a function that was able to generate correct results. This question is tagged "memoization"; using it gives a huge performance boost. Without it, the same sub-sequences are processed again and again. I used a decorator to implement memoization.
I liked @Maxwell's separate handling of teams, but that approach will not generate the numbers you are looking for (probably because your original wording was not at all clear; I've since rewritten your problem description). I wound up processing the 6 home and visitor scoring possibilities in a single function.
My counts were wrong until I realized what I needed to count was the number of times I hit the terminal condition.
Solution
Other solutions have been posted. Here's my (not very readable) one-liner:
def bball(vscore, hscore):
    return 1 if vscore == 0 and hscore == 0 else \
        sum([bball(vscore-s, hscore) for s in range(1,4) if s <= vscore]) + \
        sum([bball(vscore, hscore-s) for s in range(1,4) if s <= hscore])
Actually I also have this line just before the function definition:
@memoize
I used Michele Simionato's decorator module and memoize sample. Though as @Blckknght mentioned, the function is commutative, so you could customize memoize to take advantage of this.
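For readers who don't want to pull in the decorator module, a minimal memoize along these lines would behave the same way (a generic sketch, not Simionato's exact sample):

from functools import wraps

def memoize(func):
    cache = {}
    @wraps(func)
    def wrapper(*args):
        # compute each argument tuple once; serve repeats from the cache
        if args not in cache:
            cache[args] = func(*args)
        return cache[args]
    return wrapper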
While I like the separation of concerns provided by generic memoization, I'm also tempted to initialize the cache with something like:
cache = {(0, 0): 1}
and remove the special case check for 0,0 args in the function.