How to write a pooling algorithm for lab work efficiency? - python

I was wondering if it makes sense to have an algorithm calculate the best combinations of samples to create pools, in order to analyse each sample.
e.g.
I have 5 plant populations with different sizes
data = {'pop':[1,2,3,4,5],
'size':[23,45,65,31,43]}
The goal is to analyse each plant for one gene.
What I could do is analyse each plant individually, but that may involve too much labour.
Therefore, I was thinking of pooling populations in order to minimize the labour involved.
e.g. I could simply do pool1 = pop1,pop2,pop3 | pool2 = pop4,pop5
However, then I was thinking why not do pool1 = pop2,pop5, pool2 = pop1,pop3, and pool3 = pop4
So I was wondering if there is a way to calculate the optimal combination of populations, or even of plants (it is possible to split the populations in any desired way).
And when e.g. pool1 (pop1, pop2, pop3) is positive (the desired gene is found), how should I proceed to get to the individual plant that is positive, i.e. how do I split the pool most effectively to identify the positive plants? It is likely that multiple plants of one population are positive.
Overall I want to minimize the number of 'runs'
It is known that the expected frequency of positives is 0.036
I hope the idea is clear and somebody has ideas on how to do that
Thanks

If you have N plants, and the frequency of positives is 0.036, then the total amount of information you get is -N(0.036 log2 0.036 + 0.964 log2 0.964) = 0.224N bits. See https://en.wikipedia.org/wiki/Entropy_(information_theory)
Ideally, since each run gives you a binary answer, you'll want to get a full bit out of each one, or at least as close to it as possible (and you'll therefore need just under N/4 runs in total). You get a full bit when the probability of a positive result is 50%. That takes about 19 plants (1 - 0.964^19 ≈ 0.50), so do your initial runs on batches of 19 plants.
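As a quick sanity check on those numbers (a standalone sketch of mine, not part of the original answer), you can compute the bits of information per plant and the batch size that makes a positive result about 50% likely:
import math

p = 0.036
bits_per_plant = -(p * math.log2(p) + (1 - p) * math.log2(1 - p))
print(round(bits_per_plant, 3))      # ~0.224 bits of information per plant

# smallest batch size n with P(at least one positive) >= 0.5
n = math.ceil(math.log(0.5) / math.log(1 - p))
print(n, round(1 - (1 - p)**n, 3))   # 19, ~0.502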
After that, you'll probably get close enough to optimal by dividing each batch into halves and testing each half.
The initial batches require N/19 runs.
Then you have roughly N/19 half-batches of size 10 to test.
You'll have roughly N/16 batches of size 5 to test,
and roughly N/15 of size 2.5 (i.e. of size 2 or 3).
For the roughly N/30 positive batches of size 2.5, test each plant.
All together, that's N(2/19 + 1/16 + 1/15 + 2.5/30) ≈ 0.32N runs -- not too bad.
(note that @Stef's answer below seems more efficient, but he got lucky in finding only 4 positives when 7 are expected :)
Let's try it:
import random
plants = [random.random() < 0.036 for _ in range(10000)]
nbuckets = len(plants)//19
buckets = [plants[i * len(plants)//nbuckets : (i+1) * len(plants)//nbuckets] for i in range(nbuckets)]
ntests = 0
def count_recursive(ar):
    global ntests
    if len(ar) <= 3:
        # run each plant
        ntests += len(ar)
        return ar.count(True)
    # run the batch
    ntests += 1
    if ar.count(True) < 1:
        return 0
    mid = len(ar)//2
    return count_recursive(ar[:mid]) + count_recursive(ar[mid:])
print("Num plants: {}".format(len(plants)))
print("Num Positives: {}".format(plants.count(True)))
foundPositives = sum(count_recursive(b) for b in buckets)
print("Found positives: {} ".format(foundPositives))
print("Num tests: {}".format(ntests))
Results:
Num plants: 10000
Num Positives: 368
Found positives: 368
Num tests: 3310
Num plants: 10000
Num Positives: 325
Found positives: 325
Num tests: 3076
Num plants: 10000
Num Positives: 387
Found positives: 387
Num tests: 3526
Yup, as expected.
We can also do better by skipping a test when the result is guaranteed to be positive, because everything else in a positive batch tested negative. That optimization brings the total number of tests down to about 0.26N -- quite close to optimal.
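For reference, here is one way that optimization could be wired into count_recursive above (a sketch of mine, not the original code; the known_positive flag is an addition, and the skip is applied when the sibling half of a positive batch tests clean):
def count_recursive(ar, known_positive=False):
    global ntests
    if len(ar) <= 3:
        # run each plant
        ntests += len(ar)
        return ar.count(True)
    if not known_positive:
        # run the batch only when its result isn't already implied
        ntests += 1
        if ar.count(True) < 1:
            return 0
    mid = len(ar)//2
    left = count_recursive(ar[:mid])
    # if the left half of a positive batch is clean, the right half must be positive
    right = count_recursive(ar[mid:], known_positive=(left == 0))
    return left + right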

Since the original partition of plants into populations is irrelevant to the question, I'll ignore it.
Since the frequency of positives is very low, I think a simple dichotomy (halving) search should be efficient. Occasionally we will run into the situation where we split a positive pool into two subpools and both subpools are positive, but since the frequency of positives is very low, we can hope that it won't happen too often.
import random
# random data
n = 23+45+65+31+43
data = [{'id': random.random(),
         'positive': random.choices([True, False], weights=[36, 1000-36])[0]
        } for _ in range(n)]

def test_pool(pool):  # tests if a pool is positive
    # serious science in the lab happens here
    return any(d['positive'] for d in pool)

def get_positives(data):
    result = []
    nb_tests = 0
    pools = [data]
    while pools:
        pool = pools.pop()
        if len(pool) == 1:
            result.append(pool[0])
        else:
            for subpool in [pool[:len(pool)//2], pool[len(pool)//2:]]:
                nb_tests += 1
                if test_pool(subpool):
                    pools.append(subpool)
    return result, nb_tests
results, nb_tests = get_positives(data)
ground_truth = [d for d in data if d['positive']]
print('NUMBER OF TESTS: {}'.format(nb_tests))
print('FOUND POSITIVES:')
print([d['id'] for d in results])
print('GROUND TRUTH:')
print([d['id'] for d in ground_truth])
# NUMBER OF TESTS: 46
# FOUND POSITIVES:
# [0.2505629359502266, 0.46483641024238254, 0.8786751274491258, 0.250765592789725]
# GROUND TRUTH:
# [0.250765592789725, 0.8786751274491258, 0.46483641024238254, 0.2505629359502266]

Related

NumPy random sampling using larger sample results in less unique elements than smaller sample

I'm trying to build a dataset for training a deep learning model that requires positive and negative sampling. For each input list, I randomly choose 3 elements to be the positive samples and k elements from the rest of the vocabulary to be the negative samples. For some reason, at the end, when I use k=16 negative samples for each positive, I get fewer unique elements than if I had used k=4, and I'm not sure why that's the case, since larger samples should obviously provide more coverage. Here's the code that I have doing the sampling (change the value of num_neg to change the # of negatives sampled). I feel like I might be missing something obvious but haven't figured it out...
pos_map = {}
neg_map = {}
num_pos = 3
num_neg = 16
# vocab maps from id => integer index, reverse_map maps from integer index => id. vocab size is ~28k and stores all possible values of id
np.random.seed(2)
for ids in ids_list:
    encoded = [vocab[id_] for id_ in ids]
    target_positive_indices = np.random.choice(range(len(encoded)), size=num_pos, replace=False)
    for target_positive_index in target_positive_indices:
        pos = encoded[target_positive_index]
        if pos in pos_map:
            pos_map[pos] += 1
        else:
            pos_map[pos] = 1
    # perform negative sampling
    all_indices = np.arange(vocab_size)
    possible_negs = np.random.choice(range(len(all_indices)), size=num_neg * 3, replace=False)
    # some negatives chosen could be the same as positives or in the context, filter those out
    filtered_negs = np.setdiff1d(possible_negs, store_indexes)[:num_neg]
    for n in filtered_negs:
        neg = reverse_map[n]
        if neg in neg_map:
            neg_map[neg] += 1
        else:
            neg_map[neg] = 1
print(len(neg_map))
Result for num_neg=4: 15842
Result for num_neg=16: 13968

How to generate in python a random number in a range but biased toward some specific numbers?

I would like to choose a range, for example 60 to 80, and generate a random number from it. However, between 65-72 I'd like a higher probability, while the other ranges (60-64 and 73-80) should have a lower one.
An example:
From 60-64 and 73-80 there's a 35% chance of being chosen; from 65-72, a 65% chance.
The elements in the subranges are equally likely. I'm generating integers.
Also, a scalable solution would be interesting, so that one could expand its usage to higher ranges, for example 1000-2000, but biased toward 1400-1600.
Does anyone could help with some ideas?
Thanks beforehand for anyone willing to contribute!
For equally likely outcomes in the subranges, the following will do the trick:
import random
THRESHOLD = [0.65, 0.65 + 0.35 * 5 / 13]
def my_distribution():
    u = random.random()
    if u <= THRESHOLD[0]:
        return random.randint(65, 72)
    elif u <= THRESHOLD[1]:
        return random.randint(60, 64)
    else:
        return random.randint(73, 80)
This uses a uniform random number to decide which subrange you're in, then generates values equally likely within that subrange.
The THRESHOLD values are similar to a cumulative distribution function, but arranged so the most likely outcome is checked first. 65% of the time (u <= THRESHOLD[0]) you'll generate from the range [65, 72]. Failing that, 5 of the 13 remaining possibilities (5/13 of 35%) are in the range [60, 64], and the rest are in the range [73, 80]. A Uniform(0,1) value u will fall below the first threshold 65% of the time, and failing that, below the second threshold 5/13 of the time and above that threshold the remaining 8/13 of the time.
(The original answer included a histogram of the generated values here.)
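Since that plot isn't reproduced here, a quick empirical check (a sketch of my own, not part of the original answer) can confirm the subrange weights:
from collections import Counter

samples = [my_distribution() for _ in range(100000)]
shares = Counter('65-72' if 65 <= s <= 72 else ('60-64' if s < 65 else '73-80') for s in samples)
print({k: round(v / len(samples), 3) for k, v in shares.items()})
# expected roughly: '65-72' ~0.65, '60-64' ~0.35*5/13 = 0.135, '73-80' ~0.35*8/13 = 0.215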
Here's a numpy based solution:
import numpy as np
# Some params
left_start = 60 # Start of left interval====== [60,64]
middle_start = 65 # Start of middle interval === [65,72]
right_start = 73 # Start of right interval ===- [73,80]
right_end = 80 # End of the right interval == [73,80]
count = 1000 # Number of values to generate.
middle_wt = 0.65 # Middle range to be selected with wt/prob=0.65
middle = np.arange(middle_start, right_start)
rest = np.r_[left_start:middle_start, right_start:(right_end+1)]
rng1 = np.random.default_rng(None) # Generator for randomly choosing range.
rng2 = np.random.default_rng(None) # Generator for generating values in the ranges.
# Now generate a random list of 0s and 1s to indicate choice between
# 'middle' and 'rest'. For this number generation we will set middle_wt as
# the weight/probability for 0 and (1-middle_wt) as the weight/probability for 1.
# (0 indicates middle range and 1 indicates the rest.)
range_choices = rng1.choice([0,1], replace=True, size=count, p=[middle_wt, (1-middle_wt)])
# Now generate 'count' values for the middle range
middle_choices = rng2.choice(middle, replace=True, size=count)
# Now generate 'count' values for the 'rest' of the range (non-middle)
rest_choices = rng2.choice(rest, replace=True, size=count)
result = np.choose(range_choices, (middle_choices,rest_choices))
print (np.sum((65 <= result) & (result<=72)))
Note:
In the above code, p=[middle_wt, (1-middle_wt)] is a list of weights. The middle_wt is the weight for the middle range [65,72], and the (1-middle_wt) is the weight for the rest.
Output:
649 # Indicates that 649 out of the 1000 values of result are in the middle range [65,72]

Python Random Values with a Given Density

Say I want to build a maze with a certain probability of an obstacle at each position. This probability is determined by a density value ranging from 0 to 10, with 0 meaning "no chance", and 10 meaning "certain".
Does this Python code do what I want?
import random
obstacle_density = 10
if random.randint(0, 9) < obstacle_density:
print("There is an obstacle")
I've tried various combinations of upper and lower bounds and inequalities, and this seems to do the job, but I'm suspicious. For one thing, 11 possible values for obstacle_density and only 10 in random.randint(0, 9).
Not super sure about your solution. It seems like it would work, though.
Here's how I would approach it, even if it is a bit redundant - I'd start with a table just for my own reference:
density | probability of obstacle
---------------------------------
0 | 0%
1 | 10%
2 | 20%
3 | 30%
4 | 40%
5 | 50%
6 | 60%
7 | 70%
8 | 80%
9 | 90%
10 | 100%
This seems to add up. I present two versions of a function which returns True or False depending on the density. In the first version, I use the density to create the associated weights to be used with random.choices (the total weight in this case would be 100). For example, if density = 3, then weights = [30, 70] - 30% to be True, 70% to be False.
def get_obstacle_state_version_1(density):
    from random import choices
    assert isinstance(density, int)
    assert density in range(0, 11)  # 0 - 10 inclusive
    true_weight = density * 10
    false_weight = 100 - true_weight
    weights = [true_weight, false_weight]
    return choices([True, False], weights=weights, k=1)[0]
Here's the second version, in which I use random.choice rather than random.choices. The latter always returns a list of samples, even if the sample size k is 1.
Here, the idea is the same, but basically the density influences the number of Trues and Falses that appear in the population to be sampled. For example, if density = 3, then random.choice would pick one element from a list of 30 Trues, and 70 Falses with a uniform distribution.
def get_obstacle_state_version_2(density):
    from random import choice
    assert isinstance(density, int)
    assert density in range(0, 11)  # 0 - 10 inclusive
    true_count = density * 10
    false_count = 100 - true_count
    return choice([True] * true_count + [False] * false_count)
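As a usage illustration (my own sketch, not part of the original answer; build_maze and the 5x8 grid size are arbitrary choices), a maze grid could then be generated cell by cell with either helper:
def build_maze(rows, cols, density):
    # Each cell independently becomes an obstacle with probability density/10.
    return [[get_obstacle_state_version_1(density) for _ in range(cols)]
            for _ in range(rows)]

maze = build_maze(5, 8, density=3)
for row in maze:
    print(''.join('#' if cell else '.' for cell in row))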
You should loop over the maze and at each site assign a probability.
You should do something like this:
probability = random.randint(0, 10) / 10
I have no idea what you mean by obstacle_density, so I am not gonna go there.

Is there a better way to guess possible unknown variables without brute force than I am doing? Machine learning? [duplicate]

This question already has answers here:
How to approach a number guessing game (with a twist) algorithm?
(7 answers)
Closed 4 years ago.
I have a game with the following rules:
A user is given fruit prices and has a chance to buy or sell items in their fruit basket every turn.
The user cannot make more than a 10% total change in their basket on a single turn.
Fruit prices change every day and when multiplied by the quantities of items in the fruit basket, the total value of the basket changes relative to the fruit price changes every day as well.
The program is only given the current price of all the fruits and the current value of the basket (current price of fruit * quantities for all items in the basket).
Based on these 2 inputs(all fruit prices and basket total value), the program tries to guess what items are in the basket.
A basket cannot hold more than 100 items but slots can be empty
The player can play several turns.
My goal is to accurately guess as computationally inexpensively as possible (read: no brute force) and scale if there are thousands of new fruits.
I am struggling to find an answer, but in my mind it's not hard. If I have the below table, I could study day 1 and get the following data:
Apple 1
Pears 2
Oranges 3
Basket Value = 217
I can do a back-of-napkin calculation and assume the weights in the basket are 0 apples, 83 pears, and 17 oranges, equaling a basket value of 217.
The next day, the values of the fruits and the basket change to (apple = 2, pear = 3, orange = 5) with a basket value of 348. When I take my assumed weights above (0, 83, 17) I get a total value of 334 – not correct! Running this through my script, I see the closest match is 0 apples, 76 pears, 24 oranges, which does equal 348, but when the % change is factored in it's a 38% change, so it's not possible!
I know I can completely brute force this but if I have 1000 fruits, it won’t scale. Not to jump on any bandwagon but can something like a neural net quickly rule out the unlikely so I calculate large volumes of data? I think they have to be a more scalable/quicker way than pure brute force? Or is there any other type of solution that could get the result?
Here is the raw data (remember, the program can only see the prices and the total basket value). [The data was posted as an image; one of the answers below transcribes it as a table.]
Here's some brute force code (thank you @Paul Hankin for a cleaner example than mine):
def possibilities(value, prices):
    for i in range(0, value+1, prices[0]):
        for j in range(0, value+1-i, prices[1]):
            k = value - i - j
            if k % prices[2] == 0:
                yield i//prices[0], j//prices[1], k//prices[2]

def merge_totals(last, this, r):
    ok = []
    for t in this:
        for l in last:
            f = int(sum(l) * r)
            if all(l[i] - f <= t[i] <= l[i] + f for i in range(len(l))):
                ok.append(t)
                break
    return ok

days = [
    (217, (1, 2, 3)),
    (348, (2, 3, 5)),
    (251, (1, 2, 4)),
]

ps = None
for i, d in enumerate(days):
    new_ps = list(possibilities(*d))
    if ps is None:
        ps = new_ps
    ps = merge_totals(ps, new_ps, 0.10)
    print('Day %d' % (i+1))
    for p in ps:
        print('Day %d,' % (i+1), 'apples: %s, pears: %s, oranges: %s' % p)
    print()
Update - The info so far is awesome. Does it make sense to break the problem into two problems? One is generating the possibilities, while the other is finding the relationship between the possibilities (no more than a 10% daily change). By ruling out possibilities, couldn't that also be used to help generate only possibilities that are feasible to begin with? I'm not sure of the approach yet, but I do feel both problems are distinct yet tightly related. Your thoughts?
Update 2 - there are a lot of questions about the % change. This is the total volume of items in the basket that can change. To use the game example, imagine the store says: you can sell/return/buy fruits, but they cannot be more than 10% of your last bill. So although the change in fruit prices can cause changes in your basket value, the user cannot take any action that would impact it by more than 10%. So if the value was 100, they can make changes that get it to 110 but not more.
I hate to let you down but I really don't think a neural net will help at all for this problem, and IMO the best answer to your question is the advice "don't waste your time trying neural nets".
An easy rule of thumb for deciding whether or not neural networks are applicable is to think, "can an average adult human solve this problem reasonably well in a few seconds?" For problems like "what's in this image", "respond to this question", or "transcribe this audio clip", the answer is yes. But for your problem, the answer is a most definite no.
Neural networks have limitations, and one is that they don't deal well with highly logical problems. This is because the answers are generally not "smooth". If you take an image and slightly change a handful of pixels, the content of the image is still the same. If you take an audio clip and insert a few milliseconds of noise, a neural net will probably still be able to figure out what's said. But in your problem, change a single day's "total basket value" by only 1 unit, and your answer(s) will drastically change.
It seems that the only way to solve your problem is with a "classical" algorithmic approach. As currently stated, there might not be any algorithm better than brute force, and it might not be possible to rule out much. For example, what if every day has the property that all fruits are priced the same? The count of each fruit can vary, as long as the total number of fruits is fixed, so the number of possibilities is still exponential in the number of fruits. If your goal is to "produce a list of possibilities", then no algorithm can be better than exponential time since this list can be exponentially large in some cases.
It's interesting that part of your problem can be reduced to an integer linear program (ILP). Consider a single day, where you are given the basket total B and each fruit's cost c_i, for i=1 through i=n (if n is the total number of distinct fruits). Let's say the prices are large, so it's not obvious that you can "fill up" the basket with unit cost fruits. It can be hard in this situation to even find a single solution. Formulated as an ILP, this is equivalent to finding integer values of x_i such that:
sum_i (x_i*c_i) = x_1*c_1 + x_2*c_2 + ... + x_n*c_n = B
and x_i >= 0 for all 1 <= i <= n (can't have negative fruits), and sum_i x_i <= 100 (can have at most 100 fruits).
The good news is that decent ILP solvers exist -- you can just hand over the above formulas and the solver will do its best to find a single solution. You can even add an "objective function" that the solver will maximize or minimize -- minimizing sum_i x_i has the effect of minimizing the total number of fruits in the basket. The bad news is that ILP is NP-complete, so there is almost no hope of finding an efficient solution for a large number of fruits (which equals the number of variables x_i).
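As an illustration of the single-day formulation (a sketch of my own using OR-Tools' CP-SAT solver rather than a classic ILP solver, with the day-1 prices and total from the question as example inputs), the constraints above translate almost verbatim:
from ortools.sat.python import cp_model

prices = [1, 2, 3]   # example costs c_i (day 1 of the question)
B = 217              # example basket total

model = cp_model.CpModel()
x = [model.NewIntVar(0, 100, 'x%d' % i) for i in range(len(prices))]
model.Add(sum(p * xi for p, xi in zip(prices, x)) == B)  # sum_i x_i*c_i = B
model.Add(sum(x) <= 100)                                 # at most 100 fruits
model.Minimize(sum(x))                                   # optional objective: fewest fruits

solver = cp_model.CpSolver()
status = solver.Solve(model)
if status in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    print([solver.Value(xi) for xi in x])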
I think the best approach forward is to try the ILP approach, but also introduce some more constraints on the scenario. For example, what if all fruits had a different prime number cost? This has the nice property that if you find one solution, you can enumerate a bunch of other related solutions. If an apple costs m and an orange costs n, where m and n are relatively prime, then you can "trade" n*x apples for m*x oranges without changing the basket total, for any integer x>0 (so long as you have enough apples and oranges to begin with). If you choose all fruits to have different prime number costs, then all of the costs will be pairwise relatively prime. I think this approach will result in relatively few solutions for a given day.
You might also consider other constraints, such as "there can't be more than 5 fruits of a single kind in the basket" (add the constraint x_i <= 5), or "there can be at most 5 distinct kinds of fruits in the basket" (but this is harder to encode as an ILP constraint). Adding these kinds of constraints will make it easier for the ILP solver to find a solution.
Of course the above discussion is focused on a single day, and you have multiple days' worth of data. If the hardest part of the problem is finding any solution for any day at all (which happens if your prices are large), then using an ILP solver will give you a large boost. If solutions are easy to find (which happens if you have a very-low-cost fruit that can "fill up" your basket), and the hardest part of the problem is finding solutions that are "consistent" across multiple days, then the ILP approach might not be the best fit, and in general this problem seems much more difficult to reason about.
Edit: and as mentioned in the comments, for some interpretations of the "10% change" constraint, you can even encode the entire multi-day problem as an ILP.
It seems to me like your approach is reasonable, but whether it is depends on the size of the numbers in the actual game. Here's a complete implementation that's a lot more efficient than yours (but still has plenty of scope for improvement). It keeps a list of possibilities for the previous day, and then filters the current day amounts to those that are within 5% of some possibility from the previous day, and prints them out per day.
def possibilities(value, prices):
    for i in range(0, value+1, prices[0]):
        for j in range(0, value+1-i, prices[1]):
            k = value - i - j
            if k % prices[2] == 0:
                yield i//prices[0], j//prices[1], k//prices[2]

def merge_totals(last, this, r):
    ok = []
    for t in this:
        for l in last:
            f = int(sum(l) * r)
            if all(l[i] - f <= t[i] <= l[i] + f for i in range(len(l))):
                ok.append(t)
                break
    return ok

days = [
    (26, (1, 2, 3)),
    (51, (2, 3, 4)),
    (61, (2, 4, 5)),
]

ps = None
for i, d in enumerate(days):
    new_ps = list(possibilities(*d))
    if ps is None:
        ps = new_ps
    ps = merge_totals(ps, new_ps, 0.05)
    print('Day %d' % (i+1))
    for p in ps:
        print('apples: %s, pears: %s, oranges: %s' % p)
    print()
Problem Framing
This problem can be described as a combinatorial optimization problem. You're trying to find an optimal object (a combination of fruit items) from a finite set of objects (all possible combinations of fruit items). With the proper analogy and transformations, we can reduce this fruit basket problem to the well known, and extensively studied (since 1897), knapsack problem.
Solving this class of optimization problems is NP-hard. The decision problem of answering "Can we find a combination of fruit items with a value of X?" is NP-complete. Since you want to account for a worst case scenario when you have thousands of fruit items, your best bet is to use a metaheuristic, like evolutionary computation.
Proposed Solution
Evolutionary computation is a family of biologically inspired metaheuristics. They work by revising and mixing (evolving) the most fit candidate solutions based on a fitness function and discarding the least fit ones over many iterations. The higher the fitness of a solution, the more likely it will reproduce similar solutions and survive to the next generation (iteration). Eventually, a local or global optimal solution is found.
These methods provide a needed compromise when the search space is too large to cover with traditional closed form mathematical solutions. Due to the stochastic nature of these algorithms, different executions of the algorithms may lead to different local optima, and there is no guarantee that the global optimum will be found. The odds are good in our case since we have multiple valid solutions.
Example
Let's use the Distributed Evolutionary Algorithms in Python (DEAP) framework and retrofit their Knapsack problem example to our problem. In the code below we apply a strong penalty for baskets with 100+ items. This will severely reduce their fitness and have them taken out of the population pool within one or two generations. There are other ways to handle constraints that are also valid.
# This file is part of DEAP.
#
# DEAP is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as
# published by the Free Software Foundation, either version 3 of
# the License, or (at your option) any later version.
#
# DEAP is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with DEAP. If not, see <http://www.gnu.org/licenses/>.
import random

import numpy as np

from deap import algorithms
from deap import base
from deap import creator
from deap import tools

IND_INIT_SIZE = 5   # Calls to `individual` function
MAX_ITEM = 100      # Max 100 fruit items in basket
NBR_ITEMS = 50      # Start with 50 items in basket
FRUIT_TYPES = 10    # Number of fruit types (apples, bananas, ...)

# Generate a dictionary of random fruit prices.
fruit_price = {i: random.randint(1, 5) for i in range(FRUIT_TYPES)}

# Create fruit items dictionary. The key is item ID, and the
# value is a (weight, price) tuple. Weight is always 1 here.
items = {}
# Create random items and store them in the items' dictionary
# (each item gets the price of a randomly chosen fruit type).
for i in range(NBR_ITEMS):
    items[i] = (1, fruit_price[random.randrange(FRUIT_TYPES)])

# Create fitness function and an individual (solution candidate).
# A solution candidate in our case is a collection of fruit items.
creator.create("Fitness", base.Fitness, weights=(-1.0, 1.0))
creator.create("Individual", set, fitness=creator.Fitness)

toolbox = base.Toolbox()

# Randomly initialize the population (a set of candidate solutions)
toolbox.register("attr_item", random.randrange, NBR_ITEMS)
toolbox.register("individual", tools.initRepeat, creator.Individual,
                 toolbox.attr_item, IND_INIT_SIZE)
toolbox.register("population", tools.initRepeat, list, toolbox.individual)

def evalBasket(individual):
    """Evaluate the value of the basket and
    apply constraints penalty.
    """
    value = 0  # Total value of the basket
    for item in individual:
        value += items[item][1]
    # Heavily penalize baskets with 100+ items
    if len(individual) > MAX_ITEM:
        return 10000, 0
    return len(individual), value  # (items in basket, value of basket)

def cxSet(ind1, ind2):
    """Apply a crossover operation on input sets.
    The first child is the intersection of the two sets,
    the second child is the difference of the two sets.
    This is one way to evolve new candidate solutions from
    existing ones. Think of it as parents mixing their genes
    to produce a child.
    """
    temp = set(ind1)  # Used in order to keep type
    ind1 &= ind2      # Intersection (inplace)
    ind2 ^= temp      # Symmetric Difference (inplace)
    return ind1, ind2

def mutSet(individual):
    """Mutation that pops or adds an element.
    In nature, gene mutations help offspring express new traits
    not found in their ancestors. That could be beneficial or
    harmful. Survival of the fittest at play here.
    """
    if random.random() < 0.5:  # 50% chance of mutation
        if len(individual) > 0:
            individual.remove(random.choice(sorted(tuple(individual))))
    else:
        individual.add(random.randrange(NBR_ITEMS))
    return individual,

# Register evaluation, mating, mutation and selection functions
# so the framework can use them to run the simulation.
toolbox.register("evaluate", evalBasket)
toolbox.register("mate", cxSet)
toolbox.register("mutate", mutSet)
toolbox.register("select", tools.selNSGA2)

def main():
    random.seed(64)
    NGEN = 50
    MU = 50
    LAMBDA = 100
    CXPB = 0.7
    MUTPB = 0.2

    pop = toolbox.population(n=MU)  # Initial population size
    hof = tools.ParetoFront()       # Using Pareto front to rank fitness

    # Keep track of population fitness stats which should
    # improve over generations (iterations).
    stats = tools.Statistics(lambda ind: ind.fitness.values)
    stats.register("avg", np.mean, axis=0)
    stats.register("std", np.std, axis=0)
    stats.register("min", np.min, axis=0)
    stats.register("max", np.max, axis=0)

    algorithms.eaMuPlusLambda(pop, toolbox, MU, LAMBDA,
                              CXPB, MUTPB, NGEN, stats,
                              halloffame=hof)

    return pop, stats, hof

if __name__ == "__main__":
    main()
Not an answer, but an attempt to make the one piece of information about what "% change" might be supposed to mean (the sum of the change in count of each item, computed backwards) accessible as text, for those who don't want to read the numbers off an image:
        |     Day 1     !     Day 2 change      !     Day 3 change      !     Day 4 change
        |$/1 |  #  |  $ !$/1 |  #  |  %   |  $  !$/1 |  #  |  %   |  $  !$/1 |  #  |  %   |  $
Apples  |  1 |  20 |  20 !  2 |  21 | 4.76 |  42 !  1 |  21 | 0    |  21 !  1 |  22 | 4.55 |  22
Pears   |  2 |  43 |  86 !  3 |  42 | 2.38 | 126 !  2 |  43 | 2.33 |  86 !  2 |  43 | 0    |  86
Oranges |  3 |  37 | 111 !  5 |  36 | 2.78 | 180 !  4 |  36 | 0    | 144 !  3 |  35 | 2.86 | 105
Total   |    | 100 | 217 !    | 100 | 9.92 | 348 !    | 100 | 2.33 | 251 !    | 100 | 7.40 | 213
Integer Linear Programming Approach
This sets up naturally as a multi-step Integer Program, with the holdings in {apples, pears, oranges} from the previous step factoring into the calculation of the relative change in holdings that must be constrained. There is no notion of optimal here, but we can turn the "turnover" constraint into an objective and see what happens.
The solutions provided improve on those in your chart above, and are minimal in the sense of total change in basket holdings.
Comments -
I don't know how you calculated the "% change" column in your table. A change from Day 1 to Day 2 of 20 apples to 21 apples is a 4.76% change?
On all days, your total holdings in fruits is exactly 100. There is a constraint that the sum of holdings is <= 100. No violation, I just want to confirm.
We can set this up as an integer linear program, using the integer optimization routine from ortools. I haven't used an ILP solver for a long time, and this one is kind of flaky, I think (the solver.OPTIMAL flag never seems to be true, even for toy problems; in addition, the ortools LP solver fails to find an optimal solution in cases where scipy.linprog works without a hitch).
h1,d = holdings in apples (number of apples) at end of day d
h2,d = holdings in pears at end of day d
h3,d = holdings in oranges at end of day d
I'll give two proposals here, one which minimizes the l1 norm of the absolute error, the other the l0 norm.
The l1 solution minimizes abs(h1,(d+1) - h1,d)/h1,d + ... + abs(h3,(d+1) - h3,d)/h3,d, hoping that each relative change in holdings stays under 10% when the sum of the relative changes is minimized.
The only thing that prevents this from being a linear program (aside from the integer requirement) is the nonlinear objective function. No problem, we can introduce slack variables and make everything linear. For the l1 formulation, 6 additional slack variables are introduced, 2 per fruit, and 6 additional inequality constraints. For the l0 formulation, 1 slack variable is introduced, and 6 additional inequality constraints.
This is a two step process: for example, we replace |apples_new - apples_old|/|apples_old| with a variable |e|, and add inequality constraints to ensure e measures what we'd like. We then replace e with (e+ - e-), with each of e+, e- >= 0. It can be shown that one of e+, e- will be 0, and that (e+ + e-) is the absolute value of e. That way the pair (e+, e-) can represent a positive or negative number. Standard stuff, but it adds a bunch of variables and constraints. I can explain this in a bit more detail if necessary.
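Spelled out for a single fruit, say apples with previous holding h1 and new count x1 (this is my reading of the first two A_ub rows in the code below, not a separate derivation), the added constraints are:
x1 - h1 <= h1*(e1_p - e1_n)
h1 - x1 <= h1*(e1_p - e1_n)
e1_p >= 0, e1_n >= 0
so h1*(e1_p - e1_n) is forced to be at least |x1 - h1|, and because the objective minimizes e1_p + e1_n, e1_n is driven to 0 and e1_p tracks the relative change in apples; summing these terms over the three fruits gives the l1-style objective described above.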
import numpy as np
from ortools.linear_solver import pywraplp

def fruit_basket_l1_ortools():
    UPPER_BOUND = 1000
    prices = [[2, 3, 5],
              [1, 2, 4],
              [1, 2, 3]]
    holdings = [20, 43, 37]
    values = [348, 251, 213]
    for day in range(len(values)):
        solver = pywraplp.Solver('ILPSolver',
                                 pywraplp.Solver.BOP_INTEGER_PROGRAMMING)
        # solver = pywraplp.Solver('ILPSolver',
        #                          pywraplp.Solver.CLP_LINEAR_PROGRAMMING)
        c = ([1, 1] * 3) + [0, 0, 0]
        price = prices[day]
        value = values[day]
        A_eq = [[0, 0, 0, 0, 0, 0, price[0], price[1], price[2]]]
        b_eq = [value]
        A_ub = [[-1*holdings[0], 1*holdings[0], 0, 0, 0, 0,  1.0,    0,    0],
                [-1*holdings[0], 1*holdings[0], 0, 0, 0, 0, -1.0,    0,    0],
                [0, 0, -1*holdings[1], 1*holdings[1], 0, 0,    0,  1.0,    0],
                [0, 0, -1*holdings[1], 1*holdings[1], 0, 0,    0, -1.0,    0],
                [0, 0, 0, 0, -1*holdings[2], 1*holdings[2],    0,    0,  1.0],
                [0, 0, 0, 0, -1*holdings[2], 1*holdings[2],    0,    0, -1.0]]
        b_ub = [1*holdings[0], -1*holdings[0], 1*holdings[1], -1*holdings[1], 1*holdings[2], -1*holdings[2]]

        num_vars = len(c)
        num_ineq_constraints = len(A_ub)
        num_eq_constraints = len(A_eq)

        data = [[]] * num_vars
        data[0] = solver.IntVar(0, UPPER_BOUND, 'e1_p')
        data[1] = solver.IntVar(0, UPPER_BOUND, 'e1_n')
        data[2] = solver.IntVar(0, UPPER_BOUND, 'e2_p')
        data[3] = solver.IntVar(0, UPPER_BOUND, 'e2_n')
        data[4] = solver.IntVar(0, UPPER_BOUND, 'e3_p')
        data[5] = solver.IntVar(0, UPPER_BOUND, 'e3_n')
        data[6] = solver.IntVar(0, UPPER_BOUND, 'x1')
        data[7] = solver.IntVar(0, UPPER_BOUND, 'x2')
        data[8] = solver.IntVar(0, UPPER_BOUND, 'x3')

        constraints = [0] * (len(A_ub) + len(b_eq))
        # Inequality constraints
        for i in range(0, num_ineq_constraints):
            constraints[i] = solver.Constraint(-solver.infinity(), b_ub[i])
            for j in range(0, num_vars):
                constraints[i].SetCoefficient(data[j], A_ub[i][j])
        # Equality constraints
        for i in range(num_ineq_constraints, num_ineq_constraints + num_eq_constraints):
            constraints[i] = solver.Constraint(b_eq[i-num_ineq_constraints], b_eq[i-num_ineq_constraints])
            for j in range(0, num_vars):
                constraints[i].SetCoefficient(data[j], A_eq[i-num_ineq_constraints][j])
        # Objective function
        objective = solver.Objective()
        for i in range(0, num_vars):
            objective.SetCoefficient(data[i], c[i])
        # Set up as minimization problem
        objective.SetMinimization()
        # Solve it
        result_status = solver.Solve()
        solution_set = [data[i].solution_value() for i in range(len(data))]

        print('DAY: {}'.format(day+1))
        print('======')
        # Note: solver.FEASIBLE and solver.OPTIMAL are status constants (1 and 0),
        # not the outcome of this solve; to check the actual outcome, compare
        # result_status against them (e.g. result_status == pywraplp.Solver.OPTIMAL).
        print('SOLUTION FEASIBLE: {}'.format(solver.FEASIBLE))
        print('SOLUTION OPTIMAL: {}'.format(solver.OPTIMAL))
        print('VALUE OF BASKET: {}'.format(np.dot(A_eq[0], solution_set)))
        print('SOLUTION (apples,pears,oranges): {!r}'.format(solution_set[-3:]))
        print('PCT CHANGE (apples,pears,oranges): {!r}\n\n'.format([round(100*(x-y)/y, 2) for x, y in zip(solution_set[-3:], holdings)]))

        # Update holdings for the next day
        holdings = solution_set[-3:]
A single run gives:
DAY: 1
======
SOLUTION FEASIBLE: 1
SOLUTION OPTIMAL: 0
VALUE OF BASKET: 348.0
SOLUTION (apples,pears,oranges): [20.0, 41.0, 37.0]
PCT CHANGE (apples,pears,oranges): [0.0, -4.65, 0.0]
DAY: 2
======
SOLUTION FEASIBLE: 1
SOLUTION OPTIMAL: 0
VALUE OF BASKET: 251.0
SOLUTION (apples,pears,oranges): [21.0, 41.0, 37.0]
PCT CHANGE (apples,pears,oranges): [5.0, 0.0, 0.0]
DAY: 3
======
SOLUTION FEASIBLE: 1
SOLUTION OPTIMAL: 0
VALUE OF BASKET: 213.0
SOLUTION (apples,pears,oranges): [20.0, 41.0, 37.0]
PCT CHANGE (apples,pears,oranges): [-4.76, 0.0, 0.0]
The l0 formulation is also presented:
def fruit_basket_l0_ortools():
    UPPER_BOUND = 1000
    prices = [[2, 3, 5],
              [1, 2, 4],
              [1, 2, 3]]
    holdings = [20, 43, 37]
    values = [348, 251, 213]
    for day in range(len(values)):
        solver = pywraplp.Solver('ILPSolver',
                                 pywraplp.Solver.BOP_INTEGER_PROGRAMMING)
        # solver = pywraplp.Solver('ILPSolver',
        #                          pywraplp.Solver.CLP_LINEAR_PROGRAMMING)
        c = [1, 0, 0, 0]
        price = prices[day]
        value = values[day]
        A_eq = [[0, price[0], price[1], price[2]]]
        b_eq = [value]
        A_ub = [[-1*holdings[0],  1.0,    0,    0],
                [-1*holdings[0], -1.0,    0,    0],
                [-1*holdings[1],    0,  1.0,    0],
                [-1*holdings[1],    0, -1.0,    0],
                [-1*holdings[2],    0,    0,  1.0],
                [-1*holdings[2],    0,    0, -1.0]]
        b_ub = [holdings[0], -1*holdings[0], holdings[1], -1*holdings[1], holdings[2], -1*holdings[2]]

        num_vars = len(c)
        num_ineq_constraints = len(A_ub)
        num_eq_constraints = len(A_eq)

        data = [[]] * num_vars
        data[0] = solver.IntVar(-UPPER_BOUND, UPPER_BOUND, 'e')
        data[1] = solver.IntVar(0, UPPER_BOUND, 'x1')
        data[2] = solver.IntVar(0, UPPER_BOUND, 'x2')
        data[3] = solver.IntVar(0, UPPER_BOUND, 'x3')

        constraints = [0] * (len(A_ub) + len(b_eq))
        # Inequality constraints
        for i in range(0, num_ineq_constraints):
            constraints[i] = solver.Constraint(-solver.infinity(), b_ub[i])
            for j in range(0, num_vars):
                constraints[i].SetCoefficient(data[j], A_ub[i][j])
        # Equality constraints
        for i in range(num_ineq_constraints, num_ineq_constraints + num_eq_constraints):
            constraints[i] = solver.Constraint(int(b_eq[i-num_ineq_constraints]), b_eq[i-num_ineq_constraints])
            for j in range(0, num_vars):
                constraints[i].SetCoefficient(data[j], A_eq[i-num_ineq_constraints][j])
        # Objective function
        objective = solver.Objective()
        for i in range(0, num_vars):
            objective.SetCoefficient(data[i], c[i])
        # Set up as minimization problem
        objective.SetMinimization()
        # Solve it
        result_status = solver.Solve()
        solution_set = [data[i].solution_value() for i in range(len(data))]

        print('DAY: {}'.format(day+1))
        print('======')
        print('SOLUTION FEASIBLE: {}'.format(solver.FEASIBLE))
        print('SOLUTION OPTIMAL: {}'.format(solver.OPTIMAL))
        print('VALUE OF BASKET: {}'.format(np.dot(A_eq[0], solution_set)))
        print('SOLUTION (apples,pears,oranges): {!r}'.format(solution_set[-3:]))
        print('PCT CHANGE (apples,pears,oranges): {!r}\n\n'.format([round(100*(x-y)/y, 2) for x, y in zip(solution_set[-3:], holdings)]))

        # Update holdings for the next day
        holdings = solution_set[-3:]
A single run of this gives
DAY: 1
======
SOLUTION FEASIBLE: 1
SOLUTION OPTIMAL: 0
VALUE OF BASKET: 348.0
SOLUTION (apples,pears,oranges): [33.0, 79.0, 9.0]
PCT CHANGE (apples,pears,oranges): [65.0, 83.72, -75.68]
DAY: 2
======
SOLUTION FEASIBLE: 1
SOLUTION OPTIMAL: 0
VALUE OF BASKET: 251.0
SOLUTION (apples,pears,oranges): [49.0, 83.0, 9.0]
PCT CHANGE (apples,pears,oranges): [48.48, 5.06, 0.0]
DAY: 3
======
SOLUTION FEASIBLE: 1
SOLUTION OPTIMAL: 0
VALUE OF BASKET: 213.0
SOLUTION (apples,pears,oranges): [51.0, 63.0, 12.0]
PCT CHANGE (apples,pears,oranges): [4.08, -24.1, 33.33]
Summary
The l1 formulation gives more sensible results, with much lower turnover. The optimality check fails on all runs, however, which is concerning. I included a linear solver too, and that fails the feasibility check somehow; I don't know why. The Google people provide precious little documentation for the ortools lib, and most of it is for the C++ lib. But the l1 formulation may be a solution to your problem, and it may scale. ILP is NP-complete in general, and so, most likely, is your problem.
Also - does a solution exist on day 2? How do you define % change so that it does in your chart above? If I knew, I could recast the inequalities above and we would have the general solution.
You have a logic problem on integers, not a representation problem. Neural networks are relevant to problems with a complex representation (e.g., images with pixels, objects of different shapes and colors, sometimes hidden, etc.), as they build their own set of features (descriptors) and mipmaps; they are also a better match for problems dealing with reals than with integers; and last, as they are today, they don't really deal with reasoning and logic, or at best with simple logic like a small succession of if/else or switch statements, and we don't really have control over that.
What I see is closer to a cryptographic-ish problem with constraints (10% change, max 100 articles).
Solution for all sets of fruits
There is a way to reach all solutions very quickly. We start by factoring the total into primes, then we find a few solutions through brute force. From there we can change the set of fruits while keeping the total equal. E.g., we exchange 1 orange for 1 apple and 1 pear with prices = (1,2,3). This way we can navigate through solutions without having to go through brute force.
Algorithm(s): you factorize the total into prime numbers, then you split the factors into two or more groups; let's take 2 groups: let A be one common multiplier, and let B be the product of the other(s). Then you can add up fruits to reach the total B.
Examples:
Day 1: Apple = 1, Pears = 2, Oranges = 3, Basket Value = 217
Day 2: Apple = 2, Pears = 3, Oranges = 5, Basket Value = 348
217 factorizes into [7, 31]; we pick 31 as A (the common multiplier), then say 7 = 3*2 + 1 (2 oranges, 0 pears, 1 apple), and you have an answer: 62 oranges, 0 pears, 31 apples. 62+31 < 100: valid.
348 factorizes into [2, 2, 3, 29]; you have several ways to group your factors and multiply your fruits inside this. The multiplier can be 29 (or 2*29, etc.), then you pick your fruits to reach 12. Let's say 12 = 2*2 + 3 + 5. You get (2 apples, 1 pear, 1 orange) * 29, but that's more than 100 articles. You can recursively fuse 1 apple and 1 pear into 1 orange until you are below 100 articles, or you can go directly to the solution with a minimum of articles: (2 oranges, 1 apple) * 29 = (58 oranges, 29 apples). And at last:
-- 87 < 100: valid;
-- the change is (-4 oranges, -2 apples), 6/93 = 6.45% < 10% change: valid.
Code
Remark: no implementation of the 10% variation
Remark: I didn't implement the "fruit exchange" process that allows the "solution navigation"
Run with python -O solution.py to optimize and remove the debug messages.
def prime_factors(n):
    i = 2
    factors = []
    while i * i <= n:
        if n % i:
            i += 1
        else:
            n //= i
            factors.append(i)
    if n > 1:
        factors.append(n)
    return factors

def possibilities(value, prices):
    for i in range(0, value + 1, prices[0]):
        for j in range(0, value + 1 - i, prices[1]):
            k = value - i - j
            if k % prices[2] == 0:
                yield i//prices[0], j//prices[1], k//prices[2]

days = [
    (217, (1, 2, 3)),
    (348, (2, 3, 5)),
    (251, (1, 2, 4)),
    (213, (1, 2, 3)),
]

for set in days:
    total = set[0]
    (priceApple, pricePear, priceOrange) = set[1]
    factors = prime_factors(total)
    if __debug__:
        print(str(total) + " -> " + str(factors))
    # remove small article to help factorize (odd helper)
    evenHelper = False
    if len(factors) == 1:
        evenHelper = True
        t1 = total - priceApple
        factors = prime_factors(t1)
        if __debug__:
            print(str(total) + " --> " + str(factors))
    # merge factors on left
    while factors[0] < priceOrange:
        factors = [factors[0] * factors[1]] + factors[2:]
        if __debug__:
            print("merging: " + str(factors))
    # merge factors on right
    if len(factors) > 2:
        multiplier = 1
        for f in factors[1:]:
            multiplier *= f
        factors = [factors[0]] + [multiplier]
    (smallTotal, multiplier) = factors
    if __debug__:
        print("final factors: " + str(smallTotal) + " (small total) , " + str(multiplier) + " (multiplier)")
    # solutions satisfying #<100
    smallMax = 100 / multiplier
    solutions = [o for o in possibilities(smallTotal, set[1]) if sum(o) < smallMax]
    for solution in solutions:
        (a, p, o) = [i * multiplier for i in solution]
        # if we used it, we need to add back the odd helper to reach the actual solution
        if evenHelper:
            a += 1
        print(str(a) + " apple(s), " + str(p) + " pear(s), " + str(o) + " orange(s)")
    # separating solutions
    print()
I timed the program with a 10037 total with (5, 8, 17) prices, and maximum 500 articles: it's about 2ms (on i7 6700k). The "solution navigation" process is very simple and shouldn't add significant time.
There might be a heuristic to go from day to day without having to do the factorization + navigation + validation process. I'll think about it.
I know it's a bit late, but I thought this was an interesting problem and that I might as well add my two cents.
My code:
import math

prices = [1, 2, 3]
basketVal = 217
maxFruits = 100
numFruits = len(prices)

## Get the possible baskets
def getPossibleBaskets(maxFruits, numFruits, basketVal, prices):
    possBaskets = []
    for i in range(101):
        for j in range(101):
            for k in range(101):
                if i + j + k > 100:
                    pass
                else:
                    possibleBasketVal = 0
                    for m in range(numFruits):
                        possibleBasketVal += (prices[m] * [i, j, k][m])
                        if possibleBasketVal > basketVal:
                            break
                    if possibleBasketVal == basketVal:
                        possBaskets.append([i, j, k])
    return possBaskets

firstDayBaskets = getPossibleBaskets(maxFruits, numFruits, basketVal, prices)

## Compare the baskets for percentage change and filter out the values
while True:
    prices = list(map(int, input("New Prices:\t").split()))
    basketVal = int(input("New Basket Value:\t"))
    maxFruits = int(input("Max Fruits:\t"))
    numFruits = len(prices)
    secondDayBaskets = getPossibleBaskets(maxFruits, numFruits, basketVal, prices)
    possBaskets = []
    for basket in firstDayBaskets:
        for newBasket in secondDayBaskets:
            if newBasket not in possBaskets:
                percentChange = 0
                for n in range(numFruits):
                    percentChange += (abs(basket[n] - newBasket[n]) / 100)
                if percentChange <= 10:
                    possBaskets.append(newBasket)
    firstDayBaskets = possBaskets
    secondDayBaskets = []
    print(firstDayBaskets)
I guess this could be called a brute force solution, but it definitely works. Every day, it'll print the possible configurations of the basket.

Iteration performance

I made a function to evaluate the following problem experimentally, taken from A Primer for the Mathematics of Financial Engineering.
Problem: Let X be the number of times you must flip a fair coin until it lands heads. What are E[X] (expected value) and var(X) (variance)?
Following the textbook solution, the following code yields the correct answer:
from sympy import *
k = symbols('k')
Expected_Value = summation(k/2**k, (k, 1, oo)) # Both solutions work
Variance = summation(k**2/2**k, (k, 1, oo)) - Expected_Value**2
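(For reference, both sums have closed forms: sum_{k>=1} k/2^k = 2 and sum_{k>=1} k^2/2^k = 6, so E[X] = 2 and var(X) = 6 - 2^2 = 2; X is geometric with p = 1/2, giving E[X] = 1/p and var(X) = (1-p)/p^2.)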
To validate this answer, I decided to have a go at making a function to simulate this experiment. The following code is what I came up with.
import numpy as np

def coin_toss(toss, probability=[0.5, 0.5]):
    """Computes expected value and variance for coin toss experiment"""
    flips = []  # Collects total number of throws until heads appear per experiment.
    for _ in range(toss):  # Simulate n flips
        number_flips = []  # Number of flips until heads is tossed
        while sum(number_flips) == 0:  # Continue simulation while Tails are thrown
            number_flips.append(np.random.choice(2, p=probability))  # Append result to number_flips
        flips.append(len(number_flips))  # Append number of flips until lands heads to flips
    Expected_Value, Variance = np.mean(flips), np.var(flips)
    print('E[X]: {}'.format(Expected_Value),
          '\nvar[X]: {}'.format(Variance))  # Return expected value
The run time, if I simulate 1e6 experiments using the following code, is approximately 35.9 seconds.
from timeit import Timer
t1 = Timer("""coin_toss(1000000)""", """from __main__ import coin_toss""")
print(t1.timeit(1))
In the interest of developing my understanding of Python, is this a particularly efficient/pythonic way of approaching a problem like this? How can I utilise existing libraries to improve efficiency/flow execution?
In order to code in an efficient and pythonic way, you should take a look at PythonSpeed and NumPy. One example of faster code using numpy can be found below.
The ABC of optimizing in python+numpy is to vectorize operations, which in this case is quite difficult because there is a while loop that could in principle run forever (the coin can land tails 40 times in a row). However, instead of doing a for loop with toss iterations, the work can be done in chunks. That is the main difference between coin_toss from the question and this coin_toss_2d approach.
coin_toss_2d
The main advantage of coin_toss_2d is that it works in chunks. The size of these chunks has some default values, but they can be modified (they will affect speed). Thus, the outer while current_toss<toss loop only iterates roughly toss/repetitions_at_a_time times. This is achieved with numpy, which allows us to generate a matrix with the results of repeating, repetitions_at_a_time times, the experiment of flipping a coin flips_per_try times. This matrix will contain 0 (tails) and 1 (heads).
# i.e. doing only 5 repetitions with 3 flips_at_a_time
flip_events = np.random.choice([0,1],size=(repetitions_at_a_time,flips_per_try),p=probability)
# Out
[[0 0 0] # still no head, we will have to keep trying
[0 1 1] # head at the 2nd try (position 1 in python)
[1 0 0]
[1 0 1]
[0 0 1]]
Once this result is obtained, argmax is called. This finds the index of the maximum (which will be 1, a head) of each row (repetition) and, in case of multiple occurrences, returns the first one, which is exactly what is needed: the first head after a sequence of tails.
maxs = flip_events.argmax(axis=1)
# Out
[0 1 0 0 2]
# The first position is 0, however, flip_events[0,0]!=1, it's not a head!
However, the case where the whole row is 0 must be considered. In this case, the maximum will be 0, and its first occurrence will also be 0, the first column (try). Therefore, we check that all the maximums found at the first try correspond to a head at the first try.
not_finished = (maxs==0) & (flip_events[:,0]!=1)
# Out
[ True False False False False] # first repetition is not finished
If that is not the case, we loop repeating that same process but only for the repetitions where there was no head in any of the tries.
n = np.sum(not_finished)
while n != 0:  # while there are sequences without any head
    flip_events = np.random.choice([0,1], size=(n, flips_per_try), p=probability)  # number of experiments reduced to n (the number of all tails sequences)
    maxs2 = flip_events.argmax(axis=1)
    maxs[not_finished] += maxs2 + flips_per_try  # take into account that there have been flips_per_try tries already (every iteration is added)
    not_finished2 = (maxs2 == 0) & (flip_events[:, 0] != 1)
    not_finished[not_finished] = not_finished2
    n = np.sum(not_finished)
# Out
# flip_events
[[1 0 1]] # Now there is a head
# maxs2
[0]
# maxs
[3 1 0 0 2] # The value of the still unfinished repetition has been updated,
# taking into account that the first position in flip_events is the 4th,
# without affecting the rest
Then the indexes corresponding to the first head occurrence are stored (we have to add 1 because Python indexing starts at zero instead of 1). There is a try ... except ... block to cope with cases where toss is not a multiple of repetitions_at_a_time.
import numpy as np

def coin_toss_2d(toss, probability=[.5,.5], repetitions_at_a_time=10**5, flips_per_try=20):
    # Initialize and preallocate data
    current_toss = 0
    flips = np.empty(toss)
    # loop by chunks
    while current_toss < toss:
        # repeat repetitions_at_a_time times the experiment "flip coin flips_per_try times"
        flip_events = np.random.choice([0,1], size=(repetitions_at_a_time, flips_per_try), p=probability)
        # store first head occurrence
        maxs = flip_events.argmax(axis=1)
        # Check for all-tails sequences, that is, repetitions where we have to keep trying to get a head
        not_finished = (maxs == 0) & (flip_events[:, 0] != 1)
        n = np.sum(not_finished)
        while n != 0:  # while there are sequences without any head
            # number of experiments reduced to n (the number of all-tails sequences)
            flip_events = np.random.choice([0,1], size=(n, flips_per_try), p=probability)
            maxs2 = flip_events.argmax(axis=1)
            # take into account that there have been flips_per_try tries already (every iteration is added)
            maxs[not_finished] += maxs2 + flips_per_try
            not_finished2 = (maxs2 == 0) & (flip_events[:, 0] != 1)
            not_finished[not_finished] = not_finished2
            n = np.sum(not_finished)
        # try/except in case toss is not a multiple of repetitions_at_a_time;
        # in general, no error is raised, which is why a try is useful
        try:
            flips[current_toss:current_toss+repetitions_at_a_time] = maxs + 1
        except ValueError:
            flips[current_toss:] = maxs[:toss-current_toss] + 1
        # Update current_toss and move to the next chunk
        current_toss += repetitions_at_a_time
    # Once all values are obtained, average and return them
    Expected_Value, Variance = np.mean(flips), np.var(flips)
    return Expected_Value, Variance
coin_toss_map
Here the code is basically the same, but now the inner while loop is done in a separate function, which is called from the wrapper function coin_toss_map using map.
def toss_chunk(args):
    probability, repetitions_at_a_time, flips_per_try = args
    # repeat repetitions_at_a_time times experiment "flip coin flips_per_try times"
    flip_events = np.random.choice([0,1], size=(repetitions_at_a_time, flips_per_try), p=probability)
    # store first head occurrence
    maxs = flip_events.argmax(axis=1)
    # Check for all tails sequences
    not_finished = (maxs == 0) & (flip_events[:, 0] != 1)
    n = np.sum(not_finished)
    while n != 0:  # while there are sequences without any head
        # number of experiments reduced to n (the number of all tails sequences)
        flip_events = np.random.choice([0,1], size=(n, flips_per_try), p=probability)
        maxs2 = flip_events.argmax(axis=1)
        # take into account that there have been flips_per_try tries already (every iteration is added)
        maxs[not_finished] += maxs2 + flips_per_try
        not_finished2 = (maxs2 == 0) & (flip_events[:, 0] != 1)
        not_finished[not_finished] = not_finished2
        n = np.sum(not_finished)
    return maxs + 1
def coin_toss_map(toss, probability=[.5,.5], repetitions_at_a_time=10**5, flips_per_try=20):
    n_chunks, remainder = divmod(toss, repetitions_at_a_time)
    args = [(probability, repetitions_at_a_time, flips_per_try) for _ in range(n_chunks)]
    if remainder:
        args.append((probability, remainder, flips_per_try))
    flips = np.concatenate(list(map(toss_chunk, args)))  # list() so this also works on Python 3, where map is lazy
    # Once all values are obtained, average and return them
    Expected_Value, Variance = np.mean(flips), np.var(flips)
    return Expected_Value, Variance
Performance comparison
In my computer, I got the following computation time:
In [1]: %timeit coin_toss(10**6)
# Out
# ('E[X]: 2.000287', '\nvar[X]: 1.99791891763')
# ('E[X]: 2.000459', '\nvar[X]: 2.00692478932')
# ('E[X]: 1.998118', '\nvar[X]: 1.98881045808')
# ('E[X]: 1.9987', '\nvar[X]: 1.99508631')
# 1 loop, best of 3: 46.2 s per loop
In [2]: %timeit coin_toss_2d(10**6,repetitions_at_a_time=5*10**5,flips_per_try=4)
# Out
# 1 loop, best of 3: 197 ms per loop
In [3]: %timeit coin_toss_map(10**6,repetitions_at_a_time=4*10**5,flips_per_try=4)
# Out
# 1 loop, best of 3: 192 ms per loop
And the results for the mean and variance are:
In [4]: [coin_toss_2d(10**6,repetitions_at_a_time=10**5,flips_per_try=10) for _ in range(4)]
# Out
# [(1.999848, 1.9990739768960009),
# (2.000654, 2.0046035722839997),
# (1.999835, 2.0072329727749993),
# (1.999277, 2.001566477271)]
In [4]: [coin_toss_map(10**6,repetitions_at_a_time=10**5,flips_per_try=4) for _ in range(4)]
# Out
# [(1.999552, 2.0005057992959996),
# (2.001733, 2.011159996711001),
# (2.002308, 2.012128673136001),
# (2.000738, 2.003613455356)]
