Python Random Values with a Given Density - python

Say I want to build a maze with a certain probability of an obstacle at each position. This probability is determined by a density value ranging from 0 to 10, with 0 meaning "no chance", and 10 meaning "certain".
Does this Python code do what I want?
import random
obstacle_density = 10
if random.randint(0, 9) < obstacle_density:
print("There is an obstacle")
I've tried various combinations of upper and lower bounds and inequalities, and this seems to do the job, but I'm suspicious. For one thing, there are 11 possible values for obstacle_density but only 10 possible values from random.randint(0, 9).

Your solution does seem to work: random.randint(0, 9) produces 10 equally likely values, and exactly obstacle_density of them satisfy the inequality, so the probability of an obstacle is obstacle_density / 10, even at the endpoints 0 and 10.
Here's how I would approach it, even if it is a bit redundant - I'd start with a table just for my own reference:
density | probability of obstacle
---------------------------------
0 | 0%
1 | 10%
2 | 20%
3 | 30%
4 | 40%
5 | 50%
6 | 60%
7 | 70%
8 | 80%
9 | 90%
10 | 100%
This seems to add up. I present two versions of a function which returns True or False depending on the density. In the first version, I use the density to create the associated weights to be used with random.choices (the total weight in this case would be 100). For example, if density = 3, then weights = [30, 70] - 30% to be True, 70% to be False.
def get_obstacle_state_version_1(density):
    from random import choices
    assert isinstance(density, int)
    assert density in range(0, 11)  # 0 - 10 inclusive
    true_weight = density * 10
    false_weight = 100 - true_weight
    weights = [true_weight, false_weight]
    return choices([True, False], weights=weights, k=1)[0]
Here's the second version, in which I use random.choice rather than random.choices. The latter always returns a list of samples, even if the sample size k is 1.
Here, the idea is the same, but basically the density influences the number of Trues and Falses that appear in the population to be sampled. For example, if density = 3, then random.choice would pick one element from a list of 30 Trues, and 70 Falses with a uniform distribution.
def get_obstacle_state_version_2(density):
    from random import choice
    assert isinstance(density, int)
    assert density in range(0, 11)  # 0 - 10 inclusive
    true_count = density * 10
    false_count = 100 - true_count
    return choice([True] * true_count + [False] * false_count)
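As a quick sanity check (my addition, not part of the original answer), you can sample either version many times and confirm the observed frequency is close to density / 10; the 100,000-trial count is arbitrary:
from collections import Counter

counts = Counter(get_obstacle_state_version_1(3) for _ in range(100_000))
print(counts[True] / 100_000)  # should be close to 0.3 for density = 3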

You should loop over the maze and assign a probability at each site, something like this:
probability = random.randint(0, 10) / 10
I have no idea what you mean by obstacle_density, so I am not going to go there.

Related

How to generate in python a random number in a range but biased toward some specific numbers?

I would like to choose a range, for example 60 to 80, and generate a random number from it. However, I'd like a higher probability for 65-72 and a lower probability for the ranges on either side (60-64 and 73-80).
An example:
The ranges 60-64 and 73-80 together have a 35% chance of being chosen; the range 65-72 has a 65% chance.
The elements in the subranges are equally likely. I'm generating integers.
Also, a scalable solution would be interesting, so the same idea could be applied to larger ranges, for example 1000-2000 but biased toward 1400-1600.
Could anyone help with some ideas?
Thanks in advance to anyone willing to contribute!
For equally likely outcomes in the subranges, the following will do the trick:
import random

THRESHOLD = [0.65, 0.65 + 0.35 * 5 / 13]

def my_distribution():
    u = random.random()
    if u <= THRESHOLD[0]:
        return random.randint(65, 72)
    elif u <= THRESHOLD[1]:
        return random.randint(60, 64)
    else:
        return random.randint(73, 80)
This uses a uniform random number to decide which subrange you're in, then generates values equally likely within that subrange.
The THRESHOLD values are essentially a cumulative distribution function, arranged so that the most likely outcome is checked first. A Uniform(0,1) value u falls below the first threshold 65% of the time, yielding the range [65, 72]. Failing that, 5 of the 13 remaining integers (5/13 of the remaining 35%) lie in [60, 64], so u falls below the second threshold 5/13 of the time, and the remaining 8/13 of the time we generate from [73, 80].
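As a quick check (my addition, not from the original answer), tallying a large sample should put roughly 65% of the draws in [65, 72]:
draws = [my_distribution() for _ in range(100_000)]
in_middle = sum(65 <= d <= 72 for d in draws)
print(in_middle / len(draws))  # should be close to 0.65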
Here's a numpy based solution:
import numpy as np

# Some params
left_start = 60    # Start of left interval   [60, 64]
middle_start = 65  # Start of middle interval [65, 72]
right_start = 73   # Start of right interval  [73, 80]
right_end = 80     # End of the right interval
count = 1000       # Number of values to generate
middle_wt = 0.65   # Middle range selected with weight/probability 0.65

middle = np.arange(middle_start, right_start)
rest = np.r_[left_start:middle_start, right_start:(right_end + 1)]

rng1 = np.random.default_rng(None)  # Generator for randomly choosing the range.
rng2 = np.random.default_rng(None)  # Generator for generating values in the ranges.

# Now generate a random list of 0s and 1s to indicate the choice between
# 'middle' and 'rest'. We set middle_wt as the weight/probability for 0 and
# (1-middle_wt) as the weight/probability for 1.
# (0 indicates the middle range and 1 indicates the rest.)
range_choices = rng1.choice([0, 1], replace=True, size=count, p=[middle_wt, (1-middle_wt)])

# Now generate 'count' values for the middle range
middle_choices = rng2.choice(middle, replace=True, size=count)

# Now generate 'count' values for the 'rest' of the range (non-middle)
rest_choices = rng2.choice(rest, replace=True, size=count)

result = np.choose(range_choices, (middle_choices, rest_choices))
print(np.sum((65 <= result) & (result <= 72)))
Note:
In the above code, p=[middle_wt, (1-middle_wt)] is a list of weights. The middle_wt is the weight for the middle range [65,72], and the (1-middle_wt) is the weight for the rest.
Output:
649 # Indicates that 649 out of the 1000 values of result are in the middle range [65,72]

Is there a better way to guess possible unknown variables without brute force than I am doing? Machine learning? [duplicate]

This question already has answers here:
How to approach a number guessing game (with a twist) algorithm?
(7 answers)
Closed 4 years ago.
I have a game with the following rules:
A user is given fruit prices and has a chance to buy or sell items in their fruit basket every turn.
The user cannot make more than a 10% total change in their basket on a single turn.
Fruit prices change every day, so the total value of the basket (each fruit's price multiplied by its quantity, summed over the basket) changes with the price changes every day as well.
The program is only given the current price of all the fruits and the current value of the basket (current price of fruit * quantities for all items in the basket).
Based on these 2 inputs(all fruit prices and basket total value), the program tries to guess what items are in the basket.
A basket cannot hold more than 100 items, but slots can be empty.
The player can play several turns.
My goal is to guess accurately, as computationally inexpensively as possible (read: no brute force), and to scale if there are thousands of new fruits.
I am struggling to find an answer, but in my mind it's not hard. If I have the table below, I can study day 1 and get the following data:
Apple 1
Pears 2
Oranges 3
Basket Value = 217
I can do a back-of-the-napkin calculation and assume the quantities in the basket are 0 apples, 83 pears, and 17 oranges, equaling a basket value of 217.
The next day, the fruit prices and the basket value change to (apples = 2, pears = 3, oranges = 5) with a basket value of 348. When I take my assumed quantities above (0, 83, 17), I get a total value of 334 – not correct! Running this through my script, I see the closest match is 0 apples, 76 pears, 24 oranges, which does equal 348, but once the % change is factored in it's a 38% change, so it's not possible!
I know I can completely brute-force this, but if I have 1000 fruits, it won't scale. Not to jump on any bandwagon, but could something like a neural net quickly rule out the unlikely possibilities so I can handle large volumes of data? I think there has to be a more scalable/quicker way than pure brute force. Or is there any other type of solution that could get the result?
Here is the raw data (remember, the program can only see the prices and the total basket value):
          Day 1   Day 2   Day 3   Day 4
Apples        1       2       1       1
Pears         2       3       2       2
Oranges       3       5       4       3
Total       217     348     251     213
Here's some brute-force code (thank you @Paul Hankin for a cleaner example than mine):
def possibilities(value, prices):
    for i in range(0, value+1, prices[0]):
        for j in range(0, value+1-i, prices[1]):
            k = value - i - j
            if k % prices[2] == 0:
                yield i//prices[0], j//prices[1], k//prices[2]

def merge_totals(last, this, r):
    ok = []
    for t in this:
        for l in last:
            f = int(sum(l) * r)
            if all(l[i] - f <= t[i] <= l[i] + f for i in range(len(l))):
                ok.append(t)
                break
    return ok

days = [
    (217, (1, 2, 3)),
    (348, (2, 3, 5)),
    (251, (1, 2, 4)),
]

ps = None
for i, d in enumerate(days):
    new_ps = list(possibilities(*d))
    if ps is None:
        ps = new_ps
    ps = merge_totals(ps, new_ps, 0.10)
    print('Day %d' % (i+1))
    for p in ps:
        print('Day %d,' % (i+1), 'apples: %s, pears: %s, oranges: %s' % p)
    print()
Update - The info so far is awesome. Does it make sense to break the problem into two subproblems? One is generating the possibilities, while the other is finding the relationship between the possibilities (no more than a 10% daily change). Couldn't ruling out possibilities also be used to generate only the possibilities that are feasible to begin with? I'm not sure of the approach yet, but I do feel the two problems are distinct yet tightly related. Your thoughts?
Update 2 - There are a lot of questions about the % change. This is the total volume of items in the basket that can change. To use the game example: imagine the store says you can sell/return/buy fruits, but they cannot amount to more than 10% of your last bill. So although changes in fruit prices can change your basket's value, the user cannot take any action that would change it by more than 10%. If the value was 100, they can make changes that get it to 110, but not more.
I hate to let you down but I really don't think a neural net will help at all for this problem, and IMO the best answer to your question is the advice "don't waste your time trying neural nets".
An easy rule of thumb for deciding whether or not neural networks are applicable is to think, "can an average adult human solve this problem reasonably well in a few seconds?" For problems like "what's in this image", "respond to this question", or "transcribe this audio clip", the answer is yes. But for your problem, the answer is a most definite no.
Neural networks have limitations, and one is that they don't deal well with highly logical problems. This is because the answers are generally not "smooth". If you take an image and slightly change a handful of pixels, the content of the image is still the same. If you take an audio clip and insert a few milliseconds of noise, a neural net will probably still be able to figure out what's said. But in your problem, change a single day's "total basket value" by only 1 unit, and your answer(s) will drastically change.
It seems that the only way to solve your problem is with a "classical" algorithmic approach. As currently stated, there might not be any algorithm better than brute force, and it might not be possible to rule out much. For example, what if every day has the property that all fruits are priced the same? The count of each fruit can vary, as long as the total number of fruits is fixed, so the number of possibilities is still exponential in the number of fruits. If your goal is to "produce a list of possibilities", then no algorithm can be better than exponential time since this list can be exponentially large in some cases.
It's interesting that part of your problem can be reduced to an integer linear program (ILP). Consider a single day, where you are given the basket total B and each fruit's cost c_i, for i=1 through i=n (if n is the total number of distinct fruits). Let's say the prices are large, so it's not obvious that you can "fill up" the basket with unit cost fruits. It can be hard in this situation to even find a single solution. Formulated as an ILP, this is equivalent to finding integer values of x_i such that:
sum_i (x_i*c_i) = x_1*c_1 + x_2*c_2 + ... + x_n*c_n = B
and x_i >= 0 for all 1 <= i <= n (can't have negative fruits), and sum_i x_i <= 100 (can have at most 100 fruits).
The good news is that decent ILP solvers exist -- you can just hand over the above formulas and the solver will do its best to find a single solution. You can even add an "objective function" that the solver will maximize or minimize -- minimizing sum_i x_i has the effect of minimizing the total number of fruits in the basket. The bad news is that ILP is NP-complete, so there is almost no hope of finding an efficient solution for a large number of fruits (which equals the number of variables x_i).
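For illustration, here is a minimal sketch of the single-day feasibility problem using Google OR-Tools' CP-SAT solver (my choice of solver and variable names; the answer doesn't prescribe either), with the question's day-1 prices and total:
from ortools.sat.python import cp_model

prices = [1, 2, 3]   # day-1 costs c_i for apples, pears, oranges
B = 217              # basket total

model = cp_model.CpModel()
x = [model.NewIntVar(0, 100, 'x%d' % i) for i in range(len(prices))]
model.Add(sum(p * xi for p, xi in zip(prices, x)) == B)  # basket value constraint
model.Add(sum(x) <= 100)                                 # at most 100 fruits
model.Minimize(sum(x))                                   # optional: fewest total fruits

solver = cp_model.CpSolver()
if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    print([solver.Value(xi) for xi in x])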
I think the best approach forward is to try the ILP approach, but also introduce some more constraints on the scenario. For example, what if all fruits had a different prime number cost? This has the nice property that if you find one solution, you can enumerate a bunch of other related solutions. If an apple costs m and an orange costs n, where m and n are relatively prime, then you can "trade" n*x apples for m*x oranges without changing the basket total, for any integer x>0 (so long as you have enough apples and oranges to begin with). If you choose all fruits to have different prime number costs, then all of the costs will be pairwise relatively prime. I think this approach will result in relatively few solutions for a given day.
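A quick numeric check of that trade property (my own toy numbers, with hypothetical relatively prime costs m = 3 and n = 5):
m, n = 3, 5                  # apple and orange costs, relatively prime
apples, oranges, x = 10, 4, 1
total = apples * m + oranges * n
# Trade n*x apples for m*x oranges: the basket total is unchanged.
apples, oranges = apples - n * x, oranges + m * x
assert apples * m + oranges * n == total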
You might also consider other constraints, such as "there can't be more than 5 fruits of a single kind in the basket" (add the constraint x_i <= 5), or "there can be at most 5 distinct kinds of fruits in the basket" (but this is harder to encode as an ILP constraint). Adding these kinds of constraints will make it easier for the ILP solver to find a solution.
Of course the above discussion is focused on a single day, and you have multiple days' worth of data. If the hardest part of the problem is finding any solution for any day at all (which happens if your prices are large), then using an ILP solver will give you a large boost. If solutions are easy to find (which happens if you have a very-low-cost fruit that can "fill up" your basket), and the hardest part of the problem is finding solutions that are "consistent" across multiple days, then the ILP approach might not be the best fit, and in general this problem seems much more difficult to reason about.
Edit: and as mentioned in the comments, for some interpretations of the "10% change" constraint, you can even encode the entire multi-day problem as an ILP.
It seems to me like your approach is reasonable, but whether it is depends on the size of the numbers in the actual game. Here's a complete implementation that's a lot more efficient than yours (but still has plenty of scope for improvement). It keeps a list of possibilities for the previous day, and then filters the current day amounts to those that are within 5% of some possibility from the previous day, and prints them out per day.
def possibilities(value, prices):
    for i in range(0, value+1, prices[0]):
        for j in range(0, value+1-i, prices[1]):
            k = value - i - j
            if k % prices[2] == 0:
                yield i//prices[0], j//prices[1], k//prices[2]

def merge_totals(last, this, r):
    ok = []
    for t in this:
        for l in last:
            f = int(sum(l) * r)
            if all(l[i] - f <= t[i] <= l[i] + f for i in range(len(l))):
                ok.append(t)
                break
    return ok

days = [
    (26, (1, 2, 3)),
    (51, (2, 3, 4)),
    (61, (2, 4, 5)),
]

ps = None
for i, d in enumerate(days):
    new_ps = list(possibilities(*d))
    if ps is None:
        ps = new_ps
    ps = merge_totals(ps, new_ps, 0.05)
    print('Day %d' % (i+1))
    for p in ps:
        print('apples: %s, pears: %s, oranges: %s' % p)
    print()
Problem Framing
This problem can be described as a combinatorial optimization problem. You're trying to find an optimal object (a combination of fruit items) from a finite set of objects (all possible combinations of fruit items). With the proper analogy and transformations, we can reduce this fruit basket problem to the well known, and extensively studied (since 1897), knapsack problem.
Solving this class of optimization problems is NP-hard. The decision problem of answering "Can we find a combination of fruit items with a value of X?" is NP-complete. Since you want to account for a worst case scenario when you have thousands of fruit items, your best bet is to use a metaheuristic, like evolutionary computation.
Proposed Solution
Evolutionary computation is a family of biologically inspired metaheuristics. They work by revising and mixing (evolving) the most fit candidate solutions based on a fitness function and discarding the least fit ones over many iterations. The higher the fitness of a solution, the more likely it will reproduce similar solutions and survive to the next generation (iteration). Eventually, a local or global optimal solution is found.
These methods provide a needed compromise when the search space is too large to cover with traditional closed form mathematical solutions. Due to the stochastic nature of these algorithms, different executions of the algorithms may lead to different local optima, and there is no guarantee that the global optimum will be found. The odds are good in our case since we have multiple valid solutions.
Example
Let's use the Distributed Evolutionary Algorithms in Python (DEAP) framework and retrofit their knapsack problem example to our problem. In the code below we apply a strong penalty to baskets with 100+ items. This will severely reduce their fitness and have them taken out of the population pool within one or two generations. There are other valid ways to handle constraints as well.
# This file is part of DEAP.
#
# DEAP is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as
# published by the Free Software Foundation, either version 3 of
# the License, or (at your option) any later version.
#
# DEAP is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with DEAP. If not, see <http://www.gnu.org/licenses/>.
import random

import numpy as np

from deap import algorithms
from deap import base
from deap import creator
from deap import tools

IND_INIT_SIZE = 5   # Items drawn when initializing an individual
MAX_ITEM = 100      # Max 100 fruit items in basket
NBR_ITEMS = 50      # Start with 50 items in basket
FRUIT_TYPES = 10    # Number of fruit types (apples, bananas, ...)

# Generate a dictionary of random fruit prices.
fruit_price = {i: random.randint(1, 5) for i in range(FRUIT_TYPES)}

# Create the fruit items dictionary. The key is the item ID, and the
# value is a (weight, price) tuple. Weight is always 1 here.
# (Items cycle through the fruit types so every item has a price.)
items = {}
for i in range(NBR_ITEMS):
    items[i] = (1, fruit_price[i % FRUIT_TYPES])

# Create the fitness function and an individual (solution candidate).
# A solution candidate in our case is a set of fruit items.
creator.create("Fitness", base.Fitness, weights=(-1.0, 1.0))
creator.create("Individual", set, fitness=creator.Fitness)

toolbox = base.Toolbox()

# Randomly initialize individuals and the population (a list of candidates)
toolbox.register("attr_item", random.randrange, NBR_ITEMS)
toolbox.register("individual", tools.initRepeat, creator.Individual,
                 toolbox.attr_item, IND_INIT_SIZE)
toolbox.register("population", tools.initRepeat, list, toolbox.individual)

def evalBasket(individual):
    """Evaluate the value of the basket and
    apply the constraint penalty.
    """
    value = 0  # Total value of the basket
    for item in individual:
        value += items[item][1]
    # Heavily penalize baskets with 100+ items
    if len(individual) > MAX_ITEM:
        return 10000, 0
    return len(individual), value  # (items in basket, value of basket)

def cxSet(ind1, ind2):
    """Apply a crossover operation on input sets.
    The first child is the intersection of the two sets,
    the second child is the difference of the two sets.
    This is one way to evolve new candidate solutions from
    existing ones. Think of it as parents mixing their genes
    to produce a child.
    """
    temp = set(ind1)  # Used in order to keep type
    ind1 &= ind2      # Intersection (inplace)
    ind2 ^= temp      # Symmetric Difference (inplace)
    return ind1, ind2

def mutSet(individual):
    """Mutation that pops or adds an element.
    In nature, gene mutations help offspring express new traits
    not found in their ancestors. That could be beneficial or
    harmful. Survival of the fittest at play here.
    """
    if random.random() < 0.5:  # 50% chance to remove rather than add
        if len(individual) > 0:
            individual.remove(random.choice(sorted(tuple(individual))))
    else:
        individual.add(random.randrange(NBR_ITEMS))
    return individual,

# Register evaluation, mating, mutation and selection functions
# so the framework can use them to run the simulation.
toolbox.register("evaluate", evalBasket)
toolbox.register("mate", cxSet)
toolbox.register("mutate", mutSet)
toolbox.register("select", tools.selNSGA2)

def main():
    random.seed(64)
    NGEN = 50
    MU = 50
    LAMBDA = 100
    CXPB = 0.7
    MUTPB = 0.2

    pop = toolbox.population(n=MU)  # Initial population
    hof = tools.ParetoFront()       # Using a Pareto front to rank fitness

    # Keep track of population fitness stats, which should
    # improve over generations (iterations).
    stats = tools.Statistics(lambda ind: ind.fitness.values)
    stats.register("avg", np.mean, axis=0)
    stats.register("std", np.std, axis=0)
    stats.register("min", np.min, axis=0)
    stats.register("max", np.max, axis=0)

    algorithms.eaMuPlusLambda(pop, toolbox, MU, LAMBDA,
                              CXPB, MUTPB, NGEN, stats,
                              halloffame=hof)
    return pop, stats, hof

if __name__ == "__main__":
    main()
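If you want to inspect the evolved baskets afterwards, here is a usage sketch (my addition, not part of the original answer); the Pareto front holds the best trade-offs found:
pop, stats, hof = main()
for ind in hof:
    print(sorted(ind), ind.fitness.values)  # item IDs and (size, value) fitness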
Not an answer, but an attempt to make the one piece of information about what "% change" might be supposed to mean (the sum of changes in the count of each item, computed backwards) accessible as text rather than as an image:
        |    Day 1     !      Day 2 change    !      Day 3 change    !      Day 4 change
        |$/1|  # |   $ !$/1|  # |    % |   $  !$/1|  # |    % |   $  !$/1|  # |    % |   $
Apples  |  1|  20|  20 !  2|  21| 4.76|   42  !  1|  21|    0 |   21 !  1|  22| 4.55|   22
Pears   |  2|  43|  86 !  3|  42| 2.38|  126  !  2|  43| 2.33|   86  !  2|  43|    0 |   86
Oranges |  3|  37| 111 !  5|  36| 2.78|  180  !  4|  36|    0 |  144 !  3|  35| 2.86|  105
Total   |   | 100| 217 !   | 100| 9.92|  348  !   | 100| 2.33|  251  !   | 100| 7.40|  213
Integer Linear Programming Approach
This sets up naturally as a multi-step Integer Program, with the holdings in {apples, pears, oranges} from the previous step factoring in the calculation of the relative change in holdings that must be constrained. There is no notion of optimal here, but we can turn the "turnover" constraint into an objective and see what happens.
The solutions provided improve on those in your chart above, and are minimal in the sense of total change in basket holdings.
Comments -
I don't know how you calculated the "% change" column in your table. A change from Day 1 to Day 2 of 20 apples to 21 apples is a 4.76% change?
On all days, your total holdings in fruits is exactly 100. There is a constraint that the sum of holdings is <= 100. No violation, I just want to confirm.
We can set this up as an integer linear program, using the integer optimization routine from ortools. I haven't used an ILP solver for a long time, and this one is kind of flaky, I think (the solver.OPTIMAL flag never seems to be true, even for toy problems; in addition, the ortools LP solver fails to find an optimal solution in cases where scipy.linprog works without a hitch).
h1,d = holdings in apples (number of apples) at end of day d
h2,d = holdings in pears at end of day d
h3,d = holdings in oranges at end of day d
I'll give two proposals here: one minimizes the l1 norm of the absolute error, the other the l0 norm.
The l1 solution minimizes abs(h1,(d+1) - h1,d)/h1,d + ... + abs(h3,(d+1) - h3,d)/h3,d, the hope being that each individual relative change in holdings comes in under 10% when their sum is minimized.
The only thing that prevents this from being a linear program (aside from the integer requirement) is the nonlinear objective function. No problem, we can introduce slack variables and make everything linear. For the l1 formulation, 6 additional slack variables are introduced, 2 per fruit, and 6 additional inequality constraints. For the l0 formulation, 1 slack variable is introduced, and 6 additional inequality constraints.
This is a two-step process. For example, we replace |apples_new - apples_old| / |apples_old| with the variable e, adding inequality constraints to ensure e measures what we'd like. We then replace e with (e+ - e-), where each of e+, e- >= 0. It can be shown that one of e+, e- will be 0 at the optimum, and that (e+ + e-) is then the absolute value of e. That way the pair (e+, e-) can represent a positive or negative number. Standard stuff, but it adds a bunch of variables and constraints. I can explain this in a bit more detail if necessary.
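A toy check of that split (my illustration): taking e+ = max(e, 0) and e- = max(-e, 0) gives e = e+ - e- and |e| = e+ + e-.
for e in (-3.5, 0.0, 2.25):
    e_pos, e_neg = max(e, 0.0), max(-e, 0.0)
    assert e == e_pos - e_neg
    assert abs(e) == e_pos + e_neg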
import numpy as np
from ortools.linear_solver import pywraplp

def fruit_basket_l1_ortools():
    UPPER_BOUND = 1000
    prices = [[2, 3, 5],
              [1, 2, 4],
              [1, 2, 3]]
    holdings = [20, 43, 37]
    values = [348, 251, 213]

    for day in range(len(values)):
        solver = pywraplp.Solver('ILPSolver',
                                 pywraplp.Solver.BOP_INTEGER_PROGRAMMING)
        # solver = pywraplp.Solver('ILPSolver',
        #                          pywraplp.Solver.CLP_LINEAR_PROGRAMMING)

        c = ([1, 1] * 3) + [0, 0, 0]
        price = prices[day]
        value = values[day]

        A_eq = [[0, 0, 0, 0, 0, 0, price[0], price[1], price[2]]]
        b_eq = [value]

        A_ub = [[-1*holdings[0],  1*holdings[0], 0, 0, 0, 0,  1.0,  0,    0],
                [-1*holdings[0],  1*holdings[0], 0, 0, 0, 0, -1.0,  0,    0],
                [0, 0, -1*holdings[1],  1*holdings[1], 0, 0,  0,    1.0,  0],
                [0, 0, -1*holdings[1],  1*holdings[1], 0, 0,  0,   -1.0,  0],
                [0, 0, 0, 0, -1*holdings[2],  1*holdings[2],  0,    0,    1.0],
                [0, 0, 0, 0, -1*holdings[2],  1*holdings[2],  0,    0,   -1.0]]
        b_ub = [1*holdings[0], -1*holdings[0],
                1*holdings[1], -1*holdings[1],
                1*holdings[2], -1*holdings[2]]

        num_vars = len(c)
        num_ineq_constraints = len(A_ub)
        num_eq_constraints = len(A_eq)

        data = [[]] * num_vars
        data[0] = solver.IntVar(0, UPPER_BOUND, 'e1_p')
        data[1] = solver.IntVar(0, UPPER_BOUND, 'e1_n')
        data[2] = solver.IntVar(0, UPPER_BOUND, 'e2_p')
        data[3] = solver.IntVar(0, UPPER_BOUND, 'e2_n')
        data[4] = solver.IntVar(0, UPPER_BOUND, 'e3_p')
        data[5] = solver.IntVar(0, UPPER_BOUND, 'e3_n')
        data[6] = solver.IntVar(0, UPPER_BOUND, 'x1')
        data[7] = solver.IntVar(0, UPPER_BOUND, 'x2')
        data[8] = solver.IntVar(0, UPPER_BOUND, 'x3')

        constraints = [0] * (len(A_ub) + len(b_eq))

        # Inequality constraints
        for i in range(0, num_ineq_constraints):
            constraints[i] = solver.Constraint(-solver.infinity(), b_ub[i])
            for j in range(0, num_vars):
                constraints[i].SetCoefficient(data[j], A_ub[i][j])

        # Equality constraints
        for i in range(num_ineq_constraints, num_ineq_constraints + num_eq_constraints):
            constraints[i] = solver.Constraint(b_eq[i - num_ineq_constraints], b_eq[i - num_ineq_constraints])
            for j in range(0, num_vars):
                constraints[i].SetCoefficient(data[j], A_eq[i - num_ineq_constraints][j])

        # Objective function
        objective = solver.Objective()
        for i in range(0, num_vars):
            objective.SetCoefficient(data[i], c[i])

        # Set up as a minimization problem
        objective.SetMinimization()

        # Solve it
        result_status = solver.Solve()

        solution_set = [data[i].solution_value() for i in range(len(data))]

        print('DAY: {}'.format(day + 1))
        print('======')
        print('SOLUTION FEASIBLE: {}'.format(solver.FEASIBLE))
        print('SOLUTION OPTIMAL: {}'.format(solver.OPTIMAL))
        print('VALUE OF BASKET: {}'.format(np.dot(A_eq[0], solution_set)))
        print('SOLUTION (apples,pears,oranges): {!r}'.format(solution_set[-3:]))
        print('PCT CHANGE (apples,pears,oranges): {!r}\n\n'.format(
            [round(100*(x-y)/y, 2) for x, y in zip(solution_set[-3:], holdings)]))

        # Update holdings for the next day
        holdings = solution_set[-3:]
A single run gives:
DAY: 1
======
SOLUTION FEASIBLE: 1
SOLUTION OPTIMAL: 0
VALUE OF BASKET: 348.0
SOLUTION (apples,pears,oranges): [20.0, 41.0, 37.0]
PCT CHANGE (apples,pears,oranges): [0.0, -4.65, 0.0]
DAY: 2
======
SOLUTION FEASIBLE: 1
SOLUTION OPTIMAL: 0
VALUE OF BASKET: 251.0
SOLUTION (apples,pears,oranges): [21.0, 41.0, 37.0]
PCT CHANGE (apples,pears,oranges): [5.0, 0.0, 0.0]
DAY: 3
======
SOLUTION FEASIBLE: 1
SOLUTION OPTIMAL: 0
VALUE OF BASKET: 213.0
SOLUTION (apples,pears,oranges): [20.0, 41.0, 37.0]
PCT CHANGE (apples,pears,oranges): [-4.76, 0.0, 0.0]
The l0 formulation is also presented:
def fruit_basket_l0_ortools():
    UPPER_BOUND = 1000
    prices = [[2, 3, 5],
              [1, 2, 4],
              [1, 2, 3]]
    holdings = [20, 43, 37]
    values = [348, 251, 213]

    for day in range(len(values)):
        solver = pywraplp.Solver('ILPSolver',
                                 pywraplp.Solver.BOP_INTEGER_PROGRAMMING)
        # solver = pywraplp.Solver('ILPSolver',
        #                          pywraplp.Solver.CLP_LINEAR_PROGRAMMING)

        c = [1, 0, 0, 0]
        price = prices[day]
        value = values[day]

        A_eq = [[0, price[0], price[1], price[2]]]
        b_eq = [value]

        A_ub = [[-1*holdings[0],  1.0,  0,    0],
                [-1*holdings[0], -1.0,  0,    0],
                [-1*holdings[1],  0,    1.0,  0],
                [-1*holdings[1],  0,   -1.0,  0],
                [-1*holdings[2],  0,    0,    1.0],
                [-1*holdings[2],  0,    0,   -1.0]]
        b_ub = [holdings[0], -1*holdings[0],
                holdings[1], -1*holdings[1],
                holdings[2], -1*holdings[2]]

        num_vars = len(c)
        num_ineq_constraints = len(A_ub)
        num_eq_constraints = len(A_eq)

        data = [[]] * num_vars
        data[0] = solver.IntVar(-UPPER_BOUND, UPPER_BOUND, 'e')
        data[1] = solver.IntVar(0, UPPER_BOUND, 'x1')
        data[2] = solver.IntVar(0, UPPER_BOUND, 'x2')
        data[3] = solver.IntVar(0, UPPER_BOUND, 'x3')

        constraints = [0] * (len(A_ub) + len(b_eq))

        # Inequality constraints
        for i in range(0, num_ineq_constraints):
            constraints[i] = solver.Constraint(-solver.infinity(), b_ub[i])
            for j in range(0, num_vars):
                constraints[i].SetCoefficient(data[j], A_ub[i][j])

        # Equality constraints
        for i in range(num_ineq_constraints, num_ineq_constraints + num_eq_constraints):
            constraints[i] = solver.Constraint(int(b_eq[i - num_ineq_constraints]), b_eq[i - num_ineq_constraints])
            for j in range(0, num_vars):
                constraints[i].SetCoefficient(data[j], A_eq[i - num_ineq_constraints][j])

        # Objective function
        objective = solver.Objective()
        for i in range(0, num_vars):
            objective.SetCoefficient(data[i], c[i])

        # Set up as a minimization problem
        objective.SetMinimization()

        # Solve it
        result_status = solver.Solve()

        solution_set = [data[i].solution_value() for i in range(len(data))]

        print('DAY: {}'.format(day + 1))
        print('======')
        print('SOLUTION FEASIBLE: {}'.format(solver.FEASIBLE))
        print('SOLUTION OPTIMAL: {}'.format(solver.OPTIMAL))
        print('VALUE OF BASKET: {}'.format(np.dot(A_eq[0], solution_set)))
        print('SOLUTION (apples,pears,oranges): {!r}'.format(solution_set[-3:]))
        print('PCT CHANGE (apples,pears,oranges): {!r}\n\n'.format(
            [round(100*(x-y)/y, 2) for x, y in zip(solution_set[-3:], holdings)]))

        # Update holdings for the next day
        holdings = solution_set[-3:]
A single run of this gives:
DAY: 1
======
SOLUTION FEASIBLE: 1
SOLUTION OPTIMAL: 0
VALUE OF BASKET: 348.0
SOLUTION (apples,pears,oranges): [33.0, 79.0, 9.0]
PCT CHANGE (apples,pears,oranges): [65.0, 83.72, -75.68]
DAY: 2
======
SOLUTION FEASIBLE: 1
SOLUTION OPTIMAL: 0
VALUE OF BASKET: 251.0
SOLUTION (apples,pears,oranges): [49.0, 83.0, 9.0]
PCT CHANGE (apples,pears,oranges): [48.48, 5.06, 0.0]
DAY: 3
======
SOLUTION FEASIBLE: 1
SOLUTION OPTIMAL: 0
VALUE OF BASKET: 213.0
SOLUTION (apples,pears,oranges): [51.0, 63.0, 12.0]
PCT CHANGE (apples,pears,oranges): [4.08, -24.1, 33.33]
Summary
The l1 formulation gives more sensible results: much lower turnover. The optimality check fails on all runs, however, which is concerning. I included a linear solver too, and that fails the feasibility check somehow; I don't know why. The Google people provide precious little documentation for the ortools lib, and most of it is for the C++ lib. But the l1 formulation may be a solution to your problem, and it may scale. ILP is NP-complete in general, and most likely your problem is too.
Also - does a solution exist on day 2? How do you define % change so that one does exist in your chart above? If I knew, I could recast the inequalities above and we would have the general solution.
You've got a logic problem on integers, not a representation problem. Neural networks are relevant to problems with complex representations (e.g., images with pixels, objects in different shapes and colors, sometimes hidden, etc.), as they build their own set of features (descriptors) and mipmaps. They are also a better match for problems dealing with reals than integers. And last, as they are today, they don't really deal with reasoning and logic, or at best only simple logic like a small succession of if/else or a switch, and we don't really have control over that.
What I see is closer to a cryptographic-ish problem with constraints (10% change, max 100 articles).
Solution for all sets of fruits
There is a way to reach all solutions very quickly. We start by factoring the total into primes, then we find a few solutions through brute force. From there, we can change the set of fruits while keeping the total equal; e.g., with prices = (1, 2, 3) we can exchange 1 orange for 1 apple and 1 pear. This way we can navigate through the solutions without further brute force.
Algorithm(s): you factorize the total into prime numbers, then you split the factors into two or more groups. Let's take 2 groups: let A be one common multiplier, and let B be the other(s). Then you add your fruits to reach the total B.
Examples:
Day 1: Apple = 1, Pears = 2, Oranges = 3, Basket Value = 217
Day 2: Apple = 2, Pears = 3, Oranges = 5, Basket Value = 348
217 factorizes into [7, 31]. We pick 31 as A (the common multiplier), then say 7 = 3*2 + 1 (2 oranges, 0 pears, 1 apple), and you have an answer: 62 oranges, 0 pears, 31 apples. 62 + 31 < 100: valid.
348 factorizes into [2, 2, 3, 29]. You have several ways to group your factors and multiply your fruits inside them. The multiplier can be 29 (or 2*29, etc.); then you pick your fruits to reach 12. Let's say 12 = 2*2 + 3 + 5: you get (2 apples, 1 pear, 1 orange) * 29, but that's more than 100 articles. You can recursively fuse 1 apple and 1 pear into 1 orange until you are below 100 articles, or you can go directly to the solution with the minimum number of articles: (2 oranges, 1 apple) * 29 = (58 oranges, 29 apples). And at last:
-- 87 < 100: valid;
-- the change is (-4 oranges, -2 apples); 6/93 = 6.45% < 10% change: valid.
Code
Remark: no implementation of the 10% variation
Remark: I didn't implement the "fruit exchange" process that allows the "solution navigation"
Run with python -O solution.py to optimize and remove the debug messages.
def prime_factors(n):
    i = 2
    factors = []
    while i * i <= n:
        if n % i:
            i += 1
        else:
            n //= i
            factors.append(i)
    if n > 1:
        factors.append(n)
    return factors

def possibilities(value, prices):
    for i in range(0, value + 1, prices[0]):
        for j in range(0, value + 1 - i, prices[1]):
            k = value - i - j
            if k % prices[2] == 0:
                yield i//prices[0], j//prices[1], k//prices[2]

days = [
    (217, (1, 2, 3)),
    (348, (2, 3, 5)),
    (251, (1, 2, 4)),
    (213, (1, 2, 3)),
]

for day in days:
    total = day[0]
    (priceApple, pricePear, priceOrange) = day[1]
    factors = prime_factors(total)
    if __debug__:
        print(str(total) + " -> " + str(factors))

    # remove a small article to help factorize (odd helper)
    evenHelper = False
    if len(factors) == 1:
        evenHelper = True
        t1 = total - priceApple
        factors = prime_factors(t1)
        if __debug__:
            print(str(total) + " --> " + str(factors))

    # merge factors on the left
    while factors[0] < priceOrange:
        factors = [factors[0] * factors[1]] + factors[2:]
        if __debug__:
            print("merging: " + str(factors))

    # merge factors on the right
    if len(factors) > 2:
        multiplier = 1
        for f in factors[1:]:
            multiplier *= f
        factors = [factors[0]] + [multiplier]

    (smallTotal, multiplier) = factors
    if __debug__:
        print("final factors: " + str(smallTotal) + " (small total), " + str(multiplier) + " (multiplier)")

    # solutions satisfying the article count < 100
    smallMax = 100 / multiplier
    solutions = [o for o in possibilities(smallTotal, day[1]) if sum(o) < smallMax]
    for solution in solutions:
        (a, p, o) = [i * multiplier for i in solution]
        # if we used it, we need to add back the odd helper to reach the actual solution
        if evenHelper:
            a += 1
        print(str(a) + " apple(s), " + str(p) + " pear(s), " + str(o) + " orange(s)")

    # separating solutions
    print()
I timed the program with a total of 10037, prices of (5, 8, 17), and a maximum of 500 articles: it takes about 2 ms (on an i7 6700k). The "solution navigation" process is very simple and shouldn't add significant time.
There might be a heuristic to go from day to day without having to do the factorization + navigation + validation process. I'll think about it.
I know it's a bit late, but I thought this was an interesting problem and that I might as well add my two cents.
My code:
prices = [1, 2, 3]
basketVal = 217
maxFruits = 100
numFruits = len(prices)

## Get the possible baskets
def getPossibleBaskets(maxFruits, numFruits, basketVal, prices):
    possBaskets = []
    for i in range(101):
        for j in range(101):
            for k in range(101):
                if i + j + k > 100:
                    pass
                else:
                    possibleBasketVal = 0
                    for m in range(numFruits):
                        possibleBasketVal += prices[m] * [i, j, k][m]
                        if possibleBasketVal > basketVal:
                            break
                    if possibleBasketVal == basketVal:
                        possBaskets.append([i, j, k])
    return possBaskets

firstDayBaskets = getPossibleBaskets(maxFruits, numFruits, basketVal, prices)

## Compare the baskets for percentage change and filter out the values
while True:
    prices = list(map(int, input("New Prices:\t").split()))
    basketVal = int(input("New Basket Value:\t"))
    maxFruits = int(input("Max Fruits:\t"))
    numFruits = len(prices)
    secondDayBaskets = getPossibleBaskets(maxFruits, numFruits, basketVal, prices)
    possBaskets = []
    for basket in firstDayBaskets:
        for newBasket in secondDayBaskets:
            if newBasket not in possBaskets:
                percentChange = 0
                for n in range(numFruits):
                    percentChange += abs(basket[n] - newBasket[n]) / 100
                if percentChange <= 0.1:  # 10% of the 100-item basket, as a fraction
                    possBaskets.append(newBasket)
    firstDayBaskets = possBaskets
    secondDayBaskets = []
    print(firstDayBaskets)
I guess this could be called a brute force solution, but it definitely works. Every day, it'll print the possible configurations of the basket.

Python Random Selection

I have code which randomly generates either 0 or 9. This code is run 289 times...
import random

track = 0
if track < 35:
    val = random.choice([0, 9])
    if val == 9:
        track += 1
else:
    val = 0
According to this code, once 9 has been generated 35 times, only 0 is generated. So there is a heavy bias at the start, and towards the end the output is mostly 0.
Is there a way to reduce this bias so that the 9's are spread out fairly evenly across the 289 runs?
Thanks for any help in advance
Apparently you want 9 to occur 35 times, and 0 to occur for the remainder - but you want the 9's to be evenly distributed. This is easy to do with a shuffle.
values = [9] * 35 + [0] * (289 - 35)
random.shuffle(values)
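An equivalent approach (my addition, not part of the original answer), if you'd rather pick positions than shuffle: choose 35 distinct indices for the nines with random.sample.
import random

positions = set(random.sample(range(289), 35))
values = [9 if i in positions else 0 for i in range(289)]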
It sounds like you want to add some bias to the numbers that are generated by your script. Accordingly, you'll want to think about how you can use probability to assign a correct bias to the numbers being assigned.
For example, let's say you want to generate a list of 289 integers where there is a maximum of 35 nines. 35 is approximately 12% of 289, and as such, you would assign a probability of .12 to the number 9. From there, you could assign some other (relatively small) probability to the numbers 1 - 8, and some relatively large probability to the number 0.
Walker's Alias Method appears to be able to do what you need for this problem.
General Example (strings A B C or D with probabilities .1 .2 .3 .4):
abcd = dict( A=1, D=4, C=3, B=2 )
# keys can be any immutables: 2d points, colors, atoms ...
wrand = Walkerrandom( abcd.values(), abcd.keys() )
wrand.random() # each call -> "A" "B" "C" or "D"
# fast: 1 randint(), 1 uniform(), table lookup
Specific Example:
numbers = {1: 725, 2: 725, 3: 725, 4: 725, 5: 725,
           6: 725, 7: 725, 8: 725, 9: 12, 0: 3}
wrand = Walkerrandom(numbers.values(), numbers.keys())

# Add looping logic + counting logic to keep track of 9's here
track = 0
i = 0
while i < 289:
    if track < 35:
        val = wrand.random()
        if val == 9:
            track += 1
    else:
        val = 0
    i += 1
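If pulling in the Walkerrandom recipe is overkill, note (my addition) that Python 3.6+ has weighted sampling in the standard library. This draws 289 values with about 35 nines on average, though not exactly 35 each run:
import random

values = random.choices([0, 9], weights=[254, 35], k=289)
print(values.count(9))  # roughly 35 on average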

Metropolis-Hastings accept-reject implementation

I've been reading about the Metropolis-Hastings (MH) algorithm. Theoretically, I understood how the algorithm works. Now, I am trying to implement the MH algorithm using python.
I came across the following notebook. It suits exactly my problem since I want to fit my data by a straight line taking into consideration the measurement errors on my data. I am going to paste the code I am finding difficulties to understand:
# initial m, b
m, b = 2, 0

# step sizes
mstep, bstep = 0.1, 10.

# how many steps?
nsteps = 10000

chain = []
probs = []
naccept = 0

print('Running MH for', nsteps, 'steps')

# First point:
L_old = straight_line_log_likelihood(x, y, sigmay, m, b)
p_old = straight_line_log_prior(m, b)
prob_old = np.exp(L_old + p_old)

for i in range(nsteps):
    # step
    mnew = m + np.random.normal() * mstep
    bnew = b + np.random.normal() * bstep

    # evaluate probabilities
    # prob_new = straight_line_posterior(x, y, sigmay, mnew, bnew)
    L_new = straight_line_log_likelihood(x, y, sigmay, mnew, bnew)
    p_new = straight_line_log_prior(mnew, bnew)
    prob_new = np.exp(L_new + p_new)

    if (prob_new / prob_old > np.random.uniform()):
        # accept
        m = mnew
        b = bnew
        L_old = L_new
        p_old = p_new
        prob_old = prob_new
        naccept += 1
    else:
        # Stay where we are; m, b stay the same, and we append them
        # to the chain below.
        pass

    chain.append((b, m))
    probs.append((L_old, p_old))

print('Acceptance fraction:', naccept / float(nsteps))
The code is simple and easy, but I have difficulties in understanding how the MH is being implemented.
My question is about the chain.append (the third line from the bottom). The author appends m and b to the chain whether they were accepted or rejected. Why? Shouldn't he append only the accepted points?
The following R code demonstrates why it is important to capture the rejected case:
# 20 samples from 0 or 1. 1 has an 80% probability of being chosen.
the.population <- sample(c(0,1), 20, replace = TRUE, prob=c(0.2, 0.8))
# Create a new sample that only catches changes
the.sample <- c(the.population[1])
# Loop though the.population,
# but only copy the.population to the.sample if the value changes
for( i in 2:length(the.population))
{
if(the.population[i] != the.population[i-1])
the.sample <- append(the.sample, the.population[i])
}
When this code runs, the.population gets 20 values, for example:
0 1 1 1 1 1 1 1 1 0 1 1 1 1 0 0 1 1 1 1
The probability of a 1 in this population is 16/20 or 0.8. Exactly the probability we expected...
The sample, on the other hand, which only records changes, looks like this:
0 1 0 1 0 1
The probability of a 1 in the sample is 3/6 or 0.5.
We are trying to build a distribution; rejecting the new value means that the old value is more likely than the new one. That needs to be captured so that our distribution is correct.
From a quick reading of the algorithm description: When a candidate is rejected, it still counts as a step, but the value is the same as the old step. I.e. b, m are appended either way, but they only get updated (to bnew, mnew) in the case where the candidate is accepted.
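To make the pattern concrete, here is a minimal sketch of my own (targeting a standard normal with a symmetric proposal, not the notebook's straight-line model), where the current state is appended on every iteration:
import math
import random

x, chain = 0.0, []
for _ in range(10000):
    prop = x + random.gauss(0, 1)
    # Acceptance ratio p(prop)/p(x) for a standard normal target
    if math.exp(-(prop**2 - x**2) / 2) > random.random():
        x = prop       # accept: move to the proposal
    chain.append(x)    # append the current state whether or not we moved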

Distributing integers using weights? How to calculate?

I need to distribute a value based on some weights. For example, if my weights are 1 and 2, then I would expect the column weighted 2 to have twice the value of the column weighted 1.
I have some Python code to demonstrate what I'm trying to do, and the problem:
def distribute(total, distribution):
    distributed_total = []
    for weight in distribution:
        weight = float(weight)
        p = weight / sum(distribution)
        weighted_value = round(p * total)
        distributed_total.append(weighted_value)
    return distributed_total

for x in range(100):
    d = distribute(x, (1, 2, 3))
    if x != sum(d):
        print(x, sum(d), d)
There are many cases shown by the code above where distributing a value results in the sum of the distribution being different than the original value. For example, distributing 3 with weights of (1,2,3) results in (1,1,2), which totals 4.
What is the simplest way to fix this distribution algorithm?
UPDATE:
I expect the distributed values to be integer values. It doesn't matter exactly how the integers are distributed as long as they total to the correct value, and they are "as close as possible" to the correct distribution.
(By correct distribution I mean the non-integer distribution, and I haven't fully defined what I mean by "as close as possible." There are perhaps several valid outputs, so long as they total the original value.)
Distribute the first share as expected. Now you have a simpler problem, with one fewer participant and a reduced amount available for distribution. Repeat until there are no more participants.
>>> def distribute2(available, weights):
...     distributed_amounts = []
...     total_weights = sum(weights)
...     for weight in weights:
...         weight = float(weight)
...         p = weight / total_weights
...         distributed_amount = round(p * available)
...         distributed_amounts.append(distributed_amount)
...         total_weights -= weight
...         available -= distributed_amount
...     return distributed_amounts
...
>>> for x in range(100):
...     d = distribute2(x, (1,2,3))
...     if x != sum(d):
...         print(x, sum(d), d)
...
>>>
You have to distribute the rounding errors somehow:
Actual:
|--|----|------|        (true block widths 0.5, 1.0, 1.5)
Pixel grid:
|----|----|----|        (whole pixels)
The simplest would be to round each true value to the nearest pixel, for both the start and end position. So, when you round up block A 0.5 to 1, you also change the start position of the block B from 0.5 to 1. This decreases the size of B by 0.5 (in essence, "stealing" the size from it). Of course, this leads you to having B steal size from C, ultimately resulting in having:
|----|----|----|        (each block one pixel wide)
but how else did you expect to divide 3 into 3 integral parts?
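One way to implement that boundary-stealing idea (a sketch of mine, not code from the original answer) is to round the cumulative boundaries and take differences, so the parts always sum to the total:
def distribute_by_boundaries(total, weights):
    s = float(sum(weights))
    prev, out = 0, []
    for i in range(len(weights)):
        bound = int(total * sum(weights[:i + 1]) / s + 0.5)  # rounded cumulative position
        out.append(bound - prev)
        prev = bound
    return out

print(distribute_by_boundaries(3, (1, 2, 3)))  # [1, 1, 1]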
The easiest approach is to calculate the normalization scale, which is the factor by which the sum of the weights exceeds the total you are aiming for, then divide each item in your weights by that scale.
def distribute(total, weights):
    scale = float(sum(weights)) / total
    return [x / scale for x in weights]
If you expect distributing 3 with weights of (1,2,3) to be equal to (0.5, 1, 1.5), then the rounding is your problem:
weighted_value = round(p*total)
You want:
weighted_value = p*total
EDIT: Solution to return integer distribution
from math import modf

def distribute(total, distribution):
    leftover = 0.0
    distributed_total = []
    distribution_sum = sum(distribution)
    for weight in distribution:
        weight = float(weight)
        leftover, weighted_value = modf(weight * total / distribution_sum + leftover)
        distributed_total.append(weighted_value)
    distributed_total[-1] = round(distributed_total[-1] + leftover)  # mitigate round-off errors
    return distributed_total
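For example (my worked trace of the carry), distributing 3 with weights (1, 2, 3) passes the 0.5 fractional leftover forward at each step:
print(distribute(3, (1, 2, 3)))
# modf(0.5 + 0.0) -> leftover 0.5, value 0.0
# modf(1.0 + 0.5) -> leftover 0.5, value 1.0
# modf(1.5 + 0.5) -> leftover 0.0, value 2.0
# last element rounded: result [0.0, 1.0, 2], which sums to 3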
