Distributing integers using weights? How to calculate? - python

I need to distribute a value based on some weights. For example, if my weights are 1 and 2, then I would expect the column weighted as 2 to have twice the value as the column weighted 1.
I have some Python code to demonstrate what I'm trying to do, and the problem:
def distribute(total, distribution):
    distributed_total = []
    for weight in distribution:
        weight = float(weight)
        p = weight / sum(distribution)
        weighted_value = round(p * total)
        distributed_total.append(weighted_value)
    return distributed_total

for x in xrange(100):
    d = distribute(x, (1, 2, 3))
    if x != sum(d):
        print x, sum(d), d
The code above prints the many cases where the sum of the distributed values differs from the original value. For example, distributing 3 with weights of (1, 2, 3) results in (1, 1, 2), which totals 4.
What is the simplest way to fix this distribution algorithm?
UPDATE:
I expect the distributed values to be integer values. It doesn't matter exactly how the integers are distributed as long as they total to the correct value, and they are "as close as possible" to the correct distribution.
(By correct distribution I mean the non-integer distribution, and I haven't fully defined what I mean by "as close as possible." There are perhaps several valid outputs, so long as they total the original value.)

Distribute the first share as expected. Now you have a simpler problem, with one fewer participants, and a reduced amount available for distribution. Repeat until there are no more participants.
>>> def distribute2(available, weights):
...     distributed_amounts = []
...     total_weights = sum(weights)
...     for weight in weights:
...         weight = float(weight)
...         p = weight / total_weights
...         distributed_amount = round(p * available)
...         distributed_amounts.append(distributed_amount)
...         total_weights -= weight
...         available -= distributed_amount
...     return distributed_amounts
...
>>> for x in xrange(100):
...     d = distribute2(x, (1, 2, 3))
...     if x != sum(d):
...         print x, sum(d), d
...
>>>

You have to distribute the rounding errors somehow:
Actual (true sizes A = 0.5, B = 1.0, C = 1.5):
| A |    B    |       C       |
Pixel grid (block boundaries may only fall on whole pixels):
|   1 px    |   1 px    |   1 px    |
The simplest approach would be to round each true boundary to the nearest pixel, for both the start and end position of every block. So, when you round block A up from 0.5 to 1, you also move the start position of block B from 0.5 to 1. This decreases the size of B by 0.5 (in essence, "stealing" that size from it). Of course, this leads to B stealing size from C in the same way, ultimately resulting in:
| A = 1 | B = 1 | C = 1 |
but how else did you expect to divide 3 into 3 integral parts?
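A small sketch of that boundary-rounding idea, assuming half-up rounding of the block edges (this is an illustration, not code from the question):

import math

def distribute_by_boundaries(total, weights):
    weight_sum = float(sum(weights))
    edges = [0]
    acc = 0.0
    for w in weights:
        acc += w * total / weight_sum             # exact (fractional) right edge of this block
        edges.append(int(math.floor(acc + 0.5)))  # snap the edge to the pixel grid (half-up)
    # each share is the gap between consecutive rounded edges, so the total is always preserved
    return [edges[i + 1] - edges[i] for i in range(len(weights))]

print(distribute_by_boundaries(3, (1, 2, 3)))  # [1, 1, 1]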

The easiest approach is to calculate the normalization scale, which is the factor by which the sum of the weights exceeds the total you are aiming for, then divide each item in your weights by that scale.
def distribute(total, weights):
    scale = float(sum(weights)) / total
    return [x / scale for x in weights]
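For example (note that this returns the exact fractional shares, not integers):

>>> distribute(3, (1, 2, 3))
[0.5, 1.0, 1.5]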

If you expect distributing 3 with weights of (1,2,3) to be equal to (0.5, 1, 1.5), then the rounding is your problem:
weighted_value = round(p*total)
You want:
weighted_value = p*total
EDIT: Solution to return integer distribution
from math import modf

def distribute(total, distribution):
    leftover = 0.0
    distributed_total = []
    distribution_sum = sum(distribution)
    for weight in distribution:
        weight = float(weight)
        leftover, weighted_value = modf(weight * total / distribution_sum + leftover)
        distributed_total.append(weighted_value)
    distributed_total[-1] = round(distributed_total[-1] + leftover)  # mitigate round-off errors
    return distributed_total
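For instance, with the example from the question (under Python 2, where round returns a float):

>>> distribute(3, (1, 2, 3))
[0.0, 1.0, 2.0]
>>> sum(distribute(3, (1, 2, 3)))
3.0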

Related

Python Random Values with a Given Density

Say I want to build a maze with a certain probability of an obstacle at each position. This probability is determined by a density value ranging from 0 to 10, with 0 meaning "no chance", and 10 meaning "certain".
Does this Python code do what I want?
import random

obstacle_density = 10

if random.randint(0, 9) < obstacle_density:
    print("There is an obstacle")
I've tried various combinations of upper and lower bounds and inequalities, and this seems to do the job, but I'm suspicious. For one thing, there are 11 possible values of obstacle_density but only 10 possible outcomes of random.randint(0, 9).
Not super sure about your solution. It seems like it would work, though.
Here's how I would approach it, even if it is a bit redundant - I'd start with a table just for my own reference:
density | probability of obstacle
---------------------------------
0 | 0%
1 | 10%
2 | 20%
3 | 30%
4 | 40%
5 | 50%
6 | 60%
7 | 70%
8 | 80%
9 | 90%
10 | 100%
This seems to add up. I present two versions of a function which returns True or False depending on the density. In the first version, I use the density to create the associated weights to be used with random.choices (the total weight in this case would be 100). For example, if density = 3, then weights = [30, 70] - 30% to be True, 70% to be False.
def get_obstacle_state_version_1(density):
    from random import choices
    assert isinstance(density, int)
    assert density in range(0, 11)  # 0 - 10 inclusive
    true_weight = density * 10
    false_weight = 100 - true_weight
    weights = [true_weight, false_weight]
    return choices([True, False], weights=weights, k=1)[0]
Here's the second version, in which I use random.choice rather than random.choices. The latter always returns a list of samples, even if the sample size k is 1.
Here, the idea is the same, but basically the density influences the number of Trues and Falses that appear in the population to be sampled. For example, if density = 3, then random.choice would pick one element from a list of 30 Trues, and 70 Falses with a uniform distribution.
def get_obstacle_state_version_2(density):
    from random import choice
    assert isinstance(density, int)
    assert density in range(0, 11)  # 0 - 10 inclusive
    true_count = density * 10
    false_count = 100 - true_count
    return choice([True] * true_count + [False] * false_count)
You should loop over the maze and assign a probability at each site, something like this:
probability = random.randint(0, 10) / 10
I have no idea what you mean by obstacle_density, so I'm not going to go there.
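For what it's worth, a rough sketch of that per-site loop might look like this (the grid size is made up, and Python 3 division is assumed):

import random

width, height = 8, 8
site_probabilities = [[random.randint(0, 10) / 10 for _ in range(width)]
                      for _ in range(height)]
# site_probabilities[y][x] is the probability assigned to that site; an obstacle
# could then be placed wherever random.random() < site_probabilities[y][x]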

(Python) Markov, Chebyshev, Chernoff upper bound functions

I'm stuck with one task on my learning path.
For the binomial distribution X ~ B(n, p) with mean μ = np and variance σ² = np(1−p), we would like to upper bound the probability P(X ≥ c·μ) for c ≥ 1.
Three bounds are introduced:
Markov:    P(X ≥ a·μ) ≤ 1/a
Chebyshev: P(X ≥ μ + k·σ) ≤ 1/k²
Chernoff:  P(X ≥ (1+d)·μ) ≤ exp(−d²·μ/(2+d))
The task is to write three functions, one for each inequality. They must take n, p, and c as inputs and return the upper bounds for P(X ≥ c·np) given by the Markov, Chebyshev, and Chernoff inequalities above as outputs.
Here is an example of the expected input and output:
Code:
print Markov(100.,0.2,1.5)
print Chebyshev(100.,0.2,1.5)
print Chernoff(100.,0.2,1.5)
Output
0.6666666666666666
0.16
0.1353352832366127
I'm completely stuck. I just can't figure out how to plug in all that math into functions (or how to think algorithmically here). If someone could help me out, that would be of great help!
P.S. No libraries are allowed by the task conditions, except math.exp.
Ok, let's look at what's given:
Input and derived values:
n = 100
p = 0.2
c = 1.5
m = n*p = 100 * 0.2 = 20
s2 = n*p*(1-p) = 16
s = sqrt(s2) = sqrt(16) = 4
You have multiple inequalities of the form P(X>=a*m) and you need to provide bounds for the term P(X>=c*m), so you need to think how a relates to c in all cases.
Markov inequality: P(X>=a*m) <= 1/a
You're asked to implement Markov(n,p,c) that will return the upper bound for P(X>=c*m). Since from
P(X>=a*m)
= P(X>=c*m)
it's clear that a == c, you get 1/a = 1/c. Well, that's just
def Markov(n, p, c):
    return 1.0 / c

>>> Markov(100, 0.2, 1.5)
0.6666666666666666
That was easy, wasn't it?
Chernoff inequality states that P(X>=(1+d)*m) <= exp(-d**2/(2+d)*m)
First, let's verify that if
P(X>=(1+d)*m)
= P(X>=c*m)
then
1+d = c
d = c-1
This gives us everything we need to calculate the upper bound:

import math

def Chernoff(n, p, c):
    d = c - 1
    m = n * p
    return math.exp(-d**2 / (2 + d) * m)

>>> Chernoff(100, 0.2, 1.5)
0.1353352832366127
Chebyshev inequality bounds P(X>=m+k*s) by 1/k**2
So again, if
P(X>=c*m)
= P(X>=m+k*s)
then
c*m = m+k*s
m*(c-1) = k*s
k = m*(c-1)/s
Then it's straightforward to implement:

import math

def Chebyshev(n, p, c):
    m = n * p
    s = math.sqrt(n * p * (1 - p))
    k = m * (c - 1) / s
    return 1 / k**2

>>> Chebyshev(100, 0.2, 1.5)
0.16

Roulette wheel selection with positive and negative fitness values for minimization

I'm doing a genetic algorithm where each individual generates 3 new offspring. The new individuals are evaluated using the fitness function, which may return negative or positive values. What is the correct approach to choosing between the offspring using roulette wheel selection if I want to minimize?
Some possible values of the fitness function are: fitness_offspring_1 = -98.74; fitness_offspring_2 = -10.1; fitness_offspring_3 = 100.31
I'm working on Python but I only need the idea so I can implement it by myself.
Roulette wheel selection simply assigns each individual a probability proportional to its fitness and then randomly selects from that distribution. Fit individuals get a better chance of being selected, while less-fit individuals get lower chances.
You can easily adapt this to your code by using the offspring list instead of the individuals.
Let's start with a simple pseudo-code-ish implementation in Python, which you can modify to your needs:
fitness_sum = sum([ind.fitness for ind in individuals])

probability_offset = 0
for ind in individuals:
    # ind.probability is the cumulative probability up to and including this individual
    ind.probability = probability_offset + (ind.fitness / fitness_sum)
    probability_offset = ind.probability

r = get_random_float()
selected_ind = individuals[-1]  # fallback; the last cumulative probability is 1.0
for ind in individuals:
    if ind.probability > r:
        selected_ind = ind
        break
Now, the code above (by the nature of the roulette wheel) assumes all fitness values are positive. So in your case we need to normalize them. You can simply shift all values by the absolute value of the smallest offspring's fitness. But that would make that offspring's probability 0, so you could add a small bias to every value to give it a slight chance as well, as sketched below.
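A minimal sketch of that shift-and-bias step, assuming the same hypothetical individuals objects as above (epsilon is an arbitrary small bias):

epsilon = 1e-6
min_fitness = min(ind.fitness for ind in individuals)
for ind in individuals:
    # shift so the worst individual sits just above zero instead of at (or below) it
    ind.fitness = ind.fitness - min_fitness + epsilon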
Let's see how the wheel itself works with simple (already positive) values, say [1, 5, 14]:

fitness_sum = 20
probability_offset = 0

# iterating first for loop:
individual[0].probability => 0    + 1/20  = 0.05
individual[1].probability => 0.05 + 5/20  = 0.3
individual[2].probability => 0.3  + 14/20 = 1.0

# We created the wheel of cumulative probabilities:
# 0    - 0.05 means we select individual 0
# 0.05 - 0.3  means we select individual 1
# 0.3  - 1.0  means we select individual 2

# say our random number r = 0.4
r = 0.4

# iterating second for loop:
# 0.05 > 0.4 = false
# 0.3  > 0.4 = false
# 1.0  > 0.4 = true
selected_ind = individual_2
break
I'm sure there are much better pseudo-codes or implementations in python you can search for. Just wanted to give you an idea.
This is how I implemented it in JavaScript, to give you a general idea:
var totalFitness = 0;
var minimalFitness = 0;
for(var genome in this.population){
  var score = this.population[genome].score;
  minimalFitness = score < minimalFitness ? score : minimalFitness;
  totalFitness += score;
}

minimalFitness = Math.abs(minimalFitness);
totalFitness += minimalFitness * this.popsize;

var random = Math.random() * totalFitness;
var value = 0;

for(var genome in this.population){
  genome = this.population[genome];
  value += genome.score + minimalFitness;
  if(random < value) return genome;
}

// if all scores equal, return random genome
return this.population[Math.floor(Math.random() * this.population.length)];
However, just as @umutto has mentioned, this gives the genome with the lowest score no chance of being selected. So you could artificially add a little bit of fitness to each genome, so that even the lowest individual has a chance. Note: I didn't implement that small bias @umutto mentioned in the above code.
For using Roulette wheel selection for minimization, you have to do two pre-processing steps:
You have to get rid of the negative fitness values, because the fitness value will represent the selection probability, which can't be negative. The easiest way to do this is to subtract the lowest (negative) value from all fitness values. The lowest fitness value is then zero.
For minimizing, you have to revert the fitness values. This is done by setting the fitness values to max fitness - fitness. The individual with the best fitness has now the highest fitness value.
The transformed fitness values are then fed into the normal roulette wheel selector, which selects the individual with the highest fitness. But essentially you are doing a minimization.
The Java GA, Jenetics, is doing minimization in this way.
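A rough Python sketch of those two steps, assuming the raw scores are held in a plain list (the function name and the small epsilon bias are illustrative additions):

def to_selection_weights(fitnesses, epsilon=1e-6):
    shifted = [f - min(fitnesses) for f in fitnesses]  # step 1: the lowest value becomes 0
    inverted = [max(shifted) - s for s in shifted]     # step 2: the best (lowest) raw fitness gets the largest weight
    return [w + epsilon for w in inverted]             # tiny bias so no weight is exactly 0

weights = to_selection_weights([-98.74, -10.1, 100.31])
# the largest weight now belongs to -98.74, the best individual for minimization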

Standard deviation of combinations of dices

I am trying to find the standard deviation of a sequence of numbers derived from combinations of 30 dice that sum up to 120. I am very new to Python, and this code makes the console freeze because the number of combinations is enormous; I am not sure how to fit the computation into a smaller, more efficient function. What I did is:
found all possible combinations of 30 dice;
filtered the combinations that sum up to 120;
multiplied together the items of each combination in the result list;
tried extracting the standard deviation.
Here is the code:
import itertools
import numpy

dice = [1, 2, 3, 4, 5, 6]
subset = itertools.product(dice, repeat=30)

result = []
for x in subset:
    if sum(x) == 120:
        result.append(x)

my_result = numpy.product(result, axis=1).tolist()
std = numpy.std(my_result)
print(std)
Note that D(X) = E(X^2) - E(X)^2, so you can solve this problem analytically with the following recurrences.
f[i][N] = sum(k   * f[i-1][N-k])   for 1 <= k <= 6
g[i][N] = sum(k^2 * g[i-1][N-k])   for 1 <= k <= 6
h[i][N] = sum(      h[i-1][N-k])   for 1 <= k <= 6

f[1][k] = k     for 1 <= k <= 6
g[1][k] = k^2   for 1 <= k <= 6
h[1][k] = 1     for 1 <= k <= 6

Here h[i][N] counts the ordered rolls of i dice that sum to N, f[i][N] is the sum of the face products over those rolls, and g[i][N] is the sum of the squared products, so E(X) = f/h and E(X^2) = g/h.
Sample implementation:
import numpy as np

Nmax = 120
nmax = 30
min_value = 1
max_value = 6

# the intermediate results will be really huge; to keep them exact we have to use Python big ints
f = np.zeros((nmax+1, Nmax+1), dtype='object')
g = np.zeros((nmax+1, Nmax+1), dtype='object')
h = np.zeros((nmax+1, Nmax+1), dtype='object')

for i in range(min_value, max_value+1):
    f[1][i] = i
    g[1][i] = i**2
    h[1][i] = 1

for i in range(2, nmax+1):
    for N in range(1, Nmax+1):
        f[i][N] = 0
        g[i][N] = 0
        h[i][N] = 0
        for k in range(min_value, max_value+1):
            if N - k >= 1:  # guard against negative indices wrapping around the array
                f[i][N] += k * f[i-1][N-k]
                g[i][N] += (k**2) * g[i-1][N-k]
                h[i][N] += h[i-1][N-k]

result = np.sqrt(float(g[nmax][Nmax]) / h[nmax][Nmax] - (float(f[nmax][Nmax]) / h[nmax][Nmax]) ** 2)
# result = 32128174994365296.0
You ask for a result over an unfiltered set of 6^30 ≈ 2·10^23 combinations, which is impossible to handle as such.
There are two possibilities, which can be combined:
1. Include more thinking to pre-treat the problem, e.g. work out how to sample only those combinations with sum 120.
2. Do a Monte Carlo simulation instead, i.e. don't sample all combinations, but only a random couple of thousand, to obtain a representative sample from which the std can be determined sufficiently accurately.
Now, I only apply (2), giving the brute force code:
import random

N = 30      # number of dice
M = 100000  # number of samples
S = 120     # required sum

result = [[random.randint(1, 6) for _ in xrange(N)] for _ in xrange(M)]
result = [s for s in result if sum(s) == S]
Now, that result should be comparable to your result before using numpy.product ... that part I couldn't follow, though...
OK, if you are after the standard deviation of the product of the 30 dice, that is what your code does. Then I need 1,000,000 samples to get roughly reproducible values for the std (1 digit) - that takes my PC about 20 seconds, still considerably less than 1 million years :-D.
Is a number like 3.22·10^16 what you are looking for?
Edit after comments:
Well, sampling the frequency of numbers instead gives only 6 independent variables - even 4 actually, by substituting in the constraints (sum = 120, total number = 30). My current code looks like this:
import itertools
import numpy

def p2(b, s):
    return 2**b * 3**s[0] * 4**s[1] * 5**s[2] * 6**s[3]

hits = range(31)
subset = itertools.product(hits, repeat=4)  # only 3,4,5,6 frequencies

product = []
permutations = []
for s in subset:
    b = 90 - (2*s[0] + 3*s[1] + 4*s[2] + 5*s[3])  # 2 frequency
    a = 30 - (b + sum(s))                         # 1 frequency
    if 0 <= b <= 30 and 0 <= a <= 30:
        product.append(p2(b, s))
        permutations.append(1)  # TODO: Replace 1 with possible permutations

print numpy.std(product)  # TODO: calculate std manually, considering permutations
This computes in about 1 second, but the confusing part is that I get as a result 1.28737023733e+17. Either my previous approaches or this one has a bug - or both.
Sorry - not that easy: the samples do not all have the same probability, and that is the problem here. Each frequency vector corresponds to a different number of ordered dice rolls, which gives it a weight that has to be considered before taking the std-deviation. I have only drafted that in the code above (see the TODOs); a sketch of how it could be completed follows.
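Here is a sketch of how that weighting could be completed - the multinomial helper and variable names are illustrative additions, not part of the original draft:

import itertools
from math import factorial, sqrt

def multinomial(counts):
    # number of distinct ordered rolls that share this frequency vector
    n = factorial(sum(counts))
    for c in counts:
        n //= factorial(c)
    return n

w_sum = 0   # total number of ordered rolls with sum 120
p_sum = 0   # weighted sum of products
sq_sum = 0  # weighted sum of squared products
for s in itertools.product(range(31), repeat=4):  # frequencies of faces 3, 4, 5, 6
    b = 90 - (2*s[0] + 3*s[1] + 4*s[2] + 5*s[3])  # frequency of face 2
    a = 30 - (b + sum(s))                         # frequency of face 1
    if 0 <= a <= 30 and 0 <= b <= 30:
        prod = 2**b * 3**s[0] * 4**s[1] * 5**s[2] * 6**s[3]
        w = multinomial((a, b) + s)
        w_sum += w
        p_sum += w * prod
        sq_sum += w * prod * prod

mean = float(p_sum) / w_sum
print(sqrt(float(sq_sum) / w_sum - mean**2))  # should agree with the exact DP result above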

Metropolis-Hastings accept-reject implementation

I've been reading about the Metropolis-Hastings (MH) algorithm. Theoretically, I understood how the algorithm works. Now, I am trying to implement the MH algorithm using python.
I came across the following notebook. It suits my problem exactly, since I want to fit my data with a straight line while taking into account the measurement errors on my data. I am going to paste the code I am having difficulty understanding:
# initial m, b
m, b = 2, 0

# step sizes
mstep, bstep = 0.1, 10.

# how many steps?
nsteps = 10000

chain = []
probs = []
naccept = 0

print 'Running MH for', nsteps, 'steps'

# First point:
L_old = straight_line_log_likelihood(x, y, sigmay, m, b)
p_old = straight_line_log_prior(m, b)
prob_old = np.exp(L_old + p_old)

for i in range(nsteps):
    # step
    mnew = m + np.random.normal() * mstep
    bnew = b + np.random.normal() * bstep

    # evaluate probabilities
    # prob_new = straight_line_posterior(x, y, sigmay, mnew, bnew)
    L_new = straight_line_log_likelihood(x, y, sigmay, mnew, bnew)
    p_new = straight_line_log_prior(mnew, bnew)
    prob_new = np.exp(L_new + p_new)

    if (prob_new / prob_old > np.random.uniform()):
        # accept
        m = mnew
        b = bnew
        L_old = L_new
        p_old = p_new
        prob_old = prob_new
        naccept += 1
    else:
        # Stay where we are; m, b stay the same, and we append them
        # to the chain below.
        pass

    chain.append((b, m))
    probs.append((L_old, p_old))

print 'Acceptance fraction:', naccept/float(nsteps)
The code is simple and easy, but I have difficulty understanding how MH is being implemented.
My question is in the chain.append (the third line from the bottom). The author is appending m and b whether they were accepted or rejected. Why? Shouldn't he append only the accepted points?
The following R code demonstrates why it is important to capture the rejected case:
# 20 samples from 0 or 1. 1 has an 80% probability of being chosen.
the.population <- sample(c(0,1), 20, replace = TRUE, prob=c(0.2, 0.8))

# Create a new sample that only catches changes
the.sample <- c(the.population[1])

# Loop through the.population,
# but only copy the.population to the.sample if the value changes
for( i in 2:length(the.population))
{
  if(the.population[i] != the.population[i-1])
    the.sample <- append(the.sample, the.population[i])
}
When this code runs, the.population gets 20 values, for example:
0 1 1 1 1 1 1 1 1 0 1 1 1 1 0 0 1 1 1 1
The probability of a 1 in this population is 16/20 or 0.8. Exactly the probability we expected...
The sample, on the other hand, which only records changes, looks like this:
0 1 0 1 0 1
The probability of a 1 in the sample is 3/6 or 0.5.
We are trying to build a distribution; rejecting the new value means that the old value is more likely than the new one. That needs to be captured so our distribution is correct.
From a quick reading of the algorithm description: When a candidate is rejected, it still counts as a step, but the value is the same as the old step. I.e. b, m are appended either way, but they only get updated (to bnew, mnew) in the case where the candidate is accepted.
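To illustrate just that bookkeeping, here is a minimal, generic Metropolis sketch with a symmetric Gaussian proposal (the log_prob argument and all names are made up for illustration; this is not the notebook's code):

import numpy as np

def mh_chain(log_prob, x0, step, nsteps):
    x, lp = x0, log_prob(x0)
    chain = []
    for _ in range(nsteps):
        x_new = x + np.random.normal() * step          # propose a symmetric step
        lp_new = log_prob(x_new)
        if np.log(np.random.uniform()) < lp_new - lp:  # accept with probability min(1, p_new/p_old)
            x, lp = x_new, lp_new
        chain.append(x)                                # append the current state either way
    return chain

# e.g. sampling a standard normal (log density up to a constant):
samples = mh_chain(lambda x: -0.5 * x**2, x0=0.0, step=1.0, nsteps=10000)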
