Find the maximum product sales with planning schedules - python

I'm working on what I think is a dynamic programming problem, though I'm not quite sure it is one, since the moving average M depends on the previous M. There is no need to worry about efficiency. The problem is to sell a product over T time periods and maximize the total actual sale amount. There are N products in total, and I plan to sell n_0, n_1, ..., n_{T-1} of them over the successive periods, with sum(n_i) = N.
In other words, the question is to find the optimal schedule n_0, n_1, ..., n_{T-1} with sum(n_i) = N that maximizes sum(S_i).
The actual sale amount S_i depends on the current moving average M and the current n_i.
Assume that α=0.001 and π=0.5
Initialize M=0. Then for i=0,1,…,T−1
Compute the new moving average M_i = ⌈0.5∗(M_{i-1} + n_i)⌉ (with M_{-1} = 0).
At time i we sell S_i = ⌈(1 − α∗M_i^π)∗n_i⌉ products.
Continue this process until the last time period. For example, if the n_i are already fixed for every period, the schedule plays out as follows:
import math
import numpy as np

M = 0
T = 4
N = 10000
alpha = 1e-3
pi = 0.5
S = np.zeros(T, dtype='i')
n = np.array([5000, 1000, 2000, 2000])
print(n)
total = 0
for i in range(T):
    M = math.ceil(0.5*(M + n[i]))
    S[i] = math.ceil((1 - alpha*M**pi)*n[i])
    total += S[i]
    print('at time %d, M = %d and we sell %d products' % (i, M, S[i]))
print('total sold =', total)
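Working through the arithmetic by hand, this run gives M = 2500, 1750, 1875, 1938 and sales of 4750, 959, 1914 and 1912 over the four periods, so 9535 of the 10000 products end up actually sold.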
My idea is to track the state by the time period t, the number of products left n, and the moving average m, and to store the best achievable sales in a high-dimensional array indexed by (t, n, m). I think the moving average is bounded by [0, N], since it can never exceed the total number of products. I'm still confused about how to program it. Could someone suggest how to fix the problems in my code? Thank you very much.
Below is my rough attempt, but the output looks strange.
def DPtry(N,T,alpha,pi,S):
    schedule = np.zeros(T)
    M = 0
    for n in range(0,N+1):
        for m in range(0,n+1):
            S[T-1,n,m] = math.ceil((1 - alpha*m**pi)*n)
    for k in range(1,T):
        t = T - k - 1
        print("t = ",t)
        for n in range(0,N+1):
            for m in range(0,n+1):
                best = -1
                for plan in range(0,n+1):
                    salenow = math.ceil((1 - alpha*m**pi)*plan)
                    M = math.ceil(0.5*(m + plan))
                    salelater = S[t+1,n-plan,M]
                    candidate = salenow + salelater
                    if candidate > best:
                        best = candidate
                S[t,n,m] = best
    print(S[0,N,0])
N = 100
T = 5
pi = .5
alpha = 1e-3
S = np.zeros((T,N+1,N+1))
DPtry(N,T,alpha,pi,S)
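For what it's worth, here is a top-down sketch of the same idea that stays consistent with the simulation above: it updates the moving average first and computes the period's sale from the updated value (the posted DPtry computes salenow from the pre-update m), and it assumes every remaining product must be scheduled by the last period. Treat it as a starting point, not a definitive fix:

import math
from functools import lru_cache

alpha, pi = 1e-3, 0.5

def best_total(N, T):
    @lru_cache(maxsize=None)
    def f(t, left, m):
        # best achievable sales from period t on, with `left` products unsold
        # and current moving average `m`
        if t == T - 1:
            plans = [left]          # last period: schedule everything that is left
        else:
            plans = range(left + 1)
        best = 0
        for plan in plans:
            new_m = math.ceil(0.5 * (m + plan))
            sale = math.ceil((1 - alpha * new_m ** pi) * plan)
            rest = f(t + 1, left - plan, new_m) if t < T - 1 else 0
            best = max(best, sale + rest)
        return best
    return f(0, N, 0)

print(best_total(100, 5))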

Related

Probabilities with RPG dice go wrong for high values

I made this short piece of code to calculate the chance of success when rolling dice, and it worked very well... but not for big numbers. See the code; I'll explain better below.
def calc_dados(f_sucessos = 1, faces = 6, n_dados = 1):
    p_max = ((f_sucessos/faces)**n_dados) # chance that every die succeeds
    fator = 1
    p_meio = 0
    for i in range(n_dados-1):
        p_meio += (((f_sucessos/faces)**(n_dados-fator) * ((faces-f_sucessos)/faces)**(n_dados-(n_dados-fator))) * n_dados)
        fator += 1
    p = p_max + p_meio
    return p*100
So, OK, it works - so why not look at how the chance improves as a function of adding dice? The more dice, the better the chance. So I made this small table with pandas:
import pandas as pd

f_sucessos = 1 # how many faces count as success
faces = 2 # number of faces on the die
n_dados = 10 # number of dice rolled
suc_list = []
for i in range(0,n_dados): suc_list.append(f_sucessos)
fac_list = []
for i in range(0,n_dados): fac_list.append(faces)
cha_list = []
for i in range(0,n_dados): cha_list.append(calc_dados(f_sucessos, faces, i+1))
df = pd.DataFrame(
    {
        "n_dados" : range(1,n_dados+1),
        "faces" : fac_list,
        "sucessos" : suc_list,
        "chance" : cha_list
    }
)
df
The results were very strange... So I built a coin-probability table by brute force and tested my code treating the coin as a 2-faced die. The correct table is this:
[image: table of correct brute-force results]
But if you use my code to create this table, the result is this:
[image: table of the results of my code]
Can anybody help me understand why, at a certain point, the probabilities fall when they should keep rising? For example: the chance of at least 1 'head' in 4 coins should be 93.75%, but my code says it is 81.25%...
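(For reference, the complement rule confirms that number: P(at least one head in 4 flips) = 1 - (1/2)^4 = 15/16 = 93.75%.)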
To be honest, I don't get how exactly 'calc_dados' calculates the probability of a success when rolling dice.
So instead, I implemented maybe a more naive approach:
First, we calculate the total of possible outcomes: outcomes_total = faces ** n_dados
Second, we calculate the successful outcomes: outcomes_success
At last: p = outcomes_success / outcomes_total
I'm going to add a mathematical proof behind my version of the function a bit later:)
from math import comb

def calc_dados(f_sucessos=1, faces=6, n_dados=1):
    assert f_sucessos <= faces
    outcomes_total = faces ** n_dados
    outcomes_success = 0
    f_fail = faces - f_sucessos
    for i in range(1, n_dados + 1):
        one_permutation = (f_sucessos ** i) * (f_fail ** (n_dados - i))
        n_permutations = comb(n_dados, i)
        outcomes_success += one_permutation * n_permutations
    p = outcomes_success / outcomes_total
    return p * 100
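Assuming the coin case from the question (1 success face on a 2-sided die), this version reproduces the hand-computed values:

print(calc_dados(1, 2, 4))   # 93.75
print(calc_dados(1, 2, 10))  # 99.90234375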
These are some testing results: [image: test output]
Now, my code, based on the images I posted, sums all of the 'exactly k' chances to find the chance of at least 1 success.
Below the code I comment on the changes.
from decimal import Decimal

def dado(fs=1,ft=6,d=1,ns=1,exato=False):
    '''
    fs = number of success faces
    ft = total number of faces
    d = number of dice rolled
    ns = number of expected successes
    exato = True: chance of exactly ns successes, False: chance of at least ns successes
    '''
    s = Decimal(str(fs/ft))
    f = Decimal(str((ft-fs)/ft))
    d_int = d
    d = Decimal(str(d))
    ns = Decimal(str(ns))
    p_max = Decimal(str(s))**Decimal(str(d))
    fator = 1
    po_soma = 0
    for i in range(d_int-1):
        po = (Decimal(str(s))**(Decimal(str(d))-fator) * Decimal(str(f))**(Decimal(str(d))-(Decimal(str(d))-fator)))*Decimal(str(d))
        po_soma += po
        if exato == True:
            p_max = 0
            break
        fator += 1
    return f'{(p_max + po_soma)*100:.2f}%'

dado(1,2,5,1)
First - not a change: it still doesn't work well.
Second - I'm now using the 'fs' variable for the number of faces that count as success and the 'ns' variable for how many successes we want to find, so fs = 1 and ns = 2 on 3d6 means 'the chance of finding at least 2 of 1 specific face when rolling 3 dice'.
Third - I'm using Decimal because I realized that multiplying fractions can produce very small numbers whose precision might suffer (but it doesn't solve the initial problem, so Decimal may be kicked out soon).
Fourth - exato (exact) is now a flag that breaks the loop and gives us just the 'exact' value instead of the 'at least ns' value. So exato=True in the last example means 'the chance of finding exactly 2 of 1 specific face when rolling 3 dice', a much smaller number.
That's it. My thanks to Raibek, who is trying to solve this problem with combinations; I'll study that approach too, but if you have another idea please let me know.
Hello people, it's finally solved!
First I would like to thank Raibek, who solved it using combinations; I didn't realize it was solved when he posted it, and below I'll tell you how and why.
If you are not following the history of this code, you just need to know that it is used to calculate the probability of getting at least ns successes when rolling d dice. The solution code is at the end of this answer.
I found out how to solve the problem while talking to a friend, Eber, who pointed me to an alternative way to check the data: anydice.com. I quickly realized that my visual check, assembling tables in Excel/Calc, was wrong, but why?
Well, here comes my friend who, reading the table of large numbers with 7d6, where the error was already very evident, showed me that although the calculation worked at the beginning, my table did not contain all the possible combinations. And the more possibilities there were, the more my calculations failed, with the odds getting smaller as more dice were added to the roll.
These are the combinations I was considering, in this example for the 7d6 case.
In the first code the calculation was:
successes**factor * failures**factor * d
The mistake is in assuming that the number of possible combinations equals d (which coincidentally holds up to 3 dice in the tests I did earlier, thanks to 1! = 1 and 2! = 2).
Now notice that, in the 7d6 example, the 'exactly 3' block is missing some possible combinations (in yellow):
The correct expression for this term of the equation is:
factorial(d) / (factorial(failures) * factorial(successes))
With this we can find the chance of rolling exactly n of a given face, and then, if we want, for example, the chance of getting the number 1 at least once in 3d6, we just add the chances of getting it exactly 1 time, 2 times and 3 times - which the code already did well.
Finally, let's get to the code:
Daniel-Eber solution:
def dado(fs=1,ft=6,d=1,ns=1,exato=False):
    '''
    fs = number of success faces
    ft = total number of faces
    d = number of dice rolled
    ns = number of expected successes (interpreted according to exato)
    exato = True: chance of exactly ns successes, False: chance of at least ns successes
    '''
    from math import factorial
    s = fs/ft
    f = (ft-fs)/ft
    d = d
    ns = ns
    p_max = s**d
    falhas = 1
    po_soma = 0
    if exato == False:
        for i in range(d-1):
            po = ( (s**(d-falhas)) * (f**(falhas))) * (factorial(d)/(factorial(falhas)*factorial((d-falhas))))
            po_soma += po
            falhas += 1
    else:
        p_max = 0
        falhas = d-ns
        po_soma = ( (s**(d-falhas)) * (f**(falhas))) * (factorial(d)/(factorial(falhas)*factorial((d-falhas))))
    return f'{(p_max + po_soma)*100:.2f}%'

print(dado(1,6,6,1))
Raibek solution:
from scipy.special import comb

def calc_dados(f_sucessos=1, faces=6, n_dados=1):
    assert f_sucessos <= faces
    outcomes_total = faces ** n_dados
    outcomes_success = 0
    f_fail = faces - f_sucessos
    for i in range(1, n_dados + 1):
        one_permutation = (f_sucessos ** i) * (f_fail ** (n_dados - i))
        n_permutations = comb(n_dados, i)
        outcomes_success += one_permutation * n_permutations
    p = outcomes_success / outcomes_total
    return f'{(p)*100:.2f}%'
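Both versions should agree; for example, the chance of at least one 1 with 6d6 is 1 - (5/6)**6, about 66.51%:

print(dado(1, 6, 6, 1))    # 66.51%
print(calc_dados(1, 6, 6)) # 66.51%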

How to formulate linear programming optimization in PuLP?

I am looking to formulate a (I think) complex LP problem in Python using PuLP.
The optimization goal is to maximize profit margin (aggregate acquisition cost vs aggregate sale revenue plus some future appreciation (FV)) on a basket of products for purchase.
The LP decision variables are pricing 'statistics' distinct for each product.
The constraint is that the bid for a particular product cannot use a pricing statistic greater than some maximum value, 1.0 in this case, and the aggregate ratio of the sum of FV of won items to net revenue must be <= -2.0.
The price I'd bid is a function of a theoretical price plus a theoretical future value (FV) minus a theoretical cost. These 3 inputs are static, but the pricing statistic scales (or weights) the impact of the FV on the bid, and that is what I'd like to solve for. A higher statistic means a higher bid. The trick is that once you change the statistic, you change the bid, and this changes the aggregates that PuLP is trying to optimize for. I figured this would be OK since the bid price is a closed-form linear formula, but please see below for how I tried to tackle it.
I also have the actual price the item sold for, so can compare the model's output price to the actual price to determine whether I would have bought in that case.
Concretely:
Bid[item j] = Theo price + (Statistic to be tuned[product i] * FV) - (Costs + Expenses)
There are 10 products to tune for, and j total items, non-uniformly distributed throughout a dataset.
If my output bid price, based on the parameter being tuned in the LP, is greater than the actual winning bid price, then the item is considered purchased and is added to the objective function.
Can someone please help me formulate this in PuLP? Maybe this is MIP? If so, I am unsure how to represent it formally.
What I have so far is the following:
from pulp import LpMaximize, LpProblem, LpStatus, lpSum, LpVariable, LpBinary
import pandas as pd
df= pd.read_excel('data.xlsx')
#create matrices and set variables
MAX_STAT = 1.0
RATIO_CONSTRAINT = -2.0
PRODUCTS = [0,1,2,3,4,5,6,7,8,9]
ITEMS = df['ITEMS'].tolist() # IDs
#1xj dicts
ITEM_PRODUCT = {ITEMS[i]:df['PRODUCT'].iloc[i] for i in range(len(df))}
ACTUAL_PX = {ITEMS[i]:df['ACTUAL_PX'].iloc[i] for i in range(len(df))}
COST = {ITEMS[i]:df['COST'].iloc[i] for i in range(len(df))}
EXPENSE = {ITEMS[i]:df['EXPENSE'].iloc[i] for i in range(len(df))}
#ixj dicts
THEO_PX = {ITEMS[i]:[df['THEO_PX'].iloc[i] if PRODUCTS[ITEMS[i]] == x else 0 for x in PRODUCTS] for i in range(len(df))}
QUANTITY = {ITEMS[i]:[df['QUANTITY'].iloc[i] if PRODUCTS[ITEMS[i]] == x else 0 for x in PRODUCTS] for i in range(len(df))}
FV = {ITEMS[i]:[df['FV'].iloc[i] if PRODUCTS[ITEMS[i]] == x else 0 for x in PRODUCTS] for i in range(len(df))}
use_vars = {j:[i if ITEM_PRODUCT[j] == i else 0 for i in PRODUCTS] for j in ITEMS}
#Define the model
model = LpProblem(name="maximize_margin", sense=LpMaximize)
#Define decision variables
strategy_statistic = LpVariable.dicts('StrategyStat', [(j,i) for j in ITEMS for i in PRODUCTS], 0, MAX_STAT)
#other variables dependent on the statistic
strategy_bid = {(j,i):strategy_statistic[(j,i)]*FV[j][i]+THEO_PX[j][i]-COST[j]-EXPENSE[j] for j in ITEMS for i in PRODUCTS}
win_loss = {(j,i):1 if strategy_bid[(j,i)] >= ACTUAL_PX[j] else 0 for j in ITEMS for i in PRODUCTS}
aggQuantity = lpSum(win_loss[(j,i)]*QUANTITY[j][i]*use_vars[j][i] for j in ITEMS for i in PRODUCTS)
aggTheo = lpSum(win_loss[(j,i)]*THEO_PX[j][i]*QUANTITY[j][i]*use_vars[j][i] for j in ITEMS for i in PRODUCTS)
aggFV = lpSum(win_loss[(j,i)]*FV[j][i]*QUANTITY[j][i]*use_vars[j][i] for j in ITEMS for i in PRODUCTS)
aggBidNotional = lpSum(win_loss[(j,i)]*strategy_bid[(j,i)]*QUANTITY[j][i]*use_vars[j][i] for j in ITEMS for i in PRODUCTS)
model += (aggTheo - aggBidNotional + aggFV)
model += (aggFV / (aggTheo - aggBidNotional)) <= RATIO_CONSTRAINT
Currently I'm seeing an error on the last line saying:
TypeError: Expressions cannot be divided by a non-constant expression
But I think there is more wrong with this formulation than that...
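For what it's worth, two of the pieces above can only be expressed in PuLP with some reworking, and the sketch below is only a starting point under stated assumptions. PuLP cannot divide one expression by another, so if the sign of aggTheo - aggBidNotional is known in advance, the ratio constraint can be multiplied out into a linear one; and the win/loss test cannot be a Python if on an LpVariable expression - it needs an explicit binary variable linked to the bid with a big-M constraint (which makes this a MIP). The names reuse the code above; BIG_M is a hypothetical bound I made up.

# ratio constraint without division; valid only if (aggTheo - aggBidNotional) > 0,
# otherwise the inequality direction flips
model += aggFV <= RATIO_CONSTRAINT * (aggTheo - aggBidNotional)

# 'won the item' as a binary variable tied to the bid via big-M
BIG_M = 1e7  # hypothetical bound, larger than any possible bid/price gap
win = LpVariable.dicts('Win', [(j, i) for j in ITEMS for i in PRODUCTS], cat=LpBinary)
for j in ITEMS:
    for i in PRODUCTS:
        # win[(j, i)] may be 1 only when the bid reaches the actual winning price
        model += strategy_bid[(j, i)] >= ACTUAL_PX[j] - BIG_M * (1 - win[(j, i)])

Note that the aggregates would then contain products of win[(j, i)] with strategy_bid[(j, i)], which are bilinear and need their own linearization, so the full model still takes more work than this.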

Calculating monthly growth percentage from cumulative total growth

I am trying to calculate a constant for month-to-month growth rate from an annual growth rate (goal) in Python.
My question has arithmetic similarities to this question, but was not completely answered.
For example, if total annual sales for 2018 are $5,600,000.00 and I have an expected 30% increase for the next year, I would expect total annual sales for 2019 to be $7,280,000.00.
BV_2018 = 5600000.00
Annual_GR = 0.3
EV_2019 = (BV_2018 * Annual_GR) + BV_2018
I am using the last month of 2018 to forecast the first month of 2019
Last_Month_2018 = 522000.00
Month_01_2019 = (Last_Month_2018 * CONSTANT) + Last_Month_2018
For the second month of 2019 I would use
Month_02_2019 = (Month_01_2019 * CONSTANT) + Month_01_2019
...and so on and so forth
The cumulative sum of Month_01_2019 through Month_12_2019 needs to be equal to EV_2019.
Does anyone know how to go about calculating the constant in Python? I am familiar with the np.cumsum function, so that part is not an issue. My problem is I cannot solve for the constant I need.
Thank you in advance and please do not hesitate to ask for further clarification.
More clarification:
# get beginning value (BV)
BV = 522000.00
# get desired end value (EV)
EV = 7280000.00
We are trying to get from BV to EV (which is a cumulative sum) by calculating the cumulative sum of the [12] monthly totals. Each monthly total will have a % increase from the previous month that is constant across months. It is this % increase that I want to solve for.
Keep in mind, BV is the last month of the previous year. It is from BV that our forecast (i.e., Months 1 through 12) will be calculated. So, I'm thinking that it makes sense to go from BV to the EV plus the BV. Then, just remove BV and its value from the list, giving us EV as the cumulative total of Months 1 through 12.
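Put differently, writing r = 1 + CONSTANT, the twelve forecast months are BV*r, BV*r**2, ..., BV*r**12, so the requirement is BV*(r + r**2 + ... + r**12) = EV.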
I imagine using this constant in a function like this:
import numpy as np
import pandas as pd

def supplier_forecast_calculator(sales_at_cost_prior_year, sales_at_cost_prior_month, year_pct_growth_expected):
    """
    Calculates monthly supplier forecast

    Example:
    monthly_forecast = supplier_forecast_calculator(sales_at_cost_prior_year = 5600000,
                                                    sales_at_cost_prior_month = 522000,
                                                    year_pct_growth_expected = 0.30)
    monthly_forecast.all_metrics
    """
    # get monthly growth rate (this is the constant I want to solve for)
    monthly_growth_expected = CONSTANT
    # get first month sales at cost
    month1_sales_at_cost = (sales_at_cost_prior_month*monthly_growth_expected)+sales_at_cost_prior_month
    # instantiate lists
    month_list = ['Month 1'] # for months
    sales_at_cost_list = [month1_sales_at_cost] # for sales at cost
    # start loop
    for i in list(range(2,13)):
        # append month to list
        month_list.append(str('Month ') + str(i))
        # get sales at cost and append to list
        month1_sales_at_cost = (month1_sales_at_cost*monthly_growth_expected)+month1_sales_at_cost
        # append month1_sales_at_cost to sales at cost list
        sales_at_cost_list.append(month1_sales_at_cost)
    # add total to the end of month_list
    month_list.insert(len(month_list), 'Total')
    # add the total to the end of sales_at_cost_list
    sales_at_cost_list.insert(len(sales_at_cost_list), np.sum(sales_at_cost_list))
    # put the metrics into a df
    all_metrics = pd.DataFrame({'Month': month_list,
                                'Sales at Cost': sales_at_cost_list}).round(2)
    # return the df
    return all_metrics
Let r = 1 + monthly_rate. Then the problem we are trying to solve is
r + ... + r**12 = EV/BV. We can use numpy to get a numeric solution, which should be plenty fast in practice. We are solving the polynomial r + ... + r**12 - EV/BV = 0 and recovering the monthly rate from r. There will be twelve complex roots, but only one real positive one - which is what we want.
import numpy as np

# get beginning value (BV)
BV = 522000.00
# get desired end value (EV)
EV = 7280000.00

def get_monthly(BV, EV):
    coefs = np.ones(13)
    coefs[-1] -= EV / BV + 1
    # there will be a unique positive real root
    roots = np.roots(coefs)
    return roots[(roots.imag == 0) & (roots.real > 0)][0].real - 1

rate = get_monthly(BV, EV)
print(rate)
# 0.022913299846925694
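As a quick sanity check, compounding BV forward twelve months at this rate should land back on EV (up to root-finding precision):

months = [BV * (1 + rate) ** k for k in range(1, 13)]
print(round(sum(months), 2))  # ~7280000.0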
Some comments:
roots.imag == 0 may be problematic in some cases since roots uses a numeric algorithm. As an alternative, we can pick a root with the least imaginary part (in absolute value) among all roots with a positive real part.
We can use the same method to get rates for other time intervals. For example, for weekly rates, we can replace 13 == 12 + 1 with 52 + 1.
The above polynomial has a solution by radicals, as outlined here.
Update on performance. We could also frame this as a fixed-point problem, i.e. look for a fixed point of the map
x = (EV/BV + 1) * x**(1/13) - EV/BV
The fixed point x will be equal to (1 + rate)**13 (substitute x = r**13 into 1 + r + ... + r**12 = EV/BV + 1, which is the same equation as above).
The following pure-Python implementation is roughly four times faster than the above numpy version on my machine.
def get_monthly_fix(BV, EV, periods=12, tolerance=1e-12):
    # iterate x -> (ratio + 1) * x**(1/(periods + 1)) - ratio;
    # the fixed point is (1 + rate)**(periods + 1)
    ratio = EV / BV
    x = guess = ratio
    while True:
        x = (ratio + 1) * x ** (1 / (periods + 1)) - ratio
        if abs(x - guess) < tolerance:
            return x ** (1 / (periods + 1)) - 1
        guess = x
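On the numbers above this should return the same rate as the numpy version:

print(get_monthly_fix(BV, EV))
# ~0.0229133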
We can make this run even faster with the help of numba.jit.
I am not sure if this works (tell me if it doesn't) but try this.
def get_value(start, end, times, trials=100, _amount=None, _last=-1, _increase=None):
    # don't call with _amount, _last, or _increase! Only start, end and times
    if _amount is None:
        _amount = start / times
    if _increase is None:
        _increase = start / times
    attempt = 1
    for n in range(times):
        attempt = (attempt * _amount) + attempt
    if attempt > end:
        if _last != 0:
            _increase /= 2
            _last = 0
        _amount -= _increase
    elif attempt < end:
        if _last != 1:
            _increase /= 2
            _last = 1
        _amount += _increase
    else:
        return _amount
    if trials <= 0:
        return _amount
    return get_value(start, end, times, trials=trials-1,
                     _amount=_amount, _last=_last, _increase=_increase)
Tell me if it works.
Used like this:
get_value(522000.00, 7280000.00, 12)

Standard deviation of combinations of dice

I am trying to find the standard deviation of the products of the numbers in all combinations of 30 dice that sum up to 120. I am very new to Python, and this code freezes the console because the number of combinations is enormous; I am not sure how to turn it into a smaller, more efficient computation. What I did is:
found all possible combinations of 30 dice;
filtered combinations that sum up to 120;
multiplied all the items within each combination in the result list;
tried extracting standard deviation.
Here is the code:
import itertools
import numpy

dice = [1,2,3,4,5,6]
subset = itertools.product(dice, repeat = 30)
result = []
for x in subset:
    if sum(x) == 120:
        result.append(x)
my_result = numpy.product(result, axis = 1).tolist()
std = numpy.std(my_result)
print(std)
Note that Var(X) = E(X^2) - E(X)^2, so you can solve this problem analytically with the following recurrences, where f[i][N] is the sum of the products of face values over all ordered rolls of i dice totalling N, g[i][N] is the corresponding sum of squared products, and h[i][N] is the number of such rolls:
f[i][N] = sum(k * f[i-1][N-k]) (1 <= k <= 6)
g[i][N] = sum(k^2 * g[i-1][N-k])
h[i][N] = sum(h[i-1][N-k])
f[1][k] = k (1 <= k <= 6)
g[1][k] = k^2 (1 <= k <= 6)
h[1][k] = 1 (1 <= k <= 6)
The answer is then sqrt(g[30][120]/h[30][120] - (f[30][120]/h[30][120])^2).
Sample implementation:
import numpy as np

Nmax = 120
nmax = 30
min_value = 1
max_value = 6
# the intermediate results get huge; dtype='object' keeps them as exact Python big-ints
f = np.zeros((nmax+1, Nmax+1), dtype='object')
g = np.zeros((nmax+1, Nmax+1), dtype='object')
h = np.zeros((nmax+1, Nmax+1), dtype='object')
for i in range(min_value, max_value+1):
    f[1][i] = i
    g[1][i] = i**2
    h[1][i] = 1
for i in range(2, nmax+1):
    for N in range(1, Nmax+1):
        f[i][N] = 0
        g[i][N] = 0
        h[i][N] = 0
        for k in range(min_value, max_value+1):
            if N - k < 1:
                continue  # avoid wrapping around to negative column indices
            f[i][N] += k*f[i-1][N-k]
            g[i][N] += (k**2)*g[i-1][N-k]
            h[i][N] += h[i-1][N-k]
result = np.sqrt(float(g[nmax][Nmax]) / h[nmax][Nmax] - (float(f[nmax][Nmax]) / h[nmax][Nmax]) ** 2)
# result = 32128174994365296.0
You ask for a result over an unfiltered 6**30 ≈ 2*10**23 combinations, impossible to handle as such.
There are two possibilities that can be combined:
1. Include more thinking to pre-treat the problem, e.g. on how to sample only those with sum 120.
2. Do a Monte Carlo simulation instead, i.e. don't sample all combinations, but only a random couple of thousand, to obtain a representative sample that determines the std sufficiently accurately.
Now, I only apply (2), giving the brute force code:
import random

N = 30 # number of dice
M = 100000 # number of samples
S = 120 # required sum
result = [[random.randint(1,6) for _ in xrange(N)] for _ in xrange(M)]
result = [s for s in result if sum(s) == S]
Now, that result should be comparable to your result before using numpy.product ... that part I couldn't follow, though...
Ok, if you are after the standard deviation of the product of the 30 dice, that is what your code does. Then I need 1,000,000 samples to get roughly reproducible values for the std (1 digit) - that takes my PC about 20 seconds, still considerably less than 1 million years :-D.
Is a number like 3.22*10**16 what you are looking for?
Edit after comments:
Well, sampling the frequencies of the numbers instead gives only 6 independent variables - actually only 4, after substituting in the constraints (sum = 120, total number of dice = 30). My current code looks like this:
def p2(b, s):
    return 2**b * 3**s[0] * 4**s[1] * 5**s[2] * 6**s[3]

hits = range(31)
subset = itertools.product(hits, repeat=4) # only 3,4,5,6 frequencies
product = []
permutations = []
for s in subset:
    b = 90 - (2*s[0] + 3*s[1] + 4*s[2] + 5*s[3]) # 2 frequency
    a = 30 - (b + sum(s)) # 1 frequency
    if 0 <= b <= 30 and 0 <= a <= 30:
        product.append(p2(b, s))
        permutations.append(1) # TODO: Replace 1 with possible permutations
print numpy.std(product) # TODO: calculate std manually, considering permutations
This computes in about 1 second, but the confusing part is that I get as a result 1.28737023733e+17. Either my previous approaches or this one has a bug - or both.
Sorry - it's not that easy: the samples are not all equally probable - that is the problem here. Each frequency pattern corresponds to a different number of possible orderings, which gives its weight, and that weight has to be taken into account before taking the standard deviation. I have drafted that in the code above.
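One way to complete those TODOs, as a rough sketch: weight each frequency pattern by its multinomial coefficient (the number of orderings of the 30 dice that produce that pattern), then take the weighted mean and the weighted mean of squares. This assumes the target is still the std of the product over all ordered rolls summing to 120, as in the question; the printed value should then agree with the exact DP result above.

import itertools
from math import factorial, sqrt

def multinomial(counts):
    # number of orderings of the dice that have exactly these per-face counts
    m = factorial(sum(counts))
    for c in counts:
        m //= factorial(c)
    return m

total_w = sum_p = sum_p2 = 0
for s in itertools.product(range(31), repeat=4):      # frequencies of faces 3, 4, 5, 6
    b = 90 - (2*s[0] + 3*s[1] + 4*s[2] + 5*s[3])      # frequency of face 2
    a = 30 - (b + sum(s))                             # frequency of face 1
    if 0 <= a <= 30 and 0 <= b <= 30:
        w = multinomial((a, b) + s)                   # weight of this frequency pattern
        p = 2**b * 3**s[0] * 4**s[1] * 5**s[2] * 6**s[3]
        total_w += w
        sum_p += w * p
        sum_p2 += w * p * p
mean = sum_p / total_w
print(sqrt(sum_p2 / total_w - mean * mean))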

While loop in Python (newbie)

Hi, I am a newbie to Python and I am having a bit of a hard time understanding this simple while loop. This program is supposed to calculate the time it takes for the bacteria to double.
time = 0
population = 1000 # 1000 bacteria to start with
growth_rate = 0.21 # 21% growth per minute
while population < 2000:
    population = population + growth_rate * population
    print population
    time = time + 1
print "It took %d minutes for the bacteria to double." % time
print "...and the final population was %6.2f bacteria." % population
and the result is:
1210.0
1464.1
1771.561
2143.58881
It took 4 minutes for the bacteria to double.
...and the final population was 2143.59 bacteria.
What I don't get is why the final result is greater than 2000, since it's supposed to stop before 2000. Am I getting something wrong?
Your code reads: "As long as the population is less than 2000, calculate the population of the next generation and then check again". Hence, it will always calculate one generation too many.
Try this:
while True:
    nextGen = population + growth_rate * population
    if nextGen > 2000: break
    population = nextGen
    print population
    time = time + 1
EDIT:
Or to get the exact result:
print (math.log (2) / math.log (1 + growth_rate) )
So the whole program could be:
import math
population = 1000
growth_rate = 0.21 # 21% growth per minute
t = math.log (2) / math.log (1 + growth_rate)
print 'It took {} minutes for the bacteria to double.'.format (t)
print '...and the final population was {} bacteria.'.format (2 * population)
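With growth_rate = 0.21 this prints roughly 3.64 minutes; rounding up with math.ceil gives the 4 whole minutes the loop reports.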
Because prior to your last iteration (#4 below) it IS below 2,000.
Iteration #1: 1210.0
Iteration #2: 1464.1
Iteration #3: 1771.561
Iteration #4: 2143.58881
Another way to do this, although perhaps less elegant, would be to add a break in your while loop like this (assuming all you care about is not printing any number higher than 2,000):
while population < 2000:
    population = population + growth_rate * population
    if population >= 2000:
        break
    else:
        print population
        time = time + 1
At the penultimate iteration of the loop the population was less than 2,000, so there was another iteration. On the final iteration the population became more than 2,000 and so the loop exited.
If the population increased by 1 each time then you're correct; the loop would have exited at 2,000. You can see this behaviour using a simpler version:
i = 0
while i < 10:
    i += 1
    print i
Vary the amount that i increases by in order to see how it changes.
A while loop is an example of an "entry controlled" loop: the condition is checked before entering the loop body, not during it. So if the condition is violated partway through an iteration, the loop only terminates at the next check, after that iteration has finished.
Here is a simple example:
>>> a = 1
>>> while a < 5:
...     a = a+3
...     print a
...
4
7
So, if you want your loop to exit before a is greater than or equal to 5, you must do the check inside the loop itself and break out of it:
>>> a = 1
>>> while a < 5:
...     a = a+3
...     if a < 5:
...         print a
...     else:
...         break
...
4
So your code right here:
while population < 2000:
    population = population + growth_rate*population
Say we enter that while loop with population = 1800 and growth_rate = .21. That satisfies the condition, so the loop runs another iteration. But in the next line you set population = 1800 + (.21)*1800, which equals 2178. So when you print out population, it will say 2178 even though that's higher than 2000.
What you could do is something like this:
while population < 2000:
    if population + growth_rate * population < 2000:
        population = population + growth_rate * population
        time = time + 1
    else:
        break
print population
time = 0
population = 1000 # 1000 bacteria to start with
growth_rate = 0.21 # 21% growth per minute
target = 2 * population # stop once the starting population has doubled
while population < target:
    population = population + (growth_rate * population)
    time = time + 1
print "It took %d minutes for the bacteria to double." % time
I think you will be getting the right output now.
I think this is a simple math problem - you're expecting the time to be an integer, although it is probably a floating-point number. The loop can't stop when the population has reached exactly 2000, because the way you're calculating it, it never has a value of exactly 2000.
