Normalize Small Probabilities in Python

I have a list of probabilities, which I need to normalize to equal 1.0.
e.g. probs = [0.01,0.03,0.005]
I realize that this is done by dividing each probability by the sum of probs. However, if the probabilities become really small, Python will tell me that sum(probs)=0.0. I understand that this is an underflow issue. I suppose I should use the log of each probability. How would I do this?

The sum of even very small positive floating point values will never truly be 0; it may be close to zero, but it can never be exactly zero.
Just divide 1 by their sum, and multiply the probabilities by that factor:
def normalize(probs):
    prob_factor = 1 / sum(probs)
    return [prob_factor * p for p in probs]
Some probabilities may make up only a very small fraction of the total sum, of course, and that fraction may approach zero. But this just means that when normalizing you may end up with normalized probabilities that are either very close to zero, or, if smaller than the smallest representable floating point value, equal to zero. The latter only happens if some probabilities in the list are so much smaller than the others that they no longer represent anything close to something that will ever occur.
Demo:
>>> def normalize(probs):
...     prob_factor = 1 / sum(probs)
...     return [prob_factor * p for p in probs]
...
>>> normalize([0.0000000001,0.000000000003,0.000000000000005])
[0.9708266589000533, 0.029124799767001597, 4.854133294500266e-05]
And the extreme case:
>>> import sys
>>> normalize([sys.float_info.max, sys.float_info.min])
[0.9999999999999999, 0.0]
>>> normalize([sys.float_info.max, sys.float_info.min])[-1] == 0
True
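To address the log-space idea from the question directly: a minimal sketch (not part of the original answer) that normalizes via the log-sum-exp trick, assuming all probabilities are strictly positive; normalize_log is just an illustrative name:
import math

def normalize_log(probs):
    # illustrative sketch: normalize in log space via log-sum-exp
    logs = [math.log(p) for p in probs]
    m = max(logs)
    log_total = m + math.log(sum(math.exp(l - m) for l in logs))
    return [math.exp(l - log_total) for l in logs]
For probs = [0.01, 0.03, 0.005] this returns roughly [0.222, 0.667, 0.111], the same result as dividing by the sum.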

You can always use a scale factor to avoid the underflow problem, either manually entered or automatically calculated, e.g.:
import math

no_z = [x for x in probs if x > 0.0]
if len(no_z) == 0:
    print("Unable to calculate with 0.0 as all the probabilities")
else:
    # scale everything up by the order of magnitude of the smallest non-zero probability
    order = int(-math.log10(min(no_z)))
    if order < 0:
        order = 0
    sf = 10**order
    scaled = [x * sf for x in probs]
    tot = sum(scaled)
    norm = [x/tot for x in scaled]
Of course you would probably be better off just using bigfloat or numpy and doing high precision maths.
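One standard-library option for the "high precision maths" route, sketched here as an illustration (normalize_exact is not from the original answer), is exact rational arithmetic with fractions.Fraction, converting back to float only at the end:
from fractions import Fraction

def normalize_exact(probs):
    # illustrative sketch: exact rationals cannot underflow during the sum
    fracs = [Fraction(p) for p in probs]
    total = sum(fracs)
    return [float(f / total) for f in fracs]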

Related

Using Python to approximate Euler's formula with a matrix exponential

I am trying to use Python and NumPy to apply Euler's formula, e^(iπ), represented as a matrix exponential e^A, where

A = [[0, -π],
     [π, 0]]

and then apply the Maclaurin series for the exponential function e^x:

e^x = SUM(n=0 to infinity) x^n/n! = 1 + x + x^2/2! + x^3/3! + ...

So I am trying to compute the approximation matrix S^(N+1) and print the matrix and its four entries.
I have tried emulating Euler's formula and the Maclaurin series, and I think the final approximation matrix will be reached when N = 20, but currently my values do not add up. I am also trying to use np.linalg.norm to compute a 2-norm as well.
import math
import numpy as np

n = 0
A = np.eye(2)
A = math.pi * np.rot90(A)
A[0,1] = -A[0,1]
A
mac_series = 0
while n < 120:
    print(n)
    n += 1
    mac_series = (A**n) / (math.factorial(n))
print("\n", mac_series)
np.linalg.norm(mac_series)
The main problem here is that you are confusing A**3 with A@A@A.
Just look at case n=0.
A**0
# array([[1., 1.],
#        [1., 1.]])
I am pretty sure you were expecting A⁰ to be the identity (it is only that way that this representation of x+iy ⇔ np.array([[x,-y],[y,x]]) makes sense).
In numpy, you have np.linalg.matrix_power for that (or you could just accumulate the power yourself).
sum(np.linalg.matrix_power(A,i) / math.factorial(i) for i in range(20))
is
array([[-1.00000000e+00,  5.28918267e-10],
       [-5.28918267e-10, -1.00000000e+00]])
for example. Pretty sure that is what you were expecting (that is the matrix that represents the real number -1 using the same logic; and the whole point of Euler's identity is e^(iπ) = -1).
By comparison,
sum(A**i / math.factorial(i) for i in range(20))
returns
array([[ 1.        ,  0.04321392],
       [23.14069263,  1.        ]])
which is just the Maclaurin series computed element-wise for all four entries of the matrix. In other words, since your matrix is [[0,-π],[π,0]], you are evaluating the Maclaurin series of [[e⁰, exp(-π)], [exp(π), e⁰]]. And it works: e⁰ = 1, obviously; exp(π) is 23.140692632779267, so we got a very good approximation in our result; and exp(-π) is its reciprocal, 0.04321391826377226, also a good approximation.
So it works, just not at all in the way you obviously intend: proving Euler's identity in matrix form, i.e. computing exp(iπ), not just exp(π).
Without matrix_power, and with code closer to your initial code, you could do
n=0
mac_series = 0
Apowern=np.eye(2) # A⁰=Id for now
while n < 20:
    print(n)
    mac_series += Apowern / (math.factorial(n))
    Apowern = Apowern @ A  # @ is the matrix multiplication operator
    n+=1
Note that I've also moved n += 1, which was misplaced in your code. You were accumulating Aⁿ⁺¹/(n+1)!, not Aⁿ/n! (in other words, your sum misses the A⁰/0! = Id term).
With this, I get the expected result
>>> mac_series
array([[-1.00000000e+00,  5.28918724e-10],
       [-5.28918724e-10, -1.00000000e+00]])
Last problem, more subtle: you may have noticed that I only do 20 iterations, not 120. That is because after 20, you start to have a numerical problem. Apowern (or np.linalg.matrix_power(A,n); it is the same problem for both methods) becomes too big. Since it is divided by n! in the accumulation, that doesn't prevent convergence mathematically, but it does prevent numeric convergence. And, in practice, after a while, numpy changes the type of Apowern.
So we should not have a big matrix divided by a big number, and should instead iterate on quantities that stay small enough. Like this, for example:
n=0
mac_series = 0
NthTerm=np.eye(2) # Aⁿ/n!. A⁰/0!=Id for now
while n < 120: # 120 is no longer a problem
    print(n)
    mac_series += NthTerm
    n += 1
    NthTerm = (NthTerm @ A) / n  # so if NthTerm was
    # Aⁿ/n!, now it becomes Aⁿ/n! @ A/(n+1) = Aⁿ⁺¹/(n+1)!
Result
>>> mac_series
array([[-1.00000000e+00, -2.34844612e-16],
       [ 2.34844612e-16, -1.00000000e+00]])
tl;dr
You have 4 problems:
1. The one already mentioned by Roy: you are not accumulating the Aⁿ/n! terms, just replacing them, and eventually keeping only the last. In other words, you need a += instead of =.
2. A**n is not Aⁿ. It is just A with all the elements raised to the power n. Said otherwise, [[x,-y],[y,x]]**n is not [[x,-y],[y,x]]ⁿ; it is [[xⁿ,(-y)ⁿ],[yⁿ,xⁿ]]. So you end up computing [[e⁰, 1/e^π], [e^π, e⁰]] ≈ [[1, 0.0432], [23.14, 1]], which is irrelevant here.
3. n += 1 is misplaced.
4. The numerical problem due to Aⁿ becoming huge. Even though you divide it by an even huger n!, so it poses no problem theoretically/mathematically, numerically it does, since the intermediate result becomes too big for the computer.
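As a cross-check (not part of the original answer), SciPy can compute the matrix exponential directly with scipy.linalg.expm, assuming SciPy is installed:
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, -np.pi],
              [np.pi, 0.0]])
print(expm(A))  # approximately [[-1, 0], [0, -1]], i.e. the matrix form of e^(iπ) = -1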

Avoid underflow using exp and minimum positive float128 in numpy

I am trying to calculate the following ratio:
w(i) / sum(w(j)), where the w are updated using an exponentially decreasing function, i.e. w(i) = w(i) * exp(-k), with k a positive parameter. All the numbers are non-negative.
This ratio is then used in a formula (multiplied by a constant, plus another constant). As expected, I soon run into underflow problems.
I guess this happens often, but can someone give me some references on how to deal with it? I did not find an appropriate transformation, so one thing I tried was to set some minimum positive number as a safety threshold, but I did not manage to find out what the minimum positive float is (I am representing numbers as numpy.float128). How can I actually get the minimum positive such number on my machine?
The code looks like this:
w = np.ones(n, dtype='float128')
lt = np.ones(n)
for t in range(T):
    p = (1-k) * w / w.sum() + (k/n)
    # Process a subset of the n elements, call it set I, j is some range()
    for i in I:
        s = p[list(j[i])].sum()
        lt /= s
        w[s] *= np.exp(-k * lt)
where k is some constant in (0,1) and n is the length of the array
When working with exponentially small numbers it's usually better to work in log space. For example, log(w*exp(-k)) = log(w) - k, which won't have any over/underflow problems unless k is itself exponentially large or w is zero. And, if w is zero, numpy will correctly return -inf. Then, when doing the sum, you factor out the largest term:
log_w = np.log(w) - k
max_log_w = np.max(log_w)
# Individual terms in the following may underflow, but then they wouldn't
# contribute to the sum anyways.
log_sum_w = max_log_w + np.log(np.sum(np.exp(log_w - max_log_w)))
log_ratio = log_w - log_sum_w
This probably isn't exactly what you want since you could just factor out the k completely (assuming it's a constant and not an array), but it should get you on your way.
Scikit-learn implements a similar thing with extmath.logsumexp, but it's basically the same as the above.
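Two side notes, not from the original answer: the question's "minimum positive float" can be read off with np.finfo, and SciPy ships the same log-sum-exp helper as scipy.special.logsumexp. A small sketch, assuming a platform where numpy.float128 exists:
import numpy as np
from scipy.special import logsumexp

print(np.finfo(np.float128).tiny)  # smallest positive normal float128 on this machine
print(np.finfo(np.float64).tiny)   # same for ordinary doubles

log_w = np.log(np.array([1e-300, 3e-301, 5e-303]))
log_ratio = log_w - logsumexp(log_w)  # the ratio w(i)/sum(w(j)), kept in log space
print(np.exp(log_ratio))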

Finding if n! + 1 is a perfect square

I'm trying to write a program to look for a number, n, between 0 and 100 such that n! + 1 is a perfect square. I'm trying to do this because I know there are only three such numbers, so it was meant as a test of my Python ability.
Refer to Brocard's problem.
math.sqrt always returns a float, even if that float happens to be, say, 4.0. As the docs say, "Except when explicitly noted otherwise, all return values are floats."
So, your test for type(math.sqrt(x)) == int will never be true.
You could try to work around that by checking whether the float represents an integer, like this:
sx = math.sqrt(x)
if round(sx) == sx:
There's even a built-in method that does this as well as possible:
if sx.is_integer():
But keep in mind that float values are not a perfect representation of real numbers, and there are always rounding issues. For a large enough number, the sqrt may round to an integer even though the number really isn't a perfect square: for example, math.sqrt(10000000000**2 + 1).is_integer() is True, even though the number obviously is not a perfect square.
I could tell you whether this is safe within your range of values, but can you convince yourself? If not, you shouldn't just assume that it is.
So, is there a way we can check that isn't affected by float rounding issues? Sure, we can use integer arithmetic to check:
sx = int(round(math.sqrt(x)))
if sx*sx == x:
But, as Stefan Pochmann points out, even if this check is safe, does that mean the whole algorithm is? No; sqrt itself could have already been rounded to the point where you've lost integer precision.
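For instance (an illustration, not from the original answer), with a large enough perfect square the float-based check gives a false negative, while exact integer arithmetic does not:
import math

x = (10**30 + 1) ** 2
sx = int(round(math.sqrt(x)))
print(sx * sx == x)             # False: math.sqrt already lost the low digits of x
print(math.isqrt(x) ** 2 == x)  # True: exact integer square root (Python 3.8+)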
So, you need an exact sqrt. You could do this by using decimal.Decimal with a huge configured precision. This will take a bit of work, and a lot of memory, but it's doable. Like this:
decimal.getcontext().prec = ENOUGH_DIGITS
sx = decimal.Decimal(x).sqrt()
But how many digits is ENOUGH_DIGITS? Well, how many digits do you need to represent 100!+1 exactly?
So:
import math
import decimal

decimal.getcontext().prec = 156
n = 0
while n <= 100:
    x = math.factorial(n) + 1
    sx = decimal.Decimal(x).sqrt()
    if int(sx) ** 2 == x:
        print(sx)
    n = n + 1
If you think about it, there's a way to reduce the needed precision to 79 digits, but I'll leave that as an exercise for the reader.
The way you're presumably supposed to solve this is by using purely integer math. For example, you can find out whether an integer is a square in logarithmic time just by using Newton's method until your approximation error is small enough to just check the two bordering integers.
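A minimal sketch of that integer-only approach (is_square is an illustrative name, not from the original answer; Python 3.8+ also ships math.isqrt, which does the same job):
def is_square(x):
    # integer Newton's method: r converges down to floor(sqrt(x)) using int arithmetic only
    if x < 0:
        return False
    if x == 0:
        return True
    r = x
    while r * r > x:
        r = (r + x // r) // 2
    return r * r == x
On Python 3.8+ the whole check is simply math.isqrt(x) ** 2 == x.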
For very large numbers it's better to avoid using floating point square roots altogether because you will run into too many precision issues and you can't even guarantee that you will be within 1 integer value of the correct answer. Fortunately Python natively supports integers of arbitrary size, so you can write an integer square root checking function, like this:
def isSquare(x):
    if x == 1:
        return True
    low = 0
    high = x // 2
    root = high
    while root * root != x:
        root = (low + high) // 2
        if low + 1 >= high:
            return False
        if root * root > x:
            high = root
        else:
            low = root
    return True
Then you can run through the integers from 0 to 100 like this:
n = 0
while n <= 100:
    x = math.factorial(n) + 1
    if isSquare(x):
        print(n)
    n = n + 1
Here's another version working only with integers, computing the square root by adding decreasing powers of 2, for example intsqrt(24680) will be computed as 128+16+8+4+1.
def intsqrt(n):
    pow2 = 1
    while pow2 < n:
        pow2 *= 2
    sqrt = 0
    while pow2:
        if (sqrt + pow2) ** 2 <= n:
            sqrt += pow2
        pow2 //= 2
    return sqrt

factorial = 1
for n in range(1, 101):
    factorial *= n
    if intsqrt(factorial + 1) ** 2 == factorial + 1:
        print(n)
The number math.sqrt returns is never an int, even if its value is a whole number. See also: How to check if a float value is a whole number.

Python Pi approximation

So I have to approximate Pi in the following way: 4*(1 - 1/3 + 1/5 - 1/7 + 1/9 - ...). It should also be based on the number of iterations. So the function should look like this:
>>> piApprox(1)
4.0
>>> piApprox(10)
3.04183961893
>>> piApprox(300)
3.13825932952
But it works like this:
>>> piApprox(1)
4.0
>>> piApprox(10)
2.8571428571428577
>>> piApprox(300)
2.673322240709928
What am I doing wrong? Here is the code:
def piApprox(num):
    pi=4.0
    k=1.0
    est=1.0
    while 1<num:
        k+=2
        est=est-(1/k)+1/(k+2)
        num=num-1
    return pi*est
This is what you're computing:
4*(1-1/3+1/5-1/5+1/7-1/7+1/9...)
You can fix it just by adding a k += 2 at the end of your loop:
def piApprox(num):
    pi=4.0
    k=1.0
    est=1.0
    while 1<num:
        k+=2
        est=est-(1/k)+1/(k+2)
        num=num-1
        k+=2
    return pi*est
Also, the way you're counting your iterations is wrong, since you're adding two terms at a time.
This is a cleaner version that returns the output that you expect for 10 and 300 iterations:
def approximate_pi(rank):
    value = 0
    for k in xrange(1, 2*rank+1, 2):
        sign = -(k % 4 - 2)
        value += float(sign) / k
    return 4 * value
Here is the same code but more compact:
def approximate_pi(rank):
return 4 * sum(-float(k%4 - 2) / k for k in xrange(1, 2*rank+1, 2))
Important edit:
whoever expects this approximation to yield π quickly -- a quote from Wikipedia:
"It converges quite slowly, though – after 500,000 terms, it produces only five correct decimal digits of π."
Original answer:
This is an educational example. You try to use a shortcut and attempt to implement the "oscillating" sign of the summands by handling two steps for k in the same iteration. However, you adjust k only by one step per iteration.
Usually, in math at least, an oscillating sign is achieved with (-1)**i. So, I have chosen this for a more readable implementation:
def pi_approx(num_iterations):
    k = 3.0
    s = 1.0
    for i in range(num_iterations):
        s = s-((1/k) * (-1)**i)
        k += 2
    return 4 * s
As you can see, I have changed your approach a bit, to improve readability. There is no need for you to check num in a while loop, and there is no particular need for your pi variable. Your est actually is a sum that grows step by step, so why not call it s (sum is a built-in function in Python, so best not to shadow that name). Just multiply the sum by 4 at the end, according to your formula.
Test:
>>> pi_approx(100)
3.1514934010709914
The convergence, however, is not especially good:
>>> pi_approx(100) - math.pi
0.009900747481198291
Your expected output is flaky somehow, because your piApprox(300) (which should be 3.13825932952, according to you) is too far away from PI. How did you come up with that? Is it possibly affected by an accumulated numerical error?
Edit
I would not trust the book too much in regard to what the function should return after 10 and 300 iterations. The intermediate result after 10 steps should indeed be rather free of numerical errors. There, it actually makes a difference whether you take two steps of k at a time or not. So this most likely is the difference between my pi_approx(10) and the book's. For 300 iterations, numerical error might have severely affected the result in the book. If this is an old book, and they implemented their example in C, possibly using single precision, then a significant portion of the result may be due to accumulated numerical error (note: this is a prime example of how badly you can be affected by numerical errors: a repeated sum of small and large values; it does not get much worse than that).
What counts is that you have looked at the math (the formula for PI), and you have implemented a working Python version of approximating that formula. That was the learning goal of the book, so go ahead and tackle the next problem :-).
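As a quick numerical check of the Wikipedia quote above (not part of the original answers): the error of the truncated Leibniz series is bounded by the first omitted term, 4/(2N+1), which for N = 500,000 is about 4e-6, so only five or six decimals can be correct.
import math

N = 500_000
approx = 4 * sum((-1)**k / (2*k + 1) for k in range(N))
print(approx, abs(approx - math.pi))  # the difference is on the order of 1e-6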
def piApprox(num):
    pi=4.0
    k=3.0
    est=1.0
    while 1<num:
        est=est-(1/k)+1/(k+2)
        num=num-1
        k+=4
    return pi*est
Also, for a real task, use math.pi.
Here is a slightly simpler version:
def pi_approx(num_terms):
sign = 1. # +1. or -1.
pi_by_4 = 1. # first term
for div in range(3, 2 * num_terms, 2): # 3, 5, 7, ...
sign = -sign # flip sign
pi_by_4 += sign / div # add next term
return 4. * pi_by_4
which gives
>>> for n in [1, 10, 300, 1000, 3000]:
... print(pi_approx(n))
4.0
3.0418396189294032
3.1382593295155914
3.140592653839794
3.1412593202657186
While all of these answers are perfectly good approximations, if you are using the Madhava-Leibniz series then you should arrive at "an approximation of π correct to 11 decimal places as 3.14159265359" within the first 21 terms, according to this page: https://en.wikipedia.org/wiki/Approximations_of_%CF%80
Therefore, a more accurate solution could be any variation of this:
import math

def estimate_pi(terms):
    ans = 0.0
    for k in range(terms):
        ans += (-1.0/3.0)**k/(2.0*k+1.0)
    return math.sqrt(12)*ans

print(estimate_pi(21))
Output: 3.141592653595635

Numerical accuracy loss in Python

I wish to calculate the standard error of a series of numbers. Suppose the numbers are x[i] where i = 1 ... N. To do this
I set
averageX = 0.0
averageXSquared = 0.0
I then loop over all i=1,...N and for each I calculate
averageX += x[i]
averageXSquared += x[i]**2
I then divide by N
averageX = averageX / N
averageXSquared = averageXSquared/N
I then take the square root of the difference
stdX = math.sqrt(averageXSquared - averageX * averageX)
The argument here is sure to always be >=0.
However if I set all x[i] = 0.07 (for example) then I get a math domain error as the argument of the root function is negative. There seems to be some loss of precision.
The argument is of the order of 10e-15.
This does not look encouraging. I now have to check myself to see if the result is negative before taking the root.
Or have I done something wrong?
This is not a Python problem, but a problem with finite precision in general. If you set all numbers to the same value, the standard error is mathematically 0, but not for a computer. A practical way to handle this is to clamp values just below 0 to 0:
import math

x = [0.7, 0.7, 0.7]
average = sum(x) / len(x)
sqav = sum(y**2 for y in x) / len(x)
stderr = math.sqrt(max(sqav - average**2, 0))
The correct way, of course, is never to subtract large, nearly equal numbers. Make another pass, which guarantees non-negativity (you need to do some algebra to realize that the result is mathematically the same):
y = [ v - average for v in x ]
dev = sum(v*v for v in y) / len(x)
stderr = math.sqrt(dev)
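If a second pass over the data is not an option, a numerically stable single-pass alternative is Welford's online algorithm (a sketch, not part of the original answer; online_std is an illustrative name):
import math

def online_std(xs):
    # Welford's algorithm: one pass, avoids the catastrophic cancellation above
    mean = 0.0
    m2 = 0.0  # running sum of squared deviations from the current mean
    for n, v in enumerate(xs, start=1):
        delta = v - mean
        mean += delta / n
        m2 += delta * (v - mean)
    return math.sqrt(m2 / n)  # population standard deviation

print(online_std([0.07] * 10))  # 0.0, with no math domain error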
