I wrote the following script:
import numpy
d = numpy.array([[1089, 1093]])
e = numpy.array([[1000, 4443]])
answer = numpy.exp(-3 * d)
answer1 = numpy.exp(-3 * e)
res = answer.sum()/answer1.sum()
print res
But I got this result, along with a warning:
nan
C:\Users\Desktop\test.py:16: RuntimeWarning: invalid value encountered in double_scalars
res = answer.sum()/answer1.sum()
It seems that the input values were so small that Python rounded them to zero, even though the division itself should have a well-defined result.
How to solve this kind of problem?
You can't solve it as written: answer1.sum() == 0, and you can't perform a division by zero.
This happens because answer1 is the exponential of 2 very large, negative numbers, so that the result is rounded to zero.
nan is returned in this case because of the division by zero.
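For instance, a quick illustrative check of the underflow (not part of the original post):
import numpy as np

# exp() of a large negative argument underflows to exactly 0.0 in float64
print(np.exp(-3 * 1000.0))            # 0.0
print(np.exp(-3 * 1000.0) == 0.0)     # True, so the denominator sum is 0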
Now to solve your problem you could:
go for a library for high-precision mathematics, like mpmath. But that's less fun.
as an alternative to a bigger weapon, do some math manipulation, as detailed below.
go for a tailored scipy/numpy function that does exactly what you want! Check out @Warren Weckesser's answer.
Here I explain how to do some math manipulation that helps with this problem. For the numerator we have:
exp(-x)+exp(-y) = exp(log(exp(-x)+exp(-y)))
= exp(log(exp(-x)*[1+exp(-y+x)]))
= exp(log(exp(-x)) + log(1+exp(-y+x)))
= exp(-x + log(1+exp(-y+x)))
where above x = 3*1089 and y = 3*1093. Now, the argument of this exponential is
-x + log(1+exp(-y+x)) = -x + 6.1441934777474324e-06
For the denominator you can proceed similarly, but you find that log(1+exp(-k+z)) is already rounded to 0 (here z = 3*1000 and k = 3*4443), so that the argument of the exponential function in the denominator is simply rounded to -z = -3000. You then have that your result is
exp(-x + log(1+exp(-y+x)))/exp(-z) = exp(-x+z+log(1+exp(-y+x)))
= exp(-266.99999385580668)
which is already extremely close to the result that you would get if you were to keep only the 2 leading terms (i.e. the first number 1089 in the numerator and the first number 1000 at the denominator):
exp(-3*(1089-1000)) = exp(-267)
For the sake of it, let's see how close we are to the solution from Wolfram Alpha (link):
Log[(exp[-3*1089]+exp[-3*1093])/(exp[-3*1000]+exp[-3*4443])] -> -266.999993855806522267194565420933791813296828742310997510523
The difference between this number and the exponent above is +1.7053025658242404e-13, so the approximation we made at the denominator was fine.
The final result is
exp(-266.99999385580668) = 1.1050349147204485e-116
From Wolfram Alpha it is (link)
1.105034914720621496.. × 10^-116 # Wolfram alpha.
and again, it is safe to use numpy here too.
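For completeness, here is a minimal numpy sketch of the manipulation above (variable names mirror the derivation; this is illustrative only, not the only way to arrange it):
import numpy as np

x, y = 3*1089, 3*1093   # numerator exponents
z, k = 3*1000, 3*4443   # denominator exponents

# log of numerator and denominator, never forming exp(-x) etc. directly
log_num = -x + np.log1p(np.exp(-(y - x)))
log_den = -z + np.log1p(np.exp(-(k - z)))   # the log1p term underflows to 0 here

res = np.exp(log_num - log_den)
print(res)   # ~1.105e-116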
You can use np.logaddexp (which implements the idea in @gg349's answer):
In [33]: d = np.array([[1089, 1093]])
In [34]: e = np.array([[1000, 4443]])
In [35]: log_res = np.logaddexp(-3*d[0,0], -3*d[0,1]) - np.logaddexp(-3*e[0,0], -3*e[0,1])
In [36]: log_res
Out[36]: -266.99999385580668
In [37]: res = np.exp(log_res)
In [38]: res
Out[38]: 1.1050349147204485e-116
Or you can use scipy.special.logsumexp:
In [52]: from scipy.special import logsumexp
In [53]: res = np.exp(logsumexp(-3*d) - logsumexp(-3*e))
In [54]: res
Out[54]: 1.1050349147204485e-116
Recently we encountered an issue with math.log(). Since 243 is a perfect power of 3, we assumed that taking the floor of the result would be fine, but that assumption was wrong: the result has a precision error on the low side.
So as a hack we started adding a small value before taking the logarithm. Is there a way to configure math.log upfront, or something similar, so that we don't have to add EPS every time?
To clarify some of the comments: note that we are not looking to round to the nearest integer. Our goal is to keep the value exact, or at times take the floor. But if the precision error is on the low side, the floor goes badly wrong, and that is what we are trying to avoid.
code:
import math
math.log(243, 3)
int(math.log(243, 3))
output:
4.999999999999999
4
code:
import math
EPS = 1e-09
math.log(243 + EPS, 3)
int(math.log(243 + EPS, 3))
output:
5.0000000000037454
5
Instead of trying to patch the float result, it might be easier to solve this iteratively, taking advantage of Python's arbitrary-precision integers. This way you can avoid the float domain, and its associated precision loss, entirely.
Here's a rough attempt:
def ilog(a: int, p: int) -> tuple[int, bool]:
    """
    find the largest b such that p ** b <= a
    return tuple of (b, exact)
    """
    if p == 1:
        return a, True
    b = 0
    x = 1
    while x < a:
        x *= p
        b += 1
    if x == a:
        return b, True
    else:
        return b - 1, False
There are plenty of opportunities for optimization if this is too slow (consider Newton's method, binary search...)
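For example, a minimal binary-search sketch of the same idea (ilog_bisect is a hypothetical name; it assumes a >= 1 and p >= 2):
def ilog_bisect(a: int, p: int) -> tuple[int, bool]:
    """Binary-search variant: largest b with p ** b <= a (assumes a >= 1, p >= 2)."""
    lo, hi = 0, 1
    while p ** hi <= a:          # find an upper bracket by doubling
        hi *= 2
    while lo + 1 < hi:           # invariant: p**lo <= a < p**hi
        mid = (lo + hi) // 2
        if p ** mid <= a:
            lo = mid
        else:
            hi = mid
    return lo, p ** lo == a

print(ilog_bisect(243, 3))   # (5, True)
print(ilog_bisect(244, 3))   # (5, False)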
How about this? Is this what you are looking for?
import math

def ilog(a: int, p: int) -> int:
    """
    find the largest b such that p ** b <= a
    """
    float_log = math.log(a, p)
    if p ** (candidate := math.ceil(float_log)) <= a:
        return candidate
    return int(float_log)
print(ilog(243, 3))
print(ilog(3**31, 3))
print(ilog(8,2))
Output:
5
31
3
You can use decimals and play with precision and rounding instead of floats in this case
Like this:
from decimal import Decimal, Context, ROUND_HALF_UP, ROUND_HALF_DOWN
ctx1 = Context(prec=20, rounding=ROUND_HALF_UP)
ctx2 = Context(prec=20, rounding=ROUND_HALF_DOWN)
ctx1.divide(Decimal(243).ln(ctx1), Decimal(3).ln(ctx2))
Output:
Decimal('5')
First, the rounding works like the epsilon: the numerator is rounded up and the denominator down, so you always get a slightly higher answer.
Second, you can adjust the precision you need.
However, fundamentally the problem is unsolvable.
I'm completing the 56th question on Project Euler:
A googol (10^100) is a massive number: one followed by one-hundred zeros; 100^100 is almost unimaginably large: one followed by two-hundred zeros. Despite their size, the sum of the digits in each number is only 1.
Considering natural numbers of the form a^b, where a, b < 100, what is the maximum digital sum?
I wrote this code, which gives the wrong answer:
import math
value = 0
a = 1
b = 1
while a < 100:
    b = 1
    while b < 100:
        result = int(math.pow(a,b))
        x = [int(a) for a in str(result)]
        if sum(x) > value:
            value = sum(x)
        b = b + 1
    a = a + 1
print(value)
input("")
My code outputs 978, whereas the correct answer is
972
I already know the actual approach, but I don't know why my reasoning is incorrect. The value that gives me the greatest digital sum seems to be 88^99, but adding together each digit of that result gives me 978. What am I misinterpreting in the question?
math.pow uses floating point numbers internally:
Unlike the built-in ** operator, math.pow() converts both its arguments to type float.
Note that Python integers have no size restriction, so there is no problem computing 88 ** 99 this way:
>>> from math import pow
>>> pow(88, 99)
3.1899548991064687e+192
>>> 88**99
3189954899106468738519431331435374548486457306565071277011188404860475359372836550565046276541670202826515718633320519821593616663471686151960018780508843851702573924250277584030257178740785152
And indeed, this is exactly what the documentation recommends:
Use ** or the built-in pow() function for computing exact integer powers.
The result computed using math.pow will be slightly different, due to the lack of precision of floating-point values:
>>> int(pow(88, 99))
3189954899106468677983468001676918389478607432406058411199358053788184470654582587122118156926366989707830958889227847846886750593566290713618113587727930256898153980172821794148406939795587072
It so happens that the sum of these digits is 978.
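For illustration, a small digit-sum comparison (digit_sum is a hypothetical helper, not from the original answer):
from math import pow

def digit_sum(n: int) -> int:
    # sum the decimal digits of an integer
    return sum(int(c) for c in str(n))

print(digit_sum(int(pow(88, 99))))   # 978, from the imprecise float result
print(digit_sum(88 ** 99))           # the exact digit sum, from integer arithmetic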
I'm doing complex division in a context where numerical precision really matters. I find that dividing two complex128 numbers with no imaginary part gives a different result beyond 15 decimal digits than dividing the same two numbers as float64.
a = np.float64(1.501)
b = np.float64(1.337)
print('{:.20f}'.format(a / b))
# 1.12266267763649962852
a_com = np.complex128(1.501)
b_com = np.complex128(1.337)
print('{:.20f}'.format((a_com / b_com).real))
# 1.12266267763649940647
I have a C++ reference implementation where complex division agrees with NumPy float division beyond 15 decimal digits. I'd like to use NumPy complex division with the same precision. Is there a way to accomplish that?
This seems to work:
import numpy as np

def compl_div(A, B):
    A, B = np.asarray(A), np.asarray(B)
    Ba = np.abs(B)[..., None]
    A = (A[..., None].view(float)/Ba).view(complex)[..., 0]
    B = (B.conj()[..., None].view(float)/Ba).view(complex)[..., 0]
    return A*B
a = np.random.randn(10000)
b = np.random.randn(10000)
A = a.astype(complex)
B = b.astype(complex)
print((compl_div(A,B)==a/b).all())
print((np.sqrt(b*b)==np.abs(b)).all())
ac = a.view(complex)
bc = b.view(complex)
print(np.allclose(compl_div(ac,bc),ac/bc))
Sample run:
True # complex division without imaginary part exactly equals float division
True # the reason it works (sqrt(b*b) == abs(b))
True # for nonzero imaginary parts we do actually get complex division
Explanation:
Let us write /// for complex-by-float division: (x+iy)///r = x/r + iy/r.
numpy seems to implement complex division A/B as A*(1/B) (1/B can be computed as B.conj()///(B.conj()*B)); indeed A/B appears to always equal A*(1/B).
We do instead (A///abs(B)) * (B.conj()///abs(B)); as abs(B)^2 = B*B.conj(), this is mathematically, but not necessarily numerically, equivalent.
Now, if we had abs(B) == abs(b), then A///abs(B) = a/abs(b) and B///abs(B) = sign(b), and we could see that compl_div(A,B) indeed gives back exactly a/b.
As abs(x+iy) = sqrt(x^2+y^2), we need to show sqrt(b*b) == abs(b). This is provably true unless there is overflow or underflow in the square, or the square is denormal, or the implementation does not conform to IEEE.
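As a quick illustration, here is compl_div applied to the scalars from the question (a sketch; the exact match with float64 division is precisely what the construction above is designed to give):
a = np.float64(1.501)
b = np.float64(1.337)

res = compl_div(np.complex128(a), np.complex128(b))
print('{:.20f}'.format(a / b))              # plain float64 division
print('{:.20f}'.format(float(res.real)))    # expected to match the line above exactly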
My homework is to write code containing a function that calculates sin(x) using its Taylor series and returns the value.
The function must take (n, k), where n is the angle whose sine is requested and k is the number of digits to calculate after the decimal point.
First I neglected k, since it's easy to limit the digits after the decimal point, and wrote a function that just calculates the Taylor series for sin, so I gave it a specific range for r (r is each term of the Taylor series):
def taylor(n,k):
    s = ((math.pi)/180)*n
    ex = s
    sign = 1
    factorial = 1
    sum = 0
    i = 1
    r = 1
    while r > 0.00000000000000000001 or r < 0.0000000000000000000001:
        r = ex*sign/factorial
        ex = ex*s*s
        sign = sign*(-1)
        factorial = factorial*(i+1)*(i+2)
        i = i+2
        sum = sum + r
    return sum

import math
print(taylor(45,1))
I just don't know why, if I set the bound on r larger than this (e.g. 0.1), I get this error:
Traceback (most recent call last):
File "/Users/modern/Desktop/taylor.py", line 22, in <module>
print(taylor(45))
File "/Users/modern/Desktop/taylor.py", line 12, in taylor
r= ex*sign/factorial
OverflowError: int too large to convert to float
I'm surprised that this is an issue since I would think that r gets below the error tolerance before it is a problem.
Note that what you really need is the reciprocal of the factorial. Instead of having a factorial variable that you divide by, you could have a variable, say, fact_recip which is initialized as
fact_recip = 1.0
used like r = ex*sign*fact_recip
And updated via
fact_recip /= ((i+1)*(i+2))
This will handle the error that you are seeing, but I'm not sure whether round-off error would then be an issue.
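For concreteness, a minimal sketch of the loop rewritten with the reciprocal factorial (the variable names and the simplified stopping test are illustrative, not from the original post):
import math

def taylor(n, k):
    s = (math.pi/180) * n
    ex = s
    sign = 1
    fact_recip = 1.0          # reciprocal of the factorial, kept as a float
    total = 0.0
    i = 1
    r = 1.0
    while abs(r) > 1e-20:
        r = ex * sign * fact_recip
        ex = ex * s * s
        sign = -sign
        fact_recip /= (i + 1) * (i + 2)
        i += 2
        total += r
    return total

print(taylor(45, 1))          # ~0.7071067811865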
You can handle your input with a leading question and a split, as @John Coleman suggested, although I'd do the assignment as a pair:
nums = input("Enter n, k, separated by a space")
n, k = nums.split()
Here is the cleaned-up program: the factor updates -- especially the factorial -- are reduced to changes from the previous term. I also tidied up your loop limits to be more readable.
def taylor(n,k):
    s = (math.pi/180)*n
    s2 = s*s
    sum = s
    i = 1
    r = s
    converge = 1.0E-20
    while r > converge or r < converge / 100:
        r *= -s2/((i+1)*(i+2))
        sum += r
        i = i+2
    return sum

import math
print(taylor(45,1))
I'm not sure what you mean by
if I set the bound on r larger than this (e.g. 0.1)
and the condition of your while loop looks strange, but as R Nar pointed out, the error is caused by your value of factorial getting too large. I would not recommend using decimal, however, since it is really slow. Rather, take a look at gmpy, which is built to provide (really) fast arbitrary-precision math.
Alternatively you could use Stirling's approximation for calculating the large factorials.
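For reference, a minimal sketch of Stirling's approximation for ln(n!) (a standard formula; the helper name is illustrative, not from the answer above):
import math

def ln_factorial_stirling(n: int) -> float:
    # Stirling's approximation: ln(n!) ~ n*ln(n) - n + 0.5*ln(2*pi*n)
    return n * math.log(n) - n + 0.5 * math.log(2 * math.pi * n)

print(ln_factorial_stirling(20))   # ~42.33
print(math.lgamma(21))             # ln(20!) from the standard library, ~42.3356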
Does anyone know why the below doesn't equal 0?
import numpy as np
np.sin(np.radians(180))
or:
np.sin(np.pi)
When I enter it into python it gives me 1.22e-16.
The number π cannot be represented exactly as a floating-point number. So, np.radians(180) doesn't give you π, it gives you 3.1415926535897931.
And sin(3.1415926535897931) is in fact something like 1.22e-16.
So, how do you deal with this?
You have to work out, or at least guess at, appropriate absolute and/or relative error bounds, and then instead of x == y, you write:
abs(y - x) < abs_bounds and abs(y-x) < rel_bounds * y
(This also means that you have to organize your computation so that the relative error is larger relative to y than to x. In your case, because y is the constant 0, that's trivial—just do it backward.)
Numpy provides a function that does this for you across a whole array, allclose:
np.allclose(x, y, rel_bounds, abs_bounds)
(This actually checks abs(y - x) < abs_bounds + rel_bounds * y, but that's almost always sufficient, and you can easily reorganize your code when it's not.)
In your case:
np.allclose(0, np.sin(np.radians(180)), rel_bounds, abs_bounds)
So, how do you know what the right bounds are? There's no way to teach you enough error analysis in an SO answer. Propagation of uncertainty at Wikipedia gives a high-level overview. If you really have no clue, you can use the defaults, which are 1e-5 relative and 1e-8 absolute.
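For example, with the default tolerances (a small illustrative check, not part of the original answer):
import numpy as np

print(np.sin(np.radians(180)))                  # ~1.22e-16, not exactly 0
print(np.allclose(np.sin(np.radians(180)), 0))  # True with default rtol=1e-05, atol=1e-08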
One solution is to switch to sympy when calculating sin's and cos's, then to switch back to numpy using sp.N(...) function:
>>> # Numpy not exactly zero
>>> import numpy as np
>>> value = np.cos(np.pi/2)
>>> value
6.123233995736766e-17
# Sympy workaround
>>> import sympy as sp
>>> def scos(x): return sp.N(sp.cos(x))
>>> def ssin(x): return sp.N(sp.sin(x))
>>> value = scos(sp.pi/2)
>>> value
0
Just remember to use sp.pi instead of np.pi when using the scos and ssin functions.
I faced the same problem:
import numpy as np
import math
print(np.cos(math.radians(90)))
>> 6.123233995736766e-17
and tried this,
print(np.around(np.cos(math.radians(90)), decimals=5))
>> 0
It worked in my case. I set decimals to 5 so as not to lose too much information. As you would expect, the rounding discards everything after the 5th decimal digit.
Try this... it zeros anything below a given tiny-ness value...
import numpy as np

def zero_tiny(x, threshold):
    if x.dtype == complex:
        x_real = x.real
        x_imag = x.imag
        if np.abs(x_real) < threshold: x_real = 0
        if np.abs(x_imag) < threshold: x_imag = 0
        return x_real + 1j*x_imag
    else:
        return x if np.abs(x) > threshold else 0
value = np.cos(np.pi/2)
print(value)
value = zero_tiny(value, 10e-10)
print(value)
value = np.exp(-1j*np.pi/2)
print(value)
value = zero_tiny(value, 10e-10)
print(value)
Python uses the normal Taylor expansion to evaluate its trig functions, and since this expansion has infinitely many terms, its results never reach the exact value; they only approximate it.
For example:
sin(x) = x - x³/3! + x⁵/5! - ...
=> sin(180) = 180 - ... is never 0, but it approaches 0.
That is my own reasoning, by way of proof.
Simple.
np.sin(np.pi).astype(int)
np.sin(np.pi/2).astype(int)
np.sin(3 * np.pi / 2).astype(int)
np.sin(2 * np.pi).astype(int)
returns
0
1
-1
0