Error 34, Result too large - python

I am trying to print the Fibonacci sequence and it always raises an OverflowError after about the 600th term.
def fib():
    import math
    from math import sqrt
    print "\nFibonacci Sequence up to the term of what?"
    n = raw_input()
    if n.isdigit():
        if int(n) == 0:
            return 0
        elif int(n) == 1:
            return 1
        else:
            n_count = 2
            print "\n0\n1"
            while n_count < int(n):
                fib=int(((1+sqrt(5))**n_count-(1-sqrt(5))**n_count)/(2**n_count*sqrt(5)))
                print fib
                n_count += 1
            fib()
    else:
        print "\nPlease enter a number."
        fib()
fib()
When I run this:
Traceback (most recent call last):
  File "<pyshell#21>", line 1, in <module>
    fib()
  File "<pyshell#20>", line 15, in fib
    fib=int(((1+sqrt(5))**n_count-(1-sqrt(5))**n_count)/(2**n_count*sqrt(5)))
OverflowError: (34, 'Result too large')

Well, first, let's split that big expression up into smaller ones, so we can see where it's going wrong. And use a debugger or some print statements to see what values are making it go wrong. That way, we're not just taking a stab in the dark.
If you do that, you can tell that (1+sqrt(5))**n_count is raising this exception when n_count hits 605. Which you can verify pretty easily:
>>> (1+sqrt(5))**604
1.1237044275099689e+308
>>> (1+sqrt(5))**605
OverflowError: (34, 'Result too large')
So, why is that a problem?
Well, Python float values, unlike its integers, aren't arbitrary-sized; they can only hold what an IEEE double can hold:*
>>> 1e308
1e308
>>> 1e309
inf
So, the problem is that one of the terms in your equation is greater than the largest possible IEEE double.
That means you either need to pick a different algorithm,** or get a "big-float" library.
As it happens, Python has a built-in big-float library, in the decimal module. Of course as the name implies, it handles decimal floats, not binary floats, so you'll get different rounding errors if you use it. But you presumably don't care much about rounding errors, given your code.
So:
import decimal
s5 = decimal.Decimal(5).sqrt()
… then …
fib=int(((1+s5)**n_count-(1-s5)**n_count)/(2**n_count*s5))
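Put together, a minimal runnable sketch (Python 2, to match the question; the 200-digit precision and the 700-term cutoff are my own choices, not from the answer):
import decimal

decimal.getcontext().prec = 200  # enough digits: F(700) has about 147
s5 = decimal.Decimal(5).sqrt()

n_count = 2
while n_count < 700:
    x = ((1 + s5)**n_count - (1 - s5)**n_count) / (2**n_count * s5)
    print int(x.to_integral_value())  # round to the nearest integer first
    n_count += 1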
* In fact, the limits are platform-specific; implementations aren't required to use IEEE doubles for float. So, use sys.float_info to see the max value for your platform. But it's almost always going to be 1.7976931348623157e+308.
** Note that the only advantage of the algorithm you're using over the naive one is that it allows you to approximate the Nth Fibonacci number directly, without calculating the preceding N-1. But since you want to print them all out anyway, you're not getting any advantage. You're just getting the disadvantages—it's an approximation; it's more complicated; it requires floating-point math, which is subject to rounding error; it's slower; it takes more memory; the built-in floating-point types in most languages on most platforms can't hold F(605), … All that for no benefit doesn't seem worth it.

As abarnert pointed out, floats are limited in Python. You can see the limit with:
>>> import sys
>>> sys.float_info
sys.float_info(max=1.7976931348623157e+308, max_exp=1024, max_10_exp=308, min=2.2250738585072014e-308, min_exp=-1021, min_10_exp=-307, dig=15, mant_dig=53, epsilon=2.220446049250313e-16, radix=2, rounds=1)
You will reach some more terms if you divide by 2 before raising to the power:
int(( ((1+sqrt(5))/2)**n_count - ((1-sqrt(5))/2)**n_count)/sqrt(5))
But since the Fibonacci sequence grows exponentially, you will soon hit the same wall. Try computing Fibonacci by holding the last two terms and adding them; that way you will use ints, as in the sketch below.
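For instance, a minimal sketch of that integer-only approach (the function name print_fib is my own):
def print_fib(n):
    # print the first n Fibonacci numbers using only integer addition;
    # Python ints are arbitrary-precision, so this never overflows
    a, b = 0, 1
    for _ in range(n):
        print(a)
        a, b = b, a + b

print_fib(700)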

Related

How to disable python rounding?

I'm looking for a way to disable this:
print(0+1e-20) returns 1e-20, but print(1+1e-20) returns 1.0
I want it to return something like 1+1e-20.
I need it because of this problem:
from numpy import sqrt

def f1(x):
    return 1/((x+1)*sqrt(x))

def f2(x):
    return 1/((x+2)*sqrt(x+1))

def f3(x):
    return f2(x-1)

print(f1(1e-6))
print(f3(1e-6))
print(f1(1e-20))
print(f3(1e-20))
returns
999.9990000010001
999.998999986622
10000000000.0
main.py:10: RuntimeWarning: divide by zero encountered in double_scalars
  return 1/((x+2)*sqrt(x+1))
inf
f1 is the original function, f2 is f1 shifted by 1 to the left and f3 is f2 moved back by 1 to the right. By this logic, f1 and f3 should be equal to each other, but this is not the case.
I know about decimal.Decimal, and it doesn't work, because decimal doesn't support some functions, including sin. If you could somehow make Decimal work for all functions, I'd like to know how.
Can't be done. There is no rounding - that would imply that the exact result ever existed, which it did not. Floating-point numbers have limited precision. The easiest way to envision this is an analogy with art and computer screens. Imagine someone making a fabulously detailed painting, and all you have is a 1024x768 screen to view it through. If a microscopic dot is added to the painting, the image on the screen might not change at all. Maybe you need a 4K screen instead.
In Python, the closest representable number after 1.0 is 1.0000000000000002 (*), according to math.nextafter(1.0, math.inf) (Python 3.9+ required for math.nextafter). 1e-20 and 1 are too different in magnitude, so the result of their addition cannot be represented by Python floating-point numbers, which are precise to only about 16 significant digits.
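You can check both claims in an interactive session (math.nextafter needs Python 3.9+):
>>> import math
>>> math.nextafter(1.0, math.inf)
1.0000000000000002
>>> 1 + 1e-20 == 1.0
True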
See Is floating point math broken? for an in-depth explanation of the cause.
As this answer suggests, there are libraries like mpmath that implement arbitrary-precision arithmetic:
from mpmath import mp, mpf
mp.dps = 25 # set precision to 25 decimal digits
mp.sin(1)
# => mpf('0.8414709848078965066525023183')
mp.sin(1 + mpf('1e-20')) # mpf is a constructor for mpmath floats
# => mpf('0.8414709848078965066579053457')
mpmath floats are sticky; if you add an int and an mpf you get an mpf, so I did not have to write mp.mpf(1). The result is still not precise, but you can select whatever precision is sufficient for your needs. Also, note that the difference between these two results is, again, too small to be representable by Python's floating-point numbers, so if the difference is meaningful to you, you have to keep it in mpmath land:
float(mpf('0.8414709848078965066525023183')) == float(mpf('0.8414709848078965066579053457'))
# => True
(*) This is actually a white lie. The next number after 1.0 is 0x1.0000000000001p+0, or 0b1.0000000000000000000000000000000000000000000000000001 (fifty-one zeros, then a one), but Python doesn't accept hexadecimal or binary float literals. 1.0000000000000002 is Python's approximation of that number for your decimal convenience.
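You can see that hexadecimal form yourself with float.hex:
>>> (1.0000000000000002).hex()
'0x1.0000000000001p+0'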
As others have stated, in general this can't be done (due to how computers commonly represent numbers).
It's common to work with the precision you've got; ensuring that algorithms are numerically stable can be awkward.
In this case I'd redefine f1 to work on the logarithms of numbers, e.g.:
from numpy import exp, log, log1p

def f1(x):
    # prod = -log(f1(x)) = log1p(x) + log(x)/2; working in log space
    # avoids the underflow that kills the direct formula at tiny x
    prod = log1p(x) + log(x) / 2
    return exp(-prod)
You might need to alter other parts of the code to work in log space as well depending on what you need to do. Note that most stats algorithms work with log-probabilities because it's much more compatible with how computers represent numbers.
f3 is a bit more work due to the subtraction.
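A quick sanity check of the log-space f1, assuming the definitions above:
print(f1(1e-20))  # ~1e10, matching the direct formula, with no underflow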

Question on Python treatment of numpy.int32 vs int

In coding up a simple Fibonacci script, I found some 'odd' behaviour in how Python treats numpy.int32 vs how it treats regular int numbers.
Can anyone help me understand what causes this behaviour?
Using the Fibonacci code as follows, leveraging caching to significantly speed things up:
from functools import lru_cache
import numpy as np

@lru_cache(maxsize=None)
def fibo(n):
    if n <= 1:
        return n
    else:
        return fibo(n-1)+fibo(n-2)
If I define a NumPy array of numbers to calculate over (with np.arange), it all works well until n = 47, then things start going haywire. If, on the other hand, I use a regular Python list, then the values are all correctly calculated.
You should be able to see the difference with the following;
fibo(np.int32(47)), fibo(47)
Which should return (at least it does for me):
(-1323752223, 2971215073)
Obviously, something very wrong has occurred with the calculations against the numpy.int32 input. Now, I can get around the issue by simply inserting an 'n = int(n)' line in the fibo function before anything else is evaluated, but I don't understand why this is necessary.
I've also tried np.int(47) instead of np.int32(47), and found that the former works just fine. However, using np.arange to create the array seems to default to np.int32 data type.
I've tried removing the caching (I wouldn't recommend you try - it takes around 2 hours to calculate to n = 47) - and I get the same behaviour, so that is not the cause.
Can anyone shed some insight into this for me?
Thanks
Python's "integers have unlimited precision". This was built into the language so that new users have "one less thing to learn".
Though maybe not in your case, or for anyone using NumPy. That library is designed to make computations as fast as possible. It therefore uses data types that are well supported by the CPU architecture, such as 32-bit and 64-bit integers that neatly fit into a CPU register and have an invariable memory footprint.
But then we're back to dealing with overflow problems like in any other programming language. NumPy does warn about that though:
>>> print(fibo(np.int32(47)))
fib.py:9: RuntimeWarning: overflow encountered in long_scalars
  return fibo(n-1)+fibo(n-2)
-1323752223
Here we are using a signed 32-bit integer. The largest positive number it can hold is 2**31 - 1 = 2147483647. But the 47th Fibonacci number is even larger than that: it's 2971215073, as you calculated. In that case, the 32-bit integer overflows and we end up with -1323752223, which is its two's complement:
>>> 2971215073 + 1323752223 == 2**32
True
It worked with np.int because that's just an alias of the built-in int, so it returns a Python integer:
>>> np.int is int
True
For more on this, see: What is the difference between native int type and the numpy.int types?
Also note that np.arange for integer arguments returns an integer array of type np.int_ (with a trailing underscore, unlike np.int). That data type is platform-dependent and maps to 32-bit integers on Windows, but 64-bit on Linux.
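To see the wraparound and the dtype point in isolation (a small sketch; the exact warning text varies across NumPy versions):
import numpy as np

x = np.int32(2**31 - 1)    # largest value a signed 32-bit integer can hold
print(x + np.int32(1))     # wraps around to -2147483648, with a RuntimeWarning
print(np.arange(3).dtype)  # platform-dependent: int32 on Windows, int64 on typical Linux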

Find the base of the power and power with given base in python

I have a number N, let's say 5451, and I want to find the base for power 50. How do I do it with Python?
The answer is 1.1877622648368 according to this website (General Root calculator), but I need to find it with computation.
Second question: if I have the number N = 5451 and I know the base 1.1877622648368, how do I find the power?
Taking the n-th root is the same as raising to the power 1/n. This is very simple to do in Python using the ** operator (exponentiation):
>>> 5451**(1/50)
1.1877622648368031
The second one requires a logarithm, which is the inverse of exponentiation. You want to take the base-1.1877622648368031 logarithm of 5451, which is done with the log function in the math module:
>>> import math
>>> math.log(5451, 1.1877622648368031)
50.00000000000001
As you can see, there's some roundoff error. This is unavoidable when working with limited-precision floating point numbers. Apply the round() function to the result if you know that it must be an integer.
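Wrapped up as functions (the names nth_root and find_power are my own):
import math

def nth_root(n, k):
    # the base b such that b**k == n, approximately
    return n ** (1.0 / k)

def find_power(n, base):
    # the exponent p such that base**p == n, approximately
    return math.log(n, base)

print(nth_root(5451, 50))                           # 1.1877622648368031
print(round(find_power(5451, 1.1877622648368031)))  # 50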

Counterintuitive behaviour of int() in python

It's clearly stated in the docs that int(number) is a truncating type conversion:
>>> int(1.23)
1
and int(string) returns an int if and only if the string is an integer literal.
>>> int('1.23')
ValueError: invalid literal for int() with base 10: '1.23'
>>> int('1')
1
Is there any special reason for that? I find it counterintuitive that the function truncates in one case, but not the other.
There is no special reason. Python is simply applying its general principle of not performing implicit conversions, which are well-known causes of problems, particularly for newcomers, in languages such as Perl and Javascript.
int(some_string) is an explicit request to convert a string to integer format; the rules for this conversion specify that the string must contain a valid integer literal representation. int(float) is an explicit request to convert a float to an integer; the rules for this conversion specify that the float's fractional portion will be truncated.
In order for int("3.1459") to return 3 the interpreter would have to implicitly convert the string to a float. Since Python doesn't support implicit conversions, it chooses to raise an exception instead.
This is almost certainly a case of applying three of the principles from the Zen of Python:
Explicit is better than implicit.
[...] practicality beats purity
Errors should never pass silently
Some percentage of the time, someone doing int('1.23') is calling the wrong conversion for their use case, and wants something like float or decimal.Decimal instead. In these cases, it's clearly better for them to get an immediate error that they can fix, rather than silently giving the wrong value.
In the case that you do want to truncate that to an int, it is trivial to explicitly do so by passing it through float first, and then calling one of int, round, trunc, floor or ceil as appropriate. This also makes your code more self-documenting, guarding against a later modification "correcting" a hypothetical silently-truncating int call to float by making it clear that the rounded value is what you want.
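For example, in a Python 3 session (trunc and floor differ for negative values):
>>> import math
>>> int(float('1.23'))
1
>>> math.trunc(float('-1.23'))  # toward zero
-1
>>> math.floor(float('-1.23'))  # toward negative infinity
-2
>>> round(float('2.5'))         # to the nearest even integer
2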
Sometimes a thought experiment can be useful.
Behavior A: int('1.23') fails with an error. This is the existing behavior.
Behavior B: int('1.23') produces 1 without error. This is what you're proposing.
With behavior A, it's straightforward and trivial to get the effect of behavior B: use int(float('1.23')) instead.
On the other hand, with behavior B, getting the effect of behavior A is significantly more complicated:
def parse_pure_int(s):
    if "." in s:
        raise ValueError("invalid literal for integer with base 10: " + s)
    return int(s)
(and even with the code above, I don't have complete confidence that there isn't some corner case that it mishandles.)
Behavior A therefore is more expressive than behavior B.
Another thing to consider: '1.23' is a string representation of a floating-point value. Converting '1.23' to an integer conceptually involves two conversions (string to float to integer), but int(1.23) and int('1') each involve only one conversion.
Edit:
And indeed, there are corner cases that the above code would not handle: 1e-2 and 1E-2 are both floating point values too.
In simple words - they're not the same function.
int( decimal ) behaves as 'truncate, i.e. knock off the fractional portion and return as int'.
int( string ) behaves as 'this text describes an integer, convert it and return as int'.
They are 2 different functions with the same name that return an integer but they are different functions.
'int' is short and easy to remember and its meaning applied to each type is intuitive to most programmers which is why they chose it.
There's no implication they are providing the same or combined functionality, they simply have the same name and return the same type. They could as easily be called 'floorDecimalAsInt' and 'convertStringToInt', but they went for 'int' because it's easy to remember, (99%) intuitive and confusion would rarely occur.
Parsing text as an integer when the text includes a decimal point, such as "4.5", would throw an error in the majority of computer languages, and would be expected to throw an error by the majority of programmers, since the text value does not represent an integer and implies the caller is providing erroneous data.

python decimal comparison

>>> from decimal import Decimal
>>> Decimal('1.0') > 2.0
True
I was expecting it to convert 2.0 correctly, but after reading through PEP 327 I understand there were reasons for not implicitly converting float to Decimal. But shouldn't it in that case raise a TypeError, as it does in this case:
>>> Decimal('1.0') + 2.0
Traceback (most recent call last):
File "<string>", line 1, in <string>
TypeError: unsupported operand type(s) for +: 'Decimal' and 'float'
as do all the other operators: /, -, %, //, etc.
So my questions are:

1. Is this the right behavior? (i.e., not raising an exception in comparison)
2. What if I derive my own class and write a float converter, basically Decimal(repr(float_value)): are there any caveats? My use case involves only comparison of prices.

System details: Python 2.5.2 on Ubuntu 8.04.1
Re 1, it's indeed the behavior we designed -- right or wrong as it may be (sorry if that trips your use case up, but we were trying to be general!).
Specifically, it's long been the case that every Python object could be subject to inequality comparison with every other -- objects of types that aren't really comparable get arbitrarily compared (consistently in a given run, not necessarily across runs); main use case was sorting a heterogeneous list to group elements in it by type.
An exception was introduced for complex numbers only, making them non-comparable to anything -- but that was still many years ago, when we were occasionally cavalier about breaking perfectly good user code. Nowadays we're much stricter about backwards compatibility within a major release (e.g. along the 2.* line, and separately along the 3.* one, though incompatibilities are allowed between 2 and 3 -- indeed that's the whole point of having a 3.* series, letting us fix past design decisions even in incompatible ways).
The arbitrary comparisons turned out to be more trouble than they're worth, causing user confusion; and the grouping by type can now be obtained easily e.g. with a key=lambda x: str(type(x)) argument to sort; so in Python 3, comparisons between objects of different types, unless the objects themselves specifically allow it in the comparison methods, do raise an exception:
>>> decimal.Decimal('2.0') > 1.2
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unorderable types: Decimal() > float()
In other words, in Python 3 this behaves exactly as you think it should; but in Python 2 it doesn't (and never will in any Python 2.*).
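(For reference, the grouping-by-type sort mentioned above works like this in Python 3:)
>>> sorted([2, 'a', 1.5, 'b'], key=lambda x: str(type(x)))
[1.5, 2, 'a', 'b']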
Re 2, you'll be fine -- though, look to gmpy for what I hope is an interesting way to convert doubles to infinite-precision fractions through Farey trees. If the prices you're dealing with are precise to no more than cents, use '%.2f' % x rather than repr(x)!-)
Rather than a subclass of Decimal, I'd use a factory function such as
def to_decimal(float_price):
    return decimal.Decimal('%.2f' % float_price)
since, once produced, the resulting Decimal is a perfectly ordinary one.
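Used like this:
>>> to_decimal(2.0)
Decimal('2.00')
>>> to_decimal(19.999)
Decimal('20.00')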
The greater-than comparison works because, by default, it works for all objects.
>>> 'abc' > 123
True
Decimal is right merely because it correctly follows the spec. Whether the spec was the correct approach is a separate question. :)
Only the normal caveats when dealing with floats, which, briefly summarized, are: beware of edge cases such as negative zero, +/-infinity, and NaN; don't test for equality (related to the next point); and count on math being slightly inaccurate.
>>> print (1.1 + 2.2 == 3.3)
False
Whether it's "right" is a matter of opinion, but the rationale for not having an automatic conversion is in the PEP, and that was the decision taken. The caveat, basically, is that you can't always exactly convert between float and decimal. Therefore the conversion should not be implicit. If you know that in your application you never have enough significant digits for this to affect you, making classes that allow this implicit behaviour shouldn't be a problem.
Also, one main argument is that real-world use cases don't exist. It's likely to be simpler if you just use Decimal everywhere.
