Arithmetic error: Python is incorrectly dividing variables

I'm getting something that doesn't seem to make a lot of sense. I was practicing my coding by making a little program that would give me the probability of getting certain cards within a certain timeframe of a card game. In order to calculate the chances, I would need to create a method that would perform division and report the chances as a fraction and as a decimal. So I designed this:
from fractions import Fraction

def time_odds(card_count, turns, deck_size=60):
    chance_of_occurence = float(card_count)/float(deck_size)
    opening_hand_odds = 7*chance_of_occurence
    turn_odds = (7 + turns)*chance_of_occurence
    print("Chance of it being in the opening hand: %s or %s" % (opening_hand_odds, Fraction(opening_hand_odds)))
    print("Chance of it being acquired by turn %s : %s or %s" % (turns, turn_odds, Fraction(turn_odds)))
and then I used it like so:
time_odds(3,5)
but for whatever reason I got this as the answer:
"Chance of it being in the opening hand: 0.35000000000000003 or
6305039478318695/18014398509481984"
"Chance of it being acquired by turn 5 : 0.6000000000000001 or
1351079888211149/2251799813685248"
so it's like, almost right, except the decimal is just slightly off, giving like a 0.0000000000003 difference or a 0.000000000000000000001 difference.
Python doesn't do this when I just make it do division like this:
print (7*3/60)
This gives me just 0.35, which is correct. The only difference that I can observe, is that I get the slightly incorrect values when I am dividing with variables rather than just numbers.
I've looked around a little for an answer, and most incorrect-division problems have to do with integer division (I think it can also be called floor division), but I didn't manage to find anything addressing this.
I've had a similar problem with python doing this when I was dividing really big numbers. What's going on?
Why is this so? What can I do to correct it?

The extra digits you're seeing are floating point precision errors. As you do more and more operations with floating point numbers, the errors have a chance of compounding.
The reason you don't see them when you try to replicate the computation by hand is that your replication performs the operations in a different order. If you compute 7 * 3 / 60, the multiplication happens first (with no error), and the division introduces a small enough error that Python's float type hides it for you in its repr (because 0.35 unambiguously refers to the same float value as the computation). If you do 7 * (3 / 60), the division happens first (introducing error) and then the multiplication increases the size of the error to the point that it can't be hidden (because 0.35000000000000003 is a different float value than 0.35).
To avoid printing out the extra digits that are probably error, you may want to explicitly specify a precision to use when turning your numbers into strings. For instance, rather than using the %s format code (which calls str on the value), you could use %.3f, which will round off your number after three decimal places.
There's another issue with your Fractions. You're creating the Fraction directly from the floating point value, which already has the error calculated in. That's why you're seeing the fraction print out with a very large numerator and denominator (it's exactly representing the same number as the inaccurate float). If you instead pass integer numerator and denominator values to the Fraction constructor, it will take care of simplifying the fraction for you without any floating point inaccuracy:
print("Chance of it being in the opening hand: %.3f or %s"
% (opening_hand_odds, Fraction(7*card_count, deck_size)))
This should print out the numbers as 0.350 and 7/20. You can of course choose whatever number of decimal places you want.
Completely separate from the floating point errors, the calculation isn't actually getting the probability right. The formula you're using may be a good enough one for doing in your head while playing a game, but it's not completely accurate. If you're using a computer to crunch the numbers for you, you might as well get it right.
The probability of drawing at least one of N specific cards from a deck of size M after D draws is:
1 - (comb(M-N, D) / comb(M, D))
Where comb is the binomial coefficient or "combination" function (often spoken as "N choose R" and written "nCr" in mathematics). Python 3.8+ provides it in the standard library as math.comb; on older versions there are plenty of add-on modules you may already have installed that provide one, or you can pretty easily write your own. See this earlier question for more specifics.
For your example parameters, the correct opening-hand odds are 5397/17110, or about 0.315.
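A minimal sketch of that corrected calculation (assuming Python 3.8+ for math.comb; the function name and parameters just mirror the original):

from fractions import Fraction
from math import comb

def time_odds(card_count, turns, deck_size=60):
    # Probability of seeing at least one of `card_count` copies in the
    # 7-card opening hand, and by the given turn (7 + turns cards seen).
    opening = 1 - Fraction(comb(deck_size - card_count, 7), comb(deck_size, 7))
    by_turn = 1 - Fraction(comb(deck_size - card_count, 7 + turns), comb(deck_size, 7 + turns))
    print("Chance of it being in the opening hand: %.3f or %s" % (float(opening), opening))
    print("Chance of it being acquired by turn %s : %.3f or %s" % (turns, float(by_turn), by_turn))

time_odds(3, 5)   # opening hand: 0.315 or 5397/17110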

Related

Simplify an irrational decimal to its simple fraction equivalent using Python

Here is my problem:
I am writing a program to solve a statistical problem from a timed coding challenge using Python 2.7.
I'm not allowed to use many external packages (however, I can use Fraction). To finish my problem I need to convert a repeating decimal number to its fraction equivalent.
Example Input:
0.6428571428571428 [i.e. 9/14]
Problem:
I want to output 9/14 in this instance but if I do something like:
print(Fraction(0.6428571428571428))
It will print some ungodly long fraction that can't be reduced.
Is there a way to reduce 0.6428571428571428 to 9/14 without forcing Fraction to round closest to 14 (since I need to use it for a lot of different fractions)?
Another Example:
.33333333333 (i.e. 1/3)
Current Output:
print(Fraction(.333333333333333)) # Outputs 6004799503160061/18014398509481984
If you know roughly how large your denominators can get, you can use limit_denominator. See the docs for it.
Here's what you'd get setting 100000 as your maximum denominator:
from fractions import Fraction
print(Fraction(.333333333333333).limit_denominator(max_denominator=100000))
# 1/3
print(Fraction(0.6428571428571428).limit_denominator(max_denominator=100000))
# 9/14
We're allowing plenty of freedom with 100000 as the upper limit, but it still finds the result we are looking for. You can adjust that number to suit your needs.
For these cases I kept getting the same results up to 10**14 and started getting different results at 10**15. As Olivier Melançon points out, that is because we have 15 digits in our input, and when using max_denominator the error is on the order of 1/(2 * max_denominator).
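As a small illustration of that behaviour (a sketch; the exact crossover point depends on how many digits the input carries):

from fractions import Fraction

x = Fraction(0.333333333333333)          # 15 threes, so not exactly 1/3
for limit in (10**5, 10**14, 10**15):
    print(limit, x.limit_denominator(limit))
# For the moderate limits the closest allowed fraction is still 1/3; once the
# limit gets large enough, limit_denominator starts tracking the input's own
# error and returns a different, much uglier fraction.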

Python and R are returning different results where they should be exactly the same [closed]

[Python numpy code]
In [171]: A1*b
Out[171]:
array([ -7.55603523e-01, 7.18519356e-01, 3.98628050e-03,
9.27047917e-04, -1.31074698e-03, 1.44455190e-03,
1.02676602e-03, 5.03891225e-02, -1.15752426e-03,
-2.43685270e-02, 5.88382307e-03, 2.63372861e-04])
In [172]: (A1*b).sum()
Out[172]: -1.6702134467139196e-16
[R code]
> cholcholT[2,] * b
[1] -0.7556035225 0.7185193560 0.0039862805 0.0009270479 -0.0013107470
[6] 0.0014445519 0.0010267660 0.0503891225 -0.0011575243 -0.0243685270
[11] 0.0058838231 0.0002633729
> sum(cholcholT[2,] * b)
[1] -9.616873e-17
The first block is the numpy code and the second is the R code. Up until the element-wise product of the two vectors, they return the same result. However, if I try to add them up, they become different. I believe it doesn't have to do with the precision settings of the two, since they are both double-precision based. Why is this happening?
You are experiencing what is called catastrophic cancellation. You are subtracting numbers from each other which differ only very slightly. As a result you get numbers which have a very high error relative to their value. The error stems from rounding errors which are introduced when your system stores values which cannot be represented by the binary system accurately.
Intuitively, you can think of this as the same difficulties you have when writing 1/3 as a decimal number. You would have to write 0.3333... , so infinitely many 3s behind the decimal point. You cannot do this and your computer can't either.
So your computer has to round the numbers somewhere.
You can see the rounding errors if you use something like
"{:.20e}".format(0.1)
You will see that after the 16th digit or so the number you wanted to store (1.0000000000000000000...×10^-1) is different from the number the computer stores (1.00000000000000005551...×10^-1)
To see in which order of magnitude this inaccuracy lies, you can look at the machine epsilon. In simplified terms, this value is the minimum amount, relative to your value, which you can add to your value so that the computer can still distinguish the result from the old value (so it does not get rounded away when the result is stored in memory).
If you execute
import numpy as np
eps = np.finfo(float).eps
you can see that this value lies on the order of magnitude of 10^-16.
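For instance, a quick self-contained check of what that epsilon means in practice:

import numpy as np

eps = np.finfo(float).eps
print(eps)                  # about 2.22e-16
print(1.0 + eps == 1.0)     # False: eps is still distinguishable next to 1.0
print(1.0 + eps/2 == 1.0)   # True: half of eps gets rounded away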
The computer represents floats in a form like SIGN|EXPONENT|FRACTION. To simplify greatly, if computer memory stored numbers in decimal format, a number like -0.0053 would be stored as 1|-2|.53: the 1 is for the negative sign, and -2 means 'FRACTION times 10^-2'.
If you sum up floats, the computer must represent each float with the same exponent to add/subtract the digits of the FRACTION from each other. Therefore all your values will be represented in terms of the greatest exponent of your data, which is -1. Therefore your rounding error will be in the order of magnitude of 10^-16*10^-1 which is 10^-17. You can see that your result is in this order of magnitude as well, so it is very much influenced by the rounding errors of your digits.
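The two tiny sums most likely differ because numpy and R add the terms in a different order or with a different internal summation strategy. A small self-contained illustration of how much the order alone can matter (not the OP's data):

import math

print(sum([1e16, 1.0, -1e16]))        # 0.0 -- the 1.0 is absorbed when added to 1e16
print(sum([1e16, -1e16, 1.0]))        # 1.0 -- same numbers, different order
print(math.fsum([1e16, 1.0, -1e16]))  # 1.0 -- fsum compensates for the lost low-order bits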
You are using floats and applying arithmetic to them. Floating point arithmetic is a dangerous thing because it often produces small rounding errors. Depending on whether an error is rounded up or down, or just "cut off" from the binary representation, slightly different results may appear.

Numpy too weak to calculate a precise mean value

This question is very similar to this post - but not exactly
I have some data in a .csv file. The data has precision to the 4th digit (#.####).
Calculating the mean in Excel or SAS gives a result with precision to 5th digit (#.#####) but using numpy gives:
import numpy as np
data = np.recfromcsv(path2file, delimiter=';', names=['measurements'], dtype=np.float64)
rawD = data['measurements']
print np.average(rawD)
gives a number like this
#.#####999999999994
Clearly something is wrong..
using
from math import fsum
print fsum(rawD.ravel())/rawD.size
gives
#.#####
Is there anything in the np.average that I set wrong _______?
BONUS info:
I'm only working with 200 data points in the array
UPDATE
I thought I should make my case more clear.
I have numbers like 4.2730 in my csv (giving a 4 decimal precision - even though the 4th always is zero [not part of the subject so don't mind that])
Calculating an average/mean by numpy gives me this
4.2516499999999994
Which gives a print by
>>>print "%.4f" % np.average(rawD)
4.2516
Doing the same thing in Excel or SAS gives me this:
4.2517
Which I actually believe as being the true average value because it finds it to be 4.25165.
This code also illustrate it:
answer = 0
for number in rawD:
    answer += int(number*1000)

print answer/2
425165
So how do I tell np.average() to calculate this value ___?
I'm a bit surprised that numpy did this to me... I thought that I only needed to worry if I was dealing with 16 digits numbers. Didn't expect a round off on the 4 decimal place would be influenced by this..
I know I could use
fsum(rawD.ravel())/rawD.size
But I also have other things (like std) I want to calculate with the same precision
UPDATE 2
I thought I could make a temp solution by
>>>print "%.4f" % np.float64("%.5f" % np.mean(rawD))
4.2416
Which did not solve the case. Then I tried
>>>print "%.4f" % float("4.24165")
4.2416
AHA! There is a bug in the formatter: Issue 5118
To be honest I don't care if python stores 4.24165 as 4.241649999... It's still a round off error - NO MATTER WHAT.
If the interpreter can figure out how to display the number
>>>print float("4.24165")
4.24165
Then the formatter should as well, and deal with that number when rounding.
It still doesn't change the fact that I have a round off problem (now both with the formatter and numpy)
In case you need some numbers to help me out then I have made this modified .csv file:
Download it from here
(I'm aware that this file does not have the number of digits I explained earlier and that the average gives ..9988 at the end instead of ..9994 - it's modified)
Guess my question boils down to: how do I get a string output like the one Excel gives me if I use =AVERAGE(), and have it round off correctly if I choose to show only 4 digits?
I know that this might seem strange for some.. But I have my reasons for wanting to reproduce the behavior of Excel.
Any help would be appreciated, thank you.
To get exact decimal numbers, you need to use decimal arithmetic instead of binary. Python provides the decimal module for this.
If you want to continue to use numpy for the calculations and simply round the result, you can still do this with decimal. You do it in two steps, rounding to a large number of digits to eliminate the accumulated error, then rounding to the desired precision. The quantize method is used for rounding.
from decimal import Decimal,ROUND_HALF_UP
ten_places = Decimal('0.0000000001')
four_places = Decimal('0.0001')
mean = 4.2516499999999994
print Decimal(mean).quantize(ten_places).quantize(four_places, rounding=ROUND_HALF_UP)
4.2517
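A sketch of applying the same two-step rounding directly to a numpy mean (Python 3 syntax; the sample values here are hypothetical):

from decimal import Decimal, ROUND_HALF_UP
import numpy as np

values = np.array([4.2730, 4.2513, 4.2306, 4.2517])   # hypothetical data
mean = np.average(values)                              # something very close to 4.25165
ten_places = Decimal('0.0000000001')
four_places = Decimal('0.0001')
print(Decimal(float(mean)).quantize(ten_places)
                          .quantize(four_places, rounding=ROUND_HALF_UP))   # 4.2517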
The result value of average is a double. When you print out a double, by default all digits are printed. What you see here is the result of limited floating-point precision, which is not a problem of numpy but a general computing problem. When you care about the presentation of your float value, use "%.4f" % avg_val. There is also a package for rational numbers, to avoid representing fractions as real numbers, but I guess that's not what you're looking for.
For your second snippet, summing all the values by hand and then dividing: I suppose you're using Python 2.7, and since the accumulated value is an integer, you get an integer division, which truncates everything after the dot and results in another integer value.
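A short Python 2 sketch of that integer-division pitfall:

# Python 2
print 7 / 2      # 3   -- two ints give truncating (floor) division
print 7 / 2.0    # 3.5 -- make one operand a float to get true division
# (or put `from __future__ import division` at the top of the module)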

Why do Python's math.ceil() and math.floor() operations return floats instead of integers?

Can someone explain this (straight from the docs- emphasis mine):
math.ceil(x) Return the ceiling of x as a float, the smallest integer value greater than or equal to x.
math.floor(x) Return the floor of x as a float, the largest integer value less than or equal to x.
Why would .ceil and .floor return floats when they are by definition supposed to calculate integers?
EDIT:
Well this got some very good arguments as to why they should return floats, and I was just getting used to the idea, when @jcollado pointed out that they in fact do return ints in Python 3...
As pointed out by other answers, in Python 2 they return floats, probably for historical reasons (to prevent overflow problems). However, they return integers in Python 3.
>>> import math
>>> type(math.floor(3.1))
<class 'int'>
>>> type(math.ceil(3.1))
<class 'int'>
You can find more information in PEP 3141.
The range of floating point numbers usually exceeds the range of integers. By returning a floating point value, the functions can return a sensible value for input values that lie outside the representable range of integers.
Consider: If floor() returned an integer, what should floor(1.0e30) return?
While Python's integers are now arbitrary precision, it wasn't always this way. The standard library functions are thin wrappers around the equivalent C library functions.
Because python's math library is a thin wrapper around the C math library which returns floats.
The source of your confusion is evident in your comment:
The whole point of ceil/floor operations is to convert floats to integers!
The point of the ceil and floor operations is to round floating-point data to integral values. Not to do a type conversion. Users who need to get integer values can do an explicit conversion following the operation.
Note that it would not be as trivial to implement rounding to an integral value if all you had available were a ceil or floor operation that returned an integer. You would need to first check that the input is within the representable integer range, then call the function; you would need to handle NaN and infinities in a separate code path.
Additionally, you must have versions of ceil and floor which return floating-point numbers if you want to conform to IEEE 754.
Before Python 2.4, an integer couldn't hold the full range of truncated real numbers.
http://docs.python.org/whatsnew/2.4.html#pep-237-unifying-long-integers-and-integers
Because the range for floats is greater than that of integers -- returning an integer could overflow
This is a very interesting question! As a float requires some bits to store the exponent (=bits_for_exponent), any floating point number greater than 2**(float_size - bits_for_exponent) will always be an integral value! At the other extreme a float with a negative exponent will give one of 1, 0 or -1. This makes the discussion of integer range versus float range moot, because these functions will simply return the original number whenever the number is outside the range of the integer type. The Python functions are wrappers of the C functions, and so this is really a deficiency of the C functions, where they should have returned an integer and forced the programmer to do the range/NaN/Inf check before calling ceil/floor.
Thus the logical answer is the only time these functions are useful they would return a value within integer range and so the fact they return a float is a mistake and you are very smart for realizing this!
Maybe because other languages do this as well, so it is generally-accepted behavior. (For good reasons, as shown in the other answers)
This totally caught me off guard recently. This is because I've programmed in C since the 1970's and I'm only now learning the fine details of Python. Like this curious behavior of math.floor().
The math library of Python is how you access the C standard math library. And the C standard math library is a collection of floating point numerical functions, like sin(), cos(), and sqrt(). The floor() function in the context of numerical calculations has ALWAYS returned a float. For 50 YEARS now. It's part of the standards for numerical computation. For those of us familiar with the math library of C, we don't understand it to be just "math functions". We understand it to be a collection of floating-point algorithms. It would be better named something like NFPAL - Numerical Floating Point Algorithms Library. :)
Those of us that understand the history instantly see the python math module as just a wrapper for the long-established C floating-point library. So we expect without a second thought, that math.floor() is the same function as the C standard library floor() which takes a float argument and returns a float value.
The use of floor() as a numerical math concept goes back to 1798 per the Wikipedia page on the subject: https://en.wikipedia.org/wiki/Floor_and_ceiling_functions#Notation
It has never been a computer-science convert-floating-point-to-integer-storage-format function, even though logically it's a similar concept.
The floor() function in this context has always been a floating-point numerical calculation as all(most) the functions in the math library. Floating-point goes beyond what integers can do. They include the special values of +inf, -inf, and Nan (not a number) which are all well defined as to how they propagate through floating-point numerical calculations. Floor() has always CORRECTLY preserved values like Nan and +inf and -inf in numerical calculations. If Floor returns an int, it totally breaks the entire concept of what the numerical floor() function was meant to do. math.floor(float("nan")) must return "nan" if it is to be a true floating-point numerical floor() function.
When I recently saw a Python education video telling us to use:
i = math.floor(12.34/3)
to get an integer I laughed to myself at how clueless the instructor was. But before writing a snarkish comment, I did some testing and to my shock, I found the numerical algorithms library in Python was returning an int. And even stranger, what I thought was the obvious answer to getting an int from a divide, was to use:
i = 12.34 // 3
Why not use the built-in integer divide to get the integer you are looking for! From my C background, it was the obvious right answer. But lo and behold, integer divide in Python returns a FLOAT in this case! Wow! What a strange upside-down world Python can be.
A better answer in Python is that if you really NEED an int type, you should just be explicit and ask for int in python:
i = int(12.34/3)
Keeping in mind however that floor() rounds towards negative infinity and int() rounds towards zero so they give different answers for negative numbers. So if negative values are possible, you must use the function that gives the results you need for your application.
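A quick illustration of that difference for negative inputs:

import math

print(math.floor(-1.5))           # -2 : rounds toward negative infinity
print(int(-1.5))                  # -1 : truncates toward zero
print(math.floor(1.5), int(1.5))  # 1 1 : they agree for positive values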
Python however is a different beast, for good reasons. It's trying to address a different problem set than C. The dynamic typing of Python is great for fast prototyping and development, but it can create some very complex and hard to find bugs when code that was tested with one type of object, like floats, fails in subtle and hard to find ways when passed an int argument. And because of this, a lot of interesting choices were made for Python that put the need to minimize surprise errors above other historic norms.
Changing the divide to always return a float (or some form of non int) was a move in the right direction for this. And in this same light, it's logical to make // be a floor(a/b) function, and not an "int divide".
Making float divide by zero a fatal error instead of returning float("inf") is likewise wise because, in MOST python code, a divide by zero is not a numerical calculation but a programming bug where the math is wrong or there is an off by one error. It's more important for average Python code to catch that bug when it happens, instead of propagating a hidden error in the form of an "inf" which causes a blow-up miles away from the actual bug.
And as long as the rest of the language is doing a good job of casting ints to floats when needed, such as in divide, or math.sqrt(), it's logical to have math.floor() return an int, because if it is needed as a float later, it will be converted correctly back to a float. And if the programmer needed an int, well then the function gave them what they needed. math.floor(a/b) and a//b should act the same way, but the fact that they don't I guess is just a matter of history not yet adjusted for consistency. And maybe too hard to "fix" due to backward compatibility issues. And maybe not that important???
In Python, if you want to write hard-core numerical algorithms, the correct answer is to use NumPy and SciPy, not the built-in Python math module.
import numpy as np
nan = np.float64(0.0) / 0.0 # gives a warning and returns float64 nan
nan = np.floor(nan) # returns float64 nan
Python is different, for good reasons, and it takes a bit of time to understand it. And we can see in this case, the OP, who didn't understand the history of the numerical floor() function, needed and expected it to return an int from their thinking about mathematical integers and reals. Now Python is doing what our mathematical (vs computer science) training implies. Which makes it more likely to do what a beginner expects it to do while still covering all the more complex needs of advanced numerical algorithms with NumPy and SciPy. I'm constantly impressed with how Python has evolved, even if at times I'm totally caught off guard.

Significant figures in the decimal module

So I've decided to try to solve my physics homework by writing some python scripts to solve problems for me. One problem that I'm running into is that significant figures don't always seem to come out properly. For example this handles significant figures properly:
>>> from decimal import Decimal
>>> Decimal('1.0') + Decimal('2.0')
Decimal("3.0")
But this doesn't:
>>> Decimal('1.00') / Decimal('3.00')
Decimal("0.3333333333333333333333333333")
So two questions:
Am I right that this isn't the expected amount of significant digits, or do I need to brush up on significant digit math?
Is there any way to do this without having to set the decimal precision manually? Granted, I'm sure I can use numpy to do this, but I just want to know if there's a way to do this with the decimal module out of curiosity.
Changing the decimal working precision to 2 digits is not a good idea, unless you are absolutely only going to perform a single operation.
You should always perform calculations at higher precision than the level of significance, and only round the final result. If you perform a long sequence of calculations and round to the number of significant digits at each step, errors will accumulate. The decimal module doesn't know whether any particular operation is one in a long sequence, or the final result, so it assumes that it shouldn't round more than necessary. Ideally it would use infinite precision, but that is too expensive so the Python developers settled for 28 digits.
Once you've arrived at the final result, what you probably want is quantize:
>>> (Decimal('1.00') / Decimal('3.00')).quantize(Decimal("0.001"))
Decimal("0.333")
You have to keep track of significance manually. If you want automatic significance tracking, you should use interval arithmetic. There are some libraries available for Python, including pyinterval and mpmath (which supports arbitrary precision). It is also straightforward to implement interval arithmetic with the decimal library, since it supports directed rounding.
You may also want to read the Decimal Arithmetic FAQ: Is the decimal arithmetic ‘significance’ arithmetic?
Decimals won't throw away decimal places like that. If you really want to limit the precision to 2 significant figures then try
decimal.getcontext().prec=2
EDIT: You can alternatively call quantize() every time you multiply or divide (addition and subtraction will preserve the 2 dps).
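For example, a small sketch of re-quantizing after a division so the intermediate result keeps two decimal places:

from decimal import Decimal

TWO_PLACES = Decimal('0.01')
result = (Decimal('1.00') / Decimal('3.00')).quantize(TWO_PLACES)
print(result)   # 0.33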
Just out of curiosity...is it necessary to use the decimal module? Why not floating point with a significant-figures rounding of numbers when you are ready to see them? Or are you trying to keep track of the significant figures of the computation (like when you have to do an error analysis of a result, calculating the computed error as a function of the uncertainties that went into the calculation)? If you want a rounding function that rounds from the left of the number instead of the right, try:
def lround(x, leadingDigits=0):
    """Return x either as 'print' would show it (the default)
    or rounded to the specified digit as counted from the leftmost
    non-zero digit of the number, e.g. lround(0.00326,2) --> 0.0033
    """
    assert leadingDigits >= 0
    if leadingDigits == 0:
        return float(str(x))  # just give it back like 'print' would give it
    # %.(n-1)e keeps n significant digits, matching the docstring example
    return float('%.*e' % (int(leadingDigits) - 1, x))
The numbers will look right when you print them or convert them to strings, but if you are working at the prompt and don't explicitly print them they may look a bit strange:
>>> lround(1./3.,2),str(lround(1./3.,2)),str(lround(1./3.,4))
(0.33000000000000002, '0.33', '0.3333')
Decimal defaults to 28 significant digits of precision.
The only way to limit the number of digits it returns is by altering the precision.
What's wrong with floating point?
>>> "%8.2e"% ( 1.0/3.0 )
'3.33e-01'
It was designed for scientific-style calculations with a limited number of significant digits.
If I understand Decimal correctly, its context "precision" is the total number of significant digits kept in every result, not something tracked per value.
You seem to want something else: tracking the number of significant digits of each quantity, which is one more than the number of digits after the decimal point in scientific notation.
I would be interested in learning about a Python module that does significant-digits-aware floating point computations.
