Slight difference in results between Python and Matlab [duplicate]

I am writing a program where I need to delete duplicate points stored in a matrix. The problem is that when it comes to checking whether those points are in the matrix, MATLAB can't recognize them even though they are there.
In the following code, intersections function gets the intersection points:
[points(:,1), points(:,2)] = intersections(...
    obj.modifiedVGVertices(1,:), obj.modifiedVGVertices(2,:), ...
    [vertex1(1) vertex2(1)], [vertex1(2) vertex2(2)]);
The result:
>> points
points =
12.0000 15.0000
33.0000 24.0000
33.0000 24.0000
>> vertex1
vertex1 =
12
15
>> vertex2
vertex2 =
33
24
Two points (vertex1 and vertex2) should be eliminated from the result. This is done with the following commands:
points = points((points(:,1) ~= vertex1(1)) | (points(:,2) ~= vertex1(2)), :);
points = points((points(:,1) ~= vertex2(1)) | (points(:,2) ~= vertex2(2)), :);
After doing that, we have this unexpected outcome:
>> points
points =
33.0000 24.0000
The outcome should be an empty matrix. As you can see, the first (or second?) pair of [33.0000 24.0000] has been eliminated, but not the second one.
Then I checked these two expressions:
>> points(1) ~= vertex2(1)
ans =
0
>> points(2) ~= vertex2(2)
ans =
1 % <-- It means 24.0000 is not equal to 24.0000?
What is the problem?
More surprisingly, I made a new script that has only these commands:
points = [12.0000 15.0000
33.0000 24.0000
33.0000 24.0000];
vertex1 = [12 ; 15];
vertex2 = [33 ; 24];
points = points((points(:,1) ~= vertex1(1)) | (points(:,2) ~= vertex1(2)), :);
points = points((points(:,1) ~= vertex2(1)) | (points(:,2) ~= vertex2(2)), :);
This time the result is as expected:
>> points
points =
Empty matrix: 0-by-2

The problem you're having relates to how floating-point numbers are represented on a computer. A more detailed discussion of floating-point representations appears towards the end of my answer (The "Floating-point representation" section). The TL;DR version: because computers have finite amounts of memory, numbers can only be represented with finite precision. Thus, the accuracy of floating-point numbers is limited to a certain number of decimal places (about 16 significant digits for double-precision values, the default used in MATLAB).
Actual vs. displayed precision
Now to address the specific example in the question... while 24.0000 and 24.0000 are displayed in the same manner, it turns out that they actually differ by a very small decimal amount in this case. You don't see it because MATLAB's default short format only displays 4 digits after the decimal point, keeping the overall display neat and tidy. If you want to see the full precision, you should either issue the format long command or view a hexadecimal representation of the number:
>> pi
ans =
3.1416
>> format long
>> pi
ans =
3.141592653589793
>> num2hex(pi)
ans =
400921fb54442d18
Initialized values vs. computed values
Since there are only a finite number of values that can be represented for a floating-point number, it's possible for a computation to result in a value that falls between two of these representations. In such a case, the result has to be rounded off to one of them. This introduces a small machine-precision error. It also means that a value initialized directly and the same value arrived at by computation can differ slightly. For example, the value 0.1 doesn't have an exact floating-point representation (i.e. it gets slightly rounded off), and so you end up with counter-intuitive results like this due to the way round-off errors accumulate:
>> a=sum([0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1]); % Sum 10 0.1s
>> b=1; % Initialize to 1
>> a == b
ans =
logical
0 % They are unequal!
>> num2hex(a) % Let's check their hex representation to confirm
ans =
3fefffffffffffff
>> num2hex(b)
ans =
3ff0000000000000
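For what it's worth, the same effect shows up in Python; here is a rough equivalent of the snippet above:
>>> a = sum([0.1] * 10)   # sum ten 0.1s
>>> b = 1.0
>>> a == b
False
>>> a.hex(), b.hex()      # hex representations confirm they differ
('0x1.fffffffffffffp-1', '0x1.0000000000000p+0')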
How to correctly handle floating-point comparisons
Since floating-point values can differ by very small amounts, any comparisons should be done by checking that the values are within some range (i.e. tolerance) of one another, as opposed to exactly equal to each other. For example:
a = 24;
b = 24.000001;
tolerance = 0.001;
if abs(a-b) < tolerance, disp('Equal!'); end
will display "Equal!".
You could then change your code to something like:
points = points((abs(points(:,1)-vertex1(1)) > tolerance) | ...
(abs(points(:,2)-vertex1(2)) > tolerance),:)
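If you are doing the equivalent comparison in Python, the standard library's math.isclose expresses the same idea (the tolerance value here is just illustrative):
import math

a = 24.0
b = 24.000001
print(math.isclose(a, b, rel_tol=0.0, abs_tol=1e-3))  # True: equal within the tolerance
print(a == b)                                          # False: exact comparison fails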
Floating-point representation
A good overview of floating-point numbers (and specifically the IEEE 754 standard for floating-point arithmetic) is What Every Computer Scientist Should Know About Floating-Point Arithmetic by David Goldberg.
A binary floating-point number is actually represented by three integers: a sign bit s, a significand (or coefficient/fraction) b, and an exponent e. For the double-precision floating-point format, each number is represented by 64 bits laid out in memory as follows:
[ 1 sign bit | 11 exponent bits | 52 significand bits ]
The real value can then be found with the following formula, where 1.b is the significand read as a binary fraction with an implicit leading 1, and 1023 is the exponent bias for double precision:
value = (-1)^s * 1.b * 2^(e - 1023)
This format allows for number representations in the range 10^-308 to 10^308. For MATLAB you can get these limits from realmin and realmax:
>> realmin
ans =
2.225073858507201e-308
>> realmax
ans =
1.797693134862316e+308
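The corresponding limits in Python can be read from sys.float_info, since Python floats are the same IEEE 754 doubles:
>>> import sys
>>> sys.float_info.min
2.2250738585072014e-308
>>> sys.float_info.max
1.7976931348623157e+308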
Since there are a finite number of bits used to represent a floating-point number, there are only so many finite numbers that can be represented within the above given range. Computations will often result in a value that doesn't exactly match one of these finite representations, so the values must be rounded off. These machine-precision errors make themselves evident in different ways, as discussed in the above examples.
In order to better understand these round-off errors it's useful to look at the relative floating-point accuracy provided by the function eps, which quantifies the distance from a given number to the next largest floating-point representation:
>> eps(1)
ans =
2.220446049250313e-16
>> eps(1000)
ans =
1.136868377216160e-13
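Python 3.9+ exposes the same quantity as math.ulp (NumPy has numpy.spacing as well):
>>> import math
>>> math.ulp(1.0)
2.220446049250313e-16
>>> math.ulp(1000.0)
1.1368683772161603e-13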
Notice that the precision is relative to the size of a given number being represented; larger numbers will have larger distances between floating-point representations, and will thus have fewer digits of precision following the decimal point. This can be an important consideration with some calculations. Consider the following example:
>> format long % Display full precision
>> x = rand(1, 10); % Get 10 random values between 0 and 1
>> a = mean(x) % Take the mean
a =
0.587307428244141
>> b = mean(x+10000)-10000 % Take the mean at a different scale, then shift back
b =
0.587307428244458
Note that when we shift the values of x from the range [0 1] to the range [10000 10001], compute a mean, then subtract the mean offset for comparison, we get a value that differs for the last 3 significant digits. This illustrates how an offset or scaling of data can change the accuracy of calculations performed on it, which is something that has to be accounted for with certain problems.
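Roughly the same experiment in Python/NumPy (the data is random, so the exact digits will differ from run to run):
import numpy as np

x = np.random.rand(10)            # 10 random values between 0 and 1
a = x.mean()                      # mean at the original scale
b = (x + 10000).mean() - 10000    # mean at a shifted scale, shifted back
print(a, b)                       # typically the two differ in the last few digits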

Look at this article: The Perils of Floating Point. Though its examples are in FORTRAN, it applies to virtually any modern programming language, including MATLAB. Your problem (and its solution) is described in the "Safe Comparisons" section.

Type
format long g
This command will show the full value of the number. It's likely to be something like 24.00000021321 != 24.00000123124

Try writing
0.1 + 0.1 + 0.1 == 0.3.
Warning: You might be surprised by the result!
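In Python, for instance, the result looks like this:
>>> 0.1 + 0.1 + 0.1 == 0.3
False
>>> 0.1 + 0.1 + 0.1
0.30000000000000004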

Maybe the two numbers are really 24.0 and 24.000000001 but you're not seeing all the decimal places.

Check out the Matlab EPS function.
Matlab uses floating point math up to 16 digits of precision (only 5 are displayed).

Related

Python: Why does 2^-n work for n>52 but not 1+2^-n-1?

I'm pretty new to python, and I've made a table which calculates T=1+2^-n-1 and C=2^-n, which both give the same values from n=40 to n=52, but from n=53 to n=60 I get 0.0 for T, whereas C gives me progressively smaller decimals each time - why is this?
I think I understand why T becomes 0.0, because of python using binary floating point and because of the machine epsilon value - but I'm slightly confused as to why C doesn't also become 0.0.
import numpy as np
import math

t = np.zeros(21)
c = np.zeros(21)
for n in range(40, 61):
    m = n - 40
    t[m] = 1 + 2**(-n) - 1
    c[m] = 2**(-n)
    print(n, t[m], c[m])
The "floating" in floating point means that values are represented by storing a fixed number of leading digits and a scale factor, rather than assuming a fixed scale (which would be fixed point).
2**-53 only takes one (binary) digit to represent (not including the scale), but 1+2**-53 would take 54 to represent exactly. Python floats only have 53 binary digits of precision; 2**-53 can be represented exactly, but 1+2**-53 gets rounded to exactly 1, and subtracting 1 from that gives exactly 0. Thus, we have
>>> 2**-53
1.1102230246251565e-16
>>> 1+(2**-53)-1
0.0
Postscript: you might wonder why 2**-53 displays as a value not equal to the exact mathematical value when I said it was exact. That's due to the float->string conversion logic, which only keeps enough decimal digits to reconstruct the original float (instead of printing a bunch of digits at the end that are usually just noise).
The difference between the two is indeed due to floating-point representation. If you compute 1 + X where X is a very small number, the result keeps an exponent of 0 and the precision is limited by the mantissa, which is 52 bits on a 64-bit machine. Therefore 1 + 2^(-n) is equal to 1 whenever n > 52. However, 2^-100 on its own can still be represented in double precision (it simply gets its own, smaller exponent), so you see C keep decreasing as n grows.
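A small sketch of that point using math.frexp, which splits a float into mantissa and exponent:
>>> import math
>>> math.frexp(2**-60)       # a tiny value keeps its own (small) exponent
(0.5, -59)
>>> math.frexp(1 + 2**-60)   # added to 1, the tiny part falls off the 53-bit mantissa
(0.5, 1)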

why (0.0006*100000)%10 is 10

When I computed (0.0006*100000)%10 and (0.0003*100000)%10 in Python, I got values like 9.999999999999993, but the result should really be 0.
Similarly, in C++, fmod(0.0003*100000, 10) gives the value 10. Can someone help me figure out where I'm going wrong?
The closest IEEE 754 64-bit binary number to 0.0003 is 0.0002999999999999999737189393389513725196593441069126129150390625. The closest representable number to the result of multiplying it by 100000 is 29.999999999999996447286321199499070644378662109375.
There are a number of operations, such as floor and mod, that can make very low significance differences very visible. You need to be careful using them in connection with floating point numbers - remember that, in many cases, you have a very, very close approximation to the infinite precision value, not the infinite precision value itself. The actual value can be slightly high or, as in this case, slightly low.
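You can see those exact values from Python itself, since Decimal(float) converts the stored double exactly:
>>> from decimal import Decimal
>>> Decimal(0.0003)
Decimal('0.0002999999999999999737189393389513725196593441069126129150390625')
>>> Decimal(0.0003 * 100000)
Decimal('29.999999999999996447286321199499070644378662109375')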
Just to give the obvious answer: 0.0006 and 0.0003 are not representable in a machine double (at least on modern machines). So you didn't actually multiply by those values, but by some value very close. Slightly more, or slightly less, depending on how the compiler rounded them.
May I suggest using the remainder function in C?
It will compute the remainder after rounding the quotient to nearest integer, with exact computation (no rounding error):
remainder = dividend - round(dividend/divisor)*divisor
This way, your result will lie in the interval [-divisor/2, +divisor/2].
This will still expose the fact that you don't get a float exactly equal to 6/10,000, but perhaps in a less surprising way when you expect a zero remainder:
remainder(0.0006*100000,10.0) -> -7.105427357601002e-15
remainder(0.0003*100000,10.0) -> -3.552713678800501e-15
I don't know of such remainder function support in python, but there seems to be a match in gnulib-python module (to be verified...)
https://github.com/ghostmansd/gnulib-python/blob/master/modules/remainder
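(For what it's worth, Python 3.7+ does provide this in the standard library as math.remainder, which computes the IEEE 754-style remainder; the values below are the ones quoted above:)
>>> import math
>>> math.remainder(0.0006 * 100000, 10.0)
-7.105427357601002e-15
>>> math.remainder(0.0003 * 100000, 10.0)
-3.552713678800501e-15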
EDIT
Why does it apparently work for every other N/10,000 in the [1,9] interval, but not for 3 and 6?
It's not just luck; it comes from good properties of IEEE 754 in the default rounding mode (round to nearest, ties to even).
The result of a floating point operation is rounded to nearest floating point value.
Instead of N/D you thus get (N/D+err), where the absolute error err is given by this snippet (I'm more comfortable in Smalltalk, but I'm sure you will find an equivalent in Python):
| d |
d := 10000.
^(1 to: 9) collect: [:n | ((n/d) asFloat asFraction - (n/d)) asFloat]
It gives you something like:
#(4.79217360238593e-21 9.58434720477186e-21 -2.6281060661048628e-20 1.916869440954372e-20 1.0408340855860843e-20 -5.2562121322097256e-20 -7.11236625150491e-21 3.833738881908744e-20 -2.4633073358870662e-20)
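A Python equivalent of that Smalltalk snippet, using fractions.Fraction to get the exact error of each stored float (a sketch; the printed values should match the list above):
from fractions import Fraction

d = 10000
# Fraction(n / d) is the exact value of the stored float; Fraction(n, d) is the exact rational.
errs = [float(Fraction(n / d) - Fraction(n, d)) for n in range(1, 10)]
print(errs)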
Changing the last bit of a floating-point significand leads to a small difference named the unit of least precision (ulp), and it can be instructive to express the error in terms of ulps:
| d |
d := 10000.
^(1 to: 9) collect: [:n | ((n/d) asFloat asFraction - (n/d)) / (n/d) asFloat ulp]
the number of ulp off the exact fraction is thus:
#(0.3536 0.3536 -0.4848 0.3536 0.096 -0.4848 -0.0656 0.3536 -0.2272)
The error is the same for N=1,2,4,8 because they are essentially the same floating point - same significand, just the exponent changes.
It's also the same for N=3 and 6 for the same reason, but there it is very near the maximum error for a single operation, which is 0.5 ulp (unluckily, the exact value can fall almost halfway between two floats).
For N=9, the relative error is smaller than for N=1, and for 5 and 7, the error is very small.
Now when we multiply these approximations by 10000, which is exactly representable as a float, (N/D+err)*D is N+D*err, and the result is then rounded to the nearest float. If D*err is less than half the distance to the next float, then it is rounded to N and the rounding error vanishes.
| d |
d := 10000.
^(1 to: 9) collect: [:n | ((n/d) asFloat asFraction - (n/d)) * d / n asFloat ulp]
OK, we were unlucky for N=3 and 6, the already high rounding error magnitude has become greater than 0.5 ulp:
#(0.2158203125 0.2158203125 -0.591796875 0.2158203125 0.1171875 -0.591796875 -0.080078125 0.2158203125 -0.138671875)
Beware, the distance is not symmetric for exact powers of two, the next float after 1.0 is 1.0+2^-52, but before 1.0 it's 1.0-2^-53.
Nonetheless, what we see here is that after the second rounding operation, the error cancelled out in four cases and accumulated in only a single case (counting only the cases with distinct significands).
We can generalize that result. As long as we do not sum numbers with very different exponents, but just use multiply/divide operations, then while the error bound can be high after P operations, the statistical distribution of cumulated errors has a remarkably narrow peak compared to this bound, and the results are surprisingly good compared with what we regularly read about float imprecision. See my answer to The number of correct decimal digits in a product of doubles with a large number of terms for an example.
I just wanted to mention that yes, floats are inexact, but they sometimes do such a decent job that they foster the illusion of exactness. Finding a few outliers like those mentioned in this post is then surprising. The sooner the surprise comes, the less surprising it is later. Ah, if only floats were implemented less carefully, there would be fewer questions in this category...

What is the nature of the round off error here?

Can someone help me unpack what exactly is going on under the hood here?
>>> 1e16 + 1.
1e+16
>>> 1e16 + 1.1
1.0000000000000002e+16
I'm on 64-bit Python 2.7. For the first, I would assume that since a float only has about 15-16 significant digits of precision, it's just round-off error. The true floating-point answer might be something like
10000000000000000.999999....
and the fractional part just gets lopped off. But the second result makes me question this understanding. Can't 1 be represented exactly? Any thoughts?
[Edit: Just to clarify. I'm not in any way suggesting that the answers are "wrong." Clearly, they're right, because, well they are. I'm just trying to understand why.]
It's just rounding as close as it can.
1e16 in floating hex is 0x4341c37937e08000.
1e16+2 is 0x4341c37937e08001.
At this level of magnitude, the smallest difference in precision that you can represent is 2. Adding exactly 1.0 lands halfway between two representable values, and the tie rounds down (IEEE floating-point math rounds ties to the even significand by default). Adding values larger than 1.0 will round up to the next representable value.
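As a quick sanity check in Python, float.hex shows the same underlying bit patterns:
>>> (1e16).hex()
'0x1.1c37937e08000p+53'
>>> (1e16 + 1.0).hex()    # tie rounded down: same value as 1e16
'0x1.1c37937e08000p+53'
>>> (1e16 + 1.1).hex()    # rounded up to the next representable value
'0x1.1c37937e08001p+53'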
10^16 = 0x002386f26fc10000 is exactly representable as a double precision floating point number. The next representable number is 1e16+2. 1e16+1 is correctly rounded to 1e16, and 1e16+1.1 is correctly rounded to 1e16+2. Check the output of this C program:
#include <stdio.h>
#include <math.h>
#include <stdint.h>

int main()
{
    uint64_t i = 10000000000000000ULL;
    double a = (double)i;
    double b = nextafter(a, 1.0e20);  // next representable number
    printf("I=0x%016llx\n", i);       // 10^16 in hex
    printf("A=%a (%.4f)\n", a, a);    // double representation
    printf("B=%a (%.4f)\n", b, b);    // next double
}
Output:
I=0x002386f26fc10000
A=0x1.1c37937e08p+53 (10000000000000000.0000)
B=0x1.1c37937e08001p+53 (10000000000000002.0000)
Let's decode some floats, and see what's actually going on! I'm going to use Common Lisp, which has a handy function for getting at the significand (a.k.a mantissa) and exponent of a floating-point number without needing to twiddle any bits. All floats used are IEEE double-precision floats.
> (integer-decode-float 1.0d0)
4503599627370496
-52
1
That is, if we consider the value stored in the significand as an integer, it is the maximum power of 2 available (4503599627370496 = 2^52), scaled down (2^-52). (It isn't stored as 1 with an exponent of 0 because it's simpler for the significand to never have zeros on the left, and this allows us to skip representing the leftmost 1 bit and have more precision. Numbers not in this form are called denormal.)
Let's look at 1e16.
> (integer-decode-float 1d16)
5000000000000000
1
1
Here we have the representation (5000000000000000) * 2^1. Note that the significand, despite being a nice round decimal number, is not a power of 2; this is because 1e16 is not a power of 2. Every time you multiply by 10, you are multiplying by 2 and 5; multiplying by 2 is just incrementing the exponent, but multiplying by 5 is an "actual" multiplication, and here we've multiplied by 5 16 times.
5000000000000000 = 10001110000110111100100110111111000001000000000000000 (base 2)
Observe that this is a 53-bit binary number, as it should be since double floats have a 53-bit significand.
But the key to understanding the situation is that the exponent is 1. (The exponent being small is an indication that we are getting close to the limits of precision.) This means that the float value is 2^1 = 2 times this significand.
Now, what happens when we try to represent adding 1 to this number? Well, we need to represent 1 at the same scale. But the smallest change we can make in this number is exactly 2, because the least significant bit of the significand has value 2!
That is, if we increment the significand, making the smallest possible change, we get
5000000000000001 = 10001110000110111100100110111111000001000000000000001 (base 2)
and when we apply the exponent, we get 2 * 5000000000000001 = 10000000000000002, which is exactly the value you observed. You can only have either 10000000000000000 or 10000000000000002, and 10000000000000001.1 is closer to the latter.
(Note that the issue here isn't even that decimal numbers aren't exact in binary! There's no binary "repeating decimals" here, and there's plenty of 0 bits on the right end of the significand — it's just that your input neatly falls just beyond the lowest bit.)
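A rough Python equivalent of integer-decode-float for a double (a sketch; it assumes a normal, positive value):
import math

m, e = math.frexp(1e16)          # m is in [0.5, 1) and 1e16 == m * 2**e
print(int(m * 2**53), e - 53)    # 5000000000000000 1, matching the Lisp decode above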
With numpy, you can see the next larger and smaller representable IEEE floating point number:
>>> import numpy as np
>>> huge=1e100
>>> tiny=1e-100
>>> np.nextafter(1e16,huge)
10000000000000002.0
>>> np.nextafter(1e16,tiny)
9999999999999998.0
So:
>>> (np.nextafter(1e16,huge)-np.nextafter(1e16,tiny))/2.0
2.0
And:
>>> 1.1>2.0/2
True
Therefore 1e16 + 1.1 is correctly rounded to the next larger IEEE representable number of 10000000000000002.0
As is:
>>> 1e16+1.0000000000000005
1.0000000000000002e+16
and 1e16-(something slightly larger than 1) is rounded down by 2 to the next smaller IEEE number:
>>> 1e16-1.0000000000000005
9999999999999998.0
Keep in mind that 32-bit vs 64-bit Python is irrelevant. It is the size of the IEEE format used that matters. Also keep in mind that the larger the magnitude of the number, the larger the epsilon value (basically, the spread between the next larger and next smaller representable IEEE values).
You can see this in bits as well:
>>> import struct
>>> def f_to_bits(f): return struct.unpack('<Q', struct.pack('<d', f))[0]
...
>>> def bits_to_f(bits): return struct.unpack('<d', struct.pack('<Q', bits))[0]
...
>>> bits_to_f(f_to_bits(1e16)+1)
1.0000000000000002e+16
>>> bits_to_f(f_to_bits(1e16)-1)
9999999999999998.0

A "round"ed number multiplied by 0.01 results in x.y00000000000001 and not x.y?

The reason I'm asking this is because there is a validation in OpenERP that it's driving me crazy:
>>> round(1.2 / 0.01) * 0.01
1.2
>>> round(12.2 / 0.01) * 0.01
12.200000000000001
>>> round(122.2 / 0.01) * 0.01
122.2
>>> round(1222.2 / 0.01) * 0.01
1222.2
As you can see, the second round is returning an odd value.
Can someone explain to me why is this happening?
This has in fact nothing to do with round; you can witness the exact same problem if you just do 1220 * 0.01:
>>> 1220*0.01
12.200000000000001
What you see here is a standard floating point issue.
You might want to read what Wikipedia has to say about floating point accuracy problems:
The fact that floating-point numbers cannot precisely represent all real numbers, and that floating-point operations cannot precisely represent true arithmetic operations, leads to many surprising situations. This is related to the finite precision with which computers generally represent numbers.
Also see:
Numerical analysis
Numerical stability
A simple example of numerical instability with floating point:
the numbers are finite. Let's say we keep 4 digits after the decimal point in a given computer or language.
Multiplying 0.0001 by 0.0001 would give something smaller than 0.0001, which therefore cannot be stored at all!
In this case, if you calculate (0.0001 x 0.0001) / 0.0001 = 0.0001, this simple computer will fail to be accurate because it multiplies first and only divides afterwards. In JavaScript, dividing with fractions leads to similar inaccuracies.
The float type that you are using stores binary floating point numbers. Not every decimal number is exactly representable as a float. In particular there is no exact representation of 1.2 or 0.01, so the actual number stored in the computer will differ very slightly from the value written in the source code. This representation error can cause calculations to give slightly different results from the exact mathematical result.
It is important to be aware of the possibility of small errors whenever you use floating point arithmetic, and write your code to work well even when the values calculated are not exactly correct. For example, you should consider rounding values to a certain number of decimal places when displaying them to the user.
You could also consider using the decimal type, which stores decimal floating-point numbers. If you use decimal then 1.2 can be stored exactly. However, working with decimal will reduce the performance of your code. You should only use it if exact representation of decimal numbers is important. You should also be aware that decimal does not mean that you'll never have any problems; for example, 1/3 still has no exact representation as a decimal.
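A minimal sketch with Python's decimal module (just to illustrate the trade-off; the values are taken from the question):
from decimal import Decimal

# Stored exactly as decimals, the identity from the question holds:
print(Decimal("12.2") / Decimal("0.01") * Decimal("0.01") == Decimal("12.2"))  # True
# With binary floats it does not:
print(12.2 / 0.01 * 0.01 == 12.2)                                              # False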
There is a loss of accuracy from the division due to the way floating point numbers are stored, so you see that this identity doesn't hold
>>> 12.2 / 0.01 * 0.01 == 12.2
False
bArmageddon has provided a bunch of links which you should read, but I believe the takeaway message is: don't expect floats to give exact results unless you fully understand the limits of the representation.
In particular, don't use floats to represent amounts of money! That is a pretty common mistake.
Python also has the decimal module, which may be useful to you
Others have answered your question and mentioned that many numbers don't have an exact binary fractional representation. If you are accustomed to working only with decimal numbers, it can seem deeply weird that a nice, "round" number like 0.01 could be a non-terminating number in some other base. In the spirit of "seeing is believing," here's a little Python program that will print out a binary representation of any number to any desired number of digits.
from decimal import Decimal

n = Decimal("0.01")  # the number to print the binary equivalent of
m = 1000             # maximum number of digits to print

p = -1
r = []
w = int(n)
n = abs(n) - abs(w)
while n and -p < m:
    s = Decimal(2) ** p
    if n >= s:
        r.append("1")
        n -= s
    else:
        r.append("0")
    p -= 1

print("%s%s.%s%s" % ("-" if w < 0 else "", bin(abs(w))[2:],
                     "".join(r), "..." if n else ""))

Safest way to convert float to integer in python?

Python's math module contains handy functions like floor & ceil. These functions take a floating point number and return the nearest integer below or above it. However these functions return the answer as a floating point number. For example:
import math
f=math.floor(2.3)
Now f returns:
2.0
What is the safest way to get an integer out of this float, without running the risk of rounding errors (for example if the float is the equivalent of 1.99999), or should I perhaps use another function altogether?
All integers that can be represented by floating point numbers have an exact representation. So you can safely use int on the result. Inexact representations occur only if you are trying to represent a rational number with a denominator that is not a power of two.
That this works is not trivial at all! It's a property of the IEEE floating point representation that int∘floor = ⌊⋅⌋ if the magnitude of the numbers in question is small enough, but different representations are possible where int(floor(2.3)) might be 1.
To quote from Wikipedia,
Any integer with absolute value less than or equal to 2^24 can be exactly represented in the single-precision format, and any integer with absolute value less than or equal to 2^53 can be exactly represented in the double-precision format.
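A quick Python sketch of both points (the floored value converts to int exactly, and 2^53 is where doubles stop distinguishing adjacent integers):
import math

print(int(math.floor(2.3)))              # 2: converting the floored value to int is safe here
print(float(2**53 + 1) == float(2**53))  # True: beyond 2**53, adjacent integers collide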
Using int(your_non_integer_number) will nail it.
print int(2.3) # "2"
print int(math.sqrt(5)) # "2"
You could use the round function. If you use no second parameter (number of decimal places) then I think you will get the behavior you want.
IDLE output.
>>> round(2.99999999999)
3
>>> round(2.6)
3
>>> round(2.5)
3
>>> round(2.4)
2
Combining two of the previous results, we have:
int(round(some_float))
This converts a float to an integer fairly dependably.
That this works is not trivial at all! It's a property of the IEEE floating point representation that int∘floor = ⌊⋅⌋ if the magnitude of the numbers in question is small enough, but different representations are possible where int(floor(2.3)) might be 1.
This post explains why it works in that range.
In a double, you can represent 32-bit integers without any problems. There cannot be any rounding issues. More precisely, doubles can represent all integers between and including -2^53 and 2^53.
Short explanation: A double can store up to 53 binary digits. When you require more, the number is padded with zeroes on the right.
It follows that 53 ones is the largest number that can be stored without padding. Naturally, all (integer) numbers requiring fewer digits can be stored accurately.
Adding one to 111(omitted)111 (53 ones) yields 100...000 (53 zeroes). As we know, we can store 53 digits, so the rightmost zero becomes padding.
This is where 2^53 comes from.
More detail: We need to consider how IEEE-754 floating point works.
  1 bit     11 / 8     52 / 23      <- number of bits, double / single precision
[ sign  |  exponent  |  mantissa ]
The number is then calculated as follows (excluding special cases that are irrelevant here):
(-1)^sign × 1.mantissa × 2^(exponent - bias)
where bias = 2^(exponent bits - 1) - 1, i.e. 1023 and 127 for double and single precision respectively.
Knowing that multiplying by 2^X simply shifts all bits X places to the left, it's easy to see that for an integer, all the mantissa bits that end up to the right of the decimal point must be zero.
Any integer except zero has the following form in binary:
1x...x where the x-es represent the bits to the right of the MSB (most significant bit).
Because we excluded zero, there will always be an MSB that is one, which is why it's not stored. To store the integer, we must bring it into the aforementioned form: (-1)^sign × 1.mantissa × 2^(exponent - bias).
That's the same as shifting the bits over the decimal point until only the MSB remains to the left of the decimal point. All the bits to the right of the decimal point are then stored in the mantissa.
From this, we can see that we can store at most 52 binary digits apart from the MSB.
It follows that the highest number where all bits are explicitly stored is
111(omitted)111. That's 53 ones (52 stored + the implicit 1) in the case of doubles.
For this, we need to set the exponent such that the decimal point is shifted 52 places. If we were to increase the exponent by one, we would no longer know the digit just to the left of the decimal point.
111(omitted)111x.
By convention, it's 0. Setting the entire mantissa to zero, we receive the following number:
100(omitted)00x. = 100(omitted)000.
That's a 1 followed by 53 zeroes, 52 stored and 1 added due to the exponent.
It represents 2^53, which marks the boundary (both negative and positive) between which we can accurately represent all integers. If we wanted to add one to 2^53, we would have to set the implicit zero (denoted by the x) to one, but that's impossible.
If you need to convert a string float to an int you can use this method.
Example: '38.0' to 38
In order to convert this to an int you can cast it as a float then an int. This will also work for float strings or integer strings.
>>> int(float('38.0'))
38
>>> int(float('38'))
38
Note: This will strip any numbers after the decimal.
>>> int(float('38.2'))
38
math.floor will always return an integer number and thus int(math.floor(some_float)) will never introduce rounding errors.
The rounding error might already be introduced in math.floor(some_large_float), though, or even when storing a large number in a float in the first place. (Large numbers may lose precision when stored in floats.)
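For example (a rough sketch; the constant is only for illustration):
>>> int(float(2**60 + 1))   # the +1 is already lost when the integer is stored as a float
1152921504606846976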
Another code sample to convert a real/float to an integer using variables.
"vel" is a real/float number that gets converted up to the next highest integer, "newvel".
import math, os, sys
import arcpy, arcpy.da
.
.
with arcpy.da.SearchCursor(densifybkp, [floseg, vel, Length]) as cursor:
    for row in cursor:
        curvel = float(row[1])
        newvel = int(math.ceil(curvel))
Since you're asking for the 'safest' way, I'll provide another answer other than the top answer.
An easy way to make sure you don't lose any precision is to check if the values would be equal after you convert them.
if int(some_value) == some_value:
some_value = int(some_value)
If the float is 1.0 for example, 1.0 is equal to 1. So the conversion to int will execute. And if the float is 1.1, int(1.1) equates to 1, and 1.1 != 1. So the value will remain a float and you won't lose any precision.
