Conversion from string to float of numerical values (scientific notation) - python

I am using Python to read a file and convert numerical values written as strings to floats. I observe a weird conversion:
a="-5.970471694E+02"
b = float(a)
b
>> -597.0471694
bb = np.float64(a)
bb
>> -597.04716940000003
e="-5.970471695E+02"
ee = np.float64(e)
ee
>> -597.0471695
ee-bb
>> -9.9999965641472954e-08
What is the reason for the trailing "0000003" at the end of bb? Why don't I observe the same thing for ee? Is this really a problem? I think this issue is due to floating-point accuracy, but the result seems to be perturbed before I even start to use the variables...

What is the reason for the trailing "0000003" at the end of bb? Why don't I observe the same thing for ee?
b and bb have identical values (try evaluating b == bb). The difference comes down to how they are displayed by the interpreter: numpy and plain Python use different default formatting rules, so the numpy scalar shows more trailing digits of the same underlying value than the plain Python float does.
Is this really a problem?
Since the actual values of b and bb are identical, the answer is almost certainly no. If the display differences bother you, you can use np.set_printoptions to control how numpy floats are represented in the interpreter. If you use IPython, you can also use the %precision magic to control how regular Python floats are printed.
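A quick check of both claims (np.set_printoptions affects how arrays are displayed, so the scalar is wrapped in a one-element array here):
import numpy as np

b = float("-5.970471694E+02")
bb = np.float64("-5.970471694E+02")

print(b == bb)  # True -- the stored values are identical

np.set_printoptions(precision=8)  # at most 8 digits after the point
print(np.array([bb]))             # [-597.0471694]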

Both float and np.float64 use a binary representation of the number, and both store the approximation that is caused by converting the number from base 10 to base 2. The string "-5.970471694E+02" has no exact binary representation, so the nearest representable double is stored instead, and the exact decimal expansion of that stored value contains the trailing ...03. In other words, it is a rounding error from converting a decimal number to a binary number; b and bb hold the same value, and the two displays simply print a different number of its digits.
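To see the stored value exactly, you can convert the float to a Decimal, which performs an exact binary-to-decimal conversion (a quick probe, not something needed in production code):
from decimal import Decimal

# Decimal(float) converts the binary double exactly, exposing the
# rounding error picked up when the string was parsed.
x = float("-5.970471694E+02")
print(Decimal(x))  # the exact decimal expansion of the stored double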

Related

Almost correct output, but not quite right. Math help in Python

I am trying to match an expected output of "13031.157014219536" exactly, and I have made 3 attempts to compute the value with different methods, detailed below, coming extremely close to the value but not close enough. What is happening in these code snippets that is causing the deviation? Is it rounding error in the calculation? Did I do something wrong?
Attempts:
item_worth = 8000
years = range(10)
for year in years:
    item_worth += (item_worth * 0.05)
print(item_worth)

value = item_worth * (1 + ((0.05)**10))
print(value)

cost = 8000
for x in range(10):
    cost *= 1.05
print(cost)
Expected Output:
13031.157014219536
Actual Outputs:
13031.157014219529
13031.157014220802
13031.157014219538
On most machines, floats are represented as fp64, i.e. double-precision floats.
You can check the precision of those for your number this way (not a method to be used for real computation, just out of curiosity):
import struct

struct.pack('d', 13031.157014219536)
# b'\xf5\xbc\n\x19\x94s\xc9#' -- the bytes representing that number in fp64

# Or, in a more humanly understandable way:
struct.unpack('q', struct.pack('d', 13031.157014219536))[0]
# 4668389568658717941
# That number has no meaning, except that this integer is represented
# by the same bytes as your float.

# Now, let's see what float is "next in line":
struct.unpack('d', struct.pack('q', 4668389568658717941 + 1))[0]
# 13031.157014219538
Note that this code works on most machines, but it is not reliable. First of all, it relies on the fact that the significant bits are not all 1s; otherwise it would give a totally unrelated number. Secondly, it assumes that integers and floats are stored little-endian. But well, it gave me what I wanted.
That gives us the information that the smallest float bigger than 13031.157014219536 is 13031.157014219538.
(Or, said more accurately for this kind of conversation: the smallest float bigger than 13031.157014219536 whose representation is not the same as the representation of 13031.157014219536 has the same representation as 13031.157014219538)
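On Python 3.9 and later you can get the same information portably, without the struct tricks, using math.nextafter and math.ulp:
import math

x = 13031.157014219536

print(math.nextafter(x, math.inf))  # 13031.157014219538, the next double up
print(math.ulp(x))                  # ~1.819e-12, the float spacing at this magnitude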
So, my point is that you are flirting with the representation limit. You can't expect the result of 10 chained operations to be any more accurate.
I could also have shown this by noting that the biggest power of 2 smaller than your number is 8192 = 2¹³.
So that 13 is the exponent of your float in its representation. And since you have 53 significant bits, the precision of such a number is 2^(13-53+1) = 1.8×10⁻¹² (which is indeed also the result of 13031.157014219538 - 13031.157014219536). Hence the reason why, in decimal, 12 decimal places are printed. But not all combinations of them can exist, and the last one is not insignificant, yet not fully significant either.
If your computation is the result of 10 such operations, you could even have an error 10 times bigger than that before having the right to complain :D

Why does my program only print the first few characters of e rather than the whole number?

e = str(2.7182818284590452353602874713526624977572470936999595749669676277240766303535475945713821785251664274)
print(e)
Output:
2.718281828459045
Why does the code only print out the first few characters of e instead of the whole string?
A string str has characters, but a number (be it an int or a float) just has a value.
If you do this:
e_first_100 = '2.7182818284590452353602874713526624977572470936999595749669676277240766303535475945713821785251664274'
print(e_first_100)
You'll see all digits printed, because they are just characters in a string; it could also have been the first 100 characters of 'War and Peace', and you would not expect any of those to get lost either.
Since e is not an integer value, you can't use int here, so you'll have to use float. But Python uses a finite number of bits to represent such a number, while there are infinitely many real numbers (in fact, there are infinitely many values between any two real numbers). So a clever scheme has to be used to represent at least the ones you use most often, with a limited amount of precision.
You often don't notice the lack of precision, but try something like .1 + .1 + .1 == .3 in Python and you'll see that it can pop up in common situations.
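You can check this in a REPL:
>>> .1 + .1 + .1 == .3
False
>>> .1 + .1 + .1
0.30000000000000004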
Your computer already has a built-in way to represent these floating point numbers, using either 32 or 64 bits, although many languages (Python included) do offer additional ways of representing floats that aren't part of the way your computer works and allow a bit more precision. By default, Python uses these standard representations of real numbers.
So, if you then do this:
e1 = float(e_first_100)
print(e1)
e2 = 2.7182818284590452353602874713526624977572470936999595749669676277240766303535475945713821785251664274
print(e2)
Both result in a value that, when you print it, looks like:
2.718281828459045
Because that's the precision up to which the number is (more or less) accurately represented.
If you need to use e in a more precise manner, you can use Python's own representation:
from decimal import Decimal
e3 = Decimal(e_first_100)
print(e3)
That looks promising, but even Decimal only has limited precision, although it's better than standard floats:
print(e2 * 3)
print(e3 * Decimal(3))
The difference:
8.154845485377136
8.154845485377135706080862414
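The precision of Decimal arithmetic is configurable, though. A sketch raising it via getcontext, reusing the e_first_100 string defined above:
from decimal import Decimal, getcontext

getcontext().prec = 110    # the default is 28 significant digits
e3 = Decimal(e_first_100)  # construction is exact regardless of prec
print(e3 * 3)              # the multiplication now keeps up to 110 digits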
To expand on Grismar's answer: you don't see the extra digits because the default string representation of floats cuts off at that point, since going further wouldn't usually be useful, but while the object is a float, the underlying data is still there.
To get a string with the data, you could provide a fixed precision to some larger amount of digits, for example
In [2]: e = format(
...: 2.7182818284590452353602874713526624977572470936999595749669676277240766303535475945713821785251664274,
...: ".50f",
...: )
In [3]: e
Out[3]: '2.71828182845904509079559829842764884233474731445312'
which gives us the first 50 digits. This is of course not particularly useful with floats, as the loss of precision picks up the further you go.

Python: Float to Decimal conversion and subsequent shifting may give wrong result

When I convert a float to decimal.Decimal in Python and afterwards call Decimal.shift it may give me completely wrong and unexpected results depending on the float. Why is this the case?
Converting 123.5 and shifting it:
from decimal import Decimal
a = Decimal(123.5)
print(a.shift(1)) # Gives expected result
The code above prints the expected result of 1235.0.
If I instead convert and shift 123.4:
from decimal import Decimal
a = Decimal(123.4)
print(a.shift(1)) # Gives UNexpected result
it gives me 3.418860808014869689941406250E-18 (approx. 0) which is completely unexpected and wrong.
Why is this the case?
Note:
I understand the floating-point imprecision because of the representation of floats in memory. However, I can't explain why this should give me such a completely wrong result.
Edit:
Yes, in general it would be best not to convert floats to decimals, but to convert strings to decimals instead. However, that is not the point of my question. I want to understand why the shifting after float conversion gives such a completely wrong result. If I print(Decimal(123.4)), it gives 123.400000000000005684341886080801486968994140625, so after shifting I would expect 1234.00000000000005684341886080801486968994140625, not nearly zero.
You need to change the Decimal constructor input to use strings instead of floats.
a = Decimal('123.5')
print(a.shift(1))
a = Decimal('123.4')
print(a.shift(1))
or
a = Decimal(str(123.5))
print(a.shift(1))
a = Decimal(str(123.4))
print(a.shift(1))
The output will be as expected.
>>> 1235.0
>>> 1234.0
Decimal instances can be constructed from integers, strings, floats, or tuples. Construction from an integer or a float performs an exact conversion of the value of that integer or float.
For floats, Decimal calls Decimal.from_float()
Note that Decimal.from_float(0.1) is not the same as Decimal('0.1'). Since 0.1 is not exactly representable in binary floating point, the value is stored as the nearest representable value which is 0x1.999999999999ap-4. The exact equivalent of the value in decimal is 0.1000000000000000055511151231257827021181583404541015625.
Internally, the Python decimal library converts a float into two integers representing the numerator and denominator of a fraction that yields the float.
n, d = abs(123.4).as_integer_ratio()
It then calculates k, the base-2 exponent, as the bit length of the (power-of-two) denominator:
k = d.bit_length() - 1
From there, the coefficient of the decimal number is obtained by multiplying the numerator by 5 raised to the power k (since n/2^k = n·5^k/10^k, the coefficient is n·5^k with decimal exponent -k):
coeff = str(n*5**k)
The resulting values are used to create a new Decimal object, with sign, coefficient, and exponent as constructor arguments.
For the float 123.5 these values are
>>> 0 1235 -1
and for the float 123.4 these values are
0 123400000000000005684341886080801486968994140625 -45
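You can confirm these triples directly with Decimal.as_tuple():
from decimal import Decimal

print(Decimal(123.5).as_tuple())
# DecimalTuple(sign=0, digits=(1, 2, 3, 5), exponent=-1)

t = Decimal(123.4).as_tuple()
print(t.sign, len(t.digits), t.exponent)  # 0 48 -45 -- a 48-digit coefficient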
So far, nothing is amiss.
However, when you call shift, the decimal library has to calculate how much to pad the number with zeroes, based on the shift you've specified. To do this, it internally subtracts the length of the coefficient from the context precision.
amount_to_pad = context.prec - len(coeff)
The default precision is only 28, and with a float like 123.4 the coefficient becomes much longer than that, as noted above. This makes the amount to pad negative, which produces the very tiny number you saw.
A way around this is to increase the precision to at least the magnitude of the exponent plus the number of digits you started with (45 + 4 = 49).
from decimal import Decimal, getcontext
getcontext().prec = 49
a = Decimal(123.4)
print(a)
print(a.shift(1))
>>> 123.400000000000005684341886080801486968994140625
>>> 1234.000000000000056843418860808014869689941406250
The documentation for shift hints that the precision is important for this calculation:
The second operand must be an integer in the range -precision through precision.
However it does not explain this caveat for floats that don't play nice with memory limitations.
I would expect this to raise some kind of error and prompt you to change your input or increase the precision, but at least you know!
@MarkDickinson noted in a comment above that you can view this Python bug tracker issue for more information: https://bugs.python.org/issue7233

How do I put more precision in my imported data?

I'm trying to improve the precision by 10 digits for the columns marked in red (Image 1). I tried pd.set_options('display.precision', 10), but it didn't work. Any ideas? The only line of code I have used is to import the data from Excel: df = pd.read_excel('')
It looks like your numbers are actually really big integers, so pandas converts them to float as opposed to integer when they get imported (they don't fit in the standard int64 data type).
Precision applies to decimals, i.e. if you have something like 0.234566, how many decimal places to show. But in your case it's not decimal places you're looking for, but how many significant digits to display before cutting over to scientific notation.
Since it's a float data type, you have to control float format, i.e. pd.options.display.float_format
To keep scientific notation, but limit digits to 10,
set display options to this:
pd.options.display.float_format = '{:.10e}'.format
It will display 10 digits after the decimal point, before the e. You can change 10 to any other number of digits.
To understand how scientific notation works in python, try this:
print("{:.2e}".format(12345678))
Basically it will format 12345678 limited to 2 digits and will display as 1.23e+07 (which means 1.23 * 10^7)
If you don't want any scientific notation, and just want to display the long integer, use this:
pd.options.display.float_format = '{:.0f}'.format
In this case 0 means show 0 decimals, and it will show the full integer part of the number.
e.g.:
print("{:.0f}".format(1234567887298739))      # 1234567887298739
print("{:.2f}".format(1.234567887298739487))  # 1.23
Caveat: if the number is too long, it will start rounding after a while. A 64-bit float only carries about 15 to 17 significant decimal digits, so 10 digits is fine, but digits much beyond that are not meaningful; float precision has a hard limit.
Note: in all cases your underlying data stays the same; just the formatting changes.
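If you need the integers exactly, rather than just displayed differently, one option is to read the affected columns as strings, so pandas never converts them to float in the first place. A sketch; 'id' is a hypothetical column name standing in for your own:
import pandas as pd

# dtype=str preserves every digit of the column exactly.
df = pd.read_excel('Book1.xlsx', dtype={'id': str})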
By "improve precision", I assume you mean increase the number of digits that are displayed. In this case, pd.set_option('display.precision', 10) should give you what you want. I have an example as well.
import pandas as pd
c = pd.read_excel("Book1.xlsx")
print(c)
pd.set_option('display.precision', 10)
print()
print(c)
pd.set_option('display.precision', 20)
print()
print(c)
This yields the following.
https://i.stack.imgur.com/XHpdO.png
Note that pandas actually does have the full values in memory; it is just printing fewer digits in accordance with the default settings. As an example, you can verify this by indexing your data frame.
>>> frame = pd.DataFrame([123.456789101112])
>>> frame
            0
0  123.456789
>>> frame.iloc[0, 0]
123.456789101112

Safest way to convert float to integer in python?

Python's math module contains handy functions like floor and ceil. These functions take a floating point number and return the nearest integer below or above it. However, these functions return the answer as a floating point number. For example:
import math
f=math.floor(2.3)
Now f returns:
2.0
What is the safest way to get an integer out of this float, without running the risk of rounding errors (for example if the float is the equivalent of 1.99999), or should I perhaps use another function altogether?
All integers that can be represented by floating point numbers have an exact representation. So you can safely use int on the result. Inexact representations occur only if you are trying to represent a rational number with a denominator that is not a power of two.
That this works is not trivial at all! It's a property of the IEEE floating point representation that int∘floor = ⌊⋅⌋ if the magnitude of the numbers in question is small enough, but different representations are possible where int(floor(2.3)) might be 1.
To quote from Wikipedia,
Any integer with absolute value less than or equal to 2²⁴ can be exactly represented in the single-precision format, and any integer with absolute value less than or equal to 2⁵³ can be exactly represented in the double-precision format.
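Note that on Python 3, math.floor and math.ceil already return an int, so the extra conversion only matters on Python 2:
import math

f = math.floor(2.3)
print(f, type(f))            # 2 <class 'int'> on Python 3
print(int(math.floor(2.3)))  # works the same on Python 2 and 3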
Using int(your_non_integer_number) will nail it.
import math

print(int(2.3))           # 2
print(int(math.sqrt(5)))  # 2
You could use the round function. If you use no second parameter (the number of digits after the decimal point), then I think you will get the behavior you want.
IDLE output (from Python 2; note that Python 3 rounds halves to the nearest even integer, so round(2.5) returns 2 there):
>>> round(2.99999999999)
3
>>> round(2.6)
3
>>> round(2.5)
3
>>> round(2.4)
2
Combining two of the previous results, we have:
int(round(some_float))
This converts a float to an integer fairly dependably.
That this works is not trivial at all! It's a property of the IEEE floating point representation that int∘floor = ⌊⋅⌋ if the magnitude of the numbers in question is small enough, but different representations are possible where int(floor(2.3)) might be 1.
This post explains why it works in that range.
In a double, you can represent 32-bit integers without any problems; there cannot be any rounding issues. More precisely, doubles can represent all integers between and including -2⁵³ and 2⁵³.
Short explanation: A double can store up to 53 binary digits. When you require more, the number is padded with zeroes on the right.
It follows that 53 ones is the largest number that can be stored without padding. Naturally, all (integer) numbers requiring less digits can be stored accurately.
Adding one to 111(omitted)111 (53 ones) yields 100(omitted)000 (a 1 followed by 53 zeroes). As we know, we can store 53 digits, which makes the rightmost zero padding.
This is where 2⁵³ comes from.
More detail: We need to consider how IEEE-754 floating point works.
[ sign | exponent | mantissa ]
  1 bit  11 / 8     52 / 23     <- bits, double / single precision
The number is then calculated as follows (excluding special cases that are irrelevant here):
(-1)^sign × 1.mantissa × 2^(exponent - bias)
where bias = 2^(e-1) - 1, with e the number of exponent bits, i.e. 1023 and 127 for double/single precision respectively.
Knowing that multiplying by 2^X simply shifts all bits X places to the left, it's easy to see that for an integer, all mantissa bits that end up to the right of the decimal point must be zero.
Any integer except zero has the following form in binary:
1x...x where the x-es represent the bits to the right of the MSB (most significant bit).
Because we excluded zero, there will always be an MSB that is one, which is why it's not stored. To store the integer, we must bring it into the aforementioned form: (-1)^sign × 1.mantissa × 2^(exponent - bias).
That is the same as shifting the bits over the decimal point until only the MSB is left of it. All the bits right of the decimal point are then stored in the mantissa.
From this, we can see that we can store at most 52 binary digits apart from the MSB.
It follows that the highest number where all bits are explicitly stored is
111(omitted)111, i.e. 53 ones (52 stored plus the implicit 1) in the case of doubles.
For this, we need to set the exponent such that the decimal point is shifted 52 places. If we were to increase the exponent by one more, we could no longer know the digit just left of the decimal point.
111(omitted)111x.
By convention, it's 0. Setting the entire mantissa to zero, we receive the following number:
100(omitted)00x. = 100(omitted)000.
That's a 1 followed by 53 zeroes, 52 stored and 1 added due to the exponent.
It represents 2⁵³, which marks the boundary (both negative and positive) between which we can accurately represent all integers. If we wanted to add one to 2⁵³, we would have to set the implicit zero (denoted by the x) to one, but that's impossible.
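You can observe this boundary directly:
print(float(2**53) == float(2**53 + 1))  # True: 2**53 + 1 rounds back down
print(float(2**53 + 2))                  # 9007199254740994.0, representable again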
If you need to convert a string float to an int you can use this method.
Example: '38.0' to 38
In order to convert this to an int you can cast it as a float then an int. This will also work for float strings or integer strings.
>>> int(float('38.0'))
38
>>> int(float('38'))
38
Note: This will strip any numbers after the decimal.
>>> int(float('38.2'))
38
math.floor will always return an integer number and thus int(math.floor(some_float)) will never introduce rounding errors.
The rounding error might already be introduced in math.floor(some_large_float), though, or even when storing a large number in a float in the first place. (Large numbers may lose precision when stored in floats.)
Another code sample to convert a real/float to an integer using variables.
"vel" is a real/float number and converted to the next highest INTEGER, "newvel".
import arcpy.math, os, sys, arcpy.da
.
.
with arcpy.da.SearchCursor(densifybkp,[floseg,vel,Length]) as cursor:
for row in cursor:
curvel = float(row[1])
newvel = int(math.ceil(curvel))
Since you're asking for the 'safest' way, I'll provide another answer other than the top answer.
An easy way to make sure you don't lose any precision is to check if the values would be equal after you convert them.
if int(some_value) == some_value:
    some_value = int(some_value)
If the float is 1.0, for example, 1.0 is equal to 1, so the conversion to int will execute. And if the float is 1.1, int(1.1) equates to 1, and 1.1 != 1, so the value will remain a float and you won't lose any precision.
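Floats also have a built-in is_integer() method that expresses the same check; a small sketch (the helper name to_int_if_exact is hypothetical):
def to_int_if_exact(some_value):
    # float.is_integer() is True exactly when there is no fractional part.
    return int(some_value) if some_value.is_integer() else some_value

print(to_int_if_exact(1.0))  # 1
print(to_int_if_exact(1.1))  # 1.1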
