I am working with Python and have two strings which contain float-style values. For example:
a = '0.0000001'
b = '0.0003599'
I am looking for a way to simply add or subtract the two values and get the result as a new string, keeping the decimal precision intact. I have tried converting them to floats and using a + b, etc., but this seems to be inconsistent.
So the resulting string in this example would be:
c = '0.0003600'
I've been over a number of examples/methods and not quite found the answer. Any help appreciated.
Looks like the decimal module should do what you want:
>>> from decimal import *
>>> a = '0.0000001'
>>> b = '0.0003599'
>>> Decimal(a)+Decimal(b)
Decimal('0.0003600')
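Subtraction works the same way, and calling str() on the result gets you back to a plain string (a quick sketch continuing the session above):
>>> Decimal(b) - Decimal(a)
Decimal('0.0003598')
>>> str(Decimal(a) + Decimal(b))
'0.0003600'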
The mpmath library can do arbitrary-precision float arithmetic:
>>> from mpmath import mpf
>>> a = mpf('0.0003599')
>>> b = mpf('0.0000001')
>>> print(a + b)
0.00036
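Note that mpmath works to a fixed (but adjustable) binary precision, 53 bits by default; to actually get more precision you raise mp.dps (decimal places). A minimal sketch:
>>> from mpmath import mp, mpf
>>> mp.dps = 30            # work with roughly 30 significant decimal digits
>>> print(mpf('0.0003599') + mpf('0.0000001'))
0.00036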
Converting to a float is fine for all cases I can think of:
>>> str(sum(float(i) for i in (a, b)))
'0.00036'
>>> str(sum(map(float, (a, b))))
'0.00036'
>>> str(float(a) + float(b))
'0.00036'
Related
I am trying to use the Decimal module to do some FX calculations (instead of using floats).
However when I do the following, I do not get the expected value output:
>>> from decimal import Decimal
>>> x = Decimal(1.3755)
>>> y = Decimal(1.2627)
>>> z = y/(1/x)
>>> print(z)
1.736843849999999839084452447
>>>
The output should be: 1.73684385
I thought using Decimals would correct this issue. How can I resolve this rounding issue?
from decimal import Decimal
x = Decimal('1.3755')
y = Decimal('1.2627')
print(y/(1/x))
# 1.736843850000000000000000000
Use a str to construct the Decimal instead of a float.
Decimal objects need to be constructed from strings:
>>> from decimal import Decimal
>>> x = Decimal('1.3755')
>>> y = Decimal('1.2627')
>>> z = y/(1/x)
>>> print(z)
1.736843850000000000000000000
>>>
Otherwise they would still be treated as regular Python floats.
As @schwobaseggl mentioned:
Note that the float is passed to the Decimal constructor, so the precision is lost there already. The Decimal constructor has no chance of knowing the desired precision. The result is still a decimal object, but with the precision of the passed float.
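To see this concretely, compare the two constructors (a quick sketch; the long output is abbreviated):
>>> from decimal import Decimal
>>> Decimal(1.3755) == Decimal('1.3755')
False
>>> Decimal(1.3755)   # carries the float's binary rounding error
Decimal('1.3754999999999999449...')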
You can use "{0:.nf}" formatting (with n the number of decimal places) to round the numbers, like below:
print(z)
# 1.736843849999999839084452447
print("{0:.8f}".format(z))
# 1.73684385
Try this; it will help you. Set your precision level:
>>> from decimal import *
>>> getcontext().prec = 6
>>> Decimal(1) / Decimal(7)
Decimal('0.142857')
>>> getcontext().prec = 28
>>> Decimal(1) / Decimal(7)
Decimal('0.1428571428571428571428571429')
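If you only need the different precision for one calculation, decimal.localcontext() gives you a temporary context without disturbing the global one:
>>> from decimal import Decimal, localcontext
>>> with localcontext() as ctx:
...     ctx.prec = 6
...     print(Decimal(1) / Decimal(7))
...
0.142857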
import sympy as sy
x = sy.symbols('x')
def f2(x, t, l):
    return 5*sy.log(x) + 14388/((273 + t)*x) - sy.log((1.1910*10**8)/l + 1)
print(sy.solve(f2(x,35,80),x))
The result is:
OverflowError: Python int too large to convert to C long
How to solve this problem?
Please check your equation. There does not appear to be a solution:
>>> eq=f2(x,35,80);eq
5*log(x) - 14.2134480713559 + 327/(7*x)
There is a minimum in the function, and at that point the function is convex up and positive:
>>> solve(eq.diff(x))
[327/35]
>>> eq.subs(x,_[0]).n()
1.95961247568333
>>> eq.diff(x,2).subs(x,Rational(327,35))
6125/106929
So if the constant were a little more negative, everything would work:
>>> eq.subs(eq.atoms(Float).pop(),-20)
5*log(x) - 20 + 327/(7*x)
>>> ans=solve(_)
>>> [i.n(2) for i in ans]
[44., 3.3]
I have an issue with floating-point numbers:
>>> from decimal import Decimal
>>> a = 0.4812
>>> b = 0.4813
>>> a - b
-9.999999999998899e-05
>>> a = Decimal(0.4812)
>>> b = Decimal(0.4813)
>>> a - b
Decimal('-0.000099999999999988986587595718447118997573852539062500')
How can I get exactly -0.0001?
You need to pass the numbers in as strings to the Decimal constructor; if you use float literals, they've already lost precision before the Decimal object gets constructed.
>>> a = Decimal('0.4812')
>>> b = Decimal('0.4813')
>>> a - b
Decimal('-0.0001')
To illustrate more clearly:
>>> Decimal('0.4812')
Decimal('0.4812')
>>> Decimal(0.4812)
Decimal('0.481200000000000016608936448392341844737529754638671875')
If you want to round it, you can use: round(-0.000099999999999988986587595718447118997573852539062500, 4)
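If you would rather stay within decimal, Decimal.quantize does the rounding without a detour through float (a sketch using the values above):
>>> from decimal import Decimal
>>> (Decimal(0.4812) - Decimal(0.4813)).quantize(Decimal('0.0001'))
Decimal('-0.0001')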
Python has no limit on the size of its integers, but it does have limits on its floats.
How do I go about calculating the floor of very large floats?
I am trying to calculate floor(A*B), where A is a small irrational number such as sqrt(2), e, sqrt(5), etc., and B is a very large number in the range of 10^1000.
You can use the decimal module:
>>> from decimal import Decimal
>>> from math import floor, sqrt
>>>
>>> d1 = Decimal(sqrt(2))   # note: math.sqrt returns a 64-bit float, so d1 only carries double precision
>>> d2 = Decimal(10**1000)
>>>
>>> result = d1 * d2
>>> floor(result)
You can also set the precision for the decimal context using getcontext().prec in order to get a more precise result.
>>> from decimal import *
>>> getcontext().prec = 100
>>> d1 = Decimal(2).sqrt()
>>> d1
Decimal('1.414213562373095048801688724209698078569671875376948073176679737990732478462107038850387534327641573')
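Putting the two together for the original floor(A*B) problem (a sketch; the precision of 1010 digits is an assumption, chosen to comfortably cover the ~1001-digit result):
>>> from decimal import Decimal, getcontext
>>> getcontext().prec = 1010
>>> result = Decimal(2).sqrt() * Decimal(10) ** 1000
>>> len(str(int(result)))      # int() truncates, which is floor for positive values
1001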
64-bit floats don't go above ~10^308, so your 10^1000 is definitely not going to fit, regardless of multiplication by any constant. (No constant of order one, like sqrt(2), can bring 10^1000 anywhere near 10^308.) So your procedure isn't going to work with floats.
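You can see the limit directly:
>>> import sys
>>> sys.float_info.max
1.7976931348623157e+308
>>> float(10**1000)
Traceback (most recent call last):
  ...
OverflowError: int too large to convert to float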
Consider using the decimal module. E.g.:
import decimal
import math
a = decimal.Decimal("10") ** 10000
b = decimal.Decimal("0.123")
math.floor(a*b)
Good morning,
I'm reading two numbers from a FITS file (representing the integer and fractional parts of a single number), converting them to long doubles (128-bit on my machine), and then summing them up.
The result is not as precise as I would expect from using 128-bit floats. Here is the code:
a_int = np.longdouble(read_header_key(fits_file, 'I'))
print "I %.25f" % a_int, type(a_int)
a_float = np.longdouble(read_header_key(fits_file, 'F'))
print "F %.25f" % a_float, a_float.dtype
a = a_int + a_float
print "TOT %.25f" % a, a.dtype
and here's the answer I get:
I 55197.0000000000000000000000000 <type 'numpy.float128'>
F 0.0007660185200000000195833 float128
TOT 55197.0007660185219720005989075 float128
The result departs from what I would expect (55197.0007660185200000000195833) after just 11 decimal digits (16 significant digits in total). I would expect much better precision from 128-bit floats. What am I doing wrong?
This result was reproduced on a Mac machine and on a 32-bit Linux machine (in that case, the dtype was float96, but the values were exactly the same).
Thanks in advance for your help!
Matteo
The problem lies in your printing of the np.longdouble. When you format using %f, Python casts the result to a float (64 bits) before printing.
Here:
>>> a_int = np.longdouble(55197)
>>> a_float = np.longdouble(76601852) / 10**11
>>> b = a_int + a_float
>>> '%.25f' % b
'55197.0007660185219720005989075'
>>> '%.25f' % float(b)
'55197.0007660185219720005989075'
>>> b * 10**18
5.5197000766018519998e+22
Note that on my machine, I only get a bit more precision with longdouble compared with an ordinary double (20 decimal places instead of 15). So it may be worth seeing whether the Decimal module is better suited to your application: Decimal handles arbitrary-precision decimal floating-point numbers with no loss of precision.
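One way to print the full value without that implicit cast (a sketch; np.format_float_positional is available in NumPy 1.14+) is to let NumPy format the longdouble itself:
import numpy as np

b = np.longdouble(55197) + np.longdouble(76601852) / 10**11
# Dragon4-based formatting operates on the longdouble directly, no 64-bit detour
print(np.format_float_positional(b, precision=25))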
My guess is that the %f modifier constructs a float from your longdouble object and uses that when creating the format string.
>>> import numpy as np
>>> np.longdouble(55197)
55197.0
>>> a = np.longdouble(55197)
>>> b = np.longdouble(0.0007660185200000000195833)
>>> a
55197.0
>>> b
0.00076601852000000001958
>>> a + b
55197.00076601852
>>> type(a+b)
<type 'numpy.float128'>
>>> a + b == 55197.00076601852
False
As a side note, even repr doesn't print enough digits to reconstruct the object. This is simply because you can't write a float literal precise enough to pass to your longdouble: the literal is parsed as a 64-bit float before np.longdouble ever sees it.
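A sketch of the usual workaround: pass the value as a string, so it is parsed directly into extended precision and never truncated to a 64-bit float first (assuming a platform where longdouble is genuinely wider than double, as in the question):
>>> import numpy as np
>>> b_str = np.longdouble('0.0007660185200000000195833')   # parsed at extended precision
>>> b_lit = np.longdouble(0.0007660185200000000195833)     # goes through a 64-bit float first
>>> b_str == b_lit
False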