How can I get a big number in non-scientific notation? - python

I just tried
>>> 2.17 * 10**27
2.17e+27
>>> str(2.17 * 10**27)
'2.17e+27'
>>> "%i" % 2.17 * 10**27
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
OverflowError: cannot fit 'long' into an index-sized integer
>>> "%f" % 2.17 * 10**27
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
OverflowError: cannot fit 'long' into an index-sized integer
>>> "%l" % 2.17 * 10**27
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: incomplete format
Now I ran out of ideas. I want to get
2170000000000000000000000000
How can I print such big numbers? (I don't care if it's a Python 2.7+ solution or a Python 3.X solution)

You are getting your operator precedence wrong. You are formatting 2.17, then multiplying that by a long integer:
>>> r = "%f" % 2.17
>>> r
'2.170000'
>>> r * 10 ** 27
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
OverflowError: cannot fit 'long' into an index-sized integer
Put parentheses around the multiplication:
>>> "%f" % (2.17 * 10**27)
'2169999999999999971109634048.000000'
This is one of the drawbacks of overloading the modulus operator for string formatting; the newer Format String syntax used by the str.format() method, together with the Format Specification Mini-Language it employs (which can also be used with the format() function), neatly skirts that issue. I'd use format() for this case:
>>> format(2.17 * 10**27, 'f')
'2169999999999999971109634048.000000'
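For completeness (this line is not in the original transcript), the str.format() spelling uses the same format spec and gives the same result:
>>> '{:f}'.format(2.17 * 10**27)
'2169999999999999971109634048.000000'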

Related

Dividing and multiplying Decimal objects in Python

In the following code, both coeff1 and coeff2 are Decimal objects. When I check their type using type(coeff1), I get <class 'decimal.Decimal'>, but when I made a test script and checked Decimal objects there, I get decimal.Decimal without the word "class".
coeff1 = system[i].normal_vector.coordinates[i]
coeff2 = system[m].normal_vector.coordinates[i]
x = coeff2/coeff1
print(type(x))
system.xrow_add_to_row(x,i,m)
Another issue: when I change the first argument of the function xrow_add_to_row to negative x:
system.xrow_add_to_row(-x,i,m)
I get an InvalidOperation error at a line above the changed code:
<ipython-input-11-ce84b250bafa> in compute_triangular_form(self)
93 coeff1 = system[i].normal_vector.coordinates[i]
94 coeff2 = system[m].normal_vector.coordinates[i]
---> 95 x = coeff2/coeff1
96 print(type(coeff1))
97 system.xrow_add_to_row(-x,i,m)
InvalidOperation: [<class 'decimal.DivisionUndefined'>]
But then again, in a test script I use negative numbers with Decimal objects and it works fine. Any idea what the problem might be? Thanks.
decimal.DivisionUndefined is raised when you attempt to divide zero by zero. It's a bit confusing, as you get a different exception when only the divisor is zero (decimal.DivisionByZero):
>>> from decimal import Decimal as D
>>> D(0) / D(0)
Traceback (most recent call last):
File "<pyshell#1>", line 1, in <module>
D(0) / D(0)
decimal.InvalidOperation: [<class 'decimal.DivisionUndefined'>]
>>> D(1) / D(0)
Traceback (most recent call last):
File "<pyshell#2>", line 1, in <module>
D(1) / D(0)
decimal.DivisionByZero: [<class 'decimal.DivisionByZero'>]
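So in the question's loop, both coeff1 and coeff2 must have been zero Decimals for that row. A minimal sketch of a guard, using stand-in values in place of the question's system[...] lookups:
from decimal import Decimal

coeff1 = Decimal('0')   # stand-in for system[i].normal_vector.coordinates[i]
coeff2 = Decimal('0')   # stand-in for system[m].normal_vector.coordinates[i]

if coeff1 == 0:
    # a zero pivot makes the division undefined (or a plain division by zero);
    # swap in a row with a non-zero coefficient instead of dividing
    print('zero pivot coefficient, pick a different row')
else:
    x = coeff2 / coeff1
    print(type(x))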

decimal.InvalidOperation, DivisionImpossible for very large numbers

Using python 3.5.2
>>> from decimal import Decimal
>>> Decimal('12') % Decimal('0.01')
Decimal('0.00')
>>> Decimal('234567') % Decimal('0.01')
Decimal('0.00')
Works as expected. But...
>>> Decimal('7316717653133062491922511967442657474235534919493496983520312774506326239578318016984801869478851843858615607891129494954595017379583319528532088055111254069874715852386305071569329096329522744304355766896648950445244523161731856403098711121722383113622298934233803081353362766142828064444866452387493035890729629049156044077239071381051585930796086670172427121883998797908792274921901699720888093776657273330010533678812202354218097512545405947522435258490771167055601360483958644670632441572215539753697817977846174064955149290862569321978468622482839722413756570560574902614079729686524145351004748216637048440319989000889524345065854122758866688116427171479924442928230863465674813919123162824586178664583591245665294765456828489128831426076900422421902267105562632111110937054421750694165896040807198403850962455444362981230987879927244284909188845801561660979191338754992005240636899125607176060588611646710940507754100225698315520005593572972571636269561882670428252483600823257530420752963450') % Decimal('0.01')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
decimal.InvalidOperation: [<class 'decimal.DivisionImpossible'>]
EDIT: This is the smallest number I found that can cause this error:
>>> Decimal(10**26) % Decimal('0.01')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
decimal.InvalidOperation: [<class 'decimal.DivisionImpossible'>]
Why does Decimal(very_large_int) % Decimal('0.01') give this error? I thought that Decimal is able to handle very large numbers?
As L3viathan answered, the problem is that a result (not the result—this is the "hidden part" I mention in a comment) has overrun the available precision.
The hidden part is more obvious if we use Python2:
Traceback (most recent call last):
File "/tmp/d.py", line 24, in <module>
print(big % Decimal('0.01'))
File "/usr/local/lib/python2.7/decimal.py", line 1460, in __mod__
remainder = self._divide(other, context)[1]
File "/usr/local/lib/python2.7/decimal.py", line 1381, in _divide
'quotient too large in //, % or divmod')
File "/usr/local/lib/python2.7/decimal.py", line 3873, in _raise_error
raise error(explanation)
InvalidOperation: quotient too large in //, % or divmod
Essentially, a % b is implemented by doing both division and modulus together (a la Algorithm D in Knuth vol. 2; for a C implementation restricted to two fullwords, see the qdivrem.c code I wrote in the early 2000s). The library code therefore needs two extra digits, the number of digits to the right of the decimal point in Decimal('0.01'), to compute the intermediate quotient (calculating the actual number of digits needed is not as simple as it is for big below, because the exponents have to be taken into account).
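You can see those two digits at work with the smallest failing case from the question's edit (a quick sketch, not part of the original routine below):
>>> from decimal import Decimal, localcontext
>>> Decimal(10**26) % Decimal('0.01')   # quotient would need 29 digits, prec is 28
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
decimal.InvalidOperation: [<class 'decimal.DivisionImpossible'>]
>>> with localcontext() as ctx:
...     ctx.prec += 2
...     Decimal(10**26) % Decimal('0.01')
...
Decimal('0.00')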
The decimal library was reimplemented directly in C for Python3, which hides the detail, but the cure is the same for both: extend the precision. Here's an example source routine that catches the exception and tries again, though with magic constant 2.
from __future__ import print_function
import decimal
Decimal = decimal.Decimal
import traceback

big = Decimal(
    '731671765313306249192251196744265747423553491949349698352031277'
    '4506326239578318016984801869478851843858615607891129494954595017379'
    '5833195285320880551112540698747158523863050715693290963295227443043'
    '5576689664895044524452316173185640309871112172238311362229893423380'
    '3081353362766142828064444866452387493035890729629049156044077239071'
    '3810515859307960866701724271218839987979087922749219016997208880937'
    '7665727333001053367881220235421809751254540594752243525849077116705'
    '5601360483958644670632441572215539753697817977846174064955149290862'
    '5693219784686224828397224137565705605749026140797296865241453510047'
    '4821663704844031998900088952434506585412275886668811642717147992444'
    '2928230863465674813919123162824586178664583591245665294765456828489'
    '1288314260769004224219022671055626321111109370544217506941658960408'
    '0719840385096245544436298123098787992724428490918884580156166097919'
    '1338754992005240636899125607176060588611646710940507754100225698315'
    '520005593572972571636269561882670428252483600823257530420752963450')

try:
    print(big % Decimal('0.01'))
except decimal.DecimalException:
    traceback.print_exc()
    print('')
    ctx = decimal.getcontext()
    print('failed because precision was', ctx.prec, 'and big is',
          len(big.as_tuple().digits), 'digits long')
    print('trying again with 2 more digits')
    with decimal.localcontext() as ctx:
        ctx.prec = len(big.as_tuple().digits) + 2
        try:
            print(big % Decimal('0.01'))
        except decimal.DecimalException:
            traceback.print_exc()
With Python2:
$ python2 /tmp/d.py
Traceback (most recent call last):
File "/tmp/d.py", line 24, in <module>
print(big % Decimal('0.01'))
File "/usr/local/lib/python2.7/decimal.py", line 1460, in __mod__
remainder = self._divide(other, context)[1]
File "/usr/local/lib/python2.7/decimal.py", line 1381, in _divide
'quotient too large in //, % or divmod')
File "/usr/local/lib/python2.7/decimal.py", line 3873, in _raise_error
raise error(explanation)
InvalidOperation: quotient too large in //, % or divmod
failed because precision was 28 and big is 1000 digits long
trying again with 2 more digits
0.00
With Python3:
$ python3 /tmp/d.py
Traceback (most recent call last):
File "/tmp/d.py", line 24, in <module>
print(big % Decimal('0.01'))
decimal.InvalidOperation: [<class 'decimal.DivisionImpossible'>]
failed because precision was 28 and big is 1000 digits long
trying again with 2 more digits
0.00
Note that dividing by a very large number is actually easier: it's the division by 0.01 that is causing problems here. If the exponent on the divisor were at least 1000 - 28 (1e972 or larger), we would not have the problem.
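That claim is easy to check in a fresh session, a sketch with 10**999 standing in for a 1000-digit value:
>>> from decimal import Decimal, getcontext
>>> getcontext().prec   # the default
28
>>> Decimal(10**999) % Decimal('1e972')   # quotient fits in 28 digits
Decimal('0')
>>> Decimal(10**999) % Decimal('1e971')   # quotient would need 29 digits
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
decimal.InvalidOperation: [<class 'decimal.DivisionImpossible'>]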
Decimal is based on the Decimal Arithmetic specification. You can see here that "Division impossible" means that
the integer result of a divide-integer or remainder operation had too many digits (would be longer than precision).
This precision is something you can adjust:
>>> decimal.getcontext().prec=10000
>>> Decimal('7316717653133062491922511967442657474235534919493496983520312774506326239578318016984801869478851843858615607891129494954595017379583319528532088
... 0551112540698747158523863050715693290963295227443043557668966489504452445231617318564030987111217223831136222989342338030813533627661428280644448664523874
... 9303589072962904915604407723907138105158593079608667017242712188399879790879227492190169972088809377665727333001053367881220235421809751254540594752243525
... 8490771167055601360483958644670632441572215539753697817977846174064955149290862569321978468622482839722413756570560574902614079729686524145351004748216637
... 0484403199890008895243450658541227588666881164271714799244429282308634656748139191231628245861786645835912456652947654568284891288314260769004224219022671
... 0556263211111093705442175069416589604080719840385096245544436298123098787992724428490918884580156166097919133875499200524063689912560717606058861164671094
... 0507754100225698315520005593572972571636269561882670428252483600823257530420752963450') % Decimal('0.01')
Decimal('0.00')

TypeError: %d format: a number is required, not numpy.float64

Trying to plot, I got the following error from matplotlib:
TypeError: %d format: a number is required, not numpy.float64
This is the complete traceback (I've modified path names):
Traceback (most recent call last):
File ".../plotmod.py", line 154, in _plot
fig.autofmt_xdate()
File ".../local/lib/python2.7/site-packages/matplotlib/figure.py", line 426, in autofmt_xdate
for label in ax.get_xticklabels():
File ".../local/lib/python2.7/site-packages/matplotlib/axes.py", line 2620, in get_xticklabels
self.xaxis.get_ticklabels(minor=minor))
File ".../local/lib/python2.7/site-packages/matplotlib/axis.py", line 1118, in get_ticklabels
return self.get_majorticklabels()
File ".../local/lib/python2.7/site-packages/matplotlib/axis.py", line 1102, in get_majorticklabels
ticks = self.get_major_ticks()
File ".../local/lib/python2.7/site-packages/matplotlib/axis.py", line 1201, in get_major_ticks
numticks = len(self.get_major_locator()())
File ".../local/lib/python2.7/site-packages/matplotlib/dates.py", line 595, in __call__
'RRuleLocator estimated to generate %d ticks from %s to %s: exceeds Locator.MAXTICKS * 2 (%d) ' % (estimate, dmin, dmax, self.MAXTICKS * 2))
TypeError: %d format: a number is required, not numpy.float64
Where could this error be coming from?
Here are the results of some basic research I've done:
the error is not the real error, but instead one which is caused while trying to format the RuntimeError message that matplotlib.dates raises
the formatting error was due to python's %d, which, it seems, cannot handle numpy.float64 instances
the value of that data type is either estimate, which is some internal calculation result of matplotlib, or MAXTICKS, which is probably a constant, so I tend to believe it's the former
the calculation of estimate involves date2num which should return legitimate values, and _get_unit() and _get_interval(), which go deep enough into the module, and this is where my research stops.
I can easily reproduce the error in my full software framework, but I can't isolate it into a short reproduction. I think it tends to happen when the entire axis that should be plotted spans a very short range (say, up to a few minutes).
Any thoughts?
It seems you have a NaN or infinity that you are trying to format as an integer which raises the error (there's no such thing as a NaN or Inf for the int datatype).
In [1]: import numpy as np
In [2]: '%d' % np.float64(42)
Out[2]: '42'
In [3]: '%d' % np.float64('nan')
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython console> in <module>()
TypeError: %d format: a number is required, not numpy.float64
In [4]: '%d' % np.float64('inf')
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython console> in <module>()
TypeError: %d format: a number is required, not numpy.float64
You could go into the matplotlib file that generates the error (or use a Python debugger) and change the format string to use %f, which works with all numpy floats ('%f' % np.float64('nan') returns 'nan').
Convert the numpy float to a Python float manually:
np.asscalar(np.float64(42))
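A side note that is not from the original answer: np.asscalar has since been deprecated (and later removed), and the item() method does the same conversion to a plain Python float:
>>> import numpy as np
>>> np.float64(42).item()
42.0
>>> type(np.float64(42).item()) is float
True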

On Windows, how to convert a timestamps BEFORE 1970 into something manageable?

Summary: "negative" timestamps on Mac work fine, but on Windows I can't convert them into something usable.
Details:
I can have a file on Windows whose modification time is in, say, 1904:
$ ls -l peter.txt
-rw-r--r-- 1 sync Administ 1 Jan 1 1904 peter.txt
In python:
>>> import os
>>> ss = os.stat('peter.txt')
>>> ss.st_mtime
-2082816000.0
Great. But I can't figure out how to turn that negative timestamp into a date/time string. On Mac this code works fine.
>>> datetime.fromtimestamp(-2082816000)
datetime.datetime(1904, 1, 1, 0, 0)
And from here I can do whatever I want in terms of formatting.
But on Windows it fails:
>>> datetime.fromtimestamp(-2082816000)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: timestamp out of range for platform localtime()/gmtime() function
And trying anything else I can think of fails:
>>> time.gmtime(-2082816000)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: (22, 'Invalid argument')
The wonderful python-dateutil package doesn't seem to have this facility. I've looked through the time, calendar, and datetime modules. Any help?
>>> datetime.datetime(1970, 1, 1) + datetime.timedelta(seconds=-2082816000)
datetime.datetime(1904, 1, 1, 8, 0)
Using Ignacio's idea, this function will convert any timestamp to a proper naive datetime object:
def convert_timestamp_to_datetime(timestamp):
    import datetime as dt
    if timestamp >= 0:
        return dt.datetime.fromtimestamp(timestamp)
    else:
        return dt.datetime(1970, 1, 1) + dt.timedelta(seconds=int(timestamp))
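A quick check with the value from the question (the output matches Ignacio's snippet above; note that the negative branch counts from the 1970-01-01 UTC epoch, so unlike fromtimestamp it is not adjusted to local time):
>>> convert_timestamp_to_datetime(-2082816000)
datetime.datetime(1904, 1, 1, 8, 0)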

Float value is equal to -1.#IND

A function returns a list which contains float values. If I plot this list, I see that some of the float values are equal to -1.#IND. I also checked the type of those -1.#IND values, and they are also of float type.
But how should I understand these -1.#IND values? What do they represent or stand for?
-1.#IND means indefinite, the result of a floating point equation that doesn't have a solution. On other platforms, you'd get NaN instead, meaning 'not a number'; -1.#IND is specific to Windows. On Python 2.5 on Linux I get:
>>> 1e300 * 1e300 * 0
-nan
You'll only find this on Python versions 2.5 and earlier, on Windows platforms. The float() code was improved in Python 2.6 and consistently uses float('nan') for such results, mostly because there was no way to turn 1.#INF and -1.#IND back into an actual float() instance again:
>>> repr(inf)
'1.#INF'
>>> float(_)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: invalid literal for float(): 1.#INF
>>> repr(nan)
'-1.#IND'
>>> float(_)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: invalid literal for float(): -1.#IND
On versions 2.6 and newer this has all been cleaned up and made consistent:
>>> 1e300 * 1e300 * 0
nan
>>> 1e300 * 1e300
inf
>>> 1e300 * 1e300 * -1
-inf
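If you need to detect such values in the returned list, math.isnan() and math.isinf() are available from Python 2.6 onward; on 2.5 the usual trick is that NaN is the only float not equal to itself:
>>> import math
>>> nan, inf = float('nan'), float('inf')
>>> math.isnan(nan), math.isinf(inf)
(True, True)
>>> nan != nan   # works even without math.isnan
True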
