I need to find a numpy.float64 value that is as close to zero as possible.
Numpy offers a couple of constants that allow me to do something similar:
np.finfo(np.float64).eps = 2.2204460492503131e-16
np.finfo(np.float64).tiny = 2.2250738585072014e-308
These are both reasonably small, but when I do this
>>> x = np.finfo(np.float64).tiny
>>> x / 2
1.1125369292536007e-308
the result is even smaller. When using an impromptu binary search I can get down to about 1e-323, before the value is rounded down to 0.0.
Is there a constant for this in numpy that I am missing? Alternatively, is there a right way to do this?
Use np.nextafter.
>>> import numpy as np
>>> np.nextafter(0, 1)
4.9406564584124654e-324
>>> np.nextafter(np.float32(0), np.float32(1))
1.4012985e-45
2^-1074 is the smallest positive float64 (a subnormal value).
2^-1074 ≈ 5 · 10^-324
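For illustration, a quick check that ties the two answers together (assuming standard IEEE 754 float64 behaviour; print is used because the scalar repr can vary between NumPy versions):
>>> import numpy as np
>>> smallest = np.nextafter(np.float64(0), np.float64(1))
>>> print(smallest == 2.0 ** -1074)   # smallest positive (subnormal) float64
True
>>> print(smallest / 2)               # halving it rounds down to zero
0.0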
I'm looking for a way to neatly show rounded floats of varying decimal length.
Example of what I'm looking for:
In: 0.0000000071234%
Out: 0.0000000071%
In: 0.00061231999999%
Out: 0.00061%
In: 0.149999999%
Out: 0.15%
One way to do it would be:
def dynamic_round(num):
    zeros = 2
    original = num
    while num < 0.1:
        num *= 10
        zeros += 1
    return round(original, zeros)
I'm sure however there is a much cleaner way to do the same thing.
Here's a way to do it without a loop:
import math

a = 0.003123
log10 = -int(math.log10(a))
res = round(a, log10 + 2)   # ==> 0.0031
This post answers your question with similar logic:
How can I format a decimal to always show 2 decimal places?
But just to clarify:
One way would be to use the built-in round() function, also mentioned in the documentation:
built-in functions: round()
>>> round(number[, ndigits])
Here ndigits refers to the precision after the decimal point and is optional.
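For instance, with the third example from the question above:
>>> round(0.149999999, 2)
0.15
>>> round(0.0000000071234, 10)
7.1e-09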
Alternatively, you can also use new format specifications
>>> from math import pi # pi ~ 3.141592653589793
>>> '{0:.2f}'.format(pi)
'3.14'
Here the number before the f gives the precision, and f indicates a float.
Another way to go here is to import numpy:
>>> import numpy
>>> a = 0.0000327123
>>> res = -int(numpy.log10(a))
>>> round(a, res + 2)
0.000033
numpy.log10() also takes an array as an argument, so if you have multiple values you can apply it to the whole array at once, as sketched below.
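A rough sketch of that vectorized use (the variable names here are just for illustration):
>>> import numpy as np
>>> values = np.array([0.0000000071234, 0.00061231999999, 0.149999999])
>>> digits = -np.log10(values).astype(int) + 2   # same "-int(log10(a)) + 2" idea, element-wise
>>> [round(float(v), int(d)) for v, d in zip(values, digits)]
[7.1e-09, 0.00061, 0.15]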
I am fairly new to Python. Is there a fast way to convert a decimal number to 16-bit fixed-point binary (1 sign bit, 7 integer bits, 8 fraction bits) and back in Python?
I would like to manipulate the binary and convert this manipulated binary back to decimal.
Example: 1.25 -> 00000001.01000000
Manipulate the first fraction bit (0 -> 1):
00000001.11000000 -> 1.75
Would really appreciate any help.
If you have N bits in the fractional part, then you just need to divide by 2^N, since the stored bit pattern is actually the real value multiplied by 2^N. Therefore with Q8.8, as in your case, you'll have to divide by 256.
For your 00000001.01000000 and 00000001.11000000 examples above:
>>> 0b0000000101000000/256.0
1.25
>>> 0b0000000111000000/256.0
1.75
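And for going the other way (float to raw Q8.8 pattern), here is a minimal sketch with plain Python ints; it assumes non-negative values, so no two's-complement handling:
>>> x = 1.25
>>> raw = int(round(x * 256))        # float -> Q8.8 raw bit pattern
>>> format(raw, '016b')
'0000000101000000'
>>> raw |= 0b0000000010000000        # set the first fraction bit
>>> format(raw, '016b')
'0000000111000000'
>>> raw / 256.0                      # raw pattern -> float
1.75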
You can use the Binary fractions package.
Example:
>>> from binary_fractions import Binary
>>> b = Binary(15.5)
>>> print(b)
0b1111.1
>>> Binary(1.25).lfill(8).rfill(8)
Binary(00000001.01000000, 0, False)
>>> Binary(1.75).lfill(8).rfill(8)
Binary(00000001.11000000, 0, False)
>>> Binary('0b01.110').lfill(8).rfill(8)
Binary(00000001.11000000, 0, False)
>>> Binary('0b01.110').lfill(8).rfill(8).string()
'00000001.11000000'
It has many more helper functions to manipulate binary strings such as: shift, add, fill, to_exponential, invert...
PS: Shameless plug, I'm the author of this package.
You can use fxpmath to do these calculations more simply.
Info about fxpmath:
https://github.com/francof2a/fxpmath
Your example could be solved like this:
from fxpmath import Fxp
x = Fxp(1.25, True, 16, 8) # signed=True, n_word=16, n_frac=8
x.bin(frac_dot=True)
out:
'00000001.01000000'
Now you can apply an OR mask to flip that 0 bit to 1:
y = x | Fxp('0b01.110', True, 16, 8)
print(y.bin(frac_dot=True))
print(y)
out:
'00000001.11000000'
1.75
numfi can convert a floating-point number to fixed point with a given word/fraction length, but directly manipulating the binary bits is not possible.
You can use bitwise logical operations like and/or/xor to modify bits as a workaround; for a computation-heavy program, bitwise operations should be faster than string manipulation.
>>> from numfi import numfi
>>> x = numfi(1.25,1,16,8)
>>> x
numfi([1.25]) s16/8-r/s
>>> x.bin
array(['0000000101000000'], dtype='<U16')
>>> x.bin_
array(['00000001.01000000'], dtype='<U17')
>>> y = x | 0b0000000010000000
>>> y
numfi([1.75]) s16/8-r/s
>>> y.bin_
array(['00000001.11000000'], dtype='<U17')
When I take the square root of -1 it gives me an error:
invalid value encountered in sqrt
How do I fix that?
from numpy import sqrt
arr = sqrt(-1)
print(arr)
To avoid the invalid value warning/error, the argument to numpy's sqrt function must be complex:
In [8]: import numpy as np
In [9]: np.sqrt(-1+0j)
Out[9]: 1j
As @AshwiniChaudhary pointed out in a comment, you could also use the cmath standard library:
In [10]: import cmath
In [11]: cmath.sqrt(-1)
Out[11]: 1j
I just discovered the convenience function numpy.lib.scimath.sqrt explained in the sqrt documentation. I use it as follows:
>>> from numpy.lib.scimath import sqrt as csqrt
>>> csqrt(-1)
1j
You need to use sqrt from the cmath module (part of the standard library):
>>> import cmath
>>> cmath.sqrt(-1)
1j
A more recent addition to the NumPy documentation describes numpy.emath.sqrt, which returns complex numbers when negative numbers are passed to it.
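For example (output shown with print, since the scalar repr can vary between NumPy versions):
>>> import numpy as np
>>> print(np.emath.sqrt(-1))
1j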
The square root of -1 is not a real number, but rather an imaginary number. IEEE 754 does not have a way of representing imaginary numbers.
numpy has support for complex numbers. I suggest you use that: http://docs.scipy.org/doc/numpy/user/basics.types.html
Others have probably suggested more desirable methods, but just to add to the conversation: you could always multiply the number less than 0 (the value you want the sqrt of, -1 in this case) by -1, then take the sqrt of that. Just remember that your result is imaginary.
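A quick sketch of that workaround (x here is just an example value):
>>> import numpy as np
>>> x = -1
>>> magnitude = np.sqrt(-1 * x)   # sqrt of the now-positive value
>>> print(magnitude * 1j)         # tack the imaginary unit back on by hand
1j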
When I perform the operation numpy.arctanh(x) for x >= 1, it returns nan, which is odd because when I perform the operation in Wolfram|alpha, it returns complex values, which is what I need for my application.
Does anyone know what I can do to keep Numpy from suppressing complex values?
Add +0j to your real inputs to make them complex numbers.
Numpy is following a variation of the maxim "Garbage in, Garbage out."
Float in, float out.
>>> import numpy as np
>>> np.sqrt(-1)
__main__:1: RuntimeWarning: invalid value encountered in sqrt
nan
Complex in, complex out.
>>> np.sqrt(-1+0j)
1j
>>> np.arctanh(24+0j)
(0.0416908044695255-1.5707963267948966j)
I have some math operations that produce a numpy array of results with about 8 significant figures. When I use tolist() on my array y_axis, it creates what I assume are 32-bit numbers.
However, I wonder if this is just garbage. I assume it is garbage, but it seems intelligent enough to change the last number so that rounding makes sense.
print "y_axis:",y_axis
y_axis = y_axis.tolist()
print "y_axis:",y_axis
y_axis: [-0.99636686 0.08357361 -0.01638707]
y_axis: [-0.9963668578012771, 0.08357361233570479, -0.01638706796138937]
So my question is: if this is not garbage, does using tolist actually help in accuracy for my calculations, or is Python always using the entire number, but just not displaying it?
When you call print y_axis on a numpy array, you are getting a truncated version of the numbers that numpy is actually storing internally. The way in which it is truncated depends on how numpy's printing options are set.
>>> arr = np.array([22/7, 1/13]) # init array
>>> arr # np.array default printing
array([ 3.14285714, 0.07692308])
>>> arr[0] # scalar default printing
3.1428571428571428
>>> np.set_printoptions(precision=24) # increase np.array print "precision"
>>> arr # np.array high-"precision" print
array([ 3.142857142857142793701541, 0.076923076923076927347012])
>>> float.hex(arr[0]) # actual underlying representation
'0x1.9249249249249p+1'
The reason it looks like you're "gaining accuracy" when you print out the .tolist()ed form of y_axis is that by default, more digits are printed when you call print on a list than when you call print on a numpy array.
In actuality, the numbers stored internally by either a list or a numpy array should be identical (and should correspond to the last line above, generated with float.hex(arr[0])), since numpy uses numpy.float64 by default, and Python float objects are 64-bit doubles as well.
My understanding is that numpy is not showing you the full precision to make the matrices lay out consistently. The list shouldn't have any more precision than its numpy.array counterpart:
>>> v = -0.9963668578012771
>>> a = numpy.array([v])
>>> a
array([-0.99636686])
>>> a.tolist()
[-0.9963668578012771]
>>> a[0] == v
True
>>> a.tolist()[0] == v
True