Binary operations on NumPy scalars automatically upcast to float64 - python

I want to do binary operations (like add and multiply) between np.float32 and built-in Python int and float, and get an np.float32 as the return type. However, the result gets automatically upcast to np.float64.
Example code:
>>> a = np.float32(5)
>>> a.dtype
dtype('float32')
>>> b = a + 2
>>> b.dtype
dtype('float64')
If I do this with an np.float128, b also becomes an np.float128. This is good, as it preserves precision. However, no upcasting to np.float64 is necessary to preserve precision in my example, yet it still occurs. Had I added 2.0 (a 64-bit Python float) to a instead of 2, the casting would make sense. But even then, I do not want it.
So my question is: How can I alter the casting done when applying a binary operator to a np.float32 and a builtin Python int/float? Alternatively, making single precision the standard in all calculations rather than double, would also count as a solution, as I do not ever need double precision. Other people have asked for this, and it seems that no solution has been found.
I know about numpy arrays and their dtypes. With arrays I get the wanted behavior, as an array always preserves its dtype. It is when I do an operation on a single element of an array that I get the unwanted behavior.
I have a vague idea of a solution, involving subclassing np.ndarray (or np.float32) and changing the value of __array_priority__. So far I have not been able to get it working.
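For reference, explicitly coercing the Python scalar does keep everything in float32, but that is exactly the boilerplate I am hoping to avoid:
import numpy as np

f32 = np.float32  # shorthand

a = f32(5)
b = a + f32(2)  # both operands are float32, so no promotion occurs
print(b.dtype)  # float32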
Why do I care? I am trying to write an n-body code using Numba. This is why I cannot simply do operations on the array as a whole. Changing all np.float64 to np.float32 gives a speedup of about a factor of 2, which is important. The np.float64-casting behavior ruins this speedup completely, as all operations on my np.float32 array are done in 64-bit precision and then downcast to 32-bit precision.

I'm not sure about the NumPy behavior, or how exactly you're trying to use Numba, but being explicit about the Numba types might help. For example, if you do something like this:
@jit
def foo(a):
    return a[0] + 2

a = np.array([3.3], dtype='f4')
foo(a)
The float32 value in a[0] is promoted to a float64 before the add operation (if you don't mind diving into LLVM IR, you can see this for yourself by running the code through the numba command with the --dump-llvm or --dump-optimized flag: numba --dump-optimized numba_test.py). However, by specifying the function signature, including the return type as float32:
@jit('f4(f4[:])')
def foo(a):
    return a[0] + 2
The value in a[0] is not promoted to float64, although the result is cast to a float64 so it can be converted to a Python float object when the function returns to Python land.
If you can allocate an array beforehand to hold the result, you can do something like this:
@jit
def foo():
    a = np.arange(1000000, dtype='f4')
    result = np.zeros(1000000, dtype='f4')
    for i in range(a.size):
        result[i] = a[i] + 2
    return result
Even though you're doing the looping yourself, the performance of the compiled code should be comparable to a NumPy ufunc, and no casts to float64 should occur (again, this can be verified by looking at the LLVM IR that Numba generates).

Related

Python numba returned data types when calculating MSE

I am using numba to calculate MSE. The inputs are images, which are read as numpy arrays of uint8. Each element is in 0-255.
When calculating the squared difference between two images, the Python-only function returns (expectedly) a uint8 result, but the same function compiled with numba returns int64.
@numba.jit(nopython=True)
def test1(var_a: np.ndarray, var_b: np.ndarray) -> float:
    return var_a - var_b

@numba.jit(nopython=True)
def test2(var_a: np.ndarray, var_b: np.ndarray) -> float:
    return (var_a - var_b) ** 2

def test3(var_a: np.ndarray, var_b: np.ndarray) -> float:
    return (var_a - var_b) ** 2
a = np.array([2, 2]).astype(np.uint8).reshape(1, 2)
b = np.array([255, 255]).astype(np.uint8).reshape(1, 2)
test1(a, b) # output: array([[3, 3]], dtype=uint8)
test2(a, b) # output: array([[64009, 64009]], dtype=int64)
test3(a, b) # output: array([[9, 9]], dtype=uint8)
What's unclear to me is why the Python-only code preserves the data type while the numba code promotes the returned type to int64.
For my purpose, the numba result is ideal, but I don't understand why. I'm trying to avoid needing to .astype(int) all of my images, since this will eat a lot of RAM, when I'm only interested in the result of the subtraction being int (i.e., not unsigned).
So, why does numba "fix" the datatype in test2()?
Numba is a JIT compiler that first uses static type inference to deduce the types of the variables, and then compiles the function before it can be called. This means all literals, like integers, are typed before anything runs. Numba chooses to set the type of integer literals to int64, so as to avoid overflows on 64-bit machines (and int32 on 32-bit machines).
This means var_a - var_b is evaluated as an array of uint8, as expected, while (var_a - var_b) ** 2 is like var_tmp ** np.int64(2), where var_tmp is of type uint8[:]. In this case, the Numba type inference system needs to do a type promotion, like in any statically typed language (e.g. C/C++). Like most languages, Numba chooses to do a relatively safe type promotion by casting the array to int64, because int64 includes all the possible values of uint8. In practice, the type promotion can be quite unsafe in pathological cases: for example, when you mix uint64 values with int64 ones, the result is a float64 with large but limited precision, and no warning is raised.
If you use (var_a - var_b) ** np.uint8(2), then the output type is the one you expect (i.e. uint8), because there is no type promotion.
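A quick check of that last point (a hedged sketch; the function name is made up, and the output is expected to stay uint8, wrapping modulo 256):
import numba
import numpy as np

@numba.njit
def sq_diff_u8(var_a, var_b):
    # Explicitly typing the exponent avoids the int64 promotion described above.
    return (var_a - var_b) ** np.uint8(2)

a = np.array([2, 2], dtype=np.uint8)
b = np.array([255, 255], dtype=np.uint8)
print(sq_diff_u8(a, b))  # expected: [9 9], dtype uint8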
Numpy, by contrast, uses dynamic type inference. Moreover, integers have a variable length in Python, so their type has to be determined by Numpy at runtime (not by CPython, which only defines the generic variable-sized int type). Numpy can thus adapt the type of integer literals based on their runtime value. For example, (np.uint8(7) * 1_000_000).dtype is int32 on my machine, while (np.uint8(7) * 100_000_000_000).dtype is int64 (because the right-most integer literal is typed int64, since it is too big for a 32-bit integer). This is something Numba cannot do because of JIT compilation [1]. Thus, the semantics differ a bit between Numba and Numpy. The type promotion rules should be the same, though (so that Numba results stay as close to Numpy's as possible).
A good practice is to explicitly type arrays so as to avoid sneaky overflows, in both Numpy and Numba. Casting integers to specific types is generally not needed, but it is also a good practice when the types are small and performance matters (e.g. integer arithmetic with intended overflow, as in hash computations).
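For instance, if the goal is signed results without converting whole images to int64 in memory, a widening cast inside the jitted function is one option. This is only a sketch under the assumption of uint8 inputs (the function name is made up); int32 is enough to hold any squared difference of uint8 values:
import numba
import numpy as np

@numba.njit
def sq_diff_i32(var_a, var_b):
    # Widen to a signed type first so 2 - 255 becomes -253 instead of wrapping.
    d = var_a.astype(np.int32) - var_b.astype(np.int32)
    return d * d  # at most 255**2 = 65025, which fits easily in int32

a = np.array([2, 2], dtype=np.uint8)
b = np.array([255, 255], dtype=np.uint8)
print(sq_diff_i32(a, b))  # [64009 64009]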
Note you can call your_function.inspect_types() to get additional information about the type inference (though it is not easy to read).
[1] In fact, Numba could type integer literals based on their value, but not variables. The thing is, it would be very unexpected for users to get different output types (and behaviour, due to possible overflows) when they change literals to runtime variables.

Python nan with complex arrays

Inserting a nan in Python into a complex numpy array gives some (for me) unexpected behavior:
>>> a = np.array([5 + 6*1j])
>>> a
array([5.+6.j])
>>> a[0] = np.nan
>>> a
array([nan+0.j])
I expected Python to write nan+nanj. For analyses it often might not matter, since np.isnan returns True for any complex number whose real and/or imaginary part is nan. However, I did not know this behavior, and when plotting the real and imaginary parts of my array it gave me the impression I had information in the imaginary part (however, there is none). A workaround is to write a[0] = np.nan + np.nan*1j. Can somebody explain the reason for this behavior to me?
The issue here is that when you create an array with complex values:
a = np.array([5+6*1j])
You've created an array of dtype complex:
a.dtype
# dtype('complex128')
So by assigning a value which only contains a real part, it will be converted to a complex value, and you will thus be inserting a number with an imaginary component equal to 0j, so:
complex(np.nan)
# (nan+0j)
Which explains the behaviour:
a[0] = np.array([np.nan])
print(a)
# [nan+0.j]
It probably has to do with NumPy's representation of nan:
NumPy uses the IEEE Standard for Binary Floating-Point for Arithmetic
(IEEE 754). This means that Not a Number is not equivalent to
infinity.
Essentially np.nan is a float. By setting a[0] = np.nan you are setting its value to a "real" float (but not changing the dtype of the array, which remains complex), so the imaginary part remains untouched as 0j.
That also explains why you can change the imaginary part by doing np.nan * 0j
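A short demonstration of both the behaviour and the workaround from the question:
import numpy as np

a = np.array([5 + 6j])
a[0] = np.nan  # only the real part becomes nan
print(a)  # [nan+0.j]
a[0] = complex(np.nan, np.nan)  # the workaround: a fully-nan complex
print(a)  # [nan+nanj]
print(np.isnan(a[0]))  # True either way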

Storing large numbers into numpy array

I have a dataset on which I'm trying to apply some arithmetical method.
The thing is, it gives me relatively large numbers, and when I do it with numpy, they're stored as 0.
The weird thing is, when I compute the numbers separately, they have an int value; they only become zeros when I compute them using numpy.
x = np.array([18, 30, 31, 31, 15])
10 * 150**x[0] / x[0]
Out[1]: 36298069767006890
vector = 10 * 150**x / x
vector
Out[2]: array([0, 0, 0, 0, 0])
I have of course checked their types:
type(10 * 150**x[0] / x[0]) == type(vector[0])
Out[3]: True
How can I compute these large numbers using numpy without seeing them turned into zeros?
Note that if we remove the factor of 10 at the beginning, the problem changes slightly (but I think it might be for a similar reason):
x = np.array([18, 30, 31, 31, 15])
150**x[0] / x[0]
Out[4]: 311075541538526549
vector = 150**x / x
vector
Out[5]: array([-329406144173384851, -230584300921369396,  224960293581823801,
       -224960293581823801, -368934881474191033])
The negative numbers indicate that the largest number of the int64 type has been exceeded, don't they?
As Nils Werner already mentioned, numpy's native C types cannot hold numbers that large, but Python itself can, since its int objects use an arbitrary-length implementation.
So what you can do is tell numpy not to convert the numbers to C types but to use the Python objects instead. This will be slower, but it will work.
In [14]: x = np.array([18,30,31,31,15], dtype=object)
In [15]: 150**x
Out[15]:
array([1477891880035400390625000000000000000000L,
191751059232884086668491363525390625000000000000000000000000000000L,
28762658884932613000273704528808593750000000000000000000000000000000L,
28762658884932613000273704528808593750000000000000000000000000000000L,
437893890380859375000000000000000L], dtype=object)
In this case the numpy array will not store the numbers themselves but references to the corresponding int objects. When you perform arithmetic operations they won't be performed on the numpy array but on the objects behind the references.
I think you're still able to use most of the numpy functions with this workaround but they will definitely be a lot slower than usual.
But that's what you get when you're dealing with numbers that large :D
Maybe somewhere out there is a library that can deal with this issue a little better.
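For the original expression, a sketch along these lines keeps everything in exact Python integers (note the floor division: true division would turn the results back into floats):
import numpy as np

x = np.array([18, 30, 31, 31, 15], dtype=object)
vector = 10 * 150**x // x  # elementwise ops fall back to Python's big ints
print(vector.dtype)  # object
print(vector[0] == 10 * 150**18 // 18)  # True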
Just for completeness, if precision is not an issue, you can also use floats:
In [19]: x = np.array([18,30,31,31,15], dtype=np.float64)
In [20]: 150**x
Out[20]:
array([ 1.47789188e+39, 1.91751059e+65, 2.87626589e+67,
2.87626589e+67, 4.37893890e+32])
150 ** 28 is way beyond what an int64 variable can represent (it's in the ballpark of 8e60, while the maximum possible value of an unsigned int64 is roughly 18e18).
Python uses an arbitrary-length integer implementation, but numpy doesn't.
As you deduced correctly, negative numbers are a symptom of an int overflow.
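You can check the relevant limits directly:
import numpy as np

print(np.iinfo(np.int64).max)  # 9223372036854775807, about 9.2e18
print(np.iinfo(np.uint64).max)  # 18446744073709551615, about 1.8e19
print(float(150) ** 28)  # about 8.5e+60, far beyond both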

Correct way to test for numpy.dtype

I'm looking at a third-party lib that has the following if-test:
if isinstance(xx_, numpy.ndarray) and xx_.dtype is numpy.float64 and xx_.flags.contiguous:
    xx_[:] = ctypes.cast(xx_.ctypes._as_parameter_, ctypes.POINTER(ctypes.c_double))
It appears that xx_.dtype is numpy.float64 always fails:
>>> xx_ = numpy.zeros(8, dtype=numpy.float64)
>>> xx_.dtype is numpy.float64
False
What is the correct way to test that the dtype of a numpy array is float64 ?
This is a bug in the lib.
dtype objects can be constructed dynamically, and NumPy does so all the time. There's no guarantee anywhere that they're interned, so constructing a dtype that already exists won't necessarily give you back the same object.
On top of that, np.float64 isn't actually a dtype; it's a… I don't know what these types are called, but the types used to construct scalar objects out of array bytes, which are usually found in the type attribute of a dtype, so I'm going to call it a dtype.type. (Note that np.float64 subclasses both NumPy's numeric tower types and Python's numeric tower ABCs, while np.dtype of course doesn't.)
Normally, you can use these interchangeably; when you use a dtype.type—or, for that matter, a native Python numeric type—where a dtype was expected, a dtype is constructed on the fly (which, again, is not guaranteed to be interned), but of course that doesn't mean they're identical:
>>> np.float64 == np.dtype(np.float64) == np.dtype('float64')
True
>>> np.float64 == np.dtype(np.float64).type
True
The dtype.type usually will be identical if you're using builtin types:
>>> np.float64 is np.dtype(np.float64).type
True
But two dtypes are often not:
>>> np.dtype(np.float64) is np.dtype('float64')
False
But again, none of that is guaranteed. (Also, note that np.float64 and float use the exact same storage, but are separate types. And of course you can also make a dtype('f8'), which is guaranteed to work the same as dtype(np.float64), but that doesn't mean 'f8' is, or even ==, np.float64.)
So, it's possible that constructing an array by explicitly passing np.float64 as its dtype argument will mean you get back the same instance when you check the dtype.type attribute, but that isn't guaranteed. And if you pass np.dtype('float64'), or you ask NumPy to infer it from the data, or you pass a dtype string for it to parse like 'f8', etc., it's even less likely to match. More importantly, you definitely will not get np.float64 back as the dtype itself.
So, how should it be fixed?
Well, the docs define what it means for two dtypes to be equal, and that's a useful thing, and I think it's probably the useful thing you're looking for here. So, just replace the is with ==:
if isinstance(xx_, numpy.ndarray) and xx_.dtype == numpy.float64 and xx_.flags.contiguous:
However, to some extent I'm only guessing that's what you're looking for. (The fact that it's checking the contiguous flag implies that it's probably going to go right into the internal storage… but then why isn't it checking C vs. Fortran order, or byte order, or anything else?)
Try:
x = np.zeros(8, dtype=np.float64)
print(x.dtype is np.dtype(np.float64))
is tests for the identity of two objects, i.e. whether they have the same id(). It is used, for example, to test is None, but it can give misleading results when testing integers or strings. In this case, though, there's a further problem: x.dtype and np.float64 are not even the same class.
isinstance(x.dtype, np.dtype) # True
isinstance(np.float64, np.dtype) # False
x.dtype.__class__ # numpy.dtype
np.float64.__class__ # type
np.float64 is actually a class: np.float64() produces 0.0, while x.dtype() produces an error.
In my interactive tests:
x.dtype is np.dtype(np.float64)
returns True. But I don't know if that's universally the case, or just the result of some sort of local caching. The dtype documentation mentions a dtype attribute:
dtype.num A unique number for each of the 21 different built-in types.
Both dtypes give 12 for this num.
x.dtype == np.float64
tests True.
Also, using type works:
x.dtype.type is np.float64 # True
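Putting the tests that work together:
import numpy as np

x = np.zeros(8, dtype=np.float64)
print(x.dtype == np.float64)  # True: dtype equality
print(x.dtype.type is np.float64)  # True: scalar-type identity
print(x.dtype.num, np.dtype(np.float64).num)  # 12 12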
When I import ctypes and do the cast (with your xx_) I get an error:
ValueError: setting an array element with a sequence.
I don't know enough about ctypes to understand what it is trying to do. It looks like it is doing a type conversion of the data pointer of xx_; xx_.ctypes._as_parameter_ is the same number as xx_.__array_interface__['data'][0].
In the numpy test code I find these dtype tests:
issubclass(arr.dtype.type, (nt.integer, nt.bool_))
assert_(dat.dtype.type is np.float64)
assert_equal(A.dtype.type, np.unicode_)
assert_equal(r['col1'].dtype.kind, 'i')
The numpy documentation also mentions
np.issubdtype(x.dtype, np.float64)
np.issubsctype(x, np.float64)
both of which use issubclass.
Further tracing of the C code suggests that x.dtype == np.float64 is evaluated as:
x.dtype.num == np.dtype(np.float64).num
That is, the scalar type is converted to a dtype, and the .num attributes are compared. The code is in scalarapi.c, descriptor.c, and multiarraymodule.c under numpy/core/src/multiarray.
I'm not sure when this API was introduced, but at least as of 2022 it looks like you can use numpy.issubdtype for the type checking part and therefore write:
if isinstance(arr, numpy.ndarray) and numpy.issubdtype(arr.dtype, numpy.floating):
...
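Note that this matches any float width, not just float64; if you specifically need 64-bit floats, compare dtypes with == instead:
import numpy as np

print(np.issubdtype(np.dtype('f4'), np.floating))  # True: float32 counts too
print(np.issubdtype(np.dtype('f8'), np.floating))  # True
print(np.issubdtype(np.dtype('i8'), np.floating))  # False
print(np.dtype('f8') == np.float64)  # True: exact-width check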

complex-valued math evaluations permitted in Python but not in numpy

Is this documented anywhere? Why such a drastic difference?
# Python 3.2
# numpy 1.6.2 using Intel's Math Kernel Library
>>> import numpy as np
>>> x = np.float64(-0.2)
>>> x ** 0.8
__main__:1: RuntimeWarning: invalid value encountered in double_scalars
nan
>>> x = -0.2 # note: `np.float` is the same as the built-in `float`
>>> x ** 0.8
(-0.2232449487530631+0.16219694943147778j)
This is especially confusing since, according to this, np.float64 and the built-in float are identical except for __repr__.
I can see how the warning from np may be useful in some cases (especially since it can be disabled or enabled in np.seterr); but the problem is that the return value is nan rather than the complex value provided by the built-in. Therefore, this breaks code when you start using numpy for some of the calculations and don't convert its return values to built-in float explicitly.
numpy.float may or may not be float, but complex numbers are not float at all:
In [1]: type((-0.2)**0.8)
Out[1]: builtins.complex
So there's no float result of the operation, hence nan.
If you don't want to do an explicit conversion to float (which is recommended), do the numpy calculation in complex numbers:
In [3]: np.complex(-0.2)**0.8
Out[3]: (-0.2232449487530631+0.16219694943147778j)
The behaviour of returning a complex number from a float operation is certainly not usual, and was only introduced with Python 3 (like the float division of integers with the / operator). In Python 2.7 you get the following:
In [1]: (-0.2)**0.8
ValueError: negative number cannot be raised to a fractional power
On a scalar, if instead of np.float64 you use np.float, you'll get the same float type as Python uses. (And you'll either get the above error in 2.7 or the complex number in 3.x.)
For arrays, all the numpy operators return the same type of array, and most ufuncs do not support casting from float to complex (e.g., check np.<ufunc>.types).
If what you want is a consistent operation on scalars, use np.float.
If you are interested in array operations, you'll have to cast the array to complex: x = x.astype('complex')
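For example:
import numpy as np

x = np.array([-0.2, 0.5])
y = x.astype(complex) ** 0.8  # complex power: no nan, no warning
print(y[0])  # (-0.2232449487530631+0.16219694943147778j)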
