Conjugate transpose operator ".H" in numpy - python

It is very convenient in numpy to use the .T attribute to get a transposed version of an ndarray. However, there is no similar way to get the conjugate transpose. Numpy's matrix class has the .H operator, but not ndarray. Because I like readable code, and because I'm too lazy to always write .conj().T, I would like the .H property to always be available to me. How can I add this feature? Is it possible to add it so that it is brainlessly available every time numpy is imported?
(A similar question could be asked about the .I inverse operator.)

You can subclass the ndarray object like so:
import numpy as np

class myarray(np.ndarray):
    @property
    def H(self):
        return self.conj().T
such that:
a = np.random.rand(3, 3).view(myarray)
a.H
will give you the desired behavior.
Edit:
As suggested by @slek120, you can transpose only the last two axes with:
self.swapaxes(-2, -1).conj()
instead of self.conj().T.
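For a quick illustration (the example array is mine, not from the answer), on a stack of matrices .conj().T reverses every axis, while swapaxes(-2, -1) conjugate-transposes each matrix in the stack:
import numpy as np

# A stack of two 2x3 complex matrices: shape (2, 2, 3).
a = np.arange(12).reshape(2, 2, 3) * 1j

print(a.conj().T.shape)                 # (3, 2, 2) -- all axes reversed
print(a.swapaxes(-2, -1).conj().shape)  # (2, 3, 2) -- each matrix transposed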

In general, the difficulty in this problem is that Numpy is a C extension, which cannot be monkey-patched... or can it? The forbiddenfruit module allows one to do this, although it feels a little like playing with knives.
So here is what I've done:
Install the very simple forbiddenfruit package
Determine the user customization directory:
import site
print(site.getusersitepackages())
In that directory, edit usercustomize.py to include the following:
from forbiddenfruit import curse
from numpy import ndarray
from numpy.linalg import inv
curse(ndarray, 'H', property(fget=lambda A: A.conj().T))
curse(ndarray, 'I', property(fget=lambda A: inv(A)))
Test it:
python -c "import numpy as np; A = np.array([[1,1j]]); print(A); print(A.H)"
Results in:
[[ 1.+0.j 0.+1.j]]
[[ 1.-0.j]
 [ 0.-1.j]]
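Assuming the same usercustomize.py is in place, the .I property can be sanity-checked the same way (this check is mine, not from the original answer):
python -c "import numpy as np; A = np.array([[1.0, 2.0], [3.0, 4.0]]); print(A.I @ A)"
which should print a matrix numerically equal to the 2x2 identity.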


Calling numpy functions

Is it okay to call a numpy function without prefixing it with the library name (for example, numpy.linspace())? Can we simply call
linspace()
instead of
numpy.linspace()?
You can import it like this:
from numpy import linspace
and then use it like this:
a = linspace(1, 10)
Yes, it's completely fine when you import the function directly from numpy, such as:
from numpy import linspace
# you can call the function by just writing its name
result = linspace(3, 50)
But the convention is to import the package under the alias np:
import numpy as np
# then call the function with the short prefix
result = np.linspace(3, 50)
An alias is helpful when working with a large number of libraries, and it also improves code readability.
If you import the function from the library directly, there is nothing wrong with calling it by its bare name.
i.e.
from numpy import linspace
# Then call linspace by itself
a = linspace(1, 10)
That being said, many find that having numpy (often shortened to np) in front of function names helps improve code readability. As almost everyone does this with certain libraries (TensorFlow as tf, NumPy as np, Pandas as pd), some may view it in a poor light if you simply import and use the function directly.
I would recommend importing the library as the shortened name and then using it appropriately.
i.e.
import numpy as np
# Then call np.linspace
a = np.linspace(1, 10)

I am currently solving a problem for my class on Planck's blackbody function.

import math
import matplotlib.pyplot as plt
import numpy as np

hc = 1.23984186E3
k = 1.3806503E-23
T = np.linspace(5000, 10000, 50)
lamb = np.linspace(0.00001, .0000001, 50)
print(len(lamb))
print(len(T))
planck_top = 8 * math.pi * hc
planck_bottom1 = lamb * 1000000
planck_bottom2 = math.exp(hc // lamb * k * T)
planck = planck_top // planck_bottom
I keep getting this error:
planck_bottom2 = math.exp(hc // lamb * k * T)
TypeError: only size-1 arrays can be converted to Python scalars
I am not sure how to correct this, as we are dealing with a large array here.
hc//lamb*k*T returns an array, and math.exp() works on a single number only. So the following would work:
planck_bottom2 = [math.exp(i) for i in hc // lamb * k * T]
It returns a list containing each number in the array hc//lamb*k*T exponentiated.
Another option is to use numpy instead of math.
Using Numpy for Array Calculations
Just replace math with np, since you already import numpy as np:
planck_top = 8 * np.pi * hc
planck_bottom2 = np.exp(hc // lamb * k * T)
About using math and/or numpy:
As additional information, I encourage you to look at the following references when deciding between math and numpy:
What is the difference between import numpy and import math [duplicate]
Are NumPy's math functions faster than Python's?
"Math module functions cannot be used directly on ndarrays because they only accept scalar, not array arguments."

import numpy as np versus from numpy import

I have a module that heavily makes use of numpy:
from numpy import array, median, nan, percentile, roll, sqrt, sum, transpose, unique, where
Is it better practice to keep the namespace clean by using
import numpy as np
and then, when I need array, just use np.array, for example?
This module also gets called repeatedly, say a few million times, and keeping the namespace clean appears to add a bit of overhead:
setup = '''import numpy as np'''
function = 'x = np.sum(np.array([1,2,3,4,5,6,7,8,9,10]))'
print(min(timeit.Timer(function, setup=setup).repeat(10, 300000)))
1.66832
setup = '''from numpy import arange, array, sum'''
function = 'x = sum(array([1,2,3,4,5,6,7,8,9,10]))'
print(min(timeit.Timer(function, setup=setup).repeat(10, 300000)))
1.65137
Why does this add more time when using np.sum vs sum?
You are right, it is better to keep the namespace clean. So I would use
import numpy as np
It keeps your code more readable: when you see a call like np.sum(array) you are reminded that you are working with a numpy array. The second reason is that many numpy functions have identical names to functions in other modules like scipy. If you use both, it's always clear which one you are using.
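For instance (a quick illustration, not from the original answer), importing numpy's sum directly shadows the built-in sum, and the two interpret their second positional argument differently:
import numpy as np
from numpy import sum  # shadows the built-in sum

values = [0.5, 1.5, 2.5]

# With the built-in, sum(values, 1) returns 5.5 (1 is the start value).
# numpy's sum treats the second positional argument as `axis`, so the
# same call raises an error on this 1-D input.
try:
    print(sum(values, 1))
except Exception as err:  # numpy raises an AxisError here
    print("shadowed sum:", err)

print(np.sum(values))  # 4.5 -- the qualified call is unambiguous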
As you can see in the test you made, the performance difference is there, and if you really need the performance you can do it the other way.
The difference in performance is that with a specific function import, the name is bound to the function object once, at import time.
With the general module import you import only a reference to the module, and Python has to resolve the function attribute on that module at every call.
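A common micro-optimization (a sketch with a hypothetical function, not from the answers here) is to bind the looked-up function to a local name once, so the per-call lookup becomes a fast local access:
import numpy as np

def total_of_rows(rows, _sum=np.sum):
    # np.sum is resolved once, when the default argument is evaluated;
    # inside the loop, `_sum` is a plain local-variable lookup.
    total = 0.0
    for row in rows:
        total += _sum(row)
    return total

print(total_of_rows([np.arange(5), np.arange(3)]))  # 10.0 + 3.0 -> 13.0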
You could have the best of both worlds (faster name resolution, and non-shadowing), if you're ok with defining your own aliasing (subject to your team conventions, of course):
import numpy as np
(np_sum, np_min, np_arange) = (np.sum, np.min, np.arange)
x = np_arange(24)
print(np_sum(x))
Alternative syntax to define your aliases:
from numpy import \
    arange as np_arange, \
    sum as np_sum, \
    min as np_min

Datatype of python universal function created with frompyfunc

I have the following question concerning universal functions in numpy: how can I define a universal function that returns the same type of numpy array as numpy's built-in functions?
The following sample code:
import numpy as np

def mysimplefunc(a):
    return np.sin(a)

mysimpleufunc = np.frompyfunc(mysimplefunc, 1, 1)
a = np.linspace(0.0, 1.0, 3)
print(np.sin(a).dtype)
print(mysimpleufunc(a).dtype)
results in the output:
float64
object
Any help is very much appreciated :)
PS.: I am using python 3.4
I found the solution (see also the discussion on Stack Overflow): use vectorize instead of frompyfunc.
import numpy as np

def mysimplefunc(a):
    return np.sin(a)

mysimpleufunc = np.vectorize(mysimplefunc)
a = np.linspace(0.0, 1.0, 3)
print(np.sin(a).dtype)
print(mysimpleufunc(a).dtype)
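If you want the output dtype pinned explicitly rather than inferred from the first call, vectorize also accepts an otypes argument (a small variation on the code above):
import numpy as np

def mysimplefunc(a):
    return np.sin(a)

# otypes fixes the output dtype up front instead of letting vectorize
# infer it by calling the function on the first element.
mysimpleufunc = np.vectorize(mysimplefunc, otypes=[np.float64])

a = np.linspace(0.0, 1.0, 3)
print(mysimpleufunc(a).dtype)  # float64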

Type hinting / annotation (PEP 484) for numpy.ndarray

Has anyone implemented type hinting for the specific numpy.ndarray class?
Right now, I'm using typing.Any, but it would be nice to have something more specific.
It would help, for instance, if the NumPy people added a type alias for their array_like object class. Better yet, they could implement support at the dtype level, so that other objects, as well as ufunc, would be supported.
Update
Check recent numpy versions for a new typing module
https://numpy.org/doc/stable/reference/typing.html#module-numpy.typing
Dated answer
It looks like typing module was developed at:
https://github.com/python/typing
The main numpy repository is at
https://github.com/numpy/numpy
Python bugs and commits can be tracked at
http://bugs.python.org/
The usual way of adding a feature is to fork the main repository, develop the feature till it is bomb proof, and then submit a pull request. Obviously at various points in the process you want feedback from other developers. If you can't do the development yourself, then you have to convince someone else that it is a worthwhile project.
Cython has a form of annotations, which it uses to generate efficient C code.
You referenced the array-like paragraph in numpy documentation. Note its typing information:
A simple way to find out if the object can be converted to a numpy array using array() is simply to try it interactively and see if it works! (The Python Way).
In other words the numpy developers refuse to be pinned down. They don't, or can't, describe in words what kinds of objects can or cannot be converted to np.ndarray.
In [586]: np.array({'test':1})  # a dictionary
Out[586]: array({'test': 1}, dtype=object)

In [587]: np.array(['one','two'])  # a list
Out[587]:
array(['one', 'two'],
      dtype='<U3')

In [589]: np.array({'one','two'})  # a set
Out[589]: array({'one', 'two'}, dtype=object)
For your own functions, an annotation like
def foo(x: np.ndarray) -> np.ndarray:
works. Of course if your function ends up calling some numpy function that passes its argument through asanyarray (as many do), such an annotation would be incomplete, since your input could be a list, or np.matrix, etc.
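One way to widen the annotation for such functions (a sketch using plain typing constructs; the function name is made up) is:
from typing import Sequence, Union
import numpy as np

def scale(x: Union[np.ndarray, Sequence[float]]) -> np.ndarray:
    # asanyarray accepts lists, tuples, np.matrix, etc.,
    # so the annotation should admit them too.
    return np.asanyarray(x) * 2.0

print(scale([1.0, 2.0]))    # [2. 4.]
print(scale(np.arange(3)))  # [0. 2. 4.]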
When evaluating this question and answer, pay attention to the date. PEP 484 was relatively new back then, and code to make use of it for standard Python was still in development. But it looks like the links provided are still valid.
Numpy 1.21 includes a numpy.typing module with an NDArray generic type.
From the Numpy 1.21 docs:
numpy.typing.NDArray = numpy.ndarray[typing.Any, numpy.dtype[+ScalarType]]
A generic version of np.ndarray[Any, np.dtype[+ScalarType]].
Can be used during runtime for typing arrays with a given dtype and unspecified shape.
Examples:
>>> import numpy as np
>>> import numpy.typing as npt
>>> print(npt.NDArray)
numpy.ndarray[typing.Any, numpy.dtype[+ScalarType]]
>>> print(npt.NDArray[np.float64])
numpy.ndarray[typing.Any, numpy.dtype[numpy.float64]]
>>> NDArrayInt = npt.NDArray[np.int_]
>>> a: NDArrayInt = np.arange(10)
>>> def func(a: npt.ArrayLike) -> npt.NDArray[Any]:
...     return np.array(a)
As of 2022-09-05, support for shapes is still a work in progress per numpy/numpy#16544.
At my company we've been using:
from typing import TypeVar, Generic
import numpy as np

Shape = TypeVar("Shape")
DType = TypeVar("DType")

class Array(np.ndarray, Generic[Shape, DType]):
    """
    Use this to type-annotate numpy arrays, e.g.
        image: Array['H,W,3', np.uint8]
        xy_points: Array['N,2', float]
        nd_mask: Array['...', bool]
    """
    pass

def compute_l2_norm(arr: Array['N,2', float]) -> Array['N', float]:
    return (arr**2).sum(axis=1)**.5

print(compute_l2_norm(arr=np.array([(1, 2), (3, 1.5), (0, 5.5)])))
We actually have a MyPy checker around this that checks that the shapes work out (which we should release at some point). The only catch is that it doesn't make PyCharm happy (i.e. you still get the nasty warning lines).
nptyping adds lots of flexibility for specifying numpy type hints.
What I did was to just define it as
Dict[Tuple[int, int], TYPE]
So for example if you want an array of floats you can do:
a = numpy.empty(shape=[2, 2], dtype=float) # type: Dict[Tuple[int, int], float]
This is of course not exact from a documentation perspective, but for analyzing correct usage and getting proper completion with PyCharm it works great!
