Immutable numpy array?

Is there a simple way to create an immutable NumPy array?
If one has to derive a class from ndarray to do this, what's the minimum set of methods that one has to override to achieve immutability?

You can make a numpy array unwriteable:
import numpy as np

a = np.arange(10)
a.flags.writeable = False
a[0] = 1
# Gives: ValueError: assignment destination is read-only
Also see the discussion in this thread:
http://mail.scipy.org/pipermail/numpy-discussion/2008-December/039274.html
and the documentation:
http://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.flags.html
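One caveat (my own note, not from the linked thread): the flag makes the array read-only rather than truly immutable, since any code holding a reference can flip it back as long as the array owns its data:
import numpy as np

a = np.arange(10)
a.flags.writeable = False
# a[0] = 1 would fail here, but the flag can simply be flipped back:
a.flags.writeable = True
a[0] = 1  # works again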

I have a subclass of ndarray at this gist: https://gist.github.com/sfaleron/9791418d7023a9985bb803170c5d93d8
It makes a copy of its argument and marks the copy as read-only, so you can only shoot yourself in the foot if you are very deliberate about it. My immediate need was for it to be hashable, so that instances could be used in sets, and that works too. It isn't a lot of code, but about 70% of the lines are for testing, so I won't post it directly.
Note that it's not a drop-in replacement; it won't accept keyword args like the normal ndarray constructor does. Instances will behave like ndarrays, though.
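For the curious, a minimal sketch of the same idea (my own illustration, not the gist's actual code): copy the input, lock it, and hash on the raw bytes:
import numpy as np

class ImmutableArray(np.ndarray):
    """Sketch: a read-only, hashable ndarray subclass."""
    def __new__(cls, data):
        obj = np.array(data).view(cls)  # copy first, then view as subclass
        obj.flags.writeable = False
        return obj

    def __hash__(self):
        # hash dtype, shape and raw bytes so equal arrays hash alike
        return hash((self.dtype.str, self.shape, self.tobytes()))

    def __eq__(self, other):
        # equal only if dtype, shape and contents all match,
        # keeping __eq__ consistent with __hash__
        other = np.asarray(other)
        return (self.dtype == other.dtype and self.shape == other.shape
                and bool(np.array_equal(self, other)))
Returning a single bool from __eq__ (instead of ndarray's element-wise comparison) is what makes set membership work, and it is also one reason such a class can't be a drop-in replacement.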

Setting the flag directly didn't work for me, but using ndarray.setflags did work:
import numpy as np

a = np.arange(10)
a.setflags(write=False)
a[0] = 1 # ValueError

Related

Is there any difference between fromlist() method and extend() method of Python array module?

The fromlist() method and the extend() method are both used to extend an array from a list in the array module.
As far as I can tell, there is no difference in what they return or in how they handle runtime errors.
Both appear to behave the same, so why are there two different methods with the same functionality?
I checked the Python docs and the source code but didn't find anything unique.
Let me know if I'm missing anything here.
Thanks in advance!
# Sample code
from array import array

arr = array('i', [1, 2, 3, 4, 4])
templist = [5, 6]
arr.extend(templist)
# arr.fromlist(templist)  # appears equivalent here
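For what it's worth, one difference is easy to demonstrate (per the CPython array docs): extend() accepts any iterable, while fromlist() insists on an actual list:
from array import array

arr = array('i', [1, 2, 3])
arr.extend((4, 5))     # tuples (or any iterable) are fine
arr.fromlist([6, 7])   # lists work
arr.fromlist((8, 9))   # TypeError: arg must be list
The docs also note that fromlist() is atomic: if a type error occurs part-way through, the array is left unchanged.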

How to check if an object is an np.array()?

I'm trying to write code that checks whether a given object is a NumPy array in Python.
if isinstance(obj,np.array()) doesn't seem to work.
I would truly appreciate any help.
isinstance(obj, numpy.ndarray) may work
The code below seems to work; use numpy.ndarray.
import numpy as np

l = [1, 2, 3, 4]
l_arr = np.array(l)
if isinstance(l_arr, np.ndarray):
    print("Type is np.array")
else:
    print("Type is not np.array")
Output:
Type is np.array
You could compare the type of the object passed to your checking function with np.ndarray to check whether the given object is indeed an np.ndarray.
A sample snippet looks something like this:
if isinstance(obj, np.ndarray):
    pass  # proceed: obj is an np.ndarray
else:
    pass  # not an np.ndarray
The type of what numpy.array returns is numpy.ndarray. You can determine that in the repl by calling type(numpy.array([])). Note that this trick works even for things where the raw class is not publicly accessible. It's generally better to use the direct reference, but storing the return from type(someobj) for later comparison does have its place.
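As a short illustration of that trick (np.ndarray is of course public, so this only shows the pattern):
import numpy as np

array_type = type(np.array([]))  # capture the class once
assert array_type is np.ndarray

def is_numpy_array(obj):
    return isinstance(obj, array_type)

print(is_numpy_array(np.zeros(3)))  # True
print(is_numpy_array([1, 2, 3]))    # False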

Substitute numpy functions with Python only

I have a python function that employs the numpy package. It uses numpy.sort and numpy.array functions as shown below:
import numpy as np

def function(group):
    pre_data = np.sort(np.array(
        [c["data"] for c in group[1]],
        dtype=np.float64,
    ))
How can I re-write the sort and array functions using only Python in such a way that I no longer need the numpy package?
It really depends on the code after this. pre_data will be a numpy.ndarray, which means it has array methods that will be really hard to replicate without numpy. If those methods are being called later in the code, you're going to have a hard time, and I'd advise you to just bite the bullet and install numpy. Its popularity is a testament to its usefulness...
However, if you really just want to sort a list of floats and put it into a sequence-like container:
def function(group):
    pre_data = sorted(float(c['data']) for c in group[1])
should do the trick.
Well, it's not strictly possible because the return type is an ndarray. If you don't mind using a list instead, try this:
pre_data = sorted(float(c["data"]) for c in group[1])
That's not actually using any useful numpy functions anyway:
def function(group):
    pre_data = sorted(float(c["data"]) for c in group[1])
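To see the difference downstream, a quick sketch (group here is a hypothetical (key, records) pair, matching the original function's use of group[1]):
group = ("key", [{"data": "3.5"}, {"data": "1.0"}, {"data": "2.25"}])

def function(group):
    return sorted(float(c["data"]) for c in group[1])

pre_data = function(group)
print(pre_data)    # [1.0, 2.25, 3.5] -- a plain list, not an ndarray
# pre_data.mean()  # AttributeError: lists have no ndarray methods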

Subclassing numpy scalar types

I'm trying to subclass numpy.complex64 in order to make use of the way numpy stores the data, (contiguous, alternating real and imaginary part) but use my own __add__, __sub__, ... routines.
My problem is that when I make a numpy.ndarray with dtype=mysubclass, I get a numpy.ndarray with dtype=numpy.complex64 instead, which results in numpy not using my own functions for additions, subtractions and so on.
Example:
import numpy as np

class mysubclass(np.complex64):
    pass

a = mysubclass(1 + 1j)
A = np.empty(2, dtype=mysubclass)
print(type(a))
print(repr(A))
Output:
<class '__main__.mysubclass'>
array([ -2.07782988e-20 +4.58546896e-41j, -2.07782988e-20 +4.58546896e-41j], dtype=complex64)
Does anyone know how to do this?
Thanks in advance - Soren
The NumPy type system is only designed to be extended from C, via the PyArray_RegisterDataType function. It may be possible to access this functionality from Python using ctypes but I wouldn't recommend it; better to write an extension in C or Cython, or subclass ndarray as @seberg describes.
There's a simple example dtype in the NumPy source tree: newdtype_example/floatint.c. If you're into Pyrex, reference.pyx in the pytables source may be worth a look.
Note that scalars and arrays are quite different in numpy. np.complex64 is a scalar type whose real and imaginary parts are each stored as 32-bit floats (single precision, just to note, not double). You will not be able to change the array like that; you will need to subclass ndarray instead and then override its __add__ and __sub__.
If that is all you want to do, it might just work; otherwise look at http://docs.scipy.org/doc/numpy/user/basics.subclassing.html, since subclassing an array is not that simple.
However, if you also want to use this type as a scalar, for example when you index single elements out, it gets more difficult, at least currently. You can get a little further by defining __array_wrap__ to convert scalars to your own scalar type for some reduce functions; for indexing to work in all cases, it appears to me that you may have to define your own __getitem__ currently.
In all cases with this approach, you still use the complex datatype, and all functions that are not explicitly overridden will still behave the same. @ecatmur mentioned that you can create new datatypes from the C side, if that is really what you want.
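A minimal sketch of the ndarray-subclass route that @seberg describes (the class name is hypothetical, and a real implementation would also need the __array_finalize__ machinery from the subclassing guide to handle views and templates):
import numpy as np

class MyComplexArray(np.ndarray):
    def __new__(cls, data):
        # store as plain complex64 (contiguous real/imag pairs), view as subclass
        return np.asarray(data, dtype=np.complex64).view(cls)

    def __add__(self, other):
        print("custom __add__ called")          # your own routine goes here
        return np.ndarray.__add__(self, other)  # fall back to numpy's addition

    def __sub__(self, other):
        print("custom __sub__ called")
        return np.ndarray.__sub__(self, other)

a = MyComplexArray([1 + 1j, 2 + 2j])
b = MyComplexArray([0 + 1j, 1 + 0j])
print(repr(a + b))  # routes through the custom __add__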

Python-numpy test for ndarray using ndim

I'm working on a project in Python requiring a lot of numerical array calculations. Unfortunately (or fortunately, depending on your POV), I'm very new to Python, but have been doing MATLAB and Octave programming (APL before that) for years. I'm very used to having every variable automatically typed to a matrix float, and still getting used to checking input types.
In many of my functions, I require the input S to be a numpy.ndarray of size (n,p), so I have to both test that type(S) is numpy.ndarray and get the values (n,p) = numpy.shape(S). One potential problem is that the input could be a list/tuple/int/etc.; another problem is that the input could be an array of shape (), i.e. S.ndim = 0. It occurred to me that I could simultaneously test the variable type, fix the S.ndim = 0 problem, and then get my dimensions like this:
# first simultaneously test for ndarray and get proper dimensions
try:
    if (S.ndim == 0):
        S = S.copy(); S.shape = (1,1);
    # define dimensions p and p2
    (p,p2) = numpy.shape(S);
except AttributeError:  # got here because input is not something array-like
    raise AttributeError("blah blah blah");
Though it works, I'm wondering if this is a valid thing to do? The docstring for ndim says
If it is not already an ndarray, a conversion is
attempted.
and we surely know that numpy can easily convert an int/tuple/list to an array, so I'm confused why an AttributeError is being raised for these types of inputs, when numpy should be doing this:
numpy.array(S).ndim;
which should work.
When doing input validation for NumPy code, I always use np.asarray:
>>> np.asarray(np.array([1,2,3]))
array([1, 2, 3])
>>> np.asarray([1,2,3])
array([1, 2, 3])
>>> np.asarray((1,2,3))
array([1, 2, 3])
>>> np.asarray(1)
array(1)
>>> np.asarray(1).shape
()
This function has the nice feature that it only copies data when necessary; if the input is already an ndarray, the data is left in-place (only the type may be changed, because it also gets rid of that pesky np.matrix).
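You can check the no-copy behaviour directly; asarray returns the very same object when the input is already a plain ndarray:
import numpy as np

a = np.array([1, 2, 3])
print(np.asarray(a) is a)          # True: already an ndarray, no copy made
print(np.asarray([1, 2, 3]) is a)  # False: a fresh array built from the list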
The docstring for ndim says
That's the docstring for the function np.ndim, not the ndim attribute, which non-NumPy objects don't have. You could use that function, but the effect would be that the data might be copied twice, so instead do:
S = np.asarray(S)
(p, p2) = S.shape
This will raise a ValueError if S.ndim != 2.
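Put together, a sketch of that validation (validate_2d is a hypothetical name):
import numpy as np

def validate_2d(S):
    S = np.asarray(S)      # converts lists/tuples/scalars; no copy for ndarrays
    try:
        (p, p2) = S.shape  # unpacking fails unless S is exactly 2-D
    except ValueError:
        raise ValueError("expected a 2-D array-like, got ndim=%d" % S.ndim)
    return S, p, p2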
[Final note: you don't need ; in Python if you just follow the indentation rules. In fact, Python programmers eschew the semicolon.]
Given the comments to @larsmans' answer, you could try:
if not isinstance(S, np.ndarray):
    raise TypeError("Input not a ndarray")
if S.ndim == 0:
    S = np.reshape(S, (1,1))
(p, p2) = S.shape
First, you check explicitly whether S is a (subclass of) ndarray. Then, you use np.reshape to copy (and reshape) your data if needed. Finally, you get the dimensions.
Note that in most cases, the np functions will first try to access the corresponding method of the ndarray, then attempt to convert the input to an ndarray (sometimes keeping it a subclass, as in np.asanyarray, sometimes not, as in np.asarray). In other terms, it's usually more efficient to use the method rather than the function: that's why we're using S.shape and not np.shape(S).
Another point: np.asarray, np.asanyarray, np.atleast_1d... are all particular cases of the more generic function np.array. For example, asarray sets the optional copy argument of array to False, asanyarray does the same and sets subok=True, atleast_1d sets ndmin=1, atleast_2d sets ndmin=2... In other terms, it's always possible to use np.array with the appropriate arguments instead. But as mentioned in some comments, it's a matter of style. Shortcuts can often improve readability, which is always an objective to keep in mind.
In any case, when you use np.array(..., copy=True), you're explicitly asking for a copy of your initial data, a bit like doing a list([....]). Even if nothing else changed, your data will be copied. That has the advantages of its drawbacks (as we say in French), you could for example change the order from row-first C to column-first F. But anyway, you get the copy you wanted.
With np.array(input, copy=False), a copy is avoided whenever possible. The result will either share its block of memory with input if this latter was already an ndarray (that is, no waste of memory), or be a new array created "from scratch" if input wasn't. The interesting case is of course when input was an ndarray.
Using this new array in a function may or may not change the original input, depending on the function. You have to check the documentation of the function you want to use to see whether it returns a copy or not. The NumPy developers try hard to limit unnecessary copies (following the Python example), but sometimes it can't be avoided. The documentation should tell explicitly what happens, if it doesn't or it's unclear, please mention it.
np.array(...) may raise some exceptions if something goes awry. For example, trying to use a dtype=float with an input like ["STRING", 1] will raise a ValueError. However, I must admit I can't remember which exceptions in all the cases, please edit this post accordingly.
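A quick sketch of those behaviours, including the ValueError case mentioned above (np.shares_memory is used here to check whether the data block is shared):
import numpy as np

a = np.arange(5)
b = np.array(a, copy=True)   # always a fresh copy
c = np.array(a, copy=False)  # no copy needed: same data as a
print(np.shares_memory(a, b))  # False
print(np.shares_memory(a, c))  # True

np.array(["STRING", 1], dtype=float)  # ValueError: could not convert string to float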
Welcome to Stack Overflow. This comes down to almost a style choice, but the most common way I've seen to deal with this kind of situation is to convert the input to an array. NumPy provides some useful tools for this. numpy.asarray has already been mentioned, but here are a few more: numpy.atleast_1d is similar to asarray, but reshapes () arrays to be (1,); numpy.atleast_2d is the same as above but reshapes 0d and 1d arrays to be 2d, i.e. (3,) to (1, 3). The reason we convert "array_like" inputs to arrays is partly just because we're lazy (for example, sometimes it can be easier to write foo([1, 2, 3]) than foo(numpy.array([1, 2, 3]))), but this is also the design choice made within numpy itself. Notice that the following works:
>>> numpy.mean([1., 2., 3.])
2.0
In the docs for numpy.mean we can see that x should be "array_like".
Parameters
----------
a : array_like
Array containing numbers whose mean is desired. If `a` is not an
array, a conversion is attempted.
That being said, there are situations when you want to only accept arrays as arguments and not all "array_like" types.
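As an illustration, a sketch of input normalisation with these helpers (foo is a hypothetical function):
import numpy as np

def foo(x):
    x = np.atleast_2d(np.asarray(x, dtype=float))  # scalars/1-d become 2-d
    return x.shape

print(foo(5.0))             # (1, 1)
print(foo([1.0, 2.0]))      # (1, 2) -- a (3,)-style input becomes (1, n)
print(foo([[1.0], [2.0]]))  # (2, 1)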
