I am using NumPy to store data in matrices. Coming from an R background, where there is an extremely simple way to apply a function over the rows, the columns, or both of a matrix, I am hoping for something similar in the Python/NumPy combination.
It's not a problem to write my own little implementation, but it seems to me that most of the versions I come up with will be significantly less efficient and more memory-intensive than the existing implementations.
I would like to avoid copying from the numpy matrix to a local variable etc. Is that possible?
The functions I am trying to implement are mainly simple comparisons (e.g. how many elements of a certain column are smaller than number x or how many of them have absolute value larger than y).
Almost all numpy functions operate on whole arrays, and/or can be told to operate on a particular axis (row or column).
As long as you can define your function in terms of numpy functions acting on numpy arrays or array slices, your function will automatically operate on whole arrays, rows or columns.
It may be more helpful to ask about how to implement a particular function to get more concrete advice.
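For instance, a count of the kind you describe can be written per column with the axis argument (a minimal sketch, with a made-up array A):
import numpy as np
A = np.arange(12).reshape(3, 4)
A.sum(axis=0)        # column sums -> array([12, 15, 18, 21])
A.sum(axis=1)        # row sums    -> array([ 6, 22, 38])
(A < 5).sum(axis=0)  # per-column count of elements smaller than 5 -> array([2, 1, 1, 1])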
Numpy provides np.vectorize and np.frompyfunc to turn Python functions which operate on numbers into functions that operate on numpy arrays.
For example,
def myfunc(a, b):
    if a > b:
        return a
    else:
        return b
vecfunc = np.vectorize(myfunc)
result=vecfunc([[1,2,3],[5,6,9]],[7,4,5])
print(result)
# [[7 4 5]
# [7 6 9]]
(The elements of the first array get replaced by the corresponding element of the second array when the second is bigger.)
But don't get too excited; np.vectorize and np.frompyfunc are just syntactic sugar. They don't actually make your code any faster. If your underlying Python function is operating on one value at a time, then np.vectorize will feed it one item at a time, and the whole
operation is going to be pretty slow (compared to using a numpy function which calls some underlying C or Fortran implementation).
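For the myfunc example above, the loop can be avoided entirely, since NumPy already ships an elementwise maximum that does the same thing in C:
result = np.maximum([[1, 2, 3], [5, 6, 9]], [7, 4, 5])
# [[7 4 5]
#  [7 6 9]]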
To count how many elements of column x are smaller than a number y, you could use an expression such as:
(array['x']<y).sum()
For example:
import numpy as np
array=np.arange(6).view([('x',np.int64),('y',np.int64)])
print(array)
# [(0, 1) (2, 3) (4, 5)]
print(array['x'])
# [0 2 4]
print(array['x']<3)
# [ True True False]
print((array['x']<3).sum())
# 2
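The absolute-value count from the question works the same way; with the same example array:
print((np.abs(array['y']) > 3).sum())
# 1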
Selecting elements from a NumPy array based on one or more conditions is straightforward using NumPy's beautifully dense syntax:
>>> import numpy as NP
>>> # generate a matrix to demo the code
>>> A = NP.random.randint(0, 10, 40).reshape(8, 5)
>>> A
array([[6, 7, 6, 4, 8],
[7, 3, 7, 9, 9],
[4, 2, 5, 9, 8],
[3, 8, 2, 6, 3],
[2, 1, 8, 0, 0],
[8, 3, 9, 4, 8],
[3, 3, 9, 8, 4],
[5, 4, 8, 3, 0]])
how many elements in column 2 are greater than 6?
>>> ndx = A[:,1] > 6
>>> ndx
array([ True, False, False,  True, False, False, False, False])
>>> NP.sum(ndx)
2
how many elements in last column of A have absolute value larger than 3?
>>> A = NP.random.randint(-4, 4, 40).reshape(8, 5)
>>> A
array([[-4, -1, 2, 0, 3],
[-4, -1, -1, -1, 1],
[-1, -2, 2, -2, 3],
[ 1, -4, -1, 0, 0],
[-4, 3, -3, 3, -1],
[ 3, 0, -4, -1, -3],
[ 3, -4, 0, -3, -2],
[ 3, -4, -4, -4, 1]])
>>> ndx = NP.abs(A[:,-1]) > 3
>>> NP.sum(ndx)
0
how many elements in the first two rows of A are greater than or equal to 2?
>>> ndx = A[:2,:] >= 2
>>> NP.sum(ndx.ravel()) # 'ravel' just flattens ndx, which is originally 2D (2x5)
2
NumPy's indexing syntax is pretty close to R's; given your fluency in R, here are the key differences between R and NumPy in this context:
NumPy indices are zero-based; in R, indexing begins with 1
NumPy (like Python) allows you to index from right to left using negative indices--e.g.,
# to get the last column in A
A[:, -1],
# to get the penultimate column in A
A[:, -2]
# this is a big deal, because in R there is no negative indexing for
# this purpose; the equivalent expression is:
A[, ncol(A) - 1]
NumPy uses colon ":" notation to denote "everything along this axis", e.g., in R, to
get the first three rows of A, you would use A[1:3, ]. In NumPy, you
would use A[0:3, :] (the "0" is not necessary; in fact it
is preferable to use A[:3, :])
I also come from a more R background, and bumped into the lack of a more versatile apply that could take short customized functions. I've seen forums suggesting the use of basic NumPy functions because many of them handle arrays. However, I kept getting confused over the way "native" NumPy functions handle axes (sometimes axis 0 is row-wise and axis 1 column-wise, sometimes the opposite).
My personal solution for more flexible functions with apply_along_axis was to combine them with the anonymous lambda functions available in Python. Lambda functions should be very easy to understand for the R-minded who use a more functional programming style, as in the R functions apply, sapply, lapply, etc.
So, for example, say I want to apply standardisation of the variables in a matrix. Typically in R there's a function for this (scale), but you can also build it easily with apply:
(R code)
apply(Mat,2,function(x) (x-mean(x))/sd(x) )
You see how the body of the function inside apply (x-mean(x))/sd(x) is the bit we can't type directly for the python apply_along_axis. With lambda this is easy to implement FOR ONE SET OF VALUES, so:
(Python)
import numpy as np
vec=np.random.randint(1,10,10) # some random data vector of integers
(lambda x: (x-np.mean(x))/np.std(x) )(vec)
Then, all we need is to plug this lambda into apply_along_axis and pass it the array of interest:
Mat=np.random.randint(1,10,3*4).reshape((3,4)) # some random data matrix
np.apply_along_axis(lambda x: (x-np.mean(x))/np.std(x),0,Mat )
Obviously, the lambda function could be implemented as a separate function, but I guess the whole point is to use rather small functions contained within the line where apply originated.
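One caveat (an aside, not part of the original recipe): apply_along_axis still loops in Python, so for this particular standardisation plain broadcasting is usually faster and gives the same result:
(Mat - Mat.mean(axis=0)) / Mat.std(axis=0)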
I hope you find it useful!
Pandas is very useful for this. For instance, DataFrame.apply() and groupby's apply() should help you.
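A minimal sketch of that idea (the DataFrame here is made up), mirroring R's apply(Mat, 2, f) with a column-wise apply:
import numpy as np
import pandas as pd
df = pd.DataFrame(np.arange(12).reshape(3, 4))
df.apply(lambda col: (col < 5).sum())  # per-column count of elements below 5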
What does the 'a' in numpy's numpy.arange function stand for, and how does it differ from a simple range produced by Python's builtin range function (definitionally, not in terms of performance and whatnot)?
I tried looking online for an answer to this, but all I find is tutorials for how to use numpy.arange by GeeksForGeeks and co.
You can inspect the return types and reason about what it could mean that way:
print(type(range(0,5)))
import numpy as np
print(type(np.arange(0,5)))
Which prints:
<class 'range'>
<class 'numpy.ndarray'>
Here's a related question: Why was the name "arange" chosen for the numpy function?
Naming the function range was not an option, because some people do from numpy import *, which would shadow the builtin range and cause problems.
Naming it arrayrange was rejected because it's too long to type.
From that linked question we learn that the 'a' stands, in some sense, for 'array'. arange is a function that returns a numpy array that is similar, at least in simple cases, to the list produced by list(range(...)). From the official arange docs:
For integer arguments the function is roughly equivalent to the Python built-in range, but returns an ndarray rather than a range instance.
In [104]: list(range(-3,10,2))
Out[104]: [-3, -1, 1, 3, 5, 7, 9]
In [105]: np.arange(-3,10,2)
Out[105]: array([-3, -1, 1, 3, 5, 7, 9])
In Python 3, range by itself is lazy: it produces values on demand rather than storing them, much like a generator. It is the equivalent of Python 2's xrange.
The best "definition" is the official documentation page:
https://numpy.org/doc/stable/reference/generated/numpy.arange.html
But maybe you are wondering when to use one or the other. The simple answer is: if you are doing Python-level iteration, range is usually better; if you need an array, use arange (or np.linspace, as suggested by the docs).
In [106]: [x**2 for x in range(5)]
Out[106]: [0, 1, 4, 9, 16]
In [107]: np.arange(5)**2
Out[107]: array([ 0, 1, 4, 9, 16])
I often use arange to create an example array, as in:
In [108]: np.arange(12).reshape(3,4)
Out[108]:
array([[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11]])
While it is possible to make an array from a range, e.g. np.array(range(5)), that is relatively slow. np.fromiter(range(5),int) is faster, but still not as good as the direct np.arange.
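A quick sketch of those three routes (relative speeds vary by machine, but the ordering generally holds):
import numpy as np
np.array(range(5))          # builds a Python range object first, then converts
np.fromiter(range(5), int)  # fills the array directly from the iterator
np.arange(5)                # generates the values in C; no Python iteration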
The 'a' stands for 'array' in numpy.arange. numpy.arange is a function that produces an array of sequential numbers within a given interval. It differs from Python's builtin range() function in that it accepts floating-point arguments and non-integer step sizes. Also, the output of numpy.arange is an ndarray instead of a range object.
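For instance, a fractional step that the builtin range rejects outright:
import numpy as np
np.arange(0.0, 1.0, 0.25)
# array([0.  , 0.25, 0.5 , 0.75])
# range(0, 1, 0.25) raises TypeError: 'float' object cannot be interpreted as an integer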
I was surprised that numpy.split yields a list and not an array. I would have thought it would be better to return an array, since numpy has put a lot of work into making arrays more useful than lists. Can anyone justify numpy returning a list instead of an array? Why would that be a better programming decision for the numpy developers to have made?
A comment pointed out that if the split is uneven, the result can't be an array, at least not one with a uniform shape and dtype; at best it would be an object-dtype array.
But let's consider the case of equal-length subarrays:
In [124]: x = np.arange(10)
In [125]: np.split(x,2)
Out[125]: [array([0, 1, 2, 3, 4]), array([5, 6, 7, 8, 9])]
In [126]: np.array(_) # make an array from that
Out[126]:
array([[0, 1, 2, 3, 4],
[5, 6, 7, 8, 9]])
But we can get the same array without split - just reshape:
In [127]: x.reshape(2,-1)
Out[127]:
array([[0, 1, 2, 3, 4],
[5, 6, 7, 8, 9]])
Now look at the code for split. It just passes the task to array_split. Ignoring the details about alternative axes, it essentially does
sub_arys = []
for i in range(Nsections):
    # st and end come from div_points
    st, end = div_points[i], div_points[i + 1]
    sub_arys.append(sary[st:end])
return sub_arys
In other words, it just steps through the array and returns successive slices. Those are (often) views of the original.
So split is not that sophisticated a function. You could generate such a list of subarrays yourself without a lot of numpy expertise.
Another point: the documentation notes that split can be reversed with an appropriate stack. concatenate (and family) takes a list of arrays. If given an array of arrays, or a higher-dimensional array, it effectively iterates on the first dimension, e.g. concatenate(arr) => concatenate(list(arr)).
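A sketch of that round trip, reusing x from the session above:
parts = np.split(x, 2)   # [array([0, 1, 2, 3, 4]), array([5, 6, 7, 8, 9])]
np.concatenate(parts)    # array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) -- back to the original
arr = np.array(parts)    # shape (2, 5)
np.concatenate(arr)      # same result: iterates over the first dimension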
Actually, you are right, it returns a list:
import numpy as np
a = np.random.randint(1, 30, (2, 2))
b = np.hsplit(a, 2)
type(b)        # list
So there is nothing wrong in the documentation. I also first thought the documentation was wrong because it doesn't return an array, but when I checked
type(b[0])     # numpy.ndarray
type(b[1])     # numpy.ndarray
each element came back as an ndarray.
So it returns a list of ndarrays.
Suppose I have
i = np.arange(1, 4)
a = np.arange(9).reshape(3, 3)
and
a
>>>array([[0, 1, 2],
[3, 4, 5],
[6, 7, 8]])
a[:,0:1]
>>>array([[0],
[3],
[6]])
a[:,0:2]
>>>array([[0, 1],
[3, 4],
[6, 7]])
a[:,0:3]
>>>array([[0, 1, 2],
[3, 4, 5],
[6, 7, 8]])
Now I want to vectorize the operation to produce all of these slices together. I try
a[:,0:i]
or
a[:,0:i[:,None]]
It gives TypeError: only integer scalar arrays can be converted to a scalar index
Short answer:
[a[:,:j] for j in i]
What you are trying to do is not a vectorizable operation. Wikipedia defines vectorization as a batch operation on a single array, instead of on individual scalars:
In computer science, array programming languages (also known as vector or multidimensional languages) generalize operations on scalars to apply transparently to vectors, matrices, and higher-dimensional arrays.
...
... an operation that operates on entire arrays can be called a vectorized operation...
In terms of CPU-level optimization, the definition of vectorization is:
"Vectorization" (simplified) is the process of rewriting a loop so that instead of processing a single element of an array N times, it processes (say) 4 elements of the array simultaneously N/4 times.
The problem with your case is that the result of each individual operation has a different shape: (3, 1), (3, 2) and (3, 3). They cannot form the output of a single vectorized operation, because the output has to be one contiguous array. Of course, it can contain (3, 1), (3, 2) and (3, 3) arrays inside of it (as views), but that's what your original array a already does.
What you're really looking for is just a single expression that computes all of them:
[a[:,:j] for j in i]
... but it's not vectorized in the sense of performance optimization. Under the hood it's a plain old for loop that computes each item one by one.
I ran into this problem when venturing to use numpy.concatenate to emulate a C++-style push_back for 2D vectors: if A and B are two 2D numpy arrays, then numpy.concatenate(A, B) yields this error.
The fix was simply to add the missing parentheses, numpy.concatenate((A, B)), which are required because the arrays to be concatenated must be passed as a single sequence argument.
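A minimal reproduction (array contents made up):
import numpy as np
A = np.zeros((2, 3))
B = np.ones((2, 3))
# np.concatenate(A, B)      # TypeError: only integer scalar arrays can be converted to a scalar index
C = np.concatenate((A, B))  # OK, shape (4, 3); B is no longer mistaken for the axis argument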
This could be unrelated to this specific problem, but I ran into a similar issue where I used NumPy indexing on a Python list and got the same exact error message:
# incorrect
weights = list(range(1, 129)) + list(range(128, 0, -1))
mapped_image = weights[image[:, :, band]] # image.shape = [800, 600, 3]
# TypeError: only integer scalar arrays can be converted to a scalar index
It turns out I needed to turn weights, a 1D Python list, into a NumPy array before I could apply multi-dimensional NumPy indexing. The code below works:
# correct
weights = np.array(list(range(1, 129)) + list(range(128, 0, -1)))
mapped_image = weights[image[:, :, band]] # image.shape = [800, 600, 3]
try the following to change your array to 1D
a.reshape(-1)
You can use numpy.ravel to return a flattened array from n-dimensional array:
>>> a
array([[0, 1, 2],
[3, 4, 5],
[6, 7, 8]])
>>> a.ravel()
array([0, 1, 2, 3, 4, 5, 6, 7, 8])
I had a similar problem and solved it by converting to a list; not sure if this will help or not:
classes = list(unique_labels(y_true, y_pred))
This problem arises when a vector (array) is used in a place where a scalar is expected.
For example, the argument to range in a for loop must be a scalar; if you pass a vector there, you get this error. To avoid the problem, use the length of the vector instead, as sketched below.
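A minimal illustration of that case:
import numpy as np
v = np.array([3, 1, 4])
# for i in range(v):       # TypeError: only integer scalar arrays can be converted to a scalar index
for i in range(len(v)):    # use the vector's length instead
    print(v[i])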
I ran across this error while trying to access elements of a list using a 1-D array. I was pointed to this page but didn't find the answer I was looking for.
Let l be the list and myarray be my 1-D array. The correct way to access list l using the elements of myarray is
np.take(l,myarray)
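For example (names mirror the answer; the values are made up):
import numpy as np
l = ['a', 'b', 'c', 'd']
myarray = np.array([0, 2])
# l[myarray]           # TypeError: only integer scalar arrays can be converted to a scalar index
np.take(l, myarray)    # array(['a', 'c'], dtype='<U1')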
The type of matrix I am dealing with was created from a vector as shown below:
Start with a 1-d vector V of length L.
To create a matrix A from V with N rows, make the i'th column of A the N consecutive entries of V starting at the i'th entry, so long as there are enough entries left in V to fill up the column. This means A has L - N + 1 columns.
Here is an example:
V = [0, 1, 2, 3, 4, 5]
N = 3
A =
[0 1 2 3
1 2 3 4
2 3 4 5]
Representing the matrix this way requires more memory than my machine has. Is there any reasonable way of storing this matrix sparsely? I am currently storing N * (L - N + 1) values, when I only need to store L values.
You can take a view of your original vector as follows:
>>> import numpy as np
>>> from numpy.lib.stride_tricks import as_strided
>>>
>>> v = np.array([0, 1, 2, 3, 4, 5])
>>> n = 3
>>>
>>> # v.strides * 2 repeats the one-element strides tuple, so both the row
>>> # and the column step advance one element through the original data
>>> a = as_strided(v, shape=(n, len(v)-n+1), strides=v.strides*2)
>>> a
array([[0, 1, 2, 3],
[1, 2, 3, 4],
[2, 3, 4, 5]])
This is a view, not a copy of your original data, e.g.
>>> v[3] = 0
>>> v
array([0, 1, 2, 0, 4, 5])
>>> a
array([[0, 1, 2, 0],
[1, 2, 0, 4],
[2, 0, 4, 5]])
But you have to be careful not to do any operation on a that triggers a copy, since that would send your memory use through the roof.
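An aside, not from the original answer: NumPy 1.20+ ships numpy.lib.stride_tricks.sliding_window_view, which builds the same kind of view with less room for error (it validates the shape and returns a read-only view by default):
from numpy.lib.stride_tricks import sliding_window_view
a = sliding_window_view(v, len(v) - n + 1)  # the same (3, 4) view of v as above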
If you're already using numpy, use its strided or sparse arrays, as Jaime explained.
If you're not already using numpy, you may want to strongly consider using it.
If you need to stick with pure Python, there are three obvious ways to do this, depending on your use case.
For strided or sparse-but-clustered arrays, you could do effectively the same thing as numpy.
Or you could use a simple run-length-encoding scheme, plus maybe a higher-level list of runs, or a list of pointers to every Nth element, or even a whole stack of such lists (one for every 100 elements, one for every 10000, etc.).
But for mostly-uniformly-dense arrays, the easiest thing is to simply store a dict or defaultdict mapping indices to values. Random-access lookups and updates are still O(1), albeit with a higher constant factor, and the storage you waste keeping (in effect) a hash, key, and value instead of just a value for each non-default element is more than made up for by not storing values for the default elements, as long as the density stays below roughly one third.
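A minimal sketch of that last approach (pure Python, names made up); a plain dict with .get avoids defaultdict's habit of inserting a key on every missed lookup:
# sparse 1-D storage: only non-default entries are stored
sparse = {}
sparse[2] = 7
sparse[100000] = 3
sparse.get(50, 0)   # 0 -- the default value, without storing anything
sparse.get(2, 0)    # 7
len(sparse)         # 2 entries, regardless of the nominal length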
I'm really confused by the index logic of numpy arrays with several dimensions. Here is an example:
import numpy as np
A = np.arange(18).reshape(3,2,3)
array([[[ 0,  1,  2],
[ 3, 4, 5]],
[[ 6, 7, 8],
[ 9, 10, 11]],
[[12, 13, 14],
[15, 16, 17]]])
This gives me an array of shape (3,2,3); call the axes (x, y, z) for the sake of argument. Now I want an array B with the elements from A corresponding to x = 0,2; y = 0,1; and z = 1,2, like
array([[[ 1, 2],
[4, 5]],
[[13, 14],
[16, 17]]])
Naively I thought that
B=A[[0,2],[0,1],[1,2]]
would do the job. But it gives
array([ 1, 17])
which is not what I want.
A[[0,2],:,:][:,:,[1,2]]
does the job. But I still wonder what's wrong with my first try. And what is the best way to do what I want to do?
There are two types of indexing in NumPy: basic and advanced. Basic indexing uses tuples of slices and does not copy the array; it creates a view with adjusted strides. Advanced indexing, in contrast, also accepts lists or arrays of indices, and copies the array.
Your first attempt
B = A[[0, 2], [0, 1], [1, 2]]
uses advanced indexing. In advanced indexing, all index lists are first broadcasted to the same shape, and this shape is used for the output array. In this case, they already have the same shape, so the broadcasting does not do anything. The output array will also have this shape of two entries. The first entry of the output array is obtained by using all first indices of the three lists, and the second by using all second indices:
B = np.array([A[0, 0, 1], A[2, 1, 2]])
Your second approach
B = A[[0,2],:,:][:,:,[1,2]]
does work, but it is inefficient. It uses advanced indexing twice, so your data will be copied twice.
To get what you actually want with advanced indexing, you can use
A[np.ix_([0,2],[0,1],[1,2])]
as pointed out by nikow. This will copy the data only once.
In your example, you can get away without copying the data at all, using only basic indexing:
B = A[::2, :, 1:]
I recommend the following advanced tutorial, which explains the various indexing methods: NumPy MedKit
Once you understand the powerful ways to index arrays (and how they can be combined) it will make sense. If your first try were valid, it would collide with some of the other indexing techniques (reducing your options in other use cases).
In your example you can exploit that the third index covers a contiguous range:
A[[0,2],:,1:]
You could also use
A[np.ix_([0,2],[0,1],[1,2])]
which is handy in more general cases, when the latter indices are not contiguous. np.ix_ simply constructs three index arrays.
As Sven pointed out in his answer, there is a more efficient way in this specific case (using a view instead of a copied version).
Edit: As pointed out by Sven, my answer contained some errors, which I have removed. I still think that his answer is better, but unfortunately I can't delete mine now.
If you wanted
array([[[ 1,  2],
        [ 4,  5]],
       [[13, 14],
        [16, 17]]])
you can use
A[(0, 2), :, 1:]
The pattern is A[indices you want, rows you want, columns you want].