I have a generator that returns numpy arrays.
For example's sake, let it be:
import numpy as np
a = np.arange(9).reshape(3,3)
gen = (x for x in a)
Calling:
np.sum(gen)
On numpy 1.17.4:
DeprecationWarning: Calling np.sum(generator) is deprecated, and in
the future will give a different result. Use
np.sum(np.fromiter(generator)) or the python sum builtin instead.
Trying to refactor the above:
np.sum(np.fromiter(gen, dtype=np.ndarray))
I get:
ValueError: cannot create object arrays from iterator
What is wrong in the above statement?
The problem is the second argument, np.ndarray, in the fromiter() call. np.fromiter expects an iterable of scalars and returns a 1-D array:
Create a new 1-dimensional array from an iterable object.
Your generator yields whole rows (arrays) rather than scalars, which is why fromiter raises "cannot create object arrays from iterator". Dropping the .reshape() so the generator yields scalars, this works:
import numpy as np
a = np.arange(9)
gen = (x for x in a)
print(np.sum(np.fromiter(gen,float)))
Output:
36
Since you're summing instances of arrays you can just use the built-in sum:
result = sum(gen)
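A sketch of what the built-in sum gives you for the original reshaped (3, 3) case: each yielded row is added element-wise, so you get the column sums, and a final .sum() on that result gives the grand total.

```python
import numpy as np

a = np.arange(9).reshape(3, 3)
gen = (x for x in a)          # yields the three rows of `a`

col_sums = sum(gen)           # rows are added element-wise: the column sums
total = col_sums.sum()        # grand total of all nine elements
print(col_sums, total)
```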
What about simply converting your generator to a list and then passing it to np.sum?
a = np.arange(9).reshape(3,3)
gen = (x for x in a)
Summing all the elements:
>>> np.sum(list(gen))
36
Summing column-wise:
>>> np.sum(list(gen), axis=0)
array([ 9, 12, 15])
Summing row-wise:
>>> np.sum(list(gen), axis=1)
array([ 3, 12, 21])
Related
When trying to map a function
def make_pair(a, b):
    return (a, b)
to a numpy array,
arr = np.array([1,2,3])
I intend to do
np.array([make_pair('foo',x) for x in arr])
but I read that
in newer version of numpy you can simply call the function by passing the numpy array to the function that you wrote for scalar type
I tried to apply the function make_pair directly to the array (res = make_pair('foo',arr)), but couldn't get the expected result (mapping it over the array). In the code below:
import numpy as np
def make_pair(a, b):
    return (a, b)
arr = np.array([1,2,3])
res_0 = np.array([('foo',1), ('foo',2), ('foo',3)])
res_exp = np.array([make_pair('foo',x) for x in arr])
res = make_pair('foo',arr)
I got:
>>> res
('foo', array([1, 2, 3]))
The function is applied to the array arr as a whole instead of being mapped over it: make_pair simply builds one tuple containing 'foo' and the entire array, since tuple construction does not broadcast element-wise.
Is there a way to tell numpy to map the function instead of apply the function on the array? (other than doing the mapping externally with list comprehension as shown)
This is with Python 3.6.5, and latest numpy as of July 2018, Ubuntu 18.04.
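One way to get element-wise mapping without an external list comprehension is np.frompyfunc, which wraps an arbitrary Python function as a ufunc producing an object array. A sketch, not necessarily the only option:

```python
import numpy as np

def make_pair(a, b):
    return (a, b)

# Wrap make_pair as a ufunc taking 2 inputs and producing 1 output;
# the result is an object array with one tuple per element of arr.
pair_ufunc = np.frompyfunc(make_pair, 2, 1)

arr = np.array([1, 2, 3])
res = pair_ufunc('foo', arr)
```

Note the result has dtype=object; if you need the (3, 2) string array from the list-comprehension version, you still have to convert afterwards.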
array = numpy.array([1,2,3,4,5,6,7,8,9,10])
array[-1:3:1]
>> []
I want this array indexing to return something like this:
[10,1,2,3]
Use np.roll to:
Roll array elements along a given axis. Elements that roll beyond the last position are re-introduced at the first.
>>> np.roll(array, 1)[:4]
array([10, 1, 2, 3])
np.roll lets you wrap an array, which might be useful here:
import numpy as np
a = np.array([1,2,3,4,5,6,7,8,9,10])
b = np.roll(a,1)[0:4]
results in
>>> b
array([10,  1,  2,  3])
As one of the answers mentioned, rolling the array copies the whole array, which can be memory-intensive for large arrays. Another way of doing this without converting to a list is:
np.concatenate([array[-1:], array[:3]])
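Spelled out for the question's ten-element array:

```python
import numpy as np

array = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
# Join the last element with the first three; only these four
# elements are copied, not the whole array.
wrapped = np.concatenate([array[-1:], array[:3]])
print(wrapped)  # [10  1  2  3]
```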
Use np.r_:
import numpy as np
>>> arr = np.arange(1, 11)
>>> arr[np.r_[-1:3]]
array([10, 1, 2, 3])
The simplest solution would be to convert to a list first, then join and convert back to an array.
As such:
>>> numpy.array(list(array[-1:]) + list(array[:3]))
array([10, 1, 2, 3])
This way you can choose which indices to start and end at; only the selected slices are copied, not the entire array.
I have a numpy array that is changed by a function.
After calling the function I want to continue with the array's initial value (its value before the modifying function was called):
# Init of the array
array = np.array([1, 2, 3])
# Function that modifies array
func(array)
# Print the init value [1,2,3]
print(array)
Is there a way to pass the array by value or am I obligated to make a deep copy?
np.ndarray objects are mutable data structures. This means that any variables that refer to a particular object will all reflect changes when a change is made to that object.
However, keep in mind that most numpy functions that transform arrays return new array objects, leaving the original unchanged.
What you need to do in this scenario depends on exactly what you're doing. If your function modifies the same array in place, then you'll need to pass a copy to the function. You can do this with np.ndarray.copy.
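A minimal sketch of the copy approach, using a hypothetical in-place func standing in for the question's function:

```python
import numpy as np

def func(arr):
    # hypothetical stand-in for the question's function;
    # it modifies its argument in place
    arr *= 2

array = np.array([1, 2, 3])
func(array.copy())           # the copy is modified; `array` is untouched
unchanged = array.tolist()   # still [1, 2, 3]

func(array)                  # without a copy, the original changes
changed = array.tolist()     # now [2, 4, 6]
print(unchanged, changed)
```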
You could use the pynverse library (https://pypi.python.org/pypi/pynverse), which inverts your function, and call it like so:
from pynverse import inversefunc
cube = (lambda x: x**3)
invcube = inversefunc(cube)
arr = func(arr)
In [0]: arr
Out[0]: array([1, 8, 27], dtype=int32)
In [1]: invcube(arr)
Out[1]: array([1, 2, 3])
I want to get the inverse tangent of the elements of an array.
import numpy as np
import math
For example (this is an array):
x_value = np.array([1, 2, 3, 4, 5, 6])
a = abs(x_value - 125)
This works fine, but when I take the inverse tangent of a:
b=math.atan(a)
I got this error: TypeError: only length-1 arrays can be converted to Python scalars
How should I solve this error where I can get the tangent inverse of the elements of array a?
Just use np.arctan:
>>> import numpy as np
>>> a = np.array([1,2,3,4,5,6])
>>> a = abs(a - 125) # could use np.abs. It does the same thing, but might be more clear that you expect to get an ndarray instance as a result.
>>> a
array([124, 123, 122, 121, 120, 119])
>>> np.arctan(a)
array([ 1.56273199, 1.56266642, 1.56259979, 1.56253205, 1.56246319,
1.56239316])
you could use a list comprehension to apply math.atan to each element of the array:
import math
a = np.abs(np.array([1,2,3,4,5,6]) - 125)
b = [math.atan(x) for x in a]
You can use a list comprehension:
b = [math.atan(ele) for ele in a]
I am using a set operation in python to perform a symmetric difference between two numpy arrays. The result, however, is a set and I need to convert it back to a numpy array to move forward. Is there a way to do this? Here's what I tried:
a = numpy.array([1,2,3,4,5,6])
b = numpy.array([2,3,5])
c = set(a) ^ set(b)
The results is a set:
In [27]: c
Out[27]: set([1, 4, 6])
If I convert to a numpy array, it places the entire set in the first array element.
In [28]: numpy.array(c)
Out[28]: array(set([1, 4, 6]), dtype=object)
What I need, however, would be this:
array([1,4,6],dtype=int)
I could loop over the elements to convert one by one, but I will have 100,000 elements and hoped for a built-in function to save the loop. Thanks!
Do:
>>> numpy.array(list(c))
array([1, 4, 6])
And the dtype is int (int64 on my machine).
Don't convert the numpy array to a set to perform exclusive-or. Use setxor1d directly.
>>> import numpy
>>> a = numpy.array([1,2,3,4,5,6])
>>> b = numpy.array([2,3,5])
>>> numpy.setxor1d(a, b)
array([1, 4, 6])
Try:
numpy.fromiter(c, int, len(c))
This is twice as fast as the solution that goes through a list as an intermediate.
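For the question's arrays, the fromiter route looks like this (the result is sorted only for the printout, since sets are unordered):

```python
import numpy as np

a = np.array([1, 2, 3, 4, 5, 6])
b = np.array([2, 3, 5])
c = set(a) ^ set(b)

# Preallocate len(c) elements and fill straight from the set,
# skipping the intermediate list.
res = np.fromiter(c, int, len(c))
print(np.sort(res))  # [1 4 6]
```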
Try this:
numpy.array(list(c))
Converting to a list before building the numpy array stores the individual elements as integers, rather than stuffing the whole set into a single object element.