Efficient way to assign list elements to numpy array - python

I have some numpy array objects in a list that I want to combine into a single numpy array. What is an efficient way to do this? The code below does not work, since it tries to assign a list of arrays to a single row of a numpy array...
import numpy as np
C = [np.array([1,2,3]), np.array([4,5,6]), np.array([7,8,9])]
M = np.zeros((1,3*3))
M[0] = C ## THIS THROWS AN ERROR

Use the following: np.append with an empty array flattens the list of arrays into a single 1-D array.
print(np.append(C, []))
# [1. 2. 3. 4. 5. 6. 7. 8. 9.]
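As an aside (not from the original answer): np.concatenate joins a list of arrays directly, which avoids the flatten-via-append detour, and np.vstack keeps them as rows if a 2-D result is wanted.

```python
import numpy as np

C = [np.array([1, 2, 3]), np.array([4, 5, 6]), np.array([7, 8, 9])]

# Join the arrays end to end into one 1-D array.
M = np.concatenate(C)

# Or stack them as rows of a (3, 3) matrix instead.
M2 = np.vstack(C)
```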

Efficient operation on numpy arrays contain rows with different size

I want to ask something related to this question posted a while ago: Operation on numpy arrays contain rows with different size. The point is that I need to do some operations with numpy arrays whose rows have different sizes.
The standard way, like list2*list3*np.exp(list1), doesn't work since the rows have different sizes; the option that does work uses zip. See the code below.
import numpy as np
import time
list1 = np.array([[2.,0.,3.5,3],[3.,4.2,5.,7.1,5.]], dtype=object)  # ragged rows need dtype=object on recent NumPy
list2 = np.array([[2,3,3,0],[3,8,5.1,7.6,1.2]], dtype=object)
list3 = np.array([[1,3,8,3],[3,4,9,0,0]], dtype=object)
start_time = time.time()
c = []
for i in range(len(list1)):
    # zip walks the i-th rows element by element
    c.append([l2 * l3 * np.exp(l1) for l1, l2, l3 in zip(list1[i], list2[i], list3[i])])
print("--- %s seconds ---" % (time.time() - start_time))
I want to ask if there is a much more efficient way to perform these operations, avoiding the loop and doing it in a more NumPy-idiomatic way. Thanks!
This should do it:
f = np.vectorize(lambda x, y, z: y * z * np.exp(x))
result = [f(*i) for i in np.column_stack((list1, list2, list3))]
result
#[array([ 14.7781122 , 9. , 794.77084701, 0. ]),
# array([ 180.76983231, 2133.96259331, 6812.16400281, 0. , 0. ])]
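An alternative sketch (my own, not from the thread): flatten the jagged rows into contiguous 1-D arrays, do one truly vectorised pass, then split the result back at the remembered row boundaries. Note that np.vectorize is essentially a loop in disguise, so this can pay off on larger inputs.

```python
import numpy as np

# Jagged rows kept as plain Python lists.
list1 = [[2., 0., 3.5, 3.], [3., 4.2, 5., 7.1, 5.]]
list2 = [[2., 3., 3., 0.], [3., 8., 5.1, 7.6, 1.2]]
list3 = [[1., 3., 8., 3.], [3., 4., 9., 0., 0.]]

# Remember where each row ends, then flatten everything.
splits = np.cumsum([len(r) for r in list1])[:-1]
a, b, c = (np.concatenate(l) for l in (list1, list2, list3))

# One vectorised computation over the flat data, split back into rows.
result = np.split(b * c * np.exp(a), splits)
```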

Sequentially add elements in numpy array into new array

I'm curious if there is a built in function to transform an array of values into a cumulative array of values.
Example:
input = np.asarray([0.000,1.500,2.100,5.000])
into
[0.000,1.500,3.600,8.600]
Thanks!
Use NumPy's built-in cumsum to get the cumulative sum of your array:
inputt = np.asarray([0.000, 1.500, 2.100, 5.000])
print(np.cumsum(inputt))
# [0. 1.5 3.6 8.6]
I renamed your array to inputt because input is already a built-in function in Python that reads user input from the keyboard.

Python - iterate over an array with empty (nan) values

Thank you in advance for your time.
The problem is as follows: I have a matrix where both "0" and "empty fields" are necessary for the further calculation.
When the data is converted into a numpy array, the empty fields are automatically replaced with "nan". How can I loop over each row of the array while ignoring the "nan" values in the further calculation?
>>>data
[[ 2. 4. 7.]
[ 7. 0. nan]
[-3. 7. 0.]
[nan nan 6.]]
The idea was to run a set of conditions while iterating over the rows and possibly append to a new numpy array, but for simplicity let's say I just want a new array without the "nan" values, so the final result would look something like:
>>>final_data
[[2,4,7], [7,0], [-3,7,0], [6]]
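A minimal sketch of one way to get that result (assuming the goal is just dropping the nans): use ~np.isnan(row) as a boolean mask per row. Since the rows end up with different lengths, the result has to be a list of 1-D arrays rather than a rectangular array.

```python
import numpy as np

data = np.array([[2., 4., 7.],
                 [7., 0., np.nan],
                 [-3., 7., 0.],
                 [np.nan, np.nan, 6.]])

# Keep only the non-nan entries of each row; zeros survive the mask.
final_data = [row[~np.isnan(row)] for row in data]
```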

Summing up columns of arrays of different shapes in array of arrays- Python 3.x

I have an array that contains 2D arrays.
For each 2D array I want to sum up the columns, and the result must be in column form.
I have a piece of code that does this, but I feel like I am not utilising numpy optimally. What is the fastest way to do this?
My current code:
temp = [np.sum(l_i,axis=1).reshape(-1,1) for l_i in self.layer_inputs]
Sample Array:
array([
array([[ 0.48517904, -11.10809746],
[ 13.64104864, 5.77576326]]),
array([[16.74109924, -3.28535518],
[-4.00977275, -3.39593759],
[ 5.9048581 , -1.65258805],
[13.40762143, -1.61158724],
[ 9.8634849 , 8.02993728]]),
array([[-7.61920427, -3.2314264 ],
[-3.79142779, -2.44719713],
[32.42085005, 4.79376209],
[13.97676962, -1.19746096],
[45.60100807, -3.01680368]])
], dtype=object)
Sample Expected Result:
[array([[-10.62291842],
[ 19.41681191]]),
array([[13.45574406],
[-7.40571034],
[ 4.25227005],
[11.7960342 ],
[17.89342218]]),
array([[-10.85063067],
[ -6.23862492],
[ 37.21461214],
[ 12.77930867],
[ 42.58420439]]) ]
New answer
Given your stringent requirement for a list of arrays, there is no more computationally efficient solution.
Original answer
To leverage NumPy, don't work with a list of arrays: dtype=object is the hint you won't be able to use vectorised operations.
Instead, combine into one array, e.g. via np.vstack, and store split indices. If you need a list of arrays, use np.split as a final step. But this constant flipping between lists and a single array is expensive. Really, you should attempt to just store the splits and a single array, i.e. idx and data below.
idx = np.array(list(map(len, A))).cumsum()[:-1] # [2, 7]
data = np.vstack(A).sum(1)
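To round that off, here is a small self-contained sketch of the idx/data pattern with made-up data (the original's sum(1) is given keepdims=True so the per-row sums stay in column form), including the optional np.split as a final step:

```python
import numpy as np

# Made-up jagged input: 2-D blocks with different row counts.
A = [np.array([[1., 2.], [3., 4.]]),
     np.array([[5., 6.], [7., 8.], [9., 10.]])]

idx = np.cumsum([len(a) for a in A])[:-1]    # split points between blocks
data = np.vstack(A).sum(1, keepdims=True)    # per-row sums, as one column

# Only if a list of arrays is genuinely required:
parts = np.split(data, idx)
```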

Numpy - Compare elements in two 2D arrays and replace values

I have a specific requirement for this problem: it needs to be simple and fast.
My problem:
I have two 2D arrays, and I need to replace values in the first array with values from the second array according to a condition. That is, if the element at position x, y in the first array is smaller than the element at x, y in the second array, replace it with the element from the second array.
What I tried, which does not work:
import numpy as np
arr = np.random.randint(3,size=(2, 2))
arr2 = np.random.randint(3,size=(2, 2))
print(arr)
print(arr2)
arr[arr < arr2] = arr2  # Doesn't work.
This raises TypeError:
TypeError: NumPy boolean array indexing assignment requires a 0 or 1-dimensional input, input has 2 dimensions.
I can see that it would be possible to iterate through the columns or rows, but I believe it can be done without iteration.
Thanks in advance
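A sketch of the fix: index the right-hand side with the same boolean mask, so both sides of the assignment have matching 1-D shapes; or use np.maximum, which expresses exactly this replacement in one call.

```python
import numpy as np

arr = np.array([[0, 2], [1, 0]])
arr2 = np.array([[1, 1], [0, 3]])

# Apply the mask to both sides so the assigned values line up.
mask = arr < arr2
arr[mask] = arr2[mask]

# Equivalently, as a single elementwise operation (no in-place mutation):
arr_max = np.maximum(arr, arr2)
```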
