How do you select a group of elements from a 3D array using a 1D array?
# These are my 3 data types:
# A  = numpy.ndarray[numpy.ndarray[float]]
# B  = numpy.ndarray[numpy.ndarray[numpy.ndarray[float]]]
# B2 = numpy.ndarray[numpy.ndarray[numpy.ndarray[float]]]
# I want to choose values from B2 based on where the values of B match the values of A.
This is what I tried but it returned all False:
A2[i]=image_values[updated_image_values==initial_means[i]]
Example:
A=[[1,1,1],[2,2,2]]
B=[[[1,1,1],[2,3,4]],[[2,2,2],[1,1,1]],[[1,1,1],[2,2,2]]]
B2=[[[2,2,2],[9,3,21]],[[22,0,-2],[-1,-1,1]],[[1,-1,-1],[10,0,2]]]
#A2 is calculated as the means of the B2 values that correspond
#to its value according to B
So, to calculate A2, we check which rows in B are equal to rows in A. For the first index, A[0], the rows B[0][0], B[1][1] and B[2][0] are equal to A[0]. So for A2[0], we take the corresponding values of B in B2 and use those to calculate the average at each index:
#A2[0][0]=(B2[0][0][0]+B2[1][1][0]+B2[2][0][0]) /3 = 0.67
#A2[1][2]=(B2[1][0][2]+B2[2][1][2]) /2 = 0
#After doing this for every A2 value, A2 should be:
A2=[[0.67,0,0.67],[16,0,0]]
Here's a vectorized approach with np.add.reduceat -
# rows (i, j, k) where B[j, k] matches A[i] along the last axis
idx = np.argwhere((B == A[:,None,None]).all(-1))
# gather the corresponding rows of B2
B2_indexed = B2[idx[:,1],idx[:,2]]
# start offset and size of each group of matches, per row of A
_, start, count = np.unique(idx[:,0], return_index=True, return_counts=True)
out = np.add.reduceat(B2_indexed, start) / count.astype(float)[:,None]
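As a quick sanity check on the example arrays from the question (a sketch; it reproduces the expected A2):

import numpy as np
A = np.array([[1,1,1],[2,2,2]])
B = np.array([[[1,1,1],[2,3,4]],[[2,2,2],[1,1,1]],[[1,1,1],[2,2,2]]])
B2 = np.array([[[2,2,2],[9,3,21]],[[22,0,-2],[-1,-1,1]],[[1,-1,-1],[10,0,2]]])
idx = np.argwhere((B == A[:,None,None]).all(-1))
B2_indexed = B2[idx[:,1],idx[:,2]]
_, start, count = np.unique(idx[:,0], return_index=True, return_counts=True)
out = np.add.reduceat(B2_indexed, start) / count.astype(float)[:,None]
print(out)  # [[ 0.6667  0.      0.6667]
            #  [16.      0.      0.    ]]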
Alternatively, we can save a bit of memory by skipping the 4D mask and building idx from a 3D comparison instead, like so -
# collapse each length-3 row into a single integer key, then compare 1D keys
dims = np.maximum(B.max(axis=(0,1)), A.max(0)) + 1
A_reduced = np.ravel_multi_index(A.T, dims)
B_reduced = np.ravel_multi_index(B.T, dims)
idx = np.argwhere(B_reduced.T == A_reduced[:,None,None])
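One caveat worth noting: np.ravel_multi_index expects non-negative integer coordinates, so this memory-saving variant only applies when A and B contain such values (as they do in the example).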
Here's another approach with one loop -
out = np.empty(A.shape)
for i in range(A.shape[0]):
    r, c = np.where((B == A[i]).all(-1))
    out[i] = B2[r,c].mean(0)
I have two unsorted ndarrays with the following structure:
a1 = np.array([[0,4,2,3],[0,2,5,6],[2,3,7,4],[6,0,9,8],[9,0,6,7]])
a2 = np.array([[3,4,2],[0,6,9]])
For each row of a2, I would like to find the indices of the rows of a1 that contain it, together with the positions of the a2 values within those a1 rows:
result = [[0,[3,1,2]],[2,[1,3,0]],[3,[1,0,2]],[4,[1,2,0]]]
In this example a2[0] appears in a1 at rows 0 and 2; within those rows, its values sit at positions 3,1,2 and 1,3,0 respectively. a2[1] appears at rows 3 and 4, with value positions 1,0,2 and 1,2,0.
Each a2 row appears twice in a1. a1 has at least one million rows and a2 around 10,000, so the algorithm should also be quite fast (if possible).
So far, I was thinking about this approach:
big_res = []
for r in xrange(len(a2)):
    big_indices = np.argwhere(a1 == a2[r])
    small_res = []
    for k in xrange(2):
        small_indices = [i for i in a2[r] if i in a1[big_indices[k]]]
        np.append(small_res, small_indices)
    combined_res = [[big_indices[0],small_res[0]],[big_indices[1],small_res[1]]]
    np.append(big_res, combined_res)
Using numpy_indexed (disclaimer: I am its author), what I think of as the hard part can be written efficiently as follows:
import numpy_indexed as npi
a1s = np.sort(a1, axis=1)
a2s = np.sort(a2, axis=1)
matches = np.array([npi.indices(a2s, np.delete(a1s, i, axis=1), missing=-1) for i in range(4)])
rows, cols = np.argwhere(matches != -1).T
a1idx = cols
a2idx = matches[rows, cols]
# result.shape = [len(a2), 2]
result = npi.group_by(a2idx).split_array_as_array(a1idx)
This only gives you the matches efficiently; not the relative orders. But once you have the matches, computing the relative orders should be simple to do in linear time.
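For instance, once the matching a1 rows are known, a small argsort-based helper (hypothetical name relative_order; it assumes every value of the a2 row actually occurs in the matched a1 row) recovers those positions:

import numpy as np

def relative_order(a1_row, a2_row):
    # index of each a2_row value inside a1_row
    order = np.argsort(a1_row)
    return order[np.searchsorted(a1_row, a2_row, sorter=order)]

relative_order(np.array([0,4,2,3]), np.array([3,4,2]))  # array([3, 1, 2])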
EDIT: and some code of questionable density to get your relative orderings:
order = npi.indices(
    (np.indices(a1.shape)[0].flatten(), a1.flatten()),
    (np.repeat(result.flatten(), 3), np.repeat(a2, 2, axis=0).flatten())
).reshape(-1, 2, 3) - result[..., None] * 4
I have a 1D vector Zc containing n elements that are 2D arrays. I want to find the index of each 2D array that equals np.ones(Zc[i].shape).
a = np.zeros((5,5))
b = np.ones((5,5))*4
c = np.ones((5,5))
d = np.ones((5,5))*2
Zc = np.stack((a,b,c,d))
for i in range(len(Zc)):
    a = np.ones(Zc[i].shape)
    b = Zc[i]
    if np.array_equal(a,b):
        print(i)
    else:
        pass
This returns 2. The code above works and gives the correct answer, but I want to know if there is a vectorized way to achieve the same result?
Going off of hpaulj's comment:
>>> allones = (Zc == np.ones(Zc.shape[1:])).all(axis=(1,2))
>>> np.where(allones)[0][0]
2
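Since the comparison broadcasts, the explicit ones array is not even needed; comparing against the scalar 1 gives the same mask (a minor variation):

>>> np.flatnonzero((Zc == 1).all(axis=(1,2)))
array([2])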
I have a 2D numpy array with 3 columns. Columns 1 and 2 are a list of connections between IDs. Column 3 is the strength of that connection. I would like to transform this 3-column matrix into a weighted adjacency matrix (an N x N matrix where each cell represents the strength of the connection between a pair of IDs).
I have already done this in my code below. matrix is the 3-column 2D array and t1 is the weighted adjacency matrix. My problem is that this code is very slow because I am using nested for loops. I am familiar with the pandas pivot function, which does this kind of reshaping, but I am not able to use pandas. Is there a faster implementation that does not use pandas?
import numpy as np
a = np.arange(2000)
np.random.shuffle(a)
b = np.arange(2000)
np.random.shuffle(b)
c = np.random.rand(2000,1)
matrix = np.column_stack((a,b,c))
#get unique value list of nm
flds = list(np.unique(matrix[:,0]))
flds.extend(list(np.unique(matrix[:,1])))
flds = np.asarray(flds)
flds = np.unique(flds)
#make lookup dict
lookup = dict(zip(np.arange(0,len(flds)), flds))
lookup_rev = dict(zip(flds, np.arange(0,len(flds))))
#make empty n by n matrix with unique lists
t1 = np.zeros([len(flds) , len(flds)])
#map values into the n by n matrix and make the rest 0
'''this takes a long time to run'''
#iterate through rows
for i in np.arange(0,len(lookup)):
    #iterate through columns
    for k in np.arange(0,len(lookup)):
        val = matrix[(matrix[:,0] == lookup[i]) & (matrix[:,1] == lookup[k])][:,2]
        if val:
            t1[i,k] = sum(val)
Assuming that I understood the question correctly and that val is a scalar, you could use a vectorized approach that involves initializing with zeros and then indexing, like so -
out = np.zeros((len(flds),len(flds)))
out[matrix[:,0].astype(int),matrix[:,1].astype(int)] = matrix[:,2]
Please note that, by my reading, it looks like you can avoid using lookup entirely.
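One caveat (an assumption on my part about the data): plain fancy-indexed assignment keeps only the last value when the same (row, col) pair occurs more than once. If duplicate connections should accumulate, as the sum(val) in the question suggests, np.add.at handles repeated indices:

out = np.zeros((len(flds), len(flds)))
# np.add.at accumulates at repeated index pairs instead of overwriting
np.add.at(out, (matrix[:,0].astype(int), matrix[:,1].astype(int)), matrix[:,2])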
You need to iterate your matrix only once:
import numpy as np
size = 2000
a = np.arange(size)
np.random.shuffle(a)
b = np.arange(size)
np.random.shuffle(b)
c = np.random.rand(size,1)
matrix = np.column_stack((a,b,c))
#get unique value list of nm
fields = np.unique(matrix[:,:2])
n = len(fields)
#make reverse lookup dict
lookup = dict(zip(fields, range(n)))
#make empty n by n matrix
t1 = np.zeros([n, n])
for src, dest, val in matrix:
    i = lookup[src]
    j = lookup[dest]
    t1[i, j] += val
The main acceleration you can get is by not iterating through each element of the NxN matrix but instead iterating through your connection list, which is much smaller.
I tried to simplify your code a bit. It uses the list.index method, which can be slow, but it should still be faster than what you had.
import numpy as np
a = np.arange(2000)
np.random.shuffle(a)
b = np.arange(2000)
np.random.shuffle(b)
c = np.random.rand(2000,1)
matrix = np.column_stack((a,b,c))
lookup = np.unique(matrix[:,:2]).tolist() # You can call unique only once
t1 = np.zeros((len(lookup),len(lookup)))
for i,j,val in matrix:
    t1[lookup.index(i),lookup.index(j)] = val # fill the matrix
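If list.index does become the bottleneck, the same ID-to-index mapping can be done fully vectorized with np.searchsorted against the sorted unique IDs (a sketch, assuming each (i, j) pair appears at most once, as it does with the shuffled aranges above):

fields = np.unique(matrix[:,:2])             # sorted unique IDs
rows = np.searchsorted(fields, matrix[:,0])  # exact positions, since every ID is in fields
cols = np.searchsorted(fields, matrix[:,1])
t1 = np.zeros((len(fields), len(fields)))
t1[rows, cols] = matrix[:,2]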
The following is my script. Each equal part has self.number samples; in0 is the input sample. It raises the following error:
pn[i] = pn[i] + d
IndexError: list index out of range
Is the problem the size of pn? How can I define a list with a certain size but no values in it yet?
for i in range(0,len(in0)/self.number):
    pn = []
    m = i*self.number
    for d in in0[m: m + self.number]:
        pn[i] += d
    if pn[i] >= self.alpha:
        out[i] = 1
    elif pn[i] <= self.beta:
        out[i] = 0
    else:
        if pn[i] >= self.noise:
            out[i] = 1
        else:
            out[i] = 0
There are a number of problems in the code as posted; however, the gist seems to be something that you'd want to do with numpy arrays instead of iterating over lists.
For example, the set of if/else cases that check whether pn[i] >= some_value and then set a corresponding entry in another list with the result (true/false) could be done as a one-liner with an array operation, much faster than iterating over lists.
import numpy as np
# for example, assuming you have 9 numbers in your list
# and you want them divided into 3 sublists of 3 values each
# in0 is your original list, which for example might be:
in0 = [1.05, -0.45, -0.63, 0.07, -0.71, 0.72, -0.12, -1.56, -1.92]
# convert into array
in2 = np.array(in0)
# reshape to 3 rows, the -1 means that numpy will figure out
# what the second dimension must be.
in2 = in2.reshape((3,-1))
print(in2)
output:
[[ 1.05 -0.45 -0.63]
[ 0.07 -0.71 0.72]
[-0.12 -1.56 -1.92]]
With this 2-d array structure, element-wise summing is super easy. So is element-wise threshold checking. Plus 'vectorizing' these operations has big speed advantages if you are working with large data.
# add corresponding entries, we want to add the columns together,
# as each row should correspond to your sub-lists.
pn = in2.sum(axis=0) # you can sum row-wise or column-wise, or all elements
print(pn)
output: [ 1. -2.72 -1.83]
# it is also trivial to check the threshold conditions
# here I check each entry in pn against a scalar
alpha = 0.0
out1 = ( pn >= alpha )
print(out1)
output: [ True False False]
# you can easily convert booleans to 1/0
x = out1.astype('int') # or simply out1 * 1
print(x)
output: [1 0 0]
# if you have a list of element-wise thresholds
beta = np.array([0.0, 0.5, -2.0])
out2 = (pn >= beta)
print(out2)
output: [True False True]
I hope this helps. Using the correct data structures for your task can make the analysis much easier and faster. There is a wealth of documentation on numpy, which is the standard numeric library for python.
You initialize pn to an empty list just inside the for loop, never assign anything into it, and then attempt to access an index i. There is nothing at index i because there is nothing at any index in pn yet.
for i in range(0, len(in0) / self.number):
    pn = []
    m = i*self.number
    for d in in0[m: m + self.number]:
        pn[i] += d
If you are trying to add the value d to the pn list, you should do this instead:
pn.append(d)
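For completeness, here is one way the loop could be written so each block is summed correctly (a sketch; it assumes out should hold one 0/1 flag per block and that self.alpha, self.beta and self.noise are as in the question; note the // for integer division under Python 3):

out = [0] * (len(in0) // self.number)
for i in range(len(out)):
    total = sum(in0[i*self.number : (i+1)*self.number])
    if total >= self.alpha:
        out[i] = 1
    elif total <= self.beta:
        out[i] = 0
    else:
        out[i] = 1 if total >= self.noise else 0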
I have the following dataset in numpy
indices | real data (X) |targets (y)
| |
0 0 | 43.25 665.32 ... |2.4 } 1st block
0 0 | 11.234 |-4.5 }
0 1 ... ... } 2nd block
0 1 }
0 2 } 3rd block
0 2 }
1 0 } 4th block
1 0 }
1 0 }
1 1 ...
1 1
1 2
1 2
2 0
2 0
2 1
2 1
2 1
...
These are my variables
idx1 = data[:,0]
idx2 = data[:,1]
X = data[:,2:-1]
y = data[:,-1]
I also have a variable W which is a 3D array.
What I need to do in the code is loop through all the blocks in the dataset, return a scalar for each block after some computation, then sum up all the scalars and store the total in a variable called cost. The problem is that the looping implementation is very slow, so I'm trying to vectorize it if possible. This is my current code. Is it possible to do this without for loops in numpy?
IDX1 = 0
IDX2 = 1
# get unique indices
idx1s = np.arange(len(np.unique(data[:,IDX1])))
idx2s = np.arange(len(np.unique(data[:,IDX2])))
# initialize global sum variable to 0
cost = 0
for i1 in idx1s:
    for i2 in idx2s:
        # for each block in the dataset
        mask = np.nonzero((data[:,IDX1] == i1) & (data[:,IDX2] == i2))
        # get variables for that block
        curr_X = X[mask,:]
        curr_y = y[mask]
        curr_W = W[:,i2,i1]
        # calculate a scalar
        pred = np.dot(curr_X,curr_W)
        sigm = 1.0 / (1.0 + np.exp(-pred))
        loss = np.sum((sigm - 0.5) * curr_y)
        # add result to global cost
        cost += loss
Here is some sample data
data = np.array([[0,0,5,5,7],
                 [0,0,5,5,7],
                 [0,1,5,5,7],
                 [0,1,5,5,7],
                 [1,0,5,5,7],
                 [1,1,5,5,7]])
W = np.zeros((2,2,2))
idx1 = data[:,0]
idx2 = data[:,1]
X = data[:,2:-1]
y = data[:,-1]
That W was tricky... Actually, your blocks are pretty irrelevant, apart from getting the right slice of W to do the np.dot with the corresponding X, so I went the easy route of creating an aligned_W array as follows:
aligned_W = W[:, idx2, idx1]
This is an array of shape (2, rows) where rows is the number of rows of your data set. You can now proceed to do your whole calculation without any for loops as:
from numpy.core.umath_tests import inner1d
pred = inner1d(X, aligned_W.T)
sigm = 1.0 / (1.0 + np.exp(-pred))
loss = (sigm - 0.5) * y
cost = np.sum(loss)
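A side note: the numpy.core.umath_tests module was deprecated and later removed from NumPy, so on recent versions the same row-wise inner product can be written with np.einsum (equivalent, to the best of my understanding):

pred = np.einsum('ij,ij->i', X, aligned_W.T)  # row-wise dot products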
My guess is the major reason your code is slow is the following line:
mask = np.nonzero((data[:,IDX1] == i1) & (data[:,IDX2] == i2))
Because you repeatedly scan your input arrays for a small number of rows of interest. So you need to do the following:
ni1 = len(np.unique(data[:,IDX1]))
ni2 = len(np.unique(data[:,IDX2]))
idx1s = np.arange(ni1)
idx2s = np.arange(ni2)
key = data[:,IDX1] * ni2 + data[:,IDX2] # 1D key to the rows
sortids = np.argsort(key) #indices to the sorted key
Then inside the loop instead of
mask = np.nonzero(...)
you need to do
curid = i1 * ni2 + i2
left = np.searchsorted(key, curid, 'left', sorter=sortids)
right = np.searchsorted(key, curid, 'right', sorter=sortids)
mask = sortids[left:right]
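Put together, the per-block lookup might read like this (a sketch using the names defined above; the block computation itself is unchanged):

cost = 0
for i1 in idx1s:
    for i2 in idx2s:
        curid = i1 * ni2 + i2
        left = np.searchsorted(key, curid, 'left', sorter=sortids)
        right = np.searchsorted(key, curid, 'right', sorter=sortids)
        mask = sortids[left:right]
        # then curr_X = X[mask,:], curr_y = y[mask], etc., as before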
I don't think there is a way to compare numpy arrays of different sizes without using for loops. It would be hard to decide on the meaning and shape of the output of something like
[0,1,2,3,4] == [3,4,2]
The only suggestion I can give you is to get rid of one of the for loops using itertools.product:
import itertools as it
[...]
idx1s = np.unique(data[:,IDX1])
idx2s = np.unique(data[:,IDX2])
# initialize global sum variable to 0
cost = 0
for i1, i2 in it.product(idx1s, idx2s):
    # for each block in the dataset
    mask = np.nonzero((data[:,IDX1] == i1) & (data[:,IDX2] == i2))
    # get variables for that block
    curr_X = X[mask,:]
    curr_y = y[mask]
    [...]
You can also keep mask as a bool array
mask = (data[:,IDX1] == i1) & (data[:,IDX2] == i2)
The output is the same, and you have to use the memory to create the bool array anyway, so doing it this way saves you some memory and a function call.
EDIT
If you know that the indices have no holes, or only a few, it might be worth removing the part where you define idx1s and idx2s and changing the for loop to
max1, max2 = data[:,[IDX1, IDX2]].max(axis=0)
for i1, i2 in it.product(xrange(max1 + 1), xrange(max2 + 1)):
    [...]
Both xrange and it.product are iterators, so they create i1 and i2 only when you need them.
ps: if you are on python3.x use range instead of xrange