Numpy Error using 'np.arange': Index Error [duplicate] - python
This question already has answers here:
First row of numpy.ones is still populated after referencing another matrix
(5 answers)
Closed 6 years ago.
I am working with the code shown below and am getting an Index Error:
index 8 is out of bounds for axis 1 with size 8 and
index 6 is out of bounds for axis 0 with size 6.
When I change np.arange(1,9) and np.arange(2,8) to np.arange(8) and np.arange(6) respectively, the code runs with no errors, but the output matrix C gives different results than expected. What if I want i and j to be integers with the values (1,2,3,4,5,6,7,8) instead of (0,1,2,3,4,5,6,7), and k and l to have the values (2,3,4,5,6,7)? I am creating a matrix C that looks at the inner 6x6 square of B (ignoring the border) and uses the matrix D as a 'weight' when determining the values of matrix C.
import numpy as np
A = np.matrix([[8,8,8,7,7,6,8,2],
[8,8,7,7,7,6,6,7],
[1,8,8,7,7,6,6,6],
[1,1,8,7,7,6,7,7],
[1,1,1,1,8,7,7,6],
[1,1,2,1,8,7,7,6],
[2,2,2,1,1,8,7,7],
[2,1,2,1,1,8,8,7]])
B = np.ones((8,8),dtype=np.int)
for i in np.arange(1,9):
    for j in np.arange(1,9):
        B[i,j] = A[i,j]
C = np.zeros((6,6),dtype=np.int)
print C
D = np.matrix([[1,1,2,3,3,2,2,1],
[1,2,1,2,3,3,3,2],
[1,1,2,1,1,2,2,3],
[2,2,3,2,2,2,1,3],
[1,2,2,3,2,3,1,3],
[1,2,3,3,2,3,2,3],
[1,2,2,3,2,3,1,2],
[2,2,3,2,2,3,2,2]])
print D
for k in np.arange(2,8):
    for l in np.arange(2,8):
        B[k,l] # point in middle
        b = B[(k-1),(l-1)]
        if b == 8:
            # Matrix C is smaller than Matrix B
            C[(k-1),(l-1)] = C[(k-1),(l-1)] + 1*D[(k-1),(l-1)]
Remember that Python indexing starts at 0, not 1, so an axis of size 8 has valid indices 0-7, not 1-8. np.arange(1,9) yields the values 1 through 8, so B[i,j] eventually asks for index 8, which is out of bounds for the 8x8 matrix B. Likewise np.arange(2,8) yields 2 through 7, so C[k-1, l-1] reaches index 6, which is out of bounds for the 6x6 matrix C.
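As a minimal sketch (making assumptions about how the "human" loop values are meant to map onto array positions), you can keep i and j running over 1-8 and k and l over 2-7 by subtracting an offset whenever a variable is used as an index, so that B is only ever indexed with 0-7 and C with 0-5. The A and D below are hypothetical stand-ins, not the matrices from the question:

import numpy as np

A = np.full((8, 8), 8)            # hypothetical stand-in for the 8x8 matrix A
D = np.ones((8, 8), dtype=int)    # hypothetical stand-in for the 8x8 weight matrix D
B = np.ones((8, 8), dtype=int)
C = np.zeros((6, 6), dtype=int)

# i and j take the values 1..8; subtract 1 to get valid 0-based indices 0..7.
for i in np.arange(1, 9):
    for j in np.arange(1, 9):
        B[i - 1, j - 1] = A[i - 1, j - 1]

# k and l take the values 2..7; k-1 and l-1 (1..6) address the inner square of
# the 8x8 B, while C is only 6x6, so shift by 2 to get its 0-based indices 0..5.
for k in np.arange(2, 8):
    for l in np.arange(2, 8):
        if B[k - 1, l - 1] == 8:
            C[k - 2, l - 2] += D[k - 1, l - 1]

print(C)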
Related
Get access index of a 2D array element built from 2 x 1D arrays
I have the following prior_total built from the prior_fish array and np.arange(5):

num_prior_loop = 50
# Declare prior array
prior_start = 1
prior_end = 5
prior_fish = np.logspace(prior_start, prior_end, num_prior_loop)
# Prior total
prior_total = np.stack([prior_fish.T, np.arange(5)])

How do I access prior_total[i,j], meaning i for the i-th element of prior_fish and j for the j-th element of np.arange(5)? I tried prior_total[i,j] and prior_total([[[0,i],[1,j]]], but this doesn't work. What might be wrong here?
a = np.logspace(1,5,5)
b = np.arange(5)
c = np.stack([a.T, b])

c[i,j] returns a single element, the one at row i, column j. c[0,i], c[1,j] returns two elements: the i-th element of a and the j-th element of b.
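For illustration, a short run of the two kinds of lookup on the stacked array above (the stacked result is float, so b's values come back as floats):

import numpy as np

a = np.logspace(1, 5, 5)   # [10., 100., 1000., 10000., 100000.]
b = np.arange(5)           # [0, 1, 2, 3, 4]
c = np.stack([a.T, b])     # shape (2, 5): row 0 is a, row 1 is b

print(c[0, 2])             # 1000.0 -> the 3rd element of a
print(c[1, 3])             # 3.0    -> the 4th element of b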
Accumulated sum of 2D array [duplicate]
This question already has answers here: Multidimensional cumulative sum in numpy (3 answers) Closed 2 years ago.

Suppose I have a 2D numpy array like below:

dat = np.array([[1,2],[3,4],[5,6],[7,8]])

I want to get a new array in which each row equals the sum of its previous rows plus itself, like the following:

first row: [1,2]
second row: [1,2] + [3,4] = [4,6]
third row: [4,6] + [5,6] = [9,12]
fourth row: [9,12] + [7,8] = [16,20]

So the array would be:

dat = np.array([[1,2],[4,6],[9,12],[16,20]])
np.cumsum is what you are looking for:

dat = np.array([[1,2],[3,4],[5,6],[7,8]])
result = np.cumsum(dat, axis=0)
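For reference, running that snippet reproduces the array the question asks for:

import numpy as np

dat = np.array([[1, 2], [3, 4], [5, 6], [7, 8]])
result = np.cumsum(dat, axis=0)   # cumulative sum down the rows (axis 0)
print(result)
# [[ 1  2]
#  [ 4  6]
#  [ 9 12]
#  [16 20]]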
Transforming a 3 Column Matrix into an N x N Matrix in Numpy
I have a 2D numpy array with 3 columns. Columns 1 and 2 are a list of connections between IDs. Column 3 is the strength of that connection. I would like to transform this 3-column matrix into a weighted adjacency matrix (an N x N matrix where cells represent the strength of connection between each ID). I have already done this in my code below: matrix is the 3-column 2D array and t1 is the weighted adjacency matrix. My problem is that this code is very slow because I am using nested for loops. I am familiar with the pandas function melt, which does this, but I am not able to use pandas. Is there a faster implementation not using pandas?

import numpy as np

a = np.arange(2000)
np.random.shuffle(a)
b = np.arange(2000)
np.random.shuffle(b)
c = np.random.rand(2000,1)
matrix = np.column_stack((a,b,c))

#get unique value list of nm
flds = list(np.unique(matrix[:,0]))
flds.extend(list(np.unique(matrix[:,1])))
flds = np.asarray(flds)
flds = np.unique(flds)

#make lookup dict
lookup = dict(zip(np.arange(0,len(flds)), flds))
lookup_rev = dict(zip(flds, np.arange(0,len(flds))))

#make empty n by n matrix with unique lists
t1 = np.zeros([len(flds), len(flds)])

#map values into the n by n matrix and make the rest 0
'''this takes a long time to run'''
#iterate through rows
for i in np.arange(0,len(lookup)):
    #iterate through columns
    for k in np.arange(0,len(lookup)):
        val = matrix[(matrix[:,0] == lookup[i]) & (matrix[:,1] == lookup[k])][:,2]
        if val:
            t1[i,k] = sum(val)
Assuming that I understood the question correctly and that val is a scalar, you could use a vectorized approach that involves initializing with zeros and then indexing, like so:

out = np.zeros((len(flds),len(flds)))
out[matrix[:,0].astype(int),matrix[:,1].astype(int)] = matrix[:,2]

Please note that by my observation it looks like you can avoid using lookup.
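As a small hedged illustration of that indexing trick on made-up data (the IDs here are already 0..n-1, so no lookup step is needed):

import numpy as np

# three hypothetical connections: (source, destination, weight)
matrix = np.array([[0, 1, 0.5],
                   [2, 0, 0.25],
                   [1, 2, 0.75]])

n = 3
out = np.zeros((n, n))
out[matrix[:, 0].astype(int), matrix[:, 1].astype(int)] = matrix[:, 2]

print(out)
# out[0, 1] == 0.5, out[2, 0] == 0.25, out[1, 2] == 0.75; every other cell stays 0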
You need to iterate your matrix only once:

import numpy as np

size = 2000
a = np.arange(size)
np.random.shuffle(a)
b = np.arange(size)
np.random.shuffle(b)
c = np.random.rand(size,1)
matrix = np.column_stack((a,b,c))

#get unique value list of nm
fields = np.unique(matrix[:,:2])
n = len(fields)

#make reverse lookup dict
lookup = dict(zip(fields, range(n)))

#make empty n by n matrix
t1 = np.zeros([n, n])

for src, dest, val in matrix:
    i = lookup[src]
    j = lookup[dest]
    t1[i, j] += val
The main acceleration you can get is by not iterating through each element of the NxN matrix but instead iterating through your connection list, which is much smaller. I tried to simplify your code a bit. It uses the list.index method, which can be slow, but it should still be faster than what you had.

import numpy as np

a = np.arange(2000)
np.random.shuffle(a)
b = np.arange(2000)
np.random.shuffle(b)
c = np.random.rand(2000,1)
matrix = np.column_stack((a,b,c))

lookup = np.unique(matrix[:,:2]).tolist() # You can call unique only once
t1 = np.zeros((len(lookup),len(lookup)))
for i,j,val in matrix:
    t1[lookup.index(i),lookup.index(j)] = val # Fill the matrix
Cumulative integration of elements of numpy arrays
I would like to do the following type of integration. Say I have 2 arrays:

a = np.array([1,2,3,4])
b = np.array([2,4,6,8])

I know how to integrate these using something like:

c = scipy.integrate.simps(b, a)

where c = 15 for the above data set. What I would like to do is multiply the first elements of each array and add the result to a new array called d, i.e. a[0]*b[0], then integrate the first 2 elements of the arrays, then the first 3 elements, etc. So eventually, for this data set, I would get:

d = [2 3 8 15]

I have tried a few things but no luck; I am pretty new to writing code.
If I have understood correctly what you need, you could do the following:

import numpy as np
from scipy import integrate

a = np.array([2,4,6,8])
b = np.array([1,2,3,4])

d = np.empty_like(b)
d[0] = a[0] * b[0]
for i in range(2, len(a) + 1):
    d[i-1] = integrate.simps(b[0:i], a[0:i])

print(d)
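For what it's worth, running that loop reproduces the d the question asks for; d is an integer array because np.empty_like(b) copies b's dtype (note that in newer SciPy versions the function is named integrate.simpson rather than integrate.simps):

import numpy as np
from scipy import integrate

a = np.array([2, 4, 6, 8])
b = np.array([1, 2, 3, 4])

d = np.empty_like(b)          # integer dtype, copied from b
d[0] = a[0] * b[0]            # first entry is just the product of the first elements
for i in range(2, len(a) + 1):
    d[i - 1] = integrate.simps(b[0:i], a[0:i])   # integrate over the first i points

print(d)                      # [ 2  3  8 15]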
Matrix multiplication with numpy.einsum
I have the following two arrays with shapes:

A = (d,w,l)
B = (d,q)

And I want to combine these into a 3d array with shape:

C = (q,w,l)

To be a bit more specific: in my case d (the depth of the 3d array) is 2, and I'd first like to multiply all positions out of w * l in the upper layer of A (so d = 0) with the first value of B in the highest row (so d=0, q=0). For d=1 I do the same, and then sum the two, so:

C_{q=0,w,l} = A_{d=0,w,l}*B_{d=0,q=0} + A_{d=1,w,l}*B_{d=1,q=0}

I wanted to calculate C by making use of numpy.einsum. I thought of the following code:

A = np.arange(100).reshape(2,10,5)
B = np.arange(18).reshape(2,9)
C = np.einsum('ijk,i -> mjk',A,B)

where ijk refers to 2,10,5 and mjk refers to 9,10,5. However, I get an error. Is there some way to perform this multiplication with numpy einsum? Thanks
Your shapes A = (d,w,l), B = (d,q), C = (q,w,l) practically write the einsum expression:

C = np.einsum('dwl,dq->qwl', A, B)

which I can test with:

In [457]: np.allclose(A[0,:,:]*B[0,0]+A[1,:,:]*B[1,0], C[0,:,:])
Out[457]: True
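Putting the answer's expression together with the shapes from the question (this is the same einsum call, just run end to end):

import numpy as np

A = np.arange(100).reshape(2, 10, 5)    # (d, w, l) with d=2, w=10, l=5
B = np.arange(18).reshape(2, 9)         # (d, q) with d=2, q=9

# Sum over the shared d axis; q becomes the leading axis of the result.
C = np.einsum('dwl,dq->qwl', A, B)

print(C.shape)                                              # (9, 10, 5)
print(np.allclose(A[0] * B[0, 0] + A[1] * B[1, 0], C[0]))   # True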