matrix = np.array([[0,0,0,0],[0,0,0,0],[0,0,0,0],[0,0,0,0]])
vector = np.array([0,0,0,0])
For vectors, you can set every other element like so:
vector[1::2] = 1
This gives
np.array([0,1,0,1])
However, for a matrix,
matrix[1::2] = 1
yields
np.array([[0,0,0,0],[1,1,1,1],[0,0,0,0],[1,1,1,1]])
I would like the output
np.array([[0,1,0,1],[0,1,0,1],[0,1,0,1],[0,1,0,1]])
There is a brute-force approach: take the shape of the array, flatten it, use [1::2], and reshape, but I'm sure there is a more elegant solution I am missing.
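For reference, a sketch of that brute-force route (using the 4x4 matrix defined above):
import numpy as np
matrix = np.array([[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
# remember the shape, flatten to 1d, set every other element, then reshape back
shape = matrix.shape
flat = matrix.flatten()   # flatten() returns a copy
flat[1::2] = 1
matrix = flat.reshape(shape)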
Any help would be appreciated.
You can do something similar with multidimensional indexing
>>> matrix
array([[0, 0, 0, 0],
[0, 0, 0, 0],
[0, 0, 0, 0],
[0, 0, 0, 0]])
>>> matrix[:,1::2] = 1
>>> matrix
array([[0, 1, 0, 1],
[0, 1, 0, 1],
[0, 1, 0, 1],
[0, 1, 0, 1]])
As in the title, if I have a matrix a
a = np.diag(np.arange(5))
array([[0, 0, 0, 0, 0],
[0, 1, 0, 0, 0],
[0, 0, 2, 0, 0],
[0, 0, 0, 3, 0],
[0, 0, 0, 0, 4]])
How can I assign a new 4x4 matrix, or even a 3x4 matrix, to a with the i-th row and i-th column left out? Let's say
b = array([[1,1,1,1],
[1,1,1,1],
[1,1,1,1]])
I want to slice a, dropping the first and second rows and the second column, and assign b there, which in R would be something like
a[c(-1,-2), -2] = b
a =
array([[0, 0, 0, 0, 0],
[0, 1, 0, 0, 0],
[1, 0, 1, 1, 1],
[1, 0, 1, 1, 1],
[1, 0, 1, 1, 1]])
But in Python, I tried something like
a[[2,3,4],:][:,[0,2,3,4]]
output:
array([[0, 2, 0, 0],
[0, 0, 3, 0],
[0, 0, 0, 4]])
This chained indexing returns a copy, so assigning a new matrix to it does not change a.
How can I do that? I really appreciate any help you can provide.
p.s.
I found that in this special case I can assign values by blocks. But what I actually want to ask is this: when we slice like a[2:5, [0,2,3,4]], we get a 3x4 matrix and can assign a new matrix to that position. What I want is to slice like a[[0,2,3,4], [0,2,3,4]] to get a 4x4 matrix (or other shapes; the row and column indices may even be arbitrary) and assign a new matrix to that position, but numpy gives me a 1d array instead.
newmatrix = a[[0, 1, 3, 4], :][:, [0, 1, 3, 4]]
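For the assignment itself, one option worth noting is np.ix_, which builds an open mesh from the row and column index lists so the assignment targets their full cross product (a sketch using the arrays from the question):
import numpy as np
a = np.diag(np.arange(5))
b = np.ones((3, 4), dtype=int)
# np.ix_ turns the two 1d index lists into broadcastable index arrays,
# so this writes b into the 3x4 block formed by rows 2,3,4 and columns 0,2,3,4
a[np.ix_([2, 3, 4], [0, 2, 3, 4])] = b
This also works when the row and column indices are arbitrary, which is the case described in the p.s.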
Regarding setting the values of a sub-matrix inside a larger matrix, I think there is no direct option. But you can rebuild the original matrix around the one to be inserted:
before = np.array([[0, 0, 0, 0, 0],
[0, 1, 0, 0, 0],
[0, 0, 2, 0, 0],
[0, 0, 0, 3, 0],
[0, 0, 0, 0, 4]])
insert_array = np.array([[1, 1, 1, 1],
[1, 1, 1, 1],
[1, 1, 1, 1]])
First two rows, without the second column:
first_step = np.delete(before[:2, :], 1, 1)
or
first_step = before[:2, [0, 2, 3, 4]]
Prepended to the insert matrix:
second_step = np.insert(insert_array, 0, first_step, axis=0)
The original second column inserted back:
third_step = np.insert(second_step, 1, before[:, 1], axis=1)
Final matrix:
third_step
array([[0, 0, 0, 0, 0],
[0, 1, 0, 0, 0],
[1, 0, 1, 1, 1],
[1, 0, 1, 1, 1],
[1, 0, 1, 1, 1]])
I can't find a one-step solution to do that, but I think we can assign the matrix by blocks.
a[2:5, 0] = 1
a[2:5, 2:5] = 1
Then I can get what I want.
In numpy you can set the elements of a 1d array at given indices to a value:
import numpy as np
b = np.array([0, 0, 0, 0, 0])
indices = [1, 3]
b[indices] = 1
b
array([0, 1, 0, 1, 0])
I'm trying to do this with multiple rows and a list of indices for each row, in the most programmatically elegant and computationally efficient way possible. For example:
b = np.array([[0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0]])
indices = [[1, 3], [0, 1], [0, 3]]
The desired result is
array([[0, 1, 0, 1, 0],
[1, 1, 0, 0, 0],
[1, 0, 0, 1, 0]])
I tried b[indices] and b[:, indices], but they resulted in an error or an undesired result.
From searching, there are a few workarounds, but each tends to need at least one loop in Python.
Solution 1: Run a loop over each row of the 2d array (see the sketch below). The drawback is that the loop runs in Python, so this part won't take advantage of numpy's C-level speed.
Solution 2: Use numpy's put. The drawback is that put works on a flattened version of the input array, so the indices need to be flattened too and offset by the row size and row number, which would seem to need a double for loop in Python.
Solution 3: put_along_axis seems to only be able to set one value per row, so I would need to call it repeatedly, once per value per row.
What would be the most computationally efficient and programmatically elegant solution? Anything where numpy would handle all the operations?
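For reference, a sketch of the workarounds described above (Solution 1 as a plain Python loop, and Solution 2 with the flat indices for np.put computed by broadcasting rather than a double loop; the latter assumes equal-length index lists):
import numpy as np
indices = [[1, 3], [0, 1], [0, 3]]
# Solution 1: loop over the rows in Python (the baseline)
b1 = np.zeros((3, 5), dtype=int)
for row, idx in zip(b1, indices):
    row[idx] = 1
# Solution 2: np.put on the flattened array; row * ncols + col gives the flat index
b2 = np.zeros((3, 5), dtype=int)
flat = (np.arange(b2.shape[0])[:, None] * b2.shape[1] + np.array(indices)).ravel()
np.put(b2, flat, 1)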
In [330]: b = np.zeros((3,5),int)
To set values at a (3,2) array of column indices, the row indices need to have shape (3,1), so the two broadcast together:
In [331]: indices = np.array([[1,3],[0,1],[0,3]])
In [332]: b[np.arange(3)[:,None], indices] = 1
In [333]: b
Out[333]:
array([[0, 1, 0, 1, 0],
[1, 1, 0, 0, 0],
[1, 0, 0, 1, 0]])
put_along_axis does the same thing:
In [335]: b = np.zeros((3,5),int)
In [337]: np.put_along_axis(b, indices,1,axis=1)
In [338]: b
Out[338]:
array([[0, 1, 0, 1, 0],
[1, 1, 0, 0, 0],
[1, 0, 0, 1, 0]])
One solution is to build the index arrays for each dimension and then index with them directly:
import numpy as np
from itertools import chain
b = np.array([[0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0]])
indices = [[1, 3], [0, 1], [0, 3]]
# Row index (axis 0), repeated once per column index in each sub-list
y = np.arange(len(indices)).repeat(np.fromiter(map(len, indices), dtype=np.int_))
# Flatten the list of column indices and convert it to an array
x = np.fromiter(chain.from_iterable(indices), dtype=np.int_)
# Finally set the items
b[y, x] = 1
It works even for indices lists with variable-sized sub-lists, like indices = [[1, 3], [0, 1], [0, 2, 3]]. If your indices list always contains the same number of items in each sub-list, then you can use the following (more efficient) code:
b = np.array([[0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0]])
indices = np.array(indices)
n, m = indices.shape
y = np.arange(n).repeat(m)
x = indices.ravel()
b[y, x] = 1
Simple one-liner based on Jérôme's answer (requires all items of indices to be equal-length):
>>> b[np.arange(np.size(indices)) // len(indices[0]), np.ravel(indices)] = 1
>>> b
array([[0, 1, 0, 1, 0],
[1, 1, 0, 0, 0],
[1, 0, 0, 1, 0]])
I have an array that looks like this:
A = [[1, 0, 0],
[0, 1, 0],
[0, 0, 1]]
and from it, I'd like to create an array that looks like this:
B = [[1, 1, 0, 0, 0, 0],
[1, 1, 0, 0, 0, 0],
[0, 0, 1, 1, 0, 0],
[0, 0, 1, 1, 0, 0],
[0, 0, 0, 0, 1, 1],
[0, 0, 0, 0, 1, 1]]
where every element of A gets repeated into an n-by-n square block (here n = 2).
I'm sure there's a simple way of doing this -- can anybody think of something?
What you're looking for is a block matrix; see the documentation on block matrices. For your specific application, each block would just be a constant (A[i][j]) times a matrix of ones (np.ones((n, n))).
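One way to realize that block construction directly is np.kron, which replaces every A[i][j] with A[i][j] times an n-by-n block of ones (a sketch with n = 2, matching the example):
import numpy as np
A = np.array([[1, 0, 0],
              [0, 1, 0],
              [0, 0, 1]])
n = 2
# Kronecker product: each element of A becomes an n x n constant block
B = np.kron(A, np.ones((n, n), dtype=A.dtype))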
Looks like this does the job, although I'm open to other (faster or more elegant) suggestions!
np.repeat(np.repeat(A, n, axis=0), n, axis=1)
I have a matrix A =
np.matrix([[0, 0, 0, 0, 0],
[0, 0, 0, 1, 1],
[0, 0, 1, 0, 0],
[0, 0, 0, 0, 0],
[0, 0, 1, 1, 1]])
I want to build a matrix B where B[i,j] = 5 if A[i,j] = 1 and (i+1) % 3 == 0, and B[i,j] = 0 otherwise.
B should be:
B =
np.matrix([[0, 0, 0, 0, 0],
[0, 0, 0, 0, 0],
[0, 0, 5, 0, 0],
[0, 0, 0, 0, 0],
[0, 0, 5, 0, 0]])
Is there any way to achieve this without a for loop, just with matrix operations? Thank you.
UPDATED ANSWER:
I have not thought of a way to eliminate the for-loop in the list comprehension for filtering on the remainder condition, but the "heavy" part of the computation exploits numpy's optimizations.
import numpy as np
newdata = 5 * np.array([(i + 1) % 3 == 0 for i in range(data.shape[-1])]) * np.array(data)
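For what it's worth, the remainder mask can also be built without a Python loop, using np.arange (a sketch; like the code above, it applies the condition along the last axis, which is what the desired B shows):
import numpy as np
# assumption: data holds the matrix A from the question
data = np.array([[0, 0, 0, 0, 0],
                 [0, 0, 0, 1, 1],
                 [0, 0, 1, 0, 0],
                 [0, 0, 0, 0, 0],
                 [0, 0, 1, 1, 1]])
mask = (np.arange(data.shape[-1]) + 1) % 3 == 0   # True where (index + 1) % 3 == 0
newdata = 5 * mask * data                          # the mask broadcasts across the rows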
ORIGINAL ANSWER (prior to condition that for-loops cannot be used):
Assuming your matrix is stored as data (as a list of lists or a 2d array), you can use a list comprehension to get what you want:
newdata = [[5 if val == 1 and (idx + 1) % 3 == 0 else 0
for idx, val in enumerate(row)]
for row in data]
How can I construct a sparse matrix from partial diagonal vectors like this?
Let's say my matrix is square with dimension N = 6 and I have the following ragged vector
vec = np.array([[1], [1, 2]], dtype=object)
and I want to put those parts on diagonals
offset = np.array([2,3])
but vec[0] should start at Mat[0,2] and vec[1] should start at Mat[1,4]
I know about scipy.sparse.diags() but I don't think there is a way to specify just part of a diagonal where non-zero elements are present.
This is just an example to illustrate the problem. In reality I deal with very big arrays, and I don't want to waste memory on useless zeros.
Is this the matrix that you want?
In [200]: sparse.dia_matrix(([[0,0,1,0,0,0],[0,0,0,0,1,2]],[2,3]),(6,6)).A
Out[200]:
array([[0, 0, 1, 0, 0, 0],
[0, 0, 0, 0, 1, 0],
[0, 0, 0, 0, 0, 2],
[0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0]])
Yes, the specification includes zeros, which could be annoying in larger cases.
spdiags just wraps dia_matrix, with the option of converting the result to another format. In your example that conversion turns a sparse matrix with 7 stored elements into one with 3.
sparse.diags accepts a ragged list of values, but each list still has to match its diagonal in length, and internally it converts them into the rectangular data array that dia_matrix takes.
S3 = sparse.diags([[1, 0, 0, 0], [0, 1, 2]], [2, 3], (6, 6))
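The same full-length diagonals can be built programmatically from vec by zero-padding each piece out to its diagonal's length (a sketch; the start rows 0 and 1 come from the positions Mat[0,2] and Mat[1,4] given in the question):
import numpy as np
from scipy import sparse
vec = [np.array([1]), np.array([1, 2])]   # the ragged pieces
offsets = [2, 3]
starts = [0, 1]                           # row where each piece begins
N = 6
# a diagonal at offset k of an N x N matrix has length N - k; pad each piece
# with zeros so its length matches before handing it to sparse.diags
diagonals = []
for v, k, r in zip(vec, offsets, starts):
    d = np.zeros(N - k)
    d[r:r + len(v)] = v
    diagonals.append(d)
S = sparse.diags(diagonals, offsets, (N, N))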
So if you really need to be parsimonious about the zeros, you need to go the coo route.
For example:
In [363]: starts = [[0,2],[1,4]]
In [364]: data = np.concatenate(vec)
In [365]: rows=np.concatenate([range(s[0],s[0]+len(v)) for s,v in zip(starts, vec)])
In [366]: cols=np.concatenate([range(s[1],s[1]+len(v)) for s,v in zip(starts, vec)])
In [367]: sparse.coo_matrix((data,(rows,cols)),(6,6)).A
Out[367]:
array([[0, 0, 1, 0, 0, 0],
[0, 0, 0, 0, 1, 0],
[0, 0, 0, 0, 0, 2],
[0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0]])