I'm looking to express in pure Python what the np.kron function does.
Let's say I have these lists:
v1 = [1, 0, 0, 1]
v2 = [1, 0, 0, 1]
I would like to define a function that creates a list of lists by multiplying all of v1 by each element of v2. These two lists would therefore produce 4 lists:
[[1 * a for a in v1], [0 * a for a in v1], [0 * a for a in v1], [1 * a for a in v1]]
Currently, I can get the right lists from a list comprehension:
i = [[a*b for a in v1] for b in v2]
>>[[1, 0, 0, 1], [0, 0, 0, 0], [0, 0, 0, 0], [1, 0, 0, 1]]
Those lists are correct, but when I convert them to an np.array and reshape it, the 1s end up in the corners rather than down the diagonal:
print(np.array(i).reshape(4,4))
[[1 0 0 1]
[0 0 0 0]
[0 0 0 0]
[1 0 0 1]]
If np.kron is passed v1 and v2 after converting them to 2x2 numpy arrays, it gives:
i2 = np.kron((np.array(v1).reshape(2,2)),(np.array(v2).reshape(2,2)))
[[1 0 0 0]
[0 1 0 0]
[0 0 1 0]
[0 0 0 1]]
which is a beautiful 4 x 4 identity matrix; that's what I'm looking to express in pure Python, rather than using the np.kron function.
While the kron solution is by far the simplest (and you can always dump the result into a list with .tolist()), let's look at a pure Python implementation.
There are two parts here: how to implement kron, and how to reshape a list. One way to do it is to make a grid that tells you which element to get from where at each location.
So the index into v2 would look like:
0 1 0 1
2 3 2 3
0 1 0 1
2 3 2 3
The index into v1 is something like
0 0 1 1
0 0 1 1
2 2 3 3
2 2 3 3
You can convert these into expressions in terms of the row r and column c. The index into v2 looks something like
i2 = 2 * (r % 2) + (c % 2)
For v1, you can write
i1 = 2 * (r // 2) + (c // 2)
Of course, it's the constant 2 that would change if you shaped the inputs or outputs differently.
Now you can just write a nested comprehension:
output = [[v1[2 * (r // 2) + (c // 2)] * v2[2 * (r % 2) + (c % 2)] for c in range(4)] for r in range(4)]
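Wrapped up as a reusable function, here is a minimal pure-Python sketch (kron_flat is a name introduced just for illustration), assuming both inputs are flat lists representing square n x n matrices, with n = 2 as in the question; the indexing mirrors the formulas above, so the result matches np.kron applied to the reshaped inputs:

def kron_flat(v1, v2, n=2):
    # Kronecker product of two flat lists, each read as an n x n matrix.
    # Returns an (n*n) x (n*n) list of lists.
    size = n * n
    return [[v1[n * (r // n) + (c // n)] * v2[n * (r % n) + (c % n)]
             for c in range(size)]
            for r in range(size)]

print(kron_flat([1, 0, 0, 1], [1, 0, 0, 1]))
# [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]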
I am trying to create a function which takes two inputs: a matrix (n*m) and K, where K is an integer value. The distance between the cells A[3][2] and A[1][4] is |1-3| + |4-2| = 4. The expected output from the function is the count of cell pairs whose distance is greater than K.
A cell here is any entry in the given matrix A. For example, A[0][0] is a cell, and its entry value in the matrix is 1.
I have created a function like this:
A = [[1, 0, 0],
     [0, 0, 0],
     [0, 0, 1],
     [0, 1, 0]]
def findw(K, matrix):
    m_c = matrix.copy()
    result = 0
    for i, j in zip(range(len(matrix)), range(len(m_c))):
        for k, l in zip(range(len(matrix[i])), range(len(m_c[j]))):
            D = abs(i - l) + abs(j - k)
            print(i, k)
            print(j, l)
            print(D)
            if D > K:
                result += 1
    return result
findw(1, A)
The output I got from the above function for the given matrix A with K = 1 is 9, but I am expecting 3. From the output I also realized that my function always uses the same index pair for both matrices, for example (0, 0) or (1, 0), etc. See the print output below.
findw(1, A)
0 0
0 0
0
0 1
0 1
2
0 2
0 2
4
1 0
1 0
2
1 1
1 1
0
1 2
1 2
2
2 0
2 0
4
2 1
2 1
2
2 2
2 2
0
3 0
3 0
6
3 1
3 1
4
3 2
3 2
2
Out[120]: 9
It looks like my function never iterates over cases where the indices into the two matrices differ, for example matrix[0][0] and m_c[0][1].
How can I resolve this issue?
Working under the assumption that it is only the positions which have the value 1 that you care about, you could first enumerate those indices and then loop over the pairs of such things. itertools is a natural tool to use here:
from itertools import product, combinations
def D(p, q):
    i, j = p
    k, l = q
    return abs(i - k) + abs(j - l)

def findw(k, matrix):
    m = len(matrix)
    n = len(matrix[0])
    result = 0
    indices = [(i, j) for i, j in product(range(m), range(n)) if matrix[i][j] == 1]
    for p, q in combinations(indices, 2):
        d = D(p, q)
        if d > k:
            print(p, q, d)
            result += 1
    return result
# test:
A = [[1, 0, 0],
     [0, 0, 0],
     [0, 0, 1],
     [0, 1, 0]]
print(findw(1, A))
Output:
(0, 0) (2, 2) 4
(0, 0) (3, 1) 4
(2, 2) (3, 1) 2
3
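As an aside on the original bug: zip walks the two ranges in lockstep, so the loop only ever sees matching index pairs, while itertools.product visits every combination. A minimal illustration of the difference:

from itertools import product

# zip pairs positions one-to-one, so only matching indices appear
print(list(zip(range(3), range(3))))      # [(0, 0), (1, 1), (2, 2)]

# product visits every combination of indices
print(list(product(range(3), range(3))))  # 9 pairs: (0, 0), (0, 1), ..., (2, 2)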
I would like to find a way to mimic MATLAB's ndgrid in Python (which is a very different thing from numpy's ndarray!).
So if I have 3 1D arrays, say i = 0:10, j = 0:11, k = 0:12, I would like to create 3 3D arrays,
I, J and K, all of size (11, 12, 13), with their values given by:
I(x,:,:) = i(x), J(:,x,:) = j(x) and K(:,:,x) = k(x)
In MATLAB this is simply:
[I, J, K] = ndgrid(i,j,k)
Is there something similar in Python, without resorting to loops? I can't seem to find it.
numpy.meshgrid does what you want (note that by default it uses 'xy' indexing, which swaps the first two axes; pass indexing='ij' for the exact ndgrid ordering described above):
import numpy as np
I,J,K = np.meshgrid(range(2), range(3), range(4))
In [17]: print(f'I={I}')
I=[[[0 0 0 0]
[1 1 1 1]]
[[0 0 0 0]
[1 1 1 1]]
[[0 0 0 0]
[1 1 1 1]]]
In [19]: print(f'J={J}')
J=[[[0 0 0 0]
[0 0 0 0]]
[[1 1 1 1]
[1 1 1 1]]
[[2 2 2 2]
[2 2 2 2]]]
In [20]: print(f'K={K}')
K=[[[0 1 2 3]
[0 1 2 3]]
[[0 1 2 3]
[0 1 2 3]]
[[0 1 2 3]
[0 1 2 3]]]
or, equivalently, the slightly more elegant mgrid, which uses 'ij' ordering:
I, J, K = np.mgrid[0:2,0:3,0:4]
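For completeness, a short sketch (using the asker's sizes rather than the small example above) of the meshgrid call that reproduces the requested ndgrid layout exactly, so that I[x, :, :] == i[x], J[:, x, :] == j[x] and K[:, :, x] == k[x]:

import numpy as np

i, j, k = np.arange(11), np.arange(12), np.arange(13)

# indexing='ij' keeps the axes in the order the inputs are given,
# which matches MATLAB's ndgrid
I, J, K = np.meshgrid(i, j, k, indexing='ij')
print(I.shape)  # (11, 12, 13)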
I am just starting off with numpy and am trying to create a function that takes in an array (x), converts it into an np.array, and returns a numpy array with 0,0,0,0 added after each element.
It should look like so:
input array: [4,5,6]
output: [4,0,0,0,0,5,0,0,0,0,6,0,0,0,0]
I have tried the following:
import numpy as np

x = np.asarray([4,5,6])
y = np.array([])
for index, value in enumerate(x):
    y = np.insert(x, index+1, [0,0,0,0])
    print(y)
which returns:
[4 0 0 0 0 5 6]
[4 5 0 0 0 0 6]
[4 5 6 0 0 0 0]
So basically I need to combine the output into one single numpy array rather than three separate arrays.
Would anybody know how to solve this?
Many thanks!
Use the numpy zeros function!
import numpy as np
inputArray = [4,5,6]
newArray = np.zeros(5*len(inputArray),dtype=int)
newArray[::5] = inputArray
In effect, you 'force' the values at indices 0, 5 and 10 to become 4, 5 and 6.
so _____[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
becomes [4 0 0 0 0 5 0 0 0 0 6 0 0 0 0]
>>> newArray
array([4, 0, 0, 0, 0, 5, 0, 0, 0, 0, 6, 0, 0, 0, 0])
I haven't used numpy to solve this problem, but this code seems to return your required output:
a = [4,5,6]
b = [0,0,0,0]
c = []
for x in a:
    c = c + [x] + b
print(c)
I hope this helps!
import numpy as np

big_array = np.array((
    [0,1,0,0,1,0,0,1],
    [0,1,0,0,0,0,0,0],
    [0,1,0,0,1,0,0,0],
    [0,0,0,0,1,0,0,0],
    [1,0,0,0,1,0,0,0]))
print(big_array)
[[0 1 0 0 1 0 0 1]
[0 1 0 0 0 0 0 0]
[0 1 0 0 1 0 0 0]
[0 0 0 0 1 0 0 0]
[1 0 0 0 1 0 0 0]]
Is there a way to iterate over this numpy array and for each 2x2 cluster of 0s, set all values within that cluster = 5? This is what the output would look like.
[[0 1 5 5 1 5 5 1]
[0 1 5 5 0 5 5 0]
[0 1 5 5 1 5 5 0]
[0 0 5 5 1 5 5 0]
[1 0 5 5 1 5 5 0]]
My thought is to use advanced indexing to set each 2x2 block to 5, but I think it would be really slow to simply iterate like this:
1) check if array[x][y] is 0
2) check if adjacent array elements are 0
3) if all elements are 0, set all those values to 5.
big_array = [1, 7, 0, 0, 3]
i = 0
p = 0
while i <= len(big_array) - 1 and p <= len(big_array) - 2:
    if big_array[i] == big_array[p + 1]:
        big_array[i] = 5
        big_array[p + 1] = 5
        print(big_array)
    i = i + 1
    p = p + 1

Output:
[1, 7, 5, 5, 3]
This is just an example, not complete, correct code.
Here's a solution by viewing the array as blocks.
First you need to define this function rolling_window from here https://gist.github.com/seberg/3866040/revisions
Then break the array big, your starting array, into 2x2 blocks using this function.
Also generate an array which has indices of every element in big and break it similarly into 2x2 blocks.
Then generate a boolean mask where the 2x2 blocks of big are all zero, and use the index array to get those elements.
blks = rolling_window(big,window=(2,2)) # 2x2 blocks of original array
inds = np.indices(big.shape).transpose(1,2,0) # array of indices into big
blkinds = rolling_window(inds,window=(2,2,0)).transpose(0,1,4,3,2) # 2x2 blocks of indices into big
mask = blks == np.zeros((2,2)) # generate a mask of every 2x2 block which is all zero
mask = mask.reshape(*mask.shape[:-2],-1).all(-1) # still generating the mask
# now blks[mask] is every block which is zero..
# but you actually want the original indices in the array 'big' instead
inds = blkinds[mask].reshape(-1,2).T # indices into big where elements need replacing
big[inds[0],inds[1]] = 5 #reassign
You need to test this: I did not. But the idea is to break the array into blocks, build a matching array of indices broken into the same blocks, develop a boolean condition on the blocks, use it to pick out the indices, and then reassign.
An alternative would be to iterate through blkinds as defined here, test the 2x2 block obtained from big at each element, and reassign if necessary.
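On newer numpy (1.20+), numpy.lib.stride_tricks.sliding_window_view can play the role of rolling_window. A minimal sketch of the same block-view idea, under the assumption that overlapping all-zero blocks should all be filled:

import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

big = np.array((
    [0,1,0,0,1,0,0,1],
    [0,1,0,0,0,0,0,0],
    [0,1,0,0,1,0,0,0],
    [0,0,0,0,1,0,0,0],
    [1,0,0,0,1,0,0,0]))

# shape (rows-1, cols-1, 2, 2): every 2x2 window, indexed by its top-left corner
windows = sliding_window_view(big, (2, 2))

# top-left corners of windows that are entirely zero
rows, cols = np.nonzero((windows == 0).all(axis=(-2, -1)))

# the window view is read-only, so write back through slices of the original array
for r, c in zip(rows, cols):
    big[r:r+2, c:c+2] = 5

print(big)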
This is my attempt to help you solve your problem. My solution may be subject to fair criticism.
import numpy as np
from itertools import product
m = np.array((
    [0,1,0,0,1,0,0,1],
    [0,1,0,0,0,0,0,0],
    [0,1,0,0,1,0,0,0],
    [0,0,0,0,1,0,0,0],
    [1,0,0,0,1,0,0,0]))

h = 2
w = 2
rr, cc = tuple(d + 1 - q for d, q in zip(m.shape, (h, w)))
slices = [(slice(r, r + h), slice(c, c + w))
          for r, c in product(range(rr), range(cc))
          if not m[r:r + h, c:c + w].any()]
for s in slices:
    m[s] = 5
print(m)
[[0 1 5 5 1 5 5 1]
[0 1 5 5 0 5 5 5]
[0 1 5 5 1 5 5 5]
[0 5 5 5 1 5 5 5]
[1 5 5 5 1 5 5 5]]
I have a row vector A, A = [a1 a2 a3 ..... an] and I would like to create a diagonal matrix, B = diag(a1, a2, a3, ....., an) with the elements of this row vector. How can this be done in Python?
UPDATE
This is the code to illustrate the problem:
import numpy as np
a = np.matrix([1,2,3,4])
d = np.diag(a)
print (d)
the output of this code is [1], but my desired output is:
[[1 0 0 0]
[0 2 0 0]
[0 0 3 0]
[0 0 0 4]]
You can use the diag method:
import numpy as np
a = np.array([1,2,3,4])
d = np.diag(a)
# or simpler: d = np.diag([1,2,3,4])
print(d)
Results in:
[[1 0 0 0]
[0 2 0 0]
[0 0 3 0]
[0 0 0 4]]
If you have a row vector, you can do this:
a = np.array([[1, 2, 3, 4]])
d = np.diag(a[0])
Results in:
[[1 0 0 0]
[0 2 0 0]
[0 0 3 0]
[0 0 0 4]]
For the given matrix in the question:
import numpy as np
a = np.matrix([1,2,3,4])
d = np.diag(a.A1)
print (d)
Result is again:
[[1 0 0 0]
[0 2 0 0]
[0 0 3 0]
[0 0 0 4]]
I suppose you could also use diagflat:
import numpy as np
a = np.matrix([1,2,3,4])
d = np.diagflat(a)
print(d)
Which like the diag method results in
[[1 0 0 0]
[0 2 0 0]
[0 0 3 0]
[0 0 0 4]]
but without the need for flattening with .A1.
Another solution could be:
import numpy as np
a = np.array([1,2,3,4])
d = a * np.identity(len(a))
As for the performance of the various answers here, with timeit over 100000 repetitions I get the following (a sketch of how such a comparison can be run follows the list):
np.array and np.diag (Marcin's answer): 2.18E-02 s
np.array and np.identity (this answer): 6.12E-01 s
np.matrix and np.diagflat (Bokee's answer): 1.00E-00 s
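A sketch of such a comparison with timeit; the exact statements timed above aren't shown, so this is an assumed setup, and absolute figures depend on the machine:

import timeit

setup = 'import numpy as np'

# np.array + np.diag
print(timeit.timeit('np.diag(np.array([1, 2, 3, 4]))', setup=setup, number=100000))

# np.array + np.identity
print(timeit.timeit('np.array([1, 2, 3, 4]) * np.identity(4)', setup=setup, number=100000))

# np.matrix + np.diagflat
print(timeit.timeit('np.diagflat(np.matrix([1, 2, 3, 4]))', setup=setup, number=100000))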
Assuming you are working in numpy based on your tags, this will do it:
import numpy

def make_diag(A):
    my_diag = numpy.zeros((len(A), len(A)))
    for i, a in enumerate(A):
        my_diag[i, i] = a
    return my_diag
enumerate(LIST) creates an iterator over the list that yields tuples like:
(0, 1st element),
(1, 2nd element),
...
(N-1, Nth element)
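A quick usage check of the make_diag sketch above (note that numpy.zeros defaults to a float array, so the values print with a trailing dot; pass dtype=int if you want integer output):

print(make_diag([1, 2, 3, 4]))
# [[1. 0. 0. 0.]
#  [0. 2. 0. 0.]
#  [0. 0. 3. 0.]
#  [0. 0. 0. 4.]]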