I was given a .mat file containing a 1024*1024*360 array, i.e. a 3D object. I have divided the data into three .mat files A, B and C, each of them 1024*1024*120. I am loading them one by one into a 1024*360 matrix 'mat' and deleting each after use to free memory. The result is just a 2D slice of the 3D object at index 240. Afterwards I try to plot the image. Following is my code:
import scipy.io
import numpy as np
mat = np.zeros((1024,360))
x = scipy.io.loadmat('/home/imaging/Desktop/PRAKRITI/Project/A.mat')
x = x.values()
mat[:,0:120]= x[240,:,:]
del x
y = scipy.io.loadmat('/home/imaging/Desktop/PRAKRITI/Project/B.mat')
y = y.values()
mat[:,120:240]= y[240,:,:]
del y
z = scipy.io.loadmat('/home/imaging/Desktop/PRAKRITI/Project/C.mat')
z = z.values()
mat[:,240:360]= z[240,:,:]
del z
import matplotlib.py as plt
imageplot = plt.imshow(matrix)
I am getting this error:
mat[:,0:120]= x[240,:,:]
TypeError: list indices must be integers, not tuple
Can anyone suggest what I am doing wrong here?
You have to create a numpy array from the values you load from the .mat file.
This is because a plain Python list doesn't accept numpy-style fancy indexing like matrix[x,y,z], only matrix[x][y][z], which is why the assignment fails.
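A quick illustration of the difference, on toy data:

>>> import numpy as np
>>> lst = [[1, 2], [3, 4]]
>>> lst[0, 1]            # raises TypeError: list indices must be integers, not tuple
>>> np.array(lst)[0, 1]
2

Applied to your code: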
import scipy.io
import numpy as np
mat = np.zeros((1024,360))
x = scipy.io.loadmat('/home/imaging/Desktop/PRAKRITI/Project/A.mat')
x = np.array(x.values())
mat[:,0:120]= x[240,:,:]
del x
y = scipy.io.loadmat('/home/imaging/Desktop/PRAKRITI/Project/B.mat')
y = np.array(y.values())
mat[:,120:240]= y[240,:,:]
del y
z = scipy.io.loadmat('/home/imaging/Desktop/PRAKRITI/Project/C.mat')
z = np.array(z.values())
mat[:,240:360]= z[240,:,:]
del z
import matplotlib.pyplot as plt
imageplot = plt.imshow(mat)
Alternatively, you can use x[240][:][:] instead of x[240,:,:].
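For what it's worth, scipy.io.loadmat returns a dict keyed by the MATLAB variable names (plus metadata entries such as __header__), so x.values() may contain more than just the data array. A more robust sketch, assuming the variable inside A.mat is also named 'A' (that name is a guess; check the dict's keys):

d = scipy.io.loadmat('/home/imaging/Desktop/PRAKRITI/Project/A.mat')
print d.keys()        # find the actual variable name
x = d['A']            # 'A' is an assumed name
print x.shape         # should be (1024, 1024, 120)
mat[:,0:120] = x[240,:,:]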
Update: because the following code works fine, I guess the problem lies in the dimensions of the loaded matrices, i.e. in what x.values() returns. So please check that first, with print x.shape (note that shape is an attribute, not a method).
import numpy as np
mat = np.zeros((1024,360))
x = np.zeros((1024,1024,120))
mat[:,0:120] = x[240,:,:]
print mat
[[ 0. 0. 0. ..., 0. 0. 0.]
[ 0. 0. 0. ..., 0. 0. 0.]
[ 0. 0. 0. ..., 0. 0. 0.]
...,
[ 0. 0. 0. ..., 0. 0. 0.]
[ 0. 0. 0. ..., 0. 0. 0.]
[ 0. 0. 0. ..., 0. 0. 0.]]
I'm trying to create an identity matrix, but I get TypeError: only integer scalar arrays can be converted to a scalar index, and I don't know how to fix it.
Z = np.array([
[0,2,0,4,4],
[0,0,3,0,0],
[0,0,0,1,0],
[0,2,0,0,0],
[0,0,0,0,0]
])
I = np.eye(Z)
I = np.identity(Z)
Both np.eye and np.identity raise the same error.
The function np.identity() takes an integer argument, not a numpy array. So if you want to create an identity matrix of size nxn, you need to pass the length of Z:
import numpy as np
Z = np.array([
[0,2,0,4,4],
[0,0,3,0,0],
[0,0,0,1,0],
[0,2,0,0,0],
[0,0,0,0,0]
])
I = np.identity(len(Z))
print(I)
Output:
[[1. 0. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 0. 1. 0. 0.]
[0. 0. 0. 1. 0.]
[0. 0. 0. 0. 1.]]
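Equivalently, np.eye also takes the size as an integer, and Z.shape[0] makes the intent explicit:

I = np.eye(Z.shape[0])  # same 5x5 identity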
The pure numpy solution is:
import numpy as np
data = np.random.rand(5,5) #data is of shape (5,5) with floats
masking_prob = 0.5 #probability of an element to get masked
indices = np.random.choice(np.prod(data.shape), replace=False, size=int(np.prod(data.shape)*masking_prob))
data[np.unravel_index(indices, data.shape)] = 0. # set the chosen elements to zero
How can I achieve this in TensorFlow?
Use tf.nn.dropout:
import tensorflow as tf
import numpy as np
prob = 0.5  # fraction of elements to drop
data = np.random.rand(5,5)
array([[0.38658212, 0.6896139 , 0.92139911, 0.45646086, 0.23185075],
[0.03461688, 0.22073962, 0.21254995, 0.20046708, 0.43419155],
[0.49012903, 0.45495968, 0.83753471, 0.58815975, 0.90212244],
[0.04071416, 0.44375078, 0.55758641, 0.31893155, 0.67403431],
[0.52348073, 0.69354454, 0.2808658 , 0.6628248 , 0.82305081]])
tf.nn.dropout(data, rate=prob).numpy()*(1-prob)
array([[0.38658212, 0.6896139 , 0.92139911, 0. , 0. ],
[0.03461688, 0. , 0. , 0.20046708, 0. ],
[0.49012903, 0.45495968, 0. , 0. , 0. ],
[0. , 0.44375078, 0.55758641, 0.31893155, 0. ],
[0.52348073, 0.69354454, 0.2808658 , 0.6628248 , 0. ]])
tf.nn.dropout scales the surviving values by 1/(1-rate), so I counter this by multiplying by (1-prob).
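One caveat: tf.nn.dropout masks each element independently with probability rate, so the fraction of zeros only matches masking_prob in expectation, unlike the exact-count numpy version. A minimal sketch (names are mine) checking that the (1-prob) correction restores the surviving values:

import tensorflow as tf
import numpy as np

prob = 0.5
data = np.random.rand(5, 5)
dropped = tf.nn.dropout(data, rate=prob).numpy() * (1 - prob)
kept = dropped != 0
# surviving entries should equal the original values exactly (up to float error)
print(np.allclose(dropped[kept], data[kept]))  # True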
For future users looking for a TF 2.x compatible answer, this is what I came up with:
import tensorflow as tf
import numpy as np

input_tensor = np.random.rand(5,5).astype(np.float32)

def my_numpy_func(x):
    # x will be a numpy array with the contents of the input to the
    # tf.function
    p = 0.5
    indices = np.random.choice(np.prod(x.shape), replace=False, size=int(np.prod(x.shape)*p))
    x[np.unravel_index(indices, x.shape)] = 0.
    return x

@tf.function(input_signature=[tf.TensorSpec((None, None), tf.float32)])
def tf_function(input):
    y = tf.numpy_function(my_numpy_func, [input], tf.float32)
    return y

tf_function(tf.constant(input_tensor))
You can also use this code in the context of a Dataset by using the map() operation, for instance as sketched below.
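A minimal sketch, reusing my_numpy_func and tf_function from above (the batch here is made up):

import tensorflow as tf
import numpy as np

# assumes my_numpy_func and tf_function are defined as above
batch = np.random.rand(4, 5, 5).astype(np.float32)  # four made-up 5x5 arrays
ds = tf.data.Dataset.from_tensor_slices(batch).map(tf_function)
for masked in ds:
    print(masked.numpy())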
First of all, I work with large byte arrays (>= 400x400x1000 bytes).
I wrote a small function which can insert a multidimensional array (or a fraction of it) into another one by specifying an offset. This works if the embedded array is smaller than the embedding array (case A); otherwise the embedded array is truncated (case B).
Case A) Inserting a 3x3 array into a 5x5 matrix with offset (1,1) looks like this:
[[ 0. 0. 0. 0. 0.]
[ 0. 1. 1. 1. 0.]
[ 0. 1. 1. 1. 0.]
[ 0. 1. 1. 1. 0.]
[ 0. 0. 0. 0. 0.]]
Case B) If the offsets exceed the dimensions of the embedding matrix, the smaller array is truncated. E.g. a (-1,-1) offset results in this:
[[ 1. 1. 0. 0. 0.]
[ 1. 1. 0. 0. 0.]
[ 0. 0. 0. 0. 0.]
[ 0. 0. 0. 0. 0.]
[ 0. 0. 0. 0. 0.]]
Case C) Now, instead of truncating the embedded array, I want to extend the embedding array (with zeroes) whenever the embedded array is bigger than the embedding array or the offsets push it out of bounds (e.g. case B). Is there a smart way with numpy or scipy to solve this?
[[ 1. 1. 1. 0. 0. 0.]
[ 1. 1. 1. 0. 0. 0.]
[ 1. 1. 1. 0. 0. 0.]
[ 0. 0. 0. 0. 0. 0.]
[ 0. 0. 0. 0. 0. 0.]
[ 0. 0. 0. 0. 0. 0.]]
I actually work with 3D arrays, but for simplicity I wrote the example for 2D arrays. Current source:
import numpy as np
import nibabel as nib

def addAtPos(mat_bigger, mat_smaller, xyz_coor):
    size_sm_x, size_sm_y = np.shape(mat_smaller)
    size_gr_x, size_gr_y = np.shape(mat_bigger)
    start_gr_x, start_gr_y = xyz_coor
    start_sm_x, start_sm_y = 0, 0
    end_x, end_y = (start_gr_x + size_sm_x), (start_gr_y + size_sm_y)
    print(size_sm_x, size_sm_y)
    print(size_gr_x, size_gr_y)
    print(end_x, end_y)
    if start_gr_x < 0:
        start_sm_x = -start_gr_x
        start_gr_x = 0
    if start_gr_y < 0:
        start_sm_y = -start_gr_y
        start_gr_y = 0
    if end_x > size_gr_x:
        size_sm_x = size_sm_x - (end_x - size_gr_x)
        end_x = size_gr_x
    if end_y > size_gr_y:
        size_sm_y = size_sm_y - (end_y - size_gr_y)
        end_y = size_gr_y
    # copy all or a chunk (if offset is small/big enough) of the smaller matrix into the bigger matrix
    mat_bigger[start_gr_x:end_x, start_gr_y:end_y] = mat_smaller[start_sm_x:size_sm_x, start_sm_y:size_sm_y]
    return mat_bigger

a_gr = np.zeros([5,5])
a_sm = np.ones([3,3])
a_res = addAtPos(a_gr, a_sm, [-2,1])
#print(a_gr)
print(a_res)
Actually there is an easier way to do it.
For your first example of a 3x3 array embedded into a 5x5 one, you can do it with something like:
A = np.array([[1,1,1], [1,1,1], [1,1,1]])
(N, M) = A.shape
B = np.zeros(shape=(N + 2, M + 2))
B[1:-1, 1:-1] = A
By playing with the slicing you can select a subset of A and insert it anywhere within a contiguous region of B; a sketch covering case C follows below.
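For case C itself, a minimal sketch (the function name and layout are my own, not from scipy): allocate a zero array covering the bounding box of both arrays, then place each one with plain slice assignment. Written with generic slices, it works unchanged for 3D arrays:

import numpy as np

def embed_with_padding(big, small, offset):
    """Place `small` into `big` at `offset`, growing the result
    with zeros when `small` sticks out of `big`'s bounds."""
    off = np.asarray(offset)
    # bounding box covering both arrays; offsets may be negative
    start = np.minimum(off, 0)
    stop = np.maximum(off + small.shape, big.shape)
    out = np.zeros(stop - start, dtype=big.dtype)
    # shift both arrays into the coordinates of the new array
    big_pos = -start
    small_pos = off - start
    out[tuple(slice(p, p + s) for p, s in zip(big_pos, big.shape))] = big
    out[tuple(slice(p, p + s) for p, s in zip(small_pos, small.shape))] = small
    return out

a_gr = np.zeros((5, 5))
a_sm = np.ones((3, 3))
print(embed_with_padding(a_gr, a_sm, (-1, -1)))  # 6x6, ones in the top-left 3x3

As in addAtPos, the smaller array overwrites the bigger one where they overlap.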
What is the best way to fill in the lower triangle of a numpy array with zeros in place so that I don't have to do the following:
a=np.random.random((5,5))
a = np.triu(a)
since np.triu returns a copy, not a view. Preferably this would also avoid fancy indexing, since I am working with large arrays.
Digging into the internals of triu you'll find that it just multiplies the input by the output of tri.
So you can just multiply the array in-place by the output of tri:
>>> a = np.random.random((5, 5))
>>> a *= np.tri(*a.shape)
>>> a
array([[ 0.46026582, 0. , 0. , 0. , 0. ],
[ 0.76234296, 0.5298908 , 0. , 0. , 0. ],
[ 0.08797149, 0.14881991, 0.9302515 , 0. , 0. ],
[ 0.54794779, 0.36896506, 0.92901552, 0.73747726, 0. ],
[ 0.62917827, 0.61674542, 0.44999905, 0.80970863, 0.41860336]])
Like triu, this still creates a second array (the output of tri), but at least it performs the operation itself in-place. The splat is a bit of a shortcut; consider basing your function on the full version of triu for something robust. But note that you can still specify a diagonal:
>>> a = np.random.random((5, 5))
>>> a *= np.tri(*a.shape, k=2)
>>> a
array([[ 0.25473126, 0.70156073, 0.0973933 , 0. , 0. ],
[ 0.32859487, 0.58188318, 0.95288351, 0.85735005, 0. ],
[ 0.52591784, 0.75030515, 0.82458369, 0.55184033, 0.01341398],
[ 0.90862183, 0.33983192, 0.46321589, 0.21080121, 0.31641934],
[ 0.32322392, 0.25091433, 0.03980317, 0.29448128, 0.92288577]])
I now see that the question title and body describe opposite behaviors. Just in case, here's how you can fill the lower triangle with zeros. This requires you to specify the -1 diagonal:
>>> a = np.random.random((5, 5))
>>> a *= 1 - np.tri(*a.shape, k=-1)
>>> a
array([[0.6357091 , 0.33589809, 0.744803 , 0.55254798, 0.38021111],
[0. , 0.87316263, 0.98047459, 0.00881754, 0.44115527],
[0. , 0. , 0.51317289, 0.16630385, 0.1470729 ],
[0. , 0. , 0. , 0.9239731 , 0.11928557],
[0. , 0. , 0. , 0. , 0.1840326 ]])
If speed and memory use are still a limitation and Cython is available, a short Cython function will do what you want.
Here's a working version designed for a C-contiguous array with double precision values.
cimport cython

@cython.boundscheck(False)
@cython.wraparound(False)
cpdef make_lower_triangular(double[:,:] A, int k):
    """ Set all the entries of array A that lie above
    diagonal k to 0. """
    cdef int i, j
    for i in range(min(A.shape[0], A.shape[0] - k)):
        for j in range(max(0, i+k+1), A.shape[1]):
            A[i,j] = 0.
This should be significantly faster than any version that involves multiplying by a large temporary array.
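A quick way to try it is pyximport, which compiles .pyx modules on import (the file name lower_tri.pyx is hypothetical):

import numpy as np
import pyximport
pyximport.install()  # build .pyx modules transparently on import

# assumes the Cython code above is saved as lower_tri.pyx
from lower_tri import make_lower_triangular

a = np.random.random((5, 5))
make_lower_triangular(a, 0)  # zeros everything strictly above the main diagonal, in place
print(a)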
import numpy as np

n = 3
A = np.zeros((n,n))
for p in range(n):
    A[0,p] = p+1
    if p > 0:
        A[1,p] = p+3
    if p > 1:
        A[2,p] = p+4
This creates an upper triangular matrix whose entries count up from 1.
I'm trying to use numpy with numba, but I'm getting weird results when accessing or setting values in a numpy float array using a float index converted to an int.
Consider this basic function:
import numba
import numpy as np

@numba.jit("void(f8[:,::1],f8[:,::1])")
def test(table, index):
    x, y = int(index[0,0]), int(index[1,0])
    table[y,x] = 1.0
    print index[0,0], index[1,0], x, y
    print table
    print table[y,x]

table = np.zeros((5,5), dtype=np.float32)
index = np.random.ranf((2,2))*5
test(table, index)
results:
index[0,0] = 1.34129550525 index[1,0] = 0.0656177324359 x = 1 y = 0
table[0,1] = 1.0
table [[ 0. 0. 1.875 0. 0. ]
[ 0. 0. 0. 0. 0. ]
[ 0. 0. 0. 0. 0. ]
[ 0. 0. 0. 0. 0. ]
[ 0. 0. 0. 0. 0. ]]
Why do I get a 1.875 in my table and not 1.0? This is a basic example, but I'm working with big arrays and it gives me a lot of errors. I know I can convert index to np.int32 and change @numba.jit("void(f8[:,::1],f8[:,::1])") to @numba.jit("void(f8[:,::1],i4[:,::1])"), and that works fine, but I would like to understand why this is not working.
Is it a problem when translating the types from Python to the compiled code?
Thanks for your help.
In [198]: np.float64(1.0).view((np.float32,2))
Out[198]: array([ 0. , 1.875], dtype=float32)
So when
table[y,x] = 1.0
writes a np.float64(1.0) into table, table views the data as np.float32 and interprets it as a 0 and a 1.875.
Notice that the 0 shows up at index location [0,1], and 1.875 shows up at index location [0,2], whereas the assignment occurred at [y,x] = [0,1].
You could fix the dtype mismatch by changing

@numba.jit("void(f8[:,::1],f8[:,::1])")

to

@numba.jit("void(f4[:,::1],f8[:,::1])")
These are the 8 bytes in np.float64(1.0):
In [201]: np.float64(1.0).tostring()
Out[201]: '\x00\x00\x00\x00\x00\x00\xf0?'
And when the 4 bytes '\x00\x00\xf0?' are interpreted as a np.float32 you get 1.875:
In [205]: np.fromstring('\x00\x00\xf0?', dtype='float32')
Out[205]: array([ 1.875], dtype=float32)
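As a side note, np.fromstring is deprecated on current numpy; np.frombuffer gives the same result:

>>> np.frombuffer(b'\x00\x00\xf0?', dtype='float32')
array([1.875], dtype=float32)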