I am working with 3D MRI images in X, Y, Z format, where Z is the number of slices through the brain. I have JPEGs and NIfTIs, and I want to save both as numpy arrays in X, Y, Z order. The NIfTI files are read in as X, Y, Z, but when I print the first slice, the x axis of the plot has the length of Z and the y axis has the length of Y. How can I read the data as X, Y, Z and still be able to print each slice along the third dimension as a square?
import os
import numpy as np
import nibabel as nib

img = nib.load(os.path.join(data_path, str(list_dir[i]) + ".nii.gz"))
# Get the data from the NIfTI image
data = img.get_fdata()
# Get the shape of the data
print(data.shape)
slices = []
data = np.transpose(data, (0, 1, 2))  # (0, 1, 2) is a no-op: it keeps the (X, Y, Z) order
shape = data.shape
print(shape)
# Loop through each slice along the third (Z) axis
for i in range(shape[2]):
    # Get the current slice
    slice = data[:, :, i]
    slice = np.rot90(slice, axes=(1, 0))
    # Save the slice as a 2D numpy array
    slices.append(np.array(slice))
# Convert the list of slices to a numpy array
print(len(slices))
slices = np.array(slices)
print(slices.shape)
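For reference, a short sketch of the indexing involved (using a dummy volume with hypothetical dimensions, assuming square in-plane slices such as 176x176): printing data[0] gives a (Y, Z) plane, which is why the axes look swapped, while data[:, :, k] gives the square (X, Y) slice; and the stacked slices array, which has the slice index on axis 0, can be moved back to the end with np.moveaxis.
import numpy as np

# Dummy volume standing in for the NIfTI data, in (X, Y, Z) order
data = np.zeros((176, 176, 96))

print(data[0].shape)        # (176, 96)  -> a (Y, Z) plane, not a square axial slice
print(data[:, :, 0].shape)  # (176, 176) -> the square (X, Y) slice

# After stacking, the slice index sits on axis 0; move it back to the end
slices = np.array([data[:, :, k] for k in range(data.shape[2])])
print(slices.shape)                      # (96, 176, 176)
print(np.moveaxis(slices, 0, -1).shape)  # (176, 176, 96), i.e. back to (X, Y, Z)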
I have the poses of humans (X, Y, Z values of joints like left elbow, right knee etc) in a video saved in a 4D numpy array.
Example: the poses are saved in an array of shape (3, 103, 25, 2), meaning:
3 (number of coordinates), 103 (number of frames), 25 (number of joints), 2 (number of persons).
Now I want to change the view angle of this observation, i.e. I want to apply a rotation matrix to all the joint position values.
As of now, I'm:
iterating through the number of persons
converting each person into a 2D array
multiplying the 2D array with a 3x3 rotation matrix
reshaping the rotated 2D array back into a 3D array
appending the 3D arrays
seq = np.random.rand(3, 103, 25, 2)
rot_mat = np.random.rand(3, 3)
rotated_seq = np.zeros(seq.shape)
for i in range(seq.shape[3]):  # iterating through persons
    person = seq[:, :, :, i]
    joint_values = np.reshape(person, (3, -1))
    rotated_joint_values = np.dot(joint_values.T, rot_mat).T
    rotated_person = np.reshape(rotated_joint_values, person.shape)
    rotated_seq[:, :, :, i] = rotated_person
My question is: is there any way to do this without using the for loop?
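For reference, a vectorized sketch (assuming the same seq and rot_mat as above; rotated_seq_vec is just a placeholder name): the loop effectively computes rot_mat.T @ joint_values for each person, and np.einsum can contract the coordinate axis with the rotation matrix for all frames, joints, and persons in one call.
# Contract the coordinate axis (size 3) of seq with rot_mat in one shot
rotated_seq_vec = np.einsum('kc,kfjp->cfjp', rot_mat, seq)

# Sanity check against the loop-based result
print(np.allclose(rotated_seq, rotated_seq_vec))  # True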
I have a 1D numpy array Data of size 36*64. Basically, I have 36 images of size 8x8 stored in a 1D array, each stored in Height(8) x Width(8) format.
For example, the ith image is stored in Data[i*8*8 : (i*8*8 + 8*8)].
Now I want to make a tile of the given 36 images, i.e. a 6x6 grid with 6 images stacked on top of each other in each column.
Basically, I want to transform my 1D numpy array into a 2D array of images in the above-mentioned format.
I would prefer answers that use just NumPy methods.
To split your 1D array into the 36 individual images, use reshape as shown in the following example:
# Creating 36 images, each of shape 8x8
initial_1D = np.random.randn(2304).reshape(36, 8, 8)
The collage can then be formed using PIL; for details, see Making a collage in PIL.
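As a rough illustration of that PIL approach (a sketch only, assuming 8-bit grayscale tiles; the tiles array below is hypothetical placeholder data), the 36 small images can be pasted onto a 48x48 canvas:
from PIL import Image
import numpy as np

# Hypothetical data: 36 grayscale tiles of shape (8, 8) with values in 0..255
tiles = (np.random.rand(36, 8, 8) * 255).astype('uint8')

# Paste each tile into its cell of a 6x6 grid (48x48 pixels in total)
collage = Image.new('L', (6 * 8, 6 * 8))
for idx, tile in enumerate(tiles):
    row, col = divmod(idx, 6)
    collage.paste(Image.fromarray(tile), (col * 8, row * 8))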
If I understand you correctly, you can do it like this:
# make example data
a = np.linspace(0, 36*64-1, 36*64)
print(a[:64])
print(a.shape)
# reshape 1D to 3D array
b = a.reshape(-1, 8, 8)
# look at first "image"
print(b[0])
If I did not understand you correctly, you need to put the -1, 8, 8 in a different order.
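Since the question asks for a NumPy-only solution, here is a sketch of the remaining tiling step (assuming the row-major 1D layout described in the question, and that the 36 tiles should form a 6x6 grid): reshape to a 4D (grid_row, grid_col, 8, 8) array, swap the two middle axes, and collapse back to 2D.
# Hypothetical 1D data laid out as 36 consecutive 8x8 images (row-major)
Data = np.arange(36 * 64, dtype=float)

# Arrange the 36 tiles as a 6x6 grid and collapse into a single 48x48 image
tile_grid = Data.reshape(6, 6, 8, 8)          # (grid_row, grid_col, 8, 8)
collage = tile_grid.transpose(0, 2, 1, 3).reshape(6 * 8, 6 * 8)
print(collage.shape)                           # (48, 48)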
I am analyzing some image datasets using Keras, and I am stuck because I have images with two different dimensions: features has 14637 images of shape (10,10,3) and features2 has shape (10,10,100).
Is there any way that I can merge/concatenate these two datasets together?
If features and features2 contain features of the same batch of images, that is, features[i] refers to the same image as features2[i] for each i, then it makes sense to group the features into a single array using the numpy function concatenate():
newArray = np.concatenate((features, features2), axis=3)
Where 3 is the axis along which the arrays will be concatenated. In this case, you'll end up with a new array having dimension (14637, 10, 10, 103).
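A quick check with dummy arrays of the stated shapes (the zero-filled arrays here are just placeholders for the real data):
import numpy as np

# Placeholder data with the shapes described in the question
features = np.zeros((14637, 10, 10, 3))
features2 = np.zeros((14637, 10, 10, 100))

newArray = np.concatenate((features, features2), axis=3)
print(newArray.shape)  # (14637, 10, 10, 103)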
However, if they refer to completely different batches of images and you would like to merge them along the first axis, so that the 14637 images of features2 are placed after the first 14637 images, then there is no way to end up with a single array, since numpy arrays are structured as matrices, not as lists of objects.
For instance, if you try to execute:
a = np.array([[0, 1, 2]])  # shape = (1, 3)
b = np.array([[0, 1]])     # shape = (1, 2)
c = np.concatenate((a, b), axis=0)
Then, you'll get:
ValueError: all the input array dimensions except for the concatenation axis must match exactly
since you are concatenating along axis = 0 but axis 1's dimensions differ.
If dealing with numpy arrays, you should be able to use the concatenate method and specify the axis along which the data should be merged. Basically: np.concatenate((array_a, array_b), axis=2) for single images of shape (10, 10, channels), or axis=3 once the batch dimension is included.
I think it would be better if you use a class:
class your_class:
    def __init__(self, array_1, array_2):
        self.array_1 = array_1
        self.array_2 = array_2

final_array = []
for x in range(len(your_previous_one_array)):
    final_array.append(your_class(your_previous_one_array[x],
                                  your_previous_two_array[x]))
I am trying to create an empty numpy array and save all the images that I get from my device. The images come in as numpy arrays of shape (240,320,3). Creating an empty array to store these images seems like the correct thing to do. When I try to append, however, I get this error:
ValueError: all the input arrays must have same number of dimensions
Code as follows:
import numpy as np
# will be appending many images of size (240,320,3)
images = np.empty((0,240,320,3),dtype='uint8')
# filler image to append
image = np.ones((240,320,3),dtype='uint8') * 255
images = np.append(images,image,axis=0)
I need to append many images to this array, so after 100 appends, the shape of the images array should be of shape (100,240,320,3) if done correctly.
Better than np.append is:
images = np.empty((100,240,320,3),dtype='uint8')
for i in range(100):
    image = ....
    images[i,...] = image
or
alist = []
for i in range(100):
    image = ....
    alist.append(image)
images = np.array(alist)
# or images = np.stack(alist, axis=0) for more control
np.append is just a cover for np.concatenate, so it makes a new array each time through the loop. By the time you add the 100th image, you have copied the first one 100 times! Another disadvantage of np.append is that you have to adjust the dimensions of image, a frequent source of error. The other frequent error is getting that initial 'empty' array shape wrong.
Your images array has four dimensions, so you must append a four dimensional item to it. To do so, simply add a new axis to image like so:
images = np.append(images,image[np.newaxis, ...], axis=0)
In a sense, when passing an axis, numpy.append is more akin to list.extend than list.append.
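A quick check of the shapes, using the same filler image as in the question:
images = np.empty((0, 240, 320, 3), dtype='uint8')
image = np.ones((240, 320, 3), dtype='uint8') * 255

print(image[np.newaxis, ...].shape)            # (1, 240, 320, 3)
images = np.append(images, image[np.newaxis, ...], axis=0)
print(images.shape)                            # (1, 240, 320, 3)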
I want to convert an image (17x15) to a 2D array with this code:
from PIL import Image
import numpy as np
list = []
im = Image.open('plus1.jpg')
row,col = im.size
print(row,col)
for i in range(row):
    for j in range(col):
        r, g, b = im.getpixel((i, j))
        list.append([r,g,b])
print(list)
print(len(list))
list = np.array(list)
print(list)
list.reshape(17,15)
It converts okay to a 1D array, but when I use reshape to make a 2D array with list.reshape(17,15), I get the error:
ValueError: total size of new array must be unchanged
The size is 17x15, and after conversion the 1D array has 255 elements, so why does the error appear and how can I make it run normally?
Your image is 17x15, so there are 255 pixels. For each pixel there are three color values (r, g, b), so your array list has shape (255, 3). That means it contains 765 elements, and an error is raised when you try to reshape it to (17,15), which does not preserve the number of elements. To obtain an array whose first two dimensions match your image (17,15) and whose third axis contains the RGB values, you should write:
np.reshape(list, (17,15,3) )
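A minimal sketch of the fix, reusing the variable names from the question (note that reshape returns a new array rather than modifying list in place):
rgb = np.array(list)               # shape (255, 3): one RGB triple per pixel
image_2d = rgb.reshape(17, 15, 3)  # reshape returns a new array; keep the result
print(image_2d.shape)              # (17, 15, 3)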