I am trying to train a neural network for image classification, and I am having trouble getting my data into a proper numpy array format. To feed into the network, my array must have shape (9068,184,184,1). The problem is that if I check the shape of the array, it just returns (9068,). If I check the shape of an individual element in the array, it returns (184,184,1). How can I make the whole array four-dimensional, (9068,184,184,1), so my neural network can take it as input?
Below is my code. I have a (9068,2) dataframe with the file names. I am grabbing each file name, reading the image in as pixel data, and storing the resulting array back into the dataframe.
import os
import numpy as np
from PIL import Image

path = '/home/vivek/Downloads/kaggle_ndsb2-master/data_segmenter_trainset/'

# Sanity check: each entry in the pairing table should be a file name (str)
for ii in pairing_table['image']:
    new_path = os.path.join(path, ii)
    img = Image.open(new_path)
    print(type(ii))

# Replace each image file name with its pixel data as a (184, 184, 1) array
for ii in range(len(image_table['image'])):
    new_path = os.path.join(path, image_table['image'][ii])
    img = Image.open(new_path)
    img2 = np.array(img.getdata()).reshape(184, 184, -1)
    image_table['image'][ii] = img2
    img.close()

# Do the same for the masks
for ii in range(len(image_table['mask'])):
    new_path = os.path.join(path, image_table['mask'][ii])
    img = Image.open(new_path)
    img2 = np.array(img.getdata()).reshape(184, 184, -1)
    image_table['mask'][ii] = img2
    img.close()
print(image_table['image'][0].shape)  # returns (184, 184, 1)
print(image_table['image'].shape)     # returns (9068,), should be (9068, 184, 184, 1)
print(image_table['mask'][0].shape)   # returns (184, 184, 1)
print(image_table['mask'].shape)      # returns (9068,), should be (9068, 184, 184, 1)
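For what it's worth, a pandas column always reports shape (9068,), because each cell holds the whole image array as a single Python object. A minimal sketch of one way to build the 4-D input array, assuming every element of image_table['image'] already has shape (184, 184, 1):

import numpy as np

# Stack the 9068 per-image arrays along a new first axis
X = np.stack(list(image_table['image']))  # shape: (9068, 184, 184, 1)
y = np.stack(list(image_table['mask']))   # shape: (9068, 184, 184, 1)
print(X.shape, y.shape)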
I have images of varying resolutions, and I would like to extract non-overlapping blocks from these images.
However, because the images do not have a fixed size and my block size is large (64x64), I only want the non-overlapping blocks that fit entirely inside an image. If a block would exceed the image borders, I want to skip it.
I tried the view_as_blocks function from scikit-image as below:
import cv2
from skimage.util import view_as_blocks

for elem in listOfFiles:
    # Read the image
    print("Reading image " + elem)
    img = cv2.imread(elem)
    print(img.shape)  # for example, one image is (2059, 2059, 3)
    Blocks = view_as_blocks(img, block_shape=(64, 64, 3))
The code returns the following error:
ValueError: 'block_shape' is not compatible with 'arr_in'
I also tried the PatchExtractor from scikit-learn, as follows:
from sklearn.feature_extraction import image
import cv2
import numpy

for elem in listOfFiles:
    # Read the image
    print("Reading image " + elem)
    img = cv2.imread(elem)
    print(img.shape)
    pe = image.PatchExtractor(patch_size=(64, 64))
    pe_fit = pe.fit(img)
    pe_trans = pe.transform(img)
    print('Patches shape: {}'.format(pe_trans.shape))
The error I get is the following:
ValueError: negative dimensions are not allowed
The function image.extract_patches_2d from sklearn runs perfectly, but unfortunately it works only for overlapping blocks.
These functions also don't help me because I want to display the image with the selected blocks drawn on it, so I also need a matrix with the coordinates of those blocks. Is it possible to do that in Python?
Since you don't care about the incomplete blocks at the edges, you can manually check the number of blocks along each dimension, and crop your image to that shape:
import cv2
import numpy as np
from skimage.util import view_as_blocks

for elem in listOfFiles:
    # Read the image
    print("Reading image " + elem)
    img = cv2.imread(elem)
    print(img.shape)  # for example, one image is (2059, 2059, 3)

    # Crop to the largest region that holds a whole number of blocks
    block_shape = np.array((64, 64, 3))
    nblocks = np.array(img.shape) // block_shape  # integer division
    crop_r, crop_c, crop_ch = nblocks * block_shape
    cropped_img = img[:crop_r, :crop_c, :crop_ch]
    Blocks = view_as_blocks(cropped_img, block_shape=(64, 64, 3))
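Since you also want the coordinates of the selected blocks, they can be recovered from the block grid that view_as_blocks returns. A hedged sketch, assuming the cropped_img and Blocks from above:

# Blocks has shape (n_rows, n_cols, 1, 64, 64, 3): a 2-D grid of 64x64x3 tiles
n_rows, n_cols = Blocks.shape[:2]
coords = []
for i in range(n_rows):
    for j in range(n_cols):
        top, left = i * 64, j * 64  # top-left corner within cropped_img
        coords.append((top, left))
        # Draw the block outline to visualise the selection
        cv2.rectangle(cropped_img, (left, top), (left + 64, top + 64), (0, 255, 0), 1)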
import cv2
import numpy as np
from os import listdir
from os.path import isfile, join

mypath = '/Users/sachal/Desktop/data_raw/normal_1/images'
onlyfiles = [f for f in listdir(mypath) if isfile(join(mypath, f))]

images = np.asarray(np.empty(len(onlyfiles), dtype=object))
for n in range(len(onlyfiles)):
    images[n] = cv2.imread(join(mypath, onlyfiles[n]))
#--------------------------------------------------------------------------------
resized = np.asarray(np.empty(len(onlyfiles), dtype=object))
img_f = np.asarray(np.empty(len(onlyfiles), dtype=object))
for n in range(len(onlyfiles)):
    resized[n] = cv2.resize(images[n], (101, 101))
    img_f[n] = cv2.cvtColor(resized[n], cv2.COLOR_BGR2YUV)
train_img = np.asarray(img_f)
#--------------------------------------------------------------------------------
In the above code, I first load the images using OpenCV, then in the second block I resize them and change their colour space.
My batch size is 6408 and the dimensions of each image are 101*101*3.
When I do train_img.shape I get (6408,), and for train_img[i].shape I get (101, 101, 3). I am unable to train my neural network model because of this; the dimensions I want are (6408, 101, 101, 3).
I tried reshaping with train_img.resize(6408, 101, 101, 3) and got: ValueError: cannot resize an array that references or is referenced by another array in this way. Use the resize function.
While fitting my model I got this error: Error when checking input: expected conv2d_3_input to have 4 dimensions, but got array with shape (6408, 1).
I want to know if I can change the dimensions of my input with the current method I am using to load my images.
You shouldn't use dtype=object here. OpenCV creates ndarray images anyway.
Here is a corrected version of your code:
import os
import cv2
import numpy as np

mypath = '/Users/sachal/Desktop/data_raw/normal_1/images'
onlyfiles = [f for f in os.listdir(mypath) if os.path.isfile(os.path.join(mypath, f))]

images = []
for file in onlyfiles:
    img = cv2.imread(os.path.join(mypath, file))
    resized_img = cv2.resize(img, (101, 101))
    yuv_img = cv2.cvtColor(resized_img, cv2.COLOR_BGR2YUV)
    images.append(yuv_img.reshape(1, 101, 101, 3))
train_img = np.concatenate(images, axis=0)
print(train_img.shape)
In the loop, you load each image, resize it, convert it to YUV then put it in a list. At the end of the loop, your list contains all your training images. You can pass it to np.concatenate to create an ndarray.
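As a side note, the reshape(1, 101, 101, 3) step is only needed because of np.concatenate; np.stack adds the new leading axis itself. A minimal equivalent sketch, assuming the loop appends the plain (101, 101, 3) arrays instead:

images.append(yuv_img)                # no manual reshape
train_img = np.stack(images, axis=0)  # shape: (6408, 101, 101, 3)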
I have a requirement to read image files (28x28) from a folder and stack them together into a single array for analysis.
I have the following code:
import os
import numpy as np
from PIL import Image

for fname in os.listdir(dirname):
    im = Image.open(os.path.join(dirname, fname))
    imarray = np.array(im)
    final = np.stack((final, imarray), axis=0)
I am getting the following error:
ValueError: all input arrays must have the same shape
imarray is (28, 28) and I have 60K images in that folder, so I want to make an array of size (60000, 28, 28).
Build a list of all components and stack them once:
alist = []
for fname in os.listdir(dirname):
    im = Image.open(os.path.join(dirname, fname))
    imarray = np.array(im)
    alist.append(imarray)
final = np.stack(alist)  # axis=0 is the default
This will join them on a new initial axis.
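To make the shape arithmetic concrete, here is a tiny self-contained check (two hypothetical 28x28 arrays standing in for the real images):

import numpy as np

a = np.zeros((28, 28))
b = np.ones((28, 28))
final = np.stack([a, b])  # joins along a new axis 0
print(final.shape)        # (2, 28, 28); with 60000 images this becomes (60000, 28, 28)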
I have a loop where I read images and resize them to 32x32x3:
import os
import matplotlib.pyplot as plt
from scipy.misc import imresize  # removed in SciPy >= 1.3

for i, filename in enumerate(os.listdir(path)):
    img = plt.imread(path + filename)
    out = imresize(img, [32, 32])
I tried to store the results in a list and convert it to a numpy array:
import numpy as np

inet_signs = []
for i, filename in enumerate(os.listdir(path)):
    img = plt.imread(path + filename)
    out = imresize(img, [32, 32])
    inet_signs.append(out)
a = np.array(inet_signs)
But this only resulted in the error:
ValueError: could not broadcast input array from shape (32,32,3) into shape (32,32)
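This error typically means the list holds arrays of mixed shapes, for example grayscale (32, 32) frames alongside colour (32, 32, 3) ones, so np.array cannot build one regular 4-D block. A hedged sketch of one possible fix, assuming grayscale images should simply be replicated across three channels:

import numpy as np

fixed = []
for out in inet_signs:
    if out.ndim == 2:  # grayscale (32, 32)
        out = np.repeat(out[:, :, np.newaxis], 3, axis=2)  # -> (32, 32, 3)
    fixed.append(out)
a = np.stack(fixed)    # shape: (N, 32, 32, 3)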
I am trying to customize existing code to suit my own needs. Originally, the code uses imgs = np.ndarray((total, 1, image_rows, image_cols), dtype=np.uint8) to store a list of image files in numpy array format. Iterating over the folder, each image file is read with img = skimage.io.imread(os.path.join(train_data_path, image_name)). It works just fine.
The code is as follows:
import os
import numpy as np
import skimage.io

image_rows = 420
image_cols = 580

imgs = np.ndarray((total, 1, image_rows, image_cols), dtype=np.uint8)

i = 0
for image_name in images:
    img = skimage.io.imread(os.path.join(train_data_path, image_name))
    img = np.array([img])  # adds a leading axis: (420, 580) -> (1, 420, 580)
    imgs[i] = img
    i += 1
To suit my own needs, I want the image array to have shape [total, image_rows, image_cols, 1]. In other words, I modified it to imgs = np.ndarray((total, image_rows, image_cols, 1), dtype=np.uint8). However, running the code causes the following error:
imgs[i] = img
ValueError: could not broadcast input array from shape (1,420,580) into shape (420,580,1)
Is there any way to change the shape of img, which has shape [1,420,580] after reading from file? How can I change it to [420,580,1] without affecting the corresponding pixel values in the image?
You want to transpose the dimensions. It can be done with the transpose method; for your case:

img = img.transpose(1, 2, 0)
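A quick self-contained check of what this does (a zero array standing in for the loaded image):

import numpy as np

img = np.zeros((1, 420, 580), dtype=np.uint8)
img = img.transpose(1, 2, 0)  # returns a view; no pixel values are copied or changed
print(img.shape)              # (420, 580, 1)

np.moveaxis(img, 0, -1) on the original array is an equivalent spelling.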