How to reshape images in folder for DCGAN code? - python

These are the 9600 images in my folder and their properties: [screenshot of the folder showing the images and their properties]
I tried resizing them with the code below, but that gave me an array of shape (9600, 224, 672). I mainly used this code because it got my images into the right kind of array; I had used it earlier for a k-means task.
I need a smaller image shape for the DCGAN code (something closer to (9600, 64, 128)) - see below for why.
import numpy as np
from numpy import save
import cv2
import os, glob, shutil

input_dir = '/Users/User/Documents/MATLAB/attempt6images/'
glob_dir = input_dir + '*.jpg'

# read every .jpg and resize it to 224x224 (colour, so 3 channels)
img = [cv2.resize(cv2.imread(file), (224, 224)) for file in glob.glob(glob_dir)]
paths = [file for file in glob.glob(glob_dir)]

# flatten each image and scale the pixel values to [0, 1]
img = np.array(np.float32(img).reshape(len(img), -1) / 255)
save('SV_images.npy', img)
I'm trying to run them through a DCGAN code that was originally written for images of shape (9600, 64, 128). When I use the image shape 224x672, my kernel dies, so I'm assuming my images are too big compared to the original ones. (By the way, I changed the 25624 part to 256721 to match my image shape, and changed all the other numbers besides the 256 column.) Below is part of the original code for the 64x128 images. Any idea how to reshape my images so that my kernel doesn't die?
[screenshot of the original DCGAN model code for 64x128 images]
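One way to get arrays of shape (9600, 64, 128) is to read each image as grayscale and resize it to 64x128 before saving. A minimal sketch, assuming grayscale input is acceptable for the DCGAN and reusing the paths from the snippet above:
import glob
import cv2
import numpy as np
from numpy import save

input_dir = '/Users/User/Documents/MATLAB/attempt6images/'
glob_dir = input_dir + '*.jpg'

# cv2.resize takes (width, height), so (128, 64) yields 64x128 images
imgs = []
for file in glob.glob(glob_dir):
    img = cv2.imread(file, cv2.IMREAD_GRAYSCALE)  # drop the colour channels
    img = cv2.resize(img, (128, 64))              # -> shape (64, 128)
    imgs.append(img)

imgs = np.float32(imgs) / 255.0                   # shape (9600, 64, 128)
save('SV_images_64x128.npy', imgs)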

Related

ROI for image dataset

I'm trying to find the ROI for an image dataset. I can find the ROI for a single image, but when it comes to the whole dataset or multiple images I can't do it.
I tried different code from the internet, but nothing worked.
import cv2
import numpy as np
import os
import random
from matplotlib import pyplot as plt
import matplotlib.image as mpimg

img_folder = r'Shrimp/train/Resized_Dataset'
img_dir = r'Cropped_shrimp'

for i in range(30):
    # pick a random image from the dataset folder
    file = random.choice(os.listdir(img_folder))
    image_path = os.path.join(img_folder, file)
    img = mpimg.imread(image_path)
    ax = plt.subplot(1, 30, i + 1)
    ax.title.set_text(file)
    plt.imshow(img)

    # select the ROI on the image itself (not on the folder path) and crop it
    r = cv2.selectROI("select ROI", img)
    ROI_image = img[int(r[1]):int(r[1] + r[3]), int(r[0]):int(r[0] + r[2])]
    cv2.imshow("ROI_image", ROI_image)
    cv2.waitKey(0)

    # save the crop with a valid filename (including extension) into img_dir
    cv2.imwrite(os.path.join(img_dir, "ROI_" + file), ROI_image)
This is my last approach. I think there are lots of mistakes because I'm trying this for the first time.
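For reference, a minimal sketch of one common pattern: select the ROI interactively on a single sample image, then apply that same crop to every image in the folder. This reuses the folder names from the snippet above and assumes a single fixed ROI makes sense for the whole dataset (and that the folder contains only images):
import os
import cv2

img_folder = r'Shrimp/train/Resized_Dataset'
out_dir = r'Cropped_shrimp'
os.makedirs(out_dir, exist_ok=True)

# pick one sample image and select the ROI interactively
sample = cv2.imread(os.path.join(img_folder, os.listdir(img_folder)[0]))
x, y, w, h = cv2.selectROI("select ROI", sample)
cv2.destroyAllWindows()

# apply the same crop to every image and save it
for file in os.listdir(img_folder):
    img = cv2.imread(os.path.join(img_folder, file))
    if img is None:
        continue
    roi = img[int(y):int(y + h), int(x):int(x + w)]
    cv2.imwrite(os.path.join(out_dir, file), roi)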

convert .nii to .tif using imwrite, it saves a black image instead of the image

I want to convert .nii images to .tif to train my model using U-Net.
1. I looped through all images in the folder.
2. I looped through all slices within each image.
3. I saved each slice as .tif.
The training images are converted successfully. However, the labels (masks) are all saved as black images. I want to convert those masks from .nii to .tif correctly, but I don't know how. I read that it could be something to do with brightness, but I didn't fully understand that, so I haven't been able to solve the problem.
The only reason for this conversion is to be able to train my model, so feel free to suggest a better approach if anyone can share a way to feed the network the .nii format directly.
import glob
import re
from pathlib import Path

import imageio
import nibabel as nib
import numpy as np

for filepath in glob.iglob('data/Task04_Hippocampus/labelsTr/*.nii.gz'):
    # load the volume as a numpy array
    a = nib.load(filepath).get_fdata()
    a = a.astype('int8')

    # strip the .nii.gz extension to get the base filename
    base = Path(filepath).stem
    base = re.sub('.nii', '', base)

    # save each slice along the third axis as a separate .tif
    x, y, z = a.shape
    for i in range(z):
        newimage = a[:, :, i]
        imageio.imwrite('data/Task04_Hippocampus/masks/' + base + '_' + str(i) + '.tif', newimage)
Unless you absolutely have to use TIFF, I would strongly suggest using the NIfTI format, for a number of important reasons:
Image values are often not arbitrary. For example, in CT images the values correspond to x-ray attenuation (measured in Hounsfield units). TIFF, which is likely to scale the values in some way, is not suitable for this.
NIfTI also contains a header which has crucial geometric information needed to correctly interpret the image, such as the resolution, slice thickness, and direction.
You can directly extract a numpy.ndarray from NIfTI images using SimpleITK. Here is a code snippet:
import SimpleITK as sitk
import numpy as np
img = sitk.ReadImage("your_image.nii")
arr = sitk.GetArrayFromImage(img)
slice_0 = arr[0,:,:] # this is a 2D axial slice as a np.ndarray
As an aside: the reason your saved masks look black is that in NIfTI format the labels have a value of 1 (and the background is 0). If you convert directly to TIFF, a value of 1 is very close to black when interpreted as a pixel intensity - another reason to avoid TIFF!
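If TIFF masks are still needed (for example, because the training pipeline only reads .tif), here is a minimal sketch of a workaround, assuming binary 0/1 masks and the folder layout from the question: rescale the labels to 0/255 when saving so the slices are visible in a viewer (and map them back to 0/1 before training):
import glob
import re
from pathlib import Path

import imageio
import nibabel as nib
import numpy as np

for filepath in glob.iglob('data/Task04_Hippocampus/labelsTr/*.nii.gz'):
    mask = nib.load(filepath).get_fdata()
    # map foreground labels to 255 so the slices are visible in a viewer;
    # note this discards the distinction between multiple label values
    vis = (mask > 0).astype(np.uint8) * 255
    base = re.sub('.nii', '', Path(filepath).stem)
    for i in range(vis.shape[2]):
        imageio.imwrite('data/Task04_Hippocampus/masks/' + base + '_' + str(i) + '.tif', vis[:, :, i])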

Resizing images efficiently

My dataset contains 102235 images which I stored in a list and saved as a pickle file. I would like to train a neural network with these images.
Therefore, I would like to resize the images.
I used the following code but my Jupyter Notebook died.
import pickle
import cv2

# load the list of images from the pickle file
images = pickle.load(open(r"path\to\pickle\data\*.pkl", "rb"))

def resize_images(img, new_width, new_height):
    size = (new_width, new_height)
    resized_img = cv2.resize(img, size)
    return resized_img

images = images.tolist()  # only needed if the pickle actually contains a numpy array
resized_img_train = [resize_images(img, new_width=224, new_height=224) for img in images]
Is there a better way to do this?
Edit: I do not want to use TensorFlow or Keras since my computer seems to have problems with these two packages.
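A rough sketch of a more memory-friendly approach, assuming the pickle holds a list of 3-channel uint8 images (and reusing the placeholder path from the question): write the resized images straight into a memory-mapped .npy file instead of building a second in-memory list, since 102235 resized colour images at 224x224 take roughly 15 GB:
import pickle
import cv2
import numpy as np

with open(r"path\to\pickle\data\*.pkl", "rb") as f:
    images = pickle.load(f)

n = len(images)
# write straight to a .npy file on disk instead of holding a second
# ~15 GB array (102235 x 224 x 224 x 3 uint8) in memory
resized = np.lib.format.open_memmap("resized_images.npy", mode="w+",
                                    dtype=np.uint8, shape=(n, 224, 224, 3))
for i, img in enumerate(images):
    resized[i] = cv2.resize(img, (224, 224))
resized.flush()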

Loading images and reshaping resulting numpy array

Hi, I am trying to reshape an array full of pixel data from images loaded with OpenCV. The combined array has shape (100, 28, 28, 3)
and I am trying to make it shape (100, 28, 28), but I cannot get rid of the last dimension with np.delete or reshape. Any help would be great! Here's my code so far:
import cv2
import glob
import numpy as np

hand_dig = []
files = glob.glob("C:/Users/xxx/Desktop/digits/hand/*.PNG")
for myFile in files:
    print(myFile)
    image = cv2.imread(myFile)
    hand_dig.append(image)

print('hand_digit shape:', np.array(hand_dig).shape)   # (100, 28, 28, 3)
# this fails: (100, 28, 28, 3) cannot be reshaped into (100, 28, 28)
hand_dig = np.reshape(hand_dig, (100, 28, 28))
print(hand_dig.shape)
From what you have given above it seems you have 100 RGB images of shape (28, 28).
No, you cannot simply drop the colour channel, as that would lose a lot of information.
A better option is to convert each image to grayscale and then stack them.
After reading each image, add this line:
image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
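Put together, a minimal sketch of the loading loop with the grayscale conversion applied, which gives an array of shape (100, 28, 28):
import cv2
import glob
import numpy as np

hand_dig = []
for myFile in glob.glob("C:/Users/xxx/Desktop/digits/hand/*.PNG"):
    image = cv2.imread(myFile)
    image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)  # (28, 28, 3) -> (28, 28)
    hand_dig.append(image)

hand_dig = np.array(hand_dig)
print(hand_dig.shape)  # (100, 28, 28)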

Read mnist images into Tensorflow

I was looking at this Tensorflow tutorial.
In the tutorial the images are magically read like this:
mnist = learn.datasets.load_dataset("mnist")
train_data = mnist.train.images
My images are placed in two directories:
../input/test/
../input/train/
They all have a *.jpg ending.
So how can I read them into my program?
I don't think I can use learn.datasets.load_dataset because this seems to take in a specialized dataset structure, while I only have folders with images.
mnist.train.images is essentially a numpy array of shape [55000, 784], where 55000 is the number of images and 784 is the number of pixels in each image (each image is 28x28).
You need to create a similar numpy array from your data if you want to run this exact code. So you'll need to iterate over all your images, read each image as a numpy array, flatten it, and build a matrix of size [num_examples, image_size].
The following code snippet should do it:
import os
import cv2
import numpy as np

def load_data(img_dir):
    return np.array([cv2.imread(os.path.join(img_dir, img)).flatten()
                     for img in os.listdir(img_dir) if img.endswith(".jpg")])
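For example, with the directory layout from the question:
train_data = load_data("../input/train/")
print(train_data.shape)   # (num_images, height * width * 3) for colour JPEGs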
A more comprehensive version of the code, to make debugging easier:
import os
import cv2
import numpy as np

img_dir = "../input/train/"
list_of_imgs = []
for img in os.listdir(img_dir):
    img = os.path.join(img_dir, img)
    if not img.endswith(".jpg"):
        continue
    a = cv2.imread(img)
    if a is None:
        print("Unable to read image", img)
        continue
    list_of_imgs.append(a.flatten())
train_data = np.array(list_of_imgs)
Note:
If your images are not 28x28x1 (grayscale), you will need to change the neural network architecture (defined in cnn_model_fn). The architecture in the tutorial is a toy architecture that only works for simple images like MNIST. AlexNet may be a good place to start for RGB images.
You can also check the answers given in How do I convert a directory of jpeg images to TFRecords file in tensorflow?. The easiest way is to use the utility provided by TensorFlow, build_image_data.py, which does exactly what you want to do.
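If you are on a recent TensorFlow version, an alternative sketch using the tf.data API (not the approach from the linked answer) to stream the JPEGs directly from the folder; resizing to 28x28 with a single channel is an assumption made here to mimic MNIST:
import tensorflow as tf

def load_jpegs(pattern, size=(28, 28), batch_size=32):
    files = tf.data.Dataset.list_files(pattern)
    def _parse(path):
        img = tf.io.read_file(path)
        img = tf.image.decode_jpeg(img, channels=1)   # single channel, like MNIST
        img = tf.image.resize(img, size) / 255.0      # scale to [0, 1]
        return tf.reshape(img, [-1])                  # flatten to 784 values
    return files.map(_parse).batch(batch_size)

train_ds = load_jpegs("../input/train/*.jpg")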
