I am a beginner in deep learning.
I'm working through an article that was posted on GitHub. I downloaded the dataset, which contains 6 types of images: glass, paper, cardboard, plastic, metal and trash.
I want to read the dataset the same way the author did, but I can't read it, even though I believe I gave the path correctly.
Below is the author's code for reading, for example, the cardboard images, followed by my version, which I run on Google Colab.
import copy
import numpy as np
from PIL import Image
import matplotlib.pyplot as plt
import tensorflow as tf
cardboard = []
for i in range(1,404):
    temp = tf.keras.preprocessing.image.load_img(
        path="trashnet/cardboard/cardboard" + str(i) + ".jpg",
        grayscale=False, color_mode='rgb', target_size=(227, 227))
    X = np.array(temp)
    cardboard.append(X)
cardboard = np.array(cardboard)
cardboard = np.take(cardboard,np.random.permutation(cardboard.shape[0]),axis=0)
print(cardboard.shape)
import copy
import numpy as np
from PIL import Image
import matplotlib.pyplot as plt
import tensorflow as tf
cardboard = []
for i in range(1,404):
    temp = tf.keras.preprocessing.image.load_img(
        path="dataset-resized.zip/dataset-resized/cardboard" + str(i) + ".jpg",
        grayscale=False, color_mode='rgb', target_size=(227, 227))
    X = np.array(temp)
    cardboard.append(X)
cardboard = np.array(cardboard)
cardboard = np.take(cardboard,np.random.permutation(cardboard.shape[0]),axis=0)
print(cardboard.shape)
/usr/local/lib/python3.7/dist-packages/keras_preprocessing/image/utils.py in load_img(path, grayscale, color_mode, target_size, interpolation)
111 raise ImportError('Could not import PIL.Image. '
112 'The use of `load_img` requires PIL.')
--> 113 with open(path, 'rb') as f:
114 img = pil_image.open(io.BytesIO(f.read()))
115 if color_mode == 'grayscale':
NotADirectoryError: [Errno 20] Not a directory: 'dataset-resized.zip/dataset-resized/cardboard1.jpg'
I was able to replicate the issue using the Garbage Classification dataset here. The NotADirectoryError happens because the path treats dataset-resized.zip as if it were a folder; a .zip archive has to be extracted before files inside it can be opened. Please find the working code below and try passing the correct (extracted) path to tf.keras.preprocessing.image.load_img.
import copy
import numpy as np
from PIL import Image
import matplotlib.pyplot as plt
import tensorflow as tf
cardboard = []
for i in range(1, 404):
    temp = tf.keras.preprocessing.image.load_img(
temp = tf.keras.preprocessing.image.load_img(
path = "/content/garbage classification/Garbage classification/cardboard/cardboard"+str(i)+".jpg",
grayscale=False, color_mode='rgb',target_size=(227,227))
X = np.array(temp)
cardboard.append(X)
cardboard = np.array(cardboard)
cardboard = np.take(cardboard,np.random.permutation(cardboard.shape[0]),axis=0)
print(cardboard.shape)
Output:
(403, 227, 227, 3)
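If you would rather keep working with the original dataset-resized.zip from the question, note that the archive first has to be extracted before load_img can open files inside it. A minimal sketch (the archive name and the folder layout inside it are taken from the error message, so adjust them to your copy):
import zipfile

# Extract the archive once; afterwards the images are ordinary files on disk.
with zipfile.ZipFile("dataset-resized.zip", "r") as archive:
    archive.extractall(".")

# After extraction the images should be reachable under the extracted folder,
# e.g. "dataset-resized/cardboard/cardboard1.jpg" (adjust if the layout differs).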
Related
One week ago I ran this code perfectly, but today I am getting a runtime error: output shape not correct.
from PIL import Image
import glob
import numpy as np
import scipy.ndimage.filters
import matplotlib.pyplot as plt
image_list_Brownspot = []
for filename in glob.glob('./dataset/BrownSpot/*.jpg'):
    im = Image.open(filename)
    image_list_Brownspot.append(im.copy())
    im.close()
len(image_list_Brownspot)
lap = scipy.ndimage.filters.laplace(image_list_Brownspot[0])
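One possible cause (an assumption, since only the error message is shown) is that laplace is being handed a PIL Image object rather than a NumPy array. A minimal sketch of a workaround is to convert explicitly before filtering:
import numpy as np
import scipy.ndimage

# Convert the PIL image to a float array first; the grayscale conversion is an
# assumption about the intended input and can be dropped to filter all channels.
first_img = np.asarray(image_list_Brownspot[0].convert("L"), dtype=float)
lap = scipy.ndimage.laplace(first_img)
print(lap.shape)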
I would like to get the features of several images located in the same folder.
My code is as follows. Prerequisites (libraries needed):
import numpy as np
from PIL import Image
import glob
import cv2
import os
Definition of the folder where the images are located (around 6,000)
images_dir = "TrainImages"
Creation of a function that defines the different variables and computes them
def get_data_from_image(image_path):
    cv_img = cv2.imread(image_path)
    (means, stds) = cv2.meanStdDev(cv_img)
    stats = np.concatenate([means, stds]).flatten()
    image_features_list = [stats.tolist()]
    return image_features_list
Creation of a variable that scans and analyses the images
image_files = [x.path for x in os.scandir(images_dir)]
Creation of a loop function
i = 0
mylist = []
for i in range(4):  # I test only 4 images, could be more
    mylist.append(get_data_from_image(image_files[i]))
Running the stuff
image_features_list = get_data_from_image(image_files[i])
Look at the output
image_features_list
The output provides only the features of one image, instead of all the images located in the folder
[Out]:
[[114.31548828125001,
139.148388671875,
139.57832682291667,
50.54138521536725,
53.82290182999255,
51.946187641459595]]
I would be grateful for a solution on how to get the features of all the images (not only one). To that end, do not hesitate to correct the code.
Thanks and kindest regards
After some comments from friendly people, here is some additional information for those who would be interested in the answer: the output to look at is mylist.
mylist
[Out]:
[[[144.28788548752834,
151.28145691609978,
148.6195351473923,
51.50620316379085,
53.36979275398226,
52.2493589172815]],
[[56.220865079365076,
59.99653968253968,
60.28386507936508,
66.72797279655177,
65.24673515467009,
64.93141350917332]],
[[125.2066064453125,
118.1168994140625,
145.0827685546875,
68.95463582009148,
52.65138276425348,
56.68269683130363]],
[[114.31548828125001,
139.148388671875,
139.57832682291667,
50.54138521536725,
53.82290182999255,
51.946187641459595]]]
Thanks for your help. It is a great forum here!
Try this approach and tell me if it's successful.
import os, os.path
import numpy as np
from PIL import Image
import cv2
def get_data_from_image(image_path):
    cv_img = cv2.imread(image_path)
    (means, stds) = cv2.meanStdDev(cv_img)
    stats = np.concatenate([means, stds]).flatten()
    image_features_list = [stats.tolist()]
    return image_features_list
images_dir = 'C:\\Users\\User\\Directory\\TrainImages\\'
images_names = []
with os.scandir(images_dir) as dirs:
    for entry in dirs:
        images_names.append(entry.name)

for image in images_names:
    path = images_dir + image
    image_features_list = get_data_from_image(path)
    print(image_features_list)
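If the goal is to end up with the features of every image in one object rather than printing them one by one, a small variation on the same loop (just a sketch reusing the helper above) is to collect each result in a list:
all_features = []
for image in images_names:
    path = os.path.join(images_dir, image)
    all_features.append(get_data_from_image(path))

# all_features now holds one [means + stds] entry per image in the folder.
print(len(all_features))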
I'm doing an image processing task and I want to use torch.cat to concatenate pictures belonging to two different folders. The size of the images in folder 1 is 224*224*3, and the size of the images in folder 2 is 224*224*1. After that, I want to save the generated image. I used the save function, but I got an error: "TypeError: Cannot handle this data type". Please help me find the solution.
Here is my code:
import glob
import os
import torch
import torchvision.transforms.functional as TF
from PIL import Image
import numpy as np
import matplotlib.pyplot as plt
import scipy.misc
f1_folder = 'F:\\picture1'
f2_folder = 'F:\\picture2'
f1_images = glob.glob(os.path.join(f1_folder, '*.jpg'))
f2_images = glob.glob(os.path.join(f2_folder, '*.jpg'))
for f1_img, f2_img in zip(f1_images, f2_images):
    img1 = Image.open(f1_img)
    img2 = Image.open(f2_img)
    cat_img = torch.cat((TF.to_tensor(img1), TF.to_tensor(img2)), dim=1)  # for 'CHW'
    cat_img = cat_img.numpy()
    cat_img = np.transpose(cat_img, (1, 2, 0))
    im = Image.fromarray(cat_img)
    im.save("file.jpeg")
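A sketch of one possible way around the TypeError, assuming the intention is to stack the two images along the channel dimension: to_tensor returns floats in [0, 1], and Image.fromarray cannot build an image from a float array with an arbitrary number of channels, so converting back to uint8 (and saving as PNG, since JPEG cannot store four channels) avoids the error:
import numpy as np
import torch
import torchvision.transforms.functional as TF
from PIL import Image

for f1_img, f2_img in zip(f1_images, f2_images):
    img1 = Image.open(f1_img)               # 224 x 224 x 3
    img2 = Image.open(f2_img).convert("L")  # 224 x 224 x 1
    # stack along the channel dimension (dim=0 for a CHW tensor) -> 4 x 224 x 224
    cat_img = torch.cat((TF.to_tensor(img1), TF.to_tensor(img2)), dim=0)

    arr = cat_img.numpy()
    arr = np.transpose(arr, (1, 2, 0))               # 224 x 224 x 4
    arr = (arr * 255).clip(0, 255).astype(np.uint8)  # float [0, 1] -> uint8 for PIL
    Image.fromarray(arr).save("file.png")            # PNG can hold a 4-channel image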
I am using visualize_cam from keras-vis to create guided Grad-CAM images.
Grad-CAM works perfectly well with VGG16, but when I use the same code for InceptionV3, it does not work properly.
from keras.applications.inception_v3 import InceptionV3
from vis.utils import utils
from keras.preprocessing import image
import numpy as np
from keras import activations
from matplotlib import pyplot as plt
%matplotlib inline
from vis.visualization import visualize_cam,overlay
#build the inceptionv3 model with imagenet weights
model = InceptionV3(weights='imagenet',include_top=True)
# Utility to search for layer index by name
layer_idx = utils.find_layer_idx(model,'predictions')
#swap the softmax with a linear activation for the reasons mentioned above
model.layers[layer_idx].activation = activations.linear
model = utils.apply_modifications(model)
from vis.utils import utils
from matplotlib import pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize']=(18,6)
img1 = utils.load_img('images/ouzel1.jpg',target_size=(299,299))
img2 = utils.load_img('images/ouzel2.jpg',target_size=(299,299))
f, ax = plt.subplots(1,2)
ax[0].imshow(img1)
ax[1].imshow(img2)
plt.show()
from vis.visualization import visualize_cam
for modifier in [None, 'guided', 'relu']:
    plt.figure()
    f, ax = plt.subplots(1, 2)
    plt.suptitle("vanilla" if modifier is None else modifier)
    for i, img in enumerate([img1, img2]):
        # 20 is the imagenet index corresponding to `ouzel`
        heatmap = visualize_cam(model, layer_idx, filter_indices=20,
                                seed_input=img, backprop_modifier=modifier,
                                # penultimate_layer_idx=299  # corresponding to "conv2d_94"
                                )
        # Let's overlay the heatmap onto the original image.
        ax[i].imshow(overlay(img, heatmap))
With the #penultimate_layer_idx line commented out I also get the same output, which is not correct. Can someone tell me what the problem is? The guided Grad-CAM result is shown first, followed by the original image.
The problem is that the heatmap should be on the bird (ouzel).
I hit the very same problem, but then I discovered that InceptionV3 misclassifies these images. Check:
>>> model.predict(np.stack([img1, img2], 0)).argmax(axis=1)
array([110, 725])
While with VGG it's:
>>> model.predict(np.stack([img1, img2], 0)).argmax(axis=1)
array([20, 20])
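So the Grad-CAM for the hard-coded index 20 (ouzel) cannot be expected to land on the bird when the network's own top prediction is a different class. A possible sanity check (a sketch, not part of the fix above) is to apply InceptionV3's own preprocessing before predicting and to point visualize_cam at whatever class the model actually predicts:
from keras.applications.inception_v3 import preprocess_input

# Apply the preprocessing InceptionV3 was trained with (pixels scaled to [-1, 1])
# before asking for predictions; img1/img2 are the arrays loaded above.
x = preprocess_input(np.stack([img1, img2], 0).astype(np.float32))
top_classes = model.predict(x).argmax(axis=1)
print(top_classes)

# Then visualize the class the model actually predicts instead of a fixed index.
heatmap = visualize_cam(model, layer_idx, filter_indices=int(top_classes[0]),
                        seed_input=img1, backprop_modifier='guided')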
I'd like to 1. iterate through a directory of images and turn each image into a NumPy array. I think I have accomplished this with the following code:
import tensorflow as tf
import matplotlib
matplotlib.use('TkAgg')
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
from PIL import Image
import os
myimages = []
path_to_images = 'images_animation'
filenum = len([name for name in os.listdir(path_to_images) if os.path.isfile(os.path.join(path_to_images, name))])
#loops through available pngs
for p in range(1, filenum):
    ## Read in picture
    fname = "images_animation/image%03d.png" % p
    img = mpimg.imread(fname)
    imgplot = plt.imshow(img)
    # append AxesImage object to the list
    myimages.append([imgplot])
for n, im in enumerate(myimages):
    img = Image.open(fname).convert("L")
    arr = np.array(img)
    print(arr)
If I can make this code better or more efficient, please feel free to tell me how.
Now, I'd like to 2. turn these NumPy arrays into TFRecords. What is the best way to do so? I'm nearly clueless about how to do this, so I have not done much to solve it myself, and I'm looking for a solution.
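On the first point, one thing worth flagging is that the second loop reopens fname, which still refers to the last file from the first loop, so every pass converts the same image. A shorter sketch that converts every PNG directly, skipping the matplotlib step (just a suggestion, reusing your path_to_images), could look like this:
import glob
import os
import numpy as np
from PIL import Image

arrays = []
for fname in sorted(glob.glob(os.path.join(path_to_images, "image*.png"))):
    # open each png, convert it to grayscale, and keep it as a NumPy array
    img = Image.open(fname).convert("L")
    arrays.append(np.array(img))

print(len(arrays), arrays[0].shape)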
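On the second point, a rough sketch of one common pattern (assuming a TensorFlow version where tf.io.TFRecordWriter is available, and that each array should be stored together with its shape so it can be reconstructed when reading back) is to serialize each array into a tf.train.Example and write it out:
import numpy as np
import tensorflow as tf

def _bytes_feature(value):
    return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))

def _int64_feature(value):
    return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))

# `arrays` is a list of 2-D uint8 NumPy arrays, e.g. the one built in the sketch above.
with tf.io.TFRecordWriter("images.tfrecord") as writer:
    for arr in arrays:
        example = tf.train.Example(features=tf.train.Features(feature={
            "image_raw": _bytes_feature(arr.tobytes()),
            "height": _int64_feature(arr.shape[0]),
            "width": _int64_feature(arr.shape[1]),
        }))
        writer.write(example.SerializeToString())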