Keras CNN Classifier - python

I have a question about building a CNN in Keras; if you could help me, I would really appreciate it.
Disclaimer: I'm a noob in CNN and Keras, I'm just learning them right now.
My Data:
2 Classes (dogs and cats)
Training: 30 pics each category
Test: 14 pics each category
Valid: 30 pics each category
My code:
data_path = Path("../data")
train_path = data_path / "train"
test_path = data_path / "test"
valid_path = data_path / "valid"
train_batch = ImageDataGenerator().flow_from_directory(directory=train_path,
                                                       target_size=(200, 200),
                                                       classes=animals,
                                                       batch_size=10)
valid_batch = ImageDataGenerator().flow_from_directory(directory=valid_path,
                                                       target_size=(200, 200),
                                                       classes=animals,
                                                       batch_size=10)
test_path = ImageDataGenerator().flow_from_directory(directory=test_path,
                                                     target_size=(200, 200),
                                                     classes=animals,
                                                     batch_size=4)
imgs, labels = next(train_batch)
model = Sequential(
    [Conv2D(32, (3, 3), activation="relu", input_shape=(200, 200, 3)), Flatten(),
     Dense(len(animals), activation='softmax')])
model.compile(Adam(lr=.0001), loss='categorical_crossentropy', metrics=['accuracy'])
model.fit_generator(train_path, steps_per_epoch=4, validation_data=valid_batch, validation_steps=3, epochs=5, verbose=2)
Here is my error message (I've replaced the file paths in the traceback with ""):
Traceback (most recent call last):
File "", line 191, in <module>
model.fit_generator(train_path, steps_per_epoch=4, validation_data=valid_batch, validation_steps=3, epochs=5, verbose=2)
File "y", line 91, in wrapper
return func(*args, **kwargs)
File "", line 1732, in fit_generator
initial_epoch=initial_epoch)
File "", line 185, in fit_generator
generator_output = next(output_generator)
File "", line 742, in get
six.reraise(*sys.exc_info())
File "", line 693, in reraise
raise value
File "", line 711, in get
inputs = future.get(timeout=30)
File "", line 657, in get
raise self._value
File "", line 121, in worker
result = (True, func(*args, **kwds))
File "", line 650, in next_sample
return six.next(_SHARED_SEQUENCES[uid])
TypeError: 'PosixPath' object is not an iterator
Could anyone explain to me what I'm doing wrong, please? Also, if this is an off-topic question, just let me know where I can ask it.

The issue is that you are not passing the training generator but the path to the files (you are using train_path instead of train_batch), whereas .fit_generator() needs a generator object:
model.fit_generator(train_batch, steps_per_epoch=4, validation_data=valid_batch, validation_steps=3, epochs=5, verbose=2)
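For reference, here is a minimal end-to-end sketch of the corrected pipeline. The imports and the animals list are assumptions, since they are not shown in the question:
from pathlib import Path
from keras.models import Sequential
from keras.layers import Conv2D, Flatten, Dense
from keras.optimizers import Adam
from keras.preprocessing.image import ImageDataGenerator

animals = ['dogs', 'cats']  # assumed to match the class sub-folder names

data_path = Path("../data")
train_batch = ImageDataGenerator().flow_from_directory(directory=str(data_path / "train"),
                                                       target_size=(200, 200),
                                                       classes=animals,
                                                       batch_size=10)
valid_batch = ImageDataGenerator().flow_from_directory(directory=str(data_path / "valid"),
                                                       target_size=(200, 200),
                                                       classes=animals,
                                                       batch_size=10)

model = Sequential([Conv2D(32, (3, 3), activation="relu", input_shape=(200, 200, 3)),
                    Flatten(),
                    Dense(len(animals), activation='softmax')])
model.compile(Adam(lr=.0001), loss='categorical_crossentropy', metrics=['accuracy'])

# Pass the generator (train_batch), not the path (train_path):
model.fit_generator(train_batch, steps_per_epoch=4,
                    validation_data=valid_batch, validation_steps=3,
                    epochs=5, verbose=2)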

This line isn't necessary:
imgs, labels = next(train_batch)
From the docs, fit_generator's first argument is a generator object, not a string (or path) as you have supplied. It should look like this:
model.fit_generator(train_batch, steps_per_epoch=4, validation_data=valid_batch, validation_steps=3, epochs=5, verbose=2)

Related

Can't add other metrics than accuracy on ViT model

I am a beginner in machine learning and I am trying to train a ViT model for categorical classification with my own dataset. I am following this code: https://keras.io/examples/vision/image_classification_with_vision_transformer/
It works fine when I use the accuracy metric, but I want to use recall and precision as well, and I keep getting this error when I add them:
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/execute.py in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name)
52 try:
53 ctx.ensure_initialized()
---> 54 tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
55 inputs, attrs, num_outputs)
56 except core._NotOkStatusException as e:
InvalidArgumentError: Graph execution error:
Detected at node 'assert_greater_equal/Assert/AssertGuard/Assert' defined at (most recent call last):
File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/usr/local/lib/python3.8/dist-packages/ipykernel_launcher.py", line 16, in <module>
app.launch_new_instance()
File "/usr/local/lib/python3.8/dist-packages/traitlets/config/application.py", line 992, in launch_instance
app.start()
File "/usr/local/lib/python3.8/dist-packages/ipykernel/kernelapp.py", line 612, in start
self.io_loop.start()
File "/usr/local/lib/python3.8/dist-packages/tornado/platform/asyncio.py", line 149, in start
self.asyncio_loop.run_forever()
...
(1) INVALID_ARGUMENT: assertion failed: [predictions must be >= 0] [Condition x >= y did not hold element-wise:] [x (model_3/dense_79/BiasAdd:0) = ] [[251.636795 -233.491394 322.750397...]...] [y (Cast_4/x:0) = ] [0]
[[{{node assert_greater_equal/Assert/AssertGuard/Assert}}]]
0 successful operations.
0 derived errors ignored. [Op:__inference_train_function_348766]
I also one-hot encoded my y_train so I could use the CategoricalCrossentropy loss instead of the SparseCategoricalCrossentropy loss.
Here is the shape of the arrays now:
x_train shape: (6179, 336, 336, 3) - y_train shape: (6179, 9)
x_test shape: (2060, 336, 336, 3) - y_test shape: (2060, 9)
I just changed a few things on the compile from the original code:
optimizer = tfa.optimizers.AdamW(
    learning_rate=learning_rate, weight_decay=weight_decay
)
model.compile(
    optimizer=optimizer,
    loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True),
    metrics=['accuracy', tf.keras.metrics.Recall(name='recall')]
)
checkpoint_filepath = "/content/drive/MyDrive/DATASET/checkpoints/checkpoint.h5"
checkpoint_callback = keras.callbacks.ModelCheckpoint(
    checkpoint_filepath,
    monitor="val_accuracy",
    save_best_only=True,
    save_weights_only=True,
)
r = model.fit(
    x=x_train,
    y=y_train,
    batch_size=batch_size,
    epochs=num_epochs,
    validation_split=0.25,
)
model.load_weights(checkpoint_filepath)
_, accuracy = model.evaluate(x_test, y_test)
print(f"Test accuracy: {round(accuracy * 100, 2)}%")
I'm using Colab.
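A likely cause, though no answer is included here: tf.keras.metrics.Recall and Precision expect predictions in [0, 1], but with from_logits=True the model outputs raw logits, which is exactly what the "predictions must be >= 0" assertion complains about. A minimal sketch of one workaround, reusing the model and optimizer defined above, is to convert the logits to probabilities before these metrics see them:
import tensorflow as tf

class RecallFromLogits(tf.keras.metrics.Recall):
    # Hypothetical helper: apply softmax so the metric receives probabilities, not logits.
    def update_state(self, y_true, y_pred, sample_weight=None):
        return super().update_state(y_true, tf.nn.softmax(y_pred, axis=-1), sample_weight)

model.compile(
    optimizer=optimizer,
    loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True),
    metrics=['accuracy', RecallFromLogits(name='recall')],
)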

Class weights not working with return_sequences Keras

# x_train format = [samples, timesteps, features]
# y_train format = [samples, timesteps]
num_timesteps = len(x_train[0])
num_features = len(x_train[0, 0])
num_classes = 3
model = Sequential()
model.add(Dense(50, input_shape=(num_timesteps, num_features)))
model.add(Dense(50))
model.add(Dense(50))
model.add(LSTM(300, return_sequences=True))
model.add(Dense(50))
model.add(Dense(50))
model.add(Dense(num_classes, activation='softmax'))
model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy'],
              sample_weight_mode='temporal')
model.fit(x_train, y_train, epochs=1, batch_size=500, class_weight=class_weights)
I get an error that seems to indicate something is wrong with indices. The code works if I remove class_weights.
Does anyone know what I am doing wrong?
Error:
Traceback (most recent call last):
File "C:/Users/Documents/Neural Network for probabilities/keras_test_real_data_non_stateful_return_sequences_with_class_weights.py", line 129, in
model.fit(x_train, y_train, epochs=1, batch_size=500, class_weight = class_weights)
File "C:\Users\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow\python\keras\engine\training.py", line 66, in _method_wrapper
return method(self, *args, **kwargs)
File "C:\Users\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow\python\keras\engine\training.py", line 848, in fit
tmp_logs = train_function(iterator)
File "C:\Users\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow\python\eager\def_function.py", line 580, in call
result = self._call(*args, **kwds)
File "C:\Users\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow\python\eager\def_function.py", line 644, in _call
return self._stateless_fn(*args, **kwds)
File "C:\Users\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow\python\eager\function.py", line 2420, in call
return graph_function._filtered_call(args, kwargs) # pylint: disable=protected-access
File "C:\Users\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow\python\eager\function.py", line 1665, in _filtered_call
self.captured_inputs)
File "C:\Users\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow\python\eager\function.py", line 1746, in _call_flat
ctx, args, cancellation_manager=cancellation_manager))
File "C:\Users\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow\python\eager\function.py", line 598, in call
ctx=ctx)
File "C:\Users\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow\python\eager\execute.py", line 60, in quick_execute
inputs, attrs, num_outputs)
tensorflow.python.framework.errors_impl.InvalidArgumentError: indices[1] = 4 is not in [0, 3)
[[{{node GatherV2}}]]
[[IteratorGetNext]] [Op:__inference_train_function_4037]
Function call stack:
train_function
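Worth noting, although no answer is included here: Keras' class_weight argument does not support temporal (3-D) targets such as the [samples, timesteps] y_train used above. The usual workaround is to translate the class weights into a per-timestep sample_weight array and keep sample_weight_mode='temporal'. A minimal sketch, assuming class_weights is a dict mapping class index to weight:
import numpy as np

class_weights = {0: 1.0, 1: 2.5, 2: 4.0}  # hypothetical values

# Map each timestep's class label to its weight -> shape [samples, timesteps]
sample_weights = np.vectorize(class_weights.get)(y_train)

model.fit(x_train, y_train, epochs=1, batch_size=500, sample_weight=sample_weights)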

How to fix "AttributeError: 'str' object has no attribute '__array_interface__'" while doing Image classification using python

I am doing image classification using Keras, and while training the model I get an error that says "AttributeError: 'str' object has no attribute '__array_interface__'". It is raised from PIL's Image.py file, and because of it my model never starts training.
I have already tried making some changes in the Image.py file and in my own code, but the error still exists. The code file was working earlier, but now it throws this AttributeError.
img_width, img_height = 150, 150
train_dir = r'D:\DataSets\Cats_Dogs\train'
validation_dir = r'D:\DataSets\Cats_Dogs\test'
nb_train_samples = 5000
nb_validation_samples = 1000
epochs = 50
batch_size = 20
train_datagen = ImageDataGenerator(rescale=1./255, shear_range=0.2, zoom_range=0.2, horizontal_flip=True,
                                   vertical_flip=True, rotation_range=0.2)
validation_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(train_dir, target_size=(img_width, img_height),
                                                    batch_size=batch_size, class_mode='binary')
validation_generator = validation_datagen.flow_from_directory(validation_dir, target_size=(img_width, img_height),
                                                              batch_size=batch_size, class_mode='binary')
model = Sequential()
model.add(Conv2D(32, (3, 3), input_shape=input_shape, activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dense(1))
model.add(Activation('sigmoid'))
model.summary()
model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy'])
model.fit_generator(train_generator, steps_per_epoch= nb_train_samples // batch_size, epochs=epochs,validation_data=validation_generator, validation_steps= nb_validation_samples // batch_size)
Use tf.cast instead.
Epoch 1/50
Traceback (most recent call last):
File "C:/Users/user/PycharmProjects/ImageClassification/BinaryClass.py", line 56, in <module>
validation_data=validation_generator, validation_steps= nb_validation_samples // batch_size)
File "C:\Users\user\AppData\Local\Continuum\anaconda3\lib\site-packages\keras\legacy\interfaces.py", line 91, in wrapper
return func(*args, **kwargs)
File "C:\Users\user\AppData\Local\Continuum\anaconda3\lib\site-packages\keras\engine\training.py", line 1658, in fit_generator
initial_epoch=initial_epoch)
File "C:\Users\user\AppData\Local\Continuum\anaconda3\lib\site-packages\keras\engine\training_generator.py", line 181, in fit_generator
generator_output = next(output_generator)
File "C:\Users\user\AppData\Local\Continuum\anaconda3\lib\site-packages\keras\utils\data_utils.py", line 616, in get
six.reraise(*sys.exc_info())
File "C:\Users\user\AppData\Local\Continuum\anaconda3\lib\site-packages\six.py", line 693, in reraise
raise value
File "C:\Users\user\AppData\Local\Continuum\anaconda3\lib\site-packages\keras\utils\data_utils.py", line 603, in get
inputs = future.get(timeout=30)
File "C:\Users\user\AppData\Local\Continuum\anaconda3\lib\multiprocessing\pool.py", line 657, in get
raise self._value
File "C:\Users\user\AppData\Local\Continuum\anaconda3\lib\multiprocessing\pool.py", line 121, in worker
result = (True, func(*args, **kwds))
File "C:\Users\user\AppData\Local\Continuum\anaconda3\lib\site-packages\keras\utils\data_utils.py", line 406, in get_index
return _SHARED_SEQUENCES[uid][i]
File "C:\Users\user\AppData\Local\Continuum\anaconda3\lib\site-packages\keras_preprocessing\image\iterator.py", line 65, in __getitem__
return self._get_batches_of_transformed_samples(index_array)
File "C:\Users\user\AppData\Local\Continuum\anaconda3\lib\site-packages\keras_preprocessing\image\iterator.py", line 230, in _get_batches_of_transformed_samples
interpolation=self.interpolation)
File "C:\Users\user\AppData\Local\Continuum\anaconda3\lib\site-packages\keras_preprocessing\image\utils.py", line 110, in load_img
img = pil_image.fromarray(path)
File "C:\Users\user\AppData\Local\Continuum\anaconda3\lib\site-packages\PIL\Image.py", line 2643, in fromarray
arr = obj.__array_interface__
AttributeError: 'str' object has no attribute '__array_interface__'
Process finished with exit code 1
In the code, I am passing the paths of my training and testing datasets, and the error occurs when fitting the model on the training and validation data.
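One detail worth noting, although no answer is included here: the traceback shows keras_preprocessing's load_img() calling pil_image.fromarray(path) on a filename string, whereas the stock implementation opens the file by path. Since the question mentions editing Image.py, restoring the unmodified keras / keras_preprocessing / PIL packages is the most plausible fix. For reference, the unmodified loading step behaves roughly like this simplified sketch:
from PIL import Image as pil_image

def load_img(path, target_size=None):
    # Open the image file by its path; fromarray() is only meant for numpy arrays.
    img = pil_image.open(path)
    if target_size is not None:
        # PIL's resize() expects (width, height); Keras target_size is (height, width).
        img = img.resize((target_size[1], target_size[0]))
    return img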

Tensorflow model.fit() using a Dataset generator

I am using the Dataset API to generate training data and sort it into batches for a NN.
Here is a minimum working example of my code:
import tensorflow as tf
import numpy as np
import random

def my_generator():
    while True:
        x = np.random.rand(4, 20)
        y = random.randint(0, 11)
        label = tf.one_hot(y, depth=12)
        yield x.reshape(4, 20, 1), label

def my_input_fn():
    dataset = tf.data.Dataset.from_generator(lambda: my_generator(),
                                             output_types=(tf.float64, tf.int32))
    dataset = dataset.batch(32)
    iterator = dataset.make_one_shot_iterator()
    batch_features, batch_labels = iterator.get_next()
    return batch_features, batch_labels

if __name__ == "__main__":
    tf.enable_eager_execution()
    model = tf.keras.Sequential([tf.keras.layers.Flatten(input_shape=(4, 20, 1)),
                                 tf.keras.layers.Dense(128, activation=tf.nn.relu),
                                 tf.keras.layers.Dense(12, activation=tf.nn.softmax)])
    model.compile(optimizer='adam',
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])
    data_generator = my_input_fn()
    model.fit(data_generator)
The code fails using TensorFlow 1.13.1 at the model.fit() call with the following error:
Traceback (most recent call last):
File "scripts/min_working_example.py", line 37, in <module>
model.fit(data_generator)
File "~/.local/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py", line 880, in fit
validation_steps=validation_steps)
File "~/.local/lib/python3.6/site-packages/tensorflow/python/keras/engine/training_arrays.py", line 310, in model_iteration
ins_batch = slice_arrays(ins[:-1], batch_ids) + [ins[-1]]
File "~/.local/lib/python3.6/site-packages/tensorflow/python/keras/utils/generic_utils.py", line 526, in slice_arrays
return [None if x is None else x[start] for x in arrays]
File "~/.local/lib/python3.6/site-packages/tensorflow/python/keras/utils/generic_utils.py", line 526, in <listcomp>
return [None if x is None else x[start] for x in arrays]
File "~/.local/lib/python3.6/site-packages/tensorflow/python/ops/array_ops.py", line 654, in _slice_helper
name=name)
File "~/.local/lib/python3.6/site-packages/tensorflow/python/ops/array_ops.py", line 820, in strided_slice
shrink_axis_mask=shrink_axis_mask)
File "~/.local/lib/python3.6/site-packages/tensorflow/python/ops/gen_array_ops.py", line 9334, in strided_slice
_six.raise_from(_core._status_to_exception(e.code, message), None)
File "<string>", line 3, in raise_from
tensorflow.python.framework.errors_impl.InvalidArgumentError: Attr shrink_axis_mask has value 4294967295 out of range for an int32 [Op:StridedSlice] name: strided_slice/
I tried running the same code on a different machine using TensorFlow 2.0 (after removing the line tf.enable_eager_execution() because it runs eagerly by default) and I got the following error:
Traceback (most recent call last):
File "scripts/min_working_example.py", line 37, in <module>
model.fit(data_generator)
File "~/.local/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py", line 873, in fit
steps_name='steps_per_epoch')
File "~/.local/lib/python3.7/site-packages/tensorflow/python/keras/engine/training_arrays.py", line 352, in model_iteration
batch_outs = f(ins_batch)
File "~/.local/lib/python3.7/site-packages/tensorflow/python/keras/backend.py", line 3217, in __call__
outputs = self._graph_fn(*converted_inputs)
File "~/.local/lib/python3.7/site-packages/tensorflow/python/eager/function.py", line 558, in __call__
return self._call_flat(args)
File "~/.local/lib/python3.7/site-packages/tensorflow/python/eager/function.py", line 627, in _call_flat
outputs = self._inference_function.call(ctx, args)
File "~/.local/lib/python3.7/site-packages/tensorflow/python/eager/function.py", line 397, in call
(len(args), len(list(self.signature.input_arg))))
ValueError: Arguments and signature arguments do not match: 21 23
I tried changing model.fit() to model.fit_generator() but this fails on both TensorFlow versions too. On TF 1.13.1 I get the following error:
Traceback (most recent call last):
File "scripts/min_working_example.py", line 37, in <module>
model.fit_generator(data_generator)
File "~/.local/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py", line 1426, in fit_generator
initial_epoch=initial_epoch)
File "~/.local/lib/python3.6/site-packages/tensorflow/python/keras/engine/training_generator.py", line 115, in model_iteration
shuffle=shuffle)
File "~/.local/lib/python3.6/site-packages/tensorflow/python/keras/engine/training_generator.py", line 377, in convert_to_generator_like
num_samples = int(nest.flatten(data)[0].shape[0])
TypeError: __int__ returned non-int (type NoneType)
and on TF 2.0 I get the following error:
Traceback (most recent call last):
File "scripts/min_working_example.py", line 37, in <module>
model.fit_generator(data_generator)
File "~/.local/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py", line 1515, in fit_generator
steps_name='steps_per_epoch')
File "~/.local/lib/python3.7/site-packages/tensorflow/python/keras/engine/training_generator.py", line 140, in model_iteration
shuffle=shuffle)
File "~/.local/lib/python3.7/site-packages/tensorflow/python/keras/engine/training_generator.py", line 477, in convert_to_generator_like
raise ValueError('You must specify `batch_size`')
ValueError: You must specify `batch_size`
yet batch_size is not a recognized keyword for fit_generator().
I am puzzled by these error messages and I would appreciate if anyone can shed some light on them, or point out what I am doing wrong.
While the origin of the errors is still nebulous, I have found a solution that makes the code work. I'll post it here in case it is useful to anyone in a similar situation.
Basically, I changed the my_input_fn() into a generator and used model.fit_generator() as follows:
import tensorflow as tf
import numpy as np
import random

def my_generator(total_items):
    i = 0
    while i < total_items:
        x = np.random.rand(4, 20)
        y = random.randint(0, 11)
        label = tf.one_hot(y, depth=12)
        yield x.reshape(4, 20, 1), label
        i += 1

def my_input_fn(total_items, epochs):
    dataset = tf.data.Dataset.from_generator(lambda: my_generator(total_items),
                                             output_types=(tf.float64, tf.int64))
    dataset = dataset.repeat(epochs)
    dataset = dataset.batch(32)
    iterator = dataset.make_one_shot_iterator()
    while True:
        batch_features, batch_labels = iterator.get_next()
        yield batch_features, batch_labels

if __name__ == "__main__":
    tf.enable_eager_execution()
    model = tf.keras.Sequential([tf.keras.layers.Flatten(input_shape=(4, 20, 1)),
                                 tf.keras.layers.Dense(64, activation=tf.nn.relu),
                                 tf.keras.layers.Dense(12, activation=tf.nn.softmax)])
    model.compile(optimizer='adam',
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])
    total_items = 200
    batch_size = 32
    epochs = 10
    num_batches = int(total_items / batch_size)
    train_data_generator = my_input_fn(total_items, epochs)
    model.fit_generator(generator=train_data_generator, steps_per_epoch=num_batches, epochs=epochs, verbose=1)
EDIT
As implied by giser_yugang in a comment, it is also possible to do it with my_input_fn() as a function returning the dataset instead of the individual batches.
def my_input_fn(total_items, epochs):
    dataset = tf.data.Dataset.from_generator(lambda: my_generator(total_items),
                                             output_types=(tf.float64, tf.int64))
    dataset = dataset.repeat(epochs)
    dataset = dataset.batch(32)
    return dataset

if __name__ == "__main__":
    tf.enable_eager_execution()
    model = tf.keras.Sequential([tf.keras.layers.Flatten(input_shape=(4, 20, 1)),
                                 tf.keras.layers.Dense(64, activation=tf.nn.relu),
                                 tf.keras.layers.Dense(12, activation=tf.nn.softmax)])
    model.compile(optimizer='adam',
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])
    total_items = 100
    batch_size = 32
    epochs = 10
    num_batches = int(total_items / batch_size)
    dataset = my_input_fn(total_items, epochs)
    model.fit_generator(dataset, epochs=epochs, steps_per_epoch=num_batches)
There does not appear to be any average performance difference between the approaches.
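As an aside (not part of the original answer): in TensorFlow 2.x, fit_generator() is deprecated and model.fit() accepts a tf.data.Dataset directly, so the dataset-returning version of my_input_fn() can be used without any generator wrapper:
# TF 2.x only: pass the dataset straight to fit().
dataset = my_input_fn(total_items, epochs)
model.fit(dataset, epochs=epochs, steps_per_epoch=num_batches)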

Keras: model definition in a loop: 'list' object has no attribute

I have a strong suspicion there is a better way of setting this up, and suggestions are welcome, but here I am:
I would like to do multi-class classification of time-series data using a recurrent neural network in Keras. My model definition goes like this:
model = Sequential()
model.add(LSTM(out_dim, input_shape=(X_train.shape[1], X_train.shape[2]), return_sequences=False))
model.add(Dense(num_classes, activation='sigmoid'))
optim_type = ["rmsprop", "adam", "sgd"]
for optim_val in optim_type:
    if optim_val == "sgd" and default_val == False:
        ...
    else:
        optim_use = optim_type
    model.compile(loss='categorical_crossentropy', optimizer=optim_use, metrics=['accuracy'])
    hist = model.fit(X_train, dummy_y, validation_data=(X_test, dummy_y_test), nb_epoch=epochs, batch_size=b_size)
The error I get is:
Using TensorFlow backend.
Traceback (most recent call last):
File "../rnn_new.py", line 213, in <module>
network_LSTM_rnn(data_in, out_dim, optim_type, b_size, save_file, num_classes, epochs, default_val)
File "../rnn_new.py", line 166, in network_LSTM_rnn
hist = model.fit(X_train, dummy_y, validation_data=(X_test, dummy_y_test), nb_epoch = epochs, batch_size = b_size)
File "/user/pkgs/anaconda2/envs/my_env/lib/python2.7/site-packages/keras/models.py", line 627, in fit
sample_weight=sample_weight)
File "/user/pkgs/anaconda2/envs/my_env/lib/python2.7/site-packages/keras/engine/training.py", line 1097, in fit
self._make_train_function()
File "/user/pkgs/anaconda2/envs/my_env/lib/python2.7/site-packages/keras/engine/training.py", line 712, in _make_train_function
training_updates = self.optimizer.get_updates(self._collected_trainable_weights,
AttributeError: 'list' object has no attribute 'get_updates'
How can I fix this error? Is there anything else you need me to post?
It turns out I just can't name variables: where I have optim_use = optim_type, I should have optim_use = optim_val. Thank you anyway!
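For clarity, a sketch of the corrected loop (assuming default_val and the elided sgd branch behave as in the original code):
for optim_val in optim_type:
    if optim_val == "sgd" and default_val == False:
        ...
    else:
        optim_use = optim_val  # was optim_type (the whole list), which caused the error
    model.compile(loss='categorical_crossentropy', optimizer=optim_use, metrics=['accuracy'])
    hist = model.fit(X_train, dummy_y, validation_data=(X_test, dummy_y_test),
                     nb_epoch=epochs, batch_size=b_size)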
