I am using the Keras ImageDataGenerator to preprocess the inputs to my CNN. I want to do basic preprocessing that scales the image pixels to values from -1 to 1, as was done in the paper on the MobileNet architecture.
My datagenerator only defines the preprocessing function:
train_datagen = ImageDataGenerator(
    preprocessing_function=preprocess_input
)
My preprocess_input function:
def preprocess_input(img):
    pix = np.asarray(img)
    pix = pix.astype(np.float32)
    pix = pix / 255.0
    pix = pix * 2
    return pix
This is giving me the following error:
Traceback (most recent call last):
  File "finetune_mobilenet.py", line 206, in <module>
    train(folder_train, folder_dev, './models/')
  File "finetune_mobilenet.py", line 150, in train
    callbacks=callbacks_list)
  File "/usr/local/lib/python2.7/dist-packages/keras/legacy/interfaces.py", line 91, in wrapper
    return func(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 2192, in fit_generator
    generator_output = next(output_generator)
  File "/usr/local/lib/python2.7/dist-packages/keras/utils/data_utils.py", line 584, in get
    six.raise_from(StopIteration(e), e)
  File "/usr/local/lib/python2.7/dist-packages/six.py", line 737, in raise_from
    raise value
StopIteration: 'tuple' object cannot be interpreted as an index
I also tried the original preprocessing function that Keras provides for the MobileNet architecture, but that one fails too. Can you tell me what I need to change to zero-center my image data?
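For what it's worth, the function above scales to [0, 2], not [-1, 1]: after dividing by 255 and multiplying by 2, it never subtracts 1. A minimal sketch of the zero-centered scaling described in the MobileNet paper (to my knowledge this is also what keras.applications.mobilenet.preprocess_input computes):

import numpy as np

def preprocess_input(img):
    # Map pixel values from [0, 255] to [-1, 1] (zero-centered).
    pix = np.asarray(img).astype(np.float32)
    pix = pix / 127.5 - 1.0  # same as pix / 255.0 * 2.0 - 1.0
    return pix

Note that the StopIteration in the traceback is just the generator machinery re-wrapping whatever exception was raised inside the worker (see the six.raise_from(StopIteration(e), e) frame), so the 'tuple' message is the underlying error rather than a problem with the scaling itself.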
I have the following augmentations in my code:
import albumentations as A
import torch
def get_train_transform():
    transform = A.Compose([
        A.RandomRotate90(),
        A.Flip(),
        A.Transpose(),
        A.OneOf([
            A.IAAAdditiveGaussianNoise(),
            A.GaussNoise(),
        ], p=0.2),
        A.OneOf([
            A.MotionBlur(p=0.2),
            A.MedianBlur(blur_limit=3, p=0.1),
            A.Blur(blur_limit=3, p=0.1),
        ], p=0.2),
        A.ShiftScaleRotate(shift_limit=0.0625, scale_limit=0.2, rotate_limit=45, p=0.2),
        A.OneOf([
            A.OpticalDistortion(p=0.3),
            A.GridDistortion(p=0.1),
            A.IAAPiecewiseAffine(p=0.3),
        ], p=0.2),
        A.OneOf([
            A.CLAHE(clip_limit=2),
            A.IAASharpen(),
            A.IAAEmboss(),
            A.RandomBrightnessContrast(),
        ], p=0.3),
        A.HueSaturationValue(p=0.3),
        A.Normalize(mean=(0.5274, 0.4120, 0.3841), std=(0.1036, 0.0970, 0.0930)),
    ])
    return transform

def get_val_transform():
    transform = A.Compose([A.Normalize(mean=(0.5274, 0.4120, 0.3841), std=(0.1036, 0.0970, 0.0930))])
    return transform
Here is how I call these transforms:
train_dataset = MyDataset(train_data_path, 224, 224, train_labels_path, get_train_transform())
# The sampler, which takes into account an unbalanced dataset, is used only at the training step
train_loader = DataLoader(train_dataset, batch_size=8, num_workers=0, sampler=sampler)
# With validation data, the only transformation is normalization
valid_dataset = MyDataset(test_data_path, 224, 224, test_labels_path, get_val_transform())
valid_loader = DataLoader(valid_dataset, batch_size=8, shuffle=False, num_workers=0)
However, once I run the code, it returns the following error:
File "dataset.py", line 41, in __getitem__
image_resized = self.transforms(image=image_resized)["image"]
File "/opt/conda/lib/python3.7/site-packages/albumentations/core/composition.py", line 205, in __call__
data = t(**data)
File "/opt/conda/lib/python3.7/site-packages/albumentations/core/composition.py",
line 309, in __call__
data = t(force_apply=True, **data)
File "/opt/conda/lib/python3.7/site-
packages/albumentations/core/transforms_interface.py", line 118, in __call__
return self.apply_with_params(params, **kwargs)
File "/opt/conda/lib/python3.7/site-
packages/albumentations/core/transforms_interface.py", line 131, in apply_with_params
res[key] = target_function(arg, **dict(params, **target_dependencies))
File "/opt/conda/lib/python3.7/site-packages/albumentations/augmentations/transforms.py", line 1310, in apply
return F.clahe(img, clip_limit, self.tile_grid_size)
File "/opt/conda/lib/python3.7/site-packages/albumentations/augmentations/utils.py", line 122, in wrapped_function
result = func(img, *args, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/albumentations/augmentations/functional.py", line 452, in clahe
raise TypeError("clahe supports only uint8 inputs")
TypeError: clahe supports only uint8 inputs
I read here that it is supposedly just a matter of putting the normalization as the last step of the augmentation pipeline, which is exactly what I am doing. So what is my mistake here?
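One hint the traceback does give: the failure comes from CLAHE, which runs before Normalize, so the array entering the pipeline is apparently already float rather than uint8. Below is a sketch of a __getitem__ that keeps the image uint8 until the transforms run; the cv2 calls and the attribute names (self.image_paths, self.labels, self.width, self.height) are illustrative assumptions, not taken from the original dataset.py:

import cv2

def __getitem__(self, idx):
    # OpenCV loads images as uint8 BGR by default.
    image = cv2.imread(self.image_paths[idx])
    image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
    # cv2.resize preserves the uint8 dtype, so CLAHE still accepts the array.
    image_resized = cv2.resize(image, (self.width, self.height))
    # Do not divide by 255 before the transforms: A.Normalize already
    # rescales with max_pixel_value=255.0 by default.
    image_out = self.transforms(image=image_resized)["image"]
    return image_out, self.labels[idx]

If the image is converted to float (for example by dividing by 255) anywhere before self.transforms is called, CLAHE raises exactly this TypeError even though Normalize is last.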
I have a Detectron2 model, trained to identify specific items, that runs on a backend server. I would like to make this model available on iOS devices by converting it to a Core ML model using coremltools v6.1. I used the export_model.py script provided by Facebook to create a TorchScript model, but when I try to convert it to Core ML I get a KeyError.
def save_core_ml_package(scripted_model):
    # Using image_input in the inputs parameter:
    # Convert to Core ML neural network using the Unified Conversion API.
    h = 224
    w = 224
    ctmodel = ct.convert(scripted_model,
                         inputs=[ct.ImageType(shape=(1, 3, h, w),
                                              color_layout=ct.colorlayout.RGB)]
                         )
    # Save the converted model.
    ctmodel.save("newmodel.mlmodel")
I get the following error:
Support for converting Torch Script Models is experimental. If possible you should use a traced model for conversion.
Traceback (most recent call last):
  File "/usr/repo/URCV/src/Python/pytorch_to_torchscript.py", line 101, in <module>
    save_trace_to_core_ml_package(test_model, outdir=outdir)
  File "/usr/repo/URCV/src/Python/pytorch_to_torchscript.py", line 46, in save_trace_to_core_ml_package
    ctmodel = ct.convert(
  File "/opt/python-venv/lib/python3.8/site-packages/coremltools/converters/_converters_entry.py", line 444, in convert
    mlmodel = mil_convert(
  File "/opt/python-venv/lib/python3.8/site-packages/coremltools/converters/mil/converter.py", line 190, in mil_convert
    return _mil_convert(model, convert_from, convert_to, ConverterRegistry, MLModel, compute_units, **kwargs)
  File "/opt/python-venv/lib/python3.8/site-packages/coremltools/converters/mil/converter.py", line 217, in _mil_convert
    proto, mil_program = mil_convert_to_proto(
  File "/opt/python-venv/lib/python3.8/site-packages/coremltools/converters/mil/converter.py", line 282, in mil_convert_to_proto
    prog = frontend_converter(model, **kwargs)
  File "/opt/python-venv/lib/python3.8/site-packages/coremltools/converters/mil/converter.py", line 112, in __call__
    return load(*args, **kwargs)
  File "/opt/python-venv/lib/python3.8/site-packages/coremltools/converters/mil/frontend/torch/load.py", line 56, in load
    converter = TorchConverter(torchscript, inputs, outputs, cut_at_symbols, specification_version)
  File "/opt/python-venv/lib/python3.8/site-packages/coremltools/converters/mil/frontend/torch/converter.py", line 160, in __init__
    raw_graph, params_dict = self._expand_and_optimize_ir(self.torchscript)
  File "/opt/python-venv/lib/python3.8/site-packages/coremltools/converters/mil/frontend/torch/converter.py", line 486, in _expand_and_optimize_ir
    graph, params_dict = TorchConverter._jit_pass_lower_graph(graph, torchscript)
  File "/opt/python-venv/lib/python3.8/site-packages/coremltools/converters/mil/frontend/torch/converter.py", line 431, in _jit_pass_lower_graph
    _lower_graph_block(graph)
  File "/opt/python-venv/lib/python3.8/site-packages/coremltools/converters/mil/frontend/torch/converter.py", line 410, in _lower_graph_block
    module = getattr(node_to_module_map[_input], attr_name)
KeyError: images.2 defined in (%images.2 : __torch__.detectron2.structures.image_list.ImageList = prim::CreateObject()
)
From the error message it looks like you are using a TorchScript model:
Support for converting Torch Script Models is experimental. If
possible you should use a traced model for conversion.
If possible, try to use a traced model instead, e.g.:
dummy_input = torch.randn(1, 3, 224, 224)  # (batch, channels, height, width)
traceable_model = torch.jit.trace(model, dummy_input)
followed by your original code:
ctmodel = ct.convert(traceable_model, ...
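Putting the two steps together, here is a sketch, assuming the plain torch.nn.Module is available as model and accepts a single image tensor (Detectron2 models often take structured inputs, so a bare trace may not work without the deployment wrappers that export_model.py provides):

import torch
import coremltools as ct

model.eval()  # trace in inference mode
dummy_input = torch.randn(1, 3, 224, 224)  # (batch, channels, height, width)
traced_model = torch.jit.trace(model, dummy_input)

ctmodel = ct.convert(
    traced_model,
    inputs=[ct.ImageType(shape=(1, 3, 224, 224), color_layout=ct.colorlayout.RGB)],
)
ctmodel.save("newmodel.mlmodel")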
I am building a simple recurrent neural network, consisting of an LSTM layer followed by a fully connected layer, to classify each row of data. My data 'x_train' is an ndarray with shape (210, 240, 1), and 'y_train' is an ndarray with shape (210,). The model builds and its output looks normal. However, when I run model.fit(), I always get the error: AttributeError: 'NoneType' object has no attribute 'dtype'.
I don't know what's wrong with the following code.
#%%
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
import tensorflow as tf
import pandas as pd
import numpy as np
#%%
model = tf.keras.models.Sequential([
    tf.keras.layers.LSTM(8),
    tf.keras.layers.Dense(units=2)
])
#%%
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
model.compile(optimizer=tf.keras.optimizers.Adam(0.001), loss=loss_fn)
#%%
x_train = np.random.randn(210,240,1)
y_train = np.random.binomial(1, 0.5,(210,))
#%%
model.fit(x_train, y_train, epochs=20)
The following is the full error information:
Traceback (most recent call last):
  File "D:\运筹优化\机器学习课程项目\时序数据预测\rnn_exploration.py", line 68, in <module>
    model.fit(x_train, y_train, epochs=20)
  File "D:\python\anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow_core\python\keras\engine\training.py", line 819, in fit
    use_multiprocessing=use_multiprocessing)
  File "D:\python\anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow_core\python\keras\engine\training_v2.py", line 235, in fit
    use_multiprocessing=use_multiprocessing)
  File "D:\python\anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow_core\python\keras\engine\training_v2.py", line 593, in _process_training_inputs
    use_multiprocessing=use_multiprocessing)
  File "D:\python\anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow_core\python\keras\engine\training_v2.py", line 646, in _process_inputs
    x, y, sample_weight=sample_weights)
  File "D:\python\anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow_core\python\keras\engine\training.py", line 2360, in _standardize_user_data
    self._compile_from_inputs(all_inputs, y_input, x, y)
  File "D:\python\anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow_core\python\keras\engine\training.py", line 2618, in _compile_from_inputs
    experimental_run_tf_function=self._experimental_run_tf_function)
  File "D:\python\anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow_core\python\training\tracking\base.py", line 457, in _method_wrapper
    result = method(self, *args, **kwargs)
  File "D:\python\anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow_core\python\keras\engine\training.py", line 416, in compile
    endpoint.create_training_target(t, run_eagerly=self.run_eagerly)
  File "D:\python\anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow_core\python\keras\engine\training.py", line 3023, in create_training_target
    self.loss_fn, K.dtype(self.output))
  File "D:\python\anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow_core\python\keras\backend.py", line 1237, in dtype
    return x.dtype.base_dtype.name
AttributeError: 'NoneType' object has no attribute 'dtype'
Any help would be appreciated!
The problem still exists with numpy 1.21.3; downgrading to 1.19.5 works.
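Independent of the numpy version, the traceback shows K.dtype(self.output) failing because the Sequential model has no defined output yet when the compile targets are created. Giving the first layer an explicit input_shape, so the model is built up front, may sidestep this; a sketch, assuming the (240, 1) timestep/feature shape from the question:

model = tf.keras.models.Sequential([
    tf.keras.layers.LSTM(8, input_shape=(240, 1)),
    tf.keras.layers.Dense(units=2)
])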
I am working on transfer learning for an image classification task.
The training generator is as follows:
train_generator = train_datagen.flow_from_directory(
    '/home/idu/Desktop/COV19D/train/',
    color_mode="grayscale",
    target_size=(512, 512),  # All images are 512 * 512
    batch_size=batch_size,
    classes=['covid', 'non-covid'],
    class_mode='binary')
The code for the transferred model is as follows:
SIZE = 512
VGG_model = VGG16(include_top=False, weights=None, input_shape=(SIZE, SIZE, 1))
for layer in VGG_model.layers:
    layer.trainable = False
feature_extractor = VGG_model.predict(train_generator)
The last command throws the error:
Traceback (most recent call last):
  File "<ipython-input-28-b9bad68819ec>", line 1, in <module>
    feature_extractor=VGG_model.predict(train_generator)
  File "/home/idu/.local/lib/python3.6/site-packages/keras/engine/training.py", line 1681, in predict
    steps_per_execution=self._steps_per_execution)
  File "/home/idu/.local/lib/python3.6/site-packages/keras/engine/data_adapter.py", line 1348, in get_data_handler
    return DataHandler(*args, **kwargs)
  File "/home/idu/.local/lib/python3.6/site-packages/keras/engine/data_adapter.py", line 1150, in __init__
    model=model)
  File "/home/idu/.local/lib/python3.6/site-packages/keras/engine/data_adapter.py", line 793, in __init__
    peek, x = self._peek_and_restore(x)
  File "/home/idu/.local/lib/python3.6/site-packages/keras/engine/data_adapter.py", line 850, in _peek_and_restore
    peek = next(x)
  File "/home/idu/.local/lib/python3.6/site-packages/keras_preprocessing/image/iterator.py", line 104, in __next__
    return self.next(*args, **kwargs)
  File "/home/idu/.local/lib/python3.6/site-packages/keras_preprocessing/image/iterator.py", line 116, in next
    return self._get_batches_of_transformed_samples(index_array)
  File "/home/idu/.local/lib/python3.6/site-packages/keras_preprocessing/image/iterator.py", line 231, in _get_batches_of_transformed_samples
    x = img_to_array(img, data_format=self.data_format)
  File "/home/idu/.local/lib/python3.6/site-packages/keras_preprocessing/image/utils.py", line 309, in img_to_array
    x = np.asarray(img, dtype=dtype)
  File "/home/idu/.local/lib/python3.6/site-packages/numpy/core/_asarray.py", line 83, in asarray
    return array(a, dtype, copy=False, order=order)
TypeError: __array__() takes 1 positional argument but 2 were given
How can I overcome this error to do the feature extraction?
Thank you.
I tried downgrading tensorflow to 2.4, but that did not work. I then downgraded my Python version from 3.10.2 to 3.9.9 and re-installed scipy using the following command: python -m pip install --user numpy scipy matplotlib ipython jupyter pandas sympy nose. This solved the issue.
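To confirm an environment fix without re-running the whole pipeline: the traceback bottoms out in np.asarray(img, dtype=dtype) on a PIL image, so a minimal reproduction, independent of Keras, is (a sketch):

import numpy as np
from PIL import Image

img = Image.new("L", (512, 512))  # a blank grayscale image, like the generator yields
arr = np.asarray(img, dtype="float32")  # raises the same TypeError on incompatible Pillow/NumPy pairs
print(arr.shape)  # (512, 512) once the versions are compatible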
I'm following the tutorial here:
in order to create a Python program that will produce a deep-dream style image and save it onto disk. I thought that changes to the following lines should do the trick:
img = run_deep_dream_with_octaves(img=original_img, step_size=0.01)
display.clear_output(wait=True)
img = tf.image.resize(img, base_shape)
img = tf.image.convert_image_dtype(img/255.0, dtype=tf.uint8)
tf.compat.v1.enable_eager_execution()
fname = '2.jpg'
with tf.compat.v1.Session() as sess:
    enc = tf.io.encode_jpeg(img)
    fwrite = tf.io.write_file(tf.constant(fname), enc)
    result = sess.run(fwrite)
The key line is encode_jpeg; however, this gives me the following error:
Traceback (most recent call last):
  File "main.py", line 246, in <module>
    enc = tf.io.encode_jpeg(img)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/tensorflow/python/ops/gen_image_ops.py", line 1496, in encode_jpeg
    name=name)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 470, in _apply_op_helper
    preferred_dtype=default_dtype)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1465, in convert_to_tensor
    raise RuntimeError("Attempting to capture an EagerTensor without "
RuntimeError: Attempting to capture an EagerTensor without building a function.
You can simply convert the "img" tensor into a numpy array and then save it, since you have eager execution enabled (it's enabled by default in TF 2.0).
So, the modified code for saving the image would be:
import numpy as np
import PIL.Image

img = run_deep_dream_with_octaves(img=original_img, step_size=0.01)
display.clear_output(wait=True)
img = tf.image.resize(img, base_shape)
img = tf.image.convert_image_dtype(img/255.0, dtype=tf.uint8)
fname = '2.jpg'
PIL.Image.fromarray(np.array(img)).save(fname)
You don't have to use sessions in TF 2.0 to get the values out of a tensor.
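As an aside, EagerTensors also expose .numpy() directly, so an equivalent last line (same assumption that img is a uint8 image tensor) would be:

PIL.Image.fromarray(img.numpy()).save(fname)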