Can't save TensorFlow image to file - Python

I need to resize an image to a certain size and save it to a file, so I chose the tf.image.resize_image_with_crop_or_pad function:
import tensorflow as tf
image_decoded = tf.image.decode_jpeg(tf.read_file('1.jpg'), channels=3)
cropped = tf.image.resize_image_with_crop_or_pad(image_decoded, 200, 200)
tf.write_file('2.jpg', cropped)
This failed with the following errors:
Traceback (most recent call last):
File "/home/test/anaconda3/lib/python3.5/site-packages/tensorflow/python/framework/op_def_library.py", line 490, in apply_op
preferred_dtype=default_dtype)
File "/home/test/anaconda3/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 669, in convert_to_tensor
ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
File "/home/test/anaconda3/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 583, in _TensorTensorConversionFunction
% (dtype.name, t.dtype.name, str(t)))
ValueError: Tensor conversion requested dtype string for Tensor with dtype uint8: 'Tensor("control_dependency_3:0", shape=(200, 200, 3), dtype=uint8)'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "train.py", line 15, in <module>
tf.write_file('2.jpg', cropped)
File "/home/test/anaconda3/lib/python3.5/site-packages/tensorflow/python/ops/gen_io_ops.py", line 694, in write_file
contents=contents, name=name)
File "/home/test/anaconda3/lib/python3.5/site-packages/tensorflow/python/framework/op_def_library.py", line 508, in apply_op
(prefix, dtypes.as_dtype(input_arg.type).name))
TypeError: Input 'contents' of 'WriteFile' Op has type uint8 that does not match expected type of string.
I tried to convert the tensor to a string using tf.as_string(), but it crashed with:
TypeError: DataType uint8 for attr 'T' not in list of allowed values: int32, int64, complex64, float32, float64, bool, int8
I use TensorFlow v0.12.0-rc0 on Linux Mint.

You first need to encode the image from a tensor to JPEG data and then save it. Moreover, you have to run the op in a session to actually evaluate it:
import tensorflow as tf
image_decoded = tf.image.decode_jpeg(tf.read_file('1.jpg'), channels=3)
cropped = tf.image.resize_image_with_crop_or_pad(image_decoded, 200, 200)
enc = tf.image.encode_jpeg(cropped)
fname = tf.constant('2.jpg')
fwrite = tf.write_file(fname, enc)
sess = tf.Session()
result = sess.run(fwrite)
EDIT: The same thing with TensorFlow 2 (compatibility mode):
fname = '2.jpg'
with tf.compat.v1.Session() as sess:
    image_decoded = tf.image.decode_jpeg(tf.io.read_file('1.jpg'), channels=3)
    cropped = tf.image.resize_with_crop_or_pad(image_decoded, 200, 200)
    enc = tf.image.encode_jpeg(cropped)
    fwrite = tf.io.write_file(tf.constant(fname), enc)
    result = sess.run(fwrite)
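For reference, under native TF2 eager execution no session is needed at all; the following minimal sketch (assuming a TF 2.x install where the tf.io ops are available) does the same thing:

import tensorflow as tf

# Eager mode: each op executes immediately, so no Session is required.
image_decoded = tf.io.decode_jpeg(tf.io.read_file('1.jpg'), channels=3)
cropped = tf.image.resize_with_crop_or_pad(image_decoded, 200, 200)
tf.io.write_file('2.jpg', tf.io.encode_jpeg(cropped))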

Related

Converting a TensorFlow object into a JPEG on the local drive

I'm following the tutorial here:
in order to create a Python program that will create a deep-dream style image and save it onto disk. I thought that changes to the following lines should do the trick:
img = run_deep_dream_with_octaves(img=original_img, step_size=0.01)
display.clear_output(wait=True)
img = tf.image.resize(img, base_shape)
img = tf.image.convert_image_dtype(img/255.0, dtype=tf.uint8)
tf.compat.v1.enable_eager_execution()
fname = '2.jpg'
with tf.compat.v1.Session() as sess:
    enc = tf.io.encode_jpeg(img)
    fwrite = tf.io.write_file(tf.constant(fname), enc)
    result = sess.run(fwrite)
The key line is encode_jpeg; however, this gives me the following error:
Traceback (most recent call last):
File "main.py", line 246, in <module>
enc = tf.io.encode_jpeg(img)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/tensorflow/python/ops/gen_image_ops.py", line 1496, in encode_jpeg
name=name)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 470, in _apply_op_helper
preferred_dtype=default_dtype)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1465, in convert_to_tensor
raise RuntimeError("Attempting to capture an EagerTensor without "
RuntimeError: Attempting to capture an EagerTensor without building a function.
You can simply convert the "img" tensor into a NumPy array and then save it, since you have eager execution enabled (it's enabled by default in TF 2.0).
So, the modified code for saving the image will be:
img = run_deep_dream_with_octaves(img=original_img, step_size=0.01)
display.clear_output(wait=True)
img = tf.image.resize(img, base_shape)
img = tf.image.convert_image_dtype(img/255.0, dtype=tf.uint8)
fname = '2.jpg'
PIL.Image.fromarray(np.array(img)).save(fname)
You don't have to use sessions in TF 2.0 to get the values from a tensor.
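Alternatively, if you would rather stay inside TensorFlow instead of going through PIL, the eager ops can write the file directly. A minimal sketch, assuming img is a uint8 tensor of shape (height, width, 3) as produced above:

import tensorflow as tf

# Encode the uint8 image tensor to JPEG bytes and write them to disk;
# in eager mode this executes immediately, no Session required.
tf.io.write_file('2.jpg', tf.io.encode_jpeg(img))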

Python Keras TypeError: Expected binary or unicode string, got 1

By running the test.py script, which contains a Keras implementation of Faster R-CNN, I received this error:
raise TypeError('Expected binary or unicode string, got %r' %
TypeError: Expected binary or unicode string, got 1
followed by:
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File test_frcnn.py", line 125, in <module>
shared_layers = nn.nn_base(img_input, trainable=False)
File resnet.py", line 183, in nn_base
x = FixedBatchNormalization(axis=bn_axis, name='bn_conv1')(x)
raise TypeError("Failed to convert object of type %s to Tensor. "
TypeError: Failed to convert object of type <class 'list'> to Tensor. Contents: [1, None, 1, 1].
Consider casting elements to a supported type.
This is very strange behaviour, since it returns this error on my laptop while it runs perfectly on Google Colab.
What is it?
Is it possible that I get this error because I trained the model using the Colab GPU, while I'm now running the test on my laptop using the CPU?
The complete traceback:
Traceback (most recent call last):
File test.py", line 125, in <module>
shared_layers = nn.nn_base(img_input, trainable=False)
resnet.py", line 183, in nn_base
x = FixedBatchNormalization(axis=bn_axis, name='bn_conv1')(x)
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/keras/backend/tensorflow_backend.py", line 75, in symbolic_fn_wrapper
return func(*args, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/keras/engine/base_layer.py", line 489, in __call__
output = self.call(inputs, **kwargs)
File "/Users/federicolisi/PycharmProjects/neural_networks_GIT/MakeUp_v2/keras-frcnn/keras_frcnn/FixedBatchNormalization.py", line 66, in call
broadcast_running_mean = K.reshape(self.running_mean, broadcast_shape)
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/keras/backend/tensorflow_backend.py", line 2440, in reshape
return tf.reshape(x, shape)
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/tensorflow/python/ops/array_ops.py", line 193, in reshape
result = gen_array_ops.reshape(tensor, shape, name)
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/tensorflow/python/ops/gen_array_ops.py", line 8086, in reshape
_, _, _op, _outputs = _op_def_library._apply_op_helper(
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/tensorflow/python/framework/op_def_library.py", line 473, in _apply_op_helper
raise err
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/tensorflow/python/framework/op_def_library.py", line 465, in _apply_op_helper
values = ops.convert_to_tensor(
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/tensorflow/python/framework/ops.py", line 1341, in convert_to_tensor
ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/tensorflow/python/framework/constant_op.py", line 321, in _constant_tensor_conversion_function
return constant(v, dtype=dtype, name=name)
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/tensorflow/python/framework/constant_op.py", line 261, in constant
return _constant_impl(value, dtype, shape, name, verify_shape=False,
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/tensorflow/python/framework/constant_op.py", line 298, in _constant_impl
tensor_util.make_tensor_proto(
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/tensorflow/python/framework/tensor_util.py", line 545, in make_tensor_proto
raise TypeError("Failed to convert object of type %s to Tensor. "
TypeError: Failed to convert object of type <class 'list'> to Tensor. Contents: [1, None, 1, 1]. Consider casting elements to a supported type.
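As an illustration (independent of the question's code), the final conversion failure can be reproduced in isolation: tf.reshape needs a shape whose entries are all concrete integers (or -1 for a single unknown dimension), and a Python list containing None — here coming from an undefined dimension of the layer's input shape — is exactly what convert_to_tensor rejects:

import tensorflow as tf

x = tf.zeros([1, 4, 1, 1])
# A shape list containing None cannot be converted to a tensor; depending
# on the TF version this surfaces as a TypeError or a ValueError.
try:
    tf.reshape(x, [1, None, 1, 1])
except (TypeError, ValueError) as e:
    print(e)
# Using -1 for the single unknown dimension works instead:
y = tf.reshape(x, [1, -1, 1, 1])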

TensorFlow/Keras: Why do I get "ValueError: Incompatible conversion from float32 to uint8" when calling fit?

I use TensorFlow 1.12 with eager execution. When I call
model.fit(train, steps_per_epoch=int(np.ceil(num_train_samples / BATCH_SIZE)), epochs=NUM_EPOCHS, batch_size=BATCH_SIZE, validation_data=val, validation_steps=int(np.ceil(num_val_samples / BATCH_SIZE)))
I get the following error:
ValueError: Incompatible type conversion requested to type 'uint8' for variable of type 'float32'
As far as I know, I am not converting from uint8 to float32 anywhere, at least not explicitly.
My datasets are generated as follows:
train = tf.data.Dataset.from_generator(generator=train_sample_fetcher, output_types=(tf.uint8, tf.float32))
train = train.repeat()
train = train.batch(BATCH_SIZE)
train = train.shuffle(10)
val = tf.data.Dataset.from_generator(generator=val_sample_fetcher, output_types=(tf.uint8, tf.float32))
employing the following generator functions:
def train_sample_fetcher():
    return sample_fetcher()

def val_sample_fetcher():
    return sample_fetcher(is_validations=True)

def sample_fetcher(is_validations=False):
    sample_names = [filename[:-4] for filename in os.listdir(DIR_DATASET + "ndarrays/")]
    if not is_validations: sample_names = sample_names[:int(len(sample_names) * TRAIN_VAL_SPLIT)]
    else: sample_names = sample_names[int(len(sample_names) * TRAIN_VAL_SPLIT):]
    for sample_name in sample_names:
        rgb = tf.image.decode_jpeg(tf.read_file(DIR_DATASET + sample_name + ".jpg"))
        rgb = tf.image.resize_images(rgb, (HEIGHT, WIDTH))
        #d = tf.image.decode_jpeg(tf.read_file(DIR_DATASET + "depth/" + sample_name + ".jpg"))
        #d = tf.image.resize_images(d, (HEIGHT, WIDTH))
        #rgbd = tf.concat([rgb,d], axis=2)
        onehots = tf.convert_to_tensor(np.load(DIR_DATASET + "ndarrays/" + sample_name + ".npy"), dtype=tf.float32)
        yield rgb, onehots
---------------------------------------------------------------------------------
For reference, the full stacktrace:
Traceback (most recent call last):
File "tensorflow/python/ops/gen_nn_ops.py", line 976, in conv2d
"data_format", data_format, "dilations", dilations)
tensorflow.python.eager.core._FallbackException: Expecting int64_t value for attr strides, got numpy.int32
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "dir/to/my_script.py", line 100, in <module>
history = model.fit(train, steps_per_epoch=int(np.ceil(num_train_samples / BATCH_SIZE)), epochs=NUM_EPOCHS, batch_size=BATCH_SIZE, validation_data=val, validation_steps=int(np.ceil(num_val_samples / BATCH_SIZE)))
File "/tensorflow/python/keras/engine/training.py", line 1614, in fit
validation_steps=validation_steps)
File "/tensorflow/python/keras/engine/training_eager.py", line 705, in fit_loop
batch_size=batch_size)
File "/tensorflow/python/keras/engine/training_eager.py", line 251, in iterator_fit_loop
model, x, y, sample_weights=sample_weights, training=True)
File "/tensorflow/python/keras/engine/training_eager.py", line 511, in _process_single_batch
training=training)
File "/tensorflow/python/keras/engine/training_eager.py", line 90, in _model_loss
outs, masks = model._call_and_compute_mask(inputs, **kwargs)
File "/tensorflow/python/keras/engine/network.py", line 856, in _call_and_compute_mask
mask=masks)
File "/tensorflow/python/keras/engine/network.py", line 1029, in _run_internal_graph
computed_tensor, **kwargs)
File "/tensorflow/python/keras/engine/network.py", line 856, in _call_and_compute_mask
mask=masks)
File "/tensorflow/python/keras/engine/network.py", line 1031, in _run_internal_graph
output_tensors = layer.call(computed_tensor, **kwargs)
File "/tensorflow/python/keras/layers/convolutional.py", line 194, in call
outputs = self._convolution_op(inputs, self.kernel)
File "/tensorflow/python/ops/nn_ops.py", line 868, in __call__
return self.conv_op(inp, filter)
File "/tensorflow/python/ops/nn_ops.py", line 520, in __call__
return self.call(inp, filter)
File "/tensorflow/python/ops/nn_ops.py", line 204, in __call__
name=self.name)
File "/tensorflow/python/ops/gen_nn_ops.py", line 982, in conv2d
name=name, ctx=_ctx)
File "/tensorflow/python/ops/gen_nn_ops.py", line 1015, in conv2d_eager_fallback
_attr_T, _inputs_T = _execute.args_to_matching_eager([input, filter], _ctx)
File "/tensorflow/python/eager/execute.py", line 195, in args_to_matching_eager
ret = [internal_convert_to_tensor(t, dtype, ctx=ctx) for t in l]
File "/tensorflow/python/eager/execute.py", line 195, in <listcomp>
ret = [internal_convert_to_tensor(t, dtype, ctx=ctx) for t in l]
File "/tensorflow/python/framework/ops.py", line 1146, in internal_convert_to_tensor
ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
File "/tensorflow/python/ops/variables.py", line 828, in _TensorConversionFunction
"of type '%s'" % (dtype.name, v.dtype.name))
ValueError: Incompatible type conversion requested to type 'uint8' for variable of type 'float32'
The first thrown error is about NumPy ndarrays, but I convert those to TensorFlow tensors right after I import them. Any suggestions are greatly appreciated! I checked for any np.int32 types, but was not able to find any.
The output of tf.image.resize_images is a tensor of type float, and therefore the rgb tensor returned from sample_fetcher() is a float tensor. However, when calling the Dataset.from_generator() method, you are declaring the output type of the first generated element as tf.uint8 (i.e. output_types=(tf.uint8, tf.float32)). Therefore, a conversion is requested that cannot actually be performed. Change it to tf.float32 (i.e. output_types=(tf.float32, tf.float32)) for both the train and validation generators, and the problem should be fixed.
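Concretely, the fix is just the output_types declaration; a sketch using the names from the question:

train = tf.data.Dataset.from_generator(generator=train_sample_fetcher,
                                       output_types=(tf.float32, tf.float32))  # was (tf.uint8, tf.float32)
val = tf.data.Dataset.from_generator(generator=val_sample_fetcher,
                                     output_types=(tf.float32, tf.float32))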

tf.keras - Importing a model with BatchNormalization layers

I've been stuck on this issue for a little while. I'm trying to run the code below with the tf_cnnvis (https://github.com/InFoCusp/tf_cnnvis) package for visualising learnt features in the network: I import my protobuf model and then try to provide it with a tensor containing some image data (which I believe is provided as a feed_dict, although I could be mistaken).
import numpy as np
import tensorflow as tf
import keras as k
import cv2
import tf_cnnvis as tfv
from tensorflow.python.platform import gfile
from keras import backend as K

model_filename = "saved_model.pb"
image = "test.jpg"

gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.8, allow_growth=False)
sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))
K.set_session(sess)
K._LEARNING_PHASE = tf.constant(0)
K.set_learning_phase(0)

with gfile.FastGFile(model_filename, 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def)

X = tf.placeholder(tf.float32, shape=[None, 48, 64, 3], name="input")  # placeholder for input images
y = tf.placeholder(tf.float32, shape=[None, 8])

im = np.array(cv2.imread(image))
im = np.expand_dims(im, 0)

layers = ['r', 'p', 'c']

init_op = tf.group(tf.global_variables_initializer(), tf.local_variables_initializer())
sess.run(init_op)

with sess.as_default():
    is_success = tfv.activation_visualization(sess_graph_path=tf.get_default_graph(), value_feed_dict={X: im}, layers=layers)

sess.close()
When I run my code, I get an "InvalidArgumentError" with this traceback:
Traceback (most recent call last):
File "/usr/local/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1292, in _do_call
return fn(*args)
File "/usr/local/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1277, in _run_fn
options, feed_dict, fetch_list, target_list, run_metadata)
File "/usr/local/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1367, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.InvalidArgumentError: You must feed a value for placeholder tensor 'import/batch_normalization_1_input' with dtype float and shape [?,48,64,3]
[[{{node import/batch_normalization_1_input}} = Placeholder[_class=["loc:#import/batch_normalization/cond/FusedBatchNorm_1/Switch"], dtype=DT_FLOAT, shape=[?,48,64,3], _device="/job:localhost/replica:0/task:0/device:GPU:0"]()]]
[[{{node import/conv2d/Relu/_5}} = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_50_import/conv2d/Relu", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "vis2.py", line 36, in <module>
is_success = tfv.activation_visualization(sess_graph_path=tf.get_default_graph(), value_feed_dict = {X : im}, layers=layers)
File "/usr/local/anaconda3/lib/python3.6/site-packages/tf_cnnvis-1.0.0-py3.6.egg/tf_cnnvis/tf_cnnvis.py", line 406, in activation_visualization
File "/usr/local/anaconda3/lib/python3.6/site-packages/tf_cnnvis-1.0.0-py3.6.egg/tf_cnnvis/tf_cnnvis.py", line 169, in _get_visualization
File "/usr/local/anaconda3/lib/python3.6/site-packages/tf_cnnvis-1.0.0-py3.6.egg/tf_cnnvis/tf_cnnvis.py", line 227, in _visualization_by_layer_type
File "/usr/local/anaconda3/lib/python3.6/site-packages/tf_cnnvis-1.0.0-py3.6.egg/tf_cnnvis/tf_cnnvis.py", line 288, in _visualization_by_layer_name
File "/usr/local/anaconda3/lib/python3.6/site-packages/tf_cnnvis-1.0.0-py3.6.egg/tf_cnnvis/tf_cnnvis.py", line 315, in _activation
File "/usr/local/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 887, in run
run_metadata_ptr)
File "/usr/local/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1110, in _run
feed_dict_tensor, options, run_metadata)
File "/usr/local/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1286, in _do_run
run_metadata)
File "/usr/local/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1308, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: You must feed a value for placeholder tensor 'import/batch_normalization_1_input' with dtype float and shape [?,48,64,3]
[[{{node import/batch_normalization_1_input}} = Placeholder[_class=["loc:#import/batch_normalization/cond/FusedBatchNorm_1/Switch"], dtype=DT_FLOAT, shape=[?,48,64,3], _device="/job:localhost/replica:0/task:0/device:GPU:0"]()]]
[[{{node import/conv2d/Relu/_5}} = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_50_import/conv2d/Relu", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]
Caused by op 'import/batch_normalization_1_input', defined at:
File "vis2.py", line 36, in <module>
is_success = tfv.activation_visualization(sess_graph_path=tf.get_default_graph(), value_feed_dict = {X : im}, layers=layers)
File "/usr/local/anaconda3/lib/python3.6/site-packages/tf_cnnvis-1.0.0-py3.6.egg/tf_cnnvis/tf_cnnvis.py", line 406, in activation_visualization
path_logdir = path_logdir, path_outdir = path_outdir)
File "/usr/local/anaconda3/lib/python3.6/site-packages/tf_cnnvis-1.0.0-py3.6.egg/tf_cnnvis/tf_cnnvis.py", line 159, in _get_visualization
s = _graph_import_function(PATH,s)
File "/usr/local/anaconda3/lib/python3.6/site-packages/tf_cnnvis-1.0.0-py3.6.egg/tf_cnnvis/tf_cnnvis.py", line 177, in _graph_import_function
new_saver = tf.train.import_meta_graph(PATH) # Import graph
File "/usr/local/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 1650, in import_meta_graph
meta_graph_or_file, clear_devices, import_scope, **kwargs)[0]
File "/usr/local/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 1672, in _import_meta_graph_with_return_elements
**kwargs))
File "/usr/local/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/meta_graph.py", line 806, in import_scoped_meta_graph_with_return_elements
return_elements=return_elements)
File "/usr/local/anaconda3/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py", line 488, in new_func
return func(*args, **kwargs)
File "/usr/local/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/importer.py", line 442, in import_graph_def
_ProcessNewOps(graph)
File "/usr/local/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/importer.py", line 234, in _ProcessNewOps
for new_op in graph._add_new_tf_operations(compute_devices=False): # pylint: disable=protected-access
File "/usr/local/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3426, in _add_new_tf_operations
for c_op in c_api_util.new_tf_operations(self)
File "/usr/local/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3426, in <listcomp>
for c_op in c_api_util.new_tf_operations(self)
File "/usr/local/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3285, in _create_op_from_tf_operation
ret = Operation(c_op, self)
File "/usr/local/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1748, in __init__
self._traceback = tf_stack.extract_stack()
InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'import/batch_normalization_1_input' with dtype float and shape [?,48,64,3]
[[{{node import/batch_normalization_1_input}} = Placeholder[_class=["loc:#import/batch_normalization/cond/FusedBatchNorm_1/Switch"], dtype=DT_FLOAT, shape=[?,48,64,3], _device="/job:localhost/replica:0/task:0/device:GPU:0"]()]]
[[{{node import/conv2d/Relu/_5}} = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_50_import/conv2d/Relu", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]
Now, I've looked around and I've arrived (tentatively) at the conclusion that this is due to a learning-phase variable that's set in the BatchNormalization layer that I have in the model. I'm unclear as to how to set the learning phase once you've imported the model. Some people set the learning phase before initializing the model (which, as you can see, I have attempted), but in most examples of this they're using one of the large, pre-provided models (such as MNIST). Others provide the learning phase in the feed_dict, which I have also tried, like so:
with sess.as_default():
    is_success = tfv.activation_visualization(sess_graph_path=tf.get_default_graph(), value_feed_dict={X: im, K.learning_phase(): 0}, layers=layers)
But this gives me a different error message:
Traceback (most recent call last):
File "vis2.py", line 36, in <module>
is_success = tfv.activation_visualization(sess_graph_path=tf.get_default_graph(), value_feed_dict = {X : im, K.learning_phase(): 0}, layers=layers)
File "/usr/local/anaconda3/lib/python3.6/site-packages/tf_cnnvis-1.0.0-py3.6.egg/tf_cnnvis/tf_cnnvis.py", line 406, in activation_visualization
File "/usr/local/anaconda3/lib/python3.6/site-packages/tf_cnnvis-1.0.0-py3.6.egg/tf_cnnvis/tf_cnnvis.py", line 169, in _get_visualization
File "/usr/local/anaconda3/lib/python3.6/site-packages/tf_cnnvis-1.0.0-py3.6.egg/tf_cnnvis/tf_cnnvis.py", line 227, in _visualization_by_layer_type
File "/usr/local/anaconda3/lib/python3.6/site-packages/tf_cnnvis-1.0.0-py3.6.egg/tf_cnnvis/tf_cnnvis.py", line 270, in _visualization_by_layer_name
File "/usr/local/anaconda3/lib/python3.6/site-packages/tf_cnnvis-1.0.0-py3.6.egg/tf_cnnvis/utils.py", line 79, in parse_tensors_dict
AttributeError: 'int' object has no attribute 'name'
At this stage, seeing as I'm still not completely sure if the problem I'm trying to fix is even the right one, I would very much appreciate some input. If there's anything else you need me to provide, please ask.

Keras image preprocessing error at return

I am using the Keras ImageDataGenerator to process the inputs to my CNN. I want basic preprocessing that scales the image pixels to values from -1 to 1, as was done in the paper on the MobileNet architecture.
My data generator only defines the preprocessing function:
train_datagen = ImageDataGenerator(
    preprocessing_function=preprocess_input
)
My preprocess_input function:
def preprocess_input(img):
    pix = np.asarray(img)
    pix = pix.astype(np.float32)
    pix = pix / 255.0
    pix = pix * 2
    return pix
This is giving me the following error:
Traceback (most recent call last):
File "finetune_mobilenet.py", line 206, in <module>
train(folder_train, folder_dev, './models/')
File "finetune_mobilenet.py", line 150, in train
callbacks=callbacks_list)
File "/usr/local/lib/python2.7/dist-packages/keras/legacy/interfaces.py", line 91, in wrapper
return func(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 2192, in fit_generator
generator_output = next(output_generator)
File "/usr/local/lib/python2.7/dist-packages/keras/utils/data_utils.py", line 584, in get
six.raise_from(StopIteration(e), e)
File "/usr/local/lib/python2.7/dist-packages/six.py", line 737, in raise_from
raise value
StopIteration: 'tuple' object cannot be interpreted as an index
I also tried the original preprocessing function that is available for the MobileNet architecture in Keras, but that one also fails. Can you tell me what I need to change to zero-center my image data?
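As a side note on the zero-centering itself (the posted function stops at [0, 2] because it never subtracts 1), a minimal sketch of a [-1, 1] scaling in the spirit of the MobileNet convention x / 127.5 - 1:

import numpy as np

# Hypothetical corrected scaler: maps uint8 pixels [0, 255] to [-1, 1].
def preprocess_input(img):
    pix = np.asarray(img).astype(np.float32)
    pix = pix / 127.5  # [0, 255] -> [0, 2]
    pix = pix - 1.0    # [0, 2]   -> [-1, 1]
    return pix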
