Converting a tensorflow object into a jpeg on local drive - python

I'm following the tutorial here:
in order to create a Python program that generates a DeepDream-style image and saves it onto disk. I thought that changing the following lines should do the trick:
img = run_deep_dream_with_octaves(img=original_img, step_size=0.01)
display.clear_output(wait=True)
img = tf.image.resize(img, base_shape)
img = tf.image.convert_image_dtype(img/255.0, dtype=tf.uint8)
tf.compat.v1.enable_eager_execution()
fname = '2.jpg'
with tf.compat.v1.Session() as sess:
    enc = tf.io.encode_jpeg(img)
    fwrite = tf.io.write_file(tf.constant(fname), enc)
    result = sess.run(fwrite)
The key line is encode_jpeg. However, this gives me the following error:
Traceback (most recent call last):
  File "main.py", line 246, in <module>
    enc = tf.io.encode_jpeg(img)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/tensorflow/python/ops/gen_image_ops.py", line 1496, in encode_jpeg
    name=name)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 470, in _apply_op_helper
    preferred_dtype=default_dtype)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1465, in convert_to_tensor
    raise RuntimeError("Attempting to capture an EagerTensor without "
RuntimeError: Attempting to capture an EagerTensor without building a function.

You can simply convert the "img" tensor into a NumPy array and then save it, since you have eager execution enabled (it's enabled by default in TF 2.0).
So, the modified code for saving the image will be:
import numpy as np   # assumed available, as in the tutorial
import PIL.Image     # assumed available, as in the tutorial

img = run_deep_dream_with_octaves(img=original_img, step_size=0.01)
display.clear_output(wait=True)
img = tf.image.resize(img, base_shape)
img = tf.image.convert_image_dtype(img/255.0, dtype=tf.uint8)
fname = '2.jpg'
PIL.Image.fromarray(np.array(img)).save(fname)
You don't have to use sessions in TF 2.0 to get the values from a tensor.
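Alternatively, the tf.io ops from the question also work directly under eager execution, with no session at all; a minimal sketch, assuming img is the uint8 tensor produced above:
# Eager mode: both ops execute immediately, no Session required.
enc = tf.io.encode_jpeg(img)
tf.io.write_file('2.jpg', enc)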

Related

Error converting Detectron2 torchscript model to CoreML using coremltools

I have a Detectron2 model that is trained to identify specific items on a backend server. I would like to make this model available on iOS devices and convert it to a CoreML model using coremltools v6.1. I used the export_model.py script provided by Facebook to create a TorchScript model, but when I try to convert it to CoreML I get a KeyError:
def save_core_ml_package(scripted_model):
    # Using image_input in the inputs parameter:
    # Convert to Core ML neural network using the Unified Conversion API.
    h = 224
    w = 224
    ctmodel = ct.convert(scripted_model,
                         inputs=[ct.ImageType(shape=(1, 3, h, w),
                                              color_layout=ct.colorlayout.RGB)]
                         )
    # Save the converted model.
    ctmodel.save("newmodel.mlmodel")
I get the following error
Support for converting Torch Script Models is experimental. If possible you should use a traced model for conversion.
Traceback (most recent call last):
  File "/usr/repo/URCV/src/Python/pytorch_to_torchscript.py", line 101, in <module>
    save_trace_to_core_ml_package(test_model, outdir=outdir)
  File "/usr/repo/URCV/src/Python/pytorch_to_torchscript.py", line 46, in save_trace_to_core_ml_package
    ctmodel = ct.convert(
  File "/opt/python-venv/lib/python3.8/site-packages/coremltools/converters/_converters_entry.py", line 444, in convert
    mlmodel = mil_convert(
  File "/opt/python-venv/lib/python3.8/site-packages/coremltools/converters/mil/converter.py", line 190, in mil_convert
    return _mil_convert(model, convert_from, convert_to, ConverterRegistry, MLModel, compute_units, **kwargs)
  File "/opt/python-venv/lib/python3.8/site-packages/coremltools/converters/mil/converter.py", line 217, in _mil_convert
    proto, mil_program = mil_convert_to_proto(
  File "/opt/python-venv/lib/python3.8/site-packages/coremltools/converters/mil/converter.py", line 282, in mil_convert_to_proto
    prog = frontend_converter(model, **kwargs)
  File "/opt/python-venv/lib/python3.8/site-packages/coremltools/converters/mil/converter.py", line 112, in __call__
    return load(*args, **kwargs)
  File "/opt/python-venv/lib/python3.8/site-packages/coremltools/converters/mil/frontend/torch/load.py", line 56, in load
    converter = TorchConverter(torchscript, inputs, outputs, cut_at_symbols, specification_version)
  File "/opt/python-venv/lib/python3.8/site-packages/coremltools/converters/mil/frontend/torch/converter.py", line 160, in __init__
    raw_graph, params_dict = self._expand_and_optimize_ir(self.torchscript)
  File "/opt/python-venv/lib/python3.8/site-packages/coremltools/converters/mil/frontend/torch/converter.py", line 486, in _expand_and_optimize_ir
    graph, params_dict = TorchConverter._jit_pass_lower_graph(graph, torchscript)
  File "/opt/python-venv/lib/python3.8/site-packages/coremltools/converters/mil/frontend/torch/converter.py", line 431, in _jit_pass_lower_graph
    _lower_graph_block(graph)
  File "/opt/python-venv/lib/python3.8/site-packages/coremltools/converters/mil/frontend/torch/converter.py", line 410, in _lower_graph_block
    module = getattr(node_to_module_map[_input], attr_name)
KeyError: images.2 defined in (%images.2 : __torch__.detectron2.structures.image_list.ImageList = prim::CreateObject()
)
From the error message it looks like you are using a scripted (TorchScript) model:
Support for converting Torch Script Models is experimental. If
possible you should use a traced model for conversion.
If possible, try to use a traced model instead, e.g.:
dummy_input = torch.randn(batch, channels, width, height)
traceable_model = torch.jit.trace(model, dummy_input)
followed by your original code:
ct.convert(traceable_model,...
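Putting the pieces together, a minimal sketch of tracing followed by conversion (the 224x224 dummy input and the assumption that model accepts a plain NCHW float tensor are hypothetical; a Detectron2 model may need a wrapper exposing that interface):
import torch
import coremltools as ct

h, w = 224, 224
dummy_input = torch.randn(1, 3, h, w)  # hypothetical input shape
traced_model = torch.jit.trace(model.eval(), dummy_input)

# Convert the traced (not scripted) model with the Unified Conversion API.
ctmodel = ct.convert(
    traced_model,
    inputs=[ct.ImageType(shape=(1, 3, h, w), color_layout=ct.colorlayout.RGB)]
)
ctmodel.save("newmodel.mlmodel")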

Problems with npy to nii.gz conversion

I'm trying to convert a .npy image to a .nii.gz image, and I'm running into various problems even though I'm following the instructions correctly.
This is the code I'm using:
import numpy as np
import nibabel as nib
file_dir = "D:/teste volumes slicer/"
fileNPY1 = "teste01.npy"
img_array1 = np.load(file_dir + fileNPY1)
print(img_array1.shape)
print(img_array1.dtype)
normal_array = "D:/teste volumes slicer/teste01.npy"
print ("done")
nifti_file = nib.Nifti1Image(normal_array, np.eye(4))
About the image: printing its shape and dtype gives (332, 360, 360) and float64.
https://ibb.co/mRyTrw7 - image that shows the error and the image's information
The error message:
Traceback (most recent call last):
  File "d:\teste volumes slicer\conversor.py", line 16, in <module>
    nifti_file = nib.Nifti1Image(normal_array, np.eye(4))
  File "C:\Python39\lib\site-packages\nibabel\nifti1.py", line 1756, in __init__
    super(Nifti1Pair, self).__init__(dataobj,
  File "C:\Python39\lib\site-packages\nibabel\analyze.py", line 918, in __init__
    super(AnalyzeImage, self).__init__(
  File "C:\Python39\lib\site-packages\nibabel\spatialimages.py", line 469, in __init__
    self.update_header()
  File "C:\Python39\lib\site-packages\nibabel\nifti1.py", line 2032, in update_header
    super(Nifti1Image, self).update_header()
  File "C:\Python39\lib\site-packages\nibabel\nifti1.py", line 1795, in update_header
    super(Nifti1Pair, self).update_header()
  File "C:\Python39\lib\site-packages\nibabel\spatialimages.py", line 491, in update_header
    shape = self._dataobj.shape
AttributeError: 'str' object has no attribute 'shape'
The problem with your code is that instead of passing your NumPy array (your image), you are passing the path of the image to the Nifti1Image function.
This is the correct way to convert it:
import numpy as np
import nibabel as nib
file_dir = "D:/teste volumes slicer/"
fileNPY1 = "teste01.npy"
img_array1 = np.load(file_dir + fileNPY1)
nifti_file = nib.Nifti1Image(img_array1, np.eye(4))
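To actually produce the .nii.gz file from the question title, the image still has to be written to disk; a minimal sketch using nibabel's save (the output filename is an assumption):
# The .nii.gz extension makes nibabel write a gzip-compressed NIfTI file.
nib.save(nifti_file, file_dir + "teste01.nii.gz")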

using tensorflow: google.protobuf.message.DecodeError: Wrong wire type in tag

For my project I'm trying to do inference based on my trained model stored in saved_model.pb. I doubt that the mistake is due to my code, which you can see here, but suspect an installation problem instead:
from PIL import Image
import numpy as np
import scipy
from scipy import misc
import matplotlib.pyplot as plt
import tensorflow.compat.v1 as tf
from tensorflow.python.platform import gfile  # needed for gfile.FastGFile below
tf.disable_v2_behavior()

with tf.Graph().as_default() as graph:  # Set default graph as graph
    with tf.Session() as sess:
        # Load the graph in graph_def
        print("load graph")
        # We load the protobuf file from disk and parse it to retrieve the unserialized graph_def
        with gfile.FastGFile("saved_model.pb", 'rb') as f:
            from scipy.io import wavfile
            samplerate, data = wavfile.read('sound.wav')
            # Set FCN graph to the default graph
            graph_def = tf.GraphDef()
            graph_def.ParseFromString(f.read())
            sess.graph.as_default()
            # Import a graph_def into the current default Graph
            # (in this case, the weights are (typically) embedded in the graph)
            tf.import_graph_def(
                graph_def,
                input_map=None,
                return_elements=None,
                name="",
                op_dict=None,
                producer_op_list=None
            )
            # Print the name of operations in the session
            for op in graph.get_operations():
                print("Operation Name :", op.name)        # Operation name
                print("Tensor Stats :", str(op.values())) # Tensor name
            # INFERENCE here
            l_input = graph.get_tensor_by_name('Inputs/fifo_queue_Dequeue:0')   # Input Tensor
            l_output = graph.get_tensor_by_name('upscore32/conv2d_transpose:0') # Output Tensor
            print("Shape of input : ", tf.shape(l_input))
            sess.run(tf.global_variables_initializer())
            # Run model on a single input
            Session_out = sess.run(l_output, feed_dict={l_input: data})
            print("Predicted class:", class_names[Session_out[0].argmax()])
The traceback is the following:
Traceback (most recent call last):
  File "/home/pi/model_inference/test.py", line 11, in <module>
    graph_def.ParseFromString(f.read())
  File "/usr/local/lib/python3.7/dist-packages/google/protobuf/message.py", line 199, in ParseFromString
    return self.MergeFromString(serialized)
  File "/usr/local/lib/python3.7/dist-packages/google/protobuf/internal/python_message.py", line 1145, in MergeFromString
    if self._InternalParse(serialized, 0, length) != length:
  File "/usr/local/lib/python3.7/dist-packages/google/protobuf/internal/python_message.py", line 1212, in InternalParse
    pos = field_decoder(buffer, new_pos, end, self, field_dict)
  File "/usr/local/lib/python3.7/dist-packages/google/protobuf/internal/decoder.py", line 754, in DecodeField
    if value._InternalParse(buffer, pos, new_pos) != new_pos:
  File "/usr/local/lib/python3.7/dist-packages/google/protobuf/internal/python_message.py", line 1212, in InternalParse
    pos = field_decoder(buffer, new_pos, end, self, field_dict)
  File "/usr/local/lib/python3.7/dist-packages/google/protobuf/internal/decoder.py", line 733, in DecodeRepeatedField
    if value.add()._InternalParse(buffer, pos, new_pos) != new_pos:
  File "/usr/local/lib/python3.7/dist-packages/google/protobuf/internal/python_message.py", line 1212, in InternalParse
    pos = field_decoder(buffer, new_pos, end, self, field_dict)
  File "/usr/local/lib/python3.7/dist-packages/google/protobuf/internal/decoder.py", line 888, in DecodeMap
    if submsg._InternalParse(buffer, pos, new_pos) != new_pos:
  File "/usr/local/lib/python3.7/dist-packages/google/protobuf/internal/python_message.py", line 1199, in InternalParse
    buffer, new_pos, wire_type) # pylint: disable=protected-access
  File "/usr/local/lib/python3.7/dist-packages/google/protobuf/internal/decoder.py", line 989, in _DecodeUnknownField
    (data, pos) = _DecodeUnknownFieldSet(buffer, pos)
  File "/usr/local/lib/python3.7/dist-packages/google/protobuf/internal/decoder.py", line 968, in _DecodeUnknownFieldSet
    (data, pos) = _DecodeUnknownField(buffer, pos, wire_type)
  File "/usr/local/lib/python3.7/dist-packages/google/protobuf/internal/decoder.py", line 993, in _DecodeUnknownField
    raise _DecodeError('Wrong wire type in tag.')
google.protobuf.message.DecodeError: Wrong wire type in tag.
It's worth noting that I'm running this on a Raspberry Pi 4 (so under Linux). I would be glad for any hint on what to do.
Thanks in advance!
Try using tf.saved_model.load(path), where path is the path to your saved model's folder containing the assets and variables folders. See the TensorFlow object-detection API inference tutorial.
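A minimal sketch of that approach, assuming a standard SavedModel directory and the default signature key:
import tensorflow as tf

# Hypothetical path: the folder containing saved_model.pb plus assets/ and variables/.
loaded = tf.saved_model.load("path/to/saved_model_dir")
# "serving_default" is the usual signature name; adjust if the model differs.
infer = loaded.signatures["serving_default"]
# outputs = infer(tf.constant(input_batch))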
It looks like your file "saved_model.pb" is not a saved (wire-format) protobuffer of the message type GraphDef. Maybe you can look at how it was saved and find some instructions on how to load it back? Just guessing from the name: could it be a Keras model, in which case you have to use tf.keras.models.load_model?

Keras image preprocessing error at return

I am using the Keras ImageDataGenerator to process the inputs to my CNN. I want to do basic preprocessing that scales the image pixels to values from -1 to 1, as was done in the paper on the MobileNet architecture.
My data generator only defines the preprocessing function:
train_datagen = ImageDataGenerator(
    preprocessing_function=preprocess_input
)
My preprocess_input function:
def preprocess_input(img):
    pix = np.asarray(img)
    pix = pix.astype(np.float32)
    pix = pix / 255.0
    pix = pix * 2
    return pix
This is giving me the following error:
Traceback (most recent call last):
  File "finetune_mobilenet.py", line 206, in <module>
    train(folder_train, folder_dev, './models/')
  File "finetune_mobilenet.py", line 150, in train
    callbacks=callbacks_list)
  File "/usr/local/lib/python2.7/dist-packages/keras/legacy/interfaces.py", line 91, in wrapper
    return func(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 2192, in fit_generator
    generator_output = next(output_generator)
  File "/usr/local/lib/python2.7/dist-packages/keras/utils/data_utils.py", line 584, in get
    six.raise_from(StopIteration(e), e)
  File "/usr/local/lib/python2.7/dist-packages/six.py", line 737, in raise_from
    raise value
StopIteration: 'tuple' object cannot be interpreted as an index
I also tried the original preprocessing function that is available for the MobileNet architecture in Keras, but that one fails too. Can you tell me what I need to change to zero-center my image data?
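As an aside, dividing by 255 and multiplying by 2 maps the pixels to [0, 2], not [-1, 1]; a final subtraction of 1 is still needed to zero-center. A minimal sketch of that scaling (independent of the StopIteration error above):
import numpy as np

def preprocess_input(img):
    # Map pixel values from [0, 255] to [-1, 1], MobileNet-style.
    pix = np.asarray(img, dtype=np.float32)
    return pix / 127.5 - 1.0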

Can't save tensorflow image to file

I need to resize an image to a certain size and save it to a file, so I chose the tf.image.resize_image_with_crop_or_pad function:
import tensorflow as tf
image_decoded = tf.image.decode_jpeg(tf.read_file('1.jpg'), channels=3)
cropped = tf.image.resize_image_with_crop_or_pad(image_decoded, 200, 200)
tf.write_file('2.jpg', cropped)
This fails with the following errors:
Traceback (most recent call last):
  File "/home/test/anaconda3/lib/python3.5/site-packages/tensorflow/python/framework/op_def_library.py", line 490, in apply_op
    preferred_dtype=default_dtype)
  File "/home/test/anaconda3/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 669, in convert_to_tensor
    ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
  File "/home/test/anaconda3/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 583, in _TensorTensorConversionFunction
    % (dtype.name, t.dtype.name, str(t)))
ValueError: Tensor conversion requested dtype string for Tensor with dtype uint8: 'Tensor("control_dependency_3:0", shape=(200, 200, 3), dtype=uint8)'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "train.py", line 15, in <module>
    tf.write_file('2.jpg', cropped)
  File "/home/test/anaconda3/lib/python3.5/site-packages/tensorflow/python/ops/gen_io_ops.py", line 694, in write_file
    contents=contents, name=name)
  File "/home/test/anaconda3/lib/python3.5/site-packages/tensorflow/python/framework/op_def_library.py", line 508, in apply_op
    (prefix, dtypes.as_dtype(input_arg.type).name))
TypeError: Input 'contents' of 'WriteFile' Op has type uint8 that does not match expected type of string.
I tried to convert the Tensor to string using tf.as_string() but crashed with:
TypeError: DataType uint8 for attr 'T' not in list of allowed values: int32, int64, complex64, float32, float64, bool, int8
I'm using TensorFlow v0.12.0-rc0 on Linux Mint.
You first need to encode the image from a tensor to a jpeg and then save it. Moreover, you should execute a session to evaluate your code:
import tensorflow as tf
image_decoded = tf.image.decode_jpeg(tf.read_file('1.jpg'), channels=3)
cropped = tf.image.resize_image_with_crop_or_pad(image_decoded, 200, 200)
enc = tf.image.encode_jpeg(cropped)
fname = tf.constant('2.jpg')
fwrite = tf.write_file(fname, enc)
sess = tf.Session()
result = sess.run(fwrite)
EDIT: The same thing with TensorFlow 2 (compatibility mode):
fname = '2.jpg'
with tf.compat.v1.Session() as sess:
    image_decoded = tf.image.decode_jpeg(tf.io.read_file('1.jpg'), channels=3)
    cropped = tf.image.resize_with_crop_or_pad(image_decoded, 200, 200)
    enc = tf.image.encode_jpeg(cropped)
    fwrite = tf.io.write_file(tf.constant(fname), enc)
    result = sess.run(fwrite)
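In native TF 2.x eager mode the session can be dropped entirely; a minimal sketch, assuming '1.jpg' exists in the working directory:
import tensorflow as tf

# Eager execution runs each op immediately; no Session or compat mode needed.
image = tf.image.decode_jpeg(tf.io.read_file('1.jpg'), channels=3)
cropped = tf.image.resize_with_crop_or_pad(image, 200, 200)
tf.io.write_file('2.jpg', tf.io.encode_jpeg(cropped))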
