Context:
1. python==3.6.6
2. Keras==2.2.4
3. tensorflow==2.1.0
4. pillow==7.0.0
When I load the model trained in Google Teachable Machine, it shows me the following error.
Code of loading model program:
import tensorflow.keras
from PIL import Image, ImageOps
import numpy as np
# Disable scientific notation for clarity
np.set_printoptions(suppress=True)
# Load the model
model = tensorflow.keras.models.load_model('keras_model.h5')
# Create the array of the right shape to feed into the keras model
# The 'length' or number of images you can put into the array is
# determined by the first position in the shape tuple, in this case 1.
data = np.ndarray(shape=(1, 224, 224, 3), dtype=np.float32)
# Replace this with the path to your image
image = Image.open('38.jpg')
#resize the image to a 224x224 with the same strategy as in TM2:
#resizing the image to be at least 224x224 and then cropping from the center
size = (224, 224)
image = ImageOps.fit(image, size, Image.ANTIALIAS)
#turn the image into a numpy array
image_array = np.asarray(image)
# display the resized image
image.show()
# Normalize the image
normalized_image_array = (image_array.astype(np.float32) / 127.0) - 1
# Load the image into the array
data[0] = normalized_image_array
# run the inference
prediction = model.predict(data)
print(prediction)
Error message:
"/home/muhammad_abdullah/anaconda3/envs/google teachable machine/bin/python" "/home/muhammad_abdullah/PycharmProjects/google teachable machine/main.py"
/home/muhammad_abdullah/anaconda3/envs/google teachable machine/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
/home/muhammad_abdullah/anaconda3/envs/google teachable machine/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:517: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/home/muhammad_abdullah/anaconda3/envs/google teachable machine/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:518: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
/home/muhammad_abdullah/anaconda3/envs/google teachable machine/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:519: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/home/muhammad_abdullah/anaconda3/envs/google teachable machine/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:520: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
/home/muhammad_abdullah/anaconda3/envs/google teachable machine/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
np_resource = np.dtype([("resource", np.ubyte, 1)])
/home/muhammad_abdullah/anaconda3/envs/google teachable machine/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:541: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
/home/muhammad_abdullah/anaconda3/envs/google teachable machine/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:542: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/home/muhammad_abdullah/anaconda3/envs/google teachable machine/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:543: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
/home/muhammad_abdullah/anaconda3/envs/google teachable machine/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:544: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/home/muhammad_abdullah/anaconda3/envs/google teachable machine/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:545: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
/home/muhammad_abdullah/anaconda3/envs/google teachable machine/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:550: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
np_resource = np.dtype([("resource", np.ubyte, 1)])
Traceback (most recent call last):
File "/home/muhammad_abdullah/PycharmProjects/google teachable machine/main.py", line 9, in <module>
model = tensorflow.keras.models.load_model('keras_model.h5')
File "/home/muhammad_abdullah/anaconda3/envs/google teachable machine/lib/python3.6/site-packages/tensorflow/python/keras/saving/save.py", line 146, in load_model
return hdf5_format.load_model_from_hdf5(filepath, custom_objects, compile)
File "/home/muhammad_abdullah/anaconda3/envs/google teachable machine/lib/python3.6/site-packages/tensorflow/python/keras/saving/hdf5_format.py", line 212, in load_model_from_hdf5
custom_objects=custom_objects)
File "/home/muhammad_abdullah/anaconda3/envs/google teachable machine/lib/python3.6/site-packages/tensorflow/python/keras/saving/model_config.py", line 55, in model_from_config
return deserialize(config, custom_objects=custom_objects)
File "/home/muhammad_abdullah/anaconda3/envs/google teachable machine/lib/python3.6/site-packages/tensorflow/python/keras/layers/serialization.py", line 89, in deserialize
printable_module_name='layer')
File "/home/muhammad_abdullah/anaconda3/envs/google teachable machine/lib/python3.6/site-packages/tensorflow/python/keras/utils/generic_utils.py", line 192, in deserialize_keras_object
list(custom_objects.items())))
File "/home/muhammad_abdullah/anaconda3/envs/google teachable machine/lib/python3.6/site-packages/tensorflow/python/keras/engine/sequential.py", line 352, in from_config
custom_objects=custom_objects)
File "/home/muhammad_abdullah/anaconda3/envs/google teachable machine/lib/python3.6/site-packages/tensorflow/python/keras/layers/serialization.py", line 89, in deserialize
printable_module_name='layer')
File "/home/muhammad_abdullah/anaconda3/envs/google teachable machine/lib/python3.6/site-packages/tensorflow/python/keras/utils/generic_utils.py", line 192, in deserialize_keras_object
list(custom_objects.items())))
File "/home/muhammad_abdullah/anaconda3/envs/google teachable machine/lib/python3.6/site-packages/tensorflow/python/keras/engine/sequential.py", line 352, in from_config
custom_objects=custom_objects)
File "/home/muhammad_abdullah/anaconda3/envs/google teachable machine/lib/python3.6/site-packages/tensorflow/python/keras/layers/serialization.py", line 89, in deserialize
printable_module_name='layer')
File "/home/muhammad_abdullah/anaconda3/envs/google teachable machine/lib/python3.6/site-packages/tensorflow/python/keras/utils/generic_utils.py", line 192, in deserialize_keras_object
list(custom_objects.items())))
File "/home/muhammad_abdullah/anaconda3/envs/google teachable machine/lib/python3.6/site-packages/tensorflow/python/keras/engine/network.py", line 1121, in from_config
process_layer(layer_data)
File "/home/muhammad_abdullah/anaconda3/envs/google teachable machine/lib/python3.6/site-packages/tensorflow/python/keras/engine/network.py", line 1105, in process_layer
layer = deserialize_layer(layer_data, custom_objects=custom_objects)
File "/home/muhammad_abdullah/anaconda3/envs/google teachable machine/lib/python3.6/site-packages/tensorflow/python/keras/layers/serialization.py", line 89, in deserialize
printable_module_name='layer')
File "/home/muhammad_abdullah/anaconda3/envs/google teachable machine/lib/python3.6/site-packages/tensorflow/python/keras/utils/generic_utils.py", line 194, in deserialize_keras_object
return cls.from_config(cls_config)
File "/home/muhammad_abdullah/anaconda3/envs/google teachable machine/lib/python3.6/site-packages/tensorflow/python/keras/engine/base_layer.py", line 446, in from_config
return cls(**config)
File "/home/muhammad_abdullah/anaconda3/envs/google teachable machine/lib/python3.6/site-packages/tensorflow/python/keras/engine/input_layer.py", line 80, in __init__
raise ValueError('Unrecognized keyword arguments:', kwargs.keys())
ValueError: ('Unrecognized keyword arguments:', dict_keys(['ragged']))
Process finished with exit code 1
The most important line is towards the bottom, where it mentions ragged.
This happens for me when I use a new Keras model in an old version of Keras. How did you generate the model? My bet is that you used a newer version of TensorFlow to build it.
The best and easiest thing to do is to rebuild the .h5 Keras model with the same version of TensorFlow you are using to load it. Alternatively, you can export the model as .json, modify the input layer, and then reload it in the older version. Be warned, however, that a couple of other changes happened that you will run into after that.
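For reference, here is a minimal sketch of that second option, editing the config embedded in the .h5 file rather than a separate .json export. It assumes the unsupported 'ragged' keyword is the only problem and that it is safe to simply drop it; back up keras_model.h5 before trying this.
import json
import h5py
def strip_ragged(node):
    # Recursively delete any 'ragged' entries from the nested layer configs.
    if isinstance(node, dict):
        node.pop('ragged', None)
        for value in node.values():
            strip_ragged(value)
    elif isinstance(node, list):
        for item in node:
            strip_ragged(item)
# Rewrite the model config stored inside the HDF5 file in place.
with h5py.File('keras_model.h5', mode='r+') as f:
    config = json.loads(f.attrs['model_config'])
    strip_ragged(config)
    f.attrs['model_config'] = json.dumps(config)
After this, load_model may deserialize the file in the older Keras; if other unsupported arguments surface, rebuilding the model with a matching TensorFlow version remains the safer route.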
This is my code:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.pylab import rcParams
rcParams['figure.figsize']=20,10
from keras import Sequential
from keras.layers import LSTM, Dropout, Dense
from sklearn.preprocessing import MinMaxScaler
df=pd.read_csv("/Users/apple/Desktop/NSE-Tata-Global-Beverages-Limited.csv")
df.head()
df["Date"]=pd.to_datetime(df.Date,format="%Y-%m-%d")
df.index=df['Date']
plt.figure(figsize=(16,8))
plt.plot(df["Close"],label='Close Price history')
It shows an error as follows:
/Users/apple/PycharmProjects/pythonProject4/venv/bin/python /Users/apple/PycharmProjects/pythonProject4/main.py
Traceback (most recent call last):
File "/Users/apple/PycharmProjects/pythonProject4/main.py", line 9, in <module>
from keras import Sequential
File "/Users/apple/PycharmProjects/pythonProject4/venv/lib/python3.11/site-packages/keras/__init__.py", line 21, in <module>
from keras import models
File "/Users/apple/PycharmProjects/pythonProject4/venv/lib/python3.11/site-packages/keras/models/__init__.py", line 18, in <module>
from keras.engine.functional import Functional
File "/Users/apple/PycharmProjects/pythonProject4/venv/lib/python3.11/site-packages/keras/engine/functional.py", line 24, in <module>
import tensorflow.compat.v2 as tf
File "/Users/apple/PycharmProjects/pythonProject4/venv/lib/python3.11/site-packages/tensorflow/__init__.py", line 24, in <module>
from tensorflow.python import *
File "/Users/apple/PycharmProjects/pythonProject4/venv/lib/python3.11/site-packages/tensorflow/python/__init__.py", line 75, in <module>
from tensorflow.core.framework.graph_pb2 import *
File "/Users/apple/PycharmProjects/pythonProject4/venv/lib/python3.11/site-packages/tensorflow/core/framework/graph_pb2.py", line 16, in <module>
from tensorflow.core.framework import node_def_pb2 as tensorflow_dot_core_dot_framework_dot_node__def__pb2
File "/Users/apple/PycharmProjects/pythonProject4/venv/lib/python3.11/site-packages/tensorflow/core/framework/node_def_pb2.py", line 16, in <module>
from tensorflow.core.framework import attr_value_pb2 as tensorflow_dot_core_dot_framework_dot_attr__value__pb2
File "/Users/apple/PycharmProjects/pythonProject4/venv/lib/python3.11/site-packages/tensorflow/core/framework/attr_value_pb2.py", line 16, in <module>
from tensorflow.core.framework import tensor_pb2 as tensorflow_dot_core_dot_framework_dot_tensor__pb2
File "/Users/apple/PycharmProjects/pythonProject4/venv/lib/python3.11/site-packages/tensorflow/core/framework/tensor_pb2.py", line 16, in <module>
from tensorflow.core.framework import resource_handle_pb2 as tensorflow_dot_core_dot_framework_dot_resource__handle__pb2
File "/Users/apple/PycharmProjects/pythonProject4/venv/lib/python3.11/site- packages/tensorflow/core/framework/resource_handle_pb2.py", line 36, in <module>
_descriptor.FieldDescriptor(
File "/Users/apple/PycharmProjects/pythonProject4/venv/lib/python3.11/site- packages/google/protobuf/descriptor.py", line 560, in __new__
_message.Message._CheckCalledFromGeneratedFile()
TypeError: Descriptors cannot not be created directly.
If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0.
If you cannot immediately regenerate your protos, some other possible workarounds are:
1. Downgrade the protobuf package to 3.20.x or lower.
2. Set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python (but this will use pure-Python parsing and will be much slower).
More information: https://developers.google.com/protocol-buffers/docs/news/2022-05-06#python-updates
Process finished with exit code 1
I have installed both "keras" and "tensorflow" through the interpreter. I am using PyCharm 3. What is to be done? I have tried installing keras and tensorflow through the terminal as well. I have tried everything. Thank you for going through my question.
I did what Yevhen Kuzmovych suggested, and now the error is:
/Users/apple/PycharmProjects/pythonProject4/venv/bin/python /Users/apple/PycharmProjects/pythonProject4/main.py
/Users/apple/PycharmProjects/pythonProject4/venv/lib/python3.11/site-packages/tensorflow/python/framework/dtypes.py:471: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
/Users/apple/PycharmProjects/pythonProject4/venv/lib/python3.11/site-packages/tensorflow/python/framework/dtypes.py:472: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/Users/apple/PycharmProjects/pythonProject4/venv/lib/python3.11/site-packages/tensorflow/python/framework/dtypes.py:473: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
/Users/apple/PycharmProjects/pythonProject4/venv/lib/python3.11/site-packages/tensorflow/python/framework/dtypes.py:474: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/Users/apple/PycharmProjects/pythonProject4/venv/lib/python3.11/site-packages/tensorflow/python/framework/dtypes.py:475: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
/Users/apple/PycharmProjects/pythonProject4/venv/lib/python3.11/site-packages/tensorflow/python/framework/dtypes.py:490: FutureWarning: In the future `np.object` will be defined as the corresponding NumPy scalar. (This may have returned Python scalars in past versions.
(np.object, string),
Traceback (most recent call last):
File "/Users/apple/PycharmProjects/pythonProject4/main.py", line 9, in <module>
from keras import Sequential
File "/Users/apple/PycharmProjects/pythonProject4/venv/lib/python3.11/site-packages/keras/__init__.py", line 21, in <module>
from keras import models
File "/Users/apple/PycharmProjects/pythonProject4/venv/lib /python3.11/site-packages/keras/models/__init__.py", line 18, in <module>
from keras.engine.functional import Functional
File "/Users/apple/PycharmProjects/pythonProject4/venv/lib/python3.11/site-packages/keras/engine/functional.py", line 24, in <module>
import tensorflow.compat.v2 as tf
File "/Users/apple/PycharmProjects/pythonProject4/venv/lib /python3.11/site-packages/tensorflow/__init__.py", line 24, in <module>
from tensorflow.python import *
File "/Users/apple/PycharmProjects/pythonProject4/venv/lib/python3.11/site-packages/tensorflow/python/__init__.py", line 84, in <module>
from tensorflow.python.framework.framework_lib import *
File "/Users/apple/PycharmProjects/pythonProject4/venv/lib/python3.11/site-packages/tensorflow/python/framework/framework_lib.py", line 73, in <module>
from tensorflow.python.framework.ops import Graph
File "/Users/apple/PycharmProjects/pythonProject4/venv/lib/python3.11/site-packages/tensorflow/python/framework/ops.py", line 39, in <module>
from tensorflow.python.framework import dtypes
File "/Users/apple/PycharmProjects/pythonProject4/venv/lib/python3.11/site-packages/tensorflow/python/framework/dtypes.py", line 490, in <module>
(np.object, string),
^^^^^^^^^
File "/Users/apple/PycharmProjects/pythonProject4/venv/lib/python3.11/site-packages/numpy/__init__.py", line 284, in __getattr__
raise AttributeError("module {!r} has no attribute "
AttributeError: module 'numpy' has no attribute 'object'. Did you mean: 'object_'?
Process finished with exit code 1
As the error message states, you can try downgrading protobuf:
pip install --force-reinstall -v "protobuf~=3.20.0"
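If downgrading is not an option, the second workaround quoted in the error message can be applied in code. This is only a sketch: the environment variable has to be set before tensorflow or keras is imported anywhere in the process, and pure-Python protobuf parsing is noticeably slower.
import os
os.environ["PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION"] = "python"  # must run before the imports below
import tensorflow as tf
from keras import Sequential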
You can also try the steps provided in this GitHub discussion: https://github.com/ipython/ipyparallel/issues/349#issuecomment-449402168
Most probably the error is due to circular dependencies between tensorflow and keras.
And if the Python version is not a strict requirement, I would suggest using an earlier release such as 3.10.
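A quick sanity check of what is actually installed in the venv can help decide between these options; the snippet below is purely diagnostic, not a fix.
import sys
import numpy
import google.protobuf
print("python  :", sys.version.split()[0])
print("numpy   :", numpy.__version__)
print("protobuf:", google.protobuf.__version__)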
I am new to Tensorflow and have been using a trained model from a Git repository. The pre-trained model is saved in the '../model' directory as 'snapshot-38': I have snapshot-38.index, snapshot-38.meta, snapshot-38.data-00000-of-00001 and checkpoint files there. My Python script files and data are in '../src', and I don't use any other location in my code to save the model.
def save(self):
    """Save model to file."""
    self.snapID += 1
    self.saver.save(self.sess, '../model/snapshot', global_step=self.snapID)
I am using Python 3.6 and Tensorflow 1.12.2.
I backed up these files, then tried re-training with a different set of data to generate a new model, but aborted halfway through.
I have since restored my pre-trained model files from the backup, but from then on I get the error "Restoring from checkpoint failed. This is most likely due to a mismatch between the current graph and the graph from the checkpoint. Please ensure that you have not altered the graph expected based on the checkpoint. Original error:" whenever I try to either retrain or restore the model.
Are there some temporary files that I need to remove? I suspect Tensorflow is doing something I am not aware of; I don't really get an answer from any of the solutions in similar threads. Below is the detailed stack trace:
as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
C:\Users\rcs70\.conda\envs\tensorflow_opencv\lib\site-packages\tensorflow\python\framework\dtypes.py:524: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
C:\Users\rcs70\.conda\envs\tensorflow_opencv\lib\site-packages\tensorflow\python\framework\dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
C:\Users\rcs70\.conda\envs\tensorflow_opencv\lib\site-packages\tensorflow\python\framework\dtypes.py:526: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
C:\Users\rcs70\.conda\envs\tensorflow_opencv\lib\site-packages\tensorflow\python\framework\dtypes.py:527: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
C:\Users\rcs70\.conda\envs\tensorflow_opencv\lib\site-packages\tensorflow\python\framework\dtypes.py:532: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
np_resource = np.dtype([("resource", np.ubyte, 1)])
Validation character error rate of saved model: 10.624916%
Python: 3.6.10 |Anaconda, Inc.| (default, May 7 2020, 19:46:08) [MSC v.1916 64 bit (AMD64)]
Tensorflow: 1.12.0
2020-06-26 00:53:20.161185: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX AVX2
model DIR ---- ../model/
model latestSnapshot ---- ../model/snapshot-38
Init with stored values from ../model/snapshot-38
Traceback (most recent call last):
File "C:\Users\rcs70\.conda\envs\tensorflow_opencv\lib\site-packages\tensorflow\python\client\session.py", line 1334, in _do_call
return fn(*args)
File "C:\Users\rcs70\.conda\envs\tensorflow_opencv\lib\site-packages\tensorflow\python\client\session.py", line 1319, in _run_fn
options, feed_dict, fetch_list, target_list, run_metadata)
File "C:\Users\rcs70\.conda\envs\tensorflow_opencv\lib\site-packages\tensorflow\python\client\session.py", line 1407, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Assign requires shapes of both tensors to match. lhs shape= [1,1,512,71] rhs shape= [1,1,512,80]
[[{{node save/Assign_15}} = Assign[T=DT_FLOAT, _class=["loc:#Variable_5"], use_locking=true, validate_shape=true, _device="/job:localhost/replica:0/task:0/device:CPU:0"](Variable_5, save/RestoreV2:15)]]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\rcs70\.conda\envs\tensorflow_opencv\lib\site-packages\tensorflow\python\training\saver.py", line 1546, in restore
{self.saver_def.filename_tensor_name: save_path})
File "C:\Users\rcs70\.conda\envs\tensorflow_opencv\lib\site-packages\tensorflow\python\client\session.py", line 929, in run
run_metadata_ptr)
File "C:\Users\rcs70\.conda\envs\tensorflow_opencv\lib\site-packages\tensorflow\python\client\session.py", line 1152, in _run
feed_dict_tensor, options, run_metadata)
File "C:\Users\rcs70\.conda\envs\tensorflow_opencv\lib\site-packages\tensorflow\python\client\session.py", line 1328, in _do_run
run_metadata)
File "C:\Users\rcs70\.conda\envs\tensorflow_opencv\lib\site-packages\tensorflow\python\client\session.py", line 1348, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Assign requires shapes of both tensors to match. lhs shape= [1,1,512,71] rhs shape= [1,1,512,80]
[[node save/Assign_15 (defined at P:\Desktop\COSC428_ComputerVision\SimpleHTR-master\SimpleHTR-master\src\Model.py:141) = Assign[T=DT_FLOAT, _class=["loc:#Variable_5"], use_locking=true, validate_shape=true, _device="/job:localhost/replica:0/task:0/device:CPU:0"](Variable_5, save/RestoreV2:15)]]
Caused by op 'save/Assign_15', defined at:
File "main.py", line 145, in <module>
main()
File "main.py", line 140, in main
model = Model(open(FilePaths.fnCharList).read(), decoderType, mustRestore=True, dump=args.dump)
File "P:\Desktop\COSC428_ComputerVision\SimpleHTR-master\SimpleHTR-master\src\Model.py", line 53, in __init__
(self.sess, self.saver) = self.setupTF()
File "P:\Desktop\COSC428_ComputerVision\SimpleHTR-master\SimpleHTR-master\src\Model.py", line 141, in setupTF
saver = tf.train.Saver(max_to_keep=1) # saver saves model to file
File "C:\Users\rcs70\.conda\envs\tensorflow_opencv\lib\site-packages\tensorflow\python\training\saver.py", line 1102, in __init__
self.build()
File "C:\Users\rcs70\.conda\envs\tensorflow_opencv\lib\site-packages\tensorflow\python\training\saver.py", line 1114, in build
self._build(self._filename, build_save=True, build_restore=True)
File "C:\Users\rcs70\.conda\envs\tensorflow_opencv\lib\site-packages\tensorflow\python\training\saver.py", line 1151, in _build
build_save=build_save, build_restore=build_restore)
File "C:\Users\rcs70\.conda\envs\tensorflow_opencv\lib\site-packages\tensorflow\python\training\saver.py", line 795, in _build_internal
restore_sequentially, reshape)
File "C:\Users\rcs70\.conda\envs\tensorflow_opencv\lib\site-packages\tensorflow\python\training\saver.py", line 428, in _AddRestoreOps
assign_ops.append(saveable.restore(saveable_tensors, shapes))
File "C:\Users\rcs70\.conda\envs\tensorflow_opencv\lib\site-packages\tensorflow\python\training\saver.py", line 119, in restore
self.op.get_shape().is_fully_defined())
File "C:\Users\rcs70\.conda\envs\tensorflow_opencv\lib\site-packages\tensorflow\python\ops\state_ops.py", line 221, in assign
validate_shape=validate_shape)
File "C:\Users\rcs70\.conda\envs\tensorflow_opencv\lib\site-packages\tensorflow\python\ops\gen_state_ops.py", line 60, in assign
use_locking=use_locking, name=name)
File "C:\Users\rcs70\.conda\envs\tensorflow_opencv\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "C:\Users\rcs70\.conda\envs\tensorflow_opencv\lib\site-packages\tensorflow\python\util\deprecation.py", line 488, in new_func
return func(*args, **kwargs)
File "C:\Users\rcs70\.conda\envs\tensorflow_opencv\lib\site-packages\tensorflow\python\framework\ops.py", line 3274, in create_op
op_def=op_def)
File "C:\Users\rcs70\.conda\envs\tensorflow_opencv\lib\site-packages\tensorflow\python\framework\ops.py", line 1770, in __init__
self._traceback = tf_stack.extract_stack()
InvalidArgumentError (see above for traceback): Assign requires shapes of both tensors to match. lhs shape= [1,1,512,71] rhs shape= [1,1,512,80]
[[node save/Assign_15 (defined at P:\Desktop\COSC428_ComputerVision\SimpleHTR-master\SimpleHTR-master\src\Model.py:141) = Assign[T=DT_FLOAT, _class=["loc:#Variable_5"], use_locking=true, validate_shape=true, _device="/job:localhost/replica:0/task:0/device:CPU:0"](Variable_5, save/RestoreV2:15)]]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "main.py", line 145, in <module>
main()
File "main.py", line 140, in main
model = Model(open(FilePaths.fnCharList).read(), decoderType, mustRestore=True, dump=args.dump)
File "P:\Desktop\COSC428_ComputerVision\SimpleHTR-master\SimpleHTR-master\src\Model.py", line 53, in __init__
(self.sess, self.saver) = self.setupTF()
File "P:\Desktop\COSC428_ComputerVision\SimpleHTR-master\SimpleHTR-master\src\Model.py", line 153, in setupTF
saver.restore(sess, latestSnapshot)
File "C:\Users\rcs70\.conda\envs\tensorflow_opencv\lib\site-packages\tensorflow\python\training\saver.py", line 1582, in restore
err, "a mismatch between the current graph and the graph")
tensorflow.python.framework.errors_impl.InvalidArgumentError: Restoring from checkpoint failed. This is most likely due to a mismatch between the current graph and the graph from the checkpoint. Please ensure that you have not altered the graph expected based on the checkpoint. Original error:
Assign requires shapes of both tensors to match. lhs shape= [1,1,512,71] rhs shape= [1,1,512,80]
[[node save/Assign_15 (defined at P:\Desktop\COSC428_ComputerVision\SimpleHTR-master\SimpleHTR-master\src\Model.py:141) = Assign[T=DT_FLOAT, _class=["loc:#Variable_5"], use_locking=true, validate_shape=true, _device="/job:localhost/replica:0/task:0/device:CPU:0"](Variable_5, save/RestoreV2:15)]]
Caused by op 'save/Assign_15', defined at:
File "main.py", line 145, in <module>
main()
File "main.py", line 140, in main
model = Model(open(FilePaths.fnCharList).read(), decoderType, mustRestore=True, dump=args.dump)
File "P:\Desktop\COSC428_ComputerVision\SimpleHTR-master\SimpleHTR-master\src\Model.py", line 53, in __init__
(self.sess, self.saver) = self.setupTF()
File "P:\Desktop\COSC428_ComputerVision\SimpleHTR-master\SimpleHTR-master\src\Model.py", line 141, in setupTF
saver = tf.train.Saver(max_to_keep=1) # saver saves model to file
File "C:\Users\rcs70\.conda\envs\tensorflow_opencv\lib\site-packages\tensorflow\python\training\saver.py", line 1102, in __init__
self.build()
File "C:\Users\rcs70\.conda\envs\tensorflow_opencv\lib\site-packages\tensorflow\python\training\saver.py", line 1114, in build
self._build(self._filename, build_save=True, build_restore=True)
File "C:\Users\rcs70\.conda\envs\tensorflow_opencv\lib\site-packages\tensorflow\python\training\saver.py", line 1151, in _build
build_save=build_save, build_restore=build_restore)
File "C:\Users\rcs70\.conda\envs\tensorflow_opencv\lib\site-packages\tensorflow\python\training\saver.py", line 795, in _build_internal
restore_sequentially, reshape)
File "C:\Users\rcs70\.conda\envs\tensorflow_opencv\lib\site-packages\tensorflow\python\training\saver.py", line 428, in _AddRestoreOps
assign_ops.append(saveable.restore(saveable_tensors, shapes))
File "C:\Users\rcs70\.conda\envs\tensorflow_opencv\lib\site-packages\tensorflow\python\training\saver.py", line 119, in restore
self.op.get_shape().is_fully_defined())
File "C:\Users\rcs70\.conda\envs\tensorflow_opencv\lib\site-packages\tensorflow\python\ops\state_ops.py", line 221, in assign
validate_shape=validate_shape)
File "C:\Users\rcs70\.conda\envs\tensorflow_opencv\lib\site-packages\tensorflow\python\ops\gen_state_ops.py", line 60, in assign
use_locking=use_locking, name=name)
File "C:\Users\rcs70\.conda\envs\tensorflow_opencv\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "C:\Users\rcs70\.conda\envs\tensorflow_opencv\lib\site-packages\tensorflow\python\util\deprecation.py", line 488, in new_func
return func(*args, **kwargs)
File "C:\Users\rcs70\.conda\envs\tensorflow_opencv\lib\site-packages\tensorflow\python\framework\ops.py", line 3274, in create_op
op_def=op_def)
File "C:\Users\rcs70\.conda\envs\tensorflow_opencv\lib\site-packages\tensorflow\python\framework\ops.py", line 1770, in __init__
self._traceback = tf_stack.extract_stack()
InvalidArgumentError (see above for traceback): Restoring from checkpoint failed. This is most likely due to a mismatch between the current graph and the graph from the checkpoint. Please ensure that you have not altered the graph expected based on the checkpoint. Original error:
Assign requires shapes of both tensors to match. lhs shape= [1,1,512,71] rhs shape= [1,1,512,80]
[[node save/Assign_15 (defined at P:\Desktop\COSC428_ComputerVision\SimpleHTR-master\SimpleHTR-master\src\Model.py:141) = Assign[T=DT_FLOAT, _class=["loc:#Variable_5"], use_locking=true, validate_shape=true, _device="/job:localhost/replica:0/task:0/device:CPU:0"](Variable_5, save/RestoreV2:15)]]
The error says this: Assign requires shapes of both tensors to match. lhs shape= [1,1,512,71] rhs shape= [1,1,512,80]
This means that the dimensions of one of the tensors in the snapshot are different from those of the corresponding tensor in the model: in the snapshot it is [1,1,512,80], while in the model it is [1,1,512,71].
Therefore, something is different. You have to load the snapshot on a model that matches exactly the one it was saved from.
If I had to guess, I would say that this is a multi-class classification model and that the number of classes the model was trained on (i.e. in the snapshot) was 80, while the model has now been built to classify 71 classes.
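One way to confirm this, as a rough sketch, is to list the variable shapes stored in the snapshot and compare them with the variables of the current graph; tf.train.list_variables is available in TF 1.12.
import tensorflow as tf
# Print every variable name and shape stored in the checkpoint.
for name, shape in tf.train.list_variables('../model/snapshot-38'):
    print(name, shape)
If the character list fed to the model changed between the run that produced the snapshot and the current run, that alone would change the final dimension of the output kernel and produce exactly this kind of 80-versus-71 mismatch.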
Hello everyone,
I'm trying to run my program on a new computer, but I get all of the errors and warnings below. From the output, it seems like all the problems come from tensorflow.
I have also tried running this program on another computer, and there it works fine.
The first problem is that I cannot figure out what those warnings mean; on the other computer there are no such warnings.
$ ./run_av.sh
WARNING:tensorflow:From /home/wentao/Sigmedia-AVSR-LRS2/avsr/io_utils.py:208: tf_record_iterator (from tensorflow.python.lib.io.tf_record) is deprecated and will be removed in a future version.
Instructions for updating:
Use eager execution and:
`tf.data.TFRecordDataset(path)`
WARNING:tensorflow:From /home/wentao/Sigmedia-AVSR-LRS2/avsr/io_utils.py:183: to_int64 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
WARNING:tensorflow:From /home/wentao/.local/lib/python3.6/site-packages/tensorflow/python/data/ops/dataset_ops.py:1419: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
WARNING:tensorflow:From /home/wentao/Sigmedia-AVSR-LRS2/avsr/video.py:27: conv2d (from tensorflow.python.layers.convolutional) is deprecated and will be removed in a future version.
Instructions for updating:
Use keras.layers.conv2d instead.
WARNING:tensorflow:From /home/wentao/Sigmedia-AVSR-LRS2/avsr/video.py:11: batch_normalization (from tensorflow.python.layers.normalization) is deprecated and will be removed in a future version.
Instructions for updating:
Use keras.layers.batch_normalization instead.
WARNING:tensorflow:From /home/wentao/Sigmedia-AVSR-LRS2/avsr/cells.py:19: LSTMCell.__init__ (from tensorflow.python.ops.rnn_cell_impl) is deprecated and will be removed in a future version.
Instructions for updating:
This class is equivalent as tf.keras.layers.LSTMCell, and will be replaced by that in Tensorflow 2.0.
WARNING:tensorflow:From /home/wentao/Sigmedia-AVSR-LRS2/avsr/cells.py:92: MultiRNNCell.__init__ (from tensorflow.python.ops.rnn_cell_impl) is deprecated and will be removed in a future version.
Instructions for updating:
This class is equivalent as tf.keras.layers.StackedRNNCells, and will be replaced by that in Tensorflow 2.0.
WARNING:tensorflow:From /home/wentao/Sigmedia-AVSR-LRS2/avsr/encoder.py:81: dynamic_rnn (from tensorflow.python.ops.rnn) is deprecated and will be removed in a future version.
Instructions for updating:
Please use `keras.layers.RNN(cell)`, which is equivalent to this API
WARNING:tensorflow:From /home/wentao/.local/lib/python3.6/site-packages/tensorflow/python/ops/rnn.py:626: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
WARNING:tensorflow:From /home/wentao/.local/lib/python3.6/site-packages/tensorflow/python/ops/rnn_cell_impl.py:1259: calling dropout (from tensorflow.python.ops.nn_ops) with keep_prob is deprecated and will be removed in a future version.
Instructions for updating:
Please use `rate` instead of `keep_prob`. Rate should be set to `rate = 1 - keep_prob`.
WARNING:tensorflow:From /home/wentao/.local/lib/python3.6/site-packages/tensorflow/contrib/seq2seq/python/ops/helper.py:311: Bernoulli.__init__ (from tensorflow.python.ops.distributions.bernoulli) is deprecated and will be removed after 2019-01-01.
Instructions for updating:
The TensorFlow Distributions library has moved to TensorFlow Probability (https://github.com/tensorflow/probability). You should update all references to use `tfp.distributions` instead of `tf.distributions`.
WARNING:tensorflow:From /home/wentao/.local/lib/python3.6/site-packages/tensorflow/python/ops/distributions/bernoulli.py:97: Distribution.__init__ (from tensorflow.python.ops.distributions.distribution) is deprecated and will be removed after 2019-01-01.
Instructions for updating:
The TensorFlow Distributions library has moved to TensorFlow Probability (https://github.com/tensorflow/probability). You should update all references to use `tfp.distributions` instead of `tf.distributions`.
WARNING:tensorflow:From /home/wentao/.local/lib/python3.6/site-packages/tensorflow/contrib/seq2seq/python/ops/helper.py:314: Categorical.__init__ (from tensorflow.python.ops.distributions.categorical) is deprecated and will be removed after 2019-01-01.
Instructions for updating:
The TensorFlow Distributions library has moved to TensorFlow Probability (https://github.com/tensorflow/probability). You should update all references to use `tfp.distributions` instead of `tf.distributions`.
WARNING:tensorflow:From /home/wentao/.local/lib/python3.6/site-packages/tensorflow/python/ops/distributions/categorical.py:278: multinomial (from tensorflow.python.ops.random_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.random.categorical instead.
/home/wentao/.local/lib/python3.6/site-packages/tensorflow/python/ops/gradients_impl.py:110: UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory.
"Converting sparse IndexedSlices to a dense Tensor of unknown shape. "
Traceback (most recent call last):
File "/home/wentao/.local/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 511, in _apply_op_helper
preferred_dtype=default_dtype)
File "/home/wentao/.local/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1175, in internal_convert_to_tensor
ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
File "/home/wentao/.local/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 977, in _TensorTensorConversionFunction
(dtype.name, t.dtype.name, str(t)))
ValueError: Tensor conversion requested dtype float32 for Tensor with dtype resource: 'Tensor("Decoder/video/Encoder/multi_rnn_cell/cell_0/lstm_cell/kernel/AMSGrad:0", shape=(), dtype=resource)'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "experiment_tcd_av.py", line 64, in <module>
main(sys.argv)
File "experiment_tcd_av.py", line 47, in main
num_gpus=1,
File "/home/wentao/Sigmedia-AVSR-LRS2/avsr/avsr.py", line 193, in __init__
self._create_models()
File "/home/wentao/Sigmedia-AVSR-LRS2/avsr/avsr.py", line 381, in _create_models
batch_size=self._hparams.batch_size[0])
File "/home/wentao/Sigmedia-AVSR-LRS2/avsr/avsr.py", line 418, in _make_model
hparams=self._hparams
File "/home/wentao/Sigmedia-AVSR-LRS2/avsr/seq2seq.py", line 21, in __init__
self._make_decoder()
File "/home/wentao/Sigmedia-AVSR-LRS2/avsr/seq2seq.py", line 113, in _make_decoder
hparams=self._hparams
File "/home/wentao/Sigmedia-AVSR-LRS2/avsr/decoder_bimodal.py", line 51, in __init__
self._init_decoder()
File "/home/wentao/Sigmedia-AVSR-LRS2/avsr/decoder_bimodal.py", line 116, in _init_decoder
self._init_optimiser()
File "/home/wentao/Sigmedia-AVSR-LRS2/avsr/decoder_bimodal.py", line 514, in _init_optimiser
zip(gradients, variables), global_step=tf.train.get_global_step())
File "/home/wentao/.local/lib/python3.6/site-packages/tensorflow/python/training/optimizer.py", line 612, in apply_gradients
update_ops.append(processor.update_op(self, grad))
File "/home/wentao/.local/lib/python3.6/site-packages/tensorflow/python/training/optimizer.py", line 171, in update_op
update_op = optimizer._resource_apply_dense(g, self._v)
File "/home/wentao/Sigmedia-AVSR-LRS2/avsr/AMSGrad.py", line 96, in _resource_apply_dense
m_t = state_ops.assign(m, beta1_t * m + m_scaled_g_values, use_locking=self._use_locking)
File "/home/wentao/.local/lib/python3.6/site-packages/tensorflow/python/ops/math_ops.py", line 812, in binary_op_wrapper
return func(x, y, name=name)
File "/home/wentao/.local/lib/python3.6/site-packages/tensorflow/python/ops/math_ops.py", line 1078, in _mul_dispatch
return gen_math_ops.mul(x, y, name=name)
File "/home/wentao/.local/lib/python3.6/site-packages/tensorflow/python/ops/gen_math_ops.py", line 5860, in mul
"Mul", x=x, y=y, name=name)
File "/home/wentao/.local/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 547, in _apply_op_helper
inferred_from[input_arg.type_attr]))
TypeError: Input 'y' of 'Mul' Op has type resource that does not match type float32 of argument 'x'.
Exception ignored in: <bound method AVSR.__del__ of <avsr.avsr.AVSR object at 0x7f3db42332b0>>
Traceback (most recent call last):
File "/home/wentao/Sigmedia-AVSR-LRS2/avsr/avsr.py", line 198, in __del__
self._train_session.close()
AttributeError: 'AVSR' object has no attribute '_train_session'
As for the ValueError and the TypeError, I can't figure out why they occur.
I have just used "pip3 install tensorflow" to install tensorflow, but something seems to be wrong.
Please help me or give me some advice on how to fix it. Thank you.
export_saved_model used on TPUEstimator raises TypeError: Failed to convert object of type to Tensor with Tensorflow 1.12.0. Am I using it incorrectly, or, if it is a bug, is there some workaround?
I would like to train a model on a TPU using TPUEstimator and then use the trained model locally on a CPU. I cannot use the graph saved during training directly; I need to use export_saved_model instead (Github issue).
export_saved_model on TPUEstimator works correctly with Tensorflow 1.13.0rc0, but it fails with the current Tensorflow 1.12.0 (another Github issue). At the moment, however, TPUs with Tensorflow 1.13 are not available on Google Cloud, and TPUs with Tensorflow 1.12 are not compatible with it, so upgrading Tensorflow to 1.13 is not an option.
The relevant code is:
def serving_input_receiver_fn():
    feature = tf.placeholder(tf.float32, shape=[None, None, None, 2])
    return tf.estimator.export.TensorServingInputReceiver(feature, feature)

estimator.export_saved_model(FLAGS.export_dir, serving_input_receiver_fn)
Expected result.
The model should be exported correctly. This happens with Tensorflow 1.13.0rc0 or with TPUEstimator replaced with Estimator. The former can be reproduced using this colab.
Actual result.
Exporting fails with TypeError: Failed to convert object of type and the traceback included below. This can be reproduced with this colab.
...
WARNING:tensorflow:From /Users/michal/.virtualenvs/deepsat/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py:1044: calling SavedModelBuilder.add_meta_graph_and_variables (from tensorflow.python.saved_model.builder_impl) with legacy_init_op is deprecated and will be removed in a future version.
Instructions for updating:
Pass your op to the equivalent parameter main_op instead.
INFO:tensorflow:Assets added to graph.
INFO:tensorflow:No assets to write.
WARNING:tensorflow:rewrite_for_inference (from tensorflow.contrib.tpu.python.tpu.tpu) is experimental and may change or be removed at any time, and without warning.
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Running infer on CPU
ERROR:tensorflow:Operation of type Placeholder (policy_labels) is not supported on the TPU. Execution will fail if this op is used in the graph.
ERROR:tensorflow:Operation of type Placeholder (sat_labels) is not supported on the TPU. Execution will fail if this op is used in the graph.
INFO:tensorflow:Done calling model_fn.
Traceback (most recent call last):
File "/Users/michal/.virtualenvs/deepsat/lib/python3.6/site-packages/tensorflow/python/framework/tensor_util.py", line 527, in make_tensor_proto
str_values = [compat.as_bytes(x) for x in proto_values]
File "/Users/michal/.virtualenvs/deepsat/lib/python3.6/site-packages/tensorflow/python/framework/tensor_util.py", line 527, in <listcomp>
str_values = [compat.as_bytes(x) for x in proto_values]
File "/Users/michal/.virtualenvs/deepsat/lib/python3.6/site-packages/tensorflow/python/util/compat.py", line 61, in as_bytes
(bytes_or_text,))
TypeError: Expected binary or unicode string, got dict_values([<tf.Tensor 'sat_prob:0' shape=(?,) dtype=float32>, <tf.Tensor 'policy_prob:0' shape=(?, ?, 2) dtype=float32>])
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "neurosat_tpu.py", line 253, in <module>
tf.app.run()
File "/Users/michal/.virtualenvs/deepsat/lib/python3.6/site-packages/tensorflow/python/platform/app.py", line 125, in run
_sys.exit(main(argv))
File "neurosat_tpu.py", line 248, in main
estimator.export_saved_model(FLAGS.export_dir, serving_input_receiver_fn)
File "/Users/michal/.virtualenvs/deepsat/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py", line 734, in export_saved_model
strip_default_attrs=True)
File "/Users/michal/.virtualenvs/deepsat/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py", line 663, in export_savedmodel
mode=model_fn_lib.ModeKeys.PREDICT)
File "/Users/michal/.virtualenvs/deepsat/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py", line 789, in _export_saved_model_for_mode
strip_default_attrs=strip_default_attrs)
File "/Users/michal/.virtualenvs/deepsat/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py", line 907, in _export_all_saved_models
mode=model_fn_lib.ModeKeys.PREDICT)
File "/Users/michal/.virtualenvs/deepsat/lib/python3.6/site-packages/tensorflow/contrib/tpu/python/tpu/tpu_estimator.py", line 2188, in _add_meta_graph_for_mode
check_variables=False))
File "/Users/michal/.virtualenvs/deepsat/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py", line 984, in _add_meta_graph_for_mode
config=self.config)
File "/Users/michal/.virtualenvs/deepsat/lib/python3.6/site-packages/tensorflow/contrib/tpu/python/tpu/tpu_estimator.py", line 2192, in _call_model_fn
return self._call_model_fn_for_inference(features, labels, mode, config)
File "/Users/michal/.virtualenvs/deepsat/lib/python3.6/site-packages/tensorflow/contrib/tpu/python/tpu/tpu_estimator.py", line 2253, in _call_model_fn_for_inference
new_tensors.append(array_ops.identity(t))
File "/Users/michal/.virtualenvs/deepsat/lib/python3.6/site-packages/tensorflow/python/ops/array_ops.py", line 81, in identity
return gen_array_ops.identity(input, name=name)
File "/Users/michal/.virtualenvs/deepsat/lib/python3.6/site-packages/tensorflow/python/ops/gen_array_ops.py", line 3454, in identity
"Identity", input=input, name=name)
File "/Users/michal/.virtualenvs/deepsat/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 513, in _apply_op_helper
raise err
File "/Users/michal/.virtualenvs/deepsat/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 510, in _apply_op_helper
preferred_dtype=default_dtype)
File "/Users/michal/.virtualenvs/deepsat/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1146, in internal_convert_to_tensor
ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
File "/Users/michal/.virtualenvs/deepsat/lib/python3.6/site-packages/tensorflow/python/framework/constant_op.py", line 229, in _constant_tensor_conversion_function
return constant(v, dtype=dtype, name=name)
File "/Users/michal/.virtualenvs/deepsat/lib/python3.6/site-packages/tensorflow/python/framework/constant_op.py", line 208, in constant
value, dtype=dtype, shape=shape, verify_shape=verify_shape))
File "/Users/michal/.virtualenvs/deepsat/lib/python3.6/site-packages/tensorflow/python/framework/tensor_util.py", line 531, in make_tensor_proto
"supported type." % (type(values), values))
TypeError: Failed to convert object of type <class 'dict_values'> to Tensor. Contents: dict_values([<tf.Tensor 'sat_prob:0' shape=(?,) dtype=float32>, <tf.Tensor 'policy_prob:0' shape=(?, ?, 2) dtype=float32>]). Consider casting elements to a supported type.
Adding the argument export_to_tpu=False to the TPUEstimator constructor prevents the error in Tensorflow 1.12:
estimator = tf.contrib.tpu.TPUEstimator(..., export_to_tpu=False)
export_to_tpu=False disables exporting the TPU version of the model, but the CPU version is still exported, and that is sufficient to run the model locally. With Tensorflow 1.13 the bug is fixed and the flag is no longer necessary.
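In context, a minimal sketch might look like the following; model_fn, my_tpu_run_config and the batch sizes are placeholders for whatever the real training script already defines.
estimator = tf.contrib.tpu.TPUEstimator(
    model_fn=model_fn,              # existing model_fn from the training script
    config=my_tpu_run_config,       # existing tf.contrib.tpu.RunConfig
    train_batch_size=1024,
    predict_batch_size=8,
    export_to_tpu=False)            # export only the CPU meta-graph, skip the TPU rewrite
estimator.export_saved_model(FLAGS.export_dir, serving_input_receiver_fn)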
The answer is based on the Github thread linked in the question.
I want to train my own model with the tensorflow object detection API, following this guide: http://www.gradient-ascent.com/blog/2018/7/24/8tdkwi9iwmds0ws0e1fvhuauw8zxdt
When I try to train, I get the error below. I have tried 3 config files, but
the error remains. Please help me!
user@pc:~/anaconda3/lib/python3.6/site-packages/tensorflow/models$ python research/object_detection/train.py \
> --logtostderr \
> --train_dir=train \
> --pipeline_config_path=ssd_mobilenet_v2_coco.config
WARNING:tensorflow:From /home/user/anaconda3/lib/python3.6/site-packages/tensorflow/models/research/object_detection/trainer.py:176: create_global_step (from tensorflow.contrib.framework.python.ops.variables) is deprecated and will be removed in a future version.
Instructions for updating:
Please switch to tf.train.create_global_step
WARNING:tensorflow:From /home/user/anaconda3/lib/python3.6/site-packages/tensorflow/contrib/slim/python/slim/data/parallel_reader.py:242: string_input_producer (from tensorflow.python.training.input) is deprecated and will be removed in a future version.
Instructions for updating:
Queue-based input pipelines have been replaced by `tf.data`. Use `tf.data.Dataset.from_tensor_slices(string_tensor).shuffle(tf.shape(input_tensor, out_type=tf.int64)[0]).repeat(num_epochs)`. If `shuffle=False`, omit the `.shuffle(...)`.
.
.
.
A lot of warnings (7-8) omitted here
Instructions for updating:
Use the `axis` argument instead
WARNING:tensorflow:From /home/user/anaconda3/lib/python3.6/site-packages/tensorflow/models/research/object_detection/core/batcher.py:96: batch (from tensorflow.python.training.input) is deprecated and will be removed in a future version.
Instructions for updating:
Queue-based input pipelines have been replaced by `tf.data`. Use `tf.data.Dataset.batch(batch_size)` (or `padded_batch(...)` if `dynamic_pad=True`).
Traceback (most recent call last):
File "/home/user/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 455, in _apply_op_helper
as_ref=input_arg.is_ref)
File "/home/user/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1211, in internal_convert_n_to_tensor
ctx=ctx))
File "/home/user/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1146, in internal_convert_to_tensor
ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
File "/home/user/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/array_ops.py", line 971, in _autopacking_conversion_function
return _autopacking_helper(v, dtype, name or "packed")
File "/home/user/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/array_ops.py", line 902, in _autopacking_helper
elem))
TypeError: Cannot convert a list containing a tensor of dtype <dtype: 'int32'> to <dtype: 'float32'> (Tensor is: <tf.Tensor 'Preprocessor/stack_1:0' shape=(1, 3) dtype=int32>)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "research/object_detection/train.py", line 198, in <module>
tf.app.run()
File "/home/user/anaconda3/lib/python3.6/site-packages/tensorflow/python/platform/app.py", line 125, in run
_sys.exit(main(argv))
File "research/object_detection/train.py", line 194, in main
worker_job_name, is_chief, FLAGS.train_dir)
File "/home/user/anaconda3/lib/python3.6/site-packages/tensorflow/models/research/object_detection/trainer.py", line 192, in train
clones = model_deploy.create_clones(deploy_config, model_fn, [input_queue])
File "/home/user/anaconda3/lib/python3.6/site-packages/tensorflow/models/research/slim/deployment/model_deploy.py", line 193, in create_clones
outputs = model_fn(*args, **kwargs)
File "/home/user/anaconda3/lib/python3.6/site-packages/tensorflow/models/research/object_detection/trainer.py", line 124, in _create_losses
images = tf.concat(images, 0)
File "/home/user/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/array_ops.py", line 1124, in concat
return gen_array_ops.concat_v2(values=values, axis=axis, name=name)
File "/home/user/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/gen_array_ops.py", line 1033, in concat_v2
"ConcatV2", values=values, axis=axis, name=name)
File "/home/user/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 483, in _apply_op_helper
raise TypeError("%s that don't all match." % prefix)
TypeError: Tensors in list passed to 'values' of 'ConcatV2' Op have types [<NOT CONVERTIBLE TO TENSOR>, <NOT CONVERTIBLE TO TENSOR>, <NOT CONVERTIBLE TO TENSOR>, <NOT CONVERTIBLE TO TENSOR>, <NOT CONVERTIBLE TO TENSOR>, <NOT CONVERTIBLE TO TENSOR>, <NOT CONVERTIBLE TO TENSOR>, <NOT CONVERTIBLE TO TENSOR>,...] that don't all match.
I am using Tensorflow 1.12.