Cannot find the variable that is input to the ReadVariableOp

I'm trying to save a Keras .h5 weights file as a TensorFlow .pb file, but I keep getting the error ValueError: Cannot find the variable that is an input to the ReadVariableOp when I run:
frozen_graph = freeze_session(K.get_session(),
                              output_names=[out.op.name for out in model.keras_model.output])
The full traceback:
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input> in <module>
      1 frozen_graph = freeze_session(K.get_session(),
----> 2     output_names=[out.op.name for out in model.keras_model.output])

<ipython-input> in freeze_session(session, keep_var_names, output_names, clear_devices)
     26         node.device = ""
     27     frozen_graph = tf.graph_util.convert_variables_to_constants(
---> 28         session, input_graph_def, output_names, freeze_var_names)
     29     return frozen_graph

~/anaconda3/envs/env_name/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py in new_func(*args, **kwargs)
    322           'in a future version' if date is None else ('after %s' % date),
    323           instructions)
--> 324       return func(*args, **kwargs)
    325     return tf_decorator.make_decorator(
    326         func, new_func, 'deprecated',

~/anaconda3/envs/env_name/lib/python3.6/site-packages/tensorflow/python/framework/graph_util_impl.py in convert_variables_to_constants(sess, input_graph_def, output_node_names, variable_names_whitelist, variable_names_blacklist)
    300       source_op_name = get_input_name(map_name_to_node[source_op_name])
    301       if map_name_to_node[source_op_name].op != "VarHandleOp":
--> 302         raise ValueError("Cannot find the variable that is an input "
    303                          "to the ReadVariableOp.")
    304

ValueError: Cannot find the variable that is an input to the ReadVariableOp.

I just ran into this same issue. Adding
import keras.backend as K
K.set_learning_phase(0)
before building the model, which sets the learning phase to testing mode, was the solution.
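For context, here is a minimal sketch of the whole fix. The file name is a placeholder, and freeze_session is the helper from the question; note that a plain Keras model exposes its outputs as model.outputs (the question's model.keras_model.output is specific to the Mask R-CNN wrapper):
import keras.backend as K
from keras.models import load_model

# Inference mode must be set BEFORE the model graph is constructed,
# otherwise training-only variable reads are left in the graph.
K.set_learning_phase(0)

model = load_model("my_model.h5")  # placeholder path

frozen_graph = freeze_session(K.get_session(),
                              output_names=[out.op.name for out in model.outputs])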

Related

ValueError when calling activity_classifier.create(...) method

I am using TuriCreate to create a model that classifies human activity, but I get an error when I try to run the activity_classifier.create(...) method.
Code
This is what I did:
Load all data:
train_sf = tc.SFrame("data/cleaned_train_sframe")
valid_sf = tc.SFrame("data/cleaned_valid_sframe")
test_sf = tc.SFrame("data/cleaned_test_sframe")
Dividing the SFrame randomly into two smaller SFrames:
train, valid = tc.activity_classifier.util.random_split_by_session(train_sf, session_id='sessionId', fraction=0.9)
Trying to build and train my model:
model = tc.activity_classifier.create(dataset=train_sf,
                                      session_id='sessionId',
                                      target='activity',
                                      features=["rotX", "rotY", "rotZ", "accelX", "accelY", "accelZ"],
                                      prediction_window=50,
                                      validation_set=valid_sf,
                                      max_iterations=20)
Error
The third step raises the following error:
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Input In [34], in <cell line: 1>()
----> 1 model = tc.activity_classifier.create(dataset=train_sf,
2 session_id='sessionId',
3 target='activity',
4 features=["rotX", "rotY", "rotZ", "accelX", "accelY", "accelZ"],
5 prediction_window=50,
6 validation_set=valid_sf,
7 max_iterations=20)
File ~/Desktop/PFG/lib/python3.8/site-packages/turicreate/toolkits/activity_classifier/_activity_classifier.py:200, in create(dataset, session_id, target, features, prediction_window, validation_set, max_iterations, batch_size, verbose, random_seed)
197 options["_show_loss"] = False
198 options["random_seed"] = random_seed
--> 200 model.train(dataset, target, session_id, validation_set, options)
201 return ActivityClassifier(model_proxy=model, name=name)
File ~/Desktop/PFG/lib/python3.8/site-packages/turicreate/extensions.py:305, in _ToolkitClass.__getattr__.<locals>.<lambda>(*args, **kwargs)
302 return _wrap_function_return(self._tkclass.get_property(name))
303 elif name in self._functions:
304 # is it a function?
--> 305 ret = lambda *args, **kwargs: self.__run_class_function(name, args, kwargs)
306 ret.__doc__ = (
307 "Name: " + name + "\nParameters: " + str(self._functions[name]) + "\n"
308 )
309 try:
File ~/Desktop/PFG/lib/python3.8/site-packages/turicreate/extensions.py:290, in _ToolkitClass.__run_class_function(self, fnname, args, kwargs)
288 # unwrap it
289 try:
--> 290 ret = self._tkclass.call_function(fnname, argument_dict)
291 except RuntimeError as exc:
292 # Expose C++ exceptions using ToolkitError.
293 raise _ToolkitError(exc)
File cy_model.pyx:35, in turicreate._cython.cy_model.UnityModel.call_function()
File cy_model.pyx:40, in turicreate._cython.cy_model.UnityModel.call_function()
ValueError: stod: no conversion
Does anyone know what the problem could be?
You can get past this issue by setting validation_set to None.
This does mean that you have no validation set during training, but at least you can create your model.
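For reference, here is the create(...) call from the question with that workaround applied; everything else is unchanged:
model = tc.activity_classifier.create(dataset=train_sf,
                                      session_id='sessionId',
                                      target='activity',
                                      features=["rotX", "rotY", "rotZ",
                                                "accelX", "accelY", "accelZ"],
                                      prediction_window=50,
                                      validation_set=None,  # workaround: skip validation
                                      max_iterations=20)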

XGBoost Bad Allocation

When loading an XGBoost model and running it, I get the following error:
XGBoostError: bad allocation
I read that it might be a memory problem. I have 32 GB of RAM and the model is quite small; could the 8 GB of space I have left on my C: drive be causing the problem?
Code below:
def workflow_funnel(embeddings, model, funneldf):
    xvalid_count_source = embeddings.transform(funneldf['cleaned_original_text'].apply(lambda x: np.str_(x)))
    funnel_predictions = model.predict(xvalid_count_source)

funnel_model = xgboost.Booster(model_file = project_directory + language + funnel_model_load)
Full error:
---------------------------------------------------------------------------
XGBoostError Traceback (most recent call last)
<ipython-input-47-40524e96ae70> in <module>
1 # Funnel model application
----> 2 funnel_model = xgboost.Booster(model_file = project_directory + language+funnel_model_load)
3
4 funnel_embedding = pickle.load(open(project_directory + language+funnel_vectorizer_load, 'rb'))
5
~\AppData\Roaming\Python\Python37\site-packages\xgboost\core.py in __init__(self, params, cache, model_file)
1324 self.__dict__.update(state)
1325 elif isinstance(model_file, (STRING_TYPES, os.PathLike, bytearray)):
-> 1326 self.load_model(model_file)
1327 elif model_file is None:
1328 pass
~\AppData\Roaming\Python\Python37\site-packages\xgboost\core.py in load_model(self, fname)
2160 fname = os.fspath(os.path.expanduser(fname))
2161 _check_call(_LIB.XGBoosterLoadModel(
-> 2162 self.handle, c_str(fname)))
2163 elif isinstance(fname, bytearray):
2164 buf = fname
~\AppData\Roaming\Python\Python37\site-packages\xgboost\core.py in _check_call(ret)
216 """
217 if ret != 0:
--> 218 raise XGBoostError(py_str(_LIB.XGBGetLastError()))
219
220
XGBoostError: bad allocation

XGBoostError when loading model from json file

I'm trying to load a trained XGBoost model which has been saved in a json file. I'm using the following code:
params = {'objective': 'multi:softmax',
          'eval_metric': 'mlogloss',
          'num_class': 10,
          'early_stopping_rounds': 10}
xgb = xgb.XGBClassifier(**params)
xgb.load_model("xgb_default.json")
However, I'm getting an error; I'll include it here together with the traceback:
XGBoostError Traceback (most recent call last)
<ipython-input-4-8a9abeb40a78> in <module>
10
11 xgb = xgb.XGBClassifier(**params)
---> 12 xgb.load_model("xgb_default.json")
~\anaconda3\lib\site-packages\xgboost\sklearn.py in load_model(self, fname)
412 if not hasattr(self, '_Booster'):
413 self._Booster = Booster({'n_jobs': self.n_jobs})
--> 414 self._Booster.load_model(fname)
415 meta = self._Booster.attr('scikit_learn')
416 if meta is None:
~\anaconda3\lib\site-packages\xgboost\core.py in load_model(self, fname)
1601 # assume file name, cannot use os.path.exist to check, file can be
1602 # from URL.
-> 1603 _check_call(_LIB.XGBoosterLoadModel(
1604 self.handle, c_str(os_fspath(fname))))
1605 elif isinstance(fname, bytearray):
~\anaconda3\lib\site-packages\xgboost\core.py in _check_call(ret)
186 """
187 if ret != 0:
--> 188 raise XGBoostError(py_str(_LIB.XGBGetLastError()))
189
190
XGBoostError: [11:07:00] C:\Users\Administrator\workspace\xgboost-win64_release_1.2.0\include\xgboost/json.h:65: Invalid cast, from Null to Array
Does anyone know what is the issue here? Thank you in advance!
The path in the traceback (xgboost-win64_release_1.2.0) shows you are running XGBoost 1.2.0, but support for JSON was introduced in XGBoost 1.3. Upgrading XGBoost should fix it.
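A minimal sketch of the fix, assuming you upgrade the environment that loads the model to XGBoost 1.3 or later (e.g. pip install --upgrade xgboost). Note that the question's xgb = xgb.XGBClassifier(**params) also rebinds the module name xgb, so any later xgb.* module call would fail; using a different variable name avoids that:
import xgboost as xgb

params = {'objective': 'multi:softmax',
          'eval_metric': 'mlogloss',
          'num_class': 10,
          'early_stopping_rounds': 10}

clf = xgb.XGBClassifier(**params)   # keep `xgb` pointing at the module
clf.load_model("xgb_default.json")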

Unsupported dtype for TensorType: <dtype: 'int32'> when running Mask RCNN training with TensorFlow 1 in a Jupyter notebook with a conda env

This was a bit strange, because I had run this notebook a few days earlier, and many times before, without anything similar happening.
Additionally, I found this GitHub issue.
On macOS:
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
~/miniconda3/envs/mask_rcnn/lib/python3.6/site-packages/theano/tensor/type.py in dtype_specs(self)
268 'complex64': (complex, 'theano_complex64', 'NPY_COMPLEX64')
--> 269 }[self.dtype]
270 except KeyError:
KeyError: "<dtype: 'int32'>"
During handling of the above exception, another exception occurred:
TypeError Traceback (most recent call last)
<ipython-input-10-36f43ebc6563> in <module>
----> 1 model = modellib.MaskRCNN(mode="training", config=config, model_dir=MODEL_DIR)
~/Study/Project07 - MaskRCNN/mrcnn/model.py in __init__(self, mode, config, model_dir)
2041 self.model_dir = model_dir
2042 self.set_log_dir()
-> 2043 self.keras_model = self.build(mode=mode, config=config)
2044
2045 def build(self, mode, config):
~/Study/Project07 - MaskRCNN/mrcnn/model.py in build(self, mode, config)
2066 # RPN GT
2067 input_rpn_match = KL.Input(
-> 2068 shape=[None, 1], name="input_rpn_match", dtype=tf.int32)
2069 input_rpn_bbox = KL.Input(
2070 shape=[None, 4], name="input_rpn_bbox", dtype=tf.float32)
~/miniconda3/envs/mask_rcnn/lib/python3.6/site-packages/keras/engine/topology.py in Input(shape, batch_shape, name, dtype, sparse, tensor)
1461 name=name, dtype=dtype,
1462 sparse=sparse,
-> 1463 input_tensor=tensor)
1464 # Return tensor including _keras_shape and _keras_history.
1465 # Note that in this case train_output and test_output are the same pointer.
~/miniconda3/envs/mask_rcnn_old2/lib/python3.6/site-packages/keras/legacy/interfaces.py in wrapper(*args, **kwargs)
89 warnings.warn('Update your `' + object_name +
90 '` call to the Keras 2 API: ' + signature, stacklevel=2)
---> 91 return func(*args, **kwargs)
92 wrapper._original_function = func
93 return wrapper
~/miniconda3/envs/mask_rcnn/lib/python3.6/site-packages/keras/engine/topology.py in __init__(self, input_shape, batch_size, batch_input_shape, dtype, input_tensor, sparse, name)
1370 dtype=dtype,
1371 sparse=self.sparse,
-> 1372 name=self.name)
1373 else:
1374 self.is_placeholder = False
~/miniconda3/envs/mask_rcnn/lib/python3.6/site-packages/keras/backend/theano_backend.py in placeholder(shape, ndim, dtype, sparse, name)
237 x = th_sparse_module.csr_matrix(name=name, dtype=dtype)
238 else:
--> 239 x = T.TensorType(dtype, broadcast)(name)
240 x._keras_shape = shape
241 x._uses_learning_phase = False
~/miniconda3/envs/mask_rcnn/lib/python3.6/site-packages/theano/tensor/type.py in __init__(self, dtype, broadcastable, name, sparse_grad)
49 # True or False
50 self.broadcastable = tuple(bool(b) for b in broadcastable)
---> 51 self.dtype_specs() # error checking is done there
52 self.name = name
53 self.numpy_dtype = np.dtype(self.dtype)
~/miniconda3/envs/mask_rcnn/lib/python3.6/site-packages/theano/tensor/type.py in dtype_specs(self)
270 except KeyError:
271 raise TypeError("Unsupported dtype for %s: %s"
--> 272 % (self.__class__.__name__, self.dtype))
273
274 def to_scalar_type(self):
TypeError: Unsupported dtype for TensorType: <dtype: 'int32'>
I'm still not sure what the issue was, but it went away after closing Jupyter, deactivating conda, and then reopening the terminal, the conda env, Jupyter, etc.
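One detail worth noting: the traceback runs through keras/backend/theano_backend.py, which means Keras was using the Theano backend in that session rather than TensorFlow. If it happens again, a quick check (assuming a multi-backend Keras 2.x install) would be:
import os

# Select the backend BEFORE keras is imported anywhere in the process.
os.environ['KERAS_BACKEND'] = 'tensorflow'

import keras.backend as K
print(K.backend())  # should print 'tensorflow' for Mask RCNN on TF 1.x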

I'm Trying to Get Keras to Load vgg16.h5 Weights Locally Instead of Downloading

I've tried modifying this code several different ways,
from pointing the last lines at my vgg16.h5 file on my local disk,
to importing load_weights from Keras and trying to grab the weights that way instead.
This code is from lesson 1 of the fast.ai course. I've asked on their forum but got no response.
The files involved are at this link:
https://github.com/fastai/courses/tree/master/deeplearning1/nbs
lesson1.ipynb calls the file vgg16.py to download the weights.
The code below starts at line 117 in the vgg16.py file.
def create(self):
    """
    Creates the VGG16 network architecture and loads the pretrained weights.

    Args: None
    Returns: None
    """
    model = self.model = Sequential()
    model.add(Lambda(vgg_preprocess, input_shape=(3,224,224), output_shape=(3,224,224)))

    self.ConvBlock(2, 64)
    self.ConvBlock(2, 128)
    self.ConvBlock(3, 256)
    self.ConvBlock(3, 512)
    self.ConvBlock(3, 512)

    model.add(Flatten())
    self.FCBlock()
    self.FCBlock()
    model.add(Dense(1000, activation='softmax'))

    fname = 'vgg16.h5'
    model.load_weights(get_file(fname, self.FILE_PATH+fname, cache_subdir='models'))
The above code is the code out of the box that downloads the weights.
When I change that last line and get rid of everything in the parentheses except for fname, like this...
fname = 'vgg16.h5'
model.load_weights(fname)
I get the error below.
---------------------------------------------------------------------------
InternalError Traceback (most recent call last)
<ipython-input-7-2b6861506a11> in <module>()
----> 1 vgg = Vgg16()
2 # Grab a few images at a time for training and validation.
3 # NB: They must be in subdirectories named based on their category
4 batches = vgg.get_batches(path+'train', batch_size=batch_size)
5 val_batches = vgg.get_batches(path+'valid', batch_size=batch_size*2)
/home/eagle/fastai/courses-master/deeplearning1/nbs/vgg16.pyc in __init__(self)
45 def __init__(self):
46 self.FILE_PATH = 'http://files.fast.ai/models/'
---> 47 self.create()
48 self.get_classes()
49
/home/eagle/fastai/courses-master/deeplearning1/nbs/vgg16.pyc in create(self)
137
138 fname = 'vgg16.h5'
--> 139 model.load_weights(fname)
140
141
/home/eagle/anaconda3/envs/les1/lib/python2.7/site-packages/Keras-1.2.2-py2.7.egg/keras/engine/topology.pyc in load_weights(self, filepath, by_name)
2706 self.load_weights_from_hdf5_group_by_name(f)
2707 else:
-> 2708 self.load_weights_from_hdf5_group(f)
2709
2710 if hasattr(f, 'close'):
/home/eagle/anaconda3/envs/les1/lib/python2.7/site-packages/Keras-1.2.2-py2.7.egg/keras/engine/topology.pyc in load_weights_from_hdf5_group(self, f)
2792 weight_values[0] = w
2793 weight_value_tuples += zip(symbolic_weights, weight_values)
-> 2794 K.batch_set_value(weight_value_tuples)
2795
2796 def load_weights_from_hdf5_group_by_name(self, f):
/home/eagle/anaconda3/envs/les1/lib/python2.7/site-packages/Keras-1.2.2-py2.7.egg/keras/backend/tensorflow_backend.pyc in batch_set_value(tuples)
1879 assign_ops.append(assign_op)
1880 feed_dict[assign_placeholder] = value
-> 1881 get_session().run(assign_ops, feed_dict=feed_dict)
1882
1883
/home/eagle/anaconda3/envs/les1/lib/python2.7/site-packages/Keras-1.2.2-py2.7.egg/keras/backend/tensorflow_backend.pyc in get_session()
120 config = tf.ConfigProto(intra_op_parallelism_threads=nb_thread,
121 allow_soft_placement=True)
--> 122 _SESSION = tf.Session(config=config)
123 session = _SESSION
124 if not _MANUAL_VAR_INIT:
/home/eagle/anaconda3/envs/les1/lib/python2.7/site-packages/tensorflow/python/client/session.pyc in __init__(self, target, graph, config)
1191
1192 """
-> 1193 super(Session, self).__init__(target, graph, config=config)
1194 # NOTE(mrry): Create these on first `__enter__` to avoid a reference cycle.
1195 self._default_graph_context_manager = None
/home/eagle/anaconda3/envs/les1/lib/python2.7/site-packages/tensorflow/python/client/session.pyc in __init__(self, target, graph, config)
552 try:
553 with errors.raise_exception_on_not_ok_status() as status:
--> 554 self._session = tf_session.TF_NewDeprecatedSession(opts, status)
555 finally:
556 tf_session.TF_DeleteSessionOptions(opts)
/home/eagle/anaconda3/envs/les1/lib/python2.7/contextlib.pyc in __exit__(self, type, value, traceback)
22 if type is None:
23 try:
---> 24 self.gen.next()
25 except StopIteration:
26 return
/home/eagle/anaconda3/envs/les1/lib/python2.7/site-packages/tensorflow/python/framework/errors_impl.pyc in raise_exception_on_not_ok_status()
464 None, None,
465 compat.as_text(pywrap_tensorflow.TF_Message(status)),
--> 466 pywrap_tensorflow.TF_GetCode(status))
467 finally:
468 pywrap_tensorflow.TF_DeleteStatus(status)
InternalError: Failed to create session.
I found the folder where Keras stores (or would store) this weights file and dropped mine in there with the following line in Terminal:
mv /home/mine/fastai/courses-master/deeplearning1/nbs/vgg16.h5 ~/.keras/models/vgg16.h5
The first path points to my fully downloaded .h5 weights file; the second is where I put it, which is the path Keras searches when looking for the weights.
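That works because get_file caches downloads under ~/.keras/models and returns the cached path when the file is already there, skipping the download. A rough sketch of what the original line resolves to (the URL comes from the question's FILE_PATH, and model is the Sequential built in create()):
from keras.utils.data_utils import get_file

# Returns ~/.keras/models/vgg16.h5 directly if it already exists;
# only downloads from the URL when the cached file is missing.
weights_path = get_file('vgg16.h5',
                        'http://files.fast.ai/models/vgg16.h5',
                        cache_subdir='models')
model.load_weights(weights_path)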
One of the possible ways to load weights locally is as follows:
vgg = vgg16.VGG16(weights=<path_to_weights_file>)
This works fine, and there is no need to modify the vgg16.py file at all.
Copying the .h5 file to ~/.keras/models/ and modifying vgg16.py at line 30 to
WEIGHTS_PATH_NO_TOP = ('.keras/models/vgg16_weights_tf_dim_ordering_tf_kernels_notop.h5')
seems to work fine.
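One caveat with that edit: '.keras/models/...' is a relative path, so it only resolves when Python happens to run from your home directory. A slightly more robust variant (my suggestion, not part of the original answer) expands the home directory explicitly:
import os

WEIGHTS_PATH_NO_TOP = os.path.expanduser(
    '~/.keras/models/vgg16_weights_tf_dim_ordering_tf_kernels_notop.h5')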
The weights path on different systems:
Linux:
~/.keras/models/
Windows:
the settings/.keras/models/ folder of your Python installation
Anaconda on Windows:
D:\Anaconda3\Lib\site-packages\tensorflow\contrib\keras\api\keras\applications\vgg16
Putting the downloaded .h5 file into one of these local folders seems to work.
