Odd Dimension() object in shape of input?

I am experiencing issues with TensorFlow. I've narrowed them down to the format of the input shape passed to the build() method of a custom Attention layer in tf.keras: I am getting a list of Dimension() objects instead of a plain tuple of integers. Please help.
Attention Layer
x_attn = []
for i in range(self.attn_range): # 4
    x_attn.append(Attention()(x))
    print("DEBUG:00001")
x = L.Concatenate(-1)(x_attn)
Custom Attention Layer build() method:
def build(self, input_shape):
    print("DEBUG:00005")
    params_shape = list(input_shape[1:])
    print("DEBUG:00007", params_shape)
    self.query_weights = self.add_weight(
        name='q_weights',
        shape=params_shape,
        initializer=self.q_weights_init
    )
    print("DEBUG:00006")
    self.key_weights = self.add_weight(
        name='key_weights',
        shape=params_shape,
        initializer=self.key_weights_init
    )
    self.val_weights = self.add_weight(
        name='val_weights',
        shape=params_shape,
        initializer=self.value_weights_init
    )
Output of the params_shape debug print:
DEBUG:00007 [Dimension(10), Dimension(5), Dimension(128)]
Edit:
Full Error Message as Requested:
Traceback (most recent call last):
File "/home/ai/anaconda3/lib/python3.7/site-packages/tensorflow/python/framework/tensor_util.py", line 558, in make_tensor_proto
str_values = [compat.as_bytes(x) for x in proto_values]
File "/home/ai/anaconda3/lib/python3.7/site-packages/tensorflow/python/framework/tensor_util.py", line 558, in <listcomp>
str_values = [compat.as_bytes(x) for x in proto_values]
File "/home/ai/anaconda3/lib/python3.7/site-packages/tensorflow/python/util/compat.py", line 65, in as_bytes
(bytes_or_text,))
TypeError: Expected binary or unicode string, got Dimension(10)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "Empowerment\.py", line 327, in <module>
env_actor = ComputerEnv()
File "Empowerment\.py", line 225, in __init__
self.dqn = ICDQNAgent(self.state_size + (3,), self.state_size[0], 4)
File "Empowerment\.py", line 78, in __init__
self.model, self.autoencoder, self.critic = self.build_model()
File "Empowerment\.py", line 99, in build_model
x_attn.append(Attention()(x))
File "/home/ai/anaconda3/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py", line 591, in __call__
self._maybe_build(inputs)
File "/home/ai/anaconda3/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py", line 1881, in _maybe_build
self.build(input_shapes)
File "/home/ai/Desktop/ai_proj/layers.py", line 69, in build
initializer=self.q_weights_init
File "/home/ai/anaconda3/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py", line 384, in add_weight
aggregation=aggregation)
File "/home/ai/anaconda3/lib/python3.7/site-packages/tensorflow/python/training/tracking/base.py", line 663, in _add_variable_with_custom_getter
**kwargs_for_getter)
File "/home/ai/anaconda3/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer_utils.py", line 155, in make_variable
shape=variable_shape if variable_shape.rank else None)
File "/home/ai/anaconda3/lib/python3.7/site-packages/tensorflow/python/ops/variables.py", line 259, in __call__
return cls._variable_v1_call(*args, **kwargs)
File "/home/ai/anaconda3/lib/python3.7/site-packages/tensorflow/python/ops/variables.py", line 220, in _variable_v1_call
shape=shape)
File "/home/ai/anaconda3/lib/python3.7/site-packages/tensorflow/python/ops/variables.py", line 198, in <lambda>
previous_getter = lambda **kwargs: default_variable_creator(None, **kwargs)
File "/home/ai/anaconda3/lib/python3.7/site-packages/tensorflow/python/ops/variable_scope.py", line 2495, in default_variable_creator
shape=shape)
File "/home/ai/anaconda3/lib/python3.7/site-packages/tensorflow/python/ops/variables.py", line 263, in __call__
return super(VariableMetaclass, cls).__call__(*args, **kwargs)
File "/home/ai/anaconda3/lib/python3.7/site-packages/tensorflow/python/ops/resource_variable_ops.py", line 460, in __init__
shape=shape)
File "/home/ai/anaconda3/lib/python3.7/site-packages/tensorflow/python/ops/resource_variable_ops.py", line 604, in _init_from_args
initial_value() if init_from_fn else initial_value,
File "/home/ai/anaconda3/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer_utils.py", line 135, in <lambda>
init_val = lambda: initializer(shape, dtype=dtype)
File "/home/ai/anaconda3/lib/python3.7/site-packages/tensorflow/python/ops/init_ops.py", line 533, in __call__
shape, -limit, limit, dtype, seed=self.seed)
File "/home/ai/anaconda3/lib/python3.7/site-packages/tensorflow/python/ops/random_ops.py", line 239, in random_uniform
shape = _ShapeTensor(shape)
File "/home/ai/anaconda3/lib/python3.7/site-packages/tensorflow/python/ops/random_ops.py", line 44, in _ShapeTensor
return ops.convert_to_tensor(shape, dtype=dtype, name="shape")
File "/home/ai/anaconda3/lib/python3.7/site-packages/tensorflow/python/framework/ops.py", line 1087, in convert_to_tensor
return convert_to_tensor_v2(value, dtype, preferred_dtype, name)
File "/home/ai/anaconda3/lib/python3.7/site-packages/tensorflow/python/framework/ops.py", line 1145, in convert_to_tensor_v2
as_ref=False)
File "/home/ai/anaconda3/lib/python3.7/site-packages/tensorflow/python/framework/ops.py", line 1224, in internal_convert_to_tensor
ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
File "/home/ai/anaconda3/lib/python3.7/site-packages/tensorflow/python/framework/constant_op.py", line 305, in _constant_tensor_conversion_function
return constant(v, dtype=dtype, name=name)
File "/home/ai/anaconda3/lib/python3.7/site-packages/tensorflow/python/framework/constant_op.py", line 246, in constant
allow_broadcast=True)
File "/home/ai/anaconda3/lib/python3.7/site-packages/tensorflow/python/framework/constant_op.py", line 284, in _constant_impl
allow_broadcast=allow_broadcast))
File "/home/ai/anaconda3/lib/python3.7/site-packages/tensorflow/python/framework/tensor_util.py", line 562, in make_tensor_proto
"supported type." % (type(values), values))
TypeError: Failed to convert object of type <class 'list'> to Tensor. Contents: [Dimension(10), Dimension(5), Dimension(128)]. Consider casting elements to a supported type.
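A minimal sketch of one possible fix (my suggestion, not from the original post): convert the Dimension objects to plain Python ints before handing the shape to add_weight, since the weight initializer eventually passes the shape to tf.random_uniform, which cannot convert a list of Dimension objects to a tensor. In TF 1.x the input_shape given to build() is a TensorShape, so as_list() does the conversion:

def build(self, input_shape):
    # as_list() turns the TensorShape of Dimension objects into plain ints
    # (or None for unknown dimensions).
    params_shape = tf.TensorShape(input_shape).as_list()[1:]
    self.query_weights = self.add_weight(
        name='q_weights',
        shape=params_shape,
        initializer=self.q_weights_init
    )
    # key_weights and val_weights can be created the same way with params_shape.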

Related

Parameter Estimation with parmest for differential equations in Pyomo

I have a model and I want to estimate the values of its variables using parmest in Pyomo.
I get a RuntimeError and a TypeError, and I can't figure out why.
Below is my code:
data = pd.read_csv('Test.csv')
#data
def modelfunc(data):
    model = m = ConcreteModel()
    m.t = ContinuousSet(bounds=(0,1))

    # Define parameters
    m.Ssu_in = Param(m.t, mutable=True)
    m.Saa_in = Param(m.t, mutable=True)
    m.Sfa_in = Param(m.t, mutable=True)
    m.Q = Param(m.t, mutable=True)
    m.V_liq = Param(initialize=3400, within=PositiveReals)

    # Variables
    m.S_su = Var(m.t, initialize=0.012394, domain=PositiveReals, bounds=(0.001,1))
    m.S_aa = Var(m.t, initialize=0.0055432, domain=PositiveReals, bounds=(0,0.1))
    m.S_fa = Var(m.t, initialize=0.10741, domain=PositiveReals, bounds=(0.001,2))

    # Derivatives
    m.dS_su_dt = DerivativeVar(m.S_su, wrt=m.t)
    m.dS_aa_dt = DerivativeVar(m.S_aa, wrt=m.t)
    m.dS_fa_dt = DerivativeVar(m.S_fa, wrt=m.t)

    # Initial values
    m.S_su[0].fix(0.012394)
    m.S_aa[0].fix(0.0055432)
    m.S_fa[0].fix(0.10741)

    # Discretize model using the finite difference method
    discretizer = TransformationFactory('dae.finite_difference')
    discretizer.apply_to(m, nfe=50, wrt=m.t, scheme='BACKWARD')

    # Load data into the following variables
    timepoints = list(m.t)
    data_timepoints = data['time'].tolist()
    data_profiles1 = data['S_su'].tolist()
    data_profiles2 = data['S_aa'].tolist()
    data_profiles3 = data['S_fa'].tolist()
    data_profiles27 = data['Q'].tolist()

    # Interpolate the data
    interp_Ssu_values = np.interp(timepoints, data_timepoints, data_profiles1)
    interp_Saa_values = np.interp(timepoints, data_timepoints, data_profiles2)
    interp_Sfa_values = np.interp(timepoints, data_timepoints, data_profiles3)
    interp_Q_values = np.interp(timepoints, data_timepoints, data_profiles27)

    for i, t in enumerate(timepoints):
        m.Ssu_in[t] = interp_Ssu_values[i]
        m.Saa_in[t] = interp_Saa_values[i]
        m.Sfa_in[t] = interp_Sfa_values[i]
        m.Q[t] = interp_Q_values[i]

    # Constraints
    def S_su_out_bal(m, t):
        return m.dS_su_dt[t] == (m.Q[t]/m.V_liq) * (m.Ssu_in[t] - m.S_su[t]) + 0.000662979
    m.Ssu_outcon = Constraint(m.t, rule=S_su_out_bal)

    def S_aa_out_bal(m, t):
        return m.dS_aa_dt[t] == (m.Q[t]/m.V_liq) * (m.Saa_in[t] - m.S_aa[t]) - 0.00202160
    m.Saa_outcon = Constraint(m.t, rule=S_aa_out_bal)

    def S_fa_out_bal(m, t):
        return m.dS_fa_dt[t] == (m.Q[t]/m.V_liq) * (m.Sfa_in[t] - m.S_fa[t]) + 0.005667982
    m.Sfa_outcon = Constraint(m.t, rule=S_fa_out_bal)

    return model

# Vars to estimate
theta_names = ['m.S_su', 'm.S_aa', 'm.S_fa']

# Sum of squared error
def SSE(m, data):
    expr = (float(data['S_su']) - m.S_su)**2 + \
           (float(data['S_aa']) - m.S_aa)**2 + \
           (float(data['S_fa']) - m.S_fa)**2
    return expr

# Create an instance of the Parameter Estimation
pest = parmest.Estimator(modelfunc, data, theta_names, SSE, tee=True)

# Parameter Estimation
obj, theta = pest.theta_est()
I get the following error:
ERROR: Rule failed for Expression 'SecondStageCost' with index None:
TypeError: unsupported operand type(s) for -: 'float' and 'IndexedVar'
ERROR: Constructing component 'SecondStageCost' from data=None failed:
TypeError: unsupported operand type(s) for -: 'float' and 'IndexedVar'
--- Logging error ---
Traceback (most recent call last):
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\logging\__init__.py", line 1085, in emit
msg = self.format(record)
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\logging\__init__.py", line 929, in format
return fmt.format(record)
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\site-packages\pyomo\common\log.py", line 238, in format
return self.standard_formatter.format(record)
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\site-packages\pyomo\common\log.py", line 107, in format
msg = record.getMessage()
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\logging\__init__.py", line 373, in getMessage
msg = msg % self.args
TypeError: not enough arguments for format string
Call stack:
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\runpy.py", line 87, in _run_code
exec(code, run_globals)
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\site-packages\ipykernel_launcher.py", line 17, in <module>
app.launch_new_instance()
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\site-packages\traitlets\config\application.py", line 976, in launch_instance
app.start()
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\site-packages\ipykernel\kernelapp.py", line 712, in start
self.io_loop.start()
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\site-packages\tornado\platform\asyncio.py", line 199, in start
self.asyncio_loop.run_forever()
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\asyncio\base_events.py", line 570, in run_forever
self._run_once()
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\asyncio\base_events.py", line 1859, in _run_once
handle._run()
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\asyncio\events.py", line 81, in _run
self._context.run(self._callback, *self._args)
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\site-packages\ipykernel\kernelbase.py", line 510, in dispatch_queue
await self.process_one()
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\site-packages\ipykernel\kernelbase.py", line 499, in process_one
await dispatch(*args)
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\site-packages\ipykernel\kernelbase.py", line 406, in dispatch_shell
await result
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\site-packages\ipykernel\kernelbase.py", line 730, in execute_request
reply_content = await reply_content
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\site-packages\ipykernel\ipkernel.py", line 383, in do_execute
res = shell.run_cell(
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\site-packages\ipykernel\zmqshell.py", line 528, in run_cell
return super().run_cell(*args, **kwargs)
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\site-packages\IPython\core\interactiveshell.py", line 2975, in run_cell
result = self._run_cell(
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\site-packages\IPython\core\interactiveshell.py", line 3030, in _run_cell
return runner(coro)
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\site-packages\IPython\core\async_helpers.py", line 78, in _pseudo_sync_runner
coro.send(None)
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\site-packages\IPython\core\interactiveshell.py", line 3257, in run_cell_async
has_raised = await self.run_ast_nodes(code_ast.body, cell_name,
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\site-packages\IPython\core\interactiveshell.py", line 3473, in run_ast_nodes
if (await self.run_code(code, result, async_=asy)):
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\site-packages\IPython\core\interactiveshell.py", line 3553, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "C:\Users\mfo21001\AppData\Local\Temp\ipykernel_4860\3065394695.py", line 84, in <cell line: 84>
obj, theta = pest.theta_est()
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\site-packages\pyomo\contrib\parmest\parmest.py", line 687, in theta_est
return self._Q_opt(solver=solver, return_values=return_values,
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\site-packages\pyomo\contrib\parmest\parmest.py", line 432, in _Q_opt
ef = local_ef.create_EF(scen_names,
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\site-packages\pyomo\contrib\parmest\create_ef.py", line 88, in create_EF
scen_dict = {
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\site-packages\pyomo\contrib\parmest\create_ef.py", line 89, in <dictcomp>
name: scenario_creator(name, **scenario_creator_kwargs)
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\site-packages\pyomo\contrib\parmest\parmest.py", line 143, in _experiment_instance_creation_callback
instance = callback(experiment_number = exp_num, cb_data = cb_data)
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\site-packages\pyomo\contrib\parmest\parmest.py", line 391, in _instance_creation_callback
model = self._create_parmest_model(exp_data)
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\site-packages\pyomo\contrib\parmest\parmest.py", line 344, in _create_parmest_model
logger.warning(
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\logging\__init__.py", line 1458, in warning
self._log(WARNING, msg, args, **kwargs)
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\logging\__init__.py", line 1589, in _log
self.handle(record)
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\logging\__init__.py", line 1599, in handle
self.callHandlers(record)
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\logging\__init__.py", line 1661, in callHandlers
hdlr.handle(record)
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\logging\__init__.py", line 954, in handle
self.emit(record)
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\site-packages\pyomo\common\log.py", line 250, in emit
super(StdoutHandler, self).emit(record)
Message: 'theta_name[%s] (%s) was not found on the model'
Arguments: ((0, 'm.S_su'),)
--- Logging error ---
Traceback (most recent call last):
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\logging\__init__.py", line 1085, in emit
msg = self.format(record)
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\logging\__init__.py", line 929, in format
return fmt.format(record)
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\site-packages\pyomo\common\log.py", line 238, in format
return self.standard_formatter.format(record)
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\site-packages\pyomo\common\log.py", line 107, in format
msg = record.getMessage()
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\logging\__init__.py", line 373, in getMessage
msg = msg % self.args
TypeError: not enough arguments for format string
Call stack:
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\runpy.py", line 87, in _run_code
exec(code, run_globals)
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\site-packages\ipykernel_launcher.py", line 17, in <module>
app.launch_new_instance()
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\site-packages\traitlets\config\application.py", line 976, in launch_instance
app.start()
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\site-packages\ipykernel\kernelapp.py", line 712, in start
self.io_loop.start()
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\site-packages\tornado\platform\asyncio.py", line 199, in start
self.asyncio_loop.run_forever()
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\asyncio\base_events.py", line 570, in run_forever
self._run_once()
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\asyncio\base_events.py", line 1859, in _run_once
handle._run()
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\asyncio\events.py", line 81, in _run
self._context.run(self._callback, *self._args)
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\site-packages\ipykernel\kernelbase.py", line 510, in dispatch_queue
await self.process_one()
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\site-packages\ipykernel\kernelbase.py", line 499, in process_one
await dispatch(*args)
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\site-packages\ipykernel\kernelbase.py", line 406, in dispatch_shell
await result
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\site-packages\ipykernel\kernelbase.py", line 730, in execute_request
reply_content = await reply_content
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\site-packages\ipykernel\ipkernel.py", line 383, in do_execute
res = shell.run_cell(
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\site-packages\ipykernel\zmqshell.py", line 528, in run_cell
return super().run_cell(*args, **kwargs)
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\site-packages\IPython\core\interactiveshell.py", line 2975, in run_cell
result = self._run_cell(
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\site-packages\IPython\core\interactiveshell.py", line 3030, in _run_cell
return runner(coro)
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\site-packages\IPython\core\async_helpers.py", line 78, in _pseudo_sync_runner
coro.send(None)
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\site-packages\IPython\core\interactiveshell.py", line 3257, in run_cell_async
has_raised = await self.run_ast_nodes(code_ast.body, cell_name,
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\site-packages\IPython\core\interactiveshell.py", line 3473, in run_ast_nodes
if (await self.run_code(code, result, async_=asy)):
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\site-packages\IPython\core\interactiveshell.py", line 3553, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "C:\Users\mfo21001\AppData\Local\Temp\ipykernel_4860\3065394695.py", line 84, in <cell line: 84>
obj, theta = pest.theta_est()
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\site-packages\pyomo\contrib\parmest\parmest.py", line 687, in theta_est
return self._Q_opt(solver=solver, return_values=return_values,
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\site-packages\pyomo\contrib\parmest\parmest.py", line 432, in _Q_opt
ef = local_ef.create_EF(scen_names,
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\site-packages\pyomo\contrib\parmest\create_ef.py", line 88, in create_EF
scen_dict = {
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\site-packages\pyomo\contrib\parmest\create_ef.py", line 89, in <dictcomp>
name: scenario_creator(name, **scenario_creator_kwargs)
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\site-packages\pyomo\contrib\parmest\parmest.py", line 143, in _experiment_instance_creation_callback
instance = callback(experiment_number = exp_num, cb_data = cb_data)
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\site-packages\pyomo\contrib\parmest\parmest.py", line 391, in _instance_creation_callback
model = self._create_parmest_model(exp_data)
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\site-packages\pyomo\contrib\parmest\parmest.py", line 344, in _create_parmest_model
logger.warning(
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\logging\__init__.py", line 1458, in warning
self._log(WARNING, msg, args, **kwargs)
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\logging\__init__.py", line 1589, in _log
self.handle(record)
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\logging\__init__.py", line 1599, in handle
self.callHandlers(record)
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\logging\__init__.py", line 1661, in callHandlers
hdlr.handle(record)
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\logging\__init__.py", line 954, in handle
self.emit(record)
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\site-packages\pyomo\common\log.py", line 250, in emit
super(StdoutHandler, self).emit(record)
Message: 'theta_name[%s] (%s) was not found on the model'
Arguments: ((1, 'm.S_aa'),)
--- Logging error ---
Traceback (most recent call last):
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\logging\__init__.py", line 1085, in emit
msg = self.format(record)
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\logging\__init__.py", line 929, in format
return fmt.format(record)
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\site-packages\pyomo\common\log.py", line 238, in format
return self.standard_formatter.format(record)
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\site-packages\pyomo\common\log.py", line 107, in format
msg = record.getMessage()
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\logging\__init__.py", line 373, in getMessage
msg = msg % self.args
TypeError: not enough arguments for format string
Call stack:
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\runpy.py", line 87, in _run_code
exec(code, run_globals)
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\site-packages\ipykernel_launcher.py", line 17, in <module>
app.launch_new_instance()
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\site-packages\traitlets\config\application.py", line 976, in launch_instance
app.start()
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\site-packages\ipykernel\kernelapp.py", line 712, in start
self.io_loop.start()
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\site-packages\tornado\platform\asyncio.py", line 199, in start
self.asyncio_loop.run_forever()
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\asyncio\base_events.py", line 570, in run_forever
self._run_once()
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\asyncio\base_events.py", line 1859, in _run_once
handle._run()
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\asyncio\events.py", line 81, in _run
self._context.run(self._callback, *self._args)
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\site-packages\ipykernel\kernelbase.py", line 510, in dispatch_queue
await self.process_one()
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\site-packages\ipykernel\kernelbase.py", line 499, in process_one
await dispatch(*args)
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\site-packages\ipykernel\kernelbase.py", line 406, in dispatch_shell
await result
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\site-packages\ipykernel\kernelbase.py", line 730, in execute_request
reply_content = await reply_content
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\site-packages\ipykernel\ipkernel.py", line 383, in do_execute
res = shell.run_cell(
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\site-packages\ipykernel\zmqshell.py", line 528, in run_cell
return super().run_cell(*args, **kwargs)
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\site-packages\IPython\core\interactiveshell.py", line 2975, in run_cell
result = self._run_cell(
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\site-packages\IPython\core\interactiveshell.py", line 3030, in _run_cell
return runner(coro)
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\site-packages\IPython\core\async_helpers.py", line 78, in _pseudo_sync_runner
coro.send(None)
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\site-packages\IPython\core\interactiveshell.py", line 3257, in run_cell_async
has_raised = await self.run_ast_nodes(code_ast.body, cell_name,
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\site-packages\IPython\core\interactiveshell.py", line 3473, in run_ast_nodes
if (await self.run_code(code, result, async_=asy)):
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\site-packages\IPython\core\interactiveshell.py", line 3553, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "C:\Users\mfo21001\AppData\Local\Temp\ipykernel_4860\3065394695.py", line 84, in <cell line: 84>
obj, theta = pest.theta_est()
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\site-packages\pyomo\contrib\parmest\parmest.py", line 687, in theta_est
return self._Q_opt(solver=solver, return_values=return_values,
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\site-packages\pyomo\contrib\parmest\parmest.py", line 432, in _Q_opt
ef = local_ef.create_EF(scen_names,
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\site-packages\pyomo\contrib\parmest\create_ef.py", line 88, in create_EF
scen_dict = {
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\site-packages\pyomo\contrib\parmest\create_ef.py", line 89, in <dictcomp>
name: scenario_creator(name, **scenario_creator_kwargs)
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\site-packages\pyomo\contrib\parmest\parmest.py", line 143, in _experiment_instance_creation_callback
instance = callback(experiment_number = exp_num, cb_data = cb_data)
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\site-packages\pyomo\contrib\parmest\parmest.py", line 391, in _instance_creation_callback
model = self._create_parmest_model(exp_data)
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\site-packages\pyomo\contrib\parmest\parmest.py", line 344, in _create_parmest_model
logger.warning(
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\logging\__init__.py", line 1458, in warning
self._log(WARNING, msg, args, **kwargs)
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\logging\__init__.py", line 1589, in _log
self.handle(record)
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\logging\__init__.py", line 1599, in handle
self.callHandlers(record)
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\logging\__init__.py", line 1661, in callHandlers
hdlr.handle(record)
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\logging\__init__.py", line 954, in handle
self.emit(record)
File "c:\users\mfo21001\anaconda5\envs\watertap\lib\site-packages\pyomo\common\log.py", line 250, in emit
super(StdoutHandler, self).emit(record)
Message: 'theta_name[%s] (%s) was not found on the model'
Arguments: ((2, 'm.S_fa'),)
TypeError Traceback (most recent call last)
c:\users\mfo21001\anaconda5\envs\watertap\lib\site-packages\pyomo\contrib\parmest\parmest.py in _experiment_instance_creation_callback(scenario_name, node_names, cb_data)
142 try:
--> 143 instance = callback(experiment_number = exp_num, cb_data = cb_data)
144 except TypeError:
c:\users\mfo21001\anaconda5\envs\watertap\lib\site-packages\pyomo\contrib\parmest\parmest.py in _instance_creation_callback(self, experiment_number, cb_data)
390 raise RuntimeError(f'Unexpected data format for cb_data={cb_data}')
--> 391 model = self._create_parmest_model(exp_data)
392
c:\users\mfo21001\anaconda5\envs\watertap\lib\site-packages\pyomo\contrib\parmest\parmest.py in _create_parmest_model(self, data)
365 model.FirstStageCost = pyo.Expression(rule=FirstStageCost_rule)
--> 366 model.SecondStageCost = pyo.Expression(rule=_SecondStageCostExpr(self.obj_function, data))
367
c:\users\mfo21001\anaconda5\envs\watertap\lib\site-packages\pyomo\core\base\block.py in __setattr__(self, name, val)
543 #
--> 544 self.add_component(name, val)
545 else:
c:\users\mfo21001\anaconda5\envs\watertap\lib\site-packages\pyomo\core\base\block.py in add_component(self, name, val)
1088 try:
-> 1089 val.construct(data)
1090 except:
c:\users\mfo21001\anaconda5\envs\watertap\lib\site-packages\pyomo\core\base\expression.py in construct(self, data)
368 assert data is None
--> 369 self._construct_from_rule_using_setitem()
370 finally:
c:\users\mfo21001\anaconda5\envs\watertap\lib\site-packages\pyomo\core\base\indexed_component.py in _construct_from_rule_using_setitem(self)
707 # constant, then only call the rule once.
--> 708 val = rule(block, None)
709 for index in self.index_set():
c:\users\mfo21001\anaconda5\envs\watertap\lib\site-packages\pyomo\core\base\initializer.py in __call__(self, parent, idx)
372 def __call__(self, parent, idx):
--> 373 return self._fcn(parent)
374
c:\users\mfo21001\anaconda5\envs\watertap\lib\site-packages\pyomo\contrib\parmest\parmest.py in __call__(self, model)
269 def __call__(self, model):
--> 270 return self._ssc_function(model, self._data)
271
~\AppData\Local\Temp\ipykernel_4860\3065394695.py in SSE(m, data)
74 def SSE(m, data):
---> 75 expr = (float(data['S_su']) - m.S_su)**2 + \
76 (float(data['S_aa']) - m.S_aa)**2 + \
TypeError: unsupported operand type(s) for -: 'float' and 'IndexedVar'
During handling of the above exception, another exception occurred:
RuntimeError Traceback (most recent call last)
~\AppData\Local\Temp\ipykernel_4860\3065394695.py in <cell line: 84>()
82
83 # Parameter Estimation
---> 84 obj, theta = pest.theta_est()
c:\users\mfo21001\anaconda5\envs\watertap\lib\site-packages\pyomo\contrib\parmest\parmest.py in theta_est(self, solver, return_values, calc_cov, cov_n)
685 assert cov_n > len(self.theta_names), "The number of datapoints must be greater than the number of parameters to estimate"
686
--> 687 return self._Q_opt(solver=solver, return_values=return_values,
688 bootlist=None, calc_cov=calc_cov, cov_n=cov_n)
689
c:\users\mfo21001\anaconda5\envs\watertap\lib\site-packages\pyomo\contrib\parmest\parmest.py in _Q_opt(self, ThetaVals, solver, return_values, bootlist, calc_cov, cov_n)
430 scenario_creator_kwargs=scenario_creator_options)
431 else:
--> 432 ef = local_ef.create_EF(scen_names,
433 _experiment_instance_creation_callback,
434 EF_name = "_Q_opt",
c:\users\mfo21001\anaconda5\envs\watertap\lib\site-packages\pyomo\contrib\parmest\create_ef.py in create_EF(scenario_names, scenario_creator, scenario_creator_kwargs, EF_name, suppress_warnings, nonant_for_fixed_vars)
86 if scenario_creator_kwargs is None:
87 scenario_creator_kwargs = dict()
---> 88 scen_dict = {
89 name: scenario_creator(name, **scenario_creator_kwargs)
90 for name in scenario_names
c:\users\mfo21001\anaconda5\envs\watertap\lib\site-packages\pyomo\contrib\parmest\create_ef.py in <dictcomp>(.0)
87 scenario_creator_kwargs = dict()
88 scen_dict = {
---> 89 name: scenario_creator(name, **scenario_creator_kwargs)
90 for name in scenario_names
91 }
c:\users\mfo21001\anaconda5\envs\watertap\lib\site-packages\pyomo\contrib\parmest\parmest.py in _experiment_instance_creation_callback(scenario_name, node_names, cb_data)
143 instance = callback(experiment_number = exp_num, cb_data = cb_data)
144 except TypeError:
--> 145 raise RuntimeError("Only one callback signature is supported: "
146 "callback(experiment_number, cb_data) ")
147 """
RuntimeError: Only one callback signature is supported: callback(experiment_number, cb_data)
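A hedged reading of the two failures (my interpretation, not part of the original post): the repeated "theta_name[...] was not found on the model" warnings suggest that the entries in theta_names need to name components that actually exist on the returned model, and the TypeError comes from SSE, where m.S_su, m.S_aa and m.S_fa are variables indexed over m.t, so a plain float cannot be subtracted from the whole IndexedVar. One way the objective could be written instead is to compare each measurement against the variable at a specific time point, for example:

# Minimal sketch under my assumptions about the data layout: evaluate each
# state at the final time point so the expression uses scalar variable data
# rather than the indexed container.
def SSE(m, data):
    t_end = m.t.last()
    return ((float(data['S_su']) - m.S_su[t_end])**2
            + (float(data['S_aa']) - m.S_aa[t_end])**2
            + (float(data['S_fa']) - m.S_fa[t_end])**2)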

Saving this Keras model blows up

Consider the following program:
import tensorflow as tf
from tensorflow.linalg import matmul
import tensorflow.keras as tfk
import tensorflow.keras.backend as K
import numpy as np

class MinimalRNNCell(tfk.layers.Layer):
    def __init__(self, units, **kwargs):
        self.units = units
        self.state_size = units
        super(MinimalRNNCell, self).__init__(**kwargs)

    def build(self, input_shape):
        self.kernel = self.add_weight(shape=(input_shape[-1], self.units),
                                      initializer='uniform',
                                      name='kernel')
        self.recurrent_kernel = self.add_weight(
            shape=(self.units, self.units),
            initializer='uniform',
            name='recurrent_kernel')
        self.built = True

    def call(self, inputs, states=None, constants=None, *args, **kwargs):
        prev_output = states[0]
        print("constants: ", constants[0].name)
        h = matmul(inputs, self.kernel) + constants[0]
        output = h + matmul(prev_output, self.recurrent_kernel)
        return output, [output]

    def get_config(self):
        return dict(super().get_config(), **{'units': self.units})

cell = MinimalRNNCell(32)
x = tfk.Input((None, 5), name='x')
z = tfk.Input((1,), name='z')
layer = tfk.layers.RNN(cell, name='rnn')
y = layer(x, constants=[z])
model = tfk.Model(inputs=[x, z], outputs=[y])
model.compile(optimizer='adam', loss='mse')
model.predict([np.array([[[0,0,0,0,0]]]), np.array([[0]])])
model.save('tmp.model')
Everything works until saving, at which point it blows up:
constants: z:0
constants: z:0
constants: constants:0
Traceback (most recent call last):
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/framework/ops.py", line 1880, in _create_c_op
c_op = pywrap_tf_session.TF_FinishOperation(op_desc)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Dimensions must be equal, but are 32 and 5 for '{{node add/add}} = AddV2[T=DT_FLOAT](MatMul, constants)' with input shapes: [?,32], [?,?,5].
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/ops/math_ops.py", line 1245, in binary_op_wrapper
out = r_op(x)
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/ops/math_ops.py", line 1267, in r_binary_op_wrapper
return func(x, y, name=name)
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/util/dispatch.py", line 206, in wrapper
return target(*args, **kwargs)
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/ops/math_ops.py", line 1565, in _add_dispatch
return gen_math_ops.add_v2(x, y, name=name)
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/ops/gen_math_ops.py", line 532, in add_v2
_, _, _op, _outputs = _op_def_library._apply_op_helper(
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/framework/op_def_library.py", line 748, in _apply_op_helper
op = g._create_op_internal(op_type_name, inputs, dtypes=None,
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/framework/func_graph.py", line 599, in _create_op_internal
return super(FuncGraph, self)._create_op_internal( # pylint: disable=protected-access
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/framework/ops.py", line 3557, in _create_op_internal
ret = Operation(
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/framework/ops.py", line 2041, in __init__
self._c_op = _create_c_op(self._graph, node_def, inputs,
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/framework/ops.py", line 1883, in _create_c_op
raise ValueError(str(e))
ValueError: Dimensions must be equal, but are 32 and 5 for '{{node add/add}} = AddV2[T=DT_FLOAT](MatMul, constants)' with input shapes: [?,32], [?,?,5].
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "example.py", line 43, in <module>
model.save('tmp.model')
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/keras/engine/training.py", line 2111, in save
save.save_model(self, filepath, overwrite, include_optimizer, save_format,
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/keras/saving/save.py", line 150, in save_model
saved_model_save.save(model, filepath, overwrite, include_optimizer,
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/keras/saving/saved_model/save.py", line 89, in save
saved_nodes, node_paths = save_lib.save_and_return_nodes(
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/saved_model/save.py", line 1103, in save_and_return_nodes
_build_meta_graph(obj, signatures, options, meta_graph_def,
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/saved_model/save.py", line 1290, in _build_meta_graph
return _build_meta_graph_impl(obj, signatures, options, meta_graph_def,
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/saved_model/save.py", line 1207, in _build_meta_graph_impl
signatures = signature_serialization.find_function_to_export(
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/saved_model/signature_serialization.py", line 99, in find_function_to_export
functions = saveable_view.list_functions(saveable_view.root)
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/saved_model/save.py", line 154, in list_functions
obj_functions = obj._list_functions_for_serialization(  # pylint: disable=protected-access
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/keras/engine/training.py", line 2713, in _list_functions_for_serialization
functions = super(
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/keras/engine/base_layer.py", line 3016, in _list_functions_for_serialization
return (self._trackable_saved_model_saver
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/keras/saving/saved_model/base_serialization.py", line 92, in list_functions_for_serialization
fns = self.functions_to_serialize(serialization_cache)
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/keras/saving/saved_model/layer_serialization.py", line 73, in functions_to_serialize
return (self._get_serialized_attributes(
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/keras/saving/saved_model/layer_serialization.py", line 89, in _get_serialized_attributes
object_dict, function_dict = self._get_serialized_attributes_internal(
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/keras/saving/saved_model/model_serialization.py", line 53, in _get_serialized_attributes_internal
super(ModelSavedModelSaver, self)._get_serialized_attributes_internal(
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/keras/saving/saved_model/layer_serialization.py", line 99, in _get_serialized_attributes_internal
functions = save_impl.wrap_layer_functions(self.obj, serialization_cache)
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/keras/saving/saved_model/save_impl.py", line 154, in wrap_layer_functions
original_fns = _replace_child_layer_functions(layer, serialization_cache)
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/keras/saving/saved_model/save_impl.py", line 284, in _replace_child_layer_functions
child_layer._trackable_saved_model_saver._get_serialized_attributes(
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/keras/saving/saved_model/layer_serialization.py", line 89, in _get_serialized_attributes
object_dict, function_dict = self._get_serialized_attributes_internal(
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/keras/saving/saved_model/layer_serialization.py", line 151, in _get_serialized_attributes_internal
super(RNNSavedModelSaver, self)._get_serialized_attributes_internal(
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/keras/saving/saved_model/layer_serialization.py", line 99, in _get_serialized_attributes_internal
functions = save_impl.wrap_layer_functions(self.obj, serialization_cache)
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/keras/saving/saved_model/save_impl.py", line 204, in wrap_layer_functions
fn.get_concrete_function()
File "/usr/lib64/python3.8/contextlib.py", line 120, in __exit__
next(self.gen)
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/keras/saving/saved_model/save_impl.py", line 367, in tracing_scope
fn.get_concrete_function(*args, **kwargs)
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 1367, in get_concrete_function
concrete = self._get_concrete_function_garbage_collected(*args, **kwargs)
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 1273, in _get_concrete_function_garbage_collected
self._initialize(args, kwargs, add_initializers_to=initializers)
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 763, in _initialize
self._stateful_fn._get_concrete_function_internal_garbage_collected( # pylint: disable=protected-access
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/eager/function.py", line 3050, in _get_concrete_function_internal_garbage_collected
graph_function, _ = self._maybe_define_function(args, kwargs)
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/eager/function.py", line 3444, in _maybe_define_function
graph_function = self._create_graph_function(args, kwargs)
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/eager/function.py", line 3279, in _create_graph_function
func_graph_module.func_graph_from_py_func(
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/framework/func_graph.py", line 999, in func_graph_from_py_func
func_outputs = python_func(*func_args, **func_kwargs)
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 672, in wrapped_fn
out = weak_wrapped_fn().__wrapped__(*args, **kwds)
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/keras/saving/saved_model/save_impl.py", line 599, in wrapper
ret = method(*args, **kwargs)
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/keras/saving/saved_model/utils.py", line 165, in wrap_with_training_arg
return control_flow_util.smart_cond(
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/keras/utils/control_flow_util.py", line 109, in smart_cond
return smart_module.smart_cond(
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/framework/smart_cond.py", line 56, in smart_cond
return false_fn()
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/keras/saving/saved_model/utils.py", line 167, in <lambda>
lambda: replace_training_and_call(False))
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/keras/saving/saved_model/utils.py", line 163, in replace_training_and_call
return wrapped_call(*args, **kwargs)
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/keras/saving/saved_model/save_impl.py", line 681, in call
return call_and_return_conditional_losses(inputs, *args, **kwargs)[0]
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/keras/saving/saved_model/save_impl.py", line 639, in __call__
return self.wrapped_call(*args, **kwargs)
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 889, in __call__
result = self._call(*args, **kwds)
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 933, in _call
self._initialize(args, kwds, add_initializers_to=initializers)
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 763, in _initialize
self._stateful_fn._get_concrete_function_internal_garbage_collected( # pylint: disable=protected-access
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/eager/function.py", line 3050, in _get_concrete_function_internal_garbage_collected
graph_function, _ = self._maybe_define_function(args, kwargs)
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/eager/function.py", line 3444, in _maybe_define_function
graph_function = self._create_graph_function(args, kwargs)
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/eager/function.py", line 3279, in _create_graph_function
func_graph_module.func_graph_from_py_func(
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/framework/func_graph.py", line 999, in func_graph_from_py_func
func_outputs = python_func(*func_args, **func_kwargs)
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 672, in wrapped_fn
out = weak_wrapped_fn().__wrapped__(*args, **kwds)
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/keras/saving/saved_model/save_impl.py", line 599, in wrapper
ret = method(*args, **kwargs)
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/keras/saving/saved_model/utils.py", line 165, in wrap_with_training_arg
return control_flow_util.smart_cond(
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/keras/utils/control_flow_util.py", line 109, in smart_cond
return smart_module.smart_cond(
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/framework/smart_cond.py", line 56, in smart_cond
return false_fn()
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/keras/saving/saved_model/utils.py", line 167, in <lambda>
lambda: replace_training_and_call(False))
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/keras/saving/saved_model/utils.py", line 163, in replace_training_and_call
return wrapped_call(*args, **kwargs)
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/keras/saving/saved_model/save_impl.py", line 663, in call_and_return_conditional_losses
call_output = layer_call(*args, **kwargs)
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/keras/layers/recurrent.py", line 802, in call
last_output, outputs, states = backend.rnn(
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/util/dispatch.py", line 206, in wrapper
return target(*args, **kwargs)
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/keras/backend.py", line 4377, in rnn
output_time_zero, _ = step_function(
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/keras/layers/recurrent.py", line 789, in step
output, new_states = cell_call_fn(
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/keras/engine/base_layer.py", line 1030, in __call__
outputs = call_fn(inputs, *args, **kwargs)
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/keras/saving/saved_model/utils.py", line 69, in return_outputs_and_add_losses
outputs, losses = fn(*args, **kwargs)
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/keras/saving/saved_model/utils.py", line 165, in wrap_with_training_arg
return control_flow_util.smart_cond(
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/keras/utils/control_flow_util.py", line 109, in smart_cond
return smart_module.smart_cond(
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/framework/smart_cond.py", line 56, in smart_cond
return false_fn()
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/keras/saving/saved_model/utils.py", line 167, in <lambda>
lambda: replace_training_and_call(False))
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/keras/saving/saved_model/utils.py", line 163, in replace_training_and_call
return wrapped_call(*args, **kwargs)
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/keras/saving/saved_model/save_impl.py", line 639, in __call__
return self.wrapped_call(*args, **kwargs)
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 889, in __call__
result = self._call(*args, **kwds)
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 933, in _call
self._initialize(args, kwds, add_initializers_to=initializers)
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 763, in _initialize
self._stateful_fn._get_concrete_function_internal_garbage_collected( # pylint: disable=protected-access
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/eager/function.py", line 3050, in _get_concrete_function_internal_garbage_collected
graph_function, _ = self._maybe_define_function(args, kwargs)
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/eager/function.py", line 3444, in _maybe_define_function
graph_function = self._create_graph_function(args, kwargs)
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/eager/function.py", line 3279, in _create_graph_function
func_graph_module.func_graph_from_py_func(
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/framework/func_graph.py", line 999, in func_graph_from_py_func
func_outputs = python_func(*func_args, **func_kwargs)
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 672, in wrapped_fn
out = weak_wrapped_fn().__wrapped__(*args, **kwds)
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/keras/saving/saved_model/save_impl.py", line 599, in wrapper
ret = method(*args, **kwargs)
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/keras/saving/saved_model/utils.py", line 165, in wrap_with_training_arg
return control_flow_util.smart_cond(
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/keras/utils/control_flow_util.py", line 109, in smart_cond
return smart_module.smart_cond(
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/framework/smart_cond.py", line 56, in smart_cond
return false_fn()
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/keras/saving/saved_model/utils.py", line 167, in <lambda>
lambda: replace_training_and_call(False))
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/keras/saving/saved_model/utils.py", line 163, in replace_training_and_call
return wrapped_call(*args, **kwargs)
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/keras/saving/saved_model/save_impl.py", line 663, in call_and_return_conditional_losses
call_output = layer_call(*args, **kwargs)
File "example.py", line 27, in call
h = matmul(inputs, self.kernel) + constants[0]
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/ops/math_ops.py", line 1250, in binary_op_wrapper
raise e
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/ops/math_ops.py", line 1234, in binary_op_wrapper
return func(x, y, name=name)
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/util/dispatch.py", line 206, in wrapper
return target(*args, **kwargs)
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/ops/math_ops.py", line 1565, in _add_dispatch
return gen_math_ops.add_v2(x, y, name=name)
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/ops/gen_math_ops.py", line 532, in add_v2
_, _, _op, _outputs = _op_def_library._apply_op_helper(
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/framework/op_def_library.py", line 748, in _apply_op_helper
op = g._create_op_internal(op_type_name, inputs, dtypes=None,
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/framework/func_graph.py", line 599, in _create_op_internal
return super(FuncGraph, self)._create_op_internal( # pylint: disable=protected-access
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/framework/ops.py", line 3557, in _create_op_internal
ret = Operation(
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/framework/ops.py", line 2041, in __init__
self._c_op = _create_c_op(self._graph, node_def, inputs,
File "/home/itamarst/Devel/tensorflow/venv/lib64/python3.8/site-packages/tensorflow/python/framework/ops.py", line 1883, in _create_c_op
raise ValueError(str(e))
ValueError: Dimensions must be equal, but are 32 and 5 for '{{node add}} = AddV2[T=DT_FLOAT](MatMul, constants)' with input shapes: [?,32], [?,?,5].
Any idea what's going on? Original code: https://github.com/tensorflow/tensorflow/issues/48213
Resolved by appending the .h5 extension to the filename when saving the model (TF 2.6):
import tensorflow as tf
from tensorflow.linalg import matmul
import tensorflow.keras as tfk
import tensorflow.keras.backend as K
import numpy as np

class MinimalRNNCell(tfk.layers.Layer):
    def __init__(self, units, **kwargs):
        self.units = units
        self.state_size = units
        super(MinimalRNNCell, self).__init__(**kwargs)

    def build(self, input_shape):
        self.kernel = self.add_weight(shape=(input_shape[-1], self.units),
                                      initializer='uniform',
                                      name='kernel')
        self.recurrent_kernel = self.add_weight(
            shape=(self.units, self.units),
            initializer='uniform',
            name='recurrent_kernel')
        self.built = True

    def call(self, inputs, states=None, constants=None, *args, **kwargs):
        prev_output = states[0]
        print("constants: ", constants[0].name)
        h = matmul(inputs, self.kernel) + constants[0]
        output = h + matmul(prev_output, self.recurrent_kernel)
        return output, [output]

    def get_config(self):
        return dict(super().get_config(), **{'units': self.units})

cell = MinimalRNNCell(32)
x = tfk.Input((None, 5), name='x')
z = tfk.Input((1,), name='z')
layer = tfk.layers.RNN(cell, name='rnn')
y = layer(x, constants=[z])
model = tfk.Model(inputs=[x, z], outputs=[y])
model.compile(optimizer='adam', loss='mse')
model.predict([np.array([[[0,0,0,0,0]]]), np.array([[0]])])
model.save('tmp.model.h5')
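As a small variant (my addition, not part of the original answer), the HDF5 format can also be requested explicitly instead of being inferred from the filename extension:

# Equivalent to saving with a .h5 filename: pick the HDF5 serializer
# explicitly, which avoids the SavedModel tracing path that fails above.
model.save('tmp.model', save_format='h5')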

How to initialize a kernel with a tensor

I have created a custom layer in Keras that simply performs a dot product between the input and a kernel. For the kernel I wanted to use the mean of the batch as the initialization, i.e. take the mean of the batch and produce a kernel whose initial value is that mean. To do so I created a custom kernel initializer as follows:
class Tensor_Init(Initializer):
    """Initializer that generates tensors initialized to a given tensor.

    # Arguments
        Tensor: the generator tensors.
    """
    def __init__(self, Tensor=None):
        self.Tensor = Tensor

    def __call__(self, shape, dtype=None):
        return tf.Variable(self.Tensor)

    def get_config(self):
        return {'Tensor': self.Tensor}
Below is the call method of the custom layer, where I compute the mean of the batch and use it with the above initializer class to produce the kernel:
def call(self, inputs):
    data_format = conv_utils.convert_data_format(self.data_format, self.rank + 2)
    inputs = tf.extract_image_patches(
        inputs,
        ksizes=(1,) + self.kernel_size + (1,),
        strides=(1,) + self.strides + (1,),
        rates=(1,) + self.dilation_rate + (1,),
        padding=self.padding.upper(),
    )
    inputs = K.reshape(inputs, [-1, inputs.get_shape().as_list()[1], inputs.get_shape().as_list()[2],
                                self.kernel_size[0]*self.kernel_size[1], self.output_dim])
    self.kernel = self.add_weight(name='kernel', shape=(),
                                  initializer=Tensor_Init(Tensor=tf.reduce_mean(inputs, 0)),
                                  trainable=True)
    outputs = (tf.einsum('NHWKC,HWKC->NHWC', inputs, self.kernel) + self.c)**self.p
    if self.data_format == 'channels_first':
        outputs = K.permute_dimensions(outputs, (0, 3, 1, 2))
    return outputs
The model is created and compiled normally, but when I start training I get this error:
InvalidArgumentError: You must feed a value for placeholder tensor 'conv2d_1_input' with dtype float and shape [?,48,48,3]
[[node conv2d_1_input (defined at C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\keras\backend\tensorflow_backend.py:736) ]]
Original stack trace for 'conv2d_1_input':
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\ipykernel_launcher.py", line 16, in <module>
app.launch_new_instance()
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\traitlets\config\application.py", line 658, in launch_instance
app.start()
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\ipykernel\kernelapp.py", line 563, in start
self.io_loop.start()
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\tornado\platform\asyncio.py", line 148, in start
self.asyncio_loop.run_forever()
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\asyncio\base_events.py", line 438, in run_forever
self._run_once()
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\asyncio\base_events.py", line 1451, in _run_once
handle._run()
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\asyncio\events.py", line 145, in _run
self._callback(*self._args)
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\tornado\ioloop.py", line 690, in <lambda>
lambda f: self._run_callback(functools.partial(callback, future))
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\tornado\ioloop.py", line 743, in _run_callback
ret = callback()
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\tornado\gen.py", line 787, in inner
self.run()
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\tornado\gen.py", line 748, in run
yielded = self.gen.send(value)
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\ipykernel\kernelbase.py", line 378, in dispatch_queue
yield self.process_one()
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\tornado\gen.py", line 225, in wrapper
runner = Runner(result, future, yielded)
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\tornado\gen.py", line 714, in __init__
self.run()
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\tornado\gen.py", line 748, in run
yielded = self.gen.send(value)
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\ipykernel\kernelbase.py", line 365, in process_one
yield gen.maybe_future(dispatch(*args))
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\tornado\gen.py", line 209, in wrapper
yielded = next(result)
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\ipykernel\kernelbase.py", line 272, in dispatch_shell
yield gen.maybe_future(handler(stream, idents, msg))
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\tornado\gen.py", line 209, in wrapper
yielded = next(result)
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\ipykernel\kernelbase.py", line 542, in execute_request
user_expressions, allow_stdin,
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\tornado\gen.py", line 209, in wrapper
yielded = next(result)
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\ipykernel\ipkernel.py", line 294, in do_execute
res = shell.run_cell(code, store_history=store_history, silent=silent)
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\ipykernel\zmqshell.py", line 536, in run_cell
return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs)
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\IPython\core\interactiveshell.py", line 2855, in run_cell
raw_cell, store_history, silent, shell_futures)
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\IPython\core\interactiveshell.py", line 2881, in _run_cell
return runner(coro)
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\IPython\core\async_helpers.py", line 68, in _pseudo_sync_runner
coro.send(None)
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\IPython\core\interactiveshell.py", line 3058, in run_cell_async
interactivity=interactivity, compiler=compiler, result=result)
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\IPython\core\interactiveshell.py", line 3249, in run_ast_nodes
if (await self.run_code(code, result, async_=asy)):
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\IPython\core\interactiveshell.py", line 3326, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-2-35eda01d200a>", line 75, in <module>
model = create_vgg16()
File "<ipython-input-2-35eda01d200a>", line 12, in create_vgg16
model.add(Conv2D(64, (5, 5), input_shape=(48,48,3), padding='same'))
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\keras\engine\sequential.py", line 162, in add
name=layer.name + '_input')
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\keras\engine\input_layer.py", line 178, in Input
input_tensor=tensor)
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\keras\legacy\interfaces.py", line 91, in wrapper
return func(*args, **kwargs)
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\keras\engine\input_layer.py", line 87, in __init__
name=self.name)
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\keras\backend\tensorflow_backend.py", line 736, in placeholder
shape=shape, ndim=ndim, dtype=dtype, sparse=sparse, name=name)
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\keras\backend.py", line 998, in placeholder
x = array_ops.placeholder(dtype, shape=shape, name=name)
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\ops\array_ops.py", line 2143, in placeholder
return gen_array_ops.placeholder(dtype=dtype, shape=shape, name=name)
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\ops\gen_array_ops.py", line 7401, in placeholder
"Placeholder", dtype=dtype, shape=shape, name=name)
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 788, in _apply_op_helper
op_def=op_def)
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\util\deprecation.py", line 507, in new_func
return func(*args, **kwargs)
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\framework\ops.py", line 3616, in create_op
op_def=op_def)
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\framework\ops.py", line 2005, in __init__
self._traceback = tf_stack.extract_stack()
I was able to pass the mean of the batch to the kernel by simply creating a zero-initialized kernel and then assigning the mean value to it, without even creating a custom initializer. I modified the custom layer as follows:
def call(self, inputs):
    data_format = conv_utils.convert_data_format(self.data_format, self.rank + 2)
    inputs = tf.extract_image_patches(
        inputs,
        ksizes=(1,) + self.kernel_size + (1,),
        strides=(1,) + self.strides + (1,),
        rates=(1,) + self.dilation_rate + (1,),
        padding=self.padding.upper(),
    )
    inputs = K.reshape(inputs, [-1, inputs.get_shape().as_list()[1], inputs.get_shape().as_list()[2],
                                self.kernel_size[0]*self.kernel_size[1], self.output_dim])
    weights = tf.reduce_mean(inputs, 0)
    self.kernel = self.add_weight(name='kernel',
                                  shape=(weights.get_shape().as_list()[0],
                                         weights.get_shape().as_list()[1],
                                         weights.get_shape().as_list()[2],
                                         weights.get_shape().as_list()[3]),
                                  initializer='zeros',
                                  trainable=True)
    tf.compat.v1.assign(self.kernel, weights)
    outputs = (tf.einsum('NHWKC,HWKC->NHWC', inputs, self.kernel) + self.c)**self.p
    if self.data_format == 'channels_first':
        outputs = K.permute_dimensions(outputs, (0, 3, 1, 2))
    return outputs
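As an aside, the zero-init-then-assign pattern above can be shown in a minimal, self-contained TF 2 sketch outside any layer (the shapes and names here are made up for illustration):
import tensorflow as tf

# Hypothetical batch of flattened inputs, only to demonstrate the pattern.
batch = tf.random.uniform((8, 16))

# Create a zero-initialized variable shaped like one example...
kernel = tf.Variable(tf.zeros(batch.shape[1:]), trainable=True)

# ...then overwrite its value with the batch mean, as the modified layer does.
kernel.assign(tf.reduce_mean(batch, axis=0))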

Tensorflow object detection api training error "TypeError: Input 'y' of 'Mul' Op has type float32

EDIT2
OK, so far I have tried Python 3.5 + TF 1.10 and Python 2.7 + TF 1.10.
I am still getting this error:
Traceback (most recent call last):
File "object_detection/model_main.py", line 101, in <module>
tf.app.run()
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/platform/app.py", line 125, in run
_sys.exit(main(argv))
File "object_detection/model_main.py", line 97, in main
tf.estimator.train_and_evaluate(estimator, train_spec, eval_specs[0])
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/estimator/training.py", line 455, in train_and_evaluate
return executor.run()
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/estimator/training.py", line 594, in run
return self.run_local()
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/estimator/training.py", line 695, in run_local
saving_listeners=saving_listeners)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/estimator/estimator.py", line 354, in train
loss = self._train_model(input_fn, hooks, saving_listeners)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/estimator/estimator.py", line 1179, in _train_model
return self._train_model_default(input_fn, hooks, saving_listeners)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/estimator/estimator.py", line 1209, in _train_model_default
features, labels, model_fn_lib.ModeKeys.TRAIN, self.config)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/estimator/estimator.py", line 1167, in _call_model_fn
model_fn_results = self._model_fn(features=features, **kwargs)
File "/home/nvidia/tensorflow/models/research/object_detection/model_lib.py", line 287, in model_fn
prediction_dict, features[fields.InputDataFields.true_image_shape])
File "/home/nvidia/tensorflow/models/research/object_detection/meta_architectures/ssd_meta_arch.py", line 686, in loss
keypoints, weights)
File "/home/nvidia/tensorflow/models/research/object_detection/meta_architectures/ssd_meta_arch.py", line 859, in _assign_targets
groundtruth_weights_list)
File "/home/nvidia/tensorflow/models/research/object_detection/core/target_assigner.py", line 481, in batch_assign_targets
anchors, gt_boxes, gt_class_targets, unmatched_class_label, gt_weights)
File "/home/nvidia/tensorflow/models/research/object_detection/core/target_assigner.py", line 180, in assign
match = self._matcher.match(match_quality_matrix, **params)
File "/home/nvidia/tensorflow/models/research/object_detection/core/matcher.py", line 239, in match
return Match(self._match(similarity_matrix, **params),
File "/home/nvidia/tensorflow/models/research/object_detection/matchers/argmax_matcher.py", line 190, in _match
_match_when_rows_are_non_empty, _match_when_rows_are_empty)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/util/deprecation.py", line 488, in new_func
return func(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/control_flow_ops.py", line 2074, in cond
orig_res_t, res_t = context_t.BuildCondBranch(true_fn)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/control_flow_ops.py", line 1920, in BuildCondBranch
original_result = fn()
File "/home/nvidia/tensorflow/models/research/object_detection/matchers/argmax_matcher.py", line 153, in _match_when_rows_are_non_empty
-1)
File "/home/nvidia/tensorflow/models/research/object_detection/matchers/argmax_matcher.py", line 203, in _set_values_using_indicator
indicator = tf.cast(1-indicator, x.dtype)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/math_ops.py", line 878, in r_binary_op_wrapper
x = ops.convert_to_tensor(x, dtype=y.dtype.base_dtype, name="x")
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1028, in convert_to_tensor
as_ref=False)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1124, in internal_convert_to_tensor
ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/constant_op.py", line 228, in _constant_tensor_conversion_function
return constant(v, dtype=dtype, name=name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/constant_op.py", line 207, in constant
value, dtype=dtype, shape=shape, verify_shape=verify_shape))
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/tensor_util.py", line 442, in make_tensor_proto
_AssertCompatible(values, dtype)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/tensor_util.py", line 353, in _AssertCompatible
(dtype.name, repr(mismatch), type(mismatch).__name__))
TypeError: Expected bool, got 1 of type 'int' instead.
Has anybody tried to train on the TX2, or is this specific to my setup and I did something wrong?
ORIGINAL
Trying to train MobileNet SSD on a Jetson TX2 (I know it is not meant for training, but I have no better option).
I followed these guides:
https://towardsdatascience.com/how-to-train-your-own-object-detector-with-tensorflows-object-detector-api-bec72ecfe1d9
https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/running_locally.md
Training runs fine on my laptop (CPU), but I get the following error on my TX2:
Traceback (most recent call last):
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/op_def_library.py", line 510, in _apply_op_helper
preferred_dtype=default_dtype)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 1040, in internal_convert_to_tensor
ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 883, in _TensorTensorConversionFunction
(dtype.name, t.dtype.name, str(t)))
ValueError: Tensor conversion requested dtype int32 for Tensor with dtype float32: 'Tensor("Loss/Loss/huber_loss/Sub_1:0", shape=(24, 1917, 4), dtype=float32)'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "object_detection/model_main.py", line 101, in <module>
tf.app.run()
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/platform/app.py", line 126, in run
_sys.exit(main(argv))
File "object_detection/model_main.py", line 97, in main
tf.estimator.train_and_evaluate(estimator, train_spec, eval_specs[0])
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/estimator/training.py", line 425, in train_and_evaluate
executor.run()
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/estimator/training.py", line 504, in run
self.run_local()
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/estimator/training.py", line 636, in run_local
hooks=train_hooks)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/estimator/estimator.py", line 355, in train
loss = self._train_model(input_fn, hooks, saving_listeners)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/estimator/estimator.py", line 824, in _train_model
features, labels, model_fn_lib.ModeKeys.TRAIN, self.config)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/estimator/estimator.py", line 805, in _call_model_fn
model_fn_results = self._model_fn(features=features, **kwargs)
File "/home/nvidia/tensorflow/models/research/object_detection/model_lib.py", line 287, in model_fn
prediction_dict, features[fields.InputDataFields.true_image_shape])
File "/home/nvidia/tensorflow/models/research/object_detection/meta_architectures/ssd_meta_arch.py", line 708, in loss
weights=batch_reg_weights)
File "/home/nvidia/tensorflow/models/research/object_detection/core/losses.py", line 74, in __call__
return self._compute_loss(prediction_tensor, target_tensor, **params)
File "/home/nvidia/tensorflow/models/research/object_detection/core/losses.py", line 157, in _compute_loss
reduction=tf.losses.Reduction.NONE
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/losses/losses_impl.py", line 444, in huber_loss
math_ops.multiply(delta, linear))
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/math_ops.py", line 326, in multiply
return gen_math_ops.mul(x, y, name)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/gen_math_ops.py", line 4689, in mul
"Mul", x=x, y=y, name=name)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/op_def_library.py", line 546, in _apply_op_helper
inferred_from[input_arg.type_attr]))
TypeError: Input 'y' of 'Mul' Op has type float32 that does not match type int32 of argument 'x'.
NOTE:
I used precompiled wheels to install TensorFlow.
There was an error with the protobuf compiler that was solved by removing this line
reserved 6; (line number 104)
from ssd.proto in the object_detection/protos folder.
I found this solution somewhere but could not find the link again.
Here is the script to start training
PIPELINE_CONFIG_PATH=/home/nvidia/testtraining/models/model/ssd_mobilenet_v1_pets.config
MODEL_DIR=/home/nvidia/testtraining/models/model/
NUM_TRAIN_STEPS=50000
NUM_EVAL_STEPS=2000
python3 object_detection/model_main.py \
--pipeline_config_path=${PIPELINE_CONFIG_PATH} \
--model_dir=${MODEL_DIR} \
--num_train_steps=${NUM_TRAIN_STEPS} \
--num_eval_steps=${NUM_EVAL_STEPS} \
--alsologtostderr
Laptop TF version 1.10.0
Jetson TX2 tf version 1.6.0-rc1
I'm new to Ubuntu and TensorFlow, so go easy on me :)
Thanks
EDIT:
It seems like line 546, in _apply_op_helper, is some sort of error-handling line.
I tried to fix this error with the following edit: in /usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/math_ops.py I added these lines at line 236, just after the def statement:
import tensorflow as tf
y = tf.cast(y, x.dtype)
This created another error message, which I solved by editing lines 203-204 of /home/nvidia/tensorflow/models/research/object_detection/matchers/argmax_matcher.py to these:
indicator = tf.cast(1-indicator, x.dtype)
return tf.add(tf.multiply(x, indicator), val * indicator)
But I am still getting this error:
Traceback (most recent call last):
File "object_detection/model_main.py", line 101, in <module>
tf.app.run()
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/platform/app.py", line 126, in run
_sys.exit(main(argv))
File "object_detection/model_main.py", line 97, in main
tf.estimator.train_and_evaluate(estimator, train_spec, eval_specs[0])
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/estimator/training.py", line 425, in train_and_evaluate
executor.run()
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/estimator/training.py", line 504, in run
self.run_local()
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/estimator/training.py", line 636, in run_local
hooks=train_hooks)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/estimator/estimator.py", line 355, in train
loss = self._train_model(input_fn, hooks, saving_listeners)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/estimator/estimator.py", line 824, in _train_model
features, labels, model_fn_lib.ModeKeys.TRAIN, self.config)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/estimator/estimator.py", line 805, in _call_model_fn
model_fn_results = self._model_fn(features=features, **kwargs)
File "/home/nvidia/tensorflow/models/research/object_detection/model_lib.py", line 287, in model_fn
prediction_dict, features[fields.InputDataFields.true_image_shape])
File "/home/nvidia/tensorflow/models/research/object_detection/meta_architectures/ssd_meta_arch.py", line 686, in loss
keypoints, weights)
File "/home/nvidia/tensorflow/models/research/object_detection/meta_architectures/ssd_meta_arch.py", line 859, in _assign_targets
groundtruth_weights_list)
File "/home/nvidia/tensorflow/models/research/object_detection/core/target_assigner.py", line 481, in batch_assign_targets
anchors, gt_boxes, gt_class_targets, unmatched_class_label, gt_weights)
File "/home/nvidia/tensorflow/models/research/object_detection/core/target_assigner.py", line 180, in assign
match = self._matcher.match(match_quality_matrix, **params)
File "/home/nvidia/tensorflow/models/research/object_detection/core/matcher.py", line 239, in match
return Match(self._match(similarity_matrix, **params),
File "/home/nvidia/tensorflow/models/research/object_detection/matchers/argmax_matcher.py", line 190, in _match
_match_when_rows_are_non_empty, _match_when_rows_are_empty)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/util/deprecation.py", line 432, in new_func
return func(*args, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/control_flow_ops.py", line 2047, in cond
orig_res_t, res_t = context_t.BuildCondBranch(true_fn)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/control_flow_ops.py", line 1897, in BuildCondBranch
original_result = fn()
File "/home/nvidia/tensorflow/models/research/object_detection/matchers/argmax_matcher.py", line 153, in _match_when_rows_are_non_empty
-1)
File "/home/nvidia/tensorflow/models/research/object_detection/matchers/argmax_matcher.py", line 203, in _set_values_using_indicator
indicator = tf.cast(1-indicator, x.dtype)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/math_ops.py", line 983, in r_binary_op_wrapper
x = ops.convert_to_tensor(x, dtype=y.dtype.base_dtype, name="x")
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 950, in convert_to_tensor
as_ref=False)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 1040, in internal_convert_to_tensor
ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/constant_op.py", line 235, in _constant_tensor_conversion_function
return constant(v, dtype=dtype, name=name)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/constant_op.py", line 214, in constant
value, dtype=dtype, shape=shape, verify_shape=verify_shape))
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/tensor_util.py", line 433, in make_tensor_proto
_AssertCompatible(values, dtype)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/tensor_util.py", line 344, in _AssertCompatible
(dtype.name, repr(mismatch), type(mismatch).__name__))
TypeError: Expected bool, got 1 of type 'int' instead.
And this one is out of my league.
I think there are huge compatibility issues and I will just install TF 1.1 instead.
I am open to new ideas, though.
The key to your problem is here (bottom of the traceback):
TypeError: Input 'y' of 'Mul' Op has type float32 that does not match type int32 of argument 'x'.
Your y has type float32, but the argument x (the place you pass this y to) needs int32.
Try using tf.cast(y, tf.int32) or something like that.
Sometimes there are changes in TF, or you use an older model version, so this can happen from time to time.
So just open
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/op_def_library.py"
find line 546, and do that cast (using sudo vim, I guess).
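To see the dtype mismatch in isolation, here is a minimal sketch (not a patch to the library file; the example tensors are made up):
import tensorflow as tf

x = tf.constant([1, 2, 3], dtype=tf.int32)
y = tf.constant([0.5, 0.5, 0.5], dtype=tf.float32)

# tf.multiply(x, y) fails with a dtype mismatch; in graph-mode TF 1.x it is
# exactly the TypeError quoted above. Casting one operand to the other's
# dtype resolves it:
z1 = tf.multiply(x, tf.cast(y, x.dtype))   # cast y down to int32
z2 = tf.multiply(tf.cast(x, y.dtype), y)   # or cast x up to float32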

TypeError when calling dynamic_rnn on LSTMCell

I'm kind of new to working with TensorFlow and my problem may be easy to solve, at least I hope so.
I'm trying to work with LSTMCell to predict the next label in a sequence.
Here is the code I'm using:
import tensorflow as tf
max_sequence_length = 1000
vector_length = 1
number_of_classes = 1000
batch_size = 50
num_hidden = 24
# Define graph
data = tf.placeholder(tf.int64, [None, max_sequence_length, vector_length])
# 0 must be a free class so that the mask can work
target = tf.placeholder(tf.int64, [None, max_sequence_length, number_of_classes + 1])
labels = tf.argmax(target, 2)
cell = tf.nn.rnn_cell.LSTMCell(num_hidden, state_is_tuple=True)
Then I try to get the real length of each sequence in the batch
no_of_batches = tf.shape(data)[0]
sequence_lengths = tf.zeros([batch_size])
for i in xrange(max_sequence_length):
    data_at_t = tf.squeeze(tf.slice(data, [0,i,0], [-1,1,-1]))
    t = tf.scalar_mul(i, tf.ones([batch_size]))
    boolean = tf.not_equal(data_at_t, tf.zeros([no_of_batches, batch_size], dtype=tf.int64))
    sequence_lengths = tf.select(boolean, t, sequence_lengths)
And finally I try to call tf.nn.dynamic_rnn:
outputs, state = tf.nn.dynamic_rnn(
    cell = cell,
    inputs = data,
    sequence_length = max_sequence_length,
    dtype = tf.float64
)
Then, I get a TypeError:
Traceback (most recent call last):
File "<stdin>", line 5, in <module>
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/rnn.py", line 830, in dynamic_rnn
dtype=dtype)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/rnn.py", line 997, in _dynamic_rnn_loop
swap_memory=swap_memory)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/control_flow_ops.py", line 1973, in while_loop
result = context.BuildLoop(cond, body, loop_vars)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/control_flow_ops.py", line 1860, in BuildLoop
pred, body, original_loop_vars, loop_vars)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/control_flow_ops.py", line 1810, in _BuildLoop
body_result = body(*packed_vars_for_body)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/rnn.py", line 980, in _time_step
skip_conditionals=True)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/rnn.py", line 394, in _rnn_step
new_output, new_state = call_cell()
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/rnn.py", line 968, in <lambda>
call_cell = lambda: cell(input_t, state)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/rnn_cell.py", line 489, in __call__
dtype, self._num_unit_shards)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/rnn_cell.py", line 323, in _get_concat_variable
sharded_variable = _get_sharded_variable(name, shape, dtype, num_shards)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/rnn_cell.py", line 353, in _get_sharded_variable
dtype=dtype))
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/variable_scope.py", line 830, in get_variable
custom_getter=custom_getter)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/variable_scope.py", line 673, in get_variable
custom_getter=custom_getter)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/variable_scope.py", line 217, in get_variable
validate_shape=validate_shape)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/variable_scope.py", line 202, in _true_getter
caching_device=caching_device, validate_shape=validate_shape)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/variable_scope.py", line 536, in _get_single_variable
validate_shape=validate_shape)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/variables.py", line 211, in __init__
dtype=dtype)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/variables.py", line 281, in _init_from_args
self._initial_value = ops.convert_to_tensor(initial_value(),
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/variable_scope.py", line 526, in <lambda>
init_val = lambda: initializer(shape.as_list(), dtype=dtype)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/init_ops.py", line 210, in _initializer
dtype, seed=seed)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/random_ops.py", line 235, in random_uniform
minval = ops.convert_to_tensor(minval, dtype=dtype, name="min")
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 621, in convert_to_tensor
ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/constant_op.py", line 180, in _constant_tensor_conversion_function
return constant(v, dtype=dtype, name=name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/constant_op.py", line 163, in constant
tensor_util.make_tensor_proto(value, dtype=dtype, shape=shape))
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/tensor_util.py", line 353, in make_tensor_proto
_AssertCompatible(values, dtype)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/tensor_util.py", line 290, in _AssertCompatible
(dtype.name, repr(mismatch), type(mismatch).__name__))
TypeError: Expected int64, got -0.34641016151377546 of type 'float' instead.
I don't understand where this float comes from, as all other values in the script are integers. How can I solve this problem?
1) The RNN cell state is tf.float64. You set this explicitly within the tf.nn.dynamic_rnn call (dtype). The TensorFlow engine then initializes the states with the RNN's default random_uniform initializer. This is why you have the -0.34 float value there.
I'm not sure what you wanted to achieve. Please refer to https://www.tensorflow.org/versions/r0.11/api_docs/python/nn.html#dynamic_rnn
2) The sequence_length = max_sequence_length must be an int32/int64 vector sized [batch_size] instead of the scalar 1000 (see the sketch after this list).
3) You may want to initialize the LSTMCell state too:
cell = tf.nn.rnn_cell.LSTMCell(num_hidden, state_is_tuple=True,
                               initializer=tf.constant_initializer(value=0, dtype=tf.int32))
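A minimal sketch of point 2, assuming the TF 1.x API used in the question (the example lengths [3, 5] are made up):
import numpy as np
import tensorflow as tf

batch_size, max_len, vec_len, num_hidden = 2, 5, 3, 4
data = tf.placeholder(tf.float32, [None, max_len, vec_len])
# One length per example: an int32 vector of shape [batch_size], not a scalar.
seq_len = tf.placeholder(tf.int32, [None])

cell = tf.nn.rnn_cell.LSTMCell(num_hidden, state_is_tuple=True)
outputs, state = tf.nn.dynamic_rnn(cell, data,
                                   sequence_length=seq_len,
                                   dtype=tf.float32)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    out = sess.run(outputs, {data: np.zeros((batch_size, max_len, vec_len)),
                             seq_len: [3, 5]})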
I had a similar problem with my code; I fixed it by changing the int32 at the placeholder to float. Check if that works.
