AttributeError: 'str' object has no attribute 'keys'

I'm trying to solve classification problem. I don't know why I'm getting this error:
AttributeError: 'str' object has no attribute 'keys'
This is the main code:
def generate_arrays_for_training(indexPat, paths, start=0, end=100):
    while True:
        from_ = int(len(paths) / 100 * start)
        to_ = int(len(paths) / 100 * end)
        for i in range(from_, to_):
            f = paths[i]
            x = np.load(PathSpectogramFolder + f)
            if 'P' in f:
                y = np.repeat([[0, 1]], x.shape[0], axis=0)
            else:
                y = np.repeat([[1, 0]], x.shape[0], axis=0)
            yield (x, y)
history = model.fit_generator(
    generate_arrays_for_training(indexPat, filesPath, end=75),  ## problem here
    steps_per_epoch=int(len(filesPath) - int(len(filesPath) / 100 * 25)),
    validation_steps=int(len(filesPath) - int(len(filesPath) / 100 * 75)),
    verbose=2, class_weight="balanced",
    epochs=15, max_queue_size=2, shuffle=True, callbacks=[callback])
where the generate_arrays_for_training function returns x and y. x is a 2D array of floats and y is [0,1].
Error:
Traceback (most recent call last):
File "/home/user1/thesis2/CNN_dwt2.py", line 437, in <module>
main()
File "/home/user1/thesis2/CNN_dwt2.py", line 316, in main
history=model.fit_generator(generate_arrays_for_training(indexPat, filesPath, end=75),
File "/home/user1/.local/lib/python3.8/site-packages/tensorflow/python/util/deprecation.py", line 324, in new_func
return func(*args, **kwargs)
File "/home/user1/.local/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py", line 1815, in fit_generator
return self.fit(
File "/home/user1/.local/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py", line 108, in _method_wrapper
return method(self, *args, **kwargs)
File "/home/user1/.local/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py", line 1049, in fit
data_handler = data_adapter.DataHandler(
File "/home/user1/.local/lib/python3.8/site-packages/tensorflow/python/keras/engine/data_adapter.py", line 1122, in __init__
dataset = dataset.map(_make_class_weight_map_fn(class_weight))
File "/home/user1/.local/lib/python3.8/site-packages/tensorflow/python/keras/engine/data_adapter.py", line 1295, in _make_class_weight_map_fn
class_ids = list(sorted(class_weight.keys()))
AttributeError: 'str' object has no attribute 'keys'

Your issue is caused by the class_weight="balanced" parameter that you pass to model.fit().
According to the model.fit() reference, this parameter should be a dict:
Optional dictionary mapping class indices (integers) to a weight (float) value, used for weighting the loss function (during training only). This can be useful to tell the model to "pay more attention" to samples from an under-represented class.
Try class_weight=None for testing; it should get rid of the original error. Later, provide a proper dict as class_weight to address the imbalanced-dataset issue, as sketched below.
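A minimal sketch of building such a dict (using sklearn here is an assumption; the label rule mirrors the generator above, where 'P' files map to class 1):
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

# one integer label per file, mirroring the generator's 'P' rule
labels = np.array([1 if 'P' in f else 0 for f in filesPath])
weights = compute_class_weight('balanced', classes=np.unique(labels), y=labels)
class_weight = dict(enumerate(weights))  # e.g. {0: 0.8, 1: 1.5}
# then pass class_weight=class_weight (instead of "balanced") to model.fit()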

Related

AttributeError: 'NoneType' object has no attribute 'dtype'

I am building a simple recurrent neural network, with an LSTM layer followed by a fully connected layer, to classify each row of data. My data 'x_train' is an ndarray with shape (210, 240, 1) and 'y_train' is an ndarray with shape (210,). The model output is normal. However, when I run model.fit(), there is always an error: AttributeError: 'NoneType' object has no attribute 'dtype'.
I don't know what's wrong with the following code.
#%%
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
import tensorflow as tf
import pandas as pd
import numpy as np
#%%
model = tf.keras.models.Sequential([
    tf.keras.layers.LSTM(8),
    tf.keras.layers.Dense(units=2)
])
#%%
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
model.compile(optimizer=tf.keras.optimizers.Adam(0.001), loss=loss_fn)
#%%
x_train = np.random.randn(210,240,1)
y_train = np.random.binomial(1, 0.5,(210,))
#%%
model.fit(x_train, y_train, epochs=20)
This is the full error output:
Traceback (most recent call last):
File "D:\运筹优化\机器学习课程项目\时序数据预测\rnn_exploration.py", line 68, in <module>
model.fit(x_train, y_train, epochs=20)
File "D:\python\anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow_core\python\keras\engine\training.py", line 819, in fit
use_multiprocessing=use_multiprocessing)
File "D:\python\anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow_core\python\keras\engine\training_v2.py", line 235, in fit
use_multiprocessing=use_multiprocessing)
File "D:\python\anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow_core\python\keras\engine\training_v2.py", line 593, in _process_training_inputs
use_multiprocessing=use_multiprocessing)
File "D:\python\anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow_core\python\keras\engine\training_v2.py", line 646, in _process_inputs
x, y, sample_weight=sample_weights)
File "D:\python\anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow_core\python\keras\engine\training.py", line 2360, in _standardize_user_data
self._compile_from_inputs(all_inputs, y_input, x, y)
File "D:\python\anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow_core\python\keras\engine\training.py", line 2618, in _compile_from_inputs
experimental_run_tf_function=self._experimental_run_tf_function)
File "D:\python\anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow_core\python\training\tracking\base.py", line 457, in _method_wrapper
result = method(self, *args, **kwargs)
File "D:\python\anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow_core\python\keras\engine\training.py", line 416, in compile
endpoint.create_training_target(t, run_eagerly=self.run_eagerly)
File "D:\python\anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow_core\python\keras\engine\training.py", line 3023, in create_training_target
self.loss_fn, K.dtype(self.output))
File "D:\python\anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow_core\python\keras\backend.py", line 1237, in dtype
return x.dtype.base_dtype.name
AttributeError: 'NoneType' object has no attribute 'dtype'
Any help would be appreciated!
The problem still exists with numpy 1.21.3; downgrading to 1.19.5 works.
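A minimal way to apply that fix, assuming pip manages the environment: run pip install numpy==1.19.5, then restart the Python process so TensorFlow picks up the downgraded version.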

How to solve AttributeError: 'str' object has no attribute 'ndim'?

I want to test my Keras model, but I've run into this problem. I have an image for checking at the "path" below.
path = 'C:\\Users\\Администратор\\AppData\\Local\\Programs\\Python\\Python36-32\\577793008_ef4345205b.jpg'
model = keras.models.load_model('C:\\Users\\Администратор\\AppData\\Local\\Programs\\Python\\Python36-32\\model1.h5')
predictions = model.predict(path)
print (predictions[0])
Error.
Traceback (most recent call last):
File "C:\Users\Администратор\AppData\Local\Programs\Python\Python36-32\load1.p
y", line 11, in <module>
predictions = model.predict(path)
File "C:\Users\Администратор\AppData\Local\Programs\Python\Python36-32\lib\sit
e-packages\keras\engine\training.py", line 1441, in predict
x, _, _ = self._standardize_user_data(x)
File "C:\Users\Администратор\AppData\Local\Programs\Python\Python36-32\lib\sit
e-packages\keras\engine\training.py", line 579, in _standardize_user_data
exception_prefix='input')
File "C:\Users\Администратор\AppData\Local\Programs\Python\Python36-32\lib\sit
e-packages\keras\engine\training_utils.py", line 99, in standardize_input_data
data = [standardize_single_array(x) for x in data]
File "C:\Users\Администратор\AppData\Local\Programs\Python\Python36-32\lib\sit
e-packages\keras\engine\training_utils.py", line 99, in <listcomp>
data = [standardize_single_array(x) for x in data]
File "C:\Users\Администратор\AppData\Local\Programs\Python\Python36-32\lib\sit
e-packages\keras\engine\training_utils.py", line 34, in standardize_single_array
elif x.ndim == 1:
AttributeError: 'str' object has no attribute 'ndim'
The predict method can take several types of input, but not a string: it cannot read a file from a path directly.
You need to transform the file into something the Model class can consume; read the image and turn its contents into an array, for instance, as in the sketch below.
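A minimal sketch of that transformation (the target_size and the 1/255 scaling are assumptions; they must match whatever preprocessing the model was trained with):
import numpy as np
from keras.preprocessing import image

img = image.load_img(path, target_size=(224, 224))  # assumed input size
x = image.img_to_array(img)    # float array of shape (224, 224, 3)
x = np.expand_dims(x, axis=0)  # add the batch dimension
x = x / 255.0                  # assumed normalization
predictions = model.predict(x)
print(predictions[0])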

Problem with DataLoader object not subscriptable

I am running a Python program using PyTorch. I use my own dataset, not torch.utils.data.Dataset. I load the data from a pickle file produced by feature extraction, but the following error appears:
Traceback (most recent call last):
File "C:\Users\hp\Downloads\efficient_densenet_pytorch-master\demo-emotion.py", line 326, in <module>
fire.Fire(demo)
File "C:\Users\hp\Anaconda3\envs\tf-gpu\lib\site-packages\fire\core.py", line 138, in Fire
component_trace = _Fire(component, args, parsed_flag_args, context, name)
File "C:\Users\hp\Anaconda3\envs\tf-gpu\lib\site-packages\fire\core.py", line 468, in _Fire
target=component.__name__)
File "C:\Users\hp\Anaconda3\envs\tf-gpu\lib\site-packages\fire\core.py", line 672, in _CallAndUpdateTrace
component = fn(*varargs, **kwargs)
File "C:\Users\hp\Downloads\efficient_densenet_pytorch-master\demo-emotion.py", line 304, in demo
train(model,train_set1, valid_set=valid_set, test_set=test1, save=save, n_epochs=n_epochs,batch_size=batch_size,seed=seed)
File "C:\Users\hp\Downloads\efficient_densenet_pytorch-master\demo-emotion.py", line 172, in train
n_epochs=n_epochs,
File "C:\Users\hp\Downloads\efficient_densenet_pytorch-master\demo-emotion.py", line 37, in train_epoch
loader=np.asarray(list(loader))
File "C:\Users\hp\Anaconda3\envs\tf-gpu\lib\site-packages\torch\utils\data\dataloader.py", line 345, in __next__
data = self._next_data()
File "C:\Users\hp\Anaconda3\envs\tf-gpu\lib\site-packages\torch\utils\data\dataloader.py", line 385, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
File "C:\Users\hp\Anaconda3\envs\tf-gpu\lib\site-packages\torch\utils\data\_utils\fetch.py", line 44, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "C:\Users\hp\Anaconda3\envs\tf-gpu\lib\site-packages\torch\utils\data\_utils\fetch.py", line 44, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
File "C:\Users\hp\Anaconda3\envs\tf-gpu\lib\site-packages\torch\utils\data\dataset.py", line 257, in __getitem__
return self.dataset[self.indices[idx]]
TypeError: 'DataLoader' object is not subscriptable
The code is:
train_set1 = Owndata()
train1, test1 = train_set1.get_splits()
# prepare data loaders
train_dl = torch.utils.data.DataLoader(train1, batch_size=32, shuffle=True)
test_dl = torch.utils.data.DataLoader(test1, batch_size=1024, shuffle=False)
test_set1 = Owndata()
'''print('test_set# ',test_set)'''
if valid_size:
    valid_set = Owndata()
    indices = torch.randperm(len(train_set1))
    train_indices = indices[:len(indices) - valid_size]
    valid_indices = indices[len(indices) - valid_size:]
    train_set1 = torch.utils.data.Subset(train_dl, train_indices)
    valid_set = torch.utils.data.Subset(valid_set, valid_indices)
else:
    valid_set = None
model = DenseNet(
    growth_rate=growth_rate,
    block_config=block_config,
    num_classes=10,
    small_inputs=True,
    efficient=efficient,
)
train(model, train_set1, valid_set=valid_set, test_set=test1, save=save, n_epochs=n_epochs, batch_size=batch_size, seed=seed)
Any help is appreciated! Thanks a lot in advance!!
The line you point to is not what actually raises the error; it comes from the very last train function, which you are not showing.
You are confusing two things:
A torch.utils.data.Dataset object is indexable (dataset[5] works fine, for example). It is a simple object that defines how to get a single sample of data.
A torch.utils.data.DataLoader is non-indexable and only iterable; it usually returns batches of data from the above Dataset and can work in parallel using num_workers. It is what you are trying to index, while you should use the Dataset for that, as in the sketch below.
Please see the PyTorch documentation about data to get a better grasp of how these work.
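A minimal sketch of the corrected split (Owndata, valid_size and the batch size come from the question; the rest is an assumption about the intent):
import torch

train_set1 = Owndata()  # a Dataset: indexable
indices = torch.randperm(len(train_set1))
train_indices = indices[:len(indices) - valid_size]
valid_indices = indices[len(indices) - valid_size:]

# Subset must wrap the Dataset, not a DataLoader
train_subset = torch.utils.data.Subset(train_set1, train_indices)
valid_subset = torch.utils.data.Subset(train_set1, valid_indices)

# DataLoaders wrap the Subsets; iterate over them, never index them
train_dl = torch.utils.data.DataLoader(train_subset, batch_size=32, shuffle=True)
valid_dl = torch.utils.data.DataLoader(valid_subset, batch_size=32, shuffle=False)

for x_batch, y_batch in train_dl:
    pass  # training step goes here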

keras.layers.Embedding TypeError: __call__() missing 1 required positional argument: 'shape'

I am trying to build a Keras model using a keras.layers.Embedding layer with embeddings_initializer=keras.initializers.Identity.
This should let me implement one-hot encodings directly in the Keras model, while feeding the model with index representations of word lists (i.e. from word keys to sparse binary vectors).
Input example for each sample:
[ [5, 22, 3],    # Statement encoding
  [6, 9, 1, 76], # Context encoding
]
I am trying to implement what is described in Keras One Hot Encoding Memory Management - best Possible way out (see Daniel's answer there).
Expected output from the Embedding layer:
[ [ [B11], [B12], [B13] ],        # Statement encoding for lstm_emb_phrase
  [ [B21], [B22], [B23], [B24] ], # Context encoding for lstm_emb_cont
]
where Bij should be a binary array for word j of phrase i.
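For orientation, here is a minimal standalone sketch of this one-hot-through-Embedding idea (vocab_size and the input values are made up; note that the initializer is passed as an instance, keras.initializers.Identity(), not as the class):
import numpy as np
import keras

vocab_size = 5  # made up for illustration
inp = keras.layers.Input(shape=(3,))
onehot = keras.layers.Embedding(vocab_size, vocab_size,
                                embeddings_initializer=keras.initializers.Identity(),
                                trainable=False)(inp)
model = keras.models.Model(inp, onehot)
print(model.predict(np.array([[0, 2, 4]])))  # rows of a 5x5 identity matrix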
This is the code I currently wrote:
DEFAULT_INNER_ACTIVATION = 'relu'
DEFAULT_OUTPUT_ACTIVATION = 'softplus'

def __init__(self, sentence_max_lenght, ctx_max_len, dense_features_dim, vocab_size):
    lstm_input_phrase = keras.layers.Input(shape=(sentence_max_lenght,), name='L0_STC_MyApp')
    lstm_input_cont = keras.layers.Input(shape=(ctx_max_len,), name='L0_CTX_MyApp')
    # The following line is #56
    lstm_emb_phrase = keras.layers.Embedding(vocab_size, vocab_size,
                                             embeddings_initializer=keras.initializers.Identity,
                                             input_length=sentence_max_lenght, name='L0E_STC_MyApp')(lstm_input_phrase)
    lstm_emb_phrase = keras.layers.LSTM(DEFAULT_MODEL_L1_STC_DIM, name='L1_STC_MyApp')(lstm_emb_phrase)
    lstm_emb_phrase = keras.layers.Dense(DEFAULT_MODEL_L2_STC_DIM, name='L2_STC_MyApp', activation=DEFAULT_INNER_ACTIVATION)(lstm_emb_phrase)
    lstm_emb_cont = keras.layers.Embedding(vocab_size, vocab_size,
                                           embeddings_initializer=keras.initializers.Identity,
                                           input_length=ctx_max_len, name='L0E_CTX_MyApp')(lstm_input_cont)
    lstm_emb_cont = keras.layers.LSTM(DEFAULT_MODEL_L1_CTX_DIM, name='L1_CTX_MyApp')(lstm_emb_cont)
    lstm_emb_cont = keras.layers.Dense(DEFAULT_MODEL_L2_CTX_DIM, name='L2_CTX_MyApp', activation=DEFAULT_INNER_ACTIVATION)(lstm_emb_cont)
    x = keras.layers.concatenate([lstm_emb_phrase, lstm_emb_cont])
    x = keras.layers.Dense(DEFAULT_MODEL_L3_DIM, activation=DEFAULT_INNER_ACTIVATION)(x)
    x = keras.layers.Dense(DEFAULT_MODEL_L4_DIM, activation=DEFAULT_INNER_ACTIVATION)(x)
    main_output = keras.layers.Dense(DEFAULT_OUTPUT_DIM, activation=DEFAULT_OUTPUT_ACTIVATION)(x)
    self.model = keras.models.Model(inputs=[lstm_input_phrase, lstm_input_cont],
                                    outputs=main_output)
    self.model.compile(loss='binary_crossentropy', metrics=['accuracy'])
This is the error I get when I try to run my script:
WARNING:tensorflow:From [PATH_TO_MyApp]\venv\lib\site-packages\tensorflow\python\framework\op_def_library.py:263: colocate_with (
from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
Traceback (most recent call last):
File "my_script.py", line 307, in <module>
main()
File "my_script.py", line 60, in main
flag = flag or train_r_skills(argsparser)
File "my_script.py", line 230, in train_r_skills
sk_extr = MyKerasModel(stm_max_len, ctx_max_len, segm_max_len, vocab_size)
File "[PATH_TO_MyApp]\MyApp\parser\MyKerasModel_002.py", line 56, in __init__
input_length=ctx_features_dim, name='L0E_CTX_MyApp')(lstm_input_cont)
File "[PATH_TO_MyApp]\venv\lib\site-packages\keras\engine\base_layer.py", line 431, in __call__
self.build(unpack_singleton(input_shapes))
File "[PATH_TO_MyApp]\venv\lib\site-packages\keras\layers\embeddings.py", line 109, in build
dtype=self.dtype)
File "[PATH_TO_MyApp]\venv\lib\site-packages\keras\legacy\interfaces.py", line 91, in wrapper
return func(*args, **kwargs)
File "[PATH_TO_MyApp]\venv\lib\site-packages\keras\engine\base_layer.py", line 252, in add_weight
constraint=constraint)
File "[PATH_TO_MyApp]\venv\lib\site-packages\keras\backend\tensorflow_backend.py", line 402, in variable
v = tf.Variable(value, dtype=tf.as_dtype(dtype), name=name)
File "[PATH_TO_MyApp]\venv\lib\site-packages\tensorflow\python\ops\variables.py", line 213, in __call__
return cls._variable_v1_call(*args, **kwargs)
File "[PATH_TO_MyApp]\venv\lib\site-packages\tensorflow\python\ops\variables.py", line 176, in _variable_v1_call
aggregation=aggregation)
File "[PATH_TO_MyApp]\venv\lib\site-packages\tensorflow\python\ops\variables.py", line 155, in <lambda>
previous_getter = lambda **kwargs: default_variable_creator(None, **kwargs)
File "[PATH_TO_MyApp]\venv\lib\site-packages\tensorflow\python\ops\variable_scope.py", line 2495, in default_variable_creator
expected_shape=expected_shape, import_scope=import_scope)
File "[PATH_TO_MyApp]\venv\lib\site-packages\tensorflow\python\ops\variables.py", line 217, in __call__
return super(VariableMetaclass, cls).__call__(*args, **kwargs)
File "[PATH_TO_MyApp]\venv\lib\site-packages\tensorflow\python\ops\variables.py", line 1395, in __init__
constraint=constraint)
File "[PATH_TO_MyApp]\venv\lib\site-packages\tensorflow\python\ops\variables.py", line 1503, in _init_from_args
initial_value(), name="initial_value", dtype=dtype)
TypeError: __call__() missing 1 required positional argument: 'shape'
Can you help me to understand what is going on?
EDIT:
I also checked Convert code from Keras 1 to Keras 2: TypeError: __call__() missing 1 required positional argument: 'shape', as C. Lightfoot suggested; however, to the best of my understanding it does not match my issue (I am not converting code from Keras 1 to Keras 2). Please give me more hints if there is anything I am missing about that post.
Cheers,
/H
