I want to test my Keras model, but I've run into a problem. I have the image to check at the path below.
path = 'C:\\Users\\Администратор\\AppData\\Local\\Programs\\Python\\Python36-32\\577793008_ef4345205b.jpg'
model = keras.models.load_model('C:\\Users\\Администратор\\AppData\\Local\\Programs\\Python\\Python36-32\\model1.h5')
predictions = model.predict(path)
print(predictions[0])
Error.
Traceback (most recent call last):
  File "C:\Users\Администратор\AppData\Local\Programs\Python\Python36-32\load1.py", line 11, in <module>
    predictions = model.predict(path)
  File "C:\Users\Администратор\AppData\Local\Programs\Python\Python36-32\lib\site-packages\keras\engine\training.py", line 1441, in predict
    x, _, _ = self._standardize_user_data(x)
  File "C:\Users\Администратор\AppData\Local\Programs\Python\Python36-32\lib\site-packages\keras\engine\training.py", line 579, in _standardize_user_data
    exception_prefix='input')
  File "C:\Users\Администратор\AppData\Local\Programs\Python\Python36-32\lib\site-packages\keras\engine\training_utils.py", line 99, in standardize_input_data
    data = [standardize_single_array(x) for x in data]
  File "C:\Users\Администратор\AppData\Local\Programs\Python\Python36-32\lib\site-packages\keras\engine\training_utils.py", line 99, in <listcomp>
    data = [standardize_single_array(x) for x in data]
  File "C:\Users\Администратор\AppData\Local\Programs\Python\Python36-32\lib\site-packages\keras\engine\training_utils.py", line 34, in standardize_single_array
    elif x.ndim == 1:
AttributeError: 'str' object has no attribute 'ndim'
The predict method accepts several types of input, but not a string: it cannot read a file from a path directly.
You need to load the file yourself and turn its contents into something the Model class can consume, for instance a NumPy array.
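A minimal sketch of that conversion using keras.preprocessing.image; the 224x224 target_size and the division by 255 are assumptions here and must match whatever preprocessing the model was trained with:

import numpy as np
import keras
from keras.preprocessing import image

path = 'C:\\Users\\Администратор\\AppData\\Local\\Programs\\Python\\Python36-32\\577793008_ef4345205b.jpg'
model = keras.models.load_model('C:\\Users\\Администратор\\AppData\\Local\\Programs\\Python\\Python36-32\\model1.h5')

img = image.load_img(path, target_size=(224, 224))  # read the file and resize (assumed input size)
x = image.img_to_array(img)                         # PIL image -> float32 array of shape (224, 224, 3)
x = np.expand_dims(x, axis=0)                       # add a batch dimension -> (1, 224, 224, 3)
x = x / 255.0                                       # assumed [0, 1] scaling; match your training pipeline

predictions = model.predict(x)
print(predictions[0])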
Related
I'm trying to solve a classification problem, and I don't know why I'm getting this error:
AttributeError: 'str' object has no attribute 'keys'
This is the main code:
def generate_arrays_for_training(indexPat, paths, start=0, end=100):
    while True:
        from_ = int(len(paths)/100*start)
        to_ = int(len(paths)/100*end)
        for i in range(from_, int(to_)):
            f = paths[i]
            x = np.load(PathSpectogramFolder+f)
            if 'P' in f:
                y = np.repeat([[0, 1]], x.shape[0], axis=0)
            else:
                y = np.repeat([[1, 0]], x.shape[0], axis=0)
            yield (x, y)
history = model.fit_generator(
    generate_arrays_for_training(indexPat, filesPath, end=75),  ## problem here
    steps_per_epoch=int((len(filesPath)-int(len(filesPath)/100*25))),
    validation_steps=int((len(filesPath)-int(len(filesPath)/100*75))),
    verbose=2, class_weight="balanced",
    epochs=15, max_queue_size=2, shuffle=True, callbacks=[callback])
where the generate_arrays_for_training function returns x and y: x is a 2D array of floats and y is [0,1].
Error:
Traceback (most recent call last):
File "/home/user1/thesis2/CNN_dwt2.py", line 437, in <module>
main()
File "/home/user1/thesis2/CNN_dwt2.py", line 316, in main
history=model.fit_generator(generate_arrays_for_training(indexPat, filesPath, end=75),
File "/home/user1/.local/lib/python3.8/site-packages/tensorflow/python/util/deprecation.py", line 324, in new_func
return func(*args, **kwargs)
File "/home/user1/.local/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py", line 1815, in fit_generator
return self.fit(
File "/home/user1/.local/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py", line 108, in _method_wrapper
return method(self, *args, **kwargs)
File "/home/user1/.local/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py", line 1049, in fit
data_handler = data_adapter.DataHandler(
File "/home/user1/.local/lib/python3.8/site-packages/tensorflow/python/keras/engine/data_adapter.py", line 1122, in __init__
dataset = dataset.map(_make_class_weight_map_fn(class_weight))
File "/home/user1/.local/lib/python3.8/site-packages/tensorflow/python/keras/engine/data_adapter.py", line 1295, in _make_class_weight_map_fn
class_ids = list(sorted(class_weight.keys()))
AttributeError: 'str' object has no attribute 'keys'
Your issue is caused by the class_weight="balanced" parameter that you pass to model.fit().
According to the model.fit() reference, this parameter must be a dict:
Optional dictionary mapping class indices (integers) to a weight (float) value, used for weighting the loss function (during training only). This can be useful to tell the model to "pay more attention" to samples from an under-represented class.
Try class_weight=None first; that should get rid of the original error. Later, provide a proper dict as class_weight to address the imbalanced-dataset issue, as sketched below.
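A minimal sketch of building such a dict by hand, mimicking scikit-learn's "balanced" heuristic (n_samples / (n_classes * n_samples_per_class)); the class counts below are placeholders to replace with your real ones:

n_class0, n_class1 = 800, 200     # assumed counts; substitute your actual class sizes
total = n_class0 + n_class1
class_weight = {
    0: total / (2.0 * n_class0),  # down-weights the majority class
    1: total / (2.0 * n_class1),  # up-weights the minority class
}
print(class_weight)               # {0: 0.625, 1: 2.5}

Then pass class_weight=class_weight to model.fit_generator() in place of "balanced".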
I am running a Python program using PyTorch. I use my own dataset, not torch.utils.data.Dataset, loading the data from a pickle file produced during feature extraction. But the following error appears:
Traceback (most recent call last):
File "C:\Users\hp\Downloads\efficient_densenet_pytorch-master\demo-emotion.py", line 326, in <module>
fire.Fire(demo)
File "C:\Users\hp\Anaconda3\envs\tf-gpu\lib\site-packages\fire\core.py", line 138, in Fire
component_trace = _Fire(component, args, parsed_flag_args, context, name)
File "C:\Users\hp\Anaconda3\envs\tf-gpu\lib\site-packages\fire\core.py", line 468, in _Fire
target=component.__name__)
File "C:\Users\hp\Anaconda3\envs\tf-gpu\lib\site-packages\fire\core.py", line 672, in _CallAndUpdateTrace
component = fn(*varargs, **kwargs)
File "C:\Users\hp\Downloads\efficient_densenet_pytorch-master\demo-emotion.py", line 304, in demo
train(model,train_set1, valid_set=valid_set, test_set=test1, save=save, n_epochs=n_epochs,batch_size=batch_size,seed=seed)
File "C:\Users\hp\Downloads\efficient_densenet_pytorch-master\demo-emotion.py", line 172, in train
n_epochs=n_epochs,
File "C:\Users\hp\Downloads\efficient_densenet_pytorch-master\demo-emotion.py", line 37, in train_epoch
loader=np.asarray(list(loader))
File "C:\Users\hp\Anaconda3\envs\tf-gpu\lib\site-packages\torch\utils\data\dataloader.py", line 345, in __next__
data = self._next_data()
File "C:\Users\hp\Anaconda3\envs\tf-gpu\lib\site-packages\torch\utils\data\dataloader.py", line 385, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
File "C:\Users\hp\Anaconda3\envs\tf-gpu\lib\site-packages\torch\utils\data\_utils\fetch.py", line 44, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "C:\Users\hp\Anaconda3\envs\tf-gpu\lib\site-packages\torch\utils\data\_utils\fetch.py", line 44, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
File "C:\Users\hp\Anaconda3\envs\tf-gpu\lib\site-packages\torch\utils\data\dataset.py", line 257, in __getitem__
return self.dataset[self.indices[idx]]
TypeError: 'DataLoader' object is not subscriptable
The code is:
train_set1 = Owndata()
train1, test1 = train_set1.get_splits()
# prepare data loaders
train_dl = torch.utils.data.DataLoader(train1, batch_size=32, shuffle=True)
test_dl = torch.utils.data.DataLoader(test1, batch_size=1024, shuffle=False)
test_set1 = Owndata()
'''print('test_set# ',test_set)'''
if valid_size:
    valid_set = Owndata()
    indices = torch.randperm(len(train_set1))
    train_indices = indices[:len(indices) - valid_size]
    valid_indices = indices[len(indices) - valid_size:]
    train_set1 = torch.utils.data.Subset(train_dl, train_indices)
    valid_set = torch.utils.data.Subset(valid_set, valid_indices)
else:
    valid_set = None

model = DenseNet(
    growth_rate=growth_rate,
    block_config=block_config,
    num_classes=10,
    small_inputs=True,
    efficient=efficient,
)
train(model, train_set1, valid_set=valid_set, test_set=test1, save=save, n_epochs=n_epochs, batch_size=batch_size, seed=seed)
Any help is appreciated! Thanks a lot in advance!!
The line raising the error is not the one you might expect: per the traceback, it is loader=np.asarray(list(loader)) inside train_epoch, in the train function you are not showing.
You are confusing two things:
A torch.utils.data.Dataset object is indexable (dataset[5] works fine, for example). It is a simple object that defines how to get a single sample of data.
A torch.utils.data.DataLoader is not indexable, only iterable; it returns batches of data from the above Dataset and can load them in parallel using num_workers. It is what you are trying to index (by wrapping train_dl in a Subset), while you should use the dataset for that.
Please see the PyTorch documentation about data loading to get a better grasp of how these work.
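A minimal, self-contained sketch of the distinction (TensorDataset stands in for the question's Owndata, and the sizes are made up):

import torch
from torch.utils.data import TensorDataset, Subset, DataLoader

# a stand-in Dataset: 100 samples with 8 features and a binary label each
dataset = TensorDataset(torch.randn(100, 8), torch.randint(0, 2, (100,)))

indices = torch.randperm(len(dataset)).tolist()
train_subset = Subset(dataset, indices[:80])  # Subset of a Dataset: indexing works
valid_subset = Subset(dataset, indices[80:])

# wrap the *subsets* in DataLoaders; never pass a DataLoader to Subset
train_dl = DataLoader(train_subset, batch_size=32, shuffle=True)

print(dataset[5])         # fine: a Dataset is indexable
for xb, yb in train_dl:   # fine: a DataLoader is iterable
    print(xb.shape, yb.shape)
    break
# train_dl[5] would raise TypeError: 'DataLoader' object is not subscriptable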
I am using the TensorFlow Data API to try to do some rejection sampling for my unbalanced data set.
I have run the code on my personal computer and it seems to work as I expect. However, when I run it on my university's cluster I get a type error that I can't make sense of. I have tried recasting, and I get the same error.
I am still learning how to use this API and I'm not 100% clear on whether this is the best way to achieve what I want, so I also welcome advice on how I implemented the rejection sampling (that could very well be the reason for the error, since I don't fully understand it yet).
This is how I load the data into the dataset:
data = np.loadtxt("my_data.dat")
features = data[:, 1:10]
labels = data[:, 0]
labels[labels == -1] = 0
assert features.shape[0] == labels.shape[0]
dataset_size = len(features)
dataset = tf.data.Dataset.from_tensor_slices((features.astype('float32'),
                                              labels.astype('int32')))
dataset = dataset.shuffle(buffer_size=dataset_size)
The error occurs when I reach this point:
train_size = int((2/3.0)*dataset_size)
tr_dataset = dataset.take(train_size)
tr_dataset = (tr_dataset.apply(
    tf.contrib.data.rejection_resample(
        class_func=lambda _, c: c, target_dist=[0.5, 0.5],
        seed=42)).map(lambda a, b: b)).batch(100)
This is the error:
Traceback (most recent call last):
File "/home/user/.conda/envs/tensorflowcpu/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 510, in _apply_op_helper
preferred_dtype=default_dtype)
File "/home/user/.conda/envs/tensorflowcpu/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1094, in internal_convert_to_tensor
ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
File "/home/user/.conda/envs/tensorflowcpu/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 931, in _TensorTensorConversionFunction
(dtype.name, t.dtype.name, str(t)))
ValueError: Tensor conversion requested dtype int32 for Tensor with dtype int64: 'Tensor("Sum:0", shape=(2,), dtype=int64)'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "test.py", line 185, in <module>
seed=42))).batch(100)
File "/home/user/.conda/envs/tensorflowcpu/lib/python3.6/site-packages/tensorflow/python/data/ops/dataset_ops.py", line 1109, in apply
dataset = transformation_func(self)
File "/home/user/.conda/envs/tensorflowcpu/lib/python3.6/site-packages/tensorflow/contrib/data/python/ops/resampling.py", line 74, in _apply_fn
target_dist_t, class_values_ds)
File "/home/user/.conda/envs/tensorflowcpu/lib/python3.6/site-packages/tensorflow/contrib/data/python/ops/resampling.py", line 183, in _estimate_initial_dist_ds
update_estimate_and_tile))
File "/home/user/.conda/envs/tensorflowcpu/lib/python3.6/site-packages/tensorflow/python/data/ops/dataset_ops.py", line 1109, in apply
dataset = transformation_func(self)
File "/home/user/.conda/envs/tensorflowcpu/lib/python3.6/site-packages/tensorflow/contrib/data/python/ops/scan_ops.py", line 172, in _apply_fn
return _ScanDataset(dataset, initial_state, scan_func)
File "/home/user/.conda/envs/tensorflowcpu/lib/python3.6/site-packages/tensorflow/contrib/data/python/ops/scan_ops.py", line 74, in __init__
add_to_graph=False)
File "/home/user/.conda/envs/tensorflowcpu/lib/python3.6/site-packages/tensorflow/python/data/ops/dataset_ops.py", line 1459, in __init__
self._function._create_definition_if_needed() # pylint: disable=protected-access
File "/home/user/.conda/envs/tensorflowcpu/lib/python3.6/site-packages/tensorflow/python/framework/function.py", line 337, in _create_definition_if_needed
self._create_definition_if_needed_impl()
File "/home/user/.conda/envs/tensorflowcpu/lib/python3.6/site-packages/tensorflow/python/framework/function.py", line 346, in _create_definition_if_needed_impl
self._capture_by_value, self._caller_device)
File "/home/user/.conda/envs/tensorflowcpu/lib/python3.6/site-packages/tensorflow/python/framework/function.py", line 863, in func_graph_from_py_func
outputs = func(*func_graph.inputs)
File "/home/user/.conda/envs/tensorflowcpu/lib/python3.6/site-packages/tensorflow/python/data/ops/dataset_ops.py", line 1392, in tf_data_structured_function_wrapper
ret = func(*nested_args)
File "/home/user/.conda/envs/tensorflowcpu/lib/python3.6/site-packages/tensorflow/contrib/data/python/ops/resampling.py", line 176, in update_estimate_and_tile
c, num_examples_per_class_seen)
File "/home/user/.conda/envs/tensorflowcpu/lib/python3.6/site-packages/tensorflow/contrib/data/python/ops/resampling.py", line 212, in _estimate_data_distribution
array_ops.one_hot(c, num_classes, dtype=dtypes.int64), 0))
File "/home/user/.conda/envs/tensorflowcpu/lib/python3.6/site-packages/tensorflow/python/ops/gen_math_ops.py", line 297, in add
"Add", x=x, y=y, name=name)
File "/home/user/.conda/envs/tensorflowcpu/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 546, in _apply_op_helper
inferred_from[input_arg.type_attr]))
TypeError: Input 'y' of 'Add' Op has type int64 that does not match type int32 of argument 'x'.
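One detail stands out in the last frames of the traceback: _estimate_data_distribution adds a one-hot tensor created with dtype=dtypes.int64 to a running counter that is int32 on the cluster, and the fact that the same code works on one machine suggests the two machines run different TensorFlow versions. A hedged sketch of two things to try, both assumptions rather than confirmed fixes:

# 1) compare TensorFlow versions on both machines; the int32/int64 counters
#    inside tf.contrib.data.rejection_resample differed between releases
import tensorflow as tf
print(tf.__version__)

# 2) feed the labels as int64 so every tensor derived from them is int64
dataset = tf.data.Dataset.from_tensor_slices((features.astype('float32'),
                                              labels.astype('int64')))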
I want to import data from a text file and build a vector-space representation of the words:
from sklearn.feature_extraction.text import CountVectorizer
vectorizer = CountVectorizer(input="file")
f = open('D:\\test\\17.txt')
bag_of_words = vectorizer.fit(f)
bag_of_words = vectorizer.transform(f)
print(bag_of_words)
But I get this error:
Traceback (most recent call last):
File "D:\test\test.py", line 5, in <module>
bag_of_words = vectorizer.fit(f)
File "C:\Anaconda3\lib\site-packages\sklearn\feature_extraction\text.py", line 776, in fit
self.fit_transform(raw_documents)
File "C:\Anaconda3\lib\site-packages\sklearn\feature_extraction\text.py", line 804, in fit_transform
self.fixed_vocabulary_)
File "C:\Anaconda3\lib\site-packages\sklearn\feature_extraction\text.py", line 739, in _count_vocab
for feature in analyze(doc):
File "C:\Anaconda3\lib\site-packages\sklearn\feature_extraction\text.py", line 236, in <lambda>
tokenize(preprocess(self.decode(doc))), stop_words)
File "C:\Anaconda3\lib\site-packages\sklearn\feature_extraction\text.py", line 110, in decode
doc = doc.read()
AttributeError: 'str' object has no attribute 'read'
Any ideas?
The vectorizer.fit method expects an iterable of file or string objects (not a single file object), hence you should call vectorizer.fit([f]).
In addition, you cannot reuse f in the second call, vectorizer.transform([f]), because the file has already been read to the end by that point. What you probably want to do is the following:
vectorizer = CountVectorizer(input="file")
f = open('D:\\test\\17.txt')
bag_of_words = vectorizer.fit_transform([f])  # fit the vocabulary and transform in one pass over the file
print(bag_of_words)
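Equivalently (a sketch of an alternative, not part of the original answer): read the file contents yourself and let the vectorizer consume strings via its default input="content" mode, which sidesteps the exhausted-file-handle pitfall entirely:

from sklearn.feature_extraction.text import CountVectorizer

with open('D:\\test\\17.txt') as f:
    text = f.read()                 # read once; keep the contents as a string

vectorizer = CountVectorizer()      # default input="content" expects strings
bag_of_words = vectorizer.fit_transform([text])
print(bag_of_words)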