No module named 'torch.autograd' - python

I am working with the torch package:
import torch
from torch.autograd import Variable

x_data = [1.0, 2.0, 3.0]
y_data = [2.0, 4.0, 6.0]

w = Variable(torch.Tensor([1.0]), requires_grad=True)

def forward(x):
    return x * w

def loss(x, y):
    y_pred = forward(x)
    return (y_pred - y) * (y_pred - y)

print("my prediction before training", 4, forward(4))

for epoch in range(10):
    for x_val, y_val in zip(x_data, y_data):
        l = loss(x_val, y_val)
        l.backward()
        print("\tgrad: ", x_val, y_val, w.grad.data[0])
        w.data = w.data - 0.01 * w.grad.data
        w.grad.data.zero_()
    print("progress:", epoch, l.data[0])

print("my new prediction after training ", forward(4))
I got this error:
runfile('C:/gdrive/python/temp2.py', wdir='C:/gdrive/python')
Traceback (most recent call last):
File "C:\gdrive\python\temp2.py", line 11, in <module>
from torch.autograd import Variable
ModuleNotFoundError: No module named 'torch.autograd'
The command conda list pytorch returns no packages, only the environment header:
# packages in environment at C:\Users\g\.conda\envs\test:
#
# Name Version Build Channel
(test) PS C:\gdrive\python>
How can I fix this problem?

It seems to me that you have installed PyTorch using conda.
You might also have a folder named torch in your current directory, which would shadow the installed package.
Try changing the directory, or try installing PyTorch using pip.
This GitHub issue might help you solve your problem: https://github.com/pytorch/pytorch/issues/1851
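If it helps, here is a minimal, hedged check (my addition, not from the linked issue) to see which torch Python actually imports; a path inside your working directory instead of site-packages means a local torch folder or torch.py is shadowing the installed package:

# Print where the "torch" import resolves; if this points into your project
# directory rather than site-packages, a local torch folder or torch.py is
# shadowing the installed package -- rename or move it.
import torch
print(torch.__file__)                        # may be None for a bare namespace folder
print(list(getattr(torch, "__path__", [])))  # the directory actually imported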

For people who are using pip (installing torchvision also pulls in torch as a dependency):
pip install torchvision

Related

Getting the error "'str' object has no attribute 'decode'" when trying to use custom weights for image classification [duplicate]

After training, I saved both the whole Keras model and the weights only, using
model.save_weights(MODEL_WEIGHTS) and model.save(MODEL_NAME)
Both were saved successfully, with no error.
I can successfully load the weights simply using model.load_weights and they are good to go, but when I try to load the saved model via load_model, I get an error.
File "C:/Users/Rizwan/model_testing/model_performance.py", line 46, in <module>
Model2 = load_model('nasnet_RS2.h5',custom_objects={'euc_dist_keras': euc_dist_keras})
File "C:\Users\Rizwan\AppData\Roaming\Python\Python36\site-packages\keras\engine\saving.py", line 419, in load_model
model = _deserialize_model(f, custom_objects, compile)
File "C:\Users\Rizwan\AppData\Roaming\Python\Python36\site-packages\keras\engine\saving.py", line 321, in _deserialize_model
optimizer_weights_group['weight_names']]
File "C:\Users\Rizwan\AppData\Roaming\Python\Python36\site-packages\keras\engine\saving.py", line 320, in <listcomp>
n.decode('utf8') for n in
AttributeError: 'str' object has no attribute 'decode'
I never received this error before and I used to load models successfully. I am using Keras 2.2.4 with the TensorFlow backend, Python 3.6.
My code for training is:
from keras_preprocessing.image import ImageDataGenerator
from keras import backend as K
from keras.models import load_model, Model
from keras.layers import Dense
from keras.applications import NASNetMobile
from keras.callbacks import ReduceLROnPlateau, TensorBoard, ModelCheckpoint, EarlyStopping
import pandas as pd

MODEL_NAME = "nasnet_RS2.h5"
MODEL_WEIGHTS = "nasnet_RS2_weights.h5"

def euc_dist_keras(y_true, y_pred):
    return K.sqrt(K.sum(K.square(y_true - y_pred), axis=-1, keepdims=True))

def main():
    # Here, we initialize the "NASNetMobile" model type and customize the
    # final feature regressor layer.
    # NASNet is a neural network architecture developed by Google.
    # This architecture is specialized for transfer learning, and was
    # discovered via Neural Architecture Search.
    # NASNetMobile is a smaller version of NASNet.
    model = NASNetMobile()
    model = Model(model.input, Dense(1, activation='linear', kernel_initializer='normal')(model.layers[-2].output))
    # model = load_model('current_best.hdf5', custom_objects={'euc_dist_keras': euc_dist_keras})

    # This model will use the "Adam" optimizer.
    model.compile("adam", euc_dist_keras)
    lr_callback = ReduceLROnPlateau(monitor='val_loss', factor=0.2, patience=5, min_lr=0.003)
    # This callback will log model stats to Tensorboard.
    tb_callback = TensorBoard()
    # This callback will checkpoint the best model at every epoch.
    mc_callback = ModelCheckpoint(filepath='current_best_mem3.h5', verbose=1, save_best_only=True)
    es_callback = EarlyStopping(monitor='val_loss', min_delta=0, patience=4, verbose=0, mode='auto', baseline=None, restore_best_weights=True)

    # These are the callbacks.
    # callbacks = [lr_callback, tb_callback, mc_callback]
    callbacks = [lr_callback, tb_callback, es_callback]

    train_pd = pd.read_csv("./train3.txt", delimiter=" ", names=["id", "label"], index_col=None)
    test_pd = pd.read_csv("./val3.txt", delimiter=" ", names=["id", "label"], index_col=None)
    # train_pd = pd.read_csv("./train2.txt", delimiter=" ", header=None, index_col=None)
    # test_pd = pd.read_csv("./val2.txt", delimiter=" ", header=None, index_col=None)
    # model.summary()

    batch_size = 32
    datagen = ImageDataGenerator(rescale=1. / 255)
    # This is the train DataSequence.
    train_generator = datagen.flow_from_dataframe(dataframe=train_pd,
                                                  directory="./images", x_col="id", y_col="label",
                                                  has_ext=True,
                                                  class_mode="other", target_size=(224, 224),
                                                  batch_size=batch_size)
    valid_generator = datagen.flow_from_dataframe(dataframe=test_pd, directory="./images", x_col="id", y_col="label",
                                                  has_ext=True, class_mode="other", target_size=(224, 224),
                                                  batch_size=batch_size)
    STEP_SIZE_TRAIN = train_generator.n // train_generator.batch_size
    STEP_SIZE_VALID = valid_generator.n // valid_generator.batch_size
    model.fit_generator(generator=train_generator,
                        steps_per_epoch=STEP_SIZE_TRAIN,
                        validation_data=valid_generator,
                        validation_steps=STEP_SIZE_VALID,
                        callbacks=callbacks,
                        epochs=20)

    # We save the model.
    model.save_weights(MODEL_WEIGHTS)
    model.save(MODEL_NAME)

if __name__ == '__main__':
    # freeze_support() here if program needs to be frozen
    main()
For me the solution was downgrading the h5py package (in my case to 2.10.0); apparently putting only Keras and TensorFlow back to the correct versions was not enough.
I downgraded my h5py package with the following command:
pip install 'h5py==2.10.0' --force-reinstall
I restarted my IPython kernel and it worked.
For me it was the version of h5py that was newer than my previous build. I fixed it by pinning it to 2.10.0.
Downgrade the h5py package with the following command to resolve the issue:
pip install h5py==2.10.0 --force-reinstall
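To confirm which h5py the kernel actually sees before and after the downgrade, a minimal check (my addition, not part of the answers above):

# The 'decode' error is typically reported with h5py >= 3.0 and older Keras
# releases, because h5py 3.x returns str instead of bytes for attributes.
import h5py
print(h5py.__version__)  # should print 2.10.0 after the downgrade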
I had the same problem and solved it by passing compile=False to load_model and compiling the model myself afterwards:
model_ = load_model('path to your model.h5', custom_objects={'Scale': Scale()}, compile=False)
sgd = SGD(lr=1e-3, decay=1e-6, momentum=0.9, nesterov=True)
model_.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])
Another option is to save in the TF format instead of h5py, with save_format='tf'. In my case:
model.save_weights("NMT_model_weight.tf", save_format='tf')
This is probably due to a model saved from a different version of Keras. I got the same problem when loading a model generated by tensorflow.keras (which is similar to Keras 2.1.6 for TF 1.12, I think) from Keras 2.2.6.
You can load the weights with model.load_weights and re-save the complete model from the Keras version you want to use.
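A minimal sketch of that workaround, assuming a hypothetical build_model() that recreates the same architecture in code, and the weight file names used in the question:

# Rebuild the architecture under the Keras version you want to use,
# restore only the weights (which bypasses the saved model config that
# triggers the decode error), then re-save the complete model.
model = build_model()  # hypothetical: recreate the exact same architecture
model.load_weights("nasnet_RS2_weights.h5")
model.save("nasnet_RS2_resaved.h5")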
The solution that worked for me was:
pip3 uninstall keras
pip3 uninstall tensorflow
pip3 install --upgrade pip
pip3 install tensorflow
pip3 install keras
I still kept having this error after having tensorflow==2.4.1, h5py==2.1.0, and Python 3.8 in my environment. What fixed it was downgrading the Python version to 3.6.9.
Downgrading Python, TensorFlow, Keras, and h5py resolved the issue for me:
python -> 3.6.2
pip install tensorflow==1.3.0
pip install keras==2.1.2
pip install 'h5py==2.10.0' --force-reinstall

Why is the error "cannot import name 'balanced_accuracy' from 'sklearn.metrics'" more complicated than it should be?

I'm trying to do a multiclass classification project on a COVID-19 imbalanced dataset using Jupyter notebooks on Kaggle. My code is:
import random
import gc
import os
import cv2  # needed for the image reading/resizing below
import numpy as np
from numpy import asarray
import itertools

train_dir = '/kaggle/input/pandemic2/Training/Training'
test_dir = '/kaggle/input/pandemic2/Testing/Testing'

train_covid = ['/kaggle/input/pandemic2/Training/Training/{}'.format(i) for i in os.listdir(train_dir) if 'COVID' in i]
train_normal = ['/kaggle/input/pandemic2/Training/Training/{}'.format(i) for i in os.listdir(train_dir) if 'Normal' in i]
# Keep files whose names mention any pneumonia-related keyword.
train_pneumonia = ['/kaggle/input/pandemic2/Training/Training/{}'.format(i) for i in os.listdir(train_dir)
                   if any(k in i for k in ('MERS', 'SARS', 'Bacterial', 'Chlamydia', 'Influenza', 'Klebsiella',
                                           'Legionella', 'Mycoplasma', 'Pneumocystis', 'Streptococcus', 'Varicella'))]
test_imgs = ['/kaggle/input/pandemic2/Testing/Testing/{}'.format(i) for i in os.listdir(test_dir)]
train_imgs = train_covid[:] + train_normal[:] + train_pneumonia[:]
random.shuffle(train_imgs)

del train_covid
del train_normal
del train_pneumonia
gc.collect()

nrows = 150
ncolumns = 150
channels = 3

def read_and_process_image(list_of_images):
    x = []
    y = []
    for image in list_of_images:
        x.append(cv2.resize(cv2.imread(image, cv2.IMREAD_COLOR), (nrows, ncolumns), interpolation=cv2.INTER_CUBIC))
        if 'Normal' in image:
            y.append(0)
        elif 'COVID' in image:
            y.append(1)
        else:
            y.append(2)
    return x, y

x, y = read_and_process_image(train_imgs)
del train_imgs
gc.collect()

x = np.array(x)
print(x.shape)
y = np.array(y)
print(y.shape)

import sklearn
from keras import layers
from keras import models
from keras import optimizers
from sklearn.model_selection import StratifiedKFold
from keras.preprocessing.image import ImageDataGenerator
from keras.preprocessing.image import img_to_array, load_img
from sklearn.model_selection import cross_val_score
from sklearn.metrics import balanced_accuracy
from keras.applications import InceptionResNetV2

conv_base = InceptionResNetV2(weights='imagenet', include_top=False, input_shape=(150, 150, 3))
model = models.Sequential()
model.add(conv_base)
model.add(layers.Flatten())
model.add(layers.Dense(256, activation='relu'))
model.add(layers.Dense(3, activation='softmax'))

from keras import optimizers
model.compile(loss='categorical_crossentropy', optimizer=optimizers.Adam(lr=1e-4), metrics=['categorical_accuracy'])

train_datagen = ImageDataGenerator(rescale=1./255,
                                   rotation_range=40,
                                   width_shift_range=0.2,
                                   height_shift_range=0.2,
                                   shear_range=0.2,
                                   zoom_range=0.2,
                                   horizontal_flip=True)
val_datagen = ImageDataGenerator(rescale=1./255)

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=1)
scores = cross_val_score(model, x, y, cv=5, scoring="balanced_accuracy")

from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(x, y, stratify=y, test_size=0.20, random_state=2)
x_train, x_val, y_train, y_val = train_test_split(x_train, y_train, stratify=y_train, train_size=0.8, random_state=2)

from keras.utils import to_categorical
y_train = to_categorical(y_train, 3)
print('Shape of training labels is:', y_train.shape)
y_val = to_categorical(y_val, 3)
print('Shape of validation labels is:', y_val.shape)
y_test = to_categorical(y_test, 3)
print('Shape of test labels is:', y_test.shape)

for index, (train_indices, val_indices) in enumerate(skf.split(x, y)):
    print("Training on fold " + str(index + 1) + "/" + str(skf.n_splits))
    # Generate batches from indices
    xtrain, xval = x[train_indices], x[val_indices]
    ytrain, yval = y[train_indices], y[val_indices]
    ntrain = len(x_train)
    nval = len(x_val)
    batch_size = 32
    train_generator = train_datagen.flow(x_train, y_train, batch_size=batch_size)
    val_generator = val_datagen.flow(x_val, y_val, batch_size=batch_size)
    print("Training new iteration on " + str(xtrain.shape[0]) + " training samples, " + str(xval.shape[0]) + " validation samples, this may take a while...")
    history = model.fit(train_generator,
                        steps_per_epoch=ntrain // batch_size,
                        epochs=80,
                        validation_data=val_generator,
                        validation_steps=nval // batch_size,
                        verbose=2)
print('\nBalanced Accuracy:', mterics.balanced_accurcay*100, '%')
When I run the code, it gives me the following error:
Traceback (most recent call last)
<ipython-input-7-45c4c9070141> in <module>
6 from keras.preprocessing.image import img_to_array, load_img
7 from sklearn.model_selection import cross_val_score
----> 8 from sklearn.metrics import balanced_accuracy
9
10 #Download the model
ImportError: cannot import name 'balanced_accuracy' from 'sklearn.metrics' (/opt/conda/lib/python3.7/site-packages/sklearn/metrics/__init__.py)
I've tried so many solutions like 1, 2, 3 and 4, but this solution has led me to another complicated issue, because whenever I run the command conda activate myenv it gives me the error:
CommandNotFoundError: Your shell has not been properly configured to use 'conda activate'.
To initialize your shell, run
$ conda init <SHELL_NAME>
Currently supported shells are:
- bash
- fish
- tcsh
- xonsh
- zsh
- powershell
See 'conda init --help' for more information and options.
IMPORTANT: You may need to close and restart your shell after running 'conda init'.
Tackling this issue by trying to understand it thoroughly and trying what is provided in this thread led me to the following error message:
/bin/bash: -c: line 0: syntax error near unexpected token `newline' /bin/bash: -c: line 0: `/opt/conda/bin/conda init <bash>'.
I have tried these solutions, 1 and 2, but had no luck.
Then, when I got stuck and felt trapped, I tried to follow the official conda documentation to create a virtual environment with all the needed packages, but I ran into the same error above, stating that my shell is not properly configured to activate the new virtualenv.
Again, I went back to the simple solutions and reverted my base environment to its first revision with conda list --revisions and conda install --revision=0, then updated it, but the error persists and still prevents me from using balanced_accuracy and other useful metrics.
I also tried to create a new Jupyter notebook and start from scratch by updating my packages, but they were already at the most recent versions.
I believe that I'm running the set of configuration commands in the wrong order, because my Jupyter kernel and environment are a complete mess now.
It would be highly appreciated if someone could guide me to the best practices for setting up environments for a deep learning task.
BTW: the solutions suggesting manipulating the bashrc file are a bit confusing to me, and I don't fully understand how they work.
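For what it is worth, the import itself fails because recent scikit-learn releases export the function as balanced_accuracy_score; the plain name balanced_accuracy is only accepted as a scoring string. A minimal sketch (my addition, stated as a hedge rather than a full answer to the environment questions):

# The function name is balanced_accuracy_score (scikit-learn >= 0.20);
# "balanced_accuracy" works only as a scoring= string, e.g. in cross_val_score.
from sklearn.metrics import balanced_accuracy_score

y_true = [0, 1, 2, 2]
y_pred = [0, 1, 1, 2]
print(balanced_accuracy_score(y_true, y_pred))  # 0.833...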

Error output (Symfony\Component\Process\Exception\ProcessFailedException)

Symfony\Component\Process\Exception\ProcessFailedException
================ Error Output: ================
line 15: if loaded_model.predict(X)[0][0] >= 0.5: ^ SyntaxError: invalid syntax
The original code is if loaded_model.predict(X)[0][0] >= 0.5:. After changing the code to >=, another error shows:
Code: 1 (General error), Output:
================ Error Output: ================
Traceback (most recent call last):
  line 1, in <module>
    from keras.models import load_model
ImportError: No module named models
Please help
# keras.py
from keras.models import load_model
from cv2 import imread, resize
from numpy import zeros

# Path of the image to test
img_path = "wound.jpg"

loaded_model = load_model('./content/wound_classification')

# Load and preprocess image
X = zeros((1, 300, 300, 3))
X[0] = resize(imread(img_path), (300, 300))
X[0] = X[0] / 255.0

# Predict image
if loaded_model.predict(X)[0][0] >= 0.5:
    print("Wound is Non Chronic")
else:
    print("Wound is Chronic")
The code works well on my computer, but after uploading it to the Plesk control panel it doesn't work. Does anyone have this problem and a solution?
Or do I need to install Keras in the Plesk control panel? And how would I install it, because Google doesn't show a way to install Keras in the Plesk control panel.
You have to install Keras using pip (e.g. pip install keras, or pip install <zip file> from the releases page: https://github.com/keras-team/keras/releases).
You don't have the right Keras package installed; that's why your code does not work.
Also, rename your keras.py file to something else, as it is confusing Python's imports.
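A minimal sketch of why the rename matters, assuming the script sits in the current directory as keras.py: Python finds that file before the installed package, so keras.models cannot be resolved:

# Run this from the same directory as the offending keras.py: the import
# resolves to the local file, not to site-packages, which is exactly why
# "from keras.models import load_model" fails with "No module named models".
import keras
print(keras.__file__)  # prints the path of the local keras.py, not site-packages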

Getting error messages when importing the tensorflow package even after installing it

Good day everyone. I got a module from the internet which is about NMT. In the module there is an import for tensorflow, but unfortunately, even after installing tensorflow on my system using pip, I still get the error. Here is the error:
from tensorflow.keras.models import load_model
ModuleNotFoundError: No module named 'tensorflow'
The module hello_app.py is below:
from flask import Flask
from flask import request
from flask import jsonify
import uuid
import os
from tensorflow.keras.models import load_model
import numpy as np

app = Flask(__name__)

EXPECTED = {
    "cylinders": {"min": 3, "max": 8},
    "displacement": {"min": 68.0, "max": 455.0},
    "horsepower": {"min": 46.0, "max": 230.0},
    "weight": {"min": 1613, "max": 5140},
    "acceleration": {"min": 8.0, "max": 24.8},
    "year": {"min": 70, "max": 82},
    "origin": {"min": 1, "max": 3}
}

# Load neural network when Flask boots up
model = load_model(os.path.join("../dnn/", "mpg_model.h5"))

@app.route('/api/mpg', methods=['POST'])
def calc_mpg():
    content = request.json
    errors = []
    # Check that each supplied field is expected and within range
    for name in content:
        if name in EXPECTED:
            expected_min = EXPECTED[name]['min']
            expected_max = EXPECTED[name]['max']
            value = content[name]
            if value < expected_min or value > expected_max:
                errors.append(f"Out of bounds: {name}, has value of: {value}, but should be between {expected_min} and {expected_max}.")
        else:
            errors.append(f"Unexpected field: {name}.")
    # Check for missing input fields
    for name in EXPECTED:
        if name not in content:
            errors.append(f"Missing value: {name}.")
    if len(errors) < 1:
        x = np.zeros((1, 7))
        # Predict
        x[0, 0] = content['cylinders']
        x[0, 1] = content['displacement']
        x[0, 2] = content['horsepower']
        x[0, 3] = content['weight']
        x[0, 4] = content['acceleration']
        x[0, 5] = content['year']
        x[0, 6] = content['origin']
        pred = model.predict(x)
        mpg = float(pred[0])
        response = {"id": str(uuid.uuid4()), "mpg": mpg, "errors": errors}
    else:
        response = {"id": str(uuid.uuid4()), "errors": errors}
    print(content['displacement'])
    return jsonify(response)

if __name__ == '__main__':
    app.run(host='0.0.0.0', debug=True)
I would really appreciate your answers. Thank you.
This is the github repo where I got the code
https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_13_01_flask.ipynb
To avoid package or version conflicts, you can use a virtual environment.
pip install virtualenv
virtualenv -p /usr/bin/python3 tf
source tf/bin/activate
pip install tensorflow
If you have Anaconda or conda:
# Set up an Anaconda environment
conda create --name tf python=3
# Activate the new environment
source activate tf
pip install tensorflow
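As a quick sanity check after activating either environment (my addition, under the assumption that hello_app.py is run with that same interpreter):

# Confirm that the interpreter you run hello_app.py with is the one that
# has tensorflow installed; a mismatch reproduces the ModuleNotFoundError.
import sys
import tensorflow as tf
print(sys.executable)   # the Python binary in use
print(tf.__version__)   # the tensorflow it found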

No module named 'sknn'

I am facing a problem with the scikit-neuralnetwork (sknn) module.
The code:
import numpy as np
from sknn.mlp import Regressor, Layer

# TodasEstaciones is a DataFrame defined earlier (not shown)
capasinicio = TodasEstaciones.loc['2015-01-12':'2015-03-31'].as_matrix()[:, [0, 2]]
capasalida = TodasEstaciones.loc['2015-01-12':'2015-03-31'].as_matrix()[:, 1]

neurones = 1000
tasaaprendizaje = 0.00001
numiteraciones = 9000

# Definition of the training for the neural network
redneural = Regressor(
    layers=[
        Layer("ExpLin", units=neurones),
        Layer("ExpLin", units=neurones),
        Layer("Linear")],
    learning_rate=tasaaprendizaje,
    n_iter=numiteraciones)

redneural.fit(capasinicio, capasalida)

# Get the prediction for the train set
valortest = []
for i in range(capasinicio.shape[0]):
    prediccion = redneural.predict(np.array([capasinicio[i, :].tolist()]))
    valortest.append(prediccion[0][0])
The error message:
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-7-e1e3c4d6d246> in <module>()
----> 1 from sknn.mlp import Regressor, Layer
2
3 capasinicio = TodasEstaciones.loc['2015-01-12':'2015-03-31'].as_matrix()[:,[0,2]]
4 capasalida = TodasEstaciones.loc['2015-01-12':'2015-03-31'].as_matrix()[:,1]
5 neurones = 1000
ModuleNotFoundError: No module named 'sknn'
It appears that installing the module through pip
pip install scikit-neuralnetwork
does not solve the problem.
Any help would be appreciated :)
What worked for me: I uninstalled both Python and Anaconda, then reinstalled Anaconda while specifying that its PATH gets priority when calling modules (an option that is not recommended by the developers).
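A hedged note from my side: this error often means pip installed the package into a different interpreter than the one the notebook runs. A minimal check before reinstalling anything:

# If this path differs from the interpreter pip installs into, install with
# the notebook's own interpreter instead:
#     python -m pip install scikit-neuralnetwork
import sys
print(sys.executable)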
