I'm getting this error: Python can't find the augmentations module. I'm trying to use BasicSR for model training. I already tried installing albumentations, thinking it might be the missing module, but that doesn't fix it.
Can someone help me and explain how to solve this?
How it's being imported:
import os.path
import random
import numpy as np
import cv2
import torch
import torch.utils.data as data
import data.util as util
import sys
sys.path.append('../codes/scripts')
sys.path.append('../codes/data')
import augmentations  # <-- the import error occurs here
How it's being used:
# Random crop (reduce computing cost and adjust images to the correct size first)
if img_HR.shape[0] > HR_size or img_HR.shape[1] > HR_size:
    # Here the scale should be with respect to the images, not to the training
    # scale (in case they are being scaled on the fly)
    scaleor = img_HR.shape[0] // img_LR.shape[0]
    img_HR, img_LR = augmentations.random_crop_pairs(img_HR, img_LR, HR_size, scaleor)
Console error:
File "D:\basicsrtrainmodel\BasicSR\codes\data\LRHROTF_dataset.py", line 12, in <module>
import augmentations
ModuleNotFoundError: No module named 'augmentations'
Someone help me please?
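One thing worth checking, as a minimal sketch rather than a definitive fix: relative entries passed to sys.path.append are resolved against the current working directory, not against the file doing the import, so the append only works when training is launched from the directory those paths assume. Anchoring the path to __file__ (assuming augmentations.py really lives under codes/scripts, as the traceback's layout suggests) avoids that:

import os
import sys

# Resolve the scripts folder relative to this dataset file (codes/data/...),
# instead of relying on the current working directory.
_here = os.path.dirname(os.path.abspath(__file__))
sys.path.append(os.path.join(_here, '..', 'scripts'))

import augmentations  # succeeds as long as codes/scripts/augmentations.py exists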
Related
This is the error I can't figure out.
module 'keras.backend' has no attribute 'unique_object_name'
This is what I'm importing:
import cv2
import os
from keras.models import load_model
import numpy as np
from pygame import mixer
import time
I get the error when I try to run this line:
model = load_model('C:/Users/Henry/Downloads/Drowsiness detection/Drowsiness detection/models/cnnCat2.h5')
The method keras.models.load_model() probably worked properly before Keras became part of TensorFlow.
If you are using a newer version of TF, you should call this to load the model:
tf.keras.models.load_model()
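For example, a minimal sketch using the model path copied from the question (assuming a TF 2.x installation):

import tensorflow as tf

# Same model file as in the question, loaded through the tf.keras namespace
model = tf.keras.models.load_model(
    'C:/Users/Henry/Downloads/Drowsiness detection/Drowsiness detection/models/cnnCat2.h5'
)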
In the following code I'm getting an error when trying to call librosa.griffinlim; it tells me the attribute does not exist.
import os
from matplotlib import pyplot as plt
import librosa
import librosa.display
import IPython.display as ipd
import numpy as np
import cv2
import soundfile as sf  # used by sf.write below, but missing from the original imports
S = cv2.imread('spectrograms/CantinaBand60.wav10.jpg')
D = librosa.amplitude_to_db(np.abs(S), ref=np.max)
signal = librosa.griffinlim(D)
sf.write('test.wav', signal, 352000)
I've upgraded librosa, and I still encounter the error. The documentation page for this function no longer seems to exist either. I've also tried importing just that function as librosa.griffinlim, but it continues to tell me it doesn't exist. Was this function removed in a recent version? If so, is there another function I can use to apply the Griffin-Lim algorithm?
librosa.griffinlim was introduced in librosa 0.7.0, so you need that version or later. You can check your installed version with the following code.
import librosa; print(librosa.__version__)
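Once you are on 0.7.0 or later, note that griffinlim expects a magnitude spectrogram, not dB values (and not pixel data read from a JPEG). A minimal sketch of the intended usage, with a synthetic tone made up purely for illustration:

import numpy as np
import librosa
import soundfile as sf

# Build a magnitude spectrogram from a synthetic tone, then invert it.
y = librosa.tone(440, sr=22050, duration=1.0)
S_mag = np.abs(librosa.stft(y))    # griffinlim wants magnitudes, not dB
y_rec = librosa.griffinlim(S_mag)  # Griffin-Lim phase reconstruction
sf.write('reconstructed.wav', y_rec, 22050)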
I'm using numpy and mlrose, and all I have written so far is:
import numpy as np
import mlrose
However, when I run it, it comes up with an error message:
File "C:\Users\<my username>\AppData\Local\Programs\Python\Python38-32\lib\site-packages\mlrose\neural.py", line 12, in <module>
from sklearn.externals import six
ImportError: cannot import name 'six' from 'sklearn.externals' (C:\Users\<my username>\AppData\Local\Programs\Python\Python38-32\lib\site-packages\sklearn\externals\__init__.py)
Any help on sorting this problem will be greatly appreciated.
Solution: The real answer is that the dependency needs to be changed by the mlrose maintainers.
A workaround is:
import six
import sys

# Register six under the name mlrose expects, so that its internal
# "from sklearn.externals import six" resolves; this must run before
# mlrose is imported.
sys.modules['sklearn.externals.six'] = six
import mlrose
from sklearn.externals import six is deprecated; use import six instead.
This issue has been solved; sorry for any time wasted. I'm using Notepad++ due to hardware constraints, so I wasn't aware of needing to import os to define a file path.
I am trying to create a TFLite model. I have marked with an arrow the line where I'm getting the file path error:
Error: (NameError: name 'oranges' is not defined)
from __future__ import absolute_import, division, print_function, unicode_literals
import numpy as np
import tensorflow as tf
assert tf.__version__.startswith('2')
from tensorflow_examples.lite.model_customization.core.data_util.image_dataloader import ImageClassifierDataLoader
from tensorflow_examples.lite.model_customization.core.task import image_classifier
from tensorflow_examples.lite.model_customization.core.task.model_spec import efficientnet_b0_spec
from tensorflow_examples.lite.model_customization.core.task.model_spec import ImageModelSpec
import matplotlib.pyplot as plt
data = ImageClassifierDataLoader.from_folder(oranges)  # <-- oranges is a folder containing the test images; it is in the same folder as this file
model = image_classifier.create(data)
loss, accuracy = model.evaluate()
model.export('image_classifier.tflite', 'image_labels.txt')
The argument must be the path to a folder containing your images. In this case, the name oranges is never defined as a folder path anywhere in the code, which is why Python raises the NameError.
To create a folder path, run:
import os
oranges = os.path.abspath('oranges')
Before executing the code:
data = ImageClassifierDataLoader.from_folder(oranges)
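A slightly more defensive variant of the same idea (a sketch; the folder name comes from the question, and ImageClassifierDataLoader is imported as shown there) fails fast with a readable message if the folder is missing:

import os

oranges = os.path.abspath('oranges')
if not os.path.isdir(oranges):
    raise FileNotFoundError('Expected an image folder at ' + oranges)

data = ImageClassifierDataLoader.from_folder(oranges)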
I'm writing code in a Jupyter Notebook that involves cleaning and analyzing a large amount of consumer data. I'm trying to use dill to save the dataframes (thousands of rows each) so I don't have to rerun the code every time I want to make an adjustment, and dill seems like the perfect package to do so... Except I'm getting this error when attempting to pickle the notebook session:
AttributeError: module 'dill' has no attribute 'dump_session'
Let me know if the program code is necessary - I don't think it should make a difference. The imports are:
from __future__ import division  # __future__ imports must precede all other statements
import numpy as np
import pandas as pd
import dill
import scipy
from matplotlib import pyplot as plt
from collections import OrderedDict
from sklearn.cluster import KMeans
pd.options.display.max_columns = None
and when I run this code I get the error from above:
dill.dump_session('recengine.db')
Is there another package that's interfering with dill's use of pickle vs. cpickle?
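A quick diagnostic sketch rather than a fix: dill has shipped dump_session for a long time, so this error usually means either a very old dill install or a local file named dill.py shadowing the real package.

import dill

print(dill.__version__)              # check for a very old release
print(dill.__file__)                 # if this points inside your project folder,
                                     # a local dill.py is shadowing the package
print(hasattr(dill, 'dump_session'))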