fastai cnn_learner output table of fit_one_cycle() - python

I have trained a CNN using fastai on Kaggle and also on my local machine. After calling learn.fit_one_cycle(1) on Kaggle I get a table of per-epoch losses and metrics as output.
I executed the exact same code on my local machine (with the Spyder IDE and Python 3.7) and everything works, but I cannot see that output table. How can I display it?
This is the complete code:
from fastai import *
from fastai.vision import *
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
bs = 32
path = 'C:\\DB\\UCMerced_LandUse\\UCMerced_LandUse\\Unfoldered_Images'
pat = r"([^/\d]+)[^/]*$"
fnames = get_image_files(path)
data = ImageDataBunch.from_name_re(path, fnames, pat, ds_tfms=get_transforms(),
                                   size=224, bs=bs, num_workers=0).normalize(imagenet_stats)
learn = cnn_learner(data, models.resnet34, metrics=[accuracy])
learn.fit_one_cycle(1)

The problem was that the console in Spyder was set to 'execute in current console', which does not seem to be able to display the result table. Setting it to 'execute in an external system terminal' solved the problem.
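If switching consoles is not an option, another workaround that has been reported for fastai v1 is to force fastprogress into plain console output before training, so the per-epoch metrics are printed as text instead of an HTML table. A minimal sketch, assuming the fastai v1 / fastprogress APIs (check against your installed versions):
import fastai
import fastprogress

# swap the notebook-style progress bars for plain console ones
master_bar, progress_bar = fastprogress.fastprogress.force_console_behavior()
fastai.basic_train.master_bar = master_bar
fastai.basic_train.progress_bar = progress_bar

learn.fit_one_cycle(1)  # losses and metrics now print as plain text per epoch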

Related

How to solve Qt display/platform error on Google Colab

I am trying to run an optical flow model, RAFT, on Google Colab. I have installed the setup file and libraries necessary for it but when I try to run the demo file, I get a Qt error that looks like this.
!python demo.py --model=raft-things.pth --path=demo-frames
/usr/local/lib/python3.10/site-packages/torch-1.13.0-py3.10-linux-x86_64.egg/torch/functional.py:504: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:3190.)
return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]
qt.qpa.xcb: could not connect to display
qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "/usr/local/lib/python3.10/site-packages/opencv_python-4.6.0.66-py3.10-linux-x86_64.egg/cv2/qt/plugins" even though it was found.
This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem.
Available platform plugins are: xcb, eglfs, minimal, minimalegl, offscreen, vnc.
I have never had any GUI errors on Colab with other libraries so I am not sure why torch is giving it.
Here is the demo.py file:
import argparse
import os
import cv2
import glob
import numpy as np
import torch
from PIL import Image

from raft import RAFT
from raft.utils import flow_viz
from raft.utils.utils import InputPadder

DEVICE = 'cuda'

def load_image(imfile):
    img = np.array(Image.open(imfile)).astype(np.uint8)
    img = torch.from_numpy(img).permute(2, 0, 1).float()
    return img[None].to(DEVICE)

def viz(img, flo):
    img = img[0].permute(1, 2, 0).cpu().numpy()
    flo = flo[0].permute(1, 2, 0).cpu().numpy()

    # map flow to rgb image
    flo = flow_viz.flow_to_image(flo)
    img_flo = np.concatenate([img, flo], axis=0)

    # import matplotlib.pyplot as plt
    # plt.imshow(img_flo / 255.0)
    # plt.show()

    cv2.imshow('image', img_flo[:, :, [2, 1, 0]] / 255.0)
    cv2.waitKey()

def demo(args):
    model = torch.nn.DataParallel(RAFT(args))
    model.load_state_dict(torch.load(args.model, map_location=DEVICE))

    model = model.module
    model.to(DEVICE)
    model.eval()

    with torch.no_grad():
        images = glob.glob(os.path.join(args.path, '*.png')) + \
                 glob.glob(os.path.join(args.path, '*.jpg'))

        images = sorted(images)
        for imfile1, imfile2 in zip(images[:-1], images[1:]):
            image1 = load_image(imfile1)
            image2 = load_image(imfile2)

            padder = InputPadder(image1.shape)
            image1, image2 = padder.pad(image1, image2)

            flow_low, flow_up = model(image1, image2, iters=20, test_mode=True)
            viz(image1, flow_up)

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('--model', help="restore checkpoint")
    parser.add_argument('--path', help="dataset for evaluation")
    parser.add_argument('--small', action='store_true', help='use small model')
    parser.add_argument('--mixed_precision', action='store_true', help='use mixed precision')
    parser.add_argument('--alternate_corr', action='store_true', help='use efficent correlation implementation')
    args = parser.parse_args()

    demo(args)
My hunch is that the issue might be coming from the fact that the model is somewhat outdated (about 2 years old). But I have tried using the PyTorch and CUDA versions they specified as well as the most recent stable versions (both installed through conda), and I am still getting the same error.
Before this, I was getting a CudaCheck error, but I assumed that was because the setup.py file wasn't installed correctly due to not having the correct version of Python (3.8+), which I resolved by creating a separate kernel for Python 3.10 and installing with that. I am now getting this error first.
My other hunch is that it has something to do with cv2 functions like cv2.namedWindow or cv2.imshow. That is what I gathered from this other post, but nothing from there solved my issue. They are necessary, though, as a lot of the architecture is built on cv2.
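One thing to note: Colab runs headless (there is no X server), so cv2.imshow and cv2.namedWindow cannot connect to a display, which is exactly what the qt.qpa.xcb message reports. The commented-out matplotlib lines in viz() already point at a display-free alternative; a minimal sketch of a viz() that writes the visualization to disk instead of opening a window (reusing the flow_viz import that demo.py already has) could look like this:
import cv2
import numpy as np

def viz(img, flo, out_path='flow_vis.png'):
    # same tensor-to-numpy conversion as the original viz()
    img = img[0].permute(1, 2, 0).cpu().numpy()
    flo = flo[0].permute(1, 2, 0).cpu().numpy()

    # map flow to an RGB image (flow_viz is imported at the top of demo.py)
    flo = flow_viz.flow_to_image(flo)
    img_flo = np.concatenate([img, flo], axis=0)

    # write to disk instead of calling cv2.imshow, which needs a display
    cv2.imwrite(out_path, img_flo[:, :, [2, 1, 0]].astype(np.uint8))
The saved image can then be opened in a notebook cell (e.g. with matplotlib) without any Qt involvement.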

How to upgrade a TF1 GAN notebook on Colab to TF2? It doesn't work because Colab doesn't support TF1 anymore

I was trying to run this notebook on colab,
https://colab.research.google.com/github/https-deeplearning-ai/GANs-Public/blob/master/C1W1_(Colab)_Inputs_to_a_pre_trained_GAN.ipynb ,
but first I got this:
ValueError: Tensorflow 1 is unsupported in Colab.
then I upgraded it using this script:
import tensorflow as tf
!tf_upgrade_v2 \
--intree stylegan/ \
--inplace
and I commented these out:
%tensorflow_version 1.x
tflib.init_tf()
but then I got this one and couldn't solve it:
AttributeError: Can't get attribute 'Network' on <module 'dnnlib.tflib.network' from '/content/stylegan/dnnlib/tflib/network.py'>
Can somebody help?
# Clone the official StyleGAN repository from GitHub
!git clone https://github.com/NVlabs/stylegan.git

%tensorflow_version 1.x

import os
import pickle
import numpy as np
import PIL.Image
import stylegan
from stylegan import config
from stylegan.dnnlib import tflib
from tensorflow.python.util import module_wrapper
module_wrapper._PER_MODULE_WARNING_LIMIT = 0

# Initialize TensorFlow
tflib.init_tf()

# Go into that cloned directory
path = 'stylegan/'
if "stylegan" not in os.getcwd():
    os.chdir(path)

# Load pre-trained network
# url = 'https://drive.google.com/uc?id=1MEGjdvVpUsu1jB4zrXZN7Y4kBBOzizDQ' # Downloads the pickled model file: karras2019stylegan-ffhq-1024x1024.pkl
url = 'https://bitbucket.org/ezelikman/gans/downloads/karras2019stylegan-ffhq-1024x1024.pkl'
with stylegan.dnnlib.util.open_url(url, cache_dir=config.cache_dir) as f:
    print(f)
    _G, _D, Gs = pickle.load(f)

# Gs.print_layers() # Print network details
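There is no accepted fix in this thread, but one commonly used bridge when TF1-style code has to run on a TF2-only runtime is the tf.compat.v1 shim rather than a full tf_upgrade_v2 rewrite. The snippet below is only a sketch of that general pattern, not a verified fix for the Network unpickling error: the shim does not restore tf.contrib, which parts of StyleGAN's dnnlib rely on, so a TF2-native StyleGAN port may be the more practical route.
# Run TF1-style graph code under a TF2 runtime via the compat shim
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()   # restores graph mode, sessions, placeholders

# TF1-style code then runs largely unchanged, e.g.:
x = tf.placeholder(tf.float32, shape=[None, 512])
with tf.Session() as sess:
    print(sess.run(tf.reduce_sum(x), feed_dict={x: [[0.0] * 512]}))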

Error messages when importing the tensorflow package even after installing it

Good day everyone. I got a module from the internet which is about NMT. In the module there is an import for tensorflow, but unfortunately, even after installing tensorflow on my system using pip, I still get the error. Here is the error:
from tensorflow.keras.models import load_model
ModuleNotFoundError: No module named 'tensorflow'
The module hello_app.py is below:
from flask import Flask
from flask import request
from flask import jsonify
import uuid
import os
from tensorflow.keras.models import load_model
import numpy as np

# Create the Flask app (required for the route decorator and app.run below)
app = Flask(__name__)

EXPECTED = {
    "cylinders": {"min": 3, "max": 8},
    "displacement": {"min": 68.0, "max": 455.0},
    "horsepower": {"min": 46.0, "max": 230.0},
    "weight": {"min": 1613, "max": 5140},
    "acceleration": {"min": 8.0, "max": 24.8},
    "year": {"min": 70, "max": 82},
    "origin": {"min": 1, "max": 3}
}

# Load neural network when Flask boots up
model = load_model(os.path.join("../dnn/", "mpg_model.h5"))

@app.route('/api/mpg', methods=['POST'])
def calc_mpg():
    content = request.json
    errors = []

    # Validate the provided input fields
    for name in content:
        if name in EXPECTED:
            expected_min = EXPECTED[name]['min']
            expected_max = EXPECTED[name]['max']
            value = content[name]
            if value < expected_min or value > expected_max:
                errors.append(f"Out of bounds: {name}, has value of: {value}, but should be between {expected_min} and {expected_max}.")
        else:
            errors.append(f"Unexpected field: {name}.")

    # Check for missing input fields
    for name in EXPECTED:
        if name not in content:
            errors.append(f"Missing value: {name}.")

    if len(errors) < 1:
        x = np.zeros((1, 7))

        # Predict
        x[0, 0] = content['cylinders']
        x[0, 1] = content['displacement']
        x[0, 2] = content['horsepower']
        x[0, 3] = content['weight']
        x[0, 4] = content['acceleration']
        x[0, 5] = content['year']
        x[0, 6] = content['origin']

        pred = model.predict(x)
        mpg = float(pred[0])
        response = {"id": str(uuid.uuid4()), "mpg": mpg, "errors": errors}
    else:
        response = {"id": str(uuid.uuid4()), "errors": errors}

    print(content['displacement'])
    return jsonify(response)

if __name__ == '__main__':
    app.run(host='0.0.0.0', debug=True)
Please I would really appreciate your answers. Thank you.
This is the github repo where I got the code
https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_13_01_flask.ipynb
To avoid package or version conflicts, you can use a virtual environment.
pip install virtualenv
virtualenv -p /usr/bin/python3 tf
source tf/bin/activate
tf$ pip install tensorflow
If you have Anaconda or conda:
# Set up an Anaconda environment
conda create --name tf python=3
# Activate the new environment
source activate tf
tf$ pip install tensorflow
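Before (or instead of) creating a new environment, it is also worth confirming that the interpreter that actually runs hello_app.py is the same one into which tensorflow was pip-installed; a ModuleNotFoundError frequently just means the two differ. A quick check, as a minimal sketch with nothing specific to this repo:
import sys
import importlib.util

print(sys.executable)  # the interpreter actually running this script

# None here means this interpreter cannot see the tensorflow package,
# i.e. it was installed into a different Python/pip than the one in use
print(importlib.util.find_spec("tensorflow"))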

Download sklearn datasets behind a proxy

I installed sklearn in my environment and am running it in a Jupyter notebook on Windows.
How can I avoid the error:
URLError: urlopen error [Errno 11004] getaddrinfo failed
I am running the following code:
import sklearn
import sklearn.ensemble
import sklearn.metrics
from sklearn.datasets import fetch_20newsgroups
categories = ['alt.atheism', 'soc.religion.christian']
newsgroups_train = fetch_20newsgroups(subset='train', categories=categories)
which gives the error on this line:
----> 3 newsgroups_train = fetch_20newsgroups(subset='train', categories=categories)
I am behind a proxy on my working computer, is there any option to avoid this error and to be able to use the sample datasets?
According to source code, scikit-learn will download the file from:
https://ndownloader.figshare.com/files/5975967
I am assuming that you cannot reach this location from behind the proxy.
Can you access the dataset by some other means? If yes, then you can download it manually and keep it at the location:
~/scikit_learn_data/
Here ~ refers to the user home folder. You can use the following code to know the default location of that folder according to your system.
from sklearn.datasets import get_data_home
print(get_data_home())
Update: Once the archive is in place, use the following script to convert it into the cached form that scikit-learn expects:
import codecs, os, pickle, shutil, tarfile
from sklearn.datasets import load_files

# expand '~' so open() and tarfile get a real path
data_folder = os.path.expanduser('~/scikit_learn_data/')
target_folder = data_folder + '20news_home/'

# unpack the manually downloaded archive
tarfile.open(data_folder + '20newsbydate.tar.gz', "r:gz").extractall(path=target_folder)

# build the same cache structure scikit-learn creates itself
cache = dict(train=load_files(target_folder + '20news-bydate-train', encoding='latin1'),
             test=load_files(target_folder + '20news-bydate-test', encoding='latin1'))
compressed_content = codecs.encode(pickle.dumps(cache), 'zlib_codec')

with open(data_folder + '20news-bydate_py3.pkz', 'wb') as f:
    f.write(compressed_content)

shutil.rmtree(target_folder)
Scikit-learn will always check whether the dataset exists locally before attempting to download it from the internet. For that it will check the above location.
After that you can run the import normally.
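Alternatively, if outbound access through the proxy is allowed, the manual download can sometimes be avoided entirely: the dataset fetcher goes through Python's urllib, which honours the standard proxy environment variables. A minimal sketch, where the proxy URL is a placeholder to replace with your organisation's actual proxy:
import os
from sklearn.datasets import fetch_20newsgroups

# placeholder proxy address -- replace host/port (and credentials, if required)
os.environ['http_proxy'] = 'http://proxy.example.com:8080'
os.environ['https_proxy'] = 'http://proxy.example.com:8080'

categories = ['alt.atheism', 'soc.religion.christian']
newsgroups_train = fetch_20newsgroups(subset='train', categories=categories)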

Save LGBM model in `.cpp` format from Python

If I run
from sklearn.datasets import load_breast_cancer
import lightgbm as lgb
breast_cancer = load_breast_cancer()
data = breast_cancer.data
target = breast_cancer.target
params = {
    "task": "convert_model",
    "convert_model_language": "cpp",
    "convert_model": "test.cpp",
}
gbm = lgb.train(params, lgb.Dataset(data, target))
I was expecting that a file called test.cpp would be created, with the model saved in C++ format.
However, nothing appears in my current directory.
I have read the documentation (https://lightgbm.readthedocs.io/en/latest/Parameters.html#io-parameters), but can't tell what I'm doing wrong.
Here's a real 'for dummies' answer:
Install the CLI version of lightgbm: https://lightgbm.readthedocs.io/en/latest/Installation-Guide.html
Make note of your installation path, and find the executable. For example, for me, this was ~/LightGBM/lightgbm.
Run the following in a Jupyter notebook:
from sklearn.datasets import load_breast_cancer
import pandas as pd

breast_cancer = load_breast_cancer()
data = pd.DataFrame(breast_cancer.data)
target = pd.DataFrame(breast_cancer.target)
pd.concat([target, data], axis=1).to_csv("regression.train", header=False, index=False)

train_conf = """
task = train
objective = binary
metric = auc
data = regression.train
output_model = trained_model.txt
"""
with open("train.conf", "w") as f:
    f.write(train_conf)

conf_convert = """
task = convert_model
input_model = trained_model.txt
"""
with open("convert.conf", "w") as f:
    f.write(conf_convert)

! ~/LightGBM/lightgbm config=train.conf
! ~/LightGBM/lightgbm config=convert.conf
Your model will be saved in your current directory.
In the doc they say:
Note: can be used only in CLI version
under the convert_model and convert_model_language parameters.
That means that you should probably use the CLI (Command Line Interface) version of LightGBM instead of the Python wrapper to do this.
Link to Quick Start CLI version.
