Error when trying to use TensorBoard with Google Colab - python

I am getting an error when trying to use TensorBoard with Google Colab.
I am using ngrok to run TensorBoard. The error is as follows.
The code I am using for the above is as follows:
LOG_DIR = '/content/drive/My Drive/Practice/Su'
get_ipython().system_raw(
    'tensorboard --logdir {} --host 0.0.0.0 --port 6006 &'
    .format(LOG_DIR)
)
!wget https://bin.equinox.io/c/4VmDzA7iaHb/ngrok-stable-linux-amd64.zip
!unzip ngrok-stable-linux-amd64.zip
get_ipython().system_raw('./ngrok http 6006 &')
!curl -s http://localhost:4040/api/tunnels | python3 -c \
    "import sys, json; print(json.load(sys.stdin)['tunnels'][0]['public_url'])"

Skip ngrok and use the built-in %tensorboard magic.
Here's a demo:
https://colab.research.google.com/github/tensorflow/tensorboard/blob/master/docs/r2/tensorboard_in_notebooks.ipynb

Here's a solution that worked for me:
First, uninstall TensorBoard and TensorFlow:
!pip3 uninstall tensorboard
!pip3 uninstall tensorflow
Then install tf-nightly:
!pip3 install --ignore-installed tf-nightly
Then run TensorBoard inside Google Colab:
%load_ext tensorboard
%tensorboard --logdir {logs_base_dir}
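For a quick end-to-end check, here is a minimal sketch of that built-in workflow in a single Colab cell. The log directory and the tiny Keras model are illustrative placeholders, not taken from the question:

import tensorflow as tf

logs_base_dir = '/content/logs'  # placeholder log directory

# A tiny model, only to produce some TensorBoard event files
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer='adam', loss='mse')

tb_cb = tf.keras.callbacks.TensorBoard(log_dir=logs_base_dir)
model.fit(tf.random.normal((32, 4)), tf.random.normal((32, 1)),
          epochs=2, callbacks=[tb_cb])

%load_ext tensorboard
%tensorboard --logdir {logs_base_dir}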

Related

Streamlit app not loading from google colab

I tried to run one of my Streamlit apps from Google Colab because my local system is not powerful enough for such a heavy task.
I used ngrok according to the instructions from a gist. The output shows that the app is running on a local port, but the local link does not load and eventually shows "site can't be reached".
Implementation:
from google.colab import drive
drive.mount('/content/drive/',force_remount=True)
%cd drive/MyDrive/MyProject/
!pip install -r requirements.txt
!pip install pyngrok
!pip install -q streamlit
!wget https://bin.equinox.io/c/4VmDzA7iaHb/ngrok-stable-linux-amd64.zip
!unzip ngrok-stable-linux-amd64.zip
!./ngrok authtoken *****************  # my ngrok auth token goes here
get_ipython().system_raw('./ngrok http 8501 &')
# !curl -s http://localhost:4040/api/tunnels | python3 -c \
# 'import sys, json; print("Execute the next cell and the go to the following URL: " +json.load(sys.stdin)["tunnels"][0]["public_url"])'
!nohup streamlit run main.py &
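Since the question already installs pyngrok but then tunnels with the standalone ngrok binary, here is a minimal sketch that opens the tunnel with pyngrok instead, as a point of comparison. Port 8501 is Streamlit's default; the token and main.py are placeholders:

from pyngrok import ngrok

ngrok.set_auth_token("YOUR_NGROK_TOKEN")  # placeholder auth token
tunnel = ngrok.connect(8501, "http")      # tunnel to Streamlit's default port
print("Streamlit app should be reachable at:", tunnel)

!nohup streamlit run main.py &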

Trouble training YOLOv5 on AWS SageMaker | AlgorithmError: , exit code: 1

I'm trying to train YOLOv5 on AWS SageMaker with custom data (stored in S3) via a Docker image (ECR), and I keep getting "AlgorithmError: , exit code: 1". Can someone please tell me how to debug this problem?
Here's the Dockerfile:
# GET THE AWS IMAGE
FROM 763104351884.dkr.ecr.eu-west-3.amazonaws.com/pytorch-training:1.11.0-gpu-py38-cu113-ubuntu20.04-sagemaker
# UPDATES
RUN apt update
RUN DEBIAN_FRONTEND=noninteractive TZ=Etc/UTC apt install -y tzdata
RUN apt install -y python3-pip git zip curl htop screen libgl1-mesa-glx libglib2.0-0
RUN alias python=python3
# INSTALL REQUIREMENTS
COPY requirements.txt .
RUN python3 -m pip install --upgrade pip
RUN pip install --no-cache -r requirements.txt albumentations gsutil notebook \
coremltools onnx onnx-simplifier onnxruntime openvino-dev tensorflow-cpu tensorflowjs
COPY code /opt/ml/code
WORKDIR /opt/ml/code
RUN git clone https://github.com/ultralytics/yolov5 /opt/ml/code/yolov5
ENV SAGEMAKER_SUBMIT_DIRECTORY /opt/ml/code
ENV SAGEMAKER_PROGRAM trainYolo.py
ENTRYPOINT ["python", "trainYolo.py"]
And here's trainYolo.py:
import json
import os
import numpy as np
import cv2 as cv
import subprocess
import yaml
import shutil
trainSet = os.environ["SM_CHANNEL_TRAIN"]
valSet = os.environ["SM_CHANNEL_VAL"]
output_dir = os.environ["SM_CHANNEL_OUTPUT"]
#Creating the data.yaml for yolo
dict_file = [{'names' : ['block']},
             {'nc' : ['1']},
             {'train': [trainSet]},
             {'val': [valSet]}]
with open(r'data.yaml', 'w') as file:
    documents = yaml.dump(dict_file, file)
#Execute this command to train Yolo
res = subprocess.run(["python3", "yolov5/train.py", "--batch", "16" "--epochs", "100", "--data", "data.yaml", "--cfg", "yolov5/models/yolov5s.yaml","--weights", "yolov5s.pt" "--cache"], shell=True)
shutil.copy("yolov5", output_dir)
Note: I'm not sure if subprocess.run() works in an environment such as SageMaker.
Thank you
Your training script is not configured properly. When using a SageMaker estimator or Script Mode, you must structure the script so that it saves the model where SageMaker expects it. Here's an example notebook with TensorFlow and Script Mode (first link below). If you would like to build your own Dockerfile (Bring Your Own Container), then you would have to configure your train file as shown in the second link.
Script-Mode: https://github.com/RamVegiraju/SageMaker-Deployment/tree/master/RealTime/Script-Mode/TensorFlow/Classification
BYOC: https://github.com/RamVegiraju/SageMaker-Deployment/tree/master/RealTime/BYOC/Sklearn/Sklearn-Regressor/container/randomForest
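To illustrate the point about saving the model where SageMaker expects it, here is a minimal sketch of a SageMaker-style entry script. The SM_* environment variables are the standard ones SageMaker injects; the --project flag (pointing YOLOv5's output at the model directory) is an assumption, not the asker's exact command:

import os
import subprocess

# Standard SageMaker locations, injected as environment variables
train_dir = os.environ["SM_CHANNEL_TRAIN"]
val_dir = os.environ["SM_CHANNEL_VAL"]
model_dir = os.environ.get("SM_MODEL_DIR", "/opt/ml/model")  # artifacts here get packaged into model.tar.gz

# Run training as a plain argument list (no shell=True), and fail loudly on errors
subprocess.run(
    ["python3", "yolov5/train.py",
     "--batch", "16", "--epochs", "100",
     "--data", "data.yaml",
     "--weights", "yolov5s.pt",
     "--project", model_dir],  # assumption: write YOLOv5 runs under the model dir
    check=True,
)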

Object detection with the YOLOv4 model on Raspberry Pi 3 B+ using TensorFlow Lite

I am trying to run the YOLOv4 model on a Raspberry Pi 3 B+ using TensorFlow Lite.
I took the code and followed the instructions from the link below; it ran successfully on my PC but not on the Raspberry Pi:
https://github.com/haroonshakeel/tensorflow-yolov4-tflite
I used these commands in Raspbian:
cd Projects/tflite/
python -m pip install virtualenv
python -m venv tflite-env
source tflite-env/bin/activate
sudo apt -y install libjpeg-dev libtiff5-dev libjasper-dev libpng12-dev libavcodec-dev libavformat-dev libswscale-dev libv4l-dev libxvidcore-dev libx264-dev
sudo apt -y install qt4-dev-tools libatlas-base-dev libhdf5-103
python -m pip install opencv-contrib-python==4.1.0.25
uname -a
uname -m
python --version
python -m pip install https://dl.google.com/coral/python/tflite_runtime-2.1.0.post1-cp37-cp37m-linux_armv7l.whl
Then I ran:
python save_model.py --weights ./data/yolov4.weights --output ./checkpoints/yolov4-416 --input_size 416 --model yolov4 --framework tflite
And it gave me:
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info
I ran the following command for the weights:
python convert_tflite.py --weights ./checkpoints/yolov4-416 --output ./checkpoints/yolov4-416-fp16.tflite --quantize_mode float16
And had this error:
OSError: Unable to create file (unable to open file: name = ' ./checkpoints/yolov4-416', erno = 21, error message = 'Is a directory', flags = 13, o_flags = 242)
Finally, when I tried to run the detection with this command:
python detect_video.py --weights ./checkpoints/yolov4-416.tflite --size 416 --model yolov4 --video ./data/videoNIR.AVI
I had this error:
TypeError(): load() missing 1 required positional argument: 'export dir'
Is there any way to solve these errors?
Thanks.
The commands for building the TFLite model should not be executed on the Raspberry Pi. Do the conversion on your PC, then run only the detection command on the Raspberry Pi. Note also that if you have not connected a screen to your Raspberry Pi, the detection code will not work, because it tries to open a GUI window to display the video, which it will not find.
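For reference, here is a minimal sketch of loading a converted .tflite file on the Pi with tflite_runtime. The model path and input size are placeholders, and the YOLOv4-specific pre- and post-processing is omitted:

import numpy as np
from tflite_runtime.interpreter import Interpreter

# Placeholder path to the model converted on the PC
interpreter = Interpreter(model_path="checkpoints/yolov4-416-fp16.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Dummy 416x416 RGB frame, just to check that inference runs headlessly
frame = np.random.rand(1, 416, 416, 3).astype(np.float32)
interpreter.set_tensor(input_details[0]['index'], frame)
interpreter.invoke()
predictions = interpreter.get_tensor(output_details[0]['index'])
print(predictions.shape)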

Unable to run the demo sound classification application in OpenVINO

To run the sound classification demo in OpenVINO,
I have followed the steps below:
cd /opt/intel/openvino_2021/install_dependencies
sudo -E ./install_openvino_dependencies.sh
for env setting: source /opt/intel/openvino_2021/bin/setupvars.sh
cd /opt/intel/openvino_2021/deployment_tools/model_optimizer/install_prerequisites
sudo ./install_prerequisites.sh
git clone https://github.com/openvinotoolkit/open_model_zoo.git
Then I placed the cloned repo in the deployment_tools directory.
sudo python3 /opt/intel/openvino_2021/deployment_tools/open_model_zoo/tools/downloader/downloader.py --name aclnet
sudo python3 /opt/intel/openvino_2021/deployment_tools/open_model_zoo/tools/downloader/converter.py --name aclnet
Here I got error:
/usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.4) or chardet (3.0.4) doesn't match a supported version!
warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "
Unable to locate Model Optimizer. Use --mo or run setupvars.sh/setupvars.bat from the OpenVINO toolkit.
Here the aclnet model downloaded and produced a .onnx file in the public folder. Now, how do I convert the .onnx file to IR format (.xml and .bin)?
I tried the command below too, but I still get the same error:
sudo python3 ./mo.py --input_model ~/public/aclnet/aclnet_des_53.onnx --output_dir ~/public/aclnet
https://docs.openvinotoolkit.org/latest/omz_demos_sound_classification_demo_python.html
Please can anyone help on this?
This error happened because the script attempts to locate Model Optimizer using the environment variables set by the OpenVINO™ toolkit's setupvars.sh script. You can override this heuristic with the --mo option:
python3 converter.py --mo my/openvino/path/model_optimizer/mo.py --name aclnet

TensorFlow 2 does not show the version in Google Colab and Windows 10

I installed TensorFlow 2 in Google Colab:
!wget https://developer.nvidia.com/compute/cuda/10.0/Prod/local_installers/cuda-repo-ubuntu1604-10-0-local-10.0.130-410.48_1.0-1_amd64 -O cuda-repo-ubuntu1604-10-0-local-10.0.130-410.48_1.0-1_amd64.deb
!dpkg -i cuda-repo-ubuntu1604-10-0-local-10.0.130-410.48_1.0-1_amd64.deb
!apt-key add /var/cuda-repo-10-0-local-10.0.130-410.48/7fa2af80.pub
!apt-get update
!apt-get install cuda
!pip install tf-nightly-gpu-2.0-preview
but when I check the version it shows
1.13.0-dev20190116
I also get an error when I try to use tf.enable_eager_execution(), and:
NameError: name 'layers' is not defined
Try like this:
!pip install tf-nightly-2.0-preview
import tensorflow as tf
As explained by Luca, to check the version you need:
print(tf.__version__)
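As a side note on the two errors in the question: in TF 2.x eager execution is enabled by default, so tf.enable_eager_execution() is no longer needed, and the NameError most likely means layers was never imported (a guess, since the failing code isn't shown). A quick check:

import tensorflow as tf
from tensorflow.keras import layers  # makes the bare name 'layers' available

print(tf.__version__)          # should start with 2.
print(tf.executing_eagerly())  # True by default in TF 2.x
print(layers.Dense(10))        # 'layers' is now defined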
