I've seen plenty of tutorials on using YOLOv7 for object detection, and I followed one to detect license plates. I cloned the repo from GitHub, added my data, uploaded it to Drive, and ran it on Google Colab. The tutorials stop at testing. How do I save this model and use it later for predictions? Is it already saved in the uploaded folder, so that all I have to do is put inputs into the test folder and run the line below? Or can I save it as a .yaml file on my local drive? If so, can you share the code for using that .yaml file to make a prediction locally? (By the way, the training used the Colab GPU.)
This is the line I'm using for testing:
!python detect.py --weights best.pt --conf 0.5 --img-size 640 --source img.jpg --view-img --no-trace
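For context, the best.pt referenced in that command is the trained checkpoint itself. A minimal sketch of reusing it outside Colab, assuming the training was done with the WongKinYiu/yolov7 repo; every path here is an assumption, so adjust it to your setup:
# In Colab: copy the checkpoint to Drive so it survives the session
# (training runs usually report a path like runs/train/exp/weights/best.pt)
!cp runs/train/exp/weights/best.pt /content/drive/MyDrive/best.pt
# On your local machine: clone the same repo, install its requirements,
# and point detect.py at the downloaded checkpoint
git clone https://github.com/WongKinYiu/yolov7.git
cd yolov7
pip install -r requirements.txt
python detect.py --weights /path/to/best.pt --conf 0.5 --img-size 640 --source img.jpg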
I am currently loading the images from my Google Drive.
But the issue is that those images are in my Drive, so when I share my Colab notebook with others they can't run it, since accessing the Drive images requires my authentication code.
So I thought that uploading the data folder (in my case, the images folder) to a GitHub repository and making that repo public would let anyone fetch the data, so no authentication would be required to run the Colab code.
I have no idea how to mount a GitHub repo as a directory the way Google Drive is mounted.
from google.colab import drive
drive.mount('/content/drive/') # this mounts my Google Drive at /content/drive/ so the notebook can read files from it
Is it possible to do a similar kind of mounting with a GitHub repo?
You can clone the repository directly by running git in a code cell:
!git clone https://github.com/yourusername/yourpublicrepo.git
This will create a folder called yourpublicrepo.
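Once cloned, the files can be read like any other local files in Colab. A small sketch, where the images/sample.jpg path inside the repo is a made-up example:
from PIL import Image

# The repo is cloned into the current working directory (/content on Colab)
img = Image.open('yourpublicrepo/images/sample.jpg')  # hypothetical path inside the repo
print(img.size)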
So I have image links like:
https://my_website_name.s3.ap-south-1.amazonaws.com/XYZ/image_id/crop_image.png
and I have almost 10M images which I want to use for deep learning. I already have a script that downloads the images and saves them to the desired directories using requests and PIL.
The most naive approach, which I have used so far, is to download all the images to my local machine, zip them, and upload the archive to Google Drive, where I can use gdown to download it anywhere (subject to network speed) or copy it to Colab from the terminal.
But that data was never very big, always under 200K images. Now the data is huge: downloading and re-uploading would take days, and 10M images would simply make Google Drive run out of space. So I am thinking about using AWS SageMaker or something else from AWS. Is there a better approach? How can I get the data directly onto an SSD-backed virtual machine?
You can use the AWS Python library boto3 to connect to the S3 bucket from Colab: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/s3.html
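A minimal sketch of fetching one object with boto3; the bucket name and key are taken from the example URL above, and the explicit keys are placeholders, so use whatever credential mechanism you prefer:
import boto3

# Create an S3 client; the explicit keys here are placeholders
s3 = boto3.client(
    's3',
    aws_access_key_id='YOUR_ACCESS_KEY',
    aws_secret_access_key='YOUR_SECRET_KEY',
    region_name='ap-south-1',
)

# Download a single object straight to the Colab/VM filesystem
s3.download_file('my_website_name', 'XYZ/image_id/crop_image.png', '/content/crop_image.png')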
I am working on a pothole detection system, and I have trained my model using YOLOv5 (by Ultralytics, implemented entirely in PyTorch). After training the model in Google Colab, I have the final weight file in the .pt format. Now I want to build some kind of web app that takes input from the webcam and feeds it to my model in real time. I searched a lot but didn't find a satisfying solution.
There is no open-source web app for this, but you can modify the Ultralytics detect.py to build a detector, store the detection results in a database, and use that to show them on a web page.
https://github.com/ultralytics/yolov5/blob/master/detect.py
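As a rough illustration of the "store detections in a database" idea (this is not part of detect.py itself; the table and column names are made up), a helper like this could be called from the detection loop:
import sqlite3
import time

def log_detection(db_path, label, confidence):
    # Append one detection row; the schema is purely illustrative
    con = sqlite3.connect(db_path)
    con.execute('CREATE TABLE IF NOT EXISTS detections (ts REAL, label TEXT, conf REAL)')
    con.execute('INSERT INTO detections VALUES (?, ?, ?)', (time.time(), label, confidence))
    con.commit()
    con.close()

# example: log_detection('potholes.db', 'pothole', 0.87)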
YOLOv5 provides a simple command for this:
python detect.py --weights 'your-path-to-weights-file' --conf any-confidence-percentage-you-like --source 0
The above command is for running on your local computer; --source 0 reads from the default webcam.
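If you would rather drive the model from your own Python code (for example from the backend of a web app), here is a minimal sketch using the PyTorch Hub loader documented by YOLOv5, assuming the weights are saved as best.pt:
import cv2
import torch

# Load the custom-trained YOLOv5 weights through PyTorch Hub
model = torch.hub.load('ultralytics/yolov5', 'custom', path='best.pt')

cap = cv2.VideoCapture(0)  # 0 = default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)    # hub models expect RGB input
    results = model(rgb)                             # run inference on one frame
    annotated = cv2.cvtColor(results.render()[0], cv2.COLOR_RGB2BGR)
    cv2.imshow('pothole detection', annotated)
    if cv2.waitKey(1) & 0xFF == ord('q'):            # press q to quit
        break
cap.release()
cv2.destroyAllWindows()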
Please help if you can! I have a lot of individual images stored in a Google Cloud Storage bucket, and I want to retrieve individual images from the bucket through Google Colab. I have already set up a connection via gcsfuse, but I still cannot access the images.
I have tried:
I = io.imread('/content/coco/Val/Val/val2017/000000000139.jpg')
I = file_io.FileIO('/content/coco/Val/Val/val2017/000000000139.jpg', 'r')
I = tf.io.read_file('/content/coco/Val/Val/val2017/000000000139.jpg', 'r')
None have worked and I am confused.
io.imread returns None.
file_io.FileIO returns a tensorflow.python.lib.io.file_io.FileIO object at 0x7fb7e075e588, which I don't know what to do with.
tf.io.read_file returns an empty tensor.
(I am actually using PyTorch, not TensorFlow, but after some Google searches it seemed TensorFlow might have the answer.)
It is unclear to me whether your issue is with copying files from Google Cloud Storage to Colab or with accessing a file in Colab with Python.
As stated in the Colab documentation, in order to use Google Cloud Storage you should be using the gsutil tool.
Anyway, I tried the gcsfuse tool myself by following these steps, and I was able to see the objects in my bucket by running the !ls command.
Steps:
from google.colab import auth
auth.authenticate_user()
Once you run this, a link will be generated; click on it and complete the sign-in.
!echo "deb http://packages.cloud.google.com/apt gcsfuse-bionic main" > /etc/apt/sources.list.d/gcsfuse.list
!curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
!apt -qq update
!apt -qq install gcsfuse
Use the above to install gcsfuse on Colab.
!mkdir folderOnColab
!gcsfuse folderOnBucket folderOnColab
Replace folderOnColab with the desired name of your folder, and folderOnBucket with the name of your bucket (without the gs:// prefix).
By following all these steps and running the !ls command, I was able to see the files from my bucket in the new folder in Colab.
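Once the mount is in place, the objects behave like ordinary local files, so PIL (and therefore PyTorch) can read them directly. A small sketch, where the folder and file names are assumptions based on the paths in the question:
from PIL import Image
from torchvision.transforms.functional import to_tensor

# Read one image through the gcsfuse mount point created above
img = Image.open('/content/folderOnColab/val2017/000000000139.jpg')  # hypothetical path
tensor = to_tensor(img)   # C x H x W float tensor, ready for a PyTorch model
print(tensor.shape)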
I want to train a deep learning (CNN) model on a dataset containing around 100,000 images. Since the dataset is huge (approx. 82 GB), I want to use Google Colab because it provides a GPU. How do I upload this full image folder into my notebook and use it?
I cannot use Google Drive or GitHub since my dataset is too large.
You can try zipping the folder and then unzipping it on Colab:
Step 1: Zip the whole folder.
Step 2: Upload the zip file.
Step 3: Run !unzip myfoldername.zip
Step 4: Run ls and check the folder names to see if the extraction was successful.
It would be better to compress or resize the images to reduce the file size, using OpenCV or a similar library.
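A small sketch of the resize idea with OpenCV; the directory names and target size are arbitrary placeholders:
import os
import cv2

src_dir = 'images_full'    # hypothetical input folder
dst_dir = 'images_small'   # hypothetical output folder
os.makedirs(dst_dir, exist_ok=True)

for name in os.listdir(src_dir):
    img = cv2.imread(os.path.join(src_dir, name))
    if img is None:        # skip anything that is not a readable image
        continue
    small = cv2.resize(img, (256, 256))   # shrink to a fixed size
    cv2.imwrite(os.path.join(dst_dir, name), small)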