Saving an AI model to Google Drive for app development - Python

I have been building a Python model for emotion detection via an online course (on Google Colab). After building the model, they used the following code to "save the model for app development" to Google Drive as a zip file.
from google.colab import drive
drive.mount('/content/gdrive')

save_path = F"/content/gdrive/My Drive/transfer_cnn.zip"
tf.keras.models.save_model(cnn_model, 'transfer_cnn')

import os
import zipfile

def zipdir(path, ziph):
    for root, dirs, files in os.walk(path):
        for file in files:
            ziph.write(os.path.join(root, file))

zipf = zipfile.ZipFile(save_path, 'w', zipfile.ZIP_DEFLATED)
zipdir('transfer_cnn', zipf)
zipf.close()
After running the code, the model was saved to my Google Drive as a zip file, but I have no idea how to load and run it from the Drive. Please help me, I am on a deadline!
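To use the model again in a later Colab session, a minimal sketch for loading it back might look like this (assuming the zip was written to My Drive as transfer_cnn.zip and contains the SavedModel folder transfer_cnn, as in the code above):
import zipfile

import tensorflow as tf
from google.colab import drive

drive.mount('/content/gdrive')

# Unzip the archive from Drive into the Colab VM; this recreates /content/transfer_cnn
zip_path = '/content/gdrive/My Drive/transfer_cnn.zip'
with zipfile.ZipFile(zip_path, 'r') as zf:
    zf.extractall('/content')

# Load the SavedModel back into memory
cnn_model = tf.keras.models.load_model('/content/transfer_cnn')
cnn_model.summary()  # quick check that the model loaded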

Related

How to upload a folder into Azure Blob Storage while preserving the structure in Python?

I've written the following code to upload a file into blob storage using Python:
blob_service_client = ContainerClient(account_url="https://{}.blob.core.windows.net".format(ACCOUNT_NAME),
                                      credential=ACCOUNT_KEY,
                                      container_name=CONTAINER_NAME)
blob_service_client.upload_blob("my_file.txt", open("my_file.txt", "rb"))
This works fine. I wonder how I can upload an entire folder, with all files and subfolders in it, while keeping the structure of my local folder intact?
After reproducing this on my end, I was able to achieve your requirement using the os module. Below is the complete code that worked for me.
import os
from azure.storage.blob import ContainerClient

dir_path = r'<YOUR_LOCAL_FOLDER>'

for path, subdirs, files in os.walk(dir_path):
    for name in files:
        fullPath = os.path.join(path, name)
        print("FullPath : " + fullPath)
        # Strip the root folder from the path to get the blob name
        file = fullPath.replace(dir_path, '')
        fileName = file[1:]
        print("File Name :" + fileName)
        # Create a blob client using the local file name as the name for the blob
        blob_service_client = ContainerClient(account_url=ACCOUNT_URL,
                                              credential=ACCOUNT_KEY,
                                              container_name=CONTAINER_NAME)
        print("\nUploading to Azure Storage as blob:\n\t" + fileName)
        blob_service_client.upload_blob(fileName, open(fullPath, "rb"))
Below is the folder structure on my local machine.
├───g.txt
├───h.txt
├───Folder1
│   ├───z.txt
│   └───y.txt
└───Folder2
    ├───a.txt
    ├───b.txt
    └───SubFolder1
        ├───c.txt
        └───d.txt
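A possible refinement of the snippet above (a sketch only, reusing the same ACCOUNT_URL, ACCOUNT_KEY, and CONTAINER_NAME placeholders): create the ContainerClient once outside the loop, and build the blob name with forward slashes so the folder structure shows up as virtual directories in the container even when the script runs on Windows.
import os
from azure.storage.blob import ContainerClient

# Single client reused for every upload
container_client = ContainerClient(account_url=ACCOUNT_URL,
                                   credential=ACCOUNT_KEY,
                                   container_name=CONTAINER_NAME)

dir_path = r'<YOUR_LOCAL_FOLDER>'
for path, subdirs, files in os.walk(dir_path):
    for name in files:
        fullPath = os.path.join(path, name)
        # Forward slashes make the relative path appear as virtual directories in the container
        blobName = os.path.relpath(fullPath, dir_path).replace(os.sep, '/')
        with open(fullPath, 'rb') as data:
            container_client.upload_blob(blobName, data, overwrite=True)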

Unzip a password-protected zip file automatically from Azure Storage?

I'm just wondering, is there a way to extract a password-protected zip file from Azure Storage?
I tried using a Python Azure Function to no avail, but had a problem reading the location of the file.
Would the file have to be stored in a shared location temporarily in order to achieve this?
Just looking for a bit of direction here; am I missing a step, maybe?
Regards,
James
Azure Blob Storage provides storage functionality only; there is no runtime environment in which to perform the unzip operation. So basically, we should download the .zip file to the Azure Function, unzip it, and upload the files from the .zip one by one.
For a quick test, I wrote an HTTP-trigger Azure Function demo that unzips a password-protected zip file; it works for me locally:
import azure.functions as func
import uuid
import os
import shutil
from azure.storage.blob import ContainerClient
from zipfile import ZipFile

storageAccountConnstr = '<storage account conn str>'
container = '<container name>'
# define local temp paths; on Azure, paths under /home are recommended
tempPathRoot = 'd:/temp/'
unZipTempPathRoot = 'd:/unZipTemp/'

def main(req: func.HttpRequest) -> func.HttpResponse:
    reqBody = req.get_json()
    fileName = reqBody['fileName']
    zipPass = reqBody['password']
    container_client = ContainerClient.from_connection_string(storageAccountConnstr, container)

    # download the zip file
    zipFilePath = tempPathRoot + fileName
    with open(zipFilePath, "wb") as my_blob:
        download_stream = container_client.get_blob_client(fileName).download_blob()
        my_blob.write(download_stream.readall())

    # unzip to a temp folder
    unZipTempPath = unZipTempPathRoot + str(uuid.uuid4())
    with ZipFile(zipFilePath) as zf:
        zf.extractall(path=unZipTempPath, pwd=bytes(zipPass, 'utf8'))

    # upload all files from the temp folder
    for root, dirs, files in os.walk(unZipTempPath):
        for file in files:
            filePath = os.path.join(root, file)
            destBlobClient = container_client.get_blob_client(fileName + filePath.replace(unZipTempPath, ''))
            with open(filePath, "rb") as data:
                destBlobClient.upload_blob(data, overwrite=True)

    # remove all temp files
    shutil.rmtree(unZipTempPath)
    os.remove(zipFilePath)

    return func.HttpResponse("done")
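For reference, a call against the local Functions host might look like the following sketch (the URL and values are placeholders; the field names match the req.get_json() lookups above):
import requests

# Placeholders: adjust the function name, blob name, and password to your setup.
resp = requests.post('http://localhost:7071/api/<function_name>',
                     json={'fileName': 'example.zip', 'password': '<zip password>'})
print(resp.text)  # prints "done" on success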
Using a blob trigger would be better for this, as an HTTP trigger will cause time-out errors if your zip file is huge.
Anyway, this is only a demo that shows you how to do it.
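If you switch to a blob trigger, the handler skeleton might look roughly like this (a sketch only; the trigger path and storage connection are configured in function.json, and the password value is a placeholder):
import io
import azure.functions as func
from zipfile import ZipFile

def main(myblob: func.InputStream):
    # The blob's bytes arrive through the binding; no explicit download call is needed.
    with ZipFile(io.BytesIO(myblob.read())) as zf:
        zf.extractall(path='/tmp/unzipped', pwd=b'<zip password>')
    # ...then upload the extracted files back to the container, as in the HTTP-trigger demo above.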

Google Drive API to upload all PDFs to Google Drive

I am using PyDrive to upload PDF files to my Google Drive folder. I want to send all *.pdf files in a local folder at once with this code, but I'm not sure where to go from here. Should I use glob? If so, I would like to see an example, please.
Working code that sends one file to the designated Google Drive folder:
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive

gauth = GoogleAuth()
gauth.LocalWebserverAuth()
drive = GoogleDrive(gauth)

folder_id = 'google_drive_id_goes_here'
f = drive.CreateFile({'title': 'testing_pdf',
                      'mimeType': 'application/pdf',
                      'parents': [{'kind': 'drive#fileLink', 'id': folder_id}]})
f.SetContentFile('/Users/Documents/python/google_drive/testing.pdf')
f.Upload()
You can't upload multiple files at once. Creating a file with the API is a single operation, and PyDrive has no mechanism for uploading more than one at a time.
You're going to have to put this in a loop and upload each file as you go, for example with glob (a sketch follows the snippet below).
import os

directory = 'the/directory/you/want/to/use'
for filename in os.listdir(directory):
    if filename.endswith(".txt"):
        f = open(os.path.join(directory, filename))
        lines = f.read()
        print(lines[10])
        continue
    else:
        continue
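For example, combining glob with the PyDrive calls from your working snippet (the folder ID and local path are placeholders taken from the question):
import glob
import os

from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive

gauth = GoogleAuth()
gauth.LocalWebserverAuth()
drive = GoogleDrive(gauth)

folder_id = 'google_drive_id_goes_here'
# Upload every PDF in the local folder to the same Drive folder
for pdf_path in glob.glob('/Users/Documents/python/google_drive/*.pdf'):
    f = drive.CreateFile({'title': os.path.basename(pdf_path),
                          'mimeType': 'application/pdf',
                          'parents': [{'kind': 'drive#fileLink', 'id': folder_id}]})
    f.SetContentFile(pdf_path)
    f.Upload()
    print('Uploaded', pdf_path)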

How to upload a dataset in Google Colaboratory?

I need to upload a dataset of images to Google Colaboratory. It has subfolders inside it which contain the images. Whatever I found on the net was for a single file.
from google.colab import files
uploaded = files.upload()
Is there any way to do it?
For uploading data to Colab, you have four methods.
Method 1
You can directly upload a file or directory through the Colab UI.
The data is saved on the Colab local machine. In my experience, there are three things to note:
1) The upload speed is good.
2) It preserves the directory structure, but a zip file will not be unzipped automatically. You need to execute this code in a Colab cell:
!mkdir {dir_name}
!unzip {zip_file} -d {dir_name}
3) Most importantly, when the Colab runtime crashes or is reset, the data will be deleted.
Method 2
Execute this code in a Colab cell:
from google.colab import files
uploaded = files.upload()
In my experience, when you run the cell, an upload button appears; while the cell execution indicator is still running, you choose a file. 1) After execution, the file name will appear in the result panel. 2) Refresh the Colab file browser and you will see the file. 3) Or execute !ls and you should see your file. If not, the file was not uploaded successfully.
Method 3
If your data is from Kaggle, you can use the Kaggle API to download the data straight to the Colab local directory (a short sketch follows after this list of methods).
Method 4
Upload your data to Google Drive; you can use 1) the Google Drive web interface or 2) the Drive API (https://developers.google.com/drive/api/v3/quickstart/python). To access Drive data, use the following code in Colab:
from google.colab import drive
drive.mount('/content/drive')
I would recommend uploading data to Google Drive because it is permanent.
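For Method 3, the usual Kaggle API pattern in a Colab cell looks roughly like this (the dataset slug is a placeholder, and it assumes you have uploaded your kaggle.json API token to the session first, e.g. with files.upload()):
!pip install -q kaggle
# Put the kaggle.json token where the CLI expects it
!mkdir -p ~/.kaggle
!cp kaggle.json ~/.kaggle/
!chmod 600 ~/.kaggle/kaggle.json
# <owner>/<dataset-name> is a placeholder for the dataset you want; --unzip extracts it into the data folder
!kaggle datasets download -d <owner>/<dataset-name> -p data --unzip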
You need to copy your dataset into Google Drive. Then obtain the DATA_FOLDER_ID.
The best way to do so is to open the folder in your Google Drive and copy the last part of the URL. For example, the folder ID for the link
https://drive.google.com/drive/folders/xxxxxxxxxxxxxxxxxxxxxxxx is xxxxxxxxxxxxxxxxxxxxxxxx.
Then you can create local folders and download each file recursively.
DATA_FOLDER_ID = 'xxxxxxxxxxxxxxxxxxxxxxxx'
ROOT_PATH = '~/you_path'

!pip install -U -q PyDrive

import os
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from google.colab import auth
from oauth2client.client import GoogleCredentials

# 1. Authenticate and create the PyDrive client.
auth.authenticate_user()
gauth = GoogleAuth()
gauth.credentials = GoogleCredentials.get_application_default()
drive = GoogleDrive(gauth)

# choose a local (colab) directory to store the data.
local_root_path = os.path.expanduser(ROOT_PATH)
try:
    os.makedirs(local_root_path)
except: pass

def ListFolder(google_drive_id, destination):
    file_list = drive.ListFile({'q': "'%s' in parents and trashed=false" % google_drive_id}).GetList()
    counter = 0
    for f in file_list:
        # If it is a directory, create the directory and download the files inside it
        if f['mimeType'] == 'application/vnd.google-apps.folder':
            folder_path = os.path.join(destination, f['title'])
            os.makedirs(folder_path)
            print('creating directory {}'.format(folder_path))
            ListFolder(f['id'], folder_path)
        else:
            fname = os.path.join(destination, f['title'])
            f_ = drive.CreateFile({'id': f['id']})
            f_.GetContentFile(fname)
            counter += 1
    print('{} files were downloaded to {}'.format(counter, destination))

ListFolder(DATA_FOLDER_ID, local_root_path)

Where is the dumped file in Google Colab?

When I wrote this code in google colab:
import pickle
x=10;
output = open('data.pkl', 'wb')
pickle.dump(x,output)
x is saved, and in another window in Google Colab I can access this file and read it, but I don't know where the file is. Does anybody know where it is?
It’s in the current directory. You can also download it back to your local machine with
from google.colab import files
files.download('data.pkl')
You can upload it to your Google Drive:
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from google.colab import auth
from oauth2client.client import GoogleCredentials
# 1. Authenticate and create the PyDrive client.
auth.authenticate_user()
gauth = GoogleAuth()
gauth.credentials = GoogleCredentials.get_application_default()
drive = GoogleDrive(gauth)
# get the folder id where you want to save your file
file = drive.CreateFile({'parents':[{u'id': folder_id}]})
file.SetContentFile('data.pkl')
file.Upload()
This code basically fetches the data.pkl from the cloud VM and uploads it permanently to your Google Drive under a specific folder.
If you choose not to specify a folder, the file will be uploaded under the root of your Google Drive.
You can save and read the dumped file anywhere in your Google Drive folder.
import pickle
from google.colab import drive

drive.mount('/content/drive', force_remount=True)

# 'data' is whatever picklable object you want to save
pick_insert = open('drive/My Drive/data.pickle', 'wb')
pickle.dump(data, pick_insert)
pick_insert.close()

pick_read = open('drive/My Drive/data.pickle', 'rb')
data = pickle.load(pick_read)
pick_read.close()
The saved dump can then be loaded from the same directory, as below:
from pickle import dump, load

dump(stories, open('review_dataset.pkl', 'wb'))
stories = load(open('review_dataset.pkl', 'rb'))
In my case, I was trying to access the pickle files in a sub-directory (data) under the current directory.
The data directory has 2 pickle files generated from the pre-processing step.
So I tried @korakot's suggestion in the comments, and it worked fine. This is what I did:
# connect your colab with the drive
from google.colab import drive
drive.mount('/content/drive')

# list the contents of the current directory
import os
os.listdir('.')

# move the sub-directory (data) into Google Drive
!mv /content/data/ /content/drive/MyDrive/
You can obtain the pkl file using the following statements:
from google.colab import files
files.download("model.pkl")
Not only .pkl files: you can retrieve other formats of data as well by changing the file name and extension.
You can save your pkl file directly to Drive by inputting this instead:
import pickle
from google.colab import drive
drive.mount('/content/drive')
x=10;
output = open('/content/drive/MyDrive/Colab Notebooks/data.pkl', 'wb')
pickle.dump(x,output)
and open it using this code:
import pickle
from google.colab import drive
drive.mount('/content/drive')
x = pickle.load(open('/content/drive/MyDrive/Colab Notebooks/data.pkl', 'rb'))
it worked for me :)
