How to delete all files in a Google Storage folder using Python

I have a Google Cloud Storage bucket path stored in a variable called GS_PATH.
An example of a Google Cloud Storage path is gs://test/one/.
Under this path I have a few more folders and files.
How can I delete everything under the gs://test/one/ path using Python code?
Thanks,
Arjun

There is an API to do this:
from google.cloud import storage

my_storage = storage.Client()
bucket = my_storage.get_bucket('test')
blobs = bucket.list_blobs(prefix='one/')
for blob in blobs:
    blob.delete()

See https://cloud.google.com/storage/docs/deleting-objects#storage-delete-object-python for reference.
from google.cloud import storage

storage_client = storage.Client()
bucket = storage_client.get_bucket('bucket_name')
blobs = bucket.list_blobs(prefix='folder_prefix/')
for blob in blobs:
    blob.delete()
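Since the question keeps the full path in a GS_PATH variable (e.g. gs://test/one/), it can help to split that string into a bucket name and a prefix before listing. A minimal sketch; the delete_gs_path helper is not part of the answers above, just an illustration:

from google.cloud import storage

def delete_gs_path(gs_path):
    """Delete every object under a gs://bucket/prefix/ style path."""
    # Split "gs://test/one/" into bucket "test" and prefix "one/".
    bucket_name, _, prefix = gs_path[len("gs://"):].partition("/")
    client = storage.Client()
    bucket = client.bucket(bucket_name)
    for blob in bucket.list_blobs(prefix=prefix):
        blob.delete()

delete_gs_path("gs://test/one/")  # path from the question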

Related

Download GCS object to your Vertex AI notebook in GCP

If you have a gs:// blob, how do you download that file from GCS to a Vertex AI notebook in GCP using the Python client library?
To download a GCS file to a Vertex AI notebook, refer to the following Python code:
from google.cloud import storage

def download_blob(bucket_name, source_blob_name, destination_file_name):
    """Downloads a blob from the bucket."""
    # bucket_name = "your-bucket-name"
    # The ID of your GCS object
    # source_blob_name = "storage-object-name"
    # destination_file_name = "/path/to/file"
    storage_client = storage.Client()
    bucket = storage_client.bucket(bucket_name)
    blob = bucket.blob(source_blob_name)
    blob.download_to_filename(destination_file_name)
    print(
        "Downloaded storage object {} from bucket {} to file {}.".format(
            source_blob_name, bucket_name, destination_file_name
        )
    )
Alternatively, if you want to download the GCS file into the Jupyter directory, you can use the command:
!gsutil cp gs://BUCKET_NAME/OBJECT_NAME JUPYTER_LOCATION
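If you would rather keep the object in memory instead of writing it to a local file (for example to load it straight into pandas), newer versions of the client library also expose download_as_bytes(). A minimal sketch, assuming the object is a CSV and using placeholder bucket/object names:

import io

import pandas as pd
from google.cloud import storage

client = storage.Client()
blob = client.bucket("your-bucket-name").blob("storage-object-name")

# Download the object's contents into memory instead of onto disk.
data = blob.download_as_bytes()
df = pd.read_csv(io.BytesIO(data))  # assumes the object is a CSV file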

How to create a CSV file using a service account in Google Colab (Python)

I want to create a CSV file from a pandas DataFrame in a Google Storage bucket using the Colab tool.
Right now we use our Gmail authentication to load the CSV file into the storage bucket using the command below:
df.to_csv("gs://Jobs/data.csv")
I have checked the Google links below:
https://cloud.google.com/docs/authentication/production#linux-or-macos
Currently, we use the code below to get credentials from the service account:
def getCredentialsFromServiceAccount(path: str) -> service_account.Credentials:
    return service_account.Credentials.from_service_account_info(
        path
    )
Kindly suggest
There are two main approaches you can use to upload a CSV to Cloud Storage within Colab; both store the CSV locally first and then upload it:
1. Use gsutil from within Colab.
2. Use the Cloud Storage Python client library.
The first approach is the easiest, but it authenticates as the user signed into Colab rather than as a service account.
from google.colab import auth
auth.authenticate_user()
bucket_name = 'my-bucket'
df.to_csv('data.csv', index = False)
!gsutil cp 'data.csv' 'gs://{bucket_name}/'
The next approach uses the client library and authenticates with a service account.
from google.cloud import storage
bucket_name = 'my-bucket'
# store csv locally
df.to_csv('data.csv', index = False)
# start storage client
client = storage.Client.from_service_account_json("path/to/key.json")
# get bucket
bucket = client.bucket(bucket_name)
# create blob (where you want to store csv within bucket)
blob = bucket.blob("jobs/data.csv")
# upload blob to bucket
blob.upload_from_filename("data.csv")
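If you would rather skip the local file and keep the df.to_csv("gs://...") pattern from the question, pandas can also write straight to Cloud Storage when the gcsfs package is installed, passing the service-account key through storage_options. A hedged sketch (assumes pandas >= 1.2 and gcsfs; the bucket and key paths are placeholders):

import pandas as pd

df = pd.DataFrame({"job": ["a", "b"], "count": [1, 2]})  # example data

# gcsfs accepts a path to a service-account key file as the token.
df.to_csv(
    "gs://my-bucket/jobs/data.csv",
    index=False,
    storage_options={"token": "path/to/key.json"},
)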

Gzip an image file through a Cloud Function on a storage trigger

I am trying to automate gzipping of images uploaded to a Cloud Storage bucket.
Every time I upload an image, I want a Cloud Function to run Python code to gzip it and move it to another bucket in Storage.
My code is not running; it's giving me a "File not found" error. Also, what's the right way to give the full location?
My code is:
from google.cloud import storage
import gzip
import shutil

client = storage.Client()

def hello_gcs(event, context):
    """Triggered by a change to a Cloud Storage bucket.
    Args:
        event (dict): Event payload.
        context (google.cloud.functions.Context): Metadata for
        the event.
    """
    with open("/" + event['name'], 'rb') as f_input:
        with gzip.open("'/tmp/'+event['name']+'.gz'", "wb") as f_output:
            shutil.copyfileobj(f_input, f_output)
    source_bucket = client.bucket(event['bucket'])
    source_blob = source_bucket.blob("/tmp/" + event['name'])
    destination_bucket = client.bucket('baalti2')
    blob_copy = source_bucket.copy_blob(
        source_blob, destination_bucket, event['name']
    )
    print(
        "Blob {} in bucket {} moved to blob {} in bucket {}.".format(
            source_blob.name,
            source_bucket.name,
            blob_copy.name,
            destination_bucket.name,
        )
    )
You are using Linux file system APIs (open, shutil.copyfileobj) to access Cloud Storage, which will not work. Instead: download the file from the source bucket to local storage (/tmp), gzip it there, then upload the gzip file to the destination bucket, using the Cloud Storage APIs to interact with Cloud Storage.
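A minimal sketch of that flow, reusing the 'baalti2' destination bucket from the question and the standard background-function signature (this is an illustration, not the original poster's final code):

import gzip
import os
import shutil

from google.cloud import storage

client = storage.Client()

def hello_gcs(event, context):
    """Triggered by a change to a Cloud Storage bucket."""
    name = event['name']
    local_path = os.path.join('/tmp', os.path.basename(name))
    gzip_path = local_path + '.gz'

    # Download the new object to the function's local /tmp disk.
    source_bucket = client.bucket(event['bucket'])
    source_bucket.blob(name).download_to_filename(local_path)

    # Gzip it locally.
    with open(local_path, 'rb') as f_in, gzip.open(gzip_path, 'wb') as f_out:
        shutil.copyfileobj(f_in, f_out)

    # Upload the gzipped copy to the destination bucket.
    destination_bucket = client.bucket('baalti2')  # bucket name from the question
    destination_bucket.blob(name + '.gz').upload_from_filename(gzip_path)

    # Clean up /tmp, since it counts against the function's memory.
    os.remove(local_path)
    os.remove(gzip_path)

    print("Gzipped {} from bucket {} into {} in bucket {}.".format(
        name, event['bucket'], name + '.gz', 'baalti2'))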

What type of API or references/commands do I need when uploading a file and also connecting to a MongoDB from a Google Function?

I'm now trying to make this simple Python script that will run when called through an API call or through a Google Function. I'm very, very new to GCP and Python as I'm more familiar with Azure and PowerShell, but I need to know what I need to use/call in order to upload a file to a bucket and also read the file information, plus then connect to a MongoDB database.
Here is the flow of what I need to do:
The API/function will be called via its URL, and attached to the call will be an actual file, such as a seismic file.
When the API/function is called, a Python script will run that will grab that file and upload it to a bucket.
Then I need to run commands against the uploaded file to retrieve items like "version", "company", "wellname", etc.
Then I want to upload a document, with all of these values, into a MongoDB database.
We're basically trying to replicate something we did in Azure with Functions and a CosmosDB instance. There, we created a function that would upload the file to Azure storage, then retrieve values from the file, which I believe is its metadata. Afterwards, we would upload a document to CosmosDB with these values. It's a way of recording values retrieved from the file itself. Any help would be appreciated, as this is part of a POC I'm trying to present on! Please ask any questions!
To answer your questions:
Here's code showing how to upload a file to a Cloud Storage bucket using Python:
from google.cloud import storage

def upload_blob(bucket_name, source_file_name, destination_blob_name):
    """Uploads a file to the bucket."""
    # bucket_name = "your-bucket-name"
    # source_file_name = "local/path/to/file"
    # destination_blob_name = "storage-object-name"
    storage_client = storage.Client()
    bucket = storage_client.bucket(bucket_name)
    blob = bucket.blob(destination_blob_name)
    blob.upload_from_filename(source_file_name)
    print(
        "File {} uploaded to {}.".format(
            source_file_name, destination_blob_name
        )
    )
Make sure that you have the Cloud Storage client library listed in your requirements.txt. Example:
google-cloud-storage>=1.33.0
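Since the question says the function will be called with the file attached, here is a hedged sketch of an HTTP-triggered Cloud Function that receives a multipart upload and stores it with the client library; the form field name 'file' and the bucket name are hypothetical:

from google.cloud import storage

storage_client = storage.Client()

def receive_and_store(request):
    # HTTP Cloud Function entry point; 'request' is a flask.Request.
    uploaded = request.files.get('file')  # hypothetical multipart field name
    if uploaded is None:
        return ('No file attached', 400)
    bucket = storage_client.bucket('my-seismic-bucket')  # hypothetical bucket
    blob = bucket.blob(uploaded.filename)
    blob.upload_from_file(uploaded.stream, content_type=uploaded.content_type)
    return ('Stored gs://{}/{}'.format(bucket.name, blob.name), 200)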
To retrieve and view the object metadata, here's the code (see the Cloud Storage documentation on viewing object metadata for reference):
from google.cloud import storage

def blob_metadata(bucket_name, blob_name):
    """Prints out a blob's metadata."""
    # bucket_name = 'your-bucket-name'
    # blob_name = 'your-object-name'
    storage_client = storage.Client()
    bucket = storage_client.bucket(bucket_name)
    blob = bucket.get_blob(blob_name)
    print("Blob: {}".format(blob.name))
    print("Bucket: {}".format(blob.bucket.name))
    print("Storage class: {}".format(blob.storage_class))
    print("ID: {}".format(blob.id))
    print("Size: {} bytes".format(blob.size))
    ...
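Values such as "version", "company" and "wellname" are not standard GCS metadata; if you extract them from the file yourself, you can attach them to the object as custom metadata. A minimal sketch (the keys and values are hypothetical):

from google.cloud import storage

def set_custom_metadata(bucket_name, blob_name, values):
    """Attach custom key/value metadata to an existing object."""
    client = storage.Client()
    blob = client.bucket(bucket_name).get_blob(blob_name)
    blob.metadata = values   # e.g. {"version": "1.0", "company": "Acme", "wellname": "W-1"}
    blob.patch()             # persist the metadata change on the object
    return blob.metadata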
Finally, to connect to MongoDB using Python, here's a sample project which could help you understand how it works.
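In the meantime, a minimal sketch using the pymongo driver; the connection-string environment variable, database, collection and field values are all hypothetical placeholders:

import os

from pymongo import MongoClient

# Connection string supplied via an environment variable (hypothetical name).
client = MongoClient(os.environ["MONGODB_URI"])
collection = client["seismic"]["files"]  # hypothetical database / collection

# Insert one document with the values extracted from the uploaded file.
document = {
    "version": "1.0",           # placeholder values
    "company": "Example Co",
    "wellname": "Well-001",
    "gcs_uri": "gs://your-bucket-name/storage-object-name",
}
result = collection.insert_one(document)
print("Inserted document with _id:", result.inserted_id)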

Transfer file from URL to Cloud Storage

I'm a Ruby dev trying my hand at Google Cloud Functions written in Python and have hit a wall with transferring a remote file from a given URL to Google Cloud Storage (GCS).
In an equivalent RoR app I download to the app's ephemeral storage and then upload to GCS.
I am hoping there's a way to simply 'download' the remote file to my GCS bucket via the Cloud Function.
Here's a simplified example of what I am doing with some comments, the real code fetches the URLs from a private API, but that works fine and isn't where the issue is.
from google.cloud import storage

project_id = 'my-project'
bucket_name = 'my-bucket'
destination_blob_name = 'upload.test'
storage_client = storage.Client.from_service_account_json('my_creds.json')

# This works fine
# source_file_name = 'localfile.txt'

# When using a remote URL I get 'IOError: [Errno 2] No such file or directory'
source_file_name = 'http://www.hospiceofmontezuma.org/wp-content/uploads/2017/10/confused-man.jpg'

def upload_blob(bucket_name, source_file_name, destination_blob_name):
    bucket = storage_client.get_bucket(bucket_name)
    blob = bucket.blob(destination_blob_name)
    blob.upload_from_filename(source_file_name)

upload_blob(bucket_name, source_file_name, destination_blob_name)
Thanks in advance.
It is not possible to upload a file to Google Cloud Storage directly from a URL. Since you are running the script from a local environment, the file contents that you want to upload need to be in that same environment. This means that the contents of the URL need to be stored either in memory or in a file.
Two examples showing how to do it, based on your code:
Option 1: Use the wget module, which will fetch the URL and download its contents into a local file (similar to the wget CLI command). Note that this means the file is stored locally first and then uploaded from that file. I added the os.remove line to remove the file once the upload is done.
from google.cloud import storage
import wget
import os

project_id = 'my-project'
bucket_name = 'my-bucket'
destination_blob_name = 'upload.test'
storage_client = storage.Client.from_service_account_json('my_creds.json')
source_file_name = 'http://www.hospiceofmontezuma.org/wp-content/uploads/2017/10/confused-man.jpg'

def upload_blob(bucket_name, source_file_name, destination_blob_name):
    # Download the URL contents to a local file, then upload that file.
    filename = wget.download(source_file_name)
    bucket = storage_client.get_bucket(bucket_name)
    blob = bucket.blob(destination_blob_name)
    blob.upload_from_filename(filename, content_type='image/jpeg')
    os.remove(filename)

upload_blob(bucket_name, source_file_name, destination_blob_name)
Option 2: Use the urllib module, which works similarly to the wget module, but instead of writing to a file it reads the contents into a variable. Note that I wrote this example in Python 3; there are some differences if you plan to run your script in Python 2.x.
from google.cloud import storage
import urllib.request

project_id = 'my-project'
bucket_name = 'my-bucket'
destination_blob_name = 'upload.test'
storage_client = storage.Client.from_service_account_json('my_creds.json')
source_file_name = 'http://www.hospiceofmontezuma.org/wp-content/uploads/2017/10/confused-man.jpg'

def upload_blob(bucket_name, source_file_name, destination_blob_name):
    # Read the URL contents into memory, then upload from the in-memory bytes.
    response = urllib.request.urlopen(source_file_name)
    bucket = storage_client.get_bucket(bucket_name)
    blob = bucket.blob(destination_blob_name)
    blob.upload_from_string(response.read(), content_type='image/jpeg')

upload_blob(bucket_name, source_file_name, destination_blob_name)
Directly transferring URLs into GCS is possible through the Storage Transfer Service, but setting up a transfer job for a single URL is a lot of overhead; that sort of solution is aimed at situations with millions of URLs that need to become GCS objects.
Instead, I recommend writing a job that pumps the incoming stream from reading the URL into a write stream to GCS, and running it somewhere in Google Cloud close to the bucket.
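A hedged sketch of that streaming approach, using the requests library and the client library's file-like blob.open() writer (available in newer versions of google-cloud-storage); the bucket and object names are placeholders:

import shutil

import requests
from google.cloud import storage

def stream_url_to_gcs(url, bucket_name, blob_name):
    """Stream a remote URL into a GCS object without buffering the whole file."""
    client = storage.Client()
    blob = client.bucket(bucket_name).blob(blob_name)
    with requests.get(url, stream=True) as response:
        response.raise_for_status()
        # blob.open("wb") returns a writable file-like object backed by a resumable upload.
        with blob.open("wb") as gcs_file:
            shutil.copyfileobj(response.raw, gcs_file)

stream_url_to_gcs(
    "http://www.hospiceofmontezuma.org/wp-content/uploads/2017/10/confused-man.jpg",
    "my-bucket",
    "upload.test",
)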
