Background
I've created a Slack bot that listens in a channel for file uploads, downloads the content, re-uploads it to our Google Drive account, and deletes it from Slack. This all works perfectly using the slack-client API and the Google Drive API in Python.
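For context, here is a minimal sketch of that flow, assuming the legacy slackclient package and a google-api-python-client Drive service; SLACK_BOT_TOKEN, DRIVE_FOLDER_ID, and creds are illustrative placeholders, not names from the actual bot:

```python
import requests
from slackclient import SlackClient
from googleapiclient.discovery import build
from googleapiclient.http import MediaInMemoryUpload

SLACK_BOT_TOKEN = "xoxb-..."    # placeholder bot token
DRIVE_FOLDER_ID = "folder-id"   # placeholder destination folder
slack_client = SlackClient(SLACK_BOT_TOKEN)
drive = build("drive", "v3", credentials=creds)  # creds: assumed Drive OAuth credentials

def mirror_file_to_drive(file_info):
    # Download the uploaded file from Slack; url_private requires the bot token.
    resp = requests.get(
        file_info["url_private"],
        headers={"Authorization": "Bearer " + SLACK_BOT_TOKEN},
    )
    # Re-upload the bytes to Google Drive.
    media = MediaInMemoryUpload(resp.content, mimetype=file_info["mimetype"])
    uploaded = drive.files().create(
        body={"name": file_info["name"], "parents": [DRIVE_FOLDER_ID]},
        media_body=media,
        fields="id, webViewLink",
    ).execute()
    # Delete the original from Slack and post the Drive link instead.
    slack_client.api_call("files.delete", file=file_info["id"])
    slack_client.api_call(
        "chat.postMessage",
        channel=file_info["channels"][0],
        text=uploaded["webViewLink"],
    )
```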
Problem
What I would like to recreate is the view that the Google Drive Slack integration creates when you import a file from Google Drive, instead of just having a link (which is all my bot is currently capable of).
I'm currently using slack_client.api_call("chat.postMessage", ..., unfurl_media=True, unfurl_links=True); however, that does not solve the problem (the file still just appears as a link, instead of an attachment like the Google Drive integration produces).
Does anyone have any recommendations on how to achieve the same look as the Google Drive integration? The idea is that the thumbnails and previews of attachments should not go away, but everything should be hosted on Google Drive as opposed to Slack's servers, since we share a ton of files.
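One direction worth trying (a sketch under assumptions, not a confirmed fix): skip unfurling entirely and build the card yourself with Slack's legacy message attachments, pulling the name, link, and thumbnail from the Drive file metadata. Here file_id and channel_id are hypothetical variables, and thumbnailLink is a short-lived URL, so this is only illustrative:

```python
import json

# Fetch display metadata for the file that was re-uploaded to Drive.
meta = drive.files().get(
    fileId=file_id,  # hypothetical: Drive id of the re-uploaded file
    fields="name, webViewLink, thumbnailLink, iconLink",
).execute()

# Post a legacy attachment that approximates the Drive-integration card.
slack_client.api_call(
    "chat.postMessage",
    channel=channel_id,  # hypothetical: the originating Slack channel
    attachments=json.dumps([{
        "fallback": meta["name"],
        "title": meta["name"],
        "title_link": meta["webViewLink"],
        "footer": "Google Drive",
        "footer_icon": meta.get("iconLink"),
        "thumb_url": meta.get("thumbnailLink"),
    }]),
)
```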
Related
I want to automatically sync new files that are added to Google Drive to Google Cloud Storage.
I have seen various people asking this on the web and most of them suggest something along the lines of:
Develop an app to poll for new files in Drive
Retrieve new files and upload them to GCS
If someone has already written an open-source library/script for this, then I would like to reuse it instead of reinventing the wheel (a rough sketch of the polling approach is below).
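For reference, a minimal polling sketch of the approach above, assuming a service account that can see both the watched Drive folder and a GCS bucket; FOLDER_ID, BUCKET_NAME, and the 60-second interval are placeholders:

```python
import io
import time
from google.oauth2 import service_account
from googleapiclient.discovery import build
from googleapiclient.http import MediaIoBaseDownload
from google.cloud import storage

creds = service_account.Credentials.from_service_account_file(
    "service_account.json",
    scopes=["https://www.googleapis.com/auth/drive.readonly"],
)
drive = build("drive", "v3", credentials=creds)
bucket = storage.Client().bucket("BUCKET_NAME")  # placeholder bucket

seen_ids = set()
while True:
    # List the files currently in the watched folder.
    result = drive.files().list(
        q="'FOLDER_ID' in parents and trashed = false",  # placeholder folder id
        fields="files(id, name)",
    ).execute()
    for f in result.get("files", []):
        if f["id"] in seen_ids:
            continue
        # Stream the new file out of Drive...
        buf = io.BytesIO()
        downloader = MediaIoBaseDownload(buf, drive.files().get_media(fileId=f["id"]))
        done = False
        while not done:
            _, done = downloader.next_chunk()
        # ...and into GCS under the same name.
        buf.seek(0)
        bucket.blob(f["name"]).upload_from_file(buf)
        seen_ids.add(f["id"])
    time.sleep(60)  # arbitrary polling interval
```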
Edit:
I have now written a watcher webhook API in Python and subscribed to the folder to get notifications when a new file is added to Google Drive.
Now the issue is that when the webhook is called by Google, no information is provided about the new files/folders that were added.
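That is expected: Drive push notifications carry only headers (such as X-Goog-Resource-State), not file details. The usual pattern is to store a page token when creating the watch channel and, on each webhook call, ask the Changes API what actually changed. A minimal sketch, assuming Flask as the web framework and creds as existing Drive credentials:

```python
from flask import Flask
from googleapiclient.discovery import build

app = Flask(__name__)
drive = build("drive", "v3", credentials=creds)  # creds: assumed Drive credentials

# Fetched once when the watch channel is created, then persisted somewhere durable.
page_token = drive.changes().getStartPageToken().execute()["startPageToken"]

@app.route("/drive-webhook", methods=["POST"])
def drive_webhook():
    global page_token
    # The notification body is empty; changes.list tells us what changed.
    response = drive.changes().list(
        pageToken=page_token,
        fields="newStartPageToken, changes(fileId, file(name, parents))",
    ).execute()
    for change in response.get("changes", []):
        print("Changed file:", change.get("fileId"), change.get("file"))
    # Advance the token so the next notification only reports newer changes.
    page_token = response.get("newStartPageToken", page_token)
    return "", 200
```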
I understand you are looking for a method to sync content on different services (NFS, disks, etc.) to GCS in order to have a backup there and make the data accessible to applications which can only access Cloud Storage buckets.
We don't have a Google-owned solution for this; however, we have several partners that offer proprietary solutions which might work for your use case.
I have a small issue:
When trying to upload a video using the YouTube Data API with Python, the video is automatically turned private and becomes locked when the upload finishes.
Is there any solution for this problem?
At the very top of the documentation page for videos.insert, it states:
If your application is in testing mode, videos are private.
Go to the Google Cloud Console for your project and open the OAuth consent screen. Set it to public, and your videos will now upload as public or private as requested.
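For reference, a minimal upload sketch; creds is assumed to be OAuth credentials carrying the youtube.upload scope. Even with privacyStatus set to public, an app whose consent screen is still in testing mode will have its uploads locked to private, as described above:

```python
from googleapiclient.discovery import build
from googleapiclient.http import MediaFileUpload

youtube = build("youtube", "v3", credentials=creds)  # creds: assumed OAuth credentials

request = youtube.videos().insert(
    part="snippet,status",
    body={
        "snippet": {"title": "My video", "description": "Uploaded via the API"},
        # Honored only once the app's consent screen is out of testing mode.
        "status": {"privacyStatus": "public"},
    },
    media_body=MediaFileUpload("video.mp4", resumable=True),
)
response = request.execute()
print("Uploaded video id:", response["id"])
```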
I created a website that I plan to connect to Google Drive. When students or teachers upload files, the files will go to their respective Google Drive accounts; meanwhile, the only thing that goes into the database is the file link.
I've tried using Python Social Auth and implemented it successfully, but I'm still confused about how to use the access token I got to access the Google Drive API.
Please help me.
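A minimal sketch of one way to wire that token into the Drive client, assuming a Django project using social-auth-app-django with the google-oauth2 backend and a Drive scope configured; without that scope the API calls will be rejected:

```python
from google.oauth2.credentials import Credentials
from googleapiclient.discovery import build

def drive_service_for(user):
    # Python Social Auth keeps the token on the UserSocialAuth record.
    social = user.social_auth.get(provider="google-oauth2")
    creds = Credentials(token=social.extra_data["access_token"])
    return build("drive", "v3", credentials=creds)

# Usage inside a Django view: uploads land in the logged-in user's own Drive,
# and only the returned webViewLink needs to be stored in the database.
#   service = drive_service_for(request.user)
#   result = service.files().create(body=..., media_body=..., fields="webViewLink").execute()
```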
For the Google Drive Python API, all the tutorials I have seen require users to create a project in their Google dashboard before obtaining a client ID and a client secret JSON file. I've been researching both the default Google Drive API and the pydrive module.
Is there a way for users to simply log in to their Google Account with a username and password, without having to create a project? So once they log in to their Google Account, they are free to access all the files in their Google Drive?
It's not possible to use the Drive API without creating a GCP project for the application. Otherwise, Google has no idea what application is requesting access, or what scope of account access it should have.
Using simply a username and password to log in is not possible. You need to create a project and use OAuth.
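For completeness, a minimal sketch of that standard flow: the developer creates the GCP project once and ships its client secret JSON, and each user then authorizes through Google's normal consent screen rather than typing a password into the app:

```python
from google_auth_oauthlib.flow import InstalledAppFlow
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/drive"]

# client_secret.json comes from the (one-time) GCP project setup.
flow = InstalledAppFlow.from_client_secrets_file("client_secret.json", SCOPES)
creds = flow.run_local_server(port=0)  # opens a browser for the Google login

drive = build("drive", "v3", credentials=creds)
files = drive.files().list(pageSize=10, fields="files(id, name)").execute()
print(files.get("files", []))
```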
It might be possible using some PySimpleGUI hackery, or by simply modifying the code of a Python-based browser, but in most cases it is not practical, except if you need to automate something (like renaming files) that would take an hour somewhere you do not have access to GCP.
I am currently trying to use Google's Cloud Vision API for my project. The problem is that the Cloud Vision API's document text detection accepts only Google Cloud Storage URIs as input and output destinations, but I have all my projects and data on Amazon S3, which can't be used directly with this API.
Points to be noted:
All data must be kept in S3 only.
I can't change my cloud storage to GCS now.
I can't download files from S3 and upload them to GCS manually. The number of files incoming per day is more than 1,000 and fewer than 100,000.
Even if I could automate downloading the PDFs from S3 and uploading them to GCS, this would become a bottleneck for the entire project, since I would have to deal with concurrency issues and memory management.
Is there any workaround to make this API work with S3 URIs? I am in need of your help.
Thank you.
Currently, the Vision API doesn't work with URLs apart from Google Cloud Storage ones. There's an existing feature request about using the API with specific URLs for image search, where you could ask for this feature to be considered for PDF/TIFF documents too, or you could raise a new feature request for this scenario.
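If the transfer does end up being automated despite the constraints above, a streaming copy at least sidesteps the memory concern: boto3 hands back a streaming body and the GCS client uploads from the same handle, so the PDF is never fully held in memory by this process. Bucket names here are placeholders:

```python
import boto3
from google.cloud import storage

s3 = boto3.client("s3")
gcs_bucket = storage.Client().bucket("my-gcs-bucket")  # placeholder

def copy_s3_object_to_gcs(key):
    # Body is a streaming file-like object; passing size lets GCS upload without seeking.
    obj = s3.get_object(Bucket="my-s3-bucket", Key=key)  # placeholder bucket
    gcs_bucket.blob(key).upload_from_file(obj["Body"], size=obj["ContentLength"])
    return "gs://my-gcs-bucket/" + key  # a URI the Vision API can consume
```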