I have a small issue:
When trying to upload a video using the YouTube Data API with Python, the video is automatically set to private and becomes locked when the upload finishes.
Is there any solution for this problem?
At the very top of the documentation page for videos.insert it states:
If your application is in testing mode, uploaded videos are private.
Go to the Google Cloud console for your project and, under the OAuth consent screen, set the app's publishing status to public (in production). Your videos will then upload as public or private as requested.
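For reference, here is a minimal sketch of the upload call itself with google-api-python-client; the OAuth credentials setup is omitted, and the requested privacyStatus is only honored once the app is out of testing mode:

```python
from googleapiclient.discovery import build
from googleapiclient.http import MediaFileUpload

# Assumes `creds` holds authorized OAuth user credentials obtained via the
# usual InstalledAppFlow dance (omitted for brevity); file name is a placeholder.
youtube = build("youtube", "v3", credentials=creds)

request = youtube.videos().insert(
    part="snippet,status",
    body={
        "snippet": {"title": "My video", "description": "Uploaded via the API"},
        # Only honored once the app is out of testing mode; unverified or
        # testing apps force uploads to private.
        "status": {"privacyStatus": "public"},
    },
    media_body=MediaFileUpload("video.mp4", resumable=True),
)

# Resumable upload loop: next_chunk() returns (status, None) until done.
response = None
while response is None:
    status, response = request.next_chunk()
print("Uploaded video id:", response["id"])
```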
I want to automatically sync new files that are added to google drive to google cloud storage.
I have seen various people asking this on the web and most of them suggest something along the lines of:
Develop an app to poll for new files in Drive
Retrieve new files and upload them to GCS
If someone has already written an open-source library/script for this then I would like to reuse it instead of re-inventing the wheel.
Edit:
I have now written a watcher webhook API in Python and subscribed to the folder to get a notification when a new file is added to Google Drive.
Now the issue is that when the webhook is called by Google, no information is provided about the new files/folders that were added.
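For reference, the handler is roughly shaped like this (a minimal sketch: `drive` is an authorized v3 client, and load_token/save_token are hypothetical helpers that persist the page token from changes.getStartPageToken):

```python
# The push notification body carries no file details, so the handler has to
# re-query the Drive changes feed using a stored page token.
def handle_drive_notification(drive):
    page_token = load_token()  # hypothetical persistence helper
    while page_token:
        resp = drive.changes().list(
            pageToken=page_token,
            fields="nextPageToken,newStartPageToken,"
                   "changes(fileId,file(name,mimeType,parents))",
        ).execute()
        for change in resp.get("changes", []):
            # Each change identifies a touched file; download it from Drive
            # and upload it to GCS here.
            print("changed:", change.get("fileId"))
        if "newStartPageToken" in resp:
            save_token(resp["newStartPageToken"])  # hypothetical helper
            break
        page_token = resp.get("nextPageToken")
```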
I understand you are looking for a method to sync content on different services (NFS, disks, etc.) to GCS in order to have a backup there and make the data accessible to applications which can only access Cloud Storage buckets.
We don't have a Google-owned solution for this; however, we have several partners (link) that offer proprietary solutions which might work for your use case.
I'm a Python developer, inexperienced with Microsoft Azure services.
For a client I have to allow downloading of videos using Azure Media Services (video streaming). I did find information on the subject in the documentation (https://learn.microsoft.com/en-us/azure/media-services/previous/media-services-deliver-asset-download), but I want to get there using Python (so either the Azure REST API or the Python SDK).
I'm starting to believe it's impossible.
I need your help, please.
Everything you need to do should be completely possible with the Python SDK.
I do not recommend using the REST API directly! It does not have any built-in retry policies, which the Azure Resource Management API requires. You can get into issues with that in production unless you know what you are doing and roll your own retry logic.
Use the official Python SDK client for Media Services only.
Also, the link above for the REST API points to the legacy v2 API; do not use that now. Use the latest v3 SDK client only:
pip install azure-mgmt-media
We have a limited number of Python samples up here that show how to use the client SDK for Python - https://github.com/Azure-Samples/media-services-v3-python
None of us on the team are Python experts, and we don't seem to get a lot of contributions to that repo - so it is not anywhere near as comprehensive as our .NET samples here - https://github.com/Azure-Samples/media-services-v3-dotnet
But keep in mind that all the Azure SDKs are just auto-generated off the REST API Swagger (OpenAPI) specs, so they all use the exact same entities and the same JSON structure on the wire. If you know what the REST API is doing and what the entities are, you can easily port things between languages. It helps to know Python first, though!
You mentioned you want to download content; that will require the Storage SDK for Python. Media Services just uses Azure Storage accounts, meaning you can access the containers using SAS URLs to upload and download files. Look at the Storage samples for Python to see what to do there: https://pypi.org/project/azure-storage-blob/
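For instance, here is a minimal sketch of downloading an asset's files, assuming you already hold a SAS URL for the asset's storage container (account and container names are placeholders):

```python
from azure.storage.blob import ContainerClient

# Placeholder SAS URL for the asset's container; Media Services v3 can hand
# one out via the list-container-SAS operation on the asset.
sas_url = "https://<account>.blob.core.windows.net/<container>?<sas-token>"
container = ContainerClient.from_container_url(sas_url)

for blob in container.list_blobs():
    # Stream each blob in the asset's container to a local file.
    with open(blob.name, "wb") as handle:
        handle.write(container.download_blob(blob.name).readall())
```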
The uploaded videos are stored as an Asset file if the files are uploaded using the Azure Media Services SDK, which makes it easier to stream video to different devices.
To stream or download an asset, you first need to "publish" it by creating a locator. Locators provide access to files contained in the asset.
Media Services supports two types of locators:
OnDemandOrigin locators, used to stream media (for example, MPEG DASH, HLS, or Smooth Streaming)
Shared Access Signature (SAS) locators, used to download media files.
Once you create the locators, you can build the URLs that are used to stream or download your files.
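As a rough sketch of that last step for a SAS locator in Python: the locator's path is the container URL plus the SAS query string, and the file name is spliced into the path ahead of the query (names here are illustrative):

```python
from urllib.parse import urlsplit, urlunsplit

def build_download_url(locator_path, file_name):
    # Insert the file name into the path, keeping the SAS query string intact.
    parts = urlsplit(locator_path)
    return urlunsplit((
        parts.scheme,
        parts.netloc,
        parts.path.rstrip("/") + "/" + file_name,
        parts.query,
        parts.fragment,
    ))

# e.g. build_download_url("https://acct.blob.core.windows.net/asset-1?sv=...&sig=...",
#                         "video.mp4")  # placeholder locator path and file name
```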
Here's a guide for doing that using the REST API: https://learn.microsoft.com/en-us/azure/media-services/previous/media-services-rest-get-started
Note: are you uploading your videos directly to Azure Storage? If that's the case, my suggestion would be to upload your videos using the Azure Media Services SDK instead.
Azure Media Services has pretty good documentation which might help with your other asks: http://azure.microsoft.com/en-us/develop/media-services/resources/
I am currently trying to use Google's Cloud Vision API for my project. The problem is that the Cloud Vision API for document text detection accepts only a Google Cloud Storage URI as the input and output destination, but I have all my projects and data on Amazon S3, which can't be used directly with this API.
Points to be noted:
All data should be kept in S3 only.
I can't change my cloud storage to GCS now.
I can't download files from S3 and upload them to GCS manually. The number of files incoming per day is more than 1,000 and less than 100,000.
Even if I could automate the downloading and uploading of the PDFs, this would serve as a bottleneck for the entire project, since I would have to deal with concurrency issues and memory management.
Is there any workaround to make this API work with an S3 URI? I am in need of your help.
Thank you
Currently, the Vision API doesn't work with URLs apart from Google Cloud Storage ones. There's an existing feature request about using the API with specific URLs for image search, where you could ask for this to be considered for PDF/TIFF documents too, or you can raise a new feature request for this scenario.
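If an automated staging copy is acceptable despite the constraints above, the usual bridge is to stream each PDF from S3 into a GCS bucket and point the API at the resulting gs:// URI. A hedged sketch (bucket and key names are placeholders; boto3 and Google credentials are assumed to be configured in the environment):

```python
import boto3
from google.cloud import storage, vision

def annotate_s3_pdf(s3_bucket, key, gcs_bucket):
    # Stage the PDF: read it from S3 and write it to a GCS bucket.
    body = boto3.client("s3").get_object(Bucket=s3_bucket, Key=key)["Body"].read()
    storage.Client().bucket(gcs_bucket).blob(key).upload_from_string(
        body, content_type="application/pdf")

    # Run async document text detection against the staged gs:// URI,
    # writing JSON results back to the same bucket.
    client = vision.ImageAnnotatorClient()
    request = vision.AsyncAnnotateFileRequest(
        features=[vision.Feature(type_=vision.Feature.Type.DOCUMENT_TEXT_DETECTION)],
        input_config=vision.InputConfig(
            gcs_source=vision.GcsSource(uri=f"gs://{gcs_bucket}/{key}"),
            mime_type="application/pdf"),
        output_config=vision.OutputConfig(
            gcs_destination=vision.GcsDestination(uri=f"gs://{gcs_bucket}/output/{key}/"),
            batch_size=20),
    )
    client.async_batch_annotate_files(requests=[request]).result(timeout=420)
```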
Background
I've created a Slack bot that listens in a channel for when a file is uploaded, downloads the content, re-uploads it to our Google Drive account, and deletes it from Slack. This all works perfectly using the Slack client API and Google Drive API in Python.
Problem
What I would like to recreate is the view that the Google Drive Slack integration creates when you import a file from Google Drive, instead of just having a link (which is all my bot is currently capable of).
I'm currently using slack_client.api_call("chat.postMessage", ..., unfurl_media=True, unfurl_links=True); however, that does not solve the problem (the file still just appears as a link, as seen above, instead of as an attachment like the Google Drive integration produces).
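Concretely, the call looks like this (a minimal sketch using the v1-style slackclient library; the token, channel, and link are placeholders):

```python
from slackclient import SlackClient

slack_client = SlackClient("xoxb-your-bot-token")  # placeholder token
slack_client.api_call(
    "chat.postMessage",
    channel="#general",  # placeholder channel
    text="https://drive.google.com/file/d/FILE_ID/view",  # Drive link to unfurl
    unfurl_media=True,
    unfurl_links=True,
)
```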
Does anyone have any recommendations on how to achieve the same look as the Google Drive integration? The idea is that the thumbnails and previews of attachments should not go away, but everything should be hosted on Google Drive as opposed to Slack's servers, since we share a ton of files.
I'm building a small educational web application. Along with other features like discussion forums, registered users will be able to view streaming videos. I'll be using Google App Engine's webapp2 framework for back-end development (with Python). I specifically want to ask: how can I integrate video streaming into my application? I'm fairly new to web development and have a basic working knowledge of App Engine. I'll be using Google's Datastore to store all the app's data, but where do I store the videos that the app serves to users? I don't want to make the video content publicly available (e.g. on YouTube), so what's the way to go?
I'm aware that GAE's Blobstore is dedicated to serving large files (e.g. videos) so will it be appropriate for this purpose? What are some other options?
Yes, Blobstore is fine. You can also use Google Cloud Storage, either directly or through the Blobstore API.
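For example, a minimal webapp2 sketch of a handler that serves a stored video (handler and route names are illustrative; files in Cloud Storage can be served the same way via a key from blobstore.create_gs_key):

```python
import webapp2
from google.appengine.ext import blobstore
from google.appengine.ext.webapp import blobstore_handlers

class ServeVideoHandler(blobstore_handlers.BlobstoreDownloadHandler):
    def get(self, blob_key):
        if not blobstore.get(blob_key):
            self.error(404)
        else:
            # send_blob handles byte-range requests, which browsers issue
            # when seeking within a video.
            self.send_blob(blob_key)

app = webapp2.WSGIApplication([
    (r"/video/([^/]+)", ServeVideoHandler),
])
```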
Plenty of related Q&As to study, many contain code snippets: https://stackoverflow.com/search?q=[google-app-engine]+video+streaming