I'm new to Azure Functions and I want to deploy my Python code in a Function App. My code is linked with SharePoint, Outlook, and SQL Server. Could someone suggest the best way to connect all three of them in an Azure Functions app? #python #sql #sharepoint #azure
First, let's discuss accessing SharePoint files from an Azure Function. We just need a few imports in the project (developed in VS Code), and the Python documentation for Office365-REST-Python-Client covers the details.
Below is an example of downloading a file from SharePoint:
import os
import tempfile

from office365.runtime.auth.client_credential import ClientCredential
from office365.sharepoint.client_context import ClientContext

# Replace with your own SharePoint site URL and app (client) credentials
site_url = "https://<tenant>.sharepoint.com/sites/team"
ctx = ClientContext(site_url).with_credentials(
    ClientCredential("<client_id>", "<client_secret>"))

# file_url = '/sites/team/Shared Documents/big_buck_bunny.mp4'
file_url = "/sites/team/Shared Documents/report #123.csv"
download_path = os.path.join(tempfile.mkdtemp(), os.path.basename(file_url))
with open(download_path, "wb") as local_file:
    ctx.web.get_file_by_server_relative_path(file_url).download(local_file).execute_query()
print("[Ok] file has been downloaded into: {0}".format(download_path))
To get all the details of file and folder operations, refer to this GitHub link.
For connecting to SQL Server, there is a blog post (thanks to lieben) that covers it with full Python code.
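In case it helps as a starting point, here is a minimal sketch of querying Azure SQL from the function with pyodbc. The app setting name SQL_CONNECTION_STRING and the ODBC driver named in the comment are placeholders; adjust them to your environment.

import os

import pyodbc

# Assumes the connection string is stored in an application setting named
# SQL_CONNECTION_STRING (placeholder name), e.g.:
# Driver={ODBC Driver 17 for SQL Server};Server=tcp:<server>.database.windows.net,1433;
# Database=<db>;Uid=<user>;Pwd=<password>;Encrypt=yes;
conn_str = os.environ["SQL_CONNECTION_STRING"]

conn = pyodbc.connect(conn_str)
cursor = conn.cursor()
# Simple smoke-test query: list a few tables in the database
for row in cursor.execute("SELECT TOP 5 name FROM sys.tables"):
    print(row.name)
conn.close()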
I have a problem using the videohash package for Python when deployed to an Azure Function.
My deployed Azure Function does not seem to be able to use a nested dependency properly. Specifically, I am trying to use the package "videohash" and the function VideoHash from it. The input to VideoHash is a SAS URL for a video placed on Azure Blob Storage.
In the monitor of my output it prints:
Accessing the SAS URL directly takes me to the video, so that part seems to be working.
Looking at the source code for videohash, this error seems to occur in the process of downloading the video from a given URL (link: https://github.com/akamhy/videohash/blob/main/videohash/downloader.py), where self.yt_dlp_path = str(which("yt-dlp")). This indicates to me that after deploying the function, the yt-dlp package isn't properly available. It is a dependency of the videohash module, but adding yt-dlp directly to the requirements file of the Azure Function does not solve the issue either.
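To check whether the binary is even visible after deployment, I can log the same lookup that videohash performs from inside the deployed function (a quick diagnostic sketch):

import logging
from shutil import which

# Log where (or whether) the binaries videohash relies on are found
# on the PATH of the Functions host.
logging.info("yt-dlp path: %s", which("yt-dlp"))
logging.info("ffmpeg path: %s", which("ffmpeg"))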
Any ideas on what is happening?
I tried deploying the code to an Azure Function, which resulted in the details highlighted in the issue description above.
I have a workaround where you download the video file yourself using azure.storage.blob instead of letting videohash do it.
To download, you will need a BlobServiceClient, a ContainerClient, and the connection string of your Azure storage account.
Please create two files called v1.mp3 and v2.mp3 before downloading the video.
file structure:
Complete Code:
import logging
import os
import tempfile

import azure.functions as func
from azure.storage.blob import BlobServiceClient
from videohash import VideoHash


def main(req: func.HttpRequest) -> func.HttpResponse:
    # Local file paths on the server
    local_path = tempfile.gettempdir()
    filepath1 = os.path.join(local_path, "v1.mp3")
    filepath2 = os.path.join(local_path, "v2.mp3")

    # Reference to Blob Storage
    client = BlobServiceClient.from_connection_string("<connection string>")

    # Reference to the container
    container = client.get_container_client(container="test")

    # Download the two blobs into the local files
    with open(file=filepath1, mode="wb") as download_file:
        download_file.write(container.download_blob("v1.mp3").readall())
    with open(file=filepath2, mode="wb") as download_file:
        download_file.write(container.download_blob("v2.mp3").readall())

    # Video hash code
    videohash1 = VideoHash(path=filepath1)
    videohash2 = VideoHash(path=filepath2)
    t = videohash2.is_similar(videohash1)

    return func.HttpResponse(f"Hello, {t}. This HTTP triggered function executed successfully.")
Output:
Here I am getting an ffmpeg error, which is related to my test file and not to the error you are facing.
As far as I know, this workaround will not affect performance, since in both scenarios you are downloading the blobs anyway.
I'm writing a Python application that requires downloading a folder from OneDrive. I understand that there was a package called onedrivesdk for doing this in the past, but it has since been deprecated, and it is now recommended that the Microsoft Graph API be used to access OneDrive files (https://pypi.org/project/onedrivesdk/). It is my understanding that this requires somehow producing a DriveItem object referring to the target folder.
I was able to access the folder via GraphClient in msgraph.core:
import configparser
import json

from azure.identity import DeviceCodeCredential
from msgraph.core import GraphClient

config = configparser.ConfigParser()
config.read(['config.cfg'])
azure_settings = config['azure']

scopes = azure_settings['graphUserScopes'].split(' ')
device_code_credential = DeviceCodeCredential(azure_settings['clientId'], tenant_id=azure_settings['tenantId'])
client = GraphClient(credential=device_code_credential, scopes=scopes)

endpoint = '/me/drive/root:/Desktop'
x = client.get(endpoint)
x is the requests.models.Response object referring to the target folder (Desktop). I don't know how to extract a DriveItem from x, or how to otherwise iterate over its contents programmatically. How can I do this?
Thanks
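One way to work with the folder is to treat the JSON body of the response as the driveItem resource and then call the children endpoint to list its contents. Here is a minimal sketch, reusing the client and the path-based addressing from the question:

# The response body is the driveItem resource for the folder
folder = x.json()
print(folder['id'], folder['name'])

# List the folder's children (files and sub-folders) via the :/children segment
children = client.get('/me/drive/root:/Desktop:/children').json()
for item in children.get('value', []):
    kind = 'folder' if 'folder' in item else 'file'
    print(kind, item['name'], item['id'])

If the folder contains many items, the listing is paged; follow the @odata.nextLink property in the response to fetch the remaining pages.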
I want to upload a file to MediaFire without a developer API key (because users will use their own accounts).
But I see the MediaFire API requires a developer app, so I don't want to use it.
I want something like this:
import mediafire_uploader as mdf
mdf.create_login("USER_NAME","PASSWORD")
mdf.upload("file.txt")
How can I do it?
Thanks for the help!
You can't upload a file to MediaFire without the API, but I think this is the source you want:
from mediafire import MediaFireApi
from mediafire import MediaFireUploader

api = MediaFireApi()
uploader = MediaFireUploader(api)

session = api.user_get_session_token(
    email='YOUR_EMAIL',
    password='YOUR_PASSWORD',
    app_id='42511')
api.session = session

response = api.user_get_info()

fd = open('PATH_FILE', 'rb')
result = uploader.upload(fd, 'OUTPUT_FILENAME',
                         folder_key='FOLDER KEY')
The default app_id is 42511, in case you didn't know it.
I'm facing the following error while trying to persist log files to Azure Blob Storage from an Azure Batch execution: "FileUploadMiscError - A miscellaneous error was encountered while uploading one of the output files". This error doesn't give much information as to what might be going wrong. I tried checking the Microsoft documentation, but it doesn't mention this particular error code.
Below is the relevant code for adding the task to Azure Batch that I have ported from C# to Python for persisting the log files.
Note: The container that I have configured gets created when the task is added, but there's no blob inside.
import datetime
import logging
import os

import azure.storage.blob.models as blob_model
import yaml
from azure.batch import models
from azure.storage.blob.baseblobservice import BaseBlobService
from azure.storage.common.cloudstorageaccount import CloudStorageAccount
from dotenv import load_dotenv

LOG = logging.getLogger(__name__)


def add_tasks(batch_client, job_id, task_id, io_details, blob_details):
    task_commands = "This is a placeholder. Actual code has an actual task. This gets completed successfully."

    LOG.info("Configuring the blob storage details")
    base_blob_service = BaseBlobService(
        account_name=blob_details['account_name'],
        account_key=blob_details['account_key'])
    LOG.info("Base blob service created")

    base_blob_service.create_container(
        container_name=blob_details['container_name'], fail_on_exist=False)
    LOG.info("Container present")

    container_sas = base_blob_service.generate_container_shared_access_signature(
        container_name=blob_details['container_name'],
        permission=blob_model.ContainerPermissions(write=True),
        expiry=datetime.datetime.now() + datetime.timedelta(days=1))
    LOG.info(f"Container SAS created: {container_sas}")

    container_url = base_blob_service.make_container_url(
        container_name=blob_details['container_name'], sas_token=container_sas)
    LOG.info(f"Container URL created: {container_url}")

    # fpath = task_id + '/output.txt'
    fpath = task_id
    LOG.info("Creating output file object:")

    out_files_list = list()
    out_files = models.OutputFile(
        file_pattern=r"../stderr.txt",
        destination=models.OutputFileDestination(
            container=models.OutputFileBlobContainerDestination(
                container_url=container_url, path=fpath)),
        upload_options=models.OutputFileUploadOptions(
            upload_condition=models.OutputFileUploadCondition.task_completion))
    out_files_list.append(out_files)
    LOG.info(f"Output files: {out_files_list}")

    LOG.info(f"Creating the task now: {task_id}")
    task = models.TaskAddParameter(
        id=task_id, command_line=task_commands, output_files=out_files_list)
    batch_client.task.add(job_id=job_id, task=task)
    LOG.info(f"Added task: {task_id}")
There is a bug in Batch's OutputFile handling which causes it to fail to upload to containers if the full container URL includes any query-string parameters other than the ones included in the SAS token. Unfortunately, the azure-storage-blob Python module includes an extra query string parameter when generating the URL via make_container_url.
This issue was just raised to us, and a fix will be released in the coming weeks, but an easy workaround is, instead of using make_container_url to craft the URL, to craft it yourself like so: container_url = 'https://{}/{}?{}'.format(blob_service.primary_endpoint, blob_details['container_name'], container_sas).
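Applied to the code in the question, the replacement would look something like this (a sketch reusing the variable names from the snippet above):

# Workaround: build the container URL by hand so that no extra query-string
# parameters (such as restype=container) are appended alongside the SAS token.
container_url = 'https://{}/{}?{}'.format(
    base_blob_service.primary_endpoint,
    blob_details['container_name'],
    container_sas)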
The resulting URL should look something like this: https://<account>.blob.core.windows.net/<container>?se=2019-01-12T01%3A34%3A05Z&sp=w&sv=2018-03-28&sr=c&sig=<sig>. Specifically, it shouldn't have restype=container in it (which is what the azure-storage-blob package is adding).
OK... I have been trying to figure out how to do this for a long time now without much success.
I have a Python script running locally in Google App Engine Launcher that receives an image file via POST. I have not launched the application yet; however, I am able to reach Google Cloud SQL, so I assume I can reach Google Cloud Storage.
import MySQLdb
import logging
import webapp2
import json


class PostTest(webapp2.RequestHandler):
    def post(self):
        image = self.request.POST.get('file')
        logging.info("Pic: %s" % image)


#################################
# Main Portion
#################################
application = webapp2.WSGIApplication([
    ('/', PostTest)
], debug=True)
The logging outputs this, so I know it is receiving the image:
INFO 2014-08-04 23:20:43,299 posttest.py:21] Pic Bytes: FieldStorage(u'file', u'tmp.jpg')
How do I connect to Google Cloud Storage?
How do I upload this image to my Google Cloud Storage bucket called 'app'?
How do I retrieve it once it is there?
This should be simple to do, but I haven't been able to find good, clear documentation on it. There is a REST API, which is deprecated, and the GoogleAppEngineCloudStorageClient confuses me.
Can someone help me please with a code example? I will be really grateful!
I created a repository with a script to do this simply: https://github.com/itsdeka/python-google-cloud-storage
Example of integration with Django:
picture = request.FILES.get('picture', None)
file_name = 'test'
directory = 'myfolder'
format = '.jpg'
GoogleCloudStorageUtil.uploadMediaObject(file=picture, file_name=file_name, directory=directory, format=format)
Tip: it automatically creates a folder called 'myfolder' in your bucket if that folder doesn't exist
The link of every picture uploaded to your bucket is the same except for the file name, so it is pretty easy to retrieve the picture you want.
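Alternatively, if you want to use the GoogleAppEngineCloudStorageClient mentioned in the question, here is a minimal sketch, assuming a bucket named 'app' and that image_bytes holds the uploaded file's contents (the helper names are placeholders):

import cloudstorage as gcs

BUCKET = '/app'  # bucket name from the question; adjust to your own


def upload_image(filename, image_bytes):
    # Write the raw bytes to /app/<filename> as a JPEG object
    gcs_path = '%s/%s' % (BUCKET, filename)
    with gcs.open(gcs_path, 'w', content_type='image/jpeg') as gcs_file:
        gcs_file.write(image_bytes)
    return gcs_path


def download_image(filename):
    # Read the object back from Cloud Storage
    with gcs.open('%s/%s' % (BUCKET, filename)) as gcs_file:
        return gcs_file.read()

In the handler above, image_bytes would come from something like self.request.POST.get('file').value.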