I have a requirement where I need to trigger my Python code when a dataset lands in a blob, process it, and then store the processed dataset back in the blob. Where should I do this? Any notebooks?
Azure Functions don't have an option to write Python code.
Any help would be appreciated.
Depending on your design, you could create 2 processes. The first one will search the data for whatever should "trigger", then notify the second process of the "trigger" so it can read and modify the data.
You can work with the blob using Python, as in the examples in Azure's docs.
https://learn.microsoft.com/en-us/azure/storage/blobs/storage-quickstart-blobs-python
More info from this post:
Azure Blob - Read using Python
# Legacy azure-storage SDK (pre-12.x): authenticate with the storage account
# name and key, then download a blob to a local file for processing.
from azure.storage.blob import BlockBlobService
block_blob_service = BlockBlobService(account_name='myaccount', account_key='mykey')
block_blob_service.get_blob_to_path('mycontainer', 'myblockblob', 'out-sunset.png')
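Since the question also asks about storing the processed dataset back in the blob, here is a minimal sketch of the upload side using the same legacy SDK; the container and file names are assumptions for illustration.
# Upload the processed result back to blob storage (container and names are illustrative).
block_blob_service.create_blob_from_path('mycontainer', 'processed-dataset.csv', 'processed-dataset.csv')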
Azure Functions supports Python in preview, which is based on Linux. You can see the wiki page Azure Functions on Linux Preview to learn about it.
Note
Python for Azure Functions is currently in preview. To receive important updates, subscribe to the Azure App Service announcements repository on GitHub.
There are two documents to introduce how to develop Azure Functions using Python.
Create your first Python function in Azure (preview)
Azure Functions Python developer guide
You need to follow the documents above: use the Azure CLI to run the command func new and select Blob Trigger to create a Python Azure Function that satisfies your requirement.
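For orientation, the __init__.py of a blob-triggered Python function created this way might look roughly like the sketch below. It assumes function.json defines a blobTrigger input binding named inputblob and a blob output binding named outputblob pointing at input and output containers; the binding names, containers, and the processing step are all illustrative assumptions.
import azure.functions as func

def main(inputblob: func.InputStream, outputblob: func.Out[bytes]):
    # Runs whenever a new blob lands in the input container.
    data = inputblob.read()      # raw bytes of the incoming dataset
    processed = data             # placeholder: your processing goes here
    outputblob.set(processed)    # write the result to the output container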
You should also consider Logic Apps, which allow you to automate some of the tasks and include several actions on data sets. Please add more details to your question to get a more accurate answer to your request.
New Edit:
There is support for Python 3 in preview for Azure Functions, per the following post.
Also, steps on where you can store your code in Functions can be found here.
Related
I'm building a project using Python and Grafana where I'd like to generate a certain number of copies of certain Grafana dashboards based on certain criteria. I've downloaded the grafanalib library to help me out with that, and I've read through the Generating Dashboards From Code section of the grafanalib website, but I feel like I still need more context to understand how to use this library.
So my first question is, how do I convert a Grafana dashboard JSON model into a Python-friendly format? What method of organization do I use? I saw the dashboard generation function written in the grafanalib documentation, but it looked quite a bit different from how my JSON data is organized. I'd just like some further description of how to do the conversion.
My second question is, once I've converted my Grafana JSON into a Python format, how do I then get the proper information to send that generated dashboard to my Grafana server? I see in the grafanalib documentation the "upload_to_grafana" function used to send the information, and it takes three parameters (json, server, api_key). I understand where it's getting the json parameter from, but I don't get where the server information or API key come from, or where that information is found to be input.
This is all being developed on a Raspberry Pi 4, just to put that out there. I'm working on a personal smart agriculture project as a way to develop my coding abilities further, as I'm self-taught. Any help that can be provided to further my understanding is most appreciated. Thank you.
Create an API key in the Grafana configuration; the secret key that you get while creating it is the API key. The server is localhost:3000 in the case of a locally installed Grafana.
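As a rough sketch of how those pieces fit together, the helper below pushes a grafanalib-generated dashboard to Grafana's HTTP API. The dashboard definition, server address, and helper name are illustrative assumptions; the API key comes from the step above.
import json
import requests
from grafanalib.core import Dashboard
from grafanalib._gen import DashboardEncoder

def upload_to_grafana(dashboard_json, server, api_key):
    # POST the serialized dashboard to Grafana; overwrite=True in the payload
    # replaces an existing dashboard with the same title/uid.
    resp = requests.post(
        "http://{}/api/dashboards/db".format(server),
        data=dashboard_json,
        headers={
            "Authorization": "Bearer {}".format(api_key),
            "Content-Type": "application/json",
        },
    )
    resp.raise_for_status()

# Build a tiny dashboard in code and serialize it with grafanalib's encoder.
dashboard = Dashboard(title="My generated dashboard").auto_panel_ids()
payload = json.dumps(
    {"dashboard": dashboard.to_json_data(), "overwrite": True},
    cls=DashboardEncoder,
)
upload_to_grafana(payload, "localhost:3000", "YOUR_API_KEY")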
I'm developing a React Native application using Expo. It will display different data, preprocessed and cleaned with Python, along with sentiment analysis on tweets regarding a specific topic.
What is the best way to do that? I read about building a RESTful API with Flask, but after some reading I don't think it will serve this purpose.
Thank you in advance.
You could create an AWS Lambda endpoint for sending data and retrieving the processed results. With the free tier of AWS Lambda you get "...1M free requests per month and 400,000 GB-seconds of compute time per month."
Seems like it might suit your use case.
Tutorial you can easily follow along with here: https://www.tutorialspoint.com/aws_lambda/aws_lambda_function_in_python.htm
Some more info on setting up a REST API using Lambda:
https://blog.sourcerer.io/full-guide-to-developing-rest-apis-with-aws-api-gateway-and-aws-lambda-d254729d6992
Reference for the AWS Lambda free tier here:
https://aws.amazon.com/lambda/pricing/
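To give a sense of how small this can be, the sketch below shows a Python Lambda handler behind an API Gateway proxy integration that just returns some (assumed) precomputed sentiment results as JSON; the field names and values are illustrative.
import json

def lambda_handler(event, context):
    # 'event' carries the HTTP request from API Gateway; here we ignore it
    # and return illustrative preprocessed results.
    results = {"topic": "example", "sentiment": {"positive": 0.62, "negative": 0.38}}
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(results),
    }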
I am trying to find a way to automatically update a BigQuery table using this link: https://www6.sos.state.oh.us/ords/f?p=VOTERFTP:DOWNLOAD::FILE:NO:2:P2_PRODUCT_NUMBER:1
This link is updated with new data every week, and I want to be able to replace the BigQuery table with this new data. I have found that you can export spreadsheets to BigQuery, but that is not a streamlined approach.
How would I go about submitting a script that imports the data and having that data be fed to Big Query?
I assume you already have a working script that parses the content of the URL and places the contents in BigQuery. Based on that I would recommend the following workflow:
Upload the script as a Google Cloud Function. If your script isn't written in a compatible language (e.g. Python, Node.js, Go), you can use Google Cloud Run instead. Set the Cloud Function to be triggered by a Pub/Sub message. In this scenario, the content of your Pub/Sub message doesn't matter.
Set up a Google Cloud Scheduler job to (a) run at 12am every Saturday (or whatever time you wish) and (b) send a dummy message to the Pub/Sub topic that your Cloud Function is subscribed to.
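A minimal sketch of what such a Pub/Sub-triggered Cloud Function could look like, assuming the weekly download is a CSV and that the target table is my-project.my_dataset.voter_data (both are illustrative assumptions; your existing parsing logic would slot in before the load):
import io
import requests
from google.cloud import bigquery

SOURCE_URL = "https://www6.sos.state.oh.us/ords/f?p=VOTERFTP:DOWNLOAD::FILE:NO:2:P2_PRODUCT_NUMBER:1"
TABLE_ID = "my-project.my_dataset.voter_data"  # assumed project/dataset/table

def main(event, context):
    # Pub/Sub-triggered entry point; the message content is ignored.
    response = requests.get(SOURCE_URL)
    response.raise_for_status()

    client = bigquery.Client()
    job_config = bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.CSV,
        skip_leading_rows=1,
        autodetect=True,
        write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,  # replace the table each week
    )
    load_job = client.load_table_from_file(
        io.BytesIO(response.content), TABLE_ID, job_config=job_config
    )
    load_job.result()  # wait for the load to finish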
You can try making an HTTP request to the page using a programming language like Python with the Requests library, save the data into a pandas DataFrame or a CSV file, and then push that data into a BigQuery table using the BigQuery client libraries.
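A rough sketch of that approach, again assuming the file is a CSV and using an illustrative table name:
import pandas as pd
from google.cloud import bigquery

# Read the weekly file straight into a DataFrame (assumes CSV content).
df = pd.read_csv("https://www6.sos.state.oh.us/ords/f?p=VOTERFTP:DOWNLOAD::FILE:NO:2:P2_PRODUCT_NUMBER:1")

# Any cleaning or transformation with pandas would go here.

client = bigquery.Client()
job_config = bigquery.LoadJobConfig(
    write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,  # replace the existing table
)
client.load_table_from_dataframe(df, "my-project.my_dataset.voter_data", job_config=job_config).result()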
I want to track which resources are popular using the data from Google Analytics.
Is this possible with Google Analytics?
Is there a specific reason you would like to use Google Analytics?
You can instead log it to a log file with the Python logging module (a small sketch follows below).
You can also write it to a database (use Firebase for free if you are only counting).
Alternatively see this SO question.
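A minimal sketch of the logging idea; the file name, format, and record_hit helper are illustrative assumptions, and the counts can be tallied later from the log file.
import logging

logging.basicConfig(
    filename="resource_hits.log",
    level=logging.INFO,
    format="%(asctime)s %(message)s",
)

def record_hit(resource_id):
    # One line per access; count occurrences per resource_id afterwards.
    logging.info("resource_hit %s", resource_id)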
I'm using Azure and the python SDK.
I'm using Azure's table service API for DB interaction.
I've created a table which contains data in Unicode (Hebrew, for example). Creating tables and setting the data in Unicode seems to work fine. I'm able to view the data in the database using Azure Storage Explorer, and the data is correct.
The problem is when retrieving the data. Whenever I retrieve a specific row, data retrieval works fine for Unicode data:
table_service.get_entity("some_table", "partition_key", "row_key")
However, when trying to get a number of records using a filter, an encoding exception is thrown for any row that has non-ASCII characters in it:
tasks = table_service.query_entities('some_table', "PartitionKey eq 'partition_key'")
Is this a bug in the Azure Python SDK? Is there a way to set the encoding beforehand so that it won't crash? (Azure doesn't give access to sys.setdefaultencoding, and using DEFAULT_CHARSET in settings.py doesn't work either.)
I'm using https://www.windowsazure.com/en-us/develop/python/how-to-guides/table-service/ as reference to the table service API
Any idea would be greatly appreciated.
This looks like a bug in the Python library to me. I whipped up a quick fix and submitted a pull request on GitHub: https://github.com/WindowsAzure/azure-sdk-for-python/pull/59.
As a workaround for now, feel free to clone my repo (remembering to check out the dev branch) and install it via pip install <path-to-repo>/src.
Caveat: I haven't tested my fix very thoroughly, so you may want to wait for the Microsoft folks to take a look at it.