I am using a blob trigger with a Python function in an Azure Functions app on the consumption plan. I know Python support is in preview, but it is a bummer that the app terminates after a while of no usage and will not come back to life when a new blob is added.
The function works perfectly locally
Is there a way to keep the function app alive?
I did not find a proper way to do it, but I added a second HTTP trigger function to wake the app, and that works. So my current process is: trigger the HTTP function, then upload the blob.
I also tried a cron (timer) trigger, but that didn't fire either.
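For reference, a timer ("cron") keep-alive of this kind might look roughly like the sketch below, assuming the azure.functions Python worker; the function name is illustrative, and the six-field NCRONTAB schedule (e.g. "0 */5 * * * *" for every five minutes) would go in the function's function.json:

import logging

import azure.functions as func


def main(mytimer: func.TimerRequest) -> None:
    # Nothing to do here: the invocation itself wakes/keeps the app warm.
    logging.info("Keep-alive timer fired (past due: %s)", mytimer.past_due)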
I have two Function Apps inside the same App Service Plan:
FA1: func-elsa-api-scl-dev
FA2: func-elsa-api-temp-dev
Both Function Apps are in the plan 'plan-elsa-api-scl-dev (P1v2: 1)'.
Both Function Apps use the same hybrid connection 'scl-by-dch2shxx'.
The plan, the Function Apps, and the hybrid connection configs are provisioned by my DevOps pipeline, the infra pipeline. At the end of the infra pipeline run, I see the plan, the Function Apps, and the hybrid connection all set up. I then configure the hybrid connection in HCM (the Hybrid Connection Manager on a separate on-prem VM). It all looks good: in the Azure Portal, Function App > Hybrid Connections shows it as connected, and so does HCM.
Then I have a separate DevOps pipeline through which I deploy my functions into the Function App 'func-elsa-api-scl-dev'. In this pipeline I just build my functions and deploy them; I do nothing with regard to Function App provisioning or hybrid connection configuration.
After the first deployment, my deployed functions can reach on-prem through the hybrid connection without any issue. Everything works as expected; I can query the on-prem DB and so on.
However, when I run my second pipeline (the one responsible for building and deploying the functions) again, I get this error in my function logs: '(-10709, 'Connect failed (connect timeout expired) (127.0.0.1:48256 - by-dch2shxx.de.xxx.xxx:38015)')'.
The hybrid connection at the Function App level still shows green and connected, but my function (e.g. the one named Humboldt) throws this error.
The strange thing is that sometimes, when I go to my Function App, remove (disconnect) the hybrid connection, and manually add it again (as an existing connection), my function starts to work.
Additionally, I have set up another Function App, 'func-elsa-api-temp-dev', in the same App Service Plan and put a simple telnet function in it. This temp Function App uses the same hybrid connection that my other Function App uses.
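(For illustration only, written in Python, a "telnet" check function of this kind might look roughly like the sketch below; the host and port are placeholders for the on-prem endpoint the hybrid connection is configured for.)

import socket

import azure.functions as func

HOST = "<onprem-host>"   # placeholder: hybrid connection endpoint host
PORT = 38015             # placeholder: hybrid connection endpoint port


def main(req: func.HttpRequest) -> func.HttpResponse:
    # Try to open a plain TCP connection to the on-prem endpoint.
    try:
        with socket.create_connection((HOST, PORT), timeout=10):
            return func.HttpResponse("Connected to {}:{}".format(HOST, PORT))
    except OSError as exc:
        return func.HttpResponse("Connect failed: {}".format(exc), status_code=500)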
However, when I run the telnet function through the portal, it is able to reach the on-prem system through the same hybrid connection, while my other function in the other app is still not working.
So in short, I am not sure why this same hybrid connection does not work for the functions in my Function App "func-elsa-api-scl-dev", especially when I redeploy it. Why do I have to disconnect and re-register it manually after each deployment to make it work?
I have looked into the HCM logs and found nothing suspicious there.
This problem is causing us big delays, and we are unsure whether we will ever be able to make the hybrid connection work properly.
Any pointer would be helpful.
Many thanks.
I'm trying to run flask-assistant code in a Cloud Function. The code works fine on my local machine, but it does not work as a Cloud Function. I'm using the HTTP trigger, and the function crashes every time it is triggered.
from flask import Flask
from flask_assistant import Assistant, ask, tell

app = Flask(__name__)
assist = Assistant(app, route='/')


@assist.action('TotalSales')
def greet_and_start(request):
    app.run
    speech = "Hey! 1500?"
    return ask(speech)


if __name__ == '__main__':
    app.run(debug=True)
When you write a Google Cloud Function in Python, all you need to write is the function that handles the request. For example:
def hello_get(request):
    return 'Hello World!'
Cloud Functions does all the work of creating the Flask environment and handling the incoming request; all you need to do is provide the handler that does the processing. This is the core idea behind Cloud Functions, which provides "serverless" infrastructure: the number and existence of actual running servers is removed from your world, and you can concentrate only on what you want your logic to do. It is not surprising that your example program doesn't work, as it is trying to do too much. Here is a link to a Google Cloud Functions tutorial for Python that illustrates a simple sample:
https://cloud.google.com/functions/docs/tutorials/http
Let me recommend that you study this and related documentation on Cloud Functions found here:
https://cloud.google.com/functions/docs/
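As an illustration only, a handler that answers the 'TotalSales' intent from your snippet directly, without creating its own Flask app, might look roughly like this (the Dialogflow v2 webhook field names below are an assumption on my part, not something taken from your code):

import json


def total_sales(request):
    """HTTP Cloud Function: answers a Dialogflow-style webhook call directly."""
    body = request.get_json(silent=True) or {}
    intent = body.get('queryResult', {}).get('intent', {}).get('displayName', '')
    reply = 'Hey! 1500?' if intent == 'TotalSales' else "Sorry, I didn't catch that."
    return (json.dumps({'fulfillmentText': reply}),
            200,
            {'Content-Type': 'application/json'})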
Other good references include:
YouTube: Next 17 - Building serverless applications with Google Cloud Functions
Migrating from a Monolith to Microservices (Cloud Next '19)
Run Cloud Functions Everywhere (Cloud Next '19)
Functions as a Service (Cloud Next '19)
I have created a Python serverless function in Azure that is executed when a new file is uploaded to Azure Blob Storage (BlobTrigger). The function extracts certain properties of the file and saves them in the DB. As the next step, I want this function to copy and process the same file inside a container instance running in ACI. The result of the processing should be returned to the same Azure function.
This is a hypothetical architecture that I am currently brainstorming. I want to know if it is feasible. Can you give me some pointers on how I can achieve this?
I don't see any ContainerTrigger kind of functionality that would allow me to trigger the container and process my next steps.
I have tried the code examples mentioned here, but they are not really performing the tasks that I need: https://github.com/Azure-Samples/aci-docs-sample-python/blob/master/src/aci_docs_sample.py
Based on the comments above, you can consider the following.
Azure Container Instance
Deploy your container in ACI (Azure Container Instances) and expose an HTTP endpoint from the container, just like any web URL. Trigger the Azure Function using the blob storage trigger, then pass your blob file's URL to the HTTP endpoint exposed by your container. Process the file there and return the response to the Azure Function, just like a normal HTTP request/response.
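A rough sketch of that flow, assuming the azure.functions Python worker and the requests library (the ACI endpoint URL is a placeholder, and the blob trigger binding itself is configured in function.json):

import json
import logging

import azure.functions as func
import requests

# Placeholder for the HTTP endpoint your container exposes.
ACI_ENDPOINT = "http://<your-aci-dns-label>.<region>.azurecontainer.io/process"


def main(myblob: func.InputStream) -> None:
    logging.info("New blob: %s (%s bytes)", myblob.name, myblob.length)

    # Hand the blob's URL to the container and wait for its response.
    resp = requests.post(ACI_ENDPOINT, json={"blob_url": myblob.uri}, timeout=120)
    resp.raise_for_status()

    logging.info("Container returned: %s", json.dumps(resp.json()))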
Alternatively, you can bypass the Azure Function completely and trigger your ACI container instance using Logic Apps, process the file, and save the result directly in the database.
When you use an Azure Function, make sure this is a short-lived process, since an Azure Function will time out after a certain period (5 minutes by default). For long-running processing you may have to consider Azure Durable Functions.
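For example, a rough Durable Functions sketch (the orchestrator and the 'ProcessBlob' activity names are illustrative; requires the azure-functions-durable package):

import azure.durable_functions as df


def orchestrator_function(context: df.DurableOrchestrationContext):
    blob_url = context.get_input()
    # The 'ProcessBlob' activity function would do the long-running work
    # (e.g. call the container) without the normal function time limit.
    result = yield context.call_activity('ProcessBlob', blob_url)
    return result


main = df.Orchestrator.create(orchestrator_function)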
The following URL can help you understand this better:
https://github.com/Azure-Samples/aci-event-driven-worker-queue
I have a couple of Python apps in Cloud Foundry. Now I would like to schedule their execution; for example, a specific app has to be executed on the second day of each month.
I couldn't find anything on the internet. Is that even possible?
Cloud Foundry will deploy your application inside a container. You could use libraries to execute your code on a specific schedule, but either way you're paying to have that instance running the whole time.
What you're trying to do is a perfect candidate for "serverless computing" (also known as "event-driven" or "function as a service" computing).
These deployment technologies execute functions in response to a trigger, e.g. a REST API call, a certain timestamp, a new database insert, etc.
You could execute your Python Cloud Foundry apps using the OpenWhisk serverless compute platform.
IBM offers a hosted version of this running on their cloud platform, Bluemix.
I don't know what your code looks like so I'll use this sample hello world function:
import sys

def main(dict):
    if 'message' in dict:
        name = dict['message']
    else:
        name = 'stranger'
    greeting = 'Hello ' + name + '!'
    print(greeting)
    return {'greeting': greeting}
You can upload your actions (functions) to OpenWhisk using either the online editor or the CLI.
Once you've uploaded your actions you can automate them on a specific schedule by using the Alarm Package. To do this in the online editor click "automate this process" and pick the alarm package.
To do this via the CLI we need to first create a trigger:
$ wsk trigger create regular_hello_world --feed /whisk.system/alarms/alarm -p cron '0 0 9 * * *'
ok: created trigger feed regular_hello_world
This will trigger every day at 9am. We then need to link this trigger to our action by creating a rule:
$ wsk rule create regular_hello_rule regular_hello_world hello_world
ok: created rule regular_hello_rule
For more info see the docs on creating python actions.
The Cloud Foundry platform itself does not have a scheduler (at least not at this time), and the containers where your application runs do not have cron installed (and are unlikely to ever have it).
If you want to schedule code to periodically run, you have a few options.
You can deploy an application that includes a scheduler. The scheduler can run your code directly in that container or it can trigger the code to run elsewhere (ex: it sends an HTTP request to another application and that request triggers the code to run). If you trigger the code to run elsewhere, you can make the scheduler app run pretty lean (maybe with 64m of memory or less) to reduce costs.
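A lean scheduler app like that might look roughly like this sketch, using the APScheduler library (the TARGET_URL is a placeholder, and the schedule uses the "second day of each month" example from the question):

import os

import requests
from apscheduler.schedulers.blocking import BlockingScheduler

# Placeholder: the endpoint of the app whose code should be triggered.
TARGET_URL = os.environ.get("TARGET_URL", "https://my-app.example.com/run-job")

sched = BlockingScheduler()


@sched.scheduled_job("cron", day=2, hour=2)
def trigger_job():
    # Fires at 02:00 on the 2nd day of every month.
    requests.post(TARGET_URL, timeout=30)


sched.start()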
You can look for a third party scheduler service. The availability of and cost of services like this will vary depending on your CF provider, but there are service offerings to handle scheduling. These typically function like the previous example where an HTTP request is sent to your app at a specific time and that triggers your scheduled code. Many service providers offer free tiers, which give you a small number of triggers per month at no cost.
If you have a server outside of CF with cron installed, you can use cron there to schedule the tasks and trigger the code to run on CF. You can do this like the previous examples, by sending HTTP requests to your app; however, this option also gives you the possibility to make use of Cloud Foundry's task feature.
CloudFoundry has the concept of a task, which is a one-time execution of some code. With it, you can execute the cf run-task command to trigger the task to run. Ex: cf run-task <app-name> "python my-task.py". More on that in the docs, here. The nice part about using tasks is that your provider will only bill you while the task is running.
To see if your provider has tasks available, run cf feature-flags and look to see if task_creation is set to enabled.
Hope that helps!
This is my first question on Stack Overflow, and I'm new to programming:
What is the right way to load data into the GAE datastore when deploying my app? This should only happen once at deployment.
In other words: How can I call methods in my code, such that these methods are only called when I deploy my app?
The GAE documentation for Python 2.7 says that one shouldn't call a main function, so I can't do this:
if __name__ == '__main__':
    initialize_datastore()
    main()
Create a handler that is restricted to admins only. When that handler is invoked with a simple GET request, you could have it check whether the seed data exists and, if it doesn't, insert it.
Configuring a handler to require login or administrator status.
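A minimal sketch of that approach on the Python 2.7 runtime with webapp2 and ndb (the SeedItem model and the /admin/seed route are placeholders; the matching app.yaml handler entry would set login: admin, per the doc linked above):

import webapp2
from google.appengine.ext import ndb


class SeedItem(ndb.Model):
    # Placeholder model for the seed data.
    name = ndb.StringProperty()


class SeedDataHandler(webapp2.RequestHandler):
    def get(self):
        if SeedItem.query().get() is None:
            ndb.put_multi([SeedItem(name=n) for n in ('alpha', 'beta')])
            self.response.write('Seed data inserted.')
        else:
            self.response.write('Seed data already present.')


app = webapp2.WSGIApplication([('/admin/seed', SeedDataHandler)])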
Another option is to write a Python script that utilizes the Remote API. This would allow you to access local data sources such as a CSV file or a locally hosted database and wouldn't require you to create a potentially unwieldy handler.
Read about the Remote API in the docs.
Using the Remote API Shell - Google App Engine
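And a rough sketch of the Remote API script option, run locally with the App Engine SDK available (the app id, model import, and seed values are placeholders, and the exact setup call can vary by SDK version):

from google.appengine.ext import ndb
from google.appengine.ext.remote_api import remote_api_stub


def main():
    # Point this local process at the live datastore over the Remote API.
    remote_api_stub.ConfigureRemoteApiForOAuth(
        'your-app-id.appspot.com', '/_ah/remote_api')

    from models import SeedItem  # placeholder: a model defined in your app
    ndb.put_multi([SeedItem(name=n) for n in ('alpha', 'beta')])


if __name__ == '__main__':
    main()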