I'm trying to run flask-assistant code in a Cloud Function. The code works fine on my local machine, but it does not work as a Cloud Function. I'm using the HTTP trigger, and the function crashes every time it is triggered.
from flask import Flask
from flask_assistant import Assistant, ask, tell

app = Flask(__name__)
assist = Assistant(app, route='/')

@assist.action('TotalSales')
def greet_and_start(request):
    app.run
    speech = "Hey! 1500?"
    return ask(speech)

if __name__ == '__main__':
    app.run(debug=True)
When you write a Google Cloud Function in Python, all you need to write is the function that handles the request. For example:
def hello_get(request):
    return 'Hello World!'
Cloud Functions handles all the work of creating the Flask environment and handling the incoming request; all you need to do is provide the handler that does your processing. This is the core idea behind Cloud Functions' "serverless" infrastructure: the number and existence of actual running servers is removed from your world, and you can concentrate only on what you want your logic to do. It is not surprising that your example program doesn't work, as it is trying to do too much: it creates its own Flask app and Assistant and calls app.run() itself. Here is a link to a Google Cloud Functions tutorial for Python that illustrates a simple sample:
https://cloud.google.com/functions/docs/tutorials/http
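To make the shape of a handler concrete, here is a minimal, hedged sketch of how the greeting logic from the question might look as a bare Cloud Functions handler; the function name and the Dialogflow "fulfillmentText" response field are my assumptions, not something taken from the original code:

from flask import jsonify

def total_sales(request):
    # Hypothetical handler name; Cloud Functions passes the Flask request object in.
    # The Dialogflow v2 "fulfillmentText" response field is an assumption here.
    speech = "Hey! 1500?"
    return jsonify({"fulfillmentText": speech})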
Let me recommend that you study this and related documentation on Cloud Functions found here:
https://cloud.google.com/functions/docs/
Other good references include:
YouTube: Next 17 - Building serverless applications with Google Cloud Functions
Migrating from a Monolith to Microservices (Cloud Next '19)
Run Cloud Functions Everywhere (Cloud Next '19)
Functions as a Service (Cloud Next '19)
I am getting an error while deploying an Azure Function from my local system.
I went through some blogs, and they state that my function is unable to connect with the Azure storage account which holds the function's metadata.
Also, the function on the portal is showing the error: Azure Functions runtime is unreachable.
Earlier my function was running, but after integrating the function with an Azure Premium App Service plan it has stopped working. My assumption is that my App Service plan has some restriction on the inbound/outbound traffic rules, and due to this it is unable to establish the connection with the function's associated storage account.
Also, I would like to highlight that if a function is using the Premium plan, we have to add a few other configuration properties.
WEBSITE_CONTENTAZUREFILECONNECTIONSTRING = "DefaultEndpointsProtocol=https;AccountName=blob_container_storage_acc;AccountKey=dummy_value==;EndpointSuffix=core.windows.net"
WEBSITE_CONTENTSHARE = "my-function-name"
For the WEBSITE_CONTENTSHARE property I have added the function app name, but I am not sure about the value.
Following is the Microsoft documentation reference for the function configuration properties:
Microsoft Function configuration properties link
Can you please help me resolve this issue?
Note: I am using python for the Azure functions.
I have created a new function app with a Premium plan and selected Python as the interpreter. When we select Python, the OS will automatically be Linux.
Below is the message we get when creating functions for a Premium plan function app:
Your app is currently in read only mode because Elastic Premium on Linux requires running from a package.
We need to create, deploy and run function apps from a package; refer to the documentation on how to run functions from a package: https://learn.microsoft.com/en-us/azure/azure-functions/run-functions-from-deployment-package
Make sure to add all of your local.settings.json configuration values to the Application Settings of the function app.
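As a hedged helper sketch (the file path is assumed to be the default one next to your function code), this lists the keys under "Values" in local.settings.json; each of them needs a matching Application Setting in the function app:

import json

# List the keys in local.settings.json "Values" that must also exist
# as Application Settings in the deployed function app.
with open("local.settings.json") as f:
    settings = json.load(f)

for key in settings.get("Values", {}):
    print(key)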
Not sure what kind of Azure Function you are using, but usually when there is a Storage Account associated, we need to specify the AzureWebJobsStorage field in the serviceDependencies.json file inside the Properties folder. When I faced the same error, the cause was that while publishing the Azure Function from local, some settings from local.settings.json were missing in the Application Settings of the app service under the Configuration blade.
There are a few more things you can recheck:
Does the storage account you are trying to use still exist, or has it been deleted by any chance?
While publishing the application from local using the web deploy method, is the publish profile correct or does it have any issues?
Try disabling the function app and then stopping the app service before redeploying it.
Hope one of the above points helps you diagnose and solve the issue.
The thing is that there is a difference in how the function is deployed when using the Consumption plan versus the Premium plan.
Consumption - works out of the box.
Premium - you need to add WEBSITE_RUN_FROM_PACKAGE = 1 to the function's Application settings (see https://learn.microsoft.com/en-us/azure/azure-functions/run-functions-from-deployment-package for full details).
I am using a blob trigger with a Python function on an Azure Functions app with the Consumption plan. I know it is in preview, but it is a bummer that the app terminates after a while of no usage and does not come back to life when a new blob is added.
The function works perfectly locally.
Is there a way to keep the function app alive?
I did not find the right way to do it, but I added a second HTTP-trigger function to start the app, and that works. So my current process is: trigger the HTTP function and then upload the blob.
I also tried a cron (timer) trigger, but that also didn't fire.
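For reference, here is a hedged sketch of what the timer ("cron") trigger approach could look like with the Azure Functions Python programming model; the schedule and names are placeholders, and since the poster reports their timer trigger did not fire, this is only illustrative:

import datetime
import logging

import azure.functions as func

def main(mytimer: func.TimerRequest) -> None:
    # Paired with a function.json binding of type "timerTrigger" and a CRON
    # schedule such as "0 */5 * * * *" (every five minutes) to keep the app warm.
    logging.info("Keep-alive ping at %s", datetime.datetime.utcnow().isoformat())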
I've set up a Google Cloud BigTable cluster on my project. The main codebase for the project runs within a standard Python App Engine environment, which can't use the gcloud-python library because of the reliance on grpcio. To get around this, I've set up a Python App Engine Flexible Environment service within the same project and written a very simple Flask server to run on it, which I can then hit from my standard environment. The code looks something like this:
from flask import Flask, jsonify
from gcloud import bigtable

import bigtable_config  # local module holding PROJECT_ID, ZONE_ID, CLUSTER_ID, TABLE_ID

app = Flask(__name__)

client = bigtable.Client(project=bigtable_config.PROJECT_ID, read_only=True)
cluster = client.cluster(bigtable_config.ZONE_ID, bigtable_config.CLUSTER_ID)
table = cluster.table(bigtable_config.TABLE_ID)

@app.route("/query/<start_key>/<end_key>")
def run_query(start_key, end_key):
    if not client.is_started():
        client.start()
    row_data = table.read_rows(start_key=start_key, end_key=end_key)
    row_data.consume_all()
    # do some stuff to the row data here, get results
    return jsonify(results)
I can run this code locally and it works great. I can deploy it to my service and it continues to work great. However, if the service sits idle for some period of time (I've typically noticed it after about an hour), then every request I make starts failing with this error:
NetworkError(code=StatusCode.UNAUTHENTICATED, details="Request had invalid authentication credentials.")
If I redeploy the service, it starts working again. I do not observe this behavior when I'm running the service locally.
What am I doing wrong? I'm assuming I'm making some mistake in my setup of the client, where it's not properly using the app engine credentials. Do I need to kill the client and restart it when I encounter this error?
This issue is being tracked on GitHub.
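For what it's worth, here is a hedged sketch of the workaround the question itself hypothesizes (discarding and recreating the client when the UNAUTHENTICATED error shows up). It is only illustrative, not a confirmed fix, and matching on the error message rather than a specific exception class is an assumption:

from gcloud import bigtable

import bigtable_config  # the question's local config module

def make_table():
    # Recreate the client, cluster and table handles from scratch.
    client = bigtable.Client(project=bigtable_config.PROJECT_ID, read_only=True)
    client.start()
    cluster = client.cluster(bigtable_config.ZONE_ID, bigtable_config.CLUSTER_ID)
    return cluster.table(bigtable_config.TABLE_ID)

def read_rows_with_retry(table, start_key, end_key):
    try:
        return table.read_rows(start_key=start_key, end_key=end_key)
    except Exception as exc:  # assumption: inspect the message, not a specific class
        if "UNAUTHENTICATED" not in str(exc):
            raise
        fresh_table = make_table()  # credentials look stale: rebuild the client and retry once
        return fresh_table.read_rows(start_key=start_key, end_key=end_key)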
What is the best way to have my Meteor app call a Python script that resides on the same machine as the Meteor server-side code? All I want to do is have Meteor pass a string to a function in Python and have Python return a string to Meteor.
I was thinking that I could have Python monitor MongoDB, extract values, and write them back to MongoDB once computed, but it seems much cleaner to have Meteor call the function in Python directly.
I am new to DDP and was not able to get very far with python-meteor (https://github.com/hharnisc/python-meteor).
Is ZeroRPC (http://zerorpc.dotcloud.com/) a good way to do it?
Thanks.
Great question.
I have looked at using DDP and ZeroRPC and even having Python write directly to Mongo.
For me, the easiest way to have Meteor and Python talk was to set up the Python script as a Flask app, add an API to the Flask app, and have Meteor talk to Python through that API.
To get this setup working I used:
Flask API
(https://flask-restful.readthedocs.org/en/0.3.1/quickstart.html#a-minimal-api)
The Meteor HTTP package (http://docs.meteor.com/#/full/http_call)
To test it you can build something basic like this (a Python script that converts text to upper case):
from flask import Flask
import flask_restful as restful  # flask.ext.restful was removed in newer Flask releases

app = Flask(__name__)
api = restful.Api(app)

class ParseText(restful.Resource):
    def get(self, text):
        output = text.upper()
        return output

api.add_resource(ParseText, '/<string:text>')

if __name__ == '__main__':
    app.run(debug=True)  # debug=True is for testing to see if calls are working.
Then in Meteor use HTTP.get to test calling the API.
If you are running everything locally then the call from Meteor would probably look something like: Meteor.http.get("http://127.0.0.1:5000/test");
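To sanity-check the endpoint independently of Meteor, here is a quick hedged test from Python (the URL and path assume the local defaults used above, and the requests library is assumed to be installed):

import requests

# Expects the Flask API above to be running locally on its default port 5000.
response = requests.get("http://127.0.0.1:5000/test")
print(response.json())  # should print "TEST"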
I have experience implementing something similar in the past using a RESTful approach.
Meteor triggers observeChanges on the server and sends an HTTP request to the Python RESTful API endpoints (in Flask); Flask handles the request by calling the relevant Python scripts/functions, and Meteor then handles the response in its callback accordingly.
There are of course many other approaches you can consider, like using DDP, child_process, etc. I also considered using python-meteor, but decided that the RESTful approach is more portable and scalable (whether everything runs on the same machine or across different machines: you can expand your servers to handle more requests, and so on; you get the idea).
Everyone's use case is different, and I found the RESTful approach to be the best fit for mine. I hope you find my answer useful; it should expand the choices you consider so you can pick the one that best fits your case. Good luck.
This is my first question on Stack Overflow and I'm new to programming:
What is the right way to load data into the GAE datastore when deploying my app? This should only happen once at deployment.
In other words: How can I call methods in my code, such that these methods are only called when I deploy my app?
The GAE documentation for Python 2.7 says that one shouldn't call a main function, so I can't do this:
if __name__ == '__main__':
    initialize_datastore()
    main()
Create a handler that is restricted to admins only. When that handler is invoked with a simple GET request, you could have it check whether the seed data exists and, if it doesn't, insert it.
Configuring a handler to require login or administrator status.
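Here is a hedged sketch of what such an admin-only seed handler could look like on the Python 2.7 standard environment; the model, field, and URL below are placeholders I made up for illustration:

import webapp2
from google.appengine.ext import ndb

class SeedRecord(ndb.Model):
    # Hypothetical seed entity used only to illustrate the pattern.
    name = ndb.StringProperty()

class SeedHandler(webapp2.RequestHandler):
    def get(self):
        if SeedRecord.query().get() is None:
            SeedRecord(name='initial value').put()
            self.response.write('Seed data inserted.')
        else:
            self.response.write('Seed data already present.')

# Map this route in app.yaml with "login: admin" so only administrators can call it.
app = webapp2.WSGIApplication([('/admin/seed', SeedHandler)], debug=True)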
Another option is to write a Python script that uses the Remote API. This would allow you to access local data sources such as a CSV file or a locally hosted database, and wouldn't require you to create a potentially unwieldy handler.
Read about the Remote API in the docs.
Using the Remote API Shell - Google App Engine
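As an illustration, here is a hedged sketch of seeding data through the Remote API from a local script; the app id, entity, and values are placeholders, and the exact authentication setup is described in the linked documentation:

from google.appengine.ext import ndb
from google.appengine.ext.remote_api import remote_api_stub

class SeedRecord(ndb.Model):
    # Same hypothetical entity as in the handler sketch above.
    name = ndb.StringProperty()

def main():
    # Connects to the deployed app's datastore over the Remote API endpoint;
    # OAuth credentials are configured as described in the Remote API docs.
    remote_api_stub.ConfigureRemoteApiForOAuth(
        'your-app-id.appspot.com', '/_ah/remote_api')
    if SeedRecord.query().get() is None:
        SeedRecord(name='initial value').put()

if __name__ == '__main__':
    main()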