Context
I want to run an AWS Lambda, call an endpoint (fire and forget), then stop the Lambda - all the while the endpoint is whirling away happily on its own.
Attempts
1.
Using a timeout, e.g.
import requests

try:
    requests.get(url, timeout=0.001)
except requests.exceptions.ReadTimeout:
    ...
2.
Using an async call with grequests:
import grequests
...

def handler(event, context):
    try:
        ...
        req = grequests.get(url)
        grequests.send(req, grequests.Pool(1))
        return {
            'statusCode': 200,
            'body': "Lambda done"
        }
    except Exception:
        logger.exception('Error while running lambda')
        raise
These requests don't appear to reach the API; it's almost as if the request is being cancelled.
Any ideas why?
Question
How can a Lambda call a URL which takes a long time to complete? Thanks.
To anyone reading this: I fixed my problem by using AWS Batch.
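(For anyone curious what that hand-off can look like: a minimal sketch using boto3's submit_job, where the job name, queue, and definition are hypothetical and must already exist.)

import boto3

batch = boto3.client('batch')

def handler(event, context):
    # Hand the long-running work to AWS Batch and return at once;
    # the job keeps running independently of this Lambda.
    batch.submit_job(
        jobName='long-running-call',        # hypothetical
        jobQueue='my-job-queue',            # hypothetical
        jobDefinition='my-job-definition',  # hypothetical
    )
    return {'statusCode': 200, 'body': 'Job submitted'}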
You have to increase the timeout on your function. Depending on how you have defined your function, you can increase the timeout on the function page or inside your CloudFormation template (the default is only a few seconds).
The timeout defined inside your code has no effect on Lambda itself: AWS Lambda will kill your function once the configured timeout is reached (15 minutes at most), regardless of any timeout defined inside your script. See: https://docs.aws.amazon.com/lambda/latest/dg/limits.html
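If you manage the function programmatically, a hedged sketch of raising the timeout with boto3 (the function name is a placeholder):

import boto3

lambda_client = boto3.client('lambda')

# Raise the configured timeout to the 15-minute maximum.
lambda_client.update_function_configuration(
    FunctionName='my-function',  # placeholder
    Timeout=900,                 # seconds
)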
Related
I have an SQS queue triggering my AWS Lambda. There are situations where I want to return a response from the Lambda without raising an Exception. I tried returning None, but then the queue does not trigger the Lambda again.
Here is my Lambda getting triggered by the queue:
def lambda_handler(event, context):
    try:
        do_something()
    except:
        return None
When I raise an exception here, the Lambda gets retried. But I don't want to raise an exception for my Lambda to get triggered again. How can I achieve this? I tried returning the following instead of None:
{
    'statusCode': 500
}
That also doesn't get my Lambda triggered again.
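For reference, a minimal sketch of the behavior described above (the exception classes are hypothetical): with an SQS trigger, only a raised exception fails the invocation and returns the message for retry; any normal return, whatever the body, counts as success.

def lambda_handler(event, context):
    try:
        do_something()
    except TransientError:
        # Re-raising fails the invocation, so SQS makes the
        # message visible again and the Lambda is retried.
        raise
    except PermanentError:
        # Swallowing the exception counts as success, so the
        # message is deleted and never retried.
        return None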
I'm trying to implement a fire-and-forget mechanism using FastAPI, and I'm facing a few difficulties.
I have two applications: one is developed with FastAPI and the other with Flask. The FastAPI app will run in AWS Lambda and send requests to the Flask app running on AWS ECS.
Currently, I am able to send a request to the Flask API and receive an immediate response from the FastAPI app. But I can see FastAPI still running bg_tasks.add_task(make_request, request) in the background, which times out after the Lambda execution threshold (15 mins).
FastAPI application:
from typing import Dict

import requests
from fastapi import APIRouter, BackgroundTasks

router = APIRouter()
root_url = "http://localhost:5000/"

def make_request(data):
    """
    Function to make a post request to flask application
    :param data: Data from the user to write into the file
    :return: None
    """
    print("***** Inside post *****")
    requests.post(url=root_url, data=data)
    print("***** Post completed *****")

@router.post("/write-to-file")
async def write_to_file(request: Dict, bg_tasks: BackgroundTasks):
    """
    Function to queue the requests and return to the post function
    :param request: Request from the user
    :param bg_tasks: Background task instance
    :return: Some message
    """
    print(f"****** Request call started ******")
    bg_tasks.add_task(make_request, request)
    print(f"****** Request completed ******")
    return {"Message": "Data will be written into the file"}
Flask Application:
import json
import time
from datetime import datetime

from flask import Flask, request

app = Flask(__name__)

@app.route('/', methods=['POST'])
def write():
    """
    Function to write the request data into the file
    :return:
    """
    request_data = request.form
    try:
        print(f"Sleep time {int(request_data['sleep_time'])}")
        time.sleep(int(request_data["sleep_time"]))
        request_data = dict(request_data)
        request_data['current_time'] = str(datetime.now())
        with open("data.txt", "a") as f:
            f.write("\n")
            f.write(json.dumps(request_data, indent=4))
        return {"Message": "Success"}
    except Exception as e:
        return {"Message": str(e)}
FastAPI (http://localhost:8000/write-to-file/) calls the write_to_file method, which adds the task (request) to the background queue and runs it in the background.
The function does not wait for the task to complete; it returns the response to the client right away. The make_request method then triggers the Flask endpoint (http://localhost:5000/), which processes the request and writes to a file. Consider make_request as running inside one AWS Lambda: if the Flask application takes hours to process, the Lambda waits just as long.
Is it possible to kill the Lambda once the request is published, or to do something else to solve the timeout issue?
With the current setup, your Lambda runs for as long as the Flask endpoint needs to process your request. Effectively, both APIs run for exactly the same time.
This is because requests.post in the Lambda function must wait for the response to finish. Given that you don't care about the result of that response, I can think of several other ways to solve this.
If I were you, I would move the queue processing to the ECS side. Then the Lambda would only be responsible for putting a job into a queue that the ECS worker processes when it has capacity.
This option would also let you get rid of one of the APIs: you could either query the Flask API directly and kill the Lambda, or instead kill the Flask API and run a worker process on ECS.
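A sketch of that first option, assuming an SQS queue that the ECS worker polls (the queue URL is a placeholder):

import json

import boto3

sqs = boto3.client('sqs')
QUEUE_URL = 'https://sqs.us-east-1.amazonaws.com/123456789012/jobs'  # placeholder

def lambda_handler(event, context):
    # Enqueue the job and return immediately; the ECS worker
    # picks it up whenever it has capacity.
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(event))
    return {'statusCode': 202, 'body': 'Job queued'}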
Alternatively, you could respond early on the Flask API side, which would finish your HTTP request, and thus the Lambda execution, sooner. This can be confusing to set up and defeats the purpose of exposing an HTTP API in the first place. Also, under some circumstances, the Flask request could be terminated by the web server after a default timeout (~30 seconds).
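One way to sketch that early response, swapping the blocking work for a background thread in Flask (a simplification of the write() handler above, not a drop-in replacement):

import threading

from flask import Flask, request

app = Flask(__name__)

def process(data):
    # The slow work from the original write() handler goes here.
    ...

@app.route('/', methods=['POST'])
def write():
    data = dict(request.form)
    # Return immediately; the HTTP request (and with it the calling
    # Lambda) finishes while the thread keeps working.
    threading.Thread(target=process, args=(data,), daemon=True).start()
    return {"Message": "Accepted"}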
And finally, if you really, really want to leave your code as it is, you could set the request to time out after a short period. If you go this route, make sure to choose a timeout long enough for Flask to start processing the request:
try:
    requests.post(url=root_url, data=data, timeout=5)  # throw after 5 seconds of waiting
except requests.exceptions.Timeout:
    pass
I created an Azure Service Bus Queue Trigger Python Function with Visual Studio Code and I would like to return the message to the Service Bus Queue if my code fails.
import logging

import requests
import azure.functions as func
from requests.exceptions import HTTPError

def main(msg: func.ServiceBusMessage):
    message = msg.get_body().decode("utf-8")
    url = "http://..."
    # My code
    try:
        requests.post(url=url, params=message)
    except Exception as error:
        logging.error(error)
        # RETURN MESSAGE TO QUEUE HERE
I found some info about methods called unlock() and abandon(), but I don't know how to implement them. Here are the links to those docs:
unlock: https://learn.microsoft.com/en-us/azure/service-bus-messaging/service-bus-python-how-to-use-queues#handle-application-crashes-and-unreadable-messages
abandon: https://learn.microsoft.com/en-us/python/api/azure-servicebus/azure.servicebus.common.message.deferredmessage?view=azure-python#abandon--
I also found that the message would automatically be returned to the queue if the function fails; but then, should I write a raise ... to throw an Exception in the function?
Also, is there a way to return the message to the queue and set a schedule to retry it later?
The Service Bus trigger for Python functions runs in PeekLock mode, so you don't have to call the unlock() and abandon() methods yourself. Check the PeekLock behavior description:
The Functions runtime receives a message in PeekLock mode. It calls Complete on the message if the function finishes successfully, or calls Abandon if the function fails. If the function runs longer than the PeekLock timeout, the lock is automatically renewed as long as the function is running.
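In other words, a minimal sketch of the handler from the question: just re-raise after logging, and the runtime abandons the message so it returns to the queue.

import logging

import requests
import azure.functions as func

def main(msg: func.ServiceBusMessage):
    message = msg.get_body().decode("utf-8")
    url = "http://..."
    try:
        requests.post(url=url, params=message)
    except Exception as error:
        logging.error(error)
        # Re-raising fails the function, so the runtime calls
        # Abandon and the message goes back to the queue.
        raise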
I've made Lambda functions before, but not in Python. I know that in JavaScript, Lambda supports the handler function being asynchronous, but I get an error if I try it in Python.
Here is the code I am trying to test:
async def handler(event, context):
    print(str(event))
    return {
        'message': 'OK'
    }
And this is the error I get:
An error occurred during JSON serialization of response: <coroutine object handler at 0x7f63a2d20308> is not JSON serializable
Traceback (most recent call last):
File "/var/lang/lib/python3.6/json/__init__.py", line 238, in dumps
**kw).encode(obj)
File "/var/lang/lib/python3.6/json/encoder.py", line 199, in encode
chunks = self.iterencode(o, _one_shot=True)
File "/var/lang/lib/python3.6/json/encoder.py", line 257, in iterencode
return _iterencode(o, 0)
File "/var/runtime/awslambda/bootstrap.py", line 149, in decimal_serializer
raise TypeError(repr(o) + " is not JSON serializable")
TypeError: <coroutine object handler at 0x7f63a2d20308> is not JSON serializable
/var/runtime/awslambda/bootstrap.py:312: RuntimeWarning: coroutine 'handler' was never awaited
errortype, result, fatal = report_fault(invokeid, e)
EDIT 2021:
Since this question seems to be gaining traction, I assume people are coming here trying to figure out how to get async to work with AWS Lambda, as I was. The bad news is that even now, more than a year later, there still isn't any support from AWS for an asynchronous handler in a Python-based Lambda function. (I have no idea why, as Node.js-based Lambda functions handle it perfectly fine.)
The good news is that since Python 3.7, there is a simple workaround in the form of asyncio.run:
import asyncio

def lambda_handler(event, context):
    # Use asyncio.run to synchronously "await" an async function
    result = asyncio.run(async_handler(event, context))
    return {
        'statusCode': 200,
        'body': result
    }

async def async_handler(event, context):
    # Put your asynchronous code here
    await asyncio.sleep(1)
    return 'Success'
Note: The selected answer says that using asyncio.run is not the proper way of starting an asynchronous task in Lambda. In general, they are correct because if some other resource in your Lambda code creates an event loop (a database/HTTP client, etc.), it's wasteful to create another loop and it's better to operate on the existing loop using asyncio.get_event_loop.
However, if an event loop does not yet exist when your code begins running, asyncio.run becomes the only (simple) course of action.
Not at all. Async Python handlers are not supported by AWS Lambda.
If you need to use async/await functionality in your AWS Lambda, you have to define an async function in your code (either in Lambda files or a Lambda Layer) and call asyncio.get_event_loop().run_until_complete(your_async_handler()) inside your sync handler.
Please note that asyncio.run (introduced in Python 3.7) is not a proper way to call an async handler in the AWS Lambda execution environment, since Lambda tries to reuse the execution context for subsequent invocations. The problem is that asyncio.run creates a new event loop and closes the previous one. If you have opened any resources or created coroutines attached to the event loop closed by a previous Lambda invocation, you will get an «Event loop closed» error. asyncio.get_event_loop().run_until_complete allows you to reuse the same loop. See the related StackOverflow question.
AWS Lambda documentation misleads its readers a little by introducing synchronous and asynchronous invocations. Do not mix them up with sync/async Python functions. Synchronous refers to invoking AWS Lambda and waiting for the result (a blocking operation): the function is called immediately and you get the response as soon as possible. With an asynchronous invocation, you ask Lambda to schedule the function execution and do not wait for the response at all. When the time comes, Lambda will still call the handler function synchronously.
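To make the distinction concrete, a sketch of both invocation types with boto3 (the function name is a placeholder):

import json

import boto3

lambda_client = boto3.client('lambda')

# Synchronous invocation: blocks until the handler returns its result.
lambda_client.invoke(
    FunctionName='my-function',  # placeholder
    InvocationType='RequestResponse',
    Payload=json.dumps({'key': 'value'}),
)

# Asynchronous invocation: Lambda queues the event and returns
# immediately; the handler itself still runs synchronously later.
lambda_client.invoke(
    FunctionName='my-function',  # placeholder
    InvocationType='Event',
    Payload=json.dumps({'key': 'value'}),
)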
Don't use the run() method; call run_until_complete() instead:
import json
import asyncio

async def my_async_method():
    # Your asynchronous work goes here
    await some_async_functionality()

def lambda_handler(event, context):
    # Reuse the execution context's event loop instead of creating
    # a new one with asyncio.run
    loop = asyncio.get_event_loop()
    result = loop.run_until_complete(my_async_method())
    return {
        'statusCode': 200,
        'body': json.dumps('Hello Lambda')
    }
I created a batch delayed HTTP (async) client which triggers multiple async HTTP requests and, most importantly, delays the start of the requests so that, for example, 100 requests are not triggered all at once.
But it has an issue. The HTTP .fetch() method takes a handleMethod parameter which handles the response, but I found out that if the delay (sleep) after the fetch isn't long enough, the handle method is never triggered (maybe the request is killed in the meantime?).
It is probably related to the .run_sync method. How can I fix that? I want to keep the delays but avoid this issue.
I need to parse the response regardless of how long the request takes, and regardless of the following sleep call (that call has another purpose, as I said, and should not be related to response handling at all).
from tornado import gen, httpclient, ioloop

class BatchDelayedHttpClient:
    def __init__(self, requestList):
        # class members
        self.httpClient = httpclient.AsyncHTTPClient()
        self.requestList = requestList
        ioloop.IOLoop.current().run_sync(self.execute)

    @gen.coroutine
    def execute(self):
        print("exec start")
        for request in self.requestList:
            print("requesting " + request["url"])
            self.httpClient.fetch(request["url"], request["handleMethod"],
                                  method=request["method"], headers=request["headers"],
                                  body=request["body"])
            yield gen.sleep(request["sleep"])
        print("exec end")