bottle.py stalls when client disconnects - python

I have a python server written with bottle. When I access the server from a website using Ajax, and then close the website before the server can send its response, the server gets stuck trying to send the response to a destination that no longer exists. When this happens, the server becomes unresponsive to any requests for about 10 seconds, before resuming normal operations.
How can I prevent this? I would like bottle to immediately stop trying if the website that made the request no longer exists.
I start the server like this:
bottle.run(host='localhost', port=port_to_listen_to, quiet=True)
and the only URL exposed by the server is this:
@bottle.route('/', method='POST')
def main_server_input():
    request_data = bottle.request.forms['request_data']
    request_data = json.loads(request_data)
    try:
        response_data = process_message_from_scenario(request_data)
    except:
        error_message = utilities.get_error_message_details()
        error_message = "Exception during processing of command:\n%s" % (error_message,)
        print(error_message)
        response_data = {
            'success': False,
            'error_message': error_message,
        }
    return json.dumps(response_data)

Is process_message_from_scenario a long-running function? (Say, 10 seconds?)
If so, your one-and-only server thread will be tied up in that function, and no subsequent requests will be serviced during that time. Have you tried running a concurrent server, like gevent? Try this:
bottle.run(host='localhost', port=port_to_listen_to, quiet=True, server='gevent')
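For reference, a minimal self-contained sketch of that setup (assuming bottle and gevent are installed; the handler body is a stand-in for your process_message_from_scenario logic). The monkey patching is only needed if the handler makes blocking calls that should yield to other greenlets:
from gevent import monkey
monkey.patch_all()  # patch blocking stdlib calls so greenlets can yield to each other

import json
import bottle

@bottle.route('/', method='POST')
def main_server_input():
    # Stand-in handler: parse the payload and echo it back
    request_data = json.loads(bottle.request.forms['request_data'])
    return json.dumps({'success': True, 'echo': request_data})

bottle.run(host='localhost', port=8080, quiet=True, server='gevent')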

Python POST request gives WinError 10053 error

I'm trying to make a POST request to a localhost address I've set up inside the same Python file, but I am getting the error ConnectionAbortedError: [WinError 10053] An established connection was aborted by the software in your host machine.
Here's my code:
@app.route('/callback_work/', methods=['POST'])
async def callback_work():
    content_type = request.headers.get('content-type')
    if content_type == 'application/json':
        request_json = await request.get_json()
        print(request_json)
        return 'Callback done'
    else:
        return 'Content-Type not supported!'

async def capture_callback(request_json):
    requests.post('http://localhost:5000/callback_work/',
                  json=request_json, timeout=2,
                  headers={"Content-Type": "application/json"})
I am already providing the request_json through another function and I know it's valid and it exists. Also, I've been sending POST requests through Postman all of this time and everything was working fine. The timeout argument is there as a precaution since I was executing the script without it and it never stopped waiting for the POST request to be executed.
Do you think it's a problem that the function that handles the POST request and the function that makes it are in the same file?
The requests module is not async-aware: even inside an async function it blocks the event loop. What happened in your case is that your POST request blocks the loop, so your callback handler never gets a chance to respond.
You have two general options:
use an async-compatible HTTP client such as aiohttp (see the sketch below)
use multiple processes, either by running multiple scripts or via multiprocessing
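A minimal sketch of the aiohttp option (assuming aiohttp is installed; the URL and timeout are taken from your code):
import aiohttp

async def capture_callback(request_json):
    # Non-blocking POST: the event loop stays free to serve /callback_work/
    async with aiohttp.ClientSession() as session:
        async with session.post('http://localhost:5000/callback_work/',
                                json=request_json,
                                timeout=aiohttp.ClientTimeout(total=2)) as resp:
            return await resp.text()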

AWS lambda does not finish execution when response is sent back to client

I'm trying to implement a fire-and-forget mechanism using FastAPI, and I'm facing a few difficulties.
I have two applications. One is developed with FastAPI and the other with Flask. The FastAPI app will run in AWS Lambda and send requests to the Flask app running on AWS ECS.
Currently, I am able to send a request to the Flask API and receive an immediate response from the FastAPI app. But I see FastAPI still running bg_tasks.add_task(make_request, request) in the background, which times out after the Lambda execution limit (15 minutes).
FastAPI application:
def make_request(data):
    """
    Function to make a post request to flask application
    :param data: Data from the user to write into the file
    :return: None
    """
    print("***** Inside post *****")
    requests.post(url=root_url, data=data)
    print("***** Post completed *****")

@router.post("/write-to-file")
async def write_to_file(request: Dict, bg_tasks: BackgroundTasks):
    """
    Function to queue the requests and return to the post function
    :param request: Request from the user
    :param bg_tasks: Background task instance
    :return: Some message
    """
    print(f"****** Request call started ******")
    bg_tasks.add_task(make_request, request)
    print(f"****** Request completed ******")
    return {"Message": "Data will be written into the file"}
Flask Application:
@app.route('/', methods=['POST'])
def write():
    """
    Function to write the request data into the file
    :return:
    """
    request_data = request.form
    try:
        print(f"Sleep time {int(request_data['sleep_time'])}")
        time.sleep(int(request_data["sleep_time"]))
        request_data = dict(request_data)
        request_data['current_time'] = str(datetime.now())
        with open("data.txt", "a") as f:
            f.write("\n")
            f.write(json.dumps(request_data, indent=4))
        return {"Message": "Success"}
    except Exception as e:
        return {"Message": str(e)}
FastAPI (http://localhost:8000/write-to-file/) calls the write_to_file method, which adds each request to the background queue and runs it in the background.
The function does not wait for the task to complete; it returns the response to the client immediately. make_request then calls the Flask endpoint (http://localhost:5000/), which processes the request and writes to a file. Since make_request runs inside the Lambda, if the Flask application takes hours to process, the Lambda waits just as long.
Is it possible to kill the Lambda once the request is published, or to do something else to solve the timeout issue?
With the current setup, your Lambda would run for as long as the Flask endpoint requires to process your request. Effectively, both APIs run for exactly the same time.
This is because requests.post in the Lambda function must wait for the response to finish. Given that you don't care about the result of that response, I can think of several other ways to solve this.
If I were you, I would move the queue processing to the ECS side. Then the Lambda would only be responsible for putting a job into a queue that the ECS worker processes when it has capacity.
This option would let you get rid of one of the APIs: you would be able to query the Flask API directly and kill the lambda function, or instead kill the Flask API and run a worker process on ECS.
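As a minimal sketch of the queue idea, assuming SQS as the queue and boto3 as the client (the queue URL and handler name are hypothetical):
import json
import boto3

sqs = boto3.client('sqs')
QUEUE_URL = 'https://sqs.us-east-1.amazonaws.com/123456789012/write-jobs'  # hypothetical

def handler(event, context):
    # Enqueue the job and return immediately; an ECS worker drains the queue
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(event))
    return {'Message': 'Data will be written into the file'}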
Alternatively, you could respond early on the Flask API side, which would finish your HTTP request, and thus the Lambda execution, sooner. This can be confusing to set up and defeats the purpose of exposing an HTTP API in the first place. Also, under some circumstances, the Flask request execution could be terminated by the webserver after a default timeout (~30 seconds).
And finally, in case you really want to leave your code as it is, you could set the request to time out after a short period. If you go this route, make sure to choose a timeout long enough for Flask to start processing the request:
try:
    requests.post(url=root_url, data=data, timeout=5)  # throw after 5 seconds of waiting
except requests.exceptions.Timeout:
    pass

Flask redirect from a child process - make a waiting page using only Python

Today I am trying to make a "waiting page" using Flask.
I mean: a client makes a request, I want to show them a page like "wait, the process can take a few minutes", and when the process ends on the server, display the result. I want to display "wait" before my manageBill.teste function runs, but redirect only works when the function has returned, right?
@application.route('/teste', methods=['POST', 'GET'])
def test_conf():
    if request.method == 'POST':
        if request.form.get('confList') != None:
            conf_file = request.form.get('confList')
            username = request.form.get('username')
            password = request.form.get('password')
            date = request.form.get('date')
            if date == '' or conf_file == '' or username == '' or password == '':
                return "You forget to provide information"
            newpid = os.fork()
            if newpid == 0:  # in child process
                print('A new child ', os.getpid())
                error = manageBill.teste(conf_file, username, password, date)
                print("Error :" + error)
                return redirect('/tmp/' + error)
            else:  # in parent process
                return redirect('/tmp/wait')
        return error
    return manageBill.manageTest()
My /tmp route:
@application.route('/tmp/<wait>')
def wait_teste(wait):
    return "The procces can take few minute, you will be redirected when the teste is done.<br>" + wait
If you are using the built-in WSGI development server (the default), requests are handled by threads, which is likely incompatible with forking.
But even if it wasn't, you have another fundamental issue. A single request can only produce a single response. Once you return redirect('/tmp/wait') that request is done. Over. You can't send anything else.
To support such a feature you have a few choices:
The most common approach is to have AJAX make the request that starts the long-running process. Then set up an /is_done endpoint that the page checks (via AJAX) periodically; this is called polling. Once the endpoint reports that the work is done, you can update the page (either with JS or by redirecting to a new page). A sketch of this approach follows this list.
Have /is_done be a page instead of an API endpoint queried from JS. Set an HTTP refresh on it (with some short timeout like 10 seconds). Then your server can send a redirect for the /is_done endpoint to the results page once the task finishes.
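A minimal sketch of the polling approach (the /start and /is_done routes are illustrative, and the in-process thread and dict stand in for a proper task queue and result store):
import threading
import time
import uuid
from flask import Flask, jsonify

app = Flask(__name__)
results = {}  # task_id -> result; use a real queue (Celery, RQ) in production

def long_task(task_id):
    time.sleep(60)  # stand-in for manageBill.teste(...)
    results[task_id] = 'done'

@app.route('/start', methods=['POST'])
def start():
    task_id = str(uuid.uuid4())
    threading.Thread(target=long_task, args=(task_id,), daemon=True).start()
    return jsonify({'task_id': task_id})

@app.route('/is_done/<task_id>')
def is_done(task_id):
    # The client polls this endpoint until it reports done
    return jsonify({'done': task_id in results})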
Generally you should strive to serve web requests as quickly as possible. You shouldn't leave connections open (to wait for a long task to finish) and you should offload these long running tasks to a queue system running separately from the web process. In this way, you can scale your ability to handle web requests and background processes separately (and one failing does not bring the other down).

Server doesn't respond for 7 seconds while a method executes

I have a function named validate_account() that returns a boolean. It goes to the db and does some manipulation that takes about 7 seconds. While it runs, the server doesn't respond to any other request for those 7 seconds. How can I fix this? Maybe by starting a new process?
@login_required
@csrf_protect
def check_account(request):
    username = request.session['current_account']
    account = get_object_or_404(Account, username=username)
    # takes 7 seconds
    login_status = validate_account(account.username, account.password)
    response = {
        'loginStatus': login_status
    }
    return JsonResponse(response)
I am running the server as python manage.py runserver --nothreading --noreload
The --nothreading option disables multithreading, so you will only have one thread responding to requests. Since each thread handles requests synchronously, this causes the exact behaviour you describe.
Simply remove the --nothreading option, and multithreading will allow the server to respond to multiple requests at the same time. In production, you should likewise run your WSGI server with multiple threads and/or processes.
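For example, with Gunicorn (assuming it is installed; the module path myproject.wsgi is hypothetical), four worker processes with two threads each:
gunicorn --workers 4 --threads 2 myproject.wsgi:application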

How to handle multiple clients asynchronously on a web socket using Python?

I'm building a visual traceroute application. The traceroute itself is done in Python code, and the results are sent to the HTML page in real time over a WebSocket. I need to do long polling (the server receives one request, processes it, and sends a maximum of 30 replies to each client at regular or irregular intervals), as well as handle multiple clients. I adapted the code below, which I found in the Asynchronous Bottle Framework documentation, to work for my application.
from bottle import request, Bottle, abort
app = Bottle()

@app.route('/websocket')
def handle_websocket():
    wsock = request.environ.get('wsgi.websocket')
    if not wsock:
        abort(400, 'Expected WebSocket request.')
    while True:
        try:
            message = wsock.receive()
            wsock.send("Your message was: %r" % message)
        except WebSocketError:
            break

from gevent.pywsgi import WSGIServer
from geventwebsocket import WebSocketHandler, WebSocketError
server = WSGIServer(("0.0.0.0", 8080), app,
                    handler_class=WebSocketHandler)
server.serve_forever()
It does work for a single request. When I issue a second one, wsock.send() fails with a "socket dead" error. Could someone guide me on how to handle multiple clients as well? For instance, should I spawn a different process for each client? And what if a client requests a trace for one domain and then (before the full result is provided to him) requests another? Thanks in advance.
Client side code :
<script type="text/javascript">
    var ws = new WebSocket("ws://example.com:8080/websocket");
    ws.onopen = function() {
        ws.send("Hello, world");
    };
    ws.onmessage = function (evt) {
        alert(evt.data);
    };
</script>
