I have an API written with Python Flask running on Bluemix. Whenever I send it a request and the API takes more than 120 seconds to respond, the request times out. Instead of the expected response, it returns the following error: 500 Error: Failed to establish a backside connection.
I need it to be able to process longer-running requests as well. Is there any way to extend the timeout value, or is there a workaround for this issue?
All Bluemix traffic goes through the IBM WebSphere® DataPower® SOA Appliances, which provide reverse proxy, SSL termination, and load balancing functions. For security reasons DataPower closes inactive connections after 2 minutes.
This is not configurable (as it affects all Bluemix users), so the only solution for your scenario is to change your program to make sure the connection is not idle for more than 2 minutes.
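One common workaround is to stream the response: kick the long job off in a background thread and emit a small heartbeat chunk every so often so the connection is never idle for 2 minutes. A minimal sketch, assuming Flask; work() and the 30-second heartbeat interval are placeholders:

import json
import time
from concurrent.futures import ThreadPoolExecutor
from flask import Flask, Response

app = Flask(__name__)
executor = ThreadPoolExecutor(max_workers=4)

def work():
    time.sleep(300)                 # placeholder for the long-running job
    return {"status": "done"}

@app.route("/slow")
def slow():
    def generate():
        future = executor.submit(work)
        while not future.done():
            yield " "               # heartbeat chunk keeps the connection from going idle
            time.sleep(30)
        yield json.dumps(future.result())
    return Response(generate(), mimetype="application/json")

Since JSON parsers ignore leading whitespace, the heartbeat chunks are harmless to most clients. The alternative is to return a job ID immediately and let the client poll a separate status endpoint.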
Related
I'm load testing a Spring Boot API, a POC to show the team it will handle high throughput. I'm making the requests with a Python script that uses a multiprocessing pool. When I start sending more than about 10,000 records, I get a "Max retries exceeded" error, which I've determined means the endpoint is refusing the connection because the client is opening too many connections.
Is there a Tomcat setting to allow more requests from a client (temporarily) for something like load testing? I tried setting "server.tomcat.max-threads" in the application.properties file, but that doesn't seem to help.
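For reference, the connection limits that Spring Boot exposes for the embedded Tomcat can be raised in application.properties. A sketch with arbitrary example values (on Spring Boot 2.3+ the thread-pool property is server.tomcat.threads.max; older versions use server.tomcat.max-threads):

server.tomcat.threads.max=400
server.tomcat.max-connections=10000
server.tomcat.accept-count=500

max-connections caps how many sockets Tomcat keeps open at once, and accept-count is the OS backlog for connections beyond that, so refused connections during a burst are usually governed by those two rather than by the thread pool size.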
I recently started working with Yaskawa's OPC UA Server provided on its robot's controller.
I'm connecting to the server via Python's OPCUA library. Everything works well, but when my code crashes or I close the terminal without disconnecting from the server, I cannot connect to it again.
I receive an error from the library, saying:
The server has reached its maximum number of sessions.
And the only way to solve this is to restart the controller by turning it off and on again.
The server's documentation says that the maximum number of sessions is 2.
Is there a way to clear the connection to the server without restarting the machine?
The server keeps track of the client session and doesn't know that your client crashed.
But the client can define a short enough SessionTimeout, after which the server can remove the crashed session.
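For example, with the python-opcua library the requested session timeout is an attribute on the Client; a minimal sketch, assuming that library (the endpoint URL and the 30-second value are placeholders):

from opcua import Client

client = Client("opc.tcp://192.168.1.10:4840")
client.session_timeout = 30000      # requested session timeout, in milliseconds
client.connect()
try:
    pass                            # read/write nodes here
finally:
    client.disconnect()             # always try to close the session cleanly

With a short requested timeout, a crashed client's session should be evicted by the server soon after it stops renewing it, freeing one of the two slots.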
The server may also have some custom configuration where you can define the maximum number of sessions it supports. Two sessions is very restrictive, but if the hardware is constrained, that may be the best you can get. See the product documentation.
I have a Python web server (Django).
It talks to other services such as Elasticsearch.
I notice when elasticsearch goes down, the web server soon (after a few minutes) stops responding to client requests.
I use https://github.com/elastic/elasticsearch-py, and it implements timeouts, so I don't think it's blocking.
My hunch is that requests pile up during the timeout period and the server becomes unavailable, but that's just a guess.
What's the reason for the server not being able to handle requests in such a scenario and how do I fix it?
I have an nginx - uWSGI - Django setup on Unix (Amazon ECS), if that makes a difference.
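If the hunch about workers blocking during the timeout is right, one mitigation is to make the client fail fast; a minimal sketch, assuming an elasticsearch-py 7.x-style client (host, index and values are placeholders):

from elasticsearch import Elasticsearch

es = Elasticsearch(
    ["http://elasticsearch:9200"],
    timeout=2,                      # default per-request timeout in seconds
    max_retries=0,                  # fail fast instead of retrying a dead node
    retry_on_timeout=False,
)
es.search(index="logs", body={"query": {"match_all": {}}}, request_timeout=2)

Even with short timeouts, each blocked request still occupies a uWSGI worker for that long, so the worker count and uWSGI's own harakiri/timeout settings matter too.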
I have a Flask API running on API Gateway and Lambda Functions, where my Lambda Functions are configured to run in my VPC.
Normal duration for my Lambda Function should be about 3 seconds, but sometimes it spikes to 130 seconds or more, which causes my API Gateway to return a 504.
The Lambda Function makes a GET request using the requests library:
url = base_url + endpoint
req = requests.get(url, headers=headers)
response = json.loads(req.content.decode('utf-8'))
CloudWatch shows the following error on the request that times out:
requests.exceptions.ConnectionError: HTTPSConnectionPool(host='host', port=port): Max retries exceeded with url: /foo/bar (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at foo>: Failed to establish a new connection: [Errno 110] Connection timed out'))
Most of the posts I have read refer to an incorrectly configured Lambda Function running in a private subnet, but I know that is not my issue since my functions have access to the internet.
My other theory is that a session is getting reused on the function's underlying container, which is causing a timeout.
Thanks for your help in advance!
Since you've set your Lambda to run in your VPC, it's quite possible that a Network ACL is not allowing the traffic.
The behavior you describe would be consistent with an ephemeral port being blocked: the traffic eventually times out, leading to seemingly random spikes in Lambda runtime and failures for no apparent reason.
I'm not even sure it would be apparent from VPC flow logs what happened, since it was the ephemeral port that was blocked, not the reserved port, but I'll have to double-check that.
AWS uses ephemeral ports 1024 - 65535. I would take a look at the Network ACLs and double-check that those ports are allowed.
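If a rule does turn out to be missing, an inbound allow entry for the ephemeral return traffic can be added with boto3; a hedged sketch (the ACL ID, rule number and CIDR are placeholders):

import boto3

ec2 = boto3.client("ec2")
ec2.create_network_acl_entry(
    NetworkAclId="acl-0123456789abcdef0",   # placeholder ACL ID
    RuleNumber=120,
    Protocol="6",                           # TCP
    RuleAction="allow",
    Egress=False,                           # inbound rule for returning traffic
    CidrBlock="0.0.0.0/0",
    PortRange={"From": 1024, "To": 65535},
)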
It is quite possible that the connection created during the first invocation is being reused when the Lambda container is reused for subsequent requests. That connection may have been terminated by the server, but the Lambda container is not aware of it, so it tries to make the new request on the stale connection until it times out.
Possible ways to avoid this:
1. Manage connections appropriately
Create the connection outside the Lambda handler function, but handle errors on it inside the handler. Also, consider closing the connection at the end of execution if your function is not invoked very frequently.
2. Set a timeout
Every SDK lets you set a timeout for establishing a connection; the requests library has no default, so a request on a stale connection can hang until the operating system's own TCP connect timeout kicks in. Try setting an explicit timeout on your request (something low enough that Lambda itself doesn't time out first), and if an error occurs, catch it and create a new connection; see the sketch after this list. Read more about this here.
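A minimal sketch combining both points, assuming the requests library (the URL and timeout values are placeholders):

import json
import requests

session = requests.Session()        # created once per container, outside the handler

def handler(event, context):
    global session
    url = "https://example.com/foo/bar"             # placeholder
    try:
        resp = session.get(url, timeout=(3, 10))    # (connect, read) timeouts well below the Lambda limit
    except requests.exceptions.ConnectionError:
        session.close()                             # drop any stale pooled connection
        session = requests.Session()                # retry once on a fresh session
        resp = session.get(url, timeout=(3, 10))
    return json.loads(resp.content.decode("utf-8"))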
I have a Django server running in Elastic Beanstalk, and I am not sure whether the process continues to run on the server or gets killed when the client connection times out. Does anyone have any insight on this? There is no application logic to stop the request in case of a disconnection. Would Elastic Beanstalk kill the process along with the client connection, or will the process continue to run regardless of the timeout?
A 504 Gateway Timeout means the client trying to access the server doesn't get a response in a certain amount of time. According to the AWS documentation:
Description: Indicates that the load balancer closed a connection because a request did not complete within the idle timeout period.
This means the 504 response you see in your browser (or other client) when accessing your Django app is generated by the Elastic Load Balancer sitting in front of your server, after it closes the connection. Since the ELB is an external networking component with no control over your server, it cannot affect your code or which processes are running. In other words, the process keeps running until it tries to return an HTTP response, at which point it fails because the connection has already been closed.