My nginx configuration looks like this:
server {
    listen 80 so_keepalive=30m::;

    location /wsgi {
        uwsgi_pass uwsgicluster;
        include uwsgi_params;
        uwsgi_read_timeout 30000;
        uwsgi_buffering off;
    }

    ...
}
In my Python code:
def application_(environ, start_response):
    body = queue.Queue()  # presumably a gevent queue, which can be iterated as the WSGI response body
    ...
    gevent.spawn(redis_wait, environ, body, channels)
    return body

def redis_wait(environ, body, channels):
    server = redis.Redis(connection_pool=REDIS_CONNECTION_POOL)
    client = server.pubsub()
    try:
        for channel in channels:
            client.subscribe(channel)
        messages = client.listen()
        for message in messages:
            if message['type'] != 'message' and message['type'] != 'pmessage':
                continue
            body.put(message['data'])
    finally:
        client.unsubscribe()
        client.close()
The problem occurs when the client connection is interrupted (network connection abruptly lost, the application terminates, etc.): Redis shows that the connection on the server is still open. Even with so_keepalive, the connection isn't being cleaned up. How do I fix this?
EDIT: I've noticed through the nginx status page that the active connection count does go down after the disconnect. The problem is that uWSGI isn't getting notified of this.
You have to wait on the uwsgi socket as well as the Redis socket, so that you are notified if the uwsgi socket closes. Example here: http://nullege.com/codes/show/src%40k%40o%40kozmic-ci-HEAD%40tailer%40__init__.py/72/uwsgi.connection_fd/python
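Here is a minimal sketch of that idea, adapted to the question's code (reusing redis, gevent, REDIS_CONNECTION_POOL, body, and channels from above). It assumes the code runs inside a uWSGI worker under gevent (so the uwsgi module and uwsgi.connection_fd() are available), a redis-py version that provides pubsub.get_message(), and that the pubsub socket can be reached through the private client.connection._sock attribute:

import gevent.select

def redis_wait(environ, body, channels):
    import uwsgi  # only importable inside a uWSGI worker
    server = redis.Redis(connection_pool=REDIS_CONNECTION_POOL)
    client = server.pubsub()
    try:
        for channel in channels:
            client.subscribe(channel)
        client_fd = uwsgi.connection_fd()             # fd of the nginx <-> uwsgi connection
        redis_fd = client.connection._sock.fileno()   # fd of the pubsub socket (private API, an assumption)
        while True:
            # Block until either socket becomes readable; the client fd turning
            # readable here signals EOF, i.e. nginx closed the upstream connection.
            readable, _, _ = gevent.select.select([client_fd, redis_fd], [], [])
            if client_fd in readable:
                break  # client went away: stop waiting on Redis
            message = client.get_message()
            if message and message['type'] in ('message', 'pmessage'):
                body.put(message['data'])
    finally:
        client.unsubscribe()
        client.close()

With this, a premature client disconnect unblocks the select() immediately instead of leaving the Redis subscription (and its server-side connection) alive indefinitely.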
I am using nginx as a reverse proxy in front of a uWSGI server (Flask apps).
Due to a memory leak, I use --max-requests to reload workers after a set number of requests.
The issue is the following: when a worker has just started or restarted, the first request it receives hangs between uWSGI and nginx. The processing time inside the Flask app is as quick as usual, but the client waits until uwsgi_send_timeout is triggered.
Using tcpdump to watch the request (nginx is XXX.14 and uWSGI is XXX.11; the capture itself is not reproduced here):
You can see in the time column that it hangs for 300 seconds (uwsgi_send_timeout) even though the HTTP request has been received by nginx... uWSGI just doesn't send a [FIN] packet to signal that the connection is closed. nginx then triggers the timeout and closes the session.
The end client receives a truncated response, with a 200 status code, which is very frustrating.
This happens once per worker reload, on the first request, no matter how big the request is.
Does anyone have a workaround for this issue? Have I misconfigured something?
uwsgi.ini
[uwsgi]
# Get the location of the app
module = api:app
plugin = python3
socket = :8000
manage-script-name = true
mount = /=api:app
cache2 = name=xxx,items=1024
# Had to increase buffer-size because of big authentication requests.
buffer-size = 8192
## Workers management
# Number of workers
processes = $(UWSGI_PROCESSES)
master = true
# Number of requests handled by one worker before it is reloaded (reloading is expensive)
max-requests = $(UWSGI_MAX_REQUESTS)
lazy-apps = true
single-interpreter = true
nginx-server.conf
server {
    listen 443 ssl http2;
    client_max_body_size 50M;

    location #api {
        include uwsgi_params;
        uwsgi_pass api:8000;
        uwsgi_read_timeout 300;
        uwsgi_send_timeout 300;
    }
}
For some weird reason, adding the parameter uwsgi_buffering off; to the nginx config fixed the issue.
I still don't understand why, but for now this fixes my issue. If anyone has a valid explanation, don't hesitate.
server {
    listen 443 ssl http2;
    client_max_body_size 50M;

    location #api {
        include uwsgi_params;
        uwsgi_pass api:8000;
        uwsgi_buffering off;
        uwsgi_read_timeout 300;
        uwsgi_send_timeout 300;
    }
}
I have an Autobahn Twisted WebSocket server running in Python which works correctly in a dev VM, but I have been unable to get it working when the server runs in OpenShift.
Here is the shortened code which works for me in the VM:
from autobahn.twisted.websocket import WebSocketServerProtocol, WebSocketServerFactory, listenWS
from autobahn.twisted.resource import WebSocketResource
from twisted.internet import reactor
from twisted.web.server import Site
from twisted.web.static import File

class MyServerProtocol(WebSocketServerProtocol):
    def onConnect(self, request):
        pass  # stuff...

    def onOpen(self):
        pass  # stuff...

    def onMessage(self, payload, isBinary):
        pass  # stuff...

factory = WebSocketServerFactory(u"ws://0.0.0.0:8080")
factory.protocol = MyServerProtocol
resource = WebSocketResource(factory)

root = File(".")
root.putChild(b"ws", resource)
site = Site(root)

reactor.listenTCP(8080, site)
reactor.run()
The connection part of the client is as follows:
var wsuri;
var hostname = window.document.location.hostname;
wsuri = "ws://" + hostname + ":8080/ws";

if ("WebSocket" in window) {
    sock = new WebSocket(wsuri);
} else if ("MozWebSocket" in window) {
    sock = new MozWebSocket(wsuri);
} else {
    log("Browser does not support WebSocket!");
    window.location = "http://autobahn.ws/unsupportedbrowser";
}
The OpenShift configuration is as follows:
1 pod running with app.py listening on port 8080
TLS not enabled
I have a non-TLS route mapping port 8080 to 8080.
Firefox gives the following message in the console:
Firefox can’t establish a connection to the server at ws://openshiftprovidedurl.net:8080/ws.
When I use wscat to connect to the websocket:
wscat -c ws://openshiftprovidedurl.net/ws
I get the following error:
error: Error: unexpected server response (400)
and the application log in openshift shows the following:
2018-04-03 01:14:24+0000 [-] failing WebSocket opening handshake ('missing port in HTTP Host header 'openshiftprovidedurl.net' and server runs on non-standard port 8080 (wss = False)')
2018-04-03 01:14:24+0000 [-] dropping connection to peer tcp4:173.21.2.1:38940 with abort=False: missing port in HTTP Host header 'openshiftprovidedurl.net' and server runs on non-standard port 8080 (wss = False)
2018-04-03 01:14:24+0000 [-] WebSocket connection closed: connection was closed uncleanly (missing port in HTTP Host header 'openshiftprovidedurl.net' and server runs on non-standard port 8080 (wss = False))
Any assistance would be appreciated!
Graham Dumpleton hit the nail on the head, I modified the code from
factory = WebSocketServerFactory(u"ws://0.0.0.0:8080")
to
factory = WebSocketServerFactory(u"ws://0.0.0.0:8080", externalPort=80)
and it corrected the issue. I had to modify my index to point to the correct websocket but I am now able to connect.
Thanks!
Based on the source code of autobahn-python, you can get that message in only two cases.
Here is the implementation:
if not ((self.factory.isSecure and self.factory.externalPort == 443) or
        (not self.factory.isSecure and self.factory.externalPort == 80)):
    return self.failHandshake("missing port in HTTP Host header '%s' and server runs on non-standard port %d (wss = %s)" % (str(self.http_request_host), self.factory.externalPort, self.factory.isSecure))
Since you are presumably using a Deployment + Service (and maybe an Ingress on top of them) for your server, you can bind your server to port 80 instead of 8080 and set that port in the Service, and in the Ingress if you are using one.
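For illustration, a minimal sketch of that change applied to the question's server code (reusing MyServerProtocol from above, and assuming the pod is allowed to bind port 80):

factory = WebSocketServerFactory(u"ws://0.0.0.0:80")
factory.protocol = MyServerProtocol
resource = WebSocketResource(factory)

root = File(".")
root.putChild(b"ws", resource)
site = Site(root)

# Bind port 80 so the externally visible port matches what Autobahn's
# Host-header check expects for ws:// (80; 443 for wss://).
reactor.listenTCP(80, site)
reactor.run()

The externalPort=80 factory argument shown in the accepted answer achieves the same thing when the process itself must keep listening on 8080 behind the route.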
I'm trying to set up a simple Python web server from a tutorial on a Fedora box running Nginx; I want Nginx to reverse proxy the Python server. I must be doing something wrong, though, because when I run the server and attempt to load the page through Nginx, Nginx returns a 502 to the browser and prints the following to the log:
2017/03/16 00:27:59 [error] 10613#0: *5284 connect() failed (111: Connection refused) while connecting to upstream, client: 76.184.187.130, server: tspi.io, request: "GET /leaderboard/index.html HTTP/1.1", upstream: "http://127.0.0.1:8063/leaderboard/index.html", host: "tspi.io"
Here's my Python server:
#!/bin/env python
# with special thanks to the good folks at
# https://fragments.turtlemeat.com/pythonwebserver.php
# who generously taught me how to do all this tonight

import cgi
from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer
from os import curdir, sep

class BaseServer(BaseHTTPRequestHandler):
    def do_GET(self):
        try:
            print('Serving self.path=' + self.path)
            if 'leaderboard' in self.path:
                self.path = self.path[12:]
                print('self.path amended to:' + self.path)
            if self.path == '/':
                self.path = '/index.html'
            if self.path.endswith('.html'):
                # maybe TODO is wrap this in a file IO exception handler
                f_to_open = curdir + sep + self.path
                f = open(f_to_open)
                s = f.read()
                f.close()
                self.send_response(200)
                self.send_header('Content-type', 'text/html')
                self.end_headers()
                self.wfile.write(s)
                return
        except IOError:
            self.send_error(404, 'File Not Found: ' + self.path)

    def do_POST(self):
        try:
            ctype, pdict = cgi.parse_header(self.headers.getheader('content-type'))
            if ctype == 'multipart/form-data':
                query = cgi.parse_multipart(self.rfile, pdict)
            self.send_response(301)
            self.end_headers()
        except:
            pass  # What *do* you do canonically for a failed POST?

def main():
    try:
        server = HTTPServer(('', 8096), BaseServer)
        print('Starting BaseServer.')
        server.serve_forever()
    except KeyboardInterrupt:
        print('Interrupt received; closing server socket')
        server.socket.close()

if __name__ == '__main__':
    main()
And my nginx.conf:
server {
    listen 443 ssl;
    server_name tspi.io;
    keepalive_timeout 70;

    ssl_certificate /etc/letsencrypt/live/tspi.io/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/keys/0000_key-certbot.pem;
    ssl_protocols TLSv1.2;

    location / {
        root /data/www;
    }

    location ~ \.(gif|jpg|png)$ {
        root /data/images;
    }

    location /leaderboard {
        proxy_pass http://localhost:8063;
    }
}
I'm trying to use the proxy_pass to pass any traffic that comes in to tspi.io/leaderboard on to the Python server, while allowing the base html pages that live under /data/www to be served by Nginx normally.
When I google, I see tons of stuff about reverse proxying PHP without PHP-FPM configured correctly, and since I'm not using PHP at all, that seems improbable. I also see stuff about configuring uWSGI, but I have no idea whether that's relevant: I don't know if BaseHTTPServer uses uWSGI, and when I tried looking uWSGI up, it seemed like a whole different set of classes and a whole other way to write a Python server.
Any help would be very much appreciated!
The port numbers are mismatched between your Python code and your nginx reverse proxy config: the Python server listens on port 8096, while nginx proxies to port 8063.
I'd also recommend sending the host and remote address values to your internal application in case the need for them arises.
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
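Putting the two suggestions together, the proxy block would look something like this (a sketch, keeping the Python server on its current port 8096):

location /leaderboard {
    proxy_pass http://localhost:8096;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
}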
I have a setup with nginx, uwsgi, and gevent. When testing the setup's ability to handle premature client disconnects, I found that uwsgi isn't exactly responding in a timely manner.
This is how I detect that a disconnect has occurred inside my Python code:
while True:
    if 'uwsgi' in sys.modules:
        import uwsgi  ## UnresolvedImport
        fileDescriptor = uwsgi.connection_fd()
        if not uwsgi.is_connected(fileDescriptor):
            logger.debug("Connection was lost (client disconnect)")
            break
    gevent.sleep(2)  # avoid hammering the CPU (see below)
So when uwsgi signals a loss of connection, I break out of this loop. There's also a call to gevent.sleep(2) at the bottom of the loop to prevent hammering the CPU.
With that in place, nginx logs the closed connection like this:
2016/08/16 19:23:23 [info] 32452#0: *1 epoll_wait() reported that client prematurely closed connection, so upstream connection is closed too while sending to client, client: 192.168.56.1, server: <removed>, request: "GET /myurl HTTP/1.1", upstream: "uwsgi://127.0.0.1:8070", host: "<removed>:8443"
nginx is immediately aware of the disconnect when it produces this log entry; it's within milliseconds of the client disconnecting. Yet uwsgi doesn't seem to become aware of the disconnect until seconds, sometimes almost a minute, later, at least in terms of notifying my code:
DEBUG - Connection was lost (client disconnect) - 391 ms[08/16/16 19:24:04 UTC])
The uwsgi.log file created via daemonize suggests that uwsgi somehow saw it a second before nginx did, yet waited half a minute to actually tell my code:
[pid: 32208|app: 0|req: 2/2] 192.168.56.1 () {32 vars in 382 bytes} [Tue Aug 16 19:23:22 2016] GET /myurl => generated 141 bytes in 42030 msecs (HTTP/1.1 200) 2 headers in 115 bytes (4 switches on core 999
This is my setup in nginx:
upstream bottle {
    server 127.0.0.1:8070;
}

server {
    listen 8443;
    ssl on;
    ssl_certificate /etc/pki/tls/certs/server.crt;
    ssl_certificate_key /etc/pki/tls/private/server.key;
    server_name <removed>;

    # Load configuration files for the default server block.
    include /etc/nginx/default.d/*.conf;

    location / {
        include uwsgi_params;
        #proxy_read_timeout 5m;
        uwsgi_buffering off;
        uwsgi_ignore_client_abort off;
        proxy_ignore_client_abort off;
        proxy_cache off;
        chunked_transfer_encoding off;
        #uwsgi_read_timeout 5m;
        #uwsgi_send_timeout 5m;
        uwsgi_pass bottle;
    }
}
The odd part to me is that the timestamp from uwsgi says it saw the disconnect right when nginx did, yet it doesn't write that entry until my code sees it ~30 seconds later. From my perspective, uwsgi is essentially lying or locking up, yet I can't find any errors from it.
Any help is appreciated. I've attempted to remove any buffering and delays from nginx without any success.
I am trying to set up a uwsgi-hosted app so that I get graceful reloads with uwsgi --reload, but I am obviously failing. Here is my test uwsgi setup:
[admin2-prod]
http = 127.0.0.1:9090
pyargv = $* --db=prod --base-path=/admin/
max-requests = 3
listen = 1000
http-keepalive = 1
pidfile2 = admin.pid
add-header = Connection: keep-alive
workers = 1
master = true
chdir = .
plugins = python,http,router_static,router_uwsgi,router_http
buffer-size = 8192
pythonpath = admin2
file = admin2/app.py
static-map = /admin/static/=admin2/static/
static-map = /admin/v3/build/=admin2/client/build/
disable-logging = false
http-timeout = 100
(Please note that I ran sysctl net.core.somaxconn=1000 beforehand.)
And here is my test Python script:
import httplib

connection = httplib.HTTPConnection('127.0.0.1', 9090)
connection.connect()

for i in range(0, 1000):
    print 'sending... ', i
    try:
        connection.request('GET', '/x', '', {'Connection': 'keep-alive'})
        response = connection.getresponse()
        d = response.read()
        print '  ', response.status
    except:
        connection = httplib.HTTPConnection('127.0.0.1', 9090)
        connection.connect()
The above client fails during --reload:
sending... 920
Traceback (most recent call last):
  File "./test.py", line 15, in <module>
    connection.connect()
  File "/usr/lib64/python2.7/httplib.py", line 836, in connect
    self.timeout, self.source_address)
  File "/usr/lib64/python2.7/socket.py", line 575, in create_connection
    raise err
socket.error: [Errno 111] Connection refused
From a tcpdump, it looks like uwsgi is indeed accepting the second incoming TCP connection, which happens upon the --reload; the client sends the GET, the server ACKs it at the TCP level, but the connection is finally RSTed by the server before the HTTP response is sent back. So, what am I missing that is needed to make the server queue this incoming connection until it is ready to process it, and get a real graceful reload?
You are managing both the app and the proxy in the same uWSGI instance, so when you reload the stack you are also killing the frontend web server (the one you start with the 'http' option).
You have to split the http router into another uWSGI instance, or use nginx/haproxy or similar. Once you have two different stacks, you can reload the application without closing the socket.
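A minimal sketch of such a split, using two separate uWSGI instances; the option names are real uWSGI options, but the addresses and the http-to wiring are illustrative assumptions adapted from the question's config:

; router.ini -- the front HTTP server; never reloaded together with the app
[uwsgi]
master = true
plugins = http
http = 127.0.0.1:9090
http-to = 127.0.0.1:9091

; app.ini -- the application stack; reload this one freely
[uwsgi]
master = true
plugins = python
socket = 127.0.0.1:9091
pythonpath = admin2
file = admin2/app.py

Clients keep their keep-alive connections to the router, which stays up while the application instance behind it reloads.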
Your exception happens when the uwsgi process can't accept connections, obviously... So your client has to wait until the server has restarted; you can use a loop with a timeout in the except block to handle this situation properly. Try this:
import httplib
import socket
import time

connection = httplib.HTTPConnection('127.0.0.1', 8000)
# connection moved below... connection.connect()

for i in range(0, 1000):
    print 'sending... ', i
    try:
        connection.request('GET', '/x', '', {'Connection': 'keep-alive'})
        response = connection.getresponse()
        d = response.read()
        print '  ', response.status
    except KeyboardInterrupt:
        break
    except socket.error:
        while True:
            try:
                connection = httplib.HTTPConnection('127.0.0.1', 8000)
                connection.connect()
            except socket.error:
                print 'cant connect, will try again in a second...'
                time.sleep(1)
            else:
                break
before restart:
sending... 220
404
sending... 221
404
sending... 222
404
restarting server:
cant connect, will try again in a second...
cant connect, will try again in a second...
cant connect, will try again in a second...
cant connect, will try again in a second...
server up again:
sending... 223
404
sending... 224
404
sending... 225
404
Update for your comment:
"Obviously, in the real world, you can't rewrite the code of all the http clients that connect to your server. My question is: what can I do to get a graceful reload (no failures) for arbitrary clients."
One universal solution that I think can handle such problems with clients is a simple proxy between client and server. With a proxy you can restart the server independently of the clients (this implies that the proxy is always on).
And in fact this is commonly used: 502 (Bad Gateway) errors from web applications' frontend proxies are exactly this situation - the client receives an error from the proxy while the application server is down! Try nginx, varnish or something similar.
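As a concrete example, a minimal nginx front for the setup in the question might look like this (a sketch; the backend port is the one from the question's uWSGI config):

server {
    listen 80;

    location / {
        proxy_pass http://127.0.0.1:9090;
    }
}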
By the way, uwsgi has a builtin "proxy/load-balancer/router" plugin:
The uWSGI FastRouter
For advanced setups uWSGI includes the “fastrouter” plugin, a proxy/load-balancer/router speaking the uwsgi protocol. It is built in by default. You can put it between your webserver and real uWSGI instances to have more control over the routing of HTTP requests to your application servers.
docs here: http://uwsgi-docs.readthedocs.io/en/latest/Fastrouter.html
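For completeness, a sketch of what a fastrouter-based split might look like, using uWSGI's subscription system; the addresses and the example.com routing key are illustrative assumptions:

; fastrouter.ini -- speaks the uwsgi protocol on the frontend side
[uwsgi]
fastrouter = 127.0.0.1:3017
fastrouter-subscription-server = 127.0.0.1:7000

; app.ini -- the application instance announces itself to the router
[uwsgi]
socket = 127.0.0.1:9091
subscribe-to = 127.0.0.1:7000:example.com

Note that the fastrouter speaks the uwsgi protocol, not HTTP, so a web server such as nginx (via uwsgi_pass) would still sit in front of it.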