I'm trying to set up a simple Python web server from a tutorial on a Fedora box running Nginx; I want Nginx to reverse proxy the Python server. I must be doing something wrong, though, because when I run the server and attempt to load the page through Nginx, Nginx returns a 502 to the browser and prints the following to the log:
2017/03/16 00:27:59 [error] 10613#0: *5284 connect() failed (111:
Connection refused) while connecting to upstream, client:
76.184.187.130, server: tspi.io, request: "GET /leaderboard/index.html HTTP/1.1", upstream: "http://127.0.0.1:8063/leaderboard/index.html",
host: "tspi.io"
Here's my python server:
#!/bin/env python

# with special thanks to the good folks at
# https://fragments.turtlemeat.com/pythonwebserver.php
# who generously taught me how to do all this tonight

import cgi
from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer
from os import curdir, sep


class BaseServer(BaseHTTPRequestHandler):
    def do_GET(self):
        try:
            print('Serving self.path=' + self.path)
            if 'leaderboard' in self.path:
                self.path = self.path[12:]
                print('self.path amended to:' + self.path)
            if self.path == '/':
                self.path = '/index.html'
            if self.path.endswith('.html'):
                # maybe TODO is wrap this in a file IO exception handler
                f_to_open = curdir + sep + self.path
                f = open(f_to_open)
                s = f.read()
                f.close()
                self.send_response(200)
                self.send_header('Content-type', 'text/html')
                self.end_headers()
                self.wfile.write(s)
                return
        except IOError:
            self.send_error(404, 'File Not Found: ' + self.path)

    def do_POST(self):
        try:
            ctype, pdict = cgi.parse_header(self.headers.getheader('content-type'))
            if ctype == 'multipart/form-data':
                query = cgi.parse_multipart(self.rfile, pdict)
            self.send_response(301)
            self.end_headers()
        except:
            pass  # What *do* you do canonically for a failed POST?


def main():
    try:
        server = HTTPServer(('', 8096), BaseServer)
        print('Starting BaseServer.')
        server.serve_forever()
    except KeyboardInterrupt:
        print('Interrupt received; closing server socket')
        server.socket.close()


if __name__ == '__main__':
    main()
And my nginx.conf:
server {
    listen 443 ssl;
    server_name tspi.io;
    keepalive_timeout 70;

    ssl_certificate /etc/letsencrypt/live/tspi.io/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/keys/0000_key-certbot.pem;
    ssl_protocols TLSv1.2;

    location / {
        root /data/www;
    }

    location ~ \.(gif|jpg|png)$ {
        root /data/images;
    }

    location /leaderboard {
        proxy_pass http://localhost:8063;
    }
}
I'm trying to use the proxy_pass to pass any traffic that comes in to tspi.io/leaderboard on to the Python server, while allowing the base html pages that live under /data/www to be served by Nginx normally.
When I google, I see lots of posts about reverse-proxied PHP setups where PHP-FPM isn't configured correctly; since I'm not using PHP at all, that seems unlikely to be my problem. I also see posts about configuring uWSGI, and I have no idea whether that's an issue or not. I don't know if BaseHTTPServer uses uWSGI; when I tried looking uWSGI up, it seemed like a whole different set of classes and a whole other way to write a Python server.
Any help would be very much appreciated!
The port numbers are mismatched between your Python code and your Nginx reverse-proxy config: the HTTPServer binds to port 8096, while proxy_pass points at port 8063.
I'd also recommend passing the host and remote address values on to your internal application in case the need for them arises:
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
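For instance, keeping the Python server on 8096 (the port the posted code actually binds), the corrected location block could look something like this sketch:

location /leaderboard {
    proxy_pass http://127.0.0.1:8096;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
}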
Related
I have an Autobahn Twisted WebSocket server running in Python which works correctly in a dev VM, but I have been unable to get it working when the server runs in OpenShift.
Here is the shortened code, which works for me in a VM.
from autobahn.twisted.websocket import WebSocketServerProtocol, WebSocketServerFactory, listenWS
from autobahn.twisted.resource import WebSocketResource
from twisted.web.server import Site
from twisted.web.static import File
from twisted.internet import reactor

class MyServerProtocol(WebSocketServerProtocol):
    def onConnect(self, request):
        stuff...

    def onOpen(self):
        stuff...

    def onMessage(self, payload):
        stuff...

factory = WebSocketServerFactory(u"ws://0.0.0.0:8080")
factory.protocol = MyServerProtocol
resource = WebSocketResource(factory)

root = File(".")
root.putChild(b"ws", resource)
site = Site(root)

reactor.listenTCP(8080, site)
reactor.run()
The connection part of the client is as follows:
var wsuri;
var hostname = window.document.location.hostname;
wsuri = "ws://" + hostname + ":8080/ws";

if ("WebSocket" in window) {
    sock = new WebSocket(wsuri);
} else if ("MozWebSocket" in window) {
    sock = new MozWebSocket(wsuri);
} else {
    log("Browser does not support WebSocket!");
    window.location = "http://autobahn.ws/unsupportedbrowser";
}
The openshift configuration is as follows:
1 pod running with app.py listening on port 8080
tls not enabled
I have a non-tls route 8080 > 8080.
Firefox gives the following message in the console:
Firefox can’t establish a connection to the server at ws://openshiftprovidedurl.net:8080/ws.
When I use wscat to connect to the websocket:
wscat -c ws://openshiftprovidedurl.net/ws
I get the following error:
error: Error: unexpected server response (400)
and the application log in openshift shows the following:
2018-04-03 01:14:24+0000 [-] failing WebSocket opening handshake ('missing port in HTTP Host header 'openshiftprovidedurl.net' and server runs on non-standard port 8080 (wss = False)')
2018-04-03 01:14:24+0000 [-] dropping connection to peer tcp4:173.21.2.1:38940 with abort=False: missing port in HTTP Host header 'openshiftprovidedurl.net' and server runs on non-standard port 8080 (wss = False)
2018-04-03 01:14:24+0000 [-] WebSocket connection closed: connection was closed uncleanly (missing port in HTTP Host header 'openshiftprovidedurl.net' and server runs on non-standard port 8080 (wss = False))
Any assistance would be appreciated!
Graham Dumpleton hit the nail on the head. I modified the code from
factory = WebSocketServerFactory(u"ws://0.0.0.0:8080")
to
factory = WebSocketServerFactory(u"ws://0.0.0.0:8080", externalPort=80)
and it corrected the issue. I had to modify my index to point to the correct websocket but I am now able to connect.
Thanks!
Based on the source code of autobahn-python, you can get that message in only two cases: the server is secure and externalPort is not 443, or the server is not secure and externalPort is not 80.
Here is the implementation:
if not ((self.factory.isSecure and self.factory.externalPort == 443) or
        (not self.factory.isSecure and self.factory.externalPort == 80)):
    return self.failHandshake(
        "missing port in HTTP Host header '%s' and server runs on non-standard port %d (wss = %s)" % (
            str(self.http_request_host), self.factory.externalPort, self.factory.isSecure))
Since I think you are using a Deployment + Service (and maybe an Ingress on top of them) for your server, you can bind your server to port 80 instead of 8080 and set that port in the Service and in the Ingress, if you are using them.
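A minimal sketch of that alternative, assuming the container user is allowed to bind port 80 (if it is not, the externalPort fix from the accepted answer above is the simpler route):

factory = WebSocketServerFactory(u"ws://0.0.0.0:80")
factory.protocol = MyServerProtocol
root = File(".")
root.putChild(b"ws", WebSocketResource(factory))
reactor.listenTCP(80, Site(root))
reactor.run()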
I have a Linux server on which I am running my Flask app like this:
flask run --host=0.0.0.0
From inside the server I can access it like this and get a valid response:
curl http://0.0.0.0:5000/photo
However, when I try to access it from outside the server:
http://my_ip:5000/photo - the connection is refused.
The same IP will return an image saved under public_html with apache2 configured:
http://my_ip/public_html/apple-touch-icon-144x144-precomposed.png
I use this simple snippet to get the IP address of the interface being used:
import socket

def get_ip_address():
    """ get ip-address of interface being used """
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.connect(("8.8.8.8", 80))
    return s.getsockname()[0]

IP = get_ip_address()
And in main:
if __name__ == '__main__':
    app.run(host=IP, port=PORT, debug=False)
And running:
./app.py
* Running on http://10.2.0.41:1443/ (Press CTRL+C to quit)
I have a suspicion you have a firewall on your Linux machine that is blocking port 5000.
Solution 1:
Open the relevant port on your firewall.
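For example, depending on which firewall your machine actually runs (these commands are illustrative, not a prescription for your setup):

# ufw (common on Ubuntu)
sudo ufw allow 5000/tcp

# firewalld (common on Fedora/CentOS)
sudo firewall-cmd --add-port=5000/tcp --permanent
sudo firewall-cmd --reload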
Solution 2:
I would suggest installing nginx as a web proxy and configuring it so that http://my_ip/photo forwards traffic to and from http://127.0.0.1:5000/photo:
server {
    listen 80;

    location /photo {
        proxy_pass http://127.0.0.1:5000/photo;
    }
}
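With a proxy like that in front, the Flask app itself only needs to listen on the loopback interface; a minimal sketch, using the address and port assumed by the proxy_pass above:

app.run(host='127.0.0.1', port=5000)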
Here is a minimal example:
from flask import Flask

app = Flask(__name__)
app.config['DEBUG'] = True
app.config['SERVER_NAME'] = 'myapp.dev:5000'

@app.route('/')
def hello_world():
    return 'Hello World!'

@app.errorhandler(404)
def not_found(error):
    print(str(error))
    return '404', 404

if __name__ == '__main__':
    app.run(debug=True)
If I set SERVER_NAME, Flask responds to every URL with a 404 error; when I comment that line out, it functions correctly again.
/Users/sunqingyao/Envs/flask/bin/python3.6 /Users/sunqingyao/Projects/play-ground/python-playground/foo/foo.py
* Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
* Restarting with stat
* Debugger is active!
* Debugger PIN: 422-505-438
127.0.0.1 - - [30/Oct/2017 07:19:55] "GET / HTTP/1.1" 404 -
404 Not Found: The requested URL was not found on the server. If you entered the URL manually please check your spelling and try again.
Please note that this is not a duplicate of Flask 404 when using SERVER_NAME, since I'm not using Apache or any production web server. I'm just dealing with Flask's built-in development server.
I'm using Python 3.6.2, Flask 0.12.2, Werkzeug 0.12.2, PyCharm 2017.2.3 on macOS High Sierra, if it's relevant.
When you set SERVER_NAME, you should make the HTTP request header 'Host' match it:
# curl http://127.0.0.1:5000/ -sv -H 'Host: myapp.dev:5000'
* Hostname was NOT found in DNS cache
* Trying 127.0.0.1...
* Connected to 127.0.0.1 (127.0.0.1) port 5000 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.35.0
> Accept: */*
> Host: myapp.dev:5000
>
* HTTP 1.0, assume close after body
< HTTP/1.0 200 OK
< Content-Type: text/html; charset=utf-8
< Content-Length: 13
< Server: Werkzeug/0.14.1 Python/3.6.5
< Date: Thu, 14 Jun 2018 09:34:31 GMT
<
* Closing connection 0
Hello, World!
If you use a web browser, you should access it via http://myapp.dev:5000/ and set up your /etc/hosts file accordingly.
It is like an nginx vhost: the Host header is used to do the routing.
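For example, assuming you are testing against the same machine, a hosts entry like this makes the name resolve to loopback:

127.0.0.1   myapp.dev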
I think SERVER_NAME is mainly used for the route map. You should set the host and port by hand:
app.run(host="0.0.0.0", port=5000)
If you don't set the host/port but do set SERVER_NAME and it seems to work, that's because app.run() has this logic:
def run(self, host=None, port=None, debug=None,
        load_dotenv=True, **options):
    ...
    _host = '127.0.0.1'
    _port = 5000
    server_name = self.config.get('SERVER_NAME')
    sn_host, sn_port = None, None

    if server_name:
        sn_host, _, sn_port = server_name.partition(':')

    host = host or sn_host or _host
    port = int(port or sn_port or _port)
    ...
    try:
        run_simple(host, port, self, **options)
    finally:
        self._got_first_request = False
In short, don't use SERVER_NAME to set the host and port that app.run() uses, unless you understand its impact on the route map.
Sometimes I find Flask's docs confusing (see the Flask docs quoted by @dm295 below - the implications surrounding SERVER_NAME are hard to parse). An alternative setup to (and inspired by) @Dancer Phd's answer is to specify HOST and PORT parameters in a config file instead of SERVER_NAME.
For example, if you use the config strategy proposed in the Flask docs, add the host and port number like so:
class Config(object):
    DEBUG = False
    TESTING = False
    DATABASE_URI = 'sqlite://:memory:'
    HOST = 'http://localhost'
    PORT = '5000'

class ProductionConfig(Config):
    DATABASE_URI = 'mysql://user@localhost/foo'

class DevelopmentConfig(Config):
    DEBUG = True

class TestingConfig(Config):
    TESTING = True
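One way to wire these values into the development server (a sketch, not part of the original answer; it assumes the classes above live in config.py, and uses a bare hostname because app.run does not accept a URL like 'http://localhost'):

from flask import Flask
from config import DevelopmentConfig

app = Flask(__name__)
app.config.from_object(DevelopmentConfig)

if __name__ == '__main__':
    # PORT comes from the config class; the host is given as a bare name
    app.run(host='localhost', port=int(app.config['PORT']))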
From Flask docs:
the name and port number of the server. Required for subdomain support
(e.g.: 'myapp.dev:5000') Note that localhost does not support
subdomains so setting this to “localhost” does not help. Setting a
SERVER_NAME also by default enables URL generation without a request
context but with an application context.
and
More on SERVER_NAME
The SERVER_NAME key is used for the subdomain
support. Because Flask cannot guess the subdomain part without the
knowledge of the actual server name, this is required if you want to
work with subdomains. This is also used for the session cookie.
Please keep in mind that not only Flask has the problem of not knowing
what subdomains are, your web browser does as well. Most modern web
browsers will not allow cross-subdomain cookies to be set on a server
name without dots in it. So if your server name is 'localhost' you
will not be able to set a cookie for 'localhost' and every subdomain
of it. Please choose a different server name in that case, like
'myapplication.local' and add this name + the subdomains you want to
use into your host config or setup a local bind.
It looks like there's no point in setting it to localhost. As suggested in the docs, try something like myapp.dev:5000.
You can also just pass the port number and host to app.run, like:
app.run(debug=True, port=5000, host="localhost")
Delete:
app.config['DEBUG'] = True
app.config['SERVER_NAME'] = 'myapp.dev:5000'
Using debug=True worked for me:
from flask import Flask

app = Flask(__name__)
app.config['SERVER_NAME'] = 'localhost:5000'

@app.route('/')
def hello_world():
    return 'Hello World!'

if __name__ == '__main__':
    app.run(debug=True)
I am trying to deploy a Flask app on an Ubuntu server. I referenced this, this and this and found a lot of similar questions on SO, but I still can't figure it out.
I can run it manually from the source directory by doing uwsgi siti_uwsgi.ini and navigating to http://server_IP_address:8080/. But when I try uwsgi --socket 127.0.0.1:3031 --wsgi-file views.py --master --processes 4 --threads 2 and navigate to http://server_IP_address:3031, I get nothing.
If I go to siti.company.loc (the DNS name I set up), there is a standard Nginx 502 error page.
When I try to restart the supervisor process, it dies with a FATAL error:
can't find command "gunicorn"
What am I doing wrong? Let me know if I need to provide more info or background.
/webapps/patch/src/views.py (Flask app):
from flask import Flask, render_template, request, url_for, redirect
from flask_cors import CORS

app = Flask(__name__)
CORS(app, resources={r"/*": {'origins': '*'}})

@app.route('/')
def home():
    return 'Hello'

@app.route('/site:<site>/date:<int:day>-<month>-<int:year>')
def application(site, month, day, year):
    if request.method == 'GET':
        # Recompile date from URL. todo: better way
        dte = str(day) + "-" + str(month) + "-" + str(year)
        print('about to run')
        results = run_SITI(site, dte)
        return results

def run_SITI(site, dte):
    print('running SITI')
    return render_template('results.html', site=site, dte=dte, results=None)  # todo: Show results

if __name__ == '__main__':
    app.run(debug=True)
/webapps/patch/siti_wsgi.ini (uWSGI ini):
[uwsgi]
http = :8008
chdir = /webapps/patch/src
wsgi-file = views.py
processes = 2
threads = 2
callable = app
/etc/nginx/sites-available/siti (Nginx config):
upstream flask_siti {
    server 127.0.0.1:8008 fail_timeout=0;
}

server {
    listen 80;
    server_name siti.company.loc;
    charset utf-8;
    client_max_body_size 75M;

    access_log /var/log/nginx/siti/access.log;
    error_log /var/log/nginx/siti/error.log;

    keepalive_timeout 5;

    location /static {
        alias /webapps/patch/static;
    }

    location /media {
        alias /webapps/patch/media;
    }

    location / {
        # checks for static file, if not found proxy to the app
        try_files $uri @proxy_to_app;
    }

    location @proxy_to_app {
        proxy_redirect off;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://flask_siti;
    }
}
/etc/supervisor/conf.d/siti.conf (Supervisor config):
[program:webapp_siti]
command=gunicorn -b views:app
directory=/webapps/patch/src
user=nobody
autostart=true
autorestart=true
redirect_stderr=true
/var/log/nginx/siti/error.log (Nginx error log):
2016/08/30 11:44:42 [error] 25524#0: *73 connect() failed (111: Connection refused) while connecting to upstream, $
2016/08/30 11:44:42 [error] 25524#0: *73 connect() failed (111: Connection refused) while connecting to upstream, $
2016/08/30 11:44:42 [error] 25524#0: *73 no live upstreams while connecting to upstream, client: 10.1.2.195, serve$
You have errors in your nginx config.
Instead of:
upstream flask_siti {
    server 127.0.0.1:8008 fail_timeout=0;
}
server {
    ...
try:
upstream flask_siti {
    server 127.0.0.1:8080 fail_timeout=0;
}
server {
    ...
You must "activate" the virtualenv in the supervisor config. To do this, add the following line to your supervisor config:
environment=PATH="/webapps/patch/venv/bin",VIRTUAL_ENV="/webapps/patch/venv",PYTHONPATH="/webapps/patch/venv/lib/python:/webapps/patch/venv/lib/python/site-packages"
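Put together, the program section might look like this (a sketch based on the paths in the question; the bind address is an assumption chosen to match the nginx upstream):

[program:webapp_siti]
command=gunicorn -b 127.0.0.1:8008 views:app
directory=/webapps/patch/src
user=nobody
autostart=true
autorestart=true
redirect_stderr=true
environment=PATH="/webapps/patch/venv/bin",VIRTUAL_ENV="/webapps/patch/venv",PYTHONPATH="/webapps/patch/venv/lib/python:/webapps/patch/venv/lib/python/site-packages"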
I was able to get it working with the following changes:
/etc/supervisor/conf.d/siti.conf (Supervisor config):
[program:webapp_siti]
command=/webapps/patch/venv/bin/gunicorn -b :8118 views:app # didn't use uwsgi.ini after all
directory=/webapps/patch/src
user=nobody
autostart=true
autorestart=true
redirect_stderr=true
/etc/nginx/sites-enabled/siti (Nginx config):
upstream flask_siti {
    server 127.0.0.1:8118 fail_timeout=0;  # changed ports because 8008 was already in use by something else
}
# snip ...
Turns out I had set up uWSGI to listen on port 8008. I also had an extra file in /etc/nginx/sites-enabled called siti.save that was preventing Nginx from reloading. I deleted it, reloaded/restarted Nginx, restarted Supervisor, and it worked.
My nginx configuration is like:
server {
    listen 80 so_keepalive=30m::;

    location /wsgi {
        uwsgi_pass uwsgicluster;
        include uwsgi_params;
        uwsgi_read_timeout 30000;
        uwsgi_buffering off;
    }
    ...
}
In my python:
def application_(environ, start_response):
    body = queue.Queue()
    ...
    gevent.spawn(redis_wait, environ, body, channels)
    return body

def redis_wait(environ, body, channels):
    server = redis.Redis(connection_pool=REDIS_CONNECTION_POOL)
    client = server.pubsub()
    try:
        for channel in channels:
            client.subscribe(channel)

        messages = client.listen()
        for message in messages:
            if message['type'] != 'message' and message['type'] != 'pmessage':
                continue
            body.put(message['data'])
    finally:
        client.unsubscribe()
        client.close()
The problem occurs when the client connection is interrupted (the network connection is abruptly lost, the application terminates, etc.): redis shows that the connection on the server side is still open. Even with so_keepalive, the connection isn't being cleaned up. How do I fix this?
EDIT: I've noticed through the nginx_status page that the active connection count does go down after the disconnect. The problem is that uwsgi isn't getting notified of this.
You have to wait on the uwsgi socket as well as the redis socket so that you can be notified in case the uwsgi socket closes. Example here: http://nullege.com/codes/show/src%40k%40o%40kozmic-ci-HEAD%40tailer%40__init__.py/72/uwsgi.connection_fd/python
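A rough, untested sketch of that approach: poll both the redis pubsub socket and the client connection fd that uwsgi exposes through uwsgi.connection_fd(), so that a dropped client unblocks the loop. Reaching the redis socket through client.connection._sock is an internal detail of redis-py and may differ between versions:

import select
import uwsgi  # only available when running under uwsgi

def redis_wait(environ, body, channels):
    server = redis.Redis(connection_pool=REDIS_CONNECTION_POOL)
    client = server.pubsub()
    try:
        for channel in channels:
            client.subscribe(channel)
        client_fd = uwsgi.connection_fd()      # fd of the HTTP client connection
        redis_sock = client.connection._sock   # underlying redis socket (internal attribute)
        while True:
            readable, _, _ = select.select([redis_sock, client_fd], [], [])
            if client_fd in readable:
                # we never read from the client here, so readability on that fd
                # effectively means the peer closed; stop waiting on redis
                break
            message = client.get_message()
            if message and message['type'] in ('message', 'pmessage'):
                body.put(message['data'])
    finally:
        client.unsubscribe()
        client.close()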