Issues with nginx reverse proxy to an app server running nginx+django - python

I have an app running on Server A, and I have a Server B that I want to use as a reverse proxy for accessing the app on Server A. Server A is running nginx 1.4.5 and Django (Python 2.7.6 with fastcgi); Server B is running nginx 1.4.5 as well. I also want to add SSL termination on Server B that way.
The proxy is kind of working, but requests don't seem to get passed along correctly. When going to https://servera.org/ I only get a 404 error instead of the log-in page I'm expecting.
This is the error message I am seeing in the browser (it's a Django error page, so I know the request is reaching the app on Server A):
Page not found (404)
Request Method: GET
Request URL: http://nova.iguw.tuwien.ac.at/index.html
Using the URLconf defined in TUBadges.urls, Django tried these URL patterns, in this order:
1. ^admin/doc/
2. ^admin/
3. ^$
4. ^badges/?$
5. ^badges/(?P<uid>\d+)/?$
6. ^presets/$
7. ^svg$
8. ^bgsvg$
9.
This is my config for the reverse proxy:
upstream server_a {
server servera.org:8080 fail_timeout=0;
}
server {
listen 443 ssl;
listen 80;
server_name subdomain1.serverb.org;
server_name subdomain2.serverb.org;
keepalive_timeout 70;
ssl_certificate /etc/certificates/server_b.pem;
ssl_certificate_key /etc/certificates/server_b.key;
error_log /var/log/nginx/aurora.ssl.error.log error;
access_log off;
client_max_body_size 50M;
location ~ ^/(.+)$ {
proxy_intercept_errors off;
proxy_buffering off;
proxy_connect_timeout 5;
proxy_send_timeout 5;
proxy_read_timeout 5;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-By $server_addr:$server_port;
proxy_set_header X-Forwarded-Fo $remote_addr;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
access_log off;
error_log /var/log/nginx/tubadges.error.log debug;
proxy_pass http://server_a;
proxy_redirect off;
}
}
And that is the config I'm using on Server A to run the app:
server {
listen 8080; ## listen for ipv4
server_name localhost;
server_name servera.org;
client_max_body_size 5M;
error_log /var/log/nginx/app1.error.log;
access_log /var/log/nginx/app1.access.log;
location /static {
root /srv/django/projects/app1;
}
location /media {
root /srv/django/projects/app1;
}
location / {
# host and port to fastcgi server
fastcgi_pass unix:/srv/django/run/app1.socket;
fastcgi_param PATH_INFO $fastcgi_script_name;
fastcgi_param REQUEST_METHOD $request_method;
fastcgi_param QUERY_STRING $query_string;
fastcgi_param CONTENT_TYPE $content_type;
fastcgi_param CONTENT_LENGTH $content_length;
fastcgi_param SERVER_ADDR $server_addr;
fastcgi_param SERVER_PORT $server_port;
fastcgi_param SERVER_NAME $server_name;
fastcgi_param SERVER_PROTOCOL $server_protocol;
fastcgi_param REMOTE_ADDR $remote_addr;
fastcgi_pass_header Authorization;
fastcgi_intercept_errors off;
}
}
I'm assuming that it's got something to do with my Django App config or the config on Server A.
Can you spot an error? Do you need more information?
Is there, maybe, an answer on here that I have just missed?
Thanks in advance!
P.S.: This is my first time asking a question on StackOverflow, so if there's a way I can improve my question to get better answers, or you see something that really bugs you, please don't hesitate to point out how I can improve it. :)

Actually, it looks like nginx is correctly passing the request through - the error page comes from Django (so the request is being serviced by the Django application on Server A), and it states that the URL:
http://nova.iguw.tuwien.ac.at/index.html
cannot be matched against any of the URL patterns defined in TUBadges.urls.
You need to hit a URL that matches one of those patterns, such as http://nova.iguw.tuwien.ac.at/admin/login or something like that.
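For what it's worth, the /index.html in that error is likely produced on Server B itself: the regex location ~ ^/(.+)$ does not match a request for the bare /, so nginx's default index handling rewrites / to /index.html, and the internal redirect then hits the regex location and gets proxied. A minimal sketch of a plain prefix location instead, reusing the upstream and headers from the question (a sketch under those assumptions, not a tested config):
location / {
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    # with no URI part on proxy_pass, the original request URI (including the bare /) is passed through unchanged
    proxy_pass http://server_a;
    proxy_redirect off;
}
With that in place, a request for the site root reaches Django as /, which matches the ^$ pattern and should return the log-in page.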

Related

django SECURE_SSL_REDIRECT with nginx reverse proxy

Is it secure to set SECURE_SSL_REDIRECT=False if I have nginx set up as a reverse proxy serving the site over https?
I can access the site over SSL if this is set to False, whereas if it is True, I receive a "too many redirects" response.
NOTE: nginx and django run from within docker containers.
My nginx.conf looks like:
upstream config {
server web:8000;
}
server {
listen 80;
server_name _;
return 301 https://$host$request_uri;
}
server {
listen 443 ssl;
server_name _;
ssl_certificate /etc/ssl/certs/cert.com.chained.crt;
ssl_certificate_key /etc/ssl/certs/cert.com.key;
location / {
proxy_pass http://config;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $host;
proxy_redirect off;
}
location /staticfiles/ {
alias /home/app/web/staticfiles/;
}
}
EDIT: Added http to https redirect in nginx.conf.
You need to add the following to your "location /" block:
location / {
...
proxy_set_header X-Forwarded-Proto $scheme;
...
}

NGINX server blocks don't work as expected

I know this isn't the most appropriate place to ask a question about nginx, but I have been stuck on this issue for a few days and still have no idea how to solve it.
I would like to use nginx to redirect users from domain.com:3001 to sub.domain.com. The application on port 3001 is running in a docker container, and I didn't add any files in the sites-available/sites-enabled directories. I have added two server blocks (vhosts) in my conf.d directory. In the server block I set $upstream and resolver according to the record in my /etc/resolv.conf file. The problem is that when I test sub.domain.com in a browser, I either get a message that the IP address could not be associated with any server (DNS_PROBE_FINISHED_NXDOMAIN) or a 50x error.
However, when I run curl sub.domain.com from the server I receive a 200 with the index.html response; this doesn't work when I run the same command from my local PC. The server's domain is in a private network. Do you have any idea what my configuration files are missing? Maybe there is some issue with the listen port when the app is running in docker, or maybe there is something wrong with the version of nginx? When I installed nginx, the conf.d directory was empty, with no default.conf. I am lost...
Any help will be highly appreciated.
Here are my configuration files:
server.conf:
server
{
listen 80;
listen 443 ssl;
server_name sub.domain.net;
#charset koi8-r;
#access_log /var/log/nginx/host.access.log main;
ssl_certificate /etc/nginx/ssl/cer.crt;
ssl_certificate_key /etc/nginx/ssl/private.key;
#set_real_ip_from 127.0.0.1;
#real_ip_header X-Real-IP;
#real_ip_recursive on;
# location / {
# root /usr/share/nginx/html;
# index index.html index.htm;
# }
location / {
resolver 10.257.10.4;
set $upstream https://127.0.0.1:3000;
proxy_pass $upstream;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Server $host;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $host;
#error_page 404 /404.html;
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
# proxy the PHP scripts to Apache listening on 127.0.0.1:80
#
#location ~ \.php$ {
# proxy_pass http://127.0.0.1;
#}
# pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
#
#location ~ \.php$ {
# root html;
# fastcgi_pass 127.0.0.1:9000;
# fastcgi_index index.php;
# fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
# include fastcgi_params;
#}
# deny access to .htaccess files, if Apache's document root
# concurs with nginx's one
#
#location ~ /\.ht {
# deny all;
#}
}
}
nginx.conf
#user nginx;
worker_processes 1;
#error_log /var/log/nginx/error.log;
#error_log /var/log/nginx/error.log notice;
#error_log /var/log/nginx/error.log info;
#pid /var/run/nginx.pid;
include /etc/nginx/modules.conf.d/*.conf;
events {
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
#log_format main '$remote_addr - $remote_user [$time_local] "$request" '
# '$status $body_bytes_sent "$http_referer" '
# '"$http_user_agent" "$http_x_forwarded_for"';
#access_log /var/log/nginx/access.log main;
sendfile on;
#tcp_nopush on;
#keepalive_timeout 0;
keepalive_timeout 65;
#tcp_nodelay on;
#gzip on;
#gzip_disable "MSIE [1-6]\.(?!.*SV1)";
server_tokens off;
include /etc/nginx/conf.d/*.conf;
}
# override global parameters e.g. worker_rlimit_nofile
include /etc/nginx/*global_params;

Django Channel with Nginx Redirect

I'm using Django Channels for my Django application. On top of Django, I added nginx as a layer for HTTP requests. All HTTP requests work nicely, but when I tried to create a websocket connection, I got a 302 HTTP code.
Nginx Configuration
# Enable upgrading of connection (and websocket proxying) depending on the
# presence of the upgrade field in the client request header
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
# Define connection details for connecting to django running in
# a docker container.
upstream uwsgi {
server uwsgi:8080;
}
server {
# OTF gzip compression
gzip on;
gzip_min_length 860;
gzip_comp_level 5;
gzip_proxied expired no-cache no-store private auth;
gzip_types text/plain application/xml application/x-javascript text/xml text/css application/json;
gzip_disable "MSIE [1-6].(?!.*SV1)";
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
# the port your site will be served on
listen 8080;
# the domain name it will serve for
#server_name *;
charset utf-8;
error_page 500 502 /500.html;
location = /500.html {
root /html;
internal;
}
# max upload size, adjust to taste
client_max_body_size 15M;
# Django media
location /media {
# your Django project's media files - amend as required
alias /home/web/media;
expires 21d; # cache for 21 days
}
location /static {
# your Django project's static files - amend as required
alias /home/web/static;
expires 21d; # cache for 21 days
}
location /archive {
proxy_set_header Host $http_host;
autoindex on;
# your Django project's static files - amend as required
alias /home/web/archive;
expires 21d; # cache for 21 days
}
# Finally, send all non-media requests to the Django server.
location / {
uwsgi_pass uwsgi;
# the uwsgi_params file you installed needs to be passed with each
# request.
# the uwsgi_params need to be passed with each uwsgi request
uwsgi_param QUERY_STRING $query_string;
uwsgi_param REQUEST_METHOD $request_method;
uwsgi_param CONTENT_TYPE $content_type;
uwsgi_param CONTENT_LENGTH $content_length;
uwsgi_param REQUEST_URI $request_uri;
uwsgi_param PATH_INFO $document_uri;
uwsgi_param DOCUMENT_ROOT $document_root;
uwsgi_param SERVER_PROTOCOL $server_protocol;
uwsgi_param HTTPS $https if_not_empty;
uwsgi_param REMOTE_ADDR $remote_addr;
uwsgi_param REMOTE_PORT $remote_port;
uwsgi_param SERVER_PORT $server_port;
uwsgi_param SERVER_NAME $server_name;
if (!-f $request_filename) {
proxy_pass http://uwsgi;
break;
}
# Require http version 1.1 to allow for upgrade requests
proxy_http_version 1.1;
# We want proxy_buffering off for proxying to websockets.
proxy_buffering off;
# http://en.wikipedia.org/wiki/X-Forwarded-For
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
# enable this if you use HTTPS:
# proxy_set_header X-Forwarded-Proto https;
# pass the Host: header from the client for the sake of redirects
proxy_set_header Host $http_host;
# We've set the Host header, so we don't need Nginx to muddle
# about with redirects
proxy_redirect off;
# Depending on the request value, set the Upgrade and
# connection headers
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
}
}
Routing.py
from channels.routing import route
from consumers import ws_add, ws_message, ws_disconnect
channel_routing = [
route("websocket.connect", ws_add),
route("websocket.receive", ws_message),
route("websocket.disconnect", ws_disconnect),
]
Consumers.py
from channels import Channel, Group
from channels.sessions import channel_session
from channels.auth import channel_session_user, channel_session_user_from_http
# Connected to websocket.connect
@channel_session_user_from_http
def ws_add(message):
# Accept the connection
message.reply_channel.send({"accept": True})
# Add to the group
Group("progress-%s" % message.user.username).add(message.reply_channel)
@channel_session_user
def ws_message(message):
Group("progress-%s" % message.user.username).send({
"text": message['text'],
})
# Connected to websocket.disconnect
@channel_session_user
def ws_disconnect(message):
Group("progress-%s" % message.user.username).discard(message.reply_channel)
If I remove the nginx layer, it works nicely. Is there any configuration that I missed?

Serve flask python on https and another port without https

What I'm trying to accomplish:
I have a domain on https. Check. It's working OK using the following config: the Flask app runs on port 1337 -> nginx takes it -> serves it through https. Everything is working nicely.
Now I want to run another app, on port 1338 let's say. But if I do this, the browser (chrome) automatically redirects it to https.
I want: http://example.com:1338 .... to run ok
I get: https://example.com:1338 ... error certificate
My question is: how can I make the other app (on port 1338) either work with https:// or work with http://?
Here's my config...
server {
listen 80 default_server;
listen [::]:80 default_server;
root /home/cleverbots;
# Add index.php to the list if you are using PHP
index index.html index.htm index.nginx-debian.html;
server_name _;
# SSL configuration
#
listen 443 ssl http2 default_server;
listen [::]:443 ssl http2 default_server;
ssl_certificate /xxxxxxxxxx.crt;
ssl_certificate_key /xxxxxxxxxx.key;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";
ssl_ecdh_curve secp384r1;
ssl_session_cache shared:SSL:10m;
ssl_session_tickets off;
ssl_stapling on;
ssl_stapling_verify on;
resolver 8.8.8.8 8.8.4.4 valid=300s;
resolver_timeout 5s;
# Disable preloading HSTS for now. You can use the commented out header line that includes
# the "preload" directive if you understand the implications.
#add_header Strict-Transport-Security "max-age=63072000; includeSubdomains; preload";
add_header Strict-Transport-Security "max-age=63072000; includeSubdomains";
add_header X-Frame-Options DENY;
add_header X-Content-Type-Options nosniff;
ssl_dhparam /xxxxxx/dhparam.pem;
location /static/ {
expires 30d;
add_header Last-Modified $sent_http_Expires;
alias /home/my_first_app/application/static/;
}
location / {
try_files $uri @tornado;
}
location @tornado {
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass http://127.0.0.1:1337;
}
}
The answer to your question depends on what exactly you want the user experience to be.
As I understand your goal, you only have one domain (example.com). Your first app (I'm going to call it app1337) is running on port 1337 and you can access it in a browser at https://example.com/. Now you want to add another app (app1338) that you want to be able to access at https://example.com:1338/. The problem here is that only one service can run on a given port on a given interface. This can work, but it means you have to be really careful to make sure that your Flask app only listens on loopback (127.0.0.1) and nginx only listens on your Ethernet interface. If not, you'll get "socket already in use" errors. I would recommend instead using something else like 8338 in nginx to avoid this confusion.
The fastest solution I can see would be to leave your existing server block exactly as is. Duplicate the entire thing, and in the new block:
1. Change the two listen 443 lines to the port you want to use in the browser (8338).
2. Remove the listen 80 lines or, if you want to serve the app on both SSL and non-SSL, change the port to the non-SSL port you want to use.
3. Change your proxy_pass line to point to your second Flask app (see the sketch below).
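To make that concrete, here is a minimal sketch of the duplicated block for the port-based approach; the certificate paths are carried over from the question and the second app's address (127.0.0.1:1338) is an assumption, not a tested config:
server {
    listen 8338 ssl http2;
    listen [::]:8338 ssl http2;
    server_name _;
    # same certificate and key as the first block
    ssl_certificate /xxxxxxxxxx.crt;
    ssl_certificate_key /xxxxxxxxxx.key;
    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # assumed: the second Flask app listens on loopback port 1338
        proxy_pass http://127.0.0.1:1338;
    }
}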
Like Keenan, I would recommend you use subdomains to sort your traffic. Something like https://app1337.example.com/ and https://app1338.example.com/ to make for a better user experience. To do this, duplicate the server block as above, but this time leave the ports the same, but change the "server_name" directive in each block to match the domain. Remove all of the "default_server" parts from the listen directives.
As an example:
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name app1337.example.com;
# SSL configuration
# Certificate and key for "app1337.example.com"
ssl_certificate /xxxxxxxxxx.crt;
ssl_certificate_key /xxxxxxxxxx.key;
# The rest of the ssl stuff is common and can be moved to a shared file and included
# in whatever blocks it is needed.
include sslcommon.conf;
root /home/cleverbots;
# Add index.php to the list if you are using PHP
index index.html index.htm index.nginx-debian.html;
location /static/ {
expires 30d;
add_header Last-Modified $sent_http_Expires;
alias /home/my_first_app/application/static/;
}
location / {
try_files $uri @tornado;
}
location @tornado {
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass http://127.0.0.1:1337;
}
}
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name app1338.example.com;
# SSL configuration
# Certificate and key for "app1338.example.com"
ssl_certificate /xxxxxxxxxx.crt;
ssl_certificate_key /xxxxxxxxxx.key;
# The rest of the ssl stuff is common and can be moved to a shared file and included
# in whatever blocks it is needed.
include sslcommon.conf;
## This might be different for app1338
root /home/cleverbots;
# Add index.php to the list if you are using PHP
index index.html index.htm index.nginx-debian.html;
## This might be different for app1338
location /static/ {
expires 30d;
add_header Last-Modified $sent_http_Expires;
alias /home/my_first_app/application/static/;
}
location / {
try_files $uri @app1338;
}
location @app1338 {
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass http://127.0.0.1:1338;
}
}

websockets proxied by nginx to gunicorn over https giving 400 (bad request)

I am having trouble establishing a websocket in my Flask web application.
On the client side, I am emitting a "ping" websocket event every second to the server. In the browser console, I see the following error each second
POST https://example.com/socket.io/?EIO=3&transport=polling&t=LOkVYzQ&sid=88b5202cf38f40879ddfc6ce36322233 400 (BAD REQUEST)
GET https://example.com/socket.io/?EIO=3&transport=polling&t=LOkVZLN&sid=5a355bbccb6f4f05bd46379066876955 400 (BAD REQUEST)
WebSocket connection to 'wss://example.com/socket.io/?EIO=3&transport=websocket&sid=5a355bbccb6f4f05bd46379066876955' failed: WebSocket is closed before the connection is established.
I have the following nginx.conf
server {
listen 80;
server_name example.com www.example.com;
return 301 https://$server_name$request_uri;
}
upstream app_server {
# for UNIX domain socket setups
server unix:/pathtowebapp/gunicorn.sock fail_timeout=0;
}
server {
listen 443 ssl;
server_name example.com www.example.com;
keepalive_timeout 5;
ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';
charset utf-8;
client_max_body_size 30M;
location / {
try_files $uri @proxy_to_app;
}
location /socket.io {
proxy_pass http://app_server;
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Upgrade websocket;
proxy_set_header Connection "upgrade";
proxy_read_timeout 86400;
proxy_buffering off;
proxy_headers_hash_max_size 1024;
}
location /static {
alias /pathtowebapp/webapp/static;
}
location @proxy_to_app {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
# enable this if and only if you use HTTPS
proxy_set_header X-Forwarded-Proto https;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Host $http_host;
# we don't want nginx trying to do something clever with
# redirects, we set the Host: header above already.
proxy_redirect off;
#proxy_buffering off;
proxy_pass http://app_server;
}
}
I have been looking all over for examples of a websocket working with https using nginx in front of gunicorn.
My webpage loads, although the websocket connection is not successful.
The client side websocket is established using the following javascript:
var socket = io.connect('https://' + document.domain + ':' + location.port + namespace);
Here is my gunicorn.conf
import multiprocessing
bind = 'unix:/pathtowebapp/gunicorn.sock'
workers = multiprocessing.cpu_count() * 2 + 1
worker_class = 'eventlet'
[EDIT] If I configure nginx the way it is shown in the Flask-SocketIO documentation and just run (env)$ python deploy_app.py, then it works. But I was under the impression that this was not as production-ready as the setup I previously mentioned.
The problem is that you are running multiple workers on gunicorn. This is not a configuration that is currently supported, due to the very limited load balancer in gunicorn that does not support sticky sessions. Documentation reference: https://flask-socketio.readthedocs.io/en/latest/#gunicorn-web-server.
Instead, run several gunicorn instances, each with one worker, and then set up nginx to do the load balancing, using the ip_hash method so that sessions are sticky.
Also, in case you are not aware, if you run multiple servers you need to also run a message queue, so that the processes can coordinate. This is also covered in the documentation link above.
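For the load-balancing part, a minimal sketch of the nginx side with ip_hash in front of several single-worker gunicorn instances (the ports below are assumptions for illustration, not taken from your setup):
upstream socketio_nodes {
    # ip_hash keeps each client pinned to the same gunicorn instance (sticky sessions)
    ip_hash;
    server 127.0.0.1:5000;
    server 127.0.0.1:5001;
    server 127.0.0.1:5002;
}
location /socket.io {
    proxy_pass http://socketio_nodes;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
Each instance would then be started with a single eventlet worker, e.g. gunicorn --worker-class eventlet -w 1 --bind 127.0.0.1:5000 module:app (module:app standing in for whatever your application module is actually called), and the instances coordinate through the message queue mentioned above.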
