Websockets not working with Django channels after deploying - python

I'm trying to deploy my Django project, but I've run into some difficulties.
Everything works perfectly on my local machine.
I'm using Django + nginx + uvicorn (run by supervisor), and I have an SSL certificate in use.
When I try to connect to the websocket (/ws) by loading the page and letting my JS files run, I get this message in the console:
WebSocket connection to 'wss://example.com/ws/' failed
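(A quick way to reproduce the handshake outside the browser, assuming the websockets client package is installed; the route is taken from routing.py further down:)

import asyncio
import websockets

async def probe():
    # Attempt the same handshake the browser makes; the URL assumes the
    # ws/messenger/ route from routing.py below.
    async with websockets.connect("wss://example.com/ws/messenger/") as ws:
        print("handshake succeeded")

asyncio.run(probe())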
Here is my nginx config:
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

server {
    listen 80;
    server_name example.com;
    return 301 https://example.com$request_uri;
}

server {
    listen 443 ssl;
    ssl on;
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    server_name example.com;

    client_max_body_size 100M;

    gzip on;
    gzip_vary on;
    gzip_proxied any;
    gzip_http_version 1.1;
    gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/javascript;

    location /ws/ {
        proxy_pass https://uvicorn/ws;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_http_version 1.1;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_intercept_errors on;
        proxy_redirect off;
        proxy_cache_bypass $http_upgrade;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-NginX-Proxy true;
        proxy_ssl_session_reuse off;
    }

    location /static/ {
        root /root/server/social;
        expires 1d;
    }

    location /media/ {
        root /root/server/social;
        expires 1d;
    }

    location / {
        proxy_pass https://uvicorn;
        proxy_set_header Host $server_name;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

upstream uvicorn {
    server unix:/tmp/uvicorn.sock;
}
And the supervisor config:
[program:django]
command = /root/server/social/venv/bin/python -m uvicorn myproject.asgi:application --uds /tmp/uvicorn.sock --ssl-keyfile=/etc/letsencrypt/live/mysite.com/privkey.pem --ssl-certfile=/etc/letsencrypt/live/mysite.com/fullchain.pem
directory = /root/server/social
stderr_logfile=/var/log/long.err.log
stdout_logfile=/var/log/long.out.log
autostart=true
autorestart=true
When everything is ready, I start nginx: service nginx restart. I get no errors.
After that I run service supervisor restart. And here is the thing: as long as no one uses the website, my log file (/var/log/long.err.log) shows no errors:
INFO: connection closed
INFO: Shutting down
INFO: Finished server process [606]
INFO: Started server process [660]
INFO: Waiting for application startup.
INFO: ASGI 'lifespan' protocol appears unsupported.
INFO: Application startup complete.
INFO: Uvicorn running on unix socket /tmp/uvicorn.sock (Press CTRL+C to quit)
But if someone uses the website and activates a websocket, I get:
File "/root/server/social/venv/lib/python3.9/site-packages/websockets/legacy/protocol.py", line 1216, in write_close_frame
await self.write_frame(True, OP_CLOSE, data, _state=State.CLOSING)
File "/root/server/social/venv/lib/python3.9/site-packages/websockets/legacy/protocol.py", line 1189, in write_frame
await self.drain()
File "/root/server/social/venv/lib/python3.9/site-packages/websockets/legacy/protocol.py", line 1178, in drain
await self.ensure_open()
File "/root/server/social/venv/lib/python3.9/site-packages/websockets/legacy/protocol.py", line 921, in ensure_open
raise self.connection_closed_exc()
websockets.exceptions.ConnectionClosedError: sent 1000 (OK); no close frame received
INFO: connection closed
Here is everything related to websockets in the project structure, in case it's needed:
routing.py
from django.urls import re_path

from . import consumers

websocket_urlpatterns = [
    re_path(r"ws/messenger/", consumers.ChatConsumer.as_asgi()),
    re_path(r"ws/profile/", consumers.ProfileConsumer.as_asgi()),
    re_path(r"ws/like/", consumers.LikeConsumer.as_asgi()),
]
asgi.py
import os

from django.core.asgi import get_asgi_application
from channels.routing import ProtocolTypeRouter, URLRouter
from channels.auth import AuthMiddlewareStack

import messenger.routing

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myproject.settings')

application = ProtocolTypeRouter({
    "http": get_asgi_application(),
    "websocket": AuthMiddlewareStack(
        URLRouter(messenger.routing.websocket_urlpatterns)
    ),
})
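As an aside, some production deployments also wrap the websocket router in an origin check; a sketch of that variant using the stock Channels validator (not necessarily related to the failure here):

from channels.security.websocket import AllowedHostsOriginValidator

application = ProtocolTypeRouter({
    "http": get_asgi_application(),
    "websocket": AllowedHostsOriginValidator(
        AuthMiddlewareStack(
            URLRouter(messenger.routing.websocket_urlpatterns)
        )
    ),
})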
consumers.py
import json
from datetime import datetime

from asgiref.sync import async_to_sync
from channels.generic.websocket import WebsocketConsumer
from profile.models import User, Message, Chat, Publication

group_members = []

class ChatConsumer(WebsocketConsumer):
    ...

class ProfileConsumer(WebsocketConsumer):
    ...

class LikeConsumer(WebsocketConsumer):
    ...
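For reference, a minimal consumer of the kind stubbed out above might look like the following (a generic Channels sketch with an illustrative group name, not the project's actual code):

import json

from asgiref.sync import async_to_sync
from channels.generic.websocket import WebsocketConsumer

class EchoConsumer(WebsocketConsumer):
    group_name = "demo"  # illustrative group name

    def connect(self):
        # Join the group, then complete the handshake.
        async_to_sync(self.channel_layer.group_add)(self.group_name, self.channel_name)
        self.accept()

    def disconnect(self, close_code):
        async_to_sync(self.channel_layer.group_discard)(self.group_name, self.channel_name)

    def receive(self, text_data=None, bytes_data=None):
        # Echo the decoded payload back to this client.
        payload = json.loads(text_data)
        self.send(text_data=json.dumps({"echo": payload}))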

Related

Websockets Secure with headers in Python

For my application I'm using secure websockets, which is working fine, but I would like to secure it a bit more.
For the websocket Python server I'm using the websockets library (on asyncio). But when I check the path value passed to the handler registered with websockets.serve(), I always get the path of the socket, and the remote IP is always local.
How can I change my configuration so I can block other IPs that are trying to connect?
Server.py
import ssl
import asyncio
import logging
import pathlib

import websockets

logging.basicConfig()

STATE = {'value': 0}
USERS = set()

async def register(websocket):
    USERS.add(websocket)
    print("connection made!")

async def unregister(websocket):
    USERS.remove(websocket)

async def update(websocket):
    await websocket.send("Jobnumber: 1")

async def counter(websocket, path):
    await register(websocket)
    addr, seq = websocket.remote_address
    print(addr)  # ALWAYS localhost
    print(path)  # always the same path /server/sock (as configured in nginx)
    try:
        async for message in websocket:
            print(message)
    finally:
        await unregister(websocket)

ssl_context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ssl_context.load_cert_chain(
    pathlib.Path(__file__).with_name('privkey.pem'))

asyncio.get_event_loop().run_until_complete(
    websockets.serve(counter, '', 8004, ssl=ssl_context))
asyncio.get_event_loop().run_forever()
Nginx:
server {
    root /var/www/html/;
    index index.php index.html index.htm index.nginx-debian.html;
    server_name [hidden];

    location / {
        try_files $uri $uri/ =404;
    }

    location /server/sock {
        proxy_pass https://pythonserver;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }

    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/run/php/php7.0-fpm.sock;
    }

    listen [::]:443 ssl ipv6only=on; # managed by Certbot
    listen 443 ssl; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/../fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/.../privkey.pem; # managed by Certbot
}

upstream pythonserver {
    server localhost:8004;
}
Try using this for Nginx, since you are using websockets as well:
server {
    listen 80;
    server_name <url>;
    large_client_header_buffers 8 32k;

    if ($http_user_agent ~* Googlebot) {
        return 403;
    }

    access_log /var/log/nginx/access.log;

    location / {
        add_header 'Access-Control-Allow-Origin' "$http_origin";
        add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS, DELETE, PUT';
        add_header 'Access-Control-Allow-Credentials' 'true';
        add_header 'Access-Control-Allow-Headers' 'User-Agent,Keep-Alive,Content-Type';
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass https://<url>:443;
        proxy_read_timeout 90;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_buffers 8 32k;
        proxy_buffer_size 64k;
    }
}

server {
    listen 443;
    server_name abc.com;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log warn;

    ssl on;
    ssl_certificate /etc/nginx/ssl/ssl.crt;
    ssl_certificate_key /etc/nginx/ssl/ssl.key;

    location / {
        add_header 'Access-Control-Allow-Origin' "$http_origin";
        add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS, DELETE, PUT';
        add_header 'Access-Control-Allow-Headers' 'DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range,Keep-Alive';
        add_header 'Access-Control-Expose-Headers' 'Content-Length,Content-Range';
        add_header 'Access-Control-Allow-Credentials' 'true';
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_next_upstream error;
        proxy_pass http://pythonserver;
        add_header X-Upstream $upstream_addr;
        add_header Host $http_host;
        proxy_read_timeout 90;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_buffers 8 32k;
        proxy_buffer_size 64k;
    }
}
Update <url> to your server name in both places.
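Since nginx now sets X-Forwarded-For, the handler can also recover the real client IP from the handshake headers and reject unwanted ones. A sketch against the legacy websockets handler API, with an illustrative allow-list:

ALLOWED_IPS = {"203.0.113.7"}  # illustrative allow-list, not from the question

async def counter(websocket, path):
    # remote_address is always the proxy, so read the header nginx injected.
    forwarded = websocket.request_headers.get("X-Forwarded-For", "")
    client_ip = forwarded.split(",")[0].strip()
    if client_ip not in ALLOWED_IPS:
        await websocket.close(code=4003, reason="forbidden")
        return
    await register(websocket)
    try:
        async for message in websocket:
            print(message)
    finally:
        await unregister(websocket)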

Django Channel with Nginx Redirect

I'm using Django Channels for my Django application, with nginx as a layer in front for HTTP requests. All HTTP requests work nicely, but when I try to create a websocket connection, I get a 302 HTTP code.
Nginx Configuration
# Enable upgrading of connection (and websocket proxying) depending on the
# presence of the upgrade field in the client request header
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

# Define connection details for connecting to django running in
# a docker container.
upstream uwsgi {
    server uwsgi:8080;
}

server {
    # OTF gzip compression
    gzip on;
    gzip_min_length 860;
    gzip_comp_level 5;
    gzip_proxied expired no-cache no-store private auth;
    gzip_types text/plain application/xml application/x-javascript text/xml text/css application/json;
    gzip_disable "MSIE [1-6].(?!.*SV1)";

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    # the port your site will be served on
    listen 8080;
    # the domain name it will serve for
    #server_name *;
    charset utf-8;

    error_page 500 502 /500.html;
    location = /500.html {
        root /html;
        internal;
    }

    # max upload size, adjust to taste
    client_max_body_size 15M;

    # Django media
    location /media {
        # your Django project's media files - amend as required
        alias /home/web/media;
        expires 21d; # cache for 21 days
    }

    location /static {
        # your Django project's static files - amend as required
        alias /home/web/static;
        expires 21d; # cache for 21 days
    }

    location /archive {
        proxy_set_header Host $http_host;
        autoindex on;
        # your Django project's archive files - amend as required
        alias /home/web/archive;
        expires 21d; # cache for 21 days
    }

    # Finally, send all non-media requests to the Django server.
    location / {
        uwsgi_pass uwsgi;
        # the uwsgi_params need to be passed with each uwsgi request
        uwsgi_param QUERY_STRING $query_string;
        uwsgi_param REQUEST_METHOD $request_method;
        uwsgi_param CONTENT_TYPE $content_type;
        uwsgi_param CONTENT_LENGTH $content_length;
        uwsgi_param REQUEST_URI $request_uri;
        uwsgi_param PATH_INFO $document_uri;
        uwsgi_param DOCUMENT_ROOT $document_root;
        uwsgi_param SERVER_PROTOCOL $server_protocol;
        uwsgi_param HTTPS $https if_not_empty;
        uwsgi_param REMOTE_ADDR $remote_addr;
        uwsgi_param REMOTE_PORT $remote_port;
        uwsgi_param SERVER_PORT $server_port;
        uwsgi_param SERVER_NAME $server_name;

        if (!-f $request_filename) {
            proxy_pass http://uwsgi;
            break;
        }

        # Require http version 1.1 to allow for upgrade requests
        proxy_http_version 1.1;
        # We want proxy_buffering off for proxying to websockets.
        proxy_buffering off;
        # http://en.wikipedia.org/wiki/X-Forwarded-For
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # enable this if you use HTTPS:
        # proxy_set_header X-Forwarded-Proto https;
        # pass the Host: header from the client for the sake of redirects
        proxy_set_header Host $http_host;
        # We've set the Host header, so we don't need Nginx to muddle
        # about with redirects
        proxy_redirect off;
        # Depending on the request value, set the Upgrade and
        # Connection headers
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
    }
}
Routing.py
from channels.routing import route
from consumers import ws_add, ws_message, ws_disconnect

channel_routing = [
    route("websocket.connect", ws_add),
    route("websocket.receive", ws_message),
    route("websocket.disconnect", ws_disconnect),
]
Consumers.py
from channels import Channel, Group
from channels.sessions import channel_session
from channels.auth import channel_session_user, channel_session_user_from_http

# Connected to websocket.connect
@channel_session_user_from_http
def ws_add(message):
    # Accept the connection
    message.reply_channel.send({"accept": True})
    # Add to the group
    Group("progress-%s" % message.user.username).add(message.reply_channel)

@channel_session_user
def ws_message(message):
    Group("progress-%s" % message.user.username).send({
        "text": message['text'],
    })

# Connected to websocket.disconnect
@channel_session_user
def ws_disconnect(message):
    Group("progress-%s" % message.user.username).discard(message.reply_channel)
If I remove the nginx layer, it works nicely. Is there any configuration that I missed?

Why is "nginx + tornado" setup taking longer to return results compared to just "tornado" setup?

I have nginx in front of 5 Tornado servers.
When I call one of my Tornado servers directly, the results are returned very fast.
But when I call nginx instead, it takes VERY long to return results. Checking the logs, I can see that the request comes in as OPTIONS to nginx and to the selected Tornado server almost immediately. But then it takes its own sweet time, after which I see the GET request in the logs, and then the response is returned. Why is there such a long delay between OPTIONS and GET? When calling Tornado directly, the OPTIONS and GET requests happen back to back very quickly. Do I need to change something in my nginx config file to improve performance?
My nginx config looks like this:
worker_processes 1;
error_log logs/error.;

events {
    worker_connections 1024;
}

http {
    # Enumerate all the Tornado servers here
    upstream frontends {
        server 127.0.0.1:5052;
        server 127.0.0.1:5053;
        server 127.0.0.1:5054;
        server 127.0.0.1:5055;
        server 127.0.0.1:5056;
    }

    include mime.types;
    default_type application/octet-stream;
    keepalive_timeout 65;
    sendfile on;

    server {
        listen 5050;
        server_name x;
        ssl on;
        ssl_certificate certificate.crt;
        ssl_certificate_key keyfile.key;

        location / {
            proxy_pass_header Server;
            proxy_set_header Host $http_host;
            proxy_redirect off;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Scheme $scheme;
            proxy_pass https://frontends;
        }
    }
}
And my tornado files have this structure:
import tornado.httpserver
import tornado.ioloop
import tornado.web
from tornado.wsgi import WSGIContainer
from tornado.ioloop import IOLoop
from tornado.web import FallbackHandler

from flasky import app

tr = WSGIContainer(app)

application = tornado.web.Application([
    (r".*", FallbackHandler, dict(fallback=tr)),
])

if __name__ == '__main__':
    http_server = tornado.httpserver.HTTPServer(application, ssl_options={
        "certfile": "certificate.crt",
        "keyfile": "keyfile.key",
    })
    http_server.listen(5056, address='127.0.0.1')
    IOLoop.instance().start()
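One way to narrow down where the time goes is to time the OPTIONS and GET requests separately against nginx and against a single Tornado backend. A rough sketch, assuming the requests package; the URLs and path are placeholders:

import time
import requests

# verify=False because the certificate will not match 127.0.0.1 directly.
for base in ("https://x:5050", "https://127.0.0.1:5056"):
    for method in ("OPTIONS", "GET"):
        start = time.time()
        requests.request(method, base + "/some/endpoint", verify=False)
        print(base, method, round(time.time() - start, 3), "seconds")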

websockets proxied by nginx to gunicorn over https giving 400 (bad request)

I am having trouble establishing a websocket in my Flask web application.
On the client side, I am emitting a "ping" websocket event to the server every second. In the browser console, I see the following errors repeating each second:
POST https://example.com/socket.io/?EIO=3&transport=polling&t=LOkVYzQ&sid=88b5202cf38f40879ddfc6ce36322233 400 (BAD REQUEST)
GET https://example.com/socket.io/?EIO=3&transport=polling&t=LOkVZLN&sid=5a355bbccb6f4f05bd46379066876955 400 (BAD REQUEST)
WebSocket connection to 'wss://example.com/socket.io/?EIO=3&transport=websocket&sid=5a355bbccb6f4f05bd46379066876955' failed: WebSocket is closed before the connection is established.
I have the following nginx.conf
server {
    listen 80;
    server_name example.com www.example.com;
    return 301 https://$server_name$request_uri;
}

upstream app_server {
    # for UNIX domain socket setups
    server unix:/pathtowebapp/gunicorn.sock fail_timeout=0;
}

server {
    listen 443 ssl;
    server_name example.com www.example.com;

    keepalive_timeout 5;

    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';

    charset utf-8;
    client_max_body_size 30M;

    location / {
        try_files $uri @proxy_to_app;
    }

    location /socket.io {
        proxy_pass http://app_server;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Upgrade websocket;
        proxy_set_header Connection "upgrade";
        proxy_read_timeout 86400;
        proxy_buffering off;
        proxy_headers_hash_max_size 1024;
    }

    location /static {
        alias /pathtowebapp/webapp/static;
    }

    location @proxy_to_app {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # enable this if and only if you use HTTPS
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $http_host;
        # we don't want nginx trying to do something clever with
        # redirects; we set the Host: header above already.
        proxy_redirect off;
        #proxy_buffering off;
        proxy_pass http://app_server;
    }
}
I have been looking all over for examples of a websocket working with https using nginx in front of gunicorn.
My webpage loads, although the websocket connection is not successful.
The client side websocket is established using the following javascript:
var socket = io.connect('https://' + document.domain + ':' + location.port + namespace);
Here is my gunicorn.conf
import multiprocessing
bind = 'unix:/pathtowebapp/gunicorn.sock'
workers = multiprocessing.cpu_count() * 2 + 1
worker_class = 'eventlet'
[EDIT] If I configure nginx the way it is in the Flask-SocketIO documentation and just run (env)$ python deploy_app.py, then it works. But I was under the impression that this was not as production-ready as the setup I described above.
The problem is that you are running multiple workers on gunicorn. This is not a configuration that is currently supported, due to the very limited load balancer in gunicorn that does not support sticky sessions. Documentation reference: https://flask-socketio.readthedocs.io/en/latest/#gunicorn-web-server.
Instead, run several gunicorn instances, each with one worker, and then set up nginx to do the load balancing, using the ip_hash method so that sessions are sticky.
Also, in case you are not aware, if you run multiple servers you need to also run a message queue, so that the processes can coordinate. This is also covered in the documentation link above.
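For the coordination part, Flask-SocketIO accepts the message queue URL directly; a sketch assuming Redis, with illustrative names:

from flask import Flask
from flask_socketio import SocketIO

app = Flask(__name__)
# Several single-worker gunicorn instances behind nginx ip_hash can then
# share events through the queue (the Redis URL is illustrative).
socketio = SocketIO(app, message_queue='redis://localhost:6379/0')

Each instance would run as its own single-worker gunicorn process, and nginx's upstream block would list them with the ip_hash directive so sessions stay sticky.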

AttributeError when attempting to deploy gunicorn with HTTPS

I am attempting to deploy my server using Gunicorn over HTTPS. However, no matter what nginx configuration I use, I always get an AttributeError in Gunicorn. I don't think the problem lies with nginx, but with Gunicorn, though I don't know how to fix it. Here is the command I'm using to start my server:
gunicorn -b 0.0.0.0:8000 --certfile=/etc/ssl/cert_chain.crt --keyfile=/etc/ssl/server.key pyhub2.wsgi
And here is my nginx configuration file:
server {
    # port to listen on. Can also be set to an IP:PORT
    listen 80;
    server_name www.xxxxx.co;
    rewrite ^ https://$server_name$request_uri? permanent;
    include "/opt/bitnami/nginx/conf/vhosts/*.conf";
}

server {
    # port to listen on. Can also be set to an IP:PORT
    listen 443;
    ssl on;
    ssl_certificate /etc/ssl/cert_chain.crt;
    ssl_certificate_key /etc/ssl/server.key;
    server_name www.xxxx.co;

    access_log /opt/bitnami/nginx/logs/access.log;
    error_log /opt/bitnami/nginx/logs/error.log;

    location /xxxx.txt {
        root /home/bitnami;
    }

    location / {
        proxy_set_header X-Forwarded-For $scheme;
        proxy_buffering off;
        proxy_pass https://0.0.0.0:8000;
    }

    location /status {
        stub_status on;
        access_log off;
        allow 127.0.0.1;
        deny all;
    }

    # PageSpeed
    #pagespeed on;
    #pagespeed FileCachePath /opt/bitnami/nginx/var/ngx_pagespeed_cache;
    # Ensure requests for pagespeed optimized resources go to the pagespeed
    # handler and no extraneous headers get set.
    #location ~ "\.pagespeed\.([a-z]\.)?[a-z]{2}\.[^.]{10}\.[^.]+" { add_header "" ""; }
    #location ~ "^/ngx_pagespeed_static/" { }
    #location ~ "^/ngx_pagespeed_beacon$" { }
    #location /ngx_pagespeed_statistics { allow 127.0.0.1; deny all; }
    #location /ngx_pagespeed_message { allow 127.0.0.1; deny all; }

    location /static/ {
        autoindex on;
        alias /opt/bitnami/apps/django/django_projects/PyHub2/static/;
    }

    location /admin {
        proxy_pass https://127.0.0.1:8000;
        allow 96.241.66.109;
        deny all;
    }

    location /robots.txt {
        root /opt/bitnami/apps/django/django_projects/PyHub2;
    }

    include "/opt/bitnami/nginx/conf/vhosts/*.conf";
}
The following is the error that I get whenever attempting to connect:
Traceback (most recent call last):
  File "/opt/bitnami/python/lib/python2.7/site-packages/gunicorn/arbiter.py", line 515, in spawn_worker
    worker.init_process()
  File "/opt/bitnami/python/lib/python2.7/site-packages/gunicorn/workers/base.py", line 126, in init_process
    self.run()
  File "/opt/bitnami/python/lib/python2.7/site-packages/gunicorn/workers/sync.py", line 119, in run
    self.run_for_one(timeout)
  File "/opt/bitnami/python/lib/python2.7/site-packages/gunicorn/workers/sync.py", line 66, in run_for_one
    self.accept(listener)
  File "/opt/bitnami/python/lib/python2.7/site-packages/gunicorn/workers/sync.py", line 30, in accept
    self.handle(listener, client, addr)
  File "/opt/bitnami/python/lib/python2.7/site-packages/gunicorn/workers/sync.py", line 141, in handle
    self.handle_error(req, client, addr, e)
  File "/opt/bitnami/python/lib/python2.7/site-packages/gunicorn/workers/base.py", line 213, in handle_error
    self.log.exception("Error handling request %s", req.uri)
AttributeError: 'NoneType' object has no attribute 'uri'
[2015-12-29 22:12:26 +0000] [1887] [INFO] Worker exiting (pid: 1887)
[2015-12-30 03:12:26 +0000] [1921] [INFO] Booting worker with pid: 1921
And my wsgi.py, per request of Klaus D.:
"""
WSGI config for pyhub2 project.
It exposes the WSGI callable as a module-level variable named ``application``.
For more information on this file, see
https://docs.djangoproject.com/en/1.8/howto/deployment/wsgi/
"""
import os
from django.core.wsgi import get_wsgi_application
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "pyhub2.settings")
application = get_wsgi_application()
If nginx is handling the SSL negotiation and gunicorn is running upstream, you shouldn't need to pass --certfile=/etc/ssl/cert_chain.crt --keyfile=/etc/ssl/server.key when launching gunicorn.
You might try an nginx configuration to the tune of:
upstream app_server {
    server 127.0.0.1:8000;
}

server {
    listen 80;
    listen [::]:80 default_server ipv6only=on;
    server_name www.xxxxx.co;

    # Redirect to SSL
    rewrite ^ https://$server_name$request_uri? permanent;
    include "/opt/bitnami/nginx/conf/vhosts/*.conf";
}

server {
    # Listen for SSL requests
    listen 443;
    server_name www.xxxx.co;

    ssl on;
    ssl_certificate /etc/ssl/cert_chain.crt;
    ssl_certificate_key /etc/ssl/server.key;

    client_max_body_size 4G;
    keepalive_timeout 5;

    location = /favicon.ico { access_log off; log_not_found off; }

    access_log /opt/bitnami/nginx/logs/access.log;
    error_log /opt/bitnami/nginx/logs/error.log;

    location /xxxx.txt {
        root /home/bitnami;
    }

    location /status {
        stub_status on;
        access_log off;
        allow 127.0.0.1;
        deny all;
    }

    location /static {
        autoindex on;
        alias /opt/bitnami/apps/django/django_projects/PyHub2/static;
    }

    location /admin {
        include proxy_params;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        # Proxy to upstream app_server
        proxy_pass http://app_server;
        allow 96.241.66.109;
        deny all;
    }

    location /robots.txt {
        root /opt/bitnami/apps/django/django_projects/PyHub2;
    }

    location / {
        try_files $uri @app_proxy;
    }

    location @app_proxy {
        # Handle remaining requests, proxy to the upstream app_server
        include proxy_params;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://app_server;
    }

    include "/opt/bitnami/nginx/conf/vhosts/*.conf";
}
Also, you might try launching gunicorn with the --check-config flag to check for configuration errors outside of SSL, and ensure that you're able to access :8000 locally without SSL.
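Concretely, the launch could reduce to a gunicorn.conf.py like this, with TLS left entirely to nginx (a sketch; the bind address matches the app_server upstream above, the worker count is illustrative):

# gunicorn.conf.py: plain HTTP on localhost, TLS terminated by nginx.
bind = "127.0.0.1:8000"  # matches the app_server upstream above
workers = 3              # illustrative worker count
accesslog = "-"          # log requests to stdout

It would then be started with gunicorn -c gunicorn.conf.py pyhub2.wsgi, after a gunicorn --check-config pyhub2.wsgi pass.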
