Missing module when deploying Django on AWS Ubuntu EC2 - python

I'm having some problems hosting a Django site on an Ubuntu AWS server. I have it all running fine on localhost.
I am following these instructions: https://github.com/ialbert/biostar-central/blob/master/docs/deploy.md
When I try to run it on the AWS instance using:
waitress-serve --port 8080 live.deploy.simple_wsgi:application
I get an import error:
No module named simple_wsgi
Then if I use the base settings file (not the cut-down one), I get an import error:
No module named logger
I've tried moving the settings files around, copying them to deploy.env and deploy.py and then to sample.env and sample.py, but I can't get it running. Please help.
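For reference, the import that waitress-serve attempts can be reproduced directly from the directory the server is started in; this narrows the problem down to the module path (the checkout location below is a guess):
cd ~/biostar-central   # hypothetical project root; run from wherever waitress-serve is launched
python -c "import live.deploy.simple_wsgi"   # raises the same "No module named" error if the module can't be found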

I had the same issue and opened an issue in the Biostar project.
Here is how I fixed it: by serving the application via gunicorn and nginx.
Here is my script:
cp live/staging.env live/deploy.env
cp live/staging.py live/deploy.py
# Replace the value of DJANGO_SETTINGS_MODULE to "live.deploy" thanks to http://stackoverflow.com/a/5955623/535203
sed -i -e '/DJANGO_SETTINGS_MODULE=/ s/=.*/=live.deploy/' live/deploy.env
[[ -n "$BIOSTAR_HOSTNAME" ]] && sed -i -e "/BIOSTAR_HOSTNAME=/ s/=.*/=$BIOSTAR_HOSTNAME/" live/deploy.env
source live/deploy.env
biostar.sh init import
# Nginx config based on http://docs.gunicorn.org/en/stable/deploy.html and https://github.com/ialbert/biostar-central/blob/production/conf/server/biostar.nginx.conf
tee "$LIVE_DIR/nginx.conf" <<EOF
worker_processes 1;
user nobody nogroup;
# 'user nobody nobody;' for systems with 'nobody' as a group instead
pid /tmp/nginx.pid;
error_log /tmp/nginx.error.log;
events {
worker_connections 1024; # increase if you have lots of clients
accept_mutex off; # set to 'on' if nginx worker_processes > 1
# 'use epoll;' to enable for Linux 2.6+
# 'use kqueue;' to enable for FreeBSD, OSX
}
http {
include /etc/nginx/mime.types;
# fallback in case we can't determine a type
default_type application/octet-stream;
access_log /tmp/nginx.access.log combined;
sendfile on;
upstream app_server {
# fail_timeout=0 means we always retry an upstream even if it failed
# to return a good HTTP response
# for UNIX domain socket setups
server unix:/tmp/biostar.sock fail_timeout=0;
# for a TCP configuration
# server 192.168.0.7:8000 fail_timeout=0;
}
server {
# if no Host match, close the connection to prevent host spoofing
listen 8080 default_server;
return 444;
}
server {
# use 'listen 80 deferred;' for Linux
# use 'listen 80 accept_filter=httpready;' for FreeBSD
listen 8080;
client_max_body_size 5M;
# set the correct host(s) for your site
server_name $SITE_DOMAIN;
keepalive_timeout 25s;
# path for static files
root $LIVE_DIR/export/;
location = /favicon.ico {
alias $LIVE_DIR/export/static/favicon.ico;
}
location = /sitemap.xml {
alias $LIVE_DIR/export/static/sitemap.xml;
}
location = /robots.txt {
alias $LIVE_DIR/export/static/robots.txt;
}
location /static/ {
autoindex on;
expires max;
add_header Pragma public;
add_header Cache-Control "public";
access_log off;
}
location / {
# checks for static file, if not found proxy to app
try_files \$uri #proxy_to_app;
}
location #proxy_to_app {
proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for;
# enable this if and only if you use HTTPS
# proxy_set_header X-Forwarded-Proto https;
proxy_set_header Host \$http_host;
# we don't want nginx trying to do something clever with
# redirects, we set the Host: header above already.
proxy_redirect off;
proxy_pass http://app_server;
}
}
}
EOF
gunicorn -b unix:/tmp/biostar.sock biostar.wsgi &
# Start Nginx in non daemon mode thanks to http://stackoverflow.com/a/28099946/535203
nginx -g 'daemon off;' -c "$LIVE_DIR/nginx.conf"
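As a sanity check before (or instead of) the last two commands, both the settings module and the generated nginx file can be verified up front; this assumes $LIVE_DIR is an absolute path, as in the script above:
source live/deploy.env
python -c "import live.deploy"         # fails with the same ImportError if the settings module is broken
nginx -t -c "$LIVE_DIR/nginx.conf"     # validates the generated config without starting nginx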

Related

NGINX server blocks don't work as expected

I know this isn't the most appropriate place to ask about nginx, but I've been stuck on an issue for a few days and still have no idea how to solve it.
I would like to use nginx to redirect users from domain.com:3001 to sub.domain.com. The application on port 3001 runs in a Docker container. I didn't add any files to sites-available/sites-enabled; instead I added two server blocks (vhosts) in my conf.d directory. In the server block I set $upstream and the resolver according to the record in my /etc/resolv.conf file. The problem is that when I open sub.domain.com in a browser, I get either a message that the IP address could not be associated with any server (DNS_PROBE_FINISHED_NXDOMAIN) or 50x errors.
However, when I run curl sub.domain.com from the server itself, I get a 200 with the index.html response; the same command fails from my local PC. The server's domain is on a private network. Do you have any idea what my configuration files are missing? Maybe there is an issue with the listen port when the app runs in Docker, or maybe something is wrong with the nginx version? When I installed nginx, the conf.d directory was empty, with no default.conf. I am lost...
Any help will be highly appreciated.
Here are my configuration files:
server.conf:
server {
    listen 80;
    listen 443 ssl;
    server_name sub.domain.net;
    #charset koi8-r;
    #access_log /var/log/nginx/host.access.log main;

    ssl_certificate /etc/nginx/ssl/cer.crt;
    ssl_certificate_key /etc/nginx/ssl/private.key;

    #set_real_ip_from 127.0.0.1;
    #real_ip_header X-Real-IP;
    #real_ip_recursive on;

    # location / {
    #     root /usr/share/nginx/html;
    #     index index.html index.htm;
    # }

    location / {
        resolver 10.257.10.4;
        set $upstream https://127.0.0.1:3000;
        proxy_pass $upstream;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;

        #error_page 404 /404.html;

        # redirect server error pages to the static page /50x.html
        #
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root /usr/share/nginx/html;
        }

        # proxy the PHP scripts to Apache listening on 127.0.0.1:80
        #
        #location ~ \.php$ {
        #    proxy_pass http://127.0.0.1;
        #}

        # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
        #
        #location ~ \.php$ {
        #    root html;
        #    fastcgi_pass 127.0.0.1:9000;
        #    fastcgi_index index.php;
        #    fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
        #    include fastcgi_params;
        #}

        # deny access to .htaccess files, if Apache's document root
        # concurs with nginx's one
        #
        #location ~ /\.ht {
        #    deny all;
        #}
    }
}
nginx.conf
#user nginx;
worker_processes 1;

#error_log /var/log/nginx/error.log;
#error_log /var/log/nginx/error.log notice;
#error_log /var/log/nginx/error.log info;
#pid /var/run/nginx.pid;

include /etc/nginx/modules.conf.d/*.conf;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;

    #log_format main '$remote_addr - $remote_user [$time_local] '
    #                '$status $body_bytes_sent "$http_referer" '
    #                '"$http_user_agent" "$http_x_forwarded_for"';
    #access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;
    #keepalive_timeout 0;
    keepalive_timeout 65;
    #tcp_nodelay on;

    #gzip on;
    #gzip_disable "MSIE [1-6]\.(?!.*SV1)";

    server_tokens off;

    include /etc/nginx/conf.d/*.conf;
}
# override global parameters e.g. worker_rlimit_nofile
include /etc/nginx/*global_params;
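(For reference, a quick way to tell a DNS problem from an nginx problem when testing from the local PC; 10.0.0.5 below is a hypothetical stand-in for the server's private IP:)
nslookup sub.domain.net   # NXDOMAIN here means the name never resolves from the PC, so nginx isn't at fault
curl -vk --resolve sub.domain.net:443:10.0.0.5 https://sub.domain.net/   # bypasses DNS and talks to the vhost directly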

Nginx/Uwsgi/Flask POST times out if body is too large

I'm using a Docker image based on https://github.com/tiangolo/uwsgi-nginx-flask-docker/tree/master/python3.6. Inside it I'm running a Python app that accepts a POST, does some processing on the JSON body, and returns a simple JSON response. A post like this:
curl -H "Content-Type: application/json" -X POST http://10.4.5.168:5002/test -d '{"test": "test"}'
works fine. If, however, I POST a larger JSON file, I get a 504 Gateway Timeout:
curl -H "Content-Type: application/json" -X POST http://10.4.5.168:5002/test -d @some_6mb_file.json
I have a feeling there is an issue with the communication between Nginx and uWSGI, but I'm not sure how to fix it.
EDIT: I jumped inside the Docker container and restarted nginx manually to get better logging. I'm receiving the following error:
2018/12/21 20:47:45 [error] 611#611: *1 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 10.4.3.168, server: , request: "POST /model/refuel_verification_model/predict HTTP/1.1", upstream: "uwsgi://unix:///tmp/uwsgi.sock", host: "10.4.3.168:5002"
From inside the container, I started a second instance of my Flask app, running without Nginx and uWSGI, and it worked fine. The response took approximately 5 seconds to be returned (due to the processing time of the data).
Configurations:
/etc/nginx/nginx.conf:
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;
    keepalive_timeout 65;
    #gzip on;

    include /etc/nginx/conf.d/*.conf;
}
daemon off;
/etc/nginx/conf.d/nginx.conf:
server {
    listen 80;
    location / {
        try_files $uri @app;
    }
    location @app {
        include uwsgi_params;
        uwsgi_pass unix:///tmp/uwsgi.sock;
    }
    location /static {
        alias /app/static;
    }
}
/etc/nginx/conf.d/upload.conf:
client_max_body_size 128m;
client_body_buffer_size 128m;
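(As the asker's own follow-up below shows, the root cause here wasn't a timeout setting, but a cheap way to rule that out is to raise the uWSGI proxy timeouts in a file alongside upload.conf; 300s is an arbitrary example value:)
tee /etc/nginx/conf.d/timeouts.conf <<'EOF'
uwsgi_read_timeout 300s;
uwsgi_send_timeout 300s;
EOF
nginx -s reload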
I encountered this behavior when proxying to aiohttp (Python) apps.
In my case, in the location block for the proxy, I needed to disable caching.
I removed the following from the block:
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
As a result, the working config is something like this:
server {
    listen 80;
    location / {
        try_files $uri @app;
    }
    location @app {
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_redirect off;
        proxy_buffering off;
        proxy_pass http://myapp;
    }
    location /static {
        alias /app/static;
    }
}
There was an issue with TensorFlow. I had loaded a TensorFlow model during app initialization and then tried to use it later. Because of the threading done by the web server and the non-thread-safe nature of TensorFlow sessions, the processing hung, leading to the timeout. (The usual workaround in the TF 1.x era was to keep a reference to the default graph at load time and re-enter it in the request handler, or simply to run the worker single-threaded.)

Django on Nginx help needed

I am trying to deploy my website to AWS EC2. It's in Python/Django and I want to learn how to deploy websites myself. I had some issues with AWS's EBS, so first I'd like to learn how to do it manually.
I decided to use gunicorn and nginx for this.
I can run the website using gunicorn in a virtualenv, and I created the following script in /home/ec2-user/gunicorn_start.bash:
#!/bin/bash

NAME="davidbiencom"                                # Name of the application
DJANGODIR=/home/ec2-user/davidbien                 # Django project directory
SOCKFILE=/home/ec2-user/virtual/run/gunicorn.sock
USER=ec2-user
GROUP=ec2-user
NUM_WORKERS=3
DJANGO_SETTINGS_MODULE=davidbiencom.settings
DJANGO_WSGI_MODULE=davidbiencom.wsgi

echo "Starting $NAME as `whoami`"

# Activate the virtual environment
cd $DJANGODIR
source /home/ec2-user/virtual/bin/activate
export DJANGO_SETTINGS_MODULE=$DJANGO_SETTINGS_MODULE
export PYTHONPATH=$DJANGODIR:$PYTHONPATH

# Create the run directory if it doesn't exist
RUNDIR=$(dirname $SOCKFILE)
test -d $RUNDIR || mkdir -p $RUNDIR

# Start your Django Unicorn
# Programs meant to be run under supervisor should not daemonize themselves (do not use --daemon)
exec gunicorn ${DJANGO_WSGI_MODULE}:application \
    --name $NAME \
    --workers $NUM_WORKERS \
    --user=$USER --group=$GROUP \
    --bind=unix:$SOCKFILE \
    --log-level=debug \
    --log-file=-
This runs fine, I believe, as there are no errors.
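(Before involving nginx, the socket itself can be smoke-tested directly; this needs curl 7.40 or newer, and uses the socket path from the script above:)
curl --unix-socket /home/ec2-user/virtual/run/gunicorn.sock http://localhost/
# any HTTP response means gunicorn is answering on the socket; a connection error means it is not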
Next I install nginx and start the service, and confirm it's running because I get the welcome page. Then I do the following.
In /etc/nginx/nginx.conf I add the following line inside the http block:
include /etc/nginx/sites-enabled/*.conf;
I then create two folders in /etc/nginx/: sites-available and sites-enabled.
In sites-available I create a file davidbien.conf with the following contents (UPDATED):
upstream app_server {
    # fail_timeout=0 means we always retry an upstream even if it failed
    # to return a good HTTP response
    # for UNIX domain socket setups
    server unix:/home/ec2-user/virtual/run/gunicorn.sock fail_timeout=0;
    # for a TCP configuration
    # server 192.168.0.7:8000 fail_timeout=0;
}

server {
    listen 80;
    server_name 35.176.185.50;

    # Max upload size
    client_max_body_size 75M; # adjust to taste

    location /static/ {
        root /home/ec2-user/davidbien/static;
    }

    location / {
        # checks for static file, if not found proxy to app
        try_files $uri @proxy_to_app;
    }

    location @proxy_to_app {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # enable this if and only if you use HTTPS
        # proxy_set_header X-Forwarded-Proto https;
        proxy_set_header Host $http_host;
        # we don't want nginx trying to do something clever with
        # redirects, we set the Host: header above already.
        proxy_redirect off;
        proxy_pass http://app_server;
    }
}
I save this file and run the following command:
ln -s /etc/nginx/sites-available/davidbiencom.conf /etc/nginx/sites-enabled/davidbiencom.conf
After this is done I restart nginx, and when I enter the IP address I get a 502 Bad Gateway error.
What could be wrong here?
Thank you.
EDIT:
Here are the error logs from /var/log/nginx/error.log:
2017/11/10 22:26:27 [error] 27620#0: *1 open() "/usr/share/nginx/html/favicon.ico" failed (2: No such file or directory), client: 2.96.149.96, server: localhost, request: "GET /favicon.ico HTTP/1.1", host: "35.176.185.50", referrer: "http://35.176.185.50/"
EDIT 2:
Here's the /etc/nginx/nginx.conf file:
http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    #include /etc/nginx/mime.types;
    #default_type application/octet-stream;

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.
    include /etc/nginx/sites-enabled/*.conf;

    index index.html index.htm;

    server {
        listen 80 default_server;
        listen [::]:80 default_server;
        server_name localhost;
        root /usr/share/nginx/html;

        # Load configuration files for the default server block.
        include /etc/nginx/default.d/*.conf;

        location / {
            # redirect server error pages to the static page /40x.html
            #
            error_page 404 /404.html;
            location = /40x.html {
            }

            # redirect server error pages to the static page /50x.html
            #
            error_page 500 502 503 504 /50x.html;
            location = /50x.html {
            }

            # proxy the PHP scripts to Apache listening on 127.0.0.1:80
            #
            #location ~ \.php$ {
            #    proxy_pass http://127.0.0.1;
            #}

            # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
            #
            #location ~ \.php$ {
            #    root html;
            #    fastcgi_pass 127.0.0.1:9000;
            #    fastcgi_index index.php;
            #    fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
            #    include fastcgi_params;
            #}

            # deny access to .htaccess files, if Apache's document root
            # concurs with nginx's one
            #
            #location ~ /\.ht {
            #    deny all;
            #}
        }
    }
}
You're telling nginx to forward requests to 127.0.0.1:3031, but from what I can see in your start script, gunicorn is bound to a Unix socket. If you start your gunicorn worker on 127.0.0.1:3031 instead, it should work.
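A 502 with a socket under a home directory is also very often a permissions problem rather than a binding problem: the nginx worker user must be able to traverse the path and write to the socket (connecting to a Unix socket requires write permission). A minimal check, with paths from the question; the nginx user name varies by distro:
sudo -u nginx test -w /home/ec2-user/virtual/run/gunicorn.sock && echo writable || echo not-writable
namei -l /home/ec2-user/virtual/run/gunicorn.sock   # shows which path component blocks access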

nginx: [warn] conflicting server name "example.com" on 0.0.0.0:80, ignored

When I try to restart nginx, I get the following warning:
nginx: [warn] conflicting server name "example.io" on 0.0.0.0:80, ignored
I used my deploy scripts to deploy two domains. The first works fine, but the second gives this warning.
Here is my nginx.conf file:
#
worker_processes 2;
#
user nginx nginx;
#
pid /opt/nginx/pids/nginx.pid;
error_log /opt/nginx/logs/error.log;
#
events {
    worker_connections 4096;
}
#
http {
    #
    log_format full_log '$remote_addr - $remote_user $request_time $upstream_response_time '
                        '[$time_local] "$request" $status $body_bytes_sent $request_body "$http_referer" '
                        '"$http_user_agent" "$http_x_forwarded_for"';
    #
    access_log /opt/nginx/logs/access.log;

    ssl on;
    ssl_certificate /opt/nginx/cert/example.crt;
    ssl_certificate_key /opt/nginx/cert/example.key;
    #
    include /opt/nginx/conf/vhosts/*.conf;

    # Deny access to any other host
    server {
        server_name example.io; #default
        return 444;
    }
}
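Since this file includes every vhost from /opt/nginx/conf/vhosts/*.conf, the conflicting server_name is most likely declared in two of those files (note the catch-all server block above also names example.io). A quick way to find the duplicate, using the paths from the config above:
grep -rn "server_name" /opt/nginx/conf/vhosts/
nginx -t   # re-check the configuration after removing the duplicate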
Not sure, but try changing the server name in
/etc/nginx/sites-enabled/default
It should help.
I had the same problem. To resolve it, I looked for the conflicting domain "example.io" in the conf files.
In the following file, a server section for "example.io" had been added at the bottom. The "default_server" section was untouched, but an extra section had been appended at the end of the file:
/etc/nginx/sites-available/default
server {
    listen 80;
    listen [::]:80;
    server_name example.io;
    root /var/www/html;
    index index.html;
    location / {
        try_files $uri $uri/ =404;
    }
}
So I think you need to search for the server_name in all files inside the /etc/nginx/sites-available folder.
In any case, your domain name has been added somewhere under
/etc/nginx/sites-enabled/
To confirm, search using
grep -r mydomain.com /etc/nginx/sites-enabled/*
and remove the duplicate domain name.
Then restart Nginx using
sudo systemctl restart nginx
Just change listen 80 to another port, such as 8000 or 5000; anything but 80.
As good practice, don't edit nginx.conf itself; create your own file.conf and link it in, so it's clear what each site's configuration is.
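A minimal sketch of that approach (file name, port, root, and server_name below are all placeholders):
sudo tee /etc/nginx/sites-available/mysite.conf <<'EOF'
server {
    listen 8000;                      # any free port, per the suggestion above
    server_name mysite.example.com;   # placeholder
    root /var/www/mysite;
}
EOF
sudo ln -s /etc/nginx/sites-available/mysite.conf /etc/nginx/sites-enabled/
sudo nginx -t && sudo systemctl reload nginx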

AttributeError when attempting to deploy gunicorn with HTTPS

I am attempting to deploy my server using Gunicorn over HTTPS. However, no matter what nginx configuration I use, I always get an AttributeError in Gunicorn. I don't think the problem lies with Nginx, though, but with gunicorn, and I don't know how to fix it. Here is the command I'm using to start my server:
gunicorn -b 0.0.0.0:8000 --certfile=/etc/ssl/cert_chain.crt --keyfile=/etc/ssl/server.key pyhub2.wsgi
And here is my nginx configuration file:
server {
    # port to listen on. Can also be set to an IP:PORT
    listen 80;
    server_name www.xxxxx.co;
    rewrite ^ https://$server_name$request_uri? permanent;
    include "/opt/bitnami/nginx/conf/vhosts/*.conf";
}

server {
    # port to listen on. Can also be set to an IP:PORT
    listen 443;
    ssl on;
    ssl_certificate /etc/ssl/cert_chain.crt;
    ssl_certificate_key /etc/ssl/server.key;
    server_name www.xxxx.co;

    access_log /opt/bitnami/nginx/logs/access.log;
    error_log /opt/bitnami/nginx/logs/error.log;

    location /xxxx.txt {
        root /home/bitnami;
    }

    location / {
        proxy_set_header X-Forwarded-For $scheme;
        proxy_buffering off;
        proxy_pass https://0.0.0.0:8000;
    }

    location /status {
        stub_status on;
        access_log off;
        allow 127.0.0.1;
        deny all;
    }

    # PageSpeed
    #pagespeed on;
    #pagespeed FileCachePath /opt/bitnami/nginx/var/ngx_pagespeed_cache;

    # Ensure requests for pagespeed optimized resources go to the pagespeed
    # handler and no extraneous headers get set.
    #location ~ "\.pagespeed\.([a-z]\.)?[a-z]{2}\.[^.]{10}\.[^.]+" { add_header "" ""; }
    #location ~ "^/ngx_pagespeed_static/" { }
    #location ~ "^/ngx_pagespeed_beacon$" { }
    #location /ngx_pagespeed_statistics { allow 127.0.0.1; deny all; }
    #location /ngx_pagespeed_message { allow 127.0.0.1; deny all; }

    location /static/ {
        autoindex on;
        alias /opt/bitnami/apps/django/django_projects/PyHub2/static/;
    }

    location /admin {
        proxy_pass https://127.0.0.1:8000;
        allow 96.241.66.109;
        deny all;
    }

    location /robots.txt {
        root /opt/bitnami/apps/django/django_projects/PyHub2;
    }

    include "/opt/bitnami/nginx/conf/vhosts/*.conf";
}
The following is the error that I get whenever attempting to connect:
Traceback (most recent call last):
  File "/opt/bitnami/python/lib/python2.7/site-packages/gunicorn/arbiter.py", line 515, in spawn_worker
    worker.init_process()
  File "/opt/bitnami/python/lib/python2.7/site-packages/gunicorn/workers/base.py", line 126, in init_process
    self.run()
  File "/opt/bitnami/python/lib/python2.7/site-packages/gunicorn/workers/sync.py", line 119, in run
    self.run_for_one(timeout)
  File "/opt/bitnami/python/lib/python2.7/site-packages/gunicorn/workers/sync.py", line 66, in run_for_one
    self.accept(listener)
  File "/opt/bitnami/python/lib/python2.7/site-packages/gunicorn/workers/sync.py", line 30, in accept
    self.handle(listener, client, addr)
  File "/opt/bitnami/python/lib/python2.7/site-packages/gunicorn/workers/sync.py", line 141, in handle
    self.handle_error(req, client, addr, e)
  File "/opt/bitnami/python/lib/python2.7/site-packages/gunicorn/workers/base.py", line 213, in handle_error
    self.log.exception("Error handling request %s", req.uri)
AttributeError: 'NoneType' object has no attribute 'uri'
[2015-12-29 22:12:26 +0000] [1887] [INFO] Worker exiting (pid: 1887)
[2015-12-30 03:12:26 +0000] [1921] [INFO] Booting worker with pid: 1921
And my wsgi.py, per request of Klaus D.:
"""
WSGI config for pyhub2 project.
It exposes the WSGI callable as a module-level variable named ``application``.
For more information on this file, see
https://docs.djangoproject.com/en/1.8/howto/deployment/wsgi/
"""
import os
from django.core.wsgi import get_wsgi_application
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "pyhub2.settings")
application = get_wsgi_application()
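It may also help to confirm the WSGI module imports cleanly on its own, since this particular traceback (handle_error called with req still None) can mask the underlying exception. A quick check, with the project path inferred from the static alias in the nginx config above:
cd /opt/bitnami/apps/django/django_projects/PyHub2
python -c "from pyhub2.wsgi import application; print(application)"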
If nginx is handling the SSL negotiation and gunicorn is running upstream, you shouldn't need to pass --certfile=/etc/ssl/cert_chain.crt --keyfile=/etc/ssl/server.key when launching gunicorn.
You might try an Nginx configuration along these lines:
upstream app_server {
    server 127.0.0.1:8000;
}

server {
    listen 80;
    listen [::]:80 default_server ipv6only=on;
    server_name www.xxxxx.co;

    # Redirect to SSL
    rewrite ^ https://$server_name$request_uri? permanent;
    include "/opt/bitnami/nginx/conf/vhosts/*.conf";
}

server {
    # Listen for SSL requests
    listen 443;
    server_name www.xxxx.co;

    ssl on;
    ssl_certificate /etc/ssl/cert_chain.crt;
    ssl_certificate_key /etc/ssl/server.key;

    client_max_body_size 4G;
    keepalive_timeout 5;

    location = /favicon.ico { access_log off; log_not_found off; }

    access_log /opt/bitnami/nginx/logs/access.log;
    error_log /opt/bitnami/nginx/logs/error.log;

    location /xxxx.txt {
        root /home/bitnami;
    }

    location /status {
        stub_status on;
        access_log off;
        allow 127.0.0.1;
        deny all;
    }

    location /static {
        autoindex on;
        alias /opt/bitnami/apps/django/django_projects/PyHub2/static;
    }

    location /admin {
        include proxy_params;
        proxy_set_header X-Forwarded-For $scheme;
        proxy_set_header X-Forwarded-Proto $scheme;
        # Proxy to upstream app_server
        proxy_pass http://app_server;
        allow 96.241.66.109;
        deny all;
    }

    location /robots.txt {
        root /opt/bitnami/apps/django/django_projects/PyHub2;
    }

    location / {
        try_files $uri @app_proxy;
    }

    location @app_proxy {
        # Handle requests, proxy to SSL
        include proxy_params;
        proxy_set_header X-Forwarded-For $scheme;
        proxy_set_header X-Forwarded-Proto $scheme;
        # Proxy to upstream app_server
        proxy_pass http://app_server;
    }

    include "/opt/bitnami/nginx/conf/vhosts/*.conf";
}
Also, you might try launching gunicorn with the --check-config flag to check for configuration errors outside of SSL, and ensure that you're able to access :8000 locally without SSL.
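Concretely, a minimal sketch of that setup: terminate TLS in nginx and run gunicorn as a plain-HTTP upstream (module name from the question; 127.0.0.1:8000 matches the upstream above):
gunicorn -b 127.0.0.1:8000 pyhub2.wsgi   # no --certfile/--keyfile; nginx owns the certificates
curl -i http://127.0.0.1:8000/           # should return a normal HTTP response locally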
