Bad Request (400) Nginx + Gunicorn + Django + FreeBSD - python

I've been stuck for a while trying to figure out why I keep getting a 400 error, and the nginx logs leave me with few clues (the access log records the requests, which indicates DNS is resolving correctly).
The Gunicorn workers are running and the website can be accessed locally (through "links 127.0.0.1:8000"), but something seems to go wrong between Nginx and Gunicorn, since I cannot access the website using the domain.
Solving this would make me very happy :)
Added to the Django config:
ALLOWED_HOSTS = ['mydomain.tld', 'www.mydomain.tld']
Nginx config:
#user www-data;
worker_processes 4;
pid /var/run/nginx.pid;

events {
    worker_connections 768;
    # multi_accept on;
}

http {
    ##
    # Basic Settings
    ##
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    include /usr/local/etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # Logging Settings
    ##
    access_log /var/log/nginx-access.log;
    error_log /var/log/nginx-error.log info;

    ##
    # Gzip Settings
    ##
    gzip on;
    gzip_disable "msie6";
    #text/javascript;

    ##
    # Virtual Host Configs
    ##
    upstream myapp_app_server {
        # fail_timeout=0 means we always retry an upstream even if it failed
        # to return a good HTTP response (in case the Unicorn master nukes a
        # single worker for timing out).
        server unix:/webapps/django/myapp/run/gunicorn.sock fail_timeout=0;
    }

    server {
        #listen 80 is default
        server_name mydomain.se;
        return 301 http://www.mydomain.se$request_uri;
    }

    server {
        listen 80;
        server_name www.mydomain.se;
        # return 301 http://www.mydomain.se$request_uri;
        client_max_body_size 4G;
        access_log /webapps/django/myapp/logs/nginx-access.log;
        error_log /webapps/django/myapp/logs/nginx-error.log info;

        location /static/ {
            alias /webapps/django/myapp/static/;
        }

        location /media/ {
            alias /webapps/django/myapp/casinoguden/media/;
        }

        location / {
            if (!-f $request_filename) {
                proxy_pass http://myapp_app_server;
                break;
            }
            # include includes/botblock;
        }

        # Error pages
        error_page 500 502 503 504 /500.html;
        location = /500.html {
            root /webapps/django/myapp/static/;
        }
    }
}
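An aside, not from the original post: whenever a config like this changes, the syntax can be validated and nginx reloaded before re-testing. On FreeBSD that would typically be:

nginx -t
service nginx reload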
Gunicorn config shell script:
#!/usr/local/bin/bash
NAME="myapp" # Name of the application
DJANGODIR=/webapps/django/myapp/ # Django project directory
PROJECTDIR=/webapps/django/myapp/
SOCKFILE=/webapps/django/myapp/run/gunicorn.sock # we will communicate using this unix socket
USER=david # the user to run as
GROUP=wheel # the group to run as
NUM_WORKERS=3 # how many worker processes should Gunicorn spawn
DJANGO_SETTINGS_MODULE=myapp.settings_prod # which settings file should Django use
DJANGO_WSGI_MODULE=myapp.wsgi # WSGI module name
echo "Starting $NAME as `whoami`"
# Activate the virtual environment
cd /home/david
source venv16/bin/activate
cd $DJANGODIR
export DJANGO_SETTINGS_MODULE=$DJANGO_SETTINGS_MODULE
#export PYTHONPATH=$DJANGODIR:$PYTHONPATH
cd $PROJECTDIR
# Create the run directory if it doesn't exist
RUNDIR=$(dirname $SOCKFILE)
test -d $RUNDIR || mkdir -p $RUNDIR
# Start your Django Unicorn
# Programs meant to be run under supervisor should not daemonize themselves (do not use --daemon)
exec gunicorn ${DJANGO_WSGI_MODULE}:application \
--name $NAME \
--workers $NUM_WORKERS \
--user=$USER --group=$GROUP \
--bind=unix:$SOCKFILE \
--bind=127.0.0.1:8000 \
--log-level=debug \
--log-file=-
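Since the comments in the script mention supervisor, here is a minimal supervisord program block that could run it. This is a hedged sketch: the script path and log file below are assumptions, not taken from the original post.

[program:myapp]
; run the gunicorn start script above (hypothetical location)
command = /webapps/django/myapp/bin/gunicorn_start
user = david
autostart = true
autorestart = true
redirect_stderr = true
; hypothetical log path
stdout_logfile = /webapps/django/myapp/logs/gunicorn_supervisor.log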

Solved by adding proxy_set_header Host $host; under the location / directive in nginx.conf. Without that line, nginx sends the upstream name (myapp_app_server) as the Host header; that name is not in ALLOWED_HOSTS, so Django rejects every proxied request with a 400.
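For reference, a minimal sketch of the fixed location block (upstream name and paths taken from the config above):

location / {
    proxy_set_header Host $host;
    if (!-f $request_filename) {
        proxy_pass http://myapp_app_server;
        break;
    }
}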

Related

How does Nginx + Flask interpret the Python source files?

I'm trying to update a web project based on nginx + Flask + uwsgi. When I update any Python files, I find nginx is still serving the old ones. When I removed all *.pyc files, no new .pyc file was generated by the Python interpreter. It looks like there is a cache; I followed the answer of this question to try to clear nginx's cache, but it didn't work.
Does anybody know of a solution to make nginx serve the application from the new source files?
This is the nginx.conf
# For more information on configuration, see:
# * Official English Documentation: http://nginx.org/en/docs/
# * Official Russian Documentation: http://nginx.org/ru/docs/
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
# Load dynamic modules. See /usr/share/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;
events {
    worker_connections 1024;
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile off;
    #tcp_nopush on;
    #tcp_nodelay on;
    keepalive_timeout 65;
    #types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.
    include /etc/nginx/conf.d/*.conf;

    server {
        listen 80;
        server_name m.xxxx.com.cn;

        location / {
            include uwsgi_params;
            uwsgi_pass unix:/root/003_exampleproject/exampleproject.sock;
        }
    }
}
And these are the other config files:
(venv) [root@VM_0_3_centos 003_exampleproject]# cat /etc/systemd/system/exampleproject.service
[Unit]
Description=uWSGI instance to serve exampleproject
After=network.target
[Service]
User=root
Group=nginx
WorkingDirectory=/root/003_exampleproject/
Environment="PATH=/root/003_exampleproject/venv"
ExecStart=/root/003_exampleproject/venv/bin/uwsgi --ini exampleproject.ini --logto /var/log/uwsgi/exampleproject.log
[Install]
WantedBy=multi-user.target
(venv) [root@VM_0_3_centos 003_exampleproject]# cat exampleproject.ini
[uwsgi]
module = manage:app
master = true
processes = 5
socket = exampleproject.sock
chmod-socket = 660
vacuum = true
die-on-term = true
env = MBUS_ADMIN=root@example.com
Short answer:
service exampleproject restart
or
systemctl restart exampleproject.service
Long answer:
Nginx never interprets Python itself; uWSGI imports your code once at startup, so changed source files only take effect when the uWSGI process restarts. According to your config files, the service that needs to be restarted is:
exampleproject.service
If you run the command
service --status-all
you should see it listed there.
Then you can run
service exampleproject
to print the allowed commands.
Then try:
service exampleproject restart
Note: the service command should work on your CentOS, but if it doesn't work in your distribution you can try the alternative:
systemctl restart exampleproject.service
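An aside that is not part of the original answer: uWSGI can also reload its workers without a full service restart via its touch-reload option. A minimal sketch added to exampleproject.ini, where the trigger path is an assumption:

[uwsgi]
# restart workers whenever this file's mtime changes,
# e.g. via: touch /root/003_exampleproject/reload.trigger
touch-reload = /root/003_exampleproject/reload.trigger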

Hosting 2 Django projects inside one VPS server

I am trying to run 2 Django projects on one single VPS server under 2 different domains. I am using gunicorn as my project server.
I have created 2 virtualenvs, 2 supervisor configs, and 2 separate files in sites-available and sites-enabled.
My projects run well, but the problem is that at any one time only one project is served on both domains. Although my nginx sites-available files are given different domains as server_name, one Django project is still serving both domains.
Can anyone help?
/etc/nginx/sites-available/VaidText
upstream sample_project_server {
    # fail_timeout=0 means we always retry an upstream even if it failed
    # to return a good HTTP response (in case the Unicorn master nukes a
    # single worker for timing out).
    server unix:/home/example/test.example.com/TestEnv/run/gunicorn.sock fail_timeout=0;
}

server {
    listen 80;
    server_name test.example.com;
    client_max_body_size 4G;

    access_log /home/example/test.example.com/logs/nginx-access.log;
    error_log /home/example/test.example.com/logs/nginx-error.log;

    location /static/ {
        alias /home/ubuntu/static/;
    }

    location /media/ {
        alias /home/ubuntu/media/;
    }

    location / {
        # an HTTP header important enough to have its own Wikipedia entry:
        # http://en.wikipedia.org/wiki/X-Forwarded-For
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # enable this if and only if you use HTTPS, this helps Rack
        # set the proper protocol for doing redirects:
        # proxy_set_header X-Forwarded-Proto https;

        # pass the Host: header from the client right along so redirects
        # can be set properly within the Rack application
        proxy_set_header Host $http_host;

        # we don't want nginx trying to do something clever with
        # redirects, we set the Host: header above already.
        proxy_redirect off;

        # set "proxy_buffering off" *only* for Rainbows! when doing
        # Comet/long-poll stuff. It's also safe to set if you're
        # using only serving fast clients with Unicorn + nginx.
        # Otherwise you _want_ nginx to buffer responses to slow
        # clients, really.
        # proxy_buffering off;

        # Try to serve static files from nginx, no point in making an
        # *application* server like Unicorn/Rainbows! serve static files.
        if (!-f $request_filename) {
            proxy_pass http://sample_project_server;
            break;
        }
    }

    # Error pages
    error_page 500 502 503 504 /500.html;
    location = /500.html {
        root /home/ubuntu/static/;
    }
}
/etc/nginx/sites-available/SheikhText
upstream sample_project_server {
    # fail_timeout=0 means we always retry an upstream even if it failed
    # to return a good HTTP response (in case the Unicorn master nukes a
    # single worker for timing out).
    server unix:/home/example/sheikhnoman.example.com/SheikhEnv/run/gunicorn.sock fail_timeout=0;
}

server {
    listen 80;
    server_name sheikhnoman.example.com;
    client_max_body_size 4G;

    access_log /home/example/sheikhnoman.example.com/logs/nginx-access.log;
    error_log /home/example/sheikhnoman.example.com/logs/nginx-error.log;

    location /static/ {
        alias /home/ubuntu/sheikhnoman/static/;
    }

    location /media/ {
        alias /home/ubuntu/sheikhnoman/media/;
    }

    location / {
        # an HTTP header important enough to have its own Wikipedia entry:
        # http://en.wikipedia.org/wiki/X-Forwarded-For
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # enable this if and only if you use HTTPS, this helps Rack
        # set the proper protocol for doing redirects:
        # proxy_set_header X-Forwarded-Proto https;

        # pass the Host: header from the client right along so redirects
        # can be set properly within the Rack application
        proxy_set_header Host $http_host;

        # we don't want nginx trying to do something clever with
        # redirects, we set the Host: header above already.
        proxy_redirect off;

        # set "proxy_buffering off" *only* for Rainbows! when doing
        # Comet/long-poll stuff. It's also safe to set if you're
        # using only serving fast clients with Unicorn + nginx.
        # Otherwise you _want_ nginx to buffer responses to slow
        # clients, really.
        # proxy_buffering off;

        # Try to serve static files from nginx, no point in making an
        # *application* server like Unicorn/Rainbows! serve static files.
        if (!-f $request_filename) {
            proxy_pass http://sample_project_server;
            break;
        }
    }

    # Error pages
    error_page 500 502 503 504 /500.html;
    location = /500.html {
        root /home/ubuntu/sheikhnoman/static/;
    }
}
Your two nginx config files contain the same "sample_project_server" upstream name. Try to set a different upstream name in each of your files.
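For example, a minimal sketch of the renamed upstreams (the names here are illustrative, not prescribed by the original answer):

# /etc/nginx/sites-available/VaidText
upstream vaidtext_app_server {
    server unix:/home/example/test.example.com/TestEnv/run/gunicorn.sock fail_timeout=0;
}

# /etc/nginx/sites-available/SheikhText
upstream sheikhnoman_app_server {
    server unix:/home/example/sheikhnoman.example.com/SheikhEnv/run/gunicorn.sock fail_timeout=0;
}

Each server block's proxy_pass must then reference its own upstream name (http://vaidtext_app_server and http://sheikhnoman_app_server respectively).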

Serve static files through Nginx and uwsgi

I'm struggling to let Nginx serve static files and have tried about every configuration option I could find. Here's the app's basic structure.
ROOT
|_ static
|  |_ css/
|  |_ js/
|_ app.py
I've read the Nginx docs about this, but the files do not seem to be cached.
Here's my /etc/nginx/nginx.conf:
user www-data;

events {
    worker_connections 64;
}

http {
    gzip on; # Enables compression
    gzip_types
        "application/javascript;charset=utf-8" application/javascript text/javascript
        "text/css;charset=utf-8" text/css
        "text/plain;charset=utf-8" text/plain;

    server {
        listen 80;
        server_name www.domain.tk domain.tk;

        location /static/ {
            alias /home/myuser/myapp/static/;
            add_header Cache-Control public;
            expires 14d;
            access_log off;
        }

        location / {
            include uwsgi_params;
            uwsgi_pass unix:/home/myuser/myapp/myapp.sock; # REPLACE user
        }
    }
}
I'm using a uWSGI server and a Flask application, if that matters. The developer console says the app should leverage browser caching. How do I configure Nginx to serve the static files with caching headers?
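As an aside (not from the original thread), a quick way to see whether the /static/ location is being hit at all is to inspect the headers nginx returns for one asset; the file name here is hypothetical:

curl -I http://domain.tk/static/css/style.css

If Expires and Cache-Control are missing from the response, the request is not matching the location /static/ block (for example because of a path or alias mismatch), rather than the caching directives being wrong.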

nginx intercepting google oauth redirect

I have my django app set up on a gunicorn server that is proxied by nginx (also used for static files), and nginx is "intercepting" the GET request with the credentials code from Google! Why is nginx stealing the request instead of passing it to gunicorn to be processed?
here is my api info for my web application:
Client ID:
67490467925-v76j4e7bcdrps3ve37q41bnrtjm3jclj.apps.googleusercontent.com
Email address:
67490467925-v76j4e7bcdrps3ve37q41bnrtjm3jclj@developer.gserviceaccount.com
Client secret:
XquTw495rlwsHOodhWk
Redirect URIs: http://www.quickerhub.com
JavaScript origins: https://www.quickerhub.com
and here is the perfect GET request being stolen by nginx:
http://www.quickerhub.com/?code=4/bzqKIpj3UA3bBiyJfQzi3svzPBLZ.QoB_rXWZ6hUbmmS0T3UFEsPMOFF4fwI
and of course sweet nginx is giving me the "Welcome to nginx!" page...
Is there a way to tell nginx to pass these requests on to gunicorn? Or am I doing something incorrectly?
Thanks!
NGINX vhost config:
upstream interest_app_server {
    # fail_timeout=0 means we always retry an upstream even if it failed
    # to return a good HTTP response (in case the Unicorn master nukes a
    # single worker for timing out).
    server unix:/webapps/hello_django/run/gunicorn.sock fail_timeout=0;
}

server {
    listen 80;
    server_name quickerhub.com;
    client_max_body_size 4G;

    access_log /webapps/hello_django/logs/nginx-access.log;
    error_log /webapps/hello_django/logs/nginx-error.log;

    location /static {
        root /webapps/hello_django/interest/;
    }

    location /media {
        root /webapps/hello_django/interest/;
    }

    location /static/admin {
        root /webapps/hello_django/lib/python2.7/site-packages/django/contrib/admin/;
    }

    location / {
        # an HTTP header important enough to have its own Wikipedia entry:
        # http://en.wikipedia.org/wiki/X-Forwarded-For
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # enable this if and only if you use HTTPS, this helps Rack
        # set the proper protocol for doing redirects:
        # proxy_set_header X-Forwarded-Proto https;

        # pass the Host: header from the client right along so redirects
        # can be set properly within the Rack application
        proxy_set_header Host $http_host;

        # we don't want nginx trying to do something clever with
        # redirects, we set the Host: header above already.
        proxy_redirect off;

        # set "proxy_buffering off" *only* for Rainbows! when doing
        # Comet/long-poll stuff. It's also safe to set if you're
        # using only serving fast clients with Unicorn + nginx.
        # Otherwise you _want_ nginx to buffer responses to slow
        # clients, really.
        # proxy_buffering off;

        # Try to serve static files from nginx, no point in making an
        # *application* server like Unicorn/Rainbows! serve static files.
        if (!-f $request_filename) {
            proxy_pass http://interest_app_server;
            break;
        }
    }

    # Error pages
    error_page 500 502 503 504 /500.html;
    location = /500.html {
        root /webapps/hello_django/interest/templates/;
    }
}
You have
server_name quickerhub.com;
but the GET request is coming back to
http://www.quickerhub.com/?code=4/bzqKIpj3UA3bBiyJfQzi3svzPBLZ.QoB_rXWZ6hUbmmS0T3UFEsPMOFF4fwI
quickerhub.com != www.quickerhub.com, so nginx falls through to serving the default page (for when it can't find a vhost).
All you need to do is use
server_name www.quickerhub.com quickerhub.com;
or, even better, add this to "canonicalise" all your URLs to the version without www:
server {
    server_name www.quickerhub.com;
    expires epoch;
    add_header Cache-Control "no-cache, public, must-revalidate, proxy-revalidate";
    rewrite ^ http://quickerhub.com$request_uri permanent;
}
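A related aside, not part of the original answer: an explicit catch-all server makes unmatched Host headers fail visibly instead of landing on the stock welcome page. A minimal sketch:

server {
    listen 80 default_server;
    server_name _;
    # close the connection without a response for hosts we don't serve
    return 444;
}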

configuration fail nginx setting for tornadoweb, unknown directive "user"

I've got this error in nginx version 1.0.0:
nginx: [emerg] unknown directive "user" in /etc/nginx/sites-enabled/tornado:1
If I remove user www-data, the worker_processes directive gets the same error:
nginx: [emerg] unknown directive "worker_processes" in /etc/nginx/sites-enabled/tornado:1
I've searched on Google but still got nothing.
Please help.
This is my tornado file in sites-available:
user www-data www-data;
worker_processes 1;
error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
    use epoll;
}

http {
    # Enumerate all the Tornado servers here
    upstream frontends {
        server 127.0.0.1:8081;
        server 127.0.0.1:8082;
        server 127.0.0.1:8083;
        server 127.0.0.1:8084;
    }

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    access_log /var/log/nginx/access.log;

    keepalive_timeout 65;
    proxy_read_timeout 200;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    gzip on;
    gzip_min_length 1000;
    gzip_proxied any;
    gzip_types text/plain text/html text/css text/xml
               application/x-javascript application/xml
               application/atom+xml text/javascript;

    # Only retry if there was a communication error, not a timeout
    # on the Tornado server (to avoid propagating "queries of death"
    # to all frontends)
    proxy_next_upstream error;

    server {
        listen 8080;

        # Allow file uploads
        client_max_body_size 50M;

        location ^~ /static/ {
            root /var/www;
            if ($query_string) {
                expires max;
            }
        }

        location = /favicon.ico {
            rewrite (.*) /static/favicon.ico;
        }

        location = /robots.txt {
            rewrite (.*) /static/robots.txt;
        }

        location / {
            proxy_pass_header Server;
            proxy_set_header Host $http_host;
            proxy_redirect false;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Scheme $scheme;
            proxy_pass http://frontends;
        }
    }
}
Probably a bit overdue, but if anyone stumbles on this, here's a hint: it's probably a config collision; check /etc/nginx for a .conf file with the same directive.
Also worth checking is whether nginx.conf has an "include" line. It's very common and is a source of collisions.
For example:
evan@host:~/$ cat /etc/nginx/nginx.conf | grep include
include /etc/nginx/mime.types;
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
In this case, a directive in a file under /etc/nginx/sites-enabled/ will clash with the contents of nginx.conf. Make sure you don't double up on anything between the included files.
Just want to elaborate on Kjetil M.'s answer, as that worked for me but I did not get what he meant immediately. It wasn't until after a lot of attempts that I fixed the problem and had an "oh, that's what he meant" moment.
If your /etc/nginx/nginx.conf file and one of the config files in /etc/nginx/sites-enabled/ use the same directive, such as "user", you will run into this error. Just make sure only one version is active and comment out the others.
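To illustrate the split (a hedged sketch with hypothetical contents): top-level directives belong only in nginx.conf, and a file in sites-enabled is included inside the http block, so it must contain nothing above server level:

# /etc/nginx/nginx.conf -- top-level directives live only here
user www-data;
worker_processes 1;
events { worker_connections 1024; }
http {
    include /etc/nginx/sites-enabled/*;
}

# /etc/nginx/sites-enabled/tornado -- no user/worker_* directives here
server {
    listen 8080;
    # ...
}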
The worker_* directives must be at the top of the configuration, which means they must be in /etc/nginx/nginx.conf.
Example:
My first lines are:
user www-data;
worker_processes 4;
worker_connections 1024;
If you want to know how many workers are best for your server, you can run this command:
grep processor /proc/cpuinfo | wc -l
This tells you how many cores you have; it doesn't make sense to have more workers than cores for websites.
If you want to know how many connections your workers can handle, you can use this:
ulimit -n
Hope it helps.
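One caveat on the example above (an editorial aside, not from the original answer): in stock nginx, worker_connections is only valid inside the events block, and since nginx 1.2.5 worker_processes can be sized automatically. A minimal correct top of nginx.conf would look like:

user www-data;
worker_processes auto;  # sizes itself to the number of CPU cores

events {
    worker_connections 1024;
}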
I was getting the same error, but when I started nginx with the -c option, as in
nginx -c conf.d/myapp.conf
it worked fine.
Another thing: if you've created the config file on Windows and are using it on Linux, make sure the line endings are correct ("\r\n" vs. "\n") and that the file is not stored as Unicode (e.g. UTF-16).
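As an aside, two standard commands can confirm and fix this; the path is just an example:

file /etc/nginx/nginx.conf      # reports "... with CRLF line terminators" for Windows endings
dos2unix /etc/nginx/nginx.conf  # converts CRLF line endings to LF in place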
In my case, the error message appeared to show a space before user, even though there was no space there:
nginx: [emerg] unknown directive " user" in /etc/nginx/nginx.conf:1
Turns out that two of my .conf files had a BOM at the beginning of the file. Removing the BOM fixed the issue.
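For reference (an aside; the path is an example), with GNU sed a UTF-8 BOM can be stripped in place:

# remove a UTF-8 BOM (bytes EF BB BF) from the start of the file
sed -i '1s/^\xEF\xBB\xBF//' /etc/nginx/conf.d/myapp.conf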
