I am trying to run two Django projects on a single VPS under two different domains, using Gunicorn as the application server.
I have created two virtual environments, two Supervisor configurations, and two separate files in sites-available and sites-enabled.
The projects themselves run fine, but only one of them is ever served, on both domains at once. Even though my nginx sites-available files specify different domains as server_name, the same Django project answers on both domains.
Can anyone help?
/etc/nginx/sites-available/VaidText
upstream sample_project_server {
# fail_timeout=0 means we always retry an upstream even if it failed
# to return a good HTTP response (in case the Unicorn master nukes a
# single worker for timing out).
server unix:/home/example/test.example.com/TestEnv/run/gunicorn.sock fail_timeout=0;
}
server {
listen 80;
server_name test.example.com;
client_max_body_size 4G;
access_log /home/example/test.example.com/logs/nginx-access.log;
error_log /home/example/test.example.com/logs/nginx-error.log;
location /static/ {
alias /home/ubuntu/static/;
}
location /media/ {
alias /home/ubuntu/media/;
}
location / {
# an HTTP header important enough to have its own Wikipedia entry:
# http://en.wikipedia.org/wiki/X-Forwarded-For
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
# enable this if and only if you use HTTPS, this helps Rack
# set the proper protocol for doing redirects:
# proxy_set_header X-Forwarded-Proto https;
# pass the Host: header from the client right along so redirects
# can be set properly within the Rack application
proxy_set_header Host $http_host;
# we don't want nginx trying to do something clever with
# redirects, we set the Host: header above already.
proxy_redirect off;
# set "proxy_buffering off" *only* for Rainbows! when doing
# Comet/long-poll stuff. It's also safe to set if you're
# using only serving fast clients with Unicorn + nginx.
# Otherwise you _want_ nginx to buffer responses to slow
# clients, really.
# proxy_buffering off;
# Try to serve static files from nginx, no point in making an
# *application* server like Unicorn/Rainbows! serve static files.
if (!-f $request_filename) {
proxy_pass http://sample_project_server;
break;
}
}
# Error pages
error_page 500 502 503 504 /500.html;
location = /500.html {
root /home/ubuntu/static/;
}
}
/etc/nginx/sites-available/SheikhText
upstream sample_project_server {
# fail_timeout=0 means we always retry an upstream even if it failed
# to return a good HTTP response (in case the Unicorn master nukes a
# single worker for timing out).
server unix:/home/example/sheikhnoman.example.com/SheikhEnv/run/gunicorn.sock fail_timeout=0;
}
server {
listen 80;
server_name sheikhnoman.example.com;
client_max_body_size 4G;
access_log /home/example/sheikhnoman.example.com/logs/nginx-access.log;
error_log /home/example/sheikhnoman.example.com/logs/nginx-error.log;
location /static/ {
alias /home/ubuntu/sheikhnoman/static/;
}
location /media/ {
alias /home/ubuntu/sheikhnoman/media/;
}
location / {
# an HTTP header important enough to have its own Wikipedia entry:
# http://en.wikipedia.org/wiki/X-Forwarded-For
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
# enable this if and only if you use HTTPS, this helps Rack
# set the proper protocol for doing redirects:
# proxy_set_header X-Forwarded-Proto https;
# pass the Host: header from the client right along so redirects
# can be set properly within the Rack application
proxy_set_header Host $http_host;
# we don't want nginx trying to do something clever with
# redirects, we set the Host: header above already.
proxy_redirect off;
# set "proxy_buffering off" *only* for Rainbows! when doing
# Comet/long-poll stuff. It's also safe to set if you're
# using only serving fast clients with Unicorn + nginx.
# Otherwise you _want_ nginx to buffer responses to slow
# clients, really.
# proxy_buffering off;
# Try to serve static files from nginx, no point in making an
# *application* server like Unicorn/Rainbows! serve static files.
if (!-f $request_filename) {
proxy_pass http://sample_project_server;
break;
}
}
# Error pages
error_page 500 502 503 504 /500.html;
location = /500.html {
root /home/ubuntu/sheikhnoman/static/;
}
}
Your two nginx config files contain the same "sample_project_server" upstream name. Try setting a different upstream name in each file (and update the corresponding proxy_pass to match).
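As a minimal sketch, the SheikhText file could look like the following. The socket path and server_name are the ones already in that file; the name sheikh_project_server is only an example, any name that differs from the one in VaidText will do, as long as proxy_pass references it:

upstream sheikh_project_server {
    server unix:/home/example/sheikhnoman.example.com/SheikhEnv/run/gunicorn.sock fail_timeout=0;
}

server {
    listen 80;
    server_name sheikhnoman.example.com;

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        # must point at the renamed upstream
        proxy_pass http://sheikh_project_server;
    }
}

After renaming, run nginx -t and reload nginx so that each server_name block proxies to its own Gunicorn socket.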
Related
I started a Django project on digitalocean.com. The problem is that the site loads fine at my IP address, but if I go through the domain I get "Bad Request (400)". The domain was bought at godaddy.com, and the domain settings are correct, in accordance with the hosting tutorial https://www.digitalocean.com/community/tutorials/how-to-point-to-digitalocean-nameservers-from-common-domain-registrars:
ns1.digitalocean.com
ns2.digitalocean.com
ns3.digitalocean.com
Configuring nginx:
upstream bbb_app_server {
# fail_timeout=0 means we always retry an upstream even if it failed
# to return a good HTTP response (in case the Unicorn master nukes a
# single worker for timing out).
server unix:/webapps/bbb/run/gunicorn.sock fail_timeout=0;
}
server {
listen 80;
server_name example.ru;
client_max_body_size 4G;
access_log /webapps/bbb/logs/nginx-access.log;
error_log /webapps/bbb/logs/nginx-error.log;
location /static/ {
alias /webapps/bbb/static/;
}
location /media/ {
alias /webapps/bbb/media/;
}
location / {
# an HTTP header important enough to have its own Wikipedia entry:
# http://en.wikipedia.org/wiki/X-Forwarded-For
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
# enable this if and only if you use HTTPS, this helps Rack
# set the proper protocol for doing redirects:
# proxy_set_header X-Forwarded-Proto https;
# pass the Host: header from the client right along so redirects
# can be set properly within the Rack application
proxy_set_header Host $http_host;
# we don't want nginx trying to do something clever with
# redirects, we set the Host: header above already.
proxy_redirect off;
# set "proxy_buffering off" *only* for Rainbows! when doing
# Comet/long-poll stuff. It's also safe to set if you're
# using only serving fast clients with Unicorn + nginx.
# Otherwise you _want_ nginx to buffer responses to slow
# clients, really.
# proxy_buffering off;
# Try to serve static files from nginx, no point in making an
# *application* server like Unicorn/Rainbows! serve static files.
if (!-f $request_filename) {
proxy_pass http://bbb_app_server;
break;
}
}
# Error pages
error_page 500 502 503 504 /500.html;
location = /500.html {
root /webapps/bbb/static/;
}
}
Here are my settings for settings.py:
try:
from .settings_prod import *
except:
pass
Here are my settings for settings_prod.py:
DEBUG = False
ALLOWED_HOSTS = ['0.80.00.000',
'.example.ru']
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.postgresql_psycopg2',
'NAME': 'db1',
'USER': 'bbb',
'PASSWORD': 'bbb',
'HOST': 'localhost',
'PORT': '',
}
}
I checked the config with nginx -t:
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
The way you are handling your settings seems to be the problem.
If I understood correctly, you have a settings.py and a settings_prod.py.
Then, inside settings.py, you are importing everything from settings_prod.py.
What I think is happening is that this import happens at the top of the file, and then, somewhere in the remainder of settings.py, there is probably another ALLOWED_HOSTS, which overrides the ALLOWED_HOSTS = ['0.80.00.000', '.example.ru'] defined in your settings_prod.py.
If you move the following code:
try:
from .settings_prod import *
except:
pass
To the bottom of the settings.py file, it will properly override the configuration.
I have a Flask (with Flask-restplus) app running locally on port 5000. When I launch the app locally and go to the URL below, I can see the Swagger UI:
http://localhost:5000/api/#
But when I run it behind NGINX and Gunicorn and go to
http://localhost:81/api/#
I get the error below:
Can't read from server. It may not have the appropriate access-control-origin settings.
When I look at the Chrome error, I see a request being made to http://localhost/api/swagger.json. Could this be the problem, given that the NGINX container is running on port 81?
$(function () {
window.swaggerUi = new SwaggerUi({
url: "http://localhost/api/swagger.json",
validatorUrl: "" || null,
dom_id: "swagger-ui-container",
supportedSubmitMethods: ['get', 'post', 'put', 'delete', 'patch'],
onComplete: function(swaggerApi, swaggerUi){
},
onFailure: function(data) {
log("Unable to Load SwaggerUI"); // <<<-- This is where it breaks
},
});
window.swaggerUi.load();
});
However, I am able to make a Postman request to my API through http://localhost:81/api/centres/1 and I get the expected data.
After googling for the last three days, the options are:
To send CORS headers on the response. I don't prefer this, as it is a security risk.
To configure NGINX to redirect requests to the correct URL (http://flask.pocoo.org/snippets/35/).
This is what my server config looks like:
server {
listen 81;
charset utf-8;
# Configure NGINX to reverse proxy HTTP requests to the upstream server (Gunicorn (WSGI server))
location / {
# Define the location of the proxy server to send the request to
proxy_pass http://web:8000;
proxy_redirect http://localhost/api http://localhost:81/api;
# Redefine the header fields that NGINX sends to the upstream server
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Scheme $scheme;
}
}
It still doesn't show me the Swagger UI. I'm a newbie to this world of Docker, Nginx and Gunicorn. How can I fix this issue?
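One thing worth checking, offered as a minimal sketch rather than a definitive fix: nginx's $host variable drops the port, while $http_host passes through exactly what the browser sent (localhost:81), and the absolute swagger.json URL is generated from that Host header. Keeping the rest of the config above, the relevant change would be:

server {
    listen 81;
    charset utf-8;

    location / {
        proxy_pass http://web:8000;
        # $http_host preserves "localhost:81" (host AND port), so URLs the app
        # generates, including the swagger.json link, can point back at port 81
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Scheme $scheme;
    }
}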
I tried to deploy a sample Django application on an Amazon EC2 server with the help of nginx and Gunicorn. I added a proxy_pass in nginx. After running the server and accessing my IP, I was able to view the "Welcome to Django" page. But when I navigate to other URLs, say /admin, it shows a 404 Not Found error.
How do I fix this error?
Nginx config:
upstream app {
server 127.0.0.1:8000;
}
server {
listen 80 default_server;
listen [::]:80 default_server;
root /var/www/html;
server_name IP;
location /static/ {
root /home/ubuntu/workspace/business;
}
location / {
proxy_pass http://app;
# First attempt to serve request as file, then
# as directory, then fall back to displaying a 404.
try_files $uri $uri/ =404;
}
}
You need to change this:
location / {
proxy_pass http://app;
# First attempt to serve request as file, then
# as directory, then fall back to displaying a 404.
try_files $uri $uri/ =404;
}
To this:
location / {
proxy_pass http://gunicorn:8888; #use your gunicorn port
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
The problem was with the line
try_files $uri $uri/ =404;
This line causes every URL except the main URL to be routed to the 404 page.
I removed it, and it is working now.
Thanks to @Richard Smith for mentioning it in the comments.
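If you still want nginx to check the filesystem before handing a request to the application, the usual pattern is a named location instead of the =404 fallback. A minimal sketch built around the upstream and paths from the question (the @django name is arbitrary, and these location blocks go inside the existing server { } block):

location /static/ {
    root /home/ubuntu/workspace/business;
}

location / {
    # serve a matching file or directory if one exists, otherwise hand off
    try_files $uri $uri/ @django;
}

location @django {
    proxy_pass http://app;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}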
I have my Django app set up on a Gunicorn server that is being proxied by nginx (also used for static files), and nginx is "intercepting" the GET request with the credentials code from Google! Why is nginx stealing the request instead of passing it to Gunicorn to be processed?
Here is my API info for my web application:
Client ID:
67490467925-v76j4e7bcdrps3ve37q41bnrtjm3jclj.apps.googleusercontent.com
Email address:
67490467925-v76j4e7bcdrps3ve37q41bnrtjm3jclj@developer.gserviceaccount.com
Client secret:
XquTw495rlwsHOodhWk
Redirect URIs: http://www.quickerhub.com
JavaScript origins: https://www.quickerhub.com
And here is the perfect GET request being stolen by nginx:
http://www.quickerhub.com/?code=4/bzqKIpj3UA3bBiyJfQzi3svzPBLZ.QoB_rXWZ6hUbmmS0T3UFEsPMOFF4fwI
And of course sweet nginx is giving me the "Welcome to nginx!" page...
Is there a way to tell nginx to pass these requests on to gunicorn? Or am I doing something incorrectly?
Thanks!
NGINX vhost config:
upstream interest_app_server {
# fail_timeout=0 means we always retry an upstream even if it failed
# to return a good HTTP response (in case the Unicorn master nukes a
# single worker for timing out).
server unix:/webapps/hello_django/run/gunicorn.sock fail_timeout=0;
}
server {
listen 80;
server_name quickerhub.com;
client_max_body_size 4G;
access_log /webapps/hello_django/logs/nginx-access.log;
error_log /webapps/hello_django/logs/nginx-error.log;
location /static {
root /webapps/hello_django/interest/;
}
location /media {
root /webapps/hello_django/interest/;
}
location /static/admin {
root /webapps/hello_django/lib/python2.7/site-packages/django/contrib/admin/;
}
location / {
# an HTTP header important enough to have its own Wikipedia entry:
# http://en.wikipedia.org/wiki/X-Forwarded-For
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
# enable this if and only if you use HTTPS, this helps Rack
# set the proper protocol for doing redirects:
# proxy_set_header X-Forwarded-Proto https;
# pass the Host: header from the client right along so redirects
# can be set properly within the Rack application
proxy_set_header Host $http_host;
# we don't want nginx trying to do something clever with
# redirects, we set the Host: header above already.
proxy_redirect off;
# set "proxy_buffering off" *only* for Rainbows! when doing
# Comet/long-poll stuff. It's also safe to set if you're
# using only serving fast clients with Unicorn + nginx.
# Otherwise you _want_ nginx to buffer responses to slow
# clients, really.
# proxy_buffering off;
# Try to serve static files from nginx, no point in making an
# *application* server like Unicorn/Rainbows! serve static files.
if (!-f $request_filename) {
proxy_pass http://interest_app_server;
break;
}
}
# Error pages
error_page 500 502 503 504 /500.html;
location = /500.html {
root /webapps/hello_django/interest/templates/;
}
}
You have
server_name quickerhub.com;
but the GET request is coming back to
http://www.quickerhub.com/?code=4/bzqKIpj3UA3bBiyJfQzi3svzPBLZ.QoB_rXWZ6hUbmmS0T3UFEsPMOFF4fwI
quickerhub.com != www.quickerhub.com, so nginx is falling through to serving the default page (for when it can't find a matching vhost).
All you need to do is use
server_name www.quickerhub.com quickerhub.com;
or, even better, add this to "canonicalise" all your URLs to the version without www:
server {
server_name www.quickerhub.com;
expires epoch;
add_header Cache-Control "no-cache, public, must-revalidate, proxy-revalidate";
rewrite ^ http://quickerhub.com$request_uri permanent;
}
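For what it's worth, the same canonical-host redirect can also be written with return, which is often preferred over rewrite for simple redirects; a sketch:

server {
    listen 80;
    server_name www.quickerhub.com;
    # permanent redirect to the bare domain, keeping the path and query string
    return 301 http://quickerhub.com$request_uri;
}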
I've got this error in nginx version 1.0.0:
nginx: [emerg] unknown directive "user" in /etc/nginx/sites-enabled/tornado:1
If I remove user www-data, the worker_processes directive gets the error
nginx: [emerg] unknown directive "worker_processes" in /etc/nginx/sites-enabled/tornado:1
I've searched on Google but still got nothing.
Please help.
This is my tornado file in sites-available:
user www-data www-data;
worker_processes 1;
error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
use epoll;
}
http {
# Enumerate all the Tornado servers here
upstream frontends {
server 127.0.0.1:8081;
server 127.0.0.1:8082;
server 127.0.0.1:8083;
server 127.0.0.1:8084;
}
include /etc/nginx/mime.types;
default_type application/octet-stream;
access_log /var/log/nginx/access.log;
keepalive_timeout 65;
proxy_read_timeout 200;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
gzip on;
gzip_min_length 1000;
gzip_proxied any;
gzip_types text/plain text/html text/css text/xml
application/x-javascript application/xml
application/atom+xml text/javascript;
# Only retry if there was a communication error, not a timeout
# on the Tornado server (to avoid propagating "queries of death"
# to all frontends)
proxy_next_upstream error;
server {
listen 8080;
# Allow file uploads
client_max_body_size 50M;
location ^~ /static/ {
root /var/www;
if ($query_string) {
expires max;
}
}
location = /favicon.ico {
rewrite (.*) /static/favicon.ico;
}
location = /robots.txt {
rewrite (.*) /static/robots.txt;
}
location / {
proxy_pass_header Server;
proxy_set_header Host $http_host;
proxy_redirect false;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Scheme $scheme;
proxy_pass http://frontends;
}
}
}
Probably a bit overdue, but if anyone stumbles on this, here's a hint:
It's probably a config collision; check in /etc/nginx for a .conf file with the same directive.
Also worth checking is whether nginx.conf has an "include" line. It's very common and is a source of collisions.
For example:
evan@host:~/$ cat /etc/nginx/nginx.conf | grep include
include /etc/nginx/mime.types;
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
In this case, a directive in a file under /etc/nginx/sites-enabled/ will clash with the contents of nginx.conf. Make sure you don't double up on anything between the included files.
Just want to elaborate on Kjetil M.'s answer, as that worked for me but I did not understand what he meant immediately. It wasn't until after a lot of attempts that I fixed the problem and had an "oh, that's what he meant" moment.
If your /etc/nginx/nginx.conf file and one of the other config files in /etc/nginx/sites-enabled/ use the same directive, such as "user", you will run into this error. Just make sure only one version is active and comment out the others.
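In this particular question, the file in sites-enabled is effectively a full nginx.conf (it contains user, worker_processes, events and http), but in the standard layout files under sites-enabled are included from inside the http { } block, so only http-level blocks belong there. A trimmed-down sketch of what /etc/nginx/sites-enabled/tornado could contain, reusing the values from the question:

upstream frontends {
    server 127.0.0.1:8081;
    server 127.0.0.1:8082;
    server 127.0.0.1:8083;
    server 127.0.0.1:8084;
}

server {
    listen 8080;
    client_max_body_size 50M;

    location ^~ /static/ {
        root /var/www;
    }

    location / {
        proxy_pass_header Server;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Scheme $scheme;
        proxy_pass http://frontends;
    }
}

The user, worker_processes, pid, events and the rest of the http-level settings stay in /etc/nginx/nginx.conf.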
worker_* directives must be at the top of the configuration, which means they belong in /etc/nginx/nginx.conf (with worker_connections inside the events block).
Example:
My first lines are:
user www-data;
worker_processes 4;
events {
worker_connections 1024;
}
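Put together, a minimal /etc/nginx/nginx.conf skeleton along these lines keeps the main-context directives where nginx expects them (a sketch only, reusing the user/worker values above and the include lines from the earlier answer):

user www-data;
worker_processes 4;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # per-site server blocks live in the included files, not here
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}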
If you want to know how many workers is best for your server, you can run this command:
grep processor /proc/cpuinfo | wc -l
This tells you how many cores you have; it doesn't make sense to have more workers than cores for websites.
If you want to know how many connections your workers can handle, you can use this:
ulimit -n
Hope it helps.
I was getting the same error, but when I started nginx with the -c option, as in
nginx -c conf.d/myapp.conf
it worked fine.
Another thing: if you've created the config file on Windows and are using it on Linux, make sure the line endings are correct ("\r\n" vs. "\n") and that the file is not stored as Unicode.
In my case, the error message appeared to show a space before user, even though there was no space there:
nginx: [emerg] unknown directive " user" in /etc/nginx/nginx.conf:1
It turned out that two of my .conf files had a BOM (byte order mark) at the beginning of the file. Removing the BOM fixed the issue.