How do I make nginx + Flask + uWSGI pick up updated Python source files? - python

I'm trying to update a web project based on nginx + Flask + uWSGI. When I update any Python file, nginx keeps serving the old code. When I removed all *.pyc files, no new .pyc files were generated by the Python interpreter. It looks like there is a cache somewhere; I followed the answer to this question to try to clear nginx's cache, but it didn't work.
Does anybody know how to get the stack to serve the new Python source files?
This is the nginx.conf
# For more information on configuration, see:
#   * Official English Documentation: http://nginx.org/en/docs/
#   * Official Russian Documentation: http://nginx.org/ru/docs/
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

# Load dynamic modules. See /usr/share/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;
    sendfile off;
    #tcp_nopush on;
    #tcp_nodelay on;
    keepalive_timeout 65;
    #types_hash_max_size 2048;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.
    include /etc/nginx/conf.d/*.conf;

    server {
        listen 80;
        server_name m.xxxx.com.cn;
        location / {
            include uwsgi_params;
            uwsgi_pass unix:/root/003_exampleproject/exampleproject.sock;
        }
    }
}
And these are the other config files:
(venv) [root@VM_0_3_centos 003_exampleproject]# cat /etc/systemd/system/exampleproject.service
[Unit]
Description=uWSGI instance to serve exampleproject
After=network.target
[Service]
User=root
Group=nginx
WorkingDirectory=/root/003_exampleproject/
Environment="PATH=/root/003_exampleproject/venv"
ExecStart=/root/003_exampleproject/venv/bin/uwsgi --ini exampleproject.ini --logto /var/log/uwsgi/exampleproject.log
[Install]
WantedBy=multi-user.target
(venv) [root@VM_0_3_centos 003_exampleproject]# cat exampleproject.ini
[uwsgi]
module = manage:app
master = true
processes = 5
socket = exampleproject.sock
chmod-socket = 660
vacuum = true
die-on-term = true
env = MBUS_ADMIN=root@example.com

Short answer:
service exampleproject restart
or
systemctl restart exampleproject.service
Long answer:
According to your config files, the service that needs to be restarted is:
exampleproject.service
If you run the command
service --status-all
you should see it listed there.
Then you can run:
service exampleproject
which will print the allowed commands.
Try then:
service exampleproject restart
Note: the service command should work on your CentOS system, but if it is not available on your distribution, you can use the alternative:
systemctl restart exampleproject.service
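As an aside, a restart is needed because uWSGI loads the Python modules into memory at startup, so deleting .pyc files has no effect on the already-running workers. If restarting by hand becomes tedious during development, uWSGI can watch the sources itself; a sketch of the ini file from the question with the reload options added (the trigger-file path is a made-up example):

```ini
[uwsgi]
module = manage:app
master = true
processes = 5
socket = exampleproject.sock
chmod-socket = 660
vacuum = true
die-on-term = true

# Development-only: poll the loaded Python modules every 2 seconds
# and reload the workers when a file changes.
py-autoreload = 2

# Alternatively, reload only when a chosen file is touched:
# touch-reload = /root/003_exampleproject/reload.trigger
```

py-autoreload has a per-request cost, so it is normally left out of production configs; touch-reload gives explicit control instead.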

Related

Flask+nginx+uwsgi: only serve url with nginx if flask doesn't have a route for it

nginx config for the server (the main nginx one is the default one on debian 9):
server {
    listen 80;
    server_name subdomain.domain.com;
    include /etc/nginx/mime.types;

    location /galleries {
        autoindex on;
        alias /srv/galleries/;
    }
    location / {
        include uwsgi_params;
        uwsgi_pass unix:/tmp/scraper.sock;
    }
}
uwsgi config:
[uwsgi]
module = wsgi:app
master = true
processes = 5
socket = /tmp/scraper.sock
chmod-socket = 777
uid = www-data
gid = www-data
vacuum = true
die-on-term = true
plugins = python3
py-autoreload = 1
If I try creating a route for /galleries/whatever, i.e. like this:
@app.route("/galleries/whatever")
def test():
    return "Hello"
I'll just see the indexed files inside /galleries/whatever through nginx instead of going through flask.
Is there a way for me to force nginx to only handle requests if flask returns 404? Alternatively, is there a better way for me to serve files while still having them available under those urls? Keep in mind the /galleries folder is pretty big and generated by another program.
I run the server with "uwsgi --ini server.ini" and nothing else.
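No answer is recorded here, but the usual approach for "Flask first, nginx serves files only on a 404" is nginx's uwsgi_intercept_errors directive plus a named location. A sketch under those assumptions, reusing the socket and paths from the question:

```nginx
server {
    listen 80;
    server_name subdomain.domain.com;
    include /etc/nginx/mime.types;

    location / {
        include uwsgi_params;
        uwsgi_pass unix:/tmp/scraper.sock;
        # Let nginx act on error responses coming back from the app...
        uwsgi_intercept_errors on;
        # ...and retry 404s against the files on disk.
        error_page 404 = @galleries;
    }

    location @galleries {
        root /srv;      # /galleries/foo -> /srv/galleries/foo
        autoindex on;
    }
}
```

One caveat: with uwsgi_intercept_errors on, Flask's own 404 pages are no longer shown for paths that also miss on disk; nginx returns its own 404 instead.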

Bad Request (400) Nginx + Gunicorn + Django + FreeBSD

I've been stuck for a while trying to figure out why I keep getting a 400 error. The nginx log leaves me with few clues (the access log works, which indicates DNS is correct).
The Gunicorn runners are running and the website can be accessed locally (through "links 127.0.0.1:8000"); however, between nginx and Gunicorn something seems to go wrong, since I cannot access the website using the domain.
Solving this would make me very happy :)
Added in Django config
ALLOWED_HOSTS = ['mydomain.tdl', 'www.mydomain.tdl']
Nginx config:
#user www-data;
worker_processes 4;
pid /var/run/nginx.pid;

events {
    worker_connections 768;
    # multi_accept on;
}

http {
    ##
    # Basic Settings
    ##
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    include /usr/local/etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # Logging Settings
    ##
    access_log /var/log/nginx-access.log;
    error_log /var/log/nginx-error.log info;

    ##
    # Gzip Settings
    ##
    gzip on;
    gzip_disable "msie6";
    #text/javascript;

    ##
    # Virtual Host Configs
    ##
    upstream myapp_app_server {
        # fail_timeout=0 means we always retry an upstream even if it failed
        # to return a good HTTP response (in case the Unicorn master nukes a
        # single worker for timing out).
        server unix:/webapps/django/myapp/run/gunicorn.sock fail_timeout=0;
    }

    server {
        #listen 80 is default
        server_name mydomain.se;
        return 301 http://www.mydomain.se$request_uri;
    }

    server {
        listen 80;
        server_name www.mydomain.se;
        # return 301 http://www.mydomain.se$request_uri;
        client_max_body_size 4G;
        access_log /webapps/django/myapp/logs/nginx-access.log;
        error_log /webapps/django/myapp/logs/nginx-error.log info;

        location /static/ {
            alias /webapps/django/myapp/static/;
        }
        location /media/ {
            alias /webapps/django/myapp/casinoguden/media/;
        }
        location / {
            if (!-f $request_filename) {
                proxy_pass http://myapp_app_server;
                break;
            }
            # include includes/botblock;
        }

        # Error pages
        error_page 500 502 503 504 /500.html;
        location = /500.html {
            root /webapps/django/myapp/static/;
        }
    }
}
Gunicorn config shell script:
#!/usr/local/bin/bash
NAME="myapp" # Name of the application
DJANGODIR=/webapps/django/myapp/ # Django project directory
PROJECTDIR=/webapps/django/myapp/
SOCKFILE=/webapps/django/myapp/run/gunicorn.sock # we will communicate using this unix socket
USER=david # the user to run as
GROUP=wheel # the group to run as
NUM_WORKERS=3 # how many worker processes should Gunicorn spawn
DJANGO_SETTINGS_MODULE=myapp.settings_prod # which settings file should Django use
DJANGO_WSGI_MODULE=myapp.wsgi # WSGI module name
echo "Starting $NAME as `whoami`"
# Activate the virtual environment
cd /home/david
source venv16/bin/activate
cd $DJANGODIR
export DJANGO_SETTINGS_MODULE=$DJANGO_SETTINGS_MODULE
#export PYTHONPATH=$DJANGODIR:$PYTHONPATH
cd $PROJECTDIR
# Create the run directory if it doesn't exist
RUNDIR=$(dirname $SOCKFILE)
test -d $RUNDIR || mkdir -p $RUNDIR
# Start your Django Unicorn
# Programs meant to be run under supervisor should not daemonize themselves (do not use --daemon)
exec gunicorn ${DJANGO_WSGI_MODULE}:application \
    --name $NAME \
    --workers $NUM_WORKERS \
    --user=$USER --group=$GROUP \
    --bind=unix:$SOCKFILE \
    --bind=127.0.0.1:8000 \
    --log-level=debug \
    --log-file=-
Solved by adding: proxy_set_header Host $host; under the location directive in nginx.conf
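Spelled out against the config above, the fix makes the location block look like this (a sketch, not the poster's exact file). The 400 came from Django's ALLOWED_HOSTS check: without the header, the proxied request doesn't carry the browser's original host, so Django rejects it.

```nginx
location / {
    if (!-f $request_filename) {
        proxy_pass http://myapp_app_server;
        break;
    }
    # Forward the original Host header so Django's ALLOWED_HOSTS
    # check sees www.mydomain.se rather than the upstream name.
    proxy_set_header Host $host;
}
```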

Nginx Configure Static Site and Django+uWSGI

I am having a bit of trouble getting nginx to serve a static index.html page, and also a Django site. Could anyone point out what I'm doing wrong?
I have port 80 and 8000 open.
I have spent three days trying to get this working.
Locally I have no problems, then again I'm not using nginx for that.
I placed uwsgi_params within /etc/nginx/uwsgi_params
/etc/nginx/sites-enabled
Here are my symlinks to the actual configuration files
site1.conf -> /home/jesse/projects/site1/conf/site1.conf
site2.conf -> /home/jesse/projects/site2/conf/site2.conf
/home/jesse/projects/site1/conf/site1.conf
This is just a basic static site, but it won't load :(
server {
    listen 80;
    server_name www.site1.com;
    rewrite ^(.*) http://site1.com$1 permanent;

    location / {
        root /home/jesse/projects/site1/;
    }
}
/home/jesse/projects/site2/conf/site2.conf
= The manage.py/wsgi.py is located under /home/jesse/projects/site2/site2/
= This is a Django site using uWSGI; I installed it with $ pip install uwsgi.
server {
    listen 80;
    server_name www.site2.com;
    rewrite ^(.*) http://site2com$1 permanent;
    root /home/site2/projects/site2/site2;

    location /static/ {
        alias /home/jesse/site2/projects/site2/site2/static/;
        #expires 30d;
    }
    location /media/ {
        alias /home/jesse/site2/projects/site2/site2/media/;
        #expires 30d;
    }
    location / {
        include uwsgi_params;
        uwsgi_pass 127.0.0.1:8000;
    }
}
/home/site2/projects/site2/conf
[uwsgi]
projectname = site2
projectdomain = site2.com
base = /home/jesse/site2/projects/site2/site2
# Config
plugins = python
master = true
protocol = uwsgi
env = DJANGO_SETTINGS_MODULE=%(projectname).settings
pythonpath = %(base)/src/%(projectname)
module = %(projectname).wsgi
socket = 127.0.0.1:8000
logto = %(base)/logs/uwsgi.log
# Runs daemon in background
daemonize = /home/jesse/log/$(projectname).log
Nginx Restart
$ sudo service nginx restart
* Restarting nginx nginx [ OK ]
= The site1 produces a Not Found (Not a 404)
= The site2 produces a
I would appreciate any assistance :)
rewrite ^(.*) http://site1.com$1 permanent;
This line in site1 is not managed by any server, because no server_name handles site1.com (without www).
Then again with site2:
rewrite ^(.*) http://site2com$1 permanent;
First fix those lines.
The correct way, in my opinion, is to write a server rule that catches the www. names and rewrites them to non-www, and to place site1.com or site2.com as the server_name in the rules you have now. As an example of the rewriting:
server {
    listen 80;
    server_name www.site1.com;
    return 301 http://site1.com$request_uri?;
}

nginx + uwsgi for multiple sites using multiple ports

I would like to host two sites on one IP address, 1.2.3.4 for example, and visit them on different ports: 1.2.3.4:8000 for siteA, and 1.2.3.4:9000 for siteB. I am using nginx + uwsgi.
Here is the example to configure one of sites.
For NGINX, I had:
server {
    listen 8000; ## listen for ipv4; this line is default and implied
    location / {
        uwsgi_pass unix:///tmp/uwsgi.sock;
        include uwsgi_params;
        uwsgi_read_timeout 1800;
    }
}
For UWSGI, I had:
[uwsgi]
socket = /tmp/uwsgi.sock
master = true
harakiri = 60
listen = 5000
limit-as = 512
reload-on-as = 500
reload-on-rss = 500
pidfile = /tmp/uwsgi.pid
daemonize = /tmp/uwsgi.log
chdir = /home/siteA
module = wsgi_app
plugins = python
To visit siteA, I simply go to 1.2.3.4:8000.
I have no problem with the configuration of one site, but I have no idea how to make it work with two sites.
Please note that I did not bind the sites to server names. Does that matter?
Thanks in advance.
P.S. Here is how I launch nginx and uWSGI.
I first put the nginx conf file (for siteA, I called it siteA_for_ngxing.conf) in the /etc/nginx/sites-available/ directory.
I then use uwsgi --ini uwsgi.ini to start uwsgi (the uwsgi.ini file contains the [uwsgi] section above)...
Any help?
The following example might be useless for you, because it seems you installed uWSGI manually instead of from a system repository. But I think you can easily find how uWSGI is configured on Ubuntu and make the same configuration on your system.
Here is how I did it on Ubuntu. I installed both uWSGI and nginx from the Ubuntu repo, so I got the following dirs:
/etc/nginx/sites-available
/etc/nginx/sites-enabled
/etc/uwsgi/apps-available
/etc/uwsgi/apps-enabled
In /etc/uwsgi/apps-available I placed two files: app_a.ini and app_b.ini. There is no socket option (nor pid or daemonize) in these files; uWSGI derives the socket, log, and pid file names from the ini file name. Then I created symlinks to these files in /etc/uwsgi/apps-enabled to enable the apps.
For nginx I used the /etc/nginx/sites-available/default config file (it is already symlinked into the enabled dir).
upstream app_a {
    server unix:///run/uwsgi/app/app_a/socket;
}
upstream app_b {
    server unix:///run/uwsgi/app/app_b/socket;
}
server {
    listen 8000;
    location / {
        uwsgi_pass app_a;
        include uwsgi_params;
    }
}
server {
    listen 9000;
    location / {
        uwsgi_pass app_b;
        include uwsgi_params;
    }
}
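The per-app ini files referenced above aren't shown in the answer; a minimal sketch of what app_a.ini might contain (the chdir path and module name are assumptions for illustration):

```ini
[uwsgi]
plugins = python
# No socket/pidfile/daemonize here: the Debian/Ubuntu uWSGI packaging
# derives them from the ini file name (app_a), e.g.
# /run/uwsgi/app/app_a/socket and /var/log/uwsgi/app/app_a.log.
chdir = /srv/app_a
module = wsgi_app
master = true
processes = 2
```

app_b.ini would be identical apart from its chdir and module, and each app automatically gets its own socket for the matching nginx upstream.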

nginx configuration for tornadoweb fails: unknown directive "user"

I get this error with nginx version 1.0.0:
nginx: [emerg] unknown directive "user" in /etc/nginx/sites-enabled/
tornado:1
If I remove user www-data, then the worker_processes directive gets the error:
nginx: [emerg] unknown directive "worker_processes" in /etc/nginx/
sites-enabled/tornado:1
I've searched on Google but still found nothing.
Please help.
This is my tornado file in sites-available:
user www-data www-data;
worker_processes 1;
error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
    use epoll;
}

http {
    # Enumerate all the Tornado servers here
    upstream frontends {
        server 127.0.0.1:8081;
        server 127.0.0.1:8082;
        server 127.0.0.1:8083;
        server 127.0.0.1:8084;
    }

    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    access_log /var/log/nginx/access.log;
    keepalive_timeout 65;
    proxy_read_timeout 200;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    gzip on;
    gzip_min_length 1000;
    gzip_proxied any;
    gzip_types text/plain text/html text/css text/xml
               application/x-javascript application/xml
               application/atom+xml text/javascript;

    # Only retry if there was a communication error, not a timeout
    # on the Tornado server (to avoid propagating "queries of death"
    # to all frontends)
    proxy_next_upstream error;

    server {
        listen 8080;
        # Allow file uploads
        client_max_body_size 50M;

        location ^~ /static/ {
            root /var/www;
            if ($query_string) {
                expires max;
            }
        }
        location = /favicon.ico {
            rewrite (.*) /static/favicon.ico;
        }
        location = /robots.txt {
            rewrite (.*) /static/robots.txt;
        }
        location / {
            proxy_pass_header Server;
            proxy_set_header Host $http_host;
            proxy_redirect false;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Scheme $scheme;
            proxy_pass http://frontends;
        }
    }
}
Probably a bit overdue, but if anyone stumbles on this, here's a hint:
it's probably a config collision; check /etc/nginx for a .conf file with the same directive.
Also worth checking is whether nginx.conf has an "include" line; includes are very common and a frequent source of collisions.
For example.
evan@host:~/$ cat /etc/nginx/nginx.conf | grep include
include /etc/nginx/mime.types;
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
In this case, a directive in /etc/nginx/sites-enabled/ will clash with the contents of nginx.conf. Make sure you don't double up on anything between the included files.
Just want to elaborate on Kjetil M.'s answer, as it worked for me but I did not immediately understand what he meant. It wasn't until after a lot of attempts that I fixed the problem and had an "oh, that's what he meant" moment.
If your /etc/nginx/nginx.conf file and one of the other config files in /etc/nginx/sites-enabled/ use the same directive, such as "user", you will run into this error. Just make sure only one version is active and comment out the others.
The worker_* directives must be at the top level of the configuration, which means they must be in /etc/nginx/nginx.conf.
Example:
My first lines are:
user www-data;
worker_processes 4;
worker_connections 1024;
If you want to know how many workers are best for your server, you can run this command:
grep processor /proc/cpuinfo | wc -l
This tells you how many cores you have; it doesn't make sense to have more workers than cores for websites.
If you want to know how many connections your workers can handle, you can use this:
ulimit -n
Hope it helps.
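The same "one worker per core" rule of thumb can be computed programmatically; a small Python sketch (mine, not from the answer, with an arbitrary cap):

```python
import os

def suggested_workers(max_workers=16):
    """Suggest an nginx worker_processes value: one per core, capped."""
    cores = os.cpu_count() or 1  # os.cpu_count() can return None
    return min(cores, max_workers)

print(suggested_workers())
```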
I was getting the same error, but when I started nginx with the -c option, as in
nginx -c conf.d/myapp.conf
it worked fine.
Another thing: if you've created the config file on Windows and are using it on Linux, make sure the line endings are correct ("\r\n" vs. "\n") and that the file is not stored as Unicode (e.g. UTF-16).
In my case, the error message appeared to show a space before user, even though there was no space there:
nginx: [emerg] unknown directive " user" in /etc/nginx/nginx.conf:1
Turns out that two of my .conf files had a BOM at the beginning of the file. Removing the BOM fixed the issue.
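A quick way to check for and strip a UTF-8 BOM is a few lines of Python (a sketch, not from the answer; the file path in the usage comment is a placeholder):

```python
# Strip a UTF-8 byte-order mark (BOM) from config file contents.
BOM = b"\xef\xbb\xbf"

def strip_bom(data: bytes) -> bytes:
    """Return data without a leading UTF-8 BOM, if present."""
    return data[len(BOM):] if data.startswith(BOM) else data

# Usage: rewrite a file in place (path is a placeholder):
# with open("/etc/nginx/conf.d/site.conf", "rb") as f:
#     data = f.read()
# with open("/etc/nginx/conf.d/site.conf", "wb") as f:
#     f.write(strip_bom(data))
```

nginx reads its config as plain bytes, so the three invisible BOM bytes end up glued to the first directive, which is why the error message shows what looks like a space before "user".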
