I have a Django site deployed with Nginx, Gunicorn, and Supervisor on Ubuntu 14.04, and it worked perfectly for more than two years without any lag in requests or responses.
On this site I have a script/management command, run through a cron job, that takes a database dump and pushes it to S3. A few days ago it stopped working and started throwing socket.error: [Errno 104] Connection reset by peer. I posted the complete traceback here but couldn't get any response, so I started googling around and found a post suggesting that, to get rid of the socket.error: [Errno 104] Connection reset by peer error, the following lines should be added to /etc/sysctl.conf:
# Workaround for TCP Window Scaling bugs in other ppl's equipment
net.ipv4.tcp_wmem = 4096 16384 512000
net.ipv4.tcp_rmem = 4096 87380 512000
I added them and applied the change with $ sudo sysctl -p, then ran the DB backup/S3 upload command python manage.py db_backup, but I still got the same socket.error: [Errno 104] Connection reset by peer. So I reverted the changes (removed the lines added to /etc/sysctl.conf) and re-ran $ sudo sysctl -p, which restored my previous settings.
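For anyone following along, the revert can be verified by printing the values currently in effect (standard sysctl usage; the keys are the ones from the snippet above):
$ sudo sysctl -p
$ sysctl net.ipv4.tcp_wmem net.ipv4.tcp_rmem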
Also, in my nginx configuration I have:
ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # SSLv2
I read somewhere that removing TLSv1 from the ssl_protocols setting above would solve the socket.error: [Errno 104] Connection reset by peer problem, so I removed it and restarted the nginx server. After that, the db_backup management command seemed to work, but I added TLSv1 back to ssl_protocols just to be safe and to confirm with someone else first.
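For reference, the change in question would look like this (a sketch only; dropping TLSv1 locks out older clients that cannot negotiate TLSv1.1 or newer, so it is worth testing before keeping it):
ssl_protocols TLSv1.1 TLSv1.2; # TLSv1 removed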
Now the actual problem: after making the above changes, reverting them, and restarting supervisor and nginx, my site has become extremely slow.
I have different sections on the site, like:
Home page
Contact us
Many pages that list records fetched from the Postgres database
The home page and contact-us page work as usual, but the database-backed pages fail to load even after 3 minutes, eventually displaying 502 Bad Gateway nginx/1.4.6 (Ubuntu)
I have tried everything, like restarting postgres, nginx, and supervisor, and double-checked /etc/sysctl.conf to make sure it doesn't contain any new changes. Everything seems to be in order, but I can't understand why the site has become slow.
Nginx and Gunicorn config files
server {
    listen 80;
    server_name example.com www.example.com m.example.com;

    location / {
        return 301 https://www.example.com$request_uri;
        # proxy_pass http://127.0.0.1:8001;
    }

    location /static/ {
        alias /user/apps/example_webapp/project/new_media/;
    }
}
server {
    listen 443 ssl;
    server_name example.com www.example.com m.example.com;

    ssl_certificate /etc/ssl/example/example.com.chained.crt;
    ssl_certificate_key /etc/ssl/example/www.example.com.key;
    ssl_session_timeout 20m;
    ssl_session_cache shared:SSL:10m; # ~ 40,000 sessions
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # SSLv2
    # ssl_ciphers ALL:!aNull:!eNull:!SSLv2:!kEDH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+EXP:#STRENGTH;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;

    client_max_body_size 20M;

    location / {
        proxy_pass http://127.0.0.1:8001;
        proxy_connect_timeout 300s;
        proxy_read_timeout 300s;
    }

    location /static/ {
        alias /user/apps/example_webapp/project/new_media/;
    }
}
Gunicorn
bind = "127.0.0.1:8001"
workers = 3
loglevel = "debug"
proc_name = "project"
daemon = False
pythonpath = "/user/apps/project_name/"
errorlog = "/user/apps/gunicorn_configurations/gunicorn_logfiles/gunicorn_errors.log"
timeout = 90
So can anyone please let me know how to bring my site back to its original state? What might cause it to slow down suddenly for no apparent reason, and where should I check for errors?
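In a setup like the one above, the first places to look would be the logs already configured: the Gunicorn error log named in the config and nginx's error log (the nginx path below assumes the stock Ubuntu location):
$ tail -f /user/apps/gunicorn_configurations/gunicorn_logfiles/gunicorn_errors.log
$ tail -f /var/log/nginx/error.log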
Related
I'm using smtplib to send simple booking emails from a Flask application. I'm using Google Mail with an app password, and I have also allowed less secure applications. The booking system runs fine on my personal computer, but as soon as I port it over to the VPS it stops working, for no known reason other than that the username and password are not accepted, even though they are definitely correct. It also runs fine by itself, but not when run under uWSGI and nginx.
Nginx config
server {
    listen 80;
    server_name example.com;
    # return 301 https://$server_name$request_uri;

    location / {
        uwsgi_pass unix:/path/to/chatbot.sock;
        include uwsgi_params;
    }
}
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name example.com;

    ssl_certificate /path/to/keys.pem;
    ssl_certificate_key /path/to/primarykey.pem;
    ssl_trusted_certificate /path/to/keys.pem;
    ssl_session_timeout 1d;
    ssl_session_cache shared:MozSSL:10m; # about 40000 sessions
    # curl https://ssl-config.mozilla.org/ffdhe2048.txt > /path/to/dhparam
    # ssl_dhparam /path/to/dhparam;

    # intermediate configuration
    ssl_protocols TLSv1.2;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POL305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;
    ssl_prefer_server_ciphers on;

    # HSTS (ngx_http_headers_module is required) (63072000 seconds)
    add_header Strict-Transport-Security "max-age=63072000" always;

    # replace with the IP address of your resolver
    resolver 8.8.8.8;

    location / {
        include uwsgi_params;
        uwsgi_pass unix:/path/to/chatbot.sock;
    }
}
uwsgi.ini file
[uwsgi]
module = wsgi:app
master = true
processes = 5
enable-threads = true
socket = chatbot.sock
chmod-socket = 666
vacuum = true
die-on-term = true
.env
DIALOGFLOW_PROJECT_ID=projectid
GOOGLE_APPLICATION_CREDENTIALS=Ajsonfile.json
RESTFUL_CREDENTIALS=restful_credentials.json
MAIL_USERNAME=example@gmail.com
MAIL_PASSWORD=apasswordforemailaddress
My current thinking is that uWSGI or nginx is unable to find the file due to some sort of permissions issue, but I've chown'ed all the related files, and I'm now getting the same issue with my Google API key too.
All the information is stored in a .env file, which has the correct group access, along with all the other files already running on the site.
I don't know what else would be helpful to post here, other than that I'm using nginx and uWSGI to expose a Flask application, and some items stored in a .env file don't seem to be read.
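For context on why this bites only on the VPS: values in a .env file are not process environment variables by themselves, so os.environ/os.getenv see nothing under uWSGI unless something loads the file first, which is what the fix below does. A minimal illustration (the variable name is taken from the .env above):
import os

# Under uWSGI this prints None unless the .env file has been loaded
# into the process environment first (e.g. via python-dotenv).
print(os.getenv("MAIL_USERNAME"))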
To get them to load while running under uWSGI, you need to use the python-dotenv package:
from dotenv import load_dotenv, find_dotenv
load_dotenv(find_dotenv())
[uwsgi]
base = /var/www/html/poopbuddy-api
chdir = %(base)
app = app
I don't know exactly what chdir does, but I think it at least sets the working directory to the root directory of the app. From there, load_dotenv() works.
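That is consistent with chdir pointing the process at the app root, where the .env file lives. If relying on the working directory feels fragile, load_dotenv() also accepts an explicit path (a sketch; the path below is the base from the ini above and is only illustrative):
from dotenv import load_dotenv

# Load the .env file by absolute path instead of searching from the
# working directory that uwsgi's chdir happens to set.
load_dotenv("/var/www/html/poopbuddy-api/.env")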
I am using nginx as a reverse proxy in front of a uWSGI server (flask apps).
Due to a memory leak, I use --max-requests to reload workers after a set number of calls.
The issue is the following: when a worker has just started or restarted, the first request it receives hangs between uWSGI and nginx. The processing time inside the Flask app is as usual and very quick, but the client waits until uwsgi_send_timeout is triggered.
Watching the request with tcpdump (nginx is XXX.14 and uWSGI is XXX.11), the time column shows it hanging for 300 seconds (uwsgi_send_timeout) even though the HTTP request has been received by nginx. uWSGI just doesn't send a [FIN] packet to signal that the connection is closed, so nginx eventually triggers the timeout and closes the session.
The end client receives a truncated response, with a 200 status code, which is very frustrating.
This happens once per worker reload, on the first request only, no matter how big the request is.
Does anyone have a workaround for this issue? Have I misconfigured something?
uwsgi.ini
[uwsgi]
# Get the location of the app
module = api:app
plugin = python3
socket = :8000
manage-script-name = true
mount = /=api:app
cache2 = name=xxx,items=1024
# Had to increase buffer-size because of big authentication requests.
buffer-size = 8192
## Workers management
# Number of workers
processes = $(UWSGI_PROCESSES)
master = true
# Number of requests managed by 1 worker before reloading (reload is time expensive)
max-requests = $(UWSGI_MAX_REQUESTS)
lazy-apps = true
single-interpreter = true
nginx-server.conf
server {
    listen 443 ssl http2;
    client_max_body_size 50M;

    location @api {
        include uwsgi_params;
        uwsgi_pass api:8000;
        uwsgi_read_timeout 300;
        uwsgi_send_timeout 300;
    }
}
For some weird reason, adding the parameter uwsgi_buffering off; to the nginx config fixed the issue.
I still don't understand why, but for now this fixes my issue. If anyone has a valid explanation, don't hesitate.
server {
    listen 443 ssl http2;
    client_max_body_size 50M;

    location @api {
        include uwsgi_params;
        uwsgi_pass api:8000;
        uwsgi_buffering off;
        uwsgi_read_timeout 300;
        uwsgi_send_timeout 300;
    }
}
When I try to restart nginx, I get the following error:
nginx: [warn] conflicting server name "example.io" on 0.0.0.0:80, ignored
I used my deploy scripts to deploy 2 domains. For the first it works fine, but for the second it gives this error.
Here is my nginx.conf file
#
worker_processes 2;
#
user nginx nginx;
#
pid /opt/nginx/pids/nginx.pid;
error_log /opt/nginx/logs/error.log;
#
events {
    worker_connections 4096;
}
#
http {
    #
    log_format full_log '$remote_addr - $remote_user $request_time $upstream_response_time '
                        '[$time_local] "$request" $status $body_bytes_sent $request_body "$http_referer" '
                        '"$http_user_agent" "$http_x_forwarded_for"';
    #
    access_log /opt/nginx/logs/access.log;
    ssl on;
    ssl_certificate /opt/nginx/cert/example.crt;
    ssl_certificate_key /opt/nginx/cert/example.key;
    #
    include /opt/nginx/conf/vhosts/*.conf;

    # Deny access to any other host
    server {
        server_name example.io; # default
        return 444;
    }
}
Not sure, but try changing the server name in
/etc/nginx/sites-enabled/default
It should help.
I had the same problem. To resolve it, I looked for the conflicting domain "example.io" in the conf files.
In the following file, a snippet for "example.io" had been added at the bottom. The "default_server" server section was untouched, but another server section had been appended at the end of the file:
/etc/nginx/sites-available/default
server {
    listen 80;
    listen [::]:80;
    server_name example.io;

    root /var/www/html;
    index index.html;

    location / {
        try_files $uri $uri/ =404;
    }
}
So I think you need to search for server_name in all the files inside the "/etc/nginx/sites-available" folder.
Anyhow, your domain name is added in
/etc/nginx/sites-enabled/default
To confirm, search using
grep -r mydomain.com /etc/nginx/sites-enabled/*
and remove the domain name.
Then restart Nginx using
sudo systemctl restart nginx
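It may also help to validate the configuration before restarting, so nginx reports any remaining duplicate server_name entries (this is the standard nginx syntax check):
sudo nginx -t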
Just change listen 80 to another port like 8000 or 5000, whatever, but not 80.
Also, as good practice, don't edit nginx.conf itself; create your own file.conf and link it. Follow this link to see clearly what I mean.
I have a setup with nginx, uwsgi, and gevent. When testing the setup's ability to handle premature client disconnects, I found that uwsgi isn't exactly responding in a timely manner.
This is how I detect that a disconnect has occurred inside my Python code:
# inside the request handler; `logger` is the app's logger
while True:
    if 'uwsgi' in sys.modules:
        import uwsgi  # @UnresolvedImport
        fileDescriptor = uwsgi.connection_fd()
        if not uwsgi.is_connected(fileDescriptor):
            logger.debug("Connection was lost (client disconnect)")
            break
    gevent.sleep(2)  # prevent hammering the CPU
So when uwsgi signals a loss of connection, I break out of this loop. There's also the call to gevent.sleep(2) at the bottom of the loop to prevent hammering the CPU.
With that in place, nginx logs the closed connection like this:
2016/08/16 19:23:23 [info] 32452#0: *1 epoll_wait() reported that client prematurely closed connection, so upstream connection is closed too while sending to client, client: 192.168.56.1, server: <removed>, request: "GET /myurl HTTP/1.1", upstream: "uwsgi://127.0.0.1:8070", host: "<removed>:8443"
nginx is immediately aware of the disconnect when it produces this log entry; it's within milliseconds of the client disconnecting. Yet uwsgi doesn't seem to be aware of the disconnect until seconds, sometimes almost a minute, later, at least in terms of notifying my code:
DEBUG - Connection was lost (client disconnect) - 391 ms[08/16/16 19:24:04 UTC])
The uwsgi.log file created via daemonize suggests it somehow saw the disconnect a second before nginx did, but then waited half a minute to actually tell my code:
[pid: 32208|app: 0|req: 2/2] 192.168.56.1 () {32 vars in 382 bytes} [Tue Aug 16 19:23:22 2016] GET /myurl => generated 141 bytes in 42030 msecs (HTTP/1.1 200) 2 headers in 115 bytes (4 switches on core 999
This is my setup in nginx:
upstream bottle {
    server 127.0.0.1:8070;
}

server {
    listen 8443;
    ssl on;
    ssl_certificate /etc/pki/tls/certs/server.crt;
    ssl_certificate_key /etc/pki/tls/private/server.key;
    server_name <removed>;

    # Load configuration files for the default server block.
    include /etc/nginx/default.d/*.conf;

    location / {
        include uwsgi_params;
        #proxy_read_timeout 5m;
        uwsgi_buffering off;
        uwsgi_ignore_client_abort off;
        proxy_ignore_client_abort off;
        proxy_cache off;
        chunked_transfer_encoding off;
        #uwsgi_read_timeout 5m;
        #uwsgi_send_timeout 5m;
        uwsgi_pass bottle;
    }
}
The odd part to me is that uwsgi's timestamp says it saw the disconnect right when nginx did, yet it doesn't write that entry until my code sees it ~30 seconds later. From my perspective, uwsgi appears to be essentially lying or locking up, yet I can't find any errors from it.
Any help is appreciated. I've attempted to remove any buffering and delays from nginx without any success.
This question already has answers here: How to specify which eth interface Django test server should listen on? (3 answers). Closed 7 years ago.
I have a server, aerv.nl.
It has Django (a Python framework), but when I run the Django server it says:
server started at: http://127.0.0.1:8000/
How can I get the server to run at http://www.aerv.nl/~filip/ (a real URL)?
You'll have to configure your HTTP server and Django. For example, if you're using Apache, you'll need to go through this:
https://docs.djangoproject.com/en/1.9/howto/deployment/wsgi/modwsgi/
What you're doing there is setting up your server to handle HTTP requests through your Django app.
You will need to understand how DNS works, then use redirecting, and then a proper server (like nginx, or Apache with e.g. gunicorn), not the Django development server, which shouldn't be used in production. There is no way to do what you ask with just ./manage.py runserver. All you can do is change the IP address and port to something different, e.g. ./manage.py runserver 192.168.0.12:9999, so that, for example, other computers in your network can access your site on that specific IP and port.
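For completeness, the development server can also bind to all interfaces, which is the usual way to expose it on a LAN for testing (a standard runserver option; still not suitable for production):
./manage.py runserver 0.0.0.0:8000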
Example
You are the owner of the domain example.com, and you have a server with IP address e.g. 5.130.2.19 where you want to serve your site.
You need to go to your domain provider and add an A record which connects the two: example.com -> 5.130.2.19.
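Once the record has propagated, it can be checked from any machine (dig is a standard DNS lookup tool; the name and address are the placeholders from above):
$ dig +short example.com A
5.130.2.19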
Then on your server you set up a webserver, e.g. nginx, and run it with e.g. this config for your particular server/site:
server {
    listen 80;
    server_name example.com;
    client_max_body_size 4G;

    location /static/ {
        autoindex on;
        alias /var/www/example/django/static/;
    }

    location /media/ {
        autoindex on;
        alias /var/www/example/django/media/;
    }

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        if (!-f $request_filename) {
            proxy_pass http://upstream_server;
            break;
        }
    }
}

upstream upstream_server {
    server unix:/var/www/example/gunicorn.sock fail_timeout=10s;
}
Then you would need to run gunicorn with something like:
gunicorn example.wsgi:application --bind=unix:/var/www/example/gunicorn.sock
That should be all, but it is of course very brief. Just substitute your URL for example.com. It is up to you whether this becomes a specific record in the nginx config (think of it as an entry point) or one of the routes specified in your Django project.
How does it work?
The user puts example.com into the address bar, and their computer asks the global DNS servers: what IP address does example.com point to? DNS replies: it's 5.130.2.19. The user's browser then sends an HTTP request to that IP, where nginx receives it and checks its config for an example.com handler. It finds one, sees that it should pass the request to unix:/var/www/example/gunicorn.sock, and finds gunicorn running there, which runs your Django project and returns something nginx can present as your website.