UWSGI + NGINX 502 Bad Gateway - python

I have a Web.py app that I want to push into production.
As recommended by the Web.py community, I decided to use uWSGI and Nginx for this.
My app uses Memcached for session storage and MySQL for other storage tasks. The app works fine on my MacBook.
I have configured this uWSGI + Nginx setup before and it worked fine, but now I receive a 502 Bad Gateway when I try to access the index page on my Ubuntu server.
BUT: when I open another page, I receive all the content I expect.
In general the app works fine in the Ubuntu environment: I tested it by running python app.py 8080, and I was able to open page.tld:8080/ and receive all the content.
My uWSGI config:
[uwsgi]
gid = www-data
uid = www-data
vhost = true
plugins = python
logdate = true
#socket = /tmp/uwsgi_vhosts.sock
socket = 127.0.0.1:3031
master = true
processes = 1
harakiri = 120
limit-as = 128
memory-report = true
no-orphans = true
The Nginx config:
server {
listen 80 default_server;
listen [::]:80 default_server ipv6only=on;
# Make site accessible from http://localhost/
server_name page.tld;
location / {
include uwsgi_params;
uwsgi_pass 127.0.0.1:3031;
# This is the absolute path to the folder containing your application
uwsgi_param UWSGI_CHDIR /var/www/page.tld/apps;
# This is actually not necessary for our simple application,
# but you may need this in future
uwsgi_param UWSGI_PYHOME /var/www/page.tld/apps;
# This is the name of your application file, minus the '.py' extension
uwsgi_param UWSGI_SCRIPT test;
}
}
I keep getting these lines in the vhosts.log of uWSGI:
libgcc_s.so.1 must be installed for pthread_cancel to work
- DAMN ! worker 1 (pid: 1281) died, killed by signal 6 :( trying respawn ...
- Respawned uWSGI worker 1 (new pid: 1330)
Please let me know if you need to see other parts of the configuration.
And these lines in the error.log of nginx:
[error] 1233#0: *1 upstream prematurely closed connection while reading response header from upstream, client: xxx.xxx.xxx.xxx, server: page.tld, request: "GET / HTTP/1.1", upstream: "uwsgi://127.0.0.1:3031", host: "page.tld"
Let me know if any other logs are needed to solve this.
Update: It seems that I get the 502 Bad Gateway whenever I access a page that has to load something from the MySQL database. Since it works without uWSGI & Nginx, I guess that nginx kills the uWSGI instance for some reason when it tries to load things from the database.

I recently fixed this problem by setting a higher memory limit in the uWSGI config. With limit-as = 128 the worker's address space is most likely too small, so it aborts (signal 6) when the MySQL code tries to load libgcc_s.so.1. You will need to restart uWSGI afterwards; I am running the uWSGI Emperor at start-up, so in my case I rebooted.
[uwsgi]
...
limit-as = 512
System:
Ubuntu 14.04.1 LTS (GNU/Linux 3.13.0-43-generic x86_64)
mysqlclient==1.3.6
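A reboot is not strictly required, though. A hedged sketch of the usual ways to restart, assuming the vassal ini lives under /etc/uwsgi/vassals/ and uWSGI is installed as a system service (adjust paths and service names to your setup):
# the Emperor reloads a vassal whenever its ini file's modification time changes
sudo touch /etc/uwsgi/vassals/vhosts.ini
# or restart the whole uWSGI service
sudo service uwsgi restart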

Related

Flask: not able to access requests via nginx, though direct access works

I am setting up Flask, uwsgi, and nginx.
Below is the config for uwsgi:
[uwsgi]
module = wsgi:app
master = true
processes = 5
protocol = http
socket = 0.0.0.0:8443
buffer-size=32768
die-on-term = true
enable-threads = true
vacuum = true
When I try a GET request, it works fine:
# curl --location --request GET 'http://10.1.1.10:8443/info?id=1&subid=2'
I have created an Ubuntu Docker container (for nginx).
# accessing with the same curl from the nginx docker container also works fine
# curl --location --request GET 'http://10.1.1.10:8443/info?id=1&subid=2'
Next, I installed nginx and configured it as below:
cat /etc/nginx/sites-available/flaskconfig
server {
# the port your site will be served on
listen 80;
# the IP Address your site will be served on
server_name 10.1.1.10;
# Proxy connections to application server
location / {
include uwsgi_params;
uwsgi_pass 10.1.1.10:8443;
}
}
Created the linked file:
# mkdir /etc/nginx/sites-enabled
# ln -s /etc/nginx/sites-available/flaskconfig /etc/nginx/sites-enabled/mysite.com
Restarted the nginx service and it is running.
I also cross-verified the config with the "nginx -t" command; the config is good:
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
It is not working when I try to access it through the nginx proxy; below is the curl command:
# curl --location --request GET 'http://10.1.1.10:8080/info?id=1&subid=2'
Note: the port is 8080 because the nginx host container is started with -p 8080:80. I also tried port 80 in the same curl, with no luck.
I guess something is wrong with the nginx config/setup, but I am unable to figure it out. I have been stuck on this for almost 2 days; any help is really appreciated.
Message in /var/log/nginx/error.log:
2020/12/04 17:18:17 [error] 1195#1195: *3 upstream prematurely closed connection while reading response header from upstream, client: 172.10.1.16, server: 10.1.1.10, request: "GET /info?id=1&subid=2 HTTP/1.1", upstream: "uwsgi://10.1.1.10:8443", host: "10.1.1.10:8080"
Another note:
Only nginx is running in Docker. Both the host and the container are reached from outside via the 10.1.1.10 IP: to reach nginx I use 10.1.1.10:8080, and to reach uwsgi directly I use 10.1.1.10:8443.
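One thing worth noting about the configs above: nginx's uwsgi_pass speaks uWSGI's binary protocol, while protocol = http in the ini makes the workers speak plain HTTP on that socket, which would match curl working directly but nginx seeing a prematurely closed upstream. A hedged sketch of the two self-consistent pairings (addresses reused from the question):
# Pairing A: keep uwsgi_pass in nginx and remove "protocol = http" from the ini,
# so the socket speaks the uwsgi protocol
[uwsgi]
socket = 0.0.0.0:8443

# Pairing B: keep "protocol = http" in uwsgi and proxy plain HTTP from nginx
location / {
    proxy_set_header Host $host;
    proxy_pass http://10.1.1.10:8443;
}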

NGINX + Gunicorn + Flask - 502 Bad Gateway - Permission Denied on Socket File

We are trying to set up NGINX as a reverse proxy to our Gunicorn Python application. We have been following this Guide from Digital Ocean (https://www.digitalocean.com/community/tutorials/how-to-serve-flask-applications-with-gunicorn-and-nginx-on-ubuntu-16-04#create-a-systemd-unit-file). Both Gunicorn and NGINX are running on the same Ubuntu 16.04 32-bit virtual machine.
All of the posts we've seen online dealing with this type of permissions issue seem to point to the wrong "Group" setting in the service file, or to wrong permissions on the socket file. But as you can see below, we have the group set to "www-data". The socket file appears to have the necessary permissions and www-data is the owner.
What we currently have set (I've replaced our app name with "app"):
run.py
from flask import current_app
import os
from os import path
from application import app
from instance.config import Config
if __name__ == '__main__':
conf = Config()
app.run(host='0.0.0.0', debug=False, threaded=True)
/etc/systemd/system/app.service
[Unit]
Description=Application
After=network.target
[Service]
User=<root>
Group=www-data
WorkingDirectory=/home/<root>/app
Environment="PATH=/home/<root>/venv/bin"
ExecStart=/home/<root>/venv/bin/gunicorn --workers 3 --bind unix:app.sock -m 007 run:app
[Install]
WantedBy=multi-user.target
/etc/nginx/sites-available/app
server {
listen 80;
server_name app.com;
location / {
include proxy_params;
proxy_pass http://unix:/home/<root>/app/app.sock;
}
}
/var/log/nginx/error.log
2020/06/05 16:49:22 [crit] 2176#2176: *1 connect() to unix:/home/<root>/app/app.sock failed (13: Permission denied) while connecting to upstream, client: 10.0.2.2, server: app.com, request: "GET / HTTP/1.1", upstream: "http://unix:/home/<root>/app/app.sock:/", host: "app.com"
Here are the permissions on the socket file:
gsi#ubuntu:~/app$ ls -l app.sock
srwxrwx--- 1 <root> www-data 0 Jun 5 16:10 app.sock
We're new to NGINX so we're not quite sure what the issue is or how to troubleshoot this. Can anyone see where we're going wrong here? Please let us know if there's additional info we can provide.
I just ran into this problem. I was able to create the gunicorn socket file, but nginx complained about permission denied. The issue was that my socket file was in a sub-folder and the root folder did not have read or execute permissions. So even though the sub-folder had the correct permissions, the root folder prevented nginx from entering the sub-folder.
The solution was to add read and execute permissions to the root folder:
chmod o+rx /example_root_folder
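A quick way to see which directory in the chain is blocking nginx (a hedged sketch, using the socket path from the question above):
# lists the permissions of every path component; nginx needs execute (x)
# on each directory to reach the socket file
namei -l /home/<root>/app/app.sock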
We were able to resolve this by giving the www-data group access to the full application folder: sudo chgrp www-data ~/app. It already had access to the socket file specifically, but not the application folder.
I didn't think this was necessary since we specified the root user as the owner of the service. The root user already had access to the app folder and the instructions we were following didn't have steps for setting up the group access.
I don't have a lot of experience with Linux permissions/ownership though so this might be obvious to most experienced users.
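Another simple sanity check along the same lines (a hedged sketch, reusing the paths and group from the question):
# if this fails with "Permission denied", some parent directory still blocks
# the www-data group from traversing into the app folder
sudo -u www-data ls /home/<root>/app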
Instead of entering app.com as the server name, try entering the IP address of the host machine and see if that works on the machine itself by running:
$ curl <IP address of the host machine>
If it still doesn't work, I have written an article on the same topic; try to implement it using that and let me know if it works!
Hope it helps! :)

Forwarding Nginx port to gunicorn instance

I was following this tutorial to set up my Flask server.
https://www.digitalocean.com/community/tutorials/how-to-serve-flask-applications-with-gunicorn-and-nginx-on-ubuntu-18-04#step-6-%E2%80%94-securing-the-application
When I got to step 6, I saw that they serve Flask on the whole URL, but I would like to point it at a specific port.
This is the nginx config I have pointing at it. It currently produces a 404.
server {
listen 5000;
server_name site.com;
location / {
include proxy_params;
proxy_pass http://unix:/home/user/project/project.sock;
}
}
All the other files are the same as in the tutorial. I have tried to modify the .sock file, but it seems to be generated automatically and can't be modified. In addition, I need to find a way for nginx to handle this before I worry about handling it from gunicorn.
My end goal is to have nginx forward requests to Flask when a request is sent to 0.0.0.0:5000, and have all other requests (0.0.0.0, 0.0.0.0/*) handled by nginx.
Any help to understand all this is really appreciated; I am lost at this point.
EDIT
My nginx configuration in sites-available:
server {
server_name domain www.domain;
location / {
include proxy_params;
proxy_pass http://127.0.0.1:8080/;
}
}
If you want Flask to listen on a port instead of a socket file, you should override the [Service] section to:
[Service]
...
ExecStart=/home/sammy/myproject/myprojectenv/bin/gunicorn --workers 3 --bind 127.0.0.1:8000 -m 007 wsgi:app
And change your nginx config to proxy_pass http://127.0.0.1:8000/;
This way you can access port 8000 directly to check how gunicorn and Flask are working. Remember to be careful with firewall rules to secure port 8000. For a good discussion of which approach is better, see: gunicorn + nginx: Server via socket or proxy?
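Putting the two pieces together, a minimal sketch of the matching nginx server block (listen 5000 and site.com come from the question; 127.0.0.1:8000 matches the ExecStart above):
server {
    listen 5000;
    server_name site.com;

    location / {
        include proxy_params;
        # forward everything arriving on port 5000 to the gunicorn instance bound above
        proxy_pass http://127.0.0.1:8000/;
    }
}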

Python Django+Nginx+uwsgi 502 Bad Gateway

CentOS 7: when I connect to my website, it shows 502 Bad Gateway.
I start everything with these commands:
uwsgi --ini
systemctl start nginx
And I can't figure out what happened; please help me!
Here's nginx.conf:
upstream django {
server 127.0.0.1:8000;
}
server {
listen 80 default_server;
listen [::]:80 default_server;
server_name example.com;
charset utf-8;
include /etc/nginx/default.d/*.conf;
location / {
include uwsgi_params;
uwsgi_pass django;
}
location /static/ {
alias /usr/local/etc/dmp/static/;
}
}
And the uwsgi settings:
[uwsgi]
chdir = /usr/local/etc/dmp
module = DMP_python.wsgi
plugins = python3
socket = :8000
chmod-socket = 666
master = true
processes = 2
vacuum = true
You're using the wrong setting to tell uwsgi to use an HTTP port. You need http-socket rather than socket.
There can be multiple reasons why an upstream returns an invalid response or no response at all.
Verify that the upstream uwsgi is actually running locally on the CentOS instance and can handle incoming requests.
To verify this, temporarily set http-socket = :8000 in uwsgi.ini and then run uwsgi --ini uwsgi.ini, so you can hit it directly over HTTP.
If uwsgi runs fine on localhost, change the config back to socket = :8000.
On CentOS 7.x, SELinux is enabled and runs in enforcing mode by default, so it won't allow nginx to write/connect to a socket.
Verify whether SELinux already allows nginx to write to sockets:
Check whether read/connect/write to the socket is being denied: grep nginx /var/log/audit/audit.log | audit2allow -m nginx
Build a policy module from those denials: grep nginx /var/log/audit/audit.log | audit2allow -M nginx
And finally load it: semodule -i nginx.pp
The permission-to-connect-to-socket issue should be resolved by now.
Verify that nginx is allowed to make network connections to the upstream. Check nginx's error.log or run getsebool -a | grep httpd.
To allow it, run setsebool -P httpd_can_network_connect on (all of these commands are consolidated below).
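Taken together, a sketch of the SELinux commands above, in order (nginx is just the module label used above):
# preview the policy rules that would allow the denied nginx actions
grep nginx /var/log/audit/audit.log | audit2allow -m nginx
# build a loadable module (nginx.pp) from the same denials, then install it
grep nginx /var/log/audit/audit.log | audit2allow -M nginx
semodule -i nginx.pp
# or, for a network upstream, simply allow nginx to make outbound connections
setsebool -P httpd_can_network_connect on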

Serving a request from gunicorn

Trying to set up a server on Rackspace.com.
Have done the following things:
Installed Centos 6.3
Installed Python 2.7
Installed gunicorn using the "Quick Start" on their home page: gunicorn.org/
In the quick start, a "hello world" application seems to be initialized:
Create file "myapp.py":
(tutorial) $ vi myapp.py
(tutorial) $ cat myapp.py
Contents of "myapp.py"
def app(environ, start_response):
data = "Hello, World!\n"
start_response("200 OK", [
("Content-Type", "text/plain"),
("Content-Length", str(len(data)))
])
return iter([data])
Since I know very little about servers, I do not know what to do next. I tried typing the server's IP address into the browser, but that seemed to result in a timeout.
I'm not sure if there is:
something else that needs to be installed. Nginx is mentioned under "deploy" on the gunicorn website. Looks like Nginx is a proxy server which is confusing to me because I thought gunicorn was a server. Not sure why I need two servers?
something that needs to be configured in gunicorn
something that needs to be configured on the server itself
something else entirely that needs to be done in order to actually serve a request
What are the next steps?
Thanks so much!
Since gunicorn is a web server, in your case Nginx will act as a reverse proxy, passing HTTP requests on to gunicorn.
So I will lay out the steps for a simple Nginx and Gunicorn configuration running on the same machine.
Starting with the nginx configuration:
Go to your /etc/nginx/nginx.conf and, under the http{} block, make sure you have: include /etc/nginx/sites-enabled/*;
http{
# other configurations (...)
include /etc/nginx/sites-enabled/*;
}
Now create a file at /etc/nginx/sites-enabled/mysite.conf where you will proxy requests to your gunicorn app.
server {
listen 80 default; # this means nginx will be
# listening requests on port 80 and
# this will be the default nginx server
server_name localhost;
# declare proxy params and values to forward to your gunicorn webserver
proxy_pass_request_headers on;
proxy_pass_request_body on;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_read_timeout 120s;
location / {
# here is where you declare that every request to /
# should be proxy to 127.0.0.1:8000 (which is where
# your gunicorn will be running on)
proxy_pass_header Server;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Scheme $scheme;
proxy_connect_timeout 10;
proxy_read_timeout 10;
proxy_pass http://127.0.0.1:8000/; # the actual nginx directive to
# forward the request
}
}
Ok, at this point all you have is an Nginx acting as a proxy where all the requests going to 127.0.0.1:80 will be passed to 127.0.0.1:8000.
Now it is time to configure your Gunicorn web server:
The way I usually do it is with a configuration file; a Gunicorn config file can be an ordinary Python file. So create a file at any location you like; I will assume this file is /etc/gunicorn/mysite.py:
workers = 3 # number of workers Gunicorn will spawn
bind = '127.0.0.1:8000' # this is where you declare on which address your
# gunicorn app is running.
# Basically where Nginx will forward the request to
pidfile = '/var/run/gunicorn/mysite.pid' # create a simple pid file for gunicorn.
user = 'user' # the user gunicorn will run on
daemon = True # this is only to tell gunicorn to daemonize the server process
errorlog = '/var/log/gunicorn/error-mysite.log' # error log
accesslog = '/var/log/gunicorn/access-mysite.log' # access log
proc_name = 'gunicorn-mysite' # the gunicorn process name
OK, all set in configuration. Now all you have to do is start the servers.
Start gunicorn, telling it which config file and which app to use.
From the command line, in the folder where your myapp.py file is located, run:
gunicorn -c /etc/gunicorn/mysite.py myapp:app
Now just start nginx:
/etc/init.d/nginx start
or
service nginx start
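To check the chain end to end, a hedged sketch (addresses taken from the config above):
# hit gunicorn directly
curl -i http://127.0.0.1:8000/
# and through nginx
curl -i http://127.0.0.1/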
Hope this helps.
Looking at the quickstart guide, you probably should have run:
(tutorial) $ ../bin/gunicorn -w 4 myapp:app
which should have produced a line that looks a bit like:
Listening at: http://127.0.0.1:8000
among other output. See if you can access your site at that address.
Also note that 127.0.0.1 is the loopback address, accessible only from the host itself. To get gunicorn to bind to a different address, pass it --bind 0.0.0.0:80, as Jan-Philip suggests.
Since you mention Rackspace, it's possible that you may need to adjust the firewall settings to allow incoming connections to the desired ports.
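Putting that together, a hedged example (binding to port 80 requires root privileges; an unprivileged port such as 8000 plus an open firewall rule is the safer first test):
(tutorial) $ ../bin/gunicorn -w 4 --bind 0.0.0.0:8000 myapp:app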
Looks like you do not have a web application developed so far. So, I assume that your goal for now is to set up a development environment. For the time being, develop your web application using the development web server included in most frameworks, e.g. Flask.
Whatever framework you are using, make the development web server listen on 0.0.0.0 so that the service is listening on all configured network interfaces and make sure that the port is open to the outside (check the Rackspace settings).
When you are done developing your application or are looking into an existing one, you have to deploy it in a solid way. Then, gunicorn behind nginx is an option.
I will roughly go through your questions. It looks like you have to read a bit more :-)
Nginx is mentioned under "deploy" on the gunicorn website. Looks like Nginx is a proxy server which is confusing to me because I thought gunicorn was a server. Not sure why I need two servers?
Nginx is a full-featured web server. It is appreciated for its performance and stability. People use it to serve static files (to not burden a dynamic web application with this task), to forward requests to web applications whenever necessary, for SSL-termination, and for load-balancing. Note that this is an incomplete picture.
gunicorn is a server for serving WSGI apps. Mainly, it manages worker processes that actually execute the web application.
something that needs to be configured in gunicorn.
something that needs to be configured on the server itself.
something that else entirely that needs to be done in order to actually serve a request.
Actually, you can optimize your Linux box in endless ways (for performance, e.g. increasing the file descriptor limit, and for security). Within gunicorn, you can configure the number of worker processes and a lot more. If you have nginx as a frontend or even another load balancer, that one has its own configuration. You see, your setup might become very complex for an actual deployment in a real-world scenario. This is not trivial.
However, for playing around with a WSGI application, just set up your development framework properly, which is very simple in most cases, and make sure that there are no firewall issues. That's all.
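For instance, a minimal sketch of such a development setup, assuming Flask (the port is an arbitrary choice; remember to open it in the Rackspace firewall):
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "Hello from the development server!"

if __name__ == "__main__":
    # 0.0.0.0 makes the dev server listen on all configured network interfaces
    app.run(host="0.0.0.0", port=8000)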
