I have Nginx running as a reverse proxy on a computer with only one open port. Through this port and Nginx I redirect the received requests to several internal servers. Now I need to run InfluxDB on this computer, but the client writing to InfluxDB is on another computer.
My first idea was to add a new location block that forwards the incoming requests, since port 8086 is closed, for example:
location /databasets {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-NginX-Proxy true;
    proxy_pass http://localhost:8086;
}
and then, with Python, I use:
client = InfluxDBClient(host='https://myurl', port=10000, username='root', password='root', dbname='mydb', path='databasets', ssl=True, proxies={"https": "https://myurl:10000/databasets"})
But so far it doesn't work. I have tried a couple of ways of configuring the nginx.conf file that I found on the internet, and also changing the host/port in the Python client. I don't know whether this is possible at all, or on which side the error lies. Any ideas?
Thanks in advance.
Add the following to your nginx config:
location /databasets/ {
    proxy_pass http://localhost:8086;
    rewrite ^/databasets/(.*) /$1 break;
    proxy_set_header Host $host;
}
The incoming URL needs to be rewritten so that the /databasets/ prefix is stripped before the request reaches InfluxDB.
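The effect of that rewrite rule can be sketched in Python (the URIs below are illustrative, not taken from the question):

```python
import re

def strip_prefix(uri):
    """Mimic the nginx rule: rewrite ^/databasets/(.*) /$1 break;"""
    return re.sub(r"^/databasets/(.*)", r"/\1", uri)

print(strip_prefix("/databasets/write"))  # -> /write
print(strip_prefix("/databasets/ping"))   # -> /ping
print(strip_prefix("/other"))             # no match, unchanged -> /other
```

So a client request to /databasets/write arrives at InfluxDB as /write, which is the path it actually serves.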
I am simply running a Flask app without nginx and uWSGI; yes, my host is behind the load balancer.
I am trying to read all the keys that could contain the IP address, but I am not getting the actual IP of the client.
X-Real-IP changes on every request, and X-Forwarded-For contains only one IP address, which is the load balancer's IP.
Same issue with Bottle. When I start the application directly with python app.py, I am not able to get the real IP address.
Is it a must to use uWSGI and nginx for a sample app to read the IP?
If I use the configuration below and forward the uwsgi_param values, I can read the list of IP addresses in the response.
Below is my wsgi_file.ini:
[uwsgi]
socket = 127.0.0.1:8000
plugin = python
wsgi-file = app/app.py
processes = 3
callable = app
And my nginx.conf:
server {
    listen 3000;
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    location / {
        uwsgi_pass 0.0.0.0:8000; # unix:///tmp/uwsgi.sock;
        include /etc/nginx/uwsgi_params;
        uwsgi_param X-Real-IP $remote_addr;
        uwsgi_param X-Forwarded-For $proxy_add_x_forwarded_for;
        uwsgi_param X-Forwarded-Proto $http_x_forwarded_proto;
    }
}
I started the nginx server and ran the application with the command uwsgi --ini wsgi_file.ini.
The IP address of the client can be obtained in Flask with request.remote_addr.
Note that if you are using a reverse proxy, load balancer or any other intermediary between the client and the server, then this is going to return the IP address of the last intermediary, the one that sends requests directly into the Flask server. If the intermediaries include the X-Real-IP, X-Forwarded-For or Forwarded headers, then you can still figure out the IP address of the client.
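A minimal sketch of that fallback logic, as a hypothetical helper (not part of Flask itself); it is only safe when the X-Forwarded-For header is set by a proxy you control, since clients can spoof it otherwise:

```python
def client_ip(environ):
    """Pick the client IP from a WSGI environ dict.

    Uses the first (leftmost) entry of X-Forwarded-For when an
    intermediary set it, otherwise falls back to the direct peer
    address in REMOTE_ADDR.
    """
    forwarded = environ.get("HTTP_X_FORWARDED_FOR", "")
    if forwarded:
        return forwarded.split(",")[0].strip()
    return environ.get("REMOTE_ADDR", "")

# The header lists the original client first, then each proxy hop.
print(client_ip({"HTTP_X_FORWARDED_FOR": "203.0.113.7, 10.0.0.1",
                 "REMOTE_ADDR": "10.0.0.1"}))   # -> 203.0.113.7
print(client_ip({"REMOTE_ADDR": "192.0.2.5"}))  # -> 192.0.2.5
```

In a Flask view the same information is available via request.headers and request.remote_addr.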
I've created a REST API for my Django app, but how do I serve it at api.website.com rather than something like www.website.com/api?
By the way, I'm using nginx, if that has anything to do with this.
In your nginx configuration add something like this. It passes all requests on api.website.com to your gunicorn socket, and from there to your Django app.
server {
    listen *:80;
    server_name api.website.com;

    location ~ ^/api(.*)$ {
        try_files $uri $1 /$1;
    }

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://gunicorn_socket/;
    }
}
I have the following config (inside the server block) for my nginx server:
location / {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    # Fix the "It appears that your reverse proxy set up is broken" error.
    proxy_pass http://localhost:5000;
    proxy_read_timeout 90;
}

location /api {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    # Fix the "It appears that your reverse proxy set up is broken" error.
    proxy_pass http://localhost:2233/;
    proxy_read_timeout 90;
    proxy_redirect default;
}
I now try to access /api/auth/login/ via my web browser. At port 2233 I have a Python server running Flask. In the Python console I get:
"GET //auth/login/ HTTP/1.0" 404 -
In my opinion this path is messy, and it is also not configured in Flask; that's why there is a 404 response (for /auth/login I do have a route).
How do I get rid of the extra leading slash nginx produces?
You are using the proxy_pass directive to alias /api/foo to /foo. This aliasing works best when either both the source and target URIs end with a /, or neither does.
So:
location /api/ {
    proxy_pass http://localhost:2233/;
    ...
}
will correctly map /api/foo to /foo without producing the double / at the beginning. See this document for details.
Note that this may also mean the bare URI /api no longer works correctly.
Alternatively, perform the alias with rewrite ... break; instead of relying on the proxy_pass URI:
location /api {
    rewrite ^/api(?:/(.*))?$ /$1 break;
    proxy_pass http://localhost:2233;
    ...
}
See this document for details.
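The second rewrite can be sketched in Python to show why the optional group matters (the URIs are illustrative):

```python
import re

def strip_api(uri):
    """Mimic the nginx rule: rewrite ^/api(?:/(.*))?$ /$1 break;

    The (?:/(...))? makes the trailing slash and path optional, so the
    bare URI /api is handled too instead of failing to match.
    """
    # In Python 3.5+, a backreference to a non-participating group is
    # replaced with an empty string, matching nginx's behavior for $1.
    return re.sub(r"^/api(?:/(.*))?$", r"/\1", uri)

print(strip_api("/api/auth/login/"))  # -> /auth/login/  (single leading slash)
print(strip_api("/api"))              # -> /             (bare URI still works)
```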
What I'm trying to accomplish:
Have a domain on https. Check; it's working fine with the following config. The Flask app runs on port 1337, nginx picks it up and serves it over https. Everything is working nicely.
Now I want to run another app, on port 1338 let's say. But if I do this, the browser (Chrome) automatically redirects it to https.
I want: http://example.com:1338 ... to work fine.
I get: https://example.com:1338 ... certificate error.
My question is: how can I make the other app (on port 1338) work either with https:// or with http://?
Here's my config...
server {
    listen 80 default_server;
    listen [::]:80 default_server;

    root /home/cleverbots;

    # Add index.php to the list if you are using PHP
    index index.html index.htm index.nginx-debian.html;

    server_name _;

    # SSL configuration
    #
    listen 443 ssl http2 default_server;
    listen [::]:443 ssl http2 default_server;

    ssl_certificate /xxxxxxxxxx.crt;
    ssl_certificate_key /xxxxxxxxxx.key;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";
    ssl_ecdh_curve secp384r1;
    ssl_session_cache shared:SSL:10m;
    ssl_session_tickets off;
    ssl_stapling on;
    ssl_stapling_verify on;
    resolver 8.8.8.8 8.8.4.4 valid=300s;
    resolver_timeout 5s;

    # Disable preloading HSTS for now. You can use the commented out header line that includes
    # the "preload" directive if you understand the implications.
    #add_header Strict-Transport-Security "max-age=63072000; includeSubdomains; preload";
    add_header Strict-Transport-Security "max-age=63072000; includeSubdomains";
    add_header X-Frame-Options DENY;
    add_header X-Content-Type-Options nosniff;

    ssl_dhparam /xxxxxx/dhparam.pem;

    location /static/ {
        expires 30d;
        add_header Last-Modified $sent_http_Expires;
        alias /home/my_first_app/application/static/;
    }

    location / {
        try_files $uri @tornado;
    }

    location @tornado {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://127.0.0.1:1337;
    }
}
The answer to your question depends on what exactly you want the user experience to be.
As I understand your goal, you only have one domain (example.com). Your first app (I'm going to call it app1337) is running on port 1337, and you can access it in a browser at https://example.com/. Now you want to add another app (app1338) that you want to be able to access at https://example.com:1338/. The problem here is that only one service can listen on a given port on a given interface. This can work, but it means you have to be really careful that your Flask app only listens on loopback (127.0.0.1) and nginx only listens on your Ethernet interface; if not, you'll get "socket already in use" errors. I would recommend instead using something else like 8338 in nginx to avoid this confusion.
The fastest solution I can see would be to leave your existing server block exactly as is, duplicate the entire thing, and in the new block:
- Change the two listen 443 lines to the port you want to use in the browser (8338).
- Remove the listen 80 lines or, if you want to serve the app on both SSL and non-SSL, change the port to the non-SSL port you want to use.
- Change your proxy_pass line to point to your second Flask app.
Like Keenan, I would recommend you use subdomains to sort your traffic. Something like https://app1337.example.com/ and https://app1338.example.com/ to make for a better user experience. To do this, duplicate the server block as above, but this time leave the ports the same, but change the "server_name" directive in each block to match the domain. Remove all of the "default_server" parts from the listen directives.
As an example:
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name app1337.example.com;

    # SSL configuration
    # Certificate and key for "app1337.example.com"
    ssl_certificate /xxxxxxxxxx.crt;
    ssl_certificate_key /xxxxxxxxxx.key;

    # The rest of the ssl stuff is common and can be moved to a shared file and included
    # in whatever blocks it is needed.
    include sslcommon.conf;

    root /home/cleverbots;

    # Add index.php to the list if you are using PHP
    index index.html index.htm index.nginx-debian.html;

    location /static/ {
        expires 30d;
        add_header Last-Modified $sent_http_Expires;
        alias /home/my_first_app/application/static/;
    }

    location / {
        try_files $uri @tornado;
    }

    location @tornado {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://127.0.0.1:1337;
    }
}
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name app1338.example.com;

    # SSL configuration
    # Certificate and key for "app1338.example.com"
    ssl_certificate /xxxxxxxxxx.crt;
    ssl_certificate_key /xxxxxxxxxx.key;

    # The rest of the ssl stuff is common and can be moved to a shared file and included
    # in whatever blocks it is needed.
    include sslcommon.conf;

    ## This might be different for app1338
    root /home/cleverbots;

    # Add index.php to the list if you are using PHP
    index index.html index.htm index.nginx-debian.html;

    ## This might be different for app1338
    location /static/ {
        expires 30d;
        add_header Last-Modified $sent_http_Expires;
        alias /home/my_first_app/application/static/;
    }

    location / {
        try_files $uri @app1338;
    }

    location @app1338 {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://127.0.0.1:1338;
    }
}
I am having trouble establishing a websocket in my Flask web application.
On the client side, I am emitting a "ping" websocket event every second to the server. In the browser console, I see the following error each second
POST https://example.com/socket.io/?EIO=3&transport=polling&t=LOkVYzQ&sid=88b5202cf38f40879ddfc6ce36322233 400 (BAD REQUEST)
GET https://example.com/socket.io/?EIO=3&transport=polling&t=LOkVZLN&sid=5a355bbccb6f4f05bd46379066876955 400 (BAD REQUEST)
WebSocket connection to 'wss://example.com/socket.io/?EIO=3&transport=websocket&sid=5a355bbccb6f4f05bd46379066876955' failed: WebSocket is closed before the connection is established.
I have the following nginx.conf
server {
    listen 80;
    server_name example.com www.example.com;
    return 301 https://$server_name$request_uri;
}

upstream app_server {
    # for UNIX domain socket setups
    server unix:/pathtowebapp/gunicorn.sock fail_timeout=0;
}
server {
    listen 443 ssl;
    server_name example.com www.example.com;
    keepalive_timeout 5;

    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';

    charset utf-8;
    client_max_body_size 30M;

    location / {
        try_files $uri @proxy_to_app;
    }

    location /socket.io {
        proxy_pass http://app_server;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Upgrade websocket;
        proxy_set_header Connection "upgrade";
        proxy_read_timeout 86400;
        proxy_buffering off;
        proxy_headers_hash_max_size 1024;
    }

    location /static {
        alias /pathtowebapp/webapp/static;
    }

    location @proxy_to_app {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # enable this if and only if you use HTTPS
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $http_host;
        # we don't want nginx trying to do something clever with
        # redirects, we set the Host: header above already.
        proxy_redirect off;
        #proxy_buffering off;
        proxy_pass http://app_server;
    }
}
I have been looking all over for examples of a websocket working with https using nginx in front of gunicorn.
My webpage loads, although the websocket connection is not successful.
The client side websocket is established using the following javascript:
var socket = io.connect('https://' + document.domain + ':' + location.port + namespace);
Here is my gunicorn.conf
import multiprocessing
bind = 'unix:/pathtowebapp/gunicorn.sock'
workers = multiprocessing.cpu_count() * 2 + 1
worker_class = 'eventlet'
[EDIT] If I configure nginx the way it is shown in the Flask-SocketIO documentation and just run (env)$ python deploy_app.py, then it works. But I was under the impression that this was not as production-ready as the setup I described above.
The problem is that you are running multiple workers on gunicorn. This is not a configuration that is currently supported, due to the very limited load balancer in gunicorn that does not support sticky sessions. Documentation reference: https://flask-socketio.readthedocs.io/en/latest/#gunicorn-web-server.
Instead, run several gunicorn instances, each with one worker, and then set up nginx to do the load balancing, using the ip_hash method so that sessions are sticky.
Also, in case you are not aware, if you run multiple servers you need to also run a message queue, so that the processes can coordinate. This is also covered in the documentation link above.
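As a sketch of what that layout could look like on the nginx side (the ports, app name, and Redis URL below are hypothetical, not from the question):

```
# Several single-worker gunicorn instances, started e.g. as:
#   gunicorn --worker-class eventlet -w 1 -b 127.0.0.1:5001 app:app
#   gunicorn --worker-class eventlet -w 1 -b 127.0.0.1:5002 app:app

upstream socketio_nodes {
    ip_hash;                # same client IP -> same instance (sticky sessions)
    server 127.0.0.1:5001;
    server 127.0.0.1:5002;
}

server {
    # ... ssl and other settings as in your existing config ...

    location /socket.io {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_pass http://socketio_nodes;
    }
}
```

On the Flask-SocketIO side, the instances coordinate through the message queue, e.g. SocketIO(app, message_queue='redis://').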