Flask not able to get the real IP of the remote client - Python

I am simply running a Flask app without nginx and uwsgi, and yes, my host is behind a load balancer.
I am trying to read all the keys that can contain the IP address, but I am not getting the actual IP of the client.
X-Real-IP changes on every request, and X-Forwarded-For contains only one IP address, which is the load balancer's IP.
I have the same issue with Bottle. When I start the application directly with python app.py, I am not able to get the real IP address.
Is it a must to use uwsgi and nginx for a sample app to read the IP?
If I use the configuration below and forward the uwsgi_param values, I can read the list of IP addresses in the response.
Below is wsgi_file.ini:
[uwsgi]
socket = 127.0.0.1:8000
plugin = python
wsgi-file = app/app.py
processes = 3
callable = app
nginx.conf:
server {
    listen 3000;
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    location / {
        uwsgi_pass 0.0.0.0:8000; # unix:///tmp/uwsgi.sock;
        include /etc/nginx/uwsgi_params;
        uwsgi_param X-Real-IP $remote_addr;
        uwsgi_param X-Forwarded-For $proxy_add_x_forwarded_for;
        uwsgi_param X-Forwarded-Proto $http_x_forwarded_proto;
    }
}
I started the nginx server and ran the application using the command:
uwsgi --ini wsgi_file.ini

The IP address of the client can be obtained in Flask with request.remote_addr.
Note that if you are using a reverse proxy, load balancer, or any other intermediary between the client and the server, then this will return the IP address of the last intermediary, the one that sends requests directly to the Flask server. If the intermediaries set the X-Real-IP, X-Forwarded-For, or Forwarded headers, then you can still determine the IP address of the client.

Related

How can I connect to InfluxDB through Nginx reverse proxy?

I have Nginx running as a reverse proxy on a computer with only one open port. Through this port and Nginx I redirect the received requests to several internal servers. Now I need to run InfluxDB on this computer, but the client writing to InfluxDB is on another computer.
My first idea was to add a new location to redirect input requests since port 8086 is closed, for example:
location /databasets {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-NginX-Proxy true;
    proxy_pass http://localhost:8086;
}
and then, with Python, I use:
client = InfluxDBClient(host='https://myurl', port=10000, username='root', password='root', database='mydb', path='databasets', ssl=True, proxies={"https": "https://myurl:10000/databasets"})
But so far it doesn't work. I have tried a couple of ways of configuring the nginx.conf file that I have seen on the internet, and also changed the host/port in the Python client. I don't know if this is not possible, or on which side the error is. Any ideas?
Thanks in advance
Add the following config to your nginx config:
location /databasets/ {
    proxy_pass http://localhost:8086;
    rewrite ^/databasets/(.*) /$1 break;
    proxy_set_header Host $host;
}
The input URL needs to be rewritten.
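The effect of that rewrite can be illustrated in plain Python; the request paths below are made-up examples:

```python
import re

def rewrite_uri(uri: str) -> str:
    # Mirrors the nginx directive: rewrite ^/databasets/(.*) /$1 break;
    # i.e. strip the /databasets prefix before proxying to InfluxDB.
    return re.sub(r'^/databasets/(.*)', r'/\1', uri)

print(rewrite_uri('/databasets/ping'))   # -> /ping
print(rewrite_uri('/databasets/query'))  # -> /query
```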

FastAPI (starlette) get client real IP

I have an API on FastAPI, and I need to get the client's real IP address when they request my page.
I'm trying to use Starlette's Request, but it returns my server IP, not the client's remote IP.
My code:
@app.post('/my-endpoint')
async def my_endpoint(stats: Stats, request: Request):
    ip = request.client.host
    print(ip)
    return {'status': 1, 'message': 'ok'}
What am I doing wrong? How do I get the real IP (like Flask's request.remote_addr)?
request.client should work, unless you're running behind a proxy (e.g. nginx). In that case, use uvicorn's --proxy-headers flag to accept these incoming headers, and make sure the proxy forwards them.
The FastAPI using-request-directly doc page shows this example:
from fastapi import FastAPI, Request
app = FastAPI()
@app.get("/items/{item_id}")
def read_root(item_id: str, request: Request):
    client_host = request.client.host
    return {"client_host": client_host, "item_id": item_id}
Having this example would have saved me ten minutes of messing with Starlette's Request class.
You don't need to set --proxy-headers because it is enabled by default, but it only trusts IPs from --forwarded-allow-ips, which defaults to 127.0.0.1.
To be safe, you should only trust proxy headers from the IP of your reverse proxy (instead of trusting all with '*'). If it's on the same machine, then the defaults should work. Although I noticed from my nginx logs that it was using IPv6 to communicate with uvicorn, so I had to use --forwarded-allow-ips='[::1]'; then I could see the IP addresses in FastAPI. You can also use --forwarded-allow-ips='127.0.0.1,[::1]' to catch both IPv4 and IPv6 on localhost.
--proxy-headers / --no-proxy-headers - Enable/Disable X-Forwarded-Proto, X-Forwarded-For, X-Forwarded-Port to populate remote address info. Defaults to enabled, but is restricted to only trusting connecting IPs in the forwarded-allow-ips configuration.
--forwarded-allow-ips - Comma separated list of IPs to trust with proxy headers. Defaults to the $FORWARDED_ALLOW_IPS environment variable if available, or '127.0.0.1'. A wildcard '*' means always trust.
Ref: https://www.uvicorn.org/settings/#http
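The semantics of that allow-list can be sketched with the stdlib ipaddress module (a simplified illustration only, not uvicorn's actual implementation):

```python
import ipaddress

def trusts_proxy_headers(connecting_ip: str, forwarded_allow_ips: str) -> bool:
    # '*' means trust proxy headers from every connecting address.
    if forwarded_allow_ips == '*':
        return True
    allowed = {ipaddress.ip_address(ip.strip().strip('[]'))
               for ip in forwarded_allow_ips.split(',')}
    return ipaddress.ip_address(connecting_ip.strip('[]')) in allowed

print(trusts_proxy_headers('::1', '127.0.0.1,[::1]'))  # True
print(trusts_proxy_headers('10.0.0.5', '127.0.0.1'))   # False
```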
If you use nginx with uvicorn, you should set --proxy-headers for uvicorn, and your nginx config should add the Host, X-Real-IP, and X-Forwarded-For headers.
e.g.
server {
    # the port your site will be served on
    listen 80;
    # the domain name it will serve for
    server_name <your_host_name>; # substitute your machine's IP address or FQDN
    # add_header Access-Control-Allow-Origin *;
    # add_header Access-Control-Allow-Credentials: true;
    add_header Access-Control-Allow-Headers Content-Type,XFILENAME,XFILECATEGORY,XFILESIZE;
    add_header access-control-allow-headers authorization;

    # Finally, send all non-media requests to the Django server.
    location / {
        proxy_pass http://127.0.0.1:8000/; # the uvicorn server address
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
From the Werkzeug ProxyFix middleware documentation:
This middleware can be applied to add HTTP proxy support to an application that was not designed with HTTP proxies in mind. It sets REMOTE_ADDR, HTTP_HOST from X-Forwarded headers. While Werkzeug-based applications already can use :py:func:werkzeug.wsgi.get_host to retrieve the current host even if behind proxy setups, this middleware can be used for applications which access the WSGI environment directly.
If you have more than one proxy server in front of your app, set num_proxies accordingly.
Do not use this middleware in non-proxy setups for security reasons.
The original values of REMOTE_ADDR and HTTP_HOST are stored in the WSGI environment as werkzeug.proxy_fix.orig_remote_addr and werkzeug.proxy_fix.orig_http_host.
:param app: the WSGI application
:param num_proxies: the number of proxy servers in front of the app.
If you have configured your nginx configuration properly based on @AllenRen's answer, try using the --proxy-headers and --forwarded-allow-ips='*' flags for uvicorn.
You can use the code below to get the real IP address of the client if you are using reverse proxying and port forwarding:
import re

@app.post('/my-endpoint')
async def my_endpoint(stats: Stats, request: Request):
    x = 'x-forwarded-for'.encode('utf-8')
    for header in request.headers.raw:
        if header[0] == x:
            print("Found the forwarded-for ip address")
            origin_ip, forward_ip = re.split(', ', header[1].decode('utf-8'))
            print(f"origin_ip:\t{origin_ip}")
            print(f"forward_ip:\t{forward_ip}")
    return {'status': 1, 'message': 'ok'}
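Note that X-Forwarded-For can carry more than two entries when several proxies are chained, so a fixed two-value split breaks in that case. A more tolerant stdlib-only split (the helper name is made up for illustration):

```python
def parse_forwarded_for(value: str):
    """Split 'client, proxy1, proxy2' into the client IP and the proxy chain."""
    parts = [p.strip() for p in value.split(',') if p.strip()]
    return parts[0], parts[1:]

print(parse_forwarded_for('203.0.113.7, 10.0.0.1, 10.0.0.2'))
```

Only trust the left-most entry if every hop in the chain is yours; otherwise use the right-most entry added by a proxy you control.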
I have deployed with a docker-compose file; the changes are below.
nginx.conf file:
location / {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass http://localhost:8000;
}
Changes in Dockerfile
EXPOSE 8000
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0"]
Changes in docker-compose.yaml file
version: "3.7"
services:
  app:
    build: ./fastapi
    container_name: ipinfo
    restart: always
    ports:
      - "8000:8000"
    network_mode: host
  nginx:
    build: ./nginx
    container_name: nginx
    restart: always
    ports:
      - "80:80"
      - "443:443"
    network_mode: host
After these changes, I got the client's external IP correctly.
Sharing what worked for me on an Apache server set up on a stand-alone Ubuntu-based web-server instance/droplet (Amazon EC2 / DigitalOcean / Hetzner / SSDnodes). TL;DR: use X_Forwarded_For.
I'm assuming you have a domain name registered and have pointed your server to it.
In the code
from typing import Optional
from fastapi import FastAPI, Header

app = FastAPI()

@app.get("/API/path1")
def path1(X_Forwarded_For: Optional[str] = Header(None)):
    print("X_Forwarded_For:", X_Forwarded_For)
    return {"X_Forwarded_For": X_Forwarded_For}
This gives null when running on the local machine and hitting localhost:port/API/path1, but on my deployed site it correctly gives my IP address when I hit the API.
In the program launch command
uvicorn launch1:app --port 5010 --host 0.0.0.0 --root-path /site1
The main program is in launch1.py. Note the --root-path arg here; that's important if your application is going to be deployed somewhere other than the root level of a URL.
This takes care of URL mappings, so in the program code above we didn't need to include it in the @app.get line. It also makes the program portable: tomorrow you can move it from the /site1 path to /site2 without having to edit the code.
In the server setup
The setting on my web-server:
Apache server is setup
LetsEncrypt SSL is enabled
Edit /etc/apache2/sites-available/[sitename]-le-ssl.conf
Add these lines inside <VirtualHost *:443> tag:
ProxyPreserveHost On
ProxyPass /site1/ http://127.0.0.1:5010/
ProxyPassReverse /site1/ http://127.0.0.1:5010/
Enable proxy_http and restart Apache
a2enmod proxy_http
systemctl restart apache2
some good guides for server setup:
https://www.digitalocean.com/community/tutorials/how-to-secure-apache-with-let-s-encrypt-on-ubuntu-20-04
https://www.digitalocean.com/community/tutorials/how-to-use-apache-http-server-as-reverse-proxy-using-mod_proxy-extension-ubuntu-20-04
With this all setup, you can hit your api endpoint on https://[sitename]/site1/API/path1 and should see the same IP address in the response as what you see on https://www.whatismyip.com/ .
I have docker-compose and an nginx proxy. The following helped:
In forwarded-allow-ips I specified '*' (an environment variable in the docker-compose.yml file):
- FORWARDED_ALLOW_IPS=*
and added the code to the nginx.conf file as recommended by @AllenRen:
location /api/ {
    proxy_pass http://backend:8000/;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
Using the Header dependency should let you access the X-Real-IP header.
from fastapi import FastAPI, Header

app = FastAPI()

@app.get('/')
def index(real_ip: str = Header(None, alias='X-Real-IP')):
    return real_ip
Now if you start the server (in this case on port 8000) and hit it with a request with that X-Real-IP header set you should see it echo back.
http :8000/ X-Real-IP:111.222.333.444
HTTP/1.1 200 OK
content-length: 17
content-type: application/json
server: uvicorn
"111.222.333.444"
If you are using nginx as a reverse proxy, the direct solution is to include the proxy_params file, like so:
location /api {
    include proxy_params;
    proxy_pass http://localhost:8000;
}

Flask-SocketIO and 400 Bad Request

I'm running a Flask application with SocketIO to deal with notifications.
The Flask app is listening on port 5000 and the client is on 8080.
The JS client always gets this error:
VM15520:1 GET http://localhost:5000/socket.io/?EIO=3&transport=polling&t=Mb2_LpO 400 (Bad Request)
Access to XMLHttpRequest at 'http://localhost:5000/socket.io/?EIO=3&transport=polling&t=Mb2_LpO' from origin 'http://localhost:8080' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource.
I'm actually starting my app with gunicorn as follows:
gunicorn --workers=1 --worker-class eventlet --certfile=keys/key.crt --keyfile=keys/key.key --bind 0.0.0.0:5000 myapp.run:app
and this is my run.py:
import eventlet
from myapp import create_app
eventlet.monkey_patch()
app, celery = create_app('config_prod.py')
I'm also using CORS(app) in my app factory.
I also tried adding this in one of my blueprints:
@api.after_request
def after_request(response):
    response.headers.add('Access-Control-Allow-Origin', 'http://localhost:8080')
    response.headers.add('Access-Control-Allow-Headers',
                         'Origin, X-Requested-With, Content-Type, Accept, Authorization')
    response.headers.add('Access-Control-Allow-Methods', 'GET,PUT,POST,DELETE,OPTIONS')
    response.headers.add('Access-Control-Allow-Credentials', 'false')
    return response
I'm using nginx as a reverse proxy, so I tried adding the corresponding configuration I've seen in flask-socketio's docs:
location /socket.io {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_http_version 1.1;
    proxy_buffering off;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "Upgrade";
    proxy_pass https://my_backend_host/socket.io;
}
What's wrong?
Thanks!
I got a similar 400 issue with React and Flask-SocketIO; it turned out to be a CORS error.
The following code in Flask fixed my issue:
socketio = SocketIO(app)
socketio.init_app(app, cors_allowed_origins="*")
Also make sure you are using eventlet or gevent-websocket on your server when you select the websocket-only transport. Gevent doesn't have WebSocket support, so it works only with the HTTP polling fallback.
The problem was that I was adding the origin host with http instead of https.
Everything is working fine now.
I fixed it by adding this line to the nginx.conf:
proxy_set_header Origin "";

Serve flask python on https and another port without https

What I'm trying to accomplish:
Have a domain on https. Check; it's working OK using the following config. The Flask app runs on port 1337 -> nginx takes it -> serves it through https. Everything is working nicely.
Now I want to run another app, say on port 1338. But if I do this, the browser (Chrome) automatically redirects it to https.
I want: http://example.com:1338 ... to run OK
I get: https://example.com:1338 ... certificate error
My question is: how can I make the other app (on port 1338) work with either https:// or http://?
Here's my config...
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    root /home/cleverbots;
    # Add index.php to the list if you are using PHP
    index index.html index.htm index.nginx-debian.html;
    server_name _;

    # SSL configuration
    #
    listen 443 ssl http2 default_server;
    listen [::]:443 ssl http2 default_server;
    ssl_certificate /xxxxxxxxxx.crt;
    ssl_certificate_key /xxxxxxxxxx.key;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";
    ssl_ecdh_curve secp384r1;
    ssl_session_cache shared:SSL:10m;
    ssl_session_tickets off;
    ssl_stapling on;
    ssl_stapling_verify on;
    resolver 8.8.8.8 8.8.4.4 valid=300s;
    resolver_timeout 5s;

    # Disable preloading HSTS for now. You can use the commented out header line that includes
    # the "preload" directive if you understand the implications.
    #add_header Strict-Transport-Security "max-age=63072000; includeSubdomains; preload";
    add_header Strict-Transport-Security "max-age=63072000; includeSubdomains";
    add_header X-Frame-Options DENY;
    add_header X-Content-Type-Options nosniff;
    ssl_dhparam /xxxxxx/dhparam.pem;

    location /static/ {
        expires 30d;
        add_header Last-Modified $sent_http_Expires;
        alias /home/my_first_app/application/static/;
    }

    location / {
        try_files $uri @tornado;
    }

    location @tornado {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://127.0.0.1:1337;
    }
}
The answer to your question depends on what exactly you want the user experience to be.
As I understand your goal, you only have one domain (example.com). Your first app (I'm going to call it app1337) is running on port 1337, and you can access it in a browser at https://example.com/. Now you want to add another app (app1338) that you want to be able to access at https://example.com:1338/. The problem here is that only one service can run on a given port on a given interface. This can work, but it means you have to be really careful to make sure that your Flask app only listens on loopback (127.0.0.1) and nginx only listens on your Ethernet interface. If not, you'll get "socket already in use" errors. I would recommend instead using something else, like 8338, in nginx to avoid this confusion.
The fastest solution I can see would be to leave your existing server block exactly as is. Duplicate the entire thing, and in the new block:
Change the two listen 443 lines to the port you want to use in the browser (8338).
Remove the listen 80 lines or, if you want to serve the app on both SSL and non-SSL, change the port to the non-SSL port you want to use.
Change your proxy_pass line to point to your second Flask app.
Like Keenan, I would recommend you use subdomains to sort your traffic. Something like https://app1337.example.com/ and https://app1338.example.com/ to make for a better user experience. To do this, duplicate the server block as above, but this time leave the ports the same, but change the "server_name" directive in each block to match the domain. Remove all of the "default_server" parts from the listen directives.
As an example:
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name app1337.example.com;

    # SSL configuration
    # Certificate and key for "app1337.example.com"
    ssl_certificate /xxxxxxxxxx.crt;
    ssl_certificate_key /xxxxxxxxxx.key;
    # The rest of the ssl stuff is common and can be moved to a shared file and included
    # in whatever blocks it is needed.
    include sslcommon.conf;

    root /home/cleverbots;
    # Add index.php to the list if you are using PHP
    index index.html index.htm index.nginx-debian.html;

    location /static/ {
        expires 30d;
        add_header Last-Modified $sent_http_Expires;
        alias /home/my_first_app/application/static/;
    }

    location / {
        try_files $uri @tornado;
    }

    location @tornado {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://127.0.0.1:1337;
    }
}
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name app1338.example.com;

    # SSL configuration
    # Certificate and key for "app1338.example.com"
    ssl_certificate /xxxxxxxxxx.crt;
    ssl_certificate_key /xxxxxxxxxx.key;
    # The rest of the ssl stuff is common and can be moved to a shared file and included
    # in whatever blocks it is needed.
    include sslcommon.conf;

    ## This might be different for app1338
    root /home/cleverbots;
    # Add index.php to the list if you are using PHP
    index index.html index.htm index.nginx-debian.html;

    ## This might be different for app1338
    location /static/ {
        expires 30d;
        add_header Last-Modified $sent_http_Expires;
        alias /home/my_first_app/application/static/;
    }

    location / {
        try_files $uri @app1338;
    }

    location @app1338 {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://127.0.0.1:1338;
    }
}

Django on a url other than localhost [duplicate]

This question already has answers here:
How to specify which eth interface Django test server should listen on?
(3 answers)
Closed 7 years ago.
I have a server, aerv.nl.
It has Django (a Python framework), but when I run the Django server it says:
server started at: http://127.0.0.1:8000/
How can I let the server run on http://www.aerv.nl/~filip/ (a real URL)?
You'll have to configure your http server and Django. For example if you're using apache you'll need to go through this:
https://docs.djangoproject.com/en/1.9/howto/deployment/wsgi/modwsgi/
What you're doing here is setting up your server to handle the http requests through your django app.
You will need to understand how DNS works, then use redirecting and a proper server (like nginx, or Apache with e.g. gunicorn), not the Django development server, which shouldn't be used in production. There is no way to do what you ask for with just ./manage.py runserver. All you can do is change the IP address and port to something different, e.g. ./manage.py runserver 192.168.0.12:9999, so that, for example, other computers in your network can access your site on that specific IP and port.
Example
You are the owner of the domain example.com, and you have a server where you want to serve your site, with IP address e.g. 5.130.2.19.
You need to go to your domain provider and add an A record which connects these together: example.com -> 5.130.2.19.
Then on your server you set up a webserver, e.g. nginx, and let it run with e.g. this config for your particular server/site:
upstream upstream_server {
    server unix:/var/www/example/gunicorn.sock fail_timeout=10s;
}

server {
    listen 80;
    server_name example.com;
    client_max_body_size 4G;

    location /static/ {
        autoindex on;
        alias /var/www/example/django/static/;
    }

    location /media/ {
        autoindex on;
        alias /var/www/example/django/media/;
    }

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        if (!-f $request_filename) {
            proxy_pass http://upstream_server;
            break;
        }
    }
}
then you would need to run gunicorn with something like:
gunicorn example.wsgi:application --bind=unix:/var/www/example/gunicorn.sock
That should be all, but of course this is very brief. Just substitute your URL for example.com. It is up to you whether this will be a specific record in the nginx config (think of it as an entry point) or one of the routes specified in your Django project.
How does it work?
A user puts example.com into the address bar; their computer then asks the global DNS servers: What IP address does example.com point to? DNS replies: It's 5.130.2.19. The user's browser then sends an HTTP request to that IP, where nginx receives it and checks its config for an example.com handler. It finds one, sees that it should pass the request to unix:/var/www/example/gunicorn.sock, and finds gunicorn running there, which runs the Python Django project and returns something nginx can present as your website.
