Websockets Secure with headers in Python

For my application I'm using secure websockets, which works fine, but I would like to secure it a bit more.
For the websocket server in Python I'm using the websockets library (on asyncio). But when I check the path value passed to the handler registered with websockets.serve(), I always get the proxied socket path, and the remote address is always local.
How can I change my configuration so I can block other IPs that try to connect?
Server.py
import ssl
import asyncio
import logging
import websockets
import pathlib

logging.basicConfig()

STATE = {'value': 0}
USERS = set()

async def register(websocket):
    USERS.add(websocket)
    print("connection made!")

async def unregister(websocket):
    USERS.remove(websocket)

async def update(websocket):
    await websocket.send("Jobnumber: 1")

async def counter(websocket, path):
    await register(websocket)
    addr, seq = websocket.remote_address
    print(addr)  # ALWAYS localhost
    print(path)  # always the same path /server/sock (as configured in nginx)
    try:
        async for message in websocket:
            print(message)
    finally:
        await unregister(websocket)

ssl_context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ssl_context.load_cert_chain(
    pathlib.Path(__file__).with_name('privkey.pem'))

asyncio.get_event_loop().run_until_complete(
    websockets.serve(counter, '', 8004, ssl=ssl_context))
asyncio.get_event_loop().run_forever()
Nginx:
server {
    root /var/www/html/;
    index index.php index.html index.htm index.nginx-debian.html;
    server_name [hidden];

    location / {
        try_files $uri $uri/ =404;
    }

    location /server/sock {
        proxy_pass https://pythonserver;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }

    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/run/php/php7.0-fpm.sock;
    }

    listen [::]:443 ssl ipv6only=on; # managed by Certbot
    listen 443 ssl; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/../fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/.../privkey.pem; # managed by Certbot
}

upstream pythonserver {
    server localhost:8004;
}

Try using this for Nginx, since you are using websockets as well:
server {
    listen 80;
    server_name <url>;
    large_client_header_buffers 8 32k;

    if ($http_user_agent ~* Googlebot) {
        return 403;
    }

    access_log /var/log/nginx/access.log;

    location / {
        add_header 'Access-Control-Allow-Origin' "$http_origin";
        add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS, DELETE, PUT';
        add_header 'Access-Control-Allow-Credentials' 'true';
        add_header 'Access-Control-Allow-Headers' 'User-Agent,Keep-Alive,Content-Type';
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass https://<url>:443;
        proxy_read_timeout 90;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_buffers 8 32k;
        proxy_buffer_size 64k;
    }
}

server {
    listen 443;
    server_name abc.com;
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log warn;
    ssl on;
    ssl_certificate /etc/nginx/ssl/ssl.crt;
    ssl_certificate_key /etc/nginx/ssl/ssl.key;

    location / {
        add_header 'Access-Control-Allow-Origin' "$http_origin";
        add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS, DELETE, PUT';
        add_header 'Access-Control-Allow-Headers' 'DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range,Keep-Alive';
        add_header 'Access-Control-Expose-Headers' 'Content-Length,Content-Range';
        add_header 'Access-Control-Allow-Credentials' 'true';
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_next_upstream error;
        proxy_pass http://pythonserver;
        add_header X-Upstream $upstream_addr;
        add_header Host $http_host;
        proxy_read_timeout 90;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_buffers 8 32k;
        proxy_buffer_size 64k;
    }
}
Update <url> to your server name in the two places it appears.
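Once nginx forwards the real client address in X-Real-IP / X-Forwarded-For, the Python handler can read it from the handshake headers and drop unwanted clients. Below is a minimal sketch against the legacy handler API used in Server.py; the allow-list and the close code are illustrative assumptions, not part of the original setup:

ALLOWED_IPS = {'203.0.113.10', '203.0.113.11'}  # hypothetical allow-list

async def counter(websocket, path):
    # remote_address is always the proxy (localhost); the real client IP
    # arrives in the headers that nginx sets with proxy_set_header.
    headers = websocket.request_headers
    client_ip = headers.get('X-Real-IP') or headers.get('X-Forwarded-For', '')
    if client_ip not in ALLOWED_IPS:
        await websocket.close(code=1008, reason='forbidden')  # 1008 = policy violation
        return
    await register(websocket)
    try:
        async for message in websocket:
            print(message)
    finally:
        await unregister(websocket)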

Related

how to connect nginx proxy_pass to aiohttp webserver in python (Error)

I'm trying to connect my nginx proxy_pass to my aiohttp web server, but I keep getting errors.
Here is my Nginx config:
server {
    server_name www.example.com;

    location /nextpay {
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_redirect off;
        proxy_buffering off;
        proxy_pass https://127.0.0.1:5001;
    }
}
And here is my code:
from aiohttp import web
from aiohttp.web_request import Request

WEB_SERVER_HOST = "127.0.0.1"
WEB_SERVER_PORT = 5001

Router = web.RouteTableDef()

@Router.get('/nextpay')
async def verify(request: Request):
    print(type(request))
    return web.Response(text="Hello, world")

def main():
    app = web.Application()
    app.add_routes(Router)
    web.run_app(app, host=WEB_SERVER_HOST, port=WEB_SERVER_PORT)

if __name__ == "__main__":
    main()
And this is the error I keep getting every time I request /nextpay:
aiohttp.http_exceptions.BadStatusLine: 400, message="Bad status line 'invalid HTTP method'"
The problem was that I used https instead of http: nginx terminates TLS itself, and the aiohttp server only speaks plain HTTP on port 5001, so proxying with https is what produced the "invalid HTTP method" error:
server {
    server_name www.example.com;

    location /nextpay {
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_redirect off;
        proxy_buffering off;
        proxy_pass http://127.0.0.1:5001;
    }
}

django SECURE_SSL_REDIRECT with nginx reverse proxy

Is it secure to set SECURE_SSL_REDIRECT = False if I have nginx set up as a reverse proxy serving the site over https?
I can access the site over SSL if this is set to False, whereas if it is True, I get a too-many-redirects response.
NOTE: nginx and django run inside docker containers.
My nginx.conf looks like:
upstream config {
    server web:8000;
}

server {
    listen 80;
    server_name _;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name _;
    ssl_certificate /etc/ssl/certs/cert.com.chained.crt;
    ssl_certificate_key /etc/ssl/certs/cert.com.key;

    location / {
        proxy_pass http://config;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }

    location /staticfiles/ {
        alias /home/app/web/staticfiles/;
    }
}
EDIT: Added http to https redirect in nginx.conf.
You need to add the following to your "location /" block:
location / {
    ...
    proxy_set_header X-Forwarded-Proto $scheme;
    ...
}
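Django also has to be told to trust that header before SECURE_SSL_REDIRECT = True stops looping. A minimal settings sketch, assuming an otherwise standard settings.py; this part is not in the original answer:

# settings.py
# Trust the X-Forwarded-Proto header set by nginx so Django treats the
# request as already secure instead of redirecting it again.
SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')
SECURE_SSL_REDIRECT = True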

websockets proxied by nginx to gunicorn over https giving 400 (bad request)

I am having trouble establishing a websocket in my Flask web application.
On the client side, I am emitting a "ping" websocket event every second to the server. In the browser console, I see the following error each second
POST https://example.com/socket.io/?EIO=3&transport=polling&t=LOkVYzQ&sid=88b5202cf38f40879ddfc6ce36322233 400 (BAD REQUEST)
GET https://example.com/socket.io/?EIO=3&transport=polling&t=LOkVZLN&sid=5a355bbccb6f4f05bd46379066876955 400 (BAD REQUEST)
WebSocket connection to 'wss://example.com/socket.io/?EIO=3&transport=websocket&sid=5a355bbccb6f4f05bd46379066876955' failed: WebSocket is closed before the connection is established.
I have the following nginx.conf
server {
    listen 80;
    server_name example.com www.example.com;
    return 301 https://$server_name$request_uri;
}

upstream app_server {
    # for UNIX domain socket setups
    server unix:/pathtowebapp/gunicorn.sock fail_timeout=0;
}

server {
    listen 443 ssl;
    server_name example.com www.example.com;
    keepalive_timeout 5;

    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';

    charset utf-8;
    client_max_body_size 30M;

    location / {
        try_files $uri @proxy_to_app;
    }

    location /socket.io {
        proxy_pass http://app_server;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Upgrade websocket;
        proxy_set_header Connection "upgrade";
        proxy_read_timeout 86400;
        proxy_buffering off;
        proxy_headers_hash_max_size 1024;
    }

    location /static {
        alias /pathtowebapp/webapp/static;
    }

    location @proxy_to_app {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # enable this if and only if you use HTTPS
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $http_host;
        # we don't want nginx trying to do something clever with
        # redirects, we set the Host: header above already.
        proxy_redirect off;
        #proxy_buffering off;
        proxy_pass http://app_server;
    }
}
I have been looking all over for examples of a websocket working with https using nginx in front of gunicorn.
My webpage loads, although the websocket connection is not successful.
The client side websocket is established using the following javascript:
var socket = io.connect('https://' + document.domain + ':' + location.port + namespace);
Here is my gunicorn.conf
import multiprocessing
bind = 'unix:/pathtowebapp/gunicorn.sock'
workers = multiprocessing.cpu_count() * 2 + 1
worker_class = 'eventlet'
[EDIT] If I configure nginx the way the Flask-SocketIO documentation shows and just run (env)$ python deploy_app.py, it works. But I was under the impression that this is not as production-ready as the setup described above.
The problem is that you are running multiple workers on gunicorn. This is not a configuration that is currently supported, due to the very limited load balancer in gunicorn that does not support sticky sessions. Documentation reference: https://flask-socketio.readthedocs.io/en/latest/#gunicorn-web-server.
Instead, run several gunicorn instances, each with one worker, and then set up nginx to do the load balancing, using the ip_hash method so that sessions are sticky.
Also, in case you are not aware, if you run multiple servers you need to also run a message queue, so that the processes can coordinate. This is also covered in the documentation link above.
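A minimal sketch of that layout on the Python side, assuming Redis as the message queue and eventlet workers; the module name, ports, and Redis URL are placeholders, not taken from the question:

# app.py (placeholder module name)
from flask import Flask
from flask_socketio import SocketIO

app = Flask(__name__)
# The message queue is what lets the separate single-worker gunicorn
# instances coordinate broadcasts between themselves.
socketio = SocketIO(app, message_queue='redis://localhost:6379/0')

# Start several one-worker instances on different ports, e.g.:
#   gunicorn --worker-class eventlet -w 1 --bind 127.0.0.1:5000 app:app
#   gunicorn --worker-class eventlet -w 1 --bind 127.0.0.1:5001 app:app
# then point an nginx upstream using ip_hash at those ports so each
# client always lands on the same instance (sticky sessions).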

How to setup Nginx to proxy to two service on one machine?

I have written two microservices, one in Python and one in Ruby. The Python one serves some of the API requests and the Ruby one serves the others.
The Python one listens on port 80 and handles /users and /feeds requests.
The Ruby one listens on port 4567 and handles /orders and /products requests.
The following is my config file, but it does not work with nginx.
upstream midgard_api_cluster
{
    server unix:/tmp/midgard_api.sock;
}

upstream tradeapi {
    server 127.0.0.1:4567;
}

server {
    listen 80;
    server_name my.domain.name;
    client_max_body_size 20M;

    set $x_remote_addr $http_x_real_ip;
    if ($x_remote_addr = "") {
        set $x_remote_addr $remote_addr;
    }

    access_log /var/log/nginx/midgard/access_log;
    error_log /var/log/nginx/midgard/error_log;
    charset utf-8;

    location /static/ {
        root /opt/www/templates/;
        expires 30d;
    }

    location / {
        error_page 502 503 504 /500.html;
        uwsgi_pass midgard_api_cluster;
        include uwsgi_params;
        # proxy_redirect default;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $x_remote_addr;
        proxy_set_header Host $http_host;
        proxy_set_header Range $http_range;
        proxy_connect_timeout 10;
        proxy_send_timeout 10;
        proxy_read_timeout 11;
    }

    location /products {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://tradeapi;
        proxy_redirect off;
    }

    location /orders {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://tradeapi;
        proxy_redirect off;
    }
}
Now, when I use
curl http://my.domain.name/products
I get a 404 error and the request is routed to the Python service, whereas
curl http://my.domain.name:3000/products
gets the right response.
How can I set up the nginx configuration to route these requests to the Ruby service?
Locations are processed in order: the / location matches before /products, so the latter is never reached. Put the / location at the end of the config file.

Nginx+Tornado static files aren't being handled by nginx, why?

I'm trying to set up Tornado server behind nginx proxy, here're the relevant bits of the configuration:
server {
    listen 80;
    server_name localhost;

    location html/ {
        root /srv/www/intj.com/html;
        index login.html;
        if ($query_string) {
            expires max;
        }
    }

    location = /favicon.ico {
        rewrite (.*) /html/favicon.ico;
    }

    location = /robots.txt {
        rewrite (.*) /html/robots.txt;
    }

    location / {
        proxy_pass_header Server;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Scheme $scheme;
        proxy_pass http://localhost:8888;
    }
}
I can get to my Python server through nginx, but when I request a static page such as login.html, which is located at /srv/www/intj.com/html/login.html, instead of serving the static file nginx forwards the request to Tornado, which doesn't know what to make of it.
What did I do wrong?
Well, it actually had to be ^~ /html/, but I don't really know what that means or what the difference is, so it would be cool if someone could enlighten me.
Try this and tell me how it goes.
server {
    listen 80;
    server_name localhost;

    location / {
        if ($query_string) {
            root /srv/www/intj.com/html;
            index index.html;
            try_files $uri $uri/;
        }
        proxy_pass_header Server;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Scheme $scheme;
        proxy_pass http://localhost:8888;
    }
}
