django-allauth: how to modify email confirmation url?

I'm running django on port 8001, while nginx is handling webserver duties on port 80. nginx proxies views and some REST api calls to Django. I'm using django-allauth for user registration/authentication.
When a new user registers, django-allauth sends the user an email with a link to click. Because django is running on port 8001, the link looks like http://machine-hostname:8001/accounts/confirm-email/xxxxxxxxxxxxxx
How can I make the url look like http://www.example.com/accounts/confirm-email/xxxxxxxx ?
Thanks!

Django gets the hostname and port from the HTTP headers.
Add proxy_set_header Host $http_host; to your nginx configuration, just before the proxy_pass directive.
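For reference, a minimal sketch of the relevant nginx server block (assuming Django/gunicorn is listening on 127.0.0.1:8001, as in the question; adjust server_name and the upstream to your setup):
server {
    listen 80;
    server_name www.example.com;

    location / {
        # Forward the browser's Host header so Django (and django-allauth)
        # build absolute URLs with www.example.com rather than host:8001.
        proxy_set_header Host $http_host;
        proxy_pass http://127.0.0.1:8001;
    }
}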

I had the same problem, and also found that ferrangb's solution had no effect on outgoing allauth emails. mr_tron's answer got me halfway, but I had to do a little bit more:
1) In the nginx configuration, put
proxy_set_header Host $http_host;
before the proxy_pass directive.
2) In settings.py, add the domain name to ALLOWED_HOSTS. I also added the www version of my domain name, since I get traffic to both addresses.
ALLOWED_HOSTS = ['127.0.0.1', 'example.com', 'www.example.com']
And, of course, restart nginx and gunicorn (or whatever is serving your Django app). With the first step but not the second, every hit to the site was an instant 400 error (unless DEBUG = True in settings.py).

Related

How do I set a wildcard for CSRF_TRUSTED_ORIGINS in Django?

After updating from Django 2 to Django 4.0.1 I am getting CSRF errors on all POST requests. The logs show:
"WARNING:django.security.csrf:Forbidden (Origin checking failed - https://127.0.0.1 does not match any trusted origins.): /activate/"
I can't figure out how to set a wildcard for CSRF_TRUSTED_ORIGINS. The server is shipped to customers who host it on their own domains, so there is no way for me to know the origin beforehand. I have tried the following with no luck:
CSRF_TRUSTED_ORIGINS = ["https://*", "http://*"]
and
CSRF_TRUSTED_ORIGINS = ["*"]
Explicitly setting "https://127.0.0.1" in CSRF_TRUSTED_ORIGINS works, but that won't work in my customers' production deployments, which will have different hostnames.
The Django app runs under Gunicorn behind NGINX. Because SSL is terminated at NGINX, request.is_secure() returns False, which results in the Origin header not matching the host here:
https://github.com/django/django/blob/3ff7f6cf07a722635d690785c31ac89484134bee/django/middleware/csrf.py#L276
I resolved the issue by adding the following in Django:
SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')
And I ensured that NGINX forwards the original scheme with the following in my NGINX conf:
proxy_set_header X-Forwarded-Proto $scheme;
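For reference, a sketch of the relevant NGINX block (assuming SSL terminates at NGINX and Gunicorn listens on 127.0.0.1:8000; certificate directives omitted, names are placeholders):
server {
    listen 443 ssl;
    server_name example.com;
    # ssl_certificate / ssl_certificate_key directives omitted

    location / {
        # Preserve the original host and scheme so Django's CSRF origin
        # check compares against https://example.com, not the upstream.
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://127.0.0.1:8000;
    }
}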
Yes, this changed in Django 4.0, as you can see here:
Changed in Django 4.0:
The values in older versions must only include the hostname (possibly with a leading dot) and not the scheme or an asterisk.
Also, Origin header checking isn’t performed in older versions.
Note: you are not supposed to use * in production.
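Since the origin differs per customer deployment, a common workaround is to read the value from the environment rather than hard-coding it. A sketch (the variable name DJANGO_CSRF_TRUSTED_ORIGINS is made up for this example; each deployment would set it to e.g. "https://customer.example.com"):
# settings.py
import os

CSRF_TRUSTED_ORIGINS = [
    origin.strip()
    for origin in os.environ.get("DJANGO_CSRF_TRUSTED_ORIGINS", "").split(",")
    if origin.strip()
]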

How to debug "You may need to add 'maginate.net' to ALLOWED_HOSTS"

I know this question is very similar to others, but I read all of them and still have not found a solution.
I registered maginate.net with Google Domains, so the domain is active. When I visit that domain, it raises a DisallowedHost exception, which says to put the domain name in ALLOWED_HOSTS; I did that, in local_settings.py. When I use the IP address 206.189.179.58 instead, the website runs perfectly. My ALLOWED_HOSTS is a list:
ALLOWED_HOSTS = ['206.189.179.58', 'maginate.net', 'www.maginate.net']
And yes, I have restarted the server many times. I don't know if my settings.py has anything to do with this, but I get the error whether ALLOWED_HOSTS is left blank or not. I'm also following this tutorial and doing exactly what it says.
You have updated settings.py only on your local system and have not uploaded the updated settings to production.
Going by your URL and the error it shows, your deployed ALLOWED_HOSTS only contains '206.189.179.58' and 'maginate.net'; 'www.maginate.net' is not added to ALLOWED_HOSTS.
Just try changing it and uploading.
Update after seeing the code
You have put settings.py and local_settings.py inside the portfolio directory, but they should be inside portfolio/portfolio.
Then it will work fine.
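For reference, a sketch of the layout the answer is describing, assuming the project was created with django-admin startproject portfolio:
portfolio/              # project root, contains manage.py
    manage.py
    portfolio/          # settings package -- settings files belong here
        __init__.py
        settings.py
        local_settings.py
        urls.py
        wsgi.py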
Taken from this chapter of Test-Driven Development with Python, it looks like you could be having problems with your nginx configuration:
Fixing ALLOWED_HOSTS with Nginx: passing on the Host header
The problem turns out to be that, by default, Nginx strips out the
Host headers from requests it forwards, and it makes it "look like"
they came from localhost after all. We can tell it to forward on the
original host header by adding the proxy_set_header directive:
server: /etc/nginx/sites-available/superlists-staging.ottg.eu
server {
    listen 80;
    server_name superlists-staging.ottg.eu;

    location /static {
        alias /home/elspeth/sites/superlists-staging.ottg.eu/static;
    }

    location / {
        proxy_pass http://unix:/tmp/superlists-staging.ottg.eu.socket;
        proxy_set_header Host $host;
    }
}

Flask url handling for port

I have Kibana (part of the Elasticsearch stack) running on xx.xxx.xxx.xxx:5601. Since Kibana does not have authentication of its own, I am trying to wrap it under my Flask login setup. In other words, if someone tries to visit xx.xxx.xxx.xxx:5601, I need the page to be redirected to my Flask login page. I can use the @login_required decorator on the URL to achieve this... but I don't know how to set up the Flask route URL to handle port 5601, since it needs to begin with a leading slash.
@app.route("/")
@login_required
Any suggestions?
EDIT
@senaps: App 1 is Flask, running on 0.0.0.0, port 9500. App 2 is the node.js-based Kibana, which I can either run on localhost port 5601 and expose via nginx, or make public directly on IP:5601. Either way, it runs as a service on startup and listens on 5601 at all times.
Problem statement: App 2 should be wrapped under App 1's login. I do not want to use nginx to authenticate App 2, but rather App 1's Flask login setup.
I'm currently using gunicorn to serve the Flask app and have an nginx reverse proxy set up to route to it. The guide I followed is from DigitalOcean.
Option 1 - Node.js Kibana application exposed to public on IP:5601.
server {
    listen 80;
    server_name example.com;

    location / {
        include proxy_params;
        proxy_pass http://unix:/home/ubuntu/myproject/myproject.sock;
    }
}
If I visit the IP, it goes to my Flask app, great. What I'm unable to figure out is how to handle a Flask view URL for someone visiting IP:5601: instead of taking them to Kibana, it should redirect them to my Flask app for authentication.
I tried adding another server block listening on 5601 with proxy_pass to the Flask socket file, but nginx errors out saying it cannot bind to 5601 and asks me to kill the process already listening on 5601. But I need Kibana running on 5601 at all times (unless I can figure out a way to launch that service via Python/Flask).
Option 2 - Kibana application runs on localhost port 5601 mounted at "/kibana" in order to not conflict with "/" needed for flask. Then it is exposed via nginx reverse proxy.
server {
    listen 80;
    server_name example.com;

    location / {
        include proxy_params;
        proxy_pass http://unix:/home/ubuntu/myproject/myproject.sock;
    }

    location /kibana/ {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        rewrite /kibana/(.*)$ /$1 break;
    }
}
With this setup, one can access Kibana by going to IP/kibana. The problem with Option 2 is that even if I add a /kibana view in my Flask app to catch it, it never takes effect: the redirection to Kibana happens in nginx, so Flask never gets involved.
I couldn't find much info on Stack Overflow etc., since most solutions deal with using nginx to authenticate Kibana, not another Python application.
Given this, how would I incorporate your solution? Many thanks in advance for looking into this.
So you have two separate apps, right?
You want the second app to work only if the user is authenticated with the first app.
The simplest way would be to use the same database; that way, flask-login would check the user's authentication against the same db. With that said, you may not be able to handle sessions perfectly.
The trick is in uwsgi and nginx. You should use uwsgi's Emperor mode so both apps are deployed.
@app.route("/")
@login_required
def function():
    ...
Now, the question might be: how do we reach the second app's / route if the first app has that route too? This will not be a problem, since the URL is different, but you need nginx configured to relay requests for xx.x.x.x to the first app and x.x.x.x:y to the second app.
server {
    listen 80;
    server_name example.org www.example.org;
    root /var/www/port80/;
}

server {
    listen 5601;
    server_name example.org www.example.org;
    root /var/www/port81/;
}
Since you asked for suggestions on how to do it, I haven't included code, so you can work it out based on your setup. Or tell us how you set up and serve the two apps, and we can provide more code.
One approach is to proxy all traffic to the Kibana server through the Flask application. You can use a catch-all route to handle forwarding of the different paths. You would disallow access to Kibana from sources other than from the Flask application.
import requests  # may require `pip install requests`
from flask import Response, stream_with_context

kibana_server_baseurl = 'https://xxx.xxx.xxx.xxx:5601/'

@app.route('/', defaults={'path': ''})
@app.route('/<path:path>')
@login_required
def proxy_kibana_requests(path):
    # ref http://flask.pocoo.org/snippets/118/
    # Stream the upstream Kibana response back to the client.
    url = kibana_server_baseurl + path
    req = requests.get(url, stream=True)
    return Response(stream_with_context(req.iter_content()),
                    content_type=req.headers['content-type'])
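To actually block direct access to port 5601, one approach (a sketch, not from the original answer) is to bind Kibana to the loopback interface in kibana.yml, so only processes on the same host, such as the Flask app, can reach it; kibana_server_baseurl would then point at http://localhost:5601/:
# kibana.yml -- bind Kibana to localhost only (sketch)
server.host: "localhost"
server.port: 5601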
Another option is to use Nginx as a reverse proxy and let Nginx handle authentication. The simplest approach, if it meets your needs, is basic auth: https://www.nginx.com/resources/admin-guide/restricting-access-auth-basic/.
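A minimal sketch of what that could look like for the Kibana location (assuming an htpasswd file at /etc/nginx/.htpasswd created with the htpasswd tool):
location / {
    auth_basic "Restricted";
    auth_basic_user_file /etc/nginx/.htpasswd;
    proxy_pass http://localhost:5601;
}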
Alternatively you could check for a custom header in the Nginx config on access to the Kibana application and redirect to the Flask application if it were missing.
Another option is to use an existing Kibana authentication proxy. Elastic X-Pack is a popular commercial option; an open-source alternative is https://github.com/fangli/kibana-authentication-proxy. I have not personally used either.

Host Not Allowed 192...224 even though it is already in my django project

I have a Django project that I have pushed to Docker and then to a DigitalOcean server for live testing in a working environment. In the settings file, I have added my DigitalOcean server's IP address to ALLOWED_HOSTS, but I am getting the following error:
DisallowedHost at /
Invalid HTTP_HOST header: '192...244:8000'. You may need to add '192...244' to ALLOWED_HOSTS.
Here is the code I have
ALLOWED_HOSTS = ['192...244', 'localhost', '127.0.0.1']
(I didn't paste the full IP here, but I do have the full IP in my files.)
Try this:
ALLOWED_HOSTS = ['*']

SAML2 Service Provider on non standard port behind a reverse proxy

I have a SAML2 service provider (the Open edX Platform, if it makes a difference), configured according to the docs and otherwise working normally. It runs at http://lms.local:8000 and works just fine with the TestShib test Identity Provider and other third-party providers.
Problems begin when nginx reverse proxy is introduced. The setup is as follows:
nginx, obviously, runs on port 80
LMS (the service provider) runs on port 8000
lms.local is aliased to localhost via hosts file
Nginx has the following site config:
server {
    listen 80;
    server_name lms.local;

    location / {
        proxy_pass http://localhost:8000;
        proxy_set_header Host $host;

        if ($request_method = 'OPTIONS') {
            return 204;
        }
    }
}
The problem is the following: python-social-auth detects that the server runs on lms.local:8000 (via request.META['HTTP_PORT']). So, if an attempt was made to use SAML SSO via the nginx proxy, it fails with the following message:
Authentication failed: SAML login failed: ['invalid_response'] (The response was received at http://lms.local:8000/auth/complete/tpa-saml/ instead of http://lms.local/auth/complete/tpa-saml/)
If that helps, an exception that causes this message is thrown in python-saml.OneLogin_Saml2_Response.is_valid.
The question is: is it possible to run an SP behind a reverse proxy on the same domain, but on a different port? The Shibboleth wiki says it is totally possible to run an SP behind a reverse proxy on a different domain, but says nothing about ports.
In this particular case the reverse proxy was sending X-Forwarded-Host and X-Forwarded-Port headers, so I just modified the Django strategy to use those values instead of what Django provides (i.e. request.get_host() and request.META['SERVER_PORT']), which yielded two pull requests:
https://github.com/edx/edx-platform/pull/9848
https://github.com/omab/python-social-auth/pull/741
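For illustration, a minimal sketch of the idea (assuming the request_host()/request_port() hooks added by the linked pull requests and a custom strategy registered via SOCIAL_AUTH_STRATEGY; import paths and hook names may differ in your python-social-auth version):
# proxied_strategy.py -- prefer X-Forwarded-* headers when the proxy sets them
from social.strategies.django_strategy import DjangoStrategy


class ProxiedDjangoStrategy(DjangoStrategy):
    def request_host(self):
        return (self.request.META.get('HTTP_X_FORWARDED_HOST')
                or super(ProxiedDjangoStrategy, self).request_host())

    def request_port(self):
        return (self.request.META.get('HTTP_X_FORWARDED_PORT')
                or super(ProxiedDjangoStrategy, self).request_port())

# settings.py (hypothetical module path)
# SOCIAL_AUTH_STRATEGY = 'path.to.proxied_strategy.ProxiedDjangoStrategy'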
