How to dynamically load HTTP routing into NGINX from your web framework? - python

I've been following the Web Frameworks Benchmark and have noticed that a number of web frameworks suffer from the same performance penalty: they do HTTP routing within the framework itself rather than leveraging NGINX's highly performant HTTP server to do the routing.
For example, in the Flask Python framework, you might have:
@app.route('/add', methods=['POST'])
def add_entry():
    ...
This makes your application much easier to follow than defining the routing directly within the NGINX config file, like so:
server {
    listen 80;
    server_name example.com;

    location /add {
        ... # defer to Flask (python) app
    }
}
Question: How can you gain the performance of NGINX's built-in HTTP routing (using NGINX's own config file to define the routing) while keeping the ease of application development that comes from defining the HTTP routing within your web framework?
Is there a way to dynamically load the HTTP routing into NGINX from INSERT_NAME_OF_YOUR_WEBFRAMEWORK?

I don't know of a ready-to-use library, but it seems fairly easy to write a script that generates an Nginx config file from the application's routes (for example, during application setup). This file can then be pulled into the server's main configuration with Nginx's include directive:
server {
    listen 80;
    server_name example.com;
    include /path/to/application/routes.conf;
}
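To make the idea concrete, here is a minimal sketch of such a generator (the helper name, module layout and upstream address are assumptions, not an existing library). It walks a Flask app's url_map and writes one location block per route, each proxying to the backend that actually runs the application:

# generate_routes.py -- hypothetical helper, not an existing library
from app import app  # assumption: your Flask instance lives in app.py

UPSTREAM = "http://127.0.0.1:8000"  # assumed gunicorn/uwsgi address


def generate_nginx_routes(flask_app, upstream, out_path="routes.conf"):
    lines = []
    for rule in flask_app.url_map.iter_rules():
        # Converters like /<int:id> cannot be matched literally by nginx,
        # so fall back to a prefix match on the static part of the rule.
        path = rule.rule.split("<", 1)[0] or "/"
        lines.append(f"location {path} {{")
        lines.append(f"    proxy_pass {upstream};")
        lines.append("}")
    with open(out_path, "w") as fh:
        fh.write("\n".join(lines) + "\n")


if __name__ == "__main__":
    generate_nginx_routes(app, UPSTREAM)

Regenerating the file on each deploy and reloading Nginx (nginx -s reload) keeps the framework's route table and the Nginx config in sync.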

Related

Flask url handling for port

I have Kibana (part of the Elasticsearch stack) running on xx.xxx.xxx.xxx:5601. Since Kibana does not have authentication of its own, I am trying to wrap it under my flask login setup. In other words, if someone tries to visit xx.xxx.xxx.xxx:5601, I need the page to be redirected to my flask login page. I can use the @login_required decorator on the URL to achieve this... but I don't know how to set up the flask route URL to handle port 5601, since a route needs to begin with a leading slash.
@app.route("/")
@login_required
Any suggestions?
EDIT
@senaps: App 1 is Flask, running on 0.0.0.0, port 9500. App 2 is the Node.js-based Kibana, which I can choose to either run on localhost port 5601 and then expose via nginx, or make public directly on IP:5601. Either way, it runs as a "service" on startup and listens on 5601 at all times.
Problem statement: App 2 should be wrapped under App 1's login. I do not want to use nginx for authentication of App 2, but rather the App 1 flask login setup.
I'm currently using gunicorn to serve the flask app and have an nginx reverse proxy set up to route to it. The guide I followed is DigitalOcean's.
Option 1 - Node.js Kibana application exposed to the public on IP:5601.
server {
    listen 80;
    server_name example.com;

    location / {
        include proxy_params;
        proxy_pass http://unix:/home/ubuntu/myproject/myproject.sock;
    }
}
If I visit the IP, it goes to my flask app, great. What I'm unable to figure out is how to handle a flask view URL when someone visits IP:5601. Instead of taking them to Kibana, it should redirect them to my flask app for authentication.
I tried adding another server block listening on 5601 with a proxy_pass to the flask sock file, but I get an nginx error saying it cannot bind to 5601 and asking me to kill the listener on 5601. But I need Kibana running on 5601 at all times (unless I can figure out a way to launch that service via python flask).
Option 2 - Kibana application runs on localhost port 5601, mounted at "/kibana" so as not to conflict with "/", which is needed for flask. It is then exposed via the nginx reverse proxy.
server {
    listen 80;
    server_name example.com;

    location / {
        include proxy_params;
        proxy_pass http://unix:/home/ubuntu/myproject/myproject.sock;
    }

    location /kibana/ {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        rewrite /kibana/(.*)$ /$1 break;
    }
}
With this setup, one can access Kibana by going to IP/kibana. The problem with Option 2 is that even if I add a /kibana view in my flask app to catch it, it never takes effect, because the redirection to Kibana happens in nginx, so flask never gets involved.
I couldn't find much info on Stack Overflow etc., since most solutions deal with using nginx to authenticate Kibana rather than wrapping it under another python application.
Given this, how would I incorporate your solution? Many thanks in advance for looking into this.
So you have 2 separate apps, right?
You want the second app to only work if the user is authenticated with the first app.
The simplest way would be to use the same db. That way, flask-login would check the user's authentication against the same db. With that said, you may not be able to handle sessions perfectly.
The trick is in uwsgi and nginx: you should use the Emperor mode of uwsgi so that both apps are deployed.
@app.route("/")
@login_required
def function():
    ...
Now, the question might be how we would get the second app's / route if the first app has that route too. This will not be a problem, since the URL is different, but you need nginx configured to relay requests for xx.x.x.x to the first app and x.x.x.x:y to the second app.
server {
    listen 80;
    server_name example.org www.example.org;
    root /var/www/port80/;
}

server {
    listen 5601;
    server_name example.org www.example.org;
    root /var/www/port81/;
}
Since you asked for suggestions on how to do it, I haven't included code, so you can figure it out based on your setup. Or tell us how you set up and serve the two apps, and we can provide more code.
One approach is to proxy all traffic to the Kibana server through the Flask application. You can use a catch-all route to handle forwarding of the different paths. You would disallow access to Kibana from sources other than from the Flask application.
import requests  # may require `pip install requests`
from flask import Response, stream_with_context

kibana_server_baseurl = 'https://xxx.xxx.xxx.xxx:5601/'

@app.route('/', defaults={'path': ''})
@app.route('/<path:path>')
@login_required
def proxy_kibana_requests(path):
    # ref http://flask.pocoo.org/snippets/118/
    url = kibana_server_baseurl + path
    req = requests.get(url, stream=True)
    return Response(stream_with_context(req.iter_content()),
                    content_type=req.headers['content-type'])
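If Kibana also issues POST or PUT requests from the browser (saving dashboards, for example), the same catch-all idea can be extended to forward the method, query string and body as well. This is only a sketch of that extension, reusing the kibana_server_baseurl above, and has not been tested against Kibana:

import requests
from flask import request, Response, stream_with_context

@app.route('/', defaults={'path': ''}, methods=['GET', 'POST', 'PUT', 'DELETE'])
@app.route('/<path:path>', methods=['GET', 'POST', 'PUT', 'DELETE'])
@login_required
def proxy_kibana_requests(path):
    url = kibana_server_baseurl + path
    # Forward the method, query string and raw body on to Kibana.
    resp = requests.request(
        request.method,
        url,
        params=request.args,
        data=request.get_data(),
        headers={'Content-Type': request.headers.get('Content-Type', '')},
        stream=True,
    )
    return Response(stream_with_context(resp.iter_content()),
                    status=resp.status_code,
                    content_type=resp.headers.get('content-type'))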
Another option is to use Nginx as a reverse proxy and let Nginx handle authentication. The simplest approach, if it meets your needs, is basic auth: https://www.nginx.com/resources/admin-guide/restricting-access-auth-basic/.
Alternatively, you could check for a custom header in the Nginx config on access to the Kibana application and redirect to the Flask application if it is missing.
Another option is to use an existing Kibana authentication proxy. A popular commercial option is Elastic X-Pack. An OSS option is https://github.com/fangli/kibana-authentication-proxy. I have not personally used either.

Run 2 wormholes on the same Raspberry Pi with Dataplicity

So I have a simple web page served by Nginx that makes REST API calls to a Python Flask app.
I'd like to put them through 2 wormholes on Dataplicity: one for the web page and the other for the backend app.
At the moment I can only do one or the other. Is there a way to make it work?
Thanks!
Yep, put nginx in front and park the apps under different locations.
Let's say your app is listening on port 8080 and the other app on port 8081.
Then your nginx config might look like this:
server {
    listen ...;
    ...

    location /app1/ {
        proxy_pass http://127.0.0.1:8080;
    }

    location /app2/ {
        proxy_pass http://127.0.0.1:8081;
    }
    ...
}
Which means your apps will be accessible locally as:
http://localhost/app1/
http://localhost/app2/
Through the Dataplicity wormhole this will appear as:
https://123123123.dataplicity.io/app1/
https://123123123.dataplicity.io/app2/
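One detail to watch: with proxy_pass as written above, the backends receive the full /app1/... and /app2/... paths, so each app has to answer under its prefix (or you strip the prefix in nginx with a rewrite). For whichever of the two is the Flask backend, a hedged sketch using Werkzeug's DispatcherMiddleware (recent Werkzeug assumed; the module and app names are made up) mounts the app under its prefix:

# wsgi.py -- hypothetical wrapper; adjust the import to your project layout
from werkzeug.middleware.dispatcher import DispatcherMiddleware
from werkzeug.wrappers import Response

from myproject import app as flask_app  # assumed Flask instance

# Serve the Flask app under /app2 so proxied URLs match nginx's location block;
# anything outside the prefix gets a plain 404.
application = DispatcherMiddleware(
    Response('Not Found', status=404),
    {'/app2': flask_app},
)

Pointing gunicorn (or uwsgi) at wsgi:application then makes the app answer at /app2/... both locally and through the wormhole.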
Hope that helps :)
M.

Error in connecting domain name to nginx

I'm trying to connect my droplets on Digital Ocean to a domain name (example.com).
I'm currently using uwsgi and nginx, and the web app is written in python (flask, MySQL).
I have configured my project .conf as such:
server {
    listen 80;
    server_name ip-address example.com www.example.com;

    location / {
        include uwsgi_params;
        uwsgi_pass unix:///home/user/example/example.sock;
    }
}
I have added the following hosts entry:
127.0.0.1 perhatian.com www.perhatian.com
The site is currently not reachable; however, when I access the IP it works.
Any help?
It looks like you do not have proper DNS set up for that domain name (perhatian.com), as seen from https://intodns.com/perhatian.com.
If you are trying to load perhatian.com in a browser from anywhere, you'll need to set up A records pointing to the IP of the server on which you are able to load the website.
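As a quick sanity check (an illustrative snippet, not part of the original answer), you can confirm from any machine whether the A record already resolves to the droplet:

import socket

domain = "perhatian.com"
expected_ip = "203.0.113.10"  # placeholder: use your droplet's public IP

# gethostbyname performs a normal lookup through the system resolver.
resolved = socket.gethostbyname(domain)
print(f"{domain} resolves to {resolved}")
print("A record OK" if resolved == expected_ip else "A record does not point at the droplet yet")

Note that the lookup also honours the local hosts file, so the check is more meaningful from a machine without the 127.0.0.1 entry above.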

SAML2 Service Provider on non standard port behind a reverse proxy

I have a SAML2 service provider (Open edX Platform if it makes a difference), configured according to docs and otherwise working normally. It runs at http://lms.local:8000 and works just fine with TestShib test Identity Provider and other 3rd party providers.
Problems begin when an nginx reverse proxy is introduced. The setup is as follows:
nginx, obviously, runs on port 80
LMS (the service provider) runs on port 8000
lms.local is aliased to localhost via hosts file
Nginx has the following site config:
server {
    listen 80;
    server_name lms.local;

    location / {
        proxy_pass http://localhost:8000;
        proxy_set_header Host $host;

        if ($request_method = 'OPTIONS') {
            return 204;
        }
    }
}
The problem is the following: python-social-auth detects that the server runs on lms.local:8000 (via request.META['HTTP_PORT']). So, if an attempt is made to use SAML SSO via the nginx proxy, it fails with the following message:
Authentication failed: SAML login failed: ['invalid_response'] (The response was received at http://lms.local:8000/auth/complete/tpa-saml/ instead of http://lms.local/auth/complete/tpa-saml/)
If that helps, an exception that causes this message is thrown in python-saml.OneLogin_Saml2_Response.is_valid.
The question is: is it possible to run the SP behind a reverse proxy on the same domain but on a different port? The Shibboleth wiki says it is entirely possible to run an SP behind a reverse proxy on a different domain, but it says nothing about ports.
In this particular case the reverse proxy was sending X-Forwarded-Host and X-Forwarded-Port headers, so I just modified the django strategy to use those values instead of what Django provides (i.e. request.get_host and request.META['SERVER_PORT']), which yielded two pull requests:
https://github.com/edx/edx-platform/pull/9848
https://github.com/omab/python-social-auth/pull/741
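For reference, the change boils down to a strategy override along these lines. This is only a simplified sketch of the approach taken in the pull requests above, assuming python-social-auth's DjangoStrategy exposes request_host()/request_port() hooks; the class name and module path are made up for illustration:

# myproject/strategy.py -- hypothetical module; enable it with
# SOCIAL_AUTH_STRATEGY = 'myproject.strategy.ProxyAwareDjangoStrategy'
from social.strategies.django_strategy import DjangoStrategy


class ProxyAwareDjangoStrategy(DjangoStrategy):
    """Prefer the values set by the reverse proxy over Django's own."""

    def request_host(self):
        # request_host()/request_port() are assumed hooks, as in the linked PRs.
        return (self.request.META.get('HTTP_X_FORWARDED_HOST')
                or super(ProxyAwareDjangoStrategy, self).request_host())

    def request_port(self):
        return (self.request.META.get('HTTP_X_FORWARDED_PORT')
                or super(ProxyAwareDjangoStrategy, self).request_port())

With this in place, the SAML callback URL is built from the forwarded host and port, i.e. http://lms.local/auth/complete/tpa-saml/, matching what the IdP sends back.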

Multiple storage engines for django media: prefer local, fallback to CDN

I have a django/mezzanine/django-cumulus project that uses the Rackspace Cloud Files CDN for media storage. I would like to automatically serve all static files from the local MEDIA_ROOT if they exist, and only fall back to the CDN URL if they do not.
One possible approach is to manage the fallback at the template level, using tags. However, I would prefer not to have to override all the admin templates (for example) just for this.
Is there a way to modify the handling of all media to use one storage engine first, and switch to a second on error?
The best way to get this working is to have a different web server serving all of your media (I used nginx). Then you set up a load balancer to detect failure and redirect all requests to the CDN in case of a failure.
One thing that you might have to figure out is the image path (use HAProxy to rewrite the request URL if you need to).
Based on Anup's suggestion, I found that this bit of nginx config handles the 404 condition nicely:
location /static/ {
    root /path/to/static_root;
    # ...
    error_page 404 = @cdn;
}

location @cdn {
    # cdn_cname.example.com is an alias for deadbeef012345.r99.cf5.rackcdn.com
    rewrite ^/(.*)$ http://cdn_cname.example.com/$1 last;
}
This will correctly redirect any request for a /static/ URI that returns 404 on the local server to the CDN. However, django-cumulus still renders links to static files via the CDN. To fix that, I added the following to the CUMULUS block of settings.py:
CUMULUS = {
    # ...
    'CONTAINER_URI': 'http://example.com/static',
}
Now, django-cumulus links use the local server's static URI, which will hit the nginx configuration above, and only redirect to the CDN when necessary. Hooray!
