How do I make Django admin URLs accessible to localhost only?

What is the simplest way to make Django /admin/ urls accessible to localhost only?
Options I have thought of:
Separate the admin site out of the project (somehow) and run it as a different virtual host (in Apache2)
Use a proxy in front of the hosting (Apache2) web server
Restrict the URL in Apache within WSGI somehow.
Is there a standard approach?
Thanks!

I'd go for Apache configuration:
<Location /admin>
    Order deny,allow
    Deny from all
    Allow from 127.0.0.1
</Location>
HTH.
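Note that Order/Deny/Allow is Apache 2.2 syntax. On Apache 2.4 those directives are replaced by Require; an equivalent sketch (assuming mod_authz_host is loaded) would be:

```apache
<Location /admin>
    # Apache 2.4: allow only requests originating from the server itself
    Require local
</Location>
```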

I'd go for the Apache configuration + a proxy in front + restriction in WSGI:
I dislike Apache for communicating with web clients when dynamic content generation is involved. Because of its execution model, a slow or disconnected client can tie up an Apache process. If you have a proxy in front (I prefer nginx, but even a vanilla Apache will do), the proxy worries about the clients and Apache can focus on new dynamic content requests.
Depending on your Apache configuration, a process can also slurp a lot of memory and hold onto it until it hits MaxRequestsPerChild. If you have memory-intensive code in /admin (many people do), you can end up with Apache processes that grab a lot more memory than they need. If you split your Apache config into /admin and /!admin, you can tweak your Apache settings to run a larger number of /!admin servers, which require a smaller potential footprint.
I'm paranoid about server setups.
I want to ensure the proxy only sends /admin to a certain Apache port.
I want to ensure that Apache only receives /admin on a certain Apache port, and that the request came from the proxy (with a secret header) or from localhost.
I want to ensure that the WSGI is only running the /admin stuff based on certain server/client conditions.
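The WSGI-level restriction mentioned above could be done with a small middleware that checks the client address before the request reaches Django. A minimal sketch (the middleware name and the 403 body are my own, not from the answer):

```python
def localhost_only_admin(app):
    """Wrap a WSGI app so /admin paths are only served to localhost.

    Note: if a proxy sits in front, REMOTE_ADDR will be the proxy's
    address, so this check belongs on the server the proxy talks to.
    """
    def middleware(environ, start_response):
        path = environ.get("PATH_INFO", "")
        remote = environ.get("REMOTE_ADDR", "")
        if path.startswith("/admin") and remote not in ("127.0.0.1", "::1"):
            # Reject admin requests that did not originate locally
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"Forbidden"]
        # Everything else is passed through to the wrapped application
        return app(environ, start_response)
    return middleware
```

You would wrap your Django WSGI application with it in wsgi.py, e.g. `application = localhost_only_admin(get_wsgi_application())`.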

Related

Using two different frameworks on a single domain (Oracle Weblogic / Django)

Suppose my company has a site at https://example.com, and it is powered by an older version of Oracle Weblogic. The company wants to eventually transition the site to a Django framework, but wants to do it piecemeal.
Specifically, it wants to maintain the original site on the old framework, but wants to set up a subfolder like https://example.com/newurl/ (or, alternatively, a subdomain like https://newurl.example.com) which will contain a Django project with new features etc., and any subdirectories within this new URL will likewise consist of Django apps only.
My question is, is it possible to contain both frameworks on the same domain in this manner, and if so how one would go about it using Apache? Thanks.
Yes, sure it's possible. Try reverse proxy software, such as:
Nginx Proxy
HAProxy
Varnish Cache
A reverse proxy is a type of proxy server that retrieves resources on behalf of a client from one or more servers. These resources are then returned to the client, appearing as if they originated from the proxy server itself. Unlike a forward proxy, which is an intermediary for its associated clients to contact any server, a reverse proxy is an intermediary for its associated servers to be contacted by any client. In other words, a proxy acts on behalf of the client(s), while a reverse proxy acts on behalf of the server(s).
Nginx reverse proxy sample config
server {
    listen 80;
    server_name example.com;

    location ~ /newurl {
        proxy_pass http://django-server;
    }

    location ~ /oldurl {
        proxy_pass http://oracle-weblogic-server;
    }
}
HAProxy reverse proxy sample config
frontend http_frontend
    bind *:80
    mode http
    option httpclose
    acl is_newurl hdr_end(host) -i newurl
    use_backend django if is_newurl
    acl is_oldurl hdr_end(host) -i oldurl
    use_backend oracle if is_oldurl

backend django
    mode http
    cookie SERVERID insert indirect nocache
    server django django-server:80 check cookie django

backend oracle
    mode http
    cookie SERVERID insert indirect nocache
    server oracle oracle-weblogic-server:80 check cookie oracle
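Since the question specifically asks about Apache: the same path-based split can be done in Apache itself with mod_proxy. A minimal sketch, with the same placeholder backend names as above:

```apache
<VirtualHost *:80>
    ServerName example.com
    ProxyPreserveHost On

    # New Django features live under /newurl/ (listed first: more specific
    # ProxyPass rules must precede the catch-all)
    ProxyPass        /newurl/ http://django-server/newurl/
    ProxyPassReverse /newurl/ http://django-server/newurl/

    # Everything else stays on the old WebLogic site
    ProxyPass        /        http://oracle-weblogic-server/
    ProxyPassReverse /        http://oracle-weblogic-server/
</VirtualHost>
```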

Kubernetes - Configuring Angular to Work with a Backend API

I have two containers Auth and Frontend. I have managed to get both the containers working independently, I need to establish the link between the two to send and receive HTTP requests.
Generally, the connections are made in angular like http://localhost:3000/auth/.
Note: Both are in different deployments and services.
Should I be using Ingress or Nginx?
If your Frontend Angular application needs to connect to the Auth application and the two run on different networks, then just use the IP of the host running the Auth container. If your app requires load balancing or security, or you just want to add another level of abstraction and control, you may use a proxy like Nginx.
A Kubernetes Service will do the job; you just need to replace localhost with the Service name.
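A hypothetical Service for the Auth deployment might look like this (the name, label selector, and port are assumptions, not taken from the question):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: auth
spec:
  selector:
    app: auth        # must match the labels on the Auth deployment's pods
  ports:
    - port: 3000
      targetPort: 3000
```

Angular would then call `http://auth:3000/auth/` instead of `http://localhost:3000/auth/`, relying on cluster DNS to resolve the Service name.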

Sharing sessions between two flask servers

I have a backend with two Flask servers: one that handles all RESTful requests and one that is a Flask-SocketIO server. Is there a way to share session variables (logged-in user etc.) between these two applications? They run on different ports, if that is important.
As I understand sessions, they work via client session cookies, so shouldn't both of these servers have access to the information? If yes, how? And if not, is there a way to achieve the same effect?
There are a couple of ways to go at this, depending on how you have your two servers set up.
The easiest solution is to have both servers appear to the client on the same domain and port. For example, you can have www.example.com/socket.io as the root for your Socket.IO server, and any other URLs on www.example.com going to your HTTP server. To achieve this, you need to use a reverse proxy server, such as nginx. Clients do not connect directly to your servers; instead they connect to nginx on a single port, and nginx is configured to forward requests to the appropriate server depending on the URL.
With the above set up both servers are exposed to the client on the same domain, so session cookies will be sent to both.
If you want to have your servers appear separate to your client, then a good option to share session data is to switch to server-side sessions, stored in Redis, memcache, etc. You can use the Flask-Session extension to set that up.
Hope this helps!
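The single-domain split described above could be sketched in nginx like this (ports and server name are placeholders; the Upgrade headers are needed for the WebSocket transport):

```nginx
server {
    listen 80;
    server_name www.example.com;

    # Socket.IO traffic goes to the Flask-SocketIO server
    location /socket.io {
        proxy_pass http://127.0.0.1:5001;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }

    # Everything else goes to the RESTful server
    location / {
        proxy_pass http://127.0.0.1:5000;
    }
}
```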
I found that flask.session.sid = sid_from_another_domain works fine for the individual-subdomains case.
I have several Flask apps with individual domain names like A.domain.com, B.domain.com, and C.domain.com.
They are all based on Flask and use a Redis session manager connected to the same Redis server.
I wanted to federate them so that logging in once logs in everywhere, and likewise for logging out.
When I logged in on A, I saved the session ID in the database together with the user information and passed it to domain B.
The domains communicate using the OAuth2 protocol; I used flask_dance for this.
On domain B I then set that ID into flask.session.sid.
With that, I could confirm this implementation works fine.

django verify HttpResponse reached user

Is there anyway to make sure that an HttpResponse in django has successfully reached the end-user?
A normal TCP connection ends with a FIN packet! Could this be detected?
Currently, I save data in a temporary table and once the device receives the data, it sends a "confirmation message" to the server which in turn pulls data from temporary table and commits it.
Short answer is no. In production, there will be several layers of abstraction on top of Django that prevent Django from getting the level of detail you want. In a small production setup, you will have
user --> nginx or apache --> gunicorn or uWSGI running django
So at best, django will only be able to tell if nginx or apache received the response.
In larger scale production systems you will have
user --> load balancer --> nginx or apache --> gunicorn or uWSGI running django
And in some even larger production scale systems you might see:
user --> cloudflare or incapsula (caching and firewall service) --> load balancer --> nginx or apache --> gunicorn or uWSGI running django
So the short answer is, absent a special ACK or confirmation message, like the one that you already have in place, this can’t be done.
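The confirm-then-commit pattern the question already uses can be sketched framework-agnostically: data sits in a "temporary table" (here a dict) until the client sends its confirmation, then it is committed. All names here are illustrative, not from the question:

```python
import uuid

class PendingStore:
    """Stage outgoing data until the client explicitly acknowledges it."""

    def __init__(self):
        self.pending = {}    # stand-in for the temporary table
        self.committed = {}  # stand-in for the permanent table

    def send(self, payload):
        """Stage a response payload and return its delivery token."""
        token = str(uuid.uuid4())
        self.pending[token] = payload
        return token

    def confirm(self, token):
        """Client acknowledged receipt: move the payload to permanent storage."""
        payload = self.pending.pop(token, None)
        if payload is None:
            return False  # unknown token, or already confirmed
        self.committed[token] = payload
        return True
```

In Django terms, `send` would run in the view that returns the data (including the token in the response), and `confirm` in the view that handles the device's confirmation message.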

Differentiate nginx, haproxy, varnish and uWSGI/Gunicorn [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
I am really new to sys admin stuff, and have only provisioned a VPS with nginx(serving the static files) and gunicorn as the web server.
I have lately been reading about different other stuff. I came to know about other tools:
nginx : high-performance HTTP server and reverse proxy, as well as an IMAP/POP3 proxy server
haproxy : high performance load balancer
varnish : caching HTTP reverse proxy
gunicorn : python WSGI http server
uwsgi : another python WSGI server
I have been reading about all the above 5 tools and I have confused myself as to which one is used for what purpose? Could someone please explain me in lay man terms what use is each of the tool put in, when used together and which specific concern do they address?
Let's say you plan to host a few websites on your new VPS. Let's look at the tools you might need for each site.
HTTP Servers
Website 'Alpha' just consists of some pure HTML, CSS and JavaScript. The content is static.
When someone visits website Alpha, their browser will issue an HTTP request. You have configured (via DNS and name server configuration) that request to be directed to the IP address of your VPS. Now you need your VPS to be able to accept that HTTP request, decide what to do with it, and issue a response that the visitor's browser can understand. You need an HTTP server, such as Apache httpd or NGINX, and let's say you do some research and eventually decide on NGINX.
Application Servers
Website 'Beta' is dynamic, written using the Django Web Framework.
WSGI is a protocol that describes the interface between a Python application (the Django app) and an application server. So what you need now is a WSGI app server, which will be able to understand web requests, make appropriate 'calls' to the application's various objects, and return the results. You have many options here, including gunicorn and uWSGI. Let's say you do some research and eventually decide on uWSGI.
uWSGI can accept and handle HTTP requests for static content as well, so if you wanted to you could have website Alpha served entirely by NGINX and website Beta served entirely by uWSGI. And that would be that.
Reverse Proxy Servers
But uWSGI has poor performance in dealing with static content, so you would rather use NGINX for static content like images, even on website Beta. But then something would have to distinguish between requests and send them to the right place. Is that possible?
It turns out NGINX is not just an HTTP server but also a reverse proxy server: it is capable of redirecting incoming requests to another place, like your uWSGI application server, or many other places, collecting the response(s) and sending them back to the original requester. Awesome! So you configure all incoming requests to go to NGINX, which will serve up static content or, when required, redirect it to the app server.
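The static/dynamic split for website Beta could be sketched as an nginx config like this (paths, ports, and the server name are placeholders):

```nginx
server {
    listen 80;
    server_name beta.example.com;

    # Serve static content straight from disk
    location /static/ {
        alias /var/www/beta/static/;
    }

    # Hand everything else to the uWSGI app server over the uwsgi protocol
    location / {
        include uwsgi_params;
        uwsgi_pass 127.0.0.1:8000;
    }
}
```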
Load Balancing with multiple web servers
You are also hosting Website Gamma, which is a blog that is popular internationally and receives a ton of traffic.
For Gamma you decide to set up multiple web servers. All incoming requests go to your original VPS with NGINX, and you configure NGINX to redirect each request to one of several other web servers in round-robin fashion and return the response to the original requester.
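That round-robin setup might be sketched like this (the upstream addresses are placeholders):

```nginx
# nginx load-balances across an upstream group in round-robin order by default
upstream gamma_backends {
    server 10.0.0.11;
    server 10.0.0.12;
    server 10.0.0.13;
}

server {
    listen 80;
    server_name gamma.example.com;

    location / {
        proxy_pass http://gamma_backends;
    }
}
```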
HAProxy is a server that specializes in load balancing for high-traffic sites. In this case, you were able to use NGINX to handle traffic for site Gamma. In other scenarios, one may choose to set up a high-availability cluster: e.g., send all requests to a server like HAProxy, which intelligently redirects traffic to a cluster of nginx servers similar to your original VPS.
Cache Server
Website Gamma exceeded the capacity of your VPS due to the sheer volume of traffic. Let's say you instead hosted website Delta, and the reason your web server is unable to handle Delta is due to a popular feature that is very content-heavy.
A cache server is able to understand what media content is being frequently requested and store this content differently, such that it can be served more quickly. This is achieved by reducing disk IO operations; the popular content can be stored in memory or virtual memory instead. You might decide to combine your existing NGINX stack with a technology like Varnish or Memcached to achieve this type of optimization and serve website Delta more effectively.
I will put a very concise (very informal) description for each one, in the order they would be hit when you make a request from your web browser:
HAProxy balances your traffic load. If your webpage is receiving 5000 hits per second, you can't handle that with only one webserver, so HAProxy balances the hits among the webservers you have behind it.
Varnish is a cache server: it sits in front of your webservers and behind HAProxy, so if a resource is already cached, Varnish will serve the request itself instead of passing it to the webservers behind.
nginx, gunicorn, and uWSGI are web servers that would sit behind Varnish and get the requests that Varnish lets pass through. These web servers use optimized designs to handle high loads (requests per second).
First, gunicorn and uWSGI are both app servers. In other words, they are responsible for running your Python code in a stable and performant manner, usually as a backend to a regular webserver.
The webserver would be nginx, it excels at serving static assets and passing the requests for dynamic content on to the appservers.
If the above doesn't give enough performance you add in varnish between nginx and the client, it should speed up repeated requests for the same thing.
haproxy is a load balancer, if you have several servers for the same content, this software will attempt to distribute requests among them optimally.
so basically:
your python code lives in the appserver (uwsgi or gunicorn)
your static webassets live in nginx
haproxy and varnish are software that allow you to better serve very large amounts of requests
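A minimal HAProxy config matching this stack could look like the following sketch (backend addresses are placeholders):

```
frontend www
    bind *:80
    mode http
    default_backend app_servers

backend app_servers
    mode http
    balance roundrobin
    server web1 10.0.0.11:80 check   # e.g. an nginx + appserver box
    server web2 10.0.0.12:80 check
```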
