Using two different frameworks on a single domain (Oracle Weblogic / Django) - python

Suppose my company has a site at https://example.com, and it is powered by an older version of Oracle Weblogic. The company wants to eventually transition the site to a Django framework, but wants to do it piecemeal.
Specifically, it wants to maintain the original site on the old framework, but wants to set up a subfolder like https://example.com/newurl/ (or, alternatively, a subdomain like https://newurl.example.com) which will contain a Django project with new features etc., and any subdirectories within this new URL will likewise consist of Django apps only.
My question is, is it possible to host both frameworks on the same domain in this manner, and if so, how would one go about it using Apache? Thanks.

Yes, sure it's possible. Try reverse proxy software, such as:
Nginx
HAProxy
Varnish Cache
A reverse proxy is a type of proxy server that retrieves resources on behalf of a client from one or more servers. These resources are then returned to the client, appearing as if they originated from the proxy server itself. Unlike a forward proxy, which is an intermediary for its associated clients to contact any server, a reverse proxy is an intermediary for its associated servers to be contacted by any client. In other words, a proxy acts on behalf of the client(s), while a reverse proxy acts on behalf of the server(s).
Nginx reverse proxy sample config
server {
    listen 80;
    server_name example.com;

    location /newurl/ {
        proxy_pass http://django-server;
    }

    location /oldurl/ {
        proxy_pass http://oracle-weblogic-server;
    }
}
HAProxy reverse proxy sample config
frontend http_frontend
    bind *:80
    mode http
    option httpclose
    # route the new subdomain to Django; everything else stays on WebLogic
    acl is_newurl hdr(host) -i newurl.example.com
    use_backend django if is_newurl
    default_backend oracle

backend django
    mode http
    cookie SERVERID insert indirect nocache
    server django django-server:80 check cookie django

backend oracle
    mode http
    cookie SERVERID insert indirect nocache
    server oracle oracle-weblogic-server:80 check cookie oracle
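If you go with the subfolder approach, Django itself also needs to know it is being served under the /newurl prefix, so that the URLs and redirects it generates include that prefix. A minimal settings sketch — the prefix, hostname, and static path below are assumptions taken from the question, not values from any real deployment:

```python
# settings.py (fragment) -- tell Django it lives under /newurl/ behind
# the reverse proxy. All concrete values are assumptions from the question.

# Prepended to every URL Django reverses and every redirect it issues.
FORCE_SCRIPT_NAME = "/newurl"

# Trust the Host header the proxy forwards via X-Forwarded-Host.
USE_X_FORWARDED_HOST = True

ALLOWED_HOSTS = ["example.com"]

# Static files must also be published under the prefix.
STATIC_URL = "/newurl/static/"
```

Without FORCE_SCRIPT_NAME, links generated by the Django app would point at the bare paths and fall through to the WebLogic side of the proxy.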

Related

Sharing sessions between two flask servers

I have a backend with two Flask servers: one that handles all RESTful requests, and one that is a Flask-SocketIO server. Is there a way to share session variables (logged-in user, etc.) between these two applications? They do run on different ports, if that is important.
As I understand sessions, they work via client-side session cookies, so shouldn't both of these servers have access to the information? If yes, how? And if not, is there a way to achieve the same effect?
There are a couple of ways to go at this, depending on how you have your two servers set up.
The easiest solution is to have both servers appear to the client on the same domain and port. For example, you can have www.example.com/socket.io as the root for your Socket.IO server, and any other URLs on www.example.com going to your HTTP server. To achieve this, you need to use a reverse proxy server, such as nginx. Clients do not connect directly to your servers; instead they connect to nginx on a single port, and nginx is configured to forward requests to the appropriate server depending on the URL.
With the above set up both servers are exposed to the client on the same domain, so session cookies will be sent to both.
If you want to have your servers appear separate to your client, then a good option to share session data is to switch to server-side sessions, stored in Redis, memcache, etc. You can use the Flask-Session extension to set that up.
Hope this helps!
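The server-side sessions mentioned above can be sketched in plain Python: the cookie holds only an opaque session ID, while the session data lives in a store that both servers can reach. Here a plain dict stands in for that shared store (Redis, memcached, ...), and the function names are illustrative, not the Flask-Session API:

```python
import uuid

# Stand-in for a shared store such as Redis; both servers would connect
# to the same instance, so a session written by one is visible to the other.
shared_store = {}

def create_session(data):
    """Called by the server that handles login: store the data and return
    the opaque ID that goes into the client's cookie."""
    sid = uuid.uuid4().hex
    shared_store[sid] = dict(data)
    return sid

def load_session(sid):
    """Called by either server on a later request, using the ID from the cookie."""
    return shared_store.get(sid)

# The REST server logs the user in...
cookie_sid = create_session({"user": "alice", "logged_in": True})
# ...and the Socket.IO server, given the same cookie, sees the same session.
print(load_session(cookie_sid)["user"])  # alice
```

Because only the ID travels in the cookie, both applications see identical session data as long as they point at the same store.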
I found that setting flask.session.sid = sid_from_another_domain works fine in the individual-subdomains case.
I have several Flask apps with individual domain names, like A.domain.com, B.domain.com, and C.domain.com.
They are all based on Flask and have a Redis session manager connected to the same Redis server.
I wanted to federate them so that logging in (and out) once applies to all of them.
When I logged in on A, I saved the session ID in the database together with the user information and passed it to domain B. These domains communicate using the OAuth2 protocol (I used flask_dance for this).
On domain B, I then set that ID into flask.session.sid, and confirmed that this implementation works fine.

How to avoid Cross-Origin Resource Sharing error when hitting a Django API from another server process?

I am building a WebApp that has two separate components:
A "frontend" node.js application server (running on localhost:3099) that web-visitors will visit
A "backend" Django server (running on localhost:3098) that manages my ORM and talks to the database. Web visitors will not interact with this server directly at all. This server simply publishes a RESTful API that will be consumed by the frontend.
I will implement security restrictions that will prevent anyone except the frontend server from accessing the backend's API.
One of the API endpoints the backend publishes looks like this: http://localhost:3098/api/myApi/.
I can successfully hit that API from curl like so: curl -X POST -H "Content-Type: application/json" -d '{"myKey1":"myVal1", "myKey2":"myVal2"}' http://localhost:3098/api/myApi/
However, when I try to hit that same API from my frontend server using Javascript, I get the following error in my browser's console window:
XMLHttpRequest cannot load http://localhost:3098/api/myApi/.
No 'Access-Control-Allow-Origin' header is present on the requested resource.
Origin 'http://localhost:3099' is therefore not allowed access.
To solve this, I took the following steps:
I installed django-cors-headers
I added 'corsheaders' to my INSTALLED_APPS
I added 'corsheaders.middleware.CorsMiddleware' to MIDDLEWARE_CLASSES above 'django.middleware.common.CommonMiddleware'
I declared the whitelist: CORS_ORIGIN_WHITELIST = ('localhost', '127.0.0.1',)
However, implementing django-cors-headers seems to have made no difference. I'm still getting the same CORS error. How can I solve this issue?
CORS is port-sensitive. The specification says that
If there is no port component of the URI:
    Let uri-port be the default port for the protocol given by uri-scheme.
Otherwise:
    Let uri-port be the port component of the URI.
And
If the two origins are scheme/host/port triples, the two origins are the same if, and only if, they have identical schemes, hosts, and ports.
This means that, with your configuration, CORS treats your whitelist as localhost:80, 127.0.0.1:80. I believe specifying localhost:3099 (in newer versions of django-cors-headers, the full origin http://localhost:3099) should resolve this issue.
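The scheme/host/port comparison the spec describes can be sketched in a few lines of Python (the default-port table here covers only http and https):

```python
from urllib.parse import urlsplit

DEFAULT_PORTS = {"http": 80, "https": 443}

def origin_triple(url):
    """Reduce a URL to the (scheme, host, port) triple that CORS compares,
    filling in the default port when the URL omits one."""
    parts = urlsplit(url)
    port = parts.port or DEFAULT_PORTS[parts.scheme]
    return (parts.scheme, parts.hostname, port)

def same_origin(a, b):
    """Two origins match only if scheme, host, and port are all identical."""
    return origin_triple(a) == origin_triple(b)

# 'localhost' in a whitelist means localhost:80, not localhost:3099:
print(same_origin("http://localhost", "http://localhost:3099"))  # False
print(same_origin("http://localhost:80", "http://localhost"))    # True
```

This is why a bare `localhost` whitelist entry fails to match a frontend served on port 3099.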

Multiple backend servers accessible from a Flask server

I want to have a front-end server where my clients can connect, and depending on the client, be redirected (transparently) to another Flask application that will handle the specific client's needs (e.g., there can be different applications).
I also want to be able to add / remove / restart those backend clients whenever I want without killing the main server for the other clients.
I'd like the clients to:
not detect that there are other servers in the backend (the URL should be the same host)
not have to reenter their credentials when they are redirected to the other process
What would be the best approach?
The front-end server that you describe is essentially what is known as a reverse proxy.
The reverse proxy receives requests from clients and forwards them to a second line of internal servers that clients cannot reach directly. Typically the decision of which internal server to forward a request to is made based on some aspect of the request URL. For example, you can assign a different sub-domain to each internal application.
After the reverse proxy receives a response from the internal server it forwards it on to the client as if it was its own response. The existence of internal servers is not revealed to the client.
Solving authentication is simple, as long as all your internal servers share the same authentication mechanism and user database. Each request will come with authentication information. This could for example be a session cookie that was set by the login request, direct user credentials or some type of authentication token. In all cases you can validate logins in the same way in all your applications.
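One concrete shape for the shared token approach is a signed token: any internal app holding the shared secret can validate a login without contacting the others. A minimal HMAC-based sketch — this is not a production scheme (no expiry, no payload encryption; real deployments would reach for something like itsdangerous or JWT), and the secret value is a placeholder:

```python
import hmac
import hashlib

# Assumption: this secret is distributed to all internal servers out of band.
SECRET = b"shared-by-all-internal-servers"

def issue_token(username):
    """Called once at login by whichever app handles authentication."""
    sig = hmac.new(SECRET, username.encode(), hashlib.sha256).hexdigest()
    return f"{username}:{sig}"

def verify_token(token):
    """Any internal app can validate the token using only the shared secret.
    Returns the username on success, None on a bad signature."""
    username, _, sig = token.partition(":")
    expected = hmac.new(SECRET, username.encode(), hashlib.sha256).hexdigest()
    return username if hmac.compare_digest(sig, expected) else None

token = issue_token("alice")
print(verify_token(token))               # alice
print(verify_token("alice:forged-sig"))  # None
```

The same `verify_token` logic runs identically in every internal application, which is exactly the "validate logins in the same way" property described above.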
Nginx is a popular web server that works well as a reverse proxy.
Sounds like you want a single sign-on setup for a collection of service endpoints with a single entry point.
I would consider deploying all my services as Flask applications that know nothing about how they fit into the overall architecture. All they know is that every request for a resource needs some kind of credentials associated with it. The manner in which you pass those credentials can vary. You can use something like the FAS Flask Auth Plugin to handle authentication. Or you can do something simpler, like packaging the credentials provided to your entry service into the HTTP headers of the subsequent requests to other services. Flask.request.headers in your subsequent services will give you access to the right headers to pass to your authentication service.
There are a lot of ways you can go when it comes to details, but I think this general architecture should work for you.

Differentiate nginx, haproxy, varnish and uWSGI/Gunicorn [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
I am really new to sys admin stuff, and have only provisioned a VPS with nginx(serving the static files) and gunicorn as the web server.
I have lately been reading about different other stuff. I came to know about other tools:
nginx : high-performance HTTP server and reverse proxy, as well as an IMAP/POP3 proxy server
haproxy : high performance load balancer
varnish : caching HTTP reverse proxy
gunicorn : python WSGI http server
uwsgi : another python WSGI server
I have been reading about all 5 of the above tools and have confused myself as to which one is used for what purpose. Could someone please explain to me, in layman's terms, what each tool is used for, how they are used together, and which specific concern each addresses?
Let's say you plan to host a few websites on your new VPS. Let's look at the tools you might need for each site.
HTTP Servers
Website 'Alpha' just consists of some pure HTML, CSS and JavaScript. The content is static.
When someone visits website Alpha, their browser will issue an HTTP request. You have configured (via DNS and name server configuration) that request to be directed to the IP address of your VPS. Now you need your VPS to be able to accept that HTTP request, decide what to do with it, and issue a response that the visitor's browser can understand. You need an HTTP server, such as Apache httpd or NGINX, and let's say you do some research and eventually decide on NGINX.
Application Servers
Website 'Beta' is dynamic, written using the Django Web Framework.
WSGI is a protocol that describes the interface between a Python application (the Django app) and an application server. So what you need now is a WSGI app server, which will be able to understand web requests, make appropriate 'calls' to the application's various objects, and return the results. You have many options here, including gunicorn and uWSGI. Let's say you do some research and eventually decide on uWSGI.
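The interface WSGI describes is small enough to show in full: the framework (Django, Flask, ...) ultimately exposes a callable like the one below, and gunicorn or uWSGI is the server that calls it once per request. A minimal hand-written example:

```python
def application(environ, start_response):
    """A complete WSGI application. The server passes the request in as a
    dict (environ) plus a callback for the status line and headers, and
    receives an iterable of bytes back as the response body."""
    path = environ.get("PATH_INFO", "/")
    body = f"Hello from {path}".encode("utf-8")
    start_response("200 OK", [
        ("Content-Type", "text/plain; charset=utf-8"),
        ("Content-Length", str(len(body))),
    ])
    return [body]

# A WSGI server (gunicorn, uWSGI, wsgiref, ...) drives it; simulated here:
status_seen = []
result = application({"PATH_INFO": "/beta"}, lambda s, h: status_seen.append(s))
print(b"".join(result))  # b'Hello from /beta'
```

Both gunicorn and uWSGI speak exactly this interface, which is why an app written for one runs unmodified on the other.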
uWSGI can accept and handle HTTP requests, and serve static content as well, so if you wanted to you could have website Alpha served entirely by NGINX and website Beta served entirely by uWSGI. And that would be that.
Reverse Proxy Servers
But uWSGI has poor performance in dealing with static content, so you would rather use NGINX for static content like images, even on website Beta. But then something would have to distinguish between requests and send them to the right place. Is that possible?
It turns out NGINX is not just an HTTP server but also a reverse proxy server: it is capable of forwarding incoming requests to another place, like your uWSGI application server, or many other places, collecting the response(s) and sending them back to the original requester. Awesome! So you configure all incoming requests to go to NGINX, which will serve up static content or, when required, forward the request to the app server.
Load Balancing with multiple web servers
You are also hosting Website Gamma, which is a blog that is popular internationally and receives a ton of traffic.
For Gamma you decide to set up multiple web servers. All incoming requests go to your original VPS with NGINX, and you configure NGINX to forward each request to one of several other web servers in round-robin fashion and return the response to the original requester.
HAProxy is a server that specializes in load balancing for high-traffic sites. In this case, you were able to use NGINX to handle traffic for site Gamma. In other scenarios, one may choose to set up a high-availability cluster: e.g., send all requests to a server like HAProxy, which intelligently redirects traffic to a cluster of nginx servers similar to your original VPS.
Cache Server
Website Gamma exceeded the capacity of your VPS due to the sheer volume of traffic. Let's say you instead hosted website Delta, and the reason your web server is unable to handle Delta is due to a popular feature that is very content-heavy.
A cache server is able to understand what media content is being frequently requested and store this content differently, such that it can be more quickly served. This is achieved by reducing disk IO operations; the popular content can be stored in memory or virtual memory instead. You might decide to combine your existing NGINX stack with a technology like Varnish or Memcached to achieve this type of optimization and serve website Delta more effectively.
I will give a very concise (very informal) description of each one, in the order they would be hit when you make a request from your web browser:
HAProxy balances your traffic load. If your webpage is receiving 5000 hits per second, you can't handle that with only one web server, so HAProxy balances the hits among the web servers you have behind it.
Varnish is a cache server: it sits in front of your web servers and behind HAProxy, so if a resource is already cached, Varnish will serve the request itself instead of passing it to the web servers behind it.
nginx, gunicorn and uWSGI are web servers that sit behind Varnish and get the requests that Varnish lets through. These web servers use optimized designs to handle high loads (requests per second).
First, gunicorn and uWSGI are both app servers. In other words, they are responsible for running your Python code in a stable and performant manner, usually as a backend to a regular web server.
The webserver would be nginx, it excels at serving static assets and passing the requests for dynamic content on to the appservers.
If the above doesn't give enough performance you add in varnish between nginx and the client, it should speed up repeated requests for the same thing.
haproxy is a load balancer, if you have several servers for the same content, this software will attempt to distribute requests among them optimally.
so basically:
your python code lives in the appserver (uwsgi or gunicorn)
your static webassets live in nginx
haproxy and varnish are software that allow you to better serve very large numbers of requests

How do I make Django admin URLs accessible to localhost only?

What is the simplest way to make Django /admin/ urls accessible to localhost only?
Options I have thought of:
Separate the admin site out of the project (somehow) and run it as a different virtual host (in Apache2)
Use a proxy in front of the hosting (Apache2) web server
Restrict the URL in Apache within WSGI somehow.
Is there a standard approach?
Thanks!
I'd go for Apache configuration:
<Location /admin>
    Order deny,allow
    Deny from all
    Allow from 127.0.0.1
</Location>
(On Apache 2.4+, the equivalent is Require local.)
HTH.
I'd go for the Apache configuration + run a proxy in front + restrict in WSGI :
I dislike Apache for communicating with web clients when dynamic content generation is involved. Because of its execution model, a slow or disconnected client can tie up an Apache process. If you have a proxy in front (I prefer nginx, but even a vanilla Apache will do), the proxy will worry about the clients and Apache can focus on the next dynamic content request.
Depending on your Apache configuration, a process can also slurp a lot of memory and hold onto it until it hits MaxRequests. If you have memory-intensive code in /admin (many people do), you can end up with Apache processes that grab a lot more memory than they need. If you split your Apache config into /admin and /!admin, you can tweak your Apache settings to have a larger number of /!admin servers, which require a smaller potential footprint.
I'm paranoid about server setups.
I want to ensure the proxy only sends /admin to a certain Apache port.
I want to ensure that Apache only receives /admin on that port, and that the request came from the proxy (with a secret header) or from localhost.
I want to ensure that the WSGI layer only runs the /admin stuff under certain server/client conditions.
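The WSGI-level restriction can be a small middleware wrapped around the Django WSGI application: reject /admin requests whose REMOTE_ADDR is not localhost before they ever reach Django. A sketch — the inner app here is a stub standing in for Django's real WSGI application:

```python
def localhost_only_admin(app):
    """WSGI middleware: return 403 for any /admin request not from 127.0.0.1."""
    def middleware(environ, start_response):
        path = environ.get("PATH_INFO", "")
        addr = environ.get("REMOTE_ADDR", "")
        if path.startswith("/admin") and addr != "127.0.0.1":
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"Forbidden"]
        return app(environ, start_response)
    return middleware

def django_app(environ, start_response):
    """Stub standing in for Django's get_wsgi_application() result."""
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"OK"]

application = localhost_only_admin(django_app)

# Simulate a remote client and a local one hitting /admin:
statuses = []
sr = lambda status, headers: statuses.append(status)
application({"PATH_INFO": "/admin/", "REMOTE_ADDR": "10.0.0.5"}, sr)
application({"PATH_INFO": "/admin/", "REMOTE_ADDR": "127.0.0.1"}, sr)
print(statuses)  # ['403 Forbidden', '200 OK']
```

Note that if a reverse proxy sits in front of Apache, REMOTE_ADDR will be the proxy's address, which is exactly why the secret-header check mentioned above is needed as a second condition in that setup.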
