I have a backend with two Flask servers: one that handles all RESTful requests and one that is a Flask-SocketIO server. Is there a way to share session variables (logged-in user etc.) between these two applications? They run on different ports, if that is important.
As I understand sessions, they work via client-side session cookies, so shouldn't both of these servers have access to the same information? If yes, how? And if not, is there a way to achieve the same effect?
There are a couple of ways to go at this, depending on how you have your two servers set up.
The easiest solution is to have both servers appear to the client on the same domain and port. For example, you can have www.example.com/socket.io as the root for your Socket.IO server, and any other URLs on www.example.com going to your HTTP server. To achieve this, you need to use a reverse proxy server, such as nginx. Clients do not connect directly to your servers; instead they connect to nginx on a single port, and nginx is configured to forward requests to the appropriate server depending on the URL.
With the above setup both servers are exposed to the client on the same domain, so session cookies will be sent to both.
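As a rough illustration, here is a minimal nginx sketch of that routing; the backend ports (5000 for the HTTP server, 5001 for the Socket.IO server) are assumptions:

    server {
        listen 80;
        server_name www.example.com;

        # Socket.IO traffic goes to the Socket.IO server (assumed on port 5001)
        location /socket.io {
            proxy_pass http://127.0.0.1:5001;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;    # allow the WebSocket upgrade
            proxy_set_header Connection "Upgrade";
        }

        # everything else goes to the regular HTTP server (assumed on port 5000)
        location / {
            proxy_pass http://127.0.0.1:5000;
        }
    }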
If you want to have your servers appear separate to your client, then a good option for sharing session data is to switch to server-side sessions, stored in Redis, memcached, etc. You can use the Flask-Session extension to set that up.
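A minimal sketch of that setup, assuming Redis runs on localhost; the same configuration (including the same secret key) has to be applied in both applications so they read the same session store:

    from flask import Flask
    from flask_session import Session
    from redis import Redis

    app = Flask(__name__)
    app.config["SECRET_KEY"] = "shared-secret"        # identical in both apps
    app.config["SESSION_TYPE"] = "redis"              # store sessions server-side in Redis
    app.config["SESSION_REDIS"] = Redis("localhost", 6379)
    Session(app)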
Hope this helps!
I found that setting flask.session.sid = sid_from_another_domain works fine in the case of individual subdomains.
I have several Flask apps with individual domain names like A.domain.com, B.domain.com, and C.domain.com.
They are all based on Flask and have a Redis session manager connected to the same Redis server.
I wanted to federate them so that logging in or out of one logs in or out of all of them.
When I logged in on A, I saved the session ID in the database together with the user information, and passed it to domain B.
These domains communicate using the OAuth2 protocol; I used flask_dance in this case.
Then I set the received value into flask.session.sid on domain B.
I confirmed that this implementation works fine.
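A rough sketch of the receiving side; how the sid arrives is illustrative here (in my case it travelled through the OAuth2 exchange):

    from flask import Flask, request, session, redirect

    app = Flask(__name__)

    @app.route("/sso")
    def sso():
        # sid handed over from A.domain.com (transport shown here is hypothetical)
        sid_from_another_domain = request.args["sid"]
        # point this app at the same Redis-backed session as domain A
        session.sid = sid_from_another_domain
        return redirect("/")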
I am creating an inventory application for my company. The app has its inventory information stored in RDS. I am using MySQL and PyMySQL to connect. Throughout development I have had no issues connecting from the laptop that I created the database from. I want to know how to allow other computers with the application to connect. Is there a way to avoid adding each individual IP address to a security group? I would just like those with the application downloaded to have access without requiring additional login credentials.
When I use the application on my home computer I receive an error when trying to connect to the database.
pymysql.connect(db=dbname, host=host, port=port, user=user, password=password)
Side-note on security:
It is typically a very bad idea to grant remote applications direct access to your database, especially without giving each user/app their own password.
You are effectively opening your database to anyone that has the credentials, and you are including those credentials in the app itself. Somebody could obtain those credentials and, presumably, do quite a bit of damage to the contents of the database.
Also, you are locking in your database schema, with no ability to change it in the future. Let's say you have a table with particular columns and your application directly accesses that table. In the future, if you modify the table, you would need to simultaneously update every application that is using the database. That's not really feasible if other people are running the application on their own systems.
The better approach is to expose an API and have the applications use the API to access the data (a minimal sketch follows the points below):
The API should manage authentication, so you can determine who to allow in and, more importantly, track who is doing what. This will also avoid the problem of having to add each individual IP address of users, since you will be managing an authentication layer.
The API will apply a layer of business logic rather than allowing the remote application to do whatever it wishes. This stops remote users from being able to do things like delete the entire database. It also means that, instead of remote apps simply passing an SQL statement, they will need to pass information through the defined API (typically in the form of an action plus parameters).
It provides a stable interface into the application, allowing you to make changes in the backend (e.g. changing the contents of a table) while still presenting the same interface to client applications.
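Here is a minimal sketch of such an API layer in Flask; the table name, token scheme, and connection details are all illustrative assumptions, not a finished design:

    from flask import Flask, abort, jsonify, request
    import pymysql

    app = Flask(__name__)
    API_TOKENS = {"example-token": "alice"}   # hypothetical; store real tokens securely

    def get_db():
        # only this server ever holds the database credentials
        return pymysql.connect(db="inventory", host="localhost", port=3306,
                               user="api_user", password="api_password")

    @app.route("/items/<int:item_id>")
    def get_item(item_id):
        # authenticate every request, so you know who is doing what
        user = API_TOKENS.get(request.headers.get("X-Api-Token"))
        if user is None:
            abort(401)
        conn = get_db()
        try:
            with conn.cursor() as cur:
                # the client never sends SQL; it only names an action plus parameters
                cur.execute("SELECT name, quantity FROM items WHERE id = %s",
                            (item_id,))
                row = cur.fetchone()
        finally:
            conn.close()
        if row is None:
            abort(404)
        return jsonify(name=row[0], quantity=row[1])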
Rule of thumb: Only one application server should be directly accessing a given database. You might have multiple servers running the same software to provide high availability and to support a high load of traffic, but don't let the client apps directly access the database. That's what makes a "3-tier" application so much better than a "2-tier" application. (Client - Server - Database)
I have two containers, Auth and Frontend. I have managed to get both containers working independently; now I need to establish the link between the two to send and receive HTTP requests.
Generally, the connections are made in Angular like http://localhost:3000/auth/.
Note: Both are in different deployments and services.
Should I be using Ingress or Nginx?
If your Frontend Angular application needs to connect to the Auth application and the two run on different networks, then just use the IP of the host running the Auth container. If your app requires load balancing or security, or you just want to add another level of abstraction and control, you may use a proxy like Nginx.
A Kubernetes Service will do the job; you just need to replace localhost with the service name.
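For illustration, a minimal Service manifest might look like this (the name, labels, and ports are assumptions); the frontend would then call http://auth:3000/auth/ instead of localhost:

    # sketch of a Service exposing the Auth deployment inside the cluster
    apiVersion: v1
    kind: Service
    metadata:
      name: auth
    spec:
      selector:
        app: auth          # must match the labels on the Auth pods
      ports:
        - port: 3000       # port the frontend calls
          targetPort: 3000 # port the Auth container listens on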
I want to have a front-end server where my clients can connect and, depending on the client, be redirected (transparently) to another Flask application that will handle that client's specific needs (e.g. there can be different applications).
I also want to be able to add / remove / restart those backend clients whenever I want without killing the main server for the other clients.
I'd like the clients to:
not detect that there are other servers in the backend (the URL should be the same host)
not have to reenter their credentials when they are redirected to the other process
What would be the best approach?
The front-end server that you describe is essentially what is known as a reverse proxy.
The reverse proxy receives requests from clients and forwards them to a second line of internal servers that clients cannot reach directly. Typically the decision of which internal server to forward a request to is made based on some aspect of the request URL. For example, you can assign a different sub-domain to each internal application.
After the reverse proxy receives a response from the internal server it forwards it on to the client as if it was its own response. The existence of internal servers is not revealed to the client.
Solving authentication is simple, as long as all your internal servers share the same authentication mechanism and user database. Each request will come with authentication information. This could for example be a session cookie that was set by the login request, direct user credentials or some type of authentication token. In all cases you can validate logins in the same way in all your applications.
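As a minimal sketch, assuming the default cookie-based Flask sessions: any internal app configured with the same SECRET_KEY can validate and read the session cookie that the login app set, since all requests arrive on the same domain through the proxy:

    from flask import Flask, session

    app = Flask(__name__)
    app.config["SECRET_KEY"] = "shared-secret"   # identical in every internal app

    @app.route("/whoami")
    def whoami():
        # reads the same signed cookie that the login application set
        return session.get("username", "anonymous")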
Nginx is a popular web server that works well as a reverse proxy.
Sounds like you want a single sign-on setup for a collection of service endpoints with a single entry point.
I would consider deploying all my services as Flask applications that have no knowledge of how the overall system is architected. All they know is that every request for a resource needs some kind of credentials associated with it. The manner in which you pass those credentials can vary. You can use something like the FAS Flask Auth Plugin to handle authentication, or you can do something simpler, like packaging the credentials provided to your entry service in the HTTP headers of the subsequent requests to other services. flask.request.headers in your subsequent services will give you access to the right headers to pass to your authentication service.
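A minimal sketch of that header pass-through; the internal URL and header choice are hypothetical:

    import requests
    from flask import Flask, request

    app = Flask(__name__)

    @app.route("/inventory")
    def inventory():
        # forward the caller's credentials unchanged to the internal service
        auth = request.headers.get("Authorization", "")
        resp = requests.get("http://inventory.internal/api/items",
                            headers={"Authorization": auth})
        return resp.text, resp.status_code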
There are a lot of ways you can go when it comes to details, but I think this general architecture should work for you.
I have a website which uses Amazon EC2 with Django and Google App Engine for its powerful Image API and image-serving infrastructure. When a user uploads an image, the browser makes an AJAX request to my EC2 server for the Blobstore upload URL. I'm fetching this through my Django server so I can check whether the user is authenticated, and then the server needs to get the URL from the App Engine server. After the upload is complete and processed on App Engine, I need to send the upload info back to the Django server so I can build the required model instances. How can I accomplish this? I was thinking of using urllib, but how can I secure this to make sure the URLs will only be accessed by my servers and not by a web user? Maybe some sort of secret key?
Apart from the HTTPS call (which you should be making to transfer info to Django), you can go with AES encryption (use PyCrypto or any other library). It takes a secret key to encrypt your message.
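A minimal sketch using pycryptodome (a maintained fork of PyCrypto); how you distribute the shared key between your two servers is up to you:

    from Crypto.Cipher import AES
    from Crypto.Random import get_random_bytes

    key = get_random_bytes(16)   # shared secret key (16 bytes = AES-128)

    def encrypt(message: bytes):
        cipher = AES.new(key, AES.MODE_EAX)
        ciphertext, tag = cipher.encrypt_and_digest(message)
        return cipher.nonce, tag, ciphertext

    def decrypt(nonce, tag, ciphertext):
        cipher = AES.new(key, AES.MODE_EAX, nonce=nonce)
        # raises ValueError if the message was tampered with
        return cipher.decrypt_and_verify(ciphertext, tag)

    nonce, tag, ct = encrypt(b"upload complete: blob_key=abc123")
    print(decrypt(nonce, tag, ct))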
For server-to-server communication, traditional security advice would recommend some sort of IP-range restriction at the web server level for these URLs, in addition to whatever default security is in place. However, since you are making the call from one cloud provider to another, your ability to permanently control the IP address of either the client or the server may be diminished.
That said, I would recommend using a standard username/password authentication mechanism and HTTPS for transport security. Basic auth over HTTPS (https://username:password@appengine.com/) would be my recommendation. In addition, I would make sure to enforce a lockout after a certain number of failed attempts within a specific time window. This would discourage attempts to brute-force the password.
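For illustration, the calling side might look like this (the URL and credentials are placeholders):

    import requests

    # server-to-server call over HTTPS with basic auth
    resp = requests.get("https://appengine.example.com/internal/upload-url",
                        auth=("username", "password"), timeout=10)
    resp.raise_for_status()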
Depending on what web framework you are using on App Engine, there is probably already support for some or all of what I just mentioned. If you update this question with more specifics on your architecture, or open a new question with more information, we could give you a more accurate recommendation.
SDC (Google's Secure Data Connector) provides a secure tunnel from App Engine to a private network elsewhere, which could be your EC2 instance, if you run it there.
I need to write a CGI page which will act as a reverse proxy between the user and another page (an MBean). The issue is that each MBean uses a different port, and I do not know ahead of time which port the user will want to hit.
Therefore, what I need to do is the following:
A) give the user a page which allows them to choose which application they want to hit
B) spawn a reverse proxy based on the information above (which gives me the port, server, etc.)
C) have the user connect to the remote MBean page via the reverse proxy, so they never "leave" the original page
The reason for C is that the user does not have direct access to any of the internal apps and only has access to the initial port 80.
I looked at Twisted and it appears it can do the job. What I don't know is how to spawn a Twisted process from within CGI so that it can establish the connection and keep further communication within the reverse-proxy framework.
BTW, I am not married to Twisted; if there is another tool that would do the job better, I am all ears. I can't use something like mod_proxy, since the wide range of ports would make the configuration rather silly (around 1000 different proxy settings).
You don't need to spawn another process; that would complicate things a lot. Here's how I would do it, based on something similar in my current project:
Create a WSGI application, which can live behind a web server.
Create a request handler (or "view") that is accessible from any URL mapping as long as the user doesn't have a session ID cookie.
In the request handler, the user can choose the target application, and with it the hostname, port number, etc. This request handler creates a connection to the target application, for example using httplib, and assigns a session ID to it. It sets the session ID cookie and redirects the user back to the same page.
Now when your user hits the application, you can use the already-open HTTP connection to relay the query. Note that WSGI supports passing back an open file-like object as the response, including those provided by httplib, for increased performance.
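A minimal sketch of the relay step; the session-lookup helper and the target table are hypothetical stand-ins for the chooser view described above:

    import http.client   # httplib in Python 2

    TARGETS = {}   # session ID -> (host, port), filled in by the chooser view

    def proxy_app(environ, start_response):
        sid = get_session_id(environ)       # hypothetical cookie-parsing helper
        host, port = TARGETS[sid]
        conn = http.client.HTTPConnection(host, port)
        conn.request(environ["REQUEST_METHOD"], environ.get("PATH_INFO", "/"))
        resp = conn.getresponse()
        start_response(f"{resp.status} {resp.reason}", resp.getheaders())
        # HTTPResponse is a file-like iterable of bytes, so the WSGI server
        # can stream it straight back to the client
        return resp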