I'm new to setting up a server for python apps, slowly getting my head round all the tools and config options.
I'd like to configure a testing instance on an existing server that has Plesk and Apache installed. I managed to set up the Python environment, virtualenv, and the Flask app (database included) and run it successfully on http://domain.test:5000; however, I need to remove the port number from the domain.
Gunicorn seems to be the tool for that; however, I'm not sure how to go about it, as Plesk apparently occupies port 80. So is there any way to get this configured on that server with some port hiding/masking/redirect, or do I need to move to a standalone server?
Additionally, I'd like to add an SSL certificate to that domain, but one step at a time...
The run() method on a Flask application takes a keyword argument, port:
from flask import Flask
app = Flask(__name__)
app.run(port=80)
Of course, you'll need root privileges to bind to port 80.
I have a flask app that I have cloned onto my aws ec2 instance. I can only run it using a virtual environment (which I activate by running the following):
$ python3 -m venv venv
$ source venv/bin/activate
$ pip install --upgrade pip
$ pip install flask==1.1.1
Below is my app:
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello World!"

if __name__ == "__main__":
    app.run()
It runs just fine when executing env FLASK_APP=app.py flask run. The problem is, I'd like to access the exposed routes remotely using my AWS EC2 instance's public IP or hostname. I get a timeout error whenever I try to access it, though. I think this is because I'm running the Flask app in a virtual environment, but I'm not sure. I can't find any good tutorials on how to expose this. Where am I going wrong here?
Quick n' safe solution:
You're running the development server, which isn't meant for production, so the best way to connect to this app from just your own machine is an SSH tunnel:
ssh -L 5000:localhost:5000 ec2_addr
You can then point your web browser at http://localhost:5000/ on your client machine, and requests will be tunneled to port 5000 on the EC2 instance. This is a fast way to connect to a Flask (or any) server on a remote Linux box. The tunnel is destroyed when you end that ssh session.
Longer method:
I'd like to access the exposed routes remotely using my aws ec2's public IP or hostname.
The timeout isn't because it's running in a virtual environment: You'll probably find it's because you need to assign Security Groups to your instance through the EC2 console. These allow you to open certain ports on the public IP.
See this other answer I wrote, regarding EC2 security groups.
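Note that even with the right security group, the development server binds to 127.0.0.1 by default, so it would still be unreachable from outside. A minimal sketch of binding to all interfaces (still only for testing, never for production):

```python
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello World!"

if __name__ == "__main__":
    # host="0.0.0.0" listens on every interface, so the EC2 public IP
    # can reach the app once the security group opens port 5000.
    app.run(host="0.0.0.0", port=5000)
```

Equivalently, with the CLI you can run env FLASK_APP=app.py flask run --host=0.0.0.0.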
However, be careful. You shouldn't expose the development server in this manner. There are a number of tutorials, like this one from DigitalOcean, which cover deploying a Flask app behind Gunicorn and Nginx with a Let's Encrypt SSL cert. You should find yourself in a position where your security group exposes ports 80 and 443, with requests to port 80 being redirected to the https URL by the Nginx configuration.
If this sounds like a whole load of hassle / you don't have Linux skills / you don't want to learn Linux skills, then these are common reasons people pick managed services like AWS Elastic Beanstalk, to which you can deploy your Flask app (official guide) without having to worry about server config. Heroku is another (non-AWS) service which offers such features.
It really depends on what you require / wish to gain. Stay safe!
Goal:
Use OIDC from a Flask app running in a Docker container.
Background:
I'm building a web application with Flask and want to use Keycloak to provide access. For this I use the Python library flask_oidc.
All services are run locally with a docker-compose file:
Gunicorn that runs the Flask application (port 5000)
Keycloak (port 8080)
I followed the example of https://gist.github.com/thomasdarimont/145dc9aa857b831ff2eff221b79d179a and even reduced my app to just this.
Problem:
Start all services
Navigate to the service that requires login (/private in the example)
The user is redirected to the Keycloak server, which prompts them to log in.
The user logs in.
Keycloak routes the user back to the app (/oidc_callback)
!!! The Flask app crashes with an OSError: [Errno 99] Cannot assign requested address error.
This is caused deep down in flask_oidc > oauth2client > httplib2 when trying to connect to the Keycloak server.
What I think is happening is that the library tries to open a connection with the Keycloak server, but tries to bind this to localhost. This probably fails, since inside the Docker container the applications are bound to 0.0.0.0.
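You can see the resolution issue with a quick stdlib check (a diagnostic sketch, not part of flask_oidc):

```python
import socket

# Inside the Flask container, "localhost" resolves to the container's
# own loopback address, so a token endpoint at localhost:8080 points
# back at the Flask container itself rather than at Keycloak.
print(socket.gethostbyname("localhost"))  # typically 127.0.0.1

# A docker-compose service name, by contrast, is resolved by Docker's
# embedded DNS to the other container's address, e.g.:
# socket.gethostbyname("keycloak")
```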
What I tried:
[WORKS] Run the Gunicorn/Flask app outside of a container, and run Keycloak inside a container.
This shows (to me) that all my settings and the code are fine, and that the problem lies somewhere in the interaction between Docker and flask_oidc.
Question:
Can anyone explain this? I really hope that someone has a working setup (Flask inside docker with flask_oidc), and is willing to share this.
UPDATE [5-12-2018]
I think I figured it out. I used PyOIDC to manually go through all the steps and be able to debug.
When running both services in Docker on your own computer (localhost) you get a conflict:
Users go to localhost:8080 to find Keycloak and localhost:5000 to find the App.
The Flask app runs inside the container, and localhost doesn't resolve to the host but rather to itself inside the container.
You can let Flask connect to http://keycloak/ using the container network, but then Keycloak returns all its configuration under that domain, which is bad, because to the outside world it should be localhost:8080.
Now, if you actually have domain names (for example keycloak.awesome.app and app.awesome.app) I think it'll just work fine, since it'll use an outside DNS to resolve it to an IP address, which is the correct machine.
Bonus: PyOIDC can retrieve the provider configuration from Keycloak, so no more manual typing for this. Yay!
New setup
For local development I decided to make a little setup as follows:
(1) Add to /etc/hosts:
127.0.0.1 keycloak.dev.local
127.0.0.1 app.dev.local
(2) Add to your Flask service in the docker-compose.yml:
extra_hosts: # host.docker.internal is not accepted.
- "keycloak.dev.local:<YOUR IP ADDRESS>"
- "app.dev.local:<YOUR IP ADDRESS>"
(=) Now, both you and the Flask application can access keycloak.dev.local and get proper responses!
Note that I would still prefer a nicer solution. This setup fails as soon as my IP address changes.
flask-oidc gets token endpoint configuration from the client secrets file.
I managed to make it work by making the following changes:
Created a docker network for the flask app and keycloak containers;
Edited the attributes userinfo_uri, token_uri and token_introspection_uri, replacing the hostname with the Keycloak container hostname (I'm using docker-compose; in this case the Keycloak container hostname is the service name).
Example:
{
  "web": {
    "auth_uri": "http://localhost:8080/auth/realms/REMOVED/protocol/openid-connect/auth",
    "client_id": "flask-client",
    "client_secret": "REMOVED",
    "redirect_uris": ["http://localhost:5000/oidc_callback"],
    "userinfo_uri": "http://keycloak:8080/auth/realms/REMOVED/protocol/openid-connect/userinfo",
    "token_uri": "http://keycloak:8080/auth/realms/REMOVED/protocol/openid-connect/token",
    "token_introspection_uri": "http://keycloak:8080/auth/realms/REMOVED/protocol/openid-connect/token/introspect"
  }
}
Now it connects through the docker network to exchange token info.
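If you'd rather not hand-edit the file, a small stdlib helper along these lines could do the rewrite; the key names match the example above, but treat it as a sketch rather than a finished tool:

```python
import json

# URIs the user's browser follows (auth_uri, redirect_uris) must stay
# on localhost; URIs the Flask container calls server-to-server are
# pointed at the docker-compose service name instead.
INTERNAL_KEYS = ("userinfo_uri", "token_uri", "token_introspection_uri")

def point_at_container(secrets, service="keycloak"):
    web = secrets["web"]
    for key in INTERNAL_KEYS:
        if key in web:
            web[key] = web[key].replace("localhost", service)
    return secrets

# Minimal demonstration with an inline secrets dict:
secrets = {"web": {
    "auth_uri": "http://localhost:8080/auth/realms/demo/protocol/openid-connect/auth",
    "token_uri": "http://localhost:8080/auth/realms/demo/protocol/openid-connect/token",
}}
print(json.dumps(point_at_container(secrets), indent=2))
```

In practice you would load client_secrets.json, pass it through the function, and write it back before starting the app.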
I'm following the Flask Quickstart guide and can run my web app via http://myip.com:5000.
One issue is that my web app is only accessible as long as I keep my remote SSH session open; when I sleep or shut down my PC, the website shuts down too.
How can I make it permanently available?
You need to use a regular web server, such as Apache. You shouldn't use the built-in Python server for production purposes. Here is how you do it with Apache: http://flask.pocoo.org/docs/0.10/deploying/mod_wsgi/
I'm trying to get a Crossbar.io app running on Heroku. Crossbar.io requires you to put the app's host in a config file that's used to launch the app. I've tried the following:
my-app-name.herokuapp.com: No dice. I imagine Heroku does some fancy redirection internally that prevents this from working.
$HOSTNAME: running a script that outputs the HOSTNAME and using the result in the config file doesn't work either. The HOSTNAME is a GUID that contains no useful information.
IP: I tried getting the external IP of the app, but no luck. The IP changes each time I start the app.
Is there an established way to do this on Heroku?
Also, the config requires a port, and Heroku seems to assign these dynamically. Is there any way to access the port as well (ideally before the app runs)?
For the host use 0.0.0.0. For the port number it's slightly more complicated...
When it creates a web dyno, Heroku sets a PORT environment variable with the dyno's port. To set this in Crossbar you need to create a script that reads that variable and writes it into your config wherever the port is required. Then make sure that the script returns 0 on exit and put the following in your Procfile:
web: ./your_config_helper_script && crossbar start
That runs your script first (which should get your config file ready) before starting Crossbar.
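The helper script might look roughly like this (a sketch: the .crossbar/config.json path and the workers/transports layout are assumptions about your particular config):

```python
import json
import os

def inject_port(config_path=".crossbar/config.json"):
    # Heroku exposes the assigned port in the PORT environment variable.
    port = int(os.environ.get("PORT", "8080"))
    with open(config_path) as f:
        config = json.load(f)
    # Assumes each worker's transports carry an "endpoint" dict whose
    # "port" field is what Crossbar listens on.
    for worker in config.get("workers", []):
        for transport in worker.get("transports", []):
            transport.setdefault("endpoint", {})["port"] = port
    with open(config_path, "w") as f:
        json.dump(config, f, indent=2)
```

Call inject_port() from the script referenced in your Procfile, and let it exit 0 on success so the && chain proceeds to crossbar start.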
I am trying to use the Flask development server at an office with a strict proxy that blocks the default 127.0.0.1:5000 host and port. I have tried using different hosts and ports with no success. I have also tried setting up Flask with XAMPP on Windows via mod_wsgi, with no success. I am looking for a way to keep testing Flask on my local machine with as little setup as possible, since my production environment is a PaaS and does not use the same setup as my local machine.