I've been trying to understand how it's possible to launch a flask project by running flask run.
What actually happens behind the scenes? How is it possible to actually launch the app using the flask command? I got as far as understanding that it is based on the Click library (https://palletsprojects.com/p/click/) but I still don't understand what happens step by step (the internals).
If someone could explain that would be appreciated. Thank you!
To launch a flask application, you can use the command flask run. But how does it work?
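In rough terms, the flask executable installed alongside Flask is a small Click program: Flask's packaging registers a console-script entry point that calls its Click command group, and run is one of the commands registered on that group. As a very loose sketch (this is not Flask's actual source; it assumes the module named in FLASK_APP exposes an attribute called app):

# Toy sketch of a Click-based CLI in the spirit of "flask run".
# Not Flask's real code; it assumes the FLASK_APP module exposes "app".
import importlib
import os

import click


@click.group()
def cli():
    """Stand-in for Flask's command group."""


@cli.command()
@click.option("--host", default="127.0.0.1")
@click.option("--port", default=5000, type=int)
def run(host, port):
    """Import the module named in FLASK_APP and start its dev server."""
    target = os.environ.get("FLASK_APP", "app")
    if target.endswith(".py"):
        target = target[:-3]
    module = importlib.import_module(target)
    module.app.run(host=host, port=port)  # the real CLI hands off to Werkzeug


if __name__ == "__main__":
    cli()

The real CLI locates the application object much more carefully and serves it through Werkzeug's development server, but the overall shape is the same, which is why the FLASK_APP environment variable described next matters.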
Before running any Flask application, Flask needs to be told how to import it by setting certain environment variables. In your terminal, run these commands in this order:
(venv) $ export FLASK_APP=app.py
(venv) $ flask run
What is app.py? This is the entry point of your application. In this file, you probably have:
from app import app
# Here, your application instance is being imported.
# The application instance is where you defined your Flask app.
Alternatively, this file may have:
if __name__ == '__main__':
    app.run(host='0.0.0.0', debug=True)
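For context, here is what a minimal application instance could look like, assuming your project keeps the Flask object in a package named app (the names are illustrative and may differ in your project):

# app/__init__.py -- a minimal application instance. This "app" object is
# what "from app import app" pulls in above, and ultimately what FLASK_APP
# points flask run at.
from flask import Flask

app = Flask(__name__)


@app.route("/")
def index():
    return "Hello, World!"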
After the server initializes, it will wait for client connections. The output from flask run indicates that the server is running on IP address 127.0.0.1, which is always the address of your own computer. This address is so common that it also has a simpler name that you may have seen before: localhost.
Applications deployed on production web servers typically listen on port 443, or sometimes 80 if they do not implement encryption, but access to these ports requires administrator rights. Since this application is running in a development environment, Flask uses the freely available port 5000.
To access your application in your web browser, paste this URL:
http://localhost:5000/
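If you prefer to check from a script instead of the browser, a quick sanity check (assuming the server is running on the default address and port) is:

# Quick check that the development server is answering on the default
# address and port.
from urllib.request import urlopen

with urlopen("http://localhost:5000/") as response:
    print(response.status)      # expect 200
    print(response.read(200))   # first bytes of the response body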
There are other environment variables that flask can use:
FLASK_ENV (sets your environment, either to development or production)
FLASK_DEBUG (enables/disables debugging)
Before running flask run, you will need to add the other environment variables in your terminal:
(venv) $ export FLASK_APP=app.py
(venv) $ export FLASK_ENV=development
(venv) $ export FLASK_DEBUG=True
(venv) $ flask run
Now you have specified that your application is running on a development server, and you have enabled Flask's debugging features.
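If you want to confirm from inside the application that these variables were picked up, a small illustrative route (using the Flask 1.x app.env and app.debug attributes; the route name is made up) could look like:

# Illustrative route for checking that FLASK_ENV and FLASK_DEBUG were
# picked up: app.env mirrors FLASK_ENV and app.debug mirrors FLASK_DEBUG.
from flask import Flask, jsonify

app = Flask(__name__)


@app.route("/runtime-config")
def runtime_config():
    return jsonify(env=app.env, debug=app.debug)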
Note that every new terminal session starts without these variables, so you will need to export them again before running flask run.
Starting with version 1.0, Flask allows you to register environment variables that you want to be automatically imported when you run the flask command.
It is recommended that you store flask environment variables (these are the ones needed to run your application) in a file called .flaskenv in your project's root directory.
(venv) $ touch .flaskenv
Then update this file with your environment variables:
# .flaskenv
FLASK_APP=app.py
FLASK_ENV=development
FLASK_DEBUG=True
To implement this option, you will need the package python-dotenv:
(venv)$ pip3 install python-dotenv
This is optional, but it is much easier than having to remember and re-export every environment variable in each terminal session.
To run your flask app using this option, you will only need:
(venv)$ flask run
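python-dotenv can also be used directly inside the application for settings that are not Flask CLI variables; for example (the .env file and the SECRET_KEY name here are just illustrations):

# Optional: besides letting the flask CLI read .flaskenv automatically,
# python-dotenv can load a .env file for application-level settings.
import os

from dotenv import load_dotenv

load_dotenv()  # copies key=value pairs from a .env file into os.environ

SECRET_KEY = os.environ.get("SECRET_KEY", "dev-only-fallback")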
Related
My Flask web application runs using nginx and gunicorn. I use supervisor to keep my application running in the background. I have always updated my files from Windows PowerShell using the SCP command. After I moved the newly edited files, which already exist on my Ubuntu server, to the server, I ran sudo supervisorctl reload to restart the Flask app and see the changes. But this time the Flask app did not start and I only get 502 Bad Gateway. No matter how many times I reload supervisor or restart nginx, I only get the 502 error.
The issue turned out to be a missing (not installed) module and a typo in a configuration file.
I am attempting to run the tutorial from the Flask RESTful documentation but am running into an error when running the Resourceful routing code. I have copied the code for it verbatim, but when I attempt to run the code, I run into the situation below:
export FLASK_APP=api.py
flask run
curl http://localhost:5000/todo1 -d "data=Remember the milk" -X PUT
And the return is:
{"message": "Internal Server Error"}
Does anyone have a suggestion for what's happening here? Any insight would be appreciated.
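For reference, the resourceful-routing example from the Flask-RESTful quickstart looks roughly like the following (paraphrased, so it may not match the docs character for character):

# Roughly the Flask-RESTful "resourceful routing" quickstart example.
from flask import Flask, request
from flask_restful import Api, Resource

app = Flask(__name__)
api = Api(app)

todos = {}


class TodoSimple(Resource):
    def get(self, todo_id):
        return {todo_id: todos[todo_id]}

    def put(self, todo_id):
        todos[todo_id] = request.form["data"]
        return {todo_id: todos[todo_id]}


api.add_resource(TodoSimple, "/<string:todo_id>")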
When using the flask run command, production mode is the default setting. If Flask encounters an error while running in production mode, it will automatically suppress any errors and only return a generic {"message": "Internal Server Error"}. However, for development this can be very annoying as it makes it difficult to determine the root cause of an error.
Flask has a built-in development mode that, among other things, disables this. Since you are using the Flask CLI, the easiest thing to do is to set an environment variable.
# Mac/Linux
$ export FLASK_ENV=development
# Windows (cmd)
> set FLASK_ENV=development

# To undo this later
# Mac/Linux
$ unset FLASK_ENV
# Windows (cmd)
> set FLASK_ENV=
This won't automatically fix the bug you're experiencing, but what it will do is allow you to see why you are encountering the error, which is usually more important anyway.
You can read more about the other options available in development mode in the Flask documentation.
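If you start the script directly with python api.py instead of flask run, the same effect can be achieved in code; a small sketch (assuming your file is called api.py and already defines app):

# api.py -- sketch: enabling the debugger when running the script directly
# with "python api.py". With debug=True, unhandled exceptions produce a
# full traceback instead of the generic 500 message.
from flask import Flask

app = Flask(__name__)

# ... your Flask-RESTful resources would be registered here ...

if __name__ == "__main__":
    app.run(debug=True)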
I have a flask app that I have cloned onto my AWS EC2 instance. I can only run it using a virtual environment (which I set up and activate by running the following):
$ python3 -m venv venv
$ source venv/bin/activate
$ pip install --upgrade pip
$ pip install flask==1.1.1
Below is my app:
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello World!"

if __name__ == "__main__":
    app.run()
It runs just fine when executing env FLASK_APP=app.py flask run. The problem is, I'd like to access the exposed routes remotely using my AWS EC2 instance's public IP or hostname. I get a timeout error whenever I try to access it, though. I think this is because I'm running the flask app in a virtual environment, but I'm not sure. I can't find any good tutorials on how to expose this. Where am I going wrong here?
Quick n' safe solution:
As you're running the development server, which isn't meant for production, I would say the best way to connect to this app from just your own machine is with an SSH tunnel:
ssh -L 5000:localhost:5000 ec2_addr
You can then point your web-browser to http://localhost:5000/ on your client machine, which will be tunneled to port 5000 on the EC2 instance. This is a fast way to connect to a Flask (or any) server on a remote Linux box. The tunnel is destroyed when you stop that ssh session.
Longer method:
I'd like to access the exposed routes remotely using my aws ec2's public IP or hostname.
The timeout isn't because it's running in a virtual environment: You'll probably find it's because you need to assign Security Groups to your instance through the EC2 console. These allow you to open certain ports on the public IP.
See this other answer I wrote, regarding EC2 security groups.
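One more thing worth checking (this is an assumption about your setup, not something visible in your post): the development server binds to 127.0.0.1 by default, so even with the right security group it will not answer on the instance's public address unless you tell it to listen on all interfaces, either with flask run --host=0.0.0.0 or in code:

# Sketch: the same hello-world app, but with the development server
# listening on all interfaces instead of only 127.0.0.1.
from flask import Flask

app = Flask(__name__)


@app.route("/")
def hello():
    return "Hello World!"


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)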
However, be careful. You shouldn't expose the development server in this manner. There are a number of tutorials, like this one from DigitalOcean, which cover deploying a Flask app behind gunicorn and nginx with a Let's Encrypt SSL cert. You should find yourself in a position where your security group exposes ports 80 and 443, with requests to port 80 being redirected to the https URL by the nginx configuration.
If this sounds like a whole load of hassle / you don't have Linux skills / you don't want to learn Linux skills, then these are common reasons people pick managed services like AWS Elastic Beanstalk, to which you can deploy your flask app (official guide), without having to worry about server config. Heroku is another (non AWS) service which offers such features.
It really depends on what you require / wish to gain. Stay safe!
I wrote a server application in Python with Flask and now I would like to get it up and running on a virtual machine I have set up. Thus, I would really appreciate guidance in two areas.
How do I get a server set up so that it is perpetually running and other computers can access it? The computers can be on the same network, so I don't have to worry about a domain name or anything. I am just looking for multiple devices to be able to access it. I am currently able to run the server on my local machine and everything works just fine.
I have my virtual linux machine set up remotely, so I SSH into it and do everything from command line, but I am a bit lost as to how to do the aforementioned stuff from the command line.
Any guidance/help is much appreciated! The web-searching I have done hasn't pointed me in the right direction. I apologize if any of my terminology was off (if so, please feel free to correct me so I learn!). Thank you!
Use systemd on Ubuntu (unit files live in /etc/systemd/system) for a simple setup (probably not ideal for a production setup, though).
I do this sometimes for Python Flask apps that I'm prototyping. First, put your application code in /opt/my-app. I usually just cd /opt and git clone a repo there. Then, create a file called /etc/systemd/system/my-app.service. In that file, add the following:
[Unit]
Description=My App daemon
After=network.target postgresql.service
Wants=postgresql.service

[Service]
EnvironmentFile=/etc/sysconfig/my-app
# This is where your app lives
WorkingDirectory=/opt/my-app/
User=root
Group=root
Type=simple
# This starts your app
ExecStart=/usr/bin/python server.py
Restart=always
RestartSec=5s

[Install]
WantedBy=multi-user.target
Next, paste any environment variables you have into a file called /etc/sysconfig/my-app like:
DB_HOST=localhost
DB_USER=postgres
DB_PASSWORD=postgres
DB_NAME=postgres
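For completeness, a hypothetical server.py matching the ExecStart line above might just read those variables and start the app (everything here is a placeholder, not your actual code):

# server.py -- hypothetical app launched by the ExecStart line above.
# It reads the settings that systemd injects from the EnvironmentFile.
import os

from flask import Flask

app = Flask(__name__)

DB_HOST = os.environ.get("DB_HOST", "localhost")
DB_NAME = os.environ.get("DB_NAME", "postgres")


@app.route("/")
def index():
    return "Configured for database %s on %s" % (DB_NAME, DB_HOST)


if __name__ == "__main__":
    # Bind to all interfaces so other machines on the network can reach it.
    app.run(host="0.0.0.0", port=5000)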
Then you can do:
service my-app start
service my-app stop
service my-app restart
and then you can hit the app running on the server's IP and port (just as if you had run python app.py or python server.py). To check the logs for your daemon process, if it doesn't seem to work out, you can run:
journalctl -u my-app -e
In production, I'm not sure this is the best setup; it's probably better to look into something like nginx. But I do this for prototypes all the time and it's pretty great.
I am trying to deploy a Django app on openshift (python3.3, django1.7, Openshift 2.1).
I need to set the OPENSHIFT_PYTHON_WSGI_APPLICATION to point to an alternative wsgi.py location.
I have tried using the pre_build script to set the variable, using the following commands:
export OPENSHIFT_PYTHON_WSGI_APPLICATION="$OPENSHIFT_REPO_DIR"geartest4/wsgi.py
echo "-------> $OPENSHIFT_PYTHON_WSGI_APPLICATION"
I can see during the git push that the pre_build script sets the variable correctly. The echo shows the correct path as expected. However wsgi.py does not launch and I get:
CLIENT_ERROR: WSGI application was not found
When I immediately ssh into the gear and check the environment variable, I see OPENSHIFT_PYTHON_WSGI_APPLICATION="", i.e. the variable is not set.
If I set the variable manually from my workstation using rhc set-env OPENSHIFT_PYTHON_WSGI_APPLICATION=/var/lib/openshift/gear_name/bla/bla then the variable sticks, the wsgi server launches, and the app works fine.
The problem is that I don't want to use rhc set-env because that means I have to hardwire the gear name in the path. This becomes a problem when I want to do scaling with multiple gears.
Does anyone have any ideas on how to set the variable and make it stick?
The environment variable OPENSHIFT_PYTHON_WSGI_APPLICATION can be set to a relative path like this:
rhc env set OPENSHIFT_PYTHON_WSGI_APPLICATION=wsgi/wsgi.py
The openshift cartridge openshift-django17 by jfmatth uses this approach, too.
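For reference, the wsgi/wsgi.py that the relative path points at only needs to expose a module-level application object. For a Django project that is roughly the following (the settings module name is an assumption based on the geartest4 directory in the question):

# wsgi/wsgi.py -- what OPENSHIFT_PYTHON_WSGI_APPLICATION points at. The
# Python cartridge expects a module-level "application" callable. The
# settings module name below is an assumption, not taken from the question.
import os

from django.core.wsgi import get_wsgi_application

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "geartest4.settings")

application = get_wsgi_application()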