To get gunicorn under supervisord to use the virtual environment /home/ubuntu/venv, it is not necessary to seek a judicious place to put source /home/ubuntu/venv/bin/activate. It is sufficient to write:
[program:hello]
command=/home/ubuntu/venv/bin/gunicorn -b localhost:8000 hello:app
directory=/home/ubuntu/hello/
environment=PATH="/home/ubuntu/venv/bin:%(ENV_PATH)s"
in /etc/supervisor/hello.conf.
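To convince yourself the venv is really in use, a minimal check can go in hello.py (the /which-python route is hypothetical):
import sys
from flask import Flask

app = Flask(__name__)

@app.route('/which-python')
def which_python():
    # Under the config above this should be /home/ubuntu/venv/bin/python
    return sys.executable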
The next task is to bring in a whole slew of environment variables. One way is to laboriously augment the supervisord config file as follows.
[program:hello]
...
environment=PATH="/home/ubuntu/venv/bin:%(ENV_PATH)s",SECRET_KEY="%(ENV_SECRET_KEY)s",DATABASE_URI="%(ENV_DATABASE_URI)s",etc1,etc2,etc3
Is there a way to bring in the environment variables in one shot (after they're initialized in, say, ~/.profile)?
Here is a recipe:
Write the environment variables in a file /home/ubuntu/prog/.env.
export FLASK_APP=/home/ubuntu/prog/hello.py
export SECRET_KEY=ABCD
export DATABASE_PASSWORD=EFGH
Use dotenv's load_dotenv to load the environment variables.
from flask import Flask
from os.path import join, dirname
from os import environ
from dotenv import load_dotenv

# Read the .env file sitting next to this module into the process environment
dotenv_path = join(dirname(__file__), '.env')
load_dotenv(dotenv_path)

app = Flask(__name__)

@app.route('/')
def hello():
    SECRET_KEY = environ.get("SECRET_KEY")
    DATABASE_PASSWORD = environ.get("DATABASE_PASSWORD")
    return SECRET_KEY + DATABASE_PASSWORD
Write a file /etc/supervisor/hello.conf.
[program:hello]
command=/home/ubuntu/venv/bin/gunicorn -b localhost:8000 hello:app
directory=/home/ubuntu/prog
stdout_logfile=/home/ubuntu/prog/hello_out.log
stderr_logfile=/home/ubuntu/prog/hello_err.log
user=ubuntu
autostart=true
autorestart=true
stopasgroup=true
killasgroup=true
[supervisord]
logfile=/home/ubuntu/prog/hello_supervisord.log
pidfile=/tmp/supervisord.pid
Load the environment (the .env file also points FLASK_APP at the app).
source /home/ubuntu/prog/.env
The environment variables are now loaded,
$ export | grep SECRET
declare -x SECRET_KEY="ABCD"
and they will be passed to the sub-process without messing with supervisord's environment=.
Launch supervisord in the foreground to confirm all is well.
/usr/bin/supervisord -n -edebug -c /etc/supervisor/hello.conf
Confirm from another shell that all is well.
$ curl localhost:8000
ABCDEFGH
Kill supervisord. Since it's in the foreground, it's enough to CTRL-c it.
Launch supervisord as a daemon.
/usr/bin/supervisord -c /etc/supervisor/hello.conf
Keep an eye on the three log files prog/hello_out.log, prog/hello_err.log, and prog/hello_supervisord.log.
Perhaps the most important point is to avoid supervisord's environment=. SO chatter suggests that it can handle commas, quotation marks, tabs, even newlines. Empirically, this doesn't hold (at least for supervisord 3.3.5), and the documentation does not settle it one way or the other. Lines 942-943 of the supervisord source seem to be where the parsing happens, if someone cares to investigate the insufficiency of the docs.
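For reference, the documented form is comma-separated KEY="value" pairs, with quotes around any value containing non-alphanumeric characters; a sketch that stays inside that apparently-safe subset (no commas, quotes, tabs, or newlines inside the values):
[program:hello]
environment=SECRET_KEY="ABCD",DB_HOST="localhost"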
To bring in the environment variables in one shot, permanently and securely, you have to add the following lines to the .bashrc file in your $HOME directory.
To do this, open the .bashrc file in your home directory with your favorite code editor:
nano .bashrc
Add the following line somewhere in your .bashrc file:
export SECRET_KEY="YOUR SECRET KEY."
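For the new variable to be visible in your current shell (new login shells pick it up automatically), reload the file:
source ~/.bashrc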
Now, to use this SECRET_KEY in Flask, import the os module and read the variable like this.
from flask import Flask
import os

app = Flask(__name__)

@app.route('/')
def hello():
    SECRET_KEY = os.environ.get('SECRET_KEY')
    return SECRET_KEY
Hope this helps.
Working on a Heroku Django project.
Goal: run the server locally with the same code that runs in the cloud (makes sense).
Problem:
Environments differ from a linux server (heroku) to a local PC windows system.
Local environment variables differ from cloud Heroku config vars.
Heroku config vars can be setup easily using the CLI heroku config:set TIMES=2.
While setting up local env vars is a total mess.
I tried the following in cmd:
py -c "import os;os.environ['Times']=2" # To set an env var
Then ran py -c "import os;os.environ.get('Times','Not Found')" stdout: "Not Found".
After a bit of research, it appeared that such env vars are only stored temporarily, per process/session.
Solution theory: redirect os.environ to the .env file in the root of the Heroku project instead of the PC's env vars. I found the tool direnv, which is perfect for Unix-like OSs but is not available for Windows.
views.py code (runs perfectly in the cloud, misbehaves on the local machine):
import os
import requests
from django.shortcuts import render
from django.http import HttpResponse
from .models import Greeting
def index(request):
    # get() takes 2 parameters: the env var name, and a value to return if the var is not found
    times = int(os.environ.get('TIMES', 3))
    return HttpResponse('<p>' + 'Hello! ' * times + '</p>')

def db(request):
    greeting = Greeting()
    greeting.save()
    greetings = Greeting.objects.all()
    return render(request, "db.html", {"greetings": greetings})
Main question: is there a proper way to hide secrets locally on Windows and access them via os.environ['KEY']?
Another solution theory: I was wondering if a Python virtual environment has its own environment variables. If so, I could activate a venv locally without affecting the cloud, and os.environ['KEY'] would be redirected to the venv's variables. Again, it's just a theory.
You can use environment variables which you can get via os.environ['KEY'].
The same code will work on both local development and on Heroku.
On Heroku define these variables using ConfigVars heroku config:set KEY=val while locally (on Windows for example) define the same variables in an .env file (use dotenv to load them). The .env file is never committed with the source code.
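A minimal sketch of the local side, assuming python-dotenv is installed, a standard project/project/settings.py layout, and a .env file in the project root next to manage.py:
# settings.py (near the top)
import os
from pathlib import Path
from dotenv import load_dotenv

BASE_DIR = Path(__file__).resolve().parent.parent
# Loads KEY=val pairs from .env into os.environ if the file exists;
# on Heroku there is no .env, so the real config vars are used instead.
load_dotenv(BASE_DIR / '.env')

TIMES = int(os.environ.get('TIMES', 3))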
I'm investigating how to develop a decent web app with Python. Since I don't want some high-order structures to get in my way, my choice fell on the lightweight Flask framework. Time will tell if this was the right choice.
So, now I've set up an Apache server with mod_wsgi, and my test site is running fine. However, I'd like to speed up the development routine by making the site automatically reload upon any changes to .py or template files I make. I see that any change to the site's .wsgi file causes reloading (even without WSGIScriptReloading On in the Apache config file), but I still have to prod it manually (i.e., insert an extra linebreak and save). Is there some way to trigger a reload when I edit any of the app's .py files? Or am I expected to use an IDE that refreshes the .wsgi file for me?
Run the flask run CLI command with debug mode enabled, which will automatically enable the reloader. As of Flask 2.2, you can pass --app and --debug options on the command line.
$ flask --app main.py --debug run
--app can also be set to module:app or module:create_app instead of module.py. See the docs for a full explanation.
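For instance, a minimal factory sketch (the module name main is an assumption, to match the command above; it would be run with flask --app main:create_app --debug run):
# main.py
from flask import Flask

def create_app():
    # The Flask CLI calls this factory to build the app
    app = Flask(__name__)

    @app.route('/')
    def index():
        return 'Hello, World!'

    return app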
More options are available with:
$ flask run --help
Prior to Flask 2.2, you needed to set the FLASK_APP and FLASK_ENV=development environment variables.
$ export FLASK_APP=main.py
$ export FLASK_ENV=development
$ flask run
It is still possible to set FLASK_APP and FLASK_DEBUG=1 in Flask 2.2.
If you are talking about test/dev environments, then just use the debug option. It will auto-reload the flask app when a code change happens.
app.run(debug=True)
Or, from the shell:
$ export FLASK_DEBUG=1
$ flask run
http://flask.palletsprojects.com/quickstart/#debug-mode
In test/development environments
The werkzeug debugger already has an 'auto reload' function available that can be enabled by doing one of the following:
app.run(debug=True)
or
app.debug = True
You can also use a separate configuration file to manage all your setup, if need be. For example, I use settings.py with a DEBUG = True option. Importing this file is easy too:
app.config.from_object('application.settings')
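The referenced settings module can be as small as this sketch (matching the 'application.settings' dotted path above):
# application/settings.py
DEBUG = True  # turns on the werkzeug reloader and debugger when app.run() is called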
However this is not suitable for a production environment.
Production environment
Personally I chose Nginx + uWSGI over Apache + mod_wsgi for a few performance reasons but also the configuration options. The touch-reload option allows you to specify a file/folder that will cause the uWSGI application to reload your newly deployed flask app.
For example, your update script pulls your newest changes down and touches the reload_me.txt file. Your uWSGI ini script (which is kept up by Supervisord, obviously) has this line in it somewhere:
touch-reload = '/opt/virtual_environments/application/reload_me.txt'
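A hypothetical update script would then end with something like:
git pull && touch /opt/virtual_environments/application/reload_me.txt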
I hope this helps!
If you're running with uwsgi, look at the Python auto-reload option:
uwsgi --py-autoreload 1
Example uwsgi-dev-example.ini:
[uwsgi]
socket = 127.0.0.1:5000
master = true
virtualenv = /Users/xxxx/.virtualenvs/sites_env
chdir = /Users/xxx/site_root
module = site_module:register_debug_server()
callable = app
uid = myuser
chmod-socket = 660
log-date = true
workers = 1
py-autoreload = 1
site_root/__init__.py
def register_debug_server():
    from flask import Flask
    from werkzeug.debug import DebuggedApplication
    app = Flask(__name__)
    app.debug = True
    # Wrap the app so the interactive debugger (with eval) works under uwsgi
    app = DebuggedApplication(app, evalex=True)
    return app
Then run:
uwsgi --ini uwsgi-dev-example.ini
Note: This example also enables the debugger.
I went this route to mimic production as closely as possible with my nginx setup. Simply running the Flask app with its built-in web server behind nginx resulted in a bad gateway error.
For Flask 1.0 until 2.2, the basic approach to hot re-loading is:
$ export FLASK_APP=my_application
$ export FLASK_ENV=development
$ flask run
you should use FLASK_ENV=development (not FLASK_DEBUG=1)
as a safety check, you can run flask run --debugger just to make sure it's turned on
the Flask CLI will now automatically read things like FLASK_APP and FLASK_ENV if you have an .env file in the project root and have python-dotenv installed
app.run(use_reloader=True)
Passing use_reloader=True makes the development server restart whenever a source file changes, so a page reload picks up the latest code.
I got a different idea:
First:
pip install python-dotenv
Install the python-dotenv module, which reads local preferences for your project environment.
Second:
Add a .flaskenv file in your project directory with the following content:
FLASK_ENV=development
It's done!
With this config for your Flask project, when you run flask run, the terminal output will show that the environment is development and that the reloader is active.
And when you edit a file, just save the change; the server restarts, and auto-reload is there for you.
With more explanation:
Of course you can manually run export FLASK_ENV=development every time you need it. But using a separate configuration file to handle the actual working environment seems like a better solution, so I strongly recommend this method.
Use this method:
app.run(debug=True)
It will auto-reload the flask app when a code change happens.
Sample code:
from flask import Flask
app = Flask(__name__)
#app.route("/")
def index():
return "Hello World"
if __name__ == '__main__':
app.run(debug=True)
Well, if you want to save time when reloading the webpage after every change, you can try the keyboard shortcut Ctrl + R to reload the page quickly.
From the terminal you can simply say
export FLASK_APP=app_name.py
export FLASK_ENV=development
flask run
or in your file
if __name__ == "__main__":
    app.run(debug=True)
Enable the reloader in Flask 2.2:
flask run --reload
Flask applications can optionally be executed in debug mode. In this mode, two very convenient modules of the development server called the reloader and the debugger are enabled by default.
When the reloader is enabled, Flask watches all the source code files of your project and automatically restarts the server when any of the files are modified.
By default, debug mode is disabled. To enable it, set a FLASK_DEBUG=1 environment variable before invoking flask run:
(venv) $ export FLASK_APP=hello.py   (on Windows: > set FLASK_APP=hello.py)
(venv) $ export FLASK_DEBUG=1        (on Windows: > set FLASK_DEBUG=1)
(venv) $ flask run
* Serving Flask app "hello"
* Forcing debug mode on
* Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
* Restarting with stat
* Debugger is active!
* Debugger PIN: 273-181-528
Having a server running with the reloader enabled is extremely useful during development, because every time you modify and save a source file, the server automatically restarts and picks up the change.
To achieve this in PyCharm set 'Environment Variables' section to:
PYTHONUNBUFFERED=1;
FLASK_DEBUG=1
For Flask 'run / debug configurations'.
To get fast automatic reloads in the browser:
pip install livereload
from flask import Flask
from livereload import Server

app = Flask(__name__)

if __name__ == '__main__':
    # Wrap the Flask WSGI app; livereload serves it and refreshes the browser on changes
    server = Server(app.wsgi_app)
    server.serve()
Then start your server again, e.g. if your .py file is app.py:
python app.py
I believe a better solution is to set the app configuration. For me, I built the tool and then pushed it to a development server where I had to set up a WSGI pipeline to manage the Flask web app. I had some data being updated in a template and I wanted it to refresh every X minutes (a WSGI deployment of the Flask site through Apache2 on Ubuntu 18). In your app.py, or whatever your main app is, add the app.config.update call below with TEMPLATES_AUTO_RELOAD=True; any templates that are updated on the server will then be reflected in the browser. There is some great documentation on the Flask site for configuration handling.
app = Flask(__name__)
app.config.update(
TEMPLATES_AUTO_RELOAD=True
)
On my production server, I've set environment variables both inside and outside my virtualenv (only because I don't understand the issue going on), including a variable HELLO_WORLD_PROD, which I've set to '1'. In the Python interpreter, both inside and outside my venv, os.environ.get('HELLO_WORLD_PROD') == '1' returns True. In my settings folder, I have:
import os
if os.environ.get('HELLO_WORLD_PROD') == '1':
from hello_world.settings.prod import * # noqa
else:
from hello_world.settings.dev import * # noqa
Both prod.py and dev.py inherit from base.py; in base.py, DEBUG = False, and only in dev.py does DEBUG = True.
However, when I trigger an error through the browser, I'm seeing the debug page.
I'm using nginx and gunicorn. Why is my application importing the wrong settings file?
You can see my gunicorn conf here
Thanks in advance for your patience!
I was using sudo service gunicorn start to run gunicorn. The problem is that service strips all environment variables except TERM, PATH, and LANG. To fix it, I added the environment variables to the exec line in my gunicorn.conf using the --env flag, like exec env/bin/gunicorn --env HELLO_WORLD_PROD=1 --env DB_PASSWORD=secret etc.
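A sketch of the resulting upstart job, with hypothetical paths and names:
# /etc/init/gunicorn.conf
description "gunicorn for hello_world"
start on runlevel [2345]
stop on runlevel [!2345]
respawn
chdir /srv/hello_world
exec env/bin/gunicorn --env HELLO_WORLD_PROD=1 --env DB_PASSWORD=secret hello_world.wsgi:application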
In my __init__.py file I am trying to set:
import os
app.config['SECRET_KEY'] = os.environ['MY_KEY']
but am getting the error:
raise KeyError(key)
KeyError: 'MY_KEY'
When I run printenv, the variable MY_KEY is present.
Also in IDLE I tried running:
import os
print(os.environ['MY_KEY'])
and I get the correct output.
I set MY_KEY in /etc/profile using:
export MY_KEY="1234example_secret_key"
I did restart my computer after making the change to the profile file.
Would anyone know what the issue may be?
Thanks for your help.
If you are running your process under supervisor and/or gunicorn (judging by your comments, you are), you can use supervisor's environment config param.
[program:my_app]
...
environment = MY_KEY="ABCD",MY_KEY2="EFG"
You can also use gunicorn --env flag or env config file param.
gunicorn -b 127.0.0.1:8000 --env MY_KEY=ABCD test:app
The downside of the second option is that your key will be visible to anyone who has access to your machine's process list.
The best approach is to use the app.config.from_envvar function and store your config in a machine-specific settings location (this could be on an encrypted filesystem). In this case your code would look like this:
app = Flask(__name__)
...
app.config.from_envvar('MACHINE_SPECIFIC_SETTINGS')
Your MACHINE_SPECIFIC_SETTINGS env variable could point to the file that will have the MY_KEY value.
MACHINE_SPECIFIC_SETTINGS=/path/to/config.py
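A sketch of how the pieces fit together (the config path is hypothetical):
# /secure/path/config.py -- machine-specific, kept out of version control
MY_KEY = '1234example_secret_key'
Then, with export MACHINE_SPECIFIC_SETTINGS=/secure/path/config.py in the environment:
from flask import Flask

app = Flask(__name__)
app.config.from_envvar('MACHINE_SPECIFIC_SETTINGS')
secret = app.config['MY_KEY']  # only uppercase names are loaded from the file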
By following this tutorial, I have now a Celery-Django app that is working fine if I launch the worker with this command:
celery -A myapp worker -n worker1.%h
In my Django settings.py, I set all the parameters for Celery (IP of the message broker, etc.). Everything is working well.
My next step now is to run this app as a daemon. So I have followed this second tutorial and everything is simple, except that now my Celery parameters included in settings.py are not loaded. For example, the message broker IP is set to 127.0.0.1, but in my settings.py I set it to another IP address.
In the tutorial, they say:
make sure that the module that defines your Celery app instance also sets a default value for DJANGO_SETTINGS_MODULE as shown in the example Django project in First steps with Django.
So I made it sure. I had in /etc/default/celeryd this:
export DJANGO_SETTINGS_MODULE="myapp.settings"
Still not working... So I also added this line in /etc/init.d/celeryd; again, not working.
I don't know what to do anymore. Does someone have a clue?
EDIT:
Here is my celery.py:
from __future__ import absolute_import
import os
from django.conf import settings
from celery import Celery
# set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myapp.settings')
app = Celery('myapp')
# Using a string here means the worker will not have to
# pickle the object when using Windows.
app.config_from_object('django.conf:settings')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)
@app.task(bind=True)
def debug_task(self):
    print('Request: {0!r}'.format(self.request))
EDIT #2:
Here is my /etc/default/celeryd:
# Names of nodes to start
# most will only start one node:
CELERYD_NODES="worker1.%h"
# Absolute or relative path to the 'celery' command:
CELERY_BIN="/usr/local/bin/celery"
# App instance to use
# comment out this line if you don't use an app
CELERY_APP="myapp"
# Where to chdir at start.
CELERYD_CHDIR="/home/ubuntu/myapp-folder/"
# Extra command-line arguments to the worker
CELERYD_OPTS=""
# %N will be replaced with the first part of the nodename.
CELERYD_LOG_FILE="/var/log/celery/%N.log"
CELERYD_PID_FILE="/var/run/celery/%N.pid"
# Workers should run as an unprivileged user.
# You need to create this user manually (or you can choose
# a user/group combination that already exists, e.g. nobody).
CELERYD_USER="ubuntu"
CELERYD_GROUP="ubuntu"
# If enabled pid and log directories will be created if missing,
# and owned by the userid/group configured.
CELERY_CREATE_DIRS=1
# Name of the projects settings module.
export DJANGO_SETTINGS_MODULE=myapp.settings
export PYTHONPATH=$PYTHONPATH:/home/ubuntu/myapp-folder
All the answers here could be part of the solution, but in the end it was still not working.
But I finally succeeded in making it work.
First of all, in /etc/init.d/celeryd, I have changed this line:
CELERYD_MULTI=${CELERYD_MULTI:-"celeryd-multi"}
by:
CELERYD_MULTI=${CELERYD_MULTI:-"celery multi"}
The first one was tagged as deprecated, which could be the problem.
Moreover, I put this as option:
CELERYD_OPTS="--app=myapp"
And don't forget to export some environment variables:
# Name of the projects settings module.
export DJANGO_SETTINGS_MODULE="myapp.settings"
export PYTHONPATH="$PYTHONPATH:/home/ubuntu/myapp-folder"
With all of this, it's now working on my side.
The problem is most likely that celeryd can't find your Django settings file because myapp.settings isn't on the $PYTHONPATH when the application runs.
From what I recall, Python will look in the $PYTHONPATH as well as the local folder when importing modules. When celeryd runs, it likely checks the path for a module named app, doesn't find it, then looks in the current folder for a folder app with an __init__.py (i.e. a Python package).
I think that all you should need to do is add this to your /etc/default/celeryd file:
export PYTHONPATH="$PYTHONPATH:path/to/your/app"
The method below does not help to run celeryd; rather, it runs the celery worker as a service that is started at boot time.
Commands like sudo service celery status also work.
celery.conf
# This file sits in /etc/init
description "Celery for example"
start on runlevel [2345]
stop on runlevel [!2345]
#Send KILL after 10 seconds
kill timeout 10
script
#project(working_ecm) and Virtualenv(working_ecm/env) settings
chdir /home/hemanth/working_ecm
exec /home/hemanth/working_ecm/env/bin/python manage.py celery worker -B -c 2 -f /var/log/celery-ecm.log --loglevel=info >> /tmp/upstart-celery-job.log 2>&1
end script
respawn
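Once the file is installed as /etc/init/celery.conf, the usual service commands apply:
sudo service celery start
sudo service celery status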
In your second tutorial they set the django_settings variable to:
export DJANGO_SETTINGS_MODULE="settings"
This could be a reason why your settings are not found, given that the daemon changes to the directory
"/home/ubuntu/myapp-folder/"
Then you defined your app as "myapp" and said the settings are in "myapp.settings".
This could lead to it searching for the settings file in
"/home/ubuntu/myapp-folder/myapp/myapp/settings"
So my suggestion is to remove the "myapp." prefix from the DJANGO_SETTINGS_MODULE variable, and don't forget the quotation marks, as shown below.
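Concretely, that suggestion amounts to this in /etc/default/celeryd:
export DJANGO_SETTINGS_MODULE="settings"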
I'd like to add an answer for anyone stumbling on this more recently.
I followed the getting-started First Steps guide to a tee with Celery 4.4.7, as well as the Daemonization tutorial, without luck.
My initial issue:
celery -A app_name worker -l info works without issue (so the actual celery configuration is OK).
I could start celeryd as a daemon and the status command would show OK, but it couldn't receive tasks. Checking the logs, I saw the following:
[2020-11-01 09:33:15,620: ERROR/MainProcess] consumer: Cannot connect to amqp://guest:**@127.0.0.1:5672//: [Errno 111] Connection refused.
This was an indication that celeryd was not connecting to my broker (redis). Given that CELERY_BROKER_URL was already set in my configuration, this meant my celery app settings were not being pulled in for the daemon process.
I tried sudo C_FAKEFORK=1 sh -x -l -E /etc/init.d/celeryd start to see if any of my celery settings were pulled in, and I noticed that app was set to the default (not the app name specified as CELERY_APP in /etc/default/celeryd).
Since celery -A app_name worker -l info worked, I fixed the issue by exporting CELERY_APP in /etc/default/celeryd, instead of just setting the variable per the documentation.
TL;DR
If celery -A app_name worker -l info works (replace app_name with what you've defined in the Celery first steps guide), and sudo C_FAKEFORK=1 sh -x -l -E /etc/init.d/celeryd start does not show your celery app settings being pulled in, add the following to the end of your /etc/default/celeryd:
export CELERY_APP="app_name"