Does Flask have an equivalent to Rails initializers? - python

I'm building a Flask web app and would like to run some background services when the app starts up.
For example, in the web app you can add service accounts, router IPs, server IPs and such. I'd like to have some background services running network scans, WMI calls and other tasks to constantly update the database with relevant information.
In Rails I've used initializers in config/initializers to start some daemons:
# Start all the daemons to update the DB
system('script/daemon restart vs_updater.rb')
system('script/daemon restart c_fw_updater.rb')
system('script/daemon restart sans_updater.rb')
system('script/daemon restart sf_updater.rb')
Is there such an equivalent for Flask? Or should I just build separate scripts and run them in a different manner?

You can add commands to the __init__.py file in the root directory and use subprocess to run the script:
from subprocess import call
call(["script/daemon", "restart", "vs_updater.py"])
fab: http://www.fabfile.org/
subprocess: https://docs.python.org/2/library/subprocess.html
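If you just want a Flask-side equivalent of that Rails initializer, you can launch the daemons when the app package is imported. Below is a minimal sketch, not the poster's actual layout: the script paths and the plain "python" interpreter are assumptions, and subprocess.Popen is used instead of call so the web app does not block while the daemons run.
# Start the updater daemons as detached child processes when the app is created.
# Popen returns immediately, so Flask startup is not blocked.
import subprocess
from flask import Flask

app = Flask(__name__)

for script in ("script/vs_updater.py", "script/c_fw_updater.py",
               "script/sans_updater.py", "script/sf_updater.py"):
    subprocess.Popen(["python", script])
Note that if you run the development server with the reloader enabled, this module is imported twice, so you may want a guard (for example an environment-variable check) to avoid starting each daemon twice.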

Related

Flask: Call a separate create_app function when running as cli vs server

I have a flask app using the app factory pattern and a create_app function.
In addition to providing server routes, I have also defined some CLI commands using @app.cli.command.
Is it possible to use a different create_app when running as a server vs when running a CLI command? Alternatively, can any parameters be passed to create_app when running as a CLI command? I can achieve this by setting an environment variable each time I run the CLI command, e.g.
flask run
FLASK_MY_VARIABLE=x flask my_command
I can then read this environment variable and act on it. Is it possible to do this in a way that doesn't require supplying the environment variable, so that I can just run flask my_command while still either running a different create_app or having a parameter set that tells me the app was invoked as a CLI command?
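There doesn't seem to be a dedicated hook for this, but one workaround (an assumption, not an official Flask API) is to inspect sys.argv inside the factory, since the flask CLI passes the subcommand on the command line. A rough sketch:
# Crude heuristic: decide inside create_app whether we were started by
# "flask run" or by some other CLI subcommand, and configure accordingly.
import sys
from flask import Flask

def create_app():
    running_as_server = len(sys.argv) > 1 and sys.argv[1] == "run"

    app = Flask(__name__)
    app.config["RUNNING_AS_SERVER"] = running_as_server

    @app.cli.command("my-command")
    def my_command():
        print("invoked as a CLI command, not as a server")

    return app
This keeps a single create_app but lets it branch on how it was invoked; adjust the heuristic if you launch the app through a WSGI server instead of the flask command.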

How to keep a django_q cluster running on a Linux server?

I am using django_q for some scheduling and automation in my Django project.
I successfully configured all the needed stuff, but to get django_q running I have to type 'python manage.py qcluster' on the server command line, and after I close the shell session it doesn't work anymore.
The official django_q documentation says there is no need for a supervisor, but the cluster stops running.
Any ideas?
There are a few approaches you can use.
You could install the screen program to create a terminal session which stays around after logout. See also: https://superuser.com/questions/451057/keep-processes-alive-after-ssh-logout
You could use systemd to start your qcluster automatically. This has the advantage that it will start qcluster again if your server is rebooted. You'll want to write a service unit file with Type=simple.
Here's an example unit file. (You may need to adapt this somewhat.)
[Unit]
Description=qcluster daemon
[Service]
User=<django user>
Group=<django group>
WorkingDirectory=<your working dir>
Environment=PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin
ExecStart=<path to python> manage.py qcluster
Restart=always
[Install]
WantedBy=multi-user.target
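Once the unit file is saved (for example as /etc/systemd/system/qcluster.service; the exact path and name are up to you), you would typically run systemctl daemon-reload and then systemctl enable --now qcluster so the cluster starts immediately and again on every boot.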

How to make apscheduler jobs run when served with WSGI + Apache?

I am working on a Flask backend, and one of its features requires regular calls to an external API, storing some results in the database. I used APScheduler to do so.
Not having full access to the upstream server (a Docker container on Google Cloud Platform), we managed to serve the backend using Apache's mod_wsgi.
Running the backend from my debugger on my PC, the scheduled tasks seem to work, as my database is populated.
But the server does not seem to run those tasks at all: when I query the database, which should be populated, the tables are empty.
My usage of APScheduler in __init__.py is as follows:
from apscheduler.schedulers.background import BackgroundScheduler
# Some unrelated code here
scheduler = BackgroundScheduler()
import module as service
scheduler.add_job(service.job1, 'interval', minutes=10)
scheduler.add_job(service.job2, 'interval', minutes=5)
scheduler.add_job(service.job3, 'interval', minutes=1)
scheduler.start()
I'm asking whether there are additional steps I need to take for those tasks to run on the upstream server.
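One frequently mentioned pitfall (not confirmed to be the cause here) is that mod_wsgi usually runs several processes and may load the application lazily, so the scheduler either never starts or starts once per worker. A hedged sketch of a file-lock guard so that only one process runs the jobs; the lock path is an assumption:
# Use an exclusive, non-blocking file lock so that only one WSGI process
# starts the BackgroundScheduler; the others skip it silently.
import fcntl
from apscheduler.schedulers.background import BackgroundScheduler
import module as service

_lock = open("/tmp/scheduler.lock", "w")
try:
    fcntl.flock(_lock, fcntl.LOCK_EX | fcntl.LOCK_NB)
except OSError:
    pass  # another process already owns the scheduler
else:
    scheduler = BackgroundScheduler()
    scheduler.add_job(service.job1, 'interval', minutes=10)
    scheduler.add_job(service.job2, 'interval', minutes=5)
    scheduler.add_job(service.job3, 'interval', minutes=1)
    scheduler.start()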

How to switch between different config files when running Flask with uwsgi

I am deploying my Flask app with uwsgi to multiple servers.
Different servers should run the same app but read different config files. There are two config files used in my app.
One is 'config.py', read by Flask:
app.config.from_object('config')
The other is 'uwsgi_config.ini', which is used when starting uwsgi:
uwsgi uwsgi_config.ini
Since I have several servers, I must write several config files, like:
config.dev.py config.test.py config.prod.py
uwsgi_config.dev.ini uwsgi_config.test.ini uwsgi_config.prod.ini
So my question is: how can I switch between tiers when starting uwsgi, without hacking the source code every time?
I think the key thing is I should run uwsgi like this:
uwsgi uwsgi_config.dev.ini
and then Flask can read the 'dev' tier from uwsgi_config.dev.ini.
Is there a simple way to do this?
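One common pattern (an assumption here, not something from the question) is to let each uwsgi ini export an environment variable with its env option, for example env = APP_CONFIG=config.dev.py in uwsgi_config.dev.ini, and have the Flask code load whichever file that variable names:
# Pick the Flask config file from an environment variable set by the
# uwsgi ini ("env = APP_CONFIG=config.dev.py"); the variable name and
# default value are assumptions.
import os
from flask import Flask

app = Flask(__name__)
app.config.from_pyfile(os.environ.get("APP_CONFIG", "config.dev.py"))
That way a single codebase reads config.dev.py, config.test.py or config.prod.py depending only on which uwsgi ini the server was started with.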

How to debug when Flask is blocking

I was running Flask on a public-IP server (with some users).
The run command is:
app.run(host='0.0.0.0', port=80, debug=True)
But the Flask server has been blocked somewhere for the last hour (the log shows the last request was an hour ago).
So how can I debug it (figure out which Python line it's blocking on)?
I have tried:
gdb python3.4-dbg pid
but my Flask app can't run under python3.4-dbg, because:
from PIL import _imaging as core
ImportError: cannot import name '_imaging'
I believe the command is:
gdb -p pid
to attach to a running process.
Oh, I found a way: after installing python-dbg, use
gdb python pid
to attach to the Flask process, and then use py-bt, py-list and py-locals to inspect the blocking stack.
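As an aside (not part of the original answer): if you can restart the app once with a small addition, the standard-library faulthandler module gives a similar Python-level traceback without gdb. A minimal sketch:
# Register a handler at startup so that "kill -USR1 <pid>" later dumps
# every thread's Python traceback to stderr, showing where it is stuck.
import faulthandler
import signal

faulthandler.register(signal.SIGUSR1)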
