For a Django application, I want Heroku Scheduler to perform the following commands:
heroku redis:cli
flushall
exit (ctrl-c)
I do this myself once a day in a terminal now, but it would be a lot easier if I could schedule these commands.
My question is: is it possible to put these commands in a Python script, or do I need to work in another way? Does anyone have experience with this?
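Yes, this can be done from a plain Python script by connecting to the Redis add-on directly instead of going through heroku redis:cli. A minimal sketch, assuming the redis package is installed and the add-on exposes its connection string in the REDIS_URL config var (the Heroku Redis default):

# flush_redis.py -- clears the whole Redis store, like FLUSHALL in redis-cli
import os
import redis

# Heroku Redis sets REDIS_URL; rediss:// plans may need extra SSL options.
r = redis.from_url(os.environ["REDIS_URL"])
r.flushall()
print("FLUSHALL issued")

Heroku Scheduler can then run python flush_redis.py on a daily schedule.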
Related
I am looking to run commands that I would typically run through the Heroku CLI from a Python script, namely:
heroku pg:backups:capture
Historically, for running Heroku commands in Python I have used Heroku3.py, which works for me for things like restarting dynos. I am having difficulty finding a way to execute commands for add-ons, such as the one listed above.
Is there a way to call CLI commands through Python?
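One sketch (an assumption, not something Heroku3.py provides) is to shell out to the CLI itself with subprocess, which works for any add-on command as long as the heroku binary is installed and authenticated wherever the script runs; the app name below is a placeholder:

import subprocess

# Requires the heroku CLI on PATH and an authenticated session
# (e.g. HEROKU_API_KEY set, or a prior `heroku login`).
result = subprocess.run(
    ["heroku", "pg:backups:capture", "--app", "my-app"],  # my-app is a placeholder
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout)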
I need some help.
My setup is Django, Postgres, Celery, and Redis, all dockerized. Besides the regular user-related features, the app should scrape info in the background.
What I need is to launch the scraping function manually from a management command, like python manage.py start_some_scrape --param1 --param2 ...etc, and know that the script keeps working in the background, informing me only through logs.
At the moment the script works without Celery, and only while the terminal connection is alive, which is not useful because the scraper should run for a long time, like days.
Is Celery the right option for this?
How do I pass a task from a management command to Celery properly?
How do I prevent Celery from being blocked by the long-running task? Celery also has other tasks, related and unrelated to the scraping script. Are there threads or some other way?
Thanks for any help!
A simple way would be to just send your task to the background of your shell. With nohup it shouldn't be terminated even if your shell session expires (the 2>&1 below also captures errors in the log file):
your-pc:~$ nohup python manage.py start_some_scrape --param1 --param2 > logfile.txt 2>&1 &
Your route to achieve what you want is:
Django --> Celery(Redis) --> SQLite <-- Scrapyd <-- Scrapy
If you want to use a shared DB like Postgres, you need to patch Scrapyd, as it supports only SQLite.
There is a GitHub project, https://github.com/holgerd77/django-dynamic-scraper, that can do what you want, or at least show you how to pass Celery tasks.
Celery is an asynchronous task queue/job queue, so you don't get any kind of blocking on its side.
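To make the hand-off concrete, here is a minimal sketch of dispatching the scrape from a management command to a Celery worker; the task name, module paths, and queue name are illustrative, and sending the long scrape to its own queue is one common way to keep it from starving your other tasks:

# myapp/tasks.py
from celery import shared_task

@shared_task
def start_some_scrape(param1, param2):
    ...  # long-running scraping logic; log progress as it goes

# myapp/management/commands/start_some_scrape.py
from django.core.management.base import BaseCommand
from myapp.tasks import start_some_scrape

class Command(BaseCommand):
    help = "Queue the scraping task on Celery and return immediately."

    def add_arguments(self, parser):
        parser.add_argument("--param1")
        parser.add_argument("--param2")

    def handle(self, *args, **options):
        # Route to a dedicated queue so the long scrape can't block other tasks.
        result = start_some_scrape.apply_async(
            args=[options["param1"], options["param2"]],
            queue="scraping",
        )
        self.stdout.write(f"Queued scrape task {result.id}")

A worker dedicated to that queue (celery -A yourproject worker -Q scraping) then runs the scrape in its own process, so the command returns immediately and the other queues stay responsive.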
I am looking for help deploying my Flask app. I've already written the app and it works well. I'm currently using the following command in the directory of my Flask code:
sudo uwsgi --socket 0.0.0.0:70 --protocol=http -w AppName:app --buffer-size=32768
This is on my Amazon Lightsail instance. I have the instance linked to a static public IP, and if I navigate to the website, it works great. However, to get the command to keep running in the background even after logging out of Lightsail, I first start a screen session, execute the above command, and then detach the screen using ctrl-a d.
The problem is, if the app crashes (which is understandable since it is very large and under development), or if the command is left running for too long, the process is killed, and it is no longer being served.
I am looking for a better method of deploying a flask app on Amazon Lightsail so that it will redeploy the app in the event of a crash without any interaction from myself.
Generally you would write your own unit file for systemd to keep your application running, auto-restart it when it crashes, and start it when your instance boots.
There are many tutorials out there showing how to write such a unit file. Some examples:
Systemd: Service File Examples
Creating a Linux service with systemd
How to write startup script for Systemd?
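As an illustration only (the paths, user, and app name below are placeholders you would adjust), a unit file wrapping the uwsgi command above might look roughly like this, saved as e.g. /etc/systemd/system/myflaskapp.service:

[Unit]
Description=uWSGI server for my Flask app
After=network.target

[Service]
User=ubuntu
WorkingDirectory=/home/ubuntu/myapp
ExecStart=/usr/local/bin/uwsgi --socket 0.0.0.0:70 --protocol=http -w AppName:app --buffer-size=32768
Restart=always
RestartSec=3

[Install]
WantedBy=multi-user.target

After a sudo systemctl daemon-reload, enable it at boot with sudo systemctl enable myflaskapp and start it with sudo systemctl start myflaskapp; Restart=always gives you the crash recovery you were getting manually from screen.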
You can use pm2
Starting an application with PM2 is straightforward. It will auto-discover the interpreter to run your application depending on the script extension. This can be configured via the Ecosystem config file.
All you need is to install pm2 and then run:
pm2 start app.py
Great, this application will now run forever, meaning that if the process exits or throws an exception it will be automatically restarted. If you exit the console and connect again, you will still be able to check the application's state.
To list the applications managed by PM2, run:
pm2 ls
You can also check the logs:
pm2 logs
Keeping Processes Alive at Server Reboot
If you want to keep your application online across unexpected (or expected) server restarts, you will want to set up an init script that tells your system to boot PM2 and your applications.
It's really simple with PM2; just run this command (without sudo):
pm2 startup
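(With current PM2 versions, pm2 startup prints a platform-specific command that you run once with elevated rights; after your app is running, pm2 save persists the process list so it is resurrected at boot.)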
Pm2 Manage-Python-Processes
How do I run a python script that is in my main directory with Heroku scheduler?
Normally I run this through the command line with heroku run python script.py, but this syntax is clearly not correct for the Heroku Scheduler. Where it says "rake do_something", what should the correct syntax be to run a Python script? I've tried "python script.py" and this does not work either.
Thanks!
The Heroku Scheduler will try to run any command you give it. For Python, if you added a mytask.py to your app repo, you could have the Scheduler run:
python mytask.py
Instead of waiting for the Scheduler to run to see whether the command works as expected, you can also test-run it like this:
heroku run python mytask.py # or heroku run:detached ...
heroku logs --tail
Another way to use the Scheduler would be to extend your app with a CLI tool or a script runner that shares the app context. A popular one for Flask is Flask-Script.
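For example, with Flask-Script you would add a manage.py along these lines (module and task names are illustrative) and point the Scheduler at python manage.py mytask:

from flask_script import Manager
from myapp import app  # your Flask application

manager = Manager(app)

@manager.command
def mytask():
    # runs with the app importable and its config available
    print("task ran")

if __name__ == "__main__":
    manager.run()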
Note: the "rake" references in the Heroku Scheduler docs example is for running tasks with ruby.
I have followed the following links for running a cron job:
django - cron
custom management commands
but all of these ways work only if I run commands like:
python manage.py crontab add
or
python manage.py runcron
But I don't want to run cron jobs outside of the Django server. I mean, I want to run the Django server and have it automatically call a certain function by itself while the server is running, for example every (say) 5 minutes.
If I understand correctly, you can't use django-cron if Django isn't running, so create a bash script to check whether Django is running and, if not, start it.
#!/bin/bash
MYPROG="myprog"            # process name to look for
RESTART="myprog params"    # command used to (re)start it
PGREP="/usr/bin/pgrep"

# find myprog pid
$PGREP ${MYPROG} > /dev/null

# if not running, start it again
if [ $? -ne 0 ]
then
    $RESTART
fi
Then for your cron:
*/5 * * * * /myscript.sh
This is off the cuff and not tailored to your setup, but it's as good a starting point as any.
Let me see if I can elaborate:
On Linux you'll use cron; on Windows you'll use at. These are system services, not to be confused with Django; they are essentially scheduled tasks.
A custom management command is essentially a script that you point the cron job at.
You'll need to do some homework on what a cron job is (if you're using Linux) and see how to schedule a recurring task and how to have it issue the custom command. This is my understanding of what you're trying to do; if it's not, you need to clarify.
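To make that concrete, a minimal custom management command might look like this (the command name and body are placeholders):

# myapp/management/commands/do_every_five.py
from django.core.management.base import BaseCommand

class Command(BaseCommand):
    help = "Work that cron triggers every five minutes."

    def handle(self, *args, **options):
        # call your function here
        self.stdout.write("task ran")

and the matching crontab entry, using absolute paths because cron runs with a minimal environment:
*/5 * * * * /path/to/venv/bin/python /path/to/project/manage.py do_every_five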