My Django project calls a Python script at a scheduled time using the at scheduler. The following is executed within my models.py:
command = 'echo "python /path/to/script.py params" | /usr/bin/at -t [time] &> path/to/at.log'
status = os.system(command)
where [time] is the scheduled time.
It works perfectly when I run it within the Django dev server (I usually run as root, but it works with other users as well).
But when I deploy my application on Apache using mod_wsgi, it doesn't work: the at log shows that the job was scheduled, but it never executes.
I tried everything from changing the ownership to www-data and adjusting permissions, to making the script executable by all users, to setting it setuid root (a huge security issue).
The last thing I want to do is run Apache as the root user.
Use cron or Celery for scheduled tasks. If you need to run something as root, it would make sense to rewrite your script as a simple daemon and run that as root; you can pass commands to it pretty easily with ZeroMQ (a minimal sketch follows).
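For the daemon approach, a minimal sketch using pyzmq might look like this (the socket address and the command handling are assumptions; adapt them to your script):

# minimal command-daemon sketch using pyzmq; run this as root (e.g. from an init script)
import zmq

context = zmq.Context()
socket = context.socket(zmq.REP)
socket.bind("tcp://127.0.0.1:5555")  # address/port are assumptions

while True:
    command = socket.recv_string()   # blocks until a client sends a command
    # ... dispatch the command to your privileged script logic here ...
    socket.send_string("ok")         # a REP socket must reply before the next recv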
I am looking for help deploying my Flask app. I've already written the app and it works well. I'm currently using the following command in the directory of my Flask code:
sudo uwsgi --socket 0.0.0.0:70 --protocol=http -w AppName:app --buffer-size=32768
This is on my Amazon Lightsail instance. I have the instance linked to a static public IP, and if I navigate to the website, it works great. However, to keep the command running in the background even after logging out of Lightsail, I first start a screen session, execute the above command, and then detach the screen with ctrl-a-d.
The problem is that if the app crashes (which is understandable, since it is very large and under development), or if the command is left running for too long, the process is killed and the app is no longer being served.
I am looking for a better method of deploying a Flask app on Amazon Lightsail, one that will restart the app in the event of a crash without any interaction from me.
Generally you would write your own systemd unit file to keep your application running, restart it automatically when it crashes, and start it when your instance boots.
There are many tutorials out there showing how to write such a unit file (a minimal sketch follows the examples below). Some examples:
Systemd: Service File Examples
Creating a Linux service with systemd
How to write startup script for Systemd?
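As a starting point, a minimal unit file for the uwsgi command above might look something like this (the paths and service name are assumptions; adjust them for your instance):

# /etc/systemd/system/flaskapp.service  (hypothetical name and location)
[Unit]
Description=Flask app served by uWSGI
After=network.target

[Service]
# no User= line: binding port 70 (below 1024) needs root, matching the sudo in the question
WorkingDirectory=/home/ubuntu/AppName
ExecStart=/usr/local/bin/uwsgi --socket 0.0.0.0:70 --protocol=http -w AppName:app --buffer-size=32768
Restart=always

[Install]
WantedBy=multi-user.target

Enable it with sudo systemctl enable --now flaskapp so it starts on boot and is restarted automatically if it crashes.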
You can use pm2.
Starting an application with PM2 is straightforward. It will auto-discover the interpreter to run your application based on the script extension, and this can be configured via the ecosystem config file.
All you need is to install pm2 and then run:
pm2 start app.py
Great, this application will now run forever, meaning that if the process exits or throws an exception it will get restarted automatically. If you exit the console and connect again, you will still be able to check the application's state.
To list the applications managed by PM2, run:
pm2 ls
You can also check the logs:
pm2 logs
Keeping Processes Alive at Server Reboot
If you want to keep your application online across unexpected (or expected) server restarts, you will want to set up an init script that tells your system to boot PM2 and your applications.
It's really simple with PM2; just run this command (without sudo):
pm2 startup
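pm2 startup prints a platform-specific command; run that command, then save the current process list so PM2 restores it after a reboot:

pm2 save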
PM2: Manage Python Processes
Can anyone provide me with the steps to execute a Python script at regular intervals with cron inside a virtual machine (on Google Cloud Platform)?
I read this link https://cloud.google.com/appengine/docs/standard/python/config/cron
but still could not figure out how to get it to work.
Regarding step (1), "Create the cron.yaml file in the root directory of your application (alongside app.yaml)": does that mean we have to create both cron.yaml and app.yaml? I do not see those files. What does app.yaml contain?
If you are using a virtual machine as you suggest, then those instructions you've linked may not be relevant as they are for App Engine.
With a Compute Engine VM you should use the built-in Linux cron functionality. For these instructions, I'm going to assume that you want to execute the script every 10 minutes; you can adapt this value to your needs.
Here is how you should proceed if you want to execute a script via a cron job on a GCP virtual machine.
1) Run this command to open the crontab configuration:
crontab -e
Note: the above command will let you edit the crontab configuration for the user you are logged in as. If you would like to execute the script as the root user, prefix the command with sudo to edit the root user's crontab configuration.
2) In the cron configuration you can add an entry specifying the minute, hour, day of month, month, and day of week on which to run, followed on the same line by the command you would like to execute; in your case, a command that runs your Python script.
As an example, if you wanted to run the script every 10 minutes with python, you would add an entry such as this:
*/10 * * * * /usr/bin/python /path/to/your/python/script.py
3) Once you've saved the crontab configuration and exited the editor, cron normally picks up the changes automatically, but you can restart the cron service to make sure your changes take effect:
sudo systemctl restart cron
There is some useful information here if you would like to discover more about running cron jobs in Linux.
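If the job still doesn't seem to fire, cron logs to syslog on most distributions, so a quick check like this can confirm whether it ran (the log path below is the Debian/Ubuntu one; it varies by distro):

grep CRON /var/log/syslog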
My cron jobs work fine on localhost, but when I deploy, they are not getting added.
Here is the relevant part of my settings.py:
CRONJOBS = [
    ('*/1 * * * *', 'push.cron.my_scheduled_job')
]
In development, cron works perfectly by doing this:
python manage.py crontab add
python manage.py crontab run 2e847f370afca8feeddaa55d5094d128
But when I deploy it to the server using.., the cron jobs don't get added automatically. How do I add the cron jobs on the server?
I just managed to get this working.
First I wrote the script as a "Django Custom Management Command".
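For reference, a minimal management command sketch (the app name, file path, and command name below are placeholders):

# your_app/management/commands/my_scheduled_job.py  (hypothetical path)
from django.core.management.base import BaseCommand

class Command(BaseCommand):
    help = "Runs my scheduled job"

    def handle(self, *args, **options):
        # put the work your script used to do here
        self.stdout.write("my_scheduled_job finished")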
Then I established an SSH connection (which starts in the directory /home/ec2-user) and entered crontab -e to edit the crontab.
In the crontab file, add a line like the following (replace MY_CUSTOM_MANAGEMENT_COMMAND with your own command; the five leading schedule fields, here every 5 minutes, and the project path are placeholders for your setup):
*/5 * * * * cd /path/to/your/project && source /opt/python/run/venv/bin/activate && python manage.py MY_CUSTOM_MANAGEMENT_COMMAND
Then you're done.
You didn't mention it in your question, but there's something I would like to point out, because I've seen it in some well-known blogs: you don't need a worker tier for this; the crontab works just fine on the web server tier. Use the worker tier if you have some heavy background processing.
The cron jobs running on localhost are not related to your server. You will need to add them on the server separately, in much the same way as you do locally.
## assuming you have already activated your virtual env
python manage.py crontab add          # returns a hash
python manage.py crontab run <hash>   # paste the hash here, without quotes
You could automate this by writing and running a small bash script along the lines of the sketch below.
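For example, something like this could refresh the jobs on every deploy (the venv path is a placeholder, and this assumes django-crontab, which provides add/show/remove subcommands):

#!/bin/bash
# refresh django-crontab entries after a deploy (paths are placeholders)
source /path/to/venv/bin/activate
python manage.py crontab remove   # drop entries left over from previous deploys
python manage.py crontab add      # reinstall the jobs defined in CRONJOBS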
I would recommend using celery beat instead of crontab if you want some automation.
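As a rough sketch of the celery beat route (the broker URL and task path are assumptions; you would wrap your function as a Celery task first):

# celery.py sketch: run push's scheduled job every minute via celery beat
from celery import Celery
from celery.schedules import crontab

app = Celery('push', broker='redis://localhost:6379/0')  # broker URL is an assumption
app.conf.beat_schedule = {
    'my-scheduled-job': {
        'task': 'push.tasks.my_scheduled_job',  # hypothetical task path
        'schedule': crontab(minute='*/1'),      # same cadence as the CRONJOBS entry above
    },
}

Start a worker with an embedded beat scheduler using celery -A push worker -B.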
I have followed these links for running a cron job:
django - cron
custom management commands
but all of these approaches only work if I run commands like:
python manage.py crontab add
or
python manage.py runcron
But I don't want to run cron jobs separately from the Django server; I want to run the Django server and have it automatically call a certain function by itself while the server is running, for example every (say) 5 minutes.
If I understand correctly, you can't use django-cron if Django isn't running, so create a bash script that checks whether Django is running and, if not, starts it:
MYPROG="myprog"
RESTART="myprog params"
PGREP="/usr/bin/pgrep"
# find myprog pid
$PGREP ${MYPROG}
# if not running
if [ $? -ne 0 ]
then
$RESTART
fi
Then for your cron:
*/5 * * * * /myscript.sh
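Remember to make the script executable first, or cron won't be able to run it directly:

chmod +x /myscript.sh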
This is off the cuff and not individualized to your setup, but it's as good a start as any.
Let me see if I can elaborate:
On Linux you'll use cron; on Windows you'd use at. These are system services, not to be confused with Django; they are essentially scheduled tasks.
The custom management command is essentially the script that you point the cron job at.
You'll need to do some homework on what a cron job is (if you're using Linux), how to schedule a recurring task, and how to have it issue the custom command. This is my understanding of what you're trying to do; if it's not, you need to clarify.
I have a small problem running a Python script as a specific user account on my CentOS 6 box.
My cron.d/cronfile looks like this:
5 17 * * * reports /usr/local/bin/report.py > /var/log/report.log 2>&1
The reports account exists, and all the files that the script accesses are chowned and chgrped to reports. The Python script is chmod a+r, and it starts with #!/usr/bin/env python.
But this is not the problem. The problem is that I see nothing in the logfile. The python script doesn't even start to run! Any ideas why this might be?
If I change the user to root instead of reports in the cronfile, it runs fine. However I cannot run it as root in production servers.
If you have any questions please ask :)
Edit: if I do sudo -u reports python report.py, it works fine.
Cron jobs run with the permissions of the user whose crontab they were set up under, i.e. whatever is in the reports user's cron table will be run as the reports user.
If you're having to use sudo to get the script to run when logged in as reports, then the script likely won't run as a cron job either. Can you run this script when logged in as reports without sudo? If not, then the cron job can't either. Make sense?
Check your logs: are you getting permission errors?
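On CentOS, cron activity usually ends up in /var/log/cron, so something like this shows whether the job was even attempted (the path varies by distro):

grep reports /var/log/cron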
There are a myriad of reasons why your script would need certain privs, but an easy way to fix this is to set the cron job up under root instead of reports. The longer way is to see what exactly is requiring elevated permissions and fix that. Is it file permissions? A protected command? Maybe adding reports to certain groups would allow you to run it under reports instead of root.
Note: be ultra careful if/when you set up cron jobs as root.