My cron jobs work fine on localhost, but when I deploy, they are not added.
My settings.py contains:
CRONJOBS = [
('*/1 * * * *', 'push.cron.my_scheduled_job')
]
In development, the cron jobs work perfectly when I do this:
python manage.py crontab add
python manage.py crontab run 2e847f370afca8feeddaa55d5094d128
But when I deploy it to the server using.. the cron jobs don't get added automatically. How do I add the cron jobs on the server?
I just managed to get this working.
First I wrote the script as a "Django Custom Management Command".
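For reference, a minimal sketch of such a command might look like this; the app name "push" and the command name are assumptions based on the question:
# push/management/commands/my_scheduled_job.py
# A minimal sketch; the app name "push" and the command name are assumptions.
from django.core.management.base import BaseCommand


class Command(BaseCommand):
    help = "Does the work that used to run via push.cron.my_scheduled_job"

    def handle(self, *args, **options):
        # ... put the actual job logic here ...
        self.stdout.write("my_scheduled_job finished")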
Then I established an SSH connection, which starts in the directory "/home/ec2-user", and ran "crontab -e" to edit the crontab.
In the crontab file, add a line like the following (replace MY_CUSTOM_MANAGEMENT_COMMAND with your own command, and adjust the schedule and project path to your setup; a crontab entry needs the five schedule fields in front of the command):
*/1 * * * * cd /path/to/your/project && source /opt/python/run/venv/bin/activate && python manage.py MY_CUSTOM_MANAGEMENT_COMMAND
Then you're done.
You didn't mention this in your question, but there's something I would like to point out, because I've seen it in some well-known blogs: you don't need a worker tier for this; the crontab works just fine in the web server tier. Use the worker tier if you have some heavy background processing.
Your cron jobs running on localhost are not related to your server. You will need to run them separately on the server, in much the same manner as you do locally.
# I am assuming that you have already activated your virtual env
python manage.py crontab add          # returns a hash
python manage.py crontab run HASH     # put the hash here, without quotes
You could automate this by writing and running some sort of script.
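For example, here is a minimal sketch of such a script, written in Python rather than bash; the interpreter and manage.py paths are assumptions you would need to adapt:
# deploy_cron.py -- hypothetical helper, run once after each deploy.
import subprocess

PYTHON = "/path/to/venv/bin/python"    # assumed virtualenv interpreter
MANAGE = "/path/to/project/manage.py"  # assumed project path

# Clear any stale entries, then register the jobs defined in CRONJOBS.
subprocess.run([PYTHON, MANAGE, "crontab", "remove"], check=True)
subprocess.run([PYTHON, MANAGE, "crontab", "add"], check=True)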
I would recommend that you use celery-beat instead of crontab if you want more robust automation.
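As a rough illustration, a celery-beat schedule equivalent to the CRONJOBS entry above might look like this; the project name, broker URL, and task module are assumptions:
# myproject/celery.py -- a minimal celery-beat sketch, not a drop-in config.
from celery import Celery
from celery.schedules import crontab

app = Celery("myproject", broker="redis://localhost:6379/0")  # assumed broker

app.conf.beat_schedule = {
    "my-scheduled-job": {
        "task": "push.tasks.my_scheduled_job",  # the job rewritten as a Celery task
        "schedule": crontab(),  # bare crontab() means every minute, like */1
    },
}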
Can anyone provide the steps to execute a Python script at regular intervals with cron inside a virtual machine (on Google Cloud Platform)?
I read this link: https://cloud.google.com/appengine/docs/standard/python/config/cron
but still could not figure out how to get it to work.
Regarding step (1), "Create the cron.yaml file in the root directory of your application (alongside app.yaml)": does that mean we have to create both cron.yaml and app.yaml? I do not see those files. What does app.yaml contain?
If you are using a virtual machine as you suggest, then the instructions you've linked may not be relevant, as they are for App Engine.
With a Compute Engine VM you should use the inbuilt Linux cron functionality. For these instructions, I'm going to assume that you want to execute the script every 10 minutes. You can adapt this value for your needs.
Here is how to proceed if you want to execute a script via a cron job on a GCP virtual machine.
1) Run this command to open the crontab configuration:
crontab -e
Note: the above command edits the crontab configuration for the user you are logged in as. If you would like to execute the script as the root user, add 'sudo' to the start of the command to edit the root user's crontab configuration.
2) In the cron configuration, you can add an entry specifying the intervals in minutes, hours, days of the month, months, and days of the week. On the same line, you add the command you would like to execute, in your case a command to run your Python script.
As an example, if you wanted to run the script every 10 minutes with Python, you would add an entry such as this:
*/10 * * * * /usr/bin/python /path/to/your/python/script.py
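If it helps, here is a minimal sketch of what such a script might look like; the log path is an assumption. Since cron jobs run without a terminal attached, having the script write to a log file makes it easy to confirm it is firing:
# /path/to/your/python/script.py -- minimal sketch; the log path is assumed.
import datetime

with open("/tmp/script_cron.log", "a") as log:
    log.write("script ran at %s\n" % datetime.datetime.now())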
3) Once you've saved the crontab configuration and exited the file, you may need to restart the cron service for your changes to take effect. You can do this by running the following command.
sudo systemctl restart cron
There is some useful information here if you would like to discover more about running cron jobs in Linux.
I followed this tutorial to set up Gunicorn to run Django on a VPS; this works perfectly, and the web server is running behind Nginx.
I created a separate manage.py command that I want to run asynchronously using a worker, but I am unsure how to integrate this through Gunicorn.
This is a follow-up to "Run code on first Django start", where the recommendation was to create a separate manage.py command and then run it as a separate worker process through Gunicorn.
Gunicorn's purpose here is to serve the Django project over WSGI; it doesn't use manage.py at all. You should invoke anything related to manage.py directly:
$ cd <projectdir>
$ source myprojectenv/bin/activate
$ python manage.py <your command here>
To run it as a worker, you can either set up a cron job that points to the Python binary in the virtualenv, or consider a Celery setup with the process management tool of your choice (supervisord, Docker, etc.).
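If you go the Celery route, a sketch of wrapping the management command in a task might look like this; the task module and the command name "your_command" are placeholders, not part of the original setup:
# tasks.py -- a hedged sketch; "your_command" is a placeholder command name.
from celery import shared_task
from django.core.management import call_command


@shared_task
def run_your_command():
    # call_command runs the same code path as `python manage.py your_command`
    call_command("your_command")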
I have been researching Docker and understand almost everything I have read so far. I have built a few images, linked containers together, mounted volumes, and even got a sample Django app running.
The one thing I cannot wrap my head around is setting up a development environment. The whole point of Docker is to be able to take your environment anywhere so that everything you do is portable and consistent. If I am running a Django app in production served by Gunicorn, for example, I need to restart the server in order for my code changes to take effect; this is not ideal when you are working on your project on your local laptop. If I make a change to my models or views, I don't want to have to attach to the container, stop Gunicorn, and then restart it every time I make a code change.
I am also not sure how I would run management commands. python manage.py syncdb would require me to get inside the container and run commands. I also use South to manage data and schema migrations with python manage.py migrate. How are others dealing with this issue?
Debugging is another issue. Would I have to get all my logs saved somewhere so I can look at them? I usually just look at the Django development server's output to see errors and print statements.
It seems that I would have to make a special dev-environment container with a bunch of workarounds, which seems to defeat the purpose of the tool in the first place. Any suggestions?
Update after doing more research:
Thanks for the responses. They set me on the right path.
I ended up discovering fig (http://www.fig.sh/). It lets you orchestrate container linking and volume mounting, and you can run commands, e.g. fig run container_name python manage.py syncdb. It seems pretty nice, and I have been able to set up my dev environment using it.
I made a diagram of how I set it up using Vagrant (https://www.vagrantup.com/).
I just run
fig up
in the same directory as my fig.yml file, and it does everything needed to link the containers and start the server. I am just running the development server when working on my Mac, so that it restarts when I change Python code.
At my current gig we set up a bash script called django_admin. You run it like so:
django_admin <management command>
Example:
django_admin syncdb
The script looks something like this:
docker run -it --rm \
-e PYTHONPATH=/var/local \
-e DJANGO_ENVIRON=LOCAL \
-e LC_ALL=en_US.UTF-8 \
-e LANG=en_US.UTF-8 \
-v /src/www/run:/var/log \
-v /src/www:/var/local \
--link mysql:db \
localhost:5000/www:dev /var/local/config/local/django-admin "$@"
I'm guessing you could also hook something like this up to manage.py.
I normally wrap my actual CMD in a script that launches a bash shell. Take a look at the Docker-Jetty container as an example. The final two lines in the script are:
/opt/jetty/bin/jetty.sh restart
bash
This will start Jetty and then open a shell.
Now I can use the following command to enter a shell inside the container and run any commands or look at logs. Once I am done I can use Ctrl-p + Ctrl-q to detach from the container.
docker attach CONTAINER_NAME
I have followed these links for running a cron job:
django - cron
custom management commands
but all of these approaches only work if I run commands like:
python manage.py crontab add
or
python manage.py runcron
but I don't want the cron jobs to run outside the Django server. I mean, I want to run the Django server and have it automatically call a certain function by itself while the server is running, for example every (say) 5 minutes.
If I understand correctly, you can't use django-cron if Django isn't running, so create a bash script to check whether Django is running and, if not, start it.
MYPROG="myprog"
RESTART="myprog params"
PGREP="/usr/bin/pgrep"
# find myprog pid
$PGREP ${MYPROG}
# if not running
if [ $? -ne 0 ]
then
$RESTART
fi
Then for your cron:
*/5 * * * * /myscript.sh
This is off the cuff and not tailored to your setup, but it's as good a starting point as any.
Let me see if I can elaborate:
On Linux you'll use cron; on Windows you'll use at. These are system services, not to be confused with Django; they are essentially scheduled tasks.
The custom management command is essentially a script that you point the cron job at.
You'll need to do some homework on what a cron job is (if you're using Linux) and see how to schedule a recurring task and how to have it issue the custom command. This is my understanding of what you're trying to do. If it's not, you need to clarify.
My Django project schedules a Python file to run at a given time using the "at" scheduler. This is executed within my models.py:
# build the shell command: pipe the script invocation into `at` for deferred execution
command = 'echo "python /path/to/script.py params" | /usr/bin/at -t [time] &> path/to/at.log'
status = os.system(command)  # returns the shell's exit status
where [time] is the scheduled time.
It works perfectly when I run it within the Django dev server (I usually run as root, but it works with other users as well).
But when I deployed my application on Apache using mod_wsgi, it doesn't work. The at log shows that the job was scheduled, but it never executes.
I tried everything from changing the ownership to www-data, to changing permissions, to making the script executable by all users, to setuid root (a huge security issue).
The last thing I want to do is run Apache as the root user.
Use cron or Celery for scheduled tasks. If you need to run something as root, it would make sense to rewrite your script as a simple daemon and run that as root; you can pass commands to it pretty easily with ZeroMQ.
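For illustration, here is a minimal sketch of such a daemon using pyzmq; the socket address and the message format are assumptions. The web process would connect a REQ socket to the same address and send the job details:
# at_daemon.py -- hypothetical root-run daemon; address and format are assumptions.
import subprocess

import zmq

context = zmq.Context()
socket = context.socket(zmq.REP)
socket.bind("tcp://127.0.0.1:5555")  # listen on localhost only

while True:
    msg = socket.recv_json()  # e.g. {"script": "/path/to/script.py", "time": "1312301530"}
    # Hand the script to `at`, as the original os.system() call did.
    cmd = 'echo "python %s" | /usr/bin/at -t %s' % (msg["script"], msg["time"])
    status = subprocess.call(cmd, shell=True)
    socket.send_json({"status": status})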