I've been working on a web app using Django, and I'm curious if there is a way to schedule a job to run periodically.
Basically I just want to run through the database and make some calculations/updates on an automatic, regular basis, but I can't seem to find any documentation on doing this.
Does anyone know how to set this up?
To clarify: I know I can set up a cron job to do this, but I'm curious if there is some feature in Django that provides this functionality. I'd like people to be able to deploy this app themselves without having to do much config (preferably zero).
I've considered triggering these actions "retroactively" by simply checking if a job should have been run since the last time a request was sent to the site, but I'm hoping for something a bit cleaner.
One solution that I have employed is to do this:
1) Create a custom management command, e.g.
python manage.py my_cool_command
2) Use cron (on Linux) or at (on Windows) to run my command at the required times.
This is a simple solution that doesn't require installing a heavy AMQP stack. There are, however, nice advantages to using something like Celery, mentioned in the other answers; in particular, Celery lets you avoid spreading your application logic out into crontab files. Still, the cron solution works quite nicely for a small to medium sized application where you don't want a lot of external dependencies.
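For anyone new to management commands, here is a minimal sketch of what my_cool_command could look like (the app name and the actual work are placeholders):

# myapp/management/commands/my_cool_command.py
from django.core.management.base import BaseCommand

class Command(BaseCommand):
    help = "Run the periodic calculations/updates against the database."

    def handle(self, *args, **options):
        # Put the periodic database work here.
        self.stdout.write("Done.")

A crontab entry such as 0 * * * * python /path/to/manage.py my_cool_command then runs it hourly.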
EDIT:
In later versions of Windows (Windows 8, Server 2012 and above) the at command is deprecated. You can use schtasks.exe instead.
**** UPDATE ****
This is the new link to the Django documentation for writing custom management commands.
Celery is a distributed task queue, built on AMQP (RabbitMQ). It also handles periodic tasks in a cron-like fashion (see periodic tasks). Depending on your app, it might be worth a gander.
Celery is pretty easy to set up with Django (docs), and periodic tasks will actually skip missed tasks in case of a downtime. Celery also has built-in retry mechanisms, in case a task fails.
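As a rough illustration (the task name and schedule are placeholders, and the exact configuration style depends on your Celery version), a periodic task can be wired up through the beat schedule:

# tasks.py - a minimal sketch, assuming a configured Celery app
from celery import Celery
from celery.schedules import crontab

app = Celery('myproject', broker='amqp://localhost')

@app.task
def run_db_maintenance():
    # Walk the database and perform the periodic calculations/updates.
    pass

# Run the task every hour, on the hour.
app.conf.beat_schedule = {
    'hourly-db-maintenance': {
        'task': 'tasks.run_db_maintenance',
        'schedule': crontab(minute=0),
    },
}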
We've open-sourced what I think is a structured app that Brian's solution above alludes to. We would love any and all feedback!
https://github.com/tivix/django-cron
It comes with one management command:
./manage.py runcrons
That does the job. Each cron is modeled as a class (so it's all OO), each cron can run at a different frequency, and we make sure the same cron type doesn't run in parallel (in case the crons themselves take longer to run than their frequency!).
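For context, a cron class in django-cron looks roughly like this (the names here are placeholders; see the project README for the authoritative example):

from django_cron import CronJobBase, Schedule

class MyCronJob(CronJobBase):
    RUN_EVERY_MINS = 120  # every 2 hours

    schedule = Schedule(run_every_mins=RUN_EVERY_MINS)
    code = 'my_app.my_cron_job'  # a unique identifier for this job

    def do(self):
        # Put the periodic work here.
        pass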
If you're using a standard POSIX OS, you use cron.
If you're using Windows, you use at.
Write a Django management command to
Figure out what platform they're on.
Either execute the appropriate "AT" command for your users, or update the crontab for your users.
Interesting new pluggable Django app: django-chronograph
You only have to add one cron entry, which acts as a timer, and you get a very nice Django admin interface for managing the scripts to run.
Look at Django Poor Man's Cron, which is a Django app that makes use of spambots, search engine indexing robots, and the like to run scheduled tasks at approximately regular intervals.
See: http://code.google.com/p/django-poormanscron/
I had exactly the same requirement a while ago, and ended up solving it using APScheduler (User Guide)
It makes scheduling jobs super simple, and keeps it independent from request-based execution of your code. Following is a simple example.
from apscheduler.schedulers.background import BackgroundScheduler

scheduler = BackgroundScheduler()
job = None

def tick():
    print('One tick!')

def start_job():
    global job
    job = scheduler.add_job(tick, 'interval', seconds=3600)
    try:
        scheduler.start()
    except Exception:
        # The scheduler may already be running (e.g. after an autoreload).
        pass
Hope this helps somebody!
Django APScheduler for Scheduler Jobs. Advanced Python Scheduler (APScheduler) is a Python library that lets you schedule your Python code to be executed later, either just once or periodically. You can add new jobs or remove old ones on the fly as you please.
note: I'm the author of this library
Install APScheduler
pip install apscheduler
Write the function you want the scheduler to call
File name: scheduler_jobs.py

def FirstCronTest():
    print("")
    print("I am executed..!")
Configuring the scheduler
Create an execute.py file and add the code below:

from apscheduler.schedulers.background import BackgroundScheduler

scheduler = BackgroundScheduler()

# Your functions go here; the scheduler jobs are written in scheduler_jobs
import scheduler_jobs

scheduler.add_job(scheduler_jobs.FirstCronTest, 'interval', seconds=10)
scheduler.start()
Link the File for Execution
Now, add the line below at the bottom of your urls.py file:
import execute
You can check the full code here:
https://github.com/devchandansh/django-apscheduler
Brian Neal's suggestion of running management commands via cron works well, but if you're looking for something a little more robust (yet not as elaborate as Celery) I'd look into a library like Kronos:
# app/cron.py
import kronos

@kronos.register('0 * * * *')
def task():
    pass
RabbitMQ and Celery have more features and task handling capabilities than Cron. If task failure isn't an issue, and you think you will handle broken tasks in the next call, then Cron is sufficient.
Celery & AMQP will let you handle the broken task, and it will get executed again by another worker (Celery workers listen for the next task to work on), until the task's max_retries attribute is reached. You can even invoke tasks on failure, like logging the failure, or sending an email to the admin once the max_retries has been reached.
And you can distribute Celery and AMQP servers when you need to scale your application.
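As a hedged sketch of what that retry behaviour looks like in code (the task name and retry limits are made up for illustration):

from celery import shared_task

@shared_task(bind=True, max_retries=3, default_retry_delay=60)
def update_records(self):
    try:
        # ... do the database work ...
        pass
    except Exception as exc:
        # Re-queue the task; a worker will pick it up again until
        # max_retries is reached.
        raise self.retry(exc=exc)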
I personally use cron, but the Jobs Scheduling part of django-extensions looks interesting.
Although not part of Django, Airflow is a more recent project (as of 2016) that is useful for task management.
Airflow is a workflow automation and scheduling system that can be used to author and manage data pipelines. A web-based UI provides the developer with a range of options for managing and viewing these pipelines.
Airflow is written in Python and is built using Flask.
Airflow was created by Maxime Beauchemin at Airbnb and open sourced in the spring of 2015. It joined the Apache Software Foundation’s incubation program in the winter of 2016. Here is the Git project page and some additional background information.
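To give a flavour of how a scheduled task looks in Airflow (a minimal sketch; the operator import path differs between Airflow versions, and the callable is just a placeholder):

from datetime import datetime

from airflow import DAG
from airflow.operators.python_operator import PythonOperator

def update_database():
    # Periodic calculations/updates go here.
    pass

dag = DAG('db_maintenance',
          start_date=datetime(2016, 1, 1),
          schedule_interval='@hourly')

task = PythonOperator(task_id='update_database',
                      python_callable=update_database,
                      dag=dag)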
Put the following at the top of your cron.py file:
#!/usr/bin/python
import os, sys
sys.path.append('/path/to/') # the parent directory of the project
sys.path.append('/path/to/project') # these lines only needed if not on path
os.environ['DJANGO_SETTINGS_MODULE'] = 'myproj.settings'
# On Django >= 1.7 you also need to call django.setup() before using the ORM:
# import django; django.setup()
# imports and code below
I just thought about this rather simple solution:
Define a view function do_work(req, param) like you would with any other view, with URL mapping, return a HttpResponse and so on.
Set up a cron job with your timing preferences (or using AT or Scheduled Tasks in Windows) which runs curl http://localhost/your/mapped/url?param=value.
You can add parameters by just adding them to the URL.
Tell me what you guys think.
[Update] I'm now using the runjob command from django-extensions instead of curl.
My cron looks something like this:
@hourly python /path/to/project/manage.py runjobs hourly
... and so on for daily, monthly, etc. You can also set it up to run a specific job.
I find it more manageable and cleaner. It doesn't require mapping a URL to a view. Just define your job class and crontab entry and you're set.
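For reference, a job for django-extensions lives under your app's jobs package and looks roughly like this (the module and help text are placeholders; check the django-extensions docs for the details):

# myapp/jobs/hourly/update_stats.py
from django_extensions.management.jobs import HourlyJob

class Job(HourlyJob):
    help = "Recompute cached statistics."

    def execute(self):
        # Periodic database work goes here.
        pass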
After this bit of code, I can write anything, just like in my views.py :)
#######################################
import os, sys
sys.path.append('/home/administrator/development/store')
os.environ['DJANGO_SETTINGS_MODULE'] = 'store.settings'
from django.core.management import setup_environ
from store import settings
setup_environ(settings)
#######################################
From: http://www.cotellese.net/2007/09/27/running-external-scripts-against-django-models/
You should definitely check out django-q!
It requires no additional configuration and has quite possibly everything needed to handle any production issues on commercial projects.
It's actively developed and integrates very well with Django, the Django ORM, Mongo, and Redis. Here is my configuration:
# django-q
# -------------------------------------------------------------------------
# See: http://django-q.readthedocs.io/en/latest/configure.html
Q_CLUSTER = {
    # Match recommended settings from docs.
    'name': 'DjangoORM',
    'workers': 4,
    'queue_limit': 50,
    'bulk': 10,
    'orm': 'default',

    # Custom Settings
    # ---------------
    # Limit the amount of successful tasks saved to Django.
    'save_limit': 10000,

    # See https://github.com/Koed00/django-q/issues/110.
    'catch_up': False,

    # Number of seconds a worker can spend on a task before it's terminated.
    'timeout': 60 * 5,

    # Number of seconds a broker will wait for a cluster to finish a task
    # before presenting it again. This needs to be longer than `timeout`,
    # otherwise the same task will be processed multiple times.
    'retry': 60 * 6,

    # Whether to force all async() calls to be run with sync=True
    # (making them synchronous).
    'sync': False,

    # Redirect worker exceptions directly to Sentry error reporter.
    'error_reporter': {
        'sentry': RAVEN_CONFIG,
    },
}
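Once the cluster is running, a recurring job can be registered through django-q's schedule helper, roughly like this (the task path is a placeholder):

from django_q.models import Schedule
from django_q.tasks import schedule

# Run myapp.tasks.do_work() every hour, forever.
schedule('myapp.tasks.do_work',
         schedule_type=Schedule.HOURLY,
         repeats=-1)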
Yes, the methods above are all great, and I tried some of them. In the end, I settled on a method like this:
from threading import Timer

INTERVAL = 60 * 60  # seconds between runs (self.interval in my original code)

def sync():
    # do something...
    sync_timer = Timer(INTERVAL, sync)
    sync_timer.start()

sync()  # the first call schedules the next one, and so on
It works recursively: each run schedules the next one.
Ok, I hope this method can meet your requirement. :)
A more modern solution (compared to Celery) is Django Q:
https://django-q.readthedocs.io/en/latest/index.html
It has great documentation and is easy to grok. Windows support is lacking, because Windows does not support process forking, but it works fine if you create your dev environment using the Windows Subsystem for Linux.
I had something similar to your problem today.
I didn't want to have it handled by the server through cron (and most of the libs were just cron helpers in the end).
So I created a scheduling module and attached it to the app's init.
It's not the best approach, but it helps me keep all the code in a single place, with its execution tied to the main app.
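A hedged sketch of what "attached to the app's init" can look like, using AppConfig.ready() to start a background scheduler thread (the module and function names are made up, and note that ready() can run more than once, e.g. under the autoreloader):

# myapp/apps.py
import threading
import time

from django.apps import AppConfig

def run_scheduled_jobs():
    while True:
        # Periodic database work goes here.
        time.sleep(60 * 60)

class MyAppConfig(AppConfig):
    name = 'myapp'

    def ready(self):
        thread = threading.Thread(target=run_scheduled_jobs, daemon=True)
        thread.start()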
I use Celery to create my periodic tasks. First you need to install it as follows:
pip install django-celery
Don't forget to register django-celery in your settings and then you could do something like this:
from celery import task
from celery.decorators import periodic_task
from celery.task.schedules import crontab
from celery.utils.log import get_task_logger
@periodic_task(run_every=crontab(minute="0", hour="23"))
def do_every_midnight():
    # your code
    pass
I am not sure whether this will be useful for anyone, but since I had to let other users of the system schedule jobs without giving them access to the actual server's (Windows) Task Scheduler, I created this reusable app.
Please note that users have access to one shared folder on the server where they can create the required command/task/.bat file. That task can then be scheduled using this app.
The app is called Django_Windows_Scheduler.
If you want something more reliable than Celery, try TaskHawk which is built on top of AWS SQS/SNS.
Refer: http://taskhawk.readthedocs.io
For simple dockerized projects, I could not really see any existing answer that fit.
So I wrote a very barebones solution that needs no external libraries or triggers and runs on its own. No external OS cron is needed, and it should work in every environment.
It works by adding a middleware: middleware.py
import threading


def should_run(name, seconds_interval):
    from application.models import CronJob
    from django.utils.timezone import now

    try:
        c = CronJob.objects.get(name=name)
    except CronJob.DoesNotExist:
        CronJob(name=name, last_ran=now()).save()
        return True

    if (now() - c.last_ran).total_seconds() >= seconds_interval:
        c.last_ran = now()
        c.save()
        return True

    return False


class CronTask:
    def __init__(self, name, seconds_interval, function):
        self.name = name
        self.seconds_interval = seconds_interval
        self.function = function


def cron_worker(*_):
    if not should_run("main", 60):
        return

    # customize this part:
    from application.models import Event

    tasks = [
        CronTask("events", 60 * 30, Event.clean_stale_objects),
        # ...
    ]

    for task in tasks:
        if should_run(task.name, task.seconds_interval):
            task.function()


def cron_middleware(get_response):
    def middleware(request):
        response = get_response(request)
        threading.Thread(target=cron_worker).start()
        return response

    return middleware
models/cron.py:
from django.db import models


class CronJob(models.Model):
    name = models.CharField(max_length=10, primary_key=True)
    last_ran = models.DateTimeField()
settings.py:
MIDDLEWARE = [
    ...
    'application.middleware.cron_middleware',
    ...
]
A simple way is to write a custom shell command (see the Django documentation) and execute it using a cron job on Linux. However, I would highly recommend using a message broker like RabbitMQ coupled with Celery. Maybe you can have a look at this tutorial.
One alternative is to use Rocketry:
from rocketry import Rocketry
from rocketry.conds import daily, after_success

app = Rocketry()

@app.task(daily.at("10:00"))
def do_daily():
    ...

@app.task(after_success(do_daily))
def do_after_another():
    ...

if __name__ == "__main__":
    app.run()
It also supports custom conditions:
from pathlib import Path

@app.cond()
def file_exists(file):
    return Path(file).exists()

@app.task(daily & file_exists("myfile.csv"))
def do_custom():
    ...
And it also supports Cron:
from rocketry.conds import cron

@app.task(cron('*/2 12-18 * Oct Fri'))
def do_cron():
    ...
It can be integrated quite nicely with FastAPI, and I think it could be integrated with Django as well, since Rocketry is essentially just a sophisticated loop that can spawn async tasks, threads, and processes.
Disclaimer: I'm the author.
Another option, similar to Brian Neal's answer, is to use RunScripts.
Then you don't need to set up commands. This has the advantage of a more flexible or cleaner folder structure.
This file must implement a run() function. This is what gets called when you run the script. You can import any models or other parts of your django project to use in these scripts.
And then, just
python manage.py runscript path.to.script
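A minimal sketch of such a script (the model and method names are placeholders):

# scripts/update_stats.py
from myapp.models import Record  # placeholder model

def run():
    # Any Django code can go here; this is what runscript calls.
    for record in Record.objects.all():
        record.recalculate()  # placeholder method
        record.save()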
I am getting a bit of odd behavior using the override_settings decorator. It basically works when I run the test alone, but won't work if I run the whole testing suite.
In this test I am changing the REST_FRAMEWORK options, because when running this suite I want to set the authentication settings, while the other tests do not use authentication:
@override_settings(REST_FRAMEWORK=AUTH_REST_FRAMEWORK)
class AuthTestCase(TestCase):

    @classmethod
    def setUpClass(cls):
        super(AuthTestCase, cls).setUpClass()
        cls.client = Client()

    def test_i_need_login(self):
        response = self.client.get('/')
        self.assertEqual(response.status_code, 401)
so if I do...
$ python manage.py test myapp/tests/test_auth.py
The settings are applied and works great!
but if I run the whole testing suite like:
$ python manage.py test
the test will fail. It seems to me that these settings (or some objects) are being cached from other tests. I also have another class in another test file that uses a Client instance in a similar way.
Environment:
Python: 2.7
Django: 1.10
Edit:
The workaround I found for this problem was to use find to run the tests; it can be an alias or a script with...
find . -name 'test*.py' -exec python manage.py test {} \;
The downside is that the output of many tests gets piled up on the screen, and it may create/destroy the test database a few times, unless you add options to the command like REUSE_DB if using django-nose.
Well, there is a warning about this very situation.
Warning
The settings file contains some settings that are only consulted during initialization of Django internals. If you change them with override_settings, the setting is changed if you access it via the django.conf.settings module; however, Django's internals access it differently. Effectively, using override_settings() or modify_settings() with these settings is probably not going to do what you expect it to do.
The first time, you are running a specific test case, so the override takes effect. The second time, you are running the whole suite, and your specific test case probably isn't the first one being run, so the above happens.
Here is the scenario: my website has some unsafe code, generated by website users, that needs to run on my server.
I want to disable some reserved words/built-ins in Python, such as eval, exec, print and so on, to protect my running environment.
Is there a simple way (without changing the python interpreter, my python version is 2.7.10) to implement the feature I described before?
Many thanks.
Disabling names at the Python level won't help, as there are numerous ways around it. See this post and this one for more info. This is what you need to do:
For CPython, use RestrictedPython to define a restricted subset of Python.
For PyPy, use sandboxing. It allows you to run arbitrary python code in a special environment that serializes all input/output so you can check it and decide which commands are allowed before actually running them.
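A rough sketch of the CPython route with RestrictedPython (check the RestrictedPython docs for the exact helpers available in your version; the user code here is just a placeholder):

from RestrictedPython import compile_restricted, safe_globals

user_source = "result = 1 + 1"

# Compile the user's code with the restricted compiler, then execute it
# against a whitelisted set of globals.
byte_code = compile_restricted(user_source, filename='<user_code>', mode='exec')
restricted_locals = {}
exec(byte_code, safe_globals, restricted_locals)
print(restricted_locals['result'])  # 2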
Since version 3.8 Python supports audit hooks so you can completely prevent certain actions:
import sys

def audit(event, args):
    if event == 'compile':
        sys.exit('nice try!')

sys.addaudithook(audit)
eval('5')
Additionally, to protect your host OS, use
either virtualization (safer) such as KVM or VirtualBox
or containerization (much lighter) such as lxd or docker
In the case of containerization with docker you may need to add AppArmor or SELinux policies for extra safety. lxd already comes with AppArmor policies by default.
Make sure you run the code as a user with as little privileges as possible.
Rebuild the virtual machine/container for each user.
Whichever solution you use, don't forget to limit resource usage (RAM, CPU, storage, network). Use cgroups if your chosen virtualization/containerization solution does not support these kinds of limits.
Last but not least, use timeouts to prevent your users' code from running forever.
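As a small illustration of the timeout point, if the user code is launched as a separate process, the standard library already gives you a hard cutoff (the path and limit here are placeholders):

import subprocess

try:
    # Run the untrusted script in a separate, unprivileged process and
    # kill it if it exceeds 5 seconds of wall-clock time.
    subprocess.run(['python', '/sandbox/user_code.py'], timeout=5)
except subprocess.TimeoutExpired:
    print('User code exceeded the time limit and was terminated.')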
One way is to shadow the methods:
def not_available(*args, **kwargs):
    return 'Not allowed'

eval = not_available
exec = not_available
print = not_available
However, someone smart can always do this:
import builtins
builtins.print('this works!')
So the real solution is to parse the code and not allow the input if it has such statements (rather than trying to disable them).
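A rough sketch of that parsing approach using the ast module (the blacklist is illustrative, and name-based checks are still easy to evade, as noted above):

import ast

BANNED_NAMES = {'eval', 'exec', 'print', '__import__'}

def contains_banned_names(source):
    """Return True if the source references any banned name."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.Name) and node.id in BANNED_NAMES:
            return True
        if isinstance(node, ast.Attribute) and node.attr in BANNED_NAMES:
            return True
    return False

print(contains_banned_names("eval('5')"))   # True
print(contains_banned_names("x = 1 + 1"))   # False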
What is the latest way to write Python tests? What modules/frameworks to use?
And another question: are doctest tests still of any value? Or should all the tests be written in a more modern testing framework?
Thanks, Boda Cydo.
The usual way is to use the builtin unittest module for creating unit tests and bundling them together into test suites that can be run independently. unittest is very similar to (and inspired by) jUnit and thus very easy to use.
If you're interested in the very latest changes, take a look at the new PyCon talk by Michael Foord:
PyCon 2010: New and Improved: Coming changes to unittest
Using the built-in unittest module is as relevant and easy as ever. The other unit testing options, py.test, nose, and twisted.trial, are mostly compatible with unittest.
Doctests are of the same value they always were—they are great for testing your documentation, not your code. If you are going to put code examples in your docstrings, doctest can assure you keep them correct and up to date. There's nothing worse than trying to reproduce an example and failing, only to later realize it was actually the documentation's fault.
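For instance, a documented example like the following is checked automatically when the module is run through doctest (a minimal sketch):

def add(a, b):
    """Return the sum of a and b.

    >>> add(2, 3)
    5
    >>> add('py', 'thon')
    'python'
    """
    return a + b

if __name__ == '__main__':
    import doctest
    doctest.testmod()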
I don't know much about doctests, but at my university, nose testing is taught and encouraged.
Nose can be installed by following this procedure (I'm assuming you're using a PC - Windows OS):
install setuptools
Run DOS Command Prompt (Start -> All Programs -> Accessories -> Command Prompt)
For this step to work, you must be connected to the internet. In DOS, type: C:\Python25\Scripts\easy_install nose
If you are on a different OS, check this site
EDIT:
It's been two years since I originally wrote this post. Now, I've learned of a programming principle called Design by Contract. This allows a programmer to define preconditions, postconditions and invariants (called contracts) for all functions in their code. The effect is that an error is raised if any of these contracts are violated.
The DbC framework that I would recommend for Python is called PyContract. I have successfully used it in my evolutionary programming framework.
In my current project I'm using unittest, minimock, and nose. In the past I've made heavy use of doctests, but in large projects some tests can get kinda unwieldy, so I tend to reserve doctests for simpler functions.
If you are using setuptools or distribute (you should be switching to distribute), you can set up nose as the default test collector so that you can run your tests with "python setup.py test"
setup(name='foo',
      ...
      test_suite='nose.collector',
      ...
      )
Now running "python setup.py test" will invoke nose, which will crawl your project for things that look like tests and run them, accumulating the results. If you also have doctests in your project, you can run nosetests with the --with-doctest option to enable the doctest plugin.
nose also has integration with coverage
nosetests --with-coverage.
You can also use the --cover-html --cover-html-dir options to generate an HTML coverage report for each module, with each line of code that is not under test highlighted. I wouldn't get too obsessed with getting coverage to report 100% test coverage for all modules. Some code is better left for integration tests, which I'll cover at the end.
I have become a huge fan of minimock, as it makes testing code with a lot of external dependencies really easy. While it works really well when paired with doctest, it can be used with any testing framework using the minimock.TraceTracker class. I would encourage you to avoid using it to test all of your code though, since you should still try to write your code so that each translation unit can be tested in isolation without mocking. Sometimes that's not possible though.
Here is an (untested) example of such a test using minimock and unittest:
# tests/test_foo.py
import minimock
import unittest

import foo


class FooTest(unittest.TestCase):

    def setUp(self):
        # Track all calls into our mock objects. If we don't use a TraceTracker
        # then all output will go to stdout, but we want to capture it.
        self.tracker = minimock.TraceTracker()

    def tearDown(self):
        # Restore all objects in global module state that minimock had
        # replaced.
        minimock.restore()

    def test_bar(self):
        # foo.bar invokes urllib2.urlopen, and then calls read() on the
        # resulting file object, so we'll use minimock to create a mocked
        # urllib2.
        urlopen_result = minimock.Mock('urlobject', tracker=self.tracker)
        urlopen_result.read = minimock.Mock(
            'urlobj.read', tracker=self.tracker, returns='OMG')
        foo.urllib2.urlopen = minimock.Mock(
            'urllib2.urlopen', tracker=self.tracker, returns=urlopen_result)

        # Now when we call foo.bar(URL) and it invokes
        # *urllib2.urlopen(URL).read()*, it will not actually send a request
        # to URL, but will instead give us back the dummy response body 'OMG',
        # which it then returns.
        self.assertEquals(foo.bar('http://example.com/foo'), 'OMG')

        # Now we can get trace info from minimock to verify that our mocked
        # urllib2 was used as intended. self.tracker has traced our calls to
        # *urllib2.urlopen()*
        minimock.assert_same_trace(self.tracker, """\
Called urllib2.urlopen('http://example.com/foo')
Called urlobj.read()
Called urlobj.close()""")
Unit tests shouldn't be the only kinds of tests you write though. They are certainly useful and IMO extremely important if you plan on maintaining this code for any extended period of time. They make refactoring easier and help catch regressions, but they don't really test the interaction between the various components (if you do it right).
When I start getting to the point where I have a mostly finished product with decent test coverage that I intend to release, I like to write at least one integration test that runs the complete program in an isolated environment.
I've had a lot of success with this on my current project. I had about 80% unit test coverage, and the rest of the code was stuff like argument parsing, command dispatch and top level application state, which is difficult to cover in unit tests. This program has a lot of external dependencies, hitting about a dozen different web services and interacting with about 6,000 machines in production, so running this in isolation proved kinda difficult.
I ended up writing an integration test which spawns a WSGI server written with eventlet and webob that simulates all of the services my program interacts with in production. Then the integration test monkey patches our web service client library to intercept all HTTP requests and send them to the WSGI application. After doing that, it loads a state file that contains a serialized snapshot of the state of the cluster, and invokes the application by calling its main() function. Now all of the external services my program interacts with are simulated, so that I can run my program as it would be run in production in a repeatable manner.
The important thing to remember about doctests is that the tests are based on string comparisons, and the way that numbers are rendered as strings will vary on different platforms and even in different python interpreters.
Most of my work deals with computations, so I use doctests only to test my examples and my version string. I put a few in the __init__.py since that will show up as the front page of my epydoc-generated API documentation.
I use nose for testing, although I'm very interested in checking out the latest changes to py.test.