Make Celery use Django's test database without task_always_eager - python

When running tests in Django applications that make use of Celery tasks, I can't fully test tasks that need to get data from the database, since they don't connect to the test database that Django creates.
Setting task_always_eager to True in Celery partially solves this problem, but as the testing documentation says, this doesn't fully reflect how the code will run on a real Celery worker and isn't suitable for testing.
How can I make Celery tasks use the Django test database when running Django tests, without setting task_always_eager = True?

Short answer: you must run a Celery worker, just as in production.
Easy approach:
Use a dedicated test database (as in production).
Configure Celery to use it.
Start the Celery worker manually before you run the tests.
Advanced approach:
Use the auto-created test database (it may be SQLite).
Run a Celery worker in your test's setUp().
Configure Celery to use the auto-created test database (copy django.conf.settings.DATABASES from the test process to Celery).
In every case you must also provide a message broker for Celery.
I have a test that requires a dedicated Celery worker, to check code that passes messages between a Celery task and the calling code:
https://gist.github.com/Sovetnikov/a7ad982fc77e8dfbc528bfc20fcf3b1e
This Python module is two things in one: a unit test and a Celery worker runner with a self-contained configuration.
My code does not use any database, but you can easily adapt it to your needs. Just pass django.conf.settings.DATABASES (as JSON, pickle, or whatever method you like) to the Celery starter code and configure Django's DATABASES to point to the test database.
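A minimal sketch of that idea, purely illustrative: the environment-variable name TEST_DATABASES, the project name myproject, and the worker options are assumptions of mine, not part of the answer above.
import json
import os
import subprocess

from django.conf import settings

def start_test_worker():
    # Inside a test run Django has already pointed settings.DATABASES at the
    # test databases, so serializing it here captures the test configuration.
    env = os.environ.copy()
    env["TEST_DATABASES"] = json.dumps(settings.DATABASES, default=str)
    return subprocess.Popen(
        ["celery", "-A", "myproject", "worker", "--loglevel=info", "--pool=solo"],
        env=env,
    )

# In the settings module the worker loads, something like:
#   if "TEST_DATABASES" in os.environ:
#       DATABASES = json.loads(os.environ["TEST_DATABASES"])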
Additional info:
There is a complete solution for this case at https://github.com/RentMethod/celerytest (I tried an old version of it and had no luck, because it uses threads, and with the Python GIL ... I think it is over-complicated).
Sample code showing how to configure the DATABASES settings and initialize Django itself in a single module: https://gist.github.com/Sovetnikov/369a8d05ba2b6482fa20769bc498f122

A simple solution is to use celery.contrib.testing.worker.start_worker to spawn a Celery worker within the Django test process. Because it lives in the same process, it can access the default in-memory test database; and because it runs in its own thread, it isn't eager, so the task_always_eager flag is neither needed nor recommended.
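For instance, a minimal sketch of that approach; the myproject.celery app module, the add task, and the use of TransactionTestCase and a configured result backend are assumptions on my part:
from celery.contrib.testing.worker import start_worker
from django.test import TransactionTestCase

from myproject.celery import app
from myproject.tasks import add  # hypothetical task, used only for illustration

class CeleryTaskTests(TransactionTestCase):
    @classmethod
    def setUpClass(cls):
        super().setUpClass()
        # start_worker() returns a context manager that runs a worker thread
        # inside the test process, so it can see the test database.
        cls.celery_worker = start_worker(app, perform_ping_check=False)
        cls.celery_worker.__enter__()

    @classmethod
    def tearDownClass(cls):
        super().tearDownClass()
        cls.celery_worker.__exit__(None, None, None)

    def test_task_runs_against_test_database(self):
        result = add.delay(2, 2)
        self.assertEqual(result.get(timeout=10), 4)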

Related

Use existing celery workers for Airflow's Celeryexecutor workers

I am trying to introduce dynamic workflows into my landscape, involving multiple steps of different model inference where the output from one model gets fed into another model. Currently we have a few Celery workers spread across hosts to manage the inference chain. As the complexity increases, we are attempting to build workflows on the fly. For that purpose, I have a dynamic DAG setup with CeleryExecutor working. Now, is there a way I can retain the current Celery setup and route Airflow-driven tasks to the same workers? I do understand that the setup on these workers should have access to the same DAG folders and environment as the Airflow server. I want to know how the Celery workers need to be started on these servers so that Airflow can route to them the same tasks that used to be done by the manual workflow from a Python application. If I start the workers using the command "airflow celery worker", I cannot access my application tasks. If I start Celery the way it currently runs, i.e. "celery -A proj", Airflow has nothing to do with it. Looking for ideas to make it work.
Thanks @DejanLekic. I got it working (though the DAG task-scheduling latency was so high that I dropped the approach). If someone is looking to see how this was accomplished, here are a few things I did to get it working.
Change airflow.cfg to change the executor, queue and result back-end settings (obvious).
If we have to use Celery workers spawned outside the Airflow umbrella, change the celery_app_name setting to celery.execute instead of airflow.executors.celery_execute and change the executor to "LocalExecutor". I have not tested this, but it may even be possible to avoid switching to the Celery executor by registering Airflow's task in the project's Celery app.
Each task will now call send_task(); the AsyncResult object returned is then stored either in XCom (implicitly or explicitly) or in Redis (implicitly pushed to the queue), and the child task will then gather the AsyncResult (an implicit call to get the value from XCom or Redis) and call .get() to obtain the result from the previous step (a sketch of this pattern follows the note below).
Note: it is not necessary to split send_task() and .get() between two tasks of the DAG. By splitting them between parent and child, I was trying to take advantage of the lag between tasks. But in my case, the Celery execution of the tasks completed faster than Airflow's inherent latency in scheduling dependent tasks.
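A minimal sketch of that send_task()/.get() pattern from inside Airflow PythonOperator callables. The broker and backend URLs, the task name proj.tasks.run_model, the XCom key, and Airflow 2-style injection of ti into the callables are all assumptions:
from celery import Celery
from celery.result import AsyncResult

# Celery app pointing at the broker the existing (non-Airflow) workers consume.
external_app = Celery(
    "proj",
    broker="redis://localhost:6379/0",
    backend="redis://localhost:6379/1",
)

def submit_inference(ti, **_):
    # Route work to the existing workers by registered task name.
    result = external_app.send_task("proj.tasks.run_model", args=["input.json"])
    # Hand the task id to the downstream Airflow task via XCom.
    ti.xcom_push(key="celery_task_id", value=result.id)

def collect_inference(ti, **_):
    task_id = ti.xcom_pull(key="celery_task_id", task_ids="submit_inference")
    # Block until the external worker finishes and return its result.
    return AsyncResult(task_id, app=external_app).get(timeout=600)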

Django: How to ignore tasks with Celery?

Without changing the code itself, is there a way to ignore tasks in Celery?
For example, when using Django mail, there is a dummy backend setting. This is perfect, since it allows me, from a .env file, to deactivate mail sending in some environments (like testing or staging). The code itself that handles mail sending is not changed with if statements or decorators.
For Celery tasks, I know I could do it in code using mocks or decorators, but I'd like to do it in a clean way that is 12-factor compliant, like with Django mail. Any idea?
EDIT, to explain why I want to do this:
One of the main motivations behind this is that the current setup creates coupling between the Django web server and Celery.
For example, when running unit tests, if the broker server (Redis for me) is not running and the delay() method is called, it freezes forever, because there is no timeout when Celery tries to send a task to Redis.
From an architecture point of view, this is very bad. I'd like my unit tests to run properly without requiring a running Celery broker!
Thanks!
As far as the coupling is concerned, your Django application would still be tied to Celery if you use a dummy backend; just your tasks won't execute. Maybe this is acceptable in your case, but in my opinion it can cause some problems. For example, if the piece of code you are trying to test submits a task to Celery and, in a later part, tries to retrieve the result of that task, it will fail, because the dummy backend will never execute the task.
For unit testing, as you mentioned in your question, you can use the task_always_eager setting. If you turn it on, your Django app will no longer depend on a running worker. It will execute tasks in the same thread, synchronously, and return the result.
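Since the question asks for a 12-factor-style switch, one hedged option is to drive that setting from the environment in settings.py. This is only a sketch: the CELERY_EAGER variable name is my own invention, and it assumes your Celery app loads its configuration from Django settings with the CELERY_ namespace (app.config_from_object('django.conf:settings', namespace='CELERY')).
# settings.py: a minimal sketch; CELERY_EAGER is an invented variable name.
import os

CELERY_TASK_ALWAYS_EAGER = os.environ.get("CELERY_EAGER", "false").lower() == "true"
# Re-raise exceptions from eagerly executed tasks so tests see real failures.
CELERY_TASK_EAGER_PROPAGATES = CELERY_TASK_ALWAYS_EAGER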

Setup celery periodic task

How do I set up a periodic task with Celerybeat and Flask that queries a database every hour?
The environment looks like this:
/
|-app
|  |-__init__.py
|  |-jobs
|     |-task.py
|-celery-beat.sh
|-celery-worker.sh
|-manage.py
I currently have a query function called run_query() located in task.py
I want the scheduler to kick in once the application initializes, so I have the following lines in my /app/__init__.py file:
celery = Celery()

@celery.on_after_configure.connect
def setup_periodic_tasks(sender, **kwargs):
    sender.add_periodic_task(1, app.jobs.task.run_query())
(For simplicity's sake, I've set it up so that if it runs, it will run every minute. No such luck yet.)
When I launch celery-worker.sh, it recognizes my function under the [tasks] heading, but the scheduled function never runs. I can manually force the function to run by issuing the following at the command prompt:
>>> from app.jobs import task
>>> task.run_query.delay()
EDIT: Added celerybeat.sh
As a follow-up: if the database is accessed through a Flask context, is it wise, during my async function call, to create a new Flask context to access the database? To use the existing Flask context? Or to forget contexts altogether and just open a connection to the database? My worry is that if I just open a new connection, it may interfere with the existing context's connection.
To run periodic tasks you need some kind of scheduler (e.g. celery beat).
celery beat is a scheduler; it kicks off tasks at regular intervals, which are then executed by available worker nodes in the cluster.
You have to ensure only a single scheduler is running for a schedule at a time, otherwise you'd end up with duplicate tasks. Using a centralized approach means the schedule doesn't have to be synchronized, and the service can operate without using locks.
Reference: periodic-tasks
You can invoke the scheduler with the command:
$ celery -A proj beat  # a separate process from your worker
You can also embed beat inside the worker by enabling the worker's -B option. This is convenient if you'll never run more than one worker node, but it's not commonly used and for that reason isn't recommended for production use. Starting the scheduler:
$ celery -A proj worker -B
Reference: celery starting scheduler
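For reference, a minimal sketch of the registration side under the question's layout; the broker URL is an assumption, and the key detail is passing a task signature, run_query.s(), to add_periodic_task:
# app/__init__.py (sketch)
from celery import Celery

celery = Celery(__name__, broker="redis://localhost:6379/0")

@celery.on_after_configure.connect
def setup_periodic_tasks(sender, **kwargs):
    # Imported here to avoid a circular import with app/jobs/task.py.
    from app.jobs.task import run_query
    # Pass a signature (run_query.s()), not the result of calling run_query().
    sender.add_periodic_task(60 * 60, run_query.s(), name="run query every hour")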

Celery: why do I need a broker for periodic tasks?

I have a standalone script that scrapes a page, opens a connection to a database, and writes data to it. I need it to execute periodically, every x hours. I could do it with a bash script along the lines of this pseudocode:
while true
do
    python scraper.py
    sleep $((60 * 60 * x))
done
From what I have read about message brokers, they are used for sending "signals" from one running program to another, a bit like HTTP in principle. For example, I have a piece of code that accepts an email id from a user; it sends a signal with the email id to another piece of code that will send the email.
I need Celery to run a periodic task on Heroku. I already have MongoDB on a separate server. Why do I need to run another server for RabbitMQ or Redis just for this? Can I use Celery without a broker?
Celery's architecture is designed to scale and distribute tasks across several servers. For sites like yours it might be overkill. A queue service is generally needed to maintain the task list and signal the status of finished tasks.
You might want to take a look at Huey instead. Huey is a small-scale Celery "clone" needing only Redis as an external dependency, not RabbitMQ. It still uses Redis's queue mechanism to line the tasks up in a queue.
There is also Advanced Python Scheduler, which does not even need Redis and can hold the state of the queue in memory, in-process.
Alternatively, if you have a very small number of periodic tasks and no delayed tasks, I would just use cron and plain Python scripts to run the tasks.
As the Celery documentation explains:
Celery communicates via messages, usually using a broker to mediate between clients and workers. To initiate a task, a client adds a message to the queue, which the broker then delivers to a worker.
You can use your existing MongoDB database as the broker; see Using MongoDB.
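A minimal sketch of that route, assuming Kombu's (experimental) MongoDB broker transport; the connection URL, module name, and three-hour schedule are assumptions:
# scraper_tasks.py (sketch)
from celery import Celery

app = Celery("scraper_tasks", broker="mongodb://localhost:27017/celery_broker")

@app.task
def scrape():
    ...  # scrape the page and write the results to MongoDB

# Run the scrape every x hours (here x = 3) via celery beat.
app.conf.beat_schedule = {
    "scrape-every-3-hours": {
        "task": "scraper_tasks.scrape",
        "schedule": 60 * 60 * 3,
    },
}
A single process started with celery -A scraper_tasks worker -B would then cover both the worker and the beat scheduler for a small job like this.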
For an application like this, it is better to use Django Background Tasks.
Installation
Install from PyPI:
pip install django-background-tasks
Add to INSTALLED_APPS:
INSTALLED_APPS = (
    # ...
    'background_task',
    # ...
)
Migrate your database:
python manage.py makemigrations background_task
python manage.py migrate
Creating and registering tasks
To register a task use the background decorator:
from background_task import background
from django.contrib.auth.models import User

@background(schedule=60)
def notify_user(user_id):
    # lookup user by id and send them a message
    user = User.objects.get(pk=user_id)
    user.email_user('Here is a notification', 'You have been notified')
This will convert notify_user into a background task function. When you call it from regular code, it will actually create a Task object and store it in the database. The database then contains serialised information about which function actually needs to run later on. This does place limits on the parameters that can be passed when calling the function: they must all be serializable as JSON. Hence, in the example above, a user_id is passed rather than a User object.
Calling notify_user as normal will schedule the original function to be run 60 seconds from now:
notify_user(user.id)
This is the default schedule time (as set in the decorator), but it can be overridden:
notify_user(user.id, schedule=90) # 90 seconds from now
notify_user(user.id, schedule=timedelta(minutes=20)) # 20 minutes from now
notify_user(user.id, schedule=timezone.now()) # at a specific time
You can also run the original function right now, in synchronous mode:
notify_user.now(user.id) # launch a notify_user function and wait for it
notify_user = notify_user.now # revert task function back to normal function.
Useful for testing.
You can specify a verbose name and a creator when scheduling a task:
notify_user(user.id, verbose_name="Notify user", creator=user)
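One caveat worth adding (from the django-background-tasks documentation, as far as I recall): queued tasks are only executed while a worker process is running, which you start with the package's management command:
python manage.py process_tasks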

How do I get django celery to write to the test database for my functional tests?

I am working on a Django application. We're using celery to queue writes to our Mongo database. I'm trying to write a functional test (using Selenium) for a function that queues something in celery.
The problem is that celery writes to the main Mongo database instead of the test database. How can I set up my functional tests to work with an instance of celery that writes to the test database?
We're using 'django_nose.NoseTestSuiteRunner' as our TEST_RUNNER.
UPDATE:
I haven't been able to figure out how to use another instance of celery for the tests, but I have found a way to bypass celery for the functional tests.
In my settings.py:
import sys

FUNC_TEST_COMMAND = ['functional']
# a list comprehension rather than a bare filter() so len() also works on Python 3
func_test_command = [arg for arg in sys.argv if arg in FUNC_TEST_COMMAND]
if len(func_test_command) > 0:
    CELERY_ALWAYS_EAGER = True
This mimics the behaviour of an AsyncResult without sending anything through a message queue when running the functional test suite. (See http://celery.readthedocs.org/en/2.4/configuration.html#celery-always-eager for more info.)
This solution is probably not ideal for functional tests, because it cuts out one of the application layers.
Using CELERY_ALWAYS_EAGER = True does indeed bypass Celery's asynchronous processing. In order to write to the test database, you'll need to start your celeryd worker using the connection settings of the test database.
I'd suggest that you take a look at LiveServerTestCase if you're using an automated test client for running functional tests.
Then make sure you have a separate settings module that you run your tests with, one that properly configures Celery to use your project's database for transport.
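A minimal sketch of such a settings module; the module paths, the database naming, and the worker command are assumptions of mine:
# myproject/test_settings.py (sketch, not a drop-in configuration)
from myproject.settings import *  # noqa: F401,F403

# Point the default connection at the database Django's test runner creates
# (by default, "test_" prefixed onto the original name).
DATABASES["default"]["NAME"] = "test_" + DATABASES["default"]["NAME"]

# Start the worker for functional tests against the same settings, e.g.:
#   DJANGO_SETTINGS_MODULE=myproject.test_settings celery -A myproject worker -l info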
