Run a Celery worker that connects to the Django Test DB - python

BACKGROUND: I'm working on a project that uses Celery to schedule tasks that will run at a certain time in the future. These tasks push the state of a finite state machine (FSM) forward. Here's an example:
A future reminder is scheduled to be sent to the user in 2 days.
When that scheduled task runs, an email is sent, and the FSM is advanced to the next state
The next state is to schedule a reminder to run in another two days
When this task runs, it sends another email and advances the state
Etc...
I'm currently using CELERY_ALWAYS_EAGER as suggested by this SO answer
The problem with using that technique in tests is that the task code, which is meant to run in a separate worker, runs in the same thread as the code that schedules it. This causes the FSM state not to be saved properly and makes it hard to test. I haven't been able to determine exactly what causes it, but it seems that at the bottom of the call stack the current state is saved, and then, as the stack unwinds, an earlier state is saved over it. I could spend more time figuring out what goes wrong when the code is not running the way it should, but it seems more logical to get the code running the way it should and make sure it's doing what it should.
QUESTION: I would therefore like to know if there is a way to run a full Celery setup that Django can use during a test run. If it could be run automagically, that would be ideal, but even some manual intervention would be better than having to test the behavior by hand. I'm thinking something could be possible if I set a breakpoint in the tests, run a Celery worker that connects to the test DB, and then continue the Django tests. Has anyone tried something like this before?
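For reference, a minimal sketch of that idea (not a drop-in recipe): Celery ships an in-process test worker in celery.contrib.testing, and pairing it with Django's TransactionTestCase lets the worker thread see rows the test has committed. The myproject.celery import path is an assumption, and the test settings are assumed to point the broker and result backend at something reachable (the in-memory transport, for instance).

    # Sketch: start a real Celery worker inside the test process so it talks to
    # the Django test database. Import paths and settings are assumptions.
    from celery.contrib.testing.worker import start_worker
    from django.contrib.auth.models import User
    from django.test import TransactionTestCase

    from myproject.celery import app as celery_app  # placeholder import path

    @celery_app.task
    def count_users():
        # Runs in the worker thread, but in the same process, so it uses the
        # same Django settings and therefore the test database.
        return User.objects.count()

    class WorkerSeesTestDBTests(TransactionTestCase):
        def test_worker_reads_committed_rows(self):
            User.objects.create_user("alice")
            with start_worker(celery_app, perform_ping_check=False):
                result = count_users.delay()
                self.assertEqual(result.get(timeout=10), 1)

TransactionTestCase (rather than TestCase) matters here: it lets the test's writes actually commit, so the worker's own database connection can see them.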

What you are trying to do is not unit testing but rather functional / integration testing.
I would recommend using a BDD framework (Behave, Lettuce) and running the BDD tests from a CI server (Travis CI or Jenkins) against an external server (a staging environment, for example).
So, the process could be:
Push changes to GitHub
GitHub launches build on CI server
CI server runs unit tests
CI server deploys to integration environment (or staging, if you don't have integration)
CI server runs end-to-end integration tests against the newly deployed code
If everything succeeds, the build is promoted to "can be deployed to production" or something like that

Related

How to run long background processes - Heroku App - Dash Python

I'm working on a Baseball Simulator app with Dash. It uses an SGD model to simulate gameplay between a lineup and a pitcher. The app (under construction) can be found here: https://capstone-baseball-simulator.herokuapp.com/ and the repo: https://github.com/c-fried/capstone_heroku
To summarize the question: I want to be able to run the lineup optimizer on the heroku server.
There are potentially two parts to this: 1. running the actual function while avoiding the timeout, and 2. displaying the progress of the function as it runs.
There are several issues I'm having with solving this:
The function is expensive and cannot be completed before the 30-second timeout. (It takes several minutes to complete.)
For this, I attempted to follow these instructions (https://devcenter.heroku.com/articles/python-rq) by creating a worker.py (still in the repo), moving the function to the external .py...etc. I believe the problem was that the process was still taking too long and was therefore being terminated.
I'm (knowingly) using a global variable in the function which works when I run locally but does not work when deployed (for reasons I somewhat understand - workers don't share memory https://dash.plotly.com/sharing-data-between-callbacks)
I was using a global to be able to see live updates of what the function was doing as it ran. Again, it worked as a hack locally but doesn't work on the server. I don't know how else I can watch the progress of the function without some kind of global state involved. I'd love a clever solution to this, but I can't think of one.
I'm not experienced with web apps, so thanks for the advice in advance.
A common approach to solving this problem is to:
Run the long calculation asynchronously, e.g. using a background service
On completion, put the result in a shared storage space, e.g. a redis cache or an S3 bucket
Check for updates using an Interval component or a Websocket component
I can recommend Celery for keeping track of the tasks.
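A minimal sketch of that pattern, assuming Redis is available both as the Celery broker and as the shared storage; the task body, key names, and component IDs below are placeholders, not the app's actual code:

    # Sketch: the long job runs in a Celery worker and writes progress to Redis;
    # the Dash app polls it with a dcc.Interval. All names are illustrative.
    import json

    import redis
    from celery import Celery
    from dash import Dash, dcc, html
    from dash.dependencies import Input, Output

    redis_client = redis.Redis.from_url("redis://localhost:6379/0")
    celery_app = Celery(__name__, broker="redis://localhost:6379/0")

    @celery_app.task
    def optimize_lineup(job_id, lineup):
        for step in range(100):
            # ...one expensive simulation step here...
            redis_client.set(f"progress:{job_id}", json.dumps({"percent": step + 1}))
        redis_client.set(f"result:{job_id}", json.dumps({"best_lineup": lineup}))

    app = Dash(__name__)
    app.layout = html.Div([
        html.Div(id="progress"),
        dcc.Interval(id="poll", interval=2000),  # check for updates every 2 seconds
    ])

    @app.callback(Output("progress", "children"), Input("poll", "n_intervals"))
    def show_progress(_):
        raw = redis_client.get("progress:demo-job")  # fixed job id for the sketch
        return f"Progress: {json.loads(raw)['percent']}%" if raw else "Not started yet"

On Heroku this would mean a separate worker dyno running the Celery worker, so the web dyno only enqueues the job and serves the polling callback.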

Django: How to ignore tasks with Celery?

Without changing the code itself, is there a way to ignore tasks in Celery?
For example, when using Django mail, there is a dummy backend setting. This is perfect since it allows me, from a .env file, to deactivate mail sending in some environments (like testing or staging). The code itself that handles mail sending is not changed with if statements or decorators.
For Celery tasks, I know I could do it in code using mocks or decorators, but I'd like to do it in a clean way that is 12-factor compliant, like with Django mail. Any ideas?
EDIT to explain why I want to do this:
One of the main motivations behind this is that it creates coupling between the Django web server and the Celery tasks.
For example, when running unit tests, if the broker server (Redis, in my case) is not running, then calling the delay() method freezes forever, because there is no timeout when Celery tries to send a task to Redis.
From an architecture point of view, this is very bad. I'd like my unit tests to run properly without requiring a running Celery broker!
Thanks!
As far as the coupling is concerned, your Django application would still be tied to Celery if you used a dummy backend; your tasks just wouldn't execute. Maybe this is acceptable in your case, but in my opinion it can cause some problems. For example, if the piece of code you are trying to test submits a task to Celery and, in a later part, tries to retrieve the result of that task, it will fail, because the dummy backend would never execute the task.
For unit testing, as you mentioned in your question, you can use the task_always_eager setting. If you turn it on, your Django app will no longer depend upon a running worker. It will execute tasks in the same thread in a synchronous fashion and return the result.
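A rough 12-factor-style sketch of that setting (assuming the Celery app is configured from Django settings with the CELERY namespace; the environment variable name is an assumption):

    # settings.py (sketch): drive eager execution from the environment so a
    # test/staging .env can switch it without touching the task code.
    import os

    # e.g. put CELERY_TASK_ALWAYS_EAGER=1 in the .env for testing/staging
    CELERY_TASK_ALWAYS_EAGER = os.environ.get("CELERY_TASK_ALWAYS_EAGER", "0") == "1"
    # Re-raise exceptions from eagerly executed tasks so test failures are visible.
    CELERY_TASK_EAGER_PROPAGATES = CELERY_TASK_ALWAYS_EAGER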

Apache - Running Long Running Processes In Background

Before you go any further: I am currently working in a very restricted environment. Installing additional DLLs/EXEs and other admin-like activities is frustratingly difficult. I am fully aware that some of the methodology described in this post is far from best practice...
I would like to start a long-running background process that starts/stops with Apache. I have a CGI-enabled Python script that takes as input all of the parameters necessary to run a complex "job". It is not feasible to run this job in the CGI script itself, because a) CGI is already slow to begin with, and b) multiple simultaneous requests would definitely cause trouble. The CGI script will do nothing more than enter the parameters into a "jobs" database.
Normally, I would set something up like MSMQ in conjunction with a Windows Service. I would have a web service add a job to the queue, and the windows service would be polling the queue at some standard interval - processing jobs in sequence...
How could I accomplish the same in Apache? I can easily enough create a python script to serve as the background job processor. My questions are:
How do I start the process with Apache, leave it running alongside Apache, and stop it when Apache stops?
How can I monitor the process and make sure it stays alive while Apache is running?
Any tips or insight welcome.
Note. OS is Windows Server 2008
Here's a pretty hacky solution for anyone looking to do something similar.
Set up a Windows scheduled task that does the background processing. Set it to run once a day or at whatever interval you want (the interval is irrelevant, as you'll see in the next steps).
In the Settings tab of the scheduled task, make sure the "Allow task to be run on demand" option is checked. Also, under the "If the task is already running..." text, make sure the "Do not start a new instance" option is selected.
Then, from the CGI script, it is possible to invoke the scheduled task from the command line (subprocess module); see here. With the options set above, if the task is already running, any subsequent on-demand runs are ignored (a sketch follows below).
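A minimal sketch of that invocation (the task name "BackgroundJobProcessor" is a placeholder; schtasks /Run is the standard Windows way to trigger a scheduled task on demand):

    # Sketch: trigger the Windows scheduled task on demand from the CGI script.
    # "BackgroundJobProcessor" is a placeholder for the real task name.
    import subprocess

    def kick_job_processor():
        # If the task is already running, the "Do not start a new instance"
        # setting makes this call a harmless no-op.
        subprocess.run(
            ["schtasks", "/Run", "/TN", "BackgroundJobProcessor"],
            check=False,  # don't fail the web request if the trigger is ignored
        )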

Message queues - how do I know who I am?

I have a Flask application that uses Nose to discover and run a series of tests in a particular directory. The tests take a long time to run, so I want to report the progress to the user as things are happening.
I use Celery to create a task that runs the test so I can return immediately and start displaying a results page. Now I need to start reporting results. I'm thinking in the test that I can just put a message on the queue that says 'I've completed step N'.
I know that Celery has task context I could use to determine which queue to write to, but the test isn't part of the task, it's a function that's called from the task. I also can't use a flask session, because that context is gone when the test run is moved to a task.
I have seen several ways to do data driven nose tests, such as test generators or nose-testconfig, but that doesn't meet the requirement that the message queue name will be dynamic and there may be several threads running the same test.
So, my question is: how do I tell the test that it corresponds to a particular Celery task, i.e. the one that started the test, so I can report its status on the correct message queue?
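One hedged sketch of the idea outlined above: a bound Celery task knows its own id (self.request.id), and that id can be handed explicitly to the function that drives the tests, so the progress channel is derived from the task rather than from any global context. The run_test_suite helper, the Redis channel naming, and the broker URL are all assumptions for illustration.

    # Sketch: pass the Celery task id down into the test-running function so
    # progress messages land on a per-task channel. Names are placeholders.
    import redis
    from celery import Celery

    celery_app = Celery(__name__, broker="redis://localhost:6379/0")

    def run_test_suite(test_dir, progress_channel):
        # The test driver receives the channel name explicitly instead of
        # relying on task context or a Flask session.
        r = redis.Redis()
        r.publish(progress_channel, "I've completed step 1")
        # ...invoke nose here and publish further progress messages...

    @celery_app.task(bind=True)
    def run_tests_task(self, test_dir):
        channel = f"test-progress:{self.request.id}"  # unique per task invocation
        run_test_suite(test_dir, progress_channel=channel)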

Run a repeating task for a web app

This seems like a simple question, but I am having trouble finding the answer.
I am making a web app which would require the constant running of a task.
I'll use sites like Pingdom or Twitterfeed as an analogy. As you may know, Pingdom checks uptime, so it is constantly checking websites to see if they are up, and Twitterfeed checks RSS feeds to see if they've changed and then tweets the changes. I too need to run a simple script to cycle through URLs in a database and perform an action on them.
My question is: how should I implement this? I am familiar with cron, currently using it to do my server backups. Would this be the way to go?
I know how to make a Python script which runs indefinitely, starting back at the beginning with the next URL in the database when I'm done. Should I just run that on the server? How will I know it is always running and doesn't crash or something?
I hope this question makes sense and I hope I am not repeating someone else or anything.
Thank you,
Sam
Edit: To be clear, I need the task to run constantly. As in: check URL 1 in the database, check URL 2 in the database, check URL 3 and, when it reaches the last one, go right back to the beginning. Thanks!
If you need the task to run repeatedly and it can be run from the command line, that's exactly what cron is ideal for.
I don't see any drawbacks to this approach.
Update:
Okay, I saw the issue somewhat differently. Now I see several solutions:
a) run the cron task at set intervals and let it process one batch of data per run, picking up the next batch on the following run; use PID files, the database, or semaphores to avoid parallel processes (a sketch follows after this list);
b) update the processes that insert/update data in the database, so the information is processed at the moment it is inserted/updated;
c) write a daemon process that resides in memory and checks the data in real time.
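A minimal sketch of option a), assuming a Unix host where cron invokes the script; the lock-file path, crontab line, and the per-URL action are placeholders:

    # Sketch: cron-invoked batch script guarded by a lock file so overlapping
    # runs never process URLs in parallel. Paths and helpers are placeholders.
    # Example crontab entry (every 5 minutes):
    #   */5 * * * * /usr/bin/python3 /path/to/check_urls.py
    import fcntl
    import sys

    LOCK_PATH = "/tmp/url-checker.lock"

    def load_urls_from_db():
        return ["https://example.com"]  # placeholder for the real database query

    def check_url(url):
        print("checking", url)  # placeholder for the real per-URL action

    def main():
        with open(LOCK_PATH, "w") as lock:
            try:
                fcntl.flock(lock, fcntl.LOCK_EX | fcntl.LOCK_NB)
            except BlockingIOError:
                sys.exit(0)  # a previous run is still going; skip this one
            for url in load_urls_from_db():
                check_url(url)

    if __name__ == "__main__":
        main()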
cron would definitely be a way to go with this, as well as any other task scheduler you may prefer.
The main point is found in the title to your question:
Run a repeating task for a web app
The background task and the web application should be kept separate. They can share code, they can share access to a database, but they should be separate and discrete application contexts. (Consider them as separate UIs accessing the same back-end logic.)
The main reason for this is because web applications and background processes are architecturally very different and aren't meant to be mixed. Consider the structure of a web application being held within a web server (Apache, IIS, etc.). When is the application "running"? When it is "on"? It's not really a running task. It's a service waiting for input (requests) to handle and generate output (responses) and then go back to waiting.
Web applications are for responding to requests. Scheduled tasks or daemon jobs are for running repeated processes in the background. Keeping the two separate will make your management of the two a lot easier.
