I have a Django web application hosted on IIS. A subprocess should run continuously alongside the web application. When I run the application locally using
python manage.py runserver
the background task runs perfectly while the application is running. However, when hosted on IIS, the background task does not appear to run. How do I make the task run even when hosted on IIS?
In Django's manage.py file I have the following code:
import subprocess

def run_background():
    # note: the keyword argument is creationflags (plural); this flag is Windows-only
    return subprocess.Popen(["python", "background.py"],
                            creationflags=subprocess.CREATE_NEW_PROCESS_GROUP)

run_background()
execute_from_command_line(sys.argv)
What can be done to make the background task always run even on IIS?
Celery is a classic option for a background task manager.
https://pypi.org/project/celery/
Alternatively, I've used a library called schedule when I wanted something a little more lightweight. Note that schedule is still in its infancy. If you need something that will maintain support down the line, go with Celery to be safe.
https://pypi.org/project/schedule/
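For reference, a minimal schedule loop looks roughly like this (the job body is just a placeholder):

import time
import schedule

def job():
    print("running background task")  # placeholder work

schedule.every(10).minutes.do(job)  # register the job

while True:  # simple polling loop
    schedule.run_pending()
    time.sleep(1)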
Without knowing the context of your project, I can't say which I would choose, but they're both good options for task management.
On Windows, you can use Task Scheduler to automatically start your background process when Windows starts, using an arbitrary user account.
This was the "officially suggested" solution for Celery 3 on Windows until some years ago, and I believe it can be easily adapted to run any process.
You can find a detailed explanation here:
https://www.calazan.com/windows-tip-run-applications-in-the-background-using-task-scheduler/
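As a rough illustration (the task name and paths below are placeholders; adjust them to your setup), registering a script to start at boot looks like this:

schtasks /Create /TN "DjangoBackgroundTask" /TR "C:\Python\python.exe C:\app\background.py" /SC ONSTART /RU myuser

The /RU option is what lets the task run under an arbitrary user account, as the linked article describes.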
I am trying to build a Flask application on Windows where a user uploads a big Excel file, which is then processed in Python, taking 4-5 minutes. I need to process those tasks in the background after the user uploads the file.
I tried RQ, Celery, etc., but those were not working on Windows, and I have never worked on Linux. I need some advice on how to achieve this.
Celery and RQ can work on Windows, but both have some trouble.
For RQ use this, and for Celery use this.
I don't think it's accurate to say that you can't run RQ on Windows; it just has some limitations (as you can see in the documentation).
Since you can run Redis on Windows, you might want to give other Redis-based task queues a try. One such example is huey. There are at least examples of people who were successful running it on Windows (e.g. look at this SO question).
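A minimal huey setup looks roughly like this (the module and task names here are just placeholders):

# tasks.py
from huey import RedisHuey

huey = RedisHuey('my-app', host='localhost')

@huey.task()
def process_file(path):
    ...  # the slow Excel processing goes here

You then run the consumer in a separate process, e.g. huey_consumer.py tasks.huey, and enqueue work by calling process_file('upload.xlsx') from the Flask view.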
I solved this by using WSL (Windows Subsystem for Linux) and running my RQ worker inside WSL.
I am not sure whether I will face any issues in the future, but as of now it is queuing and processing tasks as I desire.
Might be useful for somebody with the same problem.
I have a django-based function that needs to be run only once in Django, when the app boots up. The tricky part is that:
The code in question heavily uses django ORM, so apps have to be ready at that point,
The code should run only once - e.g. not once per worker, but exactly once per "website" (regardless of whether it's deployed via gunicorn with a dozen workers, or run locally via the built-in development web server),
The code should only run once the app boots, but NOT by running other management commands. The code can take a while to complete, and I don't want it to trigger each time I run makemigrations or shell.
I could, theoretically, just introduce a locking mechanism and let it run somewhere in AppConfig.ready() method, but that would still run in all management commands as well.
Since the app is packaged in Docker, I've also been thinking about simply wrapping the code in a separate management command and running it in the entrypoint, just before the app is started. That seems to do the trick, but it will only happen automatically in that particular container - if somebody runs a local development server to work on the app on their own, they might not be aware that an additional command should be run.
I've searched through the documentation and it doesn't look like Django has a way to do this natively on its own. Perhaps there's a better approach that I can't think of?
I had exactly the same need. I run the startup code in AppConfig.ready(), gated on an environment variable that you set when you don't want the startup code to be executed.
import os
from django.apps import AppConfig
from django.conf import settings

SKIP_STARTUP = settings.DEBUG or os.environ.get('SKIP_STARTUP')

class MyAppConfig(AppConfig):
    name = 'myapp'  # adjust to your app's dotted path
    def ready(self):
        if not SKIP_STARTUP:
            ...  # startup stuff goes here
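If you also need the startup code to skip management commands like makemigrations or shell (which the question explicitly asks for), one common heuristic - an assumption on my part, not something Django provides natively - is to inspect sys.argv inside ready():

import sys

def is_serving():
    # True for `manage.py runserver` and for gunicorn workers (whose
    # argv[0] is the gunicorn entry point); management commands such as
    # `makemigrations` or `shell` fall through to False.
    return 'runserver' in sys.argv or 'gunicorn' in sys.argv[0]

Combined with the environment-variable gate above, the startup code then only fires when the app is actually serving requests.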
I have a web application that runs locally only (it does not run on a remote server). The web application is basically just a UI to adjust settings and see some information about the main application. A web UI was used, as opposed to a native application, for portability and ease of development. Now, I want to start and stop the main application through a button in the web application. However, I couldn't find a suitable way to start an asynchronous, managed task locally. I saw there is a library called Celery, but that seems to be suited to a distributed environment, which mine is not.
My main need is to be able to start/stop the task, as well as to check whether the task is running (so I can display that in the UI). Is there any way to achieve this?
Celery can work just fine locally. Distributed is just someone else's computer, after all :)
You will have to install all the same requirements and the like. You can kick off workers by hand, or as a service, just like in the Celery docs.
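A minimal local setup might look like this (the Redis broker URL is an assumption; any broker Celery supports will do):

# tasks.py
from celery import Celery

app = Celery('ui_tasks',
             broker='redis://localhost:6379/0',
             backend='redis://localhost:6379/0')

@app.task
def main_application():
    ...  # placeholder for the long-running work the UI controls

Start a worker with celery -A tasks worker, kick the task off with result = main_application.delay(), poll result.state to show its status in the UI, and stop it with app.control.revoke(result.id, terminate=True).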
The thing is, I read a post stating best practices for setting up code to run at a specified interval over a period of time using the Python library APScheduler. It obviously works perfectly fine if I do it in a test environment and run the application from the command prompt.
However, I come from a background where most of my projects were university-level and never ran in production, but I would like this one to. I have access to AWS and can configure any kind of server on AWS, and I am open to other options as well. It would be great to get a head start on what to look at if I have to run this application as a service from a server or a remote machine, without having to constantly monitor it and provide interrupts at the command prompt.
I do not have any experience running Python applications in production, so any input would be appreciated. Also, I do not know how to execute this code in production (except through the AWS CLI), but that session expires once I close my CLI, so it does not seem like the most appropriate way to do it; any help on that end would be appreciated too.
The answer was very simple; it may not make a lot of sense and might not be applicable to everyone.
What I had was a Python Flask application, so I configured the app in a virtual environment using eb-virt on the AWS server, then created an executable WSGI script, which I ran as a service using the mod_wsgi plugin for the Apache HTTP server. After that I was able to run my app.
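For illustration, the WSGI script that mod_wsgi loads can be as small as this (the project path and module name are placeholders):

# myapp.wsgi
import sys
sys.path.insert(0, '/var/www/myapp')  # hypothetical project directory

# mod_wsgi looks for a callable named `application`
from myapp import app as application

The scheduler can then be started inside the Flask app (e.g. an APScheduler BackgroundScheduler created at import time), so it lives as long as the Apache worker process does.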
I want to give Celery a try. I'm interested in a simple way to schedule crontab-like tasks, similar to Spring's Quartz.
I see from Celery's documentation that it requires running celeryd as a daemon process. Is there a way to refrain from running another external process and simply run this embedded in my Django instance? Since I'm not interested in distributing the work at the moment, I'd rather keep it simple.
Add the CELERY_ALWAYS_EAGER = True option in your Django settings file and all your tasks will be executed locally (synchronously, in the same process). It seems that for periodic tasks you still have to run celery beat as well.
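A quick sketch of what eager mode looks like (the setting name here is the older Celery 3.x style, matching the answer; newer releases spell it task_always_eager):

# settings.py
CELERY_ALWAYS_EAGER = True  # .delay()/.apply_async() run inline, no worker needed

# tasks.py
from celery import shared_task

@shared_task
def ping():
    return 'pong'

With eager mode on, ping.delay().get() returns 'pong' immediately in the Django process, so no celeryd daemon is involved for ad-hoc tasks.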