Running "tasks" periodically with Django without seperate server - python

I realize similar questions have been asked, but they have all been about a specific problem, whereas I don't even know how I would go about doing what I need to.
That is: from my Django web app I need to scrape a website periodically while the app runs on a server. The first options I found were "django-background-tasks" (which doesn't seem to work the way I want it to) and 'celery-beat', which recommends getting another server if I understood correctly.
I figured just running a separate thread would work, but I can't seem to make that work without it interrupting the server and vice versa, and it's not the "correct" way of doing it.
Is there a way to run a task periodically in Django without needing a separate server or a request being made to the app?

'celery-beat', which recommends getting another server if I understood correctly.
You can host celery (and any other needed components) on the same server as your Django app. They would be separate processes entirely.
It's not an uncommon setup to have a Django app + celery worker(s) + message queue all bundled into the same server deployment. Deploying on separate servers may be ideal, just as it would be ideal to distribute your Django app across many servers, but it is by no means necessary.
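For illustration, a same-server setup along those lines might look like the sketch below, with Redis assumed as the broker and scrape_site() standing in for the actual scraping logic (all names are placeholders). The worker and the beat scheduler each run as their own process next to the Django app:

    # tasks.py -- a minimal sketch, assuming Redis runs on the same box
    from celery import Celery
    from celery.schedules import crontab

    app = Celery("tasks", broker="redis://localhost:6379/0")

    @app.task
    def scrape_site():
        ...  # fetch and parse the target website, save results to the DB

    app.conf.beat_schedule = {
        "scrape-every-half-hour": {
            "task": "tasks.scrape_site",
            "schedule": crontab(minute="*/30"),
        },
    }

    # Alongside the Django app, on the same server:
    #   celery -A tasks worker   (executes the tasks)
    #   celery -A tasks beat     (fires them on schedule)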

I'm not sure if this is the "correct" way, but it was a cheap and easy way for me to do it. I just created custom Django management commands and had them run via a scheduler such as cron; in my case I used the Heroku Scheduler for my app.
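As a sketch of that approach (the app and command names here are made up), a management command lives under an app's management/commands/ directory and can then be triggered by any external scheduler:

    # myapp/management/commands/scrape.py -- hypothetical names throughout
    from django.core.management.base import BaseCommand

    class Command(BaseCommand):
        help = "Scrape the external site and store the results."

        def handle(self, *args, **options):
            ...  # the periodic work goes here
            self.stdout.write(self.style.SUCCESS("Done."))

A crontab entry such as */30 * * * * /path/to/venv/bin/python /path/to/manage.py scrape (or a Heroku Scheduler job running python manage.py scrape) then takes care of the scheduling.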

Related

Send scheduled email django

I have made a small Django app to fit all my needs. I will use it at company level to track the simple tasks of a couple of mechanical engineers. Now the only thing left is to send scheduled emails from my Django app (every day at noon; if someone is behind with work, he would get an email). Since I'm using Windows and I'll deploy this app on Windows, I can't use a cron job (that only works on Linux, as I've seen on forums), which would be simple and easy. The only way I have found so far is django-celery-beat. This is not so easy to set up, and I need to run a 'worker' each time I run my server. This is a bit above my level and I would need to learn a lot more (and it needs a message broker, like RabbitMQ, which I would also need to run and learn to implement).
I was wondering: is there an easier way to send a simple email every day at noon? I don't want to install additional apps; I wish to keep it as simple as possible.
You can do it by Dockerizing Django with Redis and Celery.
Dockerizing is the process of packaging, deploying, and running applications using Docker containers.
To read more about Dockerizing, see the guide "Dockerizing Django with Postgres, Redis and Celery".
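Whichever way it is deployed, the daily email itself would just be a beat-scheduled task; a rough sketch, with the task body and recipients as placeholders:

    # tasks.py -- a sketch only; assumes Django's email settings are configured
    from celery import Celery
    from celery.schedules import crontab
    from django.core.mail import send_mail

    app = Celery("tasks", broker="redis://localhost:6379/0")

    @app.task
    def notify_late_engineers():
        # Query the database for engineers who are behind, then email them.
        send_mail(
            "Overdue tasks",
            "You have tasks behind schedule.",
            "noreply@example.com",
            ["engineer@example.com"],  # placeholder recipients
        )

    app.conf.beat_schedule = {
        "daily-noon-email": {
            "task": "tasks.notify_late_engineers",
            "schedule": crontab(hour=12, minute=0),  # every day at noon
        },
    }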

Good way to run ONE indefinite process on Django Framework

I'm building a web app using the Django framework. There isn't much user interaction: only a few static links, a navbar, and a few plots which come from the app itself. The main part of the app is a Python script which reads data from an external source, does some data processing on it, and then writes to my Django database. After writing to the database, a new page is created with information about the database entry. Note that there is no user interaction, so there is no starting or stopping of the task. I want the task to run in the background 24/7.
Thus far I've looked into celery and django-background-tasks. Celery seems like a bit much for my use case: I don't think I need a broker service, as I just want to run one task which the user will never interact with, and I don't need multiple workers. django-background-tasks seems like a good lightweight alternative, but it seems it does not support indefinite tasks without having to refresh the task every once in a while (ideally I don't want this). Is there a tool that is better suited for this use case? Or am I just completely misunderstanding celery and django-background-tasks?
Update
Thanks for the comments, everyone! I looked up what was mentioned by @D Malan, and I think a tool like Supervisord might fit my use case. That is, I can run a Python script in the background, separately from the Django application, and then have that script interact with the Django app. The only problem I have now is getting the process to interact with the Django app. @deceze mentioned invoking a management command from the Python script. Would this involve spawning a subprocess from the Python script that calls a custom Django management command to update my database? Or can I use django.core.management.call_command from a Python file separate from the Django application? If so, how would it know where to get the command from?
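For reference, django.core.management.call_command does work from an external script, provided Django is configured first; a minimal sketch, with the settings module and command name as placeholders:

    # standalone_worker.py -- run under Supervisord, outside the web server
    import os
    import time

    import django

    # Point at the project's settings before touching anything ORM-related.
    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "mysite.settings")
    django.setup()

    from django.core.management import call_command

    while True:
        # Either invoke a custom management command by name...
        call_command("process_external_data")  # hypothetical command
        # ...or import your models after django.setup() and write directly.
        time.sleep(60)  # pause between runs; tune as needed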

How does server-side rendering work with a non-Node.js backend on Heroku?

I have been developing a Python app that serves a React frontend with server-side rendering.
Locally, this has worked fine, as I'm able to run two servers on separate ports to handle different parts of my application. My Python backend receives the initial request and then sends an HTTP request to my Node.js server, which does the server-side rendering. The result is sent back to my Python backend, which injects the server-rendered frontend into the HTML that is sent to the client.
However, Heroku limits applications to a single, dynamically generated port. This limits me to only running one web server which means I'm no longer able to run my Node.js server to do my server-side rendering. I have thought of some gimmicky ways to make this work, but I don't want to have to create an entirely new app on Heroku just to run the Node.js server I need.
I'm not sure how I can make this work with these limitations in place so I'm hoping I can learn some alternative ways to make this work on Heroku. What are some viable workarounds to handle this problem?
I think you need to create two separate apps on Heroku (even though you don't want to); as far as I know, there are no other options available on Heroku.
I use Heroku for an SSR application running on two apps: one for the frontend (React) and one for the backend (Node.js). Works like a charm.
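For concreteness, the render hand-off described in the question looks roughly like this on the Python side, whether the Node.js server is local or a second Heroku app; the endpoint and payload here are made up:

    # A sketch only -- the render service URL and its API are hypothetical.
    import requests

    NODE_RENDERER = "https://my-ssr-service.example.com/render"

    def render_page(path):
        # Ask the Node.js process to server-render the requested route.
        resp = requests.post(NODE_RENDERER, json={"url": path}, timeout=5)
        resp.raise_for_status()
        # The returned markup gets injected into the HTML sent to the client.
        return resp.text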

Running Python through FastCGI with nginx on Ubuntu

I've already looked at other threads on this, but most don't go into enough setup detail, which is where I need help.
I have an Ubuntu based VPS running with nginx, serving PHP sites through php-cgi on port 9000.
I'd like to start doing more with Python, so I've written a deployment script which I essentially want to use as a post-receive hook on my local GitLab server as my first Python script. I can run this script successfully with python script.py on the command line, but in order to use it as a post-receive hook I need to be able to access it via HTTP.
I looked at this guide on the nginx wiki, but partway down it says to:
And start the django fastcgi process:
python ./manage.py runfcgi host=127.0.0.1 port=8080
Now, like I said, I am pretty new to Python, and I have never used the Django framework. Can anyone assist with how I am supposed to start the FastCGI server? Do I replace ./manage.py with the name of my script? Any help would be appreciated, as everything I've found online refers to working with Django.
Do I replace ./manage.py with the name of my script?
No. It's highly unlikely your script is a FastCGI server, or that it can accept HTTP requests of any kind since you mention running it over the command line. (From what little I know of FastCGI, an app supporting it has to be able to handle a stream of requests coming in over stdin in a specific format, so there's definitely some plumbing involved.)
I'd say the easiest approach would be to use some web framework just to act as HTTP/FastCGI middleware. For your use case, a "microframework" like Flask (or even Paste, though I found its documentation inscrutable) sounds like it'd work fine. The idea would be to have two interfaces to your main code: one that handles command-line arguments and one that handles an HTTP request, with both ultimately calling the one function that actually does the work. (That is, if you want to keep the command-line version of the app.)
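A sketch of that two-interface idea with Flask, assuming the existing logic can be pulled into a deploy() function in script.py (all names hypothetical):

    # hook.py -- wraps the existing deployment logic in a tiny web app
    from flask import Flask

    from script import deploy  # the function that actually does the work

    app = Flask(__name__)

    @app.route("/deploy", methods=["POST"])
    def deploy_hook():
        # HTTP interface, for the GitLab post-receive hook to call.
        deploy()
        return "deployed\n"

    if __name__ == "__main__":
        # Command-line interface: running python hook.py does the same work.
        deploy()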
The Flask documentation also mentions using uWSGI or standalone workers as deployment options. I'm not familiar with the former; the latter I wouldn't recommend for a simple, low-traffic app for the same reasons as the approach in the next paragraph.
Considering you use a VPS, you might even be able to just run the app as a standalone server process using the http.server module, but I'm not sure that's the better choice unless you absolutely want to avoid using any sort of framework. You'd have to make sure the app starts up if the server is rebooted and that it restarts when it crashes, and it seems easier to just have nginx do the job of the supervisor.
UPDATE: Scratch that, it seems that nginx won't handle supervising a FastCGI worker process for you, which would've been the main advantage of the approach. In light of that it doesn't matter which of the three approaches you use since you'll have to set up a service supervisor one way or the other. I'd say go with uWSGI since flup (which is needed for Flask+FastCGI) seems abandoned since 2011, and the uWSGI protocol is apparently supported in nginx natively. Otherwise you'd need to use a different webserver than nginx, one that will manage a FastCGI worker for you. If this is an option, I'd consider Cherokee, which can be configured using a web GUI.
tl;dr: you need to write a (very simple) webapp. While it is feasible to do this without a web framework of any kind, in my opinion using one is easier, since you get some (nontrivial) plumbing for free and there's a lot of guidance available on how to deploy them.

How to make Python / Nginx / FastCGI automatically recompile code when it is updated/changed?

So I've been working on my first Django/Python project and I have my production server up and running. I was wondering if it's possible to make Python/FastCGI (not really sure which is responsible for the task) recompile my code. As of right now, when I upload updated code, I need to restart the server for the changes to take effect. I read that you can add some kind of mysite.fcgi file to lighttpd so it sees that you've updated the code; can you do the same for nginx/FastCGI?
For anyone else who was interested in my question: this is only a partial solution, but I ended up finding my answer here: How to gracefully restart django running fcgi behind nginx?
You can just run the script (I'm going to modify it a bit) every time you edit your code, and it will gracefully restart everything without dropping connections.
This is a general guide from the mod_wsgi project that outlines how you can monitor code changes from your app_wsgi.py and restart the current process if any of the modules have changed. You need to restart the Python process because threads contending over modules could mean that a freshly reloaded module ends up with outdated references from other modules that are still waiting to be discovered for reload.
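Condensed, the monitor from that guide amounts to something like the following sketch (the mod_wsgi documentation's version handles more edge cases):

    # Poll the source file of every imported module and signal the process
    # to die when one changes, so the supervisor restarts it with fresh code.
    import os
    import signal
    import sys
    import threading
    import time

    def _module_files():
        for module in list(sys.modules.values()):
            filename = getattr(module, "__file__", None)
            if filename:
                # Map compiled .pyc/.pyo files back to their .py sources.
                yield filename[:-1] if filename.endswith((".pyc", ".pyo")) else filename

    def _monitor(interval=1.0):
        mtimes = {}
        while True:
            for filename in _module_files():
                try:
                    mtime = os.stat(filename).st_mtime
                except OSError:
                    continue
                if mtimes.setdefault(filename, mtime) < mtime:
                    os.kill(os.getpid(), signal.SIGINT)
            time.sleep(interval)

    def start_change_monitor():
        threading.Thread(target=_monitor, daemon=True).start()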
If you want something that works nicely with nginx, Django and WSGI apps in general, take a peek at Spawning as your WSGI server. Its approach to code reloading is about as graceful as it gets.
It has great documentation, a well-documented request-handling model, and it just works, which makes it a no-brainer to configure. You'd need less than five minutes from now to have your Django instance running on Spawning. Here's another topical blog post to get your juices flowing.
