I have a simple Django app with a database that stores a series of messages and the datetimes at which I want them printed to screen. Is there a way to have Django call a method that checks whether any new messages need printing and, if so, prints them?
I have heard about Celery for scheduling tasks, but it seems like massive overkill for what I need.
After the clarification for your use case in the comment to Stewart's answer, I suggest using cronjobs and a custom manage.py command.
Model
To filter out all notifications that haven't been sent, it is a good idea to have a flag on the model, e.g. is_notified = models.BooleanField(default=False). This way it becomes fast and easy to filter the necessary messages, e.g. with MyModel.objects.filter(is_notified=False, send_on__lte=datetime.now()).
A custom manage.py command
In the custom manage.py command you have full access to your Django setup. Writing them is documented in Writing custom django-admin commands.
The command will usually (at least) do the following; a minimal sketch is shown after the list:
filter all notifications that should be sent
iterate over them and try to send the email
when successful, set is_notified to True and save the instance
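Here is a minimal sketch of such a command, assuming the model is called MyModel with a send_on datetime field and a text field holding the message body (all of these names are assumptions for illustration):

# myapp/management/commands/send_notifications.py
from django.core.management.base import BaseCommand
from django.utils import timezone

from myapp.models import MyModel

class Command(BaseCommand):
    help = "Send all pending notifications whose send_on time has passed."

    def handle(self, *args, **options):
        pending = MyModel.objects.filter(is_notified=False, send_on__lte=timezone.now())
        for message in pending:
            try:
                # Send the email / print the message here.
                self.stdout.write(message.text)
            except Exception as exc:
                self.stderr.write("Failed to send %s: %s" % (message.pk, exc))
                continue
            # Mark as sent only after a successful send.
            message.is_notified = True
            message.save()

Note that timezone.now() is used instead of datetime.now() so the comparison respects your USE_TZ setting.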
Cronjob
The cronjob is easy to set up. $ crontab -l will list all cronjobs that are currently installed. $ crontab -e will open the default editor (probably vi(m) or nano) to add new cronjobs.
Example: running the command every 5 minutes:
*/5 * * * * /home/foobar/my-virtualenv/bin/python /home/foobar/my-django-dir/manage.py my_django_command >> /home/logs/my_django_command.log 2>&1
To add the job, paste the snippet on a new line in the file that opens after calling $ crontab -e, then save the file.
*/5 * * * *
specifies to run the cronjob every five minutes.
/home/foobar/my-virtualenv/bin/python
specifies to call Python from your virtualenv (if you use one) rather than the system version.
/home/foobar/my-django-dir/manage.py my_django_command
calls your manage.py command just like you would do.
>> /home/logs/my_django_command.log 2>&1
specifies that all output (standard output and errors) generated by the manage.py command will be appended to the file my_django_command.log. Just make sure that the directory (in this case /home/logs) exists.
Do you need the messages printed to the page without the user refreshing their browser? If so, you need to write some JavaScript AJAX code that continuously polls a view in your application for new content to write to the page.
Here's an example tutorial on AJAX using Django: https://realpython.com/blog/python/django-and-ajax-form-submissions/
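For the server side of such polling, a minimal sketch of a view that returns unsent messages as JSON might look like this (model and field names are assumptions carried over from the answer above):

# views.py
from django.http import JsonResponse
from django.utils import timezone

from myapp.models import MyModel

def pending_messages(request):
    # Return every message that is due but has not yet been shown.
    due = MyModel.objects.filter(is_notified=False, send_on__lte=timezone.now())
    return JsonResponse({"messages": [m.text for m in due]})

The JavaScript side would then call this endpoint on a timer (e.g. with setInterval and fetch) and append any returned messages to the page.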
If you don't want to use cron then you could use django-chronograph.
Related
I have a shell script that calls ./manage.py a few times, and I would like to replicate the same functionality in a Python 3.9.2 script. I have tried subprocess.run and os.system but get hung up for various reasons. Currently the shell script looks like:
./manage.py dump_object water_testing.watertest '*' > ./water_testing/fixtures/dump_stevens.json
./manage.py dump_object tp.eqsvc '*' >> ./water_testing/fixtures/dump_stevens.json
...
It will take time to dissect the custom management commands suggested below, so I will need to formulate a timeline for management approval. Does anyone have an explanation of how Django attempts to tackle the security implications of this? We need a quick fix for dev and some pointers on prod. A down-and-dirty fix is what we're looking for in the meantime, so if anyone has a working example that would be awesome!
# `input` args/params are necessary
# `capture_output` is good if we need to do something with the output later
# `check` that the subprocess actually fired off and completed; tracebacks are crucial
output = subprocess.run(["manage.py"], input="dump_object water_testing.watertest '*' > ./water_testing/fixtures/dump_stevens.json", capture_output=True, text=True, check=True)
# this won't work either
os.system("python ./manage.py dump_object water_testing.watertest '*' > ./water_testing/fixtures/dump_stevens.json")
Maybe we just need a link on how to call a Python script from another Python script, and a nudge on how to break the process down so we can work out the solution ourselves. Thanks ahead of time for your consideration.
You can use call_command to run manage.py commands from your own Python code:
from django.core import management
management.call_command('makemigrations')
You can also specify whether the command should run interactively and pass additional command arguments.
https://docs.djangoproject.com/en/3.2/ref/django-admin/#django.core.management.call_command
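Applied to the shell script in the question, a minimal sketch might look like this (dump_object appears to come from a third-party package such as django-fixture-magic, which is an assumption based on the command name; call_command supports redirecting output via its stdout option):

from django.core import management

with open("./water_testing/fixtures/dump_stevens.json", "w") as out:
    # Equivalent of: ./manage.py dump_object water_testing.watertest '*' > dump_stevens.json
    management.call_command("dump_object", "water_testing.watertest", "*", stdout=out)

Note this runs in-process, so Django must be configured first (e.g. by setting DJANGO_SETTINGS_MODULE and calling django.setup() in a standalone script).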
I'm trying to automate the following via Fabric:
SSH to a remote host.
Execute a python script (the Django management command dbshell).
Pass known values to prompts that the script generates.
If I were to do this manually, it would look something like:
$ ssh -i ~/.ssh/remote.pem ubuntu@10.10.10.158
ubuntu@10.10.10.158$ python manage.py dbshell
postgres=> Password For ubuntu: _____ # i'd like to pass known data to this prompt
postgres=> # i'd like to pass known data to the prompt here, then exit
=========
My current solution looks something like:
from fabric.api import run
from fabric.context_managers import settings as fabric_settings
with fabric_settings(host_string='10.10.10.158', user='ubuntu', key_filename='~/.ssh/remote.pem'):
run('python manage.py dbshell')
# i am now left wondering if fabric can do what i'm asking....
Replied to Sean via Twitter on this, but the first thing to check out here is http://docs.fabfile.org/en/1.10/usage/env.html#prompts - not perfect but may suffice in some situations :)
The upcoming v2 has a more solid implementation of this feature in the pipeline, and that will ideally leave room for a more pexpect-like API (meaning, something more serially oriented) as an option too.
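In the meantime, using the env.prompts mapping from the link above, a minimal sketch might look like this (the exact prompt string and password are assumptions; the key must match the prompt text exactly):

from fabric.api import env, run
from fabric.context_managers import settings as fabric_settings

env.prompts = {"Password For ubuntu: ": "known-password"}

with fabric_settings(host_string='10.10.10.158', user='ubuntu', key_filename='~/.ssh/remote.pem'):
    run('python manage.py dbshell')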
You can use Pexpect, which runs the command and watches its output; when the output matches a given pattern, Pexpect can respond as if a human were typing.
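A minimal Pexpect sketch for this case (the prompt patterns and password are assumptions taken from the transcript in the question):

import pexpect

child = pexpect.spawn("python manage.py dbshell")
child.expect("Password For ubuntu: ")
child.sendline("known-password")
child.expect("postgres=>")
child.sendline("\\q")  # run your statements here, then quit psql
child.expect(pexpect.EOF)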
I am looking to run a Python script every n days as part of a Django app.
I need this script to add new rows to the database for each attribute that a user has.
For example a User has many Sites each that have multiple Metric data points. I need to add a new Metric data point for each Site every n days by running a .py script.
I have already written the script itself, but it just works locally.
Is it the right approach to:
1) Get something like Celery or just a simple cron task running to run the python script every n days.
2) Have the python script run through all the Metrics for each Site and add a new data point by executing a SQL command from the python script itself?
You can write a Django management command, then use crontab to run that command periodically.
The first step is to write the script - which you have already done. This script should run the task which is to be repeated.
The second step is to schedule the execution.
The de-facto way of doing that is to write a cron entry for the script. This will let the system's cron daemon trigger the script at the interval you select.
To do so, create an entry in the crontab file for the user that owns the script; the command to do so is crontab -e, which will load a plain text file in your preferred editor.
Next, you need to add an entry to this file. The format is:
minutes hours day-of-month month day-of-week script-to-execute
So if you want to run your script every 5 days:
0 0 */5 * * /usr/bin/python /home/user/path/to/script.py
Note that */5 in the day-of-month field runs on days 1, 6, 11, and so on, so the interval resets at the start of each month.
I have a shell script that I want to run automatically every day at 8 AM, but I am not authorised to use crontab because I don't have root permission.
My home directory is /home/user1/.
Any suggestions?
Ideally you should have your system administrator add your user account to /etc/cron.allow - without that you do not have permission to use the crontab command, as you probably discovered.
If that is not possible, then you could use a wrapper shell script along these lines (note: untested):
#!/bin/bash
TIME="tomorrow 8am"
while :; do
# Get the current time in seconds since the Epoch
CURRENT=$(date +%s)
# Get the next start time in seconds
NEXT=$(date +%s -d "$TIME")
# Sleep for the intervening time
sleep $((NEXT-CURRENT))
# Execute the command to repeat
/home/user1/mycommand.sh
done
You start the wrapper script in the background, or e.g. in a screen session, and as long as it's active it will regularly execute your script. Keep in mind that:
Much like cron there is no real accuracy w.r.t. the start time. If you care about timing to the second, then this is not the way to go.
If the script is interrupted for whatever reason, such as a server reboot, you will have to somehow restart it. Normally I'd suggest an @reboot entry in crontab, but that seems to not be an option for you.
If there is some sort of process-cleaning mechanism that kills long-running user processes, you are probably out of luck.
Your system administrator may have simply neglected to allow users access to cron - or it may have been an explicit decision. In the second case they might not take too well to you leaving a couple of processes overnight in order to bypass that restriction.
Even if you don't have root permission you can set up a cron job. Check these 2 commands as user1 to see whether you can modify your crontab or whether they throw an error.
crontab -l
If you can see your crontab, then try this as well:
crontab -e
If you can open and edit the file, then you can run your script with cron by adding this line:
0 08 * * * /path/to/your/script
This runs the script at 8:00 AM every day (a minute field of * would instead run it every minute during that hour).
I don't think root permission is required to create a cron job. Editing a cron job that's not owned by you - that's where you'd need root.
In a pinch, you can use at(1). Make sure the program you run reschedules the at job. Warning: this goes to heck if the machine is down for any length of time.
I have a python script I'm successfully executing every night at midnight. It's outputting the log file, however, I want it to also send an email with the log contents.
I've read this is pretty easy to do, but I've had no luck thus far. I've tried this but it does not work. Does anyone else have some other suggestions?
I'm running Ubuntu 14.04, if that makes a difference with the mail smtp.
MAILTO=mcgoga12@wfu.edu
0 0 * * * /usr/bin/python /home/grant/Developer/Projects/StudyBug/Main.py > /home/grant/Desktop/Studybuglog.log 2>&1
Cron emails everything the command writes to its standard output (what would appear on the screen if you ran the command from the command line) to the address in MAILTO.
Unfortunately for you, you are changing this behaviour with shell redirection. If you ran the command exactly as written above, nothing would be shown on the screen, because you redirect standard output to the log file using the '>' operator.
If you want an email, remove the > and everything after it, then test.
If you also want to write to a log file, you might try the 'tee' command, or changing your script to take a log file as a command line argument, and write to both the log file and the standard output.
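As a sketch of that last approach, the script could take the log file path as an argument and write to both the file and standard output, so cron still has output to email (the names here are illustrative):

import logging
import sys

def main(log_path):
    logger = logging.getLogger("studybug")
    logger.setLevel(logging.INFO)
    logger.addHandler(logging.StreamHandler(sys.stdout))  # captured and emailed by cron
    logger.addHandler(logging.FileHandler(log_path))      # persisted to disk
    logger.info("Starting run")
    # ... do the actual work here ...

if __name__ == "__main__":
    main(sys.argv[1])

The crontab entry would then pass the log path as an argument instead of using > redirection.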