How can I call manage.py within another python script?

I have a shell script that calls ./manage.py a few times, and would like to create the same functionality within a python 3.9.2 script. I have tried subprocess.run and os.system but get hung up for various reasons. Currently the shell script looks like
./manage.py dump_object water_testing.watertest '*' > ./water_testing/fixtures/dump_stevens.json
./manage.py dump_object tp.eqsvc '*' >> ./water_testing/fixtures/dump_stevens.json
...
It will take time to dissect the custom management commands suggested below, so I will need to formulate a timeline for management approval. Does anyone have an explanation of how Django tackles the security implications of this? We need a quick fix for dev and some pointers for prod. Down and dirty is what we are looking for, for the time being, so if anyone has a working example that would be awesome!
# `input` args/params are necessary
# `capture_output` is good if we need to do something with it later
# `check` ensures the subprocess actually fired off and completed; tracebacks are crucial
output = subprocess.run(["manage.py"], input="dump_object water_testing.watertest '*' > ./water_testing/fixtures/dump_stevens.json", capture_output=True, text=True, check=True)
# this won't work either
os.system("python ./manage.py dump_object water_testing.watertest '*' > ./water_testing/fixtures/dump_stevens.json")
Maybe we just need a link on how to call a Python script from another Python script, and a nudge on how to break the process down so we can get the solution underway ourselves. Thanks ahead of time for your consideration.

You can use call_command to run manage.py commands from within Python code:
from django.core import management
management.call_command('makemigrations')
You can also specify whether the session should be interactive and pass additional command arguments.
https://docs.djangoproject.com/en/3.2/ref/django-admin/#django.core.management.call_command
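For the asker's specific use case, a rough sketch of the original shell pipeline via call_command could look like this. It assumes the custom dump_object command is already installed in the project (e.g. via django-fixture-magic) and that the settings module name is a placeholder; outside manage.py you have to point Django at your settings and call django.setup() first:
import os
import django
from django.core import management

# only needed when running outside manage.py; 'mysite.settings' is a placeholder
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'mysite.settings')
django.setup()

# mirror the two shell lines above: write both dumps into the same fixture file
with open('./water_testing/fixtures/dump_stevens.json', 'w') as fixture:
    management.call_command('dump_object', 'water_testing.watertest', '*', stdout=fixture)
    management.call_command('dump_object', 'tp.eqsvc', '*', stdout=fixture)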

Related

Paramiko read stdout after every stdin [duplicate]

I am writing a program in Python which must communicate through SSH with a physical target and automatically send it some commands (it is for testing).
I started by doing this with Paramiko and everything was perfect until I had to send several commands where, for example, the second must be executed in the context of the first (for example, the first one does cd /mytargetRep and the second one is ./executeWhatIWant). I can't use exec_command to do so, because each exec_command starts a new session.
I tried to use a channel with invoke_shell(), but I have another problem with this one: I don't know when command execution has ended. Some commands execute very quickly and others take much longer, so I need to know when the command execution is over.
I know a workaround is to use exec_command with shell logic operators such as && or ;, for example exec_command("cd /mytargetRep && ./executeWhatIWant"). But I can't do that, because it must also be possible to execute some commands manually (I have a minimalist terminal where I can send commands), so for example the user will type cd /mytargetRep and then ./executeWhatIWant, not cd /mytargetRep && ./executeWhatIWant.
So my question is: is there a solution using Paramiko to send several commands in the same SSH session and be able to know when each command's execution ends?
Thanks
It seems that you want to implement an interactive shell, yet you need to control individual command execution. That's not really possible with just the SSH interface. The "shell" channel in SSH is a black box with an input and an output. So there's nothing in Paramiko that will help you implement this.
If you need to find out when a specific command finishes or where the output of a specific command ends, you need to use features of the shell.
You can solve that by inserting a unique separator (string) between the commands and searching for it in the channel output stream. With common *nix shells something like this works:
channel = ssh.invoke_shell()
channel.send('cd /mytargetRep\n')
channel.send('echo unique-string-separating-output-of-the-commands\n')
channel.send('./executeWhatIWant\n')
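A rough sketch of scanning the output stream for that separator might look like this (it assumes the channel opened above and UTF-8 shell output):
separator = 'unique-string-separating-output-of-the-commands'
buffer = ''
# keep reading from the shell channel until the separator shows up
while separator not in buffer:
    buffer += channel.recv(1024).decode('utf-8')
first_command_output = buffer.split(separator)[0]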
Though I do not really think that you need that very often. Most commands that are needed to make a specific command work, like cd or set, do not really output anything.
So in most cases you can use SSHClient.exec_command and your code will be way simpler and more reliable:
Execute multiple commands in Paramiko so that commands are affected by their predecessors
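For illustration, a minimal sketch of the exec_command route (ssh is an already-connected SSHClient, and the command string is the asker's example); recv_exit_status() blocks until the remote command finishes, which answers the "when is it done" question:
stdin, stdout, stderr = ssh.exec_command('cd /mytargetRep && ./executeWhatIWant')
exit_status = stdout.channel.recv_exit_status()  # blocks until the command completes
print(stdout.read().decode())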
Even if you need to use something seemingly complex like su/sudo, it is still better to stick with SSHClient.exec_command:
Executing command using "su -l" in SSH using Python
For a similar question, see:
Combining interactive shell and recv_exit_status method using Paramiko

Setting up django parallel test in setting.py

Hello, I know that it's possible to run Django tests in parallel via the --parallel flag, e.g. python manage.py test --parallel 10. It really speeds up testing in the project I'm working on, which is really nice. But developers in the company have different hardware setups. So ideally I would like to put the parallel argument in ./app_name/settings.py so every developer would use at least 4 threads in testing, or the number of cores reported by the multiprocessing lib.
I know that I can make another script, let's say run_test.py, in which I make use of --parallel, but I would love to make parallel testing 'invisible'.
To sum up, my question is: can I put the number of parallel test runs in the settings of a Django app?
And if the answer is yes, there is a second question: would the command line argument manage.py test --parallel X override the setting from ./app_name/settings.py?
Any help is much appreciated.
There is no setting for this, but you can override the test command to set a different default value. In one of your installed apps, create a management.commands submodule and add a test.py file to it. In there you need to subclass the existing test command:
from django.conf import settings
from django.core.management.commands.test import Command as TestCommand

class Command(TestCommand):
    def add_arguments(self, parser):
        super().add_arguments(parser)
        if hasattr(settings, 'TEST_PARALLEL_PROCESSES'):
            parser.set_defaults(parallel=settings.TEST_PARALLEL_PROCESSES)
This adds a new default to the --parallel flag. Running python manage.py test --parallel=1 will still override the default.
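To pick up the number of cores automatically, as the question suggests, the setting itself could be a sketch like this (TEST_PARALLEL_PROCESSES is the name assumed by the command above, not a built-in Django setting):
# app_name/settings.py
import multiprocessing

TEST_PARALLEL_PROCESSES = multiprocessing.cpu_count()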

Is it possible to use fabric to pass commands to an interactive shell?

I'm trying to automate the following via Fabric:
SSH to a remote host.
Execute a python script (the Django management command dbshell).
Pass known values to prompts that the script generates.
If I were to do this manually, it would look something like:
$ ssh -i ~/.ssh/remote.pem ubuntu@10.10.10.158
ubuntu@10.10.10.158$ python manage.py dbshell
postgres=> Password For ubuntu: _____ # i'd like to pass known data to this prompt
postgres=> # i'd like to pass known data to the prompt here, then exit
=========
My current solution looks something like:
from fabric.api import run
from fabric.context_managers import settings as fabric_settings
with fabric_settings(host_string='10.10.10.158', user='ubuntu', key_filename='~/.ssh/remote.pem'):
run('python manage.py dbshell')
# i am now left wondering if fabric can do what i'm asking....
Replied to Sean via Twitter on this, but the first thing to check out here is http://docs.fabfile.org/en/1.10/usage/env.html#prompts - not perfect but may suffice in some situations :)
The upcoming v2 has a more solid implementation of this feature in the pipe, and that will ideally have room for a more pexpect-like API (meaning, something more serially oriented) as an option too.
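A rough sketch of the env.prompts approach from that docs page, plugged into the asker's snippet; the exact prompt text and the password are assumptions and must match what dbshell really prints:
from fabric.api import env, run
from fabric.context_managers import settings as fabric_settings

# auto-answer the password prompt; the key must match the prompt text exactly
env.prompts = {'Password for user ubuntu: ': 'known_password'}

with fabric_settings(host_string='10.10.10.158', user='ubuntu',
                     key_filename='~/.ssh/remote.pem'):
    run('python manage.py dbshell')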
You can use Pexpect, which runs the command and watches the output; when the output matches a given pattern, Pexpect can respond as if a human were typing.
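A minimal Pexpect sketch along those lines (the host, key path, prompt patterns and password are all assumptions taken from the question):
import pexpect

child = pexpect.spawn('ssh -i ~/.ssh/remote.pem ubuntu@10.10.10.158')
child.expect(r'\$')                        # wait for the remote shell prompt
child.sendline('python manage.py dbshell')
child.expect('Password for user .*:')      # dbshell's password prompt
child.sendline('known_password')
child.expect('postgres=>')
child.sendline('\\q')                      # send whatever you need, then quit
child.expect(pexpect.EOF)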

Django: How to trigger events based on datetimes in the database

I have a simple Django app with a database which stores a series of messages and the datetime at which I want each printed to screen. Is there a way to have Django call a method which would check whether any new messages need printing and, if so, print them?
I have heard about Celery for scheduling tasks, but it seems to be massive overkill for what I need.
After the clarification for your use case in the comment to Stewart's answer, I suggest using cronjobs and a custom manage.py command.
Model
To filter out all notifications that haven't been sent, it is a good idea to have a flag on the model, e.g. is_notified = models.BooleanField(default=False). This way it becomes fast and easy to filter the necessary messages, e.g. with MyModel.objects.filter(is_notified=False, send_on__lte=datetime.now()).
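For instance, such a model might be a sketch like this (the text field name is an assumption, the other fields follow the filter above):
from django.db import models

class MyModel(models.Model):
    text = models.TextField()                          # the message to show
    send_on = models.DateTimeField()                   # when it should go out
    is_notified = models.BooleanField(default=False)   # flipped once it has been sent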
A custom manage.py command
In the custom manage.py command you have full access to your Django setup. Writing them is documented in Writing custom django-admin commands.
The command will usually (at least):
filter all notifications that should be sent
iterate over them and try to send the email
when successful, set is_notified to True and save the instance
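Putting those steps together, a minimal sketch of such a command might look like this (the app, model, its fields and the notion of "sending" are assumptions; it would live in e.g. myapp/management/commands/send_notifications.py):
from django.core.management.base import BaseCommand
from django.core.mail import send_mail
from django.utils import timezone

from myapp.models import MyModel  # hypothetical app / model


class Command(BaseCommand):
    help = "Send all pending notifications whose send_on time has passed."

    def handle(self, *args, **options):
        pending = MyModel.objects.filter(is_notified=False, send_on__lte=timezone.now())
        for message in pending:
            send_mail(
                subject="Notification",
                message=message.text,                     # hypothetical field
                from_email="noreply@example.com",
                recipient_list=["someone@example.com"],
            )
            # only mark it as sent once the send succeeded
            message.is_notified = True
            message.save(update_fields=["is_notified"])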
Cronjob
The cronjob is easy to setup. $ crontab -l will list all cronjobs that are currently installed. $ crontab -e will open the default editor (probably vi(m) or nano) to add new cronjobs.
Example: running command every 5 minutes:
*/5 * * * * /home/foobar/my-virtualenv/bin/python /home/foobar/my-django-dir/manage.py my_django_command >> /home/logs/my_django_command.log 2>&1
Add the job by pasting the snippet onto a new line in the file that opens after calling $ crontab -e, then save the file.
*/5 * * * *
specifies to run the cronjob every five minutes.
/home/foobar/my-virtualenv/bin/python
specifies to call Python from your virtualenv (if you use one) rather than the system version.
/home/foobar/my-django-dir/manage.py my_django_command
calls your manage.py command just like you would do.
>> /home/logs/my_django_command.log 2>&1
specifies that all output (standard output and errors) generated by the manage.py command will be appended to the file my_django_command.log. Just make sure that the directory (in this case /home/logs) exists.
Do you need them printed to the page without the user refreshing the page in their browser? If so, you need to write some JavaScript AJAX code to continuously poll your application view for new content to write to the page.
Here's an example tutorial on AJAX using Django: https://realpython.com/blog/python/django-and-ajax-form-submissions/
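On the Django side, the view being polled could be as simple as this sketch (model and field names are assumptions matching the earlier answer):
from django.http import JsonResponse
from django.utils import timezone

from myapp.models import MyModel  # hypothetical app / model


def pending_messages(request):
    # return every message that is due but has not been shown yet
    due = MyModel.objects.filter(is_notified=False, send_on__lte=timezone.now())
    return JsonResponse({"messages": [{"id": m.id, "text": m.text} for m in due]})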
If you don't want to use cron then you could use django-cronograph.

Conscientious print to stdout from Python daemon

I wrote a simple script using python-daemon which prints to sys.stdout:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import daemon
import sys
import time

def main():
    with daemon.DaemonContext(stdout=sys.stdout):
        while True:
            print "matt daemon!!"
            time.sleep(3)

if __name__ == '__main__':
    main()
The script works as I would hope, except for one major flaw--it interrupts my input when I'm typing in my shell:
(daemon)modocache $ git clomatt daemon!!
matt daemon!!ne
matt daemon!! https://github.com/schacon/cowsay.git
(daemon)modocache $
Is there any way for the output to be displayed in a non-intrusive way? I'm hoping for something like:
(daemon)modocache $ git clo
matt daemon!! # <- displayed on new line
(daemon)modocache $ git clo # <- what I had typed so far is displayed on a new line
Please excuse me if this is a silly question, I'm not too familiar with how shells work in general.
Edit: Clarification
The reason I would like this script to run daemonized is that I want to provide updates to the shell user from within the shell, such as printing weather updates to the console in a non-intrusive way. If there is a better way to accomplish this, please let me know. But the purpose is to display information from within the terminal (not via, say, Growl notifications), without blocking.
If it doesn't need to be an "instant" notification, and you can wait until the next time the user runs a command, you can bake all kinds of things into your bash shell prompt. Mine tells me the time and the git repository status for the directory I'm in, for example.
The shell variable for "normal user" shell prompts is PS1, so Googling for bash PS1 or bash prompt customisation will get you some interesting examples.
Here are some links:
Some basic customisations
A more complex example: git bash prompt
In general, you can include the output of any arbitrary script in the prompt string. Be aware, however, that high-latency commands will delay printing of the prompt string until they can be evaluated, so it may be a good idea to cache information. (For example, if you want to display the weather from a weather website, don't make your bash prompt go out and retrieve the webpage every time the prompt is displayed!)
A daemon process, by definition, runs in the background. Therefore it should write to a log file.
So either you redirect its output to a logfile (with a shell redirect, or by handing it over to a syslog daemon), or you make it write to a logfile in the Python code.
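Applied to the script above, that could be a sketch like this; the log path is an assumption, and it relies on python-daemon accepting any writable file object for stdout/stderr, just as the original passes sys.stdout:
import time
import daemon

log_file = open('/tmp/matt_daemon.log', 'a')  # hypothetical log location

with daemon.DaemonContext(stdout=log_file, stderr=log_file):
    while True:
        print("matt daemon!!")   # now lands in the log file, not on your terminal
        time.sleep(3)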
Update:
man write
man wall
http://linux.die.net/man/1/write, http://linux.die.net/man/1/wall
It probably is best practice for daemons to write to a log file. But couldn't you write to stderr and get the behavior desired above, with interwoven lines?
Take a look at the logging library (part of the standard library). This can be made to route debug and runtime data either to the console or a file (or anywhere for that matter) depending on the state of the system.
It provides several log levels, e.g. error, debug, info. Each can be configured differently.
See the documentation for the logging module.
import logging
logging.basicConfig(filename='example.log',level=logging.DEBUG)
logging.debug('This message should go to the log file')
logging.info('So should this')
logging.warning('And this, too')
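If you want the split behaviour described above (routing to a file or the console depending on severity), a sketch with handlers might look like:
import logging

logger = logging.getLogger('matt_daemon')
logger.setLevel(logging.DEBUG)

file_handler = logging.FileHandler('daemon.log')   # everything from DEBUG up goes here
file_handler.setLevel(logging.DEBUG)

console_handler = logging.StreamHandler()          # only warnings and errors hit the console
console_handler.setLevel(logging.WARNING)

logger.addHandler(file_handler)
logger.addHandler(console_handler)

logger.debug('file only')
logger.warning('file and console')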
