Django management command doesn't flush stdout

I'm trying to print to console before and after processing that takes a while in a Django management command, like this:
import requests
import xmltodict

from django.core.management.base import BaseCommand

from myapp.models import Route  # import implied by the question; adjust to your app's models module


def get_all_routes():
    url = 'http://busopen.jeju.go.kr/OpenAPI/service/bis/Bus'
    r = requests.get(url)
    data = xmltodict.parse(r.content)
    return data['response']['body']['items']['item']


class Command(BaseCommand):
    help = 'Updates the database via Bus Info API'

    def handle(self, *args, **options):
        self.stdout.write('Saving routes ... ', ending='')
        for route in get_all_routes():
            route_obj = Route(
                route_type=route['routeTp'],
                route_id=route['routeId'],
                route_number=route['routeNum'],
            )
            route_obj.save()
        self.stdout.write('done.')
In the above code, Saving routes ... is expected to print before the loop begins, and done. to print right next to it when the loop completes, so that the final output reads Saving routes ... done.
However, the former doesn't print until the loop completes, at which point both strings finally print at the same time, which is not what I expected.
I found this question, where the answer suggests flushing the output, i.e. self.stdout.flush(), so I added that to my code:
def handle(self, *args, **options):
    self.stdout.write('Saving routes ... ', ending='')
    self.stdout.flush()
    for route in get_all_routes():
        route_obj = Route(
            route_type=route['routeTp'],
            route_id=route['routeId'],
            route_number=route['routeNum'],
        )
        route_obj.save()
    self.stdout.write('done.')
Still, the result remains unchanged.
What could I have done wrong?

The thing to keep in mind is that you're using self.stdout (as suggested in the Django docs), which is BaseCommand's override of Python's standard sys.stdout. There are two main differences between the two that are relevant to your problem:
The default ending in BaseCommand's version of self.stdout.write() is a newline, forcing you to use the ending='' parameter, unlike sys.stdout.write(), which defaults to an empty ending. This in itself is not causing your problem.
The BaseCommand version of flush() does not really do anything (who would have thought?). This is a known bug: https://code.djangoproject.com/ticket/29533
So you really have two options:
Don't use BaseCommand's self.stdout; use sys.stdout instead, in which case the flush does work (see the sketch below).
Force stdout to be totally unbuffered while running the management command by passing the -u parameter to python. So instead of running python manage.py <subcommand>, run python -u manage.py <subcommand>.
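For the first option, a minimal sketch of the command (reusing the Route model, get_all_routes() helper, and BaseCommand import from the question):

import sys

class Command(BaseCommand):
    help = 'Updates the database via Bus Info API'

    def handle(self, *args, **options):
        sys.stdout.write('Saving routes ... ')
        sys.stdout.flush()  # unlike BaseCommand's flush(), this one actually flushes
        for route in get_all_routes():
            Route(
                route_type=route['routeTp'],
                route_id=route['routeId'],
                route_number=route['routeNum'],
            ).save()
        sys.stdout.write('done.\n')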
Hope this helps.

Have you tried setting the PYTHONUNBUFFERED environment variable?
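For example, to set it for a single run (equivalent to passing -u):
PYTHONUNBUFFERED=1 python manage.py <subcommand>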


Custom Ansible module is giving param extra params error

I am trying to implement a hostname-like module, and my target machine is an amazon-ec2 instance. But when I run the script, it gives me the error below:
[ansible-user@ansible-master ~]$ ansible node1 -m edit_hostname.py -a node2
ERROR! this task 'edit_hostname.py' has extra params, which is only allowed in the following modules: meta, group_by, add_host, include_tasks, import_role, raw, set_fact, command, win_shell, import_tasks, script, shell, include_vars, include_role, include, win_command
My module is like this:
#!/usr/bin/python
from ansible.module_utils.basic import *

try:
    import json
except ImportError:
    import simplejson as json


def write_to_file(module, hostname, hostname_file):
    try:
        with open(hostname_file, 'w+') as f:
            f.write("%s\n" % hostname)
    except Exception:
        err = get_exception()
        module.fail_json(msg="failed to write to the /etc/hostname file")


def main():
    hostname_file = '/etc/hostname'
    module = AnsibleModule(argument_spec=dict(name=dict(required=True, type='str')))
    name = module.params['name']
    write_to_file(module, name, hostname_file)
    module.exit_json(changed=True, meta=name)


if __name__ == "__main__":
    main()
I don't know where I am making the mistake. Any help will be greatly appreciated. Thank you.
When developing a new module, I would recommend using the boilerplate described in the documentation. This also shows that you'll need to use AnsibleModule to define your arguments.
In your main, you should add something like the following:
def main():
    # define available arguments/parameters a user can pass to the module
    module_args = dict(
        name=dict(type='str', required=True)
    )

    # seed the result dict in the object
    # we primarily care about changed and state
    # changed is whether this module effectively modified the target
    # state will include any data that you want your module to pass back
    # for consumption, for example, in a subsequent task
    result = dict(
        changed=False,
        original_hostname='',
        hostname=''
    )

    module = AnsibleModule(
        argument_spec=module_args,
        supports_check_mode=False
    )

    # manipulate or modify the state as needed (this is going to be the
    # part where your module will do what it needs to do)
    result['original_hostname'] = module.params['name']
    result['hostname'] = 'goodbye'

    # use whatever logic you need to determine whether or not this module
    # made any modifications to your target
    result['changed'] = True

    # in the event of a successful module execution, you will want to
    # call AnsibleModule.exit_json(), passing the key/value results
    module.exit_json(**result)
Then, you can call the module like so:
ansible node1 -m mymodule.py -a "name=myname"
ERROR! this task 'edit_hostname.py' has extra params, which is only allowed in the following modules: meta, group_by, add_host, include_tasks, import_role, raw, set_fact, command, win_shell, import_tasks, script, shell, include_vars, include_role, include, win_command
As explained by your error message, an anonymous default parameter is only supported by a limited number of modules. In your custom module, the parameter you created is called name. Moreover, you should not include the .py extension in the module name. You have to call your module like so as an ad-hoc command:
$ ansible node1 -m edit_hostname -a name=node2
I did not test your module code, so you may have further errors to fix.
Meanwhile, I still strongly suggest you use the default boilerplate from the Ansible documentation, as proposed in @Simon's answer.
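Putting both answers together, a minimal corrected version of the module might look like the sketch below (untested; the notable fixes are the quoted 'str' type, the trailing comma in argument_spec, and invoking the module without the .py extension):

#!/usr/bin/python
from ansible.module_utils.basic import AnsibleModule


def main():
    module = AnsibleModule(
        argument_spec=dict(
            name=dict(type='str', required=True),
        ),
    )
    name = module.params['name']
    try:
        # write the requested hostname to /etc/hostname
        with open('/etc/hostname', 'w+') as f:
            f.write("%s\n" % name)
    except Exception as e:
        module.fail_json(msg="failed to write to /etc/hostname: %s" % e)
    module.exit_json(changed=True, meta=name)


if __name__ == "__main__":
    main()

Invoked without the .py extension and with a named parameter:
ansible node1 -m edit_hostname -a name=node2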

Group Django commands to a folder inside the same app

Is it allowed to group custom Django commands into separate folders inside the same Django app?
I have a lot of them and wanted to group them logically by purpose. I created folders, but Django can't find the commands.
Maybe I'm trying to run them wrong. Tried:
python manage.py process_A_related_data
the same plus imported all commands in __init__.py
python manage.py folderA process_A_related_data
python manage.py folderA.process_A_related_data
python manage.py folderA/process_A_related_data
Got the following error:
Unknown command: 'folderA/process_A_related_data'
Type 'manage.py help' for usage.
I think you can create a basic custom command which will run other commands from the relevant folders. Here is an approach you can take:
First make a folder structure like this:
management/
    commands/
        folder_a/
            process_A_related_data.py
        folder_b/
            process_A_related_data.py
        process_data.py
Then inside process_data.py, update the command like this:
from django.core import management
from django.core.management.base import BaseCommand
import importlib


class Command(BaseCommand):
    help = 'Folder Process Commands'

    def add_arguments(self, parser):
        parser.add_argument('-u', '--use', type=str, nargs='?', default='folder_a.process_A_related_data')

    def handle(self, *args, **options):
        try:
            folder_file_module = options['use'] if options['use'].startswith('.') else '.' + options['use']
            command = importlib.import_module(folder_file_module, package='your_app.management.commands')
            management.call_command(command.Command())
        except ModuleNotFoundError as e:
            self.stderr.write(f"No relevant folder found: {e.name}")
Here I am using the call_command method to call the other management commands.
Then run commands like this:
python manage.py process_data --use folder_a.process_A_related_data
Finally, if you want to run commands like python manage.py folder_a.process_A_related_data, then you probably need to change manage.py, like this:
import re
...
    try:
        from django.core.management import execute_from_command_line
    except ImportError as exc:
        raise ImportError(
            "Couldn't import Django. Are you sure it's installed and "
            "available on your PYTHONPATH environment variable? Did you "
            "forget to activate a virtual environment?"
        ) from exc
    if re.search('folder_[a-z].*', sys.argv[-1]):
        new_arguments = sys.argv[:-1] + ['process_data', '--use', sys.argv[-1]]
        execute_from_command_line(new_arguments)
    else:
        execute_from_command_line(sys.argv)
You should be able to partition the code by using mixins (I have not tried this in this context, though).
A standard management command looks like this:
from django.core.management.base import BaseCommand


class Command(BaseCommand):
    help = 'FIXME A helpful comment goes here'

    def add_arguments(self, parser):
        parser.add_argument('name', ...)
        # more argument definitions

    def handle(self, *args, **options):
        # do stuff
Which can probably be replaced by a "stub" in app/management/commands:
from wherever.commands import FooCommandMixin
from django.core.management.base import BaseCommand


class Command(FooCommandMixin, BaseCommand):
    # autogenerated -- do not put any code in here!
    pass
and in wherever/commands
class FooCommandMixin(object):
    help = 'FIXME A helpful comment goes here'

    def add_arguments(self, parser):
        parser.add_argument('name', ...)
        # more argument definitions

    def handle(self, *args, **options):
        # do the work
It would not be hard to write a script that goes through a list of file names or paths (using glob.glob), uses re.findall to identify the appropriate class declarations, and (re)generates a matching stub for each in the app's management/commands folder.
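A rough sketch of such a generator (the wherever/ path, the app/management/commands output directory, and the SomethingCommandMixin naming convention are illustrative assumptions):

import re
from glob import glob

STUB_TEMPLATE = """from wherever.commands import {mixin}
from django.core.management.base import BaseCommand


class Command({mixin}, BaseCommand):
    # autogenerated -- do not put any code in here!
    pass
"""

# scan every module in wherever/ for command mixin declarations
for path in glob('wherever/*.py'):
    with open(path) as f:
        source = f.read()
    for mixin in re.findall(r'class\s+(\w+CommandMixin)\s*\(', source):
        # derive the command name: FooBarCommandMixin -> foo_bar
        name = re.sub(r'(?<!^)(?=[A-Z])', '_', mixin[:-len('CommandMixin')]).lower()
        with open('app/management/commands/%s.py' % name, 'w') as stub:
            stub.write(STUB_TEMPLATE.format(mixin=mixin))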
Also/instead Python's argparse allows for the definition of sub-commands. So you should be able to define a command that works like
./manage.py foo bar --aa --bb something --cc and
./manage.py foo baz --bazzy a b c
where the syntax after foo is determined by the next word (bar or baz or ...). Again, I have no experience of using subcommands in this context, but a standalone argparse sketch follows.
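The argparse side of that idea, shown standalone (wiring it into a BaseCommand's add_arguments is the untested part; this only demonstrates the sub-command mechanism itself):

import argparse

parser = argparse.ArgumentParser(prog='./manage.py foo')
subparsers = parser.add_subparsers(dest='subcommand', required=True)

# ./manage.py foo bar --aa --bb something --cc
bar = subparsers.add_parser('bar')
bar.add_argument('--aa', action='store_true')
bar.add_argument('--bb')
bar.add_argument('--cc', action='store_true')

# ./manage.py foo baz --bazzy a b c
baz = subparsers.add_parser('baz')
baz.add_argument('--bazzy', nargs='*')

args = parser.parse_args(['bar', '--aa', '--bb', 'something', '--cc'])
print(args.subcommand, args.bb)  # -> bar something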
I found no mention of support for this feature in the release notes, and it appears that this is still not supported as of Django 3.0. I would suggest that you use meaningful names for your files to help you tell them apart. You could always come up with a naming convention!
A workaround could be: create a specific Django "satellite" app for each group of management commands.
In recent versions of Django, the requirements for a Python module to be an app are minimal: you won't need to provide any fake models.py or other specific files as happened in the old days.
While far from perfect from a stylistic point of view, you still gain a few advantages:
no need to hack the framework at all
python manage.py will list the commands grouped by app
you can control the grouping by providing suitable names to the apps
you can use these satellite apps as container for specific unit tests
I always try to avoid fighting against the framework, even when this means compromising and sometimes accepting its occasional design limitations.
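For illustration, such a layout might look like this (app and command names are made up):

myproject/
    billing_commands/            # satellite app holding billing-related commands
        __init__.py
        management/
            __init__.py
            commands/
                __init__.py
                process_invoices.py
    reporting_commands/          # satellite app holding reporting commands
        __init__.py
        management/
            __init__.py
            commands/
                __init__.py
                build_weekly_report.py

Each satellite app is added to INSTALLED_APPS, and python manage.py help then lists its commands under the app's own heading.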

Script behaves differently when run from cron job and from command line using django manage.py when using a cron supervisor

I know crons run in a different environment than command lines, but I'm using absolute paths everywhere and I don't understand why my script behaves differently. I believe it is somehow related to my cron_supervisor, which runs the Django manage.py within a subprocess.
Cron:
0 * * * * /home/p1/.virtualenvs/prod/bin/python /home/p1/p1/manage.py cron_supervisor --command="/home/p1/.virtualenvs/prod/bin/python /home/p1/p1/manage.py envoyer_argent"
This will call the cron_supervisor, and it will call the script, but the script won't be executed as it would be if I ran:
/home/p1/.virtualenvs/prod/bin/python /home/p1/p1/manage.py envoyer_argent
Is there something particular to be done for the script to be called properly when running it through another script?
Here is the supervisor, which basically is for error handling and making sure we get warned if something goes wrong within the cron scripts themselves.
import logging
import os
from subprocess import PIPE, Popen

from django.core.management.base import BaseCommand

from command_utils import email_admin_error, isomorphic_logging
from utils.send_slack_message import send_slack_message

CURRENT_DIR = os.path.dirname(os.path.abspath(__file__))
PROJECT_DIR = CURRENT_DIR + '/../../../'

logging.basicConfig(
    level=logging.INFO,
    filename=PROJECT_DIR + 'cron-supervisor.log',
    format='%(asctime)s %(levelname)s: %(message)s',
    datefmt='%Y-%m-%d %H:%M:%S'
)


class Command(BaseCommand):
    help = "Control a subprocess"

    def add_arguments(self, parser):
        parser.add_argument(
            '--command',
            dest='command',
            help="Command to execute",
        )
        parser.add_argument(
            '--mute_on_success',
            dest='mute_on_success',
            action='store_true',
            help="Don't post any message on success",
        )

    def handle(self, *args, **options):
        try:
            isomorphic_logging(logging, "Starting cron supervisor with command \"" + options['command'] + "\"")
            if options['command']:
                self.command = options['command']
            else:
                error_message = "Empty required parameter --command"
                # log error
                isomorphic_logging(logging, error_message, "error")
                # send slack message
                send_slack_message("Cron Supervisor Error: " + error_message)
                # send email to admin
                email_admin_error("Cron Supervisor Error", error_message)
                raise ValueError(error_message)

            if options['mute_on_success']:
                self.mute_on_success = True
            else:
                self.mute_on_success = False

            # running process
            process = Popen([self.command], stdout=PIPE, stderr=PIPE, shell=True)
            output, error = process.communicate()
            if output:
                isomorphic_logging(logging, "Output from cron:" + output)

            # check for any subprocess error
            if process.returncode != 0:
                error_message = 'Command \"{command}\" - Error \nReturn code: {code}\n```{error}```'.format(
                    code=process.returncode,
                    error=error,
                    command=self.command,
                )
                self.handle_error(error_message)
            else:
                message = "Command \"{command}\" ended without error".format(command=self.command)
                isomorphic_logging(logging, message)
                # post message on slack if process isn't muted_on_success
                if not self.mute_on_success:
                    send_slack_message(message)
        except Exception as e:
            error_message = 'Command \"{command}\" - Error \n```{error}```'.format(
                error=e,
                command=self.command,
            )
            self.handle_error(error_message)

    def handle_error(self, error_message):
        # log the error in local file
        isomorphic_logging(logging, error_message)
        # post message in slack
        send_slack_message(error_message)
        # email admin
        email_admin_error("Cron Supervisor Error", error_message)
Example of a script that is not executed properly when called by the cron, through the cron_supervisor:
# -*- coding: utf-8 -*-
import json
import logging
import os

from django.conf import settings
from django.core.management.base import BaseCommand

from utils.lock import handle_lock

logging.basicConfig(
    level=logging.INFO,
    filename=os.path.join(settings.BASE_DIR, 'crons.log'),
    format='%(asctime)s %(levelname)s: %(message)s',
    datefmt='%Y-%m-%d %H:%M:%S'
)


class Command(BaseCommand):
    help = "Sends the pending money"

    @handle_lock
    def handle(self, *args, **options):
        logging.info("some logs that won't be logged (not called)")

logging.info("Those logs will be correctly logged")
Additionally, I have another issue with the logging which I don't quite understand either: I specify that logs should be stored in cron-supervisor.log, but they don't get stored there, and I couldn't figure out why. (That's not related to my main issue, it just doesn't help with debugging.)
Your cron job can't just run the Python interpreter in the virtualenv; this is completely insufficient. You need to activate the env just like in an interactive environment.
0 * * * * . /home/p1/.virtualenvs/prod/bin/activate; python /home/p1/p1/manage.py cron_supervisor --command="python /home/p1/p1/manage.py envoyer_argent"
This is already complex enough that you might want to create a separate wrapper script containing these commands.
Without proper diagnostics of how your current script doesn't work, it's entirely possible that this fix alone is insufficient. Cron jobs don't only (or particularly) need absolute paths; the main difference compared to interactive shells is that cron jobs run with a different and more spare environment, where e.g. the shell's PATH, various library paths, environment variables, etc. can be different or missing altogether; and of course, no interactive facilities are available.
The system variables will hopefully be taken care of by your virtualenv; if it's correctly done, activating it will set up all the variables (PATH, PYTHONPATH, etc) your script needs. There could still be things like locale settings which are set up by your shell only when you log in interactively; but again, without details, let's just hope this isn't an issue for you.
The reason some people recommend absolute paths is that this will work regardless of your working directory. But a correctly written script should work fine in any directory; if it matters, the cron job will start in the owner's home directory. If you wanted to point to a relative path from there, this will work fine inside a cron job just as it does outside.
As an aside, you probably should not use subprocess.Popen() if one of the higher-level wrappers from the subprocess module does what you want. Unless compatibility with legacy Python versions is important, you should probably use subprocess.run() ... though running Python as a subprocess of Python is also often a useless complication. See also my answer to this related question.
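For reference, the Popen/communicate block in the supervisor above could be collapsed with subprocess.run() roughly like this (a sketch that keeps the shell=True semantics of the original):

from subprocess import run, PIPE

# one call captures stdout/stderr and exposes the return code
completed = run(self.command, shell=True, stdout=PIPE, stderr=PIPE)
if completed.returncode != 0:
    self.handle_error(completed.stderr.decode())
elif completed.stdout:
    isomorphic_logging(logging, "Output from cron:" + completed.stdout.decode())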

How to print the console to a text file AFTER the program finishes (Python)?

I have a program that outputs many calculations and results to the console through print statements. I want to write some code to export (or save) all the contents of the console to a simple text file.
I searched Stack Overflow and other sites, but I only found methods to redirect the print statement so it prints to a file directly. I want the program to work normally, displaying output to the console, and then to save the console's contents AFTER all operations of the program are done.
I am using PyCharm with Python 2.7, if it matters.
OK, so normally to get this done you would have to override Python's print built-in function. But... there is ipython, which provides some hooks.
First you need to have ipython installed:
#bash
sudo pip install ipython
(I'm using sudo simply to locate the folder I need to reach; read further)
After ipython installation you'll have ipython extensions folder available, so get to it:
#bash
cd ~/.ipython/extensions/
and create there, let's say, a file called print_to_file.py; here is its content:
#python
class PrintWatcher(object):
    def __init__(self, ip):
        self.shell = ip

    def post_execute(self):
        with open('/home/turkus/shell.txt', 'a+') as f:
            in_len = len(self.shell.user_ns['In'])
            i = in_len - 1
            in_ = self.shell.user_ns['In'][i]
            out = self.shell.user_ns['Out'].get(i, '')
            # you can edit this line if you want different input in shell.txt
            f.write('{}\n{}\n'.format(in_, out))


def load_ipython_extension(ip):
    pw = PrintWatcher(ip)
    ip.events.register('post_run_cell', pw.post_execute)
After saving the file, just run:
#bash
ipython profile create
# you will get something like that:
[ProfileCreate] Generating default config file: u'/home/turkus/.ipython/profile_default/ipython_config.py'
Now get back to setting up our hook. We must open the ipython_config.py created under the path above and put some magic there (there is a lot of stuff in the file, so go to the end):
# some commented lines here
c = get_config()
c.InteractiveShellApp.extensions = [
    'print_to_file'
]
After saving it, you can run ipython and write your code. All of your input will be written to a file under the path you provided above; in my case it was:
/home/turkus/shell.txt
Notes
You can avoid loading your extension every time ipython fires up by simply deleting 'print_to_file' from the c.InteractiveShellApp.extensions list in ipython_config.py. But remember that you can load it anytime you need, just by typing in the ipython console:
➜ ~ ipython
Python 2.7.12 (default, Jul 1 2016, 15:12:24)
Type "copyright", "credits" or "license" for more information.
IPython 4.0.0 -- An enhanced Interactive Python.
? -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help -> Python's own help system.
object? -> Details about 'object', use 'object??' for extra details.
In [1]: %load_ext print_to_file
Any change in print_to_file.py is reflected in an open ipython shell after using the %reload_ext print_to_file command, so you don't have to exit it and fire it up again.
I am unsure how you could capture the contents of the console for any editor; however, this can be achieved quite simply by replacing your print statements with calls to a writer object's write() method:
class Writer(object):
    def __init__(self, out_file, overwrite=False):
        self.file_name = out_file
        self.overwrite = overwrite
        self.history = []

    def write(self, statement):
        self.history.append(statement)
        print statement

    def close(self):
        if self.overwrite:
            self.out_file = open(self.file_name, 'wb')
        else:
            self.out_file = open(self.file_name, 'ab')
        for x in self.history:
            self.out_file.write(x + '\n')
        self.out_file.close()
        self.history = []

p = Writer('my_output_file.txt')
p.write('my string to print and save!')
p.close()  # close the writer to save the contents to a file before exiting
Now that I understand your question, I think you are looking for the tee command:
python your_program | tee output.txt
This will show you the output both in the console and in output.txt.
PS: Since you did not answer my comment about which OS you use, I assumed that you use either Linux or macOS. This should work on both. I don't know how to do this on Windows...
You could override the print function, which will still be accessible through the builtins module:
import builtins

f = open("logs.txt", "w")

def print(*args, sep=' ', end='\n', **kwargs):
    builtins.print(*args, sep=sep, end=end, **kwargs)
    f.write(sep.join(map(str, args)) + end)
EDIT: A similar solution for Python 2
from __future__ import print_function

class Print:
    def __init__(self, print_function, filename='test', mode='w'):
        self.print_function = print_function
        self.file = open(filename, mode)

    def __call__(self, *args, **kwargs):
        self.print_function(*args, **kwargs)
        kwargs['file'] = self.file
        self.print_function(*args, **kwargs)

print = Print(print, 'logs.txt')
This creates a print function that you use exactly as the function you import from __future__.
To close the file when everything is done you have to run:
print.file.close()
Maybe you should create a variable that accumulates the output and then write it to a file at the end.
For example:
print statement
logger += statement + "\n"  # a newline char so each statement is on its own line
with open('file.txt', 'a') as f:
    f.write(logger)
With all thanks and respect to all who contributed to this question: I have finally found a solution to this problem with minimal modifications to my original code. The solution was provided by the member @Status, and here is its link.
Although I searched a lot before posting my question, the answers of the respected members directed me to a more precise search, especially the contributions of @turkus, who did exceptional work, and @Glostas, who opened my eyes to "tee", which guided me to find the solution I posted (although it does not contain "tee").
The solution, from the mentioned post, with slight modifications:
1- Put the following Class in the program:
import os
import sys


class Logger(object):
    """
    Lumberjack class - duplicates sys.stdout to a log file and it's okay
    source: https://stackoverflow.com/a/24583265/5820024
    """
    def __init__(self, filename="Red.Wood", mode="a", buff=0):
        self.stdout = sys.stdout
        self.file = open(filename, mode, buff)
        sys.stdout = self

    def __del__(self):
        self.close()

    def __enter__(self):
        pass

    def __exit__(self, *args):
        pass

    def write(self, message):
        self.stdout.write(message)
        self.file.write(message)

    def flush(self):
        self.stdout.flush()
        self.file.flush()
        os.fsync(self.file.fileno())

    def close(self):
        if self.stdout is not None:
            sys.stdout = self.stdout
            self.stdout = None
        if self.file is not None:
            self.file.close()
            self.file = None
2- At the beginning of the program, before any print statements, put this line:
my_console = Logger('my_console_file.txt') # you can change the file's name
3- At the end of the program, after all of the print statements, put this line:
my_console.close()
I tested this, and it works perfectly; finally I have a clone of the console's output after the program ends.
With best regards to everybody, and many thanks to all contributors.
There is a very obvious but not very elegant solution.
Instead of:
print statement_1
calculation
print statement_2
you can write something like:
sexport = ''
calculation
print statement_1
sexport += statement_1 + "\n"
calculation
print statement_2
sexport += statement_2
Finally, just save sexport to a file.

Easy way to suppress output of fabric run?

I am running a command on the remote machine:
remote_output = run('mysqldump --no-data --user=username --password={0} database'.format(password))
I would like to capture the output, but not have it all printed to the screen. What's the easiest way to do this?
It sounds like the Managing output section of the docs is what you're looking for.
To hide the output from the console, try something like this:
from __future__ import with_statement
from fabric.api import hide, run, get

with hide('output'):
    run('mysqldump --no-data test | tee test.create_table')
    get('~/test.create_table', '~/test.create_table')
Below are the sample results:
No hosts found. Please specify (single) host string for connection: 192.168.6.142
[192.168.6.142] run: mysqldump --no-data test | tee test.create_table
[192.168.6.142] download: /home/quanta/test.create_table <- /home/quanta/test.create_table
Try this if you want to hide everything from the log and avoid fabric throwing exceptions when a command fails:
from __future__ import with_statement
from fabric.api import env, run, hide, settings

env.host_string = 'username@servernameorip'
env.key_filename = '/path/to/key.pem'

def exec_remote_cmd(cmd):
    with hide('output', 'running', 'warnings'), settings(warn_only=True):
        return run(cmd)
After that, you can check a command's result as shown in this example:
import sys

cmd_list = ['ls', 'lss']
for cmd in cmd_list:
    result = exec_remote_cmd(cmd)
    if result.succeeded:
        sys.stdout.write('\n* Command succeeded: ' + cmd + '\n')
    else:
        sys.stdout.write('\n* Command failed: ' + cmd + '\n')
    sys.stdout.write(result + "\n")
This will be the console output of the program (observe that there are no log messages from fabric):
* Command succeeded: ls
Desktop espaiorgcats.sql Pictures Public Videos
Documents examples.desktop projectes scripts
Downloads Music prueba Templates
* Command failed: lss
/bin/bash: lss: command not found
For fabric==2.4.0 you can hide output using the following logic:
conn = Connection(host="your-host", user="your-user")
result = conn.run('your_command', hide=True)
result.stdout.strip()  # here you can get the output
As other answers allude, fabric.api doesn't exist anymore (as of writing, fabric==2.5.0), 8 years after the question. However, the next most recent answer here implies that providing hide=True to every .run() call is the only/accepted way to do it.
Not being satisfied, I went digging for a reasonable equivalent to a context where I can specify it only once. It feels like there should still be a way using an invoke.context.Context, but I didn't want to spend any longer on this, and the easiest way I could find was using invoke.config.Config, which we can access via fabric.config.Config without needing any additional imports.
>>> import fabric
>>> c = fabric.Connection(
... "foo.example.com",
... config=fabric.config.Config(overrides={"run": {"hide": True}}),
... )
>>> result = c.run("hostname")
>>> result.stdout.strip()
'foo.example.com'
As of Fabric 2.6.0, the hide argument to run is not available.
Expanding on the suggestions by @cfillol and @samuel-harmer, using a fabric.Config may be a simpler approach:
>>> import fabric
>>> conf = fabric.Config()
>>> conf.run.hide = True
>>> conf.run.warn = True
>>> c = fabric.Connection(
... "foo.example.com",
... config=conf
... )
>>> result = c.run("hostname")
This way no command output is printed and no exception is thrown on command failure.
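With warn set that way, a failed command comes back as a result object you can inspect instead of raising (a small sketch using invoke's Result attributes, which fabric's run returns):

>>> result = c.run("lss")  # a command that doesn't exist on the host
>>> if result.failed:
...     print("exit code:", result.return_code)
...     print(result.stderr)
... else:
...     print(result.stdout)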
As Samuel Harmer also pointed out in his answer, it is possible to manage output of the run command at the connection level.
As of version 2.7.1:
from fabric import Config, Connection

connection = Connection(
    host,
    config = Config(overrides = {
        "run": { "hide": "stdout" }
    }),
    ...
)
