I have a context manager that captures output to a string for a block of code indented under a with statement. This context manager yields a custom result object which will, when the block has finished executing, contain the captured output.
import sys
from contextlib import contextmanager

@contextmanager
def capturing():
    "Captures output within a 'with' block."
    from cStringIO import StringIO
    class result(object):
        def __init__(self):
            self._result = None
        def __str__(self):
            return self._result
    try:
        stringio = StringIO()
        out, err, sys.stdout, sys.stderr = sys.stdout, sys.stderr, stringio, stringio
        output = result()
        yield output
    finally:
        output._result, sys.stdout, sys.stderr = stringio.getvalue(), out, err
        stringio.close()
with capturing() as text:
    print "foo bar baz",

print str(text)  # prints "foo bar baz"
I can't just return a string, of course, because strings are immutable and thus the one the user gets back from the with statement can't be changed after their block of code runs. However, it is something of a drag to have to explicitly convert the result object to a string after the fact with str (I also played with making the object callable as a bit of syntactic sugar).
So is it possible to make the result instance act like a string, in that it does in fact return a string when named? I tried implementing __get__, but that appears to only work on attributes. Or is what I want to do not really possible?
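For reference, the callable variant mentioned above might look roughly like this (a sketch, not the exact code that was tried):
class result(object):
    '''Sketch of the "callable" sugar: text() returns the captured string.'''
    def __init__(self):
        self._result = None
    def __call__(self):
        return self._result
    def __str__(self):
        return str(self._result)
With that, the block would end with print text() instead of print str(text).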
How to make a class that acts like a string?
Subclass str
import os

class LikeAStr(str):
    '''Making a class like a str object; or more precisely,
    making a str subclass with added context-manager functionality.'''
    def __init__(self, diff_directory):
        self._iwd = os.getcwd()
        self._cwd = diff_directory
    def __enter__(self):
        return self
    def __exit__(self, ext_typ, exc_value, traceback):
        try:
            os.chdir(self._iwd)  # might get deleted within the "with" statement
        except:
            pass
    def __str__(self):
        return self._cwd
    def __repr__(self):
        return repr(self._cwd)
astr = LikeAStr('C:\\')

with LikeAStr('C:\\') as astr:
    print 1, os.getcwd()
    os.chdir(astr)  # expects str() or unicode(), not some other class
    print 2, os.getcwd()

# out of the with block
print 3, os.getcwd()
print 4, astr == 'C:\\'
Output:
1 D:\Projects\Python\
2 C:\
3 D:\Projects\Python\
4 True
I don't believe there is a clean way to do what you want.
text is defined in the module's globals() dict.
You would have to modify this globals() dict from within the capturing object:
The code below would break if you tried to use the with from within a function, since then text would be in the function's scope, not the globals.
import sys
import cStringIO

class capturing(object):
    def __init__(self, varname):
        self.varname = varname
    def __enter__(self):
        self.stringio = cStringIO.StringIO()
        self.out, sys.stdout = sys.stdout, self.stringio
        self.err, sys.stderr = sys.stderr, self.stringio
        return self
    def __exit__(self, ext_type, exc_value, traceback):
        sys.stdout = self.out
        sys.stderr = self.err
        self._result = self.stringio.getvalue()
        globals()[self.varname] = self._result
    def __str__(self):
        return self._result
with capturing('text') as text:
    print("foo bar baz")

print(text)  # prints "foo bar baz"
# foo bar baz
print(repr(text))
# 'foo bar baz\n'
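To illustrate the limitation mentioned above, here is a sketch of what happens if the with statement is used inside a function:
def demo():
    with capturing('text') as text:
        print("inside a function")
    # The captured string went into the module's globals() under the name
    # 'text'; the local name 'text' is still the capturing instance itself.
    print(type(text))         # <class '__main__.capturing'>
    print(globals()['text'])  # inside a function

demo()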
At first glance, it looked like UserString (well, actually MutableString, but that's going away in Python 3.0) was basically what I wanted. Unfortunately, UserString doesn't work quite enough like a string; I was getting some odd formatting in print statements ending in commas that worked fine with str strings. (It appears you get an extra space printed if it's not a "real" string, or something.) I had the same issue with a toy class I created to play with wrapping a string. I didn't take the time to track down the cause, but it appears UserString is most useful as an example.
I actually ended up using a bytearray because it works enough like a string for most purposes, but is mutable. I also wrote a separate version that splitlines() the text into a list. This works great and is actually better for my immediate use case, which is removing "extra" blank lines in the concatenated output of various functions. Here's that version:
import sys
from contextlib import contextmanager

@contextmanager
def capturinglines(output=None):
    "Captures lines of output to a list."
    from cStringIO import StringIO
    try:
        output = [] if output is None else output
        stringio = StringIO()
        out, err = sys.stdout, sys.stderr
        sys.stdout, sys.stderr = stringio, stringio
        yield output
    finally:
        sys.stdout, sys.stderr = out, err
        output.extend(stringio.getvalue().splitlines())
        stringio.close()
Usage:
with capturinglines() as output:
    print "foo"
    print "bar"

print output
['foo', 'bar']

with capturinglines(output):  # append to an existing list
    print "baz"

print output
['foo', 'bar', 'baz']
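For reference, a sketch of the bytearray variant mentioned above (not the exact code that was used) could look like this:
import sys
from contextlib import contextmanager

@contextmanager
def capturingbytes(output=None):
    "Sketch: captures output into a mutable bytearray that behaves much like a str."
    from cStringIO import StringIO
    try:
        output = bytearray() if output is None else output
        stringio = StringIO()
        out, err = sys.stdout, sys.stderr
        sys.stdout, sys.stderr = stringio, stringio
        yield output
    finally:
        sys.stdout, sys.stderr = out, err
        output += stringio.getvalue()  # on Python 2, str is bytes, so in-place concat works
        stringio.close()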
I think you might be able to build something like this.
import StringIO

capturing = StringIO.StringIO()
print("foo bar baz", file=capturing)

Now 'foo bar baz\n' == capturing.getvalue().
That's the easiest approach. It works with no extra machinery, except that you have to change your print calls to use the file= argument.
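For completeness, on Python 2 this pattern needs the print function from __future__, since file= is a keyword of the Python 3 style print (a minimal sketch):
from __future__ import print_function  # required on Python 2 for print(..., file=...)
import StringIO

capturing = StringIO.StringIO()
print("foo bar baz", file=capturing)
assert capturing.getvalue() == "foo bar baz\n"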
How to make a class that acts like a string?
If you don't want to subclass str for whatever reason:
class StrBuiltin(object):
    def __init__(self, astr=''):
        self._str = astr
    def __enter__(self):
        return self
    def __exit__(self, ext_typ, exc_value, traceback):
        pass  # do stuff
    def __str__(self):
        return self._str
    def __repr__(self):
        return repr(self._str)
    def __eq__(self, lvalue):
        return lvalue == self._str
    def str(self):
        '''Pretend to "convert to a str".'''
        return self._str
astr = StrBuiltin('Eggs&spam')
if isinstance(astr.str(), str):
    print 'Is like a str.'
else:
    print 'Is not like a str.'
I know you didn't want to call str(MyClass), but MyClass.str() implies, to me, that the class is expected to expose itself as a str to functions that want one, rather than giving the unexpected result of who-knows-what being returned by str(SomeObject).
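A quick check of the difference, building on the class above (a sketch): the wrapper itself is not a str, but the value it hands back is:
astr = StrBuiltin('Eggs&spam')
print isinstance(astr, str)        # False: the wrapper is not a real str
print isinstance(astr.str(), str)  # True: the unwrapped value is
print str(astr) == astr.str()      # True: both give back the same string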
This is an old question but is an interesting one.
Using the idea from @S.Lott, you can use a context manager to create a more robust and reusable tool:
from contextlib import contextmanager
from functools import partial, wraps

@contextmanager
def redefine_print(stream):
    global print
    old_print = print
    try:
        print = wraps(print)(partial(print, file=stream))
        yield print
    finally:
        print = old_print
Sample use with file-like objects:
with open('file', 'a+') as stream:
    print('a')                   # printed to the console
    with redefine_print(stream):
        print('b')               # written to the file
    print('c')                   # printed to the console
    stream.seek(0)
    print(stream.readlines())
Sample use with StringIO objects:
import io

stream = io.StringIO()
with redefine_print(stream) as xprint:
    print('b')    # added to the StringIO stream
    xprint('x')   # same as print, just to show how the yielded object works

print(stream.getvalue())  # print the intercepted value
print(xprint.__doc__)     # @wraps keeps print()'s signature and docstring
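As a side note, on Python 3.4+ the standard library's contextlib.redirect_stdout gives much the same effect without rebinding print (a minimal sketch):
import io
from contextlib import redirect_stdout

stream = io.StringIO()
with redirect_stdout(stream):
    print('captured')         # goes into the StringIO buffer
print(stream.getvalue())      # 'captured\n'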
Related
I am currently writing a program in Python 3.7 and want to add a timestamp to the front of everything I print, in the format:
<hh:mm:ss> WhateverImPrinting
I took a look at other forums and found some code that replaces sys.stdout, overriding its write function.
My issue is that it adds the timestamp both before and after my printed text,
e.g. <14:21:51> Hello<14:21:51>
This should be:
<14:21:51> Hello
My code:
import sys

old_f = sys.stdout  # keep a reference to the old print output

class PrintTimestamp:
    # @staticmethod
    def write(self, x):
        old_f.write("<{}> {}".format(str(pC.Timestamp.hhmmss()), x))
    # @staticmethod
    def flush(self):
        pass

sys.stdout = PrintTimestamp()  # set the new print output
I run this after all my classes and functions are defined, but before if __name__ == '__main__'.
You can simply override the print function in Python 3.x:
from datetime import datetime

old_print = print

def timestamped_print(*args, **kwargs):
    old_print(datetime.now(), *args, **kwargs)

print = timestamped_print
then
print("Test")
should print
2019-09-30 01:23:44.67890 Test
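If you specifically want the <hh:mm:ss> prefix from the question rather than the full datetime, a small variation (a sketch) is:
from datetime import datetime

old_print = print

def timestamped_print(*args, **kwargs):
    # strftime produces the <hh:mm:ss> prefix the question asks for
    old_print(datetime.now().strftime("<%H:%M:%S>"), *args, **kwargs)

print = timestamped_print
print("Test")  # e.g. <14:21:51> Test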
Here you go.
from datetime import datetime

class PrintTimeStamp():
    def write(self, x):
        now = datetime.now()
        ts = "{}:{}:{}".format(now.hour, now.minute, now.second)
        print("<{}> {}".format(ts, x))

pts = PrintTimeStamp()
pts.write("test")
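As for the doubled timestamp in the original question: print() calls write() separately for the message and for the trailing newline, so a naive write() prefixes both chunks. A sketch that only prefixes chunks containing visible text (illustrative, not code from the thread):
import sys
from datetime import datetime

_real_stdout = sys.stdout

class PrintTimestamp:
    def write(self, text):
        # print() calls write() once for the message and once for end='\n';
        # only prefix the chunks that contain visible text
        if text.strip():
            _real_stdout.write(datetime.now().strftime("<%H:%M:%S> ") + text)
        else:
            _real_stdout.write(text)
    def flush(self):
        _real_stdout.flush()

sys.stdout = PrintTimestamp()
print("Hello")  # <14:21:51> Hello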
I have written code that works for my case and displays the results in the terminal:
print 'The following results are found:'
# some code iterations here
...
...
print 'User - {0}, Title - {1}'.format(...)
Currently, I am trying to add an optional argument that lets me choose whether the above results should also be written to a text file.
While I can get it to work, it is not the most elegant approach:
# output_to_file is a boolean argument here.
if output_to_file:
    # file_path is the file object from `open(file_dir, "w")`
    print >> file_path, 'The following results are found:'
print 'The following results are found:'
# some code iterations here
...
...
if output_to_file:
    print >> file_path, 'User - {0}, Title - {1}'.format(...)
print 'User - {0}, Title - {1}'.format(...)
Is it possible to only write the above print statements once, whether output_to_file is true or false? I ask as I do have a ton of print statements to begin with.
Here's a way to do it with a context manager, which is similar to what's being done in the answer to the question I referred you to in a comment below your question.
The twist is that in order to be able to selectively turn output to the file on and off as desired, the simplest route seemed to be to implement it as a class (instead of applying contextlib's @contextmanager decorator to a function as was done there).
Hope this isn't too much code...
import sys

class OutputManager(object):
    """ Context manager that controls whether output goes only to the interpreter's
    current stdout stream or to both it and a given file.
    """
    def __init__(self, filename, mode='wt'):
        self.output_to_file = True
        self.saved_stdout = sys.stdout
        self.file = open(filename, mode)
        sys.stdout = self

    def __enter__(self):
        return self

    def __exit__(self, type, value, traceback):
        sys.stdout = self.saved_stdout  # Restore.
        self.file.close()

    def write(self, message):
        self.saved_stdout.write(message)
        if self.output_to_file:
            self.file.write(message)

    def enable(self):
        self.output_to_file = True

    def disable(self):
        self.output_to_file = False

if __name__ == '__main__':
    # Sample usage.
    with OutputManager('cmtest.txt') as output_manager:
        print 'This line goes to both destinations.'
        output_manager.disable()
        print 'This line goes only to the display/console/terminal.'
        output_manager.enable()
        print 'Once again, to both destinations.'
You could write a function that does what you want:
def custom_print(message):
    print(message)  # always prints to stdout
    if output_to_file:
        print >> file_path, message
Then you call it like this:
custom_print('The following results are found:')
...
custom_print('User - {0}, Title - {1}'.format(...))
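A variation that avoids relying on module-level globals (a sketch; the file_handle parameter name is illustrative) passes the file object explicitly:
def custom_print(message, file_handle=None):
    print message                      # always goes to stdout
    if file_handle is not None:
        print >> file_handle, message  # optionally duplicated to the file
Call it as custom_print(msg, file_handle=file_path) when output_to_file is set, and with no second argument otherwise.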
So I've written a module that contains a bunch of functions to easily interact with a subprocess. This subprocess has a whole bunch of settings that let you change how it formats and behaves. I realized it'd be nice to have a convenience class you could use as a handler to store the settings you prefer and pass them on to the module-level functions. Here's the example code I'm testing with:
import inspect

class MyHandler(object):
    def __init__(self):
        self.format_string = 'class format string'
        self.database = 'class database'
        self.mode = "class mode"

    def rename(self, *args, **kwargs):
        self._pass_to_function(rename, *args, **kwargs)

    def _pass_to_function(self, function, *overrided_args, **overrided_kwargs):
        # get the function's remaining arguments with the inspect module
        functon_kwargs = inspect.getargspec(function)[0][len(overrided_args):]
        handler_vars = vars(self)
        kwargs_to_pass = {}
        for arg in functon_kwargs:
            if arg in handler_vars:
                kwargs_to_pass[arg] = handler_vars[arg]
        for arg in overrided_kwargs:
            kwargs_to_pass[arg] = overrided_kwargs[arg]
        return function(*overrided_args, **kwargs_to_pass)

def rename(targets, format_string=None, database=None, mode=None,
           not_in_class='None'):
    print 'targets = {}'.format(targets)
    print 'format_string = {}'.format(format_string)
    print 'database = {}'.format(database)
    print 'mode = {}'.format(mode)
    print 'not_in_class = {}\n'.format(not_in_class)
    return
The thing I like about this solution is that it uses the attributes stored in the class, but you can easily override them by simply adding them to the method call if you want a one-off with a different setting. To do this I have the _pass_to_function as a kind of wrapper function to parse and fill in the needed settings and overrides. Here's how it looks:
>>> import argstest
>>> argstest.rename('some_file.avi', database='some database')
targets = some_file.avi
format_string = None
database = some database
mode = None
not_in_class = None
>>> tst = argstest.MyHandler()
>>> tst.rename('some_file.avi')
targets = some_file.avi
format_string = class format string
database = class database
mode = class mode
not_in_class = None
>>> tst.rename('some_file.avi', 'one off format string', not_in_class=True)
targets = some_file.avi
format_string = one off format string
database = class database
mode = class mode
not_in_class = True
Now in my real module I have dozens of module-level functions that I want to access from the handler class. Ideally they would be generated automatically based on the functions in the module. Seeing as all the methods are only going to pass everything to _pass_to_function, I get the sense that this shouldn't be very difficult, but I'm having a lot of trouble figuring out exactly how.
I've read about using type to generate a meta-class, but I don't see how I would use it in this situation. Am I not seeing how I could use type? Should I use some sort of module level script that adds the functions with setattr? Is what I was doing the better/clearer way to do things?
Any and all advice would be appreciated.
Okay, I think I've answered my own question for now. This is how the module looks:
import inspect
import sys
from types import MethodType

class MyHandler(object):
    def __init__(self):
        self.format_string = 'class format string'
        self.database = 'class database'
        self.mode = "class mode"
        self._populate_methods()

    def _populate_methods(self):
        to_add = inspect.getmembers(sys.modules[__name__], inspect.isfunction)
        to_add = [x[0] for x in to_add if not x[0].startswith('_')]
        for func_name in to_add:
            func = getattr(sys.modules[__name__], func_name)  # strings to functions
            self._add_function_as_method(func_name, func)

    def _add_function_as_method(self, func_name, func):
        def f(self, *args, **kwargs):  # the template for the method we'll add
            return self._pass_to_function(func, *args, **kwargs)
        setattr(MyHandler, func_name, MethodType(f, None, MyHandler))

    def _pass_to_function(self, function, *overrided_args, **overrided_kwargs):
        functon_kwargs = inspect.getargspec(function)[0][len(overrided_args):]
        handler_vars = vars(self)
        kwargs_to_pass = {}
        for arg in functon_kwargs:
            if arg in handler_vars:
                kwargs_to_pass[arg] = handler_vars[arg]
        for arg in overrided_kwargs:
            kwargs_to_pass[arg] = overrided_kwargs[arg]
        return function(*overrided_args, **kwargs_to_pass)

def rename(targets, format_string=None, database=None, mode=None,
           not_in_class='None'):
    print 'targets = {}'.format(targets)
    print 'format_string = {}'.format(format_string)
    print 'database = {}'.format(database)
    print 'mode = {}'.format(mode)
    print 'not_in_class = {}\n'.format(not_in_class)
    return

def something_else():
    print "this function should become a method"

def _not_a_member():
    print "this function should not become a method"
I've added the _populate_methods and _add_function_as_method member functions. The _populate_methods function gets the names of all "public" functions in the module, de-references them to their function objects, and passes each one through _add_function_as_method. All that method does is use an inner function to capture the arguments and send them to _pass_to_function, and then set that function as a method using setattr.
phew
So it works, but I'm still wondering if there isn't a clearer or more straightforward way to get this done. I'd be very grateful if anyone could chime in.
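One arguably simpler alternative (a sketch, not from the thread) is to skip generating methods entirely and forward unknown attribute lookups with __getattr__, reusing the same kwarg-filling logic:
import functools
import inspect
import sys

class MyHandler(object):
    def __init__(self):
        self.format_string = 'class format string'
        self.database = 'class database'
        self.mode = 'class mode'

    def __getattr__(self, name):
        # Only called when normal attribute lookup fails; forward to a
        # module-level function of the same name.
        func = getattr(sys.modules[__name__], name, None)
        if func is None or name.startswith('_') or not inspect.isfunction(func):
            raise AttributeError(name)
        return functools.partial(self._pass_to_function, func)

    def _pass_to_function(self, function, *overrided_args, **overrided_kwargs):
        # Same idea as above: unspecified kwargs are filled from vars(self).
        remaining = inspect.getargspec(function)[0][len(overrided_args):]
        kwargs_to_pass = dict((k, v) for k, v in vars(self).items() if k in remaining)
        kwargs_to_pass.update(overrided_kwargs)
        return function(*overrided_args, **kwargs_to_pass)
With this, tst.rename('some_file.avi') behaves like the generated-method version, without any setattr machinery.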
Let's say I have a class like so:
import os

class Shell:
    def cat(self, file):
        try:
            with open(file, 'r') as f:
                print f.read()
        except IOError:
            raise IOError('invalid file location: {}'.format(file))

    def echo(self, message):
        print message

    def ls(self, path):
        print os.listdir(path)
In a JavaScript context, you might be able to do something like "Class"[method_name](), depending on how things were structured. I am looking for something similar in Python to make this a "simulated operating system". E.g.:
import os

def runShell(user_name):
    user_input = None
    shell = Shell()
    while user_input != 'exit' and user_input != 'quit':
        user_input = raw_input('$' + user_name + ': ')
        ...
now, the idea is they can type in something like this...
$crow: cat ../my_text
... and behind the scenes, we get this:
shell.cat('../my_text')
Similarly, I would like to be able to print all method definitions that exist within that class when they type help. EG:
$crow: help\n
> cat (file)
> echo (message)
> ls (path)
Is such a thing achievable in Python?
You can use the built-in function vars to expose all the members of an object. That's maybe the simplest way to list those for your users. If you're only planning to print to stdout, you could also just call help(shell), which will print your class members along with docstrings and so on. help is really only intended for the interactive interpreter, though, so you'd likely be better off writing your own help-outputter using vars and the __doc__ attribute that's magically added to objects with docstrings. For example:
class Shell(object):
    def m(self):
        '''Docstring of C#m.'''
        return 1

    def t(self, a):
        '''Docstring of C#t'''
        return 2

for name, obj in dict(vars(Shell)).items():
    if not name.startswith('__'):  # filter out builtins
        print(name, '::', obj.__doc__)
To pick out and execute a particular method of your object, you can use getattr, which grabs an attribute (if it exists) from an object, by name. For example, to select and run a simple function with no arguments:
fname = raw_input()
if hasattr(shell, fname):
    func = getattr(shell, fname)
    result = func()
else:
    print('That function is not defined.')
Of course you could first tokenize the user input to pass arguments to your function as needed, like for your cat example:
user_input = raw_input().split()  # tokenize
fname, *args = user_input  # this use of *args syntax is not available prior to Py3
if hasattr(shell, fname):
    func = getattr(shell, fname)
    result = func(*args)  # the *args call syntax here is available back to at least 2.6
else:
    print('That function is not defined.')
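To produce the cat (file) style listing shown under help in the question, vars can be combined with inspect (a sketch; inspect.getargspec works with the Python 2 style code above):
import inspect

def print_help(shell):
    # List the public methods of the shell object with their argument names.
    for name, obj in sorted(vars(shell.__class__).items()):
        if name.startswith('__') or not callable(obj):
            continue
        args = inspect.getargspec(obj).args[1:]  # drop 'self'
        print('> {} ({})'.format(name, ', '.join(args)))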
I'm trying to learn programming through Python and I'd like to know if it's possible to get just the return value of a function and not the rest of what it does. Here's the code:
Let's say, this is the main function:
variable_a = 5
while variable_a > 0:
    input_user = raw_input(": ")
    if input_user == "A":
        deduct(variable_a)
        variable_a = deduct(variable_a)
    else:
        exit(0)
Then this is the deduct function:
def deduct(x):
    print "Hello world!"
    x = x - 1
    return x
What happens is that it does the calculation and deducts until variable_a reaches 0. However, "Hello world!" gets printed twice, I think because of variable_a = deduct(variable_a) (correct me if I'm wrong). So I was thinking, can I just capture the return value of deduct() and not the rest? So that in this instance, after going through deduct(), variable_a would just have a plain value of 2 (without the "Hello world!").
Am I missing things? :?
Editor's note: I removed the blank lines so it can be pasted into a REPL.
The printing of "Hello world" is what's known as a side effect - something produced by the function which is not reflected in the return value. What you're asking for is how to call the function twice, once to produce the side effect and once to capture the function return value.
In fact you don't have to call it twice at all - once is enough to produce both results. Simply capture the return value on the one and only call:
if input_user == "A":
    variable_a = deduct(variable_a)
else:
    exit(0)
If you don't want your function to print output, the correct solution is to not use print in it. :P
The first time you call deduct, it doesn't do anything except print that message, so you could probably just remove that line and be fine.
However, there is a slightly messy way to suppress print statements. You can temporarily replace your program's output file with a placeholder that does nothing.
import sys

class FakeOutput(object):
    def write(self, data):
        pass

old_out = sys.stdout
sys.stdout = FakeOutput()

print "Hello World!"  # does nothing
sys.stdout = old_out
print "Hello Again!"  # works normally
You could even make a context manager to make this more convenient.
import sys

class FakeOutput(object):
    def __enter__(self):
        self.out_stdout = sys.stdout
        sys.stdout = self
        return self

    def __exit__(self, *a):
        sys.stdout = self.out_stdout

    def write(self, data):
        pass

print "Hello World!"      # works
with FakeOutput():
    print "Hello Again!"  # doesn't do anything
print "Hello Finally!"    # works