I ran into some behavior that seemed weird to me when running the following script.
As you can see, it looks as though write is called multiple times, and I wonder why, since I have explicitly overridden the default file=sys.stdout behavior.
How exactly does print write to streams under the hood? Does it pipe to all channels, or does it have some default behavior? The docs are not very specific, except for the following:
The file argument must be an object with a write(string) method; if it
is not present or None, sys.stdout will be used.
Test script
import sys

def debug(*args, **kwargs):
    pass

def _debugwrite(obj):
    print("You're looking at Attila, the psychopathic killer, the caterpillar")
    out = sys.stderr
    out.write(obj)

debug.write = _debugwrite

print("Don't you ever disrespect the caterpillar", file=debug)
Output:
You're looking at Attila, the psychopathic killer, the caterpillar
You're looking at Attila, the psychopathic killer, the caterpillar
Don't you ever disrespect the caterpillar
What I expected:
You're looking at Attila, the psychopathic killer, the caterpillar
Don't you ever disrespect the caterpillar
What I tried:
I tried to use the inspect module to get the caller, to maybe see who makes the actual call to write, but I get <module>, and I don't know why :( Is this obvious?
Further questions:
Is there any way to debug a function beyond Python and go into the underlying C call? The main Python distribution is CPython, and if my understanding is correct, Python is just an API for the underlying C code: a call in Python eventually gets translated into a C call under the hood. For instance, I found how print is defined in C, but it's tough for me to understand what's going on there (because, erm, I don't know C). Maybe by going in with a debugger I could print things out, see what is what, and figure out at least the flow, if not everything. I'd very much like to understand what's going on under the hood in general instead of taking stuff for granted.
Thx in advance for your time!
You're looking for something really complicated when the answer is dead simple.
I don't even know what "pipe to all channels" would mean, but print does nothing of the sort. All it does is call write on the file object you passed it.
However, it calls write once for each argument, once for each separator between arguments, and once for the end.
So, this line:
print("Don't you ever disrespect the caterpillar", file=debug)
… is roughly equivalent to:
debug.write(str("Don't you ever disrespect the caterpillar"))
debug.write("\n")
… which of course means you get your extra print message twice.
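To make that concrete, here is a rough pure-Python model of print's behavior (a sketch only; CPython's real implementation is in C):

import sys

def print_model(*args, sep=' ', end='\n', file=None):
    """Rough model of print: one write per argument, separator, and end."""
    if file is None:
        file = sys.stdout          # the documented default
    for i, arg in enumerate(args):
        if i:
            file.write(sep)        # one write for each separator
        file.write(str(arg))       # one write for each argument
    file.write(end)                # one write for the end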
By the way, for debugging or understanding things like this in the future: if you change the extra print to include, say, repr(obj), what's happening becomes obvious:
def _debugwrite(obj):
    print("stderring " + repr(obj))
    out = sys.stderr
    out.write(obj)
The output is then:
stderring "Don't you ever disrespect the caterpillar"
stderring '\n'
Don't you ever disrespect the caterpillar
Not very mysterious anymore, right?
And of course stdout and stderr are separate streams, with their own buffers. (By default, when talking to a TTY, stdout is line-buffered, and stderr is unbuffered.) So the ordering isn't what you'd naively expect, but it makes sense. If you just add in flushes, the output turns into:
stderring "Don't you ever disrespect the caterpillar"
Don't you ever disrespect the caterpillarstderring '\n'
(with a blank line at the end).
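("Adding in flushes" means something like the following sketch, which assumes Python 3's flush keyword argument on print:)

def _debugwrite(obj):
    print("stderring " + repr(obj), flush=True)  # push stdout out immediately
    out = sys.stderr
    out.write(obj)
    out.flush()  # stderr is typically unbuffered anyway, but be explicit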
For your bonus questions:
I tried to use the inspect module to get the caller, to maybe see who makes the actual call to write, but I get <module>, and I don't know why :( Is this obvious?
I'm assuming you did something like inspect.stack()[1].function? If so, the code you're inspecting is the top-level code in the module, so inspect shows it as a fake function named <module>.
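You can see this for yourself with a tiny experiment (a sketch; whoami and caller are hypothetical helpers):

import inspect

def whoami():
    # stack()[0] is this frame; stack()[1] is whoever called us
    return inspect.stack()[1].function

def caller():
    return whoami()

print(caller())  # prints: caller
print(whoami())  # prints: <module>  (called from top-level module code)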
Is there any way to debug a function beyond Python and go into the underlying C call?
Sure. Just run CPython itself under lldb, gdb, Microsoft's debugger, or whatever else you usually use for debugging binary programs. You can put breakpoints in the ceval loop or in a particular C API function or wherever you want. You may want to make a debug build of CPython (do ./configure --help to see the options) to make this even better.
The main Python distribution is CPython, and if my understanding is correct, Python is just an API for the underlying C code.
Well, not quite. It's a compiler and a bytecode interpreter. That bytecode interpreter largely uses the same C API that's exposed for the extending/embedding interface, but the overlap isn't 100%; there are places where it deals with the structures below the C API level.
A call in Python eventually gets translated into a C call under the hood. For instance, I found how print is defined in C, but it's tough for me to understand what's going on there (because, erm, I don't know C). Maybe by going in with a debugger I could print things out, see what is what, and figure out at least the flow, if not everything. I'd very much like to understand what's going on under the hood in general instead of taking stuff for granted.
Yes, you can do that, but you will need to understand both C and the CPython API (e.g., things like how to find the C slot equivalent to __call__) to figure out where to put your breakpoints and start tracing.
And for cases like these, it's a lot easier to just write wrappers in Python and debug them in Python. For example:
import builtins

def print(*args, **kwargs):
    return builtins.print(*args, **kwargs)
Or, if you're worried about print being called in other modules, not just in yours, you can even shadow it in builtins:
builtins._print = builtins.print

def print(*args, **kwargs):
    return builtins._print(*args, **kwargs)

builtins.print = print
Now you can just use pdb to break on every call to print at the Python level, without worrying about the C.
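For instance, a quick way to stop in that wrapper (a sketch; you could equally set a breakpoint from the pdb prompt instead):

import builtins
import pdb

def print(*args, **kwargs):
    pdb.set_trace()  # drops into the debugger on every print call
    return builtins.print(*args, **kwargs)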
And of course you can even debug this code in PyPy or Jython or whatever to see if it's any different from CPython above the "builtin" level.
You get the result you see because builtin_print() calls PyFile_Write*() twice, once in order to print the argument, and again to print the EOL. They are out of order because by default stderr is unbuffered and stdout is line-buffered.
Related
I'm trying to test a Python script (2.7) that works with standard input (read with raw_input() and written with a simple print), but I can't find how to do this, and I'm sure this issue is very simple.
This is a very condensed version of my script:
def example():
    number = raw_input()
    print number

if __name__ == '__main__':
    example()
I want to write a unittest test to check this, but I can't find how. I've tried StringIO and other things, but I haven't found a solution to something this simple.
Does somebody have an idea?
PS: Of course, in the real script I use data blocks with several lines and other kinds of data.
Thank you so much.
EDIT:
Thank you so much for the first really specific answer; it works perfectly. My only little problem was importing StringIO: I was doing import StringIO and I needed from StringIO import StringIO (I don't really understand why), but be that as it may, it works.
But I've found another problem with this approach. In my project I need to test scripts this way (which works perfectly thanks to your support), but I want to do the following:
I have a file with a lot of tests to run against a script, so I open the file and read blocks of input together with their expected result blocks, and I would like the code to process one block, check its result, then do the same with the next, and so on.
Something like this:
class Test(unittest.TestCase):
    ...
    # open the file and process it, saving data like datablocks and results
    ...
    allTest = True
    for test in tests:
        stub_stdin(self, test.dataBlock)
        stub_stdouts(self)
        runScript()
        if sys.stdout.getvalue() != test.expectResult:
            allTest = False
    self.assertEqual(allTest, True)
I know that maybe unittest doesn't make sense written like this, but it gives you an idea of what I want. Anyway, this approach fails and I don't know why.
Typical techniques involve mocking the standard sys.stdin and sys.stdout with your desired items. If you do not care about Python 3 compatibility, you can just use the StringIO module; however, if you are forward-thinking and willing to restrict yourself to Python 2.7 and 3.3+, supporting both Python 2 and 3 becomes possible without too much work through the io module (it requires a bit of modification, but put that thought on hold for now).
Assuming you already have a unittest.TestCase going, you can create a utility function (or method in the same class) that will replace sys.stdin/sys.stdout as outlined. First the imports:
import sys
import io
import unittest
In one of my recent projects I did this for stdin; the helper takes a str with the input that the user (or another program, through a pipe) would feed to your program's stdin:
def stub_stdin(testcase_inst, inputs):
    stdin = sys.stdin

    def cleanup():
        sys.stdin = stdin

    testcase_inst.addCleanup(cleanup)
    sys.stdin = StringIO(inputs)
As for stdout and stderr:
def stub_stdouts(testcase_inst):
    stderr = sys.stderr
    stdout = sys.stdout

    def cleanup():
        sys.stderr = stderr
        sys.stdout = stdout

    testcase_inst.addCleanup(cleanup)
    sys.stderr = StringIO()
    sys.stdout = StringIO()
Note that in both cases the function accepts a test case instance and calls its addCleanup method, registering a cleanup function that resets the streams back to what they were once the test method concludes. The effect is that from the moment this is invoked in the test case until the end, sys.stdout and friends are replaced with the io.StringIO version, meaning you can check their values easily and don't have to worry about leaving a mess behind.
Better to show this as an example. To use this, you can simply create a test case like so:
class ExampleTestCase(unittest.TestCase):
    def test_example(self):
        stub_stdin(self, '42')
        stub_stdouts(self)
        example()
        self.assertEqual(sys.stdout.getvalue(), '42\n')
Now, in Python 2 this test will only pass if the StringIO class is from the StringIO module, and in Python 3 no such module exists. What you can do is use the version from the io module with a modification that makes it slightly more lenient about what input it accepts, so that the unicode encoding/decoding is done automatically rather than triggering an exception (without this, things like print statements in Python 2 will not work nicely). I typically do this for cross-compatibility between Python 2 and 3:
class StringIO(io.StringIO):
    """
    A "safely" wrapped version
    """

    def __init__(self, value=''):
        value = value.encode('utf8', 'backslashreplace').decode('utf8')
        io.StringIO.__init__(self, value)

    def write(self, msg):
        io.StringIO.write(self, msg.encode(
            'utf8', 'backslashreplace').decode('utf8'))
Now plug your example function plus every code fragment in this answer into one file, and you will get a self-contained unittest that works in both Python 2 and 3 (although you need to call print as a function in Python 3) for testing against stdio.
One more note: you can always put the stub_ function calls in the setUp method of the TestCase if every single test method requires them.
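That version might look like this (a sketch reusing the stub helpers above):

class ExampleTestCase(unittest.TestCase):
    def setUp(self):
        # every test method starts with stubbed stdio; addCleanup
        # (inside the helpers) restores the real streams afterwards
        stub_stdin(self, '42')
        stub_stdouts(self)

    def test_example(self):
        example()
        self.assertEqual(sys.stdout.getvalue(), '42\n')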
Of course, if you want to use one of the various mocking libraries out there to stub out stdin/stdout, you are free to do so, but this approach relies on no external dependencies, if that is your goal.
As for your second issue: test cases have to be written in a certain way; they must be encapsulated within a method, not placed at the class level, which is why your original example fails. You might want to do something like this instead:
class Test(unittest.TestCase):
    def helper(self, data, answer, runner):
        stub_stdin(self, data)
        stub_stdouts(self)
        runner()
        self.assertEqual(sys.stdout.getvalue(), answer)
        self.doCleanups()  # optional, see comments below

    def test_various_inputs(self):
        data_and_answers = [
            ('hello', 'HELLOhello'),
            ('goodbye', 'GOODBYEgoodbye'),
        ]
        runScript = upperlower  # the function I want to test
        for data, answer in data_and_answers:
            self.helper(data, answer, runScript)
The reason you might want to call doCleanups is to keep the cleanup stack from growing as deep as the number of data_and_answers pairs. However, it pops everything off the cleanup stack, so if you have other things that need to be cleaned up at the end, this can be problematic. You are also free to leave it out, since all the stdio-related objects are restored at the end in the same order, so the real streams will always come back. Now the function I wanted to test:
def upperlower():
    raw = raw_input()
    print (raw.upper() + raw),
A bit of explanation of what I did might help. Remember that within a TestCase class, the framework relies strictly on the instance's assertEqual and friends to do its job. To ensure the testing is done at the right level, you really want to call those asserts on every iteration, so that a helpful error message is shown at the moment an error occurs, with the exact inputs/answers that didn't come out right, rather than only at the very end, as in your for loop (which tells you something was wrong, but not which of the hundreds of cases, and now you are mad). As for the helper method: you can call it anything you want, as long as it doesn't start with test, because then the framework would try to run it as a test and it would fail terribly. Just follow this convention and you can basically have templates within your test case for running your tests, which you can then use in a loop with a bunch of inputs/outputs like I did.
As for your other question:
My only little problem was importing StringIO: I was doing import StringIO and I needed from StringIO import StringIO (I don't really understand why), but be that as it may, it works.
Well, if you look at my original code, you'll see that I imported io and then overrode the io.StringIO class by defining class StringIO(io.StringIO). Your import works because you are doing this strictly from Python 2, whereas I try to target my answers at Python 3 whenever possible, given that Python 2 will (probably definitely, this time) stop being supported in less than 5 years. Think of the future users who will read this post with a similar problem. Anyway, yes, from StringIO import StringIO works, as that's the StringIO class from the StringIO module. from cStringIO import StringIO should also work, as it imports the C version of that module. They all offer close-enough interfaces, so they basically work as intended (until, of course, you try to run them under Python 3).
Again, putting all of this together with my code should give you a self-contained working test script. Remember to look at the documentation and follow the form of the code, rather than inventing your own syntax and hoping things work. (As for exactly why your code didn't work: the "test" code sat where the class was being constructed, so all of it executed while Python was importing your module, and since none of the things the test needs were available yet, the class itself didn't even exist, the whole thing just died in fits of twitching agony.) Asking questions here helps too; even though the issue you faced is really common, not having a quick and simple name to search for your exact problem makes it hard to figure out where you went wrong. Anyway, good luck, and good on you for taking the effort to test your code.
There are other methods, but given that the other questions/answers I looked at here on SO didn't seem to help, I hope this one does. Others for reference:
How to supply stdin, files and environment variable inputs to Python unit tests?
python mocking raw input in unittests
Naturally, it bears repeating that all of this can be done with unittest.mock, available in Python 3.3+, or with its rolling-backport mock on PyPI. But given that those libraries hide some of the intricacies of what actually happens (or needs to happen) during the redirection, they may obscure exactly the details you want to understand. If you want, you can read up on unittest.mock.patch and scroll down slightly to the section on patching sys.stdout with StringIO.
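For comparison, a rough equivalent with unittest.mock might look like this (a sketch; it assumes a Python 3 version of example() that reads via input()):

import io
import unittest
from unittest import mock

class MockedExampleTestCase(unittest.TestCase):
    def test_example(self):
        # patch replaces the streams for the duration of the with block
        with mock.patch('sys.stdin', io.StringIO('42')), \
             mock.patch('sys.stdout', new_callable=io.StringIO) as fake_out:
            example()
            self.assertEqual(fake_out.getvalue(), '42\n')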
I am writing a Python interpreter and want to redirect functions' return values to stdout, like the Python interpreter in interactive mode. In this mode, when the user calls a function, its return value is printed on the screen. The same occurs with expressions.
E.g.
>>> foo()
'Foo return value'
>>> 2+4
6
>>> print('Hello!')
'Hello!'
Changing sys.stdout only affects the print function. How do I redirect the other expressions to stdout?
Thank you
First, the interactive mode does not print the return value from any function called. Instead, it prints the result of whatever expression the user typed in. If that's not a function call, it still gets printed. If it has 3 function calls in it, it still prints one result, not 3 lines. And so on.
So, trying to redirect function return values to stdout is the wrong thing to do.
What the interactive interpreter does is something sort of like this:
line = raw_input(sys.ps1)
_ = eval(line)
if _ is not None:
    print repr(_)
(You may notice that you can change sys.ps1 from the interactive prompt to change what the prompt looks like, access _ to get the last value, etc.)
However, that's not what it really does. And that's not how you should go about this yourself either. If you try, you'll have to deal with complexities like keeping your own globals separate from the user's, handling statements as well as expressions, handling multi-line statements and expressions (doing raw_input(sys.ps2) is easy, but how do you know when to do that?), interacting properly with readline and rlcomplete, etc.
There's a section of the documentation called Custom Python Interpreters which explains the easy way to do this:
The modules described in this chapter allow writing interfaces similar to Python’s interactive interpreter. If you want a Python interpreter that supports some special feature in addition to the Python language, you should look at the code module.
And code:
… provides facilities to implement read-eval-print loops in Python. Two classes and convenience functions are included which can be used to build applications which provide an interactive interpreter prompt.
The idea is that you let Python do all the hard stuff, up to whatever level you want to take over, and then you just write the part on top of that.
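A minimal sketch of that approach with the code module (the class name and banner text here are just placeholders):

import code

class MyConsole(code.InteractiveConsole):
    def runsource(self, source, filename='<input>', symbol='single'):
        # hook point: inspect or transform the user's source here;
        # the 'single' compile mode is what makes bare expression
        # results print themselves, exactly like the real REPL
        return code.InteractiveConsole.runsource(self, source, filename, symbol)

MyConsole().interact('My custom interpreter')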
You may want to look at the source for IDLE, ipython, bpython, etc. for ideas.
Instead of using exec() to run the user input, try eval():
retval = eval(user_input)
sys.stdout.write(repr(retval) + "\n")
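Note that eval only accepts expressions; a statement like x = 1 raises a SyntaxError there. One common pattern is to try compiling as an expression first and fall back to exec (a sketch; run_line is a hypothetical helper):

import sys

def run_line(user_input):
    try:
        code_obj = compile(user_input, '<input>', 'eval')  # expressions only
    except SyntaxError:
        exec(compile(user_input, '<input>', 'exec'))       # statements
        return
    retval = eval(code_obj)
    if retval is not None:
        sys.stdout.write(repr(retval) + "\n")

Alternatively, compiling with mode 'single' makes exec print non-None expression results through sys.displayhook, which is what the interactive interpreter itself does.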
Sometimes I have a lot of prints scattered around a function to produce debug output.
To switch these debug outputs on and off I came up with this:
def f(debug=False):
    print = __builtins__.print if debug else lambda *p: None
Or, if I need to print something other than debug messages, I create a dprint function for the debug messages.
The problem is that when debug=False, these print calls still slow down the code considerably, because lambda *p: None is still called, and function invocations are known to be slow.
So, my question is: is there a better way to efficiently disable all these debug prints so they don't affect the code's performance?
All the answers are about my not using the logging module. That is good to note, but it doesn't answer the question of how to avoid the function invocations that slow down the code considerably (in my case, 25 times), if that's possible at all - for example, by tinkering with the function's code object to throw away all the lines with print statements, or by some other means. What these answers suggest is replacing print with logging.debug, which should be even slower. This question is about getting rid of those function calls completely.
I tried using logging instead of lambda *p: None, and, no surprise, the code became even slower.
Maybe someone would like to see the code where those prints caused the 25x slowdown: http://ideone.com/n5PGu
And I don't have anything against the logging module. I think it's good practice to always stick to robust solutions without hacks. But I think there's nothing criminal in using such hacks in a 20-line, one-off code snippet.
Not as a restriction, but as a suggestion: maybe it's possible to delete some lines (e.g., those starting with print) from the function's source code and recompile it? I laid out this approach in the answer below. Though I would like to see comments on that solution, I welcome other approaches to this problem.
You should use the logging module instead. See http://docs.python.org/library/logging.html
Then you can set the log level depending on your needs, and create multiple logger objects that log about different subjects.
import logging
#set your log level
logging.basicConfig(level=logging.DEBUG)
logging.debug('This is a log message')
In your case: you could simply replace your print statement with a log statement, e.g.:
import logging
print = __builtins__.print if debug else logging.debug
Now the function will only print anything if you set the logging level to debug:
logging.basicConfig(level=logging.DEBUG)
But as a plus, you can use all other logging features on top! logging.error('error!')
Ned Batchelder wrote in the comment:
I suspect the slow down is in the calculation of the arguments to
your debug function. You should be looking for ways to avoid those
calculations. Preprocessing Python is just a distraction.
And he is right: the slowdown is actually caused by formatting the string with the format method, which happens regardless of whether the resulting string gets logged.
So string formatting should be deferred, and skipped entirely if no logging will occur. This can be achieved by refactoring the dprint function, or by using log.debug in the following way:
log.debug('formatted message: %s', interpolated_value)
If the message won't be logged, it won't be formatted, unlike with print, where the string is always formatted, whether it ends up printed or discarded.
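The difference in a sketch:

import logging
logging.basicConfig(level=logging.INFO)  # DEBUG records are discarded
log = logging.getLogger(__name__)

big = list(range(10000))
log.debug('big = {}'.format(big))  # str.format runs even though nothing is logged
log.debug('big = %s', big)         # deferred: %-interpolation is skipped entirely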
Martijn Pieters explained log.debug's postponed formatting here.
Another solution could be to dynamically edit the code of f and delete all dprint calls. But this solution is highly discouraged:
You are correct, you should never resort to this; there are so many ways it can go wrong. First, Python is not a language designed for source-level transformations, and it's hard to write a transformer like this one without gratuitously breaking valid code. Second, this hack would break in all kinds of circumstances - for example, when defining methods, when defining nested functions, when used in Cython, when inspect.getsource fails for whatever reason. Python is dynamic enough that you really don't need this kind of hack to customize its behavior.
Here is the code for this approach, for those who would like to get acquainted with it:
from __future__ import print_function

DEBUG = False

def dprint(*args, **kwargs):
    '''Debug print'''
    print(*args, **kwargs)

_blocked = False

def nodebug(name='dprint'):
    '''Decorator that removes every standalone expression statement calling the function named `name`'''
    def helper(f):
        global _blocked
        if _blocked:
            return f
        import inspect, ast, sys
        source = inspect.getsource(f)
        a = ast.parse(source)  # get the AST of f
        class Transformer(ast.NodeTransformer):
            '''Deletes all top-level expression statements that call `name`'''
            def visit_Expr(self, node):  # visit all expression statements
                try:
                    if node.value.func.id == name:  # it's a call to `name`...
                        return None  # ...so delete it
                except AttributeError:  # not a plain function call; leave it alone
                    pass
                return node  # return the node unchanged
        transformer = Transformer()
        a_new = transformer.visit(a)
        f_new_compiled = compile(a_new, '<string>', 'exec')
        env = sys.modules[f.__module__].__dict__
        _blocked = True
        try:
            exec(f_new_compiled, env)
        finally:
            _blocked = False
        return env[f.__name__]
    return helper

@nodebug('dprint')
def f():
    dprint('f() started')
    print('Important output')
    dprint('f() ended')
    print('Important output2')

f()
More information: Replacing parts of the function code on-the-fly
As a hack, yes, that works. (And there is no chance in hell those lambda no-ops are your app's bottleneck.)
However, you really should be doing logging properly by using the logging module.
See http://docs.python.org/howto/logging.html#logging-basic-tutorial for a basic example of how this should be done.
You should definitely use Python's logging module; it's very practical and you can change the log level of your application. Example:
>>> import logging
>>> logging.basicConfig(level=logging.DEBUG)
>>> logging.debug('Test.')
DEBUG:root:Test.
I have some code that uses ctypes to try to determine if the file pointed to by sys.stdout is actually stdout. I know that on any POSIX-compliant system, and even on Windows, it should be safe to assume this is true if sys.stdout.fileno() == 1, so my question is not how to do this in general.
In my code (which is already using ctypes for something unrelated to my question) I carelessly had something like:
libc = ctypes.CDLL(ctypes.util.find_library('c'))
real_stdout = libc.fileno(ctypes.c_void_p.in_dll(libc, 'stdout'))
if sys.stdout.fileno() == real_stdout:
    ...
This works perfectly fine on Linux, so I didn't really think about it much. It looked nicer and more readable than hard-coding 1 as the file descriptor. But I found a few days later that my code wasn't working on OSX.
It turns out OS X's libc doesn't export any symbol called 'stdout'. Instead, its stdio.h has stdout defined as:
#define stdout __stdoutp
If I change my code to c_void_p.in_dll(libc, '__stdoutp') my code works as expected, but of course that's OSX-only. Windows, it turns out, has a similar issue (at least if using MSVC).
I will probably just change my code to use 1, but out of curiosity my question still stands: is there a cross-platform way to get the stdio FILE pointer (and likewise stdin and stderr) without assuming it uses the POSIX-compliant descriptor?
As so often when it comes to C, if you want compatibility, you have to go and look in the relevant standard. Since you mention Windows, I guess you don't actually want the POSIX standard, but rather the C one.
C99 section 7.19.1 defines stdout to be a macro, and thus not necessarily a variable. That means there's no way you can rely on finding it using dlsym (which I assume in_dll uses). The actual expression could just as well be a function call or a fixed address. Perhaps not very likely, but it is possible...
As said in the comments, the fileno function is in turn defined by POSIX, not by C; C has no concept of file descriptors. I think you're better off assuming POSIX and just checking for the value 1, which it specifies.
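In Python that check is a one-liner (assuming sys.stdout still wraps a real file descriptor; a replaced stream such as StringIO raises on fileno()):

import sys

# POSIX specifies file descriptor 1 as standard output
is_real_stdout = sys.stdout.fileno() == 1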
If you're simply interested in making things work, rather than strict standards adherence (like me), you can find the "real" name of stdout by writing a simple C snippet:
echo -e '#include <stdio.h>\nFILE* mystdout = stdout;' > test.c
cpp test.c | tail
Gives you the output:
FILE* mystdout = __stdoutp;
This means you also need to try ctypes.c_void_p.in_dll(libc, '__stdoutp') to cover the Darwin case.
I love being able to modify the arguments that get sent to a function using settrace, like this:
import sys

def trace_func(frame, event, arg):
    value = frame.f_locals["a"]
    if value % 2 == 0:
        value += 1
    frame.f_locals["a"] = value

def f(a):
    print a

if __name__ == "__main__":
    sys.settrace(trace_func)
    for i in range(0, 5):
        f(i)
And this will print:
1
1
3
3
5
What other cool stuff can you do using settrace?
I would strongly recommend against abusing settrace. I'm assuming you understand this stuff, but others coming along later may not. There are a few reasons:
Settrace is a very blunt tool. The OP's example is a simple one, but there's practically no way to extend it for use in a real system.
It's mysterious. Anyone coming to look at your code would be completely stumped why it was doing what it was doing.
It's slow. Invoking a Python function for every line of Python executed is going to slow down your program by many multiples.
It's usually unnecessary. The original example here could have been accomplished in a few other ways (modify the function, wrap the function in a decorator, call it via another function, etc), any of which would have been better than settrace.
It's hard to get right. In the original example, if you had not called f directly, but instead called g which called f, your trace function wouldn't have done its job, because you returned None from the trace function, so it's only invoked once and then forgotten (see the sketch after this list).
It will keep other tools from working. This program will not be debuggable (because debuggers use settrace), it will not be traceable, it will not be possible to measure its code coverage, etc. Part of this is due to lack of foresight on the part of the Python implementors: they gave us settrace but no gettrace, so it's difficult to have two trace functions that work together.
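To make point 5 concrete: the function installed with sys.settrace receives 'call' events, and whatever it returns becomes the local trace function for the new frame. A minimal sketch:

import sys

def trace_func(frame, event, arg):
    print('%s in %s, line %s' % (event, frame.f_code.co_name, frame.f_lineno))
    # Returning a trace function keeps per-frame tracing alive, so we
    # also see 'line' and 'return' events; returning None would mean
    # this frame is never traced again after the initial 'call'.
    return trace_func

def inner():
    return 42

def outer():
    return inner()

sys.settrace(trace_func)
outer()
sys.settrace(None)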
Trace functions make for cool hacks. It's fun to be able to abuse it, but please don't use it for real stuff. If I sound hectoring, I apologize, but this has been done in real code, and it's a pain. For example, DecoratorTools uses a trace function to perform the magic feat of making this syntax work in Python 2.3:
# Method decorator example
from peak.util.decorators import decorate

class Demo1(object):
    decorate(classmethod)  # equivalent to @classmethod
    def example(cls):
        print "hello from", cls
A neat hack, but unfortunately, it meant that any code that used DecoratorTools wouldn't work with coverage.py (or debuggers, I guess). Not a good tradeoff if you ask me. I changed coverage.py to provide a mode that lets it work with DecoratorTools, but I wish I hadn't had to.
Even code in the standard library sometimes gets this stuff wrong. Pyexpat decided to be different than every other extension module, and invoke the trace function as if it were Python code. Too bad they did a bad job of it.
</rant>
I made a module called pycallgraph which generates call graphs using sys.settrace().
Of course, code coverage is accomplished with the trace function. One cool thing we haven't had before is branch coverage measurement; that's coming along nicely and is about to be released in an alpha version of coverage.py.
So for example, consider this function:
def foo(x):
    if x:
        y = 10
    return y
if you test it with this call:
assert foo(1) == 10
then statement coverage will tell you that all the lines of the function were executed. But of course, there's a simple problem in that function: calling it with 0 raises an UnboundLocalError.
Branch measurement would tell you that there's a branch in the code that isn't fully exercised, because only one leg of the branch is ever taken.
For example, get the memory consumption of Python code line-by-line: http://pypi.python.org/pypi/memory_profiler
One recent project that uses settrace heavily is PySnooper.
It helps new programmers trace/log/monitor their program's output. Cheers!
I don't have an exhaustively comprehensive answer, but one thing I did with it, with the help of another user on SO, was create a program that generates the trace tables of other Python programs.
The Python debugger pdb uses sys.settrace to analyse which lines to debug.
Here's a C optimization/extension for pdb that also uses sys.settrace:
https://bitbucket.org/jagguli/cpdb