I love being able to modify the arguments that get sent to a function using settrace, like this:
import sys

def trace_func(frame, event, arg):
    value = frame.f_locals["a"]
    if value % 2 == 0:
        value += 1
    frame.f_locals["a"] = value

def f(a):
    print a

if __name__ == "__main__":
    sys.settrace(trace_func)
    for i in range(0, 5):
        f(i)
And this will print:
1
1
3
3
5
What other cool stuff can you do using settrace?
I would strongly recommend against abusing settrace. I'm assuming you understand this stuff, but others coming along later may not. There are a few reasons:
Settrace is a very blunt tool. The OP's example is a simple one, but there's practically no way to extend it for use in a real system.
It's mysterious. Anyone coming to look at your code would be completely stumped why it was doing what it was doing.
It's slow. Invoking a Python function for every line of Python executed is going to slow down your program by many multiples.
It's usually unnecessary. The original example here could have been accomplished in a few other ways (modify the function, wrap the function in a decorator, call it via another function, etc), any of which would have been better than settrace.
It's hard to get right. In the original example, if you had not called f directly, but instead called g which called f, your trace function wouldn't have done its job, because you returned None from the trace function, so it's only invoked once and then forgotten (see the sketch just after this list).
It will keep other tools from working. This program will not be debuggable (because debuggers use settrace), it will not be traceable, it will not be possible to measure its code coverage, etc. Part of this is due to lack of foresight on the part of the Python implementors: they gave us settrace but no gettrace, so it's difficult to have two trace functions that work together.
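To illustrate the "hard to get right" point, here is a minimal sketch (mine, not from the answer above). Per the sys.settrace documentation, whatever the trace function returns is installed as the local trace function for that frame, so a trace function that wants to keep receiving events usually returns itself:

import sys

def trace_calls(frame, event, arg):
    if event == "call":
        # Report every function entered while tracing is active.
        print("entering " + frame.f_code.co_name)
    # Returning the trace function (instead of None) installs it as the
    # local trace function for this frame, so 'line' and 'return' events
    # for the frame keep arriving here as well.
    return trace_calls

def g(a):
    return a * 2

def f(a):
    return g(a) + 1

sys.settrace(trace_calls)
f(3)
sys.settrace(None)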
Trace functions make for cool hacks. It's fun to be able to abuse it, but please don't use it for real stuff. If I sound hectoring, I apologize, but this has been done in real code, and it's a pain. For example, DecoratorTools uses a trace function to perform the magic feat of making this syntax work in Python 2.3:
# Method decorator example
from peak.util.decorators import decorate

class Demo1(object):
    decorate(classmethod)  # equivalent to @classmethod
    def example(cls):
        print "hello from", cls
A neat hack, but unfortunately, it meant that any code that used DecoratorTools wouldn't work with coverage.py (or debuggers, I guess). Not a good tradeoff if you ask me. I changed coverage.py to provide a mode that lets it work with DecoratorTools, but I wish I hadn't had to.
Even code in the standard library sometimes gets this stuff wrong. Pyexpat decided to be different than every other extension module, and invoke the trace function as if it were Python code. Too bad they did a bad job of it.
</rant>
I made a module called pycallgraph which generates call graphs using sys.settrace().
Of course, code coverage is accomplished with the trace function. One cool thing we haven't had before is branch coverage measurement, and that's coming along nicely, about to be released in an alpha version of coverage.py.
So for example, consider this function:
def foo(x):
    if x:
        y = 10
    return y
if you test it with this call:
assert foo(1) == 10
then statement coverage will tell you that all the lines of the function were executed. But of course, there's a simple problem in that function: calling it with 0 raises an UnboundLocalError.
Branch measurement would tell you that there's a branch in the code that isn't fully exercised, because only one leg of the branch is ever taken.
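To see the problem concretely (this snippet is mine, not from the original answer), statement coverage is satisfied by the passing call while the untested branch still hides a crash:

def foo(x):
    if x:
        y = 10
    return y

assert foo(1) == 10  # every line executes, so statement coverage is 100%

try:
    foo(0)  # the branch where the if-body is skipped: y is never bound
except UnboundLocalError as e:
    print("foo(0) failed: %s" % e)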
For example, get the memory consumption of Python code line-by-line: http://pypi.python.org/pypi/memory_profiler
One recent project that uses settrace heavily is PySnooper.
It helps new programmers trace/log/monitor their program's output. Cheers!
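For instance, a minimal use might look like this (a sketch based on PySnooper's documented snoop decorator; the function itself is just an example):

import pysnooper

@pysnooper.snoop()  # logs every executed line and variable change to stderr
def number_to_bits(number):
    bits = []
    while number:
        number, remainder = divmod(number, 2)
        bits.insert(0, remainder)
    return bits or [0]

number_to_bits(6)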
I don't have an exhaustively comprehensive answer but one thing I did with it, with the help of another user on SO, was create a program that generates the trace tables of other Python programs.
The Python debugger pdb uses sys.settrace to analyse the lines it is debugging.
Here's a C optimization/extension for pdb that also uses sys.settrace:
https://bitbucket.org/jagguli/cpdb
I encountered some (to me) weird behavior upon running the following script.
As you can see it seems as though write is called multiple times and I wonder why this is, as I have explicitly overridden the file=sys.stdout behavior.
How exactly does print pipe streams under the hood? Does it pipe to all channels? Does it have some default behavior? The docs are not very specific, except for the following:
The file argument must be an object with a write(string) method; if it
is not present or None, sys.stdout will be used.
Test script
import sys

def debug(*args, **kwargs):
    pass

def _debugwrite(obj):
    print("You're looking at Attila, the psychopathic killer, the caterpillar")
    out = sys.stderr
    out.write(obj)

debug.write = _debugwrite

print("Don't you ever disrespect the caterpillar", file=debug)
Output:
You're looking at Attila, the psychopathic killer, the caterpillar
You're looking at Attila, the psychopathic killer, the caterpillar
Don't you ever disrespect the caterpillar
What I expected:
You're looking at Attila, the psychopathic killer, the caterpillar
Don't you ever disrespect the caterpillar
What I tried:
I tried to use the inspect module to get the caller, to maybe see who does the actual call to write, but I get <module>, and I don't know why. Is this obvious?
Further questions:
Is there any way to debug a function beyond Python and go into the underlying C call? Because, well, the main Python distribution is CPython, and if my understanding is correct, Python is just an API for the underlying C code; a call in Python eventually gets translated to a C call under the hood. So, for instance, I found out that print is defined as follows in C, but it's tough for me to understand what's going on there (because, erm, I don't know C), but maybe by going in with a debugger I could print stuff out, see what is what, and figure out at least the flow, if not everything. I'd very much like to understand what's going on under the hood in general instead of taking stuff for granted.
Thanks in advance for your time!
You're looking for something really complicated when the answer is dead simple.
I don't even know what "pipe to all channels" would mean, but print does nothing of the sort. All it does is call write on the file object you passed it.
However, it calls write once for each argument, once for each sep, and once for the end.
So, this line:
print("Don't you ever disrespect the caterpillar", file=debug)
… is roughly equivalent to:
debug.write(str("Don't you ever disrespect the caterpillar"))
debug.write("\n")
… which of course means you get your extra print message twice.
By the way, for debugging or understanding things like this in the future: If you changed the extra print to include, say, repr(obj), what's happening would have been obvious:
def _debugwrite(obj):
    print("stderring " + repr(obj))
    out = sys.stderr
    out.write(obj)
The output is then:
stderring "Don't you ever disrespect the caterpillar"
stderring '\n'
Don't you ever disrespect the caterpillar
Not very mysterious anymore, right?
And of course stdout and stderr are separate streams, with their own buffers. (By default, when talking to a TTY, stdout is line-buffered, and stderr is unbuffered.) So the ordering isn't what you'd naively expect, but it makes sense. If you just add in flushes, the output turns into:
stderring "Don't you ever disrespect the caterpillar"
Don't you ever disrespect the caterpillarstderring '\n'
(with a blank line at the end).
For your bonus questions:
I tried to use inspect module to get the caller, maybe see who does the actual call to write but I get module, idk why :( is this obvious?
I'm assuming you did something like inspect.stack()[1].function? If so, the code you're inspecting is the top-level code in the module, so inspect shows it as a fake function named <module>.
Is there any way to debug a function beyond Python and go into the underlying C call?
Sure. Just run CPython itself under lldb, gdb, Microsoft's debugger, or whatever else you usually use for debugging binary programs. You can put breakpoints in the ceval loop or in a particular C API function or wherever you want. You may want to make a debug build of CPython (do ./configure --help to see the options) to make this even better.
Because, well, the main Python distribution is CPython, and if my understanding is correct, Python is just an API for the underlying C code.
Well, not quite. It's a compiler and a bytecode interpreter. That bytecode interpreter largely uses the same C API that's exposed for the extending/embedding interface, but the overlap isn't 100%; there are places where it deals with the structures below the C API level.
A call in Python eventually gets translated to a C call under the hood. So, for instance, I found out that print is defined as follows in C, but it's tough for me to understand what's going on there (because, erm, I don't know C), but maybe by going in with a debugger I could print stuff out, see what is what, and figure out at least the flow, if not everything. I'd very much like to understand what's going on under the hood in general instead of taking stuff for granted.
Yes, you can do that, but you will need to understand both C and the CPython API (e.g., things like how to find the C slot equivalent to __call__) to figure out where to put your breakpoints and start tracing.
And for cases like these, it's a lot easier to just write wrappers in Python and debug them in Python. For example:
import builtins

def print(*args, **kwargs):
    return builtins.print(*args, **kwargs)
Or, if you're worried about print being called in other modules, not just in yours, you can even shadow it in builtins:
builtins._print = builtins.print

def print(*args, **kwargs):
    return builtins._print(*args, **kwargs)

builtins.print = print
Now you can just use pdb to break on every call to print at the Python level, without worrying about the C.
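For instance (my addition, not part of the answer above), you could drop a programmatic breakpoint straight into the wrapper:

import builtins
import pdb

builtins._print = builtins.print

def print(*args, **kwargs):
    pdb.set_trace()  # drops into the debugger every time anything calls print()
    return builtins._print(*args, **kwargs)

builtins.print = print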
And of course you can even debug this code in PyPy or Jython or whatever to see if it's any different from CPython above the "builtin" level.
You get the result you see because builtin_print() calls PyFile_Write*() twice, once in order to print the argument, and again to print the EOL. They are out of order because by default stderr is unbuffered and stdout is line-buffered.
I have one script that calls another - let's call them master.py and fetch.py. I suppose the second script could be integrated into the first, but it does have distinct functionality - so keeping them separate seems like a good way to force myself to learn how to call outside scripts.
Here's the basic structure of fetch.py:
<import block>

infiles = <paths>
arcpy.env.workspace = os.path.dirname(infile)
ws = arcpy.env.workspace
newfile_list = []

def main():
    name = <name>
    if not arcpy.Exists(name + ".gdb"):
        global ws
        new_gdb = DM.CreateFileGDB(ws, name + ".gdb")
        newfile_list.append(new_gdb)
    other_func1()
    other_func2()
    print "\nNew files from fetch.py:"
    for i in newfile_list:
        print "  " + i

def other_func1():
    stuff

def other_func2():
    stuff

if __name__ == '__main__':
    main()
And master.py:
<import block>

infiles = <paths>

def f1():
    stuff

def f2():
    stuff

import fetch
fetch.main()
f1()
f2()
The problems, concerning placement of the import block & file definitions of fetch.py:
When I put them inside main() and run it as a standalone script, my arcpy functions don't work because my various imports haven't run yet. Putting them ahead of main() and making them global solves this problem.
But when I put them outside of main() as you see here, I get an error saying the local variable ws is referenced before assignment. I think this may have to do with my calling fetch.main(), where the initial lines don't get read. (I was able to make it work by declaring global ws, but don't know if this is advisable.)
How do I structure fetch.py so that the import statements and file definitions get read, both when run as a standalone script and when called?
Instead of rewriting your code for you, I will try to point you to some ideas from the developer philosophy toolkit to get you on your way to refining your approach.
My feeling is that you should contemplate YAGNI and KISS when thinking about your code.
In your comment you write:
I'm keeping the 'fetch' script separate because finding feature orientation seems like a useful stand-alone tool down the road.
Like I say: YAGNI and KISS: If you need it as a standalone tool in the future then split it out in the future. For now keep it simple and put the code that belongs together in one module and make it callable exactly the way you need it. When you work with it and understand the problem better and see what else is needed, you can then add other ways of calling your script. There is no reason why you shouldn't keep all the code in one module while it is still manageable and just add different ways to call it.
If the time comes that you want to organize your code into several modules you can refactor it in a way that you pull out the things you do on the module level at the moment (either into functions or maybe even classes). Then you can control exactly what should happen when and you can control the order of things better.
As further reading in regards to structuring a Python project I would recommend the pypa sample project and Starting A Python Project The Right Way. The problems you have with imports will likely cease to exist if you structure your code and your project in a sensible, pythonic way.
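Purely as an illustration of that last idea (this is a sketch, not a rewrite of the actual scripts; DM is assumed to be an alias for arcpy.management as in the question, and the parameter names are hypothetical), pulling the module-level setup into a function that receives its inputs might look like:

# fetch.py (sketch)
import os
import sys
import arcpy
from arcpy import management as DM

def main(infile, name):
    """Create <name>.gdb next to infile and report any new files."""
    ws = os.path.dirname(infile)
    arcpy.env.workspace = ws
    newfile_list = []
    if not arcpy.Exists(name + ".gdb"):
        newfile_list.append(DM.CreateFileGDB(ws, name + ".gdb"))
    print "\nNew files from fetch.py:"
    for f in newfile_list:
        print "  " + str(f)
    return newfile_list

if __name__ == '__main__':
    # Only runs when fetch.py is executed directly; master.py can
    # instead call fetch.main(infile, name) with its own values.
    main(sys.argv[1], sys.argv[2])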
I'm trying to test a Python script (2.7) where I work with standard input (read with raw_input() and written with a simple print), but I can't find how to do this, and I'm sure this issue is very simple.
This is a very, very condensed version of my script:
def example():
    number = raw_input()
    print number

if __name__ == '__main__':
    example()
I want to write a unittest test to check this, but I can't find how. I've tried with StringIO and other things, but I can't find a really simple solution.
Does somebody have an idea?
PS: Of course, in the real script I use data blocks with several lines and other kinds of data.
Thank you so much.
EDIT:
Thank you so much for the first really specific answer, it works perfectly; only I've had a little problem importing StringIO, because I was doing import StringIO and I needed to import it like from StringIO import StringIO (I don't really understand why), but be that as it may, it works.
But I've found another problem with this approach: in my project I need to test scripts this way (which works perfectly thanks to your support), but I want to do this:
I have a file with a lot of tests to run against a script, so I open the file and read blocks of input along with their expected result blocks, and I would like the code to process one block, check its result, and then do the same with the next block, and so on...
Something like this:
class Test(unittest.TestCase):
    ...
    # open file and process, saving data like datablocks and results
    ...
    allTest = True
    for test in tests:
        stub_stdin(self, test.dataBlock)
        stub_stdouts(self)
        runScrip()
        if sys.stdout.getvalue() != test.expectResult:
            allTest = False
    self.assertEqual(allTest, True)
I know that maybe unittest doesn't make sense used like this, but it should give you an idea of what I want. So, this way fails and I don't know why.
Typical techniques involve mocking the standard sys.stdin and sys.stdout with your desired items. If you do not care about Python 3 compatibility you can just use the StringIO module; however, if you want to be forward-thinking and are willing to restrict yourself to Python 2.7 and 3.3+, supporting both Python 2 and 3 this way becomes possible without too much work through the io module (it requires a bit of modification, but put that thought on hold for now).
Assuming you already have a unittest.TestCase going, you can create a utility function (or method in the same class) that will replace sys.stdin/sys.stdout as outlined. First the imports:
import sys
import io
import unittest
In one of my recent projects I've done this for stdin, where it takes a str for the inputs that the user (or another program through pipes) will enter into yours as stdin:
def stub_stdin(testcase_inst, inputs):
    stdin = sys.stdin
    def cleanup():
        sys.stdin = stdin
    testcase_inst.addCleanup(cleanup)
    sys.stdin = StringIO(inputs)
As for stdout and stderr:
def stub_stdouts(testcase_inst):
    stderr = sys.stderr
    stdout = sys.stdout
    def cleanup():
        sys.stderr = stderr
        sys.stdout = stdout
    testcase_inst.addCleanup(cleanup)
    sys.stderr = StringIO()
    sys.stdout = StringIO()
Note that in both cases, the function accepts a testcase instance and calls its addCleanup method, registering a cleanup function that resets everything back to what it was once the test method concludes. The effect is that, from the moment this is invoked in the test case until the end, sys.stdout and friends will be replaced with the io.StringIO version, meaning you can check its value easily and don't have to worry about leaving a mess behind.
Better to show this as an example. To use this, you can simply create a test case like so:
class ExampleTestCase(unittest.TestCase):
    def test_example(self):
        stub_stdin(self, '42')
        stub_stdouts(self)
        example()
        self.assertEqual(sys.stdout.getvalue(), '42\n')
Now, in Python 2, this test will only pass if the StringIO class is from the StringIO module, and in Python 3 no such module exists. What you can do is use the version from the io module with a modification that makes it slightly more lenient in terms of what input it accepts, so that the unicode encoding/decoding will be done automatically rather than triggering an exception (for instance, print statements in Python 2 will not work nicely without the following). I typically do this for cross-compatibility between Python 2 and 3:
class StringIO(io.StringIO):
    """
    A "safely" wrapped version
    """

    def __init__(self, value=''):
        value = value.encode('utf8', 'backslashreplace').decode('utf8')
        io.StringIO.__init__(self, value)

    def write(self, msg):
        io.StringIO.write(self, msg.encode(
            'utf8', 'backslashreplace').decode('utf8'))
Now plug your example function plus every code fragment in this answer into one file, and you will get a self-contained unittest that works in both Python 2 and 3 (although you need to call print as a function in Python 3) for testing against stdio.
One more note: you can always put the stub_ function calls in the setUp method of the TestCase if every single test method requires that.
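For example, a small sketch reusing the stub_ helpers and the example function from above:

class StubbedTestCase(unittest.TestCase):

    def setUp(self):
        # Every test gets fresh fake stdout/stderr; addCleanup restores
        # the real ones automatically after each test method finishes.
        stub_stdouts(self)

    def test_example(self):
        stub_stdin(self, '42')
        example()
        self.assertEqual(sys.stdout.getvalue(), '42\n')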
Of course, if you want to use one of the various mock-related libraries out there to stub out stdin/stdout, you are free to do so, but this way relies on no external dependencies, if that is your goal.
For your second issue: test cases have to be written in a certain way; they must be encapsulated within a method and not sit at the class level, so your original example will fail. However, you might want to do something like this:
class Test(unittest.TestCase):

    def helper(self, data, answer, runner):
        stub_stdin(self, data)
        stub_stdouts(self)
        runner()
        self.assertEqual(sys.stdout.getvalue(), answer)
        self.doCleanups()  # optional, see comments below

    def test_various_inputs(self):
        data_and_answers = [
            ('hello', 'HELLOhello'),
            ('goodbye', 'GOODBYEgoodbye'),
        ]
        runScript = upperlower  # the function I want to test
        for data, answer in data_and_answers:
            self.helper(data, answer, runScript)
The reason why you might want to call doCleanups is to prevent the cleanup stack from getting as deep as there are data_and_answers pairs. However, it pops everything off the cleanup stack, so if you had anything else that needs to be cleaned up at the end, this might end up being problematic. You are free to leave it out, as all of the stdio-related objects will be restored at the end in the same order, so the real ones will always come back. Now, the function I wanted to test:
def upperlower():
    raw = raw_input()
    print (raw.upper() + raw),
So yes, a bit of explanation for what I did might help. Remember that within a TestCase class, the framework relies strictly on the instance's assertEqual and friends to function. To ensure testing is done at the right level, you really want to call those asserts all the time, so that helpful error messages will be shown the moment an error occurs, with the inputs/answers that didn't quite come out right, rather than only at the very end like with your for loop (which will tell you something was wrong, but not exactly which of the hundreds of cases, and now you are mad). Also, the helper method: you can call it anything you want, as long as it doesn't start with test, because then the framework would try to run it as a test and it would fail terribly. So just follow this convention and you can basically have templates within your test case to run your tests with; you can then use it in a loop with a bunch of inputs/outputs like I did.
As for your other question:
only I've had a little problem importing StringIO, because I was doing import StringIO and I needed to import it like from StringIO import StringIO (I don't really understand why), but be that as it may, it works.
Well, if you look at my original code, I did show you how I imported io and then overrode the io.StringIO class by defining class StringIO(io.StringIO). However, it works for you because you are doing this strictly from Python 2, whereas I do try to target my answers to Python 3 whenever possible, given that Python 2 will (probably definitely this time) not be supported in less than 5 years. Think of the future users that might be reading this post who have a similar problem as you. Anyway, yes, the original from StringIO import StringIO works, as that's the StringIO class from the StringIO module. from cStringIO import StringIO should also work, as that imports the C version of the StringIO module. They work because they all offer close enough interfaces, and so they will basically behave as intended (until of course you try to run this under Python 3).
Again, putting all this together along with my code should result in a self-contained working test script. Do remember to look at documentation and follow the form of the code, rather than inventing your own syntax and hoping things work. (As for exactly why your code didn't work: the "test" code was defined where the class was being constructed, so all of it was executed while Python was importing your module, and since none of the things needed for the test to run were even available yet (namely, the class itself didn't even exist), the whole thing just dies in fits of twitching agony.) Asking questions here helps too; even though the issue you face is really common, not having a quick and simple name to search for your exact problem does make it difficult to figure out where you went wrong, I suppose? :) Anyway, good luck, and good on you for taking the effort to test your code.
There are other methods, but given that the other questions/answers I looked at here on SO didn't seem to help, I hope this one does. Other ones for reference:
How to supply stdin, files and environment variable inputs to Python unit tests?
python mocking raw input in unittests
Naturally, it bears repeating that all of this can be done using unittest.mock, available in Python 3.3+, or the original/rolling backport version on PyPI, but given that those libraries hide some of the intricacies, they may end up obscuring some of the details of what actually happens (or needs to happen) or how the redirection actually happens. If you want, you can read up on unittest.mock.patch and go down slightly to the section on patching sys.stdout with StringIO.
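For completeness, a rough sketch of the mock-based equivalent (reusing the StringIO wrapper defined earlier so it behaves the same on Python 2; mock is unittest.mock on Python 3.3+ or the mock backport on PyPI):

try:
    from unittest import mock
except ImportError:
    import mock  # the backport, for Python 2

class MockedExampleTestCase(unittest.TestCase):
    def test_example(self):
        with mock.patch('sys.stdin', StringIO('42')), \
             mock.patch('sys.stdout', StringIO()):
            example()
            # sys.stdout is still the patched StringIO inside the block
            self.assertEqual(sys.stdout.getvalue(), '42\n')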
I'm just finding out now that when importing a module, it seems to run through ALL the code, instead of just the one function that I want it to go through. I've been trying to find a way around this, but can't seem to get it. Here is what is happening.
# mainfile.py
from elsewhere import something_else

number = 0

def main():
    print('What do you want to do? 1 - something else')
    donow = input()
    if donow == '1':
        something_else()

while 1:
    main()
# elsewhere.py
print('I dont know why this prints')

def something_else():
    from mainfile import number
    print('the variable number is', number)
Now, although this code KIND OF works the way I want it to, the first time when I initiate it, it will go to the main menu twice. For example: I start the program, press one, then it asks me what I want to do again. If I press one again, then it will print "the variable number is 0".
Once I get this working, I would like to be importing a lot of variables back and forth. The only issue is, if I add more import statements to "elsewhere.py", I think it will just initiate the program more and more. If I put "from mainfile import number" on line 1 of "elsewhere.py", I think this raises an error. Are there any workarounds to this? Can I make a different file? What if I made a class to store variables, if that is possible? I'm very new to programming, so I would appreciate it if the answers are easy to read for beginners. Thank you for the help.
As Jan notes, that's what import does. When you run import, it runs all of the code in the module. You might think: no it doesn't! What about the code inside something_else? That doesn't get run! Right, when the def statement is executed it creates a new function, but it doesn't run it. Basically, it saves the code for later.
The solution is that pretty much all interesting code should be in a function. There are a few cases which make sense to put at the top-level, but if in doubt, put it inside a function. In your particular case, you shouldn't be printing at the top level, if you need to print for some reason, put that into a function and call it when you need it. If you care when something happens, put it in a function.
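For instance, a sketch of elsewhere.py with nothing interesting left at the top level (the announce name is made up for illustration):

# elsewhere.py (sketch)
def announce():
    # Nothing runs at import time except the def statements themselves.
    print('I only print when somebody calls announce()')

def something_else():
    print('doing something else')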
On a second note, don't import your primary script in other scripts. I.e., if you run mainfile.py directly, don't import it in other files. You can, but it produces confusing results, and it's really best to pretend that it doesn't work.
Don't try to import variables back and forth. Down that path lies only misery. You should only be importing things that don't change: functions, classes, etc. In any other case, you'll have a hard time making it do what you want.
If you want to move variables between places, you have other options:
Pass function arguments
Return values from a function
Use classes
I'll leave it as an exercise to the reader to learn how to do those things.
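As a tiny nudge in that direction (a sketch with made-up values, not a full solution), passing data in as arguments and getting it back as return values looks like this:

# elsewhere.py (sketch)
def something_else(number):
    print('the variable number is', number)
    return number + 1  # hand a new value back instead of sharing a global


# mainfile.py (sketch)
from elsewhere import something_else

number = 0
number = something_else(number)  # prints 0; number is now 1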
import executes imported code
import simply takes the Python source file and executes it. This is why it prints: that instruction is in the code, and with import all the instructions get executed.
To prevent execution of part of an imported package/module, you should use the famous:
if __name__ == "__main__":
    print("I do not print with `import`")
Note, that this behaviour is not new in Python 3, it works the same way in Python 2.x too.
Sometimes I have a lot of prints scattered around a function to produce debug output.
To switch these debug outputs on and off I came up with this:
def f(debug=False):
    print = __builtins__.print if debug else lambda *p: None
Or, if I need to print something apart from debug messages, I create a dprint function for the debug messages.
The problem is that when debug=False, these print calls slow down the code considerably, because lambda *p: None is still called, and function invocations are known to be slow.
So, my question is: is there any better way to efficiently disable all these debug prints so that they don't affect the code's performance?
All the answers are about my not using the logging module. That is good to note, but it doesn't answer the question of how to avoid the function invocations that slow down the code considerably, in my case by 25 times (if that's even possible, for example by tinkering with the function's code object to throw away all the lines with print statements, or in some other way). What those answers suggest is replacing print with logging.debug, which should be even slower. And this question is about getting rid of those function calls completely.
I tried using logging instead of lambda *p: None, and to no surprise, the code became even slower.
Maybe someone would like to see the code where those prints caused a 25x slowdown: http://ideone.com/n5PGu
And I don't have anything against the logging module. I think it's good practice to always stick to robust solutions without hacks. But I think there is nothing criminal about using those hacks in a 20-line one-time code snippet.
Not as a restriction, but as a suggestion: maybe it's possible to delete some lines (e.g. those starting with print) from the function's source code and recompile it? I laid out this approach in an answer below. Though I would like to see some comments on that solution, I welcome other approaches to solving this problem.
You should use the logging module instead. See http://docs.python.org/library/logging.html
Then you can set the log level depending on your needs, and create multiple logger objects, that log about different subjects.
import logging
#set your log level
logging.basicConfig(level=logging.DEBUG)
logging.debug('This is a log message')
In your case: you could simply replace your print statement with a log statement, e.g.:
import logging
print = __builtins__.print if debug else logging.debug
now the function will only print anything if you set the logging level to DEBUG:
logging.basicConfig(level=logging.DEBUG)
But as a plus, you can use all other logging features on top! logging.error('error!')
Ned Batchelder wrote in the comment:
I suspect the slow down is in the calculation of the arguments to
your debug function. You should be looking for ways to avoid those
calculations. Preprocessing Python is just a distraction.
And he is right: the slowdown is actually caused by formatting the string with the format method, which happens regardless of whether the resulting string will be logged or not.
So, string formatting should be deferred and skipped if no logging will occur. This may be achieved by refactoring the dprint function or by using log.debug in the following way:
log.debug('formatted message: %s', interpolated_value)
If the message won't be logged, it won't be formatted, unlike with print, where the string is always formatted regardless of whether it'll be logged or discarded.
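To make the difference concrete (my own minimal example):

import logging

logging.basicConfig(level=logging.INFO)  # DEBUG records are filtered out
log = logging.getLogger(__name__)

value = 42

# Eager: the string is built even though the record is then discarded.
log.debug('value is {}'.format(value))

# Deferred: %-interpolation only happens if the record is actually emitted,
# so nothing is formatted here at all.
log.debug('value is %s', value)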
The solution of using log.debug's postponed formatting was given by Martijn Pieters here.
Another solution could be to dynamically edit the code of f and delete all dprint calls. But using this solution is strongly discouraged:
You are correct, you should never resort to this; there are so many ways it can go wrong. First, Python is not a language designed for source-level transformations, and it's hard to write a transformer like this without gratuitously breaking valid code. Second, this hack would break in all kinds of circumstances: for example, when defining methods, when defining nested functions, when used in Cython, or when inspect.getsource fails for whatever reason. Python is dynamic enough that you really don't need this kind of hack to customize its behavior.
Here is the code of this approach, for those who like to get acquainted with it:
from __future__ import print_function

DEBUG = False

def dprint(*args, **kwargs):
    '''Debug print'''
    print(*args, **kwargs)

_blocked = False
def nodebug(name='dprint'):
    '''Decorator to remove all calls to the function 'name' that are stand-alone expressions'''
    def helper(f):
        global _blocked
        if _blocked:
            return f
        import inspect, ast, sys
        source = inspect.getsource(f)
        a = ast.parse(source)  # get the ast tree of f
        class Transformer(ast.NodeTransformer):
            '''Deletes all expressions that are calls to 'name' at the top level'''
            def visit_Expr(self, node):  # visit all expression statements
                try:
                    if node.value.func.id == name:  # the expression is a call to the named function
                        return None  # delete it
                except AttributeError:  # not a plain function call; leave it alone
                    pass
                return node  # return the node unchanged
        transformer = Transformer()
        a_new = transformer.visit(a)
        f_new_compiled = compile(a_new, '<string>', 'exec')
        env = sys.modules[f.__module__].__dict__
        _blocked = True
        try:
            exec(f_new_compiled, env)
        finally:
            _blocked = False
        return env[f.__name__]
    return helper

@nodebug('dprint')
def f():
    dprint('f() started')
    print('Important output')
    dprint('f() ended')
    print('Important output2')

f()
More information: Replacing parts of the function code on-the-fly
As a hack, yes, that works. (And there is no chance in hell those lambda no-ops are your app's bottleneck.)
However, you really should be doing logging properly by using the logging module.
See http://docs.python.org/howto/logging.html#logging-basic-tutorial for a basic example of how this should be done.
You definitely need to use the logging module of Python, it's very practical and you can change the log level of your application. Example:
>>> import logging
>>> logging.basicConfig(level=logging.DEBUG)
>>> logging.debug('Test.')
DEBUG:root:Test.