Dynamically Defining a Dummy Decorator - python

I am using a nice tool called line_profiler (https://github.com/rkern/line_profiler).
To use it, you need to put a @profile decorator at multiple places in the script to indicate which functions should be profiled. Then you execute the script via
kernprof -l script_to_profile.py
Obviously, when running the script by itself via python script_to_profile.py, the decorator is not defined and hence the script crashes.
I know how to define an identity decorator and I can pass a flag from the command line and define it in the main script depending on how the flag is set. However, I don't know how to pass the decorator definition (or the flag) to modules I load so they don't crash at the moment they are loaded. Any ideas?
def profile(func):
    return func

A very simple way would be to check if something named profile exists, and if it doesn't, then define it to be your identity decorator. Something like this:

try:
    profile
except NameError:
    def profile(func):
        return func
You could go a little further and make sure it's something callable (probably not necessary):

import typing

try:
    profile
except NameError:
    profile = None
if not isinstance(profile, typing.Callable):
    def profile(func):
        return func
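Since kernprof makes profile available globally when it runs the script, the main script can play the same trick once, installing a no-op profile into builtins before importing anything; imported modules then never crash. A minimal sketch (my_module is a hypothetical module that uses @profile):

import builtins

if not hasattr(builtins, "profile"):
    def profile(func):
        return func
    builtins.profile = profile  # visible in every module imported afterwards

import my_module  # hypothetical module whose functions use @profile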

Related

How to run fixture before mark.parametrize

Sample_test.py

@pytest.mark.parametrize(argnames="key", argvalues=ExcelUtils.getinputrows(__name__), scope="session")
def test_execute():
    # do something
    pass

conftest.py

@pytest.fixture(name='setup', autouse=True, scope="session")
def setup_test(pytestconfig):
    dict['environment'] = "QA"
As shown in the code above, I need to run the setup fixture before the test_execute method, because the getinputrows method requires the environment to read the sheet. Unfortunately, the parametrize decorator gets executed before setup_test. Is there any way this is possible?
You need to execute the parameter inside the test function, not in the decorator:

@pytest.mark.parametrize("key", [ExcelUtils.getinputrows], scope="session")
def test_execute(key):
    key(__name__)
    # do something

or bind __name__ to the function call beforehand, but again, call the function inside your test:

@pytest.mark.parametrize("key", [lambda: ExcelUtils.getinputrows(__name__)], scope="session")
def test_execute(key):
    key()
    # do something
Mind you, I don't fully understand what you're doing, so these examples may or may not make sense.
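If the rows really must be computed from configuration before collection, another option is pytest's pytest_generate_tests hook, which runs at collection time and can read the config, so the environment can be resolved before parametrization. A rough sketch, with getinputrows as a hypothetical stand-in for the ExcelUtils call:

# conftest.py (sketch)
def getinputrows(env):
    # hypothetical stand-in for ExcelUtils.getinputrows
    return [f"{env}-row-1", f"{env}-row-2"]

def pytest_generate_tests(metafunc):
    # runs at collection time, so there is no fixture-ordering problem
    if "key" in metafunc.fixturenames:
        env = "QA"  # could come from metafunc.config.getoption("--env")
        metafunc.parametrize("key", getinputrows(env))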

Make instance methods global in module?

I create a lib which can be imported and used as is, and there is also a start script, which creates a single instance of the main class for use within the script:
# read config, init stuff, and then create an instance
r = RepoClient(config)
Now, the start script accepts a 'command' argument, and as of now, it must be invoked like:
repo config.json -c 'r.ls()'
i.e. the 'r' variable must be used.
I would like to be able to drop the 'r' variable. For that, the start script, somehow, needs the ls function. I can do it by putting the following in the script, after the r instance is created:
ls = r.ls
etc. for all the commands the RepoClient class supports.
Is there anything automatic? The code below doesn't work, of course, but you get the idea:
from r import *
What I can think of is annotating methods with a custom @command decorator, iterating over all the methods, checking for it, and if found, setting the method as a script global; but hopefully the batteries do support something like this already ;d
EDIT: for now, the command passed as the last argument is run the following way:
exec(sys.argv[2])
No, there is no way to do that "automatically". However, you don't actually need to copy the items into the global namespace. You can pass a namespace to exec to use. You could give your RepoClient a __getitem__ method, allowing it to act as a namespace. Here's a simple example:
class Foo(object):
    def blah(self):
        print("Blah!")

    def __getitem__(self, attr):
        return getattr(self, attr)

f = Foo()
exec('blah()', globals(), f)
It outputs Blah!.
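Applied to the start script, it could look roughly like this (a sketch; RepoClient and config are the names from the question, with __getitem__ defined on RepoClient as above):

import sys

r = RepoClient(config)           # RepoClient defines __getitem__
exec(sys.argv[2], globals(), r)  # `repo config.json -c 'ls()'` now works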

How to make Python Decorator NOT run when imported

I've decorated a method in Python, and when I import the module that contains the method, the decorator runs automatically.
I realize that this is how decorators work; however, is there a way to have decorators NOT do this?
It sounds like what you want to do is to choose what decorator to apply at run time. Something like this might work:
to_decorate = []

def decorate_later(func):
    to_decorate.append(func)
    return func

@decorate_later
def do_stuff(*args, **kw):
    print('I am doing stuff')

@decorate_later
def do_more_stuff(*args, **kw):
    print('Even more stuff')

def apply_decorator(decorator):
    for func in to_decorate:
        # rebind under the original name so callers pick up the decorated version
        globals()[func.__name__] = decorator(func)
Then you can import the module and all the functions will be defined as normal. decorate_later returns the original function unmodified. You can call apply_decorator() to apply a specified decorator to all of the functions in the module that were registered by @decorate_later.
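For instance (a small usage sketch, assuming everything lives in one script):

def log_calls(fn):
    def wrapper(*args, **kw):
        print('calling', fn.__name__)
        return fn(*args, **kw)
    return wrapper

apply_decorator(log_calls)
do_stuff()  # prints 'calling do_stuff', then 'I am doing stuff'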
This is exactly what the venusian library does; you define your decorators according to their API, but the actual behavior isn't triggered until you do a "scan" of the containing module or package.
You don't even need to have a global app object to use venusian decorators; you can pass in the app object as part of the scan, and it'll get passed along to the decorator implementations. So, for example, the same functions can be shared among multiple owners with only a single decorator, just by doing more than one scan.
This is what the Pyramid web framework uses for e.g. event registration, so that merely importing a module doesn't require an app instance. A good example is their event subscriber.
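A rough sketch of the venusian pattern (registry and some_module are stand-ins; check the venusian docs for details):

import venusian

def my_decorator(wrapped):
    def callback(scanner, name, ob):
        # this runs at scan time, not at import time
        scanner.registry[name] = ob
    venusian.attach(wrapped, callback)
    return wrapped

@my_decorator
def handler():
    pass

# later, once an app/registry object actually exists:
scanner = venusian.Scanner(registry={})  # kwargs become scanner attributes
scanner.scan(some_module)                # now the callbacks fire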
Use

if __name__ == "__main__":
    # code

in the file, where "code" is everything at module level, outside a function or class (the part that runs when you import the module).
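For example (a minimal sketch):

# mymodule.py (hypothetical)
def setup():
    print("expensive setup")

if __name__ == "__main__":
    setup()  # runs when executed directly, not on `import mymodule`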

Is there a way to step into decorated functions, skipping decorator code

I have a module which decorates some key functions with custom decorators.
Debugging these functions with pdb often is a bit of a pain, because every time I step into a decorated function I first have to step through the decorator code itself.
I could of course just set the debugger to break within the function I'm interested in, but as key functions they are called many times from many places so I usually prefer to start debugging outside the function.
I tried to illustrate it with code, but I don't know if that helps:
import functools

def i_dont_care_about_this(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        return fn(*args, **kwargs)
    return wrapper

@i_dont_care_about_this
def i_only_care_about_this():
    # no use to set pdb here
    ...

def i_am_here():
    import pdb; pdb.set_trace()
    i_only_care_about_this()
So, is there a way for me to step into i_only_care_about_this from i_am_here without going through i_dont_care_about_this?
Essentially I want to skip all decorator code when using s to (s)tep into a given decorated function.
If the decorator is purely for logging or other non-functional behavior, then make it a no-op for debugging. Insert this code right after the definition of i_dont_care_about_this:

DEBUG = False
# uncomment this line when pdb'ing
# DEBUG = True

if DEBUG:
    i_dont_care_about_this = lambda fn: fn
But if it contains actual active code, then you will have to do the work using pdb methods, such as a conditional call to pdb.set_trace inside the decorated function:

BREAK_FLAG = False
...
# (inside the function you want to debug)
if BREAK_FLAG:
    import pdb; pdb.set_trace()
...
# at your critical calling point
BREAK_FLAG = True
I don't think you can do that. It would change the meaning of step to be something very different.
However, there is a way to achieve something similar to what you want. Set a breakpoint in your decorated function and one just before the decorated function is called. Now, disable the breakpoint inside the function.
Now, when you run the code, it will only break when you reach the specific invocation you care about. Once that break happens, re-enable the breakpoint in the function and continue the execution. This will execute all the decorated code and break on the first line of the decorated function.
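In pdb commands, that workflow looks roughly like this (file and line numbers are illustrative):

(Pdb) b mymodule:42   # breakpoint 1: first line of the decorated function
(Pdb) b mymodule:87   # breakpoint 2: just before the interesting call
(Pdb) disable 1
(Pdb) c               # run until the call site
(Pdb) enable 1
(Pdb) c               # breaks inside the function, past the decorator code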
TL;DR: Modify bdb.Bdb so that it adds the decorator's module name to the list of skipped code. This works with both pdb and ipdb, and possibly many others. Examples at the bottom.
From my own experiments with pdb.Pdb (the class that actually does the debugging in the case of pdb and ipdb), it seems perfectly doable without modifying either the code of the function you want to debug or the decorator.
Python debuggers have facilities that make it possible to skip some predefined code. After all, the debugger has to skip its own code to be of any use.
In fact, the base class for Python debuggers has something called the "skip argument". It's an argument to its __init__() that specifies what the debugger should ignore.
From the Python documentation:
The skip argument, if given, must be an iterable of glob-style module name patterns. The debugger will not step into frames that originate in a module that matches one of these patterns. Whether a frame is considered to originate in a certain module is determined by the __name__ in the frame globals.
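So when you control how the debugger is constructed, you can pass the pattern directly; the pdb docs give skip=['django.*'] as their example:

import pdb

# break here, but never step into frames from the decorator's module
pdb.Pdb(skip=['magic_decorator']).set_trace()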
The problem with this is that the plain set_trace() entry point constructs the debugger for us, so by the time it breaks we have already landed in the decorator's frame, and there is no facility for adding to that argument at runtime.
Fortunately, modifying existing code at runtime is easy in Python, and there are hacks we can use to add the decorator's module name whenever Bdb.__init__() is called. We can "decorate" the Bdb class so that our module is added to the skip list whenever someone creates a Bdb object.
Here is an example of just that. Please excuse the weird signature and the use of Bdb.__init__() instead of super(); we have to do it this way to stay compatible with pdb:
# magic_decorator.py
import bdb

old_bdb = bdb.Bdb

class DontDebugMeBdb(bdb.Bdb):
    @classmethod
    def __init__(cls, *args, **kwargs):
        if 'skip' not in kwargs or kwargs['skip'] is None:
            kwargs['skip'] = []
        kwargs['skip'].append(__name__)
        old_bdb.__init__(*args, **kwargs)

    @staticmethod
    def reset(*args, **kwargs):
        old_bdb.reset(*args, **kwargs)

bdb.Bdb = DontDebugMeBdb

def dont_debug_decorator(func):
    print("Decorating {}".format(func))
    def decorated():
        """IF YOU SEE THIS IN THE DEBUGGER - YOU LOST"""
        print("I'm decorated")
        return func()
    return decorated
# buged.py
from magic_decorator import dont_debug_decorator

@dont_debug_decorator
def debug_me():
    print("DEBUG ME")
Output of ipdb.runcall in IPython:

In [1]: import buged, ipdb
Decorating <function debug_me at 0x7f0edf80f9b0>

In [2]: ipdb.runcall(buged.debug_me)
I'm decorated
--Call--
> /home/mrmino/treewrite/buged.py(4)debug_me()
      3
----> 4 @dont_debug_decorator
      5 def debug_me():

ipdb>
With the following:
def my_decorator(fn):
    def wrapper(*args, **kwargs):
        return fn(*args, **kwargs)
    return wrapper

@my_decorator
def my_func():
    ...
I invoke pdb with import pdb; pdb.run('my_func()') which enters pdb here:
> <string>(1)<module>()
step to enter the call stack; we are now looking at the first line of the decorator's wrapper definition:

def my_decorator(fn):
->  def wrapper(*args, **kwargs):
        return fn(*args, **kwargs)
    return wrapper
next until pdb is pointing at the line where the wrapper calls the original function (this may take one next or several, depending on your code):

def my_decorator(fn):
    def wrapper(*args, **kwargs):
->      return fn(*args, **kwargs)
    return wrapper
step into the original function and voila! We are now at the point where we can next through our original function.

-> @my_decorator
   def my_func():
       ...
Not entirely an answer to the question, but for newcomers: if you are debugging a decorated function in VS Code and want to skip the decorator and step into the function itself, do the following:
place a breakpoint inside the function you decorated (in the function body)
call that function and start debugging
instead of clicking Step Over or Step Into, click Continue, and you will end up inside the function
continue debugging as usual
For example:

@some_decorator
def say_hello(name):
    x = f"Hello {name}"
    return x

hello = say_hello(name="John")

Place one breakpoint at the hello line and the second breakpoint at the x line inside the function.

Nose ignores test with custom decorator

I have some relatively complex integration tests in my Python code. I simplified them greatly with a custom decorator and I'm really happy with the result. Here's a simple example of what my decorator looks like:
def specialTest(fn):
    def wrapTest(self):
        # do some important stuff
        pass
    return wrapTest
Here's what a test may look like:

class Test_special_stuff(unittest.TestCase):
    @specialTest
    def test_something_special(self):
        pass
This works great and is executed by PyCharm's test runner without a problem. However, when I run a test from the command line using Nose, it skips any test with the @specialTest decorator.
I have tried naming the decorator testSpecial so that it matches the default rules, but then my fn parameter doesn't get passed.
How can I get Nose to execute those test methods and treat the decorator as it is intended?
SOLUTION
Thanks to madjar, I got this working by restructuring my code to look like this, using functools.wraps and changing the name of the wrapper:

from functools import wraps

def specialTest(fn):
    @wraps(fn)
    def test_wrapper(self, *args, **kwargs):
        # do some important stuff
        pass
    return test_wrapper

class Test_special_stuff(unittest.TestCase):
    @specialTest
    def test_something_special(self):
        pass
If I remember correctly, nose loads tests based on their names (functions whose names begin with test_). In the snippet you posted, you do not copy the __name__ attribute of the function to your wrapper function, so the name of the returned function is wrapTest and nose decides it's not a test.
An easy way to copy the attributes of the function to the new one is to use functools.wraps.
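A quick illustration of the name problem (sketch):

from functools import wraps

def plain(fn):
    def wrapTest(self):
        pass
    return wrapTest

def wrapped(fn):
    @wraps(fn)
    def wrapTest(self):
        pass
    return wrapTest

@plain
def test_a(self): pass

@wrapped
def test_b(self): pass

print(test_a.__name__)  # 'wrapTest' -> nose ignores it
print(test_b.__name__)  # 'test_b'   -> nose collects it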
