Python - scope of wrapping functions [duplicate] - python

This question already has answers here:
Counting python method calls within another method
(3 answers)
Closed 9 years ago.
The goal is to wrap a function or method and carry data around with the wrapper that's unique to the wrapped function.
As an example - let's say I have object myThing with method foo. I want to wrap myThing.foo with myWrapper, and (as an example) I want to be able to count the number of times myThing.foo is actually called.
So far, the only method I've found to be effective is to just add an attribute to the object -- but this feels a little bit clumsy.
class myThing(object):
    def foo(self):
        return "Foo called."

def myWrap(some_func):
    def _inner(self):
        # a wild kludge appears!
        try:
            self.count += 1
        except AttributeError:
            self.count = 0
        return some_func(self)
    return _inner

Stick = myThing()
myThing.foo = myWrap(myThing.foo)
for i in range(0, 10):
    Stick.foo()  # outputs "Foo called." 10 times
Stick.count  # the value of Stick.count
So, this achieves the goal, and in fact if there are multiple instances of myThing then each one tracks its own self.count value, which is part of my intended goal. However, I am not certain that adding an attribute to each instance of myThing is the best way to achieve this. If, for example, I were to write a wrapper for a function that wasn't part of an object or class, there would be no instance to attach the attribute to.
Maybe there is a hole in my understanding of what's actually happening when a method or function is wrapped. I do know that one can maintain some kind of static data within a closure, as with the following example:
def limit(max_value):
    def compare(x):
        return x > max_value
    return compare

isOverLimit = limit(30)
isOverLimit(45)  # returns True
isOverLimit(12)  # returns False

alsoOver = limit(20)
alsoOver(25)  # returns True
isOverLimit(25)  # returns False
The second example shows that creating alsoOver does not modify the closure behind isOverLimit: isOverLimit keeps comparing against 30 even after alsoOver is created. So I get the sense that there's a way for the wrapper to carry an incremental variable around with it, and that I'm just missing something obvious.
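A closure can indeed carry mutable state of its own. As a minimal sketch (the names counted and call_count are illustrative, not from the original post), a wrapper can keep its count in a variable of the enclosing scope and rebind it with nonlocal (Python 3):

```python
def counted(some_func):
    count = 0  # lives in the closure; one counter per wrapped function

    def _inner(*args, **kwargs):
        nonlocal count  # rebind the enclosing variable instead of a local
        count += 1
        return some_func(*args, **kwargs)

    _inner.call_count = lambda: count  # expose the hidden counter
    return _inner

@counted
def greet():
    return "Foo called."

for _ in range(10):
    greet()
print(greet.call_count())  # → 10
```

Because the counter lives in the closure rather than on the wrapped object, this works equally well for plain functions and for methods.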

Seems like this is a dupe of Counting python method calls within another method
The short answer is to use a decorator on the method/function you want to count, and have the decorator store the counter as a function attribute. See the answers in the question I linked.
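That linked approach might be sketched like this (counting and MyThing are illustrative names); note that because the counter is stored on the function object, it is shared across all instances, unlike the per-instance self.count in the question:

```python
import functools

def counting(func):
    """Decorator that keeps the call count as a function attribute."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        wrapper.count += 1
        return func(*args, **kwargs)
    wrapper.count = 0  # initialised once, when the function is decorated
    return wrapper

class MyThing(object):
    @counting
    def foo(self):
        return "Foo called."

thing = MyThing()
for _ in range(10):
    thing.foo()
print(thing.foo.count)  # → 10
```

Attribute lookup on a bound method falls through to the underlying function, which is why thing.foo.count reads the counter set on wrapper.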

Related

when do we initialise a function call within the function vs as an argument?

I have a question about arguments in functions, in particular initialising an array or other data structure within the function call, like the following:
def helper(root, result=[]):
    ...
My question is, what is the difference between the above vs. doing:
def helper(root):
    result = []
I can see why this would be necessary if we were to run recursions, i.e. we would need to use the first case in some instances.
But are there any other instances, and am I right in saying it is necessary in some cases for recursion, or can we always use the latter instead?
Thanks
Default argument values are evaluated once, when the function is defined, so using a list or any other mutable object as a default value is a bad idea.
The idiomatic way of doing it is like this:
def helper(root, result=None):
    if result is None:
        result = []
Now if you only pass one argument to the function, result will be a fresh empty list.
If you put the list in the function definition itself, result won't reset between calls: it will keep the values from previous calls.
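To see the difference concretely, here is a small sketch (bad_helper and good_helper are illustrative names) contrasting the mutable default with the None-sentinel pattern:

```python
def bad_helper(root, result=[]):  # ONE list, created at definition time
    result.append(root)
    return result

def good_helper(root, result=None):
    if result is None:  # sentinel check: build a fresh list per call
        result = []
    result.append(root)
    return result

print(bad_helper(1))   # → [1]
print(bad_helper(2))   # → [1, 2]  (the list survived between calls)
print(good_helper(1))  # → [1]
print(good_helper(2))  # → [2]     (a fresh list each call)
```

The recursion case works with either form, since recursive calls pass result explicitly; only calls that omit the argument hit the shared-default trap.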

A new way to keep track of global variables inside a recursive function in Python?

So I came across a recursive solution to a problem that keeps track of a global variable differently than I've seen before. I am aware of two ways:
One being by using the global keyword:
count = 0

def global_rec(counter):
    global count
    count += 1
    # do stuff
    print(count)
And another using default variables:
def variable_recursive(counter, count=0):
    count += 1
    if counter <= 0:
        return count
    return variable_recursive(counter-1, count)
The new way:
# driver function
def driver(counter):
    # recursive function being called here
    rec_utility.result = 0  # <-- initializing
    rec_utility(counter)    # <-- calling the recursive function
    print(rec_utility.result)

def rec_utility(counter):
    if counter <= 0:
        return
    rec_utility.result += 1  # <-- 'what is happening here'
    rec_utility(counter-1)
I find this way a lot simpler: with the default-variable method we have to return the variables we want to keep track of, and the code gets really messy really fast. Can someone please explain why attaching a variable to a function, like an object attribute, works? I understand that Python functions are objects, but is this a hacky way of keeping track of variables, or is it common practice? If so, why do we have so many ways to achieve the same task? Thanks!
This isn't as magical as you might think. It might be poor practice.
rec_utility is just a variable in your namespace which happens to be a function. dir() will show it listed when it is in scope. As an object it can have new fields set. dir(rec_utility) will show these new fields, along with __code__ and others.
Like any object, you can set a new field value, as you are doing in your code. There is only one rec_utility function, even though you call it recursively, so it's the same field when you initialize it and when you modify it.
Once you understand it, you can decide if it is a good idea. It might be less confusing or error prone to use a parameter.
In some sense, this question has nothing to do with recursive functions. Suppose a function requires an item of information to operate correctly, then do you:
provide it via a global; or
pass it in as a parameter; or
set it as a function attribute prior to calling it.
In the final case, it’s worth considering that it is not entirely robust:
def f():
    # f is not this function!!
    return f.x + 1

f.x = 100
for f in range(10): pass
Generally, we would consider the second option the best one. There’s nothing special really about its recursive nature, other than the need to provide state, which is information, to the next invocation.
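If you like the driver/helper split from the question but want to avoid function attributes, a closure over a local variable gives each call to the driver its own independent counter. A sketch (driver and rec are illustrative names):

```python
def driver(counter):
    # the helper closes over `result` instead of using a function
    # attribute, so concurrent or repeated calls can't clobber each other
    result = 0

    def rec(counter):
        nonlocal result  # rebind the driver's local (Python 3)
        if counter <= 0:
            return
        result += 1
        rec(counter - 1)

    rec(counter)
    return result

print(driver(5))  # → 5
```

This also avoids the rebinding hazard shown above: the state lives in the enclosing frame, not on a name that something else might reassign.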

passing object to object [duplicate]

This question already has answers here:
Can you explain closures (as they relate to Python)?
(13 answers)
Closed 6 years ago.
I am trying to understand the background of why the following works:
def part_string(items):
    if len(items) == 1:
        item = items[0]
        def g(obj):
            return obj[item]
    else:
        def g(obj):
            return tuple(obj[item] for item in items)
    return g

my_indexes = (2, 1)
my_string = 'ABCDEFG'
function_instance = part_string(my_indexes)
print(function_instance(my_string))
# also works: print(part_string(my_indexes)(my_string))
how come I can pass my_string to function_instance even though I already passed my_indexes to part_string() when creating function_instance? Why does Python accept my_string implicitly?
I guess it has something to do with the following, so more questions here:
what is obj in g(obj)? can it be named something else, e.g. g(stuff) (like with self, which is just a convention)?
what if I want to pass 2 objects to function_instance? how do I refer to them in g(obj)?
Can you recommend some reading on this?
What you're encountering is a closure.
When you write part_string(my_indexes) you create a new function; when you later call it, it uses the variables you originally gave to part_string together with the new arguments given to function_instance.
You may name the inner function's parameter whatever you want; there is no convention. (obj is used here, but it could just as well be pie. The only loose convention is func for closures that wrap functions, i.e. decorators.)
If you wish to pass two variables to the function, you may define two variables to the g(obj) function:
def g(var1, var2):
    ...
Here's some more info regarding closures in python.

Parentheses in Python's functions and decorators(wrappers)

Thanks for reading my question. As I'm still new to Python, I would like to ask about the () in Python.
def addOne(myFunc):
    def addOneInside():
        return myFunc() + 1
    return addOneInside  # <-----here is the question

@addOne
def oldFunc():
    return 3

print oldFunc()
Please note that on line four, although the programme returns a function, it does not need parentheses (). Why does it NOT turn out to be a syntax error? Thank you very much for your answers in advance!
The parentheses are used to run a function, but without them the name still refers to the function just like a variable.
return myFunc() + 1
This will evaluate the myFunc function, add 1 to its value and then return that value. The brackets are needed in order to get the function to run and return a numeric value.
return addOneInside
This is not actually running addOneInside, it is merely returning the function as a variable. You could assign this to another name and store it for later use. You could theoretically do this:
plusOne = addOneInside
plusOne()
And it will actually call the addOneInside function.
The particular instance in your initial question is known as a Decorator, and it's a way for you to perform code on the parameters being passed to your function. Your example is not very practical, but I can modify it to show a simple use case.
Let's say that you want to only have positive numbers passed to your function. If myFunc is passed a negative number, you want it to be changed to 0. You can manage this with a decorator like this.
def addOne(myFunc):
    def addOneInside(num):
        if num < 0:
            num = 0
        return myFunc(num)
    return addOneInside  # <-----here is the question

@addOne
def oldFunc(number):
    return number
To explain, @addOne is the decorator syntax: it wraps oldFunc with addOneInside, so the argument is processed whenever you call oldFunc. So now here's some sample output:
oldFunc(-12)
>>> 0
oldFunc(12)
>>> 12
So now you could add logic to oldFunc that operates independently of the parameter parsing logic. You could also relatively easily change what parameters are permitted. Maybe there's also a maximum cap to hit, or you want it to log or note that the value shouldn't be negative. You can also apply this decorator to multiple functions and it will perform the same on all of them.
This blogpost explained a lot for me, so if this information is too brief to be clear, try reading the long detailed explanation there.
Your indentation in function addOne() was incorrect (I have fixed it), but I don't think that this was your problem.
If you are using Python3, then print is a function and must be called like this:
print(oldFunc())

Type checking of arguments Python [duplicate]

This question already has answers here:
What is the best (idiomatic) way to check the type of a Python variable? [duplicate]
(10 answers)
Closed 5 years ago.
Sometimes checking of arguments in Python is necessary. e.g. I have a function which accepts either the address of another node in the network as a raw string, or an instance of class Node which encapsulates the other node's information.
I use type() function as in:
if type(n) == type(Node):
    # do this
elif type(n) == type(str):
    # do this
Is this a good way to do this?
Update 1: Python 3 has annotations for function parameters. These can be used for type checks using tools such as http://mypy-lang.org/
Use isinstance(). Sample:
if isinstance(n, unicode):
    # do this
elif isinstance(n, Node):
    # do that
...
>>> isinstance('a', str)
True
>>> isinstance(n, Node)
True
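isinstance also accepts a tuple of types, which is handy when one branch should cover several alternatives. A small sketch (Node and describe are illustrative names, not from the question):

```python
class Node(object):
    def __init__(self, address):
        self.address = address

def describe(n):
    if isinstance(n, Node):
        return "node"
    elif isinstance(n, (str, bytes)):  # tuple: match either type
        return "raw address"
    raise TypeError("expected Node or str, got %r" % type(n))

print(describe(Node("10.0.0.1")))  # → node
print(describe("10.0.0.1"))        # → raw address
```

Unlike the type(n) == ... comparison in the question, isinstance also matches subclasses, which is almost always what you want.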
Sounds like you're after a "generic function" - one which behaves differently based on the arguments given. It's a bit like how you'll get a different function when you call a method on a different object, but rather than just using the first argument (the object/self) to lookup the function you instead use all of the arguments.
Turbogears uses something like this for deciding how to convert objects to JSON - if I recall correctly.
There's an article from IBM on using the dispatcher package for this sort of thing:
From that article:
import dispatch

@dispatch.generic()
def doIt(foo, other):
    "Base generic function of 'doIt()'"

@doIt.when("isinstance(foo,int) and isinstance(other,str)")
def doIt(foo, other):
    print "foo is an unrestricted int |", foo, other

@doIt.when("isinstance(foo,str) and isinstance(other,int)")
def doIt(foo, other):
    print "foo is str, other an int |", foo, other

@doIt.when("isinstance(foo,int) and 3<=foo<=17 and isinstance(other,str)")
def doIt(foo, other):
    print "foo is between 3 and 17 |", foo, other

@doIt.when("isinstance(foo,int) and 0<=foo<=1000 and isinstance(other,str)")
def doIt(foo, other):
    print "foo is between 0 and 1000 |", foo, other
You can also use a try catch to type check if necessary:
def my_function(this_node):
    try:
        # call a method/attribute of the Node object
        if this_node.address:
            # more code here
            pass
    except AttributeError:
        # either this is not a Node or maybe it's a string,
        # so behave accordingly
        pass
You can see an example of this in Beginning Python in the section about generators (page 197 in my edition) and I believe in the Python Cookbook. Many times catching an AttributeError or TypeError is simpler and apparently faster. It may also work best in this manner because then you are not tied to a particular inheritance tree (e.g., your object could be a Node, or it could be some other object that has the same behavior as a Node).
No, typechecking arguments in Python is not necessary. It is never necessary.
If your code accepts addresses as raw strings or as Node objects, your design is broken. That comes from the fact that if you don't already know the type of an object in your own program, then you're doing something wrong already.
Typechecking hurts code reuse and reduces performance. A function that does different things depending on the type of the object passed is bug-prone, and its behavior is harder to understand and maintain.
You have the following saner options:
Make a Node constructor that accepts raw strings, or a function that converts strings into Node objects. Make your function assume the argument passed is a Node object. That way, if you need to pass a string to the function, you just do:
myfunction(Node(some_string))
That's your best option: it is clean, easy to understand and maintain. Anyone reading the code immediately understands what is happening, and you don't have to typecheck.
Make two functions, one that accepts Node objects and one that accepts raw strings. You can make one call the other internally, in whichever way is most convenient (myfunction_str can create a Node object and call myfunction_node, or the other way around).
Give Node objects a __str__ method and, inside your function, call str() on the received argument. That way you always get a string by coercion.
In any case, don't typecheck. It is completely unnecessary and has only downsides. Refactor your code instead so that you don't need to typecheck. You only get benefits in doing so, both in the short and the long run.
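The first option above, converting at the boundary, might be sketched like this (Node and myfunction are illustrative names matching the answer's wording):

```python
class Node(object):
    """Wraps a raw string address (illustrative sketch)."""
    def __init__(self, raw_address):
        self.address = raw_address

def myfunction(node):
    # assumes its argument is always a Node -- no typechecking needed
    return "connecting to %s" % node.address

# callers convert raw strings at the boundary:
print(myfunction(Node("10.0.0.1")))  # → connecting to 10.0.0.1
```

The function body stays simple because the "is it a string or a Node?" question is answered once, at the call site, instead of on every call inside the function.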
