I found out that using
(int)('123')
works the same as using int('123').
I explored it a bit and noticed that it works with other functions too.
def add_10(num):
    return num + 10

print((add_10)(10))  # prints 20
And it also works with classes:
class MyClass(object):
    def __init__(self, x):
        self.x = x

print((MyClass)(10).x)  # prints 10
I have never seen this behaviour before. Is it used by anyone? Does it have a name? Where in the docs is it stated? Why do we have it?
It works in both Python 2.7 and Python 3.
Edit:
After further testing I noticed that the parentheses don't have any effect: ((((int))))('2') is the same as int('2').
Let's try to put this in other words: a function in Python is just an ordinary object.
Appending a pair of parentheses to an object's name, whatever it is, causes that object to be called - i.e., its __call__ method is invoked with the passed parameters.
So, a name in Python, whether it refers to a function or not, can be surrounded by parentheses. The parentheses are resolved first, as an expression: in (int)(5), (int) is processed as an expression which yields int, which happens to be a callable object.
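The parenthesized part can be any expression that yields a callable, not just a bare name. A small illustrative sketch:

(int if True else float)('3')   # 3 -- the conditional expression yields int
{'parse': int}['parse']('3')    # 3 -- so does a dict lookup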
One way to make it easier to understand is to make the expression in the parentheses less trivial. For example, one can come up with "addable functions" that, when added, create a new callable object that chains the return value across all functions. It is more or less straightforward to do that:
def compose(func1, func2):
    def composed(*args, **kw):
        return func2(func1(*args, **kw))
    return composed

class addable(object):
    def __init__(self, func):
        self.func = func
    def __add__(self, other):
        if callable(other):
            return addable(compose(self.func, other))
        raise TypeError
    def __call__(self, *args, **kw):
        return self.func(*args, **kw)

@addable
def mysum(a, b):
    return a + b

@addable
def dup(a):
    return a * 2
And it works like this at the interactive console:
>>> (mysum + dup)(3, 3)
12
>>>
You can add parentheses in many places without affecting how the code runs:
>>> (1)+(2)
3
>>> (1)+(((2)))
3
>>> (((int)))('123')
123
It's not casting; you're only surrounding the function object with parentheses.
You can put parentheses around any expression. One kind of expression is a name. Names can refer to any value, and strings, functions, and types are all just different kinds of values. Multiple layers of parentheses aren't special, either: since a parenthesized expression is also an expression, you can put them in parentheses.
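To see that names are just bindings to values, bind the same function object to a second name; calling through either name, with or without parentheses, is identical. A minimal sketch:

parse = int      # the int type now has two names
(parse)('7')     # 7 -- same object as int
((parse))('7')   # 7 -- the extra parentheses change nothing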
Related
I want to write a wrapper function which calls one function and passes the results to another function. The argument and return types of the functions are the same, but I have a problem with returning lists and multiple values.
def foo():
    return 1,2

def bar():
    return (1,2)

def foo2(a,b):
    print(a,b)

def bar2(p):
    a,b=p
    print(a,b)

def wrapper(func,func2):
    a=func()
    func2(a)

wrapper(bar,bar2)
wrapper(foo,foo2)
I am searching for a syntax that works with both function pairs, so I can use it in my wrapper code.
EDIT: The definitions of at least foo2 and bar2 should stay this way. Assume that they are from an external library.
There is no distinction. return 1,2 returns a tuple. Parentheses do not define a tuple; the comma does. foo and bar are identical.
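A quick interactive check (just a sketch) confirms it:

>>> def foo(): return 1, 2
>>> def bar(): return (1, 2)
>>> foo() == bar()
True
>>> type(foo())
<class 'tuple'>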
As I overlooked until JacobIRR's comment, your problem is that you need to pass an actual tuple, not the unpacked values from a tuple, to bar2:
a = foo()
foo2(*a)
a = bar()
bar2(a)
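If you know up front which convention each callee uses, two thin wrappers are enough; a sketch using the question's functions:

def wrapper_unpacked(func, func2):
    func2(*func())   # spread the returned tuple into separate arguments

def wrapper_packed(func, func2):
    func2(func())    # pass the tuple through as a single argument

wrapper_unpacked(foo, foo2)  # foo2 receives a=1, b=2
wrapper_packed(bar, bar2)    # bar2 receives p=(1, 2)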
I don't necessarily agree with the design, but following your requirements in the comments (the function definitions can't change), you can write a wrapper that tries to execute each version (packed vs. unpacked) since it sounds like you might not know what the function expects. The wrapper written below, argfixer, does exactly that.
def argfixer(func):
    def wrapper(arg):
        try:
            return func(arg)
        except TypeError:
            return func(*arg)
    return wrapper
def foo():
    return 1,2

def bar():
    return (1,2)

@argfixer
def foo2(a,b):
    print(a,b)

@argfixer
def bar2(p):
    a,b=p
    print(a,b)

a = foo()
b = bar()
foo2(a)
foo2(b)
bar2(a)
bar2(b)
However, if you aren't able to put @argfixer on the line before the function definitions, you could alternatively wrap them like this in your own script before calling them:
foo2 = argfixer(foo2)
bar2 = argfixer(bar2)
And as mentioned in previous comments/answers, return 1,2 and return (1,2) are equivalent and both return a single tuple.
The code as posted does not run because of the argument differences. It runs if you use def foo2(*args): and def bar2(*p):.
return 1, 2 and return (1, 2) are equivalent: the comma creates the tuple, whether or not it is enclosed in parentheses. All programming languages that I know of return a single value from a function, so, since you want to return multiple values, they must be wrapped into a collection type; in this case, a tuple.
The problem is in the way you call the second function. Make it bar2(a) instead of bar2(*a), which breaks the tuple into separate arguments.
Question
Is there any way to declare function arguments as non-strict (passed by-name)?
If this is not possible directly: are there any helper functions or decorators that help me achieve something similar?
Concrete example
Here is a little toy example to experiment with.
Suppose that I want to build a tiny parser-combinator library that can cope with the following classic grammar for arithmetic expressions with parentheses (numbers replaced by a single literal value 1 for simplicity):
num = "1"
factor = num
| "(" + expr + ")"
term = factor + "*" + term
| factor
expr = term + "+" + expr
| term
Suppose that I define a parser combinator as an object with a method parse that takes a list of tokens and a current position, and either throws a parse error or returns a result and a new position. I can nicely define a ParserCombinator base class that provides + (concatenation) and | (alternative). Then I can define parser combinators that accept constant strings, and implement + and |:
# Two kinds of errors that can be thrown by a parser combinator
class UnexpectedEndOfInput(Exception): pass
class ParseError(Exception): pass

# Base class that provides methods for `+` and `|` syntax
class ParserCombinator:
    def __add__(self, next):
        return AddCombinator(self, next)
    def __or__(self, other):
        return OrCombinator(self, other)

# Literally taken string constants
class Lit(ParserCombinator):
    def __init__(self, string):
        self.string = string
    def parse(self, tokens, pos):
        if pos < len(tokens):
            t = tokens[pos]
            if t == self.string:
                return t, (pos + 1)
            else:
                raise ParseError
        else:
            raise UnexpectedEndOfInput

def lit(str):
    return Lit(str)
# Concatenation
class AddCombinator(ParserCombinator):
    def __init__(self, first, second):
        self.first = first
        self.second = second
    def parse(self, tokens, pos):
        x, p1 = self.first.parse(tokens, pos)
        y, p2 = self.second.parse(tokens, p1)
        return (x, y), p2

# Alternative
class OrCombinator(ParserCombinator):
    def __init__(self, first, second):
        self.first = first
        self.second = second
    def parse(self, tokens, pos):
        try:
            return self.first.parse(tokens, pos)
        except (ParseError, UnexpectedEndOfInput):
            # first alternative failed: try the second one
            return self.second.parse(tokens, pos)
So far, everything is fine. However, because the non-terminal symbols of the grammar are defined in a mutually recursive fashion, and I cannot eagerly unfold the tree of all possible parser combinations, I have to work with factories of parser combinators, and wrap them into something like this:
# Wrapper that prevents immediate stack overflow
class LazyParserCombinator(ParserCombinator):
    def __init__(self, parserFactory):
        self.parserFactory = parserFactory
    def parse(self, tokens, pos):
        return self.parserFactory().parse(tokens, pos)

def p(parserFactory):
    return LazyParserCombinator(parserFactory)
This indeed allows me to write down the grammar in a way that is very close to the EBNF:
num = p(lambda: lit("1"))
factor = p(lambda: num | (lit("(") + expr + lit(")")))
term = p(lambda: (factor + lit("*") + term) | factor)
expr = p(lambda: (term + lit("+") + expr) | term)
And it actually works:
tokens = [str(x) for x in "1+(1+1)*(1+1+1)+1*(1+1)"]
print(expr.parse(tokens, 0))
However, the p(lambda: ...) in every line is a bit annoying. Is there some idiomatic way to get rid of it? It would be nice if one could somehow pass the whole RHS of a rule "by-name", without triggering the eager evaluation of the infinite mutual recursion.
What I've tried
I've checked what's available in the core language: it seems that only if/else expressions and the and and or operators can "short-circuit"; please correct me if I'm wrong.
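For example, the right-hand operand here is never evaluated:

def boom():
    raise RuntimeError

True or boom()    # no error: the right-hand side is never evaluated
False and boom()  # no error either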
I've also looked at how other, non-toy libraries do this. For example, funcparserlib uses explicit forward declarations to avoid mutual recursion (look at the forward_decl and value.define parts of the example code in the GitHub README.md). The parsec.py library uses special @generate decorators and seems to do something like monadic parsing with coroutines. That's all very nice, but my goal is to understand what options I have with regard to the basic evaluation strategies available in Python.
I've also found lazy_object_proxy.Proxy, but it didn't seem to help to instantiate such objects in a more concise way.
So, is there a nicer way to pass arguments by-name and avoid the blowup of mutually recursively defined values?
It's a nice idea, but it's not something that Python's syntax allows for: Python expressions are always evaluated strictly (with the exception of conditional expressions and the short-circuiting and and or operators).
In particular, the problem is that in an expression like:
num = p(lit("1"))
By the time p is called, its argument has already been evaluated. The object resulting from evaluating lit("1") is not named anything until the formal parameter of p creates a name for it, so there is no name to pass "by"; conversely, there must already be a fully evaluated object there, or otherwise p wouldn't be able to receive a value at all.
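A tiny sketch of strict evaluation: the argument expression runs before the called function does, even if the argument is never used.

def noisy():
    print("evaluated!")
    return 1

def ignore(x):     # never touches x
    return None

ignore(noisy())    # prints "evaluated!" anyway: the argument runs first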
What you could do is add a new object to use instead of a lambda to defer evaluation of a name. For example, something like:
class DeferredNamespace(object):
    def __init__(self, namespace):
        self.__namespace = namespace
    def __getattr__(self, name):
        return DeferredLookup(self.__namespace, name)

class DeferredLookup(object):
    def __init__(self, namespace, name):
        self.__namespace = namespace
        self.__name = name
    def __getattr__(self, name):
        # resolve the deferred name in the namespace dict only when one
        # of its members is actually requested
        return getattr(self.__namespace[self.__name], name)
    def __call__(self, *args, **kwargs):
        # calling a deferred name yields a zero-argument factory, which
        # is exactly the shape that p() expects
        return lambda: self.__namespace[self.__name](*args, **kwargs)

d = DeferredNamespace(locals())
num = p(d.lit("1"))
In this case, d.lit doesn't actually return lit: it returns a DeferredLookup object that looks 'lit' up in the captured namespace only when it is actually used, and calling it produces a zero-argument factory that performs the lookup on demand. Note that this captures locals() eagerly, which you might not want; you can adapt that to use a lambda, or better yet just create all your entities in some other namespace anyway.
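For instance (a hypothetical sketch, relying on the module-level namespace dict staying live), a deferred call can be built before the name it refers to even exists:

d = DeferredNamespace(globals())
factory = d.later("hi")   # no NameError: nothing is resolved yet

def later(s):
    return s.upper()

print(factory())          # HI -- 'later' is looked up only now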
You still get the wart of the d. in the syntax, which may or may not be a deal-breaker, depending on your goals with this API.
Special solution for functions that must accept exactly one by-name argument
If you want to define a function f that has to take one single argument by-name, consider making f into a decorator. Instead of an argument littered with lambdas, the decorator can then directly receive the function definition.
The lambdas in the question appear because we need a way to make the execution of the right-hand sides lazy. However, if we change the definitions of the non-terminal symbols to be defs rather than local variables, the RHS is likewise not executed immediately. Then all we have to do is convert these defs into ParserCombinators somehow. For this, we can use decorators.
We can define a decorator that wraps a function into a LazyParserCombinator as follows:
def rule(f):
    return LazyParserCombinator(f)
and then apply it to the functions that hold the definitions of each grammar rule:
@rule
def num(): return lit("1")

@rule
def factor(): return num | (lit("(") + expr + lit(")"))

@rule
def term(): return (factor + lit("*") + term) | factor

@rule
def expr(): return (term + lit("+") + expr) | term
The syntactic overhead within the right hand sides of the rules is minimal (no overhead for referencing other rules, no p(...)-wrappers or ruleName()-parentheses needed), and there is no counter-intuitive boilerplate with lambdas.
Explanation:
Given a higher-order function h, we can use it to decorate another function f as follows:
@h
def f():
    <body>
What this does is essentially:
def f():
    <body>
f = h(f)
and h is not constrained to returning functions; it can also return other objects, like the ParserCombinators above.
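A minimal sketch of that last point, using a made-up decorator h that returns a plain dict rather than a function:

def h(f):
    return {"wrapped": f}   # the decoration result can be any object

@h
def f():
    return 42

print(f["wrapped"]())       # 42 -- the name f is now bound to a dict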
[I am using python 2.7]
I wanted to make a little wrapper function that adds one output to a function. Something like:
def add_output(fct, value):
    return lambda *args, **kargs: (fct(*args, **kargs), value)
Example of use:
def f(a): return a+1

g = add_output(f, 42)
print g(12)  # prints: (13, 42)
This is the expected result, but it does not work if the function given to add_output returns more than one output (nor if it returns no output). In those cases the wrapped function returns two outputs: one containing all the outputs of the initial function (or None if it returns nothing), and one with the added output:
def f1(a): return a, a+1
def f2(a): pass

g1 = add_output(f1, 42)
g2 = add_output(f2, 42)
print g1(12)  # prints: ((12, 13), 42) instead of (12, 13, 42)
print g2(12)  # prints: (None, 42) instead of 42
I can see this is related to the impossibility of distinguishing between one output of type tuple and several outputs. But it is disappointing not to be able to do something so simple in a dynamic language like Python...
Does anyone have an idea of a way to achieve this automatically and cleanly, or am I in a dead end?
Note:
In case this changes anything, my real purpose is wrapping class (instance) methods to look like functions (for workflow purposes). However, self needs to be added to the output (in case its content is changed):
class C(object):
    def f(self): return 'foo', 'bar'

def wrap(method):
    return lambda self, *args, **kargs: (self, method(self, *args, **kargs))

f = wrap(C.f)
c = C()
f(c)  # returns (c, ('foo', 'bar')) instead of (c, 'foo', 'bar')
I am working with Python 2.7, so I want a solution for this version, or else I'll abandon the idea. I am still interested (and maybe future readers will be too) in comments about this issue for Python 3, though.
Your add_output() function is what is called a decorator in Python. Regardless, you can use one of the collections module's ABCs (Abstract Base Classes) to distinguish between different results from the function being wrapped. For example:
import collections

def add_output(fct, value):
    def wrapped(*args, **kwargs):
        result = fct(*args, **kwargs)
        if isinstance(result, collections.Sequence):
            return tuple(result) + (value,)
        elif result is None:
            return value
        else:  # non-None and non-sequence
            return (result, value)
    return wrapped
def f1(a): return a, a+1
def f2(a): pass

g1 = add_output(f1, 42)
g2 = add_output(f2, 42)
print g1(12)  # -> (12, 13, 42)
print g2(12)  # -> 42
Depending on what sort of functions you plan on decorating, you might need to use the collections.Iterable ABC instead of, or in addition to, collections.Sequence.
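One caveat worth a sketch (using the add_output defined just above): strings are sequences too, so a wrapped function that returns a string gets spread out character by character unless strings are excluded explicitly, e.g. with an isinstance(result, basestring) check.

def h(a): return 'hi'

g = add_output(h, 42)
print g(1)  # -> ('h', 'i', 42), probably not what you want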
I tried to reimplement something like partial (which will later have more behavior). Now, in the following example, lazycall1 seems to work just as well as lazycall2, so I don't understand why the documentation of partial suggests using the longer second version. Any suggestions? Can it get me in trouble?
def lazycall1(func, *args, **kwargs):
    def f():
        func(*args, **kwargs)
    return f

def lazycall2(func, *args, **kwargs):
    def f():
        func(*args, **kwargs)
    f.func = func  # why do I need that?
    f.args = args
    f.kwargs = kwargs
    return f
def A(x):
    print("A", x)

def B(x):
    print("B", x)

a1 = lazycall1(A, 1)
b1 = lazycall1(B, 2)
a1()
b1()

a2 = lazycall2(A, 3)
b2 = lazycall2(B, 4)
a2()
b2()
EDIT: Actually, the answers given so far aren't quite right. It would work even with duplicated argument names. Is there another reason?
def lazycall(func, *args):
    def f(*args2):
        return func(*(args + args2))
    return f

def sum_up(a, b):
    return a + b

plusone = lazycall(sum_up, 1)
plustwo = lazycall(sum_up, 2)
print(plusone(6))  # 7
print(plustwo(9))  # 11
The only extra thing the second form has is some extra attributes. These might be helpful if you start passing around the functions returned by lazycall2, so that the receiving code can make decisions based on those values.
functools.partial can accept additional arguments - or overridden arguments - in the inner, returned function. Your inner f() functions don't, so there's no need for what you're doing in lazycall2. However, if you wanted to do something like this:
def sum(a, b):
    return a + b

plusone = lazycall3(sum, 1)
plusone(6)  # 7
You'd need to do what is shown in those docs.
Look closer at the argument names of the inner function newfunc on the Python documentation page you link to: they are different from the ones passed to the outer function (args vs. fargs, keywords vs. fkeywords). Their implementation of partial saves the arguments that the outer function was given and adds them to the arguments given to the inner function.
Since your inner function reuses the exact same argument names, the original arguments to the outer function wouldn't be accessible in there.
As for setting the func, args, and kwargs attributes on the returned function: a function is an object in Python, and you can set attributes on it. These attributes give you access to the original function and arguments after you have passed them into your lazycall functions. So a2.func will be A and a2.args will be (3,).
If you don't need to keep track of the original function and arguments, you should be fine with your lazycall1.
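For comparison, a quick sketch: functools.partial exposes the same kind of metadata (func, args, keywords) on the objects it returns, which is what lazycall2 imitates:

from functools import partial

p = partial(A, 3)    # A as defined in the question
print(p.func is A)   # True
print(p.args)        # (3,)
p()                  # prints: A 3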
I've been playing around in depth, attempting to write my own version of a memoizing decorator before I go looking at other people's code. It's honestly more of a fun exercise. However, in the course of playing around, I've found I can't do something I want with decorators:
def addValue(func, val):
    def add(x):
        return func(x) + val
    return add

@addValue(val=4)
def computeSomething(x):
    # function gets defined
If I want to do that I have to do this:
def addTwo(func):
    return addValue(func, 2)

@addTwo
def computeSomething(x):
    # function gets defined
Why can't I use keyword arguments with decorators in this manner? What am I doing wrong and can you show me how I should be doing it?
You need to define a function that returns a decorator:
def addValue(val):
    def decorator(func):
        def add(x):
            return func(x) + val
        return add
    return decorator
When you write @addTwo, the value of addTwo is directly used as a decorator. However, when you write @addValue(4), addValue(4) is first evaluated by calling the addValue function. Then the result is used as a decorator.
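In other words, a sketch of what the two forms expand to:

# @addTwo is equivalent to:
computeSomething = addTwo(computeSomething)

# @addValue(4) is equivalent to:
decorator = addValue(4)                         # call addValue first
computeSomething = decorator(computeSomething)  # then decorate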
You want to partially apply the function addValue - give the val argument, but not func. There are generally two ways to do this:
The first one is called currying and is used in interjay's answer: instead of a function with two arguments, f(a, b) -> res, you write a function of the first argument that returns another function taking the second argument, g(a) -> (h(b) -> res).
The other way is a functools.partial object. It stores the function together with any arguments given at creation time (val in your case); when the partial is called, it passes those stored arguments along with the new ones (func here) to the underlying function.
from functools import partial

@partial(addValue, val=2)  # you can call this addTwo
def computeSomething(x):
    return x
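Applied to the question's original two-argument addValue, this is the same as computeSomething = addValue(computeSomething, val=2), so:

print(computeSomething(5))  # 7 -- the undecorated function returns 5, then 2 is added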
Partials are usually a much simpler solution for this partial application problem, especially with more than one argument.
Decorators with any kind of arguments -- named/keyword ones, unnamed/positional ones, or some of each -- essentially, ones you call on the @name line rather than just mention there -- need a double level of nesting (while the decorators you just mention have a single level of nesting). That goes even for argument-less ones if you want to call them on the @ line -- here's the simplest do-nothing, double-nested decorator:
def double():
    def middling(f):
        return f
    return middling
You'd use this as
@double()
def whatever ...
Note the parentheses (empty in this case, since no arguments are needed nor wanted): they mean you're calling double, which returns middling, which decorates whatever.
Once you've seen the difference between "calling" and "just mentioning", adding (e.g. optional) named args is not hard:
def doublet(foo=23):
    def middling(f):
        return f
    return middling
usable either as:
@doublet()
def whatever ...
or as:
@doublet(foo=45)
def whatever ...
or equivalently as:
@doublet(45)
def whatever ...