pythonic way to rewrite an assignment in an if statement - python

Is there a pythonic preferred way to do what I would do in C++:
for s in str:
    if r = regex.match(s):
        print r.groups()
I really like that syntax, imo it's a lot cleaner than having temporary variables everywhere. The only other way that's not overly complex is
for s in str:
    r = regex.match(s)
    if r:
        print r.groups()
I guess I'm complaining about a pretty pedantic issue. I just miss the former syntax.
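Note that since Python 3.8, assignment expressions (the := "walrus" operator from PEP 572) provide exactly this syntax. A minimal sketch, for newer Python only:
import re

regex = re.compile(r"(\d+)")
for s in ["12", "x", "34"]:
    if r := regex.match(s):  # assign and test in one expression
        print(r.groups())    # ('12',) then ('34',)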

How about
for r in [regex.match(s) for s in str]:
    if r:
        print r.groups()
or a bit more functional
for r in filter(None, map(regex.match, str)):
    print r.groups()

Perhaps it's a bit hacky, but using a function object's attributes to store the last result allows you to do something along these lines:
def fn(regex, s):
    fn.match = regex.match(s)  # save result
    return fn.match

for s in strings:
    if fn(regex, s):
        print fn.match.groups()
Or more generically:
def cache(value):
    cache.value = value
    return value

for s in strings:
    if cache(regex.match(s)):
        print cache.value.groups()
Note that although the saved "value" can itself be a collection of things, this approach holds only one such value at a time, so more than one function may be required where multiple values need to be saved simultaneously, such as in nested function calls, loops or other threads. So, in accordance with the DRY principle, rather than writing each such function by hand, a factory function can help:
def Cache():
    def cache(value):
        cache.value = value
        return value
    return cache

cache1 = Cache()
for s in strings:
    if cache1(regex.match(s)):
        # use another at the same time
        cache2 = Cache()
        if cache2(somethingelse) != cache1.value:
            process(cache2.value)
        print cache1.value.groups()
        ...

There's a recipe to make an assignment expression but it's very hacky. Your first option doesn't compile so your second option is the way to go.
## {{{ http://code.activestate.com/recipes/202234/ (r2)
import sys

def set(**kw):
    assert len(kw) == 1
    a = sys._getframe(1)
    a.f_locals.update(kw)
    return kw.values()[0]

#
# sample
#
A = range(10)
while set(x=A.pop()):
    print x
## end of http://code.activestate.com/recipes/202234/ }}}
As you can see, production code shouldn't touch this hack with a ten foot, double bagged stick.

This might be an overly simplistic answer, but would you consider this:
for s in str:
    if regex.match(s):
        print regex.match(s).groups()

There is no pythonic way to do something that is not pythonic. It's that way for a reason: first, allowing statements in the conditional part of an if statement would make the grammar pretty ugly. For instance, if you allowed assignment statements in if conditions, why not also allow if statements? How would you actually write that? C-like languages don't have this problem, because they don't have assignment statements; they make do with just assignment expressions and expression statements.
The second reason is that
if foo = bar:
    pass
looks very similar to
if foo == bar:
    pass
Even if you are clever enough to type the correct one, and even if most of the members on your team are sharp enough to notice it, are you sure that the one you are looking at now is exactly what is supposed to be there? It's not unreasonable for a new dev to see this and just "fix" it (one way or the other), and now it's definitely wrong.

Whenever I find that my loop logic is getting complex I do what I would with any other bit of logic: I extract it to a function. In Python it is a lot easier than some other languages to do this cleanly.
So extract the code that just generates the items of interest:
def matching(strings, regex):
    for s in strings:
        r = regex.match(s)
        if r: yield r
and then when you want to use it, the loop itself is as simple as they get:
for r in matching(strings, regex):
    print r.groups()

Yet another answer is to use the "Assign and test" recipe for allowing assigning and testing in a single statement published in O'Reilly Media's July 2002 1st edition of the Python Cookbook and also online at Activestate. It's object-oriented, the crux of which is this:
# from http://code.activestate.com/recipes/66061
class DataHolder:
    def __init__(self, value=None):
        self.value = value
    def set(self, value):
        self.value = value
        return value
    def get(self):
        return self.value
This can optionally be modified slightly by adding the custom __call__() method shown below to provide an alternative, less explicit way to retrieve an instance's value, which seems like a completely logical thing for a 'DataHolder' object to do when called, I think.
    def __call__(self):
        return self.value
Allowing your example to be re-written:
r = DataHolder()
for s in strings:
    if r.set(regex.match(s)):
        print r.get().groups()
        # or
        print r().groups()
As also noted in the original recipe, if you use it a lot, adding the class and/or an instance of it to the __builtin__ module to make it globally available is very tempting despite the potential downsides:
import __builtin__
__builtin__.DataHolder = DataHolder
__builtin__.data = DataHolder()
As I mentioned in my other answer to this question, it must be noted that this approach is limited to holding only one result/value at a time, so more than one instance is required to handle situations where multiple values need to be saved simultaneously, such as in nested function calls, loops or other threads. That doesn't rule out using it or the other answer; it just means more effort will be required.


Ruby can add methods to the Number class and other core types to get effects like this:
1.should_equal(1)
But it seems like Python cannot do this. Is this true? And if so, why? Does it have something to do with the fact that type can't be modified?
Rather than talking about different definitions of monkey patching, I would like to just focus on the example above. I have already concluded that it cannot be done as a few of you have answered. But I would like a more detailed explanation of why it cannot be done, and maybe what feature, if available in Python, would allow this.
To answer some of you: The reason I might want to do this is simply aesthetics/readability.
item.price.should_equal(19.99)
This reads more like English and clearly indicates which is the tested value and which is the expected value, as opposed to:
should_equal(item.price, 19.99)
This concept is what Rspec and some other Ruby frameworks are based on.
No, you cannot. In Python, all data (classes, methods, functions, etc) defined in C extension modules (including builtins) are immutable. This is because C modules are shared between multiple interpreters in the same process, so monkeypatching them would also affect unrelated interpreters in the same process. (Multiple interpreters in the same process are possible through the C API, and there has been some effort towards making them usable at Python level.)
However, classes defined in Python code may be monkeypatched because they are local to that interpreter.
What exactly do you mean by Monkey Patch here? There are several slightly different definitions.
If you mean, "can you change a class's methods at runtime?", then the answer is emphatically yes:
class Foo:
    pass  # dummy class

Foo.bar = lambda self: 42

x = Foo()
print x.bar()
If you mean, "can you change a class's methods at runtime and make all of the instances of that class change after-the-fact?" then the answer is yes as well. Just change the order slightly:
class Foo:
    pass  # dummy class

x = Foo()
Foo.bar = lambda self: 42

print x.bar()
But you can't do this for certain built-in classes, like int or float. These classes' methods are implemented in C and there are certain abstractions sacrificed in order to make the implementation easier and more efficient.
I'm not really clear on why you would want to alter the behavior of the built-in numeric classes anyway. If you need to alter their behavior, subclass them!!
You can do this, but it takes a little bit of hacking. Fortunately, there's a module now called "Forbidden Fruit" that gives you the power to patch methods of built-in types very simply. You can find it at
http://clarete.github.io/forbiddenfruit/?goback=.gde_50788_member_228887816
or
https://pypi.python.org/pypi/forbiddenfruit/0.1.0
With the original question example, after you write the "should_equal" function, you'd just do
from forbiddenfruit import curse
curse(int, "should_equal", should_equal)
and you're good to go! There's also a "reverse" function to remove a patched method.
def should_equal_def(self, value):
    if self != value:
        raise ValueError, "%r should equal %r" % (self, value)

class MyPatchedInt(int):
    should_equal = should_equal_def

class MyPatchedStr(str):
    should_equal = should_equal_def

import __builtin__
__builtin__.str = MyPatchedStr
__builtin__.int = MyPatchedInt

int(1).should_equal(1)
str("44").should_equal("44")
Have fun ;)
Python's core types are immutable by design, as other users have pointed out:
>>> int.frobnicate = lambda self: whatever()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: can't set attributes of built-in/extension type 'int'
You certainly could achieve the effect you describe by making a subclass, since user-defined types in Python are mutable by default.
>>> class MyInt(int):
...     def frobnicate(self):
...         print 'frobnicating %r' % self
...
>>> five = MyInt(5)
>>> five.frobnicate()
frobnicating 5
>>> five + 8
13
There's no need to make the MyInt subclass public, either; one could just as well define it inline directly in the function or method that constructs the instance.
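For instance, a minimal sketch of such an inline subclass (the make_int factory name here is invented for illustration):
def make_int(n):
    # the subclass never needs a module-level name
    class FrobnicatableInt(int):
        def frobnicate(self):
            print('frobnicating %r' % self)
    return FrobnicatableInt(n)

five = make_int(5)
five.frobnicate()  # frobnicating 5
print(five + 8)    # 13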
There are certainly a few situations where Python programmers who are fluent in the idiom consider this sort of subclassing the right thing to do. For instance, os.stat() returns a tuple subclass that adds named members, precisely in order to address the sort of readability concern you refer to in your example.
>>> import os
>>> st = os.stat('.')
>>> st
(16877, 34996226, 65024L, 69, 1000, 1000, 4096, 1223697425, 1223699268, 1223699268)
>>> st[6]
4096
>>> st.st_size
4096
That said, in the specific example you give, I don't believe that subclassing float in item.price (or elsewhere) would be very likely to be considered the Pythonic thing to do. I can easily imagine somebody deciding to add a price_should_equal() method to item if that were the primary use case; if one were looking for something more general, perhaps it might make more sense to use named arguments to make the intended meaning clearer, as in
should_equal(observed=item.price, expected=19.99)
or something along those lines. It's a bit verbose, but no doubt it could be improved upon. A possible advantage to such an approach over Ruby-style monkey-patching is that should_equal() could easily perform its comparison on any type, not just int or float. But perhaps I'm getting too caught up in the details of the particular example that you happened to provide.
You can't patch core types in python.
However, you could use pipe to write more human-readable code:
from pipe import *

@Pipe
def should_equal(obj, val):
    if obj == val:
        return True
    return False

class dummy: pass

item = dummy()
item.value = 19.99
print item.value | should_equal(19.99)
If you really really really want to do a monkey patch in Python, you can do a (sortof) hack with the "import foo as bar" technique.
If you have a class such as TelnetConnection, and you want to extend it, subclass it in a separate file and call it something like TelnetConnectionExtended.
Then, at the top of your code, where you would normally say:
import TelnetConnection
change that to be:
import TelnetConnectionExtended as TelnetConnection
and then everywhere in your code that you reference TelnetConnection will actually be referencing TelnetConnectionExtended.
Sadly, this assumes that you have access to that class, and the "as" only operates within that particular file (it's not a global-rename), but I've found it to be useful from time to time.
Here's an example of implementing item.price.should_equal, although I'd use Decimal instead of float in a real program:
class Price(float):
    # float is immutable, so the value is set by float.__new__;
    # no extra __init__ logic is needed
    def should_equal(self, val):
        assert self == val, (self, val)

class Item(object):
    def __init__(self, name, price=0.0):
        self.name = name
        self.price = Price(price)

item = Item("spam", 3.99)
item.price.should_equal(3.99)
No, but you have UserDict, UserString and UserList, which were made with exactly this in mind.
If you google you will find examples for other types, but these are built in.
In general monkey patching is less used in Python than in Ruby.
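For example, a minimal sketch using UserString (spelled here as in Python 3, where it lives in collections; in Python 2 it was the UserString module):
from collections import UserString

class TestableString(UserString):
    def should_equal(self, expected):
        # UserString keeps the wrapped value in self.data
        assert self.data == expected, (self.data, expected)

TestableString("44").should_equal("44")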
What does should_equal do? Is it a boolean returning True or False? In that case, it's spelled:
item.price == 19.99
There's no accounting for taste, but no regular python developer would say that's less readable than your version.
Does should_equal instead set some sort of validator? (why would a validator be limited to one value? Why not just set the value and not update it after that?) If you want a validator, this could never work anyway, since you're proposing to modify either a particular integer or all integers. (A validator that requires 18.99 to equal 19.99 will always fail.) Instead, you could spell it like this:
item.price_should_equal(19.99)
or this:
item.should_equal('price', 19.99)
and define appropriate methods on item's class or superclasses.
It seems what you really wanted to write is:
assert item.price == 19.99
(Of course comparing floats for equality, or using floats for prices, is a bad idea, so you'd write assert item.price == Decimal('19.99') or whatever numeric class you were using for the price.)
You could also use a testing framework like py.test to get more info on failing asserts in your tests.
No, you can't do that in Python. I consider it to be a good thing.
No, sadly you cannot extend types implemented in C at runtime.
You can subclass int, although it is non-trivial; you may have to override __new__.
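A minimal sketch of why __new__ is the hook: int is immutable, so the value must be supplied at construction time, before __init__ runs (the Inches name is invented for illustration):
class Inches(int):
    def __new__(cls, value, unit='in'):
        # immutable base types are configured in __new__, not __init__
        self = super(Inches, cls).__new__(cls, value)
        self.unit = unit
        return self

x = Inches(12)
print(x + 1)   # 13: behaves like an int
print(x.unit)  # in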
You also have a syntax issue:
1.somemethod() # invalid
However
(1).__eq__(1) # valid
Here is how I made custom string/int/float...etc. methods:
class MyStrClass(str):
    def __init__(self, arg: str):
        self.arg_one = arg

    def my_str_method(self):
        return self.arg_one

    def my_str_multiple_arg_method(self, arg_two):
        return self.arg_one + arg_two

class MyIntClass(int):
    def __init__(self, arg: int):
        self.arg_one = arg

    def my_int_method(self):
        return self.arg_one * 2

myString = MyStrClass("StackOverflow")
myInteger = MyIntClass(15)

print(myString.count("a"))  # Output: 1
print(myString.my_str_method())  # Output: StackOverflow
print(myString.my_str_multiple_arg_method(" is cool!"))  # Output: StackOverflow is cool!
print(myInteger.my_int_method())  # Output: 30
It may not be the best solution, but it works just fine.
Here's how I achieve the .should_something... behavior:
result = calculate_result('blah') # some method defined somewhere else
the(result).should.equal(42)
or
the(result).should_NOT.equal(41)
I included a decorator method for extending this behavior at runtime on a stand-alone method:
@should_expectation
def be_42(self):
    self._assert(
        action=lambda: self._value == 42,
        report=lambda: "'{0}' should equal '42'.".format(self._value)
    )

result = 42
the(result).should.be_42()
You have to know a bit about the internals but it works.
Here's the source:
https://github.com/mdwhatcott/pyspecs
It's also on PyPI under pyspecs.

Best practice for using parentheses in Python function returns?

I'm learning Python and, so far, I absolutely love it. Everything about it.
I just have one question about a seeming inconsistency in function returns, and I'm interested in learning the logic behind the rule.
If I'm returning a literal or variable in a function return, no parentheses are needed:
def fun_with_functions(a, b):
    total = a + b
    return total
However, when I'm returning the result of another function call, the function is wrapped around a set of parentheses. To wit:
def lets_have_fun():
    return(fun_with_functions(42, 9000))
This is, at least, the way I've been taught, using the A Smarter Way to Learn Python book. I came across this discrepancy and it was given without an explanation. You can see the online exercise here (skip to Exercise 10).
Can someone explain to me why this is necessary? Is it even necessary in the first place? And are there other similar variations in parenthetical syntax that I should be aware of?
Edit: I've rephrased the title of my question to reflect the responses. Returning a result within parentheses is not mandatory, as I originally thought, but it is generally considered best practice, as I have now learned.
It's not necessary. The parentheses are used for several reasons; one reason is code style:
example = some_really_long_function_name() or another_really_function_name()
so you can use:
example = (some_really_long_function_name()
           or
           another_really_function_name())
Another use is, as in maths, to force evaluation order: parentheses ensure the enclosed expression is evaluated first. For simply returning the result of another function call, though, the parentheses change nothing; it's just a style choice to make the execution order explicit, not a necessity.
I don't think it is mandatory. Tried in both python2 and python3, and a function defined without parentheses in the lets_have_fun() return clause works just fine. So as jack6e says, it's just a preference.
if you
return ("something",)  # the comma is important; the ( ) are optional, thx @roganjosh
you are returning a tuple.
If you are returning
return someFunction(4,9)
or
return (someFunction(4,9))
makes no difference. To test, use:
def f(i, g):
    return i * g

def r():
    return f(4, 6)

def q():
    return (f(4, 6))

print(type(r()))
print(type(q()))
Output:
<type 'int'>
<type 'int'>

Function returning reference to itself Python

Does the following have any practical use in Python?
>>> def a(n):
...     print(n)
...     return a
Or even:
>>> def a(n):
...     print(n)
...     return b
>>> def b(n):
...     print(n+3)
...     return a
This is common practice, maybe not so much with plain functions, but widely used in OOP. Basically, whenever a method is not a getter (a method that returns properties of the object) and doesn't return something specific, there is no cost to returning the object itself. And it allows you to compress code, as in
house = House()
exits = house.setDoors(2).setWindows(4).getNumberOfEmergencyExitsRequired()
While alternatively, you would have to write
house = House()
house.setDoors(2)
house.setWindows(4)
exits = house.getNumberOfEmergencyExitsRequired()
It's not the end of the world, but it allows you to compress code without reducing readability, hence it is a nice thing.
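For the chaining above to work, each setter must return self. A minimal sketch of such a House (the emergency-exit rule is invented purely for illustration):
class House(object):
    def __init__(self):
        self.doors = 0
        self.windows = 0

    def setDoors(self, n):
        self.doors = n
        return self  # returning self is what enables chaining

    def setWindows(self, n):
        self.windows = n
        return self

    def getNumberOfEmergencyExitsRequired(self):
        # invented rule, for illustration only
        return max(1, (self.doors + self.windows) // 4)

exits = House().setDoors(2).setWindows(4).getNumberOfEmergencyExitsRequired()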
To your examples
The first one is straightforward and similar: it allows compression of code. The second one is actually not something I personally would do, because
a(3)(5)  # equivalent to: a(3); b(5)
In this simple example, there is no reason why it should behave like that and might be confusing.
Back to OOP
Anyhow, in OOP, of course you could imagine
class House(object):
    def addDoorByColor(self, doorColor):
        door = Door()
        door.setColor(doorColor)
        self.door = door
        return self.door
Where then
house = House()
house.addDoorByColor('red').open()
would "open the door". This is probably not the best example for this scenario, but something I came along with right now just to show that there is potential use of returning "other objects". However, the last case would probably better be done by
door = Door('red')
house.addDoor(door)
door.open()

Memoize a function so that it isn't reset when I rerun the file in Python

I often do interactive work in Python that involves some expensive operations that I don't want to repeat often. I'm generally running whatever Python file I'm working on frequently.
If I write:
import functools32

@functools32.lru_cache()
def square(x):
    print "Squaring", x
    return x*x
I get this behavior:
>>> square(10)
Squaring 10
100
>>> square(10)
100
>>> runfile(...)
>>> square(10)
Squaring 10
100
That is, rerunning the file clears the cache. This works:
try:
    safe_square
except NameError:
    @functools32.lru_cache()
    def safe_square(x):
        print "Squaring", x
        return x*x
but when the function is long it feels strange to have its definition inside a try block. I can do this instead:
def _square(x):
    print "Squaring", x
    return x*x

try:
    safe_square_2
except NameError:
    safe_square_2 = functools32.lru_cache()(_square)
but it feels pretty contrived (for example, calling the decorator without an '@' sign)
Is there a simple way to handle this, something like:
@non_resetting_lru_cache()
def square(x):
    print "Squaring", x
    return x*x
?
Writing a script to be executed repeatedly in the same session is an odd thing to do.
I can see why you'd want to do it, but it's still odd, and I don't think it's unreasonable for the code to expose that oddness by looking a little odd, and having a comment explaining it.
However, you've made things uglier than necessary.
First, you can just do this:
@functools32.lru_cache()
def _square(x):
    print "Squaring", x
    return x*x

try:
    safe_square_2
except NameError:
    safe_square_2 = _square
There is no harm in attaching a cache to the new _square definition. It won't waste any time, or more than a few bytes of storage, and, most importantly, it won't affect the cache on the previous _square definition. That's the whole point of closures.
There is a potential problem here with recursive functions. It's already inherent in the way you're working, and the cache doesn't add to it in any way, but you might only notice it because of the cache, so I'll explain it and show how to fix it. Consider this function:
@lru_cache()
def _fact(n):
    if n < 2:
        return 1
    return _fact(n-1) * n
When you re-exec the script, even if you have a reference to the old _fact, it's going to end up calling the new _fact, because it's accessing _fact as a global name. It has nothing to do with the #lru_cache; remove that, and the old function will still end up calling the new _fact.
But if you're using the renaming trick above, you can just call the renamed version:
@lru_cache()
def _fact(n):
    if n < 2:
        return 1
    return fact(n-1) * n
Now the old _fact will call fact, which is still the old _fact. Again, this works identically with or without the cache decorator.
Beyond that initial trick, you can factor that whole pattern out into a simple decorator. I'll explain step by step below, or see this blog post.
Anyway, even with the less-ugly version, it's still a bit ugly and verbose. And if you're doing this dozens of times, my "well, it should look a bit ugly" justification will wear thin pretty fast. So, you'll want to handle this the same way you always factor out ugliness: wrap it in a function.
You can't really pass names around as objects in Python. And you don't want to use a hideous frame hack just to deal with this. So you'll have to pass the names around as strings, like this:
globals().setdefault('fact', _fact)
The globals function just returns the current scope's global dictionary. Which is a dict, which means it has the setdefault method, which means this will set the global name fact to the value _fact if it didn't already have a value, but do nothing if it did. Which is exactly what you wanted. (You could also use setattr on the current module, but I think this way emphasizes that the script is meant to be (repeatedly) executed in someone else's scope, not used as a module.)
So, here that is wrapped up in a function:
def new_bind(name, value):
    globals().setdefault(name, value)
… which you can turn into a decorator almost trivially:
def new_bind(name):
    def wrap(func):
        globals().setdefault(name, func)
        return func
    return wrap
Which you can use like this:
@new_bind('foo')
def _foo():
    print(1)
But wait, there's more! The func that new_bind gets is going to have a __name__, right? If you stick to a naming convention, like that the "private" name must be the "public" name with a _ prefixed, we can do this:
def new_bind(func):
    assert func.__name__[0] == '_'
    globals().setdefault(func.__name__[1:], func)
    return func
And you can see where this is going:
@new_bind
@lru_cache()
def _square(x):
    print "Squaring", x
    return x*x
There is one minor problem: if you use any other decorators that don't wrap the function properly, they will break your naming convention. So… just don't do that. :)
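For what it's worth, lru_cache wraps properly, which is why the stacked example above is safe: functools.wraps copies __name__ through to the wrapper. A minimal sketch of the difference (the logged decorator is invented for illustration):
from functools import wraps

def logged(func):
    @wraps(func)  # without this line, wrapper.__name__ would be 'wrapper'
    def wrapper(*args, **kwargs):
        print('calling ' + func.__name__)
        return func(*args, **kwargs)
    return wrapper

@logged
def _square(x):
    return x * x

print(_square.__name__)  # _square, so the _-prefix convention survives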
And I think this works exactly the way you want in every edge case. In particular, if you've edited the source and want to force the new definition with a new cache, you just del square before rerunning the file, and it works.
And of course if you want to merge those two decorators into one, it's trivial to do so, and call it non_resetting_lru_cache.
However, I'd keep them separate. I think it's more obvious what they do. And if you ever want to wrap another decorator around #lru_cache, you're probably still going to want #new_bind to be the outermost decorator, right?
What if you want to put new_bind into a module that you can import? Then it's not going to work, because it will be referring to the globals of that module, not the one you're currently writing.
You can fix that by explicitly passing your globals dict, or your module object, or your module name as an argument, like #new_bind(__name__), so it can find your globals instead of its. But that's ugly and repetitive.
You can also fix it with an ugly frame hack. At least in CPython, sys._getframe() can be used to get your caller's frame, and frame objects have a reference to their globals namespace, so:
def new_bind(func):
    assert func.__name__[0] == '_'
    g = sys._getframe(1).f_globals
    g.setdefault(func.__name__[1:], func)
    return func
Notice the big box in the docs that tells you this is an "implementation detail" that may only apply to CPython and is "for internal and specialized purposes only". Take this seriously. Whenever someone has a cool idea for the stdlib or builtins that could be implemented in pure Python, but only by using _getframe, it's generally treated almost the same as an idea that can't be implemented in pure Python at all. But if you know what you're doing, and you want to use this, and you only care about present-day versions of CPython, it will work.
There is no persistent_lru_cache in the stdlib. But you can build one pretty easily.
The functools source is linked directly from the docs, because this is one of those modules that's as useful as sample code as it is for using it directly.
As you can see, the cache is just a dict. If you replace that with, say, a shelf, it will become persistent automatically:
def persistent_lru_cache(filename, maxsize=128, typed=False):
    """new docstring explaining what filename does"""
    # same code as before up to here
    def decorating_function(user_function):
        cache = shelve.open(filename)
        # same code as before from here on.
Of course that only works if your arguments are strings. And it could be a little slow.
So, you might want to instead keep it as an in-memory dict, and just write code that pickles it to a file atexit, and restores it from a file if present at startup:
def decorating_function(user_function):
    # ...
    try:
        with open(filename, 'rb') as f:
            cache = pickle.load(f)
    except:
        cache = {}

    def cache_save():
        with lock:
            with open(filename, 'wb') as f:
                pickle.dump(cache, f)
    atexit.register(cache_save)

    # …
    wrapper.cache_save = cache_save
    wrapper.cache_filename = filename
Or, if you want it to write every N new values (so you don't lose the whole cache on, say, an _exit or a segfault or someone pulling the cord), add this to the second and third versions of wrapper, right after the misses += 1:
if misses % N == 0:
    cache_save()
See here for a working version of everything up to this point (using save_every as the "N" argument, and defaulting to 1, which you probably don't want in real life).
If you want to be really clever, maybe copy the cache and save that in a background thread.
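A minimal sketch of that idea, reusing the cache, lock, and filename names from the sketch above: snapshot the dict under the lock, then let a background thread do the (slow) pickling:
import pickle
import threading

def cache_save_async():
    with lock:
        snapshot = dict(cache)  # cheap copy taken under the lock
    def _write():
        with open(filename, 'wb') as f:
            pickle.dump(snapshot, f)
    threading.Thread(target=_write).start()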
You might want to extend the cache_info to include something like number of cache writes, number of misses since last cache write, number of entries in the cache at startup, …
And there are probably other ways to improve this.
From a quick test, with save_every=1, this makes the cache on both get_pep and fib (from the functools docs) persistent, with no measurable slowdown to get_pep and a very small slowdown to fib the first time (note that fib(100) has 100097 hits vs. 101 misses…), and of course a large speedup to get_pep (but not fib) when you re-run it. So, just what you'd expect.
I can't say I won't just use @abarnert's "ugly frame hack", but here is the version that requires you to pass in the calling module's globals dict. I think it's worth posting given that decorator functions with arguments are tricky and meaningfully different from those without arguments.
def create_if_not_exists_2(my_globals):
    def wrap(func):
        if "_" != func.__name__[0]:
            raise Exception("Function names used in cine must begin with '_'")
        my_globals.setdefault(func.__name__[1:], func)
        def wrapped(*args):
            return func(*args)  # the return matters, or callers get None
        return wrapped
    return wrap
Which you can then use in a different module like this:
from functools32 import lru_cache
from cine import create_if_not_exists_2

@create_if_not_exists_2(globals())
@lru_cache()
def _square(x):
    print "Squaring", x
    return x*x

assert "_square" in globals()
assert "square" in globals()
I've gained enough familiarity with decorators during this process that I was comfortable taking a swing at solving the problem another way:
from functools32 import lru_cache

try:
    my_cine
except NameError:
    class my_cine(object):
        _reg_funcs = {}

        @classmethod
        def func_key(cls, f):
            try:
                name = f.func_name
            except AttributeError:
                name = f.__name__
            return (f.__module__, name)

        def __init__(self, f):
            k = self.func_key(f)
            self._f = self._reg_funcs.setdefault(k, f)

        def __call__(self, *args, **kwargs):
            return self._f(*args, **kwargs)

if __name__ == "__main__":
    @my_cine
    @lru_cache()
    def fact_my_cine(n):
        print "In fact_my_cine for", n
        if n < 2:
            return 1
        return fact_my_cine(n-1) * n

    x = fact_my_cine(10)
    print "The answer is", x
@abarnert, if you are still watching, I'd be curious to hear your assessment of the downsides of this method. I know of two:
You have to know in advance what attributes to look in for a name to associate with the function. My first stab at it only looked at func_name which failed when passed an lru_cache object.
Resetting a function is painful: del my_cine._reg_funcs[('__main__', 'fact_my_cine')], and the swing I took at adding a __delitem__ was unsuccessful.

Multiple Value Return Pattern in Python (not tuple, list, dict, or object solutions)

There were several discussions on "returning multiple values in Python", e.g. 1, 2.
This is not the "multiple-value-return" pattern I'm trying to find here.
No matter what you use (tuple, list, dict, an object), it is still a single return value and you need to parse that return value (structure) somehow.
The real benefit of multiple return value is in the upgrade process. For example,
originally, you have
def func():
    return 1

print func() + func()
Then you decided that func() can return some extra information but you don't want to break previous code (or modify them one by one). It looks like
def func():
    return 1, "extra info"

value, extra = func()
print value  # 1 (expected)
print extra  # extra info (expected)
print func() + func()  # (1, 'extra info', 1, 'extra info') (not expected; we want the previous behaviour, i.e. 2)
The previous code (func() + func()) is broken. You have to fix it.
I don't know whether I made the question clear... You can see the CLISP example. Is there an equivalent way to implement this pattern in Python?
EDIT: I put the above clisp snippets online for your quick reference.
Let me put two use cases here for multiple return value pattern. Probably someone can have alternative solutions to the two cases:
Better support smooth upgrade. This is shown in the above example.
Have simpler client side codes. See following alternative solutions I have so far. Using exception can make the upgrade process smooth but it costs more codes.
Current alternatives: (they are not "multi-value-return" constructions, but they can be engineering solutions that satisfy some of the points listed above)
tuple, list, dict, an object. As is said, you need certain parsing on the client side, e.g. if ret.success == True: blabla. You need ret = func() before that. It's much cleaner to write if func() == True: blabla.
Use Exception. As is discussed in this thread, when the "False" case is rare, it's a nice solution. Even in this case, the client side code is still too heavy.
Use an arg, e.g. def func(main_arg, detail=[]). The detail can be list or dict or even an object depending on your design. The func() returns only original simple value. Details go to the detail argument. Problem is that the client need to create a variable before invocation in order to hold the details.
Use a "verbose" indicator, e.g. def func(main_arg, verbose=False). When verbose == False (default; and the way client is using func()), return original simple value. When verbose == True, return an object which contains simple value and the details.
Use a "version" indicator. Same as "verbose" but we extend the idea there. In this way, you can upgrade the returned object for multiple times.
Use global detail_msg. This is like the old C-style error_msg. In this way, functions can always return simple values. The client side can refer to detail_msg when necessary. One can put detail_msg in global scope, class scope, or object scope depending on the use cases.
Use generator. yield simple_return and then yield detailed_return. This solution is nice in the callee's side. However, the caller has to do something like func().next() and func().next().next(). You can wrap it with an object and override the __call__ to simplify it a bit, e.g. func()(), but it looks unnatural from the caller's side.
Use a wrapper class for the return value. Override the class's methods to mimic the behaviour of original simple return value. Put detailed data in the class. We have adopted this alternative in our project in dealing with bool return type. see the relevant commit: https://github.com/fqj1994/snsapi/commit/589f0097912782ca670568fe027830f21ed1f6fc (I don't have enough reputation to put more links in the post... -_-//)
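As a minimal sketch of that last wrapper-class alternative (the ResultInt name is invented for illustration): subclass the original return type so old call sites keep working, and hang the details off an attribute for new call sites:
class ResultInt(int):
    def __new__(cls, value, extra=None):
        # int is immutable, so attach the extra data in __new__
        self = super(ResultInt, cls).__new__(cls, value)
        self.extra = extra
        return self

def func():
    return ResultInt(1, extra="extra info")

print(func() + func())  # 2, old callers unaffected
print(func().extra)     # extra info, available to new callers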
Here are some solutions:
Based on @yupbank's answer, I formalized it into a decorator, see github.com/hupili/multiret
The 8th alternative above says we can wrap a class. This is the current engineering solution we adopted. In order to wrap more complex return values, we may use meta class to generate the required wrapper class on demand. Have not tried, but this sounds like a robust solution.
try inspect?
I did some experimenting; it's not very elegant, but at least it's doable... and it works :)
import inspect
from functools import wraps
import re

def f1(*args):
    return 2

def f2(*args):
    return 3, 3

PATTERN = dict()
PATTERN[re.compile('(\w+) f()')] = f1
PATTERN[re.compile('(\w+), (\w+) = f()')] = f2

def execute_method_for(call_str):
    for regex, f in PATTERN.iteritems():
        if regex.findall(call_str):
            return f()

def multi(f1, f2):
    def liu(func):
        @wraps(func)
        def _(*args, **kwargs):
            frame, filename, line_number, function_name, lines, index = \
                inspect.getouterframes(inspect.currentframe())[1]
            call_str = lines[0].strip()
            return execute_method_for(call_str)
        return _
    return liu

@multi(f1, f2)
def f():
    return 1

if __name__ == '__main__':
    print f()
    a, b = f()
    print a, b
Your case does need code editing. However, if you need a hack, you can use function attributes to return extra values, without modifying return values.
def attr_store(varname, value):
    def decorate(func):
        setattr(func, varname, value)
        return func
    return decorate

@attr_store('extra', None)
def func(input_str):
    func.extra = {'hello': input_str + " ,How r you?", 'num': 2}
    return 1

print(func("John") + func("Matt"))
print(func.extra)
Demo : http://codepad.org/0hJOVFcC
However, be aware that function attributes behave like static variables, and you will need to assign values to them with care; appends and other in-place modifications will act on previously saved values.
The trick is not to apply the actual operation when you produce the result, but to pass the operation in as a parameter for when you process it. For your case, you can use the following code:
def x():
    # return 1
    return 1, 'x'*1

def f(op, f1, f2):
    print eval(str(f1) + op + str(f2))

f('+', x(), x())
If you want a generic solution for more complicated situations, you can extend the f function and specify the processing operation via the op parameter.
