Lambda function for classes in python? - python

There must be an easy way to do this, but somehow I can't wrap my head around it. The best way I can describe what I want is a lambda function for a class. I have a library that expects as an argument an uninstantiated version of a class to work with. It then instantiates the class itself to work on. The problem is that I'd like to be able to dynamically create versions of the class to pass to the library, but I can't figure out how to do it, since the library expects an uninstantiated version. The code below describes the problem:
class Double:
    def run(self, x):
        return x*2

class Triple:
    def run(self, x):
        return x*3

class Multiply:
    def __init__(self, mult):
        self.mult = mult
    def run(self, x):
        return x*self.mult

class Library:
    def __init__(self, c):
        self.c = c()
    def Op(self, val):
        return self.c.run(val)

op1 = Double
op2 = Triple
#op3 = Multiply(5)

lib1 = Library(op1)
lib2 = Library(op2)
#lib3 = Library(op3)

print lib1.Op(2)
print lib2.Op(2)
#print lib3.Op(2)
I can't use the generic Multiply class, because I must instantiate it first, which breaks the library ("AttributeError: Multiply instance has no __call__ method"). Without changing the Library class, is there a way I can do this?

Does the library really specify that it wants an "uninitialized version" (i.e. a class reference)?
It looks to me as if the library actually wants an object factory. In that case, it's acceptable to type:
lib3 = Library(lambda: Multiply(5))
To understand how the lambda works, consider the following:
Multiply5 = lambda: Multiply(5)
assert Multiply5().run(3) == Multiply(5).run(3)

There's no need for lambda at all. lambda is just syntactic sugar for defining a function and using it at the same time. Just as any lambda call can be replaced with an explicit def, we can solve your problem by creating a real class that meets your needs and returning it.
class Double:
    def run(self, x):
        return x*2

class Triple:
    def run(self, x):
        return x*3

def createMultiplier(n):
    class Multiply:
        def run(self, x):
            return x*n
    return Multiply

class Library:
    def __init__(self, c):
        self.c = c()
    def Op(self, val):
        return self.c.run(val)

op1 = Double
op2 = Triple
op3 = createMultiplier(5)

lib1 = Library(op1)
lib2 = Library(op2)
lib3 = Library(op3)

print lib1.Op(2)
print lib2.Op(2)
print lib3.Op(2)

This is sort of cheating, but you could give your Multiply class a __call__ method that returns itself:
class Multiply:
    def __init__(self, mult):
        self.mult = mult
    def __call__(self):
        return self
    def run(self, x):
        return x*self.mult
That way when the library calls c() it actually calls c.__call__() which returns the object you want.
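With that change, an instantiated Multiply can be handed straight to the unchanged Library class from the question, roughly like this:
lib3 = Library(Multiply(5))  # Library calls the instance; __call__ hands back the same object
print(lib3.Op(2))            # 10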

def mult(x):
    def f():
        return Multiply(x)
    return f

op3 = mult(5)
lib3 = Library(op3)
print lib3.Op(2)

If I understand your problem space correctly, you have a general interface that takes 1 argument which is called using the Library class. Unfortunately, rather than calling a function, Library assumes that the function is wrapped in a class with a run method.
You can certainly create these classes programmatically. Classes may be returned by functions, and thanks to closures you should be able to wrap any function in a class that meets your needs. Something like:
def make_op(f):
    class MyOp(object):
        def run(self, x):
            return f(x)
    return MyOp

op1 = make_op(lambda x: x*2)
op2 = make_op(lambda x: x*3)

def multiply_op(y):
    return make_op(lambda x: x*y)

op3 = multiply_op(3)

lib1 = Library(op1)
lib2 = Library(op2)
lib3 = Library(op3)

print( lib1.Op(2) )
print( lib2.Op(2) )
print( lib3.Op(2) )
That being said, changing Library to take a function and then providing functions is probably the stronger way to do this.
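For comparison, a sketch of that function-based variant (note it modifies Library, which the question ruled out, so it only applies if that constraint can be relaxed; the name FuncLibrary is made up here):
class FuncLibrary:
    def __init__(self, func):
        self.func = func
    def Op(self, val):
        return self.func(val)

lib3 = FuncLibrary(lambda x: x*5)
print(lib3.Op(2))  # 10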

Since type is the class of Python class objects (the default metaclass), and calling a class creates a new instance of that class, calling type with the correct arguments will result in a new class.
my_class = type("my_class", (object,), {"an_attribute": 1})
my_class now refers to a new class named "my_class", which is a subclass of object, with an attribute called "an_attribute", whose value is 1. Since methods are also just class attributes pointing to a function object, you can add them to the dictionary of attributes as well:
{"an_attribute": 1, "a_method": lambda self: print("Hello")}
This is how it works. I do not recommend doing it this way, unless you absolutely need to. In 99% of all cases, you don't. Refer to @Parker Coates' answer for the clean way to achieve your goal.
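For illustration only, the question's Multiply could be built dynamically with type() roughly like this (a sketch assuming the Library class from the question; make_multiply is a made-up name, and the factory approach above is usually cleaner):
def make_multiply(mult):
    return type("Multiply", (object,), {"run": lambda self, x: x*mult})

op3 = make_multiply(5)
lib3 = Library(op3)   # Library instantiates the freshly built class itself
print(lib3.Op(2))     # 10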

Related

How to make a decorator for defining required properties of a class?

I tried writing a decorator as such (going off memory, excuse any problems in code):
def required(fn):
    def wrapped(self):
        self.required_attributes += [fn.__name__]
        fn(self)
    return wrapped
and I used this to decorate @property attributes in classes, e.g.:
@property
@required
def some_property(self):
    return self._some_property
...so that I could do something like this:
def validate_required_attributes(instance):
    for attribute in instance.required_attributes:
        if not hasattr(instance, attribute):
            raise ValueError(f"Required attribute {attribute} was not set!")
Now I forgot that this wouldn't work, because for required_attributes to be updated with the name of the property, I would have to retrieve the property first. So in essence, in __init__ I could just access each self.propertyname to add it... but this solution is not nice at all; I might as well create a list of required attribute names in __init__.
From what I know, the decorator is applied at compile time so I wouldn't be able to modify the required_attributes before defining the wrapped function. Is there another way I can make this work? I just want a nice, elegant solution.
Thanks!
I think the attrs library does what you want. You can define a class like this, where x and y are required and z is optional.
from attr import attrs, attrib

@attrs
class MyClass:
    x = attrib()
    y = attrib()
    z = attrib(default=0)
Testing it out:
>>> instance = MyClass(1, 2)
>>> print(instance)
MyClass(x=1, y=2, z=0)
Here's my take at doing it with a class decorator and a method decorator. There's probably a nicer way of doing this using metaclasses (nice being the API not the implementation ;)).
def requiredproperty(f):
    setattr(f, "_required", True)
    return property(f)

def hasrequiredprops(cls):
    props = [x for x in cls.__dict__.items() if isinstance(x[1], property)]
    cls._required_props = {k for k, v in props if v.fget._required}
    return cls

@hasrequiredprops
class A(object):
    def __init__(self):
        self._my_prop = 1

    def validate(self):
        print("required attributes are", ",".join(self._required_props))

    @requiredproperty
    def my_prop(self):
        return self._my_prop
This should make validation work without requiring the caller to touch the property first:
>>> a = A()
>>> a.validate()
required attributes are my_prop
>>> a.my_prop
1
The class decorator is required to make sure the class knows the required property names during instantiation. The requiredproperty function is just a way to mark the properties as required.
That being said, I'm not completely sure what you are trying to achieve here. Perhaps validation of the instance attribute values that the property should return?
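For completeness, the metaclass route mentioned at the top might look roughly like this (an untested sketch; RequiredPropsMeta is a made-up name, and it collects the same _required_props set as the class decorator does):
class RequiredPropsMeta(type):
    def __new__(mcls, name, bases, namespace):
        cls = super().__new__(mcls, name, bases, namespace)
        # collect every property whose getter was marked by @requiredproperty
        cls._required_props = {
            attr for attr, value in namespace.items()
            if isinstance(value, property) and getattr(value.fget, "_required", False)
        }
        return cls

class B(metaclass=RequiredPropsMeta):
    @requiredproperty
    def my_prop(self):
        return 1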

Function in Class __init__ Dict | Functions Evaluated on Call of Class

I have a bunch of functions that I am storing in a dictionary used to gather data from a more "cryptic" source (I have written functions to access this data).
In my code, I want to create "visibility" of what functions / parameters are loading variables used in the rest of the class. So, I would like to have a class where, upon init, a dictionary of functions stands up that can be used by further functions in the class. The issue I am running into is that I want these functions to be called only when they are retrieved from the dictionary by a later function. I do not want the functions evaluated upon init.
Problem: Some of the functions I am passing into the dictionary are "incomplete", since I would like to pass in additional parameters later via partial. The issue is that it appears __init__ of the class evaluates all the functions in the dictionary rather than storing them as functions. I get an error from partial telling me that the first argument must be callable.
Here is an example of what I am doing (age works, month does not):
from functools import partial as part

class A:
    def __init__(self):
        self.rawInput = {
            'age': lu.source('personalInfo', 'age', asArray=1),
            'month': lu.source('employInfo', 'months_employed')
        }
        self.outputDict = {}
        self.resultsDict = {}

    class output(object):
        def age(self):
            age = A().rawInput['age']
            return len(age)

        def month(self):
            stuff = []
            for x in range(0, 1):
                month = part(A().rawInput['month'], x=x)
                stuff.append(month)
            return stuff
SOLUTION
Ah, looks like the approach posted by 7stud works. I now just place the values / functions into the dict as partials with the standard parameters and then pass additional ones as needed in the function call:
from functools import partial as part

def source(name, attrib, none=None):
    if none != None:
        print 'ham'
    else:
        print 'eggs'

class A:
    def __init__(self):
        self.rawInput = {
            'age': part(source, 'personalInfo', 'age'),
            'month': part(source, 'employInfo', 'months_employed')
        }
        self.outputDict = {}
        self.resultsDict = {}

    class output:
        def age(self):
            A().rawInput['age']()

        def month(self):
            x = 1
            A().rawInput['month'](x)

c = A.output()
c.age()
c.month()
eggs
ham
The issue is that it appears init of the class evaluates all the
functions in the dictionary rather than storing them as functions.
() is the function execution operator. So, when you write:
'age':lu.source('personalInfo', 'age', asArray=1)
the function lu.source is executed immediately, and the result is assigned to the "age" key in the dictionary.
Here's an example using partial:
from functools import partial
def myadd(x, y):
    return x+y

def mysubtract(x, y):
    return x-y

funcs = {}
funcs["add_3"] = partial(myadd, 3)
funcs["subtr_from_10"] = partial(mysubtract, 10)

print(
    funcs["add_3"](2)  #Note the function execution operator
)
print(
    funcs["subtr_from_10"](3)  #Note the function execution operator
)
--output:--
5
7
Note that in the line:
funcs["add_3"] = partial(myadd, 3)
the () is used with partial. So why does that work? It works because partial returns a function-like thing, so you end up with something like this:
funcs["add_3"] = some_func
Here is sort of how partial works:
def mypartial(func, x):
    def newfunc(val):
        return func(x, val)
    return newfunc

add_3 = mypartial(myadd, 3)  #equivalent to add_3 = newfunc
print(add_3(2))  #=> 5
Response to comment:
Okay, you could do this:
def myadd(x, y, z):
    return x+y+z

funcs = {}
funcs["add"] = {
    "func": myadd,
    "args": (3, 4)
}

func = funcs["add"]["func"]
args = funcs["add"]["args"]
result = func(*args, z=2)
print(result)  #=> 9
But that makes calling the function much more tortuous. If you are going to call the function with the arguments anyway, then why not embed the arguments in the function using partial?

Python - If a function is a first class object, can a function have a method?

I have a class which maintains a list of functions. These functions are just objects sitting in a queue and every so often the class pops one off and executes it. However, there are times when I would like to print out this list, and I'm imagining code as follows:
for function in self.control_queue:
    print function.summarize()
    if function.ready():
        function()
In other words, I would like to call methods called summarize() and ready(), that I want to define somewhere, on these function objects. Also, I would like to be able to toss anonymous functions on this queue - i.e., generate everything dynamically.
You can make it a class and define __call__:
class MyClass():
    def summarize(self):
        #summarize stuff
        pass
    def ready(self):
        #ready stuff
        pass
    def __call__(self):
        #put the code here, for when you call myClass()
        pass
How you run it:
function = MyClass()
print function.summarize()
if function.ready():
    function()
You have a couple possible approaches.
You could add the definitions to functions.
def foo():
    pass
# later..
foo.summarize = lambda: "To pair with bar"
foo.ready = lambda: True
You could create class objects to wrap the function operation.
class Func():
    def summarize(self):
        return "Function!"
    def ready(self):
        return True  # or whatever readiness flag you track
    def __call__(self):
        # Act as a function
        pass
Or you can have helper functions that inspect the function's name or attributes for these capabilities.
def summarize_func(func):
    return func.__name__  # Or branch here on specific names/attributes

def ready_func(func):
    return True  # Or branch on names/attributes
Finally, to accommodate anonymous functions you can check for the presence of these attributes and return optimistically if the attributes are absent. Then you can combine the above approaches with something that will work on any function.
def summarize_func(func):
    if hasattr(func, 'summarize'):
        return func.summarize()
    else:
        # Note this will just be '<lambda>' for anonymous funcs
        return func.__name__

def ready_func(func):
    if hasattr(func, 'ready'):
        return func.ready()
    else:
        return True
One option is to implement function as a class instance:
class Function(object):
    def summarize(self): pass  # some relevant code here
    def __call__(self): pass   # and there
and use it later with
function = Function()
With __call__ magic method implemented, this function becomes a callable object.
For sure, you can assign attributes to functions, but it is rather obscure and counterintuitive:
>>> def summ(a): return sum(a)
...
>>> def function(a): return a
...
>>> function.sum=summ
>>> function.sum([1,2,3])
6

How to automatically expose and decorate function versions of methods in Python?

I would like to expose the methods of a class as functions (after decoration) in my local scope. For example if I had a class and decorator:
def some_decorator(f):
    # ...transform f into decorated_f...
    return decorated_f

class C(object):
    def __init__(self, x):
        self.x = x
    def f(self, y):
        "Some doc string"
        return self.x + y
    def g(self, y, z):
        "Some other doc string"
        return self.x + y + z
and if I didn't care about automating the process, I could add the following to my module:
@some_decorator
def f(C_instance, x):
    "Some doc string."
    return C_instance.f(x)

@some_decorator
def g(C_instance, x, y):
    "Some other doc string."
    return C_instance.g(x, y)
to the effect that the following evaluates to True:
c = C(0)
f(c,1) == c.f(1)
But I would like to be able to do this automatically. Something like:
my_funs = expose_methods(MyClass)
for funname, fun in my_funs.iteritems():
    locals()[funname] = some_decorator(fun)

foo = MyClass(data)
some_functionality(foo, *args) == foo.some_functionality(*args)
would do the trick (although it feels a little wrong declaring local variables this way). How can I do this in a way so that all the relevant attributes of the method correctly transform into the function versions? I would appreciate any suggestions.
P.S.
I am aware that I can decorate methods of class instances, but this is not really what I am after. It is more about (1) a preference for the function version syntax (2) the fact that my decorators make functions map over collections of objects in fancy and optimized ways. Getting behavior (2) by decorating methods would require my collections classes to inherit attributes from the objects they contain, which is orthogonal to the collection semantics.
Are you aware that you can use the unbound methods directly?
obj.foo(arg) is equivalent to ObjClass.foo(obj, arg)
class MyClass(object):
    def foo(self, arg):
        ...

obj = MyClass()
print obj.foo(3) == MyClass.foo(obj, 3)  # True
See also Class method differences in Python: bound, unbound and static and the documentation.
You say you have a preference for the function syntax. You could just define all your methods outside the class instead, and they would work exactly as you desire.
class C(object):
    def __init__(self, x):
        self.x = x

def f(c, y):
    "Some doc string"
    return c.x + y

def g(c, y, z):
    "Some other doc string"
    return c.x + y + z
If you want them on the class as well, you can always:
for func in f, g:
    setattr(C, func.__name__, func)
Or with locals introspection instead of function name introspection:
for name in 'f', 'g':
    setattr(C, name, locals()[name])
Or with no introspection, which is arguably a lot simpler and easier to manage unless you have quite a lot of these methods/functions:
C.f = f
C.g = g
This also avoids the potential issue mentioned in the comments on codeape's answer about Python checking that the first argument of an unbound method is an instance of the class.

Python extension methods

OK, in C# we have something like:
public static string Destroy(this string s) {
    return "";
}
So basically, when you have a string you can do:
str = "This is my string to be destroyed";
newstr = str.Destroy()
# instead of
newstr = Destroy(str)
Now this is cool because in my opinion it's more readable. Does Python have something similar? I mean instead of writing like this:
x = SomeClass()
div = x.getMyDiv()
span = x.FirstChild(x.FirstChild(div)) # so instead of this
I'd like to write:
span = div.FirstChild().FirstChild() # which is more readable to me
Any suggestion?
You can just modify the class directly, sometimes known as monkey patching.
def MyMethod(self):
    return self + self

MyClass.MyMethod = MyMethod
del(MyMethod)  # clean up namespace
I'm not 100% sure you can do this on a special class like str, but it's fine for your user-defined classes.
Update
You confirm in a comment my suspicion that this is not possible for a builtin like str. In which case I believe there is no analogue to C# extension methods for such classes.
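A quick check illustrates the difference (a sketch; Greeter and shout are made-up names):
class Greeter(object):
    pass

def shout(self):
    return "HELLO"

Greeter.shout = shout       # fine on a user-defined class
print(Greeter().shout())    # HELLO

try:
    str.shout = shout       # built-in types reject new attributes
except TypeError as e:
    print(e)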
Finally, the convenience of these methods, in both C# and Python, comes with an associated risk. Using these techniques can make code more complex to understand and maintain.
You can do what you have asked like the following:
def extension_method(self):
    # do stuff
    pass

SomeClass.extension_method = extension_method  # SomeClass being the class you want to extend
I would use the Adapter pattern here. So, let's say we have a Person class and in one specific place we would like to add some health-related methods.
from dataclasses import dataclass

@dataclass
class Person:
    name: str
    height: float  # in meters
    mass: float    # in kg

class PersonMedicalAdapter:
    person: Person

    def __init__(self, person: Person):
        self.person = person

    def __getattr__(self, item):
        return getattr(self.person, item)

    def get_body_mass_index(self) -> float:
        return self.person.mass / self.person.height ** 2

if __name__ == '__main__':
    person = Person('John', height=1.7, mass=76)
    person_adapter = PersonMedicalAdapter(person)

    print(person_adapter.name)                    # Call to Person object field
    print(person_adapter.get_body_mass_index())   # Call to wrapper object method
I consider it to be an easy-to-read, yet flexible and pythonic solution.
You can change the built-in classes by monkey-patching with the help of forbidden fruit
But installing forbidden fruit requires a C compiler and unrestricted environment so it probably will not work or needs hard effort to run on Google App Engine, Heroku, etc.
I changed the behaviour of the unicode class in Python 2.7 for the Turkish i/I uppercase/lowercase problem using this library.
# -*- coding: utf8 -*-
# Redesigned by @guneysus

import __builtin__
from forbiddenfruit import curse

lcase_table = tuple(u'abcçdefgğhıijklmnoöprsştuüvyz')
ucase_table = tuple(u'ABCÇDEFGĞHIİJKLMNOÖPRSŞTUÜVYZ')

def upper(data):
    data = data.replace('i', u'İ')
    data = data.replace(u'ı', u'I')
    result = ''
    for char in data:
        try:
            char_index = lcase_table.index(char)
            ucase_char = ucase_table[char_index]
        except:
            ucase_char = char
        result += ucase_char
    return result

curse(__builtin__.unicode, 'upper', upper)

class unicode_tr(unicode):
    """For backward compatibility"""
    def __init__(self, *args, **kwargs):
        super(unicode_tr, self).__init__(*args, **kwargs)

if __name__ == '__main__':
    print u'istanbul'.upper()
You can achieve this nicely with the following context manager that adds the method to the class or object inside the context block and removes it afterwards:
class extension_method:
    def __init__(self, obj, method):
        method_name = method.__name__
        setattr(obj, method_name, method)
        self.obj = obj
        self.method_name = method_name

    def __enter__(self):
        return self.obj

    def __exit__(self, type, value, traceback):
        # remove this if you want to keep the extension method after context exit
        delattr(self.obj, self.method_name)
Usage is as follows:
class C:
    pass

def get_class_name(self):
    return self.__class__.__name__

with extension_method(C, get_class_name):
    assert hasattr(C, 'get_class_name')  # the method is added to C
    c = C()
    print(c.get_class_name())  # prints 'C'

assert not hasattr(C, 'get_class_name')  # the method is gone from C
I'd like to think that extension methods in C# are pretty much the same as normal method call where you pass the instance then arguments and stuff.
instance.method(*args, **kwargs)
method(instance, *args, **kwargs) # pretty much the same as above, I don't see much benefit of it getting implemented in python.
After a week, I have a solution that is closest to what I was looking for. It consists of using getattr and __getattr__. Here is an example for those who are interested.
class myClass:
    def __init__(self):
        pass

    def __getattr__(self, attr):
        try:
            methodToCall = getattr(myClass, attr)
            return methodToCall(myClass(), self)
        except:
            pass

    def firstChild(self, node):
        # bla bla bla
        pass

    def lastChild(self, node):
        # bla bla bla
        pass

x = myClass()
div = x.getMYDiv()
y = div.firstChild.lastChild
I haven't tested this example; I just provided it to give an idea for anyone who might be interested. Hope that helps.
C# implemented extension methods because it lacks first class functions, Python has them and it is the preferred method for "wrapping" common functionality across disparate classes in Python.
There are good reasons to believe Python will never have extension methods, simply look at the available built-ins:
len(o) calls o.__len__
iter(o) calls o.__iter__
next(o) calls o.next
format(o, s) calls o.__format__(s)
Basically, Python likes functions.
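As a small illustration of that protocol-based style, implementing the dunder method is all it takes for the corresponding built-in to work (a minimal sketch; Playlist is a made-up name):
class Playlist(object):
    def __init__(self, tracks):
        self.tracks = tracks
    def __len__(self):
        return len(self.tracks)   # len(playlist) delegates here via __len__

print(len(Playlist(["a", "b", "c"])))  # 3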
