Calling a function in a dictionary in parallel with obtaining a value - python

I want to pass a key to a dictionary, get the value attached to that key, and also call a function attached to the same key if a function is present. I'm able to create something like d={'some_key':('value', func())}, but that calls the function when the dictionary is initialized. I know I can store a tuple with the value and the function, check the length of the tuple, and call the function if the length equals 2, but isn't there a more elegant way? Can I somehow make the function run only when I look up a certain key, without any other syntax? Just write d[some_key], get the corresponding value, and have the function execute without any additional brackets.

You'd need to subclass dict to override the __getitem__ method:
class MyDict(dict):
    def __getitem__(self, index):
        a, b = super().__getitem__(index)
        return a, b()

def myfunc():
    return "Hello world"

mydict = MyDict({'a': (100, myfunc)})
print(mydict['a'])
outputs
(100, 'Hello world')
If you want to call the function and return the value:
class MyDict(dict):
    def __getitem__(self, index):
        a, b = super().__getitem__(index)
        b()
        return a
Note that this is very unexpected behavior, so make sure your users know what will happen when they use this dictionary.
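If some keys have a plain value with no function attached, a minimal sketch (MaybeCallDict is a hypothetical name; entries are assumed to be either a bare value or a (value, function) tuple) could check for a callable before invoking it:
class MaybeCallDict(dict):
    def __getitem__(self, key):
        entry = super().__getitem__(key)
        # Entries may be a bare value or a (value, function) tuple.
        if isinstance(entry, tuple) and len(entry) == 2 and callable(entry[1]):
            value, func = entry
            func()          # call the attached function for its side effect
            return value
        return entry

d = MaybeCallDict({'some_key': ('value', myfunc), 'plain': 42})
print(d['some_key'])  # calls myfunc and prints 'value'
print(d['plain'])     # prints 42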

Related

How to create a function that takes dictionary inputs?

I am using a module for a project. I have to pass a function to the module, and the module does something like:
class module:
    def __init__(self, function, dictionary):
        # dictionary is {'x':2, 'y':4, 'z':23}
        function(**dictionary)
And my function is something like:
def function(*foo):
    return sum(foo)
The problem is that the module needs named variables and will pass them to the function as an unpacked dictionary, and the number of elements in the dictionary can vary, so I cannot pre-write the function as def function(x,y,z): return sum(x,y,z); as written, my function raises an error. I do not wish to modify the module, because then the code will not be universal. How can I solve this problem by changing only my code?
EDIT: I need foo as a list to use in the function
The module that you can't change is calling your function with:
function(**dictionary)
You won't be able to write your function so that its argument is a list — it's not being passed a list. You can accept the keyword arguments as a dict and then easily make a list from them. Your function just needs to be prepared to be called that way:
def f(**foo):
This leads to:
class module:
    def __init__(self, function, dictionary):
        # dictionary is {'x':2, 'y':4, 'z':23}
        function(**dictionary)

def f(**foo):
    print(sum(foo.values()))

module(f, {'x':2, 'y':4, 'z':23})
# prints 29 as expected
def function(*args, **kwargs):
    if args:
        return sum(args)
    return sum(kwargs.values())
A double * unpacks a dictionary into keyword arguments, and a single * collects any remaining positional arguments.
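A quick usage sketch of the version above, just to show both call styles:
function(1, 2, 3)           # positional arguments: returns 6
function(x=2, y=4, z=23)    # keyword arguments, as the module calls it: returns 29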
The number and type of arguments are determined by the code of the module's __init__.
In your case that is a single argument of type dictionary, so you always have to pass a function f(x) where x is a dictionary.
It is therefore the function f that has to deal with the argument it receives.
E.g.
def fsum(x): return sum(x.values())
...
__init__(fsum, {'a': 1, 'b': 2, 'c': 3})
It seems you want the sum of the values:
def __init__(self, function, dictionary):
    # dictionary is {'x':2, 'y':4, 'z':23}
    function(dictionary.values())
dictionary.values() will give the values [2, 4, 23] for your example (a list in Python 2, a view object in Python 3).
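Since the EDIT says foo is needed as a list, and the module actually calls function(**dictionary), a minimal sketch that keeps the module untouched would be to accept keyword arguments and convert them yourself:
def f(**foo):
    values = list(foo.values())   # foo as a list, per the question's EDIT
    return sum(values)

module(f, {'x': 2, 'y': 4, 'z': 23})  # f receives x, y, z as keywords and sums them to 29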

Return values automatically in python class

I am a new Python user, so this might be very silly, but what is the best way to run a class automatically (one which has several functions inside) and return the result for a given value? For example:
class MyClass():
    def __init__(self,x):
        self.x=x
    def funct1(self):
        return (self.x)**2
        ##or any other function
    def funct2(self,y):
        return y/100.0
        ##or any other function
    def wrapper(self):
        y=self.funct1()
        z=self.funct2(y)
        ##or other combination of functions
        return z
Right now to run this, I am using:
run=MyClass(5)
run.wrapper()
But I want to run like this:
MyClass(5)
which will return a value that can be saved in a variable without needing to call the wrapper function.
You can create a functor as below:
class MyClass(object):
    def __init__(self,x):
        self.x=x
    def funct1(self):
        return (self.x)**2
        ##or any other function
    def funct2(self,y):
        return y/100.0
        ##or any other function
    def __call__(self):
        y=self.funct1()
        z=self.funct2(y)
        ##or other combination of functions
        return z
The call to this functor will look as follows:
MyClass(5)()  # the second () calls __call__; the first () calls the constructor
Hope this will help you.
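For the example functions above, this returns 0.25 (funct1 gives 5**2 = 25, funct2 divides by 100.0):
result = MyClass(5)()   # result == 0.25
print(result)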
So when you write MyClass(5), you're instantiating a new instance of that class, MyClass, and so the short answer is no: you do need the wrapper, because instantiating the class will necessarily return the object and not a particular value.
If you want to just return a value based on the input (say 5) consider using a function instead.
The function would be like:
def my_func(x):
    y = x**2
    z = y/100.0
    return z
There are lots of reasons to use classes (see this answer: https://stackoverflow.com/a/33072722/4443226), but if you're just concerned with the output of an operation/equation/function then I'd stick with functions.
The __init__ method should return None.
documentation link
"no non-None value may be returned by __init__()"
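A quick illustration of that rule (the exact error message may vary by Python version):
class Bad:
    def __init__(self, x):
        return x   # returning a non-None value from __init__

Bad(5)  # raises TypeError: __init__() should return None, not 'int'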

Wrap callback function to include extra argument when caller verifies exact callback signature

I'm trying to bind multiple callback functions across multiple properties with code that looks something like:
for key in keys:
    def callback(self, value):
        #Do stuff...
        return None
    doSomething(callback)
This works because the calling code (that calls callback) expects exactly two parameters and a callback that returns None. The issue is that I now want to wrap the callback so that I can also pass the key in, something like:
for key in keys:
    def wrappedCallback(self, value):
        #How do I get key in here???
        realCallback(self, key, value)
        return None
    doSomething(wrappedCallback)
But I have no idea how to get key inside of wrappedCallback. I can't add an extra default parameter like:
...
#This throws with "expected a function taking 2 arguments, not 3"
def wrappedCallback(self, value, key=key):
    realCallback(self, key, value)
...
because this will throw an error from the caller (it's C code that expects a very strict callback). I've also tried functools.partial but then I get expected a function, not a functools.partial
How do I wrap the passed callback function to include the external parameter key (from the for loop) while keeping the exact signature described?
I would create a callback generator that takes your parameters (such as key) and creates your callbacks.
>>> def callback_generator(key):
...     def callback(self, value):
...         do_something_with(key, value)
...     return callback
...
>>> for key in keys:
...     doSomething(callback_generator(key))
...
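A small hypothetical demo (assuming do_something_with just prints its arguments): each generated callback captures its own key, which avoids the late-binding surprise you get when defining the callback directly inside the loop.
cb_a = callback_generator('a')
cb_b = callback_generator('b')
cb_a(None, 1)   # calls do_something_with('a', 1)
cb_b(None, 2)   # calls do_something_with('b', 2)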
Minutes after posting, I figured out that I can create a function inside the for loop to get a new stack frame. This unfortunately does not seem at all Pythonic...
for key in keys:
    def dummyForStackFrame(key):  #extra function to get a new stack frame holding key
        def wrappedCallback(self, value):
            realCallback(self, key, value)
            return None
        doSomething(wrappedCallback)
    dummyForStackFrame(key)

Use __get__, __set__ with dictionary item?

Is there a way to make a dictionary of functions that use set and get statements and then use them as set and get functions?
class thing(object):
    def __init__(self, thingy):
        self.thingy = thingy
    def __get__(self, instance, owner):
        return self.thingy
    def __set__(self, instance, value):
        self.thingy += value

theDict = {"bob":thing(5), "suzy":thing(2)}
theDict["bob"] = 10
The wanted result is that 10 goes into the set function and adds to the existing 5:
print theDict["bob"]
>>> 15
The actual result is that the dictionary replaces the entry with the numeric value:
print theDict["bob"]
>>> 10
The reason I can't just make a function like theDict["bob"].add(10) is that this builds on existing, already well-working code that uses the set and get descriptors. The case I'm working with is an edge case, and it wouldn't make sense to reprogram everything for this one case.
I need some means of storing instances of this set/get thingy that is accessible but doesn't create a layer of depth that might break existing references.
Please don't ask for actual code. It'd take pages of code to encapsulate the problem.
You could do it if you can (also) use a specialized version of the dictionary which is aware of your Thing class and handles it separately:
class Thing(object):
    def __init__(self, thingy):
        self._thingy = thingy
    def _get_thingy(self):
        return self._thingy
    def _set_thingy(self, value):
        self._thingy += value
    thingy = property(_get_thingy, _set_thingy, None, "I'm a 'thingy' property.")

class ThingDict(dict):
    def __getitem__(self, key):
        if key in self and isinstance(dict.__getitem__(self, key), Thing):
            return dict.__getitem__(self, key).thingy
        else:
            return dict.__getitem__(self, key)
    def __setitem__(self, key, value):
        if key in self and isinstance(dict.__getitem__(self, key), Thing):
            dict.__getitem__(self, key).thingy = value
        else:
            dict.__setitem__(self, key, value)

theDict = ThingDict({"bob": Thing(5), "suzy": Thing(2), "don": 42})
print(theDict["bob"]) # --> 5
theDict["bob"] = 10
print(theDict["bob"]) # --> 15
# non-Thing value
print(theDict["don"]) # --> 42
theDict["don"] = 10
print(theDict["don"]) # --> 10
No, because to execute theDict["bob"] = 10, the Python runtime doesn't call any methods at all of the previous value of theDict["bob"]. It's not like when myObject.mydescriptor = 10 calls the descriptor setter.
Well, maybe it calls __del__ on the previous value if the refcount hits zero, but let's not go there!
If you want to do something like this then you need to change the way the dictionary works, not its contents. For example, you could subclass dict (with the usual warnings that you're Evil, Bad and Wrong to write a non-Liskov-substituting derived class). Or you could implement a collections.MutableMapping from scratch. But I don't think there's any way to hijack the normal operation of dict using a special value stored in it.
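A minimal sketch of the MutableMapping route, assuming the Thing class from the answer above (ThingMapping is a hypothetical name; in Python 3 the base class lives in collections.abc):
import collections

class ThingMapping(collections.MutableMapping):
    def __init__(self, *args, **kwargs):
        self._store = dict(*args, **kwargs)
    def __getitem__(self, key):
        item = self._store[key]
        return item.thingy if isinstance(item, Thing) else item
    def __setitem__(self, key, value):
        existing = self._store.get(key)
        if isinstance(existing, Thing):
            existing.thingy = value   # routes through the Thing property, so += happens
        else:
            self._store[key] = value
    def __delitem__(self, key):
        del self._store[key]
    def __iter__(self):
        return iter(self._store)
    def __len__(self):
        return len(self._store)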
theDict["bob"] = 10 is just assign 10 to the key bob for theDict.
I think you should know about the magic methods __get__ and __set__ first. Go to: https://docs.python.org/2.7/howto/descriptor.html Using a class might be easier than dict.

Can I use a dynamic mapping to unpack keyword arguments in Python?

Long story short, I want to call format with arbitrarily named arguments, which will perform a lookup.
'{Thing1} and {other_thing}'.format(**my_mapping)
I've tried implementing my_mapping like this:
class Mapping(object):
    def __getitem__(self, key):
        return 'Proxied: %s' % key

my_mapping = Mapping()
Which works as expected when calling my_mapping['anything']. But when passed to format() as shown above I get:
TypeError: format() argument after ** must be a mapping, not Mapping
I tried subclassing dict instead of object, but now calling format() as shown raises KeyError. I even implemented __contains__ as return True, but still KeyError.
So it seems that ** is not just calling __getitem__ on the object passed in. Does anyone know how to get around this?
In Python 2 you can do this using the string.Formatter class.
>>> class Mapping(object):
...     def __getitem__(self, key):
...         return 'Proxied: %s' % key
...
>>> my_mapping = Mapping()
>>> from string import Formatter
>>> Formatter().vformat('{Thing1} and {other_thing}', (), my_mapping)
'Proxied: Thing1 and Proxied: other_thing'
>>>
vformat takes 3 args: the format string, sequence of positional fields and mapping of keyword fields. Since positional fields weren't needed, I used an empty tuple ().
Python 3.2+:
'{Thing1} and {other_thing}'.format_map(my_mapping)
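With the Mapping class from the question, this should give the same result as the vformat example above (a sketch; format_map only needs __getitem__):
>>> '{Thing1} and {other_thing}'.format_map(my_mapping)
'Proxied: Thing1 and Proxied: other_thing'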
This may be a bit of necromancy, but I recently came across this problem, and this SO question was the first result. I wasn't happy with using string.Formatter, and wanted it to Just Work (TM).
If you implement a keys() function for your class as well as __getitem__(), then **my_mapping will work.
I.e:
class Mapping(object):
    def __getitem__(self, key):
        return 'Proxied: %s' % key
    def keys(self):
        # 'proxy' stands for whatever underlying object knows the available keys
        return proxy.keys()
where
>>> my_mapping = Mapping()
>>> my_mapping.keys()
['Thing1','other_thing',...,'Thing2']
will result in a successful mapping that will work with .format.
Apparently (though I haven't actually looked at the source for str.format), it uses keys() to get a list of keys, maps the identifiers given in the string to those keys, then uses __getitem__() to retrieve the specified values.
Hope this helps.
EDIT:
If you are in #aaron-mcmillin's position, and the key set is large, then a possible approach is to not generate a full set of keys, but generate a smaller subset. This only works of course if you know you will only need to format a small subset.
I.e:
class Mapping(object):
    ...
    def keys(self):
        return ['Thing1','other_thing', 'Thing2']
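Putting that together, a self-contained sketch of the keys() + __getitem__() approach (the key names are just the ones used in the question):
class Mapping(object):
    def __getitem__(self, key):
        return 'Proxied: %s' % key
    def keys(self):
        # only the keys we expect to format; ** unpacking looks each of these up
        return ['Thing1', 'other_thing']

my_mapping = Mapping()
print('{Thing1} and {other_thing}'.format(**my_mapping))
# Proxied: Thing1 and Proxied: other_thing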
This is the best I could come up with:
If you have a custom mapping object that you want to pass to a func taking keyword arguments, then it must have a set of keys (which may be dynamically generated, but it must be a finite set), and it must be able to map those keys somehow. So, if you can assume that it will have an __iter__ to get the keys, and a __getitem__ that will succeed for each of those keys, e.g.:
class Me(object):
    def __init__(self):
        pass
    def __iter__(self):
        return iter(['a', 'b', 'c'])
    def __getitem__(self, key):
        return 12
Say the function is:
def myfunc(**kwargs):
    print kwargs, type(kwargs)
Then we can pass it along by making a dict:
m = Me()
myfunc(**dict((k, m[k]) for k in m))
Resulting in:
{'a': 12, 'c': 12, 'b': 12} <type 'dict'>
Apparently this must be the way it's done... even if you pass in an object derived from dict, the function will still have a dict for the kwargs:
class Me(dict): pass

m = Me()
print type(m) #prints <class '__main__.Me'>

def myfunc(**kwargs):
    print type(kwargs)

myfunc(**m) #prints <type 'dict'>
Since it sounds like you wanted to do something like return a value based on what the key was, without having a particular set of keys in mind, it seems like you can't use the format function.
