I have a dict that I want to convert to several different objects. For instance:
Currently Have
kwargs = {'this': 7, 'that': 'butterfly'}
And I want to convert it, maybe using something similar to function argument unpacking, but without actually calling a function, so that I end up with something like:
Desired
**kwargs
print(this) # Will print out `7`
print(that) # Will print out `'butterfly'`
I know that I cannot use the double-star [**] bare like I have shown. But is there something similar I can do?
Edit: More Specifics
Since folks were asking for details and warning that what I am trying to do is dangerous, I will give more context for my need.
I have a property in a class that I want to set using keyword arguments. Something like what is shown here, but unpacking kwargs instead of args.
As a functioning toy example, it looks like this:
class A(object):
    def __init__(self):
        self._idx = None

    @property
    def idx(self):
        return self._idx

    @idx.setter
    def idx(self, kwargs):
        print(kwargs)
        self._idx = {}
        for kw, val in kwargs.items():
            self._idx[kw] = val
This works. But now I am unit testing the function, and in the unit test I would like to be able to provide the specific key-value pairs as normal objects, so that I can use hypothesis and pytest's @pytest.mark.parametrize effectively.
In order to do this, it looks like I will need the "bare unpacking" like I described above.
Since these are all actually local variables within a method, I feel like @g.d.d.c's concerns are probably not significant. It sounds like his worry was about creating new global variables without knowing what they were.
Note: this approach is dangerous. You should not (in most circumstances) mess with locals() or globals(). However, what you want can be done, kind of:
>>> kwargs = {'this': 7, 'that': 'butterfly'}
>>>
>>> locals().update(kwargs)
>>> this
7
>>> that
'butterfly'
It still calls a function (the update on locals()), but it does give you names in your namespace. Be aware that this only works reliably at module scope, where locals() is the same dict as globals(); inside a CPython function, locals() returns a snapshot, and updates to it are silently discarded.
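A quick sketch of that caveat: the update is visible at module scope (where locals() is the same dict as globals()) but is discarded inside a CPython function. The attempt() helper below is purely illustrative:

```python
kwargs = {'this': 7, 'that': 'butterfly'}

# At module scope, locals() IS the module namespace, so this works:
locals().update(kwargs)
print(this)   # 7
print(that)   # butterfly

def attempt():
    # Inside a CPython function, locals() is only a snapshot; the update
    # is silently discarded and 'inner_name' never becomes a real variable.
    locals().update({'inner_name': 42})
    try:
        return inner_name          # NameError in CPython
    except NameError:
        return "discarded"

print(attempt())   # discarded
```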
You could do:
this, that = {"this": 7, "that": "butterfly"}.values()
But it is really specific. The question is why you have the values in a dict in the first place (also note that relying on .values() order like this is only safe on Python 3.7+, where dicts preserve insertion order). You seem to have them in kwargs; why can't you unpack them there?
If you don't know the keys in the dict how can you then use them in the code?
If you know the keys in the dict, why not simply expand it like so:
this, that = kwargs["this"], kwargs["that"]
And my final question: why not use kwargs["this"] directly where you need the variable, especially if you don't need it often?
Can you provide us with the real use case so that we can tell you how we would do that?
Related
I have seen lots of information on storing functions in variables for execution, but the functions shown are always of the:
def foo():
    print("Hello")

x = foo
x()
variety. Is it possible to store a function and its variables for later execution? I've currently worked around this by creating a tuple with the function as the first item and the variables as a nested tuple for the second item and then calling:
menu_input[KEY_D] = (action_system.drop_item, (character,))
...LATER IN CODE...
for key in current_input:
    if key in menu.menu_input:
        func, args = menu.menu_input[key]
        func(*args)
but I would really prefer to be able to store the function and its variables all together.
I would prefer to be able to store:
menu_input[KEY_D] = action_system.drop_item(character)
...LATER IN CODE...
for key in current_input:
    if key in menu.menu_input:
        menu.menu_input[key]()
because the function that handles menu input does not understand or care about the menu function itself, so it really doesn't need to see the input or care about the arguments passed.
Is this possible? If so, what am I missing? Sorry if this is obvious or obviously a bad idea--relatively new to python. If it is a terrible idea, I'd love to know why. I'm avoiding making this a class function of the menu item because I'm trying to work within an entity-component-system model where components store all data and the systems operate on them.
The functools.partial function is made for this purpose:
from functools import partial
menu_input[KEY_D] = partial(action_system.drop_item, character)
...
for key in current_input:
    if key in menu.menu_input:
        menu.menu_input[key]()
You could choose to use a lambda expression with no arguments.
def foo(value):
    print(value)

x = lambda: foo("Hello")
x()
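One caveat worth knowing with the lambda approach: lambdas capture variables by reference, so building callbacks in a loop picks up the loop variable's final value unless you freeze it with a default argument. A small sketch (the show/actions names are just for illustration):

```python
def show(item):
    return item

# Late binding: every lambda looks up 'name' at call time,
# so they all see the loop variable's final value.
actions = {}
for name in ["sword", "shield"]:
    actions[name] = lambda: show(name)

print(actions["sword"]())   # shield (probably not what you wanted)

# Freezing the value with a default argument fixes it:
fixed = {}
for name in ["sword", "shield"]:
    fixed[name] = lambda n=name: show(n)

print(fixed["sword"]())     # sword
```

functools.partial avoids this pitfall entirely, because it binds the argument value at creation time.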
I would like to know what the best practice is regarding default values for functions.
Let's say I have a function:
def my_function(x, **kwargs):
    kwargs_default = {'boolean_offset': False}
    kwargs_default.update(kwargs)
    if kwargs_default['boolean_offset']:
        x += 100
    return x
It is just a quick example and does not have any other meaning.
my_function(2) will return 2. my_function(2, boolean_offset=True) will return 102.
The point is that I have a variable called boolean_offset that is turned off by default, but may be turned on by the user.
In my real problem I have a function with many input variables. Often not all of these input variables are used and in most cases users want to use the default settings. To make the code more readable I would like to use *args and **kwargs. Further I would like the potentially used variables to have default values, which can be overwritten by the user.
Is the code in my example the best way to do this?
*args and **kwargs do not make the code more readable, on the contrary, they are pure hell.
Before, your editor could show you the function parameters, now there is just **kwargs - and there is no reliable way to find which parameters can or must be set.
If you have many parameters, you should either split the function or add a configuration class, which can be leveraged for sanity checks, too.
You can use **kwargs or *args. If you want to use **kwargs, I would add the default value of boolean_offset in the signature, like this:
def my_function(x, boolean_offset=False, **kwargs):
and I would use **kwargs to pass further arguments that have no default value.
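A minimal runnable sketch of that explicit-default signature, reusing the question's toy example:

```python
# boolean_offset gets its default in the signature; **kwargs still
# catches any extra keyword arguments without defaults.
def my_function(x, boolean_offset=False, **kwargs):
    if boolean_offset:
        x += 100
    return x

print(my_function(2))                        # 2
print(my_function(2, boolean_offset=True))   # 102
```

A bonus of this form is that editors and help() can now show boolean_offset as a real parameter.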
While I do agree that using *args / **kwargs is not the way to go, if you still wish to do something along these lines, you could define a default dict and work with it.
Something like this:
from collections import defaultdict

def my_function(x, **kwargs):
    my_default_dict = defaultdict(lambda: False)
    for key in kwargs:
        my_default_dict[key] = kwargs[key]
This is assuming you want all your default values to be a certain value (False in this case). You can then work with my_default_dict.
In any other case you will have to do it manually (Further strengthening the answer above me).
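If the defaults are not all the same value, collections.ChainMap from the standard library can layer the user's kwargs over a dict of explicit defaults. A minimal sketch reusing the question's toy example (the scale key is an invented second option, just for illustration):

```python
from collections import ChainMap

DEFAULTS = {'boolean_offset': False, 'scale': 1}

def my_function(x, **kwargs):
    # Lookups check kwargs first, then fall back to DEFAULTS.
    opts = ChainMap(kwargs, DEFAULTS)
    if opts['boolean_offset']:
        x += 100
    return x * opts['scale']

print(my_function(2))                        # 2
print(my_function(2, boolean_offset=True))   # 102
print(my_function(2, scale=3))               # 6
```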
I have a dictionary, and a print statement as follows:
d = {'ar':4, 'ma':4, 'family':pf.Normal()}
print(d)
Which gives me
{'ar': 4, 'ma': 4, 'family': <pyflux.families.normal.Normal object at 0x11b6bc198>}
Is there any way to clean up the value of the 'family' key? It's important that the call remains simply print(d), because it is also used to print other dictionaries that don't have this issue. Is this possible? Thanks for your time.
EDIT:
Thanks for the answers, guys; I would mark one as correct but I haven't tried them and can't confirm. I ended up creating another dictionary with the cleaned-up string as a key and the object as the value. It was a bit more work, but I did it before reading the responses, so I just stuck with it. Thanks nonetheless!
You are mistaken. You don't want to change what print(dict) outputs. That would require changing the way builtin dictionaries are printed. You want to add a custom __repr__() to your pf.Normal() object.
I believe pf.Normal() comes from the pyflux package, so I suggest seeing what data the class is supposed to hold, and pretty-printing it by inheriting from the class:
class CustomNormalObject(pf.Normal):
    def __repr__(self):
        # Add pretty printed data here
        pass
Or if you need to pass your own arguments into the custom class, you can use super():
class CustomNormalObject(pf.Normal):
    def __init__(self, myparm, *args, **kwargs):
        # If using Python 3, you can call super() without
        # passing in any arguments. This form is for Python 2
        # compatibility.
        super(CustomNormalObject, self).__init__(*args, **kwargs)
        self.myparm = myparm

    def __repr__(self):
        # Add pretty printed data here
        pass
Since pyflux is an external library, you should not edit it directly. An easy way to do what you want is to override the __repr__ method of the Normal class (note that a dict prints its values using each value's __repr__, not __str__):
pyflux.families.normal.Normal.__repr__ = lambda self: "Clean up value here."
Here I used a lambda function for illustration, but of course you can use a defined function.
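To see the effect without installing pyflux, here is a toy stand-in class (not the real pyflux Normal, and the mu/sigma fields are assumed for illustration). A dict renders its values with each value's __repr__, so overriding it is enough to clean up print(d):

```python
class Normal:
    """Stand-in for pyflux's Normal family object."""
    def __init__(self, mu=0.0, sigma=1.0):
        self.mu, self.sigma = mu, sigma

    def __repr__(self):
        # Without this, print(d) would show <__main__.Normal object at 0x...>
        return f"Normal(mu={self.mu}, sigma={self.sigma})"

d = {'ar': 4, 'ma': 4, 'family': Normal()}
print(d)   # {'ar': 4, 'ma': 4, 'family': Normal(mu=0.0, sigma=1.0)}
```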
There were several discussions on "returning multiple values in Python", e.g.
1,
2.
This is not the "multiple-value-return" pattern I'm trying to find here.
No matter what you use (tuple, list, dict, an object), it is still a single return value and you need to parse that return value (structure) somehow.
The real benefit of multiple return value is in the upgrade process. For example,
originally, you have
def func():
    return 1

print func() + func()
Then you decided that func() can return some extra information but you don't want to break previous code (or modify them one by one). It looks like
def func():
    return 1, "extra info"

value, extra = func()
print value           # 1 (expected)
print extra           # extra info (expected)
print func() + func() # (1, 'extra info', 1, 'extra info') (not expected; we want the previous behaviour, i.e. 2)
The previous code (func() + func()) is broken. You have to fix it.
I don't know whether I made the question clear... You can see the CLISP example. Is there an equivalent way to implement this pattern in Python?
EDIT: I put the above CLISP snippets online for your quick reference.
Let me put two use cases here for multiple return value pattern. Probably someone can have alternative solutions to the two cases:
Better support smooth upgrade. This is shown in the above example.
Have simpler client-side code. See the following alternative solutions I have so far. Using exceptions can make the upgrade process smooth, but it costs more code.
Current alternatives: (they are not "multi-value-return" constructions, but they can be engineering solutions that satisfy some of the points listed above)
tuple, list, dict, an object. As said, you need certain parsing on the client side, e.g. if ret.success == True: .... You need ret = func() before that. It's much cleaner to write if func() == True: ....
Use Exception. As is discussed in this thread, when the "False" case is rare, it's a nice solution. Even in this case, the client side code is still too heavy.
Use an arg, e.g. def func(main_arg, detail=[]) (beware: a mutable default argument like this is shared across calls; detail=None is safer). The detail can be a list, a dict, or even an object depending on your design. func() returns only the original simple value; details go to the detail argument. The problem is that the client needs to create a variable before the invocation in order to hold the details.
Use a "verbose" indicator, e.g. def func(main_arg, verbose=False). When verbose == False (default; and the way client is using func()), return original simple value. When verbose == True, return an object which contains simple value and the details.
Use a "version" indicator. Same as "verbose" but we extend the idea there. In this way, you can upgrade the returned object for multiple times.
Use global detail_msg. This is like the old C-style error_msg. In this way, functions can always return simple values. The client side can refer to detail_msg when necessary. One can put detail_msg in global scope, class scope, or object scope depending on the use cases.
Use generator. yield simple_return and then yield detailed_return. This solution is nice in the callee's side. However, the caller has to do something like func().next() and func().next().next(). You can wrap it with an object and override the __call__ to simplify it a bit, e.g. func()(), but it looks unnatural from the caller's side.
Use a wrapper class for the return value. Override the class's methods to mimic the behaviour of original simple return value. Put detailed data in the class. We have adopted this alternative in our project in dealing with bool return type. see the relevant commit: https://github.com/fqj1994/snsapi/commit/589f0097912782ca670568fe027830f21ed1f6fc (I don't have enough reputation to put more links in the post... -_-//)
Here are some solutions:
Based on @yupbank's answer, I formalized it into a decorator; see github.com/hupili/multiret
The 8th alternative above says we can wrap a class. This is the current engineering solution we have adopted. In order to wrap more complex return values, we may use a metaclass to generate the required wrapper class on demand. I have not tried it, but this sounds like a robust solution.
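For the 8th alternative, here is a minimal sketch of such a wrapper for a bool-like return value. Since bool cannot be subclassed in Python, int is used instead (True and False are just 1 and 0); the names are illustrative:

```python
class BoolWithDetail(int):
    """Behaves like the original bool/int return value, but carries .detail."""
    def __new__(cls, value, detail=None):
        obj = super().__new__(cls, bool(value))
        obj.detail = detail
        return obj

def func():
    return BoolWithDetail(True, detail="extra info")

# Old client code keeps working unchanged:
if func():
    print("truthiness check still works")
print(func() + func())    # 2, same arithmetic as with plain True values

# New client code can opt in to the detail:
print(func().detail)      # extra info
```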
Try inspect? I did some experimenting; it's not very elegant, but at least it's doable... and it works :)
import inspect
from functools import wraps
import re

def f1(*args):
    return 2

def f2(*args):
    return 3, 3

PATTERN = dict()
PATTERN[re.compile(r'(\w+) f()')] = f1
PATTERN[re.compile(r'(\w+), (\w+) = f()')] = f2

def execute_method_for(call_str):
    for regex, f in PATTERN.iteritems():
        if regex.findall(call_str):
            return f()

def multi(f1, f2):
    def liu(func):
        @wraps(func)
        def _(*args, **kwargs):
            frame, filename, line_number, function_name, lines, index = \
                inspect.getouterframes(inspect.currentframe())[1]
            call_str = lines[0].strip()
            return execute_method_for(call_str)
        return _
    return liu

@multi(f1, f2)
def f():
    return 1

if __name__ == '__main__':
    print f()
    a, b = f()
    print a, b
Your case does require editing the calling code. However, if you need a hack, you can use function attributes to return extra values without modifying the return value itself.
def attr_store(varname, value):
    def decorate(func):
        setattr(func, varname, value)
        return func
    return decorate

@attr_store('extra', None)
def func(input_str):
    func.extra = {'hello': input_str + " ,How r you?", 'num': 2}
    return 1

print(func("John") + func("Matt"))
print(func.extra)
Demo : http://codepad.org/0hJOVFcC
However, be aware that function attributes behave like static variables, so you need to assign values to them with care: appends and other in-place modifications will act on previously saved values.
The trick is to avoid applying the actual operation directly when you process the result; instead, pass the operation in as a parameter. For your case, you can use the following code:
def x():
    # return 1
    return 1, 'x' * 1

def f(op, f1, f2):
    print eval(str(f1) + op + str(f2))

f('+', x(), x())
If you want a generic solution for more complicated situations, you can extend the f function and specify the processing operation via the op parameter.
We're considering using Python (IronPython, but I don't think that's relevant) to provide a sort of 'macro' support for another application, which controls a piece of equipment.
We'd like to write fairly simple functions in Python, which take a few arguments - these would be things like times and temperatures and positions. Different functions would take different arguments, and the main application would contain user interface (something like a property grid) which allows the users to provide values for the Python function arguments.
So, for example function1 might take a time and a temperature, and function2 might take a position and a couple of times.
We'd like to be able to dynamically build the user interface from the Python code. Things which are easy to do are to find a list of functions in a module, and (using inspect.getargspec) to get a list of arguments to each function.
However, just a list of argument names is not really enough - ideally we'd like to be able to include some more information about each argument - for instance, it's 'type' (high-level type - time, temperature, etc, not language-level type), and perhaps a 'friendly name' or description.
So, the question is, what are good 'pythonic' ways of adding this sort of information to a function.
The two possibilities I have thought of are:
Use a strict naming convention for arguments, and then infer stuff about them from their names (fetched using getargspec)
Invent our own docstring meta-language (could be little more than CSV) and use the docstring for our metadata.
Because Python seems pretty popular for building scripting into large apps, I imagine this is a solved problem with some common conventions, but I haven't been able to find them.
Decorators are a good way to add metadata to functions. Add one that takes a list of types to append to a .params property or something:
def takes(*args):
    def _takes(fcn):
        fcn.params = args
        return fcn
    return _takes

@takes("time", "temp", "time")
def do_stuff(start_time, average_temp, stop_time):
    pass
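Hypothetical usage (the decorator is restated so the demo is self-contained): a UI builder can pair each argument name with its high-level type by zipping the function's signature with the .params attribute.

```python
import inspect

def takes(*args):
    def _takes(fcn):
        fcn.params = args
        return fcn
    return _takes

@takes("time", "temp", "time")
def do_stuff(start_time, average_temp, stop_time):
    pass

# Pair each parameter name with its declared high-level type.
names = list(inspect.signature(do_stuff).parameters)
print(list(zip(names, do_stuff.params)))
# [('start_time', 'time'), ('average_temp', 'temp'), ('stop_time', 'time')]
```

(inspect.signature is the modern replacement for the deprecated inspect.getargspec mentioned in the question.)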
I would use some kind of decorator:
class TypeProtector(object):
    def __init__(self, fun, types):
        self.fun, self.types = fun, types
    def __call__(self, *args, **kwargs):
        # validate args with self.types
        pass
        # run function
        return self.fun(*args, **kwargs)

def types(*args):
    def decorator(fun):
        # validate args count with fun parameters count
        pass
        # return covered function
        return TypeProtector(fun, args)
    return decorator

@types(Time, Temperature)
def myfunction(foo, bar):
    pass

myfunction('21:21', '32C')
print myfunction.types
The 'pythonic' way to do this is function annotations.
def DoSomething(critical_temp: "temperature", time: "time"):
    pass
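The annotations are then introspectable at runtime via the function's __annotations__ mapping, which is what a UI builder would read:

```python
def do_something(critical_temp: "temperature", time: "time"):
    pass

# Annotations are stored as a plain dict on the function object.
print(do_something.__annotations__)
# {'critical_temp': 'temperature', 'time': 'time'}
```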
For Python 2.x, I like to use the docstring:
def my_func(txt):
    """{
    "name": "Justin",
    "age": 15
    }"""
    pass
and it can be automatically assigned to the function object with this snippet:
for f in globals():
    if not hasattr(globals()[f], '__call__'):
        continue
    try:
        meta = json.loads(globals()[f].__doc__)
    except:
        continue
    for k, v in meta.items():
        setattr(globals()[f], k, v)
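A self-contained sketch of the same idea for a single function, without the globals() scan: the JSON in the docstring is parsed and attached to the function object as attributes.

```python
import json

def my_func(txt):
    """{
    "name": "Justin",
    "age": 15
    }"""
    pass

# Parse the docstring as JSON and attach each key as a function attribute.
meta = json.loads(my_func.__doc__)
for k, v in meta.items():
    setattr(my_func, k, v)

print(my_func.name, my_func.age)   # Justin 15
```

A downside of this approach is that the docstring is no longer useful as human-readable documentation; function annotations or a decorator keep the metadata and the docstring separate.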