How to capture the current variable value from the parent scope in a lambda - Python

I need to create a set of callbacks at runtime that use values from a dictionary (one key-value pair per callback). After the loop over the dict, the dict may be gone, but the callbacks should keep the correct values. (It reminds me of scopes and closures in JavaScript.)
Simple python script to reproduce my problem:
cmds = list()

def main():
    values = {'label1': 'value1', 'label2': 'value2'}
    for l, v in values.items():
        add_command(command = lambda: apply_new_item(l, v))

def apply_new_item(label, value):
    print('{}: {}'.format(label, value))

def add_command(command):
    cmds.append(command)

def call_actions():
    for command in cmds:
        command()
# Create callbacks
main()
# Check callbacks
call_actions()
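Running this as written demonstrates the late-binding problem: both lambdas close over the names l and v, which are looked up only when the callback is finally invoked, after the loop has finished. So (in Python 3.7+, where dicts preserve insertion order) the output is:

label2: value2
label2: value2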
In case I am approaching this problem in completely the wrong way, here is some context for where I need such behavior.
I have an application which uses TKinter to create its UI. The application should support loading custom entity configurations, each of which is itself a Python script. The requirement for such a script is that it defines one or more functions that return a Python object of a specific structure (which defines a custom entity). I load the user-supplied config file as a Python module and gather all functions with specific names as custom entity providers. Then I call each provider, convert the result to my internal entity configuration, and want to add a menu item at runtime that creates an instance from the corresponding entity config (instance creation involves some calls to random, so two entities created from the same config are always different).
So, I have a dict (name to internal config) with correct values, but when I add TKinter menu items in a loop over the dict, every command ends up pointing at the last values of the iteration variables:
custom_entities_menu = tk.Menu(entities_menu)
# ...
for name, cfg in custom_configs.items():
    custom_entities_menu.add_command(label = name, command = lambda: self.__create_new_entity(name, cfg))
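For completeness, the same default-argument trick discussed below applies directly to this Tkinter snippet; a minimal sketch, assuming the surrounding method and the question's self.__create_new_entity:

custom_entities_menu = tk.Menu(entities_menu)
# ...
for name, cfg in custom_configs.items():
    # Bind the current values of name and cfg as defaults, evaluated now:
    custom_entities_menu.add_command(
        label = name,
        command = lambda name=name, cfg=cfg: self.__create_new_entity(name, cfg))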

Either functools.partial or binding the arguments as lambda defaults will work. My understanding is that functools.partial is the more Pythonic option:
import functools

cmds = list()

def main():
    values = {'label1': 'value1', 'label2': 'value2'}
    for l, v in values.items():
        add_command(command = functools.partial(apply_new_item, l, v))

def apply_new_item(label, value):
    print('{}: {}'.format(label, value))

def add_command(command):
    cmds.append(command)

def call_actions():
    for command in cmds:
        command()
# Create callbacks
main()
# Check callbacks
call_actions()
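Each partial captures its arguments at creation time, so calling call_actions() now prints both pairs (in Python 3.7+, where dicts preserve insertion order):

label1: value1
label2: value2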
It also works with binding the arguments as lambda defaults:
cmds = list()

def main():
    values = {'label1': 'value1', 'label2': 'value2'}
    for l, v in values.items():
        add_command(command = lambda l=l, v=v: apply_new_item(l, v))

def apply_new_item(label, value):
    print('{}: {}'.format(label, value))

def add_command(command):
    cmds.append(command)

def call_actions():
    for command in cmds:
        command()
# Create callbacks
main()
# Check callbacks
call_actions()
Edit: added binding args in lambda
Edit: moved functools partial to the top because it's more pythonic

Related

How to avoid naming a variable "varname"?

I am working on a piece of code that contains a variable named varname. Here is a simplified version of it:
import logging  # assumed setup: the original snippet uses an undefined ``logger``
logger = logging.getLogger(__name__)

class Task:
    def __init__(self, context, taskname, varname='results'):
        self.context = context
        self.taskname = taskname
        self.varname = varname

    def execute(self):
        """
        Dispatch this task to the correct handler.
        """
        logger.info(f"Running task: {self.taskname}")
        try:
            handler = getattr(self, self.taskname)
        except AttributeError:
            raise RuntimeError(f'Task "{self.taskname}" is currently not implemented')
        # Assign output of task to ``varname`` (note: self.context, not the
        # bare name context, which is undefined here)
        self.context[self.varname] = handler()
        return self.context
The Task class is intended to be used by a task runner where callers can specify the name of the task and also a "varname" where the results will be stored. That way, subsequent tasks can refer to the results of earlier tasks. Multiple tasks are run like this:
execution_context = {}
todolist = [
    ('some_task', 'results'),
    ('some_other_task', None),
]
for task_name, varname in todolist:
    task = Task(execution_context, task_name, varname)
    execution_context.update(task.execute())
Here is my question: How can I avoid using a variable named varname in this situation? It seems silly to have a variable that holds the name of another variable (well, actually the name of a dictionary key). Of course, I can rename the variable, but in the past when I've been confronted with situations like this, there was usually a more elegant solution than "evaluating" variable names. Or is there perhaps nothing wrong with this approach?
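One common alternative (a sketch of the idea, not an answer from the original thread; names are illustrative): have execute() return its result and keep the mapping from names to results entirely in the runner, so Task never has to know where its output goes:

class Task:
    def __init__(self, context, taskname):
        self.context = context
        self.taskname = taskname

    def execute(self):
        """Dispatch this task to the correct handler and return its result."""
        try:
            handler = getattr(self, self.taskname)
        except AttributeError:
            raise RuntimeError(f'Task "{self.taskname}" is currently not implemented')
        return handler()

execution_context = {}
todolist = [
    ('some_task', 'results'),
    ('some_other_task', None),
]
for task_name, result_key in todolist:
    result = Task(execution_context, task_name).execute()
    if result_key is not None:
        execution_context[result_key] = result

The runner still uses a plain string key, but no variable has to hold the name of another variable; storage becomes the caller's concern.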

Is there a way in Python structlog to change the key from 'logger' to 'namespace'?

I am using structlog - http://www.structlog.org/en/stable/ - in my Python project. One of the processors in my configuration is
stdlib.add_logger_name
This adds the key logger to the event_dict. But I want to change the key string to something else, like namespace rather than logger. How can I do that?
I have checked the function
stdlib.add_logger_name(logger, method_name, event_dict)
but that function uses the hardcoded string logger:
event_dict["logger"] = logger.name
Currently, structlog.stdlib.add_logger_name() is 6 LoC, of which you most likely only need two:
def add_logger_name(logger, method_name, event_dict):
    """
    Add the logger name to the event dict.
    """
    record = event_dict.get("_record")
    if record is None:
        event_dict["logger"] = logger.name
    else:
        event_dict["logger"] = record.name
    return event_dict
Just copy and paste it and adapt it to your needs.
It wouldn't be worth it to add options to the processor and slow it down for everybody since it didn't come up until today, but structlog has been engineered purposefully to make such customizations easy.
Thanks to hynek's answer.
I solved this by adding a local function:
def add_logger_name(logger, method_name, event_dict):
    """
    Add the logger name to the event dict with namespace as the key, as per logging convention
    """
    record = event_dict.get("_record")
    if record is None:
        event_dict["namespace"] = logger.name
    else:
        event_dict["namespace"] = record.name
    return event_dict
Then set it in the processor chain:
processors=[add_logger_name, ...]
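For reference, wiring the customized processor into the configuration could look something like this (a minimal sketch, assuming the stdlib integration implied by stdlib.add_logger_name; the renderer choice is arbitrary):

import logging
import structlog

logging.basicConfig(level=logging.INFO)

structlog.configure(
    processors=[
        add_logger_name,  # the customized copy from above
        structlog.processors.JSONRenderer(),
    ],
    logger_factory=structlog.stdlib.LoggerFactory(),  # gives the processor a logger with a .name
)

log = structlog.get_logger("myapp")
log.info("started")  # emits roughly: {"event": "started", "namespace": "myapp"}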

Python: Using API Event Handlers with OOP

I am trying to build some UI panels for an Eclipse-based tool. The API for the tool has a mechanism for event handling based on decorators; for example, the following ties callbackOpen to the opening of a_panel_object:
@panelOpenHandler(a_panel_object)
def callbackOpen(event):
    print "opening HERE!!"
This works fine, but I wanted to wrap all of my event handlers and actual data processing for the panel behind a class. Ideally I would like to do something like:
class Test(object):
    def __init__(self):
        # initialise some data here
        pass

    @panelOpenHandler(a_panel_object)
    def callbackOpen(self, event):
        print "opening HERE!!"
But this doesn't work, I think probably because I am giving it a callback that takes both self and event, while the decorator supplies only event when it calls the function internally (note: I have no access to the source code of panelOpenHandler, and it is not very well documented... also, any error messages are getting swallowed by Eclipse / jython somewhere).
Is there any way that I can use a library decorator that provides one argument to the function being decorated on a function that takes more than one argument? Can I use lambdas in some way to bind the self argument and make it implicit?
I've tried to incorporate some variation of the approaches here and here, but I don't think that it's quite the same problem.
Your decorator apparently registers a function to be called later. As such, it's completely inappropriate for use on a class method, since it will have no idea which instance of the class to invoke the method on.
The only way you'd be able to do this would be to manually register a bound method from a particular class instance - this cannot be done using the decorator syntax. For example, put this somewhere after the definition of your class:
panelOpenHandler(context.controls.PerformanceTuneDemoPanel)(Test().callbackOpen)
I found a workaround for this problem. I'm not sure if there is a more elegant solution, but basically the problem boiled down to having to expose a callback function in global scope, and then decorate it with the API decorator using the f()(g) syntax.
Therefore, I wrote a base class (CallbackRegisterer), which offers the bindHandler() method to any derived classes - this method wraps a function and gives it a unique id per instance of CallbackRegisterer (I am opening a number of UI panels at the same time):
class CallbackRegisterer(object):
    __count = 0

    @classmethod
    def _instanceCounter(cls):
        CallbackRegisterer.__count += 1
        return CallbackRegisterer.__count

    def __init__(self):
        """
        Constructor
        @param eq_instance 0=playback 1=record 2=sidetone.
        """
        self._id = self._instanceCounter()
        print "instantiating #%d instance of %s" % (self._id, self._getClassName())

    def bindHandler(self, ui_element, callback, callback_args = [], handler_type = None,
                    initialize = False, forward_event_args = False, handler_id = None):
        proxy = lambda *args: self._handlerProxy(callback, args, callback_args, forward_event_args)
        handler_name = callback.__name__ + "_" + str(self._id)
        if handler_id is not None:
            handler_name += "_" + str(handler_id)
        globals()[handler_name] = proxy
        # print "handler_name: %s" % handler_name
        handler_type(ui_element)(proxy)
        if initialize:
            proxy()

    def _handlerProxy(self, callback, event_args, callback_args, forward_event_args):
        try:
            if forward_event_args:
                new_args = [x for x in event_args]
                new_args.extend(callback_args)
                callback(*new_args)
            else:
                callback(*callback_args)
        except:
            print "exception in callback???"
            self.log.exception('In event callback')
            raise

    def _getClassName(self):
        return self.__class__.__name__
I can then derive a class from this and pass in my callback, which will be correctly decorated using the API decorator:
class Panel(CallbackRegisterer):
    def __init__(self):
        super(Panel, self).__init__()
        # can bind from sub classes of Panel as well - different class name in handler_name
        self.bindHandler(self.controls.test_button, self._testButtonCB, handler_type = valueChangeHandler)
        # can bind multiple versions of same function for repeated ui elements, etc.
        for idx in range(0, 10):
            self.bindHandler(self.controls["check_box_" + str(idx)], self._testCheckBoxCB,
                             callback_args = [idx], handler_type = valueChangeHandler, handler_id = idx)

    def _testCheckBoxCB(self, *args):
        check_box_id = args[0]
        print "in _testCheckBoxCB #%d" % check_box_id

    def _testButtonCB(self):
        """
        Handler for test button
        """
        print "in _testButtonCB"

panel = Panel()
Note that I can also derive further sub-classes from Panel, and any callbacks bound there will get their own unique handler_name, based on the class name string.

Share plugin resources with implemented permission rules

I have multiple scripts that each export the same interface, and they're executed using execfile() in an insulated scope.
The thing is, I want them to share some resources so that each new script doesn't have to load them again from scratch, thus losing startup speed and using an unnecessary amount of RAM.
The scripts are in reality much better encapsulated and guarded from malicious plug-ins than presented in the example below; that's where my problems begin.
The thing is, I want the script that creates a resource to be able to fill it with data, remove data, or remove the resource, and of course access its data.
But other scripts shouldn't be able to change another script's resource, just read it. I want to be sure that newly installed plug-ins cannot interfere with already loaded and running ones by abusing shared resources.
Example:
class SharedResources:
    # Here should be a shared resource manager that I tried to write
    # but got stuck. That's why I ask this long and convoluted question!
    # Some beginning:
    def __init__ (self, owner):
        self.owner = owner

    def __call__ (self):
        # Here we should return some object that will do
        # required stuff. Read more for details.
        pass

class plugin (dict):
    def __init__ (self, filename):
        dict.__init__(self)
        # Here some checks and filling with secure versions of __builtins__ etc.
        # ...
        self["__name__"] = "__main__"
        self["__file__"] = filename
        # Add a shared resources manager to this plugin
        self["SharedResources"] = SharedResources(filename)
        # And then:
        execfile(filename, self, self)

    # Expose the plug-in interface to the outside world:
    def __getattr__ (self, a):
        return self[a]
    def __setattr__ (self, a, v):
        self[a] = v
    def __delattr__ (self, a):
        del self[a]
    # Note: I didn't use self.__dict__ because this makes encapsulation easier.
    # In future I won't use the object itself at all but a separate dict to do it. For now let it be.
----------------------------------------
# An example of two scripts that would use a shared resource and be run with plugins["name"] = plugin("<filename>"):
# The presented code is the same in both scripts; what comes after will differ.

def loadSomeResource ():
    # Do it here...
    return loadedresource

# Load this resource if it's not already in shared resources; if it isn't, add it:
shr = SharedResources() # This would be an instance allowing access to shared resources
if not shr.has_key("Default Resources"):
    shr.create("Default Resources")
if not shr["Default Resources"].has_key("SomeResource"):
    shr["Default Resources"].add("SomeResource", loadSomeResource())
resource = shr["Default Resources"]["SomeResource"]
# And then we use the resource variable normally; it can be any object.
# Here I used category "Default Resources" to add and/or retrieve a resource named "SomeResource".
# I want more categories so that plugins that deal with audio aren't mixed with plug-ins that deal with video, for instance. But this is not strictly needed.

# Here comes code specific to each plug-in that will use the shared resource named "SomeResource" from category "Default Resources".
...
# And end of plugin script!
----------------------------------------
# And then, in the main program, we load plug-ins:
import os
plugins = {} # Here we store all loaded plugins
for x in os.listdir("plugins"):
    plugins[x] = plugin(x)
Let's say that our two scripts are stored in the plugins directory and both use some WAVE files loaded into memory.
The plugin that loads first will load the WAVE and put it into RAM.
The other plugin will be able to access the already loaded WAVE but not replace or delete it, which would mess with the first plugin.
Now, I want each resource to have an owner, some id or filename of the plugin script, and for the resource to be writable only by its owner.
No tweaking or workarounds should enable one plugin to alter another's resources.
I almost did it and then got stuck, and my head is spinning with concepts that, when implemented, do the thing, but only partially.
This eats at me, so I cannot concentrate any more. Any suggestion is more than welcome!
Adding:
This is what I use now without any safety included:
# Dict that will hold a category of resources (should implement some security):
class ResourceCategory (dict):
    def __getattr__ (self, i): return self[i]
    def __setattr__ (self, i, v): self[i] = v
    def __delattr__ (self, i): del self[i]

SharedResources = {} # Resource pool

class ResourceManager:
    def __init__ (self, owner):
        self.owner = owner

    def add (self, category, name, value):
        if not SharedResources.has_key(category):
            SharedResources[category] = ResourceCategory()
        SharedResources[category][name] = value

    def get (self, category, name):
        return SharedResources[category][name]

    def rem (self, category, name=None):
        if name==None: del SharedResources[category]
        else: del SharedResources[category][name]

    def __call__ (self, category):
        if not SharedResources.has_key(category):
            SharedResources[category] = ResourceCategory()
        return SharedResources[category]

    __getattr__ = __getitem__ = __call__
    # When securing, this must not be left like this; it is insecure and can provide a way back into the SharedResources pool:
    has_category = has_key = SharedResources.has_key
Now a plugin capsule:
class plugin(dict):
    def __init__ (self, path, owner):
        dict.__init__(self)
        self["__name__"] = "__main__"
        # etc. etc.
        # And when adding the resource manager to the plugin, register this plugin as an owner
        self["SharedResources"] = ResourceManager(owner)
        # ...
        execfile(path, self, self)
        # ...
Example of a plugin script:
#-----------------------------------
# Get the category we want (using __call__()). Note: if a category doesn't exist, it is created automatically.
AudioResource = SharedResources("Audio")
# Use an MP3 resource (let's say a bytestring):
if not AudioResource.has_key("Beep"):
    f = open("./sounds/beep.mp3", "rb")
    AudioResource.Beep = f.read()
    f.close()
# Take a reference out for fast access and a nicer look:
beep = AudioResource.Beep # BTW, immutables don't propagate as references by themselves, do they? A copy would be returned, so RAM usage would increase instead. Immutables should be wrapped in a composite data type.
This works perfectly but, as I said, messing with resources is far too easy here.
I would like an instance of ResourceManager() to be in charge of deciding who gets which version of the stored data.
So, my general approach would be this.
Have a central shared resource pool. Access through this pool would be read-only for everybody. Wrap all data in the shared pool so that no one "playing by the rules" can edit anything in it.
Each agent (plugin) maintains knowledge of what it "owns" at the time it loads it. It keeps a read/write reference for itself, and registers a reference to the resource in the centralized read-only pool.
When a plugin is loaded, it gets a reference to the central, read-only pool with which it can register new resources.
So, only addressing the issue of python native data structures (and not instances of custom classes), a fairly locked down system of read-only implementations is as follows. Note that the tricks that are used to lock them down are the same tricks that someone could use to get around the locks, so the sandboxing is very weak if someone with a little python knowledge is actively trying to break it.
import collections as _col  # note: on Python 3.3+ the ABCs used below also live in
                            # collections.abc, and on 3.10+ they are *only* there
import sys

if sys.version_info >= (3, 0):
    immutable_scalar_types = (bytes, complex, float, int, str)
else:
    immutable_scalar_types = (basestring, complex, float, int, long)

# calling this will circumvent any control an object has on its own attribute lookup
getattribute = object.__getattribute__

# types that will be safe to return without wrapping them in a proxy
immutable_safe = immutable_scalar_types

def add_immutable_safe(cls):
    # decorator for adding a new class to the immutable_safe collection
    # Note: only ImmutableProxyContainer uses it in this initial
    # implementation
    global immutable_safe
    immutable_safe += (cls,)
    return cls
def get_proxied(proxy):
    # circumvent normal object attribute lookup
    return getattribute(proxy, "_proxied")

def set_proxied(proxy, proxied):
    # circumvent normal object attribute setting
    object.__setattr__(proxy, "_proxied", proxied)

def immutable_proxy_for(value):
    # Proxy for known container types, reject all others
    if isinstance(value, _col.Sequence):
        return ImmutableProxySequence(value)
    elif isinstance(value, _col.Mapping):
        return ImmutableProxyMapping(value)
    elif isinstance(value, _col.Set):
        return ImmutableProxySet(value)
    else:
        raise NotImplementedError(
            "Return type {} from an ImmutableProxyContainer not supported".format(
                type(value)))
@add_immutable_safe
class ImmutableProxyContainer(object):
    # the only names that are allowed to be looked up on an instance through
    # normal attribute lookup
    _allowed_getattr_fields = ()

    def __init__(self, proxied):
        set_proxied(self, proxied)

    def __setattr__(self, name, value):
        # never allow attribute setting through normal mechanism
        raise AttributeError(
            "Cannot set attributes on an ImmutableProxyContainer")

    def __getattribute__(self, name):
        # enforce attribute lookup policy
        allowed_fields = getattribute(self, "_allowed_getattr_fields")
        if name in allowed_fields:
            return getattribute(self, name)
        raise AttributeError(
            "Cannot get attribute {} on an ImmutableProxyContainer".format(name))

    def __repr__(self):
        proxied = get_proxied(self)
        return "{}({})".format(type(self).__name__, repr(proxied))

    def __len__(self):
        # works for all currently supported subclasses
        return len(get_proxied(self))

    def __hash__(self):
        # will error out if proxied object is unhashable
        proxied = getattribute(self, "_proxied")
        return hash(proxied)

    def __eq__(self, other):
        proxied = get_proxied(self)
        if isinstance(other, ImmutableProxyContainer):
            other = get_proxied(other)
        return proxied == other
class ImmutableProxySequence(ImmutableProxyContainer, _col.Sequence):
    _allowed_getattr_fields = ("count", "index")

    def __getitem__(self, index):
        proxied = get_proxied(self)
        value = proxied[index]
        if isinstance(value, immutable_safe):
            return value
        return immutable_proxy_for(value)

class ImmutableProxyMapping(ImmutableProxyContainer, _col.Mapping):
    _allowed_getattr_fields = ("get", "keys", "values", "items")

    def __getitem__(self, key):
        proxied = get_proxied(self)
        value = proxied[key]
        if isinstance(value, immutable_safe):
            return value
        return immutable_proxy_for(value)

    def __iter__(self):
        proxied = get_proxied(self)
        for key in proxied:
            if not isinstance(key, immutable_scalar_types):
                # If mutable keys are used, returning them could be dangerous.
                # If the owner never puts a mutable key in, then integrity should
                # be okay. tuples and frozensets should be okay as keys, but
                # are not supported in this implementation for simplicity.
                raise NotImplementedError(
                    "keys of type {} not supported in "
                    "ImmutableProxyMapping".format(type(key)))
            yield key
class ImmutableProxySet(ImmutableProxyContainer, _col.Set):
    _allowed_getattr_fields = ("isdisjoint", "_from_iterable")

    def __contains__(self, value):
        return value in get_proxied(self)

    def __iter__(self):
        proxied = get_proxied(self)
        for value in proxied:
            if isinstance(value, immutable_safe):
                yield value
            else:
                yield immutable_proxy_for(value)

    @classmethod
    def _from_iterable(cls, it):
        return set(it)
NOTE: this is only tested on Python 3.4, but I tried to write it to be compatible with both Python 2 and 3.
Make the root of the shared resources a dictionary. Give an ImmutableProxyMapping of that dictionary to the plugins.
private_shared_root = {}
public_shared_root = ImmutableProxyMapping(private_shared_root)
Create an API where the plugins can register new resources to the public_shared_root, probably on a first-come-first-served basis (if it's already there, you can't register it). Pre-populate private_shared_root with any containers you know you're going to need, or any data you want to share with all plugins but you know you want to be read-only.
It might be convenient if the convention for the keys in the shared root mapping were all strings, like file-system paths (/home/dalen/local/python) or dotted paths like python library objects (os.path.expanduser). That way collision detection is immediate and trivial/obvious if plugins try to add the same resource to the pool.
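A first-come-first-served registration API on top of this could be as small as the following sketch (the function and exception names are illustrative, not from the answer above):

class ResourceExistsError(Exception):
    pass

def register_resource(path, value):
    """Register a resource under a unique string key, e.g. 'audio/beep'.

    The first plugin to claim a key wins; later attempts fail loudly,
    which makes collisions between plugins immediately visible.
    """
    if path in private_shared_root:
        raise ResourceExistsError("resource already registered: " + path)
    private_shared_root[path] = value

# A plugin would then do something like:
# register_resource("audio/beep", open("./sounds/beep.mp3", "rb").read())
# beep = public_shared_root["audio/beep"]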

difference between Class().property and self.class.property

The following code is my original attempt at getting a GUI app to update its font etc., when the user changes it in the config.ini settings:
def on_config_change(self, config, section, key, value):
    """
    sets font, size, colour etc..
    when user changes in settings.
    """
    if config is self.config:
        token = (section, key)
        if token == ('Font', 'button_font'):
            print('Our button font has been changed to', value)
            GetInformation().lay_button.font_size = str(value)
            GetInformation().bet_button.font_size = str(value)

def build(self):
    self.config.write()
    return GetInformation()
My code updated the config but the screen never updated without restarting the app.
The following code works:
def on_config_change(self, config, section, key, value):
    """
    sets font, size, colour etc..
    when user changes in settings.
    """
    if config is self.config:
        token = (section, key)
        if token == ('Font', 'button_font'):
            print('Our button font has been changed to', value)
            self.getInformation.lay_button.font_size = str(value)
            self.getInformation.bet_button.font_size = str(value)

def build(self):
    self.config.write()
    self.getInformation = GetInformation()
    return self.getInformation
What is the difference between calling GetInformation().lay_button.font_size
and self.getInformation.lay_button.font_size?
Unless you design your code specifically for it (typically a Singleton pattern), Class().method() creates a new object instantiated from Class and calls the method method on it. The object is then destroyed, as there is no receiving variable specified.
self.object.method(), on the other hand, calls the method method on the existing object self.object. This object is persistent, as it is saved as a member of your top-level class (self).
In your first example, you are actually operating on three different objects. In the method on_config_change, the two objects are immediately destroyed. In the second example, all the calls apply to the same object, which then keeps its modified properties.
Setting GetInformation().lay_button.font_size changes the lay_button's font size for a brand new GetInformation that you just created and didn't hook into anything.
Setting self.getInformation.lay_button.font_size changes the lay_button's font size for the current GetInformation that on_config_change fired for. It's that GetInformation that's hooked into your system so it's that GetInformation you need to do your work on.
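A stripped-down illustration of the difference (hypothetical Widget class, not from the question's app):

class Widget:
    def __init__(self):
        self.font_size = "12"

# New throwaway instance: the change is lost as soon as the line ends.
Widget().font_size = "20"
print(Widget().font_size)   # 12 -- this is yet another fresh instance

# Stored instance: the change persists on the object the app actually uses.
w = Widget()
w.font_size = "20"
print(w.font_size)          # 20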
