So, I have a web application where there's a certain database object with attributes I would like to cache in a redis store. Relatively simple to do manually, with something like below:
db_object.update({<attribute>: <value>})
redis.set(db_object.id, <value>)
The issue here is that it's an attribute that is changed in many places throughout the codebase. Doesn't mean this approach won't work, it just means that it makes for code that is very repetitive. I would much rather just have a wrapper for the cache that I can access directly whenever I need to. This means that any time I change the particular attribute I'm interested in I would like to update my redis store, theoretically like so:
def __setattr__(self, name, value):
    self.__dict__[name] = value
    if name == <attribute>:
        redis.set(self.id, value)
which would solve all my problems. The only issue is that, as detailed here I cannot directly modify the __dict__ in mapped objects. How can I achieve the same effect?
Found a nice way to approach this on another question: you can just call the super method instead of directly altering __dict__.
The method below has the desired effect:
def __setattr__(self, name, value):
    if name == <attribute>:
        redis.set(self.id, value)
    super(Dataset, self).__setattr__(name, value)
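For reference, a minimal sketch of how this can sit on a mapped class (assuming SQLAlchemy 1.4+ and a local Redis server; the table, the label column, and the caching rule are hypothetical placeholders, not from the question):

import redis
from sqlalchemy import Column, Integer, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()
cache = redis.Redis()   # hypothetical connection settings

class Dataset(Base):
    __tablename__ = 'datasets'              # hypothetical table and column names
    id = Column(Integer, primary_key=True)
    label = Column(String)                  # the attribute we want mirrored in Redis

    def __setattr__(self, name, value):
        if name == 'label' and self.id is not None:
            cache.set(self.id, value)       # keep the cache in sync
        super(Dataset, self).__setattr__(name, value)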
I am looking for someone to explain the basics of how to use, and not use setattr().
My problem arose trying to use one class method/function to return data that is then put in another method/function. Perhaps a simpler approach would be much better in this case, but I'm trying to understand how classes work/are used. This problem seems to hinge on setattr(), and this is my attempt to make a fairly simple use of this.
Though it's not quite the same problem, I was following Learn Python The Hard Way, ex. 42 (the while loop, lines 18-41).
I tried writing an __init__(), and using getattr() instead, thinking perhaps something needed to be in the class' namespace, but this doesn't seem to help.
#! /bin/python2.6

class HolyGrail(object):

    def __init__(self):
        self.start = 'start_at_init'

    # function definition in question:
    # TypeError: 'str' object is not callable
    def run_it(self):
        start = setattr(self, 'name', 'get_thing')
        start = self.name
        # Something wrong here?
        value_returned = start()  # I believe this == self.get_thing()
        use_it(value_returned)

    """
    # alternate function definitions
    # NameError: global name 'start' is not defined
    def __init__(self):
        self.start = 'get_thing'

    def run_it(self):
        go_do_it = getattr(self, start)
        first_output = go_do_it()
        use_it(first_output)
    """

    def get_thing(self):
        return "The Knights Who Say ... Ni!"

    def use_it(self, x):
        print x
        print "We want a shrubbery!"

my_instance = HolyGrail()
my_instance.run_it()
@Karl Knechtel, @Amber, @Chris Morgan, thanks for your help.
I think I can now explain my own answer! This required a better grasp of self as an object for me. It's an instance name that gets tagged up with stuff like attributes.
The class could be a Town, and then:
getattr looks for a house by its name so you are ready to call on it soon, and comes up with a different place if you don't find the house.
--With getattr a 'name' already exists, and you go and find it. It makes the step from one function to another dynamic.
As a bonus you may supply a default value, useful as a fallback method (connection failed or something?).
setattr builds a house and gives it a name so you can call in on it later.
You could potentially rebuild this house, or go to a particular place if you are unable to find it.
--setattr makes an attribute name and gives, or changes, its value, to be called on later.
Perhaps a user turns sound off, and then future methods don't output any audio.
I could have written my function a number of ways, but there's no need to change any attributes:
def run_it(self):
    yo = getattr(self, 'get_thing')
    answer = yo()
    setattr(self, 'deal_accepted', self.use_it)  # really ott
    no = getattr(self, 'deal_accepted')
    no(answer)
Properly corrected code:
def run_it(self):
    value_returned = self.get_thing()
    self.use_it(value_returned)
The Python docs say all that needs to be said, as far as I can see.
setattr(object, name, value)
This is the counterpart of getattr(). The arguments are an object, a string and an arbitrary value. The string may name an existing attribute or a new attribute. The function assigns the value to the attribute, provided the object allows it. For example, setattr(x, 'foobar', 123) is equivalent to x.foobar = 123.
You are setting self.name to the string "get_thing", not the function get_thing.
If you want self.name to be a function, then you should set it to one:
setattr(self, 'name', self.get_thing)
However, that's completely unnecessary for your other code, because you could just call it directly:
value_returned = self.get_thing()
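As a quick sketch, the question's run_it could then be written like this (still inside HolyGrail), even though the direct call above remains the simpler option:

def run_it(self):
    # Bind the method itself, not the string 'get_thing', to self.name ...
    setattr(self, 'name', self.get_thing)
    # ... so that self.name() is now callable.
    value_returned = self.name()
    self.use_it(value_returned)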
Setattr:
We use setattr to add an attribute to our class instance. We pass the class instance, the attribute name, and the value; with getattr we retrieve these values.
For example:
Employee = type("Employee", (object,), dict())
employee = Employee()
# Set salary to 1000
setattr(employee, "salary", 1000)
# Get the salary
value = getattr(employee, "salary")
print(value)
To add to the other answers, a common use case I have found for setattr() is when using configs. It is common to parse configs from a file (.ini file or whatever) into a dictionary. So you end up with something like:
configs = {'memory': 2.5, 'colour': 'red', 'charge': 0, ... }
If you want to then assign these configs to a class to be stored and passed around, you could do simple assignment:
MyClass.memory = configs['memory']
MyClass.colour = configs['colour']
MyClass.charge = configs['charge']
...
However, it is much easier and less verbose to loop over the configs and use setattr() like so:
for name, val in configs.items():
    setattr(MyClass, name, val)
As long as your dictionary keys have the proper names, this works very well and is nice and tidy.
*Note: the dict keys need to be strings, as they will become the attribute names on the class.
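A small self-contained sketch of that pattern (the Settings class and config values are hypothetical, and the attributes are set on an instance here rather than on the class itself):

class Settings:
    """Plain holder object for parsed configuration values."""
    pass

configs = {'memory': 2.5, 'colour': 'red', 'charge': 0}

settings = Settings()
for name, val in configs.items():
    setattr(settings, name, val)   # settings.memory, settings.colour, ...

print(settings.memory)   # 2.5
print(settings.colour)   # red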
Suppose you want to give an instance attributes that weren't written into the code beforehand.
setattr() does just that.
It takes the class instance (self), the attribute name, and the value to set.
class Example:
    def __init__(self, **kwargs):
        for key, value in kwargs.items():
            setattr(self, key, value)
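Usage would then look like this (the keyword arguments are just examples):

e = Example(name='brian', quest='grail')
print(e.name)   # brian
print(e.quest)  # grail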
The first thing that strikes me about your code is that you set start to the value of setattr.
start = setattr(self, 'name', 'get_thing')
start = self.name
# Something wrong here?
value_returned = start() #I believe this == self.get_thing()
use_it(value_returned)
The return value of setattr is None. It is an operation that is performed and does a task, but does not return a value. Similar to calling something like exit(), the operation just runs.
Enter python from your shell and you'll see:
a = exit()  # exits your interpreter
Assigning the result of print('x') behaves the same way as exit() and as your setattr call: the side effect happens, but the name ends up bound to None, so calling a afterwards doesn't invoke anything.
a = print('15')  # prints 15, but binds a to None
a                # shows nothing; a is None
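The same thing is easy to verify with setattr itself; Thing here is just a throwaway class for the demonstration:

class Thing:
    pass

obj = Thing()
result = setattr(obj, 'x', 1)
print(obj.x)     # 1    -- the attribute was set as a side effect
print(result)    # None -- setattr itself returns nothing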
If you want to be able to call setattr from within your class, you could define a method for it:
def setAttrClassInstance(self, name, value):  # this also returns None
    setattr(self, name, value)
I'm imitating the behavior of the ConfigParser module to write a highly specialized parser that exploits some well-defined structure in the configuration files for a particular application I work with. The files follow the standard INI structure:
[SectionA]
key1=value1
key2=value2
[SectionB]
key3=value3
key4=value4
For my application, the sections are largely irrelevant; there is no overlap between keys from different sections and all the users only remember the key names, never which section they're supposed to go in. As such, I'd like to override __getattr__ and __setattr__ in the MyParser class I'm creating to allow shortcuts like this:
config = MyParser('myfile.cfg')
config.key2 = 'foo'
The __setattr__ method would first try to find a section called key2 and set that to 'foo' if it exists. Assuming there's no such section, it would look inside each section for a key called key2. If the key exists, then it gets set to the new value. If it doesn't exist, the parser would finally raise an AttributeError.
I've built a test implementation of this, but the problem is that I also want a couple straight-up attributes exempt from this behavior. I want config.filename to be a simple string containing the name of the original file and config.content to be the dictionary that holds the dictionaries for each section.
Is there a clean way to set up the filename and content attributes in the constructor such that they will avoid being overlooked by my custom getters and setters? Will python look for attributes in the object's __dict__ before calling the custom __setattr__?
Pass filename and content to the superclass and let it handle them:
class MyParser(object):
    def __setattr__(self, k, v):
        if k in ['filename', 'content']:
            super(MyParser, self).__setattr__(k, v)
        else:
            # mydict.update(mynewattr)  # the dict handles other attrs
            pass
I think it might be cleaner to present a dictionary-like interface for the contents of the file and leave attribute access for internal purposes. However, that's just my opinion.
To answer your question, __setattr__() is called prior to checking in __dict__, so you can implement it as something like this:
class MyParser(object):
    specials = ("filename", "content")

    def __setattr__(self, attr, value):
        if attr in MyParser.specials:
            self.__dict__[attr] = value
        else:
            # Implement your special behaviour here
            pass
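Putting the pieces together, a rough sketch of the behaviour the question describes might look like this; the file parsing is stubbed out, and the section/key lookup logic is an assumption based on the question rather than a tested implementation:

class MyParser(object):
    specials = ("filename", "content")

    def __init__(self, filename):
        self.filename = filename      # handled by the 'specials' branch
        self.content = {}             # {section_name: {key: value}}
        # ... parse the file here and fill self.content ...

    def __setattr__(self, attr, value):
        if attr in MyParser.specials:
            self.__dict__[attr] = value
            return
        if attr in self.content:                 # a whole section
            self.content[attr] = value
            return
        for section in self.content.values():    # a key inside some section
            if attr in section:
                section[attr] = value
                return
        raise AttributeError(attr)

    def __getattr__(self, attr):
        # Only called when normal lookup fails, so 'filename' and 'content'
        # (which live in __dict__) never end up here.
        for section in self.__dict__.get('content', {}).values():
            if attr in section:
                return section[attr]
        raise AttributeError(attr)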
Suppose I have a class NamedObject which has an attribute name. Now if I had to use a setter, I would first have to define a getter (I guess?) like so:
class NamedObject:
    def __init__(self, name):
        self.name = name

    @property
    def name(self):
        return self._name
Now I was wondering, inside the setter, should I use self._name or self.name, the getter or the actual attribute? When setting the name, I of course need to use _name, but what about when I'm getting INSIDE the setter? For example:
@name.setter
def name(self, value):
    if self._name != str(value):  # Or should I do 'if self.name != value'?
        self.doStuff(self._name)  # Or doStuff(self.name)?
        self.doMoreStuff()
        self._name = str(value)
Does it actually matter which one to use, and why use one over the other?
There's no normal reason to use the external interface when your setter is part of the internal interface. I suppose you might be able to construct a scenario where you might want to, but by default, just use the internal variable.
If your getter has significant logic (like lazy initialization), then you should access through the getter all the time.
class Something(object):
    UNINITIALIZED = object()
    LAZY_ATTRS = ('x', 'y', 'z')

    def __init__(self):
        for attr in self.LAZY_ATTRS:
            setattr(self, '_' + attr, self.UNINITIALIZED)

    def _get_x(self):
        if self._x is self.UNINITIALIZED:
            self._x = self.doExpensiveInitStuff('x')
        return self._x
But if all your getter does is return self._x, just access the internal variable directly.
Using the getter instead of just accessing the internal variable adds another function call to your setting logic, and in Python, function calls are expensive. If you are writing this:
def _get_x(self):
    return self._x

def _set_x(self, value):
    self._x = value

x = property(fget=_get_x, fset=_set_x)
then you are suffering from "Too Much Java" syndrome. Java developers have to write this kind of stuff, because if it later becomes necessary to add behavior to the setting or getting of x, all the accesses to x outside of the class have to be recompiled. But in Python, you are far better off keeping things simple: just define x as an instance variable, and convert it to a property only when the need arises to add some kind of setting or getting behavior. See YAGNI.
Paul already answered well.
For the sake of completeness I'd like to add that using getters/setters consistently makes it easier to override a class. There are several implications here.
If you envision that a particular class is very likely to be overriden/extended by yourself or others, then using getters/setters early on might be beneficial in terms of less time spent later for refactoring. Still, I agree to the keep it simple viewpoint: Use the below only sparingly, because of the runtime cost and reading/coding effort.
If validation is also done in the getter, then either use the instance attribute directly in the setter, or provide two different getters, name() and _name() (or name_already_checked()), and call the simple, non-validating one inside the setter. That way both getters can be overridden: the fast, no-validation getter as well as the usual one provided for consumers of the class (a sketch follows after the next paragraph).
This does violate the YAGNI principle that Paul pointed to. However, if you do release code for a wider audience "overengineering" is often advisable. Libraries benefit from added flexibility and foresight.
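To illustrate the two-getter idea, here is a minimal sketch; the NamedThing class and its (trivial) validation are hypothetical, not from the question:

class NamedThing(object):
    def __init__(self, name):
        self._name = str(name)

    def _raw_name(self):
        # Fast internal getter: no validation, cheap to call from the setter.
        return self._name

    @property
    def name(self):
        # Public getter: the place to add validation or lazy initialisation.
        return self._raw_name()

    @name.setter
    def name(self, value):
        if self._raw_name() != str(value):
            self._name = str(value)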
I'm trying to figure out if there's an elegant and concise way to have a class accessing one of its own properties when "used" as a dictionary, basically redirecting all the methods that'd be implemented in an ordered dictionary to one of its properties.
Currently I'm inheriting from IterableUserDict and explicitly setting its data to another property, and it seems to be working, but I know that UserDict is considered sort of old, and I'm concerned I might be overlooking something.
What I have:
class ConnectionInterface(IterableUserDict):
    def __init__(self, hostObject):
        self._hostObject = hostObject
        self.ports = odict.OrderedDict()
        self.inputPorts = odict.OrderedDict()
        self.outputPorts = odict.OrderedDict()
        self.data = self.ports
This way I expect the object to behave and respond (and be used) the way I mean it to, while getting ordered-dictionary behaviour for free on its "ports" property: iteration, getting items by key, lookups a la if this in myObject, and so on.
Any advice welcome, the above seems to be working fine, but I have an odd itch that I might be missing something.
Thanks in advance.
In the end inheriting IterableUserDict and setting self.data explicitly worked out to what I needed and hasn't had any unforeseen consequences or added dodgyness when serialising and deserialising.
I'm sticking to my original solution, and I can recommend it if anybody needs simple, full-fledged dict-like behaviour on a selected subset of data in their own objects.
It's fairly simple and doesn't have particularly strict scalability or complexity requirements stressing it though.
Sure, you can do this. Dictionary-style access goes through the item protocol, so you can implement the magic methods __getitem__ and __setitem__ something like this:
def __getitem__(self, key):
    return self.ports[key]

def __setitem__(self, key, value):
    self.ports[key] = value
If you want implementation for .keys() and .values() and stuff, just write them in this style:
def keys(self):
    return self.ports.keys()
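For illustration, a minimal self-contained version of that delegation; the class and attribute names come from the question, the rest is an assumption:

from collections import OrderedDict

class ConnectionInterface(object):
    def __init__(self, hostObject):
        self._hostObject = hostObject
        self.ports = OrderedDict()

    def __getitem__(self, key):
        return self.ports[key]

    def __setitem__(self, key, value):
        self.ports[key] = value

    def __contains__(self, key):
        return key in self.ports

    def __iter__(self):
        return iter(self.ports)

    def keys(self):
        return self.ports.keys()

conn = ConnectionInterface(hostObject=None)
conn['video'] = 'out1'     # stored in conn.ports
print('video' in conn)     # True
print(list(conn))          # ['video']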
I tried to create a dynamic object to validate my config on the fly and present the result as an object. I tried to achieve this by creating a class like this:
class SubConfig(object):
    def __init__(self, config, key_types):
        self.__config = config
        self.__values = {}
        self.__key_types = key_types

    def __getattr__(self, item):
        if item in self.__key_types:
            return self.__values[item] or None
        else:
            raise ValueError("No such item to get from config")

    def __setattr__(self, item, value):
        if self.__config._blocked:
            raise ValueError("Can't change values after service has started")
        if item in self.__key_types:
            if type(value) in self.__key_types[item]:
                self.__values[item] = value
            else:
                raise ValueError("Can't assign a value of a different type than declared!")
        else:
            raise ValueError("No such item to set in config")
SubConfig is a wrapper for a section of the config file. Config has a switch that blocks changing values after the program has started (you can change values only during initialization).
The problem is that when I set any value, it ends up in an infinite loop in __getattr__. From what I've read, __getattr__ shouldn't behave like that (it should first use an existing attribute and only then call __getattr__). I was comparing my code with available examples but I can't figure it out.
I noticed that all the problems are generated by my constructor.
The problem is that your constructor in initialising the object calls __setattr__, which then calls __getattr__ because the __ private members aren't initialised yet.
There are two ways I can think of to work around this:
One option is to call down to object.__setattr__ thereby avoiding your __setattr__ or equivalently use super(SubConfig, self).__setattr__(...) in __init__. You could also set values in self.__dict__ directly. A problem here is that because you're using double-underscores you'd have to mangle the attribute names manually (so '__config' becomes '_SubConfig__config'):
def __init__(self, config, key_types):
    super(SubConfig, self).__setattr__('_SubConfig__config', config)
    super(SubConfig, self).__setattr__('_SubConfig__values', {})
    super(SubConfig, self).__setattr__('_SubConfig__key_types', key_types)
An alternative is to have __setattr__ detect and pass through access to attribute names that begin with _, i.e.:
if item.startswith('_'):
    return super(SubConfig, self).__setattr__(item, value)
This is more Pythonic in that if someone has a good reason to access your object's internals, you have no reason to try to stop them.
Cf. ecatmur's answer for the root cause, and remember that __setattr__ is not symmetrical to __getattr__: it is unconditionally called on each and every attempt to bind an object's attribute. Overriding __setattr__ is tricky and should not be done if you don't clearly understand the pros and cons.
Now for a simple practical solution to your use case: rewrite your initializer to avoid triggering setattr calls:
class SubConfig(object):
    def __init__(self, config, key_types):
        self.__dict__.update(
            _SubConfig__config=config,
            _SubConfig__values={},
            _SubConfig__key_types=key_types
        )
Note that I renamed your attributes to emulate the name-mangling that happens when using the double leading underscores naming scheme.
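As a quick check of the fix (the parent Config object isn't shown in the question, so a tiny stand-in class is used here, and SubConfig is assumed to use one of the corrected initializers above):

class FakeConfig(object):
    _blocked = False   # stand-in for the real Config's "started" switch

cfg = SubConfig(FakeConfig(), {'timeout': (int,)})
cfg.timeout = 30          # goes through __setattr__ without recursing
print(cfg.timeout)        # 30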