Edit: Thanks all! Changed the _sort function and now it works.
Original post
I'm trying to create a sorted dict class as a way to mess around with dunder methods. I know collections.OrderedDict exists.
When I try to overload __getitem__ or __setitem__, Python acts as if I am trying to index a list with a string key. Here is my code for the class:
class SortedDict:
def __init__(self, **kwargs):
self.map = dict(kwargs)
self._sort()
def __str__(self):
return str(self.map)
def __getitem__(self, key):
return self.map[key]
def __setitem__(self, name, value):
self.map[name] = value
def keys(self):
return self.map.keys()
def add(self, **kwargs):
for key in kwargs:
self.map[key] = kwargs[key]
self._sort()
indices = dict()
for key in kwargs.keys():
indices[key] = self.index(key)
return indices
def remove(self, *args):
for key in args:
self.map.pop(key)
def index(self, key: str):
keys = list()
for dict_key in self.map.keys():
keys.append(dict_key)
return keys.index(key)
def contains(self, key: str):
return key in self.map
def _sort(self):
self.map = sorted(self.map)
When I execute the following code to test __getitem__:
from sorted_dict import SortedDict
test_dict = SortedDict(test1=1, test2=2, a=2, b=3)
print(test_dict['test1'])
I get this error:
Traceback (most recent call last):
File "c:\Users\Chris\Desktop\Code\DMC2\mapping_editor\tree.py", line 16, in <module>
print(test_dict['test1'])
File "c:\Users\Chris\Desktop\Code\DMC2\mapping_editor\sorted_dict.py", line 11, in __getitem__
return self.map[key]
TypeError: list indices must be integers or slices, not str
I get a similar error when trying to use __setitem__. I am using VS Code, and when I hover my cursor over self.map in either of those functions the type is shown as dict[str, Any] | list[str]. If I print the type of self.map in either of the functions, it prints <class 'list'>, but when I print the type of self.map in the constructor it prints <class 'dict'>, which is what I would expect. When I print self.map in __setitem__ or __getitem__ it prints as a list of the keys, but in the constructor it prints as a dictionary would. What am I missing?
As mentioned in the comments, sorted(self.map) returns a list of the sorted map keys. To get a sorted dictionary you can do:
def _sort(self):
self.map = dict(sorted(self.map.items()))
This will give you {'a': 2, 'b': 3, 'test1': 1, 'test2': 2}.
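With that change, the original test from the question behaves as expected; a quick check:
test_dict = SortedDict(test1=1, test2=2, a=2, b=3)
print(test_dict['test1'])  # 1
print(test_dict)           # {'a': 2, 'b': 3, 'test1': 1, 'test2': 2}
test_dict['c'] = 0         # __setitem__ works now as well, since self.map stays a dict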
How do I make Python dictionary members accessible via a dot "."?
For example, instead of writing mydict['val'], I'd like to write mydict.val.
Also I'd like to access nested dicts this way. For example
mydict.mydict2.val
would refer to
mydict = { 'mydict2': { 'val': ... } }
I've always kept this around in a util file. You can use it as a mixin on your own classes too.
class dotdict(dict):
"""dot.notation access to dictionary attributes"""
__getattr__ = dict.get
__setattr__ = dict.__setitem__
__delattr__ = dict.__delitem__
mydict = {'val':'it works'}
nested_dict = {'val':'nested works too'}
mydict = dotdict(mydict)
mydict.val
# 'it works'
mydict.nested = dotdict(nested_dict)
mydict.nested.val
# 'nested works too'
You can do it using this class I just made. With this class you can use the Map object like another dictionary (including JSON serialization) or with dot notation. I hope it helps:
class Map(dict):
"""
Example:
m = Map({'first_name': 'Eduardo'}, last_name='Pool', age=24, sports=['Soccer'])
"""
def __init__(self, *args, **kwargs):
super(Map, self).__init__(*args, **kwargs)
for arg in args:
if isinstance(arg, dict):
for k, v in arg.iteritems():
self[k] = v
if kwargs:
for k, v in kwargs.iteritems():
self[k] = v
def __getattr__(self, attr):
return self.get(attr)
def __setattr__(self, key, value):
self.__setitem__(key, value)
def __setitem__(self, key, value):
super(Map, self).__setitem__(key, value)
self.__dict__.update({key: value})
def __delattr__(self, item):
self.__delitem__(item)
def __delitem__(self, key):
super(Map, self).__delitem__(key)
del self.__dict__[key]
Usage examples:
m = Map({'first_name': 'Eduardo'}, last_name='Pool', age=24, sports=['Soccer'])
# Add new key
m.new_key = 'Hello world!'
# Or
m['new_key'] = 'Hello world!'
print m.new_key
print m['new_key']
# Update values
m.new_key = 'Yay!'
# Or
m['new_key'] = 'Yay!'
# Delete key
del m.new_key
# Or
del m['new_key']
Install dotmap via pip
pip install dotmap
It does everything you want it to do and subclasses dict, so it operates like a normal dictionary:
from dotmap import DotMap
m = DotMap()
m.hello = 'world'
m.hello
m.hello += '!'
# m.hello and m['hello'] now both return 'world!'
m.val = 5
m.val2 = 'Sam'
On top of that, you can convert it to and from dict objects:
d = m.toDict()
m = DotMap(d) # automatic conversion in constructor
This means that if something you want to access is already in dict form, you can turn it into a DotMap for easy access:
import json
jsonDict = json.loads(text)
data = DotMap(jsonDict)
print data.location.city
Finally, it automatically creates new child DotMap instances so you can do things like this:
m = DotMap()
m.people.steve.age = 31
Comparison to Bunch
Full disclosure: I am the creator of DotMap. I created it because Bunch was missing these features:
remembering the order items are added and iterating in that order
automatic child DotMap creation, which saves time and makes for cleaner code when you have a lot of hierarchy
constructing from a dict and recursively converting all child dict instances to DotMap
Derive from dict and implement __getattr__ and __setattr__.
Or you can use Bunch which is very similar.
I don't think it's possible to monkeypatch the built-in dict class.
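A minimal sketch of that approach (the class name AttrDict is just illustrative):
class AttrDict(dict):
    """Sketch: dict subclass with attribute access; raises AttributeError for missing keys."""
    def __getattr__(self, name):
        try:
            return self[name]
        except KeyError:
            raise AttributeError(name)
    def __setattr__(self, name, value):
        self[name] = value

d = AttrDict(val='it works')
d.other = 1
print(d.val, d['other'])  # it works 1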
Use SimpleNamespace:
>>> from types import SimpleNamespace
>>> d = dict(x=[1, 2], y=['a', 'b'])
>>> ns = SimpleNamespace(**d)
>>> ns.x
[1, 2]
>>> ns
namespace(x=[1, 2], y=['a', 'b'])
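One caveat: SimpleNamespace does not recurse into nested dicts by itself. If the data comes from JSON, one common trick (a sketch with hypothetical data) is to convert during parsing with object_hook:
import json
from types import SimpleNamespace

text = '{"x": [1, 2], "mydict2": {"val": 42}}'
ns = json.loads(text, object_hook=lambda d: SimpleNamespace(**d))
print(ns.mydict2.val)  # 42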
Fabric has a really nice, minimal implementation. Extending that to allow for nested access, we can use a defaultdict, and the result looks something like this:
from collections import defaultdict
class AttributeDict(defaultdict):
def __init__(self):
super(AttributeDict, self).__init__(AttributeDict)
def __getattr__(self, key):
try:
return self[key]
except KeyError:
raise AttributeError(key)
def __setattr__(self, key, value):
self[key] = value
Make use of it as follows:
keys = AttributeDict()
keys.abc.xyz.x = 123
keys.abc.xyz.a.b.c = 234
That elaborates a bit on Kugel's answer of "Derive from dict and implement __getattr__ and __setattr__". Now you know how!
I tried this:
class dotdict(dict):
def __getattr__(self, name):
return self[name]
You can try __getattribute__ too.
Making every dict a dotdict would be good enough; if you want to initialize this from a multi-layer dict, implement __init__ too.
I recently came across the 'Box' library which does the same thing.
Installation command : pip install python-box
Example:
from box import Box
mydict = {"key1":{"v1":0.375,
"v2":0.625},
"key2":0.125,
}
mydict = Box(mydict)
print(mydict.key1.v1)
I found it to be more effective than other existing libraries like dotmap, which generates a Python recursion error when you have large nested dicts.
link to library and details: https://pypi.org/project/python-box/
If you want to pickle your modified dictionary, you need to add a few state methods to the above answers:
class DotDict(dict):
"""dot.notation access to dictionary attributes"""
def __getattr__(self, attr):
return self.get(attr)
__setattr__= dict.__setitem__
__delattr__= dict.__delitem__
def __getstate__(self):
return self
def __setstate__(self, state):
self.update(state)
self.__dict__ = self
You can achieve this using SimpleNamespace
from types import SimpleNamespace
# Assign values
args = SimpleNamespace()
args.username = 'admin'
# Retrieve values
print(args.username) # output: admin
Don't. Attribute access and indexing are separate things in Python, and you shouldn't want them to behave the same. Make a class (possibly one made by namedtuple) if you have something that should have accessible attributes, and use [] notation to get an item from a dict.
To build upon epool's answer, this version allows you to access any dict inside via the dot operator:
foo = {
"bar" : {
"baz" : [ {"boo" : "hoo"} , {"baba" : "loo"} ]
}
}
For instance, foo.bar.baz[1].baba returns "loo".
class Map(dict):
def __init__(self, *args, **kwargs):
super(Map, self).__init__(*args, **kwargs)
for arg in args:
if isinstance(arg, dict):
for k, v in arg.items():
if isinstance(v, dict):
v = Map(v)
if isinstance(v, list):
self.__convert(v)
self[k] = v
if kwargs:
for k, v in kwargs.items():
if isinstance(v, dict):
v = Map(v)
elif isinstance(v, list):
self.__convert(v)
self[k] = v
def __convert(self, v):
for elem in range(0, len(v)):
if isinstance(v[elem], dict):
v[elem] = Map(v[elem])
elif isinstance(v[elem], list):
self.__convert(v[elem])
def __getattr__(self, attr):
return self.get(attr)
def __setattr__(self, key, value):
self.__setitem__(key, value)
def __setitem__(self, key, value):
super(Map, self).__setitem__(key, value)
self.__dict__.update({key: value})
def __delattr__(self, item):
self.__delitem__(item)
def __delitem__(self, key):
super(Map, self).__delitem__(key)
del self.__dict__[key]
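A quick check of the claim above, assuming the Map class as defined here:
foo_map = Map(foo)
print(foo_map.bar.baz[1].baba)  # loo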
Building on Kugel's answer and taking Mike Graham's words of caution into consideration, what if we make a wrapper?
class DictWrap(object):
""" Wrap an existing dict, or create a new one, and access with either dot
notation or key lookup.
The attribute _data is reserved and stores the underlying dictionary.
When using the += operator with create=True, the empty nested dict is
replaced with the operand, effectively creating a default dictionary
of mixed types.
args:
d({}): Existing dict to wrap, an empty dict is created by default
create(True): Create an empty, nested dict instead of raising a KeyError
example:
>>>dw = DictWrap({'pp':3})
>>>dw.a.b += 2
>>>dw.a.b += 2
>>>dw.a['c'] += 'Hello'
>>>dw.a['c'] += ' World'
>>>dw.a.d
>>>print dw._data
{'a': {'c': 'Hello World', 'b': 4, 'd': {}}, 'pp': 3}
"""
def __init__(self, d=None, create=True):
if d is None:
d = {}
supr = super(DictWrap, self)
supr.__setattr__('_data', d)
supr.__setattr__('__create', create)
def __getattr__(self, name):
try:
value = self._data[name]
except KeyError:
if not super(DictWrap, self).__getattribute__('__create'):
raise
value = {}
self._data[name] = value
if hasattr(value, 'items'):
create = super(DictWrap, self).__getattribute__('__create')
return DictWrap(value, create)
return value
def __setattr__(self, name, value):
self._data[name] = value
def __getitem__(self, key):
try:
value = self._data[key]
except KeyError:
if not super(DictWrap, self).__getattribute__('__create'):
raise
value = {}
self._data[key] = value
if hasattr(value, 'items'):
create = super(DictWrap, self).__getattribute__('__create')
return DictWrap(value, create)
return value
def __setitem__(self, key, value):
self._data[key] = value
def __iadd__(self, other):
if self._data:
raise TypeError("A Nested dict will only be replaced if it's empty")
else:
return other
Use __getattr__; very simple, works in Python 3.4.3:
class myDict(dict):
def __getattr__(self,val):
return self[val]
blockBody=myDict()
blockBody['item1']=10000
blockBody['item2']="StackOverflow"
print(blockBody.item1)
print(blockBody.item2)
Output:
10000
StackOverflow
I like Munch, and it gives lots of handy options on top of dot access.
import munch
temp_1 = {'person': { 'fname': 'senthil', 'lname': 'ramalingam'}}
dict_munch = munch.munchify(temp_1)
dict_munch.person.fname
The language itself doesn't support this, but sometimes this is still a useful requirement. Besides the Bunch recipe, you can also write a little method which can access a dictionary using a dotted string:
def get_var(input_dict, accessor_string):
"""Gets data from a dictionary using a dotted accessor-string"""
current_data = input_dict
for chunk in accessor_string.split('.'):
current_data = current_data.get(chunk, {})
return current_data
which would support something like this:
>> test_dict = {'thing': {'spam': 12, 'foo': {'cheeze': 'bar'}}}
>> output = get_var(test_dict, 'thing.foo.cheeze')
>> print output
'bar'
>>
I ended up trying BOTH the AttrDict and the Bunch libraries and found them to be way too slow for my uses. After a friend and I looked into it, we found that the main method for writing these libraries results in the library aggressively recursing through a nested object and making copies of the dictionary object throughout. With this in mind, we made two key changes: 1) we made attributes lazy-loaded, and 2) instead of creating copies of a dictionary object, we create copies of a lightweight proxy object. This is the final implementation. The performance increase of using this code is incredible. When using AttrDict or Bunch, these two libraries alone consumed 1/2 and 1/3 respectively of my request time (what!?). This code reduced that time to almost nothing (somewhere in the range of 0.5 ms). This of course depends on your needs, but if you are using this functionality quite a bit in your code, definitely go with something simple like this.
class DictProxy(object):
def __init__(self, obj):
self.obj = obj
def __getitem__(self, key):
return wrap(self.obj[key])
def __getattr__(self, key):
try:
return wrap(getattr(self.obj, key))
except AttributeError:
try:
return self[key]
except KeyError:
raise AttributeError(key)
# you probably also want to proxy important list properties along like
# items(), iteritems() and __len__
class ListProxy(object):
def __init__(self, obj):
self.obj = obj
def __getitem__(self, key):
return wrap(self.obj[key])
# you probably also want to proxy important list properties along like
# __iter__ and __len__
def wrap(value):
if isinstance(value, dict):
return DictProxy(value)
if isinstance(value, (tuple, list)):
return ListProxy(value)
return value
See the original implementation here by https://stackoverflow.com/users/704327/michael-merickel.
The other thing to note is that this implementation is pretty simple and doesn't implement all of the methods you might need. You'll need to write those as required on the DictProxy or ListProxy objects.
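A small usage sketch with hypothetical data, assuming the proxy classes above:
data = wrap({'user': {'name': 'Ada', 'tags': ['x', 'y']}})
print(data.user.name)     # Ada
print(data.user.tags[1])  # y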
def dict_to_object(dick):
# http://stackoverflow.com/a/1305663/968442
class Struct:
def __init__(self, **entries):
self.__dict__.update(entries)
return Struct(**dick)
If one decides to permanently convert that dict to an object, this should do. You can create a throwaway object just before accessing.
d = dict_to_object(d)
This solution is a refinement of the one offered by epool, addressing the OP's requirement to access nested dicts in a consistent manner; epool's solution did not allow for accessing nested dicts.
class YAMLobj(dict):
def __init__(self, args):
super(YAMLobj, self).__init__(args)
if isinstance(args, dict):
for k, v in args.iteritems():
if not isinstance(v, dict):
self[k] = v
else:
self.__setattr__(k, YAMLobj(v))
def __getattr__(self, attr):
return self.get(attr)
def __setattr__(self, key, value):
self.__setitem__(key, value)
def __setitem__(self, key, value):
super(YAMLobj, self).__setitem__(key, value)
self.__dict__.update({key: value})
def __delattr__(self, item):
self.__delitem__(item)
def __delitem__(self, key):
super(YAMLobj, self).__delitem__(key)
del self.__dict__[key]
With this class, one can now do something like: A.B.C.D.
This works for infinite levels of nesting of dicts, lists, lists of dicts, and dicts of lists. It also supports pickling.
This is an extension of this answer.
from collections.abc import Iterable
from typing import Any, List

class DotDict(dict):
# https://stackoverflow.com/a/70665030/913098
"""
Example:
m = Map({'first_name': 'Eduardo'}, last_name='Pool', age=24, sports=['Soccer'])
Iterables are assumed to have a constructor taking a list as input.
"""
def __init__(self, *args, **kwargs):
super(DotDict, self).__init__(*args, **kwargs)
args_with_kwargs = []
for arg in args:
args_with_kwargs.append(arg)
args_with_kwargs.append(kwargs)
args = args_with_kwargs
for arg in args:
if isinstance(arg, dict):
for k, v in arg.items():
self[k] = v
if isinstance(v, dict):
self[k] = DotDict(v)
elif isinstance(v, str) or isinstance(v, bytes):
self[k] = v
elif isinstance(v, Iterable):
klass = type(v)
map_value: List[Any] = []
for e in v:
map_e = DotDict(e) if isinstance(e, dict) else e
map_value.append(map_e)
self[k] = klass(map_value)
def __getattr__(self, attr):
return self.get(attr)
def __setattr__(self, key, value):
self.__setitem__(key, value)
def __setitem__(self, key, value):
super(DotDict, self).__setitem__(key, value)
self.__dict__.update({key: value})
def __delattr__(self, item):
self.__delitem__(item)
def __delitem__(self, key):
super(DotDict, self).__delitem__(key)
del self.__dict__[key]
def __getstate__(self):
return self.__dict__
def __setstate__(self, d):
self.__dict__.update(d)
if __name__ == "__main__":
import pickle
def test_map():
d = {
"a": 1,
"b": {
"c": "d",
"e": 2,
"f": None
},
"g": [],
"h": [1, "i"],
"j": [1, "k", {}],
"l":
[
1,
"m",
{
"n": [3],
"o": "p",
"q": {
"r": "s",
"t": ["u", 5, {"v": "w"}, ],
"x": ("z", 1)
}
}
],
}
map_d = DotDict(d)
w = map_d.l[2].q.t[2].v
assert w == "w"
pickled = pickle.dumps(map_d)
unpickled = pickle.loads(pickled)
assert unpickled == map_d
kwargs_check = DotDict(a=1, b=[dict(c=2, d="3"), 5])
assert kwargs_check.b[0].d == "3"
kwargs_and_args_check = DotDict(d, a=1, b=[dict(c=2, d="3"), 5])
assert kwargs_and_args_check.l[2].q.t[2].v == "w"
assert kwargs_and_args_check.b[0].d == "3"
test_map()
I dislike adding another log to a (more than) 10-year old fire, but I'd also check out the dotwiz library, which I've recently released - just this year actually.
It's a relatively tiny library, which also performs really well for get (access) and set (create) times in benchmarks, at least as compared to other alternatives.
Install dotwiz via pip
pip install dotwiz
It does everything you want it to do and subclasses dict, so it operates like a normal dictionary:
from dotwiz import DotWiz
dw = DotWiz()
dw.hello = 'world'
dw.hello
dw.hello += '!'
# dw.hello and dw['hello'] now both return 'world!'
dw.val = 5
dw.val2 = 'Sam'
On top of that, you can convert it to and from dict objects:
d = dw.to_dict()
dw = DotWiz(d) # automatic conversion in constructor
This means that if something you want to access is already in dict form, you can turn it into a DotWiz for easy access:
import json
json_dict = json.loads(text)
data = DotWiz(json_dict)
print(data.location.city)
Finally, an exciting feature I am working on, from an existing feature request, is automatic creation of new child DotWiz instances, so you can do things like this:
dw = DotWiz()
dw['people.steve.age'] = 31
dw
# ✫(people=✫(steve=✫(age=31)))
Comparison with dotmap
I've added a quick and dirty performance comparison with dotmap below.
First, install both libraries with pip:
pip install dotwiz dotmap
I came up with the following code for benchmark purposes:
from timeit import timeit
from dotwiz import DotWiz
from dotmap import DotMap
d = {'hey': {'so': [{'this': {'is': {'pretty': {'cool': True}}}}]}}
dw = DotWiz(d)
# ✫(hey=✫(so=[✫(this=✫(is=✫(pretty={'cool'})))]))
dm = DotMap(d)
# DotMap(hey=DotMap(so=[DotMap(this=DotMap(is=DotMap(pretty={'cool'})))]))
assert dw.hey.so[0].this['is'].pretty.cool == dm.hey.so[0].this['is'].pretty.cool
n = 100_000
print('dotwiz (create): ', round(timeit('DotWiz(d)', number=n, globals=globals()), 3))
print('dotmap (create): ', round(timeit('DotMap(d)', number=n, globals=globals()), 3))
print('dotwiz (get): ', round(timeit("dw.hey.so[0].this['is'].pretty.cool", number=n, globals=globals()), 3))
print('dotmap (get): ', round(timeit("dm.hey.so[0].this['is'].pretty.cool", number=n, globals=globals()), 3))
Results, on my M1 Mac, running Python 3.10:
dotwiz (create): 0.189
dotmap (create): 1.085
dotwiz (get): 0.014
dotmap (get): 0.335
This also works with nested dicts and makes sure that dicts which are appended later behave the same:
class DotDict(dict):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
# Recursively turn nested dicts into DotDicts
for key, value in self.items():
if type(value) is dict:
self[key] = DotDict(value)
def __setitem__(self, key, item):
if type(item) is dict:
item = DotDict(item)
super().__setitem__(key, item)
__setattr__ = __setitem__
__getattr__ = dict.__getitem__
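For example, a quick sketch using the class above:
d = DotDict({'a': {'b': 1}})
d.c = {'x': 2}         # assigned later, still converted to a DotDict
print(d.a.b, d.c.x)    # 1 2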
Using namedtuple allows dot access.
It is like a lightweight object which also has the properties of a tuple.
It allows you to define fields and access them using the dot operator.
from collections import namedtuple
Data = namedtuple('Data', ['key1', 'key2'])
dataObj = Data(val1, key2=val2) # can instantiate using keyword arguments and positional arguments
Access using dot operator
dataObj.key1 # Gives val1
dataObj.key2 # Gives val2
Access using tuple indices
dataObj[0] # Gives val1
dataObj[1] # Gives val2
But remember this is a tuple, not a dict, so the code below will give an error:
dataObj['key1'] # Gives TypeError: tuple indices must be integers or slices, not str
Refer: namedtuple
It is an old question, but I recently found that sklearn has an implementation of a dict whose keys are accessible as attributes, namely Bunch.
https://scikit-learn.org/stable/modules/generated/sklearn.utils.Bunch.html#sklearn.utils.Bunch
Simplest solution: define a class with only a pass statement in it, create an object of this class, and use dot notation.
class my_dict:
pass
person = my_dict()
person.id = 1 # create using dot notation
person.phone = 9999
del person.phone # Remove a property using dot notation
name_data = my_dict()
name_data.first_name = 'Arnold'
name_data.last_name = 'Schwarzenegger'
person.name = name_data
person.name.first_name # dot notation access for nested properties - gives Arnold
One simple way to get dot access (but not item access) is to use a plain object in Python, like this:
class YourObject:
def __init__(self, *args, **kwargs):
for k, v in kwargs.items():
setattr(self, k, v)
...and use it like this:
>>> obj = YourObject(key="value")
>>> print(obj.key)
"value"
... to convert it to a dict:
>>> print(obj.__dict__)
{"key": "value"}
The answer of #derek73 is very neat, but it cannot be pickled nor (deep)copied, and it returns None for missing keys. The code below fixes this.
Edit: I did not see the answer above that addresses the exact same point (upvoted). I'm leaving the answer here for reference.
class dotdict(dict):
__setattr__ = dict.__setitem__
__delattr__ = dict.__delitem__
def __getattr__(self, name):
try:
return self[name]
except KeyError:
raise AttributeError(name)
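A quick check that the fix behaves as claimed (a small sketch):
import copy
import pickle

d = dotdict({'val': 1})
assert pickle.loads(pickle.dumps(d)) == d
assert copy.deepcopy(d).val == 1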
I just needed to access a dictionary using a dotted path string, so I came up with:
def get_value_from_path(dictionary, parts):
""" extracts a value from a dictionary using a dotted path string """
if type(parts) is str:
parts = parts.split('.')
if len(parts) > 1:
return get_value_from_path(dictionary[parts[0]], parts[1:])
return dictionary[parts[0]]
a = {'a':{'b':'c'}}
print(get_value_from_path(a, 'a.b')) # c
The implementation used by kaggle_environments is a function called structify.
class Struct(dict):
def __init__(self, **entries):
entries = {k: v for k, v in entries.items() if k != "items"}
dict.__init__(self, entries)
self.__dict__.update(entries)
def __setattr__(self, attr, value):
self.__dict__[attr] = value
self[attr] = value
# Added benefit of cloning lists and dicts.
def structify(o):
if isinstance(o, list):
return [structify(o[i]) for i in range(len(o))]
elif isinstance(o, dict):
return Struct(**{k: structify(v) for k, v in o.items()})
return o
https://github.com/Kaggle/kaggle-environments/blob/master/kaggle_environments/utils.py
This may be useful for testing AI simulation agents in games like ConnectX
from kaggle_environments import structify
obs = structify({ 'remainingOverageTime': 60, 'step': 0, 'mark': 1, 'board': [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]})
conf = structify({ 'timeout': 2, 'actTimeout': 2, 'agentTimeout': 60, 'episodeSteps': 1000, 'runTimeout': 1200, 'columns': 7, 'rows': 6, 'inarow': 4, '__raw_path__': '/kaggle_simulations/agent/main.py' })
def agent(obs, conf):
action = obs.step % conf.columns
return action
Not a direct answer to the OP's question, but inspired by it and perhaps useful for some: I've created an object-based solution using the internal __dict__ (in no way optimized code).
payload = {
"name": "John",
"location": {
"lat": 53.12312312,
"long": 43.21345112
},
"numbers": [
{
"role": "home",
"number": "070-12345678"
},
{
"role": "office",
"number": "070-12345679"
}
]
}
class Map(object):
"""
Dot style access to object members, access raw values
with an underscore e.g.
class Foo(Map):
def foo(self):
return self.get('foo') + 'bar'
obj = Foo(**{'foo': 'foo'})
obj.foo => 'foobar'
obj._foo => 'foo'
"""
def __init__(self, *args, **kwargs):
for arg in args:
if isinstance(arg, dict):
for k, v in arg.iteritems():
self.__dict__[k] = v
self.__dict__['_' + k] = v
if kwargs:
for k, v in kwargs.iteritems():
self.__dict__[k] = v
self.__dict__['_' + k] = v
def __getattribute__(self, attr):
if hasattr(self, 'get_' + attr):
return object.__getattribute__(self, 'get_' + attr)()
else:
return object.__getattribute__(self, attr)
def get(self, key):
try:
return self.__dict__.get('get_' + key)()
except (AttributeError, TypeError):
return self.__dict__.get(key)
def __repr__(self):
return u"<{name} object>".format(
name=self.__class__.__name__
)
class Number(Map):
def get_role(self):
return self.get('role')
def get_number(self):
return self.get('number')
class Location(Map):
def get_latitude(self):
return self.get('lat') + 1
def get_longitude(self):
return self.get('long') + 1
class Item(Map):
def get_name(self):
return self.get('name') + " Doe"
def get_location(self):
return Location(**self.get('location'))
def get_numbers(self):
return [Number(**n) for n in self.get('numbers')]
# Tests
obj = Item({'foo': 'bar'}, **payload)
assert type(obj) == Item
assert obj._name == "John"
assert obj.name == "John Doe"
assert type(obj.location) == Location
assert obj.location._lat == 53.12312312
assert obj.location._long == 43.21345112
assert obj.location.latitude == 54.12312312
assert obj.location.longitude == 44.21345112
for n in obj.numbers:
assert type(n) == Number
if n.role == 'home':
assert n.number == "070-12345678"
if n.role == 'office':
assert n.number == "070-12345679"
I would like to combine OrderedDict() and defaultdict() from collections in one object, which shall be an ordered, default dict.
Is this possible?
The following (using a modified version of this recipe) works for me:
from collections import OrderedDict, Callable
class DefaultOrderedDict(OrderedDict):
# Source: http://stackoverflow.com/a/6190500/562769
def __init__(self, default_factory=None, *a, **kw):
if (default_factory is not None and
not isinstance(default_factory, Callable)):
raise TypeError('first argument must be callable')
OrderedDict.__init__(self, *a, **kw)
self.default_factory = default_factory
def __getitem__(self, key):
try:
return OrderedDict.__getitem__(self, key)
except KeyError:
return self.__missing__(key)
def __missing__(self, key):
if self.default_factory is None:
raise KeyError(key)
self[key] = value = self.default_factory()
return value
def __reduce__(self):
if self.default_factory is None:
args = tuple()
else:
args = self.default_factory,
return type(self), args, None, None, self.items()
def copy(self):
return self.__copy__()
def __copy__(self):
return type(self)(self.default_factory, self)
def __deepcopy__(self, memo):
import copy
return type(self)(self.default_factory,
copy.deepcopy(self.items()))
def __repr__(self):
return 'OrderedDefaultDict(%s, %s)' % (self.default_factory,
OrderedDict.__repr__(self))
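Usage looks just like defaultdict, but iteration follows insertion order (a small sketch, assuming the class above):
dod = DefaultOrderedDict(list)
dod['b'].append(1)
dod['a'].append(2)
print(list(dod.items()))  # [('b', [1]), ('a', [2])]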
Here is another possibility, inspired by Raymond Hettinger's super() Considered Super, tested on Python 2.7.X and 3.4.X:
from collections import OrderedDict, defaultdict
class OrderedDefaultDict(OrderedDict, defaultdict):
def __init__(self, default_factory=None, *args, **kwargs):
#in python3 you can omit the args to super
super(OrderedDefaultDict, self).__init__(*args, **kwargs)
self.default_factory = default_factory
If you check out the class's MRO (aka, help(OrderedDefaultDict)), you'll see this:
class OrderedDefaultDict(collections.OrderedDict, collections.defaultdict)
| Method resolution order:
| OrderedDefaultDict
| collections.OrderedDict
| collections.defaultdict
| __builtin__.dict
| __builtin__.object
meaning that when an instance of OrderedDefaultDict is initialized, it defers to the OrderedDict's init, but this one in turn will call the defaultdict's methods before calling __builtin__.dict, which is precisely what we want.
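A quick sketch of how that reads in practice, assuming the class above:
counts = OrderedDefaultDict(int)
for word in ['b', 'a', 'b']:
    counts[word] += 1
print(list(counts.items()))  # [('b', 2), ('a', 1)]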
If you want a simple solution that doesn't require a class, you can just use OrderedDict.setdefault(key, default=None) or OrderedDict.get(key, default=None). If you only get / set from a few places, say in a loop, you can easily just setdefault.
totals = collections.OrderedDict()
for i, x in some_generator():
totals[i] = totals.get(i, 0) + x
It is even easier for lists with setdefault:
agglomerate = collections.OrderedDict()
for i, x in some_generator():
agglomerate.setdefault(i, []).append(x)
But if you use it more than a few times, it is probably better to set up a class, like in the other answers.
Here's another solution to think about if your use case is simple like mine and you don't necessarily want to add the complexity of a DefaultOrderedDict class implementation to your code.
from collections import OrderedDict
keys = ['a', 'b', 'c']
items = [(key, None) for key in keys]
od = OrderedDict(items)
(None is my desired default value.)
Note that this solution won't work if one of your requirements is to dynamically insert new keys with the default value. A tradeoff of simplicity.
Update 3/13/17 - I learned of a convenience function for this use case. Same as above but you can omit the line items = ... and just:
od = OrderedDict.fromkeys(keys)
Output:
OrderedDict([('a', None), ('b', None), ('c', None)])
And if your keys are single characters, you can just pass one string:
OrderedDict.fromkeys('abc')
This has the same output as the two examples above.
You can also pass a default value as the second arg to OrderedDict.fromkeys(...).
Another simple approach would be to use dictionary get method
>>> from collections import OrderedDict
>>> d = OrderedDict()
>>> d['key'] = d.get('key', 0) + 1
>>> d['key'] = d.get('key', 0) + 1
>>> d
OrderedDict([('key', 2)])
>>>
A simpler version of #zeekay's answer is:
from collections import OrderedDict
class OrderedDefaultListDict(OrderedDict): #name according to default
def __missing__(self, key):
self[key] = value = [] #change to whatever default you want
return value
A simple and elegant solution building on #NickBread.
Has a slightly different API to set the factory, but good defaults are always nice to have.
from collections import OrderedDict

class OrderedDefaultDict(OrderedDict):
factory = list
def __missing__(self, key):
self[key] = value = self.factory()
return value
I created a slightly fixed and simplified version of the accepted answer, updated for Python 3.7.
from collections import OrderedDict
from copy import copy, deepcopy
import pickle
from typing import Any, Callable
class DefaultOrderedDict(OrderedDict):
def __init__(
self,
default_factory: Callable[[], Any],
*args,
**kwargs,
):
super().__init__(*args, **kwargs)
self.default_factory = default_factory
def __getitem__(self, key):
try:
return super().__getitem__(key)
except KeyError:
return self.__missing__(key)
def __missing__(self, key):
self[key] = value = self.default_factory()
return value
def __reduce__(self):
return type(self), (self.default_factory, ), None, None, iter(self.items())
def copy(self):
return self.__copy__()
def __copy__(self):
return type(self)(self.default_factory, self)
def __deepcopy__(self, memo):
return type(self)(self.default_factory, deepcopy(tuple(self.items()), memo))
def __repr__(self):
return f'{self.__class__.__name__}({self.default_factory}, {OrderedDict(self).__repr__()})'
And, what may be even more important, here are some tests.
a = DefaultOrderedDict(list)
# testing default
assert a['key'] == []
a['key'].append(1)
assert a['key'] == [1, ]
# testing repr
assert repr(a) == "DefaultOrderedDict(<class 'list'>, OrderedDict([('key', [1])]))"
# testing copy
b = a.copy()
assert b['key'] is a['key']
c = copy(a)
assert c['key'] is a['key']
d = deepcopy(a)
assert d['key'] is not a['key']
assert d['key'] == a['key']
# testing pickle
saved = pickle.dumps(a)
restored = pickle.loads(saved)
assert restored is not a
assert restored == a
# testing order
a['second_key'] = [2, ]
a['key'] = [3, ]
assert list(a.items()) == [('key', [3, ]), ('second_key', [2, ])]
Inspired by other answers on this thread, you can use something like,
from collections import OrderedDict
class OrderedDefaultDict(OrderedDict):
def __missing__(self, key):
value = OrderedDefaultDict()
self[key] = value
return value
I would like to know if there are any downsides to initializing another object of the same class in the __missing__ method.
I tested the default dict and discovered it's also sorted! Maybe it was just a coincidence, but anyway you can use the sorted function:
sorted(s.items())
I think it's simpler.
A frozen set is a frozenset.
A frozen list could be a tuple.
What would a frozen dict be? An immutable, hashable dict.
I guess it could be something like collections.namedtuple, but that is more like a frozen-keys dict (a half-frozen dict). Isn't it?
A "frozendict" should be a frozen dictionary, it should have keys, values, get, etc., and support in, for, etc.
Update: there it is: https://www.python.org/dev/peps/pep-0603
Python doesn't have a builtin frozendict type. It turns out this wouldn't be useful too often (though it would still probably be useful more often than frozenset is).
The most common reason to want such a type is when memoizing function calls for functions with unknown arguments. The most common solution to store a hashable equivalent of a dict (where the values are hashable) is something like tuple(sorted(kwargs.items())).
This depends on the sorting not being a bit insane. Python cannot positively promise sorting will result in something reasonable here. (But it can't promise much else, so don't sweat it too much.)
You could easily enough make some sort of wrapper that works much like a dict. It might look something like
import collections
class FrozenDict(collections.Mapping):
"""Don't forget the docstrings!!"""
def __init__(self, *args, **kwargs):
self._d = dict(*args, **kwargs)
self._hash = None
def __iter__(self):
return iter(self._d)
def __len__(self):
return len(self._d)
def __getitem__(self, key):
return self._d[key]
def __hash__(self):
# It would have been simpler and maybe more obvious to
# use hash(tuple(sorted(self._d.iteritems()))) from this discussion
# so far, but this solution is O(n). I don't know what kind of
# n we are going to run into, but sometimes it's hard to resist the
# urge to optimize when it will gain improved algorithmic performance.
if self._hash is None:
hash_ = 0
for pair in self.items():
hash_ ^= hash(pair)
self._hash = hash_
return self._hash
It should work great:
>>> x = FrozenDict(a=1, b=2)
>>> y = FrozenDict(a=1, b=2)
>>> x is y
False
>>> x == y
True
>>> x == {'a': 1, 'b': 2}
True
>>> d = {x: 'foo'}
>>> d[y]
'foo'
Curiously, although we have the seldom useful frozenset, there's still no frozen mapping. The idea was rejected in PEP 416 -- Add a frozendict builtin type. This idea may be revisited in a later Python release, see PEP 603 -- Adding a frozenmap type to collections.
So the Python 2 solution to this:
def foo(config={'a': 1}):
...
Still seems to be the usual:
def foo(config=None):
if config is None:
config = {'a': 1} # default config
...
In Python 3 you have the option of this:
from types import MappingProxyType
default_config = {'a': 1}
DEFAULTS = MappingProxyType(default_config)
def foo(config=DEFAULTS):
...
Now the default config can be updated dynamically, but remain immutable where you want it to be immutable by passing around the proxy instead.
So changes in the default_config will update DEFAULTS as expected, but you can't write to the mapping proxy object itself.
Admittedly it's not really the same thing as an "immutable, hashable dict", but it might be a decent substitute for some use cases of a frozendict.
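To illustrate that point, a short sketch continuing the example above:
default_config['b'] = 2   # mutate the underlying dict
print(DEFAULTS)           # mappingproxy({'a': 1, 'b': 2}) - the proxy sees the change
DEFAULTS['c'] = 3         # TypeError: 'mappingproxy' object does not support item assignment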
Assuming the keys and values of the dictionary are themselves immutable (e.g. strings) then:
>>> d
{'forever': 'atones', 'minks': 'cards', 'overhands': 'warranted',
'hardhearted': 'tartly', 'gradations': 'snorkeled'}
>>> t = tuple((k, d[k]) for k in sorted(d.keys()))
>>> hash(t)
1524953596
There is no frozendict, but you can use MappingProxyType, which was added to the standard library in Python 3.3:
>>> from types import MappingProxyType
>>> foo = MappingProxyType({'a': 1})
>>> foo
mappingproxy({'a': 1})
>>> foo['a'] = 2
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'mappingproxy' object does not support item assignment
>>> foo
mappingproxy({'a': 1})
I think of frozendict every time I write a function like this:
def do_something(blah, optional_dict_parm=None):
if optional_dict_parm is None:
optional_dict_parm = {}
Install frozendict
pip install frozendict
Use it!
from frozendict import frozendict
def smth(param = frozendict({})):
pass
Here is the code I've been using. I subclassed frozenset. The advantages of this are the following.
This is a truly immutable object. No relying on the good behavior of future users and developers.
It's easy to convert back and forth between a regular dictionary and a frozen dictionary. FrozenDict(orig_dict) --> frozen dictionary. dict(frozen_dict) --> regular dict.
Update Jan 21 2015: The original piece of code I posted in 2014 used a for-loop to find a key that matched. That was incredibly slow. Now I've put together an implementation which takes advantage of frozenset's hashing features. Key-value pairs are stored in special containers where the __hash__ and __eq__ functions are based on the key only. This code has also been formally unit-tested, unlike what I posted here in August 2014.
MIT-style license.
if 3 / 2 == 1:
version = 2
elif 3 / 2 == 1.5:
version = 3
def col(i):
''' For binding named attributes to spots inside subclasses of tuple.'''
g = tuple.__getitem__
@property
def _col(self):
return g(self,i)
return _col
class Item(tuple):
''' Designed for storing key-value pairs inside
a FrozenDict, which itself is a subclass of frozenset.
The __hash__ is overloaded to return the hash of only the key.
__eq__ is overloaded so that normally it only checks whether the Item's
key is equal to the other object, HOWEVER, if the other object itself
is an instance of Item, it checks BOTH the key and value for equality.
WARNING: Do not use this class for any purpose other than to contain
key value pairs inside FrozenDict!!!!
The __eq__ operator is overloaded in such a way that it violates a
fundamental property of mathematics. That property, which says that
a == b and b == c implies a == c, does not hold for this object.
Here's a demonstration:
[in] >>> x = Item(('a',4))
[in] >>> y = Item(('a',5))
[in] >>> hash('a')
[out] >>> 194817700
[in] >>> hash(x)
[out] >>> 194817700
[in] >>> hash(y)
[out] >>> 194817700
[in] >>> 'a' == x
[out] >>> True
[in] >>> 'a' == y
[out] >>> True
[in] >>> x == y
[out] >>> False
'''
__slots__ = ()
key, value = col(0), col(1)
def __hash__(self):
return hash(self.key)
def __eq__(self, other):
if isinstance(other, Item):
return tuple.__eq__(self, other)
return self.key == other
def __ne__(self, other):
return not self.__eq__(other)
def __str__(self):
return '%r: %r' % self
def __repr__(self):
return 'Item((%r, %r))' % self
class FrozenDict(frozenset):
''' Behaves in most ways like a regular dictionary, except that it's immutable.
It differs from other implementations because it doesn't subclass "dict".
Instead it subclasses "frozenset" which guarantees immutability.
FrozenDict instances are created with the same arguments used to initialize
regular dictionaries, and has all the same methods.
[in] >>> f = FrozenDict(x=3,y=4,z=5)
[in] >>> f['x']
[out] >>> 3
[in] >>> f['a'] = 0
[out] >>> TypeError: 'FrozenDict' object does not support item assignment
FrozenDict can accept un-hashable values, but FrozenDict is only hashable if its values are hashable.
[in] >>> f = FrozenDict(x=3,y=4,z=5)
[in] >>> hash(f)
[out] >>> 646626455
[in] >>> g = FrozenDict(x=3,y=4,z=[])
[in] >>> hash(g)
[out] >>> TypeError: unhashable type: 'list'
FrozenDict interacts with dictionary objects as though it were a dict itself.
[in] >>> original = dict(x=3,y=4,z=5)
[in] >>> frozen = FrozenDict(x=3,y=4,z=5)
[in] >>> original == frozen
[out] >>> True
FrozenDict supports bi-directional conversions with regular dictionaries.
[in] >>> original = {'x': 3, 'y': 4, 'z': 5}
[in] >>> FrozenDict(original)
[out] >>> FrozenDict({'x': 3, 'y': 4, 'z': 5})
[in] >>> dict(FrozenDict(original))
[out] >>> {'x': 3, 'y': 4, 'z': 5} '''
__slots__ = ()
def __new__(cls, orig={}, **kw):
if kw:
d = dict(orig, **kw)
items = map(Item, d.items())
else:
try:
items = map(Item, orig.items())
except AttributeError:
items = map(Item, orig)
return frozenset.__new__(cls, items)
def __repr__(self):
cls = self.__class__.__name__
items = frozenset.__iter__(self)
_repr = ', '.join(map(str,items))
return '%s({%s})' % (cls, _repr)
def __getitem__(self, key):
if key not in self:
raise KeyError(key)
diff = self.difference
item = diff(diff({key}))
key, value = set(item).pop()
return value
def get(self, key, default=None):
if key not in self:
return default
return self[key]
def __iter__(self):
items = frozenset.__iter__(self)
return map(lambda i: i.key, items)
def keys(self):
items = frozenset.__iter__(self)
return map(lambda i: i.key, items)
def values(self):
items = frozenset.__iter__(self)
return map(lambda i: i.value, items)
def items(self):
items = frozenset.__iter__(self)
return map(tuple, items)
def copy(self):
cls = self.__class__
items = frozenset.copy(self)
dupl = frozenset.__new__(cls, items)
return dupl
@classmethod
def fromkeys(cls, keys, value):
d = dict.fromkeys(keys,value)
return cls(d)
def __hash__(self):
kv = tuple.__hash__
items = frozenset.__iter__(self)
return hash(frozenset(map(kv, items)))
def __eq__(self, other):
if not isinstance(other, FrozenDict):
try:
other = FrozenDict(other)
except Exception:
return False
return frozenset.__eq__(self, other)
def __ne__(self, other):
return not self.__eq__(other)
if version == 2:
#Here are the Python2 modifications
class Python2(FrozenDict):
def __iter__(self):
items = frozenset.__iter__(self)
for i in items:
yield i.key
def iterkeys(self):
items = frozenset.__iter__(self)
for i in items:
yield i.key
def itervalues(self):
items = frozenset.__iter__(self)
for i in items:
yield i.value
def iteritems(self):
items = frozenset.__iter__(self)
for i in items:
yield (i.key, i.value)
def has_key(self, key):
return key in self
def viewkeys(self):
return dict(self).viewkeys()
def viewvalues(self):
return dict(self).viewvalues()
def viewitems(self):
return dict(self).viewitems()
#If this is Python2, rebuild the class
#from scratch rather than use a subclass
py3 = FrozenDict.__dict__
py3 = {k: py3[k] for k in py3}
py2 = {}
py2.update(py3)
dct = Python2.__dict__
py2.update({k: dct[k] for k in dct})
FrozenDict = type('FrozenDict', (frozenset,), py2)
You may use frozendict from utilspie package as:
>>> from utilspie.collectionsutils import frozendict
>>> my_dict = frozendict({1: 3, 4: 5})
>>> my_dict # object of `frozendict` type
frozendict({1: 3, 4: 5})
# Hashable
>>> {my_dict: 4}
{frozendict({1: 3, 4: 5}): 4}
# Immutable
>>> my_dict[1] = 5
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/mquadri/workspace/utilspie/utilspie/collectionsutils/collections_utils.py", line 44, in __setitem__
self.__setitem__.__name__, type(self).__name__))
AttributeError: You can not call '__setitem__()' for 'frozendict' object
As per the document:
frozendict(dict_obj): Accepts obj of dict type and returns a hashable and immutable dict
Subclassing dict
I see this pattern in the wild (GitHub) and wanted to mention it:
class FrozenDict(dict):
def __init__(self, *args, **kwargs):
self._hash = None
super(FrozenDict, self).__init__(*args, **kwargs)
def __hash__(self):
if self._hash is None:
self._hash = hash(tuple(sorted(self.items()))) # iteritems() on py2
return self._hash
def _immutable(self, *args, **kws):
raise TypeError('cannot change object - object is immutable')
# makes (deep)copy a lot more efficient
def __copy__(self):
return self
def __deepcopy__(self, memo=None):
if memo is not None:
memo[id(self)] = self
return self
__setitem__ = _immutable
__delitem__ = _immutable
pop = _immutable
popitem = _immutable
clear = _immutable
update = _immutable
setdefault = _immutable
example usage:
d1 = FrozenDict({'a': 1, 'b': 2})
d2 = FrozenDict({'a': 1, 'b': 2})
d1.keys()
assert isinstance(d1, dict)
assert len(set([d1, d2])) == 1 # hashable
Pros
support for get(), keys(), items() (iteritems() on py2) and all the goodies from dict out of the box without explicitly implementing them
internally uses dict, which means performance (dict is written in C in CPython)
elegant simple and no black magic
isinstance(my_frozen_dict, dict) returns True - although Python encourages duck typing, many packages use isinstance(); this can save many tweaks and customizations
Cons
any subclass can override this or access it internally (you can't really 100% protect something in Python; you should trust your users and provide good documentation).
if you care for speed, you might want to make __hash__ a bit faster.
Yes, this is my second answer, but it is a completely different approach. The first implementation was in pure python. This one is in Cython. If you know how to use and compile Cython modules, this is just as fast as a regular dictionary. Roughly .04 to .06 micro-sec to retrieve a single value.
This is the file "frozen_dict.pyx"
import cython
from collections import Mapping
cdef class dict_wrapper:
cdef object d
cdef int h
def __init__(self, *args, **kw):
self.d = dict(*args, **kw)
self.h = -1
def __len__(self):
return len(self.d)
def __iter__(self):
return iter(self.d)
def __getitem__(self, key):
return self.d[key]
def __hash__(self):
if self.h == -1:
self.h = hash(frozenset(self.d.iteritems()))
return self.h
class FrozenDict(dict_wrapper, Mapping):
def __repr__(self):
c = type(self).__name__
r = ', '.join('%r: %r' % (k,self[k]) for k in self)
return '%s({%s})' % (c, r)
__all__ = ['FrozenDict']
Here's the file "setup.py"
from distutils.core import setup
from Cython.Build import cythonize
setup(
ext_modules = cythonize('frozen_dict.pyx')
)
If you have Cython installed, save the two files above into the same directory. Move to that directory in the command line.
python setup.py build_ext --inplace
python setup.py install
And you should be done.
The main disadvantage of namedtuple is that it needs to be specified before it is used, so it's less convenient for single-use cases.
However, there is a practical workaround that can be used to handle many such cases. Let's say that you want to have an immutable equivalent of the following dict:
MY_CONSTANT = {
'something': 123,
'something_else': 456
}
This can be emulated like this:
from collections import namedtuple
MY_CONSTANT = namedtuple('MyConstant', 'something something_else')(123, 456)
It's even possible to write an auxiliary function to automate this:
def freeze_dict(data):
from collections import namedtuple
keys = sorted(data.keys())
frozen_type = namedtuple(''.join(keys), keys)
return frozen_type(**data)
a = {'foo':'bar', 'x':'y'}
fa = freeze_dict(a)
assert a['foo'] == fa.foo
Of course this works only for flat dicts, but it shouldn't be too difficult to implement a recursive version.
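A possible recursive version, as a sketch (the helper name is illustrative):
from collections import namedtuple

def freeze_dict_recursive(data):
    # freeze nested dicts first, then the current level
    frozen = {k: freeze_dict_recursive(v) if isinstance(v, dict) else v
              for k, v in data.items()}
    keys = sorted(frozen.keys())
    frozen_type = namedtuple(''.join(keys), keys)
    return frozen_type(**frozen)

nested = freeze_dict_recursive({'outer': {'inner': 1}, 'x': 'y'})
assert nested.outer.inner == 1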
freeze implements frozen collections (dict, list and set) that are hashable, type-hinted and will recursively freeze the data you give them (when possible) for you.
pip install frz
Usage:
from freeze import FDict
a_mutable_dict = {
"list": [1, 2],
"set": {3, 4},
}
a_frozen_dict = FDict(a_mutable_dict)
print(repr(a_frozen_dict))
# FDict: {'list': FList: (1, 2), 'set': FSet: {3, 4}}
In the absence of native language support, you can either do it yourself or use an existing solution. Fortunately, Python makes it dead simple to extend its base implementations.
class frozen_dict(dict):
def __setitem__(self, key, value):
raise Exception('Frozen dictionaries cannot be mutated')
frozen_dict = frozen_dict({'foo': 'FOO' })
print(frozen_dict['foo']) # FOO
frozen_dict['foo'] = 'NEWFOO' # Exception: Frozen dictionaries cannot be mutated
# OR
from types import MappingProxyType
frozen_dict = MappingProxyType({'foo': 'FOO'})
print(frozen_dict['foo']) # FOO
frozen_dict['foo'] = 'NEWFOO' # TypeError: 'mappingproxy' object does not support item assignment
I needed to access fixed keys at one point, for something that was a sort of globally-constant kind of thing, and I settled on something like this:
class MyFrozenDict:
def __getitem__(self, key):
if key == 'mykey1':
return 0
if key == 'mykey2':
return "another value"
raise KeyError(key)
Use it like
a = MyFrozenDict()
print(a['mykey1'])
WARNING: I don't recommend this for most use cases as it makes some pretty severe tradeoffs.
While looking over some code in Think Complexity, I noticed their Graph class assigning values to itself. I've copied a few important lines from that class and written an example class, ObjectChild, that fails at this behavior.
class Graph(dict):
def __init__(self, vs=[], es=[]):
for v in vs:
self.add_vertex(v)
for e in es:
self.add_edge(e)
def add_edge(self, e):
v, w = e
self[v][w] = e
self[w][v] = e
def add_vertex(self, v):
self[v] = {}
class ObjectChild(object):
def __init__(self, name):
self['name'] = name
I'm sure the different built in types all have their own way of using this, but I'm not sure whether this is something I should try to build into my classes. Is it possible, and how? Is this something I shouldn't bother with, relying instead on simple composition, e.g. self.l = [1, 2, 3]? Should it be avoided outside built in types?
I ask because I was told "You should almost never inherit from the builtin python collections"; advice I'm hesitant to restrict myself to.
To clarify, I know that ObjectChild won't "work", and I could easily make it "work", but I'm curious about the inner workings of these built in types that makes their interface different from a child of object.
In Python 3 and later, just add these simple functions to your class:
class some_class(object):
def __setitem__(self, key, value):
setattr(self, key, value)
def __getitem__(self, key):
return getattr(self, key)
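With that in place, item syntax and attribute syntax hit the same data (a quick sketch):
obj = some_class()
obj['answer'] = 42     # stored via setattr
print(obj.answer)      # 42
print(obj['answer'])   # 42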
They are accomplishing this magic by inheriting from dict. A better way of doing this is to inherit from UserDict or collections.abc.MutableMapping (the old collections.MutableMapping alias was removed in Python 3.10).
You could accomplish a similar result by doing the same:
from collections.abc import MutableMapping
class ObjectChild(MutableMapping):
def __init__(self, name):
self['name'] = name
You can also define two special functions to make your class dictionary-like: __getitem__(self, key) and __setitem__(self, key, value). You can see an example of this at Dive Into Python - Special Class Methods.
Disclaimer: I might be wrong.
The notation
self[something]
is legit in the Graph class because it inherits from dict. This notation comes from the dictionary syntax, not from the class attribute declaration syntax.
Although all namespaces associated with a class are dictionaries, in your class ObjectChild, self isn't a dictionary. Therefore you can't use that syntax.
On the other hand, in your class Graph, self IS a dictionary, since it is a graph, and all graphs are dictionaries because they inherit from dict.
Is using something like this ok?
def mk_opts_dict(d):
''' mk_options_dict(dict) -> an instance of OptionsDict '''
class OptionsDict(object):
def __init__(self, d):
self.__dict__ = d
def __setitem__(self, key, value):
self.__dict__[key] = value
def __getitem__(self, key):
return self.__dict__[key]
return OptionsDict(d)
I realize this is an old post, but I was looking for some details around item assignment and stumbled upon the answers here. Ted's post wasn't completely wrong. To avoid inheritance from dict, you can make a class inherit from MutableMapping, and then provide methods for __setitem__ and __getitem__.
Additionally, the class will need to support methods for __delitem__, __iter__, __len__, and (optionally) other inherited mixin methods, like pop. The documentation has more info on the details.
from collections.abc import MutableMapping
class ItemAssign(MutableMapping):
def __init__(self, a, b):
self.a = a
self.b = b
def __setitem__(self, k, v):
setattr(self, k, v)
def __getitem__(self, k):
return getattr(self, k)
def __len__(self):
return 2
def __delitem__(self, k):
self[k] = None
def __iter__(self):
yield self.a
yield self.b
Example use:
>>> x = ItemAssign("banana","apple")
>>> x["a"] = "orange"
>>> x.a
'orange'
>>> del x["a"]
>>> print(x.a)
None
>>> x.pop("b")
'apple'
>>> print(x.b)
None
Hope this serves to clarify how to properly implement item assignment for others stumbling across this post :)
Your ObjectChild doesn't work because it's not a subclass of dict. Either of these would work:
class ObjectChild(dict):
def __init__(self, name):
self['name'] = name
or
class ObjectChild(object):
def __init__(self, name):
self.name = name
You don't need to inherit from dict. If you provide __setitem__ and __getitem__ methods, you get the desired behavior too, I believe.
class a(object):
def __setitem__(self, k, v):
self._data[k] = v
def __getitem__(self, k):
return self._data[k]
_data = {}
A little memo about dict inheritance
For those who want to inherit from dict:
in this case MyDict will hold a shallow copy of the original dict.
class MyDict(dict):
...
d = {'a': 1}
md = MyDict(d)
print(d['a']) # 1
print(md['a']) # 1
md['a'] = 'new'
print(d['a']) # 1
print(md['a']) # new
This could lead to problems when you have a tree of nested dicts and you want to convert part of it to an object. Changing a top-level key of this object will not affect its parent, but changes inside shared nested dicts still show up in the parent:
root = {
'obj': {
'a': 1,
'd': {'x': True}
}
}
obj = MyDict(root['obj'])
obj['a'] = 2
print(root) # {'obj': {'a': 1, 'd': {'x': True}}} # 'a' is the same
obj['d']['x'] = False
print(root) # {'obj': {'a': 1, 'd': {'x': False}}} # 'x' changed