Python: unable to access generated functions in a python class - python

I have a class that contains a nested dictionary that I want to make getters and setters for. I use a depth first search to generate the functions and add them to the class's __dict__ attribute, but when I try to call any of the generated functions, I just get an AttributeError: 'MyClass' object has no attribute 'getA'.
import operator
from functools import reduce

class MyClass:
    def __init__(self):
        self.dictionary = {
            "a": {
                "b": 1,
                "c": 2
            },
            "d": {
                "e": {
                    "f": 3,
                    "g": 4
                }
            }
        }
        self.addGettersSetters()

    def addGettersSetters(self):
        def makegetter(self, keyChain):
            def func():
                return reduce(operator.getitem, keyChain, self.dictionary)
            return func

        def makesetter(self, keyChain):
            def func(arg):
                print("setter ", arg)
                path = self.dictionary
                for i in keyChain[:-1]:
                    path = path[i]
                path[keyChain[-1]] = arg
            return func

        # depth first search of dictionary
        def recurseDict(self, dictionary, keyChain=[]):
            for key, value in dictionary.items():
                keyChain.append(key)
                # capitalize the first letter of each part of the keychain
                # for the function name
                capKeyChain = [i.title().replace(" ", "") for i in keyChain]
                # setter version
                print('set{}'.format("".join(capKeyChain)))
                self.__dict__['set{}'.format("".join(capKeyChain))] = makesetter(self, keyChain)
                # getter version
                print('get{}'.format("".join(capKeyChain)))
                self.__dict__['set{}'.format("".join(capKeyChain))] = makegetter(self, keyChain)
                # recurse down the dictionary chain
                if isinstance(value, dict):
                    recurseDict(self, dictionary=value, keyChain=keyChain)
                # remove the last key for the next iteration
                while keyChain[-1] != key:
                    keyChain = keyChain[:-1]
                keyChain = keyChain[:-1]

        recurseDict(self, self.dictionary)
        print(self.__dict__)

if __name__ == '__main__':
    myclass = MyClass()
    print(myclass.getA())
If you run this code, it prints the names of all of the generated functions as well as the state of __dict__ after generating them, then terminates with the AttributeError.
What has me puzzled is that I modelled this on another piece of code that uses essentially the same methodology to generate getters and setters. That piece of code works just fine, but mine does not, and despite going over both carefully I am at a loss as to why. What am I missing here?
For reference, I am running Anaconda Python 3.6.3.
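As a sanity check (this snippet is mine, not from the original question): a callable stored in an instance's __dict__ is found by ordinary attribute lookup, so the injection mechanism itself works, and any failure has to come from the names under which the generated functions are actually stored:

```python
class Demo:
    def __init__(self):
        # a plain function stored in the instance __dict__ is
        # reachable by normal attribute lookup; no descriptors needed
        self.__dict__['getA'] = lambda: 42

d = Demo()
print(d.getA())  # 42
```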

Related

Nested dictionary that acts as defaultdict when setting items but not when getting items

I want to implement a dict-like data structure that has the following properties:
from collections import UserDict

class TestDict(UserDict):
    pass

test_dict = TestDict()

# Create empty dictionaries at 'level_1' and 'level_2' and insert
# 'Hello' at the 'level_3' key.
test_dict['level_1']['level_2']['level_3'] = 'Hello'

>>> test_dict
{
    'level_1': {
        'level_2': {
            'level_3': 'Hello'
        }
    }
}

# However, this should not return an empty dictionary but raise a KeyError.
>>> test_dict['unknown_key']
KeyError: 'unknown_key'
The problem, to my knowledge, is that Python does not know whether __getitem__ is being called in the context of setting an item, as in the first example, or in the context of getting an item, as in the second example.
I have already seen Python `defaultdict`: Use default when setting, but not when getting, but I do not think that this question is a duplicate, or that it answers my question.
Please let me know if you have any ideas.
Thanks in advance.
EDIT:
It is possible to achieve something similar using:
from typing import Union
from collections import UserDict

def set_nested_item(dict_in: Union[dict, "TestDict"], value, keys):
    for i, key in enumerate(keys):
        is_last = i == (len(keys) - 1)
        if is_last:
            dict_in[key] = value
        else:
            if key not in dict_in:
                dict_in[key] = {}
            else:
                if not isinstance(dict_in[key], (dict, TestDict)):
                    dict_in[key] = {}
            dict_in[key] = set_nested_item(dict_in[key], value, keys[(i + 1):])
            return dict_in  # the recursion has already consumed the remaining keys
    return dict_in

class TestDict(UserDict):
    def __init__(self):
        super().__init__()

    def __setitem__(self, key, value):
        if isinstance(key, list):
            self.update(set_nested_item(self, value, key))
        else:
            super().__setitem__(key, value)

test_dict = TestDict()
test_dict[['level_1', 'level_2', 'level_3']] = 'Hello'
>>> test_dict
{
    'level_1': {
        'level_2': {
            'level_3': 'Hello'
        }
    }
}
It's impossible.
test_dict['level_1']['level_2']['level_3'] = 'Hello'
is semantically equivalent to:
temp1 = test_dict['level_1'] # Should this line fail?
temp1['level_2']['level_3'] = 'Hello'
But... if determined to implement it anyway, you could inspect the Python stack to grab/parse the calling line of code, and then vary the behaviour depending on whether the calling line of code contains an assignment! Unfortunately, sometimes the calling code isn't available in the stack trace (e.g. when called interactively), in which case you need to work with Python bytecode.
import dis
import inspect
from collections import UserDict

def get_opcodes(code_object, lineno):
    """Utility function to extract the Python VM opcodes for a line of code"""
    line_ops = []
    instructions = iter(dis.get_instructions(code_object))
    for instruction in instructions:
        if instruction.starts_line == lineno:
            # found the start of our line
            line_ops.append(instruction.opcode)
            break
    for instruction in instructions:
        if not instruction.starts_line:
            line_ops.append(instruction.opcode)
        else:
            # start of the next line
            break
    return line_ops

class TestDict(UserDict):
    def __getitem__(self, key):
        try:
            return super().__getitem__(key)
        except KeyError:
            # inspect the stack to get the calling line of code
            frame = inspect.stack()[1].frame
            opcodes = get_opcodes(frame.f_code, frame.f_lineno)
            # STORE_SUBSCR is the Python opcode for TOS1[TOS] = TOS2
            if dis.opmap['STORE_SUBSCR'] in opcodes:
                # the calling line of code contains a dict/array assignment
                default = TestDict()
                super().__setitem__(key, default)
                return default
            else:
                raise

test_dict = TestDict()
test_dict['level_1']['level_2']['level_3'] = 'Hello'
print(test_dict)
# {'level_1': {'level_2': {'level_3': 'Hello'}}}
test_dict['unknown_key']
# KeyError: 'unknown_key'
The above is just a partial solution. It can still be fooled if there are other dictionary/array assignments on the same line, e.g. other['key'] = test_dict['unknown_key']. A more complete solution would need to actually parse the line of code to figure out where the variable occurs in the assignment.

Is there a Python equivalent for Swift's @dynamicMemberLookup?

In Swift, you can define @dynamicMemberLookup (see documentation) to get direct access to properties that are nested inside another type. Is there a Python equivalent?
Example of what I want to achieve with Python
Let's say I have a class with members, e.g.:
c = OuterClass()
c.inner_class = ClassWithManyMembers()
c.inner_class.member1 = "1"
c.inner_class.member2 = "2"
c.inner_class.member3 = "3"
I would like to be able to get/set those members without having to type the inner_class every time:
print(c.member1) # prints "1"
c.member1 = 3
print(c.member1) # prints "3"
Example in Swift (Source):
Dynamic member lookup by member name:

@dynamicMemberLookup
struct DynamicStruct {
    let dictionary = ["someDynamicMember": 325,
                      "someOtherMember": 787]
    subscript(dynamicMember member: String) -> Int {
        return dictionary[member] ?? 1054
    }
}

let s = DynamicStruct()
// Use dynamic member lookup.
let dynamic = s.someDynamicMember
print(dynamic)
// Prints "325"
Dynamic member lookup by key path
struct Point { var x, y: Int }

@dynamicMemberLookup
struct PassthroughWrapper<Value> {
    var value: Value
    subscript<T>(dynamicMember member: KeyPath<Value, T>) -> T {
        get { return value[keyPath: member] }
    }
}

let point = Point(x: 381, y: 431)
let wrapper = PassthroughWrapper(value: point)
print(wrapper.x)
My only idea in Python would be to monkey-patch all nested properties directly to the outer class.
I would advise against nesting classes in one another, but if you must do it, try this:
class MetaOuter(type):
    def __getattr__(cls, attr):
        for member in cls.__dict__.values():
            if hasattr(member, attr):
                return getattr(member, attr)
        raise AttributeError(attr)

    def __setattr__(cls, attr, value):
        for member in cls.__dict__.values():
            if hasattr(member, attr):
                setattr(member, attr, value)
                return
        super().__setattr__(attr, value)

class Outer(metaclass=MetaOuter):
    a = 0

    class Inner:
        x = 1
        y = 2
Now any attributes of a nested class inside Outer are available (and can be written to) as an attribute of Outer:
>>> Outer.x, Outer.y
(1, 2)
>>> Outer.a # Accessing regular attributes still works as usual
0
>>> Outer.x = True
>>> Outer.Inner.x
True
If you need to nest more than one level, use the same meta class for any inner encapsulating classes:
class Outer(metaclass=MetaOuter):
    a = 0

    class Inner(metaclass=MetaOuter):
        x = 1
        y = 2

        class Innerer:
            z = 42
>>> Outer.a, Outer.x, Outer.y, Outer.z
(0, 1, 2, 42)
>>> Outer.z = -1
>>> Outer.z
-1
Note: Be aware that if you're trying to access an attribute that is found in multiple nested classes, you can't be sure of which class the attribute will come from. A more predictable implementation in this case would be to handle some kind of key path that will be looked up, but that's essentially the same as what Python provides by default (e.g., Outer.Inner.Innerer.z).
Generally, you can just save a reference to the inner object when you want to make repeated accesses to it.
c = OuterClass()
c.inner_class = ClassWithManyMembers()
ic = c.inner_class
print(ic.member1)
print(ic.member2)
print(ic.member3)
ic.member1 = "5"
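If per-instance forwarding is what the question is really after, the closest Python analogue to @dynamicMemberLookup is plain __getattr__/__setattr__. A minimal sketch (Passthrough and _inner are illustrative names of mine, not an established API, and it assumes a single wrapped object):

```python
class Passthrough:
    """Forward unknown attribute reads and writes to a wrapped object."""

    def __init__(self, inner):
        # bypass our own __setattr__ while _inner does not exist yet
        object.__setattr__(self, '_inner', inner)

    def __getattr__(self, name):
        # only called when normal lookup on the wrapper fails
        return getattr(self._inner, name)

    def __setattr__(self, name, value):
        # write through if the inner object already owns the attribute
        if hasattr(self._inner, name):
            setattr(self._inner, name, value)
        else:
            object.__setattr__(self, name, value)
```

Wrapping c.inner_class this way would give direct access such as w.member1 without touching metaclasses, at the cost of one explicit wrapper object.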

how to save frequently used physical constants in python

I would like to have a place for my physical constants.
The following answer is already a starting point:
How-to import constants in many files
So I have a separate file called constants.py which I import into my projects.
Now, I would like to save and access additional information:
units
documentation
The resulting interface should be like:
import constants as c
print c.R
>>> 287.102
print c.R.units
>>> J/(kg K)
print c.R.doc
>>> ideal gas constant
Calculations should use c.R to access the value.
It is basically a class which behaves like the float class but holds two additional strings: units and documentation.
How can this be designed?
Inheriting from float, you have to override the __new__ method:

class Constant(float):
    def __new__(cls, value, units, doc):
        self = float.__new__(cls, value)
        self.units = units
        self.doc = doc
        return self

R = Constant(287.102, "J/(kg K)", "ideal gas constant")

print R, R * 2
>>> 287.102 574.204
print R.units
>>> J/(kg K)
print R.doc
>>> ideal gas constant
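One caveat worth knowing about this approach (my observation, not part of the original answer): arithmetic on a Constant falls back to float's operators, which return plain floats, so derived values silently lose the metadata:

```python
# Arithmetic on a float subclass returns plain floats, so the
# units/doc attributes do not propagate to computed results.
class Constant(float):
    def __new__(cls, value, units, doc):
        self = float.__new__(cls, value)
        self.units = units
        self.doc = doc
        return self

R = Constant(287.102, "J/(kg K)", "ideal gas constant")

assert R.units == "J/(kg K)"
assert type(R * 2) is float          # no longer a Constant
assert not hasattr(R * 2, "units")   # metadata is gone
```

If unit propagation matters, overriding the arithmetic dunders, or a dedicated units library such as pint, is the usual route.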
I recommend using the json library, which will allow you to store your constant values in a readable and modifiable format.
Using @Daniel's Constant class, which inherits from float and adds your custom attributes, you can load all your constants at once into a new Constants object.
You can then get these attributes as c.R to access the value.
Complete file:
#!/usr/bin/env python
import json

class Constant(float):
    def __new__(cls, value):
        self = float.__new__(cls, value["value"])  # KeyError if missing "value"
        self.units = value.get("units", None)
        self.doc = value.get("doc", None)
        return self

class Constants():
    # load the json file into a dictionary of Constant objects
    def __init__(self):
        with open("constants.json") as fh:
            json_object = json.load(fh)
        # create a new dictionary
        self.constants_dict = {}
        for constant in json_object.keys():
            # put each Constant into it
            self.constants_dict[constant] = Constant(json_object[constant])

    # try to get the requested attribute
    def __getattr__(self, name):
        # missing keys are returned as None; use self.constants_dict[name]
        # if you want to raise a KeyError instead
        return self.constants_dict.get(name, None)

c = Constants()
print c.R         # 287.102
print c.R.doc     # ideal gas constant
print c.R + 5     # 292.102
print c.F.units   # C mol-1
print c.missing   # None
Example constants.json:
{
    "R": {
        "value": 287.102,
        "units": "J/(kg K)",
        "doc": "ideal gas constant"
    },
    "F": {
        "value": 96485.33,
        "units": "C mol-1",
        "doc": "Faraday constant"
    }
}

Reverse of `__getitem__`

d[x] where d is a dict, invokes d.__getitem__(x). Is there a way to create a class F, so that y=F(X); d[y] would invoke some method in F instead: y.someMethod(d)?
Background: I'm trying to make a dict with "aliased" keys, so that if I have d[a] = 42, then d[alias_of_a] would return 42 as well. This is pretty straightforward with a custom __getitem__, for example:
class oneOf(object):
    def __init__(self, *keys):
        self.keys = keys

class myDict(dict):
    def __getitem__(self, item):
        if isinstance(item, oneOf):
            for k in item.keys:
                if k in self:
                    return self[k]
        return dict.__getitem__(self, item)

a = myDict({
    'Alpha': 1,
    'B': 2,
})

print a[oneOf('A', 'Alpha')]
print a[oneOf('B', 'Bravo')]
However, I'm wondering if it could be possible without overriding dict:
a = {
'Alpha': 1,
'B': 2,
}
print a[???('A', 'Alpha')]
print a[???('B', 'Bravo')]
If this is not possible, how to make it work the other way round:
a = {
???('A', 'Alpha'): 1,
???('B', 'Bravo'): 2,
}
print a['A']
print a['Bravo']
What is important to me is that I'd like to avoid extending dict.
This use-case is impossible:
a = {
'Alpha': 1,
'B': 2,
}
a[???('A', 'Alpha')]
a[???('B', 'Bravo')]
This is because the dict will first hash the object. In order to force a collision, which will allow overriding equality to take hold, the hashes need to match. But ???('A', 'Alpha') can only hash to one of 'A' or 'Alpha', and if it makes the wrong choice it has failed.
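To make the hashing constraint concrete, here is a small illustrative sketch (OneOf is a made-up name, not from the question): even when __eq__ claims to match several keys, the single __hash__ value decides which bucket is probed, so the wrong guess necessarily misses.

```python
class OneOf:
    """__eq__ matches several keys, but __hash__ must commit to one."""
    def __init__(self, *keys):
        self.keys = keys

    def __eq__(self, other):
        return other in self.keys

    def __hash__(self):
        # forced to pick a single bucket up front
        return hash(self.keys[0])

d = {'Alpha': 1, 'B': 2}

probe = OneOf('A', 'Alpha')
assert probe == 'Alpha'       # equality would match...
assert d.get(probe) is None   # ...but the lookup probes the 'A' bucket and misses

# it only works when the first (hashed) candidate happens to be right:
assert d.get(OneOf('B', 'Bravo')) == 2
```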
The other use-case has a similar deduction applied to it:
a = {
???('A', 'Alpha'): 1,
???('B', 'Bravo'): 2,
}
a['A']
a['Bravo']
a['A'] will look up with a different hash to a['Alpha'], so again ???('A', 'Alpha') needs to have both hashes, which is impossible.
You need cooperation from both the keys and the values in order for this to work.
You could in theory use inspect.getouterframes in the __hash__ method to check the values of the dictionary, but this would only work if dictionaries had Python frames. If your intent is to monkey patch a function that sort-of does what you want but not quite, this might (just about) work(ish, sort of).
import inspect

class VeryHackyAnyOfHack:
    def __init__(self, variable_name_hack, *args):
        self.variable_name_hack = variable_name_hack
        self.equal_to = args

    def __eq__(self, other):
        return other in self.equal_to

    def __hash__(self):
        outer_frame = inspect.getouterframes(inspect.currentframe())[1]
        assumed_target_dict = outer_frame[0].f_locals[self.variable_name_hack]
        for item in self.equal_to:
            if item in assumed_target_dict:
                return hash(item)
        # Failure: fall back to the hash of the first candidate key
        return hash(self.equal_to[0])

This is used like so:

import random

def check_thing_against_dict(item):
    if random.choice([True, False]):
        internal_dict = {"red": "password123"}
    else:
        internal_dict = {"blue": "password123"}
    return internal_dict[item]

myhack = VeryHackyAnyOfHack('internal_dict', "red", "blue")
check_thing_against_dict(myhack)
#>>> 'password123'
Again, the very fact that you have to do this means that in practice it's not possible. It's also a language extension, so this isn't portable.
The built-in dict provides very simple lookup semantics: given a hashable object x, return the object y that x was mapped to previously. If you want multiple keys that map to the same object, you'll need to set that up explicitly:
# First, initialize the dictionary with one key per equivalence class
a = { 'a': 1, 'b': 2 }
# Then, set up any aliases.
a['Alpha'] = a['a']
a['Bravo'] = a['b']
The TransformDict class that was being considered for inclusion in Python 3.5 (PEP 455, ultimately not accepted) would simplify this somewhat by allowing you to replace step 2 with a "secondary" lookup function that maps a given key to its canonical representation prior to the primary lookup. Something like:
def key_transform(key):
    if key in {'Alpha', 'Aleph'}:
        return 'a'
    elif key in {'Bravo', 'Beta', 'Beth'}:
        return 'b'
    return key  # canonical keys map to themselves

a = TransformDict(key_transform, a=1, b=2)
assert a['Alpha'] is a['a']
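Since TransformDict never landed in the standard library, a minimal stand-in is easy to write. This is a sketch of mine, assuming only item access needs covering (the real proposal also handled views, get, pop, and so on):

```python
# Minimal stand-in for the proposed TransformDict: the transform is
# applied on every key access, so all aliases share one canonical slot.
class TransformDict(dict):
    def __init__(self, transform, **initial):
        super().__init__()
        self._transform = transform
        for key, value in initial.items():
            self[key] = value

    def __setitem__(self, key, value):
        # store under the canonical key
        super().__setitem__(self._transform(key), value)

    def __getitem__(self, key):
        # look up via the canonical key, so aliases all hit the same slot
        return super().__getitem__(self._transform(key))

    def __contains__(self, key):
        return super().__contains__(self._transform(key))

def key_transform(key):
    if key in {'Alpha', 'Aleph'}:
        return 'a'
    if key in {'Bravo', 'Beta', 'Beth'}:
        return 'b'
    return key

a = TransformDict(key_transform, a=1, b=2)
assert a['Alpha'] == a['a'] == 1
assert 'Beta' in a
```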

python: behavior of json.dumps on dict

I am trying to override the behavior of dict in json.dumps, for instance so that I can order the keys. Thus, I create a class which inherits from dict and override some of its methods.
import json

class A(dict):
    def __iter__(self):
        for i in range(10):
            yield i

    def __getitem__(self, name):
        return None

print json.dumps(A())
But it does not call any of my methods and only gives me {}
There is a way to get the right behavior:
import json

class A(dict):
    def __init__(self):
        dict.__init__(self, {None: None})

    def __iter__(self):
        for i in range(10):
            yield i

    def __getitem__(self, name):
        return None

print json.dumps(A())

Which finally gives {"0": null, "1": null, "2": null, "3": null, "4": null, "5": null, "6": null, "7": null, "8": null, "9": null}
Thus, it is clear that the C implementation of json.dumps somehow tests whether the dict is empty. Unfortunately, I cannot figure out which method is called. First, __getattribute__ does not work, and second, I have overridden just about every method dict defines or could define, without success.
So, could someone explain to me how the C implementation of json.dumps checks whether the dict is empty, and whether there is a way to override it (I find my __init__ pretty ugly)?
Thank you.
Edit:
I finally found where this happens in the C code, and it does not look customizable.
_json.c, line 2083:
if (open_dict == NULL || close_dict == NULL || empty_dict == NULL) {
    open_dict = PyString_InternFromString("{");
    close_dict = PyString_InternFromString("}");
    empty_dict = PyString_InternFromString("{}");
    if (open_dict == NULL || close_dict == NULL || empty_dict == NULL)
        return -1;
}
if (Py_SIZE(dct) == 0)
    return PyList_Append(rval, empty_dict);
So it looks like Py_SIZE is used to check whether the dict is empty. But this is a macro, not a function, which simply reads a field of the underlying C object.
object.h, line 114:
#define Py_REFCNT(ob) (((PyObject*)(ob))->ob_refcnt)
#define Py_TYPE(ob) (((PyObject*)(ob))->ob_type)
#define Py_SIZE(ob) (((PyVarObject*)(ob))->ob_size)
So since it is a macro rather than a function, it cannot be overridden, and thus its behavior cannot be customized.
Finally, the "non-empty dict trick" is necessary if one wants to customize json.dumps by inheriting from dict (of course, other ways to achieve this are possible).
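The same shortcut can still be observed on current CPython 3, where the encoder likewise short-circuits on the dict's real storage size. This snippet is a sanity check of mine, not from the original post:

```python
import json

# An "empty" dict subclass serializes as {} no matter what
# __len__ or __iter__ claim, because the encoder checks the
# underlying storage size, not the Python-level methods.
class A(dict):
    def __len__(self):
        return 10              # lie about the size at the Python level

    def __iter__(self):
        return iter(range(3))  # pretend there are keys

print(json.dumps(A()))  # {}
```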
Would it be easier to modify the behaviour of the encoder rather than creating a new dict sub class?
class OrderedDictJSONEncoder(json.JSONEncoder):
    def default(self, obj):
        if hasattr(obj, 'keys'):
            return {}  # replace your unordered dict with an OrderedDict from collections
        else:
            return super(OrderedDictJSONEncoder, self).default(obj)
And use it like so:
json.dumps(my_dict_to_encode, cls=OrderedDictJSONEncoder)
This seems like the right place to turn an unordered Python dict into an ordered JSON object.
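One caveat about this approach (my observation): JSONEncoder.default is only invoked for objects the serializer cannot already handle, and dicts, including dict subclasses, are serialized natively, so the hook above never actually fires for a dict. A quick check:

```python
import json

class ProbeEncoder(json.JSONEncoder):
    def default(self, obj):
        # default() is only called for objects json cannot serialize natively
        raise AssertionError("default() was called")

class MyDict(dict):
    pass

# dict subclasses are serialized natively, so default() never fires:
print(json.dumps(MyDict(b=1, a=2), cls=ProbeEncoder))  # {"b": 1, "a": 2}
```

To reorder an actual dict you therefore have to pre-convert it before dumping, or simply use the sort_keys option.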
I don't know exactly what the encoder does, but it is not only written in C; there is also a pure-Python implementation of the json package, whose source is here: http://hg.python.org/cpython/file/2a872126f4a1/Lib/json
Also if you just want to order the items, there's
json.dumps(A(), sort_keys=True)
Also see this question ("How to perfectly override a dict?") and its first answer, that explains that you should subclass collections.MutableMapping in most cases.
Or just give a subclassed encoder, as aychedee mentioned.
