Construct a data structure with special requirements in Python

Requirements:
There is a variable, for example, related_to_dict = 10
Construct a key-value data structure, for example, special_dict = {0:
ref_related_to_dict}
When the variable related_to_dict changes, the value of
special_dict[0] should also change to the value of related_to_dict
accordingly.
When the value of special_dict[0] (i.e. ref_related_to_dict) changes, the
value of related_to_dict should also change to the value of
special_dict[0] accordingly.
Is there a way to achieve this task?

You need to wrap the value in some sort of container.
class Ref:
    def __init__(self, v):
        self.val = v
And then:
related_to_dict = Ref(10)
special_dict = {0: related_to_dict}
Then it works as desired:
related_to_dict.val = 40
print(special_dict[0].val) # 40
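Because both names refer to the same Ref instance, the link works in the other direction as well:
special_dict[0].val = 99
print(related_to_dict.val)  # 99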

Related

Python JSON - finding elements on a dynamic path

I have to update a nested JSON object.
If I knew the specifics of which items were to be updated, I could do:
json_object['basket']['items']['apple'] = 'new value'
However, my list of elements to target is dynamic.
> basket.items.apple = 'green'
> name = 'my shopping'
> basket.cost = '15.43'
I could do this by looping through elements.
Find 'basket' > then find 'items' > then find 'apple' > set value
Find 'name' > set value
However, I was hoping that there was a way to just reference it directly/dynamically,
i.e. from a string 'basket.cost', build the expression:
json_object['basket']['cost']
P.S. it has to cope with lists of dictionaries too!
Any guidance appreciated :)
Once you have the string "basket.cost", you can split it on "." and it's pretty easy to drill down into json_object['basket']['cost'] using a loop. Functionally, there is no difference between doing this and doing it "directly": you are still getting the 'basket' key first, and then getting the 'cost' key from the value of json_object['basket'].
def get_element(d, path):
    # This function can take the string "basket.cost", or the list ["basket", "cost"]
    if isinstance(path, str):
        path = path.split(".")
    for p in path:
        d = d[p]
    return d

def set_element(d, path, value):
    path = path.split(".")
    dict_to_set = get_element(d, path[:-1])
    key_to_set = path[-1]
    dict_to_set[key_to_set] = value
set_element(json_object, "basket.items.apple", 100)
Now, this assumes all elements of your path already exist, so let's say you create a dictionary that looks like so:
json_object = {"basket": {"items": dict()}}
set_element(json_object, "basket.items.apple", 100)
set_element(json_object, "basket.cost", 10)
print(json_object)
# Output: {'basket': {'items': {'apple': 100}, 'cost': 10}}
print(get_element(json_object, "basket.cost"))
# Output: 10
If you try to access an element that doesn't already exist, you get a KeyError:
get_element(json_object, "basket.date")
# KeyError: 'date'
This also happens if you try to set a value in an element that doesn't exist:
set_element(json_object, "basket.date.day", 1)
# KeyError: 'date'
If we want to allow your function to create the dictionaries when they don't exist, we can modify the get_element function to account for this situation and add the key:
def get_element(d, path, create_missing=False):
    # This function can take the string "basket.cost", or an iterable containing the elements "basket" and "cost"
    if isinstance(path, str):
        path = path.split(".")
    for p in path:
        if create_missing and p not in d:
            d[p] = dict()
        d = d[p]
    return d

def set_element(d, path, value, create_missing=True):
    path = path.split(".")
    dict_to_set = get_element(d, path[:-1], create_missing)
    key_to_set = path[-1]
    dict_to_set[key_to_set] = value
set_element(json_object, "basket.date.day", 1)
print(json_object)
# Output: {'basket': {'items': {'apple': 100}, 'cost': 10, 'date': {'day': 1}}}
If using a third-party package is an option, you can try python-box. It comes with lots of options and utilities, including loaders for JSON and YAML files. The implementation is optimized for speed using Cython.
from box import Box
test_data = {
    "basket": {
        "products": [
            {"name": "apple", "colour": "green"}
        ],
    }
}
a = Box(test_data)
a.basket.cost = 12.3
a.basket.products[0].colour = "pink"
a.basket.products.append({"name": "pineapple", "taste": "sweet"})
print(a.basket.products[1].taste)
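If you later need the plain nested structure back (for example to dump it as JSON), Box objects can be converted to ordinary dicts; as far as I recall the method is to_dict(), but check the python-box documentation:
import json

plain = a.to_dict()  # convert the Box from above back to plain dicts/lists (see python-box docs)
print(json.dumps(plain, indent=2))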
You can get exactly what you want by overloading some Python magic methods: __getattr__ and __setattr__. I'll show an example of the API to whet the appetite and then the full code:
test_data = {'basket': {'items': [{'name': 'apple', 'colour': 'green'},
                                  {'name': 'pineapple', 'taste': 'sweet'},
                                  ],
                        'cost': 12.3,
                        },
             'name': 'Other'}
o = wrap(test_data) # This wraps with the correct class, depending on whether it is a dict or a list
print(o.name) # Prints 'Other'
print(o.basket.items) # Prints the list of items
print(o.basket.cost) # Prints 12.3
o.basket.cost = 10.0 # Changes the cost
assert o.basket.cost == 10.0
assert len(o) == 2
assert len(o.basket.items) == 2
o.basket.items.append({'name': 'orange'})
o.basket.items[2].colour = 'yellow' # It works with lists!
assert o.basket.items[2].name == 'orange'
assert o.basket.items[2].colour == 'yellow'
# You can get a part of it and it holds a reference to the original
b = o.basket
b.type = 'groceries'
assert o.basket.type == 'groceries'
# It is also possible to create a separate wrapped part and then join:
employees = wrap({})
employees.Clara.id = 101
employees.Clara.age = 23
employees.Lucia.id = 102
employees.Lucia.age = 29
o.employees = employees
The implementation is based on special wrapper classes, one for dicts, another for lists. They all inherit from a base class. Note that we need to use super().__setattr__ instead of simply assigning self._data because we will override the __getattr__ and __setattr__ methods to look for the data inside _data; a plain assignment would of course recurse infinitely when trying to define _data.
from collections.abc import Mapping, Sequence, MutableSequence

class BaseWrapper:
    __slots__ = ('_data',)

    def __init__(self, data):
        super().__setattr__('_data', data)

    def __repr__(self):
        return f'{self.__class__.__name__}({repr(self._data)})'
The wrapper for dictionaries is the most interesting: it uses __getattr__ to look for a key in the wrapped dictionary. This allows for a very natural API: if o is a wrapped dictionary, o.entry will give the same result as o['entry']. Most of the code should be self-explanatory, there are only two tricks: the first is that __getattr__ checks if the output is a dict or list and wraps it. This allows for chaining of calls like o.basket.cost. The downside is that a new wrapper is created every call. The second trick is when setting an attribute: it checks if what is being set is a wrapped instance and un-wraps it. Thus, wrapped dictionaries can be combined and the underlying dictionary is always "clean".
class MappingWrapper(BaseWrapper):
    """Wraps a dictionary and provides the keys of the dictionary as class members.
    Creates new keys when they do not exist."""

    def __getattr__(self, name):
        # Note: these two lines allow automatic creation of attributes, e.g. in an object 'obj'
        # that doesn't have an attribute 'car', the following is possible:
        # >> o.car.colour = 'blue'
        # And all the missing levels will be automatically created
        if name not in self._data and not name.startswith('_'):
            self._data[name] = {}
        return wrap(self._data[name])

    def __setattr__(self, name, value):
        self._data[name] = unwrap(value)

    # Implements standard dictionary access
    def __getitem__(self, name):
        return wrap(self._data[name])

    def __setitem__(self, name, value):
        self._data[name] = unwrap(value)

    def __delitem__(self, name):
        del self._data[name]

    def __len__(self):
        return len(self._data)
The list wrapper is simpler, no need to mess around with attribute access. The only special care we have to take is to wrap and unwrap the list elements when one is requested/set. Note that, just like with the dictionary wrapper, the same wrap and unwrap functions are used (in __getitem__/__setitem__/insert).
class ListWrapper(BaseWrapper, MutableSequence):
    """Wraps a list. Essentially, provides wrapping of elements of the list."""

    def __getitem__(self, idx):
        return wrap(self._data[idx])

    def __setitem__(self, idx, value):
        self._data[idx] = unwrap(value)

    def __delitem__(self, idx):
        del self._data[idx]

    def __len__(self):
        return len(self._data)

    def insert(self, index, obj):
        self._data.insert(index, unwrap(obj))
Finally, the definition of wrap, which just selects the correct wrapper based on the type of the input, and unwrap, which extracts the raw data:
def wrap(obj):
    if isinstance(obj, dict):
        return MappingWrapper(obj)
    if isinstance(obj, list):
        return ListWrapper(obj)
    return obj

def unwrap(obj):
    if isinstance(obj, BaseWrapper):
        return obj._data
    return obj
The full code can be found in this gist.
An important caveat: to keep the implementation simple, wrapper objects are created at every access. Thus using this method inside large loops may cause performance issues (per my measurements, this method of access is between 12 and 30 times slower).
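A rough way to check that overhead on your own data is a small timeit comparison (a hypothetical micro-benchmark; absolute numbers will vary with machine and data shape):
import timeit

# 'o' and 'test_data' are the wrapped and raw objects from the example above
wrapped = timeit.timeit(lambda: o.basket.cost, number=100_000)
raw = timeit.timeit(lambda: test_data['basket']['cost'], number=100_000)
print(f"wrapped access is roughly {wrapped / raw:.0f}x slower than raw dict access")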
I'm going to assume that you already know how to handle the value errors that will probably come up with this nested collection accessing, so I won't focus on it in my approach.
I would split this in two parts:
Traversing a nested collection according to a list of keys for each level
Getting a list of keys out of a string
The first one is quite trivial: as you said, simply looping through the keys and following them to the end gives you access to the collection element in question. A simple implementation of that could look something like this:
def get_nested(collection, key):
    for part in key:
        collection = collection[part]
    return collection

def set_nested(collection, key, value):
    for part in key[:-1]:
        collection = collection[part]
    collection[key[-1]] = value
Here the key is expected to be some iterable of keys, such as a tuple or list.
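For example, with the path already split into a tuple (hypothetical data, just to show the calling convention):
data = {"basket": {"items": {"apple": "red"}}}

set_nested(data, ("basket", "items", "apple"), "green")
print(get_nested(data, ("basket", "items", "apple")))  # green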
Of course that means there is an expectation that your string representing a path along the collection is already parsed. We can get to that next.
This step would also be very trivial, since one could simply use expression.split("."). However, since you also want to be able to index nested lists along with dicts, it gets a little more complicated.
There is a tradeoff to be made here. One could simply say: "Any time that one of the items in expression.split(".") can be parsed to an int, we will do just that, and assume that it was meant as an index in a list." However, that naturally isn't necessarily the case: there is nothing preventing you from using a number in string form as a key in a dict. However, if you think this is never going to be the case for you, perhaps you can just call it like this:
set_nested(
    collection,
    # a list comprehension (not a generator), since set_nested slices and indexes the key
    [int(part) if part.isdigit() else part for part in expression.split(".")],
    "target value",
)
(or, of course, wrap that call in another function, as sketched below).
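A possible wrapper might look like this (the name set_nested_auto is mine, not from the question):
def set_nested_auto(collection, expression, value):
    # Treat purely numeric path parts as list indices, everything else as dict keys.
    key = [int(part) if part.isdigit() else part for part in expression.split(".")]
    set_nested(collection, key, value)

set_nested_auto(data, "basket.items.apple", "yellow")  # reusing the example dict from above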
However if the consideration of using digit keys in dicts is important for you, there is another solution:
Whenever traversing the nested collection downward, we check whether the collection we are currently looking at is a list. Only if it is a list do we actually try to parse the path part as an int.
This would be the respective set_nested and get_nested functions for that:
def get_nested(collection, key: str):
    for part in key.split("."):
        if type(collection) == list:
            part = int(part)
        collection = collection[part]
    return collection

def set_nested(collection, key: str, val):
    key = key.split(".")
    for i, part in enumerate(key):
        if type(collection) == list:
            part = int(part)
        if i == len(key) - 1:
            collection[part] = val
        else:
            collection = collection[part]
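A quick check with a list in the path, using data shaped like the question's basket (my own example values):
shopping = {"basket": {"items": [{"name": "apple", "colour": "green"}]}}

set_nested(shopping, "basket.items.0.colour", "red")
print(get_nested(shopping, "basket.items.0.colour"))  # red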
I believe that's the simplest solution to your problem, though of course it's important to keep in mind:
There is no error handling in this code, and indexing on dynamic paths is a topic where you are bound to run into errors. Depending on where and how you want to handle those it's going to be easy or very tedious.
There is no checking when setting values in dicts that don't exist yet, nor for expanding arrays to a specific size, but since you didn't mention those as a requirement I'm presuming it's not an issue. It might be for others reading this.
This is tricky and I would discourage it unless necessary, as it is an easy thing to design and implement badly.
First: it's easy to split on a path separator and follow the object tree down to the desired key.
But after a while questions will start to appear. E.g.: what separator to split on?
A slash? It can appear in the JSON dictionary key... A dot? Same.
We'll need to either restrict legal / handled paths or implement some kind of escaping mechanism.
How do you handle empty strings?
Another goal: handle lists... OK. So how do we interpret a path a.0? Is it ['a'][0] or ['a']['0']?
It seems that we'll have to complicate the language or drop the requirement.
So, in general, I'd avoid it. Ultimately, here's a quick implementation whose
design choices may or may not satisfy you:
there's basic backslash escaping of path separator
empty string is allowed as a key
lists are not handled due to ambiguity
def deep_set(root: dict, path: str, value):
    segments = [*iter_segments(path, '.')]
    for k in segments[:-1]:
        root = root[k]
    root[segments[-1]] = value

def iter_segments(path: str, separator: str = '.'):
    segment = ''
    path_iter = iter(path)
    while True:
        c = next(path_iter, '')
        if c in (separator, ''):  # use the separator parameter rather than a hard-coded '.'
            yield segment
            segment = ''
            if c == '':
                break
            continue
        elif '\\' == c:
            c = next(path_iter, '')
        segment += c
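For example, a dot that is part of a key can be escaped with a backslash (note the raw string literal):
order = {'basket.special': {}}

deep_set(order, r'basket\.special.cost', '15.43')
print(order)  # {'basket.special': {'cost': '15.43'}}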

Classes vs. dictionaries as variable containers to get rid of quotation marks?

I'm trying to better understand the concept of Python dictionaries and want to use a dictionary as a container for several variables in my code. Most examples I looked at show strings as dictionary keys, which implies the use of quotation marks when using keys as variables. However, I found out that one does not need to use quotation marks if the key is first given a value and after that placed in a dictionary; then one gets rid of the quotation marks. The variable is then actually an immutable value. In that case, even if one changes the value of the key, the original value remains in the key and can be retrieved by the dictionary method .keys() (and thus be used to restore the first given value). However, I'm wondering if this is a proper way of coding and if it would be better to use a class as a variable container, which looks simpler but is perhaps slower when executed. Both approaches lead to the same result. See my example below.
class Container():
    def __init__(self):
        self.a = 15
        self.b = 17
# first given values
a = 5
b = 7
# dictionary approach
container = {a:15, b:17}
print('values in container: ', container[a], container[b])
container[a], container[b] = 25, 27
print('keys and values in container: ', container[a], container[b])
for key in container.keys():
    print('firstly given values: ', key)
print('\n')
# class approach
cont = Container()
print('values in cont: ', cont.a, cont.b)
cont.a, cont.b = 25, 27
print('keys and values in cont: ', cont.a, cont.b)
However, I found out that one does not need to use quotation marks if the key is first given a value and after that placed in a dictionary.
This isn’t really what’s happening. Your code isn’t using 'a' and 'b' as dictionary keys. It’s using the values of the variables a and b — which happen to be the integers 5 and 7, respectively.
Subsequent access to the dictionary also happens by value: whether you write container[a] or container[5] doesn't matter (as long as a is in scope and unchanged). But it is not the same as container['a'], and the latter would fail here.
You can also inspect the dictionary itself to see that it doesn’t have a key called 'a' (or unquoted, a):
>>> print(container)
{5: 15, 7: 17}
Ultimately, if you want to use names (rather than values) to access data, use a class, not a dictionary. Use a dictionary when the keys are given as values.
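For comparison, if string keys were what was intended, quoting them makes the lookup independent of whatever a and b currently hold:
container = {'a': 15, 'b': 17}
print(container['a'], container['b'])  # 15 17, unaffected by later changes to a or b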
Later you may assign other values to a and b, and the code using the dictionary will crash. Using a variable as a key is not good practice. Do it with the class. You may also pass the attributes to the constructor of your class.
class Container():
    def __init__(self, a, b):
        self.a = a
        self.b = b

# creating
cont = Container(15, 17)
# changing
cont.a, cont.b = 25, 27
I would recommend the class approach, because the dict approach in this case does not seem a proper way to code.
When you do:
a = 5
b = 7
container = {a:15, b:17}
You actually do:
container = {5:15, 7:17}
But this is "hidden", so there is a risk that later you reassign your variables, or that you just get confused with this kind of dictionary:
container = {
    a: 15,
    b: 17,
    "a": "something"
}

create dynamic linked variables

Is it possible to create a dynamically linked variable, so that changes in original_VAR will automatically take effect in copied_VAR? Like so:
original_VAR = 'original_VAL'
copied_VAR = original_VAR
original_VAR = 'modified_VAL'
print(copied_VAR)
#desired output:
>>>> 'modified_VAL'
A similar behavior can be created for lists under a few conditions:
original_DICT_ARR = [{'key': 'original_VAL'}]
# 1 - does not create a dynamic link
copied_DICT_ARR = [value for value in original_DICT_ARR]
# 2 - does create a dynamic link
copied_DICT_ARR = original_DICT_ARR
# 3 - does create a dynamic link, if the copied element is a list or dict, but not if string, boolean, int, float
copied_DICT_ARR = []
copied_DICT_ARR.append(original_DICT_ARR[0])
# MODIFICATION:
original_DICT_ARR[0]['key'] = 'modified_VAL'
# RESULT for 2,3
print(copied_DICT_ARR[0])
>>>> {'key': 'modified_VAL'}
Why would I want to do this?
I am building a list, the list is full of dict objects. I need to assign a value to a certain dict key.
Later, that value might change - I don't want to loop through all dictionaries in the list again. I want to change the original variable, and have the effect taken place in all dictionaries automatically.
You can keep a reference to the specific dict you want to modify later. Since it refers to the same underlying dict, changes will be reflected through your reference. Like so:
original_DICT_ARR = [{'key': 'original_VAL'}, {'key': 'another_val'}]
target_dict = original_DICT_ARR[0]
# MODIFICATION:
original_DICT_ARR[0]['key'] = 'modified_VAL'
# RESULT:
print(target_dict)
Gives:
{'key': 'modified_VAL'}
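The reference works in the other direction too, since target_dict and original_DICT_ARR[0] are the same object:
target_dict['key'] = 'changed_again'
print(original_DICT_ARR[0])  # {'key': 'changed_again'}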
You could achieve this and keep things abstract by using a mutable object and subclassing UserDict and overriding __getitem__:
from collections import UserDict

class ChangingVal:
    def __init__(self, val):
        self.val = val

class ChangingValsDict(UserDict):
    def __getitem__(self, key):
        ret = self.data[key]
        if isinstance(ret, ChangingVal):
            ret = ret.val
        return ret
my_dict = ChangingValsDict()
changing_val = ChangingVal(3)
my_dict["changing_value"] = changing_val
print(my_dict["changing_value"]) # -- outputs "3"
# change your value
changing_val.val = 6
print(my_dict["changing_value"]) # -- outputs "6"
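Applied to the use case in the question, the same ChangingVal instance can be stored in several dicts, so a single assignment is visible everywhere (a sketch built on the classes above):
shared_price = ChangingVal(10)
records = [ChangingValsDict({"price": shared_price}) for _ in range(3)]

shared_price.val = 12  # one assignment...
print([r["price"] for r in records])  # ...reflected in every dict: [12, 12, 12]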

Multi-level defaultdict with variable depth and with list and int type

I am trying to create a multi-level dict with variable depth and with list and int type.
The data structure is like below:
A
--B1
-----C1=1
-----C2=[1]
--B2=[3]
D
--E
----F
------G=4
In the case of the above data structure, the last value can be an int or a list.
If the above data structure had only ints, it could easily be achieved by using the below code:
from collections import defaultdict
f = lambda: defaultdict(f)
d = f()
d['A']['B1']['C1'] = 1
But as the last value can be both a list and an int, it becomes a bit problematic for me.
Now we can insert data into a list in two ways.
d['A']['B1']['C2']= [1]
d['A']['B1']['C2'].append([2])
But when I use only the append method, it causes an error.
Error is:
AttributeError: 'collections.defaultdict' object has no attribute 'append'
So is there any way to use only the append method for a list?
There's no way you can use your current defaultdict-based structure to make d['A']['B1']['C2'].append(1) work properly if the 'C2' key doesn't already exist, since the data structure can't tell that the unknown key should correspond to a list rather than another layer of dictionary. It doesn't know what method you're going to call on the value it returns, so it can't know it shouldn't return a dictionary (like it did when it first looked up 'A' and 'B1').
This isn't an issue for bare integers, since for those you're assigning directly to a new key (and all the earlier levels are dictionaries). When you're assigning, the data structure isn't creating the value, you are, so you can use any type you want.
Now, if your keys are distinctive in some way, so that given a key like 'C2' you can know for sure that it should correspond to a list, you may have a chance. You can write your own dict subclass, defining a __missing__ method to handle lookups of keys that don't exist yet in your own special way:
class Tree(dict):
    def __missing__(self, key):
        if key_corresponds_to_list(key):  # magic from somewhere
            result = self[key] = []
        else:
            result = self[key] = Tree()
        return result
# you might also want a custom __repr__
Here's an example run with a magic key function that makes any even-length key default to a list, while an odd-length key defaults to a dict:
> def key_corresponds_to_list(key):
      return len(key) % 2 == 0
> t = Tree()
> t["A"]["B"]["C2"].append(1)  # the default value for C2 is a list because it's even length
> t
{'A': {'B': {'C2': [1]}}}
> t["A"]["B"]["C10"]["D"] = 2  # C10 is another layer of dict, since its length is odd
> t
{'A': {'B': {'C10': {'D': 2}, 'C2': [1]}}}  # it didn't matter what length D was, though
You probably won't actually want to use a global function to control the class like this, I just did that as an example. If you go with this approach, I'd suggest putting the logic directly into the __missing__ method (or maybe passing a function as a parameter, like defaultdict does with its factory function).
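If you'd rather not depend on a global function, here is a rough sketch of passing the decision function in as a constructor parameter, similar in spirit to how defaultdict takes a factory (the parameter name is my own):
class Tree(dict):
    def __init__(self, is_list_key=lambda key: False):
        super().__init__()
        self.is_list_key = is_list_key

    def __missing__(self, key):
        # Missing keys default to a list or to a nested Tree, depending on the predicate.
        if self.is_list_key(key):
            result = self[key] = []
        else:
            result = self[key] = Tree(self.is_list_key)
        return result

t = Tree(lambda key: len(key) % 2 == 0)
t["A"]["B"]["C2"].append(1)
print(t)  # {'A': {'B': {'C2': [1]}}}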

Variables as hash values in Python

How can I link a variable to a dictionary value in Python?
Consider the following code:
a_var = 10
a_dict = {'varfield':a_var, 'first':25, 'second':57}
# a_dict['varfield'] == 10 now
a_var = 700
# a_dict['varfield'] == 10 anyway
So is there a way to link the value of a variable to a field in a dictionary without looking up that field and updating its value manually?
You would need to set the value of the dictionary key to an object whose value you can change.
For example like this:
class valueContainer(object):
    def __init__(self, value):
        self.value = value

    def __repr__(self):
        return self.value.__repr__()

v1 = valueContainer(1)
myDict = {'myvar': v1}
print(myDict)
# {'myvar': 1}

v1.value = 2
print(myDict)
# {'myvar': 2}
w = [410]
myDict = {'myvar': w}
print(myDict)
# {'myvar': [410]}

w[0] = 520
print(myDict)
# {'myvar': [520]}
That's M4rtini's code with a list instead of an instance of a class.
He has to modify v1 (in fact its attribute value) with the instruction v1.value = ...;
I have to modify the value in the list with w[0] = ...
The reason for this is that what you call a variable is in fact an identifier: it does not designate a variable in the sense of a "chunk of memory whose content can change", but references the object to which the identifier is bound, and that object's value cannot change here because it is an immutable object.
Please read the documentation's explanations of Python's data model and execution model, which are quite different from those of languages such as Java, PHP, etc.
