Use __get__, __set__ with dictionary item? - python

Is there a way to make a dictionary of functions that use set and get statements and then use them as set and get functions?
class thing(object):
    def __init__(self, thingy):
        self.thingy = thingy
    def __get__(self, instance, owner):
        return self.thingy
    def __set__(self, instance, value):
        self.thingy += value

theDict = {"bob": thing(5), "suzy": thing(2)}
theDict["bob"] = 10
The wanted result is that 10 goes into the set function and is added to the existing 5:
print theDict["bob"]
>>> 15
The actual result is that the dictionary replaces the entry with the numeric value:
print theDict["bob"]
>>> 10
The reason I can't just make a function like
theDict["bob"].add(10)
is that this builds on an existing, already well-working system that uses the set and get. The case I'm working with is an edge case, and it wouldn't make sense to reprogram everything to make it work for this one case.
I need some means of storing instances of this set/get thingy that is accessible but doesn't create some layer of depth that might break existing references.
Please don't ask for actual code. It'd take pages of code to encapsulate the problem.

You could do it if you can (also) use a specialized version of the dictionary which is aware of your Thing class and handles it separately:
class Thing(object):
    def __init__(self, thingy):
        self._thingy = thingy
    def _get_thingy(self):
        return self._thingy
    def _set_thingy(self, value):
        self._thingy += value
    thingy = property(_get_thingy, _set_thingy, None, "I'm a 'thingy' property.")

class ThingDict(dict):
    def __getitem__(self, key):
        if key in self and isinstance(dict.__getitem__(self, key), Thing):
            return dict.__getitem__(self, key).thingy
        else:
            return dict.__getitem__(self, key)
    def __setitem__(self, key, value):
        if key in self and isinstance(dict.__getitem__(self, key), Thing):
            dict.__getitem__(self, key).thingy = value
        else:
            dict.__setitem__(self, key, value)

theDict = ThingDict({"bob": Thing(5), "suzy": Thing(2), "don": 42})

print(theDict["bob"])  # --> 5
theDict["bob"] = 10
print(theDict["bob"])  # --> 15

# non-Thing value
print(theDict["don"])  # --> 42
theDict["don"] = 10
print(theDict["don"])  # --> 10

No, because to execute theDict["bob"] = 10, the Python runtime doesn't call any methods at all on the previous value of theDict["bob"]. It's not like myObject.mydescriptor = 10, which calls the descriptor's setter.
Well, maybe it calls __del__ on the previous value if the refcount hits zero, but let's not go there!
If you want to do something like this then you need to change the way the dictionary works, not its contents. For example, you could subclass dict (with the usual warnings that you're Evil, Bad and Wrong to write a non-Liskov-substituting derived class). Or you could implement an instance of collections.abc.MutableMapping from scratch. But I don't think there's any way to hijack the normal operation of dict using a special value stored in it.
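A minimal sketch of that MutableMapping route, assuming the accumulate-on-assignment semantics asked for in the question (the AccumulatingDict name is made up for illustration):

from collections.abc import MutableMapping

class AccumulatingDict(MutableMapping):
    # Sketch only: adds to an existing value on assignment instead of replacing it.
    def __init__(self, *args, **kw):
        self._data = dict(*args, **kw)
    def __getitem__(self, key):
        return self._data[key]
    def __setitem__(self, key, value):
        if key in self._data:
            self._data[key] += value
        else:
            self._data[key] = value
    def __delitem__(self, key):
        del self._data[key]
    def __iter__(self):
        return iter(self._data)
    def __len__(self):
        return len(self._data)

theDict = AccumulatingDict({"bob": 5, "suzy": 2})
theDict["bob"] = 10
print(theDict["bob"])  # --> 15

Because MutableMapping derives get, items, update and friends from these five methods, the accumulating behaviour stays consistent across them (update(), for instance, also goes through __setitem__).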

theDict["bob"] = 10 is just assign 10 to the key bob for theDict.
I think you should know about the magic methods __get__ and __set__ first. Go to: https://docs.python.org/2.7/howto/descriptor.html Using a class might be easier than dict.
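For reference, a rough sketch of how those __get__/__set__ hooks behave when used as intended, as class attributes rather than dictionary values (the Accumulator and People names are made up, and the accumulate-on-assignment behaviour is assumed from the question):

class Accumulator(object):
    def __init__(self, initial):
        self.value = initial
    def __get__(self, instance, owner):
        return self.value
    def __set__(self, instance, value):
        # Accumulate instead of replacing. Note the state lives on the
        # descriptor itself, so it is shared by all instances of the owner class.
        self.value += value

class People(object):
    bob = Accumulator(5)
    suzy = Accumulator(2)

p = People()
p.bob = 10      # routed through Accumulator.__set__
print(p.bob)    # --> 15

Stored as a value inside a plain dict, the same Accumulator object would never have its __get__ or __set__ called, which is exactly the problem described above.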

Related

Python simple lazy loading

I'm trying to clean up some logic and remove duplicate values in some code and am looking for a way to introduce some very simple lazy-loading to handle settings variables. Something that would work like this:
FOO = {'foo': 1}
BAR = {'test': FOO['foo'] }
# ...complex logic here which ultimately updates the value of Foo['foo']...
FOO['foo'] = 2
print(BAR['test']) # Outputs 1 but would like to get 2
Update:
My question may not have been clear based on the initial responses. I'm looking to replace the value being set for test in BAR with a lazy-loaded substitute. I know a way I can do this but it seems unnecessarily complex for what it is, I'm wondering if there's a simpler approach.
Update #2:
Okay, here's a solution that works. Is there any built-in type that can do this out of the box:
FOO = {'foo': 1}
import types
class LazyDict(dict):
    def __getitem__(self, item):
        value = super().__getitem__(item)
        return value if not isinstance(value, types.LambdaType) else value()
BAR = LazyDict({ 'test': lambda: FOO['foo'] })
# ...complex logic here which ultimately updates the value of Foo['foo']...
FOO['foo'] = 2
print(BAR['test']) # Outputs 2
As I stated in the comment above, what you are seeking is some of the facilities of the reactive programming paradigm (not to be confused with the JavaScript library that borrows its name from it).
It is possible to instrument objects in Python to do so. I think the minimum setup here would be a specialized target mapping, plus a special object type you set as the values in it, which would fetch the target value.
Python can do this in more straightforward ways with direct attribute access (using the dot notation: myinstance.value) than with the key-retrieving notation used in dictionaries (mydata['value']), because a class is already a template for a certain group of data, and class attributes can define mechanisms to access each instance's attribute value. That is called the "descriptor protocol" and is built into the language model itself.
Nonetheless a minimalist Mapping based version can be implemented as such:
FOO = {'foo': 1}
from collections.abc import MutableMapping
class LazyValue:
    def __init__(self, source, key):
        self.source = source
        self.key = key
    def get(self):
        return self.source[self.key]
    def __repr__(self):
        return f"<LazyValue {self.get()!r}>"

class LazyDict(MutableMapping):
    def __init__(self, *args, **kw):
        self.data = dict(*args, **kw)
    def __getitem__(self, key):
        value = self.data[key]
        if isinstance(value, LazyValue):
            value = value.get()
        return value
    def __setitem__(self, key, value):
        self.data[key] = value
    def __delitem__(self, key):
        del self.data[key]
    def __iter__(self):
        return iter(self.data)
    def __len__(self):
        return len(self.data)
    def __repr__(self):
        return repr({key: value for key, value in self.items()})
BAR = LazyDict({'test': LazyValue(FOO, 'foo')})
# ...complex logic here which ultimately updates the value of Foo['foo']...
FOO['foo'] = 2
print(BAR['test']) # Outputs 2
The reason this much code is needed is that there are several ways to retrieve data from a dictionary or mapping (.values, .items, .get, .setdefault), and simply inheriting from dict and implementing __getitem__ would "leak" the special lazy object through any of the other methods. Going through this MutableMapping approach ensures a single point of reading of the value, in the __getitem__ method, and the resulting instance can be used reliably anywhere a mapping is expected.
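To illustrate that leak, a rough sketch reusing the FOO and LazyValue definitions above (the LeakyLazyDict name is made up): a plain dict subclass that only overrides __getitem__ resolves the value on indexing but exposes the wrapper everywhere else.

class LeakyLazyDict(dict):
    def __getitem__(self, key):
        value = dict.__getitem__(self, key)
        # Resolve on direct indexing only; .values(), .items(), .get() bypass this.
        return value.get() if isinstance(value, LazyValue) else value

leaky = LeakyLazyDict({'test': LazyValue(FOO, 'foo')})
print(leaky['test'])         # resolved: current value of FOO['foo']
print(list(leaky.values()))  # leaks the wrapper: [<LazyValue ...>]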
However, notice that if you are using normal classes and instances rather than dictionaries, this can be much simpler: you can just use a plain Python property and have a getter that will fetch the value. The main factor you should ponder is whether your referenced data keys are fixed and can be hard-coded when writing the source code, or whether they are dynamic and the keys that will work as lazy references are only known at runtime. In that last case, the custom mapping approach above will usually be better:
FOO = {'foo': 1}
class LazyStuff:
    def __init__(self, source):
        self.source = source

    @property
    def test(self):
        return self.source["foo"]
BAR = LazyStuff(FOO)
FOO["foo"] = 2
print(BAR.test)
Notice that this way you have to hardcode the keys "foo" and "test" in the class body, but it is just plain code, and there is no need for the intermediary LazyValue class. Also, if you need this data as a dictionary, you could add an .as_dict method to LazyStuff that would collect all attributes at the moment it is called and yield a snapshot of those values as a dictionary.
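One possible shape for that as_dict method, sketched here as a guess rather than something from the answer above, is to collect every property defined on the class at call time:

class LazyStuff:
    def __init__(self, source):
        self.source = source

    @property
    def test(self):
        return self.source["foo"]

    def as_dict(self):
        # Snapshot of every property value at the moment of the call.
        return {name: getattr(self, name)
                for name, attr in vars(type(self)).items()
                if isinstance(attr, property)}

BAR = LazyStuff(FOO)
FOO["foo"] = 2
print(BAR.as_dict())  # {'test': 2}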
You can try using lambdas and calling the value on return. Like this:
FOO = {'foo': 1}
BAR = {'test': lambda: FOO['foo'] }
FOO['foo'] = 2
print(BAR['test']()) # Outputs 2
If you're only one level deep, you may wish to try ChainMap, e.g.:
>>> from collections import ChainMap
>>> defaults = {'foo': 42}
>>> myvalues = {}
>>> result = ChainMap(myvalues, defaults)
>>> result['foo']
42
>>> defaults['foo'] = 99
>>> result['foo']
99

How to access a dictionary value from within the same dictionary in Python? [duplicate]

I'm new to Python, and am sort of surprised I cannot do this.
dictionary = {
    'a': '123',
    'b': dictionary['a'] + '456'
}
I'm wondering what the Pythonic way to correctly do this in my script, because I feel like I'm not the only one that has tried to do this.
EDIT: Enough people were wondering what I'm doing with this, so here are more details for my use cases. Let's say I want to keep dictionary objects to hold file system paths. The paths are relative to other values in the dictionary. For example, this is what one of my dictionaries may look like:
dictionary = {
    'user': 'sholsapp',
    'home': '/home/' + dictionary['user']
}
It is important that at any point in time I may change dictionary['user'] and have all of the dictionaries values reflect the change. Again, this is an example of what I'm using it for, so I hope that it conveys my goal.
From my own research I think I will need to implement a class to do this.
No fear of creating new classes: you can take advantage of Python's string formatting capabilities and simply do:
class MyDict(dict):
    def __getitem__(self, item):
        return dict.__getitem__(self, item) % self

dictionary = MyDict({
    'user': 'gnucom',
    'home': '/home/%(user)s',
    'bin': '%(home)s/bin'
})

print dictionary["home"]
print dictionary["bin"]
The nearest I came up with without using an object:
dictionary = {
    'user': 'gnucom',
    'home': lambda: '/home/' + dictionary['user']
}

print dictionary['home']()
dictionary['user'] = 'tony'
print dictionary['home']()
>>> dictionary = {
... 'a':'123'
... }
>>> dictionary['b'] = dictionary['a'] + '456'
>>> dictionary
{'a': '123', 'b': '123456'}
Doing it in two steps like this works fine; in your version, when you try to use dictionary inside the literal it hasn't been defined yet (the literal has to be fully evaluated before the name is bound).
But be careful because this assigns to the key of 'b' the value referenced by the key of 'a' at the time of assignment and is not going to do the lookup every time. If that is what you are looking for, it's possible but with more work.
What you're describing in your edit is how an INI config file works. Python has a built-in module for this, ConfigParser (configparser in Python 3), which should work for what you're describing.
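For what it's worth, Python 3's configparser handles this kind of self-reference natively through value interpolation; a rough sketch (the section and option names are just examples):

import configparser

config = configparser.ConfigParser()
config.read_string("""
[paths]
user = sholsapp
home = /home/%(user)s
""")

print(config["paths"]["home"])   # /home/sholsapp
config["paths"]["user"] = "gnucom"
print(config["paths"]["home"])   # /home/gnucom

Interpolation happens when the option is read, so updating user is reflected in home.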
This is an interesting problem. It seems like Greg has a good solution. But that's no fun ;)
jsbueno has a very elegant solution, but it only applies to strings (as you requested).
The trick to a 'general' self-referential dictionary is to use a surrogate object. It takes a few (understatement) lines of code to pull off, but the usage is about what you want:
S = SurrogateDict(AdditionSurrogateDictEntry)
d = S.resolve({'user': 'gnucom',
               'home': '/home/' + S['user'],
               'config': [S['home'] + '/.emacs', S['home'] + '/.bashrc']})
The code to make that happen is not nearly so short. It lives in three classes:
import abc

class SurrogateDictEntry(object):
    __metaclass__ = abc.ABCMeta

    def __init__(self, key):
        """record the key on the real dictionary that this will resolve to a
        value for
        """
        self.key = key

    def resolve(self, d):
        """return the actual value"""
        if hasattr(self, 'op'):
            # Any operation done on self will store its name in self.op.
            # If this is set, resolve it by calling the appropriate method
            # now that we can get self.value out of d.
            self.value = d[self.key]
            return getattr(self, self.op + 'resolve__')()
        else:
            return d[self.key]

    @staticmethod
    def make_op(opname):
        """A convenience method. This will be the form of all op hooks for subclasses.
        The actual logic for the op is in __op__resolve__ (e.g. __add__resolve__).
        """
        def op(self, other):
            self.stored_value = other
            self.op = opname
            return self
        op.__name__ = opname
        return op
Next comes the concrete class. Simple enough.
class AdditionSurrogateDictEntry(SurrogateDictEntry):
    __add__ = SurrogateDictEntry.make_op('__add__')
    __radd__ = SurrogateDictEntry.make_op('__radd__')

    def __add__resolve__(self):
        return self.value + self.stored_value

    def __radd__resolve__(self):
        return self.stored_value + self.value
Here's the final class
class SurrogateDict(object):
    def __init__(self, EntryClass):
        self.EntryClass = EntryClass

    def __getitem__(self, key):
        """record the key and return"""
        return self.EntryClass(key)

    @staticmethod
    def resolve(d):
        """resolve self references in d"""
        stack = [d]
        while stack:
            cur = stack.pop()
            # This just tries to pick an appropriate iterable of keys/indices.
            it = xrange(len(cur)) if not hasattr(cur, 'keys') else cur.keys()
            for key in it:
                # Sorry for the restriction; just register your class with
                # SurrogateDictEntry and you can pass whatever.
                while isinstance(cur[key], SurrogateDictEntry):
                    cur[key] = cur[key].resolve(d)
                # I'm just going to check for __iter__, but you can add other
                # checks here for items that we should loop over.
                if hasattr(cur[key], '__iter__'):
                    stack.append(cur[key])
        return d
In response to gnucom's question about why I named the classes the way I did:
The word surrogate is generally associated with standing in for something else, so it seemed appropriate, because that's what the SurrogateDict class does: an instance replaces the 'self' references in a dictionary literal. That being said (other than just being straight up stupid sometimes), naming is probably one of the hardest things for me about coding. If you (or anyone else) can suggest a better name, I'm all ears.
I'll provide a brief explanation. Throughout, S refers to an instance of SurrogateDict and d is the real dictionary.
1. A reference S[key] triggers S.__getitem__, and a SurrogateDictEntry(key) is placed in d.
2. When S[key] = SurrogateDictEntry(key) is constructed, it stores key. This will be the key into d for the value that this SurrogateDictEntry is acting as a surrogate for.
3. After S[key] is returned, it is either entered into d, or has some operation(s) performed on it. If an operation is performed on it, it triggers the relevant __op__ method, which simply stores the value that the operation is performed with and the name of the operation, and then returns itself. We can't actually resolve the operation because d hasn't been constructed yet.
4. After d is constructed, it is passed to S.resolve. This method loops through d, finding any instances of SurrogateDictEntry and replacing them with the result of calling the resolve method on the instance.
5. The SurrogateDictEntry.resolve method receives the now-constructed d as an argument and can use the value of key that it stored at construction time to get the value that it is acting as a surrogate for. If an operation was performed on it after creation, the op attribute will have been set with the name of the operation that was performed. If the class has an __op__ method, then it has an __op__resolve__ method with the actual logic that would normally be in the __op__ method. So now we have the logic (self.__op__resolve__) and all necessary values (self.value, self.stored_value) to finally get the real value of d[key]. So we return that, which step 4 places in the dictionary.
6. Finally, the SurrogateDict.resolve method returns d with all references resolved.
That's a rough sketch. If you have any more questions, feel free to ask.
If you, just like me, were wondering how to make jsbueno's snippet work with {}-style substitutions, below is the example code (which is probably not very efficient, though):
import string

class MyDict(dict):
    def __init__(self, *args, **kw):
        super(MyDict, self).__init__(*args, **kw)
        self.itemlist = super(MyDict, self).keys()
        self.fmt = string.Formatter()

    def __getitem__(self, item):
        return self.fmt.vformat(dict.__getitem__(self, item), {}, self)

xs = MyDict({
    'user': 'gnucom',
    'home': '/home/{user}',
    'bin': '{home}/bin'
})
>>> xs["home"]
'/home/gnucom'
>>> xs["bin"]
'/home/gnucom/bin'
I tried to make it work with a simple replacement of % self with .format(**self), but it turns out that wouldn't work for nested expressions (like 'bin' in the above listing, which references 'home', which has its own reference to 'user') because of the evaluation order (the ** expansion is done before the actual format call and isn't delayed like in the original % version).
Write a class, maybe something with properties:
class PathInfo(object):
    def __init__(self, user):
        self.user = user

    @property
    def home(self):
        return '/home/' + self.user

p = PathInfo('thc')
print p.home  # /home/thc
As sort of an extended version of Tony's answer, you could build a dictionary subclass that calls its values if they are callables:
class CallingDict(dict):
    """Returns the result rather than the value of referenced callables.

    >>> cd = CallingDict({1: "One", 2: "Two", 'fsh': "Fish",
    ...                   "rhyme": lambda d: ' '.join((d[1], d['fsh'],
    ...                                                d[2], d['fsh']))})
    >>> cd["rhyme"]
    'One Fish Two Fish'
    >>> cd[1] = 'Red'
    >>> cd[2] = 'Blue'
    >>> cd["rhyme"]
    'Red Fish Blue Fish'
    """
    def __getitem__(self, item):
        it = super(CallingDict, self).__getitem__(item)
        if callable(it):
            return it(self)
        else:
            return it
Of course this is only usable if you're not actually going to store callables as values. If you need to be able to do that, you could wrap the lambda declaration in a function that adds some attribute to the resulting lambda, and check for it in CallingDict.__getitem__, but at that point it's getting complex and long-winded enough that it might just be easier to use a class for your data in the first place.
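A rough sketch of that wrapping idea, with a made-up lazy() helper and _auto_call marker attribute (assumptions, not something from the answer above):

def lazy(fn):
    fn._auto_call = True   # hypothetical marker attribute
    return fn

class CallingDict(dict):
    def __getitem__(self, item):
        it = super(CallingDict, self).__getitem__(item)
        # Only call values explicitly marked as lazy; other callables pass through.
        return it(self) if getattr(it, '_auto_call', False) else it

FOO = {'foo': 1}
cd = CallingDict({'test': lazy(lambda d: FOO['foo']),
                  'fn': len})   # stored as-is, not called
FOO['foo'] = 2
print(cd['test'])  # 2
print(cd['fn'])    # <built-in function len>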
This is very easy in a lazily evaluated language (Haskell).
Since Python is strictly evaluated, we can do a little trick to turn things lazy:
Y = lambda f: (lambda x: x(x))(lambda y: f(lambda *args: y(y)(*args)))
d1 = lambda self: lambda: {
    'a': lambda: 3,
    'b': lambda: self()['a']()
}
# fix the d1, and evaluate it
d2 = Y(d1)()
# to get a
d2['a']() # 3
# to get b
d2['b']() # 3
Syntax-wise this is not very nice. That's because we need to explicitly construct lazy expressions with lambda: ... and explicitly evaluate a lazy expression with ...(). It's the opposite of the problem in lazy languages, which need strictness annotations; here in Python we end up needing laziness annotations.
I think with some more meta-programming and some more tricks, the above could be made easier to use.
Note that this is basically how let-rec works in some functional languages.
jsbueno's answer in Python 3:
class MyDict(dict):
    def __getitem__(self, item):
        return dict.__getitem__(self, item).format(self)

dictionary = MyDict({
    'user': 'gnucom',
    'home': '/home/{0[user]}',
    'bin': '{0[home]}/bin'
})

print(dictionary["home"])
print(dictionary["bin"])
Here we use Python 3 string formatting with curly braces {} and the .format() method.
Documentation : https://docs.python.org/3/library/string.html

Pythonic way of polling a dictionary - using the key's value once it exists

I have a working solution to this question, it just doesn't feel very pythonic. I am working in Python 2.7 and, thus, cannot use Python 3 solutions.
I have a dictionary that is regularly being updated. Eventually a key, let's call it "foo", with a value will appear in the dictionary. I want to keep polling that object and getting that dictionary until the key "foo" appears at which point I want to get the value associated with that key and use it.
Here is some pseudocode that is functioning right now:
polled_dict = my_object.get_dict()
while 'foo' not in polled_dict.keys():
    polled_dict = my_object.get_dict()
fooValue = polled_dict['foo']
Let me emphasize that what the code is doing right now works. It feels gross, but it works. A potential solution I came up with is:
fooValue = None
while fooValue is None:
    polled_dict = my_object.get_dict()
    fooValue = polled_dict.get('foo')
This also works, but it only seems a tiny bit better. Instead of accessing the key twice once it shows up in the dict (once in the while test and again on exiting the loop), we only access it once. But, honestly, it doesn't seem much better and the gains are minimal.
As I look over the other solutions I've implemented, I see that they're just different logical permutations of the two above examples (a not in a different place or something), but nothing feels pythonic. It seems like there would be an easy, cleaner way of doing this. Any suggestions? If not, is either of the above better than the other?
EDIT A lot of answers are recommending I override or otherwise change the dictionaries that the code is polling from. I agree that this would normally be a great solution but, to quote from some of my comments below:
"The code in question needs to exist separately from the API that updates the dictionary. This code needs to be generic and access the dictionary of a large number of different types of objects. Adding a trigger would ultimately require completely reworking all of those objects (and would not be nearly as generic as this function needs to be) This is grossly simplified obviously but, ultimately, I need to check values in this dict until it shows up instead of triggering something in the object. I'm unconvinced that making such a wide reaching and potentially damaging change is a pythonic solution(though should the API be rewritten from the ground up this will definitely be the solution and for something that does not need to be separated/can access the API this is definitely the pythonic solution.)"
You could always do something like subclass dict.
This is completely untested, but something to the effect of:
class NoisyDict(dict):
    def __init__(self, *args, **kwargs):
        self.handlers = {}
        # Python 3 style
        super().__init__(*args, **kwargs)

    def add_handler(self, key, callback):
        self.handlers[key] = self.handlers.get(key, [])
        self.handlers[key].append(callback)

    def __getitem__(self, key):
        for handler in self.handlers.get(key, []):
            handler('get', key, super().__getitem__(key))
        return super().__getitem__(key)

    def __setitem__(self, key, value):
        for handler in self.handlers.get(key, []):
            handler('set', key, value)
        return super().__setitem__(key, value)
Then you could do
d = NoisyDict()
d.add_handler('spam', print)
d['bar'] = 3
d['spam'] = 'spam spam spam'
Fun with generators:
from itertools import repeat
gen_dict = (o.get_dict() for o in repeat(my_object))
foo_value = next(d['foo'] for d in gen_dict if 'foo' in d)
Is it not possible to do something like this? (Obviously not thread-safe.) The only catch is that the method below does not catch dictionary initialization via construction. That is, it wouldn't catch keys added when the dictionary is created; e.g. with MyDict(watcher=MyWatcher(), a=1, b=2) the a and b keys would not be caught as added. I'm not sure how to implement that.
class Watcher(object):
    """Watches entries added to a MyDict (dictionary). key_found() is called
    when an item is added whose key matches one of the elements in keys.
    """
    def __init__(self, *keys):
        self.keys = keys

    def key_found(self, key, value):
        print key, value

class MyDict(dict):
    def __init__(self, *args, **kwargs):
        self.watcher = kwargs.pop('watcher')
        super(MyDict, self).__init__(*args, **kwargs)

    def __setitem__(self, key, value):
        super(MyDict, self).__setitem__(key, value)
        if key in self.watcher.keys:
            self.watcher.key_found(key, value)

watcher = Watcher('k1', 'k2', 'k3')
d = MyDict(watcher=watcher)
d['a'] = 1
d['b'] = 2
d['k1'] = 'k1 value'
If your object is modifying the dictionary in place then you should only need to get it once. Then you and your object have a pointer to the same dictionary object. If you need to stick with polling then this is probably the cleanest solution:
polled_dict = my_object.get_dict()
while 'foo' not in polled_dict:
    pass  # optionally sleep
fooValue = polled_dict['foo']
The best overall way of doing this would be to push some type of event through a pipe/socket/thread-lock in some way.
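If the producing side can be touched at all, one sketch of that event-based idea using the standard threading.Event (the producer function and shared dict here are hypothetical stand-ins for your object):

import threading

foo_ready = threading.Event()
shared = {}

def producer():
    # ...complex logic that eventually produces the value...
    shared['foo'] = 42
    foo_ready.set()          # signal that the key now exists

threading.Thread(target=producer).start()
foo_ready.wait()             # blocks without busy-polling
fooValue = shared['foo']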
Maybe Try/Except would be considered more 'Pythonic'?
A sleep statement in the while loop will stop it consuming all your resources as well.
import time

polled_dict = my_object.get_dict()
while True:
    time.sleep(0.1)
    try:
        fooValue = polled_dict['foo']
        return fooValue  # ...or break
    except KeyError:
        polled_dict = my_object.get_dict()
I think a defaultdict is great for this kind of job.
from collections import defaultdict

mydefaultdict = defaultdict(lambda: None)
r = None
while r is None:
    r = mydefaultdict['foo']
A defaultdict works just like a regular dictionary, except that when a key doesn't exist, it calls the function supplied in the constructor to produce a value. In this case, I gave it a lambda that just returns None. With this, you can keep trying to get foo; once there is a value associated with it, it will be returned.

Is there a pythonic way to update a dictionary value when the update is dependent on the value itself?

I find myself in a lot of situations where I have a dictionary value that I want to update with a new value, but only if the new value fulfils some criteria relative to the current value (such as being larger).
Currently I write expressions similar to:
dictionary[key] = max(newvalue, dictionary[key])
which works fine but I keep thinking that there's probably a neater way to do it that doesn't involve repeating myself.
Thanks for any suggestions.
You could make the values objects with update methods that encapsulate that logic. Or subclass dict and modify the behavior of __setitem__. Just keep in mind that anything you do like this is going to make what's going on less clear to someone not familiar with your code. What you are doing now is the most explicit and clear.
Just write yourself a helper function:
def update(dictionary, key, newvalue, func=max):
    dictionary[key] = func(dictionary[key], newvalue)
Not sure if it's "neater", but one way to avoid repeating yourself is to use an object-oriented approach and subclass the built-in dict class to make something able to do what you want. This also has the advantage that instances of your custom class can be used in place of dict instances without changing the rest of your code.
class CmpValDict(dict):
    """dict subclass that stores the value associated with each key based
    on the return value of a function, which allows the value passed to be
    first compared to any already there (if there is no pre-existing
    value, the second argument passed to the function will be None)
    """
    def __init__(self, cmp=None, *args, **kwargs):
        self.cmp = cmp if cmp else lambda nv, cv: nv  # default returns new value
        super(CmpValDict, self).__init__(*args, **kwargs)

    def __setitem__(self, key, value):
        super(CmpValDict, self).__setitem__(key, self.cmp(value, self.get(key)))

cvdict = CmpValDict(cmp=max)

cvdict['a'] = 43
cvdict['a'] = 17
print cvdict['a']  # 43

cvdict[43] = 'George Bush'
cvdict[43] = 'Al Gore'
print cvdict[43]  # George Bush
What about using the Python version of a ternary operator:
d[key] = newval if newval > d[key] else d[key]
or a one line if:
if newval > d[key]: d[key] = newval

Alternative to "assign to a function call" in a python

I'm trying to solve this newbie puzzle:
I've created this function:
def bucket_loop(htable, key):
    bucket = hashtable_get_bucket(htable, key)
    for entry in bucket:
        if entry[0] == key:
            return entry[1]
    return None
And I have to call it in two other functions (below) in the following way: to change the value of the element entry[1], or to append a new element to this list (entry). But I can't do that by calling the function bucket_loop the way I did, because "you can't assign to a function call" (assigning to a function call is illegal in Python). What is the alternative (most similar to the code I wrote) to do this (bucket_loop(htable, key) = value and hashtable_get_bucket(htable, key).append([key, value]))?
def hashtable_update(htable, key, value):
    if bucket_loop(htable, key) != None:
        bucket_loop(htable, key) = value
    else:
        hashtable_get_bucket(htable, key).append([key, value])

def hashtable_lookup(htable, key):
    return bucket_loop(htable, key)
Thanks, in advance, for any help!
This is the rest of the code to make this script work:
def make_hashtable(size):
    table = []
    for unused in range(0, size):
        table.append([])
    return table

def hash_string(s, size):
    h = 0
    for c in s:
        h = h + ord(c)
    return h % size

def hashtable_get_bucket(htable, key):
    return htable[hash_string(key, len(htable))]
Similar question (but didn't help me): SyntaxError: "can't assign to function call"
In general, there are three things you can do:
Write “setter” functions (e.g., bucket_set)
Return mutable values (e.g., bucket_get(table, key).append(42) if the value is a list)
Use a class which overrides __getitem__ and __setitem__
For example, you could have a class like:
class Bucket(object):
    def __setitem__(self, key, value):
        # … implementation …

    def __getitem__(self, key):
        # … implementation …
        return value
Then use it like this:
>>> b = Bucket()
>>> b["foo"] = 42
>>> b["foo"]
42
>>>
This would be the most Pythonic way to do it.
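As a rough sketch of how that class could be filled in on top of the question's existing helpers (the HashTable name and the exact method bodies are guesses, not part of the answer above):

class HashTable(object):
    def __init__(self, size):
        self.table = make_hashtable(size)

    def __setitem__(self, key, value):
        # Update in place if the key exists, otherwise append a new entry.
        bucket = hashtable_get_bucket(self.table, key)
        for entry in bucket:
            if entry[0] == key:
                entry[1] = value
                return
        bucket.append([key, value])

    def __getitem__(self, key):
        bucket = hashtable_get_bucket(self.table, key)
        for entry in bucket:
            if entry[0] == key:
                return entry[1]
        return None

h = HashTable(16)
h['foo'] = 42
print(h['foo'])  # 42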
One option that would require few changes would be adding an optional third argument to bucket_loop to use for assignment:
empty = object()  # An object that's guaranteed not to be in your htable

def bucket_loop(htable, key, value=empty):
    bucket = hashtable_get_bucket(htable, key)
    for entry in bucket:
        if entry[0] == key:
            if value is not empty:  # Reference (id) comparison
                entry[1] = value
            return entry[1]
    else:  # I think this else is unnecessary/buggy
        return None
However, a few pointers:
I agree with Ignacio Vazquez-Abrams and David Wolever: a class would be better;
Since a bucket can have more than one key/value pair, you shouldn't return None if the first entry didn't match your key. Loop through all of them, and only return None at the end (you can also omit this statement; the default behavior is to return None);
If your htable doesn't admit None as a value, you can use it instead of empty.
So you're basically cheating at Udacity, which is an online CS class / university? The funny part is you couldn't even state the question properly. Next time, cheat thoroughly: paste the two functions you're supposed to simplify and ask someone to simplify them by creating a third function with the overlapping code inside. It doesn't matter anyway, because if this is the one you need help with, you're likely not doing very well in the class.
You were also able to solve the problem without using most of these tools; it was an exercise in understanding how to identify and handle redundancies, NOT efficiency...
Real instructions:
Modify the code for both hashtable_update and hashtable_lookup to have the same behavior they have now, but using fewer lines of code in each procedure. You should define a new procedure, helper, to help with this. Your new version should have approximately the same running time as the original version, but neither hashtable_update nor hashtable_lookup should include any for or while loop, and the block of each procedure should be no more than 6 lines of code.
Seriously, cheating is lame.
