A set is unordered and unindexed, so there is no concept of a last-entered element, and hence no popitem(). Is this the reasoning for there being no popitem() in set?
If this is valid reasoning, then why does dictionary have popitem()? A dictionary is also unordered, like a set.
The corresponding method for sets is pop():
pop()
Remove and return an arbitrary element from the set. Raises KeyError if the set is empty.
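For example (which element is removed is arbitrary, so your output may differ):
>>> s = {1, 2, 3}
>>> s.pop()   # an arbitrary element is removed and returned
1
>>> s
{2, 3}
>>> set().pop()
Traceback (most recent call last):
  ...
KeyError: 'pop from an empty set'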
Prior to Python 3.7, dicts were unordered and popitem() returned an arbitrary key-value pair. Only since 3.7 have dicts been ordered, with popitem() defined to return items in LIFO order.
It's called popitem() for dicts because there's already a pop(key) method that removes the item with the specified key.
Python's set has pop() to return an arbitrary item. You don't know which one it will return, though.
There is no method to remove the last-entered element from a set because set is unordered in Python.
If you want an easy way to emulate an ordered set using the standard library you can use collections.OrderedDict instead:
from collections import OrderedDict
s = OrderedDict.fromkeys((2,5,2,6,7))
s.popitem()
From the example above, s.keys() would become:
odict_keys([2, 5, 6])
I am using recursive code as below:
def dfs(self, hash_map):
    ans = 1
    for key, value in hash_map.items():
        if value > 0:
            hash_map[key] = hash_map[key] - 1
            ans = ans + self.dfs(hash_map)
            hash_map[key] = value  # restore the count after the recursive call
    return ans
So will the list returned by the items() method of the dictionary retain the order?
Not necessarily; it depends on which version of Python you're using. Check out OrderedDict.
From the docs:
Ordered dictionaries are just like regular dictionaries but have some extra capabilities relating to ordering operations. They have become less important now that the built-in dict class gained the ability to remember insertion order (this new behavior became guaranteed in Python 3.7).
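For instance, on Python 3.7 and later a plain dict preserves insertion order, and popitem() pops in LIFO order:
>>> d = {'first': 1, 'second': 2}
>>> d['third'] = 3
>>> list(d)
['first', 'second', 'third']
>>> d.popitem()
('third', 3)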
If you were relying on the order of keys in a Python dictionary or set, then don't. Python uses a hash table to implement these types and their order depends on the insertion and deletion history as well as the random hash seed.
More details about this can be found here:
hash function in Python 3.3 returns different results between sessions
I have a variable (an object from a database). In some cases this variable can be of type list and in some cases a dictionary.
Standard for loop if the variable is a list:
for value in object_values:
    self.do_something(value)
Standard for loop if the variable is a dictionary:
for key, value in object_values.items():
    self.do_something(value)
I can use isinstance() to check the type, but then I still need two functions, or an if with two for loops. Right now I have an if condition which calls one of two functions: one for iterating as a list (e.g. iterate_list()) and the second for iterating as a dictionary (e.g. iterate_dict()).
Is there a better, more elegant and Pythonic way to resolve the problem of not knowing whether the variable will be a list or a dictionary?
In your case, since the data is either the items of the list or the values of the dictionary, you could use a ternary expression to pick values() or the iterable itself, depending on the type:
def iterate(self, object_values):
    for value in object_values.values() if isinstance(object_values, dict) else object_values:
        self.do_something(value)
If you pass a tuple, generator or other iterable, it falls back on "standard" iteration. If you pass a dictionary (or OrderedDict or other mapping), it iterates over the values.
Performance-wise, the ternary expression is evaluated only once at the start of the iteration, so it's fine.
The isinstance bit could even be replaced by hasattr(object_values, "values"), so even non-dict objects with a values member would match (see the sketch below).
(Note that you should be aware of the principle of "least astonishment": some people may expect an iteration over the keys of the dictionary when calling the method.)
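A minimal sketch of that hasattr variant, using duck typing instead of an explicit type check (the method and attribute names follow the question's example, so treat it as an illustration rather than a drop-in implementation):
def iterate(self, object_values):
    # Anything exposing a values() method is treated as a mapping;
    # everything else is iterated over directly.
    iterable = object_values.values() if hasattr(object_values, "values") else object_values
    for value in iterable:
        self.do_something(value)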
Suppose I want Perl-like autovivification in Python, i.e.:
>>> d = Autovivifier()
>>> d['nested']['key']['value'] = 10
>>> d
{'nested': {'key': {'value': 10}}}
There are a couple of dominant ways to do that:
Use a recursive default dict
Use a __missing__ hook to return the nested structure
OK -- easy.
Now suppose I want to return a default value from a dict with a missing key. Once again, there are a few ways to do that:
For a non-nested path, you can use a __missing__ hook
A try/except block wrapping the access to the potentially missing key path
Use {}.get(key, default) (does not easily work with a nested dict), i.e., there is no version of autoviv.get(['nested']['key']['no key of this value'], default)
The two goals seem to be in irreconcilable conflict (based on my trying to work this out for the last couple of hours).
Here is the question:
Suppose I want an autovivifying dict that 1) creates the nested structure for d['arbitrary']['nested']['path']; AND 2) returns a default value for a non-existent arbitrary nesting without wrapping the access in try/except?
Here are the issues:
The call d['nested']['key']['no key of this value'] is equivalent to (d['nested'])['key']['no key of this value']. Overriding __getitem__ does not work unless it returns an object that ALSO overrides __getitem__.
Both of the methods for creating an autovivifier will create a dict entry if you merely test that path for existence, i.e., I do not want if d['p1']['sp2']['etc.'] to create that whole path just because you tested it with the if.
How can I provide a dict in Python that will:
Create an access path of the type d['p1']['p2'][etc] = val (autovivification);
NOT create that same path if you test for existence;
Return a default value (like {}.get(key, default)) without wrapping in try/except
I do not need the FULL set of dict operations. Really only d['nested']['key']['value'] = val, and having d['nested']['key']['no key of this value'] equal a default value. I would prefer that testing d['nested']['key']['no key of this value'] does not create it, but would accept that.
To create a recursive tree of dictionaries, use defaultdict with a trick:
from collections import defaultdict
tree = lambda: defaultdict(tree)
Then you can create your x with x = tree().
(The above is from @BrenBarn's answer to "defaultdict of defaultdict, nested".)
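For example (and note that merely reading a missing path vivifies it, which is exactly the existence-test problem raised in the question):
>>> from collections import defaultdict
>>> tree = lambda: defaultdict(tree)
>>> x = tree()
>>> x['nested']['key']['value'] = 10
>>> x['nested']['key']['value']
10
>>> x['a']['b']   # just reading this path created x['a'] and x['a']['b']
defaultdict(<function <lambda> at 0x...>, {})
>>> 'a' in x
True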
Don't do this. It could be solved much more easily by just writing a class that has the operations you want, and even in Perl it's not a universally appreciated feature.
But, well, it is possible, with a custom autoviv class. You'd need a __getitem__ that returns an empty autoviv dict but doesn't store it. The new autoviv dict would remember the autoviv dict and key that created it, then insert itself into its parent only when a "real" value is stored in it.
Since an empty dict tests as falsey, you could then test for existence Perl-style, without ever actually creating the intermediate dicts.
But I'm not going to write the code out, because I'm pretty sure this is a terrible idea.
While it does not precisely match the dictionary protocol in Python, you could achieve reasonable results by implementing your own auto-vivification dictionary that uses variable getitem arguments. Something like (2.x):
class ExampleVivifier(object):
    """ Small example class to show how to use varargs in __getitem__. """
    def __getitem__(self, *args):
        print args
Example usage would be:
>>> v = ExampleVivifier()
>>> v["nested", "dictionary", "path"]
(('nested', 'dictionary', 'path'),)
You can fill in the blanks to see how you can achieve your desired behaviour here.
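Picking up that idea, here is a hypothetical sketch (the Vivifier name and its behaviour are my own invention, not a standard API) that supports tuple paths for both assignment and defaulted lookup, and never creates entries on a read:
class Vivifier(object):
    """Sketch: paths are assumed to be tuples of keys; reads never create entries."""
    def __init__(self, default=None):
        self._data = {}
        self._default = default

    def __setitem__(self, path, value):
        # v['a', 'b', 'c'] = val creates intermediate dicts as needed
        d = self._data
        for key in path[:-1]:
            d = d.setdefault(key, {})
        d[path[-1]] = value

    def __getitem__(self, path):
        # v['a', 'b', 'missing'] returns the default and creates nothing
        d = self._data
        for key in path:
            if not isinstance(d, dict) or key not in d:
                return self._default
            d = d[key]
        return d
With v = Vivifier(default=0), v['nested', 'key', 'value'] = 10 assigns through the path, while v['nested', 'key', 'missing'] returns 0 and leaves the structure untouched.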
Do OrderedDict.values(), .keys(), .iterkeys() etc. return the values in the order that items were first inserted?
I assume that the values/keys functions do not change the order of the dict, and that if it's an OrderedDict, I get the values in the order they were added to the dictionary.
Is that true?
Do OrderedDict.values(), .keys(), .iterkeys() etc. return the values in the order that items were first inserted?
Yes, they are preserved. From the documentation (emphasis mine):
Equality tests between OrderedDict objects are order-sensitive and are implemented as list(od1.items()) == list(od2.items()).
That's how equality comparison is done, and it implies that the result of items() is ordered. For the other two functions you can look into a substitute implementation of OrderedDict here.
Your assumption is correct. OrderedDict maintains an ordered list of its keys and uses it to iterate over values and keys, so they will always come out in the same order as in a for loop.
The comments in the source code also state this:
The inherited dict provides __getitem__, __len__, __contains__, and get. The remaining methods are order-aware. Big-O running times for all methods are the same as regular dictionaries.
Emphasis mine.
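A quick illustration:
>>> from collections import OrderedDict
>>> od = OrderedDict()
>>> od['b'] = 1
>>> od['a'] = 2
>>> od['c'] = 3
>>> list(od.keys())
['b', 'a', 'c']
>>> list(od.values())
[1, 2, 3]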
I recently wrote some code that looked something like this:
# dct is a dictionary
if "key" in dct.keys():
However, I later found that I could achieve the same results with:
if "key" in dct:
This discovery got me thinking, and I ran some tests to see whether there could be a scenario where I must use the keys method of a dictionary. My conclusion, however, is that there is not.
If I want the keys in a list, I can do:
keys_list = list(dct)
If I want to iterate over the keys, I can do:
for key in dct:
...
Lastly, if I want to test if a key is in dct, I can use in as I did above.
Summed up, my question is: am I missing something? Could there ever be a scenario where I must use the keys method? Or is it simply a leftover from an earlier version of Python that should be ignored?
On Python 3, use dct.keys() to get a dictionary view object, which lets you do set operations on just the keys:
>>> for sharedkey in dct1.keys() & dct2.keys(): # intersection of two dictionaries
... print(dct1[sharedkey], dct2[sharedkey])
In Python 2.7, you'd use dct.viewkeys() for that.
In Python 2, dct.keys() returns a list, a copy of the keys in the dictionary. This can be passed around as a separate object that can be manipulated in its own right, including removing elements without affecting the dictionary itself; however, you can create the same list with list(dct), which works in both Python 2 and 3.
You indeed don't want any of these for iteration or membership testing; always use for key in dct and key in dct for those, respectively.
Source: PEP 234, PEP 3106
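For example, the key views support the usual set operators (Python 3 shown; the dct1/dct2 names follow the example above, and the display order of the resulting sets is arbitrary):
>>> dct1 = {'a': 1, 'b': 2}
>>> dct2 = {'b': 3, 'c': 4}
>>> dct1.keys() & dct2.keys()   # intersection
{'b'}
>>> dct1.keys() - dct2.keys()   # difference
{'a'}
>>> dct1.keys() | dct2.keys()   # union
{'a', 'b', 'c'}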
Python 2's relatively useless dict.keys method exists for historical reasons. Originally, dicts weren't iterable. In fact, there was no such thing as an iterator; iterating over sequences worked by calling __getitem__, the element access method, with increasing integer indices until an IndexError was raised. To iterate over the keys of a dict, you had to call the keys method to get an explicit list of keys and iterate over that.
When iterators went in, dicts became iterable, because it was more convenient, faster, and all around better to say
for key in d:
than
for key in d.keys()
This had the side-effect of making d.keys() utterly superfluous; list(d) and iter(d) now did everything d.keys() did in a cleaner, more general way. They couldn't get rid of keys, though, since so much code already called it.
(At this time, dicts also got a __contains__ method, so you could say key in d instead of d.has_key(key). This was shorter and nicely symmetrical with for key in d; the symmetry is also why iterating over a dict gives the keys instead of (key, value) pairs.)
In Python 3, taking inspiration from the Java Collections Framework, the keys, values, and items methods of dicts were changed. Instead of returning lists, they would return views of the original dict. The key and item views would support set-like operations, and all views would be wrappers around the underlying dict, reflecting any changes to the dict. This made keys useful again.
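For instance, a Python 3 key view reflects later changes to the dict:
>>> d = {'a': 1}
>>> ks = d.keys()
>>> d['b'] = 2
>>> ks
dict_keys(['a', 'b'])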
Assuming you're not using Python 3, list(dct) is equivalent to dct.keys(). Which one you use is a matter of personal preference. I personally think dct.keys() is slightly clearer, but to each their own.
In any case, there isn't a scenario where you "need" to use dct.keys() per se.
In Python 3, dct.keys() returns a "dictionary view object", so if you need to get a hold of an unmaterialized view to the keys (which could be useful for huge dictionaries) outside of a for loop context, you'd need to use dct.keys().
In Python 2,
key in dct
is much faster than checking
key in dct.keys()
because dct.keys() builds a list of all keys first and then scans it. (In Python 3, dct.keys() returns a view whose membership test is also a constant-time hash lookup, but the plain key in dct form is still shorter and preferred.)
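If you want to see the difference yourself, a rough timeit sketch (the exact numbers will vary; on Python 2 the second statement rebuilds and scans a list on every call):
import timeit

setup = "d = dict.fromkeys(range(100000))"
# Direct membership test: a single hash lookup
print(timeit.timeit("-1 in d", setup=setup, number=1000))
# Python 2: d.keys() builds a list each time, then scans it
print(timeit.timeit("-1 in d.keys()", setup=setup, number=1000))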