How to slice the value elements of an OrderedDict in Python?

So, I have this code. How do I slice the value elements of the OrderedDict?
import numpy as np
from collections import OrderedDict
x = OrderedDict()
val = np.array((1, 2))
key = 40
x[key] = val
x[key] = val, 3
print(x)
returns:
OrderedDict([(40, (array([1, 2]), 3))]) # <- I want to slice off this 2nd value element
target output:
OrderedDict([(40, array([1, 2])])

@Caina is close, but that version is not quite right: it leaves an extra collection layer in the result. This is the expression that returns the exact result you requested:
x_sliced = OrderedDict({k:x[k][0] for k in x})
Result:
OrderedDict([(40, array([1, 2]))])
Strictly speaking, this isn't exactly what you asked for: your target output is missing one closing ')', but I assume that's just a typo.

You can use this if you're using Python 3.6 or newer:
x_sliced = {k:x[k][:1] for k in x}
If you want to make x_sliced an OrderedDict, just wrap it: OrderedDict(x_sliced).
For older versions of Python, or to ensure backward compatibility, copy the sliced values back into the original OrderedDict:
for key in x:
    x[key] = x_sliced[key]
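Putting the approach together, here is a minimal runnable sketch. It uses a plain list in place of the numpy array, since the slicing logic is identical either way:

```python
from collections import OrderedDict

x = OrderedDict()
x[40] = ([1, 2], 3)  # value is a tuple: (array-like, 3)

# Keep only the first element of each value tuple, preserving key order.
x_sliced = OrderedDict((k, v[0]) for k, v in x.items())
print(x_sliced)  # OrderedDict([(40, [1, 2])])
```

The generator expression avoids building an intermediate plain dict, so key order is preserved on any Python version that has OrderedDict.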

Related

Using map function with external dictionary (global)

I'm trying to improve the computing time of my code so I want to replace for loops with map functions.
For each key in the dictionary I check whether its value is bigger than a specific threshold and, if so, insert a transformed value into a new dictionary under the same key.
My original code is:
dict1={'a':-1,'b':0,'c':1,'d':2,'e':3}
dict_filt = {}
for key in dict1.keys():
    if dict1[key] > 1:
        dict_filt[key] = dict1[key]*10
print(dict_filt)
output is: {'d': 20, 'e': 30}
and this works
but when I try with map:
dict1={'a':-1,'b':0,'c':1,'d':2,'e':3}
dict_filt = {}
def for_filter(key):
    if dict1[key] > 1:
        dict_filt[key] = dict1[key]*10
map(for_filter, dict1.keys())
print(dict_filt)
I get an empty dictionary
I tried to make it work with lambda:
map (lambda x: for_filter(x) ,dict1.keys())
or defining the dictionaries as global, but it still doesn't work.
I'll be glad to get some help
I don't need the original dictionary so if it's simpler to work on one dictionary it's still ok
Use a dictionary comprehension instead of map:
{k: v * 10 for k, v in dict1.items() if v > 1}
Code:
dict1 = {'a':-1,'b':0,'c':1,'d':2,'e':3}
print({k: v * 10 for k, v in dict1.items() if v > 1})
# {'d': 20, 'e': 30}
map is lazy: if you do not consume the values, the function for_filter is not applied. Since you are using a side effect to populate dict_filt, nothing will happen unless you force the evaluation of the map:
Replace:
map(for_filter, dict1.keys())
By:
list(map(for_filter, dict1)) # you don't need keys here
And you will get the expected result.
But note that this is a misuse of map. You should use a dict comprehension (see @Austin's answer).
EDIT: More on map and laziness.
TL;DR:
Look at the doc:
map(function, iterable, ...)
Return an iterator that applies function to every item of iterable, yielding the results.
Explanation
Consider the following function:
>>> def f(x):
...     print("x =", x)
...     return x
...
This function returns its parameter and performs a side effect (print the value). Let's try to apply this function to a simple range with the map function:
>>> m = map(f, range(5))
Nothing is printed! Let's look at the value of m:
>>> m
<map object at 0x7f91d35cccc0>
We were expecting [0, 1, 2, 3, 4] but we got a strange <map object at 0x7f91d35cccc0>. That's laziness: map does not really apply the function, but creates an iterator instead. This iterator returns one value on each next call:
>>> next(m)
x = 0
0
That value is the result of applying the function f to the next element of the mapped iterable (range). Here, 0 is the returned value and x = 0 is the result of the print side effect. What is important here is that this value does not exist before you pull it out of the iterator. Hence the side effect is not performed before you pull the value out of the iterator.
If we continue to call next, we'll exhaust the iterator:
...
>>> next(m)
x = 4
4
>>> next(m)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
StopIteration
Another way to get all the values of the iterator is to build a list. That's not a cast, but rather the construction of a new object and the consumption of the old one:
>>> m = map(f, range(5))
>>> list(m)
x = 0
x = 1
x = 2
x = 3
x = 4
[0, 1, 2, 3, 4]
We see that the side effect print is performed for every element of the range, and then the list [0, 1, 2, 3, 4] is returned.
In your case, the function doesn't print anything, but makes an assignment to an external variable dict_filt. The function is not applied unless you consume the map iterator.
I repeat: do not use map (or any list/dict comprehension) to perform side effects (map comes from the functional world, where side effects do not exist).
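Both points above, the lazy map that silently does nothing and the idiomatic comprehension, can be seen in one short sketch:

```python
dict1 = {'a': -1, 'b': 0, 'c': 1, 'd': 2, 'e': 3}

# Misuse: the side effect inside map never runs, because the returned
# iterator is never consumed.
dict_filt = {}
map(lambda k: dict_filt.__setitem__(k, dict1[k] * 10), dict1)
print(dict_filt)  # {} -- nothing happened

# Idiomatic: a dict comprehension builds the filtered result directly.
dict_filt = {k: v * 10 for k, v in dict1.items() if v > 1}
print(dict_filt)  # {'d': 20, 'e': 30}
```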

map, lambda and append... why doesn't it work?

So I'm trying to do this.
a = []
map(lambda x: a.append(x),(i for i in range(1,5)))
I know map takes a function, so why doesn't it append to the list? Or is append not a function?
However, printing a shows that it is still empty.
Now, interestingly, this works:
a = []
[a.append(i) for i in range(5)]
print(a)
Aren't they basically "saying" the same thing?
It's almost as if that list comprehension became some sort of hybrid list-comprehension function.
So why doesn't the lambda and map approach work?
I am assuming you are using Python 3.x. The actual reason your code with map() does not work is that in Python 3.x, map() returns a lazy iterator; unless you iterate over the object returned by map(), the lambda function is never called. Try doing list(map(...)), and you should see a getting filled.
That being said, what you are doing does not make much sense; you can just use:
a = list(range(5))
append() returns None, so it doesn't make sense to use it with map. A simple for loop would suffice:
a = []
for i in range(5):
    a.append(i)
print a
Alternatively if you want to use list comprehensions / map function;
a = range(5) # Python 2.x
a = list(range(5)) # Python 3.x
a = [i for i in range(5)]
a = map(lambda i: i, range(5)) # Python 2.x
a = list(map(lambda i: i, range(5))) # Python 3.x
[a.append(i) for i in range(5)]
The above code does the appending too; however, it also creates a list of None values the same size as range(5), which is a waste of memory.
>>> a = []
>>> b = [a.append(i) for i in range(5)]
>>> print a
[0, 1, 2, 3, 4]
>>> print b
[None, None, None, None, None]
The functions map and filter take as their first argument a function reference that is called for each element in the sequence (list, tuple, etc.) provided as the second argument, AND the results of these calls are used to create the resulting list.
The function reduce takes as its first argument a function reference that is called for the first 2 elements of the sequence provided as the second argument; the result is then used together with the third element in another call, then that result with the fourth element, and so on. A single value results in the end. (The examples below are Python 2, where map and filter return lists; in Python 3, wrap them in list() and import reduce from functools.)
>>> map(lambda e: e+10, [i for i in range(5)])
[10, 11, 12, 13, 14]
>>> filter(lambda e: e%2, [i for i in range(5)])
[1, 3]
>>> reduce(lambda e1, e2: e1+e2, [i for i in range(5)])
10
Explanations:
map example: adds 10 to each elem of the list [0,1,2,3,4]
filter example: keeps only the elems of [0,1,2,3,4] that are odd
reduce example: adds the first 2 elems of [0,1,2,3,4], then the result and the third elem, then the result and the fourth elem, and so on.
This map doesn't work because the append() method returns None and not a list:
>>> a = []
>>> type(a.append(1))
<class 'NoneType'>
To keep it functional, why not use reduce instead?
>>> from functools import reduce
>>> reduce(lambda p, x: p+[x], (i for i in range(5)), [])
[0, 1, 2, 3, 4]
The lambda will not get triggered unless you wrap the call to map in list(), like below:
list(map(lambda x: a.append(x), (i for i in range(1,5))))
map only returns an iterator, which needs to be consumed in order for the calls to happen. The above code will get the lambda called.
However, this code does not make much sense considering what you are trying to achieve.
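Both behaviors discussed above, the laziness and the None return value of append, can be observed in one short sketch:

```python
a = []
m = map(a.append, range(1, 5))  # nothing runs yet: map is lazy
print(a)  # []

result = list(m)  # consuming the iterator performs the appends...
print(a)       # [1, 2, 3, 4]
print(result)  # [None, None, None, None] -- ...but collects only None
```

This is why the map call appears to "do nothing" until it is consumed, and why its return value is useless when the mapped function is called only for its side effect.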

Accessing elements of multi-dimensional list, given a list of indexes

I have a multidimensional list F, holding elements of some type. So, if for example the rank is 4, then the elements of F can be accessed by something like F[a][b][c][d].
Given a list L=[a,b,c,d], I would like to access F[a][b][c][d]. My problem is that my rank is going to be changing, so I cannot just have F[L[0]][L[1]][L[2]][L[3]].
Ideally, I would like to be able to do F[L] and get the element F[a][b][c][d]. I think something like this can be done with numpy, but for the types of arrays that I'm using, numpy is not suitable, so I want to do it with python lists.
How can I have something like the above?
Edit: For a specific example of what I'm trying to achieve, see the demo in Martijn's answer.
You can use the reduce() function to access consecutive elements:
from functools import reduce # forward compatibility
import operator
reduce(operator.getitem, indices, somelist)
In Python 3 reduce was moved to the functools module, but in Python 2.6 and up you can always access it in that location.
The above uses the operator.getitem() function to apply each index to the previous result (starting at somelist).
Demo:
>>> import operator
>>> somelist = ['index0', ['index10', 'index11', ['index120', 'index121', ['index1220']]]]
>>> indices = [1, 2, 2, 0]
>>> reduce(operator.getitem, indices, somelist)
'index1220'
Something like this?
def get_element(lst, indices):
    if indices:
        return get_element(lst[indices[0]], indices[1:])
    return lst
Test:
get_element([[["element1"]], [["element2"]], "element3"], [2])
'element3'
get_element([[["element1"]], [["element2"]], "element3"], [0, 0])
['element1']
Or if you want an iterative version:
def get_element(lst, indices):
    res = lst
    for index in indices:
        res = res[index]
    return res
Test:
get_element([[["element1"]], [["element2"]], "element3"], [1, 0])
['element2']
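As a sanity check, the iterative version and the reduce() expression from the first answer should agree. A quick sketch using the demo data from that answer:

```python
from functools import reduce
import operator

def get_element(lst, indices):
    # Walk down one nesting level per index.
    res = lst
    for index in indices:
        res = res[index]
    return res

somelist = ['index0', ['index10', 'index11', ['index120', 'index121', ['index1220']]]]
indices = [1, 2, 2, 0]

assert get_element(somelist, indices) == reduce(operator.getitem, indices, somelist)
print(get_element(somelist, indices))  # 'index1220'
```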

Counting elements in a Python deque

I've attempted to count 1 values within my deque (i.e. deque.count(1)) but get the following error:
'deque' object has no attribute 'count'
I assume that I am working with a Python version that's before 2.7 when the deque.count() function was first introduced.
Besides using a for loop, what would be the most efficient/fastest way of counting how many 1's there are in my deque?
"without loops" requirement is strange, but if you're curious...
len(filter(lambda x: x == 1, d))
I know you asked for no for loops, but I don't think there's any other way:
def count(dq, item):
    return sum(elem == item for elem in dq)
For example:
>>> from collections import deque
>>> d = deque([1, 2, 3, 1])
>>> count(d, 1)
2
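As an aside, if you are on Python 2.7 or newer anyway (the same version that introduced deque.count), collections.Counter tallies every distinct value in a single pass, which is handy when you need more than one count. A quick sketch:

```python
from collections import Counter, deque

d = deque([1, 2, 3, 1, 1, 2])

# Counter walks the deque once and counts every distinct element.
counts = Counter(d)
print(counts[1])   # 3
print(counts[2])   # 2
print(counts[99])  # 0 -- missing elements report zero instead of raising
```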

Python map() dictionary values

I'm trying to use map() on the dict_values object returned by the values() function on a dictionary. However, I can't seem to be able to map() over a dict_values:
map(print, h.values())
Out[31]: <builtins.map at 0x1ce1290>
I'm sure there's an easy way to do this. What I'm actually trying to do is create a set() of all the Counter keys in a dictionary of Counters, doing something like this:
# counters is a dict with Counters as values
whole_set = set()
map(lambda x: whole_set.update(set(x)), counters.values())
Is there a better way to do this in Python?
In Python 3, map returns an iterator, not a list. You still have to iterate over it, either by calling list on it explicitly, or by putting it in a for loop. But you shouldn't use map this way anyway. map is really for collecting return values into an iterable or sequence. Since neither print nor set.update returns a value, using map in this case isn't idiomatic.
Your goal is to put all the keys in all the counters in counters into a single set. One way to do that is to use a nested generator expression:
s = set(key for counter in counters.values() for key in counter)
There's also the lovely comprehension syntax, available in Python 2.7 and higher (thanks Lattyware!), which can build sets as well as dictionaries:
s = {key for counter in counters.values() for key in counter}
These are both roughly equivalent to the following:
s = set()
for counter in counters.values():
    for key in counter:
        s.add(key)
You want the set-union of all the values of counters? I.e.,
counters[1].union(counters[2]).union(...).union(counters[n])
? That's just functools.reduce:
import functools
s = functools.reduce(set.union, counters.values())
If counters.values() aren't already sets (e.g., if they're lists), then you should turn them into sets first. You can do that with a dict comprehension over iteritems() (Python 2), which is a little clunky:
>>> counters = {1:[1,2,3], 2:[4], 3:[5,6]}
>>> counters = {k:set(v) for (k,v) in counters.iteritems()}
>>> print counters
{1: set([1, 2, 3]), 2: set([4]), 3: set([5, 6])}
or of course you can do it inline, since you don't care about counters.keys():
>>> counters = {1:[1,2,3], 2:[4], 3:[5,6]}
>>> functools.reduce(set.union, [set(v) for v in counters.values()])
set([1, 2, 3, 4, 5, 6])
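One more variant worth knowing: calling union on an empty set accepts arbitrary iterables directly, so no per-value set() conversion or reduce is needed. A quick sketch:

```python
counters = {1: [1, 2, 3], 2: [4], 3: [5, 6]}

# set().union(*iterables) unions all the value lists in one call.
s = set().union(*counters.values())
print(s)  # {1, 2, 3, 4, 5, 6}
```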
