I've attempted to count 1 values within my deque (i.e. deque.count(1)) but get the following error:
'deque' object has no attribute 'count'
I assume that I am working with a Python version that's before 2.7 when the deque.count() function was first introduced.
Besides using a for loop, what would be the most efficient/fastest way of counting how many 1's there are in my deque?
The "no for loop" requirement is strange, but if you're curious:

len(filter(lambda x: x == 1, d))

Note that this relies on Python 2, where filter() returns a list; in Python 3 it returns an iterator, which has no len().
I know you asked for no for loops, but I don't think there's any other way:
def count(dq, item):
    return sum(elem == item for elem in dq)
For example:
>>> from collections import deque
>>> d = deque([1, 2, 3, 1])
>>> count(d, 1)
2
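For completeness: deque.count() was added in Python 2.7, so on 2.7+ and any Python 3 no helper is needed at all:

```python
from collections import deque

d = deque([1, 2, 3, 1])

# deque.count() is built in as of Python 2.7.
print(d.count(1))  # → 2
```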
So, I have this code; how do I slice up the value elements in an OrderedDict?
import numpy as np
from collections import OrderedDict
x = OrderedDict()
val= np.array((1,2))
key= 40
x[key]= val
x[key]= val,3
print(x)
returns :
OrderedDict([(40, (array([1, 2]), 3))]) # <- i want to slice this 2nd value element
target output:
OrderedDict([(40, array([1, 2])])
@Caina is close, but his version is not quite right in that it leaves an extra collection layer in the result. This is the expression that returns the exact result you requested:
x_sliced = OrderedDict({k:x[k][0] for k in x})
Result:
OrderedDict([(40, array([1, 2]))])
Strictly speaking, this isn't exactly what you asked for: your target output is missing a closing ')', but I assume that's just a typo.
You can use this if you're on Python 3.6 or later:
x_sliced = {k:x[k][:1] for k in x}
If you want to make x_sliced an OrderedDict, just wrap it: OrderedDict(x_sliced).
For older versions of Python, or to ensure backward-compatibility:
for key in x:
    x[key] = x_sliced[key]
I have a multidimensional list F, holding elements of some type. So, if for example the rank is 4, then the elements of F can be accessed by something like F[a][b][c][d].
Given a list L=[a,b,c,d], I would like to access F[a][b][c][d]. My problem is that my rank is going to be changing, so I cannot just have F[L[0]][L[1]][L[2]][L[3]].
Ideally, I would like to be able to do F[L] and get the element F[a][b][c][d]. I think something like this can be done with numpy, but for the types of arrays that I'm using, numpy is not suitable, so I want to do it with python lists.
How can I have something like the above?
Edit: For a specific example of what I'm trying to achieve, see the demo in Martijn's answer.
You can use the reduce() function to access consecutive elements:
from functools import reduce # forward compatibility
import operator
reduce(operator.getitem, indices, somelist)
In Python 3 reduce was moved to the functools module, but in Python 2.6 and up you can always access it in that location.
The above uses the operator.getitem() function to apply each index to the previous result (starting at somelist).
Demo:
>>> import operator
>>> somelist = ['index0', ['index10', 'index11', ['index120', 'index121', ['index1220']]]]
>>> indices = [1, 2, 2, 0]
>>> reduce(operator.getitem, indices, somelist)
'index1220'
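The same idiom extends to writing into the nested structure: reduce over all but the last index to reach the parent container, then assign. A small sketch (the helper names get_by_path and set_by_path are my own, not from the answer above):

```python
from functools import reduce
import operator

def get_by_path(root, indices):
    # Apply each index in turn, starting from the outermost container.
    return reduce(operator.getitem, indices, root)

def set_by_path(root, indices, value):
    # Navigate to the parent container, then assign at the final index.
    get_by_path(root, indices[:-1])[indices[-1]] = value

somelist = ['index0', ['index10', 'index11', ['index120', 'index121', ['index1220']]]]
set_by_path(somelist, [1, 2, 2, 0], 'replaced')
print(get_by_path(somelist, [1, 2, 2, 0]))  # → 'replaced'
```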
Something like this?
def get_element(lst, indices):
    if indices:
        return get_element(lst[indices[0]], indices[1:])
    return lst
Test:
>>> get_element([[["element1"]], [["element2"]], "element3"], [2])
'element3'
>>> get_element([[["element1"]], [["element2"]], "element3"], [0, 0])
['element1']
Or if you want an iterative version:
def get_element(lst, indices):
    res = lst
    for index in indices:
        res = res[index]
    return res
Test:
>>> get_element([[["element1"]], [["element2"]], "element3"], [1, 0])
['element2']
I learned about the collections.Counter() class recently and, as it's a neat (and fast??) way to count stuff, I started using it.
But I detected a bug on my program recently due to the fact that when I try to update the count with a tuple, it actually treats it as a sequence and updates the count for each item in the tuple instead of counting how many times I inserted that particular tuple.
For example, if you run:
import collections
counter = collections.Counter()
counter.update(('user1', 'loggedin'))
counter.update(('user2', 'compiled'))
counter.update(('user1', 'compiled'))
print counter
You'll get:
Counter({'compiled': 2, 'user1': 2, 'loggedin': 1, 'user2': 1})
as a result. Is there a way to count tuples with the Counter()? I could concatenate the strings but this is... ugly. Could I use named tuples? Implement my own very simple dictionary counter? Don't know what's best.
Sure: you simply have to add one level of indirection, namely pass .update a container with the tuple as an element.
>>> import collections
>>> counter = collections.Counter()
>>> counter.update((('user1', 'loggedin'),))
>>> counter.update((('user2', 'compiled'),))
>>> counter.update((('user1', 'compiled'),))
>>> counter.update((('user1', 'compiled'),))
>>> counter
Counter({('user1', 'compiled'): 2, ('user1', 'loggedin'): 1, ('user2', 'compiled'): 1})
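Equivalently, since Counter hashes whole elements, you can pass an iterable of tuples to the constructor, or increment a tuple key directly; a quick sketch:

```python
from collections import Counter

events = [('user1', 'loggedin'), ('user2', 'compiled'),
          ('user1', 'compiled'), ('user1', 'compiled')]

# The constructor counts each element of the iterable -- here, whole tuples.
counter = Counter(events)
print(counter[('user1', 'compiled')])  # → 2

# A single tuple key can also be incremented directly.
counter[('user2', 'compiled')] += 1
print(counter[('user2', 'compiled')])  # → 2
```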
I'm trying to use map() on the dict_values object returned by the values() function on a dictionary. However, I can't seem to be able to map() over a dict_values:
map(print, h.values())
Out[31]: <builtins.map at 0x1ce1290>
I'm sure there's an easy way to do this. What I'm actually trying to do is create a set() of all the Counter keys in a dictionary of Counters, doing something like this:
# counters is a dict with Counters as values
whole_set = set()
map(lambda x: whole_set.update(set(x)), counters.values())
Is there a better way to do this in Python?
In Python 3, map returns an iterator, not a list. You still have to iterate over it, either by calling list on it explicitly, or by putting it in a for loop. But you shouldn't use map this way anyway. map is really for collecting return values into an iterable or sequence. Since neither print nor set.update returns a value, using map in this case isn't idiomatic.
Your goal is to put all the keys in all the counters in counters into a single set. One way to do that is to use a nested generator expression:
s = set(key for counter in counters.values() for key in counter)
There's also the lovely dict comprehension syntax, which is available in Python 2.7 and higher (thanks Lattyware!) and can generate sets as well as dictionaries:
s = {key for counter in counters.values() for key in counter}
These are both roughly equivalent to the following:
s = set()
for counter in counters.values():
    for key in counter:
        s.add(key)
You want the set-union of all the values of counters? I.e.,
counters[1].union(counters[2]).union(...).union(counters[n])
? That's just functools.reduce:
import functools
s = functools.reduce(set.union, counters.values())
If counters.values() aren't already sets (e.g., if they're lists), you should turn them into sets first. You can do that with a dict comprehension over iteritems(), which is a little clunky:
>>> counters = {1:[1,2,3], 2:[4], 3:[5,6]}
>>> counters = {k:set(v) for (k,v) in counters.iteritems()}
>>> print counters
{1: set([1, 2, 3]), 2: set([4]), 3: set([5, 6])}
or of course you can do it inline, since you don't care about counters.keys():
>>> counters = {1:[1,2,3], 2:[4], 3:[5,6]}
>>> functools.reduce(set.union, [set(v) for v in counters.values()])
set([1, 2, 3, 4, 5, 6])
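Note that iteritems and the print statement above are Python 2 idioms. On Python 3, a sketch of the same conversion uses items() (the dict comprehension itself is unchanged):

```python
import functools

counters = {1: [1, 2, 3], 2: [4], 3: [5, 6]}

# Python 3: dict.items() replaces the Python 2 iteritems().
counters = {k: set(v) for k, v in counters.items()}
print(sorted(functools.reduce(set.union, counters.values())))  # → [1, 2, 3, 4, 5, 6]
```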
There is a nice Enum class in the enum module, but it only works for strings. I'm currently using:
for index in range(len(objects)):
    # do something with index and objects[index]
I guess it's not the optimal solution due to the premature use of len. How is it possible to do it more efficiently?
Here is the pythonic way to write this loop:
for index, obj in enumerate(objects):
    # Use index, obj.
enumerate works on any sequence regardless of the types of its elements. It is a builtin function.
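enumerate also takes an optional start argument (since Python 2.6), which is handy when you need indices that don't begin at zero:

```python
objects = ['a', 'b', 'c']

# start=1 yields 1-based indices with no manual offsetting.
for index, obj in enumerate(objects, start=1):
    print(index, obj)  # prints "1 a", then "2 b", then "3 c"
```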
Edit:
After running some timeit tests using Python 2.5, I found enumerate to be slightly slower:
>>> timeit.Timer('for i in xrange(len(seq)): x = i + seq[i]', 'seq = range(100)').timeit()
10.322299003601074
>>> timeit.Timer('for i, e in enumerate(seq): x = i + e', 'seq = range(100)').timeit()
11.850601196289062