I want to use two generators in a single for loop. Something like:
for a,b,c,d,e,f in f1(arg),f2(arg):
    print a,b,c,d,e,f
Where a, b, c, d, and e come from f1 and f comes from f2. I need to use generators (yield) because of space constraints.
The above code, however, doesn't work: for some reason it keeps taking values (for all six variables) from f1 until it is exhausted, and only then starts taking values from f2.
Please let me know if this is possible, and if not, whether there is a workaround. Thanks in advance.
You can use zip (itertools.izip if you're using Python 2) and sequence unpacking:
def f1(arg):
    for i in range(10):
        yield 1, 2, 3, 4, 5

def f2(arg):
    for i in range(10):
        yield 6

arg = 1
for (a, b, c, d, e), f in zip(f1(arg), f2(arg)):
    print(a, b, c, d, e, f)
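One caveat: zip stops as soon as the shorter of the two generators is exhausted. If f1 and f2 may yield different numbers of items and you still need the leftovers, itertools.zip_longest (izip_longest in Python 2) pads the shorter side; a sketch, assuming a fill value of None is acceptable:
from itertools import zip_longest

# pairs keep coming until the *longest* generator is done;
# the exhausted side shows up as the fill value
for left, f in zip_longest(f1(arg), f2(arg), fillvalue=None):
    if left is None:   # f1 ran out first
        a = b = c = d = e = None
    else:
        a, b, c, d, e = left
    print(a, b, c, d, e, f)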
For example, if I want to assign a, b, c from l = [1,2,3,4,5], I can do
a, b, c = l[:3]
but what if l is only [1,2] (or [1] or [])?
Is there a way to automatically set the rest of the variables to None or '' or 0 or some other default value?
I thought about extending the list with default values before assigning to match the number of variables, but just wondering if there's a better way.
In general, to unpack N elements into N separate variables from a list of size M where M <= N, you can pad your list slice up to N elements and slice again:
l = [1,]
a, b, c = (l[:3] + [None]*3)[:3]
a, b, c
# 1, None, None
If you fancy a clean generator-based approach, this will also work:
from itertools import islice, cycle, chain

def pad(seq, filler=None):
    yield from chain(seq, cycle([filler]))

a, b, c = islice(pad([1]), 3)
a, b, c
# 1, None, None
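The same padding can also be spelled with repeat instead of cycle, which reads slightly more directly (an endless supply of one filler value); a minor variant of the above:
from itertools import islice, chain, repeat

# chain the short sequence with an endless stream of fillers,
# then cut off exactly as many items as there are targets
a, b, c = islice(chain([1], repeat(None)), 3)
a, b, c
# 1, None, None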
Having trouble iterating over tuples such as this:
t = ('a', 'b', {'c': 'd'})

for a, b, c in t:
    print(a, b, c)    # ValueError: not enough values to unpack (expected 3, got 1)

for a, b, *c in t:
    print(a, b, c)    # ValueError: not enough values to unpack (expected at least 2, got 1)

for a, b, **c in t:
    print(a, b, c)    # SyntaxError (can't do **c)
Anyone know how I can preserve the dictionary value? I would like to see a='a', b='b', and c={'c':'d'}
You'd need to put t inside some other iterable container:
for a, b, c in [t]:
    print(a, b, c)
The problem with your attempts is that each one iterates over a single element of t and tries to unpack that, e.g. the first turn of the loop tries to unpack 'a' into three places (a, b, and c).
Obviously, it's also probably better to just unpack directly (no loop required):
a, b, c = t
print(a, b, c)
Why are you iterating at all when it's a single tuple? Just unpack the single tuple, if that's what you need to do:
a, b, c = t
print(a, b, c)
Or, if it's just printing you want to do, unpack in the call itself:
print(*t)
Try this:
for a, b, c in [t]:
    print(a, b, c)
Putting t inside a list will allow you to unpack it.
You can use tuple unpacking with multiple assignment:
a, b, c = t
print(a, b, c)
Try this:
for x in t:
    print(x)
x will take all the values in t iteratively, so 'a', then 'b', and finally {'c':'d'}.
And to print exactly a = 'a' etc., you can do:
for param, val in zip(["a", "b", "c"], t):
    print(param, "=", val)
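If you want the names and values paired up as a mapping rather than printed, the same zip can feed dict directly, for example:
params = dict(zip(["a", "b", "c"], t))
print(params)   # {'a': 'a', 'b': 'b', 'c': {'c': 'd'}}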
Suppose I have a list X that contains a bunch of different items, and I am testing whether it contains any of the following: (a, b, c).
If a occurs vastly more often than b, which in turn is more common than c, is there a way to force
any(True for v in X if v in (a, b, c))
to check for a first so that it can return faster?
if a in X or b in X or c in X:
runs much faster than the any() version, but it's messy and not extensible.
any(v in X for v in (a, b, c))
You've put together your any in a pretty weird way; the version above gets the effect you want.
If you wanted to do the checks in the way your existing code does it (for example, if earlier elements of X were more likely to match), it'd be cleaner to do
any(v in (a, b, c) for v in X)
If instead of 3 elements in (a, b, c), you have quite a lot, it'd be faster to use a set:
not {a, b, c}.isdisjoint(X)
Did you try?
any(True for v in (a, b, c) if v in X)
That form seems more comparable to your faster example.
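If in doubt, you can verify the evaluation order yourself; the checked helper below is purely illustrative and just logs each membership test before performing it:
X = ["a"]

def checked(v):
    print("checking", v)
    return v in X

# any() short-circuits, so only "checking a" is printed
print(any(checked(v) for v in ("a", "b", "c")))   # True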
This is extremely likely to be a duplicate of something, but my search fu is failing me.
It is known that tuples are immutable, so you can't really change them. Sometimes, however, it comes in handy to do something to the effect of changing, say, (1, 2, "three") to (1, 2, 3), perhaps in a similar vein to the Haskell record update syntax. You wouldn't actually change the original tuple, but you'd get a new one that differs in just one (or more) elements.
A way to go about doing this would be:
elements = list(old_tuple)
elements[-1] = do_things_to(elements[-1])
new_tuple = tuple(elements)
I feel, however, that converting the tuple to a list kind of defeats the purpose of using the tuple type for old_tuple to begin with: if you were using a list instead, you wouldn't have had to build a throw-away list copy of the tuple in memory for each operation.
If you were to change, say, just the 3rd element of a tuple, you could also do this:
def update_third_element(a, b, c, *others):
    c = do_things_to(c)
    return (a, b, c) + others   # note: tuple(a, b, c, *others) would be a TypeError

new_tuple = update_third_element(*old_tuple)
This would resist changes in the number of elements in the tuple better than the naive approach:
a, b, c, d, e, f, g, h, i, j = old_tuple
c = do_things_to(c)
new_tuple = (a, b, c, d, e, f, g, h, j, j) # whoops
...but it doesn't work if what you want to change is the last, or the n-th-to-last, element. It also creates a throw-away tuple (others), and it forces you to name all elements up to the n-th.
Is there a better way?
I would use collections.namedtuple instead:
>>> from collections import namedtuple
>>> class Foo(namedtuple("Foo", ["a", "b", "c"])):
...     pass
...
>>> f = Foo(1, 2, 3)  # or Foo(a=1, b=2, c=3)
>>> f._replace(a=5)
Foo(a=5, b=2, c=3)
namedtuples also support indexing so you can use them in place of plain tuples.
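For example, indexing and unpacking work exactly as with a plain tuple:
>>> f[0], f[2]
(1, 3)
>>> a, b, c = f
>>> b
2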
If you must use a plain tuple, just use a helper function:
>>> def updated(tpl, i, val):
...     return tpl[:i] + (val,) + tpl[i + 1:]
...
>>> tpl = (1, 2, 3)
>>> updated(tpl, 1, 5)
(1, 5, 3)
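Note that this also covers the "change the last element" case from the question, as long as you compute a non-negative index (tpl[i + 1:] misbehaves for i = -1, since -1 + 1 wraps around to 0):
>>> updated(tpl, len(tpl) - 1, 99)
(1, 2, 99)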
While reading data from an ASCII file, I find myself doing something like this:
(a, b, c1, c2, c3, d, e, f1, f2) = (float(x) for x in line.strip().split())
c = (c1, c2, c3)
f = (f1, f2)
If I have a determinate number of elements per line (which I do)¹ and only one multi-element entry to unpack, I can use something like (a, b, *c, d, e) = ... (extended iterable unpacking).
Even if I don't, I can of course replace one of the two multi-element entries in the example above with a starred component: (a, b, *c, d, e, f1, f2) = ....
As far as I can tell, the itertools are not of immediate use here.
Are there any alternatives to the three-line code above that may be considered "more pythonic" for a reason I'm probably not aware of?
¹It's determinate but still varies per line; the pattern is too complicated for numpy's loadtxt or genfromtxt.
If you use such statements really often and want maximum flexibility and code reuse, I'd propose creating a small function for it. Just put it into some module and import it (you can even import the script below).
For usage examples, see the if __name__ == "__main__" block. The trick is to use a list of group ids to group the values of t together. The length of this id list should be at least the length of t.
I will only explain the main concepts; if anything is unclear, just ask.
I use groupby from itertools. Even though it might not be obvious how it applies here, I hope it becomes clear soon.
As the key function I use a function created dynamically by a factory function. The main concept here is closures: the list of group ids is "attached" to the inner function get_group. Thus:
The list is specific to each call to extract_groups_from_iterable; you can use it multiple times, and no globals are involved.
The state of this list is shared between subsequent calls to the same instance of get_group (remember: functions are objects, too!), so there are two instances of get_group over the execution of the script.
Besides this, there is a simple helper to create either lists or scalars from the groups returned by groupby.
That's it.
from itertools import groupby

def extract_groups_from_iterable(iterable, group_ids):
    return [_make_list_or_scalar(g)
            for k, g in groupby(iterable, _get_group_id_provider(group_ids))]

def _get_group_id_provider(group_ids):
    # pop one id per value; runs of equal ids end up in the same group
    def get_group(value, group_ids=group_ids):
        return group_ids.pop(0)
    return get_group

def _make_list_or_scalar(iterable):
    list_ = list(iterable)
    return list_ if len(list_) != 1 else list_[0]

if __name__ == "__main__":
    t1 = range(9)
    group_ids1 = [1, 2, 3, 4, 5, 5, 6, 7, 8]
    a, b, c, d, e, f, g, h = extract_groups_from_iterable(t1, group_ids1)
    for varname in "abcdefgh":
        print(varname, globals()[varname])
    print()

    t2 = range(15)
    group_ids2 = [1, 2, 2, 3, 4, 5, 5, 5, 5, 5, 6, 6, 6, 7, 8]
    a, b, c, d, e, f, g, h = extract_groups_from_iterable(t2, group_ids2)
    for varname in "abcdefgh":
        print(varname, globals()[varname])
Output is:
a 0
b 1
c 2
d 3
e [4, 5]
f 6
g 7
h 8
a 0
b [1, 2]
c 3
d 4
e [5, 6, 7, 8, 9]
f [10, 11, 12]
g 13
h 14
Once again, this might seem like overkill, but if it helps you reduce your code, use it.
Why not just slice a tuple?
t = tuple(float(x) for x in line.split())
c = t[2:5] #maybe t[2:-4] instead?
f = t[-2:]
demo:
>>> line = "1 2 3 4 5 6 7 8 9"
>>> t = tuple(float(x) for x in line.split())
>>> c = t[2:5] #maybe t[2:-4] instead?
>>> f = t[-2:]
>>> c
(3.0, 4.0, 5.0)
>>> t
(1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0)
>>> c = t[2:-4]
>>> c
(3.0, 4.0, 5.0)
While we're on the topic of being Pythonic: line.strip().split() can always be safely written as line.split() where line is a string, because split strips the whitespace for you when you don't give it any arguments.
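For example:
>>> "  1 2   3  ".split()
['1', '2', '3']
>>> "  1 2   3  ".strip().split()
['1', '2', '3']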