Single assignment works like this (there may be better ways):
>>> b = 'bb'
>>> vars()[b] = 10
>>> bb
10
but if I do this:
>>> c = 'cc'
>>> vars()[b,c] = 10,11
it doesn't successfully assign bb and cc.
I don't understand why, nor how best to do this.
Thanks
PS: several people have asked, quite reasonably, why I wanted to do this. I found I was setting up a lot of variables and objects according to options specified by the user. So if the user specified options 2, 3 and 7, I would want to create a2, a3 and a7, plus b2, b3, b7, and so on. It may not be usual practice, but using vars and eval is a very easy and transparent way to do it, requiring simple, concise code:
for i in input_vector: vars()['a' + str(i)] = create_a(i)
for i in input_vector: vars()['b' + str(i)] = create_b(i)
This works for some of the data. The trouble is when I use another function, create_c_and_d. This requires me to compress the above two lines into one function call. If this can be done easily using dictionaries, I am happy to switch to that method. I am new to python so it isn't obvious to me whether it can.
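(For comparison, here is a sketch of what the dictionary approach could look like. create_a, create_b and create_c_and_d stand in for the real functions, so their bodies here are invented.)

```python
# Hypothetical stand-ins for the real factory functions.
def create_a(i):
    return 'a-object-%s' % i

def create_b(i):
    return 'b-object-%s' % i

def create_c_and_d(i):
    return 'c-object-%s' % i, 'd-object-%s' % i

input_vector = [2, 3, 7]

a = {i: create_a(i) for i in input_vector}
b = {i: create_b(i) for i in input_vector}

# Functions returning two values are no harder: unpack into two dicts.
c, d = {}, {}
for i in input_vector:
    c[i], d[i] = create_c_and_d(i)

print(a[2])  # a-object-2 -- plays the role of a variable named a2
```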
Because b,c is a tuple, you're actually assigning to the key ('bb', 'cc').
>>> vars()[b,c] = 10,11
>>> vars()[('bb', 'cc')]
(10, 11)
>>> x = b,c
>>> type(x)
<type 'tuple'>
I don't understand why, nor how best to do this.
Well, I have to say, uh.. the best way would be to not do it. At all.
Just use a dict like it's supposed to be used:
d = {}
d['aa'] = 10
d['bb'] = 11
Anyway, to answer your question: you're doing the tuple unpacking in the wrong place. Or rather, you're not unpacking at all; when you subscript a dict with b,c, you're using the tuple (b, c) as the key. Instead, unpack on the left-hand side:
vars()[b], vars()[c] = 10,11
I'll again recommend that you not do this and just use dicts to map strings (or whatever hashable datatype) to values. Dynamically naming variables is not good practice.
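For completeness, a minimal sketch of that dict-based approach, including the multiple assignment the question was after:

```python
d = {}
d['bb'], d['cc'] = 10, 11   # ordinary tuple unpacking; no vars() tricks
print(d['bb'], d['cc'])     # 10 11
```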
Related
How would I print a list of strings as their individual variable values?
For example, take this code:
a=1
b=2
c=3
text="abc"
splittext = list(text)  # split into individual characters
print(splittext)
How would I get this to output 123?
You could do this using eval, but it is very dangerous:
>>> ''.join(map(lambda x: str(eval(x)), text))
'123'
eval (perhaps they had better rename it to evil; no hard feelings, just take it as a warning) evaluates a string as if you had written the code there yourself. So eval('a') will fetch the value of a. The problem is that an attacker could perhaps find some trick to inject arbitrary code this way, and thus hack your server, program, etc. Furthermore, it can change the state of your program by accident. So a piece of advice: "Do not use it unless you have absolutely no other choice" (which is not the case here).
Or a less dangerous variant:
>>> ''.join(map(lambda x: str(globals()[x]), text))
'123'
in case these are global variables (you can use locals() for local variables).
This is ugly and dangerous, because you do not know in advance what a, b and c are, nor do you have much control over what parts of the program can set these variables, so it can still allow code injection. As suggested in the comments on your question, you are better off using a dictionary for this.
Dictionary approach
A better way to do this is using a dictionary (as #Ignacio Vazquez-Abrams was saying):
>>> dic = {'a':1,'b': 2,'c':3}
>>> ''.join(map(lambda x: str(dic[x]), text))
'123'
List instead of string
In the above we converted the content to a string using str in the lambda-expression and used ''.join to concatenate these strings. If you are however interested in an array of "results", you can drop these constructs. For instance:
>>> map(lambda x: dic[x], text)
[1, 2, 3]
The same works for all the above examples.
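One caveat: the transcripts above show Python 2, where map() returns a list. In Python 3, map() returns a lazy iterator, so a sketch of the same idea would wrap it in list():

```python
dic = {'a': 1, 'b': 2, 'c': 3}
text = "abc"
# In Python 3, map() is lazy, so materialize it with list():
print(list(map(lambda x: dic[x], text)))   # [1, 2, 3]
print(''.join(str(dic[x]) for x in text))  # 123
```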
EDIT
I later noticed that you actually want to print the values; this can easily be achieved with a simple loop:
for x in text:
    print dic[x]
Again, you can use the same technique for the cases above.
In case you want to print out the value of the variables named in the string you can use locals (or globals, depending on what/where you want them)
>>> a=1
>>> b=2
>>> c=3
>>> s='abc'
>>> for v in s:
... print(locals()[v])
...
1
2
3
or, if you use separators in the string
>>> s='a,b,c'
>>> for v in s.split(','):
... print(locals()[v])
...
1
2
3
I have three indexes, x, y and t, and a three-dimensional matrix (it's actually a netCDF variable) in Python, but the order in which the indexes have to be applied to the matrix changes. So, to make it easily user-definable, I am trying to get the specific element I want with
order='t,x,y' # or 't,y,x' or anything like this
elem=matrix[eval(order)]
but this fails with TypeError: illegal subscript type. When I try
a=eval(order)
print type(a)
it tells me that a is a tuple, so I'm guessing this is the source of my problem. But why is a a tuple? Any ideas on how to do this? The documentation wasn't helpful.
Also, somehow doing
a=eval(order)
i,j,k=a
elem=matrix[i,j,k]
doesn't work either. Not sure as to why.
EDIT
People are misunderstanding what I'm trying to do here apparently, so let me explain a little better. This is inside a function where the values x, y, t are already defined. However, the order in which to apply those indexes should be provided by the user. So the call would be something like func(order='t,x,y'). That's at least the only way I figured the user could pass the order of the indexes as a parameter. Sorry for the confusion.
Why is a a tuple?
Because it is one: leave eval() out of the picture and you get the same result just by using commas:
>>> a = 1, 2, 3
>>> type(a)
<type 'tuple'>
Do this instead:
Give the order directly as a list; lists maintain order:
def some_method(order=None):
    order = order or [t, y, x]
    # t, y, x have to be known outside this scope
    ...
If your t, x and y are only known within the scope, you of course have to give the order in a symbolic way, which brings us back to eval. Here you assume knowledge about the inner state of your function:
def some_method(order='t,x,y'):
    order = eval(order)
    ...
    elem = matrix[order[0]][order[1]][order[2]]
EDIT
wim's answer shows how to avoid eval(), which should be preferred, at least when the input to this function might come from an untrusted source, because eval() will gladly run arbitrary Python code.
You should try to avoid using eval for this. It's hacky and ugly, and it's easily possible to avoid it just by making a lookup dict.
>>> order = 'x,y,t' # this is specified outside your function
You can still pass this string into your function if you want:
>>> # this is inside your function:
>>> t,x,y = 0,1,2 # I don't know what your actual values are..
>>> lookup = {'t': t, 'x': x, 'y': y} # make this inside your function
>>> tuple_ = tuple(lookup[k] for k in order.split(','))
>>> tuple_
(1, 2, 0)
Now use the tuple_ to index your array.
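To make that concrete, here is a runnable sketch using a plain nested list as a stand-in for the netCDF variable (with a NumPy array you could index with the tuple directly, as matrix[tuple_]):

```python
t, x, y = 0, 1, 2                 # example index values
lookup = {'t': t, 'x': x, 'y': y}
order = 'x,y,t'                   # user-supplied axis order
tuple_ = tuple(lookup[k] for k in order.split(','))  # (1, 2, 0)

# 3x3x3 matrix where matrix[i][j][k] == 100*i + 10*j + k
matrix = [[[100 * i + 10 * j + k for k in range(3)]
           for j in range(3)] for i in range(3)]

elem = matrix
for i in tuple_:                  # apply the indices one axis at a time
    elem = elem[i]
print(elem)  # 120, i.e. matrix[1][2][0]
```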
I think what you're looking for is called "slicing", or even "extended slicing", depending on the data format you're slicing. Oh, and you don't need eval for that at all, tuples would do just fine.
See also this question:
Explain Python's slice notation
Complete rookie with Python here. I'm wondering what type of data structure to use and how to approach what I'm trying to do.
I have a few functions that all return an integer. I need to store these values and compare them to their previous values, to see if they've changed when calling the functions a second time. How should I approach this? Pointing me in the right direction is much appreciated.
All I have currently is calling the three functions in succession:
self.checkAlarmVolume()
self.checkSpeakerVolume()
self.checkMusicVolume()
The simplest way would be:
a1 = self.checkAlarmVolume()
...
a2 = self.checkAlarmVolume()
if a1 != a2:
    ...
Another way, if you want to store many values, you could use a list:
a_list = []
a_list.append(self.checkAlarmVolume())
...
a_list.append(self.checkAlarmVolume())
if a_list[0] != a_list[1]:
    ...
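If there are more than a couple of such readings, a dict keyed by name keeps the comparison generic. A sketch (the Volumes class and its check* methods are invented here just to make it runnable):

```python
class Volumes:
    # Hypothetical object with the three check* methods from the question.
    def __init__(self):
        self.alarm, self.speaker, self.music = 3, 5, 7
    def checkAlarmVolume(self):
        return self.alarm
    def checkSpeakerVolume(self):
        return self.speaker
    def checkMusicVolume(self):
        return self.music

def snapshot(v):
    # One dict per round of readings keeps the comparison uniform.
    return {'alarm': v.checkAlarmVolume(),
            'speaker': v.checkSpeakerVolume(),
            'music': v.checkMusicVolume()}

v = Volumes()
before = snapshot(v)
v.music = 9                      # something changes between the two calls
after = snapshot(v)
changed = [k for k in before if before[k] != after[k]]
print(changed)  # ['music']
```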
This question already has answers here:
Why can't I iterate twice over the same iterator? How can I "reset" the iterator or reuse the data?
If I create two lists and zip them
a=[1,2,3]
b=[7,8,9]
z=zip(a,b)
Then I typecast z into two lists
l1=list(z)
l2=list(z)
Then the contents of l1 turn out to be fine [(1,7),(2,8),(3,9)], but the contents of l2 is just [].
I guess this is the general behavior of Python with regards to iterables. But as a novice programmer migrating from the C family, this doesn't make sense to me. Why does it behave in such a way? And is there a way to get past this problem?
I mean, yeah in this particular example, I can just copy l1 into l2, but in general is there a way to 'reset' whatever Python uses to iterate 'z' after I iterate it once?
There's no way to "reset" a generator. However, you can use itertools.tee to "copy" an iterator.
>>> z = zip(a, b)
>>> zip1, zip2 = itertools.tee(z)
>>> list(zip1)
[(1, 7), (2, 8), (3, 9)]
>>> list(zip2)
[(1, 7), (2, 8), (3, 9)]
This involves caching values, so it only makes sense if you're iterating through both iterables at about the same rate. (In other words, don't use it the way I have here!)
Another approach is to pass around the generator function, and call it whenever you want to iterate it.
def gen(x):
    for i in range(x):
        yield i ** 2

def make_two_lists(gen):
    return list(gen()), list(gen())
But now you have to bind the arguments to the generator function when you pass it. You can use lambda for that, but a lot of people find lambda ugly. (Not me though! YMMV.)
>>> make_two_lists(lambda: gen(10))
([0, 1, 4, 9, 16, 25, 36, 49, 64, 81], [0, 1, 4, 9, 16, 25, 36, 49, 64, 81])
I hope it goes without saying that under most circumstances, it's better just to make a list and copy it.
Also, as a more general way of explaining this behavior, consider this. The point of a generator is to produce a series of values, while maintaining some state between iterations. Now, at times, instead of simply iterating over a generator, you might want to do something like this:
z = zip(a, b)
while some_condition():
    fst = next(z, None)
    snd = next(z, None)
    do_some_things(fst, snd)
    if fst is None and snd is None:
        do_some_other_things()
Let's say this loop may or may not exhaust z. Now we have a generator in an indeterminate state! So it's important, at this point, that the behavior of a generator is restrained in a well-defined way. Although we don't know where the generator is in its output, we know that a) all subsequent accesses will produce later values in the series, and b) once it's "empty", we've gotten all the items in the series exactly once. The more ability we have to manipulate the state of z, the harder it is to reason about it, so it's best that we avoid situations that break those two promises.
Of course, as Joel Cornett points out below, it is possible to write a generator that accepts messages via the send method; and it would be possible to write a generator that could be reset using send. But note that in that case, all we can do is send a message. We can't directly manipulate the generator's state, and so all changes to the state of the generator are well-defined (by the generator itself -- assuming it was written correctly!). send is really for implementing coroutines, so I wouldn't use it for this purpose. Everyday generators almost never do anything with values sent to them -- I think for the very reasons I give above.
If you need two copies of the list, which you do if you need to modify them, then I suggest you make the list once, and then copy it:
a=[1,2,3]
b=[7,8,9]
l1 = list(zip(a,b))
l2 = l1[:]
Just create a list out of your iterator using list() once, and use it afterwards.
It just happens that zip returns an iterator, which you can only traverse once.
You can iterate a list as many times as you want.
No, there is no way to "reset them".
Generators generate their output once, one by one, on demand, and then are done when the output is exhausted.
Think of them like reading a file, once you are through, you'll have to restart if you want to have another go at the data.
If you need to keep the generator's output around, then consider storing it, for instance, in a list, and subsequently re-use it as often as you need. (Somewhat similar to the trade-off in Python 2 between xrange(), which produces its values lazily, and range(), which built a whole list of items in memory.)
Updated: corrected terminology, temporary brain-outage ...
Yet another explanation. As a programmer, you probably understand the difference between classes and instances (i.e. objects). The official docs list zip() as a built-in function, but it is actually a class, so the name zip refers to the class itself. You can even check in interactive mode:
>>> zip
<class 'zip'>
Classes are types, so the following should also be clear:
>>> type(zip)
<class 'type'>
Your z is an instance of that class, and you can think of calling zip() as calling the class constructor:
>>> a = [1, 2, 3]
>>> b = [7, 8, 9]
>>> z = zip(a, b)
>>> z
<zip object at 0x0000000002342AC8>
>>> type(z)
<class 'zip'>
The z is an iterator object that holds iterators for a and b internally. Because of its generic implementation, z (i.e. the zip class) has no way to reset the iterators over a, b, or whatever sequences were passed in, and therefore no way to reset itself. The cleanest way to solve your concrete problem is to copy the list (as you mentioned in the question and as Lennart Regebro suggests). Another understandable way is to call zip(a, b) twice, constructing two z-like iterators that behave the same way from the start:
>>> lst1 = list(zip(a, b))
>>> lst2 = list(zip(a, b))
However, this is not guaranteed to give identical results in general. Think of a or b being one-off sequences generated from some current conditions (say, temperatures read from several thermometers).
I have two Python lists of dictionaries, entries9 and entries10. I want to compare the items and write joint items to a new list called joint_items. I also want to save the unmatched items to two new lists, unmatched_items_9 and unmatched_items_10.
This is my code. Getting the joint_items and unmatched_items_9 (in the outer list) is quite easy: but how do I get unmatched_items_10 (in the inner list)?
for counter, entry1 in enumerate(entries9):
    match_found = False
    for counter2, entry2 in enumerate(entries10):
        if match_found:
            continue
        if entry1[a] == entry2[a] and entry1[b] == entry2[b]:  # the dictionaries only have some keys in common, but we care about a and b
            match_found = True
            joint_item = entry1
            joint_items.append(joint_item)
            # entries10.remove(entry2)  # Tried this originally, but realised it messes with the original list object!
    if match_found:
        continue
    else:
        unmatched_items_9.append(entry1)
Performance is not really an issue, since it's a one-off script.
The equivalent of what you're currently doing, but the other way around, is:
unmatched_items_10 = [d for d in entries10 if d not in entries9]
While more concise than your way of coding it, this has the same performance problem: it will take time proportional to the number of items in each list. If the lengths you're interested in are about 9 or 10 (as those numbers seem to indicate), no problem.
But for lists of substantial length you can get much better performance by sorting the lists and "stepping through" them "in parallel" so to speak (time proportional to N log N where N is the length of the longer list). There are other possibilities, too (of growing complication;-) if even this more advanced approach is not sufficient to get you the performance you need. I'll refrain from suggesting very complicated stuff unless you indicate that you do require it to get good performance (in which case, please mention the typical lengths of each list and the typical contents of the dicts that are their items, since of course such "details" are the crucial consideration for picking algorithms that are a good compromise between speed and simplicity).
Edit: the OP edited his Q to show that what he cares about, for any two dicts d1 and d2 (one from each of the two lists), is not whether d1 == d2 (which is what the in operator checks), but rather whether d1[a]==d2[a] and d1[b]==d2[b]. In this case the in operator cannot be used (well, not without some funky wrapping, but that's a complication best avoided when feasible;-), but the all builtin replaces it handily:
unmatched_items_10 = [d for d in entries10
                      if all(d[a] != d2[a] or d[b] != d2[b] for d2 in entries9)]
I have switched the logic around (to != and or, per De Morgan's laws) since we want the dicts that are not matched. However, if you prefer:
unmatched_items_10 = [d for d in entries10
                      if not any(d[a] == d2[a] and d[b] == d2[b] for d2 in entries9)]
Personally, I don't like if not any and if not all, for stylistic reasons, but the maths are impeccable (by what the Wikipedia page calls the Extensions to De Morgan's laws, since any is an existential quantifier and all a universal quantifier, so to speak;-). Performance should be just about equivalent (but then, the OP did clarify in a comment that performance is not very important for them on this task).
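A runnable sketch of that comprehension, assuming for illustration that the two shared keys are the literal strings 'a' and 'b' (the question uses bare names a and b for them):

```python
entries9 = [{'a': 1, 'b': 1, 'x': 'p'}, {'a': 2, 'b': 2, 'x': 'q'}]
entries10 = [{'a': 1, 'b': 1, 'y': 'r'}, {'a': 3, 'b': 3, 'y': 's'}]

# A dict from entries10 is unmatched iff it differs from every dict in
# entries9 on at least one of the two keys.
unmatched_items_10 = [d for d in entries10
                      if all(d['a'] != d2['a'] or d['b'] != d2['b']
                             for d2 in entries9)]
print(unmatched_items_10)  # [{'a': 3, 'b': 3, 'y': 's'}]
```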
The Python stdlib has a class, difflib.SequenceMatcher that looks like it can do what you want, though I don't know how to use it!
You may consider using sets and their associated methods, like intersection. You will, however, need to turn your dictionaries into immutable data so that you can store them in a set (e.g. strings). Would something like this work?
a = set(str(x) for x in entries9)
b = set(str(x) for x in entries10)
# You'll have to change the above lines if you only care about _some_ of the keys
joint_items = a.intersection(b)  # note: intersection, not union
unmatched_items_9 = a - b
unmatched_items_10 = b - a
# Now you can turn them back into dicts:
joint_items = [eval(i) for i in joint_items]
unmatched_items_9 = [eval(i) for i in unmatched_items_9]
unmatched_items_10 = [eval(i) for i in unmatched_items_10]
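A variant of the set idea that avoids the str()/eval() round-trip entirely: key each dict by the fields you care about (assumed here to be 'a' and 'b') and keep the original dicts intact:

```python
entries9 = [{'a': 1, 'b': 1, 'x': 'p'}, {'a': 2, 'b': 2, 'x': 'q'}]
entries10 = [{'a': 1, 'b': 1, 'y': 'r'}, {'a': 3, 'b': 3, 'y': 's'}]

def key(d):
    # Only the fields that define a "match" go into the key.
    return (d['a'], d['b'])

keys9 = {key(d) for d in entries9}
keys10 = {key(d) for d in entries10}

joint_items = [d for d in entries9 if key(d) in keys10]
unmatched_items_9 = [d for d in entries9 if key(d) not in keys10]
unmatched_items_10 = [d for d in entries10 if key(d) not in keys9]
print(unmatched_items_10)  # [{'a': 3, 'b': 3, 'y': 's'}]
```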