Is there a more efficient, simpler way to create a tuple of lists of length 'n'?
So in Python, if I wanted to create a list of lists I could do something like:
[[] for _ in range(list_length)]
And to create a tuple of lists of a similar nature I could technically write:
tuple([[] for _ in range(list_length)])
But is this computationally inefficient? Could it be written in a nicer way?
Warning: for anyone else who thought it was a good idea to put mutable objects such as lists inside an (immutable) tuple: in my case it actually turned out to be generally faster to just use a list (a list of lists).
Use a generator expression instead of a list comprehension:
tuple([] for _ in range(list_length))
Try this:
result = (elements,) * list_length
Be aware that this repeats a reference to the same object, and avoid naming the variable tuple, which shadows the built-in.
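A small demonstration of the difference, since repetition with `*` copies references rather than the lists themselves:

```python
# A tuple of n independent empty lists vs. n references to one list.
n = 3

independent = tuple([] for _ in range(n))  # each [] is a fresh list
independent[0].append("x")
print(independent)  # (['x'], [], [])

aliased = ([],) * n  # the same list object repeated n times
aliased[0].append("x")
print(aliased)  # (['x'], ['x'], ['x'])
```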
Related
I'm trying to figure out what is the pythonic way to unpack an iterator inside of a list.
For example:
my_iterator = zip([1, 2, 3, 4], [1, 2, 3, 4])
I have come up with the following ways to unpack my iterator inside of a list:
1)
my_list = [*my_iterator]
2)
my_list = [e for e in my_iterator]
3)
my_list = list(my_iterator)
No. 1) is my favorite way to do it since it is the least code, but I'm wondering if it is also the Pythonic way. Or maybe there is another way to achieve this, besides those 3, which is the Pythonic way?
This might be a repeat of Fastest way to convert an iterator to a list, but your question is a bit different since you ask which is the most Pythonic. The accepted answer there is list(my_iterator) over [e for e in my_iterator] because the former runs in C under the hood. One commenter suggests [*my_iterator] is faster than list(my_iterator), so you might want to test that. My general vote is that they are all equally Pythonic, so I'd go with whichever is faster for your use case. It's also possible that the older answer is out of date.
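If performance matters, a quick micro-benchmark settles it for your environment. A sketch (absolute timings vary by Python version and machine, so treat the numbers as indicative only):

```python
# Time the three iterator-to-list conversions on a fresh iterator each run.
import timeit

setup = "data = list(range(1000))"
for label, stmt in [
    ("list(it)", "list(iter(data))"),
    ("[*it]", "[*iter(data)]"),
    ("[e for e in it]", "[e for e in iter(data)]"),
]:
    t = timeit.timeit(stmt, setup=setup, number=10_000)
    print(f"{label:18s} {t:.3f}s")
```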
After exploring the subject further, I've come to some conclusions.
There should be one-- and preferably only one --obvious way to do it
(zen of python)
Deciding which option is the "pythonic" one should take into consideration some criteria :
how explicit,
simple,
and readable it is.
And the obvious "pythonic" option winning in all criteria is option number 3):
my_list = list(my_iterator)
Here is why it is "obvious" that option 3) is the Pythonic one:
Option 3) is closest to natural language, making you 'instantly'
think of what the output is.
Option 2) (a list comprehension): if you see that line of code for
the first time, it takes a little more reading and attention. For
example, I use a list comprehension when I want to add extra steps
(calling a function on the iterated elements, or filtering with an
if statement), so when I see a list comprehension I check for any
possible function call inside or for any if statement.
Option 1) (unpacking using *): the asterisk operator can be a bit
confusing if you don't use it regularly; there are 4 cases for using
the asterisk in Python:
For multiplication and power operations.
For repeating sequence-type containers.
For using the variadic arguments. (so-called “packing”)
For unpacking the containers.
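The four cases above can be sketched in a few lines:

```python
# 1. Multiplication and power operations
assert 3 * 4 == 12 and 2 ** 10 == 1024

# 2. Repeating sequence-type containers
assert [0] * 3 == [0, 0, 0]

# 3. Packing variadic arguments into a tuple
def total(*args):
    return sum(args)
assert total(1, 2, 3) == 6

# 4. Unpacking containers
pair = (1, 2)
assert [*pair, 3] == [1, 2, 3]
```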
Another good argument is the Python docs themselves. I did some statistics to check which option the docs choose: I picked 4 built-in iterators, plus everything from the itertools module, and looked at how they are unpacked into a list:
map
range
filter
enumerate
everything in itertools
After exploring the docs I found 0 iterators unpacked into a list using options 1) or 2), and 35 using option 3).
Conclusion:
The pythonic way to unpack an iterator inside of a list is: my_list = list(my_iterator)
While the unpacking operator * is not often used for unpacking a single iterable into a list (therefore [*it] is a bit less readable than list(it)), it is handy and more Pythonic in several other cases:
1. Unpacking an iterable into a single list / tuple / set, adding other values:
mixed_list = [a, *it, b]
This is more concise and efficient than
mixed_list = [a]
mixed_list.extend(it)
mixed_list.append(b)
2. Unpacking multiple iterables + values into a list / tuple / set
mixed_list = [*it1, *it2, a, b, ... ]
This is similar to the first case.
3. Unpacking an iterable into a list, excluding elements
first, *rest = it
This extracts the first element of it into first and unpacks the rest into a list. One can even do
_, *mid, last = it
This dumps the first element of it into a don't-care variable _, saves last element into last, and unpacks the rest into a list mid.
4. Nested unpacking of multiple levels of an iterable in one statement
it = (0, range(5), 3)
a1, (*a2,), a3 = it # Unpack the second element of it into a list a2
e1, (first, *rest), e3 = it # Separate the first element from the rest while unpacking it[1]
This can also be used in for statements:
from itertools import groupby
s = "Axyz123Bcba345D"
for k, (first, *rest) in groupby(s, key=str.isalpha):
...
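Running that loop on the string above shows the groups it produces: each run of letters or digits is split into its first character and the rest.

```python
# groupby splits the string into runs of letters / non-letters;
# (first, *rest) then separates each group's head from its tail.
from itertools import groupby

s = "Axyz123Bcba345D"
for k, (first, *rest) in groupby(s, key=str.isalpha):
    print(k, first, rest)
# True A ['x', 'y', 'z']
# False 1 ['2', '3']
# True B ['c', 'b', 'a']
# False 3 ['4', '5']
# True D []
```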
If you're interested in the least amount of typing possible, you can actually do one character better than my_list = [*my_iterator] with iterable unpacking:
*my_list, = my_iterator
or (although this only equals my_list = [*my_iterator] in the number of characters):
[*my_list] = my_iterator
(Funny how it has the same effect as my_list = [*my_iterator].)
For the most Pythonic solution, however, my_list = list(my_iterator) is clearly the most readable of all, and should therefore be considered the most Pythonic.
I tend to use zip if I need to convert a list to a dictionary or use it as a key-value pair in a loop or list comprehension.
However, if this is only an illustration of creating an iterator, I will definitely vote for #3 for clarity.
My question is the following:
I'm new to Python programming and I'm a little confused about the correct use of lists.
When I want for example to create a list of 10 elements I am doing something like:
list = [None for x in range(10)]
And when I have something to put on it, I do:
list[lastPosition] = data
But after some research I noticed that this use of lists is neither efficient nor recommended. But how else could a list be initialized before I have all the data? My idea is to have a list (something like Java's Collections) initialized, and to fill it as the data comes.
I hope my question is clear enough.
If you don't know in advance how many elements will be in the list,
then don't try to allocate 10 elements. Start with an empty list and use the append function to append to it:
lst = []
...
lst.append(value)
If for some reason you really need to initialize a list to a constant size,
and the initial elements are immutable values,
then you can do that like this:
lst = [None] * 10
If the initial elements are mutable values,
you most probably don't want to use this technique,
because all the elements will all point to the same instance.
In that case your original technique is mostly fine as it was:
lst = [SomeClass() for _ in range(10)]
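A quick demonstration of why the repetition trick is only safe for immutable initial values: with a mutable value, every slot is the same object.

```python
# [x] * n repeats a reference, not the value itself.
shared = [[]] * 3               # three references to one list
shared[0].append(1)
print(shared)                   # [[1], [1], [1]]

fresh = [[] for _ in range(3)]  # three distinct lists
fresh[0].append(1)
print(fresh)                    # [[1], [], []]
```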
Note that I renamed the variable from list to lst,
because list is a built-in class in Python,
and it's better to avoid such overlaps.
I have a list of lists, let's call it thelist, that looks like this:
[[Redditor(name='Kyle'), Redditor(name='complex_r'), Redditor(name='Lor')],
[Redditor(name='krispy'), Redditor(name='flyry'), Redditor(name='Ooer'), Redditor(name='Iewee')],
[Redditor(name='Athaa'), Redditor(name='The_Dark_'), Redditor(name='drpeterfost'), Redditor(name='owise23'), Redditor(name='invirtibrit')],
[Redditor(name='Dacak'), Redditor(name='synbio'), Redditor(name='Thosee')]]
thelist has 1000 elements (lists). I'm trying to compare each of these lists with every other list pairwise, and get the number of common elements for each pair. The code doing that:
def calculate(list1, list2):
    a = 0
    for i in list1:
        if i in list2:
            a += 1
    return a

for i in range(len(thelist) - 1):
    for j in range(i + 1, len(thelist)):
        print calculate(thelist[i], thelist[j])
My problem is: the calculation in the function is extremely slow, taking 2 or more seconds per list pair depending on their lengths. I'm guessing this has to do with my list structure. What am I missing here?
First I would recommend making your class hashable which is referenced here: What's a correct and good way to implement __hash__()?
You can then make your list of lists a list of sets by doing:
thelist = [set(l) for l in thelist]
Then your function will work much faster!
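A minimal sketch of the set-based version, using plain strings as stand-ins for the (hashable) Redditor objects. `len(s1 & s2)` counts the common elements of two sets directly, and each membership test is O(1) on average instead of a scan of the whole list:

```python
# Stand-in data: strings instead of Redditor objects.
thelist = [
    ["Kyle", "complex_r", "Lor"],
    ["krispy", "Kyle", "Ooer"],
    ["Lor", "Kyle", "drpeterfost"],
]
thesets = [set(l) for l in thelist]  # convert once, up front

for i in range(len(thesets) - 1):
    for j in range(i + 1, len(thesets)):
        # set intersection gives the common elements directly
        print(i, j, len(thesets[i] & thesets[j]))
# 0 1 1
# 0 2 2
# 1 2 1
```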
This is Python 3.
Lets say I have a tuple
tup = (1, 2, 3)
And this tuple is stored in a list:
a = []
a.append(tup)  # a[0] = tup on an empty list would raise an IndexError
I am iterating over the list a. What I need to do is modify the contents of tup. That is, I want to change the values, while keeping it in the list a.
Is this correct?
tmp = list(a[0])
tmp[0] = 0 # Now a[0] = (0, 2, 3)
Furthermore: I am aware tuples are designed to be immutable, and that a list is probably better for tup instead of a tuple. However, I am uncomfortable using append to add elements to the list: the list is storing elements of a fixed size, and a tuple is a better representative of this. I'd rather add things manually to the list like tup[0] = blah than tup.append(blah)
The answer to this is simple. You can't, and it doesn't make any sense to do so. Lists are mutable, tuples are not. If you want the elements of a to be mutable, have them as lists, eg, a = [[1,2,3],...].
If you really wanted to have them as tuples, but change them, you could do something along the lines of a[0] = (0,)+a[0][1:]. This would create a new tuple in the list, and would be less efficient than just using a list of lists.
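A short demonstration of that replace-the-tuple approach:

```python
# Build a new tuple from the old one and rebind the list slot.
a = [(1, 2, 3)]
a[0] = (0,) + a[0][1:]
print(a)  # [(0, 2, 3)]
```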
Furthermore: I am aware tuples are designed to be immutable, and that a list is probably better for tup instead of a tuple.
It is.
However, I am uncomfortable using append to add elements to the list: the list is storing elements of a fixed size, and a tuple is a better representative of this.
I'm not sure what you mean by this. Do you mean that a is a fixed size, or elements of a are a fixed size? In either case, how does this make tuples better?
The confusion here might be that lists in Python are not lists in a computer science sense; they are actually arrays. They are O(1) for retrieval, O(1) for setting elements, and usually O(1) for appends unless they need to be resized.
I'd rather add things manually to the list like tup[0] = blah than tup.append(blah)
You can't do that with a tuple, but you can do either with a list, and they're both around O(1).
If you really want fixed-size, mutable arrays, you could look at numpy, or you could initialize python lists of set sizes.
I currently generate lists using the following loop (T and no_jobs are integers):
row = []
for i in xrange(no_jobs):
    row = row + T * [i]
The first thing I came up with for converting it into a list comprehension statement was:
[T*[i] for i in xrange(no_jobs)]
But this obviously creates a nested list, which is not what I'm looking for. All my other ideas seem a little clunky, so if anyone has a Pythonic and elegant way of creating these types of lists I would be grateful.
Nested loops.
[i for i in xrange(no_jobs) for x in xrange(T)]
But this obviously creates a nested list which is not what I'm looking for.
So just flatten the result. List addition is concatenation, so we can put all the lists together by 'summing' them (with an empty list as an "accumulator").
sum((T*[i] for i in xrange(no_jobs)), [])
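For what it's worth, the nested-loop comprehension, the sum() trick, and itertools.chain.from_iterable all give the same flat list (shown here in Python 3, where range replaces xrange); chain.from_iterable avoids the repeated list copies that sum() performs on long inputs.

```python
from itertools import chain

no_jobs, T = 3, 2

nested_loops = [i for i in range(no_jobs) for x in range(T)]
summed = sum((T * [i] for i in range(no_jobs)), [])
chained = list(chain.from_iterable(T * [i] for i in range(no_jobs)))

# All three flatten to the same result.
assert nested_loops == summed == chained == [0, 0, 1, 1, 2, 2]
```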