Python Permutation code

I started learning programming 2 months ago; in one of my little projects I encountered the need to generate permutations of a list of objects.
I knew that I'd find how to do this if I just searched for it, but I wanted to make one of my own, so I worked it up and made my own permutation generator code:
def perm(lst, c = [], x = 0):
    i = -1
    g = len(lst) - 1
    if x == g:
        while i < g:
            i += 1
            if lst[i] in c:
                continue
            c.append(lst[i])
            print(c)
            del c[-1]
            i = g
    else:
        while i < g:
            if x == 0:
                del c[:]
            elif g == x:
                del c[-1]
            elif len(c) > x:
                del c[-1]
                continue
            i += 1
            if lst[i] in c:
                continue
            c.append(lst[i])
            x + 1
            perm(lst, c, x + 1)
This is what it gives if I run it:
perm(range(2))
[0, 1]
[1, 0]
perm([1, 4, 5])
[1, 4, 5]
[1, 5, 4]
[4, 1, 5]
[4, 5, 1]
[5, 1, 4]
[5, 4, 1]
It works as I expected, but when I use bigger lists it takes some time to generate all the permutations of the list.
So all I want is hints on how to improve my code, only hints.
Or, if you can, tell me what I should learn to be able to make a better generator?
Thanks in advance to all the helpers.

Generating permutations is often done recursively. Given a list of 5 items, the permutations can be created by picking each of the 5 elements in turn as the first element of the answer, then for each of them permuting the remaining 4 elements, and appending them together.
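For instance, here is a minimal sketch of that recursive scheme (my own illustrative function, not the asker's code):

def permute(items):
    # base case: zero or one items have exactly one ordering
    if len(items) <= 1:
        return [list(items)]
    result = []
    for i, head in enumerate(items):
        rest = items[:i] + items[i + 1:]  # everything except the chosen head
        for tail in permute(rest):
            result.append([head] + tail)
    return result

print(permute([1, 4, 5]))
# [[1, 4, 5], [1, 5, 4], [4, 1, 5], [4, 5, 1], [5, 1, 4], [5, 4, 1]]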

>>> from itertools import permutations
>>> list(permutations(range(2)))
[(0, 1), (1, 0)]
>>> list(permutations([1, 4, 5]))
[(1, 4, 5), (1, 5, 4), (4, 1, 5), (4, 5, 1), (5, 1, 4), (5, 4, 1)]
In the docs there is equivalent pure-Python code available, for use on legacy versions.
A note re your code: x + 1 doesn't do anything, as you're not assigning the result of that expression to anything.

The best way to understand what is making your code slow is to actually measure it. When you attempt to guess at what will make something fast, it's often wrong. You've got the right idea in that you're noticing that your code is slower and it's time for some improvement.
Since this is a fairly small piece of code, the timeit module will probably be useful. Break the code up into sections, and time them. A good rule of thumb is that it's better to look at an inner loop for improvements, since this will be executed the most times. In your example, this would be the loop inside the perm function.
It is also worth noting that while loops are generally slower than for loops in python, and that list comprehensions are faster than both of these.
Once you start writing longer scripts, you'll want to be aware of the ability to profile in python, which will help you identify where your code is slow. Hopefully this has given you a few places to look.
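For example (the expressions below are just stand-ins for sections of your own code):

import timeit
from itertools import permutations

# time a small statement many times and report the total seconds
print(timeit.timeit("list(permutations(range(7)))",
                    setup="from itertools import permutations",
                    number=100))

# cProfile reports where the time goes, per function call
import cProfile
cProfile.run("list(permutations(range(8)))")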

OK, for large lists, a recursive solution will take more and more time & space, and eventually reach the recursion limit and die. So, on the theoretical side, you could work on ways to save time and space.
Hint: tail-recursive functions (such as the one you've written) can be rewritten as loops
On a more practical side, you may consider the use cases of your function. Maybe there's somebody who doesn't want the whole list of permutations at once, but a single one each time - there's a good opportunity to learn about yield and generators.
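A small sketch of that idea (the function name is mine, and like your code it assumes the list elements are distinct):

def perm_gen(lst, prefix=()):
    # yield complete permutations one at a time instead of printing them all
    if len(prefix) == len(lst):
        yield list(prefix)
        return
    for item in lst:
        if item not in prefix:
            yield from perm_gen(lst, prefix + (item,))

gen = perm_gen([1, 4, 5])
print(next(gen))  # [1, 4, 5] -- only the first permutation has been computed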
Also, for something generally not directly applicable to your question: k-combinations and multisets are closely related to permutations. Perhaps you can extend your function (or write a new one) which will produce the k-combinations and/or multiset the user asks for.

The first thing I notice is that the code is hard to understand. The variable names are completely meaningless, replace them with meaningful names. It also seems like you're using i as a loop index, which is almost always bad style in python, because you can do for item in list:.

This is not really performance related, but there is a glaring bug in the code. Using a list as a default parameter does not do what you think it does - it will create one list object that will be shared by every call to perm(), so the second time you call perm you will have the value in c of whatever it contained when the last call finished. This is a common beginner's error.
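A quick demonstration of the pitfall (illustrative function names):

def f(item, acc=[]):   # the default list is created once, at definition time
    acc.append(item)
    return acc

print(f(1))  # [1]
print(f(2))  # [1, 2] -- the same list again, not a fresh one

def g(item, acc=None): # the usual fix: default to None, create inside the call
    if acc is None:
        acc = []
    acc.append(item)
    return acc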
You also say "when I use bigger lists it takes some time to generate all the permutations of the list". What did you expect? The number of permutations is equal to the factorial of the length of the list, and that grows big fast! For example a list of length 20 will have 2432902008176640000 permutations. Even the most efficient algorithm in the world is "going to take some time" for a list this size, even if it did not run out of memory first. If our hypothetical algorithm generated a billion permutations a second it would still take 77 years to run. Can you be more specific about the length of lists you are using and how long it is taking?
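You can verify that arithmetic yourself:

import math

print(math.factorial(20))          # 2432902008176640000 permutations of 20 items
seconds = math.factorial(20) / 1e9 # at a billion permutations per second
print(seconds / (3600 * 24 * 365)) # roughly 77 years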

Related

How can I shuffle a list with constraint (1 and 2, 3 and 4, 5 and 6 are not adjacent)?

I have 6 test questions that I want to randomize, together with their correct answers. Questions #1 and #2, #3 and #4, #5 and #6 are of the same type. In order not to make the test too easy, I don't want to show #1 and #2 in a row (nor #3 and #4, or #5 and #6, for that matter).
For this purpose, I think I should shuffle the list [1, 2, 3, 4, 5, 6] with this constraint: 1 and 2, 3 and 4, 5 and 6 are not adjacent. For example, [1, 2, 4, 6, 3, 5] is not acceptable because 1 and 2 are next to each other. Then, I want to apply the new order to both the question list and the answer list.
As someone new to programming, I only know how to shuffle a list without constraint, like so:
import random

question = [1, 3, 5, 2, 4, 6]
answer = ['G', 'Y', 'G', 'R', 'Y', 'R']
order = list(zip(question, answer))
random.shuffle(order)
question, answer = zip(*order)
Any help would be appreciated!
Here's a "brute force" approach. It just shuffles the list repeatedly until it finds a valid ordering:
import random

def is_valid(sequence):
    similar_pairs = [(1, 2), (3, 4), (5, 6)]
    return all(
        abs(sequence.index(a) - sequence.index(b)) != 1
        for a, b in similar_pairs
    )

sequence = list(range(1, 7))
while not is_valid(sequence):
    random.shuffle(sequence)
print(sequence)
# One output: [6, 2, 4, 5, 3, 1]
For inputs this small, this is fine. (Computers are fast.) For longer inputs, you'd want to think about doing something more efficient, but it sounds like you're after a simple practical approach, not a theoretically optimal one.
I see two simple ways:
Shuffle the list and accept the shuffle if it satisfies the constraints, else repeat.
Iteratively sample numbers and use the constraints to limit the possible numbers. For example, if you first draw 1 then the second draw can be 3..6. This could also result in a solution that is infeasible so you'll have to account for that.
Draw a graph with your list elements as vertices. If elements u and v can be adjacent in the output list, draw an edge (u,v) between them, otherwise do not.
Now you have to find a Hamiltonian path on this graph. This problem is generally intractable (NP-complete) but if the graph is almost complete (there are few constraints, i.e. missing edges) it can be effectively solved by DFS.
For a small input set like in your example it could be easier to just generate all permutations and then filter out those that violate one of the constraints.
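A rough sketch of that graph search, assuming the same pair constraints as the question (the function name and structure are mine):

import random

def hamiltonian_shuffle(items, forbidden_pairs):
    # edges are implicit: u and v may be adjacent unless they form a forbidden pair
    forbidden = {frozenset(pair) for pair in forbidden_pairs}

    def dfs(path, remaining):
        if not remaining:
            return path
        candidates = list(remaining)
        random.shuffle(candidates)  # randomize so each call can return a different path
        for nxt in candidates:
            if not path or frozenset((path[-1], nxt)) not in forbidden:
                result = dfs(path + [nxt], remaining - {nxt})
                if result is not None:
                    return result
        return None  # dead end: backtrack

    return dfs([], set(items))

print(hamiltonian_shuffle([1, 2, 3, 4, 5, 6], [(1, 2), (3, 4), (5, 6)]))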
You can try this; it should work fine for small lists. As you can see below, I used a list of Python sets for the constraints. The code builds the permutation you require element by element.
Building element by element can lead to an invalid permutation if, at some point, the remaining elements in the list are all ruled out by the constraints.
Example: if the code builds 4, 1, 3, 2, 6, it is forced to try 5 as the last element, but that is invalid, so the function tries to make another permutation.
It is better than the brute-force approach (in terms of performance) of generating a random shuffle and checking if it's valid (the answer given by smarx).
Note: the function would retry forever (via recursion, eventually hitting the recursion limit) if no permutation satisfying the constraints is possible.
import random

def shuffler(dataList, constraints):
    my_data_list = list(dataList)
    shuffledList = [random.choice(dataList)]
    my_data_list.remove(shuffledList[0])
    for i in range(1, len(dataList)):
        prev_ele = shuffledList[i - 1]
        # collect everything that may not follow the previous element
        prev_constraint = set()
        for sublist in constraints:
            if prev_ele in sublist:
                prev_constraint = set.union(prev_constraint, sublist)
        choices = [choice for choice in my_data_list if choice not in prev_constraint]
        if len(choices) == 0:
            print('Trying once more...')
            return shuffler(dataList, constraints)
        curr_ele = random.choice(choices)
        my_data_list.remove(curr_ele)
        shuffledList.append(curr_ele)
    return shuffledList

if __name__ == '__main__':
    dataList = [1, 2, 3, 4, 5, 6]
    constraints = [{1, 2}, {3, 4}, {5, 6}]
    print(shuffler(dataList, constraints))
You could try something like:
shuffle the list
while (list is not good)
    find first invalid question
    swap first invalid question with a different random question
endwhile
I haven't done any timings, but it might run faster than reshuffling the whole list. It partly preserves the valid part before the first invalid question, so it should reach a good ordering faster.
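A possible Python rendering of that pseudocode, reusing the pair constraints from the question (the helper names are mine):

import random

SIMILAR_PAIRS = {frozenset((1, 2)), frozenset((3, 4)), frozenset((5, 6))}

def first_invalid_index(seq):
    # index of the first element sitting next to its forbidden partner, or None
    for i in range(len(seq) - 1):
        if frozenset((seq[i], seq[i + 1])) in SIMILAR_PAIRS:
            return i
    return None

sequence = list(range(1, 7))
random.shuffle(sequence)
while True:
    i = first_invalid_index(sequence)
    if i is None:
        break
    j = random.randrange(len(sequence))
    sequence[i], sequence[j] = sequence[j], sequence[i]
print(sequence)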

Remove duplicates from one Python list, prune other lists based on it

I have a problem that's easy enough to do in an ugly way, but I'm wondering if there's a more Pythonic way of doing it.
Say I have three lists, A, B and C.
A = [1, 1, 2, 3, 4, 4, 5, 5, 3]
B = [1, 2, 3, 4, 5, 6, 7, 8, 9]
C = [1, 2, 3, 4, 5, 6, 7, 8, 9]
# The actual data isn't important.
I need to remove all duplicates from list A, but when a duplicate entry is deleted, I would like the corresponding indexes removed from B and C:
A = [1, 2, 3, 4, 5]
B = [1, 3, 4, 5, 7]
C = [1, 3, 4, 5, 7]
This is easy enough to do with longer code by moving everything to new lists:
new_A = []
new_B = []
new_C = []
for i in range(len(A)):
    if A[i] not in new_A:
        new_A.append(A[i])
        new_B.append(B[i])
        new_C.append(C[i])
But is there a more elegant and efficient (and less repetitive) way of doing this? This could get cumbersome if the number of lists grows, which it might.
Zip the three lists together, uniquify based on the first element, then unzip:
from operator import itemgetter
from more_itertools import unique_everseen
abc = zip(a, b, c)
abc_unique = unique_everseen(abc, key=itemgetter(0))
a, b, c = zip(*abc_unique)
This is a very common pattern. Whenever you want to do anything in lock step over a bunch of lists (or other iterables), you zip them together and loop over the result.
Also, if you go from 3 lists to 42 of them ("This could get cumbersome if the number of lists grows, which it might."), this is trivial to extend:
abc = zip(*list_of_lists)
abc_unique = unique_everseen(abc, key=itemgetter(0))
list_of_lists = zip(*abc_unique)
Once you get the hang of zip, the "uniquify" is the only hard part, so let me explain it.
Your existing code checks whether each element has been seen by searching for each one in new_A. Since new_A is a list, this means that if you have N elements, M of them unique, on average you're going to be doing M/2 comparisons for each of those N elements. Plug in some big numbers, and NM/2 gets pretty big—e.g., 1 million values, a half of them unique, and you're doing 250 billion comparisons.
To avoid that quadratic time, you use a set. A set can test an element for membership in constant, rather than linear, time. So, instead of 250 billion comparisons, that's 1 million hash lookups.
If you don't need to maintain order or decorate-process-undecorate the values, just copy the list to a set and you're done. If you need to decorate, you can use a dict instead of a set (with the key as the dict keys, and everything else hidden in the values). To preserve order, you could use an OrderedDict, but at that point it's easier to just use a list and a set side by side. For example, the smallest change to your code that works is:
new_A_set = set()
new_A = []
new_B = []
new_C = []
for i in range(len(A)):
    if A[i] not in new_A_set:
        new_A_set.add(A[i])
        new_A.append(A[i])
        new_B.append(B[i])
        new_C.append(C[i])
But this can be generalized—and should be, especially if you're planning to expand from 3 lists to a whole lot of them.
The recipes in the itertools documentation include a function called unique_everseen that generalizes exactly what we want. You can copy and paste it into your code, write a simplified version yourself, or pip install more-itertools and use someone else's implementation (as I did above).
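For reference, a simplified version along those lines (the more_itertools implementation additionally handles unhashable elements):

def unique_everseen(iterable, key=None):
    # yield each element whose key hasn't been seen before, preserving order
    seen = set()
    for element in iterable:
        k = element if key is None else key(element)
        if k not in seen:
            seen.add(k)
            yield element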
PadraicCunningham asks:
how efficient is zip(*unique_everseen(zip(a, b, c), key=itemgetter(0)))?
If there are N elements, M unique, it's O(N) time and O(M) space.
In fact, it's effectively doing the same work as the 10-line version above. In both cases, the only work that's not obviously trivial inside the loop is key in seen and seen.add(key), and since both operations are amortized constant time for set, that means the whole thing is O(N) time. In practice, for N=1000000, M=100000 the two versions are about 278ms and 297ms (I forget which is which) compared to minutes for the quadratic version. You could probably micro-optimize that down to 250ms or so—but it's hard to imagine a case where you'd need that, but wouldn't benefit from running it in PyPy instead of CPython, or writing it in Cython or C, or numpy-izing it, or getting a faster computer, or parallelizing it.
As for space, the explicit version makes it pretty obvious. Like any conceivable non-mutating algorithm, we've got the three new_Foo lists around at the same time as the original lists, and we've also added new_A_set of the same size. Since all of those are length M, that's 4M space. We could cut that in half by doing one pass to get indices, then doing the same thing mu 無's answer does:
indices = {index for index, value in unique_everseen(enumerate(a), key=itemgetter(1))}
a = [a[index] for index in sorted(indices)]
b = [b[index] for index in sorted(indices)]
c = [c[index] for index in sorted(indices)]
But there's no way to go lower than that; you have to have at least a set and a list of length M alive to uniquify a list of length N in linear time.
If you really need to save space, you can mutate all three lists in-place. But this is a lot more complicated, and a bit slower (although still linear*).
Also, it's worth noting another advantage of the zip version: it works on any iterables. You can feed it three lazy iterators, and it won't have to instantiate them eagerly. I don't think it's doable in 2M space, but it's not too hard in 3M:
indices, a = zip(*unique_everseen(enumerate(a), key=itemgetter(1)))
indices = set(indices)
b = [value for index, value in enumerate(b) if index in indices]
c = [value for index, value in enumerate(c) if index in indices]
* Note that just del c[i] will make it quadratic, because deleting from the middle of a list takes linear time. Fortunately, that linear time is a giant memmove that's orders of magnitude faster than the equivalent number of Python assignments, so if N isn't too big you can get away with it—in fact, at N=100000, M=10000 it's twice as fast as the immutable version… But if N might be too big, you have to instead replace each duplicate element with a sentinel, then loop over the list in a second pass so you can shift each element only once, which is instead 50% slower than the immutable version.
How about this: basically, get a set of all unique elements of A, then get their indices, and create new lists based on those indices.
new_A = list(set(A))
indices_to_copy = [A.index(element) for element in new_A]
new_B = [B[index] for index in indices_to_copy]
new_C = [C[index] for index in indices_to_copy]
You can write a function for the second statement, for reuse:
def get_new_list(original_list, indices):
    return [original_list[idx] for idx in indices]

Fastest way to compare ordered lists and count common elements *including* duplicates

I need to compare two lists of numbers and count how many elements of the first list are in the second list. For example,
a = [2, 3, 3, 4, 4, 5]
b1 = [0, 2, 2, 3, 3, 4, 6, 8]
Here I should get a result of 4: I should count '2' 1 time (as it happens only once in the first list), '3' 2 times, and '4' 1 time (as it happens only once in the second list). I was using the following code:
def scoreIn(list1, list2):
    score = 0
    list2c = list(list2)
    for i in list1:
        if i in list2c:
            score += 1
            list2c.remove(i)
    return score
It works correctly, but it is too slow for my case (I call it 15000 times). I read a hint about 'walking' through sorted lists which was supposed to be faster, so I tried to do it like that:
def scoreWalk(list1, list2):
    score = 0
    i = 0
    j = 0
    len1 = len(list1)  # we assume that list2 is never shorter than list1
    while i < len1:
        if list1[i] == list2[j]:
            score += 1
            i += 1
            j += 1
        elif list1[i] > list2[j]:
            j += 1
        else:
            i += 1
    return score
Unfortunately this code is even slower. Is there any way to make it more efficient? In my case, both lists are sorted, contain only integers, and list1 is never longer than list2.
You can use the intersection feature of collections.Counter to solve the problem in an easy and readable way:
>>> from collections import Counter
>>> intersection = Counter( [2,3,3,4,4,5] ) & Counter( [0, 2, 2, 3, 3, 4, 6, 8] )
>>> intersection
Counter({3: 2, 2: 1, 4: 1})
As @Bakuriu says in the comments, to obtain the number of elements in the intersection (including duplicates), like your scoreIn function, you can then use sum( intersection.values() ).
However, doing it this way you're not actually taking advantage of the fact that your data is pre-sorted, nor of the fact (mentioned in the comments) that you're doing this over and over again with the same list.
Here is a more elaborate solution more specifically tailored for your problem. It uses a Counter for the static list and directly uses the sorted dynamic list. On my machine it runs in 43% of the run-time of the naïve Counter approach on randomly generated test data.
def common_elements( static_counter, dynamic_sorted_list ):
    last = None      # previous element in the dynamic list
    count = 0        # count seen so far for this element in the dynamic list
    total_count = 0  # total common elements seen, eventually the return value
    for x in dynamic_sorted_list:
        # since the list is sorted, if there's more than one of an element
        # they will be consecutive.
        if x == last:
            # one more of the same as the previous element
            # all we need to do is increase the count
            count += 1
        else:
            # this is a new element that we haven't seen before.
            # first "flush out" the current count we've been keeping.
            # - count is the number of times it occurred in the dynamic list
            # - static_counter[ last ] is the number of times it occurred in
            #   the static list (the Counter class counted this for us)
            # thus the number of occurrences the two have in common is the
            # smaller of these numbers. (Note that unlike a normal dictionary,
            # which would raise KeyError, a Counter will return zero if we try
            # to look up a key that isn't there at all.)
            total_count += min( static_counter[ last ], count )
            # now set count and last to the new element, starting a new run
            count = 1
            last = x
    if count > 0:
        # since we only "flushed" above once we'd iterated _past_ an element,
        # the last unique value hasn't been counted. count it now.
        total_count += min( static_counter[ last ], count )
    return total_count
The idea of this is that you do some of the work up front when you create the Counter object. Once you've done that work, you can use the Counter object to quickly look up counts, just like you look up values in a dictionary: static_counter[ x ] returns the number of times x occurred in the static list.
Since the static list is the same every time, you can do this once and use the resulting quick-lookup structure 15 000 times.
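In code, that split might look like this (the lists here are stand-ins for your real data, with common_elements defined as above):

from collections import Counter

static_list = [0, 2, 2, 3, 3, 4, 6, 8]
static_counter = Counter(static_list)  # built once, up front

# reuse the same counter for every one of the ~15000 calls
for dynamic_sorted_list in ([2, 3, 3, 4, 4, 5], [0, 0, 8]):
    print(common_elements(static_counter, dynamic_sorted_list))
# 4
# 2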
On the other hand, setting up a Counter object for the dynamic list may not pay off performance-wise. There is a little bit of overhead involved in creating a Counter object, and we'd only use each dynamic list Counter one time. If we can avoid constructing the object at all, it makes sense to do so. And as we saw above, you can in fact implement what you need by just iterating through the dynamic list and looking up counts in the other counter.
The scoreWalk function in your post does not handle the case where the biggest item is only in the static list, e.g. scoreWalk( [1,1,3], [1,1,2] ). Correcting that, however, it actually performs better than any of the Counter approaches for me, contrary to the results you report. There may be a significant difference in the distribution of your data to my uniformly-distributed test data, but double-check your benchmarking of scoreWalk just to be sure.
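For reference, a possible correction of that edge case (my sketch; one of several ways to guard the index):

def scoreWalk(list1, list2):
    score = 0
    i = j = 0
    # also stop when list2 is exhausted, so j never runs off the end
    while i < len(list1) and j < len(list2):
        if list1[i] == list2[j]:
            score += 1
            i += 1
            j += 1
        elif list1[i] > list2[j]:
            j += 1
        else:
            i += 1
    return score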
Lastly, consider that you may be using the wrong tool for the job. You're not after short, elegant and readable -- you're trying to squeeze every last bit of performance out of a rather simple task. CPython allows you to write modules in C. One of the primary use cases for this is to implement highly optimized code. It may be a good fit for your task.
You can do this with a dict comprehension:
>>> a = [2, 3, 3, 4, 4, 5]
>>> b1 = [0, 2, 2, 3, 3, 4, 6, 8]
>>> {k: min(b1.count(k), a.count(k)) for k in set(a)}
{2: 1, 3: 2, 4: 1, 5: 0}
This is much faster if set(a) is small. If set(a) has more than 40 items, the Counter-based solution is faster.
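If you want the single score from the question rather than the per-element counts, you can sum the values (a small addition of mine):

a = [2, 3, 3, 4, 4, 5]
b1 = [0, 2, 2, 3, 3, 4, 6, 8]
counts = {k: min(b1.count(k), a.count(k)) for k in set(a)}
print(sum(counts.values()))  # 4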

Is a Python list guaranteed to have its elements stay in the order they are inserted in?

If I have the following Python code
>>> x = []
>>> x = x + [1]
>>> x = x + [2]
>>> x = x + [3]
>>> x
[1, 2, 3]
Will x be guaranteed to always be [1,2,3], or are other orderings of the interim elements possible?
Yes, the order of elements in a python list is persistent.
In short, yes, the order is preserved. In long:
In general the following definitions will always apply to objects like lists:
A list is a collection of elements that can contain duplicate elements and has a defined order that generally does not change unless explicitly made to do so. stacks and queues are both types of lists that provide specific (often limited) behavior for adding and removing elements (stacks being LIFO, queues being FIFO). Lists are practical representations of, well, lists of things. A string can be thought of as a list of characters, as the order is important ("abc" != "bca") and duplicates in the content of the string are certainly permitted ("aaa" can exist and != "a").
A set is a collection of elements that cannot contain duplicates and has a non-definite order that may or may not change over time. Sets do not represent lists of things so much as they describe the extent of a certain selection of things. The internal structure of set, how its elements are stored relative to each other, is usually not meant to convey useful information. In some implementations, sets are always internally sorted; in others the ordering is simply undefined (usually depending on a hash function).
Collection is a generic term referring to any object used to store a (usually variable) number of other objects. Both lists and sets are a type of collection. Tuples and Arrays are normally not considered to be collections. Some languages consider maps (containers that describe associations between different objects) to be a type of collection as well.
This naming scheme holds true for all programming languages that I know of, including Python, C++, Java, C#, and Lisp (in which lists not keeping their order would be particularly catastrophic). If anyone knows of any where this is not the case, please just say so and I'll edit my answer. Note that specific implementations may use other names for these objects, such as vector in C++ and flex in ALGOL 68 (both lists; flex is technically just a re-sizable array).
If there is any confusion left in your case due to the specifics of how the + sign works here, just know that order is important for lists and unless there is very good reason to believe otherwise you can pretty much always safely assume that list operations preserve order. In this case, the + sign behaves much like it does for strings (which are really just lists of characters anyway): it takes the content of a list and places it behind the content of another.
If we have
list1 = [0, 1, 2, 3, 4]
list2 = [5, 6, 7, 8, 9]
Then
list1 + list2
Is the same as
[0, 1, 2, 3, 4] + [5, 6, 7, 8, 9]
Which evaluates to
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
Much like
"abdcde" + "fghijk"
Produces
"abdcdefghijk"
You are confusing 'sets' and 'lists'. A set does not guarantee order, but lists do.
Sets are declared using curly brackets: {}. In contrast, lists are declared using square brackets: [].
mySet = {a, b, c, c}
This does not guarantee order, but a list does:
myList = [a, b, c]
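A quick illustration of the difference (the set's printed order can vary by implementation):

myList = [3, 1, 2, 3]
mySet = {3, 1, 2, 3}
print(myList)  # [3, 1, 2, 3] -- order and duplicates preserved
print(mySet)   # {1, 2, 3}    -- duplicate dropped, order not guaranteed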
I suppose one thing that may be concerning you is whether or not the entries could change, so that the 2 becomes a different number, for instance. You can put your mind at ease here, because in Python, integers are immutable, meaning they cannot change after they are created.
Not everything in Python is immutable, though. For example, lists are mutable---they can change after being created. So for example, if you had a list of lists
>>> a = [[1], [2], [3]]
>>> a[0].append(7)
>>> a
[[1, 7], [2], [3]]
Here, I changed the first entry of a (I added 7 to it). One could imagine shuffling things around, and getting unexpected things here if you are not careful (and indeed, this does happen to everyone when they start programming in Python in some way or another; just search this site for "modifying a list while looping through it" to see dozens of examples).
It's also worth pointing out that x = x + [a] and x.append(a) are not the same thing. The second one mutates x, and the first one creates a new list and assigns it to x. To see the difference, try setting y = x before adding anything to x and trying each one, and look at the difference the two make to y.
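Concretely (my own small demonstration):

x = [1, 2, 3]
y = x
x.append(4)   # mutates the one list that both names refer to
print(y)      # [1, 2, 3, 4]

x = [1, 2, 3]
y = x
x = x + [4]   # builds a brand-new list and rebinds only x
print(y)      # [1, 2, 3]
print(x)      # [1, 2, 3, 4]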
Yes, the list will remain as [1, 2, 3] unless you perform some other operation on it.
aList = [1, 2, 3]
i = 0
for item in aList:
    if i < 2:
        aList.remove(item)
    i += 1
aList
[2]
The moral: when modifying a list in a loop driven by that list, take two steps:
aList = [1, 2, 3]
i = 0
for item in aList:
    if i < 2:
        aList[i] = "del"
    i += 1
aList
['del', 'del', 3]
for i in range(2):
    del aList[0]
aList
[3]
Yes, lists and tuples are always ordered, while dictionaries are not (though note that since Python 3.7, dicts do preserve insertion order).

Python 3 turn range to a list

I'm trying to make a list with numbers 1-1000 in it. Obviously this would be annoying to write/read, so I'm attempting to make a list with a range in it. In Python 2 it seems that:
some_list = range(1,1000)
would have worked, but in Python 3 the range is similar to the xrange of Python 2?
Can anyone provide some insight into this?
You can just construct a list from the range object:
my_list = list(range(1, 1001))
This is how you do it with generators in python2.x as well. Typically speaking, you probably don't need a list though, since you can come by the value of my_list[i] more efficiently (i + 1), and if you just need to iterate over it, you can just fall back on range.
Also note that on python2.x, xrange is still indexable [1]. This means that range on python3.x also has the same property [2].
[1] print xrange(30)[12] works for python2.x
[2] The analogous statement to [1] in python3.x is print(range(30)[12]), and that works also.
In Pythons <= 3.4 you can, as others suggested, use list(range(10)) in order to make a list out of a range (In general, any iterable).
Another alternative, introduced in Python 3.5 with its unpacking generalizations, is by using * in a list literal []:
>>> r = range(10)
>>> l = [*r]
>>> print(l)
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
Though this is equivalent to list(r), it's literal syntax, and the fact that no function call is involved does let it execute faster. It's also fewer characters, if you need to code golf :-)
In Python 3.x, the range() function got its own type, so it no longer returns a list; you must build the list explicitly:
list(range(1000))
The reason Python 3 lacks a function for directly getting a ranged list is that the original Python 3 designers were quite novice in Python 2. They only considered the use of the range() function in a for loop and thus decided the list should never need to be expanded. In fact, we very often need to use the range() function to produce a list and pass it into a function.
Therefore, in this case, Python 3 is less convenient than Python 2 because:
In Python 2, we have xrange() and range();
In Python 3, we have range() and list(range())
Nonetheless, you can still use list expansion in this way:
[*range(N)]
You really shouldn't need to use the numbers 1-1000 in a list. But if for some reason you really do need these numbers, then you could do:
[i for i in range(1, 1001)]
List comprehension in a nutshell: the above list comprehension translates to:
nums = []
for i in range(1, 1001):
    nums.append(i)
This is just the list comprehension syntax from 2.x; the same syntax works unchanged in Python 3.
range starts at the first parameter (inclusive) but stops before the second parameter (exclusive); when only one parameter is supplied, it starts at 0. So:
range(start, end + 1)
produces [start, start + 1, ..., end]
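For instance (the values here are arbitrary):

print(list(range(3, 7)))  # [3, 4, 5, 6] -- stops before 7
print(list(range(4)))     # [0, 1, 2, 3] -- a single argument starts at 0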
Python 3:
my_list = [*range(1001)]
Actually, if you want 1-1000 (inclusive), use the range(...) function with parameters 1 and 1001: range(1, 1001), because the range(start, end) function goes from start to (end-1), inclusive.
Use range in Python 3. Here is an example function that returns the numbers between two numbers:
def get_between_numbers(a, b):
    """
    This function will return the in-between numbers from two numbers.
    :param a:
    :param b:
    :return:
    """
    x = []
    if b < a:
        x.extend(range(b, a))
        x.append(a)
    else:
        x.extend(range(a, b))
        x.append(b)
    return x
Result
print(get_between_numbers(5, 9))
print(get_between_numbers(9, 5))
[5, 6, 7, 8, 9]
[5, 6, 7, 8, 9]
In fact, this is a regression in Python 3 as compared to Python 2. Certainly, Python 2, which uses range() and xrange(), is more convenient than Python 3, which uses list(range()) and range() respectively. The reason is that the original designers of Python 3 were not very experienced: they only considered the use of the range function by many beginners to iterate over a large number of elements, where it is both memory- and CPU-inefficient, but neglected the use of the range function to produce a number list. Now it is too late for them to change back.
If I were the designer of Python 3, I would:
use irange to return a sequence iterator
use lrange to return a sequence list
use range to return either a sequence iterator (if the number of elements is large, e.g., range(9999999)) or a sequence list (if the number of elements is small, e.g., range(10))
That should be optimal.
