Python nested for loop faster than single for loop

Why is the nested for loop faster than the single for loop?
from time import time

start = time()
k = 0
m = 0
for i in range(1000):
    for j in range(1000):
        for l in range(100):
            m += 1
#for i in range(100000000):
#    k += 1
print int(time() - start)
For the single for loop I get a time of 14 seconds, and for the nested for loops 10 seconds.

The relevant context is explained in this topic.
In short, range(100000000) builds a huge list in Python 2, whereas with the nested loops you only build lists with a total of 1000 + 1000 + 100 = 2100 elements. In Python 3, range is smarter and lazy like xrange in Python 2.
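A quick way to see that laziness from the interpreter (a small sketch of my own, Python 3):
import sys

r = range(100000000)       # lazy: stores only start, stop and step
print(sys.getsizeof(r))    # a few dozen bytes, independent of the range length
print(sys.getsizeof(list(range(1000))))  # a real list grows with its element count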
Here are some timings for the following code. Absolute runtime depends on the system, but comparing the values with each other is valuable.
import timeit

runs = 100

code = '''k = 0
for i in range(1000):
    for j in range(1000):
        for l in range(100):
            k += 1'''
print(timeit.timeit(stmt=code, number=runs))

code = '''k = 0
for i in range(100000000):
    k += 1'''
print(timeit.timeit(stmt=code, number=runs))
Outputs:
CPython 2.7 - range
264.650791883
372.886064053
Interpretation: building huge lists takes time.
CPython 2.7 - range exchanged with xrange
231.975350142
221.832423925
Interpretation: almost equal, as expected. (Nested for loops should have slightly
larger overhead than a single for loop.)
CPython 3.6 - range
365.20924194483086
437.26447860104963
Interpretation: Interesting! I did not expect this. Anyone?

It is because you are using Python 2. range generates a list of numbers and has to allocate that list. In the nested loops you allocate lists of 1000 + 1000 + 100 elements, i.e. 2100 in total, while in the other case a single list of 100000000 elements is allocated, which is much bigger.
In Python 2 it is better to use xrange(), which yields the numbers on demand instead of building and allocating a list with them.
Additionally, for further information you can read this question, which covers the same ground but for Python 3.
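For illustration, here is a minimal Python 2 sketch (my own, in the same timing style as the question) of the single loop rewritten with xrange:
from time import time

start = time()
k = 0
# xrange yields one number at a time instead of allocating a
# 100-million-element list up front.
for i in xrange(100000000):
    k += 1
print int(time() - start)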

In Python 2, range creates a list with all of the numbers within the list. Try swapping range with xrange and you should see them take comparable time or the single loop approach may work a bit faster.

During the nested loops Python has to allocate 1000 + 1000 + 100 = 2100 values for the counters, whereas in the single loop it has to allocate 100 million. This is what's taking the extra time.
I have tested this in Python 3.6 and the behaviour is similar, so I would say it's very likely a memory allocation issue.

Related

Why is using a Cython list faster than using a Python list?

Here's my Python code:
from time import time

X = [[0] * 1000] * 100
start = time()
for x in xrange(100):
    for i in xrange(len(X)):
        for j in xrange(len(X[i])):
            X[i][j] += 1
print time() - start
My Cython code is the same:
from time import time

X = [[0] * 1000] * 100
start = time()
for x in xrange(100):
    for i in xrange(len(X)):
        for j in xrange(len(X[i])):
            X[i][j] += 1
print time() - start
Output:
Python cost: 2.86 sec
Cython cost: 0.41 sec
Any other faster way to do the same above in Python or Cython?
Update: Is there any way to create a 2D array X with indexing performance close to a C/C++ int X[][] array?
For now I am considering using the Python C API to do the job.
One more thing: a NumPy array doing the same thing is much slower (70 sec) than a list in both pure Python and Cython.
Python:
import numpy as np
from time import time

X = np.zeros((100, 1000), dtype=np.int32)
start = time()
for x in xrange(100):
    for i in xrange(len(X)):
        for j in xrange(len(X[i])):
            X[i][j] += 1
When doing a lot of accesses to a numeric array, which approach is best?
To answer the question in your title, your Cython code beats your Python code because, despite the lack of cdef to declare variables, C code is being generated for the for loops (in addition to lots of extra C code to describe the Python objects). To speed up your Cython code use cdef to declare the integers i, j and x so they're no longer Python integers: e.g. cdef int i. You can also declare C-type arrays in Cython which should improve performance further.
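As a rough Cython sketch of that suggestion (the function name is mine, not from the question), typing the loop indices with cdef lets Cython generate plain C loop counters:
def increment_all(list X):
    cdef int i, j, x          # C ints instead of Python integer objects
    for x in range(100):
        for i in range(len(X)):
            for j in range(len(X[i])):
                X[i][j] += 1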
A quick way to get the same result with NumPy:
X = np.zeros((100, 1000), dtype=np.int32)
X += 10000
If you can help it, you should never use for loops with NumPy arrays. They are completely different to lists in terms of memory usage.
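As a hedged illustration (my own snippet, Python 3 syntax) of why the element-wise loop over a NumPy array is so slow compared to the vectorized form:
import numpy as np
from timeit import timeit

X = np.zeros((100, 1000), dtype=np.int32)

def slow(a):
    # every a[i, j] access boxes a NumPy scalar into a Python object
    for i in range(a.shape[0]):
        for j in range(a.shape[1]):
            a[i, j] += 1

def fast(a):
    # one vectorized pass, executed as a single C loop
    a += 1

print(timeit(lambda: slow(X), number=1))
print(timeit(lambda: fast(X), number=1))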
Any other faster way to do the same above in Python or Cython?
The equivalent, faster code would be:
X = [[100 * 100] * 1000] * 100
In your code, you are creating a 1000-long list of zeroes, then creating a 100-long list of references to that list. Now, iterating 100 times over that 100-long list results in each position being incremented 100 * 100 = 10000 times.
len(set(map(id, X)))
1
If you would like to end up with a list of 100 lists:
base = [100] * 1000
X = [list(base) for _ in xrange(100)]
len(set(map(id, X)))
100
Note that references to objects inside lists are still copied.
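A small illustration of the aliasing (my own snippet, Python 2 syntax to match the question):
X = [[0] * 1000] * 100                 # 100 references to the SAME inner list
X[0][0] = 42
print X[99][0]                         # 42: every "row" sees the change

base = [0] * 1000
Y = [list(base) for _ in xrange(100)]  # 100 independent copies
Y[0][0] = 42
print Y[99][0]                         # 0: the other rows are unaffected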
ajcr's answer is probably the fastest and simplest one. You should first explicitly declare the data types of your variables in the Cython code. In addition, I would use prange instead of a plain range iterator for the outer loop. This activates OpenMP multithreading, which might speed up your code further, but I really doubt this solution will beat the NumPy implementation.

Why this list comprehension is faster than equivalent generator expression?

I'm using Python 3.3.1 64-bit on Windows and this code snippet:
len ([None for n in range (1, 1000000) if n%3 == 1])
executes in 136ms, compared to this one:
sum (1 for n in range (1, 1000000) if n%3 == 1)
which executes in 146ms. Shouldn't a generator expression be faster or the same speed as the list comprehension in this case?
I quote from Guido van Rossum From List Comprehensions to Generator Expressions:
...both list comprehensions and generator expressions in Python 3 are
actually faster than they were in Python 2! (And there is no longer a
speed difference between the two.)
EDIT:
I measured the time with timeit. I know that it is not very accurate, but I care only about relative speeds here and I'm getting consistently shorter time for list comprehension version, when I test with different numbers of iterations.
I believe the difference here is entirely in the cost of 1000000 additions. Testing with 64-bit Python.org 3.3.0 on Mac OS X:
In [698]: %timeit len ([None for n in range (1, 1000000) if n%3 == 1])
10 loops, best of 3: 127 ms per loop
In [699]: %timeit sum (1 for n in range (1, 1000000) if n%3 == 1)
10 loops, best of 3: 138 ms per loop
In [700]: %timeit sum ([1 for n in range (1, 1000000) if n%3 == 1])
10 loops, best of 3: 139 ms per loop
So, it's not that the comprehension is faster than the genexp; they both take about the same time. But calling len on a list is instant, while summing 1M numbers adds another 7% to the total time.
Throwing a few different numbers at it, this seems to hold up unless the list is very tiny (in which case it does seem to get faster), or large enough that memory allocation starts to become a significant factor (which it isn't yet, at 333K).
Borrowed from this answer, there are two things to consider (see the sketch after this list):
1. A Python list is indexable and fetching its length takes only O(1) time, so the speed of calling len() on a list does not depend on its size. A generator, by contrast, has no length; to count its items you have to consume everything it generates, which is O(n).
2. See the linked answer above. A list comprehension is a tight C loop, whereas a generator has to store a reference to the iterator inside and call next(iter) for every item it generates. This creates another layer of overhead for generators. At a small scale the difference in performance between list comprehensions and generators can be safely ignored, but at a larger scale you have to consider it.
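A small sketch of point 1 (my own illustration, Python 3):
lst = [None for n in range(1, 1000000) if n % 3 == 1]
gen = (1 for n in range(1, 1000000) if n % 3 == 1)

print(len(lst))               # O(1): the list already knows its length
# len(gen) would raise TypeError: generators have no length
print(sum(1 for _ in gen))    # counting means consuming every item: O(n)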

Higher Order Functions vs loops - running time & memory efficiency?

Does using Higher Order Functions & Lambdas make running time & memory efficiency better or worse?
For example, to multiply all numbers in a list :
nums = [1, 2, 3, 4, 5]
prod = 1
for n in nums:
    prod *= n
vs
prod2 = reduce(lambda x,y:x*y , nums)
Does the HOF version have any advantage over the loop version other than fewer lines of code and a functional style?
EDIT:
I am not able to add this as an answer as I don't have the required reputation.
I tried to profile the loop & HOF approach using timeit as suggested by @DSM.
import timeit

def test1():
    s = """
nums = [a for a in range(1,1001)]
prod = 1
for n in nums:
    prod *= n
"""
    t = timeit.Timer(stmt=s)
    return t.repeat(repeat=10, number=100)

def test2():
    s = """
nums = [a for a in range(1,1001)]
prod2 = reduce(lambda x,y: x*y, nums)
"""
    t = timeit.Timer(stmt=s)
    return t.repeat(repeat=10, number=100)
And this is my result:
Loop:
[0.08340786340144211, 0.07211491653462579, 0.07162720686361926, 0.06593182661083438, 0.06399049758613146, 0.06605228229559557, 0.06419744588664211, 0.0671893658461038, 0.06477527090075941, 0.06418023793167627]
test1 average: 0.0644778902685
HOF:
[0.0759414223099324, 0.07616920129277016, 0.07570730355421262, 0.07604965128984942, 0.07547092059389193, 0.07544737286604364, 0.075532959799953, 0.0755039779810629, 0.07567424616704144, 0.07542563650187661]
test2 average: 0.0754917512762
On average, the loop approach seems to be faster than using HOFs.
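A hedged side note: part of the HOF cost in this comparison is the lambda itself, which adds a Python-level function call per element; operator.mul avoids that layer (functools.reduce is needed on Python 3):
import operator
from functools import reduce    # reduce is a builtin on Python 2

nums = range(1, 1001)

prod_lambda = reduce(lambda x, y: x * y, nums)  # a Python lambda call per element
prod_opmul = reduce(operator.mul, nums)         # a C-implemented function per element
assert prod_lambda == prod_opmul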
Higher-order functions can be very fast.
For example, map(ord, somebigstring) is much faster than the equivalent list comprehension [ord(c) for c in somebigstring]. The former wins for three reasons:
map() pre-sizes the result list to the length of somebigstring. In contrast, the list comprehension must make many calls to realloc() as it grows.
map() only has to do one lookup for ord, first checking globals, then checking and finding it in builtins. The list comprehension has to repeat this work on every iteration.
The inner loop for map runs at C speed. The loop body for the list comprehension is a series of pure Python steps that each need to be dispatched or handled by the eval-loop.
Here are some timings to confirm the prediction:
>>> from timeit import Timer
>>> print min(Timer('map(ord, s)', 's="x"*10000').repeat(7, 1000))
0.808364152908
>>> print min(Timer('[ord(c) for c in s]', 's="x"*10000').repeat(7, 1000))
1.2946639061
From my experience, loops can be very fast provided they are not nested too deeply and do not involve complex higher math operations. For simple operations and a single layer of loops they can be as fast as any other approach, maybe faster, as long as only integers are used as the loop index; it really depends on what you are doing.
Also, it may well be that the higher-order function performs just as many loops as the loop version and might even be a little slower; you would have to time them both to be sure.

Cost of list functions in Python

Based on this older thread, it looks like the cost of list functions in Python is:
Random access: O(1)
Insertion/deletion to front: O(n)
Insertion/deletion to back: O(1)
Can anyone confirm whether this is still true in Python 2.6/3.x?
Take a look here. It's a PEP for a different kind of list. The version specified is 2.6/3.0.
Append (insertion to back) is O(1), while insertion (everywhere else) is O(n). So yes, it looks like this is still true.
Operation...Complexity
Copy........O(n)
Append......O(1)
Insert......O(n)
Get Item....O(1)
Set Item....O(1)
Del Item....O(n)
Iteration...O(n)
Get Slice...O(k)
Del Slice...O(n)
Set Slice...O(n+k)
Extend......O(k)
Sort........O(n log n)
Multiply....O(nk)
Python 3 is mostly an evolutionary change, no big changes in the datastructures and their complexities.
The canonical source for Python collections is TimeComplexity on the Wiki.
That's correct: inserting at the front forces a move of all the elements to make room.
collections.deque offers similar functionality, but is optimized for insertion on both sides.
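A tiny sketch of that difference (my own example):
from collections import deque

d = deque([1, 2, 3])
d.appendleft(0)     # O(1): deques are optimized at both ends
d.append(4)         # O(1)

lst = [1, 2, 3]
lst.insert(0, 0)    # O(n): every existing element has to shift right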
I know this post is old, but I recently did a little test myself. The complexity of list.insert() appears to be O(n)
Code:
'''
Independent Study, Timing insertion list method in python
'''
import time

def make_list_of_size(n):
    ret_list = []
    for i in range(n):
        ret_list.append(n)
    return ret_list

#Estimate overhead of timing loop
def get_overhead(niters):
    '''
    Returns the time it takes to iterate a for loop niter times
    '''
    tic = time.time()
    for i in range(niters):
        pass  #Just blindly iterate, niter times
    toc = time.time()
    overhead = toc - tic
    return overhead

def tictoc_midpoint_insertion(list_size_initial, list_size_final, niters, file):
    overhead = get_overhead(niters)
    list_size = list_size_initial
    #insertion_pt = list_size//2  #<------- INSERTION POINT ASSIGNMENT LOCATION 1
    #insertion_pt = 0             #<------- INSERTION POINT ASSIGNMENT LOCATION 4 (insert at beginning)
    delta = 100
    while list_size <= list_size_final:
        #insertion_pt = list_size//2  #<--- INSERTION POINT ASSIGNMENT LOCATION 2
        x = make_list_of_size(list_size)
        tic = time.time()
        for i in range(niters):
            insertion_pt = len(x)//2  #<--- INSERTION POINT ASSIGNMENT LOCATION 3
            #insertion_pt = len(x)    #<--- INSERTION POINT ASSIGNMENT LOCATION 5 (insert at true end)
            x.insert(insertion_pt, 0)
        toc = time.time()
        cost_per_iter = (toc - tic)/niters  #overall time cost per iteration
        cost_per_iter_no_overhead = (toc - tic - overhead)/niters  #time per iteration without the pure-iteration overhead
        print("list size = {:d}, cost without overhead = {:f} sec/iter".format(list_size, cost_per_iter_no_overhead))
        file.write(str(list_size) + ',' + str(cost_per_iter_no_overhead) + '\n')
        if list_size >= 10*delta:
            delta *= 10
        list_size += delta

def main():
    fname = input()
    file = open(fname, 'w')
    niters = 10000
    tictoc_midpoint_insertion(100, 10000000, niters, file)
    file.close()

main()
See 5 positions where insertion can be done. Cost is of course a function of how large the list is, and how close you are to the beginning of the list (i.e. how many memory locations have to be re-organized)
Fwiw, there is a faster (for some ops... insert is O(log n)) list implementation called BList if you need it.

What is the difference between range and xrange functions in Python 2.X?

Apparently xrange is faster but I have no idea why it's faster (and no proof besides the anecdotal so far that it is faster) or what besides that is different about
for i in range(0, 20):
for i in xrange(0, 20):
In Python 2.x:
range creates a list, so if you do range(1, 10000000) it creates a list in memory with 9999999 elements.
xrange is a sequence object that evaluates lazily.
In Python 3:
range does the equivalent of Python 2's xrange. To get the list, you have to explicitly use list(range(...)).
xrange no longer exists.
range creates a list, so if you do range(1, 10000000) it creates a list in memory with 9999999 elements.
xrange is a sequence object that evaluates lazily.
This is true, but in Python 3, range() is implemented like the Python 2 xrange(). If you need to actually generate the list, you will need to do:
list(range(1,100))
Remember, use the timeit module to test which of small snippets of code is faster!
$ python -m timeit 'for i in range(1000000):' ' pass'
10 loops, best of 3: 90.5 msec per loop
$ python -m timeit 'for i in xrange(1000000):' ' pass'
10 loops, best of 3: 51.1 msec per loop
Personally, I always use range(), unless I were dealing with really huge lists -- as you can see, time-wise, for a list of a million entries, the extra overhead is only 0.04 seconds. And as Corey points out, in Python 3.0 xrange() will go away and range() will give you nice iterator behavior anyway.
xrange only stores the range params and generates the numbers on demand. However the C implementation of Python currently restricts its args to C longs:
xrange(2**32-1, 2**32+1) # When long is 32 bits, OverflowError: Python int too large to convert to C long
range(2**32-1, 2**32+1) # OK --> [4294967295L, 4294967296L]
Note that in Python 3.0 there is only range and it behaves like the 2.x xrange but without the limitations on minimum and maximum end points.
xrange returns an iterator and only keeps one number in memory at a time. range keeps the entire list of numbers in memory.
Do spend some time with the Library Reference. The more familiar you are with it, the faster you can find answers to questions like this. Especially important are the first few chapters about builtin objects and types.
The advantage of the xrange type is that an xrange object will always
take the same amount of memory, no matter the size of the range it represents.
There are no consistent performance advantages.
Another way to find quick information about a Python construct is the docstring and the help-function:
print xrange.__doc__ # def doc(x): print x.__doc__ is super useful
help(xrange)
The doc clearly reads :
This function is very similar to range(), but returns an xrange object instead of a list. This is an opaque sequence type which yields the same values as the corresponding list, without actually storing them all simultaneously. The advantage of xrange() over range() is minimal (since xrange() still has to create the values when asked for them) except when a very large range is used on a memory-starved machine or when all of the range’s elements are never used (such as when the loop is usually terminated with break).
You will find the advantage of xrange over range in this simple example:
import timeit

t1 = timeit.default_timer()
a = 0
for i in xrange(1, 100000000):
    pass
t2 = timeit.default_timer()
print "time taken: ", (t2-t1)  # 4.49153590202 seconds

t1 = timeit.default_timer()
a = 0
for i in range(1, 100000000):
    pass
t2 = timeit.default_timer()
print "time taken: ", (t2-t1)  # 7.04547905922 seconds
The above example doesn't reflect anything substantially better in case of xrange.
Now look at the following case where range is really really slow, compared to xrange.
import timeit

t1 = timeit.default_timer()
a = 0
for i in xrange(1, 100000000):
    if i == 10000:
        break
t2 = timeit.default_timer()
print "time taken: ", (t2-t1)  # 0.000764846801758 seconds

t1 = timeit.default_timer()
a = 0
for i in range(1, 100000000):
    if i == 10000:
        break
t2 = timeit.default_timer()
print "time taken: ", (t2-t1)  # 2.78506207466 seconds
With range, the whole list from 1 to 100000000 is created up front (time consuming), whereas xrange is generator-like and only produces numbers as the iteration asks for them.
In Python 3, the range functionality is implemented the same way xrange was in Python 2, and xrange itself has been removed.
Happy coding!
range creates a list, so if you do range(1, 10000000) it creates a list in memory with 9999999 elements.
xrange is a generator, so it evaluates lazily.
This brings you two advantages:
You can iterate longer lists without getting a MemoryError.
As it resolves each number lazily, if you stop iteration early, you won't waste time creating the whole list.
It is for optimization reasons.
range() will create a list of values from start to end (0 .. 20 in your example). This will become an expensive operation on very large ranges.
xrange() on the other hand is much more optimised. It will only compute the next value when needed (via an xrange sequence object) and does not create a list of all values like range() does.
range(): range(1, 10) returns a list of the numbers 1 through 9 and holds the whole list in memory.
xrange(): like range(), but instead of returning a list, it returns an object that generates the numbers in the range on demand. For looping this is slightly faster than range() and more memory efficient.
An xrange() object behaves like an iterator and generates the numbers on demand (lazy evaluation).
In [1]: range(1,10)
Out[1]: [1, 2, 3, 4, 5, 6, 7, 8, 9]
In [2]: xrange(10)
Out[2]: xrange(10)
In [3]: print xrange.__doc__
xrange([start,] stop[, step]) -> xrange object
range(x,y) returns a list of every number from x to y-1, so if you use it in a for loop, range is slower and it takes up more room since the whole list is built. range(x,y) will print out a list of all the numbers from x to y-1.
xrange(x,y) just displays as xrange(x,y) when printed, but used in a for loop it is faster. It still yields every number in that range, it simply doesn't keep them all in memory.
[In] range(1,10)
[Out] [1, 2, 3, 4, 5, 6, 7, 8, 9]
[In] xrange(1,10)
[Out] xrange(1,10)
If you use a for loop, then it would work
[In] for i in range(1,10):
         print i
[Out] 1
2
3
4
5
6
7
8
9
[In] for i in xrange(1,10):
         print i
[Out] 1
2
3
4
5
6
7
8
9
There isn't much difference when using loops, though there is a difference when just printing it!
Some of the other answers mention that Python 3 eliminated 2.x's range and renamed 2.x's xrange to range. However, unless you're using 3.0 or 3.1 (which nobody should be), it's actually a somewhat different type.
As the 3.1 docs say:
Range objects have very little behavior: they only support indexing, iteration, and the len function.
However, in 3.2+, range is a full sequence—it supports extended slices, and all of the methods of collections.abc.Sequence with the same semantics as a list.*
And, at least in CPython and PyPy (the only two 3.2+ implementations that currently exist), it also has constant-time implementations of the index and count methods and the in operator (as long as you only pass it integers). This means writing 123456 in r is reasonable in 3.2+, while in 2.7 or 3.1 it would be a horrible idea.
* The fact that issubclass(xrange, collections.Sequence) returns True in 2.6-2.7 and 3.0-3.1 is a bug that was fixed in 3.2 and not backported.
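A short Python 3.2+ sketch of that constant-time behaviour (my own example):
r = range(0, 10**18, 7)   # an enormous range, but a tiny constant-size object
print(123456 in r)        # False, answered arithmetically, without iterating
print(r.index(700))       # 100, also computed in constant time
print(len(r))             # the length is known without generating any elements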
In python 2.x
range(x) returns a list, that is created in memory with x elements.
>>> a = range(5)
>>> a
[0, 1, 2, 3, 4]
xrange(x) returns an xrange object, a generator-like object which produces the numbers on demand; they are computed lazily during the for loop.
For looping, this is slightly faster than range() and more memory efficient.
>>> b = xrange(5)
>>> b
xrange(5)
When testing range against xrange in a loop (I know I should use timeit, but this was swiftly hacked up from memory using a simple list comprehension example) I found the following:
import time
for x in range(1, 10):
    t = time.time()
    [v*10 for v in range(1, 10000)]
    print "range: %.4f" % ((time.time()-t)*100)
    t = time.time()
    [v*10 for v in xrange(1, 10000)]
    print "xrange: %.4f" % ((time.time()-t)*100)
which gives:
$python range_tests.py
range: 0.4273
xrange: 0.3733
range: 0.3881
xrange: 0.3507
range: 0.3712
xrange: 0.3565
range: 0.4031
xrange: 0.3558
range: 0.3714
xrange: 0.3520
range: 0.3834
xrange: 0.3546
range: 0.3717
xrange: 0.3511
range: 0.3745
xrange: 0.3523
range: 0.3858
xrange: 0.3997 <- garbage collection?
Or, using xrange in the for loop:
range: 0.4172
xrange: 0.3701
range: 0.3840
xrange: 0.3547
range: 0.3830
xrange: 0.3862 <- garbage collection?
range: 0.4019
xrange: 0.3532
range: 0.3738
xrange: 0.3726
range: 0.3762
xrange: 0.3533
range: 0.3710
xrange: 0.3509
range: 0.3738
xrange: 0.3512
range: 0.3703
xrange: 0.3509
Is my snippet testing properly? Any comments on the slower instance of xrange? Or a better example :-)
xrange() and range() in Python behave similarly from the user's point of view; the difference lies in how memory is allocated when using each function.
When we use range(), memory is allocated for all of the values it generates, so it is not recommended when a large number of values will be generated.
xrange(), on the other hand, generates only one value at a time, and is typically consumed in a for loop that walks through all of the required values.
range generates the entire list and returns it. xrange does not -- it generates the numbers in the list on demand.
xrange uses an iterator (generates values on the fly), range returns a list.
What?
range returns a static list at runtime.
xrange returns an object (which acts like a generator, although it's certainly not one) from which values are generated as and when required.
When to use which?
Use xrange if you want to iterate over a gigantic range, say 1 billion numbers, especially when you have a "memory sensitive system" like a cell phone.
Use range if you want to iterate over the list several times.
PS: Python 3.x's range function == Python 2.x's xrange function.
Everyone has explained it well, but I wanted to see it for myself. I use Python 3. So, I opened the Resource Monitor (in Windows!) and first executed the following:
a = 0
for i in range(1, 100000):
    a = a + i
and then checked the change in 'In Use' memory. It was insignificant.
Then, I ran the following code:
for i in list(range(1, 100000)):
    a = a + i
And it took a big chunk of the memory for use, instantly. And, I was convinced.
You can try it for yourself.
If you are using Python 2.x, then replace 'range()' with 'xrange()' in the first snippet and 'list(range())' with 'range()' in the second.
From the help docs.
Python 2.7.12
>>> print range.__doc__
range(stop) -> list of integers
range(start, stop[, step]) -> list of integers
Return a list containing an arithmetic progression of integers.
range(i, j) returns [i, i+1, i+2, ..., j-1]; start (!) defaults to 0.
When step is given, it specifies the increment (or decrement).
For example, range(4) returns [0, 1, 2, 3]. The end point is omitted!
These are exactly the valid indices for a list of 4 elements.
>>> print xrange.__doc__
xrange(stop) -> xrange object
xrange(start, stop[, step]) -> xrange object
Like range(), but instead of returning a list, returns an object that
generates the numbers in the range on demand. For looping, this is
slightly faster than range() and more memory efficient.
Python 3.5.2
>>> print(range.__doc__)
range(stop) -> range object
range(start, stop[, step]) -> range object
Return an object that produces a sequence of integers from start (inclusive)
to stop (exclusive) by step. range(i, j) produces i, i+1, i+2, ..., j-1.
start defaults to 0, and stop is omitted! range(4) produces 0, 1, 2, 3.
These are exactly the valid indices for a list of 4 elements.
When step is given, it specifies the increment (or decrement).
>>> print(xrange.__doc__)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'xrange' is not defined
Difference is apparent. In Python 2.x, range returns a list, xrange returns an xrange object which is iterable.
In Python 3.x, range becomes xrange of Python 2.x, and xrange is removed.
range() in Python 2.x
This function is essentially the old range() function that was available in Python 2.x and returns an instance of a list object that contains the elements in the specified range.
However, this implementation is too inefficient when it comes to initialise a list with a range of numbers. For example, for i in range(1000000) would be a very expensive command to execute, both in terms of memory and time usage as it requires the storage of this list into the memory.
range() in Python 3.x and xrange() in Python 2.x
Python 3.x introduced a newer implementation of range() (while the newer implementation was already available in Python 2.x through the xrange() function).
The range() exploits a strategy known as lazy evaluation. Instead of creating a huge list of elements in range, the newer implementation introduces the class range, a lightweight object that represents the required elements in the given range, without storing them explicitly in memory (this might sound like generators but the concept of lazy evaluation is different).
As an example, consider the following:
# Python 2.x
>>> a = range(10)
>>> type(a)
<type 'list'>
>>> b = xrange(10)
>>> type(b)
<type 'xrange'>
and
# Python 3.x
>>> a = range(10)
>>> type(a)
<class 'range'>
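A short sketch (my own, Python 3) of why a lazy range is still not a generator:
r = range(5)
g = (x for x in range(5))

print(list(r), list(r))   # [0, 1, 2, 3, 4] twice: a range can be re-iterated
print(list(g), list(g))   # the second list is empty: a generator is exhausted
print(r[3], len(r))       # ranges support indexing and len(); generators do not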
For a requirement of scanning/printing 0..N items, range and xrange work as follows.
range() - creates a new list in memory holding all of the items from 0 to N (N+1 in total) and iterates over them.
xrange() - creates an iterator instance that scans through the items, keeping only the currently encountered item in memory, hence using the same amount of memory all the time.
If the required element happens to be near the beginning of the sequence, this saves a good amount of time and memory.
range returns a list, while xrange returns an xrange object which takes the same memory irrespective of the range size. With xrange, only one element is generated and available per iteration, whereas with range all of the elements are generated at once and held in memory.
The difference decreases for smaller arguments to range(..) / xrange(..):
$ python -m timeit "for i in xrange(10111):" " for k in range(100):" " pass"
10 loops, best of 3: 59.4 msec per loop
$ python -m timeit "for i in xrange(10111):" " for k in xrange(100):" " pass"
10 loops, best of 3: 46.9 msec per loop
In this case xrange(100) is only about 20% more efficient.
range: populates everything at once, which means every number in the range occupies memory.
xrange: behaves like a generator; it comes into the picture when you want a range of numbers but don't want them stored, such as when looping with a for loop, so it is memory efficient.
Additionally, list(xrange(...)) is equivalent to range(...), so it is the list construction that is slow.
xrange never fully materialises the sequence, which is why it is not a list but an xrange object.
See this post to find the difference between range and xrange:
To quote:
range returns exactly what you think: a list of consecutive
integers, of a defined length beginning with 0. xrange, however,
returns an "xrange object", which acts a great deal like an iterator
