In the chapter on Arrays in the book Elements of Programming Interviews in Python, it is mentioned that "filling an array from the front is slow, so see if it's possible to write values from the back."
What could be the possible reason for that?
Python lists, at least in CPython (the standard Python implementation), are actually implemented as arrays from a data-structure perspective, not as linked lists.
However, these arrays are dynamically allocated and resized, so appending to the end of a Python list is cheap. It takes a somewhat variable amount of time: when items are appended, CPython allocates more space than is actually necessary, so that it doesn't need to allocate again for every single append. When spare space is already available, appending is O(1), which makes appends amortized O(1) overall, and since it is an array, indexing is also O(1).
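You can watch this over-allocation from pure Python with sys.getsizeof; a minimal sketch (exact byte counts vary by CPython version and platform):

```python
import sys

lst = []
prev = sys.getsizeof(lst)
print(f"len= 0  size={prev} bytes")
for i in range(20):
    lst.append(i)
    size = sys.getsizeof(lst)
    if size != prev:
        # The byte count jumps only occasionally: CPython grabbed room
        # for several future appends in a single reallocation.
        print(f"len={len(lst):2d}  size={size} bytes")
        prev = size
```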
What will take a long time, however, is adding something to the beginning of a list, since this requires shifting all of the existing array values; it is O(n), just as popping the first element is.
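The asymmetry is easy to measure with timeit; a small sketch (the numbers themselves are machine-dependent):

```python
import timeit

# Appending at the end is amortized O(1) per operation.
t_end = timeit.timeit("lst.append(0)", setup="lst = []", number=100_000)

# Inserting at the front is O(n) per operation: every existing
# pointer must be shifted one slot to the right each time.
t_front = timeit.timeit("lst.insert(0, 0)", setup="lst = []", number=100_000)

print(f"append to end:   {t_end:.3f}s")
print(f"insert at front: {t_front:.3f}s")  # typically far slower
```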
Python's language designers decided to call these arrays "lists" instead of "arrays", contradicting standard terminology, in part, I assume, because the dynamic resizing makes them different from standard, fixed-size arrays.
Unless I'm mistaken, collections.deque is implemented as a doubly-linked list (of fixed-size blocks, in CPython), with the corresponding O(1) appends/pops on either side, and so on.
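For example (note that indexing near a deque's middle is O(n), unlike a list):

```python
from collections import deque

d = deque([2, 3, 4])
d.appendleft(1)     # O(1), unlike list.insert(0, ...)
d.append(5)         # O(1), same as list.append
print(d.popleft())  # 1 -- O(1), unlike list.pop(0)
print(d.pop())      # 5 -- O(1)
print(d)            # deque([2, 3, 4])
```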
Related
I've been trying to learn how CPython is implemented under the hood. It's great that Python is high level, but I don't like treating it like a black box.
With that in mind, how are tuples implemented? I've had a look at the source (tupleobject.c), but it's going over my head.
I see that PyTuple_MAXSAVESIZE = 20 and PyTuple_MAXFREELIST = 2000; what is being "saved", and what is the "free list"? (Will there be a performance difference between tuples of length 20/21 or 2000/2001? What enforces the maximum tuple length?)
As a caveat, everything in this answer is based on what I've gleaned from looking over the implementation you linked.
It seems that the standard implementation of a tuple is simply an array. However, there are a bunch of optimizations in place to speed things up.
First, if you try to make an empty tuple, CPython will instead hand back a canonical object representing the empty tuple. As a result, it saves on a bunch of allocations by only ever allocating that single object.
Next, to avoid allocating a bunch of small objects, CPython recycles memory for many small tuples. There is a fixed constant (PyTuple_MAXSAVESIZE) such that all tuples shorter than this length are eligible to have their space reclaimed. Whenever an object of length less than this constant is deallocated, there is a chance that the memory associated with it will not be freed and instead will be stored in a "free list" (more on that in the next paragraph) based on its size. That way, if you ever need to allocate a tuple of size n and one has previously been allocated and is no longer in use, CPython can just recycle the old array.
The free list itself is implemented as an array of size PyTuple_MAXSAVESIZE storing pointers to unused tuples, where the nth element of the array points either to NULL (if no extra tuples of size n are available) or to a reclaimed tuple of size n. If there are multiple different tuples of size n that could be reused, they are chained together in a sort of linked list by having each tuple's zeroth entry point to the next tuple that can be reused. (Since there is only one tuple of length zero ever allocated, there is never a risk of reading a nonexistent zeroth element). In this way, the allocator can store some number of tuples of each size for reuse. To ensure that this doesn't use too much memory, there is a second constant PyTuple_MAXFREELIST that controls the maximum length of any of these linked lists within any bucket. There is then a secondary array of length PyTuple_MAXSAVESIZE that stores the length of the linked lists for tuples of each given length so that this upper limit isn't exceeded.
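This recycling can sometimes be observed from pure Python by comparing object identities. The following sketch leans on CPython implementation details and is in no way guaranteed to print True:

```python
# Build the tuples at runtime: a literal like (1, 2, 3) may be cached
# as a constant of the enclosing code object, which would keep it alive.
a = tuple([1, 2, 3])
addr = id(a)
del a                       # the 3-element tuple goes onto the free list
b = tuple(["x", "y", "z"])  # a new 3-element tuple may get the recycled memory
print(id(b) == addr)        # often True on CPython, but never guaranteed
```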
All in all, it's a very clever implementation!
Because in the course of normal operations Python will create and destroy a lot of small tuples, Python keeps an internal cache of small tuples for that purpose. This helps cut down on a lot of memory allocation and deallocation churn. For the same reason, small integers from -5 to 256 are interned (made into singletons).
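The small-integer cache is easy to observe, though again it is a CPython implementation detail rather than a language guarantee:

```python
# Construct the ints at runtime so constant folding doesn't interfere.
a = int("256")
b = int("256")
print(a is b)   # True: 256 is inside the small-int cache, so both
                # names refer to the same singleton object.

c = int("257")
d = int("257")
print(c is d)   # False: 257 is outside the cache, so two objects.
```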
The PyTuple_MAXSAVESIZE definition controls the maximum size of tuples that qualify for this optimization, and the PyTuple_MAXFREELIST definition controls how many of these tuples CPython keeps around in memory. When a tuple of length < PyTuple_MAXSAVESIZE is discarded, it is added to the free list if there is still room for one (in tupledealloc), to be re-used when Python creates a new small tuple (in PyTuple_New).
Python is being a little clever about how it stores these: for each tuple of length > 0, it'll reuse the first element of each cached tuple to chain up to PyTuple_MAXFREELIST tuples together into a linked list. So each element in the free_list array is a linked list of Python tuple objects, and all tuples in such a linked list are of the same size. The only exception is the empty tuple (length 0); only one of these is ever needed, as it is a singleton.
So, yes, for tuples over length PyTuple_MAXSAVESIZE, Python is guaranteed to have to allocate memory separately for a new C structure, and that could affect performance if you create and discard such tuples a lot.
If you want to understand Python C internals, I do recommend you study the Python C API; it'll make it easier to understand the various structures Python uses to define objects, functions and methods in C.
Over time I have done many things that required using the list's .append() method, as well as the numpy.append() function for NumPy arrays. I noticed that both become really slow when the arrays are big.
I need an array that grows dynamically, for sizes of about 1 million elements. I could implement this myself, just as std::vector is implemented in C++, by adding a buffer (reserve) length that is not accessible from the outside. But do I have to reinvent the wheel? I imagine this must be implemented somewhere already. So my question is: does such a thing already exist in Python?
What I mean is: is there an array type in Python that is capable of growing dynamically, with constant (amortized O(1)) append time most of the time?
The memory layout of numpy arrays is well described in its docs, and has been discussed here a lot. List memory layout has also been discussed, though usually just in contrast to numpy.
A numpy array has a fixed-size data buffer. "Growing" it requires creating a new array and copying data to it. np.concatenate does that in compiled code. np.append, as well as all the stack functions, use concatenate.
A list has, as I understand it, a contiguous data buffer that contains pointers to objects elsewhere in memory. Python maintains some free space in that buffer, so additions with list.append are relatively fast and easy. But when the free space fills up, it has to create a new buffer and copy the pointers. I can see where that could get expensive with large lists.
So a list has to store a pointer for each element, plus the element itself (e.g. a float) somewhere else in memory. In contrast, an array of floats stores the floats themselves as contiguous bytes in its buffer. (Object-dtype arrays are more like lists.)
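A quick way to see the difference (byte counts are illustrative and version-dependent):

```python
import sys
import numpy as np

floats = [float(i) for i in range(1000)]
arr = np.arange(1000, dtype=np.float64)

# getsizeof reports only the list's pointer buffer; each float object
# lives elsewhere and adds its own overhead on top of this figure.
print(sys.getsizeof(floats))  # pointer array only (~8 bytes per slot)
print(arr.nbytes)             # 8000: the float64 values themselves
```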
The recommended way to create an array iteratively is to build the list with append, and create the array once at the end. Repeated np.append or np.concatenate is relatively expensive.
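A sketch of both approaches; the list-then-convert version performs a single allocation and copy at the end:

```python
import numpy as np

n = 100_000

# Recommended: accumulate in a list, convert once at the end.
values = []
for i in range(n):
    values.append(i * 0.5)
arr = np.array(values)   # a single allocation and copy

# Discouraged: np.append copies the whole array on every call,
# so the loop below is O(n^2) overall.
# slow = np.empty(0)
# for i in range(n):
#     slow = np.append(slow, i * 0.5)
```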
deque was mentioned. I don't know much about how it stores its data. The docs say it can add elements at the start just as easily as at the end, but random access is slower than for a list. That implies that it stores data in some sort of linked list, so that finding the nth element requires traversing the links before it. So there is a trade-off between ease of growth and access speed.
Adding elements to the start of a list requires making a new array of pointers, with the new one(s) at the start. So adding and removing elements at the start of a regular list is much more expensive than doing so at the end.
Recommending software is outside of the core SO purpose. Others may make suggestions, but don't be surprised if this gets closed.
There are file formats like HDF5 that are designed for large data sets. They accommodate growth with features like "chunking". And there are all kinds of database packages as well.
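As a sketch of the chunked-growth idea using the h5py package (the file and dataset names here are just illustrative):

```python
import h5py
import numpy as np

with h5py.File("data.h5", "w") as f:
    # maxshape=(None,) makes the axis unlimited; chunked storage is what
    # lets HDF5 extend the dataset without rewriting existing data.
    dset = f.create_dataset("samples", shape=(0,), maxshape=(None,),
                            dtype="f8", chunks=True)
    for _ in range(5):
        batch = np.random.rand(1000)
        end = dset.shape[0]
        dset.resize(end + batch.shape[0], axis=0)  # grow in place
        dset[end:] = batch
```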
Both use an underlying array. Instead, you can use collections.deque, which is designed specifically for adding and removing elements at both ends with O(1) complexity.
If I am using CPython or Jython (Python 2.7), and I keep adding new elements to a list ([]), will there be memory-reallocation issues like those of Java's ArrayList? (Since a Java ArrayList requires contiguous memory, if the current pre-allocated space fills up, it needs to allocate a new, larger contiguous block and move the existing elements into it.)
http://grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/6-b14/java/util/ArrayList.java#ArrayList.ensureCapacity%28int%29
The basic story, at least for the main Python implementation (CPython), is that a list contains pointers to objects elsewhere in memory. The list is created with a certain amount of free space (e.g. room for 8 pointers). When that fills up, it allocates more memory, and so on. Whether it moves the pointers from one block of memory to another is a detail that most users ignore. In practice we just append/extend a list as needed and don't worry about memory use.
Why does creating a list from a list make it larger?
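A quick illustration of that linked question (exact byte counts differ across CPython versions):

```python
import sys

base = [1, 2, 3]
copy = list(base)
print(sys.getsizeof(base), sys.getsizeof(copy))
# The copy is often reported as larger: list(iterable) sizes its buffer
# from a length hint and over-allocates, while a literal is built with
# an exact-size buffer.
```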
I assume Jython uses the same approach, but you'd have to dig into its code to see how that translates to Java.
I mostly answer numpy questions. This is a numerical package that creates fixed sized multidimensional arrays. If a user needs to build such an array incrementally, we often recommend that they start with a list and append values. At the end they create the array. Appending to a list is much cheaper than rebuilding an array multiple times.
Internally, Python lists are arrays of pointers, as mentioned by hpaulj.
The next question, then, is how you can extend an array in C. As explained in that answer, this can be done using the realloc function.
This led me to look into the behavior of realloc, whose documentation mentions:
The function may move the memory block to a new location (whose address is returned by the function).
From this, my understanding is that the array is extended in place if contiguous memory is available; otherwise the memory block (containing the array of pointers, not the list object itself) is copied to a newly allocated memory block of greater size.
This is my understanding; corrections are welcome if I am wrong.
Just learning Python. Reading through the official tutorials. I ran across this:
While appends and pops from the end of a list are fast, doing inserts or pops from the beginning of a list is slow (because all of the other elements have to be shifted by one).
I would have guessed that a mature language like Python would have all sorts of optimizations, so why doesn't Python [seem to] use linked lists so that inserts can be fast?
Python lists use a contiguous array layout in memory so that indexing is fast (O(1)).
As Greg Hewgill has already pointed out, Python lists use contiguous blocks of memory to make indexing fast. You can use a deque if you want the performance characteristics of a linked list. But your initial premise seems flawed to me: indexed insertion into the middle of a (standard) linked list is also slow.
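A rough way to see that trade-off (timings are machine-dependent):

```python
import timeit

setup = """
from collections import deque
n = 1_000_000
lst = list(range(n))
dq = deque(lst)
"""

# O(1): the list jumps straight to the pointer at that offset.
t_list = timeit.timeit("lst[n // 2]", setup=setup, number=10_000)

# O(n): the deque walks block by block from the nearer end.
t_deque = timeit.timeit("dq[n // 2]", setup=setup, number=10_000)

print(f"list middle index:  {t_list:.4f}s")
print(f"deque middle index: {t_deque:.4f}s")
```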
What Python calls "lists" aren't actually linked lists; they're more like arrays. See the list entry from the Python glossary and also How do you make an array in Python? from the Python FAQ.
list is implemented as a dynamic array (what Java would call an ArrayList). If you want to insert frequently, you can use a deque (but note that traversal to the middle is expensive).
Alternatively, you can use a heap. It's all there if you take the time to look at the docs.
Python lists are implemented using a resizeable array of references to other objects. This provides O(1) lookup compared to O(n) lookup for a linked list implementation.
See How are lists implemented?
As you mentioned, this implementation makes insertions into the beginning or middle of a Python list slow because every element in the array to the right of the insertion point has to be shifted over one element. Also, sometimes the array will have to be resized to accommodate more elements. For inserting into a linked list, you'll still need O(n) time to find the location where you will insert, but the actual insertion itself will be O(1), since you only need to change the references in the nodes immediately before and after your insertion point (assuming a doubly-linked list).
So the decision to make Python lists use dynamic arrays rather than linked lists has nothing to do with the "maturity" of the language implementation. There are simply trade-offs between different data structures and the designers of Python decided that dynamic arrays were the best option overall. They may have assumed indexing a list is more common than inserting data into it, thus making dynamic arrays a better choice in this case.
See the following table in the Dynamic Array wikipedia article for a comparison of various data structure performance characteristics:
https://en.wikipedia.org/wiki/Dynamic_array#Performance