Python array time complexity?

What's the .append time complexity of array.array and np.array?
I see the time complexity for list, collections.deque, set, and dict in python_wiki, but I can't find the time complexity of array.array and np.array. Where can I find them?

So, per the link you provided (also a TL;DR): lists are internally "represented as an array" (link). Append is supposed to be O(1), with a note at the bottom saying:
"These operations rely on the "Amortized" part of "Amortized Worst Case". Individual actions may take surprisingly long, depending on the history of the container."
link
More details
The docs don't go into detail, but if you look at the source code you can see what's actually going on. Python arrays keep an internal buffer that allows quick resizing and is realloc'd as the array grows or shrinks.
array.append uses arraymodule.array_array_append, which calls arraymodule.ins, which in turn calls arraymodule.ins1, the meat and potatoes of the operation. Incidentally, array.extend uses this as well; it just supplies Py_SIZE(self) as the insertion index.
So if we read the comments in arraymodule.ins1, it starts off with:
Bypass realloc() when a previous overallocation is large enough
to accommodate the newsize. If the newsize is 16 smaller than the
current size, then proceed with the realloc() to shrink the array.
link
...
This over-allocates proportional to the array size, making room
for additional growth. The over-allocation is mild, but is
enough to give linear-time amortized behavior over a long
sequence of appends() in the presence of a poorly-performing
system realloc().
The growth pattern is: 0, 4, 8, 16, 25, 34, 46, 56, 67, 79, ...
Note, the pattern starts out the same as for lists but then
grows at a smaller rate so that larger arrays only overallocate
by about 1/16th -- this is done because arrays are presumed to be more
memory critical.
link
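If you want to observe that over-allocation from Python, one rough way is to watch sys.getsizeof jump as you append (a sketch, assuming array.__sizeof__ reports the allocated buffer, which recent CPython versions do; exact numbers vary by version and platform):

import sys
from array import array

a = array('i')
last = sys.getsizeof(a)
for i in range(100):
    a.append(i)
    size = sys.getsizeof(a)
    if size != last:
        # a realloc just happened: print the length that triggered it
        print(len(a), size)
        last = size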

To answer your question it is important to understand the array data structure. Since both array objects (array.array and numpy's ndarray) are backed by C arrays, they share a lot of the same characteristics.
Appending to a growable array is amortized O(1). If the underlying buffer still has spare capacity, writing the new item into the next free slot is trivial and costs O(1). When the buffer is full, however, the whole array has to be reallocated and copied over in memory with the new item added, and copying n elements makes that particular append O(n). Because the buffer is over-allocated geometrically, these expensive resizes become increasingly rare, which is what makes the overall cost amortized O(1). Note that numpy is different: an ndarray is fixed-size and does not over-allocate, so every np.append builds a brand-new array and is O(n) per call (a timing sketch follows the quoted example below).
An interesting example from this post:
To make this clearer, consider the case where the factor is 2 and
initial array size is 1. Then consider copy costs to grow the array
from size 1 to where it's large enough to hold 2^k+1 elements for any
k >= 0. This size is 2^(k+1). Total copy costs will include all the
copying to become that big in factor-of-2 steps:
1 + 2 + 4 + ... + 2^k = 2^(k+1) - 1 = 2n - 1
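To see the difference in practice between a growable array's append and numpy's np.append (which always allocates and copies a new array), here is a rough timing sketch; the exact numbers are machine-dependent, but the array.array times grow roughly linearly in n while the np.append times grow roughly quadratically:

import time
from array import array
import numpy as np

def build_array(n):
    out = array('l')
    for i in range(n):
        out.append(i)            # amortized O(1): occasional realloc + copy
    return out

def build_np(n):
    out = np.empty(0, dtype=np.int64)
    for i in range(n):
        out = np.append(out, i)  # allocates and copies a new array each call: O(n)
    return out

for n in (5000, 10000, 20000):
    t0 = time.perf_counter(); build_array(n); t1 = time.perf_counter()
    build_np(n); t2 = time.perf_counter()
    print(n, round(t1 - t0, 4), round(t2 - t1, 4))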

Related

How to circular shift numpy array without O(n) memory copying?

Short version:
How to do this without O(n) memory copying:
import numpy as np

start = 2                           # start points at index 2 of input (the element 1)
input = np.array([4, 5, 1, 2, 3])
output = np.array([1, 2, 3, 4, 5])  # output doesn't need to be memory contiguous, a view is just fine
Long version:
I'm trying to implement a fixed-length FIFO buffer based on numpy.ndarray for hardware simulation.
My goal is to achieve O(1) (or at least better than O(n)) enqueue/dequeue, and most importantly, to get the numpy array back in the correct order.
I ended up using a pointer start that points at the first element of the FIFO, so enqueue/dequeue is just a single assignment/indexing operation, which is fast.
But the problem is that when I want to use the whole FIFO as a numpy.ndarray and do some math (like np.dot(signal, fifo)), I cannot find a way to get the corresponding array without an O(n) operation.
It sounds like a wheel that has already been invented, so I searched online and tried a few things, but none of them suits my need:
input[1:] = input[:-1]; input[0] = new_val: naive, but the most efficient way so far, and still O(n)
collections.deque: good enqueue/dequeue performance, but it takes a long time to work with other numpy arrays (it seems numpy converts the deque to a numpy.ndarray first and then does the math, which takes a lot of time)
indexing: input[np.arange(start, start+size) % size]: this is advanced indexing, so it returns a new array, and creating the index array makes it even slower
np.r_[input[start:], input[:start]]: better than indexing when the buffer size is large, but still O(n)
np.roll(input, -start): O(n) and slower
np.concatenate([input[start:], input[:start]]): O(n), slower than the naive way but faster than the others
Since the original array is contiguous, I also tried the strides/shape trick to create a view of the original array, but that did not work either.
So I came here for help: does it have to be O(n)?
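For concreteness, here is a minimal sketch of the ring buffer the question describes (the class and names are mine, not the asker's); push is the O(1) part, and ordered is the O(n) reconstruction the question is trying to avoid:

import numpy as np

class RingBuffer:
    """Fixed-length FIFO backed by a numpy array (illustrative sketch)."""
    def __init__(self, size, dtype=float):
        self.buf = np.zeros(size, dtype=dtype)
        self.start = 0                       # index of the oldest element

    def push(self, value):
        # Overwrite the oldest slot and advance the pointer: O(1).
        self.buf[self.start] = value
        self.start = (self.start + 1) % len(self.buf)

    def ordered(self):
        # Materialize oldest-to-newest order; this copy is the O(n) step.
        return np.concatenate([self.buf[self.start:], self.buf[:self.start]])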

Generating very large 2D-array in Python?

I'd like to generate a very large 2D array (or, in other words, a matrix) using a list of lists. Each element should be a float.
So, just to give an example, let's assume we have the following code:
import numpy as np

N = 32000

def largeMat():
    m = []
    for i in range(N):
        l = list(np.ones(N))
        m.append(l)
        if i % 1000 == 0:
            print i
    return m

m = largeMat()
I have 12 GB of RAM, but as the code reaches the 10000th row of the matrix, my RAM is already full. Now, if I'm not wrong, each float is 64 bits (8 bytes) large, so the total occupied RAM should be:
32000 * 32000 * 8 bytes = 8,192,000,000 bytes ≈ 8192 MB
Why does python fill my whole RAM and even start to allocate into swap?
Python does not necessarily store list items in the most compact form, as lists require pointers to the next item, etc. This is a side effect of having a data type which allows deletes, inserts, etc. For a simple two-way linked list the usage would be two pointers plus the value; on a 64-bit machine that would be 24 octets per float item in the list. In practice the implementation is not that stupid, but there is still some overhead.
If you want a compact format, I'd suggest using a numpy.array, as it will take exactly as many bytes as you think it should (plus a small overhead).
Edit: Oops, not necessarily. The explanation above is wrong, but the suggestion is valid: numpy is the right tool, as numpy.array exists for exactly this reason. However, the problem is most probably something else. My computer will run the procedure even though it takes a lot of time (approx. 2 minutes). Also, quitting Python after this takes a long time (actually, it hung). Memory use of the Python process (as reported by top) peaks at 10,000 MB and then falls to slightly below 9,000 MB. Probably the allocated numpy arrays are not garbage collected very quickly.
But about the raw data size in my machine:
>>> import sys
>>> l = [0.0] * 1000000
>>> sys.getsizeof(l)
8000072
So there seems to be a fixed overhead of 72 octets per list.
>>> listoflists = [ [1.0*i] * 1000000 for i in range(1000)]
>>> sys.getsizeof(listoflists)
9032
>>> sum([sys.getsizeof(l) for l in listoflists])
8000072000
So, this is as expected.
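One caveat when reading those numbers: sys.getsizeof counts only the list object and its array of pointers, not the float objects those pointers refer to, so a list of distinct floats really costs quite a bit more than getsizeof reports. A minimal sketch (sizes are typical for a 64-bit CPython build):

import sys

l_shared = [0.0] * 1000000                      # one float object, referenced a million times
l_unique = [float(i) for i in range(1000000)]   # a million distinct float objects

print(sys.getsizeof(l_shared))   # ~8 MB: just the pointer array
print(sys.getsizeof(l_unique))   # similar: still just the pointer array
print(sys.getsizeof(0.0))        # ~24 bytes for each distinct float object
# so l_unique really occupies roughly 8 MB of pointers plus ~24 MB of float objects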
On the other hand, reserving and filling the long list of lists takes a while (about 10 s). Also, quitting python takes a while. The same for numpy:
>>> a = numpy.empty((1000,1000000))
>>> a[:] = 1.0
>>> a.nbytes
8000000000
(The byte count is not entirely reliable, as the object itself takes some space for its metadata, etc. There has to be the pointer to the start of the memory block, data type, array shape, etc.)
This takes much less time. The creation of the array is almost instantaneous, inserting the numbers takes maybe a second or two. Allocating and freeing a lot of small memory chunks is time consuming and while it does not cause fragmentation problems in a 64-bit machine, it is still much easier to allocate a big chunk of data.
If you have a lot of data which can be put into an array, you need a good reason for not using numpy.
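For the example in the question, the numpy equivalent of largeMat would be something like the sketch below: one contiguous block of N*N float64 values with essentially no per-element Python object overhead (about 8.2 GB for N = 32000, so it should fit in 12 GB of RAM):

import numpy as np

N = 32000
m = np.ones((N, N), dtype=np.float64)
print(m.nbytes)   # 8192000000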

Repeatedly appending to a large list (Python 2.6.6)

I have a project where I am reading ASCII values from a microcontroller through a serial port (it looks like this: AA FF BA 11 43 CF, etc.).
The input is coming in quickly (38 two-character sets per second).
I'm taking this input and appending it to a running list of all measurements.
After about 5 hours, my list has grown to ~ 855000 entries.
I'm given to understand that the larger a list becomes, the slower list operations become. My intent is to have this test run for 24 hours, which should yield around 3M results.
Is there a more efficient, faster way to append to a list than list.append()?
Thanks Everyone.
I'm given to understand that the larger a list becomes, the slower list operations become.
That's not true in general. Lists in Python are, despite the name, not linked lists but arrays. There are operations that are O(n) on arrays (copying and searching, for instance), but you don't seem to use any of these. As a rule of thumb: If it's widely used and idiomatic, some smart people went and chose a smart way to do it. list.append is a widely-used builtin (and the underlying C function is also used in other places, e.g. list comprehensions). If there was a faster way, it would already be in use.
As you will see when you inspect the source code, lists over-allocate, i.e. when they are resized, they allocate more than is needed for one item so that the next n items can be appended without the need for another resize (which is O(n)). The growth isn't constant; it is proportional to the list size, so resizing becomes rarer as the list grows larger. Here's the snippet from listobject.c:list_resize that determines the overallocation:
/* This over-allocates proportional to the list size, making room
* for additional growth. The over-allocation is mild, but is
* enough to give linear-time amortized behavior over a long
* sequence of appends() in the presence of a poorly-performing
* system realloc().
* The growth pattern is: 0, 4, 8, 16, 25, 35, 46, 58, 72, 88, ...
*/
new_allocated = (newsize >> 3) + (newsize < 9 ? 3 : 6);
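If you want to see that rule without reading C, here is a small Python sketch that mirrors it (the C code adds newsize to this increment after an overflow check) and reproduces the documented growth pattern:

def new_allocated(newsize):
    # Python translation of the overallocation rule quoted above.
    extra = (newsize >> 3) + (3 if newsize < 9 else 6)
    return newsize + extra if newsize else 0

allocated, pattern = 0, [0]
for size in range(1, 90):       # simulate appending items one at a time
    if size > allocated:        # the list is full, so a resize happens
        allocated = new_allocated(size)
        pattern.append(allocated)
print(pattern[:10])             # [0, 4, 8, 16, 25, 35, 46, 58, 72, 88]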
As Mark Ransom points out, older Python versions (<2.7, 3.0) have a bug that makes the GC sabotage this. If you have such a Python version, you may want to disable the gc. If you can't because you generate too much garbage (that slips refcounting), you're out of luck though.
One thing you might want to consider is writing your data to a file as it's collected. I don't know (or really care) if it will affect performance, but it will help ensure that you don't lose all your data if power blips. Once you've got all the data, you can suck it out of the file and jam it in a list or an array or a numpy matrix or whatever for processing.
Appending to a python list has a constant cost. It is not affected by the number of items in the list (in theory). In practice appending to a list will get slower once you run out of memory and the system starts swapping.
http://wiki.python.org/moin/TimeComplexity
It would be helpful to understand why you actually append things into a list. What are you planning to do with the items. If you don't need all of them you could build a ring buffer, if you don't need to do computation you could write the list to a file, etc.
First of all, 38 two-character sets per second, 1 stop bit, 8 data bits, and no parity, is only 760 baud, not fast at all.
But anyway, my suggestion, if you're worried about having overly large lists or don't want to use one huge list, is just to store a list on disk once it reaches a certain size and start a new list, repeating until you've gotten all the data, then combining all the lists into one once you're done receiving the data.
Though you may skip the sublists completely and just go with nmichaels' suggestion, writing the data to a file as you get it and using a small circular buffer to hold the received data that has not yet been written.
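A sketch of that combination (the read function is a placeholder, not from the question): keep the not-yet-written readings in a small buffer and flush them to disk in batches, so a power blip can only lose the current batch:

from collections import deque

def read_pair():
    # Placeholder for whatever reads one two-character value ("AA", "FF", ...)
    # from the serial port.
    raise NotImplementedError

pending = deque()
with open("measurements.log", "a") as log:
    while True:                       # runs for the duration of the test
        pending.append(read_pair())
        if len(pending) >= 64:        # arbitrary flush threshold
            log.write("\n".join(pending) + "\n")
            pending.clear()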
It might be faster to use numpy if you know how long the array is going to be and you can convert your hex codes to ints:
import numpy
a = numpy.zeros(3000000, numpy.int32)
for i in range(3000000):
    a[i] = int(scanHexFromSerial(), 16)
This will leave you with an array of integers (which you could convert back to hex with hex()), but depending on your application maybe that will work just as well for you.

How is set() implemented?

I've seen people say that set objects in python have O(1) membership-checking. How are they implemented internally to allow this? What sort of data structure does it use? What other implications does that implementation have?
Every answer here was really enlightening, but I can only accept one, so I'll go with the closest answer to my original question. Thanks all for the info!
According to this thread:
Indeed, CPython's sets are implemented as something like dictionaries
with dummy values (the keys being the members of the set), with some
optimization(s) that exploit this lack of values
So basically a set uses a hashtable as its underlying data structure. This explains the O(1) membership checking, since looking up an item in a hashtable is an O(1) operation, on average.
If you are so inclined you can even browse the CPython source code for set which, according to Achim Domma, was originally mostly a cut-and-paste from the dict implementation.
Note: Nowadays, set and dict's implementations have diverged significantly, so the precise behaviors (e.g. arbitrary order vs. insertion order) and performance in various use cases differs; they're still implemented in terms of hashtables, so average case lookup and insertion remains O(1), but set is no longer just "dict, but with dummy/omitted keys".
When people say sets have O(1) membership-checking, they are talking about the average case. In the worst case (when all hashed values collide) membership-checking is O(n). See the Python wiki on time complexity.
The Wikipedia article says the best case time complexity for a hash table that does not resize is O(1 + k/n). This result does not directly apply to Python sets since Python sets use a hash table that resizes.
A little further on the Wikipedia article says that for the average case, and assuming a simple uniform hashing function, the time complexity is O(1/(1-k/n)), where k/n can be bounded by a constant c<1.
Big-O refers only to asymptotic behavior as n → ∞.
Since k/n can be bounded by a constant, c<1, independent of n,
O(1/(1-k/n)) is no bigger than O(1/(1-c)) which is equivalent to O(constant) = O(1).
So assuming uniform simple hashing, on average, membership-checking for Python sets is O(1).
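A rough way to see the average-case behavior in practice (an illustrative sketch; absolute timings depend on the machine): membership tests against a list slow down as it grows, while membership tests against a set stay roughly flat.

import timeit

for n in (1000, 10000, 100000):
    setup = "data = list(range(%d)); s = set(data); probe = %d" % (n, n - 1)
    t_list = timeit.timeit("probe in data", setup=setup, number=1000)
    t_set = timeit.timeit("probe in s", setup=setup, number=1000)
    print("n=%-7d list: %.4fs  set: %.6fs" % (n, t_list, t_set))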
I think it's a common mistake: set lookups (or hash table lookups, for that matter) are not O(1).
From Wikipedia:
In the simplest model, the hash function is completely unspecified and the table does not resize. For the best possible choice of hash function, a table of size n with open addressing has no collisions and holds up to n elements, with a single comparison for successful lookup, and a table of size n with chaining and k keys has the minimum max(0, k-n) collisions and O(1 + k/n) comparisons for lookup. For the worst choice of hash function, every insertion causes a collision, and hash tables degenerate to linear search, with Ω(k) amortized comparisons per insertion and up to k comparisons for a successful lookup.
Related: Is a Java hashmap really O(1)?
We all have easy access to the source, where the comment preceding set_lookkey() says:
/* set object implementation
Written and maintained by Raymond D. Hettinger <python#rcn.com>
Derived from Lib/sets.py and Objects/dictobject.c.
The basic lookup function used by all operations.
This is based on Algorithm D from Knuth Vol. 3, Sec. 6.4.
The initial probe index is computed as hash mod the table size.
Subsequent probe indices are computed as explained in Objects/dictobject.c.
To improve cache locality, each probe inspects a series of consecutive
nearby entries before moving on to probes elsewhere in memory. This leaves
us with a hybrid of linear probing and open addressing. The linear probing
reduces the cost of hash collisions because consecutive memory accesses
tend to be much cheaper than scattered probes. After LINEAR_PROBES steps,
we then use open addressing with the upper bits from the hash value. This
helps break-up long chains of collisions.
All arithmetic on hash should ignore overflow.
Unlike the dictionary implementation, the lookkey function can return
NULL if the rich comparison returns an error.
*/
...
#ifndef LINEAR_PROBES
#define LINEAR_PROBES 9
#endif
/* This must be >= 1 */
#define PERTURB_SHIFT 5
static setentry *
set_lookkey(PySetObject *so, PyObject *key, Py_hash_t hash)
{
...
Sets in Python use a hash table internally. Let us first talk about hash tables.
Suppose there are some elements that you want to store in a hash table, and the hash table has 31 places where you can do so. Let the elements be: 2.83, 8.23, 9.38, 10.23, 25.58, 0.42, 5.37, 28.10, 32.14, 7.31. When you want to use a hash table, you first determine the indices in the hash table where these elements will be stored. The modulus function is a popular way of determining these indices, so let us say we take one element at a time, multiply it by 100, and apply modulo 31. It is important that each such operation on an element results in a unique number, as an entry in a hash table can store only one element unless chaining is allowed. In this way, each element is stored at a location governed by the index obtained through the modulo operation. Now if you want to search for an element in a set that stores its elements using this hash table, you obtain the element in O(1) time, because the index of the element is computed using the modulo operation in constant time.
To expound on the modulo operation, let me also write some code:
piles = [2.83, 8.23, 9.38, 10.23, 25.58, 0.42, 5.37, 28.10, 32.14, 7.31]

def hash_function(x):
    return int(x*100 % 31)

[hash_function(pile) for pile in piles]
Output: [4, 17, 8, 0, 16, 11, 10, 20, 21, 18]
To emphasize the difference between sets and dicts a little more, here is an excerpt from the setobject.c comments, which clarifies the main difference between sets and dicts.
Use cases for sets differ considerably from dictionaries where looked-up
keys are more likely to be present. In contrast, sets are primarily
about membership testing where the presence of an element is not known in
advance. Accordingly, the set implementation needs to optimize for both
the found and not-found case.
source on github

Why do dicts of defaultdict(int)'s use so much memory? (and other simple python performance questions)

I do understand that querying a non-existent key in a defaultdict the way I do will add items to the defaultdict. That is why it is fair to compare my 2nd code snippet to my first one in terms of performance.
import numpy as num
from collections import defaultdict

topKeys = range(16384)
keys = range(8192)
table = dict((k, defaultdict(int)) for k in topKeys)
dat = num.zeros((16384, 8192), dtype="int32")

print "looping begins"
# How much memory should this use? I think it shouldn't use more than a few
# times the memory required to hold (16384*8192) int32's (512 MB), but
# it uses 11 GB!
for k in topKeys:
    for j in keys:
        dat[k,j] = table[k][j]
print "done"
What is going on here? Furthermore, this similar script takes eons to run compared to the first one, and also uses an absurd quantity of memory.
topKeys = range(16384)
keys = range(8192)
table = [(j,0) for k in topKeys for j in keys]
I guess python ints might be 64 bit ints, which would account for some of this, but do these relatively natural and simple constructions really produce such a massive overhead?
I guess these scripts show that they do, so my question is: what exactly is causing the high memory usage in the first script and the long runtime and high memory usage of the second script and is there any way to avoid these costs?
Edit:
Python 2.6.4 on 64 bit machine.
Edit 2: I can see why, to a first approximation, my table should take up 3 GB
16384*8192*(12+12) bytes
and 6GB with a defaultdict load factor that forces it to reserve double the space.
Then inefficiencies in memory allocation eat up another factor of 2.
So here are my remaining questions:
Is there a way for me to tell it to use 32 bit ints somehow?
And why does my second code snippet take FOREVER to run compared to the first one? The first one takes about a minute and I killed the second one after 80 minutes.
Python ints are internally represented as C longs (it's actually a bit more complicated than that), but that's not really the root of your problem.
The biggest overhead is your usage of dicts. (defaultdicts and dicts are about the same in this description). dicts are implemented using hash tables, which is nice because it gives quick lookup of pretty general keys. (It's not so necessary when you only need to look up sequential numerical keys, since they can be laid out in an easy way to get to them.)
A dict can have many more slots than it has items. Let's say you have a dict with 3x as many slots as items. Each of these slots needs room for a pointer to a key and a pointer serving as the end of a linked list. That's 6x as many points as numbers, plus all the pointers to the items you're interested in. Consider that each of these pointers is 8 bytes on your system and that you have 16384 defaultdicts in this situation. As a rough, handwavey look at this, 16384 occurrences * (8192 items/occurance) * 7 (pointers/item) * 8 (bytes/pointer) = 7 GB. This is before I've gotten to the actual numbers you're storing (each unique number of which is itself a Python dict), the outer dict, that numpy array, or the stuff Python's keeping track of to try to optimize some.
Your overhead sounds a little higher than I suspect and I would be interested in knowing whether that 11GB was for a whole process or whether you calculated it for just table. In any event, I do expect the size of this dict-of-defaultdicts data structure to be orders of magnitude bigger than the numpy array representation.
As to "is there any way to avoid these costs?" the answer is "use numpy for storing large, fixed-size contiguous numerical arrays, not dicts!" You'll have to be more specific and concrete about why you found such a structure necessary for better advice about what the best solution is.
Well, look at what your code is actually doing:
topKeys = range(16384)
table = dict((k,defaultdict(int)) for k in topKeys)
This creates a dict holding 16384 defaultdict(int)'s. A dict has a certain amount of overhead: the dict object itself is between 60 and 120 bytes (depending on the size of pointers and ssize_t's in your build). That's just the object itself; unless the dict holds only a couple of items, the data is a separate block of memory, between 12 and 24 bytes per entry, and it's always between 1/2 and 2/3rds filled. And defaultdicts are 4 to 8 bytes bigger because they have this extra thing to store. And ints are 12 bytes each, and although they're reused where possible, that snippet won't reuse most of them. So, realistically, in a 32-bit build, that snippet will take up 60 + (16384*12) * 1.8 (fill factor) bytes for the table dict, 16384 * 64 bytes for the defaultdicts it stores as values, and 16384 * 12 bytes for the integers. So that's just over a megabyte and a half without storing anything in your defaultdicts. And that's in a 32-bit build; a 64-bit build would be twice that size.
Then you create a numpy array, which is actually pretty conservative with memory:
dat = num.zeros((16384,8192), dtype="int32")
This will have some overhead for the array itself, the usual Python object overhead plus the dimensions and type of the array and such, but it wouldn't be much more than 100 bytes, and only for the one array. It does store 16384*8192 int32's, though, which accounts for your 512 MB.
And then you have this rather peculiar way of filling this numpy array:
for k in topKeys:
    for j in keys:
        dat[k,j] = table[k][j]
The two loops themselves don't use much memory, and they re-use it each iteration. However, table[k][j] creates a new Python integer for each value you request, and stores it in the defaultdict. The integer created is always 0, and it so happens that that always gets reused, but storing the reference to it still uses up space in the defaultdict: the aforementioned 12 bytes per entry, times the fill factor (between 1.66 and 2.) That lands you close to 3 GB of actual data right there, and 6 GB in a 64-bit build.
On top of that the defaultdicts, because you keep adding data, have to keep growing, which means they have to keep reallocating. Because of Python's malloc frontend (obmalloc) and how it allocates smaller objects in blocks of its own, and how process memory works on most operating systems, this means your process will allocate more and not be able to free it; it won't actually use all of the 11 GB, and Python will re-use the available memory in between the large blocks for the defaultdicts, but the total mapped address space will be that 11 GB.
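The per-object figures in these answers can be spot-checked with sys.getsizeof; a rough sketch (exact numbers depend on the Python version, the build, and pointer size):

import sys
from collections import defaultdict

print(sys.getsizeof({}))                # an empty dict object
print(sys.getsizeof(defaultdict(int)))  # usually a bit bigger (it stores default_factory)
print(sys.getsizeof(0))                 # one small int object

d = {}
for i in range(8192):
    d[i] = 0
print(sys.getsizeof(d))                 # the hash table grows with the fill factor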
Mike Graham gives a good explanation of why dictionaries use more memory, but I thought that I'd explain why your table dict of defaultdicts starts to take up so much memory.
The way that the defaultdict (DD) is set up right now, whenever you retrieve an element that isn't in the DD, you get the default value for the DD (0 in your case), but the DD also now stores that key, which previously wasn't in the DD, with the default value of 0. I personally don't like this, but that's how it goes. However, it means that for every iteration of the inner loop, new memory is being allocated, which is why it is taking forever. If you change the lines
for k in topKeys:
    for j in keys:
        dat[k,j] = table[k][j]
to
for k in topKeys:
    for j in keys:
        if j in table[k]:
            dat[k,j] = table[k][j]
        else:
            dat[k,j] = 0
then default values aren't being assigned to keys in the DDs, and so the memory stays around 540 MB for me, which is mostly just the memory allocated for dat. DDs are decent for sparse matrices, though you probably should just use the sparse matrices in SciPy if that's what you want.
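As a sketch of that last suggestion (assuming SciPy is installed), a dictionary-of-keys sparse matrix stores only the entries you actually set, which covers the dict-of-defaultdicts use case:

import numpy as np
from scipy import sparse

mat = sparse.dok_matrix((16384, 8192), dtype=np.int32)
mat[5, 7] = 3                    # dict-like O(1) insertion
row = mat.getrow(5).toarray()    # materialize a single row densely
dense = mat.toarray()            # or the whole dense array, if it fits in memory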
