I need to create a list with n items that are all equal to 0. I used this method:
list = [0] * n
Is the time complexity O(n) or O(1)?
If it is O(n), is there a way to achieve this list with O(1) complexity?
One way to allocate a large object in O(1) time can be (depending on your underlying operating system) mmap.mmap, see https://docs.python.org/2/library/mmap.html . However, it's not a list with all its flexibility -- it behaves rather like a mutable array of characters.
Sometimes that can still be helpful. In recent versions of Python 3, you can put a memoryview on the result of mmap, and e.g. call .cast('L') to obtain a view of the same memory as a sequence of integers (both operations are O(1)). Still not a list -- you don't get a list's rich flexibility and panoply of methods -- but more likely to be helpful...
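For example, a minimal sketch of that approach (the itemsize of 'L' is platform-dependent, so treat the element count as an assumption for a typical 64-bit build):

import mmap

n = 1_000_000
buf = mmap.mmap(-1, n * 8)        # anonymous, zero-filled mapping
view = memoryview(buf).cast('L')  # reinterpret the bytes as unsigned longs: O(1)
print(view[0])                    # 0 -- the pages come back zeroed
view[0] = 42                      # mutable, like a list element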
Added: this does, however, remind me of a time many years ago when I was working to port a big CAD application to a then-brand-new AIX version and a Power-based IBM workstation. malloc was essentially instantaneous -- the kernel was just doing some magic behind the scenes with the memory-mapping hardware in the CPU.
However... actually accessing an item far into the malloced area (say N bytes from its start) for the first time could take O(N) as all the needed pages actually got allocated -- and could even cause a crash if the system couldn't actually find or free that many pages within your process's address space (IOW, that malloc was "overcommitting" memory in the first place!).
As you can imagine, this wreaked havoc on an application originally developed for more traditional implementations of malloc: the app checked malloc's return value -- if 0, it took appropriate remedial action (worst case, just letting the user know their desired operation could not be performed due to lack of memory); otherwise, it assumed the memory it had just allocated was actually there... and sometimes ended up crashing in "random" spots down the road:-).
The solution was only moderately hard -- wrap malloc in a function that, after allocating, actually touched the right subset of the "nominally allocated" pages to ensure they were all really there (it was hard to catch the low-level errors that could come at such times, but eventually I managed to find a way, with a little assembly-language coding as I recall). Of course, this made the wrapped malloc O(N) all over again... there appears to be no such thing as a free lunch, who'd have thought?!-)
I'm not sure how mmap.mmap(-1, lotsofbytes) actually behaves from this point of view on all platforms of your interest -- nothing for it but to actually experiment, I guess!-) (If it does turn out to offer a free lunch, at least on popular platforms, now that would be worth celebrating:-).
Creating a (dense) list of n zeros can't be done in O(1). The only option you have is to look at some sparse or lazy data structures. Depending on what you want to use this for, one option might be itertools.repeat().
a = itertools.repeat(0, n)
will be executed in O(1). Iterating over it will, of course, still be O(n).
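A quick way to see the O(1) creation (a sketch; the exact size varies by Python version):

import itertools
import sys

n = 10**9
a = itertools.repeat(0, n)  # returns immediately, regardless of n
print(sys.getsizeof(a))     # a few dozen bytes -- no billion-element array
print(next(a), next(a))     # 0 0 -- values are produced lazily on demand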
You can roll your own generator, too:
def foo(num, times):
    for a in range(0, times):
        yield num
After profiling on my machine:
import timeit

code = """
def foo(num, times):
    for a in range(0, times):
        yield num
"""
print(timeit.timeit("foo(0,1)", setup=code))
print(timeit.timeit("foo(0,2**8)", setup=code))
# 0.310677051544
# 0.32208776474
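Note that those two timings are nearly identical because calling a generator function only creates the generator object in O(1); the loop body does not run until the generator is iterated. A quick way to see both halves (a sketch; the numbers are machine-dependent):

import timeit

code = """
def foo(num, times):
    for a in range(0, times):
        yield num
"""
# Creating the generator: roughly constant, regardless of `times`.
print(timeit.timeit("foo(0, 2**20)", setup=code))
# Consuming it: grows with `times`.
print(timeit.timeit("list(foo(0, 2**8))", setup=code, number=1000))
print(timeit.timeit("list(foo(0, 2**12))", setup=code, number=1000))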
Related
I am writing a blog post on the Python list.clear() method, where I also want to mention the time and space complexity of the underlying algorithm. I expected the time complexity to be O(N) -- iterate over the elements and free the memory -- but I found an article which says it is actually an O(1) operation. Then I searched the source code of the method in the CPython implementation and found a function which I believe is the actual internal implementation of list.clear(); however, I am not really sure it is. Here's the source code of the method:
static int
_list_clear(PyListObject *a)
{
    Py_ssize_t i;
    PyObject **item = a->ob_item;
    if (item != NULL) {
        /* Because XDECREF can recursively invoke operations on
           this list, we make it empty first. */
        i = Py_SIZE(a);
        Py_SIZE(a) = 0;
        a->ob_item = NULL;
        a->allocated = 0;
        while (--i >= 0) {
            Py_XDECREF(item[i]);
        }
        PyMem_FREE(item);
    }
    /* Never fails; the return value can be ignored.
       Note that there is no guarantee that the list is actually empty
       at this point, because XDECREF may have populated it again! */
    return 0;
}
I could be wrong but it does look like O(N) to me. Also, I found a similar question here, but there's no clear answer there. Just want to confirm the actual time and space complexity of list.clear(), and maybe a little explanation supporting the answer. Any help appreciated. Thanks.
As you correctly noticed, the CPython implementation of list.clear is O(n). The code iterates over the elements in order to decrease the reference count of each one, without a way to avoid it. There is no doubt that it is an O(n) operation and, given a large enough list, you can measure the time spent in clear() as a function of list size:
import time

for size in 1_000_000, 10_000_000, 100_000_000, 1_000_000_000:
    l = [None] * size
    t0 = time.time()
    l.clear()
    t1 = time.time()
    print(size, t1 - t0)
The output shows linear complexity; on my system with Python 3.7 it prints the following:
1000000 0.0023756027221679688
10000000 0.02452826499938965
100000000 0.23625731468200684
1000000000 2.31496524810791
The time per element is of course tiny because the loop is coded in C and each iteration does very little work. But, as the above measurement shows, even a minuscule per-element factor eventually adds up. A small per-element constant is not a reason to ignore the cost of an operation, or the same would apply to the loop that shifts the list elements in l.insert(0, ...), which is also very efficient -- and yet few would claim insertion at the beginning to be O(1). (And clear potentially does more work, because a decref will run an arbitrary chain of destructors for an object whose reference count actually reaches zero.)
On a philosophical level, one could argue that the costs of memory management should be ignored when assessing complexity, because otherwise it would be impossible to analyze anything with certainty, as any operation could trigger a GC. This argument has merit; GC does come occasionally and unpredictably, and its cost can be considered amortized across all allocations. In a similar vein, complexity analysis tends to ignore the complexity of malloc because the parameters it depends on (like memory fragmentation) are typically not directly related to allocation size or even to the number of already allocated blocks. However, in the case of list.clear there is only one allocated block, no GC is triggered, and the code still visits each and every list element. Even with the assumption of O(1) malloc and amortized O(1) GC, list.clear still takes time proportional to the number of elements in the list.
The article linked from the question is about Python the language and doesn't mention a particular implementation. Python implementations that don't use reference counting, such as Jython or PyPy, are likely to have true O(1) list.clear, and for them the claim from the article would be entirely correct. So, when explaining the Python list on a conceptual level, it is not wrong to say that clearing the list is O(1) - after all, all the object references are in a contiguous array, and you free it only once. This is the point your blog post probably should make, and that is what the linked article is trying to say. Taking the cost of reference counting into account too early might confuse your readers and give them completely wrong ideas about Python's lists (e.g. they could imagine that they are implemented as linked lists).
Finally, at some point one must accept that memory management strategy does change complexity of some operations. For example, destroying a linked list in C++ is O(n) from the perspective of the caller; discarding it in Java or Go would be O(1). And not in the trivial sense of a garbage-collected language just postponing the same work for later - it is quite possible that a moving collector will only traverse reachable objects and will indeed never visit the elements of the discarded linked list. Reference counting makes discarding large containers algorithmically similar to manual collection, and GC can remove that. While CPython's list.clear has to touch every element to avoid a memory leak, it is quite possible that PyPy's garbage collector never needs to do anything of the sort, and thus has a true O(1) list.clear.
It's O(1) neglecting memory management. It's not quite right to say it's O(N) accounting for memory management, because accounting for memory management is complicated.
Most of the time, for most purposes, we treat the costs of memory management separately from the costs of the operations that triggered it. Otherwise, just about everything you could possibly do becomes O(who even knows), because almost any operation could trigger a garbage collection pass or an expensive destructor or something. Heck, even in languages like C with "manual" memory management, there's no guarantee that any particular malloc or free call will be fast.
There's an argument to be made that refcounting operations should be treated differently. After all, list.clear explicitly performs a number of Py_XDECREF operations equal to the list's length, and even if no objects are deallocated or finalized as a result, the refcounting itself will necessarily take time proportional to the length of the list.
If you count the Py_XDECREF operations list.clear performs explicitly, but ignore any destructors or other code that might be triggered by the refcounting operations, and you assume PyMem_FREE is constant time, then list.clear is O(N), where N is the original length of the list. If you discount all memory management overhead, including the explicit Py_XDECREF operations, list.clear is O(1). If you count all memory management costs, then the runtime of list.clear cannot be asymptotically bounded by any function of the list's length.
As the other answers have noted, it takes O(n) time to clear a list of length n. But I think there is an additional point to be made about amortized complexity here.
If you start with an empty list, and do N append or clear operations in any order, then the total running time across all of those operations is always O(N), giving an average per operation of O(1), however long the list gets in the process, and however many of those operations are clear.
Like clear, the worst case for append is also O(n) time where n is the length of the list. That's because when the capacity of the underlying array needs to be increased, we have to allocate a new array and copy everything across. But the cost of copying each element can be "charged" to one of the append operations which got the list to a length where the array needs to be resized, in such a way that N append operations starting from an empty list always take O(N) time.
Likewise, the cost of decrementing an element's refcount in the clear method can be "charged" to the append operation which inserted that element in the first place, because each element can only get cleared once. The conclusion is that if you are using a list as an internal data structure in your algorithm, and your algorithm repeatedly clears that list inside a loop, then for the purpose of analysing your algorithm's time complexity you should count clear on that list as an O(1) operation, just as you'd count append as an O(1) operation in the same circumstances.
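A rough empirical check of that claim (a sketch with arbitrary sizes; it shows the trend rather than proving it): total time for N appends stays roughly linear in N even when the list is cleared repeatedly along the way.

import time

def appends_with_clears(total, clear_every):
    # `total` appends, clearing the list every `clear_every` appends.
    lst = []
    for i in range(total):
        lst.append(i)
        if i % clear_every == clear_every - 1:
            lst.clear()

for total in (10**6, 2 * 10**6, 4 * 10**6):
    t0 = time.perf_counter()
    appends_with_clears(total, clear_every=1000)
    print(total, time.perf_counter() - t0)  # should scale roughly linearly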
A quick time check indicates that it is O(n).
Let's execute the following and create the lists beforehand to avoid overhead:
import time
import random
list_1000000 = [random.randint(0,10) for i in range(1000000)]
list_10000000 = [random.randint(0,10) for i in range(10000000)]
list_100000000 = [random.randint(0,10) for i in range(100000000)]
Now check the time it takes to clear these three lists of different sizes as follows:
start = time.time()
list.clear(my_list)  # my_list is each of the lists above in turn
end = time.time()
print(end - start)
Results:
list.clear(list_1000000) takes 0.015
list.clear(list_10000000) takes 0.074
list.clear(list_100000000) takes 0.64
A more robust time measurement is needed, since these numbers can deviate each time the code is run, but the results indicate that execution time grows roughly linearly with input size. Hence we can conclude an O(n) complexity.
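For instance, a slightly more careful variant uses timeit with a fresh list per run (a sketch; timings are machine-dependent):

import timeit

for size in (10**6, 10**7, 10**8):
    # number=1 because clear() empties the list; a second call would be ~free.
    t = timeit.timeit("my_list.clear()",
                      setup=f"my_list = [0] * {size}",
                      number=1)
    print(size, t)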
I have been using .pop() and .append() extensively for Leetcode-style programming problems, especially in cases where you have to accumulate palindromes, subsets, permutations, etc.
Would I get a substantial performance gain from migrating to using a fixed size list instead? My concern is that internally the python list reallocates to a smaller internal array when I execute a bunch of pops, and then has to "allocate up" again when I append.
I know that the amortized time complexity of append and pop is O(1), but I want to get better performance if I can.
Yes.
Python (at least the CPython implementation) uses magic under the hood to make lists as efficient as possible. According to this blog post (2011), calls to append and pop will dynamically allocate and deallocate memory in chunks (overallocating where necessary) for efficiency. The list will only deallocate memory if it shrinks below the chunk size. So, for most cases if you are doing a lot of appends and pops, no memory allocation/deallocation will be performed.
Basically the idea with these high-level languages is that you should be able to use the data structure most suited to your use case, and the interpreter will ensure that you don't have to worry about the background workings (e.g. avoid micro-optimisation and instead focus on the efficiency of the algorithms in general). If you're that worried about performance, I'd suggest using a language where you have more control over the memory, like C/C++ or Rust.
Python guarantees amortized O(1) complexity for append and pop (at the end) as you noted, so it sounds like a list is perfectly suited to your case. If you wanted to use it like a queue, with operations like list.pop(0) or list.insert(0, obj), which are slower, then you could look into a dedicated queue data structure instead, as sketched below.
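For example, a minimal sketch with collections.deque, which gives O(1) appends and pops at both ends:

from collections import deque

q = deque()
q.append(1)         # enqueue on the right: O(1)
q.append(2)
print(q.popleft())  # dequeue from the left: O(1); list.pop(0) would be O(n)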
I have implemented the Sieve of Atkin and it works great up to primes nearing 100,000,000 or so. Beyond that, it breaks down because of memory problems.
In the algorithm, I want to replace the memory based array with a hard drive based array. Python's "wb" file functions and Seek functions may do the trick. Before I go off inventing new wheels, can anyone offer advice? Two issues appear at the outset:
Is there a way to "chunk" the Sieve of Atkin to work on a segment in memory, and
is there a way to suspend the activity and come back to it later -- suggesting I could serialize the memory variables and restore them?
Why am I doing this? An old geezer looking for entertainment and to keep the noodle working.
Implementing the SoA in Python sounds fun, but note it will probably be slower than the SoE in practice. For some good monolithic SoE implementations, see RWH's StackOverflow post. These can give you some idea of the speed and memory use of very basic implementations. The numpy version will sieve to over 10,000M on my laptop.
What you really want is a segmented sieve. This lets you constrain memory use to some reasonable limit (e.g. 1M + O(sqrt(n)), and the latter can be reduced if needed). A nice discussion and code in C++ is shown at primesieve.org. You can find various other examples in Python. primegen, Bernstein's implementation of SoA, is implemented as a segmented sieve (Your question 1: Yes the SoA can be segmented). This is closely related (but not identical) to sieving a range. This is how we can use a sieve to find primes between 10^18 and 10^18+1e6 in a fraction of a second -- we certainly don't sieve all numbers to 10^18+1e6.
Involving the hard drive is, IMO, going the wrong direction. We ought to be able to sieve faster than we can read values from the drive (at least with a good C implementation). A ranged and/or segmented sieve should do what you need.
There are better ways to do storage, which will help some. My SoE, like a few others, uses a mod-30 wheel so has 8 candidates per 30 integers, hence uses a single byte per 30 values. It looks like Bernstein's SoA does something similar, using 2 bytes per 60 values. RWH's python implementations aren't quite there, but are close enough at 10 bits per 30 values. Unfortunately it looks like Python's native bool array is using about 10 bytes per bit, and numpy is a byte per bit. Either you use a segmented sieve and don't worry about it too much, or find a way to be more efficient in the Python storage.
First of all, you should make sure that you store your data in an efficient manner. You could easily store the data for up to 100,000,000 primes in 12.5MB of memory by using a bitmap; by skipping obvious non-primes (even numbers and so on) you could make the representation even more compact. This also helps when storing the data on the hard drive. That you're getting into trouble at 100,000,000 primes suggests that you're not storing the data efficiently.
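For example, a minimal bitmap sketch (the class name and layout are my own; it stores one bit per odd number, so bit i stands for 2*i + 1, halving the 12.5MB figure again):

class OddBitmap:
    """One bit per odd number: bit i represents the integer 2*i + 1."""
    def __init__(self, limit):
        self.data = bytearray((limit // 2 + 7) // 8)

    def set(self, n):  # n is assumed odd
        i = n // 2
        self.data[i >> 3] |= 1 << (i & 7)

    def test(self, n):
        i = n // 2
        return bool(self.data[i >> 3] & (1 << (i & 7)))

bm = OddBitmap(100_000_000)             # ~6.25MB instead of a huge list
bm.set(99_999_999)
print(bm.test(99_999_999), bm.test(3))  # True False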
Some hints if you don't receive a better answer.
1. Is there a way to "chunk" the Sieve of Atkin to work on a segment in memory?
Yes, for the Eratosthenes-like part what you can do is run multiple elements of the sieve list in "parallel" (one block at a time), and that way minimize the disk accesses.
The first part is somewhat more tricky: what you want to do is process the 4*x**2+y**2, 3*x**2+y**2 and 3*x**2-y**2 terms in a more sorted order. One way is to first compute them and then sort the numbers; there are sorting algorithms that work well on drive storage (still O(N log N)), but that would hurt the time complexity. A better way is to iterate over x and y so that you work on one block at a time; since a block is determined by an interval, you can simply iterate over all x and y such that lo <= 4*x**2+y**2 <= hi.
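A rough sketch of that per-block iteration for the first quadratic form (the function name and bounds handling are my own; the other two forms work the same way):

import math

def four_x2_y2_in_block(lo, hi):
    """Yield n = 4*x**2 + y**2 with lo <= n <= hi."""
    x = 1
    while 4 * x * x + 1 <= hi:
        rem = lo - 4 * x * x
        # smallest y >= 1 such that 4*x**2 + y**2 >= lo
        y = 1 if rem <= 0 else math.isqrt(rem - 1) + 1
        n = 4 * x * x + y * y
        while n <= hi:
            yield n
            y += 1
            n = 4 * x * x + y * y
        x += 1

print(sorted(set(four_x2_y2_in_block(20, 40))))  # [20, 25, 29, 32, 37, 40]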
2. Is there a way to suspend the activity and come back to it later?
In order to achieve this (no matter how and when the program is terminated), you first have to journal your disk accesses (e.g. use a SQL database to keep the data, though with care you could do it yourself).
Second, since the operations in the first part are not idempotent, you have to make sure that you don't repeat them. However, since you would be running that part block by block, you can simply detect which block was the last one processed and resume there (if you can end up with a partially processed block, you'd just discard it and redo that block). The Eratosthenes part is idempotent, so you could just run through all of it, but for speed you could store a list of produced primes after the sieving of them has been done (so you would resume sieving after the last produced prime).
As a by-product, you should even be able to construct the program in a way that keeps the data from the first step while the second step is running, and thereby extend the limit at a later moment by continuing the first step and then running the second step again. Perhaps even have two programs, where you terminate the first when you've grown tired of it and then feed its output to the Eratosthenes part (thereby not having to define a limit).
You could try using a signal handler to catch when your application is terminated. This could then save your current state before terminating. The following script shows a simple number count continuing when it is restarted.
import signal
import os
import pickle

class MyState:
    def __init__(self):
        self.count = 1

def stop_handler(signum, frame):
    # Runs on Ctrl-C; lets the main loop exit cleanly.
    global running
    running = False

signal.signal(signal.SIGINT, stop_handler)
running = True
state_filename = "state.txt"

# Restore the previous state if it was saved, otherwise start fresh.
if os.path.isfile(state_filename):
    with open(state_filename, "rb") as f_state:
        my_state = pickle.load(f_state)
else:
    my_state = MyState()

while running:
    print(my_state.count)
    my_state.count += 1
    # Save the state after every step so a kill loses little work.
    with open(state_filename, "wb") as f_state:
        pickle.dump(my_state, f_state)
As for improving disk writes, you could try experimenting with increasing Python's own file buffering to a 1MB or larger buffer, e.g. open('output.txt', 'w', 2**20). Using a with statement should also ensure your file gets flushed and closed.
There is a way to compress the array. It may cost some efficiency depending on the python interpreter, but you'll be able to keep more in memory before having to resort to disk. If you search online, you'll probably find other sieve implementations that use compression.
Neglecting compression though, one of the easier ways to persist memory to disk would be through a memory mapped file. Python has an mmap module that provides the functionality. You would have to encode to and from raw bytes, but it is fairly straightforward using the struct module.
>>> import struct
>>> struct.pack('H', 0xcafe)
b'\xfe\xca'
>>> struct.unpack('H', b'\xfe\xca')
(51966,)
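Putting the two modules together (a sketch; the file name and 16-bit slot layout are made up for illustration), you can treat a file-backed region as a flat array and read/write slots in place:

import mmap
import struct

SLOTS = 1000
with open('sieve.dat', 'wb') as f:  # create a zero-filled backing file
    f.write(b'\x00' * (2 * SLOTS))

with open('sieve.dat', 'r+b') as f:
    mm = mmap.mmap(f.fileno(), 0)
    struct.pack_into('<H', mm, 42 * 2, 0xcafe)   # write slot 42 in place
    print(struct.unpack_from('<H', mm, 42 * 2))  # (51966,)
    mm.close()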
I have a Python GAE app that stores data in each instance, and the memory usage is much higher than I’d expected. As an illustration, consider this test code which I’ve added to my app:
from google.appengine.ext import webapp

bucket = []

class Memory(webapp.RequestHandler):
    def get(self):
        global bucket
        n = int(self.request.get('n'))
        size = 0
        for i in range(n):
            text = '%10d' % i
            bucket.append(text)
            size += len(text)
        self.response.out.write('Total number of characters = %d' % size)
A call to this handler with a value for query variable n will cause the instance to add n strings to its list, each 10 characters long.
If I call this with n=1 (to get everything loaded) and then check the instance memory usage on the production server, I see a figure of 29.4MB. If I then call it with n=100000 and check again, memory usage has jumped to 38.9MB. That is, my memory footprint has increased by 9.5MB to store only one million characters, nearly ten times what I’d expect. I believe that characters consume only one byte each, but even if that’s wrong there’s still a long way to go. Overhead of the list structure surely can’t explain it. I tried adding an explicit garbage collection call, but the figures didn’t change. What am I missing, and is there a way to reduce the footprint?
(Incidentally, I tried using a set instead of a list and found that after calling with n=100000 the memory usage increased by 13MB. That suggests that the set overhead for 100000 strings is 3.5MB more than that of lists, which is also much greater than expected.)
I know that I'm really late to the party here, but this isn't surprising at all...
Consider a string of length 1:
s = '1'
That's pretty small, right? Maybe somewhere on the order of 1 byte? Nope.
>>> import sys
>>> sys.getsizeof('1')
38
So there are approximately 37 bytes of overhead associated with each string that you create (all of those string methods need to be stored somewhere).
Additionally, it's usually most efficient for your CPU to store items based on "word size" rather than byte size. On lots of systems, a "word" is 4 bytes. I don't know for certain, but I wouldn't be surprised if python's memory allocator plays tricks there too to keep it running fairly quickly.
Also, don't forget that lists are represented as over-allocated arrays (to prevent huge performance problems each time you .append). It is possible that, when you make a list of 100k elements, python actually allocates pointers for 110k or more.
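You can watch that over-allocation happen (sizes vary by Python version and platform, so treat the output as illustrative):

import sys

l = []
last = sys.getsizeof(l)
for _ in range(20):
    l.append(None)
    size = sys.getsizeof(l)
    if size != last:  # the size only jumps at the resize points
        print(len(l), size)
        last = size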
Finally, regarding set -- that's probably fairly easily explained by the fact that sets are even more over-allocated than lists (they need to avoid all those hash collisions, after all). They end up having large jumps in memory usage as the set size grows, in order to have enough free slots in the array to avoid hash collisions:
>>> sys.getsizeof(set([1]))
232
>>> sys.getsizeof(set([1, 2]))
232
>>> sys.getsizeof(set([1, 2, 3]))
232
>>> sys.getsizeof(set([1, 2, 3, 4]))
232
>>> sys.getsizeof(set([1, 2, 3, 4, 5]))
232
>>> sys.getsizeof(set([1, 2, 3, 4, 5, 6])) # resize!
744
The overhead of the list structure doesn't explain what you're seeing directly, but memory fragmentation does. And strings have a non-zero overhead in terms of underlying memory, so counting string lengths is going to undercount significantly.
I'm not an expert, but this is an interesting question. It seems like it's more of a python memory management issue than a GAE issue. Have you tried running it locally and comparing the memory usage on your local dev_appserver vs deployed on GAE? That should indicate whether it's the GAE platform, or just python.
Secondly, the python code you used is simple but not very efficient; a list comprehension instead of the for loop should be more efficient. This should reduce the memory usage a bit:
''.join(['%10d' % i for i in range(n)])
Under the covers your growing string must be constantly reallocated. Every time through the for loop, there's a discarded string left lying around. I would have expected that triggering the garbage collector after your for loop should have cleaned up the extra strings though.
Try triggering the garbage collector before you check the memory usage.
import gc
gc.collect()
return len(gc.get_objects())
That should give you an idea if the garbage collector hasn't cleaned out some of the extra strings.
This is largely a response to dragonx.
The sample code exists only to illustrate the problem, so I wasn't concerned with small efficiencies. I am instead concerned about why the application consumes around ten times as much memory as there is actual data. I can understand there being some memory overhead, but this much?
Nonetheless, I tried using a list comprehension (without the join, to match my original) and the memory usage increases slightly, from 9.5MB to 9.6MB. Perhaps that's within the margin of error. Or perhaps the large range() expression sucks it up; it's released, no doubt, but better to use xrange(), I think. With the join the instance variable is set to one very long string, and the memory footprint unsurprisingly drops to a sensible 1.1MB, but this isn't the same case at all. You get the same 1.1MB just setting the instance variable to one million characters without using a list comprehension.
I'm not sure I agree that with my loop "there's a discarded string left lying around." I believe that the string is added to the list (by reference, if that's proper to say) and that no strings are discarded.
I had already tried explicit garbage collection, as my original question states. No help there.
Here's a telling result. Changing the length of the strings from 10 to some other number causes a proportional change in memory usage, but there's a constant in there as well. My experiments show that for every string added to the list there's an 85-byte overhead, no matter what the string length. Is this the cost for strings or for putting the strings into a list? I lean toward the latter. Creating a list of 100,000 Nones consumes 4.5MB, or around 45 bytes per None. This isn't as bad as for strings, but it's still pretty bad. And as I mentioned before, it's worse for sets than it is for lists.
I wish I understood why the overhead (or fragmentation) was this bad, but the inescapable conclusion seems to be that large collections of small objects are extremely expensive. You're probably right that this is more of a Python issue than a GAE issue.
GAE has various limitations, one of which is the size of the biggest allocatable block of memory, amounting to 1MB (now 10 times more, but that doesn't change the question). The limitation means that one cannot put more than some number of items in a list(), as CPython would try to allocate a contiguous memory block for the element pointers. Having huge list()s can be considered bad programming practice, but even if no huge structure is created in the program itself, CPython maintains some behind the scenes.
It appears that CPython is maintaining a single global list of objects or something similar, i.e. an application that has many small objects tends to allocate bigger and bigger single blocks of memory.
My first idea was the gc, and disabling it changes the application's behavior a bit, but some structures are still maintained.
The simplest short application that experiences the issue is:
a = b = []
number_of_lists = 8000000
for i in xrange(number_of_lists):
    b.append([])
    b = b[0]
Can anyone enlighten me how to prevent CPython from allocating huge internal structures when having many objects in application?
On a 32-bit system, each of the 8000000 lists you create will allocate 20 bytes for the list object itself, plus 16 bytes for a vector of list elements. So you are trying to allocate at least (20+16) * 8000000 = 288000000 bytes, about 288MB. And that's in the best case, if the system malloc only allocates exactly as much memory as requested.
I calculated the size of the list object as follows:
2 Pointers in the PyListObject structure itself (see listobject.h)
1 Pointer and one Py_ssize_t for the PyObject_HEAD part of the list object (see object.h)
one Py_ssize_t for the PyObject_VAR_HEAD (also in object.h)
The vector of list elements is slightly overallocated to avoid having to resize it at each append - see list_resize in listobject.c. The sizes are 0, 4, 8, 16, 25, 35, 46, 58, 72, 88, ... Thus, your one-element lists will allocate room for 4 elements.
Your data structure is a somewhat pathological example, paying the price of a variable-sized list object without utilizing it - all your lists have only a single element. You could avoid the 12 bytes overallocation by using tuples instead of lists, but to further reduce the memory consumption, you will have to use a different data structure that uses fewer objects. It's hard to be more specific, as I don't know what you are trying to accomplish.
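A quick way to see the tuple-versus-list difference (numbers below are from a 64-bit CPython 3 build, so they are larger than the 32-bit figures above, but the shape is the same):

import sys

l = []
l.append(None)  # appending triggers over-allocation (room for 4 elements)
t = (None,)     # tuples are sized exactly
print(sys.getsizeof(l), sys.getsizeof(t))  # e.g. 88 vs 48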
I'm a bit confused as to what you're asking. In that code example, nothing should be garbage collected, as you're never actually killing off any references. You're holding a reference to the top level list in a and you're adding nested lists (held in b at each iteration) inside of that. If you remove the 'a =', then you've got unreferenced objects.
Edit: In response to the first part, yes, Python holds a list of objects so it can know what to cull. Is that the whole question? If not, comment/edit your question and I'll do my best to help fill in the gaps.
What are you trying to accomplish with the
a = b = []
and
b = b[0]
statements? It's certainly odd to see statements like that in Python, because they don't do what you might naively expect: in that example, a and b are two names for the same list (think pointers in C). If you're doing a lot of manipulation like that, it's easy to confuse the garbage collector (and yourself!) because you've got a lot of strange references floating around that haven't been properly cleared.
It's hard to diagnose what's wrong with that code without knowing why you want to do what it appears to be doing. Sure, it exposes a bit of interpreter weirdness... but I'm guessing you're approaching your problem in an odd way, and a more Pythonic approach might yield better results.
So that you're aware of it, Python has its own allocator. You can disable it using --without-pymalloc during the configure step.
However, the largest arena is 256KB so that shouldn't be the problem. You can also compile Python with debugging enabled, using --with-pydebug. This would give you more information about memory use.
I suspect your hunch is right, and I'm sure oefe's diagnosis is correct. A list uses contiguous memory, so if your list gets too large for a system arena then you're out of luck. If you're really adventurous you can reimplement PyList to use multiple blocks, but that's going to be a lot of work, since various bits of Python expect contiguous data.