I have been using .pop() and .append() extensively for Leetcode-style programming problems, especially in cases where you have to accumulate palindromes, subsets, permutations, etc.
Would I get a substantial performance gain from migrating to a fixed-size list instead? My concern is that, internally, the Python list reallocates to a smaller internal array when I execute a bunch of pops, and then has to "allocate up" again when I append.
I know that the amortized time complexity of append and pop is O(1), but I want to get better performance if I can.
Yes.
Python (at least the CPython implementation) uses magic under the hood to make lists as efficient as possible. According to this blog post (2011), calls to append and pop dynamically allocate and deallocate memory in chunks (overallocating where necessary) for efficiency. The list only deallocates memory if it shrinks below the chunk size. So in most cases, if you are doing a lot of appends and pops, no memory allocation or deallocation will be performed.
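If you want to see the chunked over-allocation for yourself, one quick (and implementation-dependent) check is to watch sys.getsizeof while a list grows; the exact numbers vary between CPython versions:

import sys

items = []
previous_size = sys.getsizeof(items)
for i in range(20):
    items.append(i)
    size = sys.getsizeof(items)
    if size != previous_size:
        # capacity grew in a chunk; the appends in between were "free"
        print(f"len={len(items):2d}  bytes={size}")
        previous_size = size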
Basically, the idea with these high-level languages is that you should be able to use the data structure most suited to your use case, and the interpreter will ensure that you don't have to worry about the background workings (e.g. avoid micro-optimisation and instead focus on the efficiency of the algorithm in general). If you're that worried about performance, I'd suggest using a language where you have more control over the memory, like C/C++ or Rust.
Python guarantees amortized O(1) complexity for appends and pops from the end, as you noted, so it sounds like it will be perfectly suited to your case. If you wanted to use the list like a queue, with things like list.pop(0) or list.insert(0, obj), which are slower, then you could look into a dedicated queue data structure instead.
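For instance, collections.deque from the standard library is built for exactly that access pattern, with O(1) appends and pops at both ends:

from collections import deque

queue = deque()
queue.append("first")      # enqueue at the right
queue.append("second")
print(queue.popleft())     # dequeue from the left in O(1): "first"
print(queue.popleft())     # "second"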
Related
Everyone talks about the advantages of using generators in Python. They are a really cool and useful feature, but nobody speaks about their disadvantages, and interviewers like to exploit this gap. The two disadvantages I know of are:
While a generator is running, you need to keep the variables of the generator function in memory.
Every time you want to reuse the elements of a collection, it must be regenerated.
So is there any other disadvantage of using generators besides these two?
While a generator is running, you need to keep the variables of the generator function in memory.
But you don't have to keep the entire collection in memory, so usually this is EXACTLY the trade-off you want to make.
Every time you want to reuse the elements of a collection, it must be regenerated.
The generator must be recreated, but the underlying collection does not need to be, so this may not be a problem.
Essentially it boils down to a discussion about lazy vs. eager evaluation. You trade some CPU overhead for the capability of streaming processing (as opposed to bulk processing with eager evaluation). The code can become a bit trickier to read with a lazy approach, so there can be a trade-off between performance and simplicity there as well.
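A small illustration of both points, using sys.getsizeof as a rough proxy for memory (exact sizes are implementation details):

import sys

squares_list = [n * n for n in range(1_000_000)]   # eager: the whole collection in memory
squares_gen = (n * n for n in range(1_000_000))    # lazy: a small, constant-size object

print(sys.getsizeof(squares_list))  # several megabytes
print(sys.getsizeof(squares_gen))   # a couple of hundred bytes

print(sum(squares_gen))  # consumes the generator...
print(sum(squares_gen))  # ...so the second pass sums nothing; it must be recreated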
I have a complex nested data structure. I iterate through it and perform some calculations on each possible unique pair of elements. It's all in-memory mathematical functions. I don't read from files or do networking.
It takes a few hours to run, with do_work() being called 25,000 times. I am looking for ways to speed it up.
Although Pool.map() seems useful for my lists, it's proving to be difficult because I need to pass extra arguments into the function being mapped.
I thought using the Python multiprocessing library would help, but when I use Pool.apply_async() to call do_work(), it actually takes longer.
I did some googling and a blogger says "Use sync for in-memory operations — async is a complete waste when you aren’t making blocking calls." Is this true? Can someone explain why? Do the RAM read & write operations interfere with each other? Why does my code take longer with async calls? do_work() writes calculation results to a database, but it doesn't modify my data structure.
Surely there is a way to utilize my processor cores instead of just linearly iterating through my lists.
My starting point, doing it synchronously:
main_list = [ [ [a,b,c,[x,y,z], ... ], ... ], ... ] # list of identical structures
helper_list = [1,2,3]
z = 2

for i_1 in range(0, len(main_list)):
    for i_2 in range(0, len(main_list)):
        if i_1 < i_2: # only unique combinations
            for m in range(0, len(main_list[i_1])):
                for h, helper in enumerate(helper_list):
                    do_work(
                        main_list[i_1][m][0], main_list[i_2][m][0], # unique combo
                        main_list[i_1][m][1], main_list[i_1][m][2],
                        main_list[i_1][m][3][z], main_list[i_2][m][3][h],
                        helper_list[h]
                    )
Variable names have been changed to make it more readable.
This is just a general answer, but too long for a comment...
First of all, I think your biggest bottleneck at this very moment is Python itself. I don't know what do_work() does, but if it's CPU-intensive, you have the GIL, which completely prevents effective parallelisation inside one process. No matter what you do, threads will fight for the GIL, and that will eventually make your code even slower. Remember: Python threads are real OS threads, but only one of them can run Python bytecode at a time inside a single process.
I recommend checking out the page of David M. Beazley: http://dabeaz.com/GIL/gilvis, who has put a lot of effort into visualising GIL behaviour in Python.
On the other hand, the multiprocessing module allows you to run multiple processes and "circumvent" the GIL's downsides, but it will be tricky to get access to the same memory locations without bigger penalties or trade-offs.
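As a minimal sketch of that direction (and of the extra-argument problem from the question): unique pairs can be produced with itertools.combinations and the extra argument bound with functools.partial, so that Pool.starmap spreads the calls over the CPU cores. do_work() and the data here are hypothetical stand-ins for the real ones:

import functools
import itertools
from multiprocessing import Pool

def do_work(left, right, helper):          # stand-in for the real CPU-bound function
    return (left + right) * helper

def main():
    main_list = [1, 2, 3, 4, 5]
    pairs = itertools.combinations(main_list, 2)    # only unique combinations
    worker = functools.partial(do_work, helper=10)  # bind the extra argument
    with Pool() as pool:                            # one worker process per core
        results = pool.starmap(worker, pairs)       # each pair unpacked as (left, right)
    print(results)

if __name__ == "__main__":
    main()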
Second: if you have heavy nested loops, you should think about using numba and trying to fit your data structures into numpy (structured) arrays. This can easily give you an order of magnitude of speedup. Python is slow as hell for such things, but luckily there are ways to squeeze out a lot when using appropriate libraries.
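A minimal sketch of that idea, assuming the per-pair work can be expressed as numeric operations on numpy arrays (pair_metric is a hypothetical stand-in for the real calculation):

import numpy as np
from numba import njit

@njit
def pair_metric(a, b):
    # hypothetical CPU-bound calculation on two 1-D float arrays
    return np.sqrt(np.sum((a - b) ** 2))

@njit
def all_pairs(data):
    n = data.shape[0]
    out = np.empty(n * (n - 1) // 2)
    k = 0
    for i in range(n):
        for j in range(i + 1, n):          # unique combinations only
            out[k] = pair_metric(data[i], data[j])
            k += 1
    return out

result = all_pairs(np.random.rand(500, 8))  # compiled on first call, fast afterwards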
To sum up, I think the code you are running could be orders of magnitude faster with numba and numpy structures.
Alternatively, you can try to rewrite the code in a language like Julia (very similar syntax to Python, and the community is extremely helpful) and quickly check how fast it is in order to explore the limits of the performance. It's always a good idea to get a feeling for how fast something (or part of a code base) can be in a language that doesn't have Python's complicated performance characteristics.
Your task is CPU-bound rather than reliant on I/O operations. Asynchronous execution makes sense when you have long I/O operations, e.g. sending/receiving something over the network.
What you can do is split the task into chunks and utilise threads and multiprocessing (run on different CPU cores).
If I instantiate/update a few lists very, very few times, in most cases only once, but I check for the existence of an object in that list a bunch of times, is it worth it to convert the lists into dictionaries and then check by key existence?
Or, in other words, is it worth converting the lists into dictionaries to achieve possibly faster object-existence checks?
Dictionary lookups are faster than list searches. A set would also be an option. That said:
If "a bunch of times" means "it would be a 50% performance increase" then go for it. If it doesn't but makes the code better to read then go for it. If you would have fun doing it and it does no harm then go for it. Otherwise it's most likely not worth it.
You should be using a set, since from your description I am guessing you wouldn't have a value to associate with each key. See Python: List vs Dict for look up table for more info.
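A minimal sketch of the suggestion, with illustrative names: build the set once, then do average O(1) membership tests:

allowed_names = ["alice", "bob", "carol"]      # built once, rarely updated
allowed = set(allowed_names)                   # one-time O(n) conversion

def is_allowed(name):
    return name in allowed                     # average O(1), vs O(n) for the list

print(is_allowed("bob"), is_allowed("mallory"))   # True False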
Usually it's not important to tune every line of code for utmost performance.
As a rule of thumb, if you need to look up more than a few times, creating a set is usually worthwhile.
However consider that pypy might do the linear search 100 times faster than CPython, then a "few" times might be "dozens". In other words, sometimes the constant part of the complexity matters.
It's probably safest to go ahead and use a set there. You're less likely to find that a bottleneck as the system scales than the other way around.
If you really need to micro-tune everything, keep in mind that the implementation, CPU cache, etc. can affect it, so you may need to re-micro-tune differently for different platforms, and if you need performance that badly, Python was probably a bad choice - although maybe you can pull the hotspots out into C. :)
Random access (lookup) in a dictionary is faster, but building the hash table consumes more memory.
More performance = more memory usage.
It depends on how many items are in your list.
Goal: sort a sequence in a functional way without using the built-in sorted() function.
def my_sorted(seq):
    """returns an iterator"""
    pass
Motivation: In the FP way, I am constrained:
never mutate seq (which could be an iterator or a realized list)
By implication, no in-place sorting.
Question 1: Since I cannot mutate seq, I would need to maintain a separate mutable data structure to store the sorted sequence. That seems wasteful compared to an in-place list.sort(). How do other functional programming languages handle this?
Question 2: If I return a mutable sequence, is that OK in the functional paradigm?
Of course sorting cannot be totally lazy (the last element of the input could be the first of the output), but you could implement a computationally lazy sort that, after reading the whole sequence, only generates exact sorted output on request, element by element. You can also delay reading the input until at least one output element is requested, so sorting and then ignoring the result will require no computation.
For this computationally lazy approach the best candidate I know is the heapsort algorithm (you only do the heap-building step upfront).
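A minimal sketch of that idea using the standard heapq module: the input is copied rather than mutated, the heap is built once, and each sorted element is produced only on request. Because the generator body does not run until the first element is requested, sorting and ignoring the result costs nothing, as described above.

import heapq

def my_sorted(seq):
    """Return an iterator over the elements of seq in ascending order."""
    heap = list(seq)              # copy, so the input is never mutated
    heapq.heapify(heap)           # O(n) heap-building step, done up front
    while heap:
        yield heapq.heappop(heap) # each element costs O(log n), on demand

print(list(my_sorted([3, 1, 2])))   # [1, 2, 3]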
Mutation in-place is only safe if no one else has references to the data, expecting it to be as it was prior to the sort. So it isn't really wasteful to have a new structure for the sorted results, in general. The in-place optimization is only safe if you're using the data in a linear fashion.
So, just allocate a new structure, since that is more generally useful. The in-place version is a special case.
The appropriate defensive programming is wasteful at times, but there's also nothing you can do about it.
This is why languages built to support functional use from the ground up use structural sharing for their natively immutable types; programming in a functional style in a language which isn't built for it (such as Python) isn't going to be as well-supported as a matter of course. That said, a sort operation isn't necessarily a good candidate for structural sharing (if more than minor changes need to be made).
As such, there is often at least one copy operation involved in a sort, even in other functional languages. Clojure, for instance, delegates to Java's native (highly optimized) sort operation on a temporary mutable array, and returns a seq wrapping that array (thus making the result just as immutable as the input that was used to populate it). If the inputs are immutable, and the outputs are immutable, and what happens in between isn't visible to the outside world (particularly, to any other thread), transient mutability is often a necessary and appropriate thing.
Use a sorting algorithm that can be performed in a manner that creates a new data structure, such as heapsort or mergesort.
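For instance, a merge sort written in that style builds new lists at every step and never touches its input (a simple sketch, not tuned for performance). The local list inside merge() is mutated, but that mutation is never visible outside the function, which is exactly the kind of transient mutability discussed above:

def merge(left, right):
    """Merge two already-sorted lists into a new sorted list."""
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]

def merge_sort(seq):
    """Return a new sorted list; the input sequence is never mutated."""
    items = list(seq)              # works for iterators and realized lists alike
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    return merge(merge_sort(items[:mid]), merge_sort(items[mid:]))

print(merge_sort([3, 1, 2]))       # [1, 2, 3]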
Wasteful of what? Bits? Electricity? Wall-clock time? A parallel merge sort may be the quickest to complete if you have enough CPUs and a large amount of data, but it may produce many intermediate representations.
In general, parallelising an algorithm may lead to a very different optimisation strategy than a serial algorithm. For instance, due to Amdahl's Law, you might re-perform redundant work locally to avoid sharing. This may be considered "wasteful" in a serial context, but it leads to a much more scalable algorithm.
I have been thinking about the following question about computer architecture. Suppose I do the following in Python:
from bisect import bisect
index = bisect(x, a) # O(log n) (also, shouldn't it be a standard list function?)
x.insert(index, a) # O(1) + memcpy()
which takes O(log n), plus, if I understand it correctly, a memory copy operation for x[index:]. Now, I read recently that the bottleneck is usually in the communication between the processor and the memory, so the memory copy could be done by the RAM quite quickly. Is that how it works?
Python is a language. Multiple implementations exist, and they may have different implementations for lists. So, without looking at the code of an actual implementation, you cannot know for sure how lists are implemented and how they behave under certain circumstances.
My bet would be that the references to the objects in a list are stored in contiguous memory (certainly not as a linked list...). If that is indeed so, then insertion using x.insert will cause all elements behind the inserted element to be moved. This may be done efficiently by the hardware, but the complexity would still be O(n).
For small lists the bisect operation may take more time than x.insert, even though the former is O(log n) while the latter is O(n). For long lists, however, I'd hazard a guess that x.insert is the bottleneck. In such cases you must consider using a different data structure.
Use the blist module if you need a list with better insert performance.
CPython lists are contiguous arrays. Which one of the O(log n) bisect and O(n) insert dominates your performance profile depends on the size of your list and also the constant factors inside the O(). Particularly, the comparison function invoked by bisect can be something expensive depending on the type of objects in the list.
If you need to hold potentially large, mutable, sorted sequences, then the linear array underlying Python's list type isn't a good choice. Depending on your requirements, heaps, trees or skip lists might be appropriate.
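To make the difference concrete, here is a small sketch contrasting the sorted-array approach (bisect.insort, which still pays for the O(n) shift) with a heap, which has O(log n) insertion but maintains only a partial order:

import heapq
from bisect import insort

data = [5, 1, 4, 2]

# contiguous-array approach: O(log n) search + O(n) element shift
sorted_list = sorted(data)
insort(sorted_list, 3)            # same cost profile as bisect() followed by insert()

# heap approach: O(log n) insertion, but only the smallest element is directly accessible
heap = data[:]
heapq.heapify(heap)
heapq.heappush(heap, 3)

print(sorted_list)                # [1, 2, 3, 4, 5]
print(heapq.heappop(heap))        # 1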