Python 2.7.5 - Run multiple threads simultaneously without slowing down

I'm creating a simple multiplayer game in Python. I have split the work up using the default thread module in Python. However, I noticed that the program still slows down as the other threads do their work. I tried using the multiprocessing module, but not all of my objects can be pickled.
Is there an alternative to using the multiprocessing module for running simultaneous processes?

Here are your options:
MPI4PY:
http://code.google.com/p/mpi4py/
Celery:
http://www.celeryproject.org/
Pprocess:
http://www.boddie.org.uk/python/pprocess.html
Parallel Python (PP):
http://www.parallelpython.com/

You need to analyze why your program is slowing down when other threads do their work. Assuming that the threads are doing CPU-intensive work, the slowdown is consistent with threads being serialized by the global interpreter lock (GIL).
It is impossible to answer in detail without knowing more about the nature of the work your threads are performing and about the objects that must be shared in parallel. In general, you have two viable options:
Use processes, typically through the multiprocessing module. The typical reason objects are not picklable is that they contain unpicklable state such as closures, open file handles, or other system resources. But pickle allows objects to implement methods such as __getstate__ or __reduce__ which extract the object's state, and that state is then used to rebuild the object on the other side. If your objects are unpicklable because they are huge, then you might need to write a C extension that stores them in shared memory or a memory-mapped file, and pickle only a key that identifies them in the shared memory.
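As a small sketch of that mechanism (the LogReader class and the log path are made-up names for illustration), an object that is unpicklable only because it holds an open file handle can drop the handle when pickled and reopen it when unpickled:

```python
import pickle

class LogReader(object):
    """Hypothetical object that is unpicklable because it holds an open file."""

    def __init__(self, path):
        self.path = path
        self.handle = open(path, "rb")   # open file handles cannot be pickled

    def __getstate__(self):
        # Copy the instance state, but leave out the unpicklable file handle.
        state = self.__dict__.copy()
        del state["handle"]
        return state

    def __setstate__(self, state):
        # Restore the state and rebuild the handle from the picklable path.
        self.__dict__.update(state)
        self.handle = open(self.path, "rb")

# Round-tripping through pickle now works (assuming the file exists):
# reader = LogReader("game.log")
# clone = pickle.loads(pickle.dumps(reader))
```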
Use threads, finding ways to work around the GIL. If your computation is concentrated in several hot spots, you can move those hot spots to C, and release the GIL for the duration of the computation. For this to work, the computation must not refer to any Python objects, i.e. all data must be extracted from the objects while the GIL is held, and stored back into the Python world after the GIL has been reacquired.
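You don't necessarily have to write the extension yourself to see this effect. A rough sketch using only the standard library (the buffer size and thread count are arbitrary, and the actual speed-up depends on your build and core count): zlib.compress is C code that releases the GIL while it works on a buffer, so several threads compressing separate data can make progress on different cores at once.

```python
import os
import threading
import time
import zlib

data = os.urandom(16 * 1024 * 1024)   # 16 MB of random bytes to compress

def work():
    # zlib.compress is implemented in C and releases the GIL while it runs,
    # so several of these calls can overlap on separate cores.
    zlib.compress(data, 9)

if __name__ == "__main__":
    start = time.time()
    threads = [threading.Thread(target=work) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print("4 compressions in %.2fs" % (time.time() - start))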

Related

Why is pickle needed for multiprocessing module in python

I was doing multiprocessing in Python and hit a pickling error, which makes me wonder: why do we need to pickle the object in order to do multiprocessing? Isn't fork() enough?
Edit: I kind of get why we need pickle to do interprocess communication, but that is just for the data you want to transfer, right? Why does the multiprocessing module also try to pickle stuff like functions etc.?
Which makes me wonder why do we need to pickle the object in order to do multiprocessing?
We don't need pickle, but we do need to communicate between processes, and pickle happens to be a very convenient, fast, and general serialization method for Python. Serialization is one way to communicate between processes; memory sharing is the other. Unlike memory sharing, the processes don't even need to be on the same machine to communicate. For example, PySpark uses serialization very heavily to communicate between executors (which are typically different machines).
Addendum: There are also issues with the GIL (Global Interpreter Lock) when sharing memory in Python.
isn't fork() enough?
Not if you want your processes to communicate and share data after they've forked. fork() clones the current memory space, but changes in one process won't be reflected in another after the fork (unless we explicitly share data, of course).
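A minimal sketch of both points (the counters dict is just an example object): after the fork, the child's modification stays in the child, and the usual way to get results back to the parent is to send them over a Pipe or Queue, which is where pickling comes in.

```python
from multiprocessing import Process, Pipe

counters = {"score": 0}

def worker(conn):
    counters["score"] = 42          # modifies the child's copy only
    conn.send(counters)             # the dict is pickled and sent to the parent
    conn.close()

if __name__ == "__main__":
    parent_conn, child_conn = Pipe()
    p = Process(target=worker, args=(child_conn,))
    p.start()
    print(parent_conn.recv())       # {'score': 42} -- arrived via pickling
    p.join()
    print(counters)                 # {'score': 0}  -- the parent's copy is unchanged
```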
I kind of get why we need pickle to do interprocess communication, but that is just for the data you want to transfer, right? Why does the multiprocessing module also try to pickle stuff like functions etc.?
Sometimes complex objects (i.e. "other stuff"? not totally clear on what you meant here) contain the data you want to manipulate, so we'll definitely want to be able to send that "other stuff".
Being able to send a function to another process is incredibly useful. You can create a bunch of child processes and then send them all a function to execute concurrently that you define later in your program. This is essentially the crux of PySpark (again a bit off topic, since PySpark isn't multiprocessing, but it feels strangely relevant).
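As a rough sketch of that (the simulate function is a made-up stand-in for real work): the parent defines the function, creates a pool of child processes, and ships the function plus its arguments to the workers, which execute them concurrently.

```python
from multiprocessing import Pool

def simulate(seed):
    # stand-in for CPU-heavy work defined in the parent program
    total = 0
    for i in range(100000):
        total += (seed * i) % 7
    return total

if __name__ == "__main__":
    pool = Pool(4)
    # a reference to 'simulate' and each argument are pickled and
    # shipped to the worker processes, which run them in parallel
    print(pool.map(simulate, range(8)))
    pool.close()
    pool.join()
```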
There are some functional purists (mostly the LISP people) who argue that code and data are the same thing. So it's not much of a line to draw for some.

Python multiprocessing guidelines seems to conflict: share memory or pickle?

I'm playing with the Python multiprocessing module to have a (read-only) array shared among multiple processes. My goal is to use multiprocessing.Array to allocate the data and then have my code forked (forkserver) so that each worker can read straight from the array to do its job.
While reading the Programming guidelines I got a bit confused.
It is first said:
Avoid shared state
As far as possible one should try to avoid shifting large amounts of data between processes.
It is probably best to stick to using queues or pipes for communication between processes rather than using the lower level synchronization primitives.
And then, a couple of lines below:
Better to inherit than pickle/unpickle
When using the spawn or forkserver start methods many types from multiprocessing need to be picklable so that child processes can use them. However, one should generally avoid sending shared objects to other processes using pipes or queues. Instead you should arrange the program so that a process which needs access to a shared resource created elsewhere can inherit it from an ancestor process.
As far as I understand, queues and pipes pickle objects. If so, aren't those two guidelines conflicting?
Thanks.
The second guideline is the one relevant to your use case.
The first is reminding you that this isn't threading, where you manipulate shared data structures with locks (or atomic operations). If you use Manager.dict() (which is actually SyncManager.dict) for everything, every read and write has to go through the manager's process, and you also need the synchronization typical of threaded programs (which may itself cost more here because it is cross-process).
The second guideline suggests inheriting shared, read-only objects via fork; in the forkserver case, this means you have to create such objects before the call to set_start_method, since all workers are children of a process created at that time.
The reports on the usability of such sharing are mixed at best, but if you can use a small number of large C-like arrays (like numpy arrays or the standard array module), you should see good performance, because most of the pages will never be written to merely to update reference counts. Note that you do not need multiprocessing.Array here (though it may work fine), since you do not need writes in one concurrent process to be visible in another.
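A rough sketch of the inherit-rather-than-pickle pattern with a plain numpy array (sizes and worker count are arbitrary; this relies on a fork-based start method, so it is POSIX-only):

```python
import multiprocessing as mp
import numpy as np

# Create the large read-only array before any worker exists, so forked
# children inherit the pages instead of receiving a pickled copy.
big = np.random.rand(8 * 1000 * 1000)

def chunk_sum(bounds):
    lo, hi = bounds
    return big[lo:hi].sum()        # reads the inherited array; nothing is copied

if __name__ == "__main__":
    pool = mp.Pool(4)              # uses the default fork start method on Linux
    step = 2 * 1000 * 1000
    chunks = [(i, i + step) for i in range(0, len(big), step)]
    print(sum(pool.map(chunk_sum, chunks)))
    pool.close()
    pool.join()
```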

Memory sharing multithreading programming in Python

Is it possible to handle shared-memory parallel tasks in Python? My task should run in parallel over several cores (the threading module does not fit here; as far as I know, the only facility for that is multiprocessing). There are lots of tasks for which I create a thread (in Python's case, process) pool. Then I need to initialize these threads (processes) with a lot of data from the main thread (process). The threads process this data and return new data (again, a lot of it) to the main thread (process). What I see as a huge overhead is that each task must copy the data into the new process, and do the same again after finishing. With threads this copying is eliminated, and that should be a huge speed-up. Can I achieve this speed-up with Python?
Yes, the multiprocessing module does support placing objects in shared memory. See the documentation.
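For example (a minimal sketch; the sizes and worker split are arbitrary), multiprocessing.Array places a block of doubles in shared memory, so worker processes can read and write it in place and the parent sees the changes without any pickling:

```python
from multiprocessing import Process, Array

def scale(shared, start, stop, factor):
    # operates directly on the shared buffer; nothing is pickled or copied
    for i in range(start, stop):
        shared[i] *= factor

if __name__ == "__main__":
    data = Array("d", range(1000))       # 1000 doubles in shared memory
    half = len(data) // 2
    workers = [
        Process(target=scale, args=(data, 0, half, 2.0)),
        Process(target=scale, args=(data, half, len(data), 2.0)),
    ]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    print(data[:5])                      # values have been doubled in place
```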
Threads won't help you due to the GIL serializing them. However, you can still use multiprocessing and share the data between processes using the mmap module or equivalent. This will require the data to be structured to be readable from the file, so you won't be able to use, say, dictionaries - but using a file-based storage such as sqlite will be fine.
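A rough sketch of the mmap route (the file name and the flat record layout are made up for illustration): the data is written to a file once, each worker maps the same file read-only, and the operating system shares the pages between the processes.

```python
import array
import mmap
import os
import struct
from multiprocessing import Pool

PATH = "shared_values.bin"     # hypothetical data file
COUNT = 1000000                # number of 8-byte doubles in the file

def partial_sum(bounds):
    lo, hi = bounds
    with open(PATH, "rb") as f:
        # each worker maps the same file; the OS shares the pages between them
        mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
        total = sum(struct.unpack_from("d", mm, i * 8)[0] for i in range(lo, hi))
        mm.close()
    return total

if __name__ == "__main__":
    with open(PATH, "wb") as f:                       # write the shared data once
        array.array("d", range(COUNT)).tofile(f)
    pool = Pool(4)
    step = COUNT // 4
    chunks = [(i, i + step) for i in range(0, COUNT, step)]
    print(sum(pool.map(partial_sum, chunks)))
    pool.close()
    pool.join()
    os.remove(PATH)
```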

Python multithreading best practices

I just recently read an article about the GIL (Global Interpreter Lock) in Python, which seems to be a big issue when it comes to Python performance. So I was wondering what the best practice would be to achieve more performance. Would it be threading or multiprocessing? Because I hear everybody say something different, it would be nice to have one clear answer, or at least to know the pros and cons of multithreading versus multiprocessing.
Kind regards,
Dirk
It depends on the application, and on the python implementation that you are using.
In CPython (the reference implementation) and pypy the GIL only allows one thread at a time to execute Python bytecode. Other threads might be doing I/O or running extensions written in C.
It is worth noting that some other implementations, like IronPython and Jython, don't have a GIL.
A characteristic of threading is that all threads share the same interpreter and all the live objects. So threads can share global data almost without extra effort. You need to use locking to serialize access to data, though! Imagine what would happen if two threads tried to modify the same list.
Multiprocessing actually runs in different processes. That sidesteps the GIL, but if large amounts of data need to be shared between processes that data has to be pickled and transported to another process via IPC where it has to be unpickled again. The multiprocessing module can take care of the messy details for you, but it still adds overhead.
So if your program wants to run Python code in parallel but doesn't need to share huge amounts of data between instances (e.g. just filenames of files that need to be processed), multiprocessing is a good choice.
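A minimal sketch of that case (the per-file work is a placeholder; the glob pattern is just an example of how the filenames might be collected): only the short filename strings cross the process boundary, and each worker does its heavy processing independently.

```python
import glob
from multiprocessing import Pool, cpu_count

def process_file(filename):
    # placeholder for CPU-heavy per-file work; only the short
    # filename string is shipped to the worker process
    with open(filename, "rb") as f:
        return filename, len(f.read())

if __name__ == "__main__":
    filenames = glob.glob("*.py")          # whatever inputs you actually have
    pool = Pool(cpu_count())
    for name, size in pool.map(process_file, filenames):
        print(name, size)
    pool.close()
    pool.join()
```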
Currently multiprocessing is the only way that I'm aware of in the standard library to use all the cores of your CPU at the same time.
On the other hand, if your tasks need to share a lot of data and most of the processing is done in C extensions or is I/O-bound, threading would be a good choice.
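As a small sketch of the I/O-bound case (the one-second sleep stands in for a blocking network or disk call): the threads spend their time waiting, the GIL is released while they block, so ten tasks finish in roughly one second instead of ten.

```python
import threading
import time

def fetch(task_id, results):
    time.sleep(1.0)                 # stands in for blocking I/O; the GIL is released here
    results[task_id] = "done"

if __name__ == "__main__":
    results = {}
    start = time.time()
    threads = [threading.Thread(target=fetch, args=(i, results)) for i in range(10)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(len(results), "tasks in %.1fs" % (time.time() - start))
```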

Running a continuous Python process and memory management

My current assignment is to run a Python program continuously; it is going to be a cron-job kind of thing. Internally it will have objects which are going to be updated every 24 hours, and it will then basically write the details to a file.
Some advice is required about memory management:
Should I use a single process or multiple threads? There is scope in the program for work that can be done in parallel. As it is going to run continuously, some clarification would be required about the memory consumption of these threads. Also, do I need to clean up the resources of the threads after each execution? Is there any cleanup method available for threads in Python?
When I allocate an object in Python, do I need to think about the destructor as well, or will Python do the garbage collection?
Please share your thoughts on this as well as what would be the best approach.
There seems to be a misunderstanding in your question.
A cron job is a scheduled task that runs at a given interval of time. A program running continuously doesn't need to be scheduled, aside from being launched at boot.
First, multithreading in Python suffers from the GIL, so unless you are calling library functions that release the GIL or your computations are I/O-bound (often blocked by input/output such as disk access or network access), you will only get an insubstantial gain from threading. You should instead consider using the multiprocessing package for parallel computing. Other options are NumPy-based calculations when the library is compiled with OpenMP, or using a task-based parallel framework such as SCOOP or Celery.
As stated in the comments, memory management is built into Python and you won't have to worry about it apart from deleting unused instances or elements. Python will automatically garbage-collect every object that no longer has any variable bound to it, so be sure to delete references or let them fall out of scope accordingly.
On a side note, be careful with object destructors in Python; they tend to exhibit different behavior than in other object-oriented languages. I recommend reading up on this matter before using them.
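For instance (a small sketch; the Resource class is made up), __del__ only runs when the last reference disappears, so a stray reference or a reference cycle can delay it, and at interpreter exit it may not run at all. Explicit cleanup (a close() method, try/finally, or a context manager) is the safer pattern.

```python
class Resource(object):
    def __init__(self, name):
        self.name = name

    def __del__(self):
        # Runs only when the last reference goes away, which may be
        # much later than you expect (or never, at interpreter exit).
        print("releasing " + self.name)

r = Resource("a")
alias = r
del r            # nothing printed yet: 'alias' still refers to the object
del alias        # now the destructor runs

# Objects caught in reference cycles may have to wait for the garbage
# collector, so prefer explicit cleanup over relying on __del__.
```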
