Is Python multiprocessing Queue safe for object put?

When I put an object in a Queue, is it necessary to create a deep copy of the object before putting it in the Queue?

If you can ensure that the object is only processed in one thread, this is not a problem; but if you can't, it is recommended to use a deep copy.
The Queue object doesn't do this automatically when you put the object into it.
See these references:
Multithreading, Python and passed arguments
Python in Practice: Create Better Programs Using Concurrency... p.154
Keep in mind that the object needs to be picklable (Multiprocessing Basics):
It is usually more useful to be able to spawn a process with arguments to tell it what work to do. Unlike with threading, to pass arguments to a multiprocessing Process the arguments must be able to be serialized using pickle. This example passes each worker a number so the output is a little more interesting.
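A minimal sketch of what that means in practice (the worker function and the dict are just illustrative): the object is pickled on put and unpickled on get, so the child process works on an independent copy.

import multiprocessing as mp

def worker(q):
    item = q.get()                   # unpickled here: an independent copy
    item['n'] += 1
    print('child sees', item)        # {'n': 1}

if __name__ == '__main__':
    q = mp.Queue()
    obj = {'n': 0}
    q.put(obj)                       # obj is pickled by the queue's feeder thread
    p = mp.Process(target=worker, args=(q,))
    p.start()
    p.join()
    print('parent still has', obj)   # {'n': 0}

Note that put() serializes the object in a background feeder thread, so if another thread could still be mutating the object around that time, putting a deep copy (as advised above) avoids the race.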

Related

Multithreading a method from an object instantiated in the main thread

If I instantiate an object in the main thread and then send one of its member methods to a ThreadPoolExecutor, does Python somehow create a copy-by-value of the object and send it to the subthread, so that the object's member method will have access to its own copy of self?
Or is it indeed accessing self from the object in the main thread, meaning that every method running in a subthread is modifying / overwriting the same properties (living in the main thread)?
Threads share a memory space. There is no magic going on behind the scenes, so code in different threads accesses the same objects. Thread switches can occur at any time, although most simple Python operations are atomic. It is up to you to avoid race conditions. Normal Python scoping rules apply.
You might want to read about ThreadLocal variables if you want to find out about workarounds to the default behavior.
Processes are quite different. Each process has its own memory space and its own copy of all the objects it references.
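A rough illustration of the difference (the class and method names are made up): submitting a bound method to a ThreadPoolExecutor mutates the very same object that lives in the main thread; no copy is made.

from concurrent.futures import ThreadPoolExecutor

class Counter:
    def __init__(self):
        self.value = 0
    def bump(self):
        self.value += 1      # mutates the shared instance in place

c = Counter()
with ThreadPoolExecutor(max_workers=4) as pool:
    for _ in range(100):
        pool.submit(c.bump)

print(c.value)  # the main-thread object was changed by the workers;
                # += is not atomic, so guard it with a lock in real code

With multiprocessing, each worker process would instead receive its own pickled copy of c, and c.value in the parent would stay 0.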

multiprocessing initargs - how it works under the hood?

I've assumed that multiprocessing.Pool uses pickle to pass initargs to child processes.
However, I find the following strange:
value = multiprocessing.Value('i', 1)
multiprocessing.Pool(initializer=worker, initargs=(value,))  # works
But this does not work:
pickle.dumps(value)
throwing:
RuntimeError: Synchronized objects should only be shared between processes through inheritance
Why is that, and how can multiprocessing initargs bypass that, given that it uses pickle as well?
As I understand it, multiprocessing.Value uses shared memory behind the scenes; what is the difference between inheritance and passing it via initargs? I am specifically asking about Windows, where the code does not fork, so a new instance of multiprocessing.Value is created.
If you had instead passed an instance of multiprocessing.Lock(), the error message would have been RuntimeError: Lock objects should only be shared between processes through inheritance. These things can be passed as arguments if you are creating an instance of multiprocessing.Process, which is in fact what is being used when you specify initializer=worker, initargs=(value,). The test being made is whether a process is currently being spawned, which is not the case when you already have an existing process pool and are now just submitting work to it. But why this restriction?
Would it make sense for you to pickle this shared memory to a file and then, a week later, unpickle it and use it? Of course not! Python cannot know that you would not do anything as foolish as that, so it places great restrictions on how shared memory and locks can be pickled/unpickled: only for passing to other processes.
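A small sketch of that behaviour (worker is an illustrative name): the same Value can be handed to a child while the process is being created, but a bare pickle.dumps is rejected.

import multiprocessing, pickle

def worker(shared):
    with shared.get_lock():
        shared.value += 1

if __name__ == '__main__':
    value = multiprocessing.Value('i', 1)

    # Allowed: pickled by multiprocessing while the child is being spawned
    p = multiprocessing.Process(target=worker, args=(value,))
    p.start(); p.join()
    print(value.value)   # 2

    # Rejected: pickling outside of process creation
    try:
        pickle.dumps(value)
    except RuntimeError as exc:
        print(exc)       # Synchronized objects should only be shared between processes through inheritance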

How does Python pass-by-object-reference work with asynchronous code?

foo = {'bar': None}
for i in range(3):
    foo['bar'] = i
    send_to_async_queue(foo)
Can I be certain that the queue processes 0, 1, 2 every time? If not, how do I ensure that it does?
It depends on how send_to_async_queue is implemented.
If you are using something like Celery, the dictionary will be serialized (as JSON, pickle, or another method you choose) before being sent to the queue, so you are safe.
If you are using threads or another in-memory mechanism to hold and process the queue, all consumers will share the address of this dictionary, and changing it will likely give you problems.
In that case, you can serialize it yourself before putting it in the queue, or pass a copy of the dictionary with send_to_async_queue(copy.deepcopy(foo)), as sketched below.
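A rough in-memory sketch of the second case (send_to_async_queue here is just a stand-in that appends to a queue.Queue): without the copy, every queued entry references the same dict, so a consumer running later would see 2 three times.

import copy, queue

q = queue.Queue()

def send_to_async_queue(payload):
    q.put(payload)

foo = {'bar': None}
for i in range(3):
    foo['bar'] = i
    send_to_async_queue(copy.deepcopy(foo))   # snapshot the current state

while not q.empty():
    print(q.get()['bar'])   # 0, 1, 2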

Python: Why is the multiprocessing lock shared among processes here?

I am trying to share a lock among processes. I understand that the way to share a lock is to pass it as an argument to the target function. However, I found that even the approach below works. I cannot understand how the processes are sharing this lock. Could anyone please explain?
import multiprocessing as mp
import time

class SampleClass:
    def __init__(self):
        self.lock = mp.Lock()
        self.jobs = []
        self.total_jobs = 10

    def test_run(self):
        for i in range(self.total_jobs):
            p = mp.Process(target=self.run_job, args=(i,))
            p.start()
            self.jobs.append(p)
        for p in self.jobs:
            p.join()

    def run_job(self, i):
        with self.lock:
            print('Sleeping in process {}'.format(i))
            time.sleep(5)

if __name__ == '__main__':
    t = SampleClass()
    t.test_run()
On Windows (which you said you're using), these kinds of things always reduce to details about how multiprocessing plays with pickle, because all Python data crossing process boundaries on Windows is implemented by pickling on the sending end (and unpickling on the receiving end).
My best advice is to avoid doing things that raise such questions to begin with ;-) For example, the code you showed blows up on Windows under Python 2, and also blows up under Python 3 if you use a multiprocessing.Pool method instead of multiprocessing.Process.
It's not just the lock; simply trying to pickle a bound method (like self.run_job) blows up in Python 2. Think about it: you're crossing a process boundary, and there isn't an object corresponding to self on the receiving end. To what object is self.run_job supposed to be bound on the receiving end?
In Python 3, pickling self.run_job also pickles a copy of the self object. So that's the answer: a SampleClass object corresponding to self is created by magic on the receiving end. Clear as mud. t's entire state is pickled, including t.lock. That's why it "works".
See this for more implementation details:
Why can I pass an instance method to multiprocessing.Process, but not a multiprocessing.Pool?
In the long run, you'll suffer the fewest mysteries if you stick to things that were obviously intended to work: pass module-global callable objects (not, e.g., instance methods or local functions), and explicitly pass multiprocessing data objects (whether an instance of Lock, Queue, manager.list, etc.).
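A minimal sketch of that advice applied to the code above: a module-global function, with the Lock passed explicitly as an argument, behaves the same on Windows and Unix.

import multiprocessing as mp
import time

def run_job(lock, i):                 # module-global callable, not a bound method
    with lock:
        print('Sleeping in process {}'.format(i))
        time.sleep(5)

if __name__ == '__main__':
    lock = mp.Lock()
    jobs = [mp.Process(target=run_job, args=(lock, i)) for i in range(10)]
    for p in jobs:
        p.start()
    for p in jobs:
        p.join()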
On Unix operating systems, new processes are created via the fork primitive.
Fork works by cloning the parent process's memory address space and assigning the copy to the child. The child gets a copy of the parent's memory as well as of its file descriptors and shared objects.
This means that when you call fork, if the parent has a file open, the child will have it too. The same applies to shared objects such as pipes, sockets, etc.
In Unix + CPython, locks are realized via the sem_open primitive, which is designed to be shared when forking a process.
I usually recommend against mixing concurrency (multiprocessing in particular) and OOP because it frequently leads to these kinds of misunderstandings.
EDIT:
I just saw that you are using Windows. Tim Peters gave the right answer. For the sake of abstraction, Python tries to provide OS-independent behaviour over its API: when calling an instance method, it will pickle the object and send it over a pipe, thus providing behaviour similar to Unix.
I'd recommend you read the programming guidelines for multiprocessing. Your issue is addressed in particular by the first point:
Avoid shared state
As far as possible one should try to avoid shifting large amounts of data between processes.
It is probably best to stick to using queues or pipes for communication between processes rather than using the lower level synchronization primitives.

Semantics of args passed to python threading.Thread __init__

What are the semantics of data passed to threading.Thread.__init__()? Are they copied over and made local to the thread, or do they continue to be shared with the creating thread? The docs say that args is a tuple, so I assume it will be deep copied, but I would like to make sure.
Basically, I'd like to dump a buffer periodically to disk, and I plan to pass this buffer as the arg to a saving thread's __init__. Can I continue modifying the buffer in the calling thread without worrying whether it will be affected in the saving thread?
Data are generally shared in Python unless you explicitly copy. Dumping a buffer from one thread while modifying it in another is not a safe operation unless the buffer itself has a thread-safe design. You need to synchronise access to the buffer somehow.
Can I continue modifying the buffer in the calling thread without worrying whether it will be affected in the saving thread?
You can if you use multiprocessing.Process instead of threading.Thread: the process gets the data at the time of the Process.start() call via fork, so the data in one process cannot modify the data in another process. However, for IPC you will need to use a Queue or a Pipe, or shared objects from the multiprocessing module (Value, Array, etc.).
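For the threading case, a minimal sketch of the kind of synchronisation meant above (the buffer layout and file name are assumptions, and the individual records are assumed not to be mutated after being appended): take a snapshot under a lock, hand the snapshot to the saving thread, and keep appending to the original.

import json, threading

buffer = []
buffer_lock = threading.Lock()

def save_to_disk(snapshot, path='dump.json'):
    with open(path, 'w') as f:
        json.dump(snapshot, f)

def dump_buffer():
    with buffer_lock:
        snapshot = list(buffer)       # shallow copy taken while writers are blocked
    t = threading.Thread(target=save_to_disk, args=(snapshot,))
    t.start()
    return t

with buffer_lock:
    buffer.append({'event': 'example'})
dump_buffer().join()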
