I'm trying to use a cluster of computers to run millions of small simulations. To do this I tried to set up two "servers" on my main computer: one to add input variables to a queue on the network and one to take care of the results.
This is the code for putting stuff into the simulation variables queue:
"""This script reads start parameters and calls on run_sim to run the
simulations"""
import time
from multiprocessing import Process, freeze_support, Manager, Value, Queue, current_process
from multiprocessing.managers import BaseManager
class QueueManager(BaseManager):
pass
class MultiComputers(Process):
def __init__(self, sim_name, queue):
self.sim_name = sim_name
self.queue = queue
super(MultiComputers, self).__init__()
def get_sim_obj(self, offset, db):
"""returns a list of lists from a database query"""
def handle_queue(self):
self.sim_nr = 0
sims = self.get_sim_obj()
self.total = len(sims)
while len(sims) > 0:
if self.queue.qsize() > 100:
self.queue.put(sims[0])
self.sim_nr += 1
print(self.sim_nr, round(self.sim_nr/self.total * 100, 2), self.queue.qsize())
del sims[0]
def run(self):
self.handle_queue()
if __name__ == '__main__':
freeze_support()
queue = Queue()
w = MultiComputers('seed_1_hundred', queue)
w.start()
QueueManager.register('get_queue', callable=lambda: queue)
m = QueueManager(address=('', 8001), authkey=b'abracadabra')
s = m.get_server()
s.serve_forever()
And this is the script that runs on the other queue to take care of the results of the simulations:
__author__ = 'axa'
from multiprocessing import Process, freeze_support, Queue
from multiprocessing.managers import BaseManager
import time


class QueueManager(BaseManager):
    pass


class SaveFromMultiComp(Process):
    def __init__(self, sim_name, queue):
        self.sim_name = sim_name
        self.queue = queue
        super(SaveFromMultiComp, self).__init__()

    def run(self):
        res_got = 0
        with open('sim_type1_' + self.sim_name, 'a') as f_1:
            with open('sim_type2_' + self.sim_name, 'a') as f_2:
                while True:
                    if self.queue.qsize() > 0:
                        while self.queue.qsize() > 0:
                            res = self.queue.get()
                            res_got += 1
                            if res[0] == 1:
                                f_1.write(str(res[1]) + '\n')
                            elif res[0] == 2:
                                f_2.write(str(res[1]) + '\n')
                            print(res_got)
                    time.sleep(0.5)


if __name__ == '__main__':
    queue = Queue()
    w = SaveFromMultiComp('seed_1_hundred', queue)
    w.start()

    m = QueueManager(address=('', 8002), authkey=b'abracadabra')
    s = m.get_server()
    s.serve_forever()
These scripts work as expected for the first ~700-800 simulations; after that I get the following error in the terminal running the result-receiving script:
Exception in thread Thread-1:
Traceback (most recent call last):
  File "C:\Python35\lib\threading.py", line 914, in _bootstrap_inner
    self.run()
  File "C:\Python35\lib\threading.py", line 862, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Python35\lib\multiprocessing\managers.py", line 177, in accepter
    t.start()
  File "C:\Python35\lib\threading.py", line 844, in start
    _start_new_thread(self._bootstrap, ())
RuntimeError: can't start new thread
Can anyone give some insight into where and how the threads are spawned? Is a new thread spawned every time I call queue.get(), or how does it work?
And I would be very glad if someone knows what I can do to avoid this failure. (I'm running the script with Python 3.5, 32-bit.)
All signs point to your system being out of the resources it needs to launch a thread (probably memory, but you could also be leaking threads or other resources). You could use OS monitoring tools (top on Linux, Resource Monitor on Windows) to look at the number of threads and the memory usage and track this down, but I would recommend you just use an easier, more efficient programming pattern.
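If you want to confirm that it is thread growth, a quick sketch like this (standard library only; the interval is arbitrary) can report the live thread count from inside the receiving script while it runs:

import threading
import time

def monitor_threads(interval=5):
    # Periodically report how many threads are alive in this process.
    while True:
        print('active threads:', threading.active_count())
        time.sleep(interval)

# For example, start it as a daemon thread just before s.serve_forever():
# threading.Thread(target=monitor_threads, daemon=True).start()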
While not a perfect comparison, you are essentially running into the C10K problem: blocking threads that sit waiting for results do not scale well and are prone to resource-exhaustion errors like this. The solution that emerged was to implement async I/O patterns (one non-blocking loop that dispatches work to other workers), and this is pretty straightforward to do in web servers.
A framework like Python's aiohttp should be a good fit for what you want. You just need a handler that can receive the ID of the remote job and its result, and the framework should take care of the scaling for you.
So in your case you can keep your launching code, but after it starts the process on the remote machine, kill the thread. Have the remote code then send an HTTP message to your server with 1) its ID and 2) its result. Throw in a little extra code to have it retry if it does not get a 200 'OK' status code, and you should be in much better shape.
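As a rough illustration (not a drop-in replacement for your scripts; the /result route, the JSON payload with "id" and "result" fields, and the output file name are all assumptions), the collecting side with aiohttp could look something like this:

from aiohttp import web

async def handle_result(request):
    data = await request.json()          # expects {"id": ..., "result": ...}
    with open('sim_results.txt', 'a') as f:
        f.write('{}\t{}\n'.format(data['id'], data['result']))
    return web.Response(text='OK')       # 200 tells the worker it can stop retrying

app = web.Application()
app.add_routes([web.post('/result', handle_result)])

if __name__ == '__main__':
    web.run_app(app, port=8002)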
I think you have too many threads running for your system. I would first check your system resources and then rethink the program.
Try limiting your threads and use as few as possible.
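For example, instead of one thread per task you can push everything through a small, fixed pool (a generic sketch; worker_count and simulate() are placeholders, not your code):

from concurrent.futures import ThreadPoolExecutor

def simulate(params):
    # placeholder for running one simulation
    return params

if __name__ == '__main__':
    worker_count = 8  # small, fixed number instead of one thread per task
    with ThreadPoolExecutor(max_workers=worker_count) as pool:
        for result in pool.map(simulate, range(1000)):
            pass  # save the result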
Related
I am trying to download a file over CAN bus using python-can. It involves sending data very quickly (on the order of 2-3 messages per millisecond), and I am trying to log these messages to file without impacting the sending speed. Doing the file I/O directly slows down the sending due to the logging overhead. I tried various methods to improve this (including using queues and reading the queue from another thread, but this was not much better, possibly due to the GIL). Most of these tests started with the Python logging module and trying various handlers (QueueHandler/QueueListener, MemoryHandler, etc.).
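(For reference, the QueueHandler/QueueListener pattern I tried looked roughly like this minimal sketch, with placeholder names rather than my real setup:)

import logging
import queue
from logging.handlers import QueueHandler, QueueListener

log_q = queue.Queue()
listener = QueueListener(log_q, logging.FileHandler('can_messages.log'))
listener.start()

logger = logging.getLogger('can_dl')
logger.setLevel(logging.INFO)
logger.addHandler(QueueHandler(log_q))   # the send path only enqueues the record

logger.info('message sent at high rate')
listener.stop()                          # flushes and stops the helper thread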
I've managed to make some significant improvements by moving the file I/O into a separate process. I initially ran into an issue with the overhead of sending data from one process to another - so I now buffer it. Now, instead of taking 150% longer with direct file I/O in the main process, I see ~20% increase in time.
I thought that, since this is running in another process, I could also print() the data to the console (which I know is relatively expensive), but I see a huge increase in the file download time.
What is happening such that print() affects the main process even though it is running in a child process?
Code below:
file_logger_mp() is called from the main process and starts the child process that does the logging. The main process then uses the returned log_hdl function to add a message to the buffer. When the buffer reaches a certain size (100 messages), it is sent to the child process for logging to file or printing to the console.
Device: rpi4. And the main process uses asyncio, in case that affects it.
import atexit
import multiprocessing


def file_logger_mp(logger_name: str, log_file_pth: str):
    conn_rec, conn_send = multiprocessing.Pipe()
    log_hdl_c = MyLogger(conn_send)
    log_hdl = log_hdl_c.log_hdl  # This is used by the main code to provide log messages to the child process
    listener = MyProcess(conn_rec, log_file_pth)
    atexit.register(log_hdl_c.final_flush, listener)
    listener.start()  # Start the child process
    return log_hdl, listener


class MyLogger():
    def __init__(self, conn_send) -> None:
        self.buffer = []
        self.conn_send = conn_send

    def log_hdl(self, msg):
        self.buffer.append(msg)
        if len(self.buffer) > 100:
            self.conn_send.send(self.buffer)
            self.buffer.clear()

    def final_flush(self, listener):
        self.conn_send.send(self.buffer)
        listener.terminate()


class MyProcess(multiprocessing.Process):
    def __init__(self, queue, f_hdl):
        multiprocessing.Process.__init__(self)
        self.exit = multiprocessing.Event()
        self.queue = queue
        self.f_hdl = f_hdl

    def run(self):
        f = open(self.f_hdl, "w+")

        while not self.exit.is_set():
            try:
                record = self.queue.recv()
                for msg in record:
                    output = str(msg)
                    f.write(output + '\n')
                    print(output)  # This `print()` causes large delays to the main process?!
                record.clear()
            except Exception:
                import sys, traceback
                print('Whoops! Problem:', file=sys.stderr)
                traceback.print_exc(file=sys.stderr)

        for msg in record:  # Flush any pending records before finishing
            f.write(str(msg) + '\n')
        f.close()

    def terminate(self):
        self.exit.set()
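For completeness, this is roughly how the helper above gets used from the main process (a simplified sketch using the definitions above; the file name and message contents here are made up):

if __name__ == '__main__':
    log_hdl, listener = file_logger_mp('can_dl', 'can_download.log')

    for i in range(1000):
        log_hdl('frame {}'.format(i))  # buffered; sent to the child every ~100 messages

    # final_flush() is registered with atexit inside file_logger_mp(), so the
    # remaining buffer is sent and the child is terminated when the program exits.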
I am building a watchdog timer that runs another Python program and, if it fails to find a check-in from any of the threads, shuts down the whole program. This is so it will, eventually, be able to take control of needed communication ports. The code for the timer is as follows:
from multiprocessing import Process, Queue
from time import sleep
from copy import deepcopy

PATH_TO_FILE = r'.\test_program.py'
WATCHDOG_TIMEOUT = 2


class Watchdog:
    def __init__(self, filepath, timeout):
        self.filepath = filepath
        self.timeout = timeout
        self.threadIdQ = Queue()
        self.knownThreads = {}

    def start(self):
        threadIdQ = self.threadIdQ

        process = Process(target=self._executeFile)
        process.start()

        try:
            while True:
                unaccountedThreads = deepcopy(self.knownThreads)

                # Empty queue since last wake. Add new thread IDs to knownThreads,
                # and account for all known thread IDs in the queue
                while not threadIdQ.empty():
                    threadId = threadIdQ.get()

                    if threadId in self.knownThreads:
                        unaccountedThreads.pop(threadId, None)
                    else:
                        print('New threadId < {} > discovered'.format(threadId))
                        self.knownThreads[threadId] = False

                # If there is a known thread that is unaccounted for, then it has
                # either hung or crashed. Shut everything down.
                if len(unaccountedThreads) > 0:
                    print('The following threads are unaccounted for:\n')
                    for threadId in unaccountedThreads:
                        print(threadId)
                    print('\nShutting down!!!')
                    break
                else:
                    print('No unaccounted threads...')

                sleep(self.timeout)

        # Account for any exceptions thrown in the watchdog timer itself
        except:
            process.terminate()
            raise

        process.terminate()

    def _executeFile(self):
        with open(self.filepath, 'r') as f:
            exec(f.read(), {'wdQueue': self.threadIdQ})


if __name__ == '__main__':
    wd = Watchdog(PATH_TO_FILE, WATCHDOG_TIMEOUT)
    wd.start()
I also have a small program to test the watchdog functionality:
from time import sleep
from threading import Thread
from queue import SimpleQueue

Q_TO_Q_DELAY = 0.013


class QToQ:
    def __init__(self, processQueue, threadQueue):
        self.processQueue = processQueue
        self.threadQueue = threadQueue
        Thread(name='queueToQueue', target=self._run).start()

    def _run(self):
        pQ = self.processQueue
        tQ = self.threadQueue

        while True:
            while not tQ.empty():
                sleep(Q_TO_Q_DELAY)
                pQ.put(tQ.get())


def fastThread(q):
    while True:
        print('Fast thread, checking in!')
        q.put('fastID')
        sleep(0.5)


def slowThread(q):
    while True:
        print('Slow thread, checking in...')
        q.put('slowID')
        sleep(1.5)


def hangThread(q):
    print('Hanging thread, checked in')
    q.put('hangID')
    while True:
        pass


print('Hello! I am a program that spawns threads!\n\n')

threadQ = SimpleQueue()

Thread(name='fastThread', target=fastThread, args=(threadQ,)).start()
Thread(name='slowThread', target=slowThread, args=(threadQ,)).start()
Thread(name='hangThread', target=hangThread, args=(threadQ,)).start()

# wdQueue is injected into the exec() globals by the watchdog
QToQ(wdQueue, threadQ)
As you can see, I need to have the threads put into a queue.Queue, while a separate object slowly feeds the output of the queue.Queue into the multiprocessing queue. If instead I have the threads put directly into the multiprocessing queue, or do not have the QToQ object sleep in between puts, the multiprocessing queue will lock up, and will appear to always be empty on the watchdog side.
Now, as the multiprocessing queue is supposed to be thread and process safe, I can only assume I have messed something up in the implementation. My solution seems to work, but also feels hacky enough that I feel I should fix it.
I am using Python 3.7.2, if it matters.
I suspect that test_program.py exits.
I changed the last few lines to this:
tq = threadQ
# tq = wdQueue # option to send messages direct to WD
t1 = Thread(name='fastThread', target=fastThread, args=(tq,))
t2 = Thread(name='slowThread', target=slowThread, args=(tq,))
t3 = Thread(name='hangThread', target=hangThread, args=(tq,))
t1.start()
t2.start()
t3.start()
QToQ(wdQueue, threadQ)
print('Joining with threads...')
t1.join()
t2.join()
t3.join()
print('test_program exit')
The calls to join() mean that the test program never exits by itself, since none of the threads ever exit.
So, as written, t3 hangs, the watchdog detects the unaccounted-for thread, and it stops the test program.
If t3 is removed from the above program, the other two threads are well behaved and the watchdog allows the test program to continue indefinitely.
When I have an error in my code, I'd like my processes to exit, but I have some strange behavior that I don't know how to work around.
This code errors out and closes the processes as expected:
from multiprocessing import Queue, Pool


def worker(queue):
    raise error


task_queue = Queue(10)
the_pool = Pool(1, worker, (task_queue,))
But this one spins off an infinite number of new processes which all error out (but followed up by yet new processes):
from multiprocessing import Queue, Pool


def worker(queue):
    raise error


task_queue = Queue(10)
the_pool = Pool(1, worker, (task_queue,))

while True:  # <-- added this
    pass
How can I effectively stop the second from spinning off infinite new processes?
I am getting a BrokenPipeError when threads which employ multiprocessing.JoinableQueue spawn processes. It seems to happen after the program has finished working and is trying to exit, because it does everything it is supposed to do. What does it mean, is there a way to fix this, or is it safe to ignore?
import requests
import multiprocessing
from multiprocessing import JoinableQueue
from queue import Queue
import threading


class ProcessClass(multiprocessing.Process):
    def __init__(self, func, in_queue, out_queue):
        super().__init__()
        self.in_queue = in_queue
        self.out_queue = out_queue
        self.func = func

    def run(self):
        while True:
            arg = self.in_queue.get()
            self.func(arg, self.out_queue)
            self.in_queue.task_done()


class ThreadClass(threading.Thread):
    def __init__(self, func, in_queue, out_queue):
        super().__init__()
        self.in_queue = in_queue
        self.out_queue = out_queue
        self.func = func

    def run(self):
        while True:
            arg = self.in_queue.get()
            self.func(arg, self.out_queue)
            self.in_queue.task_done()


def get_urls(host, out_queue):
    r = requests.get(host)
    out_queue.put(r.text)
    print(r.status_code, host)


def get_title(text, out_queue):
    print(text.strip('\r\n ')[:5])


if __name__ == '__main__':

    def test():
        q1 = JoinableQueue()
        q2 = JoinableQueue()

        for i in range(2):
            t = ThreadClass(get_urls, q1, q2)
            t.daemon = True
            t.setDaemon(True)
            t.start()

        for i in range(2):
            t = ProcessClass(get_title, q2, None)
            t.daemon = True
            t.start()

        for host in ("http://ibm.com", "http://yahoo.com", "http://google.com", "http://amazon.com", "http://apple.com",):
            q1.put(host)

        q1.join()
        q2.join()

    test()
    print('Finished')
Program output:
200 http://ibm.com
<!DOC
200 http://google.com
<!doc
200 http://yahoo.com
<!DOC
200 http://apple.com
<!DOC
200 http://amazon.com
<!DOC
Finished
Exception in thread Thread-2:
Traceback (most recent call last):
  File "C:\Python\33\lib\multiprocessing\connection.py", line 313, in _recv_bytes
    nread, err = ov.GetOverlappedResult(True)
BrokenPipeError: [WinError 109] The pipe has been ended

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Python\33\lib\threading.py", line 901, in _bootstrap_inner
    self.run()
  File "D:\Progs\Uspat\uspat\spider\run\threads_test.py", line 31, in run
    arg = self.in_queue.get()
  File "C:\Python\33\lib\multiprocessing\queues.py", line 94, in get
    res = self._recv()
  File "C:\Python\33\lib\multiprocessing\connection.py", line 251, in recv
    buf = self._recv_bytes()
  File "C:\Python\33\lib\multiprocessing\connection.py", line 322, in _recv_bytes
    raise EOFError
EOFError
....
(cut same errors for other threads.)
If I switch the JoinableQueue to a queue.Queue for the multithreading part, everything is fixed, but why?
This is happening because you're leaving the background threads blocking in a multiprocessing.Queue.get call when the main thread exits, but it only happens in certain conditions:
A daemon thread is running and blocking on a multiprocessing.Queue.get when the main thread exits.
A multiprocessing.Process is running.
The multiprocessing context is something other than 'fork'.
The exception is telling you that the other end of the Connection that the multiprocessing.JoinableQueue listens to while it's inside a get() call sent an EOF. Generally this means the other side of the Connection has closed. It makes sense that this happens during shutdown: Python is cleaning up all objects prior to exiting the interpreter, and part of that cleanup involves closing all the open Connection objects. What I haven't been able to figure out yet is why it only (and always) happens if a multiprocessing.Process has been spawned (not forked, which is why it doesn't happen on Linux by default) and is still running. I can even reproduce it if I create a multiprocessing.Process that just sleeps in a while loop; it doesn't take any Queue objects at all. For whatever reason, the presence of a running, spawned child process seems to guarantee the exception will be raised. It might simply cause the order in which things are destroyed to be exactly right for the race condition to occur, but that's a guess.
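Here is the kind of minimal reproduction I mean (a sketch, not your code; whether it triggers on a given run depends on cleanup ordering, so treat it as illustrative):

import multiprocessing
import threading
import time

def child_sleep():
    # The child never touches a queue; its mere presence matters.
    while True:
        time.sleep(1)

if __name__ == '__main__':
    # set_start_method needs Python 3.4+; on Windows 'spawn' is already the default.
    multiprocessing.set_start_method('spawn')
    q = multiprocessing.JoinableQueue()

    t = threading.Thread(target=q.get, daemon=True)  # blocks forever in get()
    t.start()

    p = multiprocessing.Process(target=child_sleep, daemon=True)
    p.start()

    time.sleep(1)
    # The main thread exits here; during interpreter cleanup the queue's underlying
    # Connection is closed while the daemon thread is still inside get(), which
    # surfaces as the BrokenPipeError / EOFError shown in the question.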
In any case, using a queue.Queue instead of multiprocessing.JoinableQueue is a good way to fix it, since you don't actually need a multiprocessing.Queue there. You could also make sure that the background threads and/or background processes are shut down before the main thread, by sending sentinels to their queues. So, make both run methods check for the sentinel:
def run(self):
    for arg in iter(self.in_queue.get, None):  # None is the sentinel
        self.func(arg, self.out_queue)
        self.in_queue.task_done()
    self.in_queue.task_done()  # mark the sentinel itself as done
And then send the sentinels when you're done:
threads = []
for i in range(2):
    t = ThreadClass(get_urls, q1, q2)
    t.daemon = True
    t.setDaemon(True)
    t.start()
    threads.append(t)

# 'blah' is a placeholder for any long-running target; as noted above, a running
# spawned child process like this is enough to trigger the error on exit.
p = multiprocessing.Process(target=blah)
p.daemon = True
p.start()

procs = []
for i in range(2):
    t = ProcessClass(get_title, q2, None)
    t.daemon = True
    t.start()
    procs.append(t)

for host in ("http://ibm.com", "http://yahoo.com", "http://google.com", "http://amazon.com", "http://apple.com",):
    q1.put(host)

q1.join()
# All items have been consumed from the input queue, let's start shutting down.
for t in procs:
    q2.put(None)
    t.join()
for t in threads:
    q1.put(None)
    t.join()
q2.join()
I have an idle background process to process data in a queue, which I've implemented in the following way. The data passed in this example is just an integer, but I will be passing lists with up to 1000 integers and putting up to 100 lists on the queue per sec. Is this the correct approach, or should I be looking at more elaborate RPC and server methods?
import multiprocessing
import Queue
import time


class MyProcess(multiprocessing.Process):

    def __init__(self, queue, cmds):
        multiprocessing.Process.__init__(self)
        self.q = queue
        self.cmds = cmds

    def run(self):
        exit_flag = False
        while True:
            try:
                obj = self.q.get(False)
                print obj
            except Queue.Empty:
                if exit_flag:
                    break
                else:
                    pass
            if not exit_flag and self.cmds.poll():
                cmd = self.cmds.recv()
                if cmd == -1:
                    exit_flag = True
            time.sleep(.01)


if __name__ == '__main__':
    queue = multiprocessing.Queue()
    proc2main, main2proc = multiprocessing.Pipe(duplex=False)
    p = MyProcess(queue, proc2main)
    p.start()
    for i in range(5):
        queue.put(i)
    main2proc.send(-1)
    proc2main.close()
    main2proc.close()
    # Wait for the worker to finish
    queue.close()
    queue.join_thread()
    p.join()
It depends on how long it will take to process the data. I can't tell because I don't have a sample of the data, but in general it is better to move to more elaborate RPC and server methods when you need things like load balancing, guaranteed uptime, or scalability. Just remember that these things add complexity, which may make your application harder to deploy, debug, and maintain. They will also increase the latency of processing a task (which may or may not be a concern to you).
I would test it with some sample data, and determine if you need the scalability that multiple servers provide.
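If it helps, a throw-away benchmark along these lines (Python 3 syntax; the payload size and count mirror the numbers in your question but are otherwise made up) will tell you quickly whether the plain queue keeps up:

import multiprocessing
import time

def consumer(q):
    count = 0
    while True:
        item = q.get()
        if item is None:        # sentinel: producer is done
            break
        count += 1
    print('consumed', count, 'lists')

if __name__ == '__main__':
    q = multiprocessing.Queue()
    p = multiprocessing.Process(target=consumer, args=(q,))
    p.start()

    payload = list(range(1000))     # one "list of up to 1000 integers"
    start = time.time()
    for _ in range(1000):           # about 10 seconds of data at 100 lists/sec
        q.put(payload)
    q.put(None)
    p.join()
    print('elapsed:', round(time.time() - start, 2), 'seconds')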