I wish to have a process continually monitoring RPi input, and set a variable (I have chosen a queue) to True or False to reflect the debounced value. Another process will then capture an image (from a stream). I have written some code just to check I can get multiprocessing and signalling (the queue) working OK (I'm an amateur coder...).
It all works fine with threading, but multiprocessing gives an odd error. Specifically: EOFError: EOF when reading a line. The code outputs:
this computer has the following number of CPU's 6
OK, started thread on separate processor, now we monitor variable
enter something, True is the key word:
Process Process-1:
Traceback (most recent call last):
  File "c:\Python34\lib\multiprocessing\process.py", line 254, in _bootstrap
    self.run()
  File "c:\Python34\lib\multiprocessing\process.py", line 93, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Users\Peter\Documents\NetBeansProjects\test_area\src\test4.py", line 16, in Wait4InputIsTrue
    ValueIs = input("enter something, True is the key word: ")
EOFError: EOF when reading a line
This module monitors the 'port' (I am using the keyboard as an input):
#test4.py
from time import sleep
from multiprocessing import Lock

def Wait4InputIsTrue(TheVar, TheLock):
    while True:
        sleep(0.2)
        TheLock.acquire()
        #try:
        ValueIs = input("enter something, True is the key word: ")
        #except:
        #    ValueIs = False
        if ValueIs == "True":
            TheVar.put(True)
            print("changed TheVar to True")
        TheLock.release()
This module monitors the status, and acts on it:
#test5.py
if __name__ == "__main__":
    from multiprocessing import Process, Queue, Lock, cpu_count
    from time import sleep
    from test4 import Wait4InputIsTrue

    print("this computer has the following number of CPU's", cpu_count())
    LockIt = Lock()
    IsItTrue = Queue(maxsize = 3)
    Wait4 = Process(target = Wait4InputIsTrue, args = (IsItTrue, LockIt))
    Wait4.start()
    print("OK, started thread on separate processor, now we monitor variable")
    while True:
        if IsItTrue.qsize():
            sleep(0.1)
            print("received input from separate thread:", IsItTrue.get())
Note that I have tried adding a try: around the input statement in test4.py, in which case it keeps printing "enter something, True is the key word: " indefinitely, without a carriage return.
I added the Lock in a wild attempt to fix it; it makes no difference.
Anyone any idea why this is happening?
Your problem can be boiled down to a simpler script:
import multiprocessing as mp
import sys

def worker():
    print("Got", repr(sys.stdin.read(1)))

if __name__ == "__main__":
    process = mp.Process(target=worker)
    process.start()
    process.join()
When run, it produces
$ python3 i.py
Got ''
Reading zero bytes means the pipe is closed and input(..) turns that into an EOFError exception.
The multiprocessing module doesn't let you read stdin. That makes sense generally because mixing stdin readers from multiple children is a risky business. In fact, digging into the implementation, multiprocessing/process.py explicitly sets stdin to devnull:
sys.stdin.close()
sys.stdin = open(os.devnull)
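You can see what the child actually ends up with by printing sys.stdin from a worker (a tiny check of the claim above; the exact repr varies by Python version, typically showing os.devnull or a bare descriptor number):

import multiprocessing as mp
import sys

def show():
    print(sys.stdin)  # e.g. <_io.TextIOWrapper name='/dev/null' mode='r' ...>

if __name__ == "__main__":
    p = mp.Process(target=show)
    p.start()
    p.join()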
If you are just using stdin for testing, then the solution is simple: don't do that! If you really need user input, life is quite a bit more difficult. You can use additional queues plus code in the parent to prompt users and get input, as in the sketch below.
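Here is a minimal sketch of that pattern (the queue names and prompt are invented for illustration): the parent keeps ownership of stdin, while the child sends a prompt over one queue and receives the typed line on another.

import multiprocessing as mp

def worker(requests, replies):
    requests.put("enter something, True is the key word: ")  # ask the parent to prompt
    value = replies.get()                                    # block until the parent answers
    if value == "True":
        print("worker saw True")

if __name__ == "__main__":
    requests, replies = mp.Queue(), mp.Queue()
    p = mp.Process(target=worker, args=(requests, replies))
    p.start()
    prompt = requests.get()     # the parent performs the actual input() call
    replies.put(input(prompt))
    p.join()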
Related
As can be seen in the code below, two processes run together via multiprocessing, but each has a moment where it can ask for an input() in the terminal. Is there any way to pause the other process until the answer has been given in the terminal?
File Code_One (an archaic and simple example to speed up the explanation):
from time import sleep

def main():
    sleep(1)
    print('run')
    sleep(1)
    print('run')
    sleep(1)
    input('Please, give the number:')
File Code_Two (an archaic and simple example to speed up the explanation):
from time import sleep

def main():
    sleep(2)
    input('Please, give the number:')
    sleep(1)
    print('run 2')
    sleep(1)
    print('run 2')
    sleep(1)
    print('run 2')
    sleep(1)
    print('run 2')
    sleep(1)
    print('run 2')
File Main_Code:
import Code_One
import Code_Two
import multiprocessing
from time import sleep

def main():
    while True:
        pression = multiprocessing.Process(target=Code_One.main)
        xgoals = multiprocessing.Process(target=Code_Two.main)
        pression.start()
        xgoals.start()
        pression.join()
        xgoals.join()
        print('Done')
        sleep(5)

if __name__ == '__main__':
    main()
How should I proceed in this situation?
In this example, since nothing pauses the other process, whenever either asks for input this error happens:
input('Please, give the number:')
EOFError: EOF when reading a line
Sure, this is possible. To do it you will need to use some sort of interprocess communication (IPC) mechanism to allow the two processes to coordinate. time.sleep is not the best option though, and there are much more efficient ways of tackling it that are specifically made just for this problem.
Probably the most efficient way is to use a multiprocessing.Event, like this:
import multiprocessing
import sys
import os

def Code_One(event, fno):
    proc_name = multiprocessing.current_process().name
    print(f'running {proc_name}')
    sys.stdin = os.fdopen(fno)
    val = input('give proc 1 input: ')
    print(f'proc 1 got input: {val}')
    event.set()

def Code_Two(event, fno):
    proc_name = multiprocessing.current_process().name
    print(f'running {proc_name} and waiting...')
    event.wait()
    sys.stdin = os.fdopen(fno)
    val = input('give proc 2 input: ')
    print(f'proc 2 got input {val}')

if __name__ == '__main__':
    event = multiprocessing.Event()
    pression = multiprocessing.Process(name='code_one', target=Code_One, args=(event, sys.stdin.fileno()))
    xgoals = multiprocessing.Process(name='code_two', target=Code_Two, args=(event, sys.stdin.fileno()))
    xgoals.start()
    pression.start()
    xgoals.join()
    pression.join()
This creates the event object, and the two subprocesses. Event objects have an internal flag that starts out False, and can then be toggled True by any process calling event.set(). If a process calls event.wait() while the flag is False, that process will block until another process calls event.set().
The event is created in the parent process, and passed to each subprocess as an argument. Code_Two begins and calls event.wait(), which blocks until the internal flag in the event is set to True. Code_One executes immediately and then calls event.set(), which sets event's internal flag to True, and allows Code_Two to proceed. At that point both processes have returned and called join, and the program ends.
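Stripped of the stdin plumbing, the handshake alone can be sketched like this (a minimal illustration, not part of the original code):

import multiprocessing

def waiter(event):
    event.wait()                     # blocks here until the flag becomes True
    print("flag set, proceeding")

if __name__ == '__main__':
    event = multiprocessing.Event()  # internal flag starts out False
    p = multiprocessing.Process(target=waiter, args=(event,))
    p.start()
    event.set()                      # flips the flag and releases the waiter
    p.join()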
This is a little hacky because it is also passing the stdin file number from the parent to the child processes. That is necessary because when subprocesses are forked, those file descriptors are closed, so for a child process to read stdin using input it first needs to open the correct input stream (that is what sys.stdin = os.fdopen(fno) is doing). It won't work to just send sys.stdin to the child as another argument, because of the mechanics that Python uses to set up the environment for forked processes (sys.stdin is an IO wrapper object and is not pickleable).
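You can see that failure directly by trying to pickle sys.stdin yourself; in CPython 3 this raises a TypeError (the exact message varies by version):

import pickle
import sys

pickle.dumps(sys.stdin)  # TypeError: cannot pickle '_io.TextIOWrapper' object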
I'm trying to use a cluster of computers to run millions of small simulations. To do this I tried to set up two "servers" on my main computer, one to add input variables to a queue on the network and one to take care of the results.
This is the code for putting stuff into the simulation variables queue:
"""This script reads start parameters and calls on run_sim to run the
simulations"""
import time
from multiprocessing import Process, freeze_support, Manager, Value, Queue, current_process
from multiprocessing.managers import BaseManager
class QueueManager(BaseManager):
pass
class MultiComputers(Process):
def __init__(self, sim_name, queue):
self.sim_name = sim_name
self.queue = queue
super(MultiComputers, self).__init__()
def get_sim_obj(self, offset, db):
"""returns a list of lists from a database query"""
def handle_queue(self):
self.sim_nr = 0
sims = self.get_sim_obj()
self.total = len(sims)
while len(sims) > 0:
if self.queue.qsize() > 100:
self.queue.put(sims[0])
self.sim_nr += 1
print(self.sim_nr, round(self.sim_nr/self.total * 100, 2), self.queue.qsize())
del sims[0]
def run(self):
self.handle_queue()
if __name__ == '__main__':
freeze_support()
queue = Queue()
w = MultiComputers('seed_1_hundred', queue)
w.start()
QueueManager.register('get_queue', callable=lambda: queue)
m = QueueManager(address=('', 8001), authkey=b'abracadabra')
s = m.get_server()
s.serve_forever()
And this script is then run to take care of the results of the simulations:
__author__ = 'axa'
from multiprocessing import Process, freeze_support, Queue
from multiprocessing.managers import BaseManager
import time

class QueueManager(BaseManager):
    pass

class SaveFromMultiComp(Process):
    def __init__(self, sim_name, queue):
        self.sim_name = sim_name
        self.queue = queue
        super(SaveFromMultiComp, self).__init__()

    def run(self):
        res_got = 0
        with open('sim_type1_' + self.sim_name, 'a') as f_1:
            with open('sim_type2_' + self.sim_name, 'a') as f_2:
                while True:
                    if self.queue.qsize() > 0:
                        while self.queue.qsize() > 0:
                            res = self.queue.get()
                            res_got += 1
                            if res[0] == 1:
                                f_1.write(str(res[1]) + '\n')
                            elif res[0] == 2:
                                f_2.write(str(res[1]) + '\n')
                        print(res_got)
                    time.sleep(0.5)

if __name__ == '__main__':
    queue = Queue()
    w = SaveFromMultiComp('seed_1_hundred', queue)
    w.start()
    QueueManager.register('get_queue', callable=lambda: queue)  # as in the sender script
    m = QueueManager(address=('', 8002), authkey=b'abracadabra')
    s = m.get_server()
    s.serve_forever()
These scripts work as expected for the first ~700-800 simulations; after that I get the following error in the terminal running the result-receiving script:
Exception in thread Thread-1:
Traceback (most recent call last):
  File "C:\Python35\lib\threading.py", line 914, in _bootstrap_inner
    self.run()
  File "C:\Python35\lib\threading.py", line 862, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Python35\lib\multiprocessing\managers.py", line 177, in accepter
    t.start()
  File "C:\Python35\lib\threading.py", line 844, in start
    _start_new_thread(self._bootstrap, ())
RuntimeError: can't start new thread
Can anyone give some insight into where and how the threads are spawned? Is a new thread spawned every time I call queue.get(), or how does it work?
And I would be very glad if someone knows what I can do to avoid this failure. (I'm running the script with Python 3.5, 32-bit.)
All signs point to your system running out of the resources it needs to launch a thread (probably memory, but you could also be leaking threads or other resources). You could use OS monitoring tools (top for Linux, Resource Monitor for Windows) to look at the number of threads and memory usage to track this down, but I would recommend you just use an easier, more efficient programming pattern.
While not a perfect comparison, you are essentially hitting the C10K problem: blocking threads that sit waiting for results do not scale well and are prone to resource-exhaustion errors like this one. The solution is to implement async IO patterns (one blocking thread that dispatches other workers), which is straightforward to do in web servers.
A framework like Python's aiohttp should be a good fit for what you want. You just need a handler that can receive the ID of the remote code and the result; the framework should take care of the scaling for you.
So in your case you can keep your launching code, but after it starts the process on the remote machine, kill the thread. Have the remote code then send an HTTP message to your server with 1) its ID and 2) its result. Throw in a little extra code to have it try again if it does not get a 200 'OK' status code, and you should be in much better shape.
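As a rough illustration of that shape, here is a hedged sketch of the collecting side using aiohttp; the route, payload fields, and port are assumptions for illustration, not taken from the original scripts:

from aiohttp import web

results = {}

async def post_result(request):
    data = await request.json()           # expects {"id": ..., "result": ...}
    results[data["id"]] = data["result"]
    return web.Response(text="OK")        # workers retry unless they get a 200

app = web.Application()
app.add_routes([web.post("/result", post_result)])

if __name__ == "__main__":
    web.run_app(app, port=8002)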
I think you have too many threads running for your system. I would first check the system resources and then rethink the program.
Try limiting your threads and use as few as possible.
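One generic way to cap thread usage (a sketch only; it does not change the threads the manager spawns internally) is to funnel work through a fixed-size pool rather than starting one thread per task:

from concurrent.futures import ThreadPoolExecutor

def handle(item):
    return item * 2  # placeholder for the real per-task work

with ThreadPoolExecutor(max_workers=8) as pool:  # at most 8 worker threads exist at once
    print(list(pool.map(handle, range(100))))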
I am trying to make a console for my Python applications, but I ran into a problem:
when printing something using the print() function, whatever is currently typed in the input field is also included. This is purely visual, because the program still works.
I tried searching online, but I did not even know what to search for and had no luck.
This is my code. It prints "foo" until the user types "exit":
import multiprocessing as mp
import os
import time

def f(q):
    while True:
        print(q)
        time.sleep(1)

if __name__=="__main__":
    p=mp.Process(target=f, args=("foo",))
    p.start()
    while True:
        comm=str(input())
        if comm=="exit":
            p.terminate()
            break
When the program is running, the user can still type, but when the program prints something, it also takes whatever is in the input field at the time:
foo
foo
foo
foo
efoo
xfoo
itfoo
When pressing "enter", the program still registers the input correctly and exits the program.
Here is a modification of your code that only prints foo after you have finished typing your input (i.e., after you hit Enter):
import multiprocessing as mp
from multiprocessing import Queue

def f(q, queue):
    while True:
        queue.get()
        print(q)

if __name__=="__main__":
    queue = Queue()
    p=mp.Process(target=f, args=("foo", queue))
    p.start()
    while True:
        comm=str(input())
        queue.put(None)
        if comm=="exit":
            p.terminate()
            break
If terminating the process is all you want your user to be able to do, then you can instruct them to enter Ctrl+C if they wish to stop the operation and then catch the KeyboardInterrupt exception that comes along with it.
import multiprocessing as mp
import os
import time

def f(q):
    while True:
        print(q)
        time.sleep(1)

if __name__=="__main__":
    p=mp.Process(target=f, args=("foo",))
    print("Process starting. Use Ctrl+c anytime to stop it!")
    p.start()
    try:
        while True:
            input() # Trash command
    except KeyboardInterrupt:
        print("Terminating process...")
        p.terminate()
        print("Process terminated...")
If you want to support more complicated commands, then a GUI would be your best approach (as mentioned by John).
So I have a program where, in the "main" process, I fire off a new Process object which is meant to read lines from stdin and append them to a Queue object.
Essentially the basic system setup is that there is a "command getting" process which the user will enter commands/queries, and I need to get those queries to other subsystems running in separate processes. My thinking is to share these via a multiprocessing.Queue which the other systems can read from.
What I have (focusing on just the getting the commands/queries) is basically:
from multiprocessing import Process, Queue

def sub_proc(q):
    some_str = ""
    while True:
        some_str = raw_input("> ")
        if some_str.lower() == "quit":
            return
        q.put_nowait(some_str)

if __name__ == "__main__":
    q = Queue()
    qproc = Process(target=sub_proc, args=(q,))
    qproc.start()
    qproc.join()
    # now at this point q should contain all the strings entered by the user
The problem is that I get:
Process Process-1:
Traceback (most recent call last):
  File "/usr/lib/python2.7/multiprocessing/process.py", line 258, in _bootstrap
    self.run()
  File "/usr/lib/python2.7/multiprocessing/process.py", line 114, in run
    self._target(*self._args, **self._kwargs)
  File "/home/blah/blah/blah/blah.py", line 325, in sub_proc
    some_str = raw_input("> ")
  File "/randompathhere/eclipse/plugins/org.python.pydev_2.1.0.2011052613/PySrc/pydev_sitecustomize/sitecustomize.py", line 181, in raw_input
    ret = original_raw_input(prompt)
EOFError: EOF when reading a line
How do?
I solved a similar issue by passing the original stdin file descriptor to the child process and re-opening it there.
import os
import sys
from multiprocessing import Process, Queue

def sub_proc(q, fileno):
    sys.stdin = os.fdopen(fileno)  # open stdin in this process
    some_str = ""
    while True:
        some_str = raw_input("> ")
        if some_str.lower() == "quit":
            return
        q.put_nowait(some_str)

if __name__ == "__main__":
    q = Queue()
    fn = sys.stdin.fileno()  # get original file descriptor
    qproc = Process(target=sub_proc, args=(q, fn))
    qproc.start()
    qproc.join()
This worked for my relatively simple case. I was even able to use the readline module on the re-opened stream. I don't know how robust it is for more complex systems.
In short, the main process and your second process don't share the same STDIN.
from multiprocessing import Process, Queue
import sys

def sub_proc():
    print sys.stdin.fileno()

if __name__ == "__main__":
    print sys.stdin.fileno()
    qproc = Process(target=sub_proc)
    qproc.start()
    qproc.join()
Run that and you should get two different results for sys.stdin.fileno()
Unfortunately, that doesn't solve your problem. What are you trying to do?
If you don't want to pass stdin to the target process's function, as in #Ashelly's answer, or just need to do it for many different processes, you can do it with multiprocessing.Pool via the initializer argument:
import os, sys, multiprocessing

def square(num=None):
    if not num:
        num = int(raw_input('square what? '))
    return num ** 2

def initialize(fd):
    sys.stdin = os.fdopen(fd)

initargs = [sys.stdin.fileno()]
pool = multiprocessing.Pool(initializer=initialize, initargs=initargs)
print pool.apply(square, [3])
print pool.apply(square)
The above example will print the number 9, followed by a prompt for input and then the square of the input number.
Just be careful not to have multiple child processes reading from the same descriptor at the same time, or things may get... confusing; one way to serialize access is sketched below.
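One way to get that safety, sketched under the same assumptions as above (fork-style start, invented names, Python 3 shown): share a Lock through the initializer so only one worker prompts at a time.

import os, sys, multiprocessing

def initialize(fd, lock):
    global stdin_lock
    sys.stdin = os.fdopen(fd)  # reopen the parent's stdin in this worker
    stdin_lock = lock

def ask(prompt):
    with stdin_lock:           # only one worker may read stdin at a time
        return input(prompt)

if __name__ == '__main__':
    lock = multiprocessing.Lock()
    pool = multiprocessing.Pool(2, initializer=initialize,
                                initargs=(sys.stdin.fileno(), lock))
    print(pool.map(ask, ['first? ', 'second? ']))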
You could use threading and keep it all in the same process (threads share the parent's stdin, so raw_input just works):
from multiprocessing import Queue
from Queue import Empty
from threading import Thread

def sub_proc(q):
    some_str = ""
    while True:
        some_str = raw_input("> ")
        if some_str.lower() == "quit":
            return
        q.put_nowait(some_str)

if __name__ == "__main__":
    q = Queue()
    qproc = Thread(target=sub_proc, args=(q,))
    qproc.start()
    qproc.join()
    while True:
        try:
            print q.get(False)
        except Empty:
            break
I am using pty to read the stdout of a process in a non-blocking way, like this:
import os
import pty
import subprocess

master, slave = pty.openpty()
p = subprocess.Popen(cmd, stdout = slave)
stdout = os.fdopen(master)

while True:
    if p.poll() != None:
        break
    print stdout.readline()
stdout.close()
Everything works fine except that the while-loop occasionally blocks. This is because the line print stdout.readline() waits for something to be read from stdout. But if the program has already terminated, my little script up there will hang forever.
My question is: Is there a way to peek into the stdout object and check if there is data available to be read? If this is not the case it should continue through the while-loop where it will discover that the process actually already terminated and break the loop.
Yes, use the select module's poll:
import select

q = select.poll()
q.register(stdout, select.POLLIN)
and in the while use:
l = q.poll(0)
if not l:
    pass  # no input
else:
    pass  # there is some input
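Put together with the question's loop, the whole thing might look like the following (a Python 3 sketch; cmd stands for whatever command you are running, and readline() is only called once poll reports pending data):

import os
import pty
import select
import subprocess

master, slave = pty.openpty()
p = subprocess.Popen(cmd, stdout=slave)
stdout = os.fdopen(master)

q = select.poll()
q.register(stdout, select.POLLIN)

while True:
    if q.poll(0):               # data is waiting; readline() will not block
        print(stdout.readline())
    elif p.poll() is not None:  # no pending output and the process has exited
        break
stdout.close()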
The select.poll() answer is very neat, but doesn't work on Windows. The following solution is an alternative. It doesn't allow you to peek stdout, but provides a non-blocking alternative to readline() and is based on this answer:
from subprocess import Popen, PIPE
from threading import Thread

def process_output(myprocess):  #output-consuming thread
    nextline = None
    buf = ''
    while True:
        #--- extract line using read(1)
        out = myprocess.stdout.read(1)
        if out == '' and myprocess.poll() != None: break
        if out != '':
            buf += out
            if out == '\n':
                nextline = buf
                buf = ''
        if not nextline: continue
        line = nextline
        nextline = None

        #--- do whatever you want with line here
        print 'Line is:', line
    myprocess.stdout.close()

myprocess = Popen('myprogram.exe', stdout=PIPE)  #output-producing process
p1 = Thread(target=process_output, args=(myprocess,))  #output-consuming thread
p1.daemon = True
p1.start()

#--- do whatever here and then kill process and thread if needed
if myprocess.poll() == None:  #kill process; will automatically stop thread
    myprocess.kill()
    myprocess.wait()
if p1 and p1.is_alive():  #wait for thread to finish
    p1.join()
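A small aside if you adapt this to Python 3 (not from the original answer): stdout yields bytes there by default, so either compare against b'' and b'\n', or ask Popen for text mode so the snippet above keeps working unchanged:

from subprocess import Popen, PIPE

# text=True (Python 3.7+) makes myprocess.stdout yield str rather than bytes
myprocess = Popen('myprogram.exe', stdout=PIPE, text=True)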
Other solutions for non-blocking read have been proposed here, but did not work for me:
- Solutions that require readline (including the Queue-based ones) always block. It is difficult (impossible?) to kill the thread that executes readline. It only gets killed when the process that created it finishes, but not when the output-producing process is killed.
- Mixing low-level fcntl with high-level readline calls may not work properly, as anonnn has pointed out.
- Using select.poll() is neat, but doesn't work on Windows according to the Python docs.
- Using third-party libraries seems overkill for this task and adds additional dependencies.