Exchanging keystroke data among multiple Python processes - python

I am running two separate Processes in my script. The first process, p1, starts a oneSecondTimer routine that executes exactly once per second and does some work. The second process, p2, fires off a keyboard listener that, well, listens to the keyboard.
At the moment, I want the p1 process to stop when the user presses the escape key. I tried using a global variable, and it didn't work. I tried using a queue; it worked, but it is definitely not the most elegant solution out there. It's actually an ugly workaround that is not going to scale.
Eventually, the script will have a number of separate parallel processes that will be controlled (not just started/stopped) by pressing various keys.
Here's the code,
import time
from pynput import keyboard
from multiprocessing import Process, Queue

def on_release(key):
    if key == keyboard.Key.esc:
        print('escaped!')
        # Stop listener
        return False

def keyboardListener(q):
    with keyboard.Listener(on_release=on_release) as listener:
        listener.join()
    print('Keyboard Listener Terminated!!!')
    # Make the queue NOT EMPTY
    q.put('Terminate')

def oneSecondTimer(q):
    starttime = time.time()
    # Terminate the infinite loop if
    # queue is NOT EMPTY
    while (not q.qsize()):
        print("tick")
        time.sleep(1.0 - ((time.time() - starttime) % 1.0))
    return False

if __name__ == '__main__':
    q = Queue()
    p1 = Process(target=oneSecondTimer, args=(q,))
    p1.start()
    p2 = Process(target=keyboardListener, args=(q,))
    p2.start()

Finally managed to make this work.
In the above snippet, calling listener.join() on the keyboard listener was blocking the execution of the rest of the keyboardListener(q) process until the on_release(key) function stopped, because that is exactly what .join() is supposed to do: it is a blocking call.
In the following snippet, the keyboard.Listener thread is simply started inside the keyboardListener(q) process. A while loop keeps track of a variable called fetchKeyPress, which does what the name implies: it fetches the pressed key in the on_release(key) callback. Key presses captured in fetchKeyPress are pumped into the Queue called q that is shared between the two processes, keyboardListener(q) and oneSecondTimer(q). The keyboardListener loop runs four times as fast as the oneSecondTimer loop, has logic to exit its while loop, and prevents flooding of the queue if the user presses the same key continuously or repeatedly.
The oneSecondTimer(q) process runs every second. If q is not empty, it spits out whatever is in q. It also has while-loop exit logic built into it.
Now I can utilize the data (key presses) acquired by process p2 in the other parallel running process p1.
p2 is the producer. p1 is the consumer.
import time
from pynput import keyboard
from multiprocessing import Process, Queue

fetchKeyPress = 10

def on_release(key):
    global fetchKeyPress
    fetchKeyPress = key
    if key == keyboard.Key.esc:
        fetchKeyPress = 0
        print('escaped!')
        # Stop listener
        return False

def keyboardListener(q):
    global fetchKeyPress
    prevKeyFetch = 10  # Keep track of the previous keyPress
    keyboard.Listener(on_release=on_release).start()
    while (fetchKeyPress):
        print('Last Key Pressed was ', fetchKeyPress)
        # Fill the Queue only when a new key is pressed
        if (not (fetchKeyPress == prevKeyFetch)):
            q.put(fetchKeyPress)
            # Update the previous keyPress
            prevKeyFetch = fetchKeyPress
        time.sleep(0.25)
    print('Keyboard Listener Terminated!!!')
    q.put('Terminate')

def oneSecondTimer(q):
    runner = True  # Runs the while() loop
    starttime = time.time()
    while (runner):
        print('\ttick')
        if (not q.empty()):
            qGet = q.get()
            print('\tQueue Size ', q.qsize())
            print('\tQueue out ', qGet)
            # Condition to terminate the program
            if (qGet == 'Terminate'):
                # Make runner = False to terminate the While loop
                runner = False
        time.sleep(1.0 - ((time.time() - starttime) % 1.0))
    return False

if __name__ == '__main__':
    q = Queue()
    p1 = Process(target=oneSecondTimer, args=(q,))
    p1.start()
    p2 = Process(target=keyboardListener, args=(q,))
    p2.start()
But I think at the end of the day, I am probably just going to use the https://pypi.org/project/keyboard/ library because of its simplicity. Thanks to @toti08 for the suggestion in the comments above.
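For comparison, here is a minimal sketch of the same escape-to-stop loop using that library (assuming keyboard is installed; note it needs root privileges on Linux). keyboard.is_pressed() polls the key state directly, so no listener thread or queue is required:

import time
import keyboard  # pip install keyboard

def one_second_timer():
    starttime = time.time()
    while not keyboard.is_pressed('esc'):  # poll the escape key each tick
        print('tick')
        time.sleep(1.0 - ((time.time() - starttime) % 1.0))
    print('escaped!')

if __name__ == '__main__':
    one_second_timer()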

Related

How to stop all processes if one of them changes a global "stop" variable to True

import multiprocessing

global stop
stop = False

def makeprocesses():
    processes = []
    for _ in range(50):
        p = multiprocessing.Process(target=runprocess)
        processes.append(p)
    for _ in range(50):
        processes[_].start()
    runprocess()

def runprocess():
    global stop
    while stop == False:
        x = 1  # do something here
        if x == 1:
            stop = True
            makeprocesses()
    while stop == True:
        x = 0
        makeprocesses()
How could I make all the other 49 processes stop if just one changes stop to True?
I would think that since stop is a global variable, once one process changes stop, all the others would stop.
No. Each process gets its own copy. It's global to the script, but not across processes. Remember that each process has a completely separate address space. It gets a COPY of the first process' data.
If you need to communicate across processes, you need to use one of the synchronization techniques in the multiprocessing documentation (https://docs.python.org/3/library/multiprocessing.html#synchronization-primitives), like an Event or a shared object.
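As a minimal sketch of the "shared object" route (same stop-flag scenario as the question, with the worker count reduced for brevity), a multiprocessing.Value lives in shared memory, so every process reads and writes the same flag:

import multiprocessing
import time

def runprocess(stop_flag, name):
    while not stop_flag.value:
        time.sleep(0.1)  # do something here
        if name == 1:    # one worker decides that everyone should stop
            stop_flag.value = True
    print('process', name, 'stopped')

if __name__ == '__main__':
    stop_flag = multiprocessing.Value('b', False)  # shared boolean ('b' = signed char)
    processes = [multiprocessing.Process(target=runprocess, args=(stop_flag, i)) for i in range(5)]
    for p in processes:
        p.start()
    for p in processes:
        p.join()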
Whenever you want to synchronise processes you need some shared context, and you have to make sure it is safe. As @Tim Roberts mentioned, these can be taken from the synchronization primitives in the multiprocessing documentation (https://docs.python.org/3/library/multiprocessing.html#synchronization-primitives).
Try something like this:
import multiprocessing
from multiprocessing import Event
from time import sleep

def makeprocesses():
    processes = []
    e = Event()
    for i in range(50):
        p = multiprocessing.Process(target=runprocess, args=(e, i))
        p.start()
        processes.append(p)
    for p in processes:
        p.join()

def runprocess(e, name=0):
    while not e.is_set():
        sleep(1)
        if name == 1:
            e.set()  # here we make all the other processes stop
    print("end")

if __name__ == '__main__':
    makeprocesses()
My favorite way is using a cancellation token, which is an object wrapping what we did here.
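There is no cancellation token in the Python standard library; a rough, hypothetical sketch of the idea is just a thin wrapper around an Event:

from multiprocessing import Event

class CancellationToken:
    """Hypothetical helper: wraps an Event behind a friendlier name."""
    def __init__(self):
        self._event = Event()

    def cancel(self):
        self._event.set()

    @property
    def is_cancelled(self):
        return self._event.is_set()

A worker would then loop on while not token.is_cancelled, and any process holding the token can call token.cancel().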

Run a function (in a new thread) on a subprocess that is already running

I have some expensive long-running functions that I'd like to run on multiple cores. This is easy to do with multiprocessing. But I will also need to periodically run a function that calculates a value based on the state (global variables) of a specific process. I think this should be possible by simply spawning a thread on the subprocess.
Here's a simplified example. Please suggest how I can call process_query_state().
import multiprocessing
import time

def process_runner(x: int):
    global xx
    xx = x
    while True:
        time.sleep(0.1)
        xx += 1  # actually an expensive calculation

def process_query_state() -> int:
    y = xx * 2  # actually an expensive calculation
    return y

def main():
    processes = {}
    for x in range(10):
        p = multiprocessing.get_context('spawn').Process(target=process_runner, args=(x,))
        p.start()
        processes[x] = p
    while True:
        time.sleep(1)
        print(processes[3].process_query_state())  # this doesn't actually work

if __name__ == '__main__':
    main()
I see two problems:
A Process is not RPC (Remote Procedure Call), and you can't execute another function such as process_query_state from the main process. You can only use a queue to send some information to the other process, and that process has to periodically check whether there is a new message.
A process runs only one function, so it would have to stop that function when it gets a message to run another function, or it would have to run threads inside the process to run many functions at the same time.
EDIT: this may create another problem: if two functions work at the same time on the same data, then one can change a value before the other has used the old value, and this can produce wrong results.
I created an example which uses queues to send messages to process_runner; it periodically checks whether there is a message, runs process_query_state, and sends the result back to the main process.
The main process waits for the result from the selected process, which blocks the code; if you want to work with more processes, you will have to make it more complex.
import multiprocessing
import time

def process_query_state():
    y = xx * 2  # actually an expensive calculation
    return y

def process_runner(x: int, queue_in, queue_out):
    global xx
    xx = x
    # reverse direction
    q_in = queue_out
    q_out = queue_in
    while True:
        time.sleep(0.1)
        xx += 1  # actually an expensive calculation
        # run other function - it will block main calculations
        # but this way it will use correct `xx` (other calculations will not change it)
        if not q_in.empty():
            if q_in.get() == 'run':
                result = process_query_state()
                q_out.put(result)

def main():
    processes = {}
    for x in range(4):
        ctx = multiprocessing.get_context('spawn')
        q_in = ctx.Queue()
        q_out = ctx.Queue()
        p = ctx.Process(target=process_runner, args=(x, q_in, q_out))
        p.start()
        processes[x] = (p, q_in, q_out)
    while True:
        time.sleep(1)
        q_in = processes[3][1]
        q_out = processes[3][2]
        q_out.put('run')
        # non-blocking version
        #if not q_in.empty():
        #    print(q_in.get())
        # blocking version
        print(q_in.get())

if __name__ == '__main__':
    main()

Updating Popup.Animated to play a gif until an external task is completed (PySimpleGUI)

I am looking to create a UI that displays an animated popup while another task is being carried out, and that exits upon completion. I am using PySimpleGUI and am basing my work on the example listed here. I can get a single frame of the animation to display once I start the code, and it exits upon completion of the task, but I can't get it to play the entire gif. Code:
import queue
import threading
import time
import PySimpleGUI as sg

# ############################# User callable CPU intensive code #############################
# Put your long running code inside this "wrapper"
# NEVER make calls to PySimpleGUI from this thread (or any thread)!
# Create one of these functions for EVERY long-running call you want to make
def long_function_wrapper(work_id, gui_queue):
    # LOCATION 1
    # this is our "long running function call"
    # time.sleep(10)  # sleep for a while as a simulation of a long-running computation
    x = 0
    while True:
        print(x)
        time.sleep(0.5)
        x = x + 1
        if x == 5:
            break
    # at the end of the work, before exiting, send a message back to the GUI indicating end
    gui_queue.put('{} ::: done'.format(work_id))
    # at this point, the thread exits
    return

def the_gui():
    gui_queue = queue.Queue()  # queue used to communicate between the gui and long-running code
    layout = [[sg.Text('Multithreaded Work Example')],
              [sg.Text('This is a Test.', size=(25, 1), key='_OUTPUT_')],
              [sg.Button('Go'), sg.Button('Exit')], ]
    window = sg.Window('Multithreaded Window').Layout(layout)
    # --------------------- EVENT LOOP ---------------------
    work_id = 0
    while True:
        event, values = window.Read(timeout=100)  # wait for up to 100 ms for a GUI event
        if event is None or event == 'Exit':
            #sg.PopupAnimated(None)
            break
        if event == 'Go':  # clicking "Go" starts a long running work item by starting thread
            window.Element('_OUTPUT_').Update('Starting long work %s' % work_id)
            # LOCATION 2
            # STARTING long run by starting a thread
            thread_id = threading.Thread(target=long_function_wrapper, args=(work_id, gui_queue,), daemon=True)
            thread_id.start()
            #for i in range(200000):
            work_id = work_id + 1 if work_id < 19 else 0
            #while True:
            sg.PopupAnimated(sg.DEFAULT_BASE64_LOADING_GIF, background_color='white', time_between_frames=100)
            #if message == None:
            #    break
        # --------------- Read next message coming in from threads ---------------
        try:
            message = gui_queue.get_nowait()  # see if something has been posted to Queue
        except queue.Empty:  # get_nowait() will get exception when Queue is empty
            message = None  # nothing in queue so do nothing
        # if message received from queue, then some work was completed
        if message is not None:
            # LOCATION 3
            # this is the place you would execute code at ENDING of long running task
            # You can check the completed_work_id variable to see exactly which long-running function completed
            completed_work_id = int(message[:message.index(' :::')])
            sg.PopupAnimated(None)
            #window['_GIF_'].update_animation(sg.DEFAULT_BASE64_LOADING_GIF, time_between_frames=100)
            #window.read(timeout = 1000)
    # if user exits the window, then close the window and exit the GUI func
    window.Close()

############################# Main #############################
if __name__ == '__main__':
    the_gui()
    print('Exiting Program')
You've got your call to popup_animated inside of an "if" statement that is only executed once.
You must call popup_animated for every frame you wish to show. It's not spun off as a task that works in the background.
This change to your code will keep the animation going as long as there are background tasks running.
import queue
import threading
import time
import PySimpleGUI as sg

# ############################# User callable CPU intensive code #############################
# Put your long running code inside this "wrapper"
# NEVER make calls to PySimpleGUI from this thread (or any thread)!
# Create one of these functions for EVERY long-running call you want to make
def long_function_wrapper(work_id, gui_queue):
    # LOCATION 1
    # this is our "long running function call"
    # time.sleep(10)  # sleep for a while as a simulation of a long-running computation
    x = 0
    while True:
        print(x)
        time.sleep(0.5)
        x = x + 1
        if x == 5:
            break
    # at the end of the work, before exiting, send a message back to the GUI indicating end
    gui_queue.put('{} ::: done'.format(work_id))
    # at this point, the thread exits
    return

def the_gui():
    gui_queue = queue.Queue()  # queue used to communicate between the gui and long-running code
    layout = [[sg.Text('Multithreaded Work Example')],
              [sg.Text('This is a Test.', size=(25, 1), key='_OUTPUT_')],
              [sg.Text(size=(25, 1), key='_OUTPUT2_')],
              [sg.Button('Go'), sg.Button('Exit')], ]
    window = sg.Window('Multithreaded Window').Layout(layout)
    # --------------------- EVENT LOOP ---------------------
    work_id = 0
    while True:
        event, values = window.Read(timeout=100)  # wait for up to 100 ms for a GUI event
        if event is None or event == 'Exit':
            # sg.PopupAnimated(None)
            break
        if event == 'Go':  # clicking "Go" starts a long running work item by starting thread
            window.Element('_OUTPUT_').Update('Starting long work %s' % work_id)
            # LOCATION 2
            # STARTING long run by starting a thread
            thread_id = threading.Thread(target=long_function_wrapper, args=(work_id, gui_queue,), daemon=True)
            thread_id.start()
            # for i in range(200000):
            work_id = work_id + 1 if work_id < 19 else 0
            # while True:
            # if message == None:
            #     break
        # --------------- Read next message coming in from threads ---------------
        try:
            message = gui_queue.get_nowait()  # see if something has been posted to Queue
        except queue.Empty:  # get_nowait() will get exception when Queue is empty
            message = None  # nothing in queue so do nothing
        # if message received from queue, then some work was completed
        if message is not None:
            # LOCATION 3
            # this is the place you would execute code at ENDING of long running task
            # You can check the completed_work_id variable to see exactly which long-running function completed
            completed_work_id = int(message[:message.index(' :::')])
            window.Element('_OUTPUT2_').Update('Finished long work %s' % completed_work_id)
            work_id -= 1
            if not work_id:
                sg.PopupAnimated(None)
        if work_id:
            sg.PopupAnimated(sg.DEFAULT_BASE64_LOADING_GIF, background_color='white', time_between_frames=100)
        # window['_GIF_'].update_animation(sg.DEFAULT_BASE64_LOADING_GIF, time_between_frames=100)
        # window.read(timeout = 1000)
    # if user exits the window, then close the window and exit the GUI func
    window.Close()

############################# Main #############################
if __name__ == '__main__':
    the_gui()
    print('Exiting Program')

How to structure code to be able to launch tasks that can kill/replace each other

I have a Python program that does the following:
1) endlessly wait on a COM port for a command character
2) on character reception, launch a new thread to execute a particular piece of code
What I would need to do when a new command is received is:
1) kill the previous thread
2) launch a new one
I read here and there that doing so is not the right way to proceed.
What would be the best way to do this, knowing that I need to do it in the same process? So I guess I need to use threads ...
I would suggest two different approaches:
if your processes are both called internally from a function, you could set a timeout on the first function.
if you are running an external script, you might want to kill the process.
Let me try to be more precise in my question by adding an example of my code structure.
Suppose the synchronous functionA is still running because it is waiting internally for a particular event; if command "c" is received, I need to stop functionA and launch functionC.
def functionA():
    ....
    ....
    call a synchronous serviceA that can take several seconds or even more to execute
    ....
    ....

def functionB():
    ....
    ....
    call a synchronous serviceB that returns nearly immediately
    ....
    ....

def functionC():
    ....
    ....
    call a synchronous serviceC
    ....
    ....

#-------------------
def launch_async_task(function):
    t = threading.Thread(target=function, name="async")
    t.setDaemon(True)
    t.start()

#------main----------
while True:
    try:
        car = COM_port.read(1)
        if car == "a":
            launch_async_task(functionA)
        elif car == "b":
            launch_async_task(functionB)
        elif car == "c":
            launch_async_task(functionC)
You may want to run the serial port in a separate thread. When it receives a byte, put that byte in a queue. Have the main program loop and check the queue to decide what to do with it. From the main program you can let go of the previous thread with join and start a new one (strictly speaking, join only waits for a thread; it cannot force one to die). You may also want to look into a thread pool to see if it is what you want.
import queue
import threading
import serial  # pyserial

ser = serial.Serial("COM1", 9600)
que = queue.Queue()

def read_serial(com, q):
    while True:
        val = com.read(1)
        q.put(val)

ser_th = threading.Thread(target=read_serial, args=(ser, que))
ser_th.start()

th = None
while True:
    if not que.empty():
        val = que.get()
        if val == b"e":
            break  # quit
        elif val == b"a":
            if th is not None:
                th.join(0)  # Let go of the previous thread (join does not kill it)
            th = threading.Thread(target=functionA)
            th.start()
        elif val == b"b":
            if th is not None:
                th.join(0)
            th = threading.Thread(target=functionB)
            th.start()
        elif val == b"c":
            if th is not None:
                th.join(0)
            th = threading.Thread(target=functionC)
            th.start()

try:
    ser.close()
    th.join(0)
except:
    pass
If you are creating and joining a lot of threads you may want to just have a function that checks what command to run.
running = True

def run_options(option):
    global running  # needed so the assignment below affects the module-level flag
    if option == 0:
        print("Running Option 0")
    elif option == 1:
        print("Running Option 1")
    else:
        running = False

while running:
    if not que.empty():
        val = que.get()
        run_options(val)
OK, I finally used a piece of code that uses the ctypes lib to provide a kind of thread-killing function.
I know this is not a clean way to proceed, but in my case there are no resources shared by the threads, so it shouldn't have any impact ...
If it can help, here is the piece of code, which can easily be found on the net:
import ctypes

def terminate_thread(thread):
    """Terminates a python thread from another thread.

    :param thread: a threading.Thread instance
    """
    if not thread.is_alive():
        return
    exc = ctypes.py_object(SystemExit)
    res = ctypes.pythonapi.PyThreadState_SetAsyncExc(
        ctypes.c_long(thread.ident), exc)
    if res == 0:
        raise ValueError("nonexistent thread id")
    elif res > 1:
        # """if it returns a number greater than one, you're in trouble,
        # and you should call it again with exc=NULL to revert the effect"""
        ctypes.pythonapi.PyThreadState_SetAsyncExc(thread.ident, None)
        raise SystemError("PyThreadState_SetAsyncExc failed")
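For illustration, a usage sketch (the worker function here is made up). Note that the exception is delivered only when the target thread next executes Python bytecode, so a thread blocked inside a C call may not die immediately:

import threading
import time

def worker():  # hypothetical long-running task
    while True:
        time.sleep(0.1)

t = threading.Thread(target=worker)
t.start()
time.sleep(1)
terminate_thread(t)  # asynchronously raises SystemExit inside the worker
t.join()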

Controlling a python thread with a function

Thanks to those who helped me figure out that I needed to use threading to run a loop in a control script I have. I now have an issue trying to control the thread, by starting or stopping it based on a function:
I want to start a process to get a motor to cycle through a movement based on a 'start' parameter sent to the controlling function, and I also want to send a 'stop' parameter to stop the thread. Here's where I got to:
def looper():
    while True:
        print 'forward loop'
        bck.ChangeDutyCycle(10)
        fwd.ChangeDutyCycle(0)
        time.sleep(5)
        print 'backwards loop'
        bck.ChangeDutyCycle(0)
        fwd.ChangeDutyCycle(20)
        time.sleep(5)

def looper_control(state):
    t = threading.Thread(target=looper)
    if state == 'start':
        t.start()
    elif state == 'stop':
        t.join()
        print 'looper stopped!!'
This starts the thread okay when I call looper_control('start'), but throws an error when I call looper_control('stop'):
File "/usr/lib/python2.7/threading.py", line 657, in join
raise RuntimeError("cannot join thread before it is started")
RuntimeError: cannot join thread before it is started
EDIT: looper_control called from here
if "motor" in tmp:
    if tmp[-1:] == '0':
        #stop both pin
        MotorControl('fwd',0,0)
        print 'stop motors'
        looper_control('stop')
    elif tmp[-1:] == '2':
        #loop the motor
        print 'loop motors'
        looper_control('start')
UPDATE: I've not been able to stop the thread using the method suggested - I thought I had it!
Here's where I am:
class sliderControl(threading.Thread):
    def __init__(self,stop_event):
        super(sliderControl,self).__init__()
        self.stop_event = stop_event

    def run(self):
        while self.stop_event:
            print 'forward loop'
            bck.ChangeDutyCycle(10)
            fwd.ChangeDutyCycle(0)
            time.sleep(5)
            print 'backwards loop'
            bck.ChangeDutyCycle(0)
            fwd.ChangeDutyCycle(20)
            time.sleep(5)

def looper_control(state,stop_event):
    if state == 'start':
        t = sliderControl(stop_event=stop_event)
        t.start()
    elif state == 'stop':
        #time.sleep(3)
        stop_event.set()
        #t.join()
        print 'looper stopped!!'
called via:
if tmp[-1:] == '0':
    #stop both pin
    MotorControl('fwd',0,0)
    print 'stop motors'
    #stop_thread_event = threading.Event()
    print 'stopping thread'
    print stop_thread_event
    looper_control('stop',stop_thread_event)
elif tmp[-1:] == '2':
    #loop the motor
    print 'loop motors'
    global stop_thread_event
    stop_thread_event = threading.Event()
    print stop_thread_event
    looper_control('start', stop_thread_event)
It looked like a separate thread event was being created by the loop and stop calls, so I thought a global would sort it out, but it's just not playing ball. When I start the loop, it runs; but when I try to stop it, I get looper stopped!!, yet the process just keeps running.
Your top-level thread routine will need to become an event handler that listens to a Queue object (as in from Queue import Queue) for messages, then handles them based on state. One of those messages can be a shutdown command, in which case the worker thread function simply exits, allowing the main thread to join it.
Instead of time.sleep, use threading.Timer with the body of the timer sending a message into your event queue.
This is a substantial refactoring. But especially if you plan on adding more conditions, you'll need it. One alternative is to use a package that handles this kind of thing for you, maybe pykka.
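A minimal sketch of that queue-driven pattern, assuming Python 3 (on Python 2 the import is from Queue) and with the motor calls stubbed out; a get with a timeout stands in for the timer-driven wakeups the answer suggests:

import threading
from queue import Queue, Empty

def looper(q):
    forward = True
    while True:
        try:
            msg = q.get(timeout=5)  # replaces time.sleep(5); wakes early on a message
        except Empty:
            msg = None
        if msg == 'stop':
            return  # the worker exits cleanly, so the main thread can join it
        # stand-ins for the bck/fwd.ChangeDutyCycle(...) calls
        print('forward loop' if forward else 'backwards loop')
        forward = not forward

q = Queue()
t = threading.Thread(target=looper, args=(q,))
t.start()
# ... later, from the control code:
q.put('stop')
t.join()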
To stop a python thread you can use threading.Event(). Note that the loop condition has to test the event's state: in your update, while self.stop_event: is always true because the Event object itself is truthy, which is why the thread never stopped.
Try this:
class YourClass(threading.Thread):
    def __init__(self, stop_event):
        super(YourClass, self).__init__()
        self.stop_event = stop_event

    def run(self):
        while not self.stop_event.is_set():
            # do what you need here (what you had in looper)
            pass

def looper_control(state, stop_event):
    if state == 'start':
        t = YourClass(stop_event=stop_event)
        t.start()
    elif state == 'stop':
        stop_event.set()
and call to looper_control:
stop_thread_event = threading.Event()
looper_control(state, stop_thread_event)
you only can "start" once a thread
but you can lock and unlock the thread.
the best way to stop and start a thread is with mutex, Example:
#!/usr/bin/python
import threading
from time import sleep

mutex2 = threading.Lock()

#This thread adds values to d[]
class Hilo(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)

    def run(self):
        while True:
            mutex2.acquire()
            #Add values to d[]
            d.append("hi from Peru")
            mutex2.release()
            sleep(1)

d = []
hilos = [Hilo()]

#Stop Thread
#If you have more threads you need to make a mutex for every thread
mutex2.acquire()
#Start threads, but the thread is locked
for h in hilos:
    h.start()

#so you need to
#unlock THREAD<
mutex2.release()
#>START THREAD

#Sleep for 4 seconds
sleep(4)
#And print d[]
print d
print "------------------------------------------"

#WAIT 5 SECONDS AND STOP THE THREAD
sleep(5)
try:
    mutex2.acquire()
except Exception, e:
    mutex2.release()
    mutex2.acquire()

#AND PRINT d[]
print d

#AND NOW YOUR THREAD IS STOPPED#
#When the thread is locked (stopped), you only need to call mutex2.release() to unlock (start) it
#When your thread is unlocked (started) and you want to lock (stop) it:
#try:
#    mutex2.acquire()
#except Exception, e:
#    mutex2.release()
#    mutex2.acquire()
