PySimpleGUI Multithreaded Implementation for OpenCV Workflow - python

I've been tooling around with PySimpleGUI for just a week now. I have not been able to decipher what DemoProgram(s) will help achieve my goal. Skills are novice/tinkerer.
Project: Raspberry Pi OpenCV workflow using:
1. HDMI bridge, initiated with a shell script when an external image source is configured
2. Capture shell script, grabbing frames from the HDMI bridge into a data folder
3. Watcher.py script, which watches the data folder and processes new images via an OpenCV script
4. Dashboard, displayed using Chromium
Goal: a GUI with buttons to launch each step (1-4) of the process, show real-time output from the watcher.py script, and allow exiting the long-running capture and watcher scripts.
Attempted Solutions from https://github.com/PySimpleGUI/PySimpleGUI/tree/master/DemoPrograms
Demo_Desktop_Floating_Toolbar.py
Demo_Script_Launcher_Realtime_Output.py
Demo_Multithreaded_Animated_Shell_Command.py
Demo_Multithreaded_Multiple_Threads.py
Demo_Multithreaded_Different_Threads.py
My attempts so far have yielded mixed results. I have been able to get individual steps to launch, but I have not been able to put together a combination of calls, triggers, and an event loop that works.
My hunch is that Demo_Multithreaded_Multiple_Threads.py is where I should be building, but trying to combine it with Demo_Script_Launcher_Realtime_Output.py for just the watcher.py script has led to hanging.
#!/usr/bin/python3
import subprocess
import threading
import time
import PySimpleGUI as sg
"""
You want to look for 3 points in this code, marked with comment "LOCATION X".
1. Where you put your call that takes a long time
2. Where the trigger to make the call takes place in the event loop
3. Where the completion of the call is indicated in the event loop
Demo on how to add a long-running item to your PySimpleGUI Event Loop
If you want to do something that takes a long time, and you do it in the
main event loop, you'll quickly begin to see messages from Windows that your
program has hung, asking if you want to kill it.
The problem is not that your program is hung; the problem is that you are
not calling Read or Refresh often enough.
One way through this, shown here, is to put your long work into a thread that
is spun off, allowed to work, and then gets back to the GUI when it's done working
on that task.
Every time you start up one of these long-running functions, you'll give it an "ID".
When the function completes, it will send to the GUI Event Loop a message with
the format:
work_id ::: done
This makes it easy to parse out your original work ID
You can hard code these IDs to make your code more readable. For example, maybe
you have a function named "update_user_list()". You can call the work ID "user list".
Then check for the message coming back later from the work task to see if it starts
with "user list". If so, then that long-running task is over.
"""
# ############################# User callable CPU intensive code #############################
# Put your long running code inside this "wrapper"
# NEVER make calls to PySimpleGUI from this thread (or any thread)!
# Create one of these functions for EVERY long-running call you want to make
def long_function_wrapper(work_id, window):
    # LOCATION 1
    # this is our "long running function call"
    # sleep for a while as a simulation of a long-running computation
    def process_thread():
        global proc
        proc = subprocess.Popen('python watcher.py data', shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)

    def main():
        thread = threading.Thread(target=process_thread, daemon=True)
        thread.start()
        while True:
            sg.popup_animated(sg.DEFAULT_BASE64_LOADING_GIF, 'Watcher is Running', time_between_frames=100)
            thread.join(timeout=.1)
            if not thread.is_alive():
                break
        sg.popup_animated(None)
        output = proc.__str__().replace('\\r\\n', '\n')
        sg.popup_scrolled(output, font='Courier 10')

    if __name__ == '__main__':
        main()

    # at the end of the work, before exiting, send a message back to the GUI indicating end
    window.write_event_value('-THREAD DONE-', work_id)
    # at this point, the thread exits
    return
############################# Begin GUI code #############################
def the_gui():
    sg.theme('Light Brown 3')

    layout = [[sg.Text('Multithreaded Work Example')],
              [sg.Text('Click Go to start a long-running function call')],
              [sg.Text(size=(25, 1), key='-OUTPUT-')],
              [sg.Text(size=(25, 1), key='-OUTPUT2-')],
              [sg.Text('?', text_color='blue', key=i, pad=(0, 0), font='Default 14') for i in range(4)],
              [sg.Button('Go'), sg.Button('Popup'), sg.Button('Exit')], ]

    window = sg.Window('Multithreaded Window', layout)

    # --------------------- EVENT LOOP ---------------------
    work_id = 0
    while True:
        # wait for a GUI event (write_event_value from a thread also wakes this up)
        event, values = window.read()
        if event in (sg.WIN_CLOSED, 'Exit'):
            break
        if event == 'Go':  # clicking "Go" starts a long running work item by starting thread
            window['-OUTPUT-'].update('Starting long work %s' % work_id)
            window[work_id].update(text_color='red')
            # LOCATION 2
            # STARTING long run by starting a thread
            thread_id = threading.Thread(
                target=long_function_wrapper,
                args=(work_id, window,),
                daemon=True)
            thread_id.start()
            work_id = work_id + 1 if work_id < 19 else 0
        # if message received from queue, then some work was completed
        if event == '-THREAD DONE-':
            # LOCATION 3
            # this is the place you would execute code at ENDING of long running task
            # You can check the completed_work_id variable
            # to see exactly which long-running function completed
            completed_work_id = values[event]
            window['-OUTPUT2-'].update(
                'Complete Work ID "{}"'.format(completed_work_id))
            window[completed_work_id].update(text_color='green')
        if event == 'Popup':
            sg.popup_non_blocking('This is a popup showing that the GUI is running', grab_anywhere=True)

    # if user exits the window, then close the window and exit the GUI func
    window.close()

############################# Main #############################
if __name__ == '__main__':
    the_gui()
    print('Exiting Program')
(cv) pi@raspberrypi:~/issgrab $ python GUIDE_GUI_master.py
*** Faking timeout ***
Exception in thread Thread-1:
Traceback (most recent call last):
  File "/usr/lib/python3.5/threading.py", line 914, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.5/threading.py", line 862, in run
    self._target(*self._args, **self._kwargs)
  File "GUIDE_GUI_master.py", line 68, in long_function_wrapper
    main()
  File "GUIDE_GUI_master.py", line 57, in main
    sg.popup_animated(sg.DEFAULT_BASE64_LOADING_GIF, 'Watcher is Running', time_between_frames=100)
  File "/home/pi/.virtualenvs/cv/lib/python3.5/site-packages/PySimpleGUI/PySimpleGUI.py", line 15797, in PopupAnimated
    transparent_color=transparent_color, finalize=True, element_justification='c', icon=icon)
  File "/home/pi/.virtualenvs/cv/lib/python3.5/site-packages/PySimpleGUI/PySimpleGUI.py", line 6985, in __init__
    self.Finalize()
  File "/home/pi/.virtualenvs/cv/lib/python3.5/site-packages/PySimpleGUI/PySimpleGUI.py", line 7510, in Finalize
    self.Read(timeout=1)
  File "/home/pi/.virtualenvs/cv/lib/python3.5/site-packages/PySimpleGUI/PySimpleGUI.py", line 7325, in Read
    results = self._read(timeout=timeout, timeout_key=timeout_key)
  File "/home/pi/.virtualenvs/cv/lib/python3.5/site-packages/PySimpleGUI/PySimpleGUI.py", line 7365, in _read
    self._Show()
  File "/home/pi/.virtualenvs/cv/lib/python3.5/site-packages/PySimpleGUI/PySimpleGUI.py", line 7199, in _Show
    StartupTK(self)
  File "/home/pi/.virtualenvs/cv/lib/python3.5/site-packages/PySimpleGUI/PySimpleGUI.py", line 12100, in StartupTK
    my_flex_form.TKroot.mainloop()
  File "/usr/lib/python3.5/tkinter/__init__.py", line 1143, in mainloop
    self.tk.mainloop(n)
RuntimeError: Calling Tcl from different appartment
Exiting Program
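For what it's worth, the traceback is the demo's own warning firing: sg.popup_animated and sg.popup_scrolled are being called from inside the worker thread, which violates the "NEVER make calls to PySimpleGUI from this thread" rule and raises the Tcl apartment error. Below is a minimal sketch of one way to combine the two demos, with the thread talking to the GUI only through window.write_event_value; the '-WATCHER LINE-'/'-WATCHER DONE-' keys and the Stop button are illustrative names, not from any demo.

import subprocess
import threading
import PySimpleGUI as sg

proc = None    # handle to the running watcher, shared with the event loop

def watcher_thread(window):
    # The thread's ONLY contact with the GUI is write_event_value.
    global proc
    # -u keeps the watcher's stdout unbuffered so lines arrive as printed
    proc = subprocess.Popen(['python', '-u', 'watcher.py', 'data'],
                            stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    for line in proc.stdout:
        window.write_event_value('-WATCHER LINE-', line.decode(errors='replace').rstrip())
    window.write_event_value('-WATCHER DONE-', proc.wait())

layout = [[sg.Multiline(size=(80, 20), key='-OUT-', autoscroll=True)],
          [sg.Button('Start Watcher'), sg.Button('Stop Watcher'), sg.Button('Exit')]]
window = sg.Window('Watcher', layout)

while True:
    event, values = window.read()
    if event in (sg.WIN_CLOSED, 'Exit'):
        break
    if event == 'Start Watcher':
        threading.Thread(target=watcher_thread, args=(window,), daemon=True).start()
    if event == 'Stop Watcher' and proc is not None:
        proc.terminate()    # ends the long-running script
    if event == '-WATCHER LINE-':
        window['-OUT-'].update(values[event] + '\n', append=True)  # GUI updated here, in the loop
    if event == '-WATCHER DONE-':
        window['-OUT-'].update('watcher exited with code %s\n' % values[event], append=True)

if proc is not None and proc.poll() is None:
    proc.terminate()
window.close()

The same start/stop pattern extends to the capture script: keep one Popen handle per long-running step and call terminate() on it from the event loop.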

Related

Python3: Catch SIGTTOU from subprocess and print warning

Hello, I have a problem with a small debug GUI I have written (using PySimpleGUI). Part of the GUI is the capability to call a local Linux shell command.
One of the shell commands/programs I want to execute raises a SIGTTOU signal if I start the GUI in the background (with &), which freezes the GUI until I bring it to the foreground with 'fg'.
Since it's fairly normal to start the GUI in the background, I just want to catch the SIGTTOU signal, print a warning, and continue.
The following code snippet kind of works, but it leaves the shell commands as zombies (and I get no return value from the commands). What works even better is signal.signal(signal.SIGTTOU, signal.SIG_IGN), but I really want to print a warning. Is that possible? And what does signal.SIG_IGN do that removes the zombies?
cmd_output = collections.deque()
...
def _handle_sigttou(signum, frame):
    sys.__stdout__.write('WARNING: <SIGTTOU> received\n')

def _run_shell_command(cmd, timeout=None):
    # run command
    p = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    # process output
    for line in p.stdout:
        line = line.decode(errors='backslashreplace').rstrip()
        cmd_output.appendleft(f'{line}')
    # wait for return code
    retval = p.wait(timeout)
    # print result
    cmd_output.appendleft(f'✘ ({retval})' if retval else '✅')

# send command
signal.signal(signal.SIGTTOU, _handle_sigttou)
#signal.signal(signal.SIGTTOU, signal.SIG_IGN)  # this works without zombies, but I want to print a warning
threading.Thread(target=_run_shell_command, args=(_line, 60), daemon=True).start()
Use window.write_event_value(event, value) to send an event to your window, then call sg.popup to show the value when the event arrives in your event loop. Don't try to update the GUI from your handler or worker thread.
...
def _handle_sigttou(signum, frame):
    window.write_event_value("<SIGTTOU>", ("WARNING", "<SIGTTOU> received"))
...
while True:
    event, values = window.read()
    if event == sg.WINDOW_CLOSED:
        break
    elif event == "<SIGTTOU>":
        title, message = values[event]
        sg.popup(message, title=title, auto_close=True, auto_close_duration=2)

window.close()

Python Multiprocessing weird behavior when NOT USING time.sleep()

This is the exact code from Python.org. If you comment out the time.sleep(), it crashes with a long exception traceback. I would like to know why.
And I do understand why Python.org included it in their example code. But artificially creating "working time" via time.sleep() shouldn't break the code when it's removed. It seems to me that time.sleep() is affording some sort of spin-up time. But as I said, I'd like to hear from people who might actually know the answer.
A user comment asked me to fill in more details on the environment this was happening in. It was macOS Big Sur 11.4, using a clean install of Python 3.9.5 from Python.org (no Homebrew, etc.), run from within PyCharm inside a venv. I hope that helps add to understanding the situation.
import time
import random
from multiprocessing import Process, Queue, current_process, freeze_support

#
# Function run by worker processes
#
def worker(input, output):
    for func, args in iter(input.get, 'STOP'):
        result = calculate(func, args)
        output.put(result)

#
# Function used to calculate result
#
def calculate(func, args):
    result = func(*args)
    return '%s says that %s%s = %s' % \
        (current_process().name, func.__name__, args, result)

#
# Functions referenced by tasks
#
def mul(a, b):
    #time.sleep(0.5*random.random())  # <--- time.sleep() commented out
    return a * b

def plus(a, b):
    #time.sleep(0.5*random.random())  # <--- time.sleep() commented out
    return a + b

def test():
    NUMBER_OF_PROCESSES = 4
    TASKS1 = [(mul, (i, 7)) for i in range(20)]
    TASKS2 = [(plus, (i, 8)) for i in range(10)]

    # Create queues
    task_queue = Queue()
    done_queue = Queue()

    # Submit tasks
    for task in TASKS1:
        task_queue.put(task)

    # Start worker processes
    for i in range(NUMBER_OF_PROCESSES):
        Process(target=worker, args=(task_queue, done_queue)).start()

    # Get and print results
    print('Unordered results:')
    for i in range(len(TASKS1)):
        print('\t', done_queue.get())

    # Add more tasks using `put()`
    for task in TASKS2:
        task_queue.put(task)

    # Get and print some more results
    for i in range(len(TASKS2)):
        print('\t', done_queue.get())

    # Tell child processes to stop
    for i in range(NUMBER_OF_PROCESSES):
        task_queue.put('STOP')

if __name__ == '__main__':
    freeze_support()
    test()
This is the traceback if it helps anyone:
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/multiprocessing/spawn.py", line 116, in spawn_main
    exitcode = _main(fd, parent_sentinel)
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/multiprocessing/spawn.py", line 126, in _main
    self = reduction.pickle.load(from_parent)
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/multiprocessing/synchronize.py", line 110, in __setstate__
Traceback (most recent call last):
  File "<string>", line 1, in <module>
    self._semlock = _multiprocessing.SemLock._rebuild(*state)
FileNotFoundError: [Errno 2] No such file or directory
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/multiprocessing/spawn.py", line 116, in spawn_main
    exitcode = _main(fd, parent_sentinel)
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/multiprocessing/spawn.py", line 126, in _main
    self = reduction.pickle.load(from_parent)
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/multiprocessing/synchronize.py", line 110, in __setstate__
    self._semlock = _multiprocessing.SemLock._rebuild(*state)
FileNotFoundError: [Errno 2] No such file or directory
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/multiprocessing/spawn.py", line 116, in spawn_main
    exitcode = _main(fd, parent_sentinel)
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/multiprocessing/spawn.py", line 126, in _main
    self = reduction.pickle.load(from_parent)
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/multiprocessing/synchronize.py", line 110, in __setstate__
    self._semlock = _multiprocessing.SemLock._rebuild(*state)
FileNotFoundError: [Errno 2] No such file or directory
Here's a technical breakdown.
This is a race condition where the main process finishes and exits before some of the children have had a chance to fully start up. As long as a child fully starts, there are mechanisms in place to ensure it shuts down smoothly, but there's an unsafe in-between window. Race conditions can be very system-dependent, as it is up to the OS and the hardware to schedule the different threads, as well as how fast they chew through their work.
Here's what's going on when a process is started... Early on in the creation of a child process, it registers itself in the main process so that it will be either joined or terminated when the main process exits depending on if it's daemonic (multiprocessing.util._exit_function). This exit function was registered with the atexit module on import of multiprocessing.
Also during creation of the child process, a pair of Pipes are opened which will be used to pass the Process object to the child interpreter (which includes what function you want to execute and its arguments). This requires 2 file handles to be shared with the child, and these file handles are also registered to be closed using atexit.
The problem arises when the main process exits before the child has a chance to read all the necessary data from the pipe (un-pickling the Process object) during the startup phase. If the main process first closes the pipe, then waits for the child to join, then we have a problem. The child will continue spinning up the new python instance until it gets to the point when it needs to read in the Process object containing your function and arguments it should run. It will try to read from a pipe which has already been closed, which is an error.
If all the children get a chance to fully start up, you won't ever see this, because that pipe is only used for startup. Putting in a delay that guarantees all the children some time to fully start up is what solves this problem. Manually calling join provides this delay by waiting for the children before any of the atexit handlers are called. Additionally, any amount of processing delay means that q.get in the main thread has to wait a while, which also gives the children time to start up before the pipe is closed. I was never able to reproduce the problem you encountered, but presumably you saw the output from all the TASKS ("Process-1 says that mul(19, 7) = 133"). Only one or two of the child processes ended up doing all the work, allowing the main process to get all the results and finish up before the other children finished startup.
EDIT:
The error is unambiguous as to what's happening, but I still can't figure out how it happens... As far as I can tell, the file handles should be closed when calling _run_finalizers() in _exit_function after joining or terminating all active_children, rather than before via _run_finalizers(0).
EDIT2:
_run_finalizers will seemingly actually never call Popen.finalizer to close the pipes, because exitpriority is None. I'm very confused as to what's going on here, and I think I need to sleep on it...
Apparently @user2357112 supports Monica was on the right track. It totally solves the problem if you join the processes before exiting the program. Also @Aaron's answer has the deep knowledge as to why this fixes the issue!
I added the following bits of code as was suggested and it totally fixed the need to have time.sleep() in there.
First I gathered all the processes when they were started:
processes: list[Process] = []

# Start worker processes
for i in range(NUMBER_OF_PROCESSES):
    p = Process(target=worker, args=(task_queue, done_queue))
    p.start()
    processes.append(p)
Then at the end of the program I joined them as follows:
# Join the processes
for p in processes:
    p.join()
Totally solved the issues. Thanks for the advice.

signal.pause() with signal.alarm() causes RecursionError in non-sleeping program

A single-threaded Python program needs to respond to Raspberry Pi button-press events and also wake every minute to update an LCD display.
Main function:
btn_1 = 21
GPIO.setup(btn_1, GPIO.IN, pull_up_down=GPIO.PUD_UP)
GPIO.add_event_detect(btn_1, GPIO.FALLING, callback=btn_1_press_callback, bouncetime=100)
lcd.display()
lcd.message("text to display on lcd")
The previous code runs the btn_1_press_callback function whenever a physical button is pressed. The rest of the main function, instead of sleeping in a busy loop, does this:
signal.signal(signal.SIGALRM, wake_every_min)
signal.alarm(60)
signal.pause()
This way button presses are signaled immediately. The wake_every_min() function simply refreshes the display with the currently displayed data (updating from the data source), so it updates every minute regardless of a button press:
def wake_every_min(sig, frame):
    lcd.clear()
    lcd.message("new string here")
    signal.alarm(60)
    signal.pause()
It then calls signal.pause() to sleep while listening for signals again. This works perfectly, except that after some time I get RecursionError: maximum recursion depth exceeded while calling a Python object.
Funnily enough, it always fails at the same depth: the "Previous line repeated 482 more times" is always 482:
Traceback (most recent call last):
  File "./info.py", line 129, in <module>
    main()
  File "./info.py", line 126, in main
    signal.pause()
  File "./info.py", line 111, in wake_every_min
    signal.pause()
  File "./info.py", line 111, in wake_every_min
    signal.pause()
  File "./info.py", line 111, in wake_every_min
    signal.pause()
  [Previous line repeated 482 more times]
Is there another way to accomplish this without a while True loop with a time.sleep()? If I do that, button presses aren't responsive, as there is always a potential for a 1.9999 minute delay, worst-case.
Update: I was thinking about this wrong. time.sleep() will not prevent a signal from happening -- the signals will interrupt sleep().
The correct solution is to sleep in the main loop, and never call signal.pause(). With a SIGINT handler, you can also exit immediately when ^c is pressed:
signal.signal(signal.SIGALRM, wake_every_min)
signal.alarm(60)
signal.signal(signal.SIGINT, int_signal_handler)

while True:
    # Sleep and wait for a signal.
    sleep(10)
    signal.alarm(60)
Moving the re-setting of the alarm into the main loop prevents RecursionError, as new signals aren't piling up from the handler call stack. If anyone is curious what this was for, it's a crypto LCD ticker: https://github.com/manos/crypto_lcd/blob/master/info.py
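For completeness, here is a sketch of the other half of that fix, the handler itself, assuming the same lcd object from the question: it refreshes the display and simply returns, so each SIGALRM unwinds its own frame instead of stacking a new signal.pause() on the call stack.

def wake_every_min(sig, frame):
    # Update the display and return: no signal.pause() in here.
    # Pausing inside the handler is what left the 482 nested frames behind.
    lcd.clear()
    lcd.message("new string here")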

python: how to kill Excel session in separate thread after timeout?

I am executing an Excel macro from Python and would like to kill it after a timeout. However, the existing timeout and kill-Excel-process methods are not working for me. Can you please help?
import threading
import win32com.client as win
import pythoncom

Excel = None
workbook = None

def worker(e):
    global Excel
    global workbook
    pythoncom.CoInitialize()
    Excel = win.DispatchEx('Excel.Application')
    Excel.DisplayAlerts = False
    Excel.Visible = True
    workbook = Excel.Workbooks.Open("sample.xlsm", ReadOnly=True)
    Excel.Calculation = -4135  # xlCalculationManual
    print "Run"
    Excel.Run('Module2.Refresh')

e = threading.Event()
t = threading.Thread(target=worker, args=(e,))
t.start()

# wait 5 seconds for the thread to finish its work
t.join(5)
if t.is_alive():
    print "thread is not done, setting event to kill thread."
    e.set()
    print "1"
    workbook.Close()
    print "2"
    Excel.Quit()
else:
    print "thread has already finished."
    workbook.Close()
    Excel.Quit()
I got this error:
Run
thread is not done, setting event to kill thread.
1
Traceback (most recent call last):
  File "check_odbc.py", line 62, in <module>
    workbook.Close();
  File "C:\Users\honwang\AppData\Local\conda\conda\envs\py27_32\lib\site-packages\win32com\client\dynamic.py", line 527, in __getattr__
    raise AttributeError("%s.%s" % (self._username_, attr))
AttributeError: Open.Close
Unfortunately it is not possible to kill threads. All you can do is ask them nicely to exit, then hope for the best. Just passing an event object is not enough: you have to actively check for that event inside the thread and bail out when it is set. Since your thread is blocked while running Excel code, it can't check for the event; you can yell at it to exit as much as you want, but there's no code listening.
If you need this kind of parallelism for inherently blocking code, I strongly suggest you use processes instead, because those can be killed. Otherwise, if possible, use asynchronous programming.
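A minimal sketch of that process-based approach, reusing the question's COM setup and 5-second timeout (the run_macro name is mine, and note that terminating the child only kills the Python process, not necessarily Excel itself):

import multiprocessing

def run_macro():
    # Same COM calls as the threaded version, but isolated in a child process.
    import pythoncom
    import win32com.client as win
    pythoncom.CoInitialize()
    excel = win.DispatchEx('Excel.Application')
    excel.DisplayAlerts = False
    try:
        workbook = excel.Workbooks.Open("sample.xlsm", ReadOnly=True)
        excel.Run('Module2.Refresh')
        workbook.Close()
    finally:
        excel.Quit()
        pythoncom.CoUninitialize()

if __name__ == '__main__':
    p = multiprocessing.Process(target=run_macro)
    p.start()
    p.join(5)            # wait 5 seconds for the macro to finish
    if p.is_alive():
        p.terminate()    # unlike a thread, the process can be killed...
        p.join()
        # ...but a stuck Excel.exe started by DispatchEx may survive and
        # need separate cleanup, e.g. 'taskkill /f /im excel.exe'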

Window freezes after clicking of button in python GTK3

Hello, I have a command which runs for about 30 minutes on average. When I click a button created with GTK3, Python starts executing the command but my whole application freezes. My code for the button click is:
def on_next2_clicked(self, button):
    cmd = "My Command"
    proc = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE)
    while True:
        line = proc.stdout.read(2)
        if not line:
            break
        self.fper = float(line)/100.0
        self.ui.progressbar1.set_fraction(self.fper)
    print "Done"
I also have to feed the command's output into a progress bar in my window. Can anyone help solve my problem? I also tried threading in Python, but that was useless too...
Run a main loop iteration from within your loop:
def on_next2_clicked(self, button):
    cmd = "My Command"
    proc = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE)
    while True:
        line = proc.stdout.read(2)
        if not line:
            break
        self.fper = float(line)/100.0
        self.ui.progressbar1.set_fraction(self.fper)
        while Gtk.events_pending():
            Gtk.main_iteration()  # runs the GTK main loop as needed
    print "Done"
You are busy-waiting, not letting the UI main event loop run. Put the loop in a separate thread so the main thread can continue its own event loop.
Edit: Adding example code
import threading

def on_next2_clicked(self, button):
    def my_thread(obj):
        cmd = "My Command"
        proc = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE)
        while True:
            line = proc.stdout.read(2)
            if not line:
                break
            obj.fper = float(line)/100.0
            obj.ui.progressbar1.set_fraction(obj.fper)
        print "Done"

    threading.Thread(target=my_thread, args=(self,)).start()
The above modification to your function will start a new thread that will run in parallel with your main thread. It will let the main event loop continue while the new thread does the busy waiting.
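One caveat worth adding: GTK widgets are not thread-safe, so calling set_fraction() directly from the worker thread can itself crash or misbehave. A safer variant of the same idea (a sketch, keeping the placeholder "My Command" from the question) marshals each update back to the main loop with GLib.idle_add:

import subprocess
import threading
from gi.repository import GLib

def on_next2_clicked(self, button):
    def my_thread(obj):
        cmd = "My Command"
        proc = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE)
        while True:
            line = proc.stdout.read(2)
            if not line:
                break
            fper = float(line) / 100.0
            # Schedule the widget update on the GTK main loop instead of
            # touching the progress bar from this worker thread.
            GLib.idle_add(obj.ui.progressbar1.set_fraction, fper)

    threading.Thread(target=my_thread, args=(self,), daemon=True).start()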
