I am executing an Excel macro from Python and would like to kill it after a timeout. However, the existing timeout and kill-Excel-process methods I have found are not working for me. Can you please help?
import threading
import win32com.client as win;
import pythoncom;

Excel = None;
workbook = None;

def worker(e):
    global Excel;
    global workbook;
    pythoncom.CoInitialize();
    Excel = win.DispatchEx('Excel.Application');
    Excel.DisplayAlerts = False;
    Excel.Visible = True;
    workbook = Excel.Workbooks.Open("sample.xlsm", ReadOnly = True);
    Excel.Calculation = -4135;
    print "Run";
    Excel.Run('Module2.Refresh');

e = threading.Event()
t = threading.Thread(target=worker, args=(e,));
t.start()

# wait 5 seconds for the thread to finish its work
t.join(5)
if t.is_alive():
    print "thread is not done, setting event to kill thread."
    e.set();
    print "1";
    workbook.Close();
    print "2";
    Excel.Quit();
else:
    print "thread has already finished."
    workbook.Close();
    Excel.Quit();
I got error:
Run
thread is not done, setting event to kill thread.
1
Traceback (most recent call last):
File "check_odbc.py", line 62, in <module>
workbook.Close();
File "C:\Users\honwang\AppData\Local\conda\conda\envs\py27_32\lib\site-package
s\win32com\client\dynamic.py", line 527, in __getattr__
raise AttributeError("%s.%s" % (self._username_, attr))
AttributeError: Open.Close
Unfortunately it is not possible to kill threads. All you can do is ask them nicely to exit, then hope for the best. Just passing an event object is not enough: you have to actively check for that event inside the thread and exit when it is set. Since your thread is blocked while running the Excel code, it can't check for the event. That means you can yell at it to exit as much as you want, but there's no code in a position to listen.
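For illustration, here is a minimal sketch of that cooperative pattern, using a stand-in loop in place of the Excel call (which, being a single blocking COM call, cannot be interrupted this way):

```python
import threading
import time

def worker(stop_event):
    # Do the work in small chunks so the event can be checked between them;
    # this is the "actively check for that event" part.
    for _ in range(100):
        if stop_event.is_set():
            return              # cooperative exit point
        time.sleep(0.01)

stop = threading.Event()
t = threading.Thread(target=worker, args=(stop,))
t.start()
stop.set()                      # ask the thread to stop
t.join(timeout=2)               # give it a moment to comply
print("thread alive:", t.is_alive())   # → thread alive: False
```

This only works because the worker's loop reaches a check point frequently; a single long blocking call never reaches one.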
If you need this kind of parallelism for inherently blocking code, I strongly suggest you use processes instead, because those can be killed. Otherwise, use asynchronous programming if possible.
I've been tooling around with PySimpleGUI for just a week now. I have not been able to decipher which DemoProgram(s) will help achieve my goal. My skills are novice/tinkerer level.
Project: Raspberry Pi OpenCV workflow using:
HDMI bridge, initiated with shell script when external image source is configured
Capture shell script, grabbing frames from HDMI bridge into data folder
Watcher.py script, watches data folder and processes new images via OpenCV script
Dashboard, displayed using Chromium
Goal: GUI with button(s) to launch each step (1-4) of the process and give Real-time Output for the watcher.py script + ability to exit capture and watcher scripts which are long running
Attempted Solutions from https://github.com/PySimpleGUI/PySimpleGUI/tree/master/DemoPrograms
Demo_Desktop_Floating_Toolbar.py
Demo_Script_Launcher_Realtime_Output.py
Demo_Multithreaded_Animated_Shell_Command.py
Demo_Multithreaded_Multiple_Threads.py
Demo_Multithreaded_Different_Threads.py
My attempts so far have yielded mixed results. I have been able to get individual steps to launch but I have not been able to put a combination of calls, triggers, and an event loop together that works.
My hunch is that using Demo_Multithreaded_Multiple_Threads.py is where I need to be building but trying to combine that with Demo_Script_Launcher_Realtime_Output.py for just the watcher.py script has led to hanging.
#!/usr/bin/python3
import subprocess
import threading
import time
import PySimpleGUI as sg
"""
You want to look for 3 points in this code, marked with comment "LOCATION X".
1. Where you put your call that takes a long time
2. Where the trigger to make the call takes place in the event loop
3. Where the completion of the call is indicated in the event loop
Demo on how to add a long-running item to your PySimpleGUI Event Loop
If you want to do something that takes a long time, and you do it in the
main event loop, you'll quickly begin to see messages from windows that your
program has hung, asking if you want to kill it.
The problem is not that your program is hung, the problem is that you are
not calling Read or Refresh often enough.
One way through this, shown here, is to put your long work into a thread that
is spun off, allowed to work, and then gets back to the GUI when it's done working
on that task.
Every time you start up one of these long-running functions, you'll give it an "ID".
When the function completes, it will send to the GUI Event Loop a message with
the format:
work_id ::: done
This makes it easy to parse out your original work ID
You can hard code these IDs to make your code more readable. For example, maybe
you have a function named "update_user_list()". You can call the work ID "user list".
Then check for the message coming back later from the work task to see if it starts
with "user list". If so, then that long-running task is over.
"""
# ############################# User callable CPU intensive code #############################
# Put your long running code inside this "wrapper"
# NEVER make calls to PySimpleGUI from this thread (or any thread)!
# Create one of these functions for EVERY long-running call you want to make
def long_function_wrapper(work_id, window):
    # LOCATION 1
    # this is our "long running function call"
    # sleep for a while as a simulation of a long-running computation
    def process_thread():
        global proc
        proc = subprocess.Popen('python watcher.py data', shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)

    def main():
        thread = threading.Thread(target=process_thread, daemon=True)
        thread.start()
        while True:
            sg.popup_animated(sg.DEFAULT_BASE64_LOADING_GIF, 'Watcher is Running', time_between_frames=100)
            thread.join(timeout=.1)
            if not thread.is_alive():
                break
        sg.popup_animated(None)
        output = proc.__str__().replace('\\r\\n', '\n')
        sg.popup_scrolled(output, font='Courier 10')

    if __name__ == '__main__':
        main()

    # at the end of the work, before exiting, send a message back to the GUI indicating end
    window.write_event_value('-THREAD DONE-', work_id)
    # at this point, the thread exits
    return
############################# Begin GUI code #############################
def the_gui():
    sg.theme('Light Brown 3')

    layout = [[sg.Text('Multithreaded Work Example')],
              [sg.Text('Click Go to start a long-running function call')],
              [sg.Text(size=(25, 1), key='-OUTPUT-')],
              [sg.Text(size=(25, 1), key='-OUTPUT2-')],
              [sg.Text('?', text_color='blue', key=i, pad=(0, 0), font='Default 14') for i in range(4)],
              [sg.Button('Go'), sg.Button('Popup'), sg.Button('Exit')], ]

    window = sg.Window('Multithreaded Window', layout)

    # --------------------- EVENT LOOP ---------------------
    work_id = 0
    while True:
        # wait for up to 100 ms for a GUI event
        event, values = window.read()
        if event in (sg.WIN_CLOSED, 'Exit'):
            break

        if event == 'Go':  # clicking "Go" starts a long running work item by starting thread
            window['-OUTPUT-'].update('Starting long work %s' % work_id)
            window[work_id].update(text_color='red')
            # LOCATION 2
            # STARTING long run by starting a thread
            thread_id = threading.Thread(
                target=long_function_wrapper,
                args=(work_id, window,),
                daemon=True)
            thread_id.start()
            work_id = work_id + 1 if work_id < 19 else 0

        # if message received from queue, then some work was completed
        if event == '-THREAD DONE-':
            # LOCATION 3
            # this is the place you would execute code at ENDING of long running task
            # You can check the completed_work_id variable
            # to see exactly which long-running function completed
            completed_work_id = values[event]
            window['-OUTPUT2-'].update(
                'Complete Work ID "{}"'.format(completed_work_id))
            window[completed_work_id].update(text_color='green')

        if event == 'Popup':
            sg.popup_non_blocking('This is a popup showing that the GUI is running', grab_anywhere=True)

    # if user exits the window, then close the window and exit the GUI func
    window.close()
############################# Main #############################
if __name__ == '__main__':
    the_gui()
    print('Exiting Program')
(cv) pi@raspberrypi:~/issgrab $ python GUIDE_GUI_master.py
*** Faking timeout ***
Exception in thread Thread-1:
Traceback (most recent call last):
File "/usr/lib/python3.5/threading.py", line 914, in _bootstrap_inner
self.run()
File "/usr/lib/python3.5/threading.py", line 862, in run
self._target(*self._args, **self._kwargs)
File "GUIDE_GUI_master.py", line 68, in long_function_wrapper
main()
File "GUIDE_GUI_master.py", line 57, in main
sg.popup_animated(sg.DEFAULT_BASE64_LOADING_GIF, 'Watcher is Running', time_between_frames=100)
File "/home/pi/.virtualenvs/cv/lib/python3.5/site-packages/PySimpleGUI/PySimpleGUI.py", line 15797, in PopupAnimated
transparent_color=transparent_color, finalize=True, element_justification='c', icon=icon)
File "/home/pi/.virtualenvs/cv/lib/python3.5/site-packages/PySimpleGUI/PySimpleGUI.py", line 6985, in __init__
self.Finalize()
File "/home/pi/.virtualenvs/cv/lib/python3.5/site-packages/PySimpleGUI/PySimpleGUI.py", line 7510, in Finalize
self.Read(timeout=1)
File "/home/pi/.virtualenvs/cv/lib/python3.5/site-packages/PySimpleGUI/PySimpleGUI.py", line 7325, in Read
results = self._read(timeout=timeout, timeout_key=timeout_key)
File "/home/pi/.virtualenvs/cv/lib/python3.5/site-packages/PySimpleGUI/PySimpleGUI.py", line 7365, in _read
self._Show()
File "/home/pi/.virtualenvs/cv/lib/python3.5/site-packages/PySimpleGUI/PySimpleGUI.py", line 7199, in _Show
StartupTK(self)
File "/home/pi/.virtualenvs/cv/lib/python3.5/site-packages/PySimpleGUI/PySimpleGUI.py", line 12100, in StartupTK
my_flex_form.TKroot.mainloop()
File "/usr/lib/python3.5/tkinter/__init__.py", line 1143, in mainloop
self.tk.mainloop(n)
RuntimeError: Calling Tcl from different appartment
Exiting Program
I've been having an odd issue with PyCharm and subprocesses created by the multiprocessing library locking up forever. I'm using Windows with Python 3.5. What I'm trying to do is:
Start a background thread to block on stdin (waiting for input)
Have the main thread check occasionally for input from stdin and then delegate the work to Python processes created using multiprocessing
However, I've found that newly created multiprocessing Processes lock up forever if and only if the following conditions are met:
I'm running the code via Pycharm (both the latest and older versions)
The background thread is blocking on stdin
Here's the simplest example I can create that reproduces the problem:
import multiprocessing
import threading
import sys
def noop():
    pass

def consume():
    while True:
        sys.stdin.readline()

if __name__ == '__main__':
    # create a daemon thread to block on stdin
    thread = threading.Thread(target=consume, daemon=True)
    thread.start()

    # create a background process
    process = multiprocessing.Process(target=noop)
    process.start()
I've Googled various combinations of "PyCharm stdin multiprocessing hang ..." and had no luck finding an explanation, and I can't figure out why a thread of the main process blocking on stdin should ever cause a subprocess to also block/hang, let alone why it would only happen when running the script in PyCharm. The only thing I can guess is that there might be some monkey-patching of either stdin or the multiprocessing library going on.
Has anyone else encountered this problem? Can anyone explain to me why this only occurs in PyCharm, and how I can make it work regardless of the Python editor I'm using?
I faced the same problem when I was trying to make multiple API calls to fetch data from a remote server. I replaced multiprocessing.dummy with concurrent.futures.ThreadPoolExecutor; it works in the same way as dummy.
Following is a short snippet of running code that writes the response to a JSON file:
import concurrent.futures
from concurrent.futures import ThreadPoolExecutor

uids = []  # an array of the requisite parameters used in requests
with open('flight_config.json', 'w') as f:
    futures = []
    for i in range(chunk_index, len(uids)):
        print('For uid[{}], fetching started:'.format(i))
        chunk_index += 1
        auth_token = get_header()
        with ThreadPoolExecutor(max_workers=7) as executor:
            future_to_url = {executor.submit(fetch_response_from_api, uid=uid, auth_token=auth_token): uid
                             for uid in uids[i]}
            for future in concurrent.futures.as_completed(future_to_url):
                result = future_to_url[future]
                try:
                    data = future.result()
                    print(data)
                except Exception as exc:
                    print('%r generated an exception: %s' % (result, exc))
                else:
                    print('%r page is %d bytes' % (result, len(data)))
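Since the snippet above depends on helpers from my project (get_header, fetch_response_from_api, chunk_index), here is a stripped-down, self-contained sketch of the same pattern with a stand-in fetch function:

```python
import concurrent.futures

def fetch(uid):
    # stand-in for the real API call (fetch_response_from_api above)
    return {'uid': uid, 'status': 'ok'}

uids = ['a', 'b', 'c']
results = {}

with concurrent.futures.ThreadPoolExecutor(max_workers=7) as executor:
    future_to_uid = {executor.submit(fetch, uid): uid for uid in uids}
    for future in concurrent.futures.as_completed(future_to_uid):
        uid = future_to_uid[future]
        try:
            results[uid] = future.result()
        except Exception as exc:
            print('%r generated an exception: %s' % (uid, exc))

print(sorted(results))   # → ['a', 'b', 'c']
```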
This is my first program using threading and I am completely new to OS concepts. I was just trying to understand how to do things asynchronously in Python. I am trying to establish a session and send keepalives on a daemon thread, while sending protocol messages from the main thread. However, I noticed that once I created the thread, I was unable to access the variables I could access before creating it. I no longer see the global variables I used to see before creating this new thread.
Can someone help me understand how threading works here? I am trying to understand:
How to print properly so that logging is useful.
How to kill the thread that we created?
How to access variables from one thread
def pcep_init(ip):
    global pcc_client,keeppkt,pkt,thread1
    accept_connection(ip)
    send_pcep_open()
    # Create new Daemon thread to send KA
    thread1 = myThread(1, "Keepalive-Thread\r")
    thread1.setDaemon(True)
    # Start new Threads
    print "This is thread1 before start %r" % thread1
    thread1.start()
    print "This is thread1 after start %r" % thread1
    print "Coming out of pcep_init"
    return 1
However, when I executed the API I see that the print output is now somewhat misaligned due to the asynchronous execution:
>>> ret_val=pcep_init("192.168.25.2").
starting pce server on 192.168.25.2 port 4189
connection from ('192.168.25.1', 42352)
, initial daemon)>fore start <myThread(Keepalive-Thread
, started daemon 140302767515408)>ead(Keepalive-Thread
Coming out of pcep_init
>>> Starting Keepalive-Thread <------ I am supposed to hit the enter button to get the python prompt not sure why thats needed.
>>> thread1
Traceback (most recent call last):
File "<console>", line 1, in <module>
NameError: name 'thread1' is not defined
>>> threading.currentThread()
<_MainThread(MainThread, started)>
>>> threading.activeCount()
2
>>> threading.enumerate() <-------------- Not sure why this is not showing the Main Thread
, started daemon 140302767515408)>], <myThread(Keepalive-Thread
>>>
I create a Python thread. Once it's kicked off by calling its start() method, I monitor a flag inside the thread; if that flag == True, I know the user no longer wants the thread to keep running, so I'd like to do some housecleaning and terminate the thread.
I couldn't terminate the thread, however. I tried thread.join(), thread.exit(), and thread.quit(); all throw exceptions.
Here is what my thread looks like.
EDIT 1: Please note the core() function is called within the standard run() function, which I haven't shown here.
EDIT 2: I just tried sys.exit() when the StopFlag is true, and it looks like the thread terminates! Is that safe to go with?
class workingThread(Thread):
    def __init__(self, gui, testCase):
        Thread.__init__(self)
        self.myName = Thread.getName(self)
        self.start()  # start the thread

    def core(self, arg, f):  # Where I check the flag and run the actual code
        # STOP
        if (self.StopFlag == True):
            if self.isAlive():
                self.doHouseCleaning()
                # none of following works, all throw exceptions
                self.exit()
                self.join()
                self._Thread__stop()
                self._Thread_delete()
                self.quit()
                # Check if it's terminated or not
                if not(self.isAlive()):
                    print self.myName + " terminated "
        # PAUSE
        elif (self.StopFlag == False) and not(self.isSet()):
            print self.myName + " paused"
            while not(self.isSet()):
                pass
        # RUN
        elif (self.StopFlag == False) and self.isSet():
            r = f(arg)
Several problems here; there could be others too, but since you're not showing the entire program or the specific exceptions, this is the best I can do:
The task the thread should be performing should be called "run" or passed to the Thread constructor.
A thread doesn't call join() on itself, the parent process that started the thread calls join(), which makes the parent process block until the thread returns.
Usually the parent process should be calling start(), which arranges for run() to execute in the new thread.
The thread is complete once it finishes (returns from) the run() function.
Simple example:
import threading
import time

class MyThread(threading.Thread):
    def __init__(self):
        super(MyThread, self).__init__()
        self.count = 5

    def run(self):
        while self.count:
            print("I'm running for %i more seconds" % self.count)
            time.sleep(1)
            self.count -= 1

t = MyThread()
print("Starting %s" % t)
t.start()
# do whatever you need to do while the other thread is running
t.join()
print("%s finished" % t)
Output:
Starting <MyThread(Thread-1, initial)>
I'm running for 5 more seconds
I'm running for 4 more seconds
I'm running for 3 more seconds
I'm running for 2 more seconds
I'm running for 1 more seconds
<MyThread(Thread-1, stopped 6712)> finished
There's no explicit way to kill a thread, either from a reference to thread instance or from the threading module.
That being said, common use cases for running multiple threads do allow opportunities to prevent them from running indefinitely. If, say, you're making connections to an external resource via urllib2, you could always specify a timeout:
import urllib2
urllib2.urlopen(url[, data][, timeout])
The same is true for sockets:
import socket
socket.setdefaulttimeout(timeout)
Note that calling the join([timeout]) method of a thread with a timeout specified will only block for the timeout (or until the thread terminates). It doesn't kill the thread.
If you want to ensure that the thread will terminate when your program finishes, just make sure to set the daemon attribute of the thread object to True before invoking its start() method.
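A minimal sketch of that daemon behaviour:

```python
import threading
import time

def background():
    while True:          # loops forever; would normally keep the program alive
        time.sleep(0.1)

t = threading.Thread(target=background)
t.daemon = True          # must be set before start()
t.start()

print(t.daemon)          # → True
# When the main thread exits, this daemon thread is terminated abruptly
# instead of blocking the process from shutting down.
```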
I need to do the following in Python. I want to spawn a process (subprocess module?), and:
if the process ends normally, to continue exactly from the moment it terminates;
if, otherwise, the process "gets stuck" and doesn't terminate within (say) one hour, to kill it and continue (possibly giving it another try, in a loop).
What is the most elegant way to accomplish this?
The subprocess module will be your friend. Start the process to get a Popen object, then pass it to a function like this. Note that this only raises an exception on timeout. If desired, you can catch the exception and call the kill() method on the Popen object. (kill is new in Python 2.6, btw.)
import time

def wait_timeout(proc, seconds):
    """Wait for a process to finish, or raise exception after timeout"""
    start = time.time()
    end = start + seconds
    interval = min(seconds / 1000.0, .25)

    while True:
        result = proc.poll()
        if result is not None:
            return result
        if time.time() >= end:
            raise RuntimeError("Process timed out")
        time.sleep(interval)
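A possible usage sketch, repeating the helper so the snippet runs standalone; `sys.executable -c` launches a deliberately slow child process as a stand-in for the real command:

```python
import subprocess
import sys
import time

def wait_timeout(proc, seconds):
    """Wait for a process to finish, or raise exception after timeout"""
    start = time.time()
    end = start + seconds
    interval = min(seconds / 1000.0, .25)
    while True:
        result = proc.poll()
        if result is not None:
            return result
        if time.time() >= end:
            raise RuntimeError("Process timed out")
        time.sleep(interval)

# a child that would sleep far longer than we are willing to wait
proc = subprocess.Popen([sys.executable, '-c', 'import time; time.sleep(60)'])
try:
    wait_timeout(proc, 2)
except RuntimeError:
    proc.kill()          # kill() requires Python 2.6+
    proc.wait()
print("return code:", proc.returncode)
```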
There are at least 2 ways to do this by using psutil as long as you know the process PID.
Assuming the process is created as such:
import subprocess
subp = subprocess.Popen(['progname'])
...you can get its creation time in a busy loop like this:
import psutil, time

TIMEOUT = 60 * 60  # 1 hour
p = psutil.Process(subp.pid)
while 1:
    if (time.time() - p.create_time()) > TIMEOUT:
        p.kill()
        raise RuntimeError('timeout')
    time.sleep(5)
...or simply, you can do this:
import psutil

p = psutil.Process(subp.pid)
try:
    p.wait(timeout=60*60)
except psutil.TimeoutExpired:
    p.kill()
    raise
Also, while you're at it, you might be interested in the following extra APIs:
>>> p.status()
'running'
>>> p.is_running()
True
>>>
I had a similar question and found this answer. Just for completeness, I want to add one more way how to terminate a hanging process after a given amount of time: The python signal library
https://docs.python.org/2/library/signal.html
From the documentation:
import signal, os

def handler(signum, frame):
    print 'Signal handler called with signal', signum
    raise IOError("Couldn't open device!")

# Set the signal handler and a 5-second alarm
signal.signal(signal.SIGALRM, handler)
signal.alarm(5)

# This open() may hang indefinitely
fd = os.open('/dev/ttyS0', os.O_RDWR)

signal.alarm(0)  # Disable the alarm
Since you wanted to spawn a new process anyway, this might not be the best solution for your problem, though.
A nice, passive way is also to use a threading.Timer and set up a callback function.
import subprocess
from threading import Timer

# execute the command
p = subprocess.Popen(command)

# save the proc object - either if you make this into a class (like the example), or 'p' can be global
self.p = p

# config and init timer
# kill_proc is a callback function which can also be added onto the class or simply be a global
t = Timer(seconds, self.kill_proc)

# start timer
t.start()

# wait for the test process to return
rcode = p.wait()
t.cancel()
If the process finishes in time, wait() returns and the code continues here, and cancel() stops the timer. If, meanwhile, the timer runs out and executes kill_proc in a separate thread, wait() will also return here and cancel() will do nothing. By the value of rcode you will know whether we timed out or not. The simplest kill_proc (you can of course do anything extra there):
def kill_proc(self):
    os.kill(self.p.pid, signal.SIGTERM)  # os.kill takes a pid, not a Popen object
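Since the fragment above mixes instance attributes with module-level code, here is a self-contained sketch of the same idea; it uses Popen.terminate() instead of os.kill so it also works on Windows, and a slow `sys.executable -c` child as a stand-in command:

```python
import subprocess
import sys
from threading import Timer

# a child process that would run for 60 seconds if left alone
p = subprocess.Popen([sys.executable, '-c', 'import time; time.sleep(60)'])

def kill_proc():
    p.terminate()        # portable equivalent of os.kill(p.pid, SIGTERM)

t = Timer(2, kill_proc)  # arrange for the kill after 2 seconds
t.start()
rcode = p.wait()         # returns early because the timer fires and kills the child
t.cancel()               # harmless here; the timer has already fired
print("return code:", rcode)
```

A nonzero (on POSIX, negative) return code tells you the child was killed rather than finishing normally.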
Kudos to Peter Shinners for his nice suggestion about the subprocess module. I was using exec() before and did not have any control over the running time, and especially over terminating it. My simplest template for this kind of task is the following; I am just using the timeout parameter of subprocess.run() to monitor the running time. Of course you can capture standard out and error as well if needed:
from subprocess import run, TimeoutExpired, CalledProcessError
for file in fls:
    try:
        run(["python3.7", file], check=True, timeout=7200)  # 2 hours timeout
        print("scraped :)", file)
    except TimeoutExpired:
        message = "Timeout :( !!!"
        print(message, file)
        f.write("{message} {file}\n".format(file=file, message=message))
    except CalledProcessError:
        message = "SOMETHING HAPPENED :( !!!, CHECK"
        print(message, file)
        f.write("{message} {file}\n".format(file=file, message=message))