Win32com events not raised inside a thread? - python

I am new to both COM and Python, so I'm not very familiar with the exact terminology; apologies in advance for any inexact terms.
I am trying to connect to a desktop application via a proprietary COM interface using pywin32.
I created a PoC and it runs fine. The COM function call is processed and I get the expected event.
import time
import win32com.client

class MyEvents:
    def __init__(self):
        print("Callback class initialized")

    def OnMyEvent(self, data):
        print('MyEvent raised')

class ComUser:
    comObj = None

    def __init__(self):
        comObj = win32com.client.DispatchWithEvents(
            "ProprietaryInterface.InterfaceClass", MyEvents)
        comObj.Register()
        comObj.DoSomething(data)
        time.sleep(120)

userObj = ComUser()
So far so good. I get the event on the screen
Callback class initialized
MyEvent raised
Next I tried to put it into my application where I have multiple threads. To explain it in simple terms:
Main creates an object of Class X which initializes an XMLRPC Server thread.
The XMLRPC handler simply takes incoming info and puts it into a queue
The queue is from the multiprocessing library.
Another thread waits on this queue for an incoming message
def __startPollingThread(self):
    # Note: these CoInitialize/CoUninitialize calls run on the thread
    # that spawns the polling thread, not on the polling thread itself.
    pythoncom.CoInitialize()
    pollingThread = Thread(target=self.__checkQueue)
    pollingThread.start()
    pythoncom.CoUninitialize()
This is the polling thread method:
def __checkQueue(self):
    try:
        pythoncom.CoInitialize()
        while True:
            currMessage = self.__messageQueue.get()
            self.__processMessage(currMessage)
    except:
        pass  # Log message
    finally:
        pythoncom.CoUninitialize()
The __processMessage call passes through multiple classes (something like a strategy pattern plus a state pattern) before it hits the class that handles the COM interface.
In the ComUser class, I have a method which registers with the client application's COM interface:
def initSystem(self):
    import pythoncom
    try:
        pythoncom.CoInitialize()
        self.ComConnector = win32com.client.DispatchWithEvents(
            "ProprietaryInterface.InterfaceClass", MyEvents)
        self.ComConnector.Register()
    except:
        pass
    finally:
        pythoncom.CoUninitialize()
Another method handles the specific requests as they arrive and makes the corresponding COM calls.
def handleMessage(self, message):
    # if message == this then ...
    comObj.DoSomething(data)
Both methods are called from the __processMessage method. All my classes reside in separate .py files, except ComUser and MyEvents, which are in the same module.
I can call the COM interface and see the application reacting to the COM method calls, but I can't see any events being raised. I have tried a whole lot of combinations of CoInitialize/CoUninitialize and "import pythoncom" statements to make sure the problem is not with the threading, and I also tried setting sys.coinit_flags = 0. None of it makes any difference; I just don't see any events.
Is it a problem that I call DispatchWithEvents in a child thread instead of the main thread (the calls themselves work fine)? Or is it that the main thread (i.e. the main method of the program) dies out? I tried putting a long sleep there too. I even tried a separate thread with a PumpWaitingMessages loop, but it made no difference. I can't think of any solutions.
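For what it's worth, COM events are delivered on the thread that created the event-sink object, and that thread has to pump Windows messages for them to arrive; a thread that only blocks on queue.get() never sees them. Below is a minimal sketch of a polling thread that both owns the COM object and pumps messages between queue checks. MyEvents and the interface name are the placeholders from the question above, and the 0.1-second timeout is an arbitrary choice, not anything prescribed by pywin32:

import pythoncom
import win32com.client
from queue import Empty  # multiprocessing.Queue raises queue.Empty on timeout

def com_polling_thread(message_queue):
    # Initialize COM and create the event-aware object in this thread,
    # so that its events are delivered to this thread.
    pythoncom.CoInitialize()
    try:
        com_obj = win32com.client.DispatchWithEvents(
            "ProprietaryInterface.InterfaceClass", MyEvents)
        com_obj.Register()
        while True:
            # Dispatch any pending COM events to MyEvents.
            pythoncom.PumpWaitingMessages()
            try:
                # Use a short timeout so the message pump keeps
                # running between incoming queue messages.
                message = message_queue.get(timeout=0.1)
            except Empty:
                continue
            com_obj.DoSomething(message)
    finally:
        pythoncom.CoUninitialize()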

Related

Callback from Ctypes sometimes fails

I have registered a Python callback with a DLL using the ctypes library. When the callback is triggered, I try to unblock an asyncio future I have set up. Since the callback happens in a separate thread that gets spawned by the DLL, I use the loop.call_soon_threadsafe() function to get back to the event loop that started it all.
Mostly this works fine, but every once in a while the future fails to be unblocked. In the minimal example here this also happens sometimes, but in those cases I can see that the callback doesn't even arrive (or at least the corresponding print doesn't happen).
I have only tried this with Python 3.8.5 so far. Is there some race condition here that I did not notice?
Here's a minimal example:
import asyncio
import ctypes
import os

class testClass:
    loop = None
    future = None
    exampleDll = None

    def finish(self):
        # Now in the right thread and event loop.
        print("callback in eventloop")
        self.future.set_result(999)

    def trampoline(self):
        # Still in the other C thread.
        self.loop.call_soon_threadsafe(self.finish)

    def example_callback(self):
        # In another C thread, so we need to do thread-safety stuff.
        print("callback has arrived")
        self.trampoline()
        return

    async def register_and_wait(self):
        self.loop = asyncio.get_event_loop()
        self.future = self.loop.create_future()
        callback_type = ctypes.CFUNCTYPE(None)
        callback_as_cfunc = callback_type(self.example_callback)
        # Now register the callback and wait.
        self.exampleDll.fnminimalExample(callback_as_cfunc, ctypes.c_int(1))
        await self.future
        print("future has finished")

    def main(self):
        path = os.path.join(os.path.dirname(os.path.abspath(__file__)), "minimalExample.dll")
        #print(path)
        ctypes.cdll.LoadLibrary(path)
        # For easy access.
        self.exampleDll = ctypes.cdll.minimalExample
        asyncio.run(self.register_and_wait())

if __name__ == "__main__":
    for i in range(100000):
        print(i)
        test = testClass()
        test.main()
You can get the compiled example dll and its source from the repository here to reproduce.
The issue (at least in this minimal example) does not show up any more if I reuse the same event loop instead of spawning a new one for every iteration with asyncio.run.
The problem is thus fixed, but it doesn't feel right.
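For comparison, here is a sketch of that workaround: build one event loop up front and drive every iteration with run_until_complete(), instead of letting asyncio.run() create and close a fresh loop each time. It reuses the testClass from the example above and inlines the DLL loading that main() normally does:

import asyncio
import ctypes
import os

test = testClass()
# Load the DLL once, as main() does, but without asyncio.run().
path = os.path.join(os.path.dirname(os.path.abspath(__file__)), "minimalExample.dll")
ctypes.cdll.LoadLibrary(path)
test.exampleDll = ctypes.cdll.minimalExample

# One loop for all iterations, instead of a new loop per iteration.
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
try:
    for i in range(100000):
        print(i)
        loop.run_until_complete(test.register_and_wait())
finally:
    loop.close()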

Ensure that process started by COM connection is killed

I'm automating Minitab 17 using Python's win32com library, and while all of the commands execute correctly, I can't seem to get the process started by Minitab to exit when my script ends. My structure looks like:
from myapi import get_data
import pythoncom
from win32com.client import gencache

def process_data(data):
    # In case of threading
    pythoncom.CoInitialize()
    app = gencache.EnsureDispatch('Mtb.Application')
    try:
        # do some processing
        pass
    finally:
        # App-specific command that is supposed to close the software
        app.Quit()
        # Ensure the object is released
        del app
        # In case of threading
        pythoncom.CoUninitialize()

def main():
    data = get_data()
    process_data(data)

if __name__ == '__main__':
    main()
I don't get any exceptions raised or error messages printed, but the Mtb.exe process is still listed in Task Manager. Even more frustrating, if I run the following in an IPython session:
>>> from win32com.client import gencache
>>> app = gencache.EnsureDispatch('Mtb.Application')
>>> ^D
The Minitab process is closed immediately. I observe the same behavior in a normal python interactive session. Why would the process get closed correctly when running in an interactive session but not in a standalone script? What is done differently there that isn't being performed in my script?
I've also tried running process_data in a threading.Thread and in a multiprocessing.Process with no luck.
EDIT:
If I have a script containing nothing but
from win32com.client import gencache
app = gencache.EnsureDispatch('Mtb.Application')
then when I run it I see the Mtb.exe process in task manager, but once the script exits the process is killed. So instead my question is why does it matter if this COM object is declared at top-level vs. inside a function?
I don't have Minitab, so I can't verify this, but try forcing a shutdown of the COM server by setting app = None just after the call to app.Quit(). Python uses reference counting to manage object lifecycles, so, assuming there are no other references to app, setting it to None should cause it to be finalized immediately. I have seen that cause similar issues. You should not need a weak reference; something else is going on. The following, based on your answer, should work:
from myapi import get_data
import pythoncom
from win32com.client import gencache

def process_data(mtb, data):
    try:
        mtb.do_something(data)
    finally:
        mtb.Quit()

def main(mtb):
    data = get_data()
    process_data(mtb, data)

if __name__ == '__main__':
    pythoncom.CoInitialize()
    mtb = gencache.EnsureDispatch('Mtb.Application')
    main(mtb)
    mtb.Quit()
    mtb = None
    pythoncom.CoUninitialize()
The problem was that the garbage collector couldn't clean up the reference to the underlying IUnknown object (the base type for all COM objects), and without the GC doing its job the process stayed alive. I solved the problem by using the weakref module to immediately wrap the COM object in a weak reference so it could be more easily dereferenced:
from myapi import get_data
import weakref
from win32com.client import gencache
import pythoncom

def process_data(mtb_ref, data):
    try:
        mtb_ref().do_something(data)
    finally:
        mtb_ref().Quit()

def main(mtb_ref):
    data = get_data()
    process_data(mtb_ref, data)

if __name__ == '__main__':
    pythoncom.CoInitialize()
    mtb_ref = weakref.ref(gencache.EnsureDispatch('Mtb.Application'))
    main(mtb_ref)
    pythoncom.CoUninitialize()
I'm not sure I understand fully why this makes a difference, but I believe it's because there's never a direct reference to the object, only a weak reference, so all the functions that use the COM object only do so indirectly, allowing the GC to know that the object can be collected sooner. For whatever reason it still needs to be created at the top level of the module, but this at least makes it possible for me to write more reusable code that cleanly exits.
After pythoncom.CoUninitialize() I could still see the process. The following helped for me (based on the approach above):
import logging
from comtypes.automation import IDispatch
from ctypes import c_void_p, cast, POINTER, byref

logger = logging.getLogger(__name__)

def release_reference(obj):
    # Pull the raw interface address out of the repr of the PyIDispatch
    # object behind a win32com object and force a Release() on it.
    logger.debug("release com object")
    oleobj = obj._oleobj_
    addr = int(repr(oleobj).split()[-1][2:-1], 16)
    pointer = POINTER(IDispatch)()
    cast(byref(pointer), POINTER(c_void_p))[0] = addr
    pointer.Release()

Stopping task.LoopingCall if exception occurs

I'm new to Twisted, and after finally figuring out how deferreds work I'm struggling with tasks. What I want to achieve is a script that sends a REST request in a loop; however, if at some point it fails, I want to stop the loop. Since I'm using callbacks I can't easily catch exceptions, and because I don't know how to stop the looping from an errback, I'm stuck.
This is the simplified version of my code:
def send_request():
    agent = Agent(reactor)
    req_result = agent.request('GET', some_rest_link)
    req_result.addCallbacks(cp_process_request, cb_process_error)

if __name__ == "__main__":
    list_call = task.LoopingCall(send_request)
    list_call.start(2)
    reactor.run()
To end a task.LoopingCall, all you need to do is call stop() on the returned object (list_call in your case).
Somehow you need to make that variable available to your errback (cb_process_error), either by pushing it into a class that cb_process_error lives in, via some other class used as a pseudo-global, or by literally using a global; then you simply call list_call.stop() inside the errback.
BTW, you said:
Since I'm using callbacks I can't easily catch exceptions
That's not really true. The point of an errback is to deal with exceptions; that's one of the things that literally causes it to be called! Check out my previous deferred answer and see if it makes errbacks any clearer.
The following is a runnable example (... I'm not saying this is the best way to do it, just that it is a way...)
#!/usr/bin/python
from twisted.internet import task
from twisted.internet import reactor
from twisted.web.client import Agent
from pprint import pprint

class LoopingStuff(object):
    def cp_process_request(self, return_obj):
        print "In callback"
        pprint(return_obj)

    def cb_process_error(self, return_obj):
        print "In Errorback"
        pprint(return_obj)
        self.loopstopper()

    def send_request(self):
        agent = Agent(reactor)
        req_result = agent.request('GET', 'http://google.com')
        req_result.addCallbacks(self.cp_process_request, self.cb_process_error)

def main():
    looping_stuff_holder = LoopingStuff()
    list_call = task.LoopingCall(looping_stuff_holder.send_request)
    looping_stuff_holder.loopstopper = list_call.stop
    list_call.start(2)
    reactor.callLater(10, reactor.stop)
    reactor.run()

if __name__ == '__main__':
    main()
Assuming you can get to google.com, this will fetch pages for 10 seconds. If you change the second argument of agent.request to something like http://127.0.0.1:12999 (assuming port 12999 gives a connection refused), then you'll see one errback printout (which will also have stopped the LoopingCall) and a wait of up to 10 seconds until the reactor shuts down.

Remove threads usage from script

The following script listens on an IMAP connection using IMAP IDLE and depends heavily on threads. What's the easiest way for me to eliminate the thread usage and just use the main thread?
As a new Python developer I tried editing the def __init__(self, conn): method, but just got more and more errors.
A code sample would help me a lot.
#!/usr/local/bin/python2.7
print "Content-type: text/html\r\n\r\n"

import socket, ssl, json, struct, re
import imaplib2, time
from threading import *

# enter gmail login details here
USER = "username@gmail.com"
PASSWORD = "password"

# enter device token here
deviceToken = 'my device token x x x x x'
deviceToken = deviceToken.replace(' ', '').decode('hex')

currentBadgeNum = -1

def getUnseen():
    (resp, data) = M.status("INBOX", '(UNSEEN)')
    print data
    return int(re.findall(r"UNSEEN (\d+)\)", data[0])[0])

def sendPushNotification(badgeNum):
    global currentBadgeNum, deviceToken
    if badgeNum != currentBadgeNum:
        currentBadgeNum = badgeNum
        thePayLoad = {
            'aps': {
                'alert': 'Hello world!',
                'sound': '',
                'badge': badgeNum,
            },
            'test_data': {'foo': 'bar'},
        }
        theCertfile = 'certif.pem'
        theHost = ('gateway.push.apple.com', 2195)
        data = json.dumps(thePayLoad)
        theFormat = '!BH32sH%ds' % len(data)
        theNotification = struct.pack(theFormat, 0, 32,
                                      deviceToken, len(data), data)
        ssl_sock = ssl.wrap_socket(socket.socket(socket.AF_INET,
                                                 socket.SOCK_STREAM),
                                   certfile=theCertfile)
        ssl_sock.connect(theHost)
        ssl_sock.write(theNotification)
        ssl_sock.close()
        print "Sent Push alert."

# This is the threading object that does all the waiting on
# the event
class Idler(object):
    def __init__(self, conn):
        self.thread = Thread(target=self.idle)
        self.M = conn
        self.event = Event()

    def start(self):
        self.thread.start()

    def stop(self):
        # This is a neat trick to make the thread end. Took me a
        # while to figure that one out!
        self.event.set()

    def join(self):
        self.thread.join()

    def idle(self):
        # Starting an unending loop here
        while True:
            # This is part of the trick to make the loop stop
            # when the stop() command is given
            if self.event.isSet():
                return
            self.needsync = False

            # A callback method that gets called when a new
            # email arrives. Very basic, but that's good.
            def callback(args):
                if not self.event.isSet():
                    self.needsync = True
                    self.event.set()

            # Do the actual idle call. This returns immediately,
            # since it's asynchronous.
            self.M.idle(callback=callback)
            # This waits until the event is set. The event is
            # set by the callback, when the server 'answers'
            # the idle call and the callback function gets
            # called.
            self.event.wait()
            # Because the callback sets the needsync variable,
            # this helps escape the loop without doing
            # anything if stop() was called. Kinda neat
            # solution.
            if self.needsync:
                self.event.clear()
                self.dosync()

    # The method that gets called when a new email arrives.
    # Replace it with something better.
    def dosync(self):
        print "Got an event!"
        numUnseen = getUnseen()
        sendPushNotification(numUnseen)

# Had to do this stuff in a try-finally, since some testing
# went a little wrong.....
while True:
    try:
        # Set the following two lines to your creds and server
        M = imaplib2.IMAP4_SSL("imap.gmail.com")
        M.login(USER, PASSWORD)
        M.debug = 4
        # We need to get out of the AUTH state, so we just select
        # the INBOX.
        M.select("INBOX")
        numUnseen = getUnseen()
        sendPushNotification(numUnseen)
        typ, data = M.fetch(1, '(RFC822)')
        raw_email = data[0][1]

        import email
        email_message = email.message_from_string(raw_email)
        print email_message['Subject']
        #print M.status("INBOX", '(UNSEEN)')

        # Start the Idler thread
        idler = Idler(M)
        idler.start()
        # Sleep forever, one minute at a time
        while True:
            time.sleep(60)
    except imaplib2.IMAP4.abort:
        print("Disconnected. Trying again.")
    finally:
        # Clean up.
        #idler.stop()  # Commented out to see the real error
        #idler.join()  # Commented out to see the real error
        #M.close()     # Commented out to see the real error
        # This is important!
        M.logout()
As far as I can tell, this code is hopelessly confused because the author used the "imaplib2" project library, which forces a threading model that this code then never really exploits.
Only one thread is ever created, and it wouldn't need to be a thread but for the choice of imaplib2. However, as the imaplib2 documentation notes:
This module presents an almost identical API as that provided by the standard python library module imaplib, the main difference being that this version allows parallel execution of commands on the IMAP4 server, and implements the IMAP4rev1 IDLE extension. (imaplib2 can be substituted for imaplib in existing clients with no changes in the code, but see the caveat below.)
Which makes it appear that you should be able to throw out much of class Idler and just use the connection M. I recommend that you look at Doug Hellman's excellent Python Module Of The Week for module imaplib prior to looking at the official documentation. You'll need to reverse engineer the code to find out its intent, but it looks to me like the following (see the sketch after this list):
1. Open a connection to GMail
2. Check for unseen messages in Inbox
3. Count unseen messages from (2)
4. Send a dummy message to some service at gateway.push.apple.com
5. Wait for notice, goto (2)
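To make that concrete, here is a rough single-threaded sketch of the same flow using the standard imaplib instead of imaplib2. It has no IDLE support, so it polls on a timer rather than waiting for the server to push; USER, PASSWORD and sendPushNotification are the placeholders from the script above, and the 60-second interval is arbitrary:

import imaplib
import re
import time

def unseen_count(conn):
    # Steps 2 and 3: ask the server how many unseen messages INBOX has.
    resp, data = conn.status("INBOX", "(UNSEEN)")
    return int(re.search(r"UNSEEN (\d+)", data[0]).group(1))

M = imaplib.IMAP4_SSL("imap.gmail.com")   # step 1
M.login(USER, PASSWORD)
M.select("INBOX")
last = -1
while True:
    count = unseen_count(M)               # steps 2 and 3
    if count != last:
        sendPushNotification(count)       # step 4
        last = count
    time.sleep(60)                        # step 5, by polling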
Perhaps the most interesting thing about the code is that it doesn't appear to do anything, although what sendPushNotification (step 4) does is a mystery, and the one line that uses an imaplib2-specific service:
self.M.idle(callback=callback)
uses a named argument that I don't see in the module documentation. Do you know if this code ever actually ran?
Aside from unneeded complexity, there's another reason to drop imaplib2: it exists independently on SourceForge and PyPI, of which one maintainer claimed two years ago that "an attempt will be made to keep it up-to-date with the original". Which one do you have? Which would you install?
Don't do it
Since you are trying to remove the Thread usage solely because you haven't found how to handle the exceptions from the server, I don't recommend removing it: because of the async nature of the library itself, the Idler handles this more smoothly than a single thread could.
Solution
You need to wrap the self.M.idle(callback=callback) call with try-except and then re-raise the exception in the main thread. You then handle the exception by re-running the code in the main thread to restart the connection.
You can find more details of the solution and possible reasons in this answer: https://stackoverflow.com/a/50163971/1544154
Complete solution is here: https://www.github.com/Elijas/email-notifier
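A rough sketch of that shape, assuming the Idler class from the question: the worker thread catches the failure and hands it to the main thread over a queue, and the main thread re-raises it so the existing while True / except imaplib2.IMAP4.abort block can reconnect. The error_queue name is an illustrative addition, not part of the original script:

import Queue  # Python 2, matching the script above

error_queue = Queue.Queue()

# Inside Idler.idle(), wrap the imaplib2 call:
#     try:
#         self.M.idle(callback=callback)
#     except Exception as e:
#         error_queue.put(e)
#         return

# In the main thread, replace the "sleep forever" loop:
while True:
    try:
        exc = error_queue.get(timeout=60)
    except Queue.Empty:
        continue
    # Re-raise in the main thread; the surrounding try/except
    # then restarts the connection.
    raise exc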

Errno 9 using the multiprocessing module with Tornado in Python

For operations in my Tornado server that are expected to block (and can't be easily modified to use things like Tornado's asynchronous HTTP request client), I have been offloading the work to separate worker processes using the multiprocessing module. Specifically, I was using a multiprocessing Pool because it offers a method called apply_async, which works very well with Tornado since it takes a callback as one of its arguments.
I recently realized that a pool preallocates its number of processes, so if they all become blocked, operations that require a new process will have to wait. I do realize that the server can still take connections, since apply_async works by adding things to a task queue and returns rather immediately itself, but I'm looking to spawn n processes for n blocking tasks I need to perform.
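For context, the pool-plus-callback pattern being described might look like this sketch, where worker and on_done are illustrative names rather than anything from the original server:

import multiprocessing as mp

def worker(x):
    # Blocking work happens in a pool process.
    return x * 2

def on_done(result):
    # Runs back in the parent once the task finishes.
    print "task finished:", result

pool = mp.Pool(processes=4)  # preallocated: a 5th blocking task must wait
pool.apply_async(worker, (21,), callback=on_done)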
I figured that I could use the add_handler method for my Tornado server's IOLoop to add a handler for each new PID that I create to that IOLoop. I've done something similar before, but it was using popen and an arbitrary command. An example of such use of this method is here. I wanted to pass arguments into an arbitrary target Python function within my scope, though, so I wanted to stick with multiprocessing.
However, it seems that something doesn't like the PIDs that my multiprocessing.Process objects have. I get IOError: [Errno 9] Bad file descriptor. Are these processes restricted somehow? I know that the PID isn't available until I actually start the process, but I do start the process. Here's the source code of an example I've made that demonstrates this issue:
#!/usr/bin/env python
"""Creates a small Tornado program to demonstrate asynchronous programming.
Specifically, this demonstrates using the multiprocessing module."""

import tornado.httpserver
import tornado.ioloop
import tornado.web
import multiprocessing as mp
import random
import time

__author__ = 'Brian McFadden'
__email__ = 'brimcfadden@gmail.com'

def sleepy(queue):
    """Pushes a string to the queue after sleeping for 5 seconds.
    This sleeping can be thought of as a blocking operation."""
    time.sleep(5)
    queue.put("Now I'm awake.")
    return

def random_num():
    """Returns a string containing a random number.
    This function can be used by handlers to receive text for writing which
    facilitates noticing change on the webpage when it is refreshed."""
    n = random.random()
    return "<br />Here is a random number to show change: {0}".format(n)

class SyncHandler(tornado.web.RequestHandler):
    """Demonstrates handling a request synchronously.
    It executes sleepy() before writing some more text and a random number to
    the webpage. While the process is sleeping, the Tornado server cannot
    handle any requests at all."""
    def get(self):
        q = mp.Queue()
        sleepy(q)
        val = q.get()
        self.write(val)
        self.write('<br />Brought to you by SyncHandler.')
        self.write('<br />Try refreshing me and then the main page.')
        self.write(random_num())

class AsyncHandler(tornado.web.RequestHandler):
    """Demonstrates handling a request asynchronously.
    It executes sleepy() before writing some more text and a random number to
    the webpage. It passes the sleeping function off to another process using
    the multiprocessing module in order to handle more requests concurrently
    to the sleeping, which is like a blocking operation."""
    @tornado.web.asynchronous
    def get(self):
        """Handles the original GET request (normal function delegation).
        Instead of directly invoking sleepy(), it passes a reference to the
        function to the multiprocessing pool."""
        # Create an interprocess data structure, a queue.
        q = mp.Queue()
        # Create a process for the sleepy function. Provide the queue.
        p = mp.Process(target=sleepy, args=(q,))
        # Start it, but don't use p.join(); that would block us.
        p.start()
        # Add our callback function to the IOLoop. The async_callback wrapper
        # makes sure that Tornado sends an HTTP 500 error to the client if an
        # uncaught exception occurs in the callback.
        iol = tornado.ioloop.IOLoop.instance()
        print "p.pid:", p.pid
        iol.add_handler(p.pid, self.async_callback(self._finish, q), iol.READ)

    def _finish(self, q):
        """This is the callback for post-sleepy() request handling.
        Operation of this function occurs in the original process."""
        val = q.get()
        self.write(val)
        self.write('<br />Brought to you by AsyncHandler.')
        self.write('<br />Try refreshing me and then the main page.')
        self.write(random_num())
        # Asynchronous handling must be manually finished.
        self.finish()

class MainHandler(tornado.web.RequestHandler):
    """Returns a string and a random number.
    Try to access this page in one window immediately after (<5 seconds of)
    accessing /async or /sync in another window to see the difference between
    them. Asynchronously performing the sleepy() function won't make the
    client wait for data from this handler, but synchronously doing so will!"""
    def get(self):
        self.write('This is just responding to a simple request.')
        self.write('<br />Try refreshing me after one of the other pages.')
        self.write(random_num())

if __name__ == '__main__':
    # Create an application using the above handlers.
    application = tornado.web.Application([
        (r"/", MainHandler),
        (r"/sync", SyncHandler),
        (r"/async", AsyncHandler),
    ])
    # Create a single-process Tornado server from the application.
    http_server = tornado.httpserver.HTTPServer(application)
    http_server.listen(8888)
    print 'The HTTP server is listening on port 8888.'
    tornado.ioloop.IOLoop.instance().start()
Here is the traceback:
Traceback (most recent call last):
  File "/usr/local/lib/python2.6/dist-packages/tornado/web.py", line 810, in _stack_context
    yield
  File "/usr/local/lib/python2.6/dist-packages/tornado/stack_context.py", line 77, in StackContext
    yield
  File "/usr/local/lib/python2.6/dist-packages/tornado/web.py", line 827, in _execute
    getattr(self, self.request.method.lower())(*args, **kwargs)
  File "/usr/local/lib/python2.6/dist-packages/tornado/web.py", line 909, in wrapper
    return method(self, *args, **kwargs)
  File "./process_async.py", line 73, in get
    iol.add_handler(p.pid, self.async_callback(self._finish, q), iol.READ)
  File "/usr/local/lib/python2.6/dist-packages/tornado/ioloop.py", line 151, in add_handler
    self._impl.register(fd, events | self.ERROR)
IOError: [Errno 9] Bad file descriptor
The above code is actually modified from an older example that used process pools. I've had it saved for reference for my coworkers and myself (hence the heavy commenting) for quite a while. I constructed it in such a way that I could open two small browser windows side-by-side to demonstrate to my boss that the /sync URI blocks connections while /async allows more connections. For the purposes of this question, all you need to do to reproduce it is try to access the /async handler. It errors immediately.
What should I do about this? How can the PID be "bad"? If you run the program, you can see it printed to stdout.
For the record, I'm using Python 2.6.5 on Ubuntu 10.04. Tornado is 1.1.
add_handler takes a valid file descriptor, not a PID. As an example of what's expected, Tornado itself normally uses add_handler by passing in a socket object's fileno(), which returns the object's file descriptor. The PID is irrelevant in this case.
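To illustrate, here is a sketch that keeps the multiprocessing approach but hands add_handler a real descriptor: watch the readable end of a multiprocessing.Pipe, whose Connection.fileno() returns a pollable file descriptor on Unix. The sleepy_pipe helper and the abbreviated handler body are illustrative, mirroring the question's AsyncHandler, not a drop-in replacement:

import multiprocessing as mp
import time
import tornado.ioloop
import tornado.web

def sleepy_pipe(conn):
    # Like sleepy(), but writes to a pipe so the parent has a real fd to watch.
    time.sleep(5)
    conn.send("Now I'm awake.")
    conn.close()

class AsyncHandler(tornado.web.RequestHandler):
    @tornado.web.asynchronous
    def get(self):
        # With duplex=False the receive end comes first: parent reads, child writes.
        parent_conn, child_conn = mp.Pipe(duplex=False)
        p = mp.Process(target=sleepy_pipe, args=(child_conn,))
        p.start()
        iol = tornado.ioloop.IOLoop.instance()
        fd = parent_conn.fileno()

        def on_readable(fd, events):
            iol.remove_handler(fd)
            self.write(parent_conn.recv())
            self.finish()

        # A pipe fd is something epoll can register, unlike a PID.
        iol.add_handler(fd, self.async_callback(on_readable), iol.READ)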
Check out this project:
https://github.com/vukasin/tornado-subprocess
it allows you to start arbitrary processes from tornado and get a callback when they finish (with access to their status, stdout and stderr).
