Ensure that process started by COM connection is killed - python

I'm automating Minitab 17 using Python's win32com library, and while all of the commands execute correctly, I can't seem to get the Minitab process started by the COM connection to exit when my script ends. My structure looks like this:
from myapi import get_data
import pythoncom
from win32com.client import gencache

def process_data(data):
    # In case of threading
    pythoncom.CoInitialize()
    app = gencache.EnsureDispatch('Mtb.Application')
    try:
        # do some processing
        pass
    finally:
        # App-specific command that is supposed to close the software
        app.Quit()
        # Ensure the object is released
        del app
        # In case of threading
        pythoncom.CoUninitialize()

def main():
    data = get_data()
    process_data(data)

if __name__ == '__main__':
    main()
I don't get any exceptions raised or error messages printed, but the Mtb.exe process is still listed in Task Manager. Even more frustrating, if I run the following in an IPython session:
>>> from win32com.client import gencache
>>> app = gencache.EnsureDispatch('Mtb.Application')
>>> ^D
the Minitab process is closed immediately. I observe the same behavior in a normal Python interactive session. Why does the process get closed correctly when running in an interactive session but not in a standalone script? What is done differently there that isn't being performed in my script?
I've also tried running process_data in a threading.Thread and in a multiprocessing.Process with no luck.
EDIT:
If I have a script containing nothing but
from win32com.client import gencache
app = gencache.EnsureDispatch('Mtb.Application')
then when I run it I see the Mtb.exe process in Task Manager, but once the script exits the process is killed. So instead my question is: why does it matter whether this COM object is created at the top level of the module vs. inside a function?
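A minimal experiment along these lines (a sketch I have not verified with Minitab; the explicit del and gc.collect() are additions meant to force the release before CoUninitialize runs) would be:

import gc
import pythoncom
from win32com.client import gencache

def open_and_quit():
    pythoncom.CoInitialize()
    app = gencache.EnsureDispatch('Mtb.Application')
    app.Quit()
    del app        # drop the local reference...
    gc.collect()   # ...and force a collection before tearing down COM
    pythoncom.CoUninitialize()

open_and_quit()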

I don't have Minitab, so I can't verify this, but try forcing a shutdown of the COM server by setting app = None just after the call to app.Quit(). Python uses reference counting to manage object lifecycles, so assuming there are no other references to app, setting it to None should cause it to be finalized immediately. I have seen that cause similar issues. You should not need a weak reference; something else is going on. The following, based on your answer, should work:
from myapi import get_data
import pythoncom
from win32com.client import gencache

def process_data(mtb, data):
    try:
        mtb.do_something(data)
    finally:
        mtb.Quit()

def main(mtb):
    data = get_data()
    process_data(mtb, data)

if __name__ == '__main__':
    pythoncom.CoInitialize()
    mtb = gencache.EnsureDispatch('Mtb.Application')
    main(mtb)
    mtb.Quit()
    # Drop the only reference so the COM proxy is finalized immediately
    mtb = None
    pythoncom.CoUninitialize()

The problem was that the garbage collector wasn't cleaning up the reference to the underlying IUnknown object (the base type for all COM objects), and without the gc doing its job the process stayed alive. I solved the problem by using the weakref module to immediately wrap the COM object in a weak reference so it could be more easily dereferenced:
from myapi import get_data
import weakref
from win32com.client import gencache
import pythoncom

def process_data(mtb_ref, data):
    try:
        mtb_ref().do_something(data)
    finally:
        mtb_ref().Quit()

def main(mtb_ref):
    data = get_data()
    process_data(mtb_ref, data)

if __name__ == '__main__':
    pythoncom.CoInitialize()
    mtb_ref = weakref.ref(gencache.EnsureDispatch('Mtb.Application'))
    main(mtb_ref)
    pythoncom.CoUninitialize()
I'm not sure I fully understand why this makes a difference, but I believe it's because there's never a direct reference to the object, only a weak reference, so all the functions that use the COM object only do so indirectly, allowing the GC to know that the object can be collected sooner. For whatever reason it still needs to be created at the top level of the module, but this at least makes it possible for me to write more reusable code that exits cleanly.
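For reuse, the same pattern could arguably be factored into a context manager so the setup/teardown isn't repeated; a sketch, untested without Minitab and assuming (per the note above) that it is entered at module top level:

import weakref
import pythoncom
from contextlib import contextmanager
from win32com.client import gencache

@contextmanager
def com_session(prog_id='Mtb.Application'):
    pythoncom.CoInitialize()
    try:
        ref = weakref.ref(gencache.EnsureDispatch(prog_id))
        try:
            yield ref       # callers only ever see the weak reference
        finally:
            ref().Quit()
    finally:
        pythoncom.CoUninitialize()

if __name__ == '__main__':
    with com_session() as mtb_ref:
        main(mtb_ref)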

After pythoncom.CoUninitialize() I still saw the process. For me, the following helped (based on the answer above):

import logging
from comtypes.automation import IDispatch
from ctypes import c_void_p, cast, POINTER, byref

logger = logging.getLogger(__name__)

def release_reference(obj):
    logger.debug("release com object")
    # Recover the raw address of the IDispatch pointer from the
    # PyIDispatch repr, then call Release() on it via comtypes
    oleobj = obj._oleobj_
    addr = int(repr(oleobj).split()[-1][2:-1], 16)
    pointer = POINTER(IDispatch)()
    cast(byref(pointer), POINTER(c_void_p))[0] = addr
    pointer.Release()
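Hypothetical usage of the helper above (names and ordering are my assumption, not from the original comment): quit the app, force the final Release on the underlying IDispatch, then drop the Python proxy.

import pythoncom
from win32com.client import gencache

pythoncom.CoInitialize()
app = gencache.EnsureDispatch('Mtb.Application')
try:
    pass  # do some processing
finally:
    app.Quit()
    release_reference(app)  # force Release() on the underlying IDispatch
    app = None              # drop the last Python reference
    pythoncom.CoUninitialize()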

Related

Callback from Ctypes sometimes fails

I have registered a Python callback with a DLL using the ctypes library. When the callback is triggered, I try to unblock an asyncio future I have set up. Since the callback happens in a separate thread spawned by the DLL, I use the loop.call_soon_threadsafe() function to get back to the event loop that started it all.
Mostly this works fine, but every once in a while the future fails to be unblocked. In the minimal example here this also happens sometimes, but here I see that in those cases the callback doesn't even arrive (or at least the corresponding print doesn't happen).
I have only tried this with Python 3.8.5 so far. Is there some race condition here that I did not notice?
Here's a minimal example:
import asyncio
import ctypes
import os

class testClass:
    loop = None
    future = None
    exampleDll = None

    def finish(self):
        # now in the right C thread and event loop
        print("callback in eventloop")
        self.future.set_result(999)

    def trampoline(self):
        # still in the other C thread
        self.loop.call_soon_threadsafe(self.finish)

    def example_callback(self):
        # in another C thread, so we need to do threadsafety stuff
        print("callback has arrived")
        self.trampoline()
        return

    async def register_and_wait(self):
        self.loop = asyncio.get_event_loop()
        self.future = self.loop.create_future()
        callback_type = ctypes.CFUNCTYPE(None)
        callback_as_cfunc = callback_type(self.example_callback)
        # now register the callback and wait
        self.exampleDll.fnminimalExample(callback_as_cfunc, ctypes.c_int(1))
        await self.future
        print("future has finished")

    def main(self):
        path = os.path.join(os.path.dirname(os.path.abspath(__file__)), "minimalExample.dll")
        ctypes.cdll.LoadLibrary(path)
        # for easy access
        self.exampleDll = ctypes.cdll.minimalExample
        asyncio.run(self.register_and_wait())

if __name__ == "__main__":
    for i in range(0, 100000):
        print(i)
        test = testClass()
        test.main()
You can get the compiled example dll and its source from the repository here to reproduce.
The issue (at least in this minimal example) no longer shows up if I reuse the same event loop instead of spawning a new one for every iteration with asyncio.run.
The problem is thus fixed, but it doesn't feel right.
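For the record, a sketch of that workaround (the DLL loading from main() is elided here; the point is that one loop is created once and reused, instead of asyncio.run() building and tearing down a fresh loop each iteration):

import asyncio

loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
try:
    for i in range(100000):
        print(i)
        test = testClass()
        # DLL loading from main() omitted; run on the shared loop
        # instead of calling asyncio.run() per iteration
        loop.run_until_complete(test.register_and_wait())
finally:
    loop.close()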

Python socketio with multiprocessing

So I have been struggling with a pickle error that is driving me crazy. I have the following masterEngine class:
import eventlet
import socketio
import multiprocessing
from multiprocessing import Queue
from multi import SIOSerever

class masterEngine:
    if __name__ == '__main__':
        serverObj = SIOSerever()
        try:
            receiveData = multiprocessing.Process(target=serverObj.run)
            receiveData.start()
            receiveProcess = multiprocessing.Process(target=serverObj.fetchFromQueue)
            receiveProcess.start()
            receiveData.join()
            receiveProcess.join()
        except Exception as error:
            print(error)
and I have another file called multi that looks like the following:
import multiprocessing
from multiprocessing import Queue
import eventlet
import socketio

class SIOSerever:
    def __init__(self):
        self.cycletimeQueue = Queue()
        self.sio = socketio.Server(cors_allowed_origins='*', logger=False)
        self.app = socketio.WSGIApp(self.sio, static_files={'/': 'index.html'})
        self.ws_server = eventlet.listen(('0.0.0.0', 5000))

        @self.sio.on('production')
        def p_message(sid, message):
            self.cycletimeQueue.put(message)
            print("I logged : " + str(message))

    def run(self):
        eventlet.wsgi.server(self.ws_server, self.app)

    def fetchFromQueue(self):
        while True:
            cycle = self.cycletimeQueue.get()
            print(cycle)
As you can see, I am trying to create two processes for run and fetchFromQueue, which I want to run independently.
My run function starts the python-socketio server, to which I'm sending some data from an HTML web page (this runs perfectly without multiprocessing). I am then trying to push the received data into a Queue so that my other function can retrieve it and work with it.
I have a set of time-consuming operations to carry out on the data received from the socket, which is why I'm pushing it all into a Queue.
On running the masterEngine class I receive the following:
Can't pickle <class 'threading.Thread'>: it's not the same object as threading.Thread
I ended!
[Finished in 0.5s]
Can you please help with what I am doing wrong?
From the multiprocessing programming guidelines:
Explicitly pass resources to child processes
On Unix using the fork start method, a child process can make use of a shared resource created in a parent process using a global resource. However, it is better to pass the object as an argument to the constructor for the child process.
Apart from making the code (potentially) compatible with Windows and the other start methods this also ensures that as long as the child process is still alive the object will not be garbage collected in the parent process. This might be important if some resource is freed when the object is garbage collected in the parent process.
Therefore, I slightly modified your example by removing everything unnecessary, but showing an approach where the shared queue is explicitly passed to all processes that use it:
import multiprocessing

MAX = 5

class SIOSerever:
    def __init__(self, queue):
        self.cycletimeQueue = queue

    def run(self):
        for i in range(MAX):
            self.cycletimeQueue.put(i)

    @staticmethod
    def fetchFromQueue(cycletimeQueue):
        while True:
            cycle = cycletimeQueue.get()
            print(cycle)
            if cycle >= MAX - 1:
                break

def start_server(queue):
    server = SIOSerever(queue)
    server.run()

if __name__ == '__main__':
    try:
        queue = multiprocessing.Queue()
        receiveData = multiprocessing.Process(target=start_server, args=(queue,))
        receiveData.start()
        receiveProcess = multiprocessing.Process(target=SIOSerever.fetchFromQueue, args=(queue,))
        receiveProcess.start()
        receiveData.join()
        receiveProcess.join()
    except Exception as error:
        print(error)
0
1
...
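The key design point: nothing unpicklable (the eventlet listener, the socketio.Server) is handed to multiprocessing.Process. The child process either builds those objects itself or, as in this reduced example, doesn't need them at all, and only the picklable Queue crosses the process boundary.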

Win32com events not raising inside thread?

I am new to both COM and Python, so I'm not very familiar with the exact terminology. Apologies for using inexact terms.
I am trying to connect to a desktop application via a proprietary COM interface using pywin32.
I created a PoC and it runs fine. The COM function call is processed and I get the expected event.
import time
import win32com.client

class MyEvents:
    def __init__(self):
        print("Callback class initialized")

    def OnMyEvent(self, data):
        print('MyEvent raised')

class ComUser:
    comObj = None

    def __init__(self):
        comObj = win32com.client.DispatchWithEvents("ProprietaryInterface.InterfaceClass",
                                                    MyEvents)
        comObj.Register()
        comObj.DoSomething(data)  # 'data' comes from the real PoC; elided here
        time.sleep(120)

userObj = ComUser()
So far so good; I get the event on the screen:
Callback class initialized
MyEvent raised
Next I tried to put it into my application, where I have multiple threads. To explain it in simple terms:
Main creates an object of class X, which initializes an XMLRPC server thread.
The XMLRPC handler simply takes incoming info and puts it into a queue.
The queue is from the multiprocessing lib.
Another thread waits on this queue for an incoming message:
def __startPollingThread(self):
    pythoncom.CoInitialize()
    pollingThread = Thread(target=self.__checkQueue)
    pollingThread.start()
    pythoncom.CoUninitialize()
This is the polling thread method:
def __checkQueue(self):
    try:
        pythoncom.CoInitialize()
        while True:
            currMessage = self.__messageQueue.get()
            self.__processMessage(currMessage)
    except:
        pass  # Log message
    finally:
        pythoncom.CoUninitialize()
The __processMessage call passes through multiple classes (something like a strategy pattern + state pattern) before it hits the class that handles the COM interface.
In the ComUser class, I have a method that registers with the client application's COM interface:
def initSystem(self):
    import pythoncom
    try:
        pythoncom.CoInitialize()
        self.ComConnector = win32com.client.DispatchWithEvents(
            "ProprietaryInterface.InterfaceClass", MyEvents)
        self.ComConnector.Register()
    except:
        pass
    finally:
        pythoncom.CoUninitialize()
Another method handles the specific requests as they arrive and makes the corresponding COM calls:

def handleMessage(self, message):
    # if message == this then
    comObj.DoSomething(data)
Both methods are called from the __processMessage method. All my classes reside in separate .py files, except ComUser and MyEvents, which are in the same module.
I can call the COM interface and see the application reacting to the COM method calls, but I can't see any events being raised. I have tried a whole lot of combinations of CoInitialize/CoUninitialize and "import pythoncom" statements to make sure it is not a problem with the threading. I also tried setting sys.coinit_flags = 0 and checked; it seems to make no difference. I just don't see any events.
Is it a problem that I call DispatchWithEvents in a child thread instead of the main thread (the calls seem to work fine)? Or is it that the main thread (i.e. the main method of the program) dies out? I tried putting a long sleep there too. I even tried a separate thread with a PumpWaitingMessages loop, but it made no difference. I can't think of any solutions.
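For what it's worth, a sketch of pumping messages in the same thread that called DispatchWithEvents (rather than in a separate thread), on the assumption that the object lives in a single-threaded apartment whose events are only delivered while that exact thread pumps; the timeout and queue.Empty handling are my additions, not code from the question:

import queue
import pythoncom

def __checkQueue(self):
    pythoncom.CoInitialize()
    try:
        while True:
            # Let COM deliver pending events to this (STA) thread
            pythoncom.PumpWaitingMessages()
            try:
                currMessage = self.__messageQueue.get(timeout=0.1)
            except queue.Empty:
                continue
            self.__processMessage(currMessage)
    finally:
        pythoncom.CoUninitialize()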

Python multiprocessing module does not work

I am trying to write a spider with the multiprocessing module.
Here is my Python code:
# -*- coding:utf-8 -*-
import multiprocessing
import requests

class SpiderWorker(object):
    def __init__(self, q):
        self._q = q

    def run(self):
        def _crawl_item(url):
            respon = requests.get("http://www.baidu.com")
            if respon.ok:
                print respon.url

        while True:
            rst = self._q.get()
            _crawl_item(rst)

def general_worker():
    q = multiprocessing.Queue()
    CPU_COUNT = multiprocessing.cpu_count()
    worker_processes = [
        multiprocessing.Process(target=SpiderWorker(q).run)
        for i in range(CPU_COUNT)
    ]
    map(lambda process: process.start(), worker_processes)
    return q, worker_processes
Maybe I am using processes the wrong way. Every time I run this code, my process tells me:
<Process(Process-1, stopped[SIGSEGV])>
I hope someone can help.
The major problem here is that you don't have any information on why your processes fail. It could be gevent, but it could just as easily be something else. So learning the actual reason why your processes get terminated is the first step before doing anything else.
What you need is multiprocessing.log_to_stderr():
class SpiderWorker(object):
    # ...

    def run(self):
        logger = multiprocessing.log_to_stderr()
        logger.setLevel(multiprocessing.SUBDEBUG)
        try:
            pass  # Here goes your original run() code
        except Exception:
            logger.exception('whoopsie')
What this code does:
Creates a special logger which will transmit its information to the main process and dump it to stderr (the console by default).
Configures this logger to report everything, including some internal multiprocessing module events (just in case; you probably don't need them).
Wraps your entire code in a catch-all statement so whatever happens there cannot escape your notice.
Runs the .exception() method on the logger, which not only logs the message (it's meaningless anyway, as we don't know what actually happens) but, most importantly, logs the entire error traceback, which is what we actually need.

How to keep function called with python ctypes running

I'm trying to create a minimal Python script that calls the WinSparkle C library using ctypes. The code only works if I run it line by line: win_sparkle_check_update_without_ui() pops up a window to download an update, as expected. But when running the script normally, this function just gets called and skipped instantly; the popup doesn't stay open unless I add the hacky time.sleep option.
What is the proper pythonic way to run this function and leave it open until the user closes the popup?
from ctypes import CDLL
import logging, time

class UpdateAgent(object):
    def __init__(self, appcast_url):
        self.appcast_url = appcast_url

    def check_for_update(self):
        winsparkle = CDLL("WinSparkle.dll")
        url = self.appcast_url
        winsparkle.win_sparkle_set_appcast_url(url.encode('ascii', 'ignore'))
        winsparkle.win_sparkle_set_app_details(
            unicode('Company'),
            unicode('myapp'),
            unicode(9))
        winsparkle.win_sparkle_init()
        winsparkle.win_sparkle_check_update_without_ui()
        #time.sleep(10)

    def run(self):
        try:
            self.check_for_update()
        except Exception as e:
            logging.info('%s' % (e))
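One possible approach, sketched under the assumption that your WinSparkle build exports the optional status callbacks declared in winsparkle.h (win_sparkle_set_did_not_find_update_callback, win_sparkle_set_update_cancelled_callback; verify the names against your DLL before relying on them): register callbacks that set an event, then block on the event instead of sleeping for a fixed time.

import threading
from ctypes import CDLL, CFUNCTYPE

done = threading.Event()
CALLBACK = CFUNCTYPE(None)

@CALLBACK
def on_finished():
    done.set()  # runs on WinSparkle's UI thread

winsparkle = CDLL("WinSparkle.dll")
# Assumed optional callbacks from winsparkle.h; check your build exports them
winsparkle.win_sparkle_set_did_not_find_update_callback(on_finished)
winsparkle.win_sparkle_set_update_cancelled_callback(on_finished)
winsparkle.win_sparkle_init()
winsparkle.win_sparkle_check_update_without_ui()
done.wait()  # keep the main thread alive until WinSparkle reports back
winsparkle.win_sparkle_cleanup()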
