I'm using COM integration to drive MS Excel from Python 2.7. I noticed a peculiar thing: when I run the following bit of code:
import win32com.client
excelApp = win32com.client.dynamic.Dispatch('Excel.Application')
an EXCEL.EXE process appears in the process list (which I view using the Windows Task Manager or subprocess.Popen('tasklist')), as expected. I then do all the work I need to with no problem. However, when I close Excel:
excelApp.Application.Quit()
The process persists, even if I close the Python interpreter that started it (this kind of makes sense, as Excel runs in a different process, but I checked just to be sure). The only way I've found to terminate this process is either by hand, using the Task Manager, or by calling:
subprocess.Popen("taskkill /F /im EXCEL.EXE",shell=True)
The forceful /F flag is necessary; otherwise the process doesn't terminate.
This isn't really a problem (I hope), but I wanted to ask: could this cause issues when I first edit the documents "normally", then by calling Excel from Python, and then "normally" again, potentially many (a couple dozen) times in a row? What I'm worried about is creating conflicting versions of the documents, etc. Or should I just terminate the EXCEL.EXE process each time, to be safe?
Also, I noticed that subprocess.Popen("taskkill") doesn't raise any exceptions that I can catch and analyse (or am I doing something wrong here?). I'm particularly interested in distinguishing between a kill attempt on a non-existent process and a failed attempt to terminate the process.
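A minimal sketch of how taskkill's outcome could be inspected from Python 2.7 (assuming taskkill reports failure through a non-zero exit code and a message on stderr; Popen itself only raises if the command cannot be started at all):
import subprocess

# Capture taskkill's exit status and stderr instead of expecting an
# exception from Popen.
p = subprocess.Popen(
    ["taskkill", "/F", "/IM", "EXCEL.EXE"],
    stdout=subprocess.PIPE, stderr=subprocess.PIPE
)
out, err = p.communicate()
if p.returncode == 0:
    print("EXCEL.EXE terminated")
else:
    # "No such process" and "failed to terminate" produce different
    # messages here, so the two cases can be told apart.
    print("taskkill failed (code %d): %s" % (p.returncode, err.strip()))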
Try closing any open workbooks, telling the app to quit, and deleting any references to the app. I usually wrap my COM objects in a class. This is what my quit method looks like:
def quit(self):
    self.xlBook.Close(SaveChanges=0)
    self.xlApp.Quit()
    del self.xlApp
Are you calling Dispatch from the main thread? If not, be sure to call
pythoncom.CoInitialize()
before Dispatch, and
pythoncom.CoUninitialize()
after Quit.
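A minimal sketch combining both suggestions, for the case where the COM work happens off the main thread (excel_task is an illustrative name, not part of any API):
import threading
import pythoncom
import win32com.client

def excel_task():
    # COM must be initialized in every thread that uses COM objects.
    pythoncom.CoInitialize()
    try:
        app = win32com.client.dynamic.Dispatch('Excel.Application')
        try:
            pass  # ... do the Excel work here ...
        finally:
            app.Quit()
            del app
    finally:
        pythoncom.CoUninitialize()

t = threading.Thread(target=excel_task)
t.start()
t.join()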
I must ensure that the end_event() function is executed at the end of the program. I tried to implement this with Python's atexit. However, when the .py file was converted to an exe with PyInstaller and the program was closed with a click, it did not work. I would appreciate it if you could tell me a solution that always works. Have a good day.
import atexit
import signal

import pyupbit

def end_event(*args):
    # *args absorbs the (signum, frame) pair that signal passes;
    # atexit calls the handler with no arguments.
    for key in buy_list.keys():
        order = upbit.get_order(key)
        if "state" in order and order['state'] == 'wait':
            upbit.cancel_order(key)
    exit(1)

atexit.register(end_event)
signal.signal(signal.SIGINT, end_event)
There is no way to guarantee that code will run when the process exits, because there are ways to exit which do not permit any code to run.
User may pull the power cable. No code can run because there is no power.
External events (power outage, lightning strike), same as above.
Kill -9 or equivalent, e.g. Windows' TerminateProcess
Patterns for emergency cleanup.
The best pattern is: Never be in a state which requires emergency cleanup.
There is no guarantee that your cleanup will run because you could have a kill -9 or a power outage.
Therefore you need to be able to recover in that scenario.
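As an illustration, a hypothetical recovery-at-startup sketch reusing the names from the question (buy_list, upbit): persist order IDs the moment orders are created, and reconcile on the next start, rather than relying on an exit handler:
import json
import os

STATE_FILE = "pending_orders.json"  # hypothetical state file

def save_pending(buy_list):
    # Persist order UUIDs as soon as they exist, not at exit.
    with open(STATE_FILE, "w") as f:
        json.dump(list(buy_list.keys()), f)

def recover_on_startup(upbit):
    # On every start, cancel any order still waiting from a previous run
    # that died without running its exit handlers.
    if not os.path.exists(STATE_FILE):
        return
    with open(STATE_FILE) as f:
        for uuid in json.load(f):
            order = upbit.get_order(uuid)
            if order.get("state") == "wait":
                upbit.cancel_order(uuid)
    os.remove(STATE_FILE)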
See also: What does the POSIX standard say about thread stacks in atexit() handlers? What's the OS practice?
OS: Windows 10
Python: 3.5.2
I am trying to open calc.exe, do some actions and then close it.
Here is my code sample:
import subprocess, os, time
p = subprocess.Popen('calc.exe')
#Some actions
time.sleep(2)
p.kill()
So this is not working for calc.exe: it just opens the calculator but does not close it. The same code works fine for notepad.exe.
I am guessing there is a bug in the subprocess lib's kill method: the notepad.exe process shows up in Task Manager as notepad.exe, but the calc.exe process shows up as Calculator.exe, so perhaps it tries to kill by name and does not find it.
There's no bug in subprocess's kill. If you're really worried about that, just check the source, which is linked from the docs. The kill method just calls send_signal, which just calls os.kill unless the process is already done, and you can see the Windows implementation of that function. In short: subprocess.Popen.kill doesn't care what name the process has in the kernel's process table (or the Task Manager); it remembers the PID (process ID) of the process it started, and kills it that way.
The most likely problem is that, like many Windows apps, calc.exe has some special "single instance" code: when you launch it, if there's already a copy of calc.exe running in your session, it just tells that copy to come to the foreground (and open a window, if it doesn't have one), and then exits. So, by the time you try to kill it 2 seconds later, the process has already exited.
And if the actual running process is calculator.exe, that means calc.exe is just a launcher for the real program, so it always tells calculator.exe to come to the foreground, launching it if necessary, and then exits.
So, how can you kill the new calculator you started? Well, you can't, because you didn't start a new one. You can kill all calc.exe and/or calculator.exe processes (the easiest way to do this is with a third-party library like psutil—see the examples on filtering and then kill the process once you've found it), but that will kill any existing calculator process you had open before running your program, not just the new one you started. Since calc.exe makes it impossible to tell if you've started a new process or not, there's really no way around that.
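A sketch of the psutil approach just mentioned (note that it kills every matching process, including any calculator that was already open):
import psutil

for proc in psutil.process_iter(['name']):
    if proc.info['name'] in ('calc.exe', 'Calculator.exe'):
        try:
            proc.kill()
        except psutil.NoSuchProcess:
            pass  # it exited between the scan and the kill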
This is one way to kill it, but it will close every open calculator.
It spawns a command prompt with no window and issues the command to kill the Calculator.exe process.
import subprocess, os, time
p = subprocess.Popen('calc.exe')
print(p)
#Some actions
time.sleep(2)
CREATE_NO_WINDOW = 0x08000000
subprocess.call('taskkill /F /IM Calculator.exe', creationflags=CREATE_NO_WINDOW)
After reading A LOT on the subject, I still couldn't find an actual solution to my problem (there might not be one).
My problem is as follows:
In my project I have multiple drivers working with various hardware (IO managers, programmable loads, power supplies and more).
Initializing the connection to this hardware is costly in time, and I can't open and then close the connection for every communication iteration between us.
Meaning I can't do this (assuming programmable_load implements __enter__/__exit__):
start of code...
with programmable_load(args) as load_instance:
    load_instance.do_something()
rest of code...
So I went for a different solution :
class programmable_load():
    def __init__(self):
        self.handler = handler_creator()

    def close_connection(self):
        self.handler.close_connection()
        self.handler = None

    def __del__(self):
        if self.handler is not None:
            self.close_connection()
For obvious reasons I don't 'trust' the destructor to actually get called, so I explicitly call close_connection() when I want to end my program (for all drivers).
The problem happens when I abruptly terminate the process, for example when I run via debug mode and quit debugging.
In these cases the process terminates without running through any destructors.
I understand that the OS will reclaim all of the process's memory at that point, but is there any way to clean up in an organized manner?
And if not, is there a way to make the quit-debugging action pass through a certain set of functions? Does the Python process know it got a quit-debugging event, or does it treat it as a normal termination?
Operating system: Windows
According to this documentation:
If a process is terminated by TerminateProcess, all threads of the
process are terminated immediately with no chance to run additional
code.
(Emphasis mine.) This implies that there is nothing you can do in this case.
As detailed here, signals don't work very well on MS Windows.
As was mentioned in a comment, you could use atexit to do the cleanup. But that only works if the process is asked to close (e.g. a QUIT signal on Linux) and not if it is just killed (as is likely the case when stopping a debugging session). Similarly, if you force your computer to turn off (e.g. long-press the power button or remove power), it won't be called either. There is no 'solution' to that, for obvious reasons: your program can't expect to be called when the power suddenly goes off or when it is forcefully killed. The point of forcefully killing is to definitely kill the process now; if it first called your clean-up code, that could delay it, which defeats the purpose. That is why there are signals, such as to ask your process to stop. This is not Python specific; the same concept applies across operating systems.
Bonus (design suggestion, not a solution): I would argue that you can still make use of the context manager (using with). Your problem is not unique; database connections are usually kept alive for longer as well. It is a question of scope. Move the context further up, to the application level. Then it is clear what the boundary is and you don't need any magic (you are probably also aware of contextlib.contextmanager to make that a breeze).
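A sketch of that application-level scope, wrapping the programmable_load class from the question with contextlib.contextmanager (the function names are illustrative):
from contextlib import contextmanager

@contextmanager
def open_programmable_load(*args):
    # Open the costly connection once; guarantee an orderly close.
    instance = programmable_load(*args)
    try:
        yield instance
    finally:
        instance.close_connection()

def main(load):
    load.do_something()
    # ... the rest of the program reuses the same connection ...

if __name__ == "__main__":
    with open_programmable_load() as load:
        main(load)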
I haven't tested this properly, as I don't have WingIDE installed over here, so I can't guarantee this will work, but what about using SetConsoleCtrlHandler? For instance, try something like this:
import sys
import win32api

if __name__ == "__main__":
    def callback(sig, func=None):
        print("Exit handler called!")

    try:
        win32api.SetConsoleCtrlHandler(callback, True)
    except Exception as e:
        print("Captured exception", e)
        sys.exit(1)

    print("Press enter to quit")
    input()
    print("Bye!")
It'll be able to handle CTRL+C and CTRL+BREAK signals.
python: 3.4
OS: win7 / win10
I want to kill a running process with Python and close all the files it opened:
import io
import psutil

for proc in psutil.process_iter():
    if proc.name() == 'myprocess.exe':
        opened = proc.open_files()
        proc.kill()
        for i in opened:
            print(i.path)
            io.FileIO(i.path).close()
            print(io.FileIO(i.path).closed)
Somehow io.IOBase(i.path).close() does not work.
Explanation:
It's as if I were killing Microsoft Word with Python, but it left some files open, and I would like to close those files as well.
Microsoft Word is just an example; it is a self-written Python program. The open files are:
fonts (.ttf)
clr.pyd
and .dll-s
How should I close these files?
You don't need to close any files that were opened by the process. That is done automatically:
Terminating a process has the following results:
Any remaining threads in the process are marked for termination.
Any resources allocated by the process are freed.
All kernel objects are closed.
The process code is removed from memory.
The process exit code is set.
The process object is signaled.
The important bit is "All kernel objects are closed." For every open file handle, there is an associated kernel object--that's actually what a handle is, a mapping from a number to a kernel object. When the process exits, the kernel will walk behind and close all associated file handles, sockets, etc.
Additionally, your original approach has a few problems. First, the list of open files is only a snapshot of which ones were open at that time. Between asking for the list of open files and killing the process, the process could have opened many more, or closed and removed many as well. Second, the Python 3 docs say that the constructor for IOBase isn't public, so using it this way is wrong:
class io.IOBase
The abstract base class for all I/O classes, acting on streams of bytes. There is no public constructor.
Generally, you'd use something like io.open(), which takes the path. This leads to the third issue: all you have to work with is the path, but in order to close a file, you really need the handle. Handles are process-specific: in one process, 0x5555AAAA may correspond to "file1.txt", but in another process it might correspond to "file2.txt", or maybe not to a file at all (it could be a socket or something else). So even if you had the kernel handle, there's no supported way of saying "close this handle in the context of this other process"; that would violate some of the security goals of processes. It also means that what you're actually doing here is creating your own handle only to turn around and close it (or, in this case, possibly doing nothing at all, since the object wasn't created correctly).
So, if you're having a problem with files still being held, perhaps the problem is that the process hadn't actually died yet before you tried whatever work you needed to get done. You may need to wait for the process to exit before moving on, if there are files the process was using that you want to use again. It looks like you can use psutil.wait_procs() to do that.
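For example, a sketch that finds the processes by name as in the question, kills them, and then blocks until they have actually exited:
import psutil

procs = [p for p in psutil.process_iter(['name'])
         if p.info['name'] == 'myprocess.exe']
for p in procs:
    p.kill()

# wait_procs blocks until the processes are gone (or the timeout passes).
gone, alive = psutil.wait_procs(procs, timeout=5)
for p in alive:
    print("still running:", p.pid)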
Also, on Windows I find that anti-virus tools often get in the way. They hold open files accessed by applications making it look like a process is still holding onto them when it's actually the virus scanner doing its thing. I remember one instance of having to deal with this in Subversion. The code still exists today. So you might need to simply wait a bit and try again.
Update
Microsoft Word is just an example; it is a self-written Python program. The open files are:
fonts (.ttf)
clr.pyd
and .dll-s
How should I close these files?
The answer is that you shouldn't need to. Just make sure the process has actually exited. It's not an instantaneous operation, so there's some time between killing it and it actually exiting that it still retains the file handles.
Given that you've actually written the process being killed, I think a far better approach would be to introduce a way to launch that process, have it do its work, then exit gracefully. Then use subprocess.run() to run the script and wait for it to exit.
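For example (worker_script.py is a hypothetical stand-in for your program; subprocess.run does not return until the child has exited):
import subprocess

result = subprocess.run(['python', 'worker_script.py'])
# By this point the child is gone and its file handles are released.
print('worker exited with code', result.returncode)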
It's like I would like to kill Microsoft Word with python, but it leaves some files open. And I would like to close those files as well.
There is some misunderstanding here. When you terminate Word with kill, all files are closed from the system's point of view, but they are closed dirty. When Word terminates normally, it flushes its internal buffers, removes any temporary files, and marks the files as clean. When it crashes or is abruptly terminated, none of that cleaning occurs: some modifications may not be written to disk, and the temp files are still there, so on the next run Word will warn you that the files were not closed in an orderly way and have to be repaired.
So you do not want to kill Microsoft Word; you want to close it, meaning posting a WM_QUIT message to its main window. Unfortunately, there is no clean and neat support in Python for that. There is an example of closing Excel with the win32com module here. The conversion for Word should be (beware: untested):
import win32com.client

wd = win32com.client.Dispatch("Word.Application")
wd.Quit()  # quit Word, as if the user hit the close button / clicked File -> Exit
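If the goal is to close a Word instance that is already running (rather than whatever instance Dispatch hands back), win32com also provides GetActiveObject; a sketch, equally untested:
import win32com.client

# Attach to the running Word instance; this raises com_error if none is running.
wd = win32com.client.GetActiveObject("Word.Application")
wd.Quit()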
Take a look at the with statement syntax. There's a brief overview here.
Can my Python script spawn a process that will run indefinitely?
I'm not too familiar with Python, nor with spawning daemons, so I came up with this:
si = subprocess.STARTUPINFO()
si.dwFlags = subprocess.CREATE_NEW_PROCESS_GROUP | subprocess.CREATE_NEW_CONSOLE
subprocess.Popen(executable, close_fds = True, startupinfo = si)
The process continues to run past python.exe, but is closed as soon as I close the cmd window.
Using the answer Janne Karila pointed out, this is how you can run a process that doesn't die when its parent dies; there is no need to use the win32process module.
DETACHED_PROCESS = 8
subprocess.Popen(executable, creationflags=DETACHED_PROCESS, close_fds=True)
DETACHED_PROCESS is a Process Creation Flag that is passed to the underlying CreateProcess function.
This question was asked 3 years ago, and though the fundamental details of the answer haven't changed, given its prevalence in "Windows Python daemon" searches, I thought it might be helpful to add some discussion for the benefit of future Google arrivees.
There are really two parts to the question:
Can a Python script spawn an independent process that will run indefinitely?
Can a Python script act like a Unix daemon on a Windows system?
The answer to the first is an unambiguous yes; as already pointed out; using subprocess.Popen with the creationflags=subprocess.CREATE_NEW_PROCESS_GROUP keyword will suffice:
import subprocess
independent_process = subprocess.Popen(
    'python /path/to/file.py',
    creationflags=subprocess.CREATE_NEW_PROCESS_GROUP
)
Note that, at least in my experience, CREATE_NEW_CONSOLE is not necessary here.
That being said, the behavior of this strategy isn't quite the same as what you'd expect from a Unix daemon. What constitutes a well-behaved Unix daemon is better explained elsewhere, but to summarize (a rough Windows approximation follows the list):
Close open file descriptors (typically all of them, but some applications may need to protect some descriptors from closure)
Change the working directory for the process to a suitable location to prevent "Directory Busy" errors
Change the file access creation mask (os.umask in the Python world)
Move the application into the background and make it dissociate itself from the initiating process
Completely divorce from the terminal, including redirecting STDIN, STDOUT, and STDERR to different streams (often DEVNULL), and prevent reacquisition of a controlling terminal
Handle signals, in particular, SIGTERM.
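A rough sketch approximating a few of those items on Windows (assuming Python 3.7+, where subprocess exposes DETACHED_PROCESS; my_daemon.py is a hypothetical script):
import subprocess

flags = subprocess.DETACHED_PROCESS | subprocess.CREATE_NEW_PROCESS_GROUP
subprocess.Popen(
    ['python', 'my_daemon.py'],
    creationflags=flags,
    stdin=subprocess.DEVNULL,   # divorce the standard streams
    stdout=subprocess.DEVNULL,
    stderr=subprocess.DEVNULL,
    close_fds=True,             # don't inherit open handles
    cwd='C:\\',                 # move to a neutral working directory
)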
The reality of the situation is that Windows, as an operating system, really doesn't support the notion of a daemon: applications that start from a terminal (or in any other interactive context, including launching from Explorer, etc) will continue to run with a visible window, unless the controlling application (in this example, Python) has included a windowless GUI. Furthermore, Windows signal handling is woefully inadequate, and attempts to send signals to an independent Python process (as opposed to a subprocess, which would not survive terminal closure) will almost always result in the immediate exit of that Python process without any cleanup (no finally:, no atexit, no __del__, etc).
Rolling your application into a Windows service, though a viable alternative in many cases, also doesn't quite fit. The same is true of using pythonw.exe (a windowless version of Python that ships with all recent Windows Python binaries). In particular, they fail to improve the situation for signal handling, and they cannot easily launch an application from a terminal and interact with it during startup (for example, to deliver dynamic startup arguments to your script, say, perhaps, a password, file path, etc), before "daemonizing". Additionally, Windows services require installation, which -- though perfectly possible to do quickly at runtime when you first call up your "daemon" -- modifies the user's system (registry, etc), which would be highly unexpected if you're coming from a Unix world.
In light of that, I would argue that launching a pythonw.exe subprocess using subprocess.CREATE_NEW_PROCESS_GROUP is probably the closest Windows equivalent for a Python process to emulate a traditional Unix daemon. However, that still leaves you with the added challenge of signal handling and startup communications (not to mention making your code platform-dependent, which is always frustrating).
That all being said, for anyone encountering this problem in the future, I've rolled a library called daemoniker that wraps both proper Unix daemonization and the above strategy. It also implements signal handling (for both Unix and Windows systems), and allows you to pass objects to the "daemon" process using pickle. Best of all, it has a cross-platform API:
from daemoniker import Daemonizer

with Daemonizer() as (is_setup, daemonizer):
    if is_setup:
        # This code is run before daemonization.
        do_things_here()

    # We need to explicitly pass resources to the daemon; other variables
    # may not be correct
    is_parent, my_arg1, my_arg2 = daemonizer(
        path_to_pid_file,
        my_arg1,
        my_arg2
    )

    if is_parent:
        # Run code in the parent after daemonization
        parent_only_code()

# We are now daemonized, and the parent just exited.
code_continues_here()
For that purpose you could daemonize your Python process, or, since you are in a Windows environment, run it as a Windows service.
You know I hate posting only web-links, but for more information relevant to your requirement:
A simple way to implement a Windows service: read all the comments; they will resolve any doubts.
If you really want to learn more, first read this: what is a daemon process, or creating-a-daemon-the-python-way.
Update: subprocess is not the right way to achieve this kind of thing.