I'm developing a PyQt application, so there's a good possibility segfaults could happen.
I want to use the faulthandler module to catch these. Now instead of writing to stderr I want to do the following:
Make faulthandler write into a file with a known location.
When starting again (normally or after a crash), check if that file exists and has a crash log in it.
If it does, open a dialog which asks the user to open a bug report with the crash log, then delete it.
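For a single instance, a minimal sketch of that setup might look like this (crash.log is an assumed fixed path; the dialog itself is only hinted at):

import faulthandler
import os

CRASH_LOG = "crash.log"  # assumed fixed location for this sketch

# On startup, check whether a previous run left a non-empty crash log behind.
if os.path.exists(CRASH_LOG) and os.path.getsize(CRASH_LOG) > 0:
    with open(CRASH_LOG) as f:
        report = f.read()
    # ... show the "please open a bug report" dialog with `report` here ...
    os.remove(CRASH_LOG)

# faulthandler wants an already-open file object, kept open for the whole run.
crash_file = open(CRASH_LOG, "w")
faulthandler.enable(file=crash_file, all_threads=True)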
Now this works fine, except when I run multiple instances of my application.
Then I thought I could write into a randomly named file in a known location (say, crash-XXXXX.log), and then when starting check for crash-*.log, and if one is non-empty do the same thing as above.
However when doing it that way, at least on Linux I'll be able to delete the file while another instance might still have it open, and then if that instance crashes the log gets lost.
I also can't just open() the file at the right time as faulthandler wants an open file.
I'm searching for a solution which:
Works with multiple instances
Catches crashes of all these instances correctly
Only opens the crash dialog one time, when starting a new instance after one crashed.
Doesn't leave any stale files behind after all instances are closed.
Works at least on Linux and Windows
I've thought about some different approaches, but all of them have one of these drawbacks.
Related
I'm working on a new project in Python:
it's a sort of file locker that password-protects whichever file I decide to lock; then, when the right password is entered into my script, the file gets opened.
Now I'm stuck on this problem: how can I make my script run when I try to open a normal file that was locked earlier, without changing that file (like putting a piece of code at the start of the original file, or something else)?
Should I try to make a "listener" that runs at every Windows startup and checks whether the registered files are being opened, so that it blocks access to them (if that's even possible; I didn't find anything like that) until another script finishes?
python: 3.4
OS: win7 / win10
I want to kill a running process with python and close all the files it opened:
import io
import psutil

for proc in psutil.process_iter():
    if proc.name() == 'myprocess.exe':
        opened = proc.open_files()
        proc.kill()
        for i in opened:
            print(i.path)
            io.FileIO(i.path).close()
            print(io.FileIO(i.path).closed)
Somehow io.IOBase(i.path).close() does not work.
Explanation:
It's like I would like to kill Microsoft Word with python, but it leaves some files open. And I would like to close those files as well.
Microsoft Word is just an example. It is a self-written Python program. The opened files are:
fonts (.ttf)
clr.pyd
and .dll-s
How should I close these files?
You don't need to close any files that were opened by the process. That is done automatically:
Terminating a process has the following results:
Any remaining threads in the process are marked for termination.
Any resources allocated by the process are freed.
All kernel objects are closed.
The process code is removed from memory.
The process exit code is set.
The process object is signaled.
The important bit is "All kernel objects are closed." For every open file handle, there is an associated kernel object--that's actually what a handle is, a mapping from a number to a kernel object. When the process exits, the kernel will walk behind and close all associated file handles, sockets, etc.
Additionally, your original approach has a few problems. First, the list of open files is only a snapshot of which ones were open at that time. In between asking for the list of open files and killing the process, the process could have opened many more, or closed and removed others as well. Second, the Python 3 docs say that the constructor for IOBase isn't public, so using it in this way is wrong:
class io.IOBase
The abstract base class for all I/O classes, acting on streams of bytes. There is no public constructor.
Generally, you'd use something like io.open() which takes the path. This leads to the third issue. All you have to work with is the path. In order to close a file, you really need the handle. Those handles are process-specific. This means in one process, 0x5555AAAA may correspond to "file1.txt", but in another process, it might correspond to "file2.txt" or maybe not even a file at all (it could be a socket or something else). So even if you have the kernel handle, you don't really have a way of saying "close this handle in the context of this other process." That violates some security goals of processes. Also, it means that what you're actually doing here is creating your own handle only to turn around and close it (or in this case, it possibly does nothing at all since the object wasn't created correctly).
So, if you're having a problem with files still being held, perhaps the problem is that the process hadn't actually exited yet before you tried whatever work you needed to get done. You may need to wait for the process to exit before attempting to move on if there are files the process was using that you want to use again. It looks like you can use psutil.wait_procs() to do that.
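A hedged sketch of that approach, building on the snippet from the question (the process name and the five-second timeout are assumptions):

import psutil

# Kill all matching processes, then wait until they have actually exited.
procs = [p for p in psutil.process_iter() if p.name() == 'myprocess.exe']
for p in procs:
    p.kill()

gone, alive = psutil.wait_procs(procs, timeout=5)
for p in alive:
    print(p.pid, "has still not exited")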
Also, on Windows I find that anti-virus tools often get in the way. They hold open files accessed by applications making it look like a process is still holding onto them when it's actually the virus scanner doing its thing. I remember one instance of having to deal with this in Subversion. The code still exists today. So you might need to simply wait a bit and try again.
Update
Microsoft Word is just an example. It is a self-written Python program. The opened files are:
fonts (.ttf)
clr.pyd
and .dll-s
How should I close these files?
The answer is that you shouldn't need to. Just make sure the process has actually exited. It's not an instantaneous operation, so there's some time between killing it and it actually exiting that it still retains the file handles.
Given that you've actually written the process being killed, I think a far better approach would be to introduce a way to launch that process, have it do its work, then exit gracefully. Then use subprocess.run() to run the script and wait for it to exit.
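A minimal sketch of that idea (worker.py is a hypothetical name for your script; run() blocks until it exits):

import subprocess
import sys

# Launch the script with the current interpreter and wait for it to finish gracefully.
result = subprocess.run([sys.executable, "worker.py"])
print("worker exited with code", result.returncode)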
It's like I would like to kill Microsoft Word with python, but it leaves some files open. And I would like to close those files as well.
There is some misunderstanding here. When you terminate Word with kill, all files are closed from a system point of view, but they will be closed dirty. When Word terminates normally, it flushes its internal buffers, removes any temporary files and marks the files as clean. When it crashes or is abruptly terminated, none of that cleanup occurs. Some modifications may not be written to disk, and temp files are still there, so on the next execution, Word will warn you that the files were not closed properly and have to be repaired.
So you do not want to kill Microsoft Word, but to close it, meaning posting a WM_QUIT message to its main window. Unfortunately, there is no clean and neat support for that in Python. There is an example of closing Excel via the win32com module here. The conversion for Word should be (beware, untested):
import win32com.client

wd = win32com.client.Dispatch("Word.Application")
wd.Quit()  # quit Word, as if the user hit the close button / clicked File -> Exit
Take a look at the with statement syntax. There's a brief overview here.
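A minimal sketch of the idea, assuming an ordinary text file:

# The with statement closes the file automatically when the block ends,
# even if an exception is raised inside it.
with open("example.txt", "w") as f:
    f.write("hello")
# f is guaranteed to be closed at this point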
I have a program that creates a bunch of movie files. It runs as a cron job, and every time it runs, the movies from the previous iteration are moved to a 'previous' folder so that there is always a previous version to look at.
These movie files are accessed across a network by various users and that's where I'm running into a problem.
When the script runs and tries to move the files it throws a resource busy error because the files are open by various users. Is there a way in Python to force close these files before I attempt to move them?
Further clarification:
JMax is correct when he mentions that it is a server-level problem. I can access our Windows server through Administrative Tools > Computer Management > Shared Folders > Open Files and manually close the files there, but I am wondering whether there is a Python equivalent which will achieve the same result.
something like this:
import shutil

try:
    shutil.move(src, dst)
except OSError:
    # Close the src file on all machines that are currently accessing it,
    # then try again.
    pass
This question has nothing to do with Python, and everything to do with the particular operating system and file system you're using. Could you please provide these details?
At least on Windows, you can use Sysinternals Handle to force a particular handle to a file to be closed. Especially as this file is opened by another user over a network, this operation is extremely destabilising and will probably render the network connection subsequently useless. You're looking for the "-c" command-line argument, where the documentation reads:
Closes the specified handle (interpreted as a hexadecimal number). You
must specify the process by its PID.
WARNING: Closing handles can cause application or system instability.
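For completeness, a hedged sketch of driving handle.exe from Python (the handle value 1A4 and PID 1234 are placeholders; -y skips the confirmation prompt):

import subprocess

# Close handle 0x1A4 in process 1234; destabilising, as the warning above says.
subprocess.run(["handle.exe", "-c", "1A4", "-p", "1234", "-y"])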
And if you're force-closing a file mounted over Samba in Linux, speaking from experience this is an excruciating experience in futility. However, others have tried with mixed success; see Force a Samba process to close a file.
As far as I know, you have to end the processes which access the file, at least on Windows.
The .close() method doesn't work on your file object?
See Dive Into Python for more information on file objects.
[EDIT] I've re-read your question. Your problem is that users open the same file over the network and you want them to close it? But can you access their OS?
[EDIT2] The problem is more on a server level to disconnect the user that access the file. See this example for Windows servers.
While my Python script is writing a large number of log lines to a text file using the built-in logging library, I want my Delphi-powered Windows program to efficiently read all newly added logs (lines).
When the Python script is logging to the file, my Windows program will keep a read-only file handle to that log file; I'll use the Windows API to get notified when the log file is changed. Once the file is changed, it'll read the newly appended lines.
I'm new to Python; do you see any possible problems with this approach? Does the Python logging lib lock the entire log file? Thanks!
It depends on the logging handler you use, of course, but as you can see from the source code, logging.FileHandler does not currently create any file locks. By default, it opens files in 'a' (append) mode, so as long as your Windows calls can handle that, you should be fine.
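For reference, a minimal sketch of the setup described above (app.log is a hypothetical path):

import logging

# FileHandler opens the file in append mode ('a' is the default) and takes no lock.
logging.basicConfig(
    filename="app.log",
    filemode="a",
    format="%(asctime)s %(levelname)s %(message)s",
    level=logging.INFO,
)
logging.info("a line the Delphi program can pick up")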
As ʇsәɹoɈ commented, the standard FileHandler logger does not lock the file, so it should work. However, if for some reason you cannot keep your lock on the file, then I'd recommend having your other app open the file periodically, record the position it has read to, and then seek back to that point later. I know the Linux DenyHosts program uses this approach when dealing with log files that it has to monitor for a long period of time. In those situations, simply holding a lock isn't feasible, since directories may move, the file may get rotated out, etc. Though it does complicate things in that you then have to store the filename + read position in persistent state somewhere.
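A rough sketch of that "record the position" approach (the path and how you persist pos are assumptions):

def read_new_lines(path, last_pos):
    # Read everything appended since last_pos and return the new offset.
    with open(path, "r") as f:
        f.seek(last_pos)
        lines = f.readlines()
        return lines, f.tell()

# Persist `pos` (together with the filename) between runs.
pos = 0
new_lines, pos = read_new_lines("app.log", pos)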
In Linux, how can we know if a file has completed copying before reading it? In Windows, an OSError is raised.
You can use the inotify mechanism (via pyinotify) to catch events like CREATE, WRITE and CLOSE, and based on them you can assume whether the copy has finished or not.
However, since you provided no details on what you are trying to do, I can't tell whether inotify would be suitable for you (btw, inotify is Linux-specific, so you can't use it on Windows or other platforms).
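A small sketch using pyinotify, assuming files are copied into /incoming and that a close-after-write event means the copy is done:

import pyinotify

class Handler(pyinotify.ProcessEvent):
    def process_IN_CLOSE_WRITE(self, event):
        # The writer closed the file, so the copy has most likely finished.
        print("finished:", event.pathname)

wm = pyinotify.WatchManager()
wm.add_watch("/incoming", pyinotify.IN_CLOSE_WRITE)
pyinotify.Notifier(wm, Handler()).loop()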
In Linux, you can open a file while another process is writing to it without Python throwing an OSError, so in general, you cannot know for sure whether the other side has finished writing into that file. You can try some hacks, though:
You can check the file size regularly to see whether it increased since the last check. If it hasn't increased in, say, five seconds, you might be safe to assume that the copy has finished. I'm saying might since this is not true in all circumstances. If the other process that is writing the file is blocked for whatever reason, it might temporarily stop writing to the file and resume it later. So this is not 100% fool-proof, but might work for local file copies if the system is never under a heavy load that would stall the writing process.
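A sketch of that size-polling heuristic (the five-second quiet period comes from the paragraph above, not from any guarantee):

import os
import time

def wait_until_stable(path, quiet_seconds=5, poll_interval=1):
    # Treat the copy as finished once the size has not changed for quiet_seconds.
    last_size = -1
    stable = 0
    while stable < quiet_seconds:
        size = os.path.getsize(path)
        if size == last_size:
            stable += poll_interval
        else:
            stable = 0
            last_size = size
        time.sleep(poll_interval)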
You can check the output of fuser (this is a shell command), which will list the process IDs of all the processes that are holding a file handle to a given file name. If this list includes any process other than yours, you can assume that the copying process hasn't finished yet. However, you will have to make sure that fuser is installed on the target system in order to make it work.
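A sketch of wrapping fuser (assuming it is installed; an exit code of 0 means at least one process still holds the file):

import subprocess

def file_in_use(path):
    result = subprocess.run(
        ["fuser", path],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0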