Properly handling IOError thrown by logging.config.fileConfig?

This may be an open ended or awkward question, but I find myself running into more and more exception handling concerns where I do not know the "best" approach in handling them.
Python's logging module raises an IOError if you try to configure a FileHandler with a file that cannot be created. The module does not handle this exception; it simply raises it. Usually the cause is that the path to the file does not exist (and therefore the file cannot be created), so if we want to handle the exception and continue, we must create the directories along that path ourselves.
I want my application to properly handle this error, as every user has asked why we don't make the proper directory for them.
The way I have decided to handle this can be seen below.
import logging.config
import os

done = False
while not done:
    try:
        # Configure logging based on a config file.
        # If a FileHandler's full path to its file does not exist, fileConfig raises an IOError.
        logging.config.fileConfig(filename)
    except IOError as e:
        if e.args[0] == 2 and e.filename:
            # If we catch the IOError, we can see if it is a "does not exist" error (errno 2)
            # and try to recover by making the directories.
            print "Most likely the full path to the file does not exist, so we can try and make it"
            fp = e.filename[:e.filename.rfind("/")]  # i.e. the directory part of the filename
            # See http://stackoverflow.com/questions/273192/python-best-way-to-create-directory-if-it-doesnt-exist-for-file-write#273208 for why I don't just leap
            if not os.path.exists(fp):
                os.makedirs(fp)
        else:
            print "Most likely some other error...let's just reraise for now"
            raise
    else:
        done = True
I need to loop (or recurse, I suppose) since there are N FileHandlers that need to be configured, and therefore N IOErrors that may be raised and corrected in this scenario.
Is this the proper way to do this? Is there a better, more Pythonic way, that I don't know of or may not understand?

This is not something specific to the logging module: in general, Python code does not create intermediate directories for you automatically; you need to do this explicitly using os.makedirs(), typically like this:
if not os.path.exists(dirname):
    os.makedirs(dirname)
You can replace the standard FileHandler provided by logging with a subclass which does the checks you need and, when necessary, creates the directory for the logging file using os.makedirs(). Then you can specify this handler in the config file instead of the standard handler.
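For example, a subclass along these lines could do it (MakedirsFileHandler is just an illustrative name, and how you reference it from the handler's class= entry in the config file depends on the fileConfig variant you are using):
import logging
import os

class MakedirsFileHandler(logging.FileHandler):
    """A FileHandler that creates the log file's directory if it is missing."""
    def __init__(self, filename, mode='a', encoding=None, delay=False):
        dirname = os.path.dirname(os.path.abspath(filename))
        if dirname and not os.path.isdir(dirname):
            os.makedirs(dirname)
        logging.FileHandler.__init__(self, filename, mode, encoding, delay)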

Assuming it only needs to be done once at the beginning of your app's execution, I would just os.makedirs() all the needed directories without checking for their existence first or even waiting for the logging module to raise an error. If you then get an error trying to start a logger, you can just handle it the way you likely already did: print an error, disable the logger. You went above and beyond just by trying to create the directory. If the user gave you bogus information, you're no worse off than you are now, and you're better off in the vast majority of cases.
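A minimal sketch of that approach, assuming you can list the log file paths up front (log_paths, the paths inside it, and the config file name are all made up for illustration):
import logging.config
import os

# Hypothetical list of every log file your config will ask for.
log_paths = ["/var/log/myapp/app.log", "/var/log/myapp/errors.log"]

for path in log_paths:
    dirname = os.path.dirname(path)
    try:
        os.makedirs(dirname)
    except OSError:
        # Already exists, no permission, bogus path, ... -- either way,
        # let the logging setup report or disable the handler later.
        pass

logging.config.fileConfig("logging.conf")  # hypothetical config file name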


How to check if you have permissions to a directory

I'm writing a Python program and now I'm working on exceptions.
while True:
    try:
        os.makedirs("{}\\test".format(dest))
    except PermissionError:
        print("Make sure that you have access to specified path")
        print("Try again specify your path: ", end='')
        dest = input()
        continue
    break
It is working, but later I need to delete that folder.
What is a better way to do this?
Don't.
It is almost never worth verifying that you have permissions to perform an operation that your program requires. For one thing, permissions are not the only possible reason for failure. A delete may also fail because of a file lock by another program, for instance. Unless you have a very good reason to do otherwise, it is both more efficient and more reliable to just write your code to try the operation and then abort on failure:
import shutil
import sys

try:
    shutil.rmtree(path_to_remove)  # Recursively deletes directory and files inside it
except Exception as ex:
    print('Failed to delete directory, manual clean up may be required: {}'.format(path_to_remove))
    sys.exit(1)
Other concerns about your code
Use os.path.join to concatenate file paths: os.makedirs(os.path.join(dest, "test")). This will use the appropriate directory separator for the operating system.
Why are you looping on failure? In real world programs, simply aborting the entire operation is simpler and usually makes for a better user experience.
Are you sure you aren't looking for the tempfile library? It allows you to spit out a unique directory to the operating system's standard temporary location:
import os
import tempfile

with tempfile.TemporaryDirectory() as tmpdir:
    some_function_that_creates_several_files(tmpdir)
    for dirpath, dirnames, filenames in os.walk(tmpdir):
        # do something with each file in filenames
        pass
# tmpdir automatically deleted when context manager exits

# Or if you really only need the file
with tempfile.TemporaryFile() as tmpfile:
    tmpfile.write(b'my data')  # TemporaryFile is opened in binary mode by default
    some_function_that_needs_a_file(tmpfile)
# tmpfile automatically deleted when context manager exits
I think what you want is os.access.
os.access(path, mode, *, dir_fd=None, effective_ids=False, follow_symlinks=True)
Use the real uid/gid to test for access to path. Note that most operations will use the effective uid/gid, therefore this routine can be used in a suid/sgid environment to test if the invoking user has the specified access to path. mode should be F_OK to test the existence of path, or it can be the inclusive OR of one or more of R_OK, W_OK, and X_OK to test permissions. Return True if access is allowed, False if not.
For example:
os.access("/path", os.R_OK)
And the mode contains:
os.F_OK # existence
os.R_OK # readability
os.W_OK # writability
os.X_OK # executability
Refer: https://docs.python.org/3.7/library/os.html#os.access

List of exceptions for Python3 I/O operations for proper error handling

I'm trying to write to a file like this:
with open(filename, 'wb') as f:
    f.write(data)
But when opening a file and writing to it many things might go wrong and I want to properly report what went wrong to the user. So I don't want to do:
try:
    with open(filename, 'wb') as f:
        f.write(data)
except:
    print("something went wrong")
But instead I want to tell the user more specifics so that they can fix the problem on their end. So I started by catching the PermissionError to tell the user to fix their permissions. But soon I got a bug report that the program would fail if filename pointed to a directory, so I also caught and reported the IsADirectoryError. But this is not all: it might be that the path to filename doesn't exist, so later I also added the FileNotFoundError.
Opening a file and writing to it is a very simple pattern but still many things can go wrong. I'm wondering whether there is an easy recipe that allows me to catch these errors and report them properly without giving a generic "some error happened" message. This kind of pattern has been used for decades, so there must be some list of things to check for, no?
I now caught a number of exceptions but I can imagine that even more are lurking around. Things probably become even worse when running this on non-Linux platforms...
What are the best practices for opening a file and writing to it in Python that give a maximum of information back to the user when things go wrong, without resorting to dumping the bare traceback?
On the operating system level there are only so many things that can go wrong when doing an open() and write() system call so there must exist a complete list?
You could catch the higher level exception OSError and then present the value of the strerror field to the user as the explanation. This relies on the fact that exceptions such as FileNotFoundError are subclasses of OSError and these subclasses can all be caught by using the base class.
Here's an example:
try:
    filename = 'missing_file'
    with open(filename) as f:
        for line in f:
            # whatever....
            pass
except OSError as exc:
    print('{!r}: {}'.format(filename, exc.strerror))
Running this code when missing_file does not exist produces this error message:
'missing_file': No such file or directory

Can anyone explain this weird behaviour of shutil.rmtree and shutil.copytree?

I'm building a relatively simple application that asks for directories, checks if they're correct, and then removes one of them and recreates it with contents of the other one. I'm encountering this weird behaviour, I'll try to explain:
When I've got the destination folder open in an Explorer window AND it's empty, there's an access denied exception, then I get kicked out of the folder and it gets removed anyway. But if it's not empty, it works just fine: no exceptions, and the destination directory (from what I can tell) gets emptied and then filled with files from the source directory. Which is strange, because it's supposed to outright remove the destination folder no matter what, and then recreate it with the same name and the contents of the source directory.
This doesn't make sense to me. Shouldn't there be the exact same exception when I'm browsing the directory while it's not empty as when it's empty? What's the difference? It's still supposed to just delete the folder. Is there any logical explanation for that? Also, if there's an exception, why does the directory get removed anyway?
Code for this particular part is pretty straightforward (please keep in mind I'm a beginner :) )
def Delete(self, dest):
    try:
        shutil.rmtree(dest)
        self.Paste(self.src, dest)
    except (IOError, os.error) as e:
        print e

def Paste(self, src, dest):
    try:
        shutil.copytree(src, dest)
    except (IOError, os.error) as e:
        print e
This is expected behaviour on Windows.
Internally shutil.rmtree calls the Windows API function DeleteFile, which is documented on MSDN (http://msdn.microsoft.com/en-us/library/windows/desktop/aa363915%28v=vs.85%29.aspx).
This function has the following property (highlight by me):
The DeleteFile function marks a file for deletion on close. Therefore, the file deletion does not occur until the last handle to the file is closed. Subsequent calls to CreateFile to open the file fail with ERROR_ACCESS_DENIED.
If any other process still has an open handle (e.g. virus scanners, Windows Explorer because you are viewing the directory, or anything else that might still have a handle to that directory), it will not go away.
Usually you just catch the exception in your Paste operation and retry it a few times over some dozens of milliseconds, to handle all those weird virus scanner anomalies.
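A rough sketch of such a retry loop (the attempt count and delay are arbitrary):
import shutil
import time

def rmtree_with_retry(path, attempts=5, delay=0.1):
    """Retry shutil.rmtree a few times to ride out transient
    access-denied errors caused by lingering handles."""
    for attempt in range(attempts):
        try:
            shutil.rmtree(path)
            return
        except OSError:
            if attempt == attempts - 1:
                raise
            time.sleep(delay)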
Little bonus: you can use WinDbg or Process Explorer to find out who still keeps an open handle to your file (just use Find Handle in Process Explorer and search for the filename).

Which Python exception should I throw?

I'm writing some code to manipulate the Windows clipboard. The first thing I do is to try and open the clipboard with OpenClipboard() function from the Windows API:
if OpenClipboard(None):
    # Access the clipboard here
    pass
else:
    # Handle failure
    pass
This function can fail. So if it does, I would like to raise an exception. My question is, which of the standard Python exceptions should I raise? I'm thinking WindowsError would be the right one, but not sure. Could someone please give me a suggestion?
It is better to avoid raising standard exceptions directly. Create your own exception class, inherit it from the most appropriate one (WindowsError is ok) and raise it. This way you'll avoid confusion between your own errors and system errors.
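For instance, something along these lines (ClipboardError is a made-up name, the ctypes call merely stands in for whatever binding you already use to reach OpenClipboard, and WindowsError only exists on Windows):
import ctypes

class ClipboardError(WindowsError):
    """Application-specific error raised when the clipboard is unavailable."""
    pass

if not ctypes.windll.user32.OpenClipboard(None):
    raise ClipboardError("OpenClipboard() failed")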
Raise the WindowsError and give it some extra information, for example:
raise WindowsError("Clipboard can't be opened")
Then, when it's being debugged, they can tell what your WindowsError means rather than seeing just a random WindowsError with no context.
WindowsError seems a reasonable choice, and it will record extra error information for you. From the docs:
exception WindowsError
Raised when a Windows-specific error occurs or when the error number does not correspond to an errno value. The winerror and strerror values are created from the return values of the GetLastError() and FormatMessage() functions from the Windows Platform API. The errno value maps the winerror value to corresponding errno.h values. ...
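If you are already going through ctypes, ctypes.WinError() will construct that exception for you from GetLastError() and FormatMessage(), so winerror and strerror come pre-populated. A minimal Windows-only sketch:
import ctypes

if not ctypes.windll.user32.OpenClipboard(None):
    # Builds a WindowsError/OSError from GetLastError() and FormatMessage()
    raise ctypes.WinError()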

Check if I can write a file to a directory or not with Python

I need to check whether I can write a file to a directory that the user points to, with Python.
Is there a way to check this in advance? I could use try..except for this purpose, but I am hoping for something better, in the sense that I can check beforehand.
Despite Jim Brissom's claim, exception handling is not cheap in Python compared to 'check then try' idioms if you expect the thing to fail more than a few percent of the time. (Read to the end for an exception!) However, the key thing here is that you need to check the exception anyway, because the permissions can change between the check and your write:
### !!! This is an example of what not to do!
### !!! Don't do this!
if os.access("test", os.W_OK):
    # And in here, some jerk does chmod 000 test
    open("test", "w").write(my_data)
    # Exception happens despite os.access!
os.access and the stat module are great if you're trying to, e.g., prepare a list of folders for the user to pick and want to exclude invalid ones a priori. However, when the rubber hits the road, it is not a replacement for exception handling if you want your program to be robust.
And now the exception to the exceptions-are-slow rule: Exceptions are slow, but disks are slower. And if your disk is under particularly heavy load, you might end up having the file you're interested in evicted from the OS or disk cache between the os.access and open call. In this case, you're going to experience a major slowdown as you have to go to disk (and these days, that can also mean network) twice.
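The robust version of the snippet above is therefore to just attempt the write and report any failure (reusing the "test" path and my_data from the example):
try:
    with open("test", "w") as f:
        f.write(my_data)
except OSError as exc:
    # Report why the write failed instead of pre-checking permissions
    print("Could not write 'test': {}".format(exc.strerror))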
You will probably not find something better or more Pythonic. Python's philosophy is it is easier to ask forgiveness than permission.
You can use os.access if you like. Coupled with os.path.isfile to check if you have a file and not e.g. a directory. It will probably give you what you need. The exception path is much better.
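If you do want the advance check anyway, here is a tiny sketch of that os.access plus os.path.isfile combination (remember it is only a hint, not a guarantee):
import os

def looks_writable(path):
    # True if path is an existing regular file we appear to have write access to
    return os.path.isfile(path) and os.access(path, os.W_OK)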
The pythonic way is to access it and catch the exception if it fails.
If you really have to check it, use os.access, but the results are not always reliable; beware of issues with UAC on Vista/Win7, for example!
Example:
os.access(r'C:\Programme', os.R_OK)
This will tell you if you have read access.
just use this:
def check_access(file, mode):
    try:
        with open(file, mode):
            return True
    except PermissionError:
        return False

print(check_access("test", "w"))
Output if there is no permission:
False
Output if there is permission:
True
If you use os.access(), you still have a chance of getting something like:
Traceback (most recent call last):
  File "/Users/lawrence/Library/Application Support/CodeRunner/Unsaved/Untitled.py", line 8, in <module>
    open("test", "w").write("test")
PermissionError: [Errno 13] Permission denied: 'test'
You can still use os.access(), but I don't recommend it.
