How can I get python3 file.close to release the file lock?

Python3 file.close is apparently not letting go of the file, so it can't be used by Win7 commands.
To wit:
import os, time
hfile = "Positions.htm"
hf = open(hfile, "w")
hf.write(str(buf))
hf.close
time.sleep(2) # give it enough time to do the close
os.system(hfile) # run the html file in the default browser
This results in an error message from Win7, saying that it cannot access the file because it is currently in use. However, it is easily accessed after the python program terminates.
Yes, I know that similar questions have been asked here, but I have not seen one that gave a general answer.

You just forgot to call close()
hf.close # wrong
hf.close() # right
You can see that hf.close just gives you a bound method without calling it:
>>> hf.close
<built-in method close of _io.TextIOWrapper object at 0x7ffe8ec74b40>
As Padraic Cunningham pointed out, you don't need to do this if you just use with syntax:
with open(hfile, 'w') as hf:
    hf.write(str(buf))
# Automatically closed
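Applied to the original snippet, a fixed version might look like this (a sketch, assuming buf already holds the HTML content):
import os

hfile = "Positions.htm"
with open(hfile, "w") as hf:
    hf.write(str(buf))
# the file is really closed here, so Win7 can open it
os.system(hfile)  # run the html file in the default browser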

Related

Does Python GC close files too?

Consider the following piece of Python (2.x) code:
for line in open('foo').readlines():
    print line.rstrip()
I assume that since the open file remains unreferenced it has to be closed automatically. I have read about the garbage collector in Python, which frees memory allocated by unused objects. Is the GC general enough to handle files too?
UPDATE
For current versions of python, the clear recommendation is to close files explicitly or use a with statement. No indication anymore that the GC will close the file for you. So now the answer should be: Maybe, but no guarantee. Always use close() or a with statement.
In the Python 3.8 docs the text has been updated to:
If you’re not using the with keyword, then you should call f.close() to close the file and immediately free up any system resources used by it.
Warning: Calling f.write() without using the with keyword or calling f.close() might result in the arguments of f.write() not being completely written to the disk, even if the program exits successfully.
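To make the warning concrete, here is a small sketch (my own illustration, not from the docs): os._exit() ends the process with a success status but skips interpreter cleanup, so unflushed buffers are lost:
import os

f = open("out.txt", "w")
f.write("buffered, not yet on disk")
os._exit(0)  # exit status 0, but buffers are never flushed; out.txt will likely be empty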
Old Answer:
Taken from the Python 3.6 docs:
If you’re not using the with keyword, then you should call f.close() to close the file and immediately free up any system resources used by it. If you don’t explicitly close a file, Python’s garbage collector will eventually destroy the object and close the open file for you, but the file may stay open for a while. Another risk is that different Python implementations will do this clean-up at different times.
So yes, the file will be closed automatically, but in order to be in control of the process you should do so yourself or use a with statement:
with open('foo') as foo_file:
    for line in foo_file.readlines():
        print line.rstrip()
foo_file will be closed once the with block ends
In the Python 2.7 docs, the wording was different:
When you’re done with a file, call f.close() to close it and free up any system resources taken up by the open file. After calling f.close(), attempts to use the file object will automatically fail.
so I assume that you should not depend on the garbage collector automatically closing files for you; just do it manually or use with.
I often use open without with, so I ran a little test. For the test I used Python 3.9, so I'm not speaking of earlier versions, but for 3.9 at least we do not need with to get a clean file close.
bash:
inotifywait -m "testfile"
python3.9:
from time import sleep

lines = [line for line in open("testfile")]
sleep(5)
for line in lines:
    print(line)
Watch the inotifywait window and run the python script. Before the sleep the final event will be CLOSE_NOWRITE,CLOSE and there will be no other events from that file through the run of the python script.
It depends on what you do; check out this description of how it works.
In general I would recommend using the file's context manager:
with open("foo", "r") as f:
for line in f.readlines():
# ....
which is similar to (for basic understanding):
file_context_manager = open("foo", "r").__enter__()
for line in file_context_manager.readlines():
    # ....
file_context_manager.__exit__(None, None, None)
The first version is a lot more readable, and the with statement calls the __exit__ method automatically (plus a bit more context handling).
The file will be closed automatically when the scope of the with statement is left.

How to create a temporary file that can be read by a subprocess?

I'm writing a Python script that needs to write some data to a temporary file, then create a subprocess running a C++ program that will read the temporary file. I'm trying to use NamedTemporaryFile for this, but according to the docs,
Whether the name can be used to open the file a second time, while the named temporary file is still open, varies across platforms (it can be so used on Unix; it cannot on Windows NT or later).
And indeed, on Windows if I flush the temporary file after writing, but don't close it until I want it to go away, the subprocess isn't able to open it for reading.
I'm working around this by creating the file with delete=False, closing it before spawning the subprocess, and then manually deleting it once I'm done:
import os, tempfile

fileTemp = tempfile.NamedTemporaryFile(delete=False)
try:
    fileTemp.write(someStuff)
    fileTemp.close()
    # ...run the subprocess and wait for it to complete...
finally:
    os.remove(fileTemp.name)
This seems inelegant. Is there a better way to do this? Perhaps a way to open up the permissions on the temporary file so the subprocess can get at it?
Since nobody else appears to be interested in leaving this information out in the open...
tempfile does expose a function, mkdtemp(), which can trivialize this problem:
import os
from tempfile import mkdtemp

temp_dir = mkdtemp()
try:
    temp_file = make_a_file_in_a_dir(temp_dir)
    do_your_subprocess_stuff(temp_file)
    remove_your_temp_file(temp_file)
finally:
    os.rmdir(temp_dir)
I leave the implementation of the intermediate functions up to the reader, as one might wish to do things like use mkstemp() to tighten up the security of the temporary file itself, or overwrite the file in-place before removing it. I don't particularly know what security restrictions one might have that are not easily planned for by perusing the source of tempfile.
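For concreteness, here is one possible sketch of those intermediate functions (the helper names are the hypothetical ones used above, and the C++ program name is made up):
import os, subprocess

def make_a_file_in_a_dir(temp_dir, data=b"some stuff"):
    path = os.path.join(temp_dir, "input.dat")
    with open(path, "wb") as f:  # closed before the subprocess starts
        f.write(data)
    return path

def do_your_subprocess_stuff(temp_file):
    # hypothetical C++ binary that reads the file
    subprocess.check_call(["some_cpp_program", temp_file])

def remove_your_temp_file(temp_file):
    os.remove(temp_file)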
Anyway, yes, using NamedTemporaryFile on Windows might be inelegant, and my solution here might also be inelegant, but you've already decided that Windows support is more important than elegant code, so you might as well go ahead and do something readable.
According to Richard Oudkerk
(...) the only reason that trying to reopen a NamedTemporaryFile fails on Windows is because when we reopen we need to use O_TEMPORARY.
and he gives an example of how to do this in Python 3.3+
import os, tempfile

DATA = b"hello bob"

def temp_opener(name, flag, mode=0o777):
    return os.open(name, flag | os.O_TEMPORARY, mode)

with tempfile.NamedTemporaryFile() as f:
    f.write(DATA)
    f.flush()
    with open(f.name, "rb", opener=temp_opener) as f:
        assert f.read() == DATA

assert not os.path.exists(f.name)
Because there's no opener parameter in the built-in open() in Python 2.x, we have to combine lower level os.open() and os.fdopen() functions to achieve the same effect:
import subprocess
import tempfile

DATA = b"hello bob"

with tempfile.NamedTemporaryFile() as f:
    f.write(DATA)
    f.flush()
    subprocess_code = \
"""import os
f = os.fdopen(os.open(r'{FILENAME}', os.O_RDWR | os.O_BINARY | os.O_TEMPORARY), 'rb')
assert f.read() == b'{DATA}'
""".replace('\n', ';').format(FILENAME=f.name, DATA=DATA)
    # raises CalledProcessError if the assert in the child process fails
    subprocess.check_output(['python', '-c', subprocess_code])
You can always go low-level, though I am not sure if it's clean enough for you:
import os, tempfile

fd, filename = tempfile.mkstemp()
try:
    os.write(fd, someStuff)
    os.close(fd)
    # ...run the subprocess and wait for it to complete...
finally:
    os.remove(filename)
At least if you open a temporary file using existing Python libraries, accessing it from multiple processes is not possible on Windows. According to MSDN, you can pass the shared-mode flag FILE_SHARE_READ as the third parameter (dwShareMode) of the CreateFile() function, which:
Enables subsequent open operations on a file or device to request read access. Otherwise, other processes cannot open the file or device if they request read access. If this flag is not specified, but the file or device has been opened for read access, the function fails.
So, you can write a Windows-specific C routine to create a custom temporary file opener, call it from Python, and then your subprocess can access the file without any error. But I think you should stick with your existing approach, as it is the most portable version, will work on any system, and is thus the most elegant implementation.
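Rather than writing a C routine, the same CreateFile call can also be reached from Python via ctypes. A rough, Windows-only sketch of that idea (hedged; the numeric constants come from the Win32 headers, and the file name is made up):
import ctypes
import ctypes.wintypes as wintypes
import msvcrt, os

GENERIC_WRITE = 0x40000000
FILE_SHARE_READ = 0x00000001
CREATE_ALWAYS = 2
FILE_ATTRIBUTE_NORMAL = 0x80

CreateFileW = ctypes.windll.kernel32.CreateFileW
CreateFileW.restype = wintypes.HANDLE

handle = CreateFileW("shared.tmp", GENERIC_WRITE, FILE_SHARE_READ,
                     None, CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, None)
if handle == wintypes.HANDLE(-1).value:  # INVALID_HANDLE_VALUE
    raise ctypes.WinError()

# wrap the OS handle in a CRT file descriptor, then in a Python file object
fd = msvcrt.open_osfhandle(handle, os.O_WRONLY)
f = os.fdopen(fd, "wb")
f.write(b"other processes may read this while it is open")
f.flush()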
Discussion on Linux and Windows file locking can be found here.
EDIT: Turns out it is possible to open & read the temporary file from multiple processes in Windows too. See Piotr Dobrogost's answer.
Using mkstemp() with os.fdopen() in a with statement instead avoids having to call close():
import os, tempfile

fd, path = tempfile.mkstemp()
try:
    with os.fdopen(fd, 'wb') as fileTemp:
        fileTemp.write(someStuff)
    # ...run the subprocess and wait for it to complete...
finally:
    os.remove(path)
I know this is a really old post, but I think it's relevant today given that the API is changing and functions like mktemp and mkstemp are being replaced by functions like TemporaryFile() and TemporaryDirectory(). I just wanted to demonstrate in the following sample how to make sure that a temp directory is still available downstream:
Instead of coding:
tmpdirname = tempfile.TemporaryDirectory()
and using tmpdirname throughout your code, you should try to put your code in a with statement block to ensure that it is available for your code calls... like this:
with tempfile.TemporaryDirectory() as tmpdirname:
    [do dependent code nested so it's part of the with statement]
If you reference it outside of the with block, the directory will likely already have been deleted.
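For example, a minimal sketch (the cat call at the end is just a stand-in for whatever program consumes the file):
import os, subprocess, tempfile

with tempfile.TemporaryDirectory() as tmpdirname:
    path = os.path.join(tmpdirname, "input.txt")
    with open(path, "w") as f:
        f.write("data for the child process")
    # the file is closed at this point, so even on Windows a subprocess can open it
    subprocess.run(["cat", path], check=True)
# tmpdirname and everything inside it are deleted here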

Make file, 'Currently being used' in python [duplicate]

What is the most elegant way to solve this:
open a file for reading, but only if it is not already opened for writing
open a file for writing, but only if it is not already opened for reading or writing
The built-in functions work like this
>>> path = r"c:\scr.txt"
>>> file1 = open(path, "w")
>>> print file1
<open file 'c:\scr.txt', mode 'w' at 0x019F88D8>
>>> file2 = open(path, "w")
>>> print file2
<open file 'c:\scr.txt', mode 'w' at 0x02332188>
>>> file1.write("111")
>>> file2.write("222")
>>> file1.close()
scr.txt now contains '111'.
>>> file2.close()
scr.txt was overwritten and now contains '222' (on Windows, Python 2.4).
The solution should work inside the same process (like in the example above) as well as when another process has opened the file.
Preferably, a crashing program should not keep the lock held.
I don't think there is a fully cross-platform way. On Unix, the fcntl module will do this for you. However, on Windows (which I assume you are on, given the paths), you'll need to use the win32file module.
Fortunately, there is a portable implementation (portalocker) using the platform-appropriate method, available in the Python Cookbook.
To use it, open the file, and then call:
portalocker.lock(file, flags)
where flags are portalocker.LOCK_EX for exclusive write access, or LOCK_SH for shared, read access.
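A minimal sketch of the whole sequence (assuming portalocker is installed, e.g. via pip install portalocker):
import portalocker

f = open("scr.txt", "w")
portalocker.lock(f, portalocker.LOCK_EX)  # exclusive lock for writing
f.write("111")
f.close()  # closing the file releases the lock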
The solution should work inside the same process (like in the example above) as well as when another process has opened the file.
If by 'another process' you mean 'whatever process' (i.e. not your program), in Linux there's no way to accomplish this relying only on system calls (fcntl & friends). What you want is mandatory locking, and the Linux way to obtain it is a bit more involved:
Remount the partition that contains your file with the mand option:
# mount -o remount,mand /dev/hdXY
Set the sgid flag for your file:
# chmod g-x,g+s yourfile
In your Python code, obtain an exclusive lock on that file:
fcntl.flock(fd, fcntl.LOCK_EX)
Now even cat will not be able to read the file until you release the lock.
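Putting the Python side together, a minimal sketch (Unix-only, and assuming the mount and chmod steps above have already been done):
import fcntl

with open("yourfile", "r+") as f:
    fcntl.flock(f.fileno(), fcntl.LOCK_EX)  # blocks until the exclusive lock is granted
    try:
        data = f.read()  # other processes are locked out here
    finally:
        fcntl.flock(f.fileno(), fcntl.LOCK_UN)  # release before closing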
EDIT: I solved it myself! By using directory existence & age as a locking mechanism! Locking by file is safe only on Windows (because Linux silently overwrites), but locking by directory works perfectly on both Linux and Windows. See my GitHub repo, where I created an easy-to-use class 'lockbydir.DLock' for that:
https://github.com/drandreaskrueger/lockbydir
At the bottom of the readme you'll find three GitPlayers where you can see the code examples execute live in your browser! Quite cool, isn't it? :-)
Thanks for your attention
This was my original question:
I would like to reply to parity3 (https://meta.stackoverflow.com/users/1454536/parity3), but I can neither comment directly ('You must have 50 reputation to comment') nor see any way to contact them directly. What do you suggest I do to get through to them?
My question:
I have implemented something similar to what parity3 suggested here as an answer: https://stackoverflow.com/a/21444311/3693375 ("Assuming your Python interpreter, and the ...")
And it works brilliantly - on Windows. (I am using it to implement a locking mechanism that works across independently started processes. https://github.com/drandreaskrueger/lockbyfile )
But contrary to what parity3 says, it does NOT work the same on Linux:
os.rename(src, dst)
Rename the file or directory src to dst. ... On Unix, if dst exists and is a file, it will be replaced silently if the user has permission. The operation may fail on some Unix flavors if src and dst are on different filesystems. If successful, the renaming will be an atomic operation (this is a POSIX requirement). On Windows, if dst already exists, OSError will be raised.
(https://docs.python.org/2/library/os.html#os.rename)
The silent replacing is the problem. On Linux.
The "if dst already exists, OSError will be raised" is great for my purposes. But only on Windows, sadly.
I guess parity3's example still works most of the time, because of his if condition
if not os.path.exists(lock_filename):
    try:
        os.rename(tmp_filename, lock_filename)
But then the whole thing is not atomic anymore.
Because the if condition might be true in two parallel processes, then both will rename, but only one will win the renaming race, and no exception is raised (on Linux).
Any suggestions? Thanks!
P.S.: I know this is not the proper way, but I am lacking an alternative. PLEASE don't punish me by lowering my reputation. I looked around a lot to solve this myself. How do you PM users on here? And why can't I?
Here's a start on the win32 half of a portable implementation that does not need a separate locking mechanism.
It requires the Python for Windows Extensions to get down to the win32 API, but that's pretty much mandatory for Python on Windows already, and it can alternatively be done with ctypes. The code could be adapted to expose more functionality if it's needed (such as allowing FILE_SHARE_READ rather than no sharing at all). See also the MSDN documentation for the CreateFile and WriteFile system calls, and the article on Creating and Opening Files.
As has been mentioned, you can use the standard fcntl module to implement the unix half of this, if required.
import winerror, pywintypes, win32file

class LockError(StandardError):
    pass

class WriteLockedFile(object):
    """
    Using win32 api to achieve something similar to file(path, 'wb')
    Could be adapted to handle other modes as well.
    """
    def __init__(self, path):
        try:
            self._handle = win32file.CreateFile(
                path,
                win32file.GENERIC_WRITE,
                0,
                None,
                win32file.OPEN_ALWAYS,
                win32file.FILE_ATTRIBUTE_NORMAL,
                None)
        except pywintypes.error, e:
            if e[0] == winerror.ERROR_SHARING_VIOLATION:
                raise LockError(e[2])
            raise

    def close(self):
        self._handle.close()

    def write(self, str):
        win32file.WriteFile(self._handle, str)
Here's how your example from above behaves:
>>> path = "C:\\scr.txt"
>>> file1 = WriteLockedFile(path)
>>> file2 = WriteLockedFile(path) #doctest: +IGNORE_EXCEPTION_DETAIL
Traceback (most recent call last):
...
LockError: ...
>>> file1.write("111")
>>> file1.close()
>>> print file(path).read()
111
I prefer to use filelock, a cross-platform Python library which barely requires any additional code. Here's an example of how to use it:
from filelock import FileLock

path = r"c:\scr.txt"
lock = FileLock(path + ".lock")
with lock:
    file = open(path, "w")
    file.write("111")
    file.close()
Any code within the with lock: block is protected by the lock, meaning that it will finish before another process using the same lock file can access the file.
Assuming your Python interpreter, the underlying OS, and the filesystem treat os.rename as an atomic operation that errors when the destination exists, the following method is free of race conditions. I'm using this in production on a Linux machine. It requires no third-party libs, is not OS-dependent, and aside from an extra file create, the performance hit is acceptable for many use cases. You can easily apply Python's function decorator pattern or a with-statement context manager here to abstract out the mess.
You'll need to make sure that lock_filename does not exist before a new process/task begins.
import os, time

def get_tmp_file():
    filename = 'tmp_%s_%s' % (os.getpid(), time.time())
    open(filename, 'w').close()
    return filename

def do_exclusive_work():
    print 'exclusive work being done...'

num_tries = 10
wait_time = 10
lock_filename = 'filename.lock'
acquired = False
for try_num in xrange(num_tries):
    tmp_filename = get_tmp_file()
    if not os.path.exists(lock_filename):
        try:
            os.rename(tmp_filename, lock_filename)
            acquired = True
        except (OSError, ValueError, IOError), e:
            pass
    if acquired:
        try:
            do_exclusive_work()
        finally:
            os.remove(lock_filename)
        break
    os.remove(tmp_filename)
    time.sleep(wait_time)

assert acquired, 'maximum tries reached, failed to acquire lock file'
EDIT
It has come to light that os.rename silently overwrites the destination on a non-Windows OS. Thanks for pointing this out, @akrueger!
Here is a workaround, gathered from here:
Instead of using os.rename you can use:
try:
    if os.name != 'nt':  # non-windows needs a create-exclusive operation
        fd = os.open(lock_filename, os.O_WRONLY | os.O_CREAT | os.O_EXCL)
        os.close(fd)
    # non-windows os.rename will overwrite lock_filename silently.
    # We leave this call in here just so the tmp file is deleted, but it could be
    # refactored so the tmp file is never even generated for a non-windows OS.
    os.rename(tmp_filename, lock_filename)
    acquired = True
except (OSError, ValueError, IOError), e:
    if os.name != 'nt' and not 'File exists' in str(e):
        raise
@akrueger: You're probably just fine with your directory-based solution; this is just an alternate method.
To be safe when opening files within one application, you could try something like this:
import time

class ExclusiveFile(file):
    openFiles = {}
    fileLocks = []

    class FileNotExclusiveException(Exception):
        pass

    def __init__(self, *args):
        sMode = 'r'
        sFileName = args[0]
        try:
            sMode = args[1]
        except:
            pass
        while sFileName in ExclusiveFile.fileLocks:
            time.sleep(1)
        ExclusiveFile.fileLocks.append(sFileName)
        if not sFileName in ExclusiveFile.openFiles.keys() or (ExclusiveFile.openFiles[sFileName] == 'r' and sMode == 'r'):
            ExclusiveFile.openFiles[sFileName] = sMode
            try:
                file.__init__(self, sFileName, sMode)
            finally:
                ExclusiveFile.fileLocks.remove(sFileName)
        else:
            ExclusiveFile.fileLocks.remove(sFileName)
            raise self.FileNotExclusiveException(sFileName)

    def close(self):
        del ExclusiveFile.openFiles[self.name]
        file.close(self)
That way you subclass the file class. Now just do:
>>> f = ExclusiveFile('/tmp/a.txt', 'r')
>>> f
<open file '/tmp/a.txt', mode 'r' at 0xb7d7cc8c>
>>> f1 = ExclusiveFile('/tmp/a.txt', 'r')
>>> f1
<open file '/tmp/a.txt', mode 'r' at 0xb7d7c814>
>>> f2 = ExclusiveFile('/tmp/a.txt', 'w') # can't open it for writing now
exclfile.FileNotExclusiveException: /tmp/a.txt
If you open it first with 'w' mode, it won't allow anymore opens, even in read mode, just as you wanted...

In the Inline "open and write file" is the close() implicit?

In Python (>2.7), does the code:
open('tick.001', 'w').write('test')
have the same result as:
ftest = open('tick.001', 'w')
ftest.write('test')
ftest.close()
and where can I find documentation about the close() for this inline functionality?
The close() here happens when the file object is deallocated from memory, as part of its deletion logic. Because modern Pythons on other virtual machines — like Java and .NET — cannot control when an object is deallocated from memory, it is no longer considered good Python to open() like this without a close(). The recommendation today is to use a with statement, which explicitly requests a close() when the block is exited:
with open('myfile') as f:
    # use the file
# when you get back out to this level of code, the file is closed
If you do not need a name f for the file, then you can omit the as clause from the statement:
with open('myfile'):
    # use the file
# when you get back out to this level of code, the file is closed
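For reference, the with statement above behaves roughly like this explicit try/finally (a simplified equivalent that ignores the rest of the context-manager protocol):
f = open('myfile')
try:
    # use the file
    pass
finally:
    f.close()  # runs even if an exception was raised in the body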

how to get content of a small ascii file in python?

Let's say we want to implement an equivalent of PHP's file_get_contents.
What is the best practice? (elegant and reliable)
Here are some propositions; are they correct?
Using the with statement:
def file_get_contents(filename):
    with file(filename) as f:
        s = f.read()
    return s
Is using standard open() safe?
def file_get_contents(filename):
    return open(filename).read()
What happens to the file descriptor in either solution?
In the current implementation of CPython, both will generally immediately close the file. However, Python the language makes no such guarantee for the second one: the file will eventually be closed, but the finaliser may not be called until the next GC cycle. Implementations like Jython and IronPython work like this, so it's good practice to explicitly close your files.
I'd say using the first solution is the best practice, though open is generally preferred to file. Note that you can shorten it a little though if you prefer the brevity of the second example:
def file_get_contents(filename):
    with open(filename) as f:
        return f.read()
The __exit__ part of the context manager will execute when you leave the body for any reason, including exceptions and returning from the function - there's no need to use an intermediate variable.
import os

def file_get_contents(filename):
    if os.path.exists(filename):
        fp = open(filename, "r")
        content = fp.read()
        fp.close()
        return content
In this case it will return None if the file doesn't exist, and the file descriptor will be closed before we exit the function.
Using the with statement is actually the nicest way to be sure that the file is really closed.
Depending on the garbage collector's behavior might work for this task, but in this case there is a nicer way to be sure in all cases, so...
with will make sure that the file is closed when the block is left.
In your second example, the file handle might remain open (Python makes no guarantee about whether or when it's closed if you don't close it explicitly).
You can also read it as a one-liner (works in Python 3):
>>> ''.join(open('htdocs/config.php', 'r').readlines())
"This is the first line of the file.\nSecond line of the file"
Read more here http://docs.python.org/py3k/tutorial/inputoutput.html
