Treat separate files as one file object in Python

I have a split file (let's say name.a0, name.a1, name.a2, ...).
Is there a way to get one readable file-like object that is a concatenation of those, without using a temporary file and without loading them all into memory?

The fileinput module in the Python standard library exists for exactly this purpose:
import fileinput

with fileinput.input(files=('name.a0', 'name.a1', 'name.a2')) as f:
    for line in f:
        process(line)

You can always create a proxy object that treats a series of files as one. You need to implement just enough of the file object interface to satisfy your program's needs.
For example, if all you do is iterate over the lines in all these files, the following object would suffice:
class MultiFile(object):
    def __init__(self, *filenames, mode='r'):
        self._filenames = iter(filenames)
        self._mode = mode
        self._openfile = open(next(self._filenames), self._mode)

    def __enter__(self):
        return self

    def __exit__(self, *exception_info):
        self._openfile.close()

    __del__ = __exit__

    def __iter__(self):
        return self

    def __next__(self):
        try:
            return next(self._openfile)
        except StopIteration:
            # find the next file to yield from; raises StopIteration
            # when self._filenames has run out
            while True:
                self._openfile.close()
                self._openfile = open(next(self._filenames), self._mode)
                try:
                    return next(self._openfile)
                except StopIteration:
                    continue
This lets you read through a series of files as if it were one, reading lines as you go (so it never loads everything into memory):
import glob

for line in MultiFile(*sorted(glob.glob('name.a?'))):
    # ...
Note that in Python 3 (or when using the io library in Python 2) you'll need to implement one of the appropriate base classes for the file mode (raw, buffered or text).


Mocking "with open()"

I am trying to unit test a method that reads the lines from a file and processes them.
with open([file_name], 'r') as file_list:
    for line in file_list:
        # Do stuff
I tried several ways described in other questions, but none of them seems to work for this case. I don't quite understand how Python uses the file object as an iterable over the lines; does it internally use file_list.readlines()?
This way didn't work:
with mock.patch('[module_name].open') as mocked_open:  # also tried with __builtin__ instead of module_name
    mocked_open.return_value = 'line1\nline2'
I got an
AttributeError: __exit__
Maybe because the with statement needs this special method to close the file?
This code makes file_list a MagicMock. How do I store data on this MagicMock to iterate over it?
with mock.patch("__builtin__.open", mock.mock_open(read_data="data")) as mock_file:
The return value of mock_open (until Python 3.7.1) doesn't provide a working __iter__ method, which may make it unsuitable for testing code that iterates over an open file object.
Instead, I recommend refactoring your code to take an already opened file-like object. That is, instead of
def some_method(file_name):
    with open([file_name], 'r') as file_list:
        for line in file_list:
            # Do stuff
            ...

some_method(file_name)
write it as
def some_method(file_obj):
    for line in file_obj:
        # Do stuff
        ...

with open(file_name, 'r') as file_obj:
    some_method(file_obj)
This turns a function that has to perform IO into a pure(r) function that simply iterates over any file-like object. To test it, you don't need to mock open or hit the file system in any way; just create a StringIO object to use as the argument:
def test_it(self):
    f = StringIO.StringIO("line1\nline2\n")
    some_method(f)
(If you still feel the need to write and test a wrapper like
def some_wrapper(file_name):
    with open(file_name, 'r') as file_obj:
        some_method(file_obj)
note that you don't need the mocked open to do anything in particular. You test some_method separately, so the only thing you need to do to test some_wrapper is verify that the return value of open is passed to some_method. open, in this case, can be a plain old mock with no special behavior.)

Which objects the with statement applies to? [duplicate]

I'm trying to understand if there is a difference between these, and what that difference might be.
Option One:
file_obj = open('test.txt', 'r')
with file_obj as in_file:
    print in_file.readlines()
Option Two:
with open('test.txt', 'r') as in_file:
    print in_file.readlines()
I understand that with Option One, the file_obj is in a closed state after the with block.
I don't know why no one has mentioned this yet, because it's fundamental to the way with works. As with many language features in Python, with calls special methods behind the scenes, which are already defined for built-in Python objects and can be overridden by user-defined classes. In with's particular case (and context managers more generally), the methods are __enter__ and __exit__.
Remember that in Python everything is an object -- even literals. This is why you can do things like 'hello'[0]. Thus, it does not matter whether you use the file object directly as returned by open:
with open('filename.txt') as infile:
    for line in infile:
        print(line)
or store it first with a different name (for example to break up a long line):
the_file = open('filename' + some_var + '.txt')
with the_file as infile:
    for line in infile:
        print(line)
Because the end result is that the_file, infile, and the return value of open all point to the same object, and that's what with is calling the __enter__ and __exit__ methods on. The built-in file object's __exit__ method is what closes the file.
These behave identically. As a general rule, the meaning of Python code is not changed by assigning an expression to a variable in the same scope.
This is the same reason that these are identical:
f = open("myfile.txt")
vs
filename = "myfile.txt"
f = open(filename)
Regardless of whether you add an alias, the meaning of the code stays the same. The context manager has a deeper meaning than passing an argument to a function, but the principle is the same: the context manager magic is applied to the same object, and the file gets closed in both cases.
The only reason to choose one over the other is if you feel it helps code clarity or style.
There is no difference between the two - either way the file is closed when you exit the with block.
The second example you give is the typical way files are used in Python 2.6 and newer (when the with syntax was added).
You can verify that the first example also works in a REPL session like this:
>>> file_obj = open('test.txt', 'r')
>>> file_obj.closed
False
>>> with file_obj as in_file:
...     print in_file.readlines()
<Output>
>>> file_obj.closed
True
So after the with blocks exits, the file is closed.
Normally the second example is how you would do this sort of thing, though.
There's no reason to create that extra variable file_obj... anything that you might want to do with it after the end of the with block you could just use in_file for, because it's still in scope.
>>> in_file
<closed file 'test.txt', mode 'r' at 0x03DC5020>
If you just fire up Python and use either of those options, the net effect is the same as long as the base implementation of Python's file object is not changed. (In Option One, file_obj is still in scope after the block, but it is closed at the end of the with block in both options, as you have already observed.)
There can be differences with use cases with a context manager however. Since file is an object, you can modify it or subclass it.
You can also open a file by just calling file(file_name), showing that file acts like other objects (but no one opens files that way in Python unless it is with with):
>>> f=open('a.txt')
>>> f
<open file 'a.txt', mode 'r' at 0x1064b5ae0>
>>> f.close()
>>> f=file('a.txt')
>>> f
<open file 'a.txt', mode 'r' at 0x1064b5b70>
>>> f.close()
More generally, to open and close some resource called the_thing (commonly a file, but it can be anything), you follow these steps:
set up the_thing                   # resource specific: open, or call the obj
try:                               # generically __enter__
    yield pieces from the_thing
except:
    react if the_thing is broken
finally:                           # generically __exit__
    put the_thing away
You can more easily change the flow of those subelements using the context manager vs procedural code woven between open and the other elements of the code.
Since Python 2.5, file objects have __enter__ and __exit__ methods:
>>> f=open('a.txt')
>>> f.__enter__
<built-in method __enter__ of file object at 0x10f836780>
>>> f.__exit__
<built-in method __exit__ of file object at 0x10f836780>
The default Python file object uses those methods in this fashion:
__init__(...) # performs initialization desired
__enter__() -> self # in the case of `file()` return an open file handle
__exit__(*excinfo) -> None. # in the case of `file()` closes the file.
These methods can be changed for your own use to modify how a resource is treated when it is opened or closed. A context manager makes it really easy to modify what happens when you open or close a file.
Trivial example:
class Myopen(object):
    def __init__(self, fn, opening='', closing='', mode='r', buffering=-1):
        # set up the_thing
        if opening:
            print(opening)
        self.closing = closing
        self.f = open(fn, mode, buffering)

    def __enter__(self):
        # set up the_thing
        # could lock the resource here
        return self.f

    def __exit__(self, exc_type, exc_value, traceback):
        # put the_thing away
        # unlock, or whatever context-applicable putting away the_thing requires
        self.f.close()
        if self.closing:
            print(self.closing)
Now try that:
>>> with Myopen('a.txt', opening='Hello', closing='Good Night') as f:
...     print f.read()
...
Hello
[contents of the file 'a.txt']
Good Night
Once you have control of entry and exit to a resource, there are many use cases:
Lock a resource to access it and use it; unlock when you are done
Make a quirky resource (like a memory file, database or web page) act more like a straight file resource
Open a database and rollback if there is an exception but commit all writes if there are no errors
Temporarily change the context of a floating point calculation
Time a piece of code
Suppress or propagate exceptions by returning True or False from the __exit__ method.
You can read more examples in PEP 343.
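For instance, the timing use case from that list can be sketched with contextlib.contextmanager (names hypothetical), where the code before the yield plays the role of __enter__ and the finally block the role of __exit__:

```python
import time
from contextlib import contextmanager

@contextmanager
def timed(label):
    # __enter__ part: runs when the with block is entered
    start = time.perf_counter()
    timing = {'label': label}
    try:
        yield timing
    finally:
        # __exit__ part: runs even if the block raises or returns early
        timing['elapsed'] = time.perf_counter() - start

with timed('sleep') as t:
    time.sleep(0.05)
print('{label}: {elapsed:.3f}s'.format(**t))
```

Because the elapsed time is recorded in the finally clause, the measurement survives exceptions in the timed block too.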
It is remarkable that with works even if return or sys.exit() is called inside the block (that means __exit__ is called anyway):
#!/usr/bin/env python
import sys

class MyClass:
    def __enter__(self):
        print("Enter")
        return self

    def __exit__(self, type, value, trace):
        print("type: {} | value: {} | trace: {}".format(type, value, trace))

# main code:
def myfunc(msg):
    with MyClass() as sample:
        print(msg)
        # also works if you uncomment this:
        # sys.exit(0)
        return

myfunc("Hello")
The return version will show:
Enter
Hello
type: None | value: None | trace: None
The sys.exit(0) version will show:
Enter
Hello
type: <class 'SystemExit'> | value: 0 | trace: <traceback object at 0x7faca83a7e00>

How to determine if file is opened in binary or text mode?

Given a file object, how do I determine whether it is opened in bytes mode (read returns bytes) or in text mode (read returns str)? It should work with reading and writing.
In other words:
>>> with open('filename', 'rb') as f:
... is_binary(f)
...
True
>>> with open('filename', 'r') as f:
... is_binary(f)
...
False
(Another question which sounds related is not. That question is about guessing whether a file is binary or not from its contents.)
File objects have a .mode attribute:
def is_binary(f):
    return 'b' in f.mode
This limits the test to actual files; in-memory file objects like io.StringIO and io.BytesIO do not have that attribute. You could also test for the appropriate abstract base classes:
import io

def is_binary(f):
    return isinstance(f, (io.RawIOBase, io.BufferedIOBase))
or the inverse:
def is_binary(f):
    return not isinstance(f, io.TextIOBase)
For readable streams, perhaps the most reliable way to determine the mode is to actually read from it:
def is_binary(f):
    return isinstance(f.read(0), bytes)
Though it has the caveat that it won't work if the stream has already been closed (reading then raises an error), it reliably determines the binary-ness of any custom file-like object that neither extends the appropriate io ABCs nor provides the mode attribute.
If only Python 3 support is required, it is also possible to determine the text/binary mode of writable streams, given the clear distinction between bytes and text:
def is_binary(f):
    read = getattr(f, 'read', None)
    if read is not None:
        try:
            data = read(0)
        except (TypeError, ValueError):
            pass  # ValueError is also a superclass of io.UnsupportedOperation
        else:
            return isinstance(data, bytes)
    try:
        # alternatively, write an empty text literal
        # and swap the following True and False
        f.write(b'')
    except TypeError:
        return False
    return True
Unless you test whether a stream is in binary mode very frequently (which should be unnecessary, since the binary-ness of a stream should not change over the lifetime of the object), I doubt any performance drawback from catching exceptions would be an issue (though you could certainly optimize for the likelier path).
There is also a library called mimetypes, whose guess_type returns a tuple (type, encoding), where type is None if the type can't be guessed (missing or unknown suffix) or a string of the form 'type/subtype'. Note that it guesses from the file name, not from the open file object's mode:
import mimetypes

file_type, encoding = mimetypes.guess_type(file_name)

How to implement a ByteCountingStreamReader?

How to implement a ByteCountingStreamReader?
The ByteCountingStreamReader should wrap a file descriptor stream and count the bytes it passed.
A bit like codecs.StreamReader, but the content should not be changed, just counted.
Use case: Solve http://bugs.python.org/issue24259
The tarfile library does not compare the file size of the TarInfo with the actual bytes read from the tar.
Like this Java class, but for Python: http://commons.apache.org/proper/commons-io/apidocs/org/apache/commons/io/input/CountingInputStream.html
Here is a small wrapper function that replaces the read method of the (file) stream. It should also work for other types of streams, and a similar wrapper for the write function could be added.
Beware: readline() seems not to use read() internally, so it has to be wrapped, too, if you use it instead of plain vanilla read().
def ByteCountingStreamReader(stream):
    fr = stream.read
    stream.count = 0
    def inner(size=-1):
        s = fr(size)
        stream.count += len(s)
        return s
    stream.read = inner
    return stream

# testing it
myStream = open('/etc/hosts', 'r')
with ByteCountingStreamReader(myStream) as f:
    while True:
        s = f.read(20)
        if s == '':
            break
        print(s, end='')
print(f.count)

Will passing open() as json.load() parameter leave the file handle open?

I have written a small web application, and with each request I have to open and read a JSON file. I am using pickledb for this purpose.
What concerns me is that the library passes open() as a parameter to the json.load() function. That got me thinking...
When I write code like this:
with open("filename.json", "rb") as json_data:
    my_data = json.load(json_data)
or
json_data = open("filename.json", "rb")
my_data = json.load(json_data)
json_data.close()
I am pretty sure that the file handle is being closed.
But when I open it this way :
my_data = json.load(open("filename.json", "rb"))
The docs say that json.load() is expecting a .read()-supporting file-like object containing a JSON document.
So the question is, will this handle stay open and eat more memory with each request? Who is responsible for closing the handle in that case?
The close method of the file will be called when the object is destroyed, as json.load expects only a read method on the input object.
What happens then depends on the garbage collection implementation. You can read more in Is explicitly closing files important?
Generally speaking, it's good practice to take care of closing the file.
I tried to fake a file-like object with read() and close() methods and stick it into json.load(). I then observed that close() is not called when the load finishes. Hence, I would recommend closing the file object explicitly. In any case, the doc says that the loading method expects a read() method; it does not say it expects a close() method on the object.
In test.json:
{ "test":0 }
In test.py:
import json

class myf:
    def __init__(self):
        self.f = None

    @staticmethod
    def open(path, mode):
        obj = myf()
        obj.f = open(path, mode)
        return obj

    def read(self):
        print("READING")
        return self.f.read()

    def close(self):
        print("CLOSING")
        return self.f.close()

def mytest():
    s = json.load(myf.open("test.json", "r"))
    print(s)

mytest()
print("DONE")
Output:
$> python test.py
READING
{u'test': 0}
DONE
$>
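If you want to keep the inline style but still close deterministically, contextlib.closing turns any object with a close() method into a context manager (shown here with an in-memory stream so the sketch is self-contained):

```python
import contextlib
import io
import json

# closing() calls fp.close() on exit, even if json.load raises
with contextlib.closing(io.StringIO('{"test": 0}')) as fp:
    my_data = json.load(fp)

print(my_data)  # {'test': 0}
```

The same pattern works with a real open("filename.json", "rb") handle, and avoids relying on garbage collection to release the file.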
