I'm currently using a library that loads a file using open(filename).
I don't want to mess with the file system, so I tried to download the file into memory using BytesIO:
obj = BytesIO(requests.get(url).content)
But if I pass obj to the library, I get an error.
How can I transform my object so that it can be "opened" by open(object)?
You can override the built-in open function to return the first argument directly if the argument is a file-like object (which can be identified if it has a read attribute):
import builtins

original_open = open
# Pass file-like objects straight through; defer to the real open() otherwise.
builtins.open = lambda f, *args, **kwargs: f if hasattr(f, 'read') else original_open(f, *args, **kwargs)
so that:
from io import BytesIO
print(open(BytesIO(b'hello world'), 'rb').read())
outputs:
b'hello world'
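Note that this replaces open for the whole process. A more contained variant of the same trick (a sketch; passthrough_open and library_load are hypothetical names) patches open only around the library call and restores the original afterwards:

import builtins
from contextlib import contextmanager

@contextmanager
def passthrough_open():
    # Temporarily make open() return file-like arguments unchanged.
    original_open = builtins.open
    builtins.open = lambda f, *a, **kw: f if hasattr(f, 'read') else original_open(f, *a, **kw)
    try:
        yield
    finally:
        builtins.open = original_open

with passthrough_open():
    library_load(obj)  # hypothetical stand-in for the library call that uses open()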
You can't unless you want to save it as a file, because the open() function can only be used for files contained in the file system. Instead, you can check out the Python docs on io streams (found here: https://docs.python.org/3/library/io.html) and learn how to access your data through io methods.
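For example, a BytesIO object already supports the same reading interface as a binary file, so you can often use it directly (a minimal sketch):

from io import BytesIO

buf = BytesIO(b'payload')
print(buf.read())  # b'payload' -- same interface as a file opened in 'rb' mode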
In python2 I have this in my test method:
mock_file = MagicMock(spec=file)
I'm moving to python3, and I can't figure out how to do a similar mock. I've tried:
from io import IOBase
mock_file = MagicMock(spec=IOBase)
mock_file = create_autospec(IOBase)
What am I missing?
IOBase does not implement crucial file methods such as read and write and is therefore usually unsuitable as a spec to create a mocked file object with. Depending on whether you want to mock a raw stream, a binary file or a text file, you can use RawIOBase, BufferedIOBase or TextIOBase as a spec instead:
from io import BufferedIOBase
mock_file = MagicMock(spec=BufferedIOBase)
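With a spec, the mock exposes only the attributes of the chosen base class, so misspelled or text-only attributes are caught early; a minimal sketch using Python 3's unittest.mock:

from io import BufferedIOBase
from unittest.mock import MagicMock

mock_file = MagicMock(spec=BufferedIOBase)
mock_file.read.return_value = b'data'
assert mock_file.read() == b'data'  # methods from the spec are available
try:
    mock_file.encoding  # not part of BufferedIOBase
except AttributeError:
    print("attributes outside the spec are rejected")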
I am trying to unit test a method that reads the lines from a file and processes them.
with open(file_name, 'r') as file_list:
    for line in file_list:
        # Do stuff
I tried several ways described in other questions, but none of them seems to work for this case. I don't quite understand how Python uses the file object as an iterable over the lines; does it internally use file_list.readlines()?
This way didn't work:
with mock.patch('[module_name].open') as mocked_open:  # also tried with __builtin__ instead of module_name
    mocked_open.return_value = 'line1\nline2'
I got an
AttributeError: __exit__
Maybe because the with statement needs this special method to close the file?
The following code makes file_list a MagicMock. How do I store data on this MagicMock so that I can iterate over it?
with mock.patch("__builtin__.open", mock.mock_open(read_data="data")) as mock_file:
The return value of mock_open (until Python 3.7.1) doesn't provide a working __iter__ method, which may make it unsuitable for testing code that iterates over an open file object.
Instead, I recommend refactoring your code to take an already opened file-like object. That is, instead of
def some_method(file_name):
    with open(file_name, 'r') as file_list:
        for line in file_list:
            # Do stuff
...
some_method(file_name)
write it as
def some_method(file_obj):
    for line in file_obj:
        # Do stuff
...
with open(file_name, 'r') as file_obj:
    some_method(file_obj)
This turns a function that has to perform IO into a pure(r) function that simply iterates over any file-like object. To test it, you don't need to mock open or hit the file system in any way; just create a StringIO object to use as the argument:
import StringIO  # io.StringIO in Python 3

def test_it(self):
    f = StringIO.StringIO("line1\nline2\n")
    some_method(f)
(If you still feel the need to write and test a wrapper like
def some_wrapper(file_name):
    with open(file_name, 'r') as file_obj:
        some_method(file_obj)
note that you don't need the mocked open to do anything in particular. You test some_method separately, so the only thing you need to do to test some_wrapper is verify that the return value of open is passed to some_method. open, in this case, can be a plain old mock with no special behavior.)
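A sketch of such a test, assuming some_wrapper and some_method live in a module named mymodule (a hypothetical name); create=True lets the patch apply even though mymodule never defines open as a module attribute:

import mock  # or: from unittest import mock
import mymodule

def test_wrapper(self):
    with mock.patch('mymodule.open', mock.mock_open(), create=True) as mocked_open, \
            mock.patch('mymodule.some_method') as mocked_method:
        mymodule.some_wrapper('some_file.txt')
        mocked_open.assert_called_once_with('some_file.txt', 'r')
        # All we verify is that the opened file object reached some_method.
        mocked_method.assert_called_once_with(mocked_open.return_value)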
I use the AWS boto3 library, which returns me an instance of urllib3.response.HTTPResponse. That response is a subclass of io.IOBase and hence behaves as a binary file. Its read() method returns bytes instances.
Now, I need to decode csv data from a file received in such a way. I want my code to work on both py2 and py3 with minimal code overhead, so I use backports.csv which relies on io.IOBase objects as input rather than on py2's file() objects.
The first problem is that HTTPResponse yields bytes data for the CSV file, while csv.reader expects str data.
>>> import io
>>> from backports import csv # actually try..catch statement here
>>> from mymodule import get_file
>>> f = get_file() # returns instance of urllib3.HTTPResponse
>>> r = csv.reader(f)
>>> list(r)
Error: iterator should return strings, not bytes (did you open the file in text mode?)
I tried to wrap HTTPResponse with io.TextIOWrapper and got the error 'HTTPResponse' object has no attribute 'read1'. This is expected because TextIOWrapper is intended to be used with BufferedIOBase objects, not IOBase objects. And it only happens on Python 2's implementation of TextIOWrapper, because it always expects the underlying object to have read1 (source), while Python 3's implementation checks for read1 existence and falls back to read gracefully (source).
>>> f = get_file()
>>> tw = io.TextIOWrapper(f)
>>> list(csv.reader(tw))
AttributeError: 'HTTPResponse' object has no attribute 'read1'
Then I tried to wrap HTTPResponse with io.BufferedReader and then with io.TextIOWrapper. And I got the following error:
>>> f = get_file()
>>> br = io.BufferedReader(f)
>>> tw = io.TextIOWrapper(br)
>>> list(csv.reader(tw))
ValueError: I/O operation on closed file.
After some investigation it turns out that the error only happens when the file doesn't end with \n. If it does end with \n then the problem does not happen and everything works fine.
There is some additional logic for closing underlying object in HTTPResponse (source) which is seemingly causing the problem.
The question is: how can I write my code to
work on both Python 2 and Python 3, preferably with no try..except or version-dependent branching;
properly handle CSV files represented as HTTPResponse regardless of whether they end with \n or not?
One possible solution would be to make a custom wrapper around TextIOWrapper which would make read() return b'' when the object is closed instead of raising ValueError. But is there any better solution, without such hacks?
Looks like this is an interface mismatch between urllib3.HTTPResponse and file objects. It is described in urllib3 issue #1305.
For now there is no fix, so I used the following wrapper code, which seemingly works fine:
class ResponseWrapper(io.IOBase):
    """
    This is the wrapper around urllib3.HTTPResponse
    to work around issue shazow/urllib3#1305.
    Here we decouple HTTPResponse's "closed" status from ours.
    """
    # FIXME drop this wrapper after shazow/urllib3#1305 is fixed

    def __init__(self, resp):
        self._resp = resp

    def close(self):
        self._resp.close()
        super(ResponseWrapper, self).close()

    def readable(self):
        return True

    def read(self, amt=None):
        # Once the underlying response has closed itself, report EOF
        # instead of raising ValueError.
        if self._resp.closed:
            return b''
        return self._resp.read(amt)

    def readinto(self, b):
        # Needed so that io.BufferedReader can treat this as a raw stream.
        val = self.read(len(b))
        if not val:
            return 0
        b[:len(val)] = val
        return len(val)
And use it as follows:
>>> f = get_file()
>>> r = csv.reader(io.TextIOWrapper(io.BufferedReader(ResponseWrapper(f))))
>>> list(r)
A similar fix was proposed by the urllib3 maintainers in the bug report comments, but it would be a breaking change, so for now things will probably not change and I have to use the wrapper (or do some monkey patching, which is probably worse).
I have written a small web application, and with each request I need to open and read a JSON file. I am using pickledb for this purpose.
What concerns me is that the library passes open() as a parameter to the json.load() function. So it got me thinking...
When I write code like this:
with open("filename.json", "rb") as json_data:
my_data = json.load(json_data)
or
json_data = open("filename.json", "rb")
my_data = json.load(json_data)
json_data.close()
I am pretty sure that the file handle is being closed.
But when I open it this way:
my_data = json.load(open("filename.json", "rb"))
The docs say that json.load() is expecting a .read()-supporting file-like object containing a JSON document.
So the question is, will this handle stay open and eat more memory with each request? Who is responsible for closing the handle in that case?
The file's close() method will be called when the object is destroyed, since json.load() expects only a read() method on the input object.
What happens then depends on the garbage collection implementation. You can read more in Is explicitly closing files important?
Generally speaking it's a good practice to take care of closing the file.
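For example, on CPython (where objects are reference-counted) you can watch the handle close as soon as the last reference disappears; a minimal sketch:

import json

f = open("filename.json", "rb")
my_data = json.load(f)
print(f.closed)  # False: json.load() does not close the file
del f            # on CPython the refcount drops to zero and the file is closed here

On other implementations the close may be deferred until the garbage collector runs, which is why closing explicitly is the safer habit.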
I tried faking a file-like object with read() and close() methods and sticking it into json.load(). Then I observed that close() is never called on it. Hence, I would recommend closing the file object explicitly. In any case, the docs say that the loading method expects a read() method, but they do not say it expects a close() method on the object.
In test.json:
{ "test":0 }
In test.py:
import json

class myf:
    def __init__(self):
        self.f = None

    @staticmethod
    def open(path, mode):
        obj = myf()
        obj.f = open(path, mode)  # the built-in open, not myf.open
        return obj

    def read(self):
        print("READING")
        return self.f.read()

    def close(self):
        print("CLOSING")
        return self.f.close()

def mytest():
    s = json.load(myf.open("test.json", "r"))
    print(s)

mytest()
print("DONE")
Output:
$> python test.py
READING
{u'test': 0}
DONE
$>
It used to be in Python (2.6) that one could ask:
isinstance(f, file)
but in Python 3.0 file was removed.
What is the proper method for checking to see if a variable is a file now? The What's New docs don't mention this...
def read_a_file(f):
    try:
        contents = f.read()
    except AttributeError:
        # f is not a file
        ...
Substitute whatever methods you plan to use for read. This is optimal if you expect to be passed a file-like object more than 98% of the time. If you expect to be passed a non-file-like object more than 2% of the time, then the correct thing to do is:
def read_a_file(f):
    if hasattr(f, 'read'):
        contents = f.read()
    else:
        # f is not a file
        ...
This is exactly what you would do if you did have access to a file class to test against. (And FWIW, I too have file on 2.6.) Note that this code works in 3.x as well.
In Python 3 you can refer to io instead of file and write
import io
isinstance(f, io.IOBase)
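This matches real files as well as any other stream from the io hierarchy, including in-memory ones; a quick demonstration:

import io

with open(__file__) as f:
    print(isinstance(f, io.IOBase))             # True: real files are io objects
print(isinstance(io.StringIO('x'), io.IOBase))  # True: in-memory streams match too
print(isinstance('just a string', io.IOBase))   # False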
Typically, you don't need to check an object type, you could use duck-typing instead i.e., just call f.read() directly and allow the possible exceptions to propagate -- it is either a bug in your code or a bug in the caller code e.g., json.load() raises AttributeError if you give it an object that has no read attribute.
If you need to distinguish between several acceptable input types; you could use hasattr/getattr:
def read(file_or_filename):
    readfile = getattr(file_or_filename, 'read', None)
    if readfile is not None:  # got file
        return readfile()
    with open(file_or_filename) as file:  # got filename
        return file.read()
If you want to support a case when file_or_filename may have a read attribute that is set to None then you could use try/except over file_or_filename.read -- note: no parens, the call is not made -- e.g., ElementTree._get_writer(); see the sketch below.
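A sketch of that variant (the same dispatch, but distinguishing "no read attribute at all" from "read exists but is None"):

def read(file_or_filename):
    try:
        readfile = file_or_filename.read  # no parens: just look the attribute up
    except AttributeError:
        # no read attribute at all: treat the argument as a filename
        with open(file_or_filename) as file:
            return file.read()
    return readfile()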
If you want to check certain guarantees e.g., that only one single system call is made (io.RawIOBase.read(n) for n > 0) or there are no short writes (io.BufferedIOBase.write()) or whether read/write methods accept text data (io.TextIOBase) then you could use isinstance() function with ABCs defined in io module e.g., look at how saxutils._gettextwriter() is implemented.
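For example, to decide whether a destination accepts text or needs encoded bytes, along the lines of saxutils._gettextwriter (a minimal sketch; write_text and the utf-8 choice are assumptions for illustration):

import io

def write_text(dest, text):
    if isinstance(dest, io.TextIOBase):
        dest.write(text)                   # the stream accepts str directly
    else:
        dest.write(text.encode('utf-8'))   # assume a binary stream

write_text(io.StringIO(), 'hello')
write_text(io.BytesIO(), 'hello')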
Works for me on python 2.6... Are you in a strange environment where builtins aren't imported by default, or where somebody has done del file, or something?