I've got two files:
REF_FILE : it's a file with changing data
TEST_FILE: it's a file with fixed data (It's simply a REF_FILE at a given moment)
Now I want to test this function:
def get_info_from_extract(mpm):
    fid = open(REF_FILE)
    all_infos = json.load(fid)
    fid.close()
    for m in all_infos:
        if m['mpm_id'] == mpm:
            break
    return m
class Test_info_magento(unittest.TestCase):
    def test_should_have_value(self):
        # GIVEN
        mpm = 107
        expected_value = 1.345
        # WHEN
        # MOCK OPEN FUNCTION TO READ TEST_FILE
        m = file_info.get_info_from_extract(mpm)
        # THEN
        self.assertEqual(m['value'], expected_value)
The problem is that REF_FILE changes often, so I can't test against it reliably. I need to use TEST_FILE instead, and for that I need to mock the open function. I can't figure out how to mock it and would like some help making it return my TEST_FILE.
I would recommend rewriting the function so it accepts a file-like object (it would be easier to test and maintain).
However, if you cannot, try this context manager:
class MockOpen(object):
    def __call__(self, *args, **kwargs):
        # print('mocked')
        return self.__open(TEST_FILE)  # it would be better to return a file-like object instead

    def __enter__(self):
        global open
        self.__open = open
        open = self

    def __exit__(self, exception_type, exception_value, traceback):
        global open
        open = self.__open
with MockOpen():
    # here you run your test
    ...
The context manager replaces (within the with block) the built-in function referenced by the global name open with itself. Every call to open() in the body of the with block is then a call to the __call__() method, which ignores all of its arguments and returns the opened TEST_FILE.
It is not the best possible implementation, since:
it uses an actual file, slowing down your tests - a file-like object should be returned instead,
it is not configurable - a file name (or content) should be given to its constructor.
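For reference, here is a minimal sketch of the refactor recommended at the top of this answer; get_info_from_extract, REF_FILE, TEST_FILE and mpm_id come from the question, the rest is illustrative:

import json

def get_info_from_extract(mpm, fid):
    # look up the entry with the given mpm_id in an already-open JSON file
    all_infos = json.load(fid)
    for m in all_infos:
        if m['mpm_id'] == mpm:
            return m
    return None

# production code opens REF_FILE; the test simply opens TEST_FILE instead:
# with open(TEST_FILE) as fid:
#     m = get_info_from_extract(107, fid)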
The following code uses a context manager to store and load variables into files.
However, it's very annoying to have to set the value property of what is yielded by the context manager (loaded.value).
I would like to
Not have to define a new class like LoadedValue
Set the yielded value of the context manager (loaded) to be the value that is saved.
Solving either of these issues would be appreciated.
import os
import pickle
from contextlib import contextmanager


class LoadedValue:
    def __init__(self, value):
        self.value = value

    def __str__(self):
        return "<LoadedValue: {}>".format(self.value)


@contextmanager
def load_manager(load_file="file.pkl"):
    with open(load_file, "rb") as f:
        loaded_object = LoadedValue(pickle.load(f))
    try:
        yield loaded_object
    finally:
        with open(load_file, "wb+") as f:
            pickle.dump(loaded_object.value, f)


if __name__ == "__main__":
    filename = "test.pkl"

    with open(filename, "wb+") as f:
        pickle.dump(7, f)

    with load_manager(filename) as loaded:
        print(loaded)     # >>> <LoadedValue: 7>
        loaded.value = 5  # this is what I have to do
        # loaded = 5      # this is what I want to do

    with load_manager(filename) as loaded:
        print(loaded)     # >>> <LoadedValue: 5>
Note: This was originally posted on CodeReview, but I have decided to repost it here in order to get answers, and leave it on CodeReview to help improve the code in other ways.
No, there is no way to override the assignment operator in Python, so you cannot do loaded = 5.
(You could override other things so it might sort of work:)
override __call__ to allow loaded(5)
override __lshift__ to allow loaded << 5
override __ior__ to allow loaded |= 5
(However, be forewarned: your coworkers, or future coworkers, may never forgive you. A sketch of one of these options follows below.)
Also, names assigned within the scope of the with XXXX as Y: block are not (typically) accessible to the method that is yielding the context, unless that scope was already accessible to the code yielding it (i.e. the global namespace, etc.).
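For instance, a minimal sketch of the __ior__ option mentioned above (LoadedProxy is a made-up name for illustration, not part of the question's code):

class LoadedProxy:
    # toy wrapper that lets `loaded |= new_value` replace the stored value
    def __init__(self, value):
        self.value = value

    def __ior__(self, new_value):
        # `loaded |= 5` calls this and rebinds `loaded` to the return value,
        # so returning self keeps the same proxy with the new value inside
        self.value = new_value
        return self

    def __str__(self):
        return "<LoadedProxy: {}>".format(self.value)


loaded = LoadedProxy(7)
loaded |= 5          # replaces the wrapped value
print(loaded)        # <LoadedProxy: 5>
print(loaded.value)  # 5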
My question is, how can I execute any context manager without using with?
Python has the idea of context managers,
instead of
file = open('some_file', 'w')
try:
    file.write('Hola!')
finally:
    file.close()
# end try
you can write
with open('some_file', 'w') as opened_file:
    opened_file.write('Hola!')
# end with
While in most cases the second one is the golden solution, for the specific cases of testing in unit tests as well as exploring in the interactive console, the first one can be better, as you can write it line by line.
>>> file = open('some_file', 'w')
>>> file.write('Hola!')
>>> file.close()
My question is, how can I execute any context manager like this, in a way suited for exploring?
My actual use case follows below, but please try to give an answer which is generic and will work for other context managers too.
import flask

app = flask.Flask(__name__)

with app.test_request_context('/?name=Peter'):
    assert flask.request.path == '/'
    assert flask.request.args['name'] == 'Peter'
(from the Flask docs)
You can still use the with syntax in the interactive console. However, a context manager is based on two magic methods, __enter__ and __exit__, so you can just call them yourself:
class MyCtx(object):
    def __init__(self, f):
        self.f = f

    def __enter__(self):
        print("Enter")
        return self.f

    def __exit__(self, *args, **kwargs):
        print("Exit")


def foo():
    print("Hello")
usually you do:
with MyCtx(foo) as f:
    f()
Same as:
ctx = MyCtx(foo)
f = ctx.__enter__()
f()
ctx.__exit__()
Remember that a context manager's __exit__ method is used for handling errors raised within the context, so most implementations have the signature __exit__(exception_type, exception_value, traceback). If you don't need to handle that in your tests, just pass None values:
__exit__(None, None, None)
You can assign app.test_request_context('/?name=Peter') to a variable (e.g. ctx), then call ctx.__enter__() to enter the context manager, and ctx.__exit__(None, None, None) to perform the cleanup. Note that you lose the safety guarantees of context managers unless you put the ctx.__exit__ call in a finally clause.
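A minimal sketch of that, reusing the Flask snippet from the question:

import flask

app = flask.Flask(__name__)

ctx = app.test_request_context('/?name=Peter')
ctx.__enter__()          # enter the context manager manually
try:
    assert flask.request.path == '/'
    assert flask.request.args['name'] == 'Peter'
finally:
    ctx.__exit__(None, None, None)   # always perform the cleanup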
import praw
import time


class getPms():
    r = praw.Reddit(user_agent="Test Bot By /u/TheC4T")
    r.login(username='*************', password='***************')
    cache = []
    inboxMessage = []
    file = 'cache.txt'

    def __init__(self):
        cache = self.cacheRead(self, self.file)
        self.bot_run(self)
        self.cacheSave(self, self.file)
        time.sleep(5)
        return self.inboxMessage

    def getPms(self):
        def bot_run():
            inbox = self.r.get_inbox(limit=25)
            print(self.cache)
            # print(r.get_friends())  # this works
            for message in inbox:
                if message.id not in self.cache:
                    # print(message.id)
                    print(message.body)
                    # print(message.subject)
                    self.cache.append(message.id)
                    self.inboxMessage.append(message.body)
                # else:
                #     print("no messages")

        def cacheSave(self, file):
            with open(file, 'w') as f:
                for s in self.cache:
                    f.write(s + '\n')

        def cacheRead(self, file):
            with open(file, 'r') as f:
                cache1 = [line.rstrip('\n') for line in f]
            return cache1

        # while True:  # threading is needed in order to run this as a loop.
        #              # Probably gonna do this in the main method though
        # def getInbox(self):
        #     return self.inboxMessage
The exception is:
cache = self.cacheRead(self, self.file)
AttributeError: 'getPms' object has no attribute 'cacheRead'
I am new to working with classes in Python and need help with what I am doing wrong here; if you need any more information I can add some. It worked when it was all functions, but now that I have attempted to switch it to a class it has stopped working.
Your cacheRead function (as well as bot_run and cacheSave) is indented too far, so it's defined in the body of your other function getPms. Thus it is only accessible inside of getPms. But you're trying to call it from __init__.
I'm not sure what you're trying to achieve here because getPms doesn't have anything else in it but three function definitions. As far as I can tell you should just take out the def getPms line and unindent the three functions it contains so they line up with the __init__ method.
Here are a few points:
Unless you're explicitly inheriting from some specific class, you can omit the parentheses:
class A(object):, class A():, and class A: are equivalent.
Your class and one of its methods have the same name. I'm not sure whether Python gets confused by this, but you probably will. You could name your class PMS and your method get, for example, so you'd call PMS.get(...).
With the present indentation, the cacheRead and cacheSave functions are simply inaccessible from __init__; why not move them up to the class namespace?
When calling member functions, you don't pass self explicitly, since you're already calling the function on the object. So instead of cache = self.cacheRead(self, self.file) you write cache = self.cacheRead(self.file) (see the sketch below).
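Putting these points together, a rough sketch of the restructured class might look like this (the praw calls are copied from the question and not verified here; the driving code is moved out of __init__, which cannot return a value):

import praw
import time


class PMS:
    def __init__(self):
        self.r = praw.Reddit(user_agent="Test Bot By /u/TheC4T")
        self.r.login(username='*************', password='***************')
        self.file = 'cache.txt'
        self.cache = self.cacheRead(self.file)   # no explicit self argument
        self.inboxMessage = []

    def bot_run(self):
        inbox = self.r.get_inbox(limit=25)
        for message in inbox:
            if message.id not in self.cache:
                self.cache.append(message.id)
                self.inboxMessage.append(message.body)

    def cacheSave(self, file):
        with open(file, 'w') as f:
            for s in self.cache:
                f.write(s + '\n')

    def cacheRead(self, file):
        with open(file, 'r') as f:
            return [line.rstrip('\n') for line in f]


# usage: build the object, then drive it from outside instead of from __init__
pms = PMS()
pms.bot_run()
pms.cacheSave(pms.file)
time.sleep(5)
print(pms.inboxMessage)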
I have the following problem: my application receives various kinds of archive files, e.g. rar, zip, 7z, and I have different processors to extract and save them locally.
Now everything looks this way:
if extension == 'zip':
    archive = zipfile.ZipFile(file_contents)
    file_name = archive.namelist()[0]
    file_contents = ContentFile(archive.read(file_name))
elif extension == '7z':
    archive = py7zlib.Archive7z(file_contents)
    file_name = archive.getnames()[0]
    file_contents = ContentFile(
        archive.getmember(file_name).read())
elif extension == '...':
And I want to switch to a more object-oriented approach, with one main Processor class and subclasses responsible for specific archive types.
E.g. I was thinking about:
class Processor(object):
    def __init__(self, filename, contents):
        self.filename = filename
        self.contents = contents

    def get_extension(self):
        return self.filename.split(".")[-1]

    def process(self):
        raise NotImplementedError("Need to implement something here")


class ZipProcessor(Processor):
    def process(self):
        archive = zipfile.ZipFile(self.contents)
        file_name = archive.namelist()[0]
        file_contents = ContentFile(archive.read(file_name))
etc
But I am not sure that this is the correct way. For example, I can't figure out how to pick the needed processor based on the file extension with this approach.
A rule of thumb is that if you have a class with two methods, one of which is __init__(), then it's not a class but a function in disguise.
Writing classes is overkill in this case, because you still have to use the correct class manually.
Since the handling of all kinds of archives will be subtly different, wrap each in a function;
def handle_zip(name):
    print(name, 'is a zip file')
    return 'zip'


def handle_7z(name):
    print(name, 'is a 7z file')
    return '7z'
Et cetera. Since functions are first-class objects in Python, you can use a dictionary, keyed by extension, to call the right function;
import os.path

filename = 'foo.zip'
dispatch = {'.zip': handle_zip, '.7z': handle_7z}

_, extension = os.path.splitext(filename)
try:
    rv = dispatch[extension](filename)
except KeyError:
    print('Unknown extension', extension)
    rv = None
It is important to handle the KeyError here, since dispatch doesn't contain all possible extensions.
An idea that might make sense before (or instead of) writing a custom class to perform your operations generally is making sure you offer a consistent interface to archives - wrapping zipfile.ZipFile and py7zlib.Archive7z into classes with, for example, a getfilenames method.
This approach ensures that you don't repeat yourself, without needing to "hide" your operations in a class if you don't want to.
You may want to use an abc (abstract base class) as a base class, to make things extra clear.
Then, you can simply:
archive_extractors= {'zip':MyZipExtractor, '7z':My7zExtractor}
extractor= archive_extractors[extension]
file_name = extractor.getfilenames()[0]
#...
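For illustration, wrappers along those lines might look like this (MyZipExtractor and My7zExtractor are the hypothetical names used above, not an existing API; the underlying calls are the same ones used in the question):

import zipfile
import py7zlib  # as imported in the question


class MyZipExtractor:
    def __init__(self, file_contents):
        self.archive = zipfile.ZipFile(file_contents)

    def getfilenames(self):
        return self.archive.namelist()

    def read(self, name):
        return self.archive.read(name)


class My7zExtractor:
    def __init__(self, file_contents):
        self.archive = py7zlib.Archive7z(file_contents)

    def getfilenames(self):
        return self.archive.getnames()

    def read(self, name):
        return self.archive.getmember(name).read()


# with these wrappers, the dispatch above needs one extra step to build an instance:
# extractor = archive_extractors[extension](file_contents)
# file_name = extractor.getfilenames()[0]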
If you want to stick to OOP, you could give Processor a static method to decide whether a class can handle a certain file, and implement it in every subclass. Then, if you need to unpack a file, use the base class's __subclasses__() method to iterate over the subclasses and create an instance of the appropriate one:
class Processor(object):
    @staticmethod
    def is_appropriate_for(name):
        raise NotImplementedError()

    def process(self, name):
        raise NotImplementedError()


class ZipProcessor(Processor):
    @staticmethod
    def is_appropriate_for(name):
        return name[-4:] == ".zip"

    def process(self, name):
        print(".. handling ", name)


name = "test.zip"
handler = None
for cls in Processor.__subclasses__():
    if cls.is_appropriate_for(name):
        handler = cls()
print(name, "handled by", handler)
I love the argparse module. argparse.FileType is helpful too, unless you want the default to be something other than sys.std*, since the default output file is created even if you supply a value.
For example:
parser.add_argument('--outfile', type=FileType('w'), default="out.txt")
will create out.txt even if you specify a file with --outfile.
The best I can come up with is:
class MagicFileType(object):
    def __init__(self, *args, **kwargs):
        # save args/kwargs and set filetype to None
        self.filetype = None
        self.args = args
        self.kwargs = kwargs

    def __getattr__(self, attr):
        """Delegate everything to the filetype."""
        # If we haven't created it, now is the time to do so
        if self.filetype is None:
            self.filetype = FileType(*self.args, **self.kwargs)
            self.filetype = self.filetype(self.filename)
        return getattr(self.filetype, attr)

    def __call__(self, filename):
        """Just cache the filename."""
        # This is called when the default is created.
        # Just cache the filename for now.
        self.filename = filename
        return self
But it feels like this should be easier; am I missing something?
There was a relatively recent change in argparse, http://bugs.python.org/issue12776 (Aug 2012), that delays the evaluation of the default value. Originally a string default would be passed through type (via _get_value) at the start of parsing, resulting in the opening (and creation) of a FileType file whether it was needed or not. With this patch, the string is written to the Namespace but not evaluated until the end of parsing, when it can be determined whether another value was provided. Basically, this line was moved from early in parse_known_args to the end of _parse_known_args:
default = self._get_value(action, action.default)
In http://bugs.python.org/issue13824 I proposed a patch that provides a FileContext type. Its main difference from FileType is that it wraps the open(file...) in a partial. That way the file isn't opened (or created) until actually used in a with args.output() as f: context.
That patch deals with some other things like testing whether the file can be created or not (using os.access) and wrapping stdin/out in a dummy context so it does not try to close it.
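A simplified, untested sketch of that idea (this is not the actual patch from the issue, just an illustration of wrapping open in a partial; delayed_file is a made-up helper name):

import argparse
from functools import partial


def delayed_file(mode='r'):
    # return an argparse `type` callable that defers opening the file
    def to_opener(filename):
        # nothing is opened or created here; the caller decides when
        return partial(open, filename, mode)
    return to_opener


parser = argparse.ArgumentParser()
parser.add_argument('--outfile', type=delayed_file('w'), default='out.txt')
args = parser.parse_args()

# the file is only created when the opener is actually called:
with args.outfile() as f:
    f.write('hello\n')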
Without testing, you could modify FileType like this:
class FileOpener(argparse.FileType):
    # delayed FileType;
    # sample use:
    # with args.input.open() as f: f.read()

    def __call__(self, string):
        # optionally test string
        self.filename = string
        return self

    def open(self):
        return super(FileOpener, self).__call__(self.filename)

    file = property(open, None, None, 'open file property')
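An untested usage sketch for the FileOpener above, adapted to the --outfile example from the question:

import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--outfile', type=FileOpener('w'), default='out.txt')
args = parser.parse_args()

# no file has been created yet; it is only opened when .open() is called
with args.outfile.open() as f:
    f.write('hello\n')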