Python's unittest doesn't initialize logger as expected

My unittest module breaks when testing my main file because my main file references a logger that was not initialized.
We have the following simple example.
logger_main.py:
import logging

def some_func():
    logger.info(__name__ + " started ")
    # Do something
    logger.info(__name__ + " finished ")
    return 1

if __name__ == '__main__':
    logging.basicConfig(level=logging.INFO)
    logger = logging.getLogger(__name__)
    some_func()
logger_main_tests.py:
import unittest
import logging
from logger_main import some_func

class Test(unittest.TestCase):
    def setUp(self):
        logging.basicConfig(level=logging.DEBUG)
        logger = logging.getLogger(__name__)

    def testName(self):
        self.assertEqual(some_func(), 1)

if __name__ == "__main__":
    unittest.main()
logger_main.py runs as expected; however, logger_main_tests.py gives the following error.
Finding files... done.
Importing test modules ... done.
======================================================================
ERROR: testName (logger_main_tests.Test)
----------------------------------------------------------------------
Traceback (most recent call last):
File "C:\workspaces\PythonPlayground\DataStoreHierarchy\logger_main_tests.py", line 11, in testName
self.assertEqual(some_func(), 1)
File "C:\workspaces\PythonPlayground\DataStoreHierarchy\logger_main.py", line 4, in some_func
logger.info(__name__ + " started ")
NameError: name 'logger' is not defined
----------------------------------------------------------------------
Ran 1 test in 0.003s
FAILED (errors=1)
The error makes sense, since some_func() is trying to use a logger that doesn't exist in its scope. I would like to figure out how to set up my unit tests with a logger (set at the DEBUG level) so that any logger.info or logger.debug statement inside my functions in logger_main.py (such as some_func()) is printed at the appropriate level.

Move this line outside of your main block:
logger = logging.getLogger(__name__)
The main block should define where the logs go, but it should not be used to define the logger. Your module should declare the logger in its global scope.
You can (and should) define as many loggers as you need; it is common to have one per file or class, declared right after your imports so it is available anywhere in the code that follows.
import logging

logger = logging.getLogger(__name__)

def some_func():
    ...

This is because the logger is only being defined when __name__ == "__main__".
Generally you'll define one logger per file, something like:
logger = logging.getLogger(__name__)
At the top of the file, after the imports.
Also, notice that the logger defined in Test.setUp is local to the function, so it won't do anything.
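Putting both answers together, a sketch of how logger_main.py could be rearranged (the exact layout is my assumption, not the asker's final code):
# logger_main.py, restructured
import logging

logger = logging.getLogger(__name__)  # module-level logger, visible to every function below

def some_func():
    logger.info(__name__ + " started ")
    # Do something
    logger.info(__name__ + " finished ")
    return 1

if __name__ == '__main__':
    logging.basicConfig(level=logging.INFO)  # configure handlers only when run as a script
    some_func()
With this layout, the logging.basicConfig(level=logging.DEBUG) call in the test's setUp configures the root logger, the module-level logger propagates to it, and the INFO messages from some_func() show up when the test runs.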

Related

How to mock a function of a class in an imported module but outside the scope of a function?

I am trying to figure out how to use @patch.object to mock __init__ and log.write() for the Logger class, which is imported but isn't used inside a function of the module. Tutorials such as https://www.pythontutorial.net/python-unit-testing/python-patch/ point out that the patching needs to happen at the target where it is used, not where it comes from. However, every example shows the target being mocked inside another function.
In the use case below, the logger is imported and used to write a log outside the scope of any function. Is there a way to mock the behavior both in main.py and routes.py?
src/apis/main.py
from utils.log import Logger
from routes import route

log = Logger(name="logger-1")
log.write("logger started")

def main():
    log = Logger(name="logger-1")
    log.write("inside main")
    route()

if __name__ == "__main__":
    import logging
    logging.basicConfig(level=logging.INFO)  # for demo
    main()
In src/apis/routers/routes.py
from utils.log import Logger

log = Logger(name="logger-1")
log.write(message=f"Inside route")

def route():
    log.write(message=f"Logging done.")
In utils/log/logging.py
import logging

class Logger:
    def __init__(self, name):
        ...  # needs to be mocked

    def write(self, message):
        ...  # needs to be mocked to return None
When asking a question, it is very helpful to provide a Minimal Reproducible Example. So let's remove the unnecessary fastapi and starlette parts and provide the code for the test you are trying to write.
Here it is:
# file: so74695297_main.py
from so74695297_log import Logger
from so74695297_routes import my_route

log = Logger(name="logger-1")
log.write("logger started")

def main():  # FAKE IMPL
    log.write(message=f"in main()")
    my_route()

if __name__ == "__main__":
    import logging
    logging.basicConfig(level=logging.INFO)  # for demo
    main()

# file: so74695297_routes.py
from so74695297_log import Logger

log = Logger(name="logger-1")

def my_route():  # FAKE IMPL
    log.write(message=f"route")

# file: so74695297_log.py
import logging

class Logger:
    def __init__(self, name):
        self._logger = logging.getLogger(name)  # FAKE IMPL

    def write(self, message):
        self._logger.info(message)  # FAKE IMPL
When run (the main.py file does something):
INFO:logger-1:in main()
INFO:logger-1:route
Which is the expected output when the loggers don't have any formatter.
Then add a test:
# file: so74695297_test.py
import unittest
import unittest.mock as mock

from so74695297_routes import my_route

class TestMyRoute(unittest.TestCase):
    def test__my_route_write_a_log(self):
        spy_logger = mock.Mock()
        with mock.patch("so74695297_log.Logger", new=spy_logger):
            my_route()
        assert spy_logger.assert_called()

if __name__ == "__main__":
    unittest.main()  # for demo
Ran 1 test in 0.010s
FAILED (failures=1)
Failure
Traceback (most recent call last):
File "/home/stack_overflow/so74695297_test.py", line 12, in test__my_route_write_a_log
assert spy_logger.assert_called()
File "/usr/lib/python3.8/unittest/mock.py", line 882, in assert_called
raise AssertionError(msg)
AssertionError: Expected 'mock' to have been called.
Now we have something to work with!
As @MrBeanBremen indicated, the fact that your logger is configured at import time (even when not being the "main" module) complicates things.
The problem is that, by the time the mock.patch line runs, the modules have already been imported and have created their Logger. What we can do instead is mock the Logger.write method:
def test__my_route_writes_a_log(self):
    with mock.patch("so74695297_log.Logger.write") as spy__Logger_write:
        my_route()
    spy__Logger_write.assert_called_once_with(message="route")
Ran 1 test in 0.001s
OK
If you prefer using the decorator form:
@mock.patch("so74695297_log.Logger.write")
def test__my_route_writes_a_log(self, spy__Logger_write):
    my_route()
    spy__Logger_write.assert_called_once_with(message="route")
Because we mocked the class's method, each Logger instance has a mock version of write:
#                       vvvv
@mock.patch("so74695297_main.Logger.write")
def test__main_writes_a_log(self, spy__Logger_write):
    main()
    # assert False, spy__Logger_write.mock_calls
    spy__Logger_write.assert_any_call(message="in main()")
In the end, main.Logger.write is essentially the same thing as routes.Logger.write and log.Logger.write: just a reference to the same method object. Mock it through one of them and it is mocked for all the others too.
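Since the original question mentioned @patch.object, here is an equivalent sketch using mock.patch.object on the class itself (same so74695297_* modules as above; this is just another spelling of the same patch, not the answerer's code):
import unittest
import unittest.mock as mock

from so74695297_log import Logger
from so74695297_routes import my_route

class TestMyRouteWithPatchObject(unittest.TestCase):
    # patch the write attribute on the Logger class; every instance sees the mock
    @mock.patch.object(Logger, "write")
    def test__my_route_writes_a_log(self, spy__Logger_write):
        my_route()
        spy__Logger_write.assert_called_once_with(message="route")

if __name__ == "__main__":
    unittest.main()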

logging to different files in different module

I'm trying to write a few Python modules and I need to log the output of different modules to different files.
module_a.py
import logging

logging.basicConfig(level=logging.INFO, filename="a.log")
logger = logging.getLogger(__name__)

def function_a():
    logger.info("this is function a")

if __name__ == "__main__":
    logger.info("this is module a")
    function_a()
module_b.py
import logging
import module_a

logging.basicConfig(level=logging.INFO, filename="b.log")
logger = logging.getLogger(__name__)

def function_b():
    module_a.function_a()
    logger.info("this is function b")

if __name__ == "__main__":
    logger.info("this is module b")
    function_b()
What I want is for module_b.py to always log to b.log and module_a.py to always log to a.log, no matter whether a module is run directly or imported and then called.
The issue I have now is:
If I run
python3 module_a.py
everything is fine.
If I run
python3 module_b.py
then all logs end up in a.log. I understand that when module_a is imported, its logging configuration runs first and wins. So how can I make sure these two modules always log to the right files?
Thanks
Normally, logging rules are decided by the top-level script. Imported modules just log without worrying about where the logs go. You want something different, so stay clear of basicConfig, which is just a helper, and have each module set up its own logger.
In this example, I pulled the setup into a function. It's an easy way to get rid of temporary variables like handler. You may want to add a formatter while you are at it. The function is the same in both scripts; you may want to pull it into a single common utilities module, or maybe there is a reason to do things differently in different modules. There are so many ways to set up loggers that the function is really just a rough guess of what you'll do in your project. A sketch of the shared-module variant follows after the two examples.
module_a
import logging

def make_logger(name):
    handler = logging.FileHandler(filename=name + ".log")
    logger = logging.getLogger(name)
    logger.addHandler(handler)
    logger.setLevel(logging.INFO)
    return logger

logger = make_logger("a")

def function_a():
    logger.info("this is function a")

if __name__ == "__main__":
    logger.info("this is module a")
    function_a()
module_b
import logging
import module_a

def make_logger(name):
    handler = logging.FileHandler(filename=name + ".log")
    logger = logging.getLogger(name)
    logger.addHandler(handler)
    logger.setLevel(logging.INFO)
    return logger

logger = make_logger("b")

def function_b():
    module_a.function_a()
    logger.info("this is function b")

if __name__ == "__main__":
    logger.info("this is module b")
    function_b()
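As mentioned above, the helper could also live in one shared module and gain a formatter; a rough sketch, with the log_utils module name being my own invention:
# log_utils.py (hypothetical shared module)
import logging

def make_logger(name):
    handler = logging.FileHandler(filename=name + ".log")
    handler.setFormatter(logging.Formatter("%(asctime)s %(name)s %(levelname)s %(message)s"))
    logger = logging.getLogger(name)
    logger.addHandler(handler)
    logger.setLevel(logging.INFO)
    return logger

# module_a.py / module_b.py would then start with:
# from log_utils import make_logger
# logger = make_logger("a")   # or "b"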

How to make python module logger work with any main logger?

I've developed a module that I would like to be imported into any script and automatically have the module's logger sit below the logger established in the main script in the logging hierarchy, no matter what the main logger is called (root, main, etc.)
i.e.
a.py
import logging
import module

log = logging.getLogger()
log.info("Test Main")
module.test()
b.py
import logging
import module

log = logging.getLogger('main')
log.info("Test Main")
module.test()
module.py
import logging

mod_log = logging.getLogger(__name__)

def test():
    mod_log.info("Test Mod")
If the scripts ran successfully, I would expect the following output, but I just can't seem to get it to work.
a.py
ROOT - Test Main
ROOT.module - Test Mod
b.py
main - Test Main
main.module - Test Mod
This is not possible because there is no way to tell where a logger was defined and which one the "main" logger is supposed to be. The main script could define any number of loggers, including zero. The only logger that is guaranteed to exist is the root logger.
If you are fine with just taking a guess you could make your logger a child of whatever is the first logger that was created.
module.py
import logging

logger_name = __name__
if logging.root.manager.loggerDict:
    parent = next(iter(logging.root.manager.loggerDict))
    logger_name = parent + '.' + logger_name

mod_log = logging.getLogger(logger_name)
Implement a Singleton design pattern for the Logger class, whose basic job is to add a formatter, set the logging level and filtering, and finally return an instance of the logger.
Then use this to create an object:
logger = Logger.__call__().get_logger()
print(Logger())
Following the above approach will create one and only one logger instance; no matter where you create a Logger object, you get back the same instance if it was created before. One possible implementation is sketched below.
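The answer above doesn't show the class itself; a minimal sketch of one possible singleton Logger wrapper (the class body and the "app" logger name are assumptions, not the answerer's code):
import logging

class Logger:
    _instance = None  # the single shared instance

    def __new__(cls):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            log = logging.getLogger("app")
            log.setLevel(logging.INFO)
            handler = logging.StreamHandler()
            handler.setFormatter(logging.Formatter("%(name)s %(levelname)s %(message)s"))
            log.addHandler(handler)
            cls._instance._logger = log
        return cls._instance

    def get_logger(self):
        return self._logger

# Both of these return the very same underlying logger object:
logger = Logger().get_logger()
also_logger = Logger.__call__().get_logger()
assert logger is also_logger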

Import modules with proper initialization in Python

I have a file test.py:
import logging
def func():
logger.info('some info')
if __name__ == '__main__':
logger = logging.getLogger()
func()
This runs well. But when I do from test import func in another file and call func(), it gives me a "global name 'logger' is not defined" error.
from test import func

def another_fun():
    func()
Even if I put logger = logging.getLogger() in another_fun() before calling func(), it still gives the same error. I'm wondering, in situations like this, what the standard way is to import functions from other files with proper initialization. Thanks a lot.
#!/usr/bin/env python3
import logging

logger = logging.getLogger("funclogger")  # all modules importing this get access to the funclogger log object

def func():
    logger.info('some info')

if __name__ == '__main__':
    func()
Other process calling another_func():
#!/usr/bin/env python3
import logging

logger = logging.getLogger("funclogger")  # may not be needed if this module doesn't log

from test import func

def another_fun():
    func()
It's as @Kevin mentioned.
another_fun() calls the method in test.py, but since your logger variable is only initialized when you run test.py as a script (that's the __main__ portion), it is not initialized when func() is called, so the name cannot be resolved. In other words, when another_fun() calls func(), no logger object exists, because importing test.py does not execute its __main__ block.
Don't initialize things in if __name__ == '__main__'. That's for the code that will run when your file is executed as a script. Ideally, it should consist of a single function call and nothing else.
You can't define globals in a different module, either. Each module has its own separate set of globals. You have to initialize logger in the test.py module.
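A tiny illustration of that last point, assuming the same test.py as above (caller.py is a made-up file name):
# caller.py (hypothetical)
import logging
import test

logger = logging.getLogger()       # creates caller.py's own global, not test.py's
# test.func()                      # would still raise NameError: name 'logger' is not defined

test.logger = logging.getLogger()  # this does set the global inside test.py's namespace
test.func()                        # no NameError any more; output appears once logging is configured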

How to use python logging in multiple modules

I was wondering what the standard set up is for performing logging from within a Python app.
I am using the Logging class, and I've written my own logger class that instantiates the Logging class. My main then instantiates my logger wrapper class. However, my main also instantiates other classes, and I want those other classes to be able to write to the log file via the logger object in main.
How do I make that logger object such that it can be called by other classes? It's almost like we need some sort of static logger object to get this to work.
I guess the long and short of the question is: how do you implement logging within your code structure such that all classes instantiated from within main can write to the same log file? Do I just have to create a new logging object in each of the classes that points to the same file?
I don't know what you mean by the Logging class - there's no such class in Python's built-in logging. You don't really need wrappers: here's an example of how to do logging from arbitrary classes that you write:
import logging

# This class could be imported from a utility module
class LogMixin(object):
    @property
    def logger(self):
        name = '.'.join([__name__, self.__class__.__name__])
        return logging.getLogger(name)

# This class is just there to show that you can use a mixin like LogMixin
class Base(object):
    pass

# This could be in a module separate from B
class A(Base, LogMixin):
    def __init__(self):
        # Example of logging from a method in one of your classes
        self.logger.debug('Hello from A')

# This could be in a module separate from A
class B(Base, LogMixin):
    def __init__(self):
        # Another example of logging from a method in one of your classes
        self.logger.debug('Hello from B')

def main():
    # Do some work to exercise logging
    a = A()
    b = B()
    with open('myapp.log') as f:
        print('Log file contents:')
        print(f.read())

if __name__ == '__main__':
    # Configure only in your main program clause
    logging.basicConfig(level=logging.DEBUG,
                        filename='myapp.log', filemode='w',
                        format='%(name)s %(levelname)s %(message)s')
    main()
Generally it's not necessary to have loggers at class level: in Python, unlike say Java, the unit of program (de)composition is the module. However, nothing stops you from doing it, as I've shown above. The script, when run, displays:
Log file contents:
__main__.A DEBUG Hello from A
__main__.B DEBUG Hello from B
Note that code from both classes logged to the same file, myapp.log. This would have worked even with A and B in different modules.
Try using logging.getLogger() to get your logging object instance:
http://docs.python.org/3/library/logging.html#logging.getLogger
All calls to this function with a given name return the same logger instance. This means that logger instances never need to be passed between different parts of an application.
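A quick, purely illustrative check of that guarantee:
import logging

a = logging.getLogger('custom_logger')
b = logging.getLogger('custom_logger')
print(a is b)  # True: both names refer to the very same logger object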
UPDATE:
The recommended way to do this is to use the getLogger() function and configure it (setting a handler, formatter, etc...):
# main.py
import logging
import lib

def main():
    logger = logging.getLogger('custom_logger')
    logger.setLevel(logging.INFO)
    logger.addHandler(logging.FileHandler('test.log'))
    logger.info('logged from main module')
    lib.log()

if __name__ == '__main__':
    main()

# lib.py
import logging

def log():
    logger = logging.getLogger('custom_logger')
    logger.info('logged from lib module')
If you really need to extend the logger class, take a look at logging.setLoggerClass(klass).
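For completeness, a rough sketch of what that could look like (the MyLogger class and the notice method are made-up names, just to show the mechanics):
import logging

class MyLogger(logging.Logger):
    def notice(self, msg, *args, **kwargs):
        # delegate to the standard info() with a small prefix
        self.info("NOTICE: " + msg, *args, **kwargs)

logging.setLoggerClass(MyLogger)              # must run before the loggers you care about are created
logger = logging.getLogger('custom_logger2')  # this logger is now a MyLogger instance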
UPDATE 2:
Example on how to add a custom logging level without changing the Logging class:
# main.py
import logging
import lib

# Extend Logger class
CUSTOM_LEVEL_NUM = 9
logging.addLevelName(CUSTOM_LEVEL_NUM, 'CUSTOM')

def custom(self, msg, *args, **kwargs):
    self._log(CUSTOM_LEVEL_NUM, msg, args, **kwargs)

logging.Logger.custom = custom

# Do global logger instance setup
logger = logging.getLogger('custom_logger')
logger.setLevel(logging.INFO)
logger.addHandler(logging.FileHandler('test.log'))

def main():
    logger = logging.getLogger('custom_logger')
    logger.custom('logged from main module')
    lib.log()

if __name__ == '__main__':
    main()
Note that adding a custom level is not recommended: http://docs.python.org/2/howto/logging.html#custom-levels
Defining a custom handler, and maybe using more than one logger, may do the trick for your other requirement: optional output to stderr.
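A rough sketch of that idea, reusing the 'custom_logger' setup from above (the WARNING threshold for the console is just an example choice):
import logging
import sys

logger = logging.getLogger('custom_logger')
logger.setLevel(logging.INFO)
logger.addHandler(logging.FileHandler('test.log'))

# Optional second handler: mirror records to stderr as well
stderr_handler = logging.StreamHandler(sys.stderr)
stderr_handler.setLevel(logging.WARNING)   # only warnings and above reach the console
logger.addHandler(stderr_handler)

logger.info('goes to the file only')
logger.warning('goes to both the file and stderr')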
