Python remove logging file after running tests - python

I am testing the logger's functionality, which requires creating a log file, but at the end I want to remove it, so I tried os.remove in tearDownClass:
@classmethod
def tearDownClass(cls) -> None:
    log = logging.getLogger('client_logger')
    try:
        log.removeHandler(log.handlers.pop())
    except:
        pass
    os.remove('client_logger.log')
I read that the RotatingFileHandler is the cause, and once I remove it the handler list is empty, but I still get PermissionError: [WinError 32].

Log files work like any other ordinary file: they must be opened and closed. A log handler opens the file in order to write into it, so before the file can be removed, the handler has to close it, i.e. the file must be disconnected from the handler.
Example with Django TestCase:
import os
import logging

from django.test import TestCase

_logger = logging.getLogger("logger_name")


class CustomTestCase(TestCase):
    def setUp(self) -> None:
        ...  # stuff

    def test_more_stuff(self) -> None:
        ...  # more stuff

    def tearDown(self) -> None:
        # First close the logger's file before removing it.
        _logger.handlers[0].close()  # <FileHandler D:\path\to\logfile\test.log (DEBUG)>
        if os.path.exists("test.log"):
            os.remove("test.log")
The following StackOverflow page helped me find a solution.
Reference: python does not release filehandles to logfile
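Applied back to the original unittest setup, a minimal sketch (the test class name is made up; 'client_logger.log' is assumed to be the file the handler writes, as in the question) closes every handler so the OS releases the file before os.remove runs:

import logging
import os
import unittest


class ClientLoggerTests(unittest.TestCase):
    @classmethod
    def tearDownClass(cls) -> None:
        log = logging.getLogger('client_logger')
        # Close each handler first so the file handle is released,
        # then detach it from the logger before deleting the file.
        for handler in list(log.handlers):
            handler.close()
            log.removeHandler(handler)
        if os.path.exists('client_logger.log'):
            os.remove('client_logger.log')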

Related

Module or singleton that uses logging and config

I've been trying to figure out how best to set this up, cutting it down as much as I can. I have four Python files: core.py (main), logger_controller.py, config_controller.py, and a fourth that is a module or singleton; we'll just call it tool.py.
The way I have it set up, logger_controller has an init function that sets up Python's built-in logging with the necessary levels, formatter, directory location, etc. I call this init function in main:
import logging

import logger_controller


def main():
    logger_controller.init_log()
    logger = logging.getLogger(__name__)


if __name__ == "__main__":
    main()
config_controller uses configparser and is mainly a singleton acting as a controller for my config.
import configparser
import logging

logger = logging.getLogger(__name__)


class ConfigController(object):
    def __init__(self, *file_names):
        self.config_parser = configparser.ConfigParser()
        found_files = self.config_parser.read(file_names)
        if not found_files:
            raise ValueError("No config file found.")
        self._validate()

    def _validate(self):
        ...

    def read_config(self, section, field):
        try:
            data = self.config_parser.get(section, field)
        except (configparser.NoSectionError, configparser.NoOptionError) as e:
            logger.error(e)
            data = None
        return data


config = ConfigController("config.ini")
My problem is then creating the 4th file and making sure both my logger and config parser are initialized before it. I also want this 4th one to be a singleton, so it follows a similar format to config_controller.
So tool.py uses config_controller to pull anything it needs from the config file. It also has some error checking for when config_controller's read_config returns None, since that isn't validated in _validate. I did this because I wanted my logging to have a general layer of error checking and a more specific layer: _validate just checks that the required fields and sections are in the config file, and wherever a field is actually read handles the extra error checking.
So my main problem is this:
How do I make my logger and configparser both set up and available before anything else? I'm very much willing to rework all of this, but I'd like to keep the functionality of it all.
One attempt I tried that works, but seems very messy, is making my logger_controller a singleton that just returns Python's logging module.
import logging
import os


class MyLogger(object):
    def __new__(cls, *args, **kwargs):
        init_log()
        return logging


def init_log():
    ...


mylogger = MyLogger()
Then in core.py
from logger_controller import mylogger
logger = mylogger.getLogger(__name__)
I feel like there should be a better way to do the above, but I'm honestly not sure how.
A few ideas:
Would I be able to extend the logging class instead of just using that init_log function?
Maybe there's a way I can make all 3 individual modules such that they each initialize in the correct order? My attempts here didn't quite work, as I also have some internal data that I wouldn't want exposed to code using the module, just the functionality.
I'd like to have all 3 (logging, config parsing, and the tool) available anywhere I import them.
How I have it set up now "works", but if I import tool.py anywhere in core.py and an error occurs that I need to catch, my logger won't be able to log it, because the tool is loaded before my logger has been initialized.
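(Not from the question itself, just a hedged sketch of one common way around that ordering problem: call logging.getLogger(__name__) at module level, which is safe before any configuration, and defer the config reads until a function actually runs, i.e. after main() has called logger_controller.init_log(). The section and field names below are made up for illustration.)

# tool.py (sketch)
import logging

from config_controller import config

logger = logging.getLogger(__name__)  # safe even before init_log() has run


def do_work():
    # The config value is read only when this is called, i.e. after main()
    # has already initialized logging, so errors here are actually logged.
    value = config.read_config("some_section", "some_field")
    if value is None:
        logger.error("missing some_section/some_field in config.ini")
    return value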

Singleton method usage in Python to create one instance in logging package

I'm coding this in Python as a newbie and trying to understand how to create a single instance of a logging class in a separate module. I am trying to access the logging module from my Python script. There are several files in this automation script, and I want to record the logs in a file while displaying them in the console at the same time, so I am using two handlers: FileHandler() and StreamHandler(). The initialization of the logger is in a separate file called debugLogs.py, which is accessed from multiple Python modules running the script. But if the separate modules call debugLogs.py, it creates multiple instances of the logger, which means messages get printed multiple times, which is not what I want. That is why I need a singleton to create just one instance. How do you suggest I go about doing that? I have included my version of debugLogs.py below.
# debugLogs.py
import logging
import logging.handlers


# Open the readme file and read the name of the latest log file created.
def get_name():
    with open("latestLogNames.txt") as f:
        for line in f:
            pass
        latestLog = line
    logfile_name = latestLog[:-1]
    return logfile_name


class Logger(object):
    _instance = None

    def __new__(self, logfile_name):
        if not self._instance:
            self._instance = super(Logger, self).__new__(self)
            logger = logging.getLogger(__name__)
            logger.setLevel(logging.INFO)
            formatter = logging.Formatter('%(message)s')
            file_handler = logging.FileHandler(logfile_name)
            file_handler.setFormatter(formatter)
            stream_handler = logging.StreamHandler()
            logger.addHandler(file_handler)
            logger.addHandler(stream_handler)
        return self._instance
So get_name() reads the name of the latest log file from the readme file latestLogNames.txt, and the logs go into that existing file.
I understand that my singleton class code is not right, and I am confused about how to initialize the whole class structure. But somehow I have to pass that logfile_name value to the class. I am planning to call this logger from a different module with something like this:
# differentModule.py
import debugLogs

logger = debugLogs.Logger(debugLogs.get_name())
And then I would use logger.info("...") to print the logs as well as store it in the file. Please tell me how to restructure the debugLogs.py and how to call it from different modules of my script.
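(No answer is shown here, but a minimal sketch of the usual restructuring: logging.getLogger already returns the same object for a given name, so a custom singleton class isn't strictly needed; a guarded module-level function is enough. get_name() is the question's own helper, assumed to live in the same file, and the logger name "my_app" is made up.)

# debugLogs.py (sketch)
import logging


def get_logger():
    logger = logging.getLogger("my_app")
    if not logger.handlers:  # only configure on the first call
        logger.setLevel(logging.INFO)
        formatter = logging.Formatter('%(message)s')
        file_handler = logging.FileHandler(get_name())
        file_handler.setFormatter(formatter)
        logger.addHandler(file_handler)
        logger.addHandler(logging.StreamHandler())
    return logger

Every other module then does logger = debugLogs.get_logger() and gets the same, already configured logger, so each message is printed and written to the file exactly once.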

How to disable logs in Kedro

I have been unsuccessful in disabling Kedro logs. I have tried adding disable_existing_loggers: True to the logging.yml file, as well as disable: True to all of the existing loggers, and it still appears to be saving log files. Any suggestions?
If you want Kedro to stop logging, you can override _setup_logging in ProjectContext in src/<package-name>/run.py, as per the documentation. For example:
class ProjectContext(KedroContext):
    """Users can override the remaining methods from the parent class here,
    or create new ones (e.g. as required by plugins).
    """

    project_name = "<PACKAGE-NAME>"
    project_version = "0.15.4"

    def _get_pipelines(self) -> Dict[str, Pipeline]:
        return create_pipelines()

    def _setup_logging(self) -> None:
        import logging
        logging.disable()
If you want it to still log to the console, but not save to logs/info.log, you can instead do def _setup_logging(self) -> None: pass.
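A sketch of that no-op override, with the rest of ProjectContext unchanged from the snippet above:

class ProjectContext(KedroContext):
    project_name = "<PACKAGE-NAME>"
    project_version = "0.15.4"

    def _setup_logging(self) -> None:
        # Do nothing: Kedro's logging configuration is never applied, so
        # nothing is saved to logs/info.log, while console output set up
        # elsewhere still appears (per the answer above).
        pass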

How to verify Python log format in unittest?

Recently I have been writing a Python logging extension, and I want to add some tests to verify that my extension works as expected.
However, I don't know how to capture the complete log output and compare it with my expected result in unittest/pytest.
Simplified sample:
# app.py
import logging


def create_logger():
    formatter = logging.Formatter(fmt='%(name)s-%(levelname)s-%(message)s')
    hdlr = logging.StreamHandler()
    hdlr.setFormatter(formatter)
    logger = logging.getLogger(__name__)
    logger.setLevel('DEBUG')
    logger.addHandler(hdlr)
    return logger


app_logger = create_logger()
Here are my tests.
Attempt 1: unittest
from app import app_logger
import unittest


class TestApp(unittest.TestCase):
    def test_logger(self):
        with self.assertLogs('', 'DEBUG') as cm:
            app_logger.debug('hello')
            # or some other way to capture the log output
        self.assertEqual('app-DEBUG-hello', cm.output)
expected behaviour:
cm.output = 'app-DEBUG-hello'
actual behaviour
cm.output = ['DEBUG:app:hello']
Attempt 2: pytest caplog
from app import app_logger
import pytest


def test_logger(caplog):
    app_logger.debug('hello')
    assert caplog.text == 'app-DEBUG-hello'
expected behaviour:
caplog.text = 'app-DEBUG-hello'
actual behaviour
caplog.text = 'test_logger.py 6 DEBUG hello'
Attempt 3: pytest capsys
from app import app_logger
import pytest


def test_logger(capsys):
    app_logger.debug('hello')
    out, err = capsys.readouterr()
    assert err
    assert err == 'app-DEBUG-hello'
expected behaviour:
err = 'app-DEBUG-hello'
actual behaviour
err = ''
Considering there will be many tests with different formats, I don't want to check the log format manually. I have no idea how to get the complete log output as I see it on the console and compare it with my expected output in the test cases. Hoping for your help, thanks.
I know this is old, but I'm posting here since it came up in Google for me...
It probably needs cleanup, but it is the first thing that has gotten close for me, so I figured it would be good to share.
Here is a test case mixin I've put together that lets me verify a particular handler is being formatted as expected by copying the formatter:
import io
import logging
import logging.config

from django.conf import settings
from django.test import SimpleTestCase
from django.utils.log import DEFAULT_LOGGING


class SetupLoggingMixin:
    def setUp(self):
        super().setUp()
        logging.config.dictConfig(settings.LOGGING)
        self.stream = io.StringIO()
        self.root_logger = logging.getLogger("")
        self.root_hdlr = logging.StreamHandler(self.stream)
        console_handler = None
        for handler in self.root_logger.handlers:
            if handler.name == 'console':
                console_handler = handler
                break
        if console_handler is None:
            raise RuntimeError('could not find console handler')
        formatter = console_handler.formatter
        self.root_formatter = formatter
        self.root_hdlr.setFormatter(self.root_formatter)
        self.root_logger.addHandler(self.root_hdlr)

    def tearDown(self):
        super().tearDown()
        self.stream.close()
        logging.config.dictConfig(DEFAULT_LOGGING)
And here is an example of how to use it:
class SimpleLogTests(SetupLoggingMixin, SimpleTestCase):
    def test_logged_time(self):
        msg = 'foo'
        self.root_logger.error(msg)
        self.assertEqual(self.stream.getvalue(), 'my-expected-message-formatted-as-expected')
After reading the source code of the unittest library, I've worked out the following bypass. Note that it works by changing a protected member of an imported module, so it may break in future versions.
from unittest.case import _AssertLogsContext
_AssertLogsContext.LOGGING_FORMAT = 'same format as your logger'
After these commands, the logging context opened by self.assertLogs will use the above format. I really don't know why this value is left hard-coded and not configurable.
I did not find an option to read the format of a logger, but if you use logging.config.dictConfig you can use a value from the same dictionary.
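For instance (a sketch, not from the answer; the "verbose" and "console" keys are just an assumed layout of your LOGGING dict), the same format string can feed both dictConfig and the assertLogs bypass:

import logging.config
from unittest.case import _AssertLogsContext

LOGGING = {
    "version": 1,
    "formatters": {"verbose": {"format": "%(name)s-%(levelname)s-%(message)s"}},
    "handlers": {"console": {"class": "logging.StreamHandler", "formatter": "verbose"}},
    "root": {"handlers": ["console"], "level": "DEBUG"},
}

logging.config.dictConfig(LOGGING)
# Reuse the same format string so assertLogs renders records the same way
# the real handler does.
_AssertLogsContext.LOGGING_FORMAT = LOGGING["formatters"]["verbose"]["format"]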
I know this doesn't completely answer the OP's question, but I stumbled upon this post while looking for a neat way to capture logged messages.
Taking what @user319862 did, I've cleaned it up and simplified it.
import unittest
import logging
from io import StringIO


class SetupLogging(unittest.TestCase):
    def setUp(self):
        super().setUp()
        self.stream = StringIO()
        self.root_logger = logging.getLogger("")
        self.root_hdlr = logging.StreamHandler(self.stream)
        self.root_logger.addHandler(self.root_hdlr)

    def tearDown(self):
        super().tearDown()
        self.stream.close()

    def test_log_output(self):
        """Does the logger produce the correct output?"""
        msg = 'foo'
        self.root_logger.error(msg)
        self.assertEqual(self.stream.getvalue(), 'foo\n')


if __name__ == '__main__':
    unittest.main()
I'm new to Python but have some experience with testing/TDD in other languages, and I found that the default way of "changing" the formatter is to add a new StreamHandler. BUT if you already have a stream attached to your logger (e.g. Azure Functions or TestCase.assertLogs adds one for you), you end up logging twice: once with your format and once with the "default" format.
If the OP's create_logger function mutated the formatter of the current StreamHandler, instead of adding a new StreamHandler (check if one exists, and if it doesn't, create a new one, and all that jazz...),
then you could call create_logger after the with self.assertLogs('', 'DEBUG') as cm: line and just assert on cm.output, and it would work, because you are mutating the formatter of the StreamHandler that assertLogs is adding.
So basically what's happening is that the execution order is not appropriate for the test.
The order of execution in the OP is:
1. import stuff
2. add a stream to the logger with its formatter
3. run the test
4. add another stream to the logger via self.assertLogs
5. assert against the 2nd StreamHandler
When the order of execution should be:
1. import stuff
2. add a stream with the logger's formatter (but it is irrelevant)
3. run the test
4. add another stream via self.assertLogs
5. change the current stream's formatter
6. assert against the only, properly formatted, StreamHandler

How to configure/initialize logging using logger for multiple modules only once in Python for entire project?

I have a Python project with multiple modules that use logging. I perform initialization (reading the log configuration file, creating the root logger, and enabling/disabling logging) in every module before logging any messages. Is it possible to perform this initialization only once, in one place (such as a single class called Log), so that the same settings are reused by logging all over the project?
I am looking for a proper solution that reads the configuration file only once and gets and configures a logger only once, in a class constructor or perhaps in the initializer (__init__.py). I don't want to do this on the client side (in __main__). I want to do this configuration only once, in a separate class, and call this class from the other modules whenever logging is required.
Setup using the @singleton pattern:
# log.py
import logging.config

import yaml
from singleton_decorator import singleton


@singleton
class Log:
    def __init__(self):
        configFile = 'path_to_my_log_config_file/logging.yaml'
        with open(configFile) as f:
            config_dict = yaml.load(f)
        logging.config.dictConfig(config_dict)
        self.logger = logging.getLogger('root')

    def info(self, message):
        self.logger.info(message)


# module1.py
from log import Log

myLog = Log()
myLog.info('Message logged successfully')


# module2.py
from log import Log

myLog = Log()  # config read only once and only one object is created
myLog.info('Message logged successfully')
From the documentation,
Note that Loggers should NEVER be instantiated directly, but always through the module-level function logging.getLogger(name). Multiple calls to getLogger() with the same name will always return a reference to the same Logger object.
You can initialize and configure logging in your main entry point. See Logging from multiple modules in this Howto (Python 2.7).
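A minimal sketch of that approach (the module names here are illustrative, not from the question):

# main.py
import logging

import worker


def main():
    # Configure the root logger exactly once, at the program's entry point.
    logging.basicConfig(
        level=logging.INFO,
        format="%(asctime)s %(name)s %(levelname)s %(message)s",
    )
    worker.run()


if __name__ == "__main__":
    main()


# worker.py (any other module in the project)
import logging

logger = logging.getLogger(__name__)  # no handlers or config needed here


def run():
    logger.info("running")  # uses whatever configuration main() applied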
I had the same problem, and since I don't have any classes or anything, I solved it by just using a global variable.
utils.py:
import logging

existing_loggers = {}


def get_logger(name='my_logger', level=logging.INFO):
    if name in existing_loggers:
        return existing_loggers[name]
    # Do the rest of the initialization: handlers, formatters, etc...
    logger = logging.getLogger(name)
    logger.setLevel(level)
    existing_loggers[name] = logger
    return logger
