A library that I use emits warnings and errors through the logging module (logging.Logger's warn() and error() methods). I would like to implement an option to turn the warnings into errors (i.e., fail on warnings).
Is there an easy way to achieve this?
From looking at the documentation, I cannot see a ready-made solution. I assume it is possible by adding a custom Handler object, but I am not sure how to do it "right". Any pointers?
@hoefling's answer is close, but I would change it like so:
import logging

import library  # the third-party package whose records we want to upgrade

class LevelRaiser(logging.Filter):
    def filter(self, record):
        if record.levelno == logging.WARNING:
            record.levelno = logging.ERROR
            record.levelname = logging.getLevelName(logging.ERROR)
        return True

def configure_library_logging():
    library_root_logger = logging.getLogger(library.__name__)
    library_root_logger.addFilter(LevelRaiser())
The reason is that filters are used to change LogRecord attributes and filter stuff out, whereas handlers are used to do I/O. What you're trying to do here isn't I/O, and so doesn't really belong in a handler.
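If your goal is to literally fail on warnings rather than just relabel them, the same filter idea can raise instead. This is my own sketch, not part of the original answer, and it assumes aborting from inside a logging call is acceptable in your application:

import logging

class FailOnWarning(logging.Filter):
    """Raise an exception when the library logs at WARNING or above."""
    def filter(self, record):
        if record.levelno >= logging.WARNING:
            # the exception propagates out of the library's logger.warning() call
            raise RuntimeError('warning treated as error: %s' % record.getMessage())
        return True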
Update: I like the proposal Vinay made in this answer; injecting a custom Filter instead of a Handler is a much cleaner way. Please check it out!
You are on the right track with implementing your own Handler, and it is pretty easy to do. I would do it like this: write a handler that edits the LogRecord in place, and attach one instance of it to the library's root logger. Example:
# library.py
import logging

_LOGGER = logging.getLogger(__name__)

def library_stuff():
    _LOGGER.warning('library stuff')
This is a script that uses the library:
import logging
import library
class LevelRaiser(logging.Handler):
    def emit(self, record: logging.LogRecord):
        if record.levelno == logging.WARNING:
            record.levelno = logging.ERROR
            record.levelname = logging.getLevelName(logging.ERROR)

def configure_library_logging():
    library_root_logger = logging.getLogger(library.__name__)
    library_root_logger.addHandler(LevelRaiser())

if __name__ == '__main__':
    # do some example global logging config
    logging.basicConfig(level=logging.INFO)
    # additional configuration for the library logging
    configure_library_logging()
    # play with different loggers
    our_logger = logging.getLogger(__name__)
    root_logger = logging.getLogger()
    root_logger.warning('spam')
    our_logger.warning('eggs')
    library.library_stuff()
    root_logger.warning('foo')
    our_logger.warning('bar')
    library.library_stuff()
Run the script:
WARNING:root:spam
WARNING:__main__:eggs
ERROR:library:library stuff
WARNING:root:foo
WARNING:__main__:bar
ERROR:library:library stuff
Note that the warning level is elevated to error only for the library's log output; everything else remains unchanged.
You can rebind logging.warn to logging.error before calling methods from your library:
import logging
warn_log_original = logging.warn
logging.warn = logging.error
library_call()
logging.warn = warn_log_original
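If you take this route, it is safer to wrap the patch in a context manager so the original function is restored even if the library call raises. A minimal sketch (my own addition, stdlib only; note it only affects the module-level logging.warn, not calls on Logger instances):

import contextlib
import logging

@contextlib.contextmanager
def warnings_as_errors():
    # patch the module-level logging.warn for the duration of the block
    original = logging.warn
    logging.warn = logging.error
    try:
        yield
    finally:
        logging.warn = original

with warnings_as_errors():
    library_call()  # the library function from the snippet above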
I am new to Python (my background is .NET) and working on a Python application that is a wrapper around a third-party library. The third-party Python library uses standard logging. I need to intercept these logging calls and store them. The code looks something like this:
Third-party main — myApp.py:
# Standard Library
import logging

from options import (info, warn)
from process import (processIt)

# Module-level logger
log = logging.getLogger(__name__)
log.propagate = False
formatter = logging.Formatter("[%(name)s] [%(levelname)-7s] [%(asctime)s] %(message)s")

# Console Handler for Elevator messages
ch = logging.StreamHandler()
ch.setFormatter(formatter)
log.addHandler(ch)

def runIt():
    info("Running it.", 1)
    processIt()
    info("Running it.", 2)
Third-party logging setup — options.py:
# Standard Library
import logging

formatter = logging.Formatter("[%(name)s] [%(ecode)d] [%(levelname)-7s] [%(asctime)s] %(message)s")

# Console Handler for Elevator messages
ch = logging.StreamHandler()
ch.setFormatter(formatter)

# Module-level logger
log = logging.getLogger(__name__)
log.level = logging.INFO
# temporary? hack to prevent multiple loggers from printing messages
log.propagate = False
log.addHandler(ch)

def info(fmt, ecode, *args):
    log.info(fmt, *args, extra={'ecode': ecode})

def warn(fmt, ecode, *args):
    log.warning(fmt, *args, extra={'ecode': ecode})

def init():
    info("Initialized options", 100)
Third-party process — process.py:
from options import (info, warn)

def processIt():
    info("Inside Process", 10)
This is the client — client.py:
import options
import myApp

info_msg = []
warn_msg = []

def info(fmt, ecode, *args):
    info_msg.append(dict({ecode: fmt.format(*args)}))

def warn(fmt, ecode, *args):
    warn_msg.append(dict({ecode: fmt.format(*args)}))

options.warn = warn
options.info = info

def runApp():
    print("Start")
    options.init()
    myApp.runIt()
    print("End")
    print(info_msg)
    print(warn_msg)

runApp()
Here is the output:
Start
[options] [1] [INFO ] [2022-06-09 09:28:46,380] Running it.
[options] [10] [INFO ] [2022-06-09 09:28:46,380] Inside Process
[options] [2] [INFO ] [2022-06-09 09:28:46,380] Running it.
End
[{100: 'Initialized options'}]
[]
You can see that only the logging call in the init function was intercepted; nothing else was.
Explanation and quick-and-dirty solution
First of all, your solution is a bit of a hack, since you're modifying a third-party module at runtime. This might work, but it depends on the situation. The reason it does not work in this case is that myApp.py contains from options import (info, warn). In general, the from ... import ... style is discouraged for exactly this reason: it creates additional name bindings. In this example, a name called info is created in the myApp module.
When you overwrite the options.info function, this has no effect on myApp.info, since that name was already bound and still references the original function.
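A minimal standalone demonstration of that binding behaviour (illustrative names only, no relation to the actual modules):

import types

# stand-in for options.py
options = types.ModuleType('options')
options.info = lambda: print('original')

# what `from options import info` effectively does inside myApp.py:
info = options.info

# the client's monkey-patch:
options.info = lambda: print('patched')

options.info()  # prints 'patched'
info()          # still prints 'original' -- myApp keeps the old reference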
The quick-and-dirty fix is to override the functions in the options module before you import the other third-party modules:
import options
# import myApp  # NOT YET

info_msg = []
warn_msg = []

def info(fmt, ecode, *args):
    info_msg.append(dict({ecode: fmt.format(*args)}))

def warn(fmt, ecode, *args):
    warn_msg.append(dict({ecode: fmt.format(*args)}))

options.warn = warn
options.info = info

import myApp  # Now it's okay

def runApp():
    ...
I tested the above code with your example, and it gives the following output:
Start
End
[{100: 'Initialized options'}, {1: 'Running it.'}, {10: 'Inside Process'}, {2: 'Running it.'}]
[]
Note, however, that this solution might not work if the actual third-party code is bigger and does its imports in yet another way than demonstrated here. Also note that the code above breaks a number of coding style guides (which you may or may not care about).
Solution using the logging module
Instead of "hacking" the thirdparty code, you can also achieve a similar result using the logging module.
The logging module is written to support these type of things, and it seems like the third-party library follows the best practices regarding logging. This means you could just create your own logging handler to deal with logging from the third-party library.
To capture the log messages in the way you demonstrated, you'll need to write a custom logging Handler Object. The following code demonstrates how this works.
import logging

import thirdparty.options  # See note below
import thirdparty.myApp  # See note below

class MyLogHandler(logging.Handler):
    def __init__(self):
        super().__init__()
        self.info_msg = []
        self.warn_msg = []

    def emit(self, record: logging.LogRecord):
        ecode = record.__dict__.get('ecode')
        if record.levelno == logging.INFO:
            self.info_msg.append({ecode: record.msg.format(*record.args)})
        elif record.levelno == logging.WARNING:
            self.warn_msg.append({ecode: record.msg.format(*record.args)})

def runApp():
    handler = MyLogHandler()
    logging.getLogger('thirdparty.options').addHandler(handler)  # See note below
    print("Start")
    thirdparty.options.init()  # See note below
    thirdparty.myApp.runIt()  # See note below
    print("End")
    print(handler.info_msg)
    print(handler.warn_msg)

runApp()
NOTE: I named the third-party module "thirdparty" in the above code. Your actual library probably has a different name, in which case you'll need to adjust the names in the code accordingly!
The output of the above script is:
Start
[thirdparty.options] [100] [INFO ] [2022-06-10 18:06:21,979] Initialized options
[thirdparty.options] [1] [INFO ] [2022-06-10 18:06:21,979] Running it.
[thirdparty.options] [10] [INFO ] [2022-06-10 18:06:21,980] Inside Process
[thirdparty.options] [2] [INFO ] [2022-06-10 18:06:21,980] Running it.
End
[{100: 'Initialized options'}, {1: 'Running it.'}, {10: 'Inside Process'}, {2: 'Running it.'}]
[]
Note that this solution does not replace the original logging, but adds a log handler, so the original logging still works as well. (If you don't want that, you can remove the original log handler.)
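For completeness, removing the library's own console handler could look like the following sketch (my addition; it assumes, as above, that the handler lives on the 'thirdparty.options' logger):

lib_logger = logging.getLogger('thirdparty.options')
for h in list(lib_logger.handlers):        # copy the list before mutating it
    if isinstance(h, logging.StreamHandler):
        lib_logger.removeHandler(h)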
References:
Python library documentation for the Handler object.
Python Logging HOWTO
Try putting the import thirdparty.setup statement inside a function; otherwise you are going to get an import error.
By subclassing logging.Handler, I can make a custom handler like this:
import requests
import logging
class RequestsHandler(logging.Handler):
    def emit(self, record):
        res = requests.get('http://google.com')
        print(res, record)
handler = RequestsHandler()
logger = logging.getLogger(__name__)
logger.addHandler(handler)
logger.warning('ok!')
# <Response [200]> <LogRecord: __main__, 30, <stdin>, 1, "ok!">
What would be the simplest RequestsHandler (i.e., which methods would it need) if it were just a plain class that did not subclass logging.Handler?
In general, you can find out which attributes of a class are accessed externally by overriding the __getattribute__ method with a wrapper function that adds the name of the attribute being accessed to a set whenever the caller's class is not the same as the current class:
import logging
import sys

class MyHandler(logging.Handler):
    def emit(self, record):
        pass

def show_attribute(self, name):
    caller_locals = sys._getframe(1).f_locals
    if ('self' not in caller_locals or
            object.__getattribute__(caller_locals['self'], '__class__') is not
            object.__getattribute__(self, '__class__')):
        attributes.add(name)
    return original_getattribute(self, name)

attributes = set()
original_getattribute = MyHandler.__getattribute__
MyHandler.__getattribute__ = show_attribute
so that:
handler = MyHandler()
logger = logging.getLogger(__name__)
logger.addHandler(handler)
logger.warning('ok!')
print(attributes)
outputs:
{'handle', 'level'}
Demo: https://repl.it/#blhsing/UtterSoupyCollaborativesoftware
As you can see from the result above, handle and level are the only attributes needed for a basic logging handler. In other words, @jirassimok is correct that handle is the only method of the Handler class that is called externally, but one also needs to implement the level attribute, since it is accessed directly in the Logger.callHandlers method:
if record.levelno >= hdlr.level:
where the level attribute has to be an integer, and should be 0 if records of all logging levels are to be handled.
A minimal implementation of a Handler class should therefore be something like:
class MyHandler:
    def __init__(self):
        self.level = 0

    def handle(self, record):
        print(record.msg)
so that:
handler = MyHandler()
logger = logging.getLogger(__name__)
logger.addHandler(handler)
logger.warning('ok!')
outputs:
ok!
Looking at the source for Logger.log leads me to Logger.callHandlers, which calls only handle on the handlers. So that might be the minimum you need if you're injecting the fake handler directly into a logger instance.
If you want to really guarantee compatibility with the rest of the logging module, the only thing you can do is go through the module's source to figure out how it works. The documentation is a good starting place, but that doesn't get into the internals much at all.
If you're just trying to write a dummy handler for a small use case, you could probably get away with skipping a lot of steps; try something, see where it fails, and build on that.
Otherwise, you won't have much choice but to dive into the source code (though trying things and seeing what breaks can also be a good way to find places to start reading).
A quick glance at the class' source tells me that the only gotchas in the class are related to the module's internal management of its objects; Handler.__init__ puts the handler into a global handler list, which the module could use in any number of places. But beyond that, the class is quite straightforward; it shouldn't be too hard to read.
I've recently been writing a Python logging extension, and I want to add some tests to verify that the extension works as expected.
However, I don't know how to capture the complete log output and compare it with the expected result in unittest/pytest.
simplified sample:
# app.py
import logging

def create_logger():
    formatter = logging.Formatter(fmt='%(name)s-%(levelname)s-%(message)s')
    hdlr = logging.StreamHandler()
    hdlr.setFormatter(formatter)
    logger = logging.getLogger(__name__)
    logger.setLevel('DEBUG')
    logger.addHandler(hdlr)
    return logger

app_logger = create_logger()
Here are my tests:
Attempt 1: unittest
from app import app_logger
import unittest

class TestApp(unittest.TestCase):
    def test_logger(self):
        with self.assertLogs('', 'DEBUG') as cm:
            app_logger.debug('hello')
        # or some other way to capture the log output.
        self.assertEqual('app-DEBUG-hello', cm.output)
expected behaviour:
cm.output = 'app-DEBUG-hello'
actual behaviour:
cm.output = ['DEBUG:app:hello']
Attempt 2: pytest caplog
from app import app_logger
import pytest

def test_logger(caplog):
    app_logger.debug('hello')
    assert caplog.text == 'app-DEBUG-hello'
expected behaviour:
caplog.text = 'app-DEBUG-hello'
actual behaviour:
caplog.text = 'test_logger.py 6 DEBUG hello'
Attempt 3: pytest capsys
from app import app_logger
import pytest

def test_logger(capsys):
    app_logger.debug('hello')
    out, err = capsys.readouterr()
    assert err
    assert err == 'app-DEBUG-hello'
expected behaviour:
err = 'app-DEBUG-hello'
actual behaviour:
err = ''
Since there will be many tests with different formats, I don't want to check the log format manually. I have no idea how to capture the complete log output as it appears on the console and compare it with the expected output in my test cases. Hoping for your help, thanks.
I know this is old, but I'm posting here since this question came up on Google for me...
It probably needs cleanup, but it is the first thing that has gotten close for me, so I figured it would be worth sharing.
Here is a test-case mixin I've put together that lets me verify a particular handler is formatted as expected by copying the formatter:
import io
import logging
import logging.config

from django.conf import settings
from django.test import SimpleTestCase
from django.utils.log import DEFAULT_LOGGING

class SetupLoggingMixin:
    def setUp(self):
        super().setUp()
        logging.config.dictConfig(settings.LOGGING)
        self.stream = io.StringIO()
        self.root_logger = logging.getLogger("")
        self.root_hdlr = logging.StreamHandler(self.stream)
        console_handler = None
        for handler in self.root_logger.handlers:
            if handler.name == 'console':
                console_handler = handler
                break
        if console_handler is None:
            raise RuntimeError('could not find console handler')
        formatter = console_handler.formatter
        self.root_formatter = formatter
        self.root_hdlr.setFormatter(self.root_formatter)
        self.root_logger.addHandler(self.root_hdlr)

    def tearDown(self):
        super().tearDown()
        self.stream.close()
        logging.config.dictConfig(DEFAULT_LOGGING)
And here is an example of how to use it:
class SimpleLogTests(SetupLoggingMixin, SimpleTestCase):
    def test_logged_time(self):
        msg = 'foo'
        self.root_logger.error(msg)
        self.assertEqual(self.stream.getvalue(), 'my-expected-message-formatted-as-expected')
After reading the source code of the unittest library, I've worked out the following bypass. Note that it works by changing a protected member of an imported module, so it may break in future versions.
from unittest.case import _AssertLogsContext
_AssertLogsContext.LOGGING_FORMAT = 'same format as your logger'
After these commands, the logging context opened by self.assertLogs will use the above format. I really don't know why this value is hard-coded and not configurable.
I did not find a way to read the format back from a logger, but if you use logging.config.dictConfig, you can take the value from the same dictionary.
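For example, a sketch assuming your configuration dictionary is available as LOGGING_CONFIG and defines a formatter named 'default' (both names are illustrative):

import logging.config
from unittest.case import _AssertLogsContext

LOGGING_CONFIG = {
    'version': 1,
    'formatters': {'default': {'format': '%(name)s-%(levelname)s-%(message)s'}},
}
logging.config.dictConfig(LOGGING_CONFIG)

# reuse the same format string for assertLogs output
_AssertLogsContext.LOGGING_FORMAT = LOGGING_CONFIG['formatters']['default']['format']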
I know this doesn't completely answer the OP's question, but I stumbled upon this post while looking for a neat way to capture logged messages.
Taking what @user319862 did, I've cleaned it up and simplified it.
import unittest
import logging
from io import StringIO

class SetupLogging(unittest.TestCase):
    def setUp(self):
        super().setUp()
        self.stream = StringIO()
        self.root_logger = logging.getLogger("")
        self.root_hdlr = logging.StreamHandler(self.stream)
        self.root_logger.addHandler(self.root_hdlr)

    def tearDown(self):
        super().tearDown()
        self.stream.close()

    def test_log_output(self):
        """ Does the logger produce the correct output? """
        msg = 'foo'
        self.root_logger.error(msg)
        self.assertEqual(self.stream.getvalue(), 'foo\n')

if __name__ == '__main__':
    unittest.main()
I'm new to Python but have some experience with testing/TDD in other languages. I found that the default way of "changing" the formatter is to add a new StreamHandler. But if your logger already has a stream attached (e.g. when using Azure Functions, or TestCase.assertLogs, which adds one for you), you end up logging everything twice: once with your format and once with the "default" format.
If the create_logger function in the OP mutated the formatter of the current StreamHandler, instead of adding a new StreamHandler (checking whether a handler exists, creating one only if it doesn't, and all that jazz...),
then you could call create_logger after the with self.assertLogs('', 'DEBUG') as cm: line and just assert on cm.output. It works because you are mutating the Formatter of the StreamHandler that assertLogs is adding (a sketch follows the lists below).
So basically what's happening is that the execution order is not appropriate for the test.
The order of execution in OP is:
import stuff
Add stream to logger formatter
Run test
Add another stream to logger formatter via self.assertLogs
assert stuff in 2nd StreamHandler
When it should be:
import stuff
Add stream with logger formatter (but this is irrelevant)
Run test
Add another stream with logger formatter via self.assertLogs
Change current stream logger formatter
assert stuff in only and properly formatted StreamHandler
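A sketch of that order (my own reconstruction of the rewritten create_logger; the assertion string assumes the OP's format):

import logging
import unittest

def create_logger():
    # hypothetical rewrite: mutate the formatter of any handler already
    # attached to the root logger (such as the one assertLogs installs)
    # instead of adding a new StreamHandler
    formatter = logging.Formatter(fmt='%(name)s-%(levelname)s-%(message)s')
    for handler in logging.getLogger().handlers:
        handler.setFormatter(formatter)
    logger = logging.getLogger('app')
    logger.setLevel('DEBUG')
    return logger

class TestApp(unittest.TestCase):
    def test_logger(self):
        with self.assertLogs('', 'DEBUG') as cm:
            app_logger = create_logger()  # called *after* assertLogs
            app_logger.debug('hello')
        self.assertEqual(cm.output, ['app-DEBUG-hello'])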
I'm using pytest 3.7.1, which has good support for logging, including live logging to stdout during tests. I'm using --log-cli-level=DEBUG to dump all debug-level logging to the console as it happens.
The problem I have is that --log-cli-level=DEBUG turns on debug logging for all modules in my test program, including third-party dependencies, and it floods the log with a lot of uninteresting output.
Python's logging module has the ability to set logging levels per module. This enables selective logging - for example, in a normal Python program I can turn on debugging for just one or two of my own modules, and restrict the log output to just those, or set different log levels for each module. This enables turning off debug-level logging for noisy libraries.
So what I'd like to do is apply the same concept to pytest's logging - i.e. specify a logging level, from the command line, for specific non-root loggers. For example, if I have a module called test_foo.py then I'm looking for a way to set the log level for this module from the command line.
I'm prepared to roll-my-own if necessary (I know how to add custom arguments to pytest), but before I do that I just want to be sure that there isn't already a solution. Is anyone aware of one?
I had the same problem and found a solution in another answer:
Instead of --log-cli-level=DEBUG, use --log-level DEBUG. It disables all third-party module logs (in my case, plenty of matplotlib logs), but still outputs your app logs for each test that fails.
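If you do end up rolling your own, a conftest.py sketch along these lines should work (the option name --log-module-level is my invention, not a built-in pytest flag):

# conftest.py
import logging

def pytest_addoption(parser):
    parser.addoption(
        '--log-module-level',
        action='append',
        default=[],
        help='LOGGER=LEVEL pairs, may be given multiple times',
    )

def pytest_configure(config):
    # e.g. pytest --log-cli-level=INFO --log-module-level=test_foo=DEBUG
    for spec in config.getoption('--log-module-level'):
        name, _, level = spec.partition('=')
        logging.getLogger(name).setLevel(level.upper())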
I got this working by writing a factory class that sets the level of the root logger to logging.INFO and applies the logging level from the command line to all loggers obtained from the factory. If the logging level from the command line is higher than the minimum global log level specified in the class (the MINIMUM_GLOBAL_LOG_LEVEL constant), the global log level isn't changed.
import logging

MODULE_FIELD_WIDTH_IN_CHARS = '20'
LINE_NO_FIELD_WIDTH_IN_CHARS = '3'
LEVEL_NAME_FIELD_WIDTH_IN_CHARS = '8'
MINIMUM_GLOBAL_LOG_LEVEL = logging.INFO

class EasyLogger():
    root_logger = logging.getLogger()
    specified_log_level = root_logger.level
    format_string = '{asctime} '
    format_string += '{module:>' + MODULE_FIELD_WIDTH_IN_CHARS + 's}'
    format_string += '[{lineno:' + LINE_NO_FIELD_WIDTH_IN_CHARS + 'd}]'
    format_string += '[{levelname:^' + LEVEL_NAME_FIELD_WIDTH_IN_CHARS + 's}]: '
    format_string += '{message}'
    level_change_warning_sent = False

    @classmethod
    def get_logger(cls, logger_name):
        if not EasyLogger._logger_has_format(cls.root_logger, cls.format_string):
            EasyLogger._setup_root_logger()
        logger = logging.getLogger(logger_name)
        logger.setLevel(cls.specified_log_level)
        return logger

    @classmethod
    def _setup_root_logger(cls):
        formatter = logging.Formatter(fmt=cls.format_string, style='{')
        if not cls.root_logger.hasHandlers():
            handler = logging.StreamHandler()
            cls.root_logger.addHandler(handler)
        for handler in cls.root_logger.handlers:
            handler.setFormatter(formatter)
        cls.root_logger.setLevel(MINIMUM_GLOBAL_LOG_LEVEL)
        if (cls.specified_log_level < MINIMUM_GLOBAL_LOG_LEVEL and
                cls.level_change_warning_sent is False):
            cls.root_logger.log(
                max(cls.specified_log_level, logging.WARNING),
                "Setting log level for %s class to %s, all others to %s" % (
                    __name__,
                    cls.specified_log_level,
                    MINIMUM_GLOBAL_LOG_LEVEL
                )
            )
            cls.level_change_warning_sent = True

    @staticmethod
    def _logger_has_format(logger, format_string):
        for handler in logger.handlers:
            # compare the handler's formatter pattern, not the bound
            # Handler.format method, which would never equal a string
            return (handler.formatter is not None
                    and handler.formatter._fmt == format_string)
        return False
The above class is then used to send logs normally, as you would with a logging.Logger object:
from EasyLogger import EasyLogger

class MySuperAwesomeClass():
    def __init__(self):
        self.logger = EasyLogger.get_logger(__name__)

    def foo(self):
        self.logger.debug("debug message")
        self.logger.info("info message")
        self.logger.warning("warning message")
        self.logger.critical("critical message")
        self.logger.error("error message")
Enable/Disable/Modify the log level of any module in Python:
logging.getLogger("module_name").setLevel(logging.log_level)
I've added logs to a Python 2 application using the logging module.
Now I want to add a closing statement at the end, dependent on the worst thing logged.
If the worst thing logged had the INFO level or lower, print "SUCCESS!"
If the worst thing logged had the WARNING level, write "SUCCESS!, with warnings. Please check the logs"
If the worst thing logged had the ERROR level, write "FAILURE".
Is there a way to get this information from the logger? Some built-in method I'm missing, like logging.getWorseLevelLogSoFar?
My current plan is to replace all log calls (logging.info et al) with calls to wrapper functions in a class that also keeps track of that information.
I also considered somehow releasing the log file, reading and parsing it, then appending to it. This seems worse than my current plan.
Are there other options? This doesn't seem like a unique problem.
I'm using the root logger and would prefer to continue using it, but can change to a named logger if that's necessary for the solution.
As you said yourself, I think writing a wrapper function would be the neatest and fastest approach. The problem is that you need a global variable if you're not working within a class:
worst_log_lvl = logging.NOTSET  # module-level; the 'global' keyword is only valid inside a function

def write_log(logger, lvl, msg):
    global worst_log_lvl
    logger.log(lvl, msg)
    if lvl > worst_log_lvl:
        worst_log_lvl = lvl
or make worst_log_lvl a member of a custom class that emulates the interface of logging.Logger and is used in place of the actual logger:
class CustomLoggerWrapper(object):
    def __init__(self):
        # setup of your custom logger
        self.worst_log_lvl = logging.NOTSET

    def debug(self, msg, *args, **kwargs):
        pass

    # repeat for other methods like info() etc.
As you're only using the root logger, you could attach a filter to it which keeps track of the level:
import argparse
import logging
import random

LEVELS = ['DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL']

class LevelTrackingFilter(logging.Filter):
    def __init__(self):
        self.level = logging.NOTSET

    def filter(self, record):
        self.level = max(self.level, record.levelno)
        return True

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument('maxlevel', metavar='MAXLEVEL', default='WARNING',
                        choices=LEVELS,
                        nargs='?', help='Set maximum level to log')
    options = parser.parse_args()
    maxlevel = getattr(logging, options.maxlevel)
    logger = logging.getLogger()
    logger.addHandler(logging.NullHandler())  # needs Python 2.7
    filt = LevelTrackingFilter()
    logger.addFilter(filt)
    for i in range(100):
        level = getattr(logging, random.choice(LEVELS))
        if level > maxlevel:
            continue
        logger.log(level, 'message')
    if filt.level <= logging.INFO:
        print('SUCCESS!')
    elif filt.level == logging.WARNING:
        print('SUCCESS, with warnings. Please check the logs.')
    else:
        print('FAILURE')

if __name__ == '__main__':
    main()
There's a "good" way to get this done automatically by using context filters.
TL;DR I've built a package that has the following contextfilter baked in. You can install it with pip install ofunctions.logger_utils then use it with:
from ofunctions import logger_utils
logger = logger_utils.logger_get_logger(log_file='somepath', console=True)
logger.error("Oh no!")
logger.info("Anyway...")
# Now get the worst called loglevel (result is equivalent to logging.ERROR level in this case)
worst_level = logger_utils.get_worst_logger_level(logger)
Here's the long solution, which explains what happens under the hood. Let's build a context-filter class that can be injected into logging:
import logging
import sys

class ContextFilterWorstLevel(logging.Filter):
    """
    This class records the worst loglevel that was called by logger
    Allows to change default logging output or record events
    """

    def __init__(self):
        self._worst_level = logging.INFO
        if sys.version_info[0] < 3:
            # Python 2 style super() call (fixed to name this class, not the parent)
            super(ContextFilterWorstLevel, self).__init__()
        else:
            super().__init__()

    @property
    def worst_level(self):
        """
        Returns worst log level called
        """
        return self._worst_level

    @worst_level.setter
    def worst_level(self, value):
        # type: (int) -> None
        if isinstance(value, int):
            self._worst_level = value

    def filter(self, record):
        # type: (logging.LogRecord) -> bool
        """
        A filter can change the default log output
        This one simply records the worst log level called
        """
        # Examples
        # record.msg = f'{record.msg}'.encode('ascii', errors='backslashreplace')
        # When using this filter, something can be added to logging.Formatter like '%(something)s'
        # record.something = 'value'
        if record.levelno > self.worst_level:
            self.worst_level = record.levelno
        return True
Now inject this filter into your logger instance:
logger = logging.getLogger()
logger.addFilter(ContextFilterWorstLevel())
logger.warning("One does not simply inject a filter into logging")
Now we can iterate over the attached filters and extract the worst loglevel called, like this:
for flt in logger.filters:
    if isinstance(flt, ContextFilterWorstLevel):
        print(flt.worst_level)
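A small convenience wrapper around that loop (my own addition) keeps the lookup in one place:

import logging

def get_worst_level(logger):
    """Return the worst level recorded by an attached ContextFilterWorstLevel."""
    for flt in logger.filters:
        if isinstance(flt, ContextFilterWorstLevel):
            return flt.worst_level
    return logging.NOTSET

print(logging.getLevelName(get_worst_level(logger)))  # e.g. 'WARNING'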