The Python logging library allows you to log at different severity levels:
https://docs.python.org/3/howto/logging.html#logging-levels
But I would like to use it to log based on custom tags, for example "show_intermediate_results" or "display_waypoints_in_the_code" or "report_time_for_each_module" and so on...
Those tags cannot be placed on a severity ladder; during development I would sometimes want to see them and sometimes not, depending on what I am developing/debugging at the moment.
So the question is: can I use the logging library to do that?
Btw, I DO want to use the library rather than write something myself, because I want it to be thread-safe.
As per the documentation, you can use logging.Filter objects with Logger and Handler instances
for more sophisticated filtering than is provided by levels.
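One way to get tag-style behavior with filters is to pass a tag through the extra argument of each logging call and attach a Filter that consults a set of currently enabled tags. Below is a minimal sketch; TagFilter, ENABLED_TAGS, and the tag attribute are illustrative names, not part of the library:
import logging

# Tags that should currently be visible; edit this set while developing.
ENABLED_TAGS = {'show_intermediate_results'}

class TagFilter(logging.Filter):
    def filter(self, record):
        # Untagged records always pass; tagged records pass only
        # if their tag is currently enabled.
        tag = getattr(record, 'tag', None)
        return tag is None or tag in ENABLED_TAGS

logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger(__name__)
logger.addFilter(TagFilter())

# Shown, because its tag is enabled:
logger.debug('intermediate result: %s', 42,
             extra={'tag': 'show_intermediate_results'})
# Suppressed, because its tag is not enabled:
logger.debug('waypoint reached',
             extra={'tag': 'display_waypoints_in_the_code'})
Since the filtering happens inside the logging machinery, this stays within the library's thread-safe plumbing.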
My scenario is this: I'm using a simple logger and have a lot of log.info()/log.debug() messages in my code.
I would like to change the log level dynamically, mainly being able to "turn on/off" debug-level logs.
My question is this: is it somehow possible to do so, but make the change only affect parts of my code? Let's say only when I'm inside methods of a specific class.
Or is the only way to do something like that to use a different log object for that class and change only its level?
You can achieve this by using the stdlib logging output for structlog, and then just configuring logging to different levels for each logger.
Every logger has a name that is used to differentiate where the logs came from and how to handle them. Usually you define a single logger per Python file like this:
import logging
logger = logging.getLogger(__name__)
What this does is give the logger the name of the Python module. For example, if the file is proj/routers/route1.py then the logger name will be proj.routers.route1.
This is useful because the std logging library allows you to configure a log level on the proj.routers logger, and that level will be applied to every proj.routers.* logger.
see https://docs.python.org/3/howto/logging.html#configuring-logging
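For instance, a minimal sketch using the proj.routers name from above:
import logging

# The root logger, and everything that inherits from it, stays at WARNING.
logging.basicConfig(level=logging.WARNING)

# Only the proj.routers subtree becomes verbose; proj.routers.route1,
# proj.routers.route2, etc. all inherit this level.
logging.getLogger('proj.routers').setLevel(logging.DEBUG)
Because levels can be changed at runtime, calling setLevel again later is one way to "turn on/off" debug logs for just that part of the code.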
Python's logging facility comes with several levels of severity:
CRITICAL
ERROR
WARNING*
INFO
DEBUG
NOTSET
*The root logger defaults to WARNING.
How does one deal with regular messages in this setup?
With "regular messages" I mean those that are not warnings but should always be shown to the user.
An example could be:
Loop over a list of files and apply an accumulating operation to them. A "regular message" could be "currently working on {filename}". The INFO level might hold the status of individual sub-steps of the file treatment that the user may not be interested in knowing most of the time. On the WARNING level could be potential problems like two files containing differing entries for the same key.
I think that I want a level between WARNING and INFO, which might be called NOTICE, NOTE, REGULAR or similar. This level would be my default instead of WARNING. Why does logging not provide such a level?
I understand that I can add such a level fairly easily but the logging manual strongly argues against custom levels. Thus, I assume there must be a canonical way of logging messages like the above ...
EDIT: Just to clarify regarding the vote to close for being "opinion-based": I would like to know how logging is supposed to be used sticking to the given set of levels and conventions. Thus, I would think that the correct answer is not necessarily an opinion but should be the best practice of using this module.
The manual states "Defining your own levels is possible, but should not be necessary, as the existing levels have been chosen on the basis of practical experience". In my example above, I seem to be lacking one level: either "regular messages" or -- if I switch to INFO meaning "regular messages" -- something like "verbose info" for the user.
Log messages are not really intended for the user. What you are describing sounds more like regular output. However, there are at least two possible ways to solve this. Either make the regular messages INFO level and move the not-as-interesting messages to the DEBUG level, or use a log filter. To quote the documentation:
Filters can be used by Handlers and Loggers for more sophisticated
filtering than is provided by levels.
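A sketch of the first option, applied to the file-loop example from the question (the filenames are placeholders):
import logging

# INFO becomes the default, so regular messages are always shown;
# sub-step details only appear if the level is lowered to DEBUG.
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

for filename in ['a.txt', 'b.txt']:
    logger.info('currently working on %s', filename)     # regular message
    logger.debug('finished sub-step 1 of %s', filename)  # detail, hidden by default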
I am new to Python and just trying to learn and find better ways to write code. I want to create a custom class for logging and use the logging package inside it. I want the functions in this class to be reusable, so I can do my logging from other scripts rather than writing custom code in each and every script. Is there a good link you guys can share? Is this the right way to handle logging? I just want to avoid writing the same code in every script if I can reuse it from one module.
I would highly appreciate any reply.
You can build a custom class that utilizes the built-in Python logging library. There isn't really any one right way to handle logging, as the library allows you to use 5 standard levels indicating the severity of events (DEBUG, INFO, WARNING, ERROR, and CRITICAL). The way you use these levels is application-specific. Here's another good explanation of the package.
It's indeed a good idea to keep all your logging configuration (formatters, level, handlers) in one place.
create a class wrapping a custom logger with your configuration
expose methods for logging with different levels
import this class wherever you want
create an instance of this class to log where you want
To make sure all your custom logging objects have the same config, you should make the logging class own the configuration.
I don't think there are any links I can share for the whole thing, but you can find links for the individual details I mentioned easily enough.
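Putting the steps above together, here is a minimal sketch; AppLogger and its method names are placeholders, not an established API:
import logging

class AppLogger:
    def __init__(self, name, level=logging.INFO):
        self._logger = logging.getLogger(name)
        self._logger.setLevel(level)
        # Guard against attaching duplicate handlers on repeated imports.
        if not self._logger.handlers:
            handler = logging.StreamHandler()
            handler.setFormatter(logging.Formatter(
                '%(asctime)s %(name)s %(levelname)s: %(message)s'))
            self._logger.addHandler(handler)

    def info(self, msg, *args):
        self._logger.info(msg, *args)

    def error(self, msg, *args):
        self._logger.error(msg, *args)

# In any script:
log = AppLogger(__name__)
log.info('reusing one logging configuration everywhere')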
I want logging messages from my program, but not from the libraries it uses. I can disable / change the logging level of individual libraries like this:
logging.getLogger('alibrary').setLevel(logging.ERROR)
The problem is, my program uses lots and lots of libraries, which use lots themselves. So doing this individually for every library is a big chore. Is there a better way to do this?
You could set the root logger's level to e.g. ERROR and then selectively set more verbose levels for your own code:
logging.getLogger().setLevel(logging.ERROR)
then, assuming the libraries you use are well-behaved with regard to logging, the effective levels of their loggers will be ERROR, just as if you had set each one individually.
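For example, a minimal sketch ('myapp' is a placeholder for your own top-level package name):
import logging

# Silence everything by default; library loggers inherit this level
# from the root.
logging.basicConfig(level=logging.ERROR)

# Re-enable verbose output only for your own code.
logging.getLogger('myapp').setLevel(logging.DEBUG)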
I am trying to figure out the best approach to apply some custom processing on Python logging messages with minimal impact to our codebase.
The problem is this: we have many different projects logging a lot of things, and among these can be found some AWS keys. As a security requirement, we need to strip out all AWS keys from the logs, and there are multiple ways to go about this:
The naive approach would be to go in each and every project, and modify each logging call to manually strip out keys. This is the least preferred approach as it would be the most manual.
Implement a different module that provides the same functions as the logging module (like info, error, ...), where each function definition would first apply a regex to filter out AWS keys and then call the actual logging method behind the scenes. Then each project can be modified to something like import custom_logging_module as logging and none of the logging calls need to be modified. The drawback of this approach, though, is that in the log every logging call appears to come from this module, so you can't track where your messages originate.
Not sure in what form yet, but it sounds like it would be possible to implement a custom Logger or LogRecord and register it when initializing the logging. This wouldn't have the problems of the previous approach.
I have done some research on approach #3 but couldn't really find a way to do this. Does anyone have experience applying some custom processing on logging messages that would apply to this use case?
You could use a custom LogRecord class to achieve this, as long as you could identify keys in text unambiguously. For example:
import logging
import re

KEY = 'PK_SOME_PUBLIC_KEY'
SECRET_KEY = 'SK_SOME_PRIVATE_KEY'

class StrippingLogRecord(logging.LogRecord):
    pattern = re.compile(r'\b[PS]K_\w+\b', re.I)

    def getMessage(self):
        message = super(StrippingLogRecord, self).getMessage()
        message = self.pattern.sub('-- key redacted --', message)
        return message

if hasattr(logging, 'setLogRecordFactory'):
    # 3.x has this
    logging.setLogRecordFactory(StrippingLogRecord)
else:
    # 2.x needs monkey-patching
    logging.LogRecord = StrippingLogRecord

logging.basicConfig(level=logging.DEBUG)
logging.debug('Message with a %s', KEY)
logging.debug('Message with a %s', SECRET_KEY)
In my example I've assumed you could use a simple regex to spot keys, but a more sophisticated method could be used if that's not workable.
Note that the above code should be run before any of the code which logs keys.