What would be a good way to create logs (with the Python logging module) inside a constantly running loop, without producing a large amount of useless log output?
An example would be a loop that constantly lists a folder and performs some action when it sees a file of a specific type.
I want to log that no files were found, or that files were found but of the wrong type, without logging that same line on every folder check, as the loop might run many times a second.
Create a Handler that subclasses whichever handler class provides the functionality you need. Store either the last, or all of the previously logged messages that you don't want to emit again:
import logging
import sys

def make_filetype_aware_handler(handler_class):
    class DontRepeatFiletypeHandler(handler_class):
        def __init__(self, *args, **kwds):
            super().__init__(*args, **kwds)
            self.previous_types = set()

        def emit(self, record):
            # Emit only the first record seen for each file type.
            if record.file_type not in self.previous_types:
                self.previous_types.add(record.file_type)
                super().emit(record)

    return DontRepeatFiletypeHandler

FiletypeStreamHandler = make_filetype_aware_handler(logging.StreamHandler)

logger = logging.getLogger()
logger.setLevel(logging.DEBUG)
logger.addHandler(FiletypeStreamHandler(sys.stderr))

# file_type must be passed via extra= so it becomes an attribute on the LogRecord.
logger.debug('Found file of type %s', 'x-type/zomg', extra={'file_type': 'x-type/zomg'})
import logging

logger = logging.getLogger('test')

# logging to a file
hdlr = logging.FileHandler('test.log')
formatter = logging.Formatter('%(asctime)s %(filename)s %(lineno)s %(levelname)s %(message)s')
hdlr.setFormatter(formatter)
logger.addHandler(hdlr)
logger.setLevel(logging.DEBUG)
Then, in the loop, you have to check for the file type and whether the file is present or not.
Then add:
logger.debug('File Type : %s', file_type)
and also:
if file_present:
    logger.debug('File Present : %s', file_present)
Log the less important events at a lower severity, like DEBUG. See setLevel and SysLogHandler.
At development time set the level to DEBUG, and as your application matures, set it to a more reasonable value like INFO or ERROR.
Your app should also do something about the errors: remove files of the wrong type and/or create the missing files, or move the wrongly configured directories out of the job polling into a quarantine location, so your log will not be flooded.
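For example, a minimal sketch of that setup (the 'poller' logger name is a placeholder, and the /dev/log syslog socket is Linux-specific):

import logging
import logging.handlers

logger = logging.getLogger('poller')

# During development: everything, including the per-iteration DEBUG noise.
logger.setLevel(logging.DEBUG)
# In production: only INFO and above, so the loop's DEBUG messages are dropped cheaply.
# logger.setLevel(logging.INFO)

# Optionally forward serious events to syslog as well.
syslog = logging.handlers.SysLogHandler(address='/dev/log')
syslog.setLevel(logging.ERROR)
logger.addHandler(syslog)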
My understanding is that you are trying to avoid logging the same message over and over again.
If this is your issue, I would create a set of the file_types you have already logged. Be careful, though: if this is going to run forever, the set will keep growing and you will eventually run out of memory.
logged = set()  # the built-in set; the old sets.Set module is long gone

while yourCondition:
    file_type = get_next_file_type()
    needToLog = ...  # determine if you need to log this thing
    if needToLog and file_type not in logged:
        logger.info("BAH! " + file_type)
        logged.add(file_type)
A bit of a hack, but much easier, is to "misuse" functools.lru_cache:
from functools import lru_cache
from logging import Logger, getLogger

# Keep track of 10 different messages and then warn again
@lru_cache(10)
def warn_once(logger: Logger, msg: str):
    logger.warning(msg)
You can increase the 10 to suppress more messages, or set it to None to suppress every duplicate indefinitely.
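Usage is then a plain call; repeated calls with the same logger and message hit the cache and produce no output (the message text here is just an example):

logger = getLogger(__name__)
warn_once(logger, "file of wrong type found")   # emitted
warn_once(logger, "file of wrong type found")   # cached, suppressed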
Related
I use a package which logs too much.
It has loggers set up properly, so I can get individual loggers using getLogger.
Is it possible to decrease the logging level for all messages produced by a particular logger?
I know there is the setLevel function, which discards all messages below a certain level, but I still want the messages to be logged, just at a lower level, e.g. INFO-level messages should instead be logged at DEBUG.
It depends what exactly you want to happen. You could either use a Filter or an Adapter to modify the log level.
A filter is easier, but only works properly for downgrading the log level.
The adapter solution has the advantage of setting the level at the earliest possible moment. Its downside is that it is just a wrapper around the logger, so you need to pass it to every place that would use the logger. If the logger is used by a third-party module, the adapter solution won't be possible.
# Using filters
import logging

def changeLevel(record):
    if record.levelno == logging.INFO:
        record.levelno = logging.DEBUG
        record.levelname = "DEBUG"
    return record

logger = logging.getLogger('name')
logger.addFilter(changeLevel)
# Using an adapter
import logging

class ChangeLevel(logging.LoggerAdapter):
    def log(self, level, msg, *args, **kwargs):
        if level == logging.INFO:
            level = logging.DEBUG
        super().log(level, msg, *args, **kwargs)

logger_ = logging.getLogger('name')
logger = ChangeLevel(logger_, {})
I'm using pytest-3.7.1 which has good support for logging, including live logging to stdout during tests. I'm using --log-cli-level=DEBUG to dump all debug-level logging to the console as it happens.
The problem I have is that --log-cli-level=DEBUG turns on debug logging for all modules in my test program, including third-party dependencies, and it floods the log with a lot of uninteresting output.
Python's logging module has the ability to set logging levels per module. This enables selective logging - for example, in a normal Python program I can turn on debugging for just one or two of my own modules, and restrict the log output to just those, or set different log levels for each module. This enables turning off debug-level logging for noisy libraries.
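For example, in plain Python (the logger names are placeholders):

import logging

logging.getLogger("my_module").setLevel(logging.DEBUG)        # verbose for my own code
logging.getLogger("noisy_library").setLevel(logging.WARNING)  # quiet a dependency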
So what I'd like to do is apply the same concept to pytest's logging - i.e. specify a logging level, from the command line, for specific non-root loggers. For example, if I have a module called test_foo.py then I'm looking for a way to set the log level for this module from the command line.
I'm prepared to roll my own if necessary (I know how to add custom arguments to pytest), but before I do that I just want to be sure that there isn't already a solution. Is anyone aware of one?
I had the same problem, and found a solution in another answer:
Instead of --log-cli-level=DEBUG, use --log-level DEBUG. It disables all third-party module logs (in my case, I had plenty of matplotlib logs), but still outputs your app logs for each test that fails.
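If that's still not selective enough and you do roll your own, here is a minimal sketch of a custom pytest option; the --module-log-level name and the NAME=LEVEL syntax are my own invention, and pytest's own log_cli settings may still apply their levels on top of this:

# conftest.py
import logging

def pytest_addoption(parser):
    # e.g. pytest --module-log-level=test_foo=DEBUG --module-log-level=noisy_lib=WARNING
    parser.addoption("--module-log-level", action="append", default=[],
                     help="set the level of one logger, as NAME=LEVEL")

def pytest_configure(config):
    for spec in config.getoption("--module-log-level"):
        name, _, level = spec.partition("=")
        logging.getLogger(name).setLevel(level.upper())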
I got this working by writing a factory class that sets the root logger's level to logging.INFO and applies the logging level from the command line to all the loggers obtained from the factory. If the command-line logging level is higher than the minimum global log level specified in the class (the MINIMUM_GLOBAL_LOG_LEVEL constant), the global log level isn't changed.
import logging

MODULE_FIELD_WIDTH_IN_CHARS = '20'
LINE_NO_FIELD_WIDTH_IN_CHARS = '3'
LEVEL_NAME_FIELD_WIDTH_IN_CHARS = '8'
MINIMUM_GLOBAL_LOG_LEVEL = logging.INFO

class EasyLogger():
    root_logger = logging.getLogger()
    specified_log_level = root_logger.level
    format_string = '{asctime} '
    format_string += '{module:>' + MODULE_FIELD_WIDTH_IN_CHARS + 's}'
    format_string += '[{lineno:' + LINE_NO_FIELD_WIDTH_IN_CHARS + 'd}]'
    format_string += '[{levelname:^' + LEVEL_NAME_FIELD_WIDTH_IN_CHARS + 's}]: '
    format_string += '{message}'
    level_change_warning_sent = False

    @classmethod
    def get_logger(cls, logger_name):
        if not EasyLogger._logger_has_format(cls.root_logger, cls.format_string):
            EasyLogger._setup_root_logger()
        logger = logging.getLogger(logger_name)
        logger.setLevel(cls.specified_log_level)
        return logger

    @classmethod
    def _setup_root_logger(cls):
        formatter = logging.Formatter(fmt=cls.format_string, style='{')
        if not cls.root_logger.hasHandlers():
            handler = logging.StreamHandler()
            cls.root_logger.addHandler(handler)
        for handler in cls.root_logger.handlers:
            handler.setFormatter(formatter)
        cls.root_logger.setLevel(MINIMUM_GLOBAL_LOG_LEVEL)
        if (cls.specified_log_level < MINIMUM_GLOBAL_LOG_LEVEL and
                cls.level_change_warning_sent is False):
            cls.root_logger.log(
                max(cls.specified_log_level, logging.WARNING),
                "Setting log level for %s class to %s, all others to %s" % (
                    __name__,
                    cls.specified_log_level,
                    MINIMUM_GLOBAL_LOG_LEVEL
                )
            )
            cls.level_change_warning_sent = True

    @staticmethod
    def _logger_has_format(logger, format_string):
        for handler in logger.handlers:
            # Compare against the formatter's format string (handler.format
            # is a method, so it cannot be compared to a string directly).
            return handler.formatter is not None and handler.formatter._fmt == format_string
        return False
The above class is then used to send logs normally, as you would with a logging.Logger object, as follows:
from EasyLogger import EasyLogger

class MySuperAwesomeClass():
    def __init__(self):
        self.logger = EasyLogger.get_logger(__name__)

    def foo(self):
        self.logger.debug("debug message")
        self.logger.info("info message")
        self.logger.warning("warning message")
        self.logger.critical("critical message")
        self.logger.error("error message")
Enable/Disable/Modify the log level of any module in Python:
logging.getLogger("module_name").setLevel(logging.log_level)
What have I missed?
I have a project with lots of files and modules. I want each module, and some classes, to have their own log file. For the most part I want just INFO and ERROR logged, but occasionally I'll want DEBUG.
However, I only want ERROR sent to the screen (IPython on Spyder via Anaconda). This is high-speed code getting network updates several times each millisecond, and printing all the INFO messages is not only very annoying but crashes Spyder.
In short, I want only ERROR sent to the screen. Everything else goes to a file. Below is my code for creating a separate log file for each class. It is called in the __init__ method of each class that should log items. The name argument is typically __class__.__name__. The fname argument is set as well. Typically, the lvl and formatter args are left with the defaults. The files are being created and they look more or less correct.
My searches are not turning up useful items and I'm missing something when I read the Logging Cookbook.
Code:
import logging, traceback
import time, datetime
import collections

standard_formatter = logging.Formatter(
    '[{}|%(levelname)s]::%(funcName)s-%(message)s\n'.format(datetime.datetime.utcnow()))
standard_formatter.converter = time.gmtime

logging.basicConfig(level=logging.ERROR,
                    filemode='w')

err_handler = logging.StreamHandler()
err_handler.setLevel(logging.ERROR)
err_handler.setFormatter(standard_formatter)

Log = collections.namedtuple(
    'LogLvls',
    [
        'info',
        'debug',
        'error',
        'exception'
    ]
)


def setup_logger(
        name: str,
        fname: [str, None]=None,
        lvl=logging.INFO,
        formatter: logging.Formatter=standard_formatter
) -> Log:
    logger = logging.getLogger(name)
    if fname is None:
        handler = logging.StreamHandler()
    else:
        handler = logging.FileHandler(fname, mode='a')
    handler.setFormatter(formatter)
    logger.setLevel(lvl)
    logger.addHandler(handler)
    logger.addHandler(err_handler)
    return Log(
        debug=lambda msg: logger.debug('{}::{}'.format(name, msg)),
        info=lambda msg: logger.info('{}::{}'.format(name, msg)),
        error=lambda msg: logger.error('{}::{}'.format(name, msg)),
        exception=lambda e: logger.error('{}::{}: {}\n{}'.format(name, type(e), e, repr(traceback.format_stack()))),
    )
You cannot have
logging.basicConfig(level=logging.ERROR,
                    filemode='w')
and your specialized config at the same time. Comment it out or remove it; then, using
if __name__ == '__main__':
    setup_logger('spam', "/tmp/spaaam")
    logging.getLogger('spam').info('eggs')
    logging.getLogger('spam').error('eggs')
    logging.getLogger('spam').info('eggs')
you'll have only the ERROR-level messages on the console, and both INFO and ERROR in the file.
I looked through the tutorials for the Python logging module here and didn't see anything that would let me make multiple logs of different levels for the same output. In the end I would like to have three logs:
<timestamp>_DEBUG.log (debug level)
<timestamp>_INFO.log (info Level)
<timestamp>_ERROR.log (error level)
Is there a way to, in one script, generate multiple log files for the same input?
<-------------UPDATE #1-------------------------->
So in implementing @Robert's suggestion, I now have a small issue, probably due to not fully understanding what is being done in his code.
Here is my code in scriptRun.py
import os
import logging

logger = logging.getLogger("exceptionsLogger")
debugLogFileHandler = logging.FileHandler("Debug.log")
errorLogFileHandler = logging.FileHandler("Error.Log")
formatter = logging.Formatter('%(asctime)s %(levelname)s %(message)s')
errorLogFileHandler.setFormatter(formatter)
debugLogFileHandler.setFormatter(formatter)
logger.addHandler(debugLogFileHandler)
logger.addHandler(errorLogFileHandler)

class LevelFilter(logging.Filter):
    def __init__(self, level):
        self.level = level

    def filter(self, record):
        return record.levelno == self.level

debugLogFileHandler.addFilter(LevelFilter(logging.DEBUG))
errorLogFileHandler.addFilter(LevelFilter(logging.ERROR))

directory = []
for dirpath, dirnames, filenames in os.walk(r"path\to\scripts"):  # raw string so \t isn't a tab
    for filename in [f for f in filenames if f.endswith(".py")]:
        directory.append(os.path.join(dirpath, filename))

for entry in directory:
    execfile(entry)  # Python 2
    for lists in x:
        if lists[0] == 2:
            logger.error(lists[1] + " " + lists[2])
        elif lists[0] == 1:
            logger.debug(lists[1] + " " + lists[2])
An example of what this is running is:
import sys

def script2Test2():
    print y  # NameError on purpose: y is undefined

def script2Ttest3():
    mundo = "hungry"

global x
x = []
theTests = (script2Test2, script2Ttest3)
for test in theTests:
    try:
        test()
        x.append([1, test.__name__, " OK"])
    except:
        error = str(sys.exc_info()[1])
        x.append([2, test.__name__, error])
Now to my issue: running scriptRun.py does not throw any errors, and Debug.log and Error.log are created, but only the error log is populated with entries.
Any idea why?
<------------------------Update #2----------------------->
So I realized that nothing "lower" than WARNING is being logged. Even if I remove the filters and add debugLogFileHandler.setLevel(logging.DEBUG), it does not seem to matter. If I change the actual log call to logger.warning or higher, it will print to the logs. Of course, once I uncomment debugLogFileHandler.addFilter(LevelFilter(logging.DEBUG)), I get no log activity in Debug.log. I'm tempted to just make my own log level, but that seems like a really bad idea in case anyone/anything else uses this code.
<-------------------------Final UPDATE--------------------->
Well, I was stupid and forgot to set the logger itself to log DEBUG-level events. Since by default the logging module doesn't log anything below WARNING, it wasn't logging any of the debug information I sent it.
Final thanks and shoutout to @Robert for the filter.
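In other words, the missing line was just:

logger.setLevel(logging.DEBUG)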
Create multiple Handlers, each for one output file (INFO.log, DEBUG.log etc.).
Add a filter to each handler that only allows the specific level.
For example:
import logging
# Set up loggers and handlers.
# ...
class LevelFilter(logging.Filter):
    def __init__(self, level):
        self.level = level

    def filter(self, record):
        return record.levelno == self.level

debugLogFileHandler.addFilter(LevelFilter(logging.DEBUG))
infoLogFileHandler.addFilter(LevelFilter(logging.INFO))
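For completeness, a self-contained sketch of the three timestamped files the question asks for (the timestamp format is my choice, and note that the logger itself must be set to DEBUG, which was the gotcha in the updates above):

import logging
import time

class LevelFilter(logging.Filter):
    def __init__(self, level):
        super().__init__()
        self.level = level

    def filter(self, record):
        return record.levelno == self.level

logger = logging.getLogger("exceptionsLogger")
logger.setLevel(logging.DEBUG)  # without this, DEBUG and INFO never reach the handlers

stamp = time.strftime("%Y%m%d-%H%M%S")
for level, name in [(logging.DEBUG, "DEBUG"), (logging.INFO, "INFO"), (logging.ERROR, "ERROR")]:
    handler = logging.FileHandler("{}_{}.log".format(stamp, name))
    handler.addFilter(LevelFilter(level))
    logger.addHandler(handler)

logger.debug("lands in the _DEBUG.log file only")
logger.error("lands in the _ERROR.log file only")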
I'm writing a server app which should be able to log at different levels both on the console and a log file.
The problem is, if logging.basicConfig() is set it will log to the console but it must be set in the main thread.
It can also be set with logging.basicConfig(filename='logger.log') to write to a file.
Setting a handler either for console logging (logging.StreamHandler()) or file logging (logging.FileHandler()) complements the logging.basicConfig() options set.
The problem is that the settings are not independent.
What I mean is, the log level of logging.basicConfig() must include the handler's level, or the event won't be logged.
So if I set basicConfig to log to a file, and a StreamHandler to log to the console, the file log level must be lower than the console level.
(Also, the basicConfig option logs all other logs.)
I tried to create two handlers, one for the console and one for the log file; they work, but whatever log type is specified by basicConfig() will still be displayed, duplicating messages.
Is there a way to disable the output of basicConfig()?
Or any other way to implement these options?
Thanks.
You don't say in your question exactly what levels you want on your console and file logging. However, you don't need to call basicConfig(), as it's only a convenience function. You can do e.g. (code just typed in, not tested):
import logging

logger = logging.getLogger(__name__)
configured = False

def configure_logging():
    global configured
    if not configured:
        logger.setLevel(logging.DEBUG)  # or whatever
        console = logging.StreamHandler()
        file = logging.FileHandler('/path/to/file')
        # set a level on the handlers if you want;
        # if you do, they will only output events that are >= that level
        logger.addHandler(console)
        logger.addHandler(file)
        configured = True
Events are passed to the logger first, and if an event is to be processed (due to comparing the level of the logger and the event) then the event is passed to each handler of the logger and all its ancestors' handlers as well. If a level is set on a handler the event may be dropped by that handler, otherwise it will output the event.
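Applied to your case, a minimal sketch (the file path is a placeholder): leave the file handler without a level so it receives everything the logger passes, and set the console handler to ERROR:

import logging

logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)            # let everything through to the handlers

file_handler = logging.FileHandler('/path/to/file')  # no level set: gets DEBUG and up
console = logging.StreamHandler()
console.setLevel(logging.ERROR)           # console shows only ERROR and CRITICAL

logger.addHandler(file_handler)
logger.addHandler(console)

logger.info('goes to the file only')
logger.error('goes to the file and the console')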
Below is example code that handles logging for all kinds of exceptions:
import mysql.connector
import logging

logging.basicConfig(filename=r'C:\Users\root\Desktop\logs.txt', level=logging.DEBUG,
                    format='%(asctime)s,%(levelname)s:%(message)s', datefmt='%d-%m-%Y %H:%M:%S')

while True:
    try:
        mydb = mysql.connector.connect(host='localhost', user='root',
                                       passwd='password123', database='shiva')
        mycursor = mydb.cursor()
        logging.info("Connected mysql db successfully...\n")
        mycursor.execute("show databases")
        mycursor.execute("Create table employee(name varchar(20), salary float(20))")
        mydb.commit()
    except Exception as e:
        logging.info("Trying to Connect MysqlDB...")
        logging.critical("Error Occurred While Connecting...\n\n" "CAUSEDBY: " + str(e))
        logging.warning("Check Login Credentials.")