I'm looking for a simple way to extend the logging functionality defined in the standard Python library. I just want the ability to choose whether or not my logs are also printed to the screen.
Example: Normally to log a warning you would call:
logging.basicConfig(level=logging.DEBUG, format='%(asctime)s %(levelname)s: %(message)s', filename='log.log', filemode='w')
logging.warning("WARNING!!!")
This sets the configuration of the log and writes the warning to the log file.
I would like to have something along the lines of a call like:
logging.warning("WARNING!!!", True)
where the True argument signifies that the log message should also be printed to stdout.
I've seen some examples that override the logger class,
but I am new to the language and don't really follow what is going on or how to implement this idea. Any help would be greatly appreciated :)
The Python logging module defines these classes:
Loggers that emit log messages.
Handlers that send those messages to a destination.
Formatters that format log messages.
Filters that filter log messages.
A Logger can have Handlers. You add them by invoking the addHandler() method. A Handler can have Filters and Formatters. You similarly add them by invoking the addFilter() and setFormatter() methods, respectively.
It works like this:
import logging
# make a logger
main_logger = logging.getLogger("my logger")
main_logger.setLevel(logging.INFO)
# make some handlers
console_handler = logging.StreamHandler() # by default, sys.stderr
file_handler = logging.FileHandler("my_log_file.txt")
# set logging levels
console_handler.setLevel(logging.WARNING)
file_handler.setLevel(logging.INFO)
# add handlers to logger
main_logger.addHandler(console_handler)
main_logger.addHandler(file_handler)
Now, you can use this object like this:
main_logger.info("logged in the FILE")
main_logger.warning("logged in the FILE and on the CONSOLE")
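If you also want the timestamped format from the question's basicConfig call, you can attach a Formatter to the handlers as well; here is a minimal sketch (the format string is just an example):
formatter = logging.Formatter('%(asctime)s %(levelname)s: %(message)s')
console_handler.setFormatter(formatter)
file_handler.setFormatter(formatter)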
If you just run python on your machine, you can type the above code into the interactive console and you should see the output. The log file will get created in your current directory, if you have permission to create files in it.
I hope this helps!
It is possible to subclass the logger class returned by logging.getLoggerClass() to add new functionality to loggers. I wrote a simple class which prints green messages to stdout.
The most important parts of my code:
import logging

class ColorLogger(logging.getLoggerClass()):
    # ANSI escape codes that wrap the message in green
    __GREEN = '\033[0;32m%s\033[0m'
    __FORMAT = {
        'fmt': '%(asctime)s %(levelname)s: %(message)s',
        'datefmt': '%Y-%m-%d %H:%M:%S',
    }

    def __init__(self, format=__FORMAT):
        formatter = logging.Formatter(**format)

        # self.root is the root logger shared by all Logger instances
        self.root.setLevel(logging.INFO)
        self.root.handlers = []
        (...)
        handler = logging.StreamHandler()
        handler.setFormatter(formatter)
        self.root.addHandler(handler)

    def info(self, message):
        self.root.info(message)

    (...)

    def info_green(self, message):
        # logging performs the %s substitution lazily, producing a green message
        self.root.info(self.__GREEN, message)

    (...)

if __name__ == '__main__':
    logger = ColorLogger()
    logger.info("This message has default color.")
    logger.info_green("This message is green.")
Handlers send the log records (created by loggers) to the appropriate
destination.
(from the docs: http://docs.python.org/library/logging.html)
Just set up multiple handlers with your logging object, one to write to file, another to write to the screen.
UPDATE
Here is an example function you can call in your classes to get logging set up with a handler.
def set_up_logger(self):
    # create logger object
    self.log = logging.getLogger("command")
    self.log.setLevel(logging.DEBUG)

    # create console handler and set min level recorded to debug messages
    ch = logging.StreamHandler()
    ch.setLevel(logging.DEBUG)

    # add the handler to the log object
    self.log.addHandler(ch)
You would just need to set up another handler for files, similar to the StreamHandler code that's already there, and add it to the logging object. The line ch.setLevel(logging.DEBUG) means that this particular handler will handle logging messages that are DEBUG or higher. You'll likely want to set your console handler to WARNING or higher, since you only want the more important things to go to the console. So, your logging would work like this:
self.log.info("Hello, World!") -> goes to file
self.log.error("OMG!!") -> goes to file AND console
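Putting that together, here is a sketch of the combined setup as a standalone function (the logger name "command" comes from the snippet above; the file name "command.log" and the exact levels are assumptions, adjust them to taste):
import logging

def set_up_logger():
    # create logger object
    log = logging.getLogger("command")
    log.setLevel(logging.DEBUG)

    # console handler: only WARNING and above are shown on screen
    ch = logging.StreamHandler()
    ch.setLevel(logging.WARNING)

    # file handler: DEBUG and above are written to the file
    fh = logging.FileHandler("command.log")
    fh.setLevel(logging.DEBUG)

    # add both handlers to the log object
    log.addHandler(ch)
    log.addHandler(fh)
    return log

log = set_up_logger()
log.info("Hello, World!")   # goes to file only
log.error("OMG!!")          # goes to file AND console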
I have a Python script and I want the info method to write its messages to the console, but the warning, critical, and error methods to write their messages to a file. How can I do that?
I tried this:
import logging
console_log = logging.getLogger("CONSOLE")
console_log.setLevel(logging.INFO)
stream_handler = logging.StreamHandler()
stream_handler.setLevel(logging.INFO)
console_log.addHandler(stream_handler)
file_log = logging.getLogger("FILE")
file_log.setLevel(logging.WARNING)
file_handler = logging.FileHandler('log.txt')
file_handler.setLevel(logging.WARNING)
file_log.addHandler(file_handler)
def log_to_console(message):
    console_log.info(message)

def log_to_file(message):
    file_log.warning(message)
log_to_console("THIS SHOULD SHOW ONLY IN CONSOLE")
log_to_file("THIS SHOULD SHOW ONLY IN FILE")
but the message that should be only in the file is going to the console too, and the message that should be only in the console is duplicated. What am I doing wrong here?
What happens is that the two loggers you created propagate their records up to the root logger. The root logger does not have any handlers by default, but will use the lastResort handler if needed:
A "handler of last resort" is available through this attribute. This
is a StreamHandler writing to sys.stderr with a level of WARNING, and
is used to handle logging events in the absence of any logging
configuration. The end result is to just print the message to
sys.stderr.
(Source: the Python documentation.)
Inside the CPython source code, you can see where that call is made.
Therefore, to solve your problem, you could set the console_log and file_log loggers' propagate attribute to False.
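With your original two loggers, that is simply (a minimal sketch):
console_log.propagate = False   # records no longer bubble up to the root logger
file_log.propagate = False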
On another note, I think you should refrain from instantiating several loggers for your use case. Just use one custom logger with 2 different handlers that will each log to a different destination.
Create a custom StreamHandler to log only the specified level:
import logging
class MyStreamHandler(logging.StreamHandler):
    def emit(self, record):
        if record.levelno == self.level:
            # this ensures this handler will print only for the specified level
            super().emit(record)
Then, use it:
my_custom_logger = logging.getLogger("foobar")
my_custom_logger.propagate = False
my_custom_logger.setLevel(logging.INFO)
stream_handler = MyStreamHandler()
stream_handler.setLevel(logging.INFO)
file_handler = logging.FileHandler("log.txt")
file_handler.setLevel(logging.WARNING)
my_custom_logger.addHandler(stream_handler)
my_custom_logger.addHandler(file_handler)
my_custom_logger.info("THIS SHOULD SHOW ONLY IN CONSOLE")
my_custom_logger.warning("THIS SHOULD SHOW ONLY IN FILE")
And it works without duplicates and without misplaced log messages.
I'm using the logging module from the Python standard library and would like to obtain the current Formatter. The reason is that I'm using the multiprocessing module, and for each process I'd like to assign its logger another file handler so it logs to its own log file. When I do this in the following way
logger = logging.getLogger('subprocess')
log_path = 'log.txt'
with open(log_path, 'a') as outfile:
    handler = logging.StreamHandler(outfile)
    logger.addHandler(handler)
the messages in log.txt have no formatting at all, but I would like the messages to be in the same format as my typical logging format. My typical logging setup is shown below
logging.basicConfig(
format='%(asctime)s | %(levelname)s | %(name)s - %(process)d | %(message)s',
datefmt='%Y-%m-%d %H:%M:%S',
level=os.environ.get('LOGLEVEL', 'INFO').upper(),
)
logger = logging.getLogger(__name__)
Since it looks like the Formatter object is associated with the Handler object, I tried to obtain a handler from the main Logger object, which has the correct formatting. So I called
logger.handlers
but I got an empty list [].
So my question is, where do I get the Formatter object which has the same format that my main logger has?
For the record, I'm using python 3.8 on macOS but the code will be deployed to Linux (but still python 3.8).
Judging by the CPython source code for logging.basicConfig, it appears that the formatter object which contains your formatting string is eventually added to a handler that is passed to the root logger. So you can obtain the handler (and therefore the formatter) from the root logger object by doing
logging.root.handlers[0].formatter
In particular, to achieve the subprocess logging handler in the question, you can do this instead
handler = logging.StreamHandler(outfile)
handler.setFormatter(logging.root.handlers[0].formatter)
handler.setLevel(logging.root.level)
logger.addHandler(handler)
which will set your handler to the same logging format (and level as well!) as that of your usual logger object.
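If you'd rather not rely on the position of the handler in logging.root.handlers, an alternative (assuming you can repeat the format string) is to rebuild an equivalent Formatter yourself; the sketch below also uses a FileHandler for log.txt instead of wrapping an open file in a StreamHandler:
import logging

# rebuild a Formatter equivalent to the one configured by basicConfig above
formatter = logging.Formatter(
    fmt='%(asctime)s | %(levelname)s | %(name)s - %(process)d | %(message)s',
    datefmt='%Y-%m-%d %H:%M:%S',
)

logger = logging.getLogger('subprocess')
handler = logging.FileHandler('log.txt')
handler.setFormatter(formatter)
handler.setLevel(logging.INFO)
logger.addHandler(handler)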
I started using logging in Python and I would like to write my logs to a file.
The logging module is writing a lot of extra events to the file. I just want my own log messages and do not want those default messages from other loggers to be logged. How can I avoid that?
Below is a sample of the logging happening right now:
Changing event name from creating-client-class.iot-data to creating-client-class.iot-data-plane
Changing event name from before-call.apigateway to before-call.api-gateway
Changing event name from request-created.machinelearning.Predict to request-created.machine-learning.Predict
Setting config variable for region to 'us-west-2'
This is how I instantiated the logger:
import logging
import os

logger = logging.getLogger("S3_transfer")

def set_log_output_file(logname):
    if not os.path.exists('logs'):
        os.makedirs('logs')
    logging.basicConfig(filename='logs/{}.log'.format(logname),
                        filemode='a',
                        format='%(asctime)s %(levelname)-8s %(message)s',
                        datefmt='%a, %d %b %Y %H:%M:%S',
                        level=logging.DEBUG)

def get_logger():
    """
    Retrieves the current logger
    :return: Logger
    """
    return logger
Suggestions please?!?!
That happens because multiple calls to logging.getLogger() with the same name will always return the same instance of Logger. It is common practice in Python packages to create a logger instance with logging.getLogger(__name__), which allows all logs to be collected in a single place and the logger to be configured once. But if you don't want to see any logs from external packages, just replace __name__ in your file with your own logger name and attach the file configuration to that logger rather than to the root logger via basicConfig.
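A sketch of that idea, based on the code in the question: attach the file handler to your own named logger instead of configuring the root logger through basicConfig, so records from third-party loggers (like the ones shown above) never reach your file. The helper name and paths mirror the question and are only examples.
import logging
import os

def set_log_output_file(logname):
    if not os.path.exists('logs'):
        os.makedirs('logs')

    # configure only the "S3_transfer" logger, not the root logger
    logger = logging.getLogger("S3_transfer")
    logger.setLevel(logging.DEBUG)

    handler = logging.FileHandler('logs/{}.log'.format(logname), mode='a')
    handler.setFormatter(logging.Formatter('%(asctime)s %(levelname)-8s %(message)s',
                                           datefmt='%a, %d %b %Y %H:%M:%S'))
    logger.addHandler(handler)
    return logger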
Is there any way I can provide the filename for the logger from my main module?
I am using the following approach, however it's not working: all the logs go to the xyz.log file rather than main.log
Updated as per suggestion from nosklo
logger.py
import logging

formatter = logging.Formatter(fmt='[%(asctime)s] - {%(filename)s:%(lineno)d} %(levelname)s - %(message)s')

def _get_file_handler(file_name="xyz.log"):
    file_handler = logging.FileHandler(file_name)
    file_handler.setLevel(logging.DEBUG)
    file_handler.setFormatter(formatter)
    return file_handler

def get_logger(name):
    logger = logging.getLogger(name)
    logger.setLevel(logging.DEBUG)
    logger.addHandler(_get_file_handler())
    return logger
parser.py
import logger

log = logger.get_logger(__name__)

def parse():
    log.info("is there anyway this could go to main.log and xyz.log")
main.py
import logging
import logger
import parser

log = logger.get_logger(__name__)

if __name__ == '__main__':
    for handler in log.handlers:
        if isinstance(handler, logging.FileHandler):
            log.removeHandler(handler)
    log.addHandler(logger._get_file_handler())
    log.info("is there anyway this could go to main.log and xyz.log?")
    parser.parse()
Is there a way I can set the log file name from my main.py module and not from the logger.py module?
You're calling get_logger() first, so by the time you set the class attribute FileName.file_name = "main.log", the get_logger function has already finished and the logger is already configured to write to xyz.log; changing the variable later won't change the logger anymore, since it is already defined.
To change the previously selected file, you'd have to retrieve the logger, remove the previous handler and add a new file handler. Another option is to set the variable before calling get_logger() so when you call it, the variable already has the correct value.
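For example, one way to choose the file name from main.py is to pass it through get_logger; this is only a sketch against the logger.py above, and the file_name parameter is an addition that is not in your original code:
# logger.py (sketch)
def get_logger(name, file_name="xyz.log"):
    logger = logging.getLogger(name)
    logger.setLevel(logging.DEBUG)
    logger.addHandler(_get_file_handler(file_name))
    return logger

# main.py (sketch)
log = logger.get_logger(__name__, file_name="main.log")
log.info("this goes to main.log")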
Logging instances can have multiple file handlers. Use a function like this to just add another handler with the additional output path you want. Log messages will get sent to both (or all) text logs added to the instance. You can even configure the handlers to have different logging levels, so you can filter messages into different logs for critical errors, info messages, etc.
import logging
def add_handler(output_log_path, log):
    # Set up text logger and add it to logging instance
    file_logger = logging.FileHandler(output_log_path)
    file_logger.setLevel(logging.DEBUG)
    formatter = logging.Formatter('%(asctime)s | logger name: %(name)s | module: %(module)s | lineno: %(lineno)d | %(message)s')
    file_logger.setFormatter(formatter)
    log.addHandler(file_logger)
    return log
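A usage sketch (the logger name and file paths are just examples): call add_handler once per output file on the same logging instance.
log = logging.getLogger("my_app")
log.setLevel(logging.DEBUG)
add_handler("main.log", log)
add_handler("debug.log", log)
log.info("this message is written to both main.log and debug.log")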
Is there anything in this code that would explain why my info messages aren't going into the log? Properly formatted warnings and above are going into both log files.
Initializing logger:
logger = logging.getLogger()
f = logging.Formatter('%(asctime)s\n%(levelname)s: %(funcName)s %(message)s')
out = logging.handlers.RotatingFileHandler(filename=self.f_stdout, maxBytes=1048576, backupCount=99)
err = logging.handlers.RotatingFileHandler(filename=self.f_stderr, maxBytes=1048576, backupCount=99)
out.setLevel(logging.INFO)
err.setLevel(logging.WARNING)
err.setFormatter(f)
logger.addHandler(out)
logger.addHandler(err)
Usage:
logging.info('this doesnt get logged')
logging.warning('this gets logged to stdout and stderr with respective formatting')
You never actually set the log level on the root logger object itself (the logger variable in your code). Both handlers and loggers have levels, and a record only reaches the output if it passes both thresholds.
Since you don't set the root logger's level, it stays at its default (WARNING). Try adding a call to logger.setLevel(logging.INFO) to change that.
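In other words, a minimal sketch of the fix for your setup:
logger = logging.getLogger()
logger.setLevel(logging.INFO)   # the root logger's default is WARNING, which filters out info()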