Global Python Logger with file Rotator

I have created a global logger using the following:
def logini():
    logfile = '/var/log/cs_status.log'
    import logging
    import logging.handlers
    global logger
    logger = logging.getLogger()
    logging.basicConfig(filename=logfile, filemode='a',
                        format='%(asctime)s %(name)s %(levelname)s %(message)s',
                        datefmt='%y%m%d-%H:%M:%S', level=logging.DEBUG, propagate=0)
    handler = logging.handlers.RotatingFileHandler(logfile, maxBytes=2000000, backupCount=5)
    logger.addHandler(handler)
    __builtins__.logger = logger
It works, however I am getting two outputs for every log entry: one with the formatting and one without.
I realize that this is caused by the file rotator, since if I comment out the two handler lines I get a single, correctly formatted log entry.
How can I prevent the log rotator from producing a second entry?

Currently you're configuring two handlers that write to the same logfile: basicConfig(filename=...) attaches a plain FileHandler to the root logger, and you then add a RotatingFileHandler on top of that, so every record is emitted twice. To use only the RotatingFileHandler, get rid of the basicConfig call:
logger = logging.getLogger()
logger.setLevel(logging.DEBUG)  # without this the root logger stays at WARNING and drops DEBUG/INFO records
handler = logging.handlers.RotatingFileHandler(logfile, maxBytes=2000000,
                                               backupCount=5)
formatter = logging.Formatter(fmt='%(asctime)s %(name)s %(levelname)s %(message)s',
                              datefmt='%y%m%d-%H:%M:%S')
handler.setFormatter(formatter)
handler.setLevel(logging.DEBUG)
logger.addHandler(handler)
All basicConfig does is provide an easy way to instantiate either a StreamHandler (the default) or a FileHandler, attach it to the root logger, and set the format and log level (see the docs for more information). If you need a handler other than these two, you should instantiate and configure it yourself.
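For illustration, the basicConfig call in the question is roughly equivalent to the following manual setup (a simplified sketch; the real implementation also handles other keyword arguments), which makes it clear why two handlers end up on the root logger:

import logging

logfile = '/var/log/cs_status.log'

# roughly what logging.basicConfig(filename=logfile, filemode='a', format=..., datefmt=..., level=...) does:
root = logging.getLogger()
file_handler = logging.FileHandler(logfile, mode='a')
file_handler.setFormatter(logging.Formatter(
    fmt='%(asctime)s %(name)s %(levelname)s %(message)s',
    datefmt='%y%m%d-%H:%M:%S'))
root.addHandler(file_handler)   # first handler: plain FileHandler
root.setLevel(logging.DEBUG)
# the question's code then adds a RotatingFileHandler as a second handler,
# so every record is written to the same file twice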

Related

How to reset root logger config in Python

I originally used logging.basicConfig(filename='logs/example.log') to create a log file. After reading the docs I found that it is not recommended to modify the class attributes of logging. Now that I've changed these attributes, how can I change them back and reset the logging module?
The output of the code below creates two log files, app.log and example.log. The latter is an artifact of .basicConfig() being called when I first tried to set up the logger.
UPDATE:
grep -R "example.log" /lib/python3.8/ does not output anything, so I'm not sure what was changed in the logging source code to cause the example.log file to be created every time.
import logging
import logging.handlers

LOG_FILENAME = 'logs/app.log'

# https://stackoverflow.com/questions/3630774/logging-remove-inspect-modify-handlers-configured-by-fileconfig
# need to run this every time since I changed class attributes of logger
# still creates example.log file
print(logging.getLogger())
root_log = logging.getLogger()
for handler in root_log.handlers:
    root_log.removeHandler(handler)
# OR
# logging.basicConfig(force=True)  # one way to reset root logger format

# create logger
logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)

# create console and file handler and set level to info
fh = logging.handlers.RotatingFileHandler(LOG_FILENAME, maxBytes=250000, backupCount=5)
fh.setLevel(logging.INFO)
# ch = logging.StreamHandler()
# ch.setLevel(logging.INFO)

# create formatter
ffh = logging.Formatter('%(asctime)s : %(name)-12s : %(levelname)-8s : %(message)s')
# fch = logging.Formatter('%(name)-12s : %(levelname)-8s : %(message)s')

# add formatter handlers
fh.setFormatter(ffh)
# ch.setFormatter(fch)

# add handler to logger
logger.addHandler(fh)
# logger.addHandler(ch)

logger.info('instance logger')
# logging.shutdown()
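For reference, a minimal sketch of one way to fully reset the root logger before reconfiguring it, iterating over a copy of the handler list so removal is safe while looping (on Python 3.8+, logging.basicConfig(force=True) performs the same close-and-remove step before applying the new configuration):

import logging

root = logging.getLogger()
# iterate over a copy, because removeHandler mutates root.handlers
for handler in root.handlers[:]:
    root.removeHandler(handler)
    handler.close()
# from here on, only handlers that are added explicitly will receive records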

Python Custom logging handler to create each log file per key in the dictionary

I have a dictionary with some sample data like below:
{"screener": "media", "anchor": "media", "reader": "media"}
and I want to create a log file for each key in the dictionary. I'm planning to use this logging in a streaming job, where it will be reused for every batch, and I plan to use a rotating file handler per key as well.
Here is my snippet:
import logging
from logging.handlers import RotatingFileHandler
import time

logger = logging.getLogger('my_logger')
logger.setLevel(logging.DEBUG)

dict_store = {"screener": "media", "anchor": "media", "reader": "media"}
dict_log_handler = {}

def createFileHandler(name):
    handler = RotatingFileHandler(name, maxBytes=2000000000, backupCount=10)
    formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
    handler.setFormatter(formatter)
    return handler

def runBatch():
    print("logging batch")
    for name in dict_store.keys():
        print(name)
        if name in dict_log_handler:
            print(f"getting logger from dict_handler {name}")
            handler = dict_log_handler[name]
        else:
            handler = createFileHandler(name)
            dict_log_handler[name] = handler
        logger.addHandler(handler)
        logger.info('Hello, world!')
        logger.info('Hello, world!')
        logger.info('Hello, world!')
        logger.info('Hello, world!')
        logger.info('Hello, world!')
        logger.info('Hello, world!')
        logger.removeHandler(handler)
        time.sleep(0.1)

for i in range(0, 3):
    runBatch()
It is currently working as expected. I'm just thinking of implementing similar behaviour by overriding or creating a custom handler (so that if we pass a name, it does this automatically), with the overall expectation that it should not affect performance.
The question is: I want to wrap this in a class and use it directly. Is that possible?
Your question is not entirely clear about what exactly you want to do, but if the idea is to use multiple loggers, as your code suggests, then you can simplify things.
logging.getLogger(name) is the method used to access a specific logger; in your code you are using a single logger and switching between handlers with addHandler and removeHandler.
You can create multiple loggers like this:
import logging
from logging.handlers import RotatingFileHandler

dict_store = {"screener": "media", "anchor": "media", "reader": "media"}

for name in dict_store:
    logger = logging.getLogger(name)
    logger.setLevel(logging.DEBUG)
    handler = RotatingFileHandler(name, maxBytes=2000000000, backupCount=10)
    formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
    handler.setFormatter(formatter)
    logger.addHandler(handler)
You can wrap the above code in your own logging class or method and use it as needed; keep in mind this setup needs to be called only once (see the sketch after the usage example below).
Afterwards you can fetch a specific logger by name and use its logging methods:
logger = logging.getLogger(<logger_name>)
logger.debug("debug")
logger.info("info")
logger.warning("warning")
logger.error("error")
logger.critical("critical")
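A minimal sketch of one way to wrap the per-key setup in a class, reusing the handler settings from the snippet above (the class name BatchLoggers and its methods are illustrative, not an existing API):

import logging
from logging.handlers import RotatingFileHandler

class BatchLoggers:
    """Create one rotating-file logger per key and hand them out by name."""

    def __init__(self, names, max_bytes=2000000000, backup_count=10):
        formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
        for name in names:
            logger = logging.getLogger(name)
            logger.setLevel(logging.DEBUG)
            # guard so that re-instantiating the class does not stack duplicate handlers
            if not logger.handlers:
                handler = RotatingFileHandler(name, maxBytes=max_bytes, backupCount=backup_count)
                handler.setFormatter(formatter)
                logger.addHandler(handler)

    def get(self, name):
        return logging.getLogger(name)

# usage
loggers = BatchLoggers({"screener": "media", "anchor": "media", "reader": "media"})
loggers.get("screener").info("Hello, world!")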

python: difference between logging.Logger and logging.getLogger

Yes, I see the Python docs say: "Loggers are never instantiated directly, but always through the module-level function logging.getLogger(name)", but I have an issue to debug and want to know the root cause.
Here is the example:
#!/usr/bin/python
import logging
logger = logging.getLogger("test")
format = "%(asctime)s [%(levelname)-8s] %(message)s"
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter(format, datefmt="%Y-%m-%d %H:%M:%S"))
handler.setLevel(logging.DEBUG)
logger.addHandler(handler)
logger.info("test")
With logging.getLogger("test") here, the log message is not printed.
If I change logging.getLogger("test") to logging.Logger("test"), the log message is printed:
#!/usr/bin/python
import logging
logger = logging.Logger("test")
format = "%(asctime)s [%(levelname)-8s] %(message)s"
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter(format, datefmt="%Y-%m-%d %H:%M:%S"))
handler.setLevel(logging.DEBUG)
logger.addHandler(handler)
logger.info("test")
Or we can use logging.getLogger("test") and set the logger level to logging.DEBUG:
#!/usr/bin/python
import logging
logger = logging.getLogger("test")
format = "%(asctime)s [%(levelname)-8s] %(message)s"
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter(format, datefmt="%Y-%m-%d %H:%M:%S"))
handler.setLevel(logging.DEBUG)
logger.addHandler(handler)
logger.setLevel(logging.DEBUG)
logger.info("test")
logging.getLogger("test") returns a logger named "test" that is attached to the logger hierarchy; its own level defaults to NOTSET, so it inherits its effective level from the root logger, whose default is WARNING (30) (https://docs.python.org/3/library/logging.html#logging-levels). Your INFO message is therefore dropped at the logger before it ever reaches the handler. logging.Logger("test") constructs a standalone logger object with level NOTSET (0) and no parent, so nothing is filtered at the logger and the record goes straight to your handler. You can check via logger.getEffectiveLevel() to see the difference.
Ideally you would create loggers through getLogger with proper names and set their levels explicitly, rather than relying on the default configuration.
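A quick way to see the difference is to compare effective levels (the numbers are the standard level values from the table linked above):

import logging

a = logging.getLogger("test")   # attached to the logger hierarchy, level NOTSET
b = logging.Logger("test")      # standalone instance, level NOTSET, no parent

print(a.getEffectiveLevel())    # 30 -- WARNING, inherited from the root logger
print(b.getEffectiveLevel())    # 0  -- NOTSET, so nothing is filtered at the logger

a.setLevel(logging.DEBUG)
print(a.getEffectiveLevel())    # 10 -- now INFO records reach the handlers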

How do I use logging in python tornado and write to file

Currently I'm using logging.getLogger().setLevel(logging.DEBUG), which I think logs everything whose level is >= DEBUG. Is that a correct assumption? I can see a difference when I change logging.DEBUG to logging.ERROR, so I guess I'm correct.
Also, how do I write these logging rows to a file?
Here is an example of writing a log to a file:
import logging
logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)
# create a file handler
handler = logging.FileHandler('hello.log')
handler.setLevel(logging.INFO)
# create a logging format
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
handler.setFormatter(formatter)
# add the handlers to the logger
logger.addHandler(handler)
logger.info('Hello baby')
More detail:
http://victorlin.me/posts/2012/08/good-logging-practice-in-python/
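Since the question mentions Tornado: Tornado writes its own output through the tornado.access, tornado.application and tornado.general loggers, which propagate to the root logger, so one possible approach (a sketch, with app.log as a placeholder file name) is to attach the file handler to the root logger so that both your messages and Tornado's request logging end up in the same file:

import logging

handler = logging.FileHandler('app.log')
handler.setFormatter(logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s'))

root = logging.getLogger()
root.setLevel(logging.DEBUG)   # keep everything at DEBUG and above
root.addHandler(handler)

# records from tornado.access, tornado.application and tornado.general now go to
# app.log as well, alongside anything logged via logging.getLogger(__name__)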

Python flushing log buffer at exit

I wrote a Python script which executes a while loop and requires a keyboard interrupt or system shutdown to terminate.
I would like my log file to save the log output; currently the log file gets created, but nothing gets written to it.
The following creates an output file with the contents I expect:
import logging
logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)
# create a file handler
handler = logging.FileHandler('hello.log')
# handler.setLevel(logging.INFO)
# create a logging format
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
handler.setFormatter(formatter)
# add the handlers to the logger
logger.addHandler(handler)
logger.info('Mmmm...donuts')
But when I integrate it into my code, the log file lacks any contents:
from logging import log, FileHandler, getLogger, Formatter, CRITICAL
logger = getLogger(__name__)
logger.setLevel(CRITICAL)
handler = FileHandler("test.log")
formatter = Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
handler.setFormatter(formatter)
logger.addHandler(formatter)
logger.info("start")
enter_while_loop()
I believe I should handle this using atexit, but how?
Thank you for your time and consideration.
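Two things stand out in the second snippet: the logger level is set to CRITICAL, so logger.info records are filtered out before they reach the file, and addHandler is passed the formatter instead of the handler. Beyond that, the logging module already registers logging.shutdown() with atexit, which flushes and closes every handler on a normal interpreter exit (including after an unhandled KeyboardInterrupt), so an explicit hook is only needed for extra behaviour; a minimal sketch under those assumptions:

import atexit
import logging

logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)       # CRITICAL would filter out info() calls
handler = logging.FileHandler("test.log")
handler.setFormatter(logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s'))
logger.addHandler(handler)          # add the handler, not the formatter

# optional: a flush registered here runs before logging's own shutdown hook;
# note that atexit callbacks do not run if the process is killed by an unhandled signal
atexit.register(handler.flush)

logger.info("start")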
