Python standard logging

I like using the Python logging module because it standardizes my application and makes it easier to get metrics. The problem I face is that for every application (or file.py) I keep putting this at the top of my code:
import logging
import os
import time

logger = logging.getLogger(__name__)
if not os.path.exists('log'):
    os.makedirs('log')
logName = time.strftime("%Y%m%d.log")
hdlr = logging.FileHandler('log/%s' % logName)
logger.setLevel(logging.INFO)
ch = logging.StreamHandler()
ch.setLevel(logging.DEBUG)
formatter = logging.Formatter('%(asctime)s - %(name)s - %(funcName)s %(levelname)s - %(message)s')
ch.setFormatter(formatter)
hdlr.setFormatter(formatter)
logger.addHandler(ch)
logger.addHandler(hdlr)
This is tedious and repetitive. Is there a better way to do this?
How do people log for a large application with multiple modules?

Take a look at logging.basicConfig().
If you wrap basicConfig() in a function, you can just import that function and pass specific args (e.g. log filename, format, level, etc.).
It will help condense the code a bit and make it more extensible.
For example -
import logging

def test_logging(filename, format):
    logging.basicConfig(filename=filename, format=format, level=logging.DEBUG)
    # test
    logging.info('Info test...')
    logging.debug('Debug test...')
Then just import the test_logging() function into other programs.
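For instance, a hypothetical second script (the file and module names here are just assumptions for illustration) could reuse it like this:
# another_program.py
import logging
from log_setup import test_logging  # assuming the function above lives in log_setup.py

test_logging('app.log', '%(asctime)s - %(levelname)s - %(message)s')
logging.info('Configured once for the whole process.')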
Hope that this helps.

Read Using logging in multiple modules from the Logging Cookbook.
What you need to do is use the getLogger() function to get a logger with pre-defined settings:
import logging
logger = logging.getLogger('some_logger')
You set those settings just once at application startup time.
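A minimal sketch of that pattern (file names assumed): do the handler setup once in your entry point, and elsewhere just ask for the logger by name:
# main.py - configure once at startup
import logging

logger = logging.getLogger('some_logger')
logger.setLevel(logging.INFO)
handler = logging.FileHandler('app.log')
handler.setFormatter(logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s'))
logger.addHandler(handler)

# any_module.py - no setup needed, getLogger() returns the same object
import logging

logger = logging.getLogger('some_logger')
logger.info('configured once, reused everywhere')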

Related

How to log using MemoryHandler in Python

I am very new to Python and am trying to add logging (to a log file) to a project. As soon as I added some logging code, the runtime tripled, so it was suggested to me to use a facility that could store the logs in Python's memory and then write them out to a logfile, in the hope that the runtime would not increase so much.
I started searching for ways to do that and found this resource on MemoryHandler, which I interpreted as something that could help me achieve my purpose:
Base1.py (Attempt1)
I tried this code first, without the MemoryHandler:
import logging
import logging.config
import logging.handlers

formatter = logging.Formatter('%(asctime)s — %(name)s — %(levelname)s — %(funcName)s:%(lineno)d — %(message)s')

def setup_logger(name, log_file, level=logging.INFO):
    '''to set up loggers'''
    handler = logging.FileHandler(log_file)
    handler.setFormatter(formatter)
    logger = logging.getLogger(name)
    logger.setLevel(level)
    logger.addHandler(handler)
    return logger

checking = setup_logger(__name__, 'run_logfile.log')
checking.info("this is a test")
It took around 11 seconds to run the entire project. That is actually a lot, because the data volumes are currently nil and it should take around 3 seconds, which is what it did before I added the logging code. So next, I tried the code below in the hope of making it faster:
Base2.py (Attempt2)
import logging
import logging.config
import logging.handlers

formatter = logging.Formatter('%(asctime)s — %(name)s — %(levelname)s — %(funcName)s:%(lineno)d — %(message)s')

def setup_logger(name, log_file, level=logging.INFO):
    '''to set up loggers'''
    handler = logging.FileHandler(log_file)
    handler.setFormatter(formatter)
    memoryhandler = logging.handlers.MemoryHandler(
        capacity=1024 * 100,
        flushLevel=logging.INFO,
        target=handler,
        flushOnClose=True
    )
    logger = logging.getLogger(name)
    logger.setLevel(level)
    logger.addHandler(handler)
    logger.addHandler(memoryhandler)
    return logger

checking = setup_logger(__name__, 'run_logfile.log')
checking.info("this is a test")
This also takes 11 seconds. It does work, but it is not at all faster than without the MemoryHandler, so I am wondering whether my code is still wrong.
Am I doing anything wrong here? Or is there a way to keep the logs without making the runtime longer?
It wouldn't be faster if you're logging INFO messages and the flushLevel is also INFO - it will flush after every message. What happens if you set the flushLevel to e.g. ERROR?
Also - if adding logging triples your execution time, something else is likely to be wrong - logging doesn't add that much overhead.
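As a sketch of that suggestion (file name assumed), you would attach only the MemoryHandler, so records are buffered in memory and only written to disk in batches:
import logging
import logging.handlers

target = logging.FileHandler('run_logfile.log')  # wrapped by the buffer, not added to the logger
memoryhandler = logging.handlers.MemoryHandler(
    capacity=100,               # flush after 100 buffered records...
    flushLevel=logging.ERROR,   # ...or as soon as an ERROR comes in
    target=target,
    flushOnClose=True,
)

logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)
logger.addHandler(memoryhandler)  # note: the FileHandler itself is NOT added

logger.info('buffered in memory, written out later')
logging.shutdown()  # flushes and closes all handlers at exit
Note that the Attempt 2 code adds both handler and memoryhandler to the logger, so every record still goes straight to the file in addition to being buffered.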

Logging Module to pass event and logging level

Hi, I want to create a logging module that will allow me to set the logging level and pass a message to the function. Although this doesn't work, it's my idea and needs some guidance.
def log_message(loglevel, message):
    logging.getLogger().setLevel(loglevel.upper())
    logging.basicConfig(level=logging.getLevelName, filename="mylog.log",
                        filemode="w", format="%(asctime)s - %(levelname)s - %(message)s")
    logging.getLogger(message)

log_message("info", 'this is a test')
This implementation may not be thread safe and should not be used at production level; it just fixes what is not working in your function.
import logging

# map level names to logging's numeric levels
LEVELS = {"NOTSET": logging.NOTSET, "DEBUG": logging.DEBUG, "INFO": logging.INFO,
          "WARNING": logging.WARNING, "ERROR": logging.ERROR, "CRITICAL": logging.CRITICAL}

handler = logging.FileHandler("mylog.log", "a")
formatter = logging.Formatter("%(asctime)s - %(levelname)s - %(message)s")
handler.setFormatter(formatter)

logger = logging.getLogger()
logger.addHandler(handler)
logger.setLevel(LEVELS["DEBUG"])

# your function
def log_message(logLevel, message):
    # map the string level to its int value and log the message
    logger.log(LEVELS[logLevel.upper()], message)

log_message("info", "asd")
Well, setLevel() just specifies the minimum threshold for displaying a log record, so it should be fine:
Logger.setLevel() specifies the lowest-severity log message a logger will handle, where debug is the lowest built-in severity level and critical is the highest built-in severity.
Still, I would not recommend this for production, as you need a better-written logging function. Please refer to https://docs.python.org/3/howto/logging.html#basic-logging-tutorial and https://docs.python.org/3/howto/logging.html#advanced-logging-tutorial for a better logging setup.
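As a side note, the hand-built LEVELS mapping can be avoided: as far as I know, the standard library already understands level names, for example:
import logging

logger = logging.getLogger(__name__)
logger.setLevel("INFO")  # setLevel() accepts a level name as a string

def log_message(loglevel, message):
    # getLevelName() maps a name like "INFO" to its numeric value (20)
    logger.log(logging.getLevelName(loglevel.upper()), message)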

Logging into single file from multiple modules in python when TimedRotatingFileHandler is used

I have a main process which makes use of various other modules, and these modules also use other modules. I need to write all the logs to a single log file. Because of the TimedRotatingFileHandler, my logging behaves differently after midnight. I found out why this is so, but I could not work out clearly how to solve it.
Below is log_config.py, which all other modules use to get a logger and log.
import logging
import sys
from logging.handlers import TimedRotatingFileHandler

FORMATTER = logging.Formatter("%(asctime)s — %(name)s — %(message)s")
LOG_FILE = "my_app.log"

def get_file_handler():
    file_handler = TimedRotatingFileHandler(LOG_FILE, when='midnight')
    file_handler.setFormatter(FORMATTER)
    return file_handler

def get_logger(logger_name):
    logger = logging.getLogger(logger_name)
    logger.setLevel(logging.DEBUG)  # better to have too much log than not enough
    logger.addHandler(get_file_handler())
    # with this pattern, it's rarely necessary to propagate the error up to parent
    logger.propagate = False
    return logger
All other modules call
logger = log_config.get_logger(__name__)
and use it to log.
I came to know about QueueHandler and QueueListener, but I am not sure how to use them in my code.
How can I use these to serialize logs to a single file?
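For what it's worth, the Logging Cookbook pattern the question alludes to looks roughly like this (a sketch, not a drop-in replacement): every module's logger gets only a lightweight QueueHandler, while a single QueueListener owns the one TimedRotatingFileHandler, so rotation happens in exactly one place:
# log_config.py - sketch of the QueueHandler/QueueListener variant
import logging
import queue
from logging.handlers import QueueHandler, QueueListener, TimedRotatingFileHandler

FORMATTER = logging.Formatter("%(asctime)s — %(name)s — %(message)s")
LOG_FILE = "my_app.log"

log_queue = queue.Queue(-1)  # unbounded queue shared by every logger

file_handler = TimedRotatingFileHandler(LOG_FILE, when='midnight')
file_handler.setFormatter(FORMATTER)

# the only object that ever touches (and rotates) the file
listener = QueueListener(log_queue, file_handler)
listener.start()

def get_logger(logger_name):
    logger = logging.getLogger(logger_name)
    logger.setLevel(logging.DEBUG)
    if not logger.handlers:  # avoid stacking handlers on repeated calls
        logger.addHandler(QueueHandler(log_queue))
    logger.propagate = False
    return logger
At shutdown you would call listener.stop() to drain the queue before the process exits.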

Can I set the logging level of all loaded modules at once?

main module
module A
module B
The main module uses the functions of modules A and B.
The module functions call logger.info, logger.warning and so on, to show what I did wrong in the code.
Objective:
- Log everything in main, A and B.
- A way to set the logging level of A and B from the main Jupyter notebook at any moment (e.g. when I need more information about a function of A, switch the logging level from INFO to DEBUG).
By the way, the main script has:
import logging, sys

# create logger
logger = logging.getLogger('logger')
logger.setLevel(logging.DEBUG)

# create file and console handlers and set their levels to debug
fh = logging.FileHandler('process.log')
fh.setLevel(logging.DEBUG)
ch = logging.StreamHandler(sys.stdout)
ch.setLevel(logging.DEBUG)

# create formatter
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s\n')

# add formatter to the handlers
fh.setFormatter(formatter)
ch.setFormatter(formatter)

# add the handlers to the logger
logger.addHandler(fh)
logger.addHandler(ch)
I want to use a Logger object and configure that object, instead of logging's basicConfig(). But if I can't, other ways are OK.
If you do:
logger = logging.getLogger('logger')
in module A and module B, then they will have access to the same logger as your main file. From there, you can set whatever level you want, at any time. E.g.
# ../module_a.py
import logging
logger = logging.getLogger('logger')
logger.setLevel(whatever) # Now all instances of "logger" will be set to that level.
Basically, loggers are globally registered by name and accessible through the logging module directly from any other module.
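So in the notebook, changing the level at any moment is a one-liner (a minimal usage sketch):
import logging

# more detail from module A's functions, right now:
logging.getLogger('logger').setLevel(logging.DEBUG)
# ...and back to normal afterwards:
logging.getLogger('logger').setLevel(logging.INFO)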

How to direct one logging to one file and rest to other in python with filtering

I have been googling for quite some time to separate the Selenium (automation API) debug output from my application log (my own logging info) into two different files, but the automation API log also ends up in my application log file.
I tried the following approach (I tried the commented line too):
import logging

def get_selenium_logger():
    logger = logging.getLogger('selenium.webdriver.remote.remote_connection')
    fh = logging.FileHandler('results/selenium_log.log', delay=True)
    fh.setLevel(logging.DEBUG)
    logger.addHandler(fh)
    return logger

def get_application_logger():
    logger = logging.getLogger()
    logging.basicConfig(level=logging.DEBUG)
    fh = logging.FileHandler('results/automation_log.log', delay=True)
    fh.setLevel(logging.DEBUG)
    formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(module)s - %(funcName)s - %(lineno)d - %(message)s')
    fh.setFormatter(formatter)
    # logger.removeFilter(logging.Filter('selenium.webdriver.remote.remote_connection'))
    logger.addHandler(fh)
    return logger

def my_automation_code():
    get_selenium_logger()
    app_logger = get_application_logger()
    app_logger.info("Test **************")
The debug log from the automation API (Selenium) is also written to "automation_log.log". How can I filter it out?
Instead of using the root logger for your application, use a logger whose name doesn't start with "selenium." to separate the two logs. For example:
def get_application_logger():
    logger = logging.getLogger('myapp')
    # rest of the stuff as in your snippet
Update: The name of your application is whatever you decide. In a module which is imported, you can use __name__ as the logger name; in a script, it can be whatever you like, e.g. the basename of the script.
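A minimal sketch of that layout (the 'myapp' name and file paths are just the ones from this thread): because "myapp" is not a descendant of "selenium.", Selenium's records never propagate into its handler. Setting propagate = False on the Selenium logger is an extra assumption on my part, to keep its DEBUG records away from any root handlers as well:
import logging

def get_selenium_logger():
    logger = logging.getLogger('selenium.webdriver.remote.remote_connection')
    fh = logging.FileHandler('results/selenium_log.log', delay=True)
    logger.addHandler(fh)
    logger.propagate = False  # keep Selenium records out of the root logger
    return logger

def get_application_logger():
    logger = logging.getLogger('myapp')  # a named logger, not the root logger
    logger.setLevel(logging.DEBUG)
    fh = logging.FileHandler('results/automation_log.log', delay=True)
    fh.setFormatter(logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s'))
    logger.addHandler(fh)
    return logger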
