Is there an easy way to execute a Python script and have all error messages saved to some kind of log file (csv, txt, anything)?
class MyClass():
    def __init__(self, something):
        self.something = something

    def my_function(self):
        # code here
Or is the only way to add try and except statements everywhere and write the error messages to a file?
Yes, you can do that using Python's logging module.
Here's a specific example, using the info from https://realpython.com/python-logging/ with your code:
import logging
logging.basicConfig(filename='app.log', filemode='w', format='%(name)s - %(levelname)s - %(message)s')
logging.warning('This will get logged to a file')
class MyClass():
    def __init__(self, something):
        self.something = something

    def my_function(self):
        logging.warning('my_function entered...')
After you instantiate your class and call my_function, you should get logging output in your log file (here app.log):
root - WARNING - This will get logged to a file
root - WARNING - my_function entered...
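If the goal is specifically to capture error messages, tracebacks included, here is a small sketch of one way to extend this (it assumes the same app.log configuration as above): wrap the top-level call in try/except and use logging.exception, which logs at ERROR level and appends the full traceback, so you don't need try/except scattered everywhere.

import logging

logging.basicConfig(filename='app.log', filemode='w',
                    format='%(name)s - %(levelname)s - %(message)s')

def main():
    return 1 / 0  # stand-in for your script's real work

try:
    main()
except Exception:
    # logging.exception records the message at ERROR level plus the traceback,
    # so any uncaught error ends up in app.log.
    logging.exception('Unhandled error while running the script')
    raise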
I'm new to Python and trying to create a wrapper over logging so that formatting changes and similar tweaks can be reused.
I've written my wrapper class in the following way:
import logging
import sys
from datetime import datetime
class CustomLogger:
    """This is custom logger class"""

    _format_spec = f"[%(name)-24s | %(asctime)s | %(levelname)s ] (%(filename)-32s : %(lineno)-4d) ==> %(message)s"
    _date_format_spec = f"%Y-%m-%d # %I:%M:%S %p"

    def __init__(self, name, level=logging.DEBUG, format_spec=None):
        """"""
        self.name = name
        self.level = level
        self.format_spec = format_spec if format_spec else CustomLogger._format_spec
        # Complete logging configuration.
        self.logger = self.get_logger(self.name, self.level)

    def get_file_handler(self, name, level):
        """This is a method to get a file handler"""
        today = datetime.now().strftime(format="%Y-%m-%d")
        file_handler = logging.FileHandler("{}-{}.log".format(name, today))
        file_handler.setLevel(level)
        file_handler.setFormatter(logging.Formatter(self.format_spec,
                                                    datefmt=CustomLogger._date_format_spec))
        return file_handler

    def get_stream_handler(self, level):
        """This is a method to get a stream handler"""
        stream_handler = logging.StreamHandler(sys.stdout)
        stream_handler.setLevel(level)
        stream_handler.setFormatter(logging.Formatter(self.format_spec,
                                                      datefmt=CustomLogger._date_format_spec))
        return stream_handler

    def get_logger(self, name, level):
        """This is a method to get a logger"""
        logger = logging.getLogger(name)
        logger.addHandler(self.get_file_handler(name, level))
        # logger.addHandler(self.get_stream_handler(level))
        return logger

    def info(self, msg):
        """info message logger method"""
        self.logger.info(msg)

    def error(self, msg):
        """error message logger method"""
        self.logger.error(msg)

    def debug(self, msg):
        """debug message logger method"""
        self.logger.debug(msg)

    def warn(self, msg):
        """warning message logger method"""
        self.logger.warn(msg)

    def critical(self, msg):
        """critical message logger method"""
        self.logger.critical(msg)

    def exception(self, msg):
        """exception message logger method"""
        self.logger.exception(msg)
But when I try to use my CustomLogger, nothing goes into the log file.
def main():
    """This main function"""
    logger = CustomLogger(name="debug", level=logging.DEBUG)
    logger.info("Called main")

if __name__ == "__main__":
    main()
If I do a similar thing without the class/function wrapper, it works. I'm not sure where I'm going wrong; any pointer will help.
Further update on the question
After packaging this (custom_logger.py) and using it in the actual application (app.py), I'm noticing it always prints custom_logger.py as the filename, never app.py.
How can I fix this? I'm OK with rewriting the CustomLogger class if required.
I missed calling setLevel() on the logger. After doing that, the problem is resolved. I also added the pid to the file handler's file name to avoid future issues in a multi-process environment.
Let me know if there's anything I can do better here with respect to any other potential issues.
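For reference, a minimal sketch of what those fixes might look like. The pid-based file name is an illustrative naming choice of mine, and the stacklevel part assumes Python 3.8+; it addresses the earlier filename question by attributing records to the code that calls the wrapper (e.g. app.py) instead of custom_logger.py.

import logging
import os
from datetime import datetime

class CustomLogger:
    # ...format specs and __init__ unchanged from the class above...

    def get_file_handler(self, name, level):
        """File handler whose file name includes the pid (illustrative naming)."""
        today = datetime.now().strftime("%Y-%m-%d")
        file_handler = logging.FileHandler("{}-{}-{}.log".format(name, today, os.getpid()))
        file_handler.setLevel(level)
        return file_handler

    def get_logger(self, name, level):
        """This is a method to get a logger"""
        logger = logging.getLogger(name)
        # The missing piece: without this the logger keeps the default WARNING
        # threshold and drops info()/debug() records before they reach the handlers.
        logger.setLevel(level)
        logger.addHandler(self.get_file_handler(name, level))
        return logger

    def info(self, msg):
        """info message logger method"""
        # stacklevel=2 (Python 3.8+) makes %(filename)s and %(lineno)d point at
        # the caller of CustomLogger.info, not at this wrapper module.
        self.logger.info(msg, stacklevel=2)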
I'm coding this in Python as a newbie and trying to understand how to create a single instance of a logging class in a separate module. This automation script is split across several files, and I want to record the logs in a file while displaying them in the console at the same time, so I'd be using two handlers: FileHandler() and StreamHandler(). The logger is initialized in a separate file called debugLogs.py, which is accessed from multiple Python modules in the script. But if the separate modules each call debugLogs.py, multiple instances of the logger are created, which means messages get printed multiple times, which is not what I want. That is why I need a singleton, so only one instance is ever created. How do you suggest I go about doing that? I have included my version of debugLogs.py below:
#debugLogs.py
import logging
import logging.handlers

#open readme file and read name of latest log file created
def get_name():
    with open("latestLogNames.txt") as f:
        for line in f:
            pass
        latestLog = line
    logfile_name = latestLog[:-1]
    return logfile_name

class Logger(object):
    _instance = None

    def __new__(self, logfile_name):
        if not self._instance:
            self._instance = super(Logger, self).__new__(self)
            logger = logging.getLogger(__name__)
            logger.setLevel(logging.INFO)
            formatter = logging.Formatter('%(message)s')
            file_handler = logging.FileHandler(logfile_name)
            file_handler.setFormatter(formatter)
            stream_handler = logging.StreamHandler()
            logger.addHandler(file_handler)
            logger.addHandler(stream_handler)
        return self._instance
So get_name() reads the name of the latest log file from the readme file latestLogNames.txt, and the logs go into that existing file.
I understand that my singleton class code is not right, and I am confused about how to initialize the whole class structure. But somehow I would have to pass that logfile_name value to the class. So I am planning to call this logger from a different module with something like this:
#differentModule.py
import debugLogs
logger = debugLogs.Logger(debugLogs.get_name())
And then I would use logger.info("...") to print the logs as well as store them in the file. Please tell me how to restructure debugLogs.py and how to call it from the different modules of my script.
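A common way to get the single-instance behaviour (a sketch, not the only option) is to skip the class entirely and configure a module-level logger once in debugLogs.py. Because Python caches imported modules, every module that imports it shares the same configured logger, and a guard against re-adding handlers keeps messages from being duplicated. The logger name "automation" is just an example.

# debugLogs.py -- minimal sketch of a module-level "singleton" logger
import logging

def get_name():
    # Same idea as the original: read the latest log file name from the text file.
    with open("latestLogNames.txt") as f:
        logfile_name = f.readlines()[-1].strip()
    return logfile_name

logger = logging.getLogger("automation")
if not logger.handlers:  # configure only once, even if this module is imported many times
    logger.setLevel(logging.INFO)
    formatter = logging.Formatter('%(message)s')
    file_handler = logging.FileHandler(get_name())
    file_handler.setFormatter(formatter)
    logger.addHandler(file_handler)
    logger.addHandler(logging.StreamHandler())

Other modules would then just do from debugLogs import logger and call logger.info(...); the if not logger.handlers check is what prevents the repeated lines.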
Env Summary:
Hi, I have a Python project where I request some information from a storage system and display it on screen.
Env:
module executer
Module rest has a class RestThreePar
Module storagebox has a class StorageBox
Module locater has a class Locater
Problem:
I want to implement a single module/class for logging for the whole project, which needs to log to 4 different files (each module has its own log file). When I import logging separately in each module it works as expected, but whenever I try to make a separate class and import that one (like from logger import Logger) it goes wrong (I am not sure, but it looks like it writes the same line multiple times in the files).
I am not a developer and don't know what to do next.
I have tried to use the Python documentation but had no success.
https://docs.python.org/3.6/howto/logging-cookbook.html
import logging
import os
class Logger:
    def __init__(self, cls_name, file):
        self.cls_name = cls_name
        current_dir = os.getcwd()
        self.file = f'{current_dir}/logs/{file}'

    def log(self):
        logger = logging.getLogger(self.cls_name)
        logger.setLevel(logging.INFO)
        formatter = logging.Formatter('%(asctime)s,%(levelname)s,%(name)s,%(message)s')
        file_handler = logging.FileHandler(self.file)
        file_handler.setFormatter(formatter)
        logger.addHandler(file_handler)
        return logger

    def log_info(self, msg):
        logger = self.log()
        logger.info(msg)

    def log_exception(self, msg):
        logger = self.log()
        logger.exception(msg)

if __name__ == '__main__':
    pass
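The repeated lines most likely come from log() adding a new FileHandler to the same named logger on every call: logging.getLogger(self.cls_name) keeps returning the same logger object, so each log_info()/log_exception() call attaches one more handler. A minimal sketch of a fix, keeping the per-module file idea (other names are otherwise as in the question), is to configure each logger once in __init__ and only add a handler if none is attached yet:

import logging
import os

class Logger:
    def __init__(self, cls_name, file):
        self.logger = logging.getLogger(cls_name)
        self.logger.setLevel(logging.INFO)
        # Only attach a handler the first time this named logger is configured;
        # without this guard every Logger(...) or log() call adds another handler,
        # which is what produces the duplicated lines in the files.
        if not self.logger.handlers:
            formatter = logging.Formatter('%(asctime)s,%(levelname)s,%(name)s,%(message)s')
            file_handler = logging.FileHandler(os.path.join(os.getcwd(), 'logs', file))
            file_handler.setFormatter(formatter)
            self.logger.addHandler(file_handler)

    def log_info(self, msg):
        self.logger.info(msg)

    def log_exception(self, msg):
        self.logger.exception(msg)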
I am writing some code that will output a log to either the screen or a file, but not both.
I thought the easiest way to do this would be to write a class:
class WriteLog:
    "write to screen or to file"

    def __init__(self, stdout, filename):
        self.stdout = stdout
        self.logfile = open(filename, 'a')

    def write(self, text):
        self.stdout.write(text)
        self.logfile.write(text)

    def close(self):
        self.stdout.close()
        self.logfile.close()
And then call it something like this:
output = WriteLog(sys.stdout, 'log.txt')
However, I'm not sure how to allow for switching between the two, i.e. there should be an option within the class that sets WriteLog to use either stdout or the filename. Once that option has been set I just use WriteLog without any need for if statements etc.
Any ideas? Most of the solutions I see online are trying to output to both simultaneously.
Thanks.
Maybe something like this? It uses the symbolic name 'stdout' or 'stderr' in the constructor, or a real filename. The use of if is limited to the constructor. By the way, I think you're trying to optimize prematurely (which is the root of all evil): you're trying to save time on ifs, while in real life the program will spend much more time on file I/O, making the potential waste on your ifs negligible.
import sys
class WriteLog:
    def __init__(self, output):
        self.output = output
        if output == 'stdout':
            self.logfile = sys.stdout
        elif output == 'stderr':
            self.logfile = sys.stderr
        else:
            self.logfile = open(output, 'a')

    def write(self, text):
        self.logfile.write(text)

    def close(self):
        if self.output != 'stdout' and self.output != 'stderr':
            self.logfile.close()

    def __del__(self):
        self.close()

if __name__ == '__main__':
    a = WriteLog('stdout')
    a.write('This goes to stdout\n')
    b = WriteLog('stderr')
    b.write('This goes to stderr\n')
    c = WriteLog('/tmp/logfile')
    c.write('This goes to /tmp/logfile\n')
I'm not an expert in it, but try the logging library: you can have a logger with two handlers, one for a file and one for a stream, and then add or remove handlers dynamically.
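A rough sketch of that idea (the logger name, file name, and the log_to helper are illustrative, not from the question): keep both handlers around and attach only the one you currently want, swapping them with addHandler/removeHandler.

import logging
import sys

logger = logging.getLogger("writelog")
logger.setLevel(logging.INFO)

file_handler = logging.FileHandler("log.txt")
stream_handler = logging.StreamHandler(sys.stdout)

def log_to(destination):
    """Attach exactly one handler: 'file' or 'screen'."""
    for handler in (file_handler, stream_handler):
        logger.removeHandler(handler)  # a no-op if the handler isn't attached
    logger.addHandler(file_handler if destination == "file" else stream_handler)

log_to("screen")
logger.info("This goes to the console only")
log_to("file")
logger.info("This goes to log.txt only")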
I like the suggestion about using the logging library. But if you want to hack out something yourself, maybe passing in the file handle is worth considering.
import sys
class WriteLog:
    "write to screen or to file"

    def __init__(self, output):
        self.output = output

    def write(self, text):
        self.output.write(text)

    def close(self):
        self.output.close()

logger = WriteLog(open('c:/temp/log.txt', 'a'))
logger.write("I write to the log file.\n")
logger.close()

sysout = WriteLog(sys.stdout)
sysout.write("I write to the screen.\n")
You can utilize the logging library to do something similar to this. The following function will set up a logging object at the INFO level.
import logging

def setup_logging(file_name, log_to_file=False, log_to_console=False):
    logger = logging.getLogger()
    logger.setLevel(logging.DEBUG)

    # Console handler
    if log_to_console:
        console_log = logging.StreamHandler()
        console_log.setLevel(logging.INFO)
        formatter = logging.Formatter('%(asctime)s - %(levelname)-8s - %(name)-12s - %(message)s')
        console_log.setFormatter(formatter)
        logger.addHandler(console_log)

    # Log file handler
    if log_to_file:
        file_log = logging.FileHandler('%s.log' % (file_name), 'a', encoding='UTF-8')
        file_log.setLevel(logging.INFO)
        formatter = logging.Formatter('%(asctime)s - %(levelname)-8s - %(name)-12s - %(message)s')
        file_log.setFormatter(formatter)
        logger.addHandler(file_log)

    return logger
You pass it the name of your log, and where you wish to log (either to the file, the console or both). Then you can utilize this function in your code block like this:
logger = setup_logging("mylog", log_to_file=True, log_to_console=False)
logger.info('Message')
This example will log to a file named mylog.log (in the current directory) and have output like this:
2014-11-05 17:20:29,933 - INFO - root - Message
This function has areas for improvement (if you wish to add more functionality). Right now it logs to both the console and file at log level INFO on the .setLevel(logging.INFO) lines. This could be set dynamically if you wish.
Additionally, as it is now, you can easily add standard logging lines (logger.debug('Message'), logger.critical('DANGER!')) without modifying a class. In these examples, the debug messages won't print (because the handlers are set to INFO) and the critical ones will.
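For instance, a sketch of the dynamic-level idea mentioned above (the level parameter is an addition of mine, not part of the original function):

import logging

def setup_logging(file_name, log_to_file=False, log_to_console=False,
                  level=logging.INFO):
    """Same idea as above, but the handler level is a parameter."""
    logger = logging.getLogger()
    logger.setLevel(logging.DEBUG)
    if log_to_console:
        console_log = logging.StreamHandler()
        console_log.setLevel(level)  # dynamic instead of hard-coded INFO
        console_log.setFormatter(logging.Formatter(
            '%(asctime)s - %(levelname)-8s - %(name)-12s - %(message)s'))
        logger.addHandler(console_log)
    if log_to_file:
        file_log = logging.FileHandler('%s.log' % file_name, 'a', encoding='UTF-8')
        file_log.setLevel(level)
        file_log.setFormatter(logging.Formatter(
            '%(asctime)s - %(levelname)-8s - %(name)-12s - %(message)s'))
        logger.addHandler(file_log)
    return logger

# Now debug messages are kept as well:
logger = setup_logging("mylog", log_to_file=True, level=logging.DEBUG)
logger.debug('This debug message is no longer filtered out')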
I want to write some events to a log file. To do this I've used function decorators to add the logging code and report the function that was called. But the output always shows the same function: the inner decorator function _decorador.
I'm using the %(funcName)s parameter in the format string passed to logging.basicConfig.
Output in example.log:
04/21/2014 09:32:41 AM DEBUG This message should go to the log file _decorador
04/21/2014 09:32:41 AM INFO So should this _decorador
04/21/2014 09:32:41 AM WARNING And this, too _decorador
04/21/2014 10:46:23 AM DEBUG This message should go to the log file (debug) _decorador
04/21/2014 10:46:23 AM INFO So should this (info) _decorador
04/21/2014 10:46:23 AM WARNING And this, too (warning) _decorador
Desired output in example.log:
04/21/2014 09:32:41 AM DEBUG This message should go to the log file mi_funcion
04/21/2014 09:32:41 AM INFO So should this mi_funcion
04/21/2014 09:32:41 AM WARNING And this, too mi_funcion
04/21/2014 10:46:23 AM DEBUG This message should go to the log file (debug) mi_funcion
04/21/2014 10:46:23 AM INFO So should this (info) mi_funcion
04/21/2014 10:46:23 AM WARNING And this, too (warning) mi_funcion
My code:
#!/usr/bin/python3
# -*- coding: UTF-8 -*-
import logging

FORMAT = '%(asctime)s %(levelname)s %(message)s %(funcName)s'
logging.basicConfig(filename='example.log', level=logging.DEBUG, format=FORMAT, datefmt='%m/%d/%Y %I:%M:%S %p')

# Decorator function, writes in the log file.
def decorador(funcion):
    def _decorador(*args, **kwargs):
        funcion(*args, **kwargs)
        logging.debug('This message should go to the log file (debug)')
        logging.info('So should this (info)')
        logging.warning('And this, too (warning)')
    return _decorador

@decorador
def mi_funcion(arg1, arg2):
    print("Code asset: %s; Registry number: %s" % (arg1, arg2))

mi_funcion("18560K", 12405)
It's 2022 and this is still difficult.
Here's a complete example adapted from Using functools.wraps with a logging decorator
from inspect import getframeinfo, stack
from functools import wraps
import logging
import os

class CustomFormatter(logging.Formatter):
    """Custom formatter, overrides funcName with value of name_override if it exists"""
    def format(self, record):
        if hasattr(record, 'name_override'):
            record.funcName = record.name_override
        if hasattr(record, 'file_override'):
            record.filename = record.file_override
        if hasattr(record, 'line_override'):
            record.lineno = record.line_override
        return super(CustomFormatter, self).format(record)

# setup logger and handler
logger = logging.getLogger(__file__)
handler = logging.StreamHandler()
logger.setLevel(logging.DEBUG)
handler.setLevel(logging.DEBUG)
handler.setFormatter(CustomFormatter('%(asctime)s - %(filename)s:%(lineno)s - %(funcName)s - %(levelname)s - %(message)s'))
logger.addHandler(handler)

def log_and_call(statement):
    def decorator(func):
        caller = getframeinfo(stack()[1][0])
        @wraps(func)
        def wrapper(*args, **kwargs):
            # set name_override to func.__name__
            logger.info(statement, extra={
                'name_override': func.__name__,
                'file_override': os.path.basename(caller.filename),
                'line_override': caller.lineno
            })
            return func(*args, **kwargs)
        return wrapper
    return decorator

@log_and_call("This should be logged by 'decorated_function'")
def decorated_function():  # <- the logging in the wrapped function will point to/log this line for lineno.
    logger.info('I ran')

decorated_function()
Defining caller outside of the wrapper function correctly captures the wrapped function's filename and line number.
You cannot easily change this. The goal of the logging module's funcName is to report the exact location of the source code line, not the function it represents. The idea is that you use it in combination with the lineno and filename entries to pinpoint the source code, not to record what function was called.
In order to achieve this, the logging module uses code object introspection to determine the real function name:
def findCaller(self):
    """
    Find the stack frame of the caller so that we can note the source
    file name, line number and function name.
    """
    f = currentframe()
    #On some versions of IronPython, currentframe() returns None if
    #IronPython isn't run with -X:Frames.
    if f is not None:
        f = f.f_back
    rv = "(unknown file)", 0, "(unknown function)"
    while hasattr(f, "f_code"):
        co = f.f_code
        filename = os.path.normcase(co.co_filename)
        if filename == _srcfile:
            f = f.f_back
            continue
        rv = (co.co_filename, f.f_lineno, co.co_name)
        break
    return rv
Short of reconstructing the _decorador code object, you cannot alter what is reported here. Reconstructing the code object can be done; you could build a facade function with exec that calls the decorator, for example. But making that work with a closure is more effort than it's worth, really.
I'd instead include the function name of the wrapped function:
logging.debug('This message should go to the log file (debug) (function %r)',
              funcion)
You can extract the function name from the funcion object:
def decorador(funcion):
    def _decorador(*args, **kwargs):
        funcion(*args, **kwargs)
        logging.debug('This message should go to the log file (debug) %s',
                      funcion.__name__)
        # ...
    return _decorador
I get this output after running the modified code:
cat example.log
04/21/2014 11:37:12 AM DEBUG This message should go to the log file (debug) mi_funcion