I have my logging configuration in a conf file; the idea is to log to two different files in different formats (one plain text, the other JSON):
[handler_plainTextFileHandler]
class=handlers.TimedRotatingFileHandler
level=INFO
formatter=plainTextFormatter
args=('app/logs/log.log', 'midnight', 1, 10, None, False, False)
[handler_jsonFileHandler]
class=handlers.TimedRotatingFileHandler
level=INFO
formatter=jsonFormatter
args=('app/logs/json/jlog.log', 'midnight', 1, 10, None, False, False)
[formatter_jsonFormatter]
class=pythonjsonlogger.jsonlogger.JsonFormatter
format=%(asctime)s - %(name)s %(lineno)d - %(levelname)s - %(message)s
datefmt =%d.%m.%Y %H:%M:%S
[formatter_plainTextFormatter]
format=%(asctime)s - %(name)s(%(lineno)d) - %(levelname)s - [%(trace)s] %(message)s
datefmt =%d.%m.%Y %H:%M:%S
When I log something from my app, I send this trace parameter as extra
logger.info(f'Health check from {req.client.host}:{req.client.port}', extra={"trace": self.trace})
Everything works as intended: the logs are formatted the way they should be. When I have no trace value, I just send extra={"trace": None}.
The problem happens when there is an unhandled exception in my code that then gets logged by the root logger. Since there is no extra in that call, the error gets logged in the JSON file but not in the plain-text file (I get an error that the trace parameter is missing).
Any ideas how to handle this? I would like to keep normal logging as it is, and only add the ability to log these other errors to that same text file.
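One common way to handle this (a sketch, not from the original post; the filter name, the fallback value of None and the config file name are assumptions) is to attach a logging.Filter to the handlers that fills in a default trace whenever a record was logged without one:
import logging
import logging.config

class DefaultTraceFilter(logging.Filter):
    # supply record.trace when a record was logged without extra={"trace": ...}
    def filter(self, record):
        if not hasattr(record, "trace"):
            record.trace = None
        return True

logging.config.fileConfig("logging.conf", disable_existing_loggers=False)

# attach the filter to each handler; this assumes the two file handlers
# are attached to the root logger
for handler in logging.getLogger().handlers:
    handler.addFilter(DefaultTraceFilter())
Handler-level filters run before the record is formatted, so the plain-text formatter always finds a trace value, while records that do carry extra={"trace": ...} are left untouched.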
Related
I have to output my Python job's logs in structured (JSON) format for our downstream Datadog agent to pick up. Crucially, I have requirements about how specific log fields are named, e.g. there must be a timestamp field, which cannot be called e.g. asctime. So a desired log looks like:
{"timestamp": "2022-11-10 00:28:58,557", "name": "__main__", "level": "INFO", "message": "my message"}
I can get something very close to that with the following code:
import logging.config
from pythonjsonlogger import jsonlogger
logging.config.fileConfig("logging_config.ini", disable_existing_loggers=False)
logger = logging.getLogger(__name__)
logger.info("my message")
referencing the following logging_config.ini file:
[loggers]
keys = root
[handlers]
keys = consoleHandler
[formatters]
keys=json
[logger_root]
level=DEBUG
handlers=consoleHandler
[handler_consoleHandler]
class=StreamHandler
level=DEBUG
formatter=json
[formatter_json]
class = pythonjsonlogger.jsonlogger.JsonFormatter
format=%(asctime)s %(name)s - %(levelname)s:%(message)s
...however, this doesn't allow any flexibility over the keys in the output JSON log objects, e.g. my timestamp field is called "asctime", as below:
{"asctime": "2022-11-10 00:28:58,557", "name": "__main__", "levelname": "INFO", "message": "my message"}
I still want that asctime value (e.g. 2022-11-10 00:28:58,557), but need it to be referenced by a key called "timestamp" instead of "asctime". If at all possible I would strongly prefer a solution that adapts the logging_config.ini file (or potentially a YAML logging config file) with relatively minimal extra Python code.
I also tried this alternative Python JSON logging library, which seemed very simple and elegant, but unfortunately when I tried to use it, my log statement didn't get output at all...
You'll need a small amount of extra Python code, something like:
# in mymodule.py, say
import datetime
from pythonjsonlogger import jsonlogger

class CustomJsonFormatter(jsonlogger.JsonFormatter):
    def add_fields(self, log_record, record, message_dict):
        super(CustomJsonFormatter, self).add_fields(log_record, record, message_dict)
        # build "timestamp" from the record's creation time, keeping the ",123" millisecond suffix
        log_record['timestamp'] = datetime.datetime.fromtimestamp(record.created).strftime('%Y-%m-%d %H:%M:%S') + f',{int(record.msecs)}'
and then change the configuration to reference it:
[formatter_json]
class = mymodule.CustomJsonFormatter
format=%(timestamp)s %(name)s - %(levelname)s:%(message)s
which would then output e.g.
{"timestamp": "2022-11-10 11:37:25,153", "name": "root", "levelname": "DEBUG", "message": "foo"}
It is possible to get a logger by module name, like this:
logging.getLogger(module_name)
I would like to add module_name to every log record. Is it possible to set up a Formatter object which adds module_name?
You are looking for the %(name)s parameter; add that to your formatter pattern:
FORMAT = "%(name)s: %(message)s"
logging.basicConfig(format=FORMAT)
or when creating a Formatter():
FORMAT = "%(name)s: %(message)s"
formatter = logging.Formatter(fmt=FORMAT)
See the LogRecord attributes reference:
Attribute name: name
Format: %(name)s
Description: Name of the logger used to log the call.
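As a quick illustration (a minimal sketch; the actual name in the output depends on the module the logger is created in):
import logging

logging.basicConfig(format="%(name)s: %(message)s")

# __name__ is the current module's dotted name, e.g. "mypackage.mymodule"
log = logging.getLogger(__name__)
log.warning("something happened")   # -> mypackage.mymodule: something happened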
When you initialize the logger (you only need to do this once for the app), try this config:
logging.basicConfig(
    filename='var/bpextract.log',
    level=logging.INFO,
    format='%(asctime)s %(process)-7s %(module)-20s %(message)s',
    datefmt='%m/%d/%Y %H:%M:%S'
)
...later in your code...
log = logging.getLogger("bpextract")
log.info('###### Starting BPExtract App #####')
In logging.basicConfig, you can specify the format:
logging.basicConfig(format='%(name)s\t%(message)s')
Please bear with me while I try to explain my setup.
I have an application structure as follows
1.) MAIN AGENT
2.) SUPPORTING MODULES
3.) Various CLASSES called upon by MODULES
This is the script that currently sets up my logging:
logging_setup.py -
Through this script I set up a custom format using context filters and other classes. A snippet of it is as follows.
import logging
import string
import threading

class ContextFilter(logging.Filter):
    CMDID_cf = "IAMTEST1"

    def __init__(self, CMDID1):
        self.CMDID_cf = CMDID1

    def filter(self, record):
        record.CMDID = self.CMDID_cf
        return True

class testFormatter(logging.Formatter):
    def format(self, record):
        record.message = record.getMessage()
        if string.find(self._fmt, "%(asctime)") >= 0:
            record.asctime = self.formatTime(record, self.datefmt)
        # cmdidDict maps thread names to command ids; defined elsewhere in the script
        if threading.currentThread().getName() in cmdidDict:
            record.CMDID = cmdidDict[threading.currentThread().getName()]
        else:
            record.CMDID = "Oda_EnvId"
        return self._fmt % record.__dict__

def initLogging(loggername):
    format = testFormatter(*some format*)
    *other configuration settings*
So I basically have two questions, both of which I believe can be done, but I don't know how to implement them correctly.
1.) I want my MAIN AGENT logger to use the format and other configuration set up by the logging_setup script, while I want the MODULES to log messages with configuration taken from a different config file.
In short: is it possible for two modules that log to the same file to have different configurations set from two different sources?
P.S. I am using a logging.getLogger() call to get the logger in each of these modules.
2.) If the above isn't possible (or even if it is), how can I make the config file include complex formatting? I.e. how can I change the following config file so that it sets up the format the same way logging_setup.py does? (See the sketch below, after the config file.)
My current config file is:
[loggers]
keys=root,testAgent,testModule
[formatters]
keys=generic
[handlers]
keys=fh
[logger_root]
level=DEBUG
handlers=fh
[logger_testAgent]
level=DEBUG
handlers=fh
qualname=testAgent
propagate=0
[logger_testModule]
level=ERROR
handlers=fh
qualname=testAgent.testModule.TEST
propagate=0
[handler_fh]
class=handlers.RotatingFileHandler
level=DEBUG
formatter=generic
maxBytes=1000
args=('spam.log',)
[formatter_generic]
format=%(asctime)s %(name)s %(levelname)s %(lineno)d %(message)s
I hope I have made the question clear.
Thanks!!
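For question 2.), a minimal sketch of one option (not from the original thread; the dotted class path and the format string are assumptions): fileConfig lets a formatter section name a custom formatter class, just as the earlier JSON example references mymodule.CustomJsonFormatter, so the config file could point at the class defined in logging_setup.py:
[formatter_generic]
class=logging_setup.testFormatter
format=%(asctime)s %(name)s %(levelname)s %(lineno)d [%(CMDID)s] %(message)s
logging_setup would have to be importable from wherever fileConfig is called, and anything the ini format cannot express (such as attaching ContextFilter) would still have to be done in code, since fileConfig has no section for filters.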
Currently I have everything logged to one log file, but I want to separate it out into multiple log files. I looked at the Python logging documentation, but it doesn't discuss this.
log_format = "%(asctime)s - %(name)s - %(levelname)s - %(message)s"
logging.basicConfig(filename=(os.path.join(OUT_DIR, + '-user.log')),
                    format=log_format, level=logging.INFO, datefmt='%Y-%m-%d %H:%M:%S')
This is how I currently do the logging. What I want is to have different types of errors or information logged to different log files. At the moment, when I do logging.info('Logging IN') and logging.error('unable to login'), both go to the same log file. I want to separate them. Do I need to create another logging object to support logging into another file?
What you /could/ do (I haven't dug into the logging module too much so there may be a better way to do this) is maybe use a stream rather than a file object:
In [1]: class LogHandler(object):
   ...:     def write(self, msg):
   ...:         print('a :%s' % msg)
   ...:         print('b :%s' % msg)
   ...:
In [3]: import logging
In [4]: logging.basicConfig(stream=LogHandler())
In [5]: logging.critical('foo')
a :CRITICAL:root:foo
b :CRITICAL:root:foo
In [6]: logging.warn('bar')
a :WARNING:root:bar
b :WARNING:root:bar
Edit with further handling:
Assuming your log files already exist, you could do something like this:
import logging
class LogHandler(object):
    format = '%(levelname)s %(message)s'
    # map the level name at the start of the formatted message to a file
    files = {
        'ERROR': 'error.log',
        'CRITICAL': 'error.log',
        'WARNING': 'warn.log',
    }

    def write(self, msg):
        # the level name is the first word of the formatted message
        type_ = msg[:msg.index(' ')]
        # append to the matching file, falling back to log.log
        with open(self.files.get(type_, 'log.log'), 'a') as f:
            f.write(msg)
logging.basicConfig(format=LogHandler.format, stream=LogHandler())
logging.critical('foo')
This would allow you to split your logging into various files based on conditions in your log messages. If the level isn't in the mapping, it simply falls back to log.log.
I created this solution from docs.python.org/2/howto/logging-cookbook.html
Simply create two logging file handlers, assign their logging level and add them to your logger.
import os
import logging
current_path = os.path.dirname(os.path.realpath(__file__))
logger = logging.getLogger('simple_example')
logger.setLevel(logging.DEBUG)
# to log debug messages
debug_log = logging.FileHandler(os.path.join(current_path, 'debug.log'))
debug_log.setLevel(logging.DEBUG)
# to log error messages
error_log = logging.FileHandler(os.path.join(current_path, 'error.log'))
error_log.setLevel(logging.ERROR)
logger.addHandler(debug_log)
logger.addHandler(error_log)
logger.debug('This message should go in the debug log')
logger.info('and so should this message')
logger.warning('and this message')
logger.error('This message should go in both the debug log and the error log')
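If you also want timestamps and level names in those files (a small addition of my own, not part of the original answer), attach a Formatter to each handler before logging:
fmt = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
debug_log.setFormatter(fmt)
error_log.setFormatter(fmt)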
I'm configuring my Python logging from a file (see http://www.python.org/doc//current/library/logging.html#configuration-file-format ).
From the example on that page, I have a formatter in the config file that looks like:
[formatter_form01]
format=F1 %(asctime)s %(levelname)s %(message)s
datefmt=
class=logging.Formatter
How do I put a newline in the "format" string that specifies the formatter? Neither \n nor \\n works (e.g. format=F1\n%(asctime)s %(levelname)s %(message)s does not work). Thanks!
The logging.config module reads config files with ConfigParser, which has support for multiline values.
So you can specify your format string like this:
[formatter_form01]
format=F1
    %(asctime)s %(levelname)s %(message)s
datefmt=
class=logging.Formatter
Multiline values are continued by indenting the following lines (one or more spaces or tabs count as an indent).
The logging configuration file is based on the ConfigParser module. There you'll find you can solve it like this:
[formatter_form01]
format=F1
    %(asctime)s %(levelname)s %(message)s
datefmt=
class=logging.Formatter
My best bet would be using a custom formatter (instead of logging.Formatter)... For reference, here's the source code for logging.Formatter.format:
def format(self, record):
    record.message = record.getMessage()
    if string.find(self._fmt,"%(asctime)") >= 0:
        record.asctime = self.formatTime(record, self.datefmt)
    s = self._fmt % record.__dict__
    if record.exc_info:
        # Cache the traceback text to avoid converting it multiple times
        # (it's constant anyway)
        if not record.exc_text:
            record.exc_text = self.formatException(record.exc_info)
    if record.exc_text:
        if s[-1:] != "\n":
            s = s + "\n"
        s = s + record.exc_text
    return s
It's pretty clear to me that, if self._fmt is read from a text file (single line), no escaping of any kind would be possible. Maybe you can subclass logging.Formatter, override this method and substitute the 4th line with something like:
s = self._fmt.replace('\\n', '\n') % record.__dict__
or something more general, if you want other things to be escaped as well.
EDIT: alternatively, you can do that in the __init__ method, once (instead of every time a message is formatted). But as others already pointed out, ConfigParser supports multiline values, so there's no need to go this route...
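For completeness, a minimal sketch of that __init__ variant (the class name and the convention of writing a literal \n in the config file are assumptions):
import logging

class NewlineFormatter(logging.Formatter):
    def __init__(self, fmt=None, datefmt=None):
        # turn the two-character sequence \n read from the config file into a real newline, once
        if fmt:
            fmt = fmt.replace('\\n', '\n')
        super().__init__(fmt, datefmt)
The config file would then reference it through the formatter section's class= entry, as in the other answers.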
This might be an easy way:
import logging
logformat = """%(asctime)s ... here you get a new line
... %(thread)d .... here you get another new line
%(message)s"""
logging.basicConfig(format=logformat, level=logging.DEBUG)
I tested this; the above setting gives two new lines within each logging message, as shown in the code. Note: %(asctime)s and the like are Python logging format placeholders.
import logging
logformat = "%(asctime)s %(message)s\n\r"
logging.basicConfig(level=logging.DEBUG, format=logformat, filename='debug.log', filemode='w')
logging.debug("your string here")
The debug text will be written to the file followed by a newline.
Just add "\n" before the closing apostrophe of basicConfig function
logging.basicConfig(level=logging.DEBUG, format=' %(levelname)s - %(message)s\n')