Up to now I have done simple logging to files, and if I log a multiline string, the result looks like this:
Emitting log:
logging.info('foo\nbar')
Logfile:
2018-03-05 10:51:53 root.main +16: INFO [28302] foo
bar
Up to now, all lines which do not contain "INFO" or "DEBUG" get reported to operators.
This means the line bar gets reported, which is a false positive.
Environment: Linux.
How can I set up logging in Python so that foo\nbar is kept in one string, and the whole string is ignored because its level is only INFO?
Note: Yes, you can filter the logging in the interpreter. Unfortunately this is not what the question is about. This question is different. First the logging happens. Then the logs get parsed.
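For context, a sketch of the parsing side: a parser can treat any line that does not start with a timestamp as a continuation of the previous record (the timestamp regex below assumes the log format shown above):

```python
import re

# Matches the leading "YYYY-MM-DD HH:MM:SS " of the log format above.
TIMESTAMP = re.compile(r'^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2} ')

def records(lines):
    """Yield complete log records, joining continuation lines
    (lines without a timestamp) onto the previous record."""
    current = None
    for line in lines:
        if TIMESTAMP.match(line):
            if current is not None:
                yield current
            current = line.rstrip('\n')
        elif current is not None:
            current += '\n' + line.rstrip('\n')
    if current is not None:
        yield current

log = [
    '2018-03-05 10:51:53 root.main +16: INFO [28302] foo\n',
    'bar\n',
    '2018-03-05 10:51:54 root.main +17: ERROR [28302] baz\n',
]
parsed = list(records(log))
```

With this, "bar" stays attached to its INFO record, so the operator filter can drop the whole two-line message.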
Here is a script to reproduce it:
import sys
import logging

def set_up_logging(level=logging.INFO):
    root_logger = logging.getLogger()
    root_logger.setLevel(level)
    handler = logging.StreamHandler(sys.stdout)
    handler.setFormatter(
        logging.Formatter('%(asctime)s %(name)s: %(levelname)-8s [%(process)d] %(message)s',
                          '%Y-%m-%d %H:%M:%S'))
    root_logger.addHandler(handler)

def main():
    set_up_logging()
    logging.info('foo\nbar')

if __name__ == '__main__':
    main()
After thinking about it again, I think the real question is: which logging format is feasible? Just removing the newlines from messages that span multiple lines makes some output hard for human eyes to read. On the other hand, the current 1:1 relation between a logging.info() call and a line in the log file is easy to parse. ... I am unsure.
I usually have a class to customize logging, but you can achieve what you want with a custom logging.Formatter:
import logging

class NewLineFormatter(logging.Formatter):

    def __init__(self, fmt, datefmt=None):
        """Init given the log line format and date format."""
        logging.Formatter.__init__(self, fmt, datefmt)

    def format(self, record):
        """Override format function."""
        msg = logging.Formatter.format(self, record)
        if record.message != "":
            parts = msg.split(record.message)
            msg = msg.replace('\n', '\n' + parts[0])
        return msg
The format() function above extracts the preamble (everything before the message text) and replicates it after every \n, so each line of a multi-line message carries the full timestamp/logging preamble.
Now you need to attach the formatter to the root logger. You can actually attach it to any handler if you build your own logging setup/structure:
# Basic config as usual
logging.basicConfig(level=logging.DEBUG)
# Some globals/consts
DATEFORMAT = '%d-%m-%Y %H:%M:%S'
LOGFORMAT = '%(asctime)s %(process)s %(levelname)-8s %(filename)15s-%(lineno)-4s: %(message)s'
# Create a new formatter
formatter = NewLineFormatter(LOGFORMAT, datefmt=DATEFORMAT)
# Attach the formatter on the root logger
lg = logging.getLogger()
# This is a bit of a hack... might be a better way to do this
lg.handlers[0].setFormatter(formatter)
# test root logger
lg.debug("Hello\nWorld")
# test module logger + JSON
lg = logging.getLogger("mylogger")
lg.debug('{\n "a": "Hello",\n "b": "World2"\n}')
The above gives you:
05-03-2018 08:37:34 13065 DEBUG test_logger.py-47 : Hello
05-03-2018 08:37:34 13065 DEBUG test_logger.py-47 : World
05-03-2018 08:37:34 13065 DEBUG test_logger.py-51 : {
05-03-2018 08:37:34 13065 DEBUG test_logger.py-51 : "a": "Hello",
05-03-2018 08:37:34 13065 DEBUG test_logger.py-51 : "b": "World2"
05-03-2018 08:37:34 13065 DEBUG test_logger.py-51 : }
Note that I am accessing the .handlers[0] of the root logger, which is a bit of a hack, but I couldn't find a way around it... Also, note the formatted JSON printing :)
I think maintaining this 1:1 relationship, a single line in the log file for each logging.info() call, is highly desirable to keep the log files simple and parsable. Therefore, if you really need to log a newline character, then I would simply log the string representation instead, for example:
logging.info(repr('foo\nbar'))
Outputs:
2018-03-05 11:34:54 root: INFO [32418] 'foo\nbar'
A simple alternative would be to log each part separately:
log_string = 'foo\nbar'
for item in log_string.split('\n'):
logging.info(item)
Outputs:
2018-03-05 15:39:44 root: INFO [4196] foo
2018-03-05 15:39:44 root: INFO [4196] bar
You can use:
logging.basicConfig(level=your_level)
where your_level is one of:
'debug': logging.DEBUG,
'info': logging.INFO,
'warning': logging.WARNING,
'error': logging.ERROR,
'critical': logging.CRITICAL
In your case, you can use WARNING to suppress INFO and DEBUG messages:
import logging
logging.basicConfig(filename='example.log', level=logging.WARNING)
logging.debug('This message is suppressed')
logging.info('So is this one')
logging.warning('And this one goes to the log file')
Output:
WARNING:root:And this one goes to the log file
You can try disabling INFO prior to logging:
import logging
logging.basicConfig(filename='example.log', level=logging.DEBUG)
logging.debug('This message goes to the log file')
logging.disable(logging.INFO)
logging.info('This one is suppressed')
logging.disable(logging.NOTSET)
logging.warning('And this, too')
Output:
DEBUG:root:This message goes to the log file
WARNING:root:And this, too
Or
logger.disabled = True
someOtherModule.function()
logger.disabled = False
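If you need this on/off pattern often, it can be wrapped in a small context manager (a sketch; logger_disabled is not part of the logging API), so the previous state is always restored even if the wrapped code raises:

```python
import logging
from contextlib import contextmanager

@contextmanager
def logger_disabled(logger):
    """Temporarily disable a logger, restoring its previous state."""
    previous = logger.disabled
    logger.disabled = True
    try:
        yield
    finally:
        logger.disabled = previous

logger = logging.getLogger("noisy")
with logger_disabled(logger):
    logger.warning("this is suppressed")
logger.warning("this is emitted again")
```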
Related
I have problems with logging in Python 3.7; I use Spyder as my editor.
This code works normally: it creates a file and writes to it.
import logging
LOG_FORMAT="%(Levelname)s %(asctime)s - %(message)s"
logging.basicConfig(filename="C:\\Users\\MOHAMED\\Desktop\\Learn python\\tst.log",
level=logging.DEBUG)
logger=logging.getLogger()
logger.info("Our first message.")
The problem is that when I add the format to basicConfig, the code does not write anything to the tst file:
import logging
LOG_FORMAT="%(Levelname)s %(asctime)s - %(message)s"
logging.basicConfig(filename="C:\\Users\\MOHAMED\\Desktop\\Learn python\\tst.log",
level=logging.DEBUG,
format=LOG_FORMAT)
logger=logging.getLogger()
logger.info("Our first message.")
You are referencing the variable Levelname in the format string, but the built-in attribute is spelled levelname (all lowercase), so the formatter cannot resolve it. Either fix the spelling in LOG_FORMAT, or populate your custom variable through extra; try it with:
logger.info("Our first message.", extra={"Levelname": "test"})
Also, I recommend the docs: https://docs.python.org/3/library/logging.html
I'm working on a project with several modules that should all log to the same file.
Initializing the logger:
import logging
from logging.handlers import RotatingFileHandler

parentLogger = logging.getLogger('PARENTLOGGER')
logger = logging.getLogger('PARENTLOGGER.' + __name__)

# set format
fmt = logging.Formatter('%(asctime)s [%(levelname)s] %(name)s: %(message)s')

# set handler for command line
ch = logging.StreamHandler()
ch.setLevel(logging.DEBUG)
ch.setFormatter(fmt)

# set file handler
open('data/logs.log', 'w+').close()  # create the file in case it doesn't exist; I got exceptions otherwise
fh = RotatingFileHandler('data/logs.log', maxBytes=5242880, backupCount=1)
fh.setLevel(logging.DEBUG)
fh.setFormatter(fmt)

parentLogger.addHandler(fh)
parentLogger.addHandler(ch)
Then, in all the other modules, I'm calling:
self._logger = logging.getLogger('PARENTLOGGER.' + __name__)
The problem is that the rotating file handler won't write anything to the log file, in any module.
Am I configuring the logger correctly? I've tried several examples from Python's logging cookbook without success...
Regards and thanks in advance!
You should tell logging when and what to log. Also, the log file will be created by default; there is no need to create it yourself. Here's a simple example:
$ cat test_log.py
import logging
log = '/home/user/test_log.log'
logging.basicConfig(filename=log,
format='%(asctime)s [%(levelname)s]: %(message)s at line: %(lineno)d (func: "%(funcName)s")')
try:
this_will_fail
except Exception as err:
logging.error('%s' % err)
$ python test_log.py
$ cat /home/user/test_log.log
2018-02-07 11:31:24,127 [ERROR]: name 'this_will_fail' is not defined at line: 10 (func: "<module>")
So, when I copy-paste the following to the Python prompt x times, it appends the log output x times to the end of the designated file.
How can I change the code so that each time I paste it into the prompt, I simply overwrite the existing file? (The code does not seem to honor the mode='w' option, or I do not understand its meaning.)
def MinimalLoggingf():
    import logging
    import os
    paths = {'work': ''}
    logger = logging.getLogger('oneDayFileLoader')
    LogHandler = logging.FileHandler(os.path.join(paths["work"], "oneDayFileLoader.log"), mode='w')
    formatter = logging.Formatter('%(asctime)s %(levelname)s %(message)s')
    LogHandler.setFormatter(formatter)
    logger.addHandler(LogHandler)
    logger.setLevel(logging.DEBUG)
    # Let's say this is an error:
    if 1 == 1:
        logger.error('overwrite')
So I run it once:
MinimalLoggingf()
Now, I want the new log file to overwrite the log file created on the previous run:
MinimalLoggingf()
If I understand correctly, you're running a certain Python process for days at a time, and want to rotate the log every day. I'd recommend you go a different route, using a handler that automatically rotates the log file, e.g. http://www.blog.pythonlibrary.org/2014/02/11/python-how-to-create-rotating-logs/
But, if you want to control the log using the process in the same method you're comfortable with (Python console, pasting in code.. extremely unpretty and error prone, but sometimes quick-n-dirty is sufficient for the task at hand), well...
Your issue is that you create a new FileHandler each time you paste in the code, and you add it to the Logger object. You end up with a logger that has X FileHandlers attached to it, all of them writing to the same file. Try this:
import logging
import os

paths = {'work': ''}
logger = logging.getLogger('oneDayFileLoader')
if logger.handlers:
    logger.handlers[0].close()
    logger.handlers = []
logHandler = logging.FileHandler(os.path.join(paths["work"], "oneDayFileLoader.log"), mode='w')
formatter = logging.Formatter('%(asctime)s %(levelname)s %(message)s')
logHandler.setFormatter(formatter)
logger.addHandler(logHandler)
logger.setLevel(logging.DEBUG)
logger.error('overwrite')
Based on your request, I've also added an example using TimedRotatingFileHandler. Note I haven't tested it locally, so if you have issues ping back.
import logging
import os
from logging.handlers import TimedRotatingFileHandler

logPath = os.path.join('', "fileLoaderLog")
logger = logging.getLogger('oneDayFileLoader')
logHandler = TimedRotatingFileHandler(logPath,
                                      when="midnight",
                                      interval=1)
formatter = logging.Formatter('%(asctime)s %(levelname)s %(message)s')
logHandler.setFormatter(formatter)
logger.addHandler(logHandler)
logger.setLevel(logging.DEBUG)
logger.error('overwrite')
Your log messages are being duplicated because you call addHandler more than once. Each call to addHandler adds an additional log handler.
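A common alternative guard (a sketch, reusing the same oneDayFileLoader logger name) is to attach the handler only if none exists yet, so re-pasting the snippet into the prompt never stacks duplicates:

```python
import logging

# Only attach the handler the first time this snippet runs;
# re-running it leaves the existing handler in place.
logger = logging.getLogger('oneDayFileLoader')
if not logger.handlers:
    handler = logging.FileHandler('oneDayFileLoader.log', mode='w')
    handler.setFormatter(logging.Formatter('%(asctime)s %(levelname)s %(message)s'))
    logger.addHandler(handler)
logger.setLevel(logging.DEBUG)
logger.error('logged exactly once per handler')
```

Note this keeps the first handler alive, so mode='w' only truncates the file when the handler is first created, not on every paste.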
If you want to make sure the file is created from scratch, add an extra line of code to remove it:
os.remove(os.path.join(paths["work"], "oneDayFileLoader.log"))
The mode is specified as part of logging.basicConfig and is passed through as filemode.
logging.basicConfig(
    level = logging.DEBUG,
    format = '%(asctime)s %(levelname)s %(message)s',
    filename = 'oneDayFileLoader.log',
    filemode = 'w'
)
https://docs.python.org/3/library/logging.html#simple-examples
I want to see all the output displayed on the console in a log file. I ran through basics of logging module, but cannot figure where I am going wrong.
My code is:
# temp2.py
import logging
import pytz
from parameters import *
import pandas as pd
import recos

def create_tbid_cc(tbid_path):
    """Function to read campaign-TBID mapping and remove duplicate entries by
    TBID-CampaignName"""
    logging.debug('Started')
    tbid_cc = pd.read_csv(tbid_path, sep='|')
    tbid_cc.columns = map(str.lower, tbid_cc.columns)
    tbid_cc_unique = tbid_cc[~(tbid_cc.tbid.duplicated())]
    tbid_cc_unique.set_index('tbid', inplace=True)
    tbid_cc_unique['campaignname_upd'] = tbid_cc_unique['campaignname']
    del tbid_cc_unique['campaignname']
    return tbid_cc, tbid_cc_unique

def main():
    logging.basicConfig(
        filename='app.log', filemode='w',
        format='%(asctime)s : %(levelname)s : %(message)s',
        datefmt='%m/%d/%Y %I:%M:%S %p',
        level=logging.DEBUG)
    tbid_cc, tbid_cc_unique = create_tbid_cc(tbid_path=tbid_campaign_map_path)

if __name__ == '__main__':
    main()
Output on console:
SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
tbid_cc_unique['campaignname_upd'] = tbid_cc_unique['campaignname']
Output of app.log is:
09/27/2015 06:29:56 AM : DEBUG : Started
I want the warning displayed on the console to appear in app.log as well, but I am not able to achieve that. I have set the logging level to DEBUG, but the output log file still contains only the single line above. Any help regarding this would be appreciated.
The warnings being raised won't automatically be logged -- you're seeing them in the console because pandas writes them to stderr.
If you want these messages in the log, you'll need to catch the warnings and then log the message manually.
See:
https://docs.python.org/2/library/warnings.html
I'd like to have all timestamps in my log file to be UTC timestamp. When specified through code, this is done as follows:
import logging
import time
myHandler = logging.FileHandler('mylogfile.log', 'a')
formatter = logging.Formatter('%(asctime)s %(levelname)-8s %(name)-15s:%(lineno)4s: %(message)-80s')
formatter.converter = time.gmtime
myHandler.setFormatter(formatter)
myLogger = logging.getLogger('MyApp')
myLogger.addHandler(myHandler)
myLogger.setLevel(logging.DEBUG)
myLogger.info('here we are')
I'd like to move away from the above 'in-code' configuration to a config file based mechanism.
Here's the config file section for the formatter:
[handler_MyLogHandler]
args=("mylogfile.log", "a",)
class=FileHandler
level=DEBUG
formatter=simpleFormatter
Now, how do I specify the converter attribute (time.gmtime) in the above section?
Edit: The above config file is loaded thus:
logging.config.fileConfig('myLogConfig.conf')
Sadly, there is no way of doing this using the configuration file, other than having e.g. a
class UTCFormatter(logging.Formatter):
converter = time.gmtime
and then using a UTCFormatter in the configuration.
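For completeness, a sketch of the same idea with dictConfig (an alternative to fileConfig), where the '()' key lets the configuration instantiate the custom formatter class directly:

```python
import logging
import logging.config
import time

class UTCFormatter(logging.Formatter):
    converter = time.gmtime  # render %(asctime)s in UTC

LOGGING = {
    'version': 1,
    'formatters': {
        'utc': {
            '()': UTCFormatter,  # factory: instantiate our custom class
            'format': '%(asctime)s %(levelname)s %(message)s',
        },
    },
    'handlers': {
        'console': {
            'class': 'logging.StreamHandler',
            'formatter': 'utc',
        },
    },
    'root': {'handlers': ['console'], 'level': 'DEBUG'},
}

logging.config.dictConfig(LOGGING)
logging.info('timestamps are UTC now')
```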
Here is Vinay's solution applied to logging.basicConfig:
import logging
import time
logging.basicConfig(filename='junk.log', level=logging.DEBUG, format='%(asctime)s: %(levelname)s:%(message)s')
logging.Formatter.converter = time.gmtime
logging.info('A message.')