I want to see all the output displayed on the console in a log file as well. I have gone through the basics of the logging module, but cannot figure out where I am going wrong.
My code is:
# temp2.py
import logging
import pytz
from parameters import *
import pandas as pd
import recos

def create_tbid_cc(tbid_path):
    """Function to read campaign-TBID mapping and remove duplicate entries by
    TBID-CampaignName"""
    logging.debug('Started')
    tbid_cc = pd.read_csv(tbid_path, sep='|')
    tbid_cc.columns = map(str.lower, tbid_cc.columns)
    tbid_cc_unique = tbid_cc[~(tbid_cc.tbid.duplicated())]
    tbid_cc_unique.set_index('tbid', inplace=True)
    tbid_cc_unique['campaignname_upd'] = tbid_cc_unique['campaignname']
    del tbid_cc_unique['campaignname']
    return tbid_cc, tbid_cc_unique

def main():
    logging.basicConfig(
        filename='app.log', filemode='w',
        format='%(asctime)s : %(levelname)s : %(message)s',
        datefmt='%m/%d/%Y %I:%M:%S %p',
        level=logging.DEBUG)
    tbid_cc, tbid_cc_unique = create_tbid_cc(tbid_path=tbid_campaign_map_path)

if __name__ == '__main__':
    main()
Output on console:
SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
tbid_cc_unique['campaignname_upd'] = tbid_cc_unique['campaignname']
Output in app.log is:
09/27/2015 06:29:56 AM : DEBUG : Started
I want the warning displayed on the console to appear in the app.log file as well, but I am not able to achieve that. I have set the logging level to DEBUG, but the output log file still contains only the one line shown above. Any help regarding this would be appreciated.
The warnings being raised won't automatically be logged -- you're seeing them on the console because pandas writes them to stderr.
If you want to write these messages to the log, you'll need to catch these warnings, and then log the message manually.
See:
https://docs.python.org/2/library/warnings.html
I have problems with logging in Python 3.7; I use Spyder as my editor.
This code works normally, creating a file and writing to it:
import logging

LOG_FORMAT = "%(Levelname)s %(asctime)s - %(message)s"
logging.basicConfig(filename="C:\\Users\\MOHAMED\\Desktop\\Learn python\\tst.log",
                    level=logging.DEBUG)
logger = logging.getLogger()
logger.info("Our first message.")
The problem is that when I add the format, the code no longer writes anything to the tst.log file:
import logging

LOG_FORMAT = "%(Levelname)s %(asctime)s - %(message)s"
logging.basicConfig(filename="C:\\Users\\MOHAMED\\Desktop\\Learn python\\tst.log",
                    level=logging.DEBUG,
                    format=LOG_FORMAT)
logger = logging.getLogger()
logger.info("Our first message.")
You are using the attribute Levelname in your format string, but the standard LogRecord attribute is spelled levelname (all lowercase), so formatting the record fails and nothing is written to the file. Either correct the spelling, or keep Levelname and populate it yourself via extra:
logger.info("Our first message.", extra={"Levelname": "test"})
Also, I recommend the docs: https://docs.python.org/3/library/logging.html
Up to now I do simple logging to files, and if I log a multiline string, then the result looks like this:
Emitting log:
logging.info('foo\nbar')
Logfile:
2018-03-05 10:51:53 root.main +16: INFO [28302] foo
bar
Up to now all lines which do not contain "INFO" or "DEBUG" get reported to operators.
This means the line bar gets reported. This is a false positive.
Environment: Linux.
How can I set up logging in Python so that the INFO message foo\nbar stays in one string, letting the parser ignore the whole entry because it is only "INFO"?
Note: Yes, you can filter the logging in the interpreter. Unfortunately this is not what the question is about. This question is different. First the logging happens. Then the logs get parsed.
Here is a script to reproduce it:
import sys
import logging

def set_up_logging(level=logging.INFO):
    root_logger = logging.getLogger()
    root_logger.setLevel(level)
    handler = logging.StreamHandler(sys.stdout)
    handler.setFormatter(
        logging.Formatter('%(asctime)s %(name)s: %(levelname)-8s [%(process)d] %(message)s',
                          '%Y-%m-%d %H:%M:%S'))
    root_logger.addHandler(handler)

def main():
    set_up_logging()
    logging.info('foo\nbar')

if __name__ == '__main__':
    main()
After thinking about it again, I believe the real question is: which logging format is feasible? Simply removing the newlines in messages that span multiple lines makes some output hard for human eyes to read. On the other hand, the current 1:1 relation between a logging.info() call and a line in the log file is easy to parse. I am unsure.
I usually have a class to customize logging but you can achieve what you want with a custom logging.Formatter:
import logging

class NewLineFormatter(logging.Formatter):

    def __init__(self, fmt, datefmt=None):
        """Init given the log line format and date format."""
        logging.Formatter.__init__(self, fmt, datefmt)

    def format(self, record):
        """Override format function."""
        msg = logging.Formatter.format(self, record)
        if record.message != "":
            parts = msg.split(record.message)
            msg = msg.replace('\n', '\n' + parts[0])
        return msg
The format() function above splits the formatted message on the original record text and replicates the timestamp/logging preamble on each line (after every \n).
Now you need to attach the formatter to the root logger. You can actually attach it to any handler if you build your own logging setup/structure:
# Basic config as usual
logging.basicConfig(level=logging.DEBUG)
# Some globals/consts
DATEFORMAT = '%d-%m-%Y %H:%M:%S'
LOGFORMAT = '%(asctime)s %(process)s %(levelname)-8s %(filename)15s-%(lineno)-4s: %(message)s'
# Create a new formatter
formatter = NewLineFormatter(LOGFORMAT, datefmt=DATEFORMAT)
# Attach the formatter on the root logger
lg = logging.getLogger()
# This is a bit of a hack... might be a better way to do this
lg.handlers[0].setFormatter(formatter)
# test root logger
lg.debug("Hello\nWorld")
# test module logger + JSON
lg = logging.getLogger("mylogger")
lg.debug('{\n "a": "Hello",\n "b": "World2"\n}')
The above gives you:
05-03-2018 08:37:34 13065 DEBUG test_logger.py-47 : Hello
05-03-2018 08:37:34 13065 DEBUG test_logger.py-47 : World
05-03-2018 08:37:34 13065 DEBUG test_logger.py-51 : {
05-03-2018 08:37:34 13065 DEBUG test_logger.py-51 : "a": "Hello",
05-03-2018 08:37:34 13065 DEBUG test_logger.py-51 : "b": "World2"
05-03-2018 08:37:34 13065 DEBUG test_logger.py-51 : }
Note that I am accessing the .handlers[0] of the root logger which is a bit of a hack but I couldn't find a way around this... Also, note the formatted JSON printing :)
I think maintaining this 1:1 relationship, a single line in the log file for each logging.info() call, is highly desirable to keep the log files simple and parsable. Therefore, if you really need to log a newline character, then I would simply log the string representation instead, for example:
logging.info(repr('foo\nbar'))
Outputs:
2018-03-05 11:34:54 root: INFO [32418] 'foo\nbar'
A simple alternative would be to log each part separately:
log_string = 'foo\nbar'
for item in log_string.split('\n'):
    logging.info(item)
Outputs:
2018-03-05 15:39:44 root: INFO [4196] foo
2018-03-05 15:39:44 root: INFO [4196] bar
You can use:
logging.basicConfig(level=your_level)
where your_level is one of:

    'debug': logging.DEBUG,
    'info': logging.INFO,
    'warning': logging.WARNING,
    'error': logging.ERROR,
    'critical': logging.CRITICAL
In your case, you can use warning to ignore info
import logging
logging.basicConfig(filename='example.log',level=logging.WARNING)
logging.debug('This message should go to the log file')
logging.info('So should this')
logging.warning('And this, too')
Output:
WARNING:root:And this, too
You can try disabling INFO prior to logging, as in:
import logging
logging.basicConfig(filename='example.log')
logging.debug('This message should go to the log file')
logging.disable(logging.INFO)
logging.info('So should this')
logging.disable(logging.NOTSET)
logging.warning('And this, too')
Output:
WARNING:root:And this, too
Or
logger.disabled = True
someOtherModule.function()
logger.disabled = False
Attempt:
from sys import stdout, stderr
from cStringIO import StringIO
from logging import getLogger, basicConfig, StreamHandler
basicConfig(format='%(asctime)s %(name)-12s %(levelname)-8s %(message)s',
            datefmt='%m-%d %H:%M')
log = getLogger(__name__)
sio = StringIO()
console = StreamHandler(sio)
log.addHandler(console)
log.addHandler(StreamHandler(stdout))
log.info('Jackdaws love my big sphinx of quartz.')
print 'console.stream.read() = {!r}'.format(console.stream.read())
Output [stdout]:
console.stream.read() = ''
Expected output [stdout]:
[date] [filename] INFO Jackdaws love my big sphinx of quartz.
console.stream.read() = 'Jackdaws love my big sphinx of quartz.'
Two things going on here.
Firstly, the root logger is instantiated with a level of WARNING, which means that no messages with a level lower than WARNING will be processed. You can set the level using Logger.setLevel(level), where levels are defined here - https://docs.python.org/2/library/logging.html#levels.
As suggested in comments, the log level can also be set with:
basicConfig(level='INFO', ...)
Secondly, when you write to the StringIO object, the position in the stream is set at the end of the current stream. You need to rewind the StringIO object to then be able to read from it.
console.stream.seek(0)
console.stream.read()
Even simpler, just call:
console.stream.getvalue()
Full code:
from sys import stdout, stderr
from cStringIO import StringIO
from logging import getLogger, basicConfig, StreamHandler
basicConfig(format='%(asctime)s %(name)-12s %(levelname)-8s %(message)s',
            datefmt='%m-%d %H:%M')
log = getLogger(__name__)
log.setLevel("INFO")
sio = StringIO()
console = StreamHandler(sio)
log.addHandler(console)
log.addHandler(StreamHandler(stdout))
log.info('Jackdaws love my big sphinx of quartz.')
console.stream.seek(0)
print 'console.stream.read() = {!r}'.format(console.stream.read())
So, when I copy-paste the following x times into the Python prompt,
it appends the log x times to the end of the designated file.
How can I change the code so that each time I paste it into the prompt,
I simply overwrite the existing file? (The code seems not to honor the
mode='w' option, or I don't understand its meaning.)
def MinimalLoggingf():
    import logging
    import os
    paths = {'work': ''}
    logger = logging.getLogger('oneDayFileLoader')
    LogHandler = logging.FileHandler(os.path.join(paths["work"], "oneDayFileLoader.log"), mode='w')
    formatter = logging.Formatter('%(asctime)s %(levelname)s %(message)s')
    LogHandler.setFormatter(formatter)
    logger.addHandler(LogHandler)
    logger.setLevel(logging.DEBUG)
    # Let's say this is an error:
    if 1 == 1:
        logger.error('overwrite')
So I run it once:
MinimalLoggingf()
Now, I want the new log file to overwrite the log file created on the previous run:
MinimalLoggingf()
If I understand correctly, you're running a certain Python process for days at a time, and want to rotate the log every day. I'd recommend you go a different route, using a handler that automatically rotates the log file, e.g. http://www.blog.pythonlibrary.org/2014/02/11/python-how-to-create-rotating-logs/
But, if you want to control the log using the process in the same method you're comfortable with (Python console, pasting in code.. extremely unpretty and error prone, but sometimes quick-n-dirty is sufficient for the task at hand), well...
Your issue is that you create a new FileHandler each time you paste in the code, and you add it to the Logger object. You end up with a logger that has X FileHandlers attached to it, all of them writing to the same file. Try this:
import logging
import os

paths = {'work': ''}
logger = logging.getLogger('oneDayFileLoader')
# Close and discard any handler left over from a previous paste,
# so the logger never accumulates multiple FileHandlers.
if logger.handlers:
    logger.handlers[0].close()
    logger.handlers = []
logHandler = logging.FileHandler(os.path.join(paths["work"], "oneDayFileLoader.log"), mode='w')
formatter = logging.Formatter('%(asctime)s %(levelname)s %(message)s')
logHandler.setFormatter(formatter)
logger.addHandler(logHandler)
logger.setLevel(logging.DEBUG)
logger.error('overwrite')
Based on your request, I've also added an example using TimedRotatingFileHandler. Note I haven't tested it locally, so if you have issues ping back.
import logging
import os
from logging.handlers import TimedRotatingFileHandler

logPath = os.path.join('', "fileLoaderLog")
logger = logging.getLogger('oneDayFileLoader')
logHandler = TimedRotatingFileHandler(logPath,
                                      when="midnight",
                                      interval=1)
formatter = logging.Formatter('%(asctime)s %(levelname)s %(message)s')
logHandler.setFormatter(formatter)
logger.addHandler(logHandler)
logger.setLevel(logging.DEBUG)
logger.error('overwrite')
Your log messages are being duplicated because you call addHandler more than once. Each call to addHandler adds an additional log handler.
If you want to make sure the file is created from scratch, add an extra line of code to remove it:
os.remove(os.path.join(paths["work"], "oneDayFileLoader.log"))
The mode is specified as part of logging.basicConfig and is passed through using filemode.
logging.basicConfig(
    level=logging.DEBUG,
    format='%(asctime)s %(levelname)s %(message)s',
    filename='oneDayFileLoader.log',
    filemode='w'
)
https://docs.python.org/3/library/logging.html#simple-examples
I am a Python beginner. My Python script logs output to a file (say example.log) using the basic Python logging module. However, my script also makes some third-party API calls (for example, parse_the_file) over which I don't have any control. I want to capture the output (usually printed to the console) produced by the API into my example.log. The following example code works partially, but the problem is that the contents get overwritten as soon as I start logging the API output into my log file.
#!/usr/bin/env python
import logging
import sys
import os
from common_lib import * # import additional modules
logging.basicConfig(filename='example.log', filemode='w', level=logging.DEBUG, format='%(asctime)s - %(levelname)s - %(message)s')
logging.debug('This is a log message.') # message goes to log file.
sys.stdout = open('example.log','a')
metadata_func=parse_the_file('/opt/metadata.txt') # output goes to log file but OVERWRITES the content
sys.stdout = sys.__stdout__
logging.debug('This is a second log message.') # message goes to log file.
I know there have been posts about similar questions on this site, but I haven't found a workaround/solution that works in this scenario.
Try:
log_file = open('example.log', 'a')
logging.basicConfig(stream=log_file, level=logging.DEBUG)
logging.debug("Test")
sys.stdout = log_file
sys.stderr = log_file
stdout_fd = os.dup(1)
stderr_fd = os.dup(2)
os.dup2(log_file.fileno(), 1)
os.dup2(log_file.fileno(), 2)
os.system("echo foo")
os.dup2(stdout_fd, 1)
os.dup2(stderr_fd, 2)
sys.stdout = sys.__stdout__
sys.stderr = sys.__stderr__
However, this will not format the redirected output with your logging format. If you want that, you can try something like http://plumberjack.blogspot.com/2009/09/how-to-treat-logger-like-output-stream.html