Attempt:
from sys import stdout, stderr
from cStringIO import StringIO
from logging import getLogger, basicConfig, StreamHandler
basicConfig(format='%(asctime)s %(name)-12s %(levelname)-8s %(message)s',
            datefmt='%m-%d %H:%M')
log = getLogger(__name__)
sio = StringIO()
console = StreamHandler(sio)
log.addHandler(console)
log.addHandler(StreamHandler(stdout))
log.info('Jackdaws love my big sphinx of quartz.')
print 'console.stream.read() = {!r}'.format(console.stream.read())
Output [stdout]:
console.stream.read() = ''
Expected output [stdout]:
[date] [filename] INFO Jackdaws love my big sphinx of quartz.
console.stream.read() = 'Jackdaws love my big sphinx of quartz.'
There are two things going on here.
Firstly, the root logger is instantiated with a level of WARNING, which means that no messages with a level lower than WARNING are processed. You can set the level using Logger.setLevel(level); the available levels are listed at https://docs.python.org/2/library/logging.html#levels.
As suggested in comments, the log level can also be set with:
basicConfig(level='INFO', ...)
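That is, folded into the basicConfig call you already have:
basicConfig(level='INFO',
            format='%(asctime)s %(name)-12s %(levelname)-8s %(message)s',
            datefmt='%m-%d %H:%M')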
Secondly, when you write to the StringIO object, the stream position is left at the end of the stream, so you need to rewind it before you can read anything back:
console.stream.seek(0)
console.stream.read()
Even simpler, just call:
console.stream.getvalue()
Full code:
from sys import stdout, stderr
from cStringIO import StringIO
from logging import getLogger, basicConfig, StreamHandler
basicConfig(format='%(asctime)s %(name)-12s %(levelname)-8s %(message)s',
            datefmt='%m-%d %H:%M')
log = getLogger(__name__)
log.setLevel("INFO")
sio = StringIO()
console = StreamHandler(sio)
log.addHandler(console)
log.addHandler(StreamHandler(stdout))
log.info('Jackdaws love my big sphinx of quartz.')
console.stream.seek(0)
print 'console.stream.read() = {!r}'.format(console.stream.read())
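Or, using getvalue() as mentioned above, drop the seek and read the buffer directly:
print 'console.stream.getvalue() = {!r}'.format(console.stream.getvalue())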
Related
I have problems with logging in Python 3.7; I use Spyder as my editor. This code works normally: it creates a file and writes to it.
import logging
LOG_FORMAT="%(Levelname)s %(asctime)s - %(message)s"
logging.basicConfig(filename="C:\\Users\\MOHAMED\\Desktop\\Learn python\\tst.log",
                    level=logging.DEBUG)
logger=logging.getLogger()
logger.info("Our first message.")
The problem is that when I add the format, the code no longer writes anything to the tst file.
import logging
LOG_FORMAT="%(Levelname)s %(asctime)s - %(message)s"
logging.basicConfig(filename="C:\\Users\\MOHAMED\\Desktop\\Learn python\\tst.log",
                    level=logging.DEBUG,
                    format=LOG_FORMAT)
logger=logging.getLogger()
logger.info("Our first message.")
You are referencing the field Levelname in your format string, but you never use extra to populate that field, so the formatter fails and nothing is written. (Note that the built-in attribute is spelled levelname, all lowercase.)
Try it with:
logger.info("Our first message.", extra={"Levelname":"test"})
Also, I recommend the docs: https://docs.python.org/3/library/logging.html
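Putting it together, a minimal sketch that keeps your format string as-is and populates the custom Levelname field via extra (the path is shortened here for brevity):
import logging

LOG_FORMAT = "%(Levelname)s %(asctime)s - %(message)s"
logging.basicConfig(filename="tst.log",
                    level=logging.DEBUG,
                    format=LOG_FORMAT)

logger = logging.getLogger()
# Levelname (capital L) is not a built-in LogRecord attribute, so it must be
# supplied on every call via extra; otherwise formatting fails and the record
# is never written to the file.
logger.info("Our first message.", extra={"Levelname": "INFO"})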
So, when I copy-paste the following x times into the Python prompt, it adds the log entries x times to the end of the designated file. How can I change the code so that each time I paste it into the prompt, it simply overwrites the existing file? (The code does not seem to respect the mode='w' option, or I do not understand its meaning.)
def MinimalLoggingf():
    import logging
    import os
    paths = {'work': ''}
    logger = logging.getLogger('oneDayFileLoader')
    LogHandler = logging.FileHandler(os.path.join(paths["work"], "oneDayFileLoader.log"), mode='w')
    formatter = logging.Formatter('%(asctime)s %(levelname)s %(message)s')
    LogHandler.setFormatter(formatter)
    logger.addHandler(LogHandler)
    logger.setLevel(logging.DEBUG)
    # Let's say this is an error:
    if 1 == 1:
        logger.error('overwrite')
So I run it once:
MinimalLoggingf()
Now, I want the new log file to overwrite the log file created on the previous run:
MinimalLoggingf()
If I understand correctly, you're running a certain Python process for days at a time, and want to rotate the log every day. I'd recommend you go a different route, using a handler that automatically rotates the log file, e.g. http://www.blog.pythonlibrary.org/2014/02/11/python-how-to-create-rotating-logs/
But if you want to control the log from the same workflow you're comfortable with (a Python console, pasting in code... extremely unpretty and error-prone, but sometimes quick-n-dirty is sufficient for the task at hand), well...
Your issue is that you create a new FileHandler each time you paste in the code, and you add it to the Logger object. You end up with a logger that has X FileHandlers attached to it, all of them writing to the same file. Try this:
import logging
import os
paths = {'work': ''}
logger = logging.getLogger('oneDayFileLoader')
if logger.handlers:
    logger.handlers[0].close()
    logger.handlers = []
logHandler = logging.FileHandler(os.path.join(paths["work"] , "oneDayFileLoader.log"), mode='w')
formatter = logging.Formatter('%(asctime)s %(levelname)s %(message)s')
logHandler.setFormatter(formatter)
logger.addHandler(logHandler)
logger.setLevel(logging.DEBUG)
logger.error('overwrite')
Based on your request, I've also added an example using TimedRotatingFileHandler. Note I haven't tested it locally, so if you have issues ping back.
import logging
import os
from logging.handlers import TimedRotatingFileHandler

logPath = os.path.join('', "fileLoaderLog")
logger = logging.getLogger('oneDayFileLoader')
logHandler = TimedRotatingFileHandler(logPath,
                                      when="midnight",
                                      interval=1)
formatter = logging.Formatter('%(asctime)s %(levelname)s %(message)s')
logHandler.setFormatter(formatter)
logger.addHandler(logHandler)
logger.setLevel(logging.DEBUG)
logger.error('overwrite')
Your log messages are being duplicated because you call addHandler more than once. Each call to addHandler adds an additional log handler.
If you want to make sure the file is created from scratch, add an extra line of code to remove it:
os.remove(os.path.join(paths["work"], "oneDayFileLoader.log"))
The mode is specified as part of logging.basicConfig and is passed through using filemode.
logging.basicConfig(
    level=logging.DEBUG,
    format='%(asctime)s %(levelname)s %(message)s',
    filename='oneDayFileLoader.log',
    filemode='w'
)
https://docs.python.org/3/library/logging.html#simple-examples
I want to see all the output displayed on the console in a log file. I have gone through the basics of the logging module, but I cannot figure out where I am going wrong.
My code is:
# temp2.py
import logging
import pytz
from parameters import *
import pandas as pd
import recos
def create_tbid_cc(tbid_path):
    """Function to read campaign-TBID mapping and remove duplicate entries by
    TBID-CampaignName"""
    logging.debug('Started')
    tbid_cc = pd.read_csv(tbid_path, sep='|')
    tbid_cc.columns = map(str.lower, tbid_cc.columns)
    tbid_cc_unique = tbid_cc[~(tbid_cc.tbid.duplicated())]
    tbid_cc_unique.set_index('tbid', inplace=True)
    tbid_cc_unique['campaignname_upd'] = tbid_cc_unique['campaignname']
    del tbid_cc_unique['campaignname']
    return tbid_cc, tbid_cc_unique

def main():
    logging.basicConfig(
        filename='app.log', filemode='w',
        format='%(asctime)s : %(levelname)s : %(message)s',
        datefmt='%m/%d/%Y %I:%M:%S %p',
        level=logging.DEBUG)
    tbid_cc, tbid_cc_unique = create_tbid_cc(tbid_path=tbid_campaign_map_path)

if __name__ == '__main__':
    main()
Output on console:
SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
tbid_cc_unique['campaignname_upd'] = tbid_cc_unique['campaignname']
Output of myapp.log is:
09/27/2015 06:29:56 AM : DEBUG : Started
I want to see the warning displayed on the console in the myapp.log file, but I am not able to. I have set the logging level to DEBUG, yet the output log file still contains only the one line shown above. Any help regarding this would be appreciated.
The warnings being raised won't automatically be logged; you're seeing them in the console because pandas writes them to stderr.
If you want to write these messages to the log, you'll need to catch the warnings and then log them yourself.
See:
https://docs.python.org/2/library/warnings.html
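For example, a minimal sketch using the standard library's logging.captureWarnings, which redirects messages issued through warnings.warn to the py.warnings logger so they end up in your log file (catching the warnings with warnings.catch_warnings and logging them yourself works too):
import logging
import warnings

logging.basicConfig(
    filename='app.log', filemode='w',
    format='%(asctime)s : %(levelname)s : %(message)s',
    datefmt='%m/%d/%Y %I:%M:%S %p',
    level=logging.DEBUG)

# Route anything raised via warnings.warn() (including pandas'
# SettingWithCopyWarning) to the 'py.warnings' logger, which writes
# to app.log through the root handler configured above.
logging.captureWarnings(True)

warnings.warn('this warning now ends up in app.log')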
I am a Python beginner. My Python script logs output to a file (say example.log) using the basic Python logging module. However, my script also makes some third-party API calls (for example, parse_the_file) over which I don't have any control. I want to capture the output (usually printed to the console) produced by the API in my example.log. The following example code works partially, but the problem is that the contents get overwritten as soon as I start logging the output of the API into my log file.
#!/usr/bin/env python
import logging
import sys
import os
from common_lib import * # import additional modules
logging.basicConfig(filename='example.log', filemode='w', level=logging.DEBUG, format='%(asctime)s - %(levelname)s - %(message)s')
logging.debug('This is a log message.') # message goes to log file.
sys.stdout = open('example.log','a')
metadata_func=parse_the_file('/opt/metadata.txt') # output goes to log file but OVERWRITES the content
sys.stdout = sys.__stdout__
logging.debug('This is a second log message.') # message goes to log file.
I know there have been posts about similar questions on this site, but I haven't found a workaround/solution that works in this scenario.
Try:
import logging
import os
import sys

log_file = open('example.log', 'a')
logging.basicConfig(stream=log_file, level=logging.DEBUG)
logging.debug("Test")

# Redirect Python-level writes to the log file.
sys.stdout = log_file
sys.stderr = log_file

# Redirect the OS-level stdout/stderr file descriptors as well, so output from
# child processes and C extensions also lands in the log file.
stdout_fd = os.dup(1)
stderr_fd = os.dup(2)
os.dup2(log_file.fileno(), 1)
os.dup2(log_file.fileno(), 2)

os.system("echo foo")

# Restore the original file descriptors and Python streams.
os.dup2(stdout_fd, 1)
os.dup2(stderr_fd, 2)
sys.stdout = sys.__stdout__
sys.stderr = sys.__stderr__
However, this will not format the redirected output like your log records. If you want that, you can try something like http://plumberjack.blogspot.com/2009/09/how-to-treat-logger-like-output-stream.html
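For reference, a minimal sketch of that idea (the LoggerWriter class and the 'stdout' logger name are just illustrative): wrap a logger in a file-like object and point sys.stdout at it, so anything printed from Python goes through your logging format. Note that this only catches Python-level writes to sys.stdout, not output written straight to the underlying file descriptor, which is what the os.dup2 trick above handles.
import logging
import sys

class LoggerWriter(object):
    """File-like object that forwards writes to a logger."""
    def __init__(self, logger, level):
        self.logger = logger
        self.level = level

    def write(self, message):
        # print writes the text and the trailing newline separately; skip blanks.
        if message.strip():
            self.logger.log(self.level, message.strip())

    def flush(self):
        pass

logging.basicConfig(filename='example.log', filemode='a', level=logging.DEBUG,
                    format='%(asctime)s - %(levelname)s - %(message)s')

sys.stdout = LoggerWriter(logging.getLogger('stdout'), logging.INFO)
print('this line is formatted like a log record')
sys.stdout = sys.__stdout__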
I'd like all the timestamps in my log file to be UTC timestamps. When configured in code, this is done as follows:
import logging
import time
myHandler = logging.FileHandler('mylogfile.log', 'a')
formatter = logging.Formatter('%(asctime)s %(levelname)-8s %(name)-15s:%(lineno)4s: %(message)-80s')
formatter.converter = time.gmtime
myHandler.setFormatter(formatter)
myLogger = logging.getLogger('MyApp')
myLogger.addHandler(myHandler)
myLogger.setLevel(logging.DEBUG)
myLogger.info('here we are')
I'd like to move away from the above 'in-code' configuration to a config file based mechanism.
Here's the config file section for the handler (which refers to the formatter):
[handler_MyLogHandler]
args=("mylogfile.log", "a",)
class=FileHandler
level=DEBUG
formatter=simpleFormatter
Now, how do I specify the converter attribute (time.gmtime) in the above section?
Edit: The above config file is loaded thus:
logging.config.fileConfig('myLogConfig.conf')
Sadly, there is no way of doing this using the configuration file, other than having e.g. a
class UTCFormatter(logging.Formatter):
    converter = time.gmtime
and then using a UTCFormatter in the configuration.
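For reference, the formatter section of the config file can then point at that class via its optional class entry; a sketch, assuming UTCFormatter is importable from a module named mylog (the module name here is made up):
[formatter_simpleFormatter]
format=%(asctime)s %(levelname)-8s %(name)-15s:%(lineno)4s: %(message)-80s
class=mylog.UTCFormatter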
Here is Vinay's solution applied to logging.basicConfig:
import logging
import time
logging.basicConfig(filename='junk.log', level=logging.DEBUG, format='%(asctime)s: %(levelname)s:%(message)s')
logging.Formatter.converter = time.gmtime
logging.info('A message.')