I am trying to add simple logging to my application using TimedRotatingFileHandler. However, the output goes both to the designated file and to standard error. I reduced the problem to a small example:
import logging, logging.handlers
import sys
logging.basicConfig(format='%(asctime)s %(message)s')
loghandler = logging.handlers.TimedRotatingFileHandler("logfile",when="midnight")
logger = logging.getLogger()
logger.setLevel(logging.INFO)
logger.addHandler(loghandler)
for k in range(5):
    logger.info("Line %d" % k)
I get 5 log lines both in my 'logfile' and this program's stderr. What am I doing wrong?
This is how you can get the output only in the log file and not on stdout/stderr:
import logging
from logging.handlers import TimedRotatingFileHandler
logHandler = TimedRotatingFileHandler("logfile",when="midnight")
logFormatter = logging.Formatter('%(asctime)s %(message)s')
logHandler.setFormatter( logFormatter )
logger = logging.getLogger( 'MyLogger' )
logger.addHandler( logHandler )
logger.setLevel( logging.INFO )
for k in range(5):
    logger.info("Line %d" % k)
logging.basicConfig sets up a handler that prints to standard error.
logger.addHandler(loghandler) sets up a TimedRotatingFileHandler.
Do you wish to squelch output to standard error?
If so, simply remove the call to logging.basicConfig.
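For illustration, here is a minimal sketch of the question's code with the basicConfig call removed; the format string is moved onto the file handler so the log lines keep their timestamps (names taken from the question):
import logging, logging.handlers

# No logging.basicConfig() call, so no stderr handler is attached to the root logger.
loghandler = logging.handlers.TimedRotatingFileHandler("logfile", when="midnight")
loghandler.setFormatter(logging.Formatter('%(asctime)s %(message)s'))
logger = logging.getLogger()
logger.setLevel(logging.INFO)
logger.addHandler(loghandler)
for k in range(5):
    logger.info("Line %d" % k)  # written only to "logfile"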
I am trying to create logs for errors. This is the logger I am using:
import logging
import os
def create_log(source):
    logging.basicConfig(filename="logs/" + source + ".log",
                        format='%(asctime)s::%(levelname)s::%(message)s',
                        filemode='a')
    logger = logging.getLogger()
    logger.setLevel(logging.INFO)

def info_logger(message):
    logger.info(message)

def error_logger(message):
    print(message)
    logger.error(message)
I am calling this logger in a for loop where I am doing some operation and trying to create logs for each iteration:
for i in data["source_id"]:
    # --Some task here--
    log_file_name = str(source_dict["source_id"]) + "_" + source_dict["source_name"] + "_" + str(datetime.today().strftime("%Y-%m-%d_%H_%M_%S"))
    create_log(log_file_name)
For the first iteration the log file is created, but for the other iterations the same log file keeps getting appended to. I want to make separate log files for each iteration. Any idea how I can do that?
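A likely cause: logging.basicConfig configures the root logger only if it has no handlers yet, so every call after the first one is a no-op and the original file handler stays attached. A minimal sketch of one workaround, reusing the create_log/source names from the question, is to swap the handler on each call:
import logging

def create_log(source):
    logger = logging.getLogger()
    logger.setLevel(logging.INFO)
    # Remove handlers left over from previous iterations so each file starts fresh.
    for old_handler in logger.handlers[:]:
        logger.removeHandler(old_handler)
        old_handler.close()
    handler = logging.FileHandler("logs/" + source + ".log", mode="a")
    handler.setFormatter(logging.Formatter('%(asctime)s::%(levelname)s::%(message)s'))
    logger.addHandler(handler)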
You can try this
import logging
debug = logging.FileHandler("debug.log")
debug.setLevel(logging.DEBUG)
error = logging.FileHandler("error.log")
error.setLevel(logging.ERROR)
warning = logging.FileHandler("warning.log")
warning.setLevel(logging.WARNING)
console = logging.StreamHandler()
logging.basicConfig(  # noqa
    level=logging.INFO,
    format="[%(asctime)s]:%(levelname)s %(name)s :%(module)s/%(funcName)s,%(lineno)d: %(message)s",
    handlers=[debug, error, warning, console]
)
logger = logging.getLogger()
logger.debug("This is debug")      # dropped entirely: the root level is INFO
logger.error("This is error")      # goes to debug.log, error.log, warning.log and the console
logger.warning("This is warn")     # goes to debug.log, warning.log and the console
I'm trying to import a function which initializes two different logging handlers with different levels. The problem is that for option 1 below, I'm getting the root logger, and for option 2, I can't get any logs to print to screen.
Does anybody have any thoughts or suggestions that might help?
Option 1
TestModule:
import logging
from sys import argv

def set_logger(app_name=argv[0][:-3]):
    logging.basicConfig(
        level=logging.DEBUG,
        format='[%(levelname)s][%(module)s][%(asctime)s] - %(message)s',
        filename="test.log"
    )
    console = logging.StreamHandler()
    console.setLevel(logging.INFO)
    formatter = logging.Formatter('[%(levelname)s][%(module)s][%(asctime)s] - %(message)s')
    console.setFormatter(formatter)
    logging.getLogger('').addHandler(console)
Option 2 TestModule:
import logging
from sys import argv

def set_logger(app_name=argv[0][:-3]):
    formatter = logging.Formatter('[%(levelname)s][%(module)s][%(asctime)s] - %(message)s')
    logger = logging.getLogger(app_name)
    stream_log = logging.StreamHandler()
    stream_log.setLevel(logging.INFO)
    stream_log.setFormatter(formatter)
    file_log = logging.FileHandler("test.log")
    file_log.setLevel(logging.DEBUG)
    file_log.setFormatter(formatter)
    logger.addHandler(stream_log)
    logger.addHandler(file_log)
in Script:
from Module import set_logger
import logging
if __name__ == "__main__":
    set_logger()
    logging.info("start_app")
What am I missing here?
In option 2, you are initializing a logger via logging.getLogger(app_name). You need to return that logger from the function and log through the returned object:
if __name__ == "__main__":
    logger = set_logger()
    logger.info("start_app")
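A minimal sketch of what the modified set_logger from option 2 might look like (same names as in the question). Note that the logger's own level should also be set; otherwise it inherits the root default of WARNING, which is likely why no info messages reached the screen:
import logging
from sys import argv

def set_logger(app_name=argv[0][:-3]):
    formatter = logging.Formatter('[%(levelname)s][%(module)s][%(asctime)s] - %(message)s')
    logger = logging.getLogger(app_name)
    logger.setLevel(logging.DEBUG)  # without this the effective level is WARNING
    stream_log = logging.StreamHandler()
    stream_log.setLevel(logging.INFO)
    stream_log.setFormatter(formatter)
    logger.addHandler(stream_log)
    file_log = logging.FileHandler("test.log")
    file_log.setLevel(logging.DEBUG)
    file_log.setFormatter(formatter)
    logger.addHandler(file_log)
    return logger  # callers log through the returned object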
I'm trying to do a test run of the logging module's RotatingFileHandler as follows:
import logging
from logging.handlers import RotatingFileHandler
# logging.basicConfig(filename="example.log", level=logging.DEBUG)
logger = logging.getLogger('my_logger')
handler = RotatingFileHandler("my_log.log", maxBytes=2000, backupCount=10)
logger.addHandler(handler)
for _ in range(10000):
    logger.debug("Hello, world!")
However, with the logging.basicConfig line commented out, the resulting my_log.log file contains no data.
If I uncomment the line with logging.basicConfig(filename="example.log", level=logging.DEBUG), I get the expected my_log.log files with numbered suffixes. However, there is also example.log, which is a (relatively) large file.
How can I set up the logging so that it only generates the my_log.log files, and not the large example.log file?
Python provides 5 logging levels out of the box (in increasing order of severity): DEBUG, INFO, WARNING, ERROR and CRITICAL. The default one is WARNING. The docs say that
Logging messages which are less severe than lvl will be ignored.
So if you use .debug with the default settings, you won't see anything in your logs.
The easiest fix would be to use logger.warning function rather than logger.debug:
import logging
from logging.handlers import RotatingFileHandler
logger = logging.getLogger('my_logger')
handler = RotatingFileHandler('my_log.log', maxBytes=2000, backupCount=10)
logger.addHandler(handler)
for _ in range(10000):
    logger.warning('Hello, world!')
And if you want to change logger level you can use .setLevel method:
import logging
from logging.handlers import RotatingFileHandler
logger = logging.getLogger('my_logger')
logger.setLevel(logging.DEBUG)
handler = RotatingFileHandler('my_log.log', maxBytes=2000, backupCount=10)
logger.addHandler(handler)
for _ in range(10000):
    logger.debug('Hello, world!')
Going off of Kurt Peek's answer, you can also pass the rotating file handler to logging.basicConfig directly:
import logging
from logging.handlers import RotatingFileHandler
logging.basicConfig(
    handlers=[RotatingFileHandler('./my_log.log', maxBytes=100000, backupCount=10)],
    level=logging.DEBUG,
    format="[%(asctime)s] %(levelname)s [%(name)s.%(funcName)s:%(lineno)d] %(message)s",
    datefmt='%Y-%m-%dT%H:%M:%S')
All the previous answers are correct; here is another way of doing the same thing, using a logging config file instead.
logging_config.ini
Here is the config file:
[loggers]
keys=root
[handlers]
keys=logfile
[formatters]
keys=logfileformatter
[logger_root]
level=DEBUG
handlers=logfile
[formatter_logfileformatter]
format=%(asctime)s %(name)-12s: %(levelname)s %(message)s
[handler_logfile]
class=handlers.RotatingFileHandler
level=DEBUG
args=('testing.log','a',10,100)
formatter=logfileformatter
myScrypt.py
Here is a simple logging script that uses the above config file:
import logging
from logging.config import fileConfig
fileConfig('logging_config.ini')
logger = logging.getLogger()
logger.debug('the best scripting language is python in the world')
RESULT
Here is the result. Note that in args=('testing.log','a',10,100) the positional arguments are (filename, mode, maxBytes, backupCount), so maxBytes is set to just 10 bytes; in real life that's clearly too small.
I found that to obtain the desired behavior one has to use the same name in the basicConfig and RotatingFileHandler initializations:
import logging
from logging.handlers import RotatingFileHandler
logging.basicConfig(filename="my_log.log", level=logging.DEBUG)
logger = logging.getLogger('my_logger')
handler = RotatingFileHandler("my_log.log", maxBytes=2000, backupCount=10)
logger.addHandler(handler)
for _ in range(10000):
    logger.debug("Hello, world!")
Here, I have chosen the same name, my_log.log. This results in only the 'size-limited' logs being created.
Apparently, I shouldn't be using ScrapyFileLogObserver anymore (http://doc.scrapy.org/en/1.0/topics/logging.html). But I still want to be able to save my log messages to a file, and I still want all the standard Scrapy console information to be saved to the file too.
From reading up on how to use the logging module, this is the code that I have tried to use:
class BlahSpider(CrawlSpider):
    name = 'blah'
    allowed_domains = ['blah.com']
    start_urls = ['https://www.blah.com/blahblahblah']
    rules = (
        Rule(SgmlLinkExtractor(allow=r'whatever'), callback='parse_item', follow=True),
    )

    def __init__(self):
        CrawlSpider.__init__(self)
        self.logger = logging.getLogger()
        self.logger.setLevel(logging.DEBUG)
        logging.basicConfig(filename='debug_log.txt', filemode='w',
                            format='%(asctime)s %(levelname)s: %(message)s',
                            level=logging.DEBUG)
        console = logging.StreamHandler()
        console.setLevel(logging.DEBUG)
        simple_format = logging.Formatter('%(levelname)s: %(message)s')
        console.setFormatter(simple_format)
        self.logger.addHandler(console)
        self.logger.info("Something")

    def parse_item(self, response):
        i = BlahItem()
        return i
It runs fine, and it saves the "Something" to the file. However, all of the stuff that I see in the command prompt window, all of the stuff that used to be saved to the file when I used ScrapyFileLogObserver, is not saved now.
I thought that my "console" handler with "logging.StreamHandler()" was supposed to deal with that, but this is just what I had read and I don't really understand how it works.
Can anyone point out what I am missing or where I have gone wrong?
Thank you.
I think the problem is that you've used both basicConfig and addHandler.
Configure two handlers separately:
self.logger = logging.getLogger()
self.logger.setLevel(logging.DEBUG)
logFormatter = logging.Formatter('%(asctime)s %(levelname)s: %(message)s')
# file handler
fileHandler = logging.FileHandler("debug_log.txt")
fileHandler.setLevel(logging.DEBUG)
fileHandler.setFormatter(logFormatter)
self.logger.addHandler(fileHandler)
# console handler
consoleHandler = logging.StreamHandler()
consoleHandler.setLevel(logging.DEBUG)
consoleHandler.setFormatter(logFormatter)
self.logger.addHandler(consoleHandler)
See also:
logger configuration to log to file and print to stdout
You can log all Scrapy output to a file by first disabling the root handler in scrapy.utils.log.configure_logging and then adding your own log handler.
In the settings.py file of the Scrapy project, add the following code:
import logging
from logging.handlers import RotatingFileHandler
from scrapy.utils.log import configure_logging
LOG_ENABLED = False
# Disable default Scrapy log settings.
configure_logging(install_root_handler=False)
# Define your logging settings.
log_file = '/tmp/logs/CRAWLER_logs.log'
root_logger = logging.getLogger()
root_logger.setLevel(logging.DEBUG)
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
rotating_file_log = RotatingFileHandler(log_file, maxBytes=10485760, backupCount=1)
rotating_file_log.setLevel(logging.DEBUG)
rotating_file_log.setFormatter(formatter)
root_logger.addHandler(rotating_file_log)
You can also adjust the log level (DEBUG to INFO, say) and the formatter as required.
Hope this helps!
I have an analytics routine (python 2.7) that I am adding logging to. My goal is to add INFO messages to the start and end of each function so I can see how long each is taking. What I have below is a simplified version. When I run this example, nothing writes to the log file I specify. I would like to get the messages with time stamps writing to that file.
import numpy as np
import pandas as pd
import logging
import logging.handlers
def perform_process():
    setup_logging()
    baseline()
    calculations()

def setup_logging():
    logging.basicConfig(level=logging.INFO)
    global logger
    global handler
    global formatter
    logger = logging.getLogger(__name__)
    handler = logging.handlers.RotatingFileHandler('C:\\Users\\perform_log.log', maxBytes=2000, backupCount=5)
    handler.setLevel(logging.info)
    formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
    handler.setFormatter(formatter)
    logger.addHandler(handler)
    logger.info('I am logging')

def baseline():
    logger.info('start baseline function')
    baseline_df = pd.DataFrame(np.random.randn(10, 4), columns=['a', 'b', 'c', 'd'])
    global output_df
    output_df = baseline_df

def calculations():
    print output_df.head()

perform_process()
When I run this file, I get this output in the console:
a b c d
0 -0.686909 0.279976 0.219521 -0.027359
1 -0.718949 0.714682 1.202500 0.935868
2 0.454883 1.205500 0.079626 -1.370491
3 -0.743507 -1.353939 0.677011 -0.847376
4 -0.464742 1.034433 -0.779324 0.930626
[5 rows x 4 columns]
INFO:__main__:I am logging
INFO:__main__:start baseline function
However, no data appears in the log file I specified. What is wrong with my logging setup to prevent messages from being written with the format I specified to the log file I specified?
Regarding the output above, why is 'print output_df.head()' executed before the two logging calls? I would like to see the log messages in order of execution.
In this line:
handler.setLevel(logging.info)
logging.info is a function, not a level. You'll have to correct it to logging.INFO (INFO is capitalized).
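For reference, the corrected line in setup_logging would be (everything else unchanged):
handler.setLevel(logging.INFO)  # the level constant, not the logging.info() function
As for why the print output appears before the log lines: the handler that basicConfig installs writes to stderr, while print writes to stdout, and the two streams are flushed independently, so their interleaving in the console is not guaranteed.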