I'm trying to do a test run of the logging module's RotatingFileHandler as follows:
import logging
from logging.handlers import RotatingFileHandler
# logging.basicConfig(filename="example.log", level=logging.DEBUG)
logger = logging.getLogger('my_logger')
handler = RotatingFileHandler("my_log.log", maxBytes=2000, backupCount=10)
logger.addHandler(handler)
for _ in range(10000):
    logger.debug("Hello, world!")
However, with the logging.basicConfig line commented out, the resulting my_log.log file contains no data.
If I uncomment the line with logging.basicConfig(filename="example.log", level=logging.DEBUG), I get the expected my_log.log files with numbered suffixes. However, there is also example.log, which is a (relatively) large file.
How can I set up the logging so that it only generates the my_log.log files, and not the large example.log file?
Python provides five logging levels out of the box (in increasing order of severity): DEBUG, INFO, WARNING, ERROR, and CRITICAL. The default is WARNING. The docs say that
Logging messages which are less severe than lvl will be ignored.
So if you use .debug with the default settings, you won't see anything in your logs.
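You can confirm this by inspecting the effective level directly (a quick sketch; the logger name matches the question's):
import logging

logger = logging.getLogger('my_logger')
# No level was set anywhere, so the logger falls back to the root
# logger's default level, WARNING (numeric value 30).
print(logger.getEffectiveLevel())                        # 30
print(logging.getLevelName(logger.getEffectiveLevel()))  # 'WARNING'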
The easiest fix would be to use logger.warning function rather than logger.debug:
import logging
from logging.handlers import RotatingFileHandler
logger = logging.getLogger('my_logger')
handler = RotatingFileHandler('my_log.log', maxBytes=2000, backupCount=10)
logger.addHandler(handler)
for _ in range(10000):
    logger.warning('Hello, world!')
And if you want to change the logger's level, you can use the .setLevel method:
import logging
from logging.handlers import RotatingFileHandler
logger = logging.getLogger('my_logger')
logger.setLevel(logging.DEBUG)
handler = RotatingFileHandler('my_log.log', maxBytes=2000, backupCount=10)
logger.addHandler(handler)
for _ in range(10000):
    logger.debug('Hello, world!')
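Note that handlers carry a level of their own, applied after the logger's. A minimal sketch (the filenames are just examples) that sends everything to the file but only warnings and above to the console:
import logging
from logging.handlers import RotatingFileHandler

logger = logging.getLogger('my_logger')
logger.setLevel(logging.DEBUG)  # the logger passes everything through

file_handler = RotatingFileHandler('my_log.log', maxBytes=2000, backupCount=10)
file_handler.setLevel(logging.DEBUG)  # the file gets DEBUG and up

console_handler = logging.StreamHandler()
console_handler.setLevel(logging.WARNING)  # the console only gets WARNING and up

logger.addHandler(file_handler)
logger.addHandler(console_handler)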
Going off of Kurt Peek's answer, you can also pass the rotating file handler to logging.basicConfig directly:
import logging
from logging.handlers import RotatingFileHandler
logging.basicConfig(
    handlers=[RotatingFileHandler('./my_log.log', maxBytes=100000, backupCount=10)],
    level=logging.DEBUG,
    format="[%(asctime)s] %(levelname)s [%(name)s.%(funcName)s:%(lineno)d] %(message)s",
    datefmt='%Y-%m-%dT%H:%M:%S')
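After that single call, module-level loggers pick up the rotating handler through the root logger, so a plain call is enough (a small usage sketch):
logger = logging.getLogger(__name__)
logger.debug('This lands in my_log.log via the root handler')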
All previous answers are correct; here is another way of doing the same thing, except using a logging config file instead.
logging_config.ini
Here is the config file:
[loggers]
keys=root
[handlers]
keys=logfile
[formatters]
keys=logfileformatter
[logger_root]
level=DEBUG
handlers=logfile
[formatter_logfileformatter]
format=%(asctime)s %(name)-12s: %(levelname)s %(message)s
[handler_logfile]
class=handlers.RotatingFileHandler
level=DEBUG
args=('testing.log','a',10,100)
formatter=logfileformatter
myScrypt.py
Here is a simple logging script that uses the above config file:
import logging
from logging.config import fileConfig
fileConfig('logging_config.ini')
logger = logging.getLogger()
logger.debug('the best scripting language is python in the world')
RESULT
Here is the result. Notice that maxBytes is set to 10 in args=('testing.log','a',10,100), but in real life that's clearly too small.
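If you want to verify the rotation on disk, a small sketch like this (run after the script above) lists the base file and its numbered backups:
import glob

# RotatingFileHandler appends numeric suffixes to the base filename
print(sorted(glob.glob('testing.log*')))
# e.g. ['testing.log', 'testing.log.1', 'testing.log.10', ...]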
I found that to obtain the desired behavior one has to use the same name in the basicConfig and RotatingFileHandler initializations:
import logging
from logging.handlers import RotatingFileHandler
logging.basicConfig(filename="my_log.log", level=logging.DEBUG)
logger = logging.getLogger('my_logger')
handler = RotatingFileHandler("my_log.log", maxBytes=2000, backupCount=10)
logger.addHandler(handler)
for _ in range(10000):
    logger.debug("Hello, world!")
Here, I have chosen the same name, my_log.log. This results in only the 'size-limited' logs being created.
Related
For any script that needs a history of its run kept, I use the following code to set up logging via the logging module, and I copy-paste it in:
from datetime import datetime
import logging
from logging import handlers

dt_now = datetime.today().strftime('%Y%m%d')
sender = 'sender@domainname.com'
recipients = 'recipient@domainname.com'
sub_err = "Process Failed"

# Set up logging
log_file = ("//some_server/some_dir/another_dir/"
            "logs/{}.txt".format(dt_now))
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)
file_handle = logging.FileHandler(log_file)
stream_handle = logging.StreamHandler()
email_handle = handlers.SMTPHandler(mailhost="mailhost.domain.com",
                                    fromaddr=sender,
                                    toaddrs=recipients,
                                    subject=sub_err,
                                    secure=None)
email_handle.setLevel(logging.WARNING)
file_handle.setLevel(logging.DEBUG)
stream_handle.setLevel(logging.DEBUG)
log_format = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s')
handles = [email_handle, file_handle, stream_handle]
for handle in handles:
    handle.setFormatter(log_format)
    logger.addHandler(handle)
This works well for me: it logs everything to a file, sends me an email if an exception is hit, and prints to the console if I'm working with one to debug. For any project it's pretty typical for me to have two or more scripts, and what I usually do is copy and paste this into each script, which gives me a single log file for the process. In addition, I usually have a project module with some commonly used functions that both scripts use. What I'm wondering is how I would go about housing my logging setup in this common file, so that each script just imports it and uses all the same handlers and log file.
As @chepner commented, you can create a function that sets up the logger. You can also create a logging configuration file that you can parse via logging.config.fileConfig.
Function
You can create a function that creates the logger with the correct handlers. Note that this function needs to be called only once for the entire run of the application, since the logging config is defined globally across the entire app.
def init_logging():
    from datetime import datetime
    import logging
    from logging import handlers

    dt_now = datetime.today().strftime('%Y%m%d')
    sender = 'sender@domainname.com'
    recipients = 'recipient@domainname.com'
    sub_err = "Process Failed"

    # Set up logging
    log_file = "//some_server/some_dir/another_dir/logs/{}.txt".format(dt_now)
    logger = logging.getLogger("foobar")
    logger.setLevel(logging.DEBUG)
    file_handle = logging.FileHandler(log_file)
    stream_handle = logging.StreamHandler()
    email_handle = handlers.SMTPHandler(
        mailhost="mailhost.domain.com",
        fromaddr=sender,
        toaddrs=recipients,
        subject=sub_err,
        secure=None
    )
    email_handle.setLevel(logging.WARNING)
    file_handle.setLevel(logging.DEBUG)
    stream_handle.setLevel(logging.DEBUG)
    log_format = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s')
    handles = [email_handle, file_handle, stream_handle]
    for handle in handles:
        handle.setFormatter(log_format)
        logger.addHandler(handle)
You can then use the logger created here inside any script like this:
import logging
from .logging import init_logging

init_logging()

# foobar is accessible, and has all the handlers as defined in init_logging
foobarlogger = logging.getLogger("foobar")
foobarlogger.info("Hello, world!")
Note: since the logging configuration is global, you can also put this code not inside a function but inside the __init__.py file of a package, so that it runs at first import and is usable throughout the entire app.
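For example, in a hypothetical package layout where a logging_setup.py module holds the init_logging function from above (module and package names here are illustrative):
# myproject/__init__.py
from .logging_setup import init_logging

init_logging()  # runs once, on the first import of the package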
fileConfig
Another way to proceed would be to use logging.config.fileConfig. You can set up your logging handlers, loggers, formatters, etc. inside this file.
Create a logging.conf file:
[loggers]
keys=root, foobar
[handlers]
keys=console, file, email
[formatters]
keys=simpleFormatter
[logger_root]
level=DEBUG
handlers=console
[logger_foobar]
level=DEBUG
handlers=file, console, email
qualname=foobar
propagate=0
[handler_file]
class=FileHandler
level=DEBUG
formatter=simpleFormatter
args=('/some_server/some_dir/another_dir/logs/log.txt',)
[handler_console]
class=StreamHandler
level=DEBUG
formatter=simpleFormatter
[handler_email]
class=handlers.SMTPHandler
level=WARNING
args=('mailhost.domain.com', 'sender@domainname.com', ['recipient@domainname.com'], 'Process Failed')
formatter=simpleFormatter
[formatter_simpleFormatter]
format=%(asctime)s - %(levelname)s - %(message)s
Then, you can load this file and set up the whole logging environment with:
import logging.config
# only do it once, since the loaded logging config is global!
logging.config.fileConfig("logging.conf")
# then, in any file, you can do:
foobarlogger = logging.getLogger("foobar")
foobarlogger.info("Hello, world!")
I am a Python newbie trying to implement logging in my code. I have two modules:
main.py
submodule.py
main.py
import logging
from logging.handlers import RotatingFileHandler

import submodule
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)
fh = RotatingFileHandler('master.log', maxBytes=2000000, backupCount=10)
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
fh.setFormatter(formatter)
logger.addHandler(fh)
logger.debug('DEBUG LEVEL - MAIN MODULE')
logger.info('INFO LEVEL - MAIN MODULE')
submodule.loggerCall()
submodule.py
import logging
from logging.handlers import RotatingFileHandler
def loggerCall():
    logger = logging.getLogger(__name__)
    # logger.setLevel(logging.DEBUG)
    fh = RotatingFileHandler('master.log', maxBytes=2000000, backupCount=10)
    formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
    fh.setFormatter(formatter)
    logger.addHandler(fh)
    logger.debug('SUBMODULE: DEBUG LOGGING MODE : ')
    logger.info('Submodule: INFO LOG')
    return
I thought that as long as I call getLogger from my submodule, it should inherit the log level & handler details from the root logger. However, in my case, I have to specify the log level and handler again in the submodule to get them to print to the same log file.
Also, if I have lots of methods and classes inside my submodule, how can I go about it without having to define my log level & handler again?
The idea is to have a single log file, with the main and sub modules printing to the same log based on the log level set in the main module.
The problem here is that you're not initializing the root logger; you're initializing the logger for your main module.
Try this for main.py:
import logging
from logging.handlers import RotatingFileHandler
import submodule
logger = logging.getLogger() # Gets the root logger
logger.setLevel(logging.DEBUG)
fh = RotatingFileHandler('master.log', maxBytes=2000000, backupCount=10)
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
fh.setFormatter(formatter)
logger.addHandler(fh)
logger.debug('DEBUG LEVEL - MAIN MODULE')
logger.info('INFO LEVEL - MAIN MODULE')
submodule.loggerCall()
Then try this for submodule.py:
import logging


def loggerCall():
    logger = logging.getLogger(__name__)
    logger.debug('SUBMODULE: DEBUG LOGGING MODE : ')
    logger.info('Submodule: INFO LOG')
    return
Since you said you wanted to send log messages from all your submodules to the same place, you should initialize the root logger and then simply use the message logging methods (along with setLevel() calls, as appropriate). Because there's no explicit handler on your submodule's logger, logging.getLogger(__name__) will traverse the tree to the root, where it will find the handler you established in main.py.
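A compressed sketch of that traversal (using basicConfig for brevity; the logger name is illustrative): the child logger has no handler of its own, so the record propagates up and is emitted by the root handler.
import logging

logging.basicConfig(filename='master.log', level=logging.DEBUG)  # handler on root

child = logging.getLogger('app.submodule')  # no handlers attached here
child.debug('bubbles up and lands in master.log')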
I'm trying to use the standard logging library to debug my code:
This works fine:
import logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
logger.info('message')
I can't make the logger work for the lower levels:
logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger(__name__)
logger.info('message')
logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger(__name__)
logger.debug('message')
I don't get any output for either of those.
What Python version? That works for me in 3.4. But note that basicConfig() won't affect the root handler if it's already set up:
This function does nothing if the root logger already has handlers configured for it.
To set the level on root explicitly, do logging.getLogger().setLevel(logging.DEBUG). But ensure you've called basicConfig() beforehand so the root logger initially has some setup. I.e.:
import logging
logging.basicConfig()
logging.getLogger().setLevel(logging.DEBUG)
logging.getLogger('foo').debug('bah')
logging.getLogger().setLevel(logging.INFO)
logging.getLogger('foo').debug('bah')
Also note that loggers and their handlers have distinct, independent log levels. So if you've previously loaded some complex logger config in your Python script, and that has messed with the root logger's handler(s), then just changing the logger's level with logging.getLogger().setLevel(...) may not work, because the attached handler may have a log level set independently. This is unlikely to be the case and not something you'd normally have to worry about.
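A short sketch of that interaction: the logger's level is checked first, then each handler applies its own level to the records that got through.
import logging

logger = logging.getLogger('demo')
logger.setLevel(logging.DEBUG)  # the logger passes DEBUG and up

handler = logging.StreamHandler()
handler.setLevel(logging.ERROR)  # but this handler drops anything below ERROR
logger.addHandler(handler)

logger.debug('passes the logger, but the handler drops it')
logger.error('this one is emitted')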
I use the following setup for logging.
YAML-based config
Create a YAML file called logging.yaml like this:
version: 1
formatters:
  simple:
    format: "%(name)s - %(lineno)d - %(message)s"
  complex:
    format: "%(asctime)s - %(name)s - %(lineno)d - %(message)s"
handlers:
  console:
    class: logging.StreamHandler
    level: DEBUG
    formatter: simple
  file:
    class: logging.handlers.TimedRotatingFileHandler
    when: midnight
    backupCount: 5
    level: DEBUG
    formatter: simple
    filename: Thrift.log
loggers:
  qsoWidget:
    level: INFO
    handlers: [console, file]
    propagate: yes
  __main__:
    level: DEBUG
    handlers: [console]
    propagate: yes
Python - The main module
The "main" module should look like this:
import logging.config
import logging
import yaml
with open('logging.yaml', 'rt') as f:
    config = yaml.safe_load(f.read())

logging.config.dictConfig(config)
logger = logging.getLogger(__name__)
logger.info("Contest is starting")
Submodules/Classes
These should start like this:
import logging
class locator(object):
    def __init__(self):
        self.logger = logging.getLogger(__name__)
        self.logger.debug('{} initialized'.format(__name__))
Hope that helps you...
In my opinion, this is the best approach for the majority of cases.
Configuration via an INI file
Create a file named logging.ini in the project root directory as below:
[loggers]
keys=root
[logger_root]
level=DEBUG
handlers=screen,file
[formatters]
keys=simple,verbose
[formatter_simple]
format=%(asctime)s [%(levelname)s] %(name)s: %(message)s
[formatter_verbose]
format=[%(asctime)s] %(levelname)s [%(filename)s %(name)s %(funcName)s (%(lineno)d)]: %(message)s
[handlers]
keys=file,screen
[handler_file]
class=handlers.TimedRotatingFileHandler
formatter=verbose
level=WARNING
# args map to (filename, when, interval, backupCount)
args=('debug.log', 'midnight', 1, 5)
[handler_screen]
class=StreamHandler
formatter=simple
level=DEBUG
args=(sys.stdout,)
Then configure it as below:
import logging
from logging.config import fileConfig
fileConfig('logging.ini')
logger = logging.getLogger('dev')
name = "stackoverflow"
logger.info(f"Hello {name}!")
logger.critical('This message should go to the log file.')
logger.error('So should this.')
logger.warning('And this, too.')
logger.debug('Bye!')
If you run the script, the console output will be:
2021-01-31 03:40:10,241 [INFO] dev: Hello stackoverflow!
2021-01-31 03:40:10,242 [CRITICAL] dev: This message should go to the log file.
2021-01-31 03:40:10,243 [ERROR] dev: So should this.
2021-01-31 03:40:10,243 [WARNING] dev: And this, too.
2021-01-31 03:40:10,243 [DEBUG] dev: Bye!
And debug.log file should contain:
[2021-01-31 03:40:10,242] CRITICAL [my_loger.py dev <module> (12)]: This message should go to the log file.
[2021-01-31 03:40:10,243] ERROR [my_loger.py dev <module> (13)]: So should this.
[2021-01-31 03:40:10,243] WARNING [my_loger.py dev <module> (14)]: And this, too.
All done.
I wanted to leave the default logger at the warning level but have detailed lower-level loggers for my own code, yet it wouldn't show anything. Building on the other answer, it's critical to run logging.basicConfig() beforehand:
import logging
logging.basicConfig()
logging.getLogger('foo').setLevel(logging.INFO)
logging.getLogger('foo').info('info')
logging.getLogger('foo').debug('info')
logging.getLogger('foo').setLevel(logging.DEBUG)
logging.getLogger('foo').info('info')
logging.getLogger('foo').debug('debug')
Expected output:
INFO:foo:info
INFO:foo:info
DEBUG:foo:debug
For a logging solution across modules, I did this:
# cfg.py
import logging
logging.basicConfig()
logger = logging.getLogger('foo')
logger.setLevel(logging.INFO)
logger.info(f'active')
# main.py
import cfg
cfg.logger.info(f'main')
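Any other module can then fetch the same configured logger by name, instead of importing cfg's attribute, as long as cfg has been imported somewhere first (a small sketch; the module name is illustrative):
# other.py
import logging

logger = logging.getLogger('foo')  # the very same logger object cfg.py configured
logger.info('visible, using the handlers and level set in cfg.py')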
I have created a global logger using the following:
def logini():
    logfile = '/var/log/cs_status.log'
    import logging
    import logging.handlers
    global logger
    logger = logging.getLogger()
    logging.basicConfig(filename=logfile, filemode='a',
                        format='%(asctime)s %(name)s %(levelname)s %(message)s',
                        datefmt='%y%m%d-%H:%M:%S', level=logging.DEBUG, propagate=0)
    handler = logging.handlers.RotatingFileHandler(logfile, maxBytes=2000000, backupCount=5)
    logger.addHandler(handler)
    __builtins__.logger = logger
It works; however, I am getting two outputs for every log entry, one with the formatting and one without.
I realize that this is being caused by the file rotator, as I can comment out the two lines of handler code and then get a single, correctly formatted log entry.
How can I prevent the log rotator from outputting a second entry?
Currently you're configuring two file loggers that point to the same logfile. To only use the RotatingFileHandler, get rid of the basicConfig call:
logger = logging.getLogger()
handler = logging.handlers.RotatingFileHandler(logfile, maxBytes=2000000,
                                               backupCount=5)
formatter = logging.Formatter(fmt='%(asctime)s %(name)s %(levelname)s %(message)s',
                              datefmt='%y%m%d-%H:%M:%S')
handler.setFormatter(formatter)
handler.setLevel(logging.DEBUG)
logger.addHandler(handler)
All basicConfig does for you is provide an easy way to instantiate either a StreamHandler (the default) or a FileHandler and set its log level and formats (see the docs for more information). If you need a handler other than these two, you should instantiate and configure it yourself.
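Roughly speaking, a plain basicConfig(filename=...) call boils down to something like this manual setup (a simplified sketch that ignores some defaults, reusing the logfile variable from above):
import logging

handler = logging.FileHandler(logfile)  # or logging.StreamHandler() when no filename is given
handler.setFormatter(logging.Formatter(logging.BASIC_FORMAT))
logging.getLogger().addHandler(handler)  # attached to the root logger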
I have the following lines of code that initialize logging.
I comment one out and leave the other to be used.
The problem I'm facing is that the one meant to log to the file is not logging to the file; it is instead logging to the console.
Please help.
For logging to Console:
logging.basicConfig(level=logging.INFO,
                    format='%(asctime)s [%(levelname)s] (%(threadName)-10s) %(message)s',)
For file logging:
logging.basicConfig(filename='server-soap.1.log', level=logging.INFO,
                    format='%(asctime)s [%(levelname)s] (%(threadName)-10s) %(message)s')
I found out what the problem was.
It was in the ordering of the imports and the logging definition.
The effect of the poor ordering was that one of the libraries I imported before calling logging.basicConfig() had already configured logging itself. That configuration therefore took precedence over the one I tried to define later with logging.basicConfig().
Below is how I needed to order it:
import logging
## for file logging
logging.basicConfig(filename='server-soap.1.log',
                    level=logging.INFO,
                    format='%(asctime)s %(levelname)s %(threadName)-10s %(message)s',)
from pysimplesoap.server import SoapDispatcher, SOAPHandler
from BaseHTTPServer import HTTPServer
import time,random,datetime,pytz,sys,threading
from datetime import timedelta
#DB
import psycopg2, psycopg2.extras
from psycopg2.pool import ThreadedConnectionPool
#ESB Call
from suds import WebFault
from suds.client import Client
But the faulty ordering that I initially had was:
from pysimplesoap.server import SoapDispatcher, SOAPHandler
from BaseHTTPServer import HTTPServer
import logging
import time,random,datetime,pytz,sys,threading
from datetime import timedelta
#DB
import psycopg2, psycopg2.extras
from psycopg2.pool import ThreadedConnectionPool
#ESB Call
from suds import WebFault
from suds.client import Client
## for file logging
logging.basicConfig(filename='server-soap.1.log',
                    level=logging.INFO,
                    format='%(asctime)s %(levelname)s %(threadName)-10s %(message)s',)
"Changed in version 3.8: The force argument was added." I think it's a better choice for new version.
For older Version(< 3.8):
From the source code of logging I found the following:
This function does nothing if the root logger already has handlers
configured. It is a convenience method intended for use by simple scripts
to do one-shot configuration of the logging package.
So, if some module we import called basicConfig() before us, our call will do nothing.
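A minimal demonstration of that no-op behavior:
import logging

logging.basicConfig(level=logging.INFO)   # first call configures the root logger
logging.basicConfig(level=logging.DEBUG)  # silently ignored (before Python 3.8's force=True)

logging.getLogger().debug('never shown: the root logger is still at INFO')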
A solution I found that works is to reload logging before your own call to basicConfig(), such as:
def init_logger(*, fn=None):
    # !!! here
    from importlib import reload  # Python 2.x doesn't need the import; reload is a builtin there
    reload(logging)

    logging_params = {
        'level': logging.INFO,
        'format': '%(asctime)s__[%(levelname)s, %(module)s.%(funcName)s](%(name)s)__[L%(lineno)d] %(message)s',
    }
    if fn is not None:
        logging_params['filename'] = fn
    logging.basicConfig(**logging_params)
    logging.error('basic logging config initialized')
In case basicConfig() does not work:
logger = logging.getLogger('Spam Logger')
logger.setLevel(logging.DEBUG)
# create file handler which logs even debug messages
fh = logging.FileHandler('spam.log')
fh.setLevel(logging.DEBUG)
# create console handler with a higher log level
ch = logging.StreamHandler()
ch.setLevel(logging.DEBUG)
# create formatter and add it to the handlers
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
ch.setFormatter(formatter)
fh.setFormatter(formatter)
# add the handlers to logger
logger.addHandler(ch)
logger.addHandler(fh)
# 'application' code
logger.debug('debug Spam message')
logging.debug('debug Spam message')
logger.info('info Ham message')
logger.warning('warn Eggs message')
logger.error('error Spam and Ham message')
logger.critical('critical Ham and Eggs message')
which gives me the following output:
2019-06-20 11:33:48,967 - Spam Logger - DEBUG - debug Spam message
2019-06-20 11:33:48,968 - Spam Logger - INFO - info Ham message
2019-06-20 11:33:48,968 - Spam Logger - WARNING - warn Eggs message
2019-06-20 11:33:48,968 - Spam Logger - ERROR - error Spam and Ham message
2019-06-20 11:33:48,968 - Spam Logger - CRITICAL - critical Ham and Eggs message
For the sake of reference, the Python Logging Cookbook is worth reading.
I got the same error. I fixed it by passing the following arguments to basicConfig:
logging.basicConfig(
    level="WARNING",
    format="%(asctime)s - %(name)s - [ %(message)s ]",
    datefmt='%d-%b-%y %H:%M:%S',
    force=True,
    handlers=[
        logging.FileHandler("debug.log"),
        logging.StreamHandler()
    ])
As you can see, passing force=True overrides any earlier basicConfig calls.
Another solution that worked for me, instead of tracking down which module might be importing logging or even calling basicConfig before me, is to just call setLevel again after basicConfig.
import os
import logging

# fall back to INFO if the environment variable is unset
RUNTIME_DEBUG_LEVEL = os.environ.get('RUNTIME_DEBUG_LEVEL', 'INFO').upper()

LOGGING_KWARGS = {
    'level': getattr(logging, RUNTIME_DEBUG_LEVEL)
}

logging.basicConfig(**LOGGING_KWARGS)
logging.getLogger().setLevel(getattr(logging, RUNTIME_DEBUG_LEVEL))
Sort of crude and a bit hacky, but it fixed my problem, so it's worth a share.
IF YOU JUST WANT TO SET THE LOG LEVEL OF ALL LOGGERS
instead of ordering your imports after the logging config, just set the level on the root logger:
# Option 1:
logging.root.setLevel(logging.INFO)
# Option 2 - make it configurable:
# env variable + default value INFO
logging.root.setLevel(os.environ.get('LOG_LEVEL', logging.INFO))
As vacing's answer mentioned, basicConfig has no effect if the root logger already has handlers configured.
I was using pytest, which seems to set handlers, meaning the default logging setup with log level WARNING was active. So it appeared my app failed to log, but this only happened when executing unit tests with pytest. In a normal app run, logs are produced as expected, which is enough for my use case.