According to the docs, adding the process ID to log statements should be a simple matter. Here is what I have:
import logging

logging.basicConfig(format='%(process)d %(asctime)s %(levelname)-8s %(message)s',
                    filename=ns.logFile, level=ns.loggingLevel,
                    datefmt='%Y-%m-%d %H:%M:%S')
This prints out nicely, except the process id is missing:
2021-08-13 11:36:01 DEBUG Got here!!!9
What am I doing wrong?
As per the docs logging.basicConfig
does nothing if the root logger already has handlers configured, unless the keyword argument force is set to True. The functions debug(), info(), warning(), error() and critical() will call basicConfig() automatically if no handlers are defined for the root logger.
I suspect handlers are already added before your configuration runs, so your basicConfig() call is a no-op. You will have to force your configuration with:
import logging

logging.basicConfig(
    force=True,
    format='%(process)d %(asctime)s %(levelname)-8s %(message)s',
    filename=ns.logFile, level=ns.loggingLevel, datefmt='%Y-%m-%d %H:%M:%S'
)
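To confirm that a pre-existing handler is the culprit, you can inspect the root logger's handlers before and after the call (a quick diagnostic sketch; 'app.log' is just a placeholder filename):

import logging

print(logging.getLogger().handlers)   # non-empty, e.g. [<StreamHandler ...>], means
                                      # a plain basicConfig() call would do nothing
logging.basicConfig(force=True, filename='app.log')   # force requires Python 3.8+
print(logging.getLogger().handlers)   # now only the new FileHandler remains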
A related question:
I would like to save all logging lines to superwrapper.log but show only INFO in the console.
If I uncomment the filename line, the file comes out okay, but I don't see anything in the console.
if __name__ == '__main__':
    logging.basicConfig(
        #filename='superwrapper.log',
        level=logging.DEBUG,
        format='%(asctime)s.%(msecs)03d %(levelname)s %(module)s - %(funcName)s: %(message)s',
        datefmt='%Y-%m-%d %H:%M:%S'
    )
2020-04-28 11:41:09.698 INFO common - handle_elasticsearch: Elastic connection detected
2020-04-28 11:41:09.699 INFO superwrapper - <module>: Cookies Loaded: |TRUE|
2020-04-28 11:41:09.715 DEBUG connectionpool - _new_conn: Starting new HTTPS connection (1): m.facebook.com:443
You can use multiple handlers. logging.basicConfig accepts a handlers argument as of Python 3.3: one handler for writing to the log file and one for the console, each of which can have its own logging level. The simplest way I can think of is this:
import logging
import sys

file_handler = logging.FileHandler('superwrapper.log')
console_handler = logging.StreamHandler(sys.stdout)
console_handler.setLevel(logging.INFO)

logging.basicConfig(
    level=logging.DEBUG,
    format='%(asctime)s.%(msecs)03d %(levelname)s %(module)s - %(funcName)s: %(message)s',
    datefmt='%Y-%m-%d %H:%M:%S',
    handlers=[
        file_handler,
        console_handler
    ]
)
One thing to note is that StreamHandler writes to stderr by default. Usually you will want to override this with sys.stdout, as above.
You can set up multiple loggers. This will get rid of the DEBUG messages, but note that messages of a higher severity will still be broadcast (e.g. 'WARNING' and 'ERROR').
This exact scenario is in the logging cookbook of the Python docs:
import logging

# set up logging to file - see previous section for more details
logging.basicConfig(level=logging.DEBUG,
                    format='%(asctime)s %(name)-12s %(levelname)-8s %(message)s',
                    datefmt='%m-%d %H:%M',
                    filename='/temp/myapp.log',
                    filemode='w')
# define a Handler which writes INFO messages or higher to the sys.stderr
console = logging.StreamHandler()
console.setLevel(logging.INFO)
# set a format which is simpler for console use
formatter = logging.Formatter('%(name)-12s: %(levelname)-8s %(message)s')
# tell the handler to use this format
console.setFormatter(formatter)
# add the handler to the root logger
logging.getLogger('').addHandler(console)
# Now, we can log to the root logger, or any other logger. First the root...
logging.info('Jackdaws love my big sphinx of quartz.')
# Now, define a couple of other loggers which might represent areas in your
# application:
logger1 = logging.getLogger('myapp.area1')
logger2 = logging.getLogger('myapp.area2')
logger1.debug('Quick zephyrs blow, vexing daft Jim.')
logger1.info('How quickly daft jumping zebras vex.')
logger2.warning('Jail zesty vixen who grabbed pay from quack.')
logger2.error('The five boxing wizards jump quickly.')
In the example given by the cookbook, you should see in the console all the 'INFO', 'WARNING' and 'ERROR' messages, but only the log file will hold the 'DEBUG' message.
Alan's answer was great, but it also wrote the messages from the root logger, which was too much for me: I was afraid the log file's size would get out of control. So I modified it as below, to log only what I specify and nothing extra from imported modules.
import logging
# set up logging to file - see previous section for more details
logging.basicConfig(format='%(asctime)s %(name)-12s %(levelname)-8s %(message)s',
                    datefmt='%m-%d %H:%M',
                    filename='/temp/myapp.log',
                    filemode='w')
logger = logging.getLogger('myapp')
logger.setLevel(logging.DEBUG)
# define a Handler which writes INFO messages or higher to the sys.stderr
console = logging.StreamHandler()
console.setLevel(logging.INFO)
# set a format which is simpler for console use
formatter = logging.Formatter('%(name)-12s: %(levelname)-8s %(message)s')
# tell the handler to use this format
console.setFormatter(formatter)
# add the handler to your logger
logger.addHandler(console)
Then in my app, I logged like below, using just logger (without the numeric suffixes of logger1 and logger2 from the cookbook example).
logger.debug('Quick zephyrs blow, vexing daft Jim.')
logger.info('How quickly daft jumping zebras vex.')
logger.warning('Jail zesty vixen who grabbed pay from quack.')
logger.error('The five boxing wizards jump quickly.')
In my program I define a logger at the beginning that looks similar to this:
def start_logger():
    fh = logging.handlers.RotatingFileHandler('logger.log',
                                              maxBytes=1000000,
                                              backupCount=100)
    fh.setLevel(logging.DEBUG)

    ch = logging.StreamHandler(sys.stdout)
    ch.setLevel(logging.DEBUG)

    fh_fmt = '%(asctime)s %(levelname)8s %(message)s [%(filename)s:%(lineno)d]'
    #fh_fmt = '%(asctime)s - %(funcName)s - %(levelname)s - %(message)s'
    ch_fmt = '%(asctime)s %(levelname)8s %(message)s [%(filename)s:%(lineno)d]'
    #ch_fmt = '%(funcName)s - %(levelname)s - %(message)s'

    fh.setFormatter(logging.Formatter(fh_fmt))
    ch.setFormatter(logging.Formatter(ch_fmt))

    root = logging.getLogger()
    root.addHandler(fh)
    root.addHandler(ch)
I then have multiple files that are called from my main program. In order for them to work correctly I need to do the following:
import logging

log = logging.getLogger(__name__)
log.setLevel(logging.DEBUG)
log.debug("This debug message is only output when I set the level to DEBUG "
          "again for this file. Otherwise the log level for this file is WARNING.")
Why is the default level for all modules that I import set to WARNING? Why do I have to set the level to DEBUG again in each of them when they obtain a logger with log = logging.getLogger(__name__)? Is this the best way to set up logging across a package with different modules, or is there a better solution?
Let's start by looking at what Handler.setLevel does:
Sets the threshold for this handler to lvl. Logging messages which are less severe than lvl will be ignored. When a handler is created, the level is set to NOTSET (which causes all messages to be processed).
Any messages less severe than lvl are ignored. Effectively, setting it to DEBUG is meaningless (unless you define your own log levels), because there is no less severe message than debug. So, that means that the handler won't ignore any messages.
setLevel is the right idea, but you're calling it on the wrong object. Look at Logger.setLevel:
Sets the threshold for this logger to lvl. Logging messages which are less severe than lvl will be ignored. When a logger is created, the level is set to NOTSET (which causes all messages to be processed when the logger is the root logger, or delegation to the parent when the logger is a non-root logger). Note that the root logger is created with level WARNING.
The term ‘delegation to the parent’ means that if a logger has a level of NOTSET, its chain of ancestor loggers is traversed until either an ancestor with a level other than NOTSET is found, or the root is reached.
If an ancestor is found with a level other than NOTSET, then that ancestor’s level is treated as the effective level of the logger where the ancestor search began, and is used to determine how a logging event is handled.
You're creating the children right, but they're all children of the root logger. Their level is set to NOTSET, which delegates up to the root, whose default level is WARNING. Consequently, you don't see any DEBUG or INFO messages.
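You can watch that delegation happen directly (a minimal sketch; the name myapp.module is just an example):

import logging

child = logging.getLogger('myapp.module')
print(child.level)                  # 0  (NOTSET)
print(child.getEffectiveLevel())    # 30 (WARNING, inherited from the root logger)

logging.getLogger().setLevel(logging.DEBUG)
print(child.getEffectiveLevel())    # 10 (DEBUG, now inherited from the root)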
TL;DR:
The solution is simple: set the level on the logger, not the handler. The following code should do what you need:
def start_logger():
    fh = logging.handlers.RotatingFileHandler('logger.log',
                                              maxBytes=1000000,
                                              backupCount=100)
    ch = logging.StreamHandler(sys.stdout)

    fh_fmt = '%(asctime)s %(levelname)8s %(message)s [%(filename)s:%(lineno)d]'
    #fh_fmt = '%(asctime)s - %(funcName)s - %(levelname)s - %(message)s'
    ch_fmt = '%(asctime)s %(levelname)8s %(message)s [%(filename)s:%(lineno)d]'
    #ch_fmt = '%(funcName)s - %(levelname)s - %(message)s'

    fh.setFormatter(logging.Formatter(fh_fmt))
    ch.setFormatter(logging.Formatter(ch_fmt))

    logging.basicConfig(level=logging.DEBUG)
    root = logging.getLogger()
    root.addHandler(fh)
    root.addHandler(ch)
Once you do that, you shouldn't need the setLevel call when making the children.
Oh, and to answer your other questions: this is exactly the way you're supposed to use the logging library. (I actually just log everything to the root logger, because I don't need the kind of granularity you developed, but when you have a case for it, you're doing it just as you should.)
EDIT: Evidently, setLevel alone doesn't seem to work on the root logger here. Instead, before accessing the root logger, call basicConfig: setting the level in logging.basicConfig will do what you need (at least, it worked in my tests). Note that this matches the example given in Logging from multiple modules, so it should solve your problem.
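For reference, the pattern from Logging from multiple modules boils down to this (a minimal sketch; the names main.py and mylib.py are just for illustration):

# main.py
import logging
import mylib

def main():
    logging.basicConfig(filename='myapp.log', level=logging.DEBUG)
    logging.info('Started')
    mylib.do_something()
    logging.info('Finished')

if __name__ == '__main__':
    main()

# mylib.py
import logging

logger = logging.getLogger(__name__)   # level NOTSET: delegates to the root level

def do_something():
    logger.info('Doing something')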
I'm using logging module, and I've passed in the same parameters that I have on other jobs that are currently working:
import logging
from inst_config import config3

logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s [%(levelname)s] - %(message)s',
    filename=config3.GET_LOGFILE(config3.REQUESTS_USAGE_LOGFILE))

logging.warning('This should go in the file.')

if __name__ == '__main__':
    logging.info('Starting unload.')
Using this method to create the filename:
REQUESTS_USAGE_LOGFILE = r'C:\RunLogs\Requests_Usage\requests_usage_runlog_{}.txt'.format(
    CUR_MONTH)

def GET_LOGFILE(logfile):
    """Truncates file and returns."""
    with open(logfile, 'w'):
        pass
    return logfile
When I run it, however, it is creating the file, and then still outputting the logging info to the console. I'm running in Powershell.
Just tried putting it inside the main statement like this:
if __name__ == '__main__':
    logging.basicConfig(
        level=logging.INFO,
        format='%(asctime)s [%(levelname)s] - %(message)s',
        filename=config3.GET_LOGFILE(config3.REQUESTS_USAGE_LOGFILE))
    logging.warning('This should go in the file.')
Still no luck.
I added the following lines before the logging.basicConfig() call and it worked for me.
for handler in logging.root.handlers[:]:
    logging.root.removeHandler(handler)
You can try running this snippet in your main file.
import logging

logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s [%(levelname)s] - %(message)s',
    filename='filename.txt')  # pass an explicit filename here

logger = logging.getLogger()  # get the root logger
logger.warning('This should go in the file.')
print(logger.handlers)  # you should see a single FileHandler object
In addition to Forge's answer on using logging.basicConfig(), as of Python 3.8 a force parameter was added to basicConfig(). To quote the docs:
"""
This function does nothing if the root logger already has handlers
configured, unless the keyword argument *force* is set to ``True``.
...
force If this keyword is specified as true, any existing handlers
attached to the root logger are removed and closed, before
carrying out the configuration as specified by the other
arguments.
"""
This is why yue dong's answer (removing all handlers) has worked for some, as has Alexander's answer (resetting logger.handlers to []).
Debugging logger.handlers (as Forge's answer suggests) led me to see a single StreamHandler there, so basicConfig() did nothing for me until I used force=True as a parameter.
Hope that helps in addition to all the other answers here too!
If you are using the 'root' logger, which by default has the name "", then you can do this trick:
logging.getLogger().setLevel(logging.INFO)
logger = logging.getLogger('')
logger.handlers = []
In addition, you may want to specify the logging level as in the code above; this level will persist for all descendant loggers.
If instead you use a particular named logger, then do:
logger = logging.getLogger('my_service')
logger.handlers = []
fh = logging.FileHandler(log_path)
fh.setLevel(logging.INFO)
# create console handler
ch = logging.StreamHandler()
ch.setLevel(logging.INFO)
logger.addHandler(fh)
logger.addHandler(ch)
logger.info('service started')
The above code creates a new logger, 'my_service'. If that logger already exists, it clears all its handlers; it then adds handlers for writing to the specified file and to the console. See the official documentation as well.
You can also use hierarchical loggers; dotted names create children directly:
logger = logging.getLogger('my_service.update')
logger.info('updated successfully')
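Records logged to the child propagate up to the parent's handlers, which you can check like this (a quick sketch):

import logging

child = logging.getLogger('my_service.update')
print(child.handlers)    # [] - the child has no handlers of its own
print(child.propagate)   # True - records bubble up to 'my_service' and its handlers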
I have created a global logger using the following:
def logini():
    logfile = '/var/log/cs_status.log'
    import logging
    import logging.handlers
    global logger
    logger = logging.getLogger()
    logging.basicConfig(filename=logfile, filemode='a',
                        format='%(asctime)s %(name)s %(levelname)s %(message)s',
                        datefmt='%y%m%d-%H:%M:%S', level=logging.DEBUG, propagate=0)
    handler = logging.handlers.RotatingFileHandler(logfile, maxBytes=2000000, backupCount=5)
    logger.addHandler(handler)
    __builtins__.logger = logger
It works; however, I am getting two outputs for every log entry, one with the formatting and one without.
I realize this is being caused by the rotating file handler, as I can comment out its two lines and then get a single, correctly formatted log entry.
How can I prevent the log rotator from producing a second entry?
Currently you're configuring two file loggers that point to the same logfile. To only use the RotatingFileHandler, get rid of the basicConfig call:
logger = logging.getLogger()
handler = logging.handlers.RotatingFileHandler(logfile, maxBytes=2000000,
                                               backupCount=5)
formatter = logging.Formatter(fmt='%(asctime)s %(name)s %(levelname)s %(message)s',
                              datefmt='%y%m%d-%H:%M:%S')
handler.setFormatter(formatter)
handler.setLevel(logging.DEBUG)
logger.addHandler(handler)
All basicConfig does for you is provide an easy way to instantiate either a StreamHandler (the default) or a FileHandler and set its log level and format (see the docs for more information). If you need a handler other than these two, you should instantiate and configure it yourself.
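Roughly, a plain basicConfig(filename=...) call expands to something like the following (a simplified sketch, not the exact stdlib code; 'app.log' is a placeholder):

import logging

root = logging.getLogger()
if not root.handlers:                           # why it's a no-op when handlers exist
    handler = logging.FileHandler('app.log')    # or StreamHandler() if no filename given
    handler.setFormatter(logging.Formatter(
        '%(levelname)s:%(name)s:%(message)s'))  # logging.BASIC_FORMAT, the default
    root.addHandler(handler)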
I have the following lines of code that initialize logging.
I comment one out and leave the other to be used.
The problem I'm facing is that the one meant to log to the file is not logging to the file. It is logging to the console instead.
Please help.
For logging to the console:

logging.basicConfig(level=logging.INFO,
                    format='%(asctime)s [%(levelname)s] (%(threadName)-10s) %(message)s',)

For file logging:

logging.basicConfig(filename='server-soap.1.log', level=logging.INFO,
                    format='%(asctime)s [%(levelname)s] (%(threadName)-10s) %(message)s')
I found out what the problem was: the ordering of the imports relative to the logging configuration.
Some of the libraries I imported before calling logging.basicConfig() configured logging themselves, so their configuration took precedence over the one I tried to set up later with logging.basicConfig().
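To see why, consider a hypothetical library that configures logging as a side effect of being imported (badlib and its basicConfig call are made up here purely for illustration):

# badlib.py - hypothetical third-party module
import logging
logging.basicConfig(level=logging.WARNING)   # installs a root StreamHandler at import time

# main.py
import badlib                            # the root handler gets installed here...
import logging
logging.basicConfig(filename='app.log')  # ...so this later call silently does nothing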
Below is how I needed to order it:
import logging
## for file logging
logging.basicConfig(filename='server-soap.1.log',
                    level=logging.INFO,
                    format='%(asctime)s %(levelname)s %(threadName)-10s %(message)s',)
from pysimplesoap.server import SoapDispatcher, SOAPHandler
from BaseHTTPServer import HTTPServer
import time,random,datetime,pytz,sys,threading
from datetime import timedelta
#DB
import psycopg2, psycopg2.extras
from psycopg2.pool import ThreadedConnectionPool
#ESB Call
from suds import WebFault
from suds.client import Client
But the faulty ordering that I initially had was:
from pysimplesoap.server import SoapDispatcher, SOAPHandler
from BaseHTTPServer import HTTPServer
import logging
import time,random,datetime,pytz,sys,threading
from datetime import timedelta
#DB
import psycopg2, psycopg2.extras
from psycopg2.pool import ThreadedConnectionPool
#ESB Call
from suds import WebFault
from suds.client import Client
## for file logging
logging.basicConfig(filename='server-soap.1.log',
                    level=logging.INFO,
                    format='%(asctime)s %(levelname)s %(threadName)-10s %(message)s',)
"Changed in version 3.8: The force argument was added." I think it's a better choice for new version.
For older versions (< 3.8):
From the source code of logging I found the following:
This function does nothing if the root logger already has handlers
configured. It is a convenience method intended for use by simple scripts
to do one-shot configuration of the logging package.
So, if some module we import called the basicConfig() method before us, our call will do nothing.
A workaround I found is to reload the logging module before your own call to basicConfig(), for example:
def init_logger(*, fn=None):
    # !!! here: reloading logging wipes any earlier configuration
    # (on Python 2.x, reload is a builtin and needs no import)
    from importlib import reload
    reload(logging)

    logging_params = {
        'level': logging.INFO,
        'format': '%(asctime)s__[%(levelname)s, %(module)s.%(funcName)s](%(name)s)__[L%(lineno)d] %(message)s',
    }
    if fn is not None:
        logging_params['filename'] = fn

    logging.basicConfig(**logging_params)
    logging.error('basic logging configuration initialized')
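Usage is then simply (the path 'app.log' is just an example):

init_logger(fn='app.log')   # logs to app.log
init_logger()               # logs to the console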
In case basicConfig() does not work:
logger = logging.getLogger('Spam Logger')
logger.setLevel(logging.DEBUG)
# create file handler which logs even debug messages
fh = logging.FileHandler('spam.log')
fh.setLevel(logging.DEBUG)
# create console handler with a higher log level
ch = logging.StreamHandler()
ch.setLevel(logging.DEBUG)
# create formatter and add it to the handlers
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
ch.setFormatter(formatter)
fh.setFormatter(formatter)
# add the handlers to logger
logger.addHandler(ch)
logger.addHandler(fh)
# 'application' code
logger.debug('debug Spam message')
logging.debug('debug Spam message')  # root logger: no handlers configured here, so no output
logger.info('info Ham message')
logger.warning('warn Eggs message')
logger.error('error Spam and Ham message')
logger.critical('critical Ham and Eggs message')
which gives me the following output:
2019-06-20 11:33:48,967 - Spam Logger - DEBUG - debug Spam message
2019-06-20 11:33:48,968 - Spam Logger - INFO - info Ham message
2019-06-20 11:33:48,968 - Spam Logger - WARNING - warn Eggs message
2019-06-20 11:33:48,968 - Spam Logger - ERROR - error Spam and Ham message
2019-06-20 11:33:48,968 - Spam Logger - CRITICAL - critical Ham and Eggs message
For the sake of reference, the Python Logging Cookbook is worth reading.
I got the same error; I fixed it by passing the following arguments to the basic config:
logging.basicConfig(
    level="WARNING",
    format="%(asctime)s - %(name)s - [ %(message)s ]",
    datefmt='%d-%b-%y %H:%M:%S',
    force=True,
    handlers=[
        logging.FileHandler("debug.log"),
        logging.StreamHandler()
    ])
As you can see, passing force=True overrides any earlier basicConfig() calls.
Another solution that worked for me: instead of tracing down which module imports logging or even calls basicConfig before mine, just call setLevel on the root logger again after basicConfig.
import os
import logging

# default to INFO if the environment variable is unset
RUNTIME_DEBUG_LEVEL = os.environ.get('RUNTIME_DEBUG_LEVEL', 'INFO').upper()
LOGGING_KWARGS = {
    'level': getattr(logging, RUNTIME_DEBUG_LEVEL)
}

logging.basicConfig(**LOGGING_KWARGS)
# the logging module itself has no setLevel; set it on the root logger
logging.getLogger().setLevel(getattr(logging, RUNTIME_DEBUG_LEVEL))
Sort of crude, seems hacky, fixed my problem, worth a share.
If you just want to set the log level of all loggers, then instead of ordering your imports after the logging config, set the level on the root logger:
# Option 1:
logging.root.setLevel(logging.INFO)
# Option 2 - make it configurable:
# env variable + default value INFO
logging.root.setLevel(os.environ.get('LOG_LEVEL', logging.INFO))
Like vacing's answer mentions, basicConfig has no effect if the root logger already has handlers configured.
I was using pytest, which seems to install handlers of its own, so the default setup with log level WARNING stayed active and it looked as if my app failed to log. But this only happened when executing unit tests with pytest; in a normal run the logs were produced as expected, which is enough for my use case.
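If you do need your configuration to win even under pytest, the force flag discussed above (Python 3.8+) is probably the simplest route (a sketch; the level and format here are just examples):

import logging

logging.basicConfig(
    level=logging.DEBUG,
    format='%(asctime)s %(levelname)s %(message)s',
    force=True)  # discards whatever handlers pytest (or anything else) installed

logging.debug('visible even when handlers were pre-installed')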