When I run the following inside an IPython Notebook, I don't see any output:
import logging
logging.basicConfig(level=logging.DEBUG)
logging.debug("test")
Anyone know how to make it so I can see the "test" message inside the notebook?
Try the following:
import logging
logger = logging.getLogger()
logger.setLevel(logging.DEBUG)
logging.debug("test")
According to the logging.basicConfig documentation:
Does basic configuration for the logging system by creating a
StreamHandler with a default Formatter and adding it to the root
logger. The functions debug(), info(), warning(), error() and
critical() will call basicConfig() automatically if no handlers are
defined for the root logger.
This function does nothing if the root logger already has handlers
configured for it.
It seems that the IPython notebook calls basicConfig (or sets a handler) somewhere, so the root logger already has handlers attached.
If you still want to use basicConfig, reload the logging module like this:
from importlib import reload # Not needed in Python 2
import logging
reload(logging)
logging.basicConfig(format='%(asctime)s %(levelname)s:%(message)s', level=logging.DEBUG, datefmt='%I:%M:%S')
My understanding is that the IPython session starts up logging, so basicConfig doesn't work. Here is the setup that works for me (I wish it were less verbose, since I want to use it in almost all my notebooks):
import logging
logger = logging.getLogger()
fhandler = logging.FileHandler(filename='mylog.log', mode='a')
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
fhandler.setFormatter(formatter)
logger.addHandler(fhandler)
logger.setLevel(logging.DEBUG)
Now when I run:
logging.error('hello!')
logging.debug('This is a debug message')
logging.info('this is an info message')
logging.warning('tbllalfhldfhd, warning.')
I get a "mylog.log" file in the same directory as my notebook that contains:
2015-01-28 09:49:25,026 - root - ERROR - hello!
2015-01-28 09:49:25,028 - root - DEBUG - This is a debug message
2015-01-28 09:49:25,029 - root - INFO - this is an info message
2015-01-28 09:49:25,032 - root - WARNING - tbllalfhldfhd, warning.
Note that if you rerun this without restarting the IPython session, it will write duplicate entries to the file, since there will then be two file handlers defined.
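One way to guard against that (a sketch, not part of the original answer): check the root logger's existing handlers before adding another one, so re-running the cell is idempotent:

```python
import logging

logger = logging.getLogger()

# Only attach a FileHandler if one is not already present; re-running this
# cell then leaves the handler list unchanged instead of duplicating it.
if not any(isinstance(h, logging.FileHandler) for h in logger.handlers):
    fhandler = logging.FileHandler(filename='mylog.log', mode='a')
    fhandler.setFormatter(logging.Formatter(
        '%(asctime)s - %(name)s - %(levelname)s - %(message)s'))
    logger.addHandler(fhandler)
logger.setLevel(logging.DEBUG)
```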
Bear in mind that stderr is the default stream for the logging module, so in IPython and Jupyter notebooks you might not see anything unless you configure the stream to stdout:
import logging
import sys
logging.basicConfig(format='%(asctime)s | %(levelname)s : %(message)s',
                    level=logging.INFO, stream=sys.stdout)
logging.info('Hello world!')
Here is what worked for me (Jupyter Notebook server 5.4.1, IPython 7.0.1):
import logging
logging.basicConfig()
logger = logging.getLogger('Something')
logger.setLevel(logging.DEBUG)
Now I can use the logger to print info messages; otherwise, I would see only messages from the default level (logging.WARNING) or above.
You can configure logging by running %config Application.log_level="INFO"
For more information, see IPython kernel options
I wanted a simple and straightforward answer to this, with nicely styled output, so here's my recommendation:
import sys
import logging
logging.basicConfig(
    format='%(asctime)s [%(levelname)s] %(name)s - %(message)s',
    level=logging.INFO,
    datefmt='%Y-%m-%d %H:%M:%S',
    stream=sys.stdout,
)
log = logging.getLogger('notebook')
Then you can use log.info() or any of the other logging levels anywhere in your notebook, and get output that looks like this:
2020-10-28 17:07:08 [INFO] notebook - Hello world
2020-10-28 17:12:22 [INFO] notebook - More info here
2020-10-28 17:12:22 [INFO] notebook - And some more
As of Python 3.8, a force parameter has been added to basicConfig that removes any existing handlers, which allows basicConfig to work. This worked on IPython 7.29.0 and Jupyter Lab 3.2.1.
import logging
logging.basicConfig(level=logging.DEBUG, force=True)
logging.debug("test")
I set up a logger to log to a file and also wanted the output to show up in the notebook. It turns out that adding a file handler clears out the default stream handler.
logger = logging.getLogger()
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
# Setup file handler
fhandler = logging.FileHandler('my.log')
fhandler.setLevel(logging.DEBUG)
fhandler.setFormatter(formatter)
# Configure stream handler for the cells
chandler = logging.StreamHandler()
chandler.setLevel(logging.DEBUG)
chandler.setFormatter(formatter)
# Add both handlers
logger.addHandler(fhandler)
logger.addHandler(chandler)
logger.setLevel(logging.DEBUG)
# Show the handlers
logger.handlers
# Log Something
logger.info("Test info")
logger.debug("Test debug")
logger.error("Test error")
setup
import logging
# make a handler
handler = logging.StreamHandler()
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
handler.setFormatter(formatter)
# add it to the root logger
logging.getLogger().addHandler(handler)
log from your own logger
# make a logger for this notebook, set verbosity
logger = logging.getLogger(__name__)
logger.setLevel('DEBUG')
# send messages
logger.debug("debug message")
logger.info("so much info")
logger.warning("you've veen warned!")
logger.error("bad news")
logger.critical("really bad news")
2021-09-02 18:18:27,397 - __main__ - DEBUG - debug message
2021-09-02 18:18:27,397 - __main__ - INFO - so much info
2021-09-02 18:18:27,398 - __main__ - WARNING - you've veen warned!
2021-09-02 18:18:27,398 - __main__ - ERROR - bad news
2021-09-02 18:18:27,399 - __main__ - CRITICAL - really bad news
capture logging from other libraries
logging.getLogger('google').setLevel('DEBUG')
from google.cloud import storage
client = storage.Client()
2021-09-02 18:18:27,415 - google.auth._default - DEBUG - Checking None for explicit credentials as part of auth process...
2021-09-02 18:18:27,416 - google.auth._default - DEBUG - Checking Cloud SDK credentials as part of auth process...
2021-09-02 18:18:27,416 - google.auth._default - DEBUG - Cloud SDK credentials not found on disk; not using them
...
I use the following code in my class's __init__ method:
# Create a custom logger
self.logger = logging.getLogger(__name__)
# Create handlers
self.handler_cmdline = logging.StreamHandler()
self.handler_file = logging.FileHandler(self.logfile)
self.handler_cmdline.setLevel(logging.DEBUG)
self.handler_file.setLevel(logging.INFO)
# Create formatters and add it to handlers
log_format = logging.Formatter(fmt='%(asctime)s - %(name)s - %(levelname)s - %(message)s', datefmt='%Y-%m-%d %H:%M:%S')
self.handler_cmdline.setFormatter(log_format)
self.handler_file.setFormatter(log_format)
# Add handlers to the logger
self.logger.addHandler(self.handler_cmdline)
self.logger.addHandler(self.handler_file)
self.logger.debug('Initialisation Complete')
self.logger.info('Initialisation Complete')
self.logger.warning('Initialisation Complete')
self.logger.critical('Initialisation Complete')
The debug and info calls somehow don't work; everything from warning and above works.
What's wrong here?
import logging
class example():
    def __init__(self):
        self.logger = logging.getLogger(__name__)
        self.logfile = 'example.log'
        # Create handlers
        self.handler_cmdline = logging.StreamHandler()
        self.handler_file = logging.FileHandler(self.logfile)
        self.handler_cmdline.setLevel(logging.DEBUG)
        self.handler_file.setLevel(logging.INFO)
        # Create formatters and add them to the handlers
        log_format = logging.Formatter(fmt='%(asctime)s - %(name)s - %(levelname)s - %(message)s', datefmt='%Y-%m-%d %H:%M:%S')
        self.handler_cmdline.setFormatter(log_format)
        self.handler_file.setFormatter(log_format)
        # Add handlers to the logger
        self.logger.addHandler(self.handler_cmdline)
        self.logger.addHandler(self.handler_file)
        self.logger.setLevel(logging.DEBUG)
        # print(self.logger.level)
        self.logger.debug('Initialisation Complete')
        self.logger.info('Initialisation Complete')
        self.logger.warning('Initialisation Complete')
        self.logger.critical('Initialisation Complete')

example()
Here is a small snippet to test the fix with your code. The issue is that your logger does not have a level set, so the default effective level (WARNING, inherited from the root logger) is used, which causes the INFO- and DEBUG-level logs to be ignored. You can print self.logger.level to see what level it is currently set to.
In your case, you did not have a logging level set on self.logger. Just adding self.logger.setLevel(logging.DEBUG) fixes the issue, and you can see this output in the console:
2022-12-12 17:44:23 - __main__ - DEBUG - Initialisation Complete
2022-12-12 17:44:23 - __main__ - INFO - Initialisation Complete
2022-12-12 17:44:23 - __main__ - WARNING - Initialisation Complete
2022-12-12 17:44:23 - __main__ - CRITICAL - Initialisation Complete
while in the file, the debug line will not be there, because the file handler's level is INFO.
The logger's level should be set to the minimum (most verbose) of its handlers' levels, otherwise records below the logger's level never reach the handlers at all. In this case, if you set the logger level to INFO, the debug message will not be printed to stdout either, even though the stream handler has its level set to DEBUG.
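The two-stage filtering can be demonstrated in isolation (a sketch using an in-memory stream; the logger name is made up for the demo):

```python
import io
import logging

# A record must first pass the LOGGER's level, then each HANDLER's level.
stream = io.StringIO()
demo = logging.getLogger('level_demo')   # hypothetical logger name
demo.propagate = False                   # keep the root logger out of this
handler = logging.StreamHandler(stream)
handler.setLevel(logging.DEBUG)          # the handler would accept DEBUG...
demo.addHandler(handler)
demo.setLevel(logging.INFO)              # ...but the logger drops it first

demo.debug('dropped by the logger')
demo.info('passes both filters')
print(stream.getvalue())  # only the INFO line appears
```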
I have read:
Using logging in multiple modules,
Managing loggers with Python logging,
https://docs.python.org/3/howto/logging-cookbook.html
and other posts, and I still don't get it (something seems really wrong to me in the way the logging module works).
Let's look at a straightforward example:
module1:
import logging
logger = logging.getLogger(__name__)

def test_log():
    logger.info("module1")
    logger.debug("module1")
module2:
import logging
logger = logging.getLogger(__name__)

def test_log2():
    logger.info("module2")
    logger.debug("module2")
main.py
import module1
import module2
import logging
# create logger
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)
# create console handler with a higher log level
ch = logging.StreamHandler()
ch.setLevel(logging.DEBUG)
# create formatter and add it to the handlers
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
ch.setFormatter(formatter)
# add the handlers to the logger
logger.addHandler(ch)
logger.info("main debug")
logger.info("main info")
module1.test_log()
module2.test_log2()
This outputs:
2022-10-28 11:18:09,101 - __main__ - INFO - main debug
2022-10-28 11:18:09,102 - __main__ - INFO - main info
The loggers of the two other modules did not produce any output...
Also, changing logger.setLevel(logging.DEBUG) to logger.setLevel(logging.ERROR) removes the logging below ERROR level, even though ch.setLevel(logging.DEBUG) is still there.
But the opposite is ALSO true! Changing ch.setLevel(logging.DEBUG) to ch.setLevel(logging.ERROR) removes the logging below ERROR level, even though logger.setLevel(logging.DEBUG) is still there.
What's the point of setting two different things if they must be equal to work anyway?
But! With:
logger.setLevel(logging.DEBUG)
ch.setLevel(logging.INFO)
or with
logger.setLevel(logging.INFO)
ch.setLevel(logging.DEBUG)
we get both messages. I can't make sense of how this is supposed to work.
Anyway, what about my two modules?
logging.getLogger("module1").setLevel(logging.DEBUG)
logging.getLogger("module2").setLevel(logging.DEBUG)
does not fix anything
Finally!
logging.getLogger("module1").setLevel(logging.DEBUG)
logging.getLogger("module2").setLevel(logging.INFO)
logging.basicConfig(level=logging.DEBUG)
logger.info("main debug")
logger.info("main info")
module1.test_log()
module2.test_log2()
outputs:
2022-10-28 11:27:47,843 - __main__ - INFO - main debug
2022-10-28 11:27:47,843 - __main__ - INFO - main debug
2022-10-28 11:27:47,844 - __main__ - INFO - main info
2022-10-28 11:27:47,844 - __main__ - INFO - main info
2022-10-28 11:27:47,844 - module1 - DEBUG - module1
2022-10-28 11:27:47,845 - module1 - INFO - module1
2022-10-28 11:27:47,845 - module2 - INFO - module2
which is a failure because:
the main logging has been written twice;
it seems really inappropriate to me to go through basicConfig, especially considering that the users of my package should not be bothered with the formatting and other logging options (which in my case seems necessary);
if you want DEBUG level on one module, you need to pass DEBUG to basicConfig, which results in every other package printing at DEBUG level, forcing the user to set EVERY library's level manually.
So, how should we implement logging when we have different modules and packages, with a different logging level for each of them?
Thank you!
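One pattern that addresses this (a sketch, and only one of several reasonable designs): attach a single handler to the root logger, leave the module loggers handler-less, and set a level per module. Records then propagate up to the root's handler, so formatting is configured in exactly one place and basicConfig is never needed:

```python
import logging
import sys

# Configure formatting once, on a handler attached to the ROOT logger.
handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(logging.Formatter('%(name)s - %(levelname)s - %(message)s'))
root = logging.getLogger()
root.addHandler(handler)
root.setLevel(logging.DEBUG)  # let the per-module levels do the filtering

# Per-module verbosity, without touching basicConfig:
logging.getLogger('module1').setLevel(logging.DEBUG)
logging.getLogger('module2').setLevel(logging.INFO)

logging.getLogger('module1').debug('module1')  # shown
logging.getLogger('module2').debug('module2')  # suppressed
logging.getLogger('module2').info('module2')   # shown
```

Note that when a record propagates to an ancestor logger, only the ancestor's handlers (and their levels) are consulted, not the ancestor logger's own level, which is why the per-module setLevel calls decide verbosity here.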
I would like to save all log lines to superwrapper.log, but only show INFO and above in the console.
If I uncomment the filename line, the file is written correctly, but I don't see anything in the console.
if __name__ == '__main__':
    logging.basicConfig(
        #filename='superwrapper.log',
        level=logging.DEBUG,
        format='%(asctime)s.%(msecs)03d %(levelname)s %(module)s - %(funcName)s: %(message)s',
        datefmt='%Y-%m-%d %H:%M:%S'
    )
2020-04-28 11:41:09.698 INFO common - handle_elasticsearch: Elastic connection detected
2020-04-28 11:41:09.699 INFO superwrapper - <module>: Cookies Loaded: |TRUE|
2020-04-28 11:41:09.715 DEBUG connectionpool - _new_conn: Starting new HTTPS connection (1): m.facebook.com:443
You can use multiple handlers. logging.basicConfig can accept a handlers argument starting in Python 3.3: one handler for logging to the log file and one for the console. You can also give the handlers different logging levels. The simplest way I can think of is this:
import logging
import sys
file_handler = logging.FileHandler('superwrapper.log')
console_handler = logging.StreamHandler(sys.stdout)
console_handler.setLevel(logging.INFO)
logging.basicConfig(
    level=logging.DEBUG,
    format='%(asctime)s.%(msecs)03d %(levelname)s %(module)s - %(funcName)s: %(message)s',
    datefmt='%Y-%m-%d %H:%M:%S',
    handlers=[
        file_handler,
        console_handler
    ]
)
One thing to note is that StreamHandler writes to stderr by default. Usually you will want to override this with sys.stdout.
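This default is easy to verify (a small sketch):

```python
import logging
import sys

h_default = logging.StreamHandler()           # no argument given
print(h_default.stream is sys.stderr)         # True: stderr is the default

h_stdout = logging.StreamHandler(sys.stdout)  # explicit stdout
print(h_stdout.stream is sys.stdout)          # True
```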
You can set up multiple loggers. This will get rid of the DEBUG messages, but note that messages of a higher severity will still be broadcast (e.g. 'WARNING' and 'ERROR').
This exact scenario is in the logging cookbook of the Python docs:
import logging
# set up logging to file - see previous section for more details
logging.basicConfig(level=logging.DEBUG,
                    format='%(asctime)s %(name)-12s %(levelname)-8s %(message)s',
                    datefmt='%m-%d %H:%M',
                    filename='/temp/myapp.log',
                    filemode='w')
# define a Handler which writes INFO messages or higher to the sys.stderr
console = logging.StreamHandler()
console.setLevel(logging.INFO)
# set a format which is simpler for console use
formatter = logging.Formatter('%(name)-12s: %(levelname)-8s %(message)s')
# tell the handler to use this format
console.setFormatter(formatter)
# add the handler to the root logger
logging.getLogger('').addHandler(console)
# Now, we can log to the root logger, or any other logger. First the root...
logging.info('Jackdaws love my big sphinx of quartz.')
# Now, define a couple of other loggers which might represent areas in your
# application:
logger1 = logging.getLogger('myapp.area1')
logger2 = logging.getLogger('myapp.area2')
logger1.debug('Quick zephyrs blow, vexing daft Jim.')
logger1.info('How quickly daft jumping zebras vex.')
logger2.warning('Jail zesty vixen who grabbed pay from quack.')
logger2.error('The five boxing wizards jump quickly.')
In the example given by the cookbook, you should see all the 'INFO', 'WARNING' and 'ERROR' messages in the console, but only the log file will contain the 'DEBUG' message.
Alan's answer was great, but it also wrote the messages from the root logger, which was too much for me, because I was afraid the size of the log file would get out of control. So I modified it as below, to log only what I specified and nothing extra from the imported modules.
import logging
# set up logging to file - see previous section for more details
logging.basicConfig(format='%(asctime)s %(name)-12s %(levelname)-8s %(message)s',
                    datefmt='%m-%d %H:%M',
                    filename='/temp/myapp.log',
                    filemode='w')
logger = logging.getLogger('myapp')
logger.setLevel(logging.DEBUG)
# define a Handler which writes INFO messages or higher to the sys.stderr
console = logging.StreamHandler()
console.setLevel(logging.INFO)
# set a format which is simpler for console use
formatter = logging.Formatter('%(name)-12s: %(levelname)-8s %(message)s')
# tell the handler to use this format
console.setFormatter(formatter)
# add the handler to your logger
logger.addHandler(console)
Then in my app, I logged as below, just using logger (without the numbered logger1/logger2 names).
logger.debug('Quick zephyrs blow, vexing daft Jim.')
logger.info('How quickly daft jumping zebras vex.')
logger.warning('Jail zesty vixen who grabbed pay from quack.')
logger.error('The five boxing wizards jump quickly.')
I have the following lines of code that initialize logging. I comment one out and leave the other to be used.
The problem I'm facing is that the one that is meant to log to the file is not logging to the file; it is instead logging to the console.
Please help.
For logging to Console:
logging.basicConfig(level=logging.INFO,
                    format='%(asctime)s [%(levelname)s] (%(threadName)-10s) %(message)s',)
For file logging:
logging.basicConfig(filename='server-soap.1.log', level=logging.INFO,
                    format='%(asctime)s [%(levelname)s] (%(threadName)-10s) %(message)s')
I found out what the problem was: the ordering of the imports relative to the logging definition.
The effect of the poor ordering was that the libraries I imported before calling logging.basicConfig() configured logging themselves, and their configuration took precedence over the one I tried to define later with logging.basicConfig().
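The effect is easy to reproduce without the third-party libraries (a sketch, with WARNING standing in for whatever level the library happened to choose):

```python
import logging

# Simulate a library configuring logging at import time:
logging.basicConfig(level=logging.WARNING)  # "the library" gets there first

# Our own call is now a no-op, because the root logger already has a handler:
logging.basicConfig(filename='server-soap.1.log', level=logging.INFO)
print(logging.getLogger().level == logging.WARNING)  # True: our call was ignored

# On Python 3.8+, force=True removes the existing handlers and reconfigures:
logging.basicConfig(filename='server-soap.1.log', level=logging.INFO, force=True)
print(logging.getLogger().level == logging.INFO)     # True
```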
Below is how I needed to order it:
import logging
## for file logging
logging.basicConfig(filename='server-soap.1.log',
                    level=logging.INFO,
                    format='%(asctime)s %(levelname)s %(threadName)-10s %(message)s',)
from pysimplesoap.server import SoapDispatcher, SOAPHandler
from BaseHTTPServer import HTTPServer
import time,random,datetime,pytz,sys,threading
from datetime import timedelta
#DB
import psycopg2, psycopg2.extras
from psycopg2.pool import ThreadedConnectionPool
#ESB Call
from suds import WebFault
from suds.client import Client
But the faulty ordering that I initially had was:
from pysimplesoap.server import SoapDispatcher, SOAPHandler
from BaseHTTPServer import HTTPServer
import logging
import time,random,datetime,pytz,sys,threading
from datetime import timedelta
#DB
import psycopg2, psycopg2.extras
from psycopg2.pool import ThreadedConnectionPool
#ESB Call
from suds import WebFault
from suds.client import Client
## for file logging
logging.basicConfig(filename='server-soap.1.log',
                    level=logging.INFO,
                    format='%(asctime)s %(levelname)s %(threadName)-10s %(message)s',)
"Changed in version 3.8: The force argument was added." I think it's the better choice for newer versions.
For older versions (< 3.8):
From the source code of logging I found this note:
This function does nothing if the root logger already has handlers
configured. It is a convenience method intended for use by simple scripts
to do one-shot configuration of the logging package.
So, if some module we import calls basicConfig() before us, our call will do nothing.
A solution I found to work is to reload logging before your own call to basicConfig(), for example:
def init_logger(*, fn=None):
    # !!! here
    from importlib import reload  # Python 2.x doesn't need the import; reload is a builtin there
    reload(logging)
    logging_params = {
        'level': logging.INFO,
        'format': '%(asctime)s__[%(levelname)s, %(module)s.%(funcName)s](%(name)s)__[L%(lineno)d] %(message)s',
    }
    if fn is not None:
        logging_params['filename'] = fn
    logging.basicConfig(**logging_params)
    logging.error('init basic configure of logging success')
In case basicConfig() does not work:
logger = logging.getLogger('Spam Logger')
logger.setLevel(logging.DEBUG)
# create file handler which logs even debug messages
fh = logging.FileHandler('spam.log')
fh.setLevel(logging.DEBUG)
# create console handler with a higher log level
ch = logging.StreamHandler()
ch.setLevel(logging.DEBUG)
# create formatter and add it to the handlers
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
ch.setFormatter(formatter)
fh.setFormatter(formatter)
# add the handlers to logger
logger.addHandler(ch)
logger.addHandler(fh)
# 'application' code
logger.debug('debug Spam message')
logging.debug('debug Spam message')
logger.info('info Ham message')
logger.warning('warn Eggs message')
logger.error('error Spam and Ham message')
logger.critical('critical Ham and Eggs message')
which gives me the following output:
2019-06-20 11:33:48,967 - Spam Logger - DEBUG - debug Spam message
2019-06-20 11:33:48,968 - Spam Logger - INFO - info Ham message
2019-06-20 11:33:48,968 - Spam Logger - WARNING - warn Eggs message
2019-06-20 11:33:48,968 - Spam Logger - ERROR - error Spam and Ham message
2019-06-20 11:33:48,968 - Spam Logger - CRITICAL - critical Ham and Eggs message
For the sake of reference, the Python Logging Cookbook is worth reading.
I got the same problem, and I fixed it by passing the following arguments to basicConfig:
logging.basicConfig(
    level="WARNING",
    format="%(asctime)s - %(name)s - [ %(message)s ]",
    datefmt='%d-%b-%y %H:%M:%S',
    force=True,
    handlers=[
        logging.FileHandler("debug.log"),
        logging.StreamHandler()
    ])
As you can see, passing force=True overrides any earlier basicConfig call.
Another solution that worked for me, instead of tracing down which module might be importing logging or even calling basicConfig before me, is to just call setLevel again after basicConfig.
import os
import logging

# Default to INFO if the variable is unset, so .upper() doesn't fail on None
RUNTIME_DEBUG_LEVEL = os.environ.get('RUNTIME_DEBUG_LEVEL', 'INFO').upper()
LOGGING_KWARGS = {
    'level': getattr(logging, RUNTIME_DEBUG_LEVEL)
}
logging.basicConfig(**LOGGING_KWARGS)
# The level must be set on a logger object; the logging module itself
# has no setLevel function
logging.getLogger().setLevel(getattr(logging, RUNTIME_DEBUG_LEVEL))
It is sort of crude and seems hacky, but it fixed my problem, so it's worth sharing.
IF YOU JUST WANT TO SET THE LOG LEVEL OF ALL LOGGERS
instead of ordering your imports after the logging config, just set the level on the root logger:
# Option 1:
logging.root.setLevel(logging.INFO)
# Option 2 - make it configurable:
# env variable + default value INFO
logging.root.setLevel(os.environ.get('LOG_LEVEL', logging.INFO))
Like vacing's answer mentioned, basicConfig has no effect if the root logger already has handlers configured.
I was using pytest, which seems to set handlers, meaning the default logging setup with log level WARNING was active -- so my app appeared to fail to log, but only when executing unit tests with pytest. In a normal app run, logs are produced as expected, which is enough for my use case.
Where should I see the logging output in Eclipse while debugging? And when running?
It depends on how you configure your logging system. If you use only print statements, the output will be shown in the Console view of Eclipse.
If you use logging and you configured a console handler, it will also be displayed in the Console view of Eclipse.
If you configured only file handler in the logging configuration, you'll have to tail the log files ;)
Here is an example that worked for me:
# LOG
log = logging.getLogger("appname")
log.setLevel(logging.DEBUG)
ch = logging.StreamHandler()
ch.setLevel(logging.DEBUG)
fmt = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
ch.setFormatter(fmt)
log.addHandler(ch)
log.debug("something happened") # Will print on console '2013-01-26 12:58:28,974 - appname - DEBUG - something happened'