I come from SLF4J and Log4J, so that might be the reason why I don't get how logging works in Python.
I have the following
---- logging.yaml
version: 1
handlers:
  console:
    class: logging.StreamHandler
    level: DEBUG
    stream: ext://sys.stderr
    formatter: simpleFormatter
  file:
    class: logging.FileHandler
    filename: app.log
    mode: w
    level: DEBUG
    formatter: simpleFormatter
formatters:
  simpleFormatter:
    #class: !!python/name:logging.Formatter
    #class: logging.Formatter
    format: '%(name)s %(asctime)s %(levelname)s %(message)s'
    datefmt: '%d/%m/%Y %H:%M:%S'
root:
  level: INFO
  handlers: [console, file]
mod:
  level: DEBUG
---- mod.py
import logging

def foo():
    log = logging.getLogger(__name__)
    log.debug('Hello from the module')
---- main.py
from logging.config import dictConfig
import yaml

with open('logging.yaml') as flog:
    dictConfig(yaml.safe_load(flog))  # safe_load: plain yaml.load is deprecated without a Loader

import logging
from mod import foo

if __name__ == '__main__':
    log = logging.getLogger(__name__)
    log.debug('Hello from main')
    foo()
With the config above, I would expect to see only the message 'Hello from the module'. Instead, nothing is printed. When I set DEBUG for the root logger, both messages are printed.
So, aren't the messages forwarded to the upper loggers? Isn't the mod logger a child of root? Doesn't the mod logger inherit the handlers configuration? (I've tried repeating the handlers under mod, but nothing changes.)
How can I achieve a configuration saying: default level is INFO, the level for this module and sub-modules is DEBUG, everything goes to the handlers defined for root?
You have a fairly simple error: note that, per the docs, configuration for loggers other than root should be under the loggers key as:
a dict in which each key is a logger name and each value is a dict
describing how to configure the corresponding Logger instance
Adding this key and indenting the appropriate lines, to give:
loggers:
  mod:
    level: DEBUG
works as expected:
$ python main.py
mod 20/07/2016 14:35:32 DEBUG Hello from the module
$ cat app.log
mod 20/07/2016 14:35:32 DEBUG Hello from the module
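For reference, the tail of the corrected logging.yaml then reads (handlers and formatters unchanged), which also answers the broader question: root defaults to INFO, mod and its children log at DEBUG, and everything propagates up to the root handlers:
root:
  level: INFO
  handlers: [console, file]
loggers:
  mod:
    level: DEBUG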
Related
I have multiple python modules that I'd like to use the same logger while preserving the call hierarchy in those logs. I'd also like to do this with a logger whose name is the name of the calling module (or the calling module stack). I haven't been able to work out how to get the name of the calling module except by messing with the stack trace, but that doesn't feel very pythonic.
Is this possible?
main.py
import logging
from sub_module import sub_log
logger = logging.getLogger(__name__)
logger.info("main_module")
sub_log()
sub_module.py
import logging

def sub_log():
    logger = logging.getLogger(???)
    logger.info("sub_module")
Desired Output
TIME main INFO main_module
TIME main.sub_module INFO sub_module
To solve your problem in a pythonic way, use the Logger Formatter:
For reference, check the Logging Docs.
main.py
import logging
from submodule import sub_log
from submodule2 import sub_log2
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)
# create file handler which logs even debug messages
fh = logging.FileHandler('test.log')
fh.setLevel(logging.DEBUG)
# create formatter and add it to the handlers
formatter = logging.Formatter('%(asctime)s %(name)s.%(module)s - %(levelname)s - %(message)s')
fh.setFormatter(formatter)
# add the handlers to the logger
logger.addHandler(fh)
sub_log("test")
sub_log2("test")
submodule.py
import logging
import __main__

def sub_log(msg):
    logger = logging.getLogger(__main__.__name__)
    logger.info(msg)
I've created a second submodule (same code, different name).
My Results:
2018-10-16 20:41:23,860 __main__.submodule - INFO - test
2018-10-16 20:41:23,860 __main__.submodule2 - INFO - test
I hope this will help you :)
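As an aside, the conventional approach is for every module to call logging.getLogger(__name__) and let records propagate to handlers configured once at the top; the names then mirror the package hierarchy (e.g. pkg.sub_module) rather than the importing module. A minimal sketch:
# sub_module.py
import logging

logger = logging.getLogger(__name__)  # 'sub_module', or 'pkg.sub_module' inside a package

def sub_log():
    logger.info("sub_module")

# main.py
import logging
from sub_module import sub_log

logging.basicConfig(format='%(asctime)s %(name)s %(levelname)s %(message)s',
                    level=logging.INFO)
logging.getLogger(__name__).info("main_module")
sub_log()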
I'm trying to use the standard library to debug my code:
This works fine:
import logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
logger.info('message')
I can't make the logger work for the lower levels:
logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger(__name__)
logger.info('message')
logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger(__name__)
logger.debug('message')
I don't get any output from either of those.
What Python version? That works for me in 3.4. But note that basicConfig() won't affect the root handler if it's already set up:
This function does nothing if the root logger already has handlers configured for it.
To set the level on root explicitly, do logging.getLogger().setLevel(logging.DEBUG). But ensure you've called basicConfig() beforehand so the root logger initially has some setup. I.e.:
import logging
logging.basicConfig()
logging.getLogger().setLevel(logging.DEBUG)
logging.getLogger('foo').debug('bah')
logging.getLogger().setLevel(logging.INFO)
logging.getLogger('foo').debug('bah')
Also note that "Loggers" and their "Handlers" both have distinct independent log levels. So if you've previously explicitly loaded some complex logger config in you Python script, and that has messed with the root logger's handler(s), then this can have an effect, and just changing the loggers log level with logging.getLogger().setLevel(..) may not work. This is because the attached handler may have a log level set independently. This is unlikely to be the case and not something you'd normally have to worry about.
I use the following setup for logging.
YAML-based config
Create a yaml file called logging.yml like this:
version: 1
formatters:
  simple:
    format: "%(name)s - %(lineno)d - %(message)s"
  complex:
    format: "%(asctime)s - %(name)s - %(lineno)d - %(message)s"
handlers:
  console:
    class: logging.StreamHandler
    level: DEBUG
    formatter: simple
  file:
    class: logging.handlers.TimedRotatingFileHandler
    when: midnight
    backupCount: 5
    level: DEBUG
    formatter: simple
    filename: Thrift.log
loggers:
  qsoWidget:
    level: INFO
    handlers: [console,file]
    propagate: yes
  __main__:
    level: DEBUG
    handlers: [console]
    propagate: yes
Python - The main
The "main" module should look like this:
import logging.config
import logging
import yaml

with open('logging.yml', 'rt') as f:
    config = yaml.safe_load(f.read())

logging.config.dictConfig(config)
logger = logging.getLogger(__name__)
logger.info("Contest is starting")
Sub Modules/Classes
These should start like this:
import logging

class locator(object):
    def __init__(self):
        self.logger = logging.getLogger(__name__)
        self.logger.debug('{} initialized'.format(self.__class__.__name__))
Hope that helps you...
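Note that child loggers propagate records up the dotted-name hierarchy, so a hypothetical module inside the qsoWidget package automatically reaches the handlers configured for qsoWidget above:
# qsoWidget/locator.py (hypothetical layout)
import logging

logger = logging.getLogger(__name__)  # -> 'qsoWidget.locator'
logger.info('handled by the qsoWidget handlers via propagation')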
In my opinion, this is the best approach for the majority of cases.
Configuration via an INI file
Create a file named logging.ini in the project root directory as below:
[loggers]
keys=root

[logger_root]
level=DEBUG
handlers=screen,file

[formatters]
keys=simple,verbose

[formatter_simple]
format=%(asctime)s [%(levelname)s] %(name)s: %(message)s

[formatter_verbose]
format=[%(asctime)s] %(levelname)s [%(filename)s %(name)s %(funcName)s (%(lineno)d)]: %(message)s

[handlers]
keys=file,screen

[handler_file]
class=handlers.TimedRotatingFileHandler
formatter=verbose
level=WARNING
# fileConfig only reads args/kwargs; pass when='midnight' and backupCount=5 positionally
args=('debug.log', 'midnight', 1, 5)

[handler_screen]
class=StreamHandler
formatter=simple
level=DEBUG
args=(sys.stdout,)
Then configure it as below:
import logging
from logging.config import fileConfig
fileConfig('logging.ini')
logger = logging.getLogger('dev')
name = "stackoverflow"
logger.info(f"Hello {name}!")
logger.critical('This message should go to the log file.')
logger.error('So should this.')
logger.warning('And this, too.')
logger.debug('Bye!')
If you run the script, the stdout will be:
2021-01-31 03:40:10,241 [INFO] dev: Hello stackoverflow!
2021-01-31 03:40:10,242 [CRITICAL] dev: This message should go to the log file.
2021-01-31 03:40:10,243 [ERROR] dev: So should this.
2021-01-31 03:40:10,243 [WARNING] dev: And this, too.
2021-01-31 03:40:10,243 [DEBUG] dev: Bye!
And the debug.log file should contain:
[2021-01-31 03:40:10,242] CRITICAL [my_loger.py dev <module> (12)]: This message should go to the log file.
[2021-01-31 03:40:10,243] ERROR [my_loger.py dev <module> (13)]: So should this.
[2021-01-31 03:40:10,243] WARNING [my_loger.py dev <module> (14)]: And this, too.
All done.
I wanted to leave the default logger at warning level but have detailed lower-level loggers for my code. But it wouldn't show anything. Building on the other answer, it's critical to run logging.basicConfig() beforehand.
import logging
logging.basicConfig()
logging.getLogger('foo').setLevel(logging.INFO)
logging.getLogger('foo').info('info')
logging.getLogger('foo').debug('info')
logging.getLogger('foo').setLevel(logging.DEBUG)
logging.getLogger('foo').info('info')
logging.getLogger('foo').debug('debug')
Output is as expected:
INFO:foo:info
INFO:foo:info
DEBUG:foo:debug
For a logging solution across modules, I did this:
# cfg.py
import logging

logging.basicConfig()
logger = logging.getLogger('foo')
logger.setLevel(logging.INFO)
logger.info('active')

# main.py
import cfg

cfg.logger.info('main')
I have the following lines of code that initialize logging.
I comment one out and leave the other to be used.
The problem I'm facing is that the one meant to log to the file is not logging to the file; it logs to the console instead.
Please help.
For logging to Console:
logging.basicConfig(level=logging.INFO,
format='%(asctime)s [%(levelname)s] (%(threadName)-10s) %(message)s',)
For logging to a file:
logging.basicConfig(filename='server-soap.1.log',level=logging.INFO,
format='%(asctime)s [%(levelname)s] (%(threadName)-10s) %(message)s')
I found out what the problem was.
It was in the ordering of the imports and the logging definition.
The effect of the poor ordering was that one of the libraries I imported before defining the logging via logging.basicConfig() had already configured the logging, and that configuration took precedence over the one I tried to define later with logging.basicConfig().
Below is how I needed to order it:
import logging
## for file logging
logging.basicConfig(filename='server-soap.1.log',
level=logging.INFO,
format='%(asctime)s %(levelname)s %(threadName)-10s %(message)s',)
from pysimplesoap.server import SoapDispatcher, SOAPHandler
from BaseHTTPServer import HTTPServer
import time,random,datetime,pytz,sys,threading
from datetime import timedelta
#DB
import psycopg2, psycopg2.extras
from psycopg2.pool import ThreadedConnectionPool
#ESB Call
from suds import WebFault
from suds.client import Client
But the faulty ordering that I initially had was:
from pysimplesoap.server import SoapDispatcher, SOAPHandler
from BaseHTTPServer import HTTPServer
import logging
import time,random,datetime,pytz,sys,threading
from datetime import timedelta
#DB
import psycopg2, psycopg2.extras
from psycopg2.pool import ThreadedConnectionPool
#ESB Call
from suds import WebFault
from suds.client import Client
## for file logging
logging.basicConfig(filename='server-soap.1.log',
level=logging.INFO,
format='%(asctime)s %(levelname)s %(threadName)-10s %(message)s',)
"Changed in version 3.8: The force argument was added." I think it's a better choice for new version.
For older versions (< 3.8):
From the source code of logging I found the following:
This function does nothing if the root logger already has handlers
configured. It is a convenience method intended for use by simple scripts
to do one-shot configuration of the logging package.
So, if some module we import called the basicConfig() method before us, our call will do nothing.
One workaround that can work is to reload logging before your own call to basicConfig(), such as:
import logging

def init_logger(*, fn=None):
    # !!! here: reloading resets the logging module's internal state
    from importlib import reload  # 'imp' is deprecated; on Python 2.x reload is a builtin
    reload(logging)

    logging_params = {
        'level': logging.INFO,
        'format': '%(asctime)s__[%(levelname)s, %(module)s.%(funcName)s](%(name)s)__[L%(lineno)d] %(message)s',
    }
    if fn is not None:
        logging_params['filename'] = fn

    logging.basicConfig(**logging_params)
    logging.error('init basic configure of logging success')
In case basicConfig() does not work:
import logging

logger = logging.getLogger('Spam Logger')
logger.setLevel(logging.DEBUG)
# create file handler which logs even debug messages
fh = logging.FileHandler('spam.log')
fh.setLevel(logging.DEBUG)
# create console handler with a higher log level
ch = logging.StreamHandler()
ch.setLevel(logging.DEBUG)
# create formatter and add it to the handlers
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
ch.setFormatter(formatter)
fh.setFormatter(formatter)
# add the handlers to logger
logger.addHandler(ch)
logger.addHandler(fh)
# 'application' code
logger.debug('debug Spam message')
logging.debug('debug Spam message')
logger.info('info Ham message')
logger.warning('warn Eggs message')
logger.error('error Spam and Ham message')
logger.critical('critical Ham and Eggs message')
which gives me the following output:
2019-06-20 11:33:48,967 - Spam Logger - DEBUG - debug Spam message
2019-06-20 11:33:48,968 - Spam Logger - INFO - info Ham message
2019-06-20 11:33:48,968 - Spam Logger - WARNING - warn Eggs message
2019-06-20 11:33:48,968 - Spam Logger - ERROR - error Spam and Ham message
2019-06-20 11:33:48,968 - Spam Logger - CRITICAL - critical Ham and Eggs message
For reference, the Python Logging Cookbook is worth reading.
I got the same error; I fixed it by passing the following arguments to basicConfig():
import logging

logging.basicConfig(
level="WARNING",
format="%(asctime)s - %(name)s - [ %(message)s ]",
datefmt='%d-%b-%y %H:%M:%S',
force=True,
handlers=[
logging.FileHandler("debug.log"),
logging.StreamHandler()
])
As you can see here, passing force=True overrides any earlier basicConfig() calls.
Another solution that worked for me: instead of tracking down which module imports logging or calls basicConfig() before mine, just call setLevel() again after basicConfig().
import os
import logging

# default to INFO if the environment variable is unset
RUNTIME_DEBUG_LEVEL = os.environ.get('RUNTIME_DEBUG_LEVEL', 'INFO').upper()

LOGGING_KWARGS = {
    'level': getattr(logging, RUNTIME_DEBUG_LEVEL)
}

logging.basicConfig(**LOGGING_KWARGS)
# the logging module has no setLevel(); set it on the root logger
logging.getLogger().setLevel(getattr(logging, RUNTIME_DEBUG_LEVEL))
Sort of crude, seems hacky, fixed my problem, worth a share.
IF YOU JUST WANT TO SET THE LOG LEVEL OF ALL LOGGERS
instead of ordering your imports after the logging config, just set the level on the root logger:
# Option 1:
logging.root.setLevel(logging.INFO)
# Option 2 - make it configurable:
# env variable + default value INFO
logging.root.setLevel(os.environ.get('LOG_LEVEL', logging.INFO))
Like vacing's answer mentioned, basicConfig has no effect if the root logger already has handlers configured.
I was using pytest, which seems to set up its own handlers, so the default setup with level WARNING stayed active and my app appeared not to log; but this only happened when executing unit tests with pytest. In a normal app run, logs were produced as expected, which is enough for my use case.
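If you do want log output during pytest runs, pytest's live-log options are one way to surface it; a minimal sketch (these ini options exist in pytest 3.4+):
# pytest.ini
[pytest]
log_cli = true
log_cli_level = DEBUG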
I can't change the logging file handler on the fly.
For example, I have 3 classes
one.py
import logging

class One():
    def __init__(self, txt="?"):
        logging.debug("Hey, I'm the class One and I say: %s" % txt)
two.py
import logging

class Two():
    def __init__(self, txt="?"):
        logging.debug("Hey, I'm the class Two and I say: %s" % txt)
config.py
import logging

class Config():
    def __init__(self, logfile=None):
        logging.debug("Reading config")
        self.logfile = logfile
myapp
from one import One
from two import Two
from config import Config
import logging

# Set default logging
logging.basicConfig(
    level=logging.DEBUG,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
    filename=None
)

logging.info("Starting with stdout")
o = One(txt="STDOUT")
c = Config(logfile="/tmp/logfile")

# Here must be the code that changes the logging configuration and sets the file handler
t = One(txt="This must be on the file, not STDOUT")
If I call logging.basicConfig() again, it doesn't work.
Indeed, logging.basicConfig does nothing if a handler has been set up already:
This function does nothing if the root logger already has handlers configured, unless the keyword argument force is set to True.
You'll need to either add force=True (requires Python 3.8 or newer), or, alternatively, replace the current handler on the root logger:
import logging
fileh = logging.FileHandler('/tmp/logfile', 'a')
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
fileh.setFormatter(formatter)
log = logging.getLogger() # root logger
for hdlr in log.handlers[:]: # remove all old handlers
log.removeHandler(hdlr)
log.addHandler(fileh) # set the new handler
See the Configuring Logging chapter in the Python Logging HOWTO.
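For the force=True route (Python 3.8+), the reconfiguration step in myapp could look like this sketch:
import logging

# force=True removes and closes the root logger's existing handlers before reconfiguring
logging.basicConfig(
    filename='/tmp/logfile',
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
    level=logging.DEBUG,
    force=True,
)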
The answer provided by @Martijn Pieters works well. However, that snippet removes all handlers and puts only the file handler back, which is troublesome if your application has handlers added by other modules. Hence, the snippet below is designed to replace only the file handler. The line if isinstance(hdlr, logging.FileHandler) is the key.
import logging
filehandler = logging.FileHandler('/tmp/logfile', 'a')
formatter = logging.Formatter('%(asctime)-15s::%(levelname)s::%(filename)s::%(funcName)s::%(lineno)d::%(message)s')
filehandler.setFormatter(formatter)
log = logging.getLogger() # root logger - Good to get it only once.
for hdlr in log.handlers[:]:  # remove the existing file handlers
    if isinstance(hdlr, logging.FileHandler):
        log.removeHandler(hdlr)

log.addHandler(filehandler)  # set the new handler

# set the level to DEBUG; the root logger defaults to WARNING
log.setLevel(logging.DEBUG)
I found an easier way than the 'accepted' answer above. If you have a reference to the handler, all you need to do is call its close() method and then set its baseFilename property. When you assign baseFilename, be sure to use os.path.abspath(); a comment in the library source indicates it's needed. I keep my configuration in a global dict() so it's easy to keep the FileHandler references around. As you can see below, it only takes two lines of code to change a handler's log filename on the fly.
import os
import logging

def setup_logging():
    global config
    if config['LOGGING_SET']:
        config['LOG_FILE_HDL'].close()
        config['LOG_FILE_HDL'].baseFilename = os.path.abspath(config['LOG_FILE'])
        config['DEBUG_LOG_HDL'].close()
        config['DEBUG_LOG_HDL'].baseFilename = os.path.abspath(config['DEBUG_LOG'])
    else:
        format_str = '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
        formatter = logging.Formatter(format_str)
        log = logging.getLogger()
        log.setLevel(logging.DEBUG)
        # add file mode="w" to overwrite
        config['LOG_FILE_HDL'] = logging.FileHandler(config['LOG_FILE'], mode='a')
        config['LOG_FILE_HDL'].setLevel(logging.INFO)
        config['LOG_FILE_HDL'].setFormatter(formatter)
        log.addHandler(config['LOG_FILE_HDL'])
        # the delay=1 should prevent the file from being opened until used
        config['DEBUG_LOG_HDL'] = logging.FileHandler(config['DEBUG_LOG'], mode='a', delay=1)
        config['DEBUG_LOG_HDL'].setLevel(logging.DEBUG)
        config['DEBUG_LOG_HDL'].setFormatter(formatter)
        log.addHandler(config['DEBUG_LOG_HDL'])
        ch = logging.StreamHandler()
        ch.setLevel(logging.DEBUG)
        ch.setFormatter(formatter)
        log.addHandler(ch)
        config['LOGGING_SET'] = True
I tried to implement the suggestions on this page from @Martijn Pieters combined with @Arun Thundyill Saseendran. I'm too new to be allowed to comment, so I have to post an adjusted answer. In the isinstance call, I had to use 'logging' instead of 'log' to get access to the types (log was an instance), and 'FileHander' should be 'FileHandler'. I'm using Python 3.6.
import logging

filehandler = logging.FileHandler('/tmp/logfile', 'a')
formatter = logging.Formatter('%(asctime)-15s::%(levelname)s::%(filename)s::%(funcName)s::%(lineno)d::%(message)s')
filehandler.setFormatter(formatter)

log = logging.getLogger()  # root logger - good to get it only once
for hdlr in log.handlers[:]:  # remove the existing file handlers
    if isinstance(hdlr, logging.FileHandler):  # fixed two typos here
        log.removeHandler(hdlr)

log.addHandler(filehandler)  # set the new handler

# set the level to DEBUG; the root logger defaults to WARNING
log.setLevel(logging.DEBUG)
TL;DR
INDEX = 0

#: change to a new file location
logger.handlers[INDEX].setStream(open('/path/to/new/log/file.log', 'a'))

#: change to stdout or stderr
import sys
logger.handlers[INDEX].setStream(sys.stdout)  # or sys.stderr

#: change to another device
logger.handlers[INDEX].setStream(open('/dev/ttys010', 'a'))
After registering a logging.FileHandler on your logger, you can reach into its internals and change the stream it outputs to on the fly. Make sure INDEX accesses the right handler within the logger; if you only added a FileHandler, it should be at index 0.
Explained
Well first, following the logging documentation's suggested idiom, you get a new logger instance named after the __name__ of your specific package, module, class, or function:
#: class
>>> class A:
...     def __init__(self):
...         self.logger = logging.getLogger(self.__class__.__name__)
>>> A().logger
<Logger A (WARNING)>

#: function
>>> def func():
...     logger = logging.getLogger(func.__name__)
...     print(logger)
>>> func()
<Logger func (WARNING)>

#: module
>>> logger = logging.getLogger(__name__)
>>> logger
<Logger __main__ (WARNING)>

#: package (e.g. a package named 'pkg'; write this in its '__init__.py')
>>> logger = logging.getLogger(__package__)
>>> logger
<Logger pkg (WARNING)>
Next, if you've registered a logging.FileHandler handler for your logger, like so:
logger.addHandler(logging.FileHandler('/tmp/logfile.log', 'a'))
then you can change the file it writes to by replacing the stream it outputs to:
INDEX = 0  # you will have to find the index position of the FileHandler you
           # registered on this logger. I just listed them with logger.handlers
           # and picked the one I needed. If you only register one handler,
           # it will be at index 0, i.e. the first one.

#: change to a new file location
logger.handlers[INDEX].setStream(open('/path/to/new/log/file.log', 'a'))

#: change to stdout or stderr
import sys
logger.handlers[INDEX].setStream(sys.stdout)  # or sys.stderr

#: change to another device
logger.handlers[INDEX].setStream(open('/dev/ttys010', 'a'))
If you're curious, I found this in a few minutes by doing a bit of digging (in the ipython and python interpreters):
>>> import logging
>>> logger = logging.getLogger( __name__ )
>>> logger.addHandler( logging.FileHandler('/tmp/logfile', 'a') )
>>> globals()
>>> dir(logger)
#: found that the logger has 'handlers' attribute
>>> dir(logger.handlers)
>>> logger.handlers
#: found that the FileHandler I registered earlier is at index: 0
>>> logger.handlers[0]
>>> dir(logger.handlers[0])
#: found that FileHandler has a dictionary '__dict__'
>>> logger.handlers[0].__dict__
#: found that FileHandler dict has 'baseFilename' attribute with the filename
#: i had set when registering the file handler
>>> logger.handlers[0].__dict__['baseFilename']
#: tried changing the file it points to
>>> logger.handlers[0].__dict__['baseFilename'] = '/tmp/logfile.log'
#: tried logging
>>> logger.info(f'hello world')
#: didn't work
#: found another interesting perhaps relevant attribute 'stream' in the
#: FileHandler dict
>>> logger.handlers[0].__dict__['stream']
>>> dir(logger.handlers[0].__dict__['stream'])
>>> logger.handlers[0].__dict__['stream'].__dict__
#: tried replacing the stream altogether
>>> logger.handlers[0].__dict__['stream'] = open('/tmp/logfile.log','a')
#: tried logging
>>> logger.info(f'hello world again')
#: it worked
>>> logger.info(f'hey it worked')
#: found another interesting perhaps relevant method 'setStream'
>>> logger.handlers[0].setStream( open('/tmp/otherlogfile.log','a') )
#: tried logging
>>> logger.info(f'hello world again')
#: it worked
>>> logger.info(f'hey it worked')
You can also change the name of the logger with:
logger.name = 'bla'
and more, see: dir(logger)
The following script:
#!/usr/bin/env python
from fabric.api import env, run
import logging

logging.getLogger().setLevel(logging.INFO)

env.host_string = "%s@%s:%s" % ('myuser', 'myhost', '22')
res = run('date', pty=False)
Produces the following output:
[myuser@myhost:22] run: date
No handlers could be found for logger "ssh.transport"
[myuser@myhost:22] out: Thu Mar 29 16:15:15 CEST 2012
I would like to get rid of this annoying error message: No handlers could be found for logger "ssh.transport"
The problem happens when setting the log level (setLevel).
How can I solve this? I need to set the log level, so skipping that won't help.
You need to initialize the logging system. You can make the error go away by doing so in your app thusly:
import logging
logging.basicConfig( level=logging.INFO )
Note: this uses the default Formatter, which is not terribly useful. You might consider something more like:
import logging
FORMAT="%(name)s %(funcName)s:%(lineno)d %(message)s"
logging.basicConfig(format=FORMAT, level=logging.INFO)
My hack is ugly but works:
import logging

# This is here to avoid the mysterious message: 'No handlers could be found for logger "ssh.transport"'
class MyNullHandler(logging.Handler):
    def emit(self, record):
        pass

bugfix_loggers = {}

def bugfix(name):
    global bugfix_loggers
    if name not in bugfix_loggers:
        # print("Setting dummy logger for '%s'" % (name))
        logging.getLogger(name).addHandler(MyNullHandler())
        bugfix_loggers[name] = True
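Note that the standard library already ships this exact class as logging.NullHandler (Python 2.7/3.1 and later), so the whole hack can shrink to:
import logging

logging.getLogger("ssh.transport").addHandler(logging.NullHandler())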