I have a question about how to share a Python logger across modules.
Let's say I have a main.py that imports protocol.py and declares a logger variable:
import logging
from protocol import Protocol
log = logging.getLogger(__name__)
log.debug("Currently in main.py")
protocol_obj_1 = Protocol(log)
protocol_obj_2 = Protocol()
Within protocol.py, let's say I wish to log additional data by sharing the same log variable.
class Protocol:
    def __init__(self, log_var=None):
        self.log_var = log_var

    def logData(self):
        if self.log_var is not None:
            self.log_var.debug("Currently within class Protocol")
        else:
            print("Currently no logging")
Is this a Pythonic approach? Or are there better ways to share the logger with other modules?
The most idiomatic Python way of creating a logger in a module is, as the documentation says:
A good convention to use when naming loggers is to use a module-level logger, in each module which uses logging, named as follows:
logger = logging.getLogger(__name__)
But cases where you want different modules to log to the same logger are not rare, for instance in a Python distribution package (say, one distributed on PyPI). If the package is "flat", you can use __package__ instead of __name__ in every module:
logger = logging.getLogger(__package__)
If your distribution package contains nested import packages, you may instead want to hardcode the top-level name in every module:
logger = logging.getLogger('mypackage')
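For instance, inside a hypothetical mypackage/utils.py:
# mypackage/utils.py
import logging

# In a flat package, __package__ == "mypackage" here, so every
# module in the package shares the single "mypackage" logger.
logger = logging.getLogger(__package__)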
I want to change my package's print statements to logging calls. So I will write my scripts like:
import logging
logger = logging.getLogger(__name__)
def func():
    logger.info("Calling func")
which is the recommended way.
However, many users do not initialize logging and hence will not see the output.
Is there a recommended way so that users who do not initialize logging can see info output, and those who explicitly set up logging do not get their specific configuration tampered with by my package?
As a general rule of thumb, modules should never configure logging directly (or make other unsolicited changes to the shared STDOUT/STDERR) as that's the realm of the module's user. If the user wants the output, they should explicitly enable logging, and then, and only then, your module should be allowed to log.
Bonus points if you provide an interface for explicitly enabling logging within your module, so that the user doesn't have to change levels or disable loggers of your inner components when they're interested only in logging from their own code.
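For example, a minimal sketch of such an opt-in switch (the enable_logging name and the format string are my own illustration, not a standard API):
# your_module.py
import logging

logger = logging.getLogger(__name__)

def enable_logging(level=logging.INFO):
    # Let the user explicitly opt in to this module's output.
    handler = logging.StreamHandler()
    handler.setFormatter(logging.Formatter("%(name)s: %(levelname)s: %(message)s"))
    logger.addHandler(handler)
    logger.setLevel(level)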
Of course, to keep using logging when a STDOUT/STDERR handler is not yet initialized, you can use logging.NullHandler, so all you have to do in your module is:
import logging
logger = logging.getLogger(__name__)
logger.addHandler(logging.NullHandler()) # initialize your logger with a NullHandler
def func():
    logger.info("Calling func")
func() # (((crickets)))
Now you can continue using your logger throughout your module and until the user initializes logging your module won't trespass on the output, e.g.:
import your_module
your_module.func() # (((crickets)))
import logging
logging.root.setLevel(logging.INFO)
logging.info("Initialized!") # INFO:root:Initialized!
your_module.func() # INFO:your_module:Calling func
Of course, the actual logging format, level and other settings should also be in the realm of the user so if they change the format of the root log handler, it should be inherited by your module logger as well.
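To illustrate that inheritance (the format string is arbitrary; your_module is the module sketched above):
import logging
import your_module

# The user picks a format once, on the root handler...
logging.basicConfig(format="%(asctime)s %(name)s %(levelname)s %(message)s",
                    level=logging.INFO)

# ...and the module's records propagate up to that handler,
# so they come out in the user's chosen format.
your_module.func()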
I'm writing a big program with many modules. In some modules I want to use logging. What is the best practice for logging in Python?
Should I import the standard logging module and use it in every one of my files:
# proxy_control.py
#!/usr/bin/env python3
import logging

class ProxyClass:
    def check_proxy(self):
        pass

logging.basicConfig(filename='proxy.log', level=logging.INFO)
logging.info('Proxy works fine')
Or maybe I should write one MyLog() class on top of the default logging module and use it from all my other modules?
# proxy_control.py
#!/usr/bin/env python3
import my_log

class ProxyClass:
    def check_proxy(self):
        pass

my_log.MyLog.send_log('Proxy works fine')
# my_log.py
#!/usr/bin/env python3
import logging

class MyLog:
    @staticmethod
    def send_log(value):
        pass
A typical convention is to define a logger at the top of every module that requires logging and then use that logger object throughout the module.
logger = logging.getLogger(__name__)
This way, loggers will follow your package names (i.e. package.subpackage.module). This is useful because loggers propagate messages upwards based on the logger name (i.e. parent.child will pass messages up to parent). This means that you can do all your configuration on the top-level logger and it will get messages from all the sub-loggers in your package. Or, if someone else is using your library, it will be very easy for them to configure which logging messages they get from it, because it will have a consistent namespace.
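A short sketch of that propagation, using a hypothetical package name:
import logging

# Configure only the top-level logger of the package...
top = logging.getLogger("mypackage")
top.setLevel(logging.DEBUG)
top.addHandler(logging.StreamHandler())

# ...and any child logger passes its records up to it.
logging.getLogger("mypackage.subpackage.module").debug("reaches the 'mypackage' handler")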
For a library, you typically don't want to show logging messages unless the user explicitly enables them. Python logging will automatically log to stderr unless you set up a handler, so you should add a NullHandler to your top-level logger. This will prevent the messages from automatically being printed to stderr.
logger = logging.getLogger(__name__)
logger.addHandler(logging.NullHandler())
NOTE - The NullHandler was added in Python 2.7; for earlier versions, you'll have to implement it yourself.
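The usual backport (as shown in the logging documentation for older Pythons) amounts to:
import logging

class NullHandler(logging.Handler):
    # A handler that swallows every record (stand-in for logging.NullHandler).
    def emit(self, record):
        pass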
Use the logging module, and leave logging configuration to your application's entry point (modules should not configure logging by themselves).
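In other words, something like this at the entry point, with every other module only ever calling logging.getLogger(__name__) (a minimal sketch; the file name and format are illustrative):
# app.py - only the entry point configures logging
import logging

if __name__ == "__main__":
    logging.basicConfig(
        filename="app.log",
        level=logging.INFO,
        format="%(asctime)s %(name)s %(levelname)s %(message)s",
    )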
I have an app which runs several instances of a main app depending on external parameters. The main app imports a few libraries, which in turn import other libraries. They all have a global
LOGGER = logging.getLogger('module_name')
The logger is configured with a file handler, so all the logs get written to the same external file. Now I want to write logs to different files based on a certain name that is passed to the main app. I need something like
LOGGER = logging.getLogger(dynamic_criteria_name)
The result will be multiple log files, one per dynamic_criteria_name, and any time the logger is called from any of the modules it should write to the correct file based on the criteria the app was started with.
But the problem is that the LOGGER is global. I could pass the logger or the dynamic_criteria_name to each function that writes to the log, but that somehow feels wrong. Maybe I'm just paranoid! Some of my modules contain only functions, and I don't want to pass the logger around everywhere.
I thought about AOP for a while, but I don't think this is really cross-cutting: since the logger name is generated dynamically, it only looks cross-cutting within one instance of the main app. I have also thought about other ways to hack the global state, but the dynamic nature seems to make that impossible, at least in my head.
Below is pseudocode (I haven't tried running it), but it better explains what I'm talking about. As you can see, module_1 imports module_2 and module_3, which both have a global LOGGER, and module_3 calls module_4. I'd like to find out whether I can write to a separate log file from module_2 or module_3 without passing the name explicitly to each imported module's functions. I can add multiple handlers with different file names to a logger, but how can I refer to the correct logger from another module, given that they are all global at the moment?
# module_1.py
import logging
import time

import module_2
import module_3

LOGGER = logging.getLogger()

def start_main_loop(name):
    while True:
        module_2.say_boo()
        module_3.say_foo()
        LOGGER.debug('Sleeping...')
        time.sleep(10)

if __name__ == '__main__':
    logging.basicConfig(level=logging.DEBUG)
    for i in range(10):
        start_main_loop(i)
#----------------------------------------------------
# module_2.py
import logging

LOGGER = logging.getLogger()

def say_boo():
    msg = 'boo'
    LOGGER.debug(msg)
    LOGGER.debug(say_twice(msg))

def say_twice(msg):
    LOGGER.debug('Called say twice')
    return msg * 2
#----------------------------------------------------
# module_3.py
import logging

import module_4

LOGGER = logging.getLogger()

def say_foo():
    msg = 'foo'
    LOGGER.debug(msg)
    LOGGER.debug(say_twice(msg))
    module_4.say_bar()

def say_twice(msg):
    LOGGER.debug('Called say twice')
    return msg * 2
#----------------------------------------------------
# module_4.py
import logging

LOGGER = logging.getLogger()

def say_bar():
    msg = 'bar'
    LOGGER.debug(msg)
I'm willing to explore any ideas people might have. I hope I have explained myself clearly; if not, please let me know and I can rephrase the question. Thanks!
You don't need to pass loggers around: loggers are singletons. If code in multiple modules calls getLogger('foo'), the same instance is returned for each call, so there is no need to pass logger instances around. In general, the logger name indicates where a logging call originates from, so the __name__ value makes the most sense. You could have code in a module foo log to a logger named bar - nothing prevents you from doing this - it just hasn't been found especially useful in practice.
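You can check this in an interpreter:
import logging

a = logging.getLogger("foo")
b = logging.getLogger("foo")
assert a is b  # getLogger() returns the one existing "foo" instance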
Sounds like AOP is overkill. Rather than passing logger names around, you might consider adding multiple handlers to the root logger, with specific filters on each handler to ensure that specific things go in specific files, according to your particular requirements.
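One way to build such filters without passing names around (an approach of my own, assuming Python 3.7+ for contextvars) is to key each handler's filter off a context variable that the main loop sets once per run:
# module_1.py (sketch) - route records to per-run files
import contextvars
import logging

current_criteria = contextvars.ContextVar("current_criteria", default=None)

class CriteriaFilter(logging.Filter):
    # Passes only records emitted while current_criteria matches this handler.
    def __init__(self, criteria):
        super().__init__()
        self.criteria = criteria

    def filter(self, record):
        return current_criteria.get() == self.criteria

def add_run_handler(criteria):
    handler = logging.FileHandler('%s.log' % criteria)
    handler.addFilter(CriteriaFilter(criteria))
    logging.getLogger().addHandler(handler)

# Before each run: add_run_handler(name); current_criteria.set(name).
# module_2/3/4 keep their plain global LOGGER = logging.getLogger().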
Task
I have logging setup across multiple modules in my application. All these modules send logs to the same file.
Client.py
import logging

import Settings
import actions1
import actions2

def initLogging(logToConsole=False):
    global logger
    logger = Settings.initLogging(Name, Dir, FileName, CmdId, logToConsole)
    logger.info("Starting client")

def initActions():
    actions1.init(logger)
    actions2.init(logger)
Settings.py
import logging
import logging.handlers

def initLogging(name, dir, file, cmdId, logToConsole=False):
    logger = logging.getLogger(name)
    # set up the logger properties: log levels, formatting, handlers, etc.
    return logger
actions1.py
def init(plogger):
    global logger
    logger = plogger

def some_function():
    # do something
    logger.info("some text")
I have multiple actions1/2/3.py modules, and I want different logging levels and different logging formats for each of these action scripts. I went through the docs and came across the auxiliary-module example, but it doesn't show how to achieve custom settings for the same logger when it is being accessed by these separate scripts.
Any help on how to do this would be appreciated. I would probably like to initialize these custom settings when I invoke the init() method, but I am currently out of ideas on how to do that.
P.S. Please ignore the typos; it was a lot of code.
Instead of creating a single global logger, you can use the logging.getLogger(name) method from the logging module. This way you get a logger object for each name, and you can then set the other parameters (level, handlers, formatting) on each logger object separately.
This is what docs says:
The logger name hierarchy is analogous to the Python package hierarchy, and identical to it if you organise your loggers on a per-module basis using the recommended construction logging.getLogger(__name__). That’s because in a module, __name__ is the module’s name in the Python package namespace.
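For the question's setup, a sketch of what that per-module configuration could look like (the levels, formats and shared file name are illustrative):
import logging

def configure_module_logger(name, level, fmt, filename="client.log"):
    # Give one module's logger its own level and format,
    # while all modules still append to the same log file.
    logger = logging.getLogger(name)
    handler = logging.FileHandler(filename)
    handler.setFormatter(logging.Formatter(fmt))
    logger.addHandler(handler)
    logger.setLevel(level)
    logger.propagate = False  # avoid duplicate lines via the root logger
    return logger

configure_module_logger("actions1", logging.DEBUG,
                        "%(asctime)s actions1 %(levelname)s %(message)s")
configure_module_logger("actions2", logging.WARNING,
                        "%(asctime)s actions2 %(levelname)s %(message)s")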
I have my main.py as follows:
import logging
import os

import web
import router  # module that defines the URL mapping (assumed from router.urls below)

def is_test():
    if 'WEBPY_ENV' in os.environ:
        return os.environ['WEBPY_ENV'] == 'test'

app = web.application(router.urls, globals())

logging.basicConfig(filename="log/debug.log", level=logging.INFO)

global logger
logger = logging.getLogger("debug")

if (not is_test()) and __name__ == "__main__":
    app.run()
Now, here I have defined a variable named logger as global. So, can I access this variable anywhere in my application without redefining it? I am using web.py.
Basically, what I need is something like this: I want to initialize the logger once and be able to use it anywhere in my whole application. How can I do that?
You don't need to access any global variable since the logging module already provides access to logger objects anywhere in your code, that is, if you use:
logger = logging.getLogger("debug")
in other modules, you'll get the same logger object as in the main module.
I don't fully understand why you're using a "debug" logger when you can adjust the level of your logger to get the debug messages, but maybe yours was just an example or you really need that, so let's get to the point:
From the official documentation: The key benefit of having the logging API provided by a standard library module is that all Python modules can participate in logging, so your application log can include your own messages integrated with messages from third-party modules.
This means that you just need to configure your logging at the beginning of your application and after that when you need a logger you call logging.getLogger(name) and you get that logger.
So there's no need to "track" your logger variable with:
global logger
logger = logging.getLogger("debug")
because whenever you need to log from your "debug" logger (in the middle of whatever you want) you just do something like:
my_debug_logger = logging.getLogger("debug")
my_debug_logger.info('some message')
In the end, the point is that once you import the logging module (import logging), you have access to each and every logger previously defined (and, of course, you can define new ones).
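For example, with the "debug" logger from the question (other_module is a hypothetical second module):
# main.py
import logging
import other_module  # hypothetical second module

logging.basicConfig(filename="log/debug.log", level=logging.INFO)
logging.getLogger("debug").info("from main")
other_module.do_work()

# other_module.py
import logging

def do_work():
    # Same logger object as in main.py - no global variable needed.
    logging.getLogger("debug").info("from other_module")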