Observation: When I comment out from logging import handlers, the error below is raised.
Error:
file_handler = logging.handlers.RotatingFileHandler(
AttributeError: module 'logging' has no attribute 'handlers'
Question: If I have already imported logging, why is it required to do from logging import handlers?
import logging
import sys
#from logging import handlers

def LoggerDefination():
    #file_handler = logging.FileHandler(filename='..\\logs\\BasicLogger_v0.1.log', mode='a')
    file_handler = logging.handlers.RotatingFileHandler(
        filename="..\\logs\\BasicLogger_v0.2.log",
        mode='a',
        maxBytes=20000,
        backupCount=7,
        encoding=None,
        delay=0
    )
    file_handler.setLevel(logging.DEBUG)
    stdout_handler = logging.StreamHandler(sys.stdout)
    stdout_handler.setLevel(logging.DEBUG)
    handlers = [file_handler, stdout_handler]
    logging.basicConfig(
        level=logging.DEBUG,
        format='%(asctime)s | %(module)s | %(name)s | LineNo_%(lineno)d | %(levelname)s | %(message)s',
        handlers=handlers
    )

def fnt_test_log1():
    LoggerDefination()
    WriteLog1 = logging.getLogger('fnt_test_log1')
    #WriteLog1.propagate=False
    WriteLog1.info("######## START OF : test_log1 ##########")
    WriteLog1.debug("test_log1 | This is debug level")
    WriteLog1.debug("test_log1 | This is debug level")
    WriteLog1.info("test_log1 | This is info level")
    WriteLog1.warning("test_log1 | This is warning level")
    WriteLog1.error("test_log1 | This is error level")
    WriteLog1.critical("test_log1 | This is critical level")
    WriteLog1.info("######## END OF : test_log1 ##########")

def fnt_test_log2():
    LoggerDefination()
    WriteLog2 = logging.getLogger('fnt_test_log2')
    WriteLog2.info("######## START OF : test_log2 ##########")
    WriteLog2.debug("test_log2 ===> debug")
    WriteLog2.debug("test_log2 | This is debug level")
    WriteLog2.debug("test_log2 | This is debug level")
    WriteLog2.info("test_log2 | This is info level")
    WriteLog2.warning("test_log2 | This is warning level")
    WriteLog2.error("test_log2 | This is error level")
    WriteLog2.critical("test_log2 | This is critical level")
    WriteLog2.info("######## STOP OF : test_log2 ##########")

if __name__ == '__main__':
    LoggerDefination()
    MainLog = logging.getLogger('main')
    MainLog.info("Executing script: " + __file__)
    fnt_test_log1()
    fnt_test_log2()
This is a two-part answer: The first part concerns your immediate question. The second part concerns why your question might have come up in the first place (and, well, how I ended up in this thread, given that your question is already two months old).
First part (the answer to your question): When importing a package like import logging, Python by default never imports subpackages (or submodules) like logging.handlers but only exposes variables to you that are defined in the package's __init__.py file (in this case logging/__init__.py). Unfortunately, it's hard to tell from the outside if logging.handlers is a variable in logging/__init__.py or an actual separate module logging/handlers.py. So you have to take a look at logging/__init__.py and then you will see that it doesn't define a handlers variable and that, instead, there's a module logging/handlers.py which you need to import separately through import logging.handlers or from logging.handlers import TheHandlerYouWantToUse. So this should answer your question.
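For the code in the question, the minimal fix is therefore to import the submodule explicitly and leave everything else unchanged; a small sketch (the filename here is a placeholder, not the question's original path):
import logging
import logging.handlers  # explicitly load the submodule so logging.handlers resolves

file_handler = logging.handlers.RotatingFileHandler(
    filename="example.log",  # placeholder for illustration
    maxBytes=20000,
    backupCount=7,
)
Equivalently, keeping from logging import handlers and writing handlers.RotatingFileHandler(...) works too, since either statement loads logging/handlers.py exactly once.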
Second part: I recently noticed that my IDE (PyCharm) always suggests import logging whenever I actually want to use logging.handlers.QueueHandler. And for some arcane reason (and despite what I said in the first part) it's been working! …well, most of the time.
Specifically, in the following code, the return type annotation causes the expected AttributeError: module 'logging' has no attribute 'handlers'. However, after commenting out the annotation (which, without from __future__ import annotations, gets evaluated when the function definition executes at module load time), calling the function works – provided I call it "late enough" (more on that below).
import logging
from multiprocessing.queues import Queue

def redirect_logs_to_queue(
    logging_queue: Queue, level: int = logging.DEBUG
) -> logging.handlers.QueueHandler:  # <-------------------------------- This fails
    queue_handler = logging.handlers.QueueHandler(logging_queue)  # <--- This works
    root = logging.getLogger()
    root.addHandler(queue_handler)
    root.setLevel(level)
    return queue_handler
So what does "late enough" mean? Unfortunately, my application was a bit too complex to do some quick bug hunting. Nevertheless, it was clear to me that logging.handlers must "become available" at some point between start-up (i.e. all my modules getting loaded/executed) and invoking that function. This gave me the decisive hint: it turns out another module deep down the package hierarchy was doing from logging.handlers import RotatingFileHandler, QueueListener. This statement loaded the entire logging.handlers module and caused Python to "mount" the handlers module in its parent package logging, meaning that the logging variable would henceforth always come equipped with a handlers attribute even after a mere import logging, which is why I didn't need to import logging.handlers anymore (and which is probably what PyCharm noticed).
Try it out yourself:
This works:
import logging
from logging.handlers import QueueHandler
print(logging.handlers)
This doesn't (it fails with the AttributeError):
import logging
print(logging.handlers)
All in all, this phenomenon is a result of Python's import mechanism using global state to avoid reloading modules. (Here, the "global state" I'm referring to is the logging variable that you get when you import logging.) While it is treacherous sometimes (as in the case above), it does make perfect sense: If you import logging.handlers in multiple modules, you don't want the code in the logging.handlers module to get loaded (and thus executed!) more than once. And so Python just adds the handlers attribute to the logging module object as soon as you import logging.handlers somewhere and since all modules that import logging share the same logging object, logging.handlers will suddenly be available everywhere.
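You can watch this global state directly via sys.modules, the interpreter-wide module cache; a minimal sketch (run it in a fresh interpreter, since anything imported earlier may already have pulled in the submodule):
import sys
import logging

print(hasattr(logging, "handlers"))        # False: submodule not loaded yet
print("logging.handlers" in sys.modules)   # False

import logging.handlers

print(hasattr(logging, "handlers"))        # True: mounted on the parent package
print("logging.handlers" in sys.modules)   # True: cached for all later imports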
Perhaps you have some other module called logging which masks the standard library one. Is there some file called logging.py somewhere in your code base?
How to diagnose
The best way to figure out the issue is to simply run python3, and then the following commands:
Python 3.8.10
>>> import logging
>>> logging
<module 'logging' from '/home/user/my_personal_package/logging/__init__.py'>
This shows that you are using this OTHER package (my_personal_package) instead of the built-in logging package.
It should say:
>>> import logging
>>> logging
<module 'logging' from '/usr/lib/python3.8/logging/__init__.py'>
You can also just put this in a program to debug it:
import logging
print(logging)
How to fix
Fix this by removing the offending logging.py or logging/__init__.py.
Related
I tried to enable logging for my Python library. I want all logs to go through my custom logger my_logger instead of the root logger. So this is what I tried:
import logging
my_logger = logging.getLogger('my_logger')
my_logger.warning("my hello!")
logging.warning("hello!")
And for some reason my custom logger didn't output its name (my_logger) in front of the message.
my hello!
WARNING:root:hello!
A simple change of the order of root and my loggers fixed the issue:
import logging
logging.warning("hello!")
my_logger = logging.getLogger('my_logger')
my_logger.warning("my hello!")
Output
WARNING:root:hello!
WARNING:my_logger:my hello!
I do not ever want to use the root logger at all. Is it possible to get the WARNING:my_logger prefix in my output without logging to the root logger first?
The logging module does its first-time configuration implicitly: the first root-level call such as logging.warning() runs basicConfig() for you and attaches a handler to the root logger. Before that happens, a record from my_logger falls through to the internal handler of last resort, which prints just the bare message without any prefix. You can do the configuration first to get it out of the way before using your logger:
import logging
logging.basicConfig() # do the configuration
my_logger = logging.getLogger('my_logger')
my_logger.warning("my hello!")
logging.warning("hello!")
Result:
WARNING:my_logger:my hello!
WARNING:root:hello!
For a library that you expect others to import, this should be done by the code importing the module instead of the library.
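As an illustration of that convention, this is the standard NullHandler pattern (the module name here is hypothetical): the library attaches a do-nothing handler to its logger and leaves all real configuration to the application:
# mylibrary.py -- hypothetical library module
import logging

log = logging.getLogger(__name__)
log.addHandler(logging.NullHandler())  # avoids "no handlers could be found" noise

def do_work():
    # Only becomes visible once the application configures logging itself,
    # e.g. via logging.basicConfig()
    log.info("doing work")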
I was wondering what would be the best way for me to structure my logs in a special situation.
I have a series of Python services that use the same Python files (e.g. com.py) for communicating with the hardware. I have logging implemented in these modules, and I would like it to be associated with the main service that calls them.
How should I structure the logger logic so that if I have:
main_service_1->module_for_comunication
The logging goes to file main_serv_1.log
main_service_2->module_for_comunication
The logging goes to file main_serv_2.log
What would be the best practice in this case, without hardcoding anything?
Is there a way to know which file is importing com.py, so that inside com.py I can use this information to adapt the logging to the caller?
In my experience, for a situation like this, the cleanest and easiest to implement strategy is to pass the logger to the code that does the logging.
So, create a logger for each service you want to have log to a different file, and pass that logger in to the code from your communications module. You can use __name__ to get the name of the current module (the actual module name, without the .py extension).
In the example below I implemented a fallback for the case when no logger is passed in as well.
com.py
from log import setup_logger

class Communicator(object):
    def __init__(self, logger=None):
        # Fall back to a module-level logger if none is passed in
        if logger is None:
            logger = setup_logger(__name__)
        self.log = logger

    def send(self, data):
        self.log.info('Sending %s bytes of data' % len(data))
svc_foo.py
from com import Communicator
from log import setup_logger

logger = setup_logger(__name__)

def foo():
    c = Communicator(logger)
    c.send('foo')
svc_bar.py
from com import Communicator
from log import setup_logger

logger = setup_logger(__name__)

def bar():
    c = Communicator(logger)
    c.send('bar')
log.py
from logging import FileHandler
import logging

def setup_logger(name):
    logger = logging.getLogger(name)
    handler = FileHandler('%s.log' % name)
    logger.addHandler(handler)
    return logger
main.py
from svc_bar import bar
from svc_foo import foo
import logging

# Add a StreamHandler for the root logger, so we get some console output in
# addition to file logging (for ease of testing). Also set the level of
# the root logger to INFO so our messages don't get filtered.
logging.basicConfig(level=logging.INFO)

foo()
bar()
So, when you execute python main.py, this is what you'll get:
On the console:
INFO:svc_foo:Sending 3 bytes of data
INFO:svc_bar:Sending 3 bytes of data
And svc_foo.log and svc_bar.log each will have one line
Sending 3 bytes of data
If a client of the Communicator class uses it without passing in a logger, the log output will end up in com.log (fallback).
I see several options:
Option 1
Use __file__. __file__ is the pathname of the file from which the module was loaded (see the Python docs). Depending on your structure, you should be able to identify the module from its path like so:
If the folder structure is
+- module1
| +- __init__.py
| +- main.py
+- module2
+- __init__.py
+- main.py
you should be able to obtain the module name with code placed in main.py:
import os

def get_name():
    # Name of the folder containing this file (module1 or module2)
    module_name = os.path.basename(os.path.dirname(os.path.abspath(__file__)))
    return module_name

This is not exactly DRY, because you need the same code in both main.py files.
Option 2
A bit cleaner is to open two terminal windows and use an environment variable. E.g. you can define MOD_LOG_NAME as MOD_LOG_NAME="main_service_1" in one terminal and MOD_LOG_NAME="main_service_2" in the other one. Then, in your Python code, you can use something like:
import os

LOG_PATH_NAME = os.environ['MOD_LOG_NAME']
This follows separation of concerns.
Update (since the question evolved a bit)
Once you've established the distinct name, all you have to do is to configure the logger:
import logging
logging.basicConfig(filename=LOG_PATH_NAME, level=logging.DEBUG)
(or use get_name() from Option 1) and run the program.
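Putting Option 2 together, a self-contained sketch (the fallback name default_service is made up for illustration; in the two-terminal setup above the variable would always be set):
import logging
import os

# Read the service name from the environment, with a hypothetical fallback
log_name = os.environ.get('MOD_LOG_NAME', 'default_service')
logging.basicConfig(filename='%s.log' % log_name, level=logging.DEBUG)
logging.debug('logging configured for %s', log_name)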
I want to find out how logging should be organised given that I write many scripts and modules that should feature similar logging. I want to be able to set the logging appearance and the logging level from the script and I want this to propagate the appearance and level to my modules and only my modules.
An example script could be something like the following:
import logging
import technicolor
import example_2_module

def main():
    verbose = True
    global log
    log = logging.getLogger(__name__)
    logging.root.addHandler(technicolor.ColorisingStreamHandler())
    # logging level
    if verbose:
        logging.root.setLevel(logging.DEBUG)
    else:
        logging.root.setLevel(logging.INFO)
    log.info("example INFO message in main")
    log.debug("example DEBUG message in main")
    example_2_module.function1()

if __name__ == '__main__':
    main()
An example module could be something like the following:
import logging

log = logging.getLogger(__name__)

def function1():
    print("printout of function 1")
    log.info("example INFO message in module")
    log.debug("example DEBUG message in module")
You can see that in the module there is minimal infrastructure written to pick up the logging appearance and level set in the script. This has worked fine, but I've encountered a problem: other modules I import also use logging. This can result in output being printed twice, and in very detailed debug logging from modules that are not my own.
How should I code this such that the logging appearance/level is set from the script but then used only by my modules?
You need to set the propagate attribute to False so that the log message does not propagate to ancestor loggers. See the documentation for Logger.propagate; it defaults to True. So just:
import logging
log = logging.getLogger(__name__)
log.propagate = False
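A minimal sketch of the duplicate-output situation this addresses (the handlers and logger name are illustrative, not taken from the question):
import logging

logging.getLogger().addHandler(logging.StreamHandler())   # handler on root
log = logging.getLogger('mypkg')
log.addHandler(logging.StreamHandler())                   # this logger's own handler

log.warning('printed twice')   # handled here, then propagated to root's handler
log.propagate = False
log.warning('printed once')    # propagation to ancestors is now cut off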
I'm writing unit tests to make sure that my log messages for syslog are limited in size. In particular, my Formatter is set like this:
import logging
from logging.handlers import SysLogHandler
[...]
default_fmt = logging.Formatter(
    '%(name)s:%(levelname)s - %(message).{}s'.format(MAX_LOG_MESSAGE_SIZE)
)
All that I want to know is that the final message is less than a certain number of characters. I assumed I needed to patch SysLogHandler like this:
with patch('mylib.logger.logging.handlers.SysLogHandler') as m:
    import mylib.logger
    msg = 'foo'
    mylib.logger.debug(msg)
    print m.mock_calls
    print m.call_list()
However running the tests with nosetests doesn't work:
ImportError: No module named logging
What am I forgetting?
This question already has answers here:
Python: logging module - globally
(5 answers)
Closed 6 years ago.
How do I make a Logger global so that I can use it in every module I make?
Something like this in moduleA:
import logging
import moduleB
log = logging.getLogger('')
result = moduleB.goFigure(5)
log.info('Answer was %s', result)
With this in moduleB:
def goFigure(integer):
    if not isinstance(integer, int):
        log.critical('not an integer')
    else:
        return integer + 1
Currently, I will get an error because moduleB does not know what log is. How do I get around that?
You could make your own logging "module" which instantiates the logger, then have all of your code import that instead. Think:
logger.py:
import logging
log = logging.getLogger('')
codeA.py:
from logger import log
log.info('whatever')
codeB.py:
from logger import log
log.warn('some other thing')
Creating a global logger which can be used either to create a new log file or to return the logger for a global log file.
Create a module called myLogger.py; this will have the log creation code.
myLogger.py:
import logging

def myLog(name, fname='myGlobalLog.log'):
    '''Debug Log'''
    logger = logging.getLogger(name)
    logger.setLevel(logging.DEBUG)
    fhan = logging.FileHandler(fname)
    fhan.setLevel(logging.DEBUG)
    logger.addHandler(fhan)
    formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
    fhan.setFormatter(formatter)
    # uncomment the next line to disable this logger
    #logger.disabled = True
    return logger
Now, to create a new log in your module, say A.py:
from myLogger import myLog
log = myLog(__name__, 'newLog.log')
log.debug("In new log file")
Thus you have to pass the file name along while getting the logger.
To use the global logger in A.py:
from myLogger import myLog
log = myLog(__name__)
log.debug("In myGlobalLog file")
You need not pass the file name in this case, as we are going to use the global log.
By default, a module only has access to built-in functions and built-in constants. For all other variables, functions etc. you have to use the import keyword.
Now, for your concrete example, you can import the log variable of moduleA in moduleB like this:
from moduleA import log
The following would be equivalent, because the logging module returns the same logger instance that it returned to moduleA:
import logging
log = logging.getLogger('')
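A quick sketch to confirm that same-instance behaviour (the name 'shared' is illustrative; getLogger('') likewise always returns the one root logger):
import logging

a = logging.getLogger('shared')
b = logging.getLogger('shared')
assert a is b  # getLogger hands back one cached Logger instance per name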
Another solution for you could be to use the default logger of the logging module like this:
logging.info("Hello")