How to proxy all methods from a Python module to another?

I want to have in my application a common logging module that logs to a file.
For example in my commonlog.py I can have something like this:
# Python logging module
import logging
logging.basicConfig(filename="test.log", level=logging.DEBUG)
From the other modules in the application I want to import this module and be able to use it as if it were the Python logging module itself, but without replicating all its functions. For example, from module test.py:
import commonlog
commonlog.debug("debug message")
commonlog.info("info message")
commonlog.ANY_OTHER_METHOD_THAT_BELONGS_TO_LOGGING()
How can I "proxy" all the methods of the logging module in my commonlog?
Doing:
commonlog.logging.etc...
is not a valid solution because it uses the logging module directly.

I've never had to "inherit" from a module before, so I don't know if it's naive to do a from logging import * at the top of commonlog. Here's code showing that it appears to work:
>>> with open('mylogging.py', 'w') as f:
...     f.write('''from logging import *
... my_customization = "it works"''')
...
>>> import mylogging
>>> print mylogging.my_customization
it works
>>> help(mylogging.log)
Help on function log in module logging:
log(level, msg, *args, **kwargs)
Log 'msg % args' with the integer severity 'level' on the root logger.
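For reference, on Python 3.7+ there is also a more explicit alternative to the star import: a module-level __getattr__ (PEP 562) that forwards any attribute lookup to the stdlib logging module. A minimal sketch of commonlog.py using this approach (the filename test.log is taken from the question):

```python
# commonlog.py -- sketch of a logging proxy via a PEP 562 module __getattr__
import logging

logging.basicConfig(filename="test.log", level=logging.DEBUG)

def __getattr__(name):
    # Called only for attributes not defined in this module itself,
    # so any customizations added here still take precedence.
    return getattr(logging, name)
```

With this in place, commonlog.debug(...), commonlog.getLogger(...), and so on all forward to logging, while names defined in commonlog itself are served as usual.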

Related

Python Logging - AttributeError: module 'logging' has no attribute 'handlers'

Observation: When I comment out from logging import handlers, the error below is observed.
Error:
file_handler = logging.handlers.RotatingFileHandler(
AttributeError: module 'logging' has no attribute 'handlers'
Question: If I have already imported logging, why is it required to do from logging import handlers?
import logging
import sys
#from logging import handlers
def LoggerDefination():
    #file_handler = logging.FileHandler(filename='..\\logs\\BasicLogger_v0.1.log', mode='a')
    file_handler = logging.handlers.RotatingFileHandler(
        filename="..\\logs\\BasicLogger_v0.2.log",
        mode='a',
        maxBytes=20000,
        backupCount=7,
        encoding=None,
        delay=0
    )
    file_handler.setLevel(logging.DEBUG)
    stdout_handler = logging.StreamHandler(sys.stdout)
    stdout_handler.setLevel(logging.DEBUG)
    handlers = [file_handler, stdout_handler]
    logging.basicConfig(
        level=logging.DEBUG,
        format='%(asctime)s | %(module)s | %(name)s | LineNo_%(lineno)d | %(levelname)s | %(message)s',
        handlers=handlers
    )

def fnt_test_log1():
    LoggerDefination()
    WriteLog1 = logging.getLogger('fnt_test_log1')
    #WriteLog1.propagate=False
    WriteLog1.info("######## START OF : test_log1 ##########")
    WriteLog1.debug("test_log1 | This is debug level")
    WriteLog1.debug("test_log1 | This is debug level")
    WriteLog1.info("test_log1 | This is info level")
    WriteLog1.warning("test_log1 | This is warning level")
    WriteLog1.error("test_log1 | This is error level")
    WriteLog1.critical("test_log1 | This is critical level")
    WriteLog1.info("######## END OF : test_log1 ##########")

def fnt_test_log2():
    LoggerDefination()
    WriteLog2 = logging.getLogger('fnt_test_log2')
    WriteLog2.info("######## START OF : test_log2 ##########")
    WriteLog2.debug("test_log2 ===> debug")
    WriteLog2.debug("test_log2 | This is debug level")
    WriteLog2.debug("test_log2 | This is debug level")
    WriteLog2.info("test_log2 | This is info level")
    WriteLog2.warning("test_log2 | This is warning level")
    WriteLog2.error("test_log2 | This is error level")
    WriteLog2.critical("test_log2 | This is critical level")
    WriteLog2.info("######## STOP OF : test_log2 ##########")

if __name__ == '__main__':
    LoggerDefination()
    MainLog = logging.getLogger('main')
    MainLog.info("Executing script: " + __file__)
    fnt_test_log1()
    fnt_test_log2()
This is a two-part answer: The first part concerns your immediate question. The second part concerns the question why your question might have come up in the first place (and, well, how I ended up in this thread, given that your question is already two months old).
First part (the answer to your question): When importing a package like import logging, Python by default never imports subpackages (or submodules) like logging.handlers but only exposes variables to you that are defined in the package's __init__.py file (in this case logging/__init__.py). Unfortunately, it's hard to tell from the outside if logging.handlers is a variable in logging/__init__.py or an actual separate module logging/handlers.py. So you have to take a look at logging/__init__.py and then you will see that it doesn't define a handlers variable and that, instead, there's a module logging/handlers.py which you need to import separately through import logging.handlers or from logging.handlers import TheHandlerYouWantToUse. So this should answer your question.
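As a short, hedged sketch of that fix (the filename and rotation arguments here are purely illustrative):

```python
import logging
import logging.handlers  # the submodule must be imported explicitly

# After the explicit import, logging.handlers is available as usual.
handler = logging.handlers.RotatingFileHandler(
    "app.log",        # illustrative filename
    maxBytes=20000,
    backupCount=7,
)
```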
Second part: I recently noticed that my IDE (PyCharm) always suggests import logging whenever I actually want to use logging.handlers.QueueHandler. And for some arcane reason (and despite what I said in the first part) it's been working! …well, most of the time.
Specifically, in the following code, the type annotation causes the expected AttributeError: module 'logging' has no attribute 'handlers'. However, after commenting out the annotation (which for Python < 3.9 gets executed during module execution time), calling the function works – provided I call it "late enough" (more on that below).
import logging
from multiprocessing.queues import Queue
def redirect_logs_to_queue(
logging_queue: Queue, level: int = logging.DEBUG
) -> logging.handlers.QueueHandler: # <-------------------------------- This fails
queue_handler = logging.handlers.QueueHandler(logging_queue) # <--- This works
root = logging.getLogger()
root.addHandler(queue_handler)
root.setLevel(level)
return queue_handler
So what does "late enough" mean? Unfortunately, my application was a bit too complex to do some quick bug hunting. Nevertheless, it was clear to me that logging.handlers must "become available" at some point between start-up (i.e. all my modules getting loaded/executed) and invoking that function. This gave me the decisive hint: It turns out another module deep down the package hierarchy was doing from logging.handlers import RotatingFileHandler, QueueListener. This statement loaded the entire logging.handlers module and caused Python to "mount" the handlers module in its parent package logging, meaning that the logging variable would henceforth always come equipped with a handlers attribute even after a mere import logging, which is why I didn't need to import logging.handlers anymore (and which is probably what PyCharm noticed).
Try it out yourself:
This works:
import logging
from logging.handlers import QueueHandler
print(logging.handlers)
This doesn't:
import logging
print(logging.handlers)
All in all, this phenomenon is a result of Python's import mechanism using global state to avoid reloading modules. (Here, the "global state" I'm referring to is the logging variable that you get when you import logging.) While it is treacherous sometimes (as in the case above), it does make perfect sense: If you import logging.handlers in multiple modules, you don't want the code in the logging.handlers module to get loaded (and thus executed!) more than once. And so Python just adds the handlers attribute to the logging module object as soon as you import logging.handlers somewhere and since all modules that import logging share the same logging object, logging.handlers will suddenly be available everywhere.
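The "mounting" described above can be observed directly through sys.modules, Python's global module cache; a small sketch:

```python
import sys
import logging.handlers  # loads the submodule and attaches it to the parent

import logging  # a plain import now "sees" the handlers attribute too

# The submodule is cached once, globally ...
assert "logging.handlers" in sys.modules
# ... and the parent package object carries it as an attribute, which is
# why every module that did `import logging` suddenly gets it as well.
assert logging.handlers is sys.modules["logging.handlers"]
```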
Perhaps you have some other module called logging which masks the standard library one. Is there some file called logging.py somewhere in your code base?
How to diagnose
The best way to figure out the issue is to simply run python3, and then the following commands:
Python 3.8.10
>>> import logging
>>> logging
<module 'logging' from '/home/user/my_personal_package/logging/__init__.py'>
This shows that you are using this OTHER package (my_personal_package) instead of the built-in logging package.
It should say:
>>> import logging
>>> logging
<module 'logging' from '/usr/lib/python3.8/logging/__init__.py'>
You can also just put this in a program to debug it:
import logging
print(logging)
How to fix
Fix this by removing the offending logging.py or logging/__init__.py.

Is there an easy way to print Unix-style error messages?

I want to print basic error messages from a script. They should have the name of the script (i.e. sys.argv[0]) then the message, and should be printed to stderr. Is there an easy way to do this using the standard library?
I've written the below (inspired by this related answer) and put it in its own module so I can import it in my scripts, but I feel like there's a better way.
from __future__ import print_function
import sys
def print_error(*args, **kwargs):
    """Print the script name and any arguments to stderr."""
    print(sys.argv[0] + ':', *args, file=sys.stderr, **kwargs)
The easiest way is to use logging. First set its format to '%(pathname)s: %(message)s', then use logging.error() to print. For example:
# Setup
import logging
logging.basicConfig(format='%(pathname)s: %(message)s')
# Print error
logging.error('Test error message')
Usage:
$ python3 test.py
test.py: Test error message
$ python3 ~/test.py
/home/wja/test.py: Test error message
See LogRecord attributes for more info about pathname and other available info.
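If only the script's basename is wanted regardless of how it was invoked, the %(filename)s attribute (documented as the filename portion of pathname) can be used instead; a sketch:

```python
import logging

# %(filename)s is the basename of %(pathname)s, so the output is the
# same whether the script is run as test.py or /home/wja/test.py
logging.basicConfig(format='%(filename)s: %(message)s')
logging.error('Test error message')
```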

How to structure the Logging in this case (python)

I was wondering what would be the best way for me to structure my logs in a special situation.
I have a series of Python services that use the same Python files (e.g. com.py) for communicating with the hardware. I have logging implemented in these modules, and I would like it to be associated with the main service that is calling them.
How should I structure the logger logic so that if I have:
main_service_1->module_for_comunication
The logging goes to file main_serv_1.log
main_service_2->module_for_comunication
The logging goes to file main_serv_2.log
What would be the best practice in this case, without hardcoding anything?
Is there a way to know which file is importing com.py, so that inside com.py I can use this information to adapt the logging to the caller?
In my experience, for a situation like this, the cleanest and easiest to implement strategy is to pass the logger to the code that does the logging.
So, create a logger for each service you want to have log to a different file, and pass that logger in to the code from your communications module. You can use __name__ to get the name of the current module (the actual module name, without the .py extension).
In the example below I implemented a fallback for the case when no logger is passed in as well.
com.py
from log import setup_logger
class Communicator(object):
    def __init__(self, logger=None):
        if logger is None:
            logger = setup_logger(__name__)
        self.log = logger

    def send(self, data):
        self.log.info('Sending %s bytes of data' % len(data))
svc_foo.py
from com import Communicator
from log import setup_logger
logger = setup_logger(__name__)
def foo():
    c = Communicator(logger)
    c.send('foo')
svc_bar.py
from com import Communicator
from log import setup_logger
logger = setup_logger(__name__)
def bar():
    c = Communicator(logger)
    c.send('bar')
log.py
from logging import FileHandler
import logging
def setup_logger(name):
    logger = logging.getLogger(name)
    handler = FileHandler('%s.log' % name)
    logger.addHandler(handler)
    return logger
main.py
from svc_bar import bar
from svc_foo import foo
import logging
# Add a StreamHandler for the root logger, so we get some console output in
# addition to file logging (for ease of testing). Also set the level of
# the root logger to INFO so our messages don't get filtered.
logging.basicConfig(level=logging.INFO)
foo()
bar()
So, when you execute python main.py, this is what you'll get:
On the console:
INFO:svc_foo:Sending 3 bytes of data
INFO:svc_bar:Sending 3 bytes of data
And svc_foo.log and svc_bar.log each will have one line
Sending 3 bytes of data
If a client of the Communicator class uses it without passing in a logger, the log output will end up in com.log (fallback).
I see several options:
Option 1
Use __file__. __file__ is the pathname of the file from which the module was loaded (doc). Depending on your structure, you should be able to identify the module by using os.path like so:
If the folder structure is
+- module1
| +- __init__.py
| +- main.py
+- module2
+- __init__.py
+- main.py
you should be able to obtain the module name with a code placed in main.py:
import os

def get_name():
    # name of the folder containing this file, e.g. "module1"
    module_name = os.path.basename(os.path.dirname(os.path.abspath(__file__)))
    return module_name
This is not exactly DRY, because you need the same code in both main.py files.
Option 2
A bit cleaner is to open 2 terminal windows and use an environment variable. E.g. you can define MOD_LOG_NAME as MOD_LOG_NAME="main_service_1" in one terminal and MOD_LOG_NAME="main_service_2" in the other one. Then, in your python code you can use something like:
import os
LOG_PATH_NAME = os.environ['MOD_LOG_NAME']
This follows separation of concerns.
Update (since the question evolved a bit)
Once you've established the distinct name, all you have to do is to configure the logger:
import logging
logging.basicConfig(filename=LOG_PATH_NAME,level=logging.DEBUG)
(or use get_name() from Option 1) and run the program.
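Putting Option 2 together, a minimal sketch (MOD_LOG_NAME is the environment variable suggested above; the fallback default here is purely illustrative):

```python
import logging
import os

# Read the per-service log file name from the environment; each terminal
# window (service) sets MOD_LOG_NAME to its own value, e.g. "main_serv_1.log".
log_path = os.environ.get("MOD_LOG_NAME", "default_service.log")
logging.basicConfig(filename=log_path, level=logging.DEBUG)
logging.debug("service started")
```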

Patching python's logging handlers

I'm writing unit tests to make sure that my log messages for syslog are limited in size. In particular, my Formatter is set like this:
import logging
from logging.handlers import SysLogHandler
[...]
default_fmt = logging.Formatter(
'%(name)s:%(levelname)s - %(message).{}s'.format(MAX_LOG_MESSAGE_SIZE)
)
All that I want to know is that the final message is less than a certain number of characters. I assumed I needed to patch SysLogHandler like this:
from mock import patch

with patch('mylib.logger.logging.handlers.SysLogHandler') as m:
    import mylib.logger

    msg = 'foo'
    mylib.logger.debug(msg)

    print m.mock_calls
    print m.call_list()
However running the tests with nosetests doesn't work:
ImportError: No module named logging
What am I forgetting?

python variable sharing between packages/modules

Trying to understand and learn how to write packages... testing with something I've always used: logging.
Can you please help me understand why the log variable is not working, and why no logging appears on the screen?
Thanks!
main.py :
#!/opt/local/bin/python
import sys
sys.path.append('CLUSTER')
import clusterlogging.differentlogging
clusterlogging.differentlogging.consolelogging()
log.debug("Successfully logged in")
differentlogging.py
#!/opt/local/bin/python
def consolelogging():
    import logging

    class NullHandler(logging.Handler):
        def emit(self, record):
            pass

    print "Console Logging loaded"
    DEFAULTLOGLEVEL = logging.INFO
    log = logging.getLogger(__name__)
    log.addHandler(NullHandler())
    log.debug("Successfully logged in")

def mysqllogging():
    print "mysql logging module here"

def sysloglogging():
    print "rsyslog logging module here"
output
Console Logging loaded
Traceback (most recent call last):
File "./svnprod.py", line 10, in <module>
log.debug("Successfully logged in")
NameError: name 'log' is not defined
log is a global variable in the differentlogging module. Thus you can access it as
clusterlogging.differentlogging.log.
You could also do something like from clusterlogging.differentlogging import log and then access it as just log.
Edit: actually, on reviewing your code again I don't know what to make of it. Could you please fix up your code indentation so that it makes sense? Are you defining log inside the consolelogging function? If so, you'll need to either make it global with global log or return it from the function and assign it to a variable log on the line where you call the function.
This will return the log object, and you will be able to use the logging functions associated with it.
main.py:
#!/usr/bin/env python
import sys
sys.path.append('CLUSTER')
import clusterlogging.differentlogging
log=clusterlogging.differentlogging.ttylogging()
log.debug("Logging module loaded")
log.info ("It worked")
differentlogging.py :
#!/usr/bin/env python
def ttylogging():
    print "Console Logging loaded"
    import sys
    import logging

    class NullHandler(logging.Handler):
        def emit(self, record):
            pass

    DEFAULTLOGLEVEL = logging.INFO
    log = logging.getLogger(__name__)
    log.addHandler(NullHandler())
    log.setLevel(DEFAULTLOGLEVEL)
    logStreamHandler = logging.StreamHandler(sys.stdout)
    logStreamHandler.setFormatter(logging.Formatter("%(asctime)s %(levelname)5s %(name)s %(lineno)d: %(message)s"))
    log.addHandler(logStreamHandler)
    return log

def mysqllogging():
    print "mysql logging module here"

def sysloglogging():
    print "rsyslog logging module here"
Your main.py doesn't do anything to define the name log in its own namespace. Importing a module defines names inside that module's namespace, but doesn't put the module's names into the importer's namespace.
In your main.py you should add this statement:
from clusterlogging.differentlogging import log
By the way, since that is such a long module name, I would use import ... as:
import clusterlogging.differentlogging as difflogging
log = difflogging.log
EDIT: I originally recommended this but it won't work:
from difflogging import log # doesn't work
You might even want to use a really short name like dl:
import clusterlogging.differentlogging as dl
dl.log.debug('whatever')
Since dl is really short, maybe you don't need to get log bound in the global namespace.
Also, you could get every name from a module by using import * but this is not recommended.
from clusterlogging.differentlogging import * # not recommended
You usually don't want to clutter the global namespace with all the stuff defined in a module. Import just what you need. This is tidier and helps document what you are actually using.