How to reduce duplicated code? - python

I have to print the thread name, but I need to add a function call everywhere:
self.logger.debug("{} log".format(currentThread().getName()))
self.logger.error("{} log".format(currentThread().getName()))
I use the logging module; is it possible to add a log prefix that comes from a function call?
Another case is that the logger is not from the logging module. Is it possible to monkey-patch it, generating the level methods so that I can call them like this:
self.logger.my_debug("log")
self.logger.my_error("log")
I do not want to write many functions manually with duplicated code for different logging levels; it would be good if the code looked something like this:
for log_level in ("error", "warn", "debug", "info"):
    ...generate functions...
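A minimal sketch of that generation, assuming a stdlib-style logger underneath; the ThreadLogger wrapper and the my_* names are made up for illustration (note warning rather than the deprecated warn):
import logging
import threading

class ThreadLogger:
    """Wraps a logger and prefixes every message with the thread name."""
    def __init__(self, logger):
        self._logger = logger

def _make_level_method(level):
    def method(self, msg):
        getattr(self._logger, level)("%s %s" % (threading.current_thread().name, msg))
    return method

# generate my_error, my_warning, ... instead of writing each by hand
for log_level in ("error", "warning", "debug", "info"):
    setattr(ThreadLogger, "my_" + log_level, _make_level_method(log_level))

logging.basicConfig(level=logging.DEBUG)
log = ThreadLogger(logging.getLogger(__name__))
log.my_debug("log")  # logs something like "MainThread log"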

Take a look at the documentation: https://docs.python.org/2/library/logging.html#logrecord-attributes
Your code could look like this:
import logging

FORMAT = '%(asctime)-15s %(threadName)s %(funcName)-8s %(message)s'
logging.basicConfig(format=FORMAT)
logger = logging.getLogger('mylogger')
logger.warning("message here")
With %(threadName)s in the format string, a plain logger.debug("log") already carries the thread name, so the manual formatting from the question is no longer needed.
To stress a point: logging is normally configured globally. Especially things like "what format does the log have?" and "where does it get logged?" are decided there.
While I dislike the Python logging system because it's rather convoluted, I would say that understanding how it works is imperative if you want to log effectively.

Related

Python context management and verbosity

To facilitate testing code as I write it, I include verbosity in almost every module I write, as follows:
class MyObj(object):
    def __init__(self, arg0, kwarg0="default", verbosity=0):
        self.a0 = arg0
        self.k0 = kwarg0
        self.vb = verbosity

    def my_method(self):
        if self.vb > 2:
            print(f"{self} is doing a thing now...")
or
def my_func(arg0, arg1, verbosity=0):
    if verbosity > 2:
        print(f"doing something to {arg0} and {arg1}...")
    if verbosity > 5:  # added in a later edit
        import ipdb; ipdb.set_trace()  # to clarify the requirement
    do_something()
The executable scripts that import these will have collected (from the command line or elsewhere) a verbosity argument which gets passed all the way down the stack.
It's occurred to me to use a context manager so that I wouldn't have to initialize this variable at every level of the stack, something like having this in the driver script:
with args.verbosity as vb:
    my_func("x", "y")
Can I do that and then use vb in my_func without having to include it in the signature? Is there a better way to achieve this kind of control?
SUBSEQUENT EDIT: it's clear from the first answers (thank you for those) that I need to check out the logging module, but in some cases I want to stop execution in the middle to inspect things at a particular stack level (see the ipdb code I am adding with this edit). Would you still recommend that I use logging? (I'm assuming there's a way to get the logging level if I felt compelled to occasionally litter my code with if statements like that one.)
Finally, I'm still interested in whether the context management solution would be expected to work (even if it's not the optimal solution).
To facilitate testing code as I write it, I include verbosity in almost every module I write ...
Don't litter your code with if-statements and prints for this kind of purpose. It makes the code messy, repetitive and less readable.
The use-case is exactly what stdlib logging is for: you can unconditionally log events which describe what the program is doing, at various verbosity levels, and the messages will be displayed - or not - depending on the logging system's configuration.
import logging

log = logging.getLogger(__name__)

def my_func(arg0, arg1):
    log.info("doing something to %s and %s...", arg0, arg1)
    do_something()

if __name__ == "__main__":
    logging.basicConfig(level=logging.DEBUG, format="%(message)s")
    my_func(123, 456)
In the example above, the message will print because it is logged at level INFO, which is above the DEBUG threshold that logging was configured with. If you configure logging at level WARNING instead, it won't display.
Generally the user controls the logging configuration (levels, formats, streams, files) via a config file, environment variables, or command-line arguments. It is up to the end-user to choose the specific logging configuration that meets their needs; as the developer, you can just log events anytime, without worrying about where the log events end up going, or whether they go anywhere at all.
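For instance, a common command-line pattern (a sketch, not prescribed by the answer above) maps a repeatable -v flag to stdlib levels:
import argparse
import logging

parser = argparse.ArgumentParser()
parser.add_argument("-v", "--verbose", action="count", default=0)
args = parser.parse_args()

# 0 -> WARNING, 1 -> INFO, 2 or more -> DEBUG
level = {0: logging.WARNING, 1: logging.INFO}.get(args.verbose, logging.DEBUG)
logging.basicConfig(level=level, format="%(message)s")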
Another way to do this is levels of logging. For example, Python's built-in logging module has error, warning, info, and debug levels:
import logging

logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger()
logger.info('Normal message')
logger.debug('Message that only gets printed with high verbosity')
Simply configure the logging level to debug, warning, etc., and you're basically done! Plus you get lots of native logging goodies.

Logfile creation

I have some log files with a timestamp as their name, configured like this:
import logging
import logging.config

logging.config.fileConfig(fname=ini_file, disable_existing_loggers=False)
logger = logging.getLogger("myLoggerABC")
logger.setLevel(logging.DEBUG)
Is it possible to configure the logger so that the logfile is created on the first log write operation and not before?
Why?
When there is nothing to log, I end up with many empty files. That's ugly.
Use delay=True in the handler's initialisation (see the documentation), as long as you are using Python 2.6 or later (and if not, consider updating). This defers creating/opening the file until something is actually logged.
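A minimal sketch (the timestamped filename here is made up):
import logging

# delay=True defers opening/creating the file until the first record is emitted
handler = logging.FileHandler("2015-09-22_run.log", delay=True)
logger = logging.getLogger("myLoggerABC")
logger.setLevel(logging.DEBUG)
logger.addHandler(handler)

# no file exists on disk yet; this first call creates it:
logger.debug("first message")
If you configure via fileConfig, the handler's args tuple is passed positionally to FileHandler(filename, mode, encoding, delay), so args = ('my.log', 'a', None, True) should have the same effect.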

writing a log file from python program

I want to output some strings to a log file, and I want the log file to be continuously updated.
I have looked into the logging module of Python and found out that it is mostly about formatting and concurrent access.
Please let me know if I am missing something or any other way of doing it.
Usually I do the following:
import logging

# logging
LOG = "/tmp/ccd.log"
logging.basicConfig(filename=LOG, filemode="w", level=logging.DEBUG)

# console handler
console = logging.StreamHandler()
console.setLevel(logging.ERROR)
logging.getLogger("").addHandler(console)
The logging part initialises logging's basic configuration. After that I set up a console handler that prints out some logging information separately. Usually my console output is set to show only errors (logging.ERROR), while the detailed output goes to the LOG file.
Your log messages will now be written to the file. For instance, using:
logger = logging.getLogger(__name__)
logger.debug("hiho debug message")
or even
logging.debug("next line")
should work.
Doug Hellmann has a nice guide.
To add my 10 cents with regard to using logging: I only recently discovered the logging module and was put off at first, maybe because it initially looks like a lot of work, but it's really simple and incredibly handy.
This is the setup that I use. It is similar to Mkind's answer, but includes a timestamp.
import logging

# set up logging
log = "bot.log"
logging.basicConfig(filename=log, level=logging.DEBUG,
                    format='%(asctime)s %(message)s', datefmt='%d/%m/%Y %H:%M:%S')
logging.info('Log Entry Here.')
Which will produce something like:
22/09/2015 14:39:34 Log Entry Here.
You can log to a file with the Logging API.
Example: http://docs.python.org/2/howto/logging.html#logging-to-a-file
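For instance, along the lines of the linked example:
import logging

logging.basicConfig(filename='example.log', level=logging.DEBUG)
logging.debug('This message goes to example.log')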

Setting up advanced Python logging

I'd like to use logging for my modules, but I'm not sure how to design the following requirements:
normal logging levels (info, error, warning, debug) but also some additional more verbose debug levels
logging messages can have different types; some are meant for the developer, some are meant for the user; those types go to different outputs
errors should go to stderr
I also need to keep track of which module/function/code line wrote a debug message, so that I can activate or deactivate individual debug messages in a configuration
I need to keep track of whether errors occurred at all, to eventually execute a sys.exit() at the end of the program
all messages should go to stdout until the loggers are set up
I've read the logging documentation, but I'm not sure what's the most streamlined way to use the logging module with the requirements above (how to use the concepts of Logger, Handler, Filter, ...). Can you point out an idea for setting this up? (e.g. write a module with two loggers 'user' and 'developer'; derive from Logger; do getLogger(__name__); keep an error flag like this, ... etc.)
1) Adding more verbose debug levels.
Have you thought this through?
Take a look at what the docs say:
Defining your own levels is possible, but should not be necessary, as the existing levels have been chosen on the basis of practical experience. However, if you are convinced that you need custom levels, great care should be exercised when doing this, and it is possibly a very bad idea to define custom levels if you are developing a library. That's because if multiple library authors all define their own custom levels, there is a chance that the logging output from such multiple libraries used together will be difficult for the using developer to control and/or interpret, because a given numeric value might mean different things for different libraries.
Also take a look at When to use logging; there are two very good tables explaining when to use what.
Anyway, if you think you'll need those extra logging levels, take a look at: logging.addLevelName().
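For example (a sketch; the TRACE name and value are made up):
import logging

# a hypothetical extra-verbose level below DEBUG (which is 10)
TRACE = 5
logging.addLevelName(TRACE, "TRACE")

logging.basicConfig(level=TRACE)
logging.log(TRACE, "very verbose message")  # -> TRACE:root:very verbose message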
2) Some logging messages for the developer, and some for the user
Use different logger families with different handlers. At the base of each family, set Logger.propagate to False.
3) Errors should go to stderr
This already happens by default with StreamHandler:
class logging.StreamHandler(stream=None)
Returns a new instance of the StreamHandler class. If stream is specified, the instance will use it for logging output; otherwise, sys.stderr will be used.
4) Keep track of the source of a log message
Get Loggers with different names, and in your Formatter use format strings with %(name)s.
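A small sketch of what that looks like (the logger name here is made up):
import logging

logging.basicConfig(format="%(name)s %(funcName)s:%(lineno)d %(levelname)s %(message)s")
logging.getLogger("mypackage.mymodule").warning("from here")
# prints something like: mypackage.mymodule <module>:4 WARNING from here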
5) All messages should go to stdout until the loggers are set up
The setup of your logging system should be one of the very first things your program does, so I don't really see what this means. If you need to send messages to stdout, use print, as it should be and as already explained in When to use logging.
Last advice: carefully read the Logging Cookbook as it covers pretty well what you need.
From the comment: How would I design to have output to different sources and also filter my module?
I wouldn't filter in the first place: filters are hard to maintain, and if they are all in one place, that place will have to hold too much information. Every module should get and set its own Logger (with its own handlers or filters), using its parent's settings or not.
Very quick example:
import logging
import sys

# at the very beginning
root = logging.getLogger()
fallback_handler = logging.StreamHandler(stream=sys.stdout)
root.addHandler(fallback_handler)

# first.py
first_logger = logging.getLogger('first')
first_logger.propagate = False
# ... set 'first' logger as you wish

class Foo:
    def __init__(self):
        self.logger = logging.getLogger('first.Foo')

    def baz(self):
        self.logger.info("I'm in baz")

# second.py
second_logger = logging.getLogger('first.second')  # to use the same settings

# third.py
abstract_logger = logging.getLogger('abs')
abstract_logger.propagate = False
# ... set 'abs' logger

third_logger = logging.getLogger('abs.third')
# ... set 'abs.third' particular settings

# fourth.py
fourth_logger = logging.getLogger('abs.fourth')
# [...]

Redefining logging root logger

In my current project there are thousands of lines of code which look like this:
logging.info("bla-bla-bla")
I don't want to change all these lines, but I would like to change the logging behaviour. My idea is to change the root logger to another Experimental logger, which is configured by an ini file:
[loggers]
keys = root, Experimental

[handlers]
keys = logfile

[formatters]
keys = detailed

[formatter_detailed]
format = %(asctime)s:%(name)s:%(levelname)s %(module)s:%(lineno)d: %(message)s

[handler_logfile]
class = FileHandler
args = ('experimental.log', 'a')
formatter = detailed

[logger_root]
level = WARNING
handlers =

[logger_Experimental]
level = DEBUG
qualname = Experimental
handlers = logfile
propagate = 0
Now setting the new root logger is done by this piece of code:
import logging
import logging.config

logging.config.fileConfig(path_to_logger_config)
logging.root = logging.getLogger('Experimental')
Is redefining the root logger safe? Maybe there is a more convenient way?
I've tried to use Google and looked through Stack Overflow questions, but I didn't find the answer.
You're advised not to redefine the root logger in the way you describe. In general you should only use the root logger directly for small scripts - for larger applications, best practice is to use
logger = logging.getLogger(__name__)
in each module where you use logging, and then make calls to logger.info() etc.
If all you want to do is to log to a file, why not just add a file handler to the root logger? You can do so using e.g.
if __name__ == '__main__':
    logging.basicConfig(filename='experimental.log', filemode='w')
    main()  # or whatever your main entry point is called
or via a configuration file.
Update: When I say "you're advised", I mean by me, here in this answer ;-) While you may not run into any problems in your scenario, it's not good practice to overwrite a module attribute which hasn't been designed to be overwritten. For example, the root logger is an instance of a different class (which is not part of the public API), and there are other references to it in the logging machinery which would still point to the old value. Either of these facts could lead to hard-to-debug problems. Since the logging package allows a number of ways of achieving what you seem to want (seemingly, logging to a file rather than the console), then you should use those mechanisms that have been provided.
logger = logging.getLogger()
Leaving the name empty returns the root logger.
logger = logging.getLogger('name')
Gives you another logger.
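A quick sketch of the relationship between the two:
import logging

root = logging.getLogger()          # no name -> the root logger
child = logging.getLogger('name')   # a named logger
assert child.parent is root         # named loggers propagate to root by default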
