I was wondering how to implement a global logger that could be used everywhere, with my own settings:
I currently have a custom logger class:
class customLogger(logging.Logger):
...
The class is in a separate file with some formatters and other stuff.
The logger works perfectly on its own.
I import this module in my main python file and create an object like this:
self.log = logModule.customLogger(arguments)
But obviously, I cannot access this object from other parts of my code.
Am I using the wrong approach? Is there a better way to do this?
Use logging.getLogger(name) to create a named global logger.
main.py
import log
logger = log.setup_custom_logger('root')
logger.debug('main message')
import submodule
log.py
import logging
def setup_custom_logger(name):
formatter = logging.Formatter(fmt='%(asctime)s - %(levelname)s - %(module)s - %(message)s')
handler = logging.StreamHandler()
handler.setFormatter(formatter)
logger = logging.getLogger(name)
logger.setLevel(logging.DEBUG)
logger.addHandler(handler)
return logger
submodule.py
import logging
logger = logging.getLogger('root')
logger.debug('submodule message')
Output
2011-10-01 20:08:40,049 - DEBUG - main - main message
2011-10-01 20:08:40,050 - DEBUG - submodule - submodule message
Since I haven't found a satisfactory answer, I would like to elaborate a little in order to give some insight into the workings and intent of the logging library that comes with Python's standard library.
In contrast to the approach of the OP (original poster), the library clearly separates the interface to the logger from the configuration of the logger itself.
The configuration of handlers is the prerogative of the application developer who uses your library.
That means you should not create a custom logger class and configure the logger inside that class by adding any configuration whatsoever.
The logging library introduces four components: loggers, handlers, filters, and formatters.
Loggers expose the interface that application code directly uses.
Handlers send the log records (created by loggers) to the appropriate destination.
Filters provide a finer grained facility for determining which log records to output.
Formatters specify the layout of log records in the final output.
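To make the four roles concrete, here is a minimal sketch of my own (not from the library documentation; the "demo" name and the filter are illustrative only) that wires all four components together:
import logging

logger = logging.getLogger("demo")              # logger: the interface application code uses
handler = logging.StreamHandler()               # handler: sends records to stderr
formatter = logging.Formatter("%(name)s - %(levelname)s - %(message)s")
handler.setFormatter(formatter)                 # formatter: layout of the final output

class ErrorsOnly(logging.Filter):
    def filter(self, record):                   # filter: fine-grained record selection
        return record.levelno >= logging.ERROR

handler.addFilter(ErrorsOnly())
logger.addHandler(handler)
logger.setLevel(logging.DEBUG)
logger.error("this record passes the filter")
logger.info("this record is dropped by the filter")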
A common project structure looks like this:
Project/
|-- .../
| |-- ...
|
|-- project/
| |-- package/
| | |-- __init__.py
| | |-- module.py
| |
| |-- __init__.py
| |-- project.py
|
|-- ...
|-- ...
Inside your code (like in module.py) you refer to the logger instance of your module to log the events at their specific levels.
A good convention to use when naming loggers is to use a module-level logger, in each module which uses logging, named as follows:
logger = logging.getLogger(__name__)
The special variable __name__ refers to your module's name and looks something like project.package.module depending on your application's code structure.
module.py (and any other module) could essentially look like this:
import logging
...
log = logging.getLogger(__name__)
class ModuleClass:
def do_something(self):
log.debug('do_something() has been called!')
The logger in each module will propagate any event to the parent logger, which in turn passes the information to its attached handlers! Analogously to the Python package/module structure, the parent logger is determined by the namespace using "dotted module names". That's why it makes sense to initialize the logger with the special __name__ variable (in the example above, __name__ matches the string "project.package.module").
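A quick sketch of this propagation (names are illustrative; only the parent logger gets a handler, yet the child's records reach it):
import logging

parent = logging.getLogger("project")
parent.addHandler(logging.StreamHandler())
parent.setLevel(logging.DEBUG)

child = logging.getLogger("project.package.module")
child.debug("emitted by the child, printed by the parent's handler")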
There are two options to configure the logger globally:
Instantiate a logger in project.py with the name __package__, which equals "project" in this example and is therefore the parent logger of the loggers of all submodules. It is only necessary to add an appropriate handler and formatter to this logger (a sketch follows after this list).
Set up a logger with a handler and formatter in the executing script (like main.py) with the name of the topmost package.
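For the first option, a hedged sketch (assuming the project structure shown above) could look like this:
# project/project.py - sketch of option 1
import logging

handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter('%(asctime)s [%(levelname)s] %(name)s: %(message)s'))

logger = logging.getLogger(__package__)  # "project" in this example
logger.setLevel(logging.DEBUG)
logger.addHandler(handler)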
When developing a library which uses logging, you should take care to document how the library uses logging - for example, the names of loggers used.
The executing script, like main.py for example, might finally look something like this:
import logging
from project import App
def setup_logger():
# create logger
logger = logging.getLogger('project')
logger.setLevel(logging.DEBUG)
# create console handler and set level to debug
ch = logging.StreamHandler()
    ch.setLevel(logging.DEBUG)
# create formatter
formatter = logging.Formatter('%(asctime)s [%(levelname)s] %(name)s: %(message)s')
# add formatter to ch
ch.setFormatter(formatter)
# add ch to logger
logger.addHandler(ch)
if __name__ == '__main__' and __package__ is None:
setup_logger()
app = App()
app.do_some_funny_stuff()
The method call logger.setLevel(...) specifies the lowest-severity log message a logger will handle, but not necessarily output! It simply means the message is passed to the handler as long as the message's severity level is higher than (or equal to) the one that is set. The handler is then responsible for actually handling the log message (by printing or storing it, for example).
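The difference is easy to see in a few lines (a sketch of mine; the "levels-demo" name is arbitrary):
import logging

logger = logging.getLogger("levels-demo")
logger.setLevel(logging.DEBUG)        # the logger passes on everything from DEBUG up

handler = logging.StreamHandler()
handler.setLevel(logging.WARNING)     # but this handler only outputs WARNING and above
logger.addHandler(handler)

logger.debug("reaches the handler, but the handler drops it")
logger.warning("reaches the handler and is printed")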
Hence the logging library offers a structured and modular approach which just needs to be exploited according to one's needs.
Logging documentation
Create an instance of customLogger in your log module and use it as a singleton - just use the imported instance, rather than the class.
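A minimal sketch of that idea, reusing the OP's logModule and customLogger names (the "app" logger name is my assumption):
# logModule.py
import logging

class customLogger(logging.Logger):
    ...  # formatters and other stuff, as in the original class

log = customLogger("app")  # the single shared instance

# any other module:
# from logModule import log
# log.info("every importer gets the same pre-configured instance")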
The Python logging module is already good enough as a global logger; you might simply be looking for this:
main.py
import logging
logging.basicConfig(level=logging.DEBUG, format='[%(asctime)s] %(levelname)s [%(name)s:%(lineno)s] %(message)s')
Put the code above into your executing script; then you can use a logger with the same configuration anywhere in your project:
module.py
import logging
logger = logging.getLogger(__name__)
logger.info('hello world!')
For more complicated configurations you may use a config file logging.conf with logging.config:
import logging.config
logging.config.fileConfig("logging.conf")
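A minimal logging.conf for that call might look like this (a sketch; adjust handlers, levels, and the format to your needs):
[loggers]
keys=root

[handlers]
keys=console

[formatters]
keys=simple

[logger_root]
level=DEBUG
handlers=console

[handler_console]
class=StreamHandler
level=DEBUG
formatter=simple
args=(sys.stdout,)

[formatter_simple]
format=[%(asctime)s] %(levelname)s [%(name)s:%(lineno)s] %(message)s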
You can just pass getLogger() a string with a common sub-string before the first period. The parts of the string separated by periods (".") can be used for different classes / modules / files / etc. Like so (specifically the logger = logging.getLogger(loggerName) part):
import os
import logging
from logging.handlers import RotatingFileHandler

# LOGDIR_DEFAULT, FORMAT and LogException are defined elsewhere in the original program
def getLogger(name, logdir=LOGDIR_DEFAULT, level=logging.DEBUG, logformat=FORMAT):
base = os.path.basename(__file__)
loggerName = "%s.%s" % (base, name)
logFileName = os.path.join(logdir, "%s.log" % loggerName)
logger = logging.getLogger(loggerName)
logger.setLevel(level)
i = 0
while os.path.exists(logFileName) and not os.access(logFileName, os.R_OK | os.W_OK):
i += 1
logFileName = "%s.%s.log" % (logFileName.replace(".log", ""), str(i).zfill((len(str(i)) + 1)))
try:
#fh = logging.FileHandler(logFileName)
fh = RotatingFileHandler(filename=logFileName, mode="a", maxBytes=1310720, backupCount=50)
    except IOError as exc:
        errOut = "Unable to create/open log file \"%s\"." % logFileName
        if exc.errno == 13:  # Permission denied exception
            errOut = "ERROR ** Permission Denied ** - %s" % errOut
        elif exc.errno == 2:  # No such directory
            errOut = "ERROR ** No such directory \"%s\"** - %s" % (os.path.split(logFileName)[0], errOut)
        elif exc.errno == 24:  # Too many open files
            errOut = "ERROR ** Too many open files ** - Check open file descriptors in /proc/<PID>/fd/ (PID: %s)" % os.getpid()
else:
errOut = "Unhandled Exception ** %s ** - %s" % (str(exc), errOut)
raise LogException(errOut)
else:
formatter = logging.Formatter(logformat)
fh.setLevel(level)
fh.setFormatter(formatter)
logger.addHandler(fh)
return logger
class MainThread:
    def __init__(self, cfgdefaults, configdir, pidfile, logdir, test=False):
        self.test = test
        self.logdir = logdir
        self.threads = []
        # self.config is populated elsewhere (not shown in this excerpt)
        logLevel = logging.DEBUG
        logPrefix = "MainThread_TEST" if self.test else "MainThread"
try:
self.logger = getLogger(logPrefix, self.logdir, logLevel, FORMAT)
        except LogException as exc:
sys.stderr.write("%s\n" % exc)
sys.stderr.flush()
os._exit(0)
else:
self.logger.debug("-------------------- MainThread created. Starting __init__() --------------------")
def run(self):
self.logger.debug("Initializing ReportThreads..")
for (group, cfg) in self.config.items():
self.logger.debug(" ------------------------------ GROUP '%s' CONFIG ------------------------------ " % group)
for k2, v2 in cfg.items():
self.logger.debug("%s <==> %s: %s" % (group, k2, v2))
try:
rt = ReportThread(self, group, cfg, self.logdir, self.test)
            except LogException as exc:
sys.stderr.write("%s\n" % exc)
sys.stderr.flush()
self.logger.exception("Exception when creating ReportThread (%s)" % group)
logging.shutdown()
os._exit(1)
else:
self.threads.append(rt)
self.logger.debug("Threads initialized.. \"%s\"" % ", ".join([t.name for t in self.threads]))
for t in self.threads:
t.Start()
if not self.test:
self.loop()
class ReportThread:
    def __init__(self, mainThread, name, config, logdir, test):
        self.mainThread = mainThread
        self.name = name
        self.config = config
        self.test = test
        logLevel = logging.DEBUG
        self.logger = getLogger("MainThread%s.ReportThread_%s" % ("_TEST" if self.test else "", self.name), logdir, logLevel, FORMAT)
self.logger.info("init database...")
self.initDB()
# etc....
if __name__ == "__main__":
# .....
MainThread(cfgdefaults=options.cfgdefaults, configdir=options.configdir, pidfile=options.pidfile, logdir=options.logdir, test=options.test)
Related
Is it possible for a library of code to obtain a reference to a logger object created by client code that uses a unique name?
The python advanced logging tutorial says the following:
A good convention to use when naming loggers is to use a module-level logger, in each module which uses logging, named as follows:
logger = logging.getLogger(__name__)
This means that logger names track the package/module hierarchy, and it's intuitively obvious where events are logged just from the logger name.
Every module in my library does:
LOG = logging.getLogger(__name__)
This works fine when client code does something like:
logger = logging.getLogger()
But breaks (I get no log output to the registered handlers from the logging object in main) when client code does something like:
logger = logging.getLogger('some.unique.path')
Because I am packaging my code as a library to be used by many different clients, I want the most extensible logging. That is, I want my module-level logging to reference the same logger object as main whenever possible, whether or not the client code uses a named logger.
Here is an example program to reproduce on your end. Imagine test.py is my library of code that I want to always reference any logger created in main.py.
Example Output
% python3 main.py
in main
%
Desired Output
% python3 main.py
in main
hi
%
main.py
import logging
from test import somefunc
LOG = logging.getLogger('some.unique.path')
LOG.setLevel(logging.DEBUG)
ch = logging.StreamHandler()
LOG.addHandler(ch)
def main():
LOG.info('in main')
somefunc()
if __name__ == '__main__':
main()
test.py
import logging
LOG = logging.getLogger(__name__)
def somefunc():
LOG.info('hi')
This is the approach I would take.
Create a separate logging utility; I have attached the code for it below.
Import this logger utility (or the class or function it provides) wherever it is needed.
Below is your updated code, using that logger utility; I tested it locally on my machine.
Code for main.py
from logger import get_logger
from test import somefunc
LOG = get_logger()
def main():
LOG.info("in main")
somefunc()
if __name__ == "__main__":
main()
Code for test.py
from logger import get_logger
LOG = get_logger()
def somefunc():
LOG.info("hi")
Code for logger.py - Attaching the code here too
import logging
from logging.handlers import RotatingFileHandler
bytes_type = bytes
unicode_type = str
basestring_type = str
DEFAULT_LOGGER = "default"
INTERNAL_LOGGER_ATTR = "internal"
CUSTOM_LOGLEVEL = "customloglevel"
logger = None
_loglevel = logging.DEBUG
_logfile = None
_formatter = None
def get_logger(
name=None,
logfile=None,
level=logging.DEBUG,
maxBytes=0,
backupCount=0,
fileLoglevel=None,
):
_logger = logging.getLogger(name or __name__)
_logger.propagate = False
_logger.setLevel(level)
# Reconfigure existing handlers
has_stream_handler = False
for handler in list(_logger.handlers):
if isinstance(handler, logging.StreamHandler):
has_stream_handler = True
if isinstance(handler, logging.FileHandler) and hasattr(
handler, INTERNAL_LOGGER_ATTR
):
# Internal FileHandler needs to be removed and re-setup to be able
# to set a new logfile.
_logger.removeHandler(handler)
continue
# reconfigure handler
handler.setLevel(level)
if not has_stream_handler:
stream_handler = logging.StreamHandler()
setattr(stream_handler, INTERNAL_LOGGER_ATTR, True)
stream_handler.setLevel(level)
_logger.addHandler(stream_handler)
if logfile:
rotating_filehandler = RotatingFileHandler(
filename=logfile, maxBytes=maxBytes, backupCount=backupCount
)
setattr(rotating_filehandler, INTERNAL_LOGGER_ATTR, True)
rotating_filehandler.setLevel(fileLoglevel or level)
_logger.addHandler(rotating_filehandler)
return _logger
def setup_default_logger(
    logfile=None, level=logging.DEBUG, formatter=None, maxBytes=0, backupCount=0
):
    global logger
    global _formatter
    _formatter = formatter  # get_logger() takes no formatter argument, so store it here
    logger = get_logger(
        name=DEFAULT_LOGGER, logfile=logfile, level=level,
        maxBytes=maxBytes, backupCount=backupCount
    )
    return logger
return logger
def reset_default_logger():
"""
Resets the internal default logger to the initial configuration
"""
global logger
global _loglevel
global _logfile
global _formatter
_loglevel = logging.DEBUG
_logfile = None
_formatter = None
logger = get_logger(name=DEFAULT_LOGGER, logfile=_logfile, level=_loglevel)
# Initially setup the default logger
reset_default_logger()
def loglevel(level=logging.DEBUG, update_custom_handlers=False):
"""
    Set the minimum loglevel for the default logger.
    Reconfigures only the internal handlers of the default logger (e.g. stream and logfile).
    Update the loglevel for custom handlers by using `update_custom_handlers=True`.
    :param int level: Minimum logging-level (default: `logging.DEBUG`).
    :param bool update_custom_handlers: also apply the new level to custom (non-internal) handlers attached to this logger
"""
logger.setLevel(level)
# Reconfigure existing internal handlers
for handler in list(logger.handlers):
if hasattr(handler, INTERNAL_LOGGER_ATTR) or update_custom_handlers:
# Don't update the loglevel if this handler uses a custom one
if hasattr(handler, CUSTOM_LOGLEVEL):
continue
# Update the loglevel for all default handlers
handler.setLevel(level)
global _loglevel
_loglevel = level
def formatter(formatter, update_custom_handlers=False):
"""
    Set the formatter for all handlers of the default logger.
    :param Formatter formatter: the formatter to apply (default uses the internal LogFormatter)
    :param bool update_custom_handlers: also apply the formatter to custom (non-internal) handlers attached to this logger
"""
for handler in list(logger.handlers):
if hasattr(handler, INTERNAL_LOGGER_ATTR) or update_custom_handlers:
handler.setFormatter(formatter)
global _formatter
_formatter = formatter
def logfile(
filename,
formatter=None,
mode="a",
maxBytes=0,
backupCount=0,
encoding=None,
loglevel=None,
):
"""
    Function to handle the rotating file handler.
    :param filename: file to which logs are written
    :param mode: file mode
    :param maxBytes: roll over at this pre-determined size; if zero, rollover never occurs
    :param backupCount: if non-zero, keep up to this many old logfiles by appending extensions
    :param encoding: if not None, open the file with that encoding
    :param loglevel: loglevel for this handler (defaults to the module-wide level)
"""
# Step 1: If an internal RotatingFileHandler already exists, remove it
for handler in list(logger.handlers):
if isinstance(handler, RotatingFileHandler) and hasattr(
handler, INTERNAL_LOGGER_ATTR
):
logger.removeHandler(handler)
# Step 2: If wanted, add the RotatingFileHandler now
if filename:
rotating_filehandler = RotatingFileHandler(
filename,
mode=mode,
maxBytes=maxBytes,
backupCount=backupCount,
encoding=encoding,
)
# Set internal attributes on this handler
setattr(rotating_filehandler, INTERNAL_LOGGER_ATTR, True)
if loglevel:
setattr(rotating_filehandler, CUSTOM_LOGLEVEL, True)
# Configure the handler and add it to the logger
rotating_filehandler.setLevel(loglevel or _loglevel)
logger.addHandler(rotating_filehandler)
Output:
in main
hi
Note:
Do dive deep into the logger utility above to understand all the internal details.
Use logging.basicConfig instead of manually trying to configure a logger. Loggers inherit their configuration from their parent.
import logging
from test import somefunc
LOG = logging.getLogger('some.unique.path')
def main():
LOG.info('in main')
somefunc()
if __name__ == '__main__':
logging.basicConfig(level=logging.DEBUG)
main()
I have a logging function with hardcoded logfile name (LOG_FILE):
setup_logger.py
import logging
import sys
FORMATTER = logging.Formatter("%(levelname)s - %(asctime)s - %(name)s - %(message)s")
LOG_FILE = "my_app.log"
def get_console_handler():
console_handler = logging.StreamHandler(sys.stdout)
console_handler.setFormatter(FORMATTER)
return console_handler
def get_file_handler():
file_handler = logging.FileHandler(LOG_FILE)
file_handler.setFormatter(FORMATTER)
return file_handler
def get_logger(logger_name):
logger = logging.getLogger(logger_name)
logger.setLevel(logging.DEBUG) # better to have too much log than not enough
logger.addHandler(get_console_handler())
logger.addHandler(get_file_handler())
# with this pattern, it's rarely necessary to propagate the error up to parent
logger.propagate = False
return logger
I use this in various modules this way:
main.py
from _Core import setup_logger as log
def main(incoming_feed_id: int, type: str) -> None:
logger = log.get_logger(__name__)
...rest of my code
database.py
from _Core import setup_logger as log
logger = log.get_logger(__name__)
class Database:
...rest of my code
etl.py
import _Core.database as db
from _Core import setup_logger as log
logger = log.get_logger(__name__)
class ETL:
...rest of my code
What I want to achieve is to always change the logfile's path and name on each run based on arguments passed to the main() function in main.py.
Simplified example:
If main() receives the following arguments: incoming_feed_id = 1, type = simple_load, the logfile's name should be 1simple_load.log.
I am not sure what the best practice is for this. What I came up with is probably the worst thing to do: add a log_file parameter to the get_logger() function in setup_logger.py, so I can pass a filename from main() in main.py. But in that case I would need to pass the parameter from main to the other modules as well, which I don't think I should do, since for example the database class is not even used in main.py.
I don't know enough about your application to be sure this will work for you, but you can configure the root logger in main() by calling get_logger('', filename_based_on_cmdline_args); anything logged to the other loggers will then be passed to the root logger's handlers for processing, provided the configured logger levels allow it. The way you're doing it now opens multiple handlers pointing to the same file, which seems sub-optimal. The other modules can then just use logging.getLogger(__name__) rather than log.get_logger(__name__).
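A sketch of that suggestion (the file name scheme follows the question's example; the formatter is the one from setup_logger.py):
# main.py
import logging

def main(incoming_feed_id: int, type: str) -> None:
    # Configure the ROOT logger once, based on the arguments
    handler = logging.FileHandler(f"{incoming_feed_id}{type}.log")
    handler.setFormatter(logging.Formatter(
        "%(levelname)s - %(asctime)s - %(name)s - %(message)s"))
    root = logging.getLogger()  # no argument (or '') returns the root logger
    root.setLevel(logging.DEBUG)
    root.addHandler(handler)
    # ... rest of the code; database.py and etl.py simply do
    # logger = logging.getLogger(__name__) and log as usual
    # (records propagate up to the root logger's file handler).
Note that this relies on propagation, so the modules' loggers must not set propagate = False the way the current get_logger() does.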
I have the following python package structure.
python_logging
python_logging
__init__.py
first_class.py
second_class.py
run.py
Here is the code in __init__.py
__init__.py
import logging
import logging.config
# Create the Logger
loggers = logging.getLogger(__name__)
loggers.setLevel(logging.DEBUG)
# Create the Handler for logging data to a file
logger_handler = logging.FileHandler(filename=r'C:\Python\Log\stest.txt')
logger_handler.setLevel(logging.DEBUG)
# Create a Formatter for formatting the log messages
logger_formatter = logging.Formatter('%(name)s - %(levelname)s - %(message)s')
# Add the Formatter to the Handler
logger_handler.setFormatter(logger_formatter)
# Add the Handler to the Logger
loggers.addHandler(logger_handler)
loggers.info('Completed configuring logger()!')
Here is code for first_class.py
import logging
class FirstClass(object):
def __init__(self):
self.current_number = 0
self.logger = logging.getLogger(__name__)
def increment_number(self):
self.current_number += 1
self.logger.warning('Incrementing number!')
self.logger.info('Still incrementing number!!')
Here is code for second_class.py
import logging

class SecondClass(object):
def __init__(self):
self.enabled = False
self.logger = logging.getLogger(__name__)
def enable_system(self):
self.enabled = True
self.logger.warning('Enabling system!')
self.logger.info('Still enabling system!!')
Here is code for run.py
from LogModule.first_class import FirstClass
from LogModule.second_class import SecondClass
number = FirstClass()
number.increment_number()
system = SecondClass()
system.enable_system()
This is the output in the log file
LogModule - INFO - Completed configuring logger()!
LogModule.first_class - WARNING - Incrementing number!
LogModule.first_class - INFO - Still incrementing number!!
LogModule.second_class - WARNING - Enabling system!
LogModule.second_class - INFO - Still enabling system!!
Question: how did number.increment_number() and system.enable_system() write to the log file when the file handler was initialized in __init__.py? Also, both classes have different getLogger() calls. Can anyone explain? It would be helpful.
Every logger has a parent; in Python you can think of all loggers as forming a single "tree". If you add handlers to one logger, its child loggers will also share those handlers. This means you don't need to create handlers for each logger; setting handlers on every logger would be repetitive and boring.
Back to your sample. Your package structure is:
python_logging
python_logging
__init__.py
first_class.py
second_class.py
run.py
in __init__.py file,
# Create the Logger
loggers = logging.getLogger(__name__)
the logger name is <python_logging>.
in first_class.py
self.logger = logging.getLogger(__name__)
the logger name is <python_logging>.<first_class>.
So the logger in first_class.py is a child of the logger in __init__.py.
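You can verify the parent relationship directly (a quick sketch):
import logging

child = logging.getLogger("python_logging.first_class")
parent = logging.getLogger("python_logging")
print(child.parent is parent)  # True - records propagate up to the
                               # file handler added in __init__.py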
I'm using pytest-3.7.1 which has good support for logging, including live logging to stdout during tests. I'm using --log-cli-level=DEBUG to dump all debug-level logging to the console as it happens.
The problem I have is that --log-cli-level=DEBUG turns on debug logging for all modules in my test program, including third-party dependencies, and it floods the log with a lot of uninteresting output.
Python's logging module has the ability to set logging levels per module. This enables selective logging - for example, in a normal Python program I can turn on debugging for just one or two of my own modules, and restrict the log output to just those, or set different log levels for each module. This enables turning off debug-level logging for noisy libraries.
So what I'd like to do is apply the same concept to pytest's logging - i.e. specify a logging level, from the command line, for specific non-root loggers. For example, if I have a module called test_foo.py then I'm looking for a way to set the log level for this module from the command line.
I'm prepared to roll my own if necessary (I know how to add custom arguments to pytest), but before I do that I just want to be sure there isn't already a solution. Is anyone aware of one?
I had the same problem, and found a solution in another answer:
Instead of --log-cli-level=DEBUG, use --log-level DEBUG. It disables all third-party module logs (in my case, I had plenty of matplotlib logs), but still outputs your app logs for each test that fails.
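If you need finer, per-module control rather than that blanket switch, one possible approach (a sketch, not a built-in pytest feature; "matplotlib" and "urllib3" are just examples of chatty dependencies) is to lower specific loggers in conftest.py:
# conftest.py
import logging

def pytest_configure(config):
    logging.getLogger("matplotlib").setLevel(logging.WARNING)
    logging.getLogger("urllib3").setLevel(logging.INFO)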
I got this working by writing a factory class that sets the level of the root logger to logging.INFO and applies the logging level from the command line to all loggers obtained from the factory. If the logging level from the command line is higher than the minimum global log level specified in the class (the MINIMUM_GLOBAL_LOG_LEVEL constant), the global log level isn't changed.
import logging
MODULE_FIELD_WIDTH_IN_CHARS = '20'
LINE_NO_FIELD_WIDTH_IN_CHARS = '3'
LEVEL_NAME_FIELD_WIDTH_IN_CHARS = '8'
MINIMUM_GLOBAL_LOG_LEVEL = logging.INFO
class EasyLogger():
root_logger = logging.getLogger()
specified_log_level = root_logger.level
format_string = '{asctime} '
format_string += '{module:>' + MODULE_FIELD_WIDTH_IN_CHARS + 's}'
format_string += '[{lineno:' + LINE_NO_FIELD_WIDTH_IN_CHARS + 'd}]'
format_string += '[{levelname:^' + LEVEL_NAME_FIELD_WIDTH_IN_CHARS + 's}]: '
format_string += '{message}'
level_change_warning_sent = False
    @classmethod
def get_logger(cls, logger_name):
if not EasyLogger._logger_has_format(cls.root_logger, cls.format_string):
EasyLogger._setup_root_logger()
logger = logging.getLogger(logger_name)
logger.setLevel(cls.specified_log_level)
return logger
    @classmethod
def _setup_root_logger(cls):
formatter = logging.Formatter(fmt=cls.format_string, style='{')
if not cls.root_logger.hasHandlers():
handler = logging.StreamHandler()
cls.root_logger.addHandler(handler)
for handler in cls.root_logger.handlers:
handler.setFormatter(formatter)
cls.root_logger.setLevel(MINIMUM_GLOBAL_LOG_LEVEL)
if (cls.specified_log_level < MINIMUM_GLOBAL_LOG_LEVEL and
cls.level_change_warning_sent is False):
cls.root_logger.log(
max(cls.specified_log_level, logging.WARNING),
"Setting log level for %s class to %s, all others to %s" % (
__name__,
cls.specified_log_level,
MINIMUM_GLOBAL_LOG_LEVEL
)
)
cls.level_change_warning_sent = True
    @staticmethod
    def _logger_has_format(logger, format_string):
        for handler in logger.handlers:
            # compare the handler's formatter string, not the format() method
            formatter = handler.formatter
            return formatter is not None and formatter._fmt == format_string
        return False
The above class is then used to send logs normally, as you would with a logging.Logger object, as follows:
from EasyLogger import EasyLogger
class MySuperAwesomeClass():
def __init__(self):
self.logger = EasyLogger.get_logger(__name__)
def foo(self):
self.logger.debug("debug message")
self.logger.info("info message")
self.logger.warning("warning message")
self.logger.critical("critical message")
self.logger.error("error message")
Enable/Disable/Modify the log level of any module in Python:
logging.getLogger("module_name").setLevel(logging.log_level)
I created a module named log.py where a function defines how the log will be set up. Here is the basic code:
import logging
import time
def set_up_log():
"""
Create a logging file.
"""
#
# Create the parent logger.
#
logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)
#
# Create a file as handler.
#
file_handler = logging.FileHandler('report\\activity.log')
file_handler.setLevel(logging.INFO)
    formatter = logging.Formatter('%(asctime)s - %(filename)s - %(name)s - %(levelname)4s - %(message)s')
file_handler.setFormatter(formatter)
logger.addHandler(file_handler)
#
# Start recording.
#
logger.info('______ STARTS RECORDING _______')
if __name__=='__main__':
set_up_log()
A second module named read_file.py is using this log.py to record potential error.
import logging
import log
log.set_up_log()
logger = logging.getLogger(__name__)
def read_bb_file(input_file):
"""
Input_file must be the path.
Open the source_name and read the content. Return the result.
"""
content = list()
logger.info('noi')
try:
file = open(input_file, 'r')
    except IOError as e:
logger.error(e)
else:
for line in file:
            stripped = line.rstrip('\n\r')  # avoid shadowing the built-in str
            content.append(stripped)
file.close()
return content
if __name__ == "__main__":
logger.info("begin execution")
c = read_bb_file('textatraiter.out')
logger.info("end execution")
In the command prompt, launching read_file.py, I get this error:
No handlers could be found for logger "__main__"
My result in the file is the following
2014-05-12 13:32:58,690 - log.py - log - INFO - ______ STARTS RECORDING _______
I have read lots of topics here and in the Python docs, but it seems I did not understand them properly, since I get this error.
I should add that I would like to keep the log setup apart in a function, and not define it explicitly in my main method.
You have 2 distinct loggers and you're only configuring one.
The first is the one you make in log.py and set up correctly. Its name however will be log, because you have imported this module from read_file.py.
The second logger, the one you're hoping is the same as the first, is the one you assign to the variable logger in read_file.py. Its name will be __main__ because you're calling this module from the command line. You're not configuring this logger.
What you could do is to add a parameter to set_up_log to pass the name of the logger in, e.g.
def set_up_log(logname):
logger = logging.getLogger(logname)
That way, you will set the handlers and formatters for the correct logging instance.
Organizing your logs in a hierarchy is the way logging was intended to be used by Vinay Sajip, the original author of the module. So your modules would only log to a logging instance with the fully qualified name, as given by __name__. Then your application code could set up the loggers, which is what you're trying to accomplish with your set_up_log function. You just need to remember to pass it the relevant name, that's all. I found this reference very useful at the time.
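With that parameter in place, read_file.py would configure its own logger by name, for example (a sketch of the fix described above):
# read_file.py
import logging
import log

log.set_up_log(__name__)  # "__main__" when run from the command line
logger = logging.getLogger(__name__)
logger.info("begin execution")  # this logger now has a handler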