How to get a logger with an absolute path?

I am trying to use Python's logging module:
import logging
log = logging.getLogger(__name__)
logInstall = logging.getLogger('/var/log/install.log')
log.info("Everything works great")
logInstall.info("Why is not printed to the log file?")
In the log file for log, everything works and it prints: "Everything works great".
But in /var/log/install.log, I don't see anything.
What am I doing wrong?

Where have you seen that the argument to logging.getLogger() is a file path? It's only a name for your logger object:
logging.getLogger([name])
Return a logger with the specified name or, if no name is specified,
return a logger which is the root logger of the hierarchy.
If specified, the name is typically a dot-separated hierarchical name
like “a”, “a.b” or “a.b.c.d”. Choice of these names is entirely up to
the developer who is using logging.
https://docs.python.org/2/library/logging.html#logging.getLogger
Where / how this logger logs to (file, stderr, syslog, mail, network call, whatever) depends on how you configure the logging for your own app - the point here is that you decouple the logger's use (in the libs) from the logger config (in the app).
Note that the usual practice is to use __name__ so you get a logger hierarchy that mimics the packages / modules hierarchy.
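If what you actually want is a logger whose records end up in /var/log/install.log, the file comes from a handler, not from the logger's name. A minimal sketch, assuming the process can write to that path:
import logging

# The name is just a label; the destination is set by the handler.
logInstall = logging.getLogger('install')
logInstall.setLevel(logging.INFO)

handler = logging.FileHandler('/var/log/install.log')
handler.setFormatter(logging.Formatter('%(asctime)s %(levelname)s %(message)s'))
logInstall.addHandler(handler)

logInstall.info("This now ends up in /var/log/install.log")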

Related

logging - merging multiple configuration files

I am working on a project where we have a core application that loads multiple plugins.
Each plugin has its own configuration file, and the core application has one as well.
We are using the excellent logging module from python's standard library.
The logging module includes the ability to load the logging configuration from an .ini file.
However, if you load another configuration file, the other files are discarded and only the new configuration is used.
What I would like to do is split my logging configuration into multiple files, so that the application can load its own configuration file and then load each plugin's, merging its logging configuration into the main one.
Note: fileConfig has an option called disable_existing_loggers that can be set to False. However, this only keeps existing loggers alive, but it still clears the internal map of handlers (which means that a plugin's configuration cannot use a handler defined in the application's config file).
I could merge the files manually to produce my own config, but I'd rather avoid that.
Thanks.
To make it clearer, I'd like to do something like this:
# application.ini
[loggers]
keys=root,app
[handlers]
keys=rootHandler,appHandler
[formatters]
keys=myformatter
[logger_root]
# stuff
[handler_rootHandler]
# stuff
[formatter_myformatter]
# stuff
...
# plugin.ini
[loggers]
keys=pluginLogger # no root logger
[handlers]
keys=pluginHandler # no root handler
# no formatters section
[logger_pluginLogger]
# stuff
formatter=myformatter # using the formatter from application.ini
I usually do this with logging.config.dictConfig and the PyYAML package. The package allows you to load the content of a configuration file as a dict object.
The only additional thing needed is a small helper class to handle configuration overwrites/add-ons:
import yaml

class Configuration(dict):
    def __init__(self, config_file_path=None, overwrites=None):
        # safe_load avoids executing arbitrary YAML tags (yaml.load
        # without an explicit Loader is deprecated and unsafe)
        with open(config_file_path) as config_file:
            config = yaml.safe_load(config_file)
        super(Configuration, self).__init__(config)
        if overwrites is not None:
            for overwrite_key, value in overwrites.items():
                self.apply_overwrite(self, overwrite_key, value)

    def apply_overwrite(self, node, key, value):
        if isinstance(value, dict):
            for item in value:
                self.apply_overwrite(node[key], item, value[item])
        else:
            node[key] = value
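The recursion means an overwrite replaces only the leaves it names and leaves sibling keys intact. A quick illustration with hypothetical dicts:
base = Configuration('standard.yml')
# base['logger']['handlers']['file'] is
#   {'level': 'DEBUG', ..., 'filename': '/tmp/log1.log'}
over = {'logger': {'handlers': {'file': {'filename': '/tmp/log2.log'}}}}
for key, value in over.items():
    base.apply_overwrite(base, key, value)
# Only 'filename' changed; 'level', 'formatter', etc. are untouched.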
For example, if your main configuration is:
logger:
  version: 1
  disable_existing_loggers: False
  formatters:
    simple:
      format: '%(levelname)s: Module: %(name)s Msg: %(message)s'
  handlers:
    file:
      level: DEBUG
      class: logging.handlers.RotatingFileHandler
      maxBytes: 10000000
      backupCount: 50
      formatter: simple
      filename: '/tmp/log1.log'
  root:
    handlers: [file]
    level: DEBUG
and your overwrite is:
logger:
  handlers:
    file:
      filename: '/tmp/log2.log'
you can get your overwritten logger like this:
from configuration import Configuration
from logging.config import dictConfig
import logging

if __name__ == '__main__':
    config = Configuration('standard.yml', overwrites=Configuration('overwrite.yml'))
    dictConfig(config['logger'])
    logger = logging.getLogger(__name__)
    logger.info('I logged it')
I couldn't find a way to do what I wanted, so I ended up rolling a class to do it.
Here it is as a convenient github gist.

Creating multiple loggers in Python to different outputs in the same module

So I want to create multiple loggers in the same module
log = logging.getLogger('FirstLogger')
plog = logging.getLogger('SecondLogger')
and I want to configure each logger separately.
So I added a FileHandler for plog that only takes logging.INFO or above, while the FileHandler for log takes logging.DEBUG or above.
I have created an init_logger() function that takes the logger instance to configure:
def init_logger(logger, fmode, cmode):
So I want FirstLogger to log to a file I created separately for it, at DEBUG level. I would do
log = logging.getLogger('FirstLogger')
init_logger(log,logging.DEBUG,logging.INFO)
plog = logging.getLogger('SecondLogger')
init_logger(plog,logging.INFO,logging.INFO)
In init_logger I specify different files for the FileHandler and set the levels according to what is passed to init_logger.
flog = logging.FileHandler(logfile)
flog.setLevel(fmode)
flog.setFormatter(...)
console = logging.StreamHandler()
console.setLevel(cmode)
console.setFormatter(...)
logger.addHandler(flog)
logger.addHandler(console)
The problem is that even though log has its console handler set to INFO and its FileHandler set to DEBUG, I still only get INFO in both the file and the console. I can't figure out what is wrong with what I am doing.
You set the level of the file handler to DEBUG, but you do not set the level of the logger itself to DEBUG. The logger's own level is checked before any handler levels, so it must be at least as verbose as the most verbose handler:
log.setLevel(min(cmode, fmode))
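Putting it together, a sketch of init_logger with that fix (the logfile parameter is an assumption here, since the original signature elides where the file name comes from):
import logging

def init_logger(logger, fmode, cmode, logfile):
    # The logger must pass records at the most verbose level any
    # handler wants, or they are dropped before the handlers see them.
    logger.setLevel(min(fmode, cmode))

    flog = logging.FileHandler(logfile)
    flog.setLevel(fmode)
    logger.addHandler(flog)

    console = logging.StreamHandler()
    console.setLevel(cmode)
    logger.addHandler(console)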

Python: logging disappears when in django shell

Running: Python 3.3 in virtualenv, Windows 7, Django 1.6.5
Background: I am working on a Django project currently. The project directory has a Python file, logManager.py, that configures the logging module and creates a logger instance, which is then imported by the various modules:
import logging

logging.basicConfig(level=logging.DEBUG,
                    filename='logFile.log',
                    format='%(asctime)-25s - %(levelname)-8s - %(name)-12s - %(message)s',
                    filemode='w'  # this clears the file before each new process
                    )
logger = logging.getLogger('custom')
All of the modules that use this logger import it like this:
from logManager import logger
Now I have an API module that's in a completely different directory, but it makes calls into the Django project. It also has its own logging configuration. This used to be in a separate file, but I recently added it to the API module itself, logRequests.py. Here's the logging config code:
import logging

logging.basicConfig(level=logging.DEBUG,
                    filename='C:\\SeleniumTests\\APILogFile.txt',
                    format='%(asctime)-25s - %(levelname)-8s - %(name)-12s - %(message)s'
                    )
libraryLogger = logging.getLogger('')
Problem: When I use the Django shell (python manage.py shell), which I understand to be basically a Python shell with one configuration variable set, my logging calls never actually print to APILogFile.txt; they must be getting hijacked somewhere by another configuration of the logging module. None of the calls throw errors or exceptions; the messages just don't end up in the log file. It's almost as if they're being told to print somewhere else.
What can I do?
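One plausible cause, sketched under the assumption that something imported earlier (Django's setup, or logManager) already configured the root logger: logging.basicConfig() silently does nothing when the root logger already has handlers, so the API module's config never takes effect. Attaching the handler explicitly avoids that:
import logging

# basicConfig() is a no-op once the root logger has handlers,
# so build and attach the handler directly instead.
handler = logging.FileHandler('C:\\SeleniumTests\\APILogFile.txt')
handler.setFormatter(logging.Formatter(
    '%(asctime)-25s - %(levelname)-8s - %(name)-12s - %(message)s'))

libraryLogger = logging.getLogger('')
libraryLogger.setLevel(logging.DEBUG)
libraryLogger.addHandler(handler)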

Selenium unit tests in Python -- where is my log file?

So I exported some unit tests from the Selenium IDE to Python. Now I'm trying to debug something, and I've noticed that Selenium uses the logging module. There is one particular line in selenium.webdriver.remote.remote_connection that I would really like to see the output of. It is:
LOGGER.debug('%s %s %s' % (method, url, data))
At the top of the file is another line that reads:
LOGGER = logging.getLogger(__name__)
So where is this log file? I want to look at it.
In your unit test script, place
import logging
logging.basicConfig(filename=log_filename, level=logging.DEBUG)
where log_filename is the path you'd like the log file written to.
Without the call to logging.basicConfig, or some similar call to set up a logging handler, the LOGGER.debug call produces no output at all; there is no log file by default.
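For instance, a sketch of where the call goes in an exported test script (the file name and test body are placeholders):
import logging
import unittest

from selenium import webdriver

# Send DEBUG records (including remote_connection's request lines)
# to a file; this runs before any test emits log records.
logging.basicConfig(filename='selenium_debug.log', level=logging.DEBUG)

class ExportedTest(unittest.TestCase):
    def test_page_loads(self):
        driver = webdriver.Firefox()
        driver.get('http://example.com')
        driver.quit()

if __name__ == '__main__':
    unittest.main()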

How do I configure the Python logging module in Django?

I'm trying to configure logging for a Django app using the Python logging module. I have placed the following bit of configuration code in my Django project's settings.py file:
import logging
import logging.handlers
import os
date_fmt = '%m/%d/%Y %H:%M:%S'
log_formatter = logging.Formatter(u'[%(asctime)s] %(levelname)-7s: %(message)s (%(filename)s:%(lineno)d)', datefmt=date_fmt)
log_dir = os.path.join(PROJECT_DIR, "var", "log", "my_app")
log_name = os.path.join(log_dir, "nyrb.log")
bytes = 1024 * 1024 # 1 MB
if not os.path.exists(log_dir):
    os.makedirs(log_dir)
handler = logging.handlers.RotatingFileHandler(log_name, maxBytes=bytes, backupCount=7)
handler.setFormatter(log_formatter)
handler.setLevel(logging.DEBUG)
logging.getLogger().setLevel(logging.DEBUG)
logging.getLogger().addHandler(handler)
logging.getLogger(__name__).info("Initialized logging subsystem")
At startup, I get a couple of Django-related messages, as well as the "Initialized logging subsystem" message, in the log file, but then all the log messages end up going to the web server logs (/var/log/apache2/error.log, since I'm using Apache) and use the standard log format (not the formatter I designated). Am I configuring logging incorrectly?
Kind of anti-climactic, but it turns out there was a third-party app installed in the project that had its own logging configuration that was overriding the one I set up (it modified the root logger, for some reason -- not very kosher for a Django app!). Removed that code and everything works as expected.
See this other answer. Note that settings.py is usually imported twice, so you should avoid creating multiple handlers. Better logging support is coming to Django in 1.3 (hopefully), but for now you should ensure that if your setup code is called more than once, there are no adverse effects.
I'm not sure why your logged messages are going to the Apache logs, unless you've (somewhere else in your code) added a StreamHandler to your root logger with sys.stdout or sys.stderr as the stream. You might want to print out logging.getLogger().handlers just to check that it's what you expect.
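A minimal sketch of such a guard, reusing the handler built in the question's setup code:
import logging
import logging.handlers

root = logging.getLogger()
# settings.py may be imported twice, so only attach the handler
# if an equivalent one is not already present on the root logger.
if not any(isinstance(h, logging.handlers.RotatingFileHandler)
           for h in root.handlers):
    root.addHandler(handler)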
I used this with success (although it does not rotate):
# in settings.py
import logging

logging.basicConfig(
    level=logging.DEBUG,
    format='%(asctime)s %(levelname)s %(funcName)s %(lineno)d \
\033[35m%(message)s\033[0m',
    datefmt='[%d/%b/%Y %H:%M:%S]',
    filename='/tmp/my_django_app.log',
    filemode='a'
)
I'd suggest trying an absolute path, too.
I guess logging stops when Apache forks the process. After that happens, because all file descriptors are closed during daemonization, the logging system tries to reopen the log file and, as far as I understand, uses a relative file path:
log_dir = os.path.join(PROJECT_DIR, "var", "log", "my_app")
log_name = os.path.join(log_dir, "nyrb.log")
But there is no “current directory” once the process has been daemonized. Try using an absolute log_dir path. Hope that helps.
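A sketch of one way to do that, anchoring the path to the settings file itself rather than the working directory (assuming this runs in settings.py):
import os

# A path derived from __file__ survives daemonization,
# unlike anything relative to os.getcwd()
PROJECT_DIR = os.path.dirname(os.path.abspath(__file__))
log_dir = os.path.join(PROJECT_DIR, "var", "log", "my_app")
log_name = os.path.join(log_dir, "nyrb.log")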
