Change alembic logger - python

We are using Alembic to apply DB revisions. I have configured the connection and it works as expected, except I am not able to make it use our custom logger.
We have our own logger class (derived from Python logging) which is used throughout the application, and I want alembic to use it instead of the default.
Is there any way I can pass a logger object of our class to it? I want it to print its own log using the format and handler that is defined in the custom logger.
I tried the following.

env.py:

from logging.config import fileConfig

from sqlalchemy import engine_from_config
from sqlalchemy import pool
from alembic import context
from tools.logger import Logger

# this is the Alembic Config object, which provides
# access to the values within the .ini file in use.
config = context.config

# Interpret the config file for Python logging.
# This line sets up loggers basically.
if config.attributes.get('configure_logger', True):
    fileConfig(config.config_file_name)

logger = Logger('alembic.env')
My script:
self.alembic_cfg = alembic_config(self.alembic_ini_path, attributes={'configure_logger': False})
I also tried:
self.alembic_cfg.set_section_option("logger", "keys", "root")
Both of the above approaches just disable Alembic's own logging instead of routing it through our logger.

To my knowledge, it is not possible to replace one logger with another. Is it something that you really need though?
I want it to print its own log using the format and handler that is defined in the custom logger.
As I understand it, a logger has handlers, and handlers have formatters. If you have a handler with a formatter, you can just edit alembic.ini and assign your handler to the alembic logger.
Add your formatter to the formatters section in the ini file:
[formatters] # existing section
keys = generic,pyraider # just add the name of your formatter
Define your custom formatter:
[formatter_pyraider]
class=tools.logger.PyraiderFormatter
Add your handler to the handlers section in the ini file:
[handlers] # existing section
keys = console,pyraider # just add the name of your handler
Define your custom handler:
[handler_pyraider] # new section, handler_<your_name>
class = tools.logger.PyraiderHandler
args = (sys.stderr,) # might need to play around with this one
level = INFO
formatter = pyraider
Assign the handler to the alembic logger:
[logger_alembic] # existing section, what you want
level = INFO
handlers = pyraider # <---- add your handler, defined previously, here
qualname = alembic
Docs on the alembic.ini file:
https://alembic.sqlalchemy.org/en/latest/tutorial.html#editing-the-ini-file
You might need to tweak some things, but it should work, as this is essentially how the Python logging module's fileConfig mechanism works.
More info on how to structure your ini file for the logging module:
Official Python Docs
Hitchhiker's Guide
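If it helps, here is a minimal sketch of what the tools.logger classes named in those ini sections could look like. The module path and class names come from the snippets above; the format string is an assumption. Note that fileConfig instantiates a formatter class with the fmt, datefmt, and style values from its section, and a handler class with the evaluated args tuple:

# tools/logger.py -- hypothetical classes matching the ini entries above
import logging

class PyraiderFormatter(logging.Formatter):
    # fileConfig calls this with (fmt, datefmt, style) from [formatter_pyraider]
    def __init__(self, fmt=None, datefmt=None, style='%'):
        super().__init__(fmt or '%(asctime)s [%(name)s] %(levelname)s: %(message)s',
                         datefmt, style)

class PyraiderHandler(logging.StreamHandler):
    # fileConfig calls this with the args tuple from [handler_pyraider], e.g. (sys.stderr,)
    def __init__(self, stream=None):
        super().__init__(stream)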

Related

Making Custom Logger available across Multiple modules

I created a custom logger for my purposes in Python and made it a utility. It is context-based and built with custom handlers for different scenarios. I am trying to make my custom logger visible across all modules, but I am not able to do this. I don't want to repeat these lines in each of my modules just for the logger, passing my context and config around just for that:
logger = myLogger(config, context) # config has data for context based custom handling
In my main module, I just made the logger object global so that other methods can use the logger without any further additions. Is there any way I can do the same across modules?
In many similar questions, what is suggested is:
logger = logging.getLogger(__name__)
But this does not carry over my custom handlers.
Can someone please advise how I can achieve this: make my custom logger global for my whole runtime, so that I don't have to declare it whenever I need it.
My code is like this:
import argparse
import configparser

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument('-context', '--context')
    parser.add_argument('-cfg', '--cfg')
    args = parser.parse_args()
    config = configparser.ConfigParser()
    config.read(args.cfg)
    global logger
    logger = myLogger(config, args.context)
    # here context is my section name from the config, which has details for my current process.
    # myLogger reads log config details from the configparser object; there I remove the
    # default handlers, add my custom handlers, and return the logger object back to main.
    # Making logger global in main makes it visible to other methods in the same module as
    # main, but I am trying to make my logger visible to other modules as well when I call
    # methods from those modules.

if __name__ == '__main__':
    main()
My way of doing it:
Logging.py
import logging
# Your custom stuff
logger = myLogger(config, context)
Every_other_file.py
from Logging import logger
Edit: to change the config later on:
Logging.py

import logging
# Your custom stuff

logger = myLogger(config, context)

def set_logger(config):
    global logger
    logger = myLogger(config, context)

main.py

import Logging

config = something
Logging.set_logger(config)
Something like that. My point is that by calling a method on the module, you can change the logger it exposes.
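A minimal runnable sketch of this pattern, with a plain logging.Logger standing in for myLogger (all names here are illustrative):

# logging_setup.py -- shared module; every other module imports `logger` from here
import logging

def _build_logger(level=logging.INFO):
    lg = logging.getLogger('app')
    lg.setLevel(level)
    handler = logging.StreamHandler()
    handler.setFormatter(logging.Formatter('%(asctime)s %(levelname)s: %(message)s'))
    lg.addHandler(handler)
    return lg

logger = _build_logger()

# any_other_module.py
# from logging_setup import logger
# logger.info("visible from every module that imports it")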

How to configure/initialize logging using logger for multiple modules only once in Python for entire project?

I have a Python project with multiple modules that use logging. Currently I perform initialization (reading the log configuration file, creating the root logger, and enabling/disabling logging) in every module before logging any messages. Is it possible to perform this initialization only once, in one place (for example in one class, perhaps called Log), such that the same settings are reused for logging all over the project?
I am looking for a proper solution where the configuration file is read only once and the logger is fetched and configured only once, in a class constructor, or perhaps in the initializer (__init__.py). I don't want to do this at the client side (in __main__). I want to do this configuration only once in a separate class and call that class from other modules when logging is required.
Setup using the @singleton pattern:

# log.py
import logging.config

import yaml
from singleton_decorator import singleton

@singleton
class Log:
    def __init__(self):
        configFile = 'path_to_my_log_config_file/logging.yaml'
        with open(configFile) as f:
            config_dict = yaml.safe_load(f)
        logging.config.dictConfig(config_dict)
        self.logger = logging.getLogger('root')

    def info(self, message):
        self.logger.info(message)
# module1.py
from log import Log

myLog = Log()
myLog.info('Message logged successfully')

# module2.py
from log import Log

myLog = Log()  # config read only once and only one object is created
myLog.info('Message logged successfully')
From the documentation,
Note that Loggers should NEVER be instantiated directly, but always through the module-level function logging.getLogger(name). Multiple calls to getLogger() with the same name will always return a reference to the same Logger object.
You can initialize and configure logging in your main entry point. See Logging from multiple modules in the Logging HOWTO (Python 2.7).
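A minimal sketch of that approach (module names are illustrative): configure handlers once at the entry point, and let every other module ask for a named logger, which inherits the root configuration:

# worker.py -- hypothetical library module
import logging

logger = logging.getLogger(__name__)  # no configuration here, just a named logger

def do_work():
    logger.info("configured once in main, usable everywhere")

# main.py -- the single place where logging is configured
import logging
import worker

logging.basicConfig(level=logging.INFO,
                    format='%(asctime)s %(name)s %(levelname)s: %(message)s')
worker.do_work()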
I had the same problem, and since I don't have any classes or anything, I solved it by just using a global variable.
utils.py:
import logging

existing_loggers = {}

def get_logger(name='my_logger', level=logging.INFO):
    if name in existing_loggers:
        return existing_loggers[name]
    # Do the rest of initialization, handlers, formatters etc...
    logger = logging.getLogger(name)
    logger.setLevel(level)
    existing_loggers[name] = logger
    return logger
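Repeated calls with the same name then return the same cached object from any module:

# anywhere.py
from utils import get_logger

log = get_logger()   # created, configured, and cached on the first call
same = get_logger()  # the cached object comes back on later calls
assert log is same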

Python redirecting log

I am running a web server, Tornado, and I am trying to redirect all the log output to a file using the following command, but I don't see the output in the file:
/usr/bin/python -u index.py 2>&1 >> /tmp/tornado.log
I pass the -u option to the Python interpreter, and I still don't see any output logged to my log file.
However, I see the output on stdout when I do the following:
/usr/bin/python index.py
Tornado uses the built-in logging module. You can easily attach a file handler to the root logger and set its level to NOTSET so it records everything, or some other level if you want to filter.
Reference docs: logging, logging.handlers
Example that works with Tornado's logging:
import logging

# the root logger is created upon the first import of the logging module
# create a file handler to add to the root logger
filehandler = logging.FileHandler(
    filename='test.log',
    mode='a',
    encoding=None,
    delay=False
)

# set the file handler's level to your desired logging level, e.g. INFO
filehandler.setLevel(logging.INFO)

# create a formatter for the file handler and attach it
formatter = logging.Formatter('%(asctime)s.%(msecs)d [%(name)s](%(process)d): %(levelname)s: %(message)s')
filehandler.setFormatter(formatter)

# add filters if you want your handler to only handle events from specific loggers
# e.g. "main.sub.classb" or something like that. I'll leave this commented out.
# filehandler.addFilter(logging.Filter(name='root.child'))

# set the root logger's level to be at most as high as your handler's
if logging.root.level > filehandler.level:
    logging.root.setLevel(filehandler.level)

# finally, add the handler to the root. after you do this, the root logger will write
# records to file.
logging.root.addHandler(filehandler)
More often than not, I actually wish to suppress Tornado's loggers (because I have my own, I catch their exceptions anyway, and they just end up polluting my logs), and this is where adding a filter on your file handlers can come in very handy.
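For example, a minimal sketch of such a filter, continuing the snippet above (Tornado's standard logger names are tornado.access, tornado.application, and tornado.general):

class DropTornadoRecords(logging.Filter):
    # reject any record emitted by a tornado.* logger
    def filter(self, record):
        return not record.name.startswith('tornado')

filehandler.addFilter(DropTornadoRecords())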

Extending the Python Logger

I'm looking for a simple way to extend the logging functionality defined in the standard Python library. I just want the ability to choose whether or not my logs are also printed to the screen.
Example: Normally to log a warning you would call:
logging.basicConfig(level=logging.DEBUG, format='%(asctime)s %(levelname)s: %(message)s', filename='log.log', filemode='w')
logging.warning("WARNING!!!")
This sets up the log configuration and writes the warning to the log file.
I would like to have something along the lines of a call like:
logging.warning("WARNING!!!", True)
where the True argument signifies whether the log message is also printed to stdout.
I've seen some examples of implementations that override the logger class, but I am new to the language and don't really follow what is going on or how to implement this idea. Any help would be greatly appreciated :)
The Python logging module defines these classes:
Loggers that emit log messages.
Handlers that put those messages to a destination.
Formatters that format log messages.
Filters that filter log messages.
A Logger can have Handlers. You add them by invoking the addHandler() method. A Handler can have Filters and Formatters. You similarly add them by invoking the addFilter() and setFormatter() methods, respectively.
It works like this:
import logging
# make a logger
main_logger = logging.getLogger("my logger")
main_logger.setLevel(logging.INFO)
# make some handlers
console_handler = logging.StreamHandler() # by default, sys.stderr
file_handler = logging.FileHandler("my_log_file.txt")
# set logging levels
console_handler.setLevel(logging.WARNING)
file_handler.setLevel(logging.INFO)
# add handlers to logger
main_logger.addHandler(console_handler)
main_logger.addHandler(file_handler)
Now, you can use this object like this:
main_logger.info("logged in the FILE")
main_logger.warning("logged in the FILE and on the CONSOLE")
If you just run python on your machine, you can type the above code into the interactive console and you should see the output. The log file will get created in your current directory, if you have permission to create files in it.
I hope this helps!
It is possible to override logging.getLoggerClass() to add new functionality to loggers. I wrote a simple class which prints green messages to stdout. The most important parts of my code:
class ColorLogger(logging.getLoggerClass()):
    __GREEN = '\033[0;32m%s\033[0m'
    __FORMAT = {
        'fmt': '%(asctime)s %(levelname)s: %(message)s',
        'datefmt': '%Y-%m-%d %H:%M:%S',
    }

    def __init__(self, format=__FORMAT):
        formatter = logging.Formatter(**format)
        self.root.setLevel(logging.INFO)
        self.root.handlers = []
        (...)
        handler = logging.StreamHandler()
        handler.setFormatter(formatter)
        self.root.addHandler(handler)

    def info(self, message):
        self.root.info(message)

    (...)

    def info_green(self, message):
        self.root.info(self.__GREEN, message)

    (...)

if __name__ == '__main__':
    logger = ColorLogger()
    logger.info("This message has default color.")
    logger.info_green("This message is green.")
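If you want logging.getLogger() itself to hand out instances of such a subclass, rather than instantiating it directly as above, the standard hook is logging.setLoggerClass(). A minimal sketch with an illustrative subclass:

import logging

class EchoLogger(logging.Logger):
    # hypothetical subclass: a warning that also echoes to stdout
    def warning_stdout(self, message):
        self.warning(message)
        print(message)

logging.setLoggerClass(EchoLogger)
logger = logging.getLogger('my.app')  # an EchoLogger from now on
logger.warning_stdout("WARNING!!!")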
Handlers send the log records (created by loggers) to the appropriate
destination.
(from the docs: http://docs.python.org/library/logging.html)
Just set up multiple handlers with your logging object, one to write to file, another to write to the screen.
UPDATE
Here is an example function you can call in your classes to get logging set up with a handler.
def set_up_logger(self):
    # create logger object
    self.log = logging.getLogger("command")
    self.log.setLevel(logging.DEBUG)

    # create console handler and set min level recorded to debug messages
    ch = logging.StreamHandler()
    ch.setLevel(logging.DEBUG)

    # add the handler to the log object
    self.log.addHandler(ch)
You would just need to set up another handler for files, along the lines of the StreamHandler code that's already there, and add it to the logging object; a sketch follows after the example below. The line that says ch.setLevel(logging.DEBUG) means that this particular handler will take logging messages that are DEBUG or higher. You'll likely want to set yours to WARNING or higher, since you only want the more important things to go to the console. So, your logging would work like this:
self.log.info("Hello, World!") -> goes to file
self.log.error("OMG!!") -> goes to file AND console
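A minimal sketch of that two-handler setup, outside of a class (the file name and levels are illustrative):

import logging

log = logging.getLogger("command")
log.setLevel(logging.DEBUG)

# file handler: records everything from DEBUG up
fh = logging.FileHandler("command.log")
fh.setLevel(logging.DEBUG)
log.addHandler(fh)

# console handler: only the more important messages
ch = logging.StreamHandler()
ch.setLevel(logging.WARNING)
log.addHandler(ch)

log.info("Hello, World!")  # goes to file only
log.error("OMG!!")         # goes to file AND console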

Pass logger instance to class

I'm using an open-source Python library in my project. This library logs a lot of information using the logging module.
...but I can't see the output or log it to a file. I know that I would have to create a logger instance and add a file handler or a console handler to it, but how can I pass this logger instance to the class? Here's the init snippet of the class that I'm going to be using.
class Periscope:
    ''' Main Periscope class'''

    def __init__(self):
        self.config = ConfigParser.SafeConfigParser({"lang": "en"})
        if is_local:
            self.config_file = os.path.join(bd.xdg_config_home, "periscope", "config")
            if not os.path.exists(self.config_file):
                folder = os.path.dirname(self.config_file)
                if not os.path.exists(folder):
                    logging.info("Creating folder %s" % folder)
                    os.mkdir(folder)
                logging.info("Creating config file")
                configfile = open(self.config_file, "w")
                self.config.write(configfile)
                configfile.close()
            else:
                # Load it
                self.config.read(self.config_file)
        self.pluginNames = self.listExistingPlugins()
        self._preferedLanguages = None
Any help?
Thanks guys.
The simplest way will be to use the basicConfig function from the logging module. Here's what the docs say:
Does basic configuration for the logging system by creating a StreamHandler with a default Formatter and adding it to the root logger. This function does nothing if the root logger already has handlers configured. The functions debug(), info(), warning(), error() and critical() will call basicConfig() automatically if no handlers are defined for the root logger.
The logging module is designed so that configuration is separated from the creation of log messages, so there's no need to have access to the logger instance.
Try setting the level to the lowest possible (DEBUG). This enables all log levels and should give you all logging messages. The simplest way to do the default configuration is to use basicConfig():
import logging
logging.basicConfig(level=logging.DEBUG, filename='/path/to/mylog.log')
If the library you are using doesn't override the logging configuration this should be enough to get messages into the log file. If you know the name of the logger the library is using, you can set the level for the library specifically:
logging.getLogger("periscope").setLevel(logging.DEBUG)
