I want to extend Python 2.7's logging module (specifically the Logger class).
Currently I have:
class MyLogger(logging.Logger):
    def __init__(self, name):
        logging.Logger.__init__(self, name)
Is this the right way to initialize MyLogger?
Will I be able to use Logger.manager (logging.Logger.manager)?
Is it possible to "get" a logger? I only know logging.getLogger(name), which is not available since I'm extending Logger itself (and I know static methods aren't as popular in Python as they are in, say, Java).
Where can I learn more about extending classes? The documentation on python.org is very poor and did not help me.
My goal is to be able to start a logger with standard configurations and handlers based on the calling module's name, and to set all of the system's loggers to the same level with a short, readable call.
Seems like my approach was wrong altogether.
I prefer the approach described on python.org:
Using configuration files cleans up the code and makes it easy to propagate changes.
A configuration file is loaded like so:
# see the example on the python.org site (logging for multiple modules)
import logging.config

logging.config.fileConfig('logging.conf')
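A minimal logging.conf might look like this (the handler and formatter names here are arbitrary placeholders I chose):

[loggers]
keys=root

[handlers]
keys=console

[formatters]
keys=simple

[logger_root]
level=DEBUG
handlers=console

[handler_console]
class=StreamHandler
level=DEBUG
formatter=simple
args=(sys.stderr,)

[formatter_simple]
format=%(asctime)s %(name)s %(levelname)s %(message)s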
As for batch operations: since logging.Logger.manager.loggerDict and logging.getLogger are still available, simple loops can apply changes (like setting every logger to a single level) throughout the system.
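For example, a minimal sketch of such a batch operation (set_all_levels is just a name I chose):

import logging

def set_all_levels(level):
    # Root logger plus every logger registered so far; getLogger(name) is
    # used because loggerDict can also contain PlaceHolder objects.
    logging.getLogger().setLevel(level)
    for name in logging.Logger.manager.loggerDict:
        logging.getLogger(name).setLevel(level)

set_all_levels(logging.DEBUG)   # e.g. turn on debug output everywhere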
My scenario is this: I'm using a simple logger and have a lot of log.info()/log.debug() messages in my code.
I would like to change the log level dynamically, mainly to be able to "turn on/off" debug-level logs.
My question is: is it somehow possible to do so, but have the change only affect parts of my code? Let's say only when I'm inside methods of a specific class.
Or is the only way to do something like that to use a different log object for that class and change only its level?
You can achieve this by using the stdlib logging output for structlog, and then just configuring logging with different levels for each logger.
Every logger has a name, which is used to differentiate where the logs came from and how to handle them. Usually you define a single logger per Python file like this:
import logging
logger = logging.getLogger(__name__)
What this does is give the logger the name of the Python module. For example, if the file is proj/routers/route1.py, then the logger name will be proj.routers.route1.
This is useful because the stdlib logging library allows you to configure a log level on the proj.routers logger, which will then apply to every proj.routers.* logger.
See https://docs.python.org/3/howto/logging.html#configuring-logging for details.
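For example (proj.models is just a made-up sibling logger):

import logging

logging.basicConfig(level=logging.WARNING)                  # default everywhere
logging.getLogger("proj.routers").setLevel(logging.DEBUG)   # only this subtree

logging.getLogger("proj.routers.route1").debug("shown")     # inherits DEBUG
logging.getLogger("proj.models").debug("suppressed")        # still WARNING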
I am new to Python and just trying to learn and find better ways to write code. I want to create a custom class for logging that uses the logging package inside it. I want the functions in this class to be reusable, so I can do my logging from other scripts rather than writing custom code in each and every script. Is there a good link you guys can share? Is this the right way to handle logging? I just want to avoid writing the same code in every script if I can reuse it from one module.
I would highly appreciate any reply.
You can build a custom class that utilizes the built-in Python logging library. There isn't really any one right way to handle logging; the library allows you to use 5 standard levels indicating the severity of events (DEBUG, INFO, WARNING, ERROR, and CRITICAL), and the way you use these levels is application-specific. Here's another good explanation of the package.
It's indeed a good idea to keep all your logging configuration (formatters, levels, handlers) in one place:
- create a class wrapping a custom logger with your configuration
- expose methods for logging with different levels
- import this class wherever you want
- create an instance of this class to log where you want
To make sure all your custom logging objects have the same config, you should make the logging class own the configuration.
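A minimal sketch of such a wrapper (AppLogger and its format string are choices of mine, not a prescribed API):

import logging
import sys

class AppLogger(object):
    _configured = False   # the class owns the configuration, applied once

    def __init__(self, name):
        if not AppLogger._configured:
            handler = logging.StreamHandler(sys.stderr)
            handler.setFormatter(logging.Formatter(
                "%(asctime)s %(name)s %(levelname)s: %(message)s"))
            logging.getLogger().addHandler(handler)
            logging.getLogger().setLevel(logging.INFO)
            AppLogger._configured = True
        self._log = logging.getLogger(name)

    def info(self, msg, *args):
        self._log.info(msg, *args)

    def error(self, msg, *args):
        self._log.error(msg, *args)

# usage from any script:
log = AppLogger(__name__)
log.info("started")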
I don't think there are any links I can share for the whole thing, but you can easily find references for each of the individual details I mentioned.
This question is geared toward OOP best practices.
Background:
I've created a set of scripts that are either automatically triggered by cron jobs or are constantly running in the background to collect data in real time. In the past, I've used Python's smtplib to send myself notifications when errors occur or a job completes successfully. Recently, I migrated these programs to the Google Cloud Platform, which by default blocks popular SMTP ports. To get around this, I used Linux's mail command to continue sending myself the reports.
Originally, my hacky solution was to have two separate modules for sending alerts, chosen based on an argument I passed to the main script.
Ex:
$ python mycode.py my_arg
import sys

if sys.argv[1] == 'my_arg':
    mailer = Class1()
else:
    mailer = Class2()
I want to improve upon this and create a module that automatically handles this without the added code. The question I have is whether it is "proper" to include a conditional statement while initializing the class to handle the situation.
Ex:
class Alert(object):
    def __init__(self, platform, other_args):   # called as Alert(sys.platform, ...)
        if platform == "linux":
            # Google Cloud Platform:
            # instantiate Class1 variables and methods
            pass
        else:
            # local copy:
            # instantiate Class2 variables and methods
            pass
My gut instinct says this is wrong but I'm not sure what the proper approach would be.
I'm mostly interested in answers regarding how to create OO classes/modules that handle environmental dependencies to provide the same service. In my case, a blocked port requires a different set of code altogether.
Edit: After some suggestions, here are my favorite readings on this topic.
http://python-3-patterns-idioms-test.readthedocs.io/en/latest/Factory.html
This seems like a wonderful use-case for a factory class, which encapsulates the conditional, and always returns an instance of one of N classes, all of which implement the same interface, so that the rest of your code can use it without caring about the concrete class being used.
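A sketch of that factory (the class names are made-up stand-ins for the two mailer modules):

import sys

class MailCommandAlert(object):        # stand-in for Class1
    def send(self, msg):
        print("via mail command: %s" % msg)

class SmtpAlert(object):               # stand-in for Class2
    def send(self, msg):
        print("via smtplib: %s" % msg)

def make_mailer():
    # The factory owns the conditional; callers just get something with .send().
    if sys.platform.startswith("linux"):   # e.g. the Google Cloud host
        return MailCommandAlert()
    return SmtpAlert()

mailer = make_mailer()
mailer.send("job finished")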
This is a way to do it. But I would rather use something like creating a dynamic class instance. To do that, you could have only one class instead of selecting from two different classes. The class would then take some arguments and return the result depending on the arguments provided. There are quite a few examples out there and I'm sure you can use them in your use-case. Try searching for how to create a dynamic class in Python.
This is a Python question but a Django-specific solution is acceptable.
For a class I'm writing I would like to prefix log output on a per-instance basis. I do not want to interfere with the logging destinations that were set up. These are the solutions I can think of:
1. Create a new logger as a sub-logger of the module, and reconfigure the parent handlers with different formatters: mylog.info("foo") => prefixfoo
2. Create a wrapper log class with info(), warn(), etc. methods, each adding the prefix before calling the wrapped logger: mylog.info("foo")
3. Store the prefix in the instance and add it manually: log.info(self.p + "foo")
4. Create a prefix-adding function that I manually wrap all log calls with: log.info(p("foo"))
Obviously I prefer solution 1 but I don't know how to do that.
What is the best solution? I'm a newbie Python programmer so I'm probably trying to solve the wrong problem :-)
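For reference, solution 2 maps closely onto the stdlib's logging.LoggerAdapter; a minimal sketch (PrefixAdapter is just a name I made up):

import logging

class PrefixAdapter(logging.LoggerAdapter):
    # Prepends a per-instance prefix; existing handlers stay untouched.
    def process(self, msg, kwargs):
        return self.extra["prefix"] + msg, kwargs

logging.basicConfig(level=logging.INFO)
mylog = PrefixAdapter(logging.getLogger(__name__), {"prefix": "prefix"})
mylog.info("foo")    # logs "prefixfoo" through the existing handlers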
Probably a very common question, but I couldn't find a suitable answer yet.
I have a (Python w/ C++ modules) application that makes heavy use of an SQLite database, and its path gets supplied by the user on application start-up.
Every time some part of the application needs access to the database, I plan to acquire a new session and discard it when done. For that to happen, I obviously need access to the path supplied on startup. A couple of ways I see it happening:
1. Explicit arguments
The database path is passed everywhere it needs to be through an explicit parameter and database session is instantiated with that explicit path. This is perhaps the most modular, but seems to be incredibly awkward.
2. Database path singleton
The database session object would look like:
import foo.options
class DatabaseSession(object):
    def __init__(self, path=foo.options.db_path):
        ...
I consider this to be the lesser-evil singleton, since we're storing only constant strings, which don't change during application runtime. This leaves it possible to override the default and unit test the DatabaseSession class if necessary.
3. Database path singleton + static factory method
Perhaps slight improvement over the above:
def make_session(path=None):
    import foo.options
    if path is None:
        path = foo.options.db_path
    return DatabaseSession(path)

class DatabaseSession(object):
    def __init__(self, path):
        ...
This way the module doesn't depend on foo.options at all, unless we're using the factory method. Additionally, the method can perform stuff like session caching or whatnot.
And then there are other patterns, which I don't know of. I vaguely saw something similar in web frameworks, but I don't have any experience with those. My example is quite specific, but I imagine it also expands to other application settings, hence the title of the post.
I would like to hear your thoughts about what would be the best way to arrange this.
Yes, there are others. Your option 3 though is very Pythonic.
- Use a standard Python module to encapsulate options (this is the way web frameworks like Django do it).
- Use a factory to emit properly configured sessions.
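Putting the two together, a sketch (the file layout is only illustrative):

# foo/options.py -- a plain module holds the settings, filled in at startup
db_path = None

# foo/db.py
import foo.options

class DatabaseSession(object):
    def __init__(self, path):
        self.path = path

def make_session(path=None):
    # Default to the configured path; tests can pass an explicit one.
    if path is None:
        path = foo.options.db_path
    return DatabaseSession(path)

# main.py, at startup:
#   foo.options.db_path = "/home/user/app.db"   # e.g. from command-line args
#   session = make_session()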
Since SQLite already has a "connection", why not use that? What does your DatabaseSession class add that the built-in connection lacks?