Given this code, I am trying to get log statements working, but I am not able to. The documentation tells me that I do not have to set a level:
When a logger is created, the level is set to NOTSET (which causes all messages to be processed when the logger is the root logger, or delegation to the parent when the logger is a non-root logger).
But it did not work without one, so I tried setting the level to DEBUG explicitly. Still no luck.
"""
Experimental Port Fowarding
"""
import logging
def main(config):
""" entry point"""
log = logging.getLogger(__name__)
log.setLevel(logging.DEBUG)
log.debug("opening config file...")
config_file = open(config, 'r')
log.debug("config found!")
The logger you are getting doesn't have any handlers. You can check this by doing print(log.handlers) and seeing that the output is an empty list ([]).
The simplest way to use the logging library is something like this, where you call logging.basicConfig to set everything up, as shown in the logging module basic tutorial:
"""
Experimental Port Fowarding
"""
import logging
logging.basicConfig(level=logging.DEBUG)
def main(config):
""" entry point"""
logging.debug("opening config file...")
config_file = open(config, 'r')
logging.debug("config found!")
main('test.conf')
This works for me from outside and inside IPython.
If you want to avoid basicConfig for some reason, you need to register a handler manually, like this:
import logging

def main(config):
    """entry point"""
    log = logging.getLogger(__name__)
    log.setLevel(logging.DEBUG)
    # Minimal change: add a StreamHandler so records show up on the console (stderr by default)
    log.addHandler(logging.StreamHandler())
    log.debug("opening config file...")
    config_file = open(config, 'r')
    log.debug("config found!")
By default, StreamHandler writes to the STDERR stream, which usually shows up on the console.
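If you would rather have the output on STDOUT, StreamHandler accepts the target stream as an argument; a minimal sketch:

import logging
import sys

log = logging.getLogger(__name__)
log.setLevel(logging.DEBUG)
# StreamHandler writes to sys.stderr unless told otherwise
log.addHandler(logging.StreamHandler(sys.stdout))
log.debug("this goes to stdout")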
You can change the log file path by setting:
logging.basicConfig(filename="YourFileName.log")
log = logging.getLogger(__name__)
Related
I have a Python script, and I want the info method to write messages to the console, but the warning, critical, and error methods to write messages to a file. How can I do that?
I tried this:
import logging

console_log = logging.getLogger("CONSOLE")
console_log.setLevel(logging.INFO)
stream_handler = logging.StreamHandler()
stream_handler.setLevel(logging.INFO)
console_log.addHandler(stream_handler)

file_log = logging.getLogger("FILE")
file_log.setLevel(logging.WARNING)
file_handler = logging.FileHandler('log.txt')
file_handler.setLevel(logging.WARNING)
file_log.addHandler(file_handler)

def log_to_console(message):
    console_log.info(message)

def log_to_file(message):
    file_log.warning(message)

log_to_console("THIS SHOULD SHOW ONLY IN CONSOLE")
log_to_file("THIS SHOULD SHOW ONLY IN FILE")
But the message that should appear only in the file is going to the console too, and the message that should appear in the console is duplicated. What am I doing wrong here?
What happens is that the two loggers you created propagate the records upwards to the root logger. The root logger does not have any handlers by default, but will use the lastResort handler if needed:
A "handler of last resort" is available through this attribute. This
is a StreamHandler writing to sys.stderr with a level of WARNING, and
is used to handle logging events in the absence of any logging
configuration. The end result is to just print the message to
sys.stderr.
Source from the Python documentation.
Inside the Python source code, you can see where that call is made.
Therefore, to solve your problem, you could set the console_log and file_log loggers' propagate attribute to False.
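With the names from your snippet, that is a two-line change:

# Stop both loggers from handing records up to the root logger, whose
# lastResort handler was printing the duplicates to stderr
console_log.propagate = False
file_log.propagate = False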
On another note, I think you should refrain from instantiating several loggers for your use case. Just use one custom logger with two different handlers that each log to a different destination.
Create a custom StreamHandler to log only the specified level:
import logging

class MyStreamHandler(logging.StreamHandler):
    def emit(self, record):
        # this ensures this handler will print only for the specified level
        if record.levelno == self.level:
            super().emit(record)
Then, use it:
my_custom_logger = logging.getLogger("foobar")
my_custom_logger.propagate = False
my_custom_logger.setLevel(logging.INFO)
stream_handler = MyStreamHandler()
stream_handler.setLevel(logging.INFO)
file_handler = logging.FileHandler("log.txt")
file_handler.setLevel(logging.WARNING)
my_custom_logger.addHandler(stream_handler)
my_custom_logger.addHandler(file_handler)
my_custom_logger.info("THIS SHOULD SHOW ONLY IN CONSOLE")
my_custom_logger.warning("THIS SHOULD SHOW ONLY IN FILE")
And it works without duplicates and without misplaced logs.
I have a logging function with a hardcoded logfile name (LOG_FILE):
setup_logger.py
import logging
import sys

FORMATTER = logging.Formatter("%(levelname)s - %(asctime)s - %(name)s - %(message)s")
LOG_FILE = "my_app.log"

def get_console_handler():
    console_handler = logging.StreamHandler(sys.stdout)
    console_handler.setFormatter(FORMATTER)
    return console_handler

def get_file_handler():
    file_handler = logging.FileHandler(LOG_FILE)
    file_handler.setFormatter(FORMATTER)
    return file_handler

def get_logger(logger_name):
    logger = logging.getLogger(logger_name)
    logger.setLevel(logging.DEBUG)  # better to have too much log than not enough
    logger.addHandler(get_console_handler())
    logger.addHandler(get_file_handler())
    # with this pattern, it's rarely necessary to propagate the error up to parent
    logger.propagate = False
    return logger
I use this in various modules this way:
main.py
from _Core import setup_logger as log

def main(incoming_feed_id: int, type: str) -> None:
    logger = log.get_logger(__name__)
    # ...rest of my code

database.py
from _Core import setup_logger as log

logger = log.get_logger(__name__)

class Database:
    # ...rest of my code

etl.py
import _Core.database as db
from _Core import setup_logger as log

logger = log.get_logger(__name__)

class ETL:
    # ...rest of my code
What I want to achieve is to always change the logfile's path and name on each run based on arguments passed to the main() function in main.py.
Simplified example:
If main() receives the following arguments: incoming_feed_id = 1, type = simple_load, the logfile's name should be 1simple_load.log.
I am not sure what the best practice is for this. What I came up with is probably the worst thing to do: add a log_file parameter to the get_logger() function in setup_logger.py, so I can pass a filename from main() in main.py. But in that case I would need to pass the parameter from main to the other modules as well, which I do not think I should do, since for example the Database class is not even used in main.py.
I don't know enough about your application to be sure this will work for you, but you can just configure the root logger in main() by calling get_logger('', filename_based_on_cmdline_args); anything logged to the other loggers will then be passed to the root logger's handlers for processing, if the configured logger levels allow it. The way you're doing it now seems to open multiple handlers pointing to the same file, which seems sub-optimal. The other modules can then just use logging.getLogger(__name__) rather than log.get_logger(__name__).
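A minimal sketch of what I mean; the log_file parameter is my hypothetical addition to get_logger, not something your current setup_logger has:

# setup_logger.py (sketch) -- FORMATTER and get_console_handler are the ones
# you already define; log_file is the hypothetical new parameter
def get_logger(logger_name, log_file="my_app.log"):
    logger = logging.getLogger(logger_name)
    logger.setLevel(logging.DEBUG)
    logger.addHandler(get_console_handler())
    file_handler = logging.FileHandler(log_file)
    file_handler.setFormatter(FORMATTER)
    logger.addHandler(file_handler)
    return logger

# main.py (sketch)
import logging
from _Core import setup_logger as log

def main(incoming_feed_id: int, type: str) -> None:
    # '' names the ROOT logger; every logging.getLogger(__name__) in
    # database.py and etl.py propagates its records up to these handlers
    log.get_logger('', log_file=f"{incoming_feed_id}{type}.log")
    logging.getLogger(__name__).debug("run started")

database.py and etl.py then only need logging.getLogger(__name__), with no handler setup of their own.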
I have a project that consists of several modules. There is a main module (main.py) that creates a Tk GUI and loads the data. It passes this data to process.py, which processes the data using functions from checks.py. I am trying to implement logging for all the modules so they log to a file. In main.py log messages are written to the log file, but in the other modules they are only written to the console. I assume it's to do with the import module line executing part of the code before the code in main.py has set up the logger, but I can't work out how to arrange it to avoid that. It seems like a reasonably common question on Stack Overflow, but I couldn't get the other answers to work for me. I am sure I am not missing much. Simplified code is shown below.
I have tried moving the logging code inside and outside of various functions in the modules. The code I used to start me off is from Corey Schaffer's YouTube channel.
Main.py
import logging
import tempfile
from process import process_data

def main():
    logger = logging.getLogger()
    logger.setLevel(logging.DEBUG)
    formatter = logging.Formatter('%(asctime)s:%(name)s:%(message)s')
    templogfile = tempfile.gettempdir() + '\\' + 'TST_HA_Debug.log'
    file_handler = logging.FileHandler(templogfile)
    file_handler.setLevel(logging.DEBUG)
    file_handler.setFormatter(formatter)
    stream_handler = logging.StreamHandler()
    stream_handler.setFormatter(formatter)
    logger.addHandler(file_handler)
    logger.addHandler(stream_handler)
    logger.debug('Logging has started')  # This gets written to the file
    process_data(data_object)  # call process_data in process.py; data_object comes from the GUI code elided here
process.py
import logging

def process_data(data):
    logger = logging.getLogger(__name__)
    logger.debug('This message is logged by process')  # This won't get written to the log file, only to the console
    # do some stuff with data here and log some msgs
    return
Main.py will write to the log file, process.py will only write to the console.
I've rewritten your scripts a little so that this code can stand alone. If I changed this too much let me know and I can revisit it. These two files are an example of having it log to file. Note my comments:
## main.py
import logging
from process import process_data
import os

def main():
    # Give this logger a name
    logger = logging.getLogger("Example")
    logger.setLevel(logging.DEBUG)
    formatter = logging.Formatter('%(asctime)s:%(name)s:%(message)s')
    # I changed this to the same directory, for convenience
    templogfile = os.path.join(os.getcwd(), 'TST_HA_Debug.log')
    file_handler = logging.FileHandler(templogfile)
    file_handler.setLevel(logging.DEBUG)
    file_handler.setFormatter(formatter)
    stream_handler = logging.StreamHandler()
    stream_handler.setFormatter(formatter)
    logger.addHandler(file_handler)
    logger.addHandler(stream_handler)
    logging.getLogger("Example").debug('Logging has started')  # This still gets written to the file
    process_data()  # call process_data in process.py

if __name__ == '__main__':
    main()
## process.py
import logging

def process_data(data=None):
    # make sure to grab the correct logger
    logger = logging.getLogger("Example")
    logger.debug('This message is logged by process')  # This does get logged to file now
    # do some stuff with data here and log some msgs
    return
Why does this work? Because the module-level functions use the default root logger, which is not the one you've configured. For more details on this see these docs. There is a similar question that goes more into depth here.
By getting the configured logger before you start logging, you are able to log to the right configuration. Hope this helps!
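A tiny sketch of the difference described above; it assumes main() from the rewritten main.py has already run, so the "Example" logger has its handlers:

import logging

# Goes through the "Example" logger configured in main(), so it reaches
# the file handler and the stream handler attached there
logging.getLogger("Example").debug("this is written to the log file")

# Module-level functions go through the ROOT logger instead; with no
# configuration, this DEBUG record is filtered by root's default WARNING level
logging.debug("this never reaches the Example handlers")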
I have the following setup:
../test/dirA
../test/Conftest.py
../test/Test_1.py
../test/logging.config
Code for Conftest.py
import logging.config
from os import path
import time

config_file_path = path.join(path.dirname(path.abspath(__file__)), 'logging.conf')
log_file_path = path.join(path.dirname(path.abspath(__file__)), 'logFile.log')
logging.config.fileConfig(config_file_path)
logger = logging.getLogger(__name__)

fh = logging.FileHandler(log_file_path)
fh.setLevel(logging.DEBUG)
ch = logging.StreamHandler()
ch.setLevel(logging.ERROR)

# add the handlers to the logger
logger.addHandler(fh)
logger.addHandler(ch)

logging.info('done')
Code for ../test/Test_1.py
import logging

logger = logging.getLogger(__name__)

def test_foo():
    logger = logging.getLogger(__name__)
    logger.info("testing foo")

def test_foobar():
    logger = logging.getLogger(__name__)
    logger.info("testing foobar")
I need to see the logs from both files in logFile.log, but currently I only see the log from conftest.py. What am I missing?
Also, I noticed that if I execute Conftest.py from the test folder (one dir up), for some reason it does not see logging.config.
Is my usage of conftest.py correct for logging?
How else can I achieve the same result?
Thank you
Update:
I used the approach described at:
https://blog.muya.co.ke/configuring-multiple-loggers-python/
https://fangpenlin.com/posts/2012/08/26/good-logging-practice-in-python/
With some changes, this addresses my question.
Another lesson I learned: use logging.getLogger(__name__) at function level, not at module level.
You can see lots of examples out there (including that article, which does it just to keep the examples short) that get the logger at module level. They look harmless, but there is a pitfall: loading a logging configuration from a file disables, by default, all loggers created before the configuration is loaded. So a logger fetched at module level is created before fileConfig runs and stops working.
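If you prefer to keep module-level loggers anyway, fileConfig accepts a flag that opts out of that behaviour; a minimal sketch, using the config file name from the question:

import logging.config

# By default, fileConfig disables every logger created before this call,
# which is exactly the pitfall described above; keep existing loggers alive:
logging.config.fileConfig('logging.conf', disable_existing_loggers=False)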
Well, my first guess is that if you don't import Conftest.py in Test_1.py, the logger setup code is never reached and executed, so the logging in Test_1.py falls back to a default log file location.
Try this line to find the location of the log file:
logging.getLoggerClass().root.handlers[0].baseFilename
I created a module named log.py where a function defines how the log will be recorded. Here is the minimal code:
import logging
import time

def set_up_log():
    """
    Create a logging file.
    """
    #
    # Create the parent logger.
    #
    logger = logging.getLogger(__name__)
    logger.setLevel(logging.INFO)
    #
    # Create a file as handler.
    #
    file_handler = logging.FileHandler('report\\activity.log')
    file_handler.setLevel(logging.INFO)
    formatter = logging.Formatter('%(asctime)s - %(filename)s - %(name)s - %(levelname)4s - %(message)s')
    file_handler.setFormatter(formatter)
    logger.addHandler(file_handler)
    #
    # Start recording.
    #
    logger.info('______ STARTS RECORDING _______')

if __name__ == '__main__':
    set_up_log()
A second module named read_file.py uses this log.py to record potential errors.
import logging
import log

log.set_up_log()
logger = logging.getLogger(__name__)

def read_bb_file(input_file):
    """
    input_file must be the path.
    Open the source_name and read the content. Return the result.
    """
    content = list()
    logger.info('noi')
    try:
        file = open(input_file, 'r')
    except IOError as e:  # modern syntax; the original `except IOError, e:` is Python 2 only
        logger.error(e)
    else:
        for line in file:
            stripped = line.rstrip('\n\r')  # renamed from `str` to avoid shadowing the builtin
            content.append(stripped)
        file.close()
    return content

if __name__ == "__main__":
    logger.info("begin execution")
    c = read_bb_file('textatraiter.out')
    logger.info("end execution")
When launching read_file.py from the command prompt, I get this error:
No handlers could be found for logger "__main__"
My result in the log file is the following:
2014-05-12 13:32:58,690 - log.py - log - INFO - ______ STARTS RECORDING _______
I have read lots of topics here and in the Python documentation, but it seems I did not understand them properly, since I still get this error.
I should add that I would like to keep the log setup apart in a function, and not define it explicitly in my main method.
You have 2 distinct loggers and you're only configuring one.
The first is the one you make in log.py and set up correctly. Its name however will be log, because you have imported this module from read_file.py.
The second logger, the one you're hoping is the same as the first, is the one you assign to the variable logger in read_file.py. Its name will be __main__ because you're calling this module from the command line. You're not configuring this logger.
What you could do is to add a parameter to set_up_log to pass the name of the logger in, e.g.
def set_up_log(logname):
    logger = logging.getLogger(logname)
That way, you will set the handlers and formatters for the correct logging instance.
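read_file.py can then pass its own name when it sets up the logging, for example:

import logging
import log

# __name__ is "__main__" when read_file.py is run from the command line,
# so the logger that actually emits the records is the one configured here
log.set_up_log(__name__)
logger = logging.getLogger(__name__)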
Organizing your loggers in a hierarchy is the way logging was intended to be used by Vinay Sajip, the original author of the module. So your modules would only log to a logging instance with the fully qualified name, as given by __name__. Then your application code sets up the loggers, which is what you're trying to accomplish with your set_up_log function. You just need to remember to pass it the relevant name, that's all. I found this reference very useful at the time.
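As a minimal sketch of that hierarchical pattern (the name myapp is only an example):

import logging

# The application configures one ancestor logger once, at startup...
app_logger = logging.getLogger("myapp")
app_logger.setLevel(logging.INFO)
app_logger.addHandler(logging.FileHandler("activity.log"))

# ...and each module logs through a child such as "myapp.read_file";
# its records propagate up to the ancestor's handlers automatically
logging.getLogger("myapp.read_file").info("begin execution")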