I started using Python logging to tidy up the messages generated by some code. I have the statement
logging.basicConfig(filename='my_log.log', filemode='w', encoding='utf-8', level=logging.INFO)
in the main Python module and was expecting the log file to be created from scratch every time the program runs. However, Python appends the new messages to the existing file. I wrote the program below to test what is going on, but now no log file is created at all and the messages appear in the console (Spyder 5). I am running WinPython 3.9 on a Windows 10 machine.
import datetime
import logging
logging.basicConfig(filename='my_log.log', filemode='w', level=logging.INFO)
t = datetime.datetime.now()
logging.info("Something happened at %s" % t)
All suggestions welcome - thanks!
Update: following @bzu's suggestion, I changed my code to this:
import datetime
import logging
logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)
handler = logging.FileHandler('my_log.log', 'w')
logger.addHandler(handler)
t = datetime.datetime.now()
logging.info("Something happened at %s" % t)
This created the log file but nothing was written to it. If I change the final line to
logger.info("Something happened at %s" % t)
then the message appears in the log file. However, I want to log messages from different modules. These modules import logging and so know about logging in general, but I don't know how they can access this specific logger object.
If you run the program with plain python (python log_test_file.py), it should work as expected.
I suspect that the problem with Spyder is that it overrides the logging module config.
To avoid this kind of problem, the recommended way is to use a custom logger per file:
import logging
logger = logging.getLogger(__name__)
# boilerplate: can be converted into a method and reused
logger.setLevel(logging.INFO)
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
handler = logging.FileHandler(filename='my_log.log', mode='w', encoding='utf-8')
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.info("Something happened")
After some searching on the web, adding force=True (available since Python 3.8) to logging.basicConfig seemed to solve the problem, i.e.:
logging.basicConfig(filename='my_log.log', filemode='w', encoding='utf-8', level=logging.INFO, force=True)
I am very new to Python and am trying to add logs (written to a log file) to a project. As soon as I added some logging code, the runtime tripled, so it was suggested that I use a facility that could store the logs in memory and only later write them out to a log file, in the hope that the runtime would not increase so much.
I started searching for ways to do that and found this resource on MemoryHandler, which I interpreted as something that could help me achieve my purpose:
Base1.py (Attempt 1)
I tried this code first, without the MemoryHandler:
import logging
import logging.config
import logging.handlers
formatter = logging.Formatter('%(asctime)s — %(name)s — %(levelname)s — %(funcName)s:%(lineno)d — %(message)s')
def setup_logger(__name__, log_file, level=logging.INFO):
    '''to set up loggers'''
    handler = logging.FileHandler(log_file)
    handler.setFormatter(formatter)
    logger = logging.getLogger(__name__)
    logger.setLevel(level)
    logger.addHandler(handler)
    return logger

checking = setup_logger(__name__, 'run_logfile.log')
checking.info("this is a test")
It took around 11 seconds to run the entire project. That is a lot: the data volume is currently nil, and the run should take around 3 seconds, which is what it took before I added the logging code. So next, I tried the code below in the hope of making it faster:
Base2.py (Attempt 2)
import logging
import logging.config
import logging.handlers
formatter = logging.Formatter('%(asctime)s — %(name)s — %(levelname)s — %(funcName)s:%(lineno)d — %(message)s')
def setup_logger(__name__, log_file, level=logging.INFO):
    '''to set up loggers'''
    handler = logging.FileHandler(log_file)
    handler.setFormatter(formatter)
    memoryhandler = logging.handlers.MemoryHandler(
        capacity=1024*100,
        flushLevel=logging.INFO,
        target=handler,
        flushOnClose=True
    )
    logger = logging.getLogger(__name__)
    logger.setLevel(level)
    logger.addHandler(handler)
    logger.addHandler(memoryhandler)
    return logger

checking = setup_logger(__name__, 'run_logfile.log')
checking.info("this is a test")
This too takes 11 seconds. It does work, but it is not at all faster than without the MemoryHandler, so I was wondering whether my code is still wrong.
Am I doing anything wrong here? Or is there a way to keep the logs without making the runtime longer?
It wouldn't be faster if you're logging INFO messages and the flushLevel is also INFO: the buffer will flush after every message. What happens if you set the flushLevel to e.g. ERROR?
Also, if adding logging triples your execution time, something else is likely to be wrong; logging doesn't add that much overhead.
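A minimal sketch of that suggestion. Note also that Attempt 2 attaches both the FileHandler and the MemoryHandler to the logger, so every record still hits the file directly; for the buffering to matter, only the MemoryHandler should be attached (file name and capacity here are illustrative):
import logging
import logging.handlers
formatter = logging.Formatter('%(asctime)s — %(name)s — %(levelname)s — %(message)s')
file_handler = logging.FileHandler('run_logfile.log')
file_handler.setFormatter(formatter)
# buffer up to 100 records in memory; write them out only when the buffer
# fills, an ERROR-or-worse record arrives, or the handler is closed
memory_handler = logging.handlers.MemoryHandler(
    capacity=100,
    flushLevel=logging.ERROR,
    target=file_handler,
    flushOnClose=True,
)
logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)
logger.addHandler(memory_handler)  # only the memory handler; it owns the file handler
logger.info("buffered in memory, not yet on disk")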
My logging setup is:
import coloredlogs
import logging
import sys
# Create a logger object.
# logger = logging.getLogger(__name__)
# By default the install() function installs a handler on the root logger,
# this means that log messages from your code and log messages from the
# libraries that you use will all show up on the terminal.
coloredlogs.install(level='DEBUG')
logging.basicConfig(
format='%(asctime)s %(levelname)-8s %(message)s',
level=logging.INFO,
stream=sys.stdout,
datefmt='%Y-%m-%d %H:%M:%S')
If I configure the console to use Python for output, all lines start at column 0, but all output is red. If I specify 'use terminal' instead, the colors are there, but the lines don't start at column 0: each line starts at the end column of the previous one.
How can I get all log messages starting at column 0 AND in color?
Try adding isatty to your call to install. This overrides whatever the auto-detection does when it tries to determine what type of terminal is being used.
import coloredlogs
import logging
import sys
logger = logging.getLogger(__name__)
coloredlogs.install(level=logging.DEBUG, logger=logger, isatty=True,
fmt="%(asctime)s %(levelname)-8s %(message)s",
stream=sys.stdout,
datefmt='%Y-%m-%d %H:%M:%S')
logger.debug("this is a debugging message")
logger.info("this is an informational message")
logger.warning("this is a warning message")
logger.error("this is an error message")
logger.critical("this is a critical message")
Explanation:
If I understood the problem correctly, you are seeing the logging handler default to sys.stderr instead of sys.stdout (this is why the text appears in red).
There are likely two issues going on:
1. coloredlogs.install needs to be told what formatting rules you want in the handler, not logging.basicConfig.
2. The automatic terminal detection fails unless you either force the formatter to think it's a terminal (isatty=True) or set PyCharm to 'simulate' a terminal. The red text is fixed by passing in stream=sys.stdout.
I have a main process which makes use of various other modules, and these modules in turn use yet other modules. I need to collect all the logs in a single log file. Because of the TimedRotatingFileHandler, my log behaves differently after midnight. I found out why that is, but not how to solve it.
Below is log_config.py, which all other modules use to get a logger and log.
import logging
import sys
from logging.handlers import TimedRotatingFileHandler

FORMATTER = logging.Formatter("%(asctime)s — %(name)s — %(message)s")
LOG_FILE = "my_app.log"

def get_file_handler():
    file_handler = TimedRotatingFileHandler(LOG_FILE, when='midnight')
    file_handler.setFormatter(FORMATTER)
    return file_handler

def get_logger(logger_name):
    logger = logging.getLogger(logger_name)
    logger.setLevel(logging.DEBUG)  # better to have too much log than not enough
    logger.addHandler(get_file_handler())
    # with this pattern, it's rarely necessary to propagate the error up to parent
    logger.propagate = False
    return logger
All other modules call
logger = log_config.get_logger(__name__)
and use it to log.
I came to know about QueueHandler and QueueListener but am not sure how to use them in my code.
How can I use these to serialize the logs to a single file?
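A minimal sketch of the usual QueueHandler/QueueListener wiring, assuming one shared queue and a single listener-owned file handler replace the per-logger handlers (names follow log_config.py above):
import logging
from logging.handlers import TimedRotatingFileHandler, QueueHandler, QueueListener
from queue import Queue

FORMATTER = logging.Formatter("%(asctime)s — %(name)s — %(message)s")
LOG_FILE = "my_app.log"

log_queue = Queue(-1)  # one unbounded queue shared by every logger

# exactly one file handler, owned by the listener, so rotation happens in one place
file_handler = TimedRotatingFileHandler(LOG_FILE, when='midnight')
file_handler.setFormatter(FORMATTER)

listener = QueueListener(log_queue, file_handler)
listener.start()  # call listener.stop() at shutdown to flush remaining records

def get_logger(logger_name):
    logger = logging.getLogger(logger_name)
    logger.setLevel(logging.DEBUG)
    logger.addHandler(QueueHandler(log_queue))  # loggers only enqueue records
    logger.propagate = False
    return logger
The after-midnight problem presumably comes from each get_logger call attaching its own TimedRotatingFileHandler to the same file, so several handlers try to rotate it; with the queue, only the listener's single handler ever touches the file.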
I'm using the logging module from the Python standard library and would like to obtain the current Formatter. The reason is that I'm using the multiprocessing module, and for each process I'd like to assign its logger a separate file handler that logs to its own file. When I do this in the following way
logger = logging.getLogger('subprocess')
log_path = 'log.txt'
with open(log_path, 'a') as outfile:
    handler = logging.StreamHandler(outfile)
    logger.addHandler(handler)
the messages in log.txt have no formatting at all, but I would like the message to be in the same format as my typical logging format. My typical logging setup is shown below
logging.basicConfig(
format='%(asctime)s | %(levelname)s | %(name)s - %(process)d | %(message)s',
datefmt='%Y-%m-%d %H:%M:%S',
level=os.environ.get('LOGLEVEL', 'INFO').upper(),
)
logger = logging.getLogger(__name__)
Since it looks like the Formatter object is associated with the Handler object, I tried to obtain handler from the main Logger object which has the correct formatting. So I called
logger.handlers
but I got an empty list [].
So my question is, where do I get the Formatter object which has the same format that my main logger has?
For the record, I'm using Python 3.8 on macOS, but the code will be deployed to Linux (still Python 3.8).
Judging by the CPython source code for logging.basicConfig, the formatter object containing your format string is eventually added to a handler that is attached to the root logger. So you can obtain the handler (and therefore the formatter) from the root logger object by doing
logging.root.handlers[0].formatter
In particular, to achieve the subprocess logging handler in the question, you can do this instead
handler = logging.StreamHandler(outfile)
handler.setFormatter(logging.root.handlers[0].formatter)
handler.setLevel(logging.root.level)
logger.addHandler(handler)
which will set your handler to the same logging format (and level as well!) as that of your usual logger object.
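Putting the pieces together, a self-contained sketch; note it swaps the question's StreamHandler over an open file for a plain FileHandler, which manages the file itself:
import logging
import os

# the usual setup: basicConfig attaches a formatted handler to the root logger
logging.basicConfig(
    format='%(asctime)s | %(levelname)s | %(name)s - %(process)d | %(message)s',
    datefmt='%Y-%m-%d %H:%M:%S',
    level=os.environ.get('LOGLEVEL', 'INFO').upper(),
)

# per-process logger reusing the root handler's formatter and level
logger = logging.getLogger('subprocess')
handler = logging.FileHandler('log.txt')  # keeps the file open for the handler's lifetime
handler.setFormatter(logging.root.handlers[0].formatter)
handler.setLevel(logging.root.level)
logger.addHandler(handler)

logger.info("formatted like the root logger's output")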
I have written the following code to enable CloudWatch support.
import logging
from boto3.session import Session
from watchtower import CloudWatchLogHandler
logging.basicConfig(level=logging.INFO,format='[%(asctime)s.%(msecs).03d] [%(name)s,%(funcName)s:%(lineno)s] [%(levelname)s] %(message)s',datefmt='%d/%b/%Y %H:%M:%S')
log = logging.getLogger('Test')
boto3_session = Session(aws_access_key_id=AWS_ACCESS_KEY_ID,
aws_secret_access_key=AWS_SECRET_ACCESS_KEY,
region_name=REGION_NAME)
cw_handler = CloudWatchLogHandler(log_group=CLOUDWATCH_LOG_GROUP_NAME,stream_name=CLOUDWATCH_LOG_STREAM_NAME,boto3_session=boto3_session)
log.addHandler(cw_handler)
Whenever I try to print any logger statement, I get different output on my local system and on CloudWatch.
Example:
log.info("Hello world")
Output of the above logger statement on my local system (terminal):
[24/Feb/2019 15:25:06.969] [Test,<module>:1] [INFO] Hello world
Output of the above logger statement on CloudWatch (log stream):
Hello world
Is there something I am missing?
In the Lambda execution environment, the root logger is already preconfigured. You'll have to work with it or work around it. You could do some of the following:
You can set the formatting directly on the root logger:
root = logging.getLogger()
root.setLevel(logging.INFO)
root.handlers[0].setFormatter(logging.Formatter(fmt='[%(asctime)s.%(msecs).03d] [%(name)s,%(funcName)s:%(lineno)s] [%(levelname)s] %(message)s', datefmt='%d/%b/%Y %H:%M:%S'))
You could add the Watchtower handler to it (disclaimer: I have not tried this approach):
root = logging.getLogger()
root.addHandler(cw_handler)
However, I'm wondering if you even need to use Watchtower. In Lambda, every line you print to stdout (so even just using print) gets logged to CloudWatch. So using the standard logging interface might be sufficient.
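For illustration, a minimal sketch of that last point; in Lambda the preconfigured root logger already ships records to CloudWatch Logs, so no extra handler is needed:
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def lambda_handler(event, context):
    # records on the preconfigured root logger end up in CloudWatch Logs
    logger.info("Hello world")
    return {"statusCode": 200}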
This worked for me:
import logging
import watchtower
watch = watchtower.CloudWatchLogHandler()
watch.setFormatter(logging.Formatter('%(levelname)s - %(module)s - %(message)s'))
logger = logging.getLogger()
logger.addHandler(watch)
As per the comment in https://stackoverflow.com/a/45624044/1021819 (and another answer there), just add force=True to logging.basicConfig(). So in your case you need:
logging.basicConfig(level=logging.INFO, force=True, format='[%(asctime)s.%(msecs).03d] [%(name)s,%(funcName)s:%(lineno)s] [%(levelname)s] %(message)s',datefmt='%d/%b/%Y %H:%M:%S')
log = logging.getLogger('Test')
This function does nothing if the root logger already has handlers configured, unless the keyword argument force is set to True.
(i.e. the AWS Lambda case, where the root logger already has a handler)
Re: force:
If this keyword argument is specified as true, any existing handlers attached to the root logger are removed and closed, before carrying out the configuration as specified by the other arguments.
REF: https://docs.python.org/3/library/logging.html#logging.basicConfig
THANKS:
https://stackoverflow.com/a/72054516/1021819
https://stackoverflow.com/a/45624044/1021819