Why doesn't this code generate a log file? - python

Why isn't this code creating a log file? It used to work... I ran it from the command line and from the VS Code debugger.
I have logged at both INFO and ERROR level and still get nothing.
It seems like an empty file should at least be created... but then again, maybe Python creates the file lazily.
import logging
import argparse
import datetime
import sys
import platform

def main():
    print("something")
    logging.error("something")

if __name__ == '__main__':
    the_time = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")
    msg = "Start time: {}".format(the_time)
    print(msg)
    logging.info(msg)

    parser = argparse.ArgumentParser(prog='validation')
    parser.add_argument("-L", "--log", help="log level", required=False, default='INFO')
    args = parser.parse_args()
    numeric_level = getattr(logging, args.log.upper(), None)
    print(numeric_level)
    if not isinstance(numeric_level, int):
        raise ValueError('Invalid log level: %s' % args.log.upper())
    print("setting log level to: {} for log file validate.log".format(args.log.upper()))
    logging.basicConfig(filename='validate.log', level=numeric_level, format='%(asctime)s %(levelname)-8s %(message)s', datefmt='%d-%b-%Y %H:%M:%S')
    logging.info("Python version: {}".format(sys.version))
    main()

The documentation of logging.basicConfig() says:
This function does nothing if the root logger already has handlers configured, unless the keyword argument force is set to True.
Now, does your root logger have handlers? Initially, no. However, when you call any of the module-level logging functions (debug, info, error, etc.) before configuring logging, a default handler is created for the root logger. Calling basicConfig after that point therefore does nothing.
Running this program
print(logging.getLogger().hasHandlers())
logging.error('test message')
print(logging.getLogger().hasHandlers())
produces:
$ python3 test.py
False
ERROR:root:test message
True
To fix your issue, just add force=True as an argument for basicConfig.
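For example, a minimal before/after sketch (the message strings are illustrative; the force keyword requires Python 3.8+):

```python
import logging

# This early call implicitly configures the root logger with a default
# stderr handler, which would make a later plain basicConfig() a no-op.
logging.error("logged before basicConfig")

# force=True removes and closes the existing root handlers first, so the
# file handler is actually installed.
logging.basicConfig(filename='validate.log', level=logging.INFO, force=True)
logging.info("this line ends up in validate.log")
```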


Correct logging(python) format is not being sent to Cloudwatch using watchtower

I have written the following code to enable CloudWatch support.

import logging
from boto3.session import Session
from watchtower import CloudWatchLogHandler

logging.basicConfig(level=logging.INFO, format='[%(asctime)s.%(msecs).03d] [%(name)s,%(funcName)s:%(lineno)s] [%(levelname)s] %(message)s', datefmt='%d/%b/%Y %H:%M:%S')
log = logging.getLogger('Test')

boto3_session = Session(aws_access_key_id=AWS_ACCESS_KEY_ID,
                        aws_secret_access_key=AWS_SECRET_ACCESS_KEY,
                        region_name=REGION_NAME)

cw_handler = CloudWatchLogHandler(log_group=CLOUDWATCH_LOG_GROUP_NAME, stream_name=CLOUDWATCH_LOG_STREAM_NAME, boto3_session=boto3_session)
log.addHandler(cw_handler)
Whenever I try to print any logger statement, I get different output on my local system and on CloudWatch.
Example:
log.info("Hello world")
Output of the above logger statement on my local system (terminal):
[24/Feb/2019 15:25:06.969] [Test,<module>:1] [INFO] Hello world
Output of the above logger statement on CloudWatch (log stream):
Hello world
Is there something I am missing?
In the Lambda execution environment, the root logger is already preconfigured. You'll have to work with it or work around it. You could do some of the following:
You can set the formatting directly on the root logger:
root = logging.getLogger()
root.setLevel(logging.INFO)
root.handlers[0].setFormatter(logging.Formatter(fmt='[%(asctime)s.%(msecs).03d] [%(name)s,%(funcName)s:%(lineno)s] [%(levelname)s] %(message)s', datefmt='%d/%b/%Y %H:%M:%S'))
You could add the Watchtower handler to it (disclaimer: I have not tried this approach):
root = logging.getLogger()
root.addHandler(cw_handler)
However, I'm wondering if you even need to use Watchtower. In Lambda, every line you print to stdout (so even just using print) gets logged to CloudWatch. So using the standard logging interface might be sufficient.
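If you do go the standard-logging route, a minimal sketch could look like this (the function body and return value are illustrative; lambda_handler is the conventional Lambda entry-point name):

```python
import logging

# In the Lambda runtime the root logger already has a handler that
# forwards records to CloudWatch Logs, so setting the level is enough.
logger = logging.getLogger()
logger.setLevel(logging.INFO)

def lambda_handler(event, context):
    # context is the Lambda context object; it is None when testing locally.
    name = context.function_name if context else "local"
    logger.info("Hello world from %s", name)
    return {"statusCode": 200}
```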
This worked for me
import logging
import watchtower
watch = watchtower.CloudWatchLogHandler()
watch.setFormatter(fmt = logging.Formatter('%(levelname)s - %(module)s - %(message)s'))
logger = logging.getLogger()
logger.addHandler(watch)
As per comment in https://stackoverflow.com/a/45624044/1021819 (and another answer there),
just add force=True to logging.basicConfig(), so in your case you need
logging.basicConfig(level=logging.INFO, force=True, format='[%(asctime)s.%(msecs).03d] [%(name)s,%(funcName)s:%(lineno)s] [%(levelname)s] %(message)s',datefmt='%d/%b/%Y %H:%M:%S')
log = logging.getLogger('Test')
This function does nothing if the root logger already has handlers configured, unless the keyword argument force is set to True.
(i.e. AWS case)
Re: force:
If this keyword argument is specified as true, any existing handlers attached to the root logger are removed and closed, before carrying out the configuration as specified by the other arguments.
REF: https://docs.python.org/3/library/logging.html#logging.basicConfig
THANKS:
https://stackoverflow.com/a/72054516/1021819
https://stackoverflow.com/a/45624044/1021819

Python3 logging: Only send ERROR to screen not INFO

What have I missed?
I have a project with lots of files and modules. I want each module, and some classes, to have their own log file. For the most part I want just INFO and ERROR logged, but occasionally I'll want DEBUG.
However, I only want ERROR sent to the screen (IPython on Spyder via Anaconda). This is high-speed code receiving network updates several times each millisecond, and printing all the INFO messages is not only very annoying but crashes Spyder.
In short, I want only ERROR sent to the screen. Everything else goes to a file. Below is my code for creating a separate log file for each class. It is called in the __init__ method of each class that should log items. The name argument is typically __class__.__name__. The fname argument is set as well. Typically, the lvl and formatter args are left with the defaults. The files are being created and they look more or less correct.
My searches are not turning up useful items and I'm missing something when I read the Logging Cookbook.
Code:
import logging, traceback
import time, datetime
import collections

standard_formatter = logging.Formatter('[{}|%(levelname)s]::%(funcName)s-%(message)s\n'.format(datetime.datetime.utcnow()))
standard_formatter.converter = time.gmtime

logging.basicConfig(level=logging.ERROR,
                    filemode='w')

err_handler = logging.StreamHandler()
err_handler.setLevel(logging.ERROR)
err_handler.setFormatter(standard_formatter)

Log = collections.namedtuple(
    'LogLvls',
    [
        'info',
        'debug',
        'error',
        'exception'
    ]
)

def setup_logger(
        name: str,
        fname: [str, None]=None,
        lvl=logging.INFO,
        formatter: logging.Formatter=standard_formatter
) -> Log:
    logger = logging.getLogger(name)
    if fname is None:
        handler = logging.StreamHandler()
    else:
        handler = logging.FileHandler(fname, mode='a')
    handler.setFormatter(formatter)
    logger.setLevel(lvl)
    logger.addHandler(handler)
    logger.addHandler(err_handler)
    return Log(
        debug=lambda msg: logger.debug('{}::{}'.format(name, msg)),
        info=lambda msg: logger.info('{}::{}'.format(name, msg)),
        error=lambda msg: logger.error('{}::{}'.format(name, msg)),
        exception=lambda e: logger.error('{}::{}: {}\n{}'.format(name, type(e), e, repr(traceback.format_stack()))),
    )
You cannot have
logging.basicConfig(level=logging.ERROR,
                    filemode='w')
and your specialized per-logger config at the same time. Comment it out or remove it; then, using
if __name__ == '__main__':
    setup_logger('spam', "/tmp/spaaam")
    logging.getLogger('spam').info('eggs')
    logging.getLogger('spam').error('eggs')
    logging.getLogger('spam').info('eggs')
you'll have only ERROR level messages on the console, and both in the file.
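The same split can also be expressed directly with per-handler levels, without basicConfig at all (a minimal sketch; the filename spaaam.log is illustrative):

```python
import logging

logger = logging.getLogger('spam')
logger.setLevel(logging.INFO)            # logger threshold: INFO and above pass

file_handler = logging.FileHandler('spaaam.log', mode='w')
file_handler.setLevel(logging.INFO)      # file receives INFO and ERROR

console_handler = logging.StreamHandler()
console_handler.setLevel(logging.ERROR)  # console receives ERROR only

logger.addHandler(file_handler)
logger.addHandler(console_handler)

logger.info('goes to file only')
logger.error('goes to file and console')
```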

Change log level in unittest

I have the impression (but cannot find documentation for it) that unittest sets the logging level to WARNING for all loggers. I would like to:
be able to specify the logging level for all loggers, from the command line (when running the tests) or from the test module itself
avoid unittest messing around with the application logging level: when running the tests I want to have the same logging output (same levels) as when running the application
How can I achieve this?
I don't believe unittest itself does anything to logging, unless you use a _CapturingHandler class which it defines. This simple program demonstrates:
import logging
import unittest

logger = logging.getLogger(__name__)

class MyTestCase(unittest.TestCase):
    def test_something(self):
        logger.debug('logged from test_something')

if __name__ == '__main__':
    # DEBUG for demonstration purposes, but you could set the level from
    # cmdline args to whatever you like
    logging.basicConfig(level=logging.DEBUG, format='%(name)s %(levelname)s %(message)s')
    unittest.main()
When run, it prints
__main__ DEBUG logged from test_something
.
----------------------------------------------------------------------
Ran 1 test in 0.000s
OK
showing that it is logging events at DEBUG level, as expected. So the problem is likely to be related to something else, e.g. the code under test, or some other test runner which changes the logging configuration or redirects sys.stdout and sys.stderr. You will probably need to provide more information about your test environment, or better yet a minimal program that demonstrates the problem (as my example above shows that unittest by itself doesn't cause the problem you're describing).
See the example below for logging in Python. You can also change the log level using the setLevel method.
import os
import logging
logging.basicConfig()
logger = logging.getLogger(__name__)
# Change logging level here.
logger.setLevel(os.environ.get('LOG_LEVEL', logging.INFO))
logger.info('For INFO message')
logger.debug('For DEBUG message')
logger.warning('For WARNING message')
logger.error('For ERROR message')
logger.critical('For CRITICAL message')
This is in addition to @Vinay's answer above. It does not answer the original question. I wanted to include command-line options for modifying the log level. The intent was to get detailed logging only when I pass a certain parameter from the command line. This is how I solved it:
import sys
import unittest
import logging
from histogram import Histogram

class TestHistogram(unittest.TestCase):
    def test_case2(self):
        h = Histogram([2,1,2])
        self.assertEqual(h.calculateMaxAreaON(), 3)

if __name__ == '__main__':
    argv = len(sys.argv) > 1 and sys.argv[1]
    loglevel = logging.INFO if argv == '-v' else logging.WARNING
    logging.basicConfig(level=loglevel)
    unittest.main()
The intent is to get more verbose logging. I know it does not answer the question, but I'll leave it here in case someone comes looking for a similar requirement such as this.
This worked for me:
logging.basicConfig(level=logging.DEBUG)
And if I wanted a specific format:
logging.basicConfig(
    level=logging.DEBUG,
    datefmt="%H:%M:%S",
    format="%(asctime)s.%(msecs)03d [%(levelname)-5s] %(message)s",
)
Programmatically:
Put this line of code in each test function defined in your class that you want to set the logging level:
logging.getLogger().setLevel(logging.INFO)
Ex. class:
import unittest
import logging

class ExampleTest(unittest.TestCase):
    def test_method(self):
        logging.getLogger().setLevel(logging.INFO)
        ...
Command Line:
This example just shows how to do it in a normal script, not specific to unittest example. Capturing the log level via command line, using argparse for arguments:
import logging
import argparse
...

def parse_args():
    parser = argparse.ArgumentParser(description='...')
    parser.add_argument('-v', '--verbose', help='enable verbose logging', action='store_const', dest="loglevel", const=logging.INFO, default=logging.WARNING)
    ...

def main():
    args = parse_args()
    logging.getLogger().setLevel(args.loglevel)
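Pieced together with unittest, a complete runnable variant could look like this (the test name and messages are illustrative; parse_known_args is used so options meant for unittest don't trip the parser):

```python
import argparse
import logging
import sys
import unittest

class ExampleTest(unittest.TestCase):
    def test_something(self):
        # Visible only when -v/--verbose raised the root level to INFO.
        logging.getLogger(__name__).info('running test_something')
        self.assertTrue(True)

def parse_args(argv):
    parser = argparse.ArgumentParser(description='run tests with optional verbose logging')
    parser.add_argument('-v', '--verbose', action='store_const', dest='loglevel',
                        const=logging.INFO, default=logging.WARNING,
                        help='enable verbose logging')
    # parse_known_args returns (namespace, leftover_args) instead of
    # erroring out on options it does not recognize.
    return parser.parse_known_args(argv)

if __name__ == '__main__':
    args, _ = parse_args(sys.argv[1:])
    logging.basicConfig(level=args.loglevel)
    # exit=False keeps control after the test run instead of calling sys.exit.
    unittest.main(argv=[sys.argv[0]], exit=False)
```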

Logging in a Python script is not working: results in empty log files

I had a script with logging capabilities, and it stopped working (the logging, not the script). I wrote a small example to illustrate the problem:
import logging
from os import remove
from os.path import exists

def setup_logger(logger_name, log_file, level=logging.WARNING):
    # Erase log if already exists
    if exists(log_file):
        remove(log_file)
    # Configure log file
    l = logging.getLogger(logger_name)
    formatter = logging.Formatter('%(message)s')
    fileHandler = logging.FileHandler(log_file, mode='w')
    fileHandler.setFormatter(formatter)
    streamHandler = logging.StreamHandler()
    streamHandler.setFormatter(formatter)
    l.setLevel(level)
    l.addHandler(fileHandler)
    l.addHandler(streamHandler)

if __name__ == '__main__':
    setup_logger('log_pl', '/home/myuser/test.log')
    log_pl = logging.getLogger('log_pl')
    log_pl.info('TEST')
    log_pl.debug('TEST')
At the end of the script, the file test.log is created, but it is empty.
What am I missing?
Your setup_logger function specifies a (default) level of WARNING
def setup_logger(logger_name, log_file, level=logging.WARNING):
...and you later log two events that are at a lower level than WARNING, and are ignored as they should be:
log_pl.info('TEST')
log_pl.debug('TEST')
If you change your code that calls your setup_logger function to:
if __name__ == '__main__':
    setup_logger('log_pl', '/home/myuser/test.log', logging.DEBUG)
...I'd expect that it works as you'd like.
See the simple example in the Logging HOWTO page.
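The threshold behaviour is easy to check directly with Logger.isEnabledFor (the logger name here is illustrative):

```python
import logging

logger = logging.getLogger('log_pl_demo')
logger.setLevel(logging.WARNING)

# INFO (20) is below WARNING (30), so info()/debug() records are dropped.
print(logger.isEnabledFor(logging.INFO))     # False
print(logger.isEnabledFor(logging.WARNING))  # True

logger.setLevel(logging.DEBUG)
print(logger.isEnabledFor(logging.INFO))     # True
```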

Setting the log level causes fabric to complain with 'No handlers could be found for logger "ssh.transport"'

The following script:
#!/usr/bin/env python
from fabric.api import env, run
import logging
logging.getLogger().setLevel(logging.INFO)
env.host_string = "%s@%s:%s" % ('myuser', 'myhost', '22')
res = run('date', pty=False)
Produces the following output:
[myuser@myhost:22] run: date
No handlers could be found for logger "ssh.transport"
[myuser@myhost:22] out: Thu Mar 29 16:15:15 CEST 2012
I would like to get rid of this annoying error message: No handlers could be found for logger "ssh.transport"
The problem happens when setting the log level (setLevel).
How can I solve this? I need to set the log level, so skipping that won't help.
You need to initialize the logging system. You can make the error go away by doing so in your app thusly:
import logging
logging.basicConfig( level=logging.INFO )
Note: this uses the default Formatter, which is not terribly useful. You might consider something more like:
import logging
FORMAT="%(name)s %(funcName)s:%(lineno)d %(message)s"
logging.basicConfig(format=FORMAT, level=logging.INFO)
My hack is ugly but works:
# This is here to avoid the mysterious messages: 'No handlers could be found for logger "ssh.transport"'
class MyNullHandler(logging.Handler):
    def emit(self, record):
        pass

bugfix_loggers = { }

def bugfix(name):
    global bugfix_loggers
    if not name in bugfix_loggers:
        # print "Setting dummy logger for '%s'" % (name)
        logging.getLogger(name).addHandler(MyNullHandler())
        bugfix_loggers[name] = True
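Since Python 2.7 / 3.1 the standard library ships logging.NullHandler, which does the same thing without a hand-rolled class:

```python
import logging

# A do-nothing handler: it satisfies the "has a handler" check, so the
# "No handlers could be found" warning is suppressed for this logger.
logging.getLogger('ssh.transport').addHandler(logging.NullHandler())
```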
