Am I using the "warnings" module right? - python

I am using this to issue warnings while parsing a configuration file. All sorts of errors could happen while doing this - some fatal, some not. Those non-fatal errors should not interrupt the parsing, but they must not escape the user's attention either. This is where the warnings module comes in.
I am currently doing this (pseudo code):
while parsing:
    try:
        value = dictionary["token"]
    except KeyError:
        warnings.warn("Looks like your config file don't have that token")
This all looks readable and cozy, but the message looks something like this:
C:\Users\Renae\Documents\test.py:3: UserWarning: Looks like your config file don't have that token
warnings.warn("Looks like your config file don't have that token")
Why is it printed twice? Should I be doing some sort of initialization before issuing warnings (like with the logging module)? The standard docs don't seem to have a tutorial on this (or do they?).
What differentiates warnings from print(), stdout or stderr?

When you are using the warnings module, the second line is actually the offending source line taken from the stack; you can control which frame gets reported using the stacklevel argument. Example -
import warnings

def warn():
    warnings.warn("Blah", stacklevel=2)

warn()
This results in -
a.py:6: UserWarning: Blah
warn()
If you set it to a stack level that does not exist, let's say 3 in the above example, then it does not print the source line. Example -
def warn():
    warnings.warn("Blah", stacklevel=3)
Result -
sys:1: UserWarning: Blah
Though as you can see, the reported location also changed to sys:1. You will usually want a meaningful stack frame there instead (typically stacklevel=2, so the warning is attributed to the caller of the function in which it was raised).
Another way to suppress this would be to use the warnings.warn_explicit() method and manually pass in the filename and line number (the line number should point at a line with no actual code on it, otherwise that code would be printed), though I do not advise this.
Also, yes, when using the warnings module the output normally goes to sys.stderr, but you can easily send warnings somewhere else by overriding warnings.showwarning().
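Coming back to stacklevel, here is a minimal sketch of the config-parsing case from the question using stacklevel=2, so the warning points at the code that called the helper (the helper name and the sample dictionary are made up for illustration):

import warnings

def get_token(config, key):
    """Return config[key], warning the caller if the key is missing."""
    try:
        return config[key]
    except KeyError:
        # stacklevel=2 attributes the warning to the caller of get_token()
        warnings.warn("Looks like your config file doesn't have %r" % key, stacklevel=2)
        return None

get_token({"name": "demo"}, "token")

Run as a script, the UserWarning is reported against the get_token(...) call at the bottom rather than against the warnings.warn() line inside the helper.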

Related

Is there any way to write every warning in a .txt file in python?

I would like my program to write every warning in a .txt file. Is there any way to do this without using catch_warnings?
One option is to use the built-in Python logging. There is a lot of information on the internet about how to use the Python logging system, especially in the Python documentation for it (e.g. see the Logging HOWTO). But the simplest way to turn on logging to a file is with the basicConfig() function, like this:
import logging
logging.basicConfig(filename="myfile.txt", level=logging.DEBUG)
Now you can enable logging of warnings with the captureWarnings() function:
logging.captureWarnings(True)
As a bonus, you can now also log your own messages:
logging.info("My own message")
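Put together, a minimal sketch might look like this (the filename is just an example; captureWarnings() routes warnings through the logger named "py.warnings", so they land in the same file as your own messages):

import logging
import warnings

logging.basicConfig(filename="myfile.txt", level=logging.DEBUG)
logging.captureWarnings(True)

warnings.warn("this ends up in myfile.txt instead of stderr")
logging.info("My own message")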
An alternative is to replace the warning handler yourself. This is slightly more work, but it is a bit more focused on just warnings. The warnings module documentation for showwarning() says:
You may replace this function with any callable by assigning to warnings.showwarning.
So you could define your own function with the same parameter list, and assign it to that variable:
import warnings

warning_file = open("warnings.txt", "w")

def mywarning(message, category, filename, lineno, file=None, line=None):
    warning_file.write(warnings.formatwarning(message, category, filename, lineno, line))

warnings.showwarning = mywarning
Note that I've opened warning_file outside the function so it will be opened once when your Python script starts. I also used the formatwarning() function so that the output is the same format as the warnings module usually outputs.
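With that in place, a warning issued anywhere in the script should land in warnings.txt instead of on stderr, for example:

import warnings

warnings.warn("config value missing, using default")
warning_file.flush()  # make sure the formatted warning is written out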

messaging for command line programs

I tend to write a lot of command line utility programs and was wondering if
there is a standard way of messaging the user in Python. Specifically, I would like to print error and warning messages, as well as other more conversational output in a manner that is consistent with Unix conventions. I could produce these myself using the built-in print function, but the messages have a uniform structure so it seems like it would be useful to have a package to handle this for me.
For example, for commands that you run directly in the command line you might
get messages like this:
This is normal output.
error: no files given.
error: parse.c: no such file or directory.
error: parse.c:7:16: syntax error.
warning: /usr/lib64/python2.7/site-packages/simplejson:
not found, skipping.
If the commands might be run in a script or pipeline, they should include their name:
grep: /usr/dict/words: no such file or directory.
It would be nice if it could handle levels of verbosity.
These things are all relatively simple in concept, but can result in a lot of
extra conditionals and complexity for each print statement.
I have looked at the logging facility in Python, but it seems overly complicated and more suited for daemons than command line utilities.
I can recommend Inform. It is the only package I have seen that seems to address this need. It provides a variety of print functions that print in different circumstances or with different headers. For example:
log() -- prints to log file, no header
comment() -- prints if verbose, no header
display() -- prints if not quiet, no header
output() -- always prints, no header
warning() -- always prints with warning header
error() -- always prints with error header
fatal() -- always prints with error header, terminates program.
Inform refers to these functions as 'informants'. Informants are very similar to the Python print function in that they take any number of arguments and build the message by joining them together. They also allow you to specify a culprit, which is added to the front of the message.
For example, here is a simple search and replace program written using Inform.
#!/usr/bin/env python3
"""
Replace a string in one or more files.

Usage:
    replace [options] <target> <replacement> <file>...

Options:
    -v, --verbose    indicate whether file is changed
"""
from docopt import docopt
from inform import Inform, comment, error, os_error
from pathlib import Path

# read command line
cmdline = docopt(__doc__)
target = cmdline['<target>']
replacement = cmdline['<replacement>']
filenames = cmdline['<file>']
Inform(verbose=cmdline['--verbose'], prog_name=True)

for filename in filenames:
    try:
        filepath = Path(filename)
        orig = filepath.read_text()
        new = orig.replace(target, replacement)
        comment('updated' if orig != new else 'unchanged', culprit=filename)
        filepath.write_text(new)
    except OSError as e:
        error(os_error(e))
Inform() is used to specify your preferences; comment() and error() are the informants, which actually print the messages; and os_error() is a useful utility that converts OSError exceptions into a string that can be used as an error message.
If you were to run this, you might get the following output:
> replace -v tiger toe eeny meeny miny moe
eeny: updated
meeny: unchanged
replace error: miny: no such file or directory.
replace error: moe: no such file or directory.
Hopefully this gives you an idea of what Inform does. There is a lot more power there. For example, it provides a collection of utilities that are useful when printing messages. An example is os_error(), but there are others. You can also define your own informants, which is a way of handling multiple levels of verbosity.
import logging
logging.basicConfig(level=logging.DEBUG, format='%(asctime)s %(levelname)s %(message)s')
The level specified above controls the verbosity of the output.
You can attach handlers (this is where the complexity outweighs the benefit in my case) to the logging to send output to different places (https://docs.python.org/2/howto/logging-cookbook.html#multiple-handlers-and-formatters) but I haven't needed more than command line output to date.
To produce output you must specify its verbosity as you log it:
logging.debug("This debug message will rarely appeal to end users")
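As a small illustration, here is one way to wire a --verbose flag to the logging level for a command line tool; the flag name and messages are just examples:

import argparse
import logging
import sys

parser = argparse.ArgumentParser()
parser.add_argument("-v", "--verbose", action="count", default=0)
args = parser.parse_args()

# no flag: warnings and errors only; -v: also info; -vv: also debug
level = {0: logging.WARNING, 1: logging.INFO}.get(args.verbose, logging.DEBUG)
logging.basicConfig(stream=sys.stderr, level=level, format="%(levelname)s: %(message)s")

logging.info("This is conversational output, shown with -v")
logging.error("no files given.")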
I hadn't read your very last line; the answer seemed obvious by then, and I wouldn't have imagined that a single basicConfig line could be described as "overly complicated". It's all I use 60% of the time when print is not enough.

Multi-line logging in Python

I'm using Python 3.3.5 and the logging module to log information to a local file (from different threads). There are cases where I'd like to output some additional information, without knowing exactly what that information will be (e.g. it might be one single line of text or a dict).
What I'd like to do is add this additional information to my log file, after the log record has been written. Furthermore, the additional info is only necessary when the log level is error (or higher).
Ideally, it would look something like:
2014-04-08 12:24:01 - INFO - CPU load not exceeded
2014-04-08 12:24:26 - INFO - Service is running
2014-04-08 12:24:34 - ERROR - Could not find any active server processes
Additional information, might be several lines.
Dict structured information would be written as follows:
key1=value1
key2=value2
2014-04-08 12:25:16 - INFO - Database is responding
Short of writing a custom log formatter, I couldn't find much which would fit my requirements. I've read about filters and contexts, but again this doesn't seem like a good match.
Alternatively, I could just write to a file using the standard I/O, but most of the functionality already exists in the Logging module, and moreover it's thread-safe.
Any input would be greatly appreciated. If a custom log formatter is indeed necessary, any pointers on where to start would be fantastic.
Keeping in mind that many people consider multi-line log messages bad practice (understandably so: log processors such as DataDog or Splunk are built to handle single-line records, and multi-line entries are much harder for them to parse), you can play with the extra parameter and a custom filter to append extra material to the message that gets shown (take a look at the usage of 'extra' in the logging package documentation).
import logging

class CustomFilter(logging.Filter):
    def filter(self, record):
        if hasattr(record, 'dct') and len(record.dct) > 0:
            for k, v in record.dct.items():
                record.msg = record.msg + '\n\t' + k + ': ' + v
        return super(CustomFilter, self).filter(record)

if __name__ == "__main__":
    logging.getLogger().setLevel(logging.DEBUG)
    extra_logger = logging.getLogger('extra_logger')
    extra_logger.setLevel(logging.INFO)
    extra_logger.addFilter(CustomFilter())

    logging.debug("Nothing special here... Keep walking")
    extra_logger.info("This shows extra",
                      extra={'dct': {"foo": "bar", "baz": "loren"}})
    extra_logger.debug("You shouldn't be seeing this in the output")

    extra_logger.setLevel(logging.DEBUG)
    extra_logger.debug("Now you should be seeing it!")
That code outputs:
DEBUG:root:Nothing special here... Keep walking
INFO:extra_logger:This shows extra
foo: bar
baz: loren
DEBUG:extra_logger:Now you should be seeing it!
I still recommend calling the super's filter function in your custom filter, mainly because that is the function that decides whether to show the message or not (for instance, if your logger's level is set to logging.INFO and you log something using extra_logger.debug, that message should not be seen, as shown in the example above).
I just add \n symbols to the output text.
I'm using a simple line splitter in my smaller applications:

for line in logmessage.splitlines():
    writemessage = logtime + " - " + line + "\n"
    logging.info(str(writemessage))
Note that this is not thread-safe and should probably only be used in low-volume logging applications.
However, you can output almost anything to the log this way, as it preserves your formatting. I have used it, for example, to output JSON API responses formatted with json.dumps(parsed, indent=4, sort_keys=True).
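A short sketch of that combination (the response dict here is made up):

import json
import logging

logging.basicConfig(level=logging.INFO, format="%(asctime)s - %(message)s")

parsed = {"status": "ok", "items": [1, 2, 3]}
for line in json.dumps(parsed, indent=4, sort_keys=True).splitlines():
    logging.info(line)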
It seems that I made a small typo when defining my LogFormatter string: by accidentally escaping the newline character, I wrongly assumed that writing multi-line output to a log file was not possible.
Cheers to @Barafu for pointing this out (which is why I marked his as the correct answer).
Here's the sample code:
import logging
lf = logging.Formatter('%(levelname)-8s - %(message)s\n%(detail)s')
lh = logging.FileHandler(filename=r'c:\temp\test.log')
lh.setFormatter(lf)
log = logging.getLogger()
log.setLevel(logging.DEBUG)
log.addHandler(lh)
log.debug('test', extra={'detail': 'This is a multi-line\ncomment to test the formatter'})
The resulting output would look like this:
DEBUG - test
This is a multi-line
comment to test the formatter
Caveat:
If there is no detail information to log, and you pass an empty string, the logger will still output a newline. Thus, the remaining question is: how can we make this conditional?
One approach would be to update the logging formatter before actually logging the information, as described here.
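Another option, sketched below, is a small Formatter subclass that appends the extra text only when the record actually carries a non-empty detail attribute (the attribute name mirrors the example above):

import logging

class DetailFormatter(logging.Formatter):
    def format(self, record):
        base = super(DetailFormatter, self).format(record)
        detail = getattr(record, 'detail', '')
        # only add the extra block when there is something to show
        return base + '\n' + detail if detail else base

lh = logging.FileHandler(filename=r'c:\temp\test.log')
lh.setFormatter(DetailFormatter('%(levelname)-8s - %(message)s'))
log = logging.getLogger()
log.setLevel(logging.DEBUG)
log.addHandler(lh)

log.debug('with detail', extra={'detail': 'extra line 1\nextra line 2'})
log.debug('without detail')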

Can I use nose to check that a function raises a warning, but also keep that warning message from printing on the terminal?

I want to write a nose unit test that confirms that a function issues an expected warning. This answer shows how.
However, I would also like to not see the warning's message in the terminal window from which I'm running nosetests. (It clutters up nosetests' output.)
Is there a way to nose-test to see if a function issues a warning, without having the warning text show up?
I would look into using the patch decorator from the https://pypi.python.org/pypi/mock module to fake the warnings.warn function. You can then verify that the call was made without the real warning ever being emitted.
Something like this (untested):
import mock

@mock.patch('yourmodule.warnings')
def your_test(self, mock_warnings):
    your_code()
    mock_warnings.warn.assert_called_with("deprecated", DeprecationWarning)
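An alternative, if you would rather exercise the real warnings machinery, is to record the warning instead of letting it print; a minimal sketch (your_code() is assumed to call warnings.warn with a DeprecationWarning):

import warnings

def test_warns_without_printing():
    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("always")  # make sure the warning is not filtered out
        your_code()
        assert any(issubclass(w.category, DeprecationWarning) for w in caught)

Because catch_warnings(record=True) collects the warnings into the caught list, nothing is written to the terminal.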

trapping a MySQL warning

In my Python script I would like to trap a "Data truncated for column 'xxx'" warning during my query using MySQL.
I saw some posts suggesting the code below, but it doesn't work.
Do you know if some specific module must be imported or if some option/flag should be called before using this code?
Thanks all
Afeg
import MySQLdb

try:
    cursor.execute(some_statement)
    # execution always continues here: no warning is trapped
    # by the code below
except MySQLdb.Warning as e:
    # handle warnings, if the cursor you're using raises them
    pass
except Warning as e:
    # handle warnings, if the cursor you're using raises them
    pass
Warnings are just that: warnings. They get reported to (usually) stderr, but nothing else is done. You can't catch them like exceptions because they aren't being raised.
You can, however, configure what to do with warnings, and turn them off or turn them into exceptions, using the warnings module. For instance, warnings.filterwarnings('error', category=MySQLdb.Warning) to turn MySQLdb.Warning warnings into exceptions (in which case they would be caught using your try/except) or 'ignore' to not show them at all. You can (and probably should) have more fine-grained filters than just the category.
Raise MySQL Warnings as errors:
import warnings, MySQLdb
warnings.filterwarnings('error', category=MySQLdb.Warning)
To ignore instead of raising an error, replace "error" with "ignore".
Handle them in a try-except block like:
try:
    # a MySQL DB operation that raises a warning,
    # for example a data truncated warning
    pass
except Warning as a_warning:
    # do something here
    pass
I would try first to set the sql_mode. You can do this at the session level. If you execute a:
SET @@sql_mode := 'TRADITIONAL';
(You could also set it at the server level, but you need access to the server to do that; and that setting can still be overridden at the session level, so the bottom line is: always set it at the session level, immediately after establishing the connection.)
then many things that are normally warnings become errors. I don't know how those manifest themselves at the python level, but the clear advantage is that the changes are not stored in the database. See: http://dev.mysql.com/doc/refman/5.1/en/server-sql-mode.html#sqlmode_traditional
If you want to trap it only to ignore it, see "Temporarily suppressing warnings" in Python's documentation:
https://docs.python.org/2/library/warnings.html#temporarily-suppressing-warnings
import warnings

with warnings.catch_warnings():
    warnings.simplefilter("ignore")
    # put the query that raises the warning here
Otherwise, just read the documentation for Python's "warnings" module; you can turn warnings into exceptions if you want to catch them later, and so on. It's all here: https://docs.python.org/2/library/warnings.html
Just to add to Thomas Wouters' reply: there is no need to import the warnings module to turn warnings into errors. Just run your script with "-W error" (or "-W ignore") as a flag for Python.
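For example (the script name is made up):

python -W error my_script.py     # every warning is raised as an exception
python -W ignore my_script.py    # warnings are suppressed entirely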
Have you tried using MySQL's SHOW WARNINGS command?
Stop using MySQLdb. It has such stupid behavior as truncating data and issuing only a warning. Use oursql instead.
