This question already has an answer here:
Why does Python read from the current directory when printing a traceback?
(1 answer)
Closed 3 years ago.
When the Python interpreter reports an error/exception (I'm just going to say "error" to refer to both of these from now on), it prints the line number and contents of the line that caused the error.
Interestingly, if you have a long-running Python script that raises an error and you change the .py file while the script is running, the interpreter can report an incorrect line as raising the error, based on the changed contents of the .py file.
MWE:
sample.py
from time import sleep

for i in range(10):
    print(i)
    sleep(1)

raise Exception("foo", "bar")
This script runs for 10 seconds, then raises an exception.
sample2.py
from time import sleep

for i in range(10):
    print(i)
    sleep(1)

"""
This
is
just
some
filler
to
demonstrate
the
behavior
"""

raise Exception("foo", "bar")
This file is identical to sample.py except that it has some junk between the end of the loop and the line that raises the exception.
What I Did
python3 sample.py
In a second terminal window, mv sample.py sample.py.bak && cp sample2.py sample.py before sample.py finishes execution
Expected Behavior
The interpreter reports the following:
Traceback (most recent call last):
File "sample.py", line 7, in <module>
Exception: ('foo', 'bar')
Here, the interpreter reports that there was an exception on line 7 of sample.py and prints the Exception.
Actual Behavior
The interpreter reports the following:
Traceback (most recent call last):
File "sample.py", line 7, in <module>
"""
Exception: ('foo', 'bar')
Here, the interpreter also reports """ when it reports the exception.
It seems to be looking in the file on disk to find this information, rather than the file loaded into memory to run the program.
Source of my Confusion
The following is my mental model for what happens when I run python3 sample.py:
The interpreter loads the contents of sample.py into memory
The interpreter performs lexical analysis, semantic analysis, code generation, etc. to produce machine code
The generated code is sent to the CPU and executed
If an error is raised, the interpreter consults the in-memory representation of the source code to produce an error message
Clearly, there is a flaw in my mental model.
What I want to know:
Why does the Python interpreter consult the file on disk to generate error message, rather than looking in memory?
Is there some other flaw in my understanding of what the interpreter is doing?
As per the answer linked by @b_c,
Python doesn't keep track of what source code corresponds to any compiled bytecode. It might not even read that source code until it needs to print a traceback.
[...]
When Python needs to print a traceback, that's when it tries to find source code corresponding to all the stack frames involved. The file name and line number you see in the stack trace are all Python has to go on
[...]
The default sys.excepthook goes through the native call PyErr_Display, which eventually winds up using _Py_DisplaySourceLine to display individual source lines. _Py_DisplaySourceLine unconditionally tries to find the file in the current working directory (for some reason - misguided optimization?), then calls _Py_FindSourceFile to search sys.path for a file matching that name if the working directory didn't have it.
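Here is a minimal sketch of the same idea at the Python level (my own illustration, not the interpreter's C code): a compiled code object stores only a file name and line numbers, and the source text is looked up on disk only when a traceback is rendered, via the linecache module.
import linecache
import traceback

# Compile some source under a file name that does not exist on disk.
code = compile("raise Exception('foo', 'bar')", "fake_file.py", "exec")
print(code.co_filename)  # 'fake_file.py' -- the code object keeps no source text

try:
    exec(code)
except Exception:
    # The traceback names fake_file.py, line 1, but shows no source line,
    # because there is nothing on disk to read back.
    traceback.print_exc()
    print(repr(linecache.getline("fake_file.py", 1)))  # '' -- the lookup happens now, on disk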
Related
This question already has an answer here:
redirect_stderr does not work (Python 3.5)
(1 answer)
Closed last year.
I need to redirect my error messages from the console to a file. For this example, I need to write the following error message into a file:
Traceback (most recent call last):
File "C:/Users/", line 5, in <module>
1/0
ZeroDivisionError: division by zero
I have already tried to do something like this:
from contextlib import redirect_stdout

with open('error.txt', 'w') as f:
    with redirect_stdout(f):
        1/0
        print('here is my error')
If you plan to run your script from the console itself, you can just use Bash's ">" operator to send the output of your command (in this situation: your script) to a file, just like this:
python ./yourScript > ./outputFile
Everything that your script prints to stdout will go into the specified file; note that the traceback itself goes to stderr, so to capture it as well you would redirect stderr too (see the 2> examples further down).
You need to catch the error or your application will fail:
with open('error.txt', 'w') as f:
    try:
        1/0
    except ZeroDivisionError as e:
        f.write(str(e))  # write() needs a string, not the exception object
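If you want the whole traceback (the "Traceback (most recent call last):" block from your example) in the file rather than just the exception message, a small variation using the traceback module should work:
import traceback

with open('error.txt', 'w') as f:
    try:
        1/0
    except ZeroDivisionError:
        # Writes the full traceback text, formatted as the interpreter would print it
        traceback.print_exc(file=f)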
Note: the ">" redirection above assumes you're using Bash. I see that you are using Windows, so it's likely that you aren't using Bash, but from what I've read this should still be applicable in Cmd.exe; the syntax is just slightly different.
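For reference, Cmd.exe supports very similar redirection syntax (a generic example, not specific to your script name):
python yourScript.py > output.txt 2>&1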
I think it's better to handle error message output outside of your script. Your script should attempt to do the "happy path" work and print an error to stderr if something goes wrong. This is what should happen by default in every programming language. Python gets this right.
Here is an example script:
print("Dividing by 0 now, I sure hope this works!")
1/0
print("Holy cow, it worked!")
If I run this script, the first line prints to stdout, and then the ZeroDivisionError output prints to stderr:
$ python /tmp/script.py
Dividing by 0 now, I sure hope this works!
Traceback (most recent call last):
File "/tmp/script.py", line 3, in <module>
1/0
ZeroDivisionError: integer division or modulo by zero
If I want to run the script and collect any error output in a file, I can use redirection in my shell when I run the command:
$ python /tmp/script.py 2> /tmp/errors.txt
Dividing by 0 now, I sure hope this works!
$ cat /tmp/errors.txt
Traceback (most recent call last):
File "/tmp/script.py", line 3, in <module>
1/0
ZeroDivisionError: integer division or modulo by zero
After a bit over 8 years of using Python, I ran into an issue today with Python 3.8: it executed code that I had commented out.
I was able to interrupt it as it was going through a code path that should have been blocked by the comment, and to capture this screenshot:
As the function names indicate, the operation in question is somewhat time-consuming to roll back, and I would love to know what happened so I can avoid dealing with that in the future.
My current best explanation is that, since the code is run on a remote machine, for whatever reason the commenting-out had not gone through when the code started, but had by the time the stack trace was printed.
Has anyone had a similar experience, or an idea of what might have happened?
I confirmed my hypothesis from the comments, with a file like:
import time

def dont_run():
    raise Exception("oh no i ran it")

time.sleep(10)

dont_run()
I saved that file and ran it. While it was running, I commented out the last line and re-saved the file; I then got this error:
$ py main.py
Traceback (most recent call last):
File "main.py", line 10, in <module>
# dont_run()
File "main.py", line 6, in dont_run
raise Exception("oh no i ran it")
Exception: oh no i ran it
So I think what must have happened here is that you ran the file before the file was saved to disk (perhaps a race between two network requests and you got unlucky).
I have a very large Python 3.x program running on Windows. It works great 99.9% of the time, but occasionally it crashes. I'm not sure what is causing the crash; it could be numerous things. Because I have to run the program as a "compiled" .exe with an invisible console for security reasons (don't ask), I don't get to see any form of console readout when it dies. So obviously it would be great if I could have it output the crash traceback to a text file instead.
I'm familiar with try/except in Python, but the piece of code that's causing the issue could be anywhere, and I don't want to write an individual try/except around every single line of the literally thousands of lines of code. Is there a way to make the program always write any program-stopping error to a text file, no matter what line of code is causing the problem or what the error might be?
Somewhere in your code you must have a single entry-point into the code that might be crashing (in the main script, at a minimum). You can wrap that in a try/except pair and then use functions from the traceback module to print the exception to a file when it happens:
Change:
if __name__ == "__main__":
    do_stuff()
To:
import traceback

if __name__ == "__main__":
    try:
        do_stuff()
    except:
        with open("exceptions.log", "a") as logfile:
            traceback.print_exc(file=logfile)
        raise
If you want to, you could add some extra code to write extra output to the file, with a time/date stamp or whatever other information you think might be useful. You may want to add additional try/except blocks, more or less like the one above, if you want to give special scrutiny to certain parts of your code. For instance, you could put a block in a loop, where you can print out the loop value if an exception occurs:
for x in some_iterable:
    try:
        do_something_with(x)
    except:
        with open("exceptions.log", "a") as logfile:
            print("Got an exception while handling {!r} in the loop:".format(x), file=logfile)
            traceback.print_exc(file=logfile)
        raise  # you could omit this line to suppress the exception and keep going in the loop
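And if you want the time/date stamp mentioned above, a slight variation of the same pattern works (datetime.datetime.now() is just one way to produce it):
import datetime
import traceback

try:
    do_stuff()
except:
    with open("exceptions.log", "a") as logfile:
        # Mark each crash with a timestamp before the traceback
        logfile.write("\n--- {} ---\n".format(datetime.datetime.now()))
        traceback.print_exc(file=logfile)
    raise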
You could also use the logging module if you want a more configurable system for the file-writing end of the issue. The logging.exception function (and the other logging calls, when passed exc_info=True) records the same exception information used by the traceback module, but with many more options for formatting things yourself (if you want that). Note that setting up logging is a bit more involved than just opening a file manually.
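For example, here is a rough logging-based equivalent of the snippet above (a sketch assuming the same do_stuff() entry point):
import logging

logging.basicConfig(filename="exceptions.log",
                    format="%(asctime)s %(levelname)s %(message)s")

if __name__ == "__main__":
    try:
        do_stuff()
    except Exception:
        # logging.exception logs the message plus the full traceback at ERROR level
        logging.exception("Unhandled exception")
        raise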
Sometimes you can't use try/except or > in the terminal. In that case you can use sys.excepthook.
Add this to the beginning of your script:
import sys
import traceback

def excepthook(exctype, value, tb):
    with open("mylog.txt", "w") as mylog:
        traceback.print_exception(exctype, value, tb, file=mylog)

sys.excepthook = excepthook
##########
# your code
After that, any uncaught traceback will be printed to mylog.txt.
I ended up writing my own logging function:
import time

# Takes two inputs - logfile (path to desired .csv), and data to be written
# Writes "Y-M-D|H:M:S, data\n"
def logger(logfile, data):
    f = open(logfile, 'a+')
    currentdate = time.strftime('%Y-%m-%d|%H:%M:%S')
    f.write(currentdate + ',' + data + '\n')
    f.close()
It uses the time module. Also, you need to make sure that the log file exists.
Then I would just plop it wherever I needed, e.g.: logger(ERRLOG, "OCR didn't find poop. Check {}".format(ocr_outfilepath))
I'm not sure what kind of program this is or how you are running it, but you could try running your Python program and redirecting all its output (or all errors) to a file.
For example, if I have a very-contrived sample Python script like this
def do_stuff():
    s = [1, 2, 3, 4, 5]
    print(s[6])

if __name__ == "__main__":
    do_stuff()
which is deliberately going to raise an IndexError exception.
You could run it like this:
$ python test.py &> mylogs.txt
$ cat mylogs.txt
Traceback (most recent call last):
File "test.py", line 8, in <module>
do_stuff()
File "test.py", line 4, in do_stuff
print(s[6])
IndexError: list index out of range
which redirects all output and errors to a file.
Or, if you want it displayed on the console and also redirected to a file:
$ python test.py 2>&1 | tee mylogs.txt
Traceback (most recent call last):
File "test.py", line 8, in <module>
do_stuff()
File "test.py", line 4, in do_stuff
print(s[6])
IndexError: list index out of range
$ cat mylogs.txt
Traceback (most recent call last):
File "test.py", line 8, in <module>
do_stuff()
File "test.py", line 4, in do_stuff
print(s[6])
IndexError: list index out of range
This way, you don't need to modify anything with the code.
Note that this solution is for Linux or Mac systems.
See other StackOverflow posts for redirecting Python output to a file.
The following is from the Python 3 documentation:
"The [traceback] module provides a standard interface to extract, format and print stack traces of Python programs. It exactly mimics the behavior of the Python interpreter when it prints a stack trace. This is useful when you want to print stack traces under program control, such as in a “wrapper” around the interpreter."
1) Why does the traceback module "mimic" the interpreter?
2) Why is this useful "under program control" (what does this phrase mean)?
From what I understand, "mimics the interpreter" means that the formatting and wording of the exception report is exactly the same as what the interpreter itself produces. That is, this:
import traceback

try:
    raise AttributeError("Foo")
except:
    traceback.print_exc()
Displays the same message as this would:
raise AttributeError("Foo")
which is:
Traceback (most recent call last):
File "<stdin>", line 2, in <module>
AttributeError: Foo
As for your second question, you can see an example of that in the examples section of the module documentation. The first example illustrates simple "wrapping" of the interpreter (with help from input and exec) and reporting by using print_exc (which mimics the interpreter).
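A minimal wrapper along those lines looks roughly like this (a sketch in the spirit of the docs example, not the exact code from it):
import traceback

def run_user_code(env):
    source = input(">>> ")      # read one statement from the user
    try:
        exec(source, env)       # "wrap" the interpreter: run the user's code
    except Exception:
        print("Exception in user code:")
        traceback.print_exc()   # report it exactly the way the interpreter would

env = {}
while True:
    run_user_code(env)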
After following the instructions given on this site: https://sourceware.org/gdb/wiki/STLSupport
GDB is still unable to print the contents of STL containers such as vectors, other than printing out a huge amount of useless information. When GDB loads, I also get the following errors, which I think are related to the Python that I put into ~/.gdbinit:
Traceback (most recent call last):
File "<string>", line 4, in <module>
File "/Users/mayankp/gdb_printers/python/libstdcxx/v6/printers.py", line 1247, in register_libstdcxx_printers
gdb.printing.register_pretty_printer(obj, libstdcxx_printer)
File "/usr/local/share/gdb/python/gdb/printing.py", line 146, in register_pretty_printer
printer.name)
RuntimeError: pretty-printer already registered: libstdc++-v6
/Users/mayankp/.gdbinit:6: Error in sourced command file:
Error while executing Python code.
When GDB loads, I also get the following errors...
It looks like the instructions you followed on https://sourceware.org/gdb/wiki/STLSupport are outdated now. If you look at the svn log, you will see that registration of the pretty printers was recently added to __init__.py:
------------------------------------------------------------------------
r215726 | redi | 2014-09-30 18:33:27 +0300 (Tue, 30 Sep 2014) | 4 lines
2014-09-30 Siva Chandra Reddy <sivachandra@google.com>
* python/hook.in: Only import libstdcxx.v6.
* python/libstdcxx/v6/__init__.py: Load printers and xmethods.
------------------------------------------------------------------------
Therefore the second registration throws an error. You can remove it or comment it out:
#register_libstdcxx_printers (None)
GDB is still unable to print the contents of stl containers
You have probably mismatched pretty printers with your gcc. See https://stackoverflow.com/a/9108404/72178 for details.
From your traceback it seems that the register_libstdcxx_printers() call is failing because there already is such a pretty printer registered. To avoid that, you can wrap it in a try..except to make sure instructions in .gdbinit don't interfere with the launch of GDB if they fail:
python
import sys
sys.path.insert(0, '/home/user/gdb_printers/python')
from libstdcxx.v6.printers import register_libstdcxx_printers
try:
    register_libstdcxx_printers(None)
except:
    pass
end
(Note: You should usually never use a bare except statement without qualifying the type of exceptions you want to catch. This is a special case though, in startup configuration files like .gdbinit, .pdbrc or your PYTHONSTARTUP file you'll probably want to write defensive code like that).
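For example, since the failure shown in the traceback above is a RuntimeError, a more targeted variant could catch just that (treating the exception type as an assumption based on that traceback):
try:
    register_libstdcxx_printers(None)
except RuntimeError:
    # The printers were already registered (e.g. by __init__.py); ignore it.
    pass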
But chances are this will only get rid of the ugly traceback for you, and printing of STL vectors still won't work, because it seems a pretty printer is already registered from somewhere else.
Make sure the path /home/user/gdb_printers/python actually matches the path where you checked out the module mentioned in the STLSupport docs.