If I catch a NameError exception using except:
try:
    print(unknownVar)
except NameError as ne:
    print(ne)
I get a string like:
NameError: name 'unknownVar' is not defined
I work in the context of eval'ed expressions, and it would be useful information to me if I could obtain only the variable name (here "unknownVar" alone) and not the full string. I did not find an attribute on the NameError object to get it (perhaps it exists, but I did not find it). Is there something better than parsing this string to get the information I need?
Best Regards
Mikhaël
You can extract it using regex:
import re
try:
    print(unknownVar)
except NameError as ne:
    var_name = re.findall(r"'([^']*)'", str(ne))[0]
    print(var_name)  # output: unknownVar
Extract it from the string:
ne.args[0].split()[1].strip("'")
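If you are on Python 3.10 or newer, the NameError object should expose the missing identifier directly through its name attribute (worth double-checking in the docs for your exact version), so no string parsing is needed:
try:
    print(unknownVar)
except NameError as ne:
    print(ne.name)  # 'unknownVar' on Python 3.10+, as far as I can tell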
Unfortunately, error messages are not exactly Python's strong suit. However, there is actually an alternative to parsing the string, but it is quite "hacky" and only works with CPython (i.e. this will fail with PyPy, Jython, etc.).
The idea is to extract the name of whatever you wanted to load from the underlying code object.
import sys
import opcode
def extract_name():
    tb = sys.exc_info()[2]  # get the traceback
    while tb.tb_next is not None:
        tb = tb.tb_next
    instr_pos = tb.tb_lasti  # the index of the "current" instruction
    frame = tb.tb_frame
    code = frame.f_code  # the code object
    instruction = opcode.opname[code.co_code[instr_pos]]
    arg = code.co_code[instr_pos + 1]
    if instruction == 'LOAD_FAST':
        return code.co_varnames[arg]
    else:
        return code.co_names[arg]

def test(s):
    try:
        exec(s)
    except NameError:
        name = extract_name()
        print(name)

test("print(x + y)")
1. The Background of Code Objects
Python compiles the original Python source code into bytecode and then executes that bytecode. The code is stored in "code objects", which are (partly) documented here. For our purpose, the following will suffice:
class CodeObject:
    co_code: bytes      # the bytecode instructions
    co_varnames: tuple  # names of local variables and parameters
    co_names: tuple     # all other names
If some code produces a NameError, it failed to load a specific name. That name must be either in the co_names or co_varnames tuple. All we have to figure out is which one.
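As a quick illustration (a small sketch of my own; the undefined name x is deliberate), you can look at these tuples on any function's code object:
def f(a):
    b = a + x  # 'x' is never defined anywhere
    return b

print(f.__code__.co_varnames)  # ('a', 'b') -> parameters and local variables
print(f.__code__.co_names)     # ('x',)     -> all other names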
While the code objects describe the code statically, we also need a dynamic object that tells us the value of local variables and which instruction we are currently executing. This role is fulfilled by the "frame" (leaving out irrelevant details):
class Frame:
    f_code: CodeObject  # the code object (see above)
    f_lasti: int        # the instruction currently executed
You could think of the interpreter as basically doing the following:
def runCode(code):
    frame = create_new_frame(code)
    while True:
        i = frame.f_lasti
        opcode = frame.f_code.co_code[i]
        arg = frame.f_code.co_code[i+1]
        exec_opcode(opcode, arg)
        frame.f_lasti += 2
The code to load a name then has a form like this:
LOAD_NAME 3 (the actual name is co_names[3])
LOAD_GLOBAL 3 (the actual name is co_names[3])
LOAD_FAST 3 (the actual name is co_varnames[3])
You can see that we have to distinguish between LOAD_FAST (i.e. load a local variable) and all other LOAD_X opcodes.
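You can inspect these opcodes yourself with the dis module (a small sketch; the exact listing differs between Python versions, so treat the commented output as illustrative):
import dis

def f(a):
    return a + x  # 'a' is a local, 'x' is not

dis.dis(f)
# expected to show something like:
#   LOAD_FAST   0 (a)
#   LOAD_GLOBAL 0 (x)
#   ...
#   RETURN_VALUE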
2. Getting The Right Name
When an error occurs, we need to go through the stacktrace/traceback until we find the frame in which the error occurred. From the frame we then get the code object with the list of all names and instructions, extract the instruction and argument that led to the error and thus the name.
We retrieve the traceback with sys.exc_info()[2]. The actual frame and traceback we are interested in are the very last ones (this is what the line Traceback (most recent call last): tells you whenever a runtime error occurs):
tb = sys.exc_info()[2]  # get the traceback
while tb.tb_next is not None:
    tb = tb.tb_next
This traceback object then contains two pieces of information of importance to us: the frame tb_frame and the instruction pointer tb_lasti where the error occurred. From the frame we then extract the code object:
instr_pos = tb.tb_lasti # the index of the "current" instruction
frame = tb.tb_frame
code = frame.f_code # the code object
Since the byte encoding the instruction can change between Python versions, we want to get the human-readable form, which is more stable. We need that so that we can distinguish between local variables and all others:
instruction = opcode.opname[code.co_code[instr_pos]]
arg = code.co_code[instr_pos + 1]
if instruction == 'LOAD_FAST':
    return code.co_varnames[arg]
else:
    return code.co_names[arg]
3. Caveat
If the code object uses more than 255 names, a single byte will no longer be enough as index into the tuples with all names. In that case, the bytecode allows for an extension prefix, which is not taken into account here. But for most code objects, this should work just fine.
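If you would rather not decode EXTENDED_ARG by hand, one alternative (a sketch of my own, not part of the answer above, and still CPython-specific) is to let the dis module decode the instructions and pick out the one at tb_lasti; dis.get_instructions already resolves the argument to the actual name:
import dis
import sys

def extract_name_via_dis():
    tb = sys.exc_info()[2]
    while tb.tb_next is not None:
        tb = tb.tb_next
    # find the decoded instruction whose offset matches the failing one
    for instr in dis.get_instructions(tb.tb_frame.f_code):
        if instr.offset == tb.tb_lasti:
            return instr.argval  # the name, with EXTENDED_ARG already applied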
As mentioned in the beginning, this is a rather hacky method that is based on internals of Python that might change (although this is rather unlikely). Nonetheless, it is fun taking Python apart this way, isn't it ;-).
Related
I'm working on a project where you click a button and it changes an int and writes it to the screen. My issue is that when I try to set a new value to the int it comes back with an AttributeError.
def busy():
    unit_status.set(7)
Everything else is working except for that one line, and I can't for the life of me figure out why.
While this thread is a bit old, I don't think LouieC's response fully answered the OP's concern.
LouieC mentions that set is a built-in class, which is correct. But it is likely Warrior's Path was looking for the values, since he wanted to write them to the screen.
If he didn't ask, then I am asking, based on an observation in the following code, adapted from the geeksforgeeks.org explanation. My point is addressed in the comments, particularly at the end.
Notice that when LouieC's technique is applied, it seems to incorrectly overwrite the IntVar entirely.
# importing tkinter module
from tkinter import *
# creating Tk() variable
# required by Tkinter classes
master = Tk()
# Tkinter variables
# initialization using constructor
intvar = IntVar(master, value = 25, name ="2")
strvar = StringVar(master, "Hello !")
boolvar = BooleanVar(master, True)
doublevar = DoubleVar(master, 10.25)
print(intvar) # This prints the NAME, not the value.... the name is auto-assigned by python
print(strvar) # if not explicitly declared...
print(boolvar)
print(doublevar)
print(intvar.get()) # This prints the VALUE, not the name....
print(strvar.get())
print(boolvar.get())
print(doublevar.get())
# But now watch what happens...
intvar = 1
print(intvar)
print(intvar.get())
# What's interesting here is... print(intvar.get()) worked at line 20...
and yet now it generates the following error
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-16-61984dfda0fb> in <module>
26 intvar = 1
27 print(intvar)
---> 28 print(intvar.get())
AttributeError: 'int' object has no attribute 'get'
If one runs a type test, in the first case, around line 20:
print(type(intvar))
One will get:
<class 'tkinter.IntVar'>
But if one runs the same type test after LouieC's reassignment, one will get:
<class 'int'>
That's why I said the reassignment doesn't work right.
The OP's question still seems to be open.
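For completeness, what I believe the OP was actually after (a sketch using tkinter's documented IntVar API): keep the IntVar object and update it through set(), instead of rebinding the name to a plain int:
from tkinter import Tk, IntVar

master = Tk()
unit_status = IntVar(master, value=25)

def busy():
    unit_status.set(7)  # updates the value held by the IntVar

busy()
print(unit_status.get())  # 7 -- unit_status is still an IntVar
print(type(unit_status))  # <class 'tkinter.IntVar'>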
This is not how you reassign a variable of type integer; you want:
unit_status = 7
set is a built-in class in Python; official docs here
A set object is an unordered collection of distinct hashable objects. Common uses include membership testing, removing duplicates from a sequence, and computing mathematical operations such as intersection, union, difference, and symmetric difference. (For other containers see the built-in dict, list, and tuple classes, and the collections module.)
I am currently developing an automated function tester in Python.
The purpose of this application is to automatically test if functions are returning an expected return type based on their defined hints.
Currently I have two test functions (one which fails and one which passes), along with the rest of my code, in one file. My code uses the globals() function to scan the Python file for all existing functions, isolating user-made functions and excluding the default ones.
This initial iteration works well. Now I am trying to import the function and use it from another .py file.
When I run it in the other .py file it still returns results for the functions from the original file instead of the new test-cases in the new file.
Original File - The Main Application
from math import floor
import random
#declaring test variables
test_string = 'test_string'
test_float = float(random.random() * 10)
test_int = int(floor(random.random() * 10))
#Currently supported test types (input and return)
supported_types = ['int', 'float', 'str']
autotest_result = {}
def int_ret(number: int) -> str:
    string = "cactusmonster"
    return string

def false_test(number: int) -> str:
    floating = 3.2222
    return floating

def test_typematching():
    for name in list(globals()):
        if not name.startswith('__'):
            try:
                return_type = str((globals()[name].__annotations__)['return'])
                autotest_result.update({name: return_type.replace("<class '", "").replace("'>", "")})
            except:
                continue
    for func in autotest_result:
        if autotest_result[func] != None:
            this_func = globals()[func].__annotations__
            for arg in this_func:
                if arg != 'return':
                    input_type = str(this_func[arg]).replace("<class '", "").replace("'>", "")
                    for available in supported_types:
                        if available == input_type:
                            func_return = globals()[func]("test_" + input_type)
                            actual_return_type = str(type(func_return)).replace("<class '", "").replace("'>", "")
                            if actual_return_type == autotest_result[func]:
                                autotest_result[func] = 'Passed'
                            else:
                                autotest_result[func] = 'Failed'
    return autotest_result
Test File - Where I Am Importing The "test_typematching()" Function
from auto_test import test_typematching
print(test_typematching())
def int_ret_newfile(number: int) -> str:
    string = "cactusmonster"
    # print(string)
    # return type(number)
    return string
Regardless of whether I run my main "auto_test.py" file or the "tester.py" file, I still get the following output:
{'int_ret': 'Passed', 'false_test': 'Failed'}
I am guessing this means that even when I am running the function from auto_test.py on my tester.py file it still just scans itself. I would like it to scan the file where the function is currently being called. For example, I expect it to test the int_ret_newfile function of tester.py.
Any advice or help would be much appreciated.
globals() is a bit of a misnomer. It gets the calling module's __dict__. (Python's true "global" namespace is actually builtins.)
How can globals() get its caller's __dict__ when it's defined in the builtins module? Here's a clue:
PyObject *
PyEval_GetGlobals(void)
{
    PyThreadState *tstate = _PyThreadState_GET();
    PyFrameObject *current_frame = _PyEval_GetFrame(tstate);
    if (current_frame == NULL) {
        return NULL;
    }
    assert(current_frame->f_globals != NULL);
    return current_frame->f_globals;
}
globals() is one of those builtins that's implemented in C (in CPython), but you get the gist. It reads the frame globals from the current stack frame, so in Python,
import inspect
inspect.currentframe().f_globals
would do the same thing as globals(). But you can't just put this in a function and expect it to work the same way, because calling it would add a stack frame, and that frame's globals depends on the function's .__globals__ attribute, which is set to the .__dict__ of the module that defined it. You want the caller's frame.
def myglobals():
    """Behaves like the builtin globals(), but written in Python!"""
    return inspect.currentframe().f_back.f_globals
You could do the same thing in test_typematching. But walking up the stack to the previous frame like that is a weird thing to do. It can be surprising and brittle. It amounts to passing the caller's frame as an implicit hidden argument, something that normally is not supposed to matter. Consider what happens if you wrap it in a decorator. Now which stack frame are you getting the globals from?
So really, you should be passing in globals() as an explicit argument to test_typematching(), like test_typematching(globals()). A defined and documented parameter would be much less confusing than implicit introspection. "Explicit is better than implicit".
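A minimal sketch of that change (the parameter name namespace is my own choice, and the body only shows the annotation-collecting half of the original function):
# auto_test.py (sketch)
def test_typematching(namespace):
    autotest_result = {}
    for name in list(namespace):
        if not name.startswith('__'):
            try:
                return_type = str(namespace[name].__annotations__['return'])
                autotest_result[name] = return_type.replace("<class '", "").replace("'>", "")
            except (AttributeError, KeyError, TypeError):
                continue
    return autotest_result

# tester.py (sketch)
# from auto_test import test_typematching
# print(test_typematching(globals()))  # now scans the calling module's names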
Still, Python's standard library does do this kind of thing occasionally, with globals() itself being a notable example. And exec() can use the current namespace if you don't give it a different one. It's also how super() can now work without arguments in Python 3. So stack frame inspection does have precedent for this kind of use case.
I'm facing a KeyError I can't explain or understand.
I have a notebook, in which I define a variable PREFIX in a cell:
PREFIX = "/home/mavax/Documents/info/notebook/log_study"
which is simply a path to a folder containing logs, so people using the notebook just need to change the path if they want to execute the code below.
Then, later (quite a bunch of cells beneath), I use it, without any problem:
for basename in ["log_converted_full.txt", "log_converted_trimmed.txt"]:
    entries = load_log_for_insertion("%(PREFIX)s/datasets/logs/%(basename)s" % locals())
    pprint(entries)
I then get the output I expect, meaning files are found and the (very long) output from the logs is being printed.
I have some more cells describing the structure I implement for this problem, and when the time comes to execute the same piece of code again, I get the KeyError:
Code bringing the error:
def demo_synthetic_dig_dag(data_size):
    for basename in ["alert_converted_trimmed.txt"]:
        ###
        entries = load_log_for_insertion("%(PREFIX)s/datasets/logs/%(basename)s" % locals())[:data_size]
        g = AugmentedDigDag()
        g.build(entries)
        html(
            """
            <table>
            <tr><td>%s</td></tr>
            </table>
            """ % (
                synthetic_graph_to_html(g, 2, 0.03)
            )
        )
and, in the next cell:
demo_synthetic_dig_dag(200)
Jupyter output:
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-179-7c2a79d0afd6> in <module>()
----> 1 demo_synthetic_dig_dag_armen(200)
<ipython-input-178-d17f57de3c01> in demo_synthetic_dig_dag(data_size)
18 for basename in ["log_converted_trimmed.txt"]:
19 ###
---> 20 entries = load_log_for_insertion("%(PREFIX)s/datasets/logs/%(basename)s" % locals())[:data_size]
21 g = AugmentedDigDag()
22 g.build(entries)
KeyError: 'PREFIX'
I'm pretty sure the mistake is quite simple and plain stupid, but still, if someone could open my eyes, I'd be very thankful!
Outside a function, locals() is the same as globals(), so you have no issue.
When placed inside a function, though, locals() doesn't contain PREFIX at all (PREFIX is stored in globals()); it only contains the local names for that function. That's why the formatting fails: it is trying to get a key named PREFIX from the dict returned by locals().
Instead of formatting with %, why not just use .format:
"{}/datasets/logs/{}s".format(PREFIX, basename)
Alternatively, you could bring PREFIX in the local scope with an additional parameter to your function:
def demo_synthetic_dig_dag(data_size, PREFIX=PREFIX):
but I don't see much of an upside to that. (Yes, there is a small performance boost for local look-up but I doubt it would play a role)
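To make the difference visible, here is a small sketch (PREFIX below is just a stand-in for the notebook's module-level variable):
PREFIX = "/some/path"  # module-level, so it lives in globals()

def demo():
    basename = "log.txt"           # local, so it lives in locals()
    print('PREFIX' in locals())    # False
    print('PREFIX' in globals())   # True
    print('basename' in locals())  # True

demo()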
I am trying to teach myself object-oriented programming in Python with the book "Python 3, Object Oriented Programming" by Dusty Phillips. On pages 54 and 55 he creates a class called Note and encourages the reader to repeat the example and import the module from the interpreter with the following commands. However, when I type the n1 = command, I get the message from the interpreter "TypeError: object() takes no parameters". Am I missing something in the implementation of this object, or did the book give a faulty example? Mind you, the example and the lines typed into the interpreter are taken exactly from the book, at least I think I made no errors in copying the lines. This is different initialization syntax than C++, which makes me wonder if the author gave a bad example, but in the book example it looks as if he is trying to initialize with a call to the object directly and the object is supposed to recognize the text that gets passed to memo. I also tried to run the example in Python 2.7.9 and 3.4.2 to see if this was a version issue.
Interpreter lines
from notebook import Note
n1 = Note("hello first") # the code execution gets stopped here fur to the error
n2 = Note("hello again")
n1.id
n2.id
import datetime
# store the next available id for all new notes
last_id = 0
class Note:
    '''Represent a note in the notebook. Match against a
    string in searches and store tags for each note.'''

    def _init_(self, memo, tags=''):
        '''initialize a note with memo and optional
        space-seperated tags. Automatically set the note's
        creation date and a unique id.'''
        self.memo = memo
        self.tags = tags
        self.creation_date = datetime.date()
        global last_id
        last_id += 1
        self.id = last_id

    def match(self, filter):
        '''Determine if this note matches the filter
        text. Return True if it matches, False otherwise.
        Search is case sensitive and matches both text and
        tags'''
        return filter in self.memo or filter in self.tags
Maybe do what Christian said: Use __init__ instead of _init_. You need to have double underscores not single underscores. You can look at the Python Docs.
You are missing double underscores in the special __init__ method. You only have single underscores.
You might also consider having Note explicitly inherit from object, i.e. class Note(object).
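For reference, a corrected sketch of the constructor (and note that, if I remember the book correctly, it uses datetime.date.today() rather than datetime.date(), which would fail on its own once the constructor actually runs):
import datetime

last_id = 0

class Note:
    def __init__(self, memo, tags=''):  # double underscores on both sides
        self.memo = memo
        self.tags = tags
        self.creation_date = datetime.date.today()
        global last_id
        last_id += 1
        self.id = last_id

n1 = Note("hello first")
print(n1.id)  # 1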
I have steam coming out of my head now but I can't figure out what is wrong with my code.
Here are the relevant lines:
try:
    outport = record_dict[id][hash_ % len(record_dict[id])]
except:
    fp.write("Problem-"+str(type(record_dict[id]))+"\n")
    fp.write("Problem-"+str(record_dict[id])+"\n")
    fp.write("Problem-"+str(len(record_dict[id]))+"\n")
Here is the error I get:
File "xxxx.py", line 459, in yyyyy
fp.write("Problem-"+str(len(record_dict[id]))+"\n")
TypeError: 'long' object is not callable
Inside file pointed by fp:
Problem-<type 'list'>
Problem-[5, 6, 7, 8]
What is wrong with my code? How do I debug it?
Did you create a variable named str or len anywhere? If so, that's your problem (most likely len, since str was used earlier without any problem).
Python builtins are not reserved, meaning that you are free to reassign them to any object that you want. It looks like you assigned len to a long integer, which makes sense because len is a perfectly reasonable variable name in other languages.
The thing you should take away from this is to be careful not to "shadow" builtin functions by creating variables of the same name. It causes problems that can be hard to debug.
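A quick way to reproduce and undo the problem in an interactive session (a sketch):
len([1, 2, 3])    # 3 -- the builtin works
len = 42          # accidentally shadow the builtin with an integer
# len([1, 2, 3])  # now raises TypeError: 'int' object is not callable
del len           # remove the shadowing name...
len([1, 2, 3])    # 3 -- ...and the builtin is visible again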
As a side note: bare "except" clauses are the worst possible exception handling scheme - you just don't know what exception can happen, and you lose all the useful debugging information stored in the exception's traceback. FWIW, sys.exit is actually implemented by raising a SystemExit exception that is caught by the Python runtime.
If you're in a loop and want to log info about the exception for the current iteration and continue with the next item, make sure you don't catch SystemExit, and learn to use the logging module:
import logging

# this will require some minimal conf somewhere, cf the fine manual
logger = logging.getLogger("my-logger-name")

def myfunction(somesequence):
    for item in somesequence:
        try:
            result = process(item)
        except Exception as e:
            # in 'recent' python versions this will not catch SystemExit
            # please refer to the doc for your python version
            # if it's a slightly outdated version, uncomment the
            # following lines:
            # if isinstance(e, SystemExit):
            #     raise
            logger.exception("got %s on item %s", e, item)
            continue
        else:
            # ok for this item
            do_something_with(result)
I had this problem in a different context:
slope = (nsize*iopsum - (osum)(isum)) / (nsize*oopsum - (osum)*2)
Of course, the reason I was getting it was (osum)(isum): it was interpreting osum as a number and trying to call another number on it, so after evaluation it looked something like 1513(3541), which doesn't make any sense.
I fixed it by adding the * in the right place: (osum)*(isum)
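In other words, a bare pair of parentheses after a value is a call, not a multiplication, which is easy to reproduce:
osum, isum = 1513, 3541
# osum(isum)        # TypeError: 'int' object is not callable
print(osum * isum)  # the multiplication that was actually intended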