Python - import local variables of a function

I'm trying to make a debugger function, which is called when an error is raised and lets me access a console so I can check what happened in my program.
Here is the basic function:
import sys

def DEBUGGER(error):
    print(error)
    print("[DEBUGGER] Your program has failed, here is the debugger. Enter EXIT to end program.")
    while True:
        line = input(">>> ").lower()
        if line == 'exit':
            sys.exit(0)
        else:
            try:
                exec(line)
            except Exception as e:
                print(str(e))
The problem is that I can't enter something like print(var) because var is referenced in another function.
The globals() function doesn't help me, since I want to be able to inspect any variable in my program, and I can't make them all global. I know I could solve this by putting all my functions in classes, but I can't for many reasons.
Is there a way to get the local variables of the running functions? (When I call DEBUGGER(), the calling function is still running.)
If not, can I export the local variables of the current function and pass them as an argument to DEBUGGER()?
Thanks for your answers.

You are basically re-implementing the Python debugger pdb. If you want to go this route, you probably want to study the source code. pdb itself is a user-interface around the lower-level bdb (basic debugger) module, and the source code for that is also available.
To answer your direct question: when you catch an exception you have access to a traceback object (either via exception.__traceback__ or via sys.exc_info()), and tracebacks have access to both the local and global namespace of each frame in the stack, via the tb_frame attribute. That attribute is set to a frame object, which has f_locals and f_globals attributes.
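For illustration, here is a minimal sketch of walking a traceback down to the frame where the exception was raised and reading its namespaces (the function f and the variable x are made-up placeholders):
import sys

def f():
    x = 42
    raise ValueError("boom")

try:
    f()
except Exception:
    tb = sys.exc_info()[2]           # traceback object for the active exception
    while tb.tb_next is not None:    # walk down to the innermost frame
        tb = tb.tb_next
    frame = tb.tb_frame              # frame object of the raising function
    print(frame.f_locals)            # {'x': 42}
    print('f' in frame.f_globals)    # True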
The bdb.Bdb.get_stack() method could be an interesting example on how to treat a traceback, and the internal pdb.Pdb._select_frame() method then is used to pick a frame from the stack to use the locals and globals from.
If you don't want to re-implement the full debugger, you can use the pdb.pm() or pdb.post_mortem() functions. These take the last traceback raised and let you inspect the stack frames in an interactive environment:
import pdb

try:
    exec(line)
except Exception as e:
    pdb.post_mortem(e.__traceback__)

The correct way to "write" your "DEBUGGER" function is:
import pdb
DEBUGGER = pdb.set_trace
Now you can call DEBUGGER() wherever you want, and you will land in an interactive environment with access not only to the local variables but to the whole call stack, plus the ability to execute the remaining code step by step (including stepping into other functions), change the control flow to continue execution from another line, and so on.
Oh and yes: you can of course just write import pdb; pdb.set_trace() instead ;-)
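For example, a minimal sketch (mother and var are placeholder names):
import pdb
DEBUGGER = pdb.set_trace

def mother():
    var = 123
    DEBUGGER()   # drops into (Pdb) here; try `p var`, `where`, `up`, `next`

mother()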

Related

Embed Python in Python?

I wrote a "compiler" PypTeX that converts an input file a.tex containing Hello #{3+4} to an ouput file a.pyptex containing Hello 7. I evaluate arbitrary Python fragments like #{3+4} using something like eval(compile('3+4','a.tex',mode='eval'),myglobals), where myglobals is some (initially empty) dict. This creates a thin illusion of an embedded interpreter for running code in a.tex, however the call stack when running '3+4' looks pretty weird, because it backs up all the way into the PypTeX interpreter, instead of topping out at the user code '3+4' in a.tex.
Is there a way of doing something like eval but chopping off the top of the call stack?
Motivation: debugging
Imagine an exception is raised by the Python fragment deep inside numpy, and pdb is launched. The user types up until they reach the scope of their user code and then types list. The way I've done it, this displays the a.tex file, which is the right context to show the user and is the reason I've done it this way. However, if the user types up again, they end up in the bowels of the PypTeX compiler.
An analogy would be if the g++ compiler had an error deep in a template, displayed a template "call stack" in its error message, but that template call stack backed all the way out into the bowels of the actual g++ call stack and exposed internal g++ details that would only serve to confuse the user.
Embedding Python in Python
Maybe the problem is that the illusion of the "embedded interpreter" created by eval is slightly too thin. eval allows you to specify globals, but it inherits whatever call stack the caller has, so if one could somehow supply eval with a truncated call stack, that would resolve my problem. Alternatively, if pdb could be told "you shall go no further up" past a certain stack frame, that would help too. For example, if I could chop off part of the stack in the traceback object and then pass it to pdb.post_mortem().
Or if one could do from sys import Interpreter; foo = Interpreter(); foo.eval(...), meaning that foo is a clean embedded interpreter with a distinct call stack, global variables, etc..., that would also be good.
Is there a way of doing this?
A rejected alternative
One way that is not good is to extract all Python fragments from a.tex by regular expression, dump them into a temporary file a.py and then run them by invoking a fresh new Python interpreter at the command line. This causes pdb to eventually top out into a.py. I've tried this and it's a very bad user experience. a.py should be an implementation detail; it is automatically generated and will look very unfamiliar to the user. It is hard for the user to figure out what bits of a.py came from what bits of a.tex. For large documents, I found this to be much too hard to use. See also pythontex.
I think I found a sufficient solution:
import pdb, traceback

def exec_and_catch(cmd, globals):
    try:
        exec(cmd, globals)              # execute a user program
    except Exception as e:
        tb = e.__traceback__.tb_next    # if the user program raises an exception,
        f = e.with_traceback(tb)        # remove the top stack frame and return
        return f                        # the exception
    return None                         # otherwise, return None signifying success

foo = exec_and_catch("import module_that_does_not_exist", {})
if foo is not None:
    traceback.print_exception(value=foo, tb=foo.__traceback__, etype=type(foo))
    pdb.post_mortem(foo.__traceback__)

Load and execute a full python script from a raw link?

I'm facing some problems trying to load a full python script from my pastebin/github pages.
I followed this link, trying to convert the raw into a temp file and use it like a module: How to load a python script from a raw link (such as Pastebin)?
And this is my test (Using a really simple python script as raw, my main program is not so simple unfortunately): https://trinket.io/python/0e95ba50c8
When I run the script (which now creates a temp file in the current directory of the .py file) I get this error:
PermissionError: [Errno 13] Permission denied: 'C:\\Users\\BOT\\Images\\tempxm4xpwpz.py'
Otherwise, I also tried the exec() function... with no better results, unfortunately.
With this code:
import requests as rq
import urllib.request

def main():
    code = "https://pastebin.com/raw/MJmYEKqh"
    response = urllib.request.urlopen(code)
    data = response.read()
    exec(data)
I get this error:
File "<string>", line 10, in <module>
File "<string>", line 5, in hola
NameError: name 'printest' is not defined
Since my program is more complex than this simple test, I don't know how to proceed...
Basically, what I want to achieve is to keep the full script of my program on GitHub and connect it to a .exe, so that if I update the raw file, my program is updated as well. This avoids generating and sharing (only with my friends) a new .exe every time...
Do you think this is possible? If so, what am I doing wrong?
PS: I'm also open to other ways to let my friends update the program without downloading the .exe every time, as long as they don't have to install anything (that's why I'm using a .exe).
Disclaimer: it is really not a good idea to run unverified (let alone untrusted) code. That being said, if you really want to do it...
Probably the easiest and "least dirty" way would be to run it in a whole new process. This can be done directly in Python. Something like this should work (inspired by the answer you linked in your question):
import urllib.request
import tempfile
import subprocess

code = "https://pastebin.com/raw/MJmYEKqh"
response = urllib.request.urlopen(code)
data = response.read()

with tempfile.NamedTemporaryFile(suffix='.py') as source_code_file:
    source_code_file.write(data)
    source_code_file.flush()
    subprocess.run(['python3', source_code_file.name])
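Note that on Windows a NamedTemporaryFile cannot be reopened by name while it is still open, which is likely the source of the PermissionError in the question. A sketch of a variant using delete=False that sidesteps this, at the cost of manual cleanup:
import os
import tempfile
import subprocess
import urllib.request

code = "https://pastebin.com/raw/MJmYEKqh"
data = urllib.request.urlopen(code).read()

source_code_file = tempfile.NamedTemporaryFile(suffix='.py', delete=False)
try:
    source_code_file.write(data)
    source_code_file.close()                          # close before reopening by name
    subprocess.run(['python3', source_code_file.name])
finally:
    os.unlink(source_code_file.name)                  # manual cleanup since delete=False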
You can also make your code with exec run correctly:
What may work:
exec(data, {}) -- All you need to do is supply {} as the second argument. exec may receive two additional optional arguments -- globals and locals. If you supply just one, it will use the same dictionary for locals. That way the code within the exec behaves as if it were running in a sort-of "clean" environment, at the top level -- which is what you are aiming for.
exec(data, globals()) -- The second option is to supply the globals from your current scope. This will also work, though you probably have no need to give the executed code access to your globals, given that the code will set up everything inside anyway.
What does not work:
exec(data, {}, {}) -- In this case the executed code will have two different dictionaries (albeit both empty) for locals and globals. As such it behaves as if it were inside a function (I'm not entirely sure about this part, but that is how it behaved when I tested it). That means it adds the printest and hola functions to the local scope instead of the global scope. Regardless, I expected it to work -- I expected it would just look up printest in the local scope instead of the global one. However, for some reason the hola function in this case gets compiled in such a way that it expects printest to be in the global scope, not the local one, where it isn't. I never really figured out why. So this results in the NameError (see the sketch after this list).
exec(data, globals(), locals()) -- This will provide access to the state of the caller. Nevertheless, it will crash for the very same reason as the previous case.
exec(data) -- This is just shorthand for exec(data, globals(), locals()).
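A minimal sketch reproducing the behaviour described above (the function names mirror the ones from the question):
code = """
def printest():
    print("hi")

def hola():
    printest()   # compiled as a global lookup, resolved against exec's globals

hola()
"""

exec(code, {})      # works: both defs land in the shared globals dict
exec(code, {}, {})  # NameError: the defs land in locals, so hola can't see printest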

How to continue execution of a Python module which calls a failed C++ function?

I have a python file (app.py) which makes a call to a function as follows:
answer = fn1()
The fn1() is actually written in C++ and I've built a wrapper so that I can use it in Python.
fn1() can either return a valid result, or it may sometimes fail and abort. The issue is that when fn1() fails and aborts, the calling file (i.e. app.py) also terminates and never reaches the error-handling part.
I would like the calling file to move on to my error handling (i.e. 'except' and 'finally') if fn1() aborts and dumps core. Is there any way to achieve this?
From the OP:
The C++ file that I have built wrapper around aborts in case of exception and dumps core. Python error code is not executed
This was not evident in your question. To catch this sort of error, you can use the signal.signal function in the Python standard library (relevant SO answer).
import signal

def sig_handler(signum, frame):
    print("segfault")

signal.signal(signal.SIGSEGV, sig_handler)

answer = fn1()
You basically wrote the answer in your question. Use a try/except/finally block. Refer also to the Python 3 documentation on error handling.
try:
    answer = fn1()
except Exception:  # you can use an exception more specific to your failing code
    pass  # do stuff
finally:
    pass  # do stuff
What you need to do is catch the exception in your C++ function, convert it to a Python exception, and return that to the Python code.

Django stack traces are awesome. How can I get one outside Django?

I guess the title says it all, but I'll elaborate.
In non-Django programs (even in non-web projects) I would like to get stack traces with:
Regular file and line number information, the code of surrounding lines, and scope identification (name of the function and whatnot).
Local scope variables (just their names and repr() would be great).
Is there a library? A visual Python debugger I could provide a plugin for? How could I go about getting this stack trace?
You can check the traceback module from the Python documentation and the examples in it.
import sys, traceback

def run_user_code(envdir):
    source = input(">>> ")
    try:
        exec(source, envdir)
    except:
        print("Exception in user code:")
        print('-' * 60)
        traceback.print_exc(file=sys.stdout)
        print('-' * 60)

envdir = {}
while True:
    run_user_code(envdir)
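Since Python 3.5, the traceback module can also capture the local variables of each frame (as repr() strings), which covers the second wish above. A minimal sketch (fail and local_detail are made-up names):
import traceback

def fail():
    local_detail = {'answer': 42}
    raise RuntimeError("boom")

try:
    fail()
except Exception as e:
    te = traceback.TracebackException.from_exception(e, capture_locals=True)
    print(''.join(te.format()))   # each frame is followed by its name = repr(value) pairs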

How do I inspect the scope of a function where Python raises an exception?

I've recently discovered the very useful '-i' flag to Python
-i : inspect interactively after running script, (also PYTHONINSPECT=x)
and force prompts, even if stdin does not appear to be a terminal
this is great for inspecting objects in the global scope, but what happens if the exception was raised in a function call and I'd like to inspect the local variables of the function? Naturally, I'm interested in the scope where the exception was first raised. Is there any way to get to it?
At the interactive prompt, immediately type
>>> import pdb
>>> pdb.pm()
pdb.pm() is the "post-mortem" debugger. It will put you at the scope where the exception was raised, and then you can use the usual pdb commands.
I use this all the time. It's part of the standard library (no ipython necessary) and doesn't require editing debugging commands into your source code.
The only trick is to remember to do it right away; if you type any other commands first, you'll lose the scope where the exception occurred.
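For instance, with a script like the following (compute and x are illustrative names), run as python -i buggy.py:
# buggy.py
def compute():
    x = 42
    return x / 0   # raises ZeroDivisionError

compute()
After the traceback prints you land at the >>> prompt; typing import pdb; pdb.pm() there drops you into the compute frame, where p x prints 42.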
In IPython, you can inspect variables at the location where your code crashed without having to modify it:
>>> %pdb on
>>> %run my_script.py
Use IPython: http://mail.scipy.org/pipermail/ipython-user/2007-January/003985.html
Usage example:
from IPython.Debugger import Tracer; debug_here = Tracer()
#... later in your code
debug_here() # -> will open up the debugger at that point.
"Once the debugger activates, you can use all of its regular commands to
step through code, set breakpoints, etc. See the pdb documentation
from the Python standard library for usage details."
