I'm using several Jupyter Notebooks to split the tasks between different modules. In my main notebook I call another module with %run another_module.ipynb, which loads all my data. However, it also plots and prints everything I have in another_module.ipynb.
I want to keep the plots in another_module.ipynb to help me visualise the data, but I don't want to reprint everything when calling %run another_module.ipynb. Is there an option to prevent printing this?
Thanks
You could:
Override the print function and make it a no-op:
_print_function = print # create a backup in case you need it later
globals()["print"] = lambda *args, **kwargs: None
Run the file with the -i flag. Without -i, the file is run in a new namespace, so your modifications to the global variables are lost; with -i, the file is run in the current namespace.
%run -i another_module.ipynb
If you're using other methods to print logs (e.g., sys.stdout.write(), logging), it would be harder to create mocks for them. In that case, I would suggest redirecting the stdout or stderr pipe to /dev/null:
import os
import sys
sys.stdout = open(os.devnull, "w")
%run -i another_module.ipynb
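If you do this, it is worth keeping a reference to the real stream so you can restore printing afterwards; a minimal sketch (same assumptions as above):
import os
import sys
_stdout = sys.stdout                  # keep the real stdout around
sys.stdout = open(os.devnull, "w")    # silence everything written to stdout
%run -i another_module.ipynb
sys.stdout.close()                    # close the devnull handle
sys.stdout = _stdout                  # printing works again from here on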
Both methods are considered hacks and should only be used when you know the consequences. The better thing to do here is to change your code in the notebook, either to add a --verbose flag to control logging, or use some logging library (e.g., logging) that supports turning off logging entirely.
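For example, a minimal sketch of the logging-based approach (the logger name and where you configure it are up to you; none of this is in the original notebooks):
import logging

logger = logging.getLogger("another_module")    # hypothetical module-level logger
logging.basicConfig(level=logging.INFO)

logger.info("loading data...")                  # replaces bare print() calls in the module

# In the main notebook, before %run, silence it entirely:
logging.disable(logging.CRITICAL)               # suppresses all messages at CRITICAL and below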
From what I have read, there are two ways to debug code in Python:
With a traditional debugger such as pdb or ipdb. This supports commands such as c for continue, n for step-over, s for step-into, etc., but you don't have direct access to an IPython shell, which can be extremely useful for object inspection.
Using IPython by embedding an IPython shell in your code. You can do from IPython import embed, and then use embed() in your code. When your program/script hits an embed() statement, you are dropped into an IPython shell. This allows the full inspection of objects and testing of Python code using all the IPython goodies. However, when using embed() you can't step-by-step through the code anymore with handy keyboard shortcuts.
Is there any way to combine the best of both worlds? I.e.
Be able to step-by-step through your code with handy pdb/ipdb keyboard shortcuts.
At any such step (e.g. on a given statement), have access to a full-fledged IPython shell.
IPython debugging as in MATLAB:
An example of this type of "enhanced debugging" can be found in MATLAB, where the user always has full access to the MATLAB engine/shell, and she can still step-by-step through her code, define conditional breakpoints, etc. From what I have discussed with other users, this is the debugging feature that people miss the most when moving from MATLAB to IPython.
IPython debugging in Emacs and other editors:
I don't want to make the question too specific, but I work mostly in Emacs, so I wonder if there is any way to bring this functionality into it. Ideally, Emacs (or the editor) would allow the programmer to set breakpoints anywhere in the code and communicate with the interpreter or debugger to have it stop at the location of your choice, and bring up a full IPython interpreter at that location.
What about ipdb.set_trace()? In your code:
import ipdb; ipdb.set_trace()
Update: in Python 3.7, we can now write breakpoint(). It works the same way, but it also obeys the PYTHONBREAKPOINT environment variable. This feature comes from PEP 553.
This allows for full inspection of your code, and you have access to commands such as c (continue), n (execute next line), s (step into the method at point) and so on.
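A tiny sketch of where such a call typically goes (the function and its contents are made up):
def load_rows(path):
    rows = [line.strip() for line in open(path)]
    breakpoint()       # Python 3.7+: pdb by default, or whatever
                       # PYTHONBREAKPOINT names, e.g. PYTHONBREAKPOINT=ipdb.set_trace
    return rows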
See the ipdb repo and a list of commands. IPython is now called (edit: part of) Jupyter.
ps: note that an ipdb command takes precedence over python code. So in order to write list(foo) you'd need print(list(foo)), or !list(foo) .
Also, if you like the ipython prompt (its emacs and vim modes, history, completions,…) it's easy to get the same for your project since it's based on the python prompt toolkit.
You can use IPython's %pdb magic. Just call %pdb in IPython and when an error occurs, you're automatically dropped to ipdb. While you don't have the stepping immediately, you're in ipdb afterwards.
This makes debugging individual functions easy, as you can just load a file with %load and then run a function. You could force an error with an assert at the right position.
%pdb is a line magic. Call it as %pdb on, %pdb 1, %pdb off or %pdb 0. If called without argument it works as a toggle.
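A minimal sketch of that workflow (my_module.py, normalise() and the failing call are made up):
# my_module.py -- load it in IPython with:  %load my_module.py
def normalise(values):
    total = sum(values)
    assert total != 0, "cannot normalise an all-zero list"   # force an error here
    return [v / total for v in values]

# In IPython:
#   %pdb on
#   normalise([0, 0, 0])    # the AssertionError drops you straight into ipdb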
(Update on May 28, 2016) Using RealGUD in Emacs
For anyone in Emacs, this thread shows how to accomplish everything described in the OP (and more) using
a new important debugger in Emacs called RealGUD which can operate with any debugger (including ipdb).
The Emacs package isend-mode.
The combination of these two packages is extremely powerful and allows one to recreate exactly the behavior described in the OP and do even more.
More info on the wiki article of RealGUD for ipdb.
Original answer:
After having tried many different methods for debugging Python, including everything mentioned in this thread, one of my preferred ways of debugging Python with IPython is with embedded shells.
Defining a custom embedded IPython shell:
Add the following to a script on your PYTHONPATH, so that the function ipsh() becomes available.
import inspect
# First import the embed function
from IPython.terminal.embed import InteractiveShellEmbed
from IPython.config.loader import Config

# Configure the prompt so that I know I am in a nested (embedded) shell
cfg = Config()
prompt_config = cfg.PromptManager
prompt_config.in_template = 'N.In <\\#>: '
prompt_config.in2_template = ' .\\D.: '
prompt_config.out_template = 'N.Out<\\#>: '

# Messages displayed when I drop into and exit the shell.
banner_msg = ("\n**Nested Interpreter:\n"
              "Hit Ctrl-D to exit interpreter and continue program.\n"
              "Note that if you use %kill_embedded, you can fully deactivate\n"
              "this embedded instance so it will never turn on again")
exit_msg = '**Leaving Nested interpreter'

# Wrap it in a function that gives me more context:
def ipsh():
    ipshell = InteractiveShellEmbed(config=cfg, banner1=banner_msg, exit_msg=exit_msg)
    frame = inspect.currentframe().f_back
    msg = 'Stopped at {0.f_code.co_filename} at line {0.f_lineno}'.format(frame)
    # Go back one level!
    # This is needed because the call to ipshell is inside the function ipsh()
    ipshell(msg, stack_depth=2)
Then, whenever I want to debug something in my code, I place ipsh() right at the location where I need to do object inspection, etc. For example, say I want to debug my_function below.
Using it:
def my_function(b):
    a = b
    ipsh() # <- This will embed a full-fledged IPython interpreter
    a = 4
and then I invoke my_function(2) in one of the following ways:
Either by running a Python program that invokes this function from a Unix shell
Or by invoking it directly from IPython
Regardless of how I invoke it, the interpreter stops at the line that says ipsh(). Once you are done, you can hit Ctrl-D and Python will resume execution (with any variable updates that you made). Note that, if you run the code from a regular IPython shell (case 2 above), the new IPython shell will be nested inside the one from which you invoked it, which is perfectly fine, but it's good to be aware of. Either way, once the interpreter stops at the location of ipsh, I can inspect the value of a (which will be 2), see what functions and objects are defined, etc.
The problem:
The solution above can be used to have Python stop anywhere you want in your code, and then drop you into a fully-fledged IPython interpreter. Unfortunately it does not let you add or remove breakpoints once you invoke the script, which is highly frustrating. In my opinion, this is the only thing that is preventing IPython from becoming a great debugging tool for Python.
The best you can do for now:
A workaround is to place ipsh() a priori at the different locations where you want the Python interpreter to launch an IPython shell (i.e. a breakpoint). You can then "jump" between different pre-defined, hard-coded "breakpoints" with Ctrl-D, which would exit the current embedded IPython shell and stop again whenever the interpreter hits the next call to ipsh().
If you go this route, one way to exit "debugging mode" and ignore all subsequent breakpoints, is to use ipshell.dummy_mode = True which will make Python ignore any subsequent instantiations of the ipshell object that we created above.
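A rough sketch of that idea (it assumes the cfg/banner setup above, creates ipshell once at module level instead of inside ipsh(), and process_item() is a hypothetical function):
ipshell = InteractiveShellEmbed(config=cfg, banner1=banner_msg, exit_msg=exit_msg)

def process_all(items):
    for item in items:
        ipshell('Stopped before processing the next item')  # hard-coded "breakpoint"
        process_item(item)                                   # process_item() is hypothetical

# From inside the embedded shell (or anywhere else), turn off all later stops:
#     ipshell.dummy_mode = True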
You can start an IPython session from pudb and go back to the debugging session as you like.
BTW, ipdb is using IPython behind the scenes and you can actually use IPython functionality such as TAB completion and magic commands (the ones that start with %). If you are OK with ipdb, you can start it from IPython using commands such as %run and %debug. An ipdb session is actually better than a plain IPython one in the sense that you can go up and down in the stack trace, etc. What is missing in ipdb for "object inspection"?
Also, python.el bundled with Emacs >= 24.3 has nice ipdb support.
Looks like the approach in #gaborous's answer is deprecated.
The new approach seems to be:
from IPython.core import debugger
debug = debugger.Pdb().set_trace
def buggy_method():
    debug()
Prefixing an "!" symbol to commands you type in pdb seems to have the same effect as doing something in an IPython shell. This works for accessing help for a certain function, or even variable names. Maybe this will help you to some extent. For example,
ipdb> help(numpy.transpose)
*** No help on (numpy.transpose)
But !help(numpy.transpose) will give you the expected help page on numpy.transpose. Similarly for variable names, say you have a variable l, typing "l" in pdb lists the code, but !l prints the value of l.
You can start IPython from within ipdb.
Induce the ipdb debugger1:
import ipdb; ipdb.set_trace()
Enter IPython from within the ipdb> console2:
from IPython import embed; embed()
Return to the ipdb> console from within IPython:
exit
If you're lucky enough to be using Emacs, things can be made even more convenient.
This requires using M-x shell. Using yasnippet and bm, define the following snippet. This will replace the text ipdb in the editor with the set-trace line. After inserting the snippet, the line will be highlighted so that it is easily noticeable and navigable. Use M-x bm-next to navigate.
# -*- mode: snippet -*-
# name: ipdb
# key: ipdb
# expand-env: ((yas-after-exit-snippet-hook #'bm-toggle))
# --
import ipdb; ipdb.set_trace()
1 All on one line for easy deletion. Since imports only happen once, this form ensures ipdb will be imported when you need it with no extra overhead.
2 You can save yourself some typing by importing IPython within your .pdbrc file:
try:
    from IPython import embed
except:
    pass
This allows you to simply call embed() from within ipdb (of course, only when IPython is installed).
Did you try this tip?
Or better still, use ipython, and call:
from IPython.Debugger import Tracer; debug_here = Tracer()
then you can just use
debug_here()
whenever you want to set a breakpoint
The right, easy, cool, exact answer to the question is to use the %run magic with the -d flag.
In [4]: run -d myscript.py
NOTE: Enter 'c' at the ipdb> prompt to continue execution.
> /cygdrive/c/Users/mycodefolder/myscript.py(4)<module>()
2
3
----> 4 a=1
5 b=2
One option is to use an IDE like Spyder which should allow you to interact with your code while debugging (using an IPython console, in fact). In fact, Spyder is very MATLAB-like, which I presume was intentional. That includes variable inspectors, variable editing, built-in access to documentation, etc.
If you type exit() in the embed() console, the code continues and goes to the next embed() line.
The Pyzo IDE has capabilities similar to what the OP asked for. You don't have to start in debug mode. Similarly to MATLAB, the commands are executed in the shell. When you set up a breakpoint in some source code line, the IDE stops the execution there and you can debug and issue regular IPython commands as well.
It does seem however that step-into doesn't (yet?) work well (i.e. stopping in one line and then stepping into another function) unless you set up another break-point.
Still, coming from MATLAB, this seems the best solution I've found.
From Python 3.2, pdb has the interact command, which gives you access to the full Python/IPython command space.
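For illustration, a shortened, made-up pdb session using it:
(Pdb) p a
5
(Pdb) interact
*interactive*
>>> type(a)
<class 'int'>
>>> help(a)
...
Hit Ctrl-D to leave the interactive console and return to the (Pdb) prompt.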
Running from inside Emacs' IPython shell, with a breakpoint set via pdb.set_trace(), should work.
Checked with python-mode.el, M-x ipython RET etc.
Developing New Code
Debugging inside IPython
Use Jupyter/IPython cell execution to speed up experiment iterations
Use %%debug for step through
Cell Example:
%%debug
...: for n in range(4):
...:     n > 2
Debugging Existing Code
IPython inside debugging
Debugging a broken unit test: pytest ... --pdbcls=IPython.terminal.debugger:TerminalPdb --pdb
Debugging outside of test case: breakpoint(), python -m ipdb, etc.
IPython.embed() for full IPython functionality where needed while in the debugger
Thoughts on Python
I agree with the OP that there are many things MATLAB does nicely that Python still does not have and really should, since just about everything in the language favors development speed over production speed. Maybe someday I will contribute more than trivial bug fixes to CPython.
https://github.com/ipython/ipython/commit/f042f3fea7560afcb518a1940daa46a72fbcfa68
See also Is it possible to run commands in IPython with debugging?
If you put import ipdb; ipdb.set_trace() in a cell outside a function, it will raise an error.
Using %pdb or %debug, you can only see the final error state; you cannot step through the code as it runs.
I use the following technique:
%%writefile temp.py
.....cell code.....
This saves the code of the cell to the file temp.py. Then
%run -i -d temp.py
will run the cell code under pdb.
-i: run the file in IPython’s namespace instead of an empty one.
-d: run your program under the control of pdb, the Python debugger.
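As a concrete (made-up) example of the two cells:
%%writefile temp.py
values = [n ** 2 for n in range(5)]
total = sum(values)
print(total)
and then, in the next cell:
%run -i -d temp.py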
Question can be related to Use python subprocess module like a command line simulator
I have written some infrastructure code called my_shell to which you can pass shell commands for my application. It looks like this:
class ApplicationTestShell(object):
    def __init__(self):
        '''
        Constructor
        '''
        self.play_ground_dir = "/var/tmp/MyAppDir"
        ensure_dir_exists_and_empty(self.play_ground_dir)

    def execute_command(self, command, on_success = None, on_failure = None):
        p = create_shell_process(self, self.play_ground_dir)
        sout, serr = p.communicate(input = command)
        if p.returncode == 0:
            on_success(sout)
        else:
            on_failure(serr)

    def create_shell_process(self, cwd):
        return Popen("/bin/bash", env= {WHAT DO I DO HERE?}, cwd = test_dir, stdout=PIPE, stderr=PIPE, stdin=PIPE)
The interesting bit to me here is the env parameter. Python expects a 'map'-like data structure of all environment variables. My application requires several variables to be set and exported. The script for setting and exporting them is generated by running, say, '/bin/appload myapp' (assume appload is always available on the path). What I do currently is, when I call p.communicate, the following:
p.communicate(input = "eval `/bin/appload myapp`;" + command)
So basically before running the command I call the infrastructure setup.
Is there any way to do this in a better fashion in Python? I somehow want to push the eval /bin/appload part into the env parameter of the Popen call OR into the shell creation process.
What are the problems with my current implementation? (I feel it is hacky but I may be wrong)
It depends on how /bin/appload myapp works. If it only guarantees that it will output bash syntax, then parsing that output in Python in order to construct the environment object there is almost certainly more trouble than it's worth (you might need to support parameter and variable expansion, subshells, process substitution, etc, etc). On the other hand, if you are sure that /bin/appload myapp will only ever output lines of the form "VARIABLENAME=someword", then that's pretty trivial to parse in Python and you could move it into your Python code if you like.
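For what it's worth, a rough sketch of that second case (it assumes appload really does emit only simple NAME=value lines, one per line):
import subprocess

output = subprocess.check_output(["/bin/appload", "myapp"]).decode()

app_env = {}
for line in output.splitlines():
    line = line.strip()
    if line and "=" in line and not line.startswith("#"):
        name, _, value = line.partition("=")
        app_env[name] = value

# app_env could then be passed as the env= argument when creating the shell, e.g.
# Popen("/bin/bash", env=app_env, cwd=self.play_ground_dir, stdin=PIPE, stdout=PIPE, stderr=PIPE)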
There are an awful lot of different directions you could go with these requirements:
You could capture the output of appload myapp into a tempfile and set the subprocess's $BASH_ENV to that filename; that would cause the shell to source your environment setup before running your command, in a way that some might consider cleaner.
You could give your command (with the eval-ing prefix) as the first argument to Popen and pass shell=True, and let Popen do the bash invocation on its own (setting $SHELL explicitly to bash if necessary).
You could use bash's -c option to specify the code to run on the command line rather than via stdin.
You could have a multi-tiered approach by invoking a shell from Python which eval's the appload myapp environment and then exec's another shell underneath it, so that the first doesn't show up in ps listings and the command given to create_shell_process has the shell all to itself (although that shouldn't really matter).
You could do a lot of things, depending on what your concerns are with respect to how the shell is invoked, how it looks in ps listings, whether you want your command to still be run if the appload myapp output produces an error when eval'd, etc. But for a general solution, I think what you have is perfectly fine.
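That said, here is a rough sketch of the $BASH_ENV variant mentioned above, in case it is useful (the command string is a placeholder, and appload's output is assumed to be valid bash):
import os
import subprocess
import tempfile

# Capture the environment-setup script once...
setup_script = subprocess.check_output(["/bin/appload", "myapp"])

# ...write it to a temp file that every non-interactive bash will source on startup.
with tempfile.NamedTemporaryFile("wb", suffix=".sh", delete=False) as f:
    f.write(setup_script)
    env_file = f.name

env = dict(os.environ, BASH_ENV=env_file)
p = subprocess.Popen("/bin/bash", env=env,
                     stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = p.communicate(b"my-app-command --arg\n")   # placeholder command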
I don't see any real problems with the implementation, besides cosmetic things or minor things that probably only came from copying and pasting the code: create_shell_process doesn't use its cwd parameter, and the on_success and on_failure parameters look like they're optional but the defaults will break things (you can't call None).
I'm not sure if what I'm asking is possible at all, but since Python is an interpreter it might be. I'm trying to make changes in an open-source project, but because there are no type declarations in Python it's difficult to know what the variables hold and what they do. You can't just look up the documentation on the variable's type, since you can't be sure what type it is. I want to drop to the terminal so I can quickly examine the types of the variables and what they do by typing help(var) or print(var). I could do this by changing the code and then re-running the program each time, but that would be much slower.
Let's say I have a program:
def foo():
    a = 5
    my_debug_shell()
    print a
foo()
my_debug_shell is the function I'm asking about. It would drop me to the '>>>' shell of the python interpreter where I can type help(a), and it would tell me that a is an integer. Then I type 'a=7', and some 'continue' command, and the program goes on to print 7, not 5, because I changed it.
http://docs.python.org/library/pdb.html
import pdb
pdb.set_trace()
Here is a solution that doesn't require code changes:
python -m pdb prog.py <prog_args>
(pdb) b 3
Breakpoint 1 at prog.py:3
(pdb) c
...
(pdb) p a
5
(pdb) a=7
(pdb) ...
In short:
start your program under debugger control
set a break point at a given line of code
let the program run up to that point
you get an interactive prompt that lets you do what you want (type 'help' for all options)
Python 3.7 has a new builtin way of setting breakpoints.
breakpoint()
The implementation of breakpoint() will import pdb and call pdb.set_trace().
Remember to include the parentheses (), since breakpoint is a function, not a keyword.
A one-line partial solution is simply to put 1/0 where you want the breakpoint: this will raise an exception, which will be caught by the debugger. Two advantages of this approach are:
Breakpoints set this way are robust against code modification (no dependence on a particular line number);
One does not need to import pdb in every program to be debugged; one can instead directly insert "breakpoints" where needed.
In order to catch the exception automatically, you can simply do python -m pdb prog.py… and then type c(ontinue) in order to start the program. When the 1/0 is reached, the program stops and drops into post-mortem debugging, where variables can be inspected as usual with the pdb debugger (p my_var). Now, this does not allow you to fix things and keep running the program. Instead you can try to fix the bug and run the program again.
If you want to use the powerful IPython shell, ipython -pdb prog.py… does the same thing, but leads to IPython's better debugger interface. Alternatively, you can do everything from within the IPython shell:
In IPython, set up the "debug on exception" mode of IPython (%pdb).
Run the program from IPython with %run prog.py…. When an exception occurs, the debugger is automatically activated and you can inspect variables, etc.
The advantage of this latter approach is that (1) the IPython shell is almost a must; and (2) once it is installed, debugging can easily be done through it (instead of directly through the pdb module). The full documentation is available on the IPython pages.
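A small made-up script to illustrate the 1/0 trick:
# buggy_prog.py
def scale(values, factor):
    scaled = [v * factor for v in values]
    1/0                       # deliberate "breakpoint": raises ZeroDivisionError
    return scaled

scale([1, 2, 3], 10)

# Then either:
#   python -m pdb buggy_prog.py    (type c; when it stops, inspect with p scaled)
# or, from IPython:
#   %pdb
#   %run buggy_prog.py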
You can run the program using pdb, and add breakpoints before starting execution.
In reality though, it's usually just as fast to edit the code and put in the set_trace() call, as another user stated.
Not sure what the real question is. Python gives you the 'pdb' debugger (google yourself) and in addition you can add logging and debug output as needed.
Hey I was wondering... I am using the pydev with eclipse and I'm really enjoying the powerful debugging features, but I was wondering:
Is it possible to set a breakpoint in eclipse and jump into the interactive python interpreter during execution?
I think that would be pretty handy ;)
edit: I want to emphasize that my goal is not to jump into a debugger. pydev/eclipse have a great debugger, and I can just look at the traceback and set break points.
What I want is to execute a script and jump into an interactive python interpreter during execution so I can do things like...
poke around
check the values of things
manipulate variables
figure out some code before I add it to the app
I know you can do this all with a debugger, but I can do it faster in the interactive interpreter because I can try something, see that it didn't work, and try something else without having to get the app back to the point of executing that code again.
So roughly a year on from the OP's question, PyDev has this capability built in. I am not sure when this feature was introduced, but all I know is that I've spent the last ~2hrs Googling and configuring iPython and whatever (which was looking like it would have done the job), only to realise Eclipse/PyDev has what I want out of the box.
As soon as you hit a breakpoint in debug mode, the console is right there ready and waiting!
I only didn't notice this as there is no prompt or blinking cursor; I had wrongly assumed it was a standard, output-only, console... but it's not. It even has code-completion.
Great stuff, see http://pydev.org/manual_adv_debug_console.html for more details.
This is from an old project, and I didn't write it, but it does something similar to what you want using ipython.
'''Start an IPython shell (for debugging) with current environment.

Call db() to start a shell, e.g.

def foo(bar):
    for x in bar:
        if baz(x):
            import ipydb; ipydb.db() # <-- start IPython here, with current value of x (ipydb is the name of this module).
'''
import inspect
import IPython

def db():
    '''Start IPython shell with caller's environment.'''
    # find caller's frame
    __up_frame = inspect.currentframe().f_back
    eval('IPython.Shell.IPShellEmbed([])()',
         # Empty list arg is ipython's argv. Later args to dict take precedence,
         # so f_globals shadows globals(). Need globals() for the IPython module.
         dict(globals().items() + __up_frame.f_globals.items()),
         __up_frame.f_locals)
edit by Jim Robert (question owner): For the sake of this example, place the above code into a file called my_debug.py. Then put that file on your Python path, and you can insert the following lines anywhere in your code to jump into a debugger (as long as you execute from a shell):
import my_debug
my_debug.db()
I've long been using this code in my sitecustomize.py to start a debugger on an exception. This can also be triggered by Ctrl+C. It works beautifully in the shell, don't know about eclipse.
import sys
def info(exception_type, value, tb):
    if hasattr(sys, 'ps1') or not sys.stderr.isatty() or not sys.stdin.isatty() or not sys.stdout.isatty() or exception_type == SyntaxError:
        # we are in interactive mode or we don't have a tty-like
        # device, so we call the default hook
        sys.__excepthook__(exception_type, value, tb)
    else:
        import traceback
        import pdb
        if exception_type != KeyboardInterrupt:
            try:
                import growlnotify
                growlnotify.growlNotify("Script crashed", sticky = False)
            except ImportError:
                pass
        # we are NOT in interactive mode, print the exception...
        traceback.print_exception(exception_type, value, tb)
        print
        # ...then start the debugger in post-mortem mode.
        pdb.pm()

sys.excepthook = info
Here's the source and more discussion on StackOverflow.
You can jump into an interactive session using code.InteractiveConsole as described here; however I don't know how to trigger this from Eclipse.
A solution might be to intercept Ctrl+C to jump into this interactive console (using the signal module: signal.signal(signal.SIGINT, my_handler)), but it would probably change the execution context and you probably don't want this.
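For reference, a rough sketch of that Ctrl+C idea (my_handler and its contents are illustrative, not from the linked recipe):
import code
import signal

def my_handler(signum, frame):
    # Merge the interrupted frame's globals and locals so they are visible in the console.
    namespace = dict(frame.f_globals)
    namespace.update(frame.f_locals)
    code.interact(banner="Ctrl+C caught -- interactive console (Ctrl-D to resume)",
                  local=namespace)

signal.signal(signal.SIGINT, my_handler)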
If you are already running in debug mode you can set an additional breakpoint if the program execution is currently paused (e.g. because you are already at a breakpoint). I just tried it out now with the latest Pydev - it works just fine.
If you are running normally (i.e. not in debug mode) all breakpoints will be ignored. No changes to breakpoints will alter the way a non-debug run works.