Disabling python's assert() without -O flag [duplicate] - python

I'm running a python script from inside a different software (it provides a python interface to manipulate its data structures).
I'm optimizing my code for speed and would like to see what impact on performance my asserts have.
I'm unable to use python -O. What other options do I have to programmatically disable all asserts in Python code? The variable __debug__ (which is cleared by the -O flag) cannot be assigned to :(

The docs say,
The value for the built-in variable [__debug__] is determined when the
interpreter starts.
So, if you cannot control how the Python interpreter is started, then it looks like you cannot disable assert.
Here then are some other options:
The safest way is to manually remove all the assert statements.
If all your assert statements occur on lines by themselves, then
perhaps you could remove them with
sed -i 's/assert /pass #assert /g' script.py
Note that this will mangle your code if other code comes after the assert. For example, the sed command above would comment-out the return in a line like this:
assert x; return True
which would change the logic of your program.
If you have code like this, it would probably be best to manually remove the asserts.
There might be a way to remove them programmatically by parsing your
script with the tokenize module, but writing such a program to
remove asserts may take more time than it would take to manually
remove the asserts, especially if this is a one-time job.
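If you do want to automate it, here is a minimal sketch of the idea using the ast module rather than tokenize (the file names are placeholders; ast.unparse needs Python 3.9+, and it discards comments and exact formatting):
import ast

class StripAsserts(ast.NodeTransformer):
    def visit_Assert(self, node):
        # Replace each assert with a `pass` so indented blocks stay non-empty.
        return ast.copy_location(ast.Pass(), node)

with open('script.py') as f:
    tree = ast.parse(f.read())
tree = ast.fix_missing_locations(StripAsserts().visit(tree))
with open('script_noassert.py', 'w') as f:
    f.write(ast.unparse(tree))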
If the other piece of software accepts .pyc files, then there is a
dirty trick which seems to work on my machine, though note a Python
core developer warns against this (See Éric Araujo's comment on 2011-09-17). Suppose your script is called script.py.
Make a temporary script called, say, temp.py:
import script
Run python -O temp.py. This creates script.pyo.
Move script.py and script.pyc (if it exists) out of your PYTHONPATH
or whatever directory the other software is reading to find your
script.
Rename script.pyo --> script.pyc.
Now when the other software tries to import your script, it will
only find the pyc file, which has the asserts removed.
For example, if script.py looks like this:
assert False
print('Got here')
then running python temp.py will now print Got here instead of raising an AssertionError.
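Put together as shell commands, the dirty trick looks roughly like this (a sketch assuming everything sits in the current directory; the backup directory name is arbitrary):
python -O temp.py            # imports script.py, producing script.pyo without asserts
mkdir -p backup
mv script.py script.pyc backup/ 2>/dev/null   # hide the originals from the import path
mv script.pyo script.pyc     # the .pyc is now all the other software can find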

You may be able to do this with an environment variable, as described in this other answer. Setting PYTHONOPTIMIZE=1 is equivalent to starting Python with the -O option. As an example, this works in Blender 2.78, which embeds Python 3.5:
blender --python-expr 'assert False; print("foo")'
PYTHONOPTIMIZE=1 blender --python-expr 'assert False; print("foo")'
The first command prints a traceback, while the second just prints "foo".
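If you want to check the effect without Blender, here is a small self-contained experiment. It spawns a child interpreter, since the variable must be set before Python starts:
import os
import subprocess
import sys

# Without PYTHONOPTIMIZE the child fails on the assert;
# with PYTHONOPTIMIZE=1 the assert is compiled away and 'ok' is printed.
env = dict(os.environ, PYTHONOPTIMIZE='1')
subprocess.run([sys.executable, '-c', "assert False; print('ok')"], env=env)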

As @unutbu describes, there is no official way of doing this. However, a simple strategy is to define a flag like _test somewhere (for example, as a keyword argument to a function, or as a global variable in a module), then include it in your assert statements as follows:
def f(x, _test=True):
    assert not _test or x > 0
    ...
Then you can disable the asserts in that function when needed:
f(x, _test=False)
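The same idea also works with a module-level switch instead of a keyword argument (a sketch; the name SKIP_CHECKS is made up):
SKIP_CHECKS = False  # set to True to disable the guarded asserts at runtime

def g(x):
    assert SKIP_CHECKS or x > 0
Note the assert statements themselves still execute; only the checks they guard are short-circuited away, so this buys less speed than a real -O.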

Related

Ansible - Testing if called python script is indeed running via Ansible

I'm looking for a way to test, in my Python script, whether the script is running from Ansible, so I can also run it through a shell (for running unit tests etc.). Calling AnsibleModule without being invoked from an Ansible playbook will just wait endlessly for a response that will never come.
I'm expecting that there isn't a simple test and that I have to restructure in some way, but I'm open to any options.
def main():
    # must test if running via ansible before next line
    module = AnsibleModule(
        argument_spec=dict(
            server=dict(required=True, type='str'),
            [...]
        )
        [... do things ...]
    )

if __name__ == "__main__":
    if running_via_ansible:
        main()
    else:
        run_tests()
I believe there are a couple of answers, with various levels of trickery involved
since your module is written in Python, Ansible will use the AnsiballZ framework to run it, which means its sys.argv[0] will start with AnsiballZ_; it will also likely be written to $HOME/.ansible/tmp on the target machine, so one could also sniff for .ansible/tmp showing up in argv[0] (see the sketch after this list)
if the file contains the string WANT_JSON in it, then Ansible will invoke it with the module's JSON payload as the first argument instead of feeding it JSON on sys.stdin (thus far the filename has been colocated with the AnsiballZ_ script, but I don't know that such a thing is guaranteed)
Similarly, although apparently far more Python-specific: if it contains a triple-quoted sentinel """<<INCLUDE_ANSIBLE_MODULE_JSON_ARGS>>""" (or the ''' flavor works, too), then that magic string is replaced by the serialized JSON that would otherwise have been provided via stdin
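As a rough illustration of the first heuristic (the helper name matches the question's pseudocode; none of this is a guaranteed Ansible interface, so treat it as best-effort):
import os
import sys

def running_via_ansible():
    # AnsiballZ wrappers show up as AnsiballZ_<module>.py, usually under
    # the remote user's ~/.ansible/tmp directory.
    argv0 = sys.argv[0]
    return (os.path.basename(argv0).startswith('AnsiballZ_')
            or '.ansible/tmp' in argv0)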
While this may not apply or be helpful, I would actually expect any local testing environment to have more "fingerprints" than production, so detecting the test environment (rather than the opposite) has the pleasing side effect of "failing open": the module will assume it is running in production mode unless it can prove it is in testing mode, which should make for fewer weird false positives. Then again, I guess the reasonable default depends on how problematic it would be for the module to attempt to carry out its payload when not really in use.

Step-by-step debugging with IPython

From what I have read, there are two ways to debug code in Python:
With a traditional debugger such as pdb or ipdb. This supports commands such as c for continue, n for step-over, s for step-into, etc., but you don't have direct access to an IPython shell, which can be extremely useful for object inspection.
Using IPython by embedding an IPython shell in your code. You can do from IPython import embed, and then use embed() in your code. When your program/script hits an embed() statement, you are dropped into an IPython shell. This allows the full inspection of objects and testing of Python code using all the IPython goodies. However, when using embed() you can't step-by-step through the code anymore with handy keyboard shortcuts.
Is there any way to combine the best of both worlds? I.e.
Be able to step-by-step through your code with handy pdb/ipdb keyboard shortcuts.
At any such step (e.g. on a given statement), have access to a full-fledged IPython shell.
IPython debugging as in MATLAB:
An example of this type of "enhanced debugging" can be found in MATLAB, where the user always has full access to the MATLAB engine/shell, and she can still step-by-step through her code, define conditional breakpoints, etc. From what I have discussed with other users, this is the debugging feature that people miss the most when moving from MATLAB to IPython.
IPython debugging in Emacs and other editors:
I don't want to make the question too specific, but I work mostly in Emacs, so I wonder if there is any way to bring this functionality into it. Ideally, Emacs (or the editor) would allow the programmer to set breakpoints anywhere on the code and communicate with the interpreter or debugger to have it stop in the location of your choice, and bring to a full IPython interpreter on that location.
What about ipdb.set_trace() ? In your code :
import ipdb; ipdb.set_trace()
Update: as of Python 3.7, we can write breakpoint(). It works the same way, but it also obeys the PYTHONBREAKPOINT environment variable. This feature comes from PEP 553.
This allows for full inspection of your code, and you have access to commands such as c (continue), n (execute next line), s (step into the method at point) and so on.
See the ipdb repo and its list of commands. IPython is now part of the Jupyter project.
PS: note that an ipdb command takes precedence over Python code. So in order to evaluate list(foo) you'd need print(list(foo)) or !list(foo).
Also, if you like the ipython prompt (its emacs and vim modes, history, completions,…) it's easy to get the same for your project since it's based on the python prompt toolkit.
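As a side note on the breakpoint() update above, the PYTHONBREAKPOINT variable lets you route or silence those calls without touching the code (my_script.py is a placeholder, and the first line assumes ipdb is installed):
PYTHONBREAKPOINT=ipdb.set_trace python my_script.py   # breakpoint() opens ipdb
PYTHONBREAKPOINT=0 python my_script.py                # all breakpoint() calls are ignored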
You can use IPython's %pdb magic. Just call %pdb in IPython and when an error occurs, you're automatically dropped to ipdb. While you don't have the stepping immediately, you're in ipdb afterwards.
This makes debugging individual functions easy, as you can just load a file with %load and then run a function. You could force an error with an assert at the right position.
%pdb is a line magic. Call it as %pdb on, %pdb 1, %pdb off or %pdb 0. If called without argument it works as a toggle.
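A minimal session to illustrate (any uncaught exception will do):
In [1]: %pdb on
Automatic pdb calling has been turned ON

In [2]: 1/0        # the ZeroDivisionError drops you straight into ipdb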
(Update on May 28, 2016) Using RealGUD in Emacs
For anyone in Emacs, this thread shows how to accomplish everything described in the OP (and more) using
a new important debugger in Emacs called RealGUD which can operate with any debugger (including ipdb).
The Emacs package isend-mode.
The combination of these two packages is extremely powerful and allows one to recreate exactly the behavior described in the OP and do even more.
More info on the wiki article of RealGUD for ipdb.
Original answer:
After having tried many different methods for debugging Python, including everything mentioned in this thread, one of my preferred ways of debugging Python with IPython is with embedded shells.
Defining a custom embedded IPython shell:
Add the following to a script on your PYTHONPATH, so that the function ipsh() becomes available.
import inspect

# First import the embed function
from IPython.terminal.embed import InteractiveShellEmbed
from IPython.config.loader import Config

# Configure the prompt so that I know I am in a nested (embedded) shell
cfg = Config()
prompt_config = cfg.PromptManager
prompt_config.in_template = 'N.In <\\#>: '
prompt_config.in2_template = '   .\\D.: '
prompt_config.out_template = 'N.Out<\\#>: '

# Messages displayed when I drop into and exit the shell.
banner_msg = ("\n**Nested Interpreter:\n"
              "Hit Ctrl-D to exit interpreter and continue program.\n"
              "Note that if you use %kill_embedded, you can fully deactivate\n"
              "this embedded instance so it will never turn on again")
exit_msg = '**Leaving Nested interpreter'

# Wrap it in a function that gives me more context:
def ipsh():
    ipshell = InteractiveShellEmbed(config=cfg, banner1=banner_msg, exit_msg=exit_msg)
    frame = inspect.currentframe().f_back
    msg = 'Stopped at {0.f_code.co_filename} at line {0.f_lineno}'.format(frame)
    # Go back one level!
    # This is needed because the call to ipshell is inside the function ipsh()
    ipshell(msg, stack_depth=2)
Then, whenever I want to debug something in my code, I place ipsh() right at the location where I need to do object inspection, etc. For example, say I want to debug my_function below.
Using it:
def my_function(b):
    a = b
    ipsh()  # <- This will embed a full-fledged IPython interpreter
    a = 4
and then I invoke my_function(2) in one of the following ways:
Either by running a Python program that invokes this function from a Unix shell
Or by invoking it directly from IPython
Regardless of how I invoke it, the interpreter stops at the line that says ipsh(). Once you are done, you can hit Ctrl-D and Python will resume execution (with any variable updates that you made). Note that, if you run the code from a regular IPython shell (case 2 above), the new IPython shell will be nested inside the one from which you invoked it, which is perfectly fine, but it's good to be aware of. Either way, once the interpreter stops at the location of ipsh, I can inspect the value of a (which will be 2), see what functions and objects are defined, etc.
The problem:
The solution above can be used to have Python stop anywhere you want in your code, and then drop you into a fully-fledged IPython interpreter. Unfortunately it does not let you add or remove breakpoints once you invoke the script, which is highly frustrating. In my opinion, this is the only thing that is preventing IPython from becoming a great debugging tool for Python.
The best you can do for now:
A workaround is to place ipsh() a priori at the different locations where you want the Python interpreter to launch an IPython shell (i.e. a breakpoint). You can then "jump" between different pre-defined, hard-coded "breakpoints" with Ctrl-D, which would exit the current embedded IPython shell and stop again whenever the interpreter hits the next call to ipsh().
If you go this route, one way to exit "debugging mode" and ignore all subsequent breakpoints, is to use ipshell.dummy_mode = True which will make Python ignore any subsequent instantiations of the ipshell object that we created above.
You can start an IPython session from pudb and go back to the debugging session as you like.
BTW, ipdb is using IPython behind the scenes, and you can actually use IPython functionality such as TAB completion and magic commands (the ones starting with %). If you are OK with ipdb, you can start it from IPython using commands such as %run and %debug. An ipdb session is actually better than a plain IPython one in the sense that you can go up and down the stack trace, etc. What is missing in ipdb for "object inspection"?
Also, python.el bundled with Emacs >= 24.3 has nice ipdb support.
Looks like the approach in @gaborous's answer is deprecated.
The new approach seems to be:
from IPython.core import debugger
debug = debugger.Pdb().set_trace

def buggy_method():
    debug()
Prefixing an "!" symbol to commands you type in pdb seems to have the same effect as doing something in an IPython shell. This works for accessing help for a certain function, or even variable names. Maybe this will help you to some extent. For example,
ipdb> help(numpy.transpose)
*** No help on (numpy.transpose)
But !help(numpy.transpose) will give you the expected help page on numpy.transpose. Similarly for variable names, say you have a variable l, typing "l" in pdb lists the code, but !l prints the value of l.
You can start IPython from within ipdb.
Invoke the ipdb debugger [1]:
import ipdb; ipdb.set_trace()
Enter IPython from within the ipdb> console [2]:
from IPython import embed; embed()
Return to the ipdb> console from within IPython:
exit
If you're lucky enough to be using Emacs, things can be made even more convenient.
This requires using M-x shell. Using yasnippet and bm, define the following snippet. This will replace the text ipdb in the editor with the set-trace line. After inserting the snippet, the line will be highlighted so that it is easily noticeable and navigable. Use M-x bm-next to navigate.
# -*- mode: snippet -*-
# name: ipdb
# key: ipdb
# expand-env: ((yas-after-exit-snippet-hook #'bm-toggle))
# --
import ipdb; ipdb.set_trace()
[1] All on one line for easy deletion. Since imports only happen once, this form ensures ipdb will be imported when you need it with no extra overhead.
[2] You can save yourself some typing by importing IPython within your .pdbrc file:
try:
    from IPython import embed
except ImportError:
    pass
This allows you to simply call embed() from within ipdb (of course, only when IPython is installed).
Did you try this tip?
Or better still, use ipython, and call:
from IPython.Debugger import Tracer; debug_here = Tracer()
then you can just use
debug_here()
whenever you want to set a breakpoint
The right, easy, cool, exact answer for the question is to use the %run magic with the -d flag.
In [4]: run -d myscript.py
NOTE: Enter 'c' at the ipdb> prompt to continue execution.
> /cygdrive/c/Users/mycodefolder/myscript.py(4)<module>()
2
3
----> 4 a=1
5 b=2
One option is to use an IDE like Spyder which should allow you to interact with your code while debugging (using an IPython console, in fact). In fact, Spyder is very MATLAB-like, which I presume was intentional. That includes variable inspectors, variable editing, built-in access to documentation, etc.
If you type exit() in the embed() console, the code continues and goes to the next embed() line.
The Pyzo IDE has similar capabilities as the OP asked for. You don't have to start in debug mode. Similarly to MATLAB, the commands are executed in the shell. When you set up a break-point in some source code line, the IDE stops the execution there and you can debug and issue regular IPython commands as well.
It does seem however that step-into doesn't (yet?) work well (i.e. stopping in one line and then stepping into another function) unless you set up another break-point.
Still, coming from MATLAB, this seems the best solution I've found.
Since Python 3.2, you have the interact command in pdb, which gives you access to the full Python/IPython command space.
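For example, from the debugger prompt (my_var stands for whatever is in scope in the current frame):
(Pdb) interact
*interactive*
>>> help(my_var)    # a plain Python REPL seeded with the current frame's namespace
>>> # Ctrl-D returns you to the debugger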
Running from inside Emacs's IPython shell with a breakpoint set via pdb.set_trace() should work.
Checked with python-mode.el, M-x ipython RET, etc.
Developing New Code
Debugging inside IPython
Use Jupyter/IPython cell execution to speed up experiment iterations
Use %%debug for step through
Cell Example:
%%debug
for n in range(4):
    n > 2
Debugging Existing Code
IPython inside debugging
Debugging a broken unit test: pytest ... --pdbcls=IPython.terminal.debugger:TerminalPdb --pdb
Debugging outside of test case: breakpoint(), python -m ipdb, etc.
IPython.embed() for full IPython functionality where needed while in the debugger
Thoughts on Python
I agree with the OP that many things MATLAB does nicely Python still does not have, and it really should, since just about everything in the language favors development speed over production speed. Maybe someday I will contribute more than trivial bug fixes to CPython.
https://github.com/ipython/ipython/commit/f042f3fea7560afcb518a1940daa46a72fbcfa68
See also Is it possible to run commands in IPython with debugging?
If you put import ipdb; ipdb.set_trace() in a cell outside a function, it will raise an error.
Using %pdb or %debug alone, you can only inspect the final error state; you cannot step through the code.
I use the following technique:
%%writefile temp.py
.....cell code.....
This saves the code of the cell to the file temp.py. Then
%run -i -d temp.py runs the cell code under pdb.
-i: run the file in IPython's namespace instead of an empty one.
-d: run your program under the control of pdb, the Python debugger.

Python CLI program unit testing

I am working on a python Command-Line-Interface program, and I find it boring when doing testings, for example, here is the help information of the program:
usage: pyconv [-h] [-f ENCODING] [-t ENCODING] [-o file_path] file_path

Convert text file from one encoding to another.

positional arguments:
  file_path

optional arguments:
  -h, --help            show this help message and exit
  -f ENCODING, --from ENCODING
                        Encoding of source file
  -t ENCODING, --to ENCODING
                        Encoding you want
  -o file_path, --output file_path
                        Output file path
When I make changes to the program and want to test something, I must open a terminal, type the command (with options and arguments), hit enter, and see if any error occurs while running. If an error does occur, I must go back to the editor, check the code from top to bottom, guess where the bug is, make small changes, write print lines, return to the terminal, run the command again... and so on, recursively.
So my question is, what is the best way to do testing with CLI program, can it be as easy
as unit testing with normal python scripts?
I think it's perfectly fine to test functionally on a whole-program level. It's still possible to test one aspect/option per test. This way you can be sure that the program really works as a whole. Writing unit-tests usually means that you get to execute your tests quicker and that failures are usually easier to interpret/understand. But unit-tests are typically more tied to the program structure, requiring more refactoring effort when you internally change things.
Anyway, using py.test, here is a little example for testing a latin1 to utf8 conversion for pyconv:
# content of test_pyconv.py
import pytest

# we reuse a bit of pytest's own testing machinery, this should eventually come
# from a separately installable pytest-cli plugin.
pytest_plugins = ["pytester"]

@pytest.fixture
def run(testdir):
    def do_run(*args):
        args = ["pyconv"] + list(args)
        return testdir._run(*args)
    return do_run

def test_pyconv_latin1_to_utf8(tmpdir, run):
    input = tmpdir.join("example.txt")
    content = unicode("\xc3\xa4\xc3\xb6", "latin1")
    with input.open("wb") as f:
        f.write(content.encode("latin1"))
    output = tmpdir.join("example.txt.utf8")
    result = run("-flatin1", "-tutf8", input, "-o", output)
    assert result.ret == 0
    with output.open("rb") as f:
        newcontent = f.read()
    assert content.encode("utf8") == newcontent
After installing pytest ("pip install pytest") you can run it like this:
$ py.test test_pyconv.py
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.4.5dev1
collected 1 items
test_pyconv.py .
========================= 1 passed in 0.40 seconds =========================
The example reuses some internal machinery of pytest's own testing by leveraging pytest's fixture mechanism, see http://pytest.org/latest/fixture.html. If you forget about the details for a moment, you can just work from the fact that "run" and "tmpdir" are provided for helping you to prepare and run tests. If you want to play, you can try to insert a failing assert-statement or simply "assert 0" and then look at the traceback or issue "py.test --pdb" to enter a python prompt.
Start from the user interface with functional tests and work down towards unit tests. It can feel difficult, especially when you use the argparse module or the click package, which take control of the application entry point.
The cli-test-helpers Python package has examples and helper functions (context managers) for a holistic approach on writing tests for your CLI. It's a simple idea, and one that works perfectly with TDD:
Start with functional tests (to ensure your user interface definition) and
Work towards unit tests (to ensure your implementation contracts)
Functional tests
NOTE: I assume you develop code that is deployed with a setup.py file or is run as a module (-m).
Is the entrypoint script installed? (tests the configuration in your setup.py)
Can this package be run as a Python module? (i.e. without having to be installed)
Is command XYZ available? etc. Cover your entire CLI usage here!
Those tests are simplistic: They run the shell command you would enter in the terminal, e.g.
import os

def test_entrypoint():
    exit_status = os.system('foobar --help')
    assert exit_status == 0
Note the trick to use a non-destructive operation (e.g. --help or --version) as we can't mock anything with this approach.
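The "run as a Python module" check from the list above can be covered the same way (foobar is still the example package name):
def test_runas_module():
    """Can this package be run as a Python module?"""
    exit_status = os.system('python -m foobar --help')
    assert exit_status == 0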
Towards unit tests
To test single aspects inside the application, you will need to mimic things like command line arguments and maybe environment variables. You will also need to catch your script exiting, to keep the tests from failing on SystemExit exceptions.
Example with ArgvContext to mimic command line arguments:
@patch('foobar.command.baz')
def test_cli_command(mock_command):
    """Is the correct code called when invoked via the CLI?"""
    with ArgvContext('foobar', 'baz'), pytest.raises(SystemExit):
        foobar.cli.main()
    assert mock_command.called
Note that we mock the function that we want our CLI framework (click in this example) to call, and that we catch SystemExit that the framework naturally raises. The context managers are provided by cli-test-helpers and pytest.
Unit tests
The rest is business as usual. With the above two strategies we've overcome the control a CLI framework may have taken away from us. The rest is usual unit testing. TDD-style hopefully.
Disclosure: I am the author of the cli-test-helpers Python package.
So my question is, what is the best way to do testing with CLI program, can it be as easy as unit testing with normal python scripts?
The only difference is that when you run a Python module as a script, its __name__ attribute is set to '__main__'. So generally, if you intend to run your script from the command line, it should have the following form:
import sys

# function and class definitions, etc.
# ...

def foo(arg):
    pass

def main():
    """Entry point to the script"""
    # Do parsing of command line arguments and other stuff here. And then
    # make calls to whatever functions and classes that are defined in your
    # module. For example:
    foo(sys.argv[1])

if __name__ == '__main__':
    main()
Now there is no difference in how you use it: as a script or as a module. So inside your unit-testing code you can just import the foo function, call it, and make any assertions you want.
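For instance, a minimal sketch with the standard unittest module (the module name mymodule is hypothetical):
import unittest
from mymodule import foo

class FooTests(unittest.TestCase):
    def test_foo_accepts_an_argument(self):
        # foo() just passes in the skeleton above, so it should return None
        self.assertIsNone(foo('some-arg'))

if __name__ == '__main__':
    unittest.main()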
Maybe too little too late,
but you can always use
import os
result = os.system('<insert your command with options here>')
assert result == 0
In that way, you can run your program as if it were from the command line, and evaluate the exit code.
(Update after I studied pytest)
You can also use capsys.
(from running pytest --fixtures)
capsys
Enable text capturing of writes to sys.stdout and sys.stderr.
The captured output is made available via ``capsys.readouterr()`` method
calls, which return a ``(out, err)`` namedtuple.
``out`` and ``err`` will be ``text`` objects.
This isn't for Python specifically, but what I do to test command-line scripts is to run them with various predetermined inputs and options and store the correct output in a file. Then, to test them when I make changes, I simply run the new script and pipe the output into diff correct_output -. If the files are the same, it outputs nothing. If they're different, it shows you where. This will only work if you are on Linux or OS X; on Windows, you will have to get MSYS.
Example:
python mycliprogram --someoption "some input" | diff correct_output -
To make it even easier, you can add all these test runs to your 'make test' Makefile target, which I assume you already have. ;)
If you are running many of these at once, you could make it a little more obvious where each one fails by adding a fail tag:
python mycliprogram --someoption "some input" | diff correct_output - || { tput setaf 1; echo "FAILED"; }
The short answer is yes, you can use unit tests, and should. If your code is well structured, it should be quite easy to test each component separately, and if you need to, you can always mock sys.argv to simulate running it with different arguments.
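A short sketch of that sys.argv trick with unittest.mock (the entry point import and the argument list are placeholders modeled on pyconv's usage):
from unittest import mock

from pyconv import main  # hypothetical: the entry point under test

def test_main_converts_file():
    fake_argv = ['pyconv', '-f', 'latin1', '-t', 'utf8', 'example.txt']
    with mock.patch('sys.argv', fake_argv):
        main()  # argparse reads sys.argv and sees the fake arguments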
pytest-console-scripts is a Pytest plugin for testing python scripts installed via console_scripts entry point of setup.py.
For Python 3.5+, you can use the simpler subprocess.run to call your CLI command from your test.
Using pytest:
import subprocess

def test_command__works_properly():
    try:
        result = subprocess.run(
            ['command', '--argument', 'value'],
            check=True, capture_output=True, text=True)
    except subprocess.CalledProcessError as error:
        print(error.stdout)
        print(error.stderr)
        raise error
The output can be accessed via result.stdout, result.stderr, and result.returncode if needed.
The check parameter causes an exception to be raised if an error occurs. Note Python 3.7+ is required for the capture_output and text parameters, which simplify capturing and reading stdout/stderr.
Given that you are explicitly asking about testing for a command line application, I believe that you are aware of unit-testing tools in python and that you are actually looking for a tool to automate end-to-end tests of a command line tool. There are a couple of tools out there that are specifically designed for that. If you are looking for something that's pip-installable, I would recommend cram. It integrates well with the rest of the python environment (e.g. through a pytest extension) and it's quite easy to use:
Simply write the commands you want to run prepended with $ and the expected output on the lines that follow. For example, the following would be a valid cram test:
$ echo Hello
Hello
By having four spaces in front of expected output and two in front of the test, you can actually use these tests to also write documentation. More on that on the website.
You can use standard unittest module:
# python -m unittest <test module>
or use nose as a testing framework. Just write classic unittest files in separate directory and run:
# nosetests <test modules directory>
Writing unit tests is easy. Just follow the online manual for unit testing.
I would not test the program as a whole; this is not a good test strategy and may not actually catch the actual spot of the error. The CLI interface is just a front end to an API. You test the API via your unit tests, and then when you make a change to a specific part, you have a test case to exercise that change.
So, restructure your application so that you test the API and not the application itself. But you can have a functional test that actually does run the full application and checks that the output is correct.
In short, yes, testing the code is the same as testing any other code, but you must test the individual parts rather than their combination as a whole to ensure that your changes do not break everything.


Can I put a breakpoint in a running Python program that drops to the interactive terminal?

I'm not sure if what I'm asking is possible at all, but since Python is interpreted it might be. I'm trying to make changes in an open-source project, but because variables in Python aren't statically typed, it's difficult to know what data the variables hold and what they do. You can't just look up the documentation on the variable's type, since you can't be sure what type it is. I want to drop to the terminal so I can quickly examine the types of the variables and what they do by typing help(var) or print(var). I could do this by changing the code and then re-running the program each time, but that would be much slower.
Let's say I have a program:
def foo():
    a = 5
    my_debug_shell()
    print a

foo()
my_debug_shell is the function I'm asking about. It would drop me to the '>>>' shell of the python interpreter where I can type help(a), and it would tell me that a is an integer. Then I type 'a=7', and some 'continue' command, and the program goes on to print 7, not 5, because I changed it.
http://docs.python.org/library/pdb.html
import pdb
pdb.set_trace()
Here is a solution that doesn't require code changes:
python -m pdb prog.py <prog_args>
(pdb) b 3
Breakpoint 1 at prog.py:3
(pdb) c
...
(pdb) p a
5
(pdb) a=7
(pdb) ...
In short:
start your program under debugger control
set a break point at a given line of code
let the program run up to that point
you get an interactive prompt that lets you do what you want (type 'help' for all options)
Python 3.7 has a new builtin way of setting breakpoints.
breakpoint()
The implementation of breakpoint() will import pdb and call pdb.set_trace().
Remember to include the parentheses (), since breakpoint is a function, not a keyword.
A one-line partial solution is simply to put 1/0 where you want the breakpoint: this will raise an exception, which will be caught by the debugger. Two advantages of this approach are:
Breakpoints set this way are robust against code modification (no dependence on a particular line number);
One does not need to import pdb in every program to be debugged; one can instead directly insert "breakpoints" where needed.
In order to catch the exception automatically, you can simply do python -m pdb prog.py… and then type c(ontinue) in order to start the program. When the 1/0 is reached, the program stops on the exception, and variables can be inspected as usual with the pdb debugger (p my_var). Now, this does not allow you to fix things and keep running the program. Instead you can try to fix the bug and run the program again.
If you want to use the powerful IPython shell, ipython -pdb prog.py… does the same thing, but leads to IPython's better debugger interface. Alternatively, you can do everything from within the IPython shell:
In IPython, set up the "debug on exception" mode of IPython (%pdb).
Run the program from IPython with %run prog.py…. When an exception occurs, the debugger is automatically activated and you can inspect variables, etc.
The advantage of this latter approach is that (1) the IPython shell is almost a must; and (2) once it is installed, debugging can easily be done through it (instead of directly through the pdb module). The full documentation is available on the IPython pages.
You can run the program using pdb, and add breakpoints before starting execution.
In reality though, it's usually just as fast to edit the code and put in the set_trace() call, as another user stated.
Not sure what the real question is. Python gives you the 'pdb' debugger (google yourself) and in addition you can add logging and debug output as needed.
