How to automate pdb commands in Python?

I am calling pdb on some function fun, i.e.:
def fun():
    a = 10
    c = fun2(a)
    d = 40
    return c + d

def fun2(a):
    xyz = 'str'
    return a + 10
Now I am running pdb using pdb.runcall(fun), which opens a pdb console for debugging. Suppose I press s (step) twice and then q to quit in the pdb console.
The problem is that I don't want to do this manually. I want to write a script which does something like this automatically (tell pdb that the first two commands are s and the third is q). I am asking because there are many functions which need at least two c (continue) commands before the whole function executes and can yield/return some valid output (generators, for example).
Any help would be much appreciated.

Update after better understanding the question:
In general, I don't think this is the ideal way to test code; designing code for testability (e.g. using TDD) will often result in functions that are easier to test (e.g. using mocks/fake objects, dependency injection etc), and I would encourage you to consider refactoring the code if possible. The other issue with this approach is that the tests may become very tightly coupled to the code. However, I'll assume here that you know what you are doing, and that the above is not an option for whatever reason.
Scripting pdb
If you want to script pdb from code, this is actually possible by instantiating your own pdb.Pdb class and passing in the stdin and, at the time of writing, stdout arguments (I'm not sure both should be required - I've filed https://bugs.python.org/issue33749).
Example (I just added the extra input argument to fun):
def fun(i):
    a = 10 + i
    c = fun2(a)
    d = 40
    return c + d

def fun2(a):
    xyz = 'str'
    return a + 10
import pdb
import io
output = io.StringIO()
# this contains the pdb commands we want to execute:
pdb_script = io.StringIO("p i;; i = 100;; n;; p a;; c;;")
mypdb = pdb.Pdb(stdin=pdb_script, stdout=output)
Normal result (no scripting):
In [40]: pdb.runcall(fun, 1)
...:
> <ipython-input-1-28966c4f6e38>(2)fun()
-> a = 10 + i
(Pdb)
(Pdb) c
Out[40]: 61
Scripted pdb:
In [44]: mypdb = pdb.Pdb(stdin=pdb_script, stdout=output)
In [45]: mypdb.runcall(fun, 1)
Out[45]: 160
In [50]: print(output.getvalue())
> <ipython-input-1-28966c4f6e38>(2)fun()
-> a = 10 + i
(Pdb) 1
> <ipython-input-1-28966c4f6e38>(3)fun()
-> c = fun2(a)
110
You may find using pdb_script.seek(0) helpful to reset the script.
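For example, here is a minimal sketch of replaying the same command script against the function (it assumes the fun/fun2 definitions above; the helper name run_scripted is just illustrative):
import io
import pdb

pdb_script = io.StringIO("p i;; i = 100;; n;; p a;; c;;")
output = io.StringIO()

def run_scripted(func, *args):
    # rewind the command script and clear the captured output before each run
    pdb_script.seek(0)
    output.seek(0)
    output.truncate(0)
    mypdb = pdb.Pdb(stdin=pdb_script, stdout=output)
    return mypdb.runcall(func, *args)

result = run_scripted(fun, 1)   # 160 here, because the script sets i = 100
print(output.getvalue())        # the captured pdb transcript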
Original answer - using conditional breakpoints
It sounds like what you really want is to only get into the debugger when your code is in a certain state. This can be done with conditional breakpoints (see pdb docs for details).
For example, let's say you want to break in fun2 if a > 10:
In [2]: import pdb
In [3]: pdb.runcall(fun, 1)
> <ipython-input-1-28966c4f6e38>(2)fun()
-> a = 10 + i
(Pdb) break fun2, a > 10
Breakpoint 1 at <ipython-input-1-28966c4f6e38>:6
(Pdb) c
> <ipython-input-1-28966c4f6e38>(7)fun2()
-> xyz ='str'
(Pdb) c
Out[3]: 61
In [4]: pdb.runcall(fun, -1)
> <ipython-input-1-28966c4f6e38>(2)fun()
-> a = 10 + i
(Pdb) c
Out[4]: 59
Notice in the first case you hit the breakpoint, in the second you didn't.
Original answer - using breakpoints and executing commands when hit
You could also try setting a breakpoint and using the commands facility.
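For example, a hypothetical session (breakpoint numbers and file locations will differ for your code): commands attaches a list of pdb commands that run automatically whenever the breakpoint is hit, and a resuming command such as c also ends the list.
(Pdb) break fun2
Breakpoint 1 at <ipython-input-1-28966c4f6e38>:6
(Pdb) commands 1
(com) p a
(com) c
(Pdb) c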


How to automatically generate unit testing routines from special syntax in method docstring / comment?

This is a mock-up of what I'm looking for:
def is_even(a, b):
    """Returns True if both numbers are even.

    #AutoUnitTestTag:
    - (0,2) -> True
    - (2,1) -> False
    - (3,5) -> False
    """
    return (a % 2 == 0 and b % 2 == 0)
Is there a tool that could allow one to insert compact syntax-defined unit tests within a function's docstring and then automatically generate a unittest_foobar.py unit testing routine?
I'm almost sure I've seen this a while ago, but cannot find it.
EDIT: @mkrieger1 suggested doctest in the comments below and after playing a bit with it, I'd say it's a pretty apt solution.
However, I'd like to let this question linger a bit longer in order to collect more suggestions, especially about more sophisticated tools.
If someone's interested, here's how one would use doctest in my example case:
Format function in file is_even.py like so:
def is_even(a,b):
    """Returns True if both numbers are even.

    >>> is_even(0,2)
    True
    >>> is_even(2,1)
    False
    >>> is_even(3,5)
    False
    """
    return (a % 2 == 0 and b % 2 == 0)
Then run the command python3 -m doctest -v is_even.py (the -v flag is needed to see the verbose report below; without it, doctest prints nothing unless a test fails).
The output will look like so:
Trying:
    is_even(0,2)
Expecting:
    True
ok
Trying:
    is_even(2,1)
Expecting:
    False
ok
Trying:
    is_even(3,5)
Expecting:
    False
ok
1 items had no tests:
    is_even
1 items passed all tests:
   3 tests in is_even.is_even
3 tests in 2 items.
3 passed and 0 failed.
Test passed.
It's called doctest and it's part of the standard library. But you'll have to change your syntax a bit:
def is_even(a, b):
    """Returns True if both numbers are even.

    >>> is_even(0, 2)
    True
    >>> is_even(2, 1)
    False
    >>> is_even(3, 5)
    False
    """
    return (a % 2 == 0 and b % 2 == 0)
You can run it with:
python -m doctest is_even.py
The syntax has been designed so that you can mostly copy and paste your tests from an interactive (C)Python interpreter session, so there is good reason not to try and change it. Moreover, other Python developers will already be familiar with this syntax, and not with anything custom you might come up with.
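If you would rather trigger the doctests from Python code instead of the command line, the standard library's doctest.testmod() can do that too; a minimal sketch (it assumes is_even.py from the example above is importable):
# run_doctests.py - a small driver, assuming is_even.py sits on the import path
import doctest
import is_even

results = doctest.testmod(is_even, verbose=True)
print(f"{results.attempted} attempted, {results.failed} failed")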

Running a pytest test from a python file and NOT from a command line

I have three Python files in one directory (c:\Tests). I am trying to run the tests using pytest from the file TestCases1.py, but I have not succeeded. I am new to Python and I do not know if I am asking the right question. I have seen several examples, but almost all of them use the command line to run the tests, and I want to run them from a Python file. Since I am a newbie to testing, I would appreciate a very simple answer (I have seen some similar questions but I did not get the answers). I am using Python 3.6 (32-bit) and Eclipse Oxygen 3a.
min_max.py => Some basic functions to be tested
def min(values):
    _min = values[0]
    for val in values:
        if val < _min:
            _min = val
    return _min

def max(values):
    _max = values[0]
    for val in values:
        if val > _max:
            _max = val
    return _max
min_max_test.py => Some tests for the functions
import min_max

def test_min():
    print("starting")
    values = (2, 3, 1, 4, 6)
    val = min(values)
    assert val == 1
    print("done test_min")

def test_max():
    print("starting")
    values = (2, 3, 1, 4, 6)
    val = max(values)
    assert val == 6
    print("done test_max")
TestCases1.py => File from where I want to run the test
import pytest

pytest_args = [
    'c:\Tests\min_max_test.py'
]
pytest.main(pytest_args)
Optionally, you could use subprocess to run pytest commands from your Python script. For example,
# ~/tests
import subprocess
subprocess.run(["pytest . -q"], shell=True)
>>>
. [100%]
1 passed in 0.00s
CompletedProcess(args=['pytest . -q'], returncode=0)
In min_max_test.py, the min and max variable names in the test functions would be taken from the built-ins and not from your min_max.py file.
You either need to use something like min_max.min or import those functions using a from import rather than a full module import.
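For instance, a minimal sketch of min_max_test.py rewritten with module-qualified names (using from min_max import min, max would work the same way):
# min_max_test.py - calling the functions from min_max.py instead of the built-ins
import min_max

def test_min():
    values = (2, 3, 1, 4, 6)
    assert min_max.min(values) == 1

def test_max():
    values = (2, 3, 1, 4, 6)
    assert min_max.max(values) == 6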
P.S. Please include the error messages along with the question and be specific about what problem you are having; it makes it that much easier :)

Error with quantifier in Z3Py

I would like Z3 to check whether there exists an integer t that satisfies my formula. I'm getting the following error:
Traceback (most recent call last):
File "D:/z3-4.6.0-x64-win/bin/python/Expl20180725.py", line 18, in <module>
g = ForAll(t, f1(t) == And(t>=0, t<10, user[t].rights == ["read"] ))
TypeError: list indices must be integers or slices, not ArithRef
Code:
from z3 import *
import random
from random import randrange

class Struct:
    def __init__(self, **entries):
        self.__dict__.update(entries)

user = [Struct() for i in range(10)]
for i in range(10):
    user[i].uid = i
    user[i].rights = random.choice(["create", "execute", "read"])

s = Solver()
f1 = Function('f1', IntSort(), BoolSort())
t = Int('t')
f2 = Exists(t, f1(t))
g = ForAll(t, f1(t) == And(t >= 0, t < 10, user[t].rights == ["read"]))
s.add(g)
s.add(f2)
print(s.check())
print(s.model())
You are mixing and matching Python and Z3 expressions, and while that is the whole point of Z3py, it definitely does not mean that you can mix/match them arbitrarily. In general, you should keep all the "concrete" parts in Python, and relegate the symbolic parts to "z3"; carefully coordinating the interaction in between. In your particular case, you are accessing a Python list (your user) with a symbolic z3 integer (t), and that is certainly not something that is allowed. You have to use a Z3 symbolic Array to access with a symbolic index.
The other issue is the use of strings ("create"/"read" etc.) and expecting them to have meanings in the symbolic world. That is also not how z3py is intended to be used. If you want them to mean something in the symbolic world, you'll have to model them explicitly.
I'd strongly recommend reading through http://ericpony.github.io/z3py-tutorial/guide-examples.htm which is a great introduction to z3py including many of the advanced features.
Having said all that, I'd be inclined to code your example as follows:
from z3 import *
import random

Right, (create, execute, read) = EnumSort('Right', ('create', 'execute', 'read'))

users = Array('Users', IntSort(), Right)
for i in range(10):
    users = Store(users, i, random.choice([create, execute, read]))

s = Solver()
t = Int('t')
s.add(t >= 0)
s.add(t < 10)
s.add(users[t] == read)

r = s.check()
if r == sat:
    print(s.model()[t])
else:
    print(r)
Note how the enumerated type Right in the symbolic land is used to model your "permissions."
When I run this program multiple times, I get:
$ python a.py
5
$ python a.py
9
$ python a.py
unsat
$ python a.py
6
Note how unsat is produced, if it happens that the "random" initialization didn't put any users with a read permission.

Dynamic Semantic errors in Python

I came across this as an interview question. It seemed interesting, so I am posting it here.
Consider an operation which gives a semantic error, like division by zero. By default, the Python compiler gives output like "Invalid Operation" or something similar. Can we control the output that is given out by the Python compiler, like print some other error message, skip that division by zero operation, and carry on with the rest of the instructions?
And also, how can I evaluate the cost of run-time semantic checks?
There are many Python experts here. I am hoping someone will throw some light on this. Thanks in advance.
Can we control the output that is given out by Python compiler, like print some other error message, skip that division by zero operation, and carry on with rest of the instructions?
No, you cannot. You can manually wrap every dangerous command with a try...except block, but I'm assuming you're talking about an automatic recovery to specific lines within a try...except block, or even completely automatically.
By the time the error has propagated far enough for sys.excepthook to be called (or whatever outer scope you catch it in, if you catch it earlier), the inner scopes are gone. You can change line numbers with sys.settrace in CPython, although that is only an implementation detail, but since the outer scopes are gone there is no reliable recovery mechanism.
If you try to use the humorous goto April fools module (that uses the method I just described) to jump blocks even within a file:
from goto import goto, label

try:
    1 / 0

    label .foo
    print("recovered")
except:
    goto .foo
you get an error:
Traceback (most recent call last):
File "rcv.py", line 9, in <module>
goto .foo
File "rcv.py", line 9, in <module>
goto .foo
File "/home/joshua/src/goto-1.0/goto.py", line 272, in _trace
frame.f_lineno = targetLine
ValueError: can't jump into the middle of a block
so I'm pretty certain it's impossible.
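What you can control is the reporting, not the recovery: wrap the risky statement in try...except, or install a custom sys.excepthook so uncaught errors print your own message instead of the default traceback. A minimal sketch (the wording of the message is just illustrative):
import sys

def friendly_hook(exc_type, exc_value, tb):
    # replace the default traceback printout for uncaught exceptions
    if exc_type is ZeroDivisionError:
        print("Invalid operation: division by zero", file=sys.stderr)
    else:
        sys.__excepthook__(exc_type, exc_value, tb)

sys.excepthook = friendly_hook

1 / 0                    # prints the friendly message instead of a traceback
print("never reached")   # but execution still stops; the hook cannot resume it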
And also, how can i evaluate the cost of run-time semantic checks?
I don't know what that is, but you're probably looking for a line_profiler:
import random
from line_profiler import LineProfiler

profiler = LineProfiler()

def profile(function):
    profiler.add_function(function)
    return function

@profile
def foo(a, b, c):
    if not isinstance(a, int):
        raise TypeError("Is this what you mean by a 'run-time semantic check'?")

    d = b * c
    d /= a

    return d**a

profiler.enable()
for _ in range(10000):
    try:
        foo(random.choice([2, 4, 2, 5, 2, 3, "dsd"]), 4, 2)
    except TypeError:
        pass
profiler.print_stats()
output:
Timer unit: 1e-06 s
File: rcv.py
Function: foo at line 11
Total time: 0.095197 s
Line #      Hits         Time  Per Hit   % Time  Line Contents
==============================================================
    11                                           @profile
    12                                           def foo(a, b, c):
    13     10000        29767      3.0     31.3      if not isinstance(a, int):
    14      1361         4891      3.6      5.1          raise TypeError("Is this what you mean by a 'run-time semantic check'?")
    15
    16      8639        20192      2.3     21.2      d = b * c
    17      8639        20351      2.4     21.4      d /= a
    18
    19      8639        19996      2.3     21.0      return d**a
So the "run-time semantic check", in this case would be taking 36.4% of the time of running foo.
If you want to time specific blocks manually that are larger than you'd use timeit on but smaller than you'd want for a profiler, instead of using two time.time() calls (which is quite an inaccurate method) I suggest Steven D'Aprano's Stopwatch context manager.
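For reference, here is a minimal sketch of such a stopwatch-style context manager (not D'Aprano's exact recipe, just the same idea built on time.perf_counter):
import time
from contextlib import contextmanager

@contextmanager
def stopwatch(label="block"):
    # time an arbitrary block of code with a with-statement
    start = time.perf_counter()
    try:
        yield
    finally:
        print(f"{label}: {time.perf_counter() - start:.6f} s")

with stopwatch("isinstance check x1,000,000"):
    for _ in range(1_000_000):
        isinstance(1, int)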
I would just use an exception; this example uses Python 3. For Python 2, simply remove the annotations after the function parameters, so your function signature would look like this: f(a, b).
def f(a: int, b: int):
    """
    @param a:
    @param b:
    """
    try:
        c = a / b
        print(c)
    except ZeroDivisionError:
        print("You idiot, you can't do that ! :P")

if __name__ == '__main__':
    f(1, 0)
>>> from cheese import f
>>> f(0, 0)
You idiot, you can't do that ! :P
>>> f(0, 1)
0.0
>>> f(1, 0)
You idiot, you can't do that ! :P
>>> f(1, 1)
1.0
This is an example of how you could catch Zero Division, by making an exception case using ZeroDivisionError.
I won't go into any specific tools for making loggers, but you can indeed understand the costs associated with this kind of checking. You can put a start = time.time() at the start of the function and end = time.time() at the end. If you take the difference, you will get the execution time in seconds.
I hope that helps.

call python with system() in R to run a python script emulating the python console

I want to pass a chunk of Python code to Python in R with something like system('python ...'), and I'm wondering if there is an easy way to emulate the python console in this case. For example, suppose the code is "print 'hello world'", how can I get the output like this in R?
>>> print 'hello world'
hello world
This only shows the output:
> system("python -c 'print \"hello world\"'")
hello world
Thanks!
BTW, I asked in r-help but have not got a response yet (if I do, I'll post the answer here).
Do you mean something like this?
export NUM=10
R -q -e "rnorm($NUM)"
You might also like to check out littler - http://dirk.eddelbuettel.com/code/littler.html
UPDATED
Following your comment below, I think I am beginning to understand your question better. You are asking about running python inside the R shell.
So here's an example:-
# code in a file named myfirstpythonfile.py
a = 1
b = 19
c = 3
mylist = [a, b, c]
for item in mylist:
    print item
In your R shell, therefore, do this:
> system('python myfirstpythonfile.py')
1
19
3
Essentially, you can simply call python /path/to/your/python/file.py to execute a block of python code.
In my case, I can simply call python myfirstpythonfile.py assuming that I launched my R shell in the same directory (path) my python file resides.
FURTHER UPDATED
And if you really want to print out the source code, here's a brute force method that might be possible. In your R shell:-
> system('python -c "import sys; sys.stdout.write(file(\'myfirstpythonfile.py\', \'r\').read());"; python myfirstpythonfile.py')
a = 1
b = 19
c = 3
mylist = [a, b, c]
for item in mylist:
print item
1
19
3
AND FURTHER FURTHER UPDATED :-)
So if the purpose is to print the python code before the execution of a code, we can use the python trace module (reference: http://docs.python.org/library/trace.html). In command line, we use the -m option to call a python module and we specify the options for that python module following it.
So for my example above, it would be:-
$ python -m trace --trace myfirstpythonfile.py
--- modulename: myfirstpythonfile, funcname: <module>
myfirstpythonfile.py(1): a = 1
myfirstpythonfile.py(2): b = 19
myfirstpythonfile.py(3): c = 3
myfirstpythonfile.py(4): mylist = [a, b, c]
myfirstpythonfile.py(5): for item in mylist:
myfirstpythonfile.py(6): print item
1
myfirstpythonfile.py(5): for item in mylist:
myfirstpythonfile.py(6): print item
19
myfirstpythonfile.py(5): for item in mylist:
myfirstpythonfile.py(6): print item
3
myfirstpythonfile.py(5): for item in mylist:
--- modulename: trace, funcname: _unsettrace
trace.py(80): sys.settrace(None)
Which as we can see, traces the exact line of python code, executes the result immediately after and outputs it into stdout.
The system command has an option called intern, which defaults to FALSE. Set it to TRUE and whatever output was previously just printed will instead be returned, so it can be stored in a variable.
Now run your system command with this option and you should get the output directly in your variable, like this:
tmp <- system("python -c 'print \"hello world\"'",intern=T)
My workaround for this problem is defining my own functions that paste in parameters, write out a temporary .py file, and then execute the Python file via a system call. Here is an example that calls ArcGIS's Euclidean Distance function:
py.EucDistance = function(poly_path, poly_name, snap_raster, out_raster_path_name, maximum_distance, mask){
  py_path = 'G:/Faculty/Mann/EucDistance_temp.py'
  poly_path_name = paste(poly_path, poly_name, sep='')

  fileConn <- file(paste(py_path))
  writeLines(c(
    paste('import arcpy'),
    paste('from arcpy import env'),
    paste('from arcpy.sa import *'),
    paste('arcpy.CheckOutExtension("spatial")'),
    paste('out_raster_path_name = "',out_raster_path_name,'"',sep=""),
    paste('snap_raster = "',snap_raster,'"',sep=""),
    paste('cellsize = arcpy.GetRasterProperties_management(snap_raster,"CELLSIZEX")'),
    paste('mask = "',mask,'"',sep=""),
    paste('maximum_distance = "',maximum_distance,'"',sep=""),
    paste('sr = arcpy.Describe(snap_raster).spatialReference'),
    paste('arcpy.env.overwriteOutput = True'),
    paste('arcpy.env.snapRaster = "',snap_raster,'"',sep=""),
    paste('arcpy.env.mask = mask'),
    paste('arcpy.env.scratchWorkspace = "G:/Faculty/Mann/Historic_BCM/Aggregated1080/Scratch.gdb"'),
    paste('arcpy.env.outputCoordinateSystem = sr'),
    # get spatial reference for raster and force output to that
    paste('sr = arcpy.Describe(snap_raster).spatialReference'),
    paste('py_projection = sr.exportToString()'),
    paste('arcpy.env.extent = snap_raster'),
    paste('poly_name = "',poly_name,'"',sep=""),
    paste('poly_path_name = "',poly_path_name,'"',sep=""),
    paste('holder = EucDistance(poly_path_name, maximum_distance, cellsize, "")'),
    paste('holder = SetNull(holder < -9999, holder)'),
    paste('holder.save(out_raster_path_name)')
  ), fileConn, sep = "\n")
  close(fileConn)

  system(paste('C:\\Python27\\ArcGIS10.1\\python.exe', py_path))
}
