Sometimes I write small utility functions and pack them as a Python package.
How small? 30-60 lines of Python.
My question is: do you think writing the tests inside the actual code is bad practice? An abuse?
I can see great benefits, like having usage examples inside the code itself without jumping between files (again, for really small projects).
Example:

#!/usr/bin/env python

# Actual code
def increment(number, by=1):
    return number + by

# Tests
def test_increment_positive():
    assert increment(1) == 2

def test_increment_negative():
    assert increment(-5) == -4

def test_increment_zero():
    assert increment(0) == 1
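For what it's worth, these inline tests run fine when the file is passed to pytest explicitly (the file name increment.py is assumed here):

pytest increment.py

so no separate test file is needed at this size.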
The general idea is taken from the monitoring framework Riemann, which I use; in Riemann you write your test file along with your code.
You can write doctests inside your docstrings to indicate how your function should be used:

def increment(number, by=1):
    """Increments the given number by some other number.

    >>> increment(3)
    4
    >>> increment(5, 3)
    8
    """
    return number + by
From the documentation, doctest is commonly used:

- To check that a module's docstrings are up-to-date by verifying that all interactive examples still work as documented.
- To perform regression testing by verifying that interactive examples from a test file or a test object work as expected.
- To write tutorial documentation for a package, liberally illustrated with input-output examples. Depending on whether the examples or the expository text are emphasized, this has the flavor of "literate testing" or "executable documentation".
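A minimal way to make those docstring examples self-checking is the standard doctest entry point; you can also run python -m doctest increment.py -v (the module name is assumed):

if __name__ == "__main__":
    import doctest
    doctest.testmod()  # verifies every >>> example in this module's docstrings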
This is an old problem, as demonstrated in https://community.intel.com/t5/Analyzers/Unable-to-view-source-code-when-analyzing-results/td-p/1153210. I have tried all the listed methods, none of them works, and I cannot find any more solutions on the internet. Basically, VTune cannot find the custom Python source file no matter what is tried. I am using the most recent version as of this writing. Please let me know whether there is a solution.
For example, if you run the following program:

def myfunc(*args):
    # Do a lot of things.
    ...

if __name__ == '__main__':
    # Do something and call myfunc
    myfunc()
Call this script main.py. Now use the newest VTune version (I am using Ubuntu 18.04), run vtune-gui, and do a basic hotspots analysis. You will not find any information on this file. However, a huge pile of information on Python and its other code is found (related to your Python environment). In theory, you should be able to find the source of main.py, as well as the cost of each line in that script. However, that is simply not happening.
Desired behavior: I would really like to find the source file and function in the top-down view (or any view, really). Any advice is welcome.
VTune offers full support for profiling Python code, and the tool should be able to display the source code in your Python file as you expect. Could you please check whether the function you are expecting to see in the VTune results ran long enough?
Just to confirm that everything is working fine, I wrote the matrix multiplication code shown below (don't worry about the accuracy of the code itself):

def matrix_mul(X, Y):
    # result has len(X) rows and len(Y[0]) columns, initialized to zero
    result_matrix = [[0 for j in range(len(Y[0]))] for i in range(len(X))]
    # iterate through rows of X
    for i in range(len(X)):
        # iterate through columns of Y
        for j in range(len(Y[0])):
            # iterate through rows of Y
            for k in range(len(Y)):
                result_matrix[i][j] += X[i][k] * Y[k][j]
    return result_matrix
Then I called this function (matrix_mul) on my Ubuntu machine with matrices large enough that the overall execution time was on the order of a few seconds.
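For reference, the driver could be as simple as this sketch (the size N is a hypothetical knob; tune it until the run lasts a few seconds on your machine):

import random

N = 400  # hypothetical size, adjust so execution takes a few seconds
X = [[random.random() for _ in range(N)] for _ in range(N)]
Y = [[random.random() for _ in range(N)] for _ in range(N)]

if __name__ == '__main__':
    matrix_mul(X, Y)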
I used the below command to start profiling (you can also see the VTune version I used):
/opt/intel/oneapi/vtune/2021.1.1/bin64/vtune -collect hotspots -knob enable-stack-collection=true -data-limit=500 -ring-buffer=10 -app-working-dir /usr/bin -- python3 /home/johnypau/MyIntel/temp/Python_matrix_mul/mat_mul_method.py
Now open the VTune results in the GUI and, under the bottom-up tab, order by "Module / Function / Call-stack" (or whatever your preferred grouping is).
You should be able to see the module (mat_mul_method.py in my case) and the function matrix_mul. If you double-click, VTune should be able to load the sources too.
I'm taking an open-source class. I am given doctests and a CLI to test my code and see if it passes; however, how can I visualize my code to see what is happening? For instance, I am having a hard time understanding how the for loop works, and I would like to see what list is assigned in each recursive call.
The file has many functions, so running python -m doctest isn't ideal, and it also doesn't show the execution. When I run debug mode in VS Code and set a breakpoint at the function, it never steps into the function. I imagine this is because the file never calls nut_finder(), but I don't want to start adding to the source code, since that would defeat the purpose of the doctest. I can also run Python interactively, but that too won't show me how the code is executing.
The only solution I have found is to use a tool that visualizes the code; however, this requires me to manually copy and paste stuff. It also becomes a problem for large, multi-file applications. So, in short, how can I learn the right way to debug my code?
# lab05.py
def nut_finder(t):
    """Returns True if t contains a node with the value 'nut' and
    False otherwise.

    >>> scrat = tree('nut')
    >>> nut_finder(scrat)
    True
    >>> sproul = tree('roots', [tree('branch1', [tree('leaf'), tree('nut')]), tree('branch2')])
    >>> nut_finder(sproul)
    True
    >>> numbers = tree(1, [tree(2), tree(3, [tree(4), tree(5)]), tree(6, [tree(7)])])
    >>> nut_finder(numbers)
    False
    >>> t = tree(1, [tree('nut', [tree('not nut')])])
    >>> nut_finder(t)
    True
    """
    if label(t) == 'nut':
        return True
    for b in branches(t):
        if nut_finder(b):
            return True
    return False
In Python 3.7 and up, you can type breakpoint() right where you want execution to stop. It is a shortcut for using the pdb module (https://docs.python.org/3/library/pdb.html).
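A minimal sketch of the idea (a hypothetical recursive function, not the lab code): run the script normally and pdb pauses at each call, so you can step with n/s and inspect variables with p:

def countdown(n):
    breakpoint()  # pauses here on every call; try 'p n' to inspect the argument
    if n == 0:
        return
    countdown(n - 1)

countdown(3)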
When using the Hypothesis library and performing stateful testing, how can I see or output the values in the Bundle "services" that the library is trying on my code?
Example
import hypothesis.strategies as st
from hypothesis.strategies import integers
from hypothesis.stateful import Bundle, RuleBasedStateMachine, rule, precondition

class test_servicediscovery(RuleBasedStateMachine):
    services = Bundle('services')

    @rule(target=services, s=st.integers(min_value=0, max_value=2))
    def add_service(self, s):
        return s
The question is: how do I print / see the Bundle "services" variable, generated by the library?
In the example you've given, the services bundle isn't being tried on your code - you're adding things to it, but never using them as inputs to another rule.
If you do use them, running Hypothesis in verbose mode will show all inputs as they happen; even in normal mode, failing examples will print all the values used.
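For instance, a sketch along these lines (ServiceDiscovery and use_service are illustrative names; the TestCase settings hook is Hypothesis's documented way to configure stateful tests) makes every drawn value visible when run under pytest:

import hypothesis.strategies as st
from hypothesis import settings, Verbosity
from hypothesis.stateful import Bundle, RuleBasedStateMachine, rule

class ServiceDiscovery(RuleBasedStateMachine):
    services = Bundle('services')

    @rule(target=services, s=st.integers(min_value=0, max_value=2))
    def add_service(self, s):
        return s

    @rule(s=services)
    def use_service(self, s):
        # consuming the bundle makes Hypothesis actually try its values
        assert 0 <= s <= 2

TestServices = ServiceDiscovery.TestCase
TestServices.settings = settings(verbosity=Verbosity.verbose)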
Can I get some advice on writing a unit test for the following piece of code?
%python
import sys
import json

sys.argv = []
sys.argv.append('{"product1":{"brand":"x","type":"y"}}')
sys.argv.append('{"product1":{"brand":"z","type":"a"}}')
products = sys.argv

my_products = []
for n, i in enumerate(products[:]):
    xx = json.loads(i)
    for j in xx.keys():
        yy = {}  # fresh dict per product, so earlier entries aren't overwritten
        yy["brand"] = xx[j]['brand']
        yy["type"] = xx[j]["type"]
        my_products.append(yy)
print(my_products)
As it stands, there aren't any units to test!
A test might consist of:
- packaging your program in a script
- invoking your program from a Python unit test as a subprocess
- piping the output of your command process to a buffer
- asserting the buffer is what you expect it to be (sketched below)
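A rough sketch of that subprocess round-trip (assuming the code above is saved as products.py and changed to read real command-line arguments; both the file name and that change are hypothetical):

import subprocess
import sys

def test_script_end_to_end():
    raw = '{"product1": {"brand": "x", "type": "y"}}'
    result = subprocess.run(
        [sys.executable, "products.py", raw],
        capture_output=True, text=True, check=True,
    )
    # coarse assertion: we can only inspect the printed text
    assert "brand" in result.stdout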
While the above would technically allow you to have an automated test on your code, it comes with a lot of burden:
- multiprocessing
- weak assertions by not having types
- coarse interaction (you have to invoke a script, and can't just assert on the brand/type logic)
One way to address those issues could be to package your code into smaller units, i.e. create a method to encapsulate:

for j in xx.keys():
    yy = {}
    yy["brand"] = xx[j]['brand']
    yy["type"] = xx[j]["type"]
    my_products.append(yy)
Import it, exercise it, and assert on its output. Then there might be something to map the loading and application of the xx.keys() loop to an array (which you could also encapsulate as a function).
And then there could be the highest level, taking in args and composing the product mapping, loading, and transforming. And since your code will be thoroughly unit tested at this point, you may get away with not having a test for your top-level script?
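To make that concrete, here is a hedged sketch of the decomposition (extract_products and load_products are illustrative names, not from the original post):

import json

def extract_products(record):
    """Map one parsed JSON record to flat {brand, type} dicts."""
    return [{"brand": v["brand"], "type": v["type"]} for v in record.values()]

def load_products(raw_args):
    my_products = []
    for raw in raw_args:
        my_products.extend(extract_products(json.loads(raw)))
    return my_products

# Now the core logic is directly assertable, no subprocess required:
def test_extract_products():
    record = {"product1": {"brand": "x", "type": "y"}}
    assert extract_products(record) == [{"brand": "x", "type": "y"}]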
I am writing a program that tries to compare two methods. I would like to generate control flow graphs (CFGs) for all matched methods and use a topological sort to compare the two graphs.
RPython, the translation toolchain behind PyPy, offers a way of grabbing the flow graph (in the pypy/rpython/flowspace directory of the PyPy project) for type inference.
This works quite well in most cases but generators are not supported. The result will be in SSA form, which might be good or bad, depending on what you want.
There's a Python package called staticfg which does exactly this: generation of control flow graphs from a piece of Python code.
For instance, putting the first quick sort Python snippet from Rosetta Code in qsort.py, the following code generates its control flow graph.
from staticfg import CFGBuilder
cfg = CFGBuilder().build_from_file('quick sort', 'qsort.py')
cfg.build_visual('qsort', 'png')
Note that it doesn't seem to understand more advanced control flow like comprehensions.
I found that py2cfg has a better representation of the control flow graph (CFG) than the one from staticfg.
https://gitlab.com/classroomcode/py2cfg
https://pypi.org/project/py2cfg/
Let's take this function in Python:
def fib():
    a, b = 0, 1
    while True:
        yield a
        a, b = b, a + b

fib_gen = fib()
for _ in range(10):
    next(fib_gen)
Image from StaticCFG:
Image from PY2CFG:
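If I read the py2cfg docs correctly, generating an image like that is a one-liner from the shell (the file name fib.py is assumed):

py2cfg fib.py  # should emit an SVG of the CFG next to the script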
http://pycallgraph.slowchop.com/ looks like what you need.
The Python trace module also has an option, --trackcalls, that can be an entry point to the call-tracing machinery in the stdlib.
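For example (main.py being whatever script you want to trace):

python -m trace --trackcalls main.py

At exit, it prints the caller -> callee relationships observed during the run.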