How to see the "Bundle" output of Hypothesis Python library? (Stateful testing)

When using the hypothesis library and performing stateful testing, how can I see or output the Bundle "services" the library is trying on my code?
Example
import hypothesis.strategies as st
from hypothesis.strategies import integers
from hypothesis.stateful import Bundle, RuleBasedStateMachine, rule, precondition

class test_servicediscovery(RuleBasedStateMachine):
    services = Bundle('services')

    @rule(target=services, s=st.integers(min_value=0, max_value=2))
    def add_service(self, s):
        return s
The question is: how do I print / see the Bundle "services" variable, generated by the library?

In the example you've given, the services bundle isn't being tried on your code: you're adding things to it, but never using the values as inputs to another rule.
If you do consume the bundle from another rule, running Hypothesis in verbose mode will show all inputs as they happen; and even in normal mode, a failing example will print all the values used.
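For illustration, a minimal sketch of both points, reusing the names from the question (the consuming rule and the settings hookup are additions, not part of the original code):

import hypothesis.strategies as st
from hypothesis import settings, Verbosity
from hypothesis.stateful import Bundle, RuleBasedStateMachine, rule

class test_servicediscovery(RuleBasedStateMachine):
    services = Bundle('services')

    @rule(target=services, s=st.integers(min_value=0, max_value=2))
    def add_service(self, s):
        return s

    @rule(s=services)
    def use_service(self, s):
        # Drawing from the bundle here is what makes its values show up
        # as rule inputs in Hypothesis's output.
        assert s in (0, 1, 2)

# Attach verbose settings to the generated unittest case, so every rule
# invocation is printed as it happens:
TestServiceDiscovery = test_servicediscovery.TestCase
TestServiceDiscovery.settings = settings(verbosity=Verbosity.verbose)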

Squish test functions always pass when used with squishtest module

We are using Squish for Qt 6.6.2 on Windows 10 with Python 3.8.7, and we run our tests through the squishtest module with Robot Framework 4.0.1.
We are having an issue with the test functions provided by the Squish API: any verification done with such a call (for example squishtest.test.imagePresent) will pass. The issue was simple to pinpoint: although the verification failed, the function call itself returned without raising an exception. This can also be seen in the report produced by squishrunner, where the passed execution contains <scriptedVerificationResult type="FAIL" time="--">.
The question is: can we in any way get the actual verification result passed to Robot, so we can fail the test accordingly? Preferably in real time, rather than by parsing the report afterwards.
In Squish this works perfectly fine
def main():
    startApplication("AUT")
    snooze(2)
    test.imagePresent("image.png", {"tolerant": True, "threshold": 85},
                      waitForObjectExists(names.sceneContainer_GraphWidget))
but with Robot this is always passing
# In testSuite.robot
*** Settings ***
Library    MySquishLib

*** Test Cases ***
Test Image
    Start AUT
    Verify Image    image.png    {"tolerant": True, "threshold": 85}    names.sceneContainer_GraphWidget
# In MySquishLib.py
import squishtest
import names

def start_aut():
    squishtest.startApplication("AUT")

def verify_image(imageFile, imageParams, imageArea):
    squishtest.test.imagePresent(imageFile, imageParams, imageArea)
Have a look at the documented bool testSettings.throwOnFailure flag.
If you set this in your Robot library, you will not have to patch / rewrite every test.vp, test.compare, ... method.
When running from within the Squish IDE, the flag can be left unset. Robot Framework probably provides some environment variable that lets you detect whether the test case is running from within it.
https://doc.froglogic.com/squish/latest/rgs-squish.html#rgs-testsettings
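Assuming squishtest exposes the same testSettings object as the in-IDE script API (which the linked page documents), a minimal sketch of that approach in the Robot library would be:

# In MySquishLib.py -- sketch, assuming squishtest exposes testSettings
import squishtest

def start_aut():
    # Make every failing test.* verification raise an exception,
    # so Robot Framework fails the test immediately.
    squishtest.testSettings.throwOnFailure = True
    squishtest.startApplication("AUT")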
The test functions are not supposed to raise exceptions during execution, in order to allow the test to continue even if a single VP fails. The function does, however, return a boolean value just as expected. By using

def verify_image(imageFile, imageParams, imageArea):
    if not squishtest.test.imagePresent(imageFile, imageParams, imageArea):
        raise Exception("Image was not found")

I'm able to fail the Robot test without any issues.

Python unit test advice

Can I get some advice on writing a unit test for the following piece of code?
%python
import sys
import json

sys.argv = []
sys.argv.append('{"product1":{"brand":"x","type":"y"}}')
sys.argv.append('{"product1":{"brand":"z","type":"a"}}')

products = sys.argv
yy = {}
my_products = []
for n, i in enumerate(products[:]):
    xx = json.loads(i)
    for j in xx.keys():
        yy["brand"] = xx[j]['brand']
        yy["type"] = xx[j]["type"]
    my_products.append(yy)
print(my_products)
As it stands, there aren't any units to test!
A test might consist of:

- packaging your program in a script
- invoking your program from a Python unit test as a subprocess
- piping the output of your command process to a buffer
- asserting the buffer is what you expect it to be

While the above would technically allow you to have an automated test on your code, it comes with a lot of burden:

- multiprocessing
- weak assertions, by not having types
- coarse interaction (you have to invoke a script; you can't just assert on the brand/type logic)
One way to address those issues could be to package your code into smaller units, i.e. create a method to encapsulate:

for j in xx.keys():
    yy["brand"] = xx[j]['brand']
    yy["type"] = xx[j]["type"]
my_products.append(yy)

Import it, exercise it and assert on its output. Then there might be something to map the loading and application of the xx.keys() loop to an array (which you could also encapsulate as a function).
And then there could be the highest level, taking in args and composing the product mapper, loader and transformer. And since your code will be thoroughly unit tested at this point, you may get away with not having a test for your top-level script.
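One possible shape for that refactoring, as a sketch (the function names are illustrative, not from the original code). Note that creating a fresh dict per record also fixes a subtle bug in the original, where every iteration appends the same yy object:

import json
import unittest

def extract_product(record):
    """Map one parsed JSON record to a flat {brand, type} dict."""
    product = {}
    for key in record:
        product["brand"] = record[key]["brand"]
        product["type"] = record[key]["type"]
    return product

def parse_products(raw_args):
    """json.loads each raw argument and map it to a product dict."""
    return [extract_product(json.loads(raw)) for raw in raw_args]

class TestParseProducts(unittest.TestCase):
    def test_maps_brand_and_type(self):
        result = parse_products(['{"product1": {"brand": "x", "type": "y"}}'])
        self.assertEqual(result, [{"brand": "x", "type": "y"}])

if __name__ == "__main__":
    unittest.main()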

Generate single file with HTMLTestRunner when running multiple test classes

I'm trying to configure the HTMLTestRunner to output to a single file when multiple test classes are being called, but after much reading I've been unable to achieve this.
An example of what I'm doing is:
class TestOne(unittest.TestCase):
    def test_one_is_one(self):
        one = 1
        self.assertEqual(1, one)

class TestTwo(unittest.TestCase):
    def test_two_is_two(self):
        two = 2
        self.assertEqual(2, two)
I'm then adding these into a test suite and running the HTMLTestRunner as below:
output = 'C:\\Reports\\TestReport.html'
test_suite = unittest.TestSuite(unittest.TestLoader().loadTestsFromModule(Tests))
runner = HTMLTestRunner(output=output)
runner.run(test_suite)
However when running like this I'm getting two HTML files generated, one for TestOne and another for TestTwo.
I've looked around and other examples of this I've come across use:
with open(output, 'wb') as o:
    runner = HTMLTestRunner(output=o)
    runner.run(test_suite)
However this doesn't appear to be supported anymore by HTMLTestRunner.
Is what I'm after possible?
I really like the generated reports, but I don't want to have to deal with lots of small HTML files that need to be either merged together or viewed separately.
Additional info:
I'm using Python 3.5 with HTMLTestRunner 1.0.3
I know this is an old ticket, but I thought it worth sharing the following information.
I wanted to do the same as the original question: a single HTML report for the entire test suite. In the latest version of HtmlTestRunner (installed using pip install html-testrunner), the following option is available: combine_reports=True.
It can be used as follows:

html_runner = HtmlTestRunner.HTMLTestRunner(
    stream=output_file,
    combine_reports=True,
    report_title='HTML test runner report')
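For reference, a sketch of how this fits the original snippet, using the package's directory-based usage (the output directory and report name are placeholders, not from the original post):

import unittest
import HtmlTestRunner
import Tests  # the module containing TestOne / TestTwo from the question

test_suite = unittest.TestLoader().loadTestsFromModule(Tests)
runner = HtmlTestRunner.HTMLTestRunner(
    output='reports',          # directory where the HTML is written (placeholder)
    combine_reports=True,      # one file for all test classes
    report_name='TestReport')  # base name of the generated file (placeholder)
runner.run(test_suite)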
Lw246,
There are two HTMLTestRunners. The 1.0.3 version you used is a different project and is still in beta; its author calls it html-testrunner, with a '-'.
The original HTMLTestRunner by tungwaiyip is called 'htmltestrunner', without the '-', and it has been forked with a new version; you can see it here: https://github.com/dash0002/HTMLTestRunner.
You can also see the two different HTMLTestRunners here:
https://pypi.python.org/pypi?%3Aaction=search&term=htmltestrunner&submit=search
In addition, there is also an htmltestrunner2 :)

Writing python unit tests inside the actual code

Sometimes I write small utility functions and pack them as a Python package.
How small? 30 to 60 lines of Python.
My question is: do you think writing the tests inside the actual code is bad practice? An abuse?
I can see great benefits, like having usage examples inside the code itself without jumping between files (again, for really small projects).
Example:
#!/usr/bin/env python

# Actual code
def increment(number, by=1):
    return number + by

# Tests
def test_increment_positive():
    assert increment(1) == 2

def test_increment_negative():
    assert increment(-5) == -4

def test_increment_zero():
    assert increment(0) == 1
The general idea is taken from the monitoring framework Riemann, which I use; in Riemann you write your tests file along with your code (link).
You can write doctests inside your documentation to indicate how your function should be used:

def increment(number, by=1):
    """Increment the given number by some other number.

    >>> increment(3)
    4
    >>> increment(5, 3)
    8
    """
    return number + by
From the documentation:

- To check that a module's docstrings are up-to-date by verifying that all interactive examples still work as documented.
- To perform regression testing by verifying that interactive examples from a test file or a test object work as expected.
- To write tutorial documentation for a package, liberally illustrated with input-output examples. Depending on whether the examples or the expository text are emphasized, this has the flavor of “literate testing” or “executable documentation”.
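These doctests can be run with nothing but the standard library, either via python -m doctest yourmodule.py -v on the command line, or by adding a small hook to the module itself:

if __name__ == "__main__":
    import doctest
    doctest.testmod()  # scans this module's docstrings and runs every >>> example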

Aerospike Python Client. UDF module to count records. Cannot register module

I am currently implementing the Aerospike Python Client in order to benchmark it against our Redis implementation, to see which is faster and/or more stable.
I'm still taking baby steps, currently unit-testing basic functionality, for example whether I correctly add records to my set. For that reason, I want to create a function to count them.
I saw in Aerospike's documentation that:
"to perform an aggregation on query, you first need to register a UDF with the database".
It seems that this is the suggested way that aggregations, counting and other custom functionality should be run in Aerospike.
Therefore, to count the records in a set I have, I created the following module:
# "counter.lua"
function count(s)
return s : map(function() return 1 end) : reduce (function(a,b) return a+b end)
end
I'm trying to use the Aerospike Python client's function for registering a UDF (User Defined Function) module:

udf_put(filename, udf_type, policy)

My code is as follows:

# aerospike_client.py

# "udf_put" parameters
policy = {'timeout': 1000}
lua_module = os.path.join(os.path.dirname(os.path.realpath(__file__)), "counter.lua")  # same folder
udf_type = aerospike.UDF_TYPE_LUA  # equals 0, which stands for "Lua"

self.client.udf_put(lua_module, udf_type, policy)  # Exception is thrown here

query = self.client.query(self.aero_namespace, self.aero_set)
query.select()
result = query.apply('counter', 'count')

An exception is thrown:

exceptions.Exception: (-2L, 'Filename should be a string', 'src/main/client/udf.c', 82)
Is there anything I'm missing or doing wrong?
Is there a way to "debug" it without compiling C code?
Is there any other suggested way to count the records in my set? Or am I fine with the Lua module?
First, I'm not seeing that exception, but I am seeing a bug with udf_put where the module is registered but the Python process hangs. I can see the module appear on the server using AQL's show modules.
I opened a bug on the Python client's repo on GitHub, aerospike/aerospike-client-python.
There's a best-practices document on UDF development here: https://www.aerospike.com/docs/udf/best_practices.html
In general, using a stream UDF to aggregate the records through the count function is the correct way to go about it. There are examples here and here.
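For reference, a minimal sketch of the whole register-and-aggregate flow with the Python client; the host, namespace and set names are placeholders, not from the original post:

import aerospike

# Connect to the cluster (placeholder host)
client = aerospike.client({'hosts': [('127.0.0.1', 3000)]}).connect()

# Register the stream UDF module from the question
client.udf_put('counter.lua', aerospike.UDF_TYPE_LUA, {'timeout': 1000})

# Run the Lua count() function as an aggregation over the set
query = client.query('test', 'demo')  # namespace, set (placeholders)
query.apply('counter', 'count')       # module name, function name
print(query.results())                # executes the stream UDF and collects the result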
