We are using Squish for Qt 6.6.2 on Windows 10 with Python 3.8.7, and we run our tests through the squishtest module with Robot Framework 4.0.1.
We are having an issue where any verification done through a Squish API test function (for example squishtest.test.imagePresent) passes in Robot even when it should fail. The issue itself was quite simple to pinpoint: although the verification failed, the function call returned without raising an exception. This can also be seen in the report produced by squishrunner, where we have <scriptedVerificationResult type="FAIL" time="--"> for the passed execution.
The question is: can we in any way get the actual verification result passed to Robot so we can fail the test accordingly? Preferably in real time rather than by parsing the report afterwards.
In Squish this works perfectly fine:
def main():
    startApplication("AUT")
    snooze(2)
    test.imagePresent("image.png", {"tolerant": True, "threshold": 85},
                      waitForObjectExists(names.sceneContainer_GraphWidget))
but with Robot this is always passing:
# In testSuite.robot
*** Settings ***
Library    MySquishLib

*** Test Cases ***
Test Image
    Start AUT
    Verify Image    image.png    {"tolerant": True, "threshold": 85}    names.sceneContainer_GraphWidget

# In MySquishLib.py
import squishtest
import names

def start_aut():
    squishtest.startApplication("AUT")

def verify_image(imageFile, imageParams, imageArea):
    squishtest.test.imagePresent(imageFile, imageParams, imageArea)
Have a look at the documented bool testSettings.throwOnFailure flag.
In your Robot library you can set this once, and you would not have to patch or wrap every test.vp, test.compare, ... method.
When running from within the Squish IDE, this flag can be unset again. Robot Framework probably provides some way, such as an environment variable, to detect whether the test case is running under Robot.
https://doc.froglogic.com/squish/latest/rgs-squish.html#rgs-testsettings
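A minimal sketch of that approach, assuming squishtest exposes the same testSettings object as the Squish script API; the sys.modules check is only a heuristic for detecting Robot, not a documented Robot Framework feature:

# In MySquishLib.py
import sys
import squishtest

# Heuristic: Robot Framework imports the `robot` package, so its presence
# in sys.modules suggests this library was loaded by Robot rather than
# by the Squish IDE.
if "robot" in sys.modules:
    # Failing verifications now raise an exception, which Robot reports
    # as a test failure without any per-function wrapping.
    squishtest.testSettings.throwOnFailure = True

With this in place, verify_image from the original library can keep calling squishtest.test.imagePresent directly.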
The test functions are not supposed to raise exceptions during execution, in order to allow the test to continue even if a single VP fails. The function does, however, return a boolean value just as expected. By using
def verify_image(imageFile, imageParams, imageArea):
    if not squishtest.test.imagePresent(imageFile, imageParams, imageArea):
        raise Exception("Image was not found")
I'm able to fail the Robot test without any issues.
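The same pattern generalizes to the other boolean-returning verifications; a minimal sketch, where the _require helper and its message parameter are illustrative rather than part of the Squish API:

import squishtest

def _require(result, message):
    # Turn a boolean Squish verification result into a Robot failure.
    if not result:
        raise AssertionError(message)

def verify_image(imageFile, imageParams, imageArea):
    _require(
        squishtest.test.imagePresent(imageFile, imageParams, imageArea),
        "Image %s was not found" % imageFile,
    )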
I have a parameterized pytest test suite. Each parameter is a particular website, and the test suite runs using Selenium automation. After accounting for parameters, I have hundreds of tests in total, and they all run sequentially.
Once a week, Selenium will fail for a variety of reasons. Connection lost, could not instantiate chrome instance, etc. If it fails once in the middle of a test run, it'll crash all upcoming tests. Here's an example fail log:
test_example[parameter] failed; it passed 0 out of the required 1 times.
<class 'selenium.common.exceptions.WebDriverException'>
Message: chrome not reachable
(Session info: chrome=91.0.4472.106)
[<TracebackEntry test.py:122>, <TracebackEntry another.py:92>, <TracebackEntry /usr/local/lib/python3.7/site-packages/selenium/webdriver/remote/webdriver.py:669>, <TracebackEntry /usr/local/lib/python3.7/site-packages/selenium/webdriver/remote/webdriver.py:321>, <TracebackEntry /usr/local/lib/python3.7/site-packages/selenium/webdriver/remote/errorhandler.py:242>]
Ideally, I'd like to exit the suite as soon as a Selenium failure has occurred, because I know that all the upcoming tests will also fail.
Is there a method of this kind:
def pytest_on_test_fail(err):  # this will be a pytest hook
    if is_selenium(err):  # user defined function
        pytest_earlyexit()  # this will be a pytest function
Or is there some other mechanism that would let me exit the full test suite early based on the detected condition?
After some more testing I got this to work. This uses the pytest_exception_interact hook and the pytest.exit function.
WebDriverException is the parent class of all Selenium issues (see source code).
# conftest.py
import pytest
from selenium.common.exceptions import WebDriverException

def pytest_exception_interact(node, call, report):
    error_class = call.excinfo.type
    is_selenium_issue = issubclass(error_class, WebDriverException)
    if is_selenium_issue:
        # Stop the whole run immediately with a non-zero exit status.
        pytest.exit('Selenium error detected, exiting test suite early', 1)
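One usage note: pytest only collects hook functions such as pytest_exception_interact from conftest.py files or installed plugins, so this snippet belongs in the suite's conftest.py rather than in a test module.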
I have a Flyte task function like this:
@task
def do_stuff(framework_obj):
    framework_obj.get_outputs()  # This calls Types.Blob.fetch(some_uri)
Trying to load a blob URI using flytekit.sdk.types.Types.Blob.fetch, but getting this error:
ERROR:flytekit: Exception when executing No temporary file system is present. Either call this method from within the context of a task or surround with a 'with LocalTestFileSystem():' block. Or specify a path when calling this function. Note: Cleanup is not automatic when a path is specified.
I can confirm that I can load blobs using with LocalTestFileSystem(): in tests, but when actually trying to run a workflow I'm not sure why I'm getting this error, as the function that does the blob processing is decorated with @task, so it's definitely a Flyte task. I also confirmed that the task node exists on the Flyte web console.
What path is the error referencing and how do I call this function appropriately?
Using Flyte Version 0.16.2
Could you please give a bit more information about the code? Is this flytekit version 0.15.x? I'm a bit confused, since that version shouldn't have the @task decorator. It should only have @python_task, which is an older API. If you want to use the new Python-native typing API you should install flytekit==0.17.0 instead.
Also, could you point to the documentation you're looking at? We've updated the docs a fair amount recently, so maybe there's some confusion around that. These are the examples worth looking at. There are also two new Python classes, FlyteFile and FlyteDirectory, that have replaced the Blob class in flytekit (though Blob remains the name of the IDL type).
(would've left this as a comment but I don't have the reputation yet.)
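For reference, a minimal sketch of the new Python-native typing API with FlyteFile, assuming flytekit>=0.17; the task and file names here are illustrative:

from flytekit import task, workflow
from flytekit.types.file import FlyteFile

@task
def write_file() -> FlyteFile:
    path = "/tmp/greeting.txt"
    with open(path, "w") as fh:
        fh.write("hello")
    # Returning a FlyteFile uploads the local file to blob storage.
    return FlyteFile(path)

@task
def read_file(f: FlyteFile) -> str:
    # download() materializes the blob locally and returns the local path.
    with open(f.download(), "r") as fh:
        return fh.read()

@workflow
def wf() -> str:
    return read_file(f=write_file())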
Some code to help with fetching outputs and reading from a file output:
# Imports assume flytekit ~0.17 module paths; adjust for your version.
from flytekit import task
from flytekit.clients.friendly import SynchronousFlyteClient
from flytekit.core.context_manager import FlyteContext
from flytekit.core.type_engine import TypeEngine
from flytekit.models.core.identifier import WorkflowExecutionIdentifier
from flytekit.types.file import FlyteFile

@task
def task_file_reader():
    client = SynchronousFlyteClient("flyteadmin.flyte.svc.cluster.local:81", insecure=True)
    exec_id = WorkflowExecutionIdentifier(
        domain="development",
        project="flytesnacks",
        name="iaok0qy6k1",
    )
    data = client.get_execution_data(exec_id)
    lit = data.full_outputs.literals["o0"]

    ctx = FlyteContext.current_context()
    ff = TypeEngine.to_python_value(ctx, lv=lit, expected_python_type=FlyteFile)
    with open(ff, 'rb') as fh:
        print(fh.readlines())
Is there any documentation for stestr return codes? I tried looking at https://stestr.readthedocs.io/en/latest/MANUAL.html but it only talks about non-zero return codes in general. What are the typical return codes it returns? I am trying to use stestr list.
There currently isn't any documentation of the specific return codes in every error case, and honestly the actual value of the exit code is pretty ad hoc throughout the code base. That being said, all the stestr commands will return zero if the command was successful, and a value greater than zero to indicate that an error occurred (which includes a failed test). Adding documentation would be a very welcome addition, though. You could start by filing an issue about it: https://github.com/mtreinish/stestr/issues/new/choose
The documentation that currently exists, which I think you're referring to, is about the use of the --subunit flag on several of the commands. It is there because the behavior is different with subunit output enabled. Normally stestr returns an exit code > 0 when test failures occurred during the run, but with the --subunit flag it always returns 0 even if there are test failures; the only case where it returns > 0 is when there was an error generating the subunit output or another internal error. I wanted to make sure this difference was clear, since it might be unexpected for users.
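Since the only stable contract is zero on success and non-zero otherwise, callers are best off treating the exit code as a boolean. A minimal sketch of driving stestr list from Python under that assumption:

import subprocess

# Run `stestr list` and treat any non-zero exit code as failure; the
# specific non-zero values are not a documented contract.
proc = subprocess.run(["stestr", "list"], capture_output=True, text=True)
if proc.returncode == 0:
    print(proc.stdout)
else:
    print("stestr failed with exit code", proc.returncode)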
When using the hypothesis library and performing stateful testing, how can I see or output the values in the Bundle "services" that the library is trying on my code?
Example
import hypothesis.strategies as st
from hypothesis.stateful import Bundle, RuleBasedStateMachine, rule

class test_servicediscovery(RuleBasedStateMachine):
    services = Bundle('services')

    @rule(target=services, s=st.integers(min_value=0, max_value=2))
    def add_service(self, s):
        return s
The question is: how do I print / see the Bundle "services" variable, generated by the library?
In the example you've given, the services bundle isn't being tried on your code: you're adding things to it, but never using them as inputs to another rule.
If you do use them, running Hypothesis in verbose mode will show all inputs as they happen; even in normal mode, failing examples will print all the values used.
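A minimal sketch of both points, based on the machine from the question: a second rule consumes the bundle so its values are actually drawn, and verbose settings are applied through the machine's TestCase:

import unittest

from hypothesis import settings, Verbosity
import hypothesis.strategies as st
from hypothesis.stateful import Bundle, RuleBasedStateMachine, rule

class ServiceDiscoveryMachine(RuleBasedStateMachine):
    services = Bundle('services')

    @rule(target=services, s=st.integers(min_value=0, max_value=2))
    def add_service(self, s):
        return s

    # Drawing from the bundle is what makes Hypothesis "try" its values.
    @rule(s=services)
    def use_service(self, s):
        print("trying service:", s)

# Verbose mode prints every step of every run as it happens.
ServiceDiscoveryMachine.TestCase.settings = settings(verbosity=Verbosity.verbose)
TestServices = ServiceDiscoveryMachine.TestCase

if __name__ == "__main__":
    unittest.main()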
I am currently implementing the Aerospike Python Client in order to benchmark it along with our Redis implementation, to see which is faster and/or more stable.
I'm still taking baby steps, currently unit-testing basic functionality, for example whether I correctly add records to my set. For that reason, I want to create a function to count them.
I saw in Aerospike's documentation that "to perform an aggregation on query, you first need to register a UDF with the database".
It seems that this is the suggested way that aggregations, counting and other custom functionality should be run in Aerospike.
Therefore, to count the records in a set I have, I created the following module:
# "counter.lua"
function count(s)
return s : map(function() return 1 end) : reduce (function(a,b) return a+b end)
end
I'm trying to use the Aerospike Python client's function for registering a UDF (User Defined Function) module:
udf_put(filename, udf_type, policy)
My code is as follows:
# aerospike_client.py:
# "udf_put" parameters
policy = {'timeout': 1000}
lua_module = os.path.join(os.path.dirname(os.path.realpath(__file__)), "counter.lua") #same folder
udf_type = aerospike.UDF_TYPE_LUA # equals to "0", which is for "Lua"
self.client.udf_put(lua_module, udf_type, policy) # Exception is thrown here
query = self.client.query(self.aero_namespace, self.aero_set)
query.select()
result = query.apply('counter', 'count')
When I run it, an exception is thrown:
exceptions.Exception: (-2L, 'Filename should be a string', 'src/main/client/udf.c', 82)
Is there anything I'm missing or doing wrong?
Is there a way to "debug" it without compiling C code?
Is there any other suggested way to count the records in my set, or am I fine with the Lua module?
First, I'm not seeing that exception, but I am seeing a bug with udf_put where the module is registered but the Python process hangs. I can see the module appear on the server using AQL's show modules.
I opened a bug with the Python client's repo on Github, aerospike/aerospike-client-python.
There's a best practices document regarding UDF development here: https://www.aerospike.com/docs/udf/best_practices.html
In general using the stream-UDF to aggregate the records through the count function is the correct way to go about it. There are examples here and here.
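Pulling the pieces together, a minimal end-to-end sketch of the register-then-aggregate flow; the host, namespace, and set names are illustrative, and the final step assumes the client's query.results() call to actually execute the aggregation:

import os
import aerospike

config = {'hosts': [('127.0.0.1', 3000)]}
client = aerospike.client(config).connect()

# Register the stream UDF module that sits next to this script.
lua_module = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'counter.lua')
client.udf_put(lua_module, aerospike.UDF_TYPE_LUA, {'timeout': 1000})

# Configure the aggregation, then run the query to collect the result.
query = client.query('test', 'demo')
query.apply('counter', 'count')
result = query.results()
print(result)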