Python test runner - stestr return codes

Is there any documentation for stestr's return codes? I tried looking at https://stestr.readthedocs.io/en/latest/MANUAL.html,
but it only mentions non-zero return codes. What are the typical return codes it returns? I am trying to use stestr list.

There currently isn't any documentation of the specific return codes in every error case, and honestly the actual value of the exit code is pretty ad hoc throughout the code base. That being said, all the stestr commands will return zero if the command was successful, and a value greater than zero to indicate an error occurred (which includes a failed test). Adding documentation would be a very welcome addition. You could start by filing an issue about it: https://github.com/mtreinish/stestr/issues/new/choose
The documentation that currently exists, which I think you're referring to, is about the use of the --subunit flag on several of the commands. That note is there because the behavior is different with subunit output enabled. Normally stestr returns an exit code > 0 when test failures occurred during the run, but with the --subunit flag it always returns 0 even if there are test failures. The only case where it returns > 0 is when there was an error generating subunit or another internal error. I wanted to make sure that this difference was clear for users since it might be unexpected.
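In practice that means a script should only distinguish zero from non-zero. A minimal sketch (my own, not from the stestr docs) of wrapping stestr list from Python:

import subprocess

# Run "stestr list" and treat any non-zero exit code as failure; the
# specific non-zero value is ad hoc, so only zero vs. non-zero matters.
result = subprocess.run(["stestr", "list"], capture_output=True, text=True)
if result.returncode == 0:
    print(result.stdout)
else:
    print(f"stestr list failed with exit code {result.returncode}")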

Squish test functions always pass when used with squishtest module

We are using Squish for Qt 6.6.2 on Windows 10 with Python 3.8.7, and running our tests through the squishtest module with Robot Framework 4.0.1.
We are having an issue with the verification functions provided by the Squish API: any verification done through such a call (for example squishtest.test.imagePresent) always passes. The issue was quite simple to pinpoint: although the verification fails, the function call itself completes without raising an exception. This can also be confirmed in the report produced by squishrunner, where we see <scriptedVerificationResult type="FAIL" time="--"> for the passed execution.
The question is: can we in any way get the actual verification result passed to Robot so we can fail the test accordingly? Preferably in real time rather than by parsing the report afterwards.
In Squish this works perfectly fine:
def main():
    startApplication("AUT")
    snooze(2)
    test.imagePresent("image.png", {"tolerant": True, "threshold": 85},
                      waitForObjectExists(names.sceneContainer_GraphWidget))
but with Robot this always passes:
# In testSuite.robot
*** Settings ***
Library    MySquishLib

*** Test Cases ***
Test Image
    Start AUT
    Verify Image    image.png    {"tolerant": True, "threshold": 85}    names.sceneContainer_GraphWidget

# In MySquishLib.py
import squishtest
import names

def start_aut():
    squishtest.startApplication("AUT")

def verify_image(imageFile, imageParams, imageArea):
    squishtest.test.imagePresent(imageFile, imageParams, imageArea)
Have a look at the documented bool testSettings.throwOnFailure flag.
If you set this in your Robot library, you won't have to patch or rewrite every test.vp, test.compare, ... method.
When running from within the Squish IDE, this flag can be left unset. Robot Framework probably provides some environment variable you can use to detect whether the test case is running under Robot.
https://doc.froglogic.com/squish/latest/rgs-squish.html#rgs-testsettings
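In your Robot library that could look roughly like this (a sketch; it assumes testSettings is reachable through the squishtest module, as the script API docs suggest):

import squishtest

def enable_throw_on_failure():
    # Hypothetical setup keyword for MySquishLib: with throwOnFailure set,
    # a failing squishtest.test.* verification raises an exception instead
    # of only logging FAIL, so Robot Framework fails the test by itself.
    squishtest.testSettings.throwOnFailure = True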
The test functions are not supposed to raise exceptions during execution, in order to allow the test to continue even if a single VP fails. The function does, however, return a boolean value just as expected. By using
def verify_image(imageFile, imageParams, imageArea):
    if not squishtest.test.imagePresent(imageFile, imageParams, imageArea):
        raise Exception("Image was not found")
I'm able to fail the Robot test without any issues.

Running a Python + Behave automation project and trying to execute steps inside another step

@step(u'Child step')
def login_to_something(context):
    context.execute_steps(u'parent step 1')
    context.execute_steps(u'parent step 2')
It is unable to run execute_steps as shown above for parent step 1, and it throws the following error:
"behave.parser.ParserError: Failed to parse"
The error you see appears when the Behave engine cannot identify or parse the steps inside a step, meaning something in the text doesn't match the syntax the engine expects.
I take your point that the preposition shouldn't matter and the step text alone should be enough, but something is still missing from the expected syntax, hence the parser error. Every line passed to execute_steps must start with a step keyword:
def login_to_something(context):
    context.execute_steps(u'''
        When write the step 1 here
        Then write the step 2 here
    ''')
I can't tell more from the information shared in the problem statement.
Also check the indentation of your feature file. We have faced this issue multiple times.
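For reference, here is a minimal self-contained sketch that parses cleanly; the step names are hypothetical, and the point is that every line handed to execute_steps starts with a keyword:

from behave import step, when, then

@when(u'the user logs in')
def step_login(context):
    pass  # stands in for parent step 1

@then(u'the dashboard is shown')
def step_dashboard(context):
    pass  # stands in for parent step 2

@step(u'Child step')
def login_to_something(context):
    # Each line must start with a keyword (Given/When/Then/And),
    # otherwise behave raises behave.parser.ParserError.
    context.execute_steps(u'''
        When the user logs in
        Then the dashboard is shown
    ''')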

How can I change the baselines (PPO) code's output/replay from GitHub?

I am trying to run my own version of the baselines reinforcement learning code from GitHub (https://github.com/openai/baselines/tree/master/baselines/ppo2).
Whatever I do, I keep getting the same display, which looks like this:
[screenshot of the logged training metrics table]
Where can I edit it? I know I should edit the "learn" method, but I don't know how.
Those prints are produced by the following block of code, which can be found at this link (for the latest revision at the time of writing, at least):
if update % log_interval == 0 or update == 1:
    ev = explained_variance(values, returns)
    logger.logkv("serial_timesteps", update*nsteps)
    logger.logkv("nupdates", update)
    logger.logkv("total_timesteps", update*nbatch)
    logger.logkv("fps", fps)
    logger.logkv("explained_variance", float(ev))
    logger.logkv('eprewmean', safemean([epinfo['r'] for epinfo in epinfobuf]))
    logger.logkv('eplenmean', safemean([epinfo['l'] for epinfo in epinfobuf]))
    logger.logkv('time_elapsed', tnow - tfirststart)
    for (lossval, lossname) in zip(lossvals, model.loss_names):
        logger.logkv(lossname, lossval)
    logger.dumpkvs()
If your goal is to still print some things here but different things (or the same things in a different format), your only real option is to modify this source file (or copy the code you need into a new file and apply your changes there, if the code's license allows it).
If your goal is just to suppress these messages, the easiest way is probably to run the following code before calling the learn() function:
from baselines import logger
logger.set_level(logger.DISABLED)
That uses this function to disable the baselines logger. Note that it might also disable other baselines-related output.
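Alternatively, if you want the metrics kept but off the console, the logger can be reconfigured rather than disabled. A sketch, assuming the logger.configure interface the repo had at the time of writing:

from baselines import logger

# Write each dumpkvs() call to CSV files under ./ppo_logs instead of
# printing the human-readable table to stdout.
logger.configure(dir="./ppo_logs", format_strs=["csv"])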

Aerospike Python Client. UDF module to count records. Cannot register module

I am currently implementing the Aerospike Python client in order to benchmark it against our Redis implementation and see which is faster and/or more stable.
I'm still taking baby steps, currently unit-testing basic functionality, for example whether I correctly add records to my set. For that reason, I want to create a function to count them.
I saw in Aerospike's documentation that:
"to perform an aggregation on query, you first need to register a UDF
with the database".
This seems to be the suggested way to run aggregations, counts, and other custom functionality in Aerospike.
Therefore, to count the records in one of my sets, I created the following module:
# "counter.lua"
function count(s)
return s : map(function() return 1 end) : reduce (function(a,b) return a+b end)
end
I'm trying to register the UDF (User Defined Function) module using the Aerospike Python client's function:
udf_put(filename, udf_type, policy)
My code is as follows:
# aerospike_client.py
# "udf_put" parameters
policy = {'timeout': 1000}
lua_module = os.path.join(os.path.dirname(os.path.realpath(__file__)), "counter.lua")  # same folder
udf_type = aerospike.UDF_TYPE_LUA  # equals 0, which stands for "Lua"

self.client.udf_put(lua_module, udf_type, policy)  # exception is thrown here

query = self.client.query(self.aero_namespace, self.aero_set)
query.select()
result = query.apply('counter', 'count')
An exception is thrown:
exceptions.Exception: (-2L, 'Filename should be a string', 'src/main/client/udf.c', 82)
Is there anything I'm missing or doing wrong?
Is there a way to "debug" it without compiling C code?
Is there any other suggested way to count the records in my set, or am I fine with the Lua module?
First, I'm not seeing that exception, but I am seeing a bug with udf_put where the module is registered but the Python process hangs. I can see the module appear on the server using aql's show modules.
I opened a bug with the Python client's repo on Github, aerospike/aerospike-client-python.
There's a best practices document regarding UDF development here: https://www.aerospike.com/docs/udf/best_practices.html
In general, using a stream UDF to aggregate the records through the count function is the correct way to go about it. There are examples here and here.
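For completeness, the end-to-end register-then-aggregate flow might look like this (a sketch with hypothetical host, namespace, and set names; it assumes udf_put succeeds):

import os
import aerospike

# Connect to a local node (hypothetical address).
client = aerospike.client({'hosts': [('127.0.0.1', 3000)]}).connect()

# Register the stream UDF module once per cluster.
lua_module = os.path.join(os.path.dirname(os.path.realpath(__file__)), "counter.lua")
client.udf_put(lua_module, aerospike.UDF_TYPE_LUA, {'timeout': 1000})

# Apply the 'count' function from counter.lua as a stream aggregation.
query = client.query('test', 'demo')
query.apply('counter', 'count')
print(query.results())  # e.g. [42]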

fuse utimensat problem

I am developing a FUSE filesystem in Python (with the fuse-python bindings). Which method do I need to implement so that touch works correctly? At present I get the following output:
$ touch m/My\ files/d3elete1.me
touch: setting times of `m/My files/d3elete1.me': Invalid argument
The file "d3elete1.me" exists:
$ ls -l m/My\ files/d3elete1.me
-rw-rw-rw- 1 root root 0 Jul 28 15:28 m/My files/d3elete1.me
I also traced the system calls:
$ strace touch m/My\ files/d3elete1.me
...
open("m/My files/d3elete1.me", O_WRONLY|O_CREAT|O_NOCTTY|O_NONBLOCK|O_LARGEFILE, 0666) = 3
dup2(3, 0) = 0
close(3) = 0
utimensat(0, NULL, NULL, 0) = -1 EINVAL (Invalid argument)
close(0) = 0
...
As you can see, utimensat failed. I tried implementing empty utimens and utime methods, but they are not even called.
Try launching FUSE with the -f option. FUSE will stay in the foreground and you can see errors in the console.
You must implement utimens and getattr. Not all system calls map directly to the C calls you might expect; many are used internally by FUSE to check and navigate your filesystem, depending on which FUSE options are set.
I believe in your case FUSE is translating utimensat into a call to utimens, preceded by a getattr check to verify that the requested file is present and has the expected attributes.
Update:
This is a great coincidence. There is a comment below suggesting that the issue lies with FUSE not supporting utimensat; that is not the case. I had the exact same trace you've provided while using fuse-python on Ubuntu 10.04. I poked around a little, and it appears the fuse-python 0.2 bindings target FUSE 2.6, so a slight change may have introduced this error (FUSE is now at version 2.8). My solution was to stop using fuse-python (the code is an ugly mess) in favor of an alternate binding, fusepy. I've not looked back and have had no trouble since.
I highly recommend you take a look: your initialization code will be cleaner, and only minimal changes are required to adapt to the new binding. Best of all, it's a single module and an easy read.
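For illustration, the relevant pieces of a fusepy filesystem that keeps touch happy might look like this (a sketch, not a complete filesystem; the stat values are placeholders):

import stat
import sys
import time
from fuse import FUSE, Operations

class TouchableFS(Operations):
    def getattr(self, path, fh=None):
        # FUSE calls this before most other operations to validate the path.
        if path == '/':
            return {'st_mode': stat.S_IFDIR | 0o755, 'st_nlink': 2}
        now = time.time()
        return {'st_mode': stat.S_IFREG | 0o666, 'st_nlink': 1, 'st_size': 0,
                'st_atime': now, 'st_mtime': now, 'st_ctime': now}

    def utimens(self, path, times=None):
        # touch lands here; times is an (atime, mtime) pair, or None for "now".
        return 0

if __name__ == '__main__':
    # foreground=True is the equivalent of the -f option mentioned above.
    FUSE(TouchableFS(), sys.argv[1], foreground=True)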
