Why does the pipeline fail when a test case fails - python

I'm running Robot Framework in GitLab CI.
The problem I'm facing is that if any test case fails during the run, it fails the pipeline too. How do I prevent the pipeline from failing? It's a test that failed, not the entire build process.
For now, this is how I run Robot Framework in gitlab-ci.yml:
- robot --exitonfailure -i "android-test" --outputdir ./output Android/Androidtest.robot
For example, I have 3 test cases in the Androidtest.robot test suite:
1. register
2. fillin_profile
3. checkout_order
If register and fillin_profile pass but checkout_order fails, the CI pipeline fails. I don't want that to happen, because the next job sends the Robot Framework test report to Google Drive, and it will never be sent if the pipeline fails.
Is that because I added the --exitonfailure parameter, by the way? How do I solve this?

Replace --exitonfailure with --nostatusrc.
If there are test failures, robot exits with a non-zero exit code. GitLab, and for that matter every CI system, checks whether a command it executes returns a non-zero exit code and treats that as a failure. With --nostatusrc, robot always exits with 0, so your CI doesn't think there were failures. (--exitonfailure itself only stops the run at the first failure; the pipeline fails because robot's return code is non-zero whenever any test fails.)
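For the job in the question, that would mean swapping only the flag in gitlab-ci.yml, roughly:
- robot --nostatusrc -i "android-test" --outputdir ./output Android/Androidtest.robot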
Do consider that if you suppress exit codes, you lose the ability to mark the job in CI as failed when there are test failures, unless you provide some other mechanism for that, should you need such a feature.

The whole point of CI is to fail when tests fail.
Uploading your test results or reports shouldn't need to be an extra job in the pipeline. I don't know about Robot Framework, but GitLab supports publishing artifacts even after failed tests.
https://docs.gitlab.com/ee/ci/junit_test_reports.html
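As an illustrative sketch (the job name and --xunit file name are assumptions, not from the question): robot's --xunit option writes a JUnit-compatible result file into the output directory, and artifacts:when: always tells GitLab to upload the results even when the job fails:

android-test:
  script:
    - robot --xunit xunit.xml -i "android-test" --outputdir ./output Android/Androidtest.robot
  artifacts:
    when: always              # upload report and log even if tests failed
    paths:
      - output/
    reports:
      junit: output/xunit.xml

The job itself still fails when tests fail, but the Robot Framework report is available from the failed job and the JUnit results show up in GitLab's test report view.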

Related

What is the generic way to continue executing the code even after encountering the first failure in Selenium Python and Pytest?

Expectation:
Execution should not stop after encountering a failure, and the failed test cases should also be reported as failed in the report.
As there is no code, maybe soft assertions (pytest-soft-assertions) will help you. Soft assertions are assertions that do not terminate the test when they fail, but their results are included in the test execution report. For more detail, check the plugin's documentation.
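If you'd rather not depend on a plugin, the same idea can be sketched by hand: collect failure messages as you go and make a single assertion at the end, so every check runs and every failure appears in the report (a minimal illustration of the concept, with made-up values standing in for real Selenium lookups, not the plugin's API):

def test_checkout_page():
    errors = []

    # Dummy values standing in for whatever the page actually returns.
    page_title = "Checkout"
    cart_count = 2

    # Each check appends a message instead of raising immediately,
    # so the remaining checks still execute.
    if page_title != "Checkout":
        errors.append(f"unexpected title: {page_title}")
    if cart_count != 3:
        errors.append(f"expected 3 items in cart, got {cart_count}")

    # One hard assertion at the end reports every collected failure.
    assert not errors, "soft assertion failures:\n" + "\n".join(errors)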

How do I exit from pdb debugging when running hypothesis?

I like using hypothesis for my unit tests. I also like using pdb for debugging when things go wrong. But trying to use these two together can be very annoying. If I set a breakpoint in a file that is run by hypothesis using pytest <PATH-TO-FILE> -s, it will stop at the breakpoint as expected, and I can do my analysis. But after I am done, I want to be able to exit out of the test. However, if I do ctrl+c from inside the breakpoint, the test doesn't quit; it simply goes to the next hypothesis test case. And I have to keep doing this until hypothesis is done generating all its test cases.
I usually end up opening the system monitor and killing the pytest process every time I want to quit the test.
I'm hoping there is a better way.
The issue can be reproduced by the following snippet -
import hypothesis
from hypothesis import strategies as st
@hypothesis.given(st.integers())
def test_some_function(some_arg):
    breakpoint()
    print(some_arg)

test_some_function()
I am using Python 3.8 with Hypothesis 5.37.0.
This happens under Linux but not under Windows, and it's unclear whether that's a bug in Hypothesis, in pdb, or 'just' undesirable behaviour from a combination of features.
As a workaround, you can import os; os._exit(0) to skip all cleanup logic and exit instantly.
A better, albeit somewhat more involved, solution is to disable multi-bug reporting and the shrinking phase when you use a debugger, so that Hypothesis stops generating examples immediately after the first failure. You can create a settings profile for use with the debugger, and then activate it via the --hypothesis-profile= argument to pytest.
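A sketch of such a profile, registered in conftest.py (the profile name "dbg" is arbitrary): it skips the shrink phase and turns off multi-bug reporting, so the run stops at the first failing example.

# conftest.py
from hypothesis import Phase, settings

# Keep only the phases needed to find a failure; skipping Phase.shrink
# means Hypothesis stops as soon as one failing example is found, and
# report_multiple_bugs=False stops it from hunting for further bugs.
settings.register_profile(
    "dbg",
    phases=[Phase.explicit, Phase.reuse, Phase.generate],
    report_multiple_bugs=False,
)

Activate it only when debugging, e.g. pytest --hypothesis-profile=dbg <PATH-TO-FILE> -s.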

Returning a pass/fail to robot framework from python code

I'm automating some software tests at the moment and I was wondering if it's possible to pass or fail a test in Python so that RF shows the same result. For example, I'm doing an install test and at the end I check through the registry to confirm a clean install. If anything is missing, I want to be able to essentially exit(0) and have RF show a fail, but it just returns "[ ERROR ] Execution stopped by user."
Tests fail when a keyword fails. Keywords fail when they throw an exception (or call a keyword that throws an exception). So, you can write a keyword that executes your script and throws an exception if the return code is non-zero.
In other words, what you want won't happen automatically, but is extremely easy to implement.
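As a rough sketch (the file and keyword names here are made up, and the registry scan is stood in by a parameter): a plain Python library file can expose a keyword that raises an exception when something is missing, which Robot Framework reports as an ordinary test failure instead of stopping execution.

# install_checks.py -- imported in the suite with:  Library    install_checks.py
def verify_clean_install(missing_keys=()):
    """Fail the calling test if any expected registry entries are missing.

    'missing_keys' stands in for whatever your real registry scan returns.
    """
    if missing_keys:
        # Raising any exception fails the keyword, and therefore the test.
        raise AssertionError(
            "Install incomplete, missing registry entries: " + ", ".join(missing_keys)
        )

In the suite the keyword is then called like any other, e.g. Verify Clean Install.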

How can I tell pytest-dependency to temporarily ignore test dependencies?

I've got a functional test suite using pytest-dependency to skip tests when other tests they depend on fail. That way, for example, if the login page is broken, I get one test failure saying "The login page is broken" instead of a slew of test failures saying "I couldn't log into user X", "I couldn't log into user Y", etc.
This works great for running the entire suite, but I'm trying to shorten my edit-compile-test loop, and right now the slowest point is testing my tests. If the test I'm working on has a bunch of other tests it depends on, they all have to succeed in order to not skip the test I'm trying to test. So I either have to run the entire dependency tree, or comment out my @pytest.mark.dependency(...) decorators (which is an additional thing that I, as a human, have to remember to do). Technically there's nothing these depended-on tests do that enables their dependers to run - the only reason I want these dependencies at all is to make it easier for me to triage test failures.
Is there a command-line argument that would tell pytest-dependency to not skip things on account of dependents, or to tell pytest to not use the pytest-dependency plugin on this run (and this run only)?
The -p option allows disabling a particular plugin:
pytest -p no:dependency

Pause and Resume a Python Script from failed step

I have a long-running Python script (let's call it upgrade.py).
The script has many steps or parts (essentially XML API calls to a router to run certain commands on the router).
I need suggestions on how to achieve the following:
I'm looking to compartmentalize the script such that if any step fails, script execution pauses and the user is notified via email (I can handle the emailing part).
The user can then fix the issue on his router and should be able to RESUME the script, i.e. the script resumes execution starting from the step that failed.
In short how do I go about compartmentalizing the script into steps (or test cases) so that:
The script PAUSES at a certain step that failed
The user is able to later RESUME the script (starting from that failed step)
Most test automation approaches will abort the test suite and then re-run all test cases after a fix has been applied. This has the added benefit that when a fix impacts already-run scripts, this is also discovered.
Given the lengthy nature of your test, this may not be practical. The script below uses the Dialogs library and Wait Until Keyword Succeeds to allow for 3 retries before continuing with the next step in the test case.
*** Settings ***
Library    Dialogs

*** Test Cases ***
Test Case
    Wait Until Keyword Succeeds    3 times    1 second    Failing Keyword
    Log To Console    End Test Case

*** Keywords ***
Failing Keyword
    Fail    Keyword failed
    [Teardown]    Dialogs.Pause Execution    Please Check your settings.
