I have a long-running Python script (let's call it upgrade.py).
The script has many steps or parts (essentially XML API calls to a router to run certain commands on the router).
I need suggestions on how to achieve the following:
I'm looking to compartmentalize the script so that if any step fails, execution pauses and the user is notified via email (I can handle the emailing part).
The user can then fix the issue on the router and RESUME the script, i.e. execution resumes from the step that failed.
In short, how do I go about compartmentalizing the script into steps (or test cases) so that:
The script PAUSES at a certain step that failed
The user is able to later RESUME the script (starting from that failed step)
Most test automation approaches will break off the test suite and then re-try all test cases after a fix has been applied. This has the added benefit that when a fix impacts already-run scripts, this is also discovered.
Given the lengthy nature of your test, this may not be practical. The script below uses the Dialogs library and Wait Until Keyword Succeeds to allow for 3 retries before continuing with the next step in the test case.
*** Settings ***
Library    Dialogs

*** Test Cases ***
Test Case
    Wait Until Keyword Succeeds    3 times    1 second    Failing Keyword
    Log To Console    End Test Case

*** Keywords ***
Failing Keyword
    Fail    Keyword failed
    [Teardown]    Dialogs.Pause Execution    Please Check your settings.
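If you would rather keep the orchestration in plain Python than move to Robot Framework, one common pattern is to persist a checkpoint between runs. Here is a minimal sketch, assuming hypothetical step functions, a hypothetical checkpoint file name, and a notify_user hook standing in for the emailing you already handle:

import json
import os
import sys

CHECKPOINT_FILE = 'upgrade_checkpoint.json'  # hypothetical location

def notify_user(step_name, error):
    pass  # hook into the existing email notification

def backup_config():
    pass  # placeholder: XML API call to the router

def push_image():
    pass  # placeholder: XML API call to the router

STEPS = [('backup_config', backup_config), ('push_image', push_image)]

def next_step():
    # Resume from the step recorded by the previous (failed) run, if any.
    if os.path.exists(CHECKPOINT_FILE):
        with open(CHECKPOINT_FILE) as f:
            return json.load(f)['next_step']
    return 0

def save_checkpoint(index):
    with open(CHECKPOINT_FILE, 'w') as f:
        json.dump({'next_step': index}, f)

def main():
    for index in range(next_step(), len(STEPS)):
        name, step = STEPS[index]
        try:
            step()
        except Exception as exc:
            save_checkpoint(index)   # remember the failed step
            notify_user(name, exc)   # email the user
            sys.exit(1)              # pause: user fixes the router, then re-runs the script
        save_checkpoint(index + 1)   # step passed, move the checkpoint forward

if __name__ == '__main__':
    main()

Re-running upgrade.py after the fix picks up at the step recorded in the checkpoint file, which gives the PAUSE/RESUME behaviour asked for.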
Related
What is the generic way to continue executing the code even after encountering the first failure in Selenium Python and Pytest?
Expectation:
Execution should not be stopped after encountering the failure and the failed test cases should also be reported in the report as failed.
As there is no code in the question, maybe soft assertions (e.g. the pytest-soft-assertions plugin) will help you. Soft assertions are assertions that do not terminate the test when they fail, but their results are included in the test execution report. For more detail, check out the link.
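To illustrate the idea without relying on any particular plugin's API (this is a hand-rolled sketch, not the pytest-soft-assertions interface), you can collect failures as you go and raise a single assertion at the end, so the test keeps executing but is still reported as failed:

# Hand-rolled soft-assertion sketch: record every failed check,
# keep executing, and fail once at the end so the report still
# marks the test as failed.
def test_user_profile():
    errors = []

    def soft_assert(condition, message):
        if not condition:
            errors.append(message)  # remember the failure, keep going

    soft_assert(1 + 1 == 2, "arithmetic is broken")
    soft_assert("admin" == "user", "unexpected role")   # fails, but the test continues
    soft_assert(len("abc") == 3, "length check failed")

    # Single terminating assertion: the test fails if any soft check failed.
    assert not errors, "Soft assertion failures:\n" + "\n".join(errors)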
I like using hypothesis for my unit tests. I also like using pdb for debugging when things go wrong. But trying to use these two together can be very annoying. If I set a breakpoint in a file that is run by hypothesis using pytest <PATH-TO-FILE> -s, it will stop at the breakpoint as expected, and I can do my analysis. But after I am done, I want to be able to exit out of the test. However, if I press Ctrl+C from inside the breakpoint, the test doesn't quit; it simply goes to the next hypothesis test case. And I have to keep doing this until hypothesis is done generating all its test cases.
I usually end up opening the system monitor and killing the pytest process every time I want to quit the test.
I'm hoping there is a better way.
The issue can be reproduced by the following snippet -
import hypothesis
from hypothesis import strategies as st

@hypothesis.given(st.integers())
def test_some_function(some_arg):
    breakpoint()
    print(some_arg)

test_some_function()
I am using python 3.8 with hypothesis 5.37.0
This happens under Linux but not under Windows, and it's unclear whether or not that's a bug in Hypothesis, or in Pdb, or 'just' undesirable behaviour from a combination of features.
As a workaround, you can run import os; os._exit(0) to skip all cleanup logic and exit instantly.
A better, albeit somewhat more involved, solution is to disable multi-bug reporting and the shrinking phase when you use a debugger, so that Hypothesis stops generating examples immediately after the first failure. You can create a settings profile for use with the debugger, and then activate it via the --hypothesis-profile= argument to pytest.
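For example, something like the following in conftest.py (a sketch; the profile name "dbg" is an arbitrary choice) registers a profile that skips the shrinking phase and stops after the first bug:

# conftest.py: Hypothesis profile for debugging sessions (sketch).
from hypothesis import Phase, settings

settings.register_profile(
    'dbg',
    # Omitting Phase.shrink disables shrinking once a failure is found.
    phases=[Phase.explicit, Phase.reuse, Phase.generate, Phase.target],
    # Stop hunting for additional distinct failures after the first one.
    report_multiple_bugs=False,
)

Then run pytest <PATH-TO-FILE> -s --hypothesis-profile=dbg so the debugger-friendly behaviour only applies when you explicitly select the profile.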
I'm running Robot Framework in GitLab CI.
The problem I'm facing is that if any test case fails during the run, the pipeline fails too. How do I prevent the pipeline from failing? It is only a test that failed, not the entire build process.
For now, this is how I run Robot Framework in gitlab-ci.yml:
- robot --exitonfailure -i "android-test" --outputdir ./output Android/Androidtest.robot
For example, I have 3 test cases in the Androidtest.robot test suite:
1. register
2. fillin_profile
3. checkout_order
If register and fillin_profile pass but checkout_order fails, the CI pipeline fails. I don't want it to fail, because the next job sends the Robot Framework test report to Google Drive, and it will never be sent if the pipeline fails.
Is that because I added the --exitonfailure parameter, by the way? How do I solve this?
Replace --exitonfailure with --nostatusrc.
If there are test failures, robot exits with a code other than 0. GitLab, and for that matter every CI ever, treats any command that exits with a non-zero code as a failure. With --nostatusrc, robot always exits with 0, so your CI doesn't think there were failures.
Do consider that if you go with suppressing exit codes, you lose the ability to mark the job within CI as failed when there are test failures, unless you provide some other mechanism to do that, should you need such a feature.
The whole point of CI is to fail when tests fail.
Uploading your test results or reports shouldn't be an extra job in the pipeline. I don't know about Robot Framework, but GitLab supports publishing of artifacts after failed tests.
https://docs.gitlab.com/ee/ci/junit_test_reports.html
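For instance, something along these lines in gitlab-ci.yml keeps the report as an artifact even when the job fails (a sketch only; the job name, paths and the --xunit file are assumptions, not taken from the question):

android-test:
  stage: test
  script:
    # --xunit writes an xUnit-compatible result file that GitLab can parse
    - robot -i "android-test" --outputdir ./output --xunit xunit.xml Android/Androidtest.robot
  artifacts:
    when: always            # keep the report even if the tests fail the job
    paths:
      - output/
    reports:
      junit: output/xunit.xml

This way the pipeline can still fail on test failures without losing the report.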
I'm using PyCharm to write my tests, i.e. the features and scenarios, and I'm running them with behave from the CLI. How can I debug each step?
You need PyCharm Professional to easily set up debugging. Just create a run/debug configuration, choose the behave framework, then specify the feature files folder and the behave parameters.
Otherwise, if you don't have PyCharm Professional, you can create a basic Python configuration, specify the module behave, and enter the path to your feature folders in the parameters.
If you don't have PyCharm Professional and you want to launch behave from the command line, you can resort to the well-known technique of placing prints with debug information wherever you think necessary to help you track down errors.
For these prints to be shown in the console, you must launch the behave command with the --no-capture option. An example would be:
features/test.feature
Feature: Test

  Scenario: Scenario title
    Given This is one step
steps/steps.py
from behave import *

@given("This is one step")
def step_impl(context):
    print("I'm executing this code??")

@given("this is other step")
def step_impl(context):
    print("or I'm executing this other code??")
Output of behave --no-capture features/test.feature:
$ behave --no-capture features/test.feature
Feature: Test # features/test.feature:1
Scenario: Scenario title # features/test.feature:3
Given This is one step # steps/steps.py:4
I'm executing this code??
1 feature passed, 0 failed, 0 skipped
1 scenario passed, 0 failed, 0 skipped
1 step passed, 0 failed, 0 skipped, 0 undefined
Took 0m0.000s
As you can see, the print tells you exactly which step you are running. With this technique you can debug your code by printing variable values or viewing the execution flow of your code.
I am running an automated test using an Android emulator driving an app with a Monkey script written in Python.
The script copies files onto the emulator, clicks buttons in the app, and reacts depending on the activities that the software triggers during its operation. The script is supposed to run the cycle a few thousand times, so I have a loop that runs the adb tool to copy the files, starts the activities, and checks how the software is reacting by calling the getProperty method on the device with the parameter 'am.current.comp.class'.
So here is a very simplified version of my script:
for target in targets:
    androidSDK.copyFile(emulatorName, target, '/mnt/sdcard')
    # Runs the component
    device.startActivity(component='com.myPackage/com.myPackage.myactivity')
    while 1:
        if device.getProperty('am.current.comp.class') == 'com.myPackage.anotheractivity':
            time.sleep(1)  # to allow the screen to display the new activity before I click on it
            device.touch(100, 100, 'DOWN_AND_UP')
            # Log the result of the operation somewhere
            break
        time.sleep(0.1)
(androidSDK is a small class I've written that wraps some utility functions to copy and delete files using the adb tool).
Occasionally the script crashes with one of a number of exceptions, for instance (I am leaving out the full stack trace):
[com.android.chimpchat.adb.AdbChimpDevice]com.android.ddmlib.ShellCommandUnresponsiveException
or
[com.android.chimpchat.adb.AdbChimpDevice] Unable to get variable: am.current.comp.class
[com.android.chimpchat.adb.AdbChimpDevice]java.net.SocketException: Software caused connection abort: socket write error
I have read that sometimes the socket connection to the device becomes unstable and may need a restart (adb start-server and adb kill-server come in useful).
The problem I'm having is that the tools are throwing Java exceptions (Monkey runs in Jython), but I am not sure how those can be trapped from within my Python script. I would like to be able to determine the exact cause of the failure inside the script and recover the situation so I can carry on with my iterations (would re-establishing the connection, for instance by re-initialising my device with another call to MonkeyRunner.waitForConnection, be enough?).
Any ideas?
Many thanks,
Alberto
EDIT. I thought I'd mention that I have discovered that it is possible to catch Java-specific exceptions in a Jython script, should anyone need this:
from java.net import SocketException
...
try:
    ...
except SocketException:
    ...
It is possible to catch Java-specific exceptions in a Jython script:
from java.net import SocketException
...
try:
    ...
except SocketException:
    ...
(Taken from OP's edit to his question)
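Building on that, one possible recovery pattern is to wrap each iteration in a try/except for the Java exceptions and re-establish the connection before retrying. This is only a sketch: run_iteration, MAX_RETRIES and the 30-second timeout are placeholders, targets, device and emulatorName come from the question's own script, and whether MonkeyRunner.waitForConnection is enough to recover depends on the actual failure:

from com.android.ddmlib import ShellCommandUnresponsiveException
from com.android.monkeyrunner import MonkeyRunner
from java.net import SocketException

MAX_RETRIES = 3  # arbitrary retry limit

def run_iteration(device, target):
    # copy the files, start the activity and poll am.current.comp.class,
    # as in the loop body from the question
    pass

for target in targets:
    for attempt in range(MAX_RETRIES):
        try:
            run_iteration(device, target)
            break  # iteration succeeded, move on to the next target
        except (SocketException, ShellCommandUnresponsiveException):
            # The adb connection looks unhealthy: re-establish it and retry.
            device = MonkeyRunner.waitForConnection(30, emulatorName)
    else:
        raise RuntimeError('Giving up on %s after %d attempts' % (target, MAX_RETRIES))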
This worked for me:
device.shell('exit')  # Exit the shell