Robot framework - Post run hook to send message - python

I run multiple test suites in a single pybot command, like this:
pybot suite1.robot suite2.robot suite3.robot
Each of these robot suites adds one key/value pair to a global dictionary (this test variable holds some metrics that need to be returned to the test invoker), like so:
{ "suite1-res": xxxx,
  "suite2-res": yyyy,
  "suite3-res": zzzz }
This dictionary then needs to be posted to Kafka or some other message broker. I would like this functionality to live in a post-run hook function or robot test that runs only once, after all of the robot tests have finished.
Can this post-run hook be added as a runtime parameter (like a suite teardown) instead of being passed as an extra robot test case argument?

Use a suite file
If you put all of those suites in a folder, you can create an initialization file (__init__.robot) in that folder and give it a suite teardown, which will run after all of the child suites finish. For more information, see the section titled Test Suite Directories in the Robot Framework User Guide.
$ mkdir all_tests
$ mv suite*.robot all_tests
$ # edit all_tests/__init__.robot to have a suite teardown
$ pybot all_tests
Use a listener
You can use a listener, which can collect the data and run a post-processing step. For example, your tests could attach the metrics as suite metadata, which the listener receives when end_suite is called for each suite. The listener can store all of this metadata and then process it when close is called. For more information, see Using the listener interface in the Robot Framework User Guide and the documentation for the Set Suite Metadata keyword in the BuiltIn library.
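A minimal sketch of such a listener (listener API version 2), assuming the suites publish their metrics with Set Suite Metadata; the final broker call is left as a print placeholder:

# listener.py
import json

ROBOT_LISTENER_API_VERSION = 2

metrics = {}

def end_suite(name, attrs):
    # attrs['metadata'] holds whatever the suite set via Set Suite Metadata
    metrics.update(attrs['metadata'])

def close():
    # Called exactly once, after the entire test run has finished
    payload = json.dumps(metrics)
    print(payload)  # replace with a call to your Kafka/message broker client

Attach it to the run with pybot --listener listener.py suite1.robot suite2.robot suite3.robot.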
Write a custom script
Your other choice is to write a simple bash or batch script that runs the pybot command and then does whatever post-processing you want. For this to work, your tests will need to make the dictionary data available somehow. For example, you could store the data as suite metadata and have the post-processing step pull it out of output.xml, or your suites could write the data to a file.
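For the output.xml route, a post-processing sketch might look like the following; it uses Robot Framework's result API, assumes the third-party kafka-python package, and the broker address and topic name are placeholders:

# post_process.py
import json
from robot.api import ExecutionResult
from kafka import KafkaProducer  # assumes the kafka-python package

# Collect the metadata that each suite attached with Set Suite Metadata
result = ExecutionResult('output.xml')
metrics = {}
for suite in result.suite.suites:
    metrics.update(suite.metadata)

# Post the combined dictionary to the broker (address and topic are placeholders)
producer = KafkaProducer(bootstrap_servers='localhost:9092')
producer.send('test-metrics', json.dumps(metrics).encode('utf-8'))
producer.flush()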

Related

Use Robot Framework logging file at the end of test case execution

I am a new Robot Framework user and have a question about the log file it creates at the end of a test run. I would like to take the HTML file it creates and upload it automatically to the correct ticket. I already have Python code that does the upload and can be used as a keyword, but I am not sure how to call that keyword as a test teardown step, since at that point the log probably has not been created yet...
Is this correct, and if so: is there another way to automatically call a Python function that uploads the HTML file after executing a test case?
Yes, you can use something like the below. Add this to the Settings section of your .robot file (use Run Keyword If Test Failed instead if you want it to run on failure):
Test Teardown    Run Keyword If Test Passed    Name_of_kw
Now define the keyword (import the Process library in your robot file first):
Name_of_kw
    Start Process    python    path_to_file.py    alias=prog
    Wait For Process    prog    # wait until it gets completed
    Get Process Result    prog    stdout=yes    # to make sure you have uploaded it
More details at https://robotframework.org/robotframework/latest/libraries/Process.html
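For illustration, path_to_file.py could be a small upload script; a hypothetical sketch using the requests package, where the URL, credentials, and file name are placeholders for whatever your ticketing system expects:

# path_to_file.py (hypothetical sketch)
import requests

with open('log.html', 'rb') as f:
    response = requests.post(
        'https://tracker.example.com/api/tickets/123/attachments',  # placeholder URL
        files={'file': f},
        auth=('user', 'token'),  # placeholder credentials
    )
response.raise_for_status()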
I've been facing the same problem, and I have an alternative solution for getting the log files (log.html, report.html, output.xml) in the same execution and uploading them to an FTP server.
I created a Python script in the Robot Framework project's root folder:
import subprocess
import sys
import os.path
import time

# Re-run whatever command was passed on the command line
# (e.g. "python script.py robot .\tests\...\suite").
arguments = sys.argv[1:]
subprocess.run(arguments)

# Wait until all three output files exist before post-processing them.
logs = ['log.html', 'report.html', 'output.xml']
while not all(os.path.exists(log) for log in logs):
    time.sleep(1)

do_something()  # placeholder for the upload step
Instead of running:
robot .\tests\...\suite
Run:
python script.py robot .\tests\...\suite
That way, you will be able to work with the output files once the tests or suites are finished.
If you run the robot command with the -d flag to save results to a different folder, consider using Robot Framework's automatic variable ${OUTPUT DIR} to get the full output path and save it to a txt file in the root folder; then script.py can read that file to find the logs.
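As an illustration of that last step, do_something() could upload the files with the standard library's ftplib; a minimal sketch where the host and credentials are placeholders:

# upload_logs.py (sketch)
from ftplib import FTP

def upload_logs(files=('log.html', 'report.html', 'output.xml')):
    ftp = FTP('ftp.example.com')   # placeholder host
    ftp.login('user', 'password')  # placeholder credentials
    for name in files:
        with open(name, 'rb') as f:
            ftp.storbinary('STOR ' + name, f)
    ftp.quit()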

How to run multiple test cases on multiple test suites in parallel in robot framework | Python

I have developed a tool for automated web testing of 3 URLs using Robot Framework and Python. I want to execute all the test suites in parallel, and the test cases within each suite should also run in parallel.
For example:
URL1 - TestCase1, TestCase2
URL2 - TestCase1, TestCase2
URL3 - TestCase1, TestCase2
Here I want to run all of these test suites (URL1, URL2, URL3) in parallel, with each test case in a test suite also running in parallel.
Is there any way I can do that? Currently I do something like the following, which lets me run all the test cases in parallel, but it executes all the test suites together. I want each test suite to produce its report separately.
os.system(
    'cmd /c "pabot --testlevelsplit --processes 10'
    ' --outputdir C:/filemanager/' + log_time + '/'
    ' C:/Users/abc/*.robot"'
)
Not sure I follow, Ruli. If you want your own brand of parallelism, then you do not even need pabot: just spawn simultaneous process shells, backgrounding them as UNIX jobs with &, jobs, and fg via bash's built-in job control.
It seems you already know how to use pabot, since you use it in your example, but perhaps your solution is not to use it at all.
I like the way pabot retains the integrity of a given test suite, i.e. by design it does not attempt parallel execution of the tests within a suite.
Since the parallelism you seek is to have URL1, URL2, and URL3 run at the same time and each produce its own Robot reports, then run them separately!
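A minimal sketch of that approach, spawning one robot process per suite so each run writes its own output directory and reports (suite file names and result paths are placeholders):

# run_parallel.py (sketch)
import subprocess

suites = ['url1.robot', 'url2.robot', 'url3.robot']  # placeholder suite files
processes = [
    subprocess.Popen(['robot', '--outputdir', 'results/' + suite.replace('.robot', ''), suite])
    for suite in suites
]
# Wait for all of the runs to finish
for process in processes:
    process.wait()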

how to execute a particular test case n number of times in robot framework

I have 5 test cases covering the creation of a member and the verification of a job. I want to run those test cases 5 or 20 times. My framework is Robot, the IDE is PyCharm, and the language is Python.
APS Transformations Triggering
    [Documentation]    Triggering The APS Transformations for a Member
    [Tags]    APSXform    APSXformTrigger
    Login to Platform Analytics
    ${GENERATED_MEMBER} =    Generate a Random Member
    APS_Transformations
    Search for the Member
    Search the Results and Go To
    Relogin If Needed
    Verify Basic Member Homepage Details
    Trigger APS Transformations
    Save Member Details To Job Log File

APS Transformations Verification
    [Documentation]    Verifying The APS Transformations for a Member
    [Tags]    APSXform    APSXformVerification    All
    Login to Platform Analytics
    Log To Console    Previous Run: ${verify_prev_run}
    Fetch Previous Memeber Run Details
    Fetch URL And Go To    APP_LOGGER_URL
    Log    APS Transformations are Successful.
I know that I can do a for loop over keywords, but I do not want to write all these test cases in one keyword.
Is there a command where I can state that I want to run the tests with these tags 20 times?
The simplest solution is to create a shell script that runs robot N times. You can specify a different output file for each run, and then combine all of the results into a single report.
The following example runs robot 10 times and then generates log and report files from all of the combined results.
#!/bin/bash
outputs=()
for i in {1..10}; do
    output="output-$i.xml"
    outputs+=("$output")
    robot --output "$output" "$@"
done
rebot "${outputs[@]}"
Run it like this:
$ bash run_robot.sh example.robot
There is also the option of repeating your target path. Suppose you want to execute a test 6 times in your current path; then you can do it as follows:
pybot --test "Your test" . . . . . .
You can also put your tests inside a loop (the test bodies would need to be moved into keywords first, since test cases cannot be called as keywords):
Example
    :FOR    ${count}    IN RANGE    6
    \    APS Transformations Triggering
    \    APS Transformations Verification

How do I write to the console from a pytest plugin during the collection phase?

I'm writing a pytest plugin that needs to warn the user about anomalies encountered during the collection phase, but I can't find any way to consistently send output to the console from inside my pytest_generate_tests function.
Output from print and from the logging module only appears in the console when the -s option is added. All the logging-related documentation I found refers to logging from inside tests, not from within a plugin.
In the end I used the pytest-warning infrastructure via the undocumented _warn() method of the pytest config object, which is passed to (or otherwise accessible from) the various hooks. For example:
def pytest_generate_tests(metafunc):
    [...]
    if warning_condition:
        metafunc.config._warn("Warning condition encountered.")
    [...]
This way you get additional pytest-warnings in the one-line summary (if any were reported), and you can see the warning details by adding the -r w option to the pytest command line.

Automated whole application output testing in python

I have a script that takes a file input plus some info, runs a couple of (possibly interdependent) programs on it using the subprocess module, and distributes the output over the file system.
Only a few parts can be tested in isolation by traditional unit testing, so I'm looking for a convenient way to automate the integration testing (checking that the output files exist in the right locations, in the right number, with the right sizes, etc.).
I initially thought that the setUp and tearDown methods from the default unittest module could help me, but they are re-run with each test, not once for the entire test suite, so that is not an option. Is there any way to make the unittest module run a global setUp and tearDown once? Or is there an alternative module/tool that I can use? Eclipse/PyDev integration would be a bonus.
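For reference, unittest does provide module-level fixtures, setUpModule and tearDownModule, which run exactly once around all of the tests in a module; a minimal sketch, where the pipeline command and the expected output path are placeholder assumptions:

import os
import subprocess
import unittest

def setUpModule():
    # Runs once, before any test in this module: execute the whole pipeline
    subprocess.run(['./run_pipeline.sh'], check=True)  # placeholder command

def tearDownModule():
    # Runs once, after all tests in this module: clean up generated files here
    pass

class OutputLayoutTest(unittest.TestCase):
    def test_output_file_exists(self):
        self.assertTrue(os.path.exists('out/result.csv'))  # placeholder path

if __name__ == '__main__':
    unittest.main()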
