I am working on an automation test project (using pytest-bdd) and I constantly run into the problem of how to handle environment prerequisites with BDD and Gherkin. For example, almost all of the scenarios require new entities to be created (users/admins/sites/organizations/etc.) just to have something to work with.
I think I shouldn't write out every prerequisite action in the 'Given' section (it seems anti-BDD), but I also don't want to lose track of which scenario sets up what, and how.
For example, I would never want to write this:
Scenario: A user can buy a ticket from a webshop.
Given an item available in the webshop
And a user is created
And the user has at least one payment option set up
And the user is logged in
When the user buys the item in the webshop
Then the user owns the item
How do people generally write down these setup actions and entities in a way that stays readable and maintainable in the future?
One Way - Using @BeforeClass (One-time setup)
@RunWith(Cucumber.class)
@CucumberOptions(features = "classpath:features/checkoutmodule/registereduser/",
        glue = {"com.ann.automation.test.steps" },
        tags = { "@SignIn" },
        plugin = { "pretty","json:target/cucumber.json",
                "junit:target/cucumber-reports/Cucumber.xml", "html:target/cucumber-reports",
                "com.cucumber.listener.ExtentCucumberFormatter"},
        strict = false,
        dryRun = false,
        monochrome = true)
public class RunCuke {
    // ----------------------------- Extent Report Configuration -----------------------------
    @BeforeClass
    public static void setup() {
        // below is dummy code just to showcase
        File newFile = new File(Constants.EXTENT_REPORT_PATH);
        ExtentCucumberFormatter.initiateExtentCucumberFormatter(newFile, true);
        ExtentCucumberFormatter.loadConfig(new File(Constants.EXTENT_CONFIG_FILE_PATH));
        ExtentCucumberFormatter.addSystemInfo("Browser Name", Constants.BROWSER);
        ExtentCucumberFormatter.addSystemInfo("Browser version", Constants.BROWSER_VERSION);
        ExtentCucumberFormatter.addSystemInfo("Selenium version", Constants.SELENIUM_VERSION);
    }
}
Other Way - Using Background (Setup before every scenario)
Cucumber provides a mechanism for this with the Background keyword, where you can specify a step or series of steps which are common to all the tests in the feature file. These steps run before each scenario in the feature; typically they will be Given steps, but you can use any steps that you need to.
Example: here, before every scenario/outline execution, we want the user to be taken to the site's home page and to search for a product. So let's see the implementation.
Background:
Given User is on Brand Home Page "https://www.anntaylor.com/"
Given User searches for a styleId for <Site> and makes product selection on the basis of given color and size
| Style_ID | Product_Size | Product_Color |
| TestData1 | TestData1 | TestData1 |
| TestData2 | TestData2 | TestData2 |
@guest_search
Scenario Outline: Validation of UseCase Guest User Order Placement flow from Search
Then Clicking on Cart icon shall take user to Shopping Bag
When Proceeding to checkout as "GuestUser" with emailId <EmailID> shall take user to Shipping Page
And Entering FN as <FName> LN as <LName> Add as <AddL1> ZCode as <ZipCode> PNo as <PhoneNo> shall take user to payment page
And Submitting CCardNo as <CCNo> Month as <CCMonth> Year as <CCYear> and CVV as <CVV> shall take user to Order Review Page
Then Verify Order gets placed successfully
Examples: Checkout User Information
| EmailID | FName | LName | AddL1 | ZipCode | PhoneNo | CCNo | CCMonth | CCYear | CVV |
| TestData2 | TestData2 | TestData2 | TestData2 | TestData2 | TestData2 | TestData2 | TestData2 | TestData2 | TestData2 |
Last Way - Using @Before (Setup before every scenario)
@Before
public void setUpScenario(Scenario scenario){
    log.info("***** FEATURE FILE :-- " + Utility.featureFileName(scenario.getId().split(";")[0].replace("-"," ")) + " --: *****");
    log.info("---------- Scenario Name :-- " + scenario.getName() + "----------");
    log.info("---------- Scenario Execution Started at " + Utility.getCurrentTime() + "----------");
    BasePage.message=scenario;
    ExtentTestManager.startTest("Scenario No . " + (x = x + 1) + " : " + scenario.getName());
    ExtentTestManager.getTest().log(Status.INFO, "Scenario No . "+ x + " Started : - " + scenario.getName());
    // Utility.setupAUTTestRecorder();
    // --------- Opening Browser() before every test case execution for the URL given in Feature File. ---------
    BaseSteps.getInstance().getBrowserInstantiation();
}
Use Cucumber's "Background" keyword:
Background in Cucumber is used to define a step or series of steps which are common to all the tests in the feature file. It allows you to add some context to the scenarios for a feature where it is defined.
See also the official docs.
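Since the question itself is about pytest-bdd, the same idea translates there by pushing the noisy setup into pytest fixtures and exposing a single business-level Given. Below is a minimal sketch, assuming a recent pytest-bdd (with target_fixture support); create_user, add_payment_option and delete_user are hypothetical stand-ins for the project's real provisioning code.

# conftest.py -- a sketch, not the project's actual helpers
import pytest
from pytest_bdd import given

@pytest.fixture
def registered_user():
    user = create_user()          # hypothetical: provision a fresh user
    add_payment_option(user)      # hypothetical: attach a payment method
    yield user
    delete_user(user)             # hypothetical cleanup after the scenario

# One business-level step hides all of the plumbing above.
@given("a registered user with a payment option", target_fixture="user")
def given_registered_user(registered_user):
    return registered_user

The feature file then needs only "Given a registered user with a payment option", so the scenario reads at the level of intent while the fixture owns setup and teardown.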
Related
I have two questions about my development.
Question 1
I'm trying to create a template from Python code that reads from BigQuery tables, applies some transformations, and writes to a different BigQuery table (which may or may not exist).
The point is that I need to pass the target table as a parameter, but it seems I can't use parameters in the WriteToBigQuery pipeline step, as it raises the following error message: apache_beam.error.RuntimeValueProviderError: RuntimeValueProvider(option: project_target, type: str, default_value: 'Test').get() not called from a runtime context
Approach 1
with beam.Pipeline(options=options) as pipeline:
    logging.info("Start logic process...")
    kpis_report = (
        pipeline
        | "Process start" >> Create(["1"])
        | "Delete previous data" >> ParDo(preTasks())
        | "Read table" >> ParDo(readTable())
        ....
        | 'Write table 2' >> Write(WriteToBigQuery(
            table=custom_options.project_target.get() + ":" + custom_options.dataset_target.get() + "." + custom_options.table_target.get(),
            schema=custom_options.target_schema.get(),
            write_disposition=BigQueryDisposition.WRITE_APPEND,
            create_disposition=BigQueryDisposition.CREATE_IF_NEEDED)
Approach 2
I created a ParDo function in order to get the variable there and set up the WriteToBigQuery method. However, despite the pipeline execution completing successfully and the output returning rows (theoretically written), I can't see the table nor any data inserted into it.
with beam.Pipeline(options=options) as pipeline:
    logging.info("Start logic process...")
    kpis_report = (
        pipeline
        | "Process start" >> Create(["1"])
        | "Pre-tasks" >> ParDo(preTasks())
        | "Read table" >> ParDo(readTable())
        ....
        | 'Write table 2' >> Write(WriteToBigQuery())
Here I tried two methods, and neither works: BigQueryBatchFileLoads and WriteToBigQuery.
class writeTable(beam.DoFn):
    def process(self, element):
        try:
            # Load the parameters from the custom_options variable first
            # (here, at runtime, .get() works)
            target_table = custom_options.project_target.get() + ":" + custom_options.dataset_target.get() + "." + custom_options.table_target.get()
            target_schema = custom_options.target_schema.get()
            result1 = Write(BigQueryBatchFileLoads(destination=target_table,
                                                   schema=target_schema,
                                                   write_disposition=BigQueryDisposition.WRITE_APPEND,
                                                   create_disposition=BigQueryDisposition.CREATE_IF_NEEDED))
            result2 = WriteToBigQuery(table=target_table,
                                      schema=target_schema,
                                      write_disposition=BigQueryDisposition.WRITE_APPEND,
                                      create_disposition=BigQueryDisposition.CREATE_IF_NEEDED,
                                      method="FILE_LOADS")
        except Exception as e:
            logging.error(e)
Question 2
My other doubt is whether, in this last ParDo class, I need to return something (the element, or result1, or result2), since this is the last step of the pipeline.
I'd appreciate your help on this.
The most advisable way to do this is similar to Approach 1, but passing the value provider without calling get(), and passing a lambda for table:
with beam.Pipeline(options=options) as pipeline:
    logging.info("Start logic process...")
    kpis_report = (
        pipeline
        | "Process start" >> Create(["1"])
        | "Delete previous data" >> ParDo(preTasks())
        | "Read table" >> ParDo(readTable())
        ....
        | 'Write table 2' >> WriteToBigQuery(
            table=lambda x: custom_options.project_target.get() + ":" + custom_options.dataset_target.get() + "." + custom_options.table_target.get(),
            schema=custom_options.target_schema,
            write_disposition=BigQueryDisposition.WRITE_APPEND,
            create_disposition=BigQueryDisposition.CREATE_IF_NEEDED)
This should work.
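For completeness, this assumes the custom options were declared as runtime ValueProviders, roughly as in the sketch below (the option names are taken from the question; the default is illustrative):

from apache_beam.options.pipeline_options import PipelineOptions

class CustomOptions(PipelineOptions):
    @classmethod
    def _add_argparse_args(cls, parser):
        # ValueProvider arguments are resolved at runtime, which is why
        # .get() may only be called from a runtime context (e.g. inside
        # a DoFn or a callable passed to WriteToBigQuery).
        parser.add_value_provider_argument('--project_target', type=str, default='Test')
        parser.add_value_provider_argument('--dataset_target', type=str)
        parser.add_value_provider_argument('--table_target', type=str)
        parser.add_value_provider_argument('--target_schema', type=str)

options = PipelineOptions()
custom_options = options.view_as(CustomOptions)

Regarding Question 2: a transform object constructed inside a DoFn's process() is never applied to the pipeline, which is why Approach 2 appears to finish successfully without writing anything; the write has to be a step of the pipeline itself, as above, so the DoFn does not need to return result1 or result2.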
For example:
[root@test ~]# mysql -uroot -p'123123' -e"select user,host from mysql.user;"
+-------------------+-----------+
| user | host |
+-------------------+-----------+
| root | % |
| test | % |
| sqlaudit_test_mon | % |
| sysbase_test | % |
| mysql.session | localhost |
| mysql.sys | localhost |
+-------------------+-----------+
How can I quickly convert this query result to JSON format? Can I use jq or Python?
Desired output:
[
  {"user":"root","host":"%"},
  {"user":"test","host":"%"},
  {"user":"sqlaudit_test_mon","host":"%"},
  {"user":"sysbase_test","host":"%"},
  {"user":"mysql.session","host":"localhost"},
  {"user":"mysql.sys","host":"localhost"}
]
I just want to know how to quickly turn the query result into JSON, thank you! Either jq or a Python script that can convert my query result to JSON format would be fine.
Just do it in your SELECT instead of pulling another program into a pipeline. MySQL has JSON functions; the ones of interest here are JSON_ARRAYAGG() and JSON_OBJECT(). Something like:
SELECT json_arrayagg(json_object('user', user, 'host', host)) FROM mysql.user;
should do it, plus whatever's needed to not print out that fancy table ASCII art (the mysql client's -B/--batch and -N/--skip-column-names flags, for instance).
Here's an all-jq solution that assumes an invocation like this:
jq -Rcn -f program.jq sql.txt
Note in particular the -R ("raw input") and -n options.
def trim: sub(" *$";"") | sub("^ *";"");
# input: an array of values
def objectify($headers):
. as $in
| reduce range(0; $headers|length) as $i ({}; .[$headers[$i]] = ($in[$i]) ) ;
def preprocess:
select( startswith("|") )
| split("|")
| .[1:-1]
| map(trim) ;
reduce (inputs|preprocess) as $in (null;
if . == null then {header: $in}
else .header as $h
| .table += [$in|objectify($h)]
end )
| .table
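If you would rather use Python (the question allows either), here is a minimal sketch that reads the mysql table art from stdin and prints a JSON array; the file name table2json.py is just for illustration:

import json
import sys

rows = []
for line in sys.stdin:
    line = line.strip()
    if not line.startswith("|"):
        continue                      # skip the +---+ border lines
    # split on '|', drop the empty ends, trim the padding
    rows.append([cell.strip() for cell in line.strip("|").split("|")])

header, data = rows[0], rows[1:]      # the first |-row is the header
print(json.dumps([dict(zip(header, row)) for row in data]))

It would be invoked as, say, mysql -uroot -p'123123' -e"select user,host from mysql.user;" | python3 table2json.py.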
How do I parametrize a function for pytest such that the parameters are taken from a text file and the test name changes on each iteration in the pytest-html report?
Text file format: Function_name | assert_value | Query for assertion from PostgreSQL.
Requirement: to create a pytest-based framework.
So far with my logic (it doesn't work):
with open("F_Query.txt","r") as ins1:
for F_Query in ins1:
#Name of function to be executed to be extracted from the file differentiated with delimeter " | "(leading and trailing space required)
F_name=F_Query.split(" | ")[0]
assert_val=F_Query.split(" | ")[1]
Query=F_Query.split(" | ")[2]
Loc_file=(Query.split(" ")[-5:])[0]
def f(text):
def r(y):
return y
r.__name__ = text
return r
c1.execute(Query)
assert(c1.rowcount() == assert_val), "Please check output file for records"
p = f(F_name)
Can anyone explain how to get the function name to change on every iteration while the parameters are being passed into the pytest function?
Latest changes (still doesn't work):
with open("F_Query.txt","r") as ins1:
for F_Query in ins1:
#Name of function to be executed to be extracted from the file differentiated with delimeter " | "(leading and trailing space required)
#F_name=F_Query.split(" | ")[0]
assert_val=int(F_Query.split(" | ")[1])
Query=F_Query.split(" | ")[2]
Query=Query.strip("\n")
#Loc_file=(Query.split(" ")[-5:])[0]
dictionary1[Query]=assert_val
#pytest.mark.parametrize('argumentname',dictionary1)
def test_values(argumentname):
c1.execute(dictionary1.keys(argumentname))
assert(c1.rowcount==dictionary1.values(argumentname))
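For reference, the usual way to get both behaviours (parameters from a file and a distinct test name per row in the pytest-html report) is a single parametrized test with ids. A minimal sketch, reusing the question's F_Query.txt format and its existing c1 cursor:

import pytest

# Read "name | expected | query" triples once, at collection time.
with open("F_Query.txt") as f:
    CASES = [line.rstrip("\n").split(" | ") for line in f if line.strip()]

# ids= gives each generated test its own name in reports.
@pytest.mark.parametrize(
    "expected,query",
    [(int(expected), query) for _, expected, query in CASES],
    ids=[name for name, _, _ in CASES],
)
def test_query_rowcount(expected, query):
    c1.execute(query)                 # c1: the question's existing DB cursor
    assert c1.rowcount == expected, "Please check output file for records"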
I need to write a program that writes to and reads from a file. I have code that works depending on the order in which I call the functions.
def FileSetup():
    TextWrite = open('Leaderboard.txt','w')
    TextWrite.write('''| Driver | Car | Team | Grid | Fastest Lap | Race Time | Points |
''')
    TextWrite.close()
    TextRead = open('Leaderboard.txt','r')
    return TextRead

def SortLeaderboard(LeaderBoard):
    TextFile = open('Leaderboard.txt', 'w')
    for items in LeaderBoard:
        TextFile.write('\n| '+items['Driver']+' | '+str(items['Car'])+' | '+items['Team']+' | '+str(items['Grid'])+' | '+items['Fastest Lap']+' | '+items['Race Time']+' | '+str(items['Points'])+' |')
Leaderboard = Setup()
FileSetup()
TextRead = FileSetup()
TextFile = open('Leaderboard.txt','w')
SortLeaderboard(Leaderboard)
#TextRead = FileSetup()
str = TextRead.read()
print str
Depending on which TextRead = FileSetup() I comment out either SortLeaderboard or FileSetup will work. If I comment out the TextRead after I call SortLeaderboard then SortLeaderboard will write to the file and FileSetup won't. If I call it after then FileSetup will write to the file and Sortleaderboard won't.
The problem is only one function writes to the file. I am not able to get both to write to it.
I'm sorry, this is really confusing; this was the best way I could think of to explain it. If you need me to explain something in a different way, just ask and I will try.
Avoid calling open() and close() directly; use context managers instead. They will handle closing the file object after you are done.
from contextlib import contextmanager

@contextmanager
def setup_file():
    with open('Leaderboard.txt','w') as writefile:
        writefile.write('''| Driver | Car | Team | Grid | Fastest Lap | Race Time | Points |
''')
    with open('Leaderboard.txt','r') as myread:
        yield myread
def SortLeaderboard(LeaderBoard):
    with open('Leaderboard.txt', 'w') as myfile:
        for items in LeaderBoard:
            myfile.write('\n| '+items['Driver']+' | '+str(items['Car'])+' | '+items['Team']+' | '+str(items['Grid'])+' | '+items['Fastest Lap']+' | '+items['Race Time']+' | '+str(items['Points'])+' |')
Leaderboard = Setup()
with setup_file() as TextRead:
    SortLeaderboard(Leaderboard)
    str = TextRead.read()
    print str
Here you create your own context manager, setup_file, that encapsulates preparing the file for use and cleaning up afterwards.
With the @contextmanager decorator, a Python generator with a single yield statement becomes a context manager. Control of flow passes from the generator to the body of the with block at the yield statement.
After the body of the with block has been executed, control of flow passes back into the generator and cleanup work can be done.
open() can function as a context manager by default, and takes care of closing the file object.
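A stripped-down illustration of that control flow (the names are made up for the example):

from contextlib import contextmanager

@contextmanager
def managed_resource():
    print("setup")        # runs when the with block is entered
    yield "resource"      # value bound to the `as` target; the body runs here
    print("teardown")     # runs after the with block finishes

with managed_resource() as r:
    print("using", r)
# prints: setup, then "using resource", then teardown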
I am using Python PrettyTable to print the status of each record on the CLI.
How do I display the status updates in the same table on the CLI?
Example:
+--------------+---------+
| Jobs | Status |
+--------------+---------+
| job1 | FAILED |
| job2 | SUCCESS |
+--------------+---------+
The job statuses will be updated by a thread. I want to display the updated status in the same table on the CLI console.
I found an ANSI escape code to move the cursor to the previous line, and I am using the logic below to achieve this:
number_of_records = len(records)  # number of jobs in the table
total_lines = number_of_records + 3 + 1  # number of records + borders + header
if prev_lines != 0:
    for i in range(prev_lines):
        sys.stdout.write('\033[F')
prev_lines = total_lines
print status_table
Thanks :)
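For reference, here is a self-contained sketch of that redraw loop; it assumes the prettytable package, and the job names and the fake update loop are made up for the example:

import sys
import time

from prettytable import PrettyTable

jobs = {"job1": "RUNNING", "job2": "RUNNING"}
prev_lines = 0

for tick in range(3):
    # fake status updates; in the question these come from a worker thread
    if tick == 1:
        jobs["job1"] = "FAILED"
    if tick == 2:
        jobs["job2"] = "SUCCESS"

    table = PrettyTable(["Jobs", "Status"])
    for name, status in jobs.items():
        table.add_row([name, status])

    # move the cursor up over the previously printed table, then redraw
    sys.stdout.write('\033[F' * prev_lines)
    print(table)
    prev_lines = len(jobs) + 4    # rows + 3 border lines + 1 header line
    time.sleep(1)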