How to Speed up Test Execution Using Selenium Grid on a Single Machine - Python

I am using Selenium with Python and I would like to speed up my tests by running, say, 5 tests simultaneously. How can I achieve that on a single machine with the help of Selenium Grid?

You won't need a Selenium Grid for this. The Grid is used to distribute test execution across multiple machines; since you're only using one machine, you don't need it.
You are running tests, so I'm assuming you are using a test framework. You should research how to run tests in parallel with that framework.
There will probably also be a way to execute a setup function before each test; that is where you can start the driver.
I'd be happy to give you a more detailed answer, but your question doesn't mention which framework you use to run the tests.

This is my base class:
import sys
from selenium import webdriver

class BaseTestCase(object):
    # allow nose's multiprocess plugin to split the tests in this class across processes
    _multiprocess_can_split_ = True

    def setUp(self):
        self.driver = webdriver.Firefox()
        self.driver.get("https://login.com")
        self.assertEqual("Authorization required", self.driver.title)

    def tearDown(self):
        # if the test raised an exception, save a screenshot named after the test method
        if sys.exc_info()[0]:
            test_method_name = self._testMethodName
            self.driver.save_screenshot("users/desktop/ErrorScreenshots/" + test_method_name + ".png")
        self.driver.quit()
When I try to achieve this with nose by running nosetests --processes=2 in the terminal, it opens all 30 browsers at the same time and all tests fail.
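To cap how many browsers run at once, one option (a sketch on my part, since you could also stay with nose and its --processes flag) is pytest with the pytest-xdist plugin, which lets you choose the number of parallel workers explicitly. pytest can run unittest-style classes like yours directly:

# test_login.py -- minimal sketch; assumes pytest and pytest-xdist are installed
# (pip install pytest pytest-xdist); the URL and title are taken from your example
import unittest
from selenium import webdriver

class LoginTests(unittest.TestCase):
    def setUp(self):
        # each worker process starts its own browser
        self.driver = webdriver.Firefox()
        self.driver.get("https://login.com")

    def tearDown(self):
        self.driver.quit()

    def test_title(self):
        self.assertEqual("Authorization required", self.driver.title)

Running pytest -n 5 test_login.py then executes at most five tests, and therefore at most five browsers, at a time.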

Related

Python Selenium: share second browser between tests

I'm quite new to Selenium and may be doing something wrong. Please correct me!
I'm creating some E2E tests, and part of them require a second account.
Each time I open a new browser, I have to go through the login procedure, which takes time.
So I decided to keep the second browser open between tests and reuse it.
But I can't pass the newly created Selenium object to the second test. What am I doing wrong?
class RunTest(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # main browser that I always use
        cls.driver = webdriver.Chrome(...)

    def init_second_driver(self):
        # second browser that could be used by some tests
        self.second_driver = webdriver.Chrome(...)

    def test_case1(self):
        if not self.second_driver:
            self.init_second_driver()
        # some tests for case 1

    def test_case2(self):
        if not self.second_driver:  # this check always fails! WHY?
            self.init_second_driver()
        # some tests for case 2
Thank you in advance
Every time you create your chromedriver object, its default behavior is to create a new Chrome profile. Think of a profile as your local cookie store and cache.
You want this to happen. Selenium is designed for testing, and logging in each time without history ensures your tests always start from the same state (not logged in, no cookies).
If you have a lot of tests and want your suite to run faster, consider running tests in parallel.
For now, if you want to try sharing state between tests (i.e. staying logged in), you can instruct Chrome to reuse a profile with the following option:
options = webdriver.ChromeOptions()
options.add_argument('--user-data-dir=C:/Path/To/Your/Testing/User Data')
driver = webdriver.Chrome(options=options)
That should remove the need for a second browser to hold your state.
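Regarding why the check in test_case2 always fails: unittest creates a fresh instance of the test class for every test method, so an instance attribute set inside test_case1 doesn't exist on the instance running test_case2, and the first lookup of self.second_driver raises AttributeError. If you still want to keep the second browser open across tests, a minimal sketch (my own suggestion, not the only way) is to store it as a class attribute:

class RunTest(unittest.TestCase):
    second_driver = None  # class-level, shared by all test methods

    @classmethod
    def setUpClass(cls):
        cls.driver = webdriver.Chrome()

    @classmethod
    def init_second_driver(cls):
        # create the second browser only once, then reuse it
        if cls.second_driver is None:
            cls.second_driver = webdriver.Chrome()

    @classmethod
    def tearDownClass(cls):
        cls.driver.quit()
        if cls.second_driver is not None:
            cls.second_driver.quit()

    def test_case1(self):
        self.init_second_driver()
        # some tests for case 1

    def test_case2(self):
        self.init_second_driver()  # reuses the browser created in test_case1
        # some tests for case 2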

Should pytest be used for integration testing an embedded system?

I'm working on setting up my team's new unit test and integration test infrastructure and want to make sure I'm starting off by selecting the correct test frameworks. I'm an embedded developer testing code running on a VxWorks operating system with a C/C++ production codebase.
We need a framework capable of directly testing C/C++ for unit testing, so for our unit tests I chose Googletest as our framework.
However, for integration tests we've generally used Python scripts (with no test framework). The scripts connect to the embedded system over a network and exercise test cases by sending commands and receiving telemetry.
Would using pytest as a test framework be beneficial to the way we're currently using Python for integration testing an embedded system? Most of the examples I've seen use pytest in a more unit-test fashion, creating assertions for single functions in a Python production codebase.
EDIT:
Per hoefling's comment, I'll provide a (very simplified) example of one of our existing Python integration test cases, and also what I believe its corresponding pytest implementation would be.
# Current example
def test_command_counter():
    preTestCmdCount = getCmdCountFromSystem()
    sendCommandToSystem()
    postTestCmdCount = getCmdCountFromSystem()
    if postTestCmdCount != (preTestCmdCount + 1):
        print("FAIL: Command count did not increment!")
    else:
        print("PASS")
# Using pytest?
def test_command_counter():
    preTestCmdCount = getCmdCountFromSystem()
    sendCommandToSystem()
    postTestCmdCount = getCmdCountFromSystem()
    assert postTestCmdCount == (preTestCmdCount + 1)
So, correct me if I'm wrong, but it appears that the advantages of using pytest over plain Python for this simplified case would be:
Being able to make use of pytest's automated test discovery, so I can easily run all of my test functions instead of having to write custom code to do so.
Being able to make use of the assert syntax, which automatically produces a pass/fail result for each test instead of having to manually implement pass/fail print statements for each test case.
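There may also be a third advantage: pytest fixtures could centralize the connection setup and teardown that our scripts currently repeat in every test. A rough sketch (the connection helper and its methods here are hypothetical, not part of our existing code):

import pytest

@pytest.fixture
def system():
    # hypothetical helper that opens the network link to the embedded target
    conn = connectToSystem()
    yield conn
    conn.close()

def test_command_counter(system):
    preTestCmdCount = system.getCmdCount()
    system.sendCommand()
    postTestCmdCount = system.getCmdCount()
    assert postTestCmdCount == (preTestCmdCount + 1)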
I've been in a similar situation, and from what I gathered unit testing frameworks are NOT appropriate for integration testing on embedded systems. A similar question was asked here:
Test framework for testing embedded systems in Python
We personally use Google's OpenHTF to automate both the integration testing (as in your case) and the production verification testing, which includes bringup, calibration and overall verification of assembly.
Check it out: https://github.com/google/openhtf
We automate advanced test equipment such as RF switches and Spectrum Analysers all in Python in order to test and calibrate our RF units, which operate in the >500 MHz range.
Using OpenHTF, you can create complete tests with measurements very quickly. It provides a built-in web GUI and allows you to write custom export 'callbacks'.
We're currently building a complete test solution for hardware testing. I'd be glad to help with OpenHTF if needed, as we're basing our flagship implementation on it.
This thread's old, but these suggestions might help someone...
For unit testing embedded C on a host PC and on target we use Unity and CMock. http://www.throwtheswitch.org/
For hardware-in-the-loop testing we use pytest.
Both work well and are part of our Jenkins release workflow.

How can I debug an iOS Selenium test in Python

I'm trying to run an iOS Selenium test in debug mode. I'm using Appium, an iOS simulator (Xcode), and writing the tests in Python.
Once the code reaches my breakpoint I can see all the variables, but a few seconds later, instead of seeing their values, I get the following exception:
A session is either terminated or not started
This happens even though I can see that the simulator is still running.
I've tried looking online but couldn't find a solution. Can you please help?
Thanks!
You might want to increase the newCommandTimeout desired capability to a value that allows you to inspect the element values. The relevant line to increase the timeout to 5 minutes would be:
desired_caps['newCommandTimeout'] = '300'
Full initialization routine just in case:
from appium import webdriver
desired_caps = {}
desired_caps['platformName'] = 'iOS'
desired_caps['platformVersion'] = '12.3'
desired_caps['automationName'] = 'xcuitest'
desired_caps['deviceName'] = 'iPhone SE'
desired_caps['newCommandTimeout'] = '300'
driver = webdriver.Remote('http://localhost:4723/wd/hub', desired_caps)
This way Appium will wait for a new command from the client (your code) for 5 minutes before considering the client idle and terminating the session. That should be enough to enable debugging; feel free to increase it further if needed.
You can also consider switching to Appium Studio, which makes your life easier when it comes to inspecting the mobile layout, managing iOS devices and provisioning profiles, and automatically generating unique XPath locators for elements, and it offers an extra set of Desired Capabilities that let you deal with edge cases faster.

Performance testing with JMeter JSR223 sampler

I'm doing performance testing with JMeter, using Python in a JSR223 Sampler. I want to know the following:
How to connect to existing browser window?
How to calculate performance timing?
Suppose I have 10 steps in the Python code; I want to measure the time taken from step 3 to step 5.
How to call methods from one JSR223 sampler to another?
Kindly help me with it.
Thanks.
If the browser was launched by Selenium, you can determine its session ID like:
self.driver.session_id
and then start another WebDriver instance, providing the aforementioned session ID as a parameter (where url is the address of the remote WebDriver server):
driver = webdriver.Remote(command_executor=url, desired_capabilities={})
driver.session_id = session_id
If the browser wasn't kicked off via Selenium, it's not possible.
You can use a Transaction Controller to measure the cumulative execution time of its children.
You can put your shared logic into a separate .py file and use sys.path to load it where required, like:
from sys import path
path.append(path_to_your_shared_module)
import YourSharedModule
# call functions from the shared module

Changing execution speed of tests?

Updating with more context: Selenium 1 had a command called setSpeed, which allowed the execution of each command to be slowed down by X milliseconds. The team behind Selenium 2 (WebDriver) decided to deprecate this command, and now there is no way to slow the tests down to speeds where it's easy to visually monitor the app during execution. I've read the developers' explanation of why they deprecated it, as well as the suggested workarounds like using implicit waits, but that doesn't solve the issue for me (or for the other people complaining about the deprecation). That said, I was hoping to work around this by setting a global execution speed that would apply either to each method in unittest or to the entire suite of tests.
Original Question: I have different unit tests that I'd like to execute with different delays between commands. I know I can keep copying and pasting time.sleep between the commands, but surely there is a way to set a universal sleep that runs before each command in the specified method?
def test_x_test(self):
    driver = self.driver
    time.sleep(2)
    print("running the first Selenium command, such as clicking a button")
    time.sleep(2)
    print("running another Selenium command, such as clicking a link")
    time.sleep(2)
    self.driver.quit()

if __name__ == '__main__':
    unittest.main()
Ahh now the answer is so obvious.
Create a helper method that controls WebDriver actions, and put a pause in before it executes each action.
The code below is going to be pseudocode-ish, as I no longer have access to a Python IDE at work:
import time

# pass the WebDriver instance, an element locator, and the command we want to execute
def webdriver_helper(driver, locator, command):
    # this 2-second sleep will run before every action
    time.sleep(2)
    element = driver.find_element(*locator)
    if command == "click":
        element.click()
    elif command == "get_text":
        return element.text
    # ...and so on for any other actions
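If you'd rather not route every action through a helper, another possibility (a separate suggestion, not part of the answer above) is Selenium's EventFiringWebDriver, which can run a listener hook before each command:

import time
from selenium import webdriver
from selenium.webdriver.support.events import EventFiringWebDriver, AbstractEventListener

class SlowDownListener(AbstractEventListener):
    # pause before the events you care about; add or remove hooks as needed
    def before_navigate_to(self, url, driver):
        time.sleep(2)

    def before_find(self, by, value, driver):
        time.sleep(2)

    def before_click(self, element, driver):
        time.sleep(2)

driver = EventFiringWebDriver(webdriver.Firefox(), SlowDownListener())
driver.get("https://example.com")  # sleeps 2 seconds before navigating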
