Python Selenium: keeping elements after the browser changes pages

I'm currently using Selenium on Python and have got a question about it.
elements = driver.find_elements_by_css_selector("div.classname a")
for element in elements:
    element.click()
    driver.back()
After coming back to the previous page with back(), Selenium can't find the elements anymore, even though I still need them.
If someone has got any clue, please help me out.
Many thanks in advance.

Selenium creates a whole new set of element objects when you change pages -- whether you click a link or go back a page. If clicking the element inside the loop causes Selenium to load a new page, you get a StaleElementReferenceException on the second iteration. So every time you execute driver.back(), you need to search for the element objects on the page again, just as you did in the first line, and probably maintain at least a counter for how far down the list of elements you've already clicked (assuming each click navigates away from the page). Make sense?
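A minimal sketch of that approach, assuming each click navigates away and the same selector still matches after back():

from selenium import webdriver

driver = webdriver.Firefox()
driver.get("http://example.com/listing")  # hypothetical starting page

count = len(driver.find_elements_by_css_selector("div.classname a"))
for i in range(count):
    # Re-find the links after every navigation; the old references are stale.
    links = driver.find_elements_by_css_selector("div.classname a")
    links[i].click()
    driver.back()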

You can store the elements in a list, then re-find each one by index inside the loop so that you always work with a fresh reference. For example:
elementList = driver.find_elements_by_css_selector("div.classname a")
for i in range(len(elementList)):
    element = driver.find_elements_by_css_selector("div.classname a")[i]
    element.click()
    driver.back()

Related

TypeError: 'FirefoxWebElement' object is not iterable error cycling through pages on a dynamic webpage with Selenium

This is the site I want to scrape. I want to scrape all the information in the table on the first page, then click through to the second page and do the same, and so on until the 51st page. I know how to use Selenium to click on page two:
link = "http://www.nigeriatradehub.gov.ng/Organizations"
driver = webdriver.Firefox()
driver.get(link)
xpath = '/html/body/form/div[3]/div[4]/div[1]/div/div/div[1]/div/div/div/div/div/div[2]/div[2]/span/a[1]'
find_element_by_xpath(xpath).click()
But I don't know how to set the code up so that it cycles through each page. Getting the XPath is a manual process in the first place (I go into Firefox, inspect the item, and copy it into the code), so I don't know how to automate that step itself, let alone the ones that follow.
I tried going a level higher in the page HTML, selecting the entire section of the page containing the elements I want, and cycling through them, but that doesn't work because it's a Firefox web object (see below).
By calling the xpath of the higher class like so:
path = '//*[@id="dnn_ctr454_View_OrganizationsListViewDataPager"]'
driver.find_element_by_xpath(path)
and trying to see if I can cycle though it:
for i in driver.find_element_by_xpath(path):
    i.click()
I get the following error:
TypeError: 'FirefoxWebElement' object is not iterable
Any advice would be greatly appreciated.
This error message implies that you are trying to iterate through a WebElement, whereas only list objects are iterable.
Solution
To create a list within the for() loop and iterate through its elements, you need to use find_elements* instead of find_element*. So your effective code block will be:
for i in driver.find_elements_by_xpath(path):
    i.click()
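To cycle through all 51 pages, one option is to re-find the pager on every iteration, since the old reference goes stale each time the table is redrawn. A rough sketch, where scrape_table is a hypothetical helper and the "Next" link text is an assumption about the pager markup:

import time
from selenium import webdriver
from selenium.common.exceptions import NoSuchElementException

driver = webdriver.Firefox()
driver.get("http://www.nigeriatradehub.gov.ng/Organizations")
pager_xpath = '//*[@id="dnn_ctr454_View_OrganizationsListViewDataPager"]'

for page in range(51):
    scrape_table(driver)  # hypothetical helper that reads the current table
    try:
        # Re-find the pager each time; the previous reference is stale.
        pager = driver.find_element_by_xpath(pager_xpath)
        pager.find_element_by_link_text("Next").click()  # assumed link text
    except NoSuchElementException:
        break  # no "Next" link on the last page
    time.sleep(2)  # crude wait for the new table to render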

Iterating google search results using python selenium

I want to iterate through the Google search results, clicking each one and copying the menus of each site. So far I am able to copy the menus and return to the results page, but I can't iterate through clicking the results. For now, I would like to learn how to iterate through the search results alone, but I'm stuck on a stale element reference exception. I did look at a few other sources, but no luck.
from selenium import webdriver

chrome_path = r"C:\Users\Downloads\chromedriver_win32\chromedriver.exe"
driver = webdriver.Chrome(chrome_path)
driver.get('https://www.google.com?q=python#q=python')
weblinks = driver.find_elements_by_xpath("//div[@class='g']//a[not(@class)]")
for links in weblinks[0:9]:
    print(links.get_attribute("href"))
    links.click()
    driver.back()
StaleElementReferenceException means that the elements you are referring to no longer exist. That usually happens when the page is redrawn. In your case you change the page and then navigate back, so the elements are certain to be redrawn.
The standard solution is to repeat the search inside the loop on every iteration.
If you want to be sure the list is the same on each iteration, you need to add some additional check (compare texts, etc.).
If you are using this code for scraping, you probably don't need the back navigation at all. Just open every page directly with driver.get(href).
Here you can find a code example: How to open a link in new tab (chrome) using Selenium WebDriver?
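A sketch of that href-based approach: collect the URLs up front while the references are fresh, then visit each page directly so nothing ever goes stale.

weblinks = driver.find_elements_by_xpath("//div[@class='g']//a[not(@class)]")

# Grab the hrefs immediately, before any navigation invalidates the elements.
hrefs = [link.get_attribute("href") for link in weblinks[0:9]]

for href in hrefs:
    print(href)
    driver.get(href)  # open the result directly; no back() required
    # ... copy the menus here ...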

selenium.common.exceptions.StaleElementReferenceException: Message: stale element reference: element is not attached to the page

driver.get("https://www.zacks.com/")
driver.find_element_by_xpath("//*[#id='search-q']")
I am trying to find the search box on the Zacks website with Selenium, but I am getting a StaleElementReferenceException.
The reason you're getting this error is simple: the element has been removed from the DOM. There are several possible causes:
The page itself is destroying/recreating the element on the fly, maybe even rapidly.
Parts of the page have been updated (replaced), but you're still holding an old reference.
You navigated to a new page while holding an old reference.
To avoid this, keep the element reference as short-lived as possible. If the content is changing rapidly, perform the operation directly in the browser, without the round trip to the client, via JavaScript:
driver.execute_script("document.getElementById('search-q').click();")
Alternatively, maybe you're trying to find the element while the page and this exact search box are still loading. Try implementing a wait mechanism for the element, something like this:
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

wait = WebDriverWait(driver, timeout_in_seconds)
wait.until(EC.visibility_of_element_located(locator))
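Put together for this page, that could look like the following sketch, which assumes the search box keeps the id search-q:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get("https://www.zacks.com/")

# Wait up to 10 seconds for the search box to be attached and visible,
# then use it immediately so the reference stays fresh.
search_box = WebDriverWait(driver, 10).until(
    EC.visibility_of_element_located((By.ID, "search-q")))
search_box.send_keys("AAPL")  # hypothetical query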

Click all links on the website with action: "The element reference is stale" error

I'm making a new script. I want to click() every listed link on the website, find something, go back to the listed links, click() the next link, find something, and go back again.
I start with a website that lists some links for me:
link 1
link 2
link 3
etc
import time
from selenium.common.exceptions import ElementNotVisibleException, NoSuchElementException

links = driver.find_elements_by_xpath("myxpath")
for link in links:
    link.click()
    try:
        time.sleep(2)
        wantedelement = driver.find_element_by_xpath("xpath")
        wantedelement.click()
        # Save to file
        tofile = driver.find_element_by_xpath("xpath")
        print(tofile.text)
        myfile = open("file.txt", "a")
        myfile.write(tofile.text + "\n")
        myfile.close()
        driver.back()
    except (ElementNotVisibleException, NoSuchElementException):
        driver.back()
But my script only checks one link, and when it goes back to the listed links it prints this error:
selenium.common.exceptions.StaleElementReferenceException: Message: The element reference is stale. Either the element is no longer attached to the DOM or the page has been refreshed.
on the line:
for link in links:
    link.click() <----
How can I fix it? (Python 2.7)
A couple of things:
Don't use delays like sleep() to synchronize; use wait conditions instead.
Since you are using back(), you will want to do that only after you have successfully moved off the previous page. This is one reason it is better to test the results of your actions specifically: for example, how do you know that every click() is supposed to move you "forward"? There might be some other action involved. You are making assumptions about the DOM here, which is OK, but you need to verify your conditions before performing further actions. If you move "back" before the next page has loaded, you will lose your DOM.
Also, it is possible that back() causes your DOM to refresh, which changes the element identifiers. If that's the case, you'll want to verify the page after each back() and iterate through the elements by index, looking each one up individually before clicking.
I have always found it better practice to be exacting about operations like clicks: click the specific link, check for the specific result, and use wait conditions between actions to make sure you are exactly where you expect to be.
More about wait conditions here.
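A sketch that combines those suggestions, re-finding the links by index and using wait conditions instead of sleep() (the XPaths are the placeholders from the question):

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import TimeoutException

wait = WebDriverWait(driver, 10)
count = len(driver.find_elements_by_xpath("myxpath"))

for i in range(count):
    # Look the links up again after every back(); the old list is stale.
    links = wait.until(
        EC.presence_of_all_elements_located((By.XPATH, "myxpath")))
    links[i].click()
    try:
        wantedelement = wait.until(
            EC.element_to_be_clickable((By.XPATH, "xpath")))
        wantedelement.click()
    except TimeoutException:
        pass
    driver.back()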

Filling out WebForm without "finding element by..." Python Selenium

I am very new to Selenium WebDriver and I'm using Python. I want to go to a webpage that contains text boxes, dropdowns, radio buttons, etc., and I want to tab/move from element to element without a driver.find call for each one, because that would defeat the purpose of automating: some of these pages have a lot of fields. Is there a way to tab from one element to another without getting each element's id?
Essentially, I would tab from field to field and create an if statement: If this is 'input' then do this, else if 'select' do this, etc.
Any help would be greatly appreciated. I hope I was clear. Thanks in advance.
Yeah you can do this! I will use an example for something with checkboxes:
checkboxes = driver.find_elements_by_xpath('//input[@type="checkbox"]')
for allChecks in checkboxes:
    try:
        allChecks.click()
    except:
        print("Checkbox not working")
So this finds every element that is a checkbox and checks it with allChecks.click(). Notice a couple of things, though; when I was learning, these were really subtle and I missed them.
When finding one element on a page, use find_element_by_xpath; to find multiple elements on a page, use find_elements_by_xpath. Notice that "elements" is plural in the second one.
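And for the original tabbing idea, you can walk the form with the TAB key and inspect whichever element currently has focus via driver.switch_to.active_element. A rough sketch; the field count and the per-type actions are assumptions about the form:

from selenium.webdriver.common.keys import Keys

NUM_FIELDS = 10  # assumed number of fields on the form

element = driver.switch_to.active_element
for _ in range(NUM_FIELDS):
    if element.tag_name == "input":
        element.send_keys("some text")  # hypothetical input value
    elif element.tag_name == "select":
        element.send_keys("o")  # jump to an option by its first letter
    # TAB moves focus to the next field; grab the newly focused element.
    element.send_keys(Keys.TAB)
    element = driver.switch_to.active_element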
