Selenium + Python: StaleElementReferenceException (selecting by class)

I'm trying to write a simple Python script using Selenium, and while the loop runs once, I'm getting a StaleElementReferenceException.
Here's the script I'm running:
from selenium import webdriver
browser = webdriver.Firefox()
type(browser)
browser.get('http://digital2.library.ucla.edu/Search.do?keyWord=&selectedProjects=27&pager.offset=50&viewType=1&maxPageItems=1000')
links = browser.find_elements_by_class_name('searchTitle')
for link in links:
    link.click()
    print("clicked!")
    browser.back()
I did try adding browser.refresh() to the loop, but it didn't seem to help.
I'm new to this, so please ... don't throw stuff at me, I guess.

It does not make sense to click through links inside of a loop. Once you click the first link, then you are no longer on the page where you got the links. Does that make sense?
Take the for loop out, and add something like link = links[0] to grab the first link, or something more specific to grab the specific link you want to click on.
If the intention is to click on every link on the page, then you can try something like the following:
links = browser.find_elements_by_class_name('searchTitle')
for i in range(len(links)):
    links = browser.find_elements_by_class_name('searchTitle')  # re-find after navigating back, so the references aren't stale
    link = links[i]  # specify the i'th link on the page
    link.click()
    print("clicked!")
    browser.back()
EDIT: This might also be as simple as adding a pause after the browser.back(). You can do that with the following:
from time import sleep
...
sleep(5) # specify pause in seconds
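The re-find-by-index pattern above can also be wrapped in a small helper that takes the locator and the navigation as callables. A minimal sketch; `find_links`, `go_back`, and `on_click` are placeholder parameters (not Selenium API), standing in for calls like `lambda: browser.find_elements_by_class_name('searchTitle')` and `browser.back`:

```python
def click_each(find_links, go_back, on_click=None):
    """Click every element a locator returns, re-finding after each navigation.

    find_links: callable returning the current list of clickable elements
    go_back:    callable that returns to the listing page
    on_click:   optional callback invoked with the index just clicked
    """
    count = len(find_links())
    for i in range(count):
        links = find_links()      # re-locate: references from before the click are stale
        if i >= len(links):       # the page changed underneath us; stop early
            break
        links[i].click()
        if on_click:
            on_click(i)
        go_back()
    return count
```

With a real driver this would be called as `click_each(lambda: browser.find_elements_by_class_name('searchTitle'), browser.back)`.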


Click on link with Selenium Webdriver Python

I need to get all the links from a specific div, and randomly click on one of them.
My code is wrong, because I'm trying to call click on a str object, but I don't know another solution.
links = driver.find_elements(By.CSS_SELECTOR,"div.product-grid a")
links_list=[]
for element in links:
    links_list.append(element.get_attribute("href"))
random.choice(links_list).click()
It was easier than expected:
links = driver.find_elements(By.CSS_SELECTOR,"div.product-grid a")
random.choice(links).click()
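If you do want to go via the href values (for example, to navigate with `driver.get()` instead of clicking, which sidesteps stale references entirely), the collection half can be factored out. A sketch; the `rng` parameter is an assumption of mine, injectable only so the choice can be seeded, and `elements` can be Selenium WebElements or anything exposing `get_attribute("href")`:

```python
import random

def pick_random_href(elements, rng=random):
    """Collect each element's href and return one at random (None if empty)."""
    hrefs = [e.get_attribute("href") for e in elements]
    hrefs = [h for h in hrefs if h]   # drop anchors without an href
    return rng.choice(hrefs) if hrefs else None
```

In real use you would then call `driver.get(pick_random_href(links))`.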

How to get the next page in Selenium?

I am working with Selenium in Python. I want to scrape all the pages, but I am stuck.
Here is the element I want to click:
I am using the following code:
link = driver.find_element_by_link_text('2')
link.click()
But it clicks on another element.
Does there exist another way to get to the next page?
First of all, it seems like the element you're trying to click is overlapped by another one, so you need to wait for it to become clickable, or for the overlapping element to disappear:
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

el = WebDriverWait(driver, 15).until(
    EC.element_to_be_clickable((By.XPATH, '//div[@id="pagination_wrapper"]//li[@value="1"]')))
or
WebDriverWait(driver, LONG_TIMEOUT).until_not(
    EC.presence_of_element_located((By.XPATH, "//div[@class='close_cookie_alert']")))
Here's how you can find all of your elements:
link1 = driver.find_element_by_xpath('//div[@id="pagination_wrapper"]//li[@value="1"]')
link2 = driver.find_element_by_xpath('//div[@id="pagination_wrapper"]//li[@class="2"]')
link3 = driver.find_element_by_xpath('//div[@id="pagination_wrapper"]//li[contains(text(),"text of the third element")]')
If the usual click doesn't work, try clicking via JavaScript, like this:
driver.execute_script("arguments[0].click();", link1)
Or just move to the next page directly with:
driver.get('new_page')
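The JavaScript fallback can be folded into a tiny helper that tries the normal click first. A sketch under the assumption that the caught exception means "click intercepted"; note a JS click ignores overlapping elements, so use it only when you're sure the element is the one you want:

```python
def safe_click(driver, element):
    """Try a native click; fall back to a JavaScript click if it raises.

    Returns which path was taken, which is handy for debugging.
    """
    try:
        element.click()
        return "native"
    except Exception:
        driver.execute_script("arguments[0].click();", element)
        return "js"
```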

Obtaining data on clicking multiple radio buttons in a page using selenium in python

I have a page with 3 radio buttons on it. I want my code to click each of these buttons in turn; as each one is clicked, a value (mpn) is displayed, and I want to capture that value. I am able to write the code for a single radio button, but I don't understand how to create a loop in which only this button's value changes (value={1,2,3}).
from selenium import webdriver
from bs4 import BeautifulSoup
driver = webdriver.Chrome(executable_path=r"C:\Users\Home\Desktop\chromedriver.exe")
driver.get("https://www.1800cpap.com/resmed-airfit-n30-nasal-cpap-mask-with-headgear")
soup = BeautifulSoup(driver.page_source, 'html.parser')
size = driver.find_element_by_xpath("//input[@class='product-views-option-tile-input-picker' and @value='2']")
size.click()
mpn = driver.find_element_by_xpath("//span[@class='mpn-value']")
print(mpn.text)
Also, the buttons vary in number and name from page to page. So if there is any general solution that I could extend to all pages and all buttons, it would be highly appreciated. Thanks!
Welcome to SO!
You were a small step from the correct solution! In particular, the find_element_by_xpath() function returns a single element, but the similar function find_elements_by_xpath() (mind the plural) returns an iterable list, which you can use to implement a for loop.
Below is an MWE with the example page that you provided:
from selenium import webdriver
import time
driver = webdriver.Firefox() # initiate the driver
driver.get("https://www.1800cpap.com/resmed-airfit-n30-nasal-cpap-mask-with-headgear")
time.sleep(2) # sleep for a couple seconds to ensure correct upload
mpn = [] # initiate an empty results' list
for button in driver.find_elements_by_xpath("//label[@data-label='label-custcol3']"):
    button.click()
    mpn.append(driver.find_element_by_xpath("//span[@class='mpn-value']").text)
print(mpn) # print results
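The click-and-read pattern in the MWE generalizes to any page: click each element one locator returns, then read a value with another. A sketch with the driver interactions injected as callables; the parameter names are placeholders of mine, not part of Selenium's API:

```python
def collect_after_click(buttons, read_value):
    """Click each button in turn and record the value displayed afterwards.

    buttons:    iterable of clickable elements (e.g. from find_elements_by_xpath)
    read_value: callable returning the value currently shown
                (e.g. lambda: driver.find_element_by_xpath("//span[@class='mpn-value']").text)
    """
    values = []
    for button in buttons:
        button.click()
        values.append(read_value())
    return values
```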

Selenium not following click that opens a new tab

For the longest time I had an issue with selenium finding a button I was looking for on the page. After a week I got the "brilliant" idea to check the url that selenium was searching the button for. Dummy me, it was the wrong URL.
So the issue is, selenium searches page1 for a specific item. It then clicks it, and by the websites design, it opens page2 in a new tab. How can I get selenium to follow the click and work on the new tab?
I thought of using Beautiful Soup to just copy the URL from page1, but the website doesn't show the URLs. Instead it shows functions that get the URLs. It's really weird and confusing.
Ideas?
all_matches = driver.find_elements_by_xpath("//*[text()[contains(., 'Pink')]]")
item = all_matches[0]
actions.move_to_element(item).perform()
item.click()
try:
    print(driver.current_url)
    get_button = driver.find_element_by_xpath('//*[@id="getItem"]')
except:
    print("Can't find button")
else:
    actions.move_to_element(get_button).perform()
    get_button.click()
Selenium treats tabs like windows, so generally, switching to new window/tab is as easy as:
driver.switch_to.window(driver.window_handles[-1])
You may find it helpful to keep track of the windows with vars:
main_window = driver.current_window_handle
page2_window = driver.window_handles[-1]
driver.switch_to.window(page2_window)
Note that when you want to close the new window/tab, you have to both close & switch back:
driver.close()
driver.switch_to.window(main_window)
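One caveat: the ordering of `window_handles` is not strictly guaranteed, so `window_handles[-1]` may not be the newest tab on every browser. A more defensive approach is to snapshot the handles before the click and take the set difference afterwards; a small sketch of that pure logic, with the Selenium usage shown only in the docstring:

```python
def newest_handle(handles_before, handles_after):
    """Return the one handle that appeared after the click, or None.

    Usage with Selenium (sketch):
        before = driver.window_handles
        item.click()
        new = newest_handle(before, driver.window_handles)
        if new:
            driver.switch_to.window(new)
    """
    opened = set(handles_after) - set(handles_before)
    return opened.pop() if opened else None
```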

Python, Selenium + stale element reference

I'm trying to go to a webpage, save a set of links on the page that I would like to click, and then click on each of the links in a for loop (going back and forth on the page). Here is the code:
from selenium import webdriver
driver = webdriver.Chrome(executable_path='/Applications/chromedriver')
driver.get("webpage link") #insert link to webpage
list_links = driver.find_elements_by_xpath("//a[contains(#href,'activities')]")
for link in list_links:
    print(link)
    link.click()
    driver.goback()
    driver.implicitly_wait(10)  # seconds
driver.quit()
However, the first time I go back to the homepage I get the error message:
StaleElementReferenceException: stale element reference: element is not attached to the page document.
Can anyone help me to understand why? Suggest a solution?
Thank you. much appreciated.
Your list_links works only on the page where it was defined. After you make the first click on a link, the DOM is re-created and the references in list_links become invalid. You can apply the solution below:
driver.implicitly_wait(10)  # seconds
list_links = [link.get_attribute('href') for link in driver.find_elements_by_xpath("//a[contains(@href,'activities')]")]
for link in list_links:
    print(link)
    driver.get(link)
    driver.goback()
driver.quit()
P.S. I assume that goback() is a custom method you have already defined, as there is no such method in Selenium's built-ins; the built-in is just back().
P.P.S. Note that you can call driver.implicitly_wait(10) only once in your code, and it will apply to all subsequent find_element...() calls.
It's simple: you are trying to save references to HTML elements (the links) that can no longer be referenced once the loop navigates away, which is why it throws this error. Those references are Selenium objects, and you should not hold on to them across navigations. Instead, save the exact value of each link (its href) in an array and then loop over those.
