StaleElementReferenceException in Python

I am trying to scrape data from the Sunshine List website (http://www.sunshinelist.ca/) using the Selenium package, but I get the error below. From several related posts I understand that I need to use WebDriverWait to explicitly ask the driver to wait/refresh, but I am unable to identify where and how I should call it.
StaleElementReferenceException: Message: The element reference of <tr class="even"> is stale: either the element is no longer attached to the DOM or the page has been refreshed
import numpy as np
import pandas as pd
import requests
import time
from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.firefox.firefox_binary import FirefoxBinary
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import WebDriverWait
from selenium.common.exceptions import StaleElementReferenceException
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
ffx_bin = FirefoxBinary(r'C:\Users\BhagatM\AppData\Local\Mozilla Firefox\firefox.exe')
ffx_caps = DesiredCapabilities.FIREFOX
ffx_caps['marionette'] = True
driver = webdriver.Firefox(capabilities=ffx_caps,firefox_binary=ffx_bin)
driver.get("http://www.sunshinelist.ca/")
driver.maximize_window()
tablewotags1=[]
while True:
    divs = driver.find_element_by_id('datatable-disclosures')
    divs1 = divs.find_elements_by_tag_name('tbody')
    for d1 in divs1:
        div2 = d1.find_elements_by_tag_name('tr')
        for d2 in div2:
            tablewotags1.append(d2.text)
    try:
        driver.find_element_by_link_text('Next →').click()
    except NoSuchElementException:
        break

year1 = tablewotags1[0::10]
name1 = tablewotags1[3::10]
position1 = tablewotags1[4::10]
employer1 = tablewotags1[1::10]

df1 = pd.DataFrame({'Year': year1, 'Name': name1, 'Position': position1, 'Employer': employer1})
df1.to_csv('Sunshine List-1.csv', index=False)

If your problem is clicking the "Next" button, you can do that with an XPath:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Firefox(executable_path=r'/pathTo/geckodriver')
driver.get("http://www.sunshinelist.ca/")

wait = WebDriverWait(driver, 20)
el = wait.until(EC.presence_of_element_located((By.XPATH, "//ul[@class='pagination']/li[@class='next']/a[@href='#' and text()='Next → ']")))
el.click()

Before each click on the "Next" button, you should find that button again and then click it.
Or do something like this:
from selenium.common.exceptions import NoSuchElementException

max_attempts = 10
while True:
    try:
        # find_element_* raises NoSuchElementException rather than returning None,
        # so poll for the button inside a try/except
        next_button = self.driver.find_element_by_css_selector(".next>a")
        break
    except NoSuchElementException:
        time.sleep(0.5)
        max_attempts -= 1
        if max_attempts == 0:
            self.fail("Cannot find element.")
And after this loop, perform the click.
PS: You can also try adding a plain time.sleep(x) after finding the element and before the click.
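A cleaner option is an explicit wait instead of a manual polling loop (a minimal sketch; the .next>a selector is reused from above):
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

wait = WebDriverWait(driver, 10)
# wait until the Next link is actually clickable before clicking it
next_link = wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, ".next > a")))
next_link.click()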

Try the code below.
When the element is no longer attached to the DOM and a StaleElementReferenceException is raised, search for the element again to get a fresh reference.
Please do note I checked this with Chrome:
try:
    driver.find_element_by_css_selector('div[id="datatable-disclosures_wrapper"] li[class="next"]>a').click()
except StaleElementReferenceException:
    driver.find_element_by_css_selector('div[id="datatable-disclosures_wrapper"] li[class="next"]>a').click()
except NoSuchElementException:
    break
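If one retry is not enough, the same re-find idea can be generalized into a small retry loop (a sketch, not part of the original answer; the selector is copied from above):
def click_next(driver, attempts=3):
    # re-find the element on every attempt so a stale reference is never reused
    for _ in range(attempts):
        try:
            driver.find_element_by_css_selector('div[id="datatable-disclosures_wrapper"] li[class="next"]>a').click()
            return True
        except StaleElementReferenceException:
            continue  # the DOM changed under us; look the element up again
        except NoSuchElementException:
            return False  # no Next button left, so this is the last page
    return False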

Stale element exceptions can be handled by catching StaleElementReferenceException, so that a for loop keeps executing when a find_element call hits an element that has gone stale.
from selenium.common import exceptions
and customize your for loop like this:
for item in items:  # your loop starts
    try:
        driver.find_elements_by_id("data")  # method to find the element
        # your code
    except exceptions.StaleElementReferenceException:
        pass
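Applied to the question's row loop, the pattern might look like this (a sketch; the locators and the tablewotags1 list come from the question):
rows = driver.find_element_by_id('datatable-disclosures').find_elements_by_tag_name('tr')
for row in rows:
    try:
        tablewotags1.append(row.text)
    except exceptions.StaleElementReferenceException:
        pass  # this row went stale mid-iteration; skip it and keep going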

When you get a StaleElementReferenceException, it means that something changed on the site, but not in the list you hold. So the trick is to refresh that list every time, inside the loop, like this:
while True:
    driver.implicitly_wait(4)
    for d1 in driver.find_element_by_id('datatable-disclosures').find_element_by_tag_name('tbody').find_elements_by_tag_name('tr'):
        tablewotags1.append(d1.text)
    try:
        driver.switch_to.default_content()
        driver.find_element_by_xpath('//*[@id="datatable-disclosures_wrapper"]/div[2]/div[2]/div/ul/li[7]/a').click()
    except NoSuchElementException:
        print('Script broke clicking next')  # don't be cryptic about error messages; put some useful info here
        break
Hope this helps you, cheers.
Edit:
So I went to the said website; the layout is pretty straightforward, but the structure repeats itself about four times, so when you crawl the site like that something is bound to change.
So I've edited the code to only scrape one tbody tree, the one coming from the first datatable-disclosures, and added some waits.
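One way to make the "refresh the list every time" idea explicit is to wait for the old table body to go stale after clicking Next (a sketch, assuming the same element IDs and link text as the question):
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

old_tbody = driver.find_element_by_id('datatable-disclosures').find_element_by_tag_name('tbody')
driver.find_element_by_link_text('Next →').click()
# block until the old <tbody> reference is detached, i.e. the table has re-rendered
WebDriverWait(driver, 10).until(EC.staleness_of(old_tbody))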

Related

Can't find element in Selenium Python

I have been working on this problem for quite a while now and can't figure out why it is happening. I am trying to click a button. The button and the corresponding text always change, but the XPath stays the same.
It doesn't work with a CSS selector either. I am using Chrome version 106. Does anyone know why?
import time
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from selenium.common.exceptions import NoSuchElementException

PATH = r"C:\Users\###\Downloads\chromedriver.exe"
driver = webdriver.Chrome(PATH)

# time to login
driver.get("https://clever.com")
time.sleep(60)

try:
    driver.find_element(By.XPATH, '//*[@id="root"]/div/div/div[2]/div[3]/div/div/div/div[2]/button[2]').click()
except NoSuchElementException:
    print("no such element")

time.sleep(5)
Try this:
driver.find_element("xpath", '//*[@id="root"]/div/div/div[2]/div[3]/div/div/div/div[2]/button[2]').click()
Maybe you are not using the correct syntax.
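If the button renders a moment after the page loads, an explicit wait on the same XPath may also be more reliable than a bare find_element (a sketch; the XPath is copied from the question):
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

wait = WebDriverWait(driver, 10)
# wait until the button is present and clickable before sending the click
button = wait.until(EC.element_to_be_clickable((By.XPATH, '//*[@id="root"]/div/div/div[2]/div[3]/div/div/div/div[2]/button[2]')))
button.click()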

Why doesn't it find the element by the class?

I want to type something into the input field, but when I locate it by its class it returns an error. The website has enough time to load all elements, so that shouldn't be the problem.
My Code:
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
import time
browser = webdriver.Firefox()
browser.get('https://www.tradingview.com/chart/')
print("a")
time.sleep(5)
elem = browser.find_element_by_id("header-toolbar-symbol-search") # Find the search box
print("b")
elem.click()
time.sleep(5)
crypto_search = browser.find_element_by_class_name("search-Hsmn_0WX upperCase-Hsmn_0WX input-3n5_2-hI")
print("c")
crypto_search.send_keys("VETUSD")
time.sleep(10)
browser.quit()
When I run the code it gives me this error:
selenium.common.exceptions.NoSuchElementException: Message: Unable to locate element: .search-Hsmn_0WX upperCase-Hsmn_0WX input-3n5_2-hI
It gets to the lines that print a and b, but it stops at the line that locates the element by its class.
1. Get rid of time.sleep(): it makes your tests unreliable and slow. Use implicit/explicit waits instead.
2. If an element has multiple classes in one attribute, use CSS or XPath selectors.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.wait import WebDriverWait
from selenium.webdriver.common.keys import Keys
browser = webdriver.Firefox()
browser.get('https://www.tradingview.com/chart/')
wait = WebDriverWait(browser, 10)
wait.until(EC.element_to_be_clickable((By.ID, "header-toolbar-symbol-search"))).click() # Find the search box
wait.until(EC.visibility_of_element_located((By.CSS_SELECTOR, ".search-Hsmn_0WX.upperCase-Hsmn_0WX.input-3n5_2-hI"))).send_keys("VETUSD")
# browser.close()
Here I wait until the first element is clickable and the second is visible, as an example of how explicit waits work in Selenium.
Note how much faster this version of the test is.
Instead of
crypto_search = browser.find_element_by_class_name("search-Hsmn_0WX upperCase-Hsmn_0WX input-3n5_2-hI")
try the following:
crypto_search = browser.find_element_by_css_selector(".search-Hsmn_0WX.upperCase-Hsmn_0WX.input-3n5_2-hI")
I tried and it worked for me.
Also, because there are multiple of these class names and they don't look very descriptive, I would prefer the following selector:
crypto_search = browser.find_element_by_css_selector("[data-name='symbol-search-items-dialog'] input")
According to the documentation you need to use find_elements.
This is because classes are used when there can be multiple elements carrying them on a page, so it doesn't make sense to select only one element by a class name, even if there is only one on the page.
If that element is the only one with that class on the page, try using
browser.find_elements_by_class_name("search-Hsmn_0WX upperCase-Hsmn_0WX input-3n5_2-hI")[0]

Circumventing Stale Element Exceptions in Selenium

I have read several articles on this site regarding the StaleElementReferenceException and am aware that the error is caused by the element no longer being in the site's DOM. What I am trying to do is click the links at the bottom of this webpage in order to go on and see the next page's listings. I have tried a few ways around this exception and haven't found any that work. Here is an example of the code I have tried, and what I thought it might accomplish.
import time
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import WebDriverWait as wait
from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import StaleElementReferenceException

driver = webdriver.Chrome(r'C:\Users\Hank\Desktop\chromedriver_win32\chromedriver.exe')
driver.get('https://steamcommunity.com/market/listings/440/Unusual%20Old%20Guadalajara')

action = ActionChains(driver)
page_links = wait(driver, 10).until(EC.presence_of_all_elements_located((By.CSS_SELECTOR, '[class^=market_paging_pagelink]')))
try:
    action.move_to_element(page_links[1]).click().perform()
except StaleElementReferenceException:
    print("Exception received, trying again")
    time.sleep(5)
    page_links = wait(driver, 10).until(EC.presence_of_all_elements_located((By.CSS_SELECTOR, '[class^=market_paging_pagelink]')))
    action.move_to_element(page_links[1]).click().perform()
I was hoping that this code segment would attempt to move to the element at the bottom, click it, or return the error message, and try again, succeeding the second time. Instead, the code simply throws the error again. If my question has already been answered, please direct me to the relevant link.
Thank you!
The approach I normally take is to click the Next button until it becomes disabled/invisible.
Here's a working example based on your page. You should obviously do whatever is relevant in the while loop; I chose to capture prices for the sake of example.
url = "https://steamcommunity.com/market/listings/440/Unusual%20Old%20Guadalajara"
driver.get(url)

next_button = wait(driver, 10).until(EC.presence_of_element_located((By.ID, 'searchResults_btn_next')))

# capture the start value from "Showing x-xx of 22 results"
# need this to check against later
ref_val = wait(driver, 10).until(EC.presence_of_element_located((By.ID, 'searchResults_start'))).text

while next_button.get_attribute('class') == 'pagebtn':
    next_button.click()

    # wait until ref_val has changed
    wait(driver, 10).until(lambda driver: wait(driver, 10).until(EC.presence_of_element_located((By.ID, 'searchResults_start'))).text != ref_val)

    # ====== Do whatever relevant here =============================
    page_num = wait(driver, 10).until(EC.presence_of_element_located((By.CSS_SELECTOR, '.market_paging_pagelink.active'))).text
    print(f"Prices from page {page_num}")
    prices = wait(driver, 10).until(EC.presence_of_all_elements_located(
        (By.XPATH, ".//span[@class='market_listing_price market_listing_price_with_fee']")))
    for price in prices:
        print(price.text)
    # ================================================================

    # get the new reference value
    ref_val = wait(driver, 10).until(EC.presence_of_element_located((By.ID, 'searchResults_start'))).text
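The same page-change check could also be expressed by waiting for an element from the old page to go stale, instead of watching the counter text (a sketch reusing the wait/EC imports above; the .market_listing_price class comes from the XPath in the loop):
# grab any element from the current page before clicking
first_price = driver.find_element_by_css_selector('.market_listing_price')
next_button.click()
# the old reference detaching from the DOM signals that the page re-rendered
wait(driver, 10).until(EC.staleness_of(first_price))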

How to make Selenium only click a button and nothing else? Inconsistent clicking

My goal: to scrape the number of projects done by a user on Khan Academy.
To do so I need to parse the user's profile page. But I need to click on "Show more" to see all the projects a user has done, and then scrape them.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import TimeoutException,StaleElementReferenceException
from bs4 import BeautifulSoup
# here is one example of a user
driver = webdriver.Chrome()
driver.get('https://www.khanacademy.org/profile/trekcelt/projects')
# click on the "Show more" button repeatedly until there is none
while True:
    try:
        showmore_project = WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.CLASS_NAME, 'showMore_17tx5ln')))
        showmore_project.click()
    except TimeoutException:
        break
    except StaleElementReferenceException:
        break
# parsing the profile
soup=BeautifulSoup(driver.page_source,'html.parser')
# get a list of all the projects
project=soup.find_all(class_='title_1usue9n')
# get the number of projects
print(len(project))
This code returns 0 for print(len(project)), and that's not right: if you manually check https://www.khanacademy.org/profile/trekcelt/projects you can see that the number of projects is definitely not 0.
The weird thing: at first you can see (with the webdriver) that this code works fine, but then Selenium clicks on something other than the "Show more" button, e.g. on one of the project links, which changes the page, and that's why we get 0.
I don't understand how to correct my code so Selenium clicks only on the right button and nothing else.
Check out the following implementation to get the desired behavior. When the script is running, take a closer look at the scroll bar to see the progress.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from bs4 import BeautifulSoup
with webdriver.Chrome() as driver:
    wait = WebDriverWait(driver, 10)
    driver.get('https://www.khanacademy.org/profile/trekcelt/projects')

    while True:
        try:
            showmore = wait.until(EC.presence_of_element_located((By.CSS_SELECTOR, '[class^="showMore"] > a')))
            driver.execute_script("arguments[0].click();", showmore)
        except Exception:
            break

    soup = BeautifulSoup(driver.page_source, 'html.parser')
    project = soup.find_all(class_='title_1usue9n')
    print(len(project))
Another way would be:
while True:
    try:
        showmore = wait.until(EC.presence_of_element_located((By.CSS_SELECTOR, '[class^="showMore"] > a')))
        showmore.location_once_scrolled_into_view
        showmore.click()
        wait.until(EC.invisibility_of_element_located((By.CSS_SELECTOR, '[class^="spinnerContainer"] > img[class^="loadingSpinner"]')))
    except Exception:
        break
Output at this moment:
381
I have modified the accepted answer to improve the performance of your script. Comments on how this is achieved are in the code:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import NoSuchElementException, StaleElementReferenceException
from bs4 import BeautifulSoup
import time
start_time = time.time()
# here is one example of a user
with webdriver.Chrome() as driver:
    driver.get('https://www.khanacademy.org/profile/trekcelt/projects')

    # wait until the first "Show more" is displayed (after the page has loaded)
    showmore_project = WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.CLASS_NAME, 'showMore_17tx5ln')))
    showmore_project.click()

    # click on "Show more" until there is none
    while True:
        try:
            # retrieve and click until we no longer find the element;
            # NoSuchElementException is raised when we reach the end, which saves the 10 s wait
            showmore_project = driver.find_element_by_css_selector('.showMore_17tx5ln [role="button"]')
            # sending the click with JS avoids Selenium throwing an exception when the click
            # would not be performed on the right element
            driver.execute_script("arguments[0].click();", showmore_project)
        except StaleElementReferenceException:
            continue
        except NoSuchElementException:
            break

    # parsing the profile
    soup = BeautifulSoup(driver.page_source, 'html.parser')
    # get a list of all the projects
    project = soup.find_all(class_='title_1usue9n')
    # get the number of projects
    print(len(project))
    print(time.time() - start_time)
Execution Time1: 14.343502759933472
Execution Time2: 13.955228090286255
Hope this helps you!

extracting links with a specific class with Selenium in Python

I am trying to extract links from an infinite-scroll website.
This is my code for scrolling down the page:
import time
from selenium import webdriver

driver = webdriver.Chrome('C:\\Program Files (x86)\\Google\\Chrome\\chromedriver.exe')
driver.get('http://seekingalpha.com/market-news/top-news')

for i in range(0, 2):
    driver.implicitly_wait(15)
    driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
    time.sleep(20)
I aim to extract specific links from this page, with class = "market_current_title" and HTML like the following:
<a class="market_current_title" href="/news/3223955-dow-wraps-best-week-since-2011-s-and-p-strongest-week-since-2014" sasource="titles_mc_top_news" target="_self">Dow wraps up best week since 2011; S&P in strongest week since 2014</a>
When I used
URL = driver.find_elements_by_class_name('market_current_title')
I ended up with the error "stale element reference: element is not attached to the page document". Then I tried
URL = driver.find_elements_by_xpath("//div[@id='a']//a[@class='market_current_title']")
but it says that there is no such link!
Do you have any idea how to solve this problem?
You're probably trying to interact with an element that has already changed (probably elements above your scroll position, now off screen). Try this answer for some good options on how to overcome this.
Here's a snippet:
from selenium.common.exceptions import TimeoutException
from selenium.webdriver.common.by import By
import selenium.webdriver.support.expected_conditions as EC
import selenium.webdriver.support.ui as ui
# return True if element is visible within 2 seconds, otherwise False
def is_visible(self, locator, timeout=2):
    try:
        ui.WebDriverWait(driver, timeout).until(EC.visibility_of_element_located((By.CSS_SELECTOR, locator)))
        return True
    except TimeoutException:
        return False
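For instance, the helper could gate the link extraction after each scroll (a sketch; the class selector comes from the question, and it assumes is_visible is adapted into a standalone function, i.e. the self parameter is dropped):
# scroll, wait for fresh titles to become visible, then re-find them on every pass
driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
if is_visible("a.market_current_title", timeout=5):
    links = [a.get_attribute("href")
             for a in driver.find_elements_by_css_selector("a.market_current_title")]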
