So I have been using Selenium to open a webpage and wait for a specific element to load. Once it's loaded, I find an element within that first element and get a value from it. But every time I run the code I get a StaleElementReferenceException on the line that says price = float(...). This is weird to me because it didn't crash on the line before, which has the same XPath search. I'm very new to this; any help would be appreciated!
browser.get(url + page)
element = WebDriverWait(browser, 10).until(EC.presence_of_element_located((By.ID, "searchResultsTable")))
price_element = element.find_element_by_xpath("//div[@class='market_listing_row market_recent_listing_row market_listing_searchresult']")
price = float(price_element.find_element_by_xpath("//span[@style='color:white']").text[:1])
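A minimal sketch of one likely fix (an assumption, not from the original post): re-locate the elements right before reading, in case the page re-renders between lookups, and use a relative .// XPath for the inner search, since a bare // searches the whole document rather than just the row:

from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By

browser.get(url + page)
WebDriverWait(browser, 10).until(
    EC.presence_of_element_located((By.ID, "searchResultsTable")))
# re-locate the row right before reading it
row = browser.find_element_by_xpath(
    "//div[@class='market_listing_row market_recent_listing_row market_listing_searchresult']")
# './/' scopes the search to this row instead of the whole page
price = float(row.find_element_by_xpath(".//span[@style='color:white']").text[:1])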
I am trying to click on the "Next page" button in Python-Selenium. The element and its XPath are found and the button is clicked, but after clicking an error is shown:
"StaleElementReferenceException: stale element reference: element is not attached to the page document"
My code so far:
element = WebDriverWait(driver, 20).until(EC.presence_of_element_located((By.XPATH, butn)))
print (element.is_enabled())
while True and element.is_enabled()==True:
    driver.find_element_by_xpath(butn).click()
The error occurs on element.is_enabled()==True after clicking.
Can someone help?
When you locate elements in Selenium, you don't get full copies of the objects, only references to nodes in the browser's DOM. When you click and the browser builds a new DOM, the old references become invalid and you have to find the elements again.
Something like this:

# find first time
element = WebDriverWait(driver, 20).until(EC.presence_of_element_located((By.XPATH, butn)))
print(element.is_enabled())

while element.is_enabled():
    driver.find_element_by_xpath(butn).click()

    # find again after changes in DOM
    element = WebDriverWait(driver, 20).until(EC.presence_of_element_located((By.XPATH, butn)))
    print(element.is_enabled())
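A variant of the same idea (a sketch, not part of the answer above): EC.element_to_be_clickable re-locates the button and checks that it is enabled in a single step on every iteration, so a stale reference is never reused:

from selenium.common.exceptions import TimeoutException

while True:
    try:
        # returns a fresh, enabled element each time, then clicks it
        WebDriverWait(driver, 20).until(
            EC.element_to_be_clickable((By.XPATH, butn))).click()
    except TimeoutException:
        break  # the button is gone or disabled: no more pages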
Given this code:
options = webdriver.ChromeOptions()
options.add_argument("headless")
driver = webdriver.Chrome(options=options)
driver.get('https://covid19.apple.com/mobility')
elements = driver.find_elements_by_css_selector("div.download-button-container a")
csvLink = [el.get_attribute("href") for el in elements]
driver.quit()
At the end, csvLink sometimes has the link and most times not. If I stop at the last line in the debugger, it often fails to have anything in csvLink, but if I manually execute (in the debugger) elements[0].get_attribute('href') the correct link is returned. Every time.
if I replace
csvLink = [el.get_attribute("href") for el in elements]
with a direct call -
csvLink = elements[0].get_attribute("href")
it also fails. But, again, if I'm stopped at the driver.quit() line, and manually execute, the correct link is returned.
Is there a time or path dependency I'm unaware of in using Selenium?
I'm guessing it has to do with how and when the JavaScript loads the link: Selenium grabs the element before the JavaScript has populated its href attribute value. Try explicitly waiting for the selector, something like:
(
    WebDriverWait(driver, 20)
    .until(EC.presence_of_element_located(
        (By.CSS_SELECTOR, "div.download-button-container a[href]")))
    .click()
)
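Since the goal here is the href itself rather than a click, an alternative (a sketch, assuming the same selector) is to wait until the attribute is actually non-empty and take its value directly; WebDriverWait returns whatever truthy value the callable produces:

csvLink = WebDriverWait(driver, 20).until(
    lambda d: d.find_element_by_css_selector(
        "div.download-button-container a").get_attribute("href"))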
Reference:
Selenium - wait until element is present, visible and interactable
How do I target elements with an attribute that has any value in CSS?
Also, if you curl https://covid19.apple.com/mobility, my suspicion would be that the element exists (maybe), but the href is blank, since the attribute is only filled in once the JavaScript runs.
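The same check can be done from Python (a sketch mirroring the curl suggestion; requests is an assumption, any plain HTTP client works, and no JavaScript is executed):

import requests

html = requests.get("https://covid19.apple.com/mobility").text
# is the container even present in the static HTML?
print("download-button-container" in html)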
Set-up
I'm trying to log in to a website using Python + Selenium.
My code to load the website is:
browser = webdriver.Firefox(
executable_path='/mypath/to/geckodriver')
url = 'https://secure6.e-boekhouden.nl/bh/'
browser.get(url)
Problem
Selenium cannot locate the element containing the account and password fields.
For example, for the field 'Gebruikersnaam',
browser.find_element_by_id('txtEmail')
browser.find_element_by_xpath('//*[@name="txtEmail"]')
browser.find_element_by_class_name('INPUTBOX')
all give NoSuchElementException: Unable to locate element.
Even worse, Selenium cannot find the body element on the page,
browser.find_element_by_xpath('/html/body')
gives NoSuchElementException: Unable to locate element: /html/body.
I'm guessing something on the page is either blocking Selenium (maybe the 'secure6' in the url) or is written in a language/form Selenium cannot handle.
Any suggestions?
All the elements are inside a frame, which is why it is throwing the NoSuchElementException. Try switching to the frame before any actions, as shown below.
browser = webdriver.Firefox(
executable_path='/mypath/to/geckodriver')
url = 'https://secure6.e-boekhouden.nl/bh/'
browser.get(url)
browser.switch_to.frame(browser.find_element_by_id("mainframe"))
browser.find_element_by_id('txtEmail')
browser.find_element_by_xpath('//*[@name="txtEmail"]')
browser.find_element_by_class_name('INPUTBOX')
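When you later need elements outside the frame again, you can switch back to the top-level document (standard Selenium API):

browser.switch_to.default_content()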
I'm learning how to use selenium and I'm stuck on figuring out how to scroll down in a website to verify an element exists.
I tried using the methods that were found in this question:
Scrolling to element using webdriver?
but Selenium won't scroll down the page. Instead it gives me an error:
"selenium.common.exceptions.NoSuchElementException: Message: Unable to locate element: element"
Here's the code I am using.
moveToElement:
element = driver.find_element_by_xpath('xpath')
actions = ActionChains(driver)
actions.move_to_element(element).perform()
Scrolling into View
element = driver.find_element_by_xpath('xpath')
driver.execute_script("arguments[1].scrollIntoView();", element)
The whole code:
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.action_chains import ActionChains
driver = webdriver.Firefox()
driver.get("https://www.linkedin.com/")
element = driver.find_element_by_xpath('/html/body/div/main/div/div[1]/div/h1/img')
element = driver.find_element_by_xpath('//*[#id="login-email"]')
element.send_keys('')
element = driver.find_element_by_xpath('//*[#id="login-password"]')
element.send_keys('')
element = driver.find_element_by_xpath('//*[#id="login-submit"]')
element.click()
element = driver.find_element_by_xpath('')
actions = ActionChains(driver)
actions.move_to_element(element).perform()
There are two aspects to your problem:
Does the element exist?
Is the element displayed on the page?
Does the element exist?
It may happen that the element exists on the page [i.e. it is part of the DOM] but is not immediately available for further Selenium actions because it is not visible (hidden); it only becomes visible after some action, or it may get displayed on scrolling down the page.
Your code is throwing an exception here:
element = driver.find_element_by_xpath('xpath')
as WebDriver is not able to find the element using the mentioned XPath. Once you fix this, you can move forward to the next part.
Is the element displayed on the page?
Once you fix the above issue, you should check whether the element is displayed or not. If it's not displayed but available on scroll, then you can use code like:
if not element.is_displayed():
    driver.execute_script("arguments[0].scrollIntoView();", element)
Prefer using the ActionChains class for very specific mouse actions.
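For example (a sketch of the ActionChains usage referred to above, reusing the element variable from earlier):

from selenium.webdriver.common.action_chains import ActionChains

actions = ActionChains(driver)
# hover over the element, then click it, as one chained gesture
actions.move_to_element(element).click(element).perform()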
Update:
If your application uses lazy loading and the element you are trying to find becomes available on scroll, you can try something like this.
You have to import the exception:
from selenium.common.exceptions import NoSuchElementException
and then create a recursive function that scrolls if the element is not found, something like this:
def search_element():
    try:
        elem = driver.find_element_by_xpath("your")
        return elem
    except NoSuchElementException:
        driver.execute_script("window.scrollTo(0, Math.max(document.documentElement.scrollHeight, document.body.scrollHeight, document.documentElement.clientHeight));")
        return search_element()
I am not sure that using recursion to find the element is a good idea here; also, I have never worked with Python, so you will need to watch out for the syntax.
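For reference, an iterative variant with a scroll limit avoids unbounded recursion if the element never appears (a sketch, not part of the answer above; max_scrolls is an arbitrary cap):

def search_element(max_scrolls=10):
    for _ in range(max_scrolls):
        try:
            return driver.find_element_by_xpath("your")
        except NoSuchElementException:
            driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
    return None  # give up after max_scrolls attempts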
Hmm, maybe this might help. Just send the PAGE_DOWN key; if you are sure the element definitely exists, then this would work:
from selenium.webdriver.common.keys import Keys
from selenium.webdriver import ActionChains
import time
while True:
    ActionChains(driver).send_keys(Keys.PAGE_DOWN).perform()
    time.sleep(2)  # 2 seconds
    try:
        element = driver.find_element_by_xpath("your_element_xpath")
        break
    except:
        continue
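Note that the loop above runs forever if the element never shows up; a bounded variant (an assumption, not part of the original answer) simply caps the number of page-downs:

from selenium.common.exceptions import NoSuchElementException

for _ in range(20):  # give up after 20 page-downs
    ActionChains(driver).send_keys(Keys.PAGE_DOWN).perform()
    time.sleep(2)
    try:
        element = driver.find_element_by_xpath("your_element_xpath")
        break
    except NoSuchElementException:
        continue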
I'm trying to use Selenium & Python to scrape a website (http://epl.squawka.com/english-premier-league/06-03-2017/west-ham-vs-chelsea/matches). I am using the webdriver to click a heading, wait for the new information to load, and then click on an object before scraping the resulting data (which loads after the clicking). My problem is that I keep getting an 'Unable to locate element' error.
I've taken a screenshot at this point and can physically see the element, and I've also printed the entire source code and can see that the element is there.
driver.find_element_by_id("mc-stat-shot").click()
time.sleep(3)
driver.save_screenshot('test.png')

try:
    element = WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.ID, "svg")))
finally:
    driver.find_element_by_xpath("//g[3]/circle").click()

time.sleep(1)
goalSource = driver.page_source
goalBsObj = BeautifulSoup(goalSource, "html.parser")
#print(goalBsObj)
print(goalBsObj.find(id="tt-mins").get_text())
print(goalBsObj.find(id="tt-event").get_text())
print(goalBsObj.find(id="tt-playerA").get_text())
and the result is an error:
"selenium.common.exceptions.NoSuchElementException: Message: Unable to locate element: //g[3]/circle"