I have the xpath of an element on a website but I'm trying to get the aria-label value of that element.
# NO SUCCESS: print(WebDriverWait(browser, 20).until(EC.visibility_of_element_located((By.XPATH, "element_xpath_you_found"))).get_attribute("aria-label"))
# NO SUCCESS: first_rev = browser.find_element(By.XPATH, "/html/body/span/g-lightbox/div[2]/div[3]/span/div/div/div/div[2]/div[1]/div/div[2]/div[1]/div[1]/div[3]/div[1]/g-review-stars/span")
first_rev = browser.find_element_by_xpath("/html/body/span/g-lightbox/div[2]/div[3]/span/div/div/div/div[2]/div[1]/div/div[2]/div[1]/div[1]/div[3]/div[1]/g-review-stars/span").click()
aria_label = first_rev.find_element_by_css_selector('span').get_attribute("aria-label")
print(aria_label)
In the browser, when I inspect the element I get this HTML:
<span class="fTKmHE99XE4__star fTKmHE99XE4__star-s" aria-label="Rated 3.0 out of 5," style=""><span style="width:42px"></span></span>
However, could the problem be that this element is inside a pop-up on the page? The page source doesn't show any HTML for any of the elements in the pop-up.
click() doesn't have a return value, so it returns None, which makes first_rev None; split it into two statements. You also don't need the first_rev.find_element call, since that would actually fetch a child element rather than the element itself.
first_rev = browser.find_element_by_xpath("/html/body/span/g-lightbox/div[2]/div[3]/span/div/div/div/div[2]/div[1]/div/div[2]/div[1]/div[1]/div[3]/div[1]/g-review-stars/span")
first_rev.click()
aria_label = first_rev.get_attribute("aria-label")
print(aria_label)
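For newer Selenium releases, where the find_element_by_* helpers have been removed, the same idea can be written with the By API. A minimal sketch, assuming browser is an already-initialised WebDriver that is on the page from the question (the XPath and the 20-second timeout are taken from the question):
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# wait until the star element inside the pop-up is actually rendered, then read its label
first_rev = WebDriverWait(browser, 20).until(
    EC.visibility_of_element_located((By.XPATH, "/html/body/span/g-lightbox/div[2]/div[3]/span/div/div/div/div[2]/div[1]/div/div[2]/div[1]/div[1]/div[3]/div[1]/g-review-stars/span"))
)
first_rev.click()
print(first_rev.get_attribute("aria-label"))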
Related
Trying to scrape a website, I created a loop and was able to locate all the elements. My problem is that the next button's id changes on every page, so I cannot use the id as a locator.
This is the next button on page 1:
<a rel="nofollow" id="f_c7" href="#" class="nextLink jasty-link"></a>
And this is the next button on page 2:
<a rel="nofollow" id="f_c9" href="#" class="nextLink jasty-link"></a>
Idea:
next_button = browser.find_elements_by_class_name("nextLink jasty-link")
next_button.click
I get this error message:
Message: no such element: Unable to locate element
The problem here might be that there are two next buttons on the page.
So I tried to create a list but the list is empty.
next_buttons = browser.find_elements_by_class_name("nextLink jasty-link")
print(next_buttons)
Any idea on how to solve my problem? Would really appreciate it.
This is the website:
https://fazarchiv.faz.net/faz-portal/faz-archiv?q=Kryptow%C3%A4hrungen&source=&max=10&sort=&offset=0&_ts=1657629187558#hitlist
There are two issues in my opinion:
Depending on where you access the site from, there is a cookie banner that will intercept the click, so you may have to accept it first:
browser.find_element_by_class_name('cb-enable').click()
To locate a single element (either of the two next buttons, it does not matter which), use browser.find_element() instead of browser.find_elements().
To select your element by multiple class names, use XPath:
next_button = browser.find_element(By.XPATH, '//a[contains(@class, "nextLink jasty-link")]')
or css selectors:
next_button = browser.find_element(By.CSS_SELECTOR, '.nextLink.jasty-link')
Note: to avoid DeprecationWarning: find_element_by_* commands are deprecated. Please use find_element() instead, additionally import By with from selenium.webdriver.common.by import By.
You can't locate elements by multiple class names with find_elements_by_class_name, so you can use find_elements_by_css_selector instead.
next_buttons = browser.find_elements_by_css_selector(".nextLink.jasty-link")
print(next_buttons)
You can then loop through the list and click the buttons:
next_buttons = browser.find_elements_by_css_selector(".nextLink.jasty-link")
for button in next_buttons:
    button.click()
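Putting the two answers together, a rough sketch in the Selenium 4 style; it assumes browser is already on the hit-list page and that the cookie-banner and next-link selectors above are still valid:
from selenium.common.exceptions import TimeoutException
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# accept the cookie banner if it is shown for this session
try:
    WebDriverWait(browser, 5).until(
        EC.element_to_be_clickable((By.CLASS_NAME, "cb-enable"))
    ).click()
except TimeoutException:
    pass

# keep clicking the first "next" link, re-locating it on every new page
while True:
    try:
        next_button = WebDriverWait(browser, 5).until(
            EC.element_to_be_clickable((By.CSS_SELECTOR, ".nextLink.jasty-link"))
        )
    except TimeoutException:
        break  # no next link found, assume the last page was reached
    next_button.click()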
Try the below XPath:
//a[contains(@class, 'step jasty-link')]/following-sibling::a
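For completeness, a hedged usage of that sibling XPath; it assumes the numbered pager entries carry the step jasty-link classes and that the sibling after the current step is the next page, so the first match may still need adjusting:
from selenium.webdriver.common.by import By

next_button = browser.find_element(By.XPATH, "//a[contains(@class, 'step jasty-link')]/following-sibling::a")
next_button.click()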
I have an input tag HTML element that Selenium (Python) fails to identify (not because of the wait). On a web page with a form (named Form1), I want to extract the text in one of its fields. This is the HTML element when I inspect the elements in Chrome:
Input Element:
<input name="txtSerialNo" type="text" readonly="readonly" id="txtSerialNo" class="tbFormRO" style="width:160px;position:absolute;left:90px;top:7px;text-align:center;">
This is the full XPath when I right-click on the element and copy its XPath: /html/body/form/div[9]/input[1]
The HTML Element
There isn't any text on it, so I tried the approaches below and none of them worked. I also tried an implicit wait and WebDriverWait; they are irrelevant here and did not work.
driver.maximize_window()
driver.find_element_by_xpath('/html/body/form/div[9]/input[1]')
driver.find_element_by_id('txtSerialNo')
driver.find_element_by_name("txtSerialNo")
driver.find_element_by_xpath("//input[#id='txtSerialNo']")
They all return this error:
NoSuchElementException: Message: no such element: Unable to locate element: {"method":"css selector","selector":"[id="txtSerialNo"]"}
(Session info: chrome=91.0.4472.114)
My question is: I can see that when I inspect the element, the text I want to retrieve is in the Property tab, under input#txtSerialNo.tbFormRO
Under Property
I am using a for loop to gather all input elements, but I don't know how to extract that "value" property under the "category" of input#txtSerialNo.tbFormRO in the Property tab when I inspect the element. Sorry, I don't have solid CSS/HTML knowledge.
The Text I Want to Extract
I tried the below without success:
for inp in driver.find_elements_by_xpath('//form[@name="Form1"]//input'):
    for k in inp.get_property('attributes')[0].keys():
        print(inp.get_attribute(k))
for inp in driver.find_elements_by_xpath('//form[@name="Form1"]//input'):
    print(inp.value_of_css_property('value'))
# get_property(input#txtSerialNo.tbFormRO.text)
# .get_attribute('text')
# .get_attribute("innerHTML")
# .get_attribute('value')
# .get_property('input#txtSerialNo.tbFormRO.value')
I think you are looking for .get_attribute(). Based on the image, let's adjust the XPath to '//input[@name="txtSerialNo"]':
for inp in driver.find_elements_by_xpath('//input[@name="txtSerialNo"]'):
    print(inp.get_attribute('value'))
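If the serial number is filled in by script only after the page finishes loading, get_attribute('value') can still come back empty on the first read. A sketch that waits for a non-empty value; the 10-second timeout and the use of get_property are assumptions, not part of the original answer:
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait

serial_input = WebDriverWait(driver, 10).until(
    lambda d: d.find_element(By.NAME, "txtSerialNo")
)
# get_property reads the live DOM property, which reflects script updates
WebDriverWait(driver, 10).until(lambda d: serial_input.get_property("value") != "")
print(serial_input.get_property("value"))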
On this website, I'm trying to find an element based on its XPATH, but the XPATH keeps changing. What's the next best alternative?
Snippet from website
<button class="KnkXXg vHcWfw T1alpA kiPMng AvEAGQ vM2UTA DM1_6g _-kwXsw Mqe1NA SDIrVw edrpZg" type="button" aria-expanded="true"><span class="nW7nAQ"><div class="VpIG5Q"></div></span></button>
XPATH:
//*[#id="__id15"]/div/div/div[1]/div[2]/div
#Sometimes id is a number between 15-18
//*[#id="__id23"]/div/div/div[1]/div[2]/div
#Sometimes id is a number between 13-23
Here's how I use the code:
element = WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.XPATH, """//*[@id="__id3"]/div/div/div[1]/div[2]/div/div/div/button"""))).click()
I've tried clicking the element by finding the button class, but for whatever reason it won't do anything.
element = WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.CLASS_NAME, "KnkXXg vHcWfw T1alpA kiPMng AvEAGQ vM2UTA DM1_6g _-kwXsw Mqe1NA SDIrVw edrpZg"))).click()
If part of the id keeps changing, you can use contains() in the XPath:
//*[contains(@id, "__id")]/div/div/div[1]/div[2]/div
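Applied to the click from the question, a sketch; the rest of the path (/div/div/div[1]/div[2]/div/div/div/button) is carried over from the question and is an assumption about the page structure:
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# wait for the button to be clickable rather than merely present, then click it
WebDriverWait(driver, 10).until(
    EC.element_to_be_clickable(
        (By.XPATH, '//*[contains(@id, "__id")]/div/div/div[1]/div[2]/div/div/div/button')
    )
).click()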
I want to download CSV files from a website; this is why I use the click() command from Selenium.
Elements have the following structure (shown in the picture referenced below).
Code:
csvList = browser.find_elements_by_class_name("csv")
for l in csvList:
    if 'error' not in l.text and 'after' not in l.text:
        # get link here
        l.click()
Question
My question is: how can I get the download link from the element before I download it, i.e. the link pointed to by the black arrow in the picture?
When I use l.get_attribute('href') it gives me None.
For each element l in csvList, get the parent element by xpath and then get that element's href:
csvList = browser.find_elements_by_class_name("csv")
for l in csvList:
    if 'error' not in l.text and 'after' not in l.text:
        currentLink = l.find_element_by_xpath("..")
        href = currentLink.get_attribute("href")
Note: If you do a .click() in this loop and the link takes you to a new page, you will get a StaleElementReferenceException for each click after the first. In that case, extract each href and save it to a collection first, then navigate to each href (URL) in the collection, as sketched below.
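A sketch of that collect-then-navigate pattern, assuming (as in the answer above) that the parent <a> of each csv element carries the download URL:
hrefs = []
for l in browser.find_elements_by_class_name("csv"):
    if 'error' not in l.text and 'after' not in l.text:
        # ".." selects the parent <a> of the csv element
        hrefs.append(l.find_element_by_xpath("..").get_attribute("href"))

# visit the collected URLs afterwards so no element reference goes stale
for href in hrefs:
    browser.get(href)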
The div does not have the href attribute; its parent "a" tag does. I would use XPath:
//a[div[@class='csv']]
I am parsing a web page organized like this:
<nav class="sidebar-main">
<div class="sidebar">Found 3 targets</div>
<ul><li><span>target1</span></li>
<li><a href="#target2" ><span>target2</span></a></li>
<li><span>target3</span></li></ul>
</nav>
My goal is to loop through each list element, clicking each one in the process:
sidebar = browser.find_element_by_class_name('sidebar-main')
elementList = sidebar.find_elements_by_tag_name("li")
for sample in elementList:
    browser.implicitly_wait(5)
    run_test1 = WebDriverWait(browser, 5).until(
        EC.presence_of_element_located((By.CLASS_NAME, 'sidebar-main'))
    )
    sample.click()
I keep getting the error:
Message: The element reference of <li> is stale; either the element is no longer attached to the DOM or the page has been refreshed.
Right now only one link is clicked; obviously Selenium cannot locate the subsequent elements after the page refreshes. How do I get around this?
Once you click on the first link, either navigation to a new page happens or the page is refreshed. You need to keep track of the element count, find the list elements again, and then click on the required element. If the page changes, you need to navigate back to the original page as well.
You can try something like below
sidebar = browser.find_element_by_class_name('sidebar-main')
elementList = sidebar.find_elements_by_tag_name("li")
for i in range(len(elementList)):
    element = browser.find_element_by_class_name('sidebar-main').find_elements_by_tag_name("li")[i]
    element.click()
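If each click navigates away from the listing, a variant of the same idea that re-locates the list and goes back after every click; the explicit wait and the browser.back() call are assumptions about the page flow, not part of the original answer:
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

count = len(browser.find_element(By.CLASS_NAME, 'sidebar-main').find_elements(By.TAG_NAME, 'li'))
for i in range(count):
    sidebar = WebDriverWait(browser, 5).until(
        EC.presence_of_element_located((By.CLASS_NAME, 'sidebar-main'))
    )
    sidebar.find_elements(By.TAG_NAME, 'li')[i].click()
    browser.back()  # return to the listing so the next <li> can be located fresh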