Python selenium 'list' object has no attribute 'text' error - python

I'm trying to copy text from a comment on a website. The target element's HTML is `<span class="auto-link">yes</span>`, and my Python code is
element = browser.find_elements_by_xpath('//span[@class="auto-link"][1]')
print(element.text)
but I keep getting a 'list' object has no attribute 'text' error. I don't know what I'm doing wrong.

Try this code; I hope it will work for you:
elements = browser.find_elements_by_xpath('//span[@class="auto-link"][1]')
for value in elements:
    print(value.text)

I've never used Selenium, but based on the error and your response, the answer is pretty clear.
When you search for a class, there may be multiple matching elements, so it returns a list of all found matches. Even if you only have a single element with that class, it will still return a list for consistency.
Just grab the first element from the found elements:
elements = browser.find_elements_by_xpath('//span[@class="auto-link"][1]')
# ^ Renamed to reflect the type better
print(elements[0].text)
# ^ Grab the first element

# instead of driver.find_elements_by_xpath() use driver.find_element_by_xpath() to get an individual element of the table
from selenium import webdriver
from webdriver_manager.chrome import ChromeDriverManager
import time

# Google Chrome
driver = webdriver.Chrome(ChromeDriverManager().install())
driver.get("https://testautomationpractice.blogspot.com/")

# Get the number of rows
rows = len(driver.find_elements_by_xpath("//*[@id='HTML1']/div[1]/table/tbody/tr"))
# Get the number of columns
columns = len(driver.find_elements_by_xpath("//*[@id='HTML1']/div[1]/table/tbody/tr[1]/th"))
print("Number of Rows:", rows)
print("Number of Columns:", columns)

# In a web table, indexing starts at 1 instead of 0
for row in range(2, rows + 1):
    for col in range(1, columns + 1):
        value = driver.find_element_by_xpath("//*[@id='HTML1']/div[1]/table/tbody/tr[" + str(row) + "]/td[" + str(col) + "]").text
        print(value, end=' ')
    print()

time.sleep(5)
# Close the browser
driver.close()
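The XPath strings concatenated above can also be built with an f-string, which is easier to read. A minimal, browser-free sketch of just the string-building (the helper name `cell_xpath` is hypothetical; the 'HTML1' id comes from the demo page above):

```python
def cell_xpath(row: int, col: int) -> str:
    # Build the XPath for one cell of the demo table ('HTML1' is the
    # container id on the practice page used in the answer above)
    return f"//*[@id='HTML1']/div[1]/table/tbody/tr[{row}]/td[{col}]"

# The same locator the loop above concatenates by hand:
print(cell_xpath(2, 1))  # //*[@id='HTML1']/div[1]/table/tbody/tr[2]/td[1]
```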

This will work when you are looking for more than one element; it takes the first element that matches the XPath:
element = browser.find_elements_by_xpath('//span[@class="auto-link"][1]')[0].get_attribute("innerHTML")
print(element)
This is when you are looking only for one:
element = browser.find_element_by_xpath('//span[@class="auto-link"]').get_attribute("innerHTML")
print(element)
The output:
>>>yes

First write the XPath of the span you are working with, then add the index number at the end of the XPath, but within it, as shown below.
from selenium import webdriver

driver = webdriver.Firefox()
driver.get("http://www.example.org")
element = driver.find_element_by_xpath('//span[@class="auto-link"][1]')
element.click()
print(element.text)
[1] is the index number of the value I want to access.

Don't use find_elements_by_xpath; use find_element_by_xpath instead.

There are 10 elements matched by this selector; this prints all of them:
roles = driver.find_elements_by_xpath("//label[@class='container-checkmark disabled']")
for role in roles:
    print(role.text)

Related

Does find_element return only the first element in Selenium?

this is part of the code
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()
driver.get("link")
wait_page_to_load()
scroll_all_the_down()
comment_elems = driver.find_element(By.XPATH, '//*[@id="content-text"]')
len(comment_elems)  # = 1
The two functions in the middle work just fine; I can see all the comments, and I verified the path, but find_element returns only the first element.
I tried this:
comment_elems = driver.find_element(By.XPATH, '//*[@id="content-text"]')
but I need one that returns all the elements.
To find multiple elements, use driver.find_elements(By.XPATH, '//*[@id="content-text"]').
Details here: https://selenium-python.readthedocs.io/locating-elements.html
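The difference is easy to demonstrate without a browser using the stdlib's xml.etree.ElementTree, whose find()/findall() mirror find_element/find_elements. This snippet is a stand-in for Selenium, not Selenium itself, and the markup is made up:

```python
import xml.etree.ElementTree as ET

# Made-up stand-in for the page: two comment spans with the same marker
doc = ET.fromstring(
    '<body>'
    '<span id="content-text">first comment</span>'
    '<span id="content-text">second comment</span>'
    '</body>'
)

# find() is like find_element: it returns only the first match
first = doc.find(".//span[@id='content-text']")
print(first.text)  # first comment

# findall() is like find_elements: every match, as a list
all_matches = doc.findall(".//span[@id='content-text']")
print(len(all_matches))  # 2
```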

Selenium not finding list of sections with classes?

I am attempting to get a list of games on
https://www.xbox.com/en-US/live/gold#gameswithgold
According to Firefox's dev console, it seems that I found the correct class: https://i.imgur.com/M6EpVDg.png
In fact, since there are 3 games, I am supposed to get a list of 3 objects with this code: https://pastebin.com/raw/PEDifvdX (the wait is so Selenium can load the page)
But in fact, Selenium says it does not exist: https://i.imgur.com/DqsIdk9.png
I do not get what I am doing wrong. I even tried css selectors like this
listOfGames = driver.find_element_by_css_selector("section.m-product-placement-item f-size-medium context-game gameDiv")
Still nothing. What am I doing wrong?
You are trying to get three different games, so you need to give a different element path for each, or you can use a loop like this one:
i = 1
while i < 4:
    link = f"//*[@id='ContentBlockList_11']/div[2]/section[{i}]/a/div/h3"
    listGames = str(driver.find_element_by_xpath(link).text)
    print(listGames)
    i += 1
You can use this kind of loop anywhere there is a slight difference in the XPath, CSS, or class; it will go over the web elements one by one and get the list of games. Since you are trying to get the name, you need .text, which gets you only the name and nothing else.
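The counter-style while loop can also be written as a comprehension over range(). A browser-free sketch of just the XPath string-building (the 'ContentBlockList_11' id comes from the answer above):

```python
# Build the three per-game locators that the loop above generates one at a time
xpaths = [
    f"//*[@id='ContentBlockList_11']/div[2]/section[{i}]/a/div/h3"
    for i in range(1, 4)  # sections 1..3; XPath indices start at 1
]
print(len(xpaths))  # 3
print(xpaths[-1])   # //*[@id='ContentBlockList_11']/div[2]/section[3]/a/div/h3
```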
Another option with a selector that isn't looped over and changed; also one that's less dependent on the page structure and a little easier to read:
//a[starts-with(@data-loc-link,'keyLinknowgame')]//h3
Here's sample code:
from selenium import webdriver
from selenium.common.exceptions import StaleElementReferenceException

driver = webdriver.Chrome()
url = "https://www.xbox.com/en-US/live/gold#gameswithgold"
driver.get(url)
driver.implicitly_wait(10)

listOfGames = driver.find_elements_by_xpath("//a[starts-with(@data-loc-link,'keyLinknowgame')]//h3")
for game in listOfGames:
    try:
        print(game.text)
    except StaleElementReferenceException:
        pass
If you're after more than just the title, remove the //h3 selection:
//a[starts-with(@data-loc-link,'keyLinknowgame')]
And add whatever additional XPath you want to narrow things down to the content/elements that you're after.

Selenium Python: How can I count the number of tables in a div?

I'm trying to find an element with Xpath but it changes like so:
//*[@id="emailwrapper"]/div/div/table[1]/tbody/tr/td[2]/a
//*[@id="emailwrapper"]/div/div/table[2]/tbody/tr/td[2]/a
//*[@id="emailwrapper"]/div/div/table[3]/tbody/tr/td[2]/a
//*[@id="emailwrapper"]/div/div/table[4]/tbody/tr/td[2]/a
//*[@id="emailwrapper"]/div/div/table[5]/tbody/tr/td[2]/a
//*[@id="emailwrapper"]/div/div/table[6]/tbody/tr/td[2]/a
My current assumption is that the table I'm looking for will always be the last one in the table array, but I want to confirm this by counting the number of tables in the second div. Does anyone know how to do this?
A simple solution is to use the below XPath:
//*[@id='emailwrapper']/div/div/table
Your code should be
lastTable = len(driver.find_elements_by_xpath("//*[@id='emailwrapper']/div/div/table")) - 1
print(lastTable)
Assuming there is at least one element matching the XPath '//*[@id="emailwrapper"]/div/div/table', you can simply do:
driver.find_elements_by_xpath('//*[@id="emailwrapper"]/div/div/table')
It returns a list of all matching elements (an empty list if none are found; unlike find_element_by_xpath, it does not raise NoSuchElementException).
Exact same results but written differently:
from selenium.webdriver.common.by import By
driver.find_elements(By.XPATH, '//*[@id="emailwrapper"]/div/div/table')
After which you can call len() on the list to see how many elements matched.
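The counting logic can be sketched without a browser using the stdlib's ElementTree as a stand-in for the driver. The markup below is a made-up mock of the 'emailwrapper' structure from the question:

```python
import xml.etree.ElementTree as ET

# Stand-in for the page: three tables nested under the wrapper divs
doc = ET.fromstring(
    '<div id="emailwrapper"><div><div>'
    '<table><tr><td>a</td></tr></table>'
    '<table><tr><td>b</td></tr></table>'
    '<table><tr><td>c</td></tr></table>'
    '</div></div></div>'
)

tables = doc.findall('./div/div/table')
print(len(tables))       # 3
last_table = tables[-1]  # negative indexing grabs the last match directly
```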

XPath - Select all <p> elements does not work

I have some basic selenium code and an xpath expression that performs well.
The xpath:
/html/body/div/div/table[2]/tbody/tr/td/div/table/tbody/tr//td/div[5]/table/tbody/tr[2]
selects the section I'm interested in, containing many <p> elements.
However, appending '//p' like so:
/html/body/div/div/table[2]/tbody/tr/td/div/table/tbody/tr//td/div[5]/table/tbody/tr[2]//p
does NOT select only those <p> elements. Instead, what I end up with is a single element.
I'm obviously missing something basic. This is an example of what my code looks like:
#!/usr/bin/env python
from selenium import webdriver
from time import sleep

fp = webdriver.FirefoxProfile()
wd = webdriver.Firefox(firefox_profile=fp)
wd.get("http://someurl.html")
# appending //p here is the problem that finds only a single element
elems = wd.find_element_by_xpath("/html/body/div/div/table[2]/tbody/tr/td/div/table/tbody/tr/td/div[5]/table/tbody/tr[2]//p")
print(elems.get_attribute("innerHTML").encode("utf-8", 'ignore'))
wd.close()
EDIT: solved by using find_elements_by_xpath instead of find_element_by_xpath, as suggested (thanks, Alexander Petrovich, for spotting this).
Don't use such locators; shorten them a bit, to something like //table[@attr='value']/tbody/tr[2]//p.
To select multiple elements, use the find_elements_by_xpath() method (it returns a list of WebElement objects).
You will not be able to call elems.get_attribute() on the list; instead, iterate through it:
elems = wd.find_elements_by_xpath("/your/xpath")
for el in elems:
    print('\n' + el.get_attribute('innerHTML'))

selenium count elements of xpath

I have a webpage with a table containing many Download links, and I want Selenium to click on the last one:
table:
item1 Download
item2 Download
item3 Download
Selenium must click on the Download next to item3.
I used XPath to find all the elements, then tried to get the size of the returned array in this way:
x = bot._driver.find_element_by_xpath("//a[contains(text(),'Download')]").size()
but I always get this error:
TypeError: 'dict' object is not callable
I tried the get_xpath_count method, but it doesn't exist in Selenium for Python!
I thought about another solution, but I don't know how to do it; it is as follows:
x = bot._driver.find_element_by_xpath("(//a[contains(text(),'Download')])[size()-1]")
or
x = bot._driver.find_element_by_xpath("(//a[contains(text(),'Download')])[last()]")
or something else
Use find_elements_by_xpath to get the number of relevant elements:
count = len(driver.find_elements_by_xpath(xpath))
Then click on the last element:
elems = driver.find_elements_by_xpath(xpath)
elems[count - 1].click()
Notice: find_elements_by_xpath is plural in both snippets; indexing the xpath string itself would not give you an element.
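The [last()] predicate the asker guessed at is in fact valid XPath; even the stdlib's limited XPath support in ElementTree understands it, which makes for a quick browser-free check. The markup below is a made-up stand-in for the table of links:

```python
import xml.etree.ElementTree as ET

# Stand-in markup: three sibling Download links
row = ET.fromstring(
    '<div>'
    '<a href="/1">Download</a>'
    '<a href="/2">Download</a>'
    '<a href="/3">Download</a>'
    '</div>'
)

# [last()] selects the final sibling: the link next to item3
last_link = row.find('a[last()]')
print(last_link.get('href'))  # /3
```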
Although get_xpath_count("Xpath_Expression") does not exist in Python, you can use
len(bot._driver.find_elements_by_xpath("//a[contains(text(),'Download')]"))
to get the number of elements, and then iterate through them using
something.xpath("//a[position()=n]")
where
n <= len(bot._driver.find_elements_by_xpath("//a[contains(text(),'Download')]"))
The best way to do this is to use JavaScript and find the elements by class name:
self.bot._driver.execute_script("var e = document.getElementsByClassName('download'); e[e.length-1].click()")
source:
http://de.slideshare.net/adamchristian/javascript-testing-via-selenium
