How to get the number of rows of a table? [duplicate] - python

This question already has answers here:
How to count no of rows in table from web application using selenium python webdriver
(6 answers)
Closed 3 years ago.
I want to get the number of rows of a table on a web page using Selenium with Python.
I tried the approach described here: How to count no of rows in table from web application using selenium python webdriver
rows=len(driver.find_element_by_xpath("//table[@id='SheetContentPlaceHolder_GridView1']/tbody/tr"))
The result I get is the following:
rows=len(driver.find_element_by_xpath("//table[@id='SheetContentPlaceHolder_GridView1']/tbody/tr"))
TypeError: object of type 'FirefoxWebElement' has no len()
I don't understand what I'm doing wrong.
Thanks for your help.

The method driver.find_element_by_xpath(...) returns only the first matching element, i.e. the first row of the table.
Change the line to driver.find_elements_by_xpath(...), which returns a list of elements. So the new code will be:
rows = driver.find_elements_by_xpath("//table[@id='SheetContentPlaceHolder_GridView1']/tbody/tr")
number_of_rows = len(rows)

find_element_by_xpath() returns a single element; as you are using Firefox, the first matching FirefoxWebElement was returned, and a WebElement cannot be passed to len(). Hence you see the error:
TypeError: object of type 'FirefoxWebElement' has no len()
So instead of find_element_by_xpath() you need to use find_elements_by_xpath(), which returns a list.
Ideally, to extract the number of rows of the table using Selenium and Python, you should induce WebDriverWait for visibility_of_all_elements_located() and can use the following solution:
Using XPATH:
print(len(WebDriverWait(driver, 20).until(EC.visibility_of_all_elements_located((By.XPATH, "//table[@id='SheetContentPlaceHolder_GridView1']/tbody/tr")))))
Note: You have to add the following imports:
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
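Putting it together, here is a minimal sketch of the waiting approach, assuming a Firefox driver and the table id from the question; the URL is a placeholder, not part of the original post:
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Firefox()
driver.get("https://example.com/page-with-table")  # placeholder URL, not from the question

# Wait until all rows of the table are visible, then count them
rows = WebDriverWait(driver, 20).until(
    EC.visibility_of_all_elements_located(
        (By.XPATH, "//table[@id='SheetContentPlaceHolder_GridView1']/tbody/tr")
    )
)
print("Number of rows:", len(rows))

driver.quit()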

Related

Get element value from dynamic website using Selenium and Python [duplicate]

This question already has answers here:
How can I get text of an element in Selenium WebDriver, without including child element text?
(5 answers)
How to get text with Selenium WebDriver in Python
(9 answers)
How to get text element in html head by selenium?
(2 answers)
Closed 6 months ago.
I'm trying to get the value of the selling price of AMZN directly from the trading platform plus500. The value changes continuously, so I have to use Selenium. The code I'm using is this one:
driver.get("https://app.plus500.com/trade/amazon")
# get AMZN vending price
Sell = driver.find_elements(By.CLASS_NAME, value="sell")
print(Sell)
The html from the source is this:
<div class="sell" data-no-trading="false" id="_win_plus500_bind873" data-show="true">126.28</div>
I need to scrape the value (in this case 126.28) every time it changes.
If it is needed, I created a dummy Plus500 account for you: username "myrandomcode@gmail.com", password "MyRandomCode87".
To extract the value of the selling price of AMZN directly from the trading platform plus500, i.e. the text 126.28, as the element is dynamic you need to induce WebDriverWait for visibility_of_element_located() and you can use the following locator strategy:
Using XPATH:
print(WebDriverWait(driver, 20).until(EC.visibility_of_element_located((By.XPATH, "//div[starts-with(@class, 'section-table-body')]//span[text()='Amazon']//following::div[2]"))).text)
Note: You have to add the following imports:
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
You can find a relevant discussion in How to retrieve the text of a WebElement using Selenium - Python
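Since the value has to be scraped every time it changes, one hedged option is a simple polling loop; the class-based locator from the question is reused here, and the 0.5-second interval and the previous variable are assumptions, not part of the original answer (driver is assumed to already be on the trading page):
import time
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC

locator = (By.CLASS_NAME, "sell")  # the question's locator; the answer's XPath works the same way
previous = None
while True:
    # Re-read the element on every pass because the page updates it dynamically
    current = WebDriverWait(driver, 20).until(
        EC.visibility_of_element_located(locator)
    ).text
    if current != previous:
        print(current)
        previous = current
    time.sleep(0.5)  # assumed polling interval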

Python Selenium - Attempting to write wait condition for when text changes in table element

I am attempting to write a selenium script to search for terms on a webpage, then select them from a table.
The table is dynamic and changes when elements are entered into the search field. Typically when I have written Selenium code in the past, I have just used wait statements to wait for some element on the page to load before continuing. Here, I specifically need to wait until the element I am looking for appears in the table, and then select it.
Here is what I currently have written, where tableElement is the table I am attempting to search through, and userID is the input I am hoping to find:
tableElement = self.driver.find_element_by_xpath('X-PATH_TO_ELEMENT')
ui.WebDriverWait(self.driver, 15).until(
    EC.text_to_be_present_in_element(tableElement, userID)
)
When running this code, I receive the following error message:
find_element() argument after * must be an iterable, not WebElement
As far as I am aware, this should be the correct syntax for the method I am attempting to call. Any help would be appreciated! Please let me know if I need to elaborate on any details.
Expected conditions basically use find_element and find_elements internally, so they take a locator.
In your code you have provided a WebElement to the expected condition instead of a locator.
You can use the code below to use the expected condition in your code:
WebDriverWait(driver, 30).until(
    EC.visibility_of_all_elements_located((By.XPATH, "xpath of your element"))
)
Note: you have to add the imports below to your solution:
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
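Since the goal is to wait for specific text to appear in the table, a hedged sketch that passes a locator tuple (rather than a WebElement) to text_to_be_present_in_element could look like this; 'X-PATH_TO_ELEMENT' is the question's own placeholder, and self.driver and userID come from the question's code:
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC

table_locator = (By.XPATH, 'X-PATH_TO_ELEMENT')  # placeholder XPath from the question
WebDriverWait(self.driver, 15).until(
    EC.text_to_be_present_in_element(table_locator, userID)
)
# Once the wait succeeds, the row containing userID can be located and selected.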

Get the first element on a web page with an individual XPath [duplicate]

This question already has answers here:
How to retrieve the title attribute through Selenium using Python?
(3 answers)
Closed 3 years ago.
I am running a little Python Selenium script and I want to access attributes from the first element on this site: https://www.mydealz.de/gruppe/spielzeug. Every few minutes the first element is different and therefore has a different XPath identifier.
What are the possibilities to always access this first element, even though it has different ids/XPaths each time? I mean the first result.
Thanks a lot in advance!
I've kept an eye on the website for the last 15 minutes, but for me the page has not changed.
Nevertheless, I tried to scrape the data with BS4 (which you could also feed with the page source from Selenium's current browser session); it should always return the first element first.
from bs4 import BeautifulSoup
import requests

data = requests.get('https://www.mydealz.de/gruppe/spielzeug')
soup = BeautifulSoup(data.text, "html.parser")
price_info = soup.select(".cept-tp")

for element in price_info:
    for child in element:
        print(child)
Of course this is just for the price, but you can apply the same logic for the other elements.
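If only the first element is needed, a hedged variant using select_one (which returns just the first match for the same .cept-tp selector) could be:
from bs4 import BeautifulSoup
import requests

data = requests.get('https://www.mydealz.de/gruppe/spielzeug')
soup = BeautifulSoup(data.text, "html.parser")

# select_one returns the first matching element, or None if nothing matches
first_price = soup.select_one(".cept-tp")
if first_price is not None:
    print(first_price.get_text(strip=True))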
To print the first title you have to induce WebDriverWait for the desired visibility_of_element_located() and you can use either of the following Locator Strategies:
Using CSS_SELECTOR:
print(WebDriverWait(driver, 20).until(EC.visibility_of_element_located((By.CSS_SELECTOR, "div.threadGrid div.threadGrid-title.js-contextual-message-placeholder>strong.thread-title>a"))).get_attribute("title"))
Using XPATH:
print(WebDriverWait(driver, 20).until(EC.visibility_of_element_located((By.XPATH, "//div[@class='threadGrid']//div[@class='threadGrid-title js-contextual-message-placeholder']/strong[@class='thread-title']/a"))).text)
Note: You have to add the following imports:
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
Console output of two back-to-back executions:
[Mediamarkt #Ebay.de] diverse Gravitrax Erweiterungen günstig!
[Mediamarkt #Ebay.de] diverse Gravitrax Erweiterungen günstig!
As per the documentation:
get_attribute(name): Gets the given attribute or property of the element.
text: Returns the text of the element.
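A short, hedged illustration of the difference, reusing an abbreviated form of the selector from above:
link = WebDriverWait(driver, 20).until(
    EC.visibility_of_element_located((By.CSS_SELECTOR, "strong.thread-title > a"))
)
print(link.get_attribute("title"))  # value of the title attribute
print(link.text)                    # visible text of the element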

How to iterate through an XPath string (table rows -- tr[1], tr[2], tr[3]...) using Python?

I have this XPath:
"//*[@id="example"]/tbody/tr[2]/td[1]"
It has to be processed as a string by my find_element() algorithm,
but I need to iterate upward from tr[2] (e.g. tr[2], tr[3], tr[4]...) so that my web-scraping algorithm can expand a clickable button in an HTML table.
What are some strategies / implementations to accomplish this?
(I'm using the Selenium python library for the webscraper)
You can get a collection of all desired elements (rows) using the code below:
driver.find_elements_by_xpath("//*[@id='example']/tbody//tr/td[1]")
Then you can iterate over the collection of elements and perform the desired operation.
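A hedged sketch of that iteration, assuming clicking each row's first cell is the desired operation:
cells = driver.find_elements_by_xpath("//*[@id='example']/tbody//tr/td[1]")
for cell in cells:
    # Perform the desired operation on each row's first cell, e.g. click it
    cell.click()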
If you want to loop through it, just make your XPath dynamic like this.
I am assuming you have 5 rows:
for i in range(1, 6):
    driver.find_element_by_xpath("//*[@id='example']/tbody/tr[" + str(i) + "]/td[1]").click()
Or, using WebDriverWait, it would be:
wait = WebDriverWait(driver, 30)
for i in range(1, 6):
    wait.until(EC.element_to_be_clickable((By.XPATH, "//*[@id='example']/tbody/tr[" + str(i) + "]/td[1]"))).click()
Note that in this case you will have to import these:
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

How to find element based on what its value ends with in Selenium?

I am dealing with a situation where, every time I log in, a report is displayed in a table whose ID is dynamically generated as random text ending with "table".
I am automating this table with the Selenium Python WebDriver. The syntax is:
driver.find_element_by_xpath('//*[@id="isc_43table"]/tbody/tr[1]/td[11]').click()
Help me edit this syntax to match the table whose id ends with "table"
(only one table is generated).
The ends-with() XPath constraint function is part of XPath v2.0, but as per the current implementation Selenium supports XPath v1.0.
As per the HTML you have shared, to identify the element you can use either of the following locator strategies:
XPath using contains():
driver.find_element_by_xpath("//*[contains(@id,'table')]/tbody/tr[1]/td[11]").click()
Further, as you have mentioned that table whose ID is dynamically generated so to invoke click() on the desired element you need to induce WebDriverWait for the element to be clickable and you can use the following solution:
WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH, "//*[contains(@id,'table')]/tbody/tr[1]/td[11]"))).click()
Alternatively, you can also use a CSS selector:
driver.find_element_by_css_selector("[id$='table']>tbody>tr>td:nth-of-type(11)").click()
Again, you can also use the CSS selector with WebDriverWait:
WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.CSS_SELECTOR, "[id$='table']>tbody>tr>td:nth-of-type(11)"))).click()
Note: You have to add the following imports:
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
I hope either of these 2 will work for you:
driver.find_element_by_xpath("//table[ends-with(@id,'table')]/tbody/tr[1]/td[11]").click()
OR
driver.find_element_by_xpath("//table[substring(@id,'table')]/tbody/tr[1]/td[11]").click()
If that doesn't work, remove the tbody tag from the XPath.
For such situations, when you face randomly generated ids, you can use the functions below in an XPath expression:
1) contains()
2) starts-with()
3) ends-with()
4) substring()
Syntax:
//table[ends-with(@id,'table')]
//h4/a[contains(text(),'SAP M')]
//div[substring(@id,'table')]
You need to identify which element has that id, whether it's a div, an input or a table. I think it's a table.
You can try the XPath below to simulate ends-with():
'//table[substring(@id, string-length(@id) - string-length("table") + 1) = "table"]//tr[1]/td[11]'
You can also use CSS selector:
'table[id$="table"] tr>td:nth-of-type(11)'
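A hedged usage sketch plugging that XPath into a wait (the WebDriverWait/By/EC imports are the same ones listed earlier in this thread; driver is assumed to be an active session):
# Simulate ends-with() with substring(): compare the last len("table") characters of @id
xpath = '//table[substring(@id, string-length(@id) - string-length("table") + 1) = "table"]//tr[1]/td[11]'
WebDriverWait(driver, 20).until(
    EC.element_to_be_clickable((By.XPATH, xpath))
).click()

# The CSS alternative uses the $= "ends with" attribute match:
# (By.CSS_SELECTOR, 'table[id$="table"] tr > td:nth-of-type(11)')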