Get element value from dynamic website using Selenium and Python [duplicate]

This question already has answers here:
How can I get text of an element in Selenium WebDriver, without including child element text?
(5 answers)
How to get text with Selenium WebDriver in Python
(9 answers)
How to get text element in html head by selenium?
(2 answers)
Closed 6 months ago.
I'm trying to get the sell ("vending") price of the AMZN instrument directly from the trading platform Plus500. The value changes continuously, so I have to use Selenium. The code I'm using is this one:
driver.get("https://app.plus500.com/trade/amazon")
# get AMZN vending price
Sell = driver.find_elements(By.CLASS_NAME, value="sell")
print(Sell)
The html from the source is this:
<div class="sell" data-no-trading="false" id="_win_plus500_bind873" data-show="true">126.28</div>
I need to scrape the value (in this case 126.28) every time it changes.
If it is needed, I created a dummy Plus500 account for you: username "myrandomcode@gmail.com", password "MyRandomCode87".

To extract the sell price of the AMZN instrument from the trading platform Plus500, i.e. the text 126.28: as the element is rendered dynamically, you need to induce WebDriverWait for visibility_of_element_located() and you can use the following locator strategy:
Using XPATH:
print(WebDriverWait(driver, 20).until(EC.visibility_of_element_located((By.XPATH, "//div[starts-with(@class, 'section-table-body')]//span[text()='Amazon']//following::div[2]"))).text)
Note: you have to add the following imports:
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
You can find a relevant discussion in How to retrieve the text of a WebElement using Selenium - Python
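Since the price updates continuously, a single wait only captures the first value. Below is a minimal polling sketch, assuming the .sell element from the question's HTML stays attached to the DOM (if the page re-renders it, re-locate the element inside the loop to avoid a StaleElementReferenceException); the interval and parsing are illustrative, not taken from Plus500's behavior:

```python
import time


def parse_price(text):
    """Convert the scraped price text (e.g. '126.28' or '126,28') to a float."""
    return float(text.strip().replace(",", "."))


def poll_sell_price(driver, interval=0.5):
    """Yield the sell price each time it changes."""
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support.ui import WebDriverWait
    from selenium.webdriver.support import expected_conditions as EC

    element = WebDriverWait(driver, 20).until(
        EC.visibility_of_element_located((By.CLASS_NAME, "sell")))
    last = None
    while True:
        price = parse_price(element.text)
        if price != last:  # only report actual changes
            yield price
            last = price
        time.sleep(interval)
```

You would then consume it with `for price in poll_sell_price(driver): print(price)`.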

Related

how to download email attachment with selenium python? [duplicate]

This question already has an answer here:
Selenium "selenium.common.exceptions.NoSuchElementException" when using Chrome
(1 answer)
Closed 11 months ago.
This is the html for the download part of the page:
<ul id="attachment-list" class="attachmentslist"><li class="application vnd.openxmlformats-officedocument.spreadsheetml.sheet xlsx" id="attach2"><span class="attachment-name">2ART.xlsx</span><span class="attachment-size">(~740 KB)</span><span class="inner">Opções</span></li>
</ul>
and I can't find any element related to the file.
driver.find_element(By.XPATH, '//*[@id="attach2"]/a[2]')
the output is:
Message: no such element: Unable to locate element:
{"method":"xpath","selector":"//*[@id="attach2"]/a[2]"}
Please check in the dev tools (Google Chrome) whether we have a unique entry in the HTML DOM or not.
XPath that you should check:
//ul[@id='attachment-list']//li[@id='attach2']//a[@title='Opções']
Steps to check:
Press F12 in Chrome -> go to the Elements section -> press CTRL + F -> paste the XPath and see if your desired element is highlighted as a 1/1 matching node.
If it's a unique match then click it like below:
Code trial 1:
time.sleep(5)
driver.find_element(By.XPATH, "//ul[@id='attachment-list']//li[@id='attach2']//a[@title='Opções']").click()
Code trial 2:
WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH, "//ul[@id='attachment-list']//li[@id='attach2']//a[@title='Opções']"))).click()
Imports:
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
Code trial 2 is recommended.
If you still face the NoSuchElementException with code trial 1, debug like this:
If //ul[@id='attachment-list']//li[@id='attach2']//a[@title='Opções'] is unique, then you need to check for the conditions below as well.
Check if it's in any iframe/frame/frameset.
Solution: switch to iframe/frame/frameset first and then interact with this web element.
Check if it's in any shadow-root.
Solution: use driver.execute_script('return document.querySelector(...)') to get the element back as a WebElement and then operate on it accordingly.
Check whether the page has redirected to a new tab/window that you have not switched to; otherwise you will likely get the NoSuchElement exception.
Solution: switch to the relevant window/tab first.
If you have switched into an iframe and the desired element is not in that iframe's context, you will not find it either.
Solution: switch back to the default content first and then switch to the respective iframe.
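The iframe and shadow-root cases above can be sketched as follows; the selectors and locators are illustrative placeholders, not taken from the page:

```python
def shadow_query_js(host_selector, inner_selector):
    """Build the JavaScript used to reach an element inside a shadow-root.
    Passing the result to driver.execute_script() returns a WebElement."""
    return ("return document.querySelector('{}')"
            ".shadowRoot.querySelector('{}')").format(host_selector, inner_selector)


def click_inside_frame(driver, frame_locator, element_locator, timeout=20):
    """Switch into the iframe, click the element, then restore default content."""
    from selenium.webdriver.support.ui import WebDriverWait
    from selenium.webdriver.support import expected_conditions as EC

    wait = WebDriverWait(driver, timeout)
    wait.until(EC.frame_to_be_available_and_switch_to_it(frame_locator))
    try:
        wait.until(EC.element_to_be_clickable(element_locator)).click()
    finally:
        # always return to the top document so later lookups don't fail
        driver.switch_to.default_content()
```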

I am trying to reference an element on Target's website, but am having issues grabbing it

I am trying to scrape some Target product information and am running into an issue trying to reference the UPC digits.
I am using Selenium on Python and am trying to reference the UPC and the digits, but there doesn't seem to be a way to reference the digits portion of it. I am currently trying:
UPC = driver.find_element_by_xpath("//*[text()[contains(.,'UPC')]]")
But this only returns the string 'UPC' and not the digits.
Does anyone know how to reference the entire element? I posted some images along with this, thank you!
To scrape the Target product information element, you need to induce WebDriverWait for visibility_of_element_located() and you can use the following locator strategy:
Using XPATH:
UPC = WebDriverWait(driver, 20).until(EC.visibility_of_element_located((By.XPATH, "//*[contains(., 'UPC')]")))
Note: you have to add the following imports:
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
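The matched element's text contains both the "UPC" label and the digits, so you still need to pull the digits out of UPC.text; a small sketch (the sample string in the usage below is hypothetical, UPC-A codes are 12 digits):

```python
import re


def extract_upc(element_text):
    """Return the first 12-digit run (a UPC-A code) found in the element's
    combined text, or None if no such run is present."""
    match = re.search(r"\b(\d{12})\b", element_text)
    return match.group(1) if match else None
```

Usage would be `extract_upc(UPC.text)` after the wait above.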

How to get the number of rows of a table? [duplicate]

This question already has answers here:
How to count no of rows in table from web application using selenium python webdriver
(6 answers)
Closed 3 years ago.
I want to get the numbers of rows of a table on a web page using selenium python.
I tried the following way describe here: How to count no of rows in table from web application using selenium python webdriver
rows=len(driver.find_element_by_xpath("//table[@id='SheetContentPlaceHolder_GridView1']/tbody/tr"))
The result I get is the following:
rows=len(driver.find_element_by_xpath("//table[@id='SheetContentPlaceHolder_GridView1']/tbody/tr"))
TypeError: object of type 'FirefoxWebElement' has no len()
I don't understand what I'm doing wrong.
Thanks for your help
The method driver.find_element_by_xpath(...) returns only the first matching element (row) of the table.
Change the line to driver.find_elements_by_xpath(...). It returns a list of elements. So the new code will be:
rows = driver.find_elements_by_xpath("//table[@id='SheetContentPlaceHolder_GridView1']/tbody/tr")
number_of_rows = len(rows)
find_element_by_xpath() returns a single element; as you were using Firefox, the first matching FirefoxWebElement was returned, and a single WebElement cannot be passed to len(). Hence you see the error:
TypeError: object of type 'FirefoxWebElement' has no len()
So instead of find_element_by_xpath() you need to use find_elements_by_xpath(), which returns a list.
Ideally, to extract the number of rows of the table using Selenium and Python, you have to induce WebDriverWait for visibility_of_all_elements_located() and you can use the following solution:
Using XPATH:
print(len(WebDriverWait(driver, 20).until(EC.visibility_of_all_elements_located((By.XPATH, "//table[@id='SheetContentPlaceHolder_GridView1']/tbody/tr")))))
Note: you have to add the following imports:
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
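As a cross-check that doesn't depend on Selenium's locators at all, you can count <tr> tags in driver.page_source with the standard-library parser. Note this counts every <tr> fed to it, including header rows and other tables, so it is only a sanity check, not a replacement for the scoped XPath above:

```python
from html.parser import HTMLParser


class RowCounter(HTMLParser):
    """Count every <tr> start tag fed to the parser."""

    def __init__(self):
        super().__init__()
        self.rows = 0

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self.rows += 1


def count_rows(page_source):
    counter = RowCounter()
    counter.feed(page_source)
    return counter.rows
```

Usage would be `count_rows(driver.page_source)` and comparing it against the Selenium count.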

Get the first element in web with individual Xpath [duplicate]

This question already has answers here:
How to retrieve the title attribute through Selenium using Python?
(3 answers)
Closed 3 years ago.
I am running a little Python Selenium script and I want to access attributes from the first element on this site: https://www.mydealz.de/gruppe/spielzeug. Every few minutes the first element is different and has therefore a different Xpath identifier.
How can I always access this first element, which has different IDs/XPaths each time? I mean the first result.
Thanks a lot in advance!
I've kept an eye on the website for the last 15 minutes, but for me the page has not changed.
Nevertheless, I tried to scrape the data with BS4 (which you could feed with Selenium's current page source), where it should always return the first element first.
from bs4 import BeautifulSoup
import requests

data = requests.get('https://www.mydealz.de/gruppe/spielzeug')
soup = BeautifulSoup(data.text, "html.parser")
price_info = soup.select(".cept-tp")
for element in price_info:
    for child in element:
        print(child)
Of course this is just for the price, but you can apply the same logic for the other elements.
To print the first title you have to induce WebDriverWait for the desired visibility_of_element_located() and you can use either of the following Locator Strategies:
Using CSS_SELECTOR:
print(WebDriverWait(driver, 20).until(EC.visibility_of_element_located((By.CSS_SELECTOR, "div.threadGrid div.threadGrid-title.js-contextual-message-placeholder>strong.thread-title>a"))).get_attribute("title"))
Using XPATH:
print(WebDriverWait(driver, 20).until(EC.visibility_of_element_located((By.XPATH, "//div[@class='threadGrid']//div[@class='threadGrid-title js-contextual-message-placeholder']/strong[@class='thread-title']/a"))).text)
Note: you have to add the following imports:
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
Console output of two back-to-back executions:
[Mediamarkt #Ebay.de] diverse Gravitrax Erweiterungen günstig!
[Mediamarkt #Ebay.de] diverse Gravitrax Erweiterungen günstig!
As per the documentation:
get_attribute(name): gets the given attribute or property of the element.
text: returns the text of the element.

How to find element based on what its value ends with in Selenium?

I am dealing with a situation where every time I log in, a report is displayed in a table whose ID is dynamically generated as random text ending with "table".
I am automating this table with the Selenium Python WebDriver. The syntax is:
driver.find_element_by_xpath('//*[@id="isc_43table"]/tbody/tr[1]/td[11]').click();
Help me edit this syntax so it matches the table whose ID ends with "table"
(only one table is generated).
The ends-with() XPath constraint function is part of XPath v2.0, but as per the current implementation Selenium supports XPath v1.0.
As per the HTML you have shared to identify the element you can use either of the Locator Strategies:
XPath using contains():
driver.find_element_by_xpath("//*[contains(@id,'table')]/tbody/tr[1]/td[11]").click();
Further, as you have mentioned that table whose ID is dynamically generated so to invoke click() on the desired element you need to induce WebDriverWait for the element to be clickable and you can use the following solution:
WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH, "//*[contains(@id,'table')]/tbody/tr[1]/td[11]"))).click()
Alternatively, you can also use CssSelector as:
driver.find_element_by_css_selector("[id$='table']>tbody>tr>td:nth-of-type(11)").click();
Again, you can also use CssSelector inducing WebDriverWait as:
WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.CSS_SELECTOR, "[id$='table']>tbody>tr>td:nth-of-type(11)"))).click()
Note: you have to add the following imports:
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
I hope either of these two will work for you:
driver.find_element_by_xpath("//table[ends-with(@id,'table')]/tbody/tr[1]/td[11]").click();
OR
driver.find_element_by_xpath("//table[substring(@id,'table')]/tbody/tr[1]/td[11]").click();
If that doesn't work, remove the tbody tag from the XPath.
For such situations, when you face randomly generated IDs, you can use the following functions in an XPath expression:
1) contains()
2) starts-with()
3) ends-with()
4) substring()
Syntax:
//table[ends-with(@id,'table')]
//h4/a[contains(text(),'SAP M')]
//div[substring(@id,'table')]
You need to identify which element has that ID, whether it's a div, an input or a table. I think it's a table.
You can try below XPath to simulate ends-with() syntax:
'//table[substring(@id, string-length(@id) - string-length("table") + 1) = "table"]//tr[1]/td[11]'
You can also use CSS selector:
'table[id$="table"] tr>td:nth-of-type(11)'
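The substring() arithmetic in that XPath is easiest to verify in plain Python. This mirror of the expression shows why it works (the + 1 is there because XPath's substring() is 1-indexed, unlike Python slicing):

```python
def xpath1_ends_with(value, suffix):
    """Python mirror of the XPath 1.0 ends-with idiom:
    substring(@id, string-length(@id) - string-length('suffix') + 1) = 'suffix'"""
    start = len(value) - len(suffix)  # 0-indexed; XPath would use start + 1
    return start >= 0 and value[start:] == suffix
```

For the table in the question, `xpath1_ends_with("isc_43table", "table")` holds, so the XPath above selects it.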
