Iterate through all elements using Selenium in Python

I currently have a script that can find the first element with the link text "Toast" using
toast_link = driver.find_element_by_link_text('Toast')
The problem I am having is that there are multiple instances of the link text "Toast" and I would like to iterate through all of them. What is the best way to do this?

To find multiple elements, use find_elements_* instead of find_element_* (NOTE: s):
toast_links = driver.find_elements_by_link_text('Toast')
for link in toast_links:
    ...  # do something with each link
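For example, a minimal sketch (what you do inside the loop is up to you; the href collection and the click-and-go-back pattern below are just illustrations):

toast_links = driver.find_elements_by_link_text('Toast')

# Inspect each matching link, e.g. collect its target URL.
hrefs = [link.get_attribute('href') for link in toast_links]

# Or click each one in turn. Clicking may navigate away and leave the
# remaining references stale, so re-find the links on every iteration.
for i in range(len(toast_links)):
    driver.find_elements_by_link_text('Toast')[i].click()
    driver.back()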

Related

Web scraping data inside an HTML h3 tag using Selenium Python

I want to grab certain data using Selenium. The data is located inside elements that share the same class, so how do I grab it?
Those two values are the data I need, but they are inside elements with the same class.
I tried to use
driver.find_elements_by_class_name
but it doesn't work. Is there a way to grab it? Thanks.
Use the XPath "//*[@class='card-title']" with the function driver.find_elements_by_xpath. To check that the XPath is correct, inspect the page, press Control+F (or Command+F), and put the XPath in the search bar; you will see whether it finds the elements you are looking for.
Then if you want the text inside:
elements = driver.find_elements_by_xpath("//*[@class='card-title']")
data = [element.text for element in elements]
Yes, there is. You can grab the first one like this:
driver.find_element_by_xpath("(//h3[@class='cart-title'])[1]").find_element_by_tag_name('b').text
and the second one like this:
driver.find_element_by_xpath("(//h3[@class='cart-title'])[2]").find_element_by_tag_name('b').text
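If the number of matching h3 elements is not fixed, a loop over find_elements_by_xpath avoids hard-coding the [1] and [2] indices. A sketch, assuming the h3 elements carry the 'card-title' class and wrap the value in a b tag as above:

titles = driver.find_elements_by_xpath("//h3[@class='card-title']")
data = [title.find_element_by_tag_name('b').text for title in titles]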

click(), send_keys() functions are not available for the find_elements_by_xpath option

I am using Python version 3.8.2 with selenium 3.14.1.
I am new to both Python and Selenium. I am using Pycharm to write my automation scripts.
When I try to use the driver.find_elements_by_xpath().click() command, the click() option is not displayed in the dropdown.
The same click() option is available if I use the driver.find_element_by_name or driver.find_element_by_id commands.
[Screenshots: "Find elements by Name" and "Find elements by id"]
How can we resolve this issue?
To quote the Selenium documentation:
To find multiple elements (these methods will return a list):
find_elements_by_name
find_elements_by_xpath
...
You can't call .click on driver.find_elements_by_xpath() because driver.find_elements_by_xpath() returns a list of elements, rather than a single element.
Suppose driver.find_elements_by_xpath() returns 10 elements. What do you want to do with these 10 elements? Click on the first one? Click on the last one? Click on all of them?
If you only want to find a single element using XPath, use driver.find_element_by_xpath() (note, no s after element) instead.
The documentation page I linked to above lists 8 methods for finding a single element on the page. All of these methods apart from find_element_by_id have a corresponding method for returning multiple elements, whose name differs only by replacing element with elements. (There is no find_elements_by_id method because ids are supposed to be unique: there should never be more than one element with the same id.)
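To illustrate the difference, here is a minimal sketch; the locators are hypothetical placeholders, not taken from the question:

# Single element: find_element_by_xpath returns one WebElement,
# so .click() and .send_keys() are available on the result.
driver.find_element_by_xpath("//input[@id='username']").send_keys("user")
driver.find_element_by_xpath("//button[@id='submit']").click()

# Multiple elements: find_elements_by_xpath returns a list, so you must
# index into it or loop over it before calling element methods.
checkboxes = driver.find_elements_by_xpath("//input[@type='checkbox']")
for checkbox in checkboxes:
    checkbox.click()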
Adding to the above answer.
You can use a List to store the WebElements and loop through them while performing actions. Here is an example in Java:
List<WebElement> elements = driver.findElements(By.id("001"));
for (WebElement ele : elements) {
    ele.click();
}
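In Python the same pattern would look like this (a sketch, reusing the id value from the Java example above):

elements = driver.find_elements_by_id("001")
for element in elements:
    element.click()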

Finding an element by partial href (Python Selenium)

I'm trying to access text from elements that have different xpaths but very predictable href schemes across multiple pages in a web database. Here are some examples:
<a href="/mathscinet/search/mscdoc.html?code=65J22,(35R30,47A52,65J20,65R30,90C30)">
65J22 (35R30 47A52 65J20 65R30 90C30) </a>
In this example I would want to extract "65J22 (35R30 47A52 65J20 65R30 90C30)"
<a href="/mathscinet/search/mscdoc.html?code=05C80,(05C15)">
05C80 (05C15) </a>
In this example I would want to extract "05C80 (05C15)". My web scraper would not be able to search by xpath directly due to the xpaths of my desired elements changing between pages, so I am looking for a more roundabout approach.
My main idea is to use the fact that every href contains "/mathscinet/search/mscdoc.html?code=". Selenium can't directly search for hrefs, but I was thinking of doing something similar to this C# implementation:
Driver.Instance.FindElement(By.XPath("//a[contains(@href, 'long')]"))
To port this over to python, the only analogous method I could think of would be to use the in operator, but I am not sure how the syntax will work when everything is nested in a find_element_by_xpath. How would I bring all of these ideas together to obtain my desired text?
driver.find_element_by_xpath("//a['/mathscinet/search/mscdoc.html?code=' in @href]").text
If I understand correctly, you want to locate all elements that have the same partial href. You can use this:
elements = driver.find_elements_by_xpath("//a[contains(@href, '/mathscinet/search/mscdoc.html')]")
for element in elements:
print(element.text)
or if you want to locate one element:
driver.find_element_by_xpath("//a[contains(@href, '/mathscinet/search/mscdoc.html')]").text
This will give a list of all elements located.
As per the HTML you have shared, @AndreiSuvorkov's answer would possibly cater to your current requirement. Perhaps you can get more granular and construct an optimized XPath by:
Using starts-with() instead of contains()
Including the ?code= part of the @href attribute
Your effective code block will be:
all_elements = driver.find_elements_by_xpath("//a[starts-with(@href,'/mathscinet/search/mscdoc.html?code=')]")
for elem in all_elements:
print(elem.get_attribute("innerHTML"))

How to Collect the line with Selenium Python

I want to know how I can collect the line with the mailto link using Selenium Python. The emails contain the @ sign on the contact pages. I tried the following code, but it works on some pages and not on others:
//*[contains(text(),"@")]
The email formats differ: sometimes it is <p>Email: name@domain.com</p> or <span>Email: name@domain.com</span> or just name@domain.com.
Is there any way to collect them with one statement?
Thanks
Here is the XPath you are looking for, my friend.
//*[contains(text(),"@")]|//*[contains(@href,"@")]
You could create a collection of the link-text values that contain @ on the page and then iterate through them to do the formatting. You are going to have to format the span that has Email: name@domain.com anyway.
Use find_elements_by_partial_link_text to make the collection.
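A minimal sketch of that idea, assuming the visible text of the mailto links contains the @ sign:

# Collect anchors whose visible link text contains "@" (assumption),
# then strip any "Email:" prefix while formatting.
email_links = driver.find_elements_by_partial_link_text("@")
emails = [link.text.replace("Email:", "").strip() for link in email_links]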
I think you need two XPaths: the first for finding elements that contain the text "Email:", and the second for elements whose href attribute contains "mailto:".
//*[contains(text(),"Email:")]|//*[contains(@href,"mailto:")]
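A sketch of how that combined XPath could be used from Python; pulling the address out of the mailto: href and stripping the "Email:" prefix are assumptions about the data, not part of the original answer:

elements = driver.find_elements_by_xpath(
    '//*[contains(text(),"Email:")]|//*[contains(@href,"mailto:")]'
)
emails = []
for element in elements:
    href = element.get_attribute("href")
    if href and href.startswith("mailto:"):
        # e.g. "mailto:name@domain.com" -> "name@domain.com"
        emails.append(href[len("mailto:"):])
    else:
        emails.append(element.text.replace("Email:", "").strip())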

Selenium on Python unable to locate nested class element

I'm trying to reach a nested class. Originally I used an XPath, but it returned an empty list, so I went through the classes individually, and one of them has an issue where Selenium can't find it.
Up until Price4 it works fine, but it can't seem to find Price5.
So, if you want to get the text from the last element containing the price, you can define:
String lastPriceXpath = "(//*[@class='css-1m1f8hn'])[last()]";
String lastPrice = driver.findElement(By.xpath(lastPriceXpath)).getText();
The syntax above is in Java, but I hope you will be able to convert it to Python; it's quite similar.
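A direct Python translation might look like this, assuming the same 'css-1m1f8hn' class holds the price elements:

last_price_xpath = "(//*[@class='css-1m1f8hn'])[last()]"
last_price = driver.find_element_by_xpath(last_price_xpath).text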
