How to extract multiple elements from XPath Selenium (Python)

I made this XPath:
alo1 = driver.find_element(By.XPATH, "//div[@class='txt-block']/span/a/span").text
print(alo1)
but the problem is: I'm only getting the first element, while there are 3 or 4 elements matching the same XPath, and I want them all.
From page to page the number of elements changes from 0 to 4.
How can I do it?
One other thing: do you think it is possible to make another XPath? I'm trying to get the names of the producers of the films.
EDIT:
I have a second difficulty. I'm passing this result to an Excel sheet, but it needs to be in one line to be printed there, or else only the last one will be printed. How can it be done?
wb = xlwt.Workbook()
ws = wb.add_sheet("A Test Sheet")
driver = webdriver.Chrome()
driver.get('http://www.imdb.com/title/tt4854442/?ref_=wl_li_tt')
labels = driver.find_elements_by_xpath("//div[@class='txt-block']/span/a/span")
for label in labels:
    print(label.text)
    ws.write(x-1, 1, label.text)
wb.save("sinopses.xls")
The website for reference: http://www.imdb.com/title/tt4854442/?ref_=wl_li_tt

You can get them all at once, and then get the text for each element:
alos = driver.find_elements(By.XPATH, "//div[@class='txt-block']/span/a/span")
for alo in alos:
    print(alo.text)

For the first question:
find_element always gives only one result; even if the locator matches more than one element, it automatically takes the first one.
If the locator gives more than one matching result and you want all of them, then you should go for find_elements.
For the second question:
labels = driver.find_elements_by_xpath("//div[@class='txt-block']/span/a/span")
result = ''
for label in labels:
    result += label.text + ' '
print(result)
ws.write(x-1, 1, result)
wb.save("sinopses.xls")
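To illustrate the one-cell idea with plain strings standing in for the scraped `label.text` values (the names below are made up), joining with a separator keeps the values readable in a single cell:

```python
# Hypothetical texts, standing in for [label.text for label in labels]
texts = ["Producer One", "Producer Two", "Producer Three"]

# Join with a separator so the names stay distinguishable in one cell
result = ", ".join(texts)
print(result)  # → Producer One, Producer Two, Producer Three
```

A separator also avoids gluing names together, which plain `+=` concatenation would do.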

Related

How to get same class name seperately by using css_selector?

I am using the below code to get data from http://www.bddk.org.tr/BultenHaftalik. Two table elements have the same class name. How can I get just one of them?
from selenium import webdriver
import time
driver_path = "C:\\Users\\Bacanli\\Desktop\\chromedriver.exe"
browser = webdriver.Chrome(driver_path)
browser.get("http://www.bddk.org.tr/BultenHaftalik")
time.sleep(3)
Krediler = browser.find_element_by_xpath("//*[@id='tabloListesiItem-253']/span")
Krediler.click()
elements = browser.find_elements_by_css_selector("td.ortala")
for element in elements:
    print(element.text)
browser.close()
If you want to select all rows for one column only that match a specific CSS selector, then you can use the :nth-child() selector.
Simply, the code will be like this:
elements = browser.find_elements_by_css_selector("td.ortala:nth-child(2)")
In this way, you will get the "Krediler" column rows only. You can also select the first child if you want to by applying the same idea.
I guess what you want to do is to extract the text and not the numbers; try this:
elements = []
for i in range(1, 21):
    css_selector = f'#Tablo > tbody:nth-child(2) > tr:nth-child({i}) > td:nth-child(2)'
    element = browser.find_element_by_css_selector(css_selector)
    elements.append(element)
for element in elements:
    print(element.text)
browser.close()

Selenium Python - script returns empty string and is unable to compare with assertEqual

I created a Selenium script to check that the cart count shown is zero (0). However, it returned empty although this field is zero.
Scripts:
shopping_cart_qty = self.driver.find_element_by_xpath("//span[contains(@class,'topnav-cart-qty')]").text
self.assertEqual('0', shopping_cart_qty, "The shopping car is not empty")
Return:
The shopping car is not empty
!= 0
Expected :0
Actual :
Try with:
shopping_cart_qty = self.driver.find_element_by_xpath("//span[contains(@class,'topnav-cart-qty')]").get_attribute("value")
or
shopping_cart_qty = self.driver.find_element_by_xpath("//span[contains(@class,'topnav-cart-qty')]").get_attribute("textContent")
or
shopping_cart_qty = self.driver.find_element_by_xpath("//span[contains(@class,'topnav-cart-qty')]")
driver.execute_script("arguments[0].scrollIntoView()", shopping_cart_qty)
print(shopping_cart_qty.text)
If the element is not visible, then .text will return an empty string, as it takes visibility into account; you can use textContent to retrieve the text irrespective of visibility.
But the best way is to scroll the element into view first and then get the text.
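As a minimal sketch of the fallback idea (the FakeElement stub below is hypothetical and stands in for a hidden WebElement; textContent itself is a real DOM property):

```python
def read_text(element):
    """Prefer .text, but fall back to the textContent DOM property
    when the element is hidden and .text comes back empty."""
    text = element.text
    return text if text else element.get_attribute("textContent")

# Hypothetical stub standing in for a hidden WebElement
class FakeElement:
    text = ""  # Selenium returns "" for invisible elements

    def get_attribute(self, name):
        return "0" if name == "textContent" else None

print(read_text(FakeElement()))  # → 0
```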

Python selenium - select an element using action chain

I have quite a unique goal and I'm having a hard time getting my Python code to work. Inside a big Selenium application, I'm simply trying to check whether an element located at a specific position in the browser corresponds to an expected element.
For example, if you look at the test website: https://learn.letskodeit.com/p/practice
there's one element (link) labeled "Open Tab" and its coordinates on the browser are: x = 588, y = 576.
This is the code I'm using to confirm in that position I have that element:
target_elem = driver.find_element_by_id("opentab")
print("target elem actual location: {}".format(target_elem.location))
time.sleep(1)
zero_elem = driver.find_element_by_tag_name('body')
x_body_offset = zero_elem.location["x"]
y_body_offset = zero_elem.location["y"]
print("Body coordinates: {}, {}".format(x_body_offset, y_body_offset))
x = 588
y = 576
actions = ActionChains(driver)
actions.move_to_element_with_offset(driver.find_element_by_tag_name('body'), -x_body_offset, -y_body_offset).click()
actions.move_by_offset( x, y ).send_keys(Keys.ESCAPE).perform()
elem_found = driver.switch_to.active_element
print(elem_found.text)
When I print elem_found.text I don't get "Open Tab".
However, if inside the action chain, right before perform(), I add click(), the code above does click on the "Open Tab" link.
Hence my question: can we simply select an element by knowing its exact position in the browser?
I totally understand that getting an element by its location is not really the best way, but on my end I really need to be able to confirm whether the element at position X,Y corresponds to something I expect to find in that location.
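One way to sketch that check without an action chain (a sketch under assumptions: the rect values below are made up; WebElement.rect is a real Selenium property, and in a live session the browser can also be asked directly with driver.execute_script("return document.elementFromPoint(arguments[0], arguments[1])", x, y)):

```python
def rect_contains(rect, x, y):
    """True if the viewport point (x, y) falls inside the element's
    rectangle, as returned by WebElement.rect ({'x', 'y', 'width', 'height'})."""
    return (rect["x"] <= x < rect["x"] + rect["width"]
            and rect["y"] <= y < rect["y"] + rect["height"])

# Hypothetical rect for the "Open Tab" link
rect = {"x": 560, "y": 560, "width": 80, "height": 30}
print(rect_contains(rect, 588, 576))  # → True
print(rect_contains(rect, 10, 10))    # → False
```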

Select button by highest number in xpath

There are multiple buttons on my page containing similar hrefs. They differ only in id_invoices. I want to click one button on the page using XPath and an href which looks like:
href="/pl/payment/getinvoice/?id_invoices=461"
I can select all buttons using:
invoices = driver.find_elements_by_xpath("//a[contains(@href, '/payment/getinvoice/')]")
but I need to select only the button with the highest id_invoices. Can it be done? :)
What you can do is (note Selenium locators can only return elements, not attribute nodes, so read the href from each element):
hrefs = [a.get_attribute("href") for a in driver.find_elements_by_xpath("//a[contains(@href, '/payment/getinvoice/')]")]
ids = [int(href.split("id_invoices=")[-1]) for href in hrefs]
driver.find_element_by_xpath("//a[contains(@href, '/payment/getinvoice/?id_invoices=" + str(max(ids)) + "')]").click()
I don't know much about Python, so I'll give you a direction/algorithm to achieve the same:
Use get_attribute('href'); you will get strings of URLs.
You need to split each href string you get in the invoice list.
Split by = and pick up the last array value.
Now you need to typecast the string to int, as the last value after = will be a number.
Now you just need to pick the highest value.
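The steps above, sketched on plain strings (the hrefs below are hypothetical stand-ins for values read with get_attribute('href')):

```python
# Hypothetical hrefs, as read from each link on the page
hrefs = [
    "/pl/payment/getinvoice/?id_invoices=459",
    "/pl/payment/getinvoice/?id_invoices=461",
    "/pl/payment/getinvoice/?id_invoices=460",
]

# Split by '=', take the last part, typecast to int
ids = [int(h.split("=")[-1]) for h in hrefs]
highest = max(ids)
print(highest)  # → 461
```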
Since you have an XPath that returns all the desired elements, you just need to grab the href attribute from each one, split the href by '=' to get the id (the 2nd part of the string), find the largest id, and then use that id to find the element you want and click on it.
invoices = driver.find_elements_by_xpath("//a[contains(@href, '/payment/getinvoice/')]")
ids = []
for invoice in invoices:
    ids.append(invoice.get_attribute("href").split('=')[1])
results = list(map(int, ids))  # you can't do max on a list of strings, you won't get the right answer
id = max(results)
driver.find_element_by_xpath("//a[@href='/pl/payment/getinvoice/?id_invoices=" + str(id) + "']").click()

Looping through xpath variables

How can I increment the Xpath variable value in a loop in python for a selenium webdriver script ?
search_result1 = sel.find_element_by_xpath("//a[not((//div[contains(@class,'s')]//div[contains(@class,'kv')]//cite)[1])]|((//div[contains(@class,'s')]//div[contains(@class,'kv')]//cite)[1])").text
search_result2 = sel.find_element_by_xpath("//a[not((//div[contains(@class,'s')]//div[contains(@class,'kv')]//cite)[2])]|((//div[contains(@class,'s')]//div[contains(@class,'kv')]//cite)[2])").text
search_result3 = sel.find_element_by_xpath("//a[not((//div[contains(@class,'s')]//div[contains(@class,'kv')]//cite)[3])]|((//div[contains(@class,'s')]//div[contains(@class,'kv')]//cite)[3])").text
Why don't you create a list for storing the search results, similar to:
search_results = []
for i in range(1, 11):  # I am assuming 10 results in a page, so you can set your own range
    result = sel.find_element_by_xpath("//a[not((//div[contains(@class,'s')]//div[contains(@class,'kv')]//cite)[%s])]|((//div[contains(@class,'s')]//div[contains(@class,'kv')]//cite)[%s])" % (i, i)).text
    search_results.append(result)
This sample code will create a list of the 10 result values. You can get the idea from this code to write your own; it's just a matter of automating the task.
so
search_results[0] will give you first search result
search_results[1] will give you second search results
...
...
search_results[9] will give you 10th search result
@Alok Singh Mahor, I don't like hardcoding ranges. I guess a better approach is to iterate through the list of webelements:
search_results = []
result_elements = sel.find_elements_by_xpath("//not/indexed/xpath/for/any/search/result")
for element in result_elements:
    search_result = element.text
    search_results.append(search_result)
