Selenium Python: clicking and looping on elements inside elements

I have a group of elements on the page that looks like:
<div class="course-lesson__course-wrapper" data-qa-level="z1">
<div class="course-lesson__course-title">
<section class="course-lesson__wrap" data-qa-lesson="trial">
<section class="course-lesson__wrap" data-qa-lesson="trial">
There are several pages with this layout. I want to get a list of all the elements inside the z1 wrapper and then click each one that has data-qa-lesson="trial".
I have this code
# finds all the elements inside the z1 wrapper
listofA1 = driver.find_elements(By.CSS_SELECTOR, "div.course-lesson__course-wrapper section.course-lesson__wrap")
for elemen in listofA1:
    # checks for the attribute I need to see if it's clickable
    elementcheck = elemen.get_attribute("data-qa-lesson")
    if elementcheck == "trial":
        elemen.click()
        # do some stuff then go back to main and begin again on the next element
        driver.get(home_link)
But it does not seem to work

To avoid a StaleElementReferenceException you can try this approach:
count = len(driver.find_elements(By.XPATH, '//div[@data-qa-level="z1"]//section[@data-qa-lesson="trial"]'))  # get the count of elements
for i in range(count):
    driver.find_elements(By.XPATH, '//div[@data-qa-level="z1"]//section[@data-qa-lesson="trial"]')[i].click()  # re-locate and click the current element
    # Do what you need
    driver.get(home_link)
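Putting the pieces together, here is a minimal sketch of that re-locating loop. The section/data-qa-lesson names are taken from the question's snippet, and `"xpath"` is the string value behind `By.XPATH` in Selenium 4, so no extra import is needed:

```python
TRIAL_XPATH = '//div[@data-qa-level="z1"]//section[@data-qa-lesson="trial"]'

def click_all_trials(driver, home_link):
    # Count the matches once, then re-locate by index on every pass:
    # after driver.get() the old element references are stale.
    count = len(driver.find_elements("xpath", TRIAL_XPATH))
    for i in range(count):
        driver.find_elements("xpath", TRIAL_XPATH)[i].click()
        # ... do some stuff on the lesson page ...
        driver.get(home_link)  # back to the course page for the next pass
    return count
```

Because the list is re-fetched inside the loop, navigating away and back never leaves you holding a stale reference.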

Related

Selenium Python - how to get a deeply nested element

I am exploring Selenium Python and trying to grab a name from a LinkedIn page in order to get its index later.
This is the HTML:
Here is how I try to do it:
all_span = driver.find_elements(By.TAG_NAME, "span")
all_span = [s for s in all_span if s.get_attribute("aria-hidden") == "true"]
counter = 1
for span in all_span:
    print(counter)
    print(span.text)
    counter += 1
The problem is there are other spans on the same page that also have aria-hidden="true", but they are not relevant and they mess up the index.
So I need to reach the span that contains the name via one of its parent divs, but I don't know how.
Looking at the documentation here: https://selenium-python.readthedocs.io/locating-elements.html I can't seem to find how to target deeply nested elements.
I need to get the name that is in the span element.
The best way would be to use xpath. https://selenium-python.readthedocs.io/locating-elements.html#locating-by-xpath
Let's say you have this:
<div id="this-div-contains-the-span-i-want">
<span aria-hidden="true">
<!--
...
//-->
</span>
</div>
Then, using xpath:
xpath = "//div[@id='this-div-contains-the-span-i-want']/span[@aria-hidden='true']"
span_i_want = driver.find_element(By.XPATH, xpath)
So, in your example, you could use:
xpath = "//a[@class='app-aware-link']/span[@dir='ltr']/span[@aria-hidden='true']"
span_i_want = driver.find_element(By.XPATH, xpath)
print(span_i_want.text)
No typos, but print(span_i_want) returns an empty list ([]).
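If the element is rendered late (LinkedIn builds the page with JavaScript), the XPath can be correct and still match nothing at first. A stdlib-only polling helper sketches the waiting idea; in real code `WebDriverWait` from selenium.webdriver.support.ui does the same thing more idiomatically, and the `app-aware-link` class is assumed from the answer above:

```python
import time

NAME_XPATH = ("//a[contains(@class, 'app-aware-link')]"
              "/span[@dir='ltr']/span[@aria-hidden='true']")

def wait_for_text(driver, xpath, timeout=10.0, poll=0.5):
    # Poll until the element exists and carries non-empty text.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        elements = driver.find_elements("xpath", xpath)
        if elements and elements[0].text.strip():
            return elements[0].text
        time.sleep(poll)
    raise TimeoutError(f"no text at {xpath!r} after {timeout} s")
```

Using find_elements (plural) here avoids a NoSuchElementException while the page is still loading.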

Trying to scroll inside a div with Selenium; the scroller function only scrolls up to a certain amount and then just stops

I want to get a list of all the list items inside a div with a scroller. They are not loaded at once when the page loads; rather, the items are loaded dynamically as the user scrolls down (until there are no elements left). So, this is the scroller script I tried to implement:
def scroller():
    userList = None
    prev = 0
    while True:
        time.sleep(5)
        userList = WebDriverWait(browser, 50).until(
            EC.presence_of_all_elements_located((By.CLASS_NAME, '<class of list item>'))
        )
        n = len(userList)
        if n == prev:
            break
        prev = n
        # scroll the last element in the list into view
        userList[-1].location_once_scrolled_into_view
This function scrolls the list up to a certain length, but doesn't reach the full set of elements (not even half). Can someone please suggest a better way to do this?
Thank you
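One common fix is to scroll the container itself (not the window) and keep going until its scrollHeight stops growing, with a few retries in case the loader is just slow. A sketch of that idea; the element passed in is assumed to be the scrollable div, and the pause length is a guess you may need to tune:

```python
import time

def scroll_to_end(driver, container, pause=2.0, max_stalls=3):
    # Repeatedly jump the container's scrollbar to the bottom and wait;
    # stop only after the height stays unchanged max_stalls times in a
    # row, so one slow network response doesn't end the loop early.
    last_height = driver.execute_script(
        "return arguments[0].scrollHeight", container)
    stalls = 0
    while stalls < max_stalls:
        driver.execute_script(
            "arguments[0].scrollTop = arguments[0].scrollHeight", container)
        time.sleep(pause)  # give the lazy loader time to append items
        new_height = driver.execute_script(
            "return arguments[0].scrollHeight", container)
        stalls = stalls + 1 if new_height == last_height else 0
        last_height = new_height
    return last_height
```

After it returns, a single find_elements call on the list-item class should see every loaded item.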

How to get elements with the same class name separately using css_selector?

I am using the below code to get data from http://www.bddk.org.tr/BultenHaftalik. Two table elements have the same class name. How can I get just one of them?
from selenium import webdriver
import time
driver_path = "C:\\Users\\Bacanli\\Desktop\\chromedriver.exe"
browser = webdriver.Chrome(driver_path)
browser.get("http://www.bddk.org.tr/BultenHaftalik")
time.sleep(3)
Krediler = browser.find_element_by_xpath("//*[@id='tabloListesiItem-253']/span")
Krediler.click()
elements = browser.find_elements_by_css_selector("td.ortala")
for element in elements:
    print(element.text)
browser.close()
If you want to select only the rows of one column that match a specific CSS selection, you can use the :nth-child() selector.
Simply, the code will be like this:
elements = browser.find_elements_by_css_selector("td.ortala:nth-child(2)")
In this way, you will get the "Krediler" column rows only. You can also select the first child if you want to by applying the same idea.
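Another option is to scope the search: grab both same-class tables with find_elements, index into the one you want, and then look for cells inside that element only. A sketch of the idea, with the table selector assumed since the page's exact markup isn't shown:

```python
def column_texts(browser, table_index, column):
    # find_elements returns matches in document order, so [0] is the
    # first table and [1] the second; searching from that element keeps
    # cells belonging to the other table out of the results.
    tables = browser.find_elements("css selector", "table")  # selector assumed
    table = tables[table_index]
    cells = table.find_elements("css selector", f"td.ortala:nth-child({column})")
    return [cell.text for cell in cells]
```

Calling find_elements on a WebElement (instead of the driver) restricts the search to that element's subtree.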
I guess what you want to do is to extract the text and not the numbers, try this:
elements = []
for i in range(1, 21):
    css_selector = f'#Tablo > tbody:nth-child(2) > tr:nth-child({i}) > td:nth-child(2)'
    element = browser.find_element_by_css_selector(css_selector)
    elements.append(element)
for element in elements:
    print(element.text)
browser.close()

python selenium get text from element

How would I get the text "Premier League (ENG 1)" extracted from this HTML tree? (marked part)
I tried to get the text with XPath, CSS selector, class... but I can't seem to get this text extracted.
Basically I want to create a list, go over all class="with-icon" elements whose text contains a league name, and append that text to the list.
This was my last attempt:
def scrape_test():
    alleligen = []
    # click the dropdown menu to open the folder with all the leagues
    league_dropdown_menue = driver.find_element_by_xpath('/html/body/main/section/section/div[2]/div/div[2]/div/div[1]/div[1]/div[7]/div')
    league_dropdown_menue.click()
    time.sleep(1)
    # get text from all elements that contain a league as text
    leagues = driver.find_elements_by_css_selector('body > main > section > section > div.ut-navigation-container-view--content > div > div.ut-pinned-list-container.ut-content-container > div > div.ut-pinned-list > div.ut-item-search-view > div.inline-list-select.ut-search-filter-control.has-default.has-image.is-open.active > div > ul > li:nth-child(3)')
    # append to list
    alleligen.append(leagues)
    print(alleligen)
But I don't get any output.
What am I missing here?
(I am new to coding)
try this
path = "//ul[@class='inline-list']//li[position()=2]"
element = WebDriverWait(driver, 5).until(EC.presence_of_element_located((By.XPATH, path))).text
print(element)
The path specifies the element you want to target. The leading // in the path means the element is not necessarily the first element on the page and can exist anywhere in it. li[position()=2] states that you are interested in the li tag after the first li.
The WebDriverWait waits up to a specified number of seconds (in this case, 5) for the element to be present. You might want to put the WebDriverWait inside a try block.
The .text at the end extracts the text from the tag; in this case it is the text you want, Premier League (ENG 1).
Can you try:
leagues = driver.find_elements_by_xpath("//li[@class='with-icon' and contains(text(), 'League')]")
for league in leagues:
    alleligen.append(league.text)
print(alleligen)
If you know that your locator will remain on the same position in that list tree, you can use the following where the li element is taken based on its index:
locator = "//ul[@class='inline-list']//li[2]"
element = WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.XPATH, locator))).text

How do I click on 2nd html element which is a duplicate of first element using selenium and python?

I want to click on an element that is copied throughout the website (it is a button), but how do I click on, let's say, the second button rather than the first?
Here is the code of the button I want to click:
SHOP NOW
However, the issue is that the button may be greyed out if the item is not in stock, so I don't want to click it in that case.
As a result, here is all of my code:
def mainclick(website):
    while True:
        time.sleep(1)
        price_saved = [i.text.replace('$', "").replace(',', '') for i in driver.find_elements_by_css_selector('[itemprop=youSave]')]
        print(price_saved)
        for g in range(len(price_saved)):
            a = g + 1
            if float(price_saved[g]) > 200:
                try:
                    driver.find_element_by_link_text("SHOP NOW")[a].click()
                    time.sleep(3)
                    try:
                        driver.find_element_by_id("addToCartButtonTop").click()
                        driver.execute_script("window.history.go(-1)")
                    except:
                        driver.execute_script("window.history.go(-1)")
                except:
                    print("couldn't click")
            print(a)
        driver.find_element_by_link_text("Next Page").click()
    print("all pages done")
# starts time
start_time = time.time()
mainweb = "https://www.lenovo.com/us/en/outletus/laptops/c/LAPTOPS?q=%3Aprice-asc%3AfacetSys-Memory%3A16+GB%3AfacetSys-Processor%3AIntel%C2%AE+Core%E2%84%A2+i7%3AfacetSys-Processor%3AIntel%C2%AE+Core%E2%84%A2+i5%3AfacetSys-Memory%3A8+GB&uq=&text=#"
driver.get(mainweb)
mainclick(mainweb)
I tried using [a] to click on a certain one, but it doesn't seem to work. Also, the href of the SHOP NOW button might change based on the product.
You can collect the elements using .find_elements*.
elements = driver.find_elements_by_link_text('insert_value_here')
elements[0].click()
The above example clicks the first element; replace the index [0] with the one you want.
If you are sure that you always want to click on the 2nd button, try using the below XPath:
(//*[@class='button-called-out button-full facetedResults-cta'])[2]
If the count of buttons is not the same every time (some may be greyed out), try using find_elements:
buttons = driver.find_elements_by_xpath("//*[@class='button-called-out button-full facetedResults-cta']")
count = len(buttons)
Append that count to the XPath in place of the '2' dynamically, and you can click on the second/first non-greyed button.
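In Python, the same idea with a greyed-out filter might look like this. The assumption (not confirmed by the question's HTML) is that disabled buttons carry a "disabled" marker in their class attribute:

```python
SHOP_NOW_CSS = "a.button-called-out.button-full.facetedResults-cta"

def click_nth_active(driver, n):
    # Gather every SHOP NOW button, drop the greyed-out ones,
    # then click the n-th remaining button (0-based index).
    buttons = driver.find_elements("css selector", SHOP_NOW_CSS)
    active = [b for b in buttons
              if "disabled" not in (b.get_attribute("class") or "")]
    active[n].click()
    return len(active)
```

Filtering before indexing means the index always refers to clickable buttons, regardless of how many are out of stock on a given page.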
You can use XPath with an index a:
driver.find_element_by_xpath("(//a[.='SHOP NOW'])[{}]".format(a))
Note that the first element has index 1.
