How to apply multiple conditional statements in Selenium WebDriver (Python)

I am fairly new to Selenium. I am trying to find a Contact link element on a page and click it if it exists. The catch is that the link text varies: often it is in all caps (CONTACT), sometimes Contact, and sometimes contact. So I've stored these variants in variables and I'm using find_element_by_partial_link_text to find the right element and click it, with try/except exception handling and an if/elif chain to check each condition. This is my code:
from selenium import webdriver
import time
from selenium.common.exceptions import NoSuchElementException, StaleElementReferenceException
browser = webdriver.Chrome()
browser.implicitly_wait(30)
browser.maximize_window()
ab = 'Contact'
bc = 'CONTACT'
cd = 'contact'
browser.get('https://www.dominos.co.in/store-location/pune')
try:
    if browser.find_element_by_partial_link_text(ab).is_displayed():
        browser.find_element_by_partial_link_text(ab).click()
    elif browser.find_element_by_partial_link_text(bc).is_displayed():
        browser.find_element_by_partial_link_text(bc).click()
    elif browser.find_element_by_partial_link_text(cd).is_displayed():
        browser.find_element_by_partial_link_text(cd).click()
except NoSuchElementException:
    print("No such element found")
browser.close()
So if the Contact element is present on a page, this code is able to click it, but if one of the other two variants is present instead, it goes straight into the except block and prints "No such element found". If you could help me tackle this scenario, I would really appreciate it :)

Use XPath with translate() to ignore the case of the text in the HTML. You can also use find_elements (plural), which returns an empty list instead of raising, so you avoid the try/except:
elements = browser.find_elements_by_xpath('//a[translate(text(),"ABCDEFGHIJKLMNOPQRSTUVWXYZ","abcdefghijklmnopqrstuvwxyz") = "contact"]')
if elements and elements[0].is_displayed():
    elements[0].click()

Try using the translate() function in XPath; it helps you deal with case sensitivity.
Hope this helps.
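A minimal sketch of that idea, also using contains() so that partial matches such as "Contact Us" are picked up (untested against the site in the question):
links = browser.find_elements_by_xpath(
    '//a[contains(translate(text(), "ABCDEFGHIJKLMNOPQRSTUVWXYZ", "abcdefghijklmnopqrstuvwxyz"), "contact")]')
if links and links[0].is_displayed():
    links[0].click()
else:
    print("No such element found")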

Related

Selenium not finding list of sections with classes?

I am attempting to get a list of games on
https://www.xbox.com/en-US/live/gold#gameswithgold
According to Firefox's dev console, it seems that I found the correct class: https://i.imgur.com/M6EpVDg.png
In fact, since there are 3 games, I am supposed to get a list of 3 objects with this code: https://pastebin.com/raw/PEDifvdX (the wait is so Selenium can load the page).
But in fact, Selenium says it does not exist: https://i.imgur.com/DqsIdk9.png
I do not get what I am doing wrong. I even tried a CSS selector like this:
listOfGames = driver.find_element_by_css_selector("section.m-product-placement-item f-size-medium context-game gameDiv")
Still nothing. What am I doing wrong?
You are trying to get three different games, so you need to give a different element path for each, or you can use a loop like this one:
i = 1
while i < 4:
    link = f"//*[@id='ContentBlockList_11']/div[2]/section[{i}]/a/div/h3"
    listGames = str(driver.find_element_by_xpath(link).text)
    print(listGames)
    i += 1
You can use this kind of loop anywhere there is only a slight difference in the XPath, CSS selector, or class; it loops over the web elements one by one and builds the list of games. Since you are trying to get the name, you need .text, which gets you only the name and nothing else.
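As a side note, the CSS selector in the question treats the class names as separate descendant tags; chaining them with dots instead matches a single element that carries all four classes. A sketch of the corrected call:
listOfGames = driver.find_elements_by_css_selector(
    "section.m-product-placement-item.f-size-medium.context-game.gameDiv")
Note find_elements (plural), which returns a list of all matches rather than a single element.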
Another option is a selector that isn't looped over and changed, one that's less dependent on the page structure and a little easier to read:
//a[starts-with(@data-loc-link,'keyLinknowgame')]//h3
Here's sample code:
from selenium import webdriver
from selenium.common.exceptions import StaleElementReferenceException

driver = webdriver.Chrome()
url = "https://www.xbox.com/en-US/live/gold#gameswithgold"
driver.get(url)
driver.implicitly_wait(10)
listOfGames = driver.find_elements_by_xpath("//a[starts-with(@data-loc-link,'keyLinknowgame')]//h3")
for game in listOfGames:
    try:
        print(game.text)
    except StaleElementReferenceException:
        pass
If you're after more than just the title, remove the //h3 selection:
//a[starts-with(@data-loc-link,'keyLinknowgame')]
And add whatever additional XPath you want to narrow things down to the content/elements that you're after.
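For example, a hypothetical sketch that pulls each game's title together with the link URL from the anchor itself:
cards = driver.find_elements_by_xpath("//a[starts-with(@data-loc-link,'keyLinknowgame')]")
for card in cards:
    title = card.find_element_by_xpath(".//h3").text  # relative XPath within the card
    print(title, card.get_attribute("href"))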

My if/else expression is not working and I don't know why

My code:
website = browser.find_element_by_link_text('Website')
if website:
    website.click()
else:
    print('no website')
What I am trying to do is click the button if it is available on the page. If the button isn't available, I want it to print no website to the console and proceed to the next step.
I do not know what I am doing wrong; does anyone know how to fix this?
Thanks in advance, I am new to coding!
Before you can find an element, you first need to visit a website.
import os
from selenium import webdriver

driver = webdriver.Firefox()
url = os.getenv('VISIT_URL', 'https://www.example.com')
print('Accessing web site: {}'.format(url))
driver.get(url)
# from here onwards you can access browser elements like buttons, links, etc.
You are giving an instance of WebElement to the if condition, where a boolean expression is expected.
1) Try checking for presence first using find_elements_by_link_text() # note: _elements_, plural
if len(driver.find_elements_by_link_text('Website')) > 0:
    driver.find_element_by_link_text('Website').click()
2) Or use expected_conditions to check whether the element is available (see the sketch after this list); expected_conditions documentation
3) Or use a try/except block:
try:
    website = driver.find_element_by_link_text('Website')
    website.click()
except NoSuchElementException:
    print('no website')  # code to execute if the expected element is not available
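A sketch of option 2 with an explicit wait (assumes the 'Website' link text from the question):
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import TimeoutException

try:
    website = WebDriverWait(driver, 10).until(
        EC.element_to_be_clickable((By.LINK_TEXT, 'Website')))
    website.click()
except TimeoutException:
    print('no website')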

How to access text inside div tags using Selenium in Python?

I am trying to make a program in Python using Selenium which prints out the quotes from https://www.brainyquote.com/quote_of_the_day
EDIT:
I was able to access the quotes and the associated authors like so:
authors = driver.find_elements_by_css_selector("""div.col-xs-4.col-md-4 a[title="view author"]""")
for quote, author in zip(quotes, authors):
    print('Quote: ', quote.text)
    print('Author: ', author.text)
I was not able to combine the topics in the same way. Doing
total_topics = driver.find_elements_by_css_selector("""div.col-xs-4.col-md-4 a.qkw-btn.btn.btn-xs.oncl_list_kc""")
produces an undesired list.
Earlier I was using Beautiful Soup, which did the job perfectly except that the requests library can only access the static page. However, I wanted to scroll the website continuously to keep loading new quotes, so I'm trying to use Selenium for that purpose.
This is how I did it using Soup:
for quote_data in soup.find_all('div', class_='col-xs-4 col-md-4'):
    quote = quote_data.find('a', title='view quote').text
    print('Quote: ', quote)
However, I am unable to find the same using Selenium.
My code in Selenium for basic testing:
driver.maximize_window()
driver.get('https://www.brainyquote.com/quote_of_the_day')
elem = driver.find_element_by_tag_name("body")
elem.send_keys(Keys.PAGE_DOWN)
time.sleep(0.2)
quote = driver.find_element_by_xpath('//div[@title="view quote"]')
I also tried CSS selectors:
print(driver.find_element_by_css_selector('div.col-xs-4 col-md-4'))
The latter gave a NoSuchElementException and the former is not giving any output at all. I would love some tips on where I am going wrong and how I can tackle this.
Thanks!
quotes = driver.find_elements_by_xpath('//a[@title="view quote"]')
First scroll to the bottom.
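A minimal sketch of that scroll using execute_script (assuming driver is already on the quotes page):
driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
quotes = driver.find_elements_by_xpath('//a[@title="view quote"]')
print(len(quotes))  # more quotes should appear after each scroll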
You might need to write some kind of loop to scroll and click on the quote links until no more elements are found. Here's an outline of how I would do that:
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver.get('https://www.brainyquote.com/quote_of_the_day')
while True:
    # wait for all quote elements to appear
    quote_links = WebDriverWait(driver, 10).until(
        EC.presence_of_all_elements_located((By.XPATH, "//a[@title='view quote']")))
    # todo - need to check for the end condition; the page has infinite scrolling
    # break
    # iterate the quote elements until we reach the end of this list
    for quote_link in quote_links:
        quote_link.click()
        driver.back()
        # now quote_links has gone stale because we are on a different page
        quote_links = WebDriverWait(driver, 10).until(
            EC.presence_of_all_elements_located((By.XPATH, "//a[@title='view quote']")))
The above code enters a loop that searches for all of the 'view quote' links on the page. Then we iterate the list of links and click each one. At this point the elements in the quote_links list have gone stale because we are on a different page, so we re-find the elements with WebDriverWait before clicking another link.
This is just a rough outline: some extra work is needed to determine an end case for the infinite scrolling of the page, and you will need to write the operations to perform on the quote pages themselves, but hopefully you see the idea here.
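For the end condition, one common pattern (a sketch, not specific to this site) is to scroll to the bottom repeatedly and stop once the page height stops growing:
import time

last_height = driver.execute_script("return document.body.scrollHeight")
while True:
    driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
    time.sleep(2)  # give the page time to load more quotes
    new_height = driver.execute_script("return document.body.scrollHeight")
    if new_height == last_height:
        break  # height stopped growing; assume there is no more content
    last_height = new_height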

How to use Selenium to click through multiple elements while avoiding Stale Element Error

I'm working on making somewhat of a site map/tree (using anytree), and in order to do so, I need Selenium to find particular elements on a page (representing categories) and then systematically click through these elements, looking for new categories on each new page until we hit no more categories, i.e. all leaves, and the tree is populated.
I have much of this already written. My issue arises when trying to iterate through my elements list. I currently try to populate the tree depth-first, going down to the leaves and then popping back up to the original page to continue with the next element in the list. This, however, results in a StaleElementReferenceException because my page reloads. What is a workaround for this? Can I somehow open the new links in a new window so that the old page is preserved? The only fixes I have found for that exception neatly catch it, but that doesn't help me.
Here is my code so far (the issue lies in the for loop):
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from anytree import Node, RenderTree

def findnodes(driver):
    driver.implicitly_wait(5)
    try:
        nodes = driver.find_elements_by_css_selector('h3.ng-binding')
    except:
        nodes = []
    return nodes

def populateTree(driver, par):
    url = driver.current_url
    pages = findnodes(driver)
    if len(pages) > 0:
        for page in pages:
            print(page.text)
            Node(page.text, parent=par)
            page.click()
            populateTree(driver, page.text)
            driver.get(url)

driver = webdriver.Chrome()
# Get starting page
main = 'http://www.example.com'
root = Node(main)
driver.get(main)
populateTree(driver, root)
for pre, fill, node in RenderTree(root):
    print("%s%s" % (pre, node.name))
I haven't worked in Python, but I have worked with Java/Selenium, and I can give you the idea for overcoming staleness.
Generally we get a stale element exception if the element's attributes (or the surrounding DOM) change after the WebElement was obtained. For example, a user who clicks the same element on the same page after a page refresh gets a StaleElementReferenceException.
To overcome this, we can create a fresh WebElement whenever the page is changed or refreshed. The code below can give you some idea (it's in Java, but the concept is the same).
Example:
WebElement element = driver.findElement(By.xpath("//*[@id='StackOverflow']"));
element.click();
// page is refreshed
element.click(); // this will obviously throw a stale element exception
To overcome this, we can store the XPath in a String and use it to create a fresh WebElement as we go:
String xpath = "//*[@id='StackOverflow']";
driver.findElement(By.xpath(xpath)).click();
// page has been refreshed; now create a new element and work on it
driver.findElement(By.xpath(xpath)).click(); // this works
Hope this helps you.
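The same idea in Python, for reference (a sketch: keep the locator string and re-find the element after every page change instead of reusing the old reference):
xpath = "//*[@id='StackOverflow']"  # hypothetical locator, mirroring the Java example
driver.find_element_by_xpath(xpath).click()
# page is refreshed; re-create the element from the stored locator
driver.find_element_by_xpath(xpath).click()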
The xpath variable is not supposed to be a star; it's an XPath to the desired elements. The stale exception appears because we click something in the browser, which requires finding all the elements again after each click. So in each loop iteration we find all the elements with driver.find_elements_by_xpath(xpath), which gives us a list, but we only need one of them, so we take the element at the specific index idx, which ranges from 0 to the number of elements.
xpath = '*'
for idx in range(len(driver.find_elements_by_xpath(xpath))):
    element = driver.find_elements_by_xpath(xpath)[idx]
    element.click()

Selenium Page Source is Missing Elements

I have a basic Selenium script that uses the chromedriver binary. I'm trying to display a page with reCAPTCHA on it, hang until the challenge has been completed, and then store the result in a variable for future use.
The roadblock I'm hitting is that I am unable to find the recaptcha element.
#!/bin/env python2.7
import os
from selenium import webdriver
driverBin = os.path.expanduser("~/Desktop/chromedriver")
driver = webdriver.Chrome(driverBin)
driver.implicitly_wait(5)
driver.get('http://patrickhlauke.github.io/recaptcha/')
Is there anything special needed to be able to see this element?
Also, is there a way to grab the token after the user solves the captcha, without refreshing the page?
As it is now, the input with id recaptcha-token has type hidden. After the captcha is solved, a second recaptcha-token element is created; its value is what I wish to store in a variable. I was thinking of looping and checking the number of elements found with that id, and parsing once there is more than one, but I'm unsure whether the page source updates at all.
UPDATE:
With more research, it comes down to the nature of the element, particularly the tag <input type="hidden">. So, to rephrase my question: how does one extract the value of a hidden element?
The element you are looking for (the input) is in an iframe. You'll need to switch to the iframe before you can locate the element and interact with it.
from selenium import webdriver

driver = webdriver.Chrome()
try:
    driver.implicitly_wait(5)
    driver.get('http://patrickhlauke.github.io/recaptcha/')
    # Find the iframe and switch to it
    iframe_path = '//iframe[@title="recaptcha widget"]'
    iframe = driver.find_element_by_xpath(iframe_path)
    driver.switch_to.frame(iframe)
    # Find the input element
    input_elem = driver.find_element_by_id("recaptcha-token")
    print("Found the input element: ", input_elem)
finally:
    driver.quit()
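The question also asks how to read the hidden input's value. Hidden elements can't be interacted with, but get_attribute() still reads their attributes, so a minimal sketch (assuming the token field has been populated by that point):
token = input_elem.get_attribute("value")
print("Token value:", token)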
