Python find_elements count mismatch

I was having trouble with find_elements by XPath in a Selenium script in Python.
I couldn't understand the problem; I want to identify all of the elements that satisfy my XPath criteria.
So I simply compared the result of find_elements(By.XPATH, '//div') to the result of the same search in Chrome's Inspect tool.
The tool reports 300+ results for this, whereas my program, which I got to return a count of results, only reports about 60. Any ideas? It looks to be an issue on the browser/site side rather than in my code.
Thanks
EDIT: added my code:
import time
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.select import Select
from selenium.webdriver.support.ui import WebDriverWait
from selenium.common.exceptions import StaleElementReferenceException
from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.support import expected_conditions as EC
import login
print(time.time())
driver = webdriver.Chrome()  # assumed; the original snippet never creates the driver
driver.get("WEBSITE URL")
WebDriverWait(driver, 20).until(EC.visibility_of_element_located((By.XPATH, "//body")))
time.sleep(1)
links = driver.find_elements(By.XPATH,'//body')
linklist = len(links)
print(linklist)
This returns a count of around 60; however, when '//body' is put into the Inspect tool, it returns 300+.
Apologies, but the targeted web page is confidential, so I can't post it. Suggestions are appreciated so I can investigate.

Does your web page have a scroll bar? If yes, you may need to scroll to trigger lazy loading. Also try different conditions for WebDriverWait, for example EC.visibility_of_all_elements_located or EC.presence_of_all_elements_located.
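If the page lazy-loads content, elements are only attached to the DOM as you scroll, so Selenium counts fewer elements than DevTools does on the fully loaded page. A minimal sketch of scrolling until the page stops growing before counting (the URL and the '//div' XPath are placeholders, as in the question):
import time
from selenium import webdriver
from selenium.webdriver.common.by import By
driver = webdriver.Chrome()
driver.get("WEBSITE URL")  # placeholder, as in the question
# Scroll until the document height stops growing, so lazy-loaded
# elements are attached to the DOM before we count them.
last_height = driver.execute_script("return document.body.scrollHeight")
while True:
    driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
    time.sleep(1)  # give new content time to load
    new_height = driver.execute_script("return document.body.scrollHeight")
    if new_height == last_height:
        break
    last_height = new_height
print(len(driver.find_elements(By.XPATH, "//div")))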

Related

How to locate the pagination select button using Python Selenium

Code trials:
from selenium import webdriver
from selenium.webdriver.support.ui import Select
from selenium.webdriver.common.by import By
from bs4 import BeautifulSoup
import pandas as pd
driver = webdriver.Firefox()
url = r"https://www.nba.com/stats/players/advanced"
driver.get(url)
select = Select(driver.find_element(By.XPATH, r"/html/body/div[2]/div[2]/div[2]/div[3]/section[2]/div/div[2]/div[2]/div[1]/div[3]/div/label/div/select"))
select.select_by_index(0)
No matter what I try, I cannot find this full XPath. I just want the code to recognise the little button that goes from page 1 to "All" to view all player stats on a single page.
I've looked into similar questions but have been unable to solve it.
When it seems that the path is not working, a good way to start solving the problem is to gradually remove tags from the left of the XPath while testing in the inspector tool. By removing /html/body/div[2] from your XPath I was able to find the element in the HTML:
xpath = "//div[2]/div[2]/div[3]/section[2]/div/div[2]/div[2]/div[1]/div[3]/div/label/div/select"
select = Select(driver.find_element(By.XPATH, xpath))
which, if I understood correctly, is the element you are after.
To recognise the little button that goes from page 1 to All and select an option within the website, you can use the following locator strategies:
from selenium import webdriver
from selenium.webdriver.support.ui import Select
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
driver = webdriver.Chrome()  # any driver works; added so the snippet runs standalone
driver.get('https://www.nba.com/stats/players/advanced')
# Dismiss the cookie-consent banner first, then select the pagination option.
WebDriverWait(driver, 5).until(EC.element_to_be_clickable((By.CSS_SELECTOR, "button#onetrust-accept-btn-handler"))).click()
Select(driver.find_element(By.XPATH, "//div[starts-with(@class, 'Pagination')]//div[contains(., 'Page')]//following::div[1]//select")).select_by_index(0)

Interacting with "Klarna checkout" radio button/field using Selenium and Python

I'm trying to interact with the fields of the Klarna Checkout on a website, but I'm not able to change the selected radio buttons nor enter information in the credit card fields. I'm using VS Code, Python and Selenium WebDriver.
The website destination is: https://voltfashion.com/no/functional/kassen/
You need to add an item in order to see the "Klarna checkout section".
It is a Norwegian website.
I have tried some different coding solutions, but none of them worked. I didn't have any problems interacting with the other elements of the website, just the Klarna checkout section. Any suggestions?
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
import time
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
Some of the code I've tried:
cardForm = driver.find_element_by_id("cardNumber")
cardForm.send_keys('1234567890987654')
and
inputCC = WebDriverWait(driver, 5).until(
    lambda driver: driver.find_element_by_xpath("//input[@id='cardNumber']")
)
inputCC.send_keys("1234567890987654")
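One likely cause: Klarna typically renders its checkout inside an iframe, and Selenium cannot reach elements inside a frame until you switch into it. A minimal sketch under that assumption; the iframe id below is an assumption, not taken from the site:
# Switch into the (assumed) Klarna iframe before touching its fields.
# The iframe id and the field id are assumptions, not confirmed on the site.
WebDriverWait(driver, 10).until(
    EC.frame_to_be_available_and_switch_to_it((By.ID, "klarna-checkout-iframe")))
cardForm = driver.find_element(By.ID, "cardNumber")
cardForm.send_keys("1234567890987654")
driver.switch_to.default_content()  # switch back to the main page afterwards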

How to extract a website from a Google Maps search

I need exact company information such as name, website URL, etc., after a search on Google Maps. Everything is OK, but at the last step I cannot locate the website element. The website opens after you click the image,
but I cannot find where the URL element is located.
Can anyone help locate the website URL element? I use Python Selenium; only the last step is missing. You can check the finished code below:
from selenium import webdriver
from time import sleep
driver=webdriver.Firefox()
driver.get('https://www.google.com/maps/@35.780287,104.1374349,4z')
sleep(8)
place=driver.find_element_by_class_name('tactile-searchbox-input')
place.send_keys('oil and gas solutions+UAE')
sleep(8)
submit=driver.find_element_by_id('searchbox-searchbutton')
submit.click()
Please find the working code below. Instead of sleep you should use WebDriverWait. Always use explicit waits; it's good practice.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.wait import WebDriverWait
driver = webdriver.Chrome()
driver.get('https://www.google.co.in/maps')
SearchTextbox = driver.find_element_by_id("searchboxinput")
SearchTextbox.send_keys("cafe coffee day")
SearchTextbox.send_keys(Keys.ENTER)
All_SearchResults = WebDriverWait(driver, 20).until(
    EC.presence_of_all_elements_located((By.XPATH, '//div[@data-value="Website"]/button')))
for CCD in All_SearchResults:
    print(CCD.get_attribute("aria-label"))
print("end...")
Do let me know if this is what you are looking for. And if it is then please mark it as answer.
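If you need the actual URL rather than the button's aria-label, one option (a sketch, assuming the button opens the site in a new tab) is to click it and read the new tab's address:
# Click the first "Website" button and read the URL of the tab it opens.
# Assumes the button opens the site in a new tab; adjust if it navigates in place.
original = driver.current_window_handle
All_SearchResults[0].click()
WebDriverWait(driver, 10).until(lambda d: len(d.window_handles) > 1)
for handle in driver.window_handles:
    if handle != original:
        driver.switch_to.window(handle)
        print(driver.current_url)
        driver.close()
driver.switch_to.window(original)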

Couldn't click on element using XPath with Selenium in Python

I'm trying out the Selenium framework to learn something about web browser automation. Therefore, I decided to configure an Audi model...
My code so far:
from selenium import webdriver
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
chrome_driver = webdriver.Chrome(executable_path=r"chromedriver_win32\chromedriver.exe")
chrome_driver.get("https://www.audi.de/de/brand/de/neuwagen.html")
# choose A3 model
chrome_driver.find_element_by_xpath('//*[@id="list"]/div[1]/ul/li[2]/a[2]').click()
# choose sedan version
WebDriverWait(chrome_driver, 3).until(EC.visibility_of_element_located((By.XPATH, '//*[@id="a3limo"]/div/a'))).click()
# start model configuration
WebDriverWait(chrome_driver, 3).until(EC.visibility_of_element_located((By.XPATH, '/html/body/div[1]/div[2]/div/div[6]/div[2]/div/div[1]/ul/li[1]/a'))).click()
# choose the s-line competition package
WebDriverWait(chrome_driver, 3).until(EC.visibility_of_element_located((By.XPATH, '/html/body/div[1]/div[2]/div/div[7]/div[2]/div[2]/div[3]/div[1]/div/div[1]/div/div[1]/span/span'))).click()
# accept the s-line competition package
WebDriverWait(chrome_driver, 3).until(EC.visibility_of_element_located((By.XPATH, '/html/body/div[5]/div/div/div/div/div/ul[2]/li[2]/a'))).click()
Now, the code fails already on the "start model configuration" line (see the "Konfiguration starten" button on the page https://www.audi.de/de/brand/de/neuwagen/a3/a3-limousine-2019.html). The XPath must be correct and the element should also be visible, so what am I doing wrong here?
Actually, the required link becomes visible only after scrolling the page down.
Try to use this piece of code to click link:
configuration_start = chrome_driver.find_element_by_xpath('//a[@title="Konfiguration starten"]')
chrome_driver.execute_script('arguments[0].scrollIntoView();', configuration_start)
configuration_start.click()
As the navigation panel has a fixed position, it might overlap the target link, so you can change the navigation panel's style before handling the link:
nav_panel = chrome_driver.find_element_by_xpath('//div[@data-module="main-navigation"]')
chrome_driver.execute_script('arguments[0].style.position = "absolute";', nav_panel)
The following seems to work for me. I have added waits for elements and used a simulated click of the element via JavaScript.
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
url = 'https://www.audi.de/de/brand/de/neuwagen.html'
d = webdriver.Chrome()
d.get(url)
WebDriverWait(d,10).until(EC.presence_of_element_located((By.CSS_SELECTOR, '[data-filterval="a3"]'))).click()
d.get(WebDriverWait(d,10).until(EC.presence_of_element_located((By.CSS_SELECTOR, '#a3limo .mf-model-details a'))).get_attribute('href'))
element = WebDriverWait(d,10).until(EC.presence_of_element_located((By.CSS_SELECTOR, '[title="Konfiguration starten"]')))
d.execute_script("arguments[0].click();", element)

Python expected_conditions don't always work

I'm having an odd problem trying to web scrape some data from ESPN. I have the below code that sometimes works as intended, but sometimes will get hung up trying to log in. It really seems random, and I'm not sure what's going on.
import time
from selenium import webdriver
from selenium.common.exceptions import TimeoutException
from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
driver = webdriver.Chrome(r'Source')
driver.get("http://games.espn.go.com/ffl/signin")
WebDriverWait(driver,1000).until(EC.presence_of_all_elements_located((By.XPATH,"(//iframe)")))
frms = driver.find_elements_by_xpath("(//iframe)")
driver.switch_to_frame(frms[2])
time.sleep(2)
WebDriverWait(driver,10).until(EC.presence_of_element_located((By.XPATH,'(//input)[1]')))
driver.find_element_by_xpath("(//input)[1]").send_keys("Username")
driver.find_element_by_xpath("(//input)[2]").send_keys("password")
driver.find_element_by_xpath("//button").click()
driver.switch_to_default_content()
time.sleep(2)
When I run the code as is, it often times out during the second WebDriverWait, despite the page having fully loaded in Chrome. If I take that line out, I then get an error message that reads:
"selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element:"
Have you checked this XPath? Apparently it is not correct; you can test it in the browser console:
$x('yourxpath')
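Beyond verifying the XPath, the intermittent timeouts look like a race with the login iframe. A sketch of a more deterministic wait, reusing the question's imports and driver (the frame index 2 is taken from the question's code):
# Wait for the iframes, then switch only once the target frame is actually
# available, instead of relying on a fixed sleep.
WebDriverWait(driver, 30).until(
    EC.presence_of_all_elements_located((By.XPATH, "//iframe")))
frms = driver.find_elements_by_xpath("//iframe")
WebDriverWait(driver, 30).until(
    EC.frame_to_be_available_and_switch_to_it(frms[2]))  # index from the question
WebDriverWait(driver, 10).until(
    EC.element_to_be_clickable((By.XPATH, "(//input)[1]"))).send_keys("Username")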
