I am trying to scrape names and odds for a project I am working on with Selenium 4, and I'm having an issue with the locator.
When I use driver.find_element(By.XPATH), the XPath I give it only seems to work when I have the inspect window open on that particular page. When I close the inspect window, it gives me a NoSuchElementException: no such element: Unable to locate element error.
Code:
from selenium import webdriver
from selenium.webdriver.common.by import By
driver = webdriver.Chrome()
url = 'https://www.bet365.com.au/#/AC/B13/C1/D50/E2/F163/'
driver.get(url)
Player1 = driver.find_element(By.XPATH, '/html/body/div[1]/div/div[4]/div[3]/div/div/div/div[1]/div/div/div[2]/div/div/div[1]/div[2]/div/div[1]/div[2]/div[1]/div/div[2]/div/div[1]/div').text
Player2 = driver.find_element(By.XPATH, '/html/body/div[1]/div/div[4]/div[3]/div/div/div/div[1]/div/div/div[2]/div/div/div[1]/div[2]/div/div[1]/div[2]/div[1]/div/div[2]/div/div[2]/div').text
Odds1 = driver.find_element(By.XPATH, '/html/body/div[1]/div/div[4]/div[3]/div/div/div/div[1]/div/div/div[2]/div/div/div[1]/div[2]/div/div[2]/div[2]/span').text
Odds2 = driver.find_element(By.XPATH, '/html/body/div[1]/div/div[4]/div[3]/div/div/div/div[1]/div/div/div[2]/div/div/div[1]/div[2]/div/div[3]/div[2]/span').text
print(f'{Player1}\t{Odds1}')
print(f'{Player2}\t{Odds2}')
Run just the section from Player1 onwards with the inspect window open and without it open. Hopefully you'll be able to replicate the issue.
I also ran
try:
    if driver.find_element(By.XPATH, '/html/body/div[1]/div/div[4]/div[3]/div/div/div/div[1]/div/div/div[2]/div/div/div[1]/div[2]/div/div[1]/div[2]/div[1]/div/div[2]/div/div[1]/div').text:
        print("yay")
except:
    print("nay")
and it seemed to show the same situation. The element couldn't be found without the inspect window open. See attached image for where I got the XPATHs from.
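For completeness, here is the same check written with an explicit wait (just a sketch, on the assumption that the row is merely slow to render rather than missing entirely), which makes the timing question easier to test:
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

player1_xpath = '/html/body/div[1]/div/div[4]/div[3]/div/div/div/div[1]/div/div/div[2]/div/div/div[1]/div[2]/div/div[1]/div[2]/div[1]/div/div[2]/div/div[1]/div'
try:
    # Give the page up to 20 seconds to render the element before giving up.
    element = WebDriverWait(driver, 20).until(
        EC.presence_of_element_located((By.XPATH, player1_xpath)))
    print("yay", element.text)
except Exception:
    print("nay")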
Many thanks in advance!
Related
Dear Stackoverflowers,
I'm trying to automate a CC payment process, but Selenium is having a hard time identifying a specific element I want to click on. I'm trying to click on 'REI Card - 6137' so that I can continue to the payment page. Using the inspect tool, the class shows as "soloLink accountNamesize". Unfortunately, there's no ID I can go after. When I try to search by class name I get this error in the console:
selenium.common.exceptions.NoSuchElementException: Message: Unable to locate element: .soloLink accountNamesize
Below is a picture of the site and the inspector pane with the thing I'm trying to click on highlighted in blue. Since it's my credit card and I'm already logged in, a link to the page wouldn't really help you guys.
The script gets hung up on "driver.find_element_by_class_name('soloLink accountNamesize').click()"
My code is below:
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
import yaml
import time
conf = yaml.load(open(r'D:\Users\Matt\Documents\GitHub\YML_Files\REI_Login_Credentials.yml'))
myREIUsername = conf['REILogin']['username']
myREIPassword = conf['REILogin']['password']
driver = webdriver.Firefox(
    executable_path=r'D:\Users\Matt\Documents\GitHub\Executable_Files\geckodriver.exe'
)
def login():
    driver.get('https://onlinebanking.usbank.com/Auth/Login?usertype=REIMC&redirect=login&lang=en&exp=')
    time.sleep(4)
    driver.find_element_by_id('aw-personal-id').send_keys(myREIUsername)
    driver.find_element_by_id('aw-password').send_keys(myREIPassword)
    time.sleep(2)
    driver.find_element_by_id('aw-log-in').click()
    time.sleep(15)
    make_payment()

def make_payment():
    if driver.find_element_by_class_name("accountRowLast").text != "0.00":
        driver.find_element_by_class_name('soloLink accountNamesize').click()
    else:
        driver.quit()
I've tried searching by XPath and XPath + class with no luck. I also tried searching for this issue, but it's a fairly unique class so I didn't have much luck. Do you have any other ideas I could try?
soloLink accountNamesize is multiple class names, so a class-name locator won't match it; use the following CSS selector instead to click on that element.
driver.find_element_by_css_selector('a.soloLink.accountNamesize').click()
To induce waits we do
WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.CSS_SELECTOR, "a.soloLink.accountNamesize"))).click()
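For reference, here is a minimal sketch of the same click in the Selenium 4 locator style used elsewhere in this thread (same a.soloLink.accountNamesize selector, nothing else assumed about the page):
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

# The compound class is expressed as a CSS selector because a class-name
# locator only accepts a single class name.
wait = WebDriverWait(driver, 10)
wait.until(EC.element_to_be_clickable(
    (By.CSS_SELECTOR, "a.soloLink.accountNamesize"))).click()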
Based on the photo, I think that this is the xpath that you might want
//div[@id='MyAccountsDiv']//div[@id='CreditsTableDiv']//tbody//tr[@class='accountRowFirst']//a[contains(@onclick, 'OpenAccountDashboard')]
As you can see, this XPath starts off with the top-most div that might be unique (MyAccountsDiv) and continues to dive into the HTML code.
Based off of this, you could click on the link with the following code
xpath = "//div[@id='MyAccountsDiv']//div[@id='CreditsTableDiv']//tbody//tr[@class='accountRowFirst']//a[contains(@onclick, 'OpenAccountDashboard')]"
driver.find_element(By.XPATH, xpath).click()
NOTE
Your error says
selenium.common.exceptions.NoSuchElementException: Message: Unable to locate element: [id="aw-personal-id"]
Maybe you can use the above technique and see if you can isolate the xpath for the web element instead.
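For example, a minimal sketch (assuming the login field really does have id="aw-personal-id", as in your own code and the quoted error) that waits for it before typing:
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

# Wait for the username field to be present in the DOM before interacting with it;
# NoSuchElementException often just means the page had not finished rendering yet.
username_field = WebDriverWait(driver, 15).until(
    EC.presence_of_element_located((By.ID, "aw-personal-id")))
username_field.send_keys(myREIUsername)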
I am scraping an angular.js site. My initial link has a search button. I find by xpath and click with no issues. After I click search, I want to be able to click each of the athletes in the table to go to their info pages, but I am not having success with the click method. The links are attached to their names.
from selenium import webdriver
from selenium.common.exceptions import TimeoutException
TIMEOUT = 5
driver = webdriver.Firefox()
driver.set_page_load_timeout(TIMEOUT)
url = 'https://n.rivals.com/search#?formValues=%7B%22sport%22:%22Football%22,%22recruit_year%22:2021,%22offer_and_visit_type%22:%5B%22Offer%22%5D,%22prospect_profiles.prospect_colleges.offer%22:true,%22page_number%22:1,%22page_size%22:50%7D'
try:
driver.get(url)
except TimeoutException:
pass
search_button = driver.find_element_by_xpath('//*[@id="articles"]/div/div[2]/div/div/div[1]/form/div[2]/div[5]/button')
search_button.click()
# below is where I tried, but could not get to click
first_athlete = driver.find_element_by_xpath('//*[@id="content_"]/td[1]/div[2]/a')
first_athlete.click()
Works if you remove the last /a in the xpath:
first_athlete = driver.find_element_by_xpath('//*[@id="content_"]/td[1]/div[2]')
first_athlete.click()
If you want to target a specific athlete and you have the athlete's name, you can use a CSS selector as well.
athlete = driver.find_element_by_css_selector('#content_ > td > div > a[href*="donovan-jackson"]')
athlete.click()
This will give you a unique web element for each player whose name you substitute into the href.
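If you instead want every athlete in the results table, a rough sketch (assuming each result row keeps the #content_ > td > div > a structure above) could collect the profile links first and then visit them:
# Collect the profile URLs up front; clicking while iterating would invalidate
# the element references as soon as the page navigates away.
athlete_links = driver.find_elements_by_css_selector('#content_ > td > div > a')
profile_urls = [link.get_attribute('href') for link in athlete_links]

for url in profile_urls:
    driver.get(url)
    # ... scrape the athlete's info page here ...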
Thanks
I am having trouble selecting a load more button on a Linkedin page. I receive this error in finding the xpath: selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element
I suspect that the issue is that the button is not visible on the page at that time. So I have tried actions.move_to_element. However, the page scrolls just below the element, so that the element is no longer visible, and the same error subsequently occurs.
I have also tried move_to_element_with_offset, but this hasn't changed where the page scrolls to.
How can I scroll to the right location on the page such that I can successfully select the element?
My relevant code:
import parameters
from time import sleep
from selenium.webdriver.common.action_chains import ActionChains
from selenium import webdriver
ChromeOptions = webdriver.ChromeOptions()
driver = webdriver.Chrome('C:\\Users\\Root\\Downloads\\chromedriver.exe')
driver.get('https://www.linkedin.com/login?fromSignIn=true&trk=guest_homepage-basic_nav-header-signin')
sleep(0.5)
username = driver.find_element_by_name('session_key')
username.send_keys(parameters.linkedin_username)
sleep(0.5)
password = driver.find_element_by_name('session_password')
password.send_keys(parameters.linkedin_password)
sleep(0.5)
sign_in_button = driver.find_element_by_xpath('//button[@class="btn__primary--large from__button--floating"]')
sign_in_button.click()
driver.get('https://www.linkedin.com/in/kate-yun-yi-wang-054977127/?originalSubdomain=hk')
loadmore_skills = driver.find_element_by_xpath('//button[@class="pv-profile-section__card-action-bar pv-skills-section__additional-skills artdeco-container-card-action-bar artdeco-button artdeco-button--tertiary artdeco-button--3 artdeco-button--fluid"]')
actions = ActionChains(driver)
actions.move_to_element(loadmore_skills).perform()
#actions.move_to_element_with_offset(loadmore_skills, 0, 0).perform()
loadmore_skills.click()
After playing around with it, I seem to have figured out where the problem is stemming from. The error
selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: {"method":"xpath","selector":"//button[@class="pv-profile-section__card-action-bar pv-skills-section__additional-skills artdeco-container-card-action-bar artdeco-button artdeco-button--tertiary artdeco-button--3 artdeco-button--fluid"]"}
(Session info: chrome=81.0.4044.113)
always correctly states the problem it is encountering: it is not able to find the element. The possible causes of this include:
Element not present at the time of execution
Dynamically generated content
Conflicting names
In your case it was the second point. The content that is displayed is loaded dynamically as you scroll down, so when your profile first loads, the skills section isn't actually present in the DOM. To solve this, you simply have to scroll to that section so that it gets added to the DOM.
This line is the trick here. It scrolls to the correct panel, which loads the data and applies it to the DOM.
driver.execute_script("window.scrollTo(0, 1800)")
Here's my code (Please change it as necessary)
from time import sleep
# import parameters
from selenium.webdriver.common.action_chains import ActionChains
from selenium import webdriver
from selenium.webdriver.support.wait import WebDriverWait
ChromeOptions = webdriver.ChromeOptions()
driver = webdriver.Chrome('../chromedriver.exe')
driver.get('https://www.linkedin.com/login?fromSignIn=true&trk=guest_homepage-basic_nav-header-signin')
sleep(0.5)
username = driver.find_element_by_name('session_key')
username.send_keys('')
sleep(0.5)
password = driver.find_element_by_name('session_password')
password.send_keys('')
sleep(0.5)
sign_in_button = driver.find_element_by_xpath('//button[@class="btn__primary--large from__button--floating"]')
sign_in_button.click()
driver.get('https://www.linkedin.com/in/kate-yun-yi-wang-054977127/?originalSubdomain=hk')
sleep(3)
# driver.execute_script("window.scrollTo(0, 1800)")
sleep(3)
loadmore_skills = driver.find_element_by_xpath('//button[@class="pv-profile-section__card-action-bar pv-skills-section__additional-skills artdeco-container-card-action-bar artdeco-button artdeco-button--tertiary artdeco-button--3 artdeco-button--fluid"]')
actions = ActionChains(driver)
actions.move_to_element(loadmore_skills).perform()
#actions.move_to_element_with_offset(loadmore_skills, 0, 0).perform()
loadmore_skills.click()
Output
Update
In regards to your newer problem, you need to implement a continuous scroll method that would enable you to dynamically load the skills section. This requires a fair amount of change and should ideally be asked as another question.
I have also found a simple solution by setting the scroll to the correct threshold. A value of y=3200 seems to work fine for all the profiles I've checked, including yours, mine and a few others.
driver.execute_script("window.scrollTo(0, 3200)")
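A rough sketch of such a continuous scroll (assuming that repeatedly scrolling to the bottom is enough to trigger the lazy loading) might look like this:
from time import sleep

# Scroll down in steps until the page height stops growing, which suggests
# no more lazily loaded content is being appended.
last_height = driver.execute_script("return document.body.scrollHeight")
while True:
    driver.execute_script("window.scrollTo(0, document.body.scrollHeight)")
    sleep(2)  # give the new content a moment to load
    new_height = driver.execute_script("return document.body.scrollHeight")
    if new_height == last_height:
        break
    last_height = new_height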
If the button is not visible on the page at the time of loading, then use the until method to delay the execution:
try:
    myElem = WebDriverWait(browser, delay).until(EC.presence_of_element_located((By.ID, 'IdOfMyElement')))
    print("Button is rdy!")
except TimeoutException:
    print("Loading took too much time!")
Example is taken from here
To get the exact location of the element, you can use the following method to do so.
element = driver.find_element_by_id('some_id')
element.location_once_scrolled_into_view
This is actually intended to return the (x, y) coordinates of the element on the page, but it also scrolls right down to the target element. You can then use the coordinates to click the button. You can read more on that here.
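A minimal sketch of that idea (keeping the placeholder some_id locator from the snippet above):
# Accessing this property scrolls the element into view as a side effect and
# returns its on-page coordinates as a dict, e.g. {'x': 30, 'y': 1250}.
element = driver.find_element_by_id('some_id')
coords = element.location_once_scrolled_into_view
print(coords['x'], coords['y'])

# After the scroll, the element is in view and can be clicked directly.
element.click()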
You get a NoSuchElementException when the locator (i.e. id / XPath / name / class name / CSS selector, etc.) mentioned in the Selenium code is unable to find the web element on the web page.
How to resolve NoSuchElementException:
Apply WebDriverWait: allow the webdriver to wait for a specific amount of time
Use a try/except block (a combined sketch follows this list)
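A minimal sketch combining the two (the show-more-button id is purely hypothetical, for illustration):
from selenium.common.exceptions import TimeoutException
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

try:
    # Wait up to 10 seconds for the element to become clickable, then click it.
    button = WebDriverWait(driver, 10).until(
        EC.element_to_be_clickable((By.ID, "show-more-button")))  # hypothetical id
    button.click()
except TimeoutException:
    print("Element did not become clickable within 10 seconds")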
So before performing an action on a web element you need to bring it into view. I have removed the unwanted code and also avoided hardcoded waits, since that's not a good way to deal with synchronization issues. Also, when clicking on the 'Show more' button you have to scroll down first, otherwise it will not work.
from selenium import webdriver
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.common.action_chains import ActionChains
driver = webdriver.Chrome(executable_path="path of chromedriver.exe")
driver.get('https://www.linkedin.com/login?fromSignIn=true&trk=guest_homepage-basic_nav-header-signin')
driver.maximize_window()
WebDriverWait(driver, 10).until(
    EC.element_to_be_clickable((By.NAME, "session_key"))).send_keys("email id")
WebDriverWait(driver, 10).until(
    EC.element_to_be_clickable((By.NAME, "session_password"))).send_keys("password ")
WebDriverWait(driver, 10).until(
    EC.element_to_be_clickable((By.XPATH, "//button[@class='btn__primary--large from__button--floating']"))).click()
driver.get("https://www.linkedin.com/in/kate-yun-yi-wang-054977127/?originalSubdomain=hk")
driver.maximize_window()
driver.execute_script("scroll(0, 250);")
buttonClick = WebDriverWait(driver, 10).until(
    EC.element_to_be_clickable((By.XPATH, "//span[text()='Show more']")))
ActionChains(driver).move_to_element(buttonClick).click().perform()
Output:
I want to build a simple app to learn Selenium in Python using a custom google search url to search Reddit and scrape the results:
https://cse.google.com/cse/publicurl?cx=011171116424399119392:skuhhpapys8
I am trying to click on the first link in the overlay that comes up after a search on the site as pictured here
I have so far:
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from bs4 import BeautifulSoup
url = "https://cse.google.com/cse/publicurl?cx=011171116424399119392:skuhhpapys8"
driver = webdriver.Firefox()
driver.implicitly_wait(30)
driver.get(url)
python_button = driver.find_element_by_class_name('gsc-search-button')
inputElement = driver.find_element_by_id("gsc-i-id1")
soup_level1=BeautifulSoup(driver.page_source, 'lxml')
search = input("Ask Reddit")
inputElement.send_keys(search)
inputElement.send_keys(Keys.ENTER)
overlay = driver.find_element_by_class_name('gsc-results-wrapper-overlay')
first_link = driver.find_element_by_css_selector('a.gsc-title').click()
and I get the error
selenium.common.exceptions.NoSuchElementException: Message: Unable to locate element: a.gsc-title
Any idea what I am missing? Is there something special I have to do to deal with an overlay like that? There are multiple elements matching that CSS selector; am I not targeting the first?
The class for a link, as far as I can see, is not gsc-title, it's gs-title. Hence your main problem. But there are also a couple of other things:
There's nothing special about overlay, no need to account for it. So overlay = driver.<...> is not required, unless you use it to make sure results showed up (see next point)
Changing implicit wait (driver.implicitly_wait(30)) is not a good idea; use explicit wait instead:
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait
...
# Wait for the results overlay; note the leading dot, since it is a class.
WebDriverWait(driver, 30).until(
    EC.presence_of_element_located((By.CSS_SELECTOR, ".gsc-results-wrapper-overlay")))
first_link = driver.find_element_by_css_selector('a.gs-title').click()
I am trying to click on the "chercher" button on the left of the page (middle).
url = "https://www.fpjq.org/repertoires/repertoire-des-medias/"
driver = webdriver.Firefox()
driver.get(url)
time.sleep(2)
driver.find_element_by_xpath('//*[@id="recherche"]/input[3]').click()
However, it can't find the element. I copy-pasted the XPath, so I am not sure why it's not working.
Thanks.
That's because the required button is located inside an iframe, and to be able to click it you need to switch to that iframe first:
url = "https://www.fpjq.org/repertoires/repertoire-des-medias/"
driver = webdriver.Firefox()
driver.get(url)
time.sleep(2)
driver.switch_to.frame(driver.find_element_by_tag_name("iframe"))
driver.find_element_by_xpath('//*[@id="recherche"]/input[3]').click()
Also note that using time.sleep() is not good practice. You can implement an explicit wait instead, as in the sketch below.
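A minimal sketch of that explicit-wait version (assuming the page contains a single iframe, as in the snippet above):
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

driver = webdriver.Firefox()
driver.get("https://www.fpjq.org/repertoires/repertoire-des-medias/")

wait = WebDriverWait(driver, 10)
# Wait until the iframe is available and switch into it in one step.
wait.until(EC.frame_to_be_available_and_switch_to_it((By.TAG_NAME, "iframe")))
# Then wait for the search button inside the frame to be clickable before clicking.
wait.until(EC.element_to_be_clickable((By.XPATH, '//*[@id="recherche"]/input[3]'))).click()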