Error message :
selenium.common.exceptions.NoSuchElementException: Message: Unable to locate element: input.ytd-searchbox
I keep getting this error even though I added a sleep command (from other solutions) so the page can finish loading its dynamic JavaScript content, but it still cannot find the element. Why?
import time
from selenium import webdriver
firefox = webdriver.Firefox()
firefox.get("https://www.youtube.com")
element = firefox.find_element_by_css_selector("ytd-mini-guide-entry-renderer.style-scope:nth-child(3) > a:nth-child(1)") # opens subscriptions
element.click()
time.sleep(10) # wait for page to load before finding it
searchelement = firefox.find_element_by_css_selector('input.ytd-searchbox') # search bar
searchelement.send_keys("Cute Puppies")
searchelement.submit()
I just changed the CSS selector. You did it wrong there.
Umm... how did I do that? Well, there's an easy trick for building CSS selectors.
Type the tag name first. In your case it's input.
If there's an ID present, add it with a # prefix, as I did: #search.
If there's a class, put a . before its name. For example: .search.
Try this. It's working:
import time
from selenium import webdriver
firefox = webdriver.Firefox(executable_path=r'C:\Users\intel\Downloads\Setups\geckodriver.exe')
firefox.get("https://www.youtube.com")
element = firefox.find_element_by_css_selector(".style-scope:nth-child(1) > #items > .style-scope:nth-child(3) > #endpoint .title") # opens subscriptions
element.click()
time.sleep(10) # wait for page to load before finding it
searchelement = firefox.find_element_by_css_selector('input#search') # search bar
searchelement.send_keys("Cute Puppies")
searchelement.submit()
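As an aside, an explicit wait is usually more reliable than a fixed time.sleep(10). A minimal sketch of the same flow using WebDriverWait (same selector as above; the 10-second timeout is an arbitrary choice):
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

firefox = webdriver.Firefox()
firefox.get("https://www.youtube.com")

# Poll until the search box is clickable instead of sleeping a fixed time.
searchelement = WebDriverWait(firefox, 10).until(
    EC.element_to_be_clickable((By.CSS_SELECTOR, "input#search"))
)
searchelement.send_keys("Cute Puppies")
searchelement.submit()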
I'm trying to scrape data with Python from this e-commerce site.
Because the site requires selecting the shipping location before the data is accessible, and the three selects share the same xpath structure, I use the code below:
city = browser.find_element(By.XPATH,"(//select[not(@id) and not(@class)])[1]")
citydd = Select(city)
citydd.select_by_value('01') # Hanoi
time.sleep(1)
district = browser.find_element(By.XPATH,"(//select[not(@id) and not(@class)])[2]")
districtdd = Select(district)
districtdd.select_by_value('0101') # Ba Dinh
time.sleep(1)
ward = browser.find_element(By.XPATH,"(//select[not(@id) and not(@class)])[3]")
warddd = Select(ward)
warddd.select_by_value('010104') # Cong Vi
browser.find_element(By.XPATH,"//div[text()='Xác nhận']").click() # Xac nhan
It returns this error:
NoSuchElementException: Message: no such element: Unable to locate element: {"method":"xpath","selector":"(//select[not(@id) and not(@class)])[1]"}
May I know how to get around this situation?
You can select better xpaths. Try relative xpaths based on the label associated with each select:
//label[contains(text(),'Tỉnh/Thành phố')]/following-sibling::div/select
//label[contains(text(),'Quận/Huyện')]/following-sibling::div/select
//label[contains(text(),'Phường/Xã')]/following-sibling::div/select
A screenshot (omitted here) showed the middle xpath above matching a single, unique element.
If you're still getting NoSuchElementException with these xpaths, please ensure you include explicit or implicit waits.
Selenium's default wait strategy is "the page has loaded". Most often in modern pages, the page loads, THEN scripts run which fetch more data or display a modal (like the popup in the image). Those asynchronous calls are what fail as NoSuchElementException in Selenium.
Let me know if you need more information on synchronisation.
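For completeness, a minimal sketch combining one of these relative xpaths with an explicit wait (the URL and the '01' value are taken from the snippets in this thread):
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import Select, WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

browser = webdriver.Chrome()
browser.get('https://vinmart.com/')

# Wait for the city <select> next to its label to appear, then pick Hanoi.
city = WebDriverWait(browser, 10).until(EC.presence_of_element_located(
    (By.XPATH, "//label[contains(text(),'Tỉnh/Thành phố')]/following-sibling::div/select")))
Select(city).select_by_value('01')  # Hanoi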
This is what I have tried:
from selenium.webdriver.support.ui import Select
from selenium.webdriver.support.wait import WebDriverWait
from time import sleep
from selenium import webdriver
driver = webdriver.Chrome()
wait = WebDriverWait(driver, 10)
driver.get('https://vinmart.com/')
FirstDropDown = Select(driver.find_element_by_xpath("(//select)[1]"))
FirstDropDown.select_by_index(1)
sleep(2)
SecondDropDown = Select(driver.find_element_by_xpath("(//select)[2]"))
SecondDropDown.select_by_index(1)
sleep(2)
ThirdDropDown = Select(driver.find_element_by_xpath("(//select)[3]"))
ThirdDropDown.select_by_index(1)
I have used sleep() because it takes time to populate the data in each dropdown based on the previous dropdown's selection.
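If you want to avoid the fixed sleep() calls, here is a hedged sketch that waits for the dependent dropdown to actually populate (the option-count check is an assumption about how the site fills the list):
# Wait until the second <select> has more than one <option> before selecting.
wait.until(lambda d: len(Select(
    d.find_element_by_xpath("(//select)[2]")).options) > 1)
SecondDropDown = Select(driver.find_element_by_xpath("(//select)[2]"))
SecondDropDown.select_by_index(1)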
Please mark it as answer if it resolves your problem.
Dear Stackoverflowers,
I'm trying to automate a CC payment process, but Selenium is having a hard time identifying a specific element I want to click on. I'm trying to click on 'REI Card - 6137' so that I can continue to the payment page. The inspect tool shows the class as "soloLink accountNamesize". Unfortunately, there's no ID I can go after. When I try to search by class name I get this error in the console:
selenium.common.exceptions.NoSuchElementException: Message: Unable to
locate element: .soloLink accountNamesize
Below is a picture of the site and the inspector pane with the element I'm trying to click on highlighted in blue. Since it's my credit card and I'm already logged in, a link to the page wouldn't really help you guys.
The script gets hung up on "driver.find_element_by_class_name('soloLink accountNamesize').click()"
My code is below:
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
import yaml
import time
conf = yaml.load(open(r'D:\Users\Matt\Documents\GitHub\YML_Files\REI_Login_Credentials.yml'))
myREIUsername = conf['REILogin']['username']
myREIPassword = conf['REILogin']['password']
driver = webdriver.Firefox(
executable_path=
r'D:\Users\Matt\Documents\GitHub\Executable_Files\geckodriver.exe'
)
def login():
    driver.get('https://onlinebanking.usbank.com/Auth/Login?usertype=REIMC&redirect=login&lang=en&exp=')
    time.sleep(4)
    driver.find_element_by_id('aw-personal-id').send_keys(myREIUsername)
    driver.find_element_by_id('aw-password').send_keys(myREIPassword)
    time.sleep(2)
    driver.find_element_by_id('aw-log-in').click()
    time.sleep(15)
    make_payment()

def make_payment():
    if (driver.find_element_by_class_name("accountRowLast").text) != "0.00":
        driver.find_element_by_class_name('soloLink accountNamesize').click()
    else:
        driver.quit()
I've tried searching by XPath and XPath + class with no luck. I also tried searching for this issue, but it's a fairly unique class so I didn't have much luck. Do you have any other ideas I could try?
soloLink accountNamesize contains multiple class names. Use the following CSS selector instead to click on that element:
driver.find_element_by_css_selector('a.soloLink.accountNamesize').click()
To induce a wait, use:
WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.CSS_SELECTOR, "a.soloLink.accountNamesize"))).click()
Based on the photo, I think this is the xpath that you might want:
//div[@id='MyAccountsDiv']//div[@id='CreditsTableDiv']//tbody//tr[@class='accountRowFirst']//a[contains(@onclick, 'OpenAccountDashboard')]
As you can see, this xpath starts off with the top-most div that might be unique (MyAccountsDiv) and continues to dive into the HTML code.
Based on this, you could click on the link with the following code:
xpath = "//div[@id='MyAccountsDiv']//div[@id='CreditsTableDiv']//tbody//tr[@class='accountRowFirst']//a[contains(@onclick, 'OpenAccountDashboard')]"
driver.find_element(By.XPATH, xpath).click()
NOTE
Your error says
selenium.common.exceptions.NoSuchElementException: Message: Unable to locate element: [id="aw-personal-id"]
Maybe you can use the above technique and see if you can isolate the xpath for the web element instead.
I am scraping an angular.js site. My initial link has a search button. I find by xpath and click with no issues. After I click search, I want to be able to click each of the athletes in the table to go to their info pages, but I am not having success with the click method. The links are attached to their names.
from selenium import webdriver
from selenium.common.exceptions import TimeoutException
TIMEOUT = 5
driver = webdriver.Firefox()
driver.set_page_load_timeout(TIMEOUT)
url = 'https://n.rivals.com/search#?formValues=%7B%22sport%22:%22Football%22,%22recruit_year%22:2021,%22offer_and_visit_type%22:%5B%22Offer%22%5D,%22prospect_profiles.prospect_colleges.offer%22:true,%22page_number%22:1,%22page_size%22:50%7D'
try:
driver.get(url)
except TimeoutException:
pass
search_button = driver.find_element_by_xpath('//*[@id="articles"]/div/div[2]/div/div/div[1]/form/div[2]/div[5]/button')
search_button.click()
#below is where I tried, but could not get to click
first_athlete = driver.find_element_by_xpath('//*[@id="content_"]/td[1]/div[2]/a')
first_athlete.click()
Works if you remove the last /a in the xpath:
first_athlete = driver.find_element_by_xpath('//*[@id="content_"]/td[1]/div[2]')
first_athlete.click()
If you want to search for a specific athlete and you have the athlete's name, you can use a CSS selector as well:
athlete = driver.find_element_by_css_selector('#content_ > td > div > a[href*="donovan-jackson"]')
athlete.click()
This code will give you a unique web element for each player.
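And if you want to collect every athlete's link on the results page rather than matching one known name, a small sketch (the selector is assumed from the answer above; note that find_elements returns a list):
# Gather the profile URLs of all athletes in the results table.
athletes = driver.find_elements_by_css_selector("#content_ > td > div > a")
links = [a.get_attribute("href") for a in athletes]
print(links)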
Thanks
I'm trying to automate the scraping of links from here:
https://thegoodpubguide.co.uk/pubs/?paged=1&order_by=category&search=pubs&pub_name=&postal_code=&region=london
Once I have the first page, I want to click the right chevron at the bottom, in order to move to the second, the third and so on. Scraping the links in between.
Unfortunately, nothing I try will send Chrome to the next page.
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
import time
from datetime import datetime
import csv
from selenium.webdriver.common.action_chains import ActionChains
#User login info
pagenum = 1
#Creates link to Chrome Driver and shortens this to 'browser'
path_to_chromedriver = '/Users/abc/Downloads/chromedriver 2' # change path as needed
driver = webdriver.Chrome(executable_path = path_to_chromedriver)
#Navigates Chrome to the specified page
url = 'https://thegoodpubguide.co.uk/pubs/?paged=1&order_by=category&search=pubs&pub_name=&postal_code=&region=london'
#Clicks Login
def findlinks(address):
    global pagenum
    list = []
    driver.get(address)
    # wait
    while pagenum <= 2:
        for i in range(20):  # Scrapes available links
            xref = '//*[@id="search-results"]/div[1]/div[' + str(i+1) + ']/div/div/div[2]/div[1]/p/a'
            link = driver.find_element_by_xpath(xref).get_attribute('href')
            print(link)
            list.append(link)
        with open("links.csv", "a") as fp:  # Saves list to file
            wr = csv.writer(fp, dialect='excel')
            wr.writerow(list)
        print(pagenum)
        pagenum = pagenum + 1
        element = driver.find_element_by_xpath('//*[@id="search-results"]/div[2]/div/div/ul/li[8]/a')
        element.click()
findlinks(url)
Is something blocking the button that I'm not seeing?
The error printed in my terminal:
selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: {"method":"xpath","selector":"//*[@id="search-results"]/div[2]/div/div/ul/li[8]/a"}
Try this:
element = WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.CSS_SELECTOR, "a[class='next-page btn']")))
element.click()
EDIT:
The xpath that you're specifying for the chevron varies between pages, and is not exactly correct. Note the li[6], li[8], and li[9]:
On page 1: the xpath is //*[@id="search-results"]/div[2]/div/div/ul/li[6]/a/i
On page 2: the xpath is //*[@id="search-results"]/div[2]/div/div/ul/li[8]/a/i
On page 3: the xpath is //*[@id="search-results"]/div[2]/div/div/ul/li[9]/a/i
You'll have to come up with some way of determining which xpath to use. Here's a hint: it seems that the last li under //*[@id="search-results"]/div[2]/div/div/ul/ designates the chevron.
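Following that hint, a hedged sketch that always targets the last li, whatever index it happens to have (assuming the pager structure stays the same on every page):
# li[last()] selects the final <li>, which appears to hold the right chevron.
next_chevron = driver.find_element_by_xpath(
    '//*[@id="search-results"]/div[2]/div/div/ul/li[last()]/a')
next_chevron.click()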
ORIGINAL POST:
You may want to try waiting for the page to load before you find and click the chevron. I usually just do a time.sleep(...) when I'm testing my automation script, but for (possibly) more sophisticated functionality, try Waits. See the documentation here.
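For example, a minimal explicit-wait sketch that pauses until the result links are present before scraping (the xpath is assumed from the ones in the question):
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# Wait up to 10 seconds for at least one result link to appear.
WebDriverWait(driver, 10).until(EC.presence_of_element_located(
    (By.XPATH, '//*[@id="search-results"]/div[1]//a')))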
I am trying to run a script with Selenium WebDriver in Python, where I am trying to click on the search field, but it always shows the exception "An element could not be located on the page using the given search parameters."
Here is the script:
from selenium import webdriver
from selenium.webdriver.common.by import By
class Exercise:
    def safari(self):
        driver = webdriver.Safari()
        driver.maximize_window()
        url = "https://www.airbnb.com"
        driver.implicitly_wait(15)
        Title = driver.title
        driver.get(url)
        CurrentURL = driver.current_url
        print("Current URL is " + CurrentURL)
        SearchButton = driver.find_element(By.XPATH, "//*[@id='GeocompleteController-via-SearchBarV2-SearchBarV2']")
        SearchButton.click()

note = Exercise()
note.safari()
Please tell me: where am I going wrong?
There appear to be two matching cases:
The one that matches the search bar is actually the second one. So you'd edit your XPath as follows:
SearchButton = driver.find_element(By.XPATH, "(//*[@id='GeocompleteController-via-SearchBarV2-SearchBarV2'])[2]")
Or simply:
SearchButton = driver.find_element_by_xpath("(//*[@id='GeocompleteController-via-SearchBarV2-SearchBarV2'])[2]")
You can paste your XPath into Chrome's Inspector tool (as seen above) by loading the same website in Google Chrome and hitting F12 (or just right-clicking anywhere and clicking "Inspect"). This shows the matching elements. If you scroll to 2 of 2, it highlights the search bar; therefore, we want the second result. XPath indices start at 1, unlike most languages (where indices start at 0), so to get the second match, encapsulate the entire original XPath in parentheses and then append [2].
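As an aside, pairing that indexed XPath with an explicit wait tends to be more robust than an implicit wait alone. A minimal sketch, assuming the element eventually becomes clickable:
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# Wait until the second matching element is clickable, then click it.
SearchButton = WebDriverWait(driver, 15).until(EC.element_to_be_clickable(
    (By.XPATH, "(//*[@id='GeocompleteController-via-SearchBarV2-SearchBarV2'])[2]")))
SearchButton.click()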