I am trying to run a script with Selenium WebDriver in Python, where I am trying to click on a search field, but it always throws the exception "An element could not be located on the page using the given search parameters."
Here is the script:
from selenium import webdriver
from selenium.webdriver.common.by import By
class Exercise:
    def safari(self):
        driver = webdriver.Safari()
        driver.maximize_window()
        url = "https://www.airbnb.com"
        driver.implicitly_wait(15)
        Title = driver.title
        driver.get(url)
        CurrentURL = driver.current_url
        print("Current URL is " + CurrentURL)
        SearchButton = driver.find_element(By.XPATH, "//*[@id='GeocompleteController-via-SearchBarV2-SearchBarV2']")
        SearchButton.click()

note = Exercise()
note.safari()
Please tell me where I am wrong.
There appear to be two matching elements, and the one that matches the search bar is actually the second one. So you'd edit your XPath as follows:
SearchButton = driver.find_element(By.XPATH, "(//*[@id='GeocompleteController-via-SearchBarV2-SearchBarV2'])[2]")
Or simply:
SearchButton = driver.find_element_by_xpath("(//*[@id='GeocompleteController-via-SearchBarV2-SearchBarV2'])[2]")
You can test your XPath in Chrome's Inspector by loading the same website in Google Chrome and hitting F12 (or right-clicking anywhere and choosing "Inspect"), then searching for the XPath in the Elements panel: it reports the matching elements, and if you scroll to 2 of 2 it highlights the search bar. Therefore, we want the second result. XPath indices start at 1, unlike most languages (whose indices usually start at 0), so to get the second match, wrap the entire original XPath in parentheses and append [2].
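If the page builds the search bar asynchronously, an explicit wait is often more reliable than an implicit one. A minimal sketch reusing the indexed XPath from above (the 15-second timeout is an assumption):

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

wait = WebDriverWait(driver, 15)
# wait until the second match is actually clickable before clicking it
SearchButton = wait.until(EC.element_to_be_clickable(
    (By.XPATH, "(//*[@id='GeocompleteController-via-SearchBarV2-SearchBarV2'])[2]")
))
SearchButton.click()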
I have written simple web-scraping code using Selenium, but I want to scrape only the portion of the page that is visible 'before scroll'.
Say I want to scrape this page: https://en.wikipedia.org/wiki/Pandas_(software). Selenium reads information down to the absolute last element/text, which for me is the 'Powered by MediaWiki' button at the far bottom-right of the page.
What I want Selenium to do is stop after the DataFrames section and not scroll down to the bottom.
And I also want to know where on the page it stops. I have checked multiple sources, and most of them ask about infinite-scroll websites; no one asks about just the 'visible' half of a page.
This is my code now:
from selenium import webdriver

EXECUTABLE = r"chromedriver.exe"

# get the URL
url = "https://en.wikipedia.org/wiki/Pandas_(software)"

# open the chromedriver
driver = webdriver.Chrome(executable_path=EXECUTABLE)

# google window is maximized so that all webpages are rendered in the same size
driver.maximize_window()

# make the driver wait for 30 seconds before throwing a time-out exception
driver.implicitly_wait(30)

# get URL
driver.get(url)

for element in driver.find_elements_by_xpath("//*"):
    try:
        pass  # stuff
    except Exception:
        continue

driver.close()
Absolutely any direction is appreciated. I have tried to be as clear as possible here, but let me know if any more details are required.
I don't think that is possible. Observe the DOM: all the informational elements are under one tag, div[@id='content'], which is already visible to Selenium. Even if you try with //*, div[@id='content'] is visible.
And checking whether an element is visible, even though it has not been scrolled to, will also return True. (If someone knows how to do what you are asking for, even I would like to know.)
from selenium import webdriver
from selenium.webdriver.support.expected_conditions import _element_if_visible

driver = webdriver.Chrome(executable_path='path to chromedriver.exe')
driver.maximize_window()
driver.implicitly_wait(30)
driver.get("https://en.wikipedia.org/wiki/Pandas_(software)")

elements = driver.find_elements_by_xpath("//div[@id='content']//*")
for element in elements:
    try:
        if _element_if_visible(element):
            print(element.get_attribute("innerText"))
    except:
        break
driver.quit()
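For what it's worth, one rough way to approximate the 'before scroll' portion is to ask the browser whether each element sits inside the initial viewport. A sketch of that idea (the JavaScript bounds check is my assumption, not part of the answer above):

# an element is "above the fold" if its top edge falls inside the initial viewport
in_viewport_js = """
const rect = arguments[0].getBoundingClientRect();
return rect.top >= 0 && rect.top < window.innerHeight;
"""
for element in driver.find_elements_by_xpath("//div[@id='content']//*"):
    try:
        if driver.execute_script(in_viewport_js, element):
            print(element.get_attribute("innerText"))
    except Exception:
        continue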
I'm trying to scrape data with Python from this e-commerce site.
Because it requires selecting the shipping location first to access the data, and the three selects have the same XPath, I use the code below:
city = browser.find_element(By.XPATH, "(//select[not(@id) and not(@class)])[1]")
citydd = Select(city)
citydd.select_by_value('01')  # Hanoi
time.sleep(1)

district = browser.find_element(By.XPATH, "(//select[not(@id) and not(@class)])[2]")
districtdd = Select(district)
districtdd.select_by_value('0101')  # Ba Dinh
time.sleep(1)

ward = browser.find_element(By.XPATH, "(//select[not(@id) and not(@class)])[3]")
warddd = Select(ward)
warddd.select_by_value('010104')  # Cong Vi

browser.find_element(By.XPATH, "//div[text()='Xác nhận']").click()  # "Xác nhận" = confirm
It returns this error:
NoSuchElementException: Message: no such element: Unable to locate element: {"method":"xpath","selector":"(//select[not(@id) and not(@class)])[1]"}
How can I get past this?
You can select better XPaths: use a relative XPath built from the label associated with each select:
//label[contains(text(),'Tỉnh/Thành phố')]/following-sibling::div/select
//label[contains(text(),'Quận/Huyện')]/following-sibling::div/select
//label[contains(text(),'Phường/Xã')]/following-sibling::div/select
Each of these XPaths identifies its select uniquely (the middle one, for example, matches exactly one element).
If you're still getting NoSuchElementException with these XPaths, please ensure you include explicit or implicit waits.
Selenium's default wait strategy is only "the page has loaded". In modern pages, most often the page loads, THEN scripts run that fetch more data or display a modal (like the location popup here). Those async calls are what cause NoSuchElementException failures in Selenium.
Let me know if you need more information on synchronisation.
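A minimal sketch of such an explicit wait, reusing the first label-relative XPath above (the 10-second timeout is an assumption):

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait, Select
from selenium.webdriver.support import expected_conditions as EC

wait = WebDriverWait(browser, 10)
# block until the city <select> is actually in the DOM before touching it
city = wait.until(EC.presence_of_element_located(
    (By.XPATH, "//label[contains(text(),'Tỉnh/Thành phố')]/following-sibling::div/select")
))
Select(city).select_by_value('01')  # Hanoi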
This is what I have tried:
from selenium.webdriver.support.ui import Select
from selenium.webdriver.support.wait import WebDriverWait
from time import sleep
from selenium import webdriver
driver = webdriver.Chrome()
wait = WebDriverWait(driver, 10)
driver.get('https://vinmart.com/')
FirstDropDown = Select(driver.find_element_by_xpath("(//select)[1]"))
FirstDropDown.select_by_index(1)
sleep(2)
SecondDropDown = Select(driver.find_element_by_xpath("(//select)[2]"))
SecondDropDown.select_by_index(1)
sleep(2)
ThirdDropDown = Select(driver.find_element_by_xpath("(//select)[3]"))
ThirdDropDown.select_by_index(1)
I have used sleep() because it takes time for each dropdown to be populated based on the previous dropdown's selection.
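A hedged alternative to the fixed sleeps: reuse the wait defined above and block until the dependent dropdown actually has options (the "> 1" threshold is an assumption about how many placeholder options the site renders):

# wait until the second dropdown is populated before selecting from it
wait.until(lambda d: len(Select(d.find_element_by_xpath("(//select)[2]")).options) > 1)
SecondDropDown = Select(driver.find_element_by_xpath("(//select)[2]"))
SecondDropDown.select_by_index(1)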
I am scraping an angular.js site. My initial link has a search button, which I find by XPath and click with no issues. After I click search, I want to be able to click each of the athletes in the table to go to their info pages, but I am not having success with the click method. The links are attached to their names.
from selenium import webdriver
from selenium.common.exceptions import TimeoutException

TIMEOUT = 5

driver = webdriver.Firefox()
driver.set_page_load_timeout(TIMEOUT)

url = 'https://n.rivals.com/search#?formValues=%7B%22sport%22:%22Football%22,%22recruit_year%22:2021,%22offer_and_visit_type%22:%5B%22Offer%22%5D,%22prospect_profiles.prospect_colleges.offer%22:true,%22page_number%22:1,%22page_size%22:50%7D'
try:
    driver.get(url)
except TimeoutException:
    pass

search_button = driver.find_element_by_xpath('//*[@id="articles"]/div/div[2]/div/div/div[1]/form/div[2]/div[5]/button')
search_button.click()

# below is where I tried, but could not get to click
first_athlete = driver.find_element_by_xpath('//*[@id="content_"]/td[1]/div[2]/a')
first_athlete.click()
It works if you remove the last /a in the XPath:
first_athlete = driver.find_element_by_xpath('//*[@id="content_"]/td[1]/div[2]')
first_athlete.click()
If you want to search for a specific athlete whose name you know, you can use a CSS attribute selector as well:
athlete = driver.find_element_by_css_selector('#content_ > td > div > a[href*="donovan-jackson"]')
athlete.click()
This will give you a unique web element for each player.
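Since the page is Angular, the results table renders asynchronously after the search click, so an explicit wait may be what's missing. A minimal sketch reusing the locator from above (the 10-second timeout is an assumption):

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

search_button.click()
# wait until the first athlete row is actually clickable before clicking it
first_athlete = WebDriverWait(driver, 10).until(
    EC.element_to_be_clickable((By.XPATH, '//*[@id="content_"]/td[1]/div[2]'))
)
first_athlete.click()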
Error message:
selenium.common.exceptions.NoSuchElementException: Message: Unable to locate element: input.ytd-searchbox
I keep getting this error even though I added a sleep command (from other solutions) so the page can load its dynamic JavaScript content, but it still cannot find the element.
import time
from selenium import webdriver
firefox = webdriver.Firefox()
firefox.get("https://www.youtube.com")
element = firefox.find_element_by_css_selector("ytd-mini-guide-entry-renderer.style-scope:nth-child(3) > a:nth-child(1)") # opens subscriptions
element.click()
time.sleep(10) # wait for page to load before finding it
searchelement = firefox.find_element_by_css_selector('input.ytd-searchbox') # search bar
searchelement.send_keys("Cute Puppies")
searchelement.submit()
I just changed the CSS selector; yours was built incorrectly. How? There's an easy trick for writing CSS selectors:
1. Type the tag name first. In your case it's input.
2. If there's an ID, append it with a # prefix, as I did: #search.
3. If there's a class, use a . before its name, for example .search.
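Put together, the three forms look like this (a hypothetical sketch; only input#search comes from the fixed code below):

firefox.find_element_by_css_selector("input")         # by tag name
firefox.find_element_by_css_selector("#search")       # by id
firefox.find_element_by_css_selector(".search")       # by class
firefox.find_element_by_css_selector("input#search")  # tag and id combined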
Try this; it works:
import time
from selenium import webdriver
firefox = webdriver.Firefox(executable_path=r'C:\Users\intel\Downloads\Setups\geckodriver.exe')
firefox.get("https://www.youtube.com")
element = firefox.find_element_by_css_selector(".style-scope:nth-child(1) > #items > .style-scope:nth-child(3) > #endpoint .title") # opens subscriptions
element.click()
time.sleep(10) # wait for page to load before finding it
searchelement = firefox.find_element_by_css_selector('input#search') # search bar
searchelement.send_keys("Cute Puppies")
searchelement.submit()
I am trying to go to the Drought Monitor website and tell it to select county-level data. My code navigates to the website and clicks the dropdown, but I cannot get it to type in "county". It reaches the last line and then gives the error: "Cannot focus element".
Any help would be greatly appreciated, as I'm very new to Selenium.
from selenium import webdriver
from selenium.webdriver.support.ui import Select
from selenium.webdriver.common.keys import Keys
browser = webdriver.Chrome()
browser.get('http://droughtmonitor.unl.edu/Data/DataDownload/ComprehensiveStatistics.aspx')
browser.maximize_window()
dropdown = browser.find_element_by_xpath('//*[@id="dnn_ctr1009_USDMservice_CompStats_2017_aoiType_chosen"]')
dropdown.click()
dropdown.send_keys('county')
dropdown.submit()
print("I'm done")
You're sending keys to the <div> that contains the search <input>, rather than to the <input> element itself. You'll need to find the <input> and send it the keys.
(Note: You also don't need to use XPath for something as simple as a lookup by id.)
dropdown = browser.find_element_by_id("dnn_ctr1009_USDMservice_CompStats_2017_aoiType_chosen")
dropdown.click()
search = dropdown.find_element_by_tag_name("input")
search.send_keys("county", Keys.ENTER)