I am trying to go to the Drought Monitor website and tell it to show county data. I am able to get my code to navigate to the website, and it clicks the dropdown, but I cannot get it to type in "county". My code gets to the last line and then gives the error: "Cannot focus element".
Any help would be greatly appreciated as I'm very new to Selenium.
from selenium import webdriver
from selenium.webdriver.support.ui import Select
from selenium.webdriver.common.keys import Keys
browser = webdriver.Chrome()
browser.get('http://droughtmonitor.unl.edu/Data/DataDownload/ComprehensiveStatistics.aspx')
browser.maximize_window()
dropdown = browser.find_element_by_xpath('//*[@id="dnn_ctr1009_USDMservice_CompStats_2017_aoiType_chosen"]')
dropdown.click()
dropdown.send_keys('county')
dropdown.submit()
print("I'm done")
You're sending keys to the <div> that contains the search <input>, rather than to the <input> element itself. You'll need to find the <input> and send it the keys.
(Note: You also don't need to use XPath for something as simple as a lookup by id.)
dropdown = browser.find_element_by_id("dnn_ctr1009_USDMservice_CompStats_2017_aoiType_chosen")
dropdown.click()
search = dropdown.find_element_by_tag_name("input")
search.send_keys("county", Keys.ENTER)
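Since you're new to Selenium, here is a minimal end-to-end sketch of the same idea, assuming the element id from your snippet is still present on the page; it also uses an explicit wait instead of relying on the page being fully loaded:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

browser = webdriver.Chrome()
browser.get('http://droughtmonitor.unl.edu/Data/DataDownload/ComprehensiveStatistics.aspx')
browser.maximize_window()
# wait for the Chosen-style dropdown container to be clickable, then open it
dropdown = WebDriverWait(browser, 20).until(
    EC.element_to_be_clickable((By.ID, "dnn_ctr1009_USDMservice_CompStats_2017_aoiType_chosen")))
dropdown.click()
# the search box is the <input> nested inside the dropdown container
search = dropdown.find_element_by_tag_name("input")
search.send_keys("county", Keys.ENTER)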
Hi, before starting, thanks in advance for the help.
So I am trying to scrape the Google Flights website: https://www.google.com/travel/flights
When scraping, I have managed to send keys to the text field, but I am stuck at clicking the search button: it always gives the error that the element is not clickable at that point or that other elements would receive the click. The code is:
from time import sleep
from selenium import webdriver
chromedriver_path = 'E:/chromedriver.exe'
def search(urrl):
    driver = webdriver.Chrome(executable_path=chromedriver_path)
    driver.get(urrl)
    asd = "//div[@aria-label='Enter your destination']//div//input[@aria-label='Where else?']"
    driver.find_element_by_xpath("/html/body/c-wiz[2]/div/div[2]/c-wiz/div/c-wiz/c-wiz/div[2]/div[1]/div[1]/div[1]/div[2]/div[1]/div[4]/div/div/div[1]/div/div/input").click()
    sleep(2)
    TextBox = driver.find_element_by_xpath(asd)
    sleep(2)
    TextBox.click()
    sleep(2)
    print(str(TextBox))
    TextBox.send_keys('Karachi')
    sleep(2)
    search_button = driver.find_element_by_xpath('//*[@id="yDmH0d"]/c-wiz[2]/div/div[2]/c-wiz/div/c-wiz/c-wiz/div[2]/div[1]/div[1]/div[2]/div/button/div[1]')
    sleep(2)
    search_button.click()
    print(str(search_button))
    sleep(15)
    print("DONE")
    driver.close()

def main():
    url = "https://www.google.com/travel/flights"
    print(" Getting Data ")
    search(url)

if __name__ == "__main__":
    main()
and I have done it by copying the XPath using dev tools.
Thanks again
The problem you are facing is that after entering the city Karachi in the text box, there is a suggestion dropdown that is displayed over the Search Button. That is the cause of the exception as the dropdown would receive the click instead of the Search button. The intended usage of the website is to select the city from the dropdown and then continue.
A quick fix would be to first look for all of the dropdowns in the source (there are a few) and find the one that is currently active using is_displayed(). Next, select the first suggestion in that dropdown:
.....
TextBox.send_keys('Karachi')
sleep(2)
# the dropdown elements have the attribute role='listbox'. Warning: this could change in the future
all_dropdowns = driver.find_elements_by_css_selector('ul[role="listbox"]')
# the active dropdown
active_dropdown = [i for i in all_dropdowns if i.is_displayed()][0]
# click the first suggestion in the dropdown (Note: this is very specific to your search. It might not be desirable in other circumstances, and the logic can be modified accordingly)
active_dropdown.find_element_by_tag_name('li').click()
# I recommend following the advice @cruisepandey has offered above about using a relative xpath instead of an absolute one
search_button = driver.find_element_by_xpath("//button[contains(.,'Search')]")
sleep(2)
search_button.click()
.....
I also suggest heeding the advice provided by @cruisepandey, including researching Explicit Waits in Selenium to write better-performing Selenium programs. All the best.
Instead of this absolute xpath
//*[@id="yDmH0d"]/c-wiz[2]/div/div[2]/c-wiz/div/c-wiz/c-wiz/div[2]/div[1]/div[1]/div[2]/div/button/div[1]
I would recommend using a relative xpath:
//button[contains(.,'Search')]
Also, I would recommend using an explicit wait when you are performing a click operation.
Code:
search_button = WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH, "//button[contains(.,'Search')]")))
search_button.click()
You'd need the below imports as well:
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
Pro tip:
Launch the browser in full screen mode.
driver.maximize_window()
You should place the above line just before the driver.get(urrl) command.
I’ve run into a problem trying to use Selenium ChromeDriver to scroll down the sidebar of a google maps results page. I am trying to get to the 6th result down but the result does not fully load until you scroll down. Using the find_element_by_xpath method, I am successfully able to access results 1-5 and click into them individually, but when trying to use the actions.move_to_element(link).perform() method to scroll to the 6th element, it does not work and throws an error message.
The error that I get is:
selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element:
However, I know this element exists because when I manually scroll and more results are loaded, the Xpath works correctly. What am I doing wrong? I’ve spent many hours trying to solve this and I haven’t been able to solve with the available content out there. I appreciate any help or insights you can offer, thank you!
from selenium import webdriver
from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from bs4 import BeautifulSoup as soup
import time
PATH = "C:\Program Files (x86)\chromedriver.exe"
driver = webdriver.Chrome(PATH)
driver.get("https://www.google.com/maps")
time.sleep(7)
page = soup(driver.page_source, 'html.parser')
#find the searchbar, enter search, and hit return
search = driver.find_element_by_id('searchboxinput')
search.send_keys("dentists in Austin Texas")
search.send_keys(Keys.RETURN)
driver.maximize_window()
time.sleep(7)
#I want to get the 6th result down but it requires a sidebar scroll to load
link = driver.find_element_by_xpath("//*[@id='pane']/div/div[1]/div/div/div[4]/div[1]/div[13]/div/a")
actions = ActionChains(driver)
actions.move_to_element(link).perform()
link.click()
time.sleep(5)
driver.back()
I found a solution that works: target the element by XPath through Selenium's JavaScript interface. You then execute two commands in a single instruction (targeting and scrolling):
driver.execute_script("var el = document.evaluate('/html/body/jsl/div[3]/div[10]/div[8]/div/div[1]/div/div/div[4]/div[1]', document, null, XPathResult.FIRST_ORDERED_NODE_TYPE, null).singleNodeValue; el.scroll(0, 5000);")
this is the only solution that worked for me
The search results in the google map are located with the //div[contains(@aria-label,'dentists in Austin Texas')]//div[contains(@jsaction,'mouseover')] XPath.
So, to select the 6th element there, you can do the following:
from selenium.webdriver.common.action_chains import ActionChains
results = driver.find_elements_by_xpath('//div[contains(@aria-label,"dentists in Austin Texas")]//div[contains(@jsaction,"mouseover")]')
# zero-based indexing: results[5] is the 6th result
ActionChains(driver).move_to_element(results[5]).click().perform()
I was just implementing scrolling on the google map sidebar, and it's working on my side. Check this code please:
# selecting scroll body
driver.find_element_by_xpath('/html/body/div[3]/div[9]/div[9]/div/div/div[1]/div[2]/div/div[1]/div/div/div[2]/div[1]').click()
#start scrolling your sidebar
html = driver.find_element_by_xpath('/html/body/div[3]/div[9]/div[9]/div/div/div[1]/div[2]/div/div[1]/div/div/div[2]/div[1]')
html.send_keys(Keys.END)
Also add the Keys import:
from selenium.webdriver.common.keys import Keys
I hope it helps.
By the way, I have implemented scraping of google map with its available data and used the above code to scroll. If you have any problem, let me know.
I'm trying to scrape data with Python from this e-commerce site.
Because it requires selecting the shipping location first to access the data, and the 3 selects have the same xpath, I use the code below:
city = browser.find_element(By.XPATH,"(//select[not(@id) and not(@class)])[1]")
citydd = Select(city)
citydd.select_by_value('01') # Hanoi
time.sleep(1)
district = browser.find_element(By.XPATH,"(//select[not(@id) and not(@class)])[2]")
districtdd = Select(district)
districtdd.select_by_value('0101') # Ba Dinh
time.sleep(1)
ward = browser.find_element(By.XPATH,"(//select[not(@id) and not(@class)])[3]")
warddd = Select(ward)
warddd.select_by_value('010104') # Cong Vi
browser.find_element(By.XPATH,"//div[text()='Xác nhận']").click() # Xac nhan
It returns me this error
NoSuchElementException: Message: no such element: Unable to locate element: {"method":"xpath","selector":"(//select[not(@id) and not(@class)])[1]"}
May I know how to bypass this situation?
It is possible to select better xpaths. You can use relative xpaths based on the label of the associated select:
//label[contains(text(),'Tỉnh/Thành phố')]/following-sibling::div/select
//label[contains(text(),'Quận/Huyện')]/following-sibling::div/select
//label[contains(text(),'Phường/Xã')]/following-sibling::div/select
The middle one, for example, is identified as unique using the xpaths above (DevTools screenshot omitted).
If you're still getting a no-such-element error with these xpaths, please ensure you include explicit or implicit waits.
Selenium's default wait strategy is "the page has loaded". Most often in modern pages, the page loads, THEN scripts run which fetch more data or display a modal (like the shipping-location popup here). Those async calls cause NoSuchElement failures in Selenium.
Let me know if you need more information on synchronisation.
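For example, here is a minimal sketch combining the label-based xpaths above with an explicit wait (assuming the vinmart.com URL used below and the labels/values from the question are still current):
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import Select, WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

browser = webdriver.Chrome()
browser.get('https://vinmart.com/')
wait = WebDriverWait(browser, 20)
# the location popup is injected after page load, so wait for the city <select> to appear
city = wait.until(EC.presence_of_element_located(
    (By.XPATH, "//label[contains(text(),'Tỉnh/Thành phố')]/following-sibling::div/select")))
Select(city).select_by_value('01')  # Hanoi
# the district <select> is repopulated after the city is chosen, so wait for it as well
district = wait.until(EC.presence_of_element_located(
    (By.XPATH, "//label[contains(text(),'Quận/Huyện')]/following-sibling::div/select")))
Select(district).select_by_value('0101')  # Ba Dinh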
This is what I have tried:
from selenium.webdriver.support.ui import Select
from selenium.webdriver.support.wait import WebDriverWait
from time import sleep
from selenium import webdriver
driver = webdriver.Chrome()
wait = WebDriverWait(driver, 10)
driver.get('https://vinmart.com/')
FirstDropDown = Select(driver.find_element_by_xpath("(//select)[1]"))
FirstDropDown.select_by_index(1)
sleep(2)
SecondDropDown = Select(driver.find_element_by_xpath("(//select)[2]"))
SecondDropDown.select_by_index(1)
sleep(2)
ThirdDropDown = Select(driver.find_element_by_xpath("(//select)[3]"))
ThirdDropDown.select_by_index(1)
I have used sleep() because it takes time to populate the data in each dropdown based on the previous dropdown selection.
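If you would rather avoid fixed sleep() calls, a hedged alternative (assuming each dependent dropdown starts out with only a placeholder option) is to wait until the next dropdown actually has options before selecting from it:
from selenium.webdriver.support.ui import Select, WebDriverWait
wait = WebDriverWait(driver, 10)
FirstDropDown = Select(driver.find_element_by_xpath("(//select)[1]"))
FirstDropDown.select_by_index(1)
# wait until the second dropdown has been populated based on the first selection
wait.until(lambda d: len(Select(d.find_element_by_xpath("(//select)[2]")).options) > 1)
SecondDropDown = Select(driver.find_element_by_xpath("(//select)[2]"))
SecondDropDown.select_by_index(1)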
Please mark it as answer if it resolves your problem.
The company has a list of 100+ sites, and I am trying to use Selenium WebDriver to automatically take a user into a chosen site. I am fairly new to programming, so please forgive me if my question is worded poorly. I am trying to take the name of a site, such as "Alpharetta - Cemex" in the example below, from the user, find it in this long list, and then select that link. Through testing, I am pretty sure the element I need to click is the h3 element that also holds the name of the site in its data-hmi-name attribute.
Website code example (HTML screenshot omitted).
I have tried to use the below, and it never seems to work:
driver.find_element_by_css_selector("h3.tru-card-head-text uk-text-center[data-hmi-name='Alpharetta - Cemex']").click()
#For this one I tried to select the h3 class by searching for all elements that have the name Alpharetta - Cemex
or
theCards = main.find_elements_by_tag_name("h3") #I tried both of these declarations for theCards
#theCards = main.find_elements_by_class_name("tru-card-wrapper")
#then used the loop below. This obviously didn't work and it just returns an error that card.text doesn't actually exist
for card in theCards:
    #title = card.find_elements_by_tag_name("h3")
    print(card.text)
    if(card.text == theSite):
        card.click()
Any help or guidance would be so appreciated! I am new to programming in Python and if you can explain what I am doing wrong I'd be forever thankful!
If you want to click a single link (e.g. Alpharetta - Cemex), you can try like below:
theSite = "Alpharetta - Cemex" #You can store user inputted site Name here
linkXpath = "//a[h3[contains(text(),'" + theSite + "')]]"
WebDriverWait(driver, 30).until(EC.element_to_be_clickable((By.XPATH, linkXpath))).click() #This will wait for element to be clickable before it clicks
In case the above is not working because your link is not on screen / not visible, you can use JavaScript to first scroll to the element and then click, like below:
ele = WebDriverWait(driver, 30).until(EC.presence_of_element_located((By.XPATH, linkXpath)))
driver.execute_script("arguments[0].scrollIntoView();", ele )
driver.execute_script("arguments[0].click();", ele )
You need these imports:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.wait import WebDriverWait
I want to send a search term to a listbox and capture/print the URL instead of clicking on it. If there is a better way than using Selenium, that would also be acceptable.
Example:
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import Select
import time
#Choose Browser
driver=webdriver.Chrome()
time.sleep(1)
#Go to url
driver.get("https://coinmarketcap.com/coins/")
#fill out search field
driver.find_element_by_xpath("""/html/body/div/div/div[1]/div[3]/nav/nav/form/div/div/div/input""").send_keys("eth")
#Select first option from dropdown
listbox = Select(driver.find_element_by_xpath("/html/body/div/div/div[1]/div[3]/nav/nav/form/div/div[2]/ul/li[2]/a/span"))
print(listbox.select_by_index(0)) # I want to print the link instead of clicking it
I found that simply entering text in the searchbox did not show the dropdown. Here is a sample of code that will give you the innerHTML of the dropdown. You can modify the xpath to get your specific li element or parse it accordingly with BeautifulSoup.
driver.get("https://coinmarketcap.com/coins/")
#fill out search field
search_box = driver.find_element_by_xpath("""/html/body/div/div/div[1]/div[3]/nav/nav/form/div/div/div/input""")
search_box.click()
time.sleep(1)
search_box.send_keys("eth")
#Select first option from dropdown
listbox = driver.find_element_by_xpath("//div[#class='cmc-popover__dropdown']")
print(listbox.get_attribute('innerHTML'))
Edit: Also, if the driver instance is not wide enough, the search box won't appear. Consider opening it maximized.
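If the goal is to print the suggestion's URL rather than click it, a possible follow-up (assuming each suggestion in that dropdown is rendered as an <a> element, as the /ul/li[2]/a/span path in the question suggests) is to read the href attributes directly:
# collect the anchor elements inside the dropdown and print their link targets
for a in listbox.find_elements_by_tag_name("a"):
    print(a.text, a.get_attribute("href"))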