I want to type something into the input field, but when I look the element up by its class it returns an error. The website has enough time to load all elements, so that shouldn't be the problem.
My Code:
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
import time
browser = webdriver.Firefox()
browser.get('https://www.tradingview.com/chart/')
print("a")
time.sleep(5)
elem = browser.find_element_by_id("header-toolbar-symbol-search") # Find the search box
print("b")
elem.click()
time.sleep(5)
crypto_search = browser.find_element_by_class_name("search-Hsmn_0WX upperCase-Hsmn_0WX input-3n5_2-hI")
print("c")
crypto_search.send_keys("VETUSD")
time.sleep(10)
browser.quit()
When I run the code it gives me this error:
selenium.common.exceptions.NoSuchElementException: Message: Unable to locate element: .search-Hsmn_0WX upperCase-Hsmn_0WX input-3n5_2-hI
It gets to the lines that print a and b, but it stops at the line that looks up the element by its class.
1. Get rid of time.sleep(): it makes your tests unreliable and slow. Use implicit/explicit waits instead.
2. If an element has multiple classes in one attribute, use CSS or XPath selectors.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.wait import WebDriverWait
from selenium.webdriver.common.keys import Keys
browser = webdriver.Firefox()
browser.get('https://www.tradingview.com/chart/')
wait = WebDriverWait(browser, 10)
wait.until(EC.element_to_be_clickable((By.ID, "header-toolbar-symbol-search"))).click() # Find the search box
wait.until(EC.visibility_of_element_located((By.CSS_SELECTOR, ".search-Hsmn_0WX.upperCase-Hsmn_0WX.input-3n5_2-hI"))).send_keys("VETUSD")
# browser.close()
Here I wait until the first element is clickable and the second is visible, as an example of how explicit waits work in Selenium.
Note how much faster this version of the test is.
Instead of
crypto_search = browser.find_element_by_class_name("search-Hsmn_0WX upperCase-Hsmn_0WX input-3n5_2-hI")
try the following:
crypto_search = browser.find_element_by_css_selector(".search-Hsmn_0WX.upperCase-Hsmn_0WX.input-3n5_2-hI")
I tried and it worked for me.
Also, because there are several class names and they don't look very descriptive, I would prefer the following selector:
crypto_search = browser.find_element_by_css_selector("[data-name='symbol-search-items-dialog'] input")
According to the documentation you need to use find_elements. This is because classes are meant to be shared by multiple elements on a page, so it doesn't make sense to select only one element by class name, even if there is only one on the page.
If that element is the only one with that class on the page, try using
browser.find_elements_by_class_name("search-Hsmn_0WX upperCase-Hsmn_0WX input-3n5_2-hI")[0]
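Note that the class-name locator expects a single class name, so with a compound class string you would pick just one of the classes. A minimal sketch, using one class name taken from the question:
crypto_search = browser.find_elements_by_class_name("search-Hsmn_0WX")[0]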
Related
The following code does not write the partial string into the From input field on the website, even though this element seems to be the active element.
I spent a lot of time trying to debug and make the code work, but with no success. Can anyone please provide a hint about what is wrong? Thanks.
from selenium import webdriver
from selenium.webdriver.common.by import By
import time
from selenium.webdriver.common.keys import Keys
from colorama import init, Fore
class BookingTest1():
    def __init__(self):
        pass

    def test1(self):
        baseUrl = "https://www.goibibo.com/"
        driver = webdriver.Chrome()
        driver.maximize_window()
        # open airline site
        driver.get(baseUrl)
        driver.implicitly_wait(3)
        # Enter origin location.
        partialTextOrigin = "New"
        # select flight tab
        driver.find_element(By.XPATH, "//ul[@class='happy-nav']//li//a[@href='/flights/']").click()
        # select input box
        textElement = driver.find_element(By.XPATH, "//input")
        # check if input box is active
        if textElement == driver.switch_to.active_element:
            print('element is in focus')
            textElement.send_keys(partialTextOrigin)
        else:
            print('element is not in focus')
            print("Focus Event Triggered")
            driver.execute_script("arguments[0].focus();", textElement)
            time.sleep(5)
            if textElement == driver.switch_to.active_element:
                print('finally element is in focus')
                print(partialTextOrigin)
                textElement.send_keys(partialTextOrigin)
                time.sleep(5)

# test the code
tst = BookingTest1()
tst.test1()
There are several issues here:
First you need to click the p element in the From block; only after that, when the input appears, can you send text to it.
You should use unique locators. (There are more than 10 input elements on this page.)
Explicit waits (WebDriverWait with expected conditions) are much better than implicitly_wait in most cases.
Don't set timeouts to values that are too short.
There is no need to use driver.switch_to.active_element here.
The following code works for me:
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
options = Options()
options.add_argument("start-maximized")
webdriver_service = Service(r'C:\webdrivers\chromedriver.exe')  # raw string so the backslashes are not treated as escapes
driver = webdriver.Chrome(service=webdriver_service, options=options)
url = "https://www.goibibo.com/"
flights_xpath = "//ul[@class='happy-nav']//li//a[@href='/flights/']"
from_xpath = "//div[./span[contains(.,'From')]]//p[contains(text(),'Enter city')]"
from_input_xpath = "//div[./span[contains(.,'From')]]//input"
partialTextOrigin = "New"
wait = WebDriverWait(driver, 10)
driver.get(url)
wait.until(EC.element_to_be_clickable((By.XPATH, flights_xpath))).click()
wait.until(EC.element_to_be_clickable((By.XPATH, from_xpath))).click()
wait.until(EC.element_to_be_clickable((By.XPATH, from_input_xpath))).send_keys(partialTextOrigin)
The from_xpath and from_input_xpath locators are a little complex.
I was not sure whether the class names in that block of elements are stable, so I based the locators on the visible texts.
For example, "//div[./span[contains(.,'From')]]//p[contains(text(),'Enter city')]" means:
Find a div that has a direct span child whose content contains the text From.
Inside that div, find a p descendant whose text contains Enter city.
Similarly, "//div[./span[contains(.,'From')]]//input" means: find the parent div as described before, then find an input descendant element inside it.
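To see what these expressions match, here is a minimal sketch against hypothetical simplified markup (not the real goibibo HTML), using lxml, which understands the same XPath 1.0 syntax Selenium uses:
from lxml import html

doc = html.fromstring("""
<div>
  <span>From</span>
  <div><p>Enter city or airport</p><input/></div>
</div>
""")
# the outer div has a direct span child containing 'From'
p = doc.xpath("//div[./span[contains(.,'From')]]//p[contains(text(),'Enter city')]")
inp = doc.xpath("//div[./span[contains(.,'From')]]//input")
print(p[0].text, inp[0].tag)  # -> Enter city or airport input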
The result of running the solution code above is: (screenshot omitted)
Hello everyone. I'm learning Selenium recently and, as I bet is a classic newbie mistake, I filled my code with time.sleep, which made everything really slow. So I started to learn about WebDriverWait. I took some sample code and tried multiple things, but I still get the error of "what you clicked is blocked by this other thing", meaning the library did not crash but it also did not do anything. That must mean I am doing something wrong. This is an error I can avoid if I use the time.sleep function.
I'm using expected conditions, By, and ActionChains alongside WebDriverWait, though in my last test I tried not using ActionChains to narrow down why it's failing.
I'm testing against a company site that is probably under NDA, so I don't have a public example to show this on. I tried searching for "pages with cover openings" but couldn't find any, so if you know of any site I can use to illustrate this, that would also be really helpful. What happens is that the site loads with a "cover" animation, and after a few seconds the animation goes away to reveal the button I'm looking for.
Environment info:
Using PyCharm Community, latest version
Using Selenium 4.0.0b2.post1 (I also tried 3.9 with the same result)
Using ChromeDriver 89, as my Google Chrome version is 89 as well [Version 89.0.4389.114]
Here are my code snippets:
Attempt #1:
from selenium import webdriver
from selenium.webdriver.support.wait import WebDriverWait
import selenium.webdriver.support.expected_conditions as EC
from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.common.by import By
import selenium.webdriver.support.ui as ui
def test(driver, actions, delay):
    cart = "//header/div[1]/div[1]/a[2]/*[1]"
    cartxp = WebDriverWait(driver, delay).until(EC.visibility_of_element_located((By.XPATH, cart)))
    actions.move_to_element(cartxp).perform()
    cartclk = driver.find_element_by_xpath(cart).click()
Attempt #2: Using UI library
from selenium import webdriver
from selenium.webdriver.support.wait import WebDriverWait
import selenium.webdriver.support.expected_conditions as EC
from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.common.by import By
import selenium.webdriver.support.ui as ui
def test2(driver, delay, actions):
    cart = "//header/div[1]/div[1]/a[2]/*[1]"
    cartxp = ui.WebDriverWait(driver, delay).until(EC.visibility_of_element_located((By.XPATH, cart)))
    actions.move_to_element(cartxp).perform()
    cartclk = driver.find_element_by_xpath(cart).click()
Attempt #3: Not using Action chains
from selenium import webdriver
from selenium.webdriver.support.wait import WebDriverWait
import selenium.webdriver.support.expected_conditions as EC
from selenium.webdriver.common.by import By
def test3(driver, delay):
    wait = WebDriverWait(driver, delay)
    cart = "//header/div[1]/div[1]/a[2]/*[1]"
    button = wait.until(EC.element_to_be_clickable((By.XPATH, cart)))
    cartclk = driver.find_element_by_xpath(cart).click()
All of these have a main method that just has:
driver = webdriver.Chrome()
test3(driver, 30)
driver.get('Site that I cant reveal cuz of NDA')
action = ActionChains(driver)
The error I get is:
Message: element click intercepted: Element ... is not clickable at point (1183, 27). Other element would receive the click
Thanks in advance for any help!
Edit: After some comments I changed my XPath to //a[@class='header__cart']. Both the previous one and this one lead to a single result if I use them in Chrome's inspector to look for the button. Additionally, the locator works as intended if I use it to actually click the button after using time.sleep() to wait out the animation.
Additionally, just in case, I tried using the try/catch as they did in the suggested question. It also failed.
Attempt #4: Surrounding in a try-catch
def test4(driver, delay):
    # requires: from selenium.common.exceptions import TimeoutException
    cart = "//a[@class='header__cart']"
    try:
        my_elem = WebDriverWait(driver, delay).until(EC.visibility_of_element_located((By.XPATH, cart)))
        print("Page is ready!")
    except TimeoutException:
        print("Loading took too much time!")
    cart_btn = driver.find_element_by_xpath(cart)
    cart_btn.click()
Error: selenium.common.exceptions.ElementClickInterceptedException: Message: element click intercepted: Element <a href="/cart" class="header__cart" data-items-in-cart="0" )="">... is not clickable at point (1200, 27). Other element would receive the click
To clarify what I meant earlier, this works:
cart = "//a[@class='header__cart']"
time.sleep(20)
cartclk = driver.find_element_by_xpath(cart).click()
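For comparison, a sleep-free version of this would explicitly wait for the cover animation to disappear and then for the button to become clickable. This is only a sketch: the overlay selector below is hypothetical, so inspect the real page and substitute the element that actually covers the button:
from selenium.webdriver.support.wait import WebDriverWait
import selenium.webdriver.support.expected_conditions as EC
from selenium.webdriver.common.by import By

def test5(driver, delay):
    cart = "//a[@class='header__cart']"
    wait = WebDriverWait(driver, delay)
    # wait until the covering element is gone, not just until the button exists
    wait.until(EC.invisibility_of_element_located((By.CSS_SELECTOR, ".cover-animation")))  # hypothetical selector
    wait.until(EC.element_to_be_clickable((By.XPATH, cart))).click()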
My code goes into this website, and clicks on each row of a table.
Each row opens a window that contains a "keywords" field at the bottom, which I am trying to get.
I don't think I have the XPath for this field right, though all I did was right-click and "copy xpath".
Expected output is to print the value of the keywords field.
from selenium import webdriver
from bs4 import BeautifulSoup
import time
import requests
driver = webdriver.Chrome()
driver.get('https://aaaai.planion.com/Web.User/SearchSessions?ACCOUNT=AAAAI&CONF=AM2021&USERPID=PUBLIC&ssoOverride=OFF')
time.sleep(3)
page_source = driver.page_source
soup = BeautifulSoup(page_source,'html.parser')
eachRow = driver.find_elements_by_class_name('clickdiv')
for item in eachRow:
    item.click()  # opens the new window per each row
    time.sleep(2)
    faculty = driver.find_element_by_xpath("//td[@valign='MIDDLE']/b")  # this works fine
    keywords = item.find_element_by_xpath('//*[@id="W1"]/div/div/div/div[2]/div[2]/table/tbody/tr[5]/td/text()')  # this does not
    print(keywords.text)
    print(faculty.text)
    driver.find_element_by_class_name('XX').click()  # closes window
ERROR message - selenium.common.exceptions.InvalidSelectorException: Message: invalid selector: The result of the xpath expression "//*[@id="W1"]/div/div/div/div[2]/div[2]/table/tbody/tr[5]/td/text()" is: [object Text]. It should be an element.
Your XPath '//*[@id="W1"]/div/div/div/div[2]/div[2]/table/tbody/tr[5]/td/text()' selects the text node of the element, not the element itself. Just remove text() and it should work. XPath is notoriously finicky, so be careful how you use it and how you define your top-level root.
I reworked your code very slightly here to give a better option than time.sleep(): explicit waits. It isn't strictly necessary, but it tends to make the code more reliable and speeds up processing; it doesn't pause the process unless it has to, and only for as long as necessary.
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.support.wait import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
driver = webdriver.Chrome()
driver.get('https://aaaai.planion.com/Web.User/SearchSessions?ACCOUNT=AAAAI&CONF=AM2021&USERPID=PUBLIC&ssoOverride=OFF')
WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.CLASS_NAME, 'clickdiv')))
eachRow = driver.find_elements_by_class_name('clickdiv')
for item in eachRow:
    item.click()  # opens the new window per each row
    faculty = WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.XPATH, "//td[@valign='MIDDLE']/b")))
    keywords = item.find_element_by_xpath('//*[@id="W1"]/div/div/div/div[2]/div[2]/table/tbody/tr[5]/td')
    print(keywords.text)
    print(faculty.text)
    driver.find_element_by_class_name('XX').click()  # closes window
I am trying to scrape data from the Sunshine List website (http://www.sunshinelist.ca/) using the Selenium package, but I get the error below. From several other related posts I understand that I need to use WebDriverWait to explicitly ask the driver to wait/refresh, but I am unable to identify where and how I should call the function.
Screenshot of the error:
StaleElementReferenceException: Message: The element reference of <tr class="even"> is stale: either the element is no longer attached to the DOM or the page has been refreshed
import numpy as np
import pandas as pd
import requests
import time
from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.firefox.firefox_binary import FirefoxBinary
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import WebDriverWait
from selenium.common.exceptions import StaleElementReferenceException
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
ffx_bin = FirefoxBinary(r'C:\Users\BhagatM\AppData\Local\Mozilla Firefox\firefox.exe')
ffx_caps = DesiredCapabilities.FIREFOX
ffx_caps['marionette'] = True
driver = webdriver.Firefox(capabilities=ffx_caps,firefox_binary=ffx_bin)
driver.get("http://www.sunshinelist.ca/")
driver.maximize_window()
tablewotags1 = []
while True:
    divs = driver.find_element_by_id('datatable-disclosures')
    divs1 = divs.find_elements_by_tag_name('tbody')
    for d1 in divs1:
        div2 = d1.find_elements_by_tag_name('tr')
        for d2 in div2:
            tablewotags1.append(d2.text)
    try:
        driver.find_element_by_link_text('Next →').click()
    except NoSuchElementException:
        break
year1=tablewotags1[0::10]
name1=tablewotags1[3::10]
position1=tablewotags1[4::10]
employer1=tablewotags1[1::10]
df1=pd.DataFrame({'Year':year1,'Name':name1,'Position':position1,'Employer':employer1})
df1.to_csv('Sunshine List-1.csv', index=False)
If your problem is clicking the "Next" button, you can do that with this XPath:
driver = webdriver.Firefox(executable_path=r'/pathTo/geckodriver')
driver.get("http://www.sunshinelist.ca/")
wait = WebDriverWait(driver, 20)
el = wait.until(EC.presence_of_element_located((By.XPATH, "//ul[@class='pagination']/li[@class='next']/a[@href='#' and text()='Next → ']")))
el.click()
Before each click on the "Next" button, you should find that button again and then click it.
Or do something like this:
max_attempts = 10
while True:
    next = self.driver.find_element_by_css_selector(".next>a")
    if next is not None:
        break
    else:
        time.sleep(0.5)
        max_attempts -= 1
        if max_attempts == 0:
            self.fail("Cannot find element.")
And after this code, perform the click action.
PS: you can also try adding a plain time.sleep(x) after finding the element and before the click action.
Try the code below.
When the element is no longer attached to the DOM and a StaleElementReferenceException is raised, search for the element again to get a fresh reference.
Please note that I checked this with Chrome:
# inside your while True loop:
try:
    driver.find_element_by_css_selector('div[id="datatable-disclosures_wrapper"] li[class="next"]>a').click()
except StaleElementReferenceException:
    # the reference went stale: look the element up again and retry the click
    driver.find_element_by_css_selector('div[id="datatable-disclosures_wrapper"] li[class="next"]>a').click()
except NoSuchElementException:
    break
Stale-element exceptions can be handled by catching StaleElementReferenceException, so the for loop keeps executing even when an element you look up with a find_element method inside the loop goes stale. Add the import
from selenium.common import exceptions
and customize the code inside your for loop as:
# inside the for loop:
try:
    driver.find_elements_by_id("data")  # method to find elements
    # your code
except exceptions.StaleElementReferenceException:
    pass
When a StaleElementReferenceException is raised, it means that something changed on the site, but not in the list you are holding. So the trick is to refresh that list every time, inside the loop, like this:
while True:
    driver.implicitly_wait(4)
    for d1 in driver.find_element_by_id('datatable-disclosures').find_element_by_tag_name('tbody').find_elements_by_tag_name('tr'):
        tablewotags1.append(d1.text)
    try:
        driver.switch_to.default_content()
        driver.find_element_by_xpath('//*[@id="datatable-disclosures_wrapper"]/div[2]/div[2]/div/ul/li[7]/a').click()
    except NoSuchElementException:
        print("Don't be so cryptic about error messages, they are good\n...Script broke clicking next")  # jk aside, put some info there
        break
Hope this helps you, cheers.
Edit:
So I went to the said website. The layout is pretty straightforward, but the structure repeats itself about four times, so when you crawl the site like that, something is bound to change.
So I’ve edited the code to scrape only one tbody tree, the one from the first datatable-disclosures, and added some waits.
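For reference, here is a minimal sketch of the same loop using an explicit wait instead of implicitly_wait, re-finding the rows on every pass so that no stale reference is ever reused (driver, tablewotags1 and NoSuchElementException are as in the question; the locators come from the question's code, so adjust them to the real page):
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By

wait = WebDriverWait(driver, 10)
while True:
    # re-locate the rows on every iteration so the references are always fresh
    rows = wait.until(EC.presence_of_all_elements_located((By.CSS_SELECTOR, '#datatable-disclosures tbody tr')))
    for row in rows:
        tablewotags1.append(row.text)
    try:
        driver.find_element_by_link_text('Next →').click()
    except NoSuchElementException:
        break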
I am trying to write a web scraper in Python that will activate the "onclick" functionality of certain buttons on a webpage, because the tables with the data I want can be converted to CSV there, which makes the data much easier to access. The problem is that I am unable to locate elements by XPath at all when using PhantomJS. How can I click the element and access the CSV content that I want?
This is my code:
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import TimeoutException
from selenium.webdriver.common.by import By
from selenium.webdriver.common.proxy import *
url = "http://www.pro-football-reference.com/boxscores/201609180nwe.htm"
xpath = "//*[@id='all_player_offense']/div[1]/div/ul/li[1]/div/ul/li[3]/button"
path_to_phantomjs = 'browser/phantomjs'
browser = webdriver.PhantomJS(executable_path = path_to_phantomjs)
browser.get(url)
delay=3
element_present = EC.presence_of_element_located((By.ID, 'all_player_offense'))
WebDriverWait(browser, delay).until(element_present)
browser.find_element_by_xpath(xpath).click()
And I get this error:
selenium.common.exceptions.NoSuchElementException: Message: {"errorMessage":"Unable to find element with xpath '//*[@id='all_player_offense']/div[1]/div/ul/li[1]/div/ul/li[3]/button'","request":{"headers":{"Accept":"application/json","Accept-Encoding":"identity","Connection":"close","Content-Length":"153","Content-Type":"application/json;charset=UTF-8","Host":"127.0.0.1:50989","User-Agent":"Python-urllib/2.7"},"httpVersion":"1.1","method":"POST","post":"{\"using\": \"xpath\", \"sessionId\": \"93ff24f0-9cbe-11e6-8711-bdfa3ff9cfb1\", \"value\": \"//*[@id='all_player_offense']/div[1]/div/ul/li[1]/div/ul/li[3]/button\"}","url":"/element","urlParsed":{"anchor":"","query":"","file":"element","directory":"/","path":"/element","relative":"/element","port":"","host":"","password":"","user":"","userInfo":"","authority":"","protocol":"","source":"/element","queryKey":{},"chunks":["element"]},"urlOriginal":"/session/93ff24f0-9cbe-11e6-8711-bdfa3ff9cfb1/element"}}
Screenshot: available via screen
IMPORTANT THING I FORGOT TO MENTION: As described in this issue on GitHub, try putting set_window_size(width, height) or maximize_window() after setting up the webdriver. You should also consider telling the webdriver to implicitly_wait(10) for the element to appear.
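A minimal sketch of those suggestions in place (the window size values are just examples; path_to_phantomjs is as defined in the question):
browser = webdriver.PhantomJS(executable_path=path_to_phantomjs)
browser.set_window_size(1920, 1080)  # or: browser.maximize_window()
browser.implicitly_wait(10)  # wait up to 10 seconds for elements to appear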
So there's a special maneuver you have to perform in order for the Selenium Webdriver to properly emulate what you're doing. In essence, to get the desired data, you'd have to:
A: Hover over the "Share & More" dropdown menu. Then
B: Click "Get table as CSV (Excel)".
For A, this involves having to place the emulated cursor on the element without clicking it. This idea of "mouse over" can be done with the move_to_element() function provided in the ActionChains class. So at the top you'd insert this:
from selenium.webdriver.common.action_chains import ActionChains
You want Selenium to find the specific element and move to it. You can achieve this with 2 lines of code:
dropdown = browser.find_element_by_xpath('//*[@id="all_player_offense"]/div[1]/div/ul/li[1]')
ActionChains(browser).move_to_element(dropdown).perform()
If you omit the above, you'll get an ElementNotVisibleException.
Now for B, you should be able to do browser.find_element_by_xpath(xpath).click().
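Putting A and B together, a sketch (url, xpath and browser are as defined in the question):
dropdown = browser.find_element_by_xpath('//*[@id="all_player_offense"]/div[1]/div/ul/li[1]')
ActionChains(browser).move_to_element(dropdown).perform()
browser.find_element_by_xpath(xpath).click()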