selenium.common.exceptions.TimeoutException: Message: - python

I use this line of code to wait until product_id appears. My code works well for about 30 minutes:
element = WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.ID, product_id)))
And then I get this error:
selenium.common.exceptions.TimeoutException: Message:
I tried increasing the WebDriverWait timeout to 100, but that didn't solve it. I want to understand why this error occurs, and whether there is a solution for this case. Thanks so much!
This is my current workaround, but I want to know the cause of the error and would like a better solution:
while True:
    driver.get(URL)
    try:
        element = WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.ID, product_id)))
        break
    except TimeoutException:
        driver.quit()
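A note on the workaround above: driver.quit() closes the browser session, so the next driver.get(URL) in the loop would fail anyway. A retry pattern that caps the number of attempts might look like this (a pure-Python sketch; the action callback is hypothetical and stands in for the driver.get + WebDriverWait calls, and in real Selenium code you would catch TimeoutException specifically):

```python
def retry(action, attempts=3, on_fail=None):
    """Call action() until it succeeds or attempts run out.

    action: callable that raises on failure (e.g. a function wrapping
            driver.get(URL) and WebDriverWait(...).until(...)).
    on_fail: optional cleanup run after the final failure
             (e.g. driver.quit()).
    """
    last_exc = None
    for _ in range(attempts):
        try:
            return action()
        except Exception as exc:  # in Selenium code, catch TimeoutException here
            last_exc = exc
    if on_fail is not None:
        on_fail()
    raise last_exc


# Toy demonstration: fails twice, succeeds on the third try.
calls = {"n": 0}

def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("not loaded yet")
    return "element"

print(retry(flaky))
```

This way the driver is only shut down once, after all attempts have failed, instead of on the first timeout.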

Can you provide an HTML example? Without one it's nearly impossible to debug this for you. Code plus an HTML example would be helpful.
My guess is that your website uses dynamic id locators, or uses an iframe, or that in your script Selenium is active on another tab/window/iframe/context. This assumes your product_id is correct.
Update with the example:
from selenium import webdriver
from selenium.common.exceptions import TimeoutException
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
PRODUCT_ID = 'productTitle'
driver = webdriver.Chrome()
driver.get("https://www.amazon.com/dp/B08BB9RWXD")
try:
    element = WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.ID, PRODUCT_ID)))
    print(element.text)
except TimeoutException:
    print("Cannot find product title.")
driver.quit()
I don't have any problem using the above code. You can extend it yourself :).
One thing to note is that you do NOT need the while loop and break. WebDriverWait(driver, 10) already polls for your element for up to 10 seconds; as soon as it finds the element, it stops waiting (the "break" is built in). You don't need to write the loop yourself.
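To make that point concrete, here is roughly what WebDriverWait does internally (a simplified pure-Python model, not Selenium's actual implementation): it re-evaluates the condition every poll interval until the condition returns something truthy or the timeout elapses.

```python
import time

def simple_wait(condition, timeout=10, poll_frequency=0.5):
    """Simplified model of WebDriverWait(driver, timeout).until(condition).

    condition: a zero-argument callable here (the real API passes the
    driver into it). Returns the condition's truthy result, or raises
    TimeoutError once the timeout elapses.
    """
    end = time.monotonic() + timeout
    while True:
        result = condition()
        if result:
            return result          # the built-in "break"
        if time.monotonic() > end:
            raise TimeoutError("condition not met within %s s" % timeout)
        time.sleep(poll_frequency)


# The condition becomes truthy on the third poll.
state = {"polls": 0}

def fake_presence():
    state["polls"] += 1
    return "element" if state["polls"] >= 3 else None

print(simple_wait(fake_presence, timeout=5, poll_frequency=0.01))
```

So wrapping WebDriverWait in your own while True loop just layers one polling loop on top of another.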

How do I resolve this error in Selenium : ERROR: Couldn't read tbsCertificate as SEQUENCE, ERROR: Failed parsing Certificate

I'm trying to execute a Selenium program in Python that goes to a new URL when a button on the current homepage is clicked. I'm new to Selenium, and any help regarding this would be appreciated. Here's my code:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import Select
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
url = 'https://nmit.ac.in'
driver = webdriver.Chrome()
driver.get(url)
try:
    # wait 10 seconds before looking for element
    element = WebDriverWait(driver, 10).until(
        EC.presence_of_element_located(By.LINK_TEXT, "Parent Portal")
    )
except:
    print()
driver.find_element(By.LINK_TEXT, "Parent Portal").click()
I have tried to increase the wait time as well as using all forms of the supported located strategies under the BY keyword, but to no avail. I keep getting this error.
As far as I know, you shouldn't be worried by those errors. However, I tried your code and it isn't finding any element on the page. I recommend using the XPath of the element you want to find:
import time

# wait 10 seconds before looking for element
element = WebDriverWait(driver, 10).until(
    EC.presence_of_element_located((By.XPATH, "/html//div[@class='wsmenucontainer']//button[@href='https://nmitparents.contineo.in/']"))
)
element.click()
# wait for the new view to be loaded
time.sleep(10)
driver.quit()
P.S.: you can use the extension Ranorex Selocity to extract a good, unique XPath for any element on a webpage, and to test it!

Selenium command crash

I've written a simple crawler for data retrieval; however, sometimes specific commands crash because of other elements. For example, the command below:
driver.find_element_by_xpath("//a[@class='abcs__123 js-tabs ']").click()
returns:
selenium.common.exceptions.ElementClickInterceptedException: Message: Element <a class="abcs__123 js-tabs "> is not clickable at point (549,38) because another element <div class="header__container"> obscures it
How can I fix this issue? For now I use the trick below:
time.sleep(8)
but it delays my program by a fixed amount without any guarantee of avoiding the error.
There are several cases where this happens, so I can't know what exactly causes the issue in your specific case.
Clicking the element with JavaScript or moving to the element with Actions and then clicking on it helps in most cases.
So please try any of the following:
element = driver.find_element_by_xpath("//a[@class='abcs__123 js-tabs ']")
driver.execute_script("arguments[0].click();", element)
or
element = driver.find_element_by_xpath("//a[@class='abcs__123 js-tabs ']")
webdriver.ActionChains(driver).move_to_element(element).click(element).perform()
The element may not be loaded yet at the moment you try to find it, though there are several possible causes for this problem.
If your problem is the time it takes for the element to load, try this:
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
from selenium.common.exceptions import TimeoutException
driver = webdriver.Chrome() #or firefox
driver.get("url")
delay = 10 #sec
try:
    element = WebDriverWait(driver, delay).until(EC.presence_of_element_located((By.XPATH, "//a[@class='abcs__123 js-tabs ']")))
    print("Element loaded")
except TimeoutException:
    print("Loading took too much time")

Selenium does NOT do anything after getting rid of GDPR consent

It's my first time using Selenium and web scraping. I have been stuck on the annoying GDPR iframe. I am simply trying to go to a website, type something into the search bar, and then click one of the results. But nothing seems to happen after I get rid of the GDPR consent.
Importantly, it does not give any errors.
This is my very simple code:
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
import time
# Web driver
driver = webdriver.Chrome(executable_path=r"C:\Program Files (x86)\chromedriver.exe")
driver.get("https://transfermarkt.co.uk/")
search = driver.find_element_by_name("query")
search.send_keys("Sevilla FC")
search.send_keys(Keys.RETURN)
WebDriverWait(driver,10).until(EC.frame_to_be_available_and_switch_to_it((By.ID, "sp_message_iframe_382445")))
WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.XPATH, "//button[text()='ACCEPT ALL']"))).click()
try:
    sevfclink = WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.ID, "368")))
    sevfclink.click()
except:
    driver.quit()
time.sleep(5)
driver.quit()
Not sure where you got the iframe id from, but it might be dynamic, so try this.
driver.get("https://transfermarkt.co.uk/")
wait = WebDriverWait(driver,10)
search = wait.until(EC.element_to_be_clickable((By.NAME, "query")))
search.send_keys("Sevilla FC", Keys.RETURN)
wait.until(EC.frame_to_be_available_and_switch_to_it((By.CSS_SELECTOR,"iframe[id^='sp_message_iframe']")))
wait.until(EC.element_to_be_clickable((By.XPATH, "//button[text()='ACCEPT ALL']"))).click()
driver.switch_to.default_content()
try:
    sevfclink = wait.until(EC.element_to_be_clickable((By.ID, "368")))
    sevfclink.click()
except:
    pass
It looks like the two lines starting with WebDriverWait throw an error. If I skip ahead to the try statement, I get the results of the search: a page with an overview of Sevilla FC shows up. I presume the WebDriverWait lines are there to make sure you wait for something, but from what I can tell they are unnecessary.
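One way to keep the consent handling without letting it break the script is to treat the banner as optional: attempt the click, and move on if it never shows up. A generic "click if present" helper might look like this (a pure-Python sketch with a fake element; in real Selenium code the finder would be a WebDriverWait(...).until(...) call and the absent case would raise TimeoutException):

```python
def click_if_present(finder, missing_exceptions=(Exception,)):
    """Try to locate and click an optional element such as a consent banner.

    finder: callable returning a clickable object, raising if not found
            (in Selenium, typically TimeoutException from WebDriverWait).
    Returns True if the element was clicked, False if it was absent.
    """
    try:
        element = finder()
    except missing_exceptions:
        return False
    element.click()
    return True


# Toy demonstration with a fake element standing in for a WebElement.
class FakeButton:
    def __init__(self):
        self.clicked = False

    def click(self):
        self.clicked = True


banner = FakeButton()
print(click_if_present(lambda: banner))       # banner present: clicked

def no_banner():
    raise LookupError("no banner shown on this page")

print(click_if_present(no_banner))            # banner absent: skipped
```

This keeps the wait-for-banner logic in place for runs where the iframe appears, while no longer blocking runs where it doesn't.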

Circumventing Stale Element Exceptions in Selenium

I have read several articles on this site about the StaleElementReferenceException and am aware that this error is caused by the element no longer being in the site's DOM. What I am trying to do is click the links at the bottom of this webpage in order to go on and see the next page's listings. I have tried a few ways around this exception and haven't found any that work. Here is an example of the code I have tried, and what I thought it might accomplish.
driver = webdriver.Chrome(r'C:\Users\Hank\Desktop\chromedriver_win32\chromedriver.exe')
driver.get('https://steamcommunity.com/market/listings/440/Unusual%20Old%20Guadalajara')
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import WebDriverWait as wait
from selenium.webdriver.support.expected_conditions import presence_of_element_located
from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import StaleElementReferenceException
action = ActionChains(driver)
page_links = wait(driver, 10).until(EC.presence_of_all_elements_located((By.CSS_SELECTOR, '[class^=market_paging_pagelink]')))
try:
    action.move_to_element(page_links[1]).click().perform()
except StaleElementReferenceException:
    print("Exception received, trying again")
    time.sleep(5)
    page_links = wait(driver, 10).until(EC.presence_of_all_elements_located((By.CSS_SELECTOR, '[class^=market_paging_pagelink]')))
    action.move_to_element(page_links[1]).click().perform()
I was hoping that this code segment would attempt to move to the element at the bottom and click it, or else print the error message and try again, succeeding the second time. Instead, the code simply throws the error again. If my question has already been answered, please direct me to the relevant link.
Thank you!
The approach I normally go for is to click Next page until the button gets disabled/invisible.
Here's a working example based on your page. You should obviously do whatever is relevant in the while loop; I chose to capture prices for the sake of example.
url = "https://steamcommunity.com/market/listings/440/Unusual%20Old%20Guadalajara"
driver.get(url)
next_button = wait(driver, 10).until(EC.presence_of_element_located((By.ID, 'searchResults_btn_next')))
# capture the start value from "Showing x-xx of 22 results";
# we need this to check against later
ref_val = wait(driver, 10).until(EC.presence_of_element_located((By.ID, 'searchResults_start'))).text
while next_button.get_attribute('class') == 'pagebtn':
    next_button.click()
    # wait until ref_val has changed
    wait(driver, 10).until(lambda driver: wait(driver, 10).until(EC.presence_of_element_located((By.ID, 'searchResults_start'))).text != ref_val)
    # ====== Do whatever is relevant here ==========================
    page_num = wait(driver, 10).until(EC.presence_of_element_located((By.CSS_SELECTOR, '.market_paging_pagelink.active'))).text
    print(f"Prices from page {page_num}")
    prices = wait(driver, 10).until(EC.presence_of_all_elements_located(
        (By.XPATH, ".//span[@class='market_listing_price market_listing_price_with_fee']")))
    for price in prices:
        print(price.text)
    # ==============================================================
    # get the new reference value
    ref_val = wait(driver, 10).until(EC.presence_of_element_located((By.ID, 'searchResults_start'))).text
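Coming back to the original question: a reusable way to circumvent stale references is to re-locate the element inside a retry loop rather than holding on to the old reference. A generic sketch (pure Python with toy stand-ins; in Selenium the locate callable would re-run the WebDriverWait lookup, and StaleElementReferenceException would be the retried exception):

```python
def with_stale_retry(locate, act, retries=3, retried_exc=Exception):
    """Re-locate an element and act on it, retrying if the reference goes stale.

    locate: callable that finds the element fresh each time
            (e.g. lambda: wait(driver, 10).until(EC.presence_of_element_located(...))).
    act: callable taking the element (e.g. lambda el: el.click()).
    retried_exc: exception class to retry on
                 (StaleElementReferenceException in real Selenium code).
    """
    for attempt in range(retries):
        element = locate()        # fresh lookup on every attempt
        try:
            return act(element)
        except retried_exc:
            if attempt == retries - 1:
                raise             # give up after the last attempt


# Toy demonstration: the first reference is "stale", the second works.
class Stale(Exception):
    pass

refs = iter(["stale-ref", "fresh-ref"])
clicked = []

def act(el):
    if el == "stale-ref":
        raise Stale()
    clicked.append(el)

with_stale_retry(lambda: next(refs), act, retried_exc=Stale)
print(clicked)
```

The key difference from the code in the question is that the lookup itself is repeated on every attempt, so the retry never reuses a reference that the page reload has invalidated.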

Python: Element is not clickable selenium

I am trying to write a program in Python that clicks through to the next page until it reaches the last page. I followed some old posts on Stack Overflow and wrote the following code:
from selenium import webdriver
from selenium.common.exceptions import NoSuchElementException
driver = webdriver.Chrome(executable_path="/Users/yasirmuhammad/Downloads/chromedriver")
driver.get("https://stackoverflow.com/users/37181/alex-gaynor?tab=tags")
while True:
    try:
        driver.find_element_by_link_text('next').click()
    except NoSuchElementException:
        break
However, when I run the program, it throws following error:
selenium.common.exceptions.WebDriverException: Message: unknown error: Element ... is not clickable at point (1180, 566). Other element would receive the click: <html class="">...</html>
(Session info: chrome=68.0.3440.106)
I also followed a thread of Stackoverflow (selenium exception: Element is not clickable at point) but no luck.
You need to close this banner first.
Since Selenium opens a fresh browser instance, the website asks you to store cookies every time you run the script. It is exactly this banner that gets in the way when Selenium tries to click your "next" button. Use this code to click its close button -
driver.find_element_by_xpath("//a[@class='grid--cell fc-white js-notice-close']").click()
Also, driver.find_element_by_link_text('next') will throw a StaleElementReferenceException. Use this locator instead -
driver.find_element_by_xpath("//span[contains(text(),'next')]").click()
Final code -
import time

driver.get("https://stackoverflow.com/users/37181/alex-gaynor?tab=tags")
driver.find_element_by_xpath("//a[@class='grid--cell fc-white js-notice-close']").click()
while True:
    try:
        time.sleep(3)
        driver.find_element_by_xpath("//span[contains(text(),'next')]").click()
    except NoSuchElementException:
        break
As per your question, to click through to the next page until you reach the last page, you can use the following solution:
Code Block:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import TimeoutException
from selenium.common.exceptions import NoSuchElementException
from selenium.common.exceptions import StaleElementReferenceException
options = webdriver.ChromeOptions()
options.add_argument("start-maximized")
options.add_argument('disable-infobars')
driver=webdriver.Chrome(chrome_options=options, executable_path=r'C:\Utility\BrowserDrivers\chromedriver.exe')
driver.get("https://stackoverflow.com/users/37181/alex-gaynor?tab=tags")
WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH, "//a[@class='grid--cell fc-white js-notice-close' and @aria-label='notice-dismiss']"))).click()
while True:
    try:
        driver.execute_script("window.scrollTo(0, document.body.scrollHeight)")
        WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH, "//div[@class='pager fr']//a[last()]/span[@class='page-numbers next']")))
        driver.find_element_by_xpath("//div[@class='pager fr']//a[last()]/span[@class='page-numbers next']").click()
    except (TimeoutException, NoSuchElementException, StaleElementReferenceException):
        print("Last page reached")
        break
driver.quit()
Console Output:
Last page reached
There are a couple of things that need to be taken care of. First, it seems the element is hidden by the cookies banner; by scrolling the page, the element can be made available. Second, when you click on next, the page is reloaded, so you need to handle the StaleElementReferenceException. Adding both of these, the code looks as follows:
from selenium import webdriver
from selenium.common.exceptions import NoSuchElementException
from selenium.common.exceptions import StaleElementReferenceException
driver = webdriver.Chrome()
driver.get("https://stackoverflow.com/users/37181/alex-gaynor?tab=tags")
driver.execute_script("window.scrollTo(0, document.body.scrollHeight)")
while True:
    try:
        webdriver.ActionChains(driver).move_to_element(driver.find_element_by_link_text('next')).click().perform()
    except NoSuchElementException:
        break
    except StaleElementReferenceException:
        pass
print("Reached the last page")
driver.quit()
I ran into the same error, and for me the solution was not scrolling the window to the element (maybe that fixes some cases, but it didn't in mine).
My solution is to use JavaScript, with code as follows:
click_goal = web.find_element_by_xpath('//*[@id="s_position_list"]/ul/li[1]/div[1]/div[1]/div[1]/a/h3')
web.execute_script("arguments[0].click();", click_goal)
