Wait until page is loaded with Selenium WebDriver for Python

I want to scrape all the data of a page implemented by an infinite scroll. The following Python code works:
for i in range(100):
    driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
    time.sleep(5)
This means that every time I scroll down to the bottom, I wait 5 seconds, which is generally enough for the page to finish loading the newly generated content. But this may not be time efficient: the page may finish loading the new content within 5 seconds. How can I detect whether the page has finished loading the new content each time I scroll down? If I could detect this, I could scroll down again as soon as loading finishes, which would be more time efficient.
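One hedged sketch of such a check (everything here is illustrative: the helper name is made up, and it assumes each batch of new content increases document.body.scrollHeight): compare the height before and after each scroll, and stop waiting as soon as it grows.
import time

def scroll_until_loaded(driver, timeout=30, poll=0.5):
    # hypothetical helper: scroll to the bottom, then poll scrollHeight until it
    # grows (new content arrived) or the timeout expires (no more content)
    last_height = driver.execute_script("return document.body.scrollHeight")
    while True:
        driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
        deadline = time.time() + timeout
        while time.time() < deadline:
            new_height = driver.execute_script("return document.body.scrollHeight")
            if new_height > last_height:
                break
            time.sleep(poll)
        else:
            return  # height stopped growing: assume the real bottom was reached
        last_height = new_height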

The webdriver will wait for a page to load by default via the .get() method.
As you may be looking for some specific element, as @user227215 said, you should use WebDriverWait to wait for an element located on your page:
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
from selenium.common.exceptions import TimeoutException

browser = webdriver.Firefox()
browser.get("url")
delay = 3  # seconds
try:
    myElem = WebDriverWait(browser, delay).until(EC.presence_of_element_located((By.ID, 'IdOfMyElement')))
    print("Page is ready!")
except TimeoutException:
    print("Loading took too much time!")
I have used it for checking alerts. You can use any of the other By strategies to build the locator.
EDIT 1:
I should mention that the webdriver waits for a page to load by default. It does not wait for loading inside frames or for AJAX requests. It means that when you use .get('url'), your browser will wait until the page is completely loaded and then move on to the next command in the code. But when you are posting an AJAX request, webdriver does not wait, and it's your responsibility to wait an appropriate amount of time for the page, or a part of the page, to load; that is why there is a module named expected_conditions.
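For example, a minimal sketch (the load-more id and result class are invented for illustration): after triggering an AJAX action, wait explicitly for the element it is expected to produce.
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver.find_element(By.ID, 'load-more').click()  # hypothetical button that fires an AJAX request
WebDriverWait(driver, 10).until(
    EC.presence_of_element_located((By.CLASS_NAME, 'new-result')))  # hypothetical class of the AJAX-loaded content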

Trying to pass find_element_by_id to the constructor for presence_of_element_located (as shown in the accepted answer) caused NoSuchElementException to be raised. I had to use the syntax in fragles' comment:
from selenium import webdriver
from selenium.common.exceptions import TimeoutException
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()
driver.get('url')
timeout = 5
try:
    element_present = EC.presence_of_element_located((By.ID, 'element_id'))
    WebDriverWait(driver, timeout).until(element_present)
except TimeoutException:
    print("Timed out waiting for page to load")
This matches the example in the documentation. Here is a link to the documentation for By.

Find below three methods:
readyState
Checking page readyState (not reliable):
def page_has_loaded(self):
    self.log.info("Checking if {} page is loaded.".format(self.driver.current_url))
    page_state = self.driver.execute_script('return document.readyState;')
    return page_state == 'complete'
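A sketch of wiring the same check into WebDriverWait, so that it polls with a timeout instead of being called manually (the 10-second timeout is an arbitrary choice):
from selenium.webdriver.support.ui import WebDriverWait

WebDriverWait(driver, 10).until(
    lambda d: d.execute_script('return document.readyState') == 'complete')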
The wait_for helper function is good, but unfortunately click_through_to_new_page is open to a race condition: we may manage to execute the script in the old page before the browser has started processing the click, in which case page_has_loaded just returns true straight away.
id
Comparing new page ids with the old one:
def page_has_loaded_id(self):
    self.log.info("Checking if {} page is loaded.".format(self.driver.current_url))
    try:
        # old_page is the <html> element captured before triggering navigation
        new_page = self.driver.find_element_by_tag_name('html')
        return new_page.id != old_page.id
    except NoSuchElementException:
        return False
It's possible that comparing ids is not as effective as waiting for stale reference exceptions.
staleness_of
Using the staleness_of method:
import contextlib
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support.expected_conditions import staleness_of

@contextlib.contextmanager
def wait_for_page_load(self, timeout=10):
    self.log.debug("Waiting for page to load at {}.".format(self.driver.current_url))
    old_page = self.driver.find_element_by_tag_name('html')
    yield
    WebDriverWait(self.driver, timeout).until(staleness_of(old_page))
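A usage sketch, assuming the method lives on a class that wraps the driver: any navigation triggered inside the with block is waited on via the old <html> element going stale.
with self.wait_for_page_load(timeout=10):
    self.driver.find_element_by_link_text('Next').click()  # hypothetical link
# here the old page's <html> element is stale, so the new page has replaced it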
For more details, check Harry's blog.

As mentioned in the answer from David Cullen, I've always seen recommendations to use a line like the following one:
element_present = EC.presence_of_element_located((By.ID, 'element_id'))
WebDriverWait(driver, timeout).until(element_present)
It was difficult for me to find, in one place, all the locators that can be used with By, so I thought it would be useful to provide the list here.
According to Web Scraping with Python by Ryan Mitchell:
ID
Used in the example; finds elements by their HTML id attribute.
CLASS_NAME
Used to find elements by their HTML class attribute. Why is this function CLASS_NAME and not simply CLASS? Using the form object.CLASS would create problems for Selenium's Java library, where .class is a reserved method. In order to keep the Selenium syntax consistent between different languages, CLASS_NAME was used instead.
CSS_SELECTOR
Finds elements by their class, id, or tag name, using the #idName, .className, tagName convention.
LINK_TEXT
Finds HTML tags by the text they contain. For example, a link that says "Next" can be selected using (By.LINK_TEXT, "Next").
PARTIAL_LINK_TEXT
Similar to LINK_TEXT, but matches on a partial string.
NAME
Finds HTML tags by their name attribute. This is handy for HTML forms.
TAG_NAME
Finds HTML tags by their tag name.
XPATH
Uses an XPath expression ... to select matching elements.
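A few of these locators in action (a sketch; the selector values are made up):
from selenium.webdriver.common.by import By

driver.find_element(By.CSS_SELECTOR, '#main .result')       # id + class, CSS convention
driver.find_element(By.LINK_TEXT, 'Next')                   # exact link text
driver.find_element(By.NAME, 'q')                           # name attribute, handy for forms
driver.find_element(By.XPATH, "//form//input[@name='q']")   # XPath expression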

From selenium/webdriver/support/wait.py:
driver = ...
from selenium.webdriver.support.wait import WebDriverWait

element = WebDriverWait(driver, 10).until(
    lambda x: x.find_element_by_id("someId"))

On a side note: instead of scrolling down 100 times, you can check whether there are no more modifications to the DOM (we are in the case where the bottom of the page is AJAX lazy-loaded):
import logging
import time

def scrollDown(driver, value):
    driver.execute_script("window.scrollBy(0," + str(value) + ")")

# Scroll down the page until the DOM stops changing
def scrollDownAllTheWay(driver):
    old_page = driver.page_source
    while True:
        logging.debug("Scrolling loop")
        for i in range(2):
            scrollDown(driver, 500)
            time.sleep(2)
        new_page = driver.page_source
        if new_page != old_page:
            old_page = new_page
        else:
            break
    return True
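Usage is then a single call once the driver is on the target page; it blocks until two scroll passes produce no DOM change:
scrollDownAllTheWay(driver)  # returns True when the page stops growing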

Have you tried driver.implicitly_wait? It is like a setting for the driver, so you only call it once per session, and it basically tells the driver to poll for up to the given amount of time whenever it looks up an element:
driver = webdriver.Chrome()
driver.implicitly_wait(10)
So if you set a wait time of 10 seconds, it will execute the command as soon as possible, waiting up to 10 seconds before it gives up. I've used this in similar scroll-down scenarios, so I don't see why it wouldn't work in your case. Hope this is helpful. Be sure to use a lowercase 'w' in implicitly_wait.

Here I did it using a rather simple form:
from selenium import webdriver
from selenium.common.exceptions import NoSuchElementException

browser = webdriver.Firefox()
browser.get("url")
searchTxt = ''
while not searchTxt:
    try:
        searchTxt = browser.find_element_by_name('NAME OF ELEMENT')
        searchTxt.send_keys("USERNAME")
    except NoSuchElementException:
        continue

A solution for AJAX pages that continuously load data. The previous methods stated do not work. What we can do instead is grab the page DOM, hash it, and compare old and new hash values together over a delta time:
import time
from selenium import webdriver

def page_has_loaded(driver, sleep_time=2):
    '''
    Waits for page to completely load by comparing current page hash values.
    '''
    def get_page_hash(driver):
        '''
        Returns html dom hash
        '''
        # can find element by either 'html' tag or by the html 'root' id
        dom = driver.find_element_by_tag_name('html').get_attribute('innerHTML')
        # dom = driver.find_element_by_id('root').get_attribute('innerHTML')
        dom_hash = hash(dom.encode('utf-8'))
        return dom_hash

    page_hash = 'empty'
    page_hash_new = ''
    # comparing old and new page DOM hash together to verify the page is fully loaded
    while page_hash != page_hash_new:
        page_hash = get_page_hash(driver)
        time.sleep(sleep_time)
        page_hash_new = get_page_hash(driver)
        print('<page_has_loaded> - page not loaded')
    print('<page_has_loaded> - page loaded: {}'.format(driver.current_url))
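Usage is then a single blocking call (a sketch; the URL is a placeholder):
driver.get('url')
page_has_loaded(driver)  # returns once two hashes taken sleep_time apart match
# safe to scrape from here on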

How about putting WebDriverWait in a while loop and catching the exceptions?
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import TimeoutException

browser = webdriver.Firefox()
browser.get("url")
delay = 3  # seconds
while True:
    try:
        WebDriverWait(browser, delay).until(EC.presence_of_element_located((By.ID, 'IdOfMyElement')))
        print("Page is ready!")
        break  # it will break from the loop once the specific element is present
    except TimeoutException:
        print("Loading took too much time! Trying again...")

You can do that very simply with this function:
import time

def page_is_loading(driver):
    # poll document.readyState until the browser reports 'complete'
    while True:
        x = driver.execute_script("return document.readyState")
        if x == "complete":
            return True
        time.sleep(0.2)
and when you want to do something after the page load completes, you can use:
driver = webdriver.Firefox(options=Options, executable_path='geckodriver.exe')
driver.get("https://www.google.com/")
while not page_is_loading(driver):
    continue
driver.execute_script("alert('page is loaded')")

Use this in your code:
from selenium import webdriver
driver = webdriver.Firefox() # or Chrome()
driver.implicitly_wait(10) # seconds
driver.get("http://www.......")
Or you can use this code if you are looking for a specific tag:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Firefox()  # or Chrome()
driver.get("http://www.......")
try:
    element = WebDriverWait(driver, 10).until(
        EC.presence_of_element_located((By.ID, "tag_id"))
    )
finally:
    driver.quit()

Very good answers here. A quick example of waiting for an XPATH:
# wait for sizes to load - 2s timeout
try:
    WebDriverWait(driver, 2).until(expected_conditions.presence_of_element_located(
        (By.XPATH, "//div[@id='stockSizes']//a")))
except TimeoutException:
    pass

I struggled a bit to get this working, as it didn't work for me as expected. Anyone who is still struggling to get this working may check this.
I want to wait for an element to be present on the webpage before proceeding with my manipulations.
We can use WebDriverWait(driver, 10, 1).until(), but the catch is that until() expects a function which it can keep executing, for the timeout provided (in our case 10 seconds), once every 1 second. So keeping it like below worked for me:
element_found = WebDriverWait(driver, 10, 1).until(lambda x: x.find_element_by_class_name("MY_ELEMENT_CLASS_NAME").is_displayed())
Here is what until() does behind the scenes:
def until(self, method, message=''):
    """Calls the method provided with the driver as an argument until the \
    return value is not False."""
    screen = None
    stacktrace = None
    end_time = time.time() + self._timeout
    while True:
        try:
            value = method(self._driver)
            if value:
                return value
        except self._ignored_exceptions as exc:
            screen = getattr(exc, 'screen', None)
            stacktrace = getattr(exc, 'stacktrace', None)
        time.sleep(self._poll)
        if time.time() > end_time:
            break
    raise TimeoutException(message, screen, stacktrace)
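Two practical consequences of that source, shown as a sketch: the third argument (poll_frequency) tunes how often the method is retried, and ignored_exceptions extends the set of exceptions swallowed between retries.
from selenium.common.exceptions import StaleElementReferenceException
from selenium.webdriver.support.ui import WebDriverWait

# poll once per second instead of the default 0.5 s, and also swallow
# stale-element errors while polling (NoSuchElementException is already
# ignored by default)
wait = WebDriverWait(driver, 10, poll_frequency=1,
                     ignored_exceptions=[StaleElementReferenceException])
element = wait.until(lambda x: x.find_element_by_class_name("MY_ELEMENT_CLASS_NAME"))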

If you are trying to scroll and find all items on a page, you can consider using the following. It is a combination of a few methods mentioned by others here, and it did the job for me:
while True:
    try:
        driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
        driver.implicitly_wait(30)
        time.sleep(4)
        elem1 = WebDriverWait(driver, 30).until(EC.presence_of_all_elements_located((By.CSS_SELECTOR, "element-name")))
        len_elem_1 = len(elem1)
        print(f"A list Length {len_elem_1}")
        driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
        driver.implicitly_wait(30)
        time.sleep(4)
        elem2 = WebDriverWait(driver, 30).until(EC.presence_of_all_elements_located((By.CSS_SELECTOR, "element-name")))
        len_elem_2 = len(elem2)
        print(f"B list Length {len_elem_2}")
        if len_elem_1 == len_elem_2:
            print(f"final length = {len_elem_1}")
            break
    except TimeoutException:
        print("Loading took too much time!")

Selenium can't detect whether the page is fully loaded or not, but JavaScript can. I suggest you try this:
from selenium.webdriver.support.ui import WebDriverWait

WebDriverWait(driver, 100).until(
    lambda driver: driver.execute_script('return document.readyState') == 'complete')
This executes JavaScript code instead of Python, because JavaScript can detect when the page is fully loaded (it will show 'complete'). This code means: for up to 100 seconds, keep checking document.readyState until it becomes 'complete'.

nono = driver.current_url
driver.find_element(By.XPATH, "//button[@value='Send']").click()
while driver.current_url == nono:
    pass
print("page loaded.")

Related

I am trying to use selenium to load a waiting list and click the button, but I can't seem to find the element

I've tried using find_element_by_class_name and by link text, and both result in NoSuchElementException. I'm just trying to click this Join waitlist button for https://v2.waitwhile.com/l/fostersbarbershop/list-view - any assistance is greatly appreciated.
from selenium import webdriver
import time

PATH = r"C:\Python\Pycharm\attempt\Drivers\chromedriver.exe"
driver = webdriver.Chrome(PATH)
driver.get("https://v2.waitwhile.com/l/fostersbarbershop/list-view")
join = False
while not join:
    try:
        joinButton = driver.find_element_by_class_name("disabled")
        print("Button isn't ready yet.")
        time.sleep(2)
        driver.refresh()
    except:
        joinButton = driver.find_element_by_class_name("public-submit-btn")
        print("Join")
        joinButton.click()
        join = True
It seems you have a synchronization issue.
Induce WebDriverWait() and wait for element_to_be_clickable() with the following ID locators:
WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.ID, "join-waitlist"))).click()
WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.ID, "ww-name"))).send_keys("TestUser")
You need to import the below libraries:
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
The page you're trying to automate is Angular. Typically what happens in script-based pages is that you download the source, the page-load event is classed as complete, and then some JS scripts run to get/update the page content. Those scripts (which can take seconds to complete) update the DOM with the page you see.
In contrast, Selenium can only recognise that the page is loaded: it will be attempting to find those elements at this readyState point and is unaware of any running scripts.
You will need to wait for your element to be ready before you proceed.
The easy solution is to add an implicit wait to your script.
An implicit wait will ignore the NoSuchElementException until the timeout has been reached.
See here for more information on waits.
This is the line of code to add:
driver.implicitly_wait(10)
Just adjust the time based on what you need.
You only need it once.
This is the code that works for me:
driver = webdriver.Chrome()
driver.get("https://v2.waitwhile.com/l/fostersbarbershop/list-view")
driver.implicitly_wait(10)
join = False
while not join:
    try:
        joinButton = driver.find_element_by_class_name("disabled")
        print("Button isn't ready yet.")
        time.sleep(2)
        driver.refresh()
    except:
        joinButton = driver.find_element_by_class_name("public-submit-btn")
        print("Join")
        joinButton.click()
        join = True

Circumventing Stale Element Exceptions in Selenium

I have read several articles on this site regarding the StaleElementReferenceException and am aware that this error is caused by the element no longer being in the site's DOM. What I am trying to do is click the bottom links on this webpage in order to go on and see the next page's listings. I have tried a few ways around this exception, and haven't found any that work. Here is an example of the code I have tried, and what I thought it might accomplish:
import time
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import WebDriverWait as wait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.action_chains import ActionChains
from selenium.common.exceptions import StaleElementReferenceException

driver = webdriver.Chrome(r'C:\Users\Hank\Desktop\chromedriver_win32\chromedriver.exe')
driver.get('https://steamcommunity.com/market/listings/440/Unusual%20Old%20Guadalajara')
action = ActionChains(driver)
page_links = wait(driver, 10).until(EC.presence_of_all_elements_located((By.CSS_SELECTOR, '[class^=market_paging_pagelink]')))
try:
    action.move_to_element(page_links[1]).click().perform()
except StaleElementReferenceException:
    print("Exception received, trying again")
    time.sleep(5)
    page_links = wait(driver, 10).until(EC.presence_of_all_elements_located((By.CSS_SELECTOR, '[class^=market_paging_pagelink]')))
    action.move_to_element(page_links[1]).click().perform()
I was hoping that this code segment would attempt to move to the element at the bottom and click it, or, on receiving the error, wait and try again, succeeding the second time. Instead, the code simply throws the error again. If my question has already been answered, please direct me to the relevant link.
Thank you!
The approach I normally go for is to click the Next page button until it gets disabled/invisible.
Here's a working example based on your page. You should obviously do whatever is relevant in the while loop; I chose to capture prices for the sake of example.
url="https://steamcommunity.com/market/listings/440/Unusual%20Old%20Guadalajara"
driver.get(url)
next_button=wait(driver, 10).until(EC.presence_of_element_located((By.ID,'searchResults_btn_next')))
# capture the start value from "Showing x-xx of 22 results"
#need this to check against later
ref_val=wait(driver, 10).until(EC.presence_of_element_located((By.ID,'searchResults_start'))).text
while next_button.get_attribute('class') == 'pagebtn':
next_button.click()
#wait until ref_val has changed
wait(driver, 10).until(lambda driver: wait(driver, 10).until(EC.presence_of_element_located((By.ID,'searchResults_start'))).text != ref_val)
# ====== Do whatever relevant here =============================
page_num=wait(driver, 10).until(EC.presence_of_element_located((By.CSS_SELECTOR,'.market_paging_pagelink.active'))).text
print(f"Prices from page {page_num}")
prices = wait(driver, 10).until(EC.presence_of_all_elements_located(
(By.XPATH, ".//span[#class='market_listing_price market_listing_price_with_fee']")))
for price in prices:
print(price.text)
#================================================================
#get the new reference value
ref_val = wait(driver, 10).until(EC.presence_of_element_located((By.ID, 'searchResults_start'))).text

How to select an element through XPath in Selenium and Python?

I want to check the inner text of a web element, but XPath does not find it even if I give it the absolute path. I get a "no such element" error on the line where I try to define Plaje.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import TimeoutException

driver = webdriver.Edge(r"D:\pariuri\python\MicrosoftWebDriver.exe")
driver.get("https://www.unibet.ro/betting#filter/all/all/all/all/in-play")
try:
    element_present = EC.presence_of_element_located((By.CLASS_NAME, 'KambiBC-event-result__score-list'))
    WebDriverWait(driver, 4).until(element_present)
except TimeoutException:
    print('Timed out waiting for page to load')
event = driver.find_elements_by_class_name('KambiBC-event-item KambiBC-event-item--type-match')
for items in event:
    link = items.find_element_by_class_name('KambiBC-event-item__link')
    scoruri = items.find_element_by_class_name('KambiBC-event-item__score-container')
    scor1 = scoruri.find_element_by_xpath(".//li[@class='KambiBC-event-result__match']/span[1]")
    scor2 = scoruri.find_element_by_xpath(".//li[@class='KambiBC-event-result__match']/span[2]")
    print(scor1.text)
    print(scor2.text)
    if scor1.text == '0' and scor2.text == '0':
        link.click()
        Plaje = driver.find_element_by_xpath(".//*[@id='KambiBC-contentWrapper__bottom']/div/div/div/div/div[2]/div[2]/div[3]/div/div/div[4]/div[1]/div[4]/div[2]/div/div/ul/li[2]/ul/li[6]/div[1]/h3")
        print(Plaje.text)
Always add an implicit wait after initiating the webdriver:
driver = webdriver.Firefox()
driver.implicitly_wait(10) # seconds
Try with the below XPath:
"//h3[contains(text(),'Total goluri')]"
or
"//div[#class='KambiBC-bet-offer-subcategory__container']/h3[1]"
Hope this helps. Thanks.
EDIT: It's always advisable to use the implicit wait. We can handle the same using the explicit wait also, but then we need to add the explicit wait for each and every element. Also, there is a good chance you might miss adding an explicit wait to a few elements and have to debug again. The choice is always yours.
Try using .get_attribute("value") or .get_attribute("innerHTML") instead of .text.
Try the below:
//h3[contains(.,'Total goluri')]
Hope it will help you :)

extracting links with a specific class with Selenium in Python

I am trying to extract links from an infinite-scroll website.
This is my code for scrolling down the page:
driver = webdriver.Chrome('C:\\Program Files (x86)\\Google\\Chrome\\chromedriver.exe')
driver.get('http://seekingalpha.com/market-news/top-news')
for i in range(0, 2):
    driver.implicitly_wait(15)
    driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
    time.sleep(20)
I am aiming to extract specific links from this page, with class="market_current_title" and HTML like the following:
<a class="market_current_title" href="/news/3223955-dow-wraps-best-week-since-2011-s-and-p-strongest-week-since-2014" sasource="titles_mc_top_news" target="_self">Dow wraps up best week since 2011; S&P in strongest week since 2014</a>
When I used
URL = driver.find_elements_by_class_name('market_current_title')
I ended up with the error that says "stale element reference: element is not attached to the page document". Then I tried
URL = driver.find_elements_by_xpath("//div[@id='a']//a[@class='market_current_title']")
but it says that there is no such link!
Do you have any idea about solving this problem?
You're probably trying to interact with an element that has already changed (probably elements above your scroll position, off screen). Try this answer for some good options on how to overcome this.
Here's a snippet:
from selenium.common.exceptions import TimeoutException
from selenium.webdriver.common.by import By
import selenium.webdriver.support.expected_conditions as EC
import selenium.webdriver.support.ui as ui
# return True if element is visible within 2 seconds, otherwise False
def is_visible(driver, locator, timeout=2):
    try:
        ui.WebDriverWait(driver, timeout).until(EC.visibility_of_element_located((By.CSS_SELECTOR, locator)))
        return True
    except TimeoutException:
        return False
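Usage sketch with the locator from this question (note the helper takes a CSS selector):
if is_visible(driver, "a.market_current_title", timeout=5):
    links = driver.find_elements_by_css_selector("a.market_current_title")
    hrefs = [link.get_attribute("href") for link in links]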

selenium wait until html is reloaded

I have a script with Python and Selenium to scrape Google results. It works, but I'm looking for a better solution to wait until all 100 search results are fetched.
I use this solution to wait until the search is done:
driver.wait.until(EC.presence_of_element_located(
    (By.ID, 'resultStats')))
This works, but I need to get 100 search results, so I do this:
driver.get(driver.current_url+'&num=100')
But now it's not possible to re-use this line, because the resultStats element is already on the page from the first search, so the wait returns immediately:
driver.wait.until(EC.presence_of_element_located(
    (By.ID, 'resultStats')))
Instead I use this solution, but it's not a reliable one (it fails if the request takes more than 5 seconds):
time.sleep(5)
Code:
url = 'https://www.google.com'
driver.get(url)
try:
    box = driver.wait.until(EC.presence_of_element_located(
        (By.NAME, 'q')))
    box.send_keys(query.decode('utf-8'))
    button = driver.wait.until(EC.element_to_be_clickable(
        (By.NAME, 'btnG')))
    button.click()
except TimeoutException:
    error('Box or Button not found in google.com')
try:
    driver.wait.until(EC.presence_of_element_located(
        (By.ID, 'resultStats')))
    driver.get(driver.current_url + '&num=100')
    # Need a better solution to wait until all results are loaded
    time.sleep(5)
    print(driver.find_element_by_tag_name('body').get_attribute('innerHTML').encode('utf-8'))
except TimeoutException:
    error('No results returned by Google. Could be HTTP 503 response')
You are absolutely right that time.sleep(5) is not a reliable or good way to wait for something on the page. You need to use the WebDriverWait class with a specific condition to wait for.
In this case, I'd wait for the count of elements with class="g" (each of which represents a search result) to become greater than or equal to 100, via a custom expected condition:
from selenium.common.exceptions import StaleElementReferenceException
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

class wait_for_n_elements(object):
    def __init__(self, locator, count):
        self.locator = locator
        self.count = count

    def __call__(self, driver):
        try:
            count = len(EC._find_elements(driver, self.locator))
            return count >= self.count
        except StaleElementReferenceException:
            return False
Usage:
wait = WebDriverWait(driver, 10)
wait.until(wait_for_n_elements((By.CSS_SELECTOR, ".g"), 100))
