Clicking through pagination links that appear in sets with selenium - python

This is my first time with Selenium, and the website I'm scraping (page) doesn't have a next-page button; the pagination links don't change until you click the "..." link, which then shows the next set of 10 pagination links. How do I loop through the clicking?
I've seen a few answers online, but I couldn't adapt them to my code because the links only come in sets. This is the code:
from selenium.webdriver import Chrome
from selenium.webdriver.support.ui import Select
from selenium.webdriver.common.by import By
driver_path = 'Projects\Selenium Driver\chromedriver_win32'
driver = Chrome(executable_path=driver_path)
driver.get('https://business.nh.gov/nsor/search.aspx')
drop_down = driver.find_element(By.ID, 'ctl00_cphMain_lstStates')
select = Select(drop_down)
select.select_by_visible_text('NEW HAMPSHIRE')
driver.find_element(By.ID, 'ctl00_cphMain_btnSubmit').click()
content = driver.find_elements(By.CSS_SELECTOR, 'table#ctl00_cphMain_gvwOffender a')
hrefs = []
for link_el in content:
    href = link_el.get_attribute('href')
    hrefs.append(href)
offenders_href = hrefs[:10]
pagination_links = driver.find_elements(By.CSS_SELECTOR, 'table#ctl00_cphMain_gvwOffender tbody tr td table tbody a')

With your current code, the next-page elements are already captured in content[10:], and the last hyperlink with the ellipsis is actually the next logical page in the sequence. Using this fact, we can keep a current-page variable to track the page being visited and use it to identify the right anchor element within content for the next page.
With do-while loop logic, and using your code to scrape the required elements, here is the main code:
from time import sleep

offenders_href = list()
curr_page = 1

while True:
    # find all anchor tags within this table
    content = driver.find_elements(By.CSS_SELECTOR, 'table#ctl00_cphMain_gvwOffender a')
    hrefs = []
    for link_el in content:
        href = link_el.get_attribute('href')
        hrefs.append(href)
    offenders_href += hrefs[:10]
    curr_page += 1
    # find the next page element
    for page_elem in content[10:]:
        if page_elem.get_attribute("href").endswith('$' + str(curr_page) + "')"):
            next_page = page_elem
            break
    else:
        # last page reached, break out of the while loop
        break
    print(f'clicking {next_page.text}...')
    next_page.click()
    sleep(1)
I placed this code in a function launch_click_pages. Launching it with your URL, it is able to click through the pages (it kept going, but I stopped it at some page):
>>> launch_click_pages('https://business.nh.gov/nsor/search.aspx')
clicking 2...
clicking 3...
clicking 4...
clicking 5...
clicking 6...
clicking 7...
clicking 8...
clicking 9...
clicking 10...
clicking ......
clicking 12...
clicking 13...
clicking 14...
clicking 15...
^C
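For reference, a minimal sketch of what the launch_click_pages wrapper could look like, reusing the setup from the question (the chromedriver path below is a placeholder, and the paging loop above is elided):

def launch_click_pages(url):
    # placeholder path; point this at your local chromedriver
    driver = Chrome(executable_path='path/to/chromedriver')
    driver.get(url)
    select = Select(driver.find_element(By.ID, 'ctl00_cphMain_lstStates'))
    select.select_by_visible_text('NEW HAMPSHIRE')
    driver.find_element(By.ID, 'ctl00_cphMain_btnSubmit').click()
    # ... the paging loop shown above goes here ...
    return offenders_href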

You can also try to execute a script, e.g. driver.execute_script("javascript:__doPostBack('ctl00$cphMain$gvwOffender','Page$5')"), and you will be redirected to the fifth page.
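Building on that, a minimal sketch of jumping to an arbitrary page via the postback (this assumes the grid's control name stays 'ctl00$cphMain$gvwOffender' and that each page still lists 10 offender links; the helper name is made up here):

from time import sleep

def scrape_page_via_postback(driver, page_number):
    # hypothetical helper: trigger the ASP.NET postback for the given page,
    # then collect the first 10 offender links on that page
    driver.execute_script(
        "__doPostBack('ctl00$cphMain$gvwOffender', 'Page$%d')" % page_number
    )
    sleep(1)  # crude wait for the postback to complete; an explicit wait is better
    content = driver.find_elements(By.CSS_SELECTOR, 'table#ctl00_cphMain_gvwOffender a')
    return [el.get_attribute('href') for el in content[:10]]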

Related

How to find the URL of some elements of a webpage?

The webpage is: https://www.vpgame.com/market/gold?order_type=pro_price&order=desc&offset=0
As you can see, there are 25 items in the selling part of this page, and when you click one it opens a new tab and shows you that specific item's details.
Now I want to make a program to get those 25 item URLs and save them in a list. My problem is that, as you can see in the page inspector, these elements are not the anchor tags they should be, and I can't find any 'href' attribute related to them.
# using selenium and driver = webdriver.Chrome()
link = driver.find_elements_by_tag_name('a')
link2 = [l.get_attribute('href') for l in link]
I thought I could do it with the code above, but the problem is what I described. Any suggestions?
Looks like you are trying to scrape a page that is powered by React. There are no href attributes because JavaScript handles all the linking. Your best bet is to use Selenium to click each of the div elements, switch to the newly opened tab, and use something like this code to get the URL of the page it takes you to:
import time

links = driver.find_elements_by_class_name('card-header')
urls = []
for link in links:
    link.click()  # opens the item detail in a new tab
    driver.switch_to.window(driver.window_handles[1])
    url = driver.current_url
    urls.append(url)
    driver.close()
    driver.switch_to.window(driver.window_handles[0])
    time.sleep(1)
Note that the code closes the new tab each time and goes back to the main tab. I added time.sleep() so it doesn't go too fast.
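If the fixed sleep turns out to be flaky, a hedged variation is to wait explicitly for the new tab to exist before switching (using the standard expected_conditions helper; everything else is unchanged from the snippet above):

from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

wait = WebDriverWait(driver, 10)
links = driver.find_elements_by_class_name('card-header')
urls = []
for link in links:
    link.click()
    # block until the second tab actually exists, then switch to it
    wait.until(EC.number_of_windows_to_be(2))
    driver.switch_to.window(driver.window_handles[1])
    urls.append(driver.current_url)
    driver.close()
    driver.switch_to.window(driver.window_handles[0])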

Blocking login overlay window when scraping web page using Selenium

I am trying to scrape a long list of books spread over 10 web pages. When the loop clicks the next > button for the first time, the website displays a login overlay, so Selenium cannot find the target elements.
I have tried all the possible solutions:
Using some Chrome options.
Using try-except to click the X button on the overlay. But it appears only once (when clicking next > for the first time). The problem is that when I put this try-except block at the end of the while True: loop, it became infinite, because I use continue in the except clause since I do not want to break the loop.
Adding some popup-blocker extensions to Chrome, but they do not work when I run the code even though I add the extension using options.add_argument('load-extension=' + ExtensionPath).
This is my code:
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import NoSuchElementException, TimeoutException
from bs4 import BeautifulSoup
import pandas as pd

options = Options()
options.add_argument('start-maximized')
options.add_argument('disable-infobars')
options.add_argument('disable-avfoundation-overlays')
options.add_argument('disable-internal-flash')
options.add_argument('no-proxy-server')
options.add_argument("disable-notifications")
options.add_argument("disable-popup")
Extension = (r'C:\Users\DELL\AppData\Local\Google\Chrome\User Data\Profile 1\Extensions\ifnkdbpmgkdbfklnbfidaackdenlmhgh\1.1.9_0')
options.add_argument('load-extension=' + Extension)
options.add_argument('--disable-overlay-scrollbar')

driver = webdriver.Chrome(options=options)
driver.get('https://www.goodreads.com/list/show/32339._50_?page=')
wait = WebDriverWait(driver, 2)

review_dict = {'title': [], 'author': [], 'rating': []}
html_soup = BeautifulSoup(driver.page_source, 'html.parser')
prod_containers = html_soup.find_all('table', class_='tableList js-dataTooltip')

while True:
    table = driver.find_element_by_xpath('//*[@id="all_votes"]/table')
    for product in table.find_elements_by_xpath(".//tr"):
        for td in product.find_elements_by_xpath('.//td[3]/a'):
            title = td.text
            review_dict['title'].append(title)
        for td in product.find_elements_by_xpath('.//td[3]/span[2]'):
            author = td.text
            review_dict['author'].append(author)
        for td in product.find_elements_by_xpath('.//td[3]/div[1]'):
            rating = td.text[0:4]
            review_dict['rating'].append(rating)
    try:
        close = wait.until(EC.element_to_be_clickable((By.XPATH, '/html/body/div[3]/div/div/div[1]/button')))
        close.click()
    except NoSuchElementException:
        continue
    try:
        element = wait.until(EC.element_to_be_clickable((By.CLASS_NAME, 'next_page')))
        element.click()
    except TimeoutException:
        break

df = pd.DataFrame.from_dict(review_dict)
df
Any help would be appreciated: for example, whether I can change the loop so that a for loop clicks the next > button until the end instead of the while loop, where I should put the try-except block that closes the overlay, or whether there is a Chrome option that can disable the overlay.
Thanks in advance.
Thank you for sharing your code and the website you are having trouble with. I was able to close the login modal by using XPath. I took this challenge and broke the code up using class objects: one object for the selenium.webdriver.chrome.webdriver and the other for the page you wanted to scrape ( https://www.goodreads.com/list/show/32339 ). In the following methods, I used the JavaScript return arguments[0].scrollIntoView(); call and was able to scroll to the last book displayed on the page. After that, I was able to click the next button.
def scroll_to_element(self, xpath: str):
    element = self.chrome_driver.find_element(By.XPATH, xpath)
    self.chrome_driver.execute_script("return arguments[0].scrollIntoView();", element)

def get_book_count(self):
    return len(self.chrome_driver.find_elements(By.XPATH, "//div[@id='all_votes']//table[contains(@class, 'tableList')]//tbody//tr"))

def click_next_page(self):
    # Scroll to the last record and click "next page"
    xpath = "//div[@id='all_votes']//table[contains(@class, 'tableList')]//tbody//tr[{0}]".format(self.get_book_count())
    self.scroll_to_element(xpath)
    self.chrome_driver.find_element(By.XPATH, "//div[@id='all_votes']//div[@class='pagination']//a[@class='next_page']").click()
Once I clicked the "Next" button, I saw the modal display. I found the XPath for the modal and was able to close it.
def is_displayed(self, xpath: str, timeout: int = 5):
    try:
        # DriverWait / DriverConditions are this answer's aliases for
        # WebDriverWait / expected_conditions
        webElement = DriverWait(self.chrome_driver, timeout).until(
            DriverConditions.presence_of_element_located(locator=(By.XPATH, xpath))
        )
        return webElement is not None
    except:
        return False

def is_modal_displayed(self):
    return self.is_displayed("//body[@class='modalOpened']")

def close_modal(self):
    self.chrome_driver.find_element(By.XPATH, "//div[@class='modal__content']//div[@class='modal__close']").click()
    if self.is_modal_displayed():
        raise Exception("Modal Failed To Close")
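Tying the pieces together, a minimal sketch of the driving loop might look like this (scrape_current_page is a hypothetical method standing in for your row-scraping code):

def scrape_all_pages(self, max_pages: int = 10):
    for _ in range(max_pages):
        self.scrape_current_page()   # hypothetical: collect title/author/rating rows
        self.click_next_page()
        if self.is_modal_displayed():
            self.close_modal()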
I hope this helps you to solve your problem.

Elem could not be scrolled into view

I'm new to Python and Selenium.
For fun, I'm scraping a page. I have to click a first button for Comment, and then another button for All comments, so I can get them all.
The first click works, but not the second.
I've set a hardcoded scroll, but it's still not working.
This is the python code I'm working on:
boton = driver.find_element_by_id('tabComments_btn')
boton.click()
wait = WebDriverWait(driver, 100)
From here on it doesn't work (it scrolls, but it says the element can't be scrolled into view):
driver.execute_script("window.scrollTo(0, 1300)")
botonTodos= driver.find_element_by_class_name('thread-node-children-load-all-btn')
wait = WebDriverWait(driver, 100)
botonTodos.click()
If I only click the first button, I'm able to scrape the first 10 comments, so this is working.
wait.until(EC.presence_of_element_located((By.CLASS_NAME, 'thread-node-message')))
for elm in driver.find_elements_by_css_selector(".thread-node-message"):
    print(elm.text)
This is the part of the HTML I'm stuck on. The snippet (markup not reproduced here) contains three links/buttons: "Load next 10 comments", "Load all comments", and "Publicar un comentario" ("Post a comment"), with a whitespace #text node between each of them.
Any ideas welcome.
Thanks.
Here are the different options.
# first create the element references
load_next_btn = driver.find_element_by_css_selector(".thread-node-children-load-next-btn")
load_all_btn = driver.find_element_by_css_selector(".thread-node-children-load-all-btn")

# scroll to the button you are interested in (here I scroll to load_all_btn)

# Option 1
load_all_btn.location_once_scrolled_into_view

# Option 2
driver.execute_script("arguments[0].scrollIntoView();", load_all_btn)

# Option 3
btnLocation = load_all_btn.location
driver.execute_script("window.scrollTo(" + str(btnLocation['x']) + "," + str(btnLocation['y']) + ");")
Test Code:
Check if this code is working.
url = "https://stackoverflow.com/questions/55228646/python-selenium-cant-sometimes-scroll-element-into-view/55228932? noredirect=1#comment97192621_55228932"
driver.get(url)
element = driver.find_element_by_xpath("//a[.='Contact Us']")
element.location_once_scrolled_into_view
time.sleep(1)
driver.find_element_by_xpath("//p[.='active']").location_once_scrolled_into_view
driver.execute_script("arguments[0].scrollIntoView();",element)
time.sleep(1)

How to save state of selenium web driver in python?

I am trying to scrape this website: http://www.infoempleo.com/ofertas-internacionales/.
I wanted to scrape by selecting the "Last 15 days" radio button. So I wrote this code.
from collections import deque
import time

from bs4 import BeautifulSoup
from selenium import webdriver

browser = webdriver.Chrome(r'C:\Users\Junaid\Downloads\chromedriver\chromedriver_win32\chromedriver.exe')
new_urls = deque(['http://www.infoempleo.com/ofertas-internacionales/'])
processed_urls = set()

while len(new_urls):
    print "------ URL LIST -------"
    print new_urls
    print "-----------------------"
    print
    time.sleep(5)
    url = new_urls.popleft()
    processed_urls.add(url)
    try:
        print "----------- Scraping ==>", url
        browser.get(url)
        elem = browser.find_elements_by_id("fechapublicacion")[-1]
        if elem.is_selected():
            print "already selected"
        else:
            elem.click()
        html = browser.page_source
    except:
        print "-------- Failed to Scrape, Moving to Next"
        continue
    soup = BeautifulSoup(html)
I have been able to select the radio button and scrape the first page.
There is a list of page links at the end, like 1, 2, 3...
When moving to the next page, browser.get(url) is called, which resets the radio button to 'Any Date' instead of 'Last 15 Days'. That makes the code execute the else branch (elem.click()) to select the radio button again, which opens the first page that has already been scraped.
Is there a way around this? Help will be appreciated.
I have found a workaround for this problem. Instead of saving links to the next pages in a list, I select the next-page button/element and call .click() on it. This way browser.get(url) does not need to be called again and the page is not reloaded.
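A rough sketch of that workaround, assuming the radio button only needs to be selected once and stays selected; the link-text locator below is a placeholder, not the site's real markup:

from selenium.common.exceptions import NoSuchElementException

browser.get('http://www.infoempleo.com/ofertas-internacionales/')
elem = browser.find_elements_by_id("fechapublicacion")[-1]
if not elem.is_selected():
    elem.click()  # select "Last 15 days" once

while True:
    soup = BeautifulSoup(browser.page_source)
    # ... scrape the current page here ...
    try:
        # placeholder locator; replace with whatever identifies the next-page element
        browser.find_element_by_link_text("Siguiente").click()
    except NoSuchElementException:
        break  # no next-page element left
    time.sleep(2)  # give the next page time to load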

Selenium Python - Access next pages of search results

I have to click on each search result one by one from this url:
Search Guidelines
I first extract the total number of results from the displayed text so that I can set the upper limit for iteration
upperlimit=driver.find_element_by_id("total_results")
number = int(upperlimit.text.split(' ')[0])
The loop is then defined as:
for i in range(1,number):
However, after going through the first 10 results on the first page, the list index goes out of range (probably because there are no more links to click). I need to click "Next" to get the next 10 results, and so on until I'm done with all the search results. How can I go about doing that?
Any help would be appreciated!
The problem is that the value of the element with id total_results changes after the page is loaded: at first it contains 117, then it changes to 44.
Instead, here is a more robust approach. It processes page by page until there are no pages left:
from selenium import webdriver
from selenium.common.exceptions import NoSuchElementException

driver = webdriver.Firefox()

url = 'http://www.nice.org.uk/Search.do?searchText=bevacizumab&newsearch=true#/search/?searchText=bevacizumab&mode=&staticTitle=false&SEARCHTYPE_all2=true&SEARCHTYPE_all1=&SEARCHTYPE=GUIDANCE&TOPICLVL0_all2=true&TOPICLVL0_all1=&HIDEFILTER=TOPICLVL1&HIDEFILTER=TOPICLVL2&TREATMENTS_all2=true&TREATMENTS_all1=&GUIDANCETYPE_all2=true&GUIDANCETYPE_all1=&STATUS_all2=true&STATUS_all1=&HIDEFILTER=EGAPREFERENCE&HIDEFILTER=TOPICLVL3&DATEFILTER_ALL=ALL&DATEFILTER_PREV=ALL&custom_date_from=&custom_date_to=11-06-2014&PAGINATIONURL=%2FSearch.do%3FsearchText%40%40bevacizumab%26newsearch%40%40true%26page%40%40&SORTORDER=BESTMATCH'
driver.get(url)

page_number = 1
while True:
    try:
        link = driver.find_element_by_link_text(str(page_number))
    except NoSuchElementException:
        break
    link.click()
    print driver.current_url
    page_number += 1
Basically, the idea here is to keep getting the next page link until there is no such link (a NoSuchElementException is thrown). Note that this works for any number of pages and results.
It prints:
http://www.nice.org.uk/Search.do?searchText=bevacizumab&newsearch=true&page=1
http://www.nice.org.uk/Search.do?searchText=bevacizumab&newsearch=true&page=2#showfilter
http://www.nice.org.uk/Search.do?searchText=bevacizumab&newsearch=true&page=3#showfilter
http://www.nice.org.uk/Search.do?searchText=bevacizumab&newsearch=true&page=4#showfilter
http://www.nice.org.uk/Search.do?searchText=bevacizumab&newsearch=true&page=5#showfilter
There is no need even to programmatically press the Next button: if you look carefully, the URL just needs a new page parameter when browsing the other result pages:
url = "http://www.nice.org.uk/Search.do?searchText=bevacizumab&newsearch=true&page={}#showfilter"
for i in range(1,5):
driver.get(url.format(i))
upperlimit=driver.find_element_by_id("total_results")
number = int(upperlimit.text.split(' ')[0])
If you still want to programmatically press the Next button, you could use:
driver.find_element_by_class_name('next').click()
But I haven't tested that.
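If you do go that route, a hedged sketch of looping on that button until it disappears (untested against the live site, as noted above):

from selenium.common.exceptions import NoSuchElementException

while True:
    # ... scrape the current page of results here ...
    try:
        driver.find_element_by_class_name('next').click()
    except NoSuchElementException:
        break  # no "next" link left, so this was the last page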
