I want to extract text from multiple pages. Currently I am able to extract data from the first page, but I want to follow the pagination, visit each page, and append the data extracted from it. I have written this simple code which extracts data from the first page; I am not able to extract the data from the remaining pages, whose number is dynamic.
```python
element_list = []
opts = webdriver.ChromeOptions()
opts.headless = True
driver = webdriver.Chrome(ChromeDriverManager().install())
base_url = "XYZ"
driver.maximize_window()
driver.get(base_url)
driver.set_page_load_timeout(50)
element = WebDriverWait(driver, 50).until(EC.presence_of_element_located((By.ID, 'all-my-groups')))
l = []
l = driver.find_elements_by_xpath("//div[contains(@class, 'alias-wrapper sim-ellipsis sim-list--shortId')]")
for i in l:
    print(i.text)
```
I have shared images of the pagination element's class in case that helps. If the extraction could be automated across all the pages, that would be awesome. Also, I am new to this, so please pardon me for asking silly questions. Thanks in advance.
You have only provided the code for the previous-page button. I guess you need to keep going to the next page for as long as a next page exists. As I don't know which site we are talking about, I can only guess its behavior, so I'm assuming the 'Next' button disappears when there is no next page. If so, it can be done like this:
```python
element_list = []
opts = webdriver.ChromeOptions()
opts.headless = True
driver = webdriver.Chrome(ChromeDriverManager().install())
base_url = "XYZ"
driver.maximize_window()
driver.get(base_url)
driver.set_page_load_timeout(50)
element = WebDriverWait(driver, 50).until(EC.presence_of_element_located((By.ID, 'all-my-groups')))
l = []
l = driver.find_elements_by_xpath("//div[contains(@class, 'alias-wrapper sim-ellipsis sim-list--shortId')]")
while True:
    try:
        next_page = driver.find_element(By.XPATH, '//button[@label="Next page"]')
    except NoSuchElementException:
        break
    next_page.click()
    l.extend(driver.find_elements(By.XPATH, "//div[contains(@class, 'alias-wrapper sim-ellipsis sim-list--shortId')]"))
for i in l:
    print(i.text)
```
To be able to catch the exception, this import has to be added:
```python
from selenium.common.exceptions import NoSuchElementException
```
Also note that the method find_elements_by_xpath is deprecated, so it would be better to replace this line:
```python
l = driver.find_elements_by_xpath("//div[contains(@class, 'alias-wrapper sim-ellipsis sim-list--shortId')]")
```
with this one:
```python
l = driver.find_elements(By.XPATH, "//div[contains(@class, 'alias-wrapper sim-ellipsis sim-list--shortId')]")
```
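The find_elements(By.XPATH, ...) form used above also needs the By import, in case it is not already present in your script:
```python
from selenium.webdriver.common.by import By
```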
driver.get("https://www.google.com")
search = driver.find_element_by_name("q")
search.clear()
search.send_keys("tesla")
search.send_keys(Keys.RETURN)
time.sleep(3)
url_list = []
results_list = driver.find_elements_by_tag_name('a')
for url in results_list:
if url == None:
pass
else:
url_list.append(url.get_attribute("href"))
I want to capture screenshots of all websites that appear as results of a Google search. However, with this code my program also grabs the links of the "Videos, Shopping, News, Images" buttons, whereas I just want to grab the result links. How can I do this?
It's quite different from your approach, but it does exactly what you want :)
```python
for i in range(len(results_list)):
    results_list[i] = results_list[i].text.replace(">", "/").replace("›", "/").replace(" ", "")
    if not validators.url(results_list[i]):
        results_list[i] = ''
results_list = list(filter(None, results_list))
print(results_list)
```
Full code:
```python
from selenium import webdriver
import validators

fireFoxOptions = webdriver.FirefoxOptions()
fireFoxOptions.add_argument('--headless')
driver = webdriver.Firefox(options=fireFoxOptions, executable_path='./geckodriver.exe')
driver.get("https://www.google.com/search?q=tesla")
results_list = driver.find_elements_by_tag_name('cite')
for i in range(len(results_list)):
    results_list[i] = results_list[i].text.replace(">", "/").replace("›", "/").replace(" ", "")
    if not validators.url(results_list[i]):
        results_list[i] = ''
results_list = list(filter(None, results_list))
print(results_list)
driver.quit()
```
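Since the original goal was to capture screenshots of the result sites, a minimal, hypothetical follow-up could look like the sketch below; it would have to run before the driver.quit() call, and the filename scheme is only illustrative, not part of the answer above:
```python
# Visit each collected result URL and save a screenshot of the page.
for i, url in enumerate(results_list):
    driver.get(url)
    driver.save_screenshot("result_{}.png".format(i))
```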
I am trying to scrape profiles from LinkedIn. I get profile URLs from the code below and want to pass them to driver.get(URL); however, when I scrape the URLs their format is different, e.g. they are wrapped in [ ] brackets, and I get this error:
```
selenium.common.exceptions.InvalidArgumentException: Message: invalid argument: 'url' must be a string
```
Could you please suggest how to get the proper format of URLs in the list linklist = [ ] so I can pass them to driver.get(URL)? Thanks!
```python
options = Options()
options.add_argument("--start-maximized")
options.headless = True
url = "https://www.linkedin.com/login?fromSignIn=true&trk=guest_homepage-basic_nav-header-signin"
driver = webdriver.Chrome(path, options=options)
driver.get(url)
driver.find_element_by_id('username').send_keys('name')
driver.find_element_by_id('password').send_keys('password', Keys.ENTER)
driver.implicitly_wait(10)
driver.find_element_by_class_name('search-global-typeahead__input').send_keys('Marketing manager', Keys.ENTER)
driver.implicitly_wait(10)
driver.find_element_by_xpath('//button[text()="People"]').click()
x = 0
profile = []
linklist = []
condition = True
while condition:
    sleep(2)
    driver.execute_script("window.scrollTo(0, 1400);")
    driver.implicitly_wait(10)
    linkedin_members = driver.find_elements_by_xpath('//span[@class="entity-result__title"]')
    links = [linkedin_member.find_element_by_xpath('.//a[@class="app-aware-link"]').get_attribute('href') for linkedin_member in linkedin_members if "/in/" in linkedin_member.find_element_by_xpath('.//a[@class="app-aware-link"]').get_attribute('href')]
    x = x + 1
    linklist.append(link for link in links)
    driver.implicitly_wait(10)
    driver.find_element_by_xpath("""//button[@class='artdeco-pagination__button artdeco-pagination__button--next artdeco-button artdeco-button--muted artdeco-button--icon-right artdeco-button--1 artdeco-button--tertiary ember-view' and contains(.,'Next')]""").click()
    if x == 2:
        condition = False
profile = []
for l in tqdm(linklist):
    driver.get(l)
```
I used a for loop instead of the while loop you used, because there is no variable condition: you only want to run the loop twice.
Here's how you can do it:
```python
linklist = []
for i in range(2):
    time.sleep(2)
    driver.execute_script("window.scrollTo(0, 1400);")
    driver.implicitly_wait(10)
    linkedin_members = driver.find_elements_by_xpath('//span[@class="entity-result__title"]')
    link = driver.find_element_by_class_name('app-aware-link').get_attribute('href')
    linklist.append(link)
    driver.implicitly_wait(10)
    driver.find_element_by_xpath("""//button[@class='artdeco-pagination__button artdeco-pagination__button--next artdeco-button artdeco-button--muted artdeco-button--icon-right artdeco-button--1 artdeco-button--tertiary ember-view' and contains(.,'Next')]""").click()

for url in linklist:
    driver.get(url)
```
I searched for the class that contains the profile URL and used .get_attribute('href') to extract it.
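If you need every matching profile link on a results page rather than just the first one, a minimal variation of the same idea (keeping the old find_elements_by_* style used above, and assuming LinkedIn still uses the app-aware-link class and /in/ profile URLs) would be:
```python
# Grab every anchor with the class used above and keep only profile URLs.
anchors = driver.find_elements_by_class_name('app-aware-link')
for a in anchors:
    href = a.get_attribute('href')
    if href and '/in/' in href:
        linklist.append(href)
```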
I'm scraping an e-commerce website, Lazada, using Selenium and bs4. I managed to scrape the 1st page, but I am unable to iterate to the next page. What I'm trying to achieve is to scrape all the pages of the category I've selected.
Here is what I've tried:
```python
# Run the argument with incognito
option = webdriver.ChromeOptions()
option.add_argument('--incognito')
driver = webdriver.Chrome(executable_path='chromedriver', chrome_options=option)
driver.get('https://www.lazada.com.my/')
driver.maximize_window()

# Select category item #
element = driver.find_elements_by_class_name('card-categories-li-content')[0]
webdriver.ActionChains(driver).move_to_element(element).click(element).perform()

t = 10
try:
    WebDriverWait(driver, t).until(EC.visibility_of_element_located((By.ID, "a2o4k.searchlistcategory.0.i0.460b6883jV3Y0q")))
except TimeoutException:
    print('Page Refresh!')
    driver.refresh()
    element = driver.find_elements_by_class_name('card-categories-li-content')[0]
    webdriver.ActionChains(driver).move_to_element(element).click(element).perform()
    print('Page Load!')

# Soup and select element
def getData(np):
    soup = bs(driver.page_source, "lxml")
    product_containers = soup.findAll("div", class_='c2prKC')
    for p in product_containers:
        title = (p.find(class_='c16H9d').text)  # title
        selling_price = (p.find(class_='c13VH6').text)  # selling price
        try:
            original_price = (p.find("del", class_='c13VH6').text)  # original price
        except:
            original_price = "-1"
        if p.find("i", class_='ic-dynamic-badge ic-dynamic-badge-freeShipping ic-dynamic-group-2'):
            freeShipping = 1
        else:
            freeShipping = 0
        try:
            discount = (p.find("span", class_='c1hkC1').text)
        except:
            discount = "-1"
        if p.find(("div", {'class': ['c16H9d']})):
            url = "https:" + (p.find("a").get("href"))
        else:
            url = "-1"
        nextpage_elements = driver.find_elements_by_class_name('ant-pagination-next')[0]
        np = webdriver.ActionChains(driver).move_to_element(nextpage_elements).click(nextpage_elements).perform()
        print("- -" * 30)
        toSave = [title, selling_price, original_price, freeShipping, discount, url]
        print(toSave)
        writerows(toSave, filename)

getData(np)
```
The problem might be that the driver is trying to click the button before the element has even loaded properly.
```python
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome(PATH, chrome_options=option)

# use this code after driver initialization
# this makes the driver wait 5 seconds for the page to load.
driver.implicitly_wait(5)

url = "https://www.lazada.com.ph/catalog/?q=phone&_keyori=ss&from=input&spm=a2o4l.home.search.go.239e359dTYxZXo"
driver.get(url)

next_page_path = "//ul[@class='ant-pagination ']//li[@class=' ant-pagination-next']"

# the following code will wait 5 seconds for
# the element to become clickable
# and then try clicking the element.
try:
    next_page = WebDriverWait(driver, 5).until(
        EC.element_to_be_clickable((By.XPATH, next_page_path)))
    next_page.click()
except Exception as e:
    print(e)
```
EDIT 1
Changed the code to make the driver wait for the element to become clickable. You can put this code inside a while loop to iterate over multiple pages, and break out of the loop when the button is not found or not clickable.
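A minimal sketch of that loop, under the assumption that a TimeoutException from the wait means the "Next" button is gone or disabled on the last page:
```python
from selenium.common.exceptions import TimeoutException

while True:
    # ... scrape the current page here ...
    try:
        next_page = WebDriverWait(driver, 5).until(
            EC.element_to_be_clickable((By.XPATH, next_page_path)))
        next_page.click()
    except TimeoutException:
        # "Next" never became clickable; assume this was the last page
        break
```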
I want to do a Google search and collect the links to all hits so that I can click those links and extract data from them afterwards. How can I get the link from every hit?
I've tried several solutions, like using a for loop and a while True statement; I'll show some examples of the code below. I either get no data at all or I only get data (links) from one page. Can someone please help me figure out how to iterate over every page of the Google search and get all the links so I can continue scraping those pages? I'm new to Selenium, so I'm sorry if the code doesn't make much sense; I've really confused myself with this one.
```python
driver.get('https://www.google.com')
search = driver.find_element_by_name('q')
search.send_keys('condition')
sleep(0.5)
search.send_keys(Keys.RETURN)
sleep(0.5)

while True:
    try:
        urls = driver.find_elements_by_class_name('iUh30')
        for url in urls:
            urls = [url.text for url in urls]
        sleep(0.5)
        element = driver.find_element_by_id('pnnext')
        driver.execute_script("return arguments[0].scrollIntoView();", element)
        sleep(0.5)
        element.click()

urls = driver.find_elements_by_class_name('iUh30')
urls = [url.text for url in urls]
sleep(0.5)
element = driver.find_element_by_id('pnnext')
driver.execute_script("return arguments[0].scrollIntoView();", element)
sleep(0.5)
element.click()

while True:
    next_page_btn = driver.find_element_by_id('pnnext')
    if len(next_page_btn) < 1:
        print("no more pages left")
        break
    else:
        urls = driver.find_elements_by_class_name('iUh30')
        urls = [url.text for url in urls]
        sleep(0.5)
        element = driver.find_element_by_id('pnnext')
        driver.execute_script("return arguments[0].scrollIntoView();", element)
        sleep(0.5)
        element.click()
```
I expect a list of all URLs from the Google search that can be opened by Selenium, so it can get data from those pages.
Instead, I only get a list of URLs from one page. The next step (scraping those pages) is working fine, but due to this restriction I only get 10 results while I'd like to see all of them.
Try the following code. I have changed it a bit. Hope this helps.
```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions

driver = webdriver.Chrome()
driver.get('https://www.google.com')
search = driver.find_element_by_name('q')
search.send_keys('condition')
search.submit()

while True:
    next_page_btn = driver.find_elements_by_xpath("//a[@id='pnnext']")
    if len(next_page_btn) < 1:
        print("no more pages left")
        break
    else:
        urls = driver.find_elements_by_xpath("//*[@class='iUh30']")
        urls = [url.text for url in urls]
        print(urls)
        element = WebDriverWait(driver, 5).until(expected_conditions.element_to_be_clickable((By.ID, 'pnnext')))
        driver.execute_script("return arguments[0].scrollIntoView();", element)
        element.click()
```
Output:
['https://dictionary.cambridge.org/dictionary/english/condition', 'https://www.thesaurus.com/browse/condition', 'https://en.oxforddictionaries.com/definition/condition', 'https://www.dictionary.com/browse/condition', 'https://www.merriam-webster.com/dictionary/condition', 'https://www.collinsdictionary.com/dictionary/english/condition', 'https://en.wiktionary.org/wiki/condition', 'www.businessdictionary.com/definition/condition.html', 'https://en.wikipedia.org/wiki/Condition', 'https://www.definitions.net/definition/condition', '', '', '', '']
['https://www.thefreedictionary.com/condition', 'https://www.thefreedictionary.com/conditions', 'https://www.yourdictionary.com/condition', 'https://www.foxnews.com/.../woman-battling-rare-suicide-disease-says-chronic-pain-con...', 'https://youngminds.org.uk/find-help/conditions/', 'www.road.is/travel-info/road-conditions-and-weather/', 'https://roll20.net/compendium/dnd5e/Conditions', 'https://www.home-assistant.io/docs/scripts/conditions/', 'https://www.bhf.org.uk/informationsupport/conditions', 'https://www.gov.uk/driving-medical-conditions']
['https://immi.homeaffairs.gov.au/visas/already-have.../check-visa-details-and-condition...', 'https://www.d20pfsrd.com/gamemastering/conditions/', 'https://www.ofgem.gov.uk/licences-industry-codes-and.../licence-conditions', 'https://www.healthychildren.org/English/health-issues/conditions/Pages/default.aspx', 'https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements.html', 'https://www.ofcom.org.uk/phones-telecoms.../general-conditions-of-entitlement', 'https://www.rnib.org.uk/eye-health/eye-conditions', 'https://www.mdt.mt.gov/travinfo/map/mtmap_frame.html', 'https://www.mayoclinic.org/diseases-conditions', 'https://www.w3schools.com/python/python_conditions.asp']
['https://www.tremblant.ca/mountain-village/mountain-report', 'https://www.equibase.com/static/horsemen/horsemenareaCB.html', 'https://www.abebooks.com/books/rarebooks/...guide/.../guide-book-conditions.shtml', 'https://nces.ed.gov/programs/coe/', 'https://www.cdc.gov/wtc/conditions.html', 'https://snowcrows.com/raids/builds/engineer/engineer/condition/']
['https://www.millenniumassessment.org/en/Condition.html', 'https://ghr.nlm.nih.gov/condition', 'horsemen.ustrotting.com/conditions.cfm', 'https://lb.511ia.org/ialb/', 'https://www.nps.gov/deva/planyourvisit/conditions.htm', 'https://www.allaboutvision.com/conditions/', 'https://www.spine-health.com/conditions', 'https://www.tripcheck.com/', 'https://hb.511.nebraska.gov/', 'https://www.gamblingcommission.gov.uk/.../licence-conditions-and-codes-of-practice....']
['https://sports.yahoo.com/andrew-bogut-credits-beer-improved-022043569.html', 'https://ant.apache.org/manual/Tasks/conditions.html', 'https://www.disability-benefits-help.org/disabling-conditions', 'https://www.planningportal.co.uk/info/200126/applications/60/consent_types/12', 'https://www.leafly.com/news/.../qualifying-conditions-for-medical-marijuana-by-state', 'https://www.hhs.gov/healthcare/about-the-aca/pre-existing-conditions/index.html', 'https://books.google.co.uk/books?id=tRcHAAAAQAAJ', 'www.onr.org.uk/documents/licence-condition-handbook.pdf', 'https://books.google.co.uk/books?id=S0sGAAAAQAAJ']
['https://books.google.co.uk/books?id=KSjLDvXH6iUC', 'https://www.arcgis.com/apps/Viewer/index.html?appid...', 'https://www.trappfamily.com/trail-conditions.htm', 'https://books.google.co.uk/books?id=n_g0AQAAMAAJ', 'https://books.google.co.uk/books?isbn=1492586277', 'https://books.google.co.uk/books?id=JDjQ2-HV3l8C', 'https://www.newsshopper.co.uk/.../17529825.teenager-no-longer-in-critical-condition...', 'https://nbcpalmsprings.com/.../bicyclist-who-collided-with-minivan-hospitalized-in-cri...']
['https://www.stuff.co.nz/.../4yearold-christchurch-terrorist-attack-victim-in-serious-but-...', 'https://www.shropshirestar.com/.../woman-in-serious-condition-after-fall-from-motor...', 'https://www.expressandstar.com/.../woman-in-serious-condition-after-fall-from-motor...', 'https://www.independent.ie/.../toddler-rushed-to-hospital-in-serious-condition-after-hit...', 'https://www.nhsinform.scot/illnesses-and-conditions/ears-nose-and-throat/vertigo', 'https://www.rochdaleonline.co.uk/.../teenage-cyclist-in-serious-condition-after-collisio...', 'https://www.irishexaminer.com/.../baby-of-woman-found-dead-in-cumh-in-critical-cond...', 'https://touch.nihe.gov.uk/index/corporate/housing.../house_condition_survey.htm', 'https://www.nami.org/Learn-More/Mental-Health-Conditions', 'https://www.weny.com/.../update-woman-in-critical-but-stable-condition-after-being-s...']
I am trying to build a scraping application for Hants.gov.uk, and right now I am working on just clicking through the pages rather than scraping. When it gets to the last row on page 1 it just stops, so what I did was make it click the "Next Page" button, but first it has to go back to the original URL. It clicks through to page 2, but after page 2 is scraped it doesn't go to page 3; it just restarts page 2.
Can somebody help me fix this issue?
Code:
```python
import time
import config  # Don't worry about this. This is an external file to make a DB
import urllib.request
from bs4 import BeautifulSoup
from selenium import webdriver

url = "https://planning.hants.gov.uk/SearchResults.aspx?RecentDecisions=True"

driver = webdriver.Chrome(executable_path=r"C:\Users\Goten\Desktop\chromedriver.exe")
driver.get(url)
driver.find_element_by_id("mainContentPlaceHolder_btnAccept").click()

def start():
    elements = driver.find_elements_by_css_selector(".searchResult a")
    links = [link.get_attribute("href") for link in elements]
    result = []
    for link in links:
        if link not in result:
            result.append(link)
        else:
            driver.get(link)
            goUrl = urllib.request.urlopen(link)
            soup = BeautifulSoup(goUrl.read(), "html.parser")
            #table = soup.find_element_by_id("table", {"class": "applicationDetails"})
            for i in range(20):
                pass  # Don't worry about all this commented code, it isn't relevant right now
                #table = soup.find_element_by_id("table", {"class": "applicationDetails"})
                #print(table.text)
                # div = soup.select("div.applicationDetails")
                # getDiv = div[i].split(":")[1].get_text()
                # log = open("log.txt", "a")
                # log.write(getDiv + "\n")
            #log.write("\n")

start()
driver.get(url)
for i in range(5):
    driver.find_element_by_id("ctl00_mainContentPlaceHolder_lvResults_bottomPager_ctl02_NextButton").click()
    url = driver.current_url
    start()
    driver.get(url)
driver.close()
```
Try this:
```python
import time
# import config # Don't worry about this. This is an external file to make a DB
import urllib.request
from bs4 import BeautifulSoup
from selenium import webdriver

url = "https://planning.hants.gov.uk/SearchResults.aspx?RecentDecisions=True"

driver = webdriver.Chrome()
driver.get(url)
driver.find_element_by_id("mainContentPlaceHolder_btnAccept").click()

result = []

def start():
    elements = driver.find_elements_by_css_selector(".searchResult a")
    links = [link.get_attribute("href") for link in elements]
    result.extend(links)

def start2():
    for link in result:
        # if link not in result:
        #     result.append(link)
        # else:
        driver.get(link)
        goUrl = urllib.request.urlopen(link)
        soup = BeautifulSoup(goUrl.read(), "html.parser")
        #table = soup.find_element_by_id("table", {"class": "applicationDetails"})
        for i in range(20):
            pass  # Don't worry about all this commented code, it isn't relevant right now
            #table = soup.find_element_by_id("table", {"class": "applicationDetails"})
            #print(table.text)
            # div = soup.select("div.applicationDetails")
            # getDiv = div[i].split(":")[1].get_text()
            # log = open("log.txt", "a")
            # log.write(getDiv + "\n")
        #log.write("\n")

while True:
    start()
    element = driver.find_element_by_class_name('rdpPageNext')
    try:
        check = element.get_attribute('onclick')
        if check != "return false;":
            element.click()
        else:
            break
    except:
        break

print(result)
start2()
driver.get(url)
```
As per the URL https://planning.hants.gov.uk/SearchResults.aspx?RecentDecisions=True, to click through all the pages you can use the following solution:
Code Block:
```python
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC

options = Options()
options.add_argument("start-maximized")
options.add_argument("disable-infobars")
options.add_argument("--disable-extensions")
driver = webdriver.Chrome(chrome_options=options, executable_path=r'C:\Utility\BrowserDrivers\chromedriver.exe')
driver.get('https://planning.hants.gov.uk/SearchResults.aspx?RecentDecisions=True')
WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.ID, "mainContentPlaceHolder_btnAccept"))).click()
numLinks = len(WebDriverWait(driver, 20).until(EC.visibility_of_all_elements_located((By.CSS_SELECTOR, "div#ctl00_mainContentPlaceHolder_lvResults_topPager div.rdpWrap.rdpNumPart>a"))))
print(numLinks)
for i in range(numLinks):
    print("Perform your scrapping here on page {}".format(str(i+1)))
    WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH, "//div[@id='ctl00_mainContentPlaceHolder_lvResults_topPager']//div[@class='rdpWrap rdpNumPart']//a[@class='rdpCurrentPage']/span//following::span[1]"))).click()
driver.quit()
```
Console Output:
8
Perform your scrapping here on page 1
Perform your scrapping here on page 2
Perform your scrapping here on page 3
Perform your scrapping here on page 4
Perform your scrapping here on page 5
Perform your scrapping here on page 6
Perform your scrapping here on page 7
Perform your scrapping here on page 8
Hi @Feitan Portor, you have written the code absolutely perfectly. The only reason you are redirected back to the first page is that you kept url = driver.current_url in the last for loop: the URL stays static, and it is only the JavaScript that instigates the next click event. So just remove url = driver.current_url and driver.get(url), and you are good to go; I have tested it myself.
Also, to know which page your scraper is currently on, just add this part inside the for loop:
```python
ss = driver.find_element_by_class_name('rdpCurrentPage').text
print(ss)
```
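Putting it together, a rough sketch of the tail of the script with the fix applied (keeping the range(5) from the question and dropping the two lines mentioned above) would be:
```python
start()  # scrape page 1
for i in range(5):
    driver.find_element_by_id("ctl00_mainContentPlaceHolder_lvResults_bottomPager_ctl02_NextButton").click()
    ss = driver.find_element_by_class_name('rdpCurrentPage').text
    print(ss)  # current page number, so you can see where the scraper is
    start()  # scrape the page that was just opened
driver.close()
```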
Hope this solves your confusion.