I am trying to web scrape university ranking information from the US News site. The problem is that when I use Selenium to open the webpage, the 'Load More' button is not working properly. I think I successfully click it, but in the Chrome window opened by the webdriver, when I scroll down to the button, it says 'We're sorry, there was a problem loading the next page of search results'.
I am new to web scraping and I did a lot of research on this; there are several similar questions, but none of those answers helped. I really need some help. Here is my code:
driver_path = 'xxx'  # path to your chromedriver binary
driver = webdriver.Chrome(executable_path=driver_path)
url2 = 'https://www.usnews.com/education/best-global-universities/rankings'
wait = WebDriverWait(driver, 30)
driver.get(url2)
driver.maximize_window()
count = 1
while True:
    try:
        print(1)
        # driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
        wait.until(EC.visibility_of_element_located((By.XPATH, "//*[@id='rankings']/div[3]/button")))
        print(2)
        show_more = wait.until(EC.element_to_be_clickable((By.XPATH, "//*[@id='rankings']/div[3]/button")))
        ActionChains(driver).move_to_element(show_more).click().perform()
        print(3)
        # driver.find_element(By.XPATH, "//*[@id='rankings']/div[3]/button").click()
        # print(4)
        # wait.until(EC.visibility_of_element_located((By.XPATH, "//*[@id='rankings']/div[3]/button")))
        # print(5)
        count += 1
        time.sleep(2)
        if count >= 2:
            break
    except Exception:
        # assumed: the posted snippet was cut off here; stop on any failure
        break
I did not write code to close the ad, but I don't think the ad is the problem, since when I manually close it and then click the button, it still does not work. Is it a problem with the website?
import requests
import os
from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.common.action_chains import ActionChains  # needed for the click above
import time
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
It is clear that this specific site has some anti-scraping protection. It is always recommended to consult the robots.txt file beforehand and check whether scraping a given site is allowed at all.
In this case, the site simply blocks your IP (try to visit other pages afterwards; you will see that you get a 403 error).
In general, though, the approach you used does not seem wrong to me. You could try contacting the site directly to see if the problem can be solved in some other way.
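As a quick illustration, here is a minimal sketch of checking robots.txt before scraping, using Python's standard library (the generic '*' user agent here is just an example):
from urllib import robotparser
# Load and parse the site's robots.txt
rp = robotparser.RobotFileParser()
rp.set_url('https://www.usnews.com/robots.txt')
rp.read()
# Check whether a generic crawler is allowed to fetch the rankings page
url = 'https://www.usnews.com/education/best-global-universities/rankings'
print(rp.can_fetch('*', url))
If can_fetch returns False, the site's crawling policy disallows that page and you should respect it.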
I am trying to scrape from Google search results the blue highlighted portion as shown below:
When I use inspect element, it shows span class="YhemCb". I have tried various soup.find and soup.find_all commands, but everything I have tried has produced no output so far. What command should I use to scrape this part?
Google uses javascript to display most of its web elements, so using something like requests and BeautifulSoup is unfortunately not enough.
Instead, use selenium! It essentially allows you to control a browser using code.
First, you will need to navigate to the Google page you wish to scrape:
google_search = 'https://www.google.com/search?q=courtyard+by+marriott+fayetteville+fort+bragg'
driver.get(google_search)
Then, you have to wait until the review page loads in the browser.
This is done using WebDriverWait: you specify an element that needs to appear on the page. The [data-attrid="kc:/local:one line summary"] span CSS selector allows me to select the review info about the hotel.
timeout = 10
expectation = EC.presence_of_element_located((By.CSS_SELECTOR, '[data-attrid="kc:/local:one line summary"] span'))
review_element = WebDriverWait(driver, timeout).until(expectation)
And finally, print the rating:
print(review_element.get_attribute('innerHTML'))
Here's the full code in case you want to play around with it:
import chromedriver_autoinstaller
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait
# setup selenium (I am using chrome here, so chrome has to be installed on your system)
chromedriver_autoinstaller.install()
options = Options()
options.headless = True
driver = webdriver.Chrome(options=options)
# navigate to google
google_search = 'https://www.google.com/search?q=courtyard+by+marriott+fayetteville+fort+bragg'
driver.get(google_search)
# wait until the page loads
timeout = 10
expectation = EC.presence_of_element_located((By.CSS_SELECTOR, '[data-attrid="kc:/local:one line summary"] span'))
review_element = WebDriverWait(driver, timeout).until(expectation)
# print the rating
print(review_element.get_attribute('innerHTML'))
Note: Google is notoriously defensive against anyone trying to scrape it. On the first few attempts you might be successful, but eventually you will have to deal with a Google captcha.
To work around that, I would suggest using a search engine scraper; something like the quickstart guide will get you started!
Disclaimer: I work at Oxylabs.io
I am making a web crawler to get information from http://www.caam.org.cn/hyzc, but it shows me HTTP Error 302, and I cannot fix it.
https://imgur.com/a/W0cykim
The picture gives you a rough idea of this website's unusual layout: when you browse it, a window pops up telling you that the website is accelerating because so many people are online, and then redirects you to the site. As a result, when I use my web crawler, all I get is the information on this window, and nothing from the website itself. I think this is how the site keeper gets rid of web crawlers, so I want to ask for your help getting useful information from this website.
At first, I used Python's requests library for my web crawler, and I only got the information on that window; the results are shown here: https://imgur.com/a/GLcpdZn
Then I forbade website redirects, and I got HTTP Error 303 instead, as shown here:
https://imgur.com/a/6YtaVOt
This is the latest code I used:
import requests

def getpage(url):
    try:
        r = requests.get(url, headers={'User-Agent': 'Mozilla/5.0'}, timeout=10)
        r.raise_for_status()
        r.encoding = r.apparent_encoding
        return r.text
    except requests.RequestException:
        return "try again"

url = "http://www.caam.org.cn/hyzc"
print(getpage(url))
The expected outcome is to get useful information from the website http://www.caam.org.cn/hyzc. We may need to deal with the pop-up window.
It looks like this website has some kind of protection against crawlers that use requests: the page is not entirely loaded when you send a GET request.
You can try to emulate a browser using selenium:
from selenium import webdriver
driver = webdriver.Chrome()
driver.get('http://www.caam.org.cn/hyzc')
print(driver.page_source)
driver.close()
driver.page_source will contain the page source.
You can learn how to set up the selenium webdriver here.
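If you then want to pull specific pieces out of the rendered page, a minimal sketch (assuming BeautifulSoup is installed; extracting links is just an example) would be to hand the page source to BeautifulSoup:
from bs4 import BeautifulSoup
from selenium import webdriver

driver = webdriver.Chrome()
driver.get('http://www.caam.org.cn/hyzc')
# Parse the browser-rendered source rather than the raw HTTP response
soup = BeautifulSoup(driver.page_source, 'html.parser')
for link in soup.find_all('a'):
    print(link.get('href'), link.get_text(strip=True))
driver.close()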
I added something to delay the closure of my web crawler, and this worked. So I want to share my lines in case you meet a similar problem in the future:
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

options = Options()
driver = webdriver.Chrome(options=options)
driver.get('http://www.caam.org.cn')
# Grab the current body, then wait until the redirect replaces it
body = driver.find_element_by_tag_name("body")
wait = WebDriverWait(driver, 5, poll_frequency=0.05)
wait.until(EC.staleness_of(body))
print(driver.page_source)
driver.close()
I've written a script in Python with Selenium to click on each of the signs available in a map. However, when I execute my script, it throws a timeout exception upon reaching the line wait.until(EC.staleness_of(item)).
Before hitting that line, the script should have clicked once, but it could not. How can I click on all the signs in that map cyclically?
This is the site link.
This is my code so far (perhaps I'm trying the wrong selectors):
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
link = "https://www.findapetwash.com/"
driver = webdriver.Chrome()
driver.get(link)
wait = WebDriverWait(driver, 15)
for item in wait.until(EC.visibility_of_all_elements_located((By.CSS_SELECTOR, "#map .gm-style"))):
    item.click()
    wait.until(EC.staleness_of(item))
driver.quit()
Post script: I know this is their API, https://www.findapetwash.com/api/locations/getAll/, which I could use to get the JSON content, but I would like to stick to the Selenium way. Thanks.
I know you wrote that you don't want to use the API, but using Selenium to get the locations from the map markers seems a bit overkill for this. Instead, why not make a call to their web service using requests and parse the returned JSON?
Here is a working script:
import requests
import json

api_url = 'https://www.findapetwash.com/api/locations/getAll/'

class Location:
    def __init__(self, json):
        self.id = json['id']
        self.user_id = json['user_id']
        self.name = json['name']
        self.address = json['address']
        self.zipcode = json['zipcode']
        self.lat = json['lat']
        self.lng = json['lng']
        self.price_range = json['price_range']
        self.photo = 'https://www.findapetwash.com' + json['photo']

def get_locations():
    locations = []
    response = requests.get(api_url)
    if response.ok:
        result_json = json.loads(response.text)
        for location_json in result_json['locations']:
            locations.append(Location(location_json))
        return locations
    else:
        print('Error loading locations')
        return False

if __name__ == '__main__':
    locations = get_locations()
    for l in locations:
        print(l.name)
Selenium
If you still want to go the Selenium way, instead of waiting until all the elements are loaded, you could simply halt the script for some seconds, or even a minute, to make sure everything is loaded; this should fix the timeout exception:
import time
driver.get(link)
# Wait 20 seconds
time.sleep(20)
For other possible workarounds, see the accepted answer here: Make Selenium wait 10 seconds
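If you would rather not hard-code a sleep, a sketch using an explicit wait could replace it (reusing the question's "#map .gm-style" selector, which is an assumption about the page structure):
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# Wait up to 20 seconds for the map elements instead of sleeping blindly
wait = WebDriverWait(driver, 20)
items = wait.until(EC.visibility_of_all_elements_located((By.CSS_SELECTOR, "#map .gm-style")))
This returns as soon as the elements are visible, rather than always pausing for the full 20 seconds.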
You can click them one by one using Selenium if, for some reason, you cannot use the API. It is also possible to extract the information for each sign without clicking on them at all.
Here is code to click them one by one:
signs = wait.until(EC.presence_of_all_elements_located((By.CSS_SELECTOR, "li.marker.marker--list")))
for sign in signs:
    driver.execute_script("arguments[0].click();", sign)
    # do something
Try also without the wait; it will probably work.
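For reference, a self-contained sketch of that approach (assuming the li.marker.marker--list selector still matches the markers on the page) might look like:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get("https://www.findapetwash.com/")
wait = WebDriverWait(driver, 15)
# Collect every marker element, then click each one via JavaScript
signs = wait.until(EC.presence_of_all_elements_located((By.CSS_SELECTOR, "li.marker.marker--list")))
for sign in signs:
    driver.execute_script("arguments[0].click();", sign)
    # do something with the opened info window here
driver.quit()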
I'm currently working on a research project in which we are trying to collect saved image files from Brazil's Hemeroteca database. I've done web scraping on PHP pages before using C/C++ with HTML forms, but as this is a shared script, I need to switch to Python so that everyone in the group can use this tool.
The page which I'm trying to scrape is: http://bndigital.bn.gov.br/hemeroteca-digital/
There are three forms which populate, the first being the newspaper/journal. Upon selecting this, the available time ranges populate, and the final field is the search term. I've inspected the HTML page and the three IDs are, respectively, 'PeriodicoCmb1_Input', 'PeriodoCmb1_Input', and 'PesquisaTxt1'.
Some google searches on this topic led me to the Selenium package, and I've put together this sample code to attempt to read the page:
import webbrowser
import requests
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.wait import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
import time
print("Begin...")
browser = webdriver.Chrome()
url = "http://bndigital.bn.gov.br/hemeroteca-digital/"
browser.get(url)
print("Waiting to load page... (Delay 3 seconds)")
time.sleep(3)
print("Searching for elements")
journal = browser.find_element_by_id("PeriodicoCmb1_Input")
timeRange = browser.find_element_by_id("PeriodoCmb1_Input")
searchTerm = browser.find_element_by_id("PesquisaTxt1")
print(journal)
print("Set fields, delay 3 seconds between input")
search_journal = "Relatorios dos Presidentes dos Estados Brasileiros (BA)"
search_timeRange = "1890 - 1899"
search_text = "Milho"
journal.send_keys(search_journal)
time.sleep(3)
timeRange.send_keys(search_timeRange)
time.sleep(3)
searchTerm.send_keys(search_text)
print("Perform search")
submitButton = browser.find_element_by_id("PesquisarBtn1_input")
submitButton.click()
The script runs to the print(journal) statement, where an error is thrown saying the element cannot be found.
Can anyone take a quick sweep of the page in question and make sure I've got the general premise of this script right, or point me towards some examples to get me going on this problem?
Thanks!
The DOM elements you are trying to find are located in an iframe, so before using the find_element_by_id API you should switch to the iframe context.
Here is code showing how to switch to the iframe context:
# add your code
frame_ref = browser.find_elements_by_tag_name("iframe")[0]
browser.switch_to.frame(frame_ref)  # switch_to.frame returns None, so there is nothing useful to assign
journal = browser.find_element_by_id("PeriodicoCmb1_Input")
timeRange = browser.find_element_by_id("PeriodoCmb1_Input")
searchTerm = browser.find_element_by_id("PesquisaTxt1")
# add your code
Here is a link describing how to switch to an iframe context.
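One detail worth adding: once you are done inside the iframe, switch back to the top-level document before touching elements outside it. A minimal sketch:
# Switch into the first iframe on the page
frame_ref = browser.find_elements_by_tag_name("iframe")[0]
browser.switch_to.frame(frame_ref)
# ... interact with elements inside the iframe ...
# Return to the top-level document when done
browser.switch_to.default_content()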
I'm new to Selenium and trying to automate the download of some government data. Using the code below, I manage to navigate to the right page and enter the right parameter in the form, but then I can't find a way to click the 'submit' button. I've tried find_element_by_partial_link_text("Subm").click() and I've tried find_element_by_class_name with a number of class names. Nothing works. Any ideas?
import time
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.keys import Keys
main_url="http://data.stats.gov.cn/english/easyquery.htm?cn=E0101"
driver = webdriver.Firefox()
driver.get(main_url)
time.sleep(8)
driver.find_element_by_partial_link_text("Industry").click()
time.sleep(8)
driver.find_element_by_partial_link_text("Main Economic Indicat").click()
time.sleep(8)
driver.find_element_by_id("mySelect_sj").click()
time.sleep(3)
driver.find_element_by_class_name("dtText").send_keys("last72")
time.sleep(4)
try:
    driver.find_element_by_class_name("dtFoot").click()
except:
    driver.find_element_by_class_name("dtFoot").submit()
Solved my own problem; the key was using
driver.find_element_by_class_name("dtTextBtn")
instead of
driver.find_element_by_class_name("dtTextBtn f10")
The latter was what I saw in the source code, but the f10 blocked Selenium: find_element_by_class_name expects a single class name, so the space in "dtTextBtn f10" makes the lookup fail.
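As a side note, if you ever do need to match on both classes, a CSS selector accepts the compound class names that find_element_by_class_name rejects. A minimal sketch:
# Matches an element carrying both the dtTextBtn and f10 classes
button = driver.find_element_by_css_selector(".dtTextBtn.f10")
button.click()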