I am using Selenium and Python to scrape a website. I am not able to scrape a particular table using Beautiful Soup.
Here is the code:
import time
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import Select, WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import TimeoutException
from bs4 import BeautifulSoup
import pandas as pd
link='http://omms.nic.in/#'
browser=webdriver.Firefox()
browser.get(link)
time.sleep(15)
WebDriverWait(browser, 10).until(EC.element_to_be_clickable((By.XPATH, '/html/body/div[1]/div/div/ul/li[3]/a'))).click()
time.sleep(10)
WebDriverWait(browser, 10).until(EC.element_to_be_clickable((By.XPATH, '/html/body/div[1]/div/div/ul/li[3]/ul/li[1]/ul/li[5]/a'))).click()
select_state=Select(browser.find_element_by_xpath('/html/body/div[2]/div[1]/form/table/tbody/tr[1]/td[3]/select'))
select_state.select_by_index(35)
WebDriverWait(browser, 10).until(EC.element_to_be_clickable((By.XPATH, '/html/body/div[2]/div[1]/form/table/tbody/tr[3]/td[7]/input[1]'))).click()
select_district=Select(browser.find_element_by_xpath('/html/body/div[2]/div[1]/form/table/tbody/tr[1]/td[5]/select'))
options = [x.text for x in select_district.options]
select_district.select_by_index(3)
select_year=Select(browser.find_element_by_xpath('/html/body/div[2]/div[1]/form/table/tbody/tr[2]/td[3]/select'))
select_year.select_by_index(8)
time.sleep(10)
WebDriverWait(browser, 10).until(EC.element_to_be_clickable((By.XPATH, '/html/body/div[2]/div[1]/form/table/tbody/tr[4]/td[3]/input'))).click()
time.sleep(5)
soup=BeautifulSoup(browser.page_source,"html.parser")
table=soup.find_all('table',attrs={'class':"A35402edea1d24691942da96210fa88a3382"})
data_name = pd.read_html(str(table))[0]
I am getting the error: No tables found
This is probably because the table you are looking for is not in browser.page_source; it is loaded inside a separate iframe. We can switch to the frame and then get its source.
browser.switch_to.frame(browser.find_element_by_xpath("//*[@id='loadReport']//iframe"))
print(browser.page_source)
soup = BeautifulSoup(browser.page_source,"html.parser")
In case you need to interact with the rest of the page again, you can switch back using
browser.switch_to.default_content()
Also consider increasing the time to wait for the table to load.
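Rather than a fixed sleep, you could wait explicitly for a table to appear inside the frame and hand the frame's source to pandas. A minimal sketch, assuming the 'loadReport' container from above:
import pandas as pd
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
# Switch into the report iframe, then wait for a table to be present.
browser.switch_to.frame(browser.find_element_by_xpath("//*[@id='loadReport']//iframe"))
WebDriverWait(browser, 30).until(EC.presence_of_element_located((By.TAG_NAME, 'table')))
# pandas parses every table in the frame's source into DataFrames.
tables = pd.read_html(browser.page_source)
print(tables[0].head())
# Return to the main document when done.
browser.switch_to.default_content()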
Related
I am trying to scrape a website to get the heading and summary of each news item. The problem is that when the website is first opened, a redirect appears and we have to wait 8 seconds for the site to load. As a result, the page source being stored is that of the redirect instead of the main website.
from selenium import webdriver
import time
from selenium.webdriver.common.by import By
from selenium.webdriver.chrome.service import Service
from webdriver_manager.chrome import ChromeDriverManager
from bs4 import BeautifulSoup
# Initialize the webdriver (webdriver-manager downloads the matching ChromeDriver)
driver = webdriver.Chrome(service=Service(ChromeDriverManager().install()))
# Navigate to website
driver.get("https://economictimes.indiatimes.com/markets/stocks/news")
time.sleep(10)
data2, data4 = [], []
while True:
    # Extract data
    soup = BeautifulSoup(driver.page_source, 'html.parser')
    data = soup.find_all("div", {"class": "example-class"})
    for item in data:
        data2.append(item.find_all('h3'))
        data4.append(item.find_all('p'))
    try:
        # Find the "Load More" button
        load_more_button = driver.find_element(By.CSS_SELECTOR, "div.autoload_continue")
        # Click the button
        load_more_button.click()
    except Exception:
        break
# Close the browser
driver.quit()
print(data2)
You could wait for the switch to your final URL:
wait.until(EC.url_to_be('https://economictimes.indiatimes.com/markets/stocks/news'))
Example
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from webdriver_manager.chrome import ChromeDriverManager
driver = webdriver.Chrome(service=Service(ChromeDriverManager().install()))
url = 'https://economictimes.indiatimes.com/markets/stocks/news'
wait = WebDriverWait(driver, 10)
driver.get(url)
wait.until(EC.url_to_be('https://economictimes.indiatimes.com/markets/stocks/news'))
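Once the wait passes, the parsed source is the target site rather than the redirect page. A minimal sketch of continuing the extraction, reusing the placeholder class from the question:
from bs4 import BeautifulSoup
# After the URL check passes, page_source reflects the target site.
soup = BeautifulSoup(driver.page_source, 'html.parser')
# 'example-class' is the placeholder from the question; use the real class here.
for item in soup.find_all('div', {'class': 'example-class'}):
    headings = [h.get_text(strip=True) for h in item.find_all('h3')]
    summaries = [p.get_text(strip=True) for p in item.find_all('p')]
    print(headings, summaries)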
An ideal approach would be to wait for the News heading within the webpage to be visible.
Solution
To wait for the News heading to be visible, you need to induce WebDriverWait for visibility_of_element_located(), and you can use either of the following locator strategies:
Using CSS_SELECTOR:
driver.get('https://economictimes.indiatimes.com/markets/stocks/news')
WebDriverWait(driver, 20).until(EC.visibility_of_element_located((By.CSS_SELECTOR, "h1.h1")))
Using XPATH:
driver.get('https://economictimes.indiatimes.com/markets/stocks/news')
WebDriverWait(driver, 20).until(EC.visibility_of_element_located((By.XPATH, "//h1[@class='h1' and text()='News']")))
Note: You have to add the following imports:
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
Alternative
You can also wait for the Page Title of the webpage to contain Stocks in News Today as follows:
driver.get('https://economictimes.indiatimes.com/markets/stocks/news')
WebDriverWait(driver, 10).until(EC.title_contains("Stocks in News Today"))
References
You can find a couple of relevant detailed discussions in:
Python selenium get page title
How to make selenium wait before getting contents from the actual website which loads after the landing page through IEDriverServer and IE
from scrapy_selenium import SeleniumRequest
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.select import Select
from selenium import webdriver
url='https://www.aeafa.es/asociados.php?provinput='
driver = webdriver.Chrome(r'C:\Program Files (x86)\chromedriver.exe')
driver.get(url)
wait = WebDriverWait(driver, 30)
detail=wait.until(EC.element_to_be_clickable((By.XPATH, "//tbody//td[6]")))
detail.click()
The error is in these lines:
detail=wait.until(EC.element_to_be_clickable((By.XPATH, "//tbody//td[6]")))
detail.click()
I want to click on the title link of each entry shown on the page, and then scrape the information that appears. How can I scrape this data?
You cannot locate an HTML attribute to click on it. Try replacing
detail=wait.until(EC.element_to_be_clickable((By.XPATH, "//td//img//@src"))).click()
with
detail = wait.until(EC.element_to_be_clickable((By.XPATH, "//a[@title='info']")))
detail.click()
UPDATE
Since your scrapy_selenium approach doesn't seem to work, try the plain Selenium approach first. Then you could adapt it to Scrapy.
from selenium import webdriver
driver = webdriver.Chrome()
url = 'https://www.aeafa.es/asociados.php?provinput='
driver.get(url)
for mail in driver.find_elements("xpath", "//p/a[starts-with(@href, 'mailto')]"):
print(mail.get_attribute('textContent'))
Note that you don't need to open each details popup to get the required email text.
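If the emails load a moment after the page does, an explicit wait keeps the loop from reading an empty list. A small variant of the same approach:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
driver = webdriver.Chrome()
driver.get('https://www.aeafa.es/asociados.php?provinput=')
# Wait until the mailto links are present before collecting them.
locator = (By.XPATH, "//p/a[starts-with(@href, 'mailto')]")
WebDriverWait(driver, 30).until(EC.presence_of_all_elements_located(locator))
emails = [a.get_attribute('textContent') for a in driver.find_elements(*locator)]
print(emails)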
Here is the code I'm trying to use to scrape data from the FRED website and download the time series data in CSV format, but it is redirecting me to another page.
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
from selenium.webdriver import ActionChains
url='https://fred.stlouisfed.org/series/TERMCBAUTO48NS'
driver=webdriver.Chrome(executable_path=r'D:\Workspace\Python\automation\chromedriver.exe')
driver.get(url)
element=driver.find_element_by_id('download-button')
element.click()
wait1=WebDriverWait(driver,20)
result1=wait1.until(
EC.element_to_be_clickable((By.ID,'fg-download-menu')))
print('Result 1: ',result1)
menu=driver.find_element_by_id('fg-download-menu')
wait1=WebDriverWait(driver,20)
result2=wait1.until(
EC.element_to_be_clickable((By.ID,'download-data-csv')))
print('Result 2: ',result2)
hidden_submenu=driver.find_element_by_id('download-data-csv')
actions=ActionChains(driver)
actions.move_to_element(menu)
actions.click(hidden_submenu)
actions.perform()
driver.quit()
The locators you are using are not unique. For example, there are several tags with the same id download-button.
It's important to find unique locators for elements. You can refer to this link.
Try like below and confirm:
driver.get("https://fred.stlouisfed.org/series/TERMCBAUTO48NS")
wait = WebDriverWait(driver,30)
# Click on Download button
wait.until(EC.element_to_be_clickable((By.XPATH,"//div[#id='page-title']//button[#id='download-button']"))).click()
# Click on CSV data
wait.until(EC.element_to_be_clickable((By.XPATH,"//ul[#id='fg-download-menu']//a[#id='download-data-csv']"))).click()
This should work:
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
driver = webdriver.Chrome("D:/chromedriver/94/chromedriver.exe")
driver.get("https://fred.stlouisfed.org/series/TERMCBAUTO48NS")
# wait 60 seconds
wait = WebDriverWait(driver,60)
wait.until(EC.element_to_be_clickable((By.ID, "download-button"))).click()
wait.until(EC.element_to_be_clickable((By.ID, "download-data-csv"))).click()
I'm trying to read an HTML table and extract it into a pd.DataFrame, but I'm getting something different instead. What am I doing wrong?
The output is: [<selenium.webdriver.remote.webelement.WebElement (session="38159852443c19167a9033a2b078fe45", element="ef6a42a1-2775-44c1-955f-5f01870bc758")>]
Here is my code:
import pandas as pd
import time
from selenium import webdriver
from selenium.common.exceptions import TimeoutException
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.chrome.options import Options
from bs4 import BeautifulSoup
options = Options()
options.add_argument('--headless')
options.add_argument('--disable-gpu')
driver = webdriver.Chrome(executable_path = 'mypath/chromedriver.exe', options = options)
driver.get("https://ai.fmcsa.dot.gov/SMS")
wait = WebDriverWait(driver, 20)
wait.until(EC.element_to_be_clickable((By.XPATH, "//a[#title='Close']"))).click()
wait.until(EC.element_to_be_clickable((By.XPATH, "(//input[#name='MCSearch'])[2]"))).send_keys('1818437')
wait.until(EC.element_to_be_clickable((By.XPATH, "(//input[#name='search'])[2]"))).click()
wait.until(EC.element_to_be_clickable((By.XPATH, "//*[#id='BASICs']/p[2]/a"))).click()
tables=WebDriverWait(driver,20).until(EC.presence_of_all_elements_located((By.XPATH,'//*[#id="BASICs"]/table/tbody/tr[2]')))
print(tables)
Disregard the extra imports; I've been trying to approach the problem in different ways but keep failing.
I might have solved it, actually.
I changed the XPATH of the table from
'//*[#id="BASICs"]/table/tbody/tr[2]'
to
"//tr[#class='valueRow sumData']"
and I just realized I was printing the element rather than its content, so I changed the last line from
print(tables) to print(tables.text)
Now it's printing the table but for some reason it's not printing the very last value (0.05 in this case). Any ideas why?
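One thing worth checking: element.text only returns text that is currently visible on the page, so a cell hidden via CSS is skipped, while the textContent attribute usually still includes it. A minimal sketch of that comparison:
# .text returns only visible text; textContent also includes hidden cells.
row = WebDriverWait(driver, 20).until(
    EC.presence_of_element_located((By.XPATH, "//tr[@class='valueRow sumData']")))
print(row.text)                          # visible cells only
print(row.get_attribute('textContent'))  # includes hidden cells too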
I am trying to scrape this website
https://script.google.com/a/macros/cprindia.org/s/AKfycbzgfCVNciFRpcpo8P7joP1wTeymj9haAQnNEkNJJ2fQ4FBXEco/exec
I am using Selenium and Python. I am not able to view the entire page source. Basically, I have to scrape the table inside it and click on the next button, but the code for the next button and the table is not visible in the page source. Here is my code:
import time
from selenium import webdriver
from selenium.webdriver.support.ui import Select
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import TimeoutException
from bs4 import BeautifulSoup
link = 'https://script.google.com/a/macros/cprindia.org/s/AKfycbzgfCVNciFRpcpo8P7joP1wTeymj9haAQnNEkNJJ2fQ4FBXEco/exec'
browser = webdriver.PhantomJS()
browser.get(link)
pass1 = browser.find_element_by_xpath("/html/body/div[2]/table[2]/tbody/tr[1]/td/div/div/div[2]/div[2]")
pass1.click()
time.sleep(30)
I am getting this error: NoSuchElementException.
There are two iframes present on the page, so you need to switch to those iframes first and then click on the element.
You can also apply an explicit wait on the element so that the script waits until the element is visible on the page.
You can do it like:
browser = webdriver.PhantomJS()
browser.get(link)
browser.switch_to.frame(browser.find_element_by_id('sandboxFrame'))
browser.switch_to.frame(browser.find_element_by_id('userHtmlFrame'))
WebDriverWait(browser, 20).until(EC.presence_of_element_located((By.XPATH, "//div[contains(@class,'charts-custom-button-collapse-left')]//div"))).click()
Note: You have to add the following imports:
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
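Once inside the inner frame, the table markup shows up in browser.page_source and can be parsed as usual. A minimal sketch, assuming a plain <table> element inside the frame:
from bs4 import BeautifulSoup
# After switching into userHtmlFrame, its content is in page_source.
soup = BeautifulSoup(browser.page_source, 'html.parser')
table = soup.find('table')
if table is not None:
    for row in table.find_all('tr'):
        print([cell.get_text(strip=True) for cell in row.find_all(['td', 'th'])])
# Switch back before touching elements outside the frames.
browser.switch_to.default_content()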