I'd like to parse the table data from the following site:
Pricing data
and create a dataframe with all of the table values (vCPU, Memory, Storage, Price). However, with the following code I can't seem to find the table on the page. Can someone help me figure out how to parse out the values?
Using pd.read_html, I get an error saying that no tables were found.
import pandas as pd
from bs4 import BeautifulSoup
import requests

url = "https://aws.amazon.com/ec2/pricing/on-demand/"
r = requests.get(url)
html_content = r.text
soup = BeautifulSoup(html_content, 'html.parser')

tables = soup.find_all('table')  # comes back as an empty list
df = pd.read_html(url)           # raises ValueError: No tables found
If you're having trouble because of dynamic content, a good workaround is selenium: it simulates the browser experience, so you don't have to worry about managing cookies and the other problems that come with dynamic web content. I was able to scrape the page with the following:
import pandas as pd
from bs4 import BeautifulSoup
from selenium import webdriver
from time import sleep

driver = webdriver.Firefox()
driver.get('https://aws.amazon.com/ec2/pricing/on-demand/')
sleep(3)  # give the page's JavaScript time to render the tables

html = driver.page_source
soup = BeautifulSoup(html, 'lxml')
driver.close()

tables = soup.find_all('table')
print(tables)
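Since the original goal was a dataframe, you can hand the rendered HTML straight to pandas. A minimal sketch on top of the soup above; which index in the resulting list holds the vCPU/Memory/Storage/Price table is an assumption you will need to check:

# pd.read_html parses every <table> in the rendered markup into a DataFrame.
dfs = pd.read_html(str(soup))
print(len(dfs))  # how many tables the rendered page actually has
df = dfs[0]      # index 0 is a guess; inspect the list to find the pricing table
print(df.head())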
Beginner question: I'm attempting to scrape data from a table, but I can't seem to locate it. I've tried using both the class and the id to identify it, but my result is 0. The code and output are below.
# Import necessary packages
from bs4 import BeautifulSoup
import requests
import pandas as pd

# Site URL
url = "https://fbref.com/en/comps/9/stats/Premier-League-Stats"

# Make a GET request to fetch the raw HTML content
html_content = requests.get(url).text

# Parse HTML code for the entire site
soup = BeautifulSoup(html_content, "lxml")
# print(soup.prettify())  # print the parsed HTML

gdp = soup.find_all("table", attrs={"id": "stats_standard"})
print("Number of tables on site:", len(gdp))
Output - 'Number of tables on site: 0'
I suggest you use selenium for this kind of scraping; it performs very reliably. This code should work for you:
from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

option = Options()
option.add_argument('--headless')  # run Chrome without opening a window

url = 'https://fbref.com/en/comps/9/stats/Premier-League-Stats'
driver = webdriver.Chrome(options=option)
driver.get(url)

bs = BeautifulSoup(driver.page_source, 'html.parser')
gdp = bs.find_all('table', {'id': 'stats_standard'})
driver.quit()

print("Number of tables on site:", len(gdp))
Output
Number of tables on site: 1
Can you find the table(s) without using attrs={"id": "stats_standard"}?
I have checked and indeed I cannot find any table whose ID is stats_standard (but there is one with ID stats_standard_sh, for example). So I guess you might be using the wrong ID.
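For what it's worth, fbref (like other sports-reference sites) has been known to ship some of its tables inside HTML comments, which BeautifulSoup will not see as markup until the page's JavaScript un-comments them; that would explain why selenium finds the id and plain requests does not. If that is what is happening here (an assumption worth checking against the raw HTML), you can dig the tables out of the comments without a browser, roughly like this:

from bs4 import BeautifulSoup, Comment
import requests

url = "https://fbref.com/en/comps/9/stats/Premier-League-Stats"
soup = BeautifulSoup(requests.get(url).text, "lxml")

# Re-parse every HTML comment and collect any tables hidden inside it.
hidden_tables = []
for comment in soup.find_all(string=lambda text: isinstance(text, Comment)):
    hidden_tables.extend(BeautifulSoup(comment, "lxml").find_all("table"))

print("Tables found inside comments:", len(hidden_tables))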
I know that this code works for other websites that end in .com.
However, I noticed that it doesn't work when I try to parse websites that end in .kr.
Can somebody help me find out why this is happening, and suggest an alternate way to parse these types of websites?
Following is my code.
import requests
from bs4 import BeautifulSoup
URL = 'https://everytime.kr/#nN4K1XC0weHnnM9VB5Qe'
page = requests.get(URL)
soup = BeautifulSoup(page.content, 'html.parser')
results = soup.find(id='container')
print(results)
The URL here is a link to my timetable. I need to parse this website so that I can easily collect the information for my subjects and the data relevant to each of them (duration, location, professor's name, etc.).
Thanks
The website is serving dynamic content, so the container you are looking for comes back empty; you may use selenium.
Example
from selenium import webdriver
from bs4 import BeautifulSoup
import time

driver = webdriver.Chrome(executable_path=r'C:\Program Files\ChromeDriver\chromedriver.exe')
url = 'https://everytime.kr/#nN4K1XC0weHnnM9VB5Qe'
driver.get(url)
time.sleep(5)  # wait for the JavaScript to populate the page

soup = BeautifulSoup(driver.page_source, 'html.parser')
results = soup.find(id='container')
print(results)
driver.close()
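A fixed sleep is a race against the page; if five seconds turns out to be flaky, an explicit wait is sturdier. A sketch, assuming the id='container' element from the question is what the page eventually renders (the chromedriver path is the one from the answer above):

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome(executable_path=r'C:\Program Files\ChromeDriver\chromedriver.exe')
driver.get('https://everytime.kr/#nN4K1XC0weHnnM9VB5Qe')

# Block until the container element exists, up to a 10-second timeout.
WebDriverWait(driver, 10).until(
    EC.presence_of_element_located((By.ID, 'container'))
)
html = driver.page_source
driver.quit()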
This is the page I want to parse: https://fbref.com/en/comps/9/gca/Premier-League-Stats
It has 2 tables. I am trying to get information from the second table, but it keeps displaying the first table every time I run this code.
from bs4 import BeautifulSoup
import requests
source = requests.get('https://fbref.com/en/comps/9/gca/Premier-League-Stats').text
soup = BeautifulSoup(source, 'lxml')
stattable = soup.find('table', class_= 'min_width sortable stats_table min_width shade_zero')[1]
print(stattable)
min_width sortable stats_table min_width shade_zero is the class of the 'second' table.
It does not give me an error, but it doesn't return anything either; it's null.
Since the second table is dynamically generated, why not combine selenium, BeautifulSoup, and pandas to get what you want?
For example:
import time
import pandas as pd
from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

options = Options()
options.headless = False  # flip to True if you don't want a browser window

driver = webdriver.Chrome(options=options)
driver.get("https://fbref.com/en/comps/9/gca/Premier-League-Stats")
time.sleep(2)  # let the JavaScript build the second table

# Grab only the wrapper div that holds the GCA table.
soup = BeautifulSoup(driver.page_source, "html.parser").find("div", {"id": "div_stats_gca"})
driver.close()

df = pd.read_html(str(soup), skiprows=[0, 1])
df = pd.concat(df)
df.to_csv("data.csv", index=False)
This spits out a .csv file that, well, looks like that table you want. :)
The HTML you see when you inspect an element is generated using JavaScript; the same classes are not available in the raw HTML that you get with the script.
I disabled JavaScript for this site and saw that the table is not visible.
You can try something like Selenium. There is good information in this question.
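A quick way to confirm this kind of diagnosis yourself is to search the raw server response for the table's id before reaching for a browser. A small sketch; the div_stats_gca id is taken from the other answer on this page:

import requests

raw = requests.get('https://fbref.com/en/comps/9/gca/Premier-League-Stats').text

# If the id is absent from the raw response (or only appears inside an
# HTML comment), the table you see in the inspector is built client-side.
print('div_stats_gca' in raw)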
from bs4 import BeautifulSoup
import requests

url = "https://www.calculator.net/currency-calculator.html"

# Make a GET request to fetch the raw HTML content
html_content = requests.get(url).text

# Parse the HTML content
soup = BeautifulSoup(html_content, 'html5lib')
print(soup.prettify())  # print the parsed HTML

conv_table = soup.find("table", attrs={"class": "cinfoT "})
conv_data = conv_table.tbody.find_all("tr")
I have written the above script to get the table listed on this particular website, but when I run it, conv_table comes back as a None type object.
If you visit the website, basically I want to extract the second, bigger table; its class name contains "cinfoT ". I have also checked that there are blank spaces in the class name.
Please help me out.
Thanks in advance.
It is because this data is loaded by JavaScript; requests will only give you the plain HTML file. Try selenium.
from selenium import webdriver
from bs4 import BeautifulSoup

DRIVER_PATH = "Your selenium chrome driver path"
url = 'https://www.calculator.net/currency-calculator.html'

driver = webdriver.Chrome(executable_path=DRIVER_PATH)
driver.get(url)

html_soup = BeautifulSoup(driver.page_source, 'html.parser')
# Keep the trailing space in 'cinfoT '; it is part of the class
# attribute as it appears on the page.
table = html_soup.find_all('table', class_='cinfoT ')
driver.quit()

print(table[0].tbody)
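If you then want the rows as plain Python values rather than raw markup, a small follow-up sketch on the same result (the cell layout is an assumption to verify against the page):

rows = []
for tr in table[0].tbody.find_all('tr'):
    # Pull the visible text out of every cell in the row.
    cells = [cell.get_text(strip=True) for cell in tr.find_all(['td', 'th'])]
    if cells:
        rows.append(cells)

print(rows[:5])  # first few rows of the currency table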
I would like to get the Weather History & Observations table from the following link:
https://www.wunderground.com/history/airport/HDY/2011/1/1/CustomHistory.html?dayend=31&monthend=12&yearend=2011&req_city=&req_state=&req_statename=&reqdb.zip=&reqdb.magic=&reqdb.wmo=
This is the code I have so far:
import pandas as pd
from bs4 import BeautifulSoup
import requests

link = 'https://www.wunderground.com/history/airport/HDY/2011/1/1/CustomHistory.html?dayend=31&monthend=12&yearend=2011&req_city=&req_state=&req_statename=&reqdb.zip=&reqdb.magic=&reqdb.wmo='
resp = requests.get(link)
c = resp.text
soup = BeautifulSoup(c, 'html.parser')  # name a parser explicitly to avoid the warning
I would like to know what the next step is to access the table info at the bottom of the page (assuming the website's format allows this).
Thank you
You can use find to get the table and find_all to collect its rows:
table = soup.find('table', class_="responsive obs-table daily")
rows = table.find_all('tr')
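From there, a minimal sketch to turn the rows into a DataFrame using the pandas import the question already has; treating the first row as the header is an assumption to check against the actual table:

data = []
for row in rows:
    # Each row's cells may be <th> (header) or <td> (values).
    data.append([cell.get_text(strip=True) for cell in row.find_all(['td', 'th'])])

df = pd.DataFrame(data[1:], columns=data[0])  # assumes row 0 is the header row
print(df.head())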