Download table from wunderground with beautiful soup - python

I would like to download the Weather History & Observations table from the following link:
https://www.wunderground.com/history/airport/HDY/2011/1/1/CustomHistory.html?dayend=31&monthend=12&yearend=2011&req_city=&req_state=&req_statename=&reqdb.zip=&reqdb.magic=&reqdb.wmo=
This is the code I have so far:
import pandas as pd
from bs4 import BeautifulSoup
import requests
link = 'https://www.wunderground.com/history/airport/HDY/2011/1/1/CustomHistory.html?dayend=31&monthend=12&yearend=2011&req_city=&req_state=&req_statename=&reqdb.zip=&reqdb.magic=&reqdb.wmo='
resp = requests.get(link)
c = resp.text
soup = BeautifulSoup(c, 'html.parser')
I would like to know what the next step is to access the table info at the bottom of the page (assuming the page's format makes this possible).
Thank you

You can use find() to grab the table, then find_all() to collect its rows:
table = soup.find('table', class_="responsive obs-table daily")
rows = table.find_all('tr')
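From there, a minimal sketch for turning those rows into a DataFrame could look like this (it assumes the page still serves static HTML and that the first tr is the header row; both are assumptions worth checking):
import pandas as pd
import requests
from bs4 import BeautifulSoup

link = 'https://www.wunderground.com/history/airport/HDY/2011/1/1/CustomHistory.html?dayend=31&monthend=12&yearend=2011&req_city=&req_state=&req_statename=&reqdb.zip=&reqdb.magic=&reqdb.wmo='
resp = requests.get(link)
soup = BeautifulSoup(resp.text, 'html.parser')

table = soup.find('table', class_="responsive obs-table daily")
rows = table.find_all('tr')

# Each row's cells: header cells are <th>, data cells are <td>
data = [[cell.get_text(strip=True) for cell in row.find_all(['th', 'td'])]
        for row in rows]

# Assumes the first row holds the column names
df = pd.DataFrame(data[1:], columns=data[0])
print(df.head())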

Related

BeautifulSoup - Saving scraped data in to rows and columns

I've just started to get into web scraping using Python and I'm slowly making progress. I hope someone can help me out.
I'm trying to scrape all the aircraft on the Icelandic aircraft register. I've written a script that pulls all the data in from the table and prints it to the screen, as shown here:
from bs4 import BeautifulSoup
import requests
import pandas as pd
url = "https://www.icetra.is/aviation/aircraft/register/"
page = requests.get(url)
soup = BeautifulSoup(page.text, 'html.parser')
aircraft = soup.findAll('tr')
for ac in aircraft:
    print(ac.get_text())
What I would like to be able to do is save it to a CSV file with rows and columns. My guess would be that I need to have each of the columns as a variable and read each row of data into the relevant column.
Regards,
Mark
You can use DataFrame.to_csv() from pandas. Here's an example:
from bs4 import BeautifulSoup
import requests
import pandas as pd
url = "https://www.icetra.is/aviation/aircraft/register/"
page = requests.get(url)
soup = BeautifulSoup(page.text, 'html.parser')
aircraft = soup.findAll('tr')
aircrafts = [ac.get_text() for ac in aircraft]
df = pd.DataFrame({"Aircrafts": aircrafts})
df.to_csv("aircrafts.csv")
Edit: I've noticed that soup.findAll('tr') might be getting more information than you wanted; in this case it's getting the text from the whole row. You might want to use ac.stripped_strings (documentation) to get each column's string individually.
Edit 2: You should try pd.read_html() to read this table. However, I also tried fixing my earlier code and got this solution:
from bs4 import BeautifulSoup
import requests
import pandas as pd
url = "https://www.icetra.is/aviation/aircraft/register/"
page = requests.get(url)
soup = BeautifulSoup(page.text, 'html.parser')
aircraft = soup.findAll('tr')
rows = [list(ac.stripped_strings) for ac in aircraft]
df = pd.DataFrame.from_records(rows)
df.columns = df.iloc[0]
df.drop(index=0, inplace=True)
df.to_csv("aircrafts.csv", index=False)

How to extract table from NHC website in Python?

Here,
https://www.nhc.noaa.gov/gis/
There is a table under the "Data & Products" section. I want to extract the table and save it to a CSV file. I wrote this basic code:
from bs4 import BeautifulSoup
import requests
page = requests.get("https://www.nhc.noaa.gov/gis/")
soup = BeautifulSoup(page.content, 'html.parser')
print(soup)
I only know the basics of scraping. Please, guide me from here. Thanks!
You can use pandas' read_html():
import pandas as pd
url = 'https://www.nhc.noaa.gov/gis/'
df = pd.read_html(url)[0]
# create csv file
df.to_csv("mycsv.csv")
It is hard to know exactly what you need, but I guess this is what you want:
from bs4 import BeautifulSoup
import requests
r = requests.get('https://www.nhc.noaa.gov/gis/')
soup = BeautifulSoup(r.content, 'html.parser')
for a in soup.find_all('a'):
    if a.get('href'):
        if '.' in a.get('href').split('/')[-1] \
                and 'html' not in a.get('href') \
                and '.php' not in a.get('href') \
                and 'http' not in a.get('href') \
                and 'mailto' not in a.get('href'):
            print('https://www.nhc.noaa.gov' + a.get('href'))
prints:
https://www.nhc.noaa.gov/gis/examples/al112017_5day_020.zip
https://www.nhc.noaa.gov/gis/examples/AL112017_020adv_CONE.kmz
https://www.nhc.noaa.gov/gis/examples/AL112017_020adv_TRACK.kmz
https://www.nhc.noaa.gov/gis/examples/AL112017_020adv_WW.kmz
https://www.nhc.noaa.govforecast/archive/al092020_5day_latest.zip
https://www.nhc.noaa.gov/storm_graphics/api/AL092020_CONE_latest.kmz
https://www.nhc.noaa.gov/storm_graphics/api/AL092020_TRACK_latest.kmz
https://www.nhc.noaa.gov/storm_graphics/api/AL092020_WW_latest.kmz
https://www.nhc.noaa.govforecast/archive/al102020_5day_latest.zip
https://www.nhc.noaa.gov/storm_graphics/api/AL102020_CONE_latest.kmz
https://www.nhc.noaa.gov/storm_graphics/api/AL102020_TRACK_latest.kmz
https://www.nhc.noaa.gov/storm_graphics/api/AL102020_WW_latest.kmz
https://www.nhc.noaa.gov/gis/examples/al112017_fcst_020.zip
https://www.nhc.noaa.gov/gis/examples/AL112017_initialradii_020adv.kmz
https://www.nhc.noaa.gov/gis/examples/AL112017_forecastradii_020adv.kmz
https://www.nhc.noaa.govforecast/archive/al092020_fcst_latest.zip
https://www.nhc.noaa.gov/storm_graphics/api/AL092020_initialradii_latest.kmz
https://www.nhc.noaa.gov/storm_graphics/api/AL092020_forecastradii_latest.kmz
https://www.nhc.noaa.govforecast/archive/al102020_fcst_latest.zip
.. and so on...
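If the goal is to actually fetch those files, a minimal follow-up sketch (using the first URL from the output above; the local filename is just derived from the URL) could be:
import requests

url = 'https://www.nhc.noaa.gov/gis/examples/al112017_5day_020.zip'
r = requests.get(url)
# Save the ZIP under its original filename
with open(url.split('/')[-1], 'wb') as f:
    f.write(r.content)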

Beautifulsoup object does not contain full table from webpage, instead grabs first 100 rows

I am attempting to scrape tables from spotrac.com and save the data to a pandas dataframe. For whatever reason, if the table I am scraping is over 100 rows, the BeautifulSoup object only appears to grab the first 100 rows of the table. If you run my code below, you'll see that the resulting dataframe has only 100 rows and ends with "David Montgomery." If you visit the webpage (https://www.spotrac.com/nfl/rankings/2019/base/running-back/) and Ctrl+F "David Montgomery", you'll see that there are additional rows. If you change the URL in the s.get() line of the code to "https://www.spotrac.com/nfl/rankings/2019/base/wide-receiver/", you'll see that the same thing happens: only the first 100 rows are included in the BeautifulSoup object and in the dataframe.
import pandas as pd
import requests, lxml.html
from bs4 import BeautifulSoup
# Begin requests session
with requests.session() as s:
    # Get page
    r = s.get('https://www.spotrac.com/nfl/rankings/2019/base/running-back/')

# Get page content, find first table, and save to df
soup = BeautifulSoup(r.content, 'lxml')
table = soup.find_all('table')[0]
df_list = pd.read_html(str(table))
df = df_list[0]
I have read that changing the parser can help. I have tried using different parsers by replacing the BeautifulSoup object code with the following:
soup = BeautifulSoup(r.content,'html5lib')
soup = BeautifulSoup(r.content,'html.parser')
Neither of these changes worked. I have run "pip install html5lib" and "pip install lxml" and confirmed that both were already installed.
This page uses JavaScript to load extra data.
In DevTools in Firefox/Chrome you can see that it sends a POST request with the extra payload {'ajax': True, 'mobile': False}:
import pandas as pd
import requests, lxml.html
from bs4 import BeautifulSoup
with requests.session() as s:
    r = s.post('https://www.spotrac.com/nfl/rankings/2019/base/running-back/',
               data={'ajax': True, 'mobile': False})

# Get page content, find first table, and save to df
soup = BeautifulSoup(r.content, 'lxml')
table = soup.find_all('table')[0]
df_list = pd.read_html(str(table))
df = df_list[0]
print(df)
I suggest you use requests-html:
import pandas as pd
from bs4 import BeautifulSoup
from requests_html import HTMLSession
if __name__ == "__main__":
# Begin requests session
s = HTMLSession()
# Get page
r = s.get('https://www.spotrac.com/nfl/rankings/2019/base/running-back/')
r.html.render()
# Get page content, find first table, and save to df
soup = BeautifulSoup(r.html.html, 'lxml')
table = soup.find_all('table')[0]
df_list = pd.read_html(str(table))
df = df_list[0]
Then you will get all 140 rows. (Note that r.html.render() downloads a Chromium browser the first time it runs.)

Python BeautifulSoup - trouble parsing table from webpage

I'd like to parse the table data from the following site:
Pricing data and create a dataframe with all of the table values (vCPU, Memory, Storage, Price). However, with the following code, I can't seem to find the table on the page. Can someone help me figure out how to parse out the values?
Using pd.read_html, an error shows up saying that no tables are found.
import pandas as pd
from bs4 import BeautifulSoup
import requests
import csv
url = "https://aws.amazon.com/ec2/pricing/on-demand/"
r = requests.get(url)
html_content = r.text
soup = BeautifulSoup(html_content, 'html.parser')
data=[]
tables = soup.find_all('table')
df = pd.read_html(url)
If you're having trouble because of dynamic content, a good workaround is Selenium: it simulates the browser experience, so you don't have to worry about managing cookies and other problems that come with dynamic web content. I was able to scrape the page with the following:
import pandas as pd
from bs4 import BeautifulSoup
from selenium import webdriver
from time import sleep
driver = webdriver.Firefox()
driver.get('https://aws.amazon.com/ec2/pricing/on-demand/')
sleep(3)
html = driver.page_source
soup = BeautifulSoup(html,'lxml')
driver.close()
data=[]
tables = soup.find_all('table')
print(tables)
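To get from the printed tables to the dataframe the question asks for, one option (a sketch, assuming the pricing table is the first entry in tables) is to hand the table's HTML to pandas:
import pandas as pd

# Parse the first scraped <table> into a DataFrame
df = pd.read_html(str(tables[0]))[0]
print(df.head())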

Scraping a table appearing on click with python

I want to scrape information from this page.
Specifically, I want to scrape the table which appears when you click "View all" under the "TOP 10 HOLDINGS" (you have to scroll down on the page a bit).
I am new to webscraping, and have tried using BeautifulSoup to do this. However, there seems to be an issue because of the "onclick" function I need to take into account. In other words: the HTML code I scrape directly from the page doesn't include the table I want to obtain.
I am a bit confused about my next step: should I use something like selenium or can I deal with the issue in an easier/more efficient way?
Thanks.
My current code:
from bs4 import BeautifulSoup
import requests
Soup = BeautifulSoup
my_url = 'http://www.etf.com/SHE'
page = requests.get(my_url)
htmltxt = page.text
soup = Soup(htmltxt, "html.parser")
print(soup)
You can get a JSON response from the API: http://www.etf.com/view_all/holdings/SHE. The table you're looking for is located under the 'view_all' key.
import requests
from bs4 import BeautifulSoup as Soup
url = 'http://www.etf.com/SHE'
api = "http://www.etf.com/view_all/holdings/SHE"
headers = {'X-Requested-With':'XMLHttpRequest', 'Referer':url}
page = requests.get(api, headers=headers)
htmltxt = page.json()['view_all']
soup = Soup(htmltxt, "html.parser")
data = [[td.text for td in tr.find_all('td')] for tr in soup.find_all('tr')]
print('\n'.join(': '.join(row) for row in data))
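If you'd rather end up with a DataFrame than a list of lists, a sketch along the same lines (it assumes the 'view_all' fragment contains the holdings data as its first table; the output filename is illustrative):
import pandas as pd
import requests

url = 'http://www.etf.com/SHE'
api = "http://www.etf.com/view_all/holdings/SHE"
headers = {'X-Requested-With': 'XMLHttpRequest', 'Referer': url}

htmltxt = requests.get(api, headers=headers).json()['view_all']
# Parse the HTML fragment returned by the API into a DataFrame
df = pd.read_html(htmltxt)[0]
df.to_csv("she_holdings.csv", index=False)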
