BeautifulSoup - Saving scraped data into rows and columns - Python

I've just started to get into web scraping using Python and I'm slowly making progress. I hope someone can help me out.
I'm trying to scrape all the aircraft on the Icelandic aircraft register. I've written a script that pulls all the data in from the table and prints it to the screen, as shown here:
from bs4 import BeautifulSoup
import requests
import pandas as pd
url = "https://www.icetra.is/aviation/aircraft/register/"
page = requests.get(url)
soup = BeautifulSoup(page.text, 'html.parser')
aircraft = soup.findAll('tr')
for ac in aircraft:
    print(ac.get_text())
What I would like to be able to do is save it to a CSV file with rows and columns. My guess would be that I need to have each of the columns as a variable and read each row of data into the relevant column.
Regards,
Mark

You can use DataFrame.to_csv() from pandas. Here's an example:
from bs4 import BeautifulSoup
import requests
import pandas as pd
url = "https://www.icetra.is/aviation/aircraft/register/"
page = requests.get(url)
soup = BeautifulSoup(page.text, 'html.parser')
aircraft = soup.findAll('tr')
aircrafts = [ac.get_text() for ac in aircraft]
df = pd.DataFrame({"Aircrafts": aircrafts})
df.to_csv("aircrafts.csv")
Edit: I've noticed that soup.findAll('tr') might be getting more information than you wanted; in this case it's getting the text from the whole row. You might want to use ac.stripped_strings (see the documentation) to get each string from each column.
Edit 2: You should try pd.read_html() to read this table; there's a sketch of that after the code below. However, I tried fixing my last code and got this solution:
from bs4 import BeautifulSoup
import requests
import pandas as pd
url = "https://www.icetra.is/aviation/aircraft/register/"
page = requests.get(url)
soup = BeautifulSoup(page.text, 'html.parser')
aircraft = soup.findAll('tr')
rows = [list(ac.stripped_strings) for ac in aircraft]
df = pd.DataFrame.from_records(rows)
df.columns = df.iloc[0]
df.drop(index=0, inplace=True)
df.to_csv("aircrafts.csv", index=False)

Related

Read table from Web using Python

I'm new to Python and am working to extract data from the website https://www.screener.in/company/ABB/consolidated/, in particular the last table, which is the Shareholding Pattern.
I'm using the BeautifulSoup library for this, but I do not know how to go about it.
My code snippet so far is below. I am failing to pick the right table because the page has multiple tables that all share common classes and IDs, which makes it difficult to filter for the one table I want.
import requests
import urllib.request
from bs4 import BeautifulSoup
url = "https://www.screener.in/company/ABB/consolidated/"
r = requests.get(url)
print(r.status_code)
html_content = r.text
soup = BeautifulSoup(html_content, "html.parser")
# print(soup)
# data_table = soup.find('table', class_ = "data-table")
# print(data_table)
table_needed = soup.find("<h2>ShareholdingPattern</h2>")
# sub = table_needed.contents[0]
print(table_needed)
Just use requests and pandas. Grab the last table and dump it to a .csv file.
Here's how:
import pandas as pd
import requests
# read_html returns a list of DataFrames, one per table found on the page
df = pd.read_html(
    requests.get("https://www.screener.in/company/ABB/consolidated/").text,
    flavor="bs4",
)
df[-1].to_csv("last_table.csv", index=False)
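If you'd rather pick the table by its content than by its position, pd.read_html also takes a match parameter that keeps only tables whose text matches a regex. A sketch, assuming the word "Shareholding" actually appears inside the target table (if it only appears in the heading above it, stick with indexing):
import pandas as pd
import requests
html = requests.get("https://www.screener.in/company/ABB/consolidated/").text
# match= keeps only tables whose text matches the given pattern
dfs = pd.read_html(html, match="Shareholding", flavor="bs4")
dfs[0].to_csv("shareholding.csv", index=False)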

BeautifulSoup to CSV

I have set up BeautifulSoup to find a specific class on two webpages.
I would like to know how to write each URL's result to a unique cell in one CSV.
Also, is there a limit to the number of URLs I can read? I would like to expand this to about 200 URLs once I get this working.
The class is always the same, and I don't need any formatting, just the raw HTML in one cell per URL.
Thanks for any ideas.
from bs4 import BeautifulSoup
import requests
urls = ['https://www.ozbargain.com.au/','https://www.ozbargain.com.au/forum']
for u in urls:
    response = requests.get(u)
    data = response.text
    soup = BeautifulSoup(data, 'lxml')
    soup.find('div', class_="block")
Use pandas to work with tabular data: pd.DataFrame to create a table, and DataFrame.to_csv to save the table as a CSV (you might also check out the documentation; there's a short append-mode sketch after the code below).
That's basically it.
import requests
import pandas as pd
from bs4 import BeautifulSoup
def func(urls):
    for url in urls:
        data = requests.get(url).text
        soup = BeautifulSoup(data, 'lxml')
        yield {
            "url": url,
            "raw_html": soup.find('div', class_="block"),
        }
urls = ['https://www.ozbargain.com.au/','https://www.ozbargain.com.au/forum']
data = func(urls)
table = pd.DataFrame(data)
table.to_csv("output.csv", index=False)
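As for the append mode mentioned above, a minimal sketch for adding more URLs later without rewriting the whole file (the extra URL is just an example; header=False stops the header row from being written twice):
more = pd.DataFrame(func(['https://www.ozbargain.com.au/deals']))
more.to_csv("output.csv", mode="a", header=False, index=False)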

Beautifulsoup object does not contain full table from webpage, instead grabs first 100 rows

I am attempting to scrape tables from spotrac.com and save the data to a pandas dataframe. For whatever reason, if the table I am scraping is over 100 rows, the BeautifulSoup object only appears to grab the first 100 rows of the table.
If you run my code below, you'll see that the resulting dataframe has only 100 rows and ends with "David Montgomery." If you visit the webpage (https://www.spotrac.com/nfl/rankings/2019/base/running-back/) and Ctrl+F "David Montgomery", you'll see that there are additional rows.
If you change the URL in the get line of the code to "https://www.spotrac.com/nfl/rankings/2019/base/wide-receiver/", you'll see that the same thing happens. Only the first 100 rows are included in the BeautifulSoup object and in the dataframe.
import pandas as pd
import requests, lxml.html
from bs4 import BeautifulSoup
# Begin requests session
with requests.session() as s:
    # Get page
    r = s.get('https://www.spotrac.com/nfl/rankings/2019/base/running-back/')
    # Get page content, find first table, and save to df
    soup = BeautifulSoup(r.content, 'lxml')
    table = soup.find_all('table')[0]
    df_list = pd.read_html(str(table))
    df = df_list[0]
I have read that changing the parser can help. I have tried using different parsers by replacing the BeautifulSoup object code with the following:
soup = BeautifulSoup(r.content,'html5lib')
soup = BeautifulSoup(r.content,'html.parser')
Neither of these changes worked. I have run "pip install html5lib" and "pip install lxml" and confirmed that both were already installed.
This page uses JavaScript to load the extra data.
In DevTools in Firefox/Chrome you can see that it sends a POST request with the extra form data {'ajax': True, 'mobile': False}:
import pandas as pd
import requests, lxml.html
from bs4 import BeautifulSoup
with requests.session() as s:
    r = s.post('https://www.spotrac.com/nfl/rankings/2019/base/running-back/', data={'ajax': True, 'mobile': False})
    # Get page content, find first table, and save to df
    soup = BeautifulSoup(r.content, 'lxml')
    table = soup.find_all('table')[0]
    df_list = pd.read_html(str(table))
    df = df_list[0]
    print(df)
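As a side note, BeautifulSoup isn't strictly needed at that point; pd.read_html can parse the POST response directly. A sketch under the same assumption about the form data:
import pandas as pd
import requests
# Same POST request; read_html picks the <table> out of the returned HTML
r = requests.post('https://www.spotrac.com/nfl/rankings/2019/base/running-back/', data={'ajax': True, 'mobile': False})
df = pd.read_html(r.text)[0]
print(df)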
I suggest you use requests-html:
import pandas as pd
from bs4 import BeautifulSoup
from requests_html import HTMLSession
if __name__ == "__main__":
    # Begin requests session
    s = HTMLSession()
    # Get page
    r = s.get('https://www.spotrac.com/nfl/rankings/2019/base/running-back/')
    # Render the JavaScript on the page
    r.html.render()
    # Get page content, find first table, and save to df
    soup = BeautifulSoup(r.html.html, 'lxml')
    table = soup.find_all('table')[0]
    df_list = pd.read_html(str(table))
    df = df_list[0]
Then you will get all 140 rows. (Note that render() runs a headless Chromium browser, which requests-html downloads on first use, so the first run can take a while.)

How can I pull webpage data into my DataFrame by referencing a specific HTML class or id using pandas read_html?

I'm trying to pull the data from the table at this site and save it in a CSV with the column 'ticker' included. Right now my code is this:
import requests
import pandas as pd
url = 'https://www.biopharmcatalyst.com/biotech-stocks/company-pipeline-database#marketCap=mid|stages=approved,crl'
html = requests.get(url).content
df_list = pd.read_html(html)
df = df_list[0]
print (df)
df.to_csv('my data.csv')
and it results in a file that looks like this.
I want to have the 'ticker' column in my CSV file with the corresponding ticker listed for each company. The ticker is in the HTML here (class="ticker--small"). The output should look like this.
I'm totally stuck on this. I've tried doing it in BeautifulSoup too but I can't get it working. Any help would be greatly appreciated.
The page has multiple tables; use BeautifulSoup to extract them and loop over them to write the CSV.
from bs4 import BeautifulSoup
import requests, lxml
import pandas as pd
url = 'https://www.biopharmcatalyst.com/biotech-stocks/company-pipeline-database#marketCap=mid|stages=approved,crl'
html = requests.get(url).text
soup = BeautifulSoup(html, 'lxml')
tables = soup.findAll('table')
for table in tables:
    df = pd.read_html(str(table))[0]
    with open('my_data.csv', 'a+') as f:
        df.to_csv(f)
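One caveat with this loop: opening the file in append mode writes a fresh header row for every table. A sketch of an alternative that writes the file once (assuming you want all the tables in one CSV; pd.concat fills missing columns with NaN if the tables differ):
dfs = [pd.read_html(str(t))[0] for t in tables]
pd.concat(dfs, ignore_index=True).to_csv('my_data.csv', index=False)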

Download table from wunderground with beautiful soup

I would like to download the Weather History & Observations table from the following link:
https://www.wunderground.com/history/airport/HDY/2011/1/1/CustomHistory.html?dayend=31&monthend=12&yearend=2011&req_city=&req_state=&req_statename=&reqdb.zip=&reqdb.magic=&reqdb.wmo=
This is the code I have so far:
import pandas as pd
from bs4 import BeautifulSoup
import requests
link = 'https://www.wunderground.com/history/airport/HDY/2011/1/1/CustomHistory.html?dayend=31&monthend=12&yearend=2011&req_city=&req_state=&req_statename=&reqdb.zip=&reqdb.magic=&reqdb.wmo='
resp = requests.get(link)
c = resp.text
soup = BeautifulSoup(c, 'html.parser')
I would like to know what is the next step to access the table info at the bottom of the page (assuming this is a good website format to allow this to happen).
Thank you
You can use find to locate the table and find_all to collect its rows:
table = soup.find('table', class_="responsive obs-table daily")
rows = table.find_all('tr')
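From there, a minimal sketch that turns the rows into a DataFrame and saves it, assuming the first row holds the column headers (the class name comes from the answer above; the output filename is made up):
import pandas as pd
# Pull the text of every cell in every row
data = [[cell.get_text(strip=True) for cell in row.find_all(['th', 'td'])]
        for row in rows]
# Treat the first row as the header
df = pd.DataFrame(data[1:], columns=data[0])
df.to_csv("weather_2011.csv", index=False)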
