I am trying to scrape the table "List of chemical elements" from https://en.wikipedia.org/wiki/List_of_chemical_elements
I want to store the table data in a pandas DataFrame so that I can convert it into a CSV file. So far I have scraped the headers of the table and stored them in a DataFrame, and I can retrieve each individual row of data from the table. However, I am having trouble storing the row data in the DataFrame. Below is what I have so far:
from bs4 import BeautifulSoup
import requests as r
import pandas as pd
response = r.get('https://en.wikipedia.org/wiki/List_of_chemical_elements')
wiki_text = response.text
soup = BeautifulSoup(wiki_text, 'html.parser')
table = soup.select_one('table.wikitable')
table_body = table.find('tbody')
#print(table_body)
rows = table_body.find_all('tr')
cols = [c.text.replace('\n', '') for c in rows[1].find_all('th')]
df2a = pd.DataFrame(columns = cols)
df2a
for row in rows:
    records = row.find_all('td')
    if records != []:
        records = [r.text.strip() for r in records]
        print(records)
First, find all the header cells. The header is split across two rows, so collect the cells of the first and second header rows separately:
all_columns=soup.find_all("tr",attrs={"style":"vertical-align:top"})
first_column_data=[i.get_text(strip=True) for i in all_columns[0].find_all("th")]
second_column_data=[i.get_text(strip=True) for i in all_columns[1].find_all("th")]
Since we need 16 columns, take the appropriate header cells from each row and combine them into new_lst, the column list:
new_lst=[]
new_lst.extend(second_column_data[:3])
new_lst.extend(first_column_data[1:])
Now find the row data: iterate through every tr with the class anchor, collect its td cells into a list, and append that list to main_lst:
main_lst=[]
for i in soup.find_all("tr",attrs={"class":"anchor"}):
row_data=[row.get_text(strip=True) for row in i.find_all("td")]
main_lst.append(row_data)
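The only step the snippet above leaves out, implied by the output below, is building the DataFrame from the two lists (a sketch using the names defined above):

df = pd.DataFrame(main_lst, columns=new_lst)
print(df)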
Output:
Atomic numberZ Symbol Name Origin of name[2][3] Group Period Block Standardatomicweight[a] Density[b][c] Melting point[d] Boiling point[e] Specificheatcapacity[f] Electronegativity[g] Abundancein Earth'scrust[h] Origin[i] Phase atr.t.[j]
0 1 H Hydrogen Greekelementshydro-and-gen, 'water-forming' 1 1 s-block 1.008 0.00008988 14.01 20.28 14.304 2.20 1400 primordial gas
....
Let pandas parse it for you:
import pandas as pd
df = pd.read_html('https://en.wikipedia.org/wiki/List_of_chemical_elements')[0]
df.to_csv('file.csv', index=False)
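If the page layout ever changes and index 0 stops being the right table, read_html's match parameter can pin the table down by its text instead (a sketch; the match string is an assumption based on the table's first header):

import pandas as pd

# only tables whose text matches are returned
df = pd.read_html('https://en.wikipedia.org/wiki/List_of_chemical_elements', match='Atomic number')[0]
df.to_csv('file.csv', index=False)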
I am new to coding, so take it easy on me! I recently started a pet project which scrapes data from a table and creates a CSV of the data for me. I believe I have successfully pulled the data, but trying to put it into a DataFrame returns the error "Shape of passed values is (31719, 1), indices imply (31719, 23)". I have checked the lengths of my headers and my rows and those numbers are correct, but when I try to put the data into a DataFrame it appears that only one column is pulled in. Again, I am very new to all of this, but I would appreciate any help! Code below:
from bs4 import BeautifulSoup
from pandas.core.frame import DataFrame
import requests
import pandas as pd
url = 'https://www.fangraphs.com/leaders.aspx?pos=all&stats=bat&lg=all&qual=0&type=8&season=2018&month=0&season1=2018&ind=0&page=1_1500'
page = requests.get(url)
soup = BeautifulSoup(page.content, 'html.parser')
#pulling table from HTML
Table1 = soup.find('table', id = 'LeaderBoard1_dg1_ctl00')
#finding and filling table columns
headers = []
for i in Table1.find_all('th'):
    title = i.text
    headers.append(title)
#finding and filling table rows
rows = []
for j in Table1.find_all('td'):
    data = j.text
    rows.append(data)
#filling dataframe
df = pd.DataFrame(rows, columns = headers)
#show dataframe
print(df)
You are creating a new DataFrame with 692 rows and 23 columns. However, the rows array is one-dimensional, so the shape of the passed values does not match the indices: you are passing 692 × 1 into a DataFrame that expects 692 × 23, which won't work.
If you want to create with the data you have, you should just use:
df = pd.DataFrame(rows, columns=headers[1:2])
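If instead you want the full table from the cells you already scraped, you can first chunk the flat rows list into one sublist per table row (a sketch; the 23 and the header slice are assumptions based on the error message, so check them against the page):

# group the flat cell list into chunks of 23, one chunk per table row
n_cols = 23
data_rows = [rows[i:i + n_cols] for i in range(0, len(rows), n_cols)]
df = pd.DataFrame(data_rows, columns=headers[-n_cols:])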
Alternatively, you can achieve your goal directly by using pandas.read_html, which does the BeautifulSoup processing for you:
pd.read_html(url, attrs={'id':'LeaderBoard1_dg1_ctl00'}, header=[1])[0].iloc[:-1]
attrs={'id':'LeaderBoard1_dg1_ctl00'} selects table by id
header=[1] adjusts the header row, because there are multiple header rows
.iloc[:-1] removes the table footer with pagination
Example
import pandas as pd
pd.read_html('https://www.fangraphs.com/leaders.aspx?pos=all&stats=bat&lg=all&qual=0&type=8&season=2018&month=0&season1=2018&ind=0&page=1_1500',
attrs={'id':'LeaderBoard1_dg1_ctl00'},
header=[1])[0]\
.iloc[:-1]
I am new to Pandas, web scraping, and BeautifulSoup in Python.
While learning to do some basic web scraping using requests and BeautifulSoup, I am confused by the task of assigning the 2nd and 3rd cells of each row of an HTML table to a pandas DataFrame.
Suppose I have this table:
Here is my code so far:
import pandas as pd
from bs4 import BeautifulSoup
import requests
html_data = requests.get('https://en.wikipedia.org/wiki/List_of_largest_banks').text
soup = BeautifulSoup(html_data, 'html.parser')
data = pd.DataFrame(columns=["Name", "Market Cap (US$ Billion)"])
#Navigate to the "By market capitalization" table on the webpage and find all of its rows
for row in soup.find_all('tbody')[3].find_all('tr'):
    col = row.find_all('td') #the individual cell values in this row
    for j, cell in enumerate(col):
        #Further code here
As can be seen, I want to take the 2nd and 3rd column values of each row and append them to the empty DataFrame, data, so that data contains the bank names and market cap values. How can I achieve that?
For tables I would suggest pandas:
import pandas as pd
url = 'https://en.wikipedia.org/wiki/List_of_largest_banks'
tables = pd.read_html(url)
df = tables[1]
If you prefer using BeautifulSoup, you can try this to accomplish the same:
url = 'https://en.wikipedia.org/wiki/List_of_largest_banks'
r = requests.get(url)
soup = BeautifulSoup(r.content, 'html.parser').find_all('table')
table = soup[1]
table_rows = table.find_all('tr')
table_header = [th.text.strip() for th in table_rows[0].find_all('th')]
table_data = []
for row in table_rows[1:]:
    table_data.append([td.text.strip() for td in row.find_all('td')])
df = pd.DataFrame(table_data, columns=table_header)
When needed, you can set Rank as the index with df.set_index('Rank', inplace=True).
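To end up with just the two columns the question asks for, you can slice them by position from the df built above (a sketch; the positions assume the live table's column order is Rank, Bank name, Market cap, so verify first):

data = df.iloc[:, [1, 2]]  #2nd and 3rd columns of each row
data.columns = ["Name", "Market Cap (US$ Billion)"]  #names taken from the question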
Goal: The goal of my project is to use BeautifulSoup (aka bs4) to scrape only the necessary data from an HTML file and import it into Excel. The HTML file is heavily formatted, so unfortunately I haven't been able to tailor more common solutions to my needs.
What I have tried: I have been able to parse the HTML file to the point where I am only pulling the tables I need, and I can detect every column of data and print it. For example, if there are a total of 18 columns and 3 rows of data, the code outputs 54 times, with each piece of table data going from row 1, col 1 to row 3, col 18.
My code is as follows:
from bs4 import BeautifulSoup
import csv
import pandas as pd
url =
output =
#define table error to detect only tables with extractable data
def iserror(func, *args, **kw):
    try:
        func(*args, **kw)
        return False
    except Exception:
        return True
#read the html
with open(url) as html_file:
    soup = BeautifulSoup(html_file, 'html.parser')
#table = soup.find_all('table')
all_tables = soup.find_all('table')
#print(len(table))
df = pd.DataFrame( columns=(pks_col_names))
col_list = []
table_list = []
for tnum, tables in enumerate(all_tables):
    if iserror(all_tables[tnum].tbody): #Finds table with data
        table = tables.findAll('tr')
        #Loops through rows of each data table
        for rnum, row in enumerate(table):
            table_row = table[rnum].findAll('td')
            if len(table_row)==17:
                #Loops through columns of each data table
                for col in table_row:
                    col_list.append(col.string)
            else:
                pass
    else:
        pass
Example of data output currently achieved
row 1 column 1 (first string in list)
row 1 column 2
row 1 column 3 ...
row 3 column 17
row 3 column 18 (last string in list)
The current code creates a single flat list containing the output above, but I am unable to figure out how to convert that list into a pandas DataFrame, tying each list entry to the appropriate row and column. Could anyone suggest how to do this, or how to otherwise rework my code to import the data into a DataFrame?
It's all a bit messed up: your function iserror in fact checks that there's no error (and I don't think it works at all), what you call tables are rows, and you don't need enumerate.
As you haven't provided the data, I have only made rough tests, but this is a bit cleaner:
row_list = []
for table in all_tables:
    if is_ok(table.tbody): #Finds table with data
        rows = table.findAll('tr')
        #Loops through rows of each data table
        for row in rows:
            cols = row.findAll('td')
            col_list = []
            if len(cols)==17:
                #Loops through columns of each data table
                for col in cols:
                    col_list.append(col.string)
                row_list.append(col_list)
df = pd.DataFrame(row_list, columns=pks_col_names)
Thanks everyone for the help. I was able to achieve the desired DataFrame, and the final code looks like this:
from bs4 import BeautifulSoup
import pandas as pd

url = 'insert url here'
output = 'insert output path here'

#define table error to detect only tables with extractable data
def iserror(func, *args, **kw):
    try:
        func(*args, **kw)
        return False
    except Exception:
        return True

#read the html
with open(url) as html_file:
    soup = BeautifulSoup(html_file, 'html.parser')

#All tables
all_tables = soup.findAll('table')

#Define column names
pks_col_names = ['TBD','Trade Date', 'Settle Date', 'Source', 'B/S', 'Asset Name', 'Security Description',
                 'Ticker','AccountID','Client Name','Shares','Price','Amount','Comms',
                 'Fees', 'Payout %','Payout']

row_list = []
for table in all_tables:
    if iserror(table.tbody): #Finds table with data
        rows = table.findAll('tr')
        #Loops through rows of each data table
        for row in rows:
            cols = row.findAll('td')
            col_list = []
            if len(cols)==17:
                #Loops through columns of each data table
                for col in cols:
                    col_list.append(col.string)
                row_list.append(col_list)

df = pd.DataFrame(row_list, columns=pks_col_names)
df.to_csv(output, index=False, encoding='utf-8-sig')
I'm quite new to Python and BeautifulSoup, and I've been trying to work this out for several hours...
Firstly, I want to extract all table data from below link with "general election" in the title:
https://en.wikipedia.org/wiki/Carlow%E2%80%93Kilkenny_(D%C3%A1il_constituency)
I do have another dataframe with the name of each table (e.g. "1961 general election", "1965 general election"), but I am hoping to get away with just searching for "general election" in each table to confirm it's the one I need.
I then want to get all the names that are in bold (bold indicates they won), and finally another list of "Count 1" (or sometimes "1st Pref") values in their original order, which I want to compare to the bold list. I haven't even looked at that piece yet, as I haven't got past the first hurdle.
url = "https://en.wikipedia.org/wiki/Carlow%E2%80%93Kilkenny_(D%C3%A1il_constituency)"
res = requests.get(url)
soup = BeautifulSoup(res.content,'lxml')
my_tables = soup.find_all("table", {"class":"wikitable"})
for table in my_tables:
    rows = table.find_all('tr', text="general election")
    print(rows)
Any help on this would be greatly appreciated...
This page requires some gymnastics, but it can be done:
import requests
from bs4 import BeautifulSoup as bs
import pandas as pd
req = requests.get('https://en.wikipedia.org/wiki/Carlow%E2%80%93Kilkenny_(D%C3%A1il_constituency)')
soup = bs(req.text,'lxml')
#first - select all the tables on the page
tables = soup.select('table.wikitable')
for table in tables:
    ttr = table.select('tbody tr')
    #next, filter out any table that doesn't involve general elections
    if "general election" in ttr[0].text:
        #clean up the rows
        s_ttr = ttr[1].text.replace('\n','xxx').strip()
        #find and clean up column headings
        columns = [col.strip() for col in s_ttr.split('xxx') if len(col.strip())>0]
        rows = [] #initialize a list to house the table rows
        for c in ttr[2:]:
            #from here, start processing each row and loading it into the list
            row = [a.text.strip() if len(a.text.strip())>0 else 'NA' for a in c.select('td')]
            if row and row[0] == "NA": #guard against rows with no td cells
                row = row[1:]
                columns = [col.strip() for col in s_ttr.split('xxx') if len(col.strip())>0]
            if len(row)>0:
                rows.append(row)
        #load the whole thing into a dataframe
        df = pd.DataFrame(rows, columns=columns)
        print(df)
The output should be all the general election tables on the page.
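If you need the tables for the later comparison step rather than just printed, one option is to collect them in a list (a sketch; election_tables is an illustrative name):

election_tables = []   #declare once, before the loop over tables
#...then, where print(df) appears above:
election_tables.append(df)   #one dataframe per general-election table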
I have parsed a table and would like to convert two of those variables to a Pandas Dataframe to print to excel.
FYI:
I did ask a similar question; however, it was not answered thoroughly. There was no suggestion on how to create a pandas DataFrame, which was the whole point of my question.
Caution:
There is a small issue with the data that I parsed: it contains "TEAM" and "SA/G" multiple times in the output.
The 1st variable that I would like in the DataFrame is 'TEAM'.
The 2nd variable that I would like in the DataFrame is 'SA/G'.
Here is my code so far:
# imports
from selenium import webdriver
from bs4 import BeautifulSoup
# make a webdriver object
driver = webdriver.Chrome(r'C:\webdrivers\chromedriver.exe')
# open some page using get method - url -- > parameters
driver.get('http://www.espn.com/nhl/statistics/team/_/stat/scoring/sort/avgGoals')
# driver.page_source
soup = BeautifulSoup(driver.page_source,'lxml')
#close driver
driver.close()
#find table
table = soup.find('table')
#find_all table rows
t_rows = table.find_all('tr')
#loop through tr to find_all td
for tr in t_rows:
    td = tr.find_all('td')
    row = [i.text for i in td]
    # print(row)
    # print(row[9])
    # print(row[1], row[9])
    team = row[1]
    sag = row[9]
    # print(team, sag)
    data = [(team, sag)]
    print(data)
Here is the final output that I would like printed to excel using the Pandas DataFrame option:
Team SA/G
Nashville 30.1
Colorado 33.6
Washington 31.0
... ...
Thanks in advance for any help that you may offer. I am still learning and appreciate any feedback that I can get.
Looks like you want to create a DataFrame from a list of tuples, which has been answered here.
I would change your code like this:
import pandas as pd

# Initial empty list
data = []

#loop through tr to find_all td
for tr in t_rows:
    td = tr.find_all('td')
    row = [i.text for i in td]
    team = row[1]
    sag = row[9]
    # Add tuple containing one row of data
    data.append((team, sag))

# Create df from list of tuples
df = pd.DataFrame(data, columns=['Team', 'SA/G'])

# Remove lines where Team value is "TEAM"
df = df[df["Team"] != "TEAM"]
EDIT: Added a line to remove the ("TEAM", "SA/G") rows in df.
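Since the end goal is to get the data into Excel, the cleaned df can then be written out with to_excel (the filename here is illustrative, and the openpyxl package must be installed for .xlsx output):

# write the cleaned table to an Excel workbook
df.to_excel('team_sag.xlsx', index=False)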
First, inside the for loop, append tuples to a list instead of doing data=[(x,y)]: declare the data variable before the loop as a list (data = list()) and append the tuples to it inside the loop (data.append((x,y))). Then do the following:
import pandas as pd
data=[("t1","sag1"),("t2","sag2"),("t3","sag3")]
df = pd.DataFrame(data,columns=['Team','SA/G'])
print(df)