Scraping special graphical characters in an HTML table - python

I am trying to scrape a table which in some cells has a "graphical" element (an up/down arrow), using R. Unfortunately, the rvest function html_table seems to skip these elements. This is how such a cell with an arrow looks in HTML:
<td>
<span style="font-weight: bold; color: darkgreen">Ba2</span>
<i class="glyphicon glyphicon-arrow-down" title="negative outlook"></i>
</td>
The code I am using is:
require(rvest)
require(tidyverse)
url = "https://tradingeconomics.com/country-list/rating"
#bypass company firewall
download.file(url, destfile = "scrapedpage.html", quiet=TRUE)
content <- read_html("scrapedpage.html")
tables <- content %>% html_table(fill = TRUE, trim=TRUE)
But for the cell above, for example, it gives me only the string Ba2. Is there a way to also include the arrows somehow (as text, e.g. Ba2 neg)? A solution in Python would also be useful if R does not have such functionality.
Thank you!

I don't know if this is possible in R, but in Python this will give you the required results.
I have tried to print the first few rows to give you an idea of how the data looks.
pos denotes arrow-up and neg denotes arrow-down.
from bs4 import BeautifulSoup
import requests

url = 'https://tradingeconomics.com/country-list/rating'
resp = requests.get(url)
soup = BeautifulSoup(resp.text, 'html.parser')

t = soup.find('table', attrs={'id': 'ctl00_ContentPlaceHolder1_ctl01_GridView1'})
tr = t.findAll('tr')
for i in range(1, 10):
    tds = tr[i].findAll('td')
    for j in tds:
        fa_down = j.find('i', class_='glyphicon-arrow-down')
        fa_up = j.find('i', class_='glyphicon-arrow-up')
        if fa_up:
            print(f'{j.text.strip()} (pos)')
        elif fa_down:
            print(f'{j.text.strip()} (neg)')
        else:
            print(f'{j.text.strip()}')
Output:
+------------+------+-----------+-----------+------+-----+
| Country    | S&P  | Moody's   | Fitch     | DBRS | TE  |
+------------+------+-----------+-----------+------+-----+
| Albania    | B+   | B1        |           |      | 35  |
| Andorra    | BBB  |           | BBB+      |      | 62  |
| Angola     | CCC+ | Caa1      | CCC       |      | 21  |
| Argentina  | CCC+ | Ca        | CCC       | CCC  | 15  |
| Armenia    |      | Ba3       | B+        |      | 16  |
| Aruba      | BBB  |           | BB        |      | 52  |
| Australia  | AAA  | Aaa       | AAA (neg) | AAA  | 100 |
| Austria    | AA+  | Aa1       | AA+       | AAA  | 96  |
| Azerbaijan | BB+  | Ba2 (pos) | BB+       |      | 48  |
+------------+------+-----------+-----------+------+-----+
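If you would rather end up with a DataFrame directly, a variant of the same idea (a sketch, reusing the table id from the code above; the site may require a User-Agent header) is to swap each arrow icon for a text marker before the HTML reaches pandas, so the outlook survives as part of the cell text:
import io
import requests
import pandas as pd
from bs4 import BeautifulSoup

url = 'https://tradingeconomics.com/country-list/rating'
soup = BeautifulSoup(requests.get(url).text, 'html.parser')

# replace each arrow icon with a plain-text marker so it survives text extraction
for icon in soup.select('i.glyphicon-arrow-up'):
    icon.replace_with(' (pos)')
for icon in soup.select('i.glyphicon-arrow-down'):
    icon.replace_with(' (neg)')

t = soup.find('table', attrs={'id': 'ctl00_ContentPlaceHolder1_ctl01_GridView1'})
df = pd.read_html(io.StringIO(str(t)))[0]
print(df.head())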

Related

Scrape table from URL

I am trying to scrape data from "https://www.investing.com/equities/pre-market"; here is the element I need:
class="datatable_table__D_jso PreMarketMostActiveStocksTable_preMarketMostActiveStocksTable__9yGOv datatable_table--mobile-basic__W2ilt datatable_table--freeze-column__7YoIE"
It seems that this HTML code contains the table. I tried to scrape it using soup.find but I get no result.
Here is my code:
import requests
from bs4 import BeautifulSoup
url = "https://www.investing.com/equities/pre-market"
html = requests.get(url).content
soup = BeautifulSoup(html)
table = soup.find('table', {'class': 'datatable_row__qHMpQ'})
print(soup)
Thanks!
The class you're using belongs to the header row of the table, not the table tag itself. (It's indicated by the class name itself - "datatable_row__qHMpQ"...)
You can use one of the table classes instead (like datatable_table__D_jso) or you could use the data-test attribute:
table = soup.find('table', {'data-test': "pre-market-most-active-stocks-table"})
# import pandas
print(pandas.read_html(table.prettify())[0].to_markdown(index=False))
prints
| Name                | Symbol   |   Last |   Chg. | Chg. %   | Vol.   | Time     |
|:--------------------|:---------|-------:|-------:|:---------|:-------|:---------|
| Jiuzi Holdings Inc  | JZXN     |   0.24 |  0.092 | +62.09%  | 19.14M | 09:27:41 |
| OpGen Inc           | OPGN     |  0.245 |   0.12 | +96.00%  | 16.41M | 09:27:41 |
| Powerbridge         | PBTS     | 0.1001 | 0.0084 | +9.16%   | 12.07M | 09:26:40 |
| Faraday Future Int. | FFIE     |   0.57 |   0.11 | +24.59%  | 12.03M | 09:27:56 |
| Magenta Therapeuti. | MGTA     |   1.45 |    0.3 | +26.09%  | 9.12M  | 09:27:58 |
| Starry Holdings     | STRY     |  0.122 |  0.022 | +21.50%  | 8.34M  | 09:26:57 |
| Netcapital Inc      | NCPL     |   3.03 |   1.64 | +117.99% | 6.51M  | 09:27:59 |
| China Pharma        | CPHI     | 0.1449 | 0.0044 | +3.13%   | 3.55M  | 09:26:52 |
| 111 Inc             | YI       |   3.81 |   0.27 | +7.63%   | 2.98M  | 09:28:00 |
| Amesite             | AMST     |  0.369 |  0.059 | +19.03%  | 2.45M  | 09:21:45 |
EDIT: full code with some additions for debugging and/or error handling:
import requests
from bs4 import BeautifulSoup
import pandas as pd
import os

url = "https://www.investing.com/equities/pre-market"
resp = requests.get(url)
# resp.raise_for_status()  # halt program right here on a bad response

soup = BeautifulSoup(resp.content)
table = soup.find('table', {'data-test': "pre-market-most-active-stocks-table"})
if table is None and resp.status_code == 200:  # OK response, but no table
    hfn = 'MISSING_DATA investing-com_equities_premarket.html'
    with open(hfn, 'wb') as f:
        f.write(resp.content)
    print('no such table found - inspect [in editor]:', os.path.abspath(hfn))
elif table:
    print(pd.read_html(table.prettify())[0].to_markdown(index=False))
else:
    print(f'{resp.status_code} {resp.reason} - failed to scrape {url}')

Web Scrape table data from this webpage

I'm trying to scrape the data from the table in the specifications section of this webpage:
Lochinvar Water Heaters
I'm using Beautiful Soup 4. I've tried searching for the table by class - for example, (class="Table__Cell-sc-1e0v68l-0 kdksLO") - but bs4 can't find the class on the webpage. I listed all the classes it could find and there is nothing useful among them. Any help is appreciated.
Here's the code I tried to get the classes:
import requests
from bs4 import BeautifulSoup

URL = "https://www.lochinvar.com/products/commercial-water-heaters/armor-condensing-water-heater"
page = requests.get(URL)
soup = BeautifulSoup(page.content, "html.parser")
results = soup.find_all("div", class_='Table__Wrapper-sc-1e0v68l-3 iFOFNW')

classes = [value
           for element in soup.find_all(class_=True)
           for value in element["class"]]
classes = sorted(classes)
for cls in classes:
    print(cls)
The page is populated with JavaScript, but fortunately in this case much of the data [including the specs table you want] seems to be inside a script tag within the fetched HTML. The script just has one statement, so it's fairly easy to extract it as JSON:
import json
### copied from your q ####
import requests
from bs4 import BeautifulSoup
URL = "https://www.lochinvar.com/products/commercial-water-heaters/armor-condensing-water-heater"
page = requests.get(URL)
soup = BeautifulSoup(page.content, "html.parser")
###########################
wrInf = soup.find(lambda l: l.name == 'script' and '__routeInfo' in l.text)
wrInf = wrInf.text.replace('window.__routeInfo = ', '', 1) # remove variable name
wrInf = wrInf.strip()[:-1] # get rid of ; at end
wrInf = json.loads(wrInf) # convert to python dictionary
specsTables = wrInf['data']['product']['specifications'][0]['table'] # get table (tsv string)
specsTables = [tuple(row.split('\t')) for row in specsTables.split('\n')] # convert rows to tuples
To view it, you could use pandas,
import pandas
headers = specsTables[0]
st_df = pandas.DataFrame([dict(zip(headers, r)) for r in specsTables[1:]])
# or just
# st_df = pandas.DataFrame(specsTables[1:], columns=headers)
print(st_df.head())
or you could simply print it
for i, r in enumerate(specsTables):
    print(" | ".join([f'{c:^18}' for c in r]))
    if i == 0:
        print()
output:
Model Number | Btu/Hr Input | Thermal Efficiency | GPH # 100ºF Rise | A | B | C | D | E | F | G | H | I | J | K | L | M | Gas Conn. | Water Conn. | Air Inlet | Vent Size | Ship. Wt.
AWH0400NPM | 399,000 | 99% | 479 | 45" | 24" | 30-1/2" | 42-1/2" | 29-3/4" | 20-1/4" | 12" | 20" | 38" | 3-1/2" | 10-1/2" | 19-1/4" | 20" | 1" | 2" | 4" | 4" | 326
AWH0500NPM | 500,000 | 99% | 600 | 45" | 24" | 30-1/2" | 42-1/2" | 29-3/4" | 20-1/4" | 12" | 20" | 38" | 3-1/2" | 10-1/2" | 19-1/4" | 20" | 1" | 2" | 4" | 4" | 333
AWH0650NPM | 650,000 | 98% | 772 | 45" | 24" | 41" | 53" | 30-1/2" | 15-1/4" | 12" | 20" | 38" | 3-1/2" | 10-1/2" | 19-1/4" | 20" | 1-1/4" | 2" | 4" | 6" | 424
AWH0800NPM | 800,000 | 98% | 950 | 45" | 24" | 41" | 53" | 30-1/2" | 15-1/4" | 12" | 20" | 38" | 3-1/2" | 10-1/2" | 19-1/4" | 20" | 1-1/4" | 2" | 4" | 6" | 434
AWH1000NPM | 999,000 | 98% | 1,187 | 45" | 24" | 48" | 62" | 30-1/2" | 15-3/4" | 12" | 20" | 38" | 3-1/2" | 10-1/2" | 19-1/4" | 20" | 1-1/4" | 2-1/2" | 6" | 6" | 494
AWH1250NPM | 1,250,000 | 98% | 1,485 | 51-1/2" | 34" | 49" | 59" | 5-1/2" | 5-1/2" | 13-1/2" | 6-3/4" | 46-3/4" | 5-3/4" | 19-3/4" | 23" | 22-1/2" | 1-1/2" | 2-1/2" | 8" | 8" | 1,568
AWH1500NPM | 1,500,000 | 98% | 1,782 | 51-1/2" | 34" | 52-3/4" | 62-3/4" | 4-1/2" | 4-1/2" | 13-1/2" | 6-3/4" | 46-3/4" | 5-3/4" | 19-3/4" | 23" | 22-1/2" | 1-1/2" | 2-1/2" | 8" | 8" | 1,649
AWH2000NPM | 1,999,000 | 98% | 2,375 | 51-1/2" | 34" | 65-1/2" | 75-1/2" | 7" | 5-3/4" | 14-3/4" | 7-1/4" | 46-3/4" | 6-3/4" | 18-3/4" | 23" | 23-1/2" | 1-1/2" | 2-1/2" | 8" | 8" | 1,911
AWH3000NPM | 3,000,000 | 98% | 3,564 | 67-1/4" | 48-1/4" | 79-3/4" | 93-3/4" | 4-3/4" | 6-3/4" | 17-3/4" | 8-3/4" | 60-1/4" | 8-1/2" | 25-1/2" | 29-1/2" | 40" | 2" | 4" | 10" | 10" | 3,147
AWH4000NPM | 4,000,000 | 98% | 4,752 | 67-1/4" | 48-1/4" | 96" | 110" | 5" | 7-1/2" | 17-3/4" | 8-3/4" | 60-1/4" | 8-1/2" | 25-1/2" | 29-1/2" | 40" | 2-1/2" | 4" | 12" | 12" | 3,694
If you wanted a specific model's specs:
modelNo = 'AWH1000NPM'
mSpecs = [r for r in specsTables if r[0] == modelNo]
mSpecs = [[]] if mSpecs == [] else mSpecs # in case there is no match
mSpecs = dict(zip(specsTables[0], mSpecs[0])) # convert to dictionary
print(mSpecs)
output:
{'Model Number': 'AWH1000NPM', 'Btu/Hr Input': '999,000', 'Thermal Efficiency': '98%', 'GPH # 100ºF Rise': '1,187', 'A': '45"', 'B': '24"', 'C': '48"', 'D': '62"', 'E': '30-1/2"', 'F': '15-3/4"', 'G': '12"', 'H': '20"', 'I': '38"', 'J': '3-1/2"', 'K': '10-1/2"', 'L': '19-1/4"', 'M': '20"', 'Gas Conn.': '1-1/4"', 'Water Conn.': '2-1/2"', 'Air Inlet': '6"', 'Vent Size': '6"', 'Ship. Wt.': '494'}
The contents for constructing the table are within a script tag. You can extract the relevant string and re-create the table through string manipulation.
import requests, re
import pandas as pd

r = requests.get('https://www.lochinvar.com/products/commercial-water-heaters/armor-condensing-water-heater/').text
s = re.sub(r'\\"', '"', re.search(r'table":"([\s\S]+?)(?:","tableFootNote)', r).group(1))
lines = [i.split('\\t') for i in s.split('\\n')]
df = pd.DataFrame(lines[1:], columns=lines[0])
df.head(5)

How can I pull player names from Rotowire?

I have been using the code below to pull MLB lineups from BaseballPress.com. However, this pulls the official MLB lineups, which don't normally get posted until about an hour before the game.
import requests
import pandas as pd
import openpyxl
from bs4 import BeautifulSoup

url = "https://www.baseballpress.com/lineups/2022-08-09"
soup = BeautifulSoup(requests.get(url).content, "html.parser")

def get_name(tag):
    if tag.select_one(".desktop-name"):
        return tag.select_one(".desktop-name").get_text()
    elif tag.select_one(".mobile-name"):
        return tag.select_one(".mobile-name").get_text()
    else:
        return tag.get_text()

data = []
for card in soup.select(".lineup-card"):
    header = [
        c.get_text(strip=True, separator=" ")
        for c in card.select(".lineup-card-header .c")
    ]
    h_p1, h_p2 = [
        get_name(p) for p in card.select(".lineup-card-header .player")
    ]
    data.append([*header, h_p1, h_p2])
    for p1, p2 in zip(
        card.select(".col--min:nth-of-type(1) .player"),
        card.select(".col--min:nth-of-type(2) .player"),
    ):
        p1 = get_name(p1).split(maxsplit=1)[-1]
        p2 = get_name(p2).split(maxsplit=1)[-1]
        data.append([*header, p1, p2])

df = pd.DataFrame(
    data, columns=["Team1", "Date", "Team2", "Player1", "Player2"]
)
df.to_excel("MLB Games.xlsx", sheet_name='sheet1', index=False)
print(df.head(10).to_markdown(index=False))
In order to get around this, I found out that Rotowire releases the projected lineups about 24 hours in advance, which is what I need for this analysis. I have changed the Python script to match the website, except I am not sure how to alter the get_name() function. Does anyone know how I would address this portion of the code? See the new code below:
import requests
import pandas as pd
import openpyxl
from bs4 import BeautifulSoup

url = "https://www.rotowire.com/baseball/daily-lineups.php"
soup = BeautifulSoup(requests.get(url).content, "html.parser")

def get_name(tag):
    if tag.select_one(".desktop-name"):
        return tag.select_one(".desktop-name").get_text()
    elif tag.select_one(".mobile-name"):
        return tag.select_one(".mobile-name").get_text()
    else:
        return tag.get_text()

data = []
for card in soup.select(".lineup__main"):
    header = [
        c.get_text(strip=True, separator=" ")
        for c in card.select(".lineup__teams .c")
    ]
    h_p1, h_p2 = [
        get_name(p) for p in card.select(".lineup__teams .lineup__player")
    ]
    data.append([*header, h_p1, h_p2])
    for p1, p2 in zip(
        card.select(".lineup__list is-visit:nth-of-type(1) .lineup__player"),
        card.select(".lineup__list is-home:nth-of-type(2) .lineup__player"),
    ):
        p1 = get_name(p1).split(maxsplit=1)[-1]
        p2 = get_name(p2).split(maxsplit=1)[-1]
        data.append([*header, p1, p2])

df = pd.DataFrame(
    data, columns=["Team1", "Date", "Team2", "Player1", "Player2"]
)
df.to_excel("MLB Predicted Lineups.xlsx", sheet_name='sheet1', index=False)
print(df.head(10).to_markdown(index=False))
You need to look at the actual HTML to see which tags and attributes the source uses, in order to correctly identify the content you want. I made a script a while back that does what you are asking here, so I'm just posting that.
import requests
from bs4 import BeautifulSoup
import re
import pandas as pd

def get_players(home_away_dict):
    rows = []
    for home_away, v in home_away_dict.items():
        players = v['players']
        print("\n{} - {}".format(v['team'], v['lineupStatus']))
        for idx, player in enumerate(players):
            if home_away == 'Home':
                team = home_away_dict['Home']['team']
                opp = home_away_dict['Away']['team']
            else:
                team = home_away_dict['Away']['team']
                opp = home_away_dict['Home']['team']
            if player.find('span', {'class': 'lineup__throws'}):
                playerPosition = 'P'
                handedness = player.find('span', {'class': 'lineup__throws'}).text
            else:
                playerPosition = player.find('div', {'class': 'lineup__pos'}).text
                handedness = player.find('span', {'class': 'lineup__bats'}).text
            if 'title' in list(player.find('a').attrs.keys()):
                playerName = player.find('a')['title'].strip()
            else:
                playerName = player.find('a').text.strip()
            playerRow = {
                'Bat Order': idx,
                'Name': playerName,
                'Position': playerPosition,
                'Team': team,
                'Opponent': opp,
                'Home/Away': home_away,
                'Handedness': handedness,
                'Lineup Status': home_away_dict[home_away]['lineupStatus']}
            rows.append(playerRow)
            print('{} {}'.format(playerRow['Position'], playerRow['Name']))
    return rows

rows = []
url = 'https://www.rotowire.com/baseball/daily-lineups.php'
response = requests.get(url)
soup = BeautifulSoup(response.text, 'html.parser')
lineupBoxes = soup.find_all('div', {'class': 'lineup__box'})
for lineupBox in lineupBoxes:
    try:
        awayTeam = lineupBox.find('div', {'class': 'lineup__team is-visit'}).text.strip()
        homeTeam = lineupBox.find('div', {'class': 'lineup__team is-home'}).text.strip()
        print(f'\n\n############\n {awayTeam} # {homeTeam}\n############')
        awayLineup = lineupBox.find('ul', {'class': 'lineup__list is-visit'})
        homeLineup = lineupBox.find('ul', {'class': 'lineup__list is-home'})
        awayLineupStatus = awayLineup.find('li', {'class': re.compile('lineup__status.*')}).text.strip()
        homeLineupStatus = homeLineup.find('li', {'class': re.compile('lineup__status.*')}).text.strip()
        awayPlayers = awayLineup.find_all('li', {'class': re.compile('lineup__player.*')})
        homePlayers = homeLineup.find_all('li', {'class': re.compile('lineup__player.*')})
        home_away_dict = {
            'Home': {
                'team': homeTeam, 'players': homePlayers, 'lineupStatus': homeLineupStatus},
            'Away': {
                'team': awayTeam, 'players': awayPlayers, 'lineupStatus': awayLineupStatus}}
        playerRows = get_players(home_away_dict)
        rows += playerRows
    except:
        continue

df = pd.DataFrame(rows)
Output: First 20 of 300 rows
print(df.head(20).to_markdown(index=False))
|   Bat Order | Name             | Position   | Team   | Opponent   | Home/Away   | Handedness   | Lineup Status   |
|------------:|:-----------------|:-----------|:-------|:-----------|:------------|:-------------|:----------------|
|           0 | Nick Lodolo      | P          | CIN    | PHI        | Home        | L            | Expected Lineup |
|           1 | Jonathan India   | 2B         | CIN    | PHI        | Home        | R            | Expected Lineup |
|           2 | Nick Senzel      | CF         | CIN    | PHI        | Home        | R            | Expected Lineup |
|           3 | Kyle Farmer      | 3B         | CIN    | PHI        | Home        | R            | Expected Lineup |
|           4 | Joey Votto       | 1B         | CIN    | PHI        | Home        | L            | Expected Lineup |
|           5 | Aristides Aquino | DH         | CIN    | PHI        | Home        | R            | Expected Lineup |
|           6 | Albert Almora    | LF         | CIN    | PHI        | Home        | R            | Expected Lineup |
|           7 | Matt Reynolds    | RF         | CIN    | PHI        | Home        | R            | Expected Lineup |
|           8 | Jose Barrero     | SS         | CIN    | PHI        | Home        | R            | Expected Lineup |
|           9 | Austin Romine    | C          | CIN    | PHI        | Home        | R            | Expected Lineup |
|           0 | Ranger Suarez    | P          | PHI    | CIN        | Away        | L            | Expected Lineup |
|           1 | Jean Segura      | 2B         | PHI    | CIN        | Away        | R            | Expected Lineup |
|           2 | Kyle Schwarber   | LF         | PHI    | CIN        | Away        | L            | Expected Lineup |
|           3 | Rhys Hoskins     | 1B         | PHI    | CIN        | Away        | R            | Expected Lineup |
|           4 | J.T. Realmuto    | C          | PHI    | CIN        | Away        | R            | Expected Lineup |
|           5 | Nick Castellanos | RF         | PHI    | CIN        | Away        | R            | Expected Lineup |
|           6 | Alec Bohm        | 3B         | PHI    | CIN        | Away        | R            | Expected Lineup |
|           7 | Darick Hall      | DH         | PHI    | CIN        | Away        | L            | Expected Lineup |
|           8 | Bryson Stott     | SS         | PHI    | CIN        | Away        | L            | Expected Lineup |
|           9 | Matt Vierling    | CF         | PHI    | CIN        | Away        | R            | Expected Lineup |
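If you also want the Excel file your original script produced, the to_excel call from your own code should work unchanged on this DataFrame:
# write the scraped lineups to Excel, as in the original script
df.to_excel("MLB Predicted Lineups.xlsx", sheet_name='sheet1', index=False)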

How can I print all data for each unique observation to its own PDF or CSV using R or Python?

Consider the following example dataframe. How can I group by Name and then write all information pertaining to each unique observation under the Name column to a PDF, CSV, or Excel file? For example, I would like all of Dave's information written to a file named "Dave" and all of Sal's information to a file named "Sal".
Name   | Score | Date     | Test    |
Dave   | 95    | 09/03/21 | Math    |
Dave   | 90    | 09/20/21 | History |
Sal    | 85    | 09/18/21 | Math    |
Jackie | 89    | NA       | English |
Sal    | 88    | 09/15/21 | Gym     |
Goat   | 18    | 09/17/21 | Gym     |
Jackie | 82    | 10/16/21 | Art     |
Goat   | 3     | 10/17/21 | Math    |
Ty     | 25    | 09/28/21 | Math    |
Cheers
In R:
names <- unique(df$Name)
for (nam in names) {
  write.csv(x = df[df$Name == nam, ], file = paste0(nam, ".csv"))
}
You can just put everything in a loop:
for name in my_df['Name'].unique().tolist():
    new_df = my_df[my_df.Name == name]
    file_path_name = 'your path' + name + '.csv'
    new_df.to_csv(file_path_name)
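pandas can also do the grouping itself; a short sketch with groupby (writing into the current directory) avoids re-filtering the frame once per name:
# one file per unique value of Name
for name, group in my_df.groupby('Name'):
    group.to_csv(f'{name}.csv', index=False)
    # or, for Excel output: group.to_excel(f'{name}.xlsx', index=False)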

How can I extract text from a class within a class using Beautiful Soup in Python

Good morning / afternoon,
I would be grateful for your help to give further clarity and direction using Beautiful Soup. My end goal is to have three variables:
artist
song
station
I can currently extract the artist and song together (concatenated) and the radio station. This is in the format {str} \n\n artist song\n track hit \n
The code I have used below returns 20 tags, which I can then use to extract the information above. My question to you all is: how can I retrieve the text from the raw code below using
<div class="table__track-title">
<div class="table__track-onair">
These would be combined with <tr class="now_playing_tr">, which was used initially to retrieve the result set of 20 records.
Code used to obtain the HTML subset
import bs4, requests, re

siteurl = 'https://onlineradiobox.com/uk/?cs=uk'
r = requests.get(siteurl)
soup = bs4.BeautifulSoup(r.text, 'html.parser')
x = soup.find_all(class_='now_playing_tr')
for i in range(len(x)):
    currtext = x[i].get_text()
HTML
<button aria-label="Listen live" class="b-play station_play" radioid="uk.cheekyhits"
radioimg="//cdn.onlineradiobox.com/img/l/2/88872.v2.png" radioname="Cheeky Hits"
stream="https://stream.zeno.fm/ys9fvbebgwzuv"
streamtype="mp3" title="Listen to radio"></button>
<img alt="Billy Joel - The DownEaster Alexa" src="https://is2-ssl.mzstatic.com/image/thumb/Music124/v4/bf/b7/db/bfb7dbd8-6d55-d42a-a0f0-3ecc5681cf9c/20UMGIM12176.rgb.jpg/30x30bb.jpg"><div class="table__track-title"><b>Billy Joel</b> The DownEaster Alexa</div>
<div class="table__track-onair">Cheeky Hits</div>
</img></td></tr>
The main problem here is that not all stations have a track on air, so you have to account for a missing tag.
Here's my take on it:
import requests
from bs4 import BeautifulSoup
from tabulate import tabulate

soup = BeautifulSoup(
    requests.get("https://onlineradiobox.com/uk/?cs=uk").text,
    "lxml",
)

def check_for_track_on_air(tag) -> list:
    if tag:
        return tag.getText(strip=True).split(" - ")
    return ["N/A", "N/A"]

stations = [
    [
        station.select_one(
            ".station__title__name, .podcast__title__name"
        ).getText(),
        *check_for_track_on_air(
            station.select_one(
                ".stations__station__track, .podcasts__podcast__track"
            )
        ),
    ]
    for station in soup.find_all("li", class_="stations__station")
]

station_chart = tabulate(
    stations,
    tablefmt="pretty",
    headers=["Station", "Artist", "Song"],
    stralign="left",
)
print(station_chart)
Output:
+-------------------------------+----------------------------+---------------------------------------+
| Station                       | Artist                     | Song                                  |
+-------------------------------+----------------------------+---------------------------------------+
| Smooth Radio                  | Mark Cohn                  | Walking In Memphis                    |
| BBC Radio 1                   | Hazey                      | Pots & Potions                        |
| Capital FM                    | Belters Only feat. Jazzy   | Make Me Feel Good                     |
| Heart FM                      | Maroon 5                   | This Love                             |
| Classic FM                    | Gioachino Rossini          | La Cenerentola                        |
| BBC Radio London              | Naughty Boy                | La La La (feat. Sam Smith)            |
| BBC Radio 2                   | Simple Minds               | Act Of Love                           |
| BBC Radio 4                   | Nina Simone                | Baltimore                             |
| Dance UK Radio                | Faithless vs David Guetta  | God Is A DJ                           |
| Gold Radio                    | Manfred Mann's Earthband   | Davy's On The Road Again              |
| KISS FM                       | Muni Long                  | Hrs & Hrs                             |
| LBC                           | N/A                        | N/A                                   |
| Energy FM - Dance Music Radio | C-Sixty Four               | On A Good Thing (Full Intention Edit) |
| Radio Caroline                | Badly Drawn Boy            | Once Around The Block                 |
| BBC Radio 6 Music             | Bob Marley & The Wailers   | It Hurts to Be Alone                  |
| BBC Radio 5 live              | N/A                        | N/A                                   |
| Absolute Chillout             | N/A                        | N/A                                   |
| House Nation UK               | Kevin McKay, Katie McHardy | Everywhere (Extended Mix)             |
| BBC Radio 4 Extra             | N/A                        | N/A                                   |
| Absolute Radio                | The Knack                  | My Sharona                            |
| Magic Radio                   | Elton John                 | Rocket Man                            |
| Soul Groove Radio             | N/A                        | N/A                                   |
| BBC Radio 3                   | Darius Milhaud             | Violin Sonata, Op.257                 |
| Jazz FM                       | N/A                        | N/A                                   |
| BBC Radio 1Xtra               | M'Way                      | Run It Up                             |
+-------------------------------+----------------------------+---------------------------------------+
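Since each row of stations is already a [station, artist, song] list, the three variables from the question can be pulled straight out of it:
# unpack the three target variables; pad in case a track didn't split into artist/song
for row in stations:
    station_name, artist, song = (row + ['N/A', 'N/A'])[:3]
    print(f'{artist} - {song} on {station_name}')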
