I am having trouble parsing the HTML for the NBA starting lineups and would love some help if possible.
Here is my code so far:
import requests
from bs4 import BeautifulSoup
url = "https://www.rotowire.com/basketball/nba-lineups.php"
soup = BeautifulSoup(requests.get(url).text, "html.parser")
lineups = soup.find_all(class_='lineup__player')
print(lineups)
I am looking for the following data:
Player
Team
Position
I was hoping to scrape the data and then create a pandas DataFrame from the output.
Here is an example of my desired output:
Player Team Position
Dennis Schroder BOS PG
Romeo Langford BOS SG
Jayson Tatum BOS SF
Jabari Parker BOS PF
Grant Williams BOS C
Player Team Position
Kyle Lowry MIA PG
Duncan Robinson MIA SG
Jimmy Butler MIA SF
P.J. Tucker MIA PF
Bam Adebayo MIA C
... ... ...
I was able to find the Player data but was unable to successfully parse it. I can see the Player data located inside the 'title' attribute.
Any tips on how to complete this project will be greatly appreciated. Thank you in advance for any help that you may offer.
I am just looking for the 5 starting players; no need to add the bench players. Also, I'm not sure if there is a way to add a space between each team, like in my output above.
Here is an example of the current output that I would like to parse:
[<li class="lineup__player is-pct-play-100" title="Very Likely To Play">
<div class="lineup__pos">PG</div>
D. Schroder
</li>, <li class="lineup__player is-pct-play-100" title="Very Likely To Play">
<div class="lineup__pos">SG</div>
<a href="/basketball/player.php?id=4762" title="Romeo Langford">R.
You're on the right track. Here's one way to do it.
import requests, pandas
from bs4 import BeautifulSoup

url = "https://www.rotowire.com/basketball/nba-lineups.php"
soup = BeautifulSoup(requests.get(url).text, "html.parser")

# starters are the lineup__player entries flagged as 100% likely to play
lineups = soup.find_all(class_='is-pct-play-100')
positions = [x.find('div').text for x in lineups]  # <div class="lineup__pos">PG</div> etc.
names = [x.find('a')['title'] for x in lineups]    # the full name lives in the anchor's title attribute

# repeat each team abbreviation five times, once per starter
teams = sum([[x.text] * 5 for x in soup.find_all(class_='lineup__abbr')], [])

df = pandas.DataFrame(zip(names, teams, positions), columns=['Player', 'Team', 'Position'])
print(df)
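For the follow-up about spacing between teams: a minimal sketch, assuming the DataFrame built above, that groups by team and prints each five-man unit as its own block with a blank line between them.

for team, group in df.groupby('Team', sort=False):
    # sort=False keeps teams in the order they appear on the page
    print(group.to_string(index=False))
    print()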
I'm trying to scrape the positions, the artists and the songs from a ranking list on kworb. For example: https://kworb.net/spotify/country/us_weekly.html
I used the following script:
import requests
from bs4 import BeautifulSoup
response = requests.get("https://kworb.net/spotify/country/us_weekly.html")
content = response.content
soup = BeautifulSoup(response.content, 'html.parser')
print(soup.get_text())
And here is the output:
ITUNES
WORLDWIDE
ARTISTS
CHARTS
DON'T PRAY
RADIO
SPOTIFY
YOUTUBE
TRENDING
HOME
CountriesArtistsListenersCities
Spotify Weekly Chart - United States - 2023/02/09 | Totals
PosP+Artist and TitleWksPk(x?)StreamsStreams+Total
1
+1
SZA - Kill Bill
9
1(x5)
15,560,813
+247,052
148,792,089
2
-1
Miley Cyrus - Flowers
4
1(x3)
13,934,413
-4,506,662
75,009,251
3
+20
Morgan Wallen - Last Night
2
3(x1)
11,560,741
+6,984,649
16,136,833
...
How do I get only the positions, the artists and the songs separately, and store them in an Excel file first?
expected output:
Pos Artist Songs
1 SZA Kill Bill
2 Miley Cyrus Flowers
3 Morgan Wallen Last Night
...
Best practice for scraping tables is pandas.read_html(); it handles the HTML parsing (via lxml or BeautifulSoup) under the hood for you.
import pandas as pd

# find the table by id and select the first df from the list of dfs
df = pd.read_html('https://kworb.net/spotify/country/us_weekly.html', attrs={'id': 'spotifyweekly'})[0]

# split the column on the delimiter and create your expected columns
df[['Artist', 'Song']] = df['Artist and Title'].str.split(' - ', n=1, expand=True)

# pick your columns and export to Excel
df[['Pos', 'Artist', 'Song']].to_excel('yourfile.xlsx', index=False)
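A note on n=1: the split happens only at the first ' - ', so a title that itself contains the delimiter keeps its tail intact. A quick illustration with made-up strings:

import pandas as pd

s = pd.Series(['SZA - Kill Bill', 'Some Artist - Song - Live Version'])
print(s.str.split(' - ', n=1, expand=True))
#              0                    1
# 0          SZA            Kill Bill
# 1  Some Artist  Song - Live Version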
An alternative, more direct approach:
import requests
from bs4 import BeautifulSoup
import pandas as pd

soup = BeautifulSoup(requests.get("https://kworb.net/spotify/country/us_weekly.html").content, 'html.parser')

data = []
# each data row of the #spotifyweekly table has <td> cells
for e in soup.select('#spotifyweekly tr:has(td)'):
    data.append({
        'Pos': e.td.text,
        'Artist': e.a.text,
        'Song': e.a.find_next_sibling('a').text
    })

pd.DataFrame(data).to_excel('yourfile.xlsx', index=False)
Output:

Pos  Artist         Song
1    SZA            Kill Bill
2    Miley Cyrus    Flowers
3    Morgan Wallen  Last Night
4    Metro Boomin   Creepin'
5    Lil Uzi Vert   Just Wanna Rock
6    Drake          Rich Flex
7    Metro Boomin   Superhero (Heroes & Villains) [with Future & Chris Brown]
8    Sam Smith      Unholy
...
For a project I'm scraping data from futbin player pages, and I would like to add that scraped data to a dict or pandas DataFrame. I've been stuck for a couple of hours and would like some help if possible. I will put my code below. So far it only prints out the data, and from there I'm clueless about what to do next.
Code:
from requests_html import HTMLSession
import requests
from bs4 import BeautifulSoup
import pandas as pd
urls = ['https://www.futbin.com/21/player/87/pele', 'https://www.futbin.com/21/player/27751/robert-lewandowski']
for url in urls:
    page = requests.get(url)
    soup = BeautifulSoup(page.content, 'html.parser')
    info = soup.find('div', id='info_content')
    rows = info.find_all('td')
    for info in rows:
        print(info.text.strip())
The work you have already done to identify the table you want is good. From there: use read_html() to convert it to a dataframe, apply basic transforms to turn the key/value pairs into columns, and use a list comprehension to get the details of all the wanted footballers.
import requests
from bs4 import BeautifulSoup
import pandas as pd
urls = ['https://www.futbin.com/21/player/87/pele', 'https://www.futbin.com/21/player/27751/robert-lewandowski']
def myhtml(url):
    # use BS4 to get the table that has the required data
    html = str(BeautifulSoup(requests.get(url).content, 'html.parser').find('div', id='info_content').find("table"))
    # read_html() returns a list; take the first df, whose first column holds the attribute names, and transpose to build the DF
    return pd.read_html(html)[0].set_index(0).T

df = pd.concat([myhtml(u) for u in urls])
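If you only need a few fields, a follow-up like this keeps the frame tidy (the column names are taken from the output below):

out = df.reset_index(drop=True)
print(out[['Name', 'Club', 'Nation', 'League']])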
   Name                      Club          Nation  League      Skills  Weak Foot  Intl. Rep  Foot   Height        Weight  Revision  Def. WR  Att. WR  Added on    Origin  R.Face  B.Type  DOB         Age
1  Edson Arantes Nascimento  FUT 21 ICONS  Brazil  Icons       5       4          5          Right  173cm | 5'8"  70      Icon      Med      High     2020-09-10  Prime   nan     Unique  23-10-1940  nan
1  Robert Lewandowski        FC Bayern     Poland  Bundesliga  4       4          4          Right  184cm | 6'0"  80      TOTY      Med      High     2021-01-22  TOTY    nan     Unique  nan         nan
I would do it with open() and write():
file = open("filename.txt", "w")
The "w" specifies the mode:
"w" - Write - opens a file for writing, creating the file if it does not exist
And then:
file.write(text_to_save)
If you also need to build or check the file path, include os.path:
import os.path
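A with-statement variant of the same idea closes the file automatically, even if an error occurs (text_to_save is a placeholder for whatever you scraped):

text_to_save = "scraped output goes here"  # placeholder for your data
with open("filename.txt", "w") as file:
    file.write(text_to_save)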
I want to retrieve the tables on the following website and store them in a pandas dataframe: https://www.acf.hhs.gov/orr/resource/ffy-2012-13-state-of-colorado-orr-funded-programs
However, the third table on the page returns an empty dataframe with all the table's data stored in tuples as the column headers:
Empty DataFrame
Columns: [(Service Providers, State of Colorado), (Cuban - Haitian Program, $0), (Refugee Preventive Health Program, $150,000.00), (Refugee School Impact, $450,000), (Services to Older Refugees Program, $0), (Targeted Assistance - Discretionary, $0), (Total FY, $600,000)]
Index: []
Is there a way to "flatten" the tuple headers into header + values, then append this to a dataframe made up of all four tables? My code is below -- it has worked on other similar pages but keeps breaking because of this table's formatting. Thanks!
funds_df = pd.DataFrame()
url = 'https://www.acf.hhs.gov/programs/orr/resource/ffy-2011-12-state-of-colorado-orr-funded-programs'
page = requests.get(url)
soup = BeautifulSoup(page.text, 'html.parser')
year = url.split('ffy-')[1].split('-orr')[0]
tables = page.content
df_list = pd.read_html(tables)
for df in df_list:
    df['URL'] = url
    df['YEAR'] = year
    funds_df = funds_df.append(df)
For this site, there's no need for BeautifulSoup or requests.
pandas.read_html creates a list of DataFrames for each <table> at the URL.
import pandas as pd
url = 'https://www.acf.hhs.gov/orr/resource/ffy-2012-13-state-of-colorado-orr-funded-programs'
# read the url
dfl = pd.read_html(url)
# see each dataframe in the list; there are 4 in this case
for i, d in enumerate(dfl):
    print(i)
    display(d)  # display works in Jupyter; otherwise use print
    print('\n')
dfl[0]
Service Providers Cash and Medical Assistance* Refugee Social Services Program Targeted Assistance Program TOTAL
0 State of Colorado $7,140,000 $1,896,854 $503,424 $9,540,278
dfl[1]
WF-CMA 2 RSS TAG-F CMA Mandatory 3 TOTAL
0 $3,309,953 $1,896,854 $503,424 $7,140,000 $9,540,278
dfl[2]
Service Providers Refugee School Impact Targeted Assistance - Discretionary Services to Older Refugees Program Refugee Preventive Health Program Cuban - Haitian Program Total
0 State of Colorado $430,000 $0 $100,000 $150,000 $0 $680,000
dfl[3]
Volag Affiliate Name Projected ORR MG Funding Director
0 CWS Ecumenical Refugee & Immigration Services $127,600 Ferdi Mevlani 1600 Downing St., Suite 400 Denver, CO 80218 303-860-0128
1 ECDC ECDC African Community Center $308,000 Jennifer Guddiche 5250 Leetsdale Drive Denver, CO 80246 303-399-4500
2 EMM Ecumenical Refugee Services $191,400 Ferdi Mevlani 1600 Downing St., Suite 400 Denver, CO 80218 303-860-0128
3 LIRS Lutheran Family Services Rocky Mountains $121,000 Floyd Preston 132 E Las Animas Colorado Springs, CO 80903 719-314-0223
4 LIRS Lutheran Family Services Rocky Mountains $365,200 James Horan 1600 Downing Street, Suite 600 Denver, CO 80218 303-980-5400
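To finish what the question's loop was doing, tag each frame and combine them with pd.concat (DataFrame.append was deprecated and later removed in pandas 2.0). A sketch reusing the names from the question:

url = 'https://www.acf.hhs.gov/orr/resource/ffy-2012-13-state-of-colorado-orr-funded-programs'
year = url.split('ffy-')[1].split('-orr')[0]
for d in dfl:
    d['URL'] = url
    d['YEAR'] = year
funds_df = pd.concat(dfl, ignore_index=True)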
I have been trying to scrape data from https://www.premierleague.com/players to get team rosters for premier league clubs for the past 10 years.
The following is the code I am using. In this particular example se=17 specifies season 2008/09 and cl=12 is for Manchester United.
import requests
import pandas as pd

url = 'https://www.premierleague.com/players?se=17&cl=12'
r = requests.get(url)
d = pd.read_html(r.text)
d[0]
In spite of the URL pointing at the correct data on the page, the table I get is the one for the current season, 2019/20. I have tried multiple combinations of the URL and still cannot scrape the right table.
Can someone help?
I prefer to use BeautifulSoup to navigate the DOM. This works.
from bs4 import BeautifulSoup
import requests
import pandas as pd

resp = requests.get("https://www.premierleague.com/players", params={"se": 17, "cl": 12})
soup = BeautifulSoup(resp.content.decode(), "html.parser")
html = soup.find("div", {"class": "table playerIndex"}).find("table")
df = pd.read_html(str(html))[0]
Sample output:
Player Position Nationality
Rolando Aarons Midfielder England
Tammy Abraham Forward England
Che Adams Forward England
Dennis Adeniran Midfielder England
Adrián Goalkeeper Spain
Adrien Silva Midfielder Portugal
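To cover the ten seasons the question mentions, repeat the same request over a range of se values. The ids below are an assumption extrapolated from se=17 being 2008/09, so verify them against the page's season filter before relying on them:

frames = []
for se in range(17, 27):  # hypothetical season ids; confirm on the site
    resp = requests.get("https://www.premierleague.com/players", params={"se": se, "cl": 12})
    soup = BeautifulSoup(resp.content.decode(), "html.parser")
    table = soup.find("div", {"class": "table playerIndex"}).find("table")
    season_df = pd.read_html(str(table))[0]
    season_df["se"] = se
    frames.append(season_df)
all_seasons = pd.concat(frames, ignore_index=True)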
I have some output from BeautifulSoup.
I need to convert the output from type bs4.element.Tag to a list and export that list into a DataFrame column named COLUMN_A.
I also want my output to stop at the 14th element (the last three h2 elements are useless).
My code:
import requests
from bs4 import BeautifulSoup
url = 'https://www.planetware.com/tourist-attractions-/oslo-n-osl-oslo.htm'
url_get = requests.get(url)
soup = BeautifulSoup(url_get.content, 'html.parser')
attraction_place = soup.find_all('h2', class_="sitename")
for attraction in attraction_place:
    print(attraction.text)
    type(attraction)
Output:
1 Vigeland Sculpture Park
2 Akershus Fortress
3 Viking Ship Museum
4 The National Museum
5 Munch Museum
6 Royal Palace
7 The Museum of Cultural History
8 Fram Museum
9 Holmenkollen Ski Jump and Museum
10 Oslo Cathedral
11 City Hall (Rådhuset)
12 Aker Brygge
13 Natural History Museum & Botanical Gardens
14 Oslo Opera House and Annual Music Festivals
Where to Stay in Oslo for Sightseeing
Tips and Tours: How to Make the Most of Your Visit to Oslo
More Related Articles on PlanetWare.com
I expect a list like:
attraction=[Vigeland Sculpture Park, Akershus Fortress, ......]
Thank you very much in advance.
A nice easy way is to take the alt attribute of the photos. This gets clean text output and only 14 without any need for slicing/indexing.
from bs4 import BeautifulSoup
import requests

r = requests.get('https://www.planetware.com/tourist-attractions-/oslo-n-osl-oslo.htm')
soup = BeautifulSoup(r.content, 'lxml')
attractions = [item['alt'] for item in soup.select('.photo [alt]')]
print(attractions)
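And since the goal was a DataFrame column named COLUMN_A, the list drops straight in:

import pandas as pd

df = pd.DataFrame({'COLUMN_A': attractions})
print(df)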
new = []
count = 1
for attraction in attraction_place:
    if count < 15:  # keep only the first 14 elements
        new.append(attraction.text)
        count += 1
You can use a slice:
for attraction in attraction_place[:14]:
    print(attraction.text)
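Or, to build the list the question asked for directly:

attraction = [a.text for a in attraction_place[:14]]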