Scaling loops in Python

For each letter in the alphabet, the code should go to website.com/a and grab a table. Then it should check for a Next button, grab its link, make soup from it, grab the next table, and repeat until there is no valid next link. Then it should move to website.com/b (the next letter in the alphabet) and repeat. But I can only get as far as two pages for each letter: the first for loop grabs page 1 and the second grabs page 2 for each letter. I know I could write a loop for as many pages as needed, but that is not scalable. How can I fix this?
from nfl_fun import make_soup
import urllib.request
import os
from string import ascii_lowercase
import requests

link = "https://www.nfl.com"

for letter in ascii_lowercase:
    soup = make_soup(f"https://www.nfl.com/players/active/{letter}")
    for tbody in soup.findAll("tbody"):
        for tr in tbody.findAll("a"):
            if tr.has_attr("href"):
                print(tr.attrs["href"])
for letter in ascii_lowercase:
    soup = make_soup(f"https://www.nfl.com/players/active/{letter}")
    for page in soup.footer.findAll("a", {"class": "nfl-o-table-pagination__next"}):
        footer = page.attrs["href"]
        pagelink = f"{link}{footer}"
        print(footer)
        getpage = requests.get(pagelink)
        if getpage.status_code == 200:
            next_soup = make_soup(pagelink)
            for next_page in next_soup.footer.findAll("a", {"class": "nfl-o-table-pagination__next"}):
                print(getpage)
            for tbody in next_soup.findAll("tbody"):
                for tr in tbody.findAll("a"):
                    if tr.has_attr("href"):
                        print(tr.attrs["href"])
            soup = next_soup
Thank You again,

There is an element in there that says when the "Next" button is inactive, so that'll tell you you are on the last page. What you can do is a while loop: keep going to the next page until it reaches the last page (i.e. "Next" is inactive), then tell it to stop the loop and go to the next letter:
from bs4 import BeautifulSoup
from string import ascii_lowercase
import requests
import pandas as pd
import re

letters = ascii_lowercase
link = "https://www.nfl.com"

results = pd.DataFrame()
for letter in letters:
    continueToNextPage = True
    after = ''
    page = 1
    while continueToNextPage:
        # Get the table on the current page
        url = f"https://www.nfl.com/players/active/{letter}?query={letter}&after={after}"
        response = requests.get(url)
        soup = BeautifulSoup(response.text, 'html.parser')

        temp_df = pd.read_html(response.text)[0]
        results = pd.concat([results, temp_df], sort=False).reset_index(drop=True)
        print(f"{letter.upper()}: Page: {page}")

        # Check if the "Next" button is inactive (last page for this letter)
        buttons = soup.find('div', {'class': 'nfl-o-table-pagination__buttons'})
        regex = re.compile('.*pagination__next.*is-inactive.*')
        if buttons.find('span', {'class': regex}):
            continueToNextPage = False
        else:
            # Pull the pagination cursor out of the "Next" link
            after = buttons.find('a', {'title': 'Next'})['href'].split('after=')[-1]
            page += 1
Output:
print(results)
Player Current Team Position Status
0 Chidobe Awuzie Dallas Cowboys CB ACT
1 Josh Avery Seattle Seahawks DT ACT
2 Genard Avery Philadelphia Eagles DE ACT
3 Anthony Averett Baltimore Ravens CB ACT
4 Lee Autry Chicago Bears DT ACT
5 Denico Autry Indianapolis Colts DT ACT
6 Tavon Austin Dallas Cowboys WR UFA
7 Blessuan Austin New York Jets CB ACT
8 Antony Auclair Tampa Bay Buccaneers TE ACT
9 Jeremiah Attaochu Denver Broncos LB ACT
10 Hunter Atkinson Atlanta Falcons OT ACT
11 John Atkins Detroit Lions DE ACT
12 Geno Atkins Cincinnati Bengals DT ACT
13 Marcell Ateman Las Vegas Raiders WR ACT
14 George Aston New York Giants RB ACT
15 Dravon Askew-Henry New York Giants DB ACT
16 Devin Asiasi New England Patriots TE ACT
17 George Asafo-Adjei New York Giants OT ACT
18 Ade Aruna Las Vegas Raiders DE ACT
19 Grayland Arnold Philadelphia Eagles SAF ACT
20 Dan Arnold Arizona Cardinals TE ACT
21 Damon Arnette Las Vegas Raiders CB UDF
22 Ray-Ray Armstrong Dallas Cowboys LB UFA
23 Ka'John Armstrong Denver Broncos OT ACT
24 Dorance Armstrong Dallas Cowboys DE ACT
25 Cornell Armstrong Houston Texans CB ACT
26 Terron Armstead New Orleans Saints OT ACT
27 Ryquell Armstead Jacksonville Jaguars RB ACT
28 Arik Armstead San Francisco 49ers DE ACT
29 Alex Armah Carolina Panthers FB ACT
... ... ... ...
3180 Clive Walford Miami Dolphins TE UFA
3181 Cameron Wake Tennessee Titans DE UFA
3182 Corliss Waitman Pittsburgh Steelers P ACT
3183 Rick Wagner Green Bay Packers OT ACT
3184 Bobby Wagner Seattle Seahawks MLB ACT
3185 Ahmad Wagner Chicago Bears WR ACT
3186 Colby Wadman Denver Broncos P ACT
3187 Christian Wade Buffalo Bills RB ACT
3188 LaAdrian Waddle Buffalo Bills OT UFA
3189 Oshane Ximines New York Giants LB ACT
3190 Trevon Young Cleveland Browns DE ACT
3191 Sam Young Las Vegas Raiders OT ACT
3192 Kenny Young Los Angeles Rams ILB ACT
3193 Chase Young Washington Redskins DE UDF
3194 Bryson Young Atlanta Falcons DE ACT
3195 Isaac Yiadom Denver Broncos CB ACT
3196 T.J. Yeldon Buffalo Bills RB ACT
3197 Deon Yelder Kansas City Chiefs TE ACT
3198 Rock Ya-Sin Indianapolis Colts CB ACT
3199 Eddie Yarbrough Minnesota Vikings DE ACT
3200 Marshal Yanda Baltimore Ravens OG ACT
3201 Tavon Young Baltimore Ravens CB ACT
3202 Brandon Zylstra Carolina Panthers WR ACT
3203 Jabari Zuniga New York Jets DE UDF
3204 Greg Zuerlein Dallas Cowboys K ACT
3205 Isaiah Zuber New England Patriots WR ACT
3206 Justin Zimmer Cleveland Browns DT ACT
3207 Anthony Zettel Minnesota Vikings DE ACT
3208 Kevin Zeitler New York Giants OG ACT
3209 Olamide Zaccheaus Atlanta Falcons WR ACT
[3210 rows x 4 columns]
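More generally, the scalable pattern here is: fetch a page, process it, then look for an active Next link and follow it until none remains. A minimal sketch of that idea as a generator (the selector and URL pattern come from the code above, but treat them as assumptions to verify against the live page):
import requests
from bs4 import BeautifulSoup

def pages_for_letter(letter):
    # Follow Next links until the page no longer has an active one
    url = f"https://www.nfl.com/players/active/{letter}"
    while url:
        soup = BeautifulSoup(requests.get(url).text, "html.parser")
        yield soup
        # An inactive Next button is rendered as a span, so this is None on the last page
        next_link = soup.select_one("a.nfl-o-table-pagination__next")
        url = f"https://www.nfl.com{next_link['href']}" if next_link else None

for soup in pages_for_letter("a"):
    pass  # pull the table rows out of each page here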

Related

Draw a map of cities in Python

I have a ranking of cities across the world in a variable called rank_2000 that looks like this:
Seoul
Tokyo
Paris
New_York_Greater
Shizuoka
Chicago
Minneapolis
Boston
Austin
Munich
Salt_Lake
Greater_Sydney
Houston
Dallas
London
San_Francisco_Greater
Berlin
Seattle
Toronto
Stockholm
Atlanta
Indianapolis
Fukuoka
San_Diego
Phoenix
Frankfurt_am_Main
Stuttgart
Grenoble
Albany
Singapore
Washington_Greater
Helsinki
Nuremberg
Detroit_Greater
TelAviv
Zurich
Hamburg
Pittsburgh
Philadelphia_Greater
Taipei
Los_Angeles_Greater
Miami_Greater
MannheimLudwigshafen
Brussels
Milan
Montreal
Dublin
Sacramento
Ottawa
Vancouver
Malmo
Karlsruhe
Columbus
Dusseldorf
Shenzen
Copenhagen
Milwaukee
Marseille
Greater_Melbourne
Toulouse
Beijing
Dresden
Manchester
Lyon
Vienna
Shanghai
Guangzhou
San_Antonio
Utrecht
New_Delhi
Basel
Oslo
Rome
Barcelona
Madrid
Geneva
Hong_Kong
Valencia
Edinburgh
Amsterdam
Taichung
The_Hague
Bucharest
Muenster
Greater_Adelaide
Chengdu
Greater_Brisbane
Budapest
Manila
Bologna
Quebec
Dubai
Monterrey
Wellington
Shenyang
Tunis
Johannesburg
Auckland
Hangzhou
Athens
Wuhan
Bangalore
Chennai
Istanbul
Cape_Town
Lima
Xian
Bangkok
Penang
Luxembourg
Buenos_Aires
Warsaw
Greater_Perth
Kuala_Lumpur
Santiago
Lisbon
Dalian
Zhengzhou
Prague
Changsha
Chongqing
Ankara
Fuzhou
Jinan
Xiamen
Sao_Paulo
Kunming
Jakarta
Cairo
Curitiba
Riyadh
Rio_de_Janeiro
Mexico_City
Hefei
Almaty
Beirut
Belgrade
Belo_Horizonte
Bogota_DC
Bratislava
Dhaka
Durban
Hanoi
Ho_Chi_Minh_City
Kampala
Karachi
Kuwait_City
Manama
Montevideo
Panama_City
Quito
San_Juan
What I would like to do is a map of the world where those cities are colored according to their position in the ranking above. I am open to other solutions for the representation (such as bubbles of increasing size according to the position of the cities in the rank, or, if necessary, representing only a sample of cities taken from the top of the rank, the middle, and the bottom).
Thank you,
Federico
Your question has two parts: finding the location of each city, and then drawing them on the map. Assuming you have the latitude and longitude of each city, here's how you'd tackle the latter part.
I like Folium (https://pypi.org/project/folium/) for drawing maps. Here's an example of how you might draw a circle for each city, with its position in the list used to determine the size of that circle.
import folium

cities = [
    {'name': 'Seoul', 'coords': [37.5639715, 126.9040468]},
    {'name': 'Tokyo', 'coords': [35.5090627, 139.2094007]},
    {'name': 'Paris', 'coords': [48.8588787, 2.2035149]},
    {'name': 'New York', 'coords': [40.6976637, -74.1197631]},
    # etc. etc.
]

m = folium.Map(zoom_start=15)

for counter, city in enumerate(cities):
    circle_size = 5 + counter
    folium.CircleMarker(
        location=city['coords'],
        radius=circle_size,
        popup=city['name'],
        color="crimson",
        fill=True,
        fill_color="crimson",
    ).add_to(m)

m.save('map.html')
Output: an interactive map saved to map.html, with one circle per city.
You may need to adjust the circle_size calculation a little to work with the number of cities you want to include.
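For the first part of the problem (turning the city names into coordinates), a geocoding library can build the cities list for you. A minimal sketch using geopy's Nominatim geocoder (my assumption; any geocoder works), with the underscores in your names replaced by spaces:
from geopy.geocoders import Nominatim

geolocator = Nominatim(user_agent="city-ranking-map")  # any descriptive app name

rank_2000 = ["Seoul", "Tokyo", "Paris", "New_York_Greater"]  # etc. etc.

cities = []
for name in rank_2000:
    # Nominatim copes better with spaces than underscores; it is also
    # rate-limited, so keep requests polite (about one per second)
    location = geolocator.geocode(name.replace("_", " "))
    if location:
        cities.append({'name': name, 'coords': [location.latitude, location.longitude]})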

Cleaning up web scrape data and combining it together?

The website URL is https://www.justia.com/lawyers/criminal-law/maine
I'm wanting to scrape only the name of the lawyer and where their office is.
import requests
from bs4 import BeautifulSoup

url = 'https://www.justia.com/lawyers/criminal-law/maine'
response = requests.get(url)
soup = BeautifulSoup(response.text, "html.parser")

Lawyer_name = soup.find_all("a", "url main-profile-link")
for i in Lawyer_name:
    print(i.find(text=True))

address = soup.find_all("span", "-address -hide-landscape-tablet")
for x in address:
    print(x.find_all(text=True))
The name prints out just fine, but the address prints with extra whitespace that I want to remove:
['\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t88 Hammond Street', '\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\tBangor,\t\t\t\t\tME 04401\t\t\t\t\t\t ']
so the output I'm attempting to get for each lawyer is like this (the 1st one example):
Hunter J Tzovarras
88 Hammond Street
Bangor, ME 04401
Two issues I'm trying to figure out:
1. How can I clean up the address so it is easier to read?
2. How can I save the matching lawyer name with the address so they don't get mixed up?
Use x.get_text() instead of x.find_all:
for x in address:
    print(x.get_text(strip=True))
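If you also want a space between the street and the city/state parts, get_text() accepts a separator as its first argument:
for x in address:
    print(x.get_text(" ", strip=True))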
Full working code:
import pandas as pd
import requests
from bs4 import BeautifulSoup

url = 'https://www.justia.com/lawyers/criminal-law/maine'
response = requests.get(url)
soup = BeautifulSoup(response.text, "html.parser")

n = []
ad = []

Lawyer_name = [x.get('title').strip() for x in soup.select('a.lawyer-avatar')]
n.extend(Lawyer_name)
#print(Lawyer_name)

address = [x.get_text(strip=True).replace('\t', '').strip() for x in soup.find_all("span", class_="-address -hide-landscape-tablet")]
ad.extend(address)
#print(address)

# Note: columns must be a flat list, not a nested one
df = pd.DataFrame(data=list(zip(n, ad)), columns=['Lawyer_name', 'address'])
print(df)
Output:
Lawyer_name address
0 William T. Bly Esq 119 Main StreetKennebunk,ME 04043
1 John S. Webb 949 Main StreetSanford,ME 04073
2 William T. Bly Esq 20 Oak StreetEllsworth,ME 04605
3 Christopher Causey Esq 16 Middle StSaco,ME 04072
4 Robert Van Horn 88 Hammond StreetBangor,ME 04401
5 John S. Webb 37 Western Ave., Unit #307Kennebunk,ME 04043
6 Hunter J Tzovarras 4 Union Park RoadTopsham,ME 04086
7 Michael Stephen Bowser Jr. 241 Main StreetP.O. Box 57Saco,ME 04072
8 Richard Regan 6 City CenterSuite 301Portland,ME 04101
9 Robert Guillory Esq 75 Pearl St. Suite 400Portland,ME 04101
10 Dylan R. Boyd 160 Capitol StreetP.O. Box 79Augusta,ME 04332
11 Luke Rioux Esq 10 Stoney Brook LaneLyman,ME 04002
12 David G. Webbert 15 Columbia Street, Ste. 301Bangor,ME 04401
13 Amy Fairfield 32 Saco AveOld Orchard Beach,ME 04064
14 Mr. Richard Lyman Hartley 62 Portland Rd., Ste. 44Kennebunk,ME 04043
15 Neal L Weinstein Esq 647 U.S. Route One#203York,ME 03909
16 Albert Hansen 76 Tandberg Trail (Route 115)Windham,ME 04062
17 Russell Goldsmith Esq Two Canal PlazaPO Box 4600Portland,ME 04112
18 Miklos Pongratz Esq 18 Market Square Suite 5Houlton,ME 04730
19 Bradford Pattershall Esq 5 Island View DrCumberland Foreside,ME 04110
20 Michele D L Kenney 12 Silver StreetP.O. Box 559Waterville,ME 04903
21 John Simpson 344 Mount Hope Ave.Bangor,ME 04402
22 Mariah America Gleaton 192 Main StreetEllsworth,ME 04605
23 Wayne Foote Esq 85 Brackett StreetPortland,ME 04102
24 Will Ashe 16 Union StreetBrunswick,ME 04011
25 Peter J Cyr Esq 482 Congress Street Suite 402Portland,ME 04101
26 Jonathan Steven Handelman Esq PO Box 335York,ME 03909
27 Richard Smith Berne 36 Ossipee Trl W.Standish,ME 04084
28 Meredith G. Schmid 75 Pearl St.Suite 216Portland,ME 04101
29 Gregory LeClerc 28 Long Sands Road, Suite 5York,ME 03909
30 Cory McKenna 20 Mechanic StCamden,ME 04843
31 Thomas P. Elias P.O. Box 1049304 Hancock St. Suite 1KBangor,ME...
32 Christopher MacLean 1250 Forest Avenue, Ste 3APortland,ME 04103
33 Zachary J. Smith 415 Congress StreetSuite 202Portland,ME 04101
34 Stephen Sweatt 919 Ridge RoadP.O. BOX 119Bowdoinham,ME 04008
35 Michael Turndorf Esq 1250 Forest Avenue, Ste 3APortland,ME 04103
36 Andrews Bruce Campbell Esq 133 State StreetAugusta,ME 04330
37 Timothy Zerillo 110 Portland StreetFryeburg,ME 04037
38 Walter McKee Esq 440 Walnut Hill RdNorth Yarmouth,ME 04097
39 Shelley Carter 70 State StreetEllsworth,ME 04605
For your second query, you can save them into a dictionary like this:
import re
import requests
from bs4 import BeautifulSoup

url = 'https://www.justia.com/lawyers/criminal-law/maine'
response = requests.get(url)
soup = BeautifulSoup(response.text, "html.parser")

# parse all names and save them in a list
lawyer_names = soup.find_all("a", "url main-profile-link")
lawyer_names = [name.find(text=True).strip() for name in lawyer_names]

# parse all addresses and save them in a list
lawyer_addresses = soup.find_all("span", "-address -hide-landscape-tablet")
lawyer_addresses = [re.sub(r'\s+', ' ', address.get_text(strip=True)) for address in lawyer_addresses]

# map names with addresses
lawyer_dict = dict(zip(lawyer_names, lawyer_addresses))
print(lawyer_dict)
Output dictionary -
{'Albert Hansen': '62 Portland Rd., Ste. 44Kennebunk, ME 04043',
'Amber Lynn Tucker': '415 Congress St., Ste. 202P.O. Box 7542Portland, ME 04112',
'Amy Fairfield': '10 Stoney Brook LaneLyman, ME 04002',
'Andrews Bruce Campbell Esq': '919 Ridge RoadP.O. BOX 119Bowdoinham, ME 04008',
'Bradford Pattershall Esq': 'Two Canal PlazaPO Box 4600Portland, ME 04112',
'Christopher Causey Esq': '949 Main StreetSanford, ME 04073',
'Cory McKenna': '75 Pearl St.Suite 216Portland, ME 04101',
'David G. Webbert': '160 Capitol StreetP.O. Box 79Augusta, ME 04332',
'David Nelson Wood Esq': '120 Main StreetSuite 110Saco, ME 04072',
'Dylan R. Boyd': '6 City CenterSuite 301Portland, ME 04101',
'Gregory LeClerc': '36 Ossipee Trl W.Standish, ME 04084',
'Hunter J Tzovarras': '88 Hammond StreetBangor, ME 04401',
'John S. Webb': '16 Middle StSaco, ME 04072',
'John Simpson': '5 Island View DrCumberland Foreside, ME 04110',
'Jonathan Steven Handelman Esq': '16 Union StreetBrunswick, ME 04011',
'Luke Rioux Esq': '75 Pearl St. Suite 400Portland, ME 04101',
'Mariah America Gleaton': '12 Silver StreetP.O. Box 559Waterville, ME 04903',
'Meredith G. Schmid': 'PO Box 335York, ME 03909',
'Michael Stephen Bowser Jr.': '37 Western Ave., Unit #307Kennebunk, ME 04043',
'Michael Turndorf Esq': '415 Congress StreetSuite 202Portland, ME 04101',
'Michele D L Kenney': '18 Market Square Suite 5Houlton, ME 04730',
'Miklos Pongratz Esq': '76 Tandberg Trail (Route 115)Windham, ME 04062',
'Mr. Richard Lyman Hartley': '15 Columbia Street, Ste. 301Bangor, ME 04401',
'Neal L Weinstein Esq': '32 Saco AveOld Orchard Beach, ME 04064',
'Peter J Cyr Esq': '85 Brackett StreetPortland, ME 04102',
'Richard Regan': '4 Union Park RoadTopsham, ME 04086',
'Richard Smith Berne': '482 Congress Street Suite 402Portland, ME 04101',
'Robert Guillory Esq': '241 Main StreetP.O. Box 57Saco, ME 04072',
'Robert Van Horn': '20 Oak StreetEllsworth, ME 04605',
'Russell Goldsmith Esq': '647 U.S. Route One#203York, ME 03909',
'Shelley Carter': '110 Portland StreetFryeburg, ME 04037',
'Thaddeus Day Esq': '440 Walnut Hill RdNorth Yarmouth, ME 04097',
'Thomas P. Elias': '28 Long Sands Road, Suite 5York, ME 03909',
'Timothy Zerillo': '1250 Forest Avenue, Ste 3APortland, ME 04103',
'Todd H Crawford Jr': '1288 Roosevelt Trl, Ste #3P.O. Box 753Raymond, ME 04071',
'Walter McKee Esq': '133 State StreetAugusta, ME 04330',
'Wayne Foote Esq': '344 Mount Hope Ave.Bangor, ME 04402',
'Will Ashe': '192 Main StreetEllsworth, ME 04605',
'William T. Bly Esq': '119 Main StreetKennebunk, ME 04043',
'Zachary J. Smith': 'P.O. Box 1049304 Hancock St. Suite 1KBangor, ME 04401'}
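One caveat with the dictionary approach: dict keys are unique, so two lawyers who share a name (notice John S. Webb appears twice in the DataFrame output earlier but only once here) collapse into a single entry. If duplicates matter, keep a list of (name, address) pairs instead:
# Preserves duplicates and page order, unlike dict(zip(...))
lawyer_pairs = list(zip(lawyer_names, lawyer_addresses))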

Basic web scraper keeps repeating first entry in loop

I'm relatively new to Python and have only done projects doing dataframe analysis. I am trying to learn web scraping to complete a personal project.
I'm practicing the basics, and this is my current code:
import pandas as pd
from bs4 import BeautifulSoup
from selenium import webdriver
import requests

html_text = requests.get('https://www.pff.com/news/nfl-quarterback-rankings-all-32-starters-ahead-of-the-2021-nfl-season').text
soup = BeautifulSoup(html_text, 'lxml')

players = soup.find('div', class_='m-longform-copy')
for player in players:
    name = players.h3.a.text
    print(name)
When I run this, it just prints "Patrick Mahomes" repeatedly instead of going onto the next entry.
I looked up a few other similar questions like this on here, but don't know the syntax well enough to apply it to my issue. Any help would be great!
import requests
from bs4 import BeautifulSoup

html_text = requests.get('https://www.pff.com/news/nfl-quarterback-rankings-all-32-starters-ahead-of-the-2021-nfl-season').text
soup = BeautifulSoup(html_text, 'lxml')

players = soup.select('h3')
for player in players:
    name = player.text
    print(name)
Output:
1. Patrick Mahomes, Kansas City Chiefs
2. Tom Brady, Tampa Bay Buccaneers
3. Aaron Rodgers, Green Bay Packers
4. Russell Wilson, Seattle Seahawks
5. Deshaun Watson, Houston Texans
6. Josh Allen, Buffalo Bills
7. Dak Prescott, Dallas Cowboys
8. Lamar Jackson, Baltimore Ravens
9. Matt Ryan, Atlanta Falcons
10. Baker Mayfield, Cleveland Browns
11. Matthew Stafford, Los Angeles Rams
12. Ryan Tannehill, Tennessee Titans
13. Derek Carr, Las Vegas Raiders
14. Kirk Cousins, Minnesota Vikings
15. Justin Herbert, Los Angeles Chargers
16. Ben Roethlisberger, Pittsburgh Steelers
17. Kyler Murray, Arizona Cardinals
18. Joe Burrow, Cincinnati Bengals
19. Ryan Fitzpatrick, Washington Football Team
20. Daniel Jones, New York Giants
21. Trevor Lawrence, Jacksonville Jaguars
22. Jimmy Garoppolo, San Francisco 49ers
23. Carson Wentz, Indianapolis Colts
24. Jameis Winston/Taysom Hill, New Orleans Saints
25. Justin Fields, Chicago Bears
26. Jared Goff, Detroit Lions
27. Cam Newton, New England Patriots
28. Sam Darnold, Carolina Panthers
29. Tua Tagovailoa, Miami Dolphins
30. Zach Wilson, New York Jets
31. Jalen Hurts, Philadelphia Eagles
32. Drew Lock, Denver Broncos
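Why the original printed the same name over and over: soup.find() returns a single div, so the for loop iterates over that div's children, and players.h3 re-queries the first h3 in the container on every pass. To keep the container-scoped approach, call find_all on the div and use the loop variable; a sketch along those lines (untested against the live page):
container = soup.find('div', class_='m-longform-copy')
for h3 in container.find_all('h3'):
    if h3.a:  # skip headings that aren't player links
        print(h3.a.text)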

Trying to get a Python program to print out selected stats from web scraping

I am new to Beautiful Soup and was looking for a way to have a user input what team they want and what week, then have the script print out certain stats for that week. When I put in the team and week number, it just goes right back to the command line with no output.
Here is my code:
import requests
from bs4 import BeautifulSoup
team = input('''What team are you looking for?
crd - Arizona Cardinals
atl - Atlanta Falcons
rav - Baltimore Ravens
buf - Buffalo Bills
car - Carolina Panthers
chi - Chicago Bears
cin - Cincinnati Bengals
cle - Cleveland Browns
dal - Dallas Cowboys
den - Denver Broncos
det - Detroit Lions
gnb - Green Bay Packers
htx - Houston Texans
clt - Indianapolis Colts
jax - Jacksonville Jaguars
kan - Kansas City Chiefs
sdg - Los Angeles Chargers
ram - Los Angeles Rams
mia - Miami Dolphins
min - Minnesota Vikings
nwe - New England Patriots
nor - New Orleans Saints
nyg - New York Giants
nyj - New York Jets
rai - Oakland Raiders
phi - Philadelphia Eagles
pit - Pittsburgh Steelers
sfo - San Fransisco 49ers
sea - Seattle Seahawks
tam - Tampa Bay Buccaneers
oti - Tennessee Titans
was - Washington Football Team
Enter the 3 letter code for the team: ''')
week = int(input('What week are you looking for? '))
url = 'https://www.pro-football-reference.com/teams/' + team.lower() + '/2019.htm'
page = requests.get(url)
soup = BeautifulSoup(page.content, 'html.parser')
week_num = soup.find_all('th', attrs={"data-stat": "week_num", "class": "right", "scope": "row"})
total_off = soup.find_all('td', attrs={"data-stat": "yards_off", "class": "right"})
total_def = soup.find_all('td', attrs={"data-stat": "yards_def", "class": "right"})
pass_yards_off = soup.find_all('td', attrs={"data-stat": "pass_yds_off", "class": "right"})
pass_yards_def = soup.find_all('td', attrs={"data-stat": "pass_yds_def", "class": "right"})
rush_yards_off = soup.find_all('td', attrs={"data-stat": "rush_yds_off", "class": "right"})
rush_yards_def = soup.find_all('td', attrs={"data-stat": "rush_yds_def", "class": "right"})
team_score = soup.find_all('td', attrs={"data-stat": "pts_off", "class": "right"})
opp_score = soup.find_all('td', attrs={"data-stat": "pts_def", "class": "right"})
for i in range(len(week_num)):
    if week in week_num:
        print('Week Number: ' + week_num[i].text.strip(),
              'Total Off: ' + total_off[i].text.strip(),
              'Total Def: ' + total_def[i].text.strip(),
              'Passing Yards Off: ' + pass_yards_off[i].text.strip(),
              'Passing Yards Def: ' + pass_yards_def[i].text.strip(),
              'Rushing Yards Off: ' + rush_yards_off[i].text.strip(),
              'Rushing Yards Def: ' + rush_yards_def[i].text.strip(), '\n')
Here is the output when I run it:
What team are you looking for?
crd - Arizona Cardinals
atl - Atlanta Falcons
rav - Baltimore Ravens
buf - Buffalo Bills
car - Carolina Panthers
chi - Chicago Bears
cin - Cincinnati Bengals
cle - Cleveland Browns
dal - Dallas Cowboys
den - Denver Broncos
det - Detroit Lions
gnb - Green Bay Packers
htx - Houston Texans
clt - Indianapolis Colts
jax - Jacksonville Jaguars
kan - Kansas City Chiefs
sdg - Los Angeles Chargers
ram - Los Angeles Rams
mia - Miami Dolphins
min - Minnesota Vikings
nwe - New England Patriots
nor - New Orleans Saints
nyg - New York Giants
nyj - New York Jets
rai - Oakland Raiders
phi - Philadelphia Eagles
pit - Pittsburgh Steelers
sfo - San Fransisco 49ers
sea - Seattle Seahawks
tam - Tampa Bay Buccaneers
oti - Tennessee Titans
was - Washington Football Team
Enter the 3 letter code for the team: nwe
What week are you looking for? 6
The if condition in the for loop has to be changed.
import requests
from bs4 import BeautifulSoup
team = input('''What team are you looking for?
crd - Arizona Cardinals
atl - Atlanta Falcons
rav - Baltimore Ravens
buf - Buffalo Bills
car - Carolina Panthers
chi - Chicago Bears
cin - Cincinnati Bengals
cle - Cleveland Browns
dal - Dallas Cowboys
den - Denver Broncos
det - Detroit Lions
gnb - Green Bay Packers
htx - Houston Texans
clt - Indianapolis Colts
jax - Jacksonville Jaguars
kan - Kansas City Chiefs
sdg - Los Angeles Chargers
ram - Los Angeles Rams
mia - Miami Dolphins
min - Minnesota Vikings
nwe - New England Patriots
nor - New Orleans Saints
nyg - New York Giants
nyj - New York Jets
rai - Oakland Raiders
phi - Philadelphia Eagles
pit - Pittsburgh Steelers
sfo - San Fransisco 49ers
sea - Seattle Seahawks
tam - Tampa Bay Buccaneers
oti - Tennessee Titans
was - Washington Football Team
Enter the 3 letter code for the team: ''')
week = int(input('What week are you looking for? '))
url = 'https://www.pro-football-reference.com/teams/' + team.lower() + '/2019.htm'
page = requests.get(url)
soup = BeautifulSoup(page.content, 'html.parser')
week_num = soup.find_all('th', attrs={"data-stat": "week_num", "class": "right", "scope": "row"})
total_off = soup.find_all('td', attrs={"data-stat": "yards_off", "class": "right"})
total_def = soup.find_all('td', attrs={"data-stat": "yards_def", "class": "right"})
pass_yards_off = soup.find_all('td', attrs={"data-stat": "pass_yds_off", "class": "right"})
pass_yards_def = soup.find_all('td', attrs={"data-stat": "pass_yds_def", "class": "right"})
rush_yards_off = soup.find_all('td', attrs={"data-stat": "rush_yds_off", "class": "right"})
rush_yards_def = soup.find_all('td', attrs={"data-stat": "rush_yds_def", "class": "right"})
team_score = soup.find_all('td', attrs={"data-stat": "pts_off", "class": "right"})
opp_score = soup.find_all('td', attrs={"data-stat": "pts_def", "class": "right"})
try:
    print('Week Number: ' + week_num[week].text.strip(),
          'Total Off: ' + total_off[week].text.strip(),
          'Total Def: ' + total_def[week].text.strip(),
          'Passing Yards Off: ' + pass_yards_off[week].text.strip(),
          'Passing Yards Def: ' + pass_yards_def[week].text.strip(),
          'Rushing Yards Off: ' + rush_yards_off[week].text.strip(),
          'Rushing Yards Def: ' + rush_yards_def[week].text.strip(), '\n')
except Exception as e:
    print(e)
Output for crd and 2:
Week Number: 3 Total Off: 248 Total Def: 413 Passing Yards Off: 127 Passing Yards Def: 240 Rushing Yards Off: 121 Rushing Yards Def: 173
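If you would rather keep the original loop, the condition needs to compare each week cell's text against the week the user typed; the original if week in week_num tests an int against a list of Tag objects, so it never matches and nothing prints. A sketch of that fix:
for i in range(len(week_num)):
    if week_num[i].text.strip() == str(week):
        print('Week Number: ' + week_num[i].text.strip(),
              'Total Off: ' + total_off[i].text.strip(),
              'Total Def: ' + total_def[i].text.strip())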
We could actually create the team choices dynamically from the table. You can also use pandas to get the table then filter by the week number, as opposed to iterating.
*Note: you need to pip install choice
import pandas as pd
import requests
from bs4 import BeautifulSoup
import choice

url = 'https://www.pro-football-reference.com/teams/'
response = requests.get(url)
soup = BeautifulSoup(response.text, 'html.parser')
teams = soup.find_all('th')

# Get the links to the teams in the table
teams_dict = {}
for each in teams:
    if each.find('a'):
        teams_dict[each.text] = each.find('a')['href']

team_choice = choice.Menu(teams_dict.keys()).ask()
week = input('What week are you looking for? ')

url = 'https://www.pro-football-reference.com{team_url}2019.htm'.format(team_url=teams_dict[team_choice])
df = pd.read_html(url, attrs={'id': 'games'})[0]

# Flatten the two-level column header into single names
new_col_names = [col[-1] if 'Unnamed' in col[0] else '_'.join(col) for col in df.columns]
# for loop equivalent to the list comprehension above:
#new_col_names = []
#for col in df.columns:
#    if 'Unnamed' in col[0]:
#        new_col_names.append(col[-1])
#    else:
#        new_col_names.append('_'.join(col))
df.columns = new_col_names

df['Week'] = df['Week'].astype(str)
week_stats = df[df['Week'] == week]
cols = ['Week', 'Offense_TotYd', 'Defense_TotYd', 'Offense_PassY', 'Defense_PassY', 'Offense_RushY', 'Defense_RushY']
print(week_stats[cols].to_string())
Output for NE, week 6:
Week Offense_TotYd Defense_TotYd Offense_PassY Defense_PassY Offense_RushY Defense_RushY
5 6 427.0 213.0 313.0 161.0 114.0 52.0

How to pull only certain fields with BeautifulSoup

I'm trying to print all the fields that have England in them. The current code I have prints all the nationalities into a txt file for me, but I want just the England fields to print. The page I'm pulling from is https://www.premierleague.com/players
import requests
from bs4 import BeautifulSoup

r = requests.get("https://www.premierleague.com/players")
c = r.content
soup = BeautifulSoup(c, "html.parser")

players = open("playerslist.txt", "w+")
for playerCountry in soup.findAll("span", {"class": "playerCountry"}):
    players.write(playerCountry.text.strip())
    players.write("\n")
Just check whether the country is not equal to 'England', and if so, skip to the next item in the list:
import requests
from bs4 import BeautifulSoup

r = requests.get("https://www.premierleague.com/players")
c = r.content
soup = BeautifulSoup(c, "html.parser")

players = open("playerslist.txt", "w+")
for playerCountry in soup.findAll("span", {"class": "playerCountry"}):
    if playerCountry.text.strip() != 'England':
        continue
    players.write(playerCountry.text.strip())
    players.write("\n")
Or, you could just use pandas.read_html() and a couple lines of code:
import pandas as pd
df = pd.read_html("https://www.premierleague.com/players")[0]
print(df.loc[df['Nationality'] != 'England'])
Prints:
Player Position Nationality
2 Charlie Adam Midfielder Scotland
3 Adrián Goalkeeper Spain
4 Adrien Silva Midfielder Portugal
5 Ibrahim Afellay Midfielder Netherlands
6 Benik Afobe Forward The Democratic Republic Of Congo
7 Sergio Agüero Forward Argentina
9 Soufyan Ahannach Midfielder Netherlands
10 Ahmed Hegazi Defender Egypt
11 Nathan Aké Defender Netherlands
14 Toby Alderweireld Defender Belgium
15 Aleix García Midfielder Spain
17 Ali Gabr Defender Egypt
18 Allan Nyom Defender Cameroon
19 Allan Souza Midfielder Brazil
20 Joe Allen Midfielder Wales
22 Marcos Alonso Defender Spain
23 Paulo Alves Midfielder Portugal
24 Daniel Amartey Midfielder Ghana
25 Jordi Amat Defender Spain
27 Ethan Ampadu Defender Wales
28 Nordin Amrabat Forward Morocco
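Note that the filter above keeps the non-England rows; since the goal was the England entries only, the same DataFrame just needs the comparison flipped:
print(df.loc[df['Nationality'] == 'England'])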
