exporting to table not aligned - python

I'm trying to scrape a table from this link:
https://www.espn.com/nba/stats/player/_/table/offensive/sort/avgPoints/dir/desc
When scraping the table, the names and stat categories align, but the numbers themselves don't.
import csv
from bs4 import BeautifulSoup
import requests

soup = BeautifulSoup(
    requests.get("https://www.espn.com/nba/stats/player/_/table/offensive/sort/avgPoints/dir/desc", timeout=30).text,
    'lxml')

def scrape_data(url):
    # the categories of stats (first row)
    ct = soup.find_all('tr', class_="Table__TR Table__even")
    # player's stats table (the names and numbers)
    st = soup.find_all('tr', class_="Table__TR Table__TR--sm Table__even")
    header = [th.text.rstrip() for th in ct[1].find_all('th')]
    with open('s espn.csv', 'w') as csv_file:
        writer = csv.writer(csv_file)
        writer.writerow(header)
        for row in st[1:]:
            data = [th.text.rstrip() for th in row.find_all('td')]
            writer.writerow(data)

scrape_data(soup)
https://imgur.com/UFHC8wf

That's because those are under 2 separate table tags: one table for the names, then another table for the stats. You'll need to merge them.
Use pandas. It's a lot easier for parsing tables (it actually uses BeautifulSoup under the hood). pd.read_html() will return a list of dataframes, which I just joined together. Then you can also manipulate the tables any way you need.
import pandas as pd
url = 'https://www.espn.com/nba/stats/player/_/table/offensive/sort/avgPoints/dir/desc'
dfs = pd.read_html(url)
df = dfs[0].join(dfs[1])
df[['Name','Team']] = df['Name'].str.extract('^(.*?)([A-Z]+)$', expand=True)
df.to_csv('s espn.csv', index=False)
Output:
print (df.head(10).to_string())
RK Name Team POS GP MIN PTS FGM FGA FG% 3PM 3PA 3P% FTM FTA FT% REB AST STL BLK TO DD2 TD3 PER
0 1 Trae Young ATL PG 2 36.5 38.5 13.5 23.0 58.7 5.5 10.0 55.0 6.0 8.0 75.0 7.0 9.0 1.5 0.0 5.5 0 0 40.95
1 2 Kyrie Irving BKN PG 3 34.7 37.7 12.0 26.3 45.6 4.7 11.3 41.2 9.0 9.7 93.1 5.7 6.3 1.7 0.7 2.0 0 0 37.84
2 3 Karl-Anthony Towns MIN C 3 33.7 32.0 10.7 20.3 52.5 5.0 9.7 51.7 5.7 9.0 63.0 13.3 5.0 3.0 2.0 2.3 3 0 40.60
3 4 Damian Lillard POR PG 3 36.7 31.7 10.7 20.3 52.5 2.3 8.0 29.2 8.0 9.0 88.9 4.0 6.0 1.7 0.3 2.7 0 0 32.86
4 5 Giannis Antetokounmpo MIL PF 2 32.5 29.5 11.5 19.0 60.5 1.0 5.0 20.0 5.5 10.0 55.0 15.0 10.0 2.0 1.5 5.5 2 1 34.55
5 6 Luka Doncic DAL SF 3 36.3 29.3 10.0 20.0 50.0 3.0 9.7 31.0 6.3 8.0 79.2 10.3 7.3 2.3 0.0 4.3 2 1 29.45
6 7 Pascal Siakam TOR PF 3 34.0 28.7 9.7 21.0 46.0 2.7 5.7 47.1 6.7 7.0 95.2 10.7 3.7 0.3 0.3 4.3 1 0 24.56
7 8 Brandon Ingram NO SF 3 35.0 27.3 10.7 20.3 52.5 3.3 6.3 52.6 2.7 3.3 80.0 9.3 4.3 0.3 1.7 2.7 1 0 25.72
8 9 Kristaps Porzingis DAL PF 3 31.0 26.3 8.7 18.7 46.4 3.0 7.3 40.9 6.0 8.3 72.0 5.7 3.3 0.3 2.7 2.3 0 0 27.15
9 10 Russell Westbrook HOU PG 2 33.0 26.0 8.0 17.0 47.1 2.0 5.0 40.0 8.0 10.5 76.2 13.0 10.0 1.5 0.5 3.5 2 1 30.96
...
Explained line by line:
dfs = pd.read_html(url)
This returns a list of dataframes; essentially it parses every <table> tag within the html. With the given URL, it returns 2 dataframes: the one at index position 0 has the ranking and player name, and the one at index position 1 has the stats.
df = dfs[0].join(dfs[1])
This takes the 2 dataframes stored in the dfs list and joins them into one dataframe, aligning rows on their index values.
df[['Name','Team']] = df['Name'].str.extract('^(.*?)([A-Z]+)$', expand=True)
If I look at the first dataframe, I notice the text is the player name immediately followed by the team abbreviation (the trailing capital letters). The regex splits that into 2 columns: the non-greedy (.*?) captures the name, and ([A-Z]+)$ captures the trailing capitals as the team. (Simply slicing off the last 3 characters with .str[-3:] wouldn't be reliable, since abbreviations vary in length, e.g. NO vs ATL.)
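For what it's worth, joining on the index and merging on the index are interchangeable here; a minimal sketch with made-up frames (not the real ESPN data) showing that both spellings produce the same result:

```python
import pandas as pd

# two toy frames standing in for dfs[0] and dfs[1] from read_html
left = pd.DataFrame({'RK': [1, 2], 'Name': ['Trae YoungATL', 'Kyrie IrvingBKN']})
right = pd.DataFrame({'PTS': [38.5, 37.7], 'AST': [9.0, 6.3]})

joined = left.join(right)                                      # aligns rows on the shared index
merged = left.merge(right, left_index=True, right_index=True)  # the same alignment, spelled out

print(joined.equals(merged))  # True
```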


Having issues trying to make my dataframe numeric

So I have a local sqlite database, and I read it into my program as pandas dataframes using
""" Separating hitters and pitchers """
pitchers = pd.read_sql_query("SELECT * FROM ALL_NORTHWOODS_DATA WHERE BF_y >= 20 AND BF_x >= 20", northwoods_db)
hitters = pd.read_sql_query("SELECT * FROM ALL_NORTHWOODS_DATA WHERE PA_y >= 25 AND PA_x >= 25", northwoods_db)
But when I do this, some of the numbers are not numeric. Here is a head of one of the dataframes:
index Year Age_x AgeDif_x Tm_x Lg_x Lev_x Aff_x G_x PA_x ... ER_y BK_y WP_y BF_y WHIP_y H9_y HR9_y BB9_y SO9_y SO/W_y
0 84 2020 21 -0.3 Hillsdale GMAC NCAA None 5 None ... 4.0 None 3.0 71.0 1.132 5.6 0.0 4.6 8.7 1.89
1 264 2018 -- None Duke ACC NCAA None 15 None ... 13 0 1 88 2.111 10.0 0.5 9.0 8.0 0.89
2 298 2019 21 0.1 Wisconsin-Milwaukee Horz NCAA None 8 None ... 1.0 0.0 2.0 21.0 2.25 9.0 0.0 11.3 11.3 1.0
3 357 2017 22 1.0 Nova Southeastern SSC NCAA None 15.0 None ... 20.0 0.0 3.0 206.0 1.489 9.7 0.4 3.7 8.5 2.32
4 418 2021 21 -0.4 Creighton BigE NCAA None 4 None ... 26.0 1.0 6.0 226.0 1.625 8.6 0.9 6.0 7.5 1.25
To make the dataframes numeric, I used these lines of code:
hitters = hitters.apply(pd.to_numeric, errors='coerce')
pitchers = pitchers.apply(pd.to_numeric, errors='coerce')
But when I did that, the new head of the dataframes is full of NaN's; it seems to have gotten rid of all of the string values, but I want to keep those.
index Year Age_x AgeDif_x Tm_x Lg_x Lev_x Aff_x G_x PA_x ... ER_y BK_y WP_y BF_y WHIP_y H9_y HR9_y BB9_y SO9_y SO/W_y
0 84 2020 21.0 -0.3 NaN NaN NaN NaN 5.0 NaN ... 4.0 NaN 3.0 71.0 1.132 5.6 0.0 4.6 8.7 1.89
1 264 2018 NaN NaN NaN NaN NaN NaN 15.0 NaN ... 13.0 0.0 1.0 88.0 2.111 10.0 0.5 9.0 8.0 0.89
2 298 2019 21.0 0.1 NaN NaN NaN NaN 8.0 NaN ... 1.0 0.0 2.0 21.0 2.250 9.0 0.0 11.3 11.3 1.00
3 357 2017 22.0 1.0 NaN NaN NaN NaN 15.0 NaN ... 20.0 0.0 3.0 206.0 1.489 9.7 0.4 3.7 8.5 2.32
4 418 2021 21.0 -0.4 NaN NaN NaN NaN 4.0 NaN ... 26.0 1.0 6.0 226.0 1.625 8.6 0.9 6.0 7.5 1.25
Is there a better way to make the number values numeric while keeping all my string columns? Maybe there is an sqlite function that can do it better? I am not sure; any help is appreciated.
Maybe you can use combine_first:
hitters_new = hitters.apply(pd.to_numeric, errors='coerce').combine_first(hitters)
pitchers_new = pitchers.apply(pd.to_numeric, errors='coerce').combine_first(pitchers)
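To see why this works, here is a toy frame (made-up values in the spirit of the question, not the real database) where to_numeric with errors='coerce' turns the text cells into NaN, and combine_first then refills those NaN's from the original frame, so the string columns survive while the numeric ones become numbers:

```python
import pandas as pd

df = pd.DataFrame({'Tm': ['Duke', 'Creighton'],   # pure text column
                   'ER': ['13', '26.0'],          # numbers stored as strings
                   'Age': ['--', '21']})          # mixed: placeholder plus a number

coerced = df.apply(pd.to_numeric, errors='coerce')  # text becomes NaN, numbers become floats
fixed = coerced.combine_first(df)                   # NaN's are refilled from the original

print(fixed)
```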
You can also try astype or convert_dtypes. astype accepts a mapping of column names to dtypes, so if you already know which columns are numeric and which ones are strings, that can work; convert_dtypes instead infers the best dtype for each column automatically. Otherwise, take a look at this thread to do this automatically.
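For example, astype with a column-to-dtype mapping converts only the columns you name and leaves the text columns untouched (again a toy frame, not the original tables):

```python
import pandas as pd

df = pd.DataFrame({'Tm': ['Duke', 'Creighton'],
                   'ER': ['13', '26'],
                   'WHIP': ['2.111', '1.625']})

# convert only the named columns; 'Tm' keeps its string dtype
typed = df.astype({'ER': 'int64', 'WHIP': 'float64'})

print(typed.dtypes)
```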

How to impute values before merging two dataframes in python pandas to avoid loss of data

I have two dataframes, df1 and df2 which can be seen below:
df1
name posteam down rush
0 A.Ekeler LAC 1.0 35.7
1 A.Ekeler LAC 2.0 15.1
2 A.Ekeler LAC 3.0 15.9
3 A.Ekeler LAC 4.0 0.4
4 A.Jones GB 1.0 105.1
5 A.Jones GB 2.0 79.2
6 A.Jones GB 3.0 8.1
7 A.Jones GB 4.0 1.0
df2
name posteam down passes
0 A.Ekeler LAC 1.0 122.8
1 A.Ekeler LAC 2.0 63.2
2 A.Ekeler LAC 3.0 39.0
3 A.Ekeler LAC 4.0 -1.0
4 A.Jones GB 1.0 43.7
5 A.Jones GB 2.0 52.0
6 A.Jones GB 3.0 10.4
I would like to merge them on the columns name, posteam, and down. However, some of the values under name have data for down == 4 in df1 but not df2 (look at A.Jones: one df has down == 4 data, the other doesn't). Since down is one of the merge keys, the down == 4 rows disappear when I merge, like so:
merged = pd.merge(df1,df2,on=['name','posteam','down'])
name posteam down rush passes
0 A.Ekeler LAC 1.0 35.7 122.8
1 A.Ekeler LAC 2.0 15.1 63.2
2 A.Ekeler LAC 3.0 15.9 39.0
3 A.Ekeler LAC 4.0 0.4 -1.0
4 A.Jones GB 1.0 105.1 43.7
5 A.Jones GB 2.0 79.2 52.0
6 A.Jones GB 3.0 8.1 10.4
Player A.Jones had data for down == 4 in df1 but not df2. How can I impute a 0 for players that don't have data in one of the dfs so that they don't disappear when I merge? Like this (look at index 7):
df2
name posteam down passes
0 A.Ekeler LAC 1.0 122.8
1 A.Ekeler LAC 2.0 63.2
2 A.Ekeler LAC 3.0 39.0
3 A.Ekeler LAC 4.0 -1.0
4 A.Jones GB 1.0 43.7
5 A.Jones GB 2.0 52.0
6 A.Jones GB 3.0 10.4
7 A.Jones GB 4.0 0.0
So when I merge, I will keep the down == 4 data from df1, like so (look at index 7):
merged = pd.merge(df1, df2, on=['name', 'posteam', 'down'])
name posteam down rush passes
0 A.Ekeler LAC 1.0 35.7 122.8
1 A.Ekeler LAC 2.0 15.1 63.2
2 A.Ekeler LAC 3.0 15.9 39.0
3 A.Ekeler LAC 4.0 0.4 -1.0
4 A.Jones GB 1.0 105.1 43.7
5 A.Jones GB 2.0 79.2 52.0
6 A.Jones GB 3.0 8.1 10.4
7 A.Jones GB 4.0 1.0 0.0
I tried taking down out of the merge, but that messed everything up
You should try an outer merge, filling the missing values with 0:
merged_df = pd.merge(df1, df2,
                     how='outer',
                     on=['name', 'posteam', 'down']).fillna(value=0.0)
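Run against the relevant rows from the question's frames, the outer merge keeps A.Jones's down == 4 row from df1, and fillna supplies the 0.0 for passes:

```python
import pandas as pd

# just the A.Jones rows from the question's df1 and df2
df1 = pd.DataFrame({'name': ['A.Jones', 'A.Jones'], 'posteam': ['GB', 'GB'],
                    'down': [3.0, 4.0], 'rush': [8.1, 1.0]})
df2 = pd.DataFrame({'name': ['A.Jones'], 'posteam': ['GB'],
                    'down': [3.0], 'passes': [10.4]})

merged_df = pd.merge(df1, df2,
                     how='outer',
                     on=['name', 'posteam', 'down']).fillna(value=0.0)
print(merged_df)
```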

How can I split columns with regex to move trailing CAPS into a separate column?

I'm trying to split a column using regex, but can't seem to get the split right. I want to take all the trailing CAPS and move them into a separate column, so I'm matching runs of 2-4 capital letters. However, it's only filling the 'Name' column while the 'Team' column comes out blank.
Here's my code:
import pandas as pd
url = "https://www.espn.com/nba/stats/player/_/table/offensive/sort/avgAssists/dir/desc"
df = pd.read_html(url)[0].join(pd.read_html(url)[1])
df[['Name','Team']] = df['Name'].str.split('[A-Z]{2,4}', expand=True)
I want this:
print(df.head(5).to_string())
RK Name POS GP MIN PTS FGM FGA FG% 3PM 3PA 3P% FTM FTA FT% REB AST STL BLK TO DD2 TD3 PER
0 1 LeBron JamesLA SF 35 35.1 24.9 9.6 19.7 48.6 2.0 6.0 33.8 3.7 5.5 67.7 7.9 11.0 1.3 0.5 3.7 28 9 26.10
1 2 Ricky RubioPHX PG 30 32.0 13.6 4.9 11.9 41.3 1.2 3.7 31.8 2.6 3.1 83.7 4.6 9.3 1.3 0.2 2.5 12 1 16.40
2 3 Luka DoncicDAL SF 32 32.8 29.7 9.6 20.2 47.5 3.1 9.4 33.1 7.3 9.1 80.5 9.7 8.9 1.2 0.2 4.2 22 11 31.74
3 4 Ben SimmonsPHIL PG 36 35.4 14.9 6.1 10.8 56.3 0.1 0.1 40.0 2.7 4.6 59.0 7.5 8.6 2.2 0.7 3.6 19 3 19.49
4 5 Trae YoungATL PG 34 35.1 28.9 9.3 20.8 44.8 3.5 9.4 37.5 6.7 7.9 85.0 4.3 8.4 1.2 0.1 4.8 11 1 23.47
to become this:
print(df.head(5).to_string())
RK Name Team POS GP MIN PTS FGM FGA FG% 3PM 3PA 3P% FTM FTA FT% REB AST STL BLK TO DD2 TD3 PER
0 1 LeBron James LA SF 35 35.1 24.9 9.6 19.7 48.6 2.0 6.0 33.8 3.7 5.5 67.7 7.9 11.0 1.3 0.5 3.7 28 9 26.10
1 2 Ricky Rubio PHX PG 30 32.0 13.6 4.9 11.9 41.3 1.2 3.7 31.8 2.6 3.1 83.7 4.6 9.3 1.3 0.2 2.5 12 1 16.40
2 3 Luka Doncic DAL SF 32 32.8 29.7 9.6 20.2 47.5 3.1 9.4 33.1 7.3 9.1 80.5 9.7 8.9 1.2 0.2 4.2 22 11 31.74
3 4 Ben Simmons PHIL PG 36 35.4 14.9 6.1 10.8 56.3 0.1 0.1 40.0 2.7 4.6 59.0 7.5 8.6 2.2 0.7 3.6 19 3 19.49
4 5 Trae Young ATL PG 34 35.1 28.9 9.3 20.8 44.8 3.5 9.4 37.5 6.7 7.9 85.0 4.3 8.4 1.2 0.1 4.8 11 1 23.47
You may extract the data into two columns by using a regex like ^(.*?)([A-Z]+)$ or ^(.*[^A-Z])([A-Z]+)$:
df[['Name','Team']] = df['Name'].str.extract('^(.*?)([A-Z]+)$', expand=True)
This keeps everything up to the last character that is not an uppercase letter in group "Name", and the trailing uppercase letters in group "Team".
See regex demo #1 and regex demo #2
Details
^ - start of a string
(.*?) - Capturing group 1: any zero or more chars other than line break chars, as few as possible
or
(.*[^A-Z]) - any zero or more chars other than line break chars, as many as possible, up to the last char that is not an ASCII uppercase letter (granted the subsequent patterns match) (note that this pattern implies there is at least 1 char before the last uppercase letters)
([A-Z]+) - Capturing group 2: one or more ASCII uppercase letters
$ - end of string.
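A quick sketch of the pattern on strings shaped like the ones in the table (sample values taken from the question):

```python
import pandas as pd

s = pd.Series(['LeBron JamesLA', 'Ben SimmonsPHIL', 'Trae YoungATL'])

# lazy group 1 takes the name, group 2 grabs the trailing capitals
out = s.str.extract('^(.*?)([A-Z]+)$', expand=True)
out.columns = ['Name', 'Team']
print(out)
```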
I have made a few alterations to the function; you might need to import the re package.
It's a bit manual, but I hope this will suffice. Have a great day!
import re

df_obj_skel = dict()
df_obj_skel['Name'] = list()
df_obj_skel['Team'] = list()
for index, row in df.iterrows():
    Name = row['Name']
    Findings = re.search('[A-Z]{2,4}$', Name)
    Refined_Team = Findings[0]
    Refined_Name = re.sub(Refined_Team + "$", "", Name)
    df_obj_skel['Team'].append(Refined_Team)
    df_obj_skel['Name'].append(Refined_Name)
df_final = pd.DataFrame(df_obj_skel)
print(df_final)

delete consecutive rows conditionally pandas

I have a df with columns (A, B, C, D, E). I want to:
1) Compare consecutive rows
2) If the absolute difference between consecutive E values is <= 1 AND the absolute difference between consecutive C values is > 7, delete the row with the lower C value.
Sample Data:
A B C D E
0 94.5 4.3 26.0 79.0 NaN
1 34.0 8.8 23.0 58.0 54.5
2 54.2 5.4 25.5 9.91 50.2
3 42.2 3.5 26.0 4.91 5.1
4 98.0 8.2 13.0 193.7 5.5
5 20.5 9.6 17.0 157.3 5.3
6 32.9 5.4 24.5 45.9 79.8
Desired result:
A B C D E
0 94.5 4.3 26.0 79.0 NaN
1 34.0 8.8 23.0 58.0 54.5
2 54.2 5.4 25.5 9.91 50.2
3 42.2 3.5 26.0 4.91 5.1
4 32.9 5.4 24.5 45.9 79.8
Row 4 was deleted when compared with row 3. Row 5 is now row 4 and it was deleted when compared to row 3.
This code returns the result as a boolean Series (not a df with values) and does not satisfy all the conditions:
df = (abs(df.E.diff(-1)) <= 1) & (abs(df.C.diff(-1)) > 7)
The result of the code:
0 False
1 False
2 False
3 True
4 False
5 False
6 False
dtype: bool
Any help appreciated.
Using shift() to compare the rows, and a while loop to iterate until no new change happens:
while True:
    rows = len(df)
    df = df[~((abs(df.E - df.E.shift(1)) <= 1) & (abs(df.C - df.C.shift(1)) > 7))]
    df.reset_index(inplace=True, drop=True)
    if rows == len(df):
        break
It produces the desired output:
A B C D E
0 94.5 4.3 26.0 79.00 NaN
1 34.0 8.8 23.0 58.00 54.5
2 54.2 5.4 25.5 9.91 50.2
3 42.2 3.5 26.0 4.91 5.1
4 32.9 5.4 24.5 45.90 79.8
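For reference, here is the loop run end-to-end on the sample data from the question (a self-contained sketch):

```python
import pandas as pd

df = pd.DataFrame({
    'A': [94.5, 34.0, 54.2, 42.2, 98.0, 20.5, 32.9],
    'B': [4.3, 8.8, 5.4, 3.5, 8.2, 9.6, 5.4],
    'C': [26.0, 23.0, 25.5, 26.0, 13.0, 17.0, 24.5],
    'D': [79.0, 58.0, 9.91, 4.91, 193.7, 157.3, 45.9],
    'E': [float('nan'), 54.5, 50.2, 5.1, 5.5, 5.3, 79.8],
})

# drop rows whose E is within 1 of the previous row's E while C
# differs by more than 7, repeating until a pass removes nothing
while True:
    rows = len(df)
    df = df[~((abs(df.E - df.E.shift(1)) <= 1) & (abs(df.C - df.C.shift(1)) > 7))]
    df.reset_index(inplace=True, drop=True)
    if rows == len(df):
        break

print(df)
```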

Numpy Separating CSV into columns

I'm trying to use a CSV imported from bballreference.com. But as you can see, the separated values all end up in a single column rather than being split into columns. Using NumPy/pandas, what would be the easiest way to fix this? I've googled to no avail.
(screenshot: the csv opened in Jupyter)
I don't know how to post CSV file in a clean way but here it is:
",,,Totals,Totals,Totals,Totals,Totals,Totals,Totals,Totals,Totals,Totals,Totals,Totals,Totals,Totals,Totals,Totals,Totals,Totals,Shooting,Shooting,Shooting,Per Game,Per Game,Per Game,Per Game,Per Game,Per Game"
"Rk,Player,Age,G,GS,MP,FG,FGA,3P,3PA,FT,FTA,ORB,DRB,TRB,AST,STL,BLK,TOV,PF,PTS,FG%,3P%,FT%,MP,PTS,TRB,AST,STL,BLK"
"1,Kevin Durant\duranke01,29,5,5,182,54,107,9,28,22,27,3,34,37,24,7,6,10,7,139,.505,.321,.815,36.5,27.8,7.4,4.8,1.4,1.2"
"2,Klay Thompson\thompkl01,27,5,5,183,38,99,12,43,11,11,3,29,32,9,1,2,6,11,99,.384,.279,1.000,36.7,19.8,6.4,1.8,0.2,0.4"
"3,Stephen Curry\curryst01,29,4,3,125,32,67,15,34,19,19,2,19,21,14,8,2,15,6,98,.478,.441,1.000,31.2,24.5,5.3,3.5,2.0,0.5"
"4,Draymond Green\greendr01,27,5,5,186,27,55,8,20,12,15,12,47,59,50,12,8,18,16,74,.491,.400,.800,37.1,14.8,11.8,10.0,2.4,1.6"
"5,Andre Iguodala\iguodan01,34,5,4,140,14,29,4,12,7,12,4,21,25,17,10,2,3,7,39,.483,.333,.583,27.9,7.8,5.0,3.4,2.0,0.4"
"6,Quinn Cook\cookqu01,24,4,0,58,12,27,0,10,6,8,1,8,9,4,1,0,2,4,30,.444,.000,.750,14.4,7.5,2.3,1.0,0.3,0.0"
"7,Kevon Looney\looneke01,21,5,0,113,12,17,0,0,4,8,10,19,29,5,4,1,2,17,28,.706,,.500,22.6,5.6,5.8,1.0,0.8,0.2"
"8,Shaun Livingston\livinsh01,32,5,0,79,11,27,0,0,4,4,0,6,6,12,0,1,3,9,26,.407,,1.000,15.9,5.2,1.2,2.4,0.0,0.2"
"9,David West\westda01,37,5,0,40,8,14,0,0,0,0,2,5,7,13,2,4,3,4,16,.571,,,7.9,3.2,1.4,2.6,0.4,0.8"
"10,Nick Young\youngni01,32,4,2,41,3,11,3,10,2,3,0,4,4,1,1,0,1,3,11,.273,.300,.667,10.2,2.8,1.0,0.3,0.3,0.0"
"11,JaVale McGee\mcgeeja01,30,3,1,19,3,8,0,1,0,0,4,2,6,0,0,1,0,2,6,.375,.000,,6.2,2.0,2.0,0.0,0.0,0.3"
"12,Zaza Pachulia\pachuza01,33,2,0,8,1,2,0,0,2,4,4,2,6,0,2,0,1,1,4,.500,,.500,4.2,2.0,3.0,0.0,1.0,0.0"
"13,Jordan Bell\belljo01,23,4,0,23,1,4,0,0,1,2,1,5,6,5,2,2,0,2,3,.250,,.500,5.8,0.8,1.5,1.3,0.5,0.5"
"14,Damian Jones\jonesda03,22,1,0,3,0,1,0,0,2,2,0,0,0,0,0,0,0,0,2,.000,,1.000,3.2,2.0,0.0,0.0,0.0,0.0"
",Team Totals,26.5,5,,1200,216,468,51,158,92,115,46,201,247,154,50,29,64,89,575,.462,.323,.800,240.0,115.0,49.4,30.8,10.0,5.8"
It seems that the first two rows of your CSV file are headers, but by default pd.read_csv treats only the first row as the header.
Also, the beginning and trailing quotes on each line make pd.read_csv think the text in between is a single field/column.
You could try the following:
Remove the beginning and trailing quotes, and
bbal = pd.read_csv('some_file.csv', header=[0, 1], delimiter=',')
Following is how you could use Python to remove the beginning and trailing quotes:
# open 'quotes.csv' in read mode with variable in_file as handle
# open 'no_quotes.csv' in write mode with variable out_file as handle
with open('quotes.csv') as in_file, open('no_quotes.csv', 'w') as out_file:
    # read in_file line by line; the variable line stores each line as a string
    for line in in_file:
        # strip the trailing newline, slice off the first and last
        # character (the quotes), then write the line back with a newline
        out_file.write(line.rstrip('\n')[1:-1] + '\n')

# read_csv on 'no_quotes.csv', treating the first two rows as the header
bbal = pd.read_csv('no_quotes.csv', header=[0, 1], delimiter=',')
bbal.head()
Consider reading the csv in as a text file and stripping the beginning/end quotes from each line; those quotes tell the parser that all the data between them is a single value. Then use the built-in StringIO to read the text string into a dataframe, instead of saving to disk for import.
Additionally, skip the first row of repeated Totals and Per Game labels, and even the last row that aggregates, since you can do that aggregation with pandas.
from io import StringIO
import pandas as pd
with open('BasketballCSVQuotes.csv') as f:
    csvdata = f.read().replace('"', '')

df = pd.read_csv(StringIO(csvdata), skiprows=1, skipfooter=1, engine='python')
print(df)
Output
Rk Player Age G GS MP FG FGA 3P 3PA ... PTS FG% 3P% FT% MP.1 PTS.1 TRB.1 AST.1 STL.1 BLK.1
0 1.0 Kevin Durant\duranke01 29.0 5 5.0 182 54 107 9 28 ... 139 0.505 0.321 0.815 36.5 27.8 7.4 4.8 1.4 1.2
1 2.0 Klay Thompson\thompkl01 27.0 5 5.0 183 38 99 12 43 ... 99 0.384 0.279 1.000 36.7 19.8 6.4 1.8 0.2 0.4
2 3.0 Stephen Curry\curryst01 29.0 4 3.0 125 32 67 15 34 ... 98 0.478 0.441 1.000 31.2 24.5 5.3 3.5 2.0 0.5
3 4.0 Draymond Green\greendr01 27.0 5 5.0 186 27 55 8 20 ... 74 0.491 0.400 0.800 37.1 14.8 11.8 10.0 2.4 1.6
4 5.0 Andre Iguodala\iguodan01 34.0 5 4.0 140 14 29 4 12 ... 39 0.483 0.333 0.583 27.9 7.8 5.0 3.4 2.0 0.4
5 6.0 Quinn Cook\cookqu01 24.0 4 0.0 58 12 27 0 10 ... 30 0.444 0.000 0.750 14.4 7.5 2.3 1.0 0.3 0.0
6 7.0 Kevon Looney\looneke01 21.0 5 0.0 113 12 17 0 0 ... 28 0.706 NaN 0.500 22.6 5.6 5.8 1.0 0.8 0.2
7 8.0 Shaun Livingston\livinsh01 32.0 5 0.0 79 11 27 0 0 ... 26 0.407 NaN 1.000 15.9 5.2 1.2 2.4 0.0 0.2
8 9.0 David West\westda01 37.0 5 0.0 40 8 14 0 0 ... 16 0.571 NaN NaN 7.9 3.2 1.4 2.6 0.4 0.8
9 10.0 Nick Young\youngni01 32.0 4 2.0 41 3 11 3 10 ... 11 0.273 0.300 0.667 10.2 2.8 1.0 0.3 0.3 0.0
10 11.0 JaVale McGee\mcgeeja01 30.0 3 1.0 19 3 8 0 1 ... 6 0.375 0.000 NaN 6.2 2.0 2.0 0.0 0.0 0.3
11 12.0 Zaza Pachulia\pachuza01 33.0 2 0.0 8 1 2 0 0 ... 4 0.500 NaN 0.500 4.2 2.0 3.0 0.0 1.0 0.0
12 13.0 Jordan Bell\belljo01 23.0 4 0.0 23 1 4 0 0 ... 3 0.250 NaN 0.500 5.8 0.8 1.5 1.3 0.5 0.5
13 14.0 Damian Jones\jonesda03 22.0 1 0.0 3 0 1 0 0 ... 2 0.000 NaN 1.000 3.2 2.0 0.0 0.0 0.0 0.0
[14 rows x 30 columns]
