Updating Pandas Column Using Conditions and a List - python

This is similar to some other questions posted, but I can't find an answer that fits my needs.
I have a DataFrame with the following:
RK PLAYER SCHOOL YEAR POS POS RK HT WT 2019 2018 2017 2016
0 1 Nick Bosa Ohio St. Jr EDGE 1 6-4 266 Jr
1 2 Quinnen Williams Alabama Soph DL 1 6-3 303 Soph
2 3 Josh Allen Kentucky Sr EDGE 2 6-5 262 Sr
3 4 Ed Oliver Houston Jr DL 2 6-2 287 Jr
The 2018, 2017, and 2016 columns hold np.NaN values, but I can't format this table correctly with them in it.
Now I have a separate list containing the following:
season = ['Sr', 'Jr', 'Soph', 'Fr']
The 2019 column shows their current status, and I would like the 2018 column to show their status as of the prior year. So if it was 'Sr', it should be 'Jr'. Essentially, what I want to do is have the column look up the value in season, move one index ahead, and take that value back into the column. The result for 2018 should be:
RK PLAYER SCHOOL YEAR POS POS RK HT WT 2019 2018 2017 2016
0 1 Nick Bosa Ohio St. Jr EDGE 1 6-4 266 Jr Soph
1 2 Quinnen Williams Alabama Soph DL 1 6-3 303 Soph Fr
2 3 Josh Allen Kentucky Sr EDGE 2 6-5 262 Sr Jr
3 4 Ed Oliver Houston Jr DL 2 6-2 287 Jr Soph
I can think of a way to do this with a for k, v in iteritems() loop that would check the values, but I'm wondering if there's a better way?

I'm not sure if this is much smarter than what you already have, but it's a suggestion:
import pandas as pd

def get_season(curr_season, curr_year, prev_year):
    season = ['Sr', 'Jr', 'Soph', 'Fr']
    try:
        return season[season.index(curr_season) + (curr_year - prev_year)]
    except IndexError:
        # Return some meaningful message perhaps?
        return '-'

df = pd.DataFrame({'2019': ['Jr', 'Soph', 'Sr', 'Jr']})
df['2018'] = [get_season(s, 2019, 2018) for s in df['2019']]
df['2017'] = [get_season(s, 2019, 2017) for s in df['2019']]
df['2016'] = [get_season(s, 2019, 2016) for s in df['2019']]
df
Out[18]:
2019 2018 2017 2016
0 Jr Soph Fr -
1 Soph Fr - -
2 Sr Jr Soph Fr
3 Jr Soph Fr -
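
(Not part of the original answer, just a minimal vectorized sketch: because the year-to-year shift is a fixed mapping, you could also build a dict that maps each status to the prior year's status and use Series.map, which avoids calling a Python function per element. Statuses that roll off the list, e.g. 'Fr', come back as NaN rather than '-'.)
import pandas as pd

season = ['Sr', 'Jr', 'Soph', 'Fr']
prev = dict(zip(season, season[1:]))  # 'Sr' -> 'Jr', 'Jr' -> 'Soph', 'Soph' -> 'Fr'

df = pd.DataFrame({'2019': ['Jr', 'Soph', 'Sr', 'Jr']})
df['2018'] = df['2019'].map(prev)  # unmapped keys (e.g. 'Fr') become NaN
df['2017'] = df['2018'].map(prev)  # chain the mapping to step back another year
df['2016'] = df['2017'].map(prev)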

Another possible solution is to write a function that accepts a row, takes a slice of the season list starting from the '2019' value, and returns that slice as a pandas.Series. Then we can apply that function across the columns using apply(). I used part of your input DataFrame for testing.
In [3]: df
Out[3]:
WT 2019 2018 2017 2016
0 266 Jr NaN NaN NaN
1 303 Soph NaN NaN NaN
2 262 Sr NaN NaN NaN
3 287 Jr NaN NaN NaN
In [4]: def fill_row(row):
   ...:     season = ['Sr', 'Jr', 'Soph', 'Fr']
   ...:     data = season[season.index(row['2019']):]
   ...:     return pd.Series(data)
In [5]: cols_to_update = ['2019', '2018', '2017', '2016']
In [6]: df[cols_to_update] = df[cols_to_update].apply(fill_row, axis=1)
In [7]: df
Out[7]:
WT 2019 2018 2017 2016
0 266 Jr Soph Fr NaN
1 303 Soph Fr NaN NaN
2 262 Sr Jr Soph Fr
3 287 Jr Soph Fr NaN

Related

Combine rows with containing blanks and each other data - Python [duplicate]

I have a pandas DF as below:
id age gender country sales_year
1 None M India 2016
2 23 F India 2016
1 20 M India 2015
2 25 F India 2015
3 30 M India 2019
4 36 None India 2019
I want to group by id and take the latest row as per sales_year, with all non-null elements.
Expected output:
id age gender country sales_year
1 20 M India 2016
2 23 F India 2016
3 30 M India 2019
4 36 None India 2019
In PySpark:
df = df.withColumn('age', f.first('age', True).over(Window.partitionBy("id").orderBy(df.sales_year.desc())))
But I need the same solution in pandas.
EDIT:
This can be the case with all the columns, not just age. I need it to pick up the latest non-null data for all the ids.
Use GroupBy.first:
df1 = df.groupby('id', as_index=False).first()
print (df1)
id age gender country sales_year
0 1 20.0 M India 2016
1 2 23.0 F India 2016
2 3 30.0 M India 2019
3 4 36.0 NaN India 2019
If column sales_year is not sorted:
df2 = df.sort_values('sales_year', ascending=False).groupby('id', as_index=False).first()
print (df2)
id age gender country sales_year
0 1 20.0 M India 2016
1 2 23.0 F India 2016
2 3 30.0 M India 2019
3 4 36.0 NaN India 2019
print(df.replace('None',np.NaN).groupby('id').first())
first, replace the 'None' strings with np.NaN (this assumes import numpy as np)
next, use groupby() to group by 'id'
finally, take the first non-null value in each column using first()
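
(A self-contained sketch of this answer, under the assumption that the missing values really are the string 'None' as in the question's printout; the sort_values step from the earlier answer is added so that "latest" wins when an id repeats.)
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'id': [1, 2, 1, 2, 3, 4],
    'age': ['None', 23, 20, 25, 30, 36],
    'gender': ['M', 'F', 'M', 'F', 'M', 'None'],
    'country': ['India'] * 6,
    'sales_year': [2016, 2016, 2015, 2015, 2019, 2019],
})

out = (df.replace('None', np.nan)              # turn the 'None' strings into real NaN
         .sort_values('sales_year', ascending=False)
         .groupby('id')
         .first())                             # first non-null value per column in each group
print(out)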
Use -
df.dropna(subset=['gender']).sort_values('sales_year', ascending=False).groupby('id')['age'].first()
Output
id
1 20
2 23
3 30
4 36
Name: age, dtype: object
Remove the ['age'] to get full rows -
df.dropna().sort_values('sales_year', ascending=False).groupby('id').first()
Output
age gender country sales_year
id
1 20 M India 2015
2 23 F India 2016
3 30 M India 2019
4 36 None India 2019
You can put the id back as a column with reset_index() -
df.dropna().sort_values('sales_year', ascending=False).groupby('id').first().reset_index()
Output
id age gender country sales_year
0 1 20 M India 2015
1 2 23 F India 2016
2 3 30 M India 2019
3 4 36 None India 2019

Pandas Merge Not Working When Values Are an Exact Match

Below is my code and DataFrames. stats_df is much bigger. Not sure if it matters, but the column values are EXACTLY as they appear in the actual files. I can't merge the two DFs without losing 'Alex Len', even though both DFs have the same PlayerID value of '20000852'.
stats_df = pd.read_csv('stats_todate.csv')
matchup_df = pd.read_csv('matchup.csv')
new_df = pd.merge(stats_df, matchup_df[['PlayerID','Matchup','Started','GameStatus']])
I have also tried:
stats_df['PlayerID'] = stats_df['PlayerID'].astype(str)
matchup_df['PlayerID'] = matchup_df['PlayerID'].astype(str)
stats_df['PlayerID'] = stats_df['PlayerID'].str.strip()
matchup_df['PlayerID'] = matchup_df['PlayerID'].str.strip()
Any ideas?
Here are my two Dataframes:
DF1:
PlayerID SeasonType Season Name Team Position
20001713 1 2018 A.J. Hammons MIA C
20002725 2 2022 A.J. Lawson ATL SG
20002038 2 2021 Élie Okobo BKN PG
20002742 2 2022 Aamir Simms NY PF
20000518 3 2018 Aaron Brooks MIN PG
20000681 1 2022 Aaron Gordon DEN PF
20001395 1 2018 Aaron Harrison DAL SG
20002680 1 2022 Aaron Henry PHI SF
20002005 1 2022 Aaron Holiday PHO PG
20001981 3 2018 Aaron Jackson HOU PF
20002539 1 2022 Aaron Nesmith BOS SF
20002714 1 2022 Aaron Wiggins OKC SG
20001721 1 2022 Abdel Nader PHO SF
20002251 2 2020 Abdul Gaddy OKC PG
20002458 1 2021 Adam Mokoka CHI SG
20002619 1 2022 Ade Murkey SAC PF
20002311 1 2022 Admiral Schofield ORL PF
20000783 1 2018 Adreian Payne ORL PF
20002510 1 2022 Ahmad Caver IND PG
20002498 2 2020 Ahmed Hill CHA PG
20000603 1 2022 Al Horford BOS PF
20000750 3 2018 Al Jefferson IND C
20001645 1 2019 Alan Williams BKN PF
20000837 1 2022 Alec Burks NY SG
20001882 1 2018 Alec Peters PHO PF
20002850 1 2022 Aleem Ford ORL SF
20002542 1 2022 Aleksej Pokuševski OKC PF
20002301 3 2021 Alen Smailagic GS PF
20001763 1 2019 Alex Abrines OKC SG
20001801 1 2022 Alex Caruso CHI SG
20000852 1 2022 Alex Len SAC C
DF2:
PlayerID Name Date Started Opponent GameStatus Matchup
20000681 Aaron Gordon 4/1/2022 1 MIN 16
20002005 Aaron Holiday 4/1/2022 0 MEM 21
20002539 Aaron Nesmith 4/1/2022 0 IND 13
20002714 Aaron Wiggins 4/1/2022 1 DET 14
20002311 Admiral Schofield 4/1/2022 0 TOR 10
20000603 Al Horford 4/1/2022 1 IND 13
20002542 Aleksej Pokuševski 4/1/2022 1 DET 14
20000852 Alex Len 4/1/2022 1 HOU 22
You need to specify the column you want to merge on using the on keyword argument:
new_df = pd.merge(stats_df, matchup_df[['PlayerID','Matchup','Started','GameStatus']], on=['PlayerID'])
Otherwise it will merge using all of the shared columns.
Here is the explanation from the pandas docs:
on : label or list
Column or index level names to join on. These must be found in both
DataFrames. If on is None and not merging on indexes then this defaults
to the intersection of the columns in both DataFrames.
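
(A hypothetical illustration of that failure mode, not from the original post: when two frames share more columns than just the key, the default merge joins on all of them, so a stray space in one shared column silently drops the row. Passing on= restricts the join to the key.)
import pandas as pd

stats_df = pd.DataFrame({'PlayerID': [20000852], 'Name': ['Alex Len'], 'Team': ['SAC']})
matchup_df = pd.DataFrame({'PlayerID': [20000852], 'Name': ['Alex Len '], 'Matchup': [22]})

print(pd.merge(stats_df, matchup_df))                 # joins on PlayerID AND Name -> 0 rows
print(pd.merge(stats_df, matchup_df, on='PlayerID'))  # joins on PlayerID only -> 1 row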

How to add null value rows into pandas dataframe for missing years in a multi-line chart plot

I am building a chart from a dataframe with a series of yearly values for six countries. This table is created by an SQL query and then passed to pandas with the read_sql command...
country date value
0 CA 2000 123
1 CA 2001 125
2 US 1999 223
3 US 2000 235
4 US 2001 344
5 US 2002 355
...
Unfortunately, not every year has a value in each country; nevertheless, the chart tool requires each country to have the same number of years in the dataframe. Years that have no value need a NaN (null) row added.
In the end, I want the pandas dataframe to look as follows for all six countries....
country date value
0 CA 1999 NaN
1 CA 2000 123
2 CA 2001 125
3 CA 2002 NaN
4 US 1999 223
5 US 2000 235
6 US 2001 344
7 US 2002 355
8 DE 1999 NaN
9 DE 2000 NaN
10 DE 2001 423
11 DE 2002 326
...
Are there any tools or shortcuts for determining min-max dates and then ensuring a new NaN row is created if needed?
Use the unstack / stack(dropna=False) trick:
df = df.set_index(['country','date']).unstack().stack(dropna=False).reset_index()
print (df)
country date value
0 CA 1999 NaN
1 CA 2000 123.0
2 CA 2001 125.0
3 CA 2002 NaN
4 US 1999 223.0
5 US 2000 235.0
6 US 2001 344.0
7 US 2002 355.0
Another idea with DataFrame.reindex:
mux = pd.MultiIndex.from_product([df['country'].unique(),
                                  range(df['date'].min(), df['date'].max() + 1)],
                                 names=['country','date'])
df = df.set_index(['country','date']).reindex(mux).reset_index()
print (df)
country date value
0 CA 1999 NaN
1 CA 2000 123.0
2 CA 2001 125.0
3 CA 2002 NaN
4 US 1999 223.0
5 US 2000 235.0
6 US 2001 344.0
7 US 2002 355.0
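
(For reference, a self-contained version of the setup, assuming date is an integer year column; the range() call in the reindex approach relies on that.)
import pandas as pd

df = pd.DataFrame({'country': ['CA', 'CA', 'US', 'US', 'US', 'US'],
                   'date': [2000, 2001, 1999, 2000, 2001, 2002],
                   'value': [123, 125, 223, 235, 344, 355]})

out = df.set_index(['country', 'date']).unstack().stack(dropna=False).reset_index()
print(out)  # keeps NaN rows for the missing country/year pairs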

Python pandas: Setting index value of dataframe to another dataframe as a column using multiple column conditions

I have two dataframes: data_df and geo_dimension_df.
I would like to take the index of geo_dimension_df, which I renamed to id, and make it a column on data_df called geo_id.
I'll be inserting both of these dataframes as tables into a database, and the id columns will be their primary keys while geo_id is a foreign key that will link data_df to geo_dimension_df.
As can be seen, the cbsa and name values can change over time (Yuba City, CA -> Yuba City-Marysville, CA). Therefore, geo_dimension_df holds all the unique combinations of cbsa and name.
I need to compare the cbsa and name values on both dataframes and then, where they match, set geo_dimension_df.id as the data_df.geo_id value.
I tried using merge for a bit but got confused, so now I'm trying apply, treating it like an Excel VLOOKUP across multiple column values, but having no luck. The following is my attempt, but it's a bit gibberish...
data_df['geo_id'] = data_df[['cbsa', 'name']]
    .apply(
        lambda x, y:
            geo_dimension_df
            .index[geo_dimension_df[['cbsa', 'name']]
            .to_list()
            == [x, y])
Below are the two original dataframes followed by the desired result. Thank you.
geo_dimension_df:
cbsa name
id
1 10180 Abilene, TX
2 10420 Akron, OH
3 10500 Albany, GA
4 10540 Albany, OR
5 10540 Albany-Lebanon, OR
...
519 49620 York-Hanover, PA
520 49660 Youngstown-Warren-Boardman, OH-PA
521 49700 Yuba City, CA
522 49700 Yuba City-Marysville, CA
523 49740 Yuma, AZ
data_df:
cbsa name month year units_total
id
1 10180 Abilene, TX 1 2004 22
2 10180 Abilene, TX 2 2004 12
3 10180 Abilene, TX 3 2004 44
4 10180 Abilene, TX 4 2004 32
5 10180 Abilene, TX 5 2004 21
...
67145 49740 Yuma, AZ 12 2018 68
67146 49740 Yuma, AZ 1 2019 86
67147 49740 Yuma, AZ 2 2019 99
67148 49740 Yuma, AZ 3 2019 99
67149 49740 Yuma, AZ 4 2019 94
Desired Outcome:
data_df (with geo_id foreign key column added):
cbsa name month year units_total geo_id
id
1 10180 Abilene, TX 1 2004 22 1
2 10180 Abilene, TX 2 2004 12 1
3 10180 Abilene, TX 3 2004 44 1
4 10180 Abilene, TX 4 2004 32 1
5 10180 Abilene, TX 5 2004 21 1
...
67145 49740 Yuma, AZ 12 2018 68 523
67146 49740 Yuma, AZ 1 2019 86 523
67147 49740 Yuma, AZ 2 2019 99 523
67148 49740 Yuma, AZ 3 2019 99 523
67149 49740 Yuma, AZ 4 2019 94 523
Note: I'll be dropping cbsa and name from data_df after this, in case anybody is curious as to why I'm duplicating data.
First, because the index is not a proper column, make it a column so that it can be used in a later merge:
geo_dimension_df['geo_id'] = geo_dimension_df.index
Next, join data_df and geo_dimension_df:
data_df = pd.merge(data_df,
                   geo_dimension_df[['cbsa', 'name', 'geo_id']],
                   on=['cbsa', 'name'],
                   how='left')
Finally, drop the column you added to the geo_dimension_df at the start:
geo_dimension_df.drop('geo_id', axis=1, inplace=True)
After doing this, geo_dimension_df's index column, id, will now appear on data_df under the column geo_id:
data_df:
cbsa name month year units_total geo_id
id
1 10180 Abilene, TX 1 2004 22 1
2 10180 Abilene, TX 2 2004 12 1
3 10180 Abilene, TX 3 2004 44 1
4 10180 Abilene, TX 4 2004 32 1
5 10180 Abilene, TX 5 2004 21 1
...
67145 49740 Yuma, AZ 12 2018 68 523
67146 49740 Yuma, AZ 1 2019 86 523
67147 49740 Yuma, AZ 2 2019 99 523
67148 49740 Yuma, AZ 3 2019 99 523
67149 49740 Yuma, AZ 4 2019 94 523
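
(An alternative one-liner sketch, not from the answer above: it avoids mutating geo_dimension_df by renaming its index to geo_id on a temporary copy, assuming the index is named id as shown.)
data_df = data_df.merge(
    geo_dimension_df.reset_index().rename(columns={'id': 'geo_id'}),
    on=['cbsa', 'name'],
    how='left')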

Problems with isin pandas

Sorry, I just asked this question: Pythonic Way to have multiple Or's when conditioning in a dataframe, but marked it as answered prematurely because it passed my overly simplistic test case and isn't working more generally. (If it is possible to merge and reopen the question, that would be great...)
Here is the full issue:
sum(data['Name'].isin(eligible_players))
> 0
sum(data['Name'] == "Antonio Brown")
> 68
"Antonio Brown" in eligible_players
> True
Basically, if I understand correctly, I am showing that Antonio Brown is in eligible_players and he is in the dataframe. However, for some reason .isin() isn't working properly.
As I said in my prior question, I am looking for a way to check multiple ORs to select the proper rows.
____ EDIT ____
In[14]:
eligible_players
Out[14]:
Name
Antonio Brown 378
Demaryius Thomas 334
Jordy Nelson 319
Dez Bryant 309
Emmanuel Sanders 293
Odell Beckham 289
Julio Jones 288
Randall Cobb 284
Jeremy Maclin 267
T.Y. Hilton 255
Alshon Jeffery 252
Golden Tate 250
Mike Evans 236
DeAndre Hopkins 223
Calvin Johnson 220
Kelvin Benjamin 218
Julian Edelman 213
Anquan Boldin 213
Steve Smith 213
Roddy White 208
Brandon LaFell 205
Mike Wallace 205
A.J. Green 203
DeSean Jackson 200
Jordan Matthews 194
Eric Decker 194
Sammy Watkins 190
Torrey Smith 186
Andre Johnson 186
Jarvis Landry 178
Eddie Royal 176
Brandon Marshall 175
Vincent Jackson 175
Rueben Randle 174
Marques Colston 173
Mohamed Sanu 171
Keenan Allen 170
James Jones 168
Malcom Floyd 168
Kenny Stills 167
Greg Jennings 162
Kendall Wright 162
Doug Baldwin 160
Michael Floyd 159
Robert Woods 158
Name: Pts, dtype: int64
and
In [31]:
data.tail(110)
Out[31]:
Name Pts year week pos Team
28029 Dez Bryant 25 2014 17 WR DAL
28030 Antonio Brown 25 2014 17 WR PIT
28031 Jordan Matthews 24 2014 17 WR PHI
28032 Randall Cobb 23 2014 17 WR GB
28033 Rueben Randle 21 2014 17 WR NYG
28034 Demaryius Thomas 19 2014 17 WR DEN
28035 Calvin Johnson 19 2014 17 WR DET
28036 Torrey Smith 18 2014 17 WR BAL
28037 Roddy White 17 2014 17 WR ATL
28038 Steve Smith 17 2014 17 WR BAL
28039 DeSean Jackson 16 2014 17 WR WAS
28040 Mike Evans 16 2014 17 WR TB
28041 Anquan Boldin 16 2014 17 WR SF
28042 Adam Thielen 15 2014 17 WR MIN
28043 Cecil Shorts 15 2014 17 WR JAC
28044 A.J. Green 15 2014 17 WR CIN
28045 Jordy Nelson 14 2014 17 WR GB
28046 Brian Hartline 14 2014 17 WR MIA
28047 Robert Woods 13 2014 17 WR BUF
28048 Kenny Stills 13 2014 17 WR NO
28049 Emmanuel Sanders 13 2014 17 WR DEN
28050 Eddie Royal 13 2014 17 WR SD
28051 Marques Colston 13 2014 17 WR NO
28052 Chris Owusu 12 2014 17 WR NYJ
28053 Brandon LaFell 12 2014 17 WR NE
28054 Dontrelle Inman 12 2014 17 WR SD
28055 Reggie Wayne 11 2014 17 WR IND
28056 Paul Richardson 11 2014 17 WR SEA
28057 Cole Beasley 11 2014 17 WR DAL
28058 Jarvis Landry 10 2014 17 WR MIA
(Aside: once you posted what you were actually using, it only took seconds to see the problem.)
Series.isin(something) iterates over something to determine the set of things you want to test membership in. But your eligible_players isn't a list, it's a Series. And iteration over a Series is iteration over the values, even though membership (in) is with respect to the index:
In [72]: eligible_players = pd.Series([10,20,30], index=["A","B","C"])
In [73]: list(eligible_players)
Out[73]: [10, 20, 30]
In [74]: "A" in eligible_players
Out[74]: True
So in your case, you could use eligible_players.index instead to pass the right names:
In [75]: df = pd.DataFrame({"Name": ["A","B","C","D"]})
In [76]: df
Out[76]:
Name
0 A
1 B
2 C
3 D
In [77]: df["Name"].isin(eligible_players) # remember, this will be [10, 20, 30]
Out[77]:
0 False
1 False
2 False
3 False
Name: Name, dtype: bool
In [78]: df["Name"].isin(eligible_players.index)
Out[78]:
0 True
1 True
2 True
3 False
Name: Name, dtype: bool
In [79]: df["Name"].isin(eligible_players.index).sum()
Out[79]: 3
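
(Applied back to the original frames, a sketch: test membership against the index of eligible_players rather than its values.)
data['Name'].isin(eligible_players.index).sum()  # counts rows whose Name is an eligible player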