Applying a condition to a df to get the aggregate counts - python

I have a df structured like this, where each year has the same rows/entries:
Year Name Expire
2001 Bob 2002
2001 Tim 2003
2001 Will 2004
2002 Bob 2002
2002 Tim 2003
2002 Will 2004
2003 Bob 2002
2003 Tim 2003
2003 Will 2004
I have subsetted the df with df[df['Expire'] > df['Year']]:
2001 Bob 2002
2001 Tim 2003
2001 Will 2004
2002 Tim 2003
2002 Will 2004
2003 Will 2004
Now, for each year, I want to count how many names have expired, something like:
Year count
2001 0
2002 1
2003 1
How can I accomplish this? I can't do df[df['Expire'] <= df['Year']].groupby('Year')['Name'].agg(['count']), because that would return unnecessary rows for me. Is there a way to count only the last instance?

You can use groupby with a boolean mask and aggregate with sum:
print (df['Expire']<= df['Year'])
0 False
1 False
2 False
3 True
4 False
5 False
6 True
7 True
8 False
dtype: bool
df = (df['Expire'] <= df['Year']).groupby(df['Year']).sum().astype(int).reset_index(name='count')
print (df)
Year count
0 2001 0
1 2002 1
2 2003 2
Verifying:
print (df[df['Expire']<= df['Year']])
Year Name Expire
3 2002 Bob 2002
6 2003 Bob 2002
7 2003 Tim 2003
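For reproducing the above, a minimal construction of the sample frame (a sketch matching the question's data):
import pandas as pd

df = pd.DataFrame({
    'Year': [2001]*3 + [2002]*3 + [2003]*3,
    'Name': ['Bob', 'Tim', 'Will'] * 3,
    'Expire': [2002, 2003, 2004] * 3,
})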

IIUC: You can use .apply and sum the True values, i.e.
df.groupby('Year').apply(lambda x: (x['Expire']<=x['Year']).sum())
Output:
Year
2001 0
2002 1
2003 2
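If you also want the Year/count column layout from the question's expected output, you can name the result and reset the index (a small, assumed extension of the answer above):
out = df.groupby('Year').apply(lambda x: (x['Expire'] <= x['Year']).sum())
print(out.reset_index(name='count'))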

Related

Python summing selected values in a column that match given condition

Here's the data after the preliminary data cleaning.
year  country  employees
2001  US               9
2001  Canada          81
2001  France          22
2001  Japan           31
2001  Chile            7
2001  Mexico          15
2001  Total          165
2002  US               5
2002  Canada          80
2002  France          20
2002  Japan           30
2002  Egypt           35
2002  Total          170
...   ...            ...
2010  US              32
...   ...            ...
What I want to get is the table below, which sums all countries except US, Canada, France, and Japan into 'Others'. The set of countries varies every year from 2001 to 2010, so I was thinking of a for loop with an if condition to loop over every year.
year  country  employees
2001  US               9
2001  Canada          81
2001  France          22
2001  Japan           31
2001  Others          22
2001  Total          165
2002  US               5
2002  Canada          80
2002  France          20
2002  Japan           30
2002  Others          35
2002  Total          170
Any leads would be greatly appreciated!
You may consider dropping Total from your dataframe.
However, as stated, your question can be solved by using Series.where to map away values that you don't recognize:
country = df["country"].where(df["country"].isin(["US", "Canada", "France", "Japan", "Total"]), "Others")
df.groupby([df["year"], country]).sum(numeric_only=True)
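A minimal end-to-end sketch of the same idea, rebuilding the 2001/2002 rows from the question (data as shown above):
import pandas as pd

df = pd.DataFrame({
    'year': [2001]*7 + [2002]*6,
    'country': ['US', 'Canada', 'France', 'Japan', 'Chile', 'Mexico', 'Total',
                'US', 'Canada', 'France', 'Japan', 'Egypt', 'Total'],
    'employees': [9, 81, 22, 31, 7, 15, 165, 5, 80, 20, 30, 35, 170],
})

# anything outside the keep-list becomes 'Others', then aggregate per year
keep = ['US', 'Canada', 'France', 'Japan', 'Total']
country = df['country'].where(df['country'].isin(keep), 'Others')
print(df.groupby([df['year'], country], sort=False).sum(numeric_only=True).reset_index())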

How to add null value rows into pandas dataframe for missing years in a multi-line chart plot

I am building a chart from a dataframe with a series of yearly values for six countries. This table is created by an SQL query and then passed to pandas with the read_sql command...
country date value
0 CA 2000 123
1 CA 2001 125
2 US 1999 223
3 US 2000 235
4 US 2001 344
5 US 2002 355
...
Unfortunately, not every year has a value in each country; nevertheless, the chart tool requires each country to have the same number of years in the dataframe. Years that have no value need a NaN (null) row added.
In the end, I want the pandas dataframe to look as follows for all six countries....
country date value
0 CA 1999 NaN
1 CA 2000 123
2 CA 2001 125
3 CA 2002 NaN
4 US 1999 223
5 US 2000 235
6 US 2001 344
7 US 2002 355
8 DE 1999 NaN
9 DE 2000 NaN
10 DE 2001 423
11 DE 2002 326
...
Are there any tools or shortcuts for determining min-max dates and then ensuring a new nan row is created if needed?
Use an unstack with stack trick (with dropna=False):
df = df.set_index(['country','date']).unstack().stack(dropna=False).reset_index()
print (df)
country date value
0 CA 1999 NaN
1 CA 2000 123.0
2 CA 2001 125.0
3 CA 2002 NaN
4 US 1999 223.0
5 US 2000 235.0
6 US 2001 344.0
7 US 2002 355.0
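The dropna=False is the key detail: unstack creates NaN cells for every missing (country, date) pair, and stacking them back with the default dropna=True would silently drop those rows again.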
Another idea with DataFrame.reindex:
mux = pd.MultiIndex.from_product([df['country'].unique(),
                                  range(df['date'].min(), df['date'].max() + 1)],
                                 names=['country','date'])
df = df.set_index(['country','date']).reindex(mux).reset_index()
print (df)
country date value
0 CA 1999 NaN
1 CA 2000 123.0
2 CA 2001 125.0
3 CA 2002 NaN
4 US 1999 223.0
5 US 2000 235.0
6 US 2001 344.0
7 US 2002 355.0
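One practical difference between the two ideas: the unstack/stack trick only creates rows for dates that appear somewhere in the data, while the from_product reindex covers every year in the min-max range even if no country has it. A runnable check of the reindex route (a sketch built from the sample rows above):
import pandas as pd

df = pd.DataFrame({'country': ['CA', 'CA', 'US', 'US', 'US', 'US'],
                   'date': [2000, 2001, 1999, 2000, 2001, 2002],
                   'value': [123, 125, 223, 235, 344, 355]})

mux = pd.MultiIndex.from_product([df['country'].unique(),
                                  range(df['date'].min(), df['date'].max() + 1)],
                                 names=['country', 'date'])
print(df.set_index(['country', 'date']).reindex(mux).reset_index())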

Merge dataframes

I'm trying to merge these two dataframes:
df1=
pais ano cantidad
0 Chile 2000 10
1 Chile 2001 11
2 Chile 2002 12
df2=
pais ano cantidad
0 Chile 1999 0
1 Chile 2000 0
2 Chile 2001 0
3 Chile 2002 0
4 Chile 2003 0
I'm trying to merge df1 into df2 and replace the existing 'ano' rows with those from df1. This is the code that I'm trying right now and what I'm getting:
df=df1.combine_first(df2)
df=
pais ano cantidad
0 Chile 2000.0 10.0
1 Chile 2001.0 11.0
2 Chile 2002.0 12.0
3 Chile 2002.0 0.0
4 Chile 2003.0 0.0
As you can see, the row corresponding to 1999 is missing, and the one for 2002 with cantidad = 0 shouldn't be there. My desired output is this:
df=
pais ano cantidad
0 Chile 1999 0
1 Chile 2000 10
2 Chile 2001 11
3 Chile 2002 12
4 Chile 2003 0
Any ideas? Thank you!
Add the how='outer' param to the merge.
By default, merge works with "inner", which means it takes only the values that are in both dataframes (the intersection), while you want the union.
Also, you may want to add on="ano" to declare on which column you want to merge. It may not be needed in your case, but it's worth checking.
Please see Pandas Merging 101 for more details.
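A sketch of that outer-merge route, assuming the frames from the question (the suffixes and the coalescing step are illustrative, not part of the original answer):
out = df2.merge(df1, on=['pais', 'ano'], how='outer', suffixes=('_old', '_new'))
out['cantidad'] = out['cantidad_new'].fillna(out['cantidad_old']).astype(int)
print(out[['pais', 'ano', 'cantidad']].sort_values('ano', ignore_index=True))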
You can perform a left join on df2 and fillna the missing values from df2.cantidad. I'm joining on pais and ano because I assume your real dataframe has more countries than 'Chile'.
df = df2[['pais','ano']].merge(df1, on=['pais','ano'], how='left').fillna({'cantidad': df2.cantidad})
df.cantidad = df.cantidad.astype('int')
df
Out:
pais ano cantidad
0 Chile 1999 0
1 Chile 2000 10
2 Chile 2001 11
3 Chile 2002 12
4 Chile 2003 0
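Note on the fillna step: it works because the left join here keeps df2's row order and RangeIndex, so df2.cantidad lines up with the merged frame row for row. If the orders could differ, aligning on ['pais', 'ano'] first would be the safer route.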

Pandas Melt with Multiple Value Vars

I have a data set which is in wide format like this
Index Country Variable 2000 2001 2002 2003 2004 2005
0 Argentina var1 12 15 18 17 23 29
1 Argentina var2 1 3 2 5 7 5
2 Brazil var1 20 23 25 29 31 32
3 Brazil var2 0 1 2 2 3 3
I want to reshape my data to long so that year, var1, and var2 become new columns
Index Country year var1 var2
0 Argentina 2000 12 1
1 Argentina 2001 15 3
2 Argentina 2002 18 2
....
6 Brazil 2000 20 0
7 Brazil 2001 23 1
I got my code to work when I only had one variable by writing
df=(pd.melt(df,id_vars='Country',value_name='Var1', var_name='year'))
I can't figure out how to do this for var1, var2, var3, etc.
Instead of melt, you can use a combination of stack and unstack:
(df.set_index(['Country', 'Variable'])
   .rename_axis(['Year'], axis=1)
   .stack()
   .unstack('Variable')
   .reset_index())
Variable Country Year var1 var2
0 Argentina 2000 12 1
1 Argentina 2001 15 3
2 Argentina 2002 18 2
3 Argentina 2003 17 5
4 Argentina 2004 23 7
5 Argentina 2005 29 5
6 Brazil 2000 20 0
7 Brazil 2001 23 1
8 Brazil 2002 25 2
9 Brazil 2003 29 2
10 Brazil 2004 31 3
11 Brazil 2005 32 3
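For reproducing the options below, a minimal construction of the wide frame (a sketch; df1 refers to the same data):
import pandas as pd

df = pd.DataFrame({
    'Country': ['Argentina', 'Argentina', 'Brazil', 'Brazil'],
    'Variable': ['var1', 'var2', 'var1', 'var2'],
    2000: [12, 1, 20, 0], 2001: [15, 3, 23, 1], 2002: [18, 2, 25, 2],
    2003: [17, 5, 29, 2], 2004: [23, 7, 31, 3], 2005: [29, 5, 32, 3],
})
df1 = df.copy()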
Option 1
Using melt then unstack for var1, var2, etc...
(df1.melt(id_vars=['Country','Variable'], var_name='Year')
    .set_index(['Country','Year','Variable'])
    .squeeze()
    .unstack()
    .reset_index())
Output:
Variable Country Year var1 var2
0 Argentina 2000 12 1
1 Argentina 2001 15 3
2 Argentina 2002 18 2
3 Argentina 2003 17 5
4 Argentina 2004 23 7
5 Argentina 2005 29 5
6 Brazil 2000 20 0
7 Brazil 2001 23 1
8 Brazil 2002 25 2
9 Brazil 2003 29 2
10 Brazil 2004 31 3
11 Brazil 2005 32 3
Option 2
Using pivot then stack:
(df1.pivot(index='Country', columns='Variable')
    .stack(0)
    .rename_axis(['Country','Year'])
    .reset_index())
Output:
Variable Country Year var1 var2
0 Argentina 2000 12 1
1 Argentina 2001 15 3
2 Argentina 2002 18 2
3 Argentina 2003 17 5
4 Argentina 2004 23 7
5 Argentina 2005 29 5
6 Brazil 2000 20 0
7 Brazil 2001 23 1
8 Brazil 2002 25 2
9 Brazil 2003 29 2
10 Brazil 2004 31 3
11 Brazil 2005 32 3
Option 3 (ayhan's solution)
Using set_index, stack, and unstack:
(df.set_index(['Country', 'Variable'])
   .rename_axis(['Year'], axis=1)
   .stack()
   .unstack('Variable')
   .reset_index())
Output:
Variable Country Year var1 var2
0 Argentina 2000 12 1
1 Argentina 2001 15 3
2 Argentina 2002 18 2
3 Argentina 2003 17 5
4 Argentina 2004 23 7
5 Argentina 2005 29 5
6 Brazil 2000 20 0
7 Brazil 2001 23 1
8 Brazil 2002 25 2
9 Brazil 2003 29 2
10 Brazil 2004 31 3
11 Brazil 2005 32 3
Option 4 (numpy)
import numpy as np

# keep only the year columns as a 2-D array
years = df.drop(['Country', 'Variable'], axis=1)
y = years.values
m = y.shape[1]

# integer-encode the Country and Variable labels
f0, u0 = pd.factorize(df.Country.values)
f1, u1 = pd.factorize(df.Variable.values)

# fill a (variable, country, year) cube with the values
w = np.empty((u1.size, u0.size, m), dtype=y.dtype)
w[f1, f0] = y

# one row per (country, year) pair, one column per variable
results = pd.DataFrame(dict(
    Country=u0.repeat(m),
    Year=np.tile(years.columns.values, u0.size),
)).join(pd.DataFrame(w.reshape(u1.size, -1).T, columns=u1))
results
Country Year var1 var2
0 Argentina 2000 12 1
1 Argentina 2001 15 3
2 Argentina 2002 18 2
3 Argentina 2003 17 5
4 Argentina 2004 23 7
5 Argentina 2005 29 5
6 Brazil 2000 20 0
7 Brazil 2001 23 1
8 Brazil 2002 25 2
9 Brazil 2003 29 2
10 Brazil 2004 31 3
11 Brazil 2005 32 3

pandas remove observations depending on multi-index level value

I have a multi-index data frame with levels 'id' and 'year':
value
id year
10 2001 100
2002 200
11 2001 110
12 2001 200
2002 300
13 2002 210
I want to keep the ids that have values for both years 2001 and 2002. This means I want to obtain:
value
id year
10 2001 100
2002 200
12 2001 200
2002 300
I know that df.loc[df.index.get_level_values('year') == 2002] works but I cannot extend that to account for both 2001 and 2002.
Thanks in advance.
How about using groupby and filter:
import numpy as np

df.groupby(level=0).filter(
    lambda g: np.in1d([2001, 2002], g.index.get_level_values(1)).all()
)
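A numpy-free variant of the same filter, a sketch assuming the index level names from the question ('id' and 'year'):
df.groupby(level='id').filter(
    lambda g: {2001, 2002}.issubset(g.index.get_level_values('year'))
)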
