Adding value to prior row in dataframe within a function - python

I am looking for a solution that produces column C (first row = 1000) within a function framework, i.e. without iteration but with vectorization:

index    A    B     C
1        0    0  1000
2      100    0   900
3        0    0   900
4        0  200  1100
5        0    0  1100
The function should look similar to this:

def calculate(self):
    df = pd.DataFrame()
    df['A'] = self.some_value
    df['B'] = self.some_other_value
    df['C'] = df['C'].shift(1) - df['A'] + df['B']  # ...

but the reference to the prior row does not work. What can be done to accomplish the task outlined above?

This should work. The recurrence C[i] = C[i-1] - A[i] + B[i] unrolls into the starting value plus a running sum of (B - A), which vectorizes as:

df['C'] = 1000 + (-df['A'] + df['B']).cumsum()

df
Out[80]:
     A    B     C
0    0    0  1000
1  100    0   900
2    0    0   900
3    0  200  1100
4    0    0  1100
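
For completeness, a minimal sketch of how the cumsum line slots into the original function framework (self.some_value and self.some_other_value are taken from the question; the starting balance of 1000 is assumed from the example):

import pandas as pd

def calculate(self):
    df = pd.DataFrame()
    df['A'] = self.some_value
    df['B'] = self.some_other_value
    # C[i] = C[i-1] - A[i] + B[i], unrolled as a running sum of (B - A)
    df['C'] = 1000 + (df['B'] - df['A']).cumsum()
    return df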


Groupby and aggregate using lambda functions

I am trying to groupby-aggregate a dataframe using lambda functions that are being created programmatically. This is so I can simulate a one-hot encoder of the categories present in a column.
Dataframe:

df = pd.DataFrame(np.array([[10, 'A'], [10, 'B'], [20, 'A'], [30, 'B']]),
                  columns=['ID', 'category'])

ID  category
10  A
10  B
20  A
30  B
Expected result:
ID A B
10 1 1
20 1 0
30 0 1
What I am trying:
one_hot_columns = ['A','B']
lambdas = [lambda x: 1 if x.eq(column).any() else 0 for column in one_hot_columns]
df_g = df.groupby('ID').category.agg(lambdas)
Result:
ID A B
10 1 1
20 0 0
30 1 1
But the above is not quite the expected result. Not sure what I am doing wrong.
I know I could do this with get_dummies, but using lambdas is more convenient for automation. Also, I can ensure the order of the output columns.
Use crosstab:
pd.crosstab(df.ID, df['category']).reset_index()
Output:
category ID A B
0 10 1 1
1 20 1 0
2 30 0 1
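One cosmetic note on the crosstab output: the leftover 'category' label in the header row is the name of the columns axis. If it is unwanted, it can be cleared after the fact (a minor tweak, not required for correctness):

res = pd.crosstab(df.ID, df['category']).reset_index()
res.columns.name = None  # drop the 'category' axis label from the header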
You can use pd.get_dummies with GroupBy.sum:
In [4331]: res = pd.get_dummies(df, columns=['category']).groupby('ID', as_index=False).sum()
In [4332]: res
Out[4332]:
ID category_A category_B
0 10 1 1
1 20 1 0
2 30 0 1
Or, use pd.concat with pd.get_dummies:
In [4329]: res = pd.concat([df, pd.get_dummies(df.category)], axis=1).groupby('ID', as_index=False).sum()
In [4330]: res
Out[4330]:
ID A B
0 10 1 1
1 20 1 0
2 30 0 1
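
As for why the programmatic lambdas misbehave: every lambda in the list comprehension closes over the same variable column, and Python resolves it when the lambda is called, not when it is defined, so all of them end up testing against the last value ('B'); that is exactly the pattern in the result above. A sketch of the fix, binding the value as a default argument (same df as in the question):

one_hot_columns = ['A', 'B']
# c=column freezes the current value of the loop variable at definition time
lambdas = [lambda x, c=column: int(x.eq(c).any()) for column in one_hot_columns]
df_g = df.groupby('ID').category.agg(lambdas)
df_g.columns = one_hot_columns  # agg auto-names them '<lambda_0>', '<lambda_1>', ...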

Replace upper and lower triangle values in a dataframe with zero, or keep only diagonal values

I have the following DataFrame as a toy example:
a = [5,2,6,8]
b = [2,10,19,16]
c = [3,8,15,17]
d = [3,8,12,20]
df = pd.DataFrame([a,b,c,d], columns = ['a','b','c','d'])
df
I want to create a new DataFrame df1 that keeps only the diagonal elements and converts upper and lower triangular values to zero.
My final dataset should look like:
a b c d
0 5 0 0 0
1 0 10 0 0
2 0 0 15 0
3 0 0 0 20
You could use numpy.diag twice: the inner call extracts the diagonal as a vector, and the outer call builds a square diagonal matrix from it:
df = pd.DataFrame(data=np.diag(np.diag(df)), columns=df.columns)
print(df)
Output
a b c d
0 5 0 0 0
1 0 10 0 0
2 0 0 15 0
3 0 0 0 20
import pandas as pd

def diag(df):
    # start from an all-zero frame with the same labels, then copy the diagonal over
    res_df = pd.DataFrame(0, index=df.index, columns=df.columns)
    for i in range(min(df.shape)):
        res_df.iloc[i, i] = df.iloc[i, i]
    return res_df
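
An equivalent vectorized alternative (a sketch, not part of the original answers) keeps the original index and column labels by masking with a boolean identity matrix:

import numpy as np

mask = np.eye(len(df), dtype=bool)  # True on the diagonal only
df1 = df.where(mask, 0)             # keep the diagonal, zero out everything else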

How to use groupby and apply with DataFrames to set all values in a group column to 1 if one of the column values is 1?

I have a DataFrame with the following structure:

[image: input DataFrame]

I want to transform the DataFrame so that for every unique user_id, if a column contains a 1, then the whole column should contain 1s for that user_id. Assume that I don't know all of the column names in advance. Based on the above input, the output would be:

[image: expected output]
I have the following code (please excuse how unsuccinct it is):

df = df.groupby('user_id').apply(self.transform_columns)

def transform_columns(self, x):
    x.apply(self.transform)

def transform(self, x):
    if 1 in x:
        for element in x:
            element = 1
    var = x

At the point of the transform function, x is definitely a series. For some reason this code is returning an empty DataFrame. Btw if you also know a way of excluding certain columns from the transformation (e.g. user_id) that would be great. Please help.
I'm going to explain how I transformed the data into the initial state for the input, as after attempting Jezrael's answer I am getting a KeyError on the 'user_id' column (which definitely exists in the df). The initial state of the data was as below:

[image: original long-format data]

I transformed it to the state shown at the top of the question with the following code:

df2 = self.add_support_columns(df)
df = df.join(df2)

def add_support_columns(self, df):
    df['pivot_column'] = df.apply(self.get_col_name, axis=1)
    df['flag'] = 1
    df = df.pivot(index='user_id', columns='pivot_column')['flag']
    df.reset_index(inplace=True)
    df = df.fillna(0)
    return df
You can use set_index + groupby + transform with any + reset_index:
It works because any treats 1s as True: if at least one value in the group is 1, the result is 1, otherwise 0.
df = pd.DataFrame({
    'user_id': [33,33,33,33,22,22],
    'q1': [1,0,0,0,0,0],
    'q2': [0,0,0,0,1,0],
    'q3': [0,1,0,0,0,1],
})
# fix the column order (reindex_axis was removed from pandas; reindex does the same)
df = df.reindex(columns=['user_id','q1','q2','q3'])
print(df)
user_id q1 q2 q3
0 33 1 0 0
1 33 0 0 1
2 33 0 0 0
3 33 0 0 0
4 22 0 1 0
5 22 0 0 1
df = (df.set_index('user_id')
        .groupby('user_id')  # or groupby(level=0)
        .transform(lambda x: 1 if x.any() else 0)
        .reset_index())
print (df)
user_id q1 q2 q3
0 33 1 0 1
1 33 1 0 1
2 33 1 0 1
3 33 1 0 1
4 22 0 1 1
5 22 0 1 1
Solution with join:
df = df[['user_id']].join(df.groupby('user_id').transform(lambda x: 1 if x.any() else 0))
print (df)
user_id q1 q2 q3
0 33 1 0 1
1 33 1 0 1
2 33 1 0 1
3 33 1 0 1
4 22 0 1 1
5 22 0 1 1
EDIT:
A more dynamic solution with difference + reindex:
# select only some columns to transform
cols = ['q1','q2']
# all other columns are left untouched
cols2 = df.columns.difference(cols)
df1 = df[cols2].join(df.groupby('user_id')[cols].transform(lambda x: 1 if x.any() else 0))
# if the original column order is needed
df1 = df1.reindex(columns=df.columns)
print(df1)
user_id q1 q2 q3
0 33 1 0 0
1 33 1 0 1
2 33 1 0 0
3 33 1 0 0
4 22 0 1 0
5 22 0 1 1
Also, the logic can be inverted:
# select only the columns which are not transformed
cols = ['user_id']
# all other columns are transformed
cols2 = df.columns.difference(cols)
df1 = df[cols].join(df.groupby('user_id')[cols2].transform(lambda x: 1 if x.any() else 0))
df1 = df1.reindex(columns=df.columns)
print(df1)
user_id q1 q2 q3
0 33 1 0 1
1 33 1 0 1
2 33 1 0 1
3 33 1 0 1
4 22 0 1 1
5 22 0 1 1
EDIT:
A more efficient solution is to return only the boolean mask and then convert it to int:
df1 = df.groupby('user_id').transform('any').astype(int)
Timings:
In [170]: %timeit (df.groupby('user_id').transform(lambda x: 1 if x.any() else 0))
1 loop, best of 3: 514 ms per loop
In [171]: %timeit (df.groupby('user_id').transform('any').astype(int))
10 loops, best of 3: 84 ms per loop
Sample for timings:

np.random.seed(123)
N = 1000
df = pd.DataFrame(np.random.choice([0,1], size=(N, 3)),
                  index=np.random.randint(1000, size=N))
df.index.name = 'user_id'
df = df.add_prefix('q').reset_index()
#print (df)
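
Putting the two edits together, the question's wish to exclude certain columns (e.g. user_id) combines naturally with the faster built-in 'any' (a sketch; column names are assumed from the sample data above):

cols = df.columns.difference(['user_id'])  # transform everything except user_id
df[cols] = df.groupby('user_id')[cols].transform('any').astype(int)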
