Python Pandas DataFrames compare with next rows

I have a dataframe like this.
col1
0 1
1 3
2 3
3 1
4 2
5 3
6 2
7 2
I want to create a column out by comparing each row with the next one: if the current row is less than the next row, out is 1; otherwise out is 0, like this sample.
col1 out
0 1 1 # 1<3 = 1
1 3 0 # 3<3 = 0
2 3 0 # 3<1 = 0
3 1 1 # 1<2 = 1
4 2 1 # 2<3 = 1
5 3 0 # 3<2 = 0
6 2 0 # 2<2 = 0
7 2 -
I tried this code:
def comp_out(a):
    return np.concatenate(([1], a[1:] > a[2:]))
df['out'] = comp_out(df.col1.values)
It shows an error like this:
ValueError: operands could not be broadcast together with shapes (11,) (10,)

Let's use shift to align each row with the next one (shift(-1) shifts the column up), then lt to test less-than, and astype to convert the booleans to 1/0:
df['out'] = df['col1'].lt(df['col1'].shift(-1)).astype(int)
col1 out
0 1 1
1 3 0
2 3 0
3 1 1
4 2 1
5 3 0
6 2 0
7 2 0
We can strip the last value with iloc if we'd rather leave the final row as NaN:
df['out'] = df['col1'].lt(df['col1'].shift(-1)).iloc[:-1].astype(int)
df:
col1 out
0 1 1.0
1 3 0.0
2 3 0.0
3 1 1.0
4 2 1.0
5 3 0.0
6 2 0.0
7 2 NaN
If we want to use the function, we should make sure both sides of the comparison are the same length by slicing off the last value:
def comp_out(a):
    return np.concatenate([a[:-1] < a[1:], [np.nan]])
df['out'] = comp_out(df['col1'].to_numpy())
df:
col1 out
0 1 1.0
1 3 0.0
2 3 0.0
3 1 1.0
4 2 1.0
5 3 0.0
6 2 0.0
7 2 NaN
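Putting the shift-based answer together as a minimal, self-contained sketch (using the question's col1 data):

```python
import pandas as pd

df = pd.DataFrame({'col1': [1, 3, 3, 1, 2, 3, 2, 2]})

# Compare each row against the next one; the final row has no
# successor, so shift(-1) yields NaN there and lt() returns False.
df['out'] = df['col1'].lt(df['col1'].shift(-1)).astype(int)

print(df['out'].tolist())  # [1, 0, 0, 1, 1, 0, 0, 0]
```

This keeps the whole column as int; use the iloc[:-1] variant above if the last row should be NaN instead of 0.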

Related

Subtraction in dataframe between rows

I want an easy subtraction of two values: I want to replace the value at [10, 150] with the result of ([10, 150] - [9, 150]).
Somehow the code does not like the "rows-1":
for columns in listofcolumns:
    rows = 0
    while rows < row_count:
        column = all_columns.index(columns)
        df_merged.iloc[rows, column] = (df_merged.iloc[rows, column] - df_merged.iloc[rows-1, columns])
        rows = rows + 1
It seems to be the case that the df_merged.iloc[rows-1, column] takes the last value of the column.
I used the exact same line in another script before and it worked
This would be an example of some columns
Col1 Col2
0 2
0 3
0 4
0 4
1 5
1 7
1 8
1 8
2 8
The output dataframe I want would look like this.
Col1 Col2
NaN NaN
0 1
0 1
0 0
1 1
0 2
0 1
0 1
1 1
If I understood what you want to do, this would be the solution:
data = {'A': [5,7,9,3,2], 'B': [1,4,6,1,2]}
df = pd.DataFrame(data)
df["A"] = df["A"] - df["B"]
DataFrame at the start
A B
0 5 1
1 7 4
2 9 6
3 3 1
4 2 2
DataFrame at the end
A B
0 4 1
1 3 4
2 3 6
3 2 1
4 0 2
Alternatively, df.diff(1) subtracts the previous row from each row in one step:
df.diff(1)
Col1 Col2
0 NaN NaN
1 0.0 1.0
2 0.0 1.0
3 0.0 0.0
4 1.0 1.0
5 0.0 2.0
6 0.0 1.0
7 0.0 0.0
8 1.0 0.0
The above is based on the following data:
Col1 Col2
0 0 2
1 0 3
2 0 4
3 0 4
4 1 5
5 1 7
6 1 8
7 1 8
8 2 8
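The diff approach can be sketched end-to-end with the question's example columns:

```python
import pandas as pd

df = pd.DataFrame({'Col1': [0, 0, 0, 0, 1, 1, 1, 1, 2],
                   'Col2': [2, 3, 4, 4, 5, 7, 8, 8, 8]})

# diff(1) subtracts the previous row from each row; the first row
# has no predecessor and becomes NaN, so the result is float-typed.
out = df.diff(1)

print(out)
```

This replaces the manual while-loop entirely and avoids the rows-1 wraparound problem (iloc[-1] pointing at the last row).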

Difference in score to next rank

I have a dataframe
Group Score Rank
1 0 3
1 4 1
1 2 2
2 3 2
2 1 3
2 7 1
I have to take the difference of the score from the next rank within each group. For example, in group 1: rank(1) - rank(2) = 4 - 2.
Expected output:
Group Score Rank Difference
1 0 3 0
1 4 1 2
1 2 2 2
2 3 2 2
2 1 3 0
2 7 1 4
You can try:
df = df.sort_values(['Group', 'Rank'], ascending=[True, False])
df['Difference'] = df.groupby('Group', as_index=False)['Score'].transform('diff').fillna(0).astype(int)
OUTPUT:
Group Score Rank Difference
0 1 0 3 0
2 1 2 2 2
1 1 4 1 2
4 2 1 3 0
3 2 3 2 2
5 2 7 1 4
NOTE: The result is sorted based on the rank column.
I think you can create a new column holding the values from the next rank by using shift(), and then calculate the difference. See the following code:
# Sort the dataframe
df = df.sort_values(['Group','Rank']).reset_index(drop=True)
# Shift up values by one row within a group
df['Score_next'] = df.groupby('Group')['Score'].shift(-1).fillna(0)
# Calculate the difference
df['Difference'] = df['Score'] - df['Score_next']
Here is the result:
print(df)
Group Score Rank Score_next Difference
0 1 4 1 2.0 2.0
1 1 2 2 0.0 2.0
2 1 0 3 0.0 0.0
3 2 7 1 3.0 4.0
4 2 3 2 1.0 2.0
5 2 1 3 0.0 1.0
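A self-contained sketch of the sort-then-diff idea, calling groupby(...).diff() directly rather than transform('diff') (same result on this data):

```python
import pandas as pd

df = pd.DataFrame({'Group': [1, 1, 1, 2, 2, 2],
                   'Score': [0, 4, 2, 3, 1, 7],
                   'Rank':  [3, 1, 2, 2, 3, 1]})

# Sort so ranks run from worst to best within each group; diff then
# gives score(next better rank) - score(current rank).
df = df.sort_values(['Group', 'Rank'], ascending=[True, False])
df['Difference'] = df.groupby('Group')['Score'].diff().fillna(0).astype(int)

print(df)
```

As the answer notes, the result is ordered by Rank within each group; sort back by the original index if the input order matters.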

Pandas Insert a new row after every nth row

I have a dataframe that looks like below:
**L_Type L_ID C_Type E_Code**
0 1 1 9
0 1 2 9
0 1 3 9
0 1 4 9
0 2 1 2
0 2 2 2
0 2 3 2
0 2 4 2
0 3 1 3
0 3 2 3
0 3 3 3
0 3 4 3
I need to insert a new row after every 4 rows and increment the value in the third column (C_Type) by 1, keeping the first two columns the same and leaving the last column empty, like the table below:
L_Type L_ID C_Type E_Code
0 1 1 9
0 1 2 9
0 1 3 9
0 1 4 9
0 1 5
0 2 1 2
0 2 2 2
0 2 3 2
0 2 4 2
0 2 5
0 3 1 3
0 3 2 3
0 3 3 3
0 3 4 3
0 3 5
I have searched other threads but could not figure out the exact solution:
How to insert n DataFrame to another every nth row in Pandas?
Insert new rows in pandas dataframe
You can select rows by slicing, add 1 to column C_Type, and add 0.5 to the index so the new rows sort after their blocks (this matters because the default method in DataFrame.sort_index is quicksort, which is not stable). Last, join together with concat, sort the index, and rebuild a default index with DataFrame.reset_index and drop=True:
df['C_Type'] = df['C_Type'].astype(int)
df2 = (df.iloc[3::4]
         .assign(C_Type=lambda x: x['C_Type'] + 1, E_Code=np.nan)
         .rename(lambda x: x + .5))
df1 = pd.concat([df, df2], sort=False).sort_index().reset_index(drop=True)
print (df1)
L_Type L_ID C_Type E_Code
0 0 1 1 9.0
1 0 1 2 9.0
2 0 1 3 9.0
3 0 1 4 9.0
4 0 1 5 NaN
5 0 2 1 2.0
6 0 2 2 2.0
7 0 2 3 2.0
8 0 2 4 2.0
9 0 2 5 NaN
10 0 3 1 3.0
11 0 3 2 3.0
12 0 3 3 3.0
13 0 3 4 3.0
14 0 3 5 NaN
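The half-index trick can be sketched as a runnable whole, constructing the question's dataframe inline:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'L_Type': [0] * 12,
                   'L_ID':   [1]*4 + [2]*4 + [3]*4,
                   'C_Type': [1, 2, 3, 4] * 3,
                   'E_Code': [9]*4 + [2]*4 + [3]*4})

df2 = (df.iloc[3::4]                                # last row of each block of 4
         .assign(C_Type=lambda x: x['C_Type'] + 1,  # increment C_Type
                 E_Code=np.nan)                     # leave E_Code empty
         .rename(lambda x: x + .5))                 # index 3.5, 7.5, 11.5 sorts after each block

df1 = pd.concat([df, df2]).sort_index().reset_index(drop=True)
print(df1)
```

The fractional indices guarantee each new row lands right after its block regardless of sort stability.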

Identify first non-zero element within group composed of multiple columns in pandas

I have a dataframe that looks like the following. The rightmost column is my desired column:
Group1 Group2 Value Target_Column
1 3 0 0
1 3 1 1
1 4 1 1
1 4 1 0
2 5 5 5
2 5 1 0
2 6 0 0
2 6 1 1
2 6 9 0
How do I identify the first non-zero value in a group that is made up of two columns(Group1 & Group2) and then create a column that shows the first non-zero value and shows all else as zeroes?
This question is very similar to one posed earlier here:
Identify first non-zero element within a group in pandas
but that solution gives an error on groups based on multiple columns.
I have tried:
import pandas as pd
dt = pd.DataFrame({'Group1': [1,1,1,1,2,2,2,2,2], 'Group2': [3,3,4,4,5,5,6,6,6], 'Value': [0,1,1,1,5,1,0,1,9]})
dt['Newcol']=0
dt.loc[dt.Value.ne(0).groupby(dt['Group1','Group2']).idxmax(),'Newcol']=dt.Value
Setup
df['flag'] = df.Value.ne(0)
Using numpy.where and assign:
df.assign(
    target=np.where(df.index.isin(df.groupby(['Group1', 'Group2']).flag.idxmax()),
                    df.Value, 0)
).drop(columns='flag')
Using loc and assign:
df.assign(
    target=df.loc[df.groupby(['Group1', 'Group2']).flag.idxmax(), 'Value']
).fillna(0).astype(int).drop(columns='flag')
Both produce:
Group1 Group2 Value target
0 1 3 0 0
1 1 3 1 1
2 1 4 1 1
3 1 4 1 0
4 2 5 5 5
5 2 5 1 0
6 2 6 0 0
7 2 6 1 1
8 2 6 9 0
The result may be off when a group contains duplicate values, since it is ambiguous which occurrence you need.
Using user3483203's setup:
df['flag'] = df.Value.ne(0)
df['Target'] = df.sort_values(['flag'], ascending=False).drop_duplicates(['Group1', 'Group2']).Value
df['Target'].fillna(0, inplace=True)
df
Out[20]:
Group1 Group2 Value Target_Column Target
0 1 3 0 0 0.0
1 1 3 1 1 1.0
2 1 4 1 1 1.0
3 1 4 1 0 0.0
4 2 5 5 5 5.0
5 2 5 1 0 0.0
6 2 6 0 0 0.0
7 2 6 1 1 1.0
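The idxmax approach can be condensed into one self-contained sketch (grouping the boolean flag by both key columns, without an intermediate flag column):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'Group1': [1, 1, 1, 1, 2, 2, 2, 2, 2],
                   'Group2': [3, 3, 4, 4, 5, 5, 6, 6, 6],
                   'Value':  [0, 1, 1, 1, 5, 1, 0, 1, 9]})

# idxmax on the boolean mask returns the index label of the first
# True (i.e. the first non-zero Value) within each group.
first_nonzero = df['Value'].ne(0).groupby([df['Group1'], df['Group2']]).idxmax()

# Keep Value at those positions, zero everywhere else.
df['Target'] = np.where(df.index.isin(first_nonzero), df['Value'], 0)
print(df['Target'].tolist())  # [0, 1, 1, 0, 5, 0, 0, 1, 0]
```

This fixes the asker's attempt: dt['Group1','Group2'] is a single (invalid) tuple key; grouping needs a list of columns or Series.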

How to fill values based on data present in column and an array? Pandas

Let's say I have a dataframe with NaNs in each group, like
df = pd.DataFrame({'data':[0,1,2,0,np.nan,2,np.nan,0,1],'group':[1,1,1,2,2,2,3,3,3]})
and a numpy array like
x = np.array([0,1,2])
Now, based on groups, how do I fill the missing values with the values from the numpy array, i.e.
df = pd.DataFrame({'data':[0,1,2,0,1,2,2,0,1],'group':[1,1,1,2,2,2,3,3,3]})
data group
0 0 1
1 1 1
2 2 1
3 0 2
4 1 2
5 2 2
6 2 3
7 0 3
8 1 3
Let me explain a bit about how the data should be filled. Consider group 2: the values of data are 0, np.nan, 2. The np.nan stands in for the value missing from the array [0, 1, 2], so the value to fill in place of the NaN is 1.
For multiple NaN values, take a group whose data is [np.nan, 0, np.nan]: the values to fill in place of the NaNs are 1 and 2, resulting in [1, 0, 2].
First find the value that is missing, then pass it to fillna:
def f(y):
    a = list(set(x) - set(y))
    a = 1 if len(a) == 0 else a[0]
    return y.fillna(a)
df['data'] = df.groupby('group')['data'].apply(f).astype(int)
print (df)
data group
0 0 1
1 1 1
2 2 1
3 0 2
4 1 2
5 2 2
6 2 3
7 0 3
8 1 3
EDIT:
df = pd.DataFrame({'data': [0, 1, 2, 0, np.nan, 2, np.nan, np.nan, 1, np.nan, np.nan, np.nan],
                   'group': [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4]})
x = np.array([0,1,2])
print (df)
data group
0 0.0 1
1 1.0 1
2 2.0 1
3 0.0 2
4 NaN 2
5 2.0 2
6 NaN 3
7 NaN 3
8 1.0 3
9 NaN 4
10 NaN 4
11 NaN 4
def f(y):
    a = list(set(x) - set(y))
    if len(a) == 1:
        return y.fillna(a[0])
    elif len(a) == 2:
        return y.fillna(a[0], limit=1).fillna(a[1])
    elif len(a) == 3:
        return pd.Series(x, index=y.index)
    else:
        return y
df['data'] = df.groupby('group')['data'].apply(f).astype(int)
print (df)
data group
0 0 1
1 1 1
2 2 1
3 0 2
4 1 2
5 2 2
6 0 3
7 2 3
8 1 3
9 0 4
10 1 4
11 2 4
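A runnable version of the edited answer, with two small changes worth noting: sorted() replaces list() on the set difference so that which NaN gets which value is deterministic, and group_keys=False keeps the original index through apply:

```python
import numpy as np
import pandas as pd

x = np.array([0, 1, 2])
df = pd.DataFrame({'data':  [0, 1, 2, 0, np.nan, 2, np.nan, np.nan, 1,
                             np.nan, np.nan, np.nan],
                   'group': [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4]})

def f(y):
    # values from x that are absent from this group's data,
    # in ascending order (set iteration order is not guaranteed)
    a = sorted(set(x) - set(y.dropna()))
    if len(a) == 1:
        return y.fillna(a[0])
    elif len(a) == 2:
        # fill the first NaN with the smaller missing value
        return y.fillna(a[0], limit=1).fillna(a[1])
    elif len(a) == 3:
        return pd.Series(x, index=y.index)
    return y  # nothing missing

df['data'] = df.groupby('group', group_keys=False)['data'].apply(f).astype(int)
print(df['data'].tolist())  # [0, 1, 2, 0, 1, 2, 0, 2, 1, 0, 1, 2]
```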
