finding transitive relation between two columns in pandas - python

I have a pandas data frame with 2 columns - user1 and user2 - something like this:
Now I want to compute the transitive relation, so that if A is related to B, B to C, and C to D, the output is a list like "A-B-C-D" in one group and "E-F-G" in another group.
Thanks

If you have just 2 groups, you can do it this way, but it only works for 2 groups and cannot be generalized:
x = []
y = []
# seed the first group with the first pair
x.append(df['user1'][0])
x.append(df['user2'][0])
for index, i in enumerate(df['user1']):
    if df['user1'][index] in x:
        x.append(df['user2'][index])
    else:
        y.append(df['user1'][index])
        y.append(df['user2'][index])
x = set(x)
y = set(y)

If you want to find all the transitive relationships, you most likely need recursion. Perhaps the following piece of code may help:
import pandas as pd

data = {'user1': ['A', 'A', 'B', 'C', 'E', 'F'],
        'user2': ['B', 'C', 'C', 'D', 'F', 'G']}
df = pd.DataFrame(data)
print(df)

# this method is similar to a common table expression (CTE) in SQL
def cte(df_anchor, df_ref, level):
    if level == 0:
        df_anchor.insert(0, 'user_root', df_anchor['user1'])
        df_anchor['level'] = 0
        df_anchor['relationship'] = df_anchor['user1'] + '-' + df_anchor['user2']
        _df_anchor = df_anchor
    if level > 0:
        _df_anchor = df_anchor[df_anchor.level == level]
    _df = pd.merge(_df_anchor, df_ref, left_on='user2', right_on='user1',
                   how='inner', suffixes=('', '_x'))
    if not _df.empty:
        _df['relationship'] = _df['relationship'] + '-' + _df['user2_x']
        _df['level'] = _df['level'] + 1
        _df = _df[['user_root', 'user1_x', 'user2_x', 'level', 'relationship']].rename(
            columns={'user1_x': 'user1', 'user2_x': 'user2'})
        df_anchor_new = pd.concat([df_anchor, _df])
        return cte(df_anchor_new, df_ref, level + 1)
    else:
        return df_anchor

df_rel = cte(df, df, 0)
print("\nall relationship=\n", df_rel)
print("\nall relationship related to A=\n", df_rel[df_rel.user_root == 'A'])
user1 user2
0 A B
1 A C
2 B C
3 C D
4 E F
5 F G
all relationship=
user_root user1 user2 level relationship
0 A A B 0 A-B
1 A A C 0 A-C
2 B B C 0 B-C
3 C C D 0 C-D
4 E E F 0 E-F
5 F F G 0 F-G
0 A B C 1 A-B-C
1 A C D 1 A-C-D
2 B C D 1 B-C-D
3 E F G 1 E-F-G
0 A C D 2 A-B-C-D
all relationship related to A=
user_root user1 user2 level relationship
0 A A B 0 A-B
1 A A C 0 A-C
0 A B C 1 A-B-C
1 A C D 1 A-C-D
0 A C D 2 A-B-C-D
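If you only need the final groups rather than every intermediate relationship, this is a connected-components problem. A minimal sketch using networkx (an alternative approach, not part of the answer above) could look like this:
import networkx as nx
import pandas as pd

data = {'user1': ['A', 'A', 'B', 'C', 'E', 'F'],
        'user2': ['B', 'C', 'C', 'D', 'F', 'G']}
df = pd.DataFrame(data)

# each row is an edge; the transitive groups are the connected components
G = nx.from_pandas_edgelist(df, source='user1', target='user2')
groups = ['-'.join(sorted(comp)) for comp in nx.connected_components(G)]
print(groups)  # ['A-B-C-D', 'E-F-G']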

Related

How do I give score (0/1) to CSV rows

My csv file row column data looks like this -
a a a a a
b b b b b
c c c c c
d d d d d
a b c d e
a d b c c
When a row follows the pattern of rows 1-5, I want to return the value 0.
When a row looks like row 6, or contains random letters (not like rows 1-5), I want to return the value 1.
How do I do this using Python? It must be done using a CSV file.
You can read your CSV file into a pandas DataFrame using:
import pandas as pd
df = pd.read_csv('file.csv', header=None)  # 'file.csv' stands in for your file path
output:
0 1 2 3 4
0 a a a a a
1 b b b b b
2 c c c c c
3 d d d d d
4 a b c d e
5 a d b c c
Then use nunique to count the number of unique values per row: a row is valid (scores 0) when the count is either 1 (all values identical) or 5, the number of columns (all values distinct). Use between to flag everything in between with 1:
df.nunique(axis=1).between(2, len(df.columns) - 1).astype(int)
output:
0 0
1 0
2 0
3 0
4 0
5 1
dtype: int64
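Since the requirement is to work from a CSV file end to end, a minimal sketch that reads the rows, computes the scores, and writes them back out might look like this (both file names are placeholders):
import pandas as pd

df = pd.read_csv('input.csv', header=None)   # placeholder input path
scores = df.nunique(axis=1).between(2, len(df.columns) - 1).astype(int)
scores.to_csv('scores.csv', index=False, header=False)  # placeholder output path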

Create a dataframe of all combinations of columns names per row based on mutual presence of columns pairs

I'm trying to create a dataframe based on another dataframe and a specific condition.
Given the pandas dataframe above, I'd like to get a two-column dataframe in which each row holds a pair of column names whose values are both different from 0 (i.e. they coexist in a given row), starting with the first row.
For example, for the part of the image above, the new dataframe I want looks like the following:
and so on...
Does anyone have a tip on how I can do it? I'm struggling... Thanks!
As you didn't provide a text example, here is a dummy one:
>>> df
A B C D E
0 0 1 1 0 1
1 1 1 1 1 1
2 1 0 0 1 0
3 0 0 0 0 1
4 0 1 1 0 0
you could use a combination of masking, explode and itertools.combinations:
from itertools import combinations

mask = df.gt(0)  # True where the value is non-zero
# replace each True cell with its column name, then build the pairs row by row
series = (mask * df.columns).apply(
    lambda x: list(combinations(set(x).difference(['']), 2)), axis=1)
pd.DataFrame(series.explode().dropna().to_list(), columns=['X', 'Y'])
output:
X Y
0 C E
1 C B
2 E B
3 E D
4 E C
5 E B
6 E A
7 D C
8 D B
9 D A
10 C B
11 C A
12 B A
13 A D
14 C B
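As a point of comparison, the same pairs can be built with an explicit loop over the rows, which also preserves the original column order. This is just a sketch of an equivalent approach (using the dummy df above), not part of the answer:
from itertools import combinations

import pandas as pd

pairs = [pair
         for _, row in df.iterrows()
         for pair in combinations(df.columns[row.gt(0)], 2)]
result = pd.DataFrame(pairs, columns=['X', 'Y'])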

perform df.loc to groupby df

I have a df consisting of person, origin (O) and destination (D):
df = pd.DataFrame({'PersonID':['1','1','2','2','2','3'],'O':['A','B','C','B','A','X'],'D':['B','A','B','A','B','Y']})
the df:
PersonID O D
1 A B
1 B A
2 C B
2 B A
2 A B
3 X Y
I grouped the df with df_grouped = df.groupby(['O','D']) and matched it with another dataframe, taxi:
TaxiID O D
T1 B A
T2 A B
T3 C B
Similarly, I grouped taxi by its O and D. Then I merged the two after aggregating and counting the PersonID and TaxiID per O-D pair, to see how many taxis are available for how many people:
O D PersonID TaxiID
count count
A B 2 1
B A 2 1
C B 1 1
Now I want to use df.loc to take only those PersonID values that were counted in the merged dataframe. How can I do this? I've tried:
seek = df.loc[df.PersonID.isin(merged['PersonID'])]
but it returns an empty dataframe. What can I do?
edit: I attach the complete code for this case using dummy data
df = pd.DataFrame({'PersonID':['1','1','2','2','2','3'],'O':['A','B','C','B','A','X'],'D':['B','A','B','A','B','Y']})
taxi = pd.DataFrame({'TaxiID':['T1','T2','T3'],'O':['B','A','C'],'D':['A','B','B']})
df_grouped = df.groupby(['O','D'])
taxi_grouped = taxi.groupby(['O','D'])
dfm = df_grouped.agg({'PersonID':['count',list]}).reset_index()
tgm = taxi_grouped.agg({'TaxiID':['count',list]}).reset_index()
merged = pd.merge(dfm, tgm, how='inner')
seek = df.loc[df.PersonID.isin(merged['PersonID'])]
Your isin check returns an empty result because merged has MultiIndex columns after the aggregation, so merged['PersonID'] selects the count/list sub-columns rather than the IDs themselves. Select the list column by its full tuple and use Series.explode to get scalars out of the nested lists:
seek = df.loc[df.PersonID.isin(merged[('PersonID', 'list')].explode().unique())]
print (seek)
PersonID O D
0 1 A B
1 1 B A
2 2 C B
3 2 B A
4 2 A B
For better performance, it is possible to use a set comprehension to flatten the nested lists:
seek = df.loc[df.PersonID.isin(set(z for x in merged[('PersonID', 'list')] for z in x))]
print (seek)
PersonID O D
0 1 A B
1 1 B A
2 2 C B
3 2 B A
4 2 A B

How can I remove a certain type of values in a group in pandas?

I have the following dataframe which is a small part of a bigger one:
acc_num trans_cdi
0 1 c
1 1 d
3 3 d
4 3 c
5 3 d
6 3 d
I'd like to delete the trailing rows of each group whose value is "d". So my desired dataframe would look like this:
acc_num trans_cdi
0 1 c
3 3 d
4 3 c
So the point is that a group shouldn't have "d" as its last item.
There is code that deletes the last row in each group where the last item is "d", but I have to run it twice to delete all the trailing "d" rows in group 3, for example:
clean_3 = clean_2[clean_2.groupby('acc_num')['trans_cdi'].transform(lambda x: (x.iloc[-1] != "d") | (x.index != x.index[-1]))]
Is there a better solution to this problem?
We can use idxmax here after reversing the data with [::-1], then take the index of the rows to keep:
grps = df['trans_cdi'].ne('d').groupby(df['acc_num'], group_keys=False)
idx = grps.apply(lambda x: x.loc[:x[::-1].idxmax()]).index
df.loc[idx]
acc_num trans_cdi
0 1 c
3 3 d
4 3 c
Testing on consecutive values:
acc_num trans_cdi
0 1 c
1 1 d <--- 'd' between two 'c' rows, so we need to keep it
2 1 c
3 1 d <--- row to be dropped
4 3 d
5 3 c
6 3 d
7 3 d
grps = df['trans_cdi'].ne('d').groupby(df['acc_num'], group_keys=False)
idx = grps.apply(lambda x: x.loc[:x[::-1].idxmax()]).index
df.loc[idx]
acc_num trans_cdi
0 1 c
1 1 d
2 1 c
4 3 d
5 3 c
Still gives correct result.
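The same idea can also be written without apply: reverse the mask, take a grouped cumulative maximum so that everything from the last non-'d' row of each group backwards is flagged, and reverse again. This is an alternative sketch, not part of the answer above:
# True for every row up to (and including) the last non-'d' row of its group
mask = df['trans_cdi'].ne('d')
keep = mask[::-1].groupby(df['acc_num']).cummax()[::-1]
df[keep]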
You can try this not-so-pandorable solution.
def r(x):
    # count the trailing 'd' rows at the end of the group
    c = 0
    for v in x['trans_cdi'].iloc[::-1]:
        if v == 'd':
            c = c + 1
        else:
            break
    return x.iloc[:-c] if c else x  # x.iloc[:-0] would drop the whole group

df.groupby('acc_num', group_keys=False).apply(r)
acc_num trans_cdi
0 1 c
3 3 d
4 3 c
First, use shift to compare each row with the previous one and, via ~, filter out rows where both values are 'd'.
Second, make sure the value of the last remaining row is not 'd'. If it is, delete that row.
code:
df = df[~((df['trans_cdi'] == 'd') & (df.shift(1)['trans_cdi'] == 'd'))]
if df['trans_cdi'].iloc[-1] == 'd': df = df.iloc[0:-1]
df
input (I tested it on more input data to ensure there were no bugs):
acc_num trans_cdi
0 1 c
1 1 d
3 3 d
4 3 c
5 3 d
6 3 d
7 1 d
8 1 d
9 3 c
10 3 c
11 3 d
12 3 d
output:
acc_num trans_cdi
0 1 c
1 1 d
4 3 c
5 3 d
9 3 c
10 3 c
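One caveat: df.shift(1) here compares across group boundaries (for example, row 7 of acc_num 1 is compared with row 6 of acc_num 3). If that matters for your data, a group-aware sketch of the same two steps (my assumption, not from the answer above) would be:
# step 1: drop consecutive 'd' rows, comparing only within each group
prev = df.groupby('acc_num')['trans_cdi'].shift(1)
out = df[~((df['trans_cdi'] == 'd') & (prev == 'd'))]

# step 2: drop each group's last remaining row if it is still 'd'
last = out.groupby('acc_num').tail(1)
out = out.drop(last[last['trans_cdi'] == 'd'].index)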

How to get top 5 items for each group in grouped dataframe?

df = pd.DataFrame({'Weekday': list('MMMMMMMMMMTTTTTTTTTT'),
                   'Items': list("AAABBCDEFGBBBCCADEFG")})
grouped = df.groupby(['Weekday','Items'], sort=True).agg({'Items': 'count'})
Then, I get the result of grouped:
Weekday Items
M A 3
B 2
C 1
D 1
E 1
F 1
G 1
T A 1
B 3
C 2
D 1
E 1
F 1
G 1
So how do I output the top 5 items for each Weekday (5 for 'M' and 5 for 'T'), like:
Weekday Items
M A 3
B 2
C 1
D 1
E 1
T B 3
C 2
A 1
D 1
E 1
Can anyone help with this?
df = pd.DataFrame({'Weekday': list('MMMMMMMMMMTTTTTTTTTT'),
                   'Item': list("AAABBCDEFGBBBCCADEFG")})
grouped = df.groupby(['Weekday','Item'], sort=True).agg(count=('Item', 'count'))
grouped.sort_values(['Weekday','count'], ascending=False).groupby('Weekday').head(5)
count
Weekday Item
T B 3
C 2
A 1
D 1
E 1
M A 3
B 2
C 1
D 1
E 1
grouped = (df.groupby(['Weekday','Items'])
             .Items.agg(counter='count')
             .groupby(['Weekday'], as_index=False))

pd.concat([group.nlargest(5, 'counter') for name, group in grouped])
counter
Weekday Items
M A 3
B 2
C 1
D 1
E 1
T B 3
C 2
A 1
D 1
E 1
Group by twice: the first groupby computes the counter variable, and the second allows an iteration through the groups to get the top 5 with nlargest. The last step combines the dataframes in the list into one.
vb_rise's solution should be faster as it avoids the iteration.
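For reference, a shorter sketch of the same result (an alternative, not one of the answers above) relies on value_counts already sorting counts in descending order within each group:
top5 = (df.groupby('Weekday')['Items']
          .value_counts()
          .groupby(level='Weekday')
          .head(5))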
