I have a df of tennis results and I would like to be able to see how many days it's been since each player last won a game.
This is what my df looks like:

Player 1    Player 2    Date          p1_win    p2_win
Murray      Nadal       2022-05-16    1         0
Nadal       Murray      2022-05-25    1         0
and this is what I want it to look like:

Player 1    Player 2    Date          p1_win    p2_win    p1_lastwin    p2_lastwin
Murray      Nadal       2022-05-16    1         0         na            na
Nadal       Murray      2022-05-25    1         0         na            9
The result needs to include the days since each player's last win regardless of whether they appear as Player 1 or Player 2 (using groupby, I think). If possible, it would also be good to have a win percentage for the year.
Any help is much appreciated.
Edit: here is the dict
{'Player 1': {0: 'Murray',
1: 'Nadal',
2: 'Murray',
3: 'Nadal',
4: 'Murray',
5: 'Nadal',
6: 'Murray',
7: 'Nadal',
8: 'Murray',
9: 'Nadal',
10: 'Murray'},
'Player 2': {0: 'Nadal',
1: 'Murray',
2: 'Nadal',
3: 'Murray',
4: 'Nadal',
5: 'Murray',
6: 'Nadal',
7: 'Murray',
8: 'Nadal',
9: 'Murray',
10: 'Nadal'},
'Date': {0: '2022-05-16',
1: '2022-05-26',
2: '2022-05-27',
3: '2022-05-28',
4: '2022-05-29',
5: '2022-06-01',
6: '2022-06-02',
7: '2022-06-05',
8: '2022-06-09',
9: '2022-06-13',
10: '2022-06-17'},
'p1_win': {0: '1',
1: '1',
2: '0',
3: '1',
4: '0',
5: '0',
6: '1',
7: '0',
8: '1',
9: '0',
10: '1'},
'p2_win': {0: '0',
1: '0',
2: '1',
3: '0',
4: '1',
5: '1',
6: '0',
7: '1',
8: '0',
9: '1',
10: '0'}}
Thanks :)
I leveraged pd.merge_asof to find the latest win, and then merged back on the relevant index.
import pandas as pd

df = pd.DataFrame({'Player 1': {0: 'Murray', 1: 'Nadal', 2: 'Murray', 3: 'Nadal', 4: 'Murray', 5: 'Nadal', 6: 'Murray'}, 'Player 2': {0: 'Nadal', 1: 'Murray', 2: 'Nadal', 3: 'Murray', 4: 'Nadal', 5: 'Murray', 6: 'Nadal'}, 'Date': {0: '2022-05-16', 1: '2022-05-26', 2: '2022-05-27', 3: '2022-05-28', 4: '2022-05-29', 5: '2022-06-01', 6: '2022-06-02'}, 'p1_win': {0: '1', 1: '1', 2: '0', 3: '1', 4: '0', 5: '0', 6: '1'}, 'p2_win': {0: '0', 1: '0', 2: '1', 3: '0', 4: '1', 5: '1', 6: '0'}})
df['p1_win'] = df.p1_win.astype(int)
df['p2_win'] = df.p2_win.astype(int)
df['Date'] = pd.to_datetime(df['Date'])
# key identifying the pairing, independent of which slot each player occupies
df['match'] = [x + '_' + y if x > y else y + '_' + x for x, y in zip(df['Player 1'], df['Player 2'])]
# df['winner'] = np.where(df.p1_win==1, df['Player 1'], df['Player 2'])
# df['loser'] = np.where(df.p1_win==0, df['Player 1'], df['Player 2'])
df = df.reset_index()
df = df.sort_values(by='Date')
# merge_asof pulls in the index of the most recent earlier match in the same pairing where p1 (resp. p2) won
df = pd.merge_asof(df, df[df.p1_win == 1][['match', 'Date', 'index']], by=['match'], on='Date',
                   suffixes=('', '_latest_win_p1'), allow_exact_matches=False, direction='backward')
df = pd.merge_asof(df, df[df.p2_win == 1][['match', 'Date', 'index']], by=['match'], on='Date',
                   suffixes=('', '_latest_win_p2'), allow_exact_matches=False, direction='backward')
# df = df[['index', 'Date', 'Player 1', 'Player 2', 'p1_win', 'p2_win', 'match', 'winner', 'loser', 'index_latest_win_p2', 'index_latest_win_p1']]
# look up the dates of those earlier wins and take the day difference
df = df.merge(df[['Date', 'index', 'match']], how='left', left_on=['index_latest_win_p1', 'match'],
              right_on=['index', 'match'], suffixes=('', '_latest_win_winner'))
df = df.merge(df[['Date', 'index', 'match']], how='left', left_on=['index_latest_win_p2', 'match'],
              right_on=['index', 'match'], suffixes=('', '_latest_win_loser'))
df['days_since_last_win_winner'] = (df['Date'] - df.Date_latest_win_winner).dt.days
df['days_since_last_win_loser'] = (df['Date'] - df.Date_latest_win_loser).dt.days
Not sure this is exactly what you meant, but let me know if you need anything else.
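As an addendum, here is a possible alternative sketch (an assumption on my part, not part of the approach above) that tracks wins per player rather than per pairing, so the gap follows the player whichever slot they occupy, and it also adds the yearly win percentage mentioned in the question. It assumes matches is the raw DataFrame built from the dict above, before any of the transformations in this answer.
import pandas as pd

matches['Date'] = pd.to_datetime(matches['Date'])

# one row per (player, match), so a player's history is contiguous
long = pd.concat([
    matches[['Date', 'Player 1', 'p1_win']].rename(columns={'Player 1': 'player', 'p1_win': 'win'}),
    matches[['Date', 'Player 2', 'p2_win']].rename(columns={'Player 2': 'player', 'p2_win': 'win'}),
]).sort_values('Date').reset_index(drop=True)
long['win'] = long['win'].astype(int)

# date of the player's most recent win strictly before the current match
win_date = long['Date'].where(long['win'].eq(1))
long['last_win'] = win_date.groupby(long['player']).transform(lambda s: s.ffill().shift())
long['days_since_win'] = (long['Date'] - long['last_win']).dt.days

# win percentage per player and calendar year
long['win_pct_year'] = (long.groupby([long['player'], long['Date'].dt.year])['win']
                            .transform('mean').mul(100).round(1))

# merge the per-player columns back, once for each slot
cols = ['player', 'Date', 'days_since_win', 'win_pct_year']
p1 = long[cols].rename(columns={'player': 'Player 1',
                                'days_since_win': 'p1_lastwin', 'win_pct_year': 'p1_win_pct'})
p2 = long[cols].rename(columns={'player': 'Player 2',
                                'days_since_win': 'p2_lastwin', 'win_pct_year': 'p2_win_pct'})
out = (matches.merge(p1, on=['Player 1', 'Date'], how='left')
              .merge(p2, on=['Player 2', 'Date'], how='left'))
print(out)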
I am trying to convert the date column to a proper date format. I have tested some of the approaches I have read on the forum, but I still don't know how to tackle this issue.
After importing:
df = pd.read_excel(r'/path/df_datetime.xlsb', sheet_name="12FEB22", engine='pyxlsb')
I get the following df:
{'Unnamed: 0': {0: 'Administrative ID',
1: '000002191',
2: '000002382',
3: '000002434',
4: '000002728',
5: '000002826',
6: '000003265',
7: '000004106',
8: '000004333'},
'Unnamed: 1': {0: 'Service',
1: 'generic',
2: 'generic',
3: 'generic',
4: 'generic',
5: 'generic',
6: 'generic',
7: 'generic',
8: 'generic'},
'Unnamed: 2': {0: 'Movement type',
1: 'New',
2: 'New',
3: 'New',
4: 'Modify',
5: 'New',
6: 'New',
7: 'New',
8: 'New'},
'Unnamed: 3': {0: 'Date',
1: 37503,
2: 37475,
3: 37453,
4: 44186,
5: 37711,
6: 37658,
7: 37770,
8: 37820},
'Unnamed: 4': {0: 'Contract Term',
1: '12',
2: '12',
3: '12',
4: '12',
5: '12',
6: '12',
7: '12',
8: '12'}}
However, even though I have tried to convert the 'Date' column (or 'Unnamed: 3', since the original dataset has no header row, so I have to set the header afterwards) during the import, it has been unsuccessful.
Is there anything else I can try?
Thanks!
try this:
from xlrd import xldate_as_datetime

def trans_date(x):
    if isinstance(x, int):
        return xldate_as_datetime(x, 0).date()
    else:
        return x

print(df['Unnamed: 3'].apply(trans_date))
>>>
0 Date
1 2002-09-04
2 2002-08-07
3 2002-07-16
4 2020-12-21
5 2003-03-31
6 2003-02-06
7 2003-05-29
8 2003-07-18
Name: Unnamed: 3, dtype: object
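For reference, a pandas-only alternative (no xlrd needed) is sketched below; it assumes the workbook uses the default 1900 date system, where Excel serial dates count days from 1899-12-30.
import pandas as pd

serials = pd.to_numeric(df['Unnamed: 3'], errors='coerce')        # header text -> NaN
dates = pd.to_datetime(serials, unit='D', origin='1899-12-30')    # Excel serial -> datetime
print(df['Unnamed: 3'].where(serials.isna(), dates.dt.date))      # keep non-numeric cells as-is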
I have a transactional table and a lookup table, shown below. I need to add the val field from df_lkp to df_txn by lookup.
For each record of df_txn, I need to loop through df_lkp. If the grp field value is a, then only field a should be compared in both tables to find a match. If the grp value is ab, then fields a and b should be compared in both tables. If it is abc, then fields a, b and c should be compared to fetch the val field, and so on. Is there a way this could be done in pandas without a for-loop?
df_txn = pd.DataFrame({'id': {0: '1', 1: '2', 2: '3', 3: '4', 4: '5', 5: '6', 6: '7'},
'amt': {0: 100, 1: 200, 2: 300, 3: 400, 4: 500, 5: 600, 6: 700},
'a': {0: '226', 1: '227', 2: '248', 3: '236', 4: '248', 5: '236', 6: '236'},
'b': {0: '0E31', 1: '0E32', 2: '0E40', 3: '0E35', 4: '0E40', 5: '0E40', 6: '0E33'},
'c': {0: '3014', 1: '3015', 2: '3016', 3: '3016', 4: '3016', 5: '3016', 6: '3016'}})
df_lkp = pd.DataFrame({'a': {0: '226', 1: '227', 2: '236', 3: '237', 4: '248'},
'b': {0: '0E31', 1: '0E32', 2: '0E33', 3: '0E35', 4: '0E40'},
'c': {0: '3014', 1: '3015', 2: '3016', 3: '3018', 4: '3019'},
'grp': {0: 'a', 1: 'ab', 2: 'abc', 3: 'b', 4: 'bc'},
'val': {0: 'KE00CH0004', 1: 'KE00CH0003', 2: 'KE67593065', 3: 'KE67593262', 4: 'KE00CH0003'}})
The expected output:
df_tx2 = pd.DataFrame({'id': {0: '1', 1: '2', 2: '3', 3: '4', 4: '5', 5: '6', 6: '7'},
'amt': {0: 100, 1: 200, 2: 300, 3: 400, 4: 500, 5: 600, 6: 700},
'a': {0: '226', 1: '227', 2: '248', 3: '236', 4: '248', 5: '236', 6: '236'},
'b': {0: '0E31', 1: '0E32', 2: '0E40', 3: '0E35', 4: '0E40', 5: '0E40', 6: '0E33'},
'c': {0: '3014', 1: '3015', 2: '3016', 3: '3016', 4: '3016', 5: '3016', 6: '3016'},
'val': {0: 'KE00CH0004', 1: 'KE00CH0003', 2: '', 3: '', 4: '', 5: '', 6: 'KE67593065'}
})
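One possible approach, sketched under the assumption that applying the stated rules literally is acceptable: do one merge per grp pattern instead of looping over the rows of df_txn. Note that the grp 'b' row in df_lkp (b = '0E35') would then also match id 4, which the expected output above leaves blank, so the precedence may need adjusting.
import pandas as pd

df_tx2 = df_txn.copy()
df_tx2['val'] = ''

# one merge per pattern: the letters in grp name the key columns to compare
for grp, chunk in df_lkp.groupby('grp'):
    keys = list(grp)                                            # e.g. 'ab' -> ['a', 'b']
    matched = df_txn.merge(chunk[keys + ['val']], on=keys, how='left')['val']
    # fill only rows that have no value yet (first matching pattern wins)
    df_tx2['val'] = df_tx2['val'].mask(df_tx2['val'].eq('') & matched.notna(), matched)

print(df_tx2)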
In my data, I have a column that shows either one of the following options: 'NOT_TESTED', 'NOT_COMPLETED', 'TOO_LOW', or a value between 150 and 190 with a step of 5 (so 150, 155, 160 etc).
I am trying to plot a barplot which shows the number of times each of these appears in the column, including each individual number.
So the barplot should have as variables on the x-axis: 'NOT_TESTED', 'NOT_COMPLETED', 'TOO_LOW', 150, 155, 160 and so on.
The bar height should be the number of times the value appears in the column.
This is the code I have tried; it has gotten me the closest to my goal, however, all the numbers (150-190) show a count of 1 in the barplot, so all of their bars are the same height.
This does not follow the data and I do not know how to move forward.
I am new to this, any guidance would be greatly appreciated!
num_range = list(range(150,191, 5))
OUTCOMES = ['NOT_TESTED', 'NOT_COMPLETED', 'TOO_LOW']
OUTCOMES.extend(num_range)
df = df.append(pd.DataFrame(num_range, columns=['PT1']), ignore_index=True)
df["outcomes_col"] = df["PT1"].astype("category")
df["outcomes_col"].cat.set_categories(OUTCOMES, inplace=True)
sns.countplot(x= "outcomes_col", data=df, palette='Magma')
plt.xticks(rotation = 90)
plt.ylabel('Amount')
plt.xlabel('Outcomes')
plt.title("Outcomes per Testing")
plt.show()
pd.DataFrame({'ID': {0: 'GF342', 1: 'IF874', 2: 'FH386', 3: 'KJ190', 4: 'TY748', 5: 'YT947', 6: 'DF063', 7: 'ET512', 8: 'GC714', 9: 'SD978', 10: 'EF472', 11: 'PL489', 12: 'AZ315', 13: 'OL821', 14: 'HN765', 15: 'ED589'}, 'Location': {0: 'Q1', 1: 'Q3', 2: 'Q1', 3: 'Q3', 4: 'Q3', 5: 'Q4', 6: 'Q3', 7: 'Q1', 8: 'Q2', 9: 'Q3', 10: 'Q1', 11: 'Q2', 12: 'Q1', 13: 'Q1', 14: 'Q3', 15: 'Q1'}, 'NEW': {0: 'YES', 1: 'NO', 2: 'NO', 3: 'YES', 4: 'YES', 5: 'NO', 6: 'NO', 7: 'YES', 8: 'NO', 9: 'NO', 10: 'NO', 11: 'YES', 12: 'NO', 13: 'YES', 14: 'YES', 15: 'YES'}, 'YEAR': {0: 2021, 1: 2018, 2: 2019, 3: 2021, 4: 2021, 5: 2019, 6: 2019, 7: 2021, 8: 2018, 9: 2019, 10: 2018, 11: 2021, 12: 2018, 13: 2021, 14: 2021, 15: 2021}, 'PT1': {0: '', 1: 'NOT_TESTED', 2: '', 3: 'NOT_FINISHED', 4: '165', 5: '', 6: '180', 7: '145', 8: '155', 9: '', 10: '', 11: '', 12: 'TOO_LOW', 13: '150', 14: '155', 15: ''}, 'PT2': {0: '', 1: '', 2: '', 3: '', 4: '', 5: 'TOO_LOW', 6: '', 7: '', 8: '160', 9: 'TOO_LOW', 10: '', 11: '', 12: '', 13: '', 14: '', 15: ''}, 'PT3': {0: '', 1: 'TOO_LOW', 2: '', 3: 'TOO_LOW', 4: '', 5: '', 6: '', 7: '', 8: '', 9: '', 10: '', 11: 'NOT_FINISHED', 12: '', 13: '185', 14: '', 15: '165'}, 'PT4': {0: '', 1: '', 2: '', 3: '', 4: '', 5: 165.0, 6: '', 7: '', 8: '', 9: '', 10: '', 11: '', 12: 180.0, 13: '', 14: '', 15: ''}})
This is not the whole dataset, just part of it.
Starting from this dataframe:
(I replaced NOT_FINISHED with NOT_COMPLETED to match the code in your question; let me know if this replacement is correct)
ID Location NEW YEAR PT1 PT2 PT3 PT4
0 GF342 Q1 YES 2021
1 IF874 Q3 NO 2018 NOT_TESTED TOO_LOW
2 FH386 Q1 NO 2019
3 KJ190 Q3 YES 2021 NOT_COMPLETED TOO_LOW
4 TY748 Q3 YES 2021 165
5 YT947 Q4 NO 2019 TOO_LOW 165
6 DF063 Q3 NO 2019 180
7 ET512 Q1 YES 2021 145
8 GC714 Q2 NO 2018 155 160
9 SD978 Q3 NO 2019 TOO_LOW
10 EF472 Q1 NO 2018
11 PL489 Q2 YES 2021 NOT_COMPLETED
12 AZ315 Q1 NO 2018 TOO_LOW 180
13 OL821 Q1 YES 2021 150 185
14 HN765 Q3 YES 2021 155
15 ED589 Q1 YES 2021 165
If you are interested in a count plot of the 'PT1' column, first of all you have to define the categories to be plotted. You can use pandas.CategoricalDtype, which also lets you sort these categories.
So you define a new 'outcomes_col' column:
num_range = list(range(150,191, 5))
OUTCOMES = ['NOT_TESTED', 'NOT_COMPLETED', 'TOO_LOW']
OUTCOMES.extend([str(num) for num in num_range])
OUTCOMES = CategoricalDtype(OUTCOMES, ordered = True)
df["outcomes_col"] = df["PT1"].astype(OUTCOMES)
Then you can proceed to plot this column:
sns.countplot(x= "outcomes_col", data=df, palette='Magma')
plt.xticks(rotation = 90)
plt.ylabel('Amount')
plt.xlabel('Outcomes')
plt.title("Outcomes per Testing")
plt.show()
Complete Code
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from pandas.api.types import CategoricalDtype
df = pd.DataFrame({'ID': {0: 'GF342', 1: 'IF874', 2: 'FH386', 3: 'KJ190', 4: 'TY748', 5: 'YT947', 6: 'DF063', 7: 'ET512', 8: 'GC714', 9: 'SD978', 10: 'EF472', 11: 'PL489', 12: 'AZ315', 13: 'OL821', 14: 'HN765', 15: 'ED589'}, 'Location': {0: 'Q1', 1: 'Q3', 2: 'Q1', 3: 'Q3', 4: 'Q3', 5: 'Q4', 6: 'Q3', 7: 'Q1', 8: 'Q2', 9: 'Q3', 10: 'Q1', 11: 'Q2', 12: 'Q1', 13: 'Q1', 14: 'Q3', 15: 'Q1'}, 'NEW': {0: 'YES', 1: 'NO', 2: 'NO', 3: 'YES', 4: 'YES', 5: 'NO', 6: 'NO', 7: 'YES', 8: 'NO', 9: 'NO', 10: 'NO', 11: 'YES', 12: 'NO', 13: 'YES', 14: 'YES', 15: 'YES'}, 'YEAR': {0: 2021, 1: 2018, 2: 2019, 3: 2021, 4: 2021, 5: 2019, 6: 2019, 7: 2021, 8: 2018, 9: 2019, 10: 2018, 11: 2021, 12: 2018, 13: 2021, 14: 2021, 15: 2021}, 'PT1': {0: '', 1: 'NOT_TESTED', 2: '', 3: 'NOT_COMPLETED', 4: '165', 5: '', 6: '180', 7: '145', 8: '155', 9: '', 10: '', 11: '', 12: 'TOO_LOW', 13: '150', 14: '155', 15: ''}, 'PT2': {0: '', 1: '', 2: '', 3: '', 4: '', 5: 'TOO_LOW', 6: '', 7: '', 8: '160', 9: 'TOO_LOW', 10: '', 11: '', 12: '', 13: '', 14: '', 15: ''}, 'PT3': {0: '', 1: 'TOO_LOW', 2: '', 3: 'TOO_LOW', 4: '', 5: '', 6: '', 7: '', 8: '', 9: '', 10: '', 11: 'NOT_COMPLETED', 12: '', 13: '185', 14: '', 15: '165'}, 'PT4': {0: '', 1: '', 2: '', 3: '', 4: '', 5: 165.0, 6: '', 7: '', 8: '', 9: '', 10: '', 11: '', 12: 180.0, 13: '', 14: '', 15: ''}})
num_range = list(range(150,191, 5))
OUTCOMES = ['NOT_TESTED', 'NOT_COMPLETED', 'TOO_LOW']
OUTCOMES.extend([str(num) for num in num_range])
OUTCOMES = CategoricalDtype(OUTCOMES, ordered = True)
df["outcomes_col"] = df["PT1"].astype(OUTCOMES)
sns.countplot(x= "outcomes_col", data=df, palette='Magma')
plt.xticks(rotation = 90)
plt.ylabel('Amount')
plt.xlabel('Outcomes')
plt.title("Outcomes per Testing")
plt.show()
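As a side note on why the original attempt most likely showed every number at height 1: the appended helper rows and the category list held integers (150, 155, ...), while the real column values are strings such as '165', and pandas treats those as different values, so only the single appended rows matched the integer categories. A small illustration with a made-up series:
import pandas as pd

s = pd.Series(['165', '180', '165'])                                           # values stored as text
print(s.astype('category').cat.set_categories([165, 180]).value_counts())      # all counts are 0
print(s.astype('category').cat.set_categories(['165', '180']).value_counts())  # '165' -> 2, '180' -> 1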
I'm stuck exporting a multi-index dataframe to Excel in the layout I'm looking for.
This is what I'm looking for in Excel.
I know I have to add an extra index level on the left for the rows SRR (%) and Traction (-), but how?
My code so far:
import pandas as pd
import matplotlib.pyplot as plt
data = {'Step 1': {'Step Typ': 'Traction', 'SRR (%)': {1: 8.384, 2: 9.815, 3: 7.531, 4: 10.209, 5: 7.989, 6: 7.331, 7: 5.008, 8: 2.716, 9: 9.6, 10: 7.911}, 'Traction (-)': {1: 5.602, 2: 6.04, 3: 2.631, 4: 2.952, 5: 8.162, 6: 9.312, 7: 4.994, 8: 2.959, 9: 10.075, 10: 5.498}, 'Temperature': 30, 'Load': 40}, 'Step 3': {'Step Typ': 'Traction', 'SRR (%)': {1: 2.909, 2: 5.552, 3: 5.656, 4: 9.043, 5: 3.424, 6: 7.382, 7: 3.916, 8: 2.665, 9: 4.832, 10: 3.993}, 'Traction (-)': {1: 9.158, 2: 6.721, 3: 7.787, 4: 7.491, 5: 8.267, 6: 2.985, 7: 5.882, 8: 3.591, 9: 6.334, 10: 10.43}, 'Temperature': 80, 'Load': 40}, 'Step 5': {'Step Typ': 'Traction', 'SRR (%)': {1: 4.765, 2: 9.293, 3: 7.608, 4: 7.371, 5: 4.87, 6: 4.832, 7: 6.244, 8: 6.488, 9: 5.04, 10: 2.962}, 'Traction (-)': {1: 6.656, 2: 7.872, 3: 8.799, 4: 7.9, 5: 4.22, 6: 6.288, 7: 7.439, 8: 7.77, 9: 5.977, 10: 9.395}, 'Temperature': 30, 'Load': 70}, 'Step 7': {'Step Typ': 'Traction', 'SRR (%)': {1: 9.46, 2: 2.83, 3: 3.249, 4: 9.273, 5: 8.792, 6: 9.673, 7: 6.784, 8: 3.838, 9: 8.779, 10: 4.82}, 'Traction (-)': {1: 5.245, 2: 8.491, 3: 10.088, 4: 9.988, 5: 4.886, 6: 4.168, 7: 8.628, 8: 5.038, 9: 7.712, 10: 3.961}, 'Temperature': 80, 'Load': 70} }
df = pd.DataFrame(data)
items = list()
series = list()
for item, d in data.items():
    items.append(item)
    series.append(pd.DataFrame.from_dict(d))

df = pd.concat(series, keys=items)
df.set_index(['Step Typ', 'Load', 'Temperature']).T.to_excel('testfile.xlsx')
The picture below shows df.set_index(['Step Typ', 'Load', 'Temperature']).T as a dataframe (somewhat close, but not exactly what I'm looking for):
Edit 1:
I found a good solution; it's not exactly the one I was looking for, but it's still worth using.
df.reset_index().drop(["level_0","level_1"], axis=1).pivot(columns=["Step Typ", "Load", "Temperature"], values=["SRR (%)", "Traction (-)"]).apply(lambda x: pd.Series(x.dropna().values)).to_excel("solution.xlsx")
Can you explain clearly and show the output you are looking for?
To export a table to Excel use df.to_excel('path', index=True/False),
where index=True or False controls whether the index column is written to the file.
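If the goal is to have SRR (%) and Traction (-) appear as a row index level, one hedged sketch is below. It assumes df is the concatenated frame from the question (with the Step keys as the outer row index); the exact layout in the screenshot may still differ.
import pandas as pd

out = (df.set_index(['Step Typ', 'Load', 'Temperature'], append=True)
         .droplevel(0)                                    # drop the 'Step 1', 'Step 3', ... keys
         .rename_axis(index=['measurement', 'Step Typ', 'Load', 'Temperature'])
         .stack()                                         # SRR (%) / Traction (-) become a row level
         .unstack(['Step Typ', 'Load', 'Temperature']))   # the step conditions become the column header
out.to_excel('testfile.xlsx')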
I need to unstack a contact list (id, relatives, phone numbers...) so that the columns keep a specific order.
Given an index, the dataframe unstack operation works by unstacking single columns one by one, even when applied to a couple of columns.
Data have
df_have=pd.DataFrame.from_dict({'ID': {0: '100', 1: '100', 2: '100', 3: '100', 4: '100', 5: '200', 6: '200', 7: '200', 8: '200', 9: '200'},
'ID_RELATIVE': {0: '100', 1: '100', 2: '150', 3: '150', 4: '190', 5: '200', 6: '200', 7: '250', 8: '290', 9: '290'},
'RELATIVE_ROLE': {0: 'self', 1: 'self', 2: 'father', 3: 'father', 4: 'mother', 5: 'self', 6: 'self', 7: 'father', 8: 'mother', 9: 'mother'},
'PHONE': {0: '111111', 1: '222222', 2: '333333', 3: '444444', 4: '555555', 5: '123456', 6: '456789', 7: '987654', 8: '778899', 9: '909090'}})
Data want
df_want=pd.DataFrame.from_dict({'ID': {0: '100', 1: '200'},
'ID_RELATIVE_1': {0: '100', 1: '200'},
'RELATIVE_ROLE_1': {0: 'self', 1: 'self'},
'PHONE_1_1': {0: '111111', 1: '123456'},
'PHONE_1_2': {0: '222222', 1: '456789'},
'ID_RELATIVE_2': {0: '150', 1: '250'},
'RELATIVE_ROLE_2': {0: 'father', 1: 'father'},
'PHONE_2_1': {0: '333333', 1: '987654'},
'PHONE_2_2': {0: '444444', 1: 'nan'},
'ID_RELATIVE_3': {0: '190', 1: '290'},
'RELATIVE_ROLE_3': {0: 'mother', 1: 'mother'},
'PHONE_3_1': {0: '555555', 1: '778899'},
'PHONE_3_2': {0: 'nan', 1: '909090'}})
So, in the end, I need ID to be the index, and to unstack the other columns that will hence become attributes of ID.
The usual unstack process provides a "correct" output, but in the wrong shape.
df2 = df_have.groupby(['ID'])[['ID_RELATIVE', 'RELATIVE_ROLE', 'PHONE']].apply(lambda x: x.reset_index(drop=True)).unstack()
This would require re-ordering the columns and removing some duplicates (by column, not by row), together with a for loop. I'd like to avoid that approach, since I'm looking for a more "elegant" way of achieving the desired result by means of grouping/stacking/unstacking/pivoting and so on.
Thanks a lot
The solution has two main steps: first group by all columns except PHONE to pair up the phone numbers per relative, then convert the column names to an ordered categorical for correct sorting and group by ID:
c = ['ID','ID_RELATIVE','RELATIVE_ROLE']
df = df_have.set_index(c+ [df_have.groupby(c).cumcount().add(1)])['PHONE']
df = df.unstack().add_prefix('PHONE_').reset_index()
df = df.set_index(['ID', df.groupby('ID').cumcount().add(1)])
df.columns = pd.CategoricalIndex(df.columns, categories=df.columns.tolist(), ordered=True)
df = df.unstack().sort_index(axis=1, level=1)
df.columns = [f'{a}_{b}' for a, b in df.columns]
df = df.reset_index()
print (df)
ID ID_RELATIVE_1 RELATIVE_ROLE_1 PHONE_1_1 PHONE_2_1 ID_RELATIVE_2 \
0 100 100 self 111111 222222 150
1 200 200 self 123456 456789 250
RELATIVE_ROLE_2 PHONE_1_2 PHONE_2_2 ID_RELATIVE_3 RELATIVE_ROLE_3 PHONE_1_3 \
0 father 333333 444444 190 mother 555555
1 father 987654 NaN 290 mother 778899
PHONE_2_3
0 NaN
1 909090
If you need to change the order of the digits in the PHONE column names, use this instead of the column-flattening line above:
df.columns = [f'{a.split("_")[0]}_{b}_{a.split("_")[1]}'
              if 'PHONE' in a
              else f'{a}_{b}' for a, b in df.columns]
df = df.reset_index()
print (df)
ID ID_RELATIVE_1 RELATIVE_ROLE_1 PHONE_1_1 PHONE_1_2 ID_RELATIVE_2 \
0 100 100 self 111111 222222 150
1 200 200 self 123456 456789 250
RELATIVE_ROLE_2 PHONE_2_1 PHONE_2_2 ID_RELATIVE_3 RELATIVE_ROLE_3 PHONE_3_1 \
0 father 333333 444444 190 mother 555555
1 father 987654 NaN 290 mother 778899
PHONE_3_2
0 NaN
1 909090