I have a df that looks like this:
|ID|PREVIOUS |CURRENT|NEXT|
|--| --- | --- |---|
|1||A||
|1||B||
|2||C||
|2||D||
|2||E||
|2||F||
|3||G||
|4||H||
|4||I||
I want it to populate PREVIOUS and NEXT columns like this:
|ID|PREVIOUS |CURRENT|NEXT|
|--| --- | --- |---|
|1|nan|A|B|
|1|A|B|nan|
|2|nan|C|D|
|2|C|D|E|
|2|D|E|F|
|2|E|F|nan|
|3|nan|G|nan|
|4|nan|H|I|
|4|H|I|nan|
So, for each unique ID, I want to populate the PREVIOUS and NEXT columns based on the values of the CURRENT column.
So far I have figured out how to do it when the df has only one ID (except the case where there is no PREVIOUS and NEXT, i.e. ID=3), but I am struggling to generalize it to more IDs.
for i in range(0, len(df)):
    if i == 0:
        df["PREVIOUS"].iloc[i] = str(np.NaN)
        df["NEXT"].iloc[i] = df["CURRENT"].iloc[i+1]
    if i == (len(df)-1):
        df["NEXT"].iloc[i] = str(np.NaN)
        df["PREVIOUS"].iloc[i] = df["CURRENT"].iloc[i-1]
    if (i > 0) and (i < (len(df)-1)):
        df["PREVIOUS"].iloc[i] = df["CURRENT"].iloc[i-1]
        df["NEXT"].iloc[i] = df["CURRENT"].iloc[i+1]
I am guessing it should employ a groupby and size(), but so far I couldn't achieve the result I wanted.
This should do what your question asks:
import pandas as pd
import numpy as np
df = pd.DataFrame({'ID':[1,1,2,2,2,2,3,4,4], 'CURRENT':list('ABCDEFGHI')})
print(df)
from collections import defaultdict
valById = defaultdict(list)
df.apply(lambda x: valById[x['ID']].append(x['CURRENT']), axis = 1)
df = pd.DataFrame([
    {'ID': k,
     'PREVIOUS': v[i-1] if i else np.nan,
     'CURRENT': v[i],
     'NEXT': v[i+1] if i+1 < len(v) else np.nan}
    for k, v in valById.items() for i in range(len(v))
])
print(df)
Output:
ID CURRENT
0 1 A
1 1 B
2 2 C
3 2 D
4 2 E
5 2 F
6 3 G
7 4 H
8 4 I
ID PREVIOUS CURRENT NEXT
0 1 NaN A B
1 1 A B NaN
2 2 NaN C D
3 2 C D E
4 2 D E F
5 2 E F NaN
6 3 NaN G NaN
7 4 NaN H I
8 4 H I NaN
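Since the question already guesses at groupby: a vectorized sketch using groupby with shift should produce the same PREVIOUS/NEXT columns, assuming the rows are already ordered the way they should be chained within each ID:
import pandas as pd

df = pd.DataFrame({'ID': [1, 1, 2, 2, 2, 2, 3, 4, 4], 'CURRENT': list('ABCDEFGHI')})
df['PREVIOUS'] = df.groupby('ID')['CURRENT'].shift(1)   # previous CURRENT within the same ID
df['NEXT'] = df.groupby('ID')['CURRENT'].shift(-1)      # next CURRENT within the same ID
print(df[['ID', 'PREVIOUS', 'CURRENT', 'NEXT']])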
I have two DataFrames
df_1:
idx A X
0 1 A
1 2 B
2 3 C
3 4 D
4 1 E
5 2 F
and
df_2:
idx B Y
0 1 H
1 2 I
2 4 J
3 2 K
4 3 L
5 1 M
my goal is get the following:
df_result:
idx A X B Y
0 1 A 1 H
1 2 B 2 I
2 4 D 4 J
3 2 F 2 K
I am trying to match the A and B columns, based on the column B from df_2.
Columns A and B repeat their content after getting to 4. The order matters here, and because of that the row from df_1 with idx = 4 does not match the one from df_2 with idx = 5.
I was trying to use:
matching = list(set(df_1["A"]) & set(df_2["B"]))
and then
df1_filt = df_1[df_1['A'].isin(matching)]
df2_filt = df_2[df_2['B'].isin(matching)]
But this does not take the order into consideration.
I am looking for a solution without many for loops.
Edit:
df_result = (pd.merge_asof(left=df_1, right=df_2,
                           left_on='idx', right_on='idx',
                           left_by='A', right_by='B',
                           direction='backward', tolerance=2)
               .dropna()
               .drop(labels='idx', axis='columns')
               .reset_index(drop=True))
Gets me what I want.
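For anyone who wants to reproduce this, a minimal sketch of the two frames as plain DataFrames with idx as a regular column (the values are assumed from the tables above):
import pandas as pd

# frames assumed from the tables in the question, with idx as a regular column
df_1 = pd.DataFrame({'idx': range(6), 'A': [1, 2, 3, 4, 1, 2], 'X': list('ABCDEF')})
df_2 = pd.DataFrame({'idx': range(6), 'B': [1, 2, 4, 2, 3, 1], 'Y': list('HIJKLM')})

df_result = (pd.merge_asof(left=df_1, right=df_2,
                           left_on='idx', right_on='idx',
                           left_by='A', right_by='B',
                           direction='backward', tolerance=2)
               .dropna()
               .drop(labels='idx', axis='columns')
               .reset_index(drop=True))
print(df_result)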
IIUC this should work:
df_result = df_1.merge(df_2,
left_on=['idx', 'A'], right_on=['idx', 'B'])
I have a dataframe with a column like this
Col1
1 A, 2 B, 3 C
2 B, 4 C
1 B, 2 C, 4 D
I have used .str.split(',', expand=True), and the result is like this:
0 | 1 | 2
1 A | 2 B | 3 C
2 B | 4 C | None
1 B | 2 C | 4 D
What I am trying to achieve is to get this:
Col A| Col B| Col C| Col D
1 A | 2 B | 3 C | None
None | 2 B | 4 C | None
None | 1 B | 2 C | 4 D
I am stuck; how do I get new columns formatted like that?
Let's try:
# split and explode
s = df['Col1'].str.split(', ').explode()
# create new multi-level index
s.index = pd.MultiIndex.from_arrays([s.index, s.str.split().str[-1].tolist()])
# unstack to reshape
out = s.unstack().add_prefix('Col ')
Details:
# split and explode
0 1 A
0 2 B
0 3 C
1 2 B
1 4 C
2 1 B
2 2 C
2 4 D
Name: Col1, dtype: object
# create new multi-level index
0 A 1 A
B 2 B
C 3 C
1 B 2 B
C 4 C
2 B 1 B
C 2 C
D 4 D
Name: Col1, dtype: object
# unstack to reshape
Col A Col B Col C Col D
0 1 A 2 B 3 C NaN
1 NaN 2 B 4 C NaN
2 NaN 1 B 2 C 4 D
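An equivalent pivot-based sketch of the same idea, in case you prefer pivot over a MultiIndex + unstack (it assumes, as above, that the last token of each item is the target column letter):
# split and explode, then pivot on the letter extracted from each item
out = (df['Col1'].str.split(', ').explode()
         .reset_index()
         .assign(col=lambda d: d['Col1'].str.split().str[-1])
         .pivot(index='index', columns='col', values='Col1')
         .add_prefix('Col '))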
Most probably there are more general approaches you can use, but this worked for me. Please note that this is based on a lot of assumptions and constraints from your particular example.
test_dict = {'col_1': ['1 A, 2 B, 3 C', '2 B, 4 C', '1 B, 2 C, 4 D']}
df = pd.DataFrame(test_dict)
First, we split the df into initial columns:
df2 = df.col_1.str.split(pat=',', expand=True)
Result:
0 1 2
0 1 A 2 B 3 C
1 2 B 4 C None
2 1 B 2 C 4 D
Next (first assumption), we need to ensure that we can later use ' ' as the delimiter to extract the columns. In order to do that, we need to remove all leading and trailing spaces from each string:
func = lambda x: pd.Series([i.strip() for i in x])
df2 = df2.astype(str).apply(func, axis=1)
Next, we need to get a list of unique columns. To do that, we first extract the column name from each cell:
func = lambda x: pd.Series([i.split(' ')[1] for i in x if i != 'None'])
df3 = df2.astype(str).apply(func, axis=1)
Result:
0 1 2
0 A B C
1 B C NaN
2 B C D
Then create a list of unique columns ['A', 'B', 'C', 'D'] that are present in your DataFrame:
columns_list = pd.unique(df3[df3.columns].values.ravel('K'))
columns_list = [x for x in columns_list if not pd.isna(x)]
And create an empty base DataFrame with those columns, which will be used to assign the corresponding values:
result_df = pd.DataFrame(columns=columns_list)
Once the preparations are done, we can assign column values for each of the rows and use pd.concat to merge them back into one DataFrame:
result_list = []
result_list.append(result_df)  # Adding the empty base table to ensure the columns are present
for row in df2.iterrows():
    result_object = {}  # dict that will be used to represent each row of the source DataFrame
    for column in columns_list:
        for value in row[1]:  # row is a (row_index, Series) tuple; we only need the Series
            if value != 'None':
                if value.split(' ')[1] == column:  # Checking for the correct column to assign
                    result_object[column] = [value]
    result_list.append(pd.DataFrame(result_object))  # Adding one single-row DataFrame per source row
Once the list of DataFrames is generated we can use pd.concat to put it together:
final_df = pd.concat(result_list, ignore_index=True) # ignore_index will rebuild the index for the final_df
And the result will be:
A B C D
0 1 A 2 B 3 C NaN
1 NaN 2 B 4 C NaN
2 NaN 1 B 2 C 4 D
I don't think this is the most elegant or efficient way to do it, but it will produce the results you need.
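For comparison, a much more compact sketch of the same idea: build one dict per row keyed by the letter and let the DataFrame constructor align the columns (it assumes every item has the 'number letter' form and items are separated by ', '):
# one dict per row, keyed by the letter part of each item
rows = [
    {item.split(' ')[1]: item for item in line.split(', ')}
    for line in df['col_1']
]
final_df = pd.DataFrame(rows)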
I have a DataFrame with strings and the index as below.
df0
idx name_id_code string_line_0
0 0.01 A
1 0.5 B
2 77.6 C
3 29.8 D
4 56.2 E
5 88.1000005 F
6 66.4000008 G
7 2.1 H
8 99 I
9 550.9999999 J
df1
idx string_line_1
0 A
1 F
2 J
3 G
4 D
Now I want to match df1 with df0, taking the rows where the values of df1 equal those of df0, but keeping the original index of df0, as below:
df_result name_id_code string_line_0
0 0.01 A
5 88.1000005 F
9 550.9999999 J
6 66.4000008 G
3 29.8 D
I tried this with my code, but it didn't work for strings and only matched on the index:
c = df0['name_id_code'] + ' (' + df0['string_line_0'].astype(str) + ')'
out = df1[df1['string_line_1'].isin(c)]
I also tried to keep it simple with just the last column match, like:
c = df0['string_line_0'].astype(str) + ')'
out = df1[df1['string_line_1'].isin(c)]
but blank output.
Because the df0 DataFrame is the one being filtered, its index values are unchanged if you use Series.isin with df1['string_line_1']; only the row order stays as in the original df0:
out = df0[df0['string_line_0'].isin(df1['string_line_1'])]
print (out)
name_id_code string_line_0
idx
0 0.010000 A
3 29.800000 D
5 88.100001 F
6 66.400001 G
9 551.000000 J
Or, if you use DataFrame.merge, then to avoid losing df0.index it is necessary to add DataFrame.reset_index:
out = (df1.rename(columns={'string_line_1':'string_line_0'})
.merge(df0.reset_index(), on='string_line_0'))
print (out)
string_line_0 idx name_id_code
0 A 0 0.010000
1 F 5 88.100001
2 J 9 551.000000
3 G 6 66.400001
4 D 3 29.800000
A similar solution, only merging directly on the string_line_1 and string_line_0 columns, so both columns (with the same values) appear in the output:
out = (df1.merge(df0.reset_index(), left_on='string_line_1', right_on='string_line_0'))
print (out)
string_line_1 idx name_id_code string_line_0
0 A 0 0.010000 A
1 F 5 88.100001 F
2 J 9 551.000000 J
3 G 6 66.400001 G
4 D 3 29.800000 D
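If you also want idx back as the actual index (as in the desired df_result), a small follow-up sketch on top of the merge above, assuming df0's index is named idx as in the printouts:
# restore df0's original index and keep only the df0 columns
out = (df1.merge(df0.reset_index(), left_on='string_line_1', right_on='string_line_0')
          .set_index('idx')[['name_id_code', 'string_line_0']])
print(out)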
You can do:
out = df0.loc[(df0["string_line_0"].isin(df1["string_line_1"]))].copy()
out["string_line_0"] = pd.Categorical(out["string_line_0"], categories=df1["string_line_1"].unique())
out.sort_values(by=["string_line_0"], inplace=True)
The first line filters df0 to just the rows where string_line_0 is in the string_line_1 column of df1.
The second line converts string_line_0 in the output df to a Categorical feature whose category order follows the values in df1; the final sort_values then custom-sorts the rows into that order.
I have the following dataframe which is a small part of a bigger one:
acc_num trans_cdi
0 1 c
1 1 d
3 3 d
4 3 c
5 3 d
6 3 d
I'd like to delete all rows where the last items are "d". So my desired dataframe would look like this:
acc_num trans_cdi
0 1 c
3 3 d
4 3 c
So the point is that a group shouldn't have "d" as its last item.
There is code that deletes the last row of the groups whose last item is "d". But in that case, I have to run the code twice to delete all trailing "d"s in group 3, for example.
clean_3 = clean_2[clean_2.groupby('account_num')['trans_cdi'].transform(lambda x: (x.iloc[-1] != "d") | (x.index != x.index[-1]))]
Is there a better solution to this problem?
We can use idxmax here after reversing the data with [::-1], and then get the index:
grps = df['trans_cdi'].ne('d').groupby(df['acc_num'], group_keys=False)
idx = grps.apply(lambda x: x.loc[:x[::-1].idxmax()]).index
df.loc[idx]
acc_num trans_cdi
0 1 c
3 3 d
4 3 c
Testing on consecutive values:
acc_num trans_cdi
0 1 c
1 1 d <--- d between two c, so we need to keep
2 1 c
3 1 d <--- row to be dropped
4 3 d
5 3 c
6 3 d
7 3 d
grps = df['trans_cdi'].ne('d').groupby(df['acc_num'], group_keys=False)
idx = grps.apply(lambda x: x.loc[:x[::-1].idxmax()]).index
df.loc[idx]
acc_num trans_cdi
0 1 c
1 1 d
2 1 c
4 3 d
5 3 c
Still gives correct result.
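A related boolean-mask sketch built on the same reverse-the-data idea, using cummax instead of idxmax (the int cast is only there to keep groupby cummax happy across pandas versions):
# True (1) where the value is not 'd'
keep = df['trans_cdi'].ne('d').astype(int)
# scan each group from the end: the flag turns on once a non-'d' value has been seen
mask = keep.iloc[::-1].groupby(df['acc_num']).cummax().iloc[::-1].astype(bool)
df.loc[mask]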
You can try this not so pandorable solution.
def r(x):
    c = 0
    for v in x['trans_cdi'].iloc[::-1]:
        if v == 'd':
            c = c + 1
        else:
            break
    # slice by explicit length so groups with no trailing 'd' (c == 0) are kept whole
    return x.iloc[:len(x) - c]
df.groupby('acc_num', group_keys=False).apply(r)
acc_num trans_cdi
0 1 c
3 3 d
4 3 c
First, compare each row to the previous row with shift and check whether both values equal 'd'; ~ filters out those rows.
Second, make sure the last row's value is not 'd'. If it is, delete that row.
code:
df = df[~((df['trans_cdi'] == 'd') & (df.shift(1)['trans_cdi'] == 'd'))]
if df['trans_cdi'].iloc[-1] == 'd': df = df.iloc[0:-1]
df
input (I tested it on more input data to ensure there were no bugs):
acc_num trans_cdi
0 1 c
1 1 d
3 3 d
4 3 c
5 3 d
6 3 d
7 1 d
8 1 d
9 3 c
10 3 c
11 3 d
12 3 d
output:
acc_num trans_cdi
0 1 c
1 1 d
4 3 c
5 3 d
9 3 c
10 3 c
I have a df:
ColA ColB
1 1
2 3
2 2
1 2
1 3
2 1
I would like to use two different dictionaries to change the values in ColB: d1 if the value in ColA is 1, and d2 if the value in ColA is 2.
d1 = {1:'a',2:'b',3:'c'}
d2 = {1:'d',2:'e',3:'f'}
Resulting in:
ColA ColB
1 a
2 f
2 e
1 b
1 c
2 d
What would be the best way of achieving this?
One way is using np.where to map the values in ColB using one dictionary or the other depending on the values of ColA:
import numpy as np
df['ColB'] = np.where(df.ColA.eq(1), df.ColB.map(d1), df.ColB.map(d2))
Which gives:
ColA ColB
0 1 a
1 2 f
2 2 e
3 1 b
4 1 c
5 2 d
For a more general solution, you could use np.select, which works for multiple conditions. Let's add another value in ColA and a dictionary, to see how this could be done with three different mappings:
print(df)
ColA ColB
0 1 1
1 2 3
2 2 2
3 1 2
4 3 3
5 3 1
values_to_map = [1,2,3]
d1 = {1:'a',2:'b',3:'c'}
d2 = {1:'d',2:'e',3:'f'}
d3 = {1:'g',2:'h',3:'i'}
#create a list of boolean Series as conditions
conds = [df.ColA.eq(i) for i in values_to_map]
# List of Series to choose from depending on conds
choices = [df.ColB.map(d) for d in [d1,d2,d3]]
# use np.select to select from the choice list based on conds
df['ColB'] = np.select(conds, choices)
Resulting in:
ColA ColB
0 1 a
1 2 f
2 2 e
3 1 b
4 3 i
5 3 g
You can use a new dictionary in which the keys are tuples and map it against the zipped columns.
d = {**{(1, k): v for k, v in d1.items()}, **{(2, k): v for k, v in d2.items()}}
df.assign(ColB=[*map(d.get, zip(df.ColA, df.ColB))])
ColA ColB
0 1 a
1 2 f
2 2 e
3 1 b
4 1 c
5 2 d
Or we can get cute with a lambda to map.
NOTE: I aligned the dictionaries to switch between based on their relative position in the list [0, d1, d2]. In this case it doesn't matter what is in the first position. I put 0 arbitrarily.
df.assign(ColB=[*map(lambda x, y: [0, d1, d2][x][y], df.ColA, df.ColB)])
ColA ColB
0 1 a
1 2 f
2 2 e
3 1 b
4 1 c
5 2 d
For robustness, I'd stay away from cute and map a lambda that has some default-value capability:
df.assign(ColB=[*map(lambda x, y: {1: d1, 2: d2}.get(x, {}).get(y), df.ColA, df.ColB)])
ColA ColB
0 1 a
1 2 f
2 2 e
3 1 b
4 1 c
5 2 d
If it needs to be done for many groups, use a dict of dicts to map each group separately. Ideally you can find some functional way to create d (see the sketch after the output below):
d = {1: d1, 2: d2}
df['ColB'] = pd.concat([gp.ColB.map(d[idx]) for idx, gp in df.groupby('ColA')])
Output:
ColA ColB
0 1 a
1 2 f
2 2 e
3 1 b
4 1 c
5 2 d
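A tiny sketch of that 'functional way to create d', purely illustrative and assuming the group keys are simply 1, 2, … in the order the dictionaries are listed:
# build the dict-of-dicts from an ordered list of per-group mappings
mappings = [d1, d2]
d = {key: mapping for key, mapping in enumerate(mappings, start=1)}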
I am using concat with reindex
idx = pd.MultiIndex.from_arrays([df.ColA, df.ColB])
df.ColB = pd.concat([pd.Series(x) for x in [d1, d2]], keys=[1, 2]).reindex(idx).values
df
Out[683]:
ColA ColB
0 1 a
1 2 f
2 2 e
3 1 b
4 1 c
5 2 d
You can create a function that does this for one row and then apply it to your DataFrame with a lambda.
def your_func(row):
    if row["ColA"] == 1:
        return d1[row["ColB"]]
    elif row["ColA"] == 2:
        return d2[row["ColB"]]
    else:
        return None

df["ColB"] = df.apply(lambda row: your_func(row), axis=1)
You can use two replace calls, like this:
df.loc[df['ColA'] == 1,'ColB'] = df['ColB'].replace(d1, regex=True)
df.loc[df['ColA'] == 2,'ColB'] = df['ColB'].replace(d2, regex=True)
I hope it helps,
BR