I have a df:
ColA ColB
1 1
2 3
2 2
1 2
1 3
2 1
I would like to use two different dictionaries to change the values in ColB: d1 if the value in ColA is 1, and d2 if the value in ColA is 2.
d1 = {1:'a',2:'b',3:'c'}
d2 = {1:'d',2:'e',3:'f'}
Resulting in:
ColA ColB
1 a
2 f
2 e
1 b
1 c
2 d
What would be the best way of achieving this?
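For reference, the example frame and dictionaries can be reproduced with this minimal sketch (values taken straight from the question):
import pandas as pd

# reproduce the example data shown in the question
df = pd.DataFrame({'ColA': [1, 2, 2, 1, 1, 2],
                   'ColB': [1, 3, 2, 2, 3, 1]})
d1 = {1: 'a', 2: 'b', 3: 'c'}
d2 = {1: 'd', 2: 'e', 3: 'f'}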
One way is to use np.where to map the values in ColB with one dictionary or the other, depending on the values in ColA:
import numpy as np
df['ColB'] = np.where(df.ColA.eq(1), df.ColB.map(d1), df.ColB.map(d2))
Which gives:
ColA ColB
0 1 a
1 2 f
2 2 e
3 1 b
4 1 c
5 2 d
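Note that Series.map returns NaN for keys missing from the dictionary. If ColB may contain values absent from d1 or d2 (an assumption beyond the question's example), you can fall back to the original values:
# keep the original ColB value wherever neither dictionary has a matching key
mapped = np.where(df.ColA.eq(1), df.ColB.map(d1), df.ColB.map(d2))
df['ColB'] = pd.Series(mapped, index=df.index).fillna(df.ColB)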
For a more general solution, you could use np.select, which works for multiple conditions. Let's add another value to ColA and a third dictionary to see how this could be done with three different mappings:
print(df)
ColA ColB
0 1 1
1 2 3
2 2 2
3 1 2
4 3 3
5 3 1
values_to_map = [1,2,3]
d1 = {1:'a',2:'b',3:'c'}
d2 = {1:'d',2:'e',3:'f'}
d3 = {1:'g',2:'h',3:'i'}
# create a list of boolean Series as conditions
conds = [df.ColA.eq(i) for i in values_to_map]
# List of Series to choose from depending on conds
choices = [df.ColB.map(d) for d in [d1,d2,d3]]
# use np.select to select from the choices list based on conds
df['ColB'] = np.select(conds, choices)
Resulting in:
ColA ColB
0 1 a
1 2 f
2 2 e
3 1 b
4 3 i
5 3 g
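By default np.select fills positions where no condition matches with 0. If ColA might contain values outside values_to_map (an assumption, not part of the example), you can keep the original values by passing a default:
# rows whose ColA value has no mapping keep their original ColB value
df['ColB'] = np.select(conds, choices, default=df.ColB)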
You can use a new dictionary in which the keys are tuples and map it against the zipped columns.
d = {**{(1, k): v for k, v in d1.items()}, **{(2, k): v for k, v in d2.items()}}
df.assign(ColB=[*map(d.get, zip(df.ColA, df.ColB))])
ColA ColB
0 1 a
1 2 f
2 2 e
3 1 b
4 1 c
5 2 d
Or we can get cute with a lambda to map.
NOTE: I aligned the dictionaries to switch between by their relative position in the list [0, d1, d2]. In this case it doesn't matter what is in the first position; I put 0 arbitrarily.
df.assign(ColB=[*map(lambda x, y: [0, d1, d2][x][y], df.ColA, df.ColB)])
ColA ColB
0 1 a
1 2 f
2 2 e
3 1 b
4 1 c
5 2 d
For robustness, I'd stay away from cute and map a lambda that has some default-value capability:
df.assign(ColB=[*map(lambda x, y: {1: d1, 2: d2}.get(x, {}).get(y), df.ColA, df.ColB)])
ColA ColB
0 1 a
1 2 f
2 2 e
3 1 b
4 1 c
5 2 d
If it needs to be done for many groups, use a dict of dicts to map each group separately. Ideally you can find some functional way to create d:
d = {1: d1, 2: d2}
df['ColB'] = pd.concat([gp.ColB.map(d[idx]) for idx, gp in df.groupby('ColA')])
Output:
ColA ColB
0 1 a
1 2 f
2 2 e
3 1 b
4 1 c
5 2 d
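As for creating d functionally, one minimal sketch (assuming the dictionaries are collected in a list that lines up with the group keys) is to zip the two sequences:
# build the dict of dicts from parallel sequences of group keys and mappings
d = dict(zip([1, 2], [d1, d2]))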
I am using concat with reindex:
idx = pd.MultiIndex.from_arrays([df.ColA, df.ColB])
df.ColB = pd.concat([pd.Series(x) for x in [d1, d2]], keys=[1, 2]).reindex(idx).values
df
ColA ColB
0 1 a
1 2 f
2 2 e
3 1 b
4 1 c
5 2 d
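To see why this works, the concatenated Series is a lookup table whose MultiIndex holds the (ColA, ColB) pairs, which is exactly what the index built from the two columns selects from:
pd.concat([pd.Series(x) for x in [d1, d2]], keys=[1, 2])
1  1    a
   2    b
   3    c
2  1    d
   2    e
   3    f
dtype: object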
You can create a function that does this for one element and then apply it to your dataframe row by row with a lambda.
def your_func(row):
    if row["ColA"] == 1:
        return d1[row["ColB"]]
    elif row["ColA"] == 2:
        return d2[row["ColB"]]
    else:
        return None
df["ColB"] = df.apply(lambda row: your_func(row), axis=1)
You can use replace twice, as follows:
df.loc[df['ColA'] == 1, 'ColB'] = df['ColB'].replace(d1)
df.loc[df['ColA'] == 2, 'ColB'] = df['ColB'].replace(d2)
I hope it helps.
I'm trying to create a dataframe based on another dataframe and a specific condition.
Given the pandas dataframe above, I'd like a two-column dataframe in which each row is a pair of words that are both different from 0 (i.e. coexist in a specific row), beginning with the first row.
For example, for this part of the image above, the new dataframe that I want is like the following:
and so on...
Does anyone have a tip on how I can do it? I'm struggling... Thanks!
As you didn't provide a text example, here is a dummy one:
>>> df
A B C D E
0 0 1 1 0 1
1 1 1 1 1 1
2 1 0 0 1 0
3 0 0 0 0 1
4 0 1 1 0 0
You could use a combination of masking, explode and itertools.combinations:
from itertools import combinations
mask = df.gt(0)
series = (mask*df.columns).apply(lambda x: list(combinations(set(x).difference(['']), r=2)), axis=1)
pd.DataFrame(series.explode().dropna().to_list(), columns=['X', 'Y'])
output:
X Y
0 C E
1 C B
2 E B
3 E D
4 E C
5 E B
6 E A
7 D C
8 D B
9 D A
10 C B
11 C A
12 B A
13 A D
14 C B
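Note that because the pairs are generated from per-row sets, the same unordered pair can appear several times and in either order (e.g. C B shows up three times above). If you only want distinct pairs, one option (a sketch; out is just a name for the frame built above) is to sort each pair and drop duplicates:
import numpy as np
out = pd.DataFrame(series.explode().dropna().to_list(), columns=['X', 'Y'])
# sort each pair so (B, C) and (C, B) compare equal, then deduplicate
out = pd.DataFrame(np.sort(out.to_numpy(), axis=1), columns=out.columns).drop_duplicates(ignore_index=True)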
I have two DataFrames
df_1:
idx A X
0 1 A
1 2 B
2 3 C
3 4 D
4 1 E
5 2 F
and
df_2:
idx B Y
0 1 H
1 2 I
2 4 J
3 2 K
4 3 L
5 1 M
my goal is to get the following:
df_result:
idx A X B Y
0 1 A 1 H
1 2 B 2 I
2 4 D 4 J
3 2 F 2 K
I am trying to match the A and B columns, based on the column B from df_2.
Columns A and B repeat their content after getting to 4. The order matters here, and because of that the row from df_1 with idx = 4 does not match the one from df_2 with idx = 5.
I was trying to use:
matching = list(set(df_1["A"]) & set(df_2["B"]))
and then
df1_filt = df_1[df_1['A'].isin(matching)]
df2_filt = df_2[df_2['B'].isin(matching)]
But this does not take the order into consideration.
I am looking for a solution without many for loops.
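For reference, a minimal reproduction of the two frames (built from the tables above, with idx as a regular column to match them):
import pandas as pd

df_1 = pd.DataFrame({'A': [1, 2, 3, 4, 1, 2], 'X': list('ABCDEF')}).rename_axis('idx').reset_index()
df_2 = pd.DataFrame({'B': [1, 2, 4, 2, 3, 1], 'Y': list('HIJKLM')}).rename_axis('idx').reset_index()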
Edit:
df_result = pd.merge_asof(left=df_1, right=df_2,
                          left_on='idx', right_on='idx',
                          left_by='A', right_by='B',
                          direction='backward',
                          tolerance=2).dropna().drop(labels='idx', axis='columns').reset_index(drop=True)
Gets me what I want.
IIUC this should work:
df_result = df_1.merge(df_2, left_on=['idx', 'A'], right_on=['idx', 'B'])
I have a dataframe with a column like this
Col1
1 A, 2 B, 3 C
2 B, 4 C
1 B, 2 C, 4 D
I have used .str.split(',', expand=True); the result is like this:
0 | 1 | 2
1 A | 2 B | 3 C
2 B | 4 C | None
1 B | 2 C | 4 D
What I am trying to achieve is this:
Col A| Col B| Col C| Col D
1 A | 2 B | 3 C | None
None | 2 B | 4 C | None
None | 1 B | 2 C | 4 D
I am stuck; how can I get new columns formatted like this?
Let's try:
# split and explode
s = df['Col1'].str.split(', ').explode()
# create new multi-level index
s.index = pd.MultiIndex.from_arrays([s.index, s.str.split().str[-1].tolist()])
# unstack to reshape
out = s.unstack().add_prefix('Col ')
Details:
# split and explode
0 1 A
0 2 B
0 3 C
1 2 B
1 4 C
2 1 B
2 2 C
2 4 D
Name: Col1, dtype: object
# create new multi-level index
0 A 1 A
B 2 B
C 3 C
1 B 2 B
C 4 C
2 B 1 B
C 2 C
D 4 D
Name: Col1, dtype: object
# unstack to reshape
Col A Col B Col C Col D
0 1 A 2 B 3 C NaN
1 NaN 2 B 4 C NaN
2 NaN 1 B 2 C 4 D
Most probably there are more general approaches you can use, but this worked for me. Please note that this is based on a lot of assumptions and constraints of your particular example.
test_dict = {'col_1': ['1 A, 2 B, 3 C', '2 B, 4 C', '1 B, 2 C, 4 D']}
df = pd.DataFrame(test_dict)
First, we split the df into initial columns:
df2 = df.col_1.str.split(pat=',', expand=True)
Result:
0 1 2
0 1 A 2 B 3 C
1 2 B 4 C None
2 1 B 2 C 4 D
Next (first assumption), we need to ensure that we can later use ' ' as the delimiter to extract the columns. In order to do that we need to remove all leading and trailing spaces from each string:
func = lambda x: pd.Series([i.strip() for i in x])
df2 = df2.astype(str).apply(func, axis=1)
Next, we need to get a list of unique columns. To do that, we first extract the column names from each cell:
func = lambda x: pd.Series([i.split(' ')[1] for i in x if i != 'None'])
df3 = df2.astype(str).apply(func, axis=1)
Result:
0 1 2
0 A B C
1 B C NaN
2 B C D
Then create a list of unique columns ['A', 'B', 'C', 'D'] that are present in your DataFrame:
columns_list = pd.unique(df3[df3.columns].values.ravel('K'))
columns_list = [x for x in columns_list if not pd.isna(x)]
And create an empty base dataframe with those columns which will be used to assign the corresponding values:
result_df = pd.DataFrame(columns=columns_list)
Once the preparations are done, we can assign column values for each of the rows and use pd.concat to merge them back into one DataFrame:
result_list = []
result_list.append(result_df)  # Adding the empty base table to ensure the columns are present
for row in df2.iterrows():
    result_object = {}  # dict that will be used to represent each row in the source DataFrame
    for column in columns_list:
        for value in row[1]:  # row is a tuple whose first element is the row index, which we don't need
            if value != 'None':
                if value.split(' ')[1] == column:  # Checking for the correct column to assign
                    result_object[column] = [value]
    result_list.append(pd.DataFrame(result_object))  # Adding one single-row DataFrame per source row
Once the list of DataFrames is generated we can use pd.concat to put it together:
final_df = pd.concat(result_list, ignore_index=True) # ignore_index will rebuild the index for the final_df
And the result will be:
A B C D
0 1 A 2 B 3 C NaN
1 NaN 2 B 4 C NaN
2 NaN 1 B 2 C 4 D
I don't think this is the most elegant or efficient way to do it, but it will produce the results you need.
I have the following dataframe which is a small part of a bigger one:
acc_num trans_cdi
0 1 c
1 1 d
3 3 d
4 3 c
5 3 d
6 3 d
I'd like to delete all rows where the last items are "d". So my desired dataframe would look like this:
acc_num trans_cdi
0 1 c
3 3 d
4 3 c
So the point is that a group shouldn't have "d" as its last item.
I have code that deletes the last row of each group whose last item is "d". But in this case, I have to run the code twice to delete all the trailing "d"s in group 3, for example.
clean_3 = clean_2[clean_2.groupby('acc_num')['trans_cdi'].transform(lambda x: (x.iloc[-1] != "d") | (x.index != x.index[-1]))]
Is there a better solution to this problem?
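For reference, a minimal reproduction of the example frame (preserving the non-contiguous index shown):
import pandas as pd

df = pd.DataFrame({'acc_num': [1, 1, 3, 3, 3, 3],
                   'trans_cdi': list('cddcdd')},
                  index=[0, 1, 3, 4, 5, 6])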
We can use idxmax here, reversing the data with [::-1], and then get the index:
grps = df['trans_cdi'].ne('d').groupby(df['acc_num'], group_keys=False)
idx = grps.apply(lambda x: x.loc[:x[::-1].idxmax()]).index
df.loc[idx]
acc_num trans_cdi
0 1 c
3 3 d
4 3 c
Testing on consecutive values:
acc_num trans_cdi
0 1 c
1 1 d <--- d between two c, so we need to keep it
2 1 c
3 1 d <--- row to be dropped
4 3 d
5 3 c
6 3 d
7 3 d
grps = df['trans_cdi'].ne('d').groupby(df['acc_num'], group_keys=False)
idx = grps.apply(lambda x: x.loc[:x[::-1].idxmax()]).index
df.loc[idx]
acc_num trans_cdi
0 1 c
1 1 d
2 1 c
4 3 d
5 3 c
Still gives correct result.
You can try this not-so-pandorable solution.
def r(x):
    c = 0
    for v in x['trans_cdi'].iloc[::-1]:
        if v == 'd':
            c = c + 1
        else:
            break
    # guard against c == 0: x.iloc[:-0] would return an empty frame
    return x if c == 0 else x.iloc[:-c]
df.groupby('acc_num', group_keys=False).apply(r)
acc_num trans_cdi
0 1 c
3 3 d
4 3 c
First, use shift to compare each row's value with the previous row's; ~ filters out the rows where both are equal to 'd'.
Second, make sure the last row's value is not 'd'. If it is, delete the row.
code:
df = df[~((df['trans_cdi'] == 'd') & (df.shift(1)['trans_cdi'] == 'd'))]
if df['trans_cdi'].iloc[-1] == 'd': df = df.iloc[0:-1]
df
input (I tested it on more input data to ensure there were no bugs):
acc_num trans_cdi
0 1 c
1 1 d
3 3 d
4 3 c
5 3 d
6 3 d
7 1 d
8 1 d
9 3 c
10 3 c
11 3 d
12 3 d
output:
acc_num trans_cdi
0 1 c
1 1 d
4 3 c
5 3 d
9 3 c
10 3 c
I have a dictionary of arrays like the following:
d = {'a': [1,2], 'b': [3,4], 'c': [5,6]}
I want to create a pandas dataframe like this:
0 1 2
0 a 1 2
1 b 3 4
2 c 5 6
I wrote the following code:
pd.DataFrame(list(d.items()))
which returns:
0 1
0 a [1,2]
1 b [3,4]
2 c [5,6]
Do you know how I can achieve my goal?
Thank you in advance.
Pandas allows you to do this in a straightforward fashion:
pd.DataFrame.from_dict(d, orient='index')
>> 0 1
a 1 2
b 3 4
c 5 6
pd.DataFrame.from_dict(d, orient='index').reset_index() gives you what you are looking for.
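If you want the exact 0/1/2 column labels from the question, a small follow-up sketch:
out = pd.DataFrame.from_dict(d, orient='index').reset_index()
out.columns = range(out.shape[1])  # relabel 'index', 0, 1 as 0, 1, 2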
Use the splat operator in a comprehension to produce your dataframe:
pd.DataFrame([k, *v] for k, v in d.items())
0 1 2
0 a 1 2
1 b 3 4
2 c 5 6
If you don't mind having index as one of your column names, simply transpose and reset_index:
pd.DataFrame(d).T.reset_index()
index 0 1
0 a 1 2
1 b 3 4
2 c 5 6
Finally, although it's rather ugly, the most performant option I could find on very large dictionaries is the following:
pd.DataFrame(list(d.values()), index=list(d.keys())).reset_index()