Here is a simple DataFrame:
> df = pd.DataFrame({'a': ['a1', 'a2', 'a3'],
                     'b': ['optional1', None, 'optional3'],
                     'c': ['c1', 'c2', 'c3'],
                     'd': [1, 2, 3]})
> df
a b c d
0 a1 optional1 c1 1
1 a2 None c2 2
2 a3 optional3 c3 3
Pivot method 1
The data can be pivoted to this:
> df.pivot_table(index=['a','b'], columns='c')
                 d
c               c1   c3
a  b
a1 optional1   1.0  NaN
a3 optional3   NaN  3.0
Downside: data in the 2nd row is lost because df['b'][1] == None.
Pivot method 2
> df.pivot_table(index=['a'], columns='c')
      d
c    c1   c2   c3
a
a1  1.0  NaN  NaN
a2  NaN  2.0  NaN
a3  NaN  NaN  3.0
Downside: column b is lost.
How can the two methods be combined so that column b and the 2nd row are both kept, like so:
                 d
c               c1   c2   c3
a  b
a1 optional1   1.0  NaN  NaN
a2 None        NaN  2.0  NaN
a3 optional3   NaN  NaN  3.0
More generally: how can information from a row be retained during pivoting if a key has a NaN value?
Use set_index and unstack to perform the pivot:
df = df.set_index(['a', 'b', 'c']).unstack('c')
This is essentially what pandas does under the hood for pivot. The stack and unstack methods are closely related to pivot, and can generally be used to perform pivot-like operations that don't quite conform with the built-in pivot functions.
The resulting output:
                 d
c               c1   c2   c3
a  b
a1 optional1   1.0  NaN  NaN
a2 NaN         NaN  2.0  NaN
a3 optional3   NaN  NaN  3.0
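If a flat frame is preferred, the extra 'd' column level left by unstack can be dropped and the index reset. A minimal sketch, assuming the unstacked result above is stored in df:
flat = df.copy()
flat.columns = flat.columns.droplevel(0)   # drop the 'd' level, keeping c1/c2/c3
flat = flat.reset_index()                  # turn the (a, b) row index back into columns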
You could use fillna to replace the None entry:
df['b'] = df['b'].fillna('foo')
df.pivot_table(index=['a','b'], columns=['c'])
----
                 d
c               c1   c2   c3
a  b
a1 optional1   1.0  NaN  NaN
a2 foo         NaN  2.0  NaN
a3 optional3   NaN  NaN  3.0
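If you would rather not overwrite df['b'] in place, the same fill can be done on a temporary copy with assign. A small sketch of that variation:
tmp = df.assign(b=df['b'].fillna('foo'))   # fill on a copy, leave df untouched
tmp.pivot_table(index=['a', 'b'], columns=['c'])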
Use this helper:
def pivot_table(df, index, columns, values):
    df = df[index + columns + values]
    i = len(index)
    df = df.set_index(index + columns).unstack(columns).reset_index()
    # keep the original names for the index columns, then use the unstacked
    # labels (the former 'columns' values) for the value columns
    df.columns = df.columns.droplevel(1)[:i].append(df.columns.droplevel(0)[i:])
    return df

pivot_table(df, index=['a', 'b'], columns=['c'], values=['d'])
You can use fillna to replace the None values with the string "NULL" before pivoting, for example:
df.fillna("NULL").pivot_table(index=['a', 'b'], columns='c')
Related
I am trying to merge multiple-choice question columns using pandas so I can then manipulate them. An example of what my questions look like is:
  C1  C2  C3
0  A       A
1      B   B
2      C   C
3  D       D
The data is currently spread across C1 and C2, but I need it combined into one column, as shown in C3.
One option, assuming the empty cells are NaN, is to bfill across the columns and keep the first one:
df['C3'] = df[['C1', 'C2']].bfill(axis=1)['C1']
This way is extensible to any number of initial columns.
Output:
C1 C2 C3
0 A NaN A
1 NaN B B
2 NaN C C
3 D NaN D
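As noted above, this extends to any number of source columns. A minimal sketch with hypothetical columns Q1, Q2 and Q3 (the names are placeholders):
cols = ['Q1', 'Q2', 'Q3']                            # hypothetical source columns
df['combined'] = df[cols].bfill(axis=1).iloc[:, 0]   # first non-missing value in each row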
You may try fillna:
df['C3'] = df['C1'].fillna(df['C2'])
df
Out[483]:
C1 C2 C3
0 A NaN A
1 NaN B B
2 NaN C C
3 D NaN D
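The same idea chains for more than two columns; a small sketch, again with hypothetical column names Q1, Q2 and Q3:
# each fillna fills whatever is still missing from the next column to the right
df['combined'] = df['Q1'].fillna(df['Q2']).fillna(df['Q3'])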
You can also use combine_first:
df['C3'] = df['C1'].combine_first(df['C2'])
print(df)
# Output
C1 C2 C3
0 A NaN A
1 NaN B B
2 NaN C C
3 D NaN D
If your cells contain empty strings rather than null values, temporarily replace them with NaN:
df['C3'] = df['C1'].replace('', np.nan).combine_first(df['C2'])
print(df)
# Output
  C1  C2  C3
0  A       A
1      B   B
2      C   C
3  D       D
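If the cells might contain whitespace rather than truly empty strings, a regex replace covers both cases. A sketch, assuming np is numpy:
df['C3'] = df['C1'].replace(r'^\s*$', np.nan, regex=True).combine_first(df['C2'])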
I have a dataframe with three columns and a function that calculates the values of columns y and z given the value of column x. I need to calculate the values only where they are missing (NaN).
def calculate(x):
    return 1, 2

df = pd.DataFrame({'x': ['a', 'b', 'c', 'd', 'e', 'f'],
                   'y': [np.NaN, np.NaN, np.NaN, 'a1', 'b2', 'c3'],
                   'z': [np.NaN, np.NaN, np.NaN, 'a2', 'b1', 'c4']})
x y z
0 a NaN NaN
1 b NaN NaN
2 c NaN NaN
3 d a1 a2
4 e b2 b1
5 f c3 c4
mask = (df.isnull().any(axis=1))
df[['y', 'z']] = df[mask].apply(calculate, axis=1, result_type='expand')
However, I get the following result, even though I only apply the function to the masked rows. I'm unsure what I'm doing wrong.
x y z
0 a 1.0 2.0
1 b 1.0 2.0
2 c 1.0 2.0
3 d NaN NaN
4 e NaN NaN
5 f NaN NaN
If the mask is inverted I get the following result:
df[['y', 'z']] = df[~mask].apply(calculate, axis=1, result_type='expand')
x y z
0 a NaN NaN
1 b NaN NaN
2 c NaN NaN
3 d 1.0 2.0
4 e 1.0 2.0
5 f 1.0 2.0
Expected result:
x y z
0 a 1.0 2.0
1 b 1.0 2.0
2 c 1.0 2.0
3 d a1 a2
4 e b2 b1
5 f c3 c4
You can fillna with the result of running calculate over the full dataframe, after relabelling the computed columns with set_axis:
out = df.fillna(df.apply(calculate, axis=1, result_type='expand')
                  .set_axis(['y', 'z'], axis=1))
print(out)
x y z
0 a 1 2
1 b 1 2
2 c 1 2
3 d a1 a2
4 e b2 b1
5 f c3 c4
Try:
df.loc[mask,["y","z"]] = pd.DataFrame(df.loc[mask].apply(calculate, axis=1).to_list(), index=df[mask].index, columns = ["y","z"])
print(df)
x y z
0 a 1 2
1 b 1 2
2 c 1 2
3 d a1 a2
4 e b2 b1
5 f c3 c4
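A variant of the same idea (a sketch, not from the original answer) uses result_type='expand' and assigns the underlying array, so the expanded columns (labelled 0 and 1) do not need to be relabelled:
# positional assignment: the array shape matches the masked (row, column) selection
df.loc[mask, ['y', 'z']] = df.loc[mask].apply(calculate, axis=1, result_type='expand').values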
Example:
C1 C2 C3 C4 C5 C6
0 A B nan C A nan
1 B C D nan B nan
2 D E F nan C nan
3 nan nan A nan nan B
I'm merging columns, but I want to insert '\n\n' between the values as they are merged.
This is the output I want (each row's non-nan values joined with '\n\n'):
  C
0 A\n\nB\n\nC\n\nA
1 B\n\nC\n\nD\n\nB
2 D\n\nE\n\nF\n\nC
3 A\n\nB
I want the 'nan' values to be dropped.
I tried
df['merge'] = df['C1'].map(str) + '\n\n' + df['C2'].map(str) + '\n\n' + df['C3'].map(str) + '\n\n' + df['C4'].map(str)
However, this includes all the nan values.
Thank you for reading.
Use DataFrame.stack to get a Series; missing values are removed by default, so you can aggregate with join:
df['merge'] = df.stack().groupby(level=0).agg('\n\n'.join)
#for filter only C columns
df['merge'] = df.filter(like='C').stack().groupby(level=0).agg('\n\n'.join)
Or remove missing values by join per rows by Series.dropna:
df['merge'] = df.apply(lambda x: '\n\n'.join(x.dropna()), axis=1)
#for filter only C columns
df['merge'] = df.filter(like='C').apply(lambda x: '\n\n'.join(x.dropna()), axis=1)
print (df)
C1 C2 C3 C4 C5 C6 merge
0 A B NaN C A NaN A\n\nB\n\nC\n\nA
1 B C D NaN B NaN B\n\nC\n\nD\n\nB
2 D E F NaN C NaN D\n\nE\n\nF\n\nC
3 NaN NaN A NaN NaN B A\n\nB
I have the following kind of dataframe, with values grouped by three different categories A, B and C:
import pandas as pd
A = ['A1', 'A2', 'A3', 'A2', 'A1']
B = ['B3', 'B2', 'B2', 'B1', 'B3']
C = ['C2', 'C2', 'C3', 'C1', 'C3']
value = ['6','2','3','3','5']
df = pd.DataFrame({'categA': A,'categB': B, 'categC': C, 'value': value})
df
Which looks like:
categA categB categC value
0 A1 B3 C2 6
1 A2 B2 C2 2
2 A3 B2 C3 3
3 A2 B1 C1 3
4 A1 B3 C3 5
Now, when I want to unstack this df by the C category, .unstack() returns some multi-indexed dataframe with 'value' at the first level and my categories of interest C1, C2 & C3 at the second level:
df = df.set_index(['categA','categB','categC']).unstack('categC')
df
Output:
              value
categC           C1   C2   C3
categA categB
A1     B3       NaN    6    5
A2     B1         3  NaN  NaN
       B2       NaN    2  NaN
A3     B2       NaN  NaN    3
Is there a quick and clean way to get rid of the multi-index by reducing it to the highest available level? This is what I'd like to have as output:
categA categB   C1   C2   C3
A1     B3      NaN    6    5
A2     B1        3  NaN  NaN
       B2      NaN    2  NaN
A3     B2      NaN  NaN    3
Many thanks in advance!
Edit:
print(df.reset_index())
gives:
       categA categB value
categC                   C1   C2   C3
0          A1     B3    NaN    6    5
1          A2     B1      3  NaN  NaN
2          A2     B2    NaN    2  NaN
3          A3     B2    NaN  NaN    3
Select the value column first so you unstack a Series, then add reset_index:
df.set_index(['categA','categB','categC']).value.unstack('categC').reset_index()
Out[875]:
categC categA categB    C1    C2    C3
0          A1     B3  None     6     5
1          A2     B1     3  None  None
2          A2     B2  None     2  None
3          A3     B2  None  None     3
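An equivalent route (a sketch, not from the original answer) unstacks the whole frame and then flattens the column MultiIndex; it assumes df is still the original long-format frame with a single 'value' column:
wide = df.set_index(['categA', 'categB', 'categC']).unstack('categC')
wide.columns = wide.columns.droplevel(0)   # drop the 'value' level, keep C1/C2/C3
wide.columns.name = None                   # drop the leftover 'categC' columns name
wide = wide.reset_index()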
How can I join Series A multiindexed by (A, B) with Series B indexed by A?
Currently the only way is to bring the indices to a common footing -- e.g. move the B level of the series_A MultiIndex to a column so that both series_A and series_B are indexed only by A:
import pandas as pd
series_A = pd.Series(1, index=pd.MultiIndex.from_product([['A1', 'A4'],['B1','B2']], names=['A','B']), name='series_A')
# A B
# A1 B1 1
# B2 1
# A4 B1 1
# B2 1
# Name: series_A, dtype: int64
series_B = pd.Series(2, index=pd.Index(['A1', 'A2', 'A3'], name='A'), name='series_B')
# A
# A1 2
# A2 2
# A3 2
# Name: series_B, dtype: int64
tmp = series_A.to_frame().reset_index('B')
result = tmp.join(series_B, how='outer').set_index('B', append=True)
print(result)
yields
        series_A  series_B
A  B
A1 B1        1.0       2.0
   B2        1.0       2.0
A2 NaN       NaN       2.0
A3 NaN       NaN       2.0
A4 B1        1.0       NaN
   B2        1.0       NaN
Another way to join them would be to unstack the B level from series_A:
In [215]: series_A.unstack('B').join(series_B, how='outer')
Out[215]:
B1 B2 series_B
A
A1 1.0 1.0 2.0
A2 NaN NaN 2.0
A3 NaN NaN 2.0
A4 1.0 1.0 NaN
unstack moves the B index level to the column index. Thus the theme is the
same (bring the indices to a common footing), though the result is different.
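When only series_A's rows are needed (a left-style alignment rather than a full outer join), a lighter-weight sketch is to broadcast series_B along the A level of series_A's index:
# align series_B to series_A row by row via the 'A' level, then restore the (A, B) index
broadcast = series_B.reindex(series_A.index.get_level_values('A'))
broadcast.index = series_A.index
result = pd.concat([series_A, broadcast], axis=1)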