reshape a pandas dataframe index to columns - python

Consider the pandas Series object below:
import numpy as np
import pandas as pd
index = list('abcdabcdabcd')
df = pd.Series(np.arange(len(index)), index=index)
My desired output is:
a b c d
0 0 1 2 3
1 4 5 6 7
2 8 9 10 11
I have put some effort into pd.pivot_table and pd.unstack, and the solution probably lies in the correct use of one of them. The closest I have reached is
df.reset_index(level=1).unstack(level=1)
but this does not give me the output I am looking for.
Here is something even closer to the desired output, but I am not able to handle the index grouping:
df.to_frame().set_index(df.values, append=True, drop=False).unstack(level=0)
a b c d
0 0.0 NaN NaN NaN
1 NaN 1.0 NaN NaN
2 NaN NaN 2.0 NaN
3 NaN NaN NaN 3.0
4 4.0 NaN NaN NaN
5 NaN 5.0 NaN NaN
6 NaN NaN 6.0 NaN
7 NaN NaN NaN 7.0
8 8.0 NaN NaN NaN
9 NaN 9.0 NaN NaN
10 NaN NaN 10.0 NaN
11 NaN NaN NaN 11.0

A slightly more general solution, using cumcount to construct the new index values (it numbers the repeated labels 0, 1, 2, ..., so the k-th occurrence of each label lands in row k) and pivot to do the reshaping:
# Reset the existing index, and construct the new index values.
df = df.reset_index()
df.index = df.groupby('index').cumcount()
# Pivot and remove the column axis name.
df = df.pivot(columns='index', values=0).rename_axis(None, axis=1)
The resulting output:
a b c d
0 0 1 2 3
1 4 5 6 7
2 8 9 10 11

Here is a way that will work if the index is always cycling in the same order, and you know the "period" (in this case 4):
>>> pd.DataFrame(df.values.reshape(-1,4), columns=list('abcd'))
a b c d
0 0 1 2 3
1 4 5 6 7
2 8 9 10 11
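If you'd rather not hard-code the period, it can be derived from the index. A minimal sketch, assuming the labels cycle in a fixed order with no gaps:
labels = df.index.unique()  # preserves first-seen order: a, b, c, d
period = len(labels)
# verify the cycling assumption before reshaping
assert (df.index == list(labels) * (len(df) // period)).all()
out = pd.DataFrame(df.values.reshape(-1, period), columns=labels)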

Related

Pandas move rows into single column and reshape dataframe

Hi, I have the following dataframe of ~1,400,000 rows:
x = pd.DataFrame({'ID':['A','B','D','D','F'], 'start1':[1,2,3,4,5], 'start2':[12,11,10,6,7], 'start3':[1,6,2,4,5], 'start4':[5,4,2,3,1], 'start5':[0,0,0,0,0], 'end1':[2,3,4,7,9] })
ID start1 start2 start3 start4 start5 end1
A 1 12 1 5 0 2
B 2 11 6 4 0 3
D 3 10 2 2 0 4
D 4 6 4 3 0 7
F 5 7 5 1 0 9
I'm looking to collapse all columns whose headers contain 'start' or 'end' into the following format.
Desired output:
ID start end
A 1 NaN
A 12 NaN
A 1 NaN
A 5 NaN
A 0 NaN
A NaN 2
B 2 NaN
B 11 NaN
B 6 NaN
B 4 NaN
B 0 NaN
B 3 NaN
...
F 1 NaN
F 0 NaN
F NaN 9
I have tried:
joined = x.apply(lambda row: ' '.join([str(v) for v in row]), axis=1)
split = joined.str.split(' ', expand=True).reset_index(drop=False).melt(id_vars='index')
However, this seems to use up all my memory and the environment crashes.
Any help would be great.
Try melting the start columns and concatenating:
(pd.concat([x.iloc[:, :-1].melt('ID', value_name='start')
              .sort_values(['ID', 'variable'])
              .drop('variable', axis=1),
            x[['ID', 'end1']]])
   .sort_values('ID', kind='mergesort')
)
Output:
ID start end1
0 A 1.0 NaN
5 A 12.0 NaN
10 A 1.0 NaN
15 A 5.0 NaN
20 A 0.0 NaN
0 A NaN 2.0
1 B 2.0 NaN
6 B 11.0 NaN
11 B 6.0 NaN
16 B 4.0 NaN
21 B 0.0 NaN
1 B NaN 3.0
2 D 3.0 NaN
3 D 4.0 NaN
7 D 10.0 NaN
8 D 6.0 NaN
12 D 2.0 NaN
13 D 4.0 NaN
17 D 2.0 NaN
18 D 3.0 NaN
22 D 0.0 NaN
23 D 0.0 NaN
2 D NaN 4.0
3 D NaN 7.0
4 F 5.0 NaN
9 F 7.0 NaN
14 F 5.0 NaN
19 F 1.0 NaN
24 F 0.0 NaN
4 F NaN 9.0
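sort_values('ID', kind='mergesort') matters here: mergesort is stable, so within each ID the start rows keep their melted order and the end row stays last. To match the desired column name exactly, you could append .rename(columns={'end1': 'end'}) to the chain.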
Remember that you are trying to duplicate a large amount of data here, so you need to be careful.
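For very wide frames, pd.wide_to_long may also be worth a look. A sketch, relying on its documented behavior of outer-joining the stubs (missing end2..end5 become NaN); note that it pairs start1 and end1 on the same row, so the shape differs slightly from the desired output:
long = (pd.wide_to_long(x.reset_index(), stubnames=['start', 'end'],
                        i='index', j='num')
          .reset_index()
          .sort_values(['index', 'num'])[['ID', 'start', 'end']])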
How about this?
import pandas as pd

out = pd.DataFrame(columns=['ID', 'start', 'end'])
for col in x.columns:
    if 'start' in col:
        out_col = 'start'
    if 'end' in col:
        out_col = 'end'
    if 'ID' not in col:
        temp = x[['ID', col]].rename(columns={col: out_col})
        out = pd.concat([out, temp])
# the output below appears to have been sorted afterwards, e.g. out.sort_values('ID')
Output:
ID start end
0 A 1 NaN
0 A 0 NaN
0 A NaN 2
0 A 12 NaN
0 A 5 NaN
0 A 1 NaN
1 B 0 NaN
1 B 4 NaN
1 B NaN 3
1 B 6 NaN
1 B 11 NaN
1 B 2 NaN
2 D 10 NaN
2 D 0 NaN
2 D 2 NaN
3 D 4 NaN
3 D NaN 7
2 D NaN 4
2 D 2 NaN
3 D 3 NaN
3 D 4 NaN
2 D 3 NaN
3 D 6 NaN
3 D 0 NaN
4 F 5 NaN
4 F 1 NaN
4 F 7 NaN
4 F 5 NaN
4 F 0 NaN
4 F NaN 9
You can merge all the start columns into one with .ravel().
First stash end1 and ID in separate variables:
end1values = x['end1']
idvalues = x['ID']
Remove end1 and ID from the data set:
x.drop('end1', axis='columns', inplace=True)
x.drop('ID', axis='columns', inplace=True)
Use ravel to flatten the start columns row by row:
df = pd.DataFrame({'start': x.values.ravel()})
Add back ID and end1. Note that ravel emits one row per start column, so each ID must be repeated to match; a plain df['ID'] = idvalues would align on the default index and fill only the first five rows:
df['ID'] = np.repeat(idvalues.to_numpy(), x.shape[1])
df['end'] = end1values  # aligns on index 0-4 only; the end values would still need rows of their own
Result:

How to move values over in each Pandas data frame row where np.nan are located?

If I have a pandas data frame like this:
A B C D E F G H
0 0 2 3 5 NaN NaN NaN NaN
1 2 7 9 1 2 NaN NaN NaN
2 1 5 7 2 1 2 1 NaN
3 6 1 3 2 1 1 5 5
4 1 2 3 6 NaN NaN NaN NaN
How do I move all of the numerical values to the end of each row and place the NaNs before them, such that I get a pandas data frame like this:
A B C D E F G H
0 NaN NaN NaN NaN 0 2 3 5
1 NaN NaN NaN 2 7 9 1 2
2 NaN 1 5 7 2 1 2 1
3 6 1 3 2 1 1 5 5
4 NaN NaN NaN NaN 1 2 3 6
A one-line solution:
df.apply(lambda row: pd.concat([row[row.isna()], row[row.notna()]], ignore_index=True), axis=1)
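Note that ignore_index=True replaces the column labels with integers; assign df.columns back to the result if you need the original names, as the next answer does.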
I guess the best approach is to work row by row: make a function to do the job, and use apply or transform to run it on each row.
def movenan(x):
    # build an array of NaNs as long as the number of missing values in the row
    fl = len(x)
    nl = len(x.dropna())
    nanarr = np.empty(fl - nl)
    nanarr[:] = np.nan
    # NaNs first, then the surviving values
    return pd.concat([pd.Series(nanarr), x.dropna()], ignore_index=True)
ddf = df.transform(movenan, axis=1)
ddf.columns = df.columns
Using your sample data, the resulting ddf is:
A B C D E F G H
0 NaN NaN NaN NaN 0.0 2.0 3.0 5.0
1 NaN NaN NaN 2.0 7.0 9.0 1.0 2.0
2 NaN 1.0 5.0 7.0 2.0 1.0 2.0 1.0
3 6.0 1.0 3.0 2.0 1.0 1.0 5.0 5.0
4 NaN NaN NaN NaN 1.0 2.0 3.0 6.0
The movenan function creates an array of NaN of the required length, drops the NaN from the row, and concatenates the two resulting Series.
ignore_index=True is required because you don't want to preserve the position of the data in their original columns (the values are moved to different columns), but in doing so the column names are lost and replaced by integers. The last line simply copies the column names back into the new dataframe.
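If the frame is large, row-wise apply/transform can be slow. A vectorized sketch using a stable argsort, assuming all columns are numeric so np.isnan applies:
a = df.to_numpy()
# sort keys: NaN -> 0, value -> 1; a stable sort moves the NaNs to the front
# while preserving the relative order of the remaining values in each row
order = np.argsort(~np.isnan(a), axis=1, kind='stable')
ddf = pd.DataFrame(np.take_along_axis(a, order, axis=1),
                   index=df.index, columns=df.columns)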

Columns appending is troublesome with Pandas

Here is what I have tried and what error I received:
>>> import pandas as pd
>>> df = pd.DataFrame({"A":[1,2,3,4,5],"B":[5,4,3,2,1],"C":[0,0,0,0,0],"D":[1,1,1,1,1]})
>>> df
A B C D
0 1 5 0 1
1 2 4 0 1
2 3 3 0 1
3 4 2 0 1
4 5 1 0 1
>>> first = [2,2,2,2,2,2,2,2,2,2,2,2]
>>> first = pd.DataFrame(first).T
>>> first.index = [2]
>>> df = df.join(first)
>>> df
A B C D 0 1 2 3 4 5 6 7 8 9 10 11
0 1 5 0 1 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
1 2 4 0 1 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
2 3 3 0 1 2.0 2.0 2.0 2.0 2.0 2.0 2.0 2.0 2.0 2.0 2.0 2.0
3 4 2 0 1 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
4 5 1 0 1 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
>>> second = [3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3]
>>> second = pd.DataFrame(second).T
>>> second.index = [1]
>>> df = df.join(second)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Python35\lib\site-packages\pandas\core\frame.py", line 6815, in join
rsuffix=rsuffix, sort=sort)
File "C:\Python35\lib\site-packages\pandas\core\frame.py", line 6830, in _join_compat
suffixes=(lsuffix, rsuffix), sort=sort)
File "C:\Python35\lib\site-packages\pandas\core\reshape\merge.py", line 48, in merge
return op.get_result()
File "C:\Python35\lib\site-packages\pandas\core\reshape\merge.py", line 552, in get_result
rdata.items, rsuf)
File "C:\Python35\lib\site-packages\pandas\core\internals\managers.py", line 1972, in items_overlap_with_suffix
'{rename}'.format(rename=to_rename))
ValueError: columns overlap but no suffix specified: Index([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], dtype='object')
I am trying to create new lists of extra columns which I have to add at specific indexes of the main dataframe df.
When I tried it with first it worked, as you can see in the output. But when I tried the same with second, I received the error shown above.
Kindly let me know what I can do in this situation to achieve the result I am expecting.
Use DataFrame.combine_first instead of join if you need to assign into the same columns created before: combine_first aligns both frames on index and columns, keeps the existing values, and fills the holes from the other frame. Finally, use DataFrame.reindex with the list of columns to get the expected ordering:
df = pd.DataFrame({"A":[1,2,3,4,5],"B":[5,4,3,2,1],"C":[0,0,0,0,0],"D":[1,1,1,1,1]})
orig = df.columns.tolist()
first = [2,2,2,2,2,2,2,2,2,2,2,2]
first = pd.DataFrame(first).T
first.index = [2]
df = df.combine_first(first)
second = [3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3]
second = pd.DataFrame(second).T
second.index = [1]
df = df.combine_first(second)
df = df.reindex(orig + first.columns.tolist(), axis=1)
print (df)
A B C D 0 1 2 3 4 5 6 7 8 9 10 11
0 1 5 0 1 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
1 2 4 0 1 3.0 3.0 3.0 3.0 3.0 3.0 3.0 3.0 3.0 3.0 3.0 3.0
2 3 3 0 1 2.0 2.0 2.0 2.0 2.0 2.0 2.0 2.0 2.0 2.0 2.0 2.0
3 4 2 0 1 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
4 5 1 0 1 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
Yes, this is expected behaviour, because join works much like an SQL join, meaning that it will join on the provided index and concatenate all the columns together. The problem arises from the fact that pandas does not accept two columns with the same name. Hence, if the two dataframes share a column name, it looks for a suffix to add to those columns to avoid name clashes. This is controlled with the lsuffix and rsuffix arguments of the join method.
Conclusion: there are 2 ways to solve this:
Either provide a suffix so that pandas is able to resolve the name clashes; or
Make sure that you don't have overlapping columns, as in the sketch below.
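A minimal sketch of the second option, renaming the new block's columns before joining (the 'second_' prefix is made up for illustration):
second = second.rename(columns=lambda c: 'second_' + str(c))  # hypothetical labels
df = df.join(second)  # no overlap left, so no suffixes are needed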
You have to specify the suffixes since the column names are the same. Assuming you are trying to add the second values as new columns horizontally:
df = df.join(second, lsuffix='first', rsuffix='second')
A B C D 0first 1first 2first 3first 4first 5first ... 10second 11second 12 13 14 15 16 17 18 19
0 1 5 0 1 NaN NaN NaN NaN NaN NaN ... NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
1 2 4 0 1 NaN NaN NaN NaN NaN NaN ... 3.0 3.0 3.0 3.0 3.0 3.0 3.0 3.0 3.0 3.0
2 3 3 0 1 2.0 2.0 2.0 2.0 2.0 2.0 ... NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
3 4 2 0 1 NaN NaN NaN NaN NaN NaN ... NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
4 5 1 0 1 NaN NaN NaN NaN NaN NaN ... NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN

pandas ffill/bfill for a specific number of observations

I have the following dataframe:
id indicator
1 NaN
1 NaN
1 1
1 NaN
1 NaN
1 NaN
In reality, I have several more ids. My question now is: how do I do a forward or backward fill for a specific range, e.g. only the next/last 2 observations? My dataframe should then look like this:
id indicator
1 NaN
1 NaN
1 1
1 1
1 1
1 NaN
I know the command
df.groupby("id")["indicator"].fillna(value=None, method="ffill")
However, this fills all the missing values instead of just the next two observations. Does anyone know a solution?
I think DataFrameGroupBy.ffill or DataFrameGroupBy.bfill with the limit parameter is nicer:
df.groupby("id")["indicator"].ffill(limit=2)
df.groupby("id")["indicator"].bfill(limit=2)
Sample:
# the 5 near the end of its group has only one row after it, so only one value is filled
df['filled'] = df.groupby("id")["indicator"].ffill(limit=2)
print (df)
id indicator filled
0 1 NaN NaN
1 1 NaN NaN
2 1 1.0 1.0
3 1 NaN 1.0
4 1 NaN 1.0
5 1 NaN NaN
6 1 NaN NaN
7 1 NaN NaN
8 1 4.0 4.0
9 1 NaN 4.0
10 1 NaN 4.0
11 1 NaN NaN
12 1 NaN NaN
13 2 NaN NaN
14 2 NaN NaN
15 2 1.0 1.0
16 2 NaN 1.0
17 2 NaN 1.0
18 2 NaN NaN
19 2 5.0 5.0
20 2 NaN 5.0
21 3 3.0 3.0
22 3 NaN 3.0
23 3 NaN 3.0
24 3 NaN NaN
25 3 NaN NaN
Almost there; straight from the docs on fillna's limit parameter:
If method is specified, this is the maximum number of consecutive NaN values to forward/backward fill. In other words, if there is a gap with more than this number of consecutive NaNs, it will only be partially filled. If method is not specified, this is the maximum number of entries along the entire axis where NaNs will be filled. Must be greater than 0 if not None.
df.groupby("id")["indicator"].fillna(value=None, method="ffill", limit=2)
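A combined sketch for both directions, assuming you want at most two filled observations on each side of a value:
filled = df.groupby('id')['indicator'].ffill(limit=2)
df['indicator'] = filled.groupby(df['id']).bfill(limit=2)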

Insert value into column which is named in known column pandas

I'm preparing data for machine learning, where the data is in a pandas DataFrame which looks like this:
Column v1 v2
first 1 2
second 3 4
third 5 6
Now I want to transform it into:
Column v1 v2 first-v1 first-v2 second-v1 second-v2 third-v1 third-v2
first 1 2 1 2 NaN NaN NaN NaN
second 3 4 NaN NaN 3 4 NaN NaN
third 5 6 NaN NaN NaN NaN 5 6
What I've tried is something like this:
# we know how many values there are, but the
# length can change with the list of values [1, 2, 3, ...]
values = ['v1', 'v2']
# the data described above is saved in data
for value in values:
    data[str(data['Column'] + '-' + value)] = data[value]
The result is columns whose names are whole stringified arrays:
['first-v1' 'second-v1' ..], ['first-v2' 'second-v2' ..]
which contain the correct values. What am I doing wrong? Is there a more optimal way to do this, since my data is big?
Thank you for your time!
You can use unstack with swapping and sorting the MultiIndex in columns:
df = (data.set_index('Column', append=True)[values]
          .unstack()
          .swaplevel(0, 1, axis=1)
          .sort_index(axis=1))
df.columns = df.columns.map('-'.join)
print (df)
first-v1 first-v2 second-v1 second-v2 third-v1 third-v2
0 1.0 2.0 NaN NaN NaN NaN
1 NaN NaN 3.0 4.0 NaN NaN
2 NaN NaN NaN NaN 5.0 6.0
Or stack + unstack:
df = data.set_index('Column', append=True).stack().unstack([1,2])
df.columns = df.columns.map('-'.join)
print (df)
first-v1 first-v2 second-v1 second-v2 third-v1 third-v2
0 1.0 2.0 NaN NaN NaN NaN
1 NaN NaN 3.0 4.0 NaN NaN
2 NaN NaN NaN NaN 5.0 6.0
Last, join to the original:
df = data.join(df)
print (df)
Column v1 v2 first-v1 first-v2 second-v1 second-v2 third-v1 \
0 first 1 2 1.0 2.0 NaN NaN NaN
1 second 3 4 NaN NaN 3.0 4.0 NaN
2 third 5 6 NaN NaN NaN NaN 5.0
third-v2
0 NaN
1 NaN
2 6.0
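A sketch of an equivalent approach with DataFrame.pivot, assuming a pandas version where pivot accepts a list for values (the existing row index is kept as-is):
wide = data.pivot(columns='Column', values=values)
# flatten the (value, Column) MultiIndex into 'first-v1'-style labels
wide.columns = [f'{col}-{val}' for val, col in wide.columns]
df = data.join(wide.sort_index(axis=1))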
