This question already has an answer here:
Reshape a pandas DataFrame of (720, 720) into (518400, ) 2D into 1D
(1 answer)
Closed 2 years ago.
I have this dataframe:
df = pd.DataFrame({"X": [1, 2, 3, 4],
                   "Y": [5, 6, 7, 8],
                   "Z": [9, 10, 11, 12]})
And I'm looking for this output: a single Series containing all of the values, read column by column (1 through 12).
Currently, the similar problems I have found solve the opposite: going from a Series to a DataFrame. The most similar question I have found is this one, which isn't actually similar at all. I have also tried pivot_table() and reshape(), but they require an index column, whereas I'm looking for just one output column.
Any suggestions?
PS: You can assume that the dataframe has 100 columns, to rule out selecting them one by one, but that the columns are ordered so you can address them as a range (e.g. with 100 columns, X1:X100).
Use flattening with ravel('F') ('F' ravels in column-major order, so the columns are concatenated one after another) -
In [14]: pd.Series(df.to_numpy(copy=False).ravel('F'))
Out[14]:
0 1
1 2
2 3
3 4
4 5
5 6
6 7
7 8
8 9
9 10
10 11
11 12
dtype: int64
This series is a view into the input dataframe, which means virtually free runtime and zero memory overhead. Let's verify -
In [20]: s = pd.Series(df.to_numpy(copy=False).ravel('F'))
In [21]: np.shares_memory(s,df)
Out[21]: True
Let's confirm the timings too -
In [2]: df = pd.DataFrame(np.random.rand(100000,3), columns=['X','Y','Z'])
In [3]: %timeit pd.Series(df.to_numpy(copy=False).ravel('F'))
579 µs ± 9.09 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
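One caveat worth noting (my addition, not part of the original answer): the zero-copy behavior relies on the frame being backed by a single homogeneous block. With mixed dtypes, to_numpy has to materialize a new object array, so the result is a copy rather than a view:

```python
import numpy as np
import pandas as pd

# Homogeneous dtypes: the raveled array can share memory with the frame
df_num = pd.DataFrame({"X": [1, 2], "Y": [3, 4]})
s_num = pd.Series(df_num.to_numpy(copy=False).ravel("F"))
print(np.shares_memory(s_num, df_num))  # typically True for a single-block frame

# Mixed dtypes: to_numpy() must build a new object array, so no sharing
df_mix = pd.DataFrame({"X": [1, 2], "Y": ["a", "b"]})
s_mix = pd.Series(df_mix.to_numpy(copy=False).ravel("F"))
print(np.shares_memory(s_mix, df_mix))  # False
```

Note that copy=False only means "don't force a copy"; pandas will still copy whenever a view is impossible.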
This is melt:
df.melt()[['value']]
Output:
value
0 1
1 2
2 3
3 4
4 5
5 6
6 7
7 8
8 9
9 10
10 11
11 12
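A small note on this approach (my addition): the double brackets in df.melt()[['value']] return a one-column DataFrame; if you want a Series like the accepted answer produces, select the column with single brackets:

```python
import pandas as pd

df = pd.DataFrame({"X": [1, 2, 3, 4],
                   "Y": [5, 6, 7, 8],
                   "Z": [9, 10, 11, 12]})

s = df.melt()["value"]  # single brackets -> a Series instead of a DataFrame
print(s.tolist())       # [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]
```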
One way is to reshape the data from the "wide" to the "tall" format by stacking:
df.T.stack().reset_index(drop=True)
#0 1
#1 2
#2 3
#3 4
#4 5
#5 6
#6 7
#7 8
#8 9
#9 10
#10 11
#11 12
As always, there are many ways to "skin a cat" in Pandas, and then performance may become the criterion. This is a meta-answer that compares the performance:
ravel by Divakar: 80 µs
stack by DYZ: 640 µs
melt by Quang Hoang: 2.03 ms
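For reference, a minimal sketch of how such a comparison could be reproduced outside IPython (my addition; the absolute numbers will of course vary with the machine, the data size, and the pandas version):

```python
import timeit

import pandas as pd

df = pd.DataFrame({"X": [1, 2, 3, 4],
                   "Y": [5, 6, 7, 8],
                   "Z": [9, 10, 11, 12]})

# The three approaches from the answers above
candidates = {
    "ravel": lambda: pd.Series(df.to_numpy(copy=False).ravel("F")),
    "stack": lambda: df.T.stack().reset_index(drop=True),
    "melt": lambda: df.melt()["value"],
}

for name, func in candidates.items():
    t = timeit.timeit(func, number=1000) / 1000  # seconds per call
    print(f"{name}: {t * 1e6:.1f} us per call")
```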
This question already has answers here:
Split / Explode a column of dictionaries into separate columns with pandas
(13 answers)
Closed 4 years ago.
I have a really simple Pandas dataframe where each cell contains a list. I'd like to split each element of the list into its own column. I can do that by exporting the values and then creating a new dataframe, but this doesn't seem like a good way to do it, especially if my dataframe had a column aside from the list column.
import pandas as pd
df = pd.DataFrame(data=[[[8, 10, 12]],
                        [[7, 9, 11]]])
df = pd.DataFrame(data=[x[0] for x in df.values])
Desired output:
0 1 2
0 8 10 12
1 7 9 11
Follow-up based on @Psidom's answer:
If I did have a second column:
df = pd.DataFrame(data=[[[8, 10, 12], 'A'],
                        [[7, 9, 11], 'B']])
How do I not lose the other column?
Desired output:
0 1 2 3
0 8 10 12 A
1 7 9 11 B
You can loop through the Series with the apply() function and convert each list to a Series; this automatically expands each list along the column direction:
df[0].apply(pd.Series)
# 0 1 2
#0 8 10 12
#1 7 9 11
Update: To keep other columns of the data frame, you can concatenate the result with the columns you want to keep:
pd.concat([df[0].apply(pd.Series), df[1]], axis = 1)
# 0 1 2 1
#0 8 10 12 A
#1 7 9 11 B
You could do pd.DataFrame(df[col].values.tolist()) instead, which is much faster (about two orders of magnitude in the timings below).
In [820]: pd.DataFrame(df[0].values.tolist())
Out[820]:
0 1 2
0 8 10 12
1 7 9 11
In [821]: pd.concat([pd.DataFrame(df[0].values.tolist()), df[1]], axis=1)
Out[821]:
0 1 2 1
0 8 10 12 A
1 7 9 11 B
Timings
Medium
In [828]: df.shape
Out[828]: (20000, 2)
In [829]: %timeit pd.DataFrame(df[0].values.tolist())
100 loops, best of 3: 15 ms per loop
In [830]: %timeit df[0].apply(pd.Series)
1 loop, best of 3: 4.06 s per loop
Large
In [832]: df.shape
Out[832]: (200000, 2)
In [833]: %timeit pd.DataFrame(df[0].values.tolist())
10 loops, best of 3: 161 ms per loop
In [834]: %timeit df[0].apply(pd.Series)
1 loop, best of 3: 40.9 s per loop
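One more property worth knowing (my addition, not from the original answer): the tolist() constructor also handles lists of unequal length, padding the short rows with NaN, whereas you would otherwise have to align them yourself:

```python
import pandas as pd

df = pd.DataFrame(data=[[[8, 10, 12]],
                        [[7, 9]]])  # second list is shorter

# Ragged rows are padded with NaN (and the padded column becomes float)
expanded = pd.DataFrame(df[0].values.tolist())
print(expanded)
```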
I'm selecting several columns of a dataframe, by a list of the column names. This works fine if all elements of the list are in the dataframe.
But if some elements of the list are not in the DataFrame, then it will generate the error "not in index".
Is there a way to select all columns which are included in that list, even if not all elements of the list are columns of the dataframe? Here is some sample data which generates the above error:
df = pd.DataFrame( [[0,1,2]], columns=list('ABC') )
lst = list('ARB')
data = df[lst] # error: not in index
I think you need Index.intersection:
df = pd.DataFrame({'A':[1,2,3],
                   'B':[4,5,6],
                   'C':[7,8,9],
                   'D':[1,3,5],
                   'E':[5,3,6],
                   'F':[7,4,3]})
print (df)
A B C D E F
0 1 4 7 1 5 7
1 2 5 8 3 3 4
2 3 6 9 5 6 3
lst = ['A','R','B']
print (df.columns.intersection(lst))
Index(['A', 'B'], dtype='object')
data = df[df.columns.intersection(lst)]
print (data)
A B
0 1 4
1 2 5
2 3 6
Another solution with numpy.intersect1d:
data = df[np.intersect1d(df.columns, lst)]
print (data)
A B
0 1 4
1 2 5
2 3 6
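One difference between the two (my addition): Index.intersection keeps the order of the dataframe's columns, while np.intersect1d always returns a sorted result, which matters when the original column order should be preserved. A sketch:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame([[1, 2]], columns=["B", "A"])  # deliberately unsorted columns
lst = ["A", "B", "C"]

print(list(df.columns.intersection(lst)))     # ['B', 'A'] - original column order
print(list(np.intersect1d(df.columns, lst)))  # ['A', 'B'] - always sorted
```

If instead the order of lst itself should win, a list comprehension such as [c for c in lst if c in df.columns] preserves it.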
A few other ways, of which the list comprehension is much faster:
In [1357]: df[df.columns & lst]
Out[1357]:
A B
0 1 4
1 2 5
2 3 6
In [1358]: df[[c for c in df.columns if c in lst]]
Out[1358]:
A B
0 1 4
1 2 5
2 3 6
Timings
In [1360]: %timeit [c for c in df.columns if c in lst]
100000 loops, best of 3: 2.54 µs per loop
In [1359]: %timeit df.columns & lst
1000 loops, best of 3: 231 µs per loop
In [1362]: %timeit df.columns.intersection(lst)
1000 loops, best of 3: 236 µs per loop
In [1363]: %timeit np.intersect1d(df.columns, lst)
10000 loops, best of 3: 26.6 µs per loop
Details
In [1365]: df
Out[1365]:
A B C D E F
0 1 4 7 1 5 7
1 2 5 8 3 3 4
2 3 6 9 5 6 3
In [1366]: lst
Out[1366]: ['A', 'R', 'B']
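A compatibility note (my addition): newer pandas versions have deprecated using & on an Index as a set operation, so df.columns & lst may warn or fail there. Index.isin with a boolean column mask is an equivalent that remains supported:

```python
import pandas as pd

df = pd.DataFrame({"A": [1, 2, 3], "B": [4, 5, 6], "C": [7, 8, 9]})
lst = ["A", "R", "B"]

# Boolean mask over the columns, then label-based selection with .loc
data = df.loc[:, df.columns.isin(lst)]
print(data.columns.tolist())  # ['A', 'B']
```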
A really simple solution here is to use filter(). In your example, just type:
df.filter(lst)
and it will automatically ignore any missing columns. For more, see the documentation for filter.
As a general note, filter is a very flexible and powerful way to select specific columns. In particular, you can use regular expressions. Borrowing the sample data from #jezrael, you could type either of the following.
df.filter(regex='A|R|B')
df.filter(regex='[ARB]')
Those are trivial examples, but suppose you wanted only columns starting with those letters, then you could type:
df.filter(regex='^[ARB]')
FWIW, in some quick timings I find this to be faster than the list comprehension method, but I don't think speed is really much of a concern here -- even the slowest way should be fast enough, as the speed does not depend on the size of the dataframe, only on the number of columns.
Honestly, all of these ways are fine and you can go with whatever is most readable to you. I prefer filter because it is simple while also giving you more options for selecting columns than a simple intersection.
Use * with the list:
data = df[[*lst]]
Note, however, that unpacking a list this way is equivalent to df[lst], so it will still raise a KeyError if some of the requested columns are missing.
Please try this:
Syntax: dataframe[[list of columns]], for example df[['a','b']]
Given a dataframe a:
a
Out[5]:
    a  b   c
0   1  2   3
1  12  3  44
x is the list of required columns to slice:
x = ['a','b']
This would give you the required slice:
a[x]
Out[7]:
    a  b
0   1  2
1  12  3
Performance:
%timeit a[x]
333 µs ± 9.27 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
Given the DataFrame:
import pandas as pd
df = pd.DataFrame([6, 4, 2, 4, 5], index=[2, 6, 3, 4, 5], columns=['A'])
Results in:
A
2 6
6 4
3 2
4 4
5 5
Now, I would like to sort by values of Column A AND the index.
e.g.
df.sort_values(by='A')
Returns
A
3 2
6 4
4 4
5 5
2 6
Whereas I would like
A
3 2
4 4
6 4
5 5
2 6
How can I get a sort on the column first and index second?
You can sort by index and then by column A using kind='mergesort'.
This works because mergesort is stable.
res = df.sort_index().sort_values('A', kind='mergesort')
Result:
A
3 2
4 4
6 4
5 5
2 6
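As a side note (my addition): sort_values also accepts kind='stable', which maps to a stable sort just like mergesort, so the same two-step idiom can be written without naming the algorithm explicitly:

```python
import pandas as pd

df = pd.DataFrame([6, 4, 2, 4, 5], index=[2, 6, 3, 4, 5], columns=["A"])

# Sort by the secondary key (the index) first, then run a stable sort
# on the primary key; ties in A keep their index order
res = df.sort_index().sort_values("A", kind="stable")
print(res.index.tolist())  # [3, 4, 6, 5, 2]
```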
Using numpy's lexsort may be another way, and a little faster as well:
df.iloc[np.lexsort((df.index, df.A.values))] # Sort by A.values, then by index
Result:
A
3 2
4 4
6 4
5 5
2 6
Comparing with timeit:
%%timeit
df.iloc[np.lexsort((df.index, df.A.values))] # Sort by A.values, then by index
Result:
1000 loops, best of 3: 278 µs per loop
With reset index and set index again:
%%timeit
df.reset_index().sort_values(by=['A','index']).set_index('index')
Result:
100 loops, best of 3: 2.09 ms per loop
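A detail that is easy to get backwards (my addition): np.lexsort treats the last key in the tuple as the primary sort key, which is why the index comes first and df.A.values comes last in the call above. A sketch:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame([6, 4, 2, 4, 5], index=[2, 6, 3, 4, 5], columns=["A"])

# Last key (df.A.values) is primary; first key (df.index) breaks ties
order = np.lexsort((df.index, df.A.values))
res = df.iloc[order]
print(res.index.tolist())  # [3, 4, 6, 5, 2]
```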
The other answers are great. I'll throw in one other option, which is to provide a name for the index first using rename_axis and then reference it in sort_values. I have not tested the performance but expect the accepted answer to still be faster.
df.rename_axis('idx').sort_values(by=['A', 'idx'])
A
idx
3 2
4 4
6 4
5 5
2 6
You can clear the index name afterward if you want with df.index.name = None.
I have a data frame which represents different classes with their values. For example:
df = pd.DataFrame(
    {'label': ['a','a','b','a','b','b','a','c','c','d','e','c'],
     'date': [1,2,3,4,3,7,12,18,11,2,5,3],
     'value': np.random.randn(12)})
I want to choose the labels whose value counts are at or below a specific threshold and then put them into one class, i.e. label them as, for example, 'zero'.
This is my attempt:
value_count=df.label.value_counts()
threshold = 3
for index in value_count[value_count.values<=threshold].index:
df.label[df.label==index]='zero'
Is there a better way to do this?
You can use groupby.transform to get the value counts aligned with the original index, then use it as a boolean index:
df.loc[df.groupby('label')['label'].transform('count') <= threshold, 'label'] = 'zero'
df
Out:
date label value
0 1 a -0.587957
1 2 a 0.341551
2 3 zero 0.516933
3 4 a 0.234042
4 3 zero -0.206185
5 7 zero 0.840724
6 12 a -0.728868
7 18 zero 0.111260
8 11 zero -0.471337
9 2 zero 0.030803
10 5 zero 1.012638
11 3 zero -1.233750
Here are my timings:
df = pd.concat([df]*10**4)
%timeit df.groupby('label')['label'].transform('count') <= threshold
100 loops, best of 3: 7.86 ms per loop
%%timeit
value_count=df.label.value_counts()
df['label'].isin(value_count[value_count.values<=threshold].index)
100 loops, best of 3: 9.24 ms per loop
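If you prefer to avoid assignment through .loc (or the chained assignment in the question, which triggers SettingWithCopyWarning), Series.where expresses the same replacement functionally; a sketch using the transform-based counts (my addition):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(
    {'label': ['a','a','b','a','b','b','a','c','c','d','e','c'],
     'date': [1,2,3,4,3,7,12,18,11,2,5,3],
     'value': np.random.randn(12)})
threshold = 3

# Keep labels whose count exceeds the threshold; replace the rest with 'zero'
counts = df.groupby('label')['label'].transform('count')
df['label'] = df['label'].where(counts > threshold, 'zero')
print(df['label'].tolist())
```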
You could do
In [59]: df.loc[df['label'].isin(value_count[value_count.values<=threshold].index),
'label'] = 'zero'
In [60]: df
Out[60]:
date label value
0 1 a -0.132887
1 2 a -1.306601
2 3 zero -1.431952
3 4 a 0.928743
4 3 zero 0.278955
5 7 zero 0.128430
6 12 a 0.200825
7 18 zero -0.560548
8 11 zero -2.925706
9 2 zero -0.061373
10 5 zero -0.632036
11 3 zero -1.061894
Timings
In [87]: df = pd.concat([df]*10**4, ignore_index=True)
In [88]: %timeit df['label'].isin(value_count[value_count.values<=threshold].index)
100 loops, best of 3: 7.1 ms per loop
In [89]: %timeit df.groupby('label')['label'].transform('count') <= threshold
100 loops, best of 3: 11.7 ms per loop
In [90]: df.shape
Out[90]: (120000, 3)
You may want to benchmark with a larger dataset. Also, this comparison may not be accurate, since you're precomputing value_count outside the timed expression.