I'm trying to return two different values from an apply method, but I can't figure out how to get the results I need.
With a function as:
def fun(row):
    s = [sum(row[i:i+2]) for i in range(len(row) - 1)]
    ps = s.index(max(s))
    return max(s), ps
and df as:
6:00 6:15 6:30
0 3 8 9
1 60 62 116
I'm trying to return the max value of the row, but I also need to get the index of the first value that produces the max combination.
df["phour"] = t.apply(fun, axis=1)
I can get the output I need, but I don't know how I can get the index in a new column. So far I'm getting both answers in a tuple:
6:00 6:15 6:30 phour
0 3 8 9 (17, 1)
1 60 62 116 (178, 1)
How can I get the index value in its own column?
You can get the index in a separate column like this:
df[['phour','index']] = df.apply(lambda row: pd.Series(list(fun(row))), axis=1)
Or if you modify fun slightly:
def fun(row):
    s = [sum(row[i:i+2]) for i in range(len(row) - 1)]
    ps = s.index(max(s))
    return [max(s), ps]
Then the code becomes a little less convoluted:
df[['phour','index']] = df.apply(lambda row: pd.Series(fun(row)), axis=1)
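On pandas 0.23+ you can also skip the pd.Series wrapper entirely by letting apply expand the returned tuple into columns. A minimal sketch, assuming fun is defined as in the question:
# result_type='expand' turns each returned tuple into separate columns
df[['phour', 'index']] = df.apply(fun, axis=1, result_type='expand')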
You can apply pd.Series
df.drop('Double', axis=1).join(df.Double.apply(pd.Series, index=['D1', 'D2']))
A B C D1 D2
0 1 2 3 1 2
1 2 3 2 3 4
2 3 4 4 5 6
3 4 1 1 7 8
Equivalently
df.drop('Double', axis=1).join(
    pd.DataFrame(np.array(df.Double.values.tolist()), columns=['D1', 'D2'])
)
Setup, using #GordonBean's df:
df = pd.DataFrame({'A':[1,2,3,4], 'B':[2,3,4,1], 'C':[3,2,4,1], 'Double': [(1,2), (3,4), (5,6), (7,8)]})
If you are just trying to get the max and argmax, I recommend using the pandas API:
DataFrame.idxmax
So:
df = pd.DataFrame({'A':[1,2,3,4], 'B':[2,3,4,1], 'C':[3,2,4,1]})
df
A B C
0 1 2 3
1 2 3 2
2 3 4 4
3 4 1 1
df['Max'] = df.max(axis=1)
df['ArgMax'] = df.idxmax(axis=1)
df
A B C Max ArgMax
0 1 2 3 3 C
1 2 3 2 3 B
2 3 4 4 4 B
3 4 1 1 4 A
Update:
And if you need the actual index value, you can use numpy.ndarray.argmax:
df['ArgMaxNum'] = df[['A','B','C']].values.argmax(axis=1)
A B C Max ArgMax ArgMaxNum
0 1 2 3 3 C 2
1 2 3 2 3 B 1
2 3 4 4 4 B 1
3 4 1 1 4 A 0
One way to split out the tuples into separate columns could be with tuple unpacking:
df = pd.DataFrame({'A':[1,2,3,4], 'B':[2,3,4,1], 'C':[3,2,4,1], 'Double': [(1,2), (3,4), (5,6), (7,8)]})
df
A B C Double
0 1 2 3 (1, 2)
1 2 3 2 (3, 4)
2 3 4 4 (5, 6)
3 4 1 1 (7, 8)
df['D1'] = [d[0] for d in df.Double]
df['D2'] = [d[1] for d in df.Double]
df
A B C Double D1 D2
0 1 2 3 (1, 2) 1 2
1 2 3 2 (3, 4) 3 4
2 3 4 4 (5, 6) 5 6
3 4 1 1 (7, 8) 7 8
There's got to be a better way, but you can do:
df.merge(
    pd.DataFrame(
        ((i, j) for i, j in df.apply(fun, axis=1).values),
        columns=['phour', 'index']
    ),
    left_index=True, right_index=True
)
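Another common pattern, sketched here under the same assumption that fun returns a (value, position) tuple per row, is to unpack with zip and assign both columns at once:
# zip(*...) transposes the Series of tuples into two aligned sequences
df['phour'], df['index'] = zip(*df.apply(fun, axis=1))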
So I have a dataframe like this
df = pd.DataFrame({
'A': [1,1,2,2,3,3,3],
'B': [1,3,1,3,1,2,1],
'C': [1,3,5,3,7,7,1]})
A B C
0 1 1 1
1 1 3 3
2 2 1 5
3 2 3 3
4 3 1 7
5 3 2 7
6 3 1 1
I want to create bin counts of columns B and C, grouped by column A.
For example, B_bin1 where B < 3 and B_bin2 for the rest (>= 3), C_bin1 for C < 5 and C_bin2 for the rest.
For that example, the output I want looks like this:
A B_bin1 B_bin2 C_bin1 C_bin2
0 1 1 1 2 0
1 2 1 1 1 1
2 3 3 0 1 2
I found a similar question, Pandas groupby with bin counts, and it works for one bin:
bins = [0,2,10]
temp_df=df.groupby(['A', pd.cut(df['B'], bins)])
temp_df.size().unstack()
B (0, 2] (2, 10]
A
1 1 1
2 1 1
3 3 0
but when I try using more than one bin, it doesn't work (my real data has a lot of binning groups):
bins = [0,2,10]
bins2 = [0,4,10]
temp_df=df.groupby(['A', pd.cut(df['B'], bins), pd.cut(df['C'], bins2)])
temp_df.size().unstack()
C (0, 4] (4, 10]
A B
1 (0, 2] 1 0
(2, 10] 1 0
2 (0, 2] 0 1
(2, 10] 1 0
3 (0, 2] 1 2
(2, 10] 0 0
My workaround is to create small temporary DataFrames, bin them one group at a time, and then merge them at the end.
I'm also still trying aggregation (probably with pd.NamedAgg too), similar to this, but I wonder whether that can work:
df.groupby('A').agg(
    b_count=('B', 'count'),
    b_sum=('B', 'sum'),
    c_count=('C', 'count'),
    c_sum=('C', 'sum')
)
Does anyone have another idea for this?
Because each bin needs to be processed separately, instead of groupby + size + unstack use crosstab, and then join the DataFrames with concat:
bins = [0,2,10]
bins2 = [0,4,10]
temp_df1=pd.crosstab(df['A'], pd.cut(df['B'], bins, labels=False)).add_prefix('B_')
temp_df2=pd.crosstab(df['A'], pd.cut(df['C'], bins2, labels=False)).add_prefix('C_')
df = pd.concat([temp_df1, temp_df2], axis=1).reset_index()
print (df)
A B_0 B_1 C_0 C_1
0 1 1 1 2 0
1 2 1 1 1 1
2 3 3 0 1 2
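If there are many columns to bin, the same crosstab idea can be generalized with a loop. A sketch, assuming the bin edges are collected in a dict keyed by column name (the dict is mine, not from the question):
import pandas as pd

bin_edges = {'B': [0, 2, 10], 'C': [0, 4, 10]}   # hypothetical mapping: column -> bin edges
pieces = [
    pd.crosstab(df['A'], pd.cut(df[col], edges, labels=False)).add_prefix(col + '_')
    for col, edges in bin_edges.items()
]
out = pd.concat(pieces, axis=1).reset_index()
print(out)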
One option is get_dummies before the aggregation; this works since you have a limited number of bins (I'm skipping the bins and using comparisons instead):
temp = (df
        .assign(B=df.B.lt(3), C=df.C.lt(5))
        .replace({True: 1, False: 2})
       )
(pd
 .get_dummies(temp, columns=['B', 'C'], prefix_sep='_bin')
 .groupby('A')
 .sum()
)
B_bin1 B_bin2 C_bin1 C_bin2
A
1 1 1 2 0
2 1 1 1 1
3 3 0 1 2
You could use the bins, along with pd.factorize and get_dummies:
temp = df.copy()
temp['B'] = pd.cut(df.B, bins)
temp['B'] = pd.factorize(temp.B)[0] + 1
temp['C'] = pd.cut(df.C, bins2)
temp['C'] = pd.factorize(temp.C)[0] + 1
(pd
 .get_dummies(temp, columns=['B', 'C'], prefix_sep='_bin')
 .groupby('A')
 .sum()
)
B_bin1 B_bin2 C_bin1 C_bin2
A
1 1 1 2 0
2 1 1 1 1
3 3 0 1 2
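A related option, shown as a sketch, is to pass labels= directly to pd.cut so that get_dummies produces the bin names you want without factorize:
temp = df.assign(
    B=pd.cut(df['B'], bins, labels=['bin1', 'bin2']),
    C=pd.cut(df['C'], bins2, labels=['bin1', 'bin2']),
)
# columns come out as B_bin1, B_bin2, C_bin1, C_bin2
print(pd.get_dummies(temp, columns=['B', 'C'], prefix_sep='_').groupby('A').sum())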
I know about pandas resampling functions using a DateTimeIndex.
But how can I easily resample/group along an integer index?
The following code illustrates the problem and works:
import numpy as np
import pandas as pd
df = pd.DataFrame(np.random.randint(5, size=(10, 2)), columns=list('AB'))
print(df)
A B
0 3 2
1 1 1
2 0 1
3 2 3
4 2 0
5 4 0
6 3 1
7 3 4
8 0 2
9 4 4
# sum of n consecutive elements
n = 3
tuples = [(i, i+n-1) for i in range(0, len(df.index), n)]
df_new = pd.concat([df.loc[i[0]:i[1]].sum() for i in tuples], axis=1).T
print(df_new)
A B
0 4 4
1 8 3
2 6 7
3 4 4
But isn't there a more elegant way to accomplish this?
The code seems a bit heavy-handed to me.
Thanks in advance!
You can floor-divide the index and aggregate with some function:
df1 = df.groupby(df.index // n).sum()
If the index is not the default (consecutive, unique integers), aggregate by a floor-divided numpy.arange created from the length of the DataFrame:
df1 = df.groupby(np.arange(len(df)) // n).sum()
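To see what the grouping key looks like, here is a small sketch printing the floor-divided labels for a 10-row frame:
import numpy as np

n = 3
print(np.arange(10) // n)   # [0 0 0 1 1 1 2 2 2 3] -- rows 0-2, 3-5, 6-8 and 9 form the groups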
You can use groupby on the integer division of the index by n, i.e.
df.groupby(lambda i: i//n).sum()
Here is the code:
import numpy as np
import pandas as pd
n=3
df = pd.DataFrame(np.random.randint(5, size=(10, 2)), columns=list('AB'))
print('df:')
print(df)
res = df.groupby(lambda i: i//n).sum()
print('using groupby:')
print(res)
tuples = [(i, i+n-1) for i in range(0, len(df.index), n)]
df_new = pd.concat([df.loc[i[0]:i[1]].sum() for i in tuples], axis=1).T
print('using your method:')
print(df_new)
and the output
df:
A B
0 1 0
1 3 0
2 1 1
3 0 4
4 3 4
5 0 1
6 0 4
7 4 0
8 0 2
9 2 2
using groupby:
A B
0 5 1
1 3 9
2 4 6
3 2 2
using your method:
A B
0 5 1
1 3 9
2 4 6
3 2 2
How do I find the nth smallest number in a row of a DataFrame, and add that value as an entry in a new column (because I would ultimately like to export the data)?
Example Data
Setup
np.random.seed([3,14159])
df = pd.DataFrame(np.random.randint(10, size=(4, 5)), columns=list('ABCDE'))
A B C D E
0 4 8 1 1 9
1 2 8 1 4 2
2 8 2 8 4 9
3 4 3 4 1 5
In all of the following solutions, I assume n = 3
Solution 1
function prt below
Use np.partition to place the smallest values to the left of the partition point and the largest to the right. Then take everything to the left and find its max.
df.assign(nth=np.partition(df.values, 3, axis=1)[:, :3].max(1))
A B C D E nth
0 4 8 1 1 9 4
1 2 8 1 4 2 2
2 8 2 8 4 9 8
3 4 3 4 1 5 4
Solution 2
function srt below
More intuitive, but with a more costly time complexity, using np.sort:
df.assign(nth=np.sort(df.values, axis=1)[:, 2])
A B C D E nth
0 4 8 1 1 9 4
1 2 8 1 4 2 2
2 8 2 8 4 9 8
3 4 3 4 1 5 4
Solution 3
function rnk below
Using pd.DataFrame.rank
A concise version that upcasts to float:
df.assign(nth=df.where(df.rank(1, method='first').eq(3)).stack().values)
A B C D E nth
0 4 8 1 1 9 4.0
1 2 8 1 4 2 2.0
2 8 2 8 4 9 8.0
3 4 3 4 1 5 4.0
Solution 4
function whr below
Using np.where and pd.DataFrame.rank
i, j = np.where(df.rank(1, method='first') == 3)
df.assign(nth=df.values[i, j])
A B C D E nth
0 4 8 1 1 9 4
1 2 8 1 4 2 2
2 8 2 8 4 9 8
3 4 3 4 1 5 4
Timing
Notice that srt is quickest and comparable to prt for a while; then, for a larger number of columns, the more efficient algorithm of prt kicks in.
res.plot(loglog=True)
import numpy as np
import pandas as pd
from timeit import timeit

prt = lambda df, n: df.assign(nth=np.partition(df.values, n, axis=1)[:, :n].max(1))
srt = lambda df, n: df.assign(nth=np.sort(df.values, axis=1)[:, n - 1])
rnk = lambda df, n: df.assign(nth=df.where(df.rank(1, method='first').eq(n)).stack().values)

def whr(df, n):
    i, j = np.where(df.rank(1, method='first').values == n)
    return df.assign(nth=df.values[i, j])

res = pd.DataFrame(
    index=[10, 30, 100, 300, 1000, 3000, 10000],
    columns='prt srt rnk whr'.split(),
    dtype=float
)

for i in res.index:
    num_rows = int(np.log(i))
    d = pd.DataFrame(np.random.rand(num_rows, i))
    for j in res.columns:
        stmt = '{}(d, 3)'.format(j)
        setp = 'from __main__ import d, {}'.format(j)
        res.at[i, j] = timeit(stmt, setp, number=100)
You can do this as follows (note that nth here is zero-based, so nth = 2 gives the 3rd smallest value):
df.assign(nth=df.apply(lambda x: np.partition(x, nth)[nth], axis='columns'))
Example:
In[72]: df = pd.DataFrame(np.random.rand(3, 3), index=list('abc'), columns=[1, 2, 3])
In[73]: df
Out[73]:
1 2 3
a 0.436730 0.653242 0.843014
b 0.643496 0.854859 0.531652
c 0.831672 0.575336 0.517944
In[74]: df.assign(nth=df.apply(lambda x: np.partition(x, 1)[1], axis='columns'))
Out[74]:
1 2 3 nth
a 0.436730 0.653242 0.843014 0.653242
b 0.643496 0.854859 0.531652 0.643496
c 0.831672 0.575336 0.517944 0.575336
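Another readable, if slower, row-wise option is Series.nsmallest; a sketch with n = 3:
# third smallest per row: take the 3 smallest values and keep the last of them
df.assign(nth=df.apply(lambda row: row.nsmallest(3).iloc[-1], axis=1))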
Here is a method that finds the nth smallest item in a list:
def find_nth_in_list(lst, n):
    # renamed the parameter so it doesn't shadow the built-in `list`
    return sorted(lst)[n - 1]
The usage:
lst = [10, 5, 7, 9, 8, 4, 6, 2, 1, 3]
print(find_nth_in_list(lst, 2))
Output:
2
You can give the row items as a list to this function.
EDIT
You can find rows with this function:
# Returns all rows as a list of lists
def find_rows(df):
    rows = []
    for index, data in df.iterrows():
        rows.append(data.tolist())
    return rows
Example usage:
rows = find_rows(df) #all rows as a list
smallest_3th = find_nth_in_list(rows[2], 3) #3rd row, 3rd smallest item
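To get these values into a new column, as the question asks, one sketch using the two helpers above:
# n = 3; builds the column with one pass over the rows
df['nth'] = [find_nth_in_list(row, 3) for row in find_rows(df)]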
generate some random data
dd=pd.DataFrame(data=np.random.rand(7,3))
find minimum value per row using numpy
dd['minPerRow']=dd.apply(np.min,axis=1)
export results
dd['minPerRow'].to_csv('file.csv')
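As a side note, pandas has a built-in row-wise minimum, which avoids apply entirely; a minimal sketch:
dd['minPerRow'] = dd.min(axis=1)   # equivalent to dd.apply(np.min, axis=1), but vectorized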
In a pandas dataframe I have a column that looks like:
0 M
1 E
2 L
3 M.1
4 M.2
5 M.3
6 E.1
7 E.2
8 E.3
9 E.4
10 L.1
11 L.2
12 M.1.a
13 M.1.b
14 M.1.c
15 M.2.a
16 M.3.a
17 E.1.a
18 E.1.b
19 E.1.c
20 E.2.a
21 E.3.a
22 E.3.b
23 E.4.a
I need to group all the values where the first element is E, M, or L, and then, for each group, create a subgroup where the index is 1, 2, or 3, which will contain a record for each lowercase letter (a, b, c, ...).
Ideally the solution should work for any number of concatenated levels (in this case the number of levels is 3, e.g. A.1.a).
0 1 2
E 1 a
b
c
2 a
3 a
b
4 a
L 1
2
M 1 a
b
c
2 a
3 a
I tried with:
df.groupby([0,1,2]).count()
But the result is missing the L level, because it doesn't have records at the last sub-level.
A workaround is to add a dummy variable and then remove it ... like:
df[2][(df[0]=='L') & (df[2].isnull()) & (df[1].notnull())]='x'
df = df.replace(np.nan,' ', regex=True)
df.sort_values(0, ascending=False, inplace=True)
newdf = df.groupby([0,1,2]).count()
which gives:
0 1 2
E 1 a
b
c
2 a
3 a
b
4 a
L 1 x
2 x
M 1 a
b
c
2 a
3 a
I then deal with the dummy entry x later in my code.
How can I avoid this hackish way of using groupby?
Assuming the column under consideration is represented by s, we can:
Split on the "." delimiter with expand=True to produce an expanded DataFrame.
fnc: checks whether all elements of the grouped frame consist of only None; if so, it replaces them with a dummy entry "" via a list comprehension. A Series constructor is then called on the filtered list, and any remaining None values are removed with dropna.
Perform a groupby on column names 0 and 1 and apply fnc to column 2.
split_str = s.str.split(".", expand=True)
fnc = lambda g: pd.Series(["" if all(x is None for x in g) else x for x in g]).dropna()
split_str.groupby([0, 1])[2].apply(fnc)
produces:
0 1
E 1 1 a
2 b
3 c
2 1 a
3 1 a
2 b
4 1 a
L 1 0
2 0
M 1 1 a
2 b
3 c
2 1 a
3 1 a
Name: 2, dtype: object
To obtain a flattened DataFrame, reset the same index levels that were used to group the DataFrame before:
split_str.groupby([0, 1])[2].apply(fnc).reset_index(level=[0, 1]).reset_index(drop=True)
produces:
0 1 2
0 E 1 a
1 E 1 b
2 E 1 c
3 E 2 a
4 E 3 a
5 E 3 b
6 E 4 a
7 L 1
8 L 2
9 M 1 a
10 M 1 b
11 M 1 c
12 M 2 a
13 M 3 a
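If a hierarchical display closer to the desired output is preferred, the first two columns can be moved back into the index; a small sketch built on the flattened result above:
flat = split_str.groupby([0, 1])[2].apply(fnc).reset_index(level=[0, 1]).reset_index(drop=True)
print(flat.set_index([0, 1]))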
Maybe you can find a way with regex.
import pandas as pd
df = pd.read_clipboard(header=None).iloc[:, 1]
df2 = df.str.extract(r'([A-Z])\.?([0-9]?)\.?([a-z]?)')
print(df2.set_index([0, 1]))
and the result is,
2
0 1
M
E
L
M 1
2
3
E 1
2
3
4
L 1
2
M 1 a
1 b
1 c
2 a
3 a
E 1 a
1 b
1 c
2 a
3 a
3 b
4 a
I have a pandas dataframe with two columns A,B as below.
I want a vectorized solution for creating a new column C where C[i] = C[i-1] - A[i] + B[i].
df = pd.DataFrame(data={'A': [10, 2, 3, 4, 5, 6], 'B': [0, 1, 2, 3, 4, 5]})
>>> df
A B
0 10 0
1 2 1
2 3 2
3 4 3
4 5 4
5 6 5
Here is the solution using for-loops:
df['C'] = df['A']
for i in range(1, len(df)):
    df['C'][i] = df['C'][i-1] - df['A'][i] + df['B'][i]
>>> df
A B C
0 10 0 10
1 2 1 9
2 3 2 8
3 4 3 7
4 5 4 6
5 6 5 5
... which does the job.
But since loops are slow in comparison to vectorized calculations, I want a vectorized solution for this in pandas:
I tried to use the shift() method like this:
df['C'] = df['C'].shift(1).fillna(df['A']) - df['A'] + df['B']
but it didn't help since the shifted C column isn't updated with the calculation. It keeps its original values:
>>> df['C'].shift(1).fillna(df['A'])
0 10
1 10
2 2
3 3
4 4
5 5
and that produces a wrong result.
This can be vectorized, since:
delta[i] = C[i] - C[i-1] = -A[i] + B[i], so you can get delta from A and B first, then
calculate the cumulative sum of delta (plus C[0]) to get the full C.
Code as follows:
delta = df['B'] - df['A']
delta[0] = 0
df['C'] = df.loc[0, 'A'] + delta.cumsum()
print(df)
A B C
0 10 0 10
1 2 1 9
2 3 2 8
3 4 3 7
4 5 4 6
5 6 5 5
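As a quick sanity check, the vectorized column can be compared against the loop version from the question; a sketch:
import pandas as pd

df = pd.DataFrame({'A': [10, 2, 3, 4, 5, 6], 'B': [0, 1, 2, 3, 4, 5]})

# loop version from the question
loop = df['A'].copy()
for i in range(1, len(df)):
    loop[i] = loop[i - 1] - df['A'][i] + df['B'][i]

# vectorized version from this answer
delta = df['B'] - df['A']
delta[0] = 0
vec = df.loc[0, 'A'] + delta.cumsum()

print((loop == vec).all())   # True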