I am trying to use linear interpolation to fill in the missing values in my data frame. The interpolation should be applied separately to each group of rows that share the same id. An example of the data frame is below:
mdata:
id f1 f2 f3 f4 f5
d1 34 3 5 nan 6
d1 nan 4 6 9 7
d1 37 nan 6 10 8
d2 nan 7 8 1 32
d2 12 8 nan 45 56
d2 13 9 11 46 59
Given the above example, I want to apply the interpolation function to the rows with id d1, then d2, and so on. I tried to group them and then use interpolation, but it seems something is wrong in my code:
mdata=[~mdata['id'].map(mdata.groupby('id').apply(mdata.interpolate(method='linear', limit_direction='both')))]
My desired output should be something like this:
output:
id f1 f2 f3 f4 f5
d1 34 3 5 9 6
d1 35.5 4 6 9 7
d1 37 5 6 10 8
d2 12 7 8 1 32
d2 12 8 9.5 45 56
d2 13 9 11 46 59
You can define a function:
def f(x):
    return x.interpolate(method='linear', limit_direction='both')

# Finally:
mdata = mdata.groupby('id').apply(f)
Or via an anonymous function:
mdata = (mdata.groupby('id')
              .apply(lambda x: x.interpolate(method='linear', limit_direction='both')))
output of mdata:
id f1 f2 f3 f4 f5
0 d1 34.0 3.0 5.0 9.0 6
1 d1 35.5 4.0 6.0 9.0 7
2 d1 37.0 4.0 6.0 10.0 8
3 d2 12.0 7.0 8.0 1.0 32
4 d2 12.0 8.0 9.5 45.0 56
5 d2 13.0 9.0 11.0 46.0 59
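If you prefer not to run interpolate over the non-numeric id column, a minimal sketch using transform instead of apply (the column list is taken from the example above):

num_cols = ['f1', 'f2', 'f3', 'f4', 'f5']  # numeric columns from the example
# transform applies the interpolation to each column within each id group
mdata[num_cols] = (mdata.groupby('id')[num_cols]
                        .transform(lambda s: s.interpolate(method='linear',
                                                           limit_direction='both')))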
I am trying to merge 2 dataframes.
df1
Date A B C
01.01.2021 1 8 14
02.01.2021 2 9 15
03.01.2021 3 10 16
04.01.2021 4 11 17
05.01.2021 5 12 18
06.01.2021 6 13 19
07.01.2021 7 14 20
df2
Date B
07.01.2021 14
08.01.2021 27
09.01.2021 28
10.01.2021 29
11.01.2021 30
12.01.2021 31
13.01.2021 32
Both dataframes share one identical row (although there could be several overlapping rows).
So I want to get df3 that looks as follows:
df3
Date A B C
01.01.2021 1 8 14
02.01.2021 2 9 15
03.01.2021 3 10 16
04.01.2021 4 11 17
05.01.2021 5 12 18
06.01.2021 6 13 19
07.01.2021 7 14 20
08.01.2021 NaN 27 NaN
09.01.2021 NaN 28 NaN
10.01.2021 NaN 29 NaN
11.01.2021 NaN 30 NaN
12.01.2021 NaN 31 NaN
13.01.2021 NaN 32 NaN
I've tried
df3 = df1.merge(df2, on='Date', how='outer'), but it gives duplicated B columns (B_x and B_y). Could you give some idea how to get df3?
Thanks a lot.
Merge with how='outer' without specifying on (the default on is the intersection of the columns of the two DataFrames, in this case ['Date', 'B']):
df3 = df1.merge(df2, how='outer')
df3:
Date A B C
0 01.01.2021 1.0 8 14.0
1 02.01.2021 2.0 9 15.0
2 03.01.2021 3.0 10 16.0
3 04.01.2021 4.0 11 17.0
4 05.01.2021 5.0 12 18.0
5 06.01.2021 6.0 13 19.0
6 07.01.2021 7.0 14 20.0
7 08.01.2021 NaN 27 NaN
8 09.01.2021 NaN 28 NaN
9 10.01.2021 NaN 29 NaN
10 11.01.2021 NaN 30 NaN
11 12.01.2021 NaN 31 NaN
12 13.01.2021 NaN 32 NaN
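The same result can also be obtained by naming both join keys explicitly, which makes the intent clearer:

df3 = df1.merge(df2, on=['Date', 'B'], how='outer')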
Assuming you always want to keep the first full version, you can concat df2 onto the end of df1 and drop duplicates on the Date column.
pd.concat([df1,df2]).drop_duplicates(subset='Date')
Output
Date A B C
0 01.01.2021 1.0 8 14.0
1 02.01.2021 2.0 9 15.0
2 03.01.2021 3.0 10 16.0
3 04.01.2021 4.0 11 17.0
4 05.01.2021 5.0 12 18.0
5 06.01.2021 6.0 13 19.0
6 07.01.2021 7.0 14 20.0
1 08.01.2021 NaN 27 NaN
2 09.01.2021 NaN 28 NaN
3 10.01.2021 NaN 29 NaN
4 11.01.2021 NaN 30 NaN
5 12.01.2021 NaN 31 NaN
6 13.01.2021 NaN 32 NaN
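Note that the concat result keeps the original row labels (the repeated 1-6 above). If a clean RangeIndex is wanted, a small follow-up:

df3 = (pd.concat([df1, df2])
         .drop_duplicates(subset='Date')
         .reset_index(drop=True))  # rebuild a 0..n-1 index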
I'm trying to fill a dataframe with missing data. I've got these two dataframes:
df1:
df1 = pd.DataFrame({'a':['11','11','11','11','22','22','43','43'], 'x': ['d1', 'd2','d3','d4','d1','d2','d1','d3'], 'b': [1, 2,3,4,5,6,7,8]})
a x b
0 11 d1 1
1 11 d2 2
2 11 d3 3
3 11 d4 4
4 22 d1 5
5 22 d2 6
6 43 d1 7
7 43 d3 8
df2:
df2 = pd.DataFrame({'x': ['d1', 'd2','d3','d4']})
x
0 d1
1 d2
2 d3
3 d4
I've tried doing this:
df1.groupby('a', as_index=False).apply(lambda d: d.merge(df2, on='x', how='right')).reset_index(drop=True)
But my result is:
a x b
0 11 d1 1.0
1 11 d2 2.0
2 11 d3 3.0
3 11 d4 4.0
4 22 d1 5.0
5 22 d2 6.0
6 NaN d3 NaN
7 NaN d4 NaN
8 NaN d2 NaN
9 NaN d4 NaN
10 43 d1 7.0
11 43 d3 8.0
My desired output would be:
a x b
0 11 d1 1.0
1 11 d2 2.0
2 11 d3 3.0
3 11 d4 4.0
4 22 d1 5.0
5 22 d2 6.0
6 22 d3 NaN
7 22 d4 NaN
8 43 d1 7.0
9 43 d2 NaN
10 43 d3 8.0
11 43 d4 NaN
Is it possible to fill the missing data represented by NaN in the rows where I need it? This way I've got d2 and d4 in rows 8 and 9 when I need them in rows 10 and 11.
My dataframe has around 150-200 rows, so I'm trying to keep this as generic as I can.
For performance, groupby with merge is not a good idea. It is better to create a MultiIndex from all possible combinations of the a and x columns and use DataFrame.reindex:
mux = pd.MultiIndex.from_product([df1['a'].unique(), df2['x']], names=['a','x'])
df = df1.set_index(['a','x']).reindex(mux).reset_index()
print(df)
a x b
0 11 d1 1.0
1 11 d2 2.0
2 11 d3 3.0
3 11 d4 4.0
4 22 d1 5.0
5 22 d2 6.0
6 22 d3 NaN
7 22 d4 NaN
8 43 d1 7.0
9 43 d2 NaN
10 43 d3 8.0
11 43 d4 NaN
Then, if you need to blank out a where b is missing and move those rows to the end of each a group, use:
df = (df.assign(tmp=df['b'].isna())                       # flag rows with missing b
        .sort_values(['a', 'tmp'])                        # push flagged rows to the end of each group
        .assign(a=lambda x: x['a'].mask(x['b'].isna()))   # blank out a where b is missing
        .drop('tmp', axis=1))
print(df)
a x b
0 11 d1 1.0
1 11 d2 2.0
2 11 d3 3.0
3 11 d4 4.0
4 22 d1 5.0
5 22 d2 6.0
6 NaN d3 NaN
7 NaN d4 NaN
8 43 d1 7.0
10 43 d3 8.0
9 NaN d2 NaN
11 NaN d4 NaN
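For completeness, the same full grid can also be built with a cross join instead of reindex; this is just a sketch (how='cross' requires pandas >= 1.2):

grid = df1[['a']].drop_duplicates().merge(df2, how='cross')  # every (a, x) combination
out = grid.merge(df1, on=['a', 'x'], how='left')             # attach b where it exists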
I might not fully understand the question, but shouldn't the result be more like:
a x b
0 11 d1 1.0
1 11 d2 2.0
2 11 d3 3.0
3 11 d4 4.0
4 22 d1 5.0
5 22 d2 6.0
6 NaN d3 NaN
7 NaN d4 NaN
8 43 d1 7.0
9 NaN d2 NaN
10 43 d3 8.0
11 NaN d4 NaN
Which is what I get from your code:
import pandas as pd
df1 = pd.DataFrame({'a':['11','11','11','11','22','22','43','43'], 'x': ['d1', 'd2','d3','d4','d1','d2','d1','d3'], 'b': [1, 2,3,4,5,6,7,8]})
df2 = pd.DataFrame({'x': ['d1', 'd2','d3','d4']})
print(df1.groupby('a', as_index=False).apply(lambda d: d.merge(df2, on='x', how='right')).reset_index(drop=True))
Result:
a x b
0 11 d1 1.0
1 11 d2 2.0
2 11 d3 3.0
3 11 d4 4.0
4 22 d1 5.0
5 22 d2 6.0
6 NaN d3 NaN
7 NaN d4 NaN
8 43 d1 7.0
9 NaN d2 NaN
10 43 d3 8.0
11 NaN d4 NaN
I want to take the surface-weighted average of the columns in my dataframe. I have two surface columns (A1, A2) and two U-value columns (U1, U2). I want to create an extra column 'U_av' (surface-weighted average U-value), where U_av = (A1*U1 + A2*U2) / (A1+A2). If NaN occurs in one of the columns, NaN should be returned.
Initial df:
ID A1 A2 U1 U2
0 14 2 1.0 10.0 11
1 16 2 2.0 12.0 12
2 18 2 3.0 24.0 13
3 20 2 NaN 8.0 14
4 22 4 5.0 84.0 15
5 24 4 6.0 84.0 16
Desired Output:
ID A1 A2 U1 U2 U_av
0 14 2 1.0 10.0 11 10.33
1 16 2 2.0 12.0 12 12
2 18 2 3.0 24.0 13 17.4
3 20 2 NaN 8.0 14 NaN
4 22 4 5.0 84.0 15 45.66
5 24 4 6.0 84.0 16 43.2
Code:
import numpy as np
import pandas as pd
df = pd.DataFrame({"ID": [14,16,18,20,22,24],
"A1": [2,2,2,2,4,4],
"U1": [10,12,24,8,84,84],
"A2": [1,2,3,np.nan,5,6],
"U2": [11,12,13,14,15,16]})
print(df)
# the mean of the two columns U1 and U2, dropping NaN, is easy ((U1+U2)/2 in this case)
# but what to do for the surface-weighted mean (U_av = (A1*U1 + A2*U2) / (A1+A2))?
df.loc[:,'Umean'] = df[['U1','U2']].dropna().mean(axis=1)
EDIT:
adding to the solutions below:
df["U_av"] = (df.A1.mul(df.U1) + df.A2.mul(df.U2)).div(df[['A1','A2']].sum(axis=1))
Hope I got you right:
df['U_av'] = (df['A1']*df['U1'] + df['A2']*df['U2']) / (df['A1']+df['A2'])
df
ID A1 U1 A2 U2 U_av
0 14 2 10 1.0 11 10.333333
1 16 2 12 2.0 12 12.000000
2 18 2 24 3.0 13 17.400000
3 20 2 8 NaN 14 NaN
4 22 4 84 5.0 15 45.666667
5 24 4 84 6.0 16 43.200000
Try this code:
numerator = df.A1.mul(df.U1) + (df.A2.mul(df.U2))
denominator = df.A1.add(df.A2)
df["U_av"] = numerator.div(denominator)
df
ID A1 A2 U1 U2 U_av
0 14 2 1.0 10.0 11 10.333333
1 16 2 2.0 12.0 12 12.000000
2 18 2 3.0 24.0 13 17.400000
3 20 2 NaN 8.0 14 NaN
4 22 4 5.0 84.0 15 45.666667
5 24 4 6.0 84.0 16 43.200000
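A sketch that generalizes the same formula to any number of (A, U) pairs; the column lists below are an assumption based on the naming pattern in the question:

a_cols = ['A1', 'A2']  # surface columns (extend as needed)
u_cols = ['U1', 'U2']  # matching U-value columns

# any NaN in an A or U column propagates through the numerator, so U_av stays NaN
numerator = sum(df[a] * df[u] for a, u in zip(a_cols, u_cols))
df['U_av'] = numerator / df[a_cols].sum(axis=1)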
I have the following pandas DataFrame.
import pandas as pd
df = pd.read_csv('filename.csv')
print(df)
time Group blocks
0 1 A 4
1 2 A 7
2 3 A 12
3 4 A 17
4 5 A 21
5 6 A 26
6 7 A 33
7 8 A 39
8 9 A 48
9 10 A 59
.... .... ....
36 35 A 231
37 1 B 1
38 2 B 1.5
39 3 B 3
40 4 B 5
41 5 B 6
.... .... ....
911 35 Z 349
This is a dataframe with multiple time-series-like data, from min=1 to max=35. Each Group has a relationship over the range time=1 to time=35.
I would like to segment this dataframe into columns Group A, Group B, Group C, etc.
How does one "unconcatenate" this dataframe?
Is that what you want?
In [84]: df.pivot_table(index='time', columns='Group')
Out[84]:
blocks
Group A B
time
1 4.0 1.0
2 7.0 1.5
3 12.0 3.0
4 17.0 5.0
5 21.0 6.0
6 26.0 NaN
7 33.0 NaN
8 39.0 NaN
9 48.0 NaN
10 59.0 NaN
35 231.0 NaN
data:
In [86]: df
Out[86]:
time Group blocks
0 1 A 4.0
1 2 A 7.0
2 3 A 12.0
3 4 A 17.0
4 5 A 21.0
5 6 A 26.0
6 7 A 33.0
7 8 A 39.0
8 9 A 48.0
9 10 A 59.0
36 35 A 231.0
37 1 B 1.0
38 2 B 1.5
39 3 B 3.0
40 4 B 5.0
41 5 B 6.0
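Since each (time, Group) pair occurs only once here, pivot gives the same wide layout without any aggregation (pivot_table takes the mean of duplicates by default):

wide = df.pivot(index='time', columns='Group', values='blocks')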
I have a list of students in a csv file. I want (using Python) to display four columns showing the male students who have the highest marks in Maths, Computer, and Physics.
I tried to use the pandas library.
marks = pd.concat([data['name'],
data.loc[data['students']==1, 'maths'].nlargest(n=10)], 'computer'].nlargest(n=10)], 'physics'].nlargest(n=10)])
I used 1 for male students and 0 for female students.
It gives me an error saying: Invalid syntax.
Here's a way to show the top 10 students in each of the disciplines. You could of course just sum the three scores and select the students with the highest total if you want the combined as opposed to the individual performance (see illustration below).
import random
import numpy as np
import pandas as pd

df1 = pd.DataFrame(data={'name': [''.join(random.choice('abcdefgh') for _ in range(8)) for i in range(100)],
                         'students': np.random.randint(0, 2, size=100)})
df2 = pd.DataFrame(data=np.random.randint(0, 10, size=(100, 3)), columns=['math', 'physics', 'computers'])
data = pd.concat([df1, df2], axis=1)
data.info()
RangeIndex: 100 entries, 0 to 99
Data columns (total 5 columns):
name 100 non-null object
students 100 non-null int64
math 100 non-null int64
physics 100 non-null int64
computers 100 non-null int64
dtypes: int64(4), object(1)
memory usage: 4.0+ KB
res = pd.concat([data.loc[:, ['name']],
                 data.loc[data['students'] == 1, 'math'].nlargest(n=10),
                 data.loc[data['students'] == 1, 'physics'].nlargest(n=10),
                 data.loc[data['students'] == 1, 'computers'].nlargest(n=10)],
                axis=1)
res.dropna(how='all', subset=['math', 'physics', 'computers'])
name math physics computers
0 geghhbce NaN 9.0 NaN
1 hbbdhcef NaN 7.0 NaN
4 ghgffgga NaN NaN 8.0
6 hfcaccgg 8.0 NaN NaN
14 feechdec NaN NaN 8.0
15 dfaabcgh 9.0 NaN NaN
16 ghbchgdg 9.0 NaN NaN
23 fbeggcha NaN NaN 9.0
27 agechbcf 8.0 NaN NaN
28 bcddedeg NaN NaN 9.0
30 hcdgbgdg NaN 8.0 NaN
38 fgdfeefd NaN NaN 9.0
39 fbcgbeda 9.0 NaN NaN
41 agbdaegg 8.0 NaN 9.0
49 adgbefgg NaN 8.0 NaN
50 dehdhhhh NaN NaN 9.0
55 ccbaaagc NaN 8.0 NaN
68 hhggfffe 8.0 9.0 NaN
71 bhggbheg NaN 9.0 NaN
84 aabcefhf NaN NaN 9.0
85 feeeefbd 9.0 NaN NaN
86 hgeecacc NaN 8.0 NaN
88 ggedgfeg 9.0 8.0 NaN
89 faafgbfe 9.0 NaN 9.0
94 degegegd NaN 8.0 NaN
99 beadccdb NaN NaN 9.0
data['total'] = data.loc[:, ['math', 'physics', 'computers']].sum(axis=1)
data[data.students==1].nlargest(10, 'total').sort_values('total', ascending=False)
name students math physics computers total
29 fahddafg 1 8 8 8 24
79 acchhcdb 1 8 9 7 24
9 ecacceff 1 7 9 7 23
16 dccefaeb 1 9 9 4 22
92 dhaechfb 1 4 9 9 22
47 eefbfeef 1 8 8 5 21
60 bbfaaada 1 4 7 9 20
82 fbbbehbf 1 9 3 8 20
18 dhhfgcbb 1 8 8 3 19
1 ehfdhegg 1 5 7 6 18
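As a small aside, nlargest already returns rows sorted in descending order, so the trailing sort_values is redundant; a compact equivalent sketch:

top10 = (data.loc[data['students'] == 1]  # male students only
             .assign(total=lambda d: d[['math', 'physics', 'computers']].sum(axis=1))
             .nlargest(10, 'total'))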