I have the following pandas DataFrame. There are lots of NaN values (I skipped most of them to make it look shorter).
0 NaN
...
26 NaN
27 357.0
28 357.0
29 357.0
30 NaN
...
246 NaN
247 357.0
248 357.0
249 357.0
250 NaN
...
303 NaN
304 58.0
305 58.0
306 58.0
307 58.0
308 58.0
309 58.0
310 58.0
311 58.0
312 58.0
313 58.0
314 58.0
315 58.0
316 NaN
...
333 NaN
334 237.0
I would like to filter out all the NaN values and also keep only the first value of each non-NaN run (e.g. indices 27-29 hold three values; I would like to keep the value at index 27 and skip the values at 28 and 29). The target array should be as follows:
27 357.0
247 357.0
304 58.0
334 237.0
I am not sure how I could keep only the first value. Thanks in advance.
Take only the values that aren't NaN but whose preceding value is NaN:
df = df[df.col1.notna() & df.col1.shift().isna()]
Output:
col1
27 357.0
247 357.0
304 58.0
334 237.0
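For reference, here is a minimal self-contained sketch of the same idea (the column name col1 and the abbreviated sample values are taken from the snippets above):
import pandas as pd
import numpy as np

df = pd.DataFrame({'col1': [np.nan, 357.0, 357.0, 357.0, np.nan, 58.0, 58.0, np.nan]})
# keep rows that are non-NaN and whose predecessor is NaN
out = df[df.col1.notna() & df.col1.shift().isna()]
print(out)
#     col1
# 1  357.0
# 5   58.0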
Assuming all values are greater than 0, we could also do:
df = df.fillna(0).diff()
df = df[df.col1.gt(0)]
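Note that this overwrites df with the diffs; the displayed values are still correct only because each run's first diff is value - 0 = value. A non-destructive sketch of the same trick, under the same positivity assumption:
mask = df.col1.fillna(0).diff().gt(0)
df = df[mask]  # original values preserved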
You can also find the runs of consecutive indices and use diff to flag each run's first value:
# indices of the non-NaN values: a gap larger than 1 marks the start of a run
m = (df['col'].dropna()
       .index.to_series()
       .diff().fillna(2).gt(1)  # fillna(2) so the very first value counts as a run start
       .reindex(df.index, fill_value=False))
out = df[m]
print(out)
col
27 357.0
247 357.0
304 58.0
334 237.0
The following is the structure of one of my dataframes:
strike coi chgcoi
120 200 20
125 210 15
130 230 12
135 240 9
and the other one is:
strike poi chgpoi
125 210 15
130 230 12
135 240 9
140 225 12
What I want is:
strike coi chgcoi strike poi chgpoi
120 200 20 120 0 0
125 210 15 125 210 15
130 230 12 130 230 12
135 240 9 135 240 9
140 0 0 140 225 12
First, you need to create the two dataframes with pandas:
df1 = pd.DataFrame({'column_1': [val_1, val_2, ..., val_n], 'column_2': [val_1, val_2, ..., val_n]})
df2 = pd.DataFrame({'column_1': [val_1, val_2, ..., val_n], 'column_2': [val_1, val_2, ..., val_n]})
Then you can use an outer join:
df1.merge(df2, on='common_column_name', how='outer')
db1
strike coi chgcoi
0 120 200 20
1 125 210 15
2 130 230 12
3 135 240 9
db2
strike poi chgpoi
0 125 210 15
1 130 230 12
2 135 240 9
3 140 225 12
merge = db1.merge(db2, how="outer", on='strike')
merge
strike coi chgcoi poi chgpoi
0 120 200.0 20.0 NaN NaN
1 125 210.0 15.0 210.0 15.0
2 130 230.0 12.0 230.0 12.0
3 135 240.0 9.0 240.0 9.0
4 140 NaN NaN 225.0 12.0
merge.fillna(0)
strike coi chgcoi poi chgpoi
0 120 200.0 20.0 0.0 0.0
1 125 210.0 15.0 210.0 15.0
2 130 230.0 12.0 230.0 12.0
3 135 240.0 9.0 240.0 9.0
4 140 0.0 0.0 225.0 12.0
This is your expected result, with the only difference being that 'strike' is not repeated.
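If you do want 'strike' to appear twice, as in the expected output, one option (a sketch building on the merge result above) is to split the merged frame and concatenate the halves side by side:
left = merge[['strike', 'coi', 'chgcoi']]
right = merge[['strike', 'poi', 'chgpoi']]
out = pd.concat([left, right], axis=1).fillna(0)
print(out)
#    strike    coi  chgcoi  strike    poi  chgpoi
# 0     120  200.0    20.0     120    0.0     0.0
# ...
# 4     140    0.0     0.0     140  225.0    12.0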
I have a dataframe with IDs of clients and their expenses for 2014-2018. What I want is the mean of the expenses per ID, but only the years before a certain date may be taken into account when calculating the mean (so the 'Date' column dictates which columns count toward the mean).
Example: for index 0 (ID 12), the date is '2016-03-08', so the mean should be taken from the columns 'y_2014' and 'y_2015'; for this index the mean is 111.0.
If the date is too early (e.g. somewhere in 2014 or earlier in this case), NaN should be returned (see indices 6 and 9).
Initial dataframe:
y_2014 y_2015 y_2016 y_2017 y_2018 Date ID
0 100.0 122.0 324 632 NaN 2016-03-08 12
1 120.0 159.0 54 452 541.0 2015-04-09 96
2 NaN 164.0 687 165 245.0 2016-02-15 20
3 180.0 421.0 512 184 953.0 2018-05-01 73
4 110.0 654.0 913 173 103.0 2017-08-04 84
5 130.0 NaN 754 124 207.0 2016-07-03 26
6 170.0 256.0 843 97 806.0 2013-02-04 87
7 140.0 754.0 95 101 541.0 2016-06-08 64
8 80.0 985.0 184 84 90.0 2019-03-05 11
9 96.0 65.0 127 130 421.0 2014-05-14 34
Desired output:
y_2014 y_2015 y_2016 y_2017 y_2018 Date ID mean
0 100.0 122.0 324 632 NaN 2016-03-08 12 111.0
1 120.0 159.0 54 452 541.0 2015-04-09 96 120.0
2 NaN 164.0 687 165 245.0 2016-02-15 20 164.0
3 180.0 421.0 512 184 953.0 2018-05-01 73 324.25
4 110.0 654.0 913 173 103.0 2017-08-04 84 559.0
5 130.0 NaN 754 124 207.0 2016-07-03 26 130.0
6 170.0 256.0 843 97 806.0 2013-02-04 87 NaN
7 140.0 754.0 95 101 541.0 2016-06-08 64 447
8 80.0 985.0 184 84 90.0 2019-03-05 11 284.6
9 96.0 65.0 127 130 421.0 2014-05-14 34 NaN
Tried code: I'm still working on it, as I don't really know how to start with this; I've only built the dataframe so far. Probably something with the 'datetime' package has to be done to get the desired dataframe?
import pandas as pd
import numpy as np
import datetime
df = pd.DataFrame({"ID": [12,96,20,73,84,26,87,64,11,34],
"y_2014": [100,120,np.nan,180,110,130,170,140,80,96],
"y_2015": [122,159,164,421,654,np.nan,256,754,985,65],
"y_2016": [324,54,687,512,913,754,843,95,184,127],
"y_2017": [632,452,165,184,173,124,97,101,84,130],
"y_2018": [np.nan,541,245,953,103,207,806,541,90,421],
"Date": ['2016-03-08', '2015-04-09', '2016-02-15', '2018-05-01', '2017-08-04',
'2016-07-03', '2013-02-04', '2016-06-08', '2019-03-05', '2014-05-14']})
print(df)
Due to your naming convention, one needs to extract the years from the column names for comparison purposes. Then you can mask the data and take the mean:
# the years encoded in the column names
data = df.filter(like='y_')
data_years = data.columns.str.extract(r'(\d+)')[0].astype(int).values
# the years from the Date column
years = pd.to_datetime(df.Date).dt.year.values
# keep only cells whose column year is strictly before the row's year, then average
df['mean'] = data.where(data_years < years[:, None]).mean(1)
Output:
y_2014 y_2015 y_2016 y_2017 y_2018 Date ID mean
0 100.0 122.0 324 632 NaN 2016-03-08 12 111.00
1 120.0 159.0 54 452 541.0 2015-04-09 96 120.00
2 NaN 164.0 687 165 245.0 2016-02-15 20 164.00
3 180.0 421.0 512 184 953.0 2018-05-01 73 324.25
4 110.0 654.0 913 173 103.0 2017-08-04 84 559.00
5 130.0 NaN 754 124 207.0 2016-07-03 26 130.00
6 170.0 256.0 843 97 806.0 2013-02-04 87 NaN
7 140.0 754.0 95 101 541.0 2016-06-08 64 447.00
8 80.0 985.0 184 84 90.0 2019-03-05 11 284.60
9 96.0 65.0 127 130 421.0 2014-05-14 34 NaN
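The key step is the broadcast comparison: data_years has shape (5,) and years[:, None] has shape (10, 1), so data_years < years[:, None] produces a (10, 5) boolean mask in which row i flags the year columns strictly before row i's date. A minimal illustration with plain numpy:
import numpy as np
data_years = np.array([2014, 2015, 2016, 2017, 2018])
years = np.array([2016, 2015])  # the first two dates of the example
print(data_years < years[:, None])
# [[ True  True False False False]
#  [ True False False False False]]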
One more answer:
import pandas as pd
import numpy as np
df = pd.DataFrame({"ID": [12,96,20,73,84,26,87,64,11,34],
"y_2014": [100,120,np.nan,180,110,130,170,140,80,96],
"y_2015": [122,159,164,421,654,np.nan,256,754,985,65],
"y_2016": [324,54,687,512,913,754,843,95,184,127],
"y_2017": [632,452,165,184,173,124,97,101,84,130],
"y_2018": [np.nan,541,245,953,103,207,806,541,90,421],
"Date": ['2016-03-08', '2015-04-09', '2016-02-15', '2018-05-01', '2017-08-04',
'2016-07-03', '2013-02-04', '2016-06-08', '2019-03-05', '2014-05-14']})
#Subset from original df to calculate mean
subset = df.loc[:,['y_2014', 'y_2015', 'y_2016', 'y_2017', 'y_2018']]
# an expense value only counts toward the mean once that year has passed,
# so 'y_2014' is checked against 2015-01-01, 'y_2015' against 2016-01-01, etc.
subset.columns = ['2015-01-01', '2016-01-01', '2017-01-01', '2018-01-01', '2019-01-01']
# ISO-formatted date strings compare correctly as plain strings
s = subset.columns.values < df.Date.values[:, None]
t = s.astype(float)
t[t == 0] = np.nan  # excluded cells become NaN so mean skips them
df['mean'] = (subset * t).mean(1)
print(df)
# Additionally: gives the sum of expenses before the date in the 'Date' column
df['sum'] = (subset * t).sum(1)
print(df)
I have a dataframe with IDs of clients and their expenses for 2014-2018. What I want is the mean of the 2014-2018 expenses for each ID in the dataframe.
There is however one condition: if one of the cells in the row (2014-2018) is empty, NaN should be returned. So I only want the mean to be calculated when all 5 cells in the 2014-2018 columns hold a numeric value.
Initial dataframe:
2014 2015 2016 2017 2018 ID
100 122.0 324 632 NaN 12.0
120 159.0 54 452 541.0 96.0
NaN 164.0 687 165 245.0 20.0
180 421.0 512 184 953.0 73.0
110 654.0 913 173 103.0 84.0
130 NaN 754 124 207.0 26.0
170 256.0 843 97 806.0 87.0
140 754.0 95 101 541.0 64.0
80 985.0 184 84 90.0 11.0
96 65.0 127 130 421.0 34.0
Desired output
2014 2015 2016 2017 2018 ID mean
100 122.0 324 632 NaN 12.0 NaN
120 159.0 54 452 541.0 96.0 265.20
NaN 164.0 687 165 245.0 20.0 NaN
180 421.0 512 184 953.0 73.0 450.00
110 654.0 913 173 103.0 84.0 390.60
130 NaN 754 124 207.0 26.0 NaN
170 256.0 843 97 806.0 87.0 434.40
140 754.0 95 101 541.0 64.0 326.20
80 985.0 184 84 90.0 11.0 284.60
96 65.0 127 130 421.0 34.0 167.80
Tried code: this however only gives me the mean, ignoring the NaN condition. Is there some brief lambda function that can add the condition to the code?
import pandas as pd
import numpy as np
data = pd.DataFrame({"ID": [12,96,20,73,84,26,87,64,11,34],
"2014": [100,120,np.nan,180,110,130,170,140,80,96],
"2015": [122,159,164,421,654,np.nan,256,754,985,65],
"2016": [324,54,687,512,913,754,843,95,184,127],
"2017": [632,452,165,184,173,124,97,101,84,130],
"2018": [np.nan,541,245,953,103,207,806,541,90,421]})
print(data)
# if a cell in these columns is empty (NaN), then NaN should appear in the new
# 'mean' column; I only want the mean when all 5 cells in the row have a value
fiveyear = ["2014", "2015", "2016", "2017", "2018"]
data.loc[:, 'mean'] = data[fiveyear].mean(axis=1)
print(data)
Use dropna to remove incomplete rows before calculating the mean. Because pandas aligns on index when assigning the result back, and those rows were removed, the dropped rows end up as NaN:
df['mean'] = df[fiveyear].dropna(how='any').mean(1)
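A toy frame (not the question's data) makes the alignment visible:
import pandas as pd
import numpy as np

tmp = pd.DataFrame({'a': [1.0, np.nan, 3.0], 'b': [2.0, 5.0, 4.0]})
tmp['mean'] = tmp.dropna(how='any').mean(1)
print(tmp)
#      a    b  mean
# 0  1.0  2.0   1.5
# 1  NaN  5.0   NaN   <- dropped before the mean, so NaN after alignment
# 2  3.0  4.0   3.5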
It's also possible to mask the result, keeping only the rows that were all non-null:
df['mean'] = df[fiveyear].mean(1).mask(df[fiveyear].isnull().any(1))
A bit more of a hack, but because you know you need all 5 values, you could also use sum, which supports the min_count argument, so anything with fewer than 5 values becomes NaN:
df['mean'] = df[fiveyear].sum(1, min_count=len(fiveyear))/len(fiveyear)
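How min_count behaves, on toy data:
import pandas as pd
import numpy as np

t = pd.DataFrame({'a': [1.0, np.nan], 'b': [2.0, 3.0]})
print(t.sum(1, min_count=2))
# 0    3.0
# 1    NaN    <- fewer than 2 non-NaN values, so the sum itself is NaN
# dtype: float64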
This is the same as @ALollz's answer, but with a flexible way to detect the year columns regardless of how many years there are in the df:
# get the year columns in a list
yearsCols = [c for c in df if c != 'ID']
# calculate the mean
df['mean'] = df[yearsCols].dropna(how='any').mean(1)