My dataframe looks like this:
customer_nr  order_value  year_ordered  payment_successful
1            50           1980          1
1            75           2017          0
1            10           2020          1
2            55           2000          1
2            300          2007          1
2            15           2010          0
For each order, I want to know the total amount the customer has successfully paid in previous years.
The expected output is as follows:
customer_nr  order_value  year_ordered  payment_successful  total_successfully_previously_paid
1            50           1980          1                   0
1            75           2017          0                   50
1            10           2020          1                   50
2            55           2000          1                   0
2            300          2007          1                   55
2            15           2010          0                   355
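For reference, the input frame can be built like this (a minimal sketch of the example data above; df is the name used in the rest of the question):

import pandas as pd

df = pd.DataFrame({
    "customer_nr":        [1, 1, 1, 2, 2, 2],
    "order_value":        [50, 75, 10, 55, 300, 15],
    "year_ordered":       [1980, 2017, 2020, 2000, 2007, 2010],
    "payment_successful": [1, 0, 1, 1, 1, 0],
})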
The closest I've gotten is this:
df.groupby(['customer_nr', 'payment_successful'], as_index=False)['order_value'].sum()
That just gives me the all-time sums of successfully and unsuccessfully paid amounts per customer. It doesn't restrict the sum to orders from previous years.
Try:
df["total_successfully_previously_paid"] = (df["payment_successful"].mul(df["order_value"])
.groupby(df["customer_nr"])
.transform(lambda x: x.cumsum().shift().fillna(0))
)
>>> df
customer_nr ... total_successfully_previously_paid
0 1 ... 0.0
1 1 ... 50.0
2 1 ... 50.0
3 2 ... 0.0
4 2 ... 55.0
5 2 ... 355.0
[6 rows x 5 columns]
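One caveat (my observation, not part of the original answer): the running sum follows row order, so each customer's rows are assumed to already be in chronological order, as in the example. If they are not, sort first:

df = df.sort_values(["customer_nr", "year_ordered"])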
I have a dataframe with several columns and I need to re-sample from that data with more weight given to one category. I think np.random.choice should work, but I'm not sure how to implement it. Below is the example data from which I want to sample randomly, but with a 70% probability of getting an expensive home (based on the Expensive_home column, value = 1) and a 30% probability for Expensive_home = 0. How can I create the re-sampled data file? Thank you!
ID Lot_Area Year_Built Full_Bath Bedroom Sale_Price Expensive_home
1 31770 1960 1 3 215000 0
2 11622 1961 1 2 105000 0
3 5389 1995 2 2 236500 0
4 8402 1998 2 3 180400 0
5 10176 1990 1 2 171500 0
6 6820 1985 1 1 212000 0
7 53504 2003 3 4 538000 1
8 12134 1988 2 4 164000 0
9 11394 2010 1 1 394432 1
10 19138 1951 1 2 141000 0
11 13175 1978 2 3 210000 0
12 11751 1977 2 3 190000 0
13 10625 1974 2 3 170000 0
14 7500 2000 2 3 216000 0
15 11241 1970 1 2 149000 0
16 2280 1978 2 3 146000 0
17 12858 2009 2 3 376162 1
18 12883 2009 2 3 290941 0
19 12182 2005 2 3 220000 0
20 11520 2005 2 3 275000 0
i.e. a similar data file, but with more randomly picked 1s in the last column.
To create a dataframe of the same length, but where expensive homes have a higher chance of being selected (sampling with replacement), use:
weights = df['Expensive_home'].replace({0: 30, 1: 70})
df1 = df.sample(len(df), replace=True, weights=weights)
To create a dataframe with all expensive and then 30% of non-expensive, you can do:
expensive = df['Expensive_home'].astype(bool)
df2 = pd.concat([df[expensive], df[~expensive].sample(frac=0.3)])
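Since the question mentions np.random.choice, here is a rough equivalent of the first approach using it directly (a sketch, not from the original answer; it reuses the weights defined above):

import numpy as np

p = weights / weights.sum()                        # turn the 70/30 weights into draw probabilities
idx = np.random.choice(df.index, size=len(df), replace=True, p=p)
df3 = df.loc[idx]                                  # re-sampled frame of the same length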
My dataframe is given below:
input_df =
index Year Month Day Hour Minute GHI
0 2017 1 1 7 30 100
1 2017 1 1 8 30 200
2 2017 1 2 9 30 300
3 2017 1 2 10 30 400
4 2017 2 1 11 30 500
5 2017 2 1 12 30 600
6 2017 2 2 13 30 700
I want to sum the GHI data for each day. From the above, I am expecting an output like below:
result_df =
index Year Month Day GHI
0 2017 1 1 300
1 2017 1 2 700
2 2017 2 1 1100
3 2017 2 2 700
My code and my current output are:
result_df = input_df.groupby(['Year','Month','Day'])['GHI'].sum()
print(result_df)
result_df =
index Year Month Day GHI
0 2017 1 1 1400
1 2017 2 2 1400
My code above is combining the first day of each month and summing the data, which is wrong. How can I fix it?
You are incredibly close in your attempt. The thing to bear in mind is that DataFrame.groupby() has a parameter as_index with default value True, so your groupby() outputs a Series with a MultiIndex. To get the desired output you can either chain the reset_index() method after the groupby or set the as_index parameter to False.
result_df = input_df.groupby(['Year','Month','Day'])['GHI'].sum()
result_df
Out[12]:
Year Month Day
2017 1 1 300
2 700
2 1 1100
2 700
Name: GHI, dtype: int64
# Getting the desired output
input_df.groupby(['Year','Month','Day'])['GHI'].sum().reset_index()
Out[16]:
Year Month Day GHI
0 2017 1 1 300
1 2017 1 2 700
2 2017 2 1 1100
3 2017 2 2 700
input_df.groupby(['Year','Month','Day'], as_index=False)['GHI'].sum()
Out[17]:
Year Month Day GHI
0 2017 1 1 300
1 2017 1 2 700
2 2017 2 1 1100
3 2017 2 2 700
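An alternative (my sketch, not part of the original answer) is to assemble an actual date from the Year/Month/Day columns and group on that, which also makes later time-series work easier:

dates = pd.to_datetime({'year': input_df['Year'], 'month': input_df['Month'], 'day': input_df['Day']})
daily = input_df.assign(date=dates).groupby('date', as_index=False)['GHI'].sum()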
I have a pandas dataframe for which I'm trying to compute an expanding windowed aggregation after grouping by columns. The data structure is something like this:
df = pd.DataFrame([['A',1,2015,4],['A',1,2016,5],['A',1,2017,6],['B',1,2015,10],['B',1,2016,11],['B',1,2017,12],
                   ['A',1,2015,24],['A',1,2016,25],['A',1,2017,26],['B',1,2015,30],['B',1,2016,31],['B',1,2017,32],
                   ['A',2,2015,4],['A',2,2016,5],['A',2,2017,6],['B',2,2015,10],['B',2,2016,11],['B',2,2017,12]],
                  columns=['Typ','ID','Year','dat']).sort_values(by=['Typ','ID','Year'])
i.e.
Typ ID Year dat
0 A 1 2015 4
6 A 1 2015 24
1 A 1 2016 5
7 A 1 2016 25
2 A 1 2017 6
8 A 1 2017 26
12 A 2 2015 4
13 A 2 2016 5
14 A 2 2017 6
3 B 1 2015 10
9 B 1 2015 30
4 B 1 2016 11
10 B 1 2016 31
5 B 1 2017 12
11 B 1 2017 32
15 B 2 2015 10
16 B 2 2016 11
17 B 2 2017 12
In general, there is a completely varying number of years per Typ-ID and of rows per Typ-ID-Year. I need to group this dataframe by the columns Typ and ID, then compute an expanding windowed median & std of all observations by Year. I would like to get output like this:
Typ ID Year median std
0 A 1 2015 14.0 14.14
1 A 1 2016 14.5 11.56
2 A 1 2017 15.0 10.99
3 A 2 2015 4.0 0
4 A 2 2016 4.5 0
5 A 2 2017 5.0 0
6 B 1 2015 20.0 14.14
7 B 1 2016 20.5 11.56
8 B 1 2017 21.0 10.99
9 B 2 2015 10.0 0
10 B 2 2016 10.5 0
11 B 2 2017 11.0 0
Hence, I want something like a groupby on ['Typ','ID','Year'], with the median & std for each Typ-ID-Year computed over all rows with the same Typ-ID up to and including that Year.
How can I do this without manual iteration?
There's been no activity on this question, so I'll post the solution I found.
# expanding median within each Typ-ID; reset_index() names the original row index 'level_2'
mn = df.groupby(by=['Typ','ID']).dat.expanding().median().reset_index().set_index('level_2')
mylast = lambda x: x.iloc[-1]                   # helper: take the last value in each group
mn = mn.join(df['Year'])                        # recover Year from the original dataframe via the index
mn = mn.groupby(by=['Typ','ID','Year']).agg(mylast).reset_index()   # keep the last expanding value per year
My solution follows this algorithm:
1. group the data, compute the expanding windowed median, and get the original index back
2. with the original index back, get the year back from the original dataframe
3. group by the grouping columns, taking the last (in order) value for each
This gives the desired output. The same process can be followed for the standard deviation (or any other desired statistic).
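For completeness, the same steps applied to the expanding standard deviation would look like this (a sketch following the algorithm above; note that an expanding std over a single row is NaN, whereas the expected output above shows 0 for those rows):

sd = df.groupby(by=['Typ','ID']).dat.expanding().std().reset_index().set_index('level_2')
sd = sd.join(df['Year'])
sd = sd.groupby(by=['Typ','ID','Year']).agg(mylast).reset_index()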
Hi, I am a Stata user and I am now trying to port my Stata code to Python/pandas. In this case I want to create a new variable, size, that assigns the value 1 if the number of jobs is between 1 and 9, the value 2 if jobs is between 10 and 49, 3 if it is between 50 and 199, and 4 for 200 or more jobs.
And afterwards, if possible, label them (1: 'Micro', 2: 'Small', 3: 'Median', 4: 'Big').
id year entry cohort jobs
1 2009 0 NaN 3
1 2012 1 2012 3
1 2013 0 2012 4
1 2014 0 2012 11
2 2010 1 2010 11
2 2011 0 2010 12
2 2012 0 2010 13
3 2007 0 NaN 38
3 2008 0 NaN 58
3 2012 1 2012 58
3 2013 0 2012 70
4 2007 0 NaN 231
4 2008 0 NaN 241
I tried using this code but couldn't succeed:
df['size'] = np.where((1 <= df['jobs'] <= 9),'Micro',np.where((10 <= df['jobs'] <= 49),'Small'),np.where((50 <= df['jobs'] <= 200),'Median'),np.where((200 <= df['empleo']),'Big','NaN'))
What you are trying to do is called binning; use pd.cut, i.e.
df['new'] = pd.cut(df['jobs'],bins=[1,10,50,201,np.inf],labels=['micro','small','medium','big'])
Output:
id year entry cohort jobs new
0 1 2009 0 NaN 3 micro
1 1 2012 1 2012.0 3 micro
2 1 2013 0 2012.0 4 micro
3 1 2014 0 2012.0 11 small
4 2 2010 1 2010.0 11 small
5 2 2011 0 2010.0 12 small
6 2 2012 0 2010.0 13 small
7 3 2007 0 NaN 38 small
8 3 2008 0 NaN 58 medium
9 3 2012 1 2012.0 58 medium
10 3 2013 0 2012.0 70 medium
11 4 2007 0 NaN 231 big
12 4 2008 0 NaN 241 big
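One boundary detail worth noting (my observation, not part of the original answer): with the default right=True the bins above are right-closed, so 10 lands in 'micro', 200 in 'medium', and a value of exactly 1 falls outside the first bin. If the cut-offs must match 1-9 / 10-49 / 50-199 / 200+ exactly, left-closed bins do that:

df['new'] = pd.cut(df['jobs'],
                   bins=[1, 10, 50, 200, np.inf],
                   right=False,                    # left-closed bins: [1,10), [10,50), [50,200), [200,inf)
                   labels=['micro', 'small', 'medium', 'big'])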
For multiple conditions you have to go for np.select, not np.where. Hope that helps.
numpy.select(condlist, choicelist, default=0)
where condlist is the list of your conditions and choicelist is the list of choices if the condition is met. The default is 0; here you can set it to np.nan.
Using np.select to do the same with the help of .between, i.e.
np.select([df['jobs'].between(1, 9),      # 1-9    -> Micro
           df['jobs'].between(10, 49),    # 10-49  -> Small
           df['jobs'].between(50, 199),   # 50-199 -> Median
           df['jobs'] >= 200],            # 200+   -> Big
          ['Micro', 'Small', 'Median', 'Big'],
          'NaN')
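If you also want the numeric codes 1-4 that the question asks for, plus the text labels, one possible sketch (my illustration, not from the original answer; the column names size and size_label are assumptions):

conditions = [df['jobs'].between(1, 9),
              df['jobs'].between(10, 49),
              df['jobs'].between(50, 199),
              df['jobs'] >= 200]
df['size'] = np.select(conditions, [1, 2, 3, 4], default=0)                     # 0 marks rows outside all ranges
df['size_label'] = df['size'].map({1: 'Micro', 2: 'Small', 3: 'Median', 4: 'Big'})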