I have a dataframe mortgage_data with columns name, mortgage_amount and month (in ascending order).
mortgage_amount_paid = 1000
mortgage_data:
name  mortgage_amount  month
mark  400              1
mark  500              2
mark  200              3
How can I deduct mortgage_amount_paid from mortgage_amount row by row, in ascending order of month, and add a paid_status column that records whether the payment fully covered that row ('full'), partially covered it ('partial'), or didn't reach it at all ('zero')? Like this:
if mortgage_amount_paid = 1000
mortgage_data:
name  mortgage_amount  month  mortgage_amount_updated  paid_status
mark  400              1      0                        full
mark  500              2      0                        full
mark  200              3      100                      partial
ex: if mortgage_amount_paid = 600
mortgage_data:
name  mortgage_amount  month  mortgage_amount_updated  paid_status
mark  400              1      0                        full
mark  500              2      300                      partial
mark  200              3      200                      zero
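For reference, a minimal frame matching this sample can be built like so (a sketch; the column names follow the question):

import pandas as pd

df = pd.DataFrame({
    'name': ['mark', 'mark', 'mark'],
    'mortgage_amount': [400, 500, 200],
    'month': [1, 2, 3],
})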
I tried this:
import numpy as np

mortgage_amount_paid = 1000
df['mortgage_amount_updated'] = np.where(
    mortgage_amount_paid - df['mortgage_amount'].cumsum() >= 0,
    0,
    df['mortgage_amount'].cumsum() - mortgage_amount_paid)
df['paid_status'] = np.where(df['mortgage_amount_updated'], 'full', 'partial')
IIUC, you can use masks:
mortgage_amount_paid = 600

# cumulative amount owed minus the payment
m1 = df['mortgage_amount'].cumsum().sub(mortgage_amount_paid)
# is it positive, i.e. has the payment run out by this row?
m2 = m1 > 0
# was it already positive in the previous month?
m3 = m2.shift(fill_value=False)
df['mortgage_amount_updated'] = (m1.clip(0, mortgage_amount_paid)
                                   .mask(m3, df['mortgage_amount'])
                                )
df['paid_status'] = np.select([m3, m2], ['zero', 'partial'], 'full')
output:
   name  mortgage_amount  month  mortgage_amount_updated paid_status
0  mark              400      1                        0        full
1  mark              500      2                      300     partial
2  mark              200      3                      200        zero
The idea is that the cumulative sum before the partial row must be less than mortgage_amount_paid, and there can be at most one partial row:
mortgage_amount_paid = 600

m = df['mortgage_amount'].cumsum()
df['paid_status'] = np.select(
    [m <= mortgage_amount_paid,
     # fill_value=0 also covers a first row that alone exceeds the payment
     (m > mortgage_amount_paid) & (m.shift(fill_value=0) < mortgage_amount_paid)],
    ['full', 'partial'],
    default='zero'
)
df['mortgage_amount_updated'] = np.select(
    [df['paid_status'].eq('full'),
     df['paid_status'].eq('partial')],
    [0, m - mortgage_amount_paid],
    default=df['mortgage_amount']
)
print(df)
   name  mortgage_amount  month paid_status  mortgage_amount_updated
0  mark              400      1        full                        0
1  mark              500      2     partial                      300
2  mark              200      3        zero                      200
Related
I want to compute the sum of the 'Amount' for each 'Group' that has at least one 'Customer' with an 'Active' Bail.
Sample input:
   Customer ID    Group      Bail  Amount
0        23453   NAFNAF    Active     200
1        23849    LINDT    Active     350
2        23847   NAFNAF  Inactive     100
3        84759  CARROUF  Inactive      20
For example, 'NAFNAF' has 2 customers, including one with an active bail.
Expected output:
NAFNAF : 300
LINDT : 350
TOTAL ACTIVE: 650
I don't want to change the original dataframe.
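For reference, a frame matching the sample can be built like this (a minimal sketch; column names follow the question):

import pandas as pd

df = pd.DataFrame({
    'Customer ID': [23453, 23849, 23847, 84759],
    'Group': ['NAFNAF', 'LINDT', 'NAFNAF', 'CARROUF'],
    'Bail': ['Active', 'Active', 'Inactive', 'Inactive'],
    'Amount': [200, 350, 100, 20],
})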
You can use:
(df.assign(Bail=df.Bail.eq('Active'))
   .groupby('Group')[['Bail', 'Amount']].agg('sum')
   .loc[lambda d: d['Bail'].ge(1), ['Amount']]
)
output:
        Amount
Group
LINDT      350
NAFNAF     300
Full output with total:
df2 = (
    df.assign(Bail=df.Bail.eq('Active'))
      .groupby('Group')[['Bail', 'Amount']].agg('sum')
      .loc[lambda d: d['Bail'].ge(1), ['Amount']]
)
df2 = pd.concat([df2, df2.sum().to_frame('TOTAL').T])
output:
        Amount
LINDT      350
NAFNAF     300
TOTAL      650
Create a boolean mask of the Groups with at least one active bail:
m = df['Group'].isin(df.loc[df['Bail'].eq('Active'), 'Group'])
out = df[m]
At this point, your filtered dataframe looks like:
>>> out
   Customer ID   Group      Bail  Amount
0        23453  NAFNAF    Active     200
1        23849   LINDT    Active     350
2        23847  NAFNAF  Inactive     100
Now you can use groupby and sum:
out = df[m].groupby('Group')['Amount'].sum()
out = pd.concat([out, pd.Series(out.sum(), index=['TOTAL ACTIVE'])])
# Output
LINDT           350
NAFNAF          300
TOTAL ACTIVE    650
dtype: int64
Hi, I have a dataframe that lists items that I own, along with their selling price.
I also have a variable that defines my current debt. Example:
import pandas as pd

current_debt = 16000
d = {
    'Person': ['John', 'John', 'John', 'John', 'John'],
    'Item': ['Car', 'Bike', 'Computer', 'Phone', 'TV'],
    'Price': [10500, 3300, 2100, 1100, 800],
}
df = pd.DataFrame(data=d)
df
I would like to "pay back" the current_debt starting with the most expensive item and continuing until the debt is paid, and to list the leftover money against the last item sold. I'm hoping the function can include a groupby clause for Person, as sometimes there is more than one name in the list.
My expected output for the debt in the example above would be:
If anyone could help with a function to calculate this, that would be fantastic. I wasn't sure whether I needed to convert the dataframe to a list or whether it could be kept as a dataframe. Thanks very much!
Using a cumsum transformation and np.where to cover your logic for the final price column:
import numpy as np

# most expensive items first, per person
df = df.sort_values(["Person", "Price"], ascending=False)
# running total of sale prices within each person
df['CumPrice'] = df.groupby("Person")['Price'].transform('cumsum')
# how far the running total overshoots the debt
df['Diff'] = df['CumPrice'] - current_debt
df['PriceLeft'] = np.where(
    df['Diff'] <= 0,
    0,                             # debt not yet covered: nothing left over
    np.where(
        df['Diff'] < df['Price'],
        df['Diff'],                # debt paid off partway through this item
        df['Price']                # debt already settled: keep the full price
    )
)
Result:
  Person      Item  Price  CumPrice  Diff  PriceLeft
0   John       Car  10500     10500 -5500          0
1   John      Bike   3300     13800 -2200          0
2   John  Computer   2100     15900  -100          0
3   John     Phone   1100     17000  1000       1000
4   John        TV    800     17800  1800        800
When using the np.select function, I'd like to refer to the values of the very column I am assigning to, and set each value based on a condition.
Problem:
In the code below, my conditions refer to the 'Profit' column that np.select is assigning to, but somehow my code does not obey these conditions.
In the 'Profit' column, a value of '1 Month' should only be set when the cell directly above it is 'Yes'.
Example code:
conditions = [
    (df['UserID'].shift() == df['UserID']) & (df['totalSold'] >= df['totalBought']),
    (df['UserID'].shift() == df['UserID']) & (df['totalSold'] >= df['totalBought']) & (df['Profit'].shift() == 'Yes'),
    (df['UserID'].shift() == df['UserID']) & (df['totalSold'] >= df['totalBought']) & (df['Profit'].shift() == '1 Month')]
values = ['Yes', '1 Month', '2 Month']
df['Profit'] = np.select(conditions, values, default="No")
Input Dataframe:
  id  month  totalBought  totalSold
 aaa    Jan          200        300
 aaa    Feb          250        300
 aaa  March          100        350
 bbb    Jan          100        150
Expected Output dataframe:
  id  month  totalBought  totalSold   Profit
 aaa    Jan          200        300      Yes
 aaa    Feb          250        300  1 Month
 aaa  March          100        350  2 Month
 bbb    Jan          100        150      Yes
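For reproducibility, the input can be rebuilt like this (a sketch based on the table above):

import pandas as pd

df = pd.DataFrame({
    'id': ['aaa', 'aaa', 'aaa', 'bbb'],
    'month': ['Jan', 'Feb', 'March', 'Jan'],
    'totalBought': [200, 250, 100, 100],
    'totalSold': [300, 300, 350, 150],
})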
I believe you're looking for something a little more dynamic, like this:
ge_mask = df['totalSold'].diff().fillna(-1).ge(0)
df['Profit'] = np.select(
    [ge_mask, df['totalSold'].ge(df['totalBought'])],
    [ge_mask.cumsum().astype(str) + ' Month', 'Yes'])
Output:
>>> df
    id  month  totalBought  totalSold   Profit
0  aaa    Jan          200        300      Yes
1  aaa    Feb          250        300  1 Month
2  aaa  March          100        350  2 Month
3  bbb    Jan          100        150      Yes
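One caveat worth noting: diff() here runs over the whole column, so a streak could leak from one id into the next; it happens to work on this data because bbb's totalSold is lower than the row above it. A grouped variant might look like this (my own sketch, same idea):

import numpy as np

# diff within each id, so streaks reset at id boundaries
ge_mask = df.groupby('id')['totalSold'].diff().fillna(-1).ge(0)
months = ge_mask.groupby(df['id']).cumsum().astype(str) + ' Month'
df['Profit'] = np.select(
    [ge_mask, df['totalSold'].ge(df['totalBought'])],
    [months, 'Yes'])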
I'm trying to find, for each price column, the next cheapest product available on the same day. My data looks something like this:
import pandas as pd

data = [['29/10/18', 400, 300, 200],
        ['29/10/18', 250, 400, 100],
        ['29/10/18', 600, 600, 300],
        ['30/10/18', 300, 500, 100]]
df = pd.DataFrame(data, columns=['date', 'price 1', 'price2', 'price3'])
My output would look something like this:
     date  price1  nearestPrice1  price2  nearestPrice2
 29/10/18     400            250     300            400
 29/10/18     250            400     400            300
 29/10/18     600            400     600            400
f = lambda row, col: df.loc[df[df['date'] == row['date']][col].sub(row[col])
                            .abs().nsmallest(2).idxmax(), col]
df['nearest_price1'] = df.apply(f, col='price 1', axis=1)
df['nearest_price2'] = df.apply(f, col='price2', axis=1)
df['nearest_price3'] = df.apply(f, col='price3', axis=1)
Outputs:
       date  price 1  price2  price3  nearest_price1  nearest_price2  \
0  29/10/18      400     300     200             250             400
1  29/10/18      250     400     100             400             300
2  29/10/18      600     600     300             400             400
3  30/10/18      300     500     100             300             500

   nearest_price3
0             100
1             200
2             200
3             100
Explanation:
Define a lambda function f, apply it to each price column (price 1, price2, price3), and collect the results.
It works as follows:
It subtracts the row's price from the other prices on the same date (sub).
It looks for the two smallest absolute differences using nsmallest(2).
Lastly, it uses idxmax to pick the second-smallest difference, because the smallest one would be the row itself, with an absolute difference of 0.
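To see why idxmax lands on the neighbour rather than the row itself, here is a tiny standalone check (the numbers are the price 1 values for 29/10/18):

import pandas as pd

s = pd.Series([400, 250, 600])
diffs = s.sub(400).abs()     # 0, 150, 200
two = diffs.nsmallest(2)     # keeps 0 (the row itself) and 150
print(two.idxmax())          # 1 -> the index of the nearest other price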
If I understand this correctly, you need to find the cheapest prices for a given day, starting with the cheapest, then the nearest cheapest, and so on.
This means you'd first need to extract all the prices for the given day. You could do this with a simple loop: for example, if the text in the first column is '29/10/18', add the data from the rest of the columns to a list, or build a new DataFrame from it. Either way, once you have all the prices for the date, you can use the sort_values function provided by pandas and specify ascending order, as in the sketch below. Function documentation
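A rough sketch of that approach (my own code, assuming the df built in the question; melt is just one convenient way to gather the price columns):

# collect every price for one day into a single ascending list
day = df[df['date'] == '29/10/18']
prices = (day.melt(id_vars='date', value_name='price')['price']
             .sort_values(ascending=True))
print(prices.tolist())  # [100, 200, 250, 300, 300, 400, 400, 600, 600]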
I'm using Pandas 0.10.1
Considering this Dataframe:
      Date State City  SalesToday  SalesMTD  SalesYTD
  20130320   stA  ctA          20       400      1000
  20130320   stA  ctB          30       500      1100
  20130320   stB  ctC          10       500       900
  20130320   stB  ctD          40       200      1300
  20130320   stC  ctF          30       300       800
How can I get subtotals per state?
State  City  SalesToday  SalesMTD  SalesYTD
stA    ALL           50       900      2100
stA    ctA           20       400      1000
stA    ctB           30       500      1100
I tried with a pivot table, but I can only get subtotals in columns:
table = pivot_table(df, values=['SalesToday', 'SalesMTD', 'SalesYTD'],
                    rows=['State', 'City'], aggfunc=np.sum, margins=True)
I can achieve this in Excel with a pivot table.
If you put State in the rows and City in the columns rather than both in the rows, you'll get separate margins. Reshape and you get the table you're after:
In [10]: table = pivot_table(df, values=['SalesToday', 'SalesMTD', 'SalesYTD'],
   ....:                     rows=['State'], cols=['City'], aggfunc=np.sum, margins=True)

In [11]: table.stack('City')
Out[11]:
             SalesMTD  SalesToday  SalesYTD
State City
stA   All         900          50      2100
      ctA         400          20      1000
      ctB         500          30      1100
stB   All         700          50      2200
      ctC         500          10       900
      ctD         200          40      1300
stC   All         300          30       800
      ctF         300          30       800
All   All        1900         130      5100
      ctA         400          20      1000
      ctB         500          30      1100
      ctC         500          10       900
      ctD         200          40      1300
      ctF         300          30       800
I admit this isn't totally obvious.
You can get the summarized values by using groupby() on the State column.
Let's make some sample data first:
import pandas as pd
import StringIO
incsv = StringIO.StringIO("""Date,State,City,SalesToday,SalesMTD,SalesYTD
20130320,stA,ctA,20,400,1000
20130320,stA,ctB,30,500,1100
20130320,stB,ctC,10,500,900
20130320,stB,ctD,40,200,1300
20130320,stC,ctF,30,300,800""")
df = pd.read_csv(incsv, index_col=['Date'], parse_dates=True)
Then apply the groupby function and add a City column:
dfsum = df.groupby('State', as_index=False).sum()
dfsum['City'] = 'All'
print dfsum

  State  SalesToday  SalesMTD  SalesYTD City
0   stA          50       900      2100  All
1   stB          50       700      2200  All
2   stC          30       300       800  All
We can append the original data to the summed df by using append:
dfsum = dfsum.append(df).set_index(['State', 'City']).sort_index()
print dfsum

             SalesMTD  SalesToday  SalesYTD
State City
stA   All         900          50      2100
      ctA         400          20      1000
      ctB         500          30      1100
stB   All         700          50      2200
      ctC         500          10       900
      ctD         200          40      1300
stC   All         300          30       800
      ctF         300          30       800
I added the set_index and sort_index calls to make it look more like your example output; they are not strictly necessary to get the results.
I think this subtotal example code is what you want (similar to Excel's subtotal).
I assume that you want to group by columns A, B, C and D, then count the values in column E.
main_df.groupby(['A', 'B', 'C']).apply(
    lambda sub_df: sub_df.pivot_table(index=['D'], values=['E'],
                                      aggfunc='count', margins=True))
output:
             E
A B C D
a a a a      1
      b      2
      c      2
      all    5
b b a a      3
      b      2
      c      2
      all    7
b b b a      3
      b      6
      c      2
      d      3
      all   14
How about this one?
table = pd.pivot_table(data, index=['State'], columns=['City'],
                       values=['SalesToday', 'SalesMTD', 'SalesYTD'],
                       aggfunc=np.sum, margins=True)
If you are interested, I have just created a little function to make this easier, since you might want to apply 'subtotal' to many tables. It works for tables created via pivot_table() and groupby() alike. An example of a table to use it on is provided on this Stack Overflow page: Sub Total in pandas pivot Table
def get_subtotal(table, sub_total='subtotal', get_total=False, total='TOTAL'):
    """
    Parameters
    ----------
    table : dataframe
        Table with a two-level index resulting from pd.pivot_table() or
        df.groupby().
    sub_total : str, optional
        Name given to the subtotal. The default is 'subtotal'.
    get_total : boolean, optional
        Whether to add the final total (in case you used groupby()).
        The default is False.
    total : str, optional
        Name given to the total. The default is 'TOTAL'.

    Returns
    -------
    The table with the total and subtotals added.
    """
    index_name1 = table.index.names[0]
    index_name2 = table.index.names[1]
    pvt = table.unstack(0)
    # ignore a pre-existing 'All' margin column when summing
    mask = pvt.columns.get_level_values(index_name1) != 'All'
    pvt.loc[sub_total] = pvt.loc[:, mask].sum()
    pvt = pvt.stack().swaplevel(0, 1).sort_index()
    pvt = pvt[pvt.columns[1:].tolist() + pvt.columns[:1].tolist()]
    if get_total:
        mask = pvt.index.get_level_values(index_name2) != sub_total
        pvt.loc[(total, ''), :] = pvt.loc[mask].sum()
    print(pvt)
    return pvt
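A possible usage example on the State/City data from the question (my own sketch; it assumes df and the imports from earlier in the thread, and get_subtotal is the function defined above):

table = pd.pivot_table(df, index=['State', 'City'],
                       values=['SalesToday', 'SalesMTD', 'SalesYTD'],
                       aggfunc=np.sum)
result = get_subtotal(table, get_total=True)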
More generally, the pivot_table call takes this shape:
table = pd.pivot_table(df, index=['A'], values=['B', 'C'], columns=['D', 'E'],
                       fill_value=0, aggfunc=np.sum,  # or 'count', 'mean', etc.
                       margins=True, margins_name='Total')
print(table)