I am trying to populate values in the column motooutstandingbalance by subtracting the previous row's actualmotordeductionfortheweek from the previous row's motooutstandingbalance. I am using the pandas shift command, but I'm currently not getting the desired output, which should be a consistent reduction in motooutstandingbalance week by week.
The final result should look like this.
Here is my code:
x['motooutstandingbalance']=np.where(x.salesrepid == x.shift(1).salesrepid, x.shift(1).motooutstandingbalance - x.shift(1).actualmotordeductionfortheweek, x.motooutstandingbalance)
Any ideas on how to achieve this?
This works:
start_value = 468300.0
# Series.append was removed in pandas 2.0, so pd.concat does the same job here
reversed_neg = -df['actualmotordeductionfortheweek'][::-1]
df['motooutstandingbalance'] = pd.concat([reversed_neg, pd.Series([start_value], index=[-1])])[::-1].cumsum().reset_index(drop=True)
Basically, what I'm doing is:
Taking the actualmotordeductionfortheweek column, negating it (all the values become negative), and reversing it
Adding the start value (which is positive, as opposed to all the other values, which are negative) at index -1 (which sorts before 0 here, rather than meaning the very end as it usually does in Python)
Reversing it back, so that the new -1 entry goes to the very beginning
Using cumsum() to add all of the values of the column. This actually works to subtract all the values from the start value, because the first value is positive and the rest of the values are negative (since x + (-y) = x - y)
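For what it's worth, a shorter sketch of the same arithmetic (my own rewrite, assuming a single salesrepid and a default RangeIndex, with the column names and start_value taken from the snippet above):

# each row's balance is the start value minus all deductions from earlier weeks
df['motooutstandingbalance'] = start_value - df['actualmotordeductionfortheweek'].cumsum().shift(periods=1, fill_value=0)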
In Python, I'm trying to check, for each ID and going from one day to the next (column by column), whether the values, if they are not all equal to zero, are correctly incremented by one, or whether, when the value goes back to 0 at some point, the next day is either still equal to zero or incremented by one.
I have a dataframe with multiple columns; the first column is named "ID". All the other columns hold integers, and their names represent consecutive days, like this:
I need to check, for each ID, that:
if all the columns (i.e. all of the days) are equal to 0, then create a new column named "CHECK" equal to 0, meaning there is no error;
if not, then look column after column and check whether the value in the next column is greater than the previous column's value (i.e. the day before) and incremented by exactly 1 (e.g. from 14 to 15, not from 14 to 16); if so, "CHECK" equals 0;
if these conditions aren't satisfied, it means that the next column is either equal to the previous column or lower (but not equal to zero); in both cases it is an error and "CHECK" equals 1;
but if the next column's value is lower and equal to 0, then the value after it must either still be 0 or be incremented by 1. Each time the count comes back to zero, the following columns must again either stay at zero or increase by 1.
If I explained everything correctly, then in this example the first two IDs are correct and their "CHECK" variable must be equal to 0, but the next ID should have a "CHECK" value of 1.
I hope this is not confusing. Thanks.
I tried this, but I would like to use the column's index/position rather than its name. The code is not finished.
df['check'] = np.where((df['20110531']<=df['20110530']) & ([df.columns!="ID"] != 0),1,0)
You could write a simple function to go along each row and check for your condition. In the example code below, I first set the index to ID.
df = df.set_index('ID')
def func(r):
    # start from the first day's value and walk across the row
    start = r.iloc[0]
    for j, i in enumerate(r):
        if j == 0:
            continue
        # a valid step is either a reset to 0 or an increment of exactly 1
        if i == 0 or i == start + 1:
            start = i
        else:
            return 1  # anything else is an error
    return 0
df['check'] = df.apply(func, axis = 1)
If you want to keep the original index, then don't set it and instead use df['check'] = df.iloc[:, 1:].apply(func, axis=1).
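For reference, here is how the function behaves on a small made-up frame (the table in the question wasn't reproduced here, so these values are hypothetical):

import pandas as pd

df = pd.DataFrame({
    'ID': ['A', 'B', 'C'],
    '20110528': [0, 14, 5],
    '20110529': [0, 15, 5],
    '20110530': [1, 16, 6],
    '20110531': [2, 0, 6],
})
df = df.set_index('ID')
df['check'] = df.apply(func, axis=1)
# A -> 0 (zeros, then +1 steps), B -> 0 (+1 steps, then back to 0), C -> 1 (5 repeated, which is an error)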
I am trying to code a scenario where I need to iterate over a particular record column-wise, but I could not generalize it. Below is an example scenario:
input data
The output should be: if, in a record, a particular column's value is not null and is less than 0, the code should iterate over the following column values in the same row, look for the first positive or null value that appears, and replace the negative value with that column's value.
output data
So basically, in a particular record, if any cell contains a negative value, the code should check the next cell in the same record and keep checking until it finds a null or positive value, then replace the negative value with that value.
I hope the scenario is clear. I have attached input and expected output data for reference. I can write code using iterrows and if/else conditions, but I want to generalize the code for any number of columns; it is not possible to write 50 or 100 if/else conditions.
Thanks in advance for the help.
I think the easiest way is to transpose. Assume d is your dataframe:
dummy = abs(max(d.max())) + 1
d = d.fillna(dummy)  # replace genuine missing values with the dummy value so the backfill skips them
d[d < 0] = None  # replace negative values with None
d = d.transpose()
d = d.bfill()  # backfill NaNs by pulling from the next row (the next column in the original dataframe); fillna(method="bfill") is deprecated
d = d.transpose()  # transpose back
d[d == dummy] = None  # put the dummy value back to None
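A variant of the same idea that skips the double transpose, using mask and bfill(axis=1) (the frame below is hypothetical, since the original input/output tables aren't shown):

import numpy as np
import pandas as pd

d = pd.DataFrame({'c1': [10, -3, 5],
                  'c2': [-2, 7, np.nan],
                  'c3': [4, np.nan, 8]})

dummy = abs(d.max().max()) + 1  # sentinel larger than any real value
d = d.fillna(dummy)             # protect genuine NaNs from the backfill
d = d.mask(d < 0)               # turn negatives into NaN
d = d.bfill(axis=1)             # pull the next non-NaN value from the columns to the right
d = d.mask(d == dummy)          # restore the protected NaNs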
At the moment, I'm trying to migrate my Weibull calculations from an Excel macro to Python, and the tool I've been primarily using is pandas. The formula I am currently having trouble converting from Excel to Python is as follows:
Adjusted Rank = (Previous value in the adjusted rank column) * (Another column's value), but the first value in the adjusted rank column = 0
My brain is trying to copy and paste this methodology to pandas, but as you can imagine, it doesn't work that way:
DF[Adjusted Rank] = (Previous value in the adjusted rank column) * DF(Another Column), but the first value in the adjusted rank column = 0
In the end, I imagine the adjusted rank column will look like so:
Adjusted Rank
0
Some Number
Some Number
Some Number
etc.
I'm having some trouble puzzling out how to make each "cell" in the adjusted rank column refer to the previous value in the column in pandas. Additionally, is there a way to set only the first entry in the column equal to 0? Thanks all!
You can use shift to multiply by the previous values and add a zero at the start; this should work:
df['new'] = df['adjusted_rank'].shift(periods=1, fill_value=0) * df['another_column']
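A quick illustration of what shift with fill_value does (made-up numbers, column names taken from the line above):

import pandas as pd

df = pd.DataFrame({'adjusted_rank': [2.0, 3.0, 4.0],
                   'another_column': [10, 10, 10]})

df['new'] = df['adjusted_rank'].shift(periods=1, fill_value=0) * df['another_column']
# df['new'] -> [0.0, 20.0, 30.0]: the first row gets the 0 fill value,
# every other row uses the previous row's adjusted_rank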
I currently have a dataframe as below, which shows a change in position, add 1 unit, subtract 1 unit or do nothing (0).
I'm looking to create a second dataframe with the net position, which is either long (1) or flat (0) - assuming a net short (-1) position is not possible.
So the logic is to start with 0, switch to 1 when the first +1 'change in position' occurs (any subsequent +1 is ignored), then only switch back to 0 when a -1 is seen.
Any thoughts on how to do this? The idea is to create df2 as per below
df.cumsum() would work if each +1 'change in position' were to count, but I only wish to capture 'long or flat' not the size of any accumulated long position.
Input data frame:
Output data frame:
Here is a vectorized solution:
df['CiP'].where(df['CiP'].replace(to_replace=0, method='ffill').diff().ne(0), 0).cumsum()
Explanation:
The call to replace replaces 0 values by the preceding non-zero value.
The call to diff then points to actual changes in position, and ne(0) turns those into the boolean mask that where needs.
The call to where ensures that values that do not really change are replaced by 0.
After this treatment, cumsum just works.
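Here is a small trace of those steps on made-up data (the column name CiP comes from the snippet above):

import pandas as pd

df = pd.DataFrame({'CiP': [0, 1, 0, 1, -1, 0, 1]})  # hypothetical changes in position
filled = df['CiP'].replace(to_replace=0, method='ffill')  # 0, 1, 1, 1, -1, -1, 1
# note: the method= keyword of replace is deprecated in recent pandas;
# filled = df['CiP'].mask(df['CiP'].eq(0)).ffill().fillna(0) is an equivalent spelling
changes = filled.diff().ne(0)  # True only where the position actually flips
net_position = df['CiP'].where(changes, 0).cumsum()  # 0, 1, 1, 1, 0, 0, 1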
Edit: If you have multiple columns, then define a function as above and apply it.
def position(series):
    return series.where(series.replace(to_replace=0, method='ffill').diff().ne(0), 0).cumsum()
df[list_of_columns].apply(position)
This could be slightly faster than explicitly looping over the columns.