Group pandas dataframe by quantile of single column - python

Sorry if this is a duplicate post - I can't find a related post, though.
import numpy as np
import pandas as pd

np.random.seed(100)  # note: random.seed(100) alone would not seed NumPy's generator
P = pd.DataFrame(np.random.randint(0, 100, size=(1000, 2)), columns=list('AB'))
What I'd like is to group P by the quartiles/quantiles/deciles/etc. of column A and then calculate an aggregate statistic (such as the mean) by group. I can define deciles of the column as
P['A'].quantile(np.arange(10) / 10)
I'm not sure how to groupby the deciles of A. Thanks in advance!

If you want to group P e.g. by quartiles, run:
gr = P.groupby(pd.qcut(P.A, 4, labels=False))
Then you can perform any operations on these groups.
For presentation, below is a printout of the groups when P is limited to its first 20 rows:
for key, grp in gr:
    print(f'\nGroup: {key}\n{grp}')
which gives:
Group: 0
A B
0 8 24
3 10 94
10 9 93
15 4 91
17 7 49
Group: 1
A B
7 34 24
8 15 60
12 27 4
13 31 1
14 13 83
Group: 2
A B
4 52 98
5 53 66
9 58 16
16 59 67
18 47 65
Group: 3
A B
1 67 87
2 79 48
6 98 14
11 86 2
19 61 14
As you can see, each group (quartile) has 5 members, so the grouping is
correct.
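With the groups in hand, an aggregate per quartile is a one-liner; for example (a minimal sketch, not part of the original answer's output):
gr.mean()                       # per-quartile mean of columns A and B
gr['B'].agg(['mean', 'size'])   # or several statistics for a single column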
As a supplement
If you are interested in borders of each quartile, run:
pd.qcut(P.A, 4, labels=False, retbins=True)[1]
With retbins=True, qcut returns 2 results (a tuple). The first element (number 0) is
the result returned before, but this time we are interested in the
second element (number 1) - the bin borders.
For your data they are:
array([ 4. , 12.25, 40.5 , 59.5 , 98. ])
So e.g. the first quartile is between 4 and 12.25.
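These borders can be reused later, for example (a sketch, not part of the original answer) to assign new values of A to the same quartiles with pd.cut:
bins = pd.qcut(P.A, 4, labels=False, retbins=True)[1]
pd.cut([5, 45, 90], bins=bins, labels=False, include_lowest=True)
# -> array([0, 2, 3]) with the borders shown above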

You can use the quantile Series to make another column that marks each row with its quantile label, and then group by that column. numpy's searchsorted is very useful for this:
import numpy as np
import pandas as pd
np.random.seed(100)  # seed NumPy's generator so np.random.randint is reproducible
P = pd.DataFrame(np.random.randint(0, 100, size=(1000, 2)), columns=list('AB'))
q = P['A'].quantile(np.arange(10) / 10)
P['G'] = P['A'].apply(lambda x : q.index[np.searchsorted(q, x, side='right')-1])
Since the quantile Series stores the lower bound of each quantile interval, be sure to pass side='right' to np.searchsorted; otherwise the minimum value of A would get position 0 from searchsorted and end up at index -1 after subtracting 1, i.e. it would be labelled with the last quantile instead of the first.
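To see why side matters, here is a small illustration (not from the original answer) with hypothetical lower bounds:
edges = np.array([0, 10, 20, 30])           # lower bounds, like q above
np.searchsorted(edges, 0, side='left')      # 0 -> subtracting 1 gives -1, i.e. the last label
np.searchsorted(edges, 0, side='right')     # 1 -> subtracting 1 gives 0, the first interval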
Now you can elaborate your statistics by doing, for example:
P.groupby('G').agg(['sum', 'mean']) #add to the list all the statistics method you wish
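As a side note (an alternative, not part of this answer), pd.qcut from the first answer produces equivalent decile groups directly, although boundary values can be assigned slightly differently because qcut uses right-closed bins:
P['G2'] = pd.qcut(P['A'], 10, labels=False)   # integer decile labels 0..9
P.groupby('G2')['B'].mean()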

Related

calculate new column values based on conditions in pandas

I have columns in the pandas dataframe df_profit:
profit_date profit
0 01.04 70
1 02.04 80
2 03.04 80
3 04.04 100
4 05.04 120
5 06.04 120
6 07.04 120
7 08.04 130
8 09.04 140
9 10.04 140
And I have the second dataframe df_deals:
deals_date
0 03.04
1 05.04
2 06.04
I want to create a new column 'delta' in df_profit. It should hold the difference between the current value in the 'profit' column and a fixed reference value: the profit on the first date in 'profit_date' that matches a date in the 'deals_date' column of df_deals. The delta should only be calculated for the rows after that first matching date.
So, the result would look like:
profit_date profit delta
0 01.04 70
1 02.04 80
2 03.04 80
3 04.04 100 20
4 05.04 120 40
5 06.04 120 40
6 07.04 120 40
7 08.04 130 50
8 09.04 140 60
9 10.04 140 60
For next time, you should provide better data to make it easier to help (dataframe-creation code that we can copy and paste).
I think this code does what you want:
import pandas as pd
df_profit = pd.DataFrame(columns=["profit_date", "profit"],
                         data=[["01.04", 70],
                               ["02.04", 80],
                               ["03.04", 80],
                               ["04.04", 100],
                               ["05.04", 120],
                               ["06.04", 120],
                               ["07.04", 120],
                               ["08.04", 130],
                               ["09.04", 140],
                               ["10.04", 140]])
df_deals = pd.DataFrame(columns=["deals_date"], data=["03.04", "05.04", "06.04"])
# combine both dataframes, based on date columns
df = df_profit.merge(right=df_deals, left_on="profit_date", right_on="deals_date", how="left")
# find the first value (first row with deals date) and set it to 'base'
df["base"] = df.loc[df["deals_date"].first_valid_index()]["profit"]
# calculate delta
df["delta"] = df["profit"] - df["base"]
# Remove unused values
df.loc[:df["deals_date"].first_valid_index(), "delta"] = None
# remove temporary cols
df.drop(columns=["base", "deals_date"], inplace=True)
print(df)
output is:
profit_date profit delta
0 01.04 70 NaN
1 02.04 80 NaN
2 03.04 80 NaN
3 04.04 100 20.0
4 05.04 120 40.0
5 06.04 120 40.0
6 07.04 120 40.0
7 08.04 130 50.0
8 09.04 140 60.0
9 10.04 140 60.0
You can try this one if you don't want to get NaN values:
start_profit = df_profit.loc[df_profit["profit_date"] == df_deals.iloc[0][0]]
start_profit = start_profit.iloc[0][1]
for i in range(len(df_profit)):
    # hard-coded to dates after 03.04, the first deals_date in the sample
    if int(str(df_profit.iloc[i][0]).split(".")[0]) > 3 and int(str(df_profit.iloc[i][0]).split(".")[1]) >= 4:
        df_profit.loc[i, "delta"] = df_profit.iloc[i][1] - start_profit
Hope it helps
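For completeness, a vectorized sketch of the same idea (not from the original answers, assuming the dates in profit_date are unique strings):
# index of the first profit_date that also appears in df_deals
first_idx = df_profit.index[df_profit["profit_date"].isin(df_deals["deals_date"])][0]
base = df_profit.loc[first_idx, "profit"]                     # reference profit (80 on 03.04)
df_profit["delta"] = (df_profit["profit"] - base).where(df_profit.index > first_idx)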

group by pandas dataframe and select maximum value within sequence

I have a pandas dataframe that represents elevation differences between points every 10 degrees for several target Turbines. I have selected the elevation differences that follow a criteria and I have added a column that represents if they are consecutive or not (metDegDiff = 10 represents consecutive points).
How can I select the maximum value of elevDif by targTurb in 3 or more consecutive 10 degree points?
import pandas as pd

ridgeDF2 = pd.DataFrame(data = {
'MetID':['A06_40','A06_50','A06_60','A06_70','A06_80','A06_100','A06_110','A06_140','A07_110','A07_130','A07_140','A08_100','A08_110','A08_120','A08_130','A08_220'],
'targTurb':['A06','A06','A06','A06','A06','A06','A06','A06','A07','A07','A07','A08','A08','A08','A08','A08'],
'metDeg':[30,50,60,70,80,100,110,140,110,130,140,100,110,120,130,220],
'elevDif':[1.433234, 1.602997,3.227997,2.002991,2.414001,2.96402,1.513,1.793976,1.612,2.429993,1.639008,1.500977,3.048004,2.174011,1.813995,1.527008],
'metDegDiff':[20,10,10,10,10,20,10,30,-30,20,10,-40,10,10,10,30]})
[Dbg]>>> ridgeDF2
MetID targTurb metDeg elevDif metDegDiff
0 A06_40 A06 30 1.433234 20
1 A06_50 A06 50 1.602997 10
2 A06_60 A06 60 3.227997 10
3 A06_70 A06 70 2.002991 10
4 A06_80 A06 80 2.414001 10
5 A06_100 A06 100 2.964020 20
6 A06_110 A06 110 1.513000 10
7 A06_140 A06 140 1.793976 30
8 A07_110 A07 110 1.612000 -30
9 A07_130 A07 130 2.429993 20
10 A07_140 A07 140 1.639008 10
11 A08_100 A08 100 1.500977 -40
12 A08_110 A08 110 3.048004 10
13 A08_120 A08 120 2.174011 10
14 A08_130 A08 130 1.813995 10
15 A08_220 A08 220 1.527008 30
In the example, for A06 there are 4 rows that have consecutive 10 metDeg values (rows 1, 2, 3, and 4) and for A08 there are 3 rows (rows 12, 13 and 14). Note that both of those series have a length of 3 or more.
So, the output would be the maximum elevDif inside those two selected series. Like this:
MetID targTurb metDeg elevDif metDegDiff
A06_60 A06 60 3.227997 10
A08_110 A08 110 3.048004 10
The code below should work. You can run each line separately to see what is happening.
import numpy as np

ridgeDF2['t/f'] = ridgeDF2['metDegDiff'] != 10
ridgeDF2['t/f'] = ridgeDF2['t/f'].shift().fillna(0).cumsum()
ridgeDF2['count'] = ridgeDF2.groupby('t/f')['t/f'].transform(len)-1
ridgeDF2['count'] = np.where(ridgeDF2['count'] >= 3,True,False)
ridgeDF2.loc[ridgeDF2['metDegDiff'] != 10,'count'] = False
highest = ridgeDF2.loc[ridgeDF2['count'] == True]
highest = highest.loc[highest.groupby(['targTurb','metDegDiff','t/f'])['elevDif'].idxmax()]
highest.drop(columns = ['t/f','count'])
Chained solution
ridgeDF2.loc[ridgeDF2[((ridgeDF2.assign(group=(ridgeDF2.metDegDiff!=10).cumsum())).groupby('group')['metDegDiff'].transform(lambda x: (x==10)& (x.count()>=3)))].groupby('targTurb')['elevDif'].idxmax()]
Step by step solution
Apply .cumsum() to (metDegDiff != 10) to create groups, each starting at a row where metDegDiff is not 10.
ridgeDF2=ridgeDF2.assign(group=(ridgeDF2.metDegDiff!=10).cumsum())
Apply multiple filters to get rid of rows where metDegDiff is not equal to 10 within the groups generated above, and to retain only groups with 3 or more consecutive values equal to 10. I chain groupby(), .transform() and boolean selection to achieve this.
g=ridgeDF2[ridgeDF2.groupby('group')['metDegDiff'].transform(lambda x: (x==10)& (x.count()>=3))]
From what remains above, select indexes with maximum values
g.loc[g.groupby('targTurb')['elevDif'].idxmax()]
Outcome
MetID targTurb metDeg elevDif metDegDiff
2 A06_60 A06 60 3.227997 10
12 A08_110 A08 110 3.048004 10
Timing
%timeit ridgeDF2.loc[ridgeDF2[((ridgeDF2.assign(group=(ridgeDF2.metDegDiff!=10).cumsum())).groupby('group')['metDegDiff'].transform(lambda x: (x==10)& (x.count()>=3)))].groupby('targTurb')['elevDif'].idxmax()]
9.01 ms ± 1.84 ms per loop (mean ± std. dev. of 7 runs, 100 loops each)
What you can do is create a group column for consecutive rows with the same value in metDegDiff and the same targTurb, using shift and cumsum. Then use this group column to select the rows whose group has 3 or more values (ge), obtained by mapping the group number onto the value_counts of the group numbers, and whose metDegDiff is equal (eq) to 10. Now that only the groups of interest remain, sort_values on elevDif and drop_duplicates on the group column to keep the maximum value per group. Finish by dropping the column gr and, if necessary, sorting by targTurb.
ridgeDF2['metDegDiff'] = ridgeDF2['metDeg'].diff() #I assume calculated this way
#create a group number with same consecutive values and same targTurb
ridgeDF2['gr'] = (ridgeDF2['metDegDiff'].ne(ridgeDF2['metDegDiff'].shift())
                  | ridgeDF2['targTurb'].ne(ridgeDF2['targTurb'].shift())
                  ).cumsum()
#get the result dataframe
res_ = (ridgeDF2.loc[ridgeDF2['metDegDiff'].eq(10)                               # rows with 10 in metDegDiff
                     & ridgeDF2['gr'].map(ridgeDF2['gr'].value_counts()).ge(3)]  # and in a group of 3 or more such rows
        .sort_values(by='elevDif')           # ascending sort of elevDif
        .drop_duplicates('gr', keep='last')  # keep the last row per group, i.e. the highest value
        .drop('gr', axis=1)                  # remove the extra group column
        .sort_values('targTurb')             # if you need
        )
and you get the rows you want
print (res_)
MetID targTurb metDeg elevDif metDegDiff
2 A06_60 A06 60 3.227997 10.0
12 A08_110 A08 110 3.048004 10.0

python pandas: How to drop items in a dataframe

I have a huge number of points in my dataframe, so I would like to drop some of them (ideally keeping the mean values).
e.g. currently I have
date calltime
0 1491928756414930 4643
1 1491928756419607 166
2 1491928756419790 120
3 1491928756419927 142
4 1491928756420083 121
5 1491928756420217 109
6 1491928756420409 52
7 1491928756420476 105
8 1491928756420605 35
9 1491928756420654 120
10 1491928756420787 105
11 1491928756420907 93
12 1491928756421013 37
13 1491928756421062 112
14 1491928756421187 41
Is there any way to drop some of the items, based on sampling?
To give more details: my problem is the number of values at very close intervals, e.g. 1491928756421062 and 1491928756421187, which makes the resulting chart very crowded (chart not included here). Instead I wanted to somehow have a mean value for those close intervals - maybe grouped by a second...
I would use sample(), but as you said, it selects randomly. If you want to take a sample according to some logic - for instance, keeping only rows whose value satisfies mean * 0.9 < value < mean * 1.1 - you can try the following code. Actually, it all depends on your sampling strategy.
As an example, something like this could be done.
test.csv:
1491928756414930,4643
1491928756419607,166
1491928756419790,120
1491928756419927,142
1491928756420083,121
1491928756420217,109
1491928756420409,52
1491928756420476,105
1491928756420605,35
1491928756420654,120
1491928756420787,105
1491928756420907,93
1491928756421013,37
1491928756421062,112
1491928756421187,41
sampling:
import pandas as pd

df = pd.read_csv("test.csv", sep=",", header=None)
mean = df[1].mean()
my_sample = df[(mean * .90 < df[1]) & (df[1] < mean * 1.10)]
You're looking for resample:
df.set_index(pd.to_datetime(df.date, unit='us')).calltime.resample('s').mean()  # unit='us': the timestamps look like microseconds since the epoch
This is a more complete example
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

tidx = pd.date_range('2000-01-01', periods=10000, freq='10ms')
df = pd.DataFrame(dict(calltime=np.random.randint(200, size=len(tidx))), tidx)

fig, axes = plt.subplots(2, figsize=(25, 10))
df.plot(ax=axes[0])
df.resample('s').mean().plot(ax=axes[1])
fig.tight_layout()

Pandas: Iterate over rows and find frequency of occurrences

I have a dataframe with 2 columns and 3000 rows.
The first column represents time in time-steps: the first row is 0, the second is 1, ..., the last one is 2999.
The second column represents pressure. The pressure changes as we iterate over the rows, but shows repetitive behaviour: every few steps it drops to its minimum value (which is 375), then goes up again, drops to 375 again, and so on.
What I want to do in Python is to iterate over the rows and see:
1) at which time-steps the pressure is at its minimum
2) the frequency of these minima, i.e. the interval between them.
import numpy as np
import pandas as pd
import numpy.random as rnd
import scipy.linalg as lin
from matplotlib.pylab import *
import re
from pylab import *
import datetime
df = pd.read_csv('test.csv')
row = next(df.iterrows())[0]
dataset = np.loadtxt(df, delimiter=";")
df.columns = ["Timestamp", "Pressure"]
print(df[[0, 1]])
You don't need to iterate row-wise. You can compare the entire column against the min value to mask it, then use the mask to find the timestep diff:
Data setup:
In [44]:
df = pd.DataFrame({'timestep':np.arange(20), 'value':np.random.randint(375, 400, 20)})
df
Out[44]:
timestep value
0 0 395
1 1 377
2 2 392
3 3 396
4 4 377
5 5 379
6 6 384
7 7 396
8 8 380
9 9 392
10 10 395
11 11 393
12 12 390
13 13 393
14 14 397
15 15 396
16 16 393
17 17 379
18 18 396
19 19 390
mask the df by comparing the column against the min value:
In [45]:
df[df['value']==df['value'].min()]
Out[45]:
timestep value
1 1 377
4 4 377
We can use the mask with loc to find the corresponding 'timestep' value and use diff to find the interval differences:
In [48]:
df.loc[df['value']==df['value'].min(),'timestep'].diff()
Out[48]:
1 NaN
4 3.0
Name: timestep, dtype: float64
You can divide the above by 1/60 to find the frequency with respect to 1 minute, or use whatever frequency unit you desire.
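Putting this together for the question's column names (a sketch assuming df has 'Timestamp' and 'Pressure' columns as described in the question):
min_mask = df['Pressure'] == df['Pressure'].min()   # rows where the pressure hits its minimum (375)
min_steps = df.loc[min_mask, 'Timestamp']           # 1) time-steps at which the minimum occurs
intervals = min_steps.diff().dropna()               # 2) number of time-steps between consecutive minima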

Quickly sampling large number of rows from large dataframes in python

I have a very large dataframe (about 1.1M rows) and I am trying to sample it.
I have a list of indexes (about 70,000 indexes) that I want to select from the entire dataframe.
This is what I've tried so far, but all these methods take way too much time:
Method 1 - using pandas:
sample = pandas.read_csv("data.csv", index_col = 0).reset_index()
sample = sample[sample['Id'].isin(sample_index_array)]
Method 2:
I tried to write all the sampled lines to another csv.
f = open("data.csv", 'r')
out = open("sampled_date.csv", 'w')
out.write(f.readline())  # copy the header line
total = 0
while 1:
    total += 1
    line = f.readline().strip()
    if line == '':
        break
    arr = line.split(",")
    if int(arr[0]) in sample_index_array:
        out.write(line + "\n")
Can anyone please suggest a better method? Or how I can modify this to make it faster?
Thanks
We don't have your data, so here is an example with two options:
after reading: use a pandas Index object to select a subset via the .iloc selection method
while reading: a predicate with the skiprows parameter
Given
A collection of indices and a (large) sample DataFrame written to test.csv:
import pandas as pd
import numpy as np
indices = [1, 2, 3, 10, 20, 30, 67, 78, 900, 2176, 78776]
df = pd.DataFrame(np.random.randint(0, 100, size=(1000000, 4)), columns=list("ABCD"))
df.to_csv("test.csv", header=False)
df.info()
Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1000000 entries, 0 to 999999
Data columns (total 4 columns):
A 1000000 non-null int32
B 1000000 non-null int32
C 1000000 non-null int32
D 1000000 non-null int32
dtypes: int32(4)
memory usage: 15.3 MB
Code
Option 1 - after reading
Convert a sample list of indices to an Index object and slice the loaded DataFrame:
idxs = pd.Index(indices)
subset = df.iloc[idxs, :]
print(subset)
The .iat and .at methods are even faster, but require scalar indices.
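For instance (a minimal illustration, not part of the original answer):
df.iat[1, 0]         # scalar lookup by integer position: row 1, column 0
df.at[78776, "A"]    # scalar lookup by index label and column name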
Option 2 - while reading (Recommended)
We can write a predicate that keeps selected indices as the file is being read (more efficient):
pred = lambda x: x not in indices
data = pd.read_csv("test.csv", skiprows=pred, index_col=0, names=list("ABCD"))
print(data)
See also the issue that led to extending skiprows.
Results
Both options produce the same output:
A B C D
1 74 95 28 4
2 87 3 49 94
3 53 54 34 97
10 58 41 48 15
20 86 20 92 11
30 36 59 22 5
67 49 23 86 63
78 98 63 60 75
900 26 11 71 85
2176 12 73 58 91
78776 42 30 97 96
