DataFrame segmentation and dropping - Python

I have the following DataFrame in pandas:
A = [1,10,23,45,24,24,55,67,73,26,13,96,53,23,24,43,90],
B = [24,23,29, BW,49,59,72, BW,9,183,17, txt,2,49,BW,479,BW]
I want to create a new column, and in that column I want values from column A based on a condition on column B. The conditions are: if there is no 'txt' between two consecutive 'BW' values, I keep those values of A in column C; but if there is a 'txt' between two consecutive 'BW' values, I want to drop all those values. So the expected output should look like:
A = [1,10,23,45,24,24,55,67,73,26,13,96,53,23,24,43,90],
B = [24,23,29, BW,49,59,72, BW,9,183,17, txt,2,49,BW,479,BW]
C = [1,10,23, BW, 24,24,55, BW, nan, nan, nan, nan, nan, nan, BW, 43,BW]
I have no clue how to do it. Any help is much appreciated.

EDIT:
Updated the answer, which was missing the values of BW in the final df.
import pandas as pd
import numpy as np
BW = 999
txt = -999
A = [1,10,23,45,24,24,55,67,73,26,13,96,53,23,24,43,90]
B = [24,23,29, BW,49,59,72, BW,9,183,17, txt,2,49,BW,479,BW]
df = pd.DataFrame({'A': A, 'B': B})
df = df.assign(group = (df[~df['B'].between(BW,BW)].index.to_series().diff() > 1).cumsum())
df['C'] = np.where(df.group == df[df.B == txt].group.values[0], np.nan, df.A)
df['C'] = np.where(df['B'] == BW, df['B'], df['C'])
df['C'] = df['C'].astype('Int64')
df = df.drop('group', axis=1)
In [435]: df
Out[435]:
A B C
0 1 24 1
1 10 23 10
2 23 29 23
3 45 999 999 <-- BW
4 24 49 24
5 24 59 24
6 55 72 55
7 67 999 999 <-- BW
8 73 9 <NA>
9 26 183 <NA>
10 13 17 <NA>
11 96 -999 <NA> <-- txt is in the middle of BW
12 53 2 <NA>
13 23 49 <NA>
14 24 999 999 <-- BW
15 43 479 43
16 90 999 999 <-- BW
You can achieve it like so. Assuming BW and txt are specific values, I just filled them with some arbitrary numbers to differentiate them:
In [277]: BW = 999
In [278]: txt = -999
In [293]: A = [1,10,23,45,24,24,55,67,73,26,13,96,53,23,24,43,90]
...: B = [24,23,29, BW,49,59,72, BW,9,183,17, txt,2,49,BW,479,BW]
In [300]: df = pd.DataFrame({'A': A, 'B': B})
In [301]: df
Out[301]:
A B
0 1 24
1 10 23
2 23 29
3 45 999
4 24 49
5 24 59
6 55 72
7 67 999
8 73 9
9 26 183
10 13 17
11 96 -999
12 53 2
13 23 49
14 24 999
15 43 479
16 90 999
First, let's split the values into groups: each group contains the values of B that lie between one BW and the next BW.
In [321]: df = df.assign(group = (df[~df['B'].between(BW,BW)].index.to_series().diff() > 1).cumsum())
In [322]: df
Out[322]:
A B group
0 1 24 0.00000000
1 10 23 0.00000000
2 23 29 0.00000000
3 45 999 NaN
4 24 49 1.00000000
5 24 59 1.00000000
6 55 72 1.00000000
7 67 999 NaN
8 73 9 2.00000000
9 26 183 2.00000000
10 13 17 2.00000000
11 96 -999 2.00000000
12 53 2 2.00000000
13 23 49 2.00000000
14 24 999 NaN
15 43 479 3.00000000
16 90 999 NaN
Next, with np.where() we can replace the values depending on the condition that you set.
In [360]: df['C'] = np.where(df.group == df[df.B == txt].group.values[0], np.nan, df.B)
In [432]: df
Out[432]:
A B group C
0 1 24 0.00000000 24.00000000
1 10 23 0.00000000 23.00000000
2 23 29 0.00000000 29.00000000
3 45 999 NaN 999.00000000
4 24 49 1.00000000 49.00000000
5 24 59 1.00000000 59.00000000
6 55 72 1.00000000 72.00000000
7 67 999 NaN 999.00000000
8 73 9 2.00000000 NaN
9 26 183 2.00000000 NaN
10 13 17 2.00000000 NaN
11 96 -999 2.00000000 NaN
12 53 2 2.00000000 NaN
13 23 49 2.00000000 NaN
14 24 999 NaN 999.00000000
15 43 479 3.00000000 479.00000000
16 90 999 NaN 999.00000000
Here we need to set C back to the values of B for the rows where B is equal to BW.
In [488]: df['C'] = np.where(df['B'] == BW, df['B'], df['C'])
In [489]: df
Out[489]:
A B group C
0 1 24 0.00000000 24.00000000
1 10 23 0.00000000 23.00000000
2 23 29 0.00000000 29.00000000
3 45 999 NaN 999.00000000
4 24 49 1.00000000 49.00000000
5 24 59 1.00000000 59.00000000
6 55 72 1.00000000 72.00000000
7 67 999 NaN 999.00000000
8 73 9 2.00000000 NaN
9 26 183 2.00000000 NaN
10 13 17 2.00000000 NaN
11 96 -999 2.00000000 NaN
12 53 2 2.00000000 NaN
13 23 49 2.00000000 NaN
14 24 999 NaN 999.00000000
15 43 479 3.00000000 479.00000000
16 90 999 NaN 999.00000000
Lastly, convert the float column to the nullable Int64 dtype and drop the group column, which we do not need anymore. If you want to keep the missing values as np.nan, skip the conversion to Int64.
In [396]: df.C = df.C.astype('Int64')
In [397]: df
Out[397]:
A B group C
0 1 24 0.00000000 24
1 10 23 0.00000000 23
2 23 29 0.00000000 29
3 45 999 NaN 999
4 24 49 1.00000000 49
5 24 59 1.00000000 59
6 55 72 1.00000000 72
7 67 999 NaN 999
8 73 9 2.00000000 <NA>
9 26 183 2.00000000 <NA>
10 13 17 2.00000000 <NA>
11 96 -999 2.00000000 <NA>
12 53 2 2.00000000 <NA>
13 23 49 2.00000000 <NA>
14 24 999 NaN 999
15 43 479 3.00000000 479
16 90 999 NaN 999
In [398]: df = df.drop('group', axis=1)
In [435]: df
Out[435]:
A B C
0 1 24 24
1 10 23 23
2 23 29 29
3 45 999 999
4 24 49 49
5 24 59 59
6 55 72 72
7 67 999 999
8 73 9 <NA>
9 26 183 <NA>
10 13 17 <NA>
11 96 -999 <NA>
12 53 2 <NA>
13 23 49 <NA>
14 24 999 999
15 43 479 479
16 90 999 999
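For reference, the same result can be sketched a bit more compactly by numbering the segments with a cumulative sum over the BW rows; this is only a sketch and assumes the same BW/txt sentinel values as above:
import numpy as np
import pandas as pd

BW, txt = 999, -999
A = [1,10,23,45,24,24,55,67,73,26,13,96,53,23,24,43,90]
B = [24,23,29, BW,49,59,72, BW,9,183,17, txt,2,49,BW,479,BW]
df = pd.DataFrame({'A': A, 'B': B})

seg = df['B'].eq(BW).cumsum()                         # segment id, bumps at every BW row
bad = df['B'].eq(txt).groupby(seg).transform('any')   # True for segments containing txt
df['C'] = np.where(df['B'].eq(BW), df['B'],           # keep the BW markers themselves
                   np.where(bad, np.nan, df['A']))    # drop A inside segments with txt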

I don't know if this is the most efficient way to do it, but you can create a new column called mask by mapping the values in column B as follows: 'BW' to True, 'txt' to False, and all other values to np.nan.
Then if you forward fill the NaNs in mask, backward fill the NaNs in mask, and logically combine the results (True as long as either the forward- or backward-filled column is False), you get a column called final_mask where all of the values between two consecutive BW markers that enclose a txt are set to True.
You can then use .apply to select the value of column A when final_mask is False and column B isn't 'BW', column B when final_mask is False and column B is 'BW', and np.nan otherwise.
import numpy as np
import pandas as pd
A = [1,10,23,45,24,24,55,67,73,26,13,96,53,23,24,43,90]
B = [24,23,29, 'BW',49,59,72, 'BW',9,183,17, 'txt',2,49,'BW',479,'BW']
df = pd.DataFrame({'A':A,'B':B})
df["mask"] = df["B"].apply(lambda x: True if x == 'BW' else False if x == 'txt' else np.nan)
df["ffill"] = df["mask"].fillna(method="ffill")
df["bfill"] = df["mask"].fillna(method="bfill")
df["final_mask"] = (df["ffill"] == False) | (df["bfill"] == False)
df["C"] = df.apply(lambda x: x['A'] if (
(x['final_mask'] == False) & (x['B'] != 'BW'))
else x['B'] if ((x['final_mask'] == False) & (x['B'] == 'BW'))
else np.nan, axis=1
)
>>> df
A B mask ffill bfill final_mask C
0 1 24 NaN NaN True False 1
1 10 23 NaN NaN True False 10
2 23 29 NaN NaN True False 23
3 45 BW True True True False BW
4 24 49 NaN True True False 24
5 24 59 NaN True True False 24
6 55 72 NaN True True False 55
7 67 BW True True True False BW
8 73 9 NaN True False True NaN
9 26 183 NaN True False True NaN
10 13 17 NaN True False True NaN
11 96 txt False False False True NaN
12 53 2 NaN False True True NaN
13 23 49 NaN False True True NaN
14 24 BW True True True False BW
15 43 479 NaN True True False 43
16 90 BW True True True False BW
Dropping the columns we created along the way:
df.drop(columns=['mask','ffill','bfill','final_mask'])
A B C
0 1 24 1
1 10 23 10
2 23 29 23
3 45 BW BW
4 24 49 24
5 24 59 24
6 55 72 55
7 67 BW BW
8 73 9 NaN
9 26 183 NaN
10 13 17 NaN
11 96 txt NaN
12 53 2 NaN
13 23 49 NaN
14 24 BW BW
15 43 479 43
16 90 BW BW
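If the row-wise apply becomes slow on a larger frame, the same selection can be written with np.select. This is just a sketch and assumes the final_mask column built above:
df['C'] = np.select(
    [~df['final_mask'] & (df['B'] != 'BW'),    # ordinary row in a kept segment -> A
     ~df['final_mask'] & (df['B'] == 'BW')],   # BW marker in a kept segment -> 'BW'
    [df['A'], df['B']],
    default=np.nan,                            # everything else -> NaN
)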

Related

Pandas: drop rows not within multiple value ranges using isin

Given a df
a
0 1
1 2
2 1
3 7
4 10
5 11
6 21
7 22
8 26
9 51
10 56
11 83
12 82
13 85
14 90
I would like to drop rows if the value in column a is not within any of these ranges: (10-15), (25-30), (50-55), (80-85). The ranges are built from `lbot` and `ltop`:
lbot =[10, 25, 50, 80]
ltop=[15, 30, 55, 85]
I am thinking this can be achieved via pandas isin:
df[df['a'].isin(list(zip(lbot,ltop)))]
But it returns an empty df instead.
The expected output is
a
10
11
26
51
83
82
85
You can use numpy broadcasting to create a boolean mask that, for each row, is True if the value is within any of the ranges, and then filter df with it:
out = df[((df[['a']].to_numpy() >=lbot) & (df[['a']].to_numpy() <=ltop)).any(axis=1)]
Output:
a
4 10
5 11
8 26
9 51
11 83
12 82
13 85
Create the values to test with a flattened list comprehension over range:
df = df[df['a'].isin([z for x, y in zip(lbot,ltop) for z in range(x, y+1)])]
print (df)
a
4 10
5 11
8 26
9 51
11 83
12 82
13 85
Or use np.concatenate to flatten a list of ranges:
df = df[df['a'].isin(np.concatenate([range(x, y+1) for x, y in zip(lbot,ltop)]))]
A method that uses between():
df[pd.concat([df['a'].between(x, y) for x,y in zip(lbot, ltop)], axis=1).any(axis=1)]
output:
a
4 10
5 11
8 26
9 51
11 83
12 82
13 85
If the values in the two lists are sorted, a method that doesn't require any loop is to use pandas.cut and check that you obtain the same group when cutting on the two lists:
# group based on lower bound
id1 = pd.cut(df['a'], bins=lbot+[float('inf')], labels=range(len(lbot)),
             right=False)  # include lower bound
# group based on upper bound
id2 = pd.cut(df['a'], bins=[0]+ltop, labels=range(len(ltop)))
# ensure groups are identical
df[id1.eq(id2)]
output:
a
4 10
5 11
8 26
9 51
11 83
12 82
13 85
intermediate groups:
a id1 id2
0 1 NaN 0
1 2 NaN 0
2 1 NaN 0
3 7 NaN 0
4 10 0 0
5 11 0 0
6 21 0 1
7 22 0 1
8 26 1 1
9 51 2 2
10 56 2 3
11 83 3 3
12 82 3 3
13 85 3 3
14 90 3 NaN
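Another loop-free sketch (assuming the same lbot and ltop lists) is to build an IntervalIndex and cut on it directly; values that fall into no interval come back as NaN:
import pandas as pd

lbot = [10, 25, 50, 80]
ltop = [15, 30, 55, 85]

# one closed interval [lbot[i], ltop[i]] per pair
intervals = pd.IntervalIndex.from_arrays(lbot, ltop, closed='both')

# keep the rows whose value lands inside some interval
out = df[pd.cut(df['a'], intervals).notna()]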

What causes the different multiplication results if the DataFrame has column names?

I tested these two snippets; the dfs share the same structure apart from df.columns, so what causes the difference between them? And how should I change my second snippet? For example, should I always use pandas.DataFrame.mul, or is there another method to avoid this?
# test1
df = pd.DataFrame(np.random.randint(100, size=(10, 10))) \
       .assign(Count=np.random.rand(10))
df.iloc[:, 0:3] *= df['Count']
df
Out[1]:
0 1 2 3 4 5 6 7 8 9 Count
0 26.484949 68.217006 4.902341 61 10 13 31 15 10 11 0.645974
1 56.845743 70.085965 28.106758 79 56 47 82 83 62 40 0.934480
2 33.590667 78.496281 1.634114 94 3 91 16 41 93 55 0.326823
3 51.031974 15.886152 26.145821 67 31 20 81 21 10 8 0.012706
4 47.156128 82.234199 10.458328 24 8 68 44 24 4 50 0.517130
5 18.733256 61.675649 23.531239 74 61 97 20 12 0 95 0.360815
6 4.521820 26.165427 26.145821 68 10 77 67 92 82 11 0.606739
7 24.547026 62.610129 23.531239 50 45 69 94 56 77 56 0.412445
8 52.969897 75.692843 9.804683 73 74 5 10 60 51 77 0.125309
9 21.963128 30.837825 19.609366 75 9 50 68 10 82 96 0.687966
#test2
df = pd.DataFrame(np.random.randint(100, size=(10, 10))) \
       .assign(Count=np.random.rand(10))
df.columns = ['find', 'a', 'b', 3, 4, 5, 6, 7, 8, 9, 'Count']
df.iloc[:, 0:3] *= df['Count']
df
Out[2]:
find a b 3 4 5 6 7 8 9 Count
0 NaN NaN NaN 63 63 47 81 3 48 34 0.603953
1 NaN NaN NaN 70 48 41 27 78 75 23 0.839635
2 NaN NaN NaN 5 38 52 23 3 75 4 0.515159
3 NaN NaN NaN 40 49 31 25 63 48 25 0.483255
4 NaN NaN NaN 42 89 46 47 78 30 5 0.693555
5 NaN NaN NaN 68 83 81 87 7 54 3 0.108306
6 NaN NaN NaN 74 48 99 67 80 81 36 0.361500
7 NaN NaN NaN 10 19 26 41 11 24 33 0.705899
8 NaN NaN NaN 38 51 83 78 7 31 42 0.838703
9 NaN NaN NaN 2 7 63 14 28 38 10 0.277547
df.iloc[:, 0:3] is a dataframe with three series, named find, a, and b. df['Count'] is a series named Count. When you multiply these, pandas tries to match up same-named series, but since there are none, it ends up generating NaN values for all the slots. Then it assigns these NaNs back to the dataframe.
I think that using .mul with an appropriate axis= is the way around this, but I may be wrong about that...
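A minimal sketch of that idea, under the assumption that the intent is to scale the first three columns by Count row by row; .mul(..., axis=0) aligns on the index instead of the column names:
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.randint(100, size=(10, 10))) \
       .assign(Count=np.random.rand(10))
df.columns = ['find', 'a', 'b', 3, 4, 5, 6, 7, 8, 9, 'Count']

# broadcast Count down the rows, aligned on the index rather than on column names
scaled = df.iloc[:, 0:3].mul(df['Count'], axis=0)
df.iloc[:, 0:3] = scaled.to_numpy()   # assign back positionally to avoid re-alignment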

How to create a rolling window in pandas with another condition

I have a data frame with 2 columns
df = pd.DataFrame(np.random.randint(0,100,size=(100, 2)), columns=list('AB'))
A B
0 11 10
1 61 30
2 24 54
3 47 52
4 72 42
... ... ...
95 61 2
96 67 41
97 95 30
98 29 66
99 49 22
100 rows × 2 columns
Now I want to create a third column, which is a rolling-window max of col 'A', BUT the max has to be lower than the corresponding value in col 'B'. In other words, out of the 4 values in the window (using a window size of 4) in column 'A', I want the one closest to the value in col 'B' yet smaller than it.
So for example in row
3 47 52
the new value I am looking for is not 61 but 47, because it is the highest value of the 4 that is not higher than 52
pseudo code
df['C'] = df['A'].rolling(window=4).max() where < df['B']
You can use concat + shift to create a wide DataFrame with the previous values, which makes complicated rolling calculations a bit easier.
Sample Data
np.random.seed(42)
df = pd.DataFrame(np.random.randint(0, 100, size=(100, 2)), columns=list('AB'))
Code
N = 4
# End slice ensures the same default min_periods behavior as `.rolling`
df1 = pd.concat([df['A'].shift(i).rename(i) for i in range(N)], axis=1).iloc[N-1:]
# Remove values larger than B, then find the max of remaining.
df['C'] = df1.where(df1.lt(df.B, axis=0)).max(1)
print(df.head(15))
A B C
0 51 92 NaN # Missing b/c min_periods
1 14 71 NaN # Missing b/c min_periods
2 60 20 NaN # Missing b/c min_periods
3 82 86 82.0
4 74 74 60.0
5 87 99 87.0
6 23 2 NaN # Missing b/c 82, 74, 87, 23 all > 2
7 21 52 23.0 # Max of 21, 23, 87, 74 which is < 52
8 1 87 23.0
9 29 37 29.0
10 1 63 29.0
11 59 20 1.0
12 32 75 59.0
13 57 21 1.0
14 88 48 32.0
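If it helps to sanity-check the wide-frame trick, a slow but explicit loop over the same definition might look like this (just a sketch, not meant for large frames):
def rolling_max_below_b(df, N=4):
    # for each row, take the previous N values of A (including the current row),
    # keep only those strictly below the current B, and return their max
    out = []
    for i in range(len(df)):
        if i < N - 1:                        # mirrors the default min_periods behavior
            out.append(np.nan)
            continue
        window = df['A'].iloc[i - N + 1 : i + 1]
        window = window[window < df['B'].iloc[i]]
        out.append(window.max() if len(window) else np.nan)
    return pd.Series(out, index=df.index)

# rolling_max_below_b(df) should match df['C'] wherever both are non-NaN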
You can .apply a custom function to the rolling window. In this case, you can use a default argument to pass in the B column.
df = pd.DataFrame(np.random.randint(0,100,size=(100, 2)), columns=list('AB'))
def rollup(a, B=df.B):
    ix = a.index.max()
    b = B[ix]
    return a[a < b].max()
df['C'] = df.A.rolling(4).apply(rollup)
df
# returns:
A B C
0 8 17 NaN
1 23 84 NaN
2 75 84 NaN
3 86 24 23.0
4 52 83 75.0
.. .. .. ...
95 38 22 NaN
96 53 48 38.0
97 45 4 NaN
98 3 92 53.0
99 91 86 53.0
The NaN values occur when no number in the window of A is less than B or at the start of the series when the window is too big for the first few rows.
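A small variant of that idea, sketched below, avoids baking the module-level df into a default argument by passing B through the args parameter of rolling.apply; raw=False keeps each window as a Series so its index is available:
def rollup(a, B):
    b = B[a.index.max()]        # B value aligned with the last row of the window
    return a[a < b].max()

df['C'] = df['A'].rolling(4).apply(rollup, raw=False, args=(df['B'],))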
You can use where to replace values that don't fulfill the condition with np.nan and then use rolling(window=4, min_periods=1):
In [37]: df['C'] = df['A'].where(df['A'] < df['B'], np.nan).rolling(window=4, min_periods=1).max()
In [38]: df
Out[38]:
A B C
0 0 1 0.0
1 1 2 1.0
2 2 3 2.0
3 10 4 2.0
4 4 5 4.0
5 5 6 5.0
6 10 7 5.0
7 10 8 5.0
8 10 9 5.0
9 10 10 NaN

How to randomly drop rows in a Pandas dataframe until there is an equal number of each value in a column?

I have a dataframe df with two columns, X and y.
In df['y'] I have integers from 1 to 10 inclusive. However, they have different frequencies:
df['y'].value_counts()
10 6645
9 6213
8 5789
7 4643
6 2532
5 1839
4 1596
3 878
2 815
1 642
I want to cut down my dataframe so that there is an equal number of occurrences of each label. Since the minimum frequency is 642, I only want to keep 642 randomly sampled rows of each class label, so that my new dataframe has 642 rows per class label.
I thought this might help; however, stratifying only keeps the same percentage of each label, whereas I want all my labels to have the same frequency.
As an example of a dataframe:
df = pd.DataFrame()
df['y'] = sum([[10]*6645, [9]* 6213,[8]* 5789, [7]*4643,[6]* 2532, [5]*1839,[4]* 1596,[3]* 878, [2]*815, [1]* 642],[])
df['X'] = [random.choice(list('abcdef')) for i in range(len(df))]
Use DataFrame.sample with groupby:
df = pd.DataFrame(np.random.randint(1, 11, 100), columns=['y'])
val_cnt = df['y'].value_counts()
min_sample = val_cnt.min()
print(min_sample) # Outputs 7 as an example
print(df.groupby('y').apply(lambda s: s.sample(min_sample)))
Output
y
y
1 68 1
8 1
82 1
17 1
99 1
31 1
6 1
2 55 2
15 2
81 2
22 2
46 2
13 2
58 2
3 2 3
30 3
84 3
61 3
78 3
24 3
98 3
4 51 4
86 4
52 4
10 4
42 4
80 4
53 4
5 16 5
87 5
... ..
6 26 6
18 6
7 56 7
4 7
60 7
65 7
85 7
37 7
70 7
8 93 8
41 8
28 8
20 8
33 8
64 8
62 8
9 73 9
79 9
9 9
40 9
29 9
57 9
7 9
10 96 10
67 10
47 10
54 10
97 10
71 10
94 10
[70 rows x 1 columns]
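Applied to the question's frame, something like the following sketch gives a flat, balanced result; group_keys=False drops the extra group level visible in the output above:
min_sample = df['y'].value_counts().min()    # 642 for the data in the question

balanced = (df.groupby('y', group_keys=False)
              .apply(lambda s: s.sample(min_sample))
              .reset_index(drop=True))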

Combine duplicated columns within a DataFrame

If I have a dataframe with columns that share the same name, is there a way to combine the columns that have the same name with some sort of function (e.g. sum)?
For instance with:
In [186]:
df["NY-WEB01"].head()
Out[186]:
NY-WEB01 NY-WEB01
DateTime
2012-10-18 16:00:00 5.6 2.8
2012-10-18 17:00:00 18.6 12.0
2012-10-18 18:00:00 18.4 12.0
2012-10-18 19:00:00 18.2 12.0
2012-10-18 20:00:00 19.2 12.0
How might I collapse the NY-WEB01 columns (there are a bunch of duplicate columns, not just NY-WEB01) by summing each row where the column name is the same?
I believe this does what you are after:
df.groupby(lambda x:x, axis=1).sum()
Alternatively, between 3% and 15% faster depending on the length of the df:
df.groupby(df.columns, axis=1).sum()
EDIT: To extend this beyond sums, use .agg() (short for .aggregate()):
df.groupby(df.columns, axis=1).agg(numpy.max)
pandas >= 0.20: df.groupby(level=0, axis=1)
You don't need a lambda here, nor do you explicitly have to query df.columns; groupby accepts a level argument you can specify in conjunction with the axis argument. This is cleaner, IMO.
# Setup
np.random.seed(0)
df = pd.DataFrame(np.random.choice(50, (5, 5)), columns=list('AABBB'))
df
A A B B B
0 44 47 0 3 3
1 39 9 19 21 36
2 23 6 24 24 12
3 1 38 39 23 46
4 24 17 37 25 13
df.groupby(level=0, axis=1).sum()
A B
0 91 6
1 48 76
2 29 60
3 39 108
4 41 75
Handling MultiIndex columns
Another case to consider is when dealing with MultiIndex columns. Consider
df.columns = pd.MultiIndex.from_arrays([['one']*3 + ['two']*2, df.columns])
df
one two
A A B B B
0 44 47 0 3 3
1 39 9 19 21 36
2 23 6 24 24 12
3 1 38 39 23 46
4 24 17 37 25 13
To perform aggregation across the upper levels, use
df.groupby(level=1, axis=1).sum()
A B
0 91 6
1 48 76
2 29 60
3 39 108
4 41 75
or, if aggregating per upper level only, use
df.groupby(level=[0, 1], axis=1).sum()
one two
A B B
0 91 0 6
1 48 19 57
2 29 24 36
3 39 39 69
4 41 37 38
Alternate Interpretation: Dropping Duplicate Columns
If you came here looking to find out how to simply drop duplicate columns (without performing any aggregation), use Index.duplicated:
df.loc[:,~df.columns.duplicated()]
A B
0 44 0
1 39 19
2 23 24
3 1 39
4 24 37
Or, to keep the last ones, specify keep='last' (default is 'first'),
df.loc[:,~df.columns.duplicated(keep='last')]
A B
0 47 3
1 9 36
2 6 12
3 38 46
4 17 13
The groupby alternatives for the two solutions above are df.groupby(level=0, axis=1).first(), and ... .last(), respectively.
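As a quick sketch on the flat AABBB sample from the Setup block (re-created here for clarity), the two forms line up like this:
import numpy as np
import pandas as pd

np.random.seed(0)
df = pd.DataFrame(np.random.choice(50, (5, 5)), columns=list('AABBB'))

# keep the first occurrence of each duplicated column name
first_via_groupby = df.groupby(level=0, axis=1).first()
first_via_mask    = df.loc[:, ~df.columns.duplicated()]

# keep the last occurrence
last_via_groupby = df.groupby(level=0, axis=1).last()
last_via_mask    = df.loc[:, ~df.columns.duplicated(keep='last')]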
Here is a possibly simpler solution for common aggregation functions like sum, mean, median, max, min, std: just use the parameter axis=1 for working with columns, together with level:
#coldspeed samples
np.random.seed(0)
df = pd.DataFrame(np.random.choice(50, (5, 5)), columns=list('AABBB'))
print (df)
print (df.sum(axis=1, level=0))
A B
0 91 6
1 48 76
2 29 60
3 39 108
4 41 75
df.columns = pd.MultiIndex.from_arrays([['one']*3 + ['two']*2, df.columns])
print (df.sum(axis=1, level=1))
A B
0 91 6
1 48 76
2 29 60
3 39 108
4 41 75
print (df.sum(axis=1, level=[0,1]))
one two
A B B
0 91 0 6
1 48 19 57
2 29 24 36
3 39 39 69
4 41 37 38
It works similarly for the index; then use axis=0 instead of axis=1:
np.random.seed(0)
df = pd.DataFrame(np.random.choice(50, (5, 5)), columns=list('ABCDE'), index=list('aabbc'))
print (df)
A B C D E
a 44 47 0 3 3
a 39 9 19 21 36
b 23 6 24 24 12
b 1 38 39 23 46
c 24 17 37 25 13
print (df.min(axis=0, level=0))
A B C D E
a 39 9 0 3 3
b 1 6 24 23 12
c 24 17 37 25 13
df.index = pd.MultiIndex.from_arrays([['bar']*3 + ['foo']*2, df.index])
print (df.mean(axis=0, level=1))
A B C D E
a 41.5 28.0 9.5 12.0 19.5
b 12.0 22.0 31.5 23.5 29.0
c 24.0 17.0 37.0 25.0 13.0
print (df.max(axis=0, level=[0,1]))
A B C D E
bar a 44 47 19 21 36
b 23 6 24 24 12
foo b 1 38 39 23 46
c 24 17 37 25 13
If you need to use other functions like first, last, size or count, it is necessary to use coldspeed's answer.
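A minimal sketch of what that looks like, re-using the flat AABBB sample (an assumption) from the first answer:
import numpy as np
import pandas as pd

np.random.seed(0)
df = pd.DataFrame(np.random.choice(50, (5, 5)), columns=list('AABBB'))

# per duplicated column name: first value and count of non-NA values,
# grouping on the column names as in the answer above
print(df.groupby(level=0, axis=1).first())
print(df.groupby(level=0, axis=1).count())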
