Given a df:
a
0 1
1 2
2 1
3 7
4 10
5 11
6 21
7 22
8 26
9 51
10 56
11 83
12 82
13 85
14 90
I would like to drop rows if the value in column a is not within any of these multiple ranges:
(10-15), (25-30), (50-55), (80-85). The ranges are built from `lbot` and `ltop`:
lbot = [10, 25, 50, 80]
ltop = [15, 30, 55, 85]
I was thinking this could be achieved via pandas isin:
df[df['a'].isin(list(zip(lbot, ltop)))]
But it returns an empty df instead.
The expected output is
a
10
11
26
51
83
82
85
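Why the isin attempt fails: list(zip(lbot, ltop)) produces tuples like (10, 15), and a scalar in column a never compares equal to a tuple, so every membership test is False. A quick demonstration:
lbot = [10, 25, 50, 80]
ltop = [15, 30, 55, 85]
print(list(zip(lbot, ltop)))  # [(10, 15), (25, 30), (50, 55), (80, 85)]
# isin checks equality against each tuple, so no value of 'a' matches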
You can use numpy broadcasting to create a boolean mask that is True for each row whose value falls within any of the ranges, and filter df with it:
out = df[((df[['a']].to_numpy() >= lbot) & (df[['a']].to_numpy() <= ltop)).any(axis=1)]
Output:
a
4 10
5 11
8 26
9 51
11 83
12 82
13 85
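The broadcasting works because df[['a']].to_numpy() has shape (15, 1) while lbot and ltop have length 4, so each comparison expands to a (15, 4) mask with one column per range, and any(axis=1) collapses it to one boolean per row. A minimal sketch of the intermediate steps:
import numpy as np

a = df[['a']].to_numpy()           # shape (15, 1)
mask = (a >= lbot) & (a <= ltop)   # broadcasts to shape (15, 4), one column per range
out = df[mask.any(axis=1)]         # keep rows inside at least one range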
Create the values with a flattened list comprehension over range:
df = df[df['a'].isin([z for x, y in zip(lbot,ltop) for z in range(x, y+1)])]
print (df)
a
4 10
5 11
8 26
9 51
11 83
12 82
13 85
Or use np.concatenate to flatten the list of ranges:
df = df[df['a'].isin(np.concatenate([range(x, y+1) for x, y in zip(lbot,ltop)]))]
A method that uses between():
df[pd.concat([df['a'].between(x, y) for x,y in zip(lbot, ltop)], axis=1).any(axis=1)]
output:
a
4 10
5 11
8 26
9 51
11 83
12 82
13 85
If the values in the two lists are sorted, a method that doesn't require any loop is to use pandas.cut and check that you obtain the same group when cutting on the two lists:
# group based on lower bound
id1 = pd.cut(df['a'], bins=lbot + [float('inf')], labels=range(len(lbot)),
             right=False)  # include the lower bound
# group based on upper bound
id2 = pd.cut(df['a'], bins=[0]+ltop, labels=range(len(ltop)))
# ensure groups are identical
df[id1.eq(id2)]
output:
a
4 10
5 11
8 26
9 51
11 83
12 82
13 85
intermediate groups:
a id1 id2
0 1 NaN 0
1 2 NaN 0
2 1 NaN 0
3 7 NaN 0
4 10 0 0
5 11 0 0
6 21 0 1
7 22 0 1
8 26 1 1
9 51 2 2
10 56 2 3
11 83 3 3
12 82 3 3
13 85 3 3
14 90 3 NaN
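If both bound lists are sorted, another loop-free sketch (an alternative to the cut trick above, not part of the original answer) uses np.searchsorted to find each value's candidate range and then verify the upper bound:
import numpy as np

a = df['a'].to_numpy()
# index of the last range whose lower bound is <= a
i = np.searchsorted(lbot, a, side='right') - 1
# keep rows that have a candidate range and do not exceed its upper bound
mask = (i >= 0) & (a <= np.asarray(ltop)[i.clip(0)])
print(df[mask])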
Related
I have this pandas dataframe, sorted by the "h" column. I want to add two new columns:
every item of a zone gets a max boundary and a min boundary (the same for every item in the zone). The max boundary is the minimum "h" value of the previous zone, and the min boundary is the maximum "h" value of the next zone.
name h w set row zone
ZZON5 40 36 A 0 0
DWOPN 38 44 A 1 0
5SWYZ 37 22 B 2 0
TFQEP 32 55 B 3 0
OQ33H 26 41 A 4 1
FTJVQ 24 25 B 5 1
F1RK2 20 15 B 6 1
266LT 18 19 A 7 1
HSJ3X 16 24 A 8 2
L754O 12 86 B 9 2
LWHDX 11 68 A 10 2
ZKB2F 9 47 A 11 2
5KJ5L 7 72 B 12 3
CZ7ET 6 23 B 13 3
SDZ1B 2 10 A 14 3
5KWRU 1 59 B 15 3
What I hope for:
name h w set row zone maxB minB
ZZON5 40 36 A 0 0 26
DWOPN 38 44 A 1 0 26
5SWYZ 37 22 B 2 0 26
TFQEP 32 55 B 3 0 26
OQ33H 26 41 A 4 1 32 16
FTJVQ 24 25 B 5 1 32 16
F1RK2 20 15 B 6 1 32 16
266LT 18 19 A 7 1 32 16
HSJ3X 16 24 A 8 2 18 7
L754O 12 86 B 9 2 18 7
LWHDX 11 68 A 10 2 18 7
ZKB2F 9 47 A 11 2 18 7
5KJ5L 7 72 B 12 3 9
CZ7ET 6 23 B 13 3 9
SDZ1B 2 10 A 14 3 9
5KWRU 1 59 B 15 3 9
Any ideas?
First group by zone and find the minimum and maximum of each zone:
min_max_zone = df.groupby('zone').agg(min=('h', 'min'), max=('h', 'max'))
Now you can use apply:
df['maxB'] = df['zone'].apply(lambda x: min_max_zone.loc[x-1, 'min']
                              if x-1 in min_max_zone.index else np.nan)
df['minB'] = df['zone'].apply(lambda x: min_max_zone.loc[x+1, 'max']
                              if x+1 in min_max_zone.index else np.nan)
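Since the zone labels here are consecutive integers, the same lookup can also be written without apply by shifting the label and mapping it through the aggregated columns; a sketch under that assumption:
# zone-1 picks up the previous zone's min, zone+1 the next zone's max;
# labels with no neighbour map to NaN automatically
df['maxB'] = (df['zone'] - 1).map(min_max_zone['min'])
df['minB'] = (df['zone'] + 1).map(min_max_zone['max'])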
I have a dataframe df with two columns, X and y.
In df['y'] I have integers from 1 to 10 inclusive. However, they have different frequencies:
df['y'].value_counts()
10 6645
9 6213
8 5789
7 4643
6 2532
5 1839
4 1596
3 878
2 815
1 642
I want to cut down my dataframe so that there is an equal number of occurrences of each label. Since the minimum frequency is 642, I want to keep only 642 randomly sampled rows of each class label, so that the new dataframe has 642 rows per class label.
I thought stratified sampling might help, however stratifying only keeps the same percentage of each label, whereas I want all my labels to have the same frequency.
As an example of a dataframe:
df = pd.DataFrame()
df['y'] = sum([[10]*6645, [9]* 6213,[8]* 5789, [7]*4643,[6]* 2532, [5]*1839,[4]* 1596,[3]* 878, [2]*815, [1]* 642],[])
df['X'] = [random.choice(list('abcdef')) for i in range(len(df))]
Use DataFrame.sample with groupby:
df = pd.DataFrame(np.random.randint(1, 11, 100), columns=['y'])
val_cnt = df['y'].value_counts()
min_sample = val_cnt.min()
print(min_sample)  # outputs 7 in this example
print(df.groupby('y').apply(lambda s: s.sample(min_sample)))
Output
y
y
1 68 1
8 1
82 1
17 1
99 1
31 1
6 1
2 55 2
15 2
81 2
22 2
46 2
13 2
58 2
3 2 3
30 3
84 3
61 3
78 3
24 3
98 3
4 51 4
86 4
52 4
10 4
42 4
80 4
53 4
5 16 5
87 5
... ..
6 26 6
18 6
7 56 7
4 7
60 7
65 7
85 7
37 7
70 7
8 93 8
41 8
28 8
20 8
33 8
64 8
62 8
9 73 9
79 9
9 9
40 9
29 9
57 9
7 9
10 96 10
67 10
47 10
54 10
97 10
71 10
94 10
[70 rows x 1 columns]
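On pandas 1.1+, GroupBy.sample does the same thing without the apply; a sketch (random_state is only for reproducibility):
balanced = df.groupby('y').sample(n=min_sample, random_state=0)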
I have created a days-difference column in a pandas dataframe, and I'm looking to add a column that holds the sum of a specific value over a given backward-looking window of days.
Notice that I can supply a date column for each row if it is needed, but the diff was created as days difference from the first day of the data.
Example
df = pd.DataFrame.from_dict({'diff': [0,0,1,2,2,2,2,10,11,15,18],
'value': [10,11,15,2,5,7,8,9,23,14,15]})
df
Out[12]:
diff value
0 0 10
1 0 11
2 1 15
3 2 2
4 2 5
5 2 7
6 2 8
7 10 9
8 11 23
9 15 14
10 18 15
I want to add a 5_days_back_sum column that sums value over the past 5 days, including the same day, so the result would look like this:
Out[15]:
5_days_back_sum diff value
0 21 0 10
1 21 0 11
2 36 1 15
3 58 2 2
4 58 2 5
5 58 2 7
6 58 2 8
7 9 10 9
8 32 11 23
9 46 15 14
10 29 18 15
How can I achieve that? Originally I had a date column that was used to create the diff column; if that helps, it is available.
Use a custom function with boolean indexing to filter the range and sum:
def f(x):
    return df.loc[(df['diff'] >= x - 5) & (df['diff'] <= x), 'value'].sum()
df['5_days_back_sum'] = df['diff'].apply(f)
print (df)
diff value 5_days_back_sum
0 0 10 21
1 0 11 21
2 1 15 36
3 2 2 58
4 2 5 58
5 2 7 58
6 2 8 58
7 10 9 9
8 11 23 32
9 15 14 46
10 18 15 29
Similar solution with between:
def f(x):
    return df.loc[df['diff'].between(x - 5, x), 'value'].sum()
df['5_days_back_sum'] = df['diff'].apply(f)
print (df)
diff value 5_days_back_sum
0 0 10 21
1 0 11 21
2 1 15 36
3 2 2 58
4 2 5 58
5 2 7 58
6 2 8 58
7 10 9 9
8 11 23 32
9 15 14 46
10 18 15 29
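If the frame is small enough, the same windowed sum can be vectorized with a broadcast comparison instead of apply; a sketch that trades memory for speed, since it builds an n x n mask:
import numpy as np

d = df['diff'].to_numpy()
v = df['value'].to_numpy()
# mask[i, j] is True when row j's diff lies in [d[i] - 5, d[i]]
mask = (d >= d[:, None] - 5) & (d <= d[:, None])
df['5_days_back_sum'] = (mask * v).sum(axis=1)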
dfa = pd.DataFrame({'a':[1,2,3,4],'b':[4,5,7,6]})
Expected output
a b
0 1 4
1 2 5
I could achieve this the following way:
>>> dfa[(dfa.a == 1) | (dfa.a == 2)]
a b
0 1 4
1 2 5
But this is not really scalable, since I want to do something like this pseudocode:
dfa[dfa.a has-any range(5, 50)]
I think you need boolean indexing with isin, using np.arange or range:
print (np.arange(5,51))
[ 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29
30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50]
print (dfa[dfa.a.isin(np.arange(5,51))])
Or:
print (dfa[dfa.a.isin(range(5,51))])
Solution with between:
print (dfa[dfa['a'].between(5, 50)])
Sample (one value is changed to 8):
dfa = pd.DataFrame({'a':[1,2,3,8],'b':[4,5,7,6]})
print (dfa)
a b
0 1 4
1 2 5
2 3 7
3 8 6
print (dfa[dfa.a.isin(np.arange(5,51))])
a b
3 8 6
print (dfa[dfa.a.isin(range(5,51))])
a b
3 8 6
print (dfa[dfa['a'].between(5, 50)])
a b
3 8 6
This will also do:
import pandas as pd
dfa = pd.DataFrame({'a':[1,2,3,4],'b':[4,5,7,6]})
print(dfa['a'].between(5, 50).any())
# False
print(dfa['b'].between(5, 50).any())
# True
print(((5 <= dfa) & (dfa <= 50)).any())  # all columns together
# a    False
# b     True
# dtype: bool
I can use pandas replace to replace values in a dataframe using a dictionary:
prod_dict = {1:'Productive',2:'Moderate',3:'None'}
df['val'].replace(prod_dict,inplace=True)
What do I do if I want to replace a set of values in the dataframe with a single number? E.g. I want to map all values from 1 to 20 to 1, all values from 21 to 40 to 2, and all values from 41 to 100 to 3. How do I specify this in a dictionary and use it in pandas replace?
You can do that using apply to traverse and apply a function to every element, and a lambda to write the function that replaces each value with the number for its range.
I will go through a quick example here.
First, I will create a dataframe to showcase the algorithm:
df = pd.DataFrame(range(50), columns=list('B'))
This function generates a list of the values between i and j inclusive:
def genValues(i, j):
    return [x for x in range(j+1) if x >= i]
I will create a lambda function to map the values:
df['E']= df['B'].apply(lambda x: 1 if x in genValues(0,20) else 2 if x in genValues(21,40) else 3 if x in genValues(41,100) else x)
print(df)
The output:
B E
0 0 1
1 1 1
2 2 1
3 3 1
4 4 1
5 5 1
6 6 1
7 7 1
8 8 1
9 9 1
10 10 1
11 11 1
12 12 1
13 13 1
14 14 1
15 15 1
16 16 1
17 17 1
18 18 1
19 19 1
20 20 1
21 21 2
22 22 2
23 23 2
24 24 2
25 25 2
26 26 2
27 27 2
28 28 2
29 29 2
30 30 2
31 31 2
32 32 2
33 33 2
34 34 2
35 35 2
36 36 2
37 37 2
38 38 2
39 39 2
40 40 2
41 41 3
42 42 3
43 43 3
44 44 3
45 45 3
46 46 3
47 47 3
48 48 3
49 49 3
You can overwrite the original column instead of creating a new one:
df['B']= df['B'].apply(lambda x: 1 if x in genValues(0,20) else 2 if x in genValues(21,40) else 3 if x in genValues(41,100) else x)
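For completeness, a range-to-label mapping like this is also a natural fit for pd.cut, which avoids the per-element membership tests; a sketch (bins are right-inclusive by default, and the result is a categorical column):
import pandas as pd

df = pd.DataFrame(range(50), columns=list('B'))
# (-1, 20] -> 1, (20, 40] -> 2, (40, 100] -> 3
df['E'] = pd.cut(df['B'], bins=[-1, 20, 40, 100], labels=[1, 2, 3])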