Pandas replace issue - Python

I can use pandas replace to replace values in a dataframe using a dictionary:
prod_dict = {1:'Productive',2:'Moderate',3:'None'}
df['val'].replace(prod_dict,inplace=True)
What do I do if I want to replace a set of values in the dataframe with a single number? E.g. I want to map all values from 1 to 20 to 1, all values from 21 to 40 to 2, and all values from 41 to 100 to 3. How do I specify this in a dictionary and use it in pandas replace?

You can do that using apply to traverse and apply a function to every element, with a lambda that maps each value to its replacement.
I will go through a quick example here.
First, I will create a dataframe to showcase the approach:
import pandas as pd
df = pd.DataFrame(range(50), columns=list('B'))
This helper generates the list of values between i and j, inclusive:
def genValues(i, j):
    return [x for x in range(j + 1) if x >= i]
Then I use a lambda to map the values:
df['E'] = df['B'].apply(lambda x: 1 if x in genValues(0, 20) else 2 if x in genValues(21, 40) else 3 if x in genValues(41, 100) else x)
print(df)
The output:
B E
0 0 1
1 1 1
2 2 1
3 3 1
4 4 1
5 5 1
6 6 1
7 7 1
8 8 1
9 9 1
10 10 1
11 11 1
12 12 1
13 13 1
14 14 1
15 15 1
16 16 1
17 17 1
18 18 1
19 19 1
20 20 1
21 21 2
22 22 2
23 23 2
24 24 2
25 25 2
26 26 2
27 27 2
28 28 2
29 29 2
30 30 2
31 31 2
32 32 2
33 33 2
34 34 2
35 35 2
36 36 2
37 37 2
38 38 2
39 39 2
40 40 2
41 41 3
42 42 3
43 43 3
44 44 3
45 45 3
46 46 3
47 47 3
48 48 3
49 49 3
To overwrite the original column, assign the result back to it:
df['B']= df['B'].apply(lambda x: 1 if x in genValues(0,20) else 2 if x in genValues(21,40) else 3 if x in genValues(41,100) else x)
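As an alternative to the element-wise apply, you could also bin the ranges directly with pd.cut. A minimal sketch, assuming the same 0-20 / 21-40 / 41-100 boundaries as above:
df['E'] = pd.cut(df['B'], bins=[0, 20, 40, 100], labels=[1, 2, 3], include_lowest=True).astype(int)
The bins are right-inclusive, i.e. (0, 20], (20, 40], (40, 100], and include_lowest=True keeps 0 in the first bin; the .astype(int) is only safe here because every value falls inside a bin.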

Related

How to extract the value of column 1 when column 2 changes? (Python)

I have a pandas.DataFrame of the form shown below (it doesn't matter if you use numpy).
I want to output the value of 'moID' whenever the value of the 'time' column changes.
I'll show a simple example below.
I have marked the rows that should be output with '<<<'.
index 'moID' 'time'
0 1 0 <<<
1 25 0
2 3 1 <<<
3 45 1
4 12 1
5 2 2 <<<
6 34 1 <<<
7 4 1
8 12 1
9 2 3 <<<
10 5 3
11 37 3
12 85 0 <<<
13 2 0
14 45 1 <<<
15 55 1
16 2 3 <<<
17 23 3
18 42 0 <<<
19 1 0
20 42 1 <<<
21 2 2 <<<
22 41 2
23 3 1 <<<
24 52 1
25 2 1
26 24 3 <<<
27 3 3
28 5 3
The result is:
index 'moID'
1
3
2
34
2
85
45
2
42
42
2
3
24
help me please.
You can use shift + ne to see if consecutive rows match and create a boolean Series (where it's False if the time is the same but True if it's different). Then use it as a mask to filter the desired items:
out = df.loc[df['time'].ne(df['time'].shift()), 'moID']
Output:
0 1
2 3
5 2
6 34
9 2
12 85
14 45
16 2
18 42
20 42
21 2
23 3
26 24
Name: moID, dtype: int64
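If you only need the values without their original index, a small follow-up on the result above:
print(out.tolist())
# [1, 3, 2, 34, 2, 85, 45, 2, 42, 42, 2, 3, 24]
or use out.reset_index(drop=True) to keep a Series with a fresh 0-based index.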
You can use boolean indexing the following way:
result = df.moID[df.time.diff() != 0]
df.time.diff() != 0 generates a boolean Series, which is used to index the moID column. Note that the first element of diff() is NaN, and NaN != 0 evaluates to True, so the first row is always included.
The result, for your source data, is:
0 1
2 3
5 2
6 34
9 2
12 85
14 45
16 2
18 42
20 42
21 2
23 3
26 24
Name: moID, dtype: int64
The left column is the index and the right one contains the actual values.

Pandas dataframe problem. Create column where a row cell gets the value of another row cell

I have this pandas dataframe. It is sorted by the "h" column. What I want is to add two new columns where:
The items of each zone will have a max boundary and a min boundary (the same for every item in the zone). The max boundary is the minimum "h" value of the previous zone, and the min boundary is the maximum "h" value of the next zone.
name h w set row zone
ZZON5 40 36 A 0 0
DWOPN 38 44 A 1 0
5SWYZ 37 22 B 2 0
TFQEP 32 55 B 3 0
OQ33H 26 41 A 4 1
FTJVQ 24 25 B 5 1
F1RK2 20 15 B 6 1
266LT 18 19 A 7 1
HSJ3X 16 24 A 8 2
L754O 12 86 B 9 2
LWHDX 11 68 A 10 2
ZKB2F 9 47 A 11 2
5KJ5L 7 72 B 12 3
CZ7ET 6 23 B 13 3
SDZ1B 2 10 A 14 3
5KWRU 1 59 B 15 3
what i hope for:
name h w set row zone maxB minB
ZZON5 40 36 A 0 0 NaN 26
DWOPN 38 44 A 1 0 NaN 26
5SWYZ 37 22 B 2 0 NaN 26
TFQEP 32 55 B 3 0 NaN 26
OQ33H 26 41 A 4 1 32 16
FTJVQ 24 25 B 5 1 32 16
F1RK2 20 15 B 6 1 32 16
266LT 18 19 A 7 1 32 16
HSJ3X 16 24 A 8 2 18 7
L754O 12 86 B 9 2 18 7
LWHDX 11 68 A 10 2 18 7
ZKB2F 9 47 A 11 2 18 7
5KJ5L 7 72 B 12 3 9 NaN
CZ7ET 6 23 B 13 3 9 NaN
SDZ1B 2 10 A 14 3 9 NaN
5KWRU 1 59 B 15 3 9 NaN
Any ideas?
First, group by zone and find the minimum and maximum "h" of each zone:
min_max_zone = df.groupby('zone').agg(min=('h', 'min'), max=('h', 'max'))
Now you can use apply:
import numpy as np  # for np.nan where a zone has no previous/next zone

df['maxB'] = df['zone'].apply(lambda x: min_max_zone.loc[x - 1, 'min']
                              if x - 1 in min_max_zone.index else np.nan)
df['minB'] = df['zone'].apply(lambda x: min_max_zone.loc[x + 1, 'max']
                              if x + 1 in min_max_zone.index else np.nan)
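A variant without apply, as a sketch that assumes the zone numbers are consecutive integers and that min_max_zone is sorted by zone (which groupby does by default): shift the per-zone aggregates by one position and map them back onto the zone column.
df['maxB'] = df['zone'].map(min_max_zone['min'].shift(1))
df['minB'] = df['zone'].map(min_max_zone['max'].shift(-1))
Series.map aligns on the index of min_max_zone, so the first zone gets NaN for maxB and the last zone gets NaN for minB, just like the apply version.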

Incremental assignment in pandas dataframe to determine month from week number without date element

I have week numbers in the dataframe from 1 to 52, e.g. [1,2,3,4,5,6,7,8,..52]
I'm trying to create a new column for the month, but it would mean an incremental assignment like [1,2,3,4] = 1, [5,6,7,8] = 2, .. [49,50,51,52] = 12
I tried selecting the records at multiples of 4 using df[df["week"]%4==0] and then using ffill, but it seems they can only all be assigned the same number, which is not what I want. Instead I want to assign [1..12] accordingly. Is there another way to do this?
Subtract 1 first and then use integer division by 4:
df = pd.DataFrame({'week':range(1,53)})
df['new'] = (df["week"] - 1)//4
print (df.head(10))
week new
0 1 0
1 2 0
2 3 0
3 4 0
4 5 1
5 6 1
6 7 1
7 8 1
8 9 2
9 10 2
print (df.tail(10))
week new
42 43 10
43 44 10
44 45 11
45 46 11
46 47 11
47 48 11
48 49 12
49 50 12
50 51 12
51 52 12
If you want to start at 1, that is possible, but then the last value is 13:
df['new'] = ((df["week"] - 1)//4) + 1
print (df.head(10))
week new
0 1 1
1 2 1
2 3 1
3 4 1
4 5 2
5 6 2
6 7 2
7 8 2
8 9 3
9 10 3
print (df.tail(10))
week new
42 43 11
43 44 11
44 45 12
45 46 12
46 47 12
47 48 12
48 49 13
49 50 13
50 51 13
51 52 13
If you want values between 1 and 12 (but then some groups have more than 4 values), use this solution by @Aryerez, thank you:
df['new'] = ((df["week"] - 1) // (52 / 12)).astype(int) + 1
print (df.head(10))
week new
0 1 1
1 2 1
2 3 1
3 4 1
4 5 1
5 6 2
6 7 2
7 8 2
8 9 2
9 10 3
print (df.tail(10))
week new
42 43 10
43 44 10
44 45 11
45 46 11
46 47 11
47 48 11
48 49 12
49 50 12
50 51 12
51 52 12
EDIT: If you want the extra (5th) value to land in every 3rd group instead, use:
df['new'] = ((df["week"] + 4) // (52 / 12)).astype(int)
print (df.head(15))
week new
0 1 1
1 2 1
2 3 1
3 4 1
4 5 2
5 6 2
6 7 2
7 8 2
8 9 3
9 10 3
10 11 3
11 12 3
12 13 3
13 14 4
14 15 4
print (df.tail(15))
week new
37 38 9
38 39 9
39 40 10
40 41 10
41 42 10
42 43 10
43 44 11
44 45 11
45 46 11
46 47 11
47 48 12
48 49 12
49 50 12
50 51 12
51 52 12
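To check how the 52 weeks are distributed over the buckets produced by any of these formulas, counting the group sizes is a quick sanity check, e.g.:
print (df.groupby('new').size())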

Summing values across given range of days difference backwards - Pandas

I have created a days-difference column in a pandas dataframe, and I'm looking to add a column that holds the sum of a specific value over a given window of days, looking backwards.
Note that I can supply a date column for each row if needed, but diff was created as the difference in days from the first day of the data.
Example
df = pd.DataFrame.from_dict({'diff': [0,0,1,2,2,2,2,10,11,15,18],
'value': [10,11,15,2,5,7,8,9,23,14,15]})
df
Out[12]:
diff value
0 0 10
1 0 11
2 1 15
3 2 2
4 2 5
5 2 7
6 2 8
7 10 9
8 11 23
9 15 14
10 18 15
I want to add a 5_days_back_sum column that sums value over the past 5 days, including the same day, so the result would look like this:
Out[15]:
5_days_back_sum diff value
0 21 0 10
1 21 0 11
2 36 1 15
3 58 2 2
4 58 2 5
5 58 2 7
6 58 2 8
7 9 10 9
8 32 11 23
9 46 15 14
10 29 18 15
How can I achieve that? Originally I used a date column to create the diff column; if that helps, it is available.
Use a custom function with boolean indexing to filter the range and sum it:
def f(x):
    return df.loc[(df['diff'] >= x - 5) & (df['diff'] <= x), 'value'].sum()
df['5_days_back_sum'] = df['diff'].apply(f)
print (df)
diff value 5_days_back_sum
0 0 10 21
1 0 11 21
2 1 15 36
3 2 2 58
4 2 5 58
5 2 7 58
6 2 8 58
7 10 9 9
8 11 23 32
9 15 14 46
10 18 15 29
Similar solution with between:
def f(x):
    return df.loc[df['diff'].between(x - 5, x), 'value'].sum()
df['5_days_back_sum'] = df['diff'].apply(f)
print (df)
diff value 5_days_back_sum
0 0 10 21
1 0 11 21
2 1 15 36
3 2 2 58
4 2 5 58
5 2 7 58
6 2 8 58
7 10 9 9
8 11 23 32
9 15 14 46
10 18 15 29
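If the frame is small enough, the same windowed sums can also be computed without apply by comparing every pair of diff values at once with NumPy broadcasting. A sketch, under the assumption that an n x n boolean mask fits in memory:
import numpy as np
d = df['diff'].to_numpy()
v = df['value'].to_numpy()
# mask[i, j] is True when row j lies within 5 days before (or on) the day of row i
mask = (d[None, :] >= d[:, None] - 5) & (d[None, :] <= d[:, None])
df['5_days_back_sum'] = (mask * v).sum(axis=1)
Row i of mask marks every row whose diff lies in [diff_i - 5, diff_i], so the row-wise sum reproduces the same 21, 21, 36, 58, ... column as above.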

Grouping by id and a condition

I have a dataframe df
df = DataFrame({'id': ['a','a','a','a','a','a','a','b','b','b','b','b','b','b','b','b','b'],
                'min': [10,17,21,30,50,57,58,15,17,19,19,19,19,19,25,26,26],
                'day': [15,15,15,15,15,17,17,41,41,41,41,41,41,41,57,57,57]})
that looks like
id min day
0 a 10 15
1 a 17 15
2 a 21 15
3 a 30 15
4 a 50 15
5 a 57 17
6 a 58 17
7 b 15 41
8 b 17 41
9 b 19 41
10 b 19 41
11 b 19 41
12 b 19 41
13 b 19 41
14 b 25 57
15 b 26 57
16 b 26 57
I want a new column that categorizes the data based on the id and the relationship between consecutive rows: if the difference in min between consecutive rows is less than 8 and the day value is the same, I want to assign them to the same group. My output would look like this:
id min day category
0 a 10 15 1
1 a 17 15 1
2 a 21 15 1
3 a 30 15 2
4 a 50 15 3
5 a 57 17 4
6 a 58 17 4
7 b 15 41 5
8 b 17 41 5
9 b 19 41 5
10 b 19 41 5
11 b 19 41 5
12 b 19 41 5
13 b 19 41 5
14 b 25 57 6
15 b 26 57 6
16 b 26 57 6
Hope this helps. Let me know your views.
All the best.
import pandas as pd

df = pd.DataFrame({'id': ['a','a','a','a','a','a','a','b','b','b','b','b','b','b','b','b','b'],
                   'min': [10,17,21,30,50,57,58,15,17,19,19,19,19,19,25,26,26],
                   'day': [15,15,15,15,15,17,17,41,41,41,41,41,41,41,57,57,57]})

# initialize the category counter to 1
cat = 1
# the first row always belongs to category 1
new_series = [cat]
# start the loop at 1, not 0, because each row is compared with the previous one
for i in range(1, len(df)):
    if df.iloc[i]['day'] == df.iloc[i - 1]['day']:
        # same day: start a new category only if min jumps by more than 8
        if df.iloc[i]['min'] - df.iloc[i - 1]['min'] > 8:
            cat += 1
    else:
        # the day changed, so start a new category
        cat += 1
    new_series.append(cat)
df['category'] = new_series
print(df)
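For reference, the same categories can usually be produced without an explicit loop by marking the rows that start a new group and taking a cumulative sum. A sketch that mirrors the loop's > 8 threshold and assumes the frame is already sorted as shown:
new_group = (df['day'] != df['day'].shift()) | (df['min'].diff() > 8)
df['category'] = new_group.cumsum()
The first row is compared against NaN, so it always starts group 1, and every later row starts a new group exactly when the day changes or min jumps by more than 8.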
