I have a dataframe df that looks like this:
id Category Time
1 176 12 00:00:00
2 4956 2 00:00:00
3 583 4 00:00:04
4 9395 2 00:00:24
5 176 12 00:03:23
which is basically a set of ids and the category of item they used at a particular Time. I use df.groupby('id') and then, for each row, I want to see whether the id switched to a different category (True) or used the same one again (False), or NaN if that was the first item for that particular id. I also filtered the data to remove all the ids with only one Time.
For example, one of the groups may look like:
id Category Time
1 176 12 00:00:00
2 176 12 00:03:23
3 176 2 00:04:34
4 176 2 00:04:54
5 176 2 00:05:23
and I want to perform an operation to get
id Category Time Transition
1 176 12 00:00:00 NaN
2 176 12 00:03:23 False
3 176 2 00:04:34 True
4 176 2 00:04:54 False
5 176 2 00:05:23 False
I thought about applying some sort of function to the Category column after the groupby, but I am having trouble figuring out the right one.
You don't need a groupby here; you just need a sort and a shift.
import numpy as np

df.sort_values(['id', 'Time'], inplace=True)             # df.sort() was removed from pandas; use sort_values
df['Transition'] = df.Category != df.Category.shift(1)   # True where the category differs from the previous row
df.loc[df.id != df.id.shift(1), 'Transition'] = np.nan   # first row of each id has nothing to compare against
I haven't tested this, but it should do the trick.
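As a quick check, here is a minimal sketch built only from the example group in the question:

import numpy as np
import pandas as pd

# the single-id example group from the question
df = pd.DataFrame({
    'id':       [176, 176, 176, 176, 176],
    'Category': [12, 12, 2, 2, 2],
    'Time':     ['00:00:00', '00:03:23', '00:04:34', '00:04:54', '00:05:23'],
})

df.sort_values(['id', 'Time'], inplace=True)
df['Transition'] = df.Category != df.Category.shift(1)
# mark the first row of each id, which has no previous row to compare against
df.loc[df.id != df.id.shift(1), 'Transition'] = np.nan
print(df)
# Transition comes out as NaN, False, True, False, False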
I have a dataset with a few columns. I would like to slice the data frame by finding the string "M22" in the column "RUN_NUMBER". I am able to do so; however, I would like to count the number of unique rows that contained the string "M22".
Here is what I have done for the below table (example):
RUN_NUMBER DATE_TIME CULTURE_DAY AGE_HRS AGE_DAYS
335991M 6/30/2022 0 0 0
M220621 7/1/2022 1 24 1
M220678 7/2/2022 2 48 2
510091M 7/3/2022 3 72 3
M220500 7/4/2022 4 96 4
335991M 7/5/2022 5 120 5
M220621 7/6/2022 6 144 6
M220678 7/7/2022 7 168 7
335991M 7/8/2022 8 192 8
M220621 7/9/2022 9 216 9
M220678 7/10/2022 10 240 10
Here are the results I got:
RUN_NUMBER
335991M 0
510091M 0
335992M 0
M220621 3
M220678 3
M220500 1
Now I need to count the unique strings/rows that contained "M22", so I need to get 3 as output.
Use the following approach with the pd.Series.unique function:
df[df['RUN_NUMBER'].str.contains("M22")]['RUN_NUMBER'].unique().size
Or a faster alternative using the numpy.char.find function:
import numpy as np

(np.char.find(df['RUN_NUMBER'].unique().astype(str), 'M22') != -1).sum()
3
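For completeness, a minimal reproducible sketch; the column values are taken from the example table above:

import pandas as pd

df = pd.DataFrame({'RUN_NUMBER': ['335991M', 'M220621', 'M220678', '510091M',
                                  'M220500', '335991M', 'M220621', 'M220678',
                                  '335991M', 'M220621', 'M220678']})

# keep only rows containing "M22", then count the distinct values
n_unique = df[df['RUN_NUMBER'].str.contains("M22")]['RUN_NUMBER'].unique().size
print(n_unique)  # 3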
I have a table that is chronologically sorted, with a state and an amount for each date. The table looks as follows:
Date        State  Amount
01/01/2022  1      1233.11
02/01/2022  1      16.11
03/01/2022  2      144.58
04/01/2022  1      298.22
05/01/2022  2      152.34
06/01/2022  2      552.01
07/01/2022  3      897.25
To generate the dataset:
pd.DataFrame({'date': ["01/08/2022", "02/08/2022", "03/08/2022", "04/08/2022", "05/08/2022",
                       "06/08/2022", "07/08/2022", "08/08/2022", "09/08/2022", "10/08/2022", "11/08/2022"],
              'state': [1, 1, 2, 2, 3, 1, 1, 2, 2, 2, 1],
              'amount': [144, 142, 166, 144, 142, 166, 144, 142, 166, 142, 166]})
I want to add a column called "Rank" that is incremented each time a state reappears after a different state. If State 1 occurs twenty times in a row, Rank stays 1 for all of those rows. If another state then appears and State 1 later shows up again, Rank for State 1 increments to 2. In other words, Rank counts how many separate runs of a given state have occurred so far. An example would be as follows:
Date        State  Amount   Rank
01/01/2022  1      1233.11  1
02/01/2022  1      16.11    1
03/01/2022  2      144.58   1
04/01/2022  1      298.22   2
05/01/2022  2      152.34   2
06/01/2022  2      552.01   2
07/01/2022  3      897.25   1
This could also be understood as follows:
Date        State  Amount   Rank_State1  Rank_State2  Rank_State3
01/01/2022  1      1233.11  1
02/01/2022  1      16.11    1
03/01/2022  2      144.58                1
04/01/2022  1      298.22   2
05/01/2022  2      152.34                2
06/01/2022  2      552.01                2
07/01/2022  3      897.25                             1
Does anyone know how to build that Rank column starting from the previous table?
Your problem is in the general category of state change accumulation, which suggests an approach using cumulative sums and booleans.
Here's one way you can do it. It's maybe not the most elegant, but I think it does what you need:
import pandas as pd

someDF = pd.DataFrame({'date': ["01/08/2022", "02/08/2022", "03/08/2022", "04/08/2022", "05/08/2022",
                                "06/08/2022", "07/08/2022", "08/08/2022", "09/08/2022", "10/08/2022", "11/08/2022"],
                       'state': [1, 1, 2, 2, 3, 1, 1, 2, 2, 2, 1],
                       'amount': [144, 142, 166, 144, 142, 166, 144, 142, 166, 142, 166]})

# running string of every state seen so far, e.g. "1", "11", "112", ...
someDF["StateAccumulator"] = someDF["state"].apply(str).cumsum()

def groupOccurrence(someRow):
    # count how many separate runs of this row's state appear in the accumulated string
    sa = someRow["StateAccumulator"]
    s = str(someRow["state"])
    stateRank = len("".join([i if i != '' else " " for i in sa.split(s)]).split()) \
        + int((sa.split(s)[0] == '') or (int(sa.split(s)[-1] == '')) and sa[-1] != s)
    return stateRank

someDF["Rank"] = someDF.apply(groupOccurrence, axis=1)
If I understand correctly, this is the result you want; "Rank" is intended to represent the number of separate runs of a given state that have appeared so far:
date state amount StateAccumulator Rank
0 01/08/2022 1 144 1 1
1 02/08/2022 1 142 11 1
2 03/08/2022 2 166 112 1
3 04/08/2022 2 144 1122 1
4 05/08/2022 3 142 11223 1
5 06/08/2022 1 166 112231 2
6 07/08/2022 1 144 1122311 2
7 08/08/2022 2 142 11223112 2
8 09/08/2022 2 166 112231122 2
9 10/08/2022 2 142 1122311222 2
10 11/08/2022 1 166 11223112221 3
Notes:
Instead of the somewhat hacky string-cumsum method I'm using here, you could probably use a list accumulation function and then a pandas split-apply-combine to do the counting.
You would then compute a state-change boolean and take a cumsum of it, grouped on the state value (i.e. how many state changes have occurred for any given state).
The state-change boolean is computed like this:
someDF["StateChange"] = someDF["state"] != someDF["state"].shift()
So for a given state at a given row, you'd count how many state changes had occurred in the previous rows; a sketch of that cleaner approach follows below.
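Here is a minimal sketch of that vectorized alternative, using the same someDF as above (untested beyond this toy data):

# True whenever the state differs from the previous row, i.e. a new run starts
someDF["StateChange"] = someDF["state"] != someDF["state"].shift()

# within each state, count how many runs of that state have started so far
someDF["Rank"] = someDF.groupby("state")["StateChange"].cumsum()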
I have a dataframe like this
         Time   Buy  Sell     Bin
0    09:15:01  3200     0  3573.0
1    09:15:01     0  4550  3562.0
2    09:15:01  4250     0  3565.0
3    09:15:01     0  5150  3562.0
4    09:15:01  1200     0  3563.0
..        ...   ...   ...     ...
292  09:15:01   375     0  3564.0
293  09:15:01   175     0  3564.0
294  09:15:01     0    25  3564.0
295  09:15:01   400     0  3564.0
(Disregard 'Time'; it currently holds a static value.)
What would be the most efficient way to sum up all the Buys and Sells within each bin and remove duplicates?
Currently I'm using:
Step 1.
final_df1['Buy'] = final_df1.groupby(final_df1['Bin'])['Buy'].transform('sum')
Step 2.
final_df1['Sell'] = final_df1.groupby(final_df1['Bin'])['Sell'].transform('sum')
Step 3.
## remove duplicates
final_df1 = final_df1.groupby('Bin', as_index=False).max()
Using agg, sum, or cumsum just removed all the other columns from the resulting df.
Ideally there should be distinct bins with the sum of Buy and/or Sell.
The output must be
         Time   Buy  Sell     Bin
0    09:15:01  3200     0  3573.0
1    09:15:01   450  4550  3562.0
2    09:15:01  4250  3625  3565.0
292  09:15:01   950    25  3564.0
This can also be achieved by using pivot_table from pandas.
Here is a simple recreated example based on your code:
import numpy as np
import pandas as pd
df = pd.DataFrame({'buy': [1, 2, 3, 0, 3, 2, 1],
                   'sell': [2, 3, 4, 0, 5, 4, 3],
                   'bin': [1, 2, 1, 2, 1, 2, 1],
                   'time': [1, 1, 1, 1, 1, 1, 1]})
df_output = df.pivot_table(columns='bin', values=['buy', 'sell'], aggfunc=np.sum)
Output will look like this:
bin 1 2
buy 8 4
sell 14 7
In case you want the output you mentioned:
we can take the transpose of the above dataframe output:
df_output.T
Or use a groupby on the input dataframe as below:
df.groupby(['bin'])[['sell', 'buy']].sum()
The output is as below:
sell buy
bin
1 14 8
2 7 4
If you also need time in the dataframe, we can do it by specifying a separate aggregate function for each column:
df.groupby("bin").agg({ "sell":"sum", "buy":"sum", "time":"first"})
The output is as below:
sell buy time
bin
1 14 8 1
2 7 4 1
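Applied back to the dataframe from the question (a sketch, assuming the column names Bin, Buy, Sell and Time of final_df1), the same idea collapses the three steps into one and keeps Bin as a regular column:

# as_index=False keeps 'Bin' as a column so the result matches the desired layout
out = final_df1.groupby("Bin", as_index=False).agg({"Buy": "sum", "Sell": "sum", "Time": "first"})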
I have a data frame which looks like this:
student_id  session_id  reading_level_id  st_week  end_week
1           3334        3                 3        3
1           3335        2                 4        4
2           3335        2                 2        2
2           3336        2                 2        3
2           3337        2                 3        3
2           3339        2                 3        4
...
There are multiple session_ids, st_weeks and end_weeks for every student_id. I'm trying to group the data by 'student_id', and I want to calculate the difference between the maximum (end_week) and the minimum (st_week) for each student.
Aiming for an output that would look something like this:
Student_id  Diff
1           1
2           2
...
I am relatively new to Python as well as Stack Overflow and have been trying to find an appropriate solution - any help is appreciated.
Using the data you shared, a simpler solution is possible:
Group by student_id, and pass False argument to the as_index parameter (this works for a dataframe, and returns a dataframe);
Next, use a named aggregation to get the max of end_week and the min of st_week for each group;
Get the difference between max_wk and min_wk;
Finally, keep only the required columns
(
df.groupby("student_id", as_index=False)
.agg(max_wk=("end_week", "max"), min_wk=("st_week", "min"))
.assign(Diff=lambda x: x["max_wk"] - x["min_wk"])
.loc[:, ["student_id", "Diff"]]
)
student_id Diff
0 1 1
1 2 2
There's probably a more efficient way to do this, but I broke this into separate steps for the grouping to get max and min values for each id, and then created a new column representing the difference. I used numpy's randint() function in this example because I didn't have access to a sample dataframe.
import pandas as pd
import numpy as np
# generate dataframe
df = pd.DataFrame(np.random.randint(0,100,size=(1200, 4)), columns=['student_id', 'session_id', 'st_week', 'end_week'])
# use groupby to get max and min for each student_id
max_vals = df.groupby(['student_id'], sort=False)['end_week'].max().to_frame()
min_vals = df.groupby(['student_id'], sort=False)['st_week'].min().to_frame()
# use join to put max and min back together in one dataframe
merged = min_vals.join(max_vals)
# use assign() to calculate difference as new column
merged = merged.assign(difference=lambda x: x.end_week - x.st_week).reset_index()
merged
student_id st_week end_week difference
0 40 2 99 97
1 23 5 74 69
2 78 9 93 84
3 11 1 97 96
4 97 24 88 64
... ... ... ... ...
95 54 0 96 96
96 18 0 99 99
97 8 18 97 79
98 75 21 97 76
99 33 14 93 79
You can create a custom function and apply it to a group-by over students:
def week_diff(g):
    return g.end_week.max() - g.st_week.min()

df.groupby("student_id").apply(week_diff)
Result:
student_id
1 1
2 2
dtype: int64
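For reference, the same result can also be had without a custom function (a minimal sketch using the columns above):

# group once, then subtract the per-student minimum start week from the maximum end week
g = df.groupby("student_id")
diff = (g["end_week"].max() - g["st_week"].min()).rename("Diff").reset_index()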
So I want to cluster the records in this table to find which records are 'similar' (i.e. have enough in common). An example of the table is as follows:
author beginpage endpage volume publication year id_old id_new
0 NaN 495 497 NaN 1975 1 1
1 NaN 306 317 14 1997 2 2
2 lowry 265 275 193 1951 3 3
3 smith p k 76 85 150 1985 4 4
4 NaN 248 254 NaN 1976 5 5
5 hamill p 85 100 391 1981 6 6
6 NaN 1513 1523 7 1979 7 7
7 b oregan 737 740 353 1991 8 8
8 NaN 503 517 98 1975 9 9
9 de wijs 503 517 98 1975 10 10
In this small table, the last row should get 'id_new' equal to 9, to show that those two records are similar.
To make this happen I wrote the code below, which works fine for a small number of records. However, I want to use my code on a table with 15000 records, and of course, if you do the maths, with this code that is going to take way too long.
Anyone who could help me make this code more efficient? Thanks in advance!
My code, where 'dfhead' is the table with the records:
for r in range(0, len(dfhead)):
    for o_r in range(r + 1, len(dfhead)):
        # c is presumably the list of columns being compared (not shown in the question)
        if (dfhead.loc[r, c] == dfhead.loc[o_r, c]).sum() >= 3:
            if (dfhead.loc[o_r, ['id_new']] > dfhead.loc[r, ['id_new']]).sum() == 1:
                dfhead.loc[o_r, ['id_new']] = dfhead.loc[r, ['id_new']]
If you are only trying to detect whole equalities between "beginpage", "endpage", "volume", "publication" and "year", you should try to work on duplicates. I'm not sure about this as your code is still a mystery to me.
Something like this might work (your column "id" needs to be named "id_old" at first in the dataframe though):
import pandas as pd

cols = ["beginpage", "endpage", "volume", "publication", "year"]

# isolate duplicated rows
duplicated = df[df.duplicated(cols, keep=False)]

# find the minimum key to keep for each group of duplicates
temp = duplicated.groupby(cols, as_index=False)['id_old'].min()
temp.rename({'id_old': 'id_new'}, inplace=True, axis=1)

# import the "minimum key" into duplicated by merging the dataframes
duplicated = duplicated.merge(temp, on=cols, how="left")

# gather the "un-duplicated" rows
unduplicated = df[~df.duplicated(cols, keep=False)]

# concatenate both datasets and reset the index
# (DataFrame.append was removed in pandas 2.0, so use pd.concat)
new_df = pd.concat([unduplicated, duplicated])
new_df.reset_index(drop=True, inplace=True)

# where "id_new" is empty, the row comes from "unduplicated",
# so fill it from id_old
ix = new_df[new_df.id_new.isnull()].index
new_df.loc[ix, 'id_new'] = new_df.loc[ix, 'id_old']
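For what it's worth, a more compact variant of the same exact-duplicate idea (a sketch, not tested on the full 15000-row table, and like the answer above it only catches rows that match on all five columns rather than the looser "3 or more columns" rule):

import pandas as pd

cols = ["beginpage", "endpage", "volume", "publication", "year"]

# for every group of rows sharing all five columns, assign the smallest id_old
# in the group as the shared id_new; rows with no exact match keep their own id.
# dropna=False (pandas >= 1.1) keeps rows with NaN in the key columns in play,
# matching how duplicated() treats NaN as equal.
df["id_new"] = df.groupby(cols, dropna=False)["id_old"].transform("min")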