I am trying to make a list of the number of elements in each group in a pandas Series. My dataframe has a column called ID, and all values occur multiple times. I want to make a list containing the frequency of each element, in the order in which the elements first occur.
So an example of the column ID is [1,2,3,3,3,2,1,5,2,3,1,2,4,3]
This should produce [3,4,5,1,1], since the group-ID 1 occurs 3 times, the group-ID 2 occurs 4 times, etc. I have written code that does this correctly:
group_list = df.ID.unique().tolist()
group_size = []
for i in group_list:
    group_size.append(df.ID.value_counts()[i])
The problem is that it takes way too long to finish. I have 5 million rows; I let it run for 50 minutes and it still didn't finish. I tried running it on just the first 30-50 rows and it works as intended.
To me it would seem logical to simply use value_counts(sort=False), but it doesn't give me the group-ID frequencies in the order they occur in my Series. I also tried using extend because I read it should be faster, but I get a "numpy.int64 object is not iterable" error.
Given a Series
ser = pd.Series([1,2,3,3,3,2,1,5,2,3,1,2,4,3])
You can do the following:
ser.value_counts().reindex(ser.unique()).tolist()
Out: [3, 4, 5, 1, 1]
Reindex reorders the value_counts result to match ser.unique(), i.e. the order in which the values first appear.
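If the reindex step feels indirect, an equivalent approach (just a sketch, using the same ser as above) is to group the Series by itself with sort=False, which keeps the groups in order of first appearance:

import pandas as pd

ser = pd.Series([1, 2, 3, 3, 3, 2, 1, 5, 2, 3, 1, 2, 4, 3])

# sort=False keeps groups in the order their values first appear,
# so .size() yields the frequencies in exactly that order
ser.groupby(ser, sort=False).size().tolist()
# [3, 4, 5, 1, 1]

Either way, this avoids calling value_counts() once per unique value, which is the main reason the original loop is slow on 5 million rows.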
Related
I'm working with a large dataset that includes all police stops in my city since 2014. The dataset has millions of rows, but sometimes there are multiple rows for a single stop (so, if the police stopped a group of 4 people, it is included in the database as 4 separate rows even though it's all the same stop). I'm looking to create a new column in the dataset orderInStop, which is a count of how many people were stopped in sequential order. The first person caught up in the stop would have a value of 1, the second person a value of 2, and so on.
To do so, I have used the groupby() function to group all rows that match on time & location, which is the indication that the rows are all part of the same stop. I can manage to create a new column that includes the TOTAL count of the number of people in the stop (so, if there were 4 rows with the same time & location, all four rows have a value of 4 for the new orderInStop variable). But I need the first row in the group to have a value of 1, the second a value of 2, the third 3, and the fourth 4.
Below is my code attempt at iterating through each group I've created to sequentially count each row within each group, but the code doesn't quite work (it populates the entire column rather than each row within the groups). Any help to tweak this code would be much appreciated!
Note: I also tried using logical operators in a for loop, to essentially ask IF the time & location column values match for the current and previous rows, but I ran into too many problems with 'The truth value of a Series is ambiguous' errors, so instead I'm trying to use groupby().
Attempt that creates a total count rather than sequential count:
df['order2'] = df.groupby(by=["Date_Time_Occur", "Location"])['orderInStop'].transform('count')
Attempt that fails, to iterate through each row in each group:
df['order3'] = 1
grp = df.groupby(by=["Date_Time_Occur", "Location"])
for name, groups in grp:
    count = 1
    for row in groups:
        df['order3'] = count
        count = count + 1
In your example for row in groups iterates over the column names, since groups is a DataFrame.
To iterate over each row you could do
df['order3'] = 1
grp = df.groupby(by=["Date_Time_Occur", "Location"])
for name, groups in grp:
    count = 1
    for i, row in groups.iterrows():  # i will be the row's index label, row a pandas Series
        df.loc[i, 'order3'] = count  # assign only this row, not the whole column
        count = count + 1
Note that your solution relies on pandas groupby to preserve row order. This should be the case, see this question, but there is very likely a shorter & safer solution (see fsimonjetz comment for a starting point).
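For reference, the shorter approach hinted at above can be done with groupby().cumcount(), which numbers the rows within each group starting at 0; adding 1 gives the sequential order directly (a sketch, assuming the same column names as in the question):

df['orderInStop'] = df.groupby(["Date_Time_Occur", "Location"]).cumcount() + 1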
I am looking to take my 1-column, 500-row data and either split it into sets of 5 (so 100 sets of 5), or compare a series of 5 numbers to the data 5 numbers at a time (chunks of 5).
I want to see how many times, if any, a sequence shows up in the data, or fragments of the sequence and how often.
The numbers are from 1-69 in sets of 5, but my data is all one long column with no breaks.
I eventually want to make a list of the sets that contain 2 or more numbers with matches, but one step at a time, right.
Oh or would a 100x5 matrix work? I don't know how to do that either.
Thank you for your time.
You can reshape a numpy array and create a dataframe from it. Below are 100x5 and 5x100 examples:
import random
import pandas as pd

df = pd.DataFrame({'data': [random.randrange(1, 70) for _ in range(500)]})  # 500 numbers from 1-69
pd.DataFrame(df['data'].to_numpy().reshape((100, 5)))  # 100x5
pd.DataFrame(df['data'].to_numpy().reshape((5, 100)))  # 5x100
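As a follow-up to the matching goal in the question, one way to count how many numbers of a given 5-number sequence appear in each row of the 100x5 frame (a sketch; the target sequence here is made up):

# hypothetical target sequence to compare against each row of 5
target = [7, 19, 23, 42, 69]

mat = pd.DataFrame(df['data'].to_numpy().reshape((100, 5)))
matches_per_row = mat.isin(target).sum(axis=1)   # how many of the 5 targets each row contains
rows_with_2_or_more = mat[matches_per_row >= 2]  # the sets with 2 or more matching numbers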
I have a certain business requirement for which I am having trouble implementing a faster solution (the current solution takes 3 hrs per iteration).
Eg: Say I have a df
and there's a list :
l = [[a,b,c],[d,e,f]]
To do:
Compare all the list values across customer and check if they exist or not
If they exist then find the corresponding min and max date1
Currently the pseudo working code I have is :
for each customer:
    group by customer and collect the code column into a list
    for each list value:
        check if that particular list value exists (in this case, check if [a,b,c] exists in the first loop)
        if it exists:
            find the min date by group, etc.
These nested for loops take too long to execute since I have 100k+ customers.
Is there any way to improve this further? I already eliminated one for loop, reducing the time from 10 hrs to 3.
l = [['a','b','c'],['d','e','f']]
Firstly flatten your list:
from pandas.core.common import flatten
l=list(flatten(l))
Then use boolean masking to keep only the rows whose code is in the list:
newdf=df[df['code'].isin(l)]
Finally do groupby():
# The code below groups by 'code':
newdf = newdf.groupby('code').agg(max_date1=('date1', 'max'), min_date1=('date1', 'min'))
# If you want to group by customerid and code instead, use:
newdf = newdf.groupby(['customerid', 'code']).agg(max_date1=('date1', 'max'), min_date1=('date1', 'min'))
Now if you print newdf you will get your desired output.
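Putting the steps together, a minimal end-to-end sketch (the sample data here is made up, since the original df isn't shown):

import pandas as pd
from pandas.core.common import flatten

# hypothetical sample data standing in for the real df
df = pd.DataFrame({
    'customerid': [1, 1, 1, 2, 2, 3],
    'code': ['a', 'b', 'c', 'a', 'd', 'e'],
    'date1': pd.to_datetime(['2021-01-01', '2021-02-01', '2021-03-01',
                             '2021-01-15', '2021-04-01', '2021-05-01']),
})

l = [['a', 'b', 'c'], ['d', 'e', 'f']]
l = list(flatten(l))

newdf = df[df['code'].isin(l)]
newdf = newdf.groupby(['customerid', 'code']).agg(max_date1=('date1', 'max'),
                                                  min_date1=('date1', 'min'))
print(newdf)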
I slightly modified my approach.
Instead of looping through each customer (I have 100k+ customers), I looped through each list: I checked whether the customers were present or not and then looped through the filtered customers.
This reduced the time by a couple of hours.
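A rough sketch of what that per-list approach might look like (my interpretation, reusing the hypothetical customerid / code / date1 columns from the answer above):

results = []
for sub in [['a', 'b', 'c'], ['d', 'e', 'f']]:
    # keep only rows whose code belongs to the current sub-list
    subset = df[df['code'].isin(sub)]
    # customers that have every code in the sub-list
    has_all = subset.groupby('customerid')['code'].nunique() == len(sub)
    matched = subset[subset['customerid'].isin(has_all[has_all].index)]
    # min / max date1 per matched customer for this sub-list
    results.append(matched.groupby('customerid')['date1'].agg(['min', 'max']))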
Thanks again for your help
I have a dataframe (df_data) with 14 columns for info over 1 month. I pulled out one week's data (df1) and made a list of all the account numbers there (accounts1)
What I am trying to do is take that accounts1 list and have it go through each value in the list, checking if it is counted more than once in df_data and if so, to save that account number to a new list for repeats only.
Then I want to take that repeats list and pull the 14 columns out of the original df_data so I can have all the rows of all 14 columns for each occurrence of that account number.
I'm getting stuck on the list of repeated account numbers. I used the following code, which seems to have worked to create a list with results:
cnt = collections.Counter(accounts1)
repeats.append([k for k, v in cnt.items() if v > 1])
print((repeats).count)
but the number of elements in that list is just under 3,000. When I used .unique() and checked the difference, it should be a little over 5,000. What am I doing wrong? And how can I then use those elements to pull the columns from the original dataframe?
Basically, say I had
accounts1 = df1['accntnum']  # e.g. [0, 1, 2, 5, 8, 2, 5, 0, 0, 7]
I would want it to cycle through and pull out each repeat from df_data and return a list of them like
repeats = [0, 2, 5, 7]
(There are numbers in the monthly df_data that are in weekly df1 but may not be repeated there yet)
Then I'd like to use that list to pull from df_data['accntnum'], thinking something like
df_repeats = df_data[df_data['accntnum'].isin(repeats)]
Oh also, I'm really only interested in the first occurrence of a repeat. There is a date and time column that can help sort those out in the end though. Thank you in advance!
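For what it's worth, a sketch of how the repeat-finding might be done directly in pandas (the column names, and especially the date/time column, are guesses based on the question):

# count how often each account appears in the full monthly data
counts = df_data['accntnum'].value_counts()

# keep only the weekly accounts that occur more than once in df_data
repeats = [a for a in pd.unique(accounts1) if counts.get(a, 0) > 1]

# all rows (all 14 columns) of df_data for those repeated accounts
df_repeats = df_data[df_data['accntnum'].isin(repeats)]

# if only the first occurrence of each repeat is wanted, sort by the
# date/time column ('date_time' is a placeholder name) and keep the first row
df_first = (df_repeats.sort_values('date_time')
                      .drop_duplicates(subset='accntnum', keep='first'))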
So I think this is a relatively simple question:
I have a Pandas data frame (A) that has a key column (which is not unique/will have repeats of the key)
I have another Pandas data frame (B) that has a key column, which may have many matching entries/repeats.
So what I'd like is a bunch of data frames (a list, or a bunch of slice parameters, etc.), one for each key in A (regardless of whether it's unique or not)
In [bad] pseudocode:
for each key in A:
    resultDF[key] = rows in B where B.key == key
I can easily do this iteratively with loops, but I've read that you're supposed to slice/merge/join data frames holistically, so I'm trying to see if I can find a better way of doing this.
A join will give me all the stuff that matches, but that's not exactly what I'm looking for, since I need a resulting dataframe for each key (i.e. for every row) in A.
Thanks!
EDIT:
I was trying to be brief, but here are some more details:
Eventually, what I need to do is generate some simple statistical metrics for elements in the columns of each row.
In other words, I have a DF, call it A, and it has a r rows, with c columns, one of which is a key. There may be repeats on the key.
I want to "match" that key with another [set of?] dataframe, returning however many rows match the key. Then, for that set of rows, I want to, say, determine the min and max of certain element (and std. dev, variance, etc.) and then determine if the corresponding element in A falls within that range.
You're absolutely right that it's possible that if row 1 and row 3 of DF A have the same key -- but potentially DIFFERENT elements -- they'd be checked against the same result set (the ranges of which obviously won't change). That's fine. These won't likely ever be big enough to make that an issue (but if there's the better way of doing it, that's great).
The point is that I need to be able to do the "in range" and stat summary computation for EACH key in A.
Again, I can easily do all of this iteratively. But this seems like the sort of thing pandas could do well, and I'm just getting into using it.
Thanks again!
FURTHER EDIT
The DF looks like this:
df = pd.DataFrame([[1,2,3,4,1,2,3,4], [28,15,13,11,12,23,21,15],['keyA','keyB','keyC','keyD', 'keyA','keyB','keyC','keyD']]).T
df.columns = ['SEQ','VAL','KEY']
SEQ VAL KEY
0 1 28 keyA
1 2 15 keyB
2 3 13 keyC
3 4 11 keyD
4 1 12 keyA
5 2 23 keyB
6 3 21 keyC
7 4 15 keyD
Both DF's A and B are of this format.
I can iteratively get the resultant sets by:
loop_iter = len(A) // max(A['SEQ'])  # integer division so range() gets an int
for start in range(0, loop_iter):
    matchA = A.iloc[start::loop_iter, :]['KEY']
That's simple. But I guess I'm wondering if I can do this "inline". Also, if for some reason the numeric ordering breaks (i.e. the SEQ values get out of order), this won't work. There seems to be no reason NOT to split explicitly on the keys, right? So perhaps I have TWO questions: 1) How to split on keys iteratively (i.e. accessing a DF one row at a time), and 2) How to match a DF and do summary statistics, etc., on a DF that matches on the key.
So, once again:
1). Iterate through DF A, going one at a time, and grabbing a key.
2). Match the key to the SET (matchB) of keys in B that match
3). Do some stats on "values" of matchB, check to see if val.A is in range, etc.
4). Profit!
Ok, from what I understand, the problem at its most simple is that you have a pd.Series of values (i.e. a["key"], which let's just call keys), which correspond to the rows of a pd.DataFrame (the df called b), such that set(b["key"]).issuperset(set(keys)). You then want to apply some function to each group of rows in b where the b["key"] is one of the values in keys.
I'm purposefully disregarding the other df -- a -- that you mention in your prompt, because it doesn't seem to bear any significance to the problem, other than being the source of keys.
Anyway, this is a fairly standard sort of operation -- it's a groupby-apply.
def descriptive_func(df):
"""
Takes a df where key is always equal and returns some summary.
:type df: pd.DataFrame
:rtype: pd.Series|pd.DataFrame
"""
pass
# filter down to those rows we're interested in
valid_rows = b[b["key"].isin(set(keys))]
# this groups by the value and applies the descriptive func to each sub df in turn
summary = valid_rows.groupby("key").apply(descriptive_func)
There are a few built-in methods on the groupby object that are useful. For example, check out valid_rows.groupby("key").sum() or valid_rows.groupby("key").describe(). Under the covers, these are really similar uses of apply. The shape of the returned summary is determined by the applied function. The unique grouped-by values -- those of b["key"] -- always constitute the index, but if the applied function returns a scalar, summary is a Series; if the applied function returns a Series, summary is a DataFrame with those Series as rows; and if the applied function returns a DataFrame, the result is a DataFrame with a MultiIndex. This is a core pattern in pandas, and there's a whole, whole lot to explore here.
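To make that concrete for the min/max/in-range goal from the edits above, here is one possible descriptive_func and range check (a sketch assuming the SEQ/VAL/KEY layout from the question's example df, with a and b as the two frames; note the answer above uses a lowercase "key" column instead):

def descriptive_func(df):
    # summarise the VAL column for one key's rows
    return pd.Series({
        "min": df["VAL"].min(),
        "max": df["VAL"].max(),
    })

summary = b.groupby("KEY").apply(descriptive_func)

# bring the per-key ranges onto A's rows and check whether each VAL is in range
checked = a.merge(summary, left_on="KEY", right_index=True)
checked["in_range"] = checked["VAL"].between(checked["min"], checked["max"])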