groupby and select columns from Pandas DataFrame - python

I have a DataFrame at the daily level:
day        | type | rev  | impressions | yearmonth
2015-10-01 | a    | 1999 | 1000        | 201510
2015-10-02 | a    | 300  | 6777        | 201510
2015-11-07 | b    | 2000 | 4999        | 201511
yearmonth is a column I added to the DataFrame. The task is to group by yearmonth (and maybe type), sum up all the columns (or select a value), and use the result as the new DataFrame.
After grouping the above DataFrame, we should get one row per month.
yearmonth | type | rev  | impressions
201510    | a    | 2299 | 7777
201511    | b    | 2000 | 4999
Let us say df is the DataFrame. I tried:
test = df.groupby('yearmonth')
I checked the methods available on test (tab-completing test.), but I did not see anything that lets me select columns and also aggregate them (I guess we can use agg for sum).
Any inputs?

Add the as_index parameter, like this:
test = df.groupby('yearmonth', as_index=False)
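That keeps yearmonth as a regular column in the result instead of the index. For the aggregation itself, here is a minimal sketch of the whole operation, assuming the column names from the question:

import pandas as pd

df = pd.DataFrame({
    "day": ["2015-10-01", "2015-10-02", "2015-11-07"],
    "type": ["a", "a", "b"],
    "rev": [1999, 300, 2000],
    "impressions": [1000, 6777, 4999],
    "yearmonth": [201510, 201510, 201511],
})

# as_index=False keeps the group keys as columns rather than the index
test = df.groupby(["yearmonth", "type"], as_index=False).agg(
    {"rev": "sum", "impressions": "sum"}
)
print(test)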

Related

Merging 2 datasets in Python

I have 2 different datasets and I want to merge them on the column "country", keeping the common country names and dropping the ones that differ. I have done it with an inner merge, but the result is not what I want.
inner_merged = pd.merge(TFC_DATA,CO2_DATA,how="inner",on="country")
TFC_DATA (in the original dataset there exists a column called year, but I've dropped it):
| Country | TFP |
| Angola | 0.8633379340171814 |
| Angola | 0.9345720410346984 |
| Angola | 1.0301895141601562 |
| Angola | 1.0850582122802734 |
.
.
.
CO2_DATA:
| Country | year | GDP | co2
| Afghanistan | 2005 | 25397688320.0 | 1
| Afghanistan | 2006 | 28704401408.0 | 2
| Afghanistan | 2007 | 34507530240.0 | 2
| Afghanistan | 2008 | 1.0850582122802734 | 3
| Afghanistan | 2009 | 1.040212631225586 | 1
.
.
.
What I want is
Output
|Country|Year|gdp|co2|TFP
Angola|2005|51967275008.0|19.006|0.8633379340171814
Angola|2006|66748907520.0|19.006|0.9345720410346984
Angola|2007|87085293568.0|19.006|1.0301895141601562
.
.
.
What I have instead
Output
Country|Year|gdp|co2|Year|TFP
Angola|2005|51967275008.0|19.006|2005|0.8633379340171814
Angola|2005|51967275008.0|19.006|2006|0.9345720410346984
Angola|2005|51967275008.0|19.006|2007|1.0301895141601562
Angola|2005|51967275008.0|19.006|2008|1.0850582122802734
Angola|2005|51967275008.0|19.006|2009|1.040212631225586
Angola|2005|51967275008.0|19.006|2010|1.0594196319580078
Angola|2005|51967275008.0|19.006|2011|1.036203384399414
Angola|2005|51967275008.0|19.006|2012|1.076979637145996
Angola|2005|51967275008.0|19.006|2013|1.0862818956375122
Angola|2005|51967275008.0|19.006|2014|1.096832513809204
Angola|2005|51967275008.0|19.006|2015|1.0682281255722046
Angola|2005|51967275008.0|19.006|2016|1.0160540342330933
Angola|2005|51967275008.0|19.006|2017|1.0
I expected the countries' data to merge into one dataset, but instead each row from one dataset is duplicated against the other until its data is exhausted, and then the next country does the same.
"TFC_DATA (in the original dataset there exists a column called year, but I've dropped it):"
Well, based on your expected output, you should not drop the Year column from the TFC_DATA dataframe. Only then can you use pandas.merge (as shown below); otherwise, you'll end up with duplicated values.
pd.merge(CO2_DATA, TFC_DATA, left_on=["country", "year"], right_on=["country", "Year"])
Or:
pd.merge(CO2_DATA, TFC_DATA.rename(columns={"Year": "year"}), on=["country", "year"])
The pd.merge() function performs an inner join by default, which means it only includes rows that have matching values in the specified columns.
To use a different join type, one option is a left outer join, which includes all rows from the left dataset (TFC_DATA) and only the matching rows from the right dataset (CO2_DATA).
Specify a left outer join with the how="left" parameter of the pd.merge() function.
merged_data = pd.merge(TFC_DATA, CO2_DATA, how="left", on="country")
EDIT (after @abokey's comment):
First, create a new column in the TFC_DATA dataset with the year value (this assumes TFC_DATA has a DatetimeIndex):
TFC_DATA["year"] = TFC_DATA.index.year
Group the TFC_DATA dataset by "country" and "year", and compute the mean TFP value for each group
TFC_DATA_agg = TFC_DATA.groupby(["country", "year"]).mean()
Reset the index to make "country" and "year" columns in the resulting dataset
TFC_DATA_agg = TFC_DATA_agg.reset_index()
Perform the inner merge, using "country" and "year" as the merge keys
merged_data = pd.merge(CO2_DATA, TFC_DATA_agg, how="inner", on=["country", "year"])

How to aggregate in pandas with some conditions?

I want to aggregate my data in this way:
df.groupby('date').agg({'user_id': 'nunique',
                        'user_id': 'nunique' ONLY WHERE purchase_flag==1})
date | user_id | purchase_flag
4-1-2020 | 1 | 1
4-1-2020 | 1 | 1 (purchased a second time, but still the same unique user on that day)
4-1-2020 | 2 | 0
In this case I want the output to look like:
date | total_users | total_users_who_purchased
4-1-2020 | 2 | 1
How can I best achieve this?
Try this: create a helper column in your DataFrame to flag the users who purchased, then group by date and aggregate on that helper column:
df["user_id_purchased"] = df["user_id"].where(df["purchase_flag"].astype(bool))
df_output = df.groupby("date", as_index=False).agg(
    total_users=("user_id", "nunique"),
    total_users_who_purchased=("user_id_purchased", "nunique"),
)
Output:
date total_users total_users_who_purchased
0 4-1-2020 2 1
I think that one way to achieve this goal is using .loc:
df.loc[df["purchase_flag"] == 1].user_id.nunique()
Implementation to get your output:
details = {'date': ['4-1-2020'],
           'total_users': df.user_id.nunique(),
           'total_users_who_purchased':
               df.loc[df["purchase_flag"] == 1].user_id.nunique()}
df2 = pd.DataFrame(details)
df2

Pandas Merge two tables with the second table's one column transposed

Table 1
df1 = pd.DataFrame({'df1_id':['1','2','3'],'col1':["a","b","c"],'col2':["d","e","f"]})
Table 2
df2 = pd.DataFrame({'df1_id':['1','2','1','1'],'date':['01-05-2021','03-05-2021','05-05-2021','03-05-2021'],'data':[12,13,16,9],'test':['g','h','j','i'],'test2':['k','l','m','n']})
Result Table
Brief Explanation on how the Result table needs to be created:
I have two data frames and I want to merge them based on df1_id, but the date column from the second table should be transposed into the resultant table.
The date columns for the result table will be a range between the min date and max date from the second table
The column values for the dates in the result table will be from the data column of the second table.
Also, the test column in the result table should only take its value from the latest date in the second table.
I hope this is clear. Any suggestion or help regarding this will be greatly appreciated.
I have tried using pivot on the second table and then merging the pivoted second table with df1, but it's not working. I do not know how to get only one row with the latest value of test.
Note: I am trying to solve this problem using vectorization and do not want to serially parse through each row
You need to pivot your df2 into two separate tables, as we need both data and test values, and then merge both resulting pivot tables with df1:
df1 = pd.DataFrame({'df1_id':['1','2','3'],'col1':["a","b","c"],'col2':["d","e","f"]})
df2 = pd.DataFrame({'df1_id':['1','2','1','1'],'date':['01-05-2021','03-05-2021','03-05-2021','05-05-2021'],'data':[12,13,9,16],'test':['g','h','i','j']})
test_piv = df2.pivot(index=['df1_id'],columns=['date'],values=['test'])
data_piv = df2.pivot(index=['df1_id'],columns=['date'],values=['data'])
# ffill across the date columns, then take the last column = test value at the latest date
max_test = test_piv['test'].ffill(axis=1).iloc[:,-1].rename('test')
final = df1.merge(data_piv['data'],left_on=df1.df1_id, right_index=True, how='left')
final = final.merge(max_test,left_on=df1.df1_id, right_index=True, how='left')
and hence your resulting final dataframe is as below:
| | df1_id | col1 | col2 | 01-05-2021 | 03-05-2021 | 05-05-2021 | test |
|---:|---------:|:-------|:-------|-------------:|-------------:|-------------:|:-------|
| 0 | 1 | a | d | 12 | 9 | 16 | j |
| 1 | 2 | b | e | nan | 13 | nan | h |
| 2 | 3 | c | f | nan | nan | nan | nan |
Here is a solution for the question:
I first sort df2 on df1_id and date to ensure that the table entries are in order.
Then I drop duplicates based on df1_id, keeping the last row, to ensure I have the latest values for test and test2.
Then I pivot df2 to get the corresponding dates as columns and data as their values.
Then I merge the pivoted table with df2_latest_vals to bring in the latest values of test and test2.
Then I merge with df1 to get the resultant table:
df1 = pd.DataFrame({'df1_id':['1','2','3'],'col1':["a","b","c"],'col2':["d","e","f"]})
df2 = pd.DataFrame({'df1_id':['1','2','1','1'],'date':['01-05-2021','03-05-2021','05-05-2021','03-05-2021'],'data':[12,13,16,9],'test':['g','h','j','i'],'test2':['k','l','m','n']})
df2=df2.sort_values(by=['df1_id','date'])
df2_latest_vals = df2.drop_duplicates(subset=['df1_id'],keep='last')
df2_pivoted = df2.pivot_table(index=['df1_id'],columns=['date'],values=['data'])
df2_pivoted = df2_pivoted.droplevel(0,axis=1).reset_index()
df2_pivoted = pd.merge(df2_pivoted,df2_latest_vals,on='df1_id')
df2_pivoted = df2_pivoted.drop(columns=['date','data'])
result = pd.merge(df1,df2_pivoted,on='df1_id',how='left')
result
Note: I have not been able to figure out how to get the entire date range between 01-05-2021 and 05-05-2021 and show the empty values as NaN. If anyone can help, please edit the answer.
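A possible sketch for that missing piece, assuming the dates are day-first strings as in the sample data: build the full range with pd.date_range and reindex the pivoted date columns (inserted right after the reset_index step above), so absent dates show up as NaN columns.

# Hypothetical continuation, not a tested solution: assumes df2['date']
# holds day-first strings like '01-05-2021'
dates = pd.to_datetime(df2['date'], dayfirst=True)
full_range = pd.date_range(dates.min(), dates.max()).strftime('%d-%m-%Y')
# Reindex the date columns; 02-05-2021 and 04-05-2021 become all-NaN columns
df2_pivoted = df2_pivoted.reindex(columns=['df1_id', *full_range])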

Pandas DataFrame: Fill NA values based on group mean

I would like to update the NA values of a Pandas DataFrame column with the values in a groupby object.
Let's illustrate with an example:
We have the following DataFrame columns:
|--------|-------|-----|-------------|
| row_id | Month | Day | Temperature |
|--------|-------|-----|-------------|
| 1 | 1 | 1 | 14.3 |
| 2 | 1 | 1 | 14.8 |
| 3 | 1 | 2 | 13.1 |
|--------|-------|-----|-------------|
We're simply measuring temperature multiple times a day for many months. Now, let's assume that for some of our records the temperature reading failed and we have an NA.
|--------|-------|-----|-------------|
| row_id | Month | Day | Temperature |
|--------|-------|-----|-------------|
| 1 | 1 | 1 | 14.3 |
| 2 | 1 | 1 | 14.8 |
| 3 | 1 | 2 | 13.1 |
| 4 | 1 | 2 | NA |
| 5 | 1 | 3 | 14.8 |
| 6 | 1 | 4 | NA |
|--------|-------|-----|-------------|
We could just use pandas' .fillna(); however, we want to be a little more sophisticated. Since there are multiple readings per day (there could be hundreds per day), we'd like to take the daily average and use that as our fill value.
We can get the daily averages with a simple groupby:
avg_temp_by_month_day = df.groupby(['month', 'day'])['temperature'].mean()
This gives us the mean for each day of each month. The question is: how best to fill the NA values with these groupby values?
We could use an apply():
df['temperature'] = df.apply(
    lambda row: avg_temp_by_month_day.loc[row['month'], row['day']] if pd.isna(row['temperature']) else row['temperature'],
    axis=1
)
however this is really slow (1M+ records).
Is there a vectorized approach, perhaps using np.where(), or maybe creating another Series and merging?
What's a more efficient way to perform this operation?
Thank you!
I'm not sure if this is the fastest, but instead of taking ~1 hour with apply, it takes ~20 sec for 1M+ records. The code below has been updated to work on one or many columns.
local_avg_cols = ['temperature'] # can work with multiple columns
# Create groupby's to get local averages
local_averages = df.groupby(['month', 'day'])[local_avg_cols].mean()
# The groupby above already returns a DataFrame; restore month/day as columns
local_averages = local_averages.reset_index()
# Merge into original dataframe
df = df.merge(local_averages, on=['month', 'day'], how='left', suffixes=('', '_avg'))
# Now overwrite na values with values from new '_avg' col
for col in local_avg_cols:
df[col] = df[col].mask(df[col].isna(), df[col+'_avg'])
# Drop new avg cols
df = df.drop(columns=[col+'_avg' for col in local_avg_cols])
If anyone finds a more efficient way to do this, (efficient in processing time, or in just readability), I'll unmark this answer and mark yours. Thank you!
I'm guessing two things slow down your process. First, you don't need to convert your groupby result to a DataFrame. Second, you don't need the for loop.
import pandas as pd
from numpy import nan

# Populating the dataset
data = {"Month": [1] * 6,
        "Day": [1, 1, 2, 2, 3, 4],
        "Temperature": [14.3, 14.8, 13.1, nan, 14.8, nan]}
# Creating the dataframe
df = pd.DataFrame(data)
local_averages = df.groupby(['Month', 'Day'])['Temperature'].mean()
df = df.merge(local_averages, on=['Month', 'Day'], how='left', suffixes=('', '_avg'))
# Fill the missing values of the Temperature column with what is available in Temperature_avg
df["Temperature"] = df["Temperature"].fillna(df["Temperature_avg"])
df = df.drop(columns="Temperature_avg")
Groupby is a resource-heavy process, so make the most of it when you use it. Furthermore, as you already know, loops are not a good idea when it comes to dataframes. Additionally, if you have large data, you may want to avoid creating extra variables from it; I might inline the groupby into the merge if my data had 1M rows and many columns.
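For reference, a groupby().transform() sketch, an alternative to the merge-based approaches above and assuming the same column names, avoids both the merge and the temporary _avg columns:

# transform('mean') broadcasts each group's mean back to the original row
# order, so fillna can consume it directly with no merge at all
group_means = df.groupby(["Month", "Day"])["Temperature"].transform("mean")
df["Temperature"] = df["Temperature"].fillna(group_means)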

Pivot or transpose in SQL or Pandas

I have a table of the form:
item_code | attribute | time_offset | mean | median | description | ...
The attribute column has one of 40 possible values and the time_offset column can be an integer from 0 to 20.
I want to transform this table to a wide one of the form:
item_code | <attribute1>_<time_offset1>_mean | <attribute1>_<time_offset1>_median | <attribute1>_<time_offset1>_description | <attribute1>_<time_offset1>_... | <attribute2>...
I can do this either in SQL or in Pandas but I'm having difficulty with the fact that some of the columns are not numeric, so it is hard to come up with an aggregation function for them.
I can guarantee that each combination of item_code, attribute and time_offset will have only one row, so I do not need an aggregation function. Is there something like a transpose operation that will allow me to do what I am looking for?
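Since each combination appears exactly once, no aggregation function is needed; in pandas, set_index followed by unstack acts as exactly that kind of transpose. A sketch under that assumption, using the column names from the question:

# A sketch, not a tested solution: unstack needs no aggregation function,
# so non-numeric columns such as description come through unchanged
wide = df.set_index(["item_code", "attribute", "time_offset"]).unstack(["attribute", "time_offset"])
# Flatten the (measure, attribute, time_offset) column MultiIndex into
# <attribute>_<time_offset>_<measure> names
wide.columns = [f"{attr}_{off}_{measure}" for measure, attr, off in wide.columns]
wide = wide.reset_index()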
