Filtering of groups in dataframes - python

I am trying to filter out groups using pandas. I have tried groupby, but I can't work out how to filter whole groups out using criteria from the DataFrame. Below is a print of my dataframe. I want to group the users (1-4) and then filter on whether they have a primary account or not, showing only the users who do not have one. Anyone got an idea for this?
So far my code looks like this:
df=pd.read_csv("accounts_test.csv")
grouped = df.groupby('User')
Dataframe:
User primary account_type
0 1 NaN current_acc
1 1 yes savings
2 1 NaN invest
3 2 NaN current_acc
4 2 NaN invest
5 2 NaN savings
6 3 NaN savings
7 3 yes current_acc
8 3 NaN invest
9 4 NaN savings
10 4 NaN invest
11 4 NaN current_acc
Wanted output after filtering:
User primary account_type
3 2 NaN current_acc
4 2 NaN invest
5 2 NaN savings
9 4 NaN savings
10 4 NaN invest
11 4 NaN current_acc
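For a reproducible setup, the frame above can be rebuilt like this (a sketch, assuming the blank primary entries are real np.nan values):
import numpy as np
import pandas as pd
df = pd.DataFrame({
    'User': [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4],
    'primary': [np.nan, 'yes', np.nan, np.nan, np.nan, np.nan,
                np.nan, 'yes', np.nan, np.nan, np.nan, np.nan],
    'account_type': ['current_acc', 'savings', 'invest',
                     'current_acc', 'invest', 'savings',
                     'savings', 'current_acc', 'invest',
                     'savings', 'invest', 'current_acc'],
})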

You can try groupby() + filter():
df.groupby('User').filter(lambda x:x['primary'].ne('yes').all())
OR
use groupby() + transform() to build a mask and then pass it to df:
df[df.groupby('User')['primary'].transform(lambda x:x.ne('yes').all())]

Another way, using only fast vectorized operations (no lambda function):
use .loc + .groupby() + .transform() with 'all':
df.loc[df['primary'].ne('yes').groupby(df['User']).transform('all')]
Or, if your NaN is a real null value (np.nan), you can also use:
df.loc[df['primary'].isna().groupby(df['User']).transform('all')]
Result:
User primary account_type
3 2 NaN current_acc
4 2 NaN invest
5 2 NaN savings
9 4 NaN savings
10 4 NaN invest
11 4 NaN current_acc

Related

Creating a dataframe from monthly values which don't start in January

So, I have some data in list form, such as:
Q=[2,3,4,5,6,7,8,9,10,11,12] #values
M=[11,0,1,2,3,4,5,6,7,8,9] #months
Y=[2010,2011,2011,2011,2011,2011,2011,2011,2011,2011,2011] #years
And I want to get a dataframe with one row per year and one column per month, placing the data of Q at the positions given by M and Y.
So far I have tried a couple of things; my current code is as follows:
def save_data(data_list, year_info, month_info):
    # how many datapoints
    n_data = len(data_list)
    # how many years
    y0 = year_info[0]
    yf = year_info[n_data - 1]
    n_years = yf - y0 + 1
    # creating the list I want to fill out
    df_list = [[math.nan] * 12] * n_years
    ind = 0
    for y in range(n_years):
        for m in range(12):
            if ind < len(data_list):
                if year_info[ind] - y0 == y and month_info[ind] == m:
                    df_list[y][m] = data_list[ind]
                    ind += 1
    df = pd.DataFrame(df_list)
    return df
I get this output:
    0  1  2  3  4  5  6   7   8   9   10  11
0   3  4  5  6  7  8  9  10  11  12  nan   2
1   3  4  5  6  7  8  9  10  11  12  nan   2
And I want to get:
     0    1    2    3    4    5    6    7    8    9   10   11
0  nan  nan  nan  nan  nan  nan  nan  nan  nan  nan  nan    2
1    3    4    5    6    7    8    9   10   11   12  nan  nan
I have tried doing a bunch of different things, but so far nothing has worked. I'm wondering if there's a more straightforward way of doing this; my code seems to be overwriting in a weird way, and I do not know, for instance, why there is a 2 in the last position of the second row, since that's the first value of my list.
Thanks in advance!
Try pivot:
(pd.DataFrame({'Y': Y, 'M': M, 'Q': Q})
   .pivot(index='Y', columns='M', values='Q')
)
Output:
M 0 1 2 3 4 5 6 7 8 9 11
Y
2010 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 2.0
2011 3.0 4.0 5.0 6.0 7.0 8.0 9.0 10.0 11.0 12.0 NaN
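As for why the original loop produced two identical rows: [[math.nan]*12]*n_years repeats a reference to one and the same inner list, so writing to any row writes to all of them; that is also why the 2 (the first value of Q, written for year 2010) shows up at the end of the 2011 row. A minimal demonstration:
import math
rows = [[math.nan] * 12] * 2                  # two references to ONE inner list
rows[0][11] = 2
print(rows[1][11])                            # prints 2 -- the other row changed too
rows = [[math.nan] * 12 for _ in range(2)]    # a fresh, independent list per year
Note also that pivot only creates columns for months that actually occur (month 10 is absent above); chaining .reindex(columns=range(12)) would force all twelve columns.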

Check multiple columns data format and append results to one column in Pandas

Given a toy dataset as follows:
id room area situation
0 1 A-102 world under construction
1 2 NaN 24 under construction
2 3 B309 NaN NaN
3 4 C·102 25 under decoration
4 5 E_1089 hello under decoration
5 6 27 NaN under plan
6 7 27 NaN NaN
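For reference, the toy frame can be constructed like this (a sketch, assuming the blanks are np.nan and the numeric-looking entries are strings):
import numpy as np
import pandas as pd
df = pd.DataFrame({
    'id': [1, 2, 3, 4, 5, 6, 7],
    'room': ['A-102', np.nan, 'B309', 'C·102', 'E_1089', '27', '27'],
    'area': ['world', '24', np.nan, '25', 'hello', np.nan, np.nan],
    'situation': ['under construction', 'under construction', np.nan,
                  'under decoration', 'under decoration', 'under plan', np.nan],
})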
I need to check three columns: room, area, situation based on the following conditions:
(1) if the room name is not made up of numbers, letters, or - (NaNs are also considered invalid), then return incorrect room name in the check column;
(2) if area is not a number or NaN, then return and append area is not numbers to the existing check column;
(3) if situation contains under decoration, then return and append decoration is in the content to the existing check column.
Please note I have other columns to check in the real data, and I need to append new check results separated by ;.
How could I get the expected result like this:
id room area situation check
0 1 A-102 world under construction area is not numbers
1 2 NaN 24 under construction incorrect room name
2 3 B309 NaN NaN NaN
3 4 C·102 25 under decoration incorrect room name; decoration is in the content
4 5 E_1089 hello under decoration incorrect room name; area is not numbers; decoration is in the content
5 6 27 NaN under plan NaN
6 7 27 NaN NaN NaN
My code so far:
Room name check:
df['check'] = np.where(df.room.str.match(r'^[a-zA-Z\d\-]*$'), np.NaN, 'incorrect room name')
Out:
id room area situation check
0 1 A-102 world under construction nan
1 2 NaN 24 under construction nan
2 3 B309 NaN NaN nan
3 4 C·102 25 under decoration incorrect room name
4 5 E_1089 hello under decoration incorrect room name
5 6 27 NaN under plan nan
6 7 27 NaN NaN nan
Area check:
df['check'] = df['check'].where(df.area.str.contains(r'^\d+$', na=True),
                                'area is not a numbers')
Out:
id room area situation check
0 1 A-102 world under construction area is not a numbers
1 2 NaN 24 under construction nan
2 3 B309 NaN NaN nan
3 4 C·102 25 under decoration incorrect room name
4 5 E_1089 hello under decoration area is not a numbers
5 6 27 NaN under plan nan
6 7 27 NaN NaN nan
Situation check:
df['check'] = df['check'].where(df.situation.str.contains('under decoration', na=True),
                                'decoration is in the content')
Out:
id room area situation check
0 1 A-102 world under construction decoration is in the content
1 2 NaN 24 under construction decoration is in the content
2 3 B309 NaN NaN nan
3 4 C·102 25 under decoration incorrect room name
4 5 E_1089 hello under decoration area is not a numbers
5 6 27 NaN under plan decoration is in the content
6 7 27 NaN NaN nan
Thanks.
First, the output of each test is converted with numpy.where; then the arrays are zipped and a custom function joins the messages if any value is non-missing:
a = np.where(df.room.str.match(r'^[a-zA-Z\d\-]*$', na=False), None,
             'incorrect room name')
b = np.where(df.area.str.contains(r'^\d+$', na=True), None,
             'area is not a numbers')
c = np.where(df.situation.str.contains('under decoration', na=False),
             'decoration is in the content', None)

f = (lambda x: ';'.join(y for y in x if pd.notna(y))
     if any(pd.notna(np.array(x))) else np.nan)

df['check'] = [f(x) for x in zip(a, b, c)]
print(df)
id room area situation \
0 1 A-102 world under construction
1 2 NaN 24 under construction
2 3 B309 NaN NaN
3 4 C·102 25 under decoration
4 5 E_1089 hello under decoration
5 6 27 NaN under plan
6 7 27 NaN NaN
check
0 area is not a numbers
1 incorrect room name
2 NaN
3 incorrect room name;decoration is in the content
4 incorrect room name;area is not a numbers;deco...
5 NaN
6 NaN
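Since more columns need checking in the real data, the same idea can be wrapped in a function; a sketch built on the approach above (the message strings follow the answer above; build_check and everything else here is illustrative):
def build_check(df):
    # each test yields None where the value passes, else its message
    tests = [
        np.where(df['room'].str.match(r'^[a-zA-Z\d\-]*$', na=False),
                 None, 'incorrect room name'),
        np.where(df['area'].str.contains(r'^\d+$', na=True),
                 None, 'area is not a numbers'),
        np.where(df['situation'].str.contains('under decoration', na=False),
                 'decoration is in the content', None),
    ]
    # join the non-missing messages per row; rows with no message get NaN
    joined = ['; '.join(m for m in row if m) or np.nan for row in zip(*tests)]
    return df.assign(check=joined)
df = build_check(df)
Adding another check is then just one more np.where(...) entry in the tests list.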
I reworked your conditions a bit so the result comes closer to your expected output:
a = np.where(df.room.str.match(r'^[a-zA-Z\d\-]*$').notnull(), pd.NA, 'incorrect room name')
b = np.where(df["area"].str.isnumeric() & df["area"].notnull(), pd.NA, 'area is not a numbers')
c = np.where(df.situation.str.contains('under decoration', na=False), 'decoration is in the content', pd.NA)
s = (pd.concat([pd.Series(i, index=df.index) for i in (a, b, c)], axis=1)
       .stack().groupby(level=0).agg("; ".join))
print(df.assign(check=s))
id room area situation check
0 1 A-102 world under construction area is not a numbers
1 2 NaN 24 under construction incorrect room name
2 3 B309 NaN NaN area is not a numbers; decoration is in the co...
3 4 C·102 25 under decoration decoration is in the content
4 5 E_1089 hello under decoration area is not a numbers; decoration is in the co...
5 6 27 NaN under plan area is not a numbers
6 7 27 NaN NaN area is not a numbers; decoration is in the co...

How to fill missing numeric values in a df column

So I am trying to add rows to a data frame so that the Weeks column follows a numeric order from 1 to 52. My data is missing some numbers, so I need to add those rows and fill the spots with NaN or null values.
df = pd.DataFrame("Weeks": [1,2,3,15,16,20,21,52],
"Values": [10,10,10,10,50,60,70,40])
Desired output:
Weeks Values
1 10
2 10
3 10
4 NaN
5 NaN
6 NaN
7 NaN
8 NaN
...
52 40
and so on until it reaches Weeks = 52.
My solution:
new_df = pd.DataFrame({"Weeks": [], "Values": []})
for x in range(1, 53):
    for i in df.Weeks:
        if x == i:
            new_df["Weeks"] = x
            new_df["Values"] = df.Values[i]
The problem is that it is super inefficient; does anyone know a more efficient way to do this?
You could use set_index to set Weeks as the index and reindex with a range up to the maximum week (note the + 1 so week 52 is kept):
df.set_index('Weeks').reindex(range(1, df.Weeks.max() + 1))
Or, accounting for the minimum week too:
df.set_index('Weeks').reindex(range(df.Weeks.min(), df.Weeks.max() + 1))
Values
Weeks
1 10.0
2 10.0
3 10.0
4 NaN
5 NaN
6 NaN
7 NaN
8 NaN
9 NaN
10 NaN
11 NaN
12 NaN
13 NaN
14 NaN
15 10.0
16 50.0
17 NaN
...
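If Weeks should end up as a regular column again and always span 1 to 52, a reset_index() at the end restores it; a sketch under that assumption:
out = (df.set_index('Weeks')
         .reindex(range(1, 53))   # force weeks 1..52
         .reset_index())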

How do I pivot a pandas DataFrame and then add hierarchical columns?

Can someone please help me understand the steps to convert a Python pandas DataFrame that is in record form (data set A), into one that is pivoted with nested columns (as shown in data set B)?
For this question the underlying schema has the following rules:
Each ProjectID appears once
Each ProjectID is associated to a single PM
Each ProjectID is associated to a single Category
Multiple ProjectIDs can be associated with a single Category
Multiple ProjectIDs can be associated with a single PM
Input Data Set A
df_A = pd.DataFrame({'ProjectID': [1, 2, 3, 4, 5, 6, 7, 8],
                     'PM': ['Bob', 'Jill', 'Jack', 'Jack', 'Jill', 'Amy', 'Jill', 'Jack'],
                     'Category': ['Category A', 'Category B', 'Category C', 'Category B',
                                  'Category A', 'Category D', 'Category B', 'Category B'],
                     'Comments': ['Justification 1', 'Justification 2', 'Justification 3',
                                  'Justification 4', 'Justification 5', 'Justification 6',
                                  'Justification 7', 'Justification 8'],
                     'Score': [10, 7, 10, 5, 15, 10, 0, 2]})
Desired Output
Notice the addition of a nested index across the columns. Also notice that 'Comments' and 'Score' both appear at the same level beneath 'ProjectID'. Finally, see how the desired output does NOT aggregate any data, but groups/merges the category data into one row per category value.
I have tried so far:
df_A.set_index(['Category','ProjectID'], append=True).unstack() - This would only work if I first create a nested index of ['Category','ProjectID'] and ADD it to the original numerical index created with a standard dataframe; however, it repeats each instance of a Category/ProjectID match as its own row (because of the original index).
df_A.groupby() - I wasn't able to use this because it appears to force aggregation of some sort in order to get all of the values of a single category on a single row.
df_A.pivot('Category','ProjectID',values='Comments') - I can perform a pivot to avoid unwanted aggregation and it starts to look similar to my intended output, but can only see the 'Comments' field and also cannot set nested columns this way. I receive an error when trying to set values=['Comments','Score'] in the pivot statement.
I think the answer is somewhere between pivot, unstack, set_index, or groupby, but I don't know how to complete the pivot, and then add the appropriate nested column index.
I'd appreciate any thoughts you all have.
Question updated based on Mr. T's comments. Thank you.
I think this is what you are looking for:
pd.DataFrame(df_A.set_index(['PM', 'ProjectID', 'Category']).sort_index().stack()).T.stack(2)
Out[4]:
PM Amy Bob ... Jill
ProjectID 6 1 ... 5 7
Comments Score Comments Score ... Comments Score Comments Score
Category ...
0 Category A NaN NaN Justification 1 10 ... Justification 5 15 NaN NaN
Category B NaN NaN NaN NaN ... NaN NaN Justification 7 0
Category C NaN NaN NaN NaN ... NaN NaN NaN NaN
Category D Justification 6 10 NaN NaN ... NaN NaN NaN NaN
[4 rows x 16 columns]
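Step by step, the one-liner does the following (the same chain split into named intermediates; the names are illustrative only):
tmp = df_A.set_index(['PM', 'ProjectID', 'Category']).sort_index()
# tmp keeps just the Comments and Score columns under a 3-level row index
stacked = tmp.stack()             # Comments/Score become a 4th index level
wide = pd.DataFrame(stacked).T    # one row (labelled 0), 4-level columns
result = wide.stack(2)            # move the Category level back to the rows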
EDIT:
To select rows by category you should get rid of the row index 0 by adding .xs():
In [3]: df_A_transformed = pd.DataFrame(df_A.set_index(['PM', 'ProjectID', 'Category']).sort_index().stack()).T.stack(2).xs(0)
In [4]: df_A_transformed
Out[4]:
PM Amy Bob ... Jill
ProjectID 6 1 ... 5 7
Comments Score Comments Score ... Comments Score Comments Score
Category ...
Category A NaN NaN Justification 1 10 ... Justification 5 15 NaN NaN
Category B NaN NaN NaN NaN ... NaN NaN Justification 7 0
Category C NaN NaN NaN NaN ... NaN NaN NaN NaN
Category D Justification 6 10 NaN NaN ... NaN NaN NaN NaN
[4 rows x 16 columns]
In [5]: df_A_transformed.loc['Category B']
Out[5]:
PM ProjectID
Amy 6 Comments NaN
Score NaN
Bob 1 Comments NaN
Score NaN
Jack 3 Comments NaN
Score NaN
4 Comments Justification 4
Score 5
8 Comments Justification 8
Score 2
Jill 2 Comments Justification 2
Score 7
5 Comments NaN
Score NaN
7 Comments Justification 7
Score 0
Name: Category B, dtype: object

How can I use apply with pandas rolling_corr()

I posted this a while ago but no one could solve the problem.
First, let's create some correlated DataFrames and call rolling_corr() with dropna(), as I am going to sparse the data up later; no min_period is set, as I want to keep the results robust and consistent with the set window:
hey=(DataFrame(np.random.random((15,3)))+.2).cumsum()
hoo=(DataFrame(np.random.random((15,3)))+.2).cumsum()
hey_corr= rolling_corr(hey.dropna(),hoo.dropna(), 4)
gives me
In [388]: hey_corr
Out[388]:
0 1 2
0 NaN NaN NaN
1 NaN NaN NaN
2 NaN NaN NaN
3 0.991087 0.978383 0.992614
4 0.974117 0.974871 0.989411
5 0.966969 0.972894 0.997427
6 0.942064 0.994681 0.996529
7 0.932688 0.986505 0.991353
8 0.935591 0.966705 0.980186
9 0.969994 0.977517 0.931809
10 0.979783 0.956659 0.923954
11 0.987701 0.959434 0.961002
12 0.907483 0.986226 0.978658
13 0.940320 0.985458 0.967748
14 0.952916 0.992365 0.973929
Now, when I sparse it up, it gives me...
hey.ix[5:8,0] = np.nan
hey.ix[6:10,1] = np.nan
hoo.ix[5:8,0] = np.nan
hoo.ix[6:10,1] = np.nan
hey_corr_sparse = rolling_corr(hey.dropna(),hoo.dropna(), 4)
hey_corr_sparse
Out[398]:
0 1 2
0 NaN NaN NaN
1 NaN NaN NaN
2 NaN NaN NaN
3 0.991273 0.992557 0.985773
4 0.953041 0.999411 0.958595
11 0.996801 0.998218 0.992538
12 0.994919 0.998656 0.995235
13 0.994899 0.997465 0.997950
14 0.971828 0.937512 0.994037
Chunks of data are missing; it looks like we only have data where dropna() can form a complete window across the dataframe.
I can solve the problem with an ugly iter-fudge as follows...
hey_corr_sparse = DataFrame(np.nan, index=hey.index, columns=hey.columns)
for i in hey_corr_sparse.columns:
    hey_corr_sparse.ix[:, i] = rolling_corr(hey.ix[:, i].dropna(), hoo.ix[:, i].dropna(), 4)
hey_corr_sparse
Out[406]:
0 1 2
0 NaN NaN NaN
1 NaN NaN NaN
2 NaN NaN NaN
3 0.991273 0.992557 0.985773
4 0.953041 0.999411 0.958595
5 NaN 0.944246 0.961917
6 NaN NaN 0.941467
7 NaN NaN 0.963183
8 NaN NaN 0.980530
9 0.993865 NaN 0.984484
10 0.997691 NaN 0.998441
11 0.978982 0.991095 0.997462
12 0.914663 0.990844 0.998134
13 0.933355 0.995848 0.976262
14 0.971828 0.937512 0.994037
Does anyone in the community know if it is possible to make this an array function to give this result? I've attempted to use .apply but drawn a blank. Is it even possible to .apply a function that works on two data structures (hey and hoo in this example)?
Many thanks, LW
You can try this:
>>> def sparse_rolling_corr(ts, other, window):
... return rolling_corr(ts.dropna(), other[ts.name].dropna(), window).reindex_like(ts)
...
>>> hey.apply(sparse_rolling_corr, args=(hoo, 4))
