Is there an easy and straightforward way to load the output of sp.stats.describe() into a DataFrame, including the value names? It doesn't seem to be a dictionary or anything similar. Of course I can manually attach the relevant column names (see below), but I was wondering whether it might be possible to load it directly into a DataFrame with named columns.
import pandas as pd
import scipy as sp
import scipy.stats  # scipy.stats must be imported explicitly

data = pd.DataFrame({'a': [1, 2, 3, 4, 5], 'b': [1, 2, 3, 4, 5]})
a = sp.stats.describe(data['a'])
pd.DataFrame(a)
pd.DataFrame(a).transpose().rename(columns={0: 'N', 1: 'Min,Max',
                                            2: 'Mean', 3: 'Var',
                                            4: 'Skewness',
                                            5: 'Kurtosis'})
You can use _fields to get the column names from the named tuple:
a = sp.stats.describe(data['a'])
df = pd.DataFrame([a], columns=a._fields)
print (df)
nobs minmax mean variance skewness kurtosis
0 5 (1, 5) 3.0 2.5 0.0 -1.3
It is also possible to create a dictionary from the named tuple with _asdict:
d = sp.stats.describe(data['a'])._asdict()
df = pd.DataFrame([d], columns=d.keys())
print (df)
nobs minmax mean variance skewness kurtosis
0 5 (1, 5) 3.0 2.5 0.0 -1.3
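If you need the same statistics for every column at once, here is a small sketch (not from the original answer) that reuses the _asdict idea, assuming the data and imports above:
stats = {col: sp.stats.describe(data[col])._asdict() for col in data.columns}
df_all = pd.DataFrame(stats).T  # one row per column, one column per statistic
print (df_all)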
So I have a dataframe as follows:
import pandas as pd
import numpy as np
df = pd.DataFrame(np.array([[1, 2, 3, 3, 2, 1], [4, 3, 6, 6 ,3 ,4], [7, 2, 9, 9, 2, 7]]),
columns=['a', 'b', 'c', 'a_select','b_select','c_select'])
df
Now, I may need to reorganize the dataframe (or use two) to accomplish this, but...
I'd like to select the 2 largest values among the '_select' columns in each row, then average the values of the corresponding non-'_select' columns.
For example, row 1 would average the values from a & b, and row 2 the values from a & c (NOT the values from the _select columns themselves).
Currently I'm just iterating over each row - that is simple enough, but slow with a large dataset - and I can't figure out how to use an apply or lambda function to do the equivalent (or whether it's even possible).
A simple one-liner using nlargest (note that this averages the two largest '_select' values themselves):
>>> df.filter(like='select').apply(lambda s: s.nlargest(2), 1).mean(1)
For performance, maybe numpy is useful:
>>> np.sort(df.filter(like='select').to_numpy(), 1)[:, -2:].mean(1)
To get the values from the first (non-'_select') columns instead, use argsort:
>>> arr = df.filter(like='select').to_numpy()
>>> df[['a', 'b', 'c']].to_numpy()[[[x] for x in np.arange(len(arr))],
...                                np.argsort(arr, 1)][:, -2:].mean(1)
array([1.5, 5. , 8. ])
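For readability, the same selection can also be written with np.take_along_axis (a sketch; assumes NumPy 1.15+ and the arr defined above):
>>> idx = np.argsort(arr, 1)[:, -2:]  # column positions of the 2 largest per row
>>> np.take_along_axis(df[['a', 'b', 'c']].to_numpy(), idx, axis=1).mean(1)
array([1.5, 5. , 8. ])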
I have a dataframe that has duplicated time indices and I would like to get the mean across all observations for the previous 2 days (I do not want to drop any observations; they are all information that I need). I've checked the pandas documentation and read previous posts on Stack Overflow (such as Apply rolling mean function on data frames with duplicated indices in pandas), but could not find a solution. Here's an example of what my data frame looks like and the output I'm looking for. Thank you in advance.
data:
import pandas as pd
df = pd.DataFrame({'id': [1,1,1,2,3,3,4,4,4],'t': [1, 2, 3, 2, 1, 2, 2, 3, 4],'v1':[1, 2, 3, 4, 5, 6, 7, 8, 9]})
output:
t     v2
1      -
2      -
3  4.167
4      5
5  6.667
A rough proposal: concatenate 2 copies of the input frame in which the values in 't' are shifted to 't+1' and 't+2' respectively. This way, the column 't' comes to mean "the target day".
Setup:
import pandas as pd
df = pd.DataFrame({'id': [1,1,1,2,3,3,4,4,4],
't': [1, 2, 3, 2, 1, 2, 2, 3, 4],
'v1':[1, 2, 3, 4, 5, 6, 7, 8, 9]})
Implementation:
n = df.shape[0]
incr = pd.DataFrame({'id': [0]*n, 't': [1]*n, 'v1': [0]*n})  # +1 in 't'
df2 = pd.concat([df + incr, df + incr + incr]).groupby('t').mean()
df2 = df2[1:-1]  # drop the days that do not have full values for the 2 previous days
df2 = df2.rename(columns={'v1': 'v2'}).drop('id', axis=1)
Output:
v2
t
3 4.166667
4 5.000000
5 6.666667
Thank you for all the help. I ended up using groupby + rolling (2 Day), and then dropping duplicates (keeping the last observation).
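For reference, here is a hypothetical sketch of such a groupby + rolling variant (not the exact code, which isn't shown): aggregate per day first, then roll a 2-day window over complete days, so that the value reported for day t is the mean over days t-1 and t-2:
daily = df.groupby('t')['v1'].agg(['sum', 'count'])
daily = daily.reindex(range(df['t'].min(), df['t'].max() + 2), fill_value=0)
v2 = (daily['sum'].rolling(2).sum() / daily['count'].rolling(2).sum()).shift(1)
# t=3 -> 4.1667, t=4 -> 5.0, t=5 -> 6.6667, matching the expected output; t=1 and t=2 are NaN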
For a pandas dataframe of:
import pandas as pd
import numpy as np

df = pd.DataFrame({
    'id': [1, 1, 2, 1],
    'anomaly_score': [5, 10, 8, 100],
    'match_level_0': [np.nan, 1, 1, 1],
    'match_level_1': [np.nan, np.nan, 1, 1],
    'match_level_2': [np.nan, 1, 1, 1]
})
display(df)
df = df.groupby(['id', 'match_level_0']).agg(['mean', 'sum'])
I want to select the largest values per group.
df.columns = ['__'.join(col).strip() for col in df.columns.values]
df.groupby(['id'])['anomaly_score__mean'].nlargest(2)
This works, but it requires flattening the MultiIndex columns first.
Instead, I would like to use directly:
df.groupby(['id'])[('anomaly_score', 'mean')].nlargest(2)
But this fails with the key not being found.
Interestingly, it works just fine when not grouping:
df[('anomaly_score', 'mean')].nlargest(2)
For me, grouping the Series by the first level of the MultiIndex works, but it seems like a bug that it does not work like in your solution:
print (df[('anomaly_score', 'mean')].groupby(level=0).nlargest(2))
id  match_level_0
1   1.0              55.0
2   1.0               8.0
Name: (anomaly_score, mean), dtype: float64
print (df[('anomaly_score', 'mean')].groupby(level='id').nlargest(2))
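Another workaround sketch (my own, not from the original answer), using the aggregated df from the question: select the tuple-named column first, sort it, and then take the first 2 rows per id group:
top2 = (df[('anomaly_score', 'mean')]
        .sort_values(ascending=False)
        .groupby(level='id')
        .head(2))
print(top2)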
I want to add a column, after the agg() call, with the number of non-NaN values.
"len" does not distinguish between NaNs and non-NaNs.
In the example below, I am exploiting the fact that np.sum and np.mean only take non-NaN values into account. But there must be a better way of reaching the same result?
import pandas as pd
import numpy as np
df = pd.DataFrame({'a': [1, 1, 1, 0, 0, 0], 'b': [4, np.nan, 2, 1, 6, np.nan]})
df = df.groupby("a")['b'].agg([np.sum, np.mean, np.median, np.std, len])
df['count_non_nan'] = df['sum'] / df['mean']
df
Replace np.NaN's with 0.
import pandas as pd
import numpy as np
df = pd.DataFrame({'a': [1, 1, 1, 0, 0, 0], 'b': [4, np.nan, 2, 1, 6, np.nan]})
df = df.replace(np.nan, 0)
df = df.groupby("a")['b'].agg([np.sum, np.mean, np.median, np.std, len])
df['count_non_nan'] = df['sum'] / df['mean']
df
print(df)
Define a function counting non-null values in the source Series:
def countNotNull(grp):
    return grp.notnull().sum()
Don't be misled by the name of the notnull function: it also treats NaN values as missing, not just None.
Then include this function in your aggregation:
df.groupby("a")['b'].agg([np.sum, np.mean, np.median, np.std, len, countNotNull])
The result is:
sum mean median std len countNotNull
a
0 7.0 3.5 3.5 3.535534 3.0 2.0
1 6.0 3.0 3.0 1.414214 3.0 2.0
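For what it's worth, pandas also has built-in aggregations for this: 'size' counts all rows while 'count' counts only non-NaN values, so the custom helper can be avoided (a sketch using the same df as above):
df.groupby("a")['b'].agg(['sum', 'mean', 'median', 'std', 'size', 'count'])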
I am filling in the NaN values in one column of my dataframe using the following code:
for i in tqdm(range(nadf.shape[0])):
    a = nadf["primary"][i]
    nadf["count"][i] = np.ceil(d[a]*a)
This code replaces the NaN values in "count" by looking up the value of "primary" in the dictionary d and multiplying it by the "primary" value itself. nadf has 16 million rows. I understand that the execution will be slow, but is there a method to speed this up?
If I understood your question and dataframe correctly, the problem can be solved the following way using pandas' built-in functionality:
Please follow comments in code, feel free to ask questions.
import pandas as pd
import numpy as np
import math
def fill_nan(row, _d):
    """Fill NaN values in the "count" column based on the "primary" value and dictionary _d."""
    if math.isnan(row["count"]):
        return np.ceil(_d[row["primary"]] * row["primary"])  # ceil(d[primary] * primary), as in the question
    return row["count"]  # not NaN, keep as is


if __name__ == "__main__":
    d = {1: 10, 2: 20, 3: 30}
    df = pd.DataFrame({
        "primary": [1, 2, 3, 1, 2, 1, 2],
        "count": [10.1, 4, 5, np.nan, np.nan, 4, np.nan]
    })
    df["count"] = df.apply(lambda row: fill_nan(row, d), axis=1)  # fill the NaNs here
    print(df)
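Since a row-wise apply can still be slow on 16 million rows, here is a vectorized sketch of the same idea (my assumption: every "primary" value has an entry in d), using Series.map to fill only the missing values of the example df above:
mask = df["count"].isna()
# look up d[primary], multiply by primary, take the ceiling, and write only into the NaN rows
df.loc[mask, "count"] = np.ceil(df.loc[mask, "primary"].map(d) * df.loc[mask, "primary"])
print(df)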