Retrieving the average of averages in a pandas DataFrame

I have a large pandas DataFrame df:
year count
1983 5
1983 4
1983 7
...
2009 8
2009 11
2009 30
and I aim to sample 10 data points per year 100 times and get the mean and standard deviation of count per year. The signs of the count values are determined randomly.
I want to randomly sample 10 data points per year, which can be done by:
new_df = pd.DataFrame(columns=['year', 'count'])
ref = df.year.unique()
for i in range(len(ref)):
    appended_df = df[df['year'] == ref[i]].sample(n=10)
    new_df = pd.concat([new_df, appended_df])
Then, I assign a random sign to count (so that by chance each count can be positive or negative) and store it as value, which can be done by:
from random import randint

vlist = []
for i in range(len(new_df)):
    if randint(0, 1) == 0:
        vlist.append(new_df['count'].iloc[i])    # note: new_df.count is the DataFrame method, so use ['count']
    else:
        vlist.append(new_df['count'].iloc[i] * -1)
new_df['value'] = vlist
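The sign-flipping loop above can also be vectorized with numpy. A minimal sketch (the toy new_df and the seeded default_rng are illustrative, not from the original):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# toy stand-in for the sampled frame
new_df = pd.DataFrame({"year": [1983, 1983, 2009], "count": [5, 4, 8]})

# draw a random +1/-1 per row and multiply in one shot
signs = rng.choice([-1, 1], size=len(new_df))
new_df["value"] = new_df["count"].to_numpy() * signs
```

This replaces the per-row randint loop with a single array multiplication.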
Getting the mean and standard deviation per year is simple:
xdf = new_df.groupby("year").agg([np.mean, np.std]).reset_index()
But I can't seem to find an optimal way to repeat this sampling 100 times per year, store the mean values, and get the mean and standard deviation of those 100 means per year. I could use a for loop, but it would take too long to run.
Essentially, the output should be in the form of the following (the values are arbitrary here):
year mean_of_100_means total_sd
1983 4.22 0.43
1984 -6.39 1.25
1985 2.01 0.04
...
2007 11.92 3.38
2008 -5.27 1.67
2009 1.85 0.99
Any insights would be appreciated.

Try:
def fn(x):
    _100_means = [x.sample(10).mean() for _ in range(100)]
    return {
        "mean_of_100_means": np.mean(_100_means),
        "total_sd": np.std(_100_means),
    }

print(df.groupby("year")["count"].apply(fn).unstack().reset_index())
EDIT: Changed the computation of means.
Prints:
year mean_of_100_means total_sd
0 1983 48.986 8.330787
1 1984 48.479 10.384896
2 1985 48.957 7.854900
3 1986 50.821 10.303847
4 1987 50.198 9.835832
5 1988 47.497 8.678749
6 1989 46.763 9.197387
7 1990 49.696 8.837589
8 1991 46.979 8.141969
9 1992 48.555 8.603597
10 1993 50.220 8.263946
11 1994 48.735 9.954741
12 1995 49.759 8.532844
13 1996 49.832 8.998654
14 1997 50.306 9.038316
15 1998 49.513 9.024341
16 1999 50.532 9.883166
17 2000 49.195 9.177008
18 2001 50.731 8.309244
19 2002 48.792 9.680028
20 2003 50.251 9.384759
21 2004 50.522 9.269677
22 2005 48.090 8.964458
23 2006 49.529 8.250701
24 2007 47.192 8.682196
25 2008 50.124 9.337356
26 2009 47.988 8.053438
The dataframe was created with:
data = []
for y in range(1983, 2010):
    for i in np.random.randint(0, 100, size=1000):
        data.append({"year": y, "count": i})
df = pd.DataFrame(data)
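If the random sign flip described in the question is also needed, fn can be extended along these lines. This is a sketch, not part of the original answer: fn_signed, the two-year synthetic df, and the seeded Generator are all my additions:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

def fn_signed(x):
    means = []
    for _ in range(100):
        # draw 10 observations, then a random +1/-1 sign per observation
        s = x.sample(10, random_state=int(rng.integers(2**31 - 1)))
        signs = rng.choice([-1, 1], size=len(s))
        means.append((s.to_numpy() * signs).mean())
    return {"mean_of_100_means": np.mean(means), "total_sd": np.std(means)}

# small synthetic frame for demonstration
data = [{"year": y, "count": int(c)}
        for y in (1983, 1984)
        for c in rng.integers(0, 100, size=200)]
df = pd.DataFrame(data)

out = df.groupby("year")["count"].apply(fn_signed).unstack().reset_index()
```

With the signs centered on zero, the per-year means should hover near 0 rather than near the raw average.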

I think you can use pandas groupby and sample functions together to take 10 samples from each year of your DataFrame. If you put this in a loop, then you can sample it 100 times, and combine the results.
It sounds like you only need the standard deviation of the 100 means (not the standard deviation within each sample of 10 observations). So you can compute only the mean in each groupby-and-sample pass, then take the standard deviation across those 100 means when building the total_sd column of the final DataFrame.
import numpy as np
import pandas as pd

np.random.seed(42)

## create a random DataFrame with 100 entries per year for 1980-1999, length 2000
df = pd.DataFrame({
    'year': [year for year in range(1980, 2000) for _ in range(100)],
    'count': np.random.randint(1, 100, size=2000)
})

list_of_means = []
## sample 10 observations from each year, repeat 100 times, storing the per-year means
for _ in range(100):
    df_sample = df.groupby("year").sample(10).groupby("year").mean()
    list_of_means.append(df_sample['count'].tolist())
array_of_means = [np.array(x) for x in list_of_means]

result = pd.DataFrame({
    'year': df.year.unique(),
    'mean_of_100_means': [np.mean(k) for k in zip(*array_of_means)],
    'total_sd': [np.std(k) for k in zip(*array_of_means)]
})
This results in:
>>> result
year mean_of_100_means total_sd
0 1980 50.316 8.656948
1 1981 48.274 8.647643
2 1982 47.958 8.598455
3 1983 49.357 7.854620
4 1984 48.977 8.523484
5 1985 49.847 7.114485
6 1986 47.338 8.220143
7 1987 48.106 9.413085
8 1988 53.487 9.237561
9 1989 47.376 9.173845
10 1990 46.141 9.061634
11 1991 46.851 7.647189
12 1992 49.389 7.743318
13 1993 52.207 9.333309
14 1994 47.271 8.177815
15 1995 52.555 8.377355
16 1996 47.606 8.668769
17 1997 52.584 8.200558
18 1998 51.993 8.695232
19 1999 49.054 8.178929
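The 100-iteration loop can also be collected into one wide frame and aggregated in a single pass. A sketch under the same setup as above (DataFrameGroupBy.sample needs pandas >= 1.1; runs is an illustrative name):

```python
import numpy as np
import pandas as pd

np.random.seed(42)
df = pd.DataFrame({
    'year': [year for year in range(1980, 2000) for _ in range(100)],
    'count': np.random.randint(1, 100, size=2000)
})

# one column per run: 100 resampled per-year means side by side
runs = pd.concat(
    [df.groupby("year").sample(10).groupby("year")["count"].mean() for _ in range(100)],
    axis=1,
)

result = pd.DataFrame({
    "year": runs.index,
    "mean_of_100_means": runs.mean(axis=1).to_numpy(),
    "total_sd": runs.std(axis=1, ddof=0).to_numpy(),  # ddof=0 matches np.std above
})
```

Aggregating row-wise over runs avoids the zip(*array_of_means) transposition step.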


Create columns of relative values with a set starting base value

In python, I have a dataframe similar to this:
print(transport)
year cycles cars lorries
1993 249 21000 1507
1994 249 21438 1539
1995 257 21817 1581
1996 253 22364 1630
1997 253 22729 1668
I would like to look at the change over time for each form of transport, relative to the earliest value i.e. for 1993. I would like to create a new dataframe based on transport where 1993 is the base year with the value set to 100.0 and all subsequent values be relative to that.
I'm really new to python and I can't figure out how to approach this.
You can divide all the values by the corresponding number from 1993 and multiply by 100.0 to get the results -
transport = transport.set_index('year')
transport / transport.loc[1993] * 100.0
# cycles cars lorries
# year
# 1993 100.000000 100.000000 100.000000
# 1994 100.000000 102.085714 102.123424
# 1995 103.212851 103.890476 104.910418
# 1996 101.606426 106.495238 108.161911
# 1997 101.606426 108.233333 110.683477
You can use iterrows:
transport.set_index("year", inplace=True)
y1993 = transport[transport.index == 1993]
for index, row in transport.iterrows():
    for column in transport.columns:
        transport.loc[index, column] = (row[column] / y1993[column].values[0]) * 100
transport
Output
         cycles       cars    lorries
year
1993    100.000    100.000    100.000
1994    100.000    102.086    102.123
1995    103.213    103.890    104.910
1996    101.606    106.495    108.162
1997    101.606    108.233    110.683

Sorting a DataFrame with an Index Given as a List

I have a pandas df that I need to sort based on a fixed order given in a list. The problem is that my attempted sort does not move the rows into the order given by the list. My list and DataFrame (df) look like this:
months = ['5','6','7','8','9','10','11','12','1','2']
df =
year 1992 1993 1994 1995
month
1 -0.343107 -0.211959 0.437974 -1.219363
2 -0.383353 0.888650 1.054926 0.714846
5 0.057198 1.246682 0.042684 0.275701
6 -0.100018 -0.801554 0.001111 0.382633
7 -0.283815 0.204448 0.350705 0.130652
8 0.042195 -0.433849 -1.481228 -0.236004
9 1.059776 0.875214 0.304638 0.127819
10 -0.328911 -0.256656 1.081157 1.057449
11 -0.488213 -0.957050 -0.813885 1.403822
12 0.973031 -0.246714 0.600157 0.579038
The closest that I have gotten is this -
newdf = pd.DataFrame(df.values, index=list(months))
...but it does not reorder the rows. This command only replaces the index with the months, without moving the data.
0 1 2 3
5 -0.343107 -0.211959 0.437974 -1.219363
6 -0.383353 0.888650 1.054926 0.714846
7 0.057198 1.246682 0.042684 0.275701
8 -0.100018 -0.801554 0.001111 0.382633
9 -0.283815 0.204448 0.350705 0.130652
10 0.042195 -0.433849 -1.481228 -0.236004
11 1.059776 0.875214 0.304638 0.127819
12 -0.328911 -0.256656 1.081157 1.057449
1 -0.488213 -0.957050 -0.813885 1.403822
2 0.973031 -0.246714 0.600157 0.579038
I need the result to look like -
year 1992 1993 1994 1995
month
5 0.057198 1.246682 0.042684 0.275701
6 -0.100018 -0.801554 0.001111 0.382633
7 -0.283815 0.204448 0.350705 0.130652
8 0.042195 -0.433849 -1.481228 -0.236004
9 1.059776 0.875214 0.304638 0.127819
10 -0.328911 -0.256656 1.081157 1.057449
11 -0.488213 -0.957050 -0.813885 1.403822
12 0.973031 -0.246714 0.600157 0.579038
1 -0.343107 -0.211959 0.437974 -1.219363
2 -0.383353 0.888650 1.054926 0.714846
Assuming df.index is dtype('int64'), first convert months to integers. Then use loc:
months = [*map(int, months)]
out = df.loc[months]
If df.index is dtype('O'), you can use loc right away, i.e. you don't need the first line.
Output:
year 1992 1993 1994 1995
month
5 0.057198 1.246682 0.042684 0.275701
6 -0.100018 -0.801554 0.001111 0.382633
7 -0.283815 0.204448 0.350705 0.130652
8 0.042195 -0.433849 -1.481228 -0.236004
9 1.059776 0.875214 0.304638 0.127819
10 -0.328911 -0.256656 1.081157 1.057449
11 -0.488213 -0.957050 -0.813885 1.403822
12 0.973031 -0.246714 0.600157 0.579038
1 -0.343107 -0.211959 0.437974 -1.219363
2 -0.383353 0.888650 1.054926 0.714846
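The same reordering can also be done with reindex, which takes the target label order directly. A sketch with illustrative random data (the out name is mine):

```python
import numpy as np
import pandas as pd

months = ['5', '6', '7', '8', '9', '10', '11', '12', '1', '2']

df = pd.DataFrame(
    np.random.randn(10, 4),
    index=pd.Index([1, 2, 5, 6, 7, 8, 9, 10, 11, 12], name="month"),
    columns=[1992, 1993, 1994, 1995],
)

# reindex pulls the rows into the order of the given label list
out = df.reindex([int(m) for m in months])
```

As with loc, the labels must match the index dtype, hence the int conversion.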

Display mean and deviation values on grouped boxplot in Python

I want to display mean and standard deviation values above each of the boxplots in the grouped boxplot (see picture).
My code is
import pandas as pd
import seaborn as sns
from os.path import expanduser as ospath
df = pd.read_excel(ospath('~/Documents/Python/Kandidatspeciale/TestData.xlsx'),'Ark1')
bp = sns.boxplot(y='throw angle', x='incident angle',
                 data=df,
                 palette="colorblind",
                 hue='Bat type')
bp.set_title('Rubber Comparison',fontsize=15,fontweight='bold', y=1.06)
bp.set_ylabel('Throw Angle [degrees]',fontsize=11.5)
bp.set_xlabel('Incident Angle [degrees]',fontsize=11.5)
Where my dataframe, df, is
Bat type incident angle throw angle
0 euro 15 28.2
1 euro 15 27.5
2 euro 15 26.2
3 euro 15 27.7
4 euro 15 26.4
5 euro 15 29.0
6 euro 30 12.5
7 euro 30 14.7
8 euro 30 10.2
9 china 15 29.9
10 china 15 31.1
11 china 15 24.9
12 china 15 27.5
13 china 15 31.2
14 china 15 24.4
15 china 30 9.7
16 china 30 9.1
17 china 30 9.5
I tried the following code. It needs to be independent of the number of x values (incident angles); for instance, it should also work for additional angles such as 45, 60, etc.
m = df.mean(axis=0)   # mean values
st = df.std(axis=0)   # standard deviation values
for i, line in enumerate(bp['medians']):
    x, y = line.get_xydata()[1]
    text = ' μ={:.2f}\n σ={:.2f}'.format(m[i], st[i])
    bp.annotate(text, xy=(x, y))
Can somebody help?
This question brought me here since I was also looking for a similar solution with seaborn.
After some trial and error, you just have to change the for loop to:
for i in range(len(m)):
    bp.annotate(
        ' μ={:.2f}\n σ={:.2f}'.format(m[i], st[i]),
        xy=(i, m[i]),
        horizontalalignment='center'
    )
This change worked for me (although I just wanted to print the actual median values). You can also tweak the fontsize, color, or style (e.g., font weight) by passing them as arguments to annotate.
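Putting the fix together, a self-contained sketch with synthetic data (the column names mirror the question; the Agg backend line is only there so the script runs without a display):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend; not needed in an interactive session
import numpy as np
import pandas as pd
import seaborn as sns

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "incident angle": np.repeat([15, 30], 12),
    "Bat type": np.tile(["euro", "china"], 12),
    "throw angle": rng.normal(25, 3, size=24),
})

bp = sns.boxplot(y="throw angle", x="incident angle", hue="Bat type", data=df)

# one annotation per x category, placed at that category's overall mean
m = df.groupby("incident angle")["throw angle"].mean().to_numpy()
st = df.groupby("incident angle")["throw angle"].std().to_numpy()
for i in range(len(m)):
    bp.annotate(' μ={:.2f}\n σ={:.2f}'.format(m[i], st[i]),
                xy=(i, m[i]), horizontalalignment='center')
```

The loop length follows the number of groups, so it works unchanged if more incident angles (45, 60, ...) are added.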

Applying matrix product in specific pandas columns

I have a pandas DataFrame structured in the following way
0 1 2 3 4 5 6 7 8 9
0 42 2012 106 1200 0.112986 -0.647709 -0.303534 31.73 14.80 1096
1 42 2012 106 1200 0.185159 -0.588728 -0.249392 31.74 14.80 1097
2 42 2012 106 1200 0.199910 -0.547780 -0.226356 31.74 14.80 1096
3 42 2012 106 1200 0.065741 -0.796107 -0.099782 31.70 14.81 1097
4 42 2012 106 1200 0.116718 -0.780699 -0.043169 31.66 14.78 1094
5 42 2012 106 1200 0.280035 -0.788511 -0.171763 31.66 14.79 1094
6 42 2012 106 1200 0.311319 -0.663151 -0.271162 31.78 14.79 1094
In which columns 4, 5 and 6 are actually the components of a vector. I want to apply a matrix multiplication to these columns, i.e. replace columns 4, 5 and 6 with the vector resulting from multiplying the previous vector by a matrix.
What I did was
DC = [[ .. definition of multiplication matrix .. ]]

def rotate(vector):
    return dot(DC, vector)

data[[4, 5, 6]] = data[[4, 5, 6]].apply(rotate, axis='columns')
Which I thought should work, but the returned DataFrame is exactly the same as the original.
What am I missing here?
Your code is correct but very slow. You can use the values property to get the ndarray and np.dot() to transform all the vectors at once:
import numpy as np
import pandas as pd

DC = np.random.randn(3, 3)
df = pd.DataFrame(np.random.randn(1000, 10))
df2 = df.copy()

# vectorized: rotate all rows at once
df[[4, 5, 6]] = np.dot(DC, df[[4, 5, 6]].values.T).T

# row-by-row, as in the question
def rotate(vector):
    return np.dot(DC, vector)

df2[[4, 5, 6]] = df2[[4, 5, 6]].apply(rotate, axis='columns')

df.equals(df2)
On my PC, it's about 90x faster.
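The same one-shot transform can also be written with the @ operator, which some find easier to read. A small sketch showing the two forms agree (expected is an illustrative name):

```python
import numpy as np
import pandas as pd

DC = np.random.randn(3, 3)
df = pd.DataFrame(np.random.randn(1000, 10))

# (n, 3) @ DC.T applies DC to every row vector, same as np.dot(DC, X.T).T
expected = np.dot(DC, df[[4, 5, 6]].to_numpy().T).T
df[[4, 5, 6]] = df[[4, 5, 6]].to_numpy() @ DC.T
```

Both variants do a single matrix product over all rows instead of 1000 per-row calls.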

Cannot remove trend components and seasonal components

I am trying to build a model for predicting energy production, using an ARMA model.
The data I can use for training is the following;
(https://github.com/soma11soma11/EnergyDataSimulationChallenge/blob/master/challenge1/data/training_dataset_500.csv)
ID Label House Year Month Temperature Daylight EnergyProduction
0 0 1 2011 7 26.2 178.9 740
1 1 1 2011 8 25.8 169.7 731
2 2 1 2011 9 22.8 170.2 694
3 3 1 2011 10 16.4 169.1 688
4 4 1 2011 11 11.4 169.1 650
5 5 1 2011 12 4.2 199.5 763
...............
11995 19 500 2013 2 4.2 201.8 638
11996 20 500 2013 3 11.2 234 778
11997 21 500 2013 4 13.6 237.1 758
11998 22 500 2013 5 19.2 258.4 838
11999 23 500 2013 6 22.7 122.9 586
As shown above, I can use data from July 2011 to May 2013 for training.
Using the training data, I want to predict the energy production in June 2013 for each of the 500 houses.
The problem is that the time series data is not stationary and has trend components and seasonal components (I checked this as follows.).
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

data_train = pd.read_csv('../../data/training_dataset_500.csv')
rng = pd.date_range('7/1/2011', '6/1/2013', freq='M')
house1 = data_train[data_train.House == 1][['EnergyProduction', 'Daylight', 'Temperature']].set_index(rng)

fig, axes = plt.subplots(nrows=1, ncols=3)
for i, column in enumerate(house1.columns):
    house1[column].plot(ax=axes[i], figsize=(14, 3), title=column)
plt.show()
With this data, I cannot fit an ARMA model that gives a good prediction. So I want to remove the trend and seasonal components and make the time series stationary. I tried, but I could not remove these components and make it stationary.
I would recommend the Hodrick-Prescott (HP) filter, which is widely used in macroeconometrics to separate the long-term trend component from short-term fluctuations. It is implemented as statsmodels.api.tsa.filters.hpfilter.
import pandas as pd
import matplotlib.pyplot as plt
import statsmodels.api as sm

df = pd.read_csv('/home/Jian/Downloads/data.csv', index_col=[0])

# get part of the data
x = df.loc[df.House == 1, 'Daylight']

# hp-filter returns (cycle, trend); set lamb=129600 following the suggestion for monthly data
x_cycle, x_trend = sm.tsa.filters.hpfilter(x, lamb=129600)

fig, axes = plt.subplots(figsize=(12, 4), ncols=3)
axes[0].plot(x)
axes[0].set_title('raw x')
axes[1].plot(x_trend)
axes[1].set_title('trend')
axes[2].plot(x_cycle)
axes[2].set_title('cyclical component')
