How to find the optimized or correct peaks - python

I have the following graph.
I am using Python's scipy.signal.find_peaks to find the peaks, but I am not sure how to do it correctly. I did the following:
per = np.percentile(x, [70])
peaks_control = find_peaks(x, height=per[0])[0]
where x is the signal
array([1.07541259e+09, 1.13851049e+09, 1.19241492e+09, 1.23706527e+09,
1.27240093e+09, 1.29836131e+09, 1.31217483e+09, 1.32037296e+09,
1.31908858e+09, 1.30896503e+09, 1.29216550e+09, 1.26958042e+09,
1.24561632e+09, 1.21202121e+09, 1.16869371e+09, 1.11054499e+09,
1.04006154e+09, 9.65663403e+08, 8.87706760e+08, 8.09340093e+08,
7.37568765e+08, 6.79736364e+08, 6.38576457e+08, 6.06062937e+08,
5.80650350e+08, 5.55089744e+08, 5.36334499e+08, 5.20236597e+08,
5.06529837e+08, 4.91825175e+08, 4.77937063e+08, 4.65475058e+08,
4.56520513e+08, 4.48393240e+08, 4.41944988e+08, 4.34822844e+08,
4.33688578e+08, 4.33451049e+08, 4.36256177e+08, 4.33553613e+08,
4.29191142e+08, 4.28492541e+08, 4.24465967e+08, 4.20074825e+08,
4.19935897e+08, 4.16652681e+08, 4.12419580e+08, 4.11747552e+08,
4.08801166e+08, 4.02351981e+08, 3.99620513e+08, 3.98716550e+08,
3.46023077e+08, 3.53969464e+08, 4.17131235e+08, 5.19363869e+08,
6.50956410e+08, 8.01530303e+08, 9.50162937e+08, 1.08249790e+09,
1.18242378e+09, 1.22732168e+09, 1.20123077e+09, 1.21067599e+09,
1.21556410e+09, 1.21272261e+09, 1.20310023e+09, 1.18692774e+09,
1.16694033e+09, 1.14330117e+09, 1.11635338e+09, 1.07947529e+09,
1.03222145e+09, 9.73427972e+08, 9.08558974e+08, 8.39966200e+08,
7.70457343e+08, 7.04976224e+08, 6.49436131e+08, 6.02085548e+08,
5.68915385e+08, 5.41638928e+08, 5.18758741e+08, 5.01973660e+08,
4.88766667e+08, 4.77643823e+08, 4.65681818e+08, 4.56193240e+08,
4.46851515e+08, 4.36135198e+08, 4.32282984e+08, 4.27913520e+08,
4.23408625e+08, 4.24119580e+08, 4.22399068e+08, 4.22415385e+08,
4.20193939e+08, 4.17638462e+08, 4.14822378e+08, 4.10636364e+08,
4.08388345e+08, 4.04844522e+08, 4.00571562e+08, 4.00841026e+08,
4.00764802e+08, 4.00432867e+08, 4.00336364e+08, 4.00724709e+08,
4.03048019e+08, 3.57437995e+08, 3.62371096e+08, 4.16658741e+08,
5.10148019e+08, 6.31750117e+08, 7.65175991e+08, 8.96832168e+08,
1.01666597e+09, 1.10373263e+09, 1.14380816e+09, 1.11629790e+09,
1.12228904e+09, 1.12378788e+09, 1.11974825e+09, 1.10812774e+09,
1.09125035e+09, 1.07033566e+09, 1.04667389e+09, 1.02016830e+09,
9.86036830e+08, 9.42176457e+08, 8.88900233e+08, 8.27962005e+08,
7.64362238e+08, 7.00755245e+08, 6.42390909e+08, 5.92395338e+08,
5.52426107e+08, 5.26319114e+08, 5.03317249e+08, 4.85524942e+08,
4.70421911e+08, 4.59389510e+08, 4.51644988e+08, 4.46288578e+08,
4.41076923e+08, 4.37533566e+08, 4.31993007e+08, 4.28625641e+08,
4.25406294e+08, 4.21161538e+08, 4.19049650e+08, 4.16719347e+08,
4.13124242e+08, 4.08404429e+08, 4.06154545e+08, 4.03386014e+08,
4.00980420e+08, 3.99442657e+08, 3.97792075e+08, 3.95606527e+08,
3.97922378e+08, 3.98345221e+08, 3.96253613e+08, 3.95703030e+08,
3.96108392e+08, 3.67136830e+08, 3.58382051e+08, 3.95844289e+08,
4.70853846e+08, 5.76629837e+08, 6.97682284e+08, 8.21169930e+08,
9.32588112e+08, 1.01885804e+09, 1.06315152e+09, 1.05128159e+09,
1.03944545e+09, 1.03769580e+09, 1.03132145e+09, 1.02008601e+09,
1.00327389e+09, 9.85387646e+08, 9.66403030e+08, 9.44620746e+08,
9.18596737e+08, 8.82269697e+08, 8.37750816e+08, 7.84877156e+08,
7.27590443e+08, 6.70183217e+08, 6.14567832e+08, 5.67404895e+08,
5.30862471e+08, 5.03108625e+08, 4.84348718e+08, 4.68116550e+08,
4.55809907e+08, 4.46616783e+08, 4.39725175e+08, 4.34323077e+08])
The peaks I get are adjacent to each other; I can see that there is a little bump at the second, third, and fourth peak sites.
How should I calculate the peaks and ignore such adjacent ones? To calculate the width, prominence, etc., I need the peaks first. If I already knew them, I might be able to apply some threshold.
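For reference, a sketch of what I think I need, with placeholder values for the thresholds (height keeps only tall peaks, distance suppresses adjacent ones; neither value is tuned for this signal):
from scipy.signal import find_peaks
per = np.percentile(x, 70)
# both values below are placeholder guesses, not tuned
peaks, properties = find_peaks(x, height=per, distance=20)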

As you asked in the comments, I'll provide an example. Please note, this is only an example; an exploratory data analysis is always needed to choose the best way to reach your goal.
So, let's create some noisy data
import numpy as np
from scipy.signal import find_peaks, periodogram
import matplotlib.pyplot as plt
size = 100
a = np.linspace(1, .5, size)
x = np.linspace(0, 50, size)
np.random.seed(0)
y = a * np.sin(x) + np.random.normal(0, .1, size) + 5
Now, let's try to find the peaks with find_peaks from scipy.signal:
peaks = find_peaks(y)[0]
plt.plot(x, y)
plt.plot(x[peaks], y[peaks], marker='o', ls='none')
plt.show()
As you can see, there are some "wrong" peaks. We need to set the distance argument in find_peaks (see the documentation).
Let us suppose that we don't know the distance between the peaks. In this case, we can see that the data are periodic, so we can find the period with a periodogram and use it as the distance in find_peaks:
_f, _p = periodogram(y, nfft=2**6)
# calculate the sample rate of x
sample_rate = 1 / np.median(np.diff(x))
periods = 1 / _f[1:] / sample_rate
density = _p[1:] / _p[1:].max()
max_density_idx = density.argmax()
period = periods[max_density_idx]
plt.semilogx(periods, density)
plt.scatter(period, density[max_density_idx], color='r')
plt.title(f"period {period:.2f}")
plt.show()
Now we can use the period as the distance argument in find_peaks:
peaks = find_peaks(y, distance=period)[0]
plt.plot(x, y)
plt.plot(x[peaks], y[peaks], marker='o', ls='none')
plt.show()
Update
In your specific case, it's a little bit different.
Define the signal (I'll call the variables X and Y):
Y = np.array([1.07541259e+09, 1.13851049e+09, 1.19241492e+09, 1.23706527e+09, 1.27240093e+09, 1.29836131e+09, 1.31217483e+09, 1.32037296e+09, 1.31908858e+09, 1.30896503e+09, 1.29216550e+09, 1.26958042e+09, 1.24561632e+09, 1.21202121e+09, 1.16869371e+09, 1.11054499e+09, 1.04006154e+09, 9.65663403e+08, 8.87706760e+08, 8.09340093e+08, 7.37568765e+08, 6.79736364e+08, 6.38576457e+08, 6.06062937e+08, 5.80650350e+08, 5.55089744e+08, 5.36334499e+08, 5.20236597e+08, 5.06529837e+08, 4.91825175e+08, 4.77937063e+08, 4.65475058e+08, 4.56520513e+08, 4.48393240e+08, 4.41944988e+08, 4.34822844e+08, 4.33688578e+08, 4.33451049e+08, 4.36256177e+08, 4.33553613e+08, 4.29191142e+08, 4.28492541e+08, 4.24465967e+08, 4.20074825e+08, 4.19935897e+08, 4.16652681e+08, 4.12419580e+08, 4.11747552e+08, 4.08801166e+08, 4.02351981e+08, 3.99620513e+08, 3.98716550e+08, 3.46023077e+08, 3.53969464e+08, 4.17131235e+08, 5.19363869e+08, 6.50956410e+08, 8.01530303e+08, 9.50162937e+08, 1.08249790e+09, 1.18242378e+09, 1.22732168e+09, 1.20123077e+09, 1.21067599e+09, 1.21556410e+09, 1.21272261e+09, 1.20310023e+09, 1.18692774e+09, 1.16694033e+09, 1.14330117e+09, 1.11635338e+09, 1.07947529e+09, 1.03222145e+09, 9.73427972e+08, 9.08558974e+08, 8.39966200e+08, 7.70457343e+08, 7.04976224e+08, 6.49436131e+08, 6.02085548e+08, 5.68915385e+08, 5.41638928e+08, 5.18758741e+08, 5.01973660e+08, 4.88766667e+08, 4.77643823e+08, 4.65681818e+08, 4.56193240e+08, 4.46851515e+08, 4.36135198e+08, 4.32282984e+08, 4.27913520e+08, 4.23408625e+08, 4.24119580e+08, 4.22399068e+08, 4.22415385e+08, 4.20193939e+08, 4.17638462e+08, 4.14822378e+08, 4.10636364e+08, 4.08388345e+08, 4.04844522e+08, 4.00571562e+08, 4.00841026e+08, 4.00764802e+08, 4.00432867e+08, 4.00336364e+08, 4.00724709e+08, 4.03048019e+08, 3.57437995e+08, 3.62371096e+08, 4.16658741e+08, 5.10148019e+08, 6.31750117e+08, 7.65175991e+08, 8.96832168e+08, 1.01666597e+09, 1.10373263e+09, 1.14380816e+09, 1.11629790e+09, 1.12228904e+09, 1.12378788e+09, 1.11974825e+09, 1.10812774e+09, 1.09125035e+09, 1.07033566e+09, 1.04667389e+09, 1.02016830e+09, 9.86036830e+08, 9.42176457e+08, 8.88900233e+08, 8.27962005e+08, 7.64362238e+08, 7.00755245e+08, 6.42390909e+08, 5.92395338e+08, 5.52426107e+08, 5.26319114e+08, 5.03317249e+08, 4.85524942e+08, 4.70421911e+08, 4.59389510e+08, 4.51644988e+08, 4.46288578e+08, 4.41076923e+08, 4.37533566e+08, 4.31993007e+08, 4.28625641e+08, 4.25406294e+08, 4.21161538e+08, 4.19049650e+08, 4.16719347e+08, 4.13124242e+08, 4.08404429e+08, 4.06154545e+08, 4.03386014e+08, 4.00980420e+08, 3.99442657e+08, 3.97792075e+08, 3.95606527e+08, 3.97922378e+08, 3.98345221e+08, 3.96253613e+08, 3.95703030e+08, 3.96108392e+08, 3.67136830e+08, 3.58382051e+08, 3.95844289e+08, 4.70853846e+08, 5.76629837e+08, 6.97682284e+08, 8.21169930e+08, 9.32588112e+08, 1.01885804e+09, 1.06315152e+09, 1.05128159e+09, 1.03944545e+09, 1.03769580e+09, 1.03132145e+09, 1.02008601e+09, 1.00327389e+09, 9.85387646e+08, 9.66403030e+08, 9.44620746e+08, 9.18596737e+08, 8.82269697e+08, 8.37750816e+08, 7.84877156e+08, 7.27590443e+08, 6.70183217e+08, 6.14567832e+08, 5.67404895e+08, 5.30862471e+08, 5.03108625e+08, 4.84348718e+08, 4.68116550e+08, 4.55809907e+08, 4.46616783e+08, 4.39725175e+08, 4.34323077e+08])
X = np.arange(Y.size)
Since Y.size is 200 and your plot spans 200 seconds, I assume the sample rate is 1 sample per second.
If we search for the peaks with the default distance, we find a lot of unwanted peaks:
peaks = find_peaks(Y)[0]
plt.plot(X, Y)
plt.plot(X[peaks], Y[peaks], marker='o', ls='none')
plt.show()
Let's do a periodogram
_f, _p = periodogram(Y, nfft=2**12)
# the sample rate of your signal
sample_rate = 1
periods = 1 / _f[1:] / sample_rate
density = _p[1:] / _p[1:].max()
max_density_idx = density.argmax()
period = periods[max_density_idx]
p_peaks_idx = find_peaks(density)[0]
plt.semilogx(periods, density)
plt.scatter(period, density[max_density_idx], color='r')
period_peaks = []
for p_peak in p_peaks_idx:
    if density[p_peak] < .1:
        continue
    period_peaks.append(periods[p_peak])
    plt.scatter(periods[p_peak], density[p_peak])
    plt.text(periods[p_peak], density[p_peak], f"{periods[p_peak]:.1f} ", ha='right', va='center')
plt.title('periodogram')
plt.show()
We found two main periods
period_peaks
[56.888888888888886, 28.444444444444443]
If we use the highest-density period (56.9, the fundamental, or 1st harmonic), we miss a peak:
peaks = find_peaks(Y, distance=period_peaks[0])[0]
plt.plot(X, Y)
plt.plot(X[peaks], Y[peaks], marker='o', ls='none')
plt.show()
This could be because:
- you have too few observations
- the periodicity is not constant
If we empirically subtract a quantity (say 10) from the period, we find all the peaks
peaks = find_peaks(Y, distance=period_peaks[0] - 10)[0]
plt.plot(X, Y)
plt.plot(X[peaks], Y[peaks], marker='o', ls='none')
plt.show()
So we got peaks at
X[peaks]
array([ 7, 61, 118, 174])
Taking the difference, we see they're not regular (at this sample rate and with these few observations):
np.diff(X[peaks])
array([54, 57, 56])
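Once the peaks are located with a sensible distance, the widths and prominences that the question also asks about can be computed directly; a minimal sketch, reusing Y and peaks from above:
from scipy.signal import peak_prominences, peak_widths
prominences = peak_prominences(Y, peaks)[0]  # one prominence per detected peak
widths = peak_widths(Y, peaks, rel_height=0.5)[0]  # widths (in samples) at half prominence
print(prominences)
print(widths)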

Related

Change the scale of the graph image

I am trying to generate a graph and save an image of it in Python. Although the plotting of the values seems OK and I can get my picture, the scale of the graph is badly shifted.
If you compare the correct graph from the tutorial example with my bad graph generated from a different dataset, the curves are cut off at the bottom too early: the y-axis should start just above the highest values, and I should also see the curves for the highest x-values (in my case around 10^3).
But honestly, I think the problem is the scale of the y-axis; I just don't know which parameters I should change to fix it. I tried to play with some numbers (see the script below), but without any good results.
This is the code for calculation and generation of the graph image:
import numpy as np
hic_data = load_hic_data_from_reads('/home/besy/Hi-C/MOREX/TCC35_parsedV2/TCC35_V2_interaction_filtered.tsv', resolution=100000)
min_diff = 1
max_diff = 500
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(12, 12))
for cnum, c in enumerate(hic_data.chromosomes):
    if c in ['ChrUn']:
        continue
    dist_intr = []
    for diff in xrange(min_diff, min((max_diff, 1 + hic_data.chromosomes[c]))):
        beg, end = hic_data.section_pos[c]
        dist_intr.append([])
        for i in xrange(beg, end - diff):
            dist_intr[-1].append(hic_data[i, i + diff])
    mean_intrp = []
    for d in dist_intr:
        if len(d):
            mean_intrp.append(float(np.nansum(d)) / len(d))
        else:
            mean_intrp.append(0.0)
    xp, yp = range(min_diff, max_diff), mean_intrp
    x = []
    y = []
    for k in xrange(len(xp)):
        if yp[k]:
            x.append(xp[k])
            y.append(yp[k])
    l = plt.plot(x, y, '-', label=c, alpha=0.8)
    plt.hlines(mean_intrp[2], 3, 5.25 + np.exp(cnum / 4.3), color=l[0].get_color(),
               linestyle='--', alpha=0.5)
    plt.text(5.25 + np.exp(cnum / 4.3), mean_intrp[2], c, color=l[0].get_color())
    plt.plot(3, mean_intrp[2], '+', color=l[0].get_color())
plt.xscale('log')
plt.yscale('log')
plt.ylabel('number of interactions')
plt.xlabel('Distance between bins (in 100 kb bins)')
plt.grid()
plt.ylim(2, 250)
_ = plt.xlim(1, 110)
fig.savefig('/home/besy/Hi-C/MOREX/TCC35_V2_results/filtered/TCC35_V2_decay.png', dpi=fig.dpi)
I think the problem is the scale: I need the y-axis to start from 10^-1 (0.1). In order to change this, I tried:
min_diff = 0.1
.
.
.
dist_intr = []
for diff in xrange(min_diff, min((max_diff, 0.1 + hic_data.chromosomes[c]))):
.
.
.
plt.ylim((0.1, 20))
But these values return: "integer argument expected, got float".
I also tried to play with the max_diff, plt.ylim and plt.xlim parameters a little bit, but nothing changed much.
I would like to ask which parameter(s) I need to change, and how, to generate an image of the correctly scaled graph. Thank you in advance.
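If I understand the error correctly, it comes from xrange, which only accepts integers, while plt.ylim itself accepts floats. A minimal sketch of the difference, independent of the Hi-C data:
import matplotlib.pyplot as plt
plt.plot([1, 10, 100], [0.5, 5, 50])
plt.yscale('log')
plt.ylim(0.1, 250)  # floats are fine for axis limits
# xrange(0.1, 500)  # this is what raises "integer argument expected, got float"
plt.show()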

Using SciPy curve_fit to predict post final score

I have a post, and I need to predict the final score as close as I can.
Apparently using curve_fit should do the trick, although I don't really understand how I should use it.
I have two known values, that I collect 2 minutes after the post is posted.
Those are the comment count, referred as n_comments, and the vote count, referred as n_votes.
After an hour, I check the post again, and get the final_score (sum of all votes) value, which is what I want to predict.
I've looked at different examples online, but they all use multiple data points (I have just 2). Also, my initial data point contains more information (n_votes and n_comments), as I've found that you cannot accurately predict the score without both.
To use curve_fit you need a function. Mine looks like this:
def func(datapoint, k, t, s):
    return ((datapoint[0] * k + datapoint[1] * t) * 60 * datapoint[2]) * s
And a sample datapoint looks like this:
[n_votes, n_comments, hour]
This is the broken mess of my attempt, and the result doesn't look right at all.
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
initial_votes_list = [3, 1, 2, 1, 0]
initial_comment_list = [0, 3, 0, 1, 64]
final_score_list = [26,12,13,14,229]
# Those lists contain data about multiple posts; I want to predict one at a time, passing the parameters to the next.
def func(x, k, t, s):
    return ((x[0] * k + x[1] * t) * 60 * x[2]) * s
x = np.array([3, 0, 1])
y = np.array([26 ,0 ,2])
#X = [[a,b,c] for a,b,c in zip(i_votes_list,i_comment_list,[i for i in range(len(i_votes_list))])]
popt, pcov = curve_fit(func, x, y)
plt.plot(x, [ 1 , func(x, *popt), 2], 'g--',
label='fit: a=%5.3f, b=%5.3f, c=%5.3f' % tuple(popt))
plt.xlabel('Time')
plt.ylabel('Score')
plt.legend()
plt.show()
The plot should display the initial/final score and the current prediction.
I have some doubts regarding the function too. Initially, this is what it looked like:
(votes_per_minute + n_comments) * 60 * hour
But I replaced votes_per_minute with just votes. Considering that I collect this data after 2 minutes, and that I have a parameter there, I'd say that it's not too bad, but I don't really know.
Again, who guarantees that this is the best function possible? It would be nice to have the function discovered automatically, but I think this is ML territory...
EDIT:
Regarding the measurements: I can get as many as I want (every 15, 30, or 60 seconds), although they have to be collected while the post is at most 3 minutes old.
Disclaimer: This is just a suggestion on how you may approach this problem. There might be better alternatives.
I think it might be helpful to take into consideration the relationship between elapsed time since posting and the final score. The following curve from [OC] Upvotes over time for a reddit post models the behavior of the final score (total upvote count) over time:
The curve relies on the fact that once a post is online, you expect a somewhat linearly ascending upvote count that slowly converges/stabilizes around a maximum (and from there the slope is gentle/flat).
Moreover, we know that the number of votes/comments usually ascends as a function of time. The relationship between these elements can be considered a series; I chose to model it as a geometric progression (you can consider an arithmetic one if it fits better). Also, keep in mind that you are counting some elements twice: some users commented and upvoted, so you counted them twice, and some can comment multiple times but upvote only once. I chose to assume that only 70% (in code p = 0.7) of the users are unique commenters and that users who commented and upvoted represent 60% (in code e = 1 - 0.6 = 0.4) of the total number of users (commenters and upvoters); these assumptions lead to the equations used below.
So we have two equations to model the score; you can combine them and take their average. In code this looks like this:
import warnings
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
from mpl_toolkits.mplot3d import axes3d
# filter warnings
warnings.filterwarnings("ignore")
class Cfit:
    def __init__(self, votes, comments, scores, fit_size):
        self.votes = votes
        self.comments = comments
        self.scores = scores
        self.time = 60  # prediction time
        self.fit_size = fit_size
        self.popt = []

    def func(self, x, a, d, q):
        e = 0.4
        b = 1
        p = 0.7
        return (a * np.exp(1 - (b / self.time**d)) + q**self.time * e * (x + p * self.comments[:len(x)])) / 2

    def fit_then_predict(self):
        popt, pcov = curve_fit(self.func, self.votes[:self.fit_size], self.scores[:self.fit_size])
        return popt, pcov
# init
init_votes = np.array([3, 1, 2, 1, 0])
init_comments = np.array([0, 3, 0, 1, 64])
final_scores = np.array([26, 12, 13, 14, 229])
# fit and predict
cfit = Cfit(init_votes, init_comments, final_scores, 15)
popt, pcov = cfit.fit_then_predict()
# plot expectations
fig = plt.figure(figsize = (15,15))
ax1 = fig.add_subplot(2,3,(1,3), projection='3d')
ax1.scatter(init_votes, init_comments, final_scores, 'go', label='expected')
ax1.scatter(init_votes, init_comments, cfit.func(init_votes, *popt), 'ro', label = 'predicted')
# axis
ax1.set_xlabel('init votes count')
ax1.set_ylabel('init comments count')
ax1.set_zlabel('final score')
ax1.set_title('final score = f(init votes count, init comments count)')
plt.legend()
# evaluation: diff = expected - prediction
diff = abs(final_scores - cfit.func(init_votes, *popt))
ax2 = fig.add_subplot(2,3,4)
ax2.plot(init_votes, diff, 'ro', label='fit: a=%5.3f, d=%5.3f, q=%5.3f' % tuple(popt))
ax2.grid('on')
ax2.set_xlabel('init votes count')
ax2.set_ylabel('|expected-predicted|')
ax2.set_title('|expected-predicted| = f(init votes count)')
# plot expected and predictions as f(init-votes)
ax3 = fig.add_subplot(2,3,5)
ax3.plot(init_votes, final_scores, 'gx', label='fit: a=%5.3f, d=%5.3f, q=%5.3f' % tuple(popt))
ax3.plot(init_votes, cfit.func(init_votes, *popt), 'rx', label='fit: a=%5.3f, d=%5.3f, q=%5.3f' % tuple(popt))
ax3.set_xlabel('init votes count')
ax3.set_ylabel('final score')
ax3.set_title('final score = f(init votes count)')
ax3.grid('on')
# plot expected and predictions as f(init-comments)
ax4 = fig.add_subplot(2,3,6)
ax4.plot(init_comments, final_scores, 'gx', label='fit: a=%5.3f, d=%5.3f, q=%5.3f' % tuple(popt))
ax4.plot(init_comments, cfit.func(init_votes, *popt), 'rx', label='fit: a=%5.3f, d=%5.3f, q=%5.3f' % tuple(popt))
ax4.set_xlabel('init comments count')
ax4.set_ylabel('final score')
ax4.set_title('final score = f(init comments count)')
ax4.grid('on')
plt.show()
The output of the previous code is the following:
Well obviously the provided data-set is too small to evaluate any approach so it is up to you to test this more.
The main idea here is that you assume your data to follow a certain function/behavior (described in func) but you give it certain degrees of freedom (your parameters: a, d, q), and using curve_fit you try to approximate the best combination of these variables that will fit your input data to your output data. Once you have the returned parameters from curve_fit (in code popt) you just run your function using those parameters, like this for example (add this section at the end of the previous code):
# a function similar to func, to predict scores for given values
def score(votes_count, comments_count, popt):
    e, b, p = 0.4, 1, 0.7
    a, d, q = popt[0], popt[1], popt[2]
    t = 60
    return (a * np.exp(1 - (b / t**d)) + q**t * e * (votes_count + p * comments_count)) / 2
print("score for init-votes = 2 & init-comments = 0 is ", score(2, 0, popt))
Output:
score for init-votes = 2 & init-comments = 0 is 14.000150386210994
You can see that this output is close to the correct value (13), and hopefully with more data you can get better, more accurate approximations of your parameters and consequently better "predictions".
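As a side note on the mechanics of curve_fit itself: xdata may also be a (k, M)-shaped array, so the three-input function from the question could in principle be fitted to all five posts at once. A rough sketch under that assumption (the hour column is assumed to be 1 for every post, and the model is still only the questioner's guess):
import numpy as np
from scipy.optimize import curve_fit
init_votes = np.array([3, 1, 2, 1, 0], dtype=float)
init_comments = np.array([0, 3, 0, 1, 64], dtype=float)
final_scores = np.array([26, 12, 13, 14, 229], dtype=float)
hours = np.ones(5)  # assumption: every post is scored one hour after posting
def original_func(X, k, t, s):
    votes, comments, hour = X
    return ((votes * k + comments * t) * 60 * hour) * s
xdata = np.vstack([init_votes, init_comments, hours])  # shape (3, 5): one column per post
popt, pcov = curve_fit(original_func, xdata, final_scores, p0=(1, 1, 1))
print(popt)  # note: with hour fixed at 1, only the products k*s and t*s are really identified
print(original_func(xdata, *popt))  # in-sample predictions for the five posts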

Why doesn't "beta.fit" come out right?

import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
observed = [0.294, 0.2955, 0.235, 0.2536, 0.2423, 0.2844, 0.2099, 0.2355, 0.2946, 0.3388, 0.2202, 0.2523, 0.2209, 0.2707, 0.1885, 0.2414, 0.2846, 0.328, 0.2265, 0.2563, 0.2345, 0.2845, 0.1787, 0.2392, 0.2777, 0.3076, 0.2108, 0.2477, 0.234, 0.2696, 0.1839, 0.2344, 0.2872, 0.3224, 0.2152, 0.2593, 0.2295, 0.2702, 0.1876, 0.2331, 0.2809, 0.3316, 0.2099, 0.2814, 0.2174, 0.2516, 0.2029, 0.2282, 0.2697, 0.3424, 0.2259, 0.2626, 0.2187, 0.2502, 0.2161, 0.2194, 0.2628, 0.3296, 0.2323, 0.2557, 0.2215, 0.2383, 0.2166, 0.2315, 0.2757, 0.3163, 0.2311, 0.2479, 0.2199, 0.2418, 0.1938, 0.2394, 0.2718, 0.3297, 0.2346, 0.2523, 0.2262, 0.2481, 0.2118, 0.241, 0.271, 0.3525, 0.2323, 0.2513, 0.2313, 0.2476, 0.232, 0.2295, 0.2645, 0.3386, 0.2334, 0.2631, 0.226, 0.2603, 0.2334, 0.2375, 0.2744, 0.3491, 0.2052, 0.2473, 0.228, 0.2448, 0.2189, 0.2149]
a, b, loc, scale = stats.beta.fit(observed,floc=0,fscale=1)
ax = plt.subplot(111)
ax.hist(observed, alpha=0.75, color='green', bins=104, density=True)
ax.plot(np.linspace(0, 1, 100), stats.beta.pdf(np.linspace(0, 1, 100), a, b))
plt.show()
The α and β are out of whack (α=6.056697373013153, β=409078.57804704335).
The fitted curve also looks unreasonable: the histogram and the beta density differ in height on the y-axis.
The average of the data is about 0.25, but the expected value computed from the beta distribution is 6.05/(6.05+409078.57) = 1.47891162469e-05. This seems counterintuitive.
I think you are mixing up the code a bit with what you actually observe.
The main point to consider is that your beta fit will have both a and b, as well as loc and scale.
If you perform your fit using fixed loc/scale, i.e. scipy.stats.beta.fit(observed, floc=0, fscale=1), then your fitted a and b are: a = 33.26401059422594 and b = 99.0180817184922.
On the other hand, if you perform your fit with variable loc and scale, i.e. scipy.stats.beta.fit(observed), then you must call scipy.stats.beta.pdf() including those as parameters as well; with your data they are a = 6.056697380819225, b = 409078.5780469263, loc = 0.15710752697400227, scale = 6373.831662619217.
According to its documentation, the probability density above is defined in the “standardized” form. To shift and/or scale the distribution use the loc and scale parameters. Specifically, beta.pdf(x, a, b, loc, scale) is identically equivalent to beta.pdf(y, a, b) / scale with y = (x - loc) / scale.
Hence, the theoretical mean/average should be modified accordingly to include the scaling and location transformations.
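To make the last point concrete, here is a quick numerical check (a sketch using the fitted values quoted above): once loc and scale are included, the theoretical mean lands back near the observed average of roughly 0.25.
from scipy import stats
# fit with free loc and scale
a, b, loc, scale = 6.056697380819225, 409078.5780469263, 0.15710752697400227, 6373.831662619217
print(stats.beta.mean(a, b, loc=loc, scale=scale))  # ~0.25, i.e. loc + scale * a / (a + b)
# fit with floc=0, fscale=1
a2, b2 = 33.26401059422594, 99.0180817184922
print(a2 / (a2 + b2))  # also ~0.25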

How to uniformly resample a non-uniform signal using SciPy?

I have an (x, y) signal with non-uniform sample rate in x. (The sample rate is roughly proportional to 1/x). I attempted to uniformly re-sample it using scipy.signal's resample function. From what I understand from the documentation, I could pass it the following arguments:
scipy.resample(array_of_y_values, number_of_sample_points, array_of_x_values)
and it would return the array of
[[resampled_y_values],[new_sample_points]]
I'd expect it to return uniformly sampled data with a roughly identical form to the original, with the same minimal and maximal x value. But it doesn't:
# nu_data = [[x1, x2, ..., xn], [y1, y2, ..., yn]]
# with x values in ascending order
length = len(nu_data[0])
resampled = sg.resample(nu_data[1], length, nu_data[0])
uniform_data = np.array([resampled[1], resampled[0]])
plt.plot(nu_data[0], nu_data[1], uniform_data[0], uniform_data[1])
plt.show()
blue: nu_data, orange: uniform_data
It doesn't look unaltered, and the x scale has been resized too. If I try to fix the range by constructing the desired uniform x values myself and using them instead, the distortion remains:
length = len(nu_data[0])
resampled = sg.resample(nu_data[1], length, nu_data[0])
delta = (nu_data[0,-1] - nu_data[0,0]) / length
new_samplepoints = np.arange(nu_data[0,0], nu_data[0,-1], delta)
uniform_data = np.array([new_samplepoints, resampled[0]])
plt.plot(nu_data[0], nu_data[1], uniform_data[0], uniform_data[1])
plt.show()
What is the proper way to re-sample my data uniformly, if not this?
Please look at this rough solution:
import matplotlib.pyplot as plt
from scipy import interpolate
import numpy as np
x = np.array([0.001, 0.002, 0.005, 0.01, 0.02, 0.05, 0.1, 0.2, 0.5, 1, 2, 5, 10, 20])
y = np.exp(-x/3.0)
flinear = interpolate.interp1d(x, y)
fcubic = interpolate.interp1d(x, y, kind='cubic')
xnew = np.arange(0.001, 20, 1)
ylinear = flinear(xnew)
ycubic = fcubic(xnew)
plt.plot(x, y, 'X', xnew, ylinear, 'x', xnew, ycubic, 'o')
plt.show()
That is a slightly updated example from the SciPy page. If you execute it, you should see something like this:
Blue crosses are the initial function, your signal with a non-uniform sampling distribution. There are two results: orange x markers represent linear interpolation, and green dots cubic interpolation. The question is which option you prefer. Personally I don't like either of them, which is why I usually take 4 points and interpolate between them, then the next points, and so on, to get cubic interpolation without those strange overshoots. That is much more work, and I can't see doing it with scipy, so it will be slow. That is why I asked about the size of the data.
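If the goal is specifically a uniform grid over the original x range, the interpolant can simply be evaluated on np.linspace; a rough sketch along those lines, reusing the toy signal from above:
import numpy as np
from scipy import interpolate
x = np.array([0.001, 0.002, 0.005, 0.01, 0.02, 0.05, 0.1, 0.2, 0.5, 1, 2, 5, 10, 20])
y = np.exp(-x / 3.0)
fcubic = interpolate.interp1d(x, y, kind='cubic')
x_uniform = np.linspace(x.min(), x.max(), x.size)  # same range, evenly spaced
y_uniform = fcubic(x_uniform)
uniform_data = np.array([x_uniform, y_uniform])  # analogous to uniform_data in the question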

Python boxplot showing means and confidence intervals

How can I create a boxplot like the one below, in Python? I want to depict means and confidence bounds only (rather than proportions of IQRs, as in matplotlib boxplot).
I don't have any version constraints, and if your answer has some package dependency that's OK too. Thanks!
Use errorbar instead. Here is a minimal example:
import matplotlib.pyplot as plt
x = [2, 4, 3]
y = [1, 3, 5]
errors = [0.5, 0.25, 0.75]
plt.figure()
plt.errorbar(x, y, xerr=errors, fmt = 'o', color = 'k')
plt.yticks((0, 1, 3, 5, 6), ('', 'x3', 'x2', 'x1',''))
Note that boxplot is not the right approach; the conf_intervals parameter only controls the placement of the notches on the boxes (and we don't want boxes anyway, let alone notched boxes). There is no way to customize the whiskers except as a function of IQR.
Thanks to America, I propose a way to automate this kind of graph a little bit.
Below is an example of code generating 20 arrays from a normal distribution with mean = 0.25 and std = 0.1.
I used the formula W = t * s / sqrt(n) to calculate the margin of error of the confidence interval, with t the critical value from the t distribution (see scipy.stats.t), s the standard deviation, and n the number of values in an array.
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

list_samples = list()  # making a list of arrays
for i in range(20):
    list_samples.append(np.random.normal(loc=0.25, scale=0.1, size=20))

def W_array(array, conf=0.95):  # function that returns W based on the array provided
    t = stats.t(df=len(array) - 1).ppf((1 + conf) / 2)
    W = t * np.std(array, ddof=1) / np.sqrt(len(array))
    return W  # the margin of error

W_list = list()
mean_list = list()
for i in range(len(list_samples)):
    W_list.append(W_array(list_samples[i]))  # makes a list of W for each array
    mean_list.append(np.mean(list_samples[i]))  # same for the means, to plot

plt.errorbar(x=mean_list, y=range(len(list_samples)), xerr=W_list, fmt='o', color='k')
plt.axvline(.25, ls='--')  # this is only to demonstrate that roughly 95%
                           # of the 95% CIs contain the actual mean
plt.yticks([])
plt.show()
