I'm a PhD student in sociology working on my dissertation. In the course of some data analysis, I have bumped up against the following problem.
I have a table of measured values of some variable over a series of years. The values count how many events of a certain type occurred in a given year. Here is a sample of what it looks like:
year var
1983 22
1984 55
1985 34
1986 29
1987 15
1988 20
1989 41
So, e.g. in 1984, 55 such events occurred over the whole year.
One way to represent this data over the domain of real numbers in [1983, 1990) is with a piecewise function f:
f(x) = var if floor(x) == year, for all x in [1983, 1990).
This function plots as a series of horizontal line segments of width 1, tracing out a bar chart of the variable. The area under each of these segments is equal to the variable's value in that year. However, for this variable, I know that in each year the rate is not constant over the whole year. In other words, the events don't suddenly jump from one yearly rate to another overnight on Dec 31, as the (discontinuous) function f suggests. I don't know exactly how the rate changes, but I'd like to assume a smooth transition from year to year.
So, what I want is a function g which is both continuous and smooth (continuously differentiable) over the domain [1983, 1990), which also preserves the yearly totals. That is, the definite integral of g from 1984 to 1985 must still equal 55, and same for all other years. (So, for example, an n-degree polynomial which hits all the midpoints of the bars will NOT work.) Also, I'd like g to be a piecewise function, with all the pieces relatively simple -- quadratics would be best, or a sinusoid.
In sum: I want g to be a series of parabolas defined over each year, which smoothly transition from one to the other (left and right limits of g'(x) should be equal at the year boundaries), and where the area under each parabola is equal to the totals given by my data above.
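To make the constraints explicit (in my own notation): on each year interval [y, y+1), write g(x) = a_y + b_y*(x - y) + c_y*(x - y)^2. The requirements are then that the integral of g from y to y+1, which works out to a_y + b_y/2 + c_y/3, equals that year's count, and that both g and g' agree from the left and the right at every year boundary.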
I've drawn a crude version of what I want here. The cartoon uses the same data as above, with the black curve representing my hoped-for function g. Toward the right end my drawing gets particularly bad, especially for 1988 and 1989, but it's just meant to give a picture of what I would like to end up with.
Thanks for your help, or for pointing me towards other resources you think might be helpful!
PS I have looked at this paper which is linked inside this question. I agree with the authors (see section 4) that if I could replace my data with pseudodata d' using matrix A, from which I could very simply generate some sort of smooth function, that would be great, but they do not say how A could be obtained. Just some food for thought. Thanks again!
PPS What I need is a reliable method of generating g, given ANY data table as above. I actually have hundreds of these kinds of yearly count data, so I need a general solution.
You need the integral of your curve to go through a specific set of points, defined by the cumulative totals, so...
Interpolate between the cumulative totals to get an integral curve, and then
take the derivative of that to get the function you're looking for.
Since you want your function to be "continuous and smooth", i.e., C1-continuous, the integral curve you interpolate needs to be C2-continuous, i.e., it has to have continuous first and second derivatives. You can use polynomial interpolation, sinc interpolation, splines of sufficient degree, etc.
Using "natural" cubic splines to interpolate the integral will give you a piece-wise quadratic derivative that seems to satisfy all your requirements.
There's a pretty good description of the natural cubic splines here: http://mathworld.wolfram.com/CubicSpline.html
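For a concrete sketch of the above, here is roughly how it could be done with SciPy, using the data from the question (CubicSpline with bc_type='natural' builds the natural spline through the cumulative totals, and .derivative() returns the piecewise-quadratic g):

import numpy as np
from scipy.interpolate import CubicSpline

years = np.array([1983, 1984, 1985, 1986, 1987, 1988, 1989])
counts = np.array([22, 55, 34, 29, 15, 20, 41])

# cumulative totals at the year boundaries: F(1983) = 0, F(1984) = 22, F(1985) = 77, ...
boundaries = np.append(years, years[-1] + 1)
cumulative = np.concatenate(([0], np.cumsum(counts)))

# natural cubic spline through the cumulative totals (a C2-continuous integral curve)
F = CubicSpline(boundaries, cumulative, bc_type='natural')

# its derivative is a C1-continuous, piecewise-quadratic g
g = F.derivative()

# yearly totals are preserved by construction, e.g. 1984-1985:
print(F(1985) - F(1984))   # 55.0 (up to floating point)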
If your goal is to transform discrete data into a continuous representation, I would recommend looking up Kernel Density Estimation (KDE). KDE essentially models each data point as a (usually Gaussian) kernel and sums the kernels up, resulting in a smooth, continuous distribution. This blog does a very thorough treatment of KDE using the SciPy module.
One of the downsides of KDE is that it does not provide an analytic solution. If that is your goal, I would recommend looking up polynomial regression.
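As a rough sketch of what that looks like with SciPy (the sample below is a made-up placeholder; scipy.stats.gaussian_kde chooses a bandwidth automatically):

import numpy as np
from scipy.stats import gaussian_kde

data = np.random.normal(loc=5.0, scale=2.0, size=200)   # placeholder sample
kde = gaussian_kde(data)                                 # one Gaussian kernel per point, summed
grid = np.linspace(data.min(), data.max(), 100)
density = kde(grid)                                      # smooth, continuous density estimate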
Related
I have a sample of two datasets, as below:
If I were to plot the two, Data A would have a much smoother line graph and Data B would have a more spikey graph. How can I use pandas to determine or differentiate the smoothness of a dataset, e.g. with a calculation on the data that gives an index I can equate to the smoothness of the data? I looked for a solution and there was a suggestion using the difference of standard deviations, but that was based on R. Any ideas on this? What sort of calculation would give me what I want? Can anyone point me in the right direction?
Standard deviation doesn't necessarily mean smoothness in the sense you seem to mean. A straight-line graph (y = x), e.g. A:1 B:2 C:3 D:4, would be smooth in your sense, right? Whereas A:4 B:1 C:3 D:2 would not (it goes up and down / changes direction). I think what you are looking for is a change-of-slope calculation (the derivative of the function at different points, or the gradient).
In this case it's actually quite simple: just calculate the sum of the absolute differences between consecutive points. The series with the greatest total is the more "spikey" one.
You can shift the data (pandas.shift), subtract the shift from the original, take the absolute value and then the sum.
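A small sketch of that calculation with pandas (the two series below are made-up stand-ins for Data A and Data B):

import pandas as pd

a = pd.Series([1, 2, 3, 4])   # smooth, straight-line data
b = pd.Series([4, 1, 3, 2])   # spikey data

def spikiness(s):
    # sum of absolute differences between consecutive points; larger means spikier
    return (s - s.shift()).abs().sum()

print(spikiness(a))   # 3.0
print(spikiness(b))   # 6.0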
I am analyzing a time-series dataset that I am pretty sure can be broken down using an FFT. I want to develop a model to estimate the data using a sum of sine/cosine terms, but I am having trouble with the syntax for finding the frequencies in Python.
Here is a graph of the data
And here's a link to the original data: https://drive.google.com/open?id=1mqZtQ-txdd_AFbKGBlbSL6903CK-_kXl
Most of the examples I have seen have multiple samples per second/time period; however, the data in this set represent by-minute observations of some metric. Because of this, I've had trouble translating the answers I found online to this problem.
Here's my naive first approach:
import numpy as np
import matplotlib.pyplot as plt
from scipy import fftpack

X = fftpack.fft(data)
freqs = fftpack.fftfreq(len(data))
plt.plot(freqs, np.abs(X))
plt.show()
Instead of showing peaks at the major frequencies, my plot only has one peak at 0.
The FFT you posted has been shifted so that 0 is at the center. Data to the left of the center represents negative frequencies and to the right represents positive frequencies. If you zoom in and look more closely, I think you will see that there are two peaks close to the center that you are interpreting as a single peak at 0. Just looking at the positive side, the location of this peak will tell you which frequency is contributing significant signal power.
Like you said, your x-axis is probably incorrect. scipy.fftpack.fftfreq needs to know the time between samples (in seconds, if you want the frequency axis in Hz) to correctly determine the bandwidth and create the x-axis array. This should do it:
dt = 60 # 60 seconds between samples
freqs = fftpack.fftfreq(len(data),dt)
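Putting it together (a sketch that assumes data, fftpack, np, and plt are already set up as in your code), and plotting only the positive half of the spectrum:

dt = 60                                  # 60 seconds between samples
X = fftpack.fft(data)
freqs = fftpack.fftfreq(len(data), dt)   # frequency axis in Hz
pos = freqs > 0                          # keep only the positive frequencies
plt.plot(freqs[pos], np.abs(X)[pos])
plt.xlabel('Frequency (Hz)')
plt.show()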
I have a dataframe which includes heights. The data cannot go below zero, which is why I can't use the standard deviation: this data is not normally distributed, and the 68-95-99.7 rule fails in my case. Here is my dataframe, with its mean and SD.
0.77132064
0.02075195
0.63364823
0.74880388
0.49850701
0.22479665
0.19806286
0.76053071
0.16911084
0.08833981
Mean: 0.41138725956196015
Std: 0.2860541519582141
If I go 2 standard deviations below the mean, as you can see, the number becomes negative:
mean - 2 x std = 0.41138725956196015 - 2 x 0.2860541519582141 = -0.16072104435446805
I have tried using percentiles and am not satisfied with them, to be honest. How can I apply Chebyshev's inequality to this problem? Here is what I did so far:
np.polynomial.Chebyshev(df['Heights'])
But this returns numbers, not an SD level I can measure. Or do you think Chebyshev is the best choice in my case?
Expected solution:
I am expecting to get a range like: with 75% probability, the next height will be between 0.40 and 0.43, etc.
EDIT1: Added histogram
To be more clear, I have added my real data's histogram
EDIT2: Some values from real data
Mean: 0.007041500928135767
Percentile 50: 0.0052000000000000934
Percentile 90: 0.015500000000000047
Std: 0.0063790857035425025
Var: 4.06873389299246e-05
Thanks a lot
You seem to be confusing two ideas from the same mathematician, Chebyshev, but they are not the same thing.
Chebyshev's inequality states a bound that holds for any distribution with a finite variance. For two standard deviations, it says that at least three-fourths of the data items will lie within two standard deviations of the mean. As you note, for normal distributions about 19/20 of the items will lie in that interval, but Chebyshev's inequality is an absolute bound that holds regardless of the distribution. The fact that your data values are never negative does not change the truth of the inequality; it just means the actual proportion of values in the interval may be even larger than the bound guarantees.
Chebyshev polynomials do not involve statistics; they are simply a series (or two series) of polynomials commonly used for approximating functions in numerical computing. That is what np.polynomial.Chebyshev deals with, so it does not seem useful for your problem at all.
So calculate Chebyshev's inequality yourself. There is no need for a special function for that, since it is so easy (this is Python 3 code):
def Chebyshev_inequality(num_std_deviations):
    return 1 - 1 / num_std_deviations**2
You can change that to handle the case where num_std_deviations <= 1 (where the bound is vacuous), but the idea is obvious.
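For example:

print(Chebyshev_inequality(2))   # 0.75     -> at least 75% of values within 2 SDs of the mean
print(Chebyshev_inequality(3))   # 0.888... -> at least ~88.9% within 3 SDs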
In your particular case: the inequality says that at least 3/4, or 75%, of the data items will lie within 2 standard deviations of the mean, which means more than 0.41138725956196015 - 2 * 0.2860541519582141 and less than 0.41138725956196015 + 2 * 0.2860541519582141 (note the different signs), which simplifies to the interval
[-0.16072104435446805, 0.9834955634783884]
In your data, 100% of your data values are in that interval, so Chebyshev's inequality was correct (of course).
Now, if your goal is to predict or estimate where a certain percentile is, Chebyshev's inequality does not help much. It is an absolute lower bound, so it gives only one limit on a percentile. For example, by what we did above, if the up-to-25% of values allowed outside the interval were split evenly between the two tails, the 12.5th percentile would be at or above -0.16072104435446805 and the 87.5th percentile at or below 0.9834955634783884. Those limits are probably not what you want. If you want an estimate that is closer to the actual percentile, this is not the way to go. The 68-95-99.7 rule is an estimate--the actual locations may be higher or lower, but if the distribution is normal then the estimate will not be far off. Chebyshev's inequality does not give that kind of estimate.
If you want to estimate the 12.5th and 87.5th percentiles (showing where the middle 75 percent of the population falls), you should calculate those percentiles of your sample and use those values. If you don't know more details about the kind of distribution you have, I don't see any better way. There are reasons why normal distributions are so popular!
It sounds like you want the boundaries for the middle 75% of your data.
The middle 75% of the data is between the 12.5th percentile and the 87.5th percentile, so you can use the quantile function to get the values at the locations:
[df['Heights'].quantile(0.5 - 0.75/2), df['Heights'].quantile(0.5 + 0.75/2)]
#[0.09843618875, 0.75906485625]
As per the Quora answer "What does it mean when the standard deviation is higher than the mean? What does that tell you about the data?", SD is a measure of "spread" and the mean is a measure of "position". As you can see, these are more or less independent things. When all your samples are positive, the SD usually comes out comparable to or smaller than the mean, but 2 or 3 SDs can very well exceed it.
So, basically, SD being roughly equal to the mean means that your data are all over the place.
Now, a random variable that's strictly positive indeed cannot be exactly normally distributed. But for a rough estimation, seeing that you still have a bell shape, we can pretend it is and still use the SD as a rough measure of spread (though, since mean minus 2 or 3 SD goes negative here, those bounds lack any physical meaning and are unusable for the sake of our pretense):
E.g. to get a rough prediction of grass growth, you can still take the mean and apply whatever growth model you're using to it -- that will give you the new, prospective mean. Then applying the same to mean ± SD will give an idea of the new SD.
This is very rough, of course. But to get any better, you'll have to somehow check which distribution you're dealing with and use its peak and spread characteristics instead of mean and SD. And in any case, your prediction will not be any better than your growth model -- studies of which are anything but conclusive judging by e.g. https://onlinelibrary.wiley.com/doi/pdf/10.1111/j.1365-3040.2005.01490.x (not a single formula there).
I need help writing code that would allow me to baseline each peak in a set of peaks (enthalpy vs. time isothermal titration calorimetry data).
The data is created by the ITC instrument in this fashion (where '##' signifies the start of a peak and the data are listed below as time [seconds], enthalpy [ucal/s], and temperature [deg. C but unnecessary as it is usually held constant]):
#0
2.00,13.585249,25.00761
4.00,13.585438,25.00699
6.00,13.585557,25.00688
8.00,13.585472,25.00804
#1,6.0000
302.00,13.607173,25.00958
304.00,13.607608,25.00931
306.00,13.607758,25.00965
There are well over 100 points per peak (I've shortened it above), and I'd like to incorporate a linear equation to zero each enthalpy value in each peak so I may integrate each peak to produce a binding plot. I'd welcome any help/advice; thank you!
I was able to do it. Thank you to those who replied! I will leave this here for anyone who needs to baseline peaks with a linear fit in the future (assuming the first point and the last 40 points of each peak suffice for a decent fit line, as they usually do in ITC):
import numpy as np

#defining function to calculate the linear baseline of each peak in x vs y data
#(the fit uses each peak's first point plus 40 points near its end)
def calc_baseline(x, y):
    baselines = []
    for n in range(len(y)):
        # points assumed to lie on the baseline
        line_y = np.array(y[n][0:1] + y[n][-41:-1])
        line_x = np.array(x[n][0:1] + x[n][-41:-1])
        # first-degree (linear) fit through those points
        p = np.polyfit(line_x, line_y, 1)
        # evaluate the fitted line at every x of this peak
        baseline_y = np.array(x[n]) * p[0] + p[1]
        baselines.append(baseline_y)
    return baselines

#defining function to zero the baselines of peaks in x vs y data, assuming number_injections is a known integer
def zero_baseline(number_injections, y, baselines):
    zeroed_y_lists = []
    for i in range(number_injections + 1):
        zeroed_y = np.array(y[i]) - baselines[i]
        zeroed_y_lists.append(zeroed_y)
    return zeroed_y_lists
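A hypothetical usage, assuming x and y are lists of per-peak time and enthalpy lists parsed from the instrument file (the parsing step and these variable names are placeholders, not part of the functions above):

baselines = calc_baseline(x, y)                          # fitted baseline for each peak
zeroed = zero_baseline(number_injections, y, baselines)  # baseline-subtracted peaks
# integrate each zeroed peak (trapezoidal rule) to build the binding plot
areas = [np.trapz(zeroed[i], x[i]) for i in range(len(zeroed))]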
I need to be able to extrapolate, using 2-n data points, a curved trend line which can then be 'queried'. In my head it would look a bit like this (the blue line):
This is for a 'calories calculator' - I have data points regarding the amount of calories burned for an activity based on a certain weight: e.g. if you're 65kg, you'll burn around 420 calories, if you're 70kg, you'll burn 480 calories, if you're 75kg, you'll burn 550 calories, etc. etc. One axis would be for calories, the other for weight.
Obviously if I wanted to find out the amount of calories burned for a weight where I don't have a data point, I would need a trend line to 'query', which brings me on to the second part of my question: how would I go about doing this?
In summary:
How do I extrapolate a trend line in Python?
How do I 'query' this trend line to get estimates based on a point on this trend line?
numpy and scipy contain routines that let you fit expressions to data points. Once you have a fitted expression, you can evaluate and plot it over any range of values you like.
This answer - Nonlinear e^(-x) regression using scipy, python, numpy - contains an example of non-linear regression (it fits an exponential, like in your question, but one with a negative exponent; in general, fitting and extrapolating exponentials with positive exponents is a bad idea because the extrapolation is so sensitive to noise/uncertainty that it quickly becomes meaningless).
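As a rough sketch with the numbers from your question (the quadratic model here is just an assumption; numpy and scipy let you swap in whatever expression fits your data best, e.g. scipy.optimize.curve_fit for arbitrary model functions):

import numpy as np

weights = np.array([65.0, 70.0, 75.0])      # kg (the data points from the question)
calories = np.array([420.0, 480.0, 550.0])  # calories burned

# fit a quadratic trend line (an assumed model, not the only choice)
trend = np.poly1d(np.polyfit(weights, calories, 2))

# 'query' the trend line at weights with no data point
print(trend(72.5))   # interpolated estimate
print(trend(80.0))   # extrapolated estimate (treat with caution)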