Covariance of two columns of a dataframe - python

Please forgive this question if it sounds too trivial, but I want to be sure I'm on the right track.
I have a data frame similar to the following, and I'm interested in understanding whether the two variables A and B vary together or otherwise.
          A          B
0   34.4534  35.444248
1   34.8915  24.693800
2    0.0000  21.586316
3   34.7767  23.783602
I am asked to plot a covariance between the two. However, from my research, it seems covariance is a single calculated value, just like the mean and standard deviation, not a distribution like a pdf/cdf that one can plot.
Is my perception about covariance right? What advice could you give me for some other way to understand the variability between these variables?

Is your perception right? - Yes
Covariance is a measure of the joint variability of two random variables and is represented by one number. This number is
positive if they "behave similarly" (which means roughly that positive peaks in variable 1 coincide with positive peaks in variable 2)
zero if they do not covary
negative if they "behave similarly" but with an inverse relationship (that is, negative peaks in one variable align with positive peaks in the other and vice versa)
import numpy as np
import pandas as pd

# create 3 random variables; var3 is based on var1, so they should covary
data = np.random.randint(-9, 9, size=(20, 3)).astype(float)
data[:, 2] = data[:, 0] + data[:, 2] * 0.5
df = pd.DataFrame(data, columns=['var1', 'var2', 'var3'])
df.plot(marker='.')
In the plot we see that var1 and var3 seem to covary, so to compute the covariance between all pairs of variables, pandas comes in handy:
>>> df.cov()
           var1       var2       var3
var1  31.326316  -5.389474  30.684211
var2  -5.389474  21.502632 -10.907895
var3  30.684211 -10.907895  37.776316
Since the actual values of the covariance depend on the scale of your input variables, you typically normalize the covariance by the respective standard deviations, which gives you the correlation, ranging from -1 (anticorrelated) to +1 (correlated). With pandas, this reads
>>> df.corr()
          var1      var2      var3
var1  1.000000 -0.207657  0.891971
var2 -0.207657  1.000000 -0.382724
var3  0.891971 -0.382724  1.000000
from which it becomes clear that var1 and var3 exhibit a strong correlation, exactly as we expected.
What advice could you give me for some other way to understand the variability between these variables? - Depends on the data
Since we don't know anything about the nature of your data, this is hard to say. Perhaps just as a starter (without intending to be exhaustive), here are some hints at what you could look at:
Spearman's rank correlation: more robust than the Pearson correlation coefficient, which is what we used above. Pearson essentially only captures linear correlation and produces less reliable results if your data exhibits some sort of non-linearity; if you suspect non-linear relationships in your data, go for Spearman (see the sketch after this list).
Autocorrelation: think about a sinusoidal signal that triggers another signal with a time lag of 90° (i.e. a cosine). In that case the ordinary covariance/correlation will tell you that the relationship is weak and may (falsely) lead you to conclude that there is no causal effect between the two signals. Autocorrelation is basically the correlation between shifted versions of your time series, which allows you to detect such lagged relationships.
there is probably much more, but perhaps that's good for a start
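As a quick illustration of the first two points, reusing the df from above (the lag of 3 is an arbitrary choice for the example):
# Spearman's rank correlation: same pandas call, different method
rank_corr = df.corr(method='spearman')

# lagged correlation between two different columns
lagged = df['var1'].corr(df['var3'].shift(3))

# autocorrelation of a single series at a given lag
auto = df['var1'].autocorr(lag=3)
print(rank_corr, lagged, auto, sep='\n')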

Related

Regression analysis for linear regression

I have a regression model where my target variable (days) is quantitative and ranges between 2 and 30. My RMSE is 2.5, and all the other X variables are nominal (categorical), so I have dummy encoded them.
I want to know what would be a good value for RMSE. I would like to get it within 1-1.5 or even lower, but I am not sure what I should do to achieve that.
Note: I have already tried feature selection and removing features with low importance.
Any ideas would be appreciated.
If your x values are categorical then it does not necessarily make much sense to bind them to a uniform grid. Who's to say categories A and B should be spaced the same distance apart as B and C? Assuming that they are will only lead to an incorrect representation of your results.
Since your choice of scale is the unknown, you would be better off, in terms of visualisation, setting your uniform x grid to be the day number and then seeing where the categories would fall on the y scale if given a linear relationship (see the sketch below).
RMS Error doesn't come into it at all if you don't have quantitative data for x and y.
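A minimal sketch of that suggestion, with made-up column names and values standing in for the real data:
import pandas as pd
import matplotlib.pyplot as plt

# hypothetical data: 'days' is the quantitative target, 'category' is one
# of the nominal predictors (both names and values are invented)
df = pd.DataFrame({
    'days':     [2, 5, 9, 14, 21, 30, 7, 12],
    'category': ['A', 'B', 'C', 'A', 'B', 'C', 'A', 'C'],
})

# put the known quantitative variable (days) on the x axis and simply see
# where each category lands, instead of forcing the categories onto a grid
for cat, grp in df.groupby('category'):
    plt.scatter(grp['days'], [cat] * len(grp), label=cat)
plt.xlabel('days')
plt.ylabel('category')
plt.show()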

How can I apply Chebyshev's inequality to this case?

I have a dataframe which includes heights. The data cannot go below zero; that's why I cannot use the standard deviation as if this were a normal distribution, and I cannot use the 68-95-99.7 rule here because it fails in my case. Here are my dataframe, mean and SD.
0.77132064
0.02075195
0.63364823
0.74880388
0.49850701
0.22479665
0.19806286
0.76053071
0.16911084
0.08833981
Mean: 0.41138725956196015
Std: 0.2860541519582141
If I go 2 standard deviations below the mean, as you can see, the number becomes negative:
mean - 2 * std = 0.41138725956196015 - 2 * 0.2860541519582141 = -0.160721044354468
I have tried using percentiles and am not satisfied with them, to be honest. How can I apply Chebyshev's inequality to this problem? Here is what I did so far:
np.polynomial.Chebyshev(df['Heights'])
But this returns numbers, not an SD level I can measure. Or do you think Chebyshev is the best choice in my case?
Expected solution:
I am expecting to get a range like: with 75% probability, the next height will be between 0.40 and 0.43, etc.
EDIT1: Added histogram
To be more clear, I have added my real data's histogram
EDIT2: Some values from real data
Mean: 0.007041500928135767
Percentile 50: 0.0052000000000000934
Percentile 90: 0.015500000000000047
Std: 0.0063790857035425025
Var: 4.06873389299246e-05
Thanks a lot
You seem to be confusing two ideas from the same mathematician, Chebyshev. These ideas are not the same.
Chebyshev's inequality states a fact that is true for many probability distributions. For two standard deviations, it states that at least three-fourths of the data items will lie within two standard deviations of the mean. As you state, for normal distributions about 19/20 of the items will lie in that interval, but Chebyshev's inequality is an absolute bound that holds for any distribution with finite variance. The fact that your data values are never negative does not change the truth of the inequality; it just makes the actual proportion of values in the interval even larger, so the inequality is even more true (in a sense).
Chebyshev polynomials do not involve statistics; they are simply a series (or two series) of polynomials, commonly used in calculating approximations for computer functions. That is what np.polynomial.Chebyshev involves, and it therefore does not seem useful to you at all.
So calculate Chebyshev's inequality yourself. There is no need for a special function for that, since it is so easy (this is Python 3 code):
def Chebyshev_inequality(num_std_deviations):
    return 1 - 1 / num_std_deviations**2
You can change that to handle the case where k <= 1 but the idea is obvious.
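For example, plugging in two standard deviations reproduces the 3/4 bound used below:
>>> Chebyshev_inequality(2)
0.75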
In your particular case: the inequality says that at least 3/4, or 75%, of the data items will lie within 2 standard deviations of the mean, which means more than 0.41138725956196015 - 2 * 0.2860541519582141 and less than 0.41138725956196015 + 2 * 0.2860541519582141 (note the different signs), which simplifies to the interval
[-0.16072104435446805, 0.9834955634783884]
In your data, 100% of your data values are in that interval, so Chebyshev's inequality was correct (of course).
Now, if your goal is to predict or estimate where a certain percentile is, Chebyshev's inequality does not help much. It is an absolute bound, so it gives only one limit to a percentile. For example, by what we did above we know that the 12.5th percentile is at or above -0.16072104435446805 and the 87.5th percentile is at or below 0.9834955634783884. Those facts are true but are probably not what you want. If you want an estimate that is closer to the actual percentile, this is not the way to go. The 68-95-99.7 rule is an estimate: the actual locations may be higher or lower, but if the distribution is normal then the estimate will not be far off. Chebyshev's inequality does not do that kind of estimate.
If you want to estimate the 12.5th and 87.5th percentiles (showing where 75 percent of the population will fall) you should calculate those percentiles of your sample and use those values. If you don't know more details about the kind of distribution you have, I don't see any better way. There are reasons why normal distributions are so popular!
It sounds like you want the boundaries for the middle 75% of your data.
The middle 75% of the data is between the 12.5th percentile and the 87.5th percentile, so you can use the quantile function to get the values at the locations:
[df['Heights'].quantile(0.5 - 0.75/2), df['Heights'].quantile(0.5 + 0.75/2)]
#[0.09843618875, 0.75906485625]
As per "What does it mean when the standard deviation is higher than the mean? What does that tell you about the data?" on Quora, SD is a measure of "spread" and mean is a measure of "position". As you can see, these are more or less independent things. Now, if all your samples are positive, the SD will usually not be much greater than the mean because of the way it's calculated, but 2 or 3 SDs very well can be.
So, basically, SD being roughly equal to the mean means that your data are all over the place.
Now, a random variable that's strictly positive indeed cannot be normally distributed. But for a rough estimate, seeing that you still have a bell shape, we can pretend it is and still use the SD as a rough measure of the spread (though, since mean minus 2 or 3 SD goes negative, those values lack any physical meaning here whatsoever and are unusable for the sake of our pretence):
E.g. to get a rough prediction of grass growth, you can still take the mean and apply whatever growth model you're using to it; that will give the new, prospective mean. Then applying the same to mean±SD will give an idea of the new SD.
This is very rough, of course. But to get any better, you'll have to somehow check which distribution you're dealing with and use its peak and spread characteristics instead of the mean and SD. And in any case, your prediction will not be any better than your growth model, studies of which are anything but conclusive judging by e.g. https://onlinelibrary.wiley.com/doi/pdf/10.1111/j.1365-3040.2005.01490.x (not a single formula there).
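A minimal sketch of that idea, with a completely made-up growth function standing in for whatever model you actually use (the numbers are the mean and SD from the question):
mean, sd = 0.007041500928135767, 0.0063790857035425025

def growth_model(h):
    # hypothetical placeholder: replace with your actual growth model
    return h * 1.1 + 0.001

new_mean = growth_model(mean)
# one rough way to read "apply the same to mean±SD": half the spread of the
# projected mean+SD and mean-SD values
new_sd = (growth_model(mean + sd) - growth_model(mean - sd)) / 2
print(new_mean, new_sd)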

How to interpret the upper/lower bound of a datapoint with confidence intervals?

Given a list of values:
>>> from scipy import stats
>>> import numpy as np
>>> x = list(range(100))
Using the Student t-test, I can find the confidence interval of the distribution at the mean with an alpha of 0.1 (i.e. at 90% confidence) with:
def confidence_interval(alist, v, itv):
    return stats.t.interval(itv, df=len(alist)-1, loc=v, scale=stats.sem(alist))
x = list(range(100))
confidence_interval(x, np.mean(x), 0.1)
[out]:
(49.134501289005009, 49.865498710994991)
But if I were to find the confidence interval at every datapoint, e.g. for the value 10:
>>> confidence_interval(x, 10, 0.1)
(9.6345012890050086, 10.365498710994991)
How should the interval of the values be interpreted? Is it statistically/mathematical sound to interpret that at all?
Does it go something like:
At 90% confidence, we know that the data point 10 falls in the interval (9.6345012890050086, 10.365498710994991),
aka.
At 90% confidence, we can say that the data point falls at 10 +- 0.365...
So can we interpret the interval as some sort of a box plot of the datapoint?
In short
Your call gives the confidence interval for the mean parameter of a normal distribution with unknown parameters, of which you observed 100 realizations with an average of 10 and a standard deviation of about 29. It is furthermore not sound to interpret it, since your distribution is clearly not normal and because 10 is not the observed mean.
The longer version
There are a lot of misconceptions floating around confidence intervals, most of which seemingly stem from a misunderstanding of what we are confident about. Since there is some confusion in your understanding of confidence intervals, maybe a broader explanation will give a deeper understanding of the concepts you are handling and hopefully rule out any source of error.
Clearing out misconceptions
Very briefly, to set things up: we are in a situation where we want to estimate a parameter, or rather, we want to test a hypothesis about the value of a parameter parameterizing the distribution of a random variable. E.g. let's say I have a normally distributed variable X with mean m and standard deviation sigma, and I want to test the hypothesis m = 0.
What is a parametric test?
This is a process for testing a hypothesis about a parameter of a random variable. Since we only have access to observations, which are concrete realizations of the random variable, it generally proceeds by computing a statistic of these realizations. A statistic is roughly a function of the realizations of a random variable. Let's call this function S; we can compute S on x_1, ..., x_n, which are n realizations of X.
Therefore you understand that S(X) is a random variable as well, with a distribution, parameters and so on! The idea is that for standard tests, S(X) follows a very well known distribution for which values are tabulated, e.g. http://www.sjsu.edu/faculty/gerstman/StatPrimer/t-table.pdf
What is a confidence interval?
Given what we've just said, a definition of a confidence interval would be: the range of values for the tested parameter such that, had the observations been generated from a distribution parametrized by a value in that range, observing them would not have been probabilistically improbable.
In other words, a confidence interval gives an answer to the question: given the observations x_1, ..., x_n, n realizations of X, can we confidently say that X's distribution is parametrized by such a value? 90%, 95%, etc. state the level of confidence. Usually, external constraints fix this level (industrial norms for quality assessment, scientific norms, e.g. for the discovery of new particles).
I think it is now intuitive to you that:
The higher the confidence level, the larger the confidence interval. E.g. for a confidence level of 100%, the confidence interval would range across all possible values as soon as there is some uncertainty.
For most tests, under conditions I won't describe, the more observations we have, the more we can narrow the confidence interval.
At 90% confidence, we know that the data point 10 falls in the interval (9.6345012890050086, 10.365498710994991)
It is wrong to say that, and it is the most common source of mistakes. A 90% confidence interval never means that the estimated parameter has a 90% chance of falling into that interval. Once the interval is computed, it either covers the parameter or it does not; it is not a matter of probability anymore. The 90% is an assessment of the reliability of the estimation procedure.
What is a Student test?
Now let's come back to your example and look at it in the light of what we've just said. You want to apply a Student test to your list of observations.
First: a Student test aims at testing a hypothesis of equality between the mean m of a normally distributed random variable with unknown standard deviation, and a certain value m_0.
The statistic associated with this test is t = (np.mean(x) - m_0)/(s/sqrt(n)), where x is your vector of observations, n the number of observations and s the empirical standard deviation. Unsurprisingly, this follows a Student distribution with n - 1 degrees of freedom.
Hence, what you want to do is:
compute this statistic for your sample, then compute the confidence interval associated with a Student distribution with this many degrees of freedom, this theoretical mean, and this confidence level
see if your computed t falls into that interval, which tells you whether you can rule out the equality hypothesis with that level of confidence.
I wanted to give you an exercise but I think I've been lengthy enough.
To conclude on the use of scipy.stats.t.interval: you can use it in one of two ways. Either compute the t statistic yourself with the formula shown above and check whether t falls in the interval returned by interval(alpha, df), where df is the number of observations minus one. Or you can directly call interval(alpha, df, loc=m, scale=s), where m is your empirical mean and s the empirical standard deviation divided by sqrt(n). In that case, the returned interval will directly be the confidence interval for the mean.
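A small sketch of those two usages, reusing x = list(range(100)) from the question and a hypothetical null value m_0 = 49 chosen purely for the illustration:
import numpy as np
from scipy import stats

x = np.arange(100)
n = len(x)
m0 = 49                      # hypothetical value of the mean to test
m = x.mean()
sem = stats.sem(x)           # empirical standard deviation divided by sqrt(n)

# way 1: compute the t statistic yourself and compare it against the
# interval of the standard Student distribution
t = (m - m0) / sem
lo, hi = stats.t.interval(0.90, df=n - 1)
print(t, (lo, hi), lo <= t <= hi)

# way 2: let scipy shift and scale the interval; the result is directly
# a 90% confidence interval for the mean
print(stats.t.interval(0.90, df=n - 1, loc=m, scale=sem))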
So in your case, your call gives the confidence interval for the mean parameter of a normal distribution with unknown parameters, of which you observed 100 realizations with an average of 10 and a standard deviation of about 29. It is furthermore not sound to interpret it, besides the error of interpretation I've already pointed out, since your distribution is clearly not normal and because 10 is not the observed mean.
Resources
You can check out the following resources to go further.
Wikipedia links for quick reference and an elaborated overview
https://en.wikipedia.org/wiki/Confidence_interval
https://en.wikipedia.org/wiki/Student%27s_t-test
https://en.wikipedia.org/wiki/Student%27s_t-distribution
To go further
http://osp.mans.edu.eg/tmahdy/papers_of_month/0706_statistical.pdf
I haven't read it but the one below seems quite good.
https://web.williams.edu/Mathematics/sjmiller/public_html/BrownClasses/162/Handouts/StatsTests04.pdf
You should also check out p-values; you will find a lot of similarities, and hopefully you will understand them better after reading this post.
https://en.wikipedia.org/wiki/P-value#Definition_and_interpretation
Confidence intervals are hopelessly counter-intuitive, especially for programmers, I dare say as a programmer myself.
Wikipedia uses a 90% confidence level to illustrate a possible interpretation:
Were this procedure to be repeated on numerous samples, the fraction of calculated confidence intervals (which would differ for each sample) that encompass the true population parameter would tend toward 90%.
In other words
The confidence interval provides information about a statistical parameter (such as the mean) of the population a sample was drawn from.
The interpretation of e.g. a 90% confidence interval would be: if you repeat the experiment an infinite number of times, 90% of the resulting confidence intervals will contain the true parameter.
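A quick simulation makes this concrete; the normal population with mean 5 is just an assumption for the illustration:
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_mean, n_experiments, n = 5.0, 10_000, 30
hits = 0
for _ in range(n_experiments):
    sample = rng.normal(loc=true_mean, scale=2.0, size=n)
    lo, hi = stats.t.interval(0.90, df=n - 1,
                              loc=sample.mean(), scale=stats.sem(sample))
    hits += lo <= true_mean <= hi
print(hits / n_experiments)   # close to 0.90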
Assuming the code to compute the interval is correct (which I have not checked) you can use it to calculate the confidence interval of the mean (because of the t-distribution, which models the sample mean of a normally distributed population with unknown standard deviation).
For practical purposes it only makes sense to pass in the sample mean. Otherwise you are saying "if I pretended my data had a sample mean of e.g. 10, the confidence interval of the mean would be [9.6, 10.3]".
The particular data passed into the confidence interval does not make sense either. Numbers increasing in a range from 0 to 99 are very unlikely to be drawn from a normal distribution.

autocorrelation function of time series data with numpy

I have been trying to calculate an autocorrelation function, as defined in statistical mechanics, using numpy. Most of the documentation I found relates to functions like correlate and convolve. However, for a given random variable x these functions just seem to calculate the sum
ACF(dt) = sum_{t=0}^{T} x(t)*x(t+dt)
instead of the average
ACF(dt) = mean[x(t)*x(t+dt)]
so in fact for calculating an autocorrelation function one would need to do something like:
acf = np.correlate(x, x, mode='full')
acf_half = acf[acf.size // 2:]
ldata = len(acf_half)
# divide each lag by the number of overlapping terms at that lag
acf = np.array([val / (ldata - i) for i, val in enumerate(acf_half)])
Of course we would need to subtract mean(x)**2 from the resulting acf to be correct.
Can anyone confirm that this is correct?
Generally speaking, the autocorrelation, correlation, etc. is the sum (integral). Sometimes it is normalized, but not averaged in the sense you've written above. This is because they are defined in terms of the mathematical convolution operation, which is simply the integral that you've written as a sum above.
The brackets at the stat mech page indicate a thermal average, which is an ensemble or time average over the 'experiment' taking place many times at many different states at some temperature. This (the finite temperature) causes the fluctuations that give rise to the 'statistical' nature of the problem, and cause the decay of the correlation (loss of long range order). This simply means that you should find the autocorrelation of several datasets, and average those together, but do not take the mean of the function.
As far as I can tell, your code is attempting to weight the correlation at dt by the length of the overlap at that lag, but I do not believe that this is correct.
With respect to the subtraction of <s>^2, that's in the case of the spin model, where <s> would be the mean spin (magnetization), so I believe you are correct that you should use mean(x)**2.
As a side-note, I would suggest using mode='same' instead of 'full' so that the domain of your correlation matches the domain of your input without having to look at just one-half of the output (here the output is symmetric, so it doesn't really make a difference).
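A rough sketch of the ensemble-averaging suggestion, assuming you have several realizations of the signal stacked as rows of a 2-D array (the array layout is an assumption for the example):
import numpy as np

def ensemble_acf(realizations):
    # realizations: 2-D array, one row per realization of the signal
    realizations = np.asarray(realizations, dtype=float)
    acfs = np.array([np.correlate(r, r, mode='same') for r in realizations])
    # average over the ensemble of datasets, not over t within one dataset
    return acfs.mean(axis=0)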

Generalized least square on large dataset

I'd like to linearly fit the data that were NOT sampled independently. I came across generalized least square method:
b=(X'*V^(-1)*X)^(-1)*X'*V^(-1)*Y
The equation is Matlab format; X and Y are coordinates of the data points, and V is a "variance matrix".
The problem is that due to its size (1000 rows and columns), the V matrix becomes singular, and thus not invertible. Any suggestions for how to get around this problem? Maybe using a way of solving the generalized linear regression problem other than GLS? The tools that I have available and am (slightly) familiar with are Numpy/Scipy, R, and Matlab.
Instead of:
b=(X'*V^(-1)*X)^(-1)*X'*V^(-1)*Y
Use
b = (X'/V*X)\(X'/V*Y)
That is, replace all instances of X*(Y^-1) with X/Y. Matlab will skip calculating the inverse (which is hard and error-prone) and compute the division directly.
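Since the question mentions Numpy/Scipy as an option, the same idea (solve instead of invert) looks roughly like this there; a sketch with a tiny synthetic data set standing in for the real X, V and Y:
import numpy as np

# tiny synthetic example: 5 data points, fit y = b0 + b1*x
rng = np.random.default_rng(0)
x = np.arange(5.0)
X = np.column_stack([np.ones_like(x), x])   # design matrix
V = np.diag([1.0, 1.0, 2.0, 2.0, 4.0])      # stand-in variance matrix
Y = 2.0 + 3.0 * x + rng.normal(scale=V.diagonal() ** 0.5)

# V \ X and V \ Y replace V^-1 * X and V^-1 * Y; no explicit inverse is formed
VinvX = np.linalg.solve(V, X)
VinvY = np.linalg.solve(V, Y)
b = np.linalg.solve(X.T @ VinvX, X.T @ VinvY)
print(b)   # roughly [2, 3]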
Edit: Even with the best matrix manipulation, some operations are not possible (for example leading to errors like you describe).
An example of that which may be relevant to your problem is if you try to solve a least squares problem under the constraint that multiple measurements are perfectly, 100% correlated. Except in rare, degenerate cases this cannot be accomplished, either mathematically or physically. You need some independence in the measurements to account for measurement noise or modeling errors. For example, if you have two measurements, each with a variance of 1 and perfectly correlated, then your V matrix would look like this:
V = [1 1; ...
     1 1];
And you would never be able to fit to the data. (This generally means you need to reformulate your basis functions, but that's a longer essay.)
However, if you adjust your measurement variance to allow for some small amount of independence between the measurements, then it would work without a problem. For example, 95% correlated measurements would look like this
V = [1 0.95; ...
     0.95 1];
You can use singular value decomposition as your solver. It'll do the best that can be done.
I usually think about least squares another way. You can read my thoughts here:
http://www.scribd.com/doc/21983425/Least-Squares-Fit
See if that works better for you.
I don't understand how the size is an issue. If you have N (x, y) pairs, you still only have to solve for M+1 coefficients in an order-M polynomial:
y = a0 + a1*x + a2*x^2 + ... + am*x^m
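For illustration, an ordinary (unweighted) polynomial fit of that form in numpy; no matter how many points go in, only M+1 coefficients come back (degree 2 here is just an example, and it ignores the correlated-error aspect of the question):
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 1000)                 # N = 1000 points
y = 1.0 + 2.0 * x + 0.5 * x**2 + rng.normal(scale=0.1, size=x.size)

coeffs = np.polyfit(x, y, deg=2)                 # only M+1 = 3 coefficients
print(coeffs)   # roughly [0.5, 2.0, 1.0] (highest power first)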
