from random import *

def main():
    t = 0
    for i in range(1000): # thousand
        t += random()
    print(t/1000)

main()
I was looking at the source code for a sample program my professor gave me and I came across this RNG. Can anyone explain how this RNG works?
If you plotted the points, you would see that this actually produces a Gaussian ("normal") distribution about the mean of the random function.
Generate random numbers following a normal distribution in C/C++ talks about random number generation; it's a pretty common technique to do this if all you have is a uniform number generator like in standard C.
What I've given you here is a histogram of 100,000 values drawn from your function (returned rather than printed, of course, if you aren't familiar with Python). The y axis is the frequency with which a value appears, the x axis is the bin the value falls into. As you can see, the average value is 1/2, and beyond 3 standard deviations from the mean (which cover 99.7 percent of the data) there are almost no values at all. That should be intuitive; we "usually" get about 1/2, and very rarely get 0.99999.
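If you want to reproduce that histogram yourself, here is a minimal sketch (the use of matplotlib and the bin count are my own choices, not from the original post):

import matplotlib.pyplot as plt
from random import random

def average_of_1000():
    # same computation as the professor's code, but returning instead of printing
    return sum(random() for _ in range(1000)) / 1000

values = [average_of_1000() for _ in range(100000)]
plt.hist(values, bins=100)   # frequency on the y axis, value bins on the x axis
plt.show()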
Have a look at the documentation. It's quite well written:
https://docs.python.org/2/library/random.html
The idea is that the program generates a random number 1000 times, which is enough for the average to come out very close to 0.5.
The program is using the Central Limit Theorem - sums of independent and identically distributed random variables X with finite variance asymptotically converge to a normal (a.k.a. Gaussian) distribution whose mean is the sum of the means and whose variance is the sum of the variances. Scaling this by N, the number of X's summed, gives the sample mean (a.k.a. average). If the expected value of X is μ and the variance of X is σ², the expected value of the sample mean is also μ and it has variance σ² / N.
Since a Uniform(0,1) has mean 0.5 and variance 1/12, your algorithm will generate results that are pretty close to normally distributed with a mean of 0.5 and a variance of 1/12000. Consequently 99.7% of the outcomes should fall within +/-3 standard deviations of the mean, i.e., in the range 0.5+/-0.0274.
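As a quick numerical check (my own sketch, not part of the answer), you can compare the sample mean and standard deviation of many such averages against the theoretical 0.5 and sqrt(1/12000) ≈ 0.0091:

import numpy as np

averages = np.random.random((10000, 1000)).mean(axis=1)  # 10,000 averages of 1000 uniforms
print(averages.mean())   # close to 0.5
print(averages.std())    # close to sqrt(1/12000) ≈ 0.0091
print(np.mean(np.abs(averages - 0.5) < 3 * (1/12000) ** 0.5))  # roughly 0.997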
This is a ridiculously inefficient way to generate normals. Better alternatives include the Box-Muller method, Polar method, or ziggurat method.
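For reference, here is a minimal sketch of the Box-Muller method mentioned above (a generic textbook version, not code from the question):

import math
import random

def box_muller(mu=0.0, sigma=1.0):
    # 1 - random() avoids log(0): random() can return 0.0 but never 1.0
    u1 = 1.0 - random.random()
    u2 = random.random()
    z = math.sqrt(-2.0 * math.log(u1)) * math.cos(2.0 * math.pi * u2)
    return mu + sigma * z

# e.g. one normal draw with the same mean and variance as the averaging scheme above
sample = box_muller(0.5, math.sqrt(1 / 12000))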
The thing making this random is the random() function being called. random() generates one float that is (for most practical purposes) random, in the range [0.0, 1.0).
>>> random()
0.1759916412898097
>>> random()
0.5489228122596088
etc.
The rest of it just adds each random number to a running total and then divides by the number of draws, essentially finding the average of all 1000 random numbers, which, as Cyber pointed out, is not really a random number at all.
Related
Let me say right from the beginning that I know just a little bit about statistics, but not enough to figure out this problem.
I'm trying to create a list of n random floating point numbers to simulate annual stock market returns. The numbers should range from -30.0 to +30.0 with an average of 7.0. The numbers should be distributed mostly around the average, but they should be well distributed. Basically, it should be a flattened bell curve, so there should be a good chance of having some negative numbers as well as some numbers closer to the upper limit.
I know numpy has functions to create random numbers that are distributed in different ways, but I'm not sure how to specify these parameters.
You need to sample from a normal distribution that has your chosen mean (mu) and a large standard deviation (sigma) to flatten it.
Here's some code to get you started
import numpy as np
mu, sigma, n = 7.0, 3.0, 1000 # mean and standard deviation
s = np.random.normal(mu, sigma, n)
Note that the larger sigma is, the further your samples will spread from the mean and the more likely they are to exceed your +-30 limits. So choose a suitable sigma (for example, so that the extreme is about three sigmas from your mean), or you'll have to clip your numbers to the limits.
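For example, a sketch of that suggestion (the exact sigma and the use of np.clip are my own choices, not something prescribed above):

import numpy as np

mu, n = 7.0, 1000
lower, upper = -30.0, 30.0
sigma = (upper - mu) / 3.0          # put the nearer limit roughly three sigmas out (~7.67)

s = np.random.normal(mu, sigma, n)
s = np.clip(s, lower, upper)        # clip the rare samples that still fall outside the limits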
Also note that stock market returns aren't necessarily normally distributed.
I have a dataframe which includes heights. The data cannot go below zero. That's why I cannot use the standard deviation, as this data is not normally distributed. I cannot use the 68-95-99.7 rule here because it fails in my case. Here is my dataframe, mean and SD.
0.77132064
0.02075195
0.63364823
0.74880388
0.49850701
0.22479665
0.19806286
0.76053071
0.16911084
0.08833981
Mean: 0.41138725956196015
Std: 0.2860541519582141
If I go 2 standard deviations out, as you can see, the number becomes negative.
mean - 2 x std = 0.41138725956196015 - 2 x 0.2860541519582141 = -0.160721044354468
I have tried using percentiles and, to be honest, I am not satisfied with the result. How can I apply Chebyshev's inequality to this problem? Here is what I did so far:
np.polynomial.Chebyshev(df['Heights'])
But this returns numbers, not an SD level I can measure. Or do you think Chebyshev is the best choice in my case?
Expected solution:
I am expecting to get a range, e.g. with 75% probability the next height will be between 0.40 and 0.43, etc.
EDIT1: Added histogram
To be more clear, I have added my real data's histogram
EDIT2: Some values from real data
Mean: 0.007041500928135767
Percentile 50: 0.0052000000000000934
Percentile 90: 0.015500000000000047
Std: 0.0063790857035425025
Var: 4.06873389299246e-05
Thanks a lot
You seem to be confusing two ideas from the same mathematician, Chebyshev. These ideas are not the same.
Chebyshev's inequality states a fact that is true for any probability distribution with finite variance. For two standard deviations, it states that at least three-fourths of the data items will lie within two standard deviations of the mean. As you state, for normal distributions about 19/20 of the items will lie in that interval, but Chebyshev's inequality is an absolute bound that is met by practically all distributions. The fact that your data values are never negative does not change the truth of the inequality; it just makes the actual proportion of values in the interval even larger, so the inequality holds with plenty of room to spare.
Chebyshev polynomials do not involve statistics, but are simply a series (or two series) of polynomials, commonly used in calculating approximations for computer functions. That is what np.polynomial.Chebyshev involves, and therefore does not seem useful to you at all.
So calculate Chebyshev's inequality yourself. There is no need for a special function for that, since it is so easy (this is Python 3 code):
def Chebyshev_inequality(num_std_deviations):
    return 1 - 1 / num_std_deviations**2
You can change that to handle the case where k <= 1 but the idea is obvious.
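For example, plugging in the mean and standard deviation from your question (just a usage sketch):

mean, std = 0.41138725956196015, 0.2860541519582141
k = 2
print(Chebyshev_inequality(k))             # 0.75, i.e. at least 75% of the values
print((mean - k * std, mean + k * std))    # (-0.16072104435446805, 0.9834955634783884)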
In your particular case: the inequality says that at least 3/4, or 75%, of the data items will lie within 2 standard deviations of the mean, which means more than 0.41138725956196015 - 2 * 0.2860541519582141 and less than 0.41138725956196015 + 2 * 0.2860541519582141 (note the different signs), which simplifies to the interval
[-0.16072104435446805, 0.9834955634783884]
In your data, 100% of your data values are in that interval, so Chebyshev's inequality was correct (of course).
Now, if your goal is to predict or estimate where a certain percentile is, Chebyshev's inequality does not help much. It is an absolute lower bound, so it gives one limit to a percentile. For example, by what we did above we know that the 12.5th percentile is at or above -0.16072104435446805 and the 87.5th percentile is at or below 0.9834955634783884. Those facts are true, but they are probably not what you want. If you want an estimate that is closer to the actual percentile, this is not the way to go. The 68-95-99.7 rule is an estimate--the actual locations may be higher or lower, but if the distribution is normal then the estimate will not be far off. Chebyshev's inequality does not do that kind of estimate.
If you want to estimate the 12.5th and 87.5th percentiles (showing where 75 percent of the population falls) you should calculate those percentiles of your sample and use those values. If you don't know more details about the kind of distribution you have, I don't see any better way. There are reasons why normal distributions are so popular!
It sounds like you want the boundaries for the middle 75% of your data.
The middle 75% of the data is between the 12.5th percentile and the 87.5th percentile, so you can use the quantile function to get the values at the locations:
[df['Heights'].quantile(0.5 - 0.75/2), df['Heights'].quantile(0.5 + 0.75/2)]
#[0.09843618875, 0.75906485625]
As per "What does it mean when the standard deviation is higher than the mean? What does that tell you about the data?" on Quora, SD is a measure of "spread" and mean is a measure of "position". As you can see, these are more or less independent things. Now, if all your samples are positive, SD cannot be greater than the mean because of the way it's calculated, but 2 or 3 SDs very well can.
So, basically, SD being roughly equal to the mean means that your data are all over the place.
Now, a random variable that's strictly positive indeed cannot be normally distributed. But for a rough estimation, seeing that you still have a bell shape, we can pretend it is and still use SD as a rough measure of the spread (though, since 2 and 3 SDs go into the negatives here, they lack any physical meaning and are unusable for the purposes of this pretence):
E.g. to get a rough prediction of grass growth, you can still take the mean and apply whatever growth model you're using to it -- that will get the new, prospective mean. Then applying the same to mean±SD will give an idea of the new SD.
This is very rough, of course. But to get any better, you'll have to somehow check which distribution you're dealing with and use its peak and spread characteristics instead of mean and SD. And in any case, your prediction will not be any better than your growth model -- studies of which are anything but conclusive judging by e.g. https://onlinelibrary.wiley.com/doi/pdf/10.1111/j.1365-3040.2005.01490.x (not a single formula there).
I have a number X of integers (very large) and a probability p with which I want to draw a sample s (a single number) following a Poisson distribution. For example, if X = 10^8 and p = 0.05, I expect s to be the number of heads we get from X coin flips.
I was able to easily do this with random.binomial as:
s=np.random.binomial(n=X, p=p)
How can I apply the same idea using random.poisson?
Just multiply p and X:
np.random.poisson(10**8 * 0.05)
The probability to get more than 10**8 is numerically zero.
Professor #pjs emphasizes that we are combining probability and number into a rate which is the parameter of the Poisson process.
It's further worth mentioning that for such a large number you'll find the pmfs of the Binomial and the Poisson very similar to each other, and also (comparing cumulative distribution functions, or "cdf"s as engineers call them) to a Gaussian.
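For instance, a quick comparison sketch (my own, with an arbitrary sample count) for the numbers in the question:

import numpy as np

X, p = 10**8, 0.05
binom = np.random.binomial(n=X, p=p, size=100000)
poiss = np.random.poisson(lam=X * p, size=100000)

print(binom.mean(), poiss.mean())   # both approximately 5,000,000
print(binom.std(), poiss.std())     # ~sqrt(X*p*(1-p)) ≈ 2179 vs ~sqrt(X*p) ≈ 2236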
https://docs.scipy.org/doc/numpy-1.14.0/reference/generated/numpy.random.poisson.html
import numpy as np
s = np.random.poisson(lam=X * p)   # the Poisson rate is the product X * p
Given a list of values:
>>> from scipy import stats
>>> import numpy as np
>>> x = list(range(100))
Using a Student t-test, I can find the confidence interval of the distribution at the mean with an alpha of 0.1 (i.e. at 90% confidence) with:
def confidence_interval(alist, v, itv):
    return stats.t.interval(itv, df=len(alist)-1, loc=v, scale=stats.sem(alist))
x = list(range(100))
confidence_interval(x, np.mean(x), 0.1)
[out]:
(49.134501289005009, 49.865498710994991)
But if I were to find the confidence interval at every datapoint, e.g. for the value 10:
>>> confidence_interval(x, 10, 0.1)
(9.6345012890050086, 10.365498710994991)
How should the interval of the values be interpreted? Is it statistically/mathematically sound to interpret that at all?
Does it go something like:
At 90% confidence, we know that the data point 10 falls in the interval (9.6345012890050086, 10.365498710994991),
aka.
At 90% confidence, we can say that the data point falls at 10 +- 0.365...
So can we interpret the interval as some sort of a box plot of the datapoint?
In short
Your call gives the confidence interval for the mean parameter of a normal distribution with unknown parameters, of which you observed 100 observations with an average of 10 and a standard deviation of 29. It is furthermore not sound to interpret it, since your distribution is clearly not normal and because 10 is not the observed mean.
TL;DR
There are a lot of misconceptions floating around confidence intervals, most of which seemingly stem from a misunderstanding of what we are confident about. Since there is some confusion in your understanding of confidence intervals, maybe a broader explanation will give a deeper understanding of the concepts you are handling, and hopefully definitively rule out any source of error.
Clearing out misconceptions
Very briefly, to set things up: we are in a situation where we want to estimate a parameter, or rather, we want to test a hypothesis about the value of a parameter parameterizing the distribution of a random variable. E.g.: let's say I have a normally distributed variable X with mean m and standard deviation sigma, and I want to test the hypothesis m = 0.
What is a parametric test
This is a process for testing a hypothesis about a parameter of a random variable's distribution. Since we only have access to observations, which are concrete realizations of the random variable, it generally proceeds by computing a statistic of these realizations. A statistic is, roughly, a function of the realizations of a random variable. Let's call this function S; we can compute S on x_1, ..., x_n, which are as many realizations of X.
Therefore you understand that S(X) is a random variable as well, with its own distribution, parameters and so on! The idea is that for standard tests, S(X) follows a very well-known distribution whose values are tabulated, e.g.: http://www.sjsu.edu/faculty/gerstman/StatPrimer/t-table.pdf
What is a confidence interval?
Given what we've just said, a definition of a confidence interval would be: the range of values for the tested parameter such that, if the observations had been generated from a distribution parametrized by a value in that range, they would not have been probabilistically improbable.
In other words, a confidence interval gives an answer to the question: given the following observations x_1, ..., x_n, n realizations of X, can we confidently say that X's distribution is parametrized by such a value? The 90%, 95%, etc. assert the level of confidence. Usually, external constraints fix this level (industrial norms for quality assessment, scientific norms, e.g. for the discovery of new particles).
I think it is now intuitive to you that:
The higher the confidence level, the larger the confidence interval. e.g. for a confidence of 100% the confidence interval would range across all the possible values as soon as there is some uncertainty
For most tests, under conditions I won't describe, the more observations we have, the more we can narrow the confidence interval.
At 90% confidence, we know that the data point 10 falls in the interval (9.6345012890050086, 10.365498710994991)
It is wrong to say that, and it is the most common source of mistakes. A 90% confidence interval never means that the estimated parameter has a 90% chance of falling into that interval. When the interval is computed, it either covers the parameter or it does not; it is not a matter of probability anymore. 90% is an assessment of the reliability of the estimation procedure.
What is a student test?
Now let's come to your example and look at it in the light of what we've just said. You want to apply a Student test to your list of observations.
First: a Student test aims at testing a hypothesis of equality between the mean m of a normally distributed random variable with unknown standard deviation, and a certain value m_0.
The statistic associated with this test is t = (np.mean(x) - m_0)/(s/sqrt(n)) where x is your vector of observations, n the number of observations and s the empirical standard deviation. With no surprise, this follows a Student distribution.
Hence, what you want to do is:
compute this statistic for your sample, and compute the confidence interval associated with a Student distribution with this many degrees of freedom, this theoretical mean, and this confidence level,
see if your computed t falls into that interval, which tells you if you can rule out the equality hypothesis with such level of confidence.
I wanted to give you an exercise but I think I've been lengthy enough.
To conclude on the use of scipy.stats.t.interval: you can use it in one of two ways. Either compute the t statistic yourself with the formula shown above and check whether t falls in the interval returned by interval(alpha, df), where df is the number of observations minus one. Or you can directly call interval(alpha, df, loc=m, scale=s), where m is your empirical mean and s the empirical standard deviation divided by sqrt(n). In that case, the returned interval will directly be the confidence interval for the mean.
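A sketch of those two uses, with the list from the question (the hypothesised mean m_0 = 50 is my own choice for illustration):

from scipy import stats
import numpy as np

x = np.arange(100)
n = len(x)
m_0 = 50.0        # hypothetical mean to test against
conf = 0.9        # confidence level

# Way 1: compute the t statistic yourself and compare it with the tabulated interval
t = (np.mean(x) - m_0) / (np.std(x, ddof=1) / np.sqrt(n))
print(t, stats.t.interval(conf, df=n - 1))

# Way 2: let scipy shift and scale the interval around the empirical mean
print(stats.t.interval(conf, df=n - 1, loc=np.mean(x), scale=stats.sem(x)))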
So, in your case, your call gives the confidence interval for the mean parameter of a normal distribution with unknown parameters, of which you observed 100 observations with an average of 10 and a standard deviation of 29. It is furthermore not sound to interpret it, beside the error of interpretation I've already pointed out, since your distribution is clearly not normal and because 10 is not the observed mean.
Resources
You can check out the following resources to go further.
Wikipedia links, for quick reference and an elaborated overview:
https://en.wikipedia.org/wiki/Confidence_interval
https://en.wikipedia.org/wiki/Student%27s_t-test
https://en.wikipedia.org/wiki/Student%27s_t-distribution
To go further
http://osp.mans.edu.eg/tmahdy/papers_of_month/0706_statistical.pdf
I haven't read it but the one below seems quite good.
https://web.williams.edu/Mathematics/sjmiller/public_html/BrownClasses/162/Handouts/StatsTests04.pdf
You should also check out p-values, you will find a lot of similarities and hopefully you understand them better after reading this post.
https://en.wikipedia.org/wiki/P-value#Definition_and_interpretation
Confidence intervals are hopelessly counter-intuitive. Especially for programmers, I dare say as a programmer.
Wikipedia uses a 90% confidence interval to illustrate a possible interpretation:
Were this procedure to be repeated on numerous samples, the fraction of calculated confidence intervals (which would differ for each sample) that encompass the true population parameter would tend toward 90%.
In other words
The confidence interval provides information about a statistical parameter (such as the mean) of a sample.
The interpretation of e.g. a 90% confidence interval would be: If you repeat the experiment an infinite number of times 90% of the resulting confidence intervals will contain the true parameter.
Assuming the code to compute the interval is correct (which I have not checked) you can use it to calculate the confidence interval of the mean (because of the t-distribution, which models the sample mean of a normally distributed population with unknown standard deviation).
For practical purposes it makes sense to pass in the sample mean. Otherwise you are saying "if I pretended my data had a sample mean of e.g. 10, the confidence interval of the mean would be [9.6, 10.3]".
The particular data passed into the confidence interval does not make sense either. Numbers increasing in a range from 0 to 99 are very unlikely to be drawn from a normal distribution.
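To make the repeated-sampling interpretation concrete, here is a small simulation sketch (population, sample size, and seed are my own arbitrary choices); roughly 90% of the computed intervals should cover the true mean:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_mean, true_sd, n, trials = 5.0, 2.0, 30, 10000

covered = 0
for _ in range(trials):
    sample = rng.normal(true_mean, true_sd, n)
    low, high = stats.t.interval(0.9, df=n - 1, loc=sample.mean(), scale=stats.sem(sample))
    covered += low <= true_mean <= high

print(covered / trials)   # close to 0.90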
I would like to generate random integers on an interval min to max. For a uniform distribution in numpy:
numpy.random.randint(min,max,n)
does exactly what I want.
However, I would now like to give the distribution of random numbers an exponential bias. There are a number of suggestions for this, e.g. "Pseudorandom Number Generator - Exponential Distribution" as well as the numpy function numpy.random.RandomState.exponential, but these do not address how to constrain the distribution to integers between min and max. I'm not sure how to do this while still ensuring a random distribution.
The exponential distribution is a continuous distribution. What you probably want is its discrete equivalent, the geometric distribution. Numpy's implementation generates strictly positive integers, i.e., 1, 2, 3, ..., so you'll want to add min-1 to shift it, and then truncate by rejecting/throwing away results > max. That, in turn, means generating them one by one and adding the non-rejected values to a list until you get the desired number. (You could also determine analytically what proportion you expect to be rejected and scale your n accordingly, but you'll still likely end up a few short or with a few too many.)
It's possible to do this without rejection, but you'd have to create your own inversion, determine the probability of exceeding max, and generate uniforms between 0 and that probability to feed to your inversion algorithm. Rejection is simpler even though it's less efficient.
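A sketch of the rejection approach described above (the function name, the value of p, and the interval are my own, purely for illustration):

import numpy as np

def biased_randint(lo, hi, n, p=0.3):
    # np.random.geometric draws 1, 2, 3, ..., so shift by lo-1 and reject values above hi
    out = []
    while len(out) < n:
        draw = np.random.geometric(p) + lo - 1
        if draw <= hi:
            out.append(draw)
    return np.array(out)

samples = biased_randint(1, 10, 1000)   # integers in [1, 10], biased toward the low end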
Maybe you can try summing up all the biases. Then the probability of generating an integer j is the bias of j divided by the total bias. You can use Monte Carlo simulation to implement this.