Python/Numpy/Scipy: Draw Poisson random values with different lambda

My problem is to draw, as efficiently as possible, N Poisson random values (RVs), each with a different mean/rate Lam; in other words, size(RV) == size(Lam).
Here is a naive (very slow) implementation:
import numpy as NP

def multi_rate_poisson(Lam):
    rv = NP.zeros(NP.size(Lam))
    for i, lam in enumerate(Lam):
        rv[i] = NP.random.poisson(lam=lam, size=1)
    return rv
That, on my laptop, with 1e6 samples gives:
Lam = NP.random.rand(int(1e6)) + 1
timeit multi_rate_poisson(Lam)
1 loops, best of 3: 4.82 s per loop
Is it possible to improve from this?

Although the docstrings don't document this functionality, the source indicates it is possible to pass an array to the numpy.random.poisson function.
>>> import numpy
>>> # 1-D array of 1M random variables, uniformly distributed between 1 and 2
>>> numpyarray = numpy.random.rand(int(1e6)) + 1
>>> # pass to poisson
>>> poissonarray = numpy.random.poisson(lam=numpyarray)
>>> poissonarray
array([4, 2, 3, ..., 1, 0, 0])
A Poisson random variable takes non-negative integer values, and its distribution approaches a bell curve as lambda grows large.
>>> import matplotlib.pyplot
>>> count, bins, ignored = matplotlib.pyplot.hist(
...     numpy.random.poisson(
...         lam=numpy.random.rand(int(1e6)) + 10),
...     14, density=True)
>>> matplotlib.pyplot.show()
This method of passing the array to the poisson generator appears to be quite efficient.
>>> timeit.Timer("numpy.random.poisson(lam=numpy.random.rand(int(1e6)) + 1)",
...              'import numpy').repeat(3, 1)
[0.13525915145874023, 0.12136101722717285, 0.12127304077148438]
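On newer NumPy (1.17+), the same vectorized draw also works with the recommended Generator API. A minimal sketch, assuming only that Lam is a 1-D array of rates:
import numpy as np
rng = np.random.default_rng()      # new-style generator (NumPy >= 1.17)
Lam = rng.random(1_000_000) + 1    # one rate per sample
rv = rng.poisson(lam=Lam)          # one Poisson draw per rate, no Python loop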

Related

How to make a mixed random variable in scipy.stats

I am trying to understand the random variables from scipy.stats. I can sample from a uniform random variable:
from scipy.stats import uniform
print(uniform.rvs(size=1000))
But how can I make a random variable that, with probability 0.5, samples uniformly from 0..1 and, with probability 0.5, samples uniformly from 5..6?
I could write a loop that picks a random number between 0 and 1; if it is < 0.5, it picks a random number between 0 and 1, and if it is >= 0.5, it picks a random number between 0 and 1 and adds 5. But I would really like to be able to call it like:
mixed_uniform.rvs(size=1000)
I also need to use the survival function of this mixed distribution.
For the distribution, write a custom function that does the transformation and apply it with numpy.vectorize(); that is tidier than writing the loop yourself (though note that vectorize() is a convenience rather than a speedup: under the hood it is still a Python-level loop).
In [1]: from scipy.stats import uniform
In [2]: r = uniform.rvs(size=1000)
In [3]: r
Out[3]:
array([7.48816182e-02, 4.63880797e-01, 8.75315477e-01, 3.61116729e-01,
...
3.13473322e-01, 3.45434625e-01, 9.49993090e-01, 1.55553018e-01])
In [4]: type(r)
Out[4]: numpy.ndarray
In [8]: def f(a):
   ...:     a *= 2
   ...:     if a > 1: a += 4
   ...:     return a
   ...:
In [10]: import numpy
In [11]: vf = numpy.vectorize(f)
In [12]: r2 = vf(r)
In [13]: r2
Out[13]:
array([1.49763236e-01, 9.27761594e-01, 5.75063095e+00, 7.22233457e-01,
...
6.26946644e-01, 6.90869250e-01, 5.89998618e+00, 3.11106036e-01])
In [14]: max(r2)
Out[14]: 5.999360665646841
In [15]: min(r2)
Out[15]: 0.0004563758727054168
In [17]: len([x for x in r2 if x<=2])
Out[17]: 504
In [18]: len([x for x in r2 if x>=5])
Out[18]: 496
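A fully vectorized alternative that avoids vectorize() entirely is to apply the same transformation with np.where; a minimal sketch of the same idea:
import numpy as np
from scipy.stats import uniform
r = uniform.rvs(size=1000)          # uniform samples on [0, 1)
r2 = 2 * r                          # stretch to [0, 2)
r2 = np.where(r2 > 1, r2 + 4, r2)   # shift the upper half up to [5, 6)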
I generate a random distribution of 1000 numbers between 0 and 1 and randomly choose an element from the list; if the element is greater than 0.5, I add 5:
from scipy.stats import uniform
import matplotlib.pyplot as plt
import random

min_number = 0
max_number = 1
size = 1000
number_pool = uniform.rvs(min_number, max_number, size=size)
plt.hist(number_pool)
plt.show()

def getValue(number_pool):
    val = random.choice(number_pool)
    if val > .5:
        val += 5
    return val

print(getValue(number_pool))
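The question also asks for the survival function of the mixture. Since a mixture's SF is the probability-weighted sum of the component SFs, a minimal sketch (assuming the equal 0.5 weights from the question; mixed_sf is a name of my choosing) would be:
import numpy as np
from scipy.stats import uniform
def mixed_sf(x):
    # SF of a 50/50 mixture of U(0, 1) and U(5, 6); uniform(loc, scale) spans [loc, loc+scale]
    return 0.5 * uniform(0, 1).sf(x) + 0.5 * uniform(5, 1).sf(x)
print(mixed_sf(np.array([0.5, 3.0, 5.5])))  # -> [0.75, 0.5, 0.25]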

numpy random array values between -1 and 1

what is the best way to create a NumPy array of a given size with values randomly and uniformly spread between -1 and 1?
I tried 2*np.random.rand(size)-1
I'm not sure. Try:
s = np.random.uniform(-1, 1, size)
reference: https://docs.scipy.org/doc/numpy/reference/generated/numpy.random.uniform.html
I can use numpy.arange:
import numpy as np
print(np.arange(start=-1.0, stop=1.0, step=0.2, dtype=float))
The step parameter defines the spacing of the elements; note, though, that this produces evenly spaced values, not random ones.
In your solution, np.random.rand(size) returns random floats in the half-open interval [0.0, 1.0),
which means 2 * np.random.rand(size) - 1 returns numbers in the half-open interval [-1, 1), i.e. including -1 but excluding 1.
If this is what you wish to do, then it is fine.
But if you wish to generate numbers in the open interval (-1, 1), i.e. excluding both -1 and 1, may I suggest the following:
from numpy.random import default_rng
rg = default_rng(2)
size = (5,5)
rand_arr = rg.random(size)
rand_signs = rg.choice([-1,1], size)
rand_arr = rand_arr * rand_signs
print(rand_arr)
I have used the new Generator API recommended by numpy; see https://numpy.org/devdocs/reference/random/index.html#quick-start
100% working code (note that without a size argument this returns a single float; pass size for an array):
a = np.random.uniform(-1, 1)
print(a)
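With the newer Generator API the same half-open draw is a one-liner; a minimal sketch, assuming NumPy >= 1.17:
import numpy as np
rng = np.random.default_rng()
arr = rng.uniform(-1, 1, size=10)  # uniform on the half-open interval [-1, 1)
print(arr)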

Root mean square of a function in python

I want to calculate root mean square of a function in Python. My function is in a simple form like y = f(x). x and y are arrays.
I tried Numpy and Scipy Docs and couldn't find anything.
I'm going to assume that you want to compute the expression given by the following pseudocode:
ms = 0
for i = 1 ... N
    ms = ms + y[i]^2
ms = ms / N
rms = sqrt(ms)
i.e. the square root of the mean of the squared values of elements of y.
In numpy, you can simply square y, take its mean and then its square root as follows:
rms = np.sqrt(np.mean(y**2))
So, for example:
>>> y = np.array([0, 0, 1, 1, 0, 1, 0, 1, 1, 1]) # Six 1's
>>> y.size
10
>>> np.mean(y**2)
0.59999999999999998
>>> np.sqrt(np.mean(y**2))
0.7745966692414834
Do clarify your question if you mean to ask something else.
You could use the sklearn function: the RMSE of y_actual against an all-zero reference is exactly the RMS of y_actual.
from sklearn.metrics import mean_squared_error
rmse = mean_squared_error(y_actual, [0 for _ in y_actual], squared=False)
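Note that recent scikit-learn versions (1.4+, as far as I know) deprecate the squared parameter in favour of a dedicated helper; an equivalent sketch:
from sklearn.metrics import root_mean_squared_error
rmse = root_mean_squared_error(y_actual, [0 for _ in y_actual])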
numpy.std(x) tends to rms(x) as mean(x) tends to 0 (thanks to @Seb), as is often the case with sound recordings, vibrations, and other signals that fluctuate around zero.
rms = lambda x_seq: (sum(x*x for x in x_seq) / len(x_seq)) ** 0.5
In case you'd like to frame your array before computing the RMS, this is a numpy solution:
nframes = 1000
rms = np.array([
    np.sqrt(np.mean(frame**2))
    for frame in np.array_split(arr, nframes)
])
If you'd like to specify the frame length instead of the frame count, you'd do this first:
frame_length = 200
arr_length = arr.shape[0]
nframes = -(-arr_length // frame_length)  # ceiling division, so a trailing partial frame is kept
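If exact, equal-length frames matter more than keeping the ragged tail, a reshape-based sketch (my own variation, not part of the answer above) avoids array_split entirely:
import numpy as np
arr = np.random.randn(10_000)      # hypothetical signal
frame_length = 200
n_whole = (arr.shape[0] // frame_length) * frame_length  # drop the ragged tail
frames = arr[:n_whole].reshape(-1, frame_length)
rms_per_frame = np.sqrt(np.mean(frames**2, axis=1))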

Obtaining Legendre polynomial form once Legendre coefficients are determined

I have obtained the coefficients for the Legendre polynomial that best fits my data. Now I am needing to determine the value of that polynomial at each time-step of my data. I need to do this so that I can subtract the fit from my data. I have looked at the documentation for the Legendre module, and I'm not sure if I just don't understand my options or if there isn't a native tool in place for what I want. If my data-points were evenly spaced, linspace would be a good option, but that's not the case here. Does anyone have a suggestion for what to try?
For those who would like a minimal working example: just use a random array, get the coefficients, and tell me from there how you would proceed. The values themselves don't matter; it's the technique that I'm asking about here. Thanks.
To simplify Ahmed's example (this session assumes a pylab-style IPython environment, where np and plot are available):
In [1]: from numpy.polynomial import Polynomial, Legendre
In [2]: p = Polynomial([0.5, 0.3, 0.1])
In [3]: x = np.random.rand(10) * 10
In [4]: y = p(x)
In [5]: pfit = Legendre.fit(x, y, 2)
In [6]: plot(*pfit.linspace())
Out[6]: [<matplotlib.lines.Line2D at 0x7f815364f310>]
In [7]: plot(x, y, 'o')
Out[7]: [<matplotlib.lines.Line2D at 0x7f81535d8bd0>]
The Legendre functions are scaled and offset, since the data should be confined to the interval [-1, 1] to get any advantage over the usual power basis. If you want the coefficients for plain old Legendre functions:
In [8]: pfit.convert()
Out[8]: Legendre([ 0.53333333, 0.3 , 0.06666667], [-1., 1.], [-1., 1.])
But that isn't recommended.
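Since the original question asks for the fit's value at each (unevenly spaced) timepoint, note that the fitted series object is directly callable; a minimal sketch of subtracting the fit from the data (the arrays here are hypothetical):
import numpy as np
from numpy.polynomial import Legendre
t = np.sort(np.random.rand(50) * 10)  # unevenly spaced timepoints
y = 0.5 + 0.3 * t + 0.1 * t**2        # the signal being fitted
pfit = Legendre.fit(t, y, 2)
detrended = y - pfit(t)               # evaluate the fit at the original timepoints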
Once you have a function, you can just generate a numpy array for the timepoints:
>>> import numpy as np
>>> timepoints = [1,3,7,15,16,17,19]
>>> myarray = np.array(timepoints)
>>> def mypolynomial(bins, pfinal): # pfinal is just the estimate of the final array (I'll do quadratic)
...     a, b, c = pfinal # for a*x^2 + b*x + c
...     return (a*bins**2) + b*bins + c
>>> mypolynomial(myarray, (1,1,0))
array([ 2, 12, 56, 240, 272, 306, 380])
It automatically evaluates the polynomial at each timepoint in the numpy array.
Now all you have to do is rewrite mypolynomial to go from this simple quadratic example to a proper Legendre polynomial. Write the function as if it took a single float and returned the value; when called on the numpy array, it will automatically be evaluated elementwise.
EDIT:
Let's say I wanted to generalize this to all standard polynomials:
>>> import numpy as np
>>> timepoints = [1,3,7,15,16,17,19]
>>> myarray = np.array(timepoints)
>>> def mypolynomial(bins, pfinal): # generalized to any polynomial degree
...     hist = np.zeros(len(bins)) # blank accumulator, one slot per input point
...     for i in range(len(pfinal)):
...         # negative index to go from the 0 exponent up to the highest exponent
...         # (pfinal[-i] would be wrong: negative indexing starts at -1, not -0)
...         const = pfinal[-i-1]
...         hist += const * (bins**i)
...     return hist
>>> mypolynomial(myarray, (1,1,0))
array([  2.,  12.,  56., 240., 272., 306., 380.])
EDIT2: Typo fix
EDIT3:
@Ahmed is perfectly right when he states that Horner's rule is good for numerical stability. The implementation here would be as follows:
>>> def horner(coeffs, x):
...     acc = 0
...     for c in coeffs:
...         acc = acc * x + c
...     return acc
>>> horner((1,1,0), myarray)
array([  2,  12,  56, 240, 272, 306, 380])
Slightly modified to keep the same argument order as before, from the code here:
http://rosettacode.org/wiki/Horner%27s_rule_for_polynomial_evaluation#Python
When you're using a nice library to fit polynomials, the library will in my experience usually have a function to evaluate them, so I think it is useful to know how you're generating these coefficients.
In the example below, I used two numpy functions, legfit and legval, which make it trivial both to fit and to evaluate Legendre polynomials without invoking Horner's rule or doing the bookkeeping yourself. (Though I do use Horner's rule to generate some example data.)
Here's a complete example where I generate some sparse data from a known polynomial, fit a Legendre polynomial to it, evaluate that polynomial on a dense grid, and plot. Note that the fitting and evaluating part takes three lines thanks to numpy doing all the heavy lifting. (The resulting figure, showing the sparse points and the fitted curve, is omitted here.)
import numpy as np

### Setup code
def horner(coeffs, x):
    """Evaluate a polynomial at a point or array (coefficients lowest-order first)."""
    acc = 0.0
    for c in reversed(coeffs):
        acc = acc * x + c
    return acc

x = np.random.rand(10) * 10
true_coefs = [0.1, 0.3, 0.5]
y = horner(true_coefs, x)

### Fit and evaluate
legendre_coefs = np.polynomial.legendre.legfit(x, y, 2)
new_x = np.linspace(0, 10)
new_y = np.polynomial.legendre.legval(new_x, legendre_coefs)

### Plotting only
try:
    import pylab
    pylab.ion() # turn on interactive plotting
    pylab.figure()
    pylab.plot(x, y, 'o', new_x, new_y, '-')
    pylab.xlabel('x')
    pylab.ylabel('y')
    pylab.title('Fitting Legendre polynomials and evaluating them')
    pylab.legend(['original sparse data', 'fit'])
except ImportError:
    print("Can't start plots.")

Interval containing specified percent of values

With numpy or scipy, is there any existing method that will return the endpoints of an interval which contains a specified percent of the values in a 1D array? I realize that this is simple to write myself, but it seems like the kind of thing that might be built in, although I can't find it.
E.g:
>>> import numpy as np
>>> x = np.random.randn(100000)
>>> print(np.bounding_interval(x, 0.68))
Would give approximately (-1, 1)
You can use np.percentile:
In [29]: x = np.random.randn(100000)
In [30]: p = 0.68
In [31]: lo = 50*(1 - p)
In [32]: hi = 50*(1 + p)
In [33]: np.percentile(x, [lo, hi])
Out[33]: array([-0.99206523, 1.0006089 ])
There is also scipy.stats.scoreatpercentile:
In [34]: from scipy.stats import scoreatpercentile
In [35]: scoreatpercentile(x, [lo, hi])
Out[35]: array([-0.99206523,  1.0006089 ])
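np.quantile (NumPy 1.15+) does the same job with fractions instead of percentages, so the *50 scaling can be skipped; a minimal equivalent:
import numpy as np
x = np.random.randn(100000)
p = 0.68
lo, hi = np.quantile(x, [(1 - p) / 2, (1 + p) / 2])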
I don't know of a built-in function to do it, but you can write one using the math package to compute approximate indices, like this:
import math
import numpy as np

def bound_interval(arr_in, interval):
    lhs = (1 - interval) / 2 # left-hand fraction to exclude
    rhs = 1 - lhs            # and the right-hand side
    arr_sorted = np.sort(arr_in) # avoid shadowing the built-in sorted
    lower = arr_sorted[math.floor(lhs * len(arr_in))] # floor to get an integer index
    upper = arr_sorted[math.floor(rhs * len(arr_in))]
    return (lower, upper)
On your specified array, I got the interval (-0.99072237819851039, 0.98691691784955549). Pretty close to (-1, 1)!
