I am very confused about how to sample measurement error from a normal distribution (Gaussian pdf) in Python.
What I want to do is simply create noise (error) from a Gaussian pdf and add it to the measured values. In short, the problem is as follows:
Inputs:
M(i) - measurement value; i = 1...n, n - number of measurements;
Output:
M_noisy(i) = M(i) + noise(i);
where, noise(i) - noise in measurement; M(i) - measurement value.
Important: This noise should be zero-mean Gaussian noise with variance equal to 10% of the measurement value.
I wrote the following code but could not continue...
My code:
import numpy as np
# sigma - standard deviation of M
# mu - mean value of M
# n - number of measurements
# I don't know if this is correct or not:
noise = sigma * np.random.randn(n) + mu
## M_noisy(i) - ?
Thanks for any answers/suggestions in advance.
random_scales = np.random.randn(n)
# standard-normal draws (mean 0, std 1); note these are NOT bounded to [-1, 1]
offset_from_mean = sigma * random_scales   # scaled to standard deviation sigma
noise = offset_from_mean + mu
clean_y_data = np.arange(n)
noisy_y_data = clean_y_data + noise
might be what you are after?
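If the requirement is specifically zero-mean noise whose variance is 10% of each measurement value, a minimal sketch (assuming M is a NumPy array of the measurements, all positive) would scale the standard deviation per sample:
import numpy as np

M = np.array([10.0, 12.5, 9.8, 11.2])   # example measurement values (assumed positive)
sigma = np.sqrt(0.10 * M)                # std dev such that variance = 10% of M(i)
noise = np.random.normal(loc=0.0, scale=sigma, size=M.shape)   # zero-mean Gaussian noise
M_noisy = M + noise
If the intent is instead that the standard deviation (not the variance) should be 10% of the measurement, use sigma = 0.10 * M.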
The Situation
I am currently writing a program that will later be used to analyze a signal that is somewhat of an asymmetric Gaussian. I am interested in how many frequencies I need to reproduce the signal reasonably exactly, and especially in the amplitudes of those frequencies.
Before I input the real data I'm testing the program with a default (asymmetric) Gaussian, as can be seen in the code below.
My Problem
To ensure that I get the amplitudes right, I am rebuilding the original signal using the whole frequency spectrum, but there are some difficulties. I can reproduce the signal reasonably well by multiplying amp by 0.16, a factor I got by looking at the ratio rebuild/original. Of course, this is really unsatisfying and can't be the correct solution.
To be precise, the difference is not dependent on the time length and seems to be a Gaussian too, following the form of the original and increasing in asymmetry according to the skewnorm function itself. The amplitude of the difference function is linearly correlated with 'height'.
My Question
I am writing this post because I am out of ideas for getting the amplitude right. Maybe someone has had the same or a similar problem and can share their solution or give a hint.
Further information
Before focusing on an (asymmetric) Gaussian, I analyzed periodic signals and rectangular pulses, which sadly were very unstable with respect to variations in the time length of the input signal. In this context, I experimented with window functions, which seemed to speed up the process and increase the stability, the reason being that I had to integrate the peaks. Working with the Gaussian, I was told to take each peak received via the bare FFT and ditch the integration approach, hence my uncertainty about the amplitude described above. Maybe someone has an opinion on my approach and, if necessary, can suggest an improvement.
Code
from numpy.fft import fft, fftfreq
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import skewnorm
np.random.seed(1234)
def data():
    height = 1
    data = height * skewnorm.pdf(t, a=0, loc=t[int(N/2)])
    # noise_power = 1E-6
    # noise = np.random.normal(scale=np.sqrt(noise_power), size=t.shape)
    # data += noise
    return data
def fft_own(data):
    freq = fftfreq(N, dt)
    data_fft = fft(data) * np.pi
    amp = 2/N * np.abs(data_fft)  # * factor (depending on t1)
    # amp = 2/T * np.abs(data_fft)**2
    phase = np.angle(data_fft)
    peaks, = np.where(amp >= 0)  # use whole spectrum for rebuild
    return freq, amp, phase, peaks
def rebuild(fft_own):
    freq, amp, phase, peaks = fft_own
    df = freq[1] - freq[0]
    data_rebuild = 0
    for i in peaks:
        amplitude = amp[i] * df
        # amplitude = amp[i] * 0.1
        # amplitude = np.sqrt(amp[i] * df)
        data_rebuild += amplitude * np.exp(0+1j * (2*np.pi * freq[i] * t
                                                   + phase[i]))

    f, ax = plt.subplots(1, 1)
    # mask = (t >= 0) & (t <= t1-1)
    ax.plot(t, data_init, label="initial signal")
    ax.plot(t, np.real(data_rebuild), label="rebuild")
    # ax.plot(t[mask], (data_init - np.real(data_rebuild))[mask], label="diff")
    ax.set_xlim(0, t1-1)
    ax.legend()
t0 = 0
t1 = 10 # diff(t0, t1) ∝ df
# T = t1- t0
N = 4096
t = np.linspace(t0, t1, int(N))
dt = (t1 - t0) / N
data_init = data()
fft_init = fft_own(data_init)
rebuild_init = rebuild(fft_init)
You should get a perfect reconstruction if you divide amp by N, and remove all your other factors.
Currently you do:
data_fft = fft(data) * np.pi # Multiply by pi
amp = 2/N * np.abs(data_fft) # Multiply by 2/N
amplitude = amp[i] * df # Multiply by df = 1/(dt*N) = 1/10
This means that you currently multiply by a total of pi * 2 / 10, or 0.628, that you shouldn't (only the 1/N factor in there is correct).
Correct code:
def fft_own(data):
    freq = fftfreq(N, dt)
    data_fft = fft(data)
    amp = np.abs(data_fft) / N
    phase = np.angle(data_fft)
    peaks, = np.where(amp >= 0)  # use whole spectrum for rebuild
    return freq, amp, phase, peaks
def rebuild(fft_own):
    freq, amp, phase, peaks = fft_own
    data_rebuild = 0
    for i in peaks:
        data_rebuild += amp[i] * np.exp(0+1j * (2*np.pi * freq[i] * t
                                                + phase[i]))
Your program can be significantly simplified by using ifft. Simply set to 0 those frequencies in data_fft that you don't want to include in the reconstruction, and apply ifft to it:
from numpy.fft import ifft

data_fft = fft(data)
data_fft[np.abs(data_fft) < threshold] = 0
rebuild = ifft(data_fft).real
Note that the Fourier transform of a Gaussian is a Gaussian, so you won't be picking out individual peaks; you are picking a compact range of frequencies that will always include 0. This is an ideal low-pass filter.
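As a rough illustration of that ifft approach (reusing data_init, t, np, and plt from the question's code; the threshold value here is just an assumption):
from numpy.fft import ifft

data_fft = fft(data_init)
# keep only coefficients whose magnitude exceeds an (assumed) threshold
threshold = 1e-3 * np.abs(data_fft).max()
data_fft[np.abs(data_fft) < threshold] = 0
data_lowpass = ifft(data_fft).real

f, ax = plt.subplots(1, 1)
ax.plot(t, data_init, label="initial signal")
ax.plot(t, data_lowpass, label="ifft rebuild")
ax.legend()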
I would like to randomly generate a sample of size 1e4 on the interval [0, 1000], but following two different distributions: on the interval [0, 1] an increasing exponential distribution, which generates a bunch of values close to 1 relative to those close to 0; and then, on the interval [1, 1000], a 1/r distribution.
I think the easiest way to do this is to split the global sample into two different samples, but I don't know how to deal with it. I tried to use some distributions from the SciPy library, but I didn't find a way to use them properly to generate my global sample. What do you think?
You can use two separate distributions, but you need to make sure that the sample density matches at the boundary point (r == 1). So you need to estimate the required number of samples for the left (exp) and right (1/r) distributions. The integrals over the two distributions give the following:
integrate exp(r) from 0 to 1: exp(1) - 1 == 1.7183
integrate 1/r from 1 to 1000: log(1000) == 6.9078
This means the normalized density of the left distribution at r = 1 is exp(1) / 1.7183, and for the right distribution it is 1 / 6.9078. This gives a ratio of ratio = left / right = 10.928, which means you need to generate ratio times more values in the right interval than in the left one in order to end up with the same sample density at the boundary. Say you want to generate N samples in total; then you need to generate N1 = N / (ratio + 1) samples in the left (exp) interval and N2 = N * ratio / (ratio + 1) samples in the right (1/r) interval.
So here's some sample code:
import matplotlib.pyplot as plt
import numpy as np
r_max = 1000.0
integral_1 = np.exp(1.0) - 1.0 # 1.7183
integral_2 = np.log(r_max) # 6.9078
N = 2_000_000 # total number of samples
ratio = np.exp(1.0) / integral_1 * integral_2 # 10.928
N1 = int(N / (ratio + 1))
N2 = int(N * ratio / (ratio + 1))
# Use inverse transform sampling in the following.
s1 = np.log(integral_1 * np.random.random(size=N1) + 1.0)
s2 = np.exp(integral_2 * np.random.random(size=N2))
samples = np.concatenate((s1, s2))
np.random.shuffle(samples) # optionally shuffle the samples
plt.hist(samples, bins=int(20 * r_max))
plt.xlim([0, 5])
plt.show()
Running this produces the expected combined distribution (shown by the histogram generated in the code above).
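As a sanity check on the boundary-matching argument, one can compare the number of samples just below and just above r = 1 (a small sketch, reusing the samples array from the code above):
eps = 0.05
left = np.sum((samples > 1 - eps) & (samples < 1))
right = np.sum((samples > 1) & (samples < 1 + eps))
print(left, right)   # the two counts should be roughly equal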
I am trying to randomly generate a bunch of points for a graph in python to test k-means clustering algorithm with. Here's my code.
N = 100
random_x0 = np.random.randn(N) + (np.random.randint(0,100) * np.random.randint(1,4))
random_x1 = np.random.randn(N) + (np.random.randint(0,100) * np.random.randint(1,4))
random_x2 = np.random.randn(N) + (np.random.randint(0,100) * np.random.randint(1,4))
random_y0 = np.random.randn(N) + (np.random.randint(0,100) * np.random.randint(1,4))
random_y1 = np.random.randn(N) + (np.random.randint(0,100) * np.random.randint(1,4))
random_y2 = np.random.randn(N) + (np.random.randint(0,100) * np.random.randint(1,4))
As you might imagine, each set of random_x[index] coordinates is matched with its y counterpart.
(random_x0, random_y0), (random_x1, random_y1), (random_x2, random_y2)
Since I am testing a clustering algorithm, I want my data points to be SOMEWHAT clustered... but this seems like too much. I tried to add a random number from 1-100, then multiply that by a random number from 1-4... what am I doing wrong to get such consistent random results?
randn gives a Gaussian random variable with zero mean and variance equal to one. In order to generate a Gaussian variable with mean m and standard deviation s, one would do m + s*randn(). Since you do randn(N) + constant, you basically create Gaussian variables with standard deviation one and mean equal to constant. Now constant is given by a random variable that can vary from 0 to 297, i.e. the spread in centroids is much larger than the variance. You probably want a centroid (i.e. mean) spread that's only a few standard deviations. You can also pass multiple means and standard deviations to np.random.normal, for example:
np.random.normal(loc=[0, 1, 2], scale=[0.5, 0.75, 1.0], size=(N, 3))
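Building on that, a minimal sketch of three reasonably separated 2D clusters (the specific centres and spreads here are just an assumption) could look like:
import numpy as np

N = 100
centers_x = [0, 10, 20]          # cluster centres a few standard deviations apart
centers_y = [0, 10, 0]
spread = 1.5                     # within-cluster standard deviation

random_x = np.random.normal(loc=centers_x, scale=spread, size=(N, 3))
random_y = np.random.normal(loc=centers_y, scale=spread, size=(N, 3))
# column k of random_x / random_y holds the N points of cluster k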
First, you need to decide what kind of distribution is wanted. Let's say it's Gaussian, so we can use random.gauss.
I'd create a function which generates a 2D point with a Gaussian distribution:
import random

def generate_point(mean_x, mean_y, deviation_x, deviation_y):
    return random.gauss(mean_x, deviation_x), random.gauss(mean_y, deviation_y)
Then, decide how many clusters to use, how many points per cluster, and which deviations to use for the cluster centres and for the points within a cluster. For example:
cluster_mean_x = 100
cluster_mean_y = 100
cluster_deviation_x = 50
cluster_deviation_y = 50
point_deviation_x = 5
point_deviation_y = 5
number_of_clusters = 5
points_per_cluster = 50
Then, generate cluster centres:
cluster_centers = [generate_point(cluster_mean_x,
                                  cluster_mean_y,
                                  cluster_deviation_x,
                                  cluster_deviation_y)
                   for i in range(number_of_clusters)]
Then generate the actual points for each cluster:
points = [generate_point(center_x,
                         center_y,
                         point_deviation_x,
                         point_deviation_y)
          for center_x, center_y in cluster_centers
          for i in range(points_per_cluster)]
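To visualise the result (a quick sketch, assuming matplotlib is available):
import matplotlib.pyplot as plt

xs, ys = zip(*points)
cx, cy = zip(*cluster_centers)
plt.scatter(xs, ys, s=10, label="points")
plt.scatter(cx, cy, color="red", marker="x", label="cluster centres")
plt.legend()
plt.show()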
I want to use a Hawkes process to model some data. I could not find whether PyMC supports Hawkes processes. More specifically, I want an observed variable that follows a Hawkes process, and to learn a posterior on its parameters.
If it is not built in, could I define it in PyMC in some other way, e.g. with @deterministic?
It's been quite a long time since your question, but I've worked it out in PyMC today, so I thought I'd share the gist of my implementation for other people who might come across the same problem. We're going to infer the parameters λ and α of a Hawkes process. I'm not going to cover the temporal scale parameter β; I'll leave that as an exercise for the reader.
First, let's generate some data:
import numpy as np

def hawkes_intensity(mu, alpha, points, t):
    p = np.array(points)
    p = p[p <= t]
    p = np.exp(p - t)
    return mu + alpha * np.sum(p)

def simulate_hawkes(mu, alpha, window):
    # simplified simulator: the waiting time is drawn from the intensity at the
    # current time (no Ogata-style thinning), so the simulation is approximate
    t = 0
    points = []
    lambdas = []
    while t < window:
        m = hawkes_intensity(mu, alpha, points, t)
        s = np.random.exponential(scale=1/m)
        ratio = hawkes_intensity(mu, alpha, points, t + s)
        t = t + s
        if t < window:
            points.append(t)
            lambdas.append(ratio)
        else:
            break
    points = np.sort(np.array(points, dtype=np.float32))
    lambdas = np.array(lambdas, dtype=np.float32)
    return points, lambdas
# parameters
window = 1000
mu = 8
alpha = 0.25
points, lambdas = simulate_hawkes(mu, alpha, window)
num_points = len(points)
We just generated some temporal points using functions that I adapted from here: https://nbviewer.jupyter.org/github/MatthewDaws/PointProcesses/blob/master/Temporal%20points%20processes.ipynb
Now, the trick is to create a matrix of size (num_points, num_points) that contains the temporal distance of the ith point from all the other points, so that the (i, j) entry of the matrix is the time interval separating the ith point from the jth. This matrix will be used to compute the sum of exponentials of the Hawkes process, i.e. the self-exciting part. The way to create this matrix, as well as the sum of exponentials, is a bit tricky; I'd recommend checking every line yourself so you can see what it does.
tile = np.tile(points, num_points).reshape(num_points, num_points)
tile = np.clip(points[:, None] - tile, 0, np.inf)
tile = np.tril(np.exp(-tile), k=-1)
Σ = np.sum(tile, axis=1)[:-1] # this is our self-exciting sum term
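If you want to convince yourself that the vectorised construction is right, a direct (slow) double loop should give the same Σ for the first few points (a small check, not part of the original recipe):
n_check = 200   # only check the first few points, the Python loop is slow
Σ_check = np.array([
    sum(np.exp(-(points[i] - points[j])) for j in range(i))
    for i in range(n_check)
])
assert np.allclose(Σ[:n_check], Σ_check)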
We now have the points and a vector containing the self-excitation sum term for each of them.
The duration between two consecutive events of a Hawkes process is modelled as exponentially distributed with rate λ = λ0 + ∑ excitation. This is what we are going to model, but first we have to compute the duration between two consecutive points of our generated data.
interval = points[1:] - points[:-1]
We're now ready for inference:
import pymc3 as pm
import matplotlib.pyplot as plt

with pm.Model() as model:
    λ = pm.Exponential("λ", 1)
    α = pm.Uniform("α", 0, 1)
    lam = pm.Deterministic("lam", λ + α * Σ)
    interarrival = pm.Exponential(
        "interarrival", lam, observed=interval)
    trace = pm.sample(2000, tune=4000)

pm.plot_posterior(trace, var_names=["λ", "α"])
plt.show()

print(np.mean(trace["λ"]))
print(np.mean(trace["α"]))
7.829
0.284
Note: the tile matrix can become quite large if you have many data points.
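For a back-of-the-envelope idea of when that becomes a problem (the 50,000-point figure below is just a hypothetical example):
num_points = 50_000                   # hypothetical number of events
tile_bytes = num_points ** 2 * 4      # float32 entries
print(f"{tile_bytes / 1e9:.1f} GB")   # ~10 GB for the tile matrix alone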
I have an array of CSV values representing a digital output. It has been gathered using an analog oscilloscope, so it is not a perfect digital signal. I'm trying to filter the data to get a clean digital signal for calculating the periods (which may vary).
I would also like to define the maximum error I get from this filtering.
Something like this:
Idea
Apply a threshold to the data. Here is some pseudocode:
for data_point_raw in data_array:
    if data_point_raw < 0.8:
        data_point_perfect = LOW
    elif data_point_raw > 2:
        data_point_perfect = HIGH
    else:
        # area between the thresholds: keep the previous state
        if previous_data_point_perfect == LOW:
            data_point_perfect = LOW
        if previous_data_point_perfect == HIGH:
            data_point_perfect = HIGH
There are two problems bothering me.
This seems like a common problem in digital signal processing, but I haven't found a predefined standard function for it. Is this an OK way to perform the filtering?
How would I get the maximum error?
Here's a bit of code that might help.
from __future__ import division
import numpy as np
def find_transition_times(t, y, threshold):
    """
    Given the input signal `y` with samples at times `t`,
    find the times where `y` increases through the value `threshold`.

    `t` and `y` must be 1-D numpy arrays.

    Linear interpolation is used to estimate the time `t` between
    samples at which the transitions occur.
    """
    # Find where y crosses the threshold (increasing).
    lower = y < threshold
    higher = y >= threshold
    transition_indices = np.where(lower[:-1] & higher[1:])[0]

    # Linearly interpolate the time values where the transition occurs.
    t0 = t[transition_indices]
    t1 = t[transition_indices + 1]
    y0 = y[transition_indices]
    y1 = y[transition_indices + 1]
    slope = (y1 - y0) / (t1 - t0)
    transition_times = t0 + (threshold - y0) / slope

    return transition_times
def periods(t, y, threshold):
    """
    Given the input signal `y` with samples at times `t`,
    find the time periods between the times at which the
    signal `y` increases through the value `threshold`.

    `t` and `y` must be 1-D numpy arrays.
    """
    transition_times = find_transition_times(t, y, threshold)
    deltas = np.diff(transition_times)
    return deltas
if __name__ == "__main__":
    import matplotlib.pyplot as plt

    # Time samples
    t = np.linspace(0, 50, 501)
    # Use a noisy time to generate a noisy y.
    tn = t + 0.05 * np.random.rand(t.size)
    y = 0.6 * (1 + np.sin(tn) + (1./3) * np.sin(3*tn) + (1./5) * np.sin(5*tn) +
               (1./7) * np.sin(7*tn) + (1./9) * np.sin(9*tn))

    threshold = 0.5
    deltas = periods(t, y, threshold)

    print("Measured periods at threshold %g:" % threshold)
    print(deltas)
    print("Min: %.5g" % deltas.min())
    print("Max: %.5g" % deltas.max())
    print("Mean: %.5g" % deltas.mean())
    print("Std dev: %.5g" % deltas.std())

    trans_times = find_transition_times(t, y, threshold)

    plt.plot(t, y)
    plt.plot(trans_times, threshold * np.ones_like(trans_times), 'ro-')
    plt.show()
The output:
Measured periods at threshold 0.5:
[ 6.29283207 6.29118893 6.27425846 6.29580066 6.28310224 6.30335003]
Min: 6.2743
Max: 6.3034
Mean: 6.2901
Std dev: 0.0092793
You could use numpy.histogram and/or matplotlib.pyplot.hist to further analyze the array returned by periods(t, y, threshold).
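For example (a quick sketch, reusing deltas from the script above):
import matplotlib.pyplot as plt

plt.hist(deltas, bins=20)
plt.xlabel("period")
plt.ylabel("count")
plt.show()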
This is not an answer to your question, just a suggestion that may help. I'm writing it here because I can't put an image in a comment.
I think you should normalize the data somehow before any processing.
After normalization to the range 0...1, you should apply your filter.
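For instance, a simple min-max normalization (a sketch, assuming the raw samples are in a sequence called raw):
import numpy as np

raw = np.asarray(raw, dtype=float)
normalized = (raw - raw.min()) / (raw.max() - raw.min())   # now in the range [0, 1]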
If you're really only interested in the period, you could look at the Fourier transform: you'll have a peak at the frequency at which the signal occurs (and so you have the period). The wider the peak in the Fourier domain, the larger the error in your period measurement.
import numpy as np
data = np.asarray(my_data)
spectrum = np.abs(np.fft.fft(data))   # the dominant peak corresponds to the signal frequency
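From there, a minimal sketch of extracting the dominant period (sample_spacing, the time between samples, is an assumed input here):
freqs = np.fft.fftfreq(len(data), d=sample_spacing)   # sample_spacing: time step between samples (assumed known)
peak = 1 + np.argmax(spectrum[1:len(data) // 2])       # strongest positive-frequency peak, skipping the DC bin
period = 1.0 / freqs[peak]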
Your filtering is fine; it's basically the same as a Schmitt trigger. The main problem you might have with it is speed: the benefit of using NumPy is that it can be as fast as C, whereas with an explicit loop you have to iterate over each element in Python.
You can get a similar result using the median filter from SciPy. The following should work (and is not dependent on any absolute magnitudes):
import numpy
import scipy.signal
filtered = scipy.signal.medfilt(raw)
filtered = numpy.where(filtered > numpy.mean(filtered), 1, 0)
You can tune the strength of the median filtering with medfilt(raw, kernel_size); kernel_size defaults to 3.
As for the error, that's going to be very subjective. One way would be to discretise the signal without filtering and then compare for differences. For example:
discrete = numpy.where(raw > numpy.mean(raw), 1, 0)
errors = numpy.count_nonzero(filtered != discrete)
error_rate = errors / len(discrete)