I'm trying to implement a simple Monte Carlo simulation in Python (which I'm fairly new to). Coming from C, I'm probably going down the wrong path, since my code is far too slow for what I'm asking of it: I have a hard-sphere-like potential (see V_pot(r) in the code) for 60 3D particles with periodic boundary conditions (PBC), so I defined the following functions
import timeit
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import numpy as np
from numpy import inf
#
L, kb, d, eps, DIM = 100, 1, 1, 1, 3
r_c, T = L/2, eps/(.5*kb)
beta = 1/(kb*T)
#
def dist(A, B):
    d = A - B
    d -= L*np.around(d/L)
    return np.sqrt(np.sum(d**2))
#
def V_pot(r):
    V = -eps*(d**6/r**6 - d**6/r_c**6)
    if r > r_c:
        V = 0
    elif r < d:
        V = inf
    return V
#
def ener(config):
    V_jk_val, j = 0, N
    #
    while (j > 0):
        j -= 1
        i = 0
        while (i < j):
            V_jk_val += V_pot(dist(config[j,:], config[i,:]))
            i += 1
    #
    return V_jk_val
#
def acc(en_n, en_o):
    d_en = en_n - en_o
    if (d_en <= 0):
        acc_val = 1
    else:
        acc_val = np.exp(-beta*d_en)
    return acc_val
#
then, starting from the following configuration (where each row of the array holds the coordinates of a 3D particle)
config = np.array([[16.24155657, 57.41672173, 94.39565792],
[76.38121764, 55.88334066, 5.72255163],
[38.41393783, 58.09432145, 6.26448054],
[86.44286438, 61.37100899, 91.97737383],
[37.7315366 , 44.52697269, 23.86320444],
[ 0.59231801, 39.20183376, 89.63974115],
[38.00998141, 3.84363202, 52.74021401],
[99.53480756, 69.97688928, 21.43528924],
[49.62030291, 93.60889503, 15.73723259],
[54.49195524, 0.6431965 , 25.37401196],
[33.82527814, 25.37776021, 67.4320553 ],
[64.61952893, 46.8407798 , 4.93960443],
[60.47322732, 16.48140136, 33.26481306],
[19.71667792, 46.56999616, 35.61044526],
[ 5.33252557, 4.44393836, 60.55759256],
[44.95897856, 7.81728046, 10.26000715],
[86.5548395 , 49.74079452, 4.80480133],
[52.47965686, 42.831448 , 22.03890639],
[ 2.88752006, 59.84605062, 22.75760029],
[ 9.49231045, 42.08653603, 40.63380097],
[13.90093641, 74.40377984, 32.62917915],
[97.44839233, 90.47695772, 91.60794836],
[51.29501624, 27.03796277, 57.09525454],
[10.30180295, 21.977336 , 69.54173272],
[59.61327648, 14.29582325, 11.70942289],
[89.52722796, 26.87758644, 76.34934637],
[82.03736088, 78.5665713 , 23.23587395],
[79.77571695, 66.140968 , 53.6784269 ],
[82.86070472, 40.82189833, 51.48739072],
[99.05647523, 98.63386809, 6.33888993],
[31.02997123, 66.99709163, 95.88332332],
[97.71654767, 59.24793618, 5.20183793],
[ 6.79964473, 45.01258652, 48.69477807],
[93.34977049, 55.20537774, 82.35693526],
[17.35577815, 20.45936211, 29.27981422],
[55.51942207, 52.22875901, 3.6616131 ],
[61.45612224, 36.50170405, 62.89796773],
[23.55822368, 7.09069623, 37.38274914],
[39.57082799, 58.95457592, 48.0304924 ],
[93.94997617, 64.34383203, 77.63346308],
[17.47989107, 90.01113402, 81.00648645],
[86.79068539, 66.35768515, 56.64402907],
[98.71924121, 38.33749023, 73.4715132 ],
[ 0.42356139, 78.32172925, 15.19883322],
[77.75572529, 2.60088767, 56.4683935 ],
[49.76486142, 3.01800153, 93.48019286],
[42.54483899, 4.27174457, 4.38942325],
[66.75777178, 41.1220603 , 19.64484167],
[19.69520773, 41.09230171, 2.51986091],
[73.20493772, 73.16590392, 99.19174281],
[94.16756184, 72.77653334, 10.32128552],
[29.95281655, 27.58596604, 85.12791195],
[ 2.44803886, 32.82333962, 41.6654683 ],
[23.9665915 , 49.94906612, 37.42701059],
[30.40282934, 39.63854309, 47.16572743],
[56.04809276, 30.19705527, 29.15729635],
[ 2.50566522, 70.37965564, 16.78016719],
[28.39713572, 4.04948368, 27.72615789],
[26.11873563, 41.49557167, 14.38703697],
[81.91731981, 12.10514972, 12.03083427]])
I run the 5000 steps of the simulation with the following code
N = 60
TIME_MC = 5000
DELTA_LIST = [d]
#d/6, d/3, d, 2*d, 3*d
np.random.seed(19680801)
en_mc_delta = np.zeros((TIME_MC, len(DELTA_LIST)))
start = timeit.default_timer()
config_tmp = config.copy()  # copy, so a proposed move doesn't overwrite config
#
for iD, Delta in enumerate(DELTA_LIST):
    t = 0
    while (t < TIME_MC):
        for k in range(N):
            RND = np.random.rand()
            config_tmp[k,:] = config[k,:] + Delta*(np.random.random_sample((1,3))-.5)
            en_o, en_n = ener(config), ener(config_tmp)
            ACC = acc(en_n, en_o)
            if (RND < ACC):
                config[k,:] = config_tmp[k,:]
                en_o = en_n
            else:
                config_tmp[k,:] = config[k,:]  # undo the rejected move
        en_mc_delta[t][iD] = en_o
        t += 1
stop = timeit.default_timer()
print('Time: ', stop-start)
following the Metropolis acceptance rule for the move proposed by config_tmp[k,:] = config[k,:] + Delta*(np.random.random_sample((1,3))-.5).
I made some attempts to find where the code gets stuck, and I found that the function ener (largely because of the function dist) is extremely slow: it takes something like ~0.02 s to calculate the energy of a configuration, which means around ~6000 s to run the complete simulation (60 particles, 5000 proposed moves).
The outer for loop is just there to compute the results for different values of Delta.
Running this code with TIME_MC=60 gives an idea of how slow it is (~218 s), whereas the same thing takes just a few seconds when implemented in C. I read some other questions about speeding up Python code, but I can't figure out how to do it here.
EDIT:
I'm now almost sure that the problem is in the function dist: just calculating the PBC distance between two 3D vectors takes around ~0.0012 s, which adds up to crazy long times when it is computed 5000*60 times.
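For reference, the standard Metropolis shortcut avoids recomputing the full pair sum at every step. A minimal sketch, assuming the functions above and a starting configuration with finite energy (no overlaps): since a trial move displaces only particle k, the energy difference involves just that particle's N-1 interactions, i.e. O(N) work per move instead of the O(N^2) calls to ener.
def delta_ener(config, k, new_pos):
    # Energy change from moving particle k to new_pos; all other rows unchanged.
    d_en = 0.0
    for i in range(len(config)):
        if i == k:
            continue
        d_en += V_pot(dist(new_pos, config[i, :])) - V_pot(dist(config[k, :], config[i, :]))
    return d_en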
Note that this is a partial answer continued from comments on the original question.
Here's an example of how "unrolling" numpy's functions into a more direct calculation of the distance can improve performance. Note that I have not verified the two to be equivalent, especially concerning the rounding. The principle still applies, I think.
import random
import time
import numpy as np
L = 100
inv_L = 0.01
vec_length = 10
repetitions = 100000
def dist_np(A, B):
    d = A - B
    d -= L*np.around(d/L)
    return np.sqrt(np.sum(d**2))

def dist_direct(A, B):
    total = 0
    for i in range(A.shape[1]):  # loop over the components of the (1, n) vectors
        diff = A[0, i] - B[0, i]
        diff -= L * int(diff * inv_L)  # note: int() truncates where np.around rounds
        total += diff * diff
    return np.sqrt(total)
vec1 = np.zeros((1, vec_length))
vec2 = np.zeros((1, vec_length))
for i in range(0, vec_length):
    vec1[0, i] = random.random()
    vec2[0, i] = random.random()

print("with numpy method:")
start = time.time()
for i in range(0, repetitions):
    dist_np(vec1, vec2)
print("done in {}".format(time.time() - start))

print("with direct method:")
start = time.time()
for i in range(0, repetitions):
    dist_direct(vec1, vec2)
print("done in {}".format(time.time() - start))
Output:
with numpy method:
done in 6.332799911499023
with direct method:
done in 1.0938000679016113
Play around with the average vector length and the repetitions to see where the sweet spot is. I expect the performance gain is not constant when varying these meta-parameters.
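Going the other way can pay off even more: rather than unrolling, let NumPy vectorize the entire pair sum so the Python-level loops disappear. A minimal sketch, not verified against the original ener, assuming the L, d, eps and r_c constants from the question:
import numpy as np

def ener_vectorized(config):
    # all pairwise separations with the minimum-image convention, in one shot
    diff = config[:, None, :] - config[None, :, :]
    diff -= L * np.around(diff / L)
    r = np.sqrt(np.sum(diff**2, axis=-1))
    r = r[np.triu_indices(len(config), k=1)]  # each pair once, no self-pairs
    V = -eps * (d**6 / r**6 - d**6 / r_c**6)
    V[r > r_c] = 0.0
    V[r < d] = np.inf
    return V.sum()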
I'm implementing a negative sampling algorithm in JAX. The idea is to sample negatives from a range excluding from this range a number of non-acceptable outputs. My current solution is close to the following:
import jax.numpy as jnp
import jax
max_range = 5
n_samples = 2
true_cases = jnp.array(
    [
        [1,2],
        [1,4],
        [0,5]
    ]
)
# I combine the true cases into a dictionary of the following form:
non_acceptable_as_negatives = {
    0: jnp.array([5]),
    1: jnp.array([2,4]),
    2: jnp.array([]),
    3: jnp.array([]),
    4: jnp.array([]),
    5: jnp.array([])
}
negatives = []
key = jax.random.PRNGKey(42)
for i in true_cases[:,0]:
    key, use_key = jax.random.split(key, 2)
    p = jnp.ones((max_range+1,))
    p = p.at[non_acceptable_as_negatives[int(i)]].set(0)
    p = p / p.sum()
    negatives.append(
        jax.random.choice(use_key,
                          jnp.arange(max_range+1),
                          (1, n_samples),
                          replace=False,
                          p=p,
                          )
    )
However, this seems
a) rather complicated, and
b) not very performant: the true cases in the original contain ~200_000 entries and max_range is ~50_000. How can I improve this solution? And is there a more JAX-like way to store the arrays of varying size that I currently keep in the non_acceptable_as_negatives dict?
Thanks in advance
You'll generally achieve better performance in JAX (as in NumPy) if you can avoid loops and use vectorized operations instead. If I'm understanding your function correctly, I think the following does roughly the same thing, but using vmap.
Since JAX does not support dictionary lookups based on traced values, I replaced your dict with a padded array:
import jax.numpy as jnp
import jax
max_range = 5
n_samples = 2
fill_value = max_range + 1
true_cases = jnp.array([
    [1,2],
    [1,4],
    [0,5]
])

non_acceptable_as_negatives = jnp.array([
    [5, fill_value],
    [2, 4],
])

@jax.vmap
def func(key, true_case):
    p = jnp.ones(max_range + 1)
    idx = true_case[0]
    replace = non_acceptable_as_negatives.at[idx].get(fill_value=fill_value)
    p = p.at[replace].set(0, mode='drop')
    return jax.random.choice(key, max_range + 1, (n_samples,), replace=False, p=p)
key = jax.random.PRNGKey(42)
keys = jax.random.split(key, len(true_cases))
result = func(keys, true_cases)
print(result)
[[3 1]
[5 1]
[1 5]]
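As a possible follow-up (my addition, untested at the ~200_000-row scale from the question): func is a pure function, so it can additionally be wrapped in jax.jit so the whole vmapped sampling is compiled once and reused:
func_jit = jax.jit(func)
result = func_jit(keys, true_cases)  # same output, compiled on first call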
JAX arrays are immutable, which means you can't edit one without copying the entire array. Here the main problem is that you rebuild the vector p at each iteration. I advise you to compute the probabilities only once, via NumPy:
import numpy as np

non_acceptable_as_negatives = {
    0: np.array([5]),
    1: np.array([2,4]),
    2: np.array([]),
    3: np.array([]),
    4: np.array([]),
    5: np.array([])
}

probas = np.ones((max_range+1, max_range+1))
for k, idx in non_acceptable_as_negatives.items():
    for i in idx:
        probas[k, i] = 0
probas = probas / probas.sum(axis=1, keepdims=True)
probas = jnp.array(probas)
Then, to further speed up the algorithm, you can compile the choice function. You can try:
from functools import partial

@partial(jax.jit, static_argnums=1)
def sample(key, max_range, probas, i):
    key, use_key = jax.random.split(key, 2)
    return jax.random.choice(use_key,
                             jnp.arange(max_range+1),
                             (1, n_samples),
                             replace=False,
                             p=probas[i],
                             ), key
And finally:
for i in true_cases[:,0]:
    neg, key = sample(key, max_range, probas, i)
    negatives.append(neg)
I have the coordinates of four points. Can anyone help me find the coordinates of a point that satisfies the following condition: the distance from that point to each of the four input points is in the range 1.9 to 2.5?
import numpy as np

dist_min = 1.9
dist_max = 2.5

# for this set, no point satisfies the condition
input_points1 = [[ 7.57447956, 6.67658376, 10.79921475],
                 [ 8.98026868, 7.69010703, 12.89377068],
                 [ 6.22242062, 7.73362942, 12.87947421],
                 [ 10.0000000, 9.00000000, 8.500000000]]
# this set has a solution
input_points2 = [[ 7.57447956, 6.67658376, 10.79921475],
                 [ 8.98026868, 7.69010703, 12.89377068],
                 [ 6.22242062, 7.73362942, 12.87947421],
                 [ 6.22473072, 4.74175054, 12.96455411]]

def Distance(point1, point2):
    return np.linalg.norm(point1 - point2)
Here is a method that finds a random point:
import numpy as np

dist_min = 1.9
dist_max = 2.5

# for this set, no point satisfies the condition
input_points1 = [[ 7.57447956, 6.67658376, 10.79921475],
                 [ 8.98026868, 7.69010703, 12.89377068],
                 [ 6.22242062, 7.73362942, 12.87947421],
                 [ 10.0000000, 9.00000000, 8.500000000]]
# this set has a solution
input_points2 = [[ 7.57447956, 6.67658376, 10.79921475],
                 [ 8.98026868, 7.69010703, 12.89377068],
                 [ 6.22242062, 7.73362942, 12.87947421],
                 [ 6.22473072, 4.74175054, 12.96455411]]

def Distance(point1, point2):
    return np.linalg.norm(np.array(point1) - np.array(point2))

def find_point(input_points):
    # if two input points are more than 2*dist_max apart, no point can be
    # within dist_max of both
    dmax = max([Distance(input_points[i], input_points[j])
                for i in range(len(input_points)-1)
                for j in range(i+1, len(input_points))])
    if dmax > 2 * dist_max:
        return None
    found = False
    while not found:
        # sample a point at a random admissible radius around a random input point
        ip = np.random.choice(len(input_points))
        p = np.random.normal(size=3)
        r = np.random.uniform(dist_min, dist_max)
        x = p / np.linalg.norm(p) * r + np.array(input_points[ip])
        found = True
        for i in input_points:
            d = Distance(i, x)
            if d <= dist_min or d >= dist_max:
                found = False
                break
    return x
a = find_point(input_points1)
print(a)
# None
b = find_point(input_points2)
print([Distance(i, b) for i in input_points2])
# [2.4877643881304805, 2.1439232926982417, 2.2860134633791795, 1.9466840567560841]
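A deterministic alternative to the rejection sampling above (my own sketch, using scipy.optimize rather than anything from the answer): express the band constraints as a penalty and minimize it; a penalty of (near) zero means a feasible point was found.
import numpy as np
from scipy.optimize import minimize

def band_penalty(x, points, dmin, dmax):
    # squared violation of the [dmin, dmax] band, summed over the input points
    dists = np.linalg.norm(points - x, axis=1)
    return np.sum(np.maximum(dmin - dists, 0)**2 + np.maximum(dists - dmax, 0)**2)

pts = np.array(input_points2)
res = minimize(band_penalty, pts.mean(axis=0), args=(pts, dist_min, dist_max))
print(res.x, band_penalty(res.x, pts, dist_min, dist_max))  # penalty ~0 -> feasible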
This looks like something you could use a k-means function for (link to Wikipedia), with just one centroid, and then check that the point is the right distance away from all the points in the data. Perhaps not the most elegant or efficient solution, but it should work.
K-Means Code adapted from this tutorial on K-Means:
import pandas as pd
from sklearn.cluster import KMeans

# data is whatever set of points you have
df = pd.DataFrame(data)

# fit k-means with a single centroid
k_means = KMeans(n_clusters=1)
labels = k_means.fit_predict(df)
centroid = k_means.cluster_centers_[0]

# compare to all points
for point in data:
    assert distance(point, centroid) >= 1.9
    assert distance(point, centroid) <= 2.5
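The distance helper is not defined in the snippet above; a minimal stand-in, assuming plain Euclidean distance is what's meant:
import numpy as np

def distance(point1, point2):
    # Euclidean distance, matching the asserts above
    return np.linalg.norm(np.array(point1) - np.array(point2))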
I am trying to make a program that tells me when a note has been pressed.
I have the following notes exported as a .wav file (The C Major Scale 4 times with different rhythms, dynamics and in different octaves):
I can get the volumes of my sound file using the following code:
from scipy.io import wavfile

def get_volume(file):
    sr, data = wavfile.read(file)
    if data.ndim > 1:
        data = data[:, 0]  # keep one channel if the file is stereo
    return data

volumes = get_volume("FILE")
Here is some information about the output:
Max: 27851
Min: -25664
Mean: -0.7569383391943734
A Sample from the array: [ -7987 -8615 -8983 -9107 -9019 -8750 -8324 -7752 -7033 -6156
-5115 -3920 -2610 -1245 106 1377 2520 3515 4364 5077
5659 6113 6441 6639 6708 6662 6518 6288 5962 5525
4963 4265 3420 2418 1264 -27 -1429 -2901 -4388 -5814
-7101 -8186 -9028 -9614 -9955 -10077 -10012 -9785 -9401 -8846]
And here is what I get when I plot the volumes array (x is the index, y is the volume):
I want to get the indices of the start and end of the notes, like the ones in the image (done by hand, not accurate):
When I looked at the data I realized that it is a 1D array, and I also noticed that when a note gets louder or quieter the curve is not smooth: it zigzags, though there is still a trend. So I can't just take the gradient (slope) at each point. Instead I thought about grouping the values into batches, getting the average gradient of each batch, and doing the calculations with that, like so:
def get_average_gradient(arr):
    # Calculates average gradient
    return sum([i - (sum(arr) / len(arr)) for i in arr]) / len(arr)

def get_note_start_end(arr_size, batch_size, arr):
    # Finds start and end indices
    ranges = []
    curr_range = [0]
    prev_slope = curr_slope = "NO SLOPE"
    has_ended = False
    for i, j in enumerate(arr):
        if j > 0:
            curr_slope = "INCREASING"
        elif j < 0:
            curr_slope = "DECREASING"
        else:
            curr_slope = "NO SLOPE"
        if prev_slope == "DECREASING" and not has_ended:
            if i == len(arr) - 1 or arr[i + 1] < 0:
                if curr_slope != "DECREASING":
                    curr_range.append((i + 1) * batch_size + batch_size)
                    ranges.append(curr_range)
                    curr_range = [(i + 1) * batch_size + batch_size + 1]
                    has_ended = True
        if has_ended and curr_slope == "INCREASING":
            has_ended = False
        prev_slope = curr_slope
    ranges[-1][-1] = arr_size - 1
    return ranges

def get_notes(batch_size, arr):
    # Gets the gradients of the batches
    out = []
    for i in range(0, len(arr), batch_size):
        if i + batch_size > len(arr):
            gradient = get_average_gradient(arr[i:])
        else:
            gradient = get_average_gradient(arr[i: i+batch_size])
        # print(gradient, i)
        out.append(gradient)
    return get_note_start_end(len(arr), batch_size, out)
notes = get_notes(128, volumes)
The problem with this is that if the batch size is too small, it returns the indices of small peaks which aren't notes on their own, and if the batch size is too big, the program misses start and end indices.
I also tried to get the notes, by using the silence.
Here is the code I used:
from pydub import AudioSegment, silence
audio = intro = AudioSegment.from_wav("C - Major - Test.wav")
dBFS = audio.dBFS
notes = silence.detect_nonsilent(audio, min_silence_len=50, silence_thresh=dBFS-10)
This worked the best, but it still wasn't good enough. Here is what I got:
It caught some notes pretty well, but it wasn't able to identify notes accurately when a note didn't get very quiet before a different one was played (like in the second and fourth scales).
I have been thinking about this problem for days and have tried most, if not all, of the good(?) ideas I had. I am new to analysing audio files. Maybe I am using the wrong data for what I want to do; maybe I need to use the frequency data (I tried getting it, but couldn't make sense of it).
Frequency code:
from scipy.fft import *
from scipy.io import wavfile
import matplotlib.pyplot as plt

def get_freq(file, start_time, end_time):
    sr, data = wavfile.read(file)
    if data.ndim > 1:
        data = data[:, 0]
    # Fourier Transform
    N = len(data)
    yf = rfft(data)
    xf = rfftfreq(N, 1 / sr)
    return xf, yf
FILE = "C - Major - Test.wav"
plt.plot(*get_freq(FILE, 0, 10))
plt.show()
And the frequency graph:
And here is the .wav file:
https://drive.google.com/file/d/1CERH-eovu20uhGoV1_O3B2Ph-4-uXpiP/view?usp=sharing
Any help is appreciated :)
I think this is what you need:
First convert the negative numbers into positive ones and smooth the line to eliminate noise; to find the lower peaks, you work with the negated values.
from scipy.io import wavfile
import matplotlib.pyplot as plt
from scipy.signal import find_peaks
import numpy as np
from scipy.signal import savgol_filter

def get_volume(file):
    sr, data = wavfile.read(file)
    if data.ndim > 1:
        data = data[:, 0]
    return data

v1 = abs(get_volume("test.wav"))

# Smooth the curve (the Savitzky-Golay window length must be odd)
volumes = savgol_filter(v1, 10001, 3)
lv = volumes * -1

# find peaks
peaks, _ = find_peaks(volumes, distance=8000, prominence=300)
lpeaks, _ = find_peaks(lv, distance=8000, prominence=300)

# plot them
plt.plot(volumes)
plt.plot(peaks, volumes[peaks], "x")
plt.plot(lpeaks, volumes[lpeaks], "o")
plt.plot(np.zeros_like(volumes), "--", color="gray")
plt.show()
Plot with your test file; x marks the high peaks and o the lower peaks.
This article presents two python libraries (Aubio, librosa) to achieve what you need and includes examples of how to use them: How to Use Python to Detect Music Onsets by Lynn Zheng
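For reference, a minimal sketch of the librosa route (my example, not taken from the article; it assumes the .wav file from the question):
import librosa

# load the recording and report note onsets in seconds
y, sr = librosa.load("C - Major - Test.wav")
onsets = librosa.onset.onset_detect(y=y, sr=sr, units='time')
print(onsets)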
In the following code, I want to optimize the objective function using Optuna.
"""
Additional modules
pip install optuna
pip install scikit-optimize
"""
import time
from py_wake.examples.data.hornsrev1 import V80
from py_wake.examples.data.hornsrev1 import Hornsrev1Site # We work with the Horns Rev 1 site, which comes already set up with PyWake.
from py_wake import BastankhahGaussian
import numpy as np
import optuna
from py_wake.turbulence_models import GCLTurbulence
from py_wake.deflection_models.jimenez import JimenezWakeDeflection
from py_wake.wind_turbines.power_ct_functions import PowerCtFunctionList, PowerCtTabular
def newSite(x, y):
    xNew = np.array([x[0] + 560*i for i in range(4)])
    yNew = np.array([y[0] + 560*i for i in range(4)])
    x_newsite = np.array([xNew[0], xNew[0], xNew[0], xNew[1], xNew[1], xNew[1], xNew[2], xNew[2], xNew[2]])
    y_newsite = np.array([yNew[0], yNew[1], yNew[2], yNew[0], yNew[1], yNew[2], yNew[0], yNew[1], yNew[2]])
    return (x_newsite, y_newsite)

def objective(trial):
    site = Hornsrev1Site()
    x, y = site.initial_position.T
    x_newsite, y_newsite = newSite(x, y)
    windTurbines = V80()
    # We ask values of c from optuna.
    c = []
    for i in range(9):
        for l in range(360):
            for k in range(23):
                varname = f'c{i}'
                minv, maxv, stepv = 0, 1, 1
                c.append(trial.suggest_int(varname, minv, maxv, step=stepv))
    C = np.array(c)
    C = C.reshape((9, 360, 23))
    for item in range(9):
        for j in range(10, 370, 10):
            for i in range(j-10, j):
                C[item][i] = C[item][j-5]
    windTurbines.powerCtFunction = PowerCtFunctionList(
        key='operating',
        powerCtFunction_lst=[PowerCtTabular(ws=[0, 100], power=[0, 0], power_unit='w', ct=[0, 0]),  # 0=No power and ct
                             windTurbines.powerCtFunction],  # 1=Normal operation
        default_value=1)
    print(C)
    operating = np.ones((9, 360, 23))  # shape=(#wt, wd, ws)
    operating[C <= 0.5] = 0
    wf_model = BastankhahGaussian(site, windTurbines, deflectionModel=JimenezWakeDeflection(), turbulenceModel=GCLTurbulence())
    # run wind farm simulation
    sim_res = wf_model(
        x_newsite, y_newsite,  # wind turbine positions
        h=None,    # wind turbine heights (defaults to the heights defined in windTurbines)
        wd=None,   # wind direction (defaults to site.default_wd (0,1,...,360 if not overridden))
        ws=None,   # wind speed (defaults to site.default_ws (3,4,...,25 m/s if not overridden))
        operating=operating
    )
    for i in range(9):
        for l in range(360):
            for k in range(23):
                if sim_res.TI_eff[i][l][k] - 0.14 > 0:
                    sim_res.Power[i][l][k] = sim_res.Power[i][l][k] - 10000*(sim_res.TI_eff[i][l][k] - 0.14)**2
    print(-float(np.sum(sim_res.Power))/1e+11)
    return -float(np.sum(sim_res.Power)/1e+11)

def optuna_hpo():
    t0 = time.perf_counter()
    num_trials = 300
    sampler = optuna.integration.SkoptSampler()
    study = optuna.create_study(sampler=sampler, direction="maximize")
    study.optimize(objective, n_trials=num_trials)
    print(f"Best params: {study.best_params}")
    print(f"Best value: {study.best_value}\n")
    print(f'elapse: {round(time.perf_counter() - t0)}s')

# start
optuna_hpo()
The problem is that the initial guess c is an array of shape (9, 360, 23), but print(f"Best params: {study.best_params}") prints only 9 values of c, while I need 9*360*23, so I think it only changes those 9 values, not all of the c values.
I have also tried writing c another way, like this:
for i in range(9*360*23):
    varname = f'c{i}'
    minv, maxv, stepv = 0, 1, 1
    c.append(trial.suggest_int(varname, minv, maxv, step=stepv))
but this way the code stops after 7 or 8 iterations because of a memory problem. I have also run it on a university server, so I think the problem is with the approach rather than the machine. So now I want to know: is there any way to apply all the c values in the optimization, i.e. instead of changing just 9 of them, changing all of them?
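One possible middle ground (a sketch, building on the 10-degree binning that objective already applies): suggest a single value per 10-degree bin and broadcast it back to the full (9, 360, 23) shape, which cuts the parameter count from 9*360*23 = 74,520 down to 9*36*23 = 7,452. Whether the sampler copes with that many dimensions is a separate question.
c = [trial.suggest_int(f'c{i}', 0, 1) for i in range(9 * 36 * 23)]
C = np.repeat(np.array(c).reshape((9, 36, 23)), 10, axis=1)  # -> shape (9, 360, 23)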
How can I find anomalous values in the following data? I am simulating a sinusoidal pattern. While I can plot the data and spot any anomalies or noise, how can I do it without plotting the data? I am looking for simple approaches other than machine learning methods.
import random
import numpy as np
import matplotlib.pyplot as plt
N = 10 # Set signal sample length
t1 = -np.pi # Simulation begins at t1
t2 = np.pi; # Simulation ends at t2
in_array = np.linspace(t1, t2, N)
print("in_array : ", in_array)
out_array = np.sin(in_array)
plt.plot(in_array, out_array, color = 'red', marker = "o") ; plt.title("numpy.sin()")
Inject random noise
noise_input = random.uniform(-.5, .5); print("Noise : ",noise_input)
in_array[random.randint(0,len(in_array)-1)] = noise_input
print(in_array)
plt.plot(in_array, out_array, color = 'red', marker = "o") ; plt.title("numpy.sin()")
Data with noise
I've thought of the following approach to your problem: since only some of the values in the time vector are anomalous, the rest of the values follow a regular progression. That means that if we gather all the data points in the vector into clusters and calculate the average step for the biggest cluster (which is essentially the pool of values representing the real data), we can use that average to do a triad detection, within a given threshold, over the vector and detect which of the elements are anomalous.
For this we need two functions: calculate_average_step, which calculates the average step for the biggest cluster of close values, and detect_anomalous_values, which yields the indexes of the anomalous values in our vector based on that average.
After we have detected the anomalous values, we can go ahead and replace them with an estimated value, which we can determine from our average step and the adjacent points in the vector.
import random
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

def calculate_average_step(array, threshold=5):
    """
    Determine the average step by doing a weighted average based on clustering of averages.
    array: our array
    threshold: the +/- offset for grouping clusters. Applicable to all elements in the array.
    """
    # determine all the steps
    steps = []
    for i in range(0, len(array) - 1):
        steps.append(abs(array[i] - array[i+1]))

    # determine the steps clusters
    clusters = []
    skip_indexes = []
    cluster_index = 0

    for i in range(len(steps)):
        if i in skip_indexes:
            continue

        # determine the cluster band (based on threshold)
        cluster_lower = steps[i] - (steps[i]/100) * threshold
        cluster_upper = steps[i] + (steps[i]/100) * threshold

        # create the new cluster
        clusters.append([])
        clusters[cluster_index].append(steps[i])

        # try to match elements from the rest of the array
        for j in range(i + 1, len(steps)):
            if not (cluster_lower <= steps[j] <= cluster_upper):
                continue
            clusters[cluster_index].append(steps[j])
            skip_indexes.append(j)

        cluster_index += 1  # increment the cluster id

    clusters = sorted(clusters, key=lambda x: len(x), reverse=True)
    biggest_cluster = clusters[0] if len(clusters) > 0 else None

    if biggest_cluster is None:
        return None

    return sum(biggest_cluster) / len(biggest_cluster)  # return our most common average

def detect_anomalous_values(array, regular_step, threshold=5):
    """
    Will scan every triad (3 points) in the array to detect anomalies.
    array: the array to iterate over.
    regular_step: the step around which we form the upper/lower band for filtering
    threshold: +/- variation between the steps of the first and median element and median and third element.
    """
    assert(len(array) >= 3)  # must have at least 3 elements

    anomalous_indexes = []
    step_lower = regular_step - (regular_step / 100) * threshold
    step_upper = regular_step + (regular_step / 100) * threshold

    # detection will be forward from i (hence 3 elements must be available for the triad)
    for i in range(0, len(array) - 2):
        a = array[i]
        b = array[i+1]
        c = array[i+2]

        first_step = abs(a-b)
        second_step = abs(b-c)

        first_belonging = step_lower <= first_step <= step_upper
        second_belonging = step_lower <= second_step <= step_upper

        # detect that both steps are alright
        if first_belonging and second_belonging:
            continue  # all is good here, nothing to do

        # detect if the first point in the triad is bad
        if not first_belonging and second_belonging:
            anomalous_indexes.append(i)

        # detect if the last point in the triad is bad
        if first_belonging and not second_belonging:
            anomalous_indexes.append(i+2)

        # detect if the mid point in the triad is bad (or everything is bad)
        if not first_belonging and not second_belonging:
            anomalous_indexes.append(i+1)
            # we won't add the others here because they will be detected by
            # the rest of the triad scans

    return sorted(set(anomalous_indexes))  # return unique indexes

if __name__ == "__main__":
    N = 10  # Set signal sample length
    t1 = -np.pi  # Simulation begins at t1
    t2 = np.pi   # Simulation ends at t2
    in_array = np.linspace(t1, t2, N)

    # add some noise
    noise_input = random.uniform(-.5, .5)
    in_array[random.randint(0, len(in_array)-1)] = noise_input
    noisy_out_array = np.sin(in_array)

    # display noisy sin
    plt.figure()
    plt.plot(in_array, noisy_out_array, color='red', marker="o")
    plt.title("noisy numpy.sin()")

    # detect anomalous values
    average_step = calculate_average_step(in_array)
    anomalous_indexes = detect_anomalous_values(in_array, average_step)

    # replace anomalous points with an estimated value based on our calculated average
    for anomalous in anomalous_indexes:
        # try forward extrapolation
        try:
            in_array[anomalous] = in_array[anomalous-1] + average_step
        # else try backward extrapolation
        except IndexError:
            in_array[anomalous] = in_array[anomalous+1] - average_step

    # generate sine wave
    out_array = np.sin(in_array)

    plt.figure()
    plt.plot(in_array, out_array, color='green', marker="o")
    plt.title("cleaned numpy.sin()")
    plt.show()
Noisy sine:
Cleaned sine:
Your problem lies in the time vector (which is 1-dimensional). You will need to apply some sort of filter to that vector.
The first thing that came to mind was medfilt (median filter) from scipy, and it looks something like this:
from scipy.signal import medfilt
l1 = [0, 10, 20, 30, 2, 50, 70, 15, 90, 100]
l2 = medfilt(l1)
print(l2)
the output of this will be:
[ 0. 10. 20. 20. 30. 50. 50. 70. 90. 90.]
The problem with this filter, though, is that if we apply noise values to the edges of the vector, like [200, 0, 10, 20, 30, 2, 50, 70, 15, 90, 100, -50], then the output is something like [ 0. 10. 10. 20. 20. 30. 50. 50. 70. 90. 90. 0.], and obviously this is not OK for the sine plot, since it will produce the same artifacts in the sine values array.
A better approach to this problem is to treat the time vector as a y output and its index values as the x input, and do a linear regression on the "time linear function" (note the quotes: it just means we're faking the 2-dimensional model by applying a fake x vector). The code uses scipy's linregress (linear regression) function:
from scipy.stats import linregress
l1 = [5, 0, 10, 20, 30, -20, 50, 70, 15, 90, 100]
l1_x = range(0, len(l1))
slope, intercept, r_val, p_val, std_err = linregress(l1_x, l1)
l1 = intercept + slope * l1_x
print(l1)
whose output will be:
[-10.45454545 -1.63636364 7.18181818 16. 24.81818182
33.63636364 42.45454545 51.27272727 60.09090909 68.90909091
77.72727273]
Now let's apply this to your time vector.
import random
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy.stats import linregress
N = 20
# N = 10 # Set signal sample length
t1 = -np.pi # Simulation begins at t1
t2 = np.pi; # Simulation ends at t2
in_array = np.linspace(t1, t2, N)
# add some noise
noise_input = random.uniform(-.5, .5);
in_array[random.randint(0, len(in_array)-1)] = noise_input
# apply filter on time array
in_array_x = range(0, len(in_array))
slope, intercept, r_val, p_val, std_err = linregress(in_array_x, in_array)
in_array = intercept + slope * in_array_x
# generate sine wave
out_array = np.sin(in_array)
print("OUT ARRAY")
print(out_array)
plt.plot(in_array, out_array, color = 'red', marker = "o") ; plt.title("numpy.sin()")
plt.show()
the output will be:
The resulting signal will be an approximation of the original, as it is with any form of extrapolation/interpolation/regression filtering.