Harmonic oscillator - python

I'm trying to solve the simple pendulum in Python. My goal is to save my results to a file in order to make a plot afterwards. Should I put the code that saves the data inside the loop, or define a new function?
NB: I'm a beginner.
Thank you.
import numpy as np

g = 9.8
L = 3
THETA_0 = np.pi / 4
THETA_DOT_0 = 0

def get_theta_double_dot(theta):
    return -(g / L) * np.sin(theta)

def theta(t):
    theta = THETA_0
    theta_dot = THETA_DOT_0
    delta_t = 0.01
    for tps in np.arange(0, t, delta_t):
        theta_double_dot = get_theta_double_dot(theta)
        theta = theta + (theta_dot * delta_t)
        theta_dot = theta_dot + (theta_double_dot * delta_t)
    return theta

Definitely store your results in a variable and save the whole thing to a file afterwards. Saving inside the loop is messy and less efficient because of the constant I/O calls.
Note that this no longer holds if you are dealing with millions of entries, in which case you would have to spare memory and write to the file in batches. But that does not seem to be a problem in your case.
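For illustration, here is a minimal sketch of that approach (my addition, reusing the constants and get_theta_double_dot from your code above; the file name and column layout are just assumptions): collect the values in lists inside the loop, then write everything out with a single call afterwards.

import numpy as np

def simulate(t_end, delta_t=0.01):
    # integrate as in the question, but also record t and theta at each step
    theta = THETA_0
    theta_dot = THETA_DOT_0
    times, thetas = [], []
    for t in np.arange(0, t_end, delta_t):
        theta_double_dot = get_theta_double_dot(theta)
        theta = theta + theta_dot * delta_t
        theta_dot = theta_dot + theta_double_dot * delta_t
        times.append(t)
        thetas.append(theta)
    return np.array(times), np.array(thetas)

times, thetas = simulate(10.0)
# one I/O call after the loop: two columns, time and angle
np.savetxt("pendulum.csv", np.column_stack([times, thetas]),
           delimiter=",", header="t,theta", comments="")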

Related

Faster Way to Implement Gaussian Smoothing? (Python 3.10, NumPy)

I'm attempting to implement a Gaussian smoothing/flattening function in my Python 3.10 script to flatten a set of XY points. For each data point, I create a Y buffer and a Gaussian kernel, which I use to flatten each of the Y points based on its neighbours.
Here are some sources on the Gaussian-smoothing method:
Source 1
Source 2
I'm using the NumPy module for my data arrays, and Matplotlib to plot the data.
I wrote a minimal reproducible example, with some randomly-generated data, and each one of the arguments needed for the Gaussian function listed at the top of the main function:
import numpy as np
import matplotlib.pyplot as plt
import time

def main():
    dataSize = 1000
    yDataRange = [-4, 4]
    reachPercentage = 0.1
    sigma = 10
    phi = 0
    amplitude = 1
    testXData = np.arange(stop = dataSize)
    testYData = np.random.uniform(low = yDataRange[0], high = yDataRange[1], size = dataSize)
    print("Flattening...")
    startTime = time.time()
    flattenedYData = GaussianFlattenData(testXData, testYData, reachPercentage, sigma, phi, amplitude)
    totalTime = round(time.time() - startTime, 2)
    print("Flattened! (" + str(totalTime) + " sec)")
    plt.title(str(totalTime) + " sec")
    plt.plot(testXData, testYData, label = "Original Data")
    plt.plot(testXData, flattenedYData, label = "Flattened Data")
    plt.legend()
    plt.show()
    plt.close()

def GaussianFlattenData(xData, yData, reachPercentage, sigma, phi, amplitude):
    flattenedYData = np.empty(shape = len(xData), dtype = float)
    # For each data point, create a Y buffer and a Gaussian kernel, and flatten it based on its neighbours
    for i in range(len(xData)):
        gaussianCenter = xData[i]
        baseReachEdges = GetGaussianValueX((GetGaussianValueY(0, 0, sigma, phi, amplitude) * reachPercentage), 0, sigma, phi, amplitude)
        reachEdgeIndices = [FindInArray(xData, GetClosestNum((gaussianCenter + baseReachEdges[0]), xData)),
                            FindInArray(xData, GetClosestNum((gaussianCenter + baseReachEdges[1]), xData))]
        currDataScanNum = reachEdgeIndices[0] - reachEdgeIndices[1]
        # Creating Y buffer and Gaussian kernel...
        currYPoints = np.empty(shape = currDataScanNum, dtype = float)
        kernel = np.empty(shape = currDataScanNum, dtype = float)
        for j in range(currDataScanNum):
            currYPoints[j] = yData[j + reachEdgeIndices[1]]
            kernel[j] = GetGaussianValueY(j, (i - reachEdgeIndices[1]), sigma, phi, amplitude)
        # Dividing kernel by its sum...
        kernelSum = np.sum(kernel)
        for j in range(len(kernel)):
            kernel[j] = (kernel[j] / kernelSum)
        # Acquiring the current flattened Y point...
        newCurrYPoints = np.empty(shape = len(currYPoints), dtype = float)
        for j in range(len(currYPoints)):
            newCurrYPoints[j] = currYPoints[j] * kernel[j]
        flattenedYData[i] = np.sum(newCurrYPoints)
    return flattenedYData

def GetGaussianValueX(y, mu, sigma, phi, amplitude):
    x = ((sigma * np.sqrt(-2 * np.log(y / (amplitude * np.cos(phi))))) + mu)
    return [x, (mu - (x - mu))]

def GetGaussianValueY(x, mu, sigma, phi, amplitude):
    y = ((amplitude * np.cos(phi)) * np.exp(-np.power(((x - mu) / sigma), 2) / 2))
    return y

def GetClosestNum(base, nums):
    closestIdx = 0
    closestDiff = np.abs(base - nums[0])
    idx = 1
    while (idx < len(nums)):
        currDiff = np.abs(base - nums[idx])
        if (currDiff < closestDiff):
            closestDiff = currDiff
            closestIdx = idx
        idx += 1
    return nums[closestIdx]

def FindInArray(arr, value):
    for i in range(len(arr)):
        if (arr[i] == value):
            return i
    return -1

if (__name__ == "__main__"):
    main()
In the example above, I generate 1,000 random data points in the range -4 to 4. The reachPercentage variable is the percentage of the Gaussian amplitude above which the Gaussian values will be inserted into the kernel. The sigma, phi and amplitude variables are all inputs to the Gaussian function which actually generates the Gaussians for each Y data point to be smoothed.
I wrote some additional utility functions which I needed as well.
The script above works to smooth the generated data, and I get the following plot:
Blue being the original data, and Orange being the flattened data.
However, it takes a surprisingly long time to smooth even small amounts of data. In the example above I generate 1,000 data points, and it takes ~8 seconds to flatten them. With datasets exceeding 10,000 points, it can easily take over 10 minutes.
Since this is a very popular and well-known way of smoothing data, I was wondering why this script runs so slowly. I originally had this implemented with standard Python lists and append calls; however, it was extremely slow. I hoped that using NumPy arrays instead, without calling append, would make it faster, but that is not really the case.
Is there a way to speed up this process? Is there a Gaussian-smoothing function that already exists out there, that takes in the same arguments, and that could do the job faster?
Thanks for reading my post, any guidance is appreciated.
You have a number of loops - those tend to slow you down.
Here are two examples. Refactoring GetClosestNum to this:
def GetClosestNum(base, nums):
    nums = np.array(nums)
    diffs = np.abs(nums - base)
    return nums[np.argmin(diffs)]
and refactoring FindInArray to this:
def FindInArray(arr, value):
    res = np.where(np.array(arr) - value == 0)[0]
    if res.size > 0:
        return res[0]
    else:
        return -1
lets me process 5000 datapoints in 1.5s instead of the 54s it took with your original code.
Numpy lets you do a lot of powerful stuff without looping - Jake Vanderplas has a few really good (oldie but goodie) videos on using numpy constructs in place of loops to massively increase speed - https://www.youtube.com/watch?v=EEUXKG97YRw.
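For reference (my addition, not part of this answer): if an existing routine is acceptable, SciPy already ships an optimized 1-D Gaussian smoother, scipy.ndimage.gaussian_filter1d. It does not take the same reachPercentage/phi/amplitude arguments as your function, so treat this only as a rough sketch of an alternative, not a drop-in replacement:

import numpy as np
from scipy.ndimage import gaussian_filter1d

testYData = np.random.uniform(low=-4, high=4, size=1000)
# sigma is measured in samples here; 10 roughly matches the question's setting
flattenedYData = gaussian_filter1d(testYData, sigma=10, mode='nearest')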
After asking people on the Python forums, as well as doing some more searching online, I managed to find much faster alternatives to most of the functions I had in my loop.
In order to get a better image of which parts of the smoothing function took up the most time, I subdivided the code into 4 parts, and timed each one to see how much each part contributed to the total runtime. To my surprise, the part that took up over 90% of the time, was the first part of the loop:
gaussianCenter = xData[i]
baseReachEdges = GetGaussianValueX((GetGaussianValueY(0, 0, sigma, phi, amplitude) * reachPercentage), 0, sigma, phi, amplitude)
reachEdgeIndices = [FindInArray(xData, GetClosestNum((gaussianCenter + baseReachEdges[0]), xData)),
                    FindInArray(xData, GetClosestNum((gaussianCenter + baseReachEdges[1]), xData))]
currDataScanNum = reachEdgeIndices[0] - reachEdgeIndices[1]
Luckily, the people on the Python forums and here were able to assist me: I was able to find a much faster alternative GetClosestNum function (thanks Vin), and to remove the FindInArray function entirely.
There are also replacements in the latter parts of the loop: the three for loops there were all replaced by vectorized NumPy expressions.
The whole script now looks like this:
import numpy as np
import matplotlib.pyplot as plt
import time

def main():
    dataSize = 3073
    yDataRange = [-4, 4]
    reachPercentage = 0.001
    sigma = 100
    phi = 0
    amplitude = 1
    testXData = np.arange(stop = dataSize)
    testYData = np.random.uniform(low = yDataRange[0], high = yDataRange[1], size = dataSize)
    print("Flattening...")
    startTime = time.time()
    flattenedYData = GaussianFlattenData(testXData, testYData, reachPercentage, sigma, phi, amplitude)
    totalTime = round(time.time() - startTime, 2)
    print("Flattened! (" + str(totalTime) + " sec)")
    plt.title(str(totalTime) + " sec")
    plt.plot(testXData, testYData, label = "Original Data")
    plt.plot(testXData, flattenedYData, label = "Flattened Data")
    plt.legend()
    plt.show()
    plt.close()

def GaussianFlattenData(xData, yData, reachPercentage, sigma, phi, amplitude):
    flattenedYData = np.empty(shape = len(xData), dtype = float)
    # For each data point, create a Y buffer and a Gaussian kernel, and flatten it based on its neighbours
    for i in range(len(xData)):
        gaussianCenter = xData[i]
        baseReachEdges = GetGaussianValueX((GetGaussianValueY(0, 0, sigma, phi, amplitude) * reachPercentage), 0, sigma, phi, amplitude)
        reachEdgeIndices = [np.where(xData == GetClosestNum((gaussianCenter + baseReachEdges[0]), xData))[0][0],
                            np.where(xData == GetClosestNum((gaussianCenter + baseReachEdges[1]), xData))[0][0]]
        currDataScanNum = reachEdgeIndices[0] - reachEdgeIndices[1]
        # Creating Y buffer and Gaussian kernel...
        currYPoints = yData[reachEdgeIndices[1] : reachEdgeIndices[1] + currDataScanNum]
        kernel = GetGaussianValueY(np.arange(currDataScanNum), (i - reachEdgeIndices[1]), sigma, phi, amplitude)
        # Acquiring the current flattened Y point...
        flattenedYData[i] = np.sum(currYPoints * (kernel / np.sum(kernel)))
    return flattenedYData

def GetGaussianValueX(y, mu, sigma, phi, amplitude):
    x = ((sigma * np.sqrt(-2 * np.log(y / (amplitude * np.cos(phi))))) + mu)
    return [x, (mu - (x - mu))]

def GetGaussianValueY(x, mu, sigma, phi, amplitude):
    y = ((amplitude * np.cos(phi)) * np.exp(-np.power(((x - mu) / sigma), 2) / 2))
    return y

def GetClosestNum(base, nums):
    nums = np.asarray(nums)
    return nums[(np.abs(nums - base)).argmin()]

if (__name__ == "__main__"):
    main()
Instead of taking ~8 seconds to process the 1,000 data points, it now takes merely ~0.15 seconds!
It also takes ~1.75 seconds to process the 10,000 points.
Thanks for the feedback everyone, cheers!

How to solve a 9-equations system of non linear DE with python?

I'm desperately trying to solve (and display the graph) a system made of nine nonlinear differential equations which model the path of a boomerang. The system is the following:
All the letters on the left side are variables, the others are either constants or known functions depending on v_G and w_z
I have tried with scipy.odeint with no conclusive results (I had this issue but the workaround did not work.)
I am beginning to think that the problem is linked to the fact that these equations are nonlinear, or that a function in a denominator might cause a singularity that the scipy solver is simply unable to handle. However, I am not familiar with that sort of mathematics.
What possibilities python-wise do I have to solve this set of equations?
EDIT : Sorry if I was not clear enough. Since it models the path of a boomerang, my goal is not to solve analytically this system (ie I don't care about the mathematical expression of each function), but rather to get the values of each function for a specific time range (say, from t1 = 0s to t2 = 15s with an interval of 0.01s between each value) in order to display the graph of each function and the graph of the center of mass of the boomerang (X,Y,Z are its coordinates).
Here is the code I tried :
import scipy.integrate as spi
import numpy as np

# Constants
I3 = 10**-3
lamb = 1
L = 5*10**-1
mu = I3
m = 0.1
Cz = 0.5
rho = 1.2
S = 0.03*0.4
Kz = 1/2*rho*S*Cz
g = 9.81

# Initial conditions
omega0 = 20*np.pi
V0 = 25
Psi0 = 0
theta0 = np.pi/2
phi0 = 0
psi0 = -np.pi/9
X0 = 0
Y0 = 0
Z0 = 1.8

INPUT = (omega0, V0, Psi0, theta0, phi0, psi0, X0, Y0, Z0)  # initial conditions

def diff_eqs(t, INP):
    '''The main set of equations'''
    Y = np.zeros((9))
    Y[0] = (1/I3) * (Kz*L*(INP[1]**2+(L*INP[0])**2))
    Y[1] = -(lamb/m)*INP[1]
    Y[2] = -(1/(m * INP[1])) * ( Kz*L*(INP[1]**2+(L*INP[0])**2) + m*g) + (mu/I3)/INP[0]
    Y[3] = (1/(I3*INP[0]))*(-mu*INP[0]*np.sin(INP[6]))
    Y[4] = (1/(I3*INP[0]*np.sin(INP[3]))) * (mu*INP[0]*np.cos(INP[5]))
    Y[5] = -np.cos(INP[3])*Y[4]
    Y[6] = INP[1]*(-np.cos(INP[5])*np.cos(INP[4]) + np.sin(INP[5])*np.sin(INP[4])*np.cos(INP[3]))
    Y[7] = INP[1]*(-np.cos(INP[5])*np.sin(INP[4]) - np.sin(INP[5])*np.cos(INP[4])*np.cos(INP[3]))
    Y[8] = INP[1]*(-np.sin(INP[5])*np.sin(INP[3]))
    return Y  # For odeint

t_start = 0.0
t_end = 20
t_step = 0.01
t_range = np.arange(t_start, t_end, t_step)
RES = spi.odeint(diff_eqs, INPUT, t_range)
However, I keep getting the same problem as shown here, and especially the error message:
Excess work done on this call (perhaps wrong Dfun type)
I am not quite sure what it means, but it looks like the solver has trouble solving the system. In any case, when I try to display the 3D path from the X, Y, Z coordinates, I only get 3 or 4 points where there should be something like 2000.
So my questions are:
- Am I doing something wrong in my code?
- If not, is there another, maybe more sophisticated, tool to solve this system?
- If not, is it even possible to get what I want from this system of ODEs?
Thanks in advance
There are several issues:
- If I copy the code, it does not run.
- The workaround you mention does not work with odeint; the given solution uses ode.
- The scipy reference for odeint says: "For new code, use scipy.integrate.solve_ivp to solve a differential equation."
- The call RES = spi.odeint(diff_eqs, INPUT, t_range) has to be consistent with the function head def diff_eqs(t, INP). By default odeint calls the function as f(y, t), so either swap the arguments in the function definition or pass tfirst=True (see the sketch below).
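For reference, a minimal sketch of a consistent call (my addition; it assumes SciPy >= 1.1.0, which added the tfirst flag, and uses a toy right-hand side rather than the boomerang equations):

import numpy as np
import scipy.integrate as spi

def diff_eqs(t, y):
    # toy right-hand side with the (t, y) argument order used in the question
    return -0.5 * y

t_range = np.arange(0.0, 20.0, 0.01)
y0 = np.array([1.0])
# tfirst=True tells odeint that diff_eqs takes (t, y) instead of the default (y, t)
RES = spi.odeint(diff_eqs, y0, t_range, tfirst=True)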
There are some issues with the mathematical formulas too:
- Have a look at the 3rd formula in your picture. It has no tendency (time-derivative) term; it starts with a zero. What does that mean?
- It is hard to check whether you have translated the formulas correctly into code, since the code does not follow the formulas strictly.
Below I tried a solution with scipy's solve_ivp. In case A I am able to run a pendulum, but in case B no meaningful solution for the boomerang can be found. So check the maths; I suspect there is an error in the mathematical expressions.
For the graphics use pandas to plot all variables together (see code below).
import scipy.integrate as spi
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt  # needed for the graphics section below

def diff_eqs_boomerang(t, Y):
    INP = Y
    dY = np.zeros((9))
    dY[0] = (1/I3) * (Kz*L*(INP[1]**2+(L*INP[0])**2))
    dY[1] = -(lamb/m)*INP[1]
    dY[2] = -(1/(m * INP[1])) * ( Kz*L*(INP[1]**2+(L*INP[0])**2) + m*g) + (mu/I3)/INP[0]
    dY[3] = (1/(I3*INP[0]))*(-mu*INP[0]*np.sin(INP[6]))
    dY[4] = (1/(I3*INP[0]*np.sin(INP[3]))) * (mu*INP[0]*np.cos(INP[5]))
    dY[5] = -np.cos(INP[3])*INP[4]
    dY[6] = INP[1]*(-np.cos(INP[5])*np.cos(INP[4]) + np.sin(INP[5])*np.sin(INP[4])*np.cos(INP[3]))
    dY[7] = INP[1]*(-np.cos(INP[5])*np.sin(INP[4]) - np.sin(INP[5])*np.cos(INP[4])*np.cos(INP[3]))
    dY[8] = INP[1]*(-np.sin(INP[5])*np.sin(INP[3]))
    return dY

def diff_eqs_pendulum(t, Y):
    dY = np.zeros((3))
    dY[0] = Y[1]
    dY[1] = -Y[0]
    dY[2] = Y[0]*Y[1]
    return dY

t_start, t_end = 0.0, 12.0

case = 'A'
if case == 'A':  # pendulum
    Y = np.array([0.1, 1.0, 0.0])
    Yres = spi.solve_ivp(diff_eqs_pendulum, [t_start, t_end], Y, method='RK45', max_step=0.01)
if case == 'B':  # boomerang
    Y = np.array([omega0, V0, Psi0, theta0, phi0, psi0, X0, Y0, Z0])
    print('Y initial:'); print(Y); print()
    Yres = spi.solve_ivp(diff_eqs_boomerang, [t_start, t_end], Y, method='RK45', max_step=0.01)

#---- graphics ---------------------
yy = pd.DataFrame(Yres.y).T
tt = np.linspace(t_start, t_end, yy.shape[0])

with plt.style.context('fivethirtyeight'):
    plt.figure(1, figsize=(20,5))
    plt.plot(tt, yy, lw=8, alpha=0.5)
    plt.grid(axis='y')
    for j in range(3):
        plt.fill_between(tt, yy[j], 0, alpha=0.2, label='y['+str(j)+']')
    plt.legend(prop={'size':20})

Is there any good way to optimize the speed of this python code?

I have the following piece of code, which evaluates a numerical expression and uses it to integrate over a certain range of values. The current piece of code runs in about 8.6 s, but I am just using mock values, and my actual arrays are much larger. In particular, my actual freq_c has shape (3800, 101) and number_bin has shape (3800, 100), which makes the following code really inefficient: the total execution time will be close to 9 minutes for the actual arrays. One part of the code that is quite slow is the evaluation of k_one_third and k_two_third, for which I have also used numexpr.evaluate(".."), which speeds up the code by about 10-20%. I have avoided numexpr below, though, so that anyone can run it without having to install the package. Is there any way to improve the speed of this code further? An improvement of a few factors would be good enough. Please note that the for loop is almost unavoidable due to memory issues: the arrays are really large, so I manipulate each axis at a time through the loop. I also wonder if numba jit optimisation is possible here.
import numpy as np
import scipy
from scipy.integrate import simps as simps
import time

def k_one_third(x):
    return (2.*np.exp(-x**2)/x**(1/3) + 4./x**(1/6)*np.exp(-x)/(1+x**(1/3)))**2

def k_two_third(x):
    return (np.exp(-x**2)/x**(2/3) + 2.*x**(5/2)*np.exp(-x)/(6.+x**3))**2

def spectrum(freq_c, number_bin, frequency, gamma, theta):
    theta_gamma_factor = np.einsum('i,j->ij', theta**2, gamma**2)
    theta_gamma_factor += 1.
    t_g_bessel_factor = 1.-1./theta_gamma_factor
    number = np.concatenate((number_bin, np.zeros((number_bin.shape[0], 1), dtype=number_bin.dtype)), axis=1)
    number_theta_gamma = np.einsum('jk, ik->ijk', theta_gamma_factor**2*1./gamma**3, number)
    final = np.zeros((np.size(freq_c[:,0]), np.size(theta), np.size(frequency)))
    for i in range(np.size(frequency)):
        b_n_omega_theta_gamma = frequency[i]**2*number_theta_gamma
        eta = theta_gamma_factor**(1.5)*frequency[i]/2.
        eta = np.einsum('jk, ik->ijk', eta, 1./freq_c)
        bessel_eta = np.einsum('jl, ijl->ijl', t_g_bessel_factor, k_one_third(eta))
        bessel_eta += k_two_third(eta)
        eta = None
        integrand = np.multiply(bessel_eta, b_n_omega_theta_gamma, out=bessel_eta)
        final[:,:, i] = simps(integrand, gamma)
        integrand = None
    return final

frequency = np.linspace(1, 100, 100)
theta = np.linspace(1, 3, 100)
gamma = np.linspace(2, 200, 101)
freq_c = np.random.randint(1, 200, size=(50, 101))
number_bin = np.random.randint(1, 100, size=(50, 100))

time1 = time.time()
spectra = spectrum(freq_c, number_bin, frequency, gamma, theta)
print(time.time()-time1)
I profiled the code and found that k_one_third() and k_two_third() are slow. There are some duplicated calculations in the two functions.
By merging the two functions into one and decorating it with @numba.jit(parallel=True), I got a 4x speedup.
import numpy as np
import numba

@numba.jit(parallel=True)
def k_one_two_third(x):
    x0 = x ** (1/3)
    x1 = np.exp(-x ** 2)
    x2 = np.exp(-x)
    one = (2*x1/x0 + 4*x2/(x**(1/6)*(x0 + 1)))**2
    two = (2*x**(5/2)*x2/(x**3 + 6) + x1/x**(2/3))**2
    return one, two
As said in the comments, large parts of the code should be rewritten to get the best performance.
I have only modified the Simpson integration and tweaked @HYRY's answer a bit. This speeds up the calculation from 26.15 s to 1.76 s (15x) on the test data you provided. By also replacing the np.einsums with simple loops, this should end up at less than a second. (About 0.4 s comes from the improved integration, 24 s from k_one_two_third(x).)
For getting performance out of Numba, read the Numba documentation. The latest Numba version (0.39), the Intel SVML package and options like fastmath=True make quite a big impact on your example.
Code
import numpy as np
import numba as nb

# a bit faster than HYRY's version
@nb.njit(parallel=True, fastmath=True, error_model='numpy')
def k_one_two_third(x):
    one = np.empty(x.shape, dtype=x.dtype)
    two = np.empty(x.shape, dtype=x.dtype)
    for i in nb.prange(x.shape[0]):
        for j in range(x.shape[1]):
            for k in range(x.shape[2]):
                x0 = x[i,j,k] ** (1/3)
                x1 = np.exp(-x[i,j,k] ** 2)
                x2 = np.exp(-x[i,j,k])
                one[i,j,k] = (2*x1/x0 + 4*x2/(x[i,j,k]**(1/6)*(x0 + 1)))**2
                two[i,j,k] = (2*x[i,j,k]**(5/2)*x2/(x[i,j,k]**3 + 6) + x1/x[i,j,k]**(2/3))**2
    return one, two

# improved integration (composite Simpson with constant spacing)
@nb.njit(fastmath=True)
def simpson_nb(y, dx):
    s = y[0] + y[-1]
    n = y.shape[0]//2
    for i in range(n-1):
        s += 4.*y[i*2+1]
        s += 2.*y[i*2+2]
    s += 4*y[(n-1)*2+1]
    return (dx/3.)*s

# note: np.einsum is not supported in nopython mode, so with recent Numba
# versions this decorator may need to be removed from spectrum
@nb.jit(fastmath=True)
def spectrum(freq_c, number_bin, frequency, gamma, theta):
    theta_gamma_factor = np.einsum('i,j->ij', theta**2, gamma**2)
    theta_gamma_factor += 1.
    t_g_bessel_factor = 1.-1./theta_gamma_factor
    number = np.concatenate((number_bin, np.zeros((number_bin.shape[0], 1), dtype=number_bin.dtype)), axis=1)
    number_theta_gamma = np.einsum('jk, ik->ijk', theta_gamma_factor**2*1./gamma**3, number)
    final = np.empty((np.size(frequency), np.size(freq_c[:,0]), np.size(theta)))
    # assume that dx is constant over the integration
    # speed improvement over scipy.simps is about 4x
    # numba version compared to scipy.simps(y, x) is about 60x
    dx = gamma[1]-gamma[0]
    for i in range(np.size(frequency)):
        b_n_omega_theta_gamma = frequency[i]**2*number_theta_gamma
        eta = theta_gamma_factor**(1.5)*frequency[i]/2.
        eta = np.einsum('jk, ik->ijk', eta, 1./freq_c)
        one, two = k_one_two_third(eta)
        bessel_eta = np.einsum('jl, ijl->ijl', t_g_bessel_factor, one)
        bessel_eta += two
        integrand = np.multiply(bessel_eta, b_n_omega_theta_gamma, out=bessel_eta)
        # reorder array
        for j in range(integrand.shape[0]):
            for k in range(integrand.shape[1]):
                final[i, j, k] = simpson_nb(integrand[j,k,:], dx)
    return final
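A quick sanity check of the Numba Simpson routine (my addition; it just compares simpson_nb against scipy's simps on an evenly spaced grid, assuming the definitions above have already been run):

from scipy.integrate import simps

gamma = np.linspace(2, 200, 101)   # evenly spaced, odd number of points
y = np.sin(gamma) + 2.0
dx = gamma[1] - gamma[0]
print(simps(y, gamma))             # SciPy reference value
print(simpson_nb(y, dx))           # Numba version; the two should agree closely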

Generating vibrato sine wave

I'm trying to create a vibrato by oscillating between two frequencies, 430 Hz and 450 Hz, storing the 16-bit samples in the list wav. However, the audible range of the frequency oscillation seems to increase across the clip. Does anyone know why?
edit: rewrote code to be more clear/concise
# vibrato.py
import math

maxamp = 2**15 - 1  # max signed short
wav = []
(t, dt) = (0, 1 / 44100)
while t < 6.0:
    f = 440 + 10 * math.sin(2 * math.pi * 6 * t)
    samp = maxamp * math.sin(2 * math.pi * f * t)
    wav.append(samp)
    t += dt
--
Update: because the response uses numpy, I'll update my code for plain python3
# vibrato.py
import math

maxamp = 2**15 - 1  # max signed short
wav = []
(t, dt) = (0, 1 / 44100)
phase = 0
while t < 6.0:
    f = 440 + 10 * math.sin(2 * math.pi * 6 * t)
    phase += 2 * math.pi * f * t
    samp = maxamp * math.sin(phase)
    wav.append(samp)
    t += dt
The issue has to do with an implied phase change that goes along with changing the frequency. In short, when you calculate the response relative to each point in a timeline, the phase of the oscillation is different for each frequency at each time (except at the starting point, where they are all the same). Therefore, moving between frequencies is like moving between different phases. For the case of moving between two distinct frequencies, this can be corrected post hoc by adjusting the overall signal phase based on the frequency change. I've explained this in another answer, so I won't explain it again here; below I just show the initial plot that highlights the problem, and then how to fix it. The main thing added here is the importance of a good diagnostic plot, and the right plot for this is a spectrogram.
Here's an example:
import numpy as np
import math
import matplotlib.pyplot as plt

dt = 1./44100
time = np.arange(0., 6., dt)
frequency = 440. - 10*np.sin(2*math.pi*time*1.)  # a 1Hz oscillation
waveform = np.sin(2*math.pi*time*frequency)
Pxx, freqs, bins, im = plt.specgram(waveform, NFFT=4*1024, Fs=44100, noverlap=90, cmap=plt.cm.gist_heat)
plt.show()
Note that the span of the frequency oscillation is increasing (as you initially heard). Applying the correction linked to above gives:
dt = 1./44100  # same sample rate as above
time = np.arange(0., 6., dt)
frequency = 440. - 10*np.sin(2*math.pi*time*1.)  # a 1Hz oscillation
phase_correction = np.add.accumulate(time*np.concatenate((np.zeros(1), 2*np.pi*(frequency[:-1]-frequency[1:]))))
waveform = np.sin(2*math.pi*time*frequency + phase_correction)
Which is much closer to what was intended, I hope.
Another way to conceptualize this, which might make more sense in the context of looping through each time step (as the OP does), and which is closer to the physical model, is to keep track of the phase at each step and determine the new amplitude from both the amplitude and phase of the previous step, combining these with the new frequency. I don't have the patience to let this run in pure Python, but in numpy the solution looks like this, and gives a similar result:
dt = 1./44100
time = np.arange(0., 6., dt)
f = 440. - 10*np.sin(2*math.pi*time*1.) # a 1Hz oscillation
delta_phase = 2 * math.pi * f * dt
phase = np.cumsum(delta_phase) # add up the phase differences along timeline (same as np.add.accumulate)
wav = np.sin(phase)
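For completeness, a minimal pure-Python version of that phase-accumulation loop (my addition, not from the original answer), matching the loop structure of the question. The key point, compared with the updated code in the question, is that the phase advances by 2*pi*f*dt per sample, not 2*pi*f*t:

import math

maxamp = 2**15 - 1  # max signed short
wav = []
t, dt, phase = 0.0, 1 / 44100, 0.0
while t < 6.0:
    f = 440 + 10 * math.sin(2 * math.pi * 6 * t)  # instantaneous frequency
    phase += 2 * math.pi * f * dt                 # per-sample phase increment
    wav.append(maxamp * math.sin(phase))
    t += dt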

How to filter a non periodic function

I'm new to Python programming and I wanted to know if there was a way to create a high-pass filter for a periodic function like so:
import numpy as np
from scipy.signal import lfilter, firwin, butter
from pylab import figure, plot, show
sample_rate = .0167
nsamples = 480
F_1Hz = 1.38e-4
A_1Hz = 1.0
F_15Hz = .0011
A_15Hz = .5
t = np.arange(nsamples) / sample_rate
signal = A_1Hz * np.sin(2*np.pi*F_1Hz*t) + A_15Hz*np.sin(2*np.pi*F_15Hz*t)
signal[::120] = 2
figure(1)
plot(t,signal,'b')
show()
I want to keep the higher frequency (.0011 Hz) as well as the spikes of 2 at certain spots; however, the amplitude of the .0011 Hz component needs to stay at .5 and the spikes need to stay at an amplitude of 2, so normalizing isn't an option. Moreover, if I made the function have the spikes of 2 at non-periodic intervals (say a spike only at signal[prime numbers]), could I still filter it correctly, with the correct amplitudes?
One possibility is to use a custom high-pass filter. A simple way to make a high-pass filter is to start with a low-pass filter:
def lp_win_sinc(tw, fc, n):
    # integer division (m//2) so the indexing and roll work in Python 3
    m = int(np.ceil(2./tw) * 2)
    samps = np.arange(m+1)
    shift = samps - m//2
    shift[m//2] = 1
    h = np.sin(2 * np.pi * fc * shift)/shift
    h[m//2] = 2 * np.pi * fc
    h = h * np.blackman(m+1)
    h = h / h.sum()
    s = np.zeros(n)
    s[:len(h)] = h
    return np.roll(s, -(m//2))
Then construct a simple high-pass
def hp_win_sinc(tw, fc, n):
    hp = -lp_win_sinc(tw, fc, n)
    hp[0] = hp[0] + 1
    return hp
(The ideas behind these are found in http://www.dspguide.com/pdfbook.htm, look at the chapter on windowed-sinc filters.)
Note: these are the impulse responses of the respective filters. To apply them to your data you can either convolve the impulse with your data, or you can fft your data and the impulse response and take the inverse fft of their product. In your case, e.g.
hp = hp_win_sinc(0.2, 0.001, len(signal))
f_hp = np.fft.rfft(hp)
f_d = np.fft.rfft(signal)
filt_sig = np.fft.irfft( f_hp * f_d)
Plotting this quick result gives a plot of the filtered data.
Depending on your exact application, you might be able to simply adjust the gain to recover the 2.0 and 0.5 amplitudes. Hope this helps. Good luck!
The answer is quite likely no.
The reason behind this blunt answer is that your spikes (which have a value of 2) stand on top of the signal. If you filter anything away, your signal amplitude may change at the spikes.
If you could change this:
signal[::120] = 2
into
signal[::120] += 2
then such a filter can be constructed. What do you want to filter away? Anything below .0011 Hz?
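For reference, a minimal sketch of one such filter (my addition, not part of this answer). It assumes the additive spikes suggested above, and the 5e-4 Hz cutoff is an arbitrary choice between the 1.38e-4 Hz component to remove and the .0011 Hz component to keep; even so, the spike heights will only be approximately preserved:

import numpy as np
from scipy.signal import butter, filtfilt

sample_rate = .0167                      # Hz, as in the question (roughly one sample per minute)
nsamples = 480
t = np.arange(nsamples) / sample_rate
signal = 1.0*np.sin(2*np.pi*1.38e-4*t) + .5*np.sin(2*np.pi*.0011*t)
signal[::120] += 2                       # additive spikes, as suggested above

cutoff = 5e-4                            # Hz, assumed
b, a = butter(4, cutoff / (sample_rate / 2.0), btype='highpass')
filtered = filtfilt(b, a, signal)        # zero-phase filtering keeps the spikes aligned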
