In Python, I'm trying to write a function alias_freq(f_signal, f_sample, n) that behaves as follows:
def alias_freq(f_signal, f_sample, n):
    f_Nyquist = f_sample/2.0
    if f_signal <= f_Nyquist:
        return <the n'th frequency higher than f_signal that will alias to f_signal>
    else:
        return <the frequency (lower than f_Nyquist) that f_signal will alias to>
The following is the code I have been using to test the above function (f_signal, f_sample, and n below are chosen arbitrarily, just to fill out the code):
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 2*np.pi, 500)
f_signal = 10.0
y1 = np.sin(f_signal*t)
plt.plot(t, y1)

f_sample = 13.0
# Sample f_sample points over one period (linspace's num argument must be an int)
t_sample = np.linspace(0, int(f_sample)*(2*np.pi/f_sample), int(f_sample))
y_sample = np.sin(f_signal*t_sample)
plt.scatter(t_sample, y_sample)

n = 2
f_alias = alias_freq(f_signal, f_sample, n)
y_alias = np.sin(f_alias*t)
plt.plot(t, y_alias)

plt.xlim(-.1, 2*np.pi + .1)
plt.show()
My thinking is that if the function works properly, the plots of both y1 and y_alias will hit every scattered point from y_sample. So far I have been completely unsuccessful in getting either the if statement or the else statement in the function to do what I think it should, which makes me believe that either I don't understand aliasing nearly as well as I want to, or my test code is no good.
My questions are: Preliminarily, is the test code I'm using sound for what I'm trying to do? And primarily, what is the alias_freq function that I am looking for?
Also please note: If some Python package has a function just like this already built in, I'd love to hear about it - however, part of the reason I'm doing this is to give myself a device to understand phenomena like aliasing better, so I'd still like to see what my function should look like.
If I understood the question correctly, the frequency of the aliased signal is abs(f_sample * n - f_signal), where n is the integer for which n * f_sample is closest to f_signal.
Thus:
n = round(f_signal / float(f_sample))
f_alias = abs(f_sample * n - f_signal)
This should work for frequencies under and over Nyquist.
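As a quick sanity check with the question's test values (my own illustration, not part of the original answer):

f_signal, f_sample = 10.0, 13.0
n = round(f_signal / float(f_sample))   # n = 1
f_alias = abs(f_sample * n - f_signal)  # 3.0: a 10 Hz signal sampled at 13 Hz shows up at 3 Hz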
I figured out the answer to my question and just realized that I forgot to post it here, sorry. Turns out it was something silly: Antti's answer is basically right, but the way I wrote the test code I need an f_sample-1 in the alias_freq function where I just had an f_sample (presumably because linspace includes the endpoint, so f_sample samples over one period give a spacing of 2*pi/(f_sample-1), i.e. an effective sample rate of f_sample-1). There's still a phase shift that happens sometimes, but plugging in either 0 or pi for the phase shift has worked for me every time; I think it's just due to even or odd folding. The working function and test code are below.
import numpy as np
import matplotlib.pyplot as plt

# Given a sample frequency and a signal frequency, return the frequency
# that the signal frequency will be aliased to.
def alias_freq(f_signal, f_sample, n):
    f_alias = np.abs((f_sample-1)*n - f_signal)
    return f_alias

t = np.linspace(0, 2*np.pi, 500)
f_signal = 13
y1 = np.sin(f_signal*t)
plt.plot(t, y1)

f_sample = 7
t_sample = np.linspace(0, int(f_sample)*(2*np.pi/f_sample), int(f_sample))
y_sample = np.sin(f_signal*t_sample)
plt.scatter(t_sample, y_sample)

f_alias = alias_freq(f_signal, f_sample, 3)
y_alias = np.sin(f_alias*t + np.pi)  # Sometimes with a phase shift, usually np.pi for integer f_signal and f_sample, sometimes without.
plt.plot(t, y_alias)

plt.xlim(-.1, 2*np.pi + .1)
plt.show()
Here is a simple Python aliased-frequency calculator:
def get_aliased_freq(f, fs):
    """
    return aliased frequency of f sampled at fs
    """
    fn = fs / 2.0  # Nyquist frequency
    if int(f / fn) % 2 == 0:
        # an even number of Nyquist bands below f: f folds directly onto f % fn
        return f % fn
    else:
        # an odd number of Nyquist bands below f: f reflects back from Nyquist
        return fn - (f % fn)
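For instance, with the numbers from the original question (my own check, not part of this answer):

print(get_aliased_freq(10.0, 13.0))  # 3.0, matching abs(13.0 - 10.0)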
For my current assignment, I am to establish the stability of intersection/equilibrium points between two nullclines, which I have defined as follows:
# k, beta, c, v, delta, pD, alpha, r and m are model parameters defined elsewhere
def fNullcline(F):
    P = (1/k)*((1/beta)*np.log(F/(1-F)) - c*F + v)
    return P

def pNullcline(P):
    F = (1/delta)*(pD - alpha*P + (r*P**2)/(m**2 + P**2))
    return F
I also have a method "stability" that applies the Hurwitz criterion to the underlying system's Jacobian:
def dPdt(P, F):
    return pD - delta*F - alpha*P + (r*P**2)/(m**2 + P**2)

def dFdt(P, F):
    return s*(1/(1 + sym.exp(-beta*(-v + c*F + k*P))) - F)

def stability(P, F):
    x = sym.Symbol('x')
    ax = sym.diff(dPdt(x, F), x)
    ddx = sym.lambdify(x, ax)
    a = ddx(P)
    # shortening the code here: the same happens for b, c, d
    matrix = [[a, b], [c, d]]
    eigenvalues, eigenvectors = np.linalg.eig(matrix)
    e1 = eigenvalues[0]
    e2 = eigenvalues[1]
    if e1 >= 0 or e2 >= 0:
        return 0
    else:
        return 1
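For readers reconstructing this: the elided entries b, c, d presumably follow the same pattern as a, i.e. the remaining partial derivatives of (dPdt, dFdt) with respect to (P, F). Under that assumption, a sketch of the full evaluation might look like this (with real parts taken in case the eigenvalues come out complex):

import numpy as np
import sympy as sym
# dPdt and dFdt as defined above

def stability(P, F):
    x = sym.Symbol('x')
    # Jacobian of (dPdt, dFdt) with respect to (P, F), evaluated at (P, F)
    a = sym.lambdify(x, sym.diff(dPdt(x, F), x))(P)  # d(dPdt)/dP
    b = sym.lambdify(x, sym.diff(dPdt(P, x), x))(F)  # d(dPdt)/dF
    c = sym.lambdify(x, sym.diff(dFdt(x, F), x))(P)  # d(dFdt)/dP
    d = sym.lambdify(x, sym.diff(dFdt(P, x), x))(F)  # d(dFdt)/dF
    eigenvalues, eigenvectors = np.linalg.eig([[a, b], [c, d]])
    # stable only if both eigenvalues have negative real part
    if np.real(eigenvalues[0]) >= 0 or np.real(eigenvalues[1]) >= 0:
        return 0
    return 1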
The solution I was looking for was later provided. Basically, the values involved became too small! So the following code was added to make sure no overly small values are used when checking stability:
from scipy.optimize import fsolve  # System (the pair of equations) is defined elsewhere

set = {0}  # note: this shadows the built-in set type
for j in range(1, 210):
    for i in range(1, 410):
        x = i*0.005
        y = j*0.005
        x, y = fsolve(System, [x, y])
        nexist = 1
        for s in set:  # renamed from i so the outer loop index isn't clobbered
            if abs(y - s) < 0.00001:
                nexist = 0
        if nexist:
            set.add(y)
set.discard(0)
I'm still pretty new to coding, so the function in and of itself is still a bit of a mystery to me, but it eventually helped in making the little program run smoothly :) I would again like to express my gratitude for all the help I have received on this question. There are still some helpful comments below, which is why I will leave this question up in case anyone runs into this problem in the future and can find a solution thanks to this thread.
After a bit of back and forth, I came to realise that to keep the log from being fed unwanted values, I can instead define set as an array:
set = np.arange(0, 2, 0.001)
I get a list of values within this array as output, complete with their corresponding stabilities. This is not a perfect solution, as I still get runtime errors (in fact, I now get... three error messages), but I got what I wanted out of it, so I'm counting that as a win?
Edit: I am further elaborating on this in the original post to improve the documentation; however, I would like to point out again here that this solution does not seem to be working after all. I was too hasty! I apologise for the confusion. It's a very rocky road for me. The correct solution has since been provided, and is documented in the original question.
I'm in the middle of a big (and frankly quite hard) project, so while this is my first question here, it probably won't be the last. Also: English is not my first language, so sorry for bad English, and I'm writing this on my phone, so sorry for bad formatting.
OK, so: I'm trying to implement the General Number Field Sieve in Python, and for now, at least, I'm relying heavily on sympy.
Here is a piece of code where I'm struggling. In the code below, gpc(N, m) is a list of floats.
from sympy import Poly
from sympy.abc import x

g = Poly(gpc(N, m), x)  # [*]
However, when I do that, I get a polynomial over the domain RR, and I would very much like to switch it to another domain D (where D will end up being ZZ['x'], but I would like this function to be general).
I'm aware that I can slightly modify [*] into
g = Poly(gpc(N, m), x, domain=D)
to get what I want. However, this wouldn't be enough: somewhere else in my code, I need to be able to change the domain of an already constructed polynomial, and this solution wouldn't help there.
When I looked it up, I found the change_ring method, so I tried this:
f = g.change_ring(D)
However, upon execution, I get the error message:
'Poly' object has no attribute 'change_ring'
So I guess this method doesn't exist.
Does anyone know how to change the domain of a polynomial?
Thanks a lot!
It looks like creating a new Poly instance is the best approach; there are a few class methods that could help (take a look at the Poly.from_* class methods)
For example:
from sympy import Poly
from sympy.abc import x, a

g = Poly(x**3 + a*x*2 - 5*x + 6, x)
print(g)  # Poly(x**3 + (2*a - 5)*x + 6, x, domain='ZZ[a]')

f = Poly.from_poly(g, *g.gens, domain='ZZ[a,b]')
print(f)  # Poly(x**3 + (2*a - 5)*x + 6, x, domain='ZZ[a,b]')
I also wonder if rationalizing your floats at some point might help - see e.g. nsimplify.
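Applied to the question's case of a Poly over RR, the same from_poly pattern should work as well; a small sketch (my own example, targeting QQ since any float converts cleanly to a rational, whereas ZZ would only work if every coefficient happens to be integral):

from sympy import Poly
from sympy.abc import x

g = Poly([1.0, -3.0, 2.0], x)  # Poly(1.0*x**2 - 3.0*x + 2.0, x, domain='RR')
f = Poly.from_poly(g, *g.gens, domain='QQ')
print(f)  # Poly(x**2 - 3*x + 2, x, domain='QQ')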
I am trying to approximate the Gauss Linking integral for two straight lines in R^3 using dblquad. I've created this pair of lines as an object.
I have a form for the integrand in parametrisation variables s and t generated by a function gaussint(self,s,t) and this is working. I'm then just trying to define a function which returns the double integral over the two intervals [0,1].
Edit: the code for the function looks like this:
def gaussint(self, s, t):
    # test appears to be a module-level instance of the same class
    formnum = self.newlens()[0]*self.newlens()[1]*np.sin(test.angle())*np.cos(test.angle())
    formdenone = (np.cos(test.angle())**2)*(t*(self.newlens()[0]) - s*(self.newlens()[1]) + self.adists()[0] - self.adists()[1])**2
    formdentwo = (np.sin(test.angle())**2)*(t*(self.newlens()[0]) + s*(self.newlens()[1]) + self.adists()[0] + self.adists()[1])**2
    fullden = (4 + formdenone + formdentwo)**(3./2)  # float exponent, so Python 2 doesn't truncate 3/2 to 1
    fullform = formnum/fullden
    return fullform
The various other function calls here are just bits of linear algebra - lengths of lines, angle between them and so forth. s and t have been defined as symbols upstream, if they need to be.
The code for the integration then just looks like this (I've separated it out just to try and work out what was going on):
def approxint(self, s, t):
    from scipy.integrate import dblquad
    return dblquad(self.gaussint(s, t), 0, 1, lambda t: 0, lambda t: 1)
Running it gets me a lengthy, somewhat impenetrable traceback, followed by:
ValueError: invalid callable given
Any idea where I'm going wrong?
Cheers.
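For what it's worth, the error message suggests dblquad is receiving the value self.gaussint(s, t) rather than a callable. A minimal sketch of the call dblquad expects (my reconstruction, assuming gaussint evaluates numerically and the method lives in the same class):

from scipy.integrate import dblquad

def approxint(self):
    # dblquad integrates func(y, x); pass the function itself, not its value.
    # Here s plays the role of y and t the role of x.
    result, abserr = dblquad(lambda s, t: self.gaussint(s, t),
                             0, 1,                       # limits for t
                             lambda t: 0, lambda t: 1)   # limits for s
    return result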
I am attempting to extract Weibull distribution parameters (shape 'k' and scale 'lambda') that satisfy a certain mean and variance. In this example, the mean is 4 and the variance is 8. It is a 2-unknowns and 2-equations type of problem.
Since this algorithm works with Excel 2010's GRG Solver, I am certain the problem is in the way I am framing it, or potentially in the libraries I am using. I am not overly familiar with optimization libraries, so please let me know where the error is.
Below is the script:
from scipy.optimize import fmin, fmin_cg
import math

def weibull_mu(k, lmda):  # formula can be found on Wikipedia
    return lmda*math.gamma(1 + 1/k)

def weibull_var(k, lmda):  # formula can be found on Wikipedia
    return lmda**2*math.gamma(1 + 2/k) - weibull_mu(k, lmda)**2

def min_function(arggs):
    actual_mean = 4  # specific to this example
    actual_var = 8   # specific to this example
    k = arggs[0]
    lmda = arggs[1]
    output = [weibull_mu(k, lmda) - actual_mean]
    output.append(weibull_var(k, lmda) - (actual_var)**2 - (actual_mean)**2)
    return output

print fmin(min_function, [1, 1])
This script gives me the following error:
[...]
File "C:\Program Files\Python27\lib\site-packages\scipy\optimize\optimize.py", line 278, in fmin
fsim[0] = func(x0)
ValueError: setting an array element with a sequence.
As far as I can tell, min_function returns a multi-dimensional list, but fmin and fmin_cg do expect the objective function to return a scalar, if I am not mistaken.
If you are searching for the root of the two-equation system, I suppose it is better to apply a root-finding function instead. As far as I have been able to find out, scipy does not provide any general optimizers for vector functions.
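To illustrate that suggestion, a minimal sketch using scipy.optimize.fsolve (my own example, reusing the Weibull helpers from the question):

from scipy.optimize import fsolve
import math

def weibull_mu(k, lmda):
    return lmda*math.gamma(1 + 1/k)

def weibull_var(k, lmda):
    return lmda**2*math.gamma(1 + 2/k) - weibull_mu(k, lmda)**2

def equations(params):
    k, lmda = params
    # residuals of the two conditions: mean = 4, variance = 8
    return [weibull_mu(k, lmda) - 4.0,
            weibull_var(k, lmda) - 8.0]

print(fsolve(equations, [1.0, 1.0]))  # the (k, lmda) pair satisfying both equations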
I managed to get it to work thanks to Anders Gustafsson's comment (thank you). The script now works if one returns only a scalar (in this case I used something along the lines of least squares). Bounds were also added by changing the optimization function to fmin_l_bfgs_b (again, thanks to Anders Gustafsson).
I only changed the min_function definition relative to the question.
from scipy.optimize import fmin_l_bfgs_b
import math

def weibull_mu(k, lmda):
    return lmda*math.gamma(1 + 1/k)

def weibull_var(k, lmda):
    return lmda**2*math.gamma(1 + 2/k) - weibull_mu(k, lmda)**2

def min_function(arggs):
    actual_mean = 4.  # specific to this example
    actual_var = 8.   # specific to this example
    k = arggs[0]
    lmda = arggs[1]
    extracted_var = weibull_var(k, lmda)
    extracted_mean = weibull_mu(k, lmda)
    output = (extracted_var - actual_var)**2 + (extracted_mean - actual_mean)**2
    return output

best_guess = [1., 1.]  # starting point for the optimizer (was undefined in the original post)
print fmin_l_bfgs_b(min_function, best_guess, approx_grad=True, bounds=[(.0000001, None), (.0000001, None)], disp=False)
Note: Please feel free to use this script for your own or professional use.
I am solving a system of ordinary differential equations using the odeint function. Is it possible (and if so, how) to easily parallelize this kind of problem?
The answer above is wrong: solving an ODE numerically requires evaluating the function f(t, y) = y' several times per iteration, e.g. four times for Runge-Kutta, and those evaluations are where parallelism could help. But I don't know of any Python package that does this.
Numerically integrating an ODE is an intrinsically sequential operation, since you need each result to compute the following one (well, except if you're integrating from multiple starting points). So I guess the answer is no.
EDIT: Wow, I've just realised this question is more than 3 years old. I'll still leave my answer hoping it finds its way to someone in the same predicament. Sorry for that.
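That exception is worth spelling out: runs from different starting points are independent, so they parallelise trivially. A minimal sketch with multiprocessing and a toy ODE (my own illustration, not from the original answers):

from multiprocessing import Pool

import numpy as np
from scipy.integrate import odeint

def rhs(y, t):
    return -y  # toy ODE: dy/dt = -y

def solve_one(y0):
    # each initial condition is an independent integration
    t = np.linspace(0, 10, 101)
    return odeint(rhs, y0, t)

if __name__ == '__main__':
    pool = Pool()
    solutions = pool.map(solve_one, [0.5, 1.0, 2.0])  # one run per worker process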
I had the same problem and was able to parallelise this kind of process as follows.
First you need dispy. There you'll find some programs that will do the parallelization for you. I am not an expert on dispy, but I had no problems using it, and I didn't need to configure anything.
So, how to use it?
Run python dispynode.py -d. If you do not run this script before running your main program, the parallel jobs won't be performed.
Run your main program. Here I post the one I used (sorry for the mess). You'll need to change the function sim, and adapt what is done with the results to what you want. I hope my program works as a reference for you, though.
import os, sys, inspect

#Add dispy to your path
cmd_folder = os.path.realpath(os.path.abspath(os.path.split(inspect.getfile(inspect.currentframe()))[0]))
if cmd_folder not in sys.path:
    sys.path.insert(0, cmd_folder)

#Use this if you want to include modules from a subfolder
cmd_subfolder = os.path.realpath(os.path.abspath(os.path.join(os.path.split(inspect.getfile(inspect.currentframe()))[0], cmd_folder + "/dispy-3.10/")))
if cmd_subfolder not in sys.path:
    sys.path.insert(0, cmd_subfolder)
#----------------------------------------#
#This function contains the differential equation to be simulated.
def sim(ic, e, O):  #ic=initial conditions; e=Epsilon; O=Omega
    from scipy.integrate import ode
    import numpy as np

    #Diff. eq.
    def sys(t, x, e, O, z, b, l):
        p = 2.*e*O*np.sin(O*t)*(1-e*np.cos(O*t))/(z+(1-e*np.cos(O*t))**2)
        q = (1+4.*b/l*np.cos(O*t))*(z+(1-e*np.cos(O*t)))/(z+(1-e*np.cos(O*t))**2)
        dx = np.zeros(2)
        dx[0] = x[1]
        dx[1] = -q*x[0] - p*x[1]
        return dx

    #Simulation.
    t0 = 0; tEnd = 10000.; dt = 0.1
    r = ode(sys).set_integrator('dop853', nsteps=10, max_step=dt)  #Definition of the integrator
    Y = []; S = []; T = []
    # - parameters - #
    z = 0.5; l = 1.0; b = 0.06
    # -------------- #
    color = 1
    r.set_initial_value(ic, t0).set_f_params(e, O, z, b, l)  #Set the parameters, the initial condition and the initial time
    #Loop to integrate.
    while r.successful() and r.t + dt < tEnd:
        r.integrate(r.t + dt)
        Y.append(r.y)
        T.append(r.t)
        if r.y[0] > 1.25*ic[0]:  #Bound. This is due to my own requirements.
            color = 0
            break
    #r.y contains the solutions and r.t contains the time vector.
    return e, O, color  #For each pair (e, O), return e, O and a color (0 or 1) corresponding to the point in the stability chart (0=unstable, 1=stable)
# ------------------------------------ #
#MAIN PROGRAM, where the parallel magic happens
import matplotlib.pyplot as plt
import dispy
import numpy as np

F = 100  #Total files
#Range of the values of Epsilon and Omega
Epsilon = np.linspace(0, 1, 100)
Omega_intervals = np.linspace(0, 4, F)
ic = [0.1, 0]

cluster = dispy.JobCluster(sim)  #Set up the cluster (array of processors) to run the job sim
jobs = []  #Initialize the array of jobs

for i in range(F-1):
    Data_Array = []
    jobs = []
    Omega = np.linspace(Omega_intervals[i], Omega_intervals[i+1], 10)
    print Omega
    for e in Epsilon:
        for O in Omega:
            job = cluster.submit(ic, e, O)  #Send a job with the specified parameters to the cluster
            jobs.append(job)  #Collect all the jobs submitted above
    cluster.wait()
    #Retrieve the results of the jobs
    for job in jobs:
        e, O, color = job()
        Data_Array.append([e, O, color])
    #Save the results of the simulation.
    file_name = 'Data' + str(i) + '.txt'
    f = open(file_name, 'a')
    f.write(str(Data_Array))
    f.close()