I want to compute the integral $\frac{1}{L}\int_{-\infty}^{t}H(t')\exp(-\frac{R}{L}(t-t'))\,dt'$ using numpy.convolve, where $H(t)$ is the Heaviside step function. I am supposed to get $\exp(-\frac{R}{L}t)H(t)$ as the result.
Below is what I did: I extended the integration limits to $-\infty$ to $+\infty$ by multiplying the integrand by another $H(t)$ (a change of variables), and then convolved this function with the $H(t)$ that was inside the integral. But the output plot is definitely not an exponential, and I can't find any mistake in my code. Please help; any hint or suggestion will be appreciated!
import numpy as np
import matplotlib.pyplot as plt
R = 1e3
L = 3.
delta = 1
Nf = 100
Nw = 200
k = np.arange(0,Nw,delta)
dt = 0.1e-3
tk = k*dt
Ng = Nf + Nw -2
n = np.arange(0,Nf+Nw-1,delta)
tn = n*dt
#define H (discrete Heaviside step)
def H(n):
    H = np.ones(n)
    H[0] = 0.5
    return H
#build the functions to be convolved
f = H(Nf)
w = np.exp((-R/L)*tk)*H(Nw)
#return the value of I
It = np.convolve(w,f)/L
#return the value of Voutput, b(t)
b = H(Ng+1) - R*It
plt.plot(tn,b,'o')
plt.show()
The issue with your code is not so much programming as it is conceptual. Rewrite the convolution as Integral[HeavisideTheta[t-t']*Exp[-R/L * t'], {t', -Inf, t}] (that's Mathematica code) and upon inspection you find that H(t-t') is always 1 within the limits (except at t'=t, which is the integration limit, but that's not important). So in reality you're not actually performing a complete convolution; you're basically just taking a portion of it.
If you think of a convolution as flipping one sequence, shifting it one step at a time, and summing the products (see http://en.wikipedia.org/wiki/Convolution#Derivations - Visual Explanation of Convolution), then what you want is the middle portion, i.e. only where the sequences overlap. You don't want the lead-in (4th graph down: http://en.wikipedia.org/wiki/File:Convolution3.svg). You do want the lead-out.
Now the easiest way to fix your code is as such:
#build the functions to be convolved
f = H(Nf)
w = np.exp((-R/L)*tk)*H(Nw)
#return the value of I
It = np.convolve(w,f)/L
max_ind = np.argmax(It)
print(max_ind)
It1 = It[max_ind:]
The lead-in is the only time when the convolution integral (technically a sum in our case) increases; after the lead-in is finished, the convolution integral follows Exp[-x]. So you tell Python to take only the values after the maximum is reached.
#return the value of Voutput, b(t): this part works perfectly now!
Note: since you need the lead-out, you can't use np.convolve(a, b, mode='valid').
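For reference, the three np.convolve modes differ only in how much of the lead-in/lead-out they keep; a quick sketch (my addition, not part of the original answer):
import numpy as np
a = np.ones(5)
b = np.ones(3)
print(np.convolve(a, b, mode='full').shape)   # (7,): lead-in + overlap + lead-out
print(np.convolve(a, b, mode='same').shape)   # (5,): centered, same length as a
print(np.convolve(a, b, mode='valid').shape)  # (3,): complete overlap only, no lead-in or lead-out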
So It1 looks like: [plot]
b(t) using It1 looks like: [plot]
There is no way you can ever get exp(-x) as the general form, because the equation for b(t) is given by 1 - R*exp(-x); it can't mathematically follow an exp(-x) form. At this point there are a few things to consider:
The units don't really make sense; check them. The Heaviside function is 1 while R*It1 is about 10,000. I'm not sure this is an issue, but just in case, the normalized curve looks like this: [plot]
You can get an exp(-x) form if you use b(t) = R*It1 - H(t). The code for that is here (you might have to normalize, depending on your needs):
b = R*It1 - H(len(It1))
# print len(tn)
plt.plot(tn[:len(b)], b,'o')
plt.show()
And the plot looks like: [plot]
Your question might still not be resolved, in which case you need to explain what exactly you think is wrong. With the info you've given me, b(t) can never have an Exp[-x] form unless the equation for b(t) is changed. As it stands, in your original code It1 follows Exp[-x] in form, but b(t) cannot.
I think there's a bit of confusion here about convolution. We use convolution in the time domain to calculate the response of a linear system to an arbitrary input. To do this, we need to know the impulse response of the system. Be careful switching between continuous and discrete systems - see e.g. http://en.wikipedia.org/wiki/Impulse_invariance.
For convenience, I have defined the (continuous) impulse response of your system (which I assume to be the resistor voltage of an L-R circuit) as a function of time t: IR = lambda t: (R/L)*np.exp(-(R/L)*t) * H.
I have also assumed that your input is the Heaviside step function, which I've defined on the time interval [0, 1], for a timestep of 0.001 s.
When we convolve (discretely), we effectively flip one function around and slide it along the other, multiplying corresponding values and then taking the sum. To use the continuous impulse response with a step function, which actually comprises a sequence of Dirac delta functions, we need to multiply the continuous impulse response by the time step dt, as described in the Wikipedia link above on impulse invariance. NB: setting H[0] = 0.5 is also important.
We can visualise this operation below. Any given red marker represents the response at a given time t, and is the "sum-product" of the green input and a flipped impulse response shifted to the right by t. I've tried to show this with a few grey impulse responses.
The code to do the calculation is here.
import numpy as np
import matplotlib.pyplot as plt
R = 1e3 # Resistance
L = 3. #Inductance
dt = 0.001 # Millisecond timestep
# Define interval 1 second long, interval dt
t = np.arange(0, 1, dt)
# Define step function
H = np.ones_like(t)
H[0] = 0.5 # Correction for impulse invariance (cf http://en.wikipedia.org/wiki/Impulse_invariance)
# RL circuit - resistor voltage impulse response (cf http://en.wikipedia.org/wiki/RL_circuit)
IR = lambda t: (R/L)*np.exp(-(R/L)*t) * H # Don't really need to multiply by H as w is zero for t < 0
# Response of resistor voltage
response = np.convolve(H, IR(t)*dt, 'full')
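As a quick sanity check on the impulse-invariance scaling (my addition, reusing the variables above): the dt-scaled impulse response should sum to approximately 1, mirroring the unit area of the continuous impulse response, and the H[0] = 0.5 correction is what makes this come out right.
# Area under the dt-scaled discrete impulse response: ~1.0,
# matching the integral of (R/L)*exp(-(R/L)*t) over [0, inf)
print(np.sum(IR(t)*dt))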
The extra code to make the plot is here:
# Define new, longer, time array for plotting response - must be same length as response, with step dt
tp = np.arange(len(response))* dt
plt.plot(0-t, IR(t), '-', label='Impulse response (flipped)')
for q in np.arange(0.01, 0.1, 0.01):
    plt.plot(q-t, IR(t), 'o-', markersize=3, color=str(10*q))
t = np.arange(-1, 1, dt)
H = np.ones_like(t)
H[t<0] = 0.
plt.plot(t, H, 's', label='Unit step function')
plt.plot(tp, response, '-o', label='Response')
plt.tight_layout()
plt.grid()
plt.xlabel('Time (s)')
plt.ylabel('Voltage (V)')
plt.legend()
plt.show()
Finally, if you still have some confusion about convolution, I strongly recommend "Digital Signal Processing: A Practical Guide for Engineers and Scientists" by Steven W. Smith.
I am trying to reproduce figure 5.6 (attached) from the textbook "Modeling Infectious Diseases in Humans and Animals" (Keeling 2008, official code repo) to verify whether my implementation of a seasonally forced SEIR (epidemiological model) is correct. An official program from the textbook that implements seasonal forcing indicates that large values of Beta 1 can lead to numerical errors, but if the figure uses Beta 1 values that did not lead to numerical errors, then in principle this should not be the cause of the problem. My implementation correctly produces the graphs in row 0, column 0 and row 1, column 0 of figure 5.6, but there is no output in my figure for the remaining cells, because the numerical solution for the fraction infected (see code at bottom) produces 0, and ln(0) -> -inf.
I do receive the following warnings:
ODEintWarning: Excess work done on this call
C:\Users\jared\AppData\Local\Temp\ipykernel_24972\2802449019.py:68: RuntimeWarning: divide by zero encountered in log
  infected = np.log(odeint(
C:\Users\jared\AppData\Local\Temp\ipykernel_24972\2802449019.py:68: RuntimeWarning: invalid value encountered in log
  infected = np.log(odeint(
Here is the textbook figure: [figure]
My figure over the same time range (990 - 1000 years), with the natural log taken of the fraction infected: [figure]
My figure over a shorter time range (0 - 100 years), again with the natural log taken of the fraction infected. The numerical solution for the infected population seems to fail between the 5 and 20 year mark for most of the seasonal parameters (Beta 1 and R0): [figure]
My figure over the same shorter time range, but with no natural log taken of the fraction infected: [figure]
Code to reproduce my figure:
# Code to minimally reproduce figure
import itertools
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint
def seir(y, t, mu, sigma, gamma, omega, beta_zero, beta_one):
    """System of diff eqs for epidemiological model.

    SEIR stands for susceptibles, exposed, infectious, and
    recovered populations.

    References:
        [SEIR Python Program from Textbook](http://homepages.warwick.ac.uk/~masfz/ModelingInfectiousDiseases/Chapter2/Program_2.6/Program_2_6.py)
        [Seasonally Forced SIR Program from Textbook](http://homepages.warwick.ac.uk/~masfz/ModelingInfectiousDiseases/Chapter5/Program_5.1/Program_5_1.py)
    """
    s, e, i = y
    beta = beta_zero * (1 + beta_one * np.cos(omega * t))
    sdot = mu - (beta*i + mu)*s
    edot = beta*s*i - (mu + sigma)*e
    idot = sigma*e - (mu + gamma)*i
    return sdot, edot, idot
def solve_beta_zero(basic_reproductive_rate, gamma):
    """Defined in the last paragraph of pg. 159 of textbook Keeling 2008."""
    return gamma * basic_reproductive_rate
# Model parameters (see Figure 5.6 description)
mu = 0.02 / 365
sigma = 1/8
gamma = 1/5
omega = 2 * np.pi / 365 # angular frequency: one oscillation per year (radians per day)
# Seasonal forcing parameters
r0s = [17, 10, 3]
b1s = [0.02, 0.1, 0.225]
# Permutes params to get tuples matching row i column j params in figure
# e.g., [(0.02, 17), (0.02, 10) ... ]
seasonal_params = [p for p in itertools.product(*(b1s, r0s))]
# Initial Conditions: I assume these are proportions of some total population
s0 = 6e-2
e0 = i0 = 1e-3
initial_conditions = [s0, e0, i0]
# Timesteps
nyears = 1000
days_per_year = 365
ndays = nyears * days_per_year
timesteps = np.arange(1, ndays+1, 1)
# Range to slice data to reproduce my figures
# NOTE: Change the min slice or max slice for different ranges
min_slice = 990 # or 0
max_slice = 1000 # or 100
sliced = slice(min_slice * days_per_year, max_slice * days_per_year)
x_ticks = timesteps[sliced]/days_per_year
# Define figure
nrows = 3
ncols = 3
fig, ax = plt.subplots(nrows, ncols, sharex=True, figsize=(15, 8))
# Iterate through parameters and recreate figure
for i in range(nrows):
    for j in range(ncols):
        # Get seasonal parameters for this subplot
        beta_one = seasonal_params[i * nrows + j][0]
        basic_reproductive_rate = seasonal_params[i * nrows + j][1]

        # Compute beta zero given the reproductive rate
        beta_zero = solve_beta_zero(
            basic_reproductive_rate=basic_reproductive_rate,
            gamma=gamma)

        # Numerically solve the model, extract only the infected solutions,
        # slice those solutions to the desired time range, and then natural
        # log scale them
        solutions = odeint(
            seir,
            initial_conditions,
            timesteps,
            args=(mu, sigma, gamma, omega, beta_zero, beta_one))
        infected_solutions = solutions[:, 2]
        log_infected = np.log(infected_solutions[sliced])

        # NOTE: To inspect results without natural log, uncomment the
        # below line
        # log_infected = infected_solutions[sliced]

        # DEBUG: For shape and parameter printing
        # print(
        #     infected_solutions.shape, 'R0=', basic_reproductive_rate, 'B1=', beta_one)

        # Plot results
        ax[i, j].plot(x_ticks, log_infected)

        # Label subplot
        ax[i, j].title.set_text(rf'$(R_0=${basic_reproductive_rate}, $\beta_1=${beta_one})')

fig.supylabel('NaturalLog(Fraction Infected)')
fig.supxlabel('Time (years)')
Disclaimer:
My short-term solution is simply to change the list of seasonal parameters to values that produce data over that range, and this adequately illustrates the effects of seasonal forcing. The point, though, is to reproduce the figure; if the author was able to do it, others should be able to as well.
Your first (and possibly main) problem is one of scale; this diagnosis is also consistent with the observations in your later experiments. The system is such that if it is started with positive values, it should stay within positive values. Negative values can only be reached if the step errors of the numerical method are too large.
As you can see in the original graphs, the range of values goes from exp(-7) ~= 9e-4 down to exp(-12) ~= 6e-6. The absolute tolerance should force at least 3 digits to be exact, so atol = 1e-10 or smaller. The relative tolerance should be adapted similarly. Viewing all components together shows that the first component has values around exp(-2.5) ~= 8e-2, so per-component tolerances should give better results. The corresponding call is
solutions = odeint(
    seir,
    initial_conditions,
    timesteps,
    args=(mu, sigma, gamma, omega, beta_zero, beta_one),
    atol=[1e-9, 1e-13, 1e-13], rtol=1e-11)
With these parameters I get the plots below: [plots]
The first row and first column are as in the cited graphic; the others look different.
As a test and a general method to integrate in the range of small positive solutions, reformulate for the integration of the logarithms of the components. This can be done with a simple wrapper
def seir_log(log_y, t, *args):
    y = np.exp(log_y)
    dy = np.array(seir(y, t, *args))
    return dy/y  # = d(log(y))/dt
Now the expected values have scale 1 to 10, so the tolerances are no longer as critical; default tolerances should be sufficient, but it is better to work with documented tolerances.
log_solution = odeint(
    seir_log,
    np.log(initial_conditions),
    timesteps,
    args=(mu, sigma, gamma, omega, beta_zero, beta_one),
    atol=1e-8, rtol=1e-9)
log_infected = log_solution[sliced, 2]
The bottom-left plot is still sensitive to atol; with 1e-7 one gets a wavier picture. Bounding the step size with hmax=5 also stabilizes that. With the code as above, the plots are: [plots]
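For reference, bounding the step size is just one extra keyword argument in the same odeint call (a sketch of that variant, not shown separately in the original):
log_solution = odeint(
    seir_log,
    np.log(initial_conditions),
    timesteps,
    args=(mu, sigma, gamma, omega, beta_zero, beta_one),
    atol=1e-8, rtol=1e-9, hmax=5)  # hmax bounds the internal step size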
The center plot is still different from the reference. It might be that there are different stable cycles.
I want to find the integral of output power Po in the following code:
Vo = 54.6
# defining functions for duty cycle, output current and output power
def duty_cycle(output_voltage, array_voltage):
    duty_cycle = np.divide(output_voltage, array_voltage)
    return duty_cycle

def output_current(array_current, duty_cycle):
    output_current = np.divide(array_current, duty_cycle)
    return output_current

def output_power(output_voltage, output_current):
    output_power = np.multiply(output_voltage, output_current)
    return output_power
#calculating duty cycle, output current and output power
D = duty_cycle(Vo, array_params['arr_v_mp'])
Io = output_current(array_params['arr_i_mp'], D)
Po = output_power(Vo, Io)
#plot output power
plt.ylabel('Output Power [W]')
Po.plot(style='r-')
The code above is just part of a script; array_params is a pandas time-series DataFrame. When the pandas Series Po is plotted, it looks like this: [plot]
This is my first time calculating an integral using Python. After reading through the internet, I think Python's scipy module could be of help, but I don't really know how or which method to implement. I would appreciate your help in any manner with the above-explained problem.
To compute an integral of the form $\int_{x_0}^{x_1} y(x)\,dx$, with an array x_array of values from x0 to x1 and a corresponding y_array of the same length, one can use numpy's trapezoidal integration:
integral = np.trapz(y_array, x_array)
which also works for non-constant spacing x_array[i+1] - x_array[i].
If an indefinite integral (i.e. an antiderivative $F(t) = \int f(t)\,dt$) is needed, use scipy.integrate.cumtrapz instead of numpy.trapz, which is for definite integrals:
integrated = scipy.integrate.cumtrapz(power, dx=timestep)
or
integrated = scipy.integrate.cumtrapz(power, x=timevalues)
To make integrated the same length as power, specify the initial value of the integral via the optional parameter initial (e.g. initial=0) of scipy.integrate.cumtrapz.
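A minimal self-contained sketch of both calls, with made-up sample data (note that in recent SciPy versions cumtrapz has been renamed cumulative_trapezoid):
import numpy as np
from scipy.integrate import cumtrapz

# made-up power samples on a (possibly non-uniform) time grid
timevalues = np.array([0.0, 1.0, 2.5, 4.0, 5.0])   # seconds
power = np.array([10.0, 12.0, 11.0, 9.0, 10.0])    # watts

energy = np.trapz(power, timevalues)                     # definite integral (joules)
energy_curve = cumtrapz(power, x=timevalues, initial=0)  # running integral, same length as power
print(energy, energy_curve[-1])                          # the running integral ends at the definite value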
I am interested in integrating in Fourier space after using scipy to take an FFT of some data. I have been following along with this Stack Exchange post (numerical integration in Fourier space with numpy.fft), but it does not properly integrate a few test cases I have been working with. I have added a few lines to address this issue but still am not recovering the correct integrals. Below is the code I have been using to integrate my test cases; the 3 test cases are at the top of the code.
import numpy as np
import scipy.special as sp
from scipy.fft import fft, ifft, fftfreq
import matplotlib.pyplot as plt
#set number of points in array
Ns = 2**16
#create array in space
x = np.linspace(-np.pi, np.pi, Ns)
#test case 1 from stack exchange post
# y = np.exp(-x**2) # function f(x)
# ys = np.exp(-x**2) * (-2 *x) # derivative f'(x)
#test case 2
# y = np.exp(-x**2) * x - 1/2 *np.sqrt(np.pi)*sp.erf(x)
# ys = np.exp(-x**2) * -2*x**2
#test case 3
y = np.sin(x**2) + (1/4)*np.exp(x)
ys = 1/4*(np.exp(x) + 8*x*np.cos(x**2))
#find spacing in space array
ss = x[1]-x[0]
#define fft integration function
def fft_int(N, s, dydt):
    #create frequency array
    f = fftfreq(N, s)
    #integration step, ignoring divide-by-zero errors
    Fys = fft(dydt)
    with np.errstate(divide="ignore", invalid="ignore"):
        modFys = Fys / (2*np.pi*1j*f)
    #set DC term to 0; it was a nan since we divided by 0
    modFys[0] = 0
    #take inverse fft and subtract the integration constant
    fourier = ifft(modFys)
    fourier = fourier - fourier[0]
    #tilt correction if the function doesn't approach 0 at its ends
    tilt = np.sum(dydt)*s*(np.arange(0, N)/(N-1) - 1/2)
    fourier = fourier + tilt
    return fourier
Test case 1 is from the Stack Exchange post above. If you copy-paste the code from the top answer and plot, you'll get something like this: [plot]
with the solid blue line being the FFT integration method and the dashed orange the analytic solution. I account for this offset with the following line of code:
fourier = fourier-fourier[0]
since I don't believe the code was setting the constant of integration.
Next, for test case 2 I get a plot like this: [plot]
again with the solid blue line being the FFT integration method and the dashed orange the analytic solution. I account for this tilt in the solution using the following lines of code:
tilt = np.sum(dydt)*s*(np.arange(0,N)/(N-1) - 1/2)
fourier = fourier + tilt
Finally we arrive at test case 3, which results in the following plot: [plot]
again with the solid blue line being the FFT integration method and the dashed orange the analytic solution. This is where I'm stuck: the offset has appeared again and I'm not sure why.
TL;DR: How do I correctly integrate a function in Fourier space using scipy.fft?
The tilt component makes no sense. It fixes one function, but it's not a generic solution to the problem.
The problem is that the FFT induces periodicity in the signal, meaning you compute the integral of a different function. Multiplying the FFT of the signal by 1/(2*np.pi*1j*f) is equivalent to a circular convolution of the signal with ifft(1/(2*np.pi*1j*f)). "Circular" is the key here. This is just a boundary problem.
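To see the "circular" part concretely, here is a small check of my own (not from the original answer) that the frequency-domain multiplication equals a circular, wrap-around convolution:
import numpy as np
from scipy.fft import fft, ifft, fftfreq

N, s = 8, 0.1
f = fftfreq(N, s)
with np.errstate(divide="ignore", invalid="ignore"):
    K = 1 / (2*np.pi*1j*f)
K[0] = 0  # zero out the undefined DC term

x = np.random.randn(N)
h = ifft(K)  # time-domain kernel of the integration filter

# circular convolution by brute force, indices wrapping modulo N
circ = np.array([sum(x[m]*h[(n - m) % N] for m in range(N)) for n in range(N)])

print(np.allclose(ifft(fft(x) * K), circ))  # True: the wraparound is built in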
Padding the function with zeros is one way to attempt to fix this:
import numpy as np
import scipy.special as sp
from scipy.fft import fft, ifft, fftfreq
import matplotlib.pyplot as plt
def fft_int(s, dydt, N=0):
    dydt_padded = np.pad(dydt, (0, N))
    f = fftfreq(dydt_padded.shape[0], s)
    F = fft(dydt_padded)
    with np.errstate(divide="ignore", invalid="ignore"):
        F = F / (2*np.pi*1j*f)
    F[0] = 0
    y_padded = np.real(ifft(F))
    y = y_padded[0:dydt.shape[0]]
    return y - np.mean(y)
N = 2**16
x = np.linspace(-np.pi, np.pi, N)
s = x[1] - x[0]
# Test case 3
y = np.sin(x**2) + (1/4)*np.exp(x)
dy = 1/4*(np.exp(x) + 8*x*np.cos(x**2))
plt.plot(y - np.mean(y))
plt.plot(fft_int(s, dy))
plt.plot(fft_int(s, dy, N))
plt.plot(fft_int(s, dy, 10*N))
plt.show()
(Blue is expected output, computed solution without padding is orange, and with increasing amount of padding, green and red.)
Here I've solved the "offset" problem by plotting all functions with their mean removed. Setting the DC component to 0 is equal to subtracting the mean. But after cropping off the padding the mean changes, so fft_int subtracts the mean again after cropping.
Anyway, note how we get an increasingly better approximation as the padding increases. To get the exact result, one would need an infinite amount of padding, which of course is unrealistic.
Test case #1 doesn't need padding: the function reaches zero at the edges of the sampled domain. We can impose such behavior on the other cases too; in discrete Fourier analysis this is called windowing. That would look something like this:
def fft_int(s, dydt):
    dydt_windowed = dydt * np.hanning(dydt.shape[0])
    f = fftfreq(dydt.shape[0], s)
    F = fft(dydt_windowed)
    with np.errstate(divide="ignore", invalid="ignore"):
        F = F / (2*np.pi*1j*f)
    F[0] = 0
    y = np.real(ifft(F))
    return y
However, here we get correct integration results only in the middle of the domain, with increasingly suppressed values towards the ends. So this is not a practical solution either.
My conclusion is that no, this is not possible to do. It is much easier to compute the integral with np.cumsum:
yp = np.cumsum(dy) * s
plt.plot(y - np.mean(y))
plt.plot(yp - np.mean(yp))
plt.show()
(not showing output: the two plots overlap perfectly.)
I'm doing least squares curve fitting with Python and getting decent results, but would like it to be a bit more robust.
I have data from a first-order LTI system, more specifically the speed of a motor as read by a tachometer. I'm trying to fit the step response of the motor so I can deduce its transfer function.
The speed (v(t)) has the following form:
v(t) = K * (1 - exp(-t/T))
I have some outliers in the data I use, though, and would like to mitigate them. This mostly happens when the speed becomes constant. Say the speed is 10000 units; I sometimes get outliers of 10000 +/- 400. I wonder how to set my f_scale parameter given that I want my data points to stay within +/- 400 of the "actual" speed (mean). Should I set f_scale to 400 or 800? I'm not sure what exactly I should set there.
Thanks
EDIT: Some data. [image]
I have constructed a minimal example for a curve similar to yours. If you had posted actual data instead of a picture, this would have gone a bit faster. The two key things to understand about robust fitting with least_squares are that you have to use a value of the loss parameter other than linear, and that f_scale is used as a scaling parameter for the loss function.
Basically, from the docs, least_squares tries to
minimize F(x) = 0.5 * sum(rho(f_i(x)**2))
and setting the loss parameter changes rho in the above formula. For loss='linear', rho is just the identity function. When loss='soft_l1', rho(z) = 2 * ((1 + z)**0.5 - 1). f_scale is used to scale the loss function such that rho_(f**2) = C**2 * rho(f**2 / C**2). So it doesn't have quite the meaning you are asking about above; it's more a way of penalising larger errors less.
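In code, those two formulas from the docs look like this (my illustration, with C standing in for f_scale):
import numpy as np

def rho_soft_l1(z):
    # rho(z) = 2*((1 + z)**0.5 - 1), applied to squared residuals z = f**2
    return 2.0 * (np.sqrt(1.0 + z) - 1.0)

def rho_scaled(f, C):
    # f_scale=C rescales the loss: rho_(f**2) = C**2 * rho(f**2 / C**2)
    return C**2 * rho_soft_l1((f / C)**2)

# small residuals are penalised roughly quadratically, large ones roughly linearly
print(rho_scaled(1.0, 400), rho_scaled(4000.0, 400))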
In this particular case it doesn't appear to make much difference though.
import numpy
import matplotlib.pyplot as plt
import scipy.optimize
tmax = 6000
N = 100
K = 6000
T = 200
smootht = numpy.linspace(0, tmax, 1000)
tm = numpy.linspace(0, tmax, N)
def f(t, K, T):
    return K * (1 - numpy.exp(-t/T))
v = f(smootht, K, T)
vm = f(tm, K, T) + numpy.random.randn(N)*400
def error(pars):
    K, T = pars
    vp = f(tm, K, T)
    return vm - vp
f_scales = [0.01, 1, 100]
plt.scatter(tm, vm)
for f_scale in f_scales:
    r = scipy.optimize.least_squares(error, [10, 10], loss='soft_l1', f_scale=f_scale)
    vp = f(smootht, *r.x)
    plt.plot(smootht, vp, label=f_scale)
plt.legend()
plt.show()
The resulting plot looks like this: [plot]
My suggestion is to start by just experimenting with the different loss functions before playing with f_scale.
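Building on the script above, that experiment might look like this (the loss names come from the least_squares docs; this variant is mine):
# compare the built-in robust losses at a fixed f_scale
for loss in ['linear', 'soft_l1', 'huber', 'cauchy', 'arctan']:
    r = scipy.optimize.least_squares(error, [10, 10], loss=loss, f_scale=400)
    plt.plot(smootht, f(smootht, *r.x), label=loss)
plt.legend()
plt.show()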
I would like to try to compute y=filter(b,a,x,zi) and dy[i]/dx[j] using FFTs rather than in the time domain for possible speedup in a GPU implementation.
I am not sure it's possible, particularly when zi is non-zero. I looked at how scipy.signal.lfilter in scipy and filter in Octave are implemented. They are both done directly in the time domain, scipy using direct form 2 and Octave direct form 1 (from looking through the code in DLD-FUNCTIONS/filter.cc). I haven't seen anywhere an FFT implementation analogous to fftfilt for FIR filters in MATLAB (i.e. a = [1.]).
I tried doing y = ifft(fft(b) / fft(a) * fft(x)), but this seems to be conceptually wrong. Also, I am not sure how to handle the initial transient zi. Any references or pointers to existing implementations would be appreciated.
Example code,
import numpy as np
import scipy.signal as sg
import matplotlib.pyplot as plt
# create an IIR lowpass filter
N = 5
b, a = sg.butter(N, .4)
MN = max(len(a), len(b))
# create a random signal to be filtered
T = 100
P = T + MN - 1
x = np.random.randn(T)
zi = np.zeros(MN-1)
# time domain filter
ylf, zo = sg.lfilter(b, a, x, zi=zi)
# frequency domain filter
af = np.fft.fft(a, P)
bf = np.fft.fft(b, P)
xf = np.fft.fft(x, P)
yfft = np.real(np.fft.ifft(bf/af * xf))[:T]
# error
print(np.linalg.norm(yfft - ylf))
# plot, note error is larger at beginning and with larger N
plt.figure(1)
plt.clf()
plt.plot(ylf)
plt.plot(yfft)
You can reduce the error in your existing implementation by replacing P = T + MN - 1 with P = T + 2*MN - 1. This is purely intuitive, but it seems to me that the division of bf by af will require 2*MN terms, due to wraparound.
C.S. Burrus has a pretty terse writeup of how to regard filtering, whether FIR or IIR, in a block oriented way, here. I haven't read it in detail, but I think it gives you the equations you need to implement IIR filtering by convolution, including intermediate states.
I've forgotten what little I knew about FFTs but you could take a look at sedit.py and frequency.py at http://jc.unternet.net/src/ and see if anything there would help.
Try scipy.signal.lfiltic(b, a, y, x=None) to obtain the initial conditions.
Doc text for lfiltic:
Given a linear filter (b,a) and initial conditions on the output y
and the input x, return the initial conditions on the state vector zi
which is used by lfilter to generate the output given the input.

If M=len(b)-1 and N=len(a)-1, then the initial conditions are given
in the vectors x and y as

x = {x[-1], x[-2], ..., x[-M]}
y = {y[-1], y[-2], ..., y[-N]}

If x is not given, its initial conditions are assumed zero.
If either vector is too short, then zeros are added
to achieve the proper length.

The output vector zi contains

zi = {z_0[-1], z_1[-1], ..., z_K-1[-1]} where K=max(M,N).
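A minimal sketch of how lfiltic would plug into lfilter (the filter and the history values here are hypothetical, chosen only to show the call pattern):
import numpy as np
import scipy.signal as sg

b, a = sg.butter(5, 0.4)

# hypothetical past output/input samples, most recent first: y[-1], y[-2], ...
y_hist = np.array([0.2, 0.1, 0.05, 0.02, 0.01])
x_hist = np.array([1.0, 1.0, 1.0, 1.0, 1.0])

zi = sg.lfiltic(b, a, y_hist, x=x_hist)  # state vector encoding the past
y, zo = sg.lfilter(b, a, np.random.randn(100), zi=zi)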