I have a slightly unusual problem, and I am trying to avoid re-coding the FFT.
In general, I want to know this: If I have an algorithm that is implemented for type float, but it would work wherever a certain set of operations is defined (e.g. complex numbers, for which +, *, ... are also defined), what is the best way to use that algorithm on another type that supports those operations? In practice this is tricky, because numeric algorithms are generally written for speed, not generality.
Specifically:
I am working with values with a very high dynamic range, and so I would like to store them in log space (mostly to avoid underflow).
What I'd like is the log of the FFT of some series:
x = [1,2,3,4,5]
fft_x = [ log( x_val ) for x_val in fft(x) ]
Even this will result in significant underflow. What I'd like is to store log values and use + in place of * and logaddexp in place of +, etc.
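Concretely, the log-space identities are log(a*b) = log(a) + log(b) and, for a >= b > 0, log(a + b) = log(a) + log1p(exp(log(b) - log(a))), which is what logaddexp computes in a numerically stable way.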
My thought of how to do this was to implement a simple LogFloat class that defines these primitive operations (but operates in log space). Then I could simply run the FFT code by letting it use my logged values.
import math
from numpy import logaddexp, log, sign   # logaddexp computes log(e^a + e^b) stably

def log_sub(x, y):
    # helper used below: log(exp(x) - exp(y)), assuming x >= y
    return x + math.log1p(-math.exp(y - x))

class LogFloat:
    def __init__(self, sign, log_val):
        assert float(sign) in (-1, 1)
        self.sign = int(sign)
        self.log_val = log_val
    @staticmethod
    def from_float(fval):
        return LogFloat(sign(fval), log(abs(fval)))
    def __imul__(self, lf):
        self.sign *= lf.sign
        self.log_val += lf.log_val
        return self
    def __idiv__(self, lf):
        self.sign *= lf.sign
        self.log_val -= lf.log_val
        return self
    def __iadd__(self, lf):
        if self.sign == lf.sign:
            self.log_val = logaddexp(self.log_val, lf.log_val)
        else:
            # subtract the smaller magnitude from the larger
            if self.log_val > lf.log_val:
                self.log_val = log_sub(self.log_val, lf.log_val)
            else:
                self.log_val = log_sub(lf.log_val, self.log_val)
                self.sign *= -1
        return self
    def __isub__(self, lf):
        self.__iadd__(LogFloat(-1 * lf.sign, lf.log_val))
        return self
    def __pow__(self, lf):
        # note: there may be a way to do this without exponentiating
        # if the exponent is 0, always return 1
        # print self, '**', lf
        if lf.log_val == -float('inf'):
            return LogFloat.from_float(1.0)
        lf_value = lf.sign * math.exp(lf.log_val)
        if self.sign == -1:
            # note: in this case, lf_value must be an integer
            return LogFloat(self.sign**int(lf_value), self.log_val * lf_value)
        return LogFloat(self.sign, self.log_val * lf_value)
    def __mul__(self, lf):
        temp = LogFloat(self.sign, self.log_val)
        temp *= lf
        return temp
    def __div__(self, lf):
        temp = LogFloat(self.sign, self.log_val)
        temp /= lf
        return temp
    def __add__(self, lf):
        temp = LogFloat(self.sign, self.log_val)
        temp += lf
        return temp
    def __sub__(self, lf):
        temp = LogFloat(self.sign, self.log_val)
        temp -= lf
        return temp
    # Python 3 names for the division operators
    __itruediv__ = __idiv__
    __truediv__ = __div__
    def __str__(self):
        result = str(self.sign * math.exp(self.log_val)) + '('
        if self.sign == -1:
            result += '-'
        result += 'e^' + str(self.log_val) + ')'
        return result
    def __neg__(self):
        return LogFloat(-self.sign, self.log_val)
    def __radd__(self, val):
        # for sum(), which starts from 0
        if val == 0:
            return self
        return self + val
Then, the idea would be to construct a list of LogFloats, and then use it in the FFT:
x_log_float = [ LogFloat.from_float(x_val) for x_val in x ]
fft_x_log_float = fft(x_log_float)
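As a quick sanity check (assuming the imports and the log_sub helper shown with the class above), LogFloat arithmetic should agree with ordinary float arithmetic wherever the latter doesn't underflow:

a = LogFloat.from_float(1e-180)
b = LogFloat.from_float(2e-180)
print(a * b)   # the float product 2e-360 underflows to 0.0, but the e^log_val
               # part of the printed form still carries the exact magnitude
print(a + b)   # logaddexp keeps the sum stable: ~3e-180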
This can definitely be done if I re-implement FFT (and simply use LogFloat wherever I would use float before), but I thought I would ask for advice. This is a fairly recurring problem: I have a stock algorithm that I want to operate in log space (and it only uses a handful of operations like '+', '-', '*', '/', etc.).
This reminds me of writing generic functions with templates, so that the return arguments, parameters, etc. are constructed from the same type. For example, if you can do an FFT of floats, you should easily be able to do one on complex values (by simply using a class that provides the necessary operations for complex values).
As it currently stands, it looks like all FFT implementations are written for bleeding-edge speed, and so won't be very general. So as of now, it looks like I'd have to reimplement FFT for generic types...
The reason I'm doing this is because I want very high-precision convolutions (and the N^2 runtime is extremely slow).
Any advice would be greatly appreciated.
*Note, I might need to implement trigonometric functions for LogFloat, and that would be fine.
EDIT:
This does work because LogFloat forms a commutative ring (and it doesn't require implementing trigonometric functions for LogFloat). The simplest way was to reimplement FFT, but @J.F.Sebastian also pointed out a way of using the Python generic convolution, which avoids coding the FFT (which, again, was quite easy using either a DSP textbook or the Wikipedia pseudocode).
I confess I didn't entirely keep up with the math in your question. However, it sounds like what you're really wondering is how to deal with extremely small and large (in absolute value) numbers without hitting underflows and overflows. Unless I misunderstand you, this is similar to the problem I have working with units of money, not losing pennies on billion-dollar transactions due to rounding. If that's the case, my solution has been Python's built-in decimal module. The documentation is good for both Python 2 and Python 3. The short version is that decimal math is an arbitrary-precision floating- and fixed-point type. The Python modules conform to the IBM/IEEE standards for decimal math. In Python 3.3 (which is currently in alpha form, but I've been using it with no problems at all), the module has been rewritten in C for up to a 100x speedup (in my quick tests).
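For instance (a minimal sketch; the default decimal context already allows exponents far beyond float's range):

from decimal import Decimal, getcontext

getcontext().prec = 50                  # 50 significant digits
tiny = Decimal('1e-300') * Decimal('1e-300')
print(tiny)                             # 1E-600, where a float would underflow to 0.0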
You could scale your time-domain samples by a large number s to avoid underflow, and then, if

F{ f(t) } = X(jw)

then, by the Fourier scaling property,

F{ s·f(s·t) } = X(jw/s)

Now, using the convolution theorem, you can work out how to scale your final result to remove the effect of the scaling factor.
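For example, since convolution is bilinear, scaling both inputs by s multiplies the result by s^2; a sketch that keeps the factor symbolic rather than dividing it back in (which would just underflow again):

import numpy as np

x = np.array([1e-180, 2e-180, 3e-180])
y = np.array([4e-180, 5e-180])
s = 1e180                               # chosen to bring the samples to O(1)
z_scaled = np.convolve(s * x, s * y)    # true result is z_scaled / s**2
print(z_scaled, 'times 1e-360')         # report the factor instead of applying it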
Related
A protected division is a normal division but when you divide by 0 it returns a fixed constant (usually 1).
def protected_div(x, y):
    if y == 0:
        return 1
    return x/y
Is there a way to use this as an operator in sympy (for example, replacing the standard division)?
Here is an example of what I want:
>>> import sympy as sym
>>> x = sym.Symbol('x')
>>> expr = 1/x #(protected division goes here?)
>>> expr.subs(x, 0)
1
The division has to be protected at evaluation time.
EDIT 1:
What I've tried:
1.
Using sym.lambdify with the modules parameter set:
>>> x = sym.Symbol('x')
>>> expr = 1/x
>>> lamb = sym.lambdify(x, expr, modules={'/':protected_div})
>>> print(lamb(0))
ZeroDivisionError: 0.0 cannot be raised to a negative power
This does not work because sympy converts 1/x to x**(-1) when lambdifying. I tried overriding the power operator, but I don't know the function name: I tried 'Pow', 'pow' and '**', and none worked.
However, if I declare the expression as expr = 1.0/x, it does not get converted to a negative power, yet it still does not use my custom division function. I think these operators are not overridable via the modules parameter.
2.
@Zaz's suggestion:

class floatsafe(float):
    def __truediv__(self, __x):
        if __x == 0:
            return floatsafe(1)
        return super().__truediv__(__x)
x = sym.Symbol('x')
expr = floatsafe(1)/x
print(expr.subs(x, floatsafe(0)))
Returns
zoo
Which is complex infinity.
I tried combining this approach with sym.lambdify, but the dividend is converted to a float after I lambdify the function.
In the case that the dividend is variable it also does not work:
x = sym.Symbol('x')
expr = x/0.0
a = sym.lambdify(x, expr, modules={'/':floatsafe.__truediv__})
print(inspect.getsource(a))
print(a(floatsafe(0)))
Outputs
def _lambdifygenerated(x):
    return nan*x
nan
EDIT: There seems to be some confusion around why I'd want this. It's for a genetic programming algorithm using sympy. Protected division is a common operator in GP, so that the generated solutions are valid.
The regular arithmetic we use day-to-day is a ring on the set of real numbers, ℝ: you have two operations (addition and multiplication), and applying either to members of the set always produces another member of the set.
A field is the stricter notion in which every element also has an additive inverse and every nonzero element has a multiplicative inverse; the missing inverse for 0 is exactly why division by zero is left undefined, and you can only work around it by changing the structure itself, e.g. removing 0 from the set or extending it (say, to the hyperreals).
My point being, without knowing exactly what problem you're trying to solve, I would guess that instead of redefining division, it makes more sense to redefine the number system that you're using: for whatever reason, you have some system of numbers that should return 1 when divided by zero, so why not create a subclass of float, for example?
class floatD01(float):
    def __truediv__(self, divisor):
        if divisor == 0:
            return 1
        # delegate to float's own division; `self / divisor` here would recurse forever
        return super().__truediv__(divisor)
You may also want to scan help(float) for any other methods related to division that you may want to change, such as __divmod__, __floordiv__ (7//3 == 2), etc., and have a hard think about how you want this new mathematical structure you're creating to work, and why.
Other options that may be more robust would be to go nuclear and catch all ZeroDivisionErrors, replacing the result with 1 (either in the class or in whatever code you're running), or, if appropriate, to implement something like the NA values that the language R uses extensively. I'm sure there's some way (I believe in numpy) to do something along the lines of:

C = [1/3, 2/2, 3/1, 4/0]   # == [1/3, 2/2, 3/1, NA]
sum(C)                     # == 4.333
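A sketch of that NA-style approach with numpy (np.nansum is real numpy; the replacement policy is up to you):

import numpy as np

num = np.array([1.0, 2.0, 3.0, 4.0])
den = np.array([3.0, 2.0, 1.0, 0.0])
with np.errstate(divide='ignore'):      # silence the divide-by-zero warning
    c = num / den                       # the 4/0 entry becomes inf
c[~np.isfinite(c)] = np.nan             # treat it as an NA value
print(np.nansum(c))                     # 4.333..., ignoring the NA entry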
The solution was actually pretty simple: although I was not able to overload the division operator itself, all I had to do was create a sympy function for the protected division and use that instead.
class protected_division(sym.Function):
    @classmethod
    def eval(cls, x, y):
        if y.is_Number:
            if y.is_zero:
                return sym.S.One
            else:
                return x/y
Then just use that in an expression:
>>> expr = protected_division(1, sym.Symbol('x'))
protected_division(1, x)
>>> expr.subs(sym.Symbol('x'), 0)
1
>>> expr.subs(sym.Symbol('x'), 3)
1/3
I did not find out how to make the class itself tell sym.lambdify what to do in case of a "lambdification", but you can use the modules parameter for that:
>>> def pd(x, y):
... if y == 0:
... return 1
... return x/y
...
>>> l = sym.lambdify(sym.Symbol('x'), expr, modules={'protected_division': pd})
>>> l(3)
1.6666666666666667
>>> l(0)
1
Context
So for a project I was writing some Monte Carlo code and I wanted to support both 2D and 3D coordinates. The obvious solutions were to implement 2D and 3D versions of certain functions, or to have some 'if' checking in the algorithm itself. I wanted a more elegant solution where I could just work with one algorithm capable of handling both situations. The idea I came up with was to work with some kind of neutral element 'I' for the optional third coordinate (the z-direction). Its usage is as follows: if no explicit z value is provided, it defaults to I. The z value is always used in the calculations, but has no effect if z = I.
For example: let a be any number; then a + I = I + a = a, and likewise a × I = I × a = a.
For addition and multiplication separately these would be I = 0 and I = 1 respectively, so it is immediately clear that no single numerical value for I can work for both; consider, for example, a cost function of the form xyz + (x + y + z)^2.
Implementation
Luckily, programmatically we are not constrained by mathematics to implement something like that and here's my attempt:
class NeutralElement:
    def __add__(self, other):
        return other
    def __sub__(self, other):
        return other
    def __mul__(self, other):
        return other
    def __truediv__(self, other):
        return other
    def __pow__(self, other):
        return self
    def __radd__(self, other):
        return other
    def __rsub__(self, other):
        return other
    def __rmul__(self, other):
        return other
    def __rtruediv__(self, other):
        return other
    def __bool__(self):
        return True
    def __getitem__(self, index):
        return self
    def __str__(self):
        return 'I'
    def __repr__(self):
        return 'I'
Usage is simple: n = NeutralElement(), and you can then use n in whatever expression you want. The above implementation works well in the sense that the Monte Carlo algorithm finishes without problems and gives meaningful results. I can even construct a Numpy array of it and use that, though it's not compatible with everything: calling np.around on such an array gives an error about '_rint' not being implemented. I did manage to get it to work with round() and ndarray.round(). (Not reflected in the code above.)
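For instance, with the cost function from the question (a quick check against the class above):

n = NeutralElement()

def cost(x, y, z):
    return x*y*z + (x + y + z)**2

print(cost(2.0, 3.0, 4.0))   # ordinary 3D evaluation: 105.0
print(cost(2.0, 3.0, n))     # z = I drops out: 2*3 + (2 + 3)**2 = 31.0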
Question
My question now is if there's anything I can do to improve compatibility with other functions or just improve this implementation itself. Also welcome are better alternative solutions to the problem described above.
After some feedback I've attempted to narrow down the scope of my question: I would like to modify the above class so that it returns a Numpy array of strings ['I', 'I', ..., 'I'] when numpy.around is used on a Numpy array of NeutralElements.
I have a problem where I need to produce something which is naturally computed recursively, but where I also need to be able to interrogate the intermediate steps in the recursion if needed.
I know I can do this by passing and mutating a list or similar structure. However, this looks ugly to me and I'm sure there must be a neater way, e.g. using generators. What I would ideally love to be able to do is something like:
intermediate_results = [f(x) for x in range(T)]
final_result = intermediate_results[T-1]
in an efficient way. While my solution is not performance critical, I can't justify the massive amount of redundant effort in that first line. It looks to me like a generator would be perfect for this except for the fact that f is fundamentally much more suited to recursion in my case (which at least in my mind is the complete opposite of a generator, but maybe I'm just not thinking far enough outside of the box).
Is there a neat Pythonic way of doing something like this that I just don't know about, or do I just need to just capitulate and pollute my function f by passing it an intermediate_results list which I then mutate as a side-effect?
I have a generic solution for you using a decorator. We create a Memoize class which stores the results of previous times the function is executed (including in recursive calls). If the arguments given have already been seen, the cached versions are used to quickly lookup the result.
The custom class has the benefit over an lru_cache in that you can see the results.
from functools import wraps

class Memoize:
    def __init__(self):
        self.store = {}

    def save(self, fun):
        @wraps(fun)
        def wrapper(*args):
            if args not in self.store:
                self.store[args] = fun(*args)
            return self.store[args]
        return wrapper
m = Memoize()

@m.save
def fibo(n):
    if n <= 0: return 0
    elif n == 1: return 1
    else: return fibo(n-1) + fibo(n-2)
Then after running different things you can see what the cache contains. When you run future function calls, m.store will be used as a lookup so calculation doesn't need to be redone.
>>> fibo(8)
21
>>> m.store
{(1,): 1,
(0,): 0,
(2,): 1,
(3,): 2,
(4,): 3,
(5,): 5,
(6,): 8,
(7,): 13,
(8,): 21}
You could modify the save function to use the name of the function and the args as the key, so that multiple function results can be stored in the same Memoize class.
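One way that modification might look (a sketch, keeping everything else the same; the cache key simply gains the function name):

    def save(self, fun):
        @wraps(fun)
        def wrapper(*args):
            key = (fun.__name__,) + args      # the name disambiguates functions
            if key not in self.store:
                self.store[key] = fun(*args)
            return self.store[key]
        return wrapper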
You can use your existing solution that makes many "redundant" calls to f, but employ function caching to save the results of previous calls to f.
In other words, when f(x1) is called, its input arguments and corresponding return value are saved, and the next time it is called with the same argument, the result is simply pulled from the cache.
See functools.lru_cache for the standard-library solution to this, i.e.:

from functools import lru_cache

@lru_cache
def f(x):
    ...   # your existing recursive definition, unchanged

intermediate_results = [f(x) for x in range(T)]
final_result = intermediate_results[T-1]

Note, however, that f must be a pure function (no side-effects; a deterministic mapping from arguments to result) for this to work properly.
Having considered your comments, I'll now try to give another perspective on the problem.
So, let's consider a concrete example:
def f(x):
    a = 2
    return g(x) + a if x != 0 else 0

def g(x):
    b = 1
    return h(x) - b

def h(x):
    c = 1/2
    return f(x-1)*(1+c)
I
First of all, it should be mentioned that (in our particular case) the algorithm has the form f(x) = p(f(x - 1)) for some p. It follows that f(x) = p^x(f(0)) = p^x(0). That means we should just apply p to 0, x times, to get the desired result, which can be done in an iterative process, so this can be written without recursion. Though I believe that your real case is much harder. Moreover, it would be too boring and uninformative to stop here)
II
Generally speaking, we can divide all possible solutions into two groups: the ones that require refactoring (i.e. rewriting functions f, g, h) and the ones that do not. I have little to offer from the latter one (and I don't think anyone can). Consider the following, however:
def fk(x, k):
    a = 2
    return k(gk(x, k) + a if x != 0 else 0)

def gk(x, k):
    b = 1
    return k(hk(x, k) - b)

def hk(x, k):
    c = 1/2
    return k(fk(x-1, k)*(1+c))

def printret(x):
    print(x)
    return x

fk(4, printret)  # see what happens
Inspired by continuation-passing style, but that's totally not it.
What's the point? It's something between your idea of passing a list to write down all the computations, and memoizing. This k carries additional behaviour with it, such as printing or writing to a list (you can make a function that writes to some list, why not?). But if you look carefully you'll see that it leaves the inner code of these functions practically untouched (only the input and output of each function are affected), so one can produce a decorator associated with a function like printret that does essentially the same thing for f, g, h.
Pros: no need to modify code, much more flexible than passing a list, no additional work (like in memoizing).
Cons: impure (printing or modifying something), not as flexible as we would like.
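For example, the list-writing k mentioned above could look like this (using fk as defined earlier):

results = []

def recorder(x):
    results.append(x)    # write down every intermediate value
    return x

fk(4, recorder)
print(results)           # all the intermediate values, in the order they were computed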
III
Now let's see how modifying function bodies can help. Don't be afraid of what's written below, take your time and play with that thing a little.
class Logger:
    def __init__(self, lst, cur_val):
        self.lst = lst
        self.cur_val = cur_val

    def bind(self, f):
        res = f(self.cur_val)
        return Logger([self.cur_val] + res.lst + self.lst, res.cur_val)

    def __repr__(self):
        return "Logger( " + repr({'value': self.cur_val, 'lst': self.lst}) + " )"

def unit(x):
    return Logger([], x)

# you can also play with lala
def lala(x):
    if x <= 0:
        return unit(1)
    else:
        return lala(x - 1).bind(lambda y: unit(2*y))

def f(x):
    a = 2
    if x == 0:
        return unit(0)
    else:
        return g(x).bind(lambda y: unit(y + a))

def g(x):
    b = 1
    return h(x).bind(lambda y: unit(y - b))

def h(x):
    c = 1/2
    return f(x-1).bind(lambda y: unit(y*(1+c)))

f(4)  # see for yourself
Logger is what's called a monad. I'm not very familiar with this concept myself, but I guess I'm doing everything right) f, g, h are functions that take a number and return a Logger instance. Logger's bind takes a function (like f) and returns a Logger with the new value (computed by f) and updated 'logs'. The key point, as I see it, is the ability to do whatever we want with the collected values, in the order the resulting value was calculated.
Afterword
I'm not at all some kind of 'guru' of functional programming; I believe I'm missing a lot of things here. But what I've understood is that functional programming is about inverting the flow of the program. That's why, for instance, I totally agree with your opinion about generators being opposed to functional programming. When we use a generator gen in, say, a function func, we yield values one by one to func, and func does something with them in e.g. a loop. The functional approach would be to make gen a function taking func as a parameter and make func perform computations on the 'yielded' values. It's as if gen and func exchanged their places. So the flow is inverted! And there are plenty of other ways of inverting the flow. Monads are one of them.
itertools.islice takes an iterable, a start value and a stop value, and gives you the elements between start and stop as a generator. If islice is not clear, you can check the docs here: https://docs.python.org/3/library/itertools.html

intermediate_result = map(f, range(T))
final_result = next(itertools.islice(intermediate_result, T-1, T))
I have just about finished my assignment, and now the only thing I have left is to define the toString method shown here.
import math

class RegularPolygon:
    def __init__(self, n = 1, l = 1):
        self.__n = n
        self.__l = l
    def set_n(self, n):
        self.__n = n
    def get_n(self):
        return self.__n
    def addSides(self, x):
        self.__n = self.__n + x
    def setLength(self, l):
        self.__l = l
    def getLength(self):
        return self.__l
    def setPerimeter(self):
        return (self.__n * self.__l)
    def getArea(self):
        return (self.__l ** 2 / 4 * math.tan(math.radians(180/self.__n)))
    def toString(self):
        return

x = 3
demo_object = RegularPolygon(3, 1)
print(demo_object.get_n(), demo_object.getLength())
demo_object.addSides(x)
print(demo_object.get_n(), demo_object.getLength())
print(demo_object.getArea())
print(demo_object.setPerimeter())
Basically, what toString does is return a string that has the values of the internal variables included in it. I also need help with the getArea portion.
Assignment instructions
The assignment says
... printing a string representation of a RegularPolygon object.
So I would expect you get to choose a suitable "representation". You could go for something like this:
return f'{self.__n+2} sided regular polygon of side length {self.__l}'
or as suggested by @Roy Cohen
return f'{self.__class__.__name__}({self.__n}, {self.__l})'
However, as @Klaus D. wrote in the comments, Python is not Java, and as such has its own standards and magic methods to use instead.
I would recommend reading this answer for an explanation between the differences between the two built-in string representation magic-methods: __repr__ and __str__. By implementing these methods, they will automatically be called whenever using print() or something similar, instead of you calling .toString() every time.
Now to address the getters and setters. Typically in Python you avoid these and prefer using properties instead. See this answer for more information, but to summarise: you either use an object's attributes directly, or use the @property decorator to turn a method into a property.
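A sketch of what that property style might look like here (the names are illustrative, not from the assignment):

import math

class RegularPolygon:
    def __init__(self, n=1, l=1):
        self.n = n                    # number of sides, accessed directly
        self.l = l                    # side length

    @property
    def perimeter(self):
        return self.n * self.l        # computed on access: polygon.perimeter

    def __repr__(self):
        return f'RegularPolygon({self.n}, {self.l})'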
Edit
Your area formula is likely an order-of-operations error; be explicit about which operation you're performing first. The standard area of a regular polygon with n sides of length l is n·l² / (4·tan(180°/n)), so:

return self.__n * self.__l ** 2 / (4 * math.tan(math.radians(180/self.__n)))

As a sanity check, a unit square (n=4, l=1) gives 4 / (4·tan 45°) = 1, as expected.
I have found some nice examples (here, here) of implementing SICP-like streams in Python. But I am still not sure how to handle an example like the integral found in SICP 3.5.3 "Streams as signals."
The Scheme code found there is
(define (integral integrand initial-value dt)
  (define int
    (cons-stream initial-value
                 (add-streams (scale-stream integrand dt)
                              int)))
  int)
What is tricky about this one is that the returned stream int is defined in terms of itself (i.e., the stream int is used in the definition of the stream int).
I believe Python could have something similarly expressive and succinct... but not sure how. So my question is, what is an analogous stream-y construct in Python? (What I mean by a stream is the subject of 3.5 in SICP, but briefly, a construct (like a Python generator) that returns successive elements of a sequence of indefinite length, and can be combined and processed with operations such as add-streams and scale-stream that respect streams' lazy character.)
There are two ways to read your question. The first is simply: How do you use Stream constructs, perhaps the ones from your second link, but with a recursive definition? That can be done, though it is a little clumsy in Python.
In Python you can represent looped data structures but not directly. You can't write:
l = [l]
but you can write:
l = [None]
l[0] = l
Similarly you can't write:
def integral(integrand, initial_value, dt):
    int_rec = cons_stream(initial_value,
                          add_streams(scale_stream(integrand, dt),
                                      int_rec))
    return int_rec
but you can write:
def integral(integrand, initial_value, dt):
    placeholder = Stream(initial_value, lambda: None)
    int_rec = cons_stream(initial_value,
                          add_streams(scale_stream(integrand, dt),
                                      placeholder))
    placeholder._compute_rest = lambda: int_rec
    return int_rec
Note that we need to clumsily pre-compute the first element of placeholder and then only fix up the recursion for the rest of the stream. But this does all work (alongside appropriate definitions of all the rest of the code - I'll stick it all at the bottom of this answer).
However, the second part of your question seems to be asking how to do this naturally in Python. You ask for an "analogous stream-y construct in Python". Clearly the answer to that is exactly the generator. The generator naturally provides the lazy evaluation of the stream concept. It differs by not being naturally expressed recursively but then Python does not support that as well as Scheme, as we will see.
In other words, the strict stream concept can be expressed in Python (as in the link and above) but the idiomatic way to do it is to use generators.
It is more or less possible to replicate the Scheme example by a kind of direct mechanical transformation of stream to generator (but avoiding the built-in int):
def integral_rec(integrand, initial_value, dt):
    def int_rec():
        for x in cons_stream(initial_value,
                             add_streams(scale_stream(integrand, dt), int_rec())):
            yield x
    for x in int_rec():
        yield x

def cons_stream(a, b):
    yield a
    for x in b:
        yield x

def add_streams(a, b):
    # zip ends cleanly when either stream is exhausted; a bare next() here
    # would raise RuntimeError under PEP 479 once a stream runs out
    for x, y in zip(a, b):
        yield x + y

def scale_stream(a, b):
    for x in a:
        yield x * b
The only tricky thing here is to realise that you need to eagerly call the recursive use of int_rec as an argument to add_streams. Calling it doesn't start it yielding values - it just creates the generator ready to yield them lazily when needed.
This works nicely for small integrands, though it's not very pythonic. The Scheme version works by optimising the tail recursion - the Python version will exceed the max stack depth if your integrand is too long. So this is not really appropriate in Python.
A direct and natural pythonic version would look something like this, I think:
def integral(integrand, initial_value, dt):
    value = initial_value
    yield value
    for x in integrand:
        value += dt * x
        yield value
This works efficiently and correctly treats the integrand lazily as a "stream". However, it uses iteration rather than recursion to unpack the integrand iterable, which is more the Python way.
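For example, fed the same inputs as the final test at the bottom of this answer:

print(list(integral([1, 2, 3, 4, 5], 8, 0.5)))
# [8, 8.5, 9.5, 11.0, 13.0, 15.5]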
In moving to natural Python I have also removed the stream combination functions - for example, replaced add_streams with +=. But we could still use them if we wanted a sort of halfway house version:
def accum(initial_value, a):
    value = initial_value
    yield value
    for x in a:
        value += x
        yield value

def integral_hybrid(integrand, initial_value, dt):
    for x in accum(initial_value, scale_stream(integrand, dt)):
        yield x
This hybrid version uses the stream combinations from the Scheme and avoids only the tail recursion. This is still pythonic, and Python includes various other nice ways to work with iterables in the itertools module. They all "respect streams' lazy character", as you ask.
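For instance, the same integral can be written with itertools.accumulate (a sketch equivalent to the accum-based version above; the name integral_accumulate is mine):

from itertools import accumulate, chain

def integral_accumulate(integrand, initial_value, dt):
    # seed the running value, then lazily fold dt*x into it
    return accumulate(chain([initial_value], integrand),
                      lambda value, x: value + dt * x)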
Finally here is all the code for the first recursive stream example, much of it taken from the Berkeley reference:
class Stream(object):
    """A lazily computed recursive list."""
    def __init__(self, first, compute_rest, empty=False):
        self.first = first
        self._compute_rest = compute_rest
        self.empty = empty
        self._rest = None
        self._computed = False

    @property
    def rest(self):
        """Return the rest of the stream, computing it if necessary."""
        assert not self.empty, 'Empty streams have no rest.'
        if not self._computed:
            self._rest = self._compute_rest()
            self._computed = True
        return self._rest

    def __repr__(self):
        if self.empty:
            return '<empty stream>'
        return 'Stream({0}, <compute_rest>)'.format(repr(self.first))

Stream.empty = Stream(None, None, True)

def cons_stream(a, b):
    return Stream(a, lambda: b)

def add_streams(a, b):
    if a.empty or b.empty:
        return Stream.empty
    def compute_rest():
        return add_streams(a.rest, b.rest)
    return Stream(a.first + b.first, compute_rest)

def scale_stream(a, scale):
    if a.empty:
        return Stream.empty
    def compute_rest():
        return scale_stream(a.rest, scale)
    return Stream(a.first * scale, compute_rest)

def make_integer_stream(first=1):
    def compute_rest():
        return make_integer_stream(first + 1)
    return Stream(first, compute_rest)

def truncate_stream(s, k):
    if s.empty or k == 0:
        return Stream.empty
    def compute_rest():
        return truncate_stream(s.rest, k - 1)
    return Stream(s.first, compute_rest)

def stream_to_list(s):
    r = []
    while not s.empty:
        r.append(s.first)
        s = s.rest
    return r

def integral(integrand, initial_value, dt):
    placeholder = Stream(initial_value, lambda: None)
    int_rec = cons_stream(initial_value,
                          add_streams(scale_stream(integrand, dt),
                                      placeholder))
    placeholder._compute_rest = lambda: int_rec
    return int_rec

a = truncate_stream(make_integer_stream(), 5)
print(stream_to_list(integral(a, 8, .5)))  # [8, 8.5, 9.5, 11.0, 13.0, 15.5]