Design pattern for transformation of array data in Python script

I currently have a Python class with many methods, each of which performs a transformation of time series data (primarily arrays). Each method must be called in a specific order, since the inputs of each method rely on the outputs of the previous one. Hence my code structure looks like the following:
class Algorithm:
    def __init__(self, data1, data2):
        self.data1 = data1
        self.data2 = data2

    def perform_transformation1(self):
        ...  # perform action on self.data1

    def perform_transformation2(self):
        ...  # perform action on self.data1

    # etc.
At the bottom of the script, I instantiate the class and then proceed to call each method on the instance, procedurally. Using object-oriented programming in this case seems wrong to me.
My aim is to rewrite the script so that the inputs of each method do not depend on the outputs of the preceding method, giving me the ability to decide whether or not to perform certain methods.
What design pattern should I be using for this purpose, and does this move more towards functional programming?

class Algorithm:
    @staticmethod
    def perform_transformation_a(data):
        return 350 * data

    @staticmethod
    def perform_transformation_b(data):
        return 400 * data


def perform_transformations(data):
    transformations = (Algorithm.perform_transformation_a,
                       Algorithm.perform_transformation_b)
    for transformation in transformations:
        data = transformation(data)
    return data
You could have the Algorithm class just be a collection of pure functions, and then have the client code (perform_transformations here) handle the logic of ordering which transformations to apply. This way the Algorithm class is only responsible for the algorithms, and the client code worries only about the order in which the functions are applied.
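For example, here is a sketch of a variant of perform_transformations that lets the caller skip individual steps (the keyword flags are invented for illustration, not part of the original answer):

    def perform_transformations(data, use_a=True, use_b=True):
        # Build the pipeline only from the transformations we actually want.
        transformations = []
        if use_a:
            transformations.append(Algorithm.perform_transformation_a)
        if use_b:
            transformations.append(Algorithm.perform_transformation_b)
        for transformation in transformations:
            data = transformation(data)
        return data

    result = perform_transformations(2, use_b=False)  # applies only perform_transformation_a

Since each transformation is a pure function of its input, selecting or skipping steps becomes a plain data question rather than a method-ordering question.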

One option is to define your methods like this:
def perform_transformation(self, data=None):
    if data is None:
        ...  # perform action on self.data
    else:
        ...  # perform action on data
This way, you have the ability to call them at any time you want.
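For instance, a simplified single-series sketch of how such a method could be used (the squaring step is only a stand-in for a real transformation):

    class Algorithm:
        def __init__(self, data):
            self.data = data

        def perform_transformation(self, data=None):
            # Fall back to the stored series when no explicit input is given.
            if data is None:
                data = self.data
            return [x ** 2 for x in data]

    algo = Algorithm([1, 2, 3])
    print(algo.perform_transformation())        # uses self.data -> [1, 4, 9]
    print(algo.perform_transformation([4, 5]))  # uses the passed-in data -> [16, 25]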

Related

How to reassign a class method after calling another method?

I am currently working on a Python project with a couple class methods that are each called tens of thousands of times. One of the issues with these methods is that they rely on data being populated via another method first, so I want to be able to raise an error if the functions are called prior to populating the data.
And before anyone asks, I opted to separate the data population stage from the class constructor. This is because the data population (and processing) is intensive and I want to manage it separately from the constructor.
Simple (inefficient) implementation
A simple implementation of this might look like:
class DataNotPopulatedError(Exception):
    ...


class Unblocker1:
    def __init__(self):
        self.data = None
        self._is_populated = False

    def populate_data(self, data):
        self.data = data
        self._is_populated = True

    # It will make sense later why this is its own method
    def _do_something(self):
        print("Data is:", self.data)

    def do_something(self):
        if not self._is_populated:
            raise DataNotPopulatedError
        return self._do_something()


unblocker1 = Unblocker1()

# Raise an error (we haven't populated the data yet)
unblocker1.do_something()

# Don't raise an error (we populated the data first)
unblocker1.populate_data([1, 2, 3])
unblocker1.do_something()
My goal
Because the hypothetical do_something() method is called tens (or hundreds) of thousands of times, I would think those extra checks to make sure that the data has been populated would start to add up.
While I may be barking up the wrong tree, my first thought for improving the efficiency of the function was to dynamically reassign the method after the data is populated. I.e., when the class is first created, the do_something() method would point to another function that only raises a DataNotPopulatedError. The populate_data() method would then both populate the data and also "unblock" do_something() by dynamically reassigning do_something() back to the function as written.
I figure the cleanest way to implement something like this would be using a decorator.
Hypothetical usage
I have no idea how to implement the technique described above, however, I did create a hypothetical usage with the inefficient method from before. Given the goal implementation, there might need to be two decorators--one for the blocked functions, and one to unblock them.
import functools


def blocked2(attr, raises):
    def _blocked2(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            # Assumes `args[0]` is `self`
            # If `self.<attr>` is False, raise `raises`, otherwise call `func()`
            if not getattr(args[0], attr):
                raise raises
            return func(*args, **kwargs)
        return wrapper
    return _blocked2


class Unblocker2:
    def __init__(self):
        self.data = None
        self._is_populated = False

    def populate_data(self, data):
        self.data = data
        self._is_populated = True

    @blocked2("_is_populated", DataNotPopulatedError)
    def do_something(self):
        print("Data is:", self.data)
I've been having a hard time explaining what I am attempting to do, so I am open to other suggestions to accomplish a similar goal (and potentially better titles for the post). There is a decent chance I am taking the completely wrong approach here; that's just part of learning. If there is a better way of doing what I am trying to do, I am all ears!
What you are trying to do does not seem especially difficult. I suspect you are overcomplicating the task a bit. Assuming you are willing to respect your own private methods, you can do something like
class Unblocker2:
    def __init__(self):
        self.data = None

    def populate_data(self, data):
        self.data = data
        self.do_something = self._do_something_populated

    def do_something(self):
        raise DataNotPopulatedError('Data not populated yet')

    def _do_something_populated(self):
        print("Data is:", self.data)
Since methods are non-data descriptors, assigning a bound method to the instance attribute do_something will shadow the class attribute. That way, instances that have data populated can avoid making a check with the minimum of redundancy.
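A quick usage sketch of that idea, assuming the DataNotPopulatedError from the question is in scope:

    unblocker = Unblocker2()
    try:
        unblocker.do_something()        # class attribute: raises DataNotPopulatedError
    except DataNotPopulatedError:
        print("blocked")

    unblocker.populate_data([1, 2, 3])
    unblocker.do_something()            # the instance attribute now shadows the class method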
That being said, profile your code before going off and optimizing it. You'd be amazed at which parts take the longest.

Is it conventional for a member function that updates object state to also return a value?

I have a question regarding return conventions in Python. I have a class that has some data attributes, and several functions that perform some analysis on the data and then store the results as results attributes (please see the simplified implementation below). Since the analysis functions mainly update the results attribute, my question is: what is the best practice in terms of return statements? Should I avoid updating attributes inside the function (as in process1), and just return the data and use that to update the results attribute (as in process2)?
Thanks,
Kamran
class Analysis(object):
    def __init__(self, data):
        self.data = data
        self.results = None

    def process1(self):
        self.results = [i**2 for i in self.data]

    def process2(self):
        return [i**2 for i in self.data]


a = Analysis([1, 2, 3])
a.process1()
a.results = a.process2()
It all depends on the use case.
First of all, you are not changing class attributes there; you are changing instance attributes.
Python: Difference between class and instance attributes
Secondly, if you are planning to share the results of your processing among the various instances of the class, then you can use class variables.
Third, if you are asking about instance variables, then it depends on your design choice.
Besides that, I think this is unnecessary:
a.results = a.process2()
With it you are simply doing by hand what process1 already does as part of the object. You can use the process2 function for returning or displaying results if you need that functionality.
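For reference, a minimal sketch of the class-variable option mentioned above (the shared_results name is invented for illustration):

    class Analysis(object):
        shared_results = []            # class attribute: one list shared by all instances

        def __init__(self, data):
            self.data = data
            self.results = None        # instance attribute: separate per instance

        def process1(self):
            self.results = [i**2 for i in self.data]
            Analysis.shared_results.append(self.results)

    a = Analysis([1, 2])
    b = Analysis([3, 4])
    a.process1()
    b.process1()
    print(a.results)                  # [1, 4]
    print(Analysis.shared_results)    # [[1, 4], [9, 16]]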

python how to structure this class

I'm having some trouble deciding the best way to structure a certain class. The class will take in some strings as settings, and its methods will create different kinds of charts based on those settings.
For example, it could be called like this:
c = ChartEngine(type='line', labels='foo bar', data='1, 2', data2='3, 4')
chart = c.make_chart()
I don't know if it is best to structure this as a class, or just as a function that calls other functions in the same module. I also don't know whether I should put logic in the __init__ method that sets things up to call the make_chart function, or whether there is some other way.
def __init__(self, *settings*):
    self.type = type
    self.labels = labels
    self.data = data
    ...

def make_chart(self):
    if self.type == "line":
        line_chart(settings)
    elif self.type == "bar":
        bar_chart(settings)
    ...
How would you structure a class like this?
Use tests as a design tool. Write a test for one trivial minimal feature you want. Let it fail, then write the minimal code required to make it pass. Then write another test for another feature, let it fail, and so on. Using this development cycle (TDD), you design from the user's point of view and enforce proper encapsulation and abstraction of your implementation. You may want to learn about the pytest module.
At first sight I'd say that your class as you plan it will know too much about too many different things. Read about SRP and other SOLID principles to get you started on the matter. Most importantly, implement only what you need now, nothing more.
Finally, ChartEngine looks like a use case for the abstract factory pattern, which can be hard to implement with elegant code. Start with the most simple use cases and refactor early.
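As a rough illustration of that cycle, a first pytest test for the ChartEngine in the question might look like this (the module name and the assertion are only guesses at the eventual API):

    # test_chart_engine.py -- run with `pytest`
    from chart_engine import ChartEngine  # hypothetical module name

    def test_make_line_chart_returns_something():
        c = ChartEngine(type='line', labels='foo bar', data='1, 2', data2='3, 4')
        chart = c.make_chart()
        assert chart is not None

Write the test first, watch it fail, then implement just enough of ChartEngine to make it pass.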
This is a design problem. By nature of the problem, you could use many subjective approaches. You'd optimally like to design your program in such a way that your source is generally de-coupled, leading to future changes being trivial. However over-engineering is a thing, so take into account the requirements at hand. Various approaches could yield certain benefits. Here is an example approach below.
You could have a master control class called ChartEngine (as you've mentioned) that can dispatch the various charts via Chart sub-classes. You could then implement a base class called Chart that serves as the super class to the various variations. The nature of your program suits inheritance more than composition in my opinion (again, many different approaches can be used).
class Chart(object):
    def __init__(self):
        pass

    def draw(self):
        pass
    ...
Then extend/override the base class' functionality via the inheritance of Chart. You could then implement something like a Line class for example:
class Line(Chart):
    pass


class Bar(Chart):
    pass
As for ChartEngine, you could have the class have dispatch methods for each of the various graph types, similar to routes in an MVC web app. One approach could be to have a dictionary that keeps track of the various graphs and their associated dispatch methods.
class ChartEngine(object):
    def __init__(self, type, labels, data):
        self.type = type
        self.labels = labels
        self.data = data
        ...

    def make_chart(self):
        self.type_to_dispatch[self.type](self)

    def dispatch_line(self):
        pass

    def dispatch_bar(self):
        pass

    # Built after the methods so that the names exist when the dict is created.
    type_to_dispatch = {"bar": dispatch_bar, "line": dispatch_line}

Static method-only class and subclasses in Python - is there a better design pattern?

I am pricing financial instruments, and each financial instrument object requires a day counter as a property. There are 4 kinds of day counters which have different implementations for each of their two methods, year_fraction and day_count. This day counter property on financial instruments is used in other classes when pricing, to know how to discount curves appropriately, etc. However, all of the day count methods are static, and doing nothing more than applying some formula.
So despite everything I've read online telling me to not use static methods and just have module-level functions instead, I couldn't see a way to nicely pass around the correct DayCounter without implementing something like this
import abc


class DayCounter:
    __metaclass__ = abc.ABCMeta

    @abc.abstractstaticmethod
    def year_fraction(start_date, end_date):
        raise NotImplementedError("DayCounter subclass must define a year_fraction method to be valid.")

    @abc.abstractstaticmethod
    def day_count(start_date, end_date):
        raise NotImplementedError("DayCounter subclass must define a day_count method to be valid.")


class Actual360(DayCounter):
    @staticmethod
    def day_count(start_date, end_date):
        ...  # some unique formula

    @staticmethod
    def year_fraction(start_date, end_date):
        ...  # some unique formula


class Actual365(DayCounter):
    @staticmethod
    def day_count(start_date, end_date):
        ...  # some unique formula

    @staticmethod
    def year_fraction(start_date, end_date):
        ...  # some unique formula


class Thirty360(DayCounter):
    @staticmethod
    def day_count(start_date, end_date):
        ...  # some unique formula

    @staticmethod
    def year_fraction(start_date, end_date):
        ...  # some unique formula


class ActualActual(DayCounter):
    @staticmethod
    def day_count(start_date, end_date):
        ...  # some unique formula

    @staticmethod
    def year_fraction(start_date, end_date):
        ...  # some unique formula
So that in a pricing engine for some particular instrument that is passed an instrument as a parameter, I can use the instrument's day counter property as needed.
Am I missing something more idiomatic / stylistically acceptable in Python or does this seem like appropriate use for static method-only classes?
Example:
I have a class FxPricingEngine whose __init__ method is passed an FxInstrument, which becomes its underlying_instrument property. Then, in order to use the Value method of my pricing engine, I need to discount a curve with a particular day counter. I have a YieldCurve class with a discount method, to which I pass self.underlying_instrument.day_counter.year_fraction so that I can apply the correct formula. Really, all that the classes are doing is providing some logical organization for the unique implementations.
As it is, object orientation does not make any sense in your scenario: there is no data associated with an instance of your types, so any two objects of some type (e.g. Thirty360) are going to be equal (i.e. you only have singletons).
It looks like you want to be able to parametrise client code on behaviour - the data which your methods operate on is not given in a constructor but rather via the arguments of the methods. In that case, plain free functions may be a much more straight forward solution.
For instance, given some imaginary client code which operates on your counters like:
def f(counter):
    counter.day_count(a, b)
    # ...
    counter.year_fraction(x, y)
...you could just as well imagine passing two functions as arguments right away instead of an object, e.g. have
def f(day_count, year_fraction):
    day_count(a, b)
    # ...
    year_fraction(x, y)
On the caller side, you would pass plain functions, e.g.
f(thirty360_day_count, thirty360_year_fraction)
If you like, you could also have different names for the functions, or you could define them in separate modules to get the namespacing. You could also easily pass special functions this way (for instance, if you only need day_count to be correct while year_fraction could be a no-op).
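As a concrete sketch of that calling convention, an Actual/360 pair written as plain functions might look like this (using the standard actual-days-over-360 convention; the formulas in the question are elided, so treat these as illustrative):

    import datetime

    def actual360_day_count(start_date, end_date):
        # Actual number of calendar days between the two dates.
        return (end_date - start_date).days

    def actual360_year_fraction(start_date, end_date):
        # Actual/360 convention: actual days over a 360-day year.
        return actual360_day_count(start_date, end_date) / 360.0

    # These plain functions can then be handed straight to client code such as f above:
    # f(actual360_day_count, actual360_year_fraction)
    print(actual360_year_fraction(datetime.date(2020, 1, 1), datetime.date(2020, 7, 1)))  # ~0.506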
Well, to be frank, it doesn't make much sense to define static methods this way. The only purpose static methods are serving in your case is providing a namespace for your function names, i.e. you can call your method like Actual365.day_count, making it clearer that day_count belongs to the Actual365 functionality.
But you can accomplish the same thing by defining a module named actual365.
import actual365
actual365.day_count()
As far as Object Orientation is concerned, your code is not offering any advantage that OOP design offers. You have just wrapped your functions in a class.
Now, I noticed all your methods are using start_date and end_date; how about using them as instance variables?
class Actual365(object):
    def __init__(self, start_date, end_date):
        self.start_date, self.end_date = start_date, end_date

    def day_count(self):
        # your unique formula
        ...
Besides, abstract classes don't make much sense in a duck-typed language like Python. As long as an object provides the required behavior, it doesn't need to inherit from some abstract class.
If that doesn't work for you, using just functions might be better approach.
No
Not that I know of. In my opinion your design pattern is ok. Especially if there is the possibility of your code growing to more than 2 functions per class and if the functions inside a class are connected.
If any combination of function is possible, use the function passing approach.
The modules approach and your approach are quite similar. The advantage or disadvantage (it depends) of modules is that your code get split up into many files. Your approach allows you to use isinstance, but you probably won't need that.
Passing functions directly
If you had only one function, you could just pass this function directly instead of using a class. But as soon as you have 2 or more functions with different implementations, classes seem fine to me. Just add a docstring to explain the usage. I assume that the two function implementations in one class are somewhat connected.
Using modules
You could use modules instead of classes (e.g. a module Actual365DayCounter and a module Actual360DayCounter) and use something like if something: import Actual360DayCounter as daycounter and else: import Actual365DayCounter as daycounter.
Or you could import all modules and put them in a dictionary (thanks to the comment by @freakish) like MODULES = { 'Actual360': Actual360, ... } and simply use MODULES[my_var].
But I doubt this would be a better design pattern, as you would split up your source code into many tiny modules.
Another option:
One way would be to use only one class:
class DayCounter:
    def __init__(self, daysOfTheYear=360):
        self.days_of_the_year = daysOfTheYear
And make the functions use self.days_of_the_year. But this only works if days_of_the_year is actually a parameter of the formula. If you would have to use a lot of if ... elif ... elif ... in your function implementations, this approach would be worse.
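A sketch of that single-class option, assuming the day count is simply the actual number of days between the dates (the real conventions may differ):

    class DayCounter:
        def __init__(self, days_of_the_year=360):
            self.days_of_the_year = days_of_the_year

        def day_count(self, start_date, end_date):
            # Simplifying assumption: plain actual-day difference.
            return (end_date - start_date).days

        def year_fraction(self, start_date, end_date):
            return self.day_count(start_date, end_date) / float(self.days_of_the_year)

    actual360 = DayCounter(360)
    actual365 = DayCounter(365)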

Python __call__ special method practical example

I know that the __call__ method in a class is triggered when an instance of the class is called. However, I have no idea when I would use this special method, because one could simply create a new method that performs the same operation as the __call__ method, and call that method instead of calling the instance.
I would really appreciate it if someone gives me a practical usage of this special method.
This example uses memoization, basically storing values in a table (dictionary in this case) so you can look them up later instead of recalculating them.
Here we use a simple class with a __call__ method to calculate factorials (through a callable object) instead of a factorial function that contains a static variable (as that's not possible in Python).
class Factorial:
    def __init__(self):
        self.cache = {}

    def __call__(self, n):
        if n not in self.cache:
            if n == 0:
                self.cache[n] = 1
            else:
                self.cache[n] = n * self.__call__(n - 1)
        return self.cache[n]


fact = Factorial()
fact = Factorial()
Now you have a fact object which is callable, just like every other function. For example
for i in xrange(10):
    print("{}! = {}".format(i, fact(i)))
# output
0! = 1
1! = 1
2! = 2
3! = 6
4! = 24
5! = 120
6! = 720
7! = 5040
8! = 40320
9! = 362880
And it is also stateful.
Django's forms module uses the __call__ method nicely to implement a consistent API for form validation. You can write your own validator for a form in Django as a function.
def custom_validator(value):
    # your validation logic
    ...
Django has some default built-in validators such as email validators, url validators etc., which broadly fall under the umbrella of RegEx validators. To implement these cleanly, Django resorts to callable classes (instead of functions). It implements default Regex Validation logic in a RegexValidator and then extends these classes for other validations.
class RegexValidator(object):
    def __call__(self, value):
        # validation logic
        ...


class URLValidator(RegexValidator):
    def __call__(self, value):
        super(URLValidator, self).__call__(value)
        # additional logic


class EmailValidator(RegexValidator):
    # some logic
    ...
Now both your custom function and built-in EmailValidator can be called with the same syntax.
for v in [custom_validator, EmailValidator()]:
    v(value)  # <-----
As you can see, this implementation in Django is similar to what others have explained in their answers below. Can this be implemented in any other way? You could, but IMHO it will not be as readable or as easily extensible for a big framework like Django.
I find it useful because it allows me to create APIs that are easy to use (you have some callable object that requires some specific arguments), and are easy to implement because you can use Object Oriented practices.
The following is code I wrote yesterday that makes a version of the hashlib.foo methods that hash entire files rather than strings:
# filehash.py
import hashlib


class Hasher(object):
    """
    A wrapper around the hashlib hash algorithms that allows an entire file to
    be hashed in a chunked manner.
    """
    def __init__(self, algorithm):
        self.algorithm = algorithm

    def __call__(self, file):
        hash = self.algorithm()
        with open(file, 'rb') as f:
            for chunk in iter(lambda: f.read(4096), ''):
                hash.update(chunk)
        return hash.hexdigest()


md5 = Hasher(hashlib.md5)
sha1 = Hasher(hashlib.sha1)
sha224 = Hasher(hashlib.sha224)
sha256 = Hasher(hashlib.sha256)
sha384 = Hasher(hashlib.sha384)
sha512 = Hasher(hashlib.sha512)
This implementation allows me to use the functions in a similar fashion to the hashlib.foo functions:
from filehash import sha1
print sha1('somefile.txt')
Of course I could have implemented it a different way, but in this case it seemed like a simple approach.
__call__ is also used to implement decorator classes in python. In this case the instance of the class is called when the method with the decorator is called.
class EnterExitParam(object):
    def __init__(self, p1):
        self.p1 = p1

    def __call__(self, f):
        def new_f():
            print("Entering", f.__name__)
            print("p1=", self.p1)
            f()
            print("Leaving", f.__name__)
        return new_f


@EnterExitParam("foo bar")
def hello():
    print("Hello")


if __name__ == "__main__":
    hello()
program output:
Entering hello
p1= foo bar
Hello
Leaving hello
Yes, when you know you're dealing with objects, it's perfectly possible (and in many cases advisable) to use an explicit method call. However, sometimes you deal with code that expects callable objects - typically functions, but thanks to __call__ you can build more complex objects, with instance data and more methods to delegate repetitive tasks, etc. that are still callable.
Also, sometimes you're using both objects for complex tasks (where it makes sense to write a dedicated class) and objects for simple tasks (that already exist in functions, or are more easily written as functions). To have a common interface, you either have to write tiny classes wrapping those functions with the expected interface, or you keep the functions as functions and make the more complex objects callable. Let's take threads as an example. The Thread objects from the standard library module threading want a callable as the target argument (i.e. as the action to be done in the new thread). With a callable object, you are not restricted to functions; you can pass other objects as well, such as a relatively complex worker that gets tasks to do from other threads and executes them sequentially:
import queue


class Worker(object):
    def __init__(self, *args, **kwargs):
        self.queue = queue.Queue()
        self.args = args
        self.kwargs = kwargs

    def add_task(self, task):
        self.queue.put(task)

    def __call__(self):
        while True:
            next_action = self.queue.get()
            success = next_action(*self.args, **self.kwargs)
            if not success:
                self.add_task(next_action)
This is just an example off the top of my head, but I think it is already complex enough to warrant the class. Doing this only with functions is hard, at least it requires returning two functions and that's slowly getting complex. One could rename __call__ to something else and pass a bound method, but that makes the code creating the thread slightly less obvious, and doesn't add any value.
Class-based decorators use __call__ to reference the wrapped function. E.g.:
class Deco(object):
    def __init__(self, f):
        self.f = f

    def __call__(self, *args, **kwargs):
        print args
        print kwargs
        self.f(*args, **kwargs)
There is a good description of the various options here at Artima.com
IMHO the __call__ method and closures give us a natural way to implement the STRATEGY design pattern in Python. We define a family of algorithms, encapsulate each one, make them interchangeable, and in the end we can execute a common set of steps and, for example, calculate a hash for a file.
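A small sketch of what that strategy shape can look like with callables (the concrete algorithms here are invented purely for illustration):

    class Doubler:
        def __call__(self, values):
            return [v * 2 for v in values]

    class Squarer:
        def __call__(self, values):
            return [v ** 2 for v in values]

    def run_pipeline(values, strategy):
        # The common set of steps; only `strategy` varies.
        cleaned = [v for v in values if v is not None]
        return strategy(cleaned)

    print(run_pipeline([1, None, 3], Doubler()))  # [2, 6]
    print(run_pipeline([1, None, 3], Squarer()))  # [1, 9]

Each strategy is encapsulated in its own class, and because the instances are callable they are interchangeable with plain functions.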
I just stumbled upon a usage of __call__() in concert with __getattr__() which I think is beautiful. It allows you to hide multiple levels of a JSON/HTTP/(however_serialized) API inside an object.
The __getattr__() part takes care of iteratively returning a modified instance of the same class, filling in one more attribute at a time. Then, after all information has been exhausted, __call__() takes over with whatever arguments you passed in.
Using this model, you can for example make a call like api.v2.volumes.ssd.update(size=20), which ends up in a PUT request to https://some.tld/api/v2/volumes/ssd/update.
The particular code is a block storage driver for a certain volume backend in OpenStack, you can check it out here: https://github.com/openstack/cinder/blob/master/cinder/volume/drivers/nexenta/jsonrpc.py
EDIT: Updated the link to point to master revision.
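Here is a much-simplified, illustrative sketch of that chaining idea (the OpenStack driver linked above is far more involved; the request here is only simulated as a string):

    class ApiProxy:
        def __init__(self, base_url, path=()):
            self._base_url = base_url
            self._path = path

        def __getattr__(self, name):
            # Each attribute access returns a new proxy with one more path segment.
            return ApiProxy(self._base_url, self._path + (name,))

        def __call__(self, **kwargs):
            url = self._base_url + "/" + "/".join(self._path)
            # A real client would issue the HTTP request here.
            return "PUT {} with {}".format(url, kwargs)

    api = ApiProxy("https://some.tld/api")
    print(api.v2.volumes.ssd.update(size=20))
    # PUT https://some.tld/api/v2/volumes/ssd/update with {'size': 20}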
I find that a good place to use callable objects, those that define __call__(), is when using the functional programming capabilities in Python, such as map(), filter(), and reduce().
The best time to use a callable object over a plain function or a lambda is when the logic is complex and needs to retain some state, or uses other info that is not passed to the __call__() function.
Here's some code that filters file names based upon their filename extension using a callable object and filter().
Callable:
import os


class FileAcceptor(object):
    def __init__(self, accepted_extensions):
        self.accepted_extensions = accepted_extensions

    def __call__(self, filename):
        base, ext = os.path.splitext(filename)
        return ext in self.accepted_extensions


class ImageFileAcceptor(FileAcceptor):
    def __init__(self):
        image_extensions = ('.jpg', '.jpeg', '.gif', '.bmp')
        super(ImageFileAcceptor, self).__init__(image_extensions)
Usage:
filenames = [
    'me.jpg',
    'me.txt',
    'friend1.jpg',
    'friend2.bmp',
    'you.jpeg',
    'you.xml']

acceptor = ImageFileAcceptor()
image_filenames = filter(acceptor, filenames)
print image_filenames
Output:
['me.jpg', 'friend1.jpg', 'friend2.bmp', 'you.jpeg']
Specify a __metaclass__ and override the __call__ method, and have the specified metaclass's __new__ method return an instance of the class; voila, you have a "function" with methods.
We can use the __call__ method to use other class methods as static methods.
class _Callable:
    def __init__(self, anycallable):
        self.__call__ = anycallable


class Model:
    def get_instance(conn, table_name):
        """ do something """

    get_instance = _Callable(get_instance)
provs_fac = Model.get_instance(connection, "users")
One common example is the __call__ in functools.partial, here is a simplified version (with Python >= 3.5):
class partial:
    """New function with partial application of the given arguments and keywords."""

    def __new__(cls, func, *args, **kwargs):
        if not callable(func):
            raise TypeError("the first argument must be callable")
        self = super().__new__(cls)
        self.func = func
        self.args = args
        self.kwargs = kwargs
        return self

    def __call__(self, *args, **kwargs):
        return self.func(*self.args, *args, **self.kwargs, **kwargs)
Usage:
def add(x, y):
    return x + y


inc = partial(add, y=1)
print(inc(41))  # 42
This is too late but I'm giving an example. Imagine you have a Vector class and a Point class. Both take x, y as positional args. Let's imagine you want to create a function that moves the point to be put on the vector.
4 Solutions:
1. put_point_on_vec(point, vec)
2. Make it a method on the Vector class, e.g. my_vec.put_point(point)
3. Make it a method on the Point class: my_point.put_on_vec(vec)
4. Vector implements __call__, so you can use it like my_vec_instance(point)
This is actually part of some examples I'm working on for a guide for dunder methods explained with Maths that I'm gonna release sooner or later.
I left out the logic of moving the point itself, because that is not what this question is about.
I'm a novice, but here is my take: having __call__ makes composition easier to code. If f and g are instances of a class Function that has a method eval(self, x), then with __call__ one can write f(g(x)) as opposed to f.eval(g.eval(x)).
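A tiny sketch of that idea (the Square and Increment classes are invented for illustration):

    class Function:
        def eval(self, x):
            raise NotImplementedError

        def __call__(self, x):
            # Delegating to eval lets instances be composed like plain functions.
            return self.eval(x)

    class Square(Function):
        def eval(self, x):
            return x * x

    class Increment(Function):
        def eval(self, x):
            return x + 1

    f, g = Square(), Increment()
    print(f(g(3)))   # 16, instead of f.eval(g.eval(3))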
A neural network can be composed from smaller neural networks, and in PyTorch we have __call__ in the Module class:
Here's an example of where __call__ is used in practice. In PyTorch, when defining a neural network (say class MyNN(nn.Module)) as a subclass of torch.nn.Module, one typically defines a forward method for the class. But when applying an input tensor x to an instance model = MyNN(), we just write model(x) as opposed to model.forward(x), and both give the same answer. If you dig into the source for torch.nn.Module here
https://pytorch.org/docs/stable/_modules/torch/nn/modules/module.html#Module
and search for __call__ one can eventually trace it back - at least in some cases - to a call to self.forward
import torch
import torch.nn as nn
import torch.nn.functional as F


class MyNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.layer1 = nn.Linear(784, 200)

    def forward(self, x):
        return F.relu(self.layer1(x))


x = torch.rand(10, 784)
model = MyNN()
print(model(x))
print(model.forward(x))
The last two lines will print the same values, because the Module class implements __call__; that is what Python turns to when it sees model(x), and __call__ in turn eventually calls model.forward(x).
The function call operator.
class Foo:
    def __call__(self, a, b, c):
        # do something
        ...


x = Foo()
x(1, 2, 3)
The __call__ method can be used to redefine/re-initialize the same object. It also facilitates the use of instances of a class as functions, by passing arguments to the objects.
import random


class Bingo:
    def __init__(self, items):
        self._items = list(items)
        random.shuffle(self._items, random=None)

    def pick(self):
        try:
            return self._items.pop()
        except IndexError:
            raise LookupError('It is empty now!')

    def __call__(self):
        return self.pick()


b = Bingo(range(3))
print(b.pick())
print(b())
print(callable(b))
Now the output can be (the first two values vary, since the items 0-2 are shuffled):
2
1
True
You can see that the Bingo class implements the __call__ method, an easy way to create function-like objects that have an internal state that must be kept across invocations, like the remaining items in Bingo. Another good use case of __call__ is decorators. Decorators must be callable, and it is sometimes convenient to "remember" something between calls of a decorator (i.e., memoization: caching the results of an expensive computation for later use).
