Multiprocessing managers and custom classes - python

I do not know why, but I get this strange error whenever I try to pass a shared custom-class object to a method of another shared object. Python version: 3.6.3
Code:
from multiprocessing.managers import SyncManager

class MyManager(SyncManager): pass

class MyClass: pass

class Wrapper:
    def set(self, ent):
        self.ent = ent

MyManager.register('MyClass', MyClass)
MyManager.register('Wrapper', Wrapper)

if __name__ == '__main__':
    manager = MyManager()
    manager.start()
    try:
        obj = manager.MyClass()
        lst = manager.list([1, 2, 3])
        collection = manager.Wrapper()
        collection.set(lst)  # executed fine
        collection.set(obj)  # raises error
    except Exception as e:
        raise
Error:
---------------------------------------------------------------------------
Traceback (most recent call last):
  File "D:\Program Files\Python363\lib\multiprocessing\managers.py", line 228, in serve_client
    request = recv()
  File "D:\Program Files\Python363\lib\multiprocessing\connection.py", line 251, in recv
    return _ForkingPickler.loads(buf.getbuffer())
  File "D:\Program Files\Python363\lib\multiprocessing\managers.py", line 881, in RebuildProxy
    return func(token, serializer, incref=incref, **kwds)
TypeError: AutoProxy() got an unexpected keyword argument 'manager_owned'
---------------------------------------------------------------------------
What's the problem here?

I ran into this too. As noted, this is a bug in Python's multiprocessing (see issue #30256), and the pull request that corrects it has not yet been merged. That pull request has since been superseded by another PR that makes the same change but adds a test as well.
Apart from manually patching your local installation, you have three other options:
you could use the MakeProxyType() callable to specify your proxy type, without relying on the AutoProxy proxy generator,
you could define a custom proxy class, or
you could patch the bug with a monkeypatch.
I'll describe those options below, after explaining what AutoProxy does:
What's the point of the AutoProxy class
The multiprocessing Manager pattern gives access to shared values by putting the values all in the same, dedicated 'canonical values server' process. All other processes (clients) talk to the server through proxies that then pass messages back and forth with the server.
The server does need to know what methods are acceptable for the type of object, however, so clients can produce a proxy object with the same methods. This is what the AutoProxy object is for. Whenever a client needs a new instance of your registered class, the default proxy the client creates is an AutoProxy, which then asks the server to tell it what methods it can use.
Once it has the method names, it calls MakeProxyType to construct a new class and then creates an instance of that class to return.
All this is deferred until you actually need an instance of the proxied type, so in principle AutoProxy saves a little bit of memory if you are not using certain classes you have registered. It's very little memory, however, and the downside is that this process has to take place in each client process.
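Roughly, the flow inside AutoProxy looks like this (a simplified sketch paraphrasing the stdlib code, with authkey handling and bookkeeping elided; not the exact source):

from multiprocessing.managers import MakeProxyType, dispatch, listener_client

def autoproxy_sketch(token, serializer, manager=None, authkey=None,
                     exposed=None, incref=True):
    # connect to the manager server and ask which methods the
    # registered class exposes
    _Client = listener_client[serializer][1]
    if exposed is None:
        conn = _Client(token.address, authkey=authkey)
        try:
            exposed = dispatch(conn, None, 'get_methods', (token,))
        finally:
            conn.close()
    # build a proxy class for those methods and instantiate it
    ProxyType = MakeProxyType('AutoProxy[%s]' % token.typeid, exposed)
    return ProxyType(token, serializer, manager=manager,
                     authkey=authkey, incref=incref)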
These proxy objects use reference counting to track when the server can remove the canonical value. It is that part that is broken in the AutoProxy callable; a new argument is passed to the proxy type to disable reference counting when the proxy object is being created in the server process rather than in a client but the AutoProxy type wasn't updated to support this.
So, how can you fix this? Here are those 3 options:
Use the MakeProxyType() callable
As mentioned, AutoProxy is really just a call (via the server) to get the public methods of the type, and a call to MakeProxyType(). You can just make these calls yourself, when registering.
So, instead of
from multiprocessing.managers import SyncManager
SyncManager.register("YourType", YourType)
use
from multiprocessing.managers import SyncManager, MakeProxyType, public_methods
# arguments: classname, sequence of method names
YourTypeProxy = MakeProxyType("YourType", public_methods(YourType))
SyncManager.register("YourType", YourType, YourTypeProxy)
Feel free to inline the MakeProxyType() call there.
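That inlined version would look like this (equivalent to the two-step registration above):

SyncManager.register(
    "YourType", YourType,
    MakeProxyType("YourType", public_methods(YourType)),
)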
If you were using the exposed argument to SyncManager.register(), you should pass those names to MakeProxyType instead:
# SyncManager.register("YourType", YourType, exposed=("foo", "bar"))
# becomes
YourTypeProxy = MakeProxyType("YourType", ("foo", "bar"))
SyncManager.register("YourType", YourType, YourTypeProxy)
You'd have to do this for all the pre-registered types, too:
from multiprocessing.managers import SyncManager, AutoProxy, MakeProxyType, public_methods

registry = SyncManager._registry
for typeid, (callable, exposed, method_to_typeid, proxytype) in registry.items():
    if proxytype is not AutoProxy:
        continue
    create_method = hasattr(SyncManager, typeid)
    if exposed is None:
        exposed = public_methods(callable)
    SyncManager.register(
        typeid,
        callable=callable,
        exposed=exposed,
        method_to_typeid=method_to_typeid,
        proxytype=MakeProxyType(f"{typeid}Proxy", exposed),
        create_method=create_method,
    )
Create custom proxies
You don't have to rely on multiprocessing creating a proxy for you; you can write your own. A proxy is used in every process except the special 'managed values' server process, and it passes messages back and forth with the server. This is not an option for the already-registered types, of course, but I'm mentioning it here because for your own types it offers opportunities for optimisations.
Note that you should have methods for all interactions that need to go back to the 'canonical' value instance, so you'd need to use properties to handle normal attributes or add __getattr__, __setattr__ and __delattr__ methods as needed.
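For normal attribute access, a forwarding proxy could look something like this minimal sketch (it mirrors what the stdlib's NamespaceProxy does; the underscore check keeps the proxy's own bookkeeping attributes local):

from multiprocessing.managers import BaseProxy

class AttrForwardingProxy(BaseProxy):
    # methods the server allows us to call on the real object
    _exposed_ = ("__getattribute__", "__setattr__", "__delattr__")

    def __getattr__(self, name):
        # called only when normal lookup fails; keep underscore names
        # (the proxy's own bookkeeping) local
        if name.startswith("_"):
            return object.__getattribute__(self, name)
        return self._callmethod("__getattribute__", (name,))

    def __setattr__(self, name, value):
        if name.startswith("_"):
            return object.__setattr__(self, name, value)
        return self._callmethod("__setattr__", (name, value))

    def __delattr__(self, name):
        if name.startswith("_"):
            return object.__delattr__(self, name)
        return self._callmethod("__delattr__", (name,))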
The advantage is that you can have very fine-grained control over what methods actually need to exchange data with the server process; in my specific example, my proxy class caches information that is immutable (the values never change once the object is created) but is used often. That includes a flag value that controls whether other methods do something, so the proxy can just check the flag value and not talk to the server process if it is not set. Something like this:
from multiprocessing.managers import BaseProxy, SyncManager

class FooProxy(BaseProxy):
    # what methods the proxy is allowed to access through calls
    _exposed_ = ("__getattribute__", "expensive_method", "spam")

    @property
    def flag(self):
        try:
            v = self._flag
        except AttributeError:
            # ask for the value from the server, "realvalue.flag"
            # use __getattribute__ because it's an attribute, not a property
            v = self._flag = self._callmethod("__getattribute__", ("flag",))
        return v

    def expensive_method(self, *args, **kwargs):
        if self.flag:  # cached locally!
            return self._callmethod("expensive_method", args, kwargs)

    def spam(self, *args, **kwargs):
        return self._callmethod("spam", args, kwargs)

SyncManager.register("Foo", Foo, FooProxy)
Because MakeProxyType() returns a BaseProxy subclass, you can combine that class with a custom subclass, saving yourself having to write any methods that just consist of return self._callmethod(...):
from multiprocessing.managers import MakeProxyType, SyncManager

# a base class with the methods generated for us. The second argument
# doubles as the 'permitted' names, stored as _exposed_
FooProxyBase = MakeProxyType(
    "FooProxyBase",
    ("__getattribute__", "expensive_method", "spam"),
)

class FooProxy(FooProxyBase):
    @property
    def flag(self):
        try:
            v = self._flag
        except AttributeError:
            # ask for the value from the server, "realvalue.flag"
            # use __getattribute__ because it's an attribute, not a property
            v = self._flag = self._callmethod("__getattribute__", ("flag",))
        return v

    def expensive_method(self, *args, **kwargs):
        if self.flag:  # cached locally!
            return self._callmethod("expensive_method", args, kwargs)

    # spam() is inherited from FooProxyBase, so there is no need to write it out

SyncManager.register("Foo", Foo, FooProxy)
Again, this won't solve the issue with standard types nested inside other proxied values.
Apply a monkeypatch
I use this to fix the AutoProxy callable; it automatically skips patching when you are running a Python version where the fix has already been applied to the source code:
# Backport of https://github.com/python/cpython/pull/4819
# Improvements to the Manager / proxied shared values code
# broke handling of proxied objects without a custom proxy type,
# as the AutoProxy function was not updated.
#
# This code adds a wrapper to AutoProxy if it is missing the
# new argument.

import logging
from inspect import signature
from functools import wraps
from multiprocessing import managers

logger = logging.getLogger(__name__)

orig_AutoProxy = managers.AutoProxy

@wraps(managers.AutoProxy)
def AutoProxy(*args, incref=True, manager_owned=False, **kwargs):
    # Create the autoproxy without the manager_owned flag, then
    # update the flag on the generated instance. If the manager_owned flag
    # is set, `incref` is disabled, so set it to False here for the same
    # result.
    autoproxy_incref = False if manager_owned else incref
    proxy = orig_AutoProxy(*args, incref=autoproxy_incref, **kwargs)
    proxy._owned_by_manager = manager_owned
    return proxy

def apply():
    if "manager_owned" in signature(managers.AutoProxy).parameters:
        return

    logger.debug("Patching multiprocessing.managers.AutoProxy to add manager_owned")
    managers.AutoProxy = AutoProxy

    # re-register any types already registered to SyncManager without a custom
    # proxy type, as otherwise these would all be using the old unpatched AutoProxy
    SyncManager = managers.SyncManager
    registry = managers.SyncManager._registry
    for typeid, (callable, exposed, method_to_typeid, proxytype) in registry.items():
        if proxytype is not orig_AutoProxy:
            continue
        create_method = hasattr(managers.SyncManager, typeid)
        SyncManager.register(
            typeid,
            callable=callable,
            exposed=exposed,
            method_to_typeid=method_to_typeid,
            create_method=create_method,
        )
Import the above and call the apply() function to fix multiprocessing. Do so before you start the manager server!
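Usage would look something like this (assuming you saved the patch above as a hypothetical module named autoproxy_patch.py):

from multiprocessing.managers import SyncManager

import autoproxy_patch  # hypothetical module containing the patch above

class MyManager(SyncManager): pass

if __name__ == '__main__':
    autoproxy_patch.apply()  # patch before the manager server starts
    manager = MyManager()
    manager.start()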

Solution editing multiprocessing source code
The original answer by Sergey requires you to edit multiprocessing source code as follows:
Find your multiprocessing package (mine, installed via Anaconda, was in /anaconda3/lib/python3.6/multiprocessing).
Open managers.py
Add the keyword argument manager_owned=True to the AutoProxy function.
Original AutoProxy:
def AutoProxy(token, serializer, manager=None, authkey=None,
              exposed=None, incref=True):
    ...
Edited AutoProxy:
def AutoProxy(token, serializer, manager=None, authkey=None,
              exposed=None, incref=True, manager_owned=True):
    ...
Solution via code, at run time
I managed to solve the unexpected keyword argument TypeError exception without directly editing the source code of multiprocessing, by instead adding these few lines of code where I use multiprocessing's Managers:
import multiprocessing.managers

# Back up the original AutoProxy function
backup_autoproxy = multiprocessing.managers.AutoProxy

# Define a new AutoProxy that handles the unwanted keyword argument 'manager_owned'
def redefined_autoproxy(token, serializer, manager=None, authkey=None,
                        exposed=None, incref=True, manager_owned=True):
    # Call the original AutoProxy without the unwanted keyword argument
    return backup_autoproxy(token, serializer, manager, authkey,
                            exposed, incref)

# Update the AutoProxy definition in the multiprocessing.managers package
multiprocessing.managers.AutoProxy = redefined_autoproxy

Found a temporary solution here.
I've managed to fix it by adding the needed keyword to the initializer of AutoProxy in multiprocessing\managers.py. Though, I don't know if this kwarg is responsible for anything.

Related

How to test operations in a context manager using pytest

I have a database handler that utilizes SQLAlchemy ORM to communicate with a database. As part of SQLAlchemy's recommended practices, I interact with the session by using it as a context manager. How can I test what a function that uses that context manager has done inside it?
EDIT: I realized the file structure mattered due to the complexity it introduced. I re-structured the code below to more closely mirror what the end file structure will be like, and what a common production repo in my environment would look like, with code being defined in one file and tests in a completely separate file.
For example:
Code File (delete_things_from_table.py):
from db_handler import delete, SomeTable

def delete_stuff(handler):
    stmt = delete(SomeTable)
    with handler.Session.begin() as session:
        session.execute(stmt)
        session.commit()
Test File:
import pytest
import delete_things_from_table as dlt
from db_handler import Handler

def test_delete_stuff():
    handler = Handler()
    dlt.delete_stuff(handler)
    # Test that session.execute was called
    # Test the value of 'stmt'
    # Test that session.commit was called
I am not looking for a solution specific to SQLAlchemy; I am only utilizing this to highlight what I want to test within a context manager, and any strategies for testing context managers are welcome.
After sleeping on it, I came up with a solution. I'd love additional/less complex solutions if there are any available, but this works:
import pytest
import delete_things_from_table as dlt
from db_handler import Handler

class MockSession:
    def __init__(self):
        self.execute_params = []
        self.commit_called = False

    def execute(self, *args, **kwargs):
        self.execute_params.append(["call", args, kwargs])
        return self

    def commit(self):
        self.commit_called = True
        return self

    def begin(self):
        return self

    def __enter__(self):
        return self

    def __exit__(self, type, value, traceback):
        pass

def test_delete_stuff(monkeypatch):
    handler = Handler()
    # Parens in 'MockSession()' below are important: pass an instance, not the class
    monkeypatch.setattr(handler, "Session", MockSession())
    dlt.delete_stuff(handler)
    # Test that session.execute was called
    assert len(handler.Session.execute_params)
    # Test the value of 'stmt'
    assert str(handler.Session.execute_params[0][1][0]) == "DELETE FROM some_table"
    # Test that session.commit was called
    assert handler.Session.commit_called
Some key things to note:
I created a static mock instead of a MagicMock as it's easier to control the methods/data flow with a custom mock class
Since the SQLAlchemy session context manager requires a begin() call to start the context, my mock class needed a begin() method. Returning self from begin() lets us test the values later.
Context managers rely on the magic methods __enter__ and __exit__ with the argument signatures you see above.
The mocked class contains mocked methods which alter instance variables, allowing us to test them later.
This relies on monkeypatch (there are other ways, I'm sure), but what's important to note is that when you patch in your mock class, you want to patch in an instance of the class and not the class itself. The parentheses make a world of difference.
I don't think it's an elegant solution, but it's working. I'll happily take any suggestions for improvement.
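For comparison, a sketch using unittest.mock.MagicMock (which supports the context-manager protocol out of the box) could avoid the hand-written mock class entirely:

from unittest import mock
import delete_things_from_table as dlt

def test_delete_stuff_magicmock():
    handler = mock.MagicMock()
    # the session is whatever __enter__ returns inside the 'with' block
    session = handler.Session.begin.return_value.__enter__.return_value
    dlt.delete_stuff(handler)
    session.execute.assert_called_once()
    session.commit.assert_called_once()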

Is there a way to sync a serializable structure with python multiprocessing?

If you create a new Process in python, it will serialize and copy the entire available scope, as far as I understand it. If you use multiprocessing.Pipe() it also allows sending various things, not just raw bytes.
However, instead of sending, I simply want to update a variable that contains a simple POD object like this:
class MyStats:
    def __init__(self):
        self.bytes_read = 0
        self.bytes_written = 0
So say that in a process, when I update these stats, I want to tell python to serialize the object and send it to the parent process' side somehow. I don't want to have to create a multiprocessing.Value for each and every one of these things; that sounds super tedious.
Is there a way to tell python to pass and overwrite a specific object property somehow?
A manager is what you need here: it will be slower, but all data stored inside will be automatically synced with other processes. Here is a simple example:
from multiprocessing.managers import BaseManager, NamespaceProxy, public_methods
from multiprocessing import Process

def make_proxy(name, cls, base=None):
    """
    Args:
        name : A string that should match the variable name the proxy will be assigned to
        cls : The class you want to create a proxy for
        base : If you are subclassing NamespaceProxy (or any other implementation) and want to use that subclass as the
               base for this new proxy, then pass the subclass as the base using this argument
    """
    exposed = public_methods(cls) + ['__getattribute__', '__setattr__', '__delattr__']
    return _MakeProxyType(name, exposed, base)

def _MakeProxyType(name, exposed, base=None):
    """
    Attempts to replicate multiprocessing.managers.MakeProxyType properly
    """
    if base is None:
        base = NamespaceProxy
    exposed = tuple(exposed)
    dic = {}
    for meth in exposed:
        if hasattr(base, meth):
            continue
        exec('''def %s(self, *args, **kwds):
    return self._callmethod(%r, args, kwds)''' % (meth, meth), dic)
    ProxyType = type(name, (base,), dic)
    ProxyType._exposed_ = exposed
    return ProxyType

class MyStats:
    def __init__(self):
        self.bytes_read = 0
        self.bytes_written = 0

def worker(my_stats):
    my_stats.bytes_read = 100
    print("Worker process read 100 bytes!")

# Remember to set the name of the variable and the "name" argument to the same value,
# otherwise you will have trouble pickling this. If for some reason you cannot do this,
# then you must change the variable's __qualname__ property to reflect where the object
# actually resides so pickle can find it.
MyStatsProxy = make_proxy('MyStatsProxy', MyStats)

if __name__ == "__main__":
    # Register our proxy and start the manager process
    BaseManager.register("MyStats", MyStats, MyStatsProxy)
    manager = BaseManager()
    manager.start()

    # Create our shared instance and modify it from another process
    my_stats = manager.MyStats()
    p = Process(target=worker, args=(my_stats,))
    p.start()
    p.join()

    # Check value from main process
    print(f"In main process, bytes read are {my_stats.bytes_read}!")
Output
Worker process read 100 bytes!
In main process, bytes read are 100!
Check this question and its answers for more useful information about managers/registering classes and alternate methods to achieve the same result
Note: Keep in mind that managers return pickled values for all objects you access through them. So any modifications to mutable objects should be done from within an instance method rather than by requesting the mutable object through the proxy and modifying it from outside. For example, the following will not modify the attribute some_list in the manager at all; only the local copy (local to the requesting process) of the attribute is modified:
my_stats.some_list[0] = "some value"
Instead, you should create an instance method for modifications and call that instead (a sketch of such a method is shown after this section):
my_stats.modify_list(0, "some value")
Alternatively, you can also force the manager to update the mutable object by re-assigning the new value for the object:
local_copy = my_stats.some_list
local_copy[0] = "some value"
my_stats.some_list = local_copy
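To make the instance-method approach concrete, here is a minimal sketch of what modify_list (the method name used in the call above; the body is an assumption) could look like on MyStats:

class MyStats:
    def __init__(self):
        self.bytes_read = 0
        self.bytes_written = 0
        self.some_list = ["old value"]

    def modify_list(self, index, value):
        # this runs inside the manager's server process, on the
        # canonical object, so the change is visible to all processes
        self.some_list[index] = value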

Add post behavior to python's assignment (=) operator

I'm working with a redis database and I'd like to integrate my prototype code into my baseline as seamlessly as possible, so I'm trying to bury most of the inner workings of the interface between the python client and the redis server code in a few base classes that I will subclass throughout my production code.
I'm wondering if the assignment operator = in python is a callable, and whether it is possible to modify the callable's pre and post behavior, particularly the post behavior, such that object.save() gets called and the redis cache is updated behind the scenes without having to call it explicitly.
For example,
# using the redis-om module
from redis_om import JsonModel
kwargs = {'attr1': 'someval1', 'attr2': 'someval2'}
jsonModel = JsonModel(**kwargs)
# as soon as assignment completes, redis database
# has the new value updated without needing to
# call jsonModel.save()
jsonModel.attr1 = 'newvalue'
You can achieve this with a proxy class through the __getattr__ and __setattr__ methods:
class ProxySaver:
    def __init__(self, model):
        # bypass our own __setattr__ so the model itself is stored locally
        self.__dict__['_model'] = model

    def __getattr__(self, attr):
        return getattr(self._model, attr)

    def __setattr__(self, attr, value):
        setattr(self._model, attr, value)
        self._model.save()

p = ProxySaver(jsonModel)
print(p.attr1)
p.attr1 = 'test'
But if attributes have complex types (list, dict, objects, ...), assignment to nested objects will not be detected and the save call will be skipped (p.attr1.append('test'), p.attr1[0] = 'test2').
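One possible workaround for lists (a sketch, assuming only item assignment and append are needed; SavingListView is a made-up helper name): return a small view object that forwards mutations to the real list and then triggers save():

class SavingListView:
    """Forwards mutations to the underlying list, then saves the model."""
    def __init__(self, real_list, save):
        self._list = real_list
        self._save = save

    def __getitem__(self, index):
        return self._list[index]

    def __setitem__(self, index, value):
        self._list[index] = value
        self._save()

    def append(self, value):
        self._list.append(value)
        self._save()

# in ProxySaver.__getattr__, wrap list attributes:
#     val = getattr(self._model, attr)
#     if isinstance(val, list):
#         return SavingListView(val, self._model.save)
#     return val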

Inherit class Worker on Odoo15

In one of my Odoo installations I need to set the socket_timeout variable of the WorkerHTTP class directly from Python code, bypassing the usage of the environment variable ODOO_HTTP_SOCKET_TIMEOUT.
If you never read about it, you can check here for more info: https://github.com/odoo/odoo/commit/49e3fd102f11408df00f2c3f6360f52143911d74#diff-b4207a4658979fdb11f2f2fa0277f483b4e81ba59ed67a5e84ee260d5837ef6d
In Odoo15, which i'm using, Worker classes are located at odoo/service/server.py
My idea was to inherit the constructor of the Worker class and simply set self.sock_timeout = 10 (or another value), but I couldn't make it work with inheritance.
EDIT: I almost managed it to work, but I have problems with static methods.
STEP 1:
Inherit the WorkerHTTP constructor and add self.sock_timeout = 10
Then, I also have to inherit PreforkServer and override process_spawn() method so I can pass WorkerHttpExtend instead of WorkerHTTP, as argument for worker_spawn() method.
class WorkerHttpExtend(WorkerHTTP):
    """Set up the sock_timeout class variable when a WorkerHTTP object gets initialized"""
    def __init__(self, multi):
        super(WorkerHttpExtend, self).__init__(multi)
        self.sock_timeout = 10
        logging.info(f'SOCKET TIMEOUT: {self.sock_timeout}')

class PreforkServerExtend(PreforkServer):
    """I have to inherit PreforkServer and override the process_spawn()
    method so I can pass WorkerHttpExtend
    instead of WorkerHTTP, as argument for the worker_spawn() method.
    """
    def process_spawn(self):
        if config['http_enable']:
            while len(self.workers_http) < self.population:
                self.worker_spawn(WorkerHttpExtend, self.workers_http)
            if not self.long_polling_pid:
                self.long_polling_spawn()
        while len(self.workers_cron) < config['max_cron_threads']:
            self.worker_spawn(WorkerCron, self.workers_cron)
STEP 2:
The static method start() should initialize PreforkServer with PreforkServerExtend, not with PreforkServer (last line in the code below). This is where I started to have problems.
def start(preload=None, stop=False):
    """Start the odoo http server and cron processor."""
    global server
    load_server_wide_modules()
    if odoo.evented:
        server = GeventServer(odoo.service.wsgi_server.application)
    elif config['workers']:
        if config['test_enable'] or config['test_file']:
            _logger.warning("Unit testing in workers mode could fail; use --workers 0.")
        server = PreforkServer(odoo.service.wsgi_server.application)
STEP 3:
At this point, if I wanted to go further (which I did), I should copy the whole start() method and import all the packages needed to make it work:
import odoo
from odoo.service.server import WorkerHTTP, WorkerCron, PreforkServer, load_server_wide_modules, \
GeventServer, _logger, ThreadedServer, inotify, FSWatcherInotify, watchdog, FSWatcherWatchdog, _reexec
from odoo.tools import config
I did it and then in my custom start() method I wrote line
server = PreforkServerExtend(odoo.service.wsgi_server.application)
but even then, how do I tell Odoo to execute my start() method instead of the original one?
I'm sure this would eventually work (maybe not safely, but it would work), because at some point, not being 100% sure of what I was doing, I put my inherited classes WorkerHttpExtend and PreforkServerExtend directly into the original odoo/service/server.py and initialized the server obj with PreforkServerExtend instead of PreforkServer.
server = PreforkServer(odoo.service.wsgi_server.application)
It worked then: I got the custom socket timeout value, plus print and logging info when the Odoo service started, because PreforkServerExtend calls the custom classes in cascade at that point; otherwise my inherited classes are there but never get called.
So I guess if I could tell the system to run my start() method, that would do it.
STEP 4 (not reached yet):
I'm pretty sure that start() method is called in odoo/cli/server.py, in main() method:
rc = odoo.service.server.start(preload=preload, stop=stop)
I could go deeper but I don't think the effort is worth for what I need.
So technically, if I were able to tell the system which start() method to choose, I would have done it. I'm still not sure it is a safe procedure (probably not very, actually, but at this point I was just experimenting), but I wonder if there is an easier way to set up the socket timeout without using the environment variable ODOO_HTTP_SOCKET_TIMEOUT.
I'm pretty sure there is an easier method than the one I'm using, with low-level python or maybe even with a class in odoo/service/server, but I can't figure it out for now. If someone has an idea, let me know!
Working solution: I was introduced to monkeypatching in this post:
Possible for a class to look down at subclass constructor?
This solved my problem; now I'm able to patch the process_request method of the WorkerHTTP class:
import errno
import fcntl
import socket

import odoo
import odoo.service.server as srv

class WorkerHttpProcessRequestPatch(srv.WorkerHTTP):
    def process_request(self, client, addr):
        client.setblocking(1)
        # client.settimeout(self.sock_timeout)
        client.settimeout(10)  # patching the timeout setup to the needed value
        client.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
        flags = fcntl.fcntl(client, fcntl.F_GETFD) | fcntl.FD_CLOEXEC
        fcntl.fcntl(client, fcntl.F_SETFD, flags)
        self.server.socket = client
        try:
            self.server.process_request(client, addr)
        except IOError as e:
            if e.errno != errno.EPIPE:
                raise
        self.request_count += 1

# Switch the process_request class attribute - this is what I needed to make it work
odoo.service.server.WorkerHTTP.process_request = WorkerHttpProcessRequestPatch.process_request
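Note that for the patch to take effect, the module containing it only needs to be imported once before any requests are served; for example (a hypothetical placement, module names are illustrative):

# hypothetical __init__.py of a module loaded at server startup:
# importing worker_http_patch runs the assignment at the bottom of the
# patch file, replacing WorkerHTTP.process_request for all workers
from . import worker_http_patch  # noqa: F401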

Passing extra metadata to a RequestHandler using python's SocketServer and Children

I'm implementing a python application which uses ThreadingTCPServer and a custom subclass of BaseRequestHandler. The problem is that ThreadingTCPServer seems to automatically spawn threads and create instances of the handler, calling their handle() function. This leaves me with no way to pass data to the handler other than global variables or class variables, both of which seem hackish. Is there a better way to do it?
Ideally this should be something like:
class ThreadedTCPServer(ThreadingTCPServer):
    def process_request(self, *args, **kwargs):
        ThreadingTCPServer.process_request(self, data, *args, **kwargs)
with the handler like
class ThreadedTCPRequestHandler(BaseRequestHandler):
    def handle(self, data):
        # do something with data
I stumbled upon the very same thing; my solution was the following:
class ThreadedTCPRequestHandler(SocketServer.StreamRequestHandler):
    def handle(self):
        print(self.server.mycustomdata)

class ThreadedTCPServer(SocketServer.ThreadingTCPServer):
    pass

server = ThreadedTCPServer((args.host, args.port), ThreadedTCPRequestHandler)
server.mycustomdata = 'foo.bar.z'
server.serve_forever()
The RequestHandler is called with the server object as its third parameter, which is saved as the self.server attribute, so you can access it. If you set this attribute to a callable, you can easily call it, too:
def handle(self):
    mycustomdata = self.server.mycustomdata()
The first answer worked for me, but I think it is cleaner to alter the __init__ method and pass the attribute in the constructor:
class ThreadedTCPServer(socketserver.ThreadingMixIn, socketserver.TCPServer):
    def __init__(self, host_port_tuple, streamhandler, Controllers):
        super().__init__(host_port_tuple, streamhandler)
        self.Controllers = Controllers
Note the third parameter 'Controllers' in the constructor, then the call to super() without that parameter, then the assignment of the new attribute to self.Controllers. The rest of the class is unchanged. Then, in your RequestHandler, you get access to the parameter through the server attribute, as described above:
def handle(self):
    self.Controllers = self.server.Controllers
    # <rest of your code>
It's much the same as the answer above but I find it a little cleaner because the constructor is overloaded and you simply add the attribute you want in the constructor:
server = ServerInterface.ThreadedTCPServer((HOST, PORT), ServerInterface.ThreadedTCPRequestHandler, Controllers)
Since handle is implemented by your BaseRequestHandler subclass, it can get the data from itself without having it passed by the caller. (handle could also be a callable attribute of the request instance, such as a lambda; explicit user_data arguments are normally unnecessary in idiomatically designed python.)
Looking at the SocketServer code, it should be straightforward to override finish_request to pass the additional data to your BaseRequestHandler subtype's constructor, which would store it in the instance for handle to use.
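A minimal sketch of that finish_request approach (names like user_data, DataTCPServer and DataRequestHandler are illustrative, not from the original answer):

import socketserver

class DataRequestHandler(socketserver.BaseRequestHandler):
    def __init__(self, request, client_address, server, user_data):
        # set the extra state *before* calling the base constructor,
        # because BaseRequestHandler.__init__ invokes handle() itself
        self.user_data = user_data
        super().__init__(request, client_address, server)

    def handle(self):
        print(self.user_data)

class DataTCPServer(socketserver.ThreadingTCPServer):
    def __init__(self, addr, handler_cls, user_data):
        self.user_data = user_data
        super().__init__(addr, handler_cls)

    def finish_request(self, request, client_address):
        # same as the stock implementation, plus one extra argument
        self.RequestHandlerClass(request, client_address, self, self.user_data)

# server = DataTCPServer((HOST, PORT), DataRequestHandler, {"mykey": "myvalue"})
# server.serve_forever()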
