I have a simple 'echo' PB client and server where the client sends an object to the server, which echoes the same object back to the client.
The client:
from twisted.spread import pb
from twisted.internet import reactor
from twisted.python import util
from amodule import aClass
factory = pb.PBClientFactory()
reactor.connectTCP("localhost", 8282, factory)
d = factory.getRootObject()
d.addCallback(lambda object: object.callRemote("echo", aClass()))
d.addCallback(lambda response: 'server echoed: '+response)
d.addErrback(lambda reason: 'error: '+str(reason.value))
d.addCallback(util.println)
d.addCallback(lambda _: reactor.stop())
reactor.run()
The server:
from twisted.application import internet, service
from twisted.internet import protocol
from twisted.spread import pb
from amodule import aClass
class RemoteClass(pb.RemoteCopy, aClass):
    pass
pb.setUnjellyableForClass(aClass, RemoteClass)
class PBServer(pb.Root):
    def remote_echo(self, a):
        return a
application = service.Application("Test app")
# Prepare managers
clientManager = internet.TCPServer(8282, pb.PBServerFactory(PBServer()))
clientManager.setServiceParent(application)
if __name__ == '__main__':
    print("Run with twistd")
    import sys
    sys.exit(1)
aClass is a simple class implementing Copyable:
from twisted.spread import pb
class aClass(pb.Copyable):
    pass
When I run the above code, I get this error:
twisted.spread.jelly.InsecureJelly: Module builtin not allowed (in type builtin.RemoteClass).
In fact, the object is sent to the server without any problem, since it was secured with pb.setUnjellyableForClass(aClass, RemoteClass) on the server side; but once it gets returned to the client, that error is raised.
I'm looking for an easy way to send and receive my objects between the two peers.
Perspective Broker identifies classes by name when talking about them over the network. A class gets its name in part from the module in which it is defined. A tricky problem with defining classes in a file that you run from the command line (i.e., your "main script") is that they may end up with a surprising name. When you do this:
python foo.py
The module name Python gives to the code in foo.py is not "foo", as one might expect. Instead it is "__main__" (which is why the if __name__ == "__main__": trick works).
However, if some other part of your application later tries to import something from foo.py, then Python re-evaluates its contents to create a new module named "foo".
Additionally, the classes defined in the "__main__" module of one process may have nothing to do with the classes defined in the "__main__" module of another process. This is the case in your example, where __main__.RemoteClass is defined in your server process but there is no RemoteClass in the __main__ module of your client process.
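You can see this naming behaviour with a tiny experiment (a minimal sketch; foo.py is the hypothetical file from above):
# foo.py
print(__name__)
Running python foo.py prints __main__, while python -c "import foo" prints foo: the same file, two different module names, and two different module objects if both happen in one process.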
So, PB gets mixed up and can't complete the object transfer.
The solution is to keep the amount of code in your main script to a minimum, and in particular to never define things with names there (no classes, no function definitions).
However, another problem is the expectation that a RemoteCopy can be sent over PB without additional preparation. A Copyable can be sent, creating a RemoteCopy on the peer, but this is not a symmetric relationship. Your client also needs to allow this by making a similar (or different) pb.setUnjellyableForClass call.
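Putting both pieces of advice together, a minimal sketch (module and class names are illustrative, not from the original code) is to move the classes into a shared module that both peers import, and do the registration there:
# shared.py -- imported by both the client and the server main scripts
from twisted.spread import pb

class aClass(pb.Copyable):
    pass

class RemoteClass(pb.RemoteCopy, aClass):
    pass

# Allow unjellying on whichever peer imports this module. The second
# registration covers the echo case, where the server sends back the
# RemoteClass instance it received, jellied under RemoteClass's own name.
pb.setUnjellyableForClass(aClass, RemoteClass)
pb.setUnjellyableForClass(RemoteClass, RemoteClass)
Both main scripts then use from shared import aClass, so neither class is ever defined in __main__.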
Related
In one of my Odoo installations I need to set the socket_timeout variable of the WorkerHTTP class directly from Python code, bypassing the environment variable ODOO_HTTP_SOCKET_TIMEOUT.
If you have never read about it, you can check here for more info: https://github.com/odoo/odoo/commit/49e3fd102f11408df00f2c3f6360f52143911d74#diff-b4207a4658979fdb11f2f2fa0277f483b4e81ba59ed67a5e84ee260d5837ef6d
In Odoo 15, which I'm using, the Worker classes are located in odoo/service/server.py.
My idea was to inherit the WorkerHTTP constructor and simply set self.sock_timeout = 10 (or another value), but I couldn't make it work with inheritance alone.
EDIT: I almost managed to make it work, but I have problems with the module-level start() function.
STEP 1:
Inherit the WorkerHTTP constructor and add self.sock_timeout = 10.
Then I also have to inherit PreforkServer and override the process_spawn() method, so I can pass WorkerHttpExtend instead of WorkerHTTP as the argument to the worker_spawn() method.
class WorkerHttpExtend(WorkerHTTP):
    """Set up the sock_timeout instance attribute when a WorkerHTTP object gets initialized."""

    def __init__(self, multi):
        super(WorkerHttpExtend, self).__init__(multi)
        self.sock_timeout = 10
        logging.info(f'SOCKET TIMEOUT: {self.sock_timeout}')


class PreforkServerExtend(PreforkServer):
    """I have to inherit PreforkServer and override the process_spawn()
    method so I can pass WorkerHttpExtend instead of WorkerHTTP as the
    argument to the worker_spawn() method.
    """

    def process_spawn(self):
        if config['http_enable']:
            while len(self.workers_http) < self.population:
                self.worker_spawn(WorkerHttpExtend, self.workers_http)
            if not self.long_polling_pid:
                self.long_polling_spawn()
        while len(self.workers_cron) < config['max_cron_threads']:
            self.worker_spawn(WorkerCron, self.workers_cron)
STEP 2:
The module-level start() function should initialize the server with PreforkServerExtend, not PreforkServer (last line in the code below). This is where I start to have problems.
def start(preload=None, stop=False):
    """Start the odoo http server and cron processor."""
    global server
    load_server_wide_modules()
    if odoo.evented:
        server = GeventServer(odoo.service.wsgi_server.application)
    elif config['workers']:
        if config['test_enable'] or config['test_file']:
            _logger.warning("Unit testing in workers mode could fail; use --workers 0.")
        server = PreforkServer(odoo.service.wsgi_server.application)
STEP 3:
At this point, if I want to go further (which I did), I have to copy the whole start() method and import all the packages it needs to make it work:
import odoo
from odoo.service.server import WorkerHTTP, WorkerCron, PreforkServer, load_server_wide_modules, \
GeventServer, _logger, ThreadedServer, inotify, FSWatcherInotify, watchdog, FSWatcherWatchdog, _reexec
from odoo.tools import config
I did that, and then in my custom start() method I wrote the line
server = PreforkServerExtend(odoo.service.wsgi_server.application)
but even then, how do I tell Odoo to execute my start() method instead of the original one?
I'm sure this would eventually work (maybe not safely, but it would work), because at some point, not being 100% sure of what I was doing, I put my inherited classes WorkerHttpExtend and PreforkServerExtend directly into the original odoo/service/server.py and initialized the server object with PreforkServerExtend instead of PreforkServer:
server = PreforkServerExtend(odoo.service.wsgi_server.application)
It works then: I get the custom socket timeout value, plus the print and logging info when the Odoo service starts, because from that point PreforkServerExtend calls my custom class in cascade. Otherwise my inherited classes are there, but they are never called.
So I guess that if I could tell the system to run my start() method, I would be done.
STEP 4 (not reached yet):
I'm pretty sure that the start() method is called in odoo/cli/server.py, in the main() method:
rc = odoo.service.server.start(preload=preload, stop=stop)
I could go deeper, but I don't think the effort is worth it for what I need.
So technically, if I were able to tell the system which start() method to choose, I would have done it. I'm still not sure it is a safe procedure (probably not, actually, but at this point I was just experimenting), but I wonder if there is an easier way to set up the socket timeout without using the environment variable ODOO_HTTP_SOCKET_TIMEOUT.
I'm pretty sure there is an easier method than the one I'm using, with low-level Python or maybe even with a class in odoo/service/server.py, but I can't figure it out for now. If someone has an idea, let me know!
Working solution: I was introduced to monkey patching in this post:
Possible for a class to look down at subclass constructor?
This solved my problem; I'm now able to patch the process_request method of the WorkerHTTP class:
import errno
import fcntl
import socket

import odoo
import odoo.service.server as srv


class WorkerHttpProcessRequestPatch(srv.WorkerHTTP):

    def process_request(self, client, addr):
        client.setblocking(1)
        # client.settimeout(self.sock_timeout)
        client.settimeout(10)  # patching the timeout setup to the needed value
        client.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
        flags = fcntl.fcntl(client, fcntl.F_GETFD) | fcntl.FD_CLOEXEC
        fcntl.fcntl(client, fcntl.F_SETFD, flags)
        self.server.socket = client
        try:
            self.server.process_request(client, addr)
        except IOError as e:
            if e.errno != errno.EPIPE:
                raise
        self.request_count += 1


# Switch the process_request class attribute - this is what I needed to make it work
odoo.service.server.WorkerHTTP.process_request = WorkerHttpProcessRequestPatch.process_request
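A slightly smaller variant of the same idea (a sketch, assuming process_request still reads self.sock_timeout as in the commented-out line above) is to wrap WorkerHTTP.__init__ instead of copying the whole method body, so the patch is less likely to break when upstream changes process_request:
import odoo.service.server as srv

_orig_init = srv.WorkerHTTP.__init__

def _patched_init(self, multi):
    _orig_init(self, multi)
    self.sock_timeout = 10  # override the value derived from ODOO_HTTP_SOCKET_TIMEOUT

srv.WorkerHTTP.__init__ = _patched_init
Either way, the module containing the patch has to be imported before the workers are spawned, for example from an addon loaded by load_server_wide_modules() at the top of start().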
This is a project template so I can't change much...
I will omit what I believe are irrelevant parts:
#file server.py
import functions
import json
import socket
funcs = {}
class JSONRPCServer:
    """The JSON-RPC server."""

    def __init__(self, host, port):
        self.host = host
        self.port = port
        self.sock = None

    def register(self, name, function):
        """Registers a function."""
        funcs[name] = function

    (...)
if __name__ == "__main__":
    # Test the JSONRPCServer class
    server = JSONRPCServer('0.0.0.0', 8000)

    # Register functions
    server.register('hello', functions.hello)
    server.register('greet', functions.greet)
    server.register('add', functions.add)
    server.register('sub', functions.sub)
    server.register('mul', functions.mul)
    server.register('div', functions.div)
    print(funcs)

    # Start the server
    server.start()
Running this prints all my functions inside the funcs dict.
I have another file that needs the contents of funcs but for testing I have this:
#file test.py
from server import funcs
print(funcs)
This prints an empty dictionary. How do I make it so that funcs keeps its values across these two files?
When you run the test.py file, anything inside the
if __name__ == "__main__":
block of server.py isn't run, since server.py isn't the main file (when you run server.py directly, it DOES enter the if __main__ block and therefore fills up the funcs dict). So none of those server.register calls happen when you run test.py, and hence your funcs dict is empty.
Maybe put that piece of code with all the register calls in a different function, and call that directly?
When you run server.py directly, it prints the populated funcs because __name__ == "__main__" evaluates to true. This doesn't happen when server.py is imported by another module, though.
Also, as good practice, you should add a function to server.py that fetches funcs instead of relying on the global.
You can also refactor the logic inside the name == main block into a function (e.g. start_server) so that any module can start the server by just calling that function, as sketched below.
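A minimal sketch of that refactor (the names register_functions and start_server are suggestions, not part of the template):
# file server.py (replacing the __main__ block)
def register_functions(server):
    """Register the standard functions on the given server."""
    server.register('hello', functions.hello)
    server.register('greet', functions.greet)
    server.register('add', functions.add)
    server.register('sub', functions.sub)
    server.register('mul', functions.mul)
    server.register('div', functions.div)

def start_server(host='0.0.0.0', port=8000):
    server = JSONRPCServer(host, port)
    register_functions(server)
    server.start()

if __name__ == "__main__":
    start_server()
test.py can then fill the registry without starting the server:
# file test.py
from server import JSONRPCServer, register_functions, funcs

register_functions(JSONRPCServer('0.0.0.0', 8000))
print(funcs)  # now populated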
I have a server that contains a class which performs an expensive computation
during its initialization. I want to initialize this class once, inside the main() method of the server module, before starting the server. Then, I want other modules that import the server module to be able to retrieve the instance of this class.
Example (the sleep emulates the server running)
import time

# I want to store the shared instance in this global variable
shared_instance = None

class Shared:
    def __init__(self):
        # Expensive computation that I only want to run once
        pass

def main():
    global shared_instance
    shared_instance = Shared()  # Now shared_instance is not None anymore
    print(shared_instance)
    print("Starting server...")
    time.sleep(1000)

if __name__ == '__main__':
    main()
When I run this server it prints:
<__main__.Shared object at 0x000001865A3C4320>
Starting server...
Now I have another module that should be able to see the instance:
import server
print(server.shared_instance)
However, shared_instance is not '<__main__.Shared object at 0x000001865A3C4320>' as expected; it is None. Could you please tell me what I'm doing wrong and how I can solve this issue and achieve this functionality?
Many thanks
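This is the same __main__ naming issue described in the first answer above: running python server.py executes the file as a module named __main__, while a later import server executes it a second time as a separate module named server, whose shared_instance is still None (and module globals are never shared across separate processes at all). A minimal sketch of the usual fix, assuming the importing modules run in the same process, is to keep the entry point in a separate launcher script so the server module only ever exists under one name:
# file main.py -- a hypothetical launcher
import server

if __name__ == '__main__':
    server.main()
Every module that does import server afterwards then sees the same module object, and therefore the shared_instance that main() assigned.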
I have a Python project that relies on a particular module, receivers.py, being imported.
I want to write a test to make sure it is imported, but I also want to write other tests for the behaviour of the code within the module.
The trouble is that if I have any tests anywhere in my test suite that import or patch anything from receivers.py, then it will automatically import the module, potentially making the import test pass wrongly.
Any ideas?
(Note: specifically this is a Django project.)
One (somewhat imperfect) way of doing it is to use the following TestCase:
from django.test import TestCase


class ReceiverConnectionTestCase(TestCase):
    """TestCase that allows asserting that a given receiver is connected
    to a signal.

    Important: this will work correctly providing you:

    1. Do not import or patch anything in the module containing the receiver
       in any django.test.TestCase.
    2. Do not import (except in the context of a method) the module
       containing the receiver in any test module.

    This is because as soon as you import/patch, the receiver will be connected
    by your test and will be connected for the entire test suite run.

    If you want to test the behaviour of the receiver, you may do this
    providing it is a unittest.TestCase, and there is no import from the
    receiver module in that test module.

    Usage:

        # myapp/receivers.py
        from django.dispatch import receiver
        from apples.signals import apple_eaten
        from apples.models import Apple

        @receiver(apple_eaten, sender=Apple)
        def my_receiver(sender, **kwargs):
            pass

        # tests/integration_tests.py
        from apples.signals import apple_eaten
        from apples.models import Apple

        class TestMyReceiverConnection(ReceiverConnectionTestCase):
            def test_connection(self):
                self.assert_receiver_is_connected(
                    'myapp.receivers.my_receiver',
                    signal=apple_eaten, sender=Apple)
    """

    def assert_receiver_is_connected(self, receiver_string, signal, sender):
        receivers = signal._live_receivers(sender)
        receiver_strings = [
            "{}.{}".format(r.__module__, r.__name__) for r in receivers]
        if receiver_string not in receiver_strings:
            raise AssertionError(
                '{} is not connected to signal.'.format(receiver_string))
This works because Django runs django.test.TestCases before unittest.TestCases.
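One caveat: _live_receivers is a private Django API, so its signature and return value may change between releases; it's worth re-checking this helper against the Django version you actually run.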
I am trying to load a module according to some settings. I have found a working solution, but I need confirmation from an advanced Python developer that this solution is the best one performance-wise, as the API endpoint that will use it will be under heavy load.
The idea is to change the behaviour of an endpoint based on parameters from the user and other system configuration. I am loading the correct handler class based on these settings. The goal is to be able to easily create new handlers without having to modify the code that calls them.
This is a working example:
./run.py:
from flask import Flask, abort
import importlib
import handlers

app = Flask(__name__)

@app.route('/')
def api_endpoint():
    try:
        endpoint = "simple"  # Custom logic to choose the right handler
        handlerClass = getattr(importlib.import_module('.' + str(endpoint), 'handlers'), 'Handler')
        handler = handlerClass()
    except Exception as e:
        print(e)
        abort(404)
    print(handlerClass, handler, handler.value, handler.name())
    # Handler processing. Not yet implemented
    return "Hello World"

if __name__ == "__main__":
    app.run(host='0.0.0.0', port=8080, debug=True)
One "simple" handler example. A handler is a module which needs to define an Handler class :
./handlers/simple.py :
import os

class Handler:
    def __init__(self):
        self.value = os.urandom(5)

    def name(self):
        return "simple"
If I understand correctly, the import is done on each query to the endpoint, which means filesystem I/O to look the modules up, etc.
Is this the correct/"pythonic" way to implement this strategy?
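One point worth knowing here: Python caches imported modules in sys.modules, so after the first request importlib.import_module() is essentially a dictionary lookup rather than filesystem I/O. If you want the hot path to be explicit anyway, you can memoize the handler classes yourself (a sketch; get_handler_class and the cache dict are illustrative, not part of the original code):
import importlib

_handler_cache = {}

def get_handler_class(endpoint):
    """Return the Handler class for an endpoint, importing its module on first use."""
    cls = _handler_cache.get(endpoint)
    if cls is None:
        module = importlib.import_module('.' + endpoint, 'handlers')
        cls = module.Handler
        _handler_cache[endpoint] = cls
    return cls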
The question has been moved to Code Review; thanks all for your help: https://codereview.stackexchange.com/questions/96533/extension-pattern-in-a-flask-controller-using-importlib
I am closing this thread.