This is a project template so I can't change much...
I will omit what I believe are irrelevant parts:
#file server.py
import functions
import json
import socket

funcs = {}

class JSONRPCServer:
    """The JSON-RPC server."""

    def __init__(self, host, port):
        self.host = host
        self.port = port
        self.sock = None

    def register(self, name, function):
        """Registers a function."""
        funcs[name] = function

    (...)

if __name__ == "__main__":
    # Test the JSONRPCServer class
    server = JSONRPCServer('0.0.0.0', 8000)

    # Register functions
    server.register('hello', functions.hello)
    server.register('greet', functions.greet)
    server.register('add', functions.add)
    server.register('sub', functions.sub)
    server.register('mul', functions.mul)
    server.register('div', functions.div)

    print(funcs)

    # Start the server
    server.start()
Running server.py directly prints all my functions inside the funcs dict.
I have another file that needs the contents of funcs but for testing I have this:
#file test.py
from server import funcs
print(funcs)
This prints an empty dictionary. How do I make it so that funcs keeps its values across these two files?
When you run the test.py file, anything within the
if __name__ == "__main__":
block of server.py isn't being run, since server.py isn't the main file (when you run server.py directly, it IS entering that block and therefore filling up the funcs dict). Therefore, none of those server.register calls run when you execute test.py, and hence your funcs dict is empty.
Maybe put that piece of code with all the register calls in a different function, and call that directly?
When you directly run server.py, it prints a populated funcs because __name__ == "__main__" evaluates to true. This doesn't happen when server.py is imported by another module, though.
Also, as good practice you should add a function to server.py that returns funcs instead of relying on the global.
Also, you can refactor the logic inside the if __name__ == "__main__" block into a function (e.g. start_server) so that any module can start the server just by calling that function. A sketch of this is shown below.
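A minimal sketch of that refactor, assuming the rest of server.py stays as posted (create_server and get_funcs are made-up names, not part of the template):

# file server.py (sketch)
def create_server(host='0.0.0.0', port=8000):
    """Build a JSONRPCServer with every function registered."""
    server = JSONRPCServer(host, port)
    server.register('hello', functions.hello)
    server.register('greet', functions.greet)
    server.register('add', functions.add)
    server.register('sub', functions.sub)
    server.register('mul', functions.mul)
    server.register('div', functions.div)
    return server

def get_funcs():
    """Accessor so other modules don't have to rely on the global directly."""
    return funcs

if __name__ == "__main__":
    create_server().start()

test.py can then populate and read the registry without starting the server loop:

# file test.py (sketch)
import server

server.create_server()      # runs the register calls, filling funcs
print(server.get_funcs())   # no longer empty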
Related
So say there are two running codes: script1 and script2.
I want script2 to be able to run a function in script1.
script1 will be some kind of background process that will run "forever".
The point is to be able to make an API for a background process, E.G. a server.
The unclean way to do it would be to have a file transmit the orders from script2. script1 would then execute it with exec(). However, I would like to use a module or something cleaner because then I would be able to output classes and not only text.
EDIT: example:
script1:
def dosomething(args):
    # do something
    return information

while True:
    # Do something in a loop
script2:
# "import" the background process
print(backgroundprocess.dosomething(["hello", (1, 2, 3)]))
The execution would look like this:
Run script1
Run script2 in a parallel window
Summary
The XMLRPC modules are designed for this purpose.
The docs include a worked out example for a server (script1) and a client (script2).
Server Example
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.server import SimpleXMLRPCRequestHandler

class RequestHandler(SimpleXMLRPCRequestHandler):
    rpc_paths = ('/RPC2',)

# Create server
with SimpleXMLRPCServer(('localhost', 8000),
                        requestHandler=RequestHandler) as server:
    server.register_introspection_functions()

    # Register pow() function; this will use the value of
    # pow.__name__ as the name, which is just 'pow'.
    server.register_function(pow)

    # Register a function under a different name
    def adder_function(x, y):
        return x + y
    server.register_function(adder_function, 'add')

    # Register an instance; all the methods of the instance are
    # published as XML-RPC methods (in this case, just 'mul').
    class MyFuncs:
        def mul(self, x, y):
            return x * y

    server.register_instance(MyFuncs())

    # Run the server's main loop
    server.serve_forever()
Client Example
import xmlrpc.client
s = xmlrpc.client.ServerProxy('http://localhost:8000')
print(s.pow(2,3)) # Returns 2**3 = 8
print(s.add(2,3)) # Returns 5
print(s.mul(5,2)) # Returns 5*2 = 10
# Print list of available methods
print(s.system.listMethods())
I would like to use flask to run some functions. Assume you have a file called myapp.py with a function run
def run():
    return 'special routed hello world'
and this main flask file, something like this
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello_world():
    return 'hello world'

@app.route('/<myapp>')
def open_app(myapp):
    from myapp import run
    return run()
So obviously that doesn't work, but how can I dynamically call these run functions when Flask is already running? Is this even possible?
In other words: when someone opens, for example, .../foobar, the function open_app is called with parameter foobar. In that function, import the function run from the file foobar.py (let's assume that file exists), run it, and return the result of that function.
In fact, it is possible to use importlib, especially import_module in combination with getattr, to dynamically call functions of a module. I do have security concerns about this, though.
The following two examples show a kind of simple RPC implementation.
The first example uses a dictionary of modules. If a module with the given name is available, its run function is called. This allows a strong restriction on what can be invoked. Optimization is certainly possible and probably necessary.
The second example shows a way of calling different functions within different modules, with parameters. As in the previous version, all modules live in one package called "actions" to ensure that the callable code can be limited. I also think a variant using POST is more suitable for this purpose than using variable rules.
Remember these are strong simplifications. Protocols such as JSON-RPC will certainly help as a guide during implementation.
from flask import Flask
from flask import jsonify, request
from importlib import import_module

from actions import *

app = Flask(__name__)
app.secret_key = 'your secret here'

@app.route('/exec/<string:action>', methods=['POST'])
def exec(action):
    result = cmddict[action].run()
    return jsonify(result=result)

@app.route('/call', methods=['POST'])
def call():
    data = request.get_json()
    module = data.get('module')
    method = data.get('method')
    params = data.get('params')
    try:
        # import module by name
        m = import_module(f'actions.{module}', __name__)
        # get function by name
        f = getattr(m, method)
        # call function with params
        result = f(**params) if isinstance(params, dict) else f(*params)
        return jsonify(result=result, error=None)
    except Exception as err:
        return jsonify(result=None, error=f'{err}')
# ./actions/__init__.py
__all__ = ['demo']

from importlib import import_module

cmddict = {}
for _ in __all__:
    cmddict[_] = import_module(f'actions.{_}', __name__)
__all__.append('cmddict')
# ./actions/demo.py
def run():
    return 'hello world'

def func(*args, **kwargs):
    print('func', args, kwargs)
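For illustration, a minimal way to exercise the /call endpoint from another script; this assumes the Flask app is running on its default http://localhost:5000 and that the requests package is installed (both are assumptions, not part of the answer above):

# client_sketch.py (hypothetical caller for the /call endpoint)
import requests

payload = {'module': 'demo', 'method': 'func',
           'params': {'name': 'world'}}   # params may also be a list
resp = requests.post('http://localhost:5000/call', json=payload)
print(resp.json())   # demo.func only prints, so this shows {'error': None, 'result': None}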
I have a server that contains a class which performs an expensive computation
during its initialization. I want to initialize this class once, inside the main() method of the server module, before starting the server. Then, I want other modules that import the server module to be able to retrieve the instance of this class.
Example (the sleep emulates the server running)
import time

# I want to store the shared instance in this global variable
shared_instance = None

class Shared:
    def __init__(self):
        # Expensive computation that I only want to run once
        pass

def main():
    global shared_instance
    shared_instance = Shared()  # Now shared_instance is not None anymore
    print(shared_instance)
    print("Starting server...")
    time.sleep(1000)

if __name__ == '__main__':
    main()
When I run this server it prints:
<__main__.Shared object at 0x000001865A3C4320>
Starting server...
Now I have another module that should be able to see the instance:
import server
print(server.shared_instance)
However, shared_instance is not '<__main__.Shared object at 0x000001865A3C4320>' as expected. It is None. Could you please tell me what I'm doing wrong and how I can solve this issue and achieve this functionality?
Many thanks
Say I have two files:
# spam.py
import library_Python3_only as l3
def spam(x, y):
    return l3.bar(x).baz(y)
and
# beans.py
import library_Python2_only as l2
...
Now suppose I wish to call spam from within beans. It's not directly possible since both files depend on incompatible Python versions. Of course I can Popen a different python process, but how could I pass in the arguments and retrieve the results without too much stream-parsing pain?
Here is a complete example implementation using subprocess and pickle that I actually tested. Note that you need to use protocol version 2 explicitly for pickling on the Python 3 side (at least for the combo Python 3.5.2 and Python 2.7.3).
# py3bridge.py
import sys
import pickle
import importlib
import io
import traceback
import subprocess

class Py3Wrapper(object):
    def __init__(self, mod_name, func_name):
        self.mod_name = mod_name
        self.func_name = func_name

    def __call__(self, *args, **kwargs):
        p = subprocess.Popen(['python3', '-m', 'py3bridge',
                              self.mod_name, self.func_name],
                             stdin=subprocess.PIPE,
                             stdout=subprocess.PIPE)
        stdout, _ = p.communicate(pickle.dumps((args, kwargs)))
        data = pickle.loads(stdout)
        if data['success']:
            return data['result']
        else:
            raise Exception(data['stacktrace'])

def main():
    try:
        target_module = sys.argv[1]
        target_function = sys.argv[2]
        args, kwargs = pickle.load(sys.stdin.buffer)
        mod = importlib.import_module(target_module)
        func = getattr(mod, target_function)
        result = func(*args, **kwargs)
        data = dict(success=True, result=result)
    except Exception:
        st = io.StringIO()
        traceback.print_exc(file=st)
        data = dict(success=False, stacktrace=st.getvalue())
    pickle.dump(data, sys.stdout.buffer, 2)

if __name__ == '__main__':
    main()
The Python 3 module (using the pathlib module for the showcase)
# spam.py
import pathlib

def listdir(p):
    return [str(c) for c in pathlib.Path(p).iterdir()]
The Python 2 module using spam.listdir
# beans.py
import py3bridge
delegate = py3bridge.Py3Wrapper('spam', 'listdir')
py3result = delegate('.')
print py3result
Assuming the caller is Python 3.5+, you have access to a nicer subprocess module. Perhaps you could use subprocess.run, and communicate via pickled Python objects sent through stdin and stdout, respectively. There would be some setup to do, but no parsing on your side, or mucking with strings etc.
Here's an example of Python2 code via subprocess.Popen
import pickle
import subprocess

p = subprocess.Popen(python3_args, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
stdout, stderr = p.communicate(pickle.dumps(python3_args))
result = pickle.loads(stdout)
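And since the answer mentions subprocess.run, here is a hedged sketch of what that could look like; the worker script name py2_worker.py and the python2 executable name are made up for illustration:

import pickle
import subprocess

# Send pickled args on stdin, read a pickled result from stdout.
# Protocol 2 is the highest pickle protocol both Python 2 and Python 3 understand.
args = (1, 2, 3)
proc = subprocess.run(['python2', 'py2_worker.py'],
                      input=pickle.dumps(args, protocol=2),
                      stdout=subprocess.PIPE,
                      check=True)
result = pickle.loads(proc.stdout)
print(result)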
You could create a simple script like this:
import sys
import my_wrapped_module
import json
params = sys.argv
script = params.pop(0)
function = params.pop(0)
print(json.dumps(getattr(my_wrapped_module, function)(*params)))
You'll be able to call it like this:
pythonx.x wrapper.py myfunction param1 param2
This is obviously a security hazard though, be careful.
Also note that if your params are anything other than strings or integers, you'll have some issues, so maybe think about transmitting params as a JSON string and converting them with json.loads() in the wrapper, as sketched below.
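A hedged sketch of that JSON variant, still assuming a module named my_wrapped_module as in the wrapper above:

# wrapper.py (JSON-params variant, sketch)
import sys
import json
import my_wrapped_module

function = sys.argv[1]
params = json.loads(sys.argv[2])   # e.g. '[1, 2, "three"]'
print(json.dumps(getattr(my_wrapped_module, function)(*params)))

Called as: pythonx.x wrapper.py myfunction '[1, 2, "three"]'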
It's possible to use the multiprocessing.managers module to achieve what you want. It does require a small amount of hacking though.
Given a module that has functions you want to expose, you need to create a Manager that can create proxies for those functions.
manager process that serves proxies to the py3 functions:
from multiprocessing.managers import BaseManager
import spam

class SpamManager(BaseManager):
    pass

# Register a way of getting the spam module.
# You can use the exposed arg to control what is exposed.
# By default only "public" functions (without a leading underscore) are exposed,
# but can only ever expose functions or methods.
SpamManager.register("get_spam", callable=(lambda: spam), exposed=["add", "sub"])

# specifying the address as localhost means the manager is only visible to
# processes on this machine
manager = SpamManager(address=('localhost', 50000), authkey=b'abc',
                      serializer='xmlrpclib')

server = manager.get_server()
server.serve_forever()
I've redefined spam to contain two functions called add and sub.
# spam.py
def add(x, y):
    return x + y

def sub(x, y):
    return x - y
client process that uses the py3 functions exposed by the SpamManager.
from __future__ import print_function
from multiprocessing.managers import BaseManager

class SpamManager(BaseManager):
    pass

SpamManager.register("get_spam")

m = SpamManager(address=('localhost', 50000), authkey=b'abc',
                serializer='xmlrpclib')
m.connect()
spam = m.get_spam()

print("1 + 2 = ", spam.add(1, 2))  # prints 1 + 2 = 3
print("1 - 2 = ", spam.sub(1, 2))  # prints 1 - 2 = -1
spam.__name__  # AttributeError -- spam is a module, but its __name__ attribute
               # is not exposed
Once set up, this form gives an easy way of accessing functions and values. It also allows these functions and values to be used in much the same way as if they were not proxies. Finally, it allows you to set a password on the server process so that only authorised processes can access the manager. Because the manager is long running, a new process doesn't have to be started for each function call you make.
One limitation is that I've used the xmlrpclib module rather than pickle to send data back and forth between the server and the client. This is because python2 and python3 use different protocols for pickle. You could fix this by adding your own client to multiprocessing.managers.listener_client that uses an agreed upon protocol for pickling objects.
I have a simple 'echo' PB client and server where the client sends an object to the server, which echoes the same object back to the client:
The client:
from twisted.spread import pb
from twisted.internet import reactor
from twisted.python import util
from amodule import aClass
factory = pb.PBClientFactory()
reactor.connectTCP("localhost", 8282, factory)
d = factory.getRootObject()
d.addCallback(lambda object: object.callRemote("echo", aClass()))
d.addCallback(lambda response: 'server echoed: '+response)
d.addErrback(lambda reason: 'error: '+str(reason.value))
d.addCallback(util.println)
d.addCallback(lambda _: reactor.stop())
reactor.run()
The server:
from twisted.application import internet, service
from twisted.internet import protocol
from twisted.spread import pb
from amodule import aClass

class RemoteClass(pb.RemoteCopy, aClass):
    pass

pb.setUnjellyableForClass(aClass, RemoteClass)

class PBServer(pb.Root):
    def remote_echo(self, a):
        return a

application = service.Application("Test app")

# Prepare managers
clientManager = internet.TCPServer(8282, pb.PBServerFactory(PBServer()))
clientManager.setServiceParent(application)

if __name__ == '__main__':
    print "Run with twistd"
    import sys
    sys.exit(1)
The aClass is a simple class implementing Copyable:
from twisted.spread import pb

class aClass(pb.Copyable):
    pass
When I run the above code, I get this error:
twisted.spread.jelly.InsecureJelly: Module builtin not allowed (in type builtin.RemoteClass).
In fact, the object is sent to the server without any problem since it was secured with pb.setUnjellyableForClass(aClass, RemoteClass) on the server side, but once it gets returned to the client, that error is raised.
I am looking for an easy way to send/receive my objects between two peers.
Perspective broker identifies classes by name when talking about them over the network. A class gets its name in part from the module in which it is defined. A tricky problem with defining classes in a file that you run from the command line (ie, your "main script") is that they may end up with a surprising name. When you do this:
python foo.py
The module name Python gives to the code in foo.py is not "foo" as one might expect. Instead it is something like "__main__" (which is why the if __name__ == "__main__": trick works).
However, if some other part of your application later tries to import something from foo.py, then Python re-evaluates its contents to create a new module named "foo".
Additionally, the classes defined in the "__main__" module of one process may have nothing to do with the classes defined in the "__main__" module of another process. This is the case in your example, where __main__.RemoteClass is defined in your server process but there is no RemoteClass in the __main__ module of your client process.
So, PB gets mixed up and can't complete the object transfer.
The solution is to keep the amount of code in your main script to a minimum, and in particular to never define things with names there (no classes, no function definitions).
However, another problem is the expectation that a RemoteCopy can be sent over PB without additional preparation. A Copyable can be sent, creating a RemoteCopy on the peer, but this is not a symmetric relationship. Your client also needs to allow this by making a similar (or different) pb.setUnjellyableForClass call.
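A minimal sketch of how both fixes could fit together, assuming the shared class lives in amodule.py and both processes import it (this mirrors the names from the code above, but it is an illustration, not a tested Twisted application):

# amodule.py -- shared by client and server so the class gets a stable module name
from twisted.spread import pb

class aClass(pb.Copyable):
    pass

class RemoteClass(pb.RemoteCopy, aClass):
    pass

# Registering here means both peers allow aClass to be unjellied,
# so the echoed object can travel in either direction.
pb.setUnjellyableForClass(aClass, RemoteClass)

The client and server then just do from amodule import aClass and drop their own RemoteClass / setUnjellyableForClass definitions.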