I have a design issue.
I've written a hotel booking server class with methods like "book_room", "cancel_reservation", "list_of_rooms", and "get_user_reservations". Now I'd like this server to be able to connect to other servers, so that I can have one master server and many slaves. Then when I call, for example, list_of_rooms, I get the rooms from both the master and the slaves, and when I call "get_user_reservations" I get the reservations from all the servers.
So I thought I would make a class that holds both the master and the slave servers and calls the functions on all of them:
class Master(object):
    def __init__(self):
        local = HotelServer()
        self.slaves = [local]

    def add_slaves(self, hotel_server):
        self.slaves.append(hotel_server)
And then I would give the Master class all of the server's functions:
    def get_user_reservations(self, user):
        result = []
        for slave in self.slaves:
            result += slave.get_user_reservations(user)
        return result

    def list_of_rooms(self, user):
        result = []
        for slave in self.slaves:
            result += slave.list_of_rooms(user)
        return result
Is this a good idea?
Is there a pattern for this kind of node-network of servers?
The next thing is that most of the functions will be similar, so can I do something like this:
class Master(object):
    def __init__(self):
        local = HotelServer()
        self.slaves = [local]
        for f_name, f_content in HotelServer.__dict__.items():
            if is_function(f_name) and is_public_method(f_name):
                def function(*args):
                    result = []
                    for slave in self.slaves:
                        result += slave.f_name(*args)
                    return result
                setattr(self, f_name, function)
So it would fetch every public method from HotelServer and generate a function that calls that method on each slave?
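Roughly, yes, but the sketch above has two bugs: slave.f_name(*args) looks up an attribute literally named "f_name" instead of using getattr, and the inner function closes over the loop variable, so every generated method would call whichever name the loop saw last. Below is a corrected sketch (my version: callable() and an underscore check stand in for the hypothetical is_function/is_public_method helpers, and it assumes each HotelServer method returns a list). What you are describing is essentially the Composite pattern, applied scatter-gather style across servers.
class Master(object):
    def __init__(self):
        self.slaves = [HotelServer()]
        for f_name, f_content in HotelServer.__dict__.items():
            # Skip non-callables and private/dunder attributes.
            if not callable(f_content) or f_name.startswith('_'):
                continue
            setattr(self, f_name, self._make_forwarder(f_name))

    def add_slaves(self, hotel_server):
        self.slaves.append(hotel_server)

    def _make_forwarder(self, f_name):
        # A factory freezes f_name per method, avoiding the
        # late-binding closure problem of defining the inner
        # function directly inside the loop.
        def forwarder(*args):
            result = []
            for slave in self.slaves:
                result += getattr(slave, f_name)(*args)
            return result
        return forwarder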
Is there any way to instantiate a subclass object as an extension of a superclass object, in such a way that the subclass retains the arguments passed to the superclass?
I'm working on a simple melody generator to get back into programming. The idea was that a Project can contain an arbitrary number of Instruments, which can have any number of Sequences.
Every subordinate object would retain all of the information of the superior objects (e.g. every instrument shares the project's port device, and so forth).
I figured I could do something like this:
import rtmidi

class Project:
    def __init__(self, p_port=None):
        self.port = p_port
    # Getter / Setter removed for brevity

class Instrument(Project):
    def __init__(self, p_channel=1):
        self.channel = p_channel
    # Getter / Setter removed for brevity

def port_setup():
    midi_out = rtmidi.MidiOut()
    selected_port = midi_out.open_port(2)
    return selected_port

if __name__ == '__main__':
    port = port_setup()
    project = Project(p_port=port)
    project.inst1 = Instrument()
    print(project.port, project.inst1.port)
The expectation was that the new instrument would extend the created Project and inherit the port passed to its parent.
However, that doesn't work; the project and instrument return different objects, so there seems to be no relation between the objects at all. A quick Google search also doesn't turn up any information, which I assume means I'm really missing something.
Is there a proper way to set up nested structures like this?
Your relationship is that each Project has many Instruments. An Instrument is not a Project.
A first step could be to tell each Instrument which project it belongs to:
import rtmidi

class Project:
    def __init__(self, p_port=None):
        self.port = p_port
    # Getter / Setter removed for brevity

class Instrument:
    def __init__(self, project, p_channel=1):
        self.project = project
        self.channel = p_channel
    # Getter / Setter removed for brevity

def port_setup():
    midi_out = rtmidi.MidiOut()
    selected_port = midi_out.open_port(2)
    return selected_port

if __name__ == '__main__':
    port = port_setup()
    project = Project(p_port=port)
    project.inst1 = Instrument(project)
    print(project.port, project.inst1.project.port)
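If you also want instrument.port to keep working as in your first attempt, one possible refinement (my addition, not part of the original answer) is to delegate to the owning project with a read-only property:
class Instrument:
    def __init__(self, project, p_channel=1):
        self.project = project
        self.channel = p_channel

    @property
    def port(self):
        # Delegate instead of copying the value, so a later change
        # to project.port is picked up by every instrument.
        return self.project.port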
My aim is to give a web framework access to a Pyro daemon whose first loading involves time-consuming tasks. So far, I have managed to keep in memory (outside of the web app) a single instance of a class that takes care of the time-consuming loading at its initialization. I can also query it with my web app. The code for the daemon is:
@Pyro4.expose
@Pyro4.behavior(instance_mode='single')
class Store(object):
    def __init__(self):
        self._store = ...  # the expensive loading

    def query_store(self, query):
        return ...  # Useful query tool to expose to the web framework.
                    # Not time consuming, provided self._store is
                    # loaded.

with Pyro4.Daemon() as daemon:
    uri = daemon.register(Store)
    with Pyro4.locateNS() as ns:
        ns.register('store', uri)
    daemon.requestLoop()
The issue I am having is that although a single instance is created, it is only created at the first proxy query from the web app. This is the normal behavior according to the doc, but not what I want, as the first query is still slow because of the initialization of Store.
How can I make sure the instance is created as soon as the daemon is started?
I was thinking of creating a proxy instance of Store in the code of the daemon, but this is tricky because the event loop must already be running.
EDIT
It turns out that daemon.register() can accept either a class or an object, which could be a solution. This is, however, not recommended in the doc (link above), and that feature apparently only exists for backwards compatibility.
Do whatever initialization you need outside of your Pyro code and cache it somewhere. Use the instance_creator parameter of the @behavior decorator for maximum control over how and when an instance is created. You can even pre-create server instances yourself and retrieve one from a pool if you so desire. Anyway, one possible way to do this is like so:
import Pyro4

def slow_initialization():
    print("initializing stuff...")
    import time
    time.sleep(4)
    print("stuff is initialized!")
    return {"initialized stuff": 42}

cached_initialized_stuff = slow_initialization()

def instance_creator(cls):
    print("(Pyro is asking for a server instance! Creating one!)")
    return cls(cached_initialized_stuff)

@Pyro4.behavior(instance_mode="percall", instance_creator=instance_creator)
class Server:
    def __init__(self, init_stuff):
        self.init_stuff = init_stuff

    @Pyro4.expose
    def work(self):
        print("server: init stuff is:", self.init_stuff)
        return self.init_stuff

Pyro4.Daemon.serveSimple({
    Server: "test.server"
})
But this complexity is not needed for your scenario: just initialize the thing that takes a long time and cache it somewhere. Instead of re-initializing it every time a new server object is created, just refer to the cached pre-initialized result. Something like this:
import Pyro4

def slow_initialization():
    print("initializing stuff...")
    import time
    time.sleep(4)
    print("stuff is initialized!")
    return {"initialized stuff": 42}

cached_initialized_stuff = slow_initialization()

@Pyro4.behavior(instance_mode="percall")
class Server:
    def __init__(self):
        self.init_stuff = cached_initialized_stuff

    @Pyro4.expose
    def work(self):
        print("server: init stuff is:", self.init_stuff)
        return self.init_stuff

Pyro4.Daemon.serveSimple({
    Server: "test.server"
})
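For completeness, a client reaches either variant the same way (this snippet is my addition; it assumes a Pyro name server is running, which serveSimple registers with by default):
import Pyro4

# Look up "test.server" in the name server and call the exposed method.
with Pyro4.Proxy("PYRONAME:test.server") as server:
    print(server.work())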
I've been reading about weak and strong references in Python, specifically regarding errors that look like
ReferenceError: weakly-referenced object no longer exists
Here I have a basic RPC interface that passes objects from client to server, where the server then saves those objects into a predefined class. Here's a basic outline of all the structures in my code. Note the behavior of "flags":
Client side:
# target = 'file.txt', flags = [(tuple, tuple), (tuple, tuple)]
def file_reminder(self, flags, target):
    target = os.path.abspath(target)
    c = rpyc.connect("localhost", port)
    # flags can be referenced here
    return c.root.file_reminder(flags, target)
Server side:
class MyService(rpyc.Service):
    jobs = EventLoop()
    jobs.start()  # start() returns None, so keep the thread object itself

    # this is what's called from the client side
    def exposed_file_reminder(self, flags, target):
        reminder = FileReminder(flags, target)
        self.jobs.add_reminder(reminder)
        # reminder.flags can be referenced here
        return "Added a new reminder"

class FileReminder(object):
    def __init__(self, flags, target):
        self.flags = flags
        self.target = target

    def __str__(self):
        return str(self.flags) + self.target

class EventLoop(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)  # initialize the Thread machinery
        self.reminders = []

    def add_reminder(self, reminder):
        # reminder.flags can be referenced here
        self.reminders.append(reminder)

    def run(self):
        while True:
            for reminder in self.reminders:
                # reminder.flags is no longer defined here
                print reminder
The issue here is that the "flags" argument always throws a ReferenceError when printed in the thread (or manipulated in any way within the Thread's run() function). Note that target is processed just fine. When I change "flags" to an immutable, like a string, no ReferenceError pops up. This is making me scratch my head, so any help would be appreciated!
Using Python GC on Compound Objects, I was able to fix this, although I do not know whether it was done using "best practices".
Here's what I think the error was: although there were many references to the list itself, there were no explicit references to the tuples within that list. What I did to fix it was make a local copy of the list, element by element, when a FileReminder is instantiated.
For example:
def __init__(self, flags, target):
    self.flags = []
    for flag in flags:
        self.flags.append(flag)
    self.target = target
This seems to work!
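As an aside (my addition, not part of the original fix): rpyc ships a helper that fetches a remote object by value, which avoids hand-rolling the copy loop, assuming the flags arrive as a picklable netref:
from rpyc.utils.classic import obtain

class FileReminder(object):
    def __init__(self, flags, target):
        # obtain() copies the remote list (and the tuples inside it)
        # into this process, so no netref outlives the connection.
        self.flags = obtain(flags)
        self.target = target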
How do I maintain different sessions or local state with my zerorpc server?
For example (below), if I have multiple clients, subsequent clients will overwrite the model state. I thought about each client having an ID, and having the RPC logic separate the variables that way, but this seems messy, and how would I clear out old state once a client disconnects?
Server
import zerorpc
import FileLoader
class MyRPC(object):
    def load(self, myFile):
        self.model = FileLoader.load(myFile)

    def getModelName(self):
        return self.model.name

s = zerorpc.Server(MyRPC())
s.bind("tcp://0.0.0.0:4242")
s.run()
Client 1
import zerorpc
c = zerorpc.Client()
c.connect("tcp://127.0.0.1:4242")
c.load("file1")
print c.getModelName()
Client 2
import zerorpc
c = zerorpc.Client()
c.connect("tcp://127.0.0.1:4242")
c.load("file2") # AAAHH! The previously loaded model gets overwritten here!
print c.getModelName()
I'm not sure about sessions... but if you want to get back different models, maybe you could just have one function that instantiates a new model?
import zerorpc
import FileLoader

models_dict = {}  # Keep track of our models

def get_model(file):
    if file in models_dict:
        return models_dict[file]
    models_dict[file] = MyModel(file)
    return models_dict[file]

class MyModel(object):
    def __init__(self, file):
        if file:
            self.load(file)

    def load(self, myFile):
        self.model = FileLoader.load(myFile)

    def getModelName(self):
        return self.model.name

s = zerorpc.Server(<mypackagename.mymodulename>)  # Supply the name of current package/module
s.bind("tcp://0.0.0.0:4242")
s.run()
Client:
import zerorpc
c = zerorpc.Client()
c.connect("tcp://127.0.0.1:4242")
print c.get_model("file1")
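To address the session question directly, here is a minimal sketch of the client-ID idea mentioned in the question (my addition; FileLoader and the method names mirror the code above, and clients are expected to call end_session themselves, since zerorpc has no built-in disconnect notification):
import uuid

import zerorpc
import FileLoader

class MyRPC(object):
    def __init__(self):
        self.sessions = {}  # session id -> loaded model

    def new_session(self):
        # A client calls this once, then passes the id to later calls.
        sid = uuid.uuid4().hex
        self.sessions[sid] = None
        return sid

    def load(self, sid, myFile):
        self.sessions[sid] = FileLoader.load(myFile)

    def getModelName(self, sid):
        return self.sessions[sid].name

    def end_session(self, sid):
        # Drop the per-client state explicitly.
        self.sessions.pop(sid, None)

s = zerorpc.Server(MyRPC())
s.bind("tcp://0.0.0.0:4242")
s.run()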
I'm working on a project in Tornado that relies heavily on the asynchronous features of the library. By following the chat demo, I've managed to get long-polling working with my application, however I seem to have run into a problem with the way it all works.
Basically what I want to do is be able to call a function on the UpdateManager class and have it finish the asynchronous request for any callbacks in the waiting list. Here's some code to explain what I mean:
update.py:
class UpdateManager(object):
    waiters = []
    attrs = []
    other_attrs = []

    def set_attr(self, attr):
        self.attrs.append(attr)

    def set_other_attr(self, attr):
        self.other_attrs.append(attr)

    def add_callback(self, cb):
        self.waiters.append(cb)

    def send(self):
        for cb in self.waiters:
            cb(self.attrs, self.other_attrs)

class LongPoll(tornado.web.RequestHandler, UpdateManager):
    @tornado.web.asynchronous
    def get(self):
        self.add_callback(self.finish_request)

    def finish_request(self, attrs, other_attrs):
        pass  # Render some JSON to give the client, etc...

class SetSomething(tornado.web.RequestHandler):
    def post(self):
        # Handle the stuff...
        self.set_attr(some_attr)
(There's more code implementing the URL handlers/server and such, but I don't believe that's necessary for this question.)
So what I want to do is make it so I can call UpdateManager.send from another place in my application and still have it send the data to the waiting clients. The problem is that when you try to do this:
from update import UpdateManager
UpdateManager.send()
it only gets the UpdateManager class, not the instance of it that is holding user callbacks. So my question is: is there any way to create a persistent object with Tornado that will allow me to share a single instance of UpdateManager throughout my application?
Don't use instance methods - use class methods (after all, you're already using class attributes, you just might not realize it). That way, you don't have to instantiate the object, and can instead just call the methods of the class itself, which acts as a singleton:
class UpdateManager(object):
    waiters = []
    attrs = []
    other_attrs = []

    @classmethod
    def set_attr(cls, attr):
        cls.attrs.append(attr)

    @classmethod
    def set_other_attr(cls, attr):
        cls.other_attrs.append(attr)

    @classmethod
    def add_callback(cls, cb):
        cls.waiters.append(cb)

    @classmethod
    def send(cls):
        for cb in cls.waiters:
            cb(cls.attrs, cls.other_attrs)
This will make...
from update import UpdateManager
UpdateManager.send()
work as you desire it to.
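For example, the handler from the question could then share state like this (a sketch of my own; "attr" as the request argument name is an assumption):
class SetSomething(tornado.web.RequestHandler):
    def post(self):
        # Record the new value on the shared class...
        UpdateManager.set_attr(self.get_argument("attr"))
        # ...and push it to every waiting long-poll client.
        UpdateManager.send()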