Maintain sessions with zerorpc - python

How do I maintain different sessions, or local state, with my zerorpc server?
For example (below), if I have multiple clients, a later client will overwrite the model state. I thought about giving each client an ID and having the RPC logic separate the variables that way, but this seems messy, and how would I clear out old states/variables once a client disconnects?
Server
import zerorpc
import FileLoader

class MyRPC(object):
    def load(self, myFile):
        self.model = FileLoader.load(myFile)

    def getModelName(self):
        return self.model.name

s = zerorpc.Server(MyRPC())
s.bind("tcp://0.0.0.0:4242")
s.run()
Client 1
import zerorpc
c = zerorpc.Client()
c.connect("tcp://127.0.0.1:4242")
c.load("file1")
print c.getModelName()
Client 2
import zerorpc
c = zerorpc.Client()
c.connect("tcp://127.0.0.1:4242")
c.load("file2") # AAAHH! The previously loaded model gets overwritten here!
print c.getModelName()

Not sure about sessions... but if you want to get back different models, maybe you could just have one function that instantiates a new MyModel()?
import zerorpc
import FileLoader

models_dict = {}  # Keep track of our models

def get_model(file):
    if file in models_dict:
        return models_dict[file]
    models_dict[file] = MyModel(file)
    return models_dict[file]

class MyModel(object):
    def __init__(self, file):
        if file:
            self.load(file)

    def load(self, myFile):
        self.model = FileLoader.load(myFile)

    def getModelName(self):
        return self.model.name

s = zerorpc.Server(<mypackagename.mymodulename>)  # Supply the name of the current package/module
s.bind("tcp://0.0.0.0:4242")
s.run()
Client:
import zerorpc
c = zerorpc.Client()
c.connect("tcp://127.0.0.1:4242")
print c.get_model("file1")
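On the session question itself, here is a minimal sketch of the ID-based approach the question mentions, with a TTL sweep standing in for disconnect detection (SessionRPC and SESSION_TTL are invented names; zerorpc has no built-in session support that I know of):

import time
import zerorpc
import FileLoader

SESSION_TTL = 600  # seconds of inactivity before a session is dropped

class SessionRPC(object):
    def __init__(self):
        self._sessions = {}  # session_id -> (model, last_used_timestamp)

    def _sweep(self):
        # Drop sessions idle longer than the TTL; this stands in for an
        # explicit "client disconnected" notification, which zerorpc
        # does not deliver to the server object.
        now = time.time()
        for sid, (model, last_used) in list(self._sessions.items()):
            if now - last_used > SESSION_TTL:
                del self._sessions[sid]

    def load(self, session_id, myFile):
        self._sweep()
        self._sessions[session_id] = (FileLoader.load(myFile), time.time())

    def getModelName(self, session_id):
        model, _ = self._sessions[session_id]
        self._sessions[session_id] = (model, time.time())  # refresh the TTL
        return model.name

s = zerorpc.Server(SessionRPC())
s.bind("tcp://0.0.0.0:4242")
s.run()

Each client would then generate its own id (a uuid4, say) and pass it with every call: c.load("my-id", "file1"); print c.getModelName("my-id").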


Server class design for a master-slave network of server nodes

I have a design issue.
I have made a hotel booking server class with functions like "book_room", "cancel_reservation", "list_of_rooms", and "get_user_reservations". Now I would like this server to be able to connect to other servers, so that I can have one master server and many slaves. Then, when I call list_of_rooms, I get the list of rooms from both the master and the slaves; and when I call "get_user_reservations", I get reservations from all the servers.
So I thought I would make a class that holds both the master and the slave servers and calls functions on all of them:
class Master(object):
    def __init__(self):
        local = HotelServer()
        self.slaves = [local]

    def add_slaves(self, hotel_server):
        self.slaves.append(hotel_server)
and that I would give the Master class all of the functions from the server:
def get_user_reservations(self, user):
    result = []
    for slave in self.slaves:
        result += slave.get_user_reservations(user)
    return result

def list_of_rooms(self, user):
    result = []
    for slave in self.slaves:
        result += slave.list_of_rooms(user)
    return result
Is this a good idea?
Is there any pattern for this kind of node-network of servers?
The next thing is that most of the functions will be similar, so can I do something like this:
class Master(object):
    def __init__(self):
        local = HotelServer()
        self.slaves = [local]
        for f_name, f_content in HotelServer.__dict__.items():
            if is_function(f_name) and is_public_method(f_name):
                def function(*args):
                    result = []
                    for slave in self.slaves:
                        result += slave.f_name(*args)
                    return result
                setattr(self, f_name, function)
so that it would fetch every method from HotelServer and make a function that calls that method on each slave?
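This is essentially the Composite pattern: the Master exposes the same interface as a single HotelServer and fans each call out to all of its children. A hedged sketch of the dynamic version follows, assuming every public HotelServer method returns a list. It needs two fixes relative to the snippet above: the method must be looked up by name with getattr(), and f_name must be captured per iteration (here via a factory method), otherwise every generated function would see the last value of the loop variable.

class Master(object):
    def __init__(self):
        self.slaves = [HotelServer()]
        for f_name in dir(HotelServer):
            if f_name.startswith('_'):
                continue  # skip private/dunder names
            if not callable(getattr(HotelServer, f_name)):
                continue  # skip non-method attributes
            setattr(self, f_name, self._make_delegate(f_name))

    def _make_delegate(self, f_name):
        # A factory method gives each delegate its own binding of
        # f_name, avoiding the late-binding closure bug.
        def delegate(*args):
            result = []
            for slave in self.slaves:
                result += getattr(slave, f_name)(*args)
            return result
        return delegate

    def add_slave(self, hotel_server):
        self.slaves.append(hotel_server)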

Can't mock the class method in python

I have a class that I am trying to mock in tests. The class is located in server/cache.py and looks like:
class Storage(object):
    def __init__(self, host, port):
        # set up connection to a storage engine
        pass

    def store_element(self, element, num_of_seconds):
        # store something
        pass

    def remove_element(self, element):
        # remove something
        pass
This class is used in server/app.py, something like this:
import cache
STORAGE = cache.Storage('host', 'port')
STORAGE.store_element(1, 5)
Now the problem arises when I try to mock it in the tests:
import unittest, mock
import server.app as application

class SomeTest(unittest.TestCase):
    # part1
    def setUp(self):
        # part2
        self.app = application.app.test_client()
This clearly does not work during the test if I can't connect to the storage, so I have to mock it somehow by writing things in 'part1' and 'part2'.
I tried to achieve it with
@mock.patch('server.app.cache') # part 1
mock.side_effect = ... # hoping to override the init function to do nothing
But it still tries to connect to the real host. So how can I mock a full class here correctly? P.S. I have reviewed many questions which look similar, but in vain.
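For reference, the usual explanation: @mock.patch('server.app.cache') in part 1 comes too late, because STORAGE = cache.Storage('host', 'port') has already run when server.app was imported at the top of the test module. A sketch of one way around that, assuming server is a package so that 'server.cache.Storage' names the very class app.py instantiates:

import unittest, mock

# The patch has to be in effect *before* server.app is imported, because
# the module-level STORAGE = cache.Storage(...) runs at import time.
# Patching the class (not the module) makes STORAGE a MagicMock, so no
# real connection is ever opened.
with mock.patch('server.cache.Storage'):
    import server.app as application

class SomeTest(unittest.TestCase):
    def setUp(self):
        self.app = application.app.test_client()

    def test_store_is_mocked(self):
        # STORAGE stays a mock even after the with-block exits, because
        # the name was bound while the patch was active.
        application.STORAGE.store_element(1, 5)
        application.STORAGE.store_element.assert_called_with(1, 5)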

How to log to a variable, or write an observer that sends messages to a variable, in Twisted / Autobahn

I am writing a websocket client that will receive updates every few seconds or so, using autobahn with twisted. I am successfully logging the data using multiple observers; however, I want to use part of the messages I am receiving to send to a dataframe (and eventually plot in real time). My assumption is that I can log to a variable as well as to a file-like object, but I cannot figure out how to do that. What is the correct way to achieve this?
I have very thoroughly read the docs for the current and legacy twisted loggers:
twisted.log https://twistedmatrix.com/documents/current/core/howto/logging.html
twisted.logger https://twistedmatrix.com/documents/current/core/howto/logger.html
In my code I have tried to use a zope.interface and @provider as referenced in the new twisted.logger package to create a custom log observer, but have had no luck thus far even getting a custom log observer to print, let alone send data to a variable.
from twisted.internet import reactor
from autobahn.twisted.websocket import WebSocketClientFactory, WebSocketClientProtocol, connectWS
from twisted.logger import (globalLogBeginner, Logger, globalLogPublisher,
                            jsonFileLogObserver, ILogObserver)
import sys
import io
import json
from pandas import DataFrame

def loggit(message):
    log.info("Echo: {message!r}", message=message)

class ClientProtocol(WebSocketClientProtocol):
    def onConnect(self, response):
        print("Server connected: {0}".format(response.peer))

    def initMessage(self):
        message_data = {}
        message_json = json.dumps(message_data)
        print "sendMessage: " + message_json
        self.sendMessage(message_json)

    def onOpen(self):
        print "onOpen calls initMessage()"
        self.initMessage()

    def onMessage(self, msg, binary, df):
        loggit(msg)

    def onClose(self, wasClean, code, reason):
        print("WebSocket connection closed: {0}".format(reason))

if __name__ == '__main__':
    factory = WebSocketClientFactory("wss://ws-feed.whatever.com")
    factory.protocol = ClientProtocol

    @provider(ILogObserver)
    def customObserver(whatgoeshere?):
        print event

    observers = [jsonFileLogObserver(io.open("loga.json", "a")),
                 jsonFileLogObserver(io.open("logb.json", "a")),
                 customObserver(Whatgoeshere?)]
    log = Logger()
    globalLogBeginner.beginLoggingTo(observers)

    connectWS(factory)
    reactor.run()
A log observer is simply a callable object that takes a dictionary containing all the values that are part of the log message.
This means you can have an instance of a class with a __call__ method, decorated with @zope.interface.implementer(ILogObserver), or a function decorated with @zope.interface.provider(ILogObserver), and either can perform that role.
Here's an example of some code which logs some values to a text file, a JSON file, and an in-memory statistics collector which sums things up on the fly.
import io
from zope.interface import implementer
from twisted.logger import (globalLogBeginner, Logger, jsonFileLogObserver,
                            ILogObserver, textFileLogObserver)

class Something(object):
    log = Logger()

    def doSomething(self, value):
        self.log.info("Doing something to {value}",
                      value=value)

@implementer(ILogObserver)
class RealTimeStatistics(object):
    def __init__(self):
        self.stats = []

    def __call__(self, event):
        if 'value' in event:
            self.stats.append(event['value'])

    def reportCurrent(self):
        print("Current Sum Is: " + repr(sum(self.stats)))

if __name__ == "__main__":
    stats = RealTimeStatistics()
    globalLogBeginner.beginLoggingTo([
        jsonFileLogObserver(io.open("log1.json", "ab")),
        textFileLogObserver(io.open("log2.txt", "ab")),
        stats,  # here we pass our log observer
    ], redirectStandardIO=False)
    something = Something()
    something.doSomething(1)
    something.doSomething(2)
    something.doSomething(3)
    stats.reportCurrent()
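To route the same events into a pandas DataFrame, as the question asks, the observer can simply accumulate rows in memory. A hedged sketch (DataFrameObserver is an invented name; the event keys used are the standard log_time plus whatever keyword you pass to log.info):

import pandas as pd
from zope.interface import implementer
from twisted.logger import ILogObserver

@implementer(ILogObserver)
class DataFrameObserver(object):
    def __init__(self):
        self.rows = []  # appending to a list beats growing a DataFrame row by row

    def __call__(self, event):
        # Keep only events carrying the field we care about; 'message'
        # here is the keyword used in loggit() above.
        if 'message' in event:
            self.rows.append({'time': event.get('log_time'),
                              'message': event['message']})

    def toDataFrame(self):
        # Build the frame lazily, whenever a snapshot is wanted.
        return pd.DataFrame(self.rows)

Pass an instance of it in the observers list next to stats, then call toDataFrame() whenever you want a snapshot to plot.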

How to make the spdylay module work like httplib/http.client?

I have to test a server based on Jetty. This server can work with its own protocol, HTTP, HTTPS, and lately it has started to support SPDY. I have some stress tests which are based on httplib/http.client: each thread starts with a similar URL (some data in the query string are variable), adds the execution time to a global variable, and every few seconds shows some statistics. The code looks like:
t_start = time.time()
connection.request("GET", path)
resp = connection.getresponse()
t_stop = time.time()
check_response(resp)
QRY_TIMES.append(t_stop - t_start)
A client working with the native protocol shares the httplib API, so the connection may be native, HTTPConnection, or HTTPSConnection.
Now I want to add a SPDY test using the spdylay module. But its interface is opaque and I don't know how to reshape it into something similar to the httplib interface. I have made a test client based on the example, but since the 2nd argument to spdylay.urlfetch() is a class name and not an object, I do not know how to use it with my tests. I have already added tests to the on_close() method of my class which extends spdylay.BaseSPDYStreamHandler, but it is not compatible with the other tests. If it were an instance, I would use it outside of the spdylay.urlfetch() call.
How can I use spdylay in code that works against the httplib interface?
My only idea is to use a global dictionary where the URL is the key and the handler object is the value. It is not ideal because:
new queries with the same URL will overwrite the previous response
it is easy to forget to remove the handler from the global dictionary
But it works!
import sys
import spdylay

CLIENT_RESULTS = {}

class MyStreamHandler(spdylay.BaseSPDYStreamHandler):
    def __init__(self, url, fetcher):
        super().__init__(url, fetcher)
        self.headers = []
        self.whole_data = []

    def on_header(self, nv):
        self.headers.append(nv)

    def on_data(self, data):
        self.whole_data.append(data)

    def get_response(self, charset='UTF8'):
        return (b''.join(self.whole_data)).decode(charset)

    def on_close(self, status_code):
        CLIENT_RESULTS[self.url] = self

def spdy_simply_get(url):
    spdylay.urlfetch(url, MyStreamHandler)
    data_handler = CLIENT_RESULTS[url]
    result = data_handler.get_response()
    del CLIENT_RESULTS[url]
    return result

if __name__ == '__main__':
    if '--test' in sys.argv:
        spdy_response = spdy_simply_get('https://localhost:8443/test_spdy/get_ver_xml.hdb')
if __name__ == '__main__':
if '--test' in sys.argv:
spdy_response = spdy_simply_get('https://localhost:8443/test_spdy/get_ver_xml.hdb')
I hope somebody can do spdy_simply_get(url) better.
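One way to avoid the global dictionary is to capture the handler through a closure: define a throwaway subclass inside the function and pass that class to spdylay.urlfetch(). A sketch building on MyStreamHandler above, assuming urlfetch() accepts any handler class and completes the stream before returning (as the version above already assumes):

def spdy_get(url):
    results = []

    class _CapturingHandler(MyStreamHandler):
        def on_close(self, status_code):
            # The closure over `results` replaces the module-level
            # CLIENT_RESULTS dict, so nothing needs to be cleaned up
            # and concurrent fetches of the same URL don't collide.
            results.append(self.get_response())

    spdylay.urlfetch(url, _CapturingHandler)
    return results[0] if results else None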

How to get the return value (like Ajax) using task queue on Google App Engine

I can use a task queue to change a database value, but how can I get the return value back, like with Ajax, when using a task queue?
This is my code:
from google.appengine.api.labs import taskqueue
from google.appengine.ext import db
from google.appengine.ext import webapp
from google.appengine.ext.webapp import template
from google.appengine.ext.webapp.util import run_wsgi_app
import os

class Counter(db.Model):
    count = db.IntegerProperty(indexed=False)

class BaseRequestHandler(webapp.RequestHandler):
    def render_template(self, filename, template_values={}):
        values = {
        }
        template_values.update(values)
        path = os.path.join(os.path.dirname(__file__), 'templates', filename)
        self.response.out.write(template.render(path, template_values))

class CounterHandler(BaseRequestHandler):
    def get(self):
        self.render_template('counters.html', {'counters': Counter.all()})

    def post(self):
        key = self.request.get('key')
        # Add the task to the default queue.
        for loop in range(0, 1):
            a = taskqueue.add(url='/worker', params={'key': key})
        #self.redirect('/')
        self.response.out.write(a)

class CounterWorker(webapp.RequestHandler):
    def post(self):  # should run at most 1/s
        key = self.request.get('key')
        def txn():
            counter = Counter.get_by_key_name(key)
            if counter is None:
                counter = Counter(key_name=key, count=1)
            else:
                counter.count += 1
            counter.put()
        db.run_in_transaction(txn)
        self.response.out.write('sss')  # used for get by task queue

def main():
    run_wsgi_app(webapp.WSGIApplication([
        ('/', CounterHandler),
        ('/worker', CounterWorker),
    ]))

if __name__ == '__main__':
    main()
How can I show the 'sss'?
The current Task Queue API doesn't support processing return values or sending them back to the point of origin; your App Engine process isn't long-lived enough for that programming paradigm to work.
In your example, it looks like what you want is something like this:
1. Create the task.
2. Return AJAX code that will poll a task-status handler.
3. The task processes, and updates the datastore with a return value.
4. The task-status URL returns the updated value.
Alternatively, if you don't want to return the 'sss' to the client but instead need it for further processing, you'll need to split your method into multiple parts. The first part creates the task and then exits. At the end of the task's process, it adds a new task itself to call back into the second part with the return value.
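A minimal sketch of the polling variant from the numbered steps above, building on the handlers in the question (TaskResult and the /status route are invented names, not part of the original code):

from google.appengine.ext import db, webapp

class TaskResult(db.Model):
    value = db.StringProperty()

class CounterWorker(webapp.RequestHandler):
    def post(self):
        key = self.request.get('key')
        # ... update the counter in a transaction, as before ...
        # Instead of writing 'sss' to a response nobody reads, persist it:
        TaskResult(key_name=key, value='sss').put()

class StatusHandler(webapp.RequestHandler):
    def get(self):
        key = self.request.get('key')
        result = TaskResult.get_by_key_name(key)
        # The AJAX side polls this URL until the value shows up.
        self.response.out.write(result.value if result else 'pending')

StatusHandler would be mapped to a route such as ('/status', StatusHandler) in the WSGIApplication, and the page returned by CounterHandler would poll it with the same key.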
