I came across a strange issue while using a Thrift server in TThreadedServer mode. My test client makes 100 parallel calls to the server. Say two methods in my Thrift service
are load_account() and getRequestQueueCoun(). These methods have the async call pairs send_load_account()/recv_load_account() and send_getRequestQueueCoun()/recv_getRequestQueueCoun().
The problem I am facing is that the response to a send_getRequestQueueCoun() call is being captured by recv_load_account().
I found the response at the following line:

def recv_load_account(self, ):
    (fname, mtype, rseqid) = self._iprot.readMessageBegin()  # here fname is the other method's name
Server initialization code:
handler = SyncServiceHandler(settings.SERVER_NAME,settings.SERVER_LISTEN_IP,settings.SERVER_LISTEN_PORT,isDispatcher)
transport = TSocket.TServerSocket(settings.SERVER_LISTEN_IP, settings.SERVER_LISTEN_PORT)
processor = SyncService.Processor(handler)
tfactory = TTransport.TBufferedTransportFactory()
pfactory = TBinaryProtocol.TBinaryProtocolFactory()
server = TServer.TThreadedServer(processor, transport, tfactory, pfactory)
I am running two Thrift instances on localhost on different ports.
I am using Thrift with Python 2.7.
I tried my best to draft my problem quickly; if it's still not clear, let me know and I can elaborate.
Thanks in advance.
Panakj
Hi all, I have the following code, but I keep getting the error below, even though it seems to work on a colleague's PC. We can't figure out why it won't work on mine.
We have also double-checked that we're importing the same socketio by using dir().
I've tried specifying the namespace both on sio.connect and in the sio.emit, but still no luck!
socketio.exceptions.BadNamespaceError: / is not a connected namespace.
bearerToken = 'REDACT'
core = 'REDACT'
output = 'REDACT'

import socketio
import json

def getListeners(token, coreUrl, outputId):
    sio = socketio.Client(reconnection_attempts=5, request_timeout=5)
    sio.connect(url=coreUrl, transports='websocket')

    @sio.on('mwedge:batch:stats')
    def batchStats(data):
        if (outputId in data['outputStats']):
            listeners = data['outputStats'][outputId][16]
            print("Number of listeners ", len(listeners))
            ips = []
            for listener in listeners:
                ips.append(listener[1])
            print("Ips", ips)

    def authCallback(data):
        print(json.dumps(data))

    sio.emit(event='auth',
             data={
                 'token': token
             },
             callback=authCallback)

getListeners(bearerToken, core, output)
The Socket.IO connection involves a number of exchanges between the client and the server. The connect() function initiates this process, which then continues in the background. The connection is established only once the handler for your connect event is invoked; at that point you can emit.
The problem with your code is that you are not waiting until the connection handshake completes, so your emit() call happens before a connection exists. The solution is to add a connect event handler and move your emit() call there.
As an additional note, I suggest you set up your event handlers before you call connect().
I don't understand the difference between websocket (the websocket-client package) and websockets.
I would like to understand the difference so I know which one is optimal to use.
I have code that seems to do the same thing with each.
import websockets
import asyncio
import json

async def capture_data():
    uri = "wss://www.bitmex.com/realtime?subscribe=instrument:XBTUSD"
    #uri = "wss://www.bitmex.com/realtime"
    async with websockets.connect(uri) as websocket:
        while True:
            data = await websocket.recv()
            data = json.loads(data)
            try:
                #print('\n', data['data'][0]["lastPrice"])
                print('\n', data)
            except KeyError:
                continue

asyncio.get_event_loop().run_until_complete(capture_data())
import websocket
import json

def on_open(ws):
    print("opened")
    auth_data = {
        'op': 'subscribe',
        'args': ['instrument:XBTUSD']
    }
    ws.send(json.dumps(auth_data))

def on_message(ws, message):
    message = json.loads(message)
    print(message)

def on_close(ws):
    print("closed connection")

socket = "wss://www.bitmex.com/realtime"
ws = websocket.WebSocketApp(socket,
                            on_open=on_open,
                            on_message=on_message,
                            on_close=on_close)
ws.run_forever()
EDIT:
Is Python powerful enough to receive a lot of data via websocket?
In times of high traffic I could have thousands of messages per second, and even tens of thousands for short periods.
I've seen a lot of claims that Node.js might be better than Python, but I don't know JS, and even if I just copied code from the internet, I'm not sure I wouldn't waste time having to send the data to my Python script and then process it there.
websocket-client implements only the client side. If you want to create a WebSocket server, you have to use websockets.
websocket-client provides WebSocketApp, which is a more JavaScript-like way of using a websocket.
I prefer to run WebSocketApp in a thread:

import threading
import websocket

socket = "wss://www.bitmex.com/realtime"
ws = websocket.WebSocketApp(socket,
                            on_open=on_open,
                            on_message=on_message,
                            on_close=on_close)
wst = threading.Thread(target=ws.run_forever)
wst.daemon = True
wst.start()
# your code continues here ...
Now you have an asynchronous WebSocket: your main program keeps running, and every time a message is received, on_message is called and processes it without interrupting the main program.
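Stripped of the WebSocket specifics, this is just a daemon thread pulling messages and invoking a callback. A stdlib-only sketch (a queue stands in for the socket; run_forever and on_message mimic the websocket-client names):

```python
import queue
import threading

def run_forever(inbox, on_message):
    # Blocks waiting for messages, the way WebSocketApp.run_forever blocks
    # on the socket, and invokes the callback for each message received.
    while True:
        message = inbox.get()
        if message is None:  # sentinel used to stop the demo
            break
        on_message(message)

received = []
inbox = queue.Queue()
wst = threading.Thread(target=run_forever, args=(inbox, received.append))
wst.daemon = True
wst.start()

# The main program keeps running; messages are handled in the background.
for msg in ("a", "b", "c"):
    inbox.put(msg)
inbox.put(None)
wst.join()
print(received)  # -> ['a', 'b', 'c']
```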
websockets is a coroutine-based Python library and the one currently suggested to use (note that it is a third-party package, not part of the standard library).
As far as I understand, the main difference from the client API side is that websocket-client supports the callback programming paradigm, in a way similar to JavaScript web client APIs.
Read the websockets documentation FAQs:
Are there onopen, onmessage, onerror, and onclose callbacks?
No, there aren’t.
websockets provides high-level, coroutine-based APIs. Compared to callbacks, coroutines make it easier to manage control flow in concurrent code.
If you prefer callback-based APIs, you should use another library.
With current Python 3 releases, the common and suggested way to manage asynchronous coroutines is via asyncio (which the websockets library builds on).
So, answering your last question (Python vs Node.js): asyncio lets you manage asynchronous I/O events (networking-based in this case), the way Node.js does natively. Broadly speaking, using asyncio you can achieve websocket I/O performance similar to what you would get with Node.js.
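To make that concrete, here is a minimal sketch (all names made up for illustration) of asyncio multiplexing many I/O waits on a single thread; the ten simulated waits overlap instead of running back to back:

```python
import asyncio

async def fetch(name, delay):
    # Simulates a network wait; the event loop services other tasks meanwhile.
    await asyncio.sleep(delay)
    return name

async def main():
    # Ten concurrent "requests" finish in roughly one delay, not ten.
    return await asyncio.gather(*(fetch("msg%d" % i, 0.01) for i in range(10)))

print(asyncio.run(main()))  # prints the ten names in order
```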
For our new, open lab device automation standard (https://gitlab.com/SiLA2/sila_python) we would like to run the devices (= gRPC servers) in two modes: a simulation mode and a real mode (with the same set of remote calls; in the first case it shall just return simulated responses, in the second it should communicate with the hardware).
My first idea was to create two almost identical gRPC servicer Python classes in separate modules, like in this example:
in hello_sim.py:

class SayHello(SayHello_pb2_grpc.SayHelloServicer):
    # ... implementation of the simulation servicer
    def SayHello(self, request, context):
        # simulation code ...
        return SayHello_pb2.SayHello_response("simulation")

in hello_real.py:

class SayHello(SayHello_pb2_grpc.SayHelloServicer):
    # ... implementation of the real servicer
    def SayHello(self, request, context):
        # real hardware code
        return SayHello_pb2.SayHello_response("real")
and then, after creating the gRPC server in server.py, I could switch between simulation and real mode by re-registering the servicer at the gRPC server, e.g.:
server.py
# imports, init ...
grpc_server = GRPCServer(ThreadPoolExecutor(max_workers=10))
sh_sim = SayHello_sim.SayHello()
sh_real = SayHello_real.SayHello()
SayHello_pb2_grpc.add_SayHelloServicer_to_server(sh_sim, grpc_server)
grpc_server.run()
# ..... and later, still while the same grpc server is running, re-register, like
SayHello_pb2_grpc.add_SayHelloServicer_to_server(sh_real, grpc_server)
to be able to call the real hardware code;
or by exchanging the reference to the servicer object, like, e.g.:

# imports, init ...
grpc_server = GRPCServer(ThreadPoolExecutor(max_workers=10))
sh_sim = SayHello_sim.SayHello()
sh_real = SayHello_real.SayHello()
sh_current = sh_sim
SayHello_pb2_grpc.add_SayHelloServicer_to_server(sh_current, grpc_server)
grpc_server.run()
# ..... and then later, still while the same grpc server is running, re-register the real servicer, like
sh_current = sh_real
# so that the server would just use the other servicer object for the next calls ...
but neither strategy is working :(
When calling the server in simulation mode from a gRPC client, I would expect that it should reply (according to the example): "simulation"
gRPC_client.py
# imports, init ....
response = self.SayHello_stub.SayHello()
print(response)
>'simulation'
and after switching to real-mode (by any mechanism) "real":
# after switching to real mode ...
response = self.SayHello_stub.SayHello()
print(response)
>'real'
What is the cleanest and most elegant solution to achieve this mode switching without completely shutting down the gRPC server (and thereby losing the connection to the client)?
Thanks a lot for your help in advance!
PS:(Shutting down the gRPC server and re-registering would of course work, but this is not, what we want.)
A kind colleague of mine provided me with a good suggestion:
One could use the dependency injection concept (s. https://en.wikipedia.org/wiki/Dependency_injection), like:

class SayHello(pb2_grpc.SayHelloServicer):
    def inject_implementation(self, implementation):
        self.implementation = implementation

    def SayHello(self, request, context):
        return self.implementation.sayHello(request, context)
for a full example, see
https://gitlab.com/SiLA2/sila_python/tree/master/examples/simulation_real_mode_demo
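The pattern itself is independent of gRPC: the registered servicer object stays in place, and only the injected backend is swapped at runtime. A gRPC-free sketch (all class and method names are made up for illustration):

```python
class SimImplementation:
    def say_hello(self, request):
        return "simulation"

class RealImplementation:
    def say_hello(self, request):
        return "real"

class SayHelloServicer:
    """Stays registered with the server; delegates each call to the
    currently injected implementation."""
    def __init__(self, implementation):
        self.implementation = implementation

    def inject_implementation(self, implementation):
        self.implementation = implementation

    def SayHello(self, request):
        return self.implementation.say_hello(request)

servicer = SayHelloServicer(SimImplementation())
print(servicer.SayHello(None))  # -> simulation
servicer.inject_implementation(RealImplementation())
print(servicer.SayHello(None))  # -> real
```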
I'm trying to get websockets to work between two machines, one PC and one Raspberry Pi to be exact.
On the PC I'm using socket.io as a client to connect to the server on the Raspberry Pi.
With the following code I initiate the connection and try to send predefined data:
var socket = io.connect(ip + ':8080');
socket.send('volumes', { data: data });
On the raspberry pi, the websocket server looks like this:
from tornado import web, ioloop
from sockjs.tornado import SockJSRouter, SockJSConnection

class EchoConnection(SockJSConnection):
    def on_message(self, msg):
        self.send(msg)

    def check_origin(self, origin):
        return True

if __name__ == '__main__':
    EchoRouter = SockJSRouter(EchoConnection, '/echo')
    app = web.Application(EchoRouter.urls)
    app.listen(8080)
    ioloop.IOLoop.instance().start()
But the connection is never established, and I don't know why. In the server log I get:
WARNING:tornado.access:404 GET /socket.io/1/?t=1412865634790 (192.168.0.16) 9.01ms
And in the Inspector on the pc there is this error message:
XMLHttpRequest cannot load http://192.168.0.10:8080/socket.io/1/?t=1412865634790. Origin sp://793b6d4588ead99e1780e35b71d24d1b285328f8.hue is not allowed by Access-Control-Allow-Origin.
I am out of ideas and don't know what to do. Can you help me?
Thank you!
Well, the solution to your problem has to do with the internal design of the sockjs-tornado library more than with the socket.io library.
Basically, your problem is a cross-origin request: the HTML that generates the request to the websocket server is not at the same origin as the websocket server. I can see from your code that you already identified the problem (and tried to solve it by redefining the method check_origin), but you didn't find the proper way to do it, basically because within this library it is not the SockJSConnection class that extends tornado's WebSocketHandler, so redefining its check_origin is useless. If you dig a little into the code, you will see that there is a class, namely SockJSWebSocketHandler, which has a redefinition of that method itself. It relies on the tornado implementation if that returns true, but it also lets you skip the check via a settings parameter:
class SockJSWebSocketHandler(websocket.WebSocketHandler):
    def check_origin(self, origin):
        # ***
        allow_origin = self.server.settings.get("websocket_allow_origin", "*")
        if allow_origin == "*":
            return True
So, to summarize, you just need to include the setting "websocket_allow_origin": "*" in the server settings and everything should work properly =D
if __name__ == '__main__':
    EchoRouter = SockJSRouter(EchoConnection, '/echo',
                              user_settings={"websocket_allow_origin": "*"})
For a Django application I'm working on, I need to implement a two-way RPC, so that:
the clients can call RPC methods on the platform, and
the platform can call RPC methods on each client.
As the clients will mostly be behind NATs (which means no public IPs and unpredictable, weird firewalling policies), the platform-to-client direction has to be initiated by the client.
I have a pretty good idea of how I could write this from scratch, and I also think I could work something out of the publisher/subscriber model of Twisted, but I've learned that there is always a best way to do it in Python.
So I'm wondering what the best way to do it would be, one that would also integrate best with Django. The code will have to cope with hundreds of clients in the short term, and (we hope) thousands of clients in the medium/long term.
So what library/implementation would you advise me to use?
I'm mostly looking for starting points to RTFM!
websocket is a moving target, with new specifications appearing from time to time. Brave developers implement server-side libraries, but few implement the client side; the usual client for a websocket is a web browser.
websocket is not the only way for a server to talk to a client: event source is a simple and pragmatic way to push information to a client. It's just a never-ending page. Twitter's firehose used this trick before it was formally specified. The client opens an http connection and waits for events. The connection is kept open, and reopened if there is trouble (a dropped connection, something like that).
No timeout, and you can send many events over one connection.
The difference between websocket and eventsource is simple: websocket is bidirectional and hard to implement, eventsource is unidirectional and simple to implement, both client and server side.
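The eventsource wire format itself is trivial: plain-text lines like `data: ...`, with a blank line terminating each event, which is why the client side is so easy to implement. A minimal, stdlib-only parser sketch (handling only the common event and data fields):

```python
def parse_sse(stream_text):
    """Parse a Server-Sent Events stream into a list of events.

    Each event is a dict of its fields; a blank line ends an event.
    """
    events, current = [], {}
    for line in stream_text.splitlines():
        if not line:                 # blank line: the current event is complete
            if current:
                events.append(current)
                current = {}
        elif line.startswith("data:"):
            value = line[len("data:"):].lstrip(" ")
            existing = current.get("data")
            # Multiple data lines are joined with newlines per the spec.
            current["data"] = value if existing is None else existing + "\n" + value
        elif line.startswith("event:"):
            current["event"] = line[len("event:"):].lstrip(" ")
    if current:
        events.append(current)
    return events

stream = "event: volume\ndata: 42\n\ndata: hello\ndata: world\n\n"
print(parse_sse(stream))
# -> [{'event': 'volume', 'data': '42'}, {'data': 'hello\nworld'}]
```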
You can use eventsource as a zombie controller. Each client connects (and reconnects) to the master and waits for instructions. When an instruction is received, the zombie acts, and if needed it can talk back to its master with a classical http request targeting the django app.
Eventsource keeps the connection open, so you need an async server, like tornado. Django needs a sync server, so you need both, behind a dispatcher like nginx. Django (or a cron-like action) talks to the async server, which talks to the right zombie. The zombie talks to django, so the async server doesn't need any persistence; it's just a hub with plugged-in zombies.
Gevent can handle such an http server, but there are no decent docs or examples on this point. It's a shame. I want a car, and you give me a screw.
You can also use Tornado + Tornadio + Socket.io. That's what we are using right now for notifications, and the amount of code you have to write is not that much.
from tornadio2 import SocketConnection, TornadioRouter, SocketServer

class RouterConnection(SocketConnection):
    __endpoints__ = {'/chat': ChatConnection,
                     '/ping': PingConnection,
                     '/notification': NotificationConnection}

    def on_open(self, info):
        print 'Router', repr(info)

MyRouter = TornadioRouter(RouterConnection)

# Create socket application
application = web.Application(
    MyRouter.apply_routes([(r"/", IndexHandler),
                           (r"/socket.io.js", SocketIOHandler)]),
    flash_policy_port=843,
    flash_policy_file=op.join(ROOT, 'flashpolicy.xml'),
    socket_io_port=3001,
    template_path=os.path.join(os.path.dirname(__file__), "templates/notification")
)

class PingConnection(SocketConnection):
    def on_open(self, info):
        print 'Ping', repr(info)

    def on_message(self, message):
        now = dt.utcnow()
        message['server'] = [now.hour, now.minute, now.second, now.microsecond / 1000]
        self.send(message)

class ChatConnection(SocketConnection):
    participants = set()
    unique_id = 0

    @classmethod
    def get_username(cls):
        cls.unique_id += 1
        return 'User%d' % cls.unique_id

    def on_open(self, info):
        print 'Chat', repr(info)
        # Give user unique ID
        self.user_name = self.get_username()
        self.participants.add(self)

    def on_message(self, message):
        pass

    def on_close(self):
        self.participants.remove(self)

    def broadcast(self, msg):
        for p in self.participants:
            p.send(msg)
Here is a really simple solution I could come up with:
import tornado.ioloop
import tornado.web
import time

class MainHandler(tornado.web.RequestHandler):
    @tornado.web.asynchronous
    def get(self):
        self.set_header("Content-Type", "text/event-stream")
        self.set_header("Cache-Control", "no-cache")
        self.write("Hello, world")
        self.flush()
        for i in range(0, 5):
            msg = "%d<br>" % i
            self.write("%s\r\n" % msg)  # content
            self.flush()
            time.sleep(5)

application = tornado.web.Application([
    (r"/", MainHandler),
])

if __name__ == "__main__":
    application.listen(8888)
    tornado.ioloop.IOLoop.instance().start()
and
curl http://localhost:8888
gives output as it comes!
Now I'll just have to implement the full event-source spec and some kind of data serialization between the server and the clients, but that's trivial. I'll post a URL to the lib here when it's done.
I recently played with Django, Server-Sent Events, and WebSocket, and I wrote an article about it at http://curella.org/blog/2012/jul/17/django-push-using-server-sent-events-and-websocket/
Of course, this comes with the usual caveats: Django probably isn't the best fit for evented stuff, and both protocols are still drafts.