How to synchronize socket SID on server and client? - python

I have python_socketio 5.0.4 on the client and Flask-SocketIO 5.0.1 on the server. When the client connects to the server, I would like to synchronize the client's SID between the client and the server. However, the SID printed on the client is different from the SID printed on the server.
Is there any way to make the client's SID the same on the server as on the client?
Here is the code for the client:
import socketio
sio = socketio.Client()
sio.connect("http://localhost:5000")
print(sio.sid) # czNJ6NXIAXP9-vgmAAAK
sio.emit("test_event")
And here is the code for the server:
from flask import Flask, request
from flask_socketio import SocketIO
app = Flask(__name__)
sio = SocketIO(app)
@sio.on("test_event")
def test_event():
    print(request.sid)  # ukJhK9ZIiXY_gTMAAAL <--- this is a different SID

sio.run(app)

The problem is that in the client you are accessing your sid as sio.sid. The sid attribute of the Socket.IO client is private; it is not supposed to be used directly.
Instead, use the sio.get_sid() method to obtain the sid. This used to not be a problem, but in the latest revision of the Socket.IO protocol each namespace is required to have a different sid, so the get_sid() method should be used to obtain the correct one for your namespace.
If you were using a non-default namespace you can pass it as an argument as follows: sio.get_sid(namespace='/my-namespace').
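A minimal client-side sketch of that, reusing the connection URL from the question:
import socketio

sio = socketio.Client()
sio.connect("http://localhost:5000")

# get_sid() returns the sid assigned to the given namespace; with no argument it
# uses the default '/' namespace, so this should match request.sid on the server.
print(sio.get_sid())

sio.emit("test_event")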

Related

Flask-SocketIO on Ubuntu AWS does not receive messages/data, but on the local machine it works fine; no connection refused

Situation:
I deployed my Flask server on an AWS EC2 Ubuntu instance and it is running. A React.js app running on my local machine is used to test whether the remote server is set up correctly; it exercises both the REST API and the WebSocket.
Question:
Everything works fine when both Python and React run on the local machine. When Flask runs on AWS, the REST API works fine, but the WebSocket does not: there is no connection refused error, the protocol switch is 101, and the inspector's network section shows a 200 status. The remote server just never receives the data. I am not sure what is happening or how to fix it; has anyone had the same experience?
This is my client:
const endPoint = "http://3.237.172.105:5000/friends";
const socket = io.connect(endPoint);
const addFriends = (friend) => {
    socket.emit("Addedfriend", {username: name, friendName: friendName});
};
This is my Flask start file; I run it with python3 app.py:
from logging import debug
from flask import Flask
from flask_socketio import SocketIO
from flask_cors import CORS
from Controller.logReg import logReg
from Controller.profile import profile
from Controller.ticket import ticket
from Controller.personal import personal
import logging
#log = logging.getLogger('werkzeug')
#log.setLevel(logging.ERROR)
app = Flask(__name__)
socketio = SocketIO(app, cors_allowed_origins="*")
app.register_blueprint(logReg)
app.register_blueprint(profile)
app.register_blueprint(ticket)
app.register_blueprint(personal)
print("socket started ... ...")
CORS(app)
if __name__ == '__main__':
    print("socket opened")
    socketio.run(app, host='0.0.0.0', port=5000)
This is my socket file; the print of data does not print anything and I am not sure why:
@socketio.on('Addedfriend', namespace='/friends')
def add_friend(data, t):
    print("this is from addFriend" + str(data))
    # print("friend SID: " + str(request.sid))
    user = data['username']
    friend = data['friendName']
    user_friends = FriendsDB.find_one({"username": user})['friends']
    if friend in user_friends:
        emit("Addedfriend", {"result": "already added", "friendPhoto": "", "friendStatus": False})
        return
    if FriendsDB.find_one({"username": friend}) is None:
        emit("Addedfriend", {"result": "Not Exist", "friendPhoto": "", "friendStatus": False})
        return
(Screenshots of the browser network tab, the AWS IP address, and the AWS inbound rules were attached to the original question.)
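One way to narrow down a problem like this is to check whether the Socket.IO handshake for the /friends namespace reaches the server at all, independently of the Addedfriend event. The sketch below is purely diagnostic and not the poster's code; in practice the two handlers would be added to the existing app.py rather than run stand-alone:
from flask import Flask, request
from flask_socketio import SocketIO

app = Flask(__name__)
socketio = SocketIO(app, cors_allowed_origins="*")

@socketio.on('connect', namespace='/friends')
def friends_connect():
    # Fires on the Socket.IO handshake for /friends, before any custom event.
    print("client connected to /friends, sid:", request.sid)

@socketio.on_error('/friends')
def friends_error(e):
    # Exceptions raised inside /friends handlers land here instead of being
    # swallowed silently.
    print("error in a /friends handler:", e)

if __name__ == '__main__':
    socketio.run(app, host='0.0.0.0', port=5000)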

How to handle individual socket.io client events in python?

I am primarily a Javascript developer but I am trying to replicate a server I wrote in Node.js in Python. The server uses Socket.io to communicate with clients and I am having some trouble replicating this specific behaviour in Python:
io.on('connection', function(socket){
    socket.on('disconnect', function(){ });
});
I would like to handle each client's events and messages separately from one another. Any way I could do this in Python? I am using the package flask_socketio to wrap the sockets. Cheers.
As far as I can see, you just want connection and disconnection handlers?
You can do that as follows in Python:
from flask import Flask, render_template
from flask_socketio import SocketIO, emit
app = Flask(__name__)
socketio = SocketIO(app)
@socketio.on('connect')
def connect():
    # your connection logic here
    pass

@socketio.on('disconnect')
def disconnect():
    # your disconnection logic here
    pass

if __name__ == '__main__':
    socketio.run(app)
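If the goal is to treat each client separately, the usual approach in Flask-SocketIO is to key any per-client state on request.sid, which identifies the connection inside every handler. A small sketch under that assumption; the clients dict and the 'message' event name are illustrative:
from flask import Flask, request
from flask_socketio import SocketIO, emit

app = Flask(__name__)
socketio = SocketIO(app)

# Per-client state, keyed by the session id Flask-SocketIO assigns to each connection.
clients = {}

@socketio.on('connect')
def handle_connect():
    clients[request.sid] = {'messages': 0}

@socketio.on('disconnect')
def handle_disconnect():
    clients.pop(request.sid, None)

@socketio.on('message')
def handle_message(data):
    # request.sid tells you which client sent this particular message.
    clients[request.sid]['messages'] += 1
    emit('ack', {'count': clients[request.sid]['messages']})  # sent only to this client by default

if __name__ == '__main__':
    socketio.run(app)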

Flask SocketIO - connect client to server immediately after server start

I've created a single socket endpoint on the server side that looks like this:
Server.py
from client import sockio_client
from flask_socketio import SocketIO
from flask import Flask
app = Flask(__name__)
socketio = SocketIO(app)
@socketio.on('status_update')
def status_update(data):
    print('got something: ', data)

@app.before_first_request
def start_ws_client():
    # now that server is started, connect client
    sockio_client.connect('http://localhost:5000')

if __name__ == "__main__":
    socketio.run(app, debug=True)
And the corresponding client:
Client.py
import socketio
import time
from threading import Thread

sockio_client = socketio.Client()

# wait to connect until server actually started
# bunch of code

def updater():
    while True:
        sockio_client.emit('status_update', 42)
        time.sleep(10)

t = Thread(target=updater)
t.start()
I've got a single background thread running outside of the server and I would like to update clients with the data it periodically emits. I'm sure there is more than one way to do this, but the two options I came up with were to either (i) pass a reference to the socketio object in server.py above to the update function in client by encapsulating the update function in an object or closure which has a reference to the socketio object, or (ii) just use a websocket client from the background job to communicate to the server. Option one just felt funny so I went with (ii), which feels... okish
Now obviously the server has to be running before I can connect the client, so I thought I could use the before_first_request decorator to make sure I only attempt to connect the client after the server has started. However, every time I try, I get:
socketio.exceptions.ConnectionError: Connection refused by the server
At this point the server is definitely running, but no connections are accepted. If I comment out the sockio_client.connect call in server.py and connect from an entirely separate script, everything works as expected. What am I doing wrong? Also, if there are much better ways to do this, please tear it apart.
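For comparison, option (i) from the question can be sketched without a second Socket.IO client at all: hand the updater to the server's own background-task helper and emit through the socketio object directly. Everything below is illustrative rather than the poster's code:
from flask import Flask
from flask_socketio import SocketIO

app = Flask(__name__)
socketio = SocketIO(app)

def updater():
    # Runs inside the server process, so it can push updates to connected
    # clients without opening a client connection back to the same server.
    while True:
        socketio.emit('status_update', {'value': 42})
        socketio.sleep(10)  # cooperative sleep matching the active async mode

if __name__ == '__main__':
    # start_background_task picks the right thread/greenlet primitive for
    # whichever async mode Flask-SocketIO is using.
    socketio.start_background_task(updater)
    socketio.run(app)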

One strange behavior of xmlrpc

Here is the code:
Server part, run on machine 10.42.0.1:
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.server import SimpleXMLRPCRequestHandler

class RequestHandler(SimpleXMLRPCRequestHandler):
    rpc_paths = ()

def adder(x, y):
    # registered below under the public name 'add'
    return x + y

server = SimpleXMLRPCServer(('10.42.0.1', 8000),
                            requestHandler=RequestHandler)
server.register_function(adder, 'add')
print('initialize finish')
server.serve_forever()
Client part, run on machine 10.42.0.2:
import xmlrpc.client
s = xmlrpc.client.ServerProxy('http://10.42.0.1:8000')
print(s.add(2,3))
However, I got this error message on machine 10.42.0.2:
ConnectionRefusedError: [Errno 111]
telnet 10.42.0.1 8000 also failed. Then I changed this line:
server = SimpleXMLRPCServer(('10.42.0.1', 8000),
                            requestHandler=RequestHandler)
to:
server = SimpleXMLRPCServer(('', 8000),
                            requestHandler=RequestHandler)
After restarting the xmlrpc server, the xmlrpc client worked this time. Then I changed the line to:
server = SimpleXMLRPCServer(('10.42.0.1', 8001),
                            requestHandler=RequestHandler)
I started a new xmlrpc server and changed the client code to:
import xmlrpc.client
s = xmlrpc.client.ServerProxy('http://10.42.0.1:8001')
print(s.add(2,3))
After starting a new xmlrpc client, that client now also works.
Can anyone help me explain this strange phenomenon?
It feels a bit like an ARP table that was never built.
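For what it's worth, the telnet check can be reproduced from the client machine with a few lines of Python; the address and port below are taken from the question:
import socket

# Equivalent of `telnet 10.42.0.1 8000`: try to open a TCP connection to the
# address/port the XML-RPC server is supposed to be listening on.
try:
    with socket.create_connection(("10.42.0.1", 8000), timeout=3):
        print("TCP connection succeeded; the server is reachable on this address/port")
except OSError as exc:
    print("TCP connection failed:", exc)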

How to deal with thrift client disconnection issue

My project uses Bottle and HBase; the client connects to HBase via the Python Thrift client. The code, simplified, looks like this:
#!/usr/bin/env python
from bottle import route, run, default_app, request
client = HBaseClient()
@route('/', method='POST')
def index():
    data = client.getdata()
    return data
Now the issue is that if the client disconnects, our requests fail, so I need to make sure the client connection stays alive.
One solution is to use a connection pool; is there a connection pool I can refer to?
Is there any other solution for this issue?
It looks like HappyBase can deal with this issue.
HappyBase has a connection pool that tries to deal with broken connections to some extent: http://happybase.readthedocs.org/en/latest/user.html#using-the-connection-pool
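A minimal sketch of that pool dropped into the Bottle handler above; the pool size, Thrift host, table name, and row key are all illustrative:
import happybase
from bottle import route, run

# A small pool of reusable Thrift connections to HBase.
pool = happybase.ConnectionPool(size=3, host='localhost')

@route('/', method='POST')
def index():
    # The context manager checks a connection out of the pool and returns it
    # afterwards; connections that raised a Thrift error are replaced.
    with pool.connection() as connection:
        table = connection.table('mytable')
        row = table.row(b'row-key')
    return {key.decode(): value.decode() for key, value in row.items()}

run(host='0.0.0.0', port=8080)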
