My project uses Bottle and HBase; the client connects to HBase via the Python Thrift client. The code, simplified, looks like this:
#!/usr/bin/env python
from bottle import route, run, default_app, request

client = HBaseClient()  # project-specific Thrift-based HBase client wrapper

@route('/', method='POST')
def index():
    data = client.getdata()
    return data
Now the issue is that if the client disconnects, our requests fail, so we need to make sure the client stays alive.
One solution is to use a connection pool. Is there an existing connection pool I can refer to?
Is there any other solution for this issue?
It looks like HappyBase can deal with this issue.
HappyBase has a connection pool that tries to deal with broken connections to some extent: http://happybase.readthedocs.org/en/latest/user.html#using-the-connection-pool
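A minimal usage sketch of that pool (the host, table name, and row key here are placeholders, not from the question):

import happybase

# size bounds how many open connections the pool hands out at once
pool = happybase.ConnectionPool(size=3, host='hbase-host')

# the context manager returns the connection to the pool when done,
# and the pool replaces connections it detects as broken
with pool.connection() as connection:
    table = connection.table('mytable')
    row = table.row(b'row-key')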
I've created a single socket endpoint on the server side that looks like this:
Server.py
from client import sockio_client
from flask_socketio import SocketIO
from flask import Flask

app = Flask(__name__)
socketio = SocketIO(app)

@socketio.on('status_update')
def status_update(data):
    print('got something: ', data)

@app.before_first_request
def start_ws_client():
    # now that server is started, connect client
    sockio_client.connect('http://localhost:5000')

if __name__ == "__main__":
    socketio.run(app, debug=True)
And the corresponding client:
Client.py
import time
import socketio
from threading import Thread

sockio_client = socketio.Client()

# wait to connect until server actually started
# bunch of code

def updater():
    while True:
        sockio_client.emit('status_update', 42)
        time.sleep(10)

t = Thread(target=updater)
t.start()
I've got a single background thread running outside of the server, and I would like to update clients with the data it periodically emits. I'm sure there is more than one way to do this, but the two options I came up with were to either (i) pass a reference to the socketio object in server.py above into the update function in client.py, by encapsulating the update function in an object or closure that holds that reference, or (ii) just use a websocket client from the background job to communicate with the server. Option (i) just felt funny, so I went with (ii), which feels... ok-ish.
Now obviously the server has to be running before I can connect the client, so I thought I could use the before_first_request decorator to make sure I only attempt to connect the client after the server has started. However, every time I try, I get:
socketio.exceptions.ConnectionError: Connection refused by the server
At this point the server is definitely running, but no connections are accepted. If I comment out the sockio_client.connect call in server.py and connect from an entirely separate script, everything works as expected. What am I doing wrong? Also, if there are much better ways to do this, please tear it apart.
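For comparison, option (i) from the question, emitting directly through the server's socketio object from a background task with no second socket.io client at all, might look like this (a minimal sketch, assuming Flask-SocketIO's start_background_task API and the default threading mode):

from flask import Flask
from flask_socketio import SocketIO

app = Flask(__name__)
socketio = SocketIO(app)

def updater():
    # pushes updates straight to all connected clients
    while True:
        socketio.emit('status_update', 42)
        socketio.sleep(10)

if __name__ == "__main__":
    socketio.start_background_task(updater)
    socketio.run(app, debug=True)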
I have a very simple Python (Flask-SocketIO) application that works as a server, and another app written in AngularJS that is the client.
In order to handle connected and disconnected clients, I use, respectively:
@socketio.on('connect')
def on_connect():
    print("Client connected")

@socketio.on('disconnect')
def on_disconnect():
    print("Client disconnected")
When a client connects to my app I get information about it, but if a client disconnects (for example because of network problems) I don't get any information.
What is the proper way to handle the situation in which a client disconnects unexpectedly?
There are two types of connections: long-polling and WebSocket.
When you use WebSocket, the client knows instantly that the server has disconnected.
In the case of long-polling, you need to set the ping_interval and ping_timeout parameters (I also found information about heartbeat_interval and heartbeat_timeout, but I don't know how they relate to the ping_* ones).
From the server's perspective, it doesn't know that the client has disconnected; the only way to get that information is to set ping_interval and ping_timeout.
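In Flask-SocketIO these can be passed to the SocketIO constructor; a minimal sketch (the interval and timeout values here are arbitrary):

from flask import Flask
from flask_socketio import SocketIO

app = Flask(__name__)
# ping every 5 seconds; if no response arrives within 10 seconds the
# client is considered gone and the disconnect handler fires
socketio = SocketIO(app, ping_interval=5, ping_timeout=10)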
I am testing Redis on Heroku. I have a simple Flask app that can create the Redis client and kill the client, but ...
@app.route('/client-status')
def client_status():
    redis.client_setname("first")
    redis.client_kill('addr')  # 10.157.2.68:60097
    return "Success"
The question is: how do I get addr? I know how to get the name with redis.client_getname()...
You should use the CLIENT LIST command (http://redis.io/commands/client-list) to get all the information about the clients connected to the server, and then retrieve the addr field.
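With redis-py that looks roughly like this (a sketch; it assumes the client was named "first" as in the route above):

# `redis` is the client instance from the question
for client in redis.client_list():
    # each entry is a dict with fields such as 'addr' and 'name'
    if client.get('name') == 'first':
        redis.client_kill(client['addr'])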
I am trying to validate the SSL connection between client and server.
I have two Python scripts: send.py for the producer and receive.py for the consumer.
I am using the code below to make the connection:
import pika
ssl_option = {'certfile': '/home/rmqca/client1/cert.pem', 'keyfile': '/home/rmqca/client1/key.pem'}
parameters = pika.ConnectionParameters(host='localhost', port=5671, ssl=True, ssl_options=ssl_option)
connection = pika.BlockingConnection(parameters)
Also, in my rabbitmq.config, I am using the parameters below:
{ssl_listeners, [5671]},
{ssl_options, [{cacertfile, "/home/rmqca/testca/cacert.pem"},
               {certfile, "/home/rmqca/server/cert.pem"},
               {keyfile, "/home/rmqca/server/key.pem"},
               {verify, verify_peer},
               {fail_if_no_peer_cert, true}]}
This works fine when I try connecting through SSL.
But since I wanted to cover the negative use case as well, i.e. making a connection without SSL, using code like:
import pika
connection = pika.BlockingConnection()
then, as per my understanding, my client should not be able to connect to the server. But currently it connects fine. I am not sure why this is happening. Am I doing anything wrong here?
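One thing worth checking: a bare pika.BlockingConnection() defaults to localhost:5672, RabbitMQ's plain TCP port, not the SSL listener on 5671. A sketch of a negative test that actually targets the SSL port:

import pika

# aim the plain (non-SSL) client at the SSL listener on 5671; with the
# broker config above this handshake should fail
parameters = pika.ConnectionParameters(host='localhost', port=5671)
try:
    connection = pika.BlockingConnection(parameters)
except pika.exceptions.AMQPConnectionError as exc:
    print('connection rejected as expected:', exc)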
I have a web server running on Django.
Users can create events that are postponed in time.
These events must be recorded in a queue and processed on another server.
Initially I thought of using Twisted, something like:
# client - Django server
from twisted.spread import pb
from twisted.cred import credentials
from twisted.internet import reactor

factory = pb.PBClientFactory()
reactor.connectTCP(server_ip, server_port, factory)
d = factory.login(credentials.UsernamePassword(login, paswd))
d.addCallbacks(self.good_connected, self.bad_connected)
d.addCallback(self.add_to_queue)
reactor.run()

def add_to_queue(self, p):
    p.callRemote("pickup", data)
# server - Twisted server
def perspective_pickup(self, data):
    reactor.callLater(timeout, self.pickup_from_queue)
But now I have big doubts about this approach. Maybe I should not use Twisted at all, or should connect it with Django differently?
Running Twisted inside Django is not a good idea anyway. So try Celery, or run an HTTP server with Twisted and use urllib on the Django side to send data to the Twisted server.
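For the Celery route, a minimal sketch of a postponed task (the broker URL and task body are illustrative placeholders; the countdown corresponds to the timeout from the question):

# tasks.py
from celery import Celery

app = Celery('tasks', broker='redis://localhost:6379/0')

@app.task
def pickup(data):
    # process the postponed event on the worker
    print('picked up:', data)

# On the Django side, schedule the event `timeout` seconds from now:
# pickup.apply_async(args=[data], countdown=timeout)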