It's been a month since I started using django-channels, and I have a feeling that I am not disconnecting websockets properly.
When a client disconnects I want to destroy the group completely if no one is left in it, so that there is no sign of its existence.
When I'm rejecting connections I raise channels.exceptions.DenyConnection or send {'accepted': 'False'}.
I was just wondering whether this is the right way to do the things I've mentioned.
Try calling self.close().
From the Channels documentation:
class MyConsumer(WebsocketConsumer):
    def connect(self):
        # Called on connection.
        # To accept the connection call:
        self.accept()
        # Or accept the connection and specify a chosen subprotocol.
        # A list of subprotocols specified by the connecting client
        # will be available in self.scope['subprotocols']
        self.accept("subprotocol")
        # To reject the connection, call:
        self.close()
As far as I understand it, the way to remove a channel from a group is group_discard; once the last channel has been discarded, the group effectively no longer exists.
def disconnect(self, close_code):
    async_to_sync(self.channel_layer.group_discard)("yourgroupname", self.channel_name)
Without having tested this, I would assume that raising an exception results in a 500 error at the client, and a client receiving an error would probably not interpret that as "closed normally".
See the Channels docs here: https://channels.readthedocs.io/en/latest/topics/channel_layers.html#groups
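Putting the two answers together, a minimal consumer could look roughly like this. This is a sketch, not the documented pattern: it assumes the default channel layer is configured, reuses the group name "yourgroupname" from the answer above, and the user check assumes AuthMiddlewareStack is in the routing.
from asgiref.sync import async_to_sync
from channels.generic.websocket import WebsocketConsumer

class MyConsumer(WebsocketConsumer):
    group_name = "yourgroupname"  # assumed name, taken from the answer above

    def connect(self):
        # Reject unwanted connections with close(); the client just sees a closed socket.
        if not self.scope.get("user") or not self.scope["user"].is_authenticated:
            self.close()
            return
        async_to_sync(self.channel_layer.group_add)(self.group_name, self.channel_name)
        self.accept()

    def disconnect(self, close_code):
        # Remove this channel from the group; nothing else is needed to "destroy" it.
        async_to_sync(self.channel_layer.group_discard)(self.group_name, self.channel_name)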
To simplify things, assume a TCP client-server app where the client sends a request and the server responds. The server uses sendall to respond to each client.
Now assume a bad client that sends requests to the server but doesn't really handle the responses, i.e. the client never calls socket.recv. (It doesn't have to be a bad client, by the way; it may just be a slow consumer on the other end.)
What ends up happening is that the server keeps sending responses using sendall until, presumably, a buffer gets full, at which point sendall blocks and never returns.
This seems like a common problem to me so what would be the recommended solution?
Is there something like a try-send that would raise or return an EWOULDBLOCK (or similar) if the recipient's buffer is full? I'd like to avoid non-blocking select type calls if possible (happy to go that way if there are no alternatives).
Thank you in advance.
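For reference, the "try-send" behaviour described above can be approximated by putting the socket into non-blocking mode, in which case send raises BlockingIOError (EWOULDBLOCK/EAGAIN) when nothing can be written. A rough sketch, assuming Python 3 and ignoring partial sends:
import socket

def try_send(sock: socket.socket, message: bytes) -> bool:
    """Attempt a single non-blocking send; return False if the buffer is full."""
    sock.setblocking(False)
    try:
        sock.send(message)  # may send only part of message; a real version would track that
        return True
    except BlockingIOError:
        # Kernel send buffer is full (the peer isn't reading fast enough).
        return False
    finally:
        sock.setblocking(True)  # restore blocking mode for later calls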
Following rveed's comment, here's a solution that works for my case:
def send_to_socket(self, sock: socket.socket, message: bytes) -> bool:
    try:
        sock.settimeout(10.0)  # protect against bad clients / slow consumers by making this time out (instead of blocking)
        res = sock.sendall(message)
        sock.settimeout(None)  # put back to blocking (if needed for subsequent calls to recv, etc. using this socket)
        if res is not None:
            return False
        return True
    except socket.timeout as st:
        # do whatever you need to here
        return False
    except Exception as ex:
        # handle other exceptions here
        return False
If needed, instead of setting the timeout to None afterwards (i.e. back to blocking), you can store the previous timeout value (using gettimeout) and restore to that.
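That variant might look something like this (same shape as the function above):
def send_to_socket(self, sock: socket.socket, message: bytes) -> bool:
    previous_timeout = sock.gettimeout()  # None (blocking) or a float
    try:
        sock.settimeout(10.0)
        sock.sendall(message)
        return True
    except socket.timeout:
        return False
    finally:
        sock.settimeout(previous_timeout)  # restore whatever mode the socket was in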
I have a BlockingConnection, and I follow the examples in the pika documentation. But in all of them, the example code to start consuming messages is:
connection = pika.BlockingConnection()
channel = connection.channel()
channel.basic_consume('test', on_message)
try:
    channel.start_consuming()
except KeyboardInterrupt:
    channel.stop_consuming()
connection.close()
(with more or less details).
I have to write many scripts, and I want to run one after another (for test/research purposes). But the above code requires me to press ^C for each one.
I tried adding some of the timeouts explained in the documentation, but had no luck. For example, I am looking for a parameter so that if the client hasn't consumed any message in the last X seconds, the script finishes. Is this possible with the pika library, or do I have to change my approach?
Don't use start_consuming if you don't want your code to block. Either use SelectConnection or the consume generator method on the channel; you can add a timeout to the parameters passed to consume.
NOTE: the RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on StackOverflow.
import pika

parameters = pika.ConnectionParameters(host="localhost")
connection = pika.BlockingConnection(parameters)
channel = connection.channel()

def ack_message(channel, method):
    """Note that `channel` must be the same pika channel instance via which
    the message being ACKed was retrieved (AMQP protocol constraint).
    """
    if channel.is_open:
        channel.basic_ack(method.delivery_tag)
    else:
        # Channel is already closed, so we can't ACK this message;
        # log and/or do something that makes sense for your app in this case.
        pass

def callback(channel, method, properties, body):
    ack_message(channel, method)
    print("body", body, flush=True)

channel.basic_consume(queue="hello", on_message_callback=callback)
channel.start_consuming()
connection.close()
The original code is from Luke Bakken's answer, but I have edited it a little bit.
:)
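To get the "stop after X seconds without a message" behaviour from the question, a sketch using the consume generator with inactivity_timeout might look like this (in recent pika versions the generator yields (None, None, None) when the timeout expires):
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()

# Stop consuming if nothing arrives on "hello" for 30 seconds.
for method, properties, body in channel.consume("hello", inactivity_timeout=30):
    if method is None:
        break  # the inactivity timeout expired
    print("body", body, flush=True)
    channel.basic_ack(method.delivery_tag)

channel.cancel()  # cancel the consumer created by consume()
connection.close()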
It's a bit late, but perhaps someone will benefit from this. You can use the blocked_connection_timeout argument in pika.ConnectionParameters() as follows:
connection = pika.BlockingConnection(
    pika.ConnectionParameters(
        heartbeat=600,
        blocked_connection_timeout=600,
        host=self.queue_host,
        port=constants.RABBTIMQ_PORT,
        virtual_host=self.rabbitmq_virtual_host,
        credentials=pika.PlainCredentials(
            username=self.rabbitmq_username,
            password=self.rabbitmq_password
        )
    )
)
With my current setup, I'm running a server with Django and I'm trying to automate backing up to the cloud whenever a POST/PUT action is made. To get around the delay (ping to the server hovers around 100 ms, and an action can involve upwards of 10 items posted at once), I decided to create a separate entity with a requests client and simply have it handle all backup functions.
To do this, I have that entity listen on a UNIX socket using Twisted, and I send it a string whenever an endpoint is hit. The problem, however, is that if too many endpoints get called at once or in rapid succession, the data sent over the socket no longer arrives in order. Is there any way to prevent this? Code below:
UNIX server:
class BaseUNXServerProtocol(LineOnlyReceiver):
    rest_client = RestClient()

    def connectionMade(self):
        print("UNIX Client connected!")

    def lineReceived(self, line):
        print("Line Received!")

    def dataReceived(self, data):
        string = data.decode("utf-8")
        jstring = json.loads(data)
        if jstring['command'] == "upload_object":
            self.rest_client.upload(jstring['model_name'], jstring['model_id'])
UNIX client:
class BaseUnixClient(object):
    path = BRANCH_UNX_PATH
    connected = False

    def __init__(self):
        self.init_vars()
        self.connect()

    def connect(self):
        if os.path.exists(self.path):
            self.client = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.client.connect(self.path)
            self.connected = True
        else:
            print("Could not connect to path: {}".format(self.path))

    def call_to_upload(self, model_class, model_id, upload_type):
        self.send_string(_messages.branch_upload_message(model_class, model_id, upload_type))
Endpoint perform_create: (Essentially a hook that gets called whenever a new object is POSTed)
def perform_create(self, serializer):
    instance = serializer.save()
    # Call for upload/notify
    UnixClient().call_to_upload(model_class=type(instance).__name__, model_id=instance.id, upload_type="create")
SOCK_STREAM connections are always ordered. Data on one connection comes out in the same order it went in (or the connection breaks).
The only obvious problem with the code you shared is that you shouldn't override dataReceived on a LineOnlyReceiver subclass. All your logic belongs in lineReceived (see the sketch below).
That wouldn't cause out-of-order data problems but it could lead to framing issues (like partial JSON messages being processed, or multiple messages being combined) which would probably cause json.loads to raise an exception.
So, to answer your question: data is delivered in order. If you are seeing out-of-order operation, it's because the data is being sent in a different order than you expect or because there is a divergence between the order of data delivery and the order of observable side-effects. I don't see any way to provide a further diagnosis without seeing more of your code.
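A sketch of the protocol with the parsing moved into lineReceived, assuming each message is sent as a single JSON document terminated by the protocol's line delimiter:
import json
from twisted.protocols.basic import LineOnlyReceiver

class BaseUNXServerProtocol(LineOnlyReceiver):
    rest_client = RestClient()  # as in the question

    def connectionMade(self):
        print("UNIX Client connected!")

    def lineReceived(self, line):
        # Each delimited line is one complete message, so no framing problems.
        jstring = json.loads(line.decode("utf-8"))
        if jstring['command'] == "upload_object":
            self.rest_client.upload(jstring['model_name'], jstring['model_id'])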
Seeing your sending code, the problem is that you're using a new connection for every perform_create operation. There is no guarantee about delivery order across different connections. Even if your program does:
establish connection a
send data on connection a
establish connection b
send data on connection b
close connection a
close connection b
The receiver may decide to process data on connection b before data on connection a. This is because the underlying event notification system (select, epoll_wait, etc) doesn't (ever, as far as I know) preserve information about the ordering of the events it is reporting on. Instead, results come out in a pseudo-random order or a boring deterministic order (such as ascending by file descriptor number).
To fix your ordering problem, make one UnixClient and use it for all of your perform_create calls.
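For example (a sketch; the module and helper name are made up for illustration), a single shared client created once and reused:
# somewhere importable, e.g. a hypothetical unix_client.py
_shared_client = None

def get_client():
    """Return one process-wide BaseUnixClient, creating it on first use."""
    global _shared_client
    if _shared_client is None:
        _shared_client = BaseUnixClient()
    return _shared_client

# in the view set:
def perform_create(self, serializer):
    instance = serializer.save()
    get_client().call_to_upload(
        model_class=type(instance).__name__,
        model_id=instance.id,
        upload_type="create",
    )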
I'm new to Twisted. I have written a client which connects to a server on two ports, 8037 and 8038. I understand that the factory creates two connection objects. Now when I press Ctrl-C, it says:
Connection Lost Connection to the other side was lost in a non clean fashion.
Connection Lost Connection to the other side was lost in a non clean fashion.
Below is the code:
from twisted.internet import protocol, reactor

class TestClient(protocol.Protocol):
    def __init__(self):
        pass

    def connectionMade(self):
        print "Connected "
        self.sayHello()

    def connectionLost(self, reason):
        self.transport.loseConnection()

    def sayHello(self):
        self.transport.write("Hello")

    def dataReceived(self, data):
        print "Received data ", data

class TestClientFactory(protocol.ClientFactory):
    def buildProtocol(self, addr):
        return TestClient()

    def clientConnectionFailed(self, connector, reason):
        print "Connection Failed ", reason.getErrorMessage()

    def clientConnectionLost(self, connector, reason):
        print "Connection Lost ", reason.getErrorMessage()

reactor.connectTCP("<server_ip>", 8037, TestClientFactory())
reactor.connectTCP("<server_ip>", 8038, TestClientFactory())
reactor.run()
How can I make the client close both TCP connections cleanly?
How can I call the sayHello() method for only one connection?
I'm new to Twisted, so an example would be helpful.
Thanks
When you are connected, if you want to call sayHello, you can use an RPC-like approach.
For example, send a message such as 'sayHello_args', parse the message, and call sayHello with the parsed arguments.
If you don't want to send any message, you can trigger it from a callback once connected, e.g. d.addCallback(sayHello):
d = defer.succeed(0)
d.addCallback(lambda _: self.sayHello())
And if you want to close the connections, use reactor.stop().
Unclean connection shutdown is really nothing to worry about. Getting a clean exit would potentially make your shutdown process slower and buggier because it requires a bunch of additional code, and you have to be able to deal with abnormal network connection termination no matter what. In fact calling it "clean" is maybe even a bit misleading: "simultaneously confirmed" might be closer to what it's actually telling you about how the connection was closed.
As far as how to call sayHello, I don't fully understand your question, but if you use AMP, calling a method on the opposite side of the connection is pretty easy.
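If the goal is simply to greet on one of the two ports, one way (a sketch; the say_hello flag is an assumption, not part of the original code) is to parameterize the factory and check it in connectionMade:
from twisted.internet import protocol, reactor

class TestClient(protocol.Protocol):
    def connectionMade(self):
        # The default Factory.buildProtocol sets self.factory automatically.
        if self.factory.say_hello:
            self.transport.write(b"Hello")

class TestClientFactory(protocol.ClientFactory):
    protocol = TestClient

    def __init__(self, say_hello):
        self.say_hello = say_hello

reactor.connectTCP("<server_ip>", 8037, TestClientFactory(say_hello=True))
reactor.connectTCP("<server_ip>", 8038, TestClientFactory(say_hello=False))
reactor.run()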
I need to set up a jabber bot, using python, that will send messages based on the online/offline availability of several contacts.
I've been looking into pyxmpp and xmpppy, but couldn't find any way (at least nothing straightforward) to check the status of a given contact.
Any pointers on how to achieve this?
Ideally I would like something like bot.status_of("contact1@gmail.com") returning "online".
I don't think it is possible in the way you want it because the presence of contacts (which contains the information about their availability) is received asynchronously by the bot.
You will have to write a presence handler function and register it with the connection. This function will get called whenever a presence stanza is received from a contact. The parameters of the call will tell you whether the contact is online or not, and depending on that you can send the message to the contact.
Using xmpppy, you would do something like this:
import xmpp

def connect(jid, password, res, server, proxy, use_srv):
    conn = xmpp.Client(jid.getDomain())
    if not conn.connect(server=server, proxy=proxy, use_srv=use_srv):
        log('unable to connect to server.')
        return None
    if not conn.auth(jid.getNode(), password, res):
        log('unable to authorize with server.')
        return None
    conn.RegisterHandler('presence', callback_presence)
    return conn

def callback_presence(sess, pres):
    if pres.getStatus() == "online":
        msg = xmpp.Message(pres.getFrom(), "Hi!")
        conn.send(msg)

conn = connect(...)
PS: I have not tested the code but it should be something very similar to this.
What you want is done via a <presence type="probe"/>. This is done on behalf of the client, and SHOULD not be done by them (as per the RFC for XMPP IM). Since this is a bot, you could implement the presence probe, and receive the current presence of a given entity. Remember to send the probe to the bare JID (sans resource), because the server responds on behalf of clients for presence probes. This means your workflow will look like:
<presence/>                                          // I'm online!   BOT
<presence from="juliet@capulet.org/balcony"/>        // RESPONSE
<presence from="romeo@montague.net/hallway"/>        // and so on...  RESPONSE
<presence type="probe" to="benvolio@montague.net"/>  // BOT
<presence from="benvolio@montague.net/hallway">      // RESPONSE
    <status>Huzzah!</status>
    <priority>3</priority>
</presence>
Take a look at that portion of the RFC for more in depth information on how your call flow should behave.
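With xmpppy, sending such a probe might look roughly like this (a sketch, assuming a connected client conn as in the earlier answer; the JID is taken from the example above):
import xmpp

# Ask the server for benvolio's current presence; the probe goes to the bare JID.
conn.send(xmpp.Presence(to='benvolio@montague.net', typ='probe'))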
What you need to do is:
Connect.
Declare a presence handler. That handler maintains a cache of every contact's presence (see details below)
Send initial presence to the server, which will provoke the reception of presence statuses from all of your online contacts, which will in turn trigger the handler.
The status_of() method reads the cache and deduces the contact's presence status immediately.
Now, it depends on what presence information you need. For the sake of simplicity, let's pretend you just need an "online"/"offline" value. The cache would be a dictionary whose keys are the bare (no resource) JIDs, and the values are a set of connected resources for this JID. For example:
{'foo@bar.com': set(['work', 'notebook']), 'bob@example.net': set(['gtalk'])}
Now, when you receive an "available" presence from a certain JID/resource:
if jid not in cache:
    cache[jid] = set()
cache[jid].add(resource)
Reciprocally, when you receive an "unavailable" presence:
if jid in cache:  # bad people send "unavailable" just to make your app crash
    cache[jid].discard(resource)
    if 0 == len(cache[jid]):
        del cache[jid]
And now:
def is_online(jid):
    return jid in cache
Of course, if you want more detailed information, you could maintain not only the list of available resources for a contact but also the status, status message, priority, etc. of each resource.
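Putting the pieces together with xmpppy (a sketch along the lines of the earlier answer; conn is assumed to be an authenticated xmpp.Client), the handler and a status_of() helper could look like this:
cache = {}  # bare JID -> set of connected resources

def callback_presence(sess, pres):
    jid = pres.getFrom().getStripped()     # bare JID, without the resource
    resource = pres.getFrom().getResource()
    ptype = pres.getType()                 # None means "available"
    if ptype == 'unavailable':
        if jid in cache:
            cache[jid].discard(resource)
            if not cache[jid]:
                del cache[jid]
    elif ptype is None:
        cache.setdefault(jid, set()).add(resource)

def status_of(jid):
    return "online" if jid in cache else "offline"

conn.RegisterHandler('presence', callback_presence)
conn.sendInitPresence()  # provokes the initial burst of presences from contacts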