I'm new to Twisted. I have written a client which connects to a server on two ports, 8037 and 8038. I understand that the factory creates two connection objects. Now when I press Ctrl-C, it says:
Connection Lost Connection to the other side was lost in a non clean fashion.
Connection Lost Connection to the other side was lost in a non clean fashion.
Below is the code:
from twisted.internet import protocol, reactor

class TestClient(protocol.Protocol):
    def __init__(self):
        pass

    def connectionMade(self):
        print "Connected "
        self.sayHello()

    def connectionLost(self, reason):
        self.transport.loseConnection()

    def sayHello(self):
        self.transport.write("Hello")

    def dataReceived(self, data):
        print "Received data ", data

class TestClientFactory(protocol.ClientFactory):
    def buildProtocol(self, addr):
        return TestClient()

    def clientConnectionFailed(self, connector, reason):
        print "Connection Failed ", reason.getErrorMessage()

    def clientConnectionLost(self, connector, reason):
        print "Connection Lost ", reason.getErrorMessage()

reactor.connectTCP("<server_ip>", 8037, TestClientFactory())
reactor.connectTCP("<server_ip>", 8038, TestClientFactory())
reactor.run()
How can I make the client close both TCP connections cleanly?
How do I call the sayHello() method for only one connection?
I'm new to Twisted, so an example would be helpful.
Thanks
When you are connected, if you want to call sayHello, you can use an RPC-style approach: for example, send a message like 'sayHello_args', parse the message, and call sayHello with the parsed arguments.
If you don't want to send any message, you can fire sayHello from a deferred once the connection is made:
d = defer.succeed(0)
d.addCallback(lambda _: self.sayHello())
And if you want to close the connections, use reactor.stop().
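A minimal, Twisted-free sketch of that parse-and-dispatch idea (the message format and the handler table are just assumptions for illustration; only 'sayHello' mirrors the method in the question):

```python
def dispatch(message, handlers):
    """Split a 'name_arg1_arg2' style message and call the matching handler."""
    name, *args = message.split("_")
    return handlers[name](*args)

# A hypothetical handler table for illustration.
handlers = {"sayHello": lambda *a: "Hello " + " ".join(a)}

print(dispatch("sayHello_world", handlers))  # Hello world
```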
Unclean connection shutdown is really nothing to worry about. Getting a clean exit would potentially make your shutdown process slower and buggier because it requires a bunch of additional code, and you have to be able to deal with abnormal network connection termination no matter what. In fact calling it "clean" is maybe even a bit misleading: "simultaneously confirmed" might be closer to what it's actually telling you about how the connection was closed.
As far as how to call sayHello, I don't fully understand your question, but if you use AMP, calling a method on the opposite side of the connection is pretty easy.
Related
With my current setup, I'm running a server with Django and I'm trying to automate backing up to the cloud whenever a POST/PUT action is made. To circumvent the delay (Ping to server hovers around 100ms and an action can reach upwards of 10 items posted at once), I decided to create a separate entity with a requests client and simply have this handle all backing up functions.
To do this, I have that entity listen on a UNIX socket using Twisted, and I send it a string whenever I hit an endpoint. The problem, however, is that if too many endpoints get called at once or in rapid succession, the data sent over the socket no longer arrives in order. Is there any way to prevent this? Code below:
UNX Server:
class BaseUNXServerProtocol(LineOnlyReceiver):
    rest_client = RestClient()

    def connectionMade(self):
        print("UNIX Client connected!")

    def lineReceived(self, line):
        print("Line Received!")

    def dataReceived(self, data):
        string = data.decode("utf-8")
        jstring = json.loads(data)
        if jstring['command'] == "upload_object":
            self.rest_client.upload(jstring['model_name'], jstring['model_id'])
Unix Client:
class BaseUnixClient(object):
    path = BRANCH_UNX_PATH
    connected = False

    def __init__(self):
        self.init_vars()
        self.connect()

    def connect(self):
        if os.path.exists(self.path):
            self.client = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.client.connect(self.path)
            self.connected = True
        else:
            print("Could not connect to path: {}".format(self.path))

    def call_to_upload(self, model_class, model_id, upload_type):
        self.send_string(_messages.branch_upload_message(model_class, model_id, upload_type))
Endpoint perform_create: (Essentially a hook that gets called whenever a new object is POSTed)
def perform_create(self, serializer):
    instance = serializer.save()
    # Call for upload/notify
    UnixClient().call_to_upload(model_class=type(instance).__name__, model_id=instance.id, upload_type="create")
SOCK_STREAM connections are always ordered. Data on one connection comes out in the same order it went in (or the connection breaks).
The only obvious problem with the code you shared is that you shouldn't override dataReceived on a LineOnlyReceiver subclass. All of your logic belongs in lineReceived.
That wouldn't cause out-of-order data problems but it could lead to framing issues (like partial JSON messages being processed, or multiple messages being combined) which would probably cause json.loads to raise an exception.
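You can see the framing hazard without Twisted: if two JSON messages arrive in a single read (which stream sockets are allowed to do), json.loads chokes on the concatenated payload:

```python
import json

# Two messages delivered in one dataReceived call, as streams may do:
combined = '{"command": "upload_object"}{"command": "upload_object"}'

try:
    json.loads(combined)
except ValueError as e:          # json.JSONDecodeError is a subclass of ValueError
    print("framing error:", e)   # complains about "Extra data"
```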
So, to answer your question: data is delivered in order. If you are seeing out-of-order operation, it's because the data is being sent in a different order than you expect or because there is a divergence between the order of data delivery and the order of observable side-effects. I don't see any way to provide a further diagnosis without seeing more of your code.
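The in-order guarantee on a single stream connection is easy to demonstrate with a stdlib socketpair, which behaves like the AF_UNIX sockets in the question:

```python
import socket

a, b = socket.socketpair()          # a connected pair of stream sockets
for i in range(3):
    a.sendall("msg{}\n".format(i).encode())
a.close()

received = b""
while True:
    chunk = b.recv(1024)            # chunk boundaries may vary...
    if not chunk:
        break
    received += chunk
b.close()

print(received)                     # ...but byte order never does: b'msg0\nmsg1\nmsg2\n'
```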
Seeing your sending code, the problem is that you're using a new connection for every perform_create operation. There is no guarantee about delivery order across different connections. Even if your program does:
establish connection a
send data on connection a
establish connection b
send data on connection b
close connection a
close connection b
The receiver may decide to process data on connection b before data on connection a. This is because the underlying event notification system (select, epoll_wait, etc) doesn't (ever, as far as I know) preserve information about the ordering of the events it is reporting on. Instead, results come out in a pseudo-random order or a boring deterministic order (such as ascending by file descriptor number).
To fix your ordering problem, make one UnixClient and use it for all of your perform_create calls.
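A hedged sketch of that fix (the names here are made up for illustration): keep one module-level client and reuse it, instead of constructing UnixClient() inside every perform_create call:

```python
import socket

_client = None  # one shared connection for the whole process

def get_client(path):
    """Return the shared UNIX-socket client, connecting on first use."""
    global _client
    if _client is None:
        _client = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        _client.connect(path)
    return _client

# perform_create would then do something like:
#   get_client(BRANCH_UNX_PATH).sendall(message_bytes)
```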
I have inherited python/twisted code written by a former employee.
The code that I have (and it works) opens a serial port and receives data for 5 seconds, then writes it back in reverse order. Here is the code:
from twisted.internet import reactor
from twisted.internet.protocol import Protocol
from twisted.internet.serialport import SerialPort
import serial

class ReverseEchoProtocol(Protocol):
    """Wait for specific amount of data.

    Regardless of success, closes connection timeout seconds after opening.
    """

    def __init__(self, port, timeout, logger):
        self._logger = logger
        self._timeout = timeout

    def connectionMade(self):
        self._logger.info('RS485 connection made.')
        reactor.callLater(self._timeout, self.transport.loseConnection, 'timeout')

    def connectionLost(self, reason):
        self._logger.info('RS485 connection lost. ' + str(reason))

    def dataReceived(self, data):
        self._logger.info('RS485 received data. ' + repr(data))
        self.transport.write(data[::-1])
        self.transport.flushOutput()
And from inside a python function the above code is initiated with this call:
protocol = ReverseEchoProtocol(port, 5, self._logger)
try:
    port.open(protocol)
except serial.SerialException as err:
    # print/log err.message here
    return False
return True
This call to port.open returns immediately after successfully opening the port (well before the 5 seconds complete).
Here is what I am trying to write. From inside a python function, I need to initiate a serial transaction. It needs to wait for the transaction to either complete, fail or timeout.
Here is what the serial transaction needs to do:
The transaction is passed in a string and a timeout value.
The serial port is opened. Failure to open results in an error being returned
The string is written to the serial port. Failure to write results in an error being returned
If write is successful, the same port is then continually read for "timeout" seconds. As data is read (could be multiple reads), it is appended to a string.
After "timeout" seconds, the string of all data read from the port during that time is returned (or the empty string if nothing is read).
Here is my question....trying to adapt the code I already have, I can write a new protocol. In connectionMade, it can do the write, initiate the read and then setup a timeout by calling reactor.callLater. Inside dataReceived I can append the read data to a string. And inside connectionLost I can return the string read.
But how do I make the python function calling port.open wait until the transaction completes? (Is there something like a reactor.wait function or a join function?) Also, if there is an error (or exception), how do I pass that up (a try block?), and how do I pass the string back up to the calling function?
I think the code I inherited gets me close...I just need those couple of questions answered for me to be able to complete the task.
To answer your first two questions: you are looking for reactor.run() to run the Twisted main loop. But it sounds (and looks) like you are expecting a blocking API, while Twisted is event-driven, which may mean you are forcing the use of Twisted. You could just use the serial module directly, without Twisted, to get what you want done. If you do want to be event-driven and non-blocking, you will have to ask more specific questions about that.
I'm having a bizarre issue. Basically, the problem I have right now is dealing with two different LineReceiver servers that are connected to each other. Essentially, if I were to input something into server A, then I want some output to appear in server B. And I would like to do this vice versa. I am running two servers on two different source files (also running them on different processes via & shellscript) ServerA.py and ServerB.py where the ports are (12650 and 12651) respectively. I am also connecting to each server using telnet.
from twisted.internet import protocol, reactor
from twisted.protocols.basic import LineReceiver

class ServerA(LineReceiver):
    def connectionMade(self):
        self.transport.write("Is Server A\n")

    def dataReceived(self, data):
        self.sendLine(data)

    def lineReceived(self, line):
        self.transport.write(line)

def main():
    client = protocol.ClientFactory()
    client.protocol = ServerA
    reactor.connectTCP("localhost", 12650, client)

    server = protocol.ServerFactory()
    server.protocol = ServerA
    reactor.listenTCP(12651, server)

    reactor.run()

if __name__ == '__main__':
    main()
My issue is with sendLine. When I call sendLine from serverA with some arbitrary string, serverA spits out the exact string instead of sending it down the connection made in main(). Why is this happening? I've looked around and tried each solution I came across, but I can't get it to work properly. The bizarre thing is that my friend is doing essentially the same thing and gets working results, yet this is the simplest program I could think of to reproduce the problem.
In any case, the gist is, I'm expecting to get the input I put into serverA to appear in serverB.
Note: Server A and Server B have the exact same source code save for the class names and ports.
You have overridden dataReceived. That means that lineReceived will never be called, because it is LineReceiver's dataReceived implementation that eventually calls lineReceived, and you're never calling up to it.
You should only need to override lineReceived and then things should work as you expect.
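To see why, here is a tiny stand-in for the dispatch (not Twisted's actual implementation, just the idea): LineReceiver's dataReceived buffers incoming bytes and calls lineReceived for each complete line, so overriding dataReceived removes the buffering, and lineReceived never runs:

```python
class MiniLineReceiver:
    """Toy version of LineReceiver's dataReceived -> lineReceived dispatch."""
    delimiter = b"\r\n"

    def __init__(self):
        self._buffer = b""

    def dataReceived(self, data):
        self._buffer += data
        while self.delimiter in self._buffer:
            line, self._buffer = self._buffer.split(self.delimiter, 1)
            self.lineReceived(line)

    def lineReceived(self, line):
        print("got line:", line)

class Broken(MiniLineReceiver):
    def dataReceived(self, data):   # shadows the buffering entirely
        pass                        # so lineReceived is never called

MiniLineReceiver().dataReceived(b"hello\r\n")  # got line: b'hello'
Broken().dataReceived(b"hello\r\n")            # prints nothing
```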
I have a Python socket client program. What I need is for my client to wait for the server to be available for connection, and then connect to the server. Any ideas?
Thanks in advance....
Probably the easiest solution is to run socket.connect_ex() in a while loop, something like this (assuming you want to use TCP):
import socket
from time import sleep

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
while s.connect_ex(("10.0.0.1", 80)) != 0:
    sleep(10)
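A variation on the same idea, with a retry cap and a growing delay instead of a fixed 10-second sleep (the attempt count and delay values are arbitrary):

```python
import socket
import time

def connect_with_retry(host, port, attempts=5, delay=0.5):
    """Try to connect until the server is up; return a socket or None."""
    for i in range(attempts):
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        if s.connect_ex((host, port)) == 0:
            return s                  # connected
        s.close()
        time.sleep(delay * (2 ** i))  # back off before the next attempt
    return None
```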
I would recommend creating a function that connects to the server and then wrapping it with a decorator. This keeps the connection logic and the retry logic separate, which can make the code easier to maintain.
However, this could be overkill, and could end up overcomplicating the code, if you are only going to attempt to reconnect to the server once. But if other functions in the code also need retry behavior, I would highly recommend the decorator, since it reduces redundancy in the code.
import socket
from time import sleep

def solve_issue():
    sleep(10)

def attempt_reconnect(func):
    def wrapper(*args, **kwargs):
        MAX_RETRY = 2
        return_value = None
        for i in range(MAX_RETRY):
            try:
                return_value = func(*args, **kwargs)
                break
            except Exception as e:
                print("error " + str(e))
                return_value = e
                solve_issue()
        return return_value
    return wrapper

@attempt_reconnect
def connect_to_server():
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
I'm using python's telnetlib to connect to a remote telnet server. I'm having a hard time detecting if the connection is still open, or if the remote server closed it on me.
I will notice the connection is closed the next time I try to read or write to it, but would like to have a way to detect it on demand.
Is there a way to send some sort of an 'Are You There' packet without affecting the actual connection? The telnet RFC supports an "are you there" and "NOP" commands - just not sure how to get telnetlib to send them!
You should be able to send a NOP this way:
from telnetlib import IAC, NOP
...
telnet_object.sock.sendall(IAC + NOP)
I've noticed that, for some reason, sending only once was not enough... I "discovered" this by accident. I had something like this:
def check_alive(telnet_obj):
    try:
        if telnet_obj.sock:  # this way I've taken care of the case where .close() was called
            telnet_obj.sock.send(IAC + NOP)  # notice the use of send instead of sendall
            return True
    except:
        logger.info("telnet send failed - dead")
        pass

# later on
logger.info("is alive %s", check_alive(my_telnet_obj))
if check_alive(my_telnet_obj):
    # do whatever
After a few runs I noticed that the log message said "is alive True" but the code didn't enter the if, and the "telnet send failed - dead" message was printed. So in my latest implementation, as I was saying, I just call the .send() method 3 times (just in case 2 were not enough).
That's my 2 cents, hope it helps
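The probe idea is easy to reproduce with plain sockets; here a socketpair stands in for the telnet connection, and the byte values are the same ones telnetlib uses for IAC and NOP. (A real TCP peer may, as noted above, absorb a send or two before the failure surfaces, which is why repeated sends helped.)

```python
import socket

IAC, NOP = b"\xff", b"\xf1"  # same byte values as telnetlib.IAC / telnetlib.NOP

def check_alive(sock):
    """Probe the connection by sending IAC+NOP; False once the peer is gone."""
    try:
        sock.sendall(IAC + NOP)
        return True
    except OSError:
        return False

a, b = socket.socketpair()
print(check_alive(a))   # True: peer is still open
b.close()
print(check_alive(a))   # False: send fails once the peer has closed
```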
Following up on David's solution, after close() on the interface, the sock attribute changes from being a socket._socketobject to being the integer 0. The call to .sendall fails with an AttributeError if the socket is closed, so you may as well just check its type.
Tested with Linux and Windows 7.
The best way to detect whether a connection is closed is through the socket object, so it's easiest to check it this way:

def is_connected(telnet_obj):
    return telnet_obj.get_socket().fileno()

If the socket is closed, this returns -1.
I took this code from this question.
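The same behavior can be seen with a plain stdlib socket, since telnetlib's get_socket() just returns the underlying socket object:

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
print(s.fileno())   # a non-negative file descriptor while open
s.close()
print(s.fileno())   # -1 once closed
```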