I've got a Python class I'm using to emulate an IMAP server for a unit test. When the handle() callback happens, I try to use recv() to get the data that just arrived, but I block on the recv() call. I thought recv() would return with the data that was received, but instead it just sits there.
In case it matters, I can see that the server and main thread are different in the log output.
I'm assuming that this has something to do with the fact that the socket was just opened but no data was sent through it. Do I need to check to see if that's the case before trying to call recv()?
I've created the server using:
class ThreadedTCPServer(SocketServer.ThreadingMixIn, SocketServer.TCPServer):
    pass

...

# inside another class, we've got:
def startUp(self):
    server = ThreadedTCPServer(('localhost', 5593), FakeImapServerHandler)
    server.allow_reuse_address = True
    server_thread = threading.Thread(target=server.serve_forever)
    server_thread.setDaemon(True)
    server_thread.start()
Then inside some other code in our unit test, we try to connect to the fake server:
print("about to connect")
self.__imapCon = imaplib.IMAP4("localhost", 5593)
print("about to login") # we never get here
self.__imapCon.login(Config.config["imapUser"], Config.config["imapPassw"])
The server is implemented like:
class FakeImapServerHandler(SocketServer.BaseRequestHandler):
    def handle(self):
        print("handling request")
        incoming = self.request.recv(1024)
        print("request was: " + incoming)  # never get here
Per the IMAP protocol, servers speak first, sending an OK greeting to the client. Your server and client are each sitting in recv(), waiting for the other one to send something:
atlas% telnet chaos 143
Trying 192.168.1.5...
Connected to chaos
Escape character is '^]'.
* OK chaos Cyrus IMAP4 v2.2.13-Debian-2.2.13-19 server ready
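A minimal sketch of a handler that speaks first (the greeting text and the tag handling are illustrative; it also assumes each recv() returns one complete command line, which is usually fine on localhost):

class FakeImapServerHandler(SocketServer.BaseRequestHandler):
    def handle(self):
        # Speak first, as a real IMAP server does.
        self.request.sendall("* OK fake IMAP4rev1 server ready\r\n")
        while True:
            incoming = self.request.recv(1024)
            if not incoming:
                break
            print("request was: " + incoming)
            # imaplib sends a CAPABILITY command right after the greeting,
            # then the tagged LOGIN; answer both using the client's own tag.
            tag, _, command = incoming.strip().partition(' ')
            if command.upper().startswith('CAPABILITY'):
                self.request.sendall("* CAPABILITY IMAP4rev1\r\n")
            self.request.sendall(tag + " OK " + command.split(' ')[0] + " completed\r\n")

With something like this in place, imaplib's constructor should be able to read the greeting and login() should get a tagged OK back.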
Hi, I made a model server/client which works fine. I also created a separate GUI which needs two inputs, the server IP and port, and only checks whether the server is up or not. When I run the server, then run my GUI and enter the server IP and port, the GUI displays "connected", but the server side throws the error below. The server/client works fine on its own; it is the integration of the GUI with the server that throws this error on the server side:
conn.send('Hi'.encode())
BrokenPipeError: [Errno 32] Broken pipe
This is the server code:
from socket import *
# Importing threading to run one handler thread per client
import threading

# Defining server address and port
host = 'localhost'
port = 52000
data = " "

# Creating socket object
sock = socket()
# Binding the socket to an address. bind() takes a tuple of (host, port).
sock.bind((host, port))
# Listening at the address
sock.listen(5)  # 5 denotes the number of clients that can queue

def clientthread(conn):
    # Infinite loop so that the function does not terminate and the thread does not end.
    while True:
        # Sending a message to the connected client
        conn.send('Hi'.encode('utf-8'))  # send() needs bytes, so the string is encoded
        data = conn.recv(1024)
        print(data.decode())

while True:
    # Accepting incoming connections
    conn, addr = sock.accept()
    # Creating a new thread that runs clientthread for this connection, passing conn as an argument.
    thread = threading.Thread(target=clientthread, args=(conn,))
    thread.start()

conn.close()
sock.close()
This is the part of the GUI code that causes the problem:
def isOpen(self, ip, port):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.connect((ip, int(port)))
        data = s.recv(1024)
        if data == b'Hi':
            print("connected")
            return True
    except:
        print("not connected")
        return False

def check_password(self):
    self.isOpen('localhost', 52000)
Your problem is simple:
1. The client connects to the server.
2. The server creates a new thread running an infinite loop.
3. The server sends a simple message.
4. The client receives the message.
5. The client closes the connection by default (!), since you returned from its method (no more references to the socket).
6. The server tries to receive the next message and fails (the error lies here).
Since the connection has been closed by the client, the server can neither send nor receive the next message inside its infinite loop. That is the cause of the error. There is also no error handling for a closed connection, nor a protocol for closing it on each side.
If you need a function that checks whether the server is online (although I'm sure a simple connect is enough), make it work like a ping. Example:
Client function:
def isOpen(self, ip, port):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.connect((str(ip), int(port)))
        s.send("ping".encode('utf-8'))
        return s.recv(1024).decode('utf-8') == "pong"  # return whether the response matches
    except:
        return False  # can't connect
Server function:
def clientthread(conn):
    while True:
        msg = conn.recv(1024).decode('utf-8')  # receiving a message
        if msg == "ping":
            conn.send("pong".encode('utf-8'))  # sending the response
            conn.close()  # closing the connection on the server side
            break  # since we only need to check whether the server is online, we break
From your previous questions I can tell you have some problems understanding how TCP socket communication works. Please take a moment and read a few articles about how to communicate through sockets. If you don't need live communication (a continuous data stream, like video or a game server) but only, say, login forms, please stick with well-known protocols like HTTP. Creating your own reliable protocol can be complicated if you have only just gotten into socket programming.
You could use Flask for an HTTP back-end.
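For example, here is a minimal sketch of that approach with Flask on the server and the requests library on the client; the /ping route, the port, and the use of requests are illustrative choices, not something from the question:

# server.py -- a tiny Flask app with a health-check route
from flask import Flask

app = Flask(__name__)

@app.route("/ping")
def ping():
    return "pong"

app.run(host="localhost", port=52000)


# gui_check.py -- the GUI's "is the server up?" check becomes one HTTP request
import requests

def is_open(host, port):
    try:
        return requests.get("http://%s:%s/ping" % (host, port), timeout=2).text == "pong"
    except requests.RequestException:
        return False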
I have to pass data from my test cases to a mock server.
What is the best way to do that?
This is what I have so far
mock_server.py
class ThreadedUDPServer(SocketServer.ThreadingMixIn, SocketServer.UDPServer):
    pass

class ThreadedUDPRequestHandler(SocketServer.BaseRequestHandler):
    def __init__(self, request, client_address, server):
        SocketServer.BaseRequestHandler.__init__(self, request, client_address, server)

    def handle(self):
        print self.server.data  # this is where I need the data

class server_wrap:
    def __init__(self):
        self.server = ThreadedUDPServer(("127.0.0.1", 49555), ThreadedUDPRequestHandler)

    def set_data(self, data):
        self.server.data = data

    def start(self):
        server_thread = threading.Thread(target=self.server.serve_forever())

    def stop(self):
        self.server.shutdown()
test_mock.py
server_inst = server_wrap()
server_inst.start()
#code which sets the data and expects the handle method to print the data set
server_inst.stop()
The problem I have with this code is that execution stops at server_inst.start(), where the server goes into an infinite listening mode.
Other solutions that I have tried, but which failed:
- Using global variables
- Using queues
- Starting mock_server.py with its own main
Let me know about any other possible solutions. Thanks in advance.
Update 1:
Using separate threads to send data to the socket:
Changes
test_mock.py
def test_set_data(data):
    server_inst = server_wrap()
    server_inst.set_data(data)
    server_inst.start()

if __name__ == "__main__":
    thread = Thread(target=test_set_data, args=("foo_data",))
    thread.setDaemon(True)
    thread.start()
    # test code which verifies that the data set is the same
    # works so far, able to pass data

    # problem starts now
    thread = Thread(target=test_set_data, args=("bar_data",))
    thread.setDaemon(True)
    thread.start()
    # says "address already in use" error
    # Tried calling server.shutdown() in handle, but the error persists. Also there is
    # no thread.stop in the threading.Thread object
Thanks
The server should go into listening mode.
You don't need server_inst.stop until all the data has been sent and the test finishes, for example in your test tear-down or when the whole test suite is completed.
To send data to the server and let the handler pick it up, you should open a socket on another thread, then send the data to the server via this socket.
This code should look something like this:
import socket

# the mock server is a ThreadedUDPServer, so use a UDP socket here
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.connect(("127.0.0.1", 49555))
sock.send(... the data ...)
received = sock.recv(1024)  # the handler can send a response
sock.close()
Add a function to your django code that runs on another thread. This function will open the socket, connect, send the data, and get the response. You can call it from a view, a middleware, etc.
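A minimal sketch of such a helper run on its own thread (the function name and the payload are illustrative, not from the question):

import socket
import threading

def send_to_mock_server(payload):
    # Open a UDP socket, push the payload to the mock server, and print any reply.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(2)  # don't hang forever if the handler never answers
    sock.connect(("127.0.0.1", 49555))
    sock.send(payload)
    try:
        print sock.recv(1024)  # the handler can send a response
    except socket.timeout:
        pass
    sock.close()

# Call it from a view, a middleware, or a test helper:
threading.Thread(target=send_to_mock_server, args=("foo_data",)).start()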
I'm trying to pass a TCP connection to a Twisted subprocess with adoptStreamConnection, but I can't figure out how to get the Process disposed in the main process after doing that.
My desired flow looks like this:
1. Finish writing any data the Protocol transport has waiting.
2. When we know the write buffer is empty, send the AMP message to transfer the socket to the subprocess.
3. Dispose of the Protocol instance in the main process.
I tried doing nothing, loseConnection, abortConnection, and monkey patching _closeSocket out and using loseConnection. See the code here:
import weakref
from twisted.internet import reactor
from twisted.internet.endpoints import TCP4ServerEndpoint
from twisted.python.sendmsg import getsockfam
from twisted.internet.protocol import Factory, Protocol
import twisted.internet.abstract

class EchoProtocol(Protocol):
    def dataReceived(self, data):
        self.transport.write(data)

class EchoFactory(Factory):
    protocol = EchoProtocol

class TransferProtocol(Protocol):
    def dataReceived(self, data):
        self.transport.write('main process still listening!: %s' % (data))

    def connectionMade(self):
        self.transport.write('this message should make it to the subprocess\n')

        # attempt 1: do nothing
        # everything works fine in the adopt (including receiving the written message),
        # but the old protocol still exists (though isn't doing anything)

        # attempt 2: try calling loseConnection
        # we lose the connection before the adopt opens the socket
        # (presumably a TCP disconnect message was sent)
        #
        # self.transport.loseConnection()

        # attempt 3: try calling abortConnection
        # result is the same as loseConnection
        #
        # self.transport.abortConnection()

        # attempt 4: try monkey patching the socket close out and calling loseConnection
        # result: same as doing nothing -- adopt works (including receiving the written
        # message), old protocol still exists
        #
        # def ignored(*args, **kwargs):
        #     print 'ignored :D'
        #
        # self.transport._closeSocket = ignored
        # self.transport.loseConnection()

        reactor.callLater(0, adopt, self.transport.fileno())

class ServerFactory(Factory):
    def buildProtocol(self, addr):
        p = TransferProtocol()
        self.ref = weakref.ref(p)
        return p

f = ServerFactory()

def adopt(fileno):
    print "does old protocol still exist?: %r" % (f.ref())
    reactor.adoptStreamConnection(fileno, getsockfam(fileno), EchoFactory())

port = 1337
endpoint = TCP4ServerEndpoint(reactor, port)
d = endpoint.listen(f)
reactor.run()
In all cases the Protocol object still exists in the main process after the socket has been transferred. How can I clean this up?
Thanks in advance.
Neither loseConnection nor abortConnection tells the reactor to "forget" about a connection; they close the connection, which is very different: they tell the peer that the connection has gone away.
You want to call self.transport.stopReading() and self.transport.stopWriting() to remove the references to it from the reactor.
Also, it's not valid to use a weakref to test for the remaining existence of an object unless you call gc.collect() first.
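A sketch of how that might look, reusing the names (reactor, adopt, f, getsockfam, EchoFactory) from the code in the question; the gc.collect() call is only there so the weakref check is meaningful:

import gc

class TransferProtocol(Protocol):
    def connectionMade(self):
        self.transport.write('this message should make it to the subprocess\n')
        # Remove the transport from the reactor's read/write sets without sending
        # a TCP disconnect; the file descriptor stays open for the adopt.
        self.transport.stopReading()
        self.transport.stopWriting()
        reactor.callLater(0, adopt, self.transport.fileno())

def adopt(fileno):
    gc.collect()  # collect before testing the weakref, as noted above
    print "does old protocol still exist?: %r" % (f.ref())
    reactor.adoptStreamConnection(fileno, getsockfam(fileno), EchoFactory())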
As far as making sure that all the data has been sent: the only reliable way to do that is to have an application-level acknowledgement of the data that you've sent. This is why protocols that need a handshake that involves changing protocols - say, for example, STARTTLS - have a specific handshake where the initiator says "I'm going to switch" (and then stops sending), then the peer says "OK, you can switch now". Another way to handle that in this case would be to hand the data you'd like to write to the subprocess via some other channel, instead of passing it to transport.write.
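Just as a sketch of that handshake idea with AMP (the command name and the empty argument lists are made up for illustration, not part of any existing protocol):

from twisted.protocols.amp import AMP, Command

class PrepareToSwitch(Command):
    """Main process: 'I'm about to hand over the socket' (and then it stops writing)."""
    arguments = []
    response = []      # an empty OK response means 'you can switch now'

class SubprocessSide(AMP):
    @PrepareToSwitch.responder
    def prepare(self):
        # ... get ready to adopt the file descriptor ...
        return {}      # acknowledges the handshake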
I am writing a simple client-server program in python. In the client program, I am creating two threads (using Python's threading module), one for receiving, one for sending. The receiving thread continuously receives strings from the server side; while the sending thread continuously listens to the user input (using raw_input()) and send it to the server side. The two threads communicate using a Queue (which is natively synchronized, LIKE!).
The basic logic is like following:
Receiving thread:
global queue = Queue.Queue(0)

def run(self):
    while 1:
        receive a string from the server side
        if the string is QUIT signal:
            sys.exit()
        else:
            put it into the global queue
Sending thread:
def run(self):
    while 1:
        str = raw_input()
        send str to the server side
        fetch an element from the global queue
        deal with the element
As you can see, in the receiving thread I have an if condition to test whether the server has sent a "QUIT signal" to the client. If it has, I want the whole program to stop.
The problem is that for most of its time, the sending thread is blocked on raw_input(), waiting for user input. While it is blocked, calling sys.exit() from the other (receiving) thread will not terminate the sending thread immediately; the sending thread has to wait for the user to type something and hit the enter key.
Could anybody suggest how to get around this? I don't mind using alternatives to raw_input(). Actually, I don't even mind changing the whole structure.
-------------EDIT-------------
I am running this on a linux machine, and my Python version is 2.7.5
You could just make the sending thread daemonic:
send_thread = SendThread() # Assuming this inherits from threading.Thread
send_thread.daemon = True # This must be called before you call start()
The Python interpreter won't be blocked from exiting if the only threads left running are daemons. So, if the only thread left is send_thread, your program will exit, even if you're blocked on raw_input.
Note that this will terminate the sending thread abruptly, no matter what it's doing. This could be dangerous if it accesses external resources that need to be cleaned up properly or shouldn't be interrupted (like writing to a file, for example). If you're doing anything like that, protect it with a threading.Lock, and only call sys.exit() from the receiving thread if you can acquire that same Lock.
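A sketch of that pattern, assuming a module-level lock shared by both threads (the names here are illustrative):

import sys
import threading

io_lock = threading.Lock()  # guards work that must not be interrupted

class SendThread(threading.Thread):
    def run(self):
        while 1:
            line = raw_input()
            with io_lock:
                # send line to the server, write the log entry, etc.
                pass

# In the receiving thread, when the QUIT signal arrives:
def handle_quit():
    with io_lock:   # wait until the sender is not in the middle of protected work
        sys.exit()  # ends this thread; with only the daemonic sender left, the interpreter exits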
The short answer is you can't. input(), like a lot of such input commands, is blocking, and it stays blocked even after everything else about the thread has been killed. You can sometimes call sys.exit() and get it to work depending on the OS, but it's not going to be consistent. Sometimes you can kill the program by deferring out to the local OS, but then you're not going to be widely cross-platform.
What you might want to consider instead is funnelling this functionality through sockets, because unlike input() we can do timeouts and threads and kill things rather easily. It also gives you the ability to handle multiple connections and maybe accept connections more broadly.
import socket
import time
from threading import Thread

def process(command, connection):
    print("Command Entered: %s" % command)
    # Any responses are written to connection.
    connection.send(bytes('>', 'utf-8'))

class ConsoleSocket:
    def __init__(self):
        self.keep_running_the_listening_thread = True
        self.data_buffer = ''
        Thread(target=self.tcp_listen_handle).start()

    def stop(self):
        self.keep_running_the_listening_thread = False

    def handle_tcp_connection_in_another_thread(self, connection, addr):
        def handle():
            while self.keep_running_the_listening_thread:
                try:
                    data_from_socket = connection.recv(1024)
                    if len(data_from_socket) != 0:
                        self.data_buffer += data_from_socket.decode('utf-8')
                    else:
                        break
                    while '\n' in self.data_buffer:
                        pos = self.data_buffer.find('\n')
                        command = self.data_buffer[0:pos].strip('\r')
                        self.data_buffer = self.data_buffer[pos + 1:]
                        process(command, connection)
                except socket.timeout:
                    continue
                except socket.error:
                    if connection is not None:
                        connection.close()
                    break
        Thread(target=handle).start()
        connection.send(bytes('>', 'utf-8'))

    def tcp_listen_handle(self, port=23, connects=5, timeout=2):
        """This is running in its own thread."""
        sock = socket.socket()
        sock.settimeout(timeout)
        sock.bind(('', port))
        sock.listen(connects)  # We accept more than one connection.
        while self.keep_running_the_listening_thread:
            connection = None
            try:
                connection, addr = sock.accept()
                address, port = addr
                if address != '127.0.0.1':  # Only permit localhost.
                    connection.close()
                    continue
                # A new thread deals with that connection. We only do listening here.
                connection.settimeout(timeout)
                self.handle_tcp_connection_in_another_thread(connection, addr)
            except socket.timeout:
                pass
            except OSError:
                # Some other error.
                if connection is not None:
                    connection.close()
        sock.close()

c = ConsoleSocket()

def killsocket():
    time.sleep(20)
    c.stop()

Thread(target=killsocket).start()
This launches a listener thread for connections on port 23 (telnet); when you connect, it passes the connection off to another thread. It also starts a killsocket thread that, for demonstration purposes, shuts the various threads down and lets them die peacefully. Note, however, that you cannot drive the connection from within this same program using input() to decide what to send to the server, because that would recreate the original problem.
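From a separate process, a minimal client for this console might look like the sketch below (port 23 and the command text simply mirror the values used above):

import socket

# Connect to the console socket started above and send one command.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(('127.0.0.1', 23))
print(client.recv(1024))               # the '>' prompt sent on connect
client.send('hello\n'.encode('utf-8'))
print(client.recv(1024))               # the '>' prompt written back by process()
client.close()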
I'm trying to test some code that reconnects to a server after a disconnect. This works perfectly fine outside the tests, but it fails to acknowledge that the socket has disconnected when running the tests.
I'm using a Gevent Stream Server to mock a real listening server:
import socket  # for socket.SHUT_RDWR in murder() below

import gevent.server
from gevent import queue

class TestServer(gevent.server.StreamServer):
    def __init__(self, *args, **kwargs):
        super(TestServer, self).__init__(*args, **kwargs)
        self.sockets = {}

    def handle(self, socket, address):
        self.sockets[address] = (socket, queue.Queue())
        socket.sendall('testing the connection\r\n')
        gevent.spawn(self.recv, address)

    def recv(self, address):
        socket = self.sockets[address][0]
        queue = self.sockets[address][1]
        print 'Connection accepted %s:%d' % address
        try:
            for data in socket.recv(1024):
                queue.put(data)
        except:
            pass

    def murder(self):
        self.stop()
        for sock in self.sockets.iteritems():
            print sock
            sock[1][0].shutdown(socket.SHUT_RDWR)
            sock[1][0].close()
        self.sockets = {}

def run_server():
    test_server = TestServer(('127.0.0.1', 10666))
    test_server.start()
    return test_server
And my test looks like this:
def test_can_reconnect(self):
    test_server = run_server()
    client_config = {'host': '127.0.0.1', 'port': 10666}
    client = Connection('test client', client_config, get_config())
    client.connect()
    assert client.socket_connected

    test_server.murder()
    #time.sleep(4) #tried sleeping. no dice.

    assert not client.socket_connected
    assert client.server_disconnect

    test_server = run_server()
    client.reconnect()
    assert client.socket_connected
It fails at assert not client.socket_connected.
I check for "not data" during recv(). If it comes back empty, I set some variables so that other code can decide whether or not to reconnect (don't reconnect if it was a user_disconnect, and so on). This behavior works and has always worked for me in the past; I've just never tried to write a test for it until now. Is there something odd with socket connections and local function scopes or something? It's like the connection still exists even after stopping the server.
The code I'm trying to test is open: https://github.com/kyleterry/tenyks.git
If you run the tests, you will see the one I'm trying to fix fail.
Trying to run a unit test with a real socket is a tough row to hoe. It's going to be tricky because only one set of tests can run at a time (the server port will be in use), and it's going to be slow because the sockets have to be set up and torn down. To top it off, if this is really a unit test you don't want to test the socket, just the code that's using the socket.
If you mock the socket calls you can throw exceptions willy nilly from the mocked code and ensure that the code making use of the socket does the right thing. You don't need a real socket to ensure that the class under test does the right thing, you can fake it if you can wrap the socket calls in an object. Pass in a reference to the socket object when constructing your class and you're ready to go.
My suggestion is to wrap the socket calls in a class that supports sendall, recv, and all the methods you call on the socket. Then you can swap out the actual Socket class with a TestReconnectSocket (or whatever) and run your tests.
Take a look at mox, a python mocking framework.
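A rough sketch of that wrapping idea (the class names are made up for illustration; the actual test double could just as well come from mox or mock):

import socket

class SocketWrapper(object):
    """Thin wrapper around a real socket; the code under test only talks to this."""
    def __init__(self, sock):
        self._sock = sock

    def sendall(self, data):
        return self._sock.sendall(data)

    def recv(self, size):
        return self._sock.recv(size)

class FakeDisconnectedSocket(object):
    """Test double that behaves like a peer that hung up."""
    def sendall(self, data):
        raise socket.error('connection lost')

    def recv(self, size):
        return ''  # an empty read is how a closed TCP peer shows up

# In the reconnect test, construct the client with FakeDisconnectedSocket()
# instead of a real socket and assert that socket_connected / server_disconnect
# flip the way you expect.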
Vague response, but my immediate reaction would be that your recv() call is blocking and keeping the socket alive - have you tried making the socket non-blocking, and catching the error on close instead?
One thing to keep in mind when testing sockets like this, is that operating systems don't like to reopen a socket soon after it has been in use. You can set a socket option to tell it to go ahead and reuse it anyways. Right after you create the socket set the socket's option:
mysocket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
Hopefully this will fix your issue. You may have to do it on both the server and client side depending on which one is giving you the problems.
You are calling shutdown(socket.SHUT_RDWR), so this doesn't seem like a problem with recv() blocking.
However, you are using gevent.socket.socket.recv, so please check your gevent version: there is an issue with recv() that causes it to block if the underlying file descriptor is closed (versions < 0.13.0).
You may still need gevent.sleep() to yield cooperatively and give the client an opportunity to exit the recv() call.
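For example, in the test from the question, a cooperative yield right after killing the server might look like this (a sketch, assuming the client runs in a greenlet that gevent can switch to):

import gevent

test_server.murder()
gevent.sleep(0)  # yield so the client's recv() can observe the closed socket
assert not client.socket_connected
assert client.server_disconnect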