I have this Python client program sending data to a Java server program.
My client was working fine before I added the while loop. I needed a loop so it would keep sending user input, but now the text doesn't appear on the other end until I kill the Python client; only after killing the program does all the text I entered show up on the other end. What causes this?
import socket

HOST = "192.168.0.76"
PORT = 8080

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect((HOST, PORT))

while True:
    sock.sendall(raw_input(""))
Taking a quick look at the socket docs, I noticed that there is a setsockopt() function. If you look at the docs (man setsockopt), there's an SO_SNDBUF option, which controls the send buffer size. If you haven't sent enough data, the socket probably doesn't flush it until more has accumulated.
However, there doesn't appear to be a way to flush sockets explicitly (because of how TCP segments data), and this related answer might give you some more hints: Python Socket Flush
Change the loop to:
while True:
    line = raw_input("")
    sock.send(line)
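Also worth checking: if the Java side reads line by line (for example with BufferedReader.readLine()), the missing newline may be the real problem, since raw_input() strips the trailing newline and the server never sees a complete line until the connection closes. A minimal sketch, assuming a newline-delimited protocol on the Java end:
import socket

HOST = "192.168.0.76"  # same host/port as in the question
PORT = 8080

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect((HOST, PORT))

while True:
    line = raw_input("")
    sock.sendall(line + "\n")  # re-add the newline that raw_input() strips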
Every UDP server example I can find uses a while True loop to listen for incoming data. I'm attempting to use a single UDP socket server as part of a Kivy window that's also doing other things. As soon as I implement the server's while True loop, everything locks up, as I guess I would expect it to.
How do I listen on a UDP port and also have the rest of the program continue functioning?
I've tried moving the UDP server handling to another (udp_server.py) file and then importing the function, but since I'm importing the while loop nothing changes.
I've also tried assigning the received data to a variable inside udp_server.py and then just importing that variable, with udp_server.py already running separately, but even that is locking up my main program.
I'm 99.99% sure it's just some basic thing that I should already know, but I'm new to Python. Thanks in advance for any help.
Thank you Chris!!!!!!
I'm sure I'm understating the complexity of threading, but it works great now and the only thing I had to add was:
import threading

def thread_function():
    from udp_server import amx_rx
    # do stuff with amx_rx...

# class TouchPanel stuff...

if __name__ == '__main__':
    x = threading.Thread(target=thread_function, daemon=True)
    x.start()
    try:
        TouchPanel().run()
    except KeyboardInterrupt:
        raise
Now I have a running program with a UDP socket listening in the background! Thank you!!!
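For context, a rough sketch of what a udp_server.py module like the one described might look like (the address, port, and buffer size here are assumptions, not the poster's actual code). Because the receive loop runs at module level and blocks, importing it inside thread_function keeps the blocking loop off the main Kivy thread:
# udp_server.py -- hypothetical sketch of the imported module
import socket

amx_rx = None  # most recently received datagram

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 5000))  # assumed address/port

while True:
    # recvfrom() blocks; run inside a daemon thread, this never stalls the UI
    data, addr = sock.recvfrom(1024)
    amx_rx = data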
I have this code:
host, port = sys.argv[1:3]
port = int(port)

s = socket.socket()
s.bind((host, port))
s.listen(5)

while True:
    conn, addr = s.accept()
    threading.Thread(target=handle, args=(conn,)).start()
I need to stop my code using Ctrl-C, but Python doesn't receive the Ctrl-C while it is waiting for a new connection in s.accept(). How can I solve this problem?
In order to stop the socket connection, you can call the shutdown method like so:
s.shutdown(socket.SHUT_WR)
(SHUT_WR disallows further sends; to shut down both reads and writes, use socket.SHUT_RDWR.)
However, while your code is running, it is blocked inside s.accept() waiting for a TCP connection. In order to stop it via Ctrl-C, you'll need to run the accept loop on another thread, leaving your main thread free to wake up on the interrupt and send the shutdown.
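A minimal sketch of that arrangement (handle() is assumed to exist as in the question; here it is stubbed out so the example stands on its own):
import socket
import sys
import threading
import time

def handle(conn):
    # stand-in for the real handler from the question
    conn.close()

def accept_loop(s):
    while True:
        conn, addr = s.accept()
        threading.Thread(target=handle, args=(conn,), daemon=True).start()

host, port = sys.argv[1:3]
s = socket.socket()
s.bind((host, int(port)))
s.listen(5)

threading.Thread(target=accept_loop, args=(s,), daemon=True).start()
try:
    while True:
        time.sleep(1)  # the main thread stays free to receive Ctrl-C
except KeyboardInterrupt:
    s.close()  # daemon threads exit with the process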
You can use shutdown() or close() for this.
A wonderful explanation (from AIX 4.3 Communications Programming Concepts) is given below
Once a socket is no longer required, the calling program can discard the socket by applying a close subroutine to the socket descriptor. If a reliable delivery socket has data associated with it when a close takes place, the system continues to attempt data transfer. However, if the data is still undelivered, the system discards the data. Should the application program have no use for any pending data, it can use the shutdown subroutine on the socket prior to closing it.
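In code, that ordering looks something like this (a sketch; the peer address is a placeholder):
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(("192.0.2.1", 8080))  # placeholder peer
# ... exchange data ...
s.shutdown(socket.SHUT_RDWR)  # signal we have no use for any pending data
s.close()  # then release the descriptor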
This is a simple client-server example where the server returns whatever the client sends, but reversed.
Server:
import socketserver

class MyTCPHandler(socketserver.BaseRequestHandler):
    def handle(self):
        self.data = self.request.recv(1024)
        print('RECEIVED: ' + str(self.data))
        self.request.sendall(str(self.data)[::-1].encode('utf-8'))

server = socketserver.TCPServer(('localhost', 9999), MyTCPHandler)
server.serve_forever()
Client:
import socket
import threading

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(('localhost', 9999))

def readData():
    while True:
        data = s.recv(1024)
        if data:
            print('Received: ' + data.decode('utf-8'))

t1 = threading.Thread(target=readData)
t1.start()

def sendData():
    while True:
        intxt = input()
        s.send(intxt.encode('utf-8'))

t2 = threading.Thread(target=sendData)
t2.start()
I took the server from an example I found on Google, but the client was written from scratch. The idea was to have a client that can keep sending to and receiving from the server indefinitely.
Sending the first message with the client works. But when I try to send a second message, I get this error:
ConnectionAbortedError: [WinError 10053] An established connection was
aborted by the software in your host machine
What am I doing wrong?
For TCPServer, the handle method of the handler gets called once to handle the entire session. This may not be entirely clear from the documentation, but socketserver is, like many libraries in the stdlib, meant to serve as clear sample code as well as to be used directly, which is why the docs link to the source, where you can clearly see that it's only going to call handle once per connection (TCPServer.get_request is defined as just calling accept on the socket).
So, your server receives one buffer, sends back a response, and then quits, closing the connection.
To fix this, you need to use a loop:
def handle(self):
    while True:
        self.data = self.request.recv(1024)
        if not self.data:
            print('DISCONNECTED')
            break
        print('RECEIVED: ' + str(self.data))
        self.request.sendall(str(self.data)[::-1].encode('utf-8'))
A few side notes:
First, using BaseRequestHandler on its own only allows you to handle one client connection at a time. As the introduction in the docs says:
These four classes process requests synchronously; each request must be completed before the next request can be started. This isn’t suitable if each request takes a long time to complete, because it requires a lot of computation, or because it returns a lot of data which the client is slow to process. The solution is to create a separate process or thread to handle each request; the ForkingMixIn and ThreadingMixIn mix-in classes can be used to support asynchronous behaviour.
Those mixin classes are described further in the rest of the introduction, and farther down the page, and at the bottom, with a nice example at the end. The docs don't make it clear, but if you need to do any CPU-intensive work in your handler, you want ForkingMixIn; if you need to share data between handlers, you want ThreadingMixIn; otherwise it doesn't matter much which you choose.
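For example, a threaded variant of the server above might look like this (a sketch; socketserver.ThreadingTCPServer is just TCPServer with ThreadingMixIn already mixed in, and the raw bytes are reversed directly here rather than their str() representation):
import socketserver

class MyTCPHandler(socketserver.BaseRequestHandler):
    def handle(self):
        while True:
            data = self.request.recv(1024)
            if not data:
                break
            self.request.sendall(data[::-1])  # reverse the raw bytes

# each accepted connection is handled in its own thread
server = socketserver.ThreadingTCPServer(('localhost', 9999), MyTCPHandler)
server.serve_forever()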
Note that if you're trying to handle a large number of simultaneous clients (more than a couple dozen), neither forking nor threading is really appropriate—which means TCPServer isn't really appropriate. For that case, you probably want asyncio, or a third-party library (Twisted, gevent, etc.).
Calling str(self.data) is a bad idea. You're just going to get the source-code-compatible representation of the byte string, like b'spam\n'. What you want is to decode the byte string into the equivalent Unicode string: self.data.decode('utf8').
There's no guarantee that each sendall on one side will match up with a single recv on the other side. TCP is a stream of bytes, not a stream of messages; it's perfectly possible to get half a message in one recv, and two and a half messages in the next one. When testing with a single connection on localhost with the system under light load, it will probably appear to "work", but as soon as you try to deploy any code that assumes that each recv gets exactly one message, your code will break. See Sockets are byte streams, not message streams for more details. Note that if your messages are just lines of text (as they are in your example), using StreamRequestHandler and its rfile attribute, instead of BaseRequestHandler and its request attribute, solves this problem trivially.
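A sketch of that line-based version (assuming the client terminates each message with '\n'):
import socketserver

class LineHandler(socketserver.StreamRequestHandler):
    def handle(self):
        for line in self.rfile:  # one iteration per received '\n'-terminated line
            text = line.rstrip(b'\n').decode('utf-8')
            print('RECEIVED: ' + text)
            self.wfile.write(text[::-1].encode('utf-8') + b'\n')

server = socketserver.TCPServer(('localhost', 9999), LineHandler)
server.serve_forever()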
You probably want to set allow_reuse_address = True on your server class (it has to be in effect before the server binds). Otherwise, if you quit the server and re-launch it again too quickly, it'll fail with an error like OSError: [Errno 48] Address already in use.
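For instance, with a handler like the ones above (a sketch):
import socketserver

class ReusableTCPServer(socketserver.TCPServer):
    allow_reuse_address = True  # sets SO_REUSEADDR before bind()

server = ReusableTCPServer(('localhost', 9999), MyTCPHandler)
server.serve_forever()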
Is there a better way of listening on a port and reading in UDP data?
I do a
self.udps.bind((self.address, self.port))
data, addr = self.udps.recvfrom(1024)
It seems to stay blocked in this state until data arrives, whether in a bare script or in a thread.
This works well, but if you want it to stop listening, it won't do so until it receives some data and only then notices that it needs to stop. I've had to send UDP data to the port each time just to get it to shut down gracefully. Is there a way to get it to stop listening immediately on a specific condition?
recvfrom waits until data arrives on the specified port.
If you don't want it to listen forever, set a timeout:
self.udps.bind((self.address, self.port))
self.udps.settimeout(60.0)  # set a 1-minute timeout

while some_condition:
    try:
        data, addr = self.udps.recvfrom(1024)
    except socket.timeout:
        pass  # try again while some_condition holds
    else:
        # work with the received data ...
        pass
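The same idea in a standalone form, in case it helps (the address, port, and timeout are placeholders):
import socket

udps = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udps.bind(('0.0.0.0', 5000))  # placeholder address/port
udps.settimeout(1.0)  # wake up once per second to re-check the condition

running = True
while running:
    try:
        data, addr = udps.recvfrom(1024)
    except socket.timeout:
        continue  # no data yet; loop back and re-check `running`
    print('Received %r from %s' % (data, addr))

udps.close()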
I have a threaded Python socket server that opens a new thread for each connection.
Each thread runs a very simple question-and-answer exchange.
Basically, the client sends an initial data transmission, the server takes it, runs an external app that processes the transmission and returns a reply, the server sends that reply back, and the loop begins again until the client disconnects.
Now, because the client will be on a mobile phone and therefore on an unstable connection, I'm left with open threads that are no longer connected, and because the loop starts with recv it is rather difficult to break out of it when connectivity is lost.
I was thinking of adding a send before the recv to test whether the connection is still alive, but that might not help at all if the client disconnects right after my failsafe send, since the client only sends data every 5 seconds.
I've noticed the recv will sometimes break, but not always, and in those cases I'm left with zombie threads using resources.
This could also be a solid way for my system to be DoSed.
I have looked through the Python manual and have been Googling since Thursday trying to find something for this, but most of what I find relates to the client side or to non-blocking mode.
Can anyone point me in the right direction towards a good way of fixing this issue?
Code samples:
Listener:
serversocket = socket(AF_INET, SOCK_STREAM)
serversocket.setsockopt(SOL_SOCKET, SO_REUSEADDR, 1)
serversocket.bind(addr)
serversocket.listen(2)
logg("Binded to port: " + str(port))

# Listening Loop
while 1:
    clientsocket, clientaddr = serversocket.accept()
    threading.Thread(target=handler, args=(clientsocket, clientaddr, port,)).start()

# This is useless as it will never get here
serversocket.close()
Handler:
# Socket connection handler (Threaded)
def handler(clientsocket, clientaddr, port):
    clientsocket.settimeout(15)
    # Loop till client closes connection or connection drops
    while 1:
        stream = ''
        while 1:
            ending = stream[-6:]  # get stream ending
            if ending == '.$$$$.':
                break
            try:
                data = clientsocket.recv(1)
            except:
                sys.exit()
            if not data:
                sys.exit()
                # this is the usual point where the thread is closed when a client closes the connection normally
            stream += data
        # Clear the line ending
        stream = base64.b64encode(stream[:-6])
        # Send data to be processed
        re = getreply(stream)
        # Send response to client
        try:
            clientsocket.send(re + str('.$$$$.'))
        except:
            sys.exit()
As you can see, there are three conditions, at least one of which should trigger an exit if the connection fails, but sometimes none of them do.
Sorry, but I don't think the threaded approach is a good fit here. Since these threads (workers?) don't need to do much processing and spend most of their time blocked waiting on a socket, I'd advise reading about event-driven programming. For sockets this pattern is extremely useful, because you can do all the work in a single thread: you talk to one socket at a time while the remaining connections simply wait for data, so almost nothing is lost, and after sending a few bytes you check whether another connection needs attention. You can read about select and epoll.
In Python there are several libraries for working with this nicely:
pyev (a wrapper around the C library libev)
tornado
twisted
I have used tornado in a few projects and it handled this task very well. libev is nice too, but since it is a C wrapper it is a bit lower-level (though very good for some tasks).
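A minimal sketch of the select-based approach (the port and the echo-style processing are placeholders, not the original .$$$$. protocol):
# Single-threaded event loop with select: no per-client thread can leak.
import select
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(('0.0.0.0', 5000))  # placeholder port
server.listen(5)

sockets = [server]
while True:
    readable, _, _ = select.select(sockets, [], [], 15)  # 15 s poll interval
    for sock in readable:
        if sock is server:
            conn, addr = server.accept()
            sockets.append(conn)
        else:
            data = sock.recv(1024)
            if not data:  # client went away: clean up immediately
                sockets.remove(sock)
                sock.close()
            else:
                sock.sendall(data)  # placeholder for getreply()-style processing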
So you should use socket.settimeout(float) on the clientsocket, as one of the comments suggested.
The reason you don't see any difference is that when you call socket.recv(bufsize[, flags]) and the timeout expires, a socket.timeout exception is raised, and your bare except catches it and simply exits.
try:
    data = clientsocket.recv(1)
except:
    sys.exit()
should be something like:
try:
    data = clientsocket.recv(1)
except timeout:  # socket.timeout if you use "import socket" rather than "from socket import *"
    # timeout occurred
    # handle it
    clientsocket.close()
    sys.exit()
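If other socket errors are a concern as well (the question mentions threads that never exit), the same spot could catch them too and always close the socket; a sketch, assuming the from socket import * style used in the listener:
try:
    data = clientsocket.recv(1)
except timeout:  # no data within the 15 s set by settimeout()
    clientsocket.close()
    sys.exit()
except error:  # connection reset, broken pipe, ...
    clientsocket.close()
    sys.exit()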