Using PyBluez, I use the following code to advertise and listen for a Bluetooth connection:
def connect_socket():
    global client_sock
    try:
        server_sock = BluetoothSocket(RFCOMM)
        server_sock.bind(("", PORT_ANY))
        server_sock.listen(1)
        port = server_sock.getsockname()[1]
        uuid = "00001101-0000-1000-8000-00805F9B34FB"
        advertise_service(server_sock, "GSA",
                          service_id=uuid,
                          service_classes=[uuid, SERIAL_PORT_CLASS],
                          profiles=[SERIAL_PORT_PROFILE])
        print("Waiting for connection on RFCOMM channel %d" % port)
        client_sock, client_info = server_sock.accept()
        print("Accepted connection from ", client_info)
    except Exception as e:  # yes, I know I'm catching all exceptions
        print(e)
I use the following to call the above and send data out from the socket. (I wind up waiting for a connection on every possible channel, which is not desirable, but that's not my only problem or the one that's prompting this question, though I'd like to fix it, too.)
def write_bt(message):
    global client_sock
    if client_sock is None:
        threading.Thread(target=connect_socket).start()
    if client_sock is not None:
        try:
            client_sock.send(message)
        except Exception as e:
            gsa_msg.message(e)
            client_sock = None
I also need to receive data from the socket and write it to a USB connection. For this, I use the following:
def forward_bt_to_usb():
    global client_sock
    global serUSB
    if (client_sock is not None) and (serUSB is not None):
        try:
            data = client_sock.recv(1024)
            serUSB.write(data)
        except Exception as e:
            gsa_msg.error(e)
            client_sock = None
Both write_bt() and forward_bt_to_usb() get called continuously from a loop and are communicating with the same client, but there isn't always data being received over the socket, and forward_bt_to_usb() seems to block everything in that case.
I believe that I probably have all of this structured improperly for what I'm trying to do, or perhaps I just need separate threads for sending and receiving data, but it's not obvious to me how to do that. (Initially I just put some of the code from forward_bt_to_usb() in a separate thread, without realizing that would just keep creating new threads as forward_bt_to_usb() kept getting called.)
It seems that what I'm trying to do should be pretty straightforward and certainly not novel, but I haven't been able to find examples or an explanation that I've been able to implement.
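For concreteness, here is a rough, untested sketch of the structure I think I need, with one long-lived thread per direction. The queue and the two loop functions are names I am making up here; client_sock, serUSB and gsa_msg are the globals from the code above:

import queue
import threading

outgoing_queue = queue.Queue()    # write_bt() would just put messages on this

def bt_sender_loop():
    # One long-lived thread owns every send on the Bluetooth socket.
    while True:
        message = outgoing_queue.get()    # blocks until there is something to send
        try:
            client_sock.send(message)
        except Exception as e:
            gsa_msg.message(e)

def bt_to_usb_loop():
    # One long-lived thread owns every recv(); if it blocks, only this thread waits.
    while True:
        try:
            data = client_sock.recv(1024)
            if data:
                serUSB.write(data)
        except Exception as e:
            gsa_msg.error(e)

threading.Thread(target=bt_sender_loop, daemon=True).start()
threading.Thread(target=bt_to_usb_loop, daemon=True).start()

(Reconnection handling, i.e. what to do when client_sock is None or goes away, is left out of this sketch.)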
I asked a question about my server-to-client code because I had many problems with it, and someone told me that the solution was to make a peer-to-peer chat, which I have now done.
Server.py
import socket, threading

host = "127.0.0.1"
port = 4000
s = socket.socket()
s.bind((host,port))
s.listen(5)
client_sockets = []
users = []
print("Listening")

def handle_client(conn):
    while True:
        try:
            data = conn.recv(512)
            for x in client_sockets:
                try:
                    x.send(data)
                except Exception as e:
                    print(e)
        except:
            pass

while True:
    conn,addr = s.accept()
    client_sockets.append(conn)
    print("Connections from", addr[0], "on port",addr[1])
    threading.Thread(target = handle_client,args = (conn,)).start()
Client.py
import socket,threading

host = "127.0.0.1"
port = 4000
s = socket.socket()
s.connect((host,port))

def echo_data(sock):
    while True:
        try:
            data = sock.recv(512)
            print(data)
        except:
            pass

while True:
    threading.Thread(target=echo_data,args=(s,)).start()
    msg = input("Enter your message : ")
    s.send(msg.encode())
The problem is that when I run the client and try talking to another client, the message doesn't get sent unless the other client hits enter. That brings me to my second problem: when the clients send messages to each other, they get received in this format:
b'hi'Enter your message :
This is the link to my previous question
I will start with general problems not directly related to the question:
except: pass is generally a bad idea, especially when things go wrong, because it will hide potentially useful messages. It is allowed by the language but should never appear in real code.
in client.py you start a receiving thread per message, while you only need one for the whole client. You should start the thread outside the loop:
threading.Thread(target=echo_data,args=(s,)).start()
while True:
    msg = input("Enter your message : ")
    s.send(msg.encode())
Now for the questions:
the message doesn't get sent unless the other client hits enter
It can be caused by an IDE. Specifically, IDLE is known to behave poorly with multi-threaded scripts. If you correctly use one single receiving thread and start the script from the command line (python client.py), it should work correctly.
the messages get received in this format: b'hi'Enter your message
sock.recv(sz) returns a byte string. You need to decode it to convert it to a Python 3 unicode string:
data = sock.recv(512)
print(data.decode())
But that is not all. It is fine for tests, but you should at least allow clients to disconnect from the server and, when they do, remove them from client_sockets. And it is common not to send a message back to the sender. So you could improve the server.py loop:
while True:
    try:
        data = conn.recv(512)
        for x in client_sockets:
            if x != conn:  # do not echo to sender
                x.send(data)
    except Exception as e:  # problem in connection: exit the loop
        print(e)
        break
# clear the connection
conn.close()
client_sockets.remove(conn)
I am preparing a new driver (that should work over TCP/IP) and I am running into an issue.
The main idea is that there will be two separate loops.
The first loop binds to a port and keeps listening for incoming connection requests. Once a request is accepted, it authorizes the client and, if authorized, passes the connection to the second loop. After the client is passed to the second loop, the first loop continues listening for incoming connections. If a new client connects, it is passed to the second loop as well, and so on.
The second loop takes the connected client and manages sending and receiving data with it. However, it also checks whether there is a new connection and, if so, closes the current client connection and uses the new one.
This should ensure that if the client connection is lost, we do not have to wait for a timeout to get a new connection (which is the reason I am creating this new driver). If a new connection comes in, we just close the old one and continue communicating with the modem after a much shorter break.
Here is the code (simplified):
class SocketDriver(Process):

    def __init__(self):
        Process.__init__(self)
        self.stop_event = Event()
        self.client = None
        self.addr = None
        self.client_queue = Queue()

    def connection_manager(self):
        while not self.stop_event.is_set():
            try:
                log.info('Binding to %s on port: %s' % (self.atmel_name, self.port))
                self.socket_object.bind((self.ip, self.port))
                break
            except socket.error, e:
                if e.errno == errno.EADDRINUSE:
                    log.error('Socket error: %s, re-trying' % e)
                    time.sleep(5)
                else:
                    log.exception('Unknown exception')
                    break

        while not self.stop_event.is_set():
            self.socket_object.listen(5)
            log.info('Waiting for incoming connections')
            # This cannot be touched, because the accept call won't return until something connects
            client, addr = self.socket_object.accept()
            log.info('Accepted connection from %s:%s' % (addr[0], addr[1]))
            self.client_queue.put([client, addr])

    def open(self, port, bootloader=False):
        """
        Open port and load drivers
        """
        self.socket_object = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        # start_new_thread takes the function to run as its 1st argument and a tuple of arguments to that function as its 2nd
        start_new_thread(self.connection_manager, ())

    def run(self):
        """
        Loop through FIFO structures, check for incoming packets,
        send to peripheral device, receive response and deliver it
        """
        log.info('Starting SocketDriver loop')
        while not self.stop_event.is_set():
            # Watchdog touch
            open(touchfile, 'w').close()
            # Check if there is a new client available
            if not self.client_queue.empty():
                log.info('New client found!')
                if self.client != None:
                    self.client.close()
                conn_data = self.client_queue.get()
                self.client = conn_data[0]
                self.addr = conn_data[1]
                log.info('Connected to new client with address: %s:%s' % (self.addr[0], self.addr[1]))
            # Check if there is any new connection
            if self.client != None:
                log.info('Client found, checking queues')
                for out_queue, in_queue, driver_id in self.fifocom_list:
                    while not self.stop_event.is_set():
                        if not out_queue.empty():
                            self.write_packet(atmel_packet, driver_id)
                        # Read and process the response
                        ready = select.select([self.client], [], [], 1)
                        if ready[0]:
                            log.info('[%s] Reading', self.source_name)
                            atmel_packet = self.read_packet()
                        time.sleep(0.1)
The basic idea is to keep listening for new connections while communicating with actually connected client.
When I put anything else into "self.client_queue", the "if not self.client_queue.empty():" check works. However, when I put the client and address from "self.socket_object.accept()" there, it crashes:
Traceback (most recent call last):
File "/usr/lib/python2.7/multiprocessing/queues.py", line 266, in _feed
send(obj)
TypeError: expected string or Unicode object, NoneType found
Could anyone please explain what is going on here? I have read that there is a problem with moving open sockets between processes, but the thread is being run within one process.
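To clarify the intended flow, here is a stripped-down, stand-alone sketch of the accept-thread plus queue hand-off I am describing, reduced to a single process with plain threads and queue.Queue; the names and port are only illustrative (Python 3 syntax):

import queue
import socket
import threading
import time

client_queue = queue.Queue()    # accepted (socket, addr) pairs are handed over here

def connection_manager(server_sock):
    # Accept loop: hand every new connection to the main loop via the queue.
    while True:
        client, addr = server_sock.accept()
        client_queue.put((client, addr))

def main_loop():
    current = None
    while True:
        # Switch to the newest connection if one arrived.
        if not client_queue.empty():
            if current is not None:
                current.close()
            current, addr = client_queue.get()
            print('Now talking to %s:%s' % addr)
        # ... send/receive with current here ...
        time.sleep(0.1)

server_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server_sock.bind(('', 5000))
server_sock.listen(5)
threading.Thread(target=connection_manager, args=(server_sock,), daemon=True).start()
main_loop()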
This is my server program, how can it send the data received from each client to every other client?
import socket
import os
from threading import Thread
import thread

def listener(client, address):
    print "Accepted connection from: ", address
    while True:
        data = client.recv(1024)
        if not data:
            break
        else:
            print repr(data)
            client.send(data)
    client.close()

host = socket.gethostname()
port = 10016
s = socket.socket()
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
s.bind((host,port))
s.listen(3)
th = []
while True:
    print "Server is listening for connections..."
    client, address = s.accept()
    th.append(Thread(target=listener, args = (client,address)).start())
s.close()
If you need to send a message to all clients, you need to keep a collection of all clients in some way. For example:
clients = set()
clients_lock = threading.Lock()

def listener(client, address):
    print "Accepted connection from: ", address
    with clients_lock:
        clients.add(client)
    try:
        while True:
            data = client.recv(1024)
            if not data:
                break
            else:
                print repr(data)
                with clients_lock:
                    for c in clients:
                        c.sendall(data)
    finally:
        with clients_lock:
            clients.remove(client)
        client.close()
It would probably be clearer to factor parts of this out into separate functions, like a broadcast function that did all the sends.
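For instance, the broadcasting part could be pulled out into a helper along these lines (just a sketch of that refactor, reusing the clients set and clients_lock from above):

def broadcast(data, sender=None):
    # Send data to every connected client, optionally skipping the sender.
    with clients_lock:
        for c in clients:
            if c is not sender:
                try:
                    c.sendall(data)
                except Exception:
                    pass    # a dead client gets cleaned up by its own listener thread

# inside listener(), after reading data:
#     broadcast(data, sender=client)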
Anyway, this is the simplest way to do it, but it has problems:
If one client has a slow connection, everyone else could bog down writing to it. And while they're blocking on their turn to write, they're not reading anything, so you could overflow the buffers and start disconnecting everyone.
If one client has an error, the client whose thread is writing to that client could get the exception, meaning you'll end up disconnecting the wrong user.
So, a better solution is to give each client a queue, and a writer thread servicing that queue, alongside the reader thread. (You can then extend this in all kinds of ways—put limits on the queue so that people stop trying to talk to someone who's too far behind, etc.)
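A bare-bones sketch of that per-client-queue design might look like this. It assumes a client_queues dict mapping each client socket to its own Queue.Queue (queue.Queue on Python 3), guarded by clients_lock as above; those names are mine, not from the original code:

def writer(client, q):
    # One writer thread per client: it alone writes to that client's socket,
    # draining whatever other threads put on the client's queue.
    while True:
        data = q.get()
        if data is None:          # sentinel: shut this client down
            client.close()
            return
        client.sendall(data)

def broadcast(data):
    # Reader threads never block on a slow peer; they only enqueue.
    with clients_lock:
        for q in client_queues.values():
            q.put(data)

Each accepted client would then get a Thread(target=writer, args=(client, q)) started next to its reader thread.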
As Anzel points out, there's a different way to design servers besides using a thread (or two) per client: using a reactor that multiplexes all of the clients' events.
Python 3.x has some great libraries for this built in, but 2.7 only has the clunky and out-of-date asyncore/asynchat and the low-level select.
As Anzel says, Python SocketServer: sending to multiple clients has an answer using asyncore, which is worth reading. But I wouldn't actually use that. If you want to write a reactor-based server in Python 2.x, I'd either use a better third-party framework like Twisted, or find or write a very simple one that sits directly on select.
I am trying to have a client connect to my server, and have a stream of communication between them. The only reason the connection should break is due to network errors, or unless the client wants to stop talking.
The issue I am running into is keeping the handler in a tight loop, and parsing the JSON.
My server code is :
#!/usr/bin/env python
import SocketServer
import socket
import json
import time

class MyTCPServer(SocketServer.ThreadingTCPServer):
    allow_reuse_address = True

class MyTCPServerHandler(SocketServer.BaseRequestHandler):
    def handle(self):
        while 1:
            try:
                networkData = (self.request.recv(1024).strip())
                try:
                    jsonInputData = json.loads(networkData)
                    print jsonInputData
                    try:
                        if jsonInputData['type'] == 'SAY_HI':
                            print "HI"
                    except Exception, e:
                        print "no hi"
                        pass
                    try:
                        if jsonInputData['type'] == 'GO_AWAY':
                            print "Going away!"
                    except Exception, e:
                        print "no go away"
                        pass
                except Exception, e:
                    pass
                    #time.sleep(0.001)
                    #print "JSON Error", e
            except Exception, e:
                #time.sleep(0.001)
                pass
                #print "No message", e

server = MyTCPServer(('192.168.1.115', 13373), MyTCPServerHandler)
server.serve_forever()
My client code is simple :
#!/usr/bin/env python
import socket
import json
import time
import sys
hostname = '192.168.1.103'
port = 13373
try:
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.connect((hostname,port))
except Exception, e:
    print "Error, could not open socket: ", e
data = {'type':'SAY_HI'}
sock.send(json.dumps(data))
data = {'type':'SAY_BYE'}
sock.send(json.dumps(data))
Sometimes I'll see the messages being sent, "SAY_HI" and "SAY_BYE", but most of the time no data is displayed on the server side.
This question is really not clear, but calling self.request.recv(1024) is very likely not what you want to do. You're eliminating all of the nice application-level handling that TCP will happily do for you. If you change that to self.request.recv(8) or a similarly very small number (such that recv() returns whenever it receives data, and doesn't try to fill your buffer), you may get better results.
Ultimately this is a super-simplistic change that, even if it works, will not work in a larger context. You will need to handle exceptions from your JSON parser on the server side and wait for more data until an entire well-formed message has been received.
This is a hopelessly more complex subject than will be handled generally in any SO answer. If you're going to be doing any amount of raw sockets programming, you absolutely must own a copy of Unix Network Programming, Volume 1.
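As a rough sketch of that "wait for a complete message" idea, assuming the client is changed to terminate every JSON document with a newline (that framing choice is mine, not part of the code above), the receive side of handle() could accumulate bytes like this:

# inside handle():
buffer = ''
while 1:
    chunk = self.request.recv(1024)
    if not chunk:
        break                      # client closed the connection
    buffer += chunk
    while '\n' in buffer:
        line, buffer = buffer.split('\n', 1)
        try:
            message = json.loads(line)
        except ValueError:
            continue               # skip malformed messages
        if message.get('type') == 'SAY_HI':
            print "HI"

The client would then send json.dumps(data) + '\n' for each message.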
I want my python application to be able to tell when the socket on the other side has been dropped. Is there a method for this?
Short answer:
use a non-blocking recv(), or a blocking recv() / select() with a very short timeout.
Long answer:
The way to handle socket connections is to read or write as you need to, and be prepared to handle connection errors.
TCP distinguishes between 3 forms of "dropping" a connection: timeout, reset, close.
Of these, the timeout cannot really be detected; TCP might only tell you the time has not expired yet. But even if it told you that, the time might still expire right after.
Also remember that using shutdown() either you or your peer (the other end of the connection) may close only the incoming byte stream, and keep the outgoing byte stream running, or close the outgoing stream and keep the incoming one running.
So strictly speaking, you want to check if the read stream is closed, or if the write stream is closed, or if both are closed.
Even if the connection was "dropped", you should still be able to read any data that is still in the network buffer. Only after the buffer is empty will you receive a disconnect from recv().
Checking if the connection was dropped is like asking "what will I receive after reading all data that is currently buffered?" To find that out, you just have to read all data that is currently buffered.
I can see how "reading all buffered data", to get to the end of it, might be a problem for some people who still think of recv() as a blocking function. With a blocking recv(), "checking" for a read when the buffer is already empty will block, which defeats the purpose of "checking".
In my opinion any function that is documented to potentially block the entire process indefinitely is a design flaw, but I guess it is still there for historical reasons, from when using a socket just like a regular file descriptor was a cool idea.
What you can do is:
set the socket to non-blocking mode, but then you get a system-dependent error to indicate that the receive buffer is empty or the send buffer is full
stick to blocking mode but set a very short socket timeout. This will allow you to "ping" or "check" the socket with recv(), pretty much what you want to do (see the sketch after this list)
use the select() call or the asyncore module with a very short timeout. Error reporting is still system-specific.
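For example, the short-timeout variant could look roughly like this (a generic sketch, not tied to any particular code above):

import socket

def check_connection(sock, timeout=0.05):
    # Returns (dropped, data); any data read while checking must not be thrown away.
    sock.settimeout(timeout)
    try:
        data = sock.recv(4096)
    except socket.timeout:
        return False, b''     # nothing buffered yet; the peer still looks alive
    except socket.error:
        return True, b''      # e.g. connection reset by peer
    if not data:
        return True, b''      # the peer performed an orderly shutdown
    return False, data        # still alive, and data is real payload to handle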
For the write part of the problem, keeping the read buffers empty pretty much covers it. You will discover a connection "dropped" after a non-blocking read attempt, and you may choose to stop sending anything after a read returns a closed channel.
I guess the only way to be sure your sent data has reached the other end (and is not still in the send buffer) is either:
receive a proper response on the same socket for the exact message that you sent. Basically you are using the higher level protocol to provide confirmation.
perform a successful shutdown() and close() on the socket
The Python socket HOWTO says send() will return 0 bytes written if the channel is closed. You may use a non-blocking or a timeout socket.send(), and if it returns 0 you can no longer send data on that socket. But if it returns non-zero, you have already sent something, good luck with that :)
I have not considered OOB (out-of-band) socket data here as a means to approach your problem, but I think OOB is not what you meant.
It depends on what you mean by "dropped". For TCP sockets, if the other end closes the connection either through
close() or the process terminating, you'll find out by reading an end of file or getting a read error, usually with errno set to whatever your operating system uses for 'connection reset by peer'. In Python, you'll read a zero-length string, or a socket.error will be raised when you try to read from or write to the socket.
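A minimal illustration of that check (a sketch only, in Python 3 syntax; the host and port are placeholders):

import socket

sock = socket.create_connection(('example.com', 80))
try:
    data = sock.recv(1024)
    if data == b'':
        print('Peer closed the connection (end of file)')
except socket.error as e:
    print('Connection dropped:', e)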
From the link Jweede posted:
exception socket.timeout:
This exception is raised when a timeout occurs on a socket
which has had timeouts enabled via a prior call to settimeout().
The accompanying value is a string whose value is currently
always “timed out”.
Here are the demo server and client programs for the socket module from the Python docs
# Echo server program
import socket
HOST = '' # Symbolic name meaning all available interfaces
PORT = 50007 # Arbitrary non-privileged port
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind((HOST, PORT))
s.listen(1)
conn, addr = s.accept()
print 'Connected by', addr
while 1:
    data = conn.recv(1024)
    if not data: break
    conn.send(data)
conn.close()
And the client:
# Echo client program
import socket
HOST = 'daring.cwi.nl' # The remote host
PORT = 50007 # The same port as used by the server
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((HOST, PORT))
s.send('Hello, world')
data = s.recv(1024)
s.close()
print 'Received', repr(data)
On the docs example page I pulled these from, there are more complex examples that employ this idea, but here is the simple answer:
Assuming you're writing the client program, just put all your code that uses the socket while it is at risk of being dropped inside a try block...
try:
    s.connect((HOST, PORT))
    s.send("Hello, World!")
    ...
except socket.timeout:
    # whatever you need to do when the connection is dropped
If I'm not mistaken this is usually handled via a timeout.
I translated the code sample in this blog post into Python: How to detect when the client closes the connection?, and it works well for me:
from ctypes import (
    CDLL, c_int, POINTER, Structure, c_void_p, c_size_t,
    c_short, c_ssize_t, c_char, ARRAY
)

__all__ = 'is_remote_alive',

class pollfd(Structure):
    _fields_ = (
        ('fd', c_int),
        ('events', c_short),
        ('revents', c_short),
    )

MSG_DONTWAIT = 0x40
MSG_PEEK = 0x02
EPOLLIN = 0x001
EPOLLPRI = 0x002
EPOLLRDNORM = 0x040

libc = CDLL('libc.so.6')

recv = libc.recv
recv.restype = c_ssize_t
recv.argtypes = c_int, c_void_p, c_size_t, c_int

poll = libc.poll
poll.restype = c_int
poll.argtypes = POINTER(pollfd), c_int, c_int

class IsRemoteAlive:  # not needed, only for debugging
    def __init__(self, alive, msg):
        self.alive = alive
        self.msg = msg

    def __str__(self):
        return self.msg

    def __repr__(self):
        return 'IsRemoteAlive(%r,%r)' % (self.alive, self.msg)

    def __bool__(self):
        return self.alive

def is_remote_alive(fd):
    fileno = getattr(fd, 'fileno', None)
    if fileno is not None:
        if hasattr(fileno, '__call__'):
            fd = fileno()
        else:
            fd = fileno

    p = pollfd(fd=fd, events=EPOLLIN|EPOLLPRI|EPOLLRDNORM, revents=0)
    result = poll(p, 1, 0)
    if not result:
        return IsRemoteAlive(True, 'empty')

    buf = ARRAY(c_char, 1)()
    result = recv(fd, buf, len(buf), MSG_DONTWAIT|MSG_PEEK)
    if result > 0:
        return IsRemoteAlive(True, 'readable')
    elif result == 0:
        return IsRemoteAlive(False, 'closed')
    else:
        return IsRemoteAlive(False, 'errored')
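A quick way to try it out might be the following (hypothetical host and port; assumes a server is listening there):

import socket

sock = socket.create_connection(('127.0.0.1', 8888))
status = is_remote_alive(sock)       # accepts a socket object or a raw file descriptor
print(repr(status), bool(status))    # e.g. IsRemoteAlive(True,'empty') True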
Trying to improve on @kay's response, I made a more Pythonic version.
(Note that it was not yet tested in a "real-life" environment, and only on Linux)
This detects if the remote side closed the connection, without actually consuming the data:
import socket
import errno

def remote_connection_closed(sock: socket.socket) -> bool:
    """
    Returns True if the remote side did close the connection
    """
    try:
        buf = sock.recv(1, socket.MSG_PEEK | socket.MSG_DONTWAIT)
        if buf == b'':
            return True
    except BlockingIOError as exc:
        if exc.errno != errno.EAGAIN:
            # Raise on unknown exception
            raise
    return False
Here is a simple example from an asyncio echo server:
import asyncio

async def handle_echo(reader, writer):
    addr = writer.get_extra_info('peername')
    sock = writer.get_extra_info('socket')
    print(f'New client: {addr!r}')

    # Initial client command
    data = await reader.read(100)
    message = data.decode()
    print(f"Received {message!r} from {addr!r}")

    # Simulate a long async process
    for _ in range(10):
        if remote_connection_closed(sock):
            print('Remote side closed early')
            return
        await asyncio.sleep(1)

    # Write the initial message back
    print(f"Send: {message!r}")
    writer.write(data)
    await writer.drain()
    writer.close()

async def main():
    server = await asyncio.start_server(
        handle_echo, '127.0.0.1', 8888)

    addrs = ', '.join(str(sock.getsockname()) for sock in server.sockets)
    print(f'Serving on {addrs}')

    async with server:
        await server.serve_forever()

if __name__ == '__main__':
    asyncio.run(main())