I am writing a Python socket client which should:
Send out message one (e.g. "Hello") every 5 seconds and message two (e.g. "15 seconds") every 15 seconds
Receive messages at any time
I mean to do the send and receive in different threads. However, it is still blocking.
Does anyone have a suggestion?
Thread #1
threading.Thread(target=Thread2, args=(sock)).start()
sock.recv(1024)
Thread #2
def Thread2(sock):
count = 0
while True:
sleep(5)
count = count + 5
sock.send('Hello')
if count % 15 == 0:
sock.send('15 seconds')
It is not blocking. It's just that your main thread does nothing after the first sock.recv(1024). You have to tell it to keep gathering the data:
MAIN THREAD
threading.Thread(target=Thread2, args=(sock,)).start()
while True:
data = sock.recv(1024)
if not data:
break
print data
Note that you won't be able to interrupt that process easily. In order to do that you need to set the thread as a daemon:
MAIN THREAD
t = threading.Thread(target=Thread2, args=(sock,))
t.daemon = True
t.start()
while True:
data = sock.recv(1024)
if not data:
break
print data
Also, when you are passing args, remember to pass a tuple, i.e. args=(sock,) instead of args=(sock). In Python, args=(sock) is equivalent to args=sock, and a socket is not the argument tuple that Thread expects. This is probably the culprit!
I can't see any more issues in your code.
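Putting the two fixes together (the args tuple and the receive loop), a minimal self-contained sketch in Python 3 syntax; the host and port here are made up for illustration:

import socket
import threading
from time import sleep

def sender(sock):
    # send "Hello" every 5 seconds and "15 seconds" every 15 seconds
    count = 0
    while True:
        sleep(5)
        count += 5
        sock.send(b'Hello')
        if count % 15 == 0:
            sock.send(b'15 seconds')

sock = socket.create_connection(('127.0.0.1', 9000))  # hypothetical server address

t = threading.Thread(target=sender, args=(sock,))  # note the trailing comma: args must be a tuple
t.daemon = True
t.start()

while True:  # the main thread keeps receiving
    data = sock.recv(1024)
    if not data:
        break
    print(data)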
Related
I'm trying to write a python client to listen to a gRPC stream (fire hose). It constantly keeps streaming. There is no "on completion".
Proto:
rpc Start (StartParameters) returns (stream Progress) {}
In the client I tried writing the following, but as the Start RPC never signals completion, the for loop over the stream never ends and I never get control back after printing each event.
rsp = self.stub.Start(params)
for event in rsp:
print(event)
Can somebody please help me with Python code to capture all the events in rsp, stop after a timeout (2 mins), and then print each event.
I got this working; posting it in case somebody else is looking for an answer.
import queue
import threading
import time

def collect_responses(self, response_iterator, response_queue):
for response in response_iterator:
response_queue.put(response)
def call_rpc(self):
response_stream = stub.Start(params)
response_queue = queue.Queue()
thread = threading.Thread(target=self.collect_responses,
args=(response_stream, response_queue))
thread.start()
time.sleep(120) # or have a different trigger to say, cancel stream
response_stream.cancel()
thread.join()
while not response_queue.empty():
item = response_queue.get()
print(item)
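One detail worth noting: once response_stream.cancel() is called, the iterator inside collect_responses should raise grpc.RpcError (typically with status CANCELLED), so it can be cleaner to guard the loop. A sketch of that variant of the method, assuming the grpcio package:

import grpc

def collect_responses(self, response_iterator, response_queue):
    try:
        for response in response_iterator:
            response_queue.put(response)
    except grpc.RpcError as err:
        # cancel() makes the blocked iterator raise; treat CANCELLED as a normal stop
        if err.code() != grpc.StatusCode.CANCELLED:
            raise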
I need to process frames from a webcam and send a few selected frames to a remote websocket server. The server answers immediately with a confirmation message (much like an echo server).
Frame processing is slow and CPU-intensive, so I want to do it using a separate thread pool (producer) to use all the available cores. The client (consumer) just sits idle until the pool has something to send.
My current implementation, see below, works fine only if I add a small sleep inside the producer test loop. If I remove this delay I stop receiving any answer from the server (both from the echo server and from my real server). Even the first answer is lost, so I do not think this is a flood-protection mechanism.
What am I doing wrong?
import tornado
from tornado.websocket import websocket_connect
from tornado import gen, queues
import time
class TornadoClient(object):
url = None
onMessageReceived = None
onMessageSent = None
ioloop = tornado.ioloop.IOLoop.current()
q = queues.Queue()
def __init__(self, url, onMessageReceived, onMessageSent):
self.url = url
self.onMessageReceived = onMessageReceived
self.onMessageSent = onMessageSent
def enqueueMessage(self, msgData, binary=False):
print("TornadoClient.enqueueMessage")
self.ioloop.add_callback(self.addToQueue, (msgData, binary))
print("TornadoClient.enqueueMessage done")
@gen.coroutine
def addToQueue(self, msgTuple):
yield self.q.put(msgTuple)
@gen.coroutine
def main_loop(self):
connection = None
try:
while True:
while connection is None:
try:
print("Connecting...")
connection = yield websocket_connect(self.url)
print("Connected " + str(connection))
except Exception, e:
print("Exception on connection " + str(e))
connection = None
print("Retry in a few seconds...")
yield gen.Task(self.ioloop.add_timeout, time.time() + 3)
try:
print("Waiting for data to send...")
msgData, binaryVal = yield self.q.get()
print("Writing...")
sendFuture = connection.write_message(msgData, binary=binaryVal)
print("Write scheduled...")
finally:
self.q.task_done()
yield sendFuture
self.onMessageSent("Sent ok")
print("Write done. Reading...")
msg = yield connection.read_message()
print("Got msg.")
self.onMessageReceived(msg)
if msg is None:
print("Connection lost")
connection = None
print("main loop completed")
except Exception, e:
print("ExceptionExceptionException")
print(e)
connection = None
print("Exit main_loop function")
def start(self):
self.ioloop.run_sync(self.main_loop)
print("Main loop completed")
######### TEST METHODS #########
def sendMessages(client):
time.sleep(2) #TEST only: wait for client startup
while True:
client.enqueueMessage("msgData", binary=False)
time.sleep(1) # <--- comment this line to break it
def testPrintMessage(msg):
print("Received: " + str(msg))
def testPrintSentMessage(msg):
print("Sent: " + msg)
if __name__=='__main__':
from threading import Thread
client = TornadoClient("ws://echo.websocket.org", testPrintMessage, testPrintSentMessage)
thread = Thread(target = sendMessages, args = (client, ))
thread.start()
client.start()
My real problem
In my real program I use a "window like" mechanism to protect the consumer (an autobahn.twisted.websocket server): the producer can send up to a maximum number of unacknowledged messages (the webcam frames), then stops and waits for half of the window to free up.
The consumer sends a "PROCESSED" message back acknowledging one or more messages (just a counter, not by id).
What I see in the consumer log is that the messages are processed and the answer is sent back, but these acks vanish somewhere in the network.
I have little experience with async I/O, so I wanted to be sure I'm not missing a yield, a decorator or something else.
This is the consumer side log:
2017-05-13 18:59:54+0200 [-] TX Frame to tcp4:192.168.0.5:48964 : fin = True, rsv = 0, opcode = 1, mask = -, length = 21, repeat_length = None, chopsize = None, sync = False, payload = {"type": "PROCESSED"}
2017-05-13 18:59:54+0200 [-] TX Octets to tcp4:192.168.0.5:48964 : sync = False, octets = 81157b2274797065223a202250524f434553534544227d
This is neat code. I believe the reason you need a sleep in your sendMessages thread is because, otherwise, it keeps calling enqueueMessage as fast as possible, millions of times per second. Since enqueueMessage does not wait for the enqueued message to be processed, it keeps calling IOLoop.add_callback as fast as it can, without giving the loop enough opportunity to execute the callbacks.
The loop might make some progress running on the main thread, since you're not actually blocking it. But the sendMessages thread adds callbacks much faster than the loop can handle them. By the time the loop has popped one message from the queue and has begun to process it, millions of new callbacks are added already, which the loop must execute before it can advance to the next stage of message-processing.
Therefore, for your test code, I think it's correct to sleep between calls to enqueueMessage on the thread.
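If you would rather not rely on a sleep at all, one generic back-pressure pattern (not Tornado-specific; the frame-grabbing function below is a made-up placeholder) is to cap the number of messages in flight with a semaphore that the producer acquires before enqueueing and that the sent-callback releases:

import threading
import time

MAX_PENDING = 10  # at most 10 frames waiting to be sent

pending = threading.BoundedSemaphore(MAX_PENDING)

def grab_and_process_frame():
    # placeholder for the real CPU-heavy frame processing
    time.sleep(0.01)
    return b"frame-bytes"

def produce_frames(client):
    while True:
        frame = grab_and_process_frame()
        pending.acquire()                 # blocks once MAX_PENDING frames are unacknowledged
        client.enqueueMessage(frame, binary=True)

def on_message_sent(info):
    pending.release()                     # one slot freed per completed write

Here on_message_sent would be wired in as the onMessageSent callback of TornadoClient, so the producer thread slows down to the pace of the websocket instead of to an arbitrary sleep.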
I have just written a very simple UDP chatting program for fun. It works as intended, but now I have no idea how to exit it safely. It seems that calling sys.exit() is not acceptable because of the problem described here: Why does sys.exit() not exit when called inside a thread in Python?
Simply raising the signal with Ctrl+C fails because it gets intercepted by raw_input().
Is there any decent way to deal with this?
Here is my code snippet:
import socket
import threading
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
address = ('192.168.1.xxx', 31500)
target_address = ('192.168.1.xxx', 31500)
s.bind(address)
print('waiting for input...')
def rcv():
while True:
data, addr = s.recvfrom(2048)
if not data:
continue
print 'Received: #### ', data
print '\n'
def send():
while True:
msg = raw_input()
if not msg:
continue
s.sendto(msg, target_address)
t1 = threading.Thread(target = rcv)
t2 = threading.Thread(target = send)
t1.start()
t2.start()
Replacing my previous answer:
I modified your code to:
import socket
import threading
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
address = ('192.168.1.xxx', 31500)
target_address = ('192.168.1.xxx', 31500)
s.bind(address)
EXIT_MSG_GUARD = "#Quit!"
print('waiting for input...')
def rcv():
while True:
data, addr = s.recvfrom(2048)
if not data:
continue
print 'Received: #### ', data
print '\n'
def send():
while True:
msg = raw_input()
if not msg:
continue
else:
s.sendto(msg, target_address)
if msg == EXIT_MSG_GUARD:
return
t1 = threading.Thread(target = rcv)
t2 = threading.Thread(target = send)
t1.setDaemon(True)
t1.start()
t2.start()
There are 2 things that I did:
Add a message guard (in my example #Quit!, but you can change it to anything you like): when the user inputs that text, after sending it, the t2 thread ends.
Make thread t1 daemonic, meaning that once the main thread and all non-daemon threads have ended, it will end too and the program will exit.
Note: Daemon threads are abruptly stopped at shutdown. Their resources (such as open files, database transactions, etc.) may not be released properly. If you want your threads to stop gracefully, make them non-daemonic and use a suitable signalling mechanism such as an Event.
As an alternative to daemon threads, you could use the solution posted here (beware of its limitations as well!). It is nicer (and recommended), but it will require more work, including a small change to your rcv function.
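For completeness, a sketch of that Event-based approach (Python 3 syntax; the one-second timeout is arbitrary): give the socket a timeout so the receive loop can periodically check a stop flag instead of blocking in recvfrom forever.

import socket
import threading

stop = threading.Event()

def rcv(sock):
    sock.settimeout(1.0)  # wake up once a second to check the flag
    while not stop.is_set():
        try:
            data, addr = sock.recvfrom(2048)
        except socket.timeout:
            continue
        if data:
            print('Received: ####', data.decode())

# in send(), after the quit command has been sent:
#     stop.set()   # rcv() exits within a second and the thread can be join()ed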
The issue I have is that my chat client is supposed to receive and print data from the server when the server sends it, and then allow the client to reply.
This works fine, except that the entire process stops when the client is prompted to reply. So messages pile up until you type something, and only after you do that does it print all the received messages.
Not sure how to fix this, so I figured: why not have the client's reply prompt time out after 5 seconds, so that incoming messages can come through regardless? It's pretty flawed, because the input will reset itself, but it still works better.
Here's the function that needs to have a timeout:
# now for outgoing data
def outgoing():
global out_buffer
while 1:
user_input=input("your message: ")+"\n"
if user_input:
out_buffer += [user_input.encode()]
# for i in wlist:
s.send(out_buffer[0])
out_buffer = []
How should I go about using a timeout? I was thinking of using time.sleep, but that just pauses the entire operation.
I tried looking for documentation, but I didn't find anything that would help me make the program count up to a set limit and then continue.
Any ideas about how to solve this? (It doesn't need to use a timeout, it just needs to stop the message pile-up before the client's reply can be sent.) (Thanks to all who helped me get this far.)
For Ionut Hulub:
from socket import *
import threading
import json
import select
import signal # for trying to create timeout
print("client")
HOST = input("connect to: ")
PORT = int(input("on port: "))
# create the socket
s = socket(AF_INET, SOCK_STREAM)
s.connect((HOST, PORT))
print("connected to:", HOST)
#--------- need 2 threads for handling incoming and outgoing messages--
# 1: create out_buffer:
out_buffer = []
# for incoming data
def incoming():
rlist,wlist,xlist = select.select([s], out_buffer, [])
while 1:
for i in rlist:
data = i.recv(1024)
if data:
print("\nreceived:", data.decode())
# now for outgoing data
def outgoing():
global out_buffer
while 1:
user_input=input("your message: ")+"\n"
if user_input:
out_buffer += [user_input.encode()]
# for i in wlist:
s.send(out_buffer[0])
out_buffer = []
thread_in = threading.Thread(target=incoming, args=())
thread_out = threading.Thread(target=outgoing, args=())
thread_in.start() # this causes the thread to run
thread_out.start()
thread_in.join() # this waits until the thread has completed
thread_out.join()
We can use signals for this. I think the example below will be useful for you.
import signal
def timeout(signum, frame):
raise Exception
#this is an infinite loop, never ending under normal circumstances
def main():
print 'Starting Main ',
while 1:
print 'in main ',
#SIGALRM is only usable on a unix platform
signal.signal(signal.SIGALRM, timeout)
#change 5 to however many seconds you need
signal.alarm(5)
try:
main()
except:
print "whoops"
I am currently working on a school project where the assignment, among other things, is to set up a threaded server/client system. Each client in the system is supposed to be assigned its own thread on the server when connecting to it. In addition I would like the server to run other threads, one handling input from the command line and another broadcasting messages to all clients. However, I can't get this to run as I want to. It seems like the threads are blocking each other. I would like my program to take input from the command line at the "same time" as the server listens to connected clients, and so on.
I am new to Python programming and multithreading, and although I think my idea is good, I'm not surprised my code doesn't work. Thing is, I'm not exactly sure how to implement the message passing between the different threads, nor am I sure exactly how to use the resource locks properly. I'm going to post the code for my server file and my client file here, and I hope someone can help me with this. I think these actually should be two relatively simple scripts. I have tried to comment my code as well as I could.
import select
import socket
import sys
import threading
import client
class Server:
#initializing server socket
def __init__(self, event):
self.host = 'localhost'
self.port = 50000
self.backlog = 5
self.size = 1024
self.server = None
self.server_running = False
self.listen_threads = []
self.local_threads = []
self.clients = []
self.serverSocketLock = None
self.cmdLock = None
#here i have also declared some events for the command line input
#and the receive function respectively, not sure if correct
self.cmd_event = event
self.socket_event = event
def openSocket(self):
#binding server to port
try:
self.server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
self.server.bind((self.host, self.port))
self.server.listen(5)
print "Listening to port " + str(self.port) + "..."
except socket.error, (value,message):
if self.server:
self.server.close()
print "Could not open socket: " + message
sys.exit(1)
def run(self):
self.openSocket()
#making Rlocks for the socket and for the command line input
self.serverSocketLock = threading.RLock()
self.cmdLock = threading.RLock()
#set blocking to non-blocking
self.server.setblocking(0)
#making two threads always running on the server,
#one for the command line input, and one for broadcasting (sending)
cmd_thread = threading.Thread(target=self.server_cmd)
broadcast_thread = threading.Thread(target=self.broadcast,args=[self.clients])
cmd_thread.daemon = True
broadcast_thread.daemon = True
#append the threads to thread list
self.local_threads.append(cmd_thread)
self.local_threads.append(broadcast_thread)
cmd_thread.start()
broadcast_thread.start()
self.server_running = True
while self.server_running:
#connecting to "knocking" clients
try:
c = client.Client(self.server.accept())
self.clients.append(c)
print "Client " + str(c.address) + " connected"
#making a thread for each clientn and appending it to client list
listen_thread = threading.Thread(target=self.listenToClient,args=[c])
self.listen_threads.append(listen_thread)
listen_thread.daemon = True
listen_thread.start()
#setting event "client has connected"
self.socket_event.set()
except socket.error, (value, message):
continue
#close threads
self.server.close()
print "Closing client threads"
for c in self.listen_threads:
c.join()
def listenToClient(self, c):
while self.server_running:
#the idea here is to wait until the thread gets the message "client
#has connected"
self.socket_event.wait()
#then clear the event immediately...
self.socket_event.clear()
#and aquire the socket resource
self.serverSocketLock.acquire()
#the below is the receive thingy
try:
recvd_data = c.client.recv(self.size)
if recvd_data == "" or recvd_data == "close\n":
print "Client " + str(c.address) + (" disconnected...")
self.socket_event.clear()
self.serverSocketLock.release()
return
print recvd_data
#I put these here to avoid locking the resource if no message
#has been received
self.socket_event.clear()
self.serverSocketLock.release()
except socket.error, (value, message):
continue
def server_cmd(self):
#this is a simple command line utility
while self.server_running:
#got to have a smart way to make this work
self.cmd_event.wait()
self.cmd_event.clear()
self.cmdLock.acquire()
cmd = sys.stdin.readline()
if cmd == "":
continue
if cmd == "close\n":
print "Server shutting down..."
self.server_running = False
self.cmdLock.release()
def broadcast(self, clients):
while self.server_running:
#this function will broadcast a message received from one
#client, to all other clients, but i guess any thread
#aspects applied to the above, will work here also
try:
send_data = sys.stdin.readline()
if send_data == "":
continue
else:
for c in clients:
c.client.send(send_data)
self.serverSocketLock.release()
self.cmdLock.release()
except socket.error, (value, message):
continue
if __name__ == "__main__":
e = threading.Event()
s = Server(e)
s.run()
And then the client file
import select
import socket
import sys
import server
import threading
class Client(threading.Thread):
#initializing client socket
def __init__(self,(client,address)):
threading.Thread.__init__(self)
self.client = client
self.address = address
self.size = 1024
self.client_running = False
self.running_threads = []
self.ClientSocketLock = None
def run(self):
#connect to server
self.client.connect(('localhost',50000))
#making a lock for the socket resource
self.clientSocketLock = threading.Lock()
self.client.setblocking(0)
self.client_running = True
#making two threads, one for receiving messages from server...
listen = threading.Thread(target=self.listenToServer)
#...and one for sending messages to server
speak = threading.Thread(target=self.speakToServer)
#not actually sure what daemon means
listen.daemon = True
speak.daemon = True
#appending the threads to the thread-list
self.running_threads.append(listen)
self.running_threads.append(speak)
listen.start()
speak.start()
#this while-loop is just for avoiding the script terminating
while self.client_running:
dummy = 1
#closing the threads if the client goes down
print "Client operating on its own"
self.client.close()
#close threads
for t in self.running_threads:
t.join()
return
#defining "listen"-function
def listenToServer(self):
while self.client_running:
#here i acquire the socket to this function, but i realize I also
#should have a message passing wait()-function or something
#somewhere
self.clientSocketLock.acquire()
try:
data_recvd = self.client.recv(self.size)
print data_recvd
except socket.error, (value,message):
continue
#releasing the socket resource
self.clientSocketLock.release()
#defining "speak"-function, doing much the same as for the above function
def speakToServer(self):
while self.client_running:
self.clientSocketLock.acquire()
try:
send_data = sys.stdin.readline()
if send_data == "close\n":
print "Disconnecting..."
self.client_running = False
else:
self.client.send(send_data)
except socket.error, (value,message):
continue
self.clientSocketLock.release()
if __name__ == "__main__":
c = Client((socket.socket(socket.AF_INET, socket.SOCK_STREAM),'localhost'))
c.run()
I realize this is quite a few lines of code for you to read through, but as I said, I think the concept and the script in itself should be quite simple to understand. It would be very much appreciated if someone could help me synchronize my threads in a proper way =)
Thanks in advance
---Edit---
OK. So I have now simplified my code to contain just send and receive functions in both the server and the client modules. The clients connecting to the server get their own threads, and the send and receive functions in both modules operate in their own separate threads. This works like a charm, with the broadcast function in the server module echoing strings it gets from one client to all clients. So far so good!
The next thing I want my script to do is take specific commands, i.e. "close", in the client module to shut down the client and join all running threads in the thread list. I'm using an event flag to notify the listenToServer and the main thread that the speakToServer thread has read the input "close". It seems like the main thread jumps out of its while loop and starts the for loop that is supposed to join the other threads, but there it hangs. It seems like the while loop in the listenToServer thread never stops, even though client_running should be set to False when the event flag is set.
I'm posting only the client module here, because I guess an answer to get these two threads to synchronize will relate to synchronizing more threads in both the client and the server module also.
import select
import socket
import sys
import server_bygg0203
import threading
from time import sleep
class Client(threading.Thread):
#initializing client socket
def __init__(self,(client,address)):
threading.Thread.__init__(self)
self.client = client
self.address = address
self.size = 1024
self.client_running = False
self.running_threads = []
self.ClientSocketLock = None
self.disconnected = threading.Event()
def run(self):
#connect to server
self.client.connect(('localhost',50000))
#self.client.setblocking(0)
self.client_running = True
#making two threads, one for receiving messages from server...
listen = threading.Thread(target=self.listenToServer)
#...and one for sending messages to server
speak = threading.Thread(target=self.speakToServer)
#not actually sure what daemon means
listen.daemon = True
speak.daemon = True
#appending the threads to the thread-list
self.running_threads.append((listen,"listen"))
self.running_threads.append((speak, "speak"))
listen.start()
speak.start()
while self.client_running:
#check if event is set, and if it is
#set while statement to false
if self.disconnected.isSet():
self.client_running = False
#closing the threads if the client goes down
print "Client operating on its own"
self.client.shutdown(1)
self.client.close()
#close threads
#the script hangs at the for-loop below, and
#refuses to close the listen-thread (and possibly
#also the speak thread, but it never gets that far)
for t in self.running_threads:
print "Waiting for " + t[1] + " to close..."
t[0].join()
self.disconnected.clear()
return
#defining "speak"-function
def speakToServer(self):
#sends strings to server
while self.client_running:
try:
send_data = sys.stdin.readline()
self.client.send(send_data)
#I want the "close" command
#to set an event flag, which is being read by all other threads,
#and, at the same time set the while statement to false
if send_data == "close\n":
print "Disconnecting..."
self.disconnected.set()
self.client_running = False
except socket.error, (value,message):
continue
return
#defining "listen"-function
def listenToServer(self):
#receives strings from server
while self.client_running:
#check if event is set, and if it is
#set while statement to false
if self.disconnected.isSet():
self.client_running = False
try:
data_recvd = self.client.recv(self.size)
print data_recvd
except socket.error, (value,message):
continue
return
if __name__ == "__main__":
c = Client((socket.socket(socket.AF_INET, socket.SOCK_STREAM),'localhost'))
c.run()
Later on, when I get this server/client system up and running, I will use it on some elevator models we have here in the lab, with each client receiving floor orders or "up" and "down" calls. The server will run a distribution algorithm and update the elevator queues on the clients that are most appropriate for the requested order. I realize it's a long way to go, but I guess one should just take one step at a time =)
Hope someone has the time to look into this. Thanks in advance.
The biggest problem I see with this code is that you have far too much going on right away to easily debug your problem. Threading can get extremely complicated because of how non-linear the logic becomes. Especially when you have to worry about synchronizing with locks.
The reason you are seeing clients blocking on each other is because of the way you are using your serverSocketLock in your listenToClient() loop in the server. To be honest this isn't exactly your problem right now with your code, but it became the problem when I started to debug it and turned the sockets into blocking sockets. If you are putting each connection into its own thread and reading from them, then there is no reason to use a global server lock here. They can all read from their own sockets at the same time, which is the purpose of the thread.
Here is my recommendation to you:
Get rid of all the locks and extra threads that you don't need, and start from the beginning
Have the clients connect as you do, and put them in their thread as you do. And simply have them send data every second. Verify that you can get more than one client connecting and sending, and that your server is looping and receiving. Once you have this part working, you can move on to the next part.
Right now you have your sockets set to non-blocking. This is causing them all to spin really fast over their loops when data is not ready. Since you are threading, you should set them to block. Then the reader threads will simply sit and wait for data and respond immediately.
Locks are used when threads will be accessing shared resources. You obviously need one any time a thread will try to modify a server attribute like a list or a value (see the lock sketch after this list), but not when threads are working on their own private sockets.
The event you are using to trigger your readers doesn't seem necessary here. You have received the client, and you start the thread afterwards. So it is ready to go.
In a nutshell... simplify and test one bit at a time. When it's working, add more. There are too many threads and locks right now.
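To make the lock point concrete, here is a minimal sketch of the idea (the names mirror the question's Server/Client classes, but the code is only illustrative): protect the shared clients list, not the per-client sockets.

import threading

clients_lock = threading.Lock()
clients = []

def add_client(c):
    with clients_lock:        # the accept loop and other threads share this list
        clients.append(c)

def broadcast(data):
    with clients_lock:        # snapshot under the lock...
        targets = list(clients)
    for c in targets:         # ...then send outside the critical section
        c.client.send(data)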
Here is a simplified example of your listenToClient method:
def listenToClient(self, c):
while self.server_running:
try:
recvd_data = c.client.recv(self.size)
print "received:", c, recvd_data
if recvd_data == "" or recvd_data == "close\n":
print "Client " + str(c.address) + (" disconnected...")
return
print recvd_data
except socket.error, (value, message):
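# errno 35 is EAGAIN/EWOULDBLOCK on BSD/macOS (11 on Linux): no data ready on a non-blocking socket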
if value == 35:
continue
else:
print "Error:", value, message
Backup your work, then toss it - partially.
You need to implement your program in pieces, and test each piece as you go. First, tackle the input part of your program. Don't worry about how to broadcast the input you received. Instead worry that you are able to successfully and repeatedly receive input over your socket. So far - so good.
Now, I assume you would like to react to this input by broadcasting to the other attached clients. Well, too bad, you can't do that yet, because I left one minor detail out of the paragraph above. You have to design a PROTOCOL.
What is a protocol? It's a set of rules for communication. How does your server know when the client has finished sending its data? Is it terminated by some special character? Or perhaps you encode the size of the message as the first byte or two of the message.
This is turning out to be a lot of work, isn't it? :-)
What's a simple protocol? A line-oriented protocol is simple. Read one character at a time until you get to the end-of-record terminator, '\n'. So clients would send records like this to your server --
HELO\n
MSG DAVE Where Are Your Kids?\n
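A sketch of the receiving side of such a line-oriented protocol (Python 3 syntax; the helper name is made up): buffer whatever recv() returns and split complete records on the newline.

def read_records(sock):
    """Yield one newline-terminated record at a time from a connected socket."""
    buffer = b""
    while True:
        chunk = sock.recv(1024)
        if not chunk:                 # peer closed the connection
            break
        buffer += chunk
        while b"\n" in buffer:
            line, buffer = buffer.split(b"\n", 1)
            yield line.decode()

# usage inside the per-client thread on the server:
#     for record in read_records(conn):
#         handle(record)             # e.g. "HELO" or "MSG DAVE Where Are Your Kids?"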
So, assuming you have this simple protocol designed, implement it. For now, DON'T WORRY ABOUT THE MULTITHREADING STUFF! Just worry about making it work.
Your current protocol is to read 1024 bytes, which may not be bad; just make sure you send 1024-byte messages from the client.
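If you do stick with fixed-size messages, keep in mind that recv(1024) may legally return fewer than 1024 bytes, so a small helper (hypothetical name) that loops until the whole message has arrived is usually needed:

def recv_exactly(sock, size):
    """Read exactly size bytes from a connected socket, or return None if it closes early."""
    data = b""
    while len(data) < size:
        chunk = sock.recv(size - len(data))
        if not chunk:
            return None
        data += chunk
    return data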
Once you have the protocol stuff setup, move on to reacting to the input. But for now you need something that will read input. Once that is done, we can worry about doing something with it.
jdi is right, you have too much program to work with. Pieces are easier to fix.