The problem I've got right now concerns a chat client I've been trying to get working for some days now. It's supposed to be an upgrade of my original chat client, which could only reply to people if it received a message first.
So after asking around and doing some research, I decided to use select.select to handle my client.
Problem is, it has the same problem as always:
*The loop gets stuck on receiving and won't complete until it receives something*
Here's what I wrote so far:
import select
from socket import *  # socket, AF_INET, SOCK_STREAM
import sys  # because why not?
import threading
import queue

print("New Chat Client Using Select Module")
HOST = input("Host: ")
PORT = int(input("Port: "))
s = socket(AF_INET, SOCK_STREAM)
print("Trying to connect....")
s.connect((HOST, PORT))
s.setblocking(0)  # non-blocking, so select decides when we read
print("You just connected to", HOST)

# Lets now try to handle the client a different way!
while True:
    # Attempting to create a few threads
    Reading_Thread = threading.Thread(None, s)
    Reading_Thread.start()
    Writing_Thread = threading.Thread()
    Writing_Thread.start()
    Incoming_data = [s]
    Exportable_data = []
    Exceptions = []
    User_input = input("Your message: ")
    rlist, wlist, xlist = select.select(Incoming_data, Exportable_data, Exceptions)
    if User_input == True:
        Exportable_data += [User_input]
You're probably wondering why I've got threading and queues in there. That's because people told me I could solve the problem by using threading and queues, but after reading the documentation and looking for video tutorials or examples that matched my case, I still don't know how to use them to make my client work.
Could someone please help me out here? I just need a way to let the user enter as many messages as they like without waiting for a reply. This is just one of the ways I am trying to do it.
Normally you'd create a function whose while True loop receives the data and writes it to some buffer or queue that your main thread has access to.
You'd need to synchronize access to that buffer so as to avoid data races; in Python, queue.Queue handles this synchronization for you.
I'm not too familiar with Python's threading API, but creating a function which runs in a thread can't be that hard. Lemme find an example.
Turns out you can create a class that derives from threading.Thread and override its run() method. Then you create an instance of your class and start the thread that way.
import threading
import time

class WorkerThread(threading.Thread):
    def run(self):
        while True:
            print('Working hard')
            time.sleep(0.5)

def runstuff():
    worker = WorkerThread()
    worker.start()  # start thread here, which will call run()
You can also use a simpler API: create a function and call thread.start_new_thread(fun, args) on it, which will run that function in a thread. (The thread module was renamed to _thread in Python 3.)
def fun():
    while True:
        pass  # do stuff

thread.start_new_thread(fun, ())  # run in thread; args must be a (possibly empty) tuple
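Putting that together for your chat client, here's a minimal sketch (my own, assuming a connected socket and the HOST/PORT you already read in) of the receive-thread-plus-queue pattern. The reader thread blocks on recv() in the background, while the main thread stays free to block on input() and drains the queue each time around its loop:

import queue
import socket
import threading

incoming = queue.Queue()  # thread-safe: no manual locking needed

def reader(sock, q):
    # Block on recv() in the background and queue whatever arrives.
    while True:
        data = sock.recv(4096)
        if not data:  # server closed the connection
            q.put(None)
            return
        q.put(data.decode())

s = socket.create_connection((HOST, PORT))  # HOST and PORT as in your script
threading.Thread(target=reader, args=(s, incoming), daemon=True).start()

while True:
    # Show anything that arrived while we were typing.
    try:
        while True:
            msg = incoming.get_nowait()
            if msg is None:
                raise SystemExit("Server closed the connection")
            print("<<", msg)
    except queue.Empty:
        pass
    s.sendall(input("Your message: ").encode())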
Related
Essentially I'm using the socketserver Python library to try and handle communications from a central server to multiple Raspberry Pi 4 and ESP32 peripherals. Currently I have the socketserver running serve_forever; the request handler then calls a method from a ProcessManager class, which starts a process that should handle the actual communication with the client.
It works fine if I use .join() on the process so that the ProcessManager method doesn't exit, but that's not how I would like it to run. Without .join() I get a broken pipe error as soon as the client communication process tries to send a message back to the client.
This is the process manager class; it gets defined in the main file, and buildprocess is called through the request handler of the socketserver class:
import multiprocessing as mp
mp.allow_connection_pickling()
import queuemanager as qm
import hostmain as hmain
import camproc
import keyproc
import controlproc

# method that gets called into a process so that class and socket share memory
def callprocess(periclass, peritype, clientsocket, inqueue, genqueue):
    periclass.startup(clientsocket)

class ProcessManager(qm.QueueManager):
    def wipeproc(self, target):
        # TODO make wipeproc integrate with the queue manager rather than directly to the class
        for macid in list(self.procdict.keys()):
            if target == macid:
                # calls proc kill for the class
                try:
                    self.procdict[macid]["class"].prockill()
                except Exception as e:
                    print("exception:", e, "in wipeproc")
                # waits for process to exit naturally (class threads to close)
                self.procdict[macid]["process"].join()
                # remove dict entry for this macid
                self.procdict.pop(macid)

    # called externally to create the new process and append to procdict
    def buildprocess(self, peritype, macid, clientsocket):
        # TODO put some logic here to handle the differences of the controller process
        # generates queue object
        inqueue = mp.Queue()
        # creates periclass instance based on type
        if peritype == hmain.cam:
            periclass = camproc.CamMain(self, inqueue, self.genqueue)
        elif peritype == hmain.keypad:
            print("to be added to")
        elif peritype == hmain.motion:
            print("to be added to")
        elif peritype == hmain.controller:
            print("to be added to")
        # init and start call for the new process
        self.procdict[macid] = {"type": peritype, "inqueue": inqueue, "class": periclass, "process": None}
        self.procdict[macid]["process"] = mp.Process(target=callprocess,
                args=(self.procdict[macid]["class"], self.procdict[macid]["type"], clientsocket, self.procdict[macid]["inqueue"], self.genqueue))
        self.procdict[macid]["process"].start()
        # updating the process dictionary before class obj gets appended
        # if macid in list(self.procdict.keys()):
        #     self.wipeproc(macid)
        print(self.procdict)
        print("client added")
To my eye, all the pertinent objects should be stored in the procdict dictionary, but as I mentioned, it just gets a broken pipe error unless I join the process with self.procdict[macid]["process"].join() before the end of the buildprocess method.
I would like it to exit the method but leave the communication process running as is. I've tried a few different things with restructuring what gets defined within the process and without, but to no avail. Thus far I haven't been able to find any pertinent solutions online, but of course I may have missed something too.
Thank you for reading this far if you did! I've been stuck on this for a couple of days, so any help would be appreciated; this is my first project with multiprocessing and sockets on any sort of scale.
#################
Edit to include pastebin with all the code:
https://pastebin.com/u/kadytoast/1/PPWfyCFT
Without .join() I get a broken pipe error as soon as the client communication process tries to send a message back to the client.

That's because by the time the request handler's handle() returns, socketserver shuts down the connection. socketserver simplifies the task of writing network servers, which means it does certain things automatically that are usually done in the course of network request handling. Your code is not quite making the intended use of socketserver. In particular, the asynchronous mixins are intended for handling requests asynchronously: with the ForkingMixIn, the server spawns a new process for each request, in contrast to your current code, which does this by itself with mp.Process. So I think you have basically two options:
code less of the request handling yourself and use the provided socketserver machinery (see the sketch after this list)
stay with your own handling and don't use socketserver at all, so it won't get in the way.
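For the first option, a rough sketch (the handler name and dispatch are illustrative, not your actual modules): ForkingTCPServer combines TCPServer with the ForkingMixIn, so each connection gets its own process, and the connection stays open exactly as long as handle() runs:

import socketserver

class PeripheralHandler(socketserver.BaseRequestHandler):
    # One forked process per client; keep the whole conversation in handle().
    def handle(self):
        while True:
            data = self.request.recv(4096)
            if not data:  # client disconnected
                break
            # hypothetical dispatch; your camproc/keyproc logic would go here
            self.request.sendall(b"ack: " + data)

if __name__ == "__main__":
    # ForkingTCPServer is POSIX-only; ThreadingTCPServer works everywhere.
    with socketserver.ForkingTCPServer(("0.0.0.0", 5000), PeripheralHandler) as server:
        server.serve_forever()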
I am running a Python app where I, for various reasons, have to host my program on a server in one part of the world and the database in another.
I tested via a simple script: from my home, which is in a neighboring country to the database server, the time to write and retrieve a row from the database is about 0.035 seconds (which is a nice speed, imo), compared to 0.16 seconds when my Python server on the other end of the world performs the same action.
This is an issue, as I am trying to keep my Python app as fast as possible, so I was wondering if there is a smart way to do this.
As I am running my code synchronously, my program waits every time it has to write to the db, which happens about 3 times a second, so the time adds up. Is it possible to run the connection to the database in a separate thread or something, so it doesn't halt the whole program while it tries to send data to the database? Or can this be done using asyncio (I have no experience with async code)?
I am really struggling to figure out a good way to solve this issue.
In advance, many thanks!
Yes, you can create a thread that does the writes in the background. In your case, it seems reasonable to have a queue where the main thread puts things to be written and the db thread gets and writes them. The queue can have a maximum depth so that when too much stuff is pending, the main thread waits. You could also do something different, like dropping writes that arrive too fast. Or use a db with synchronization and write to a local copy. You may also have an opportunity to speed up the writes a bit by committing several at once.
This is a sketch of a worker thread
import threading
import queue

class SqlWriterThread(threading.Thread):
    def __init__(self, db_connect_info, maxsize=8):
        super().__init__()
        self.db_connect_info = db_connect_info
        self.q = queue.Queue(maxsize)
        # TODO: Can expose q.put directly if you don't need to
        # intercept the call
        # self.put = q.put
        self.start()

    def put(self, statement):
        print(f"DEBUG: Putting\n{statement}")
        self.q.put(statement)

    def run(self):
        db_conn = None
        while True:
            # get all the statements you can, waiting on the first
            statements = [self.q.get()]
            try:
                while True:
                    statements.append(self.q.get(block=False))
            except queue.Empty:
                pass
            try:
                # early exit before connecting if channel is closed.
                if statements[0] is None:
                    return
                if not db_conn:
                    db_conn = do_my_sql_connect()
                try:
                    print("Debug: Executing\n", "--------\n".join(f"{id(s)} {s}" for s in statements))
                    # todo: need to detect closed connection, then reconnect and restart loop
                    cursor = db_conn.cursor()
                    for statement in statements:
                        if statement is None:
                            return
                        cursor.execute(*statement)
                finally:
                    db_conn.commit()  # commit on the connection, not the cursor
            finally:
                for _ in statements:
                    self.q.task_done()
sql_writer = SqlWriterThread(('user', 'host', 'credentials'))
sql_writer.put(('execute some stuff',))
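Since run() treats a queued None as a poison pill, a clean shutdown might look like this:

sql_writer.put(None)  # poison pill: run() returns once it dequeues this
sql_writer.join()     # wait for any pending writes to finish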
Trying to fix a friend's code where the loop doesn't continue until a for loop is satisfied. I feel it is something wrong with the readbuffer. Basically, we want the while loop to loop continuously, but if the for loop is satisfied, run that. If someone could help me understand what is happening in the readbuffer and temp, I'd be greatly thankful.
Here's the snippet:
s = openSocket()
joinRoom(s)
readbuffer = ""
while True:
    readbuffer = readbuffer + s.recv(1024)
    temp = string.split(readbuffer, "\n")
    readbuffer = temp.pop()
    for line in temp:
        user = getUser(line)
        message = getMessage(line)
Based on my understanding of your question, you want to execute the for loop while continuing to receive packets.
I'm not sure what you do in getUser and getMessage; if there are I/O operations (read/write files, DB I/O, send/recv, ...) in them, you can use the async features in Python to write asynchronous programs. (See: https://docs.python.org/3/library/asyncio-task.html)
I assume, however, you are just extracting a single element from line, which involves no I/O operations. In that case, async won't help. If getUser and getMessage really take too much CPU time, you can put the for loop in a new thread, making the string operations non-blocking. (See: https://docs.python.org/3/library/threading.html)
from threading import Thread

def getUserProfile(lines, profiles):
    for line in lines:
        user = getUser(line)
        message = getMessage(line)
        profiles.append((user, message))

profiles = []
threads = []
s = openSocket()
joinRoom(s)
while True:
    readbuffer = s.recv(1024)
    lines = readbuffer.decode('utf-8').split('\n')
    t = Thread(target=getUserProfile, args=(lines, profiles))
    t.start()  # start(), not run(): run() would execute in the main thread
    threads.append(t)

# If somehow the loop may be interrupted,
# these two lines should be added to wait for all threads to finish
for th in threads:
    th.join()  # will block main thread until all threads are terminated
Update
Of course this is not a typical way to solve this issue; it's just easier to understand for beginners and for simple assignments.
One better way is to use something like a Future, making send/recv asynchronous, and pass a callback to it so that it can hand the received data to your callback. If you want to move heavy CPU workload to another thread or an endless loop (routine), just create a Thread in the callback or somewhere else, depending on your architecture design.
I implemented a lightweight distributed computing framework for my network programming course, and I wrote my own Future class for the project, if anyone is interested.
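For reference, the standard library ships a ready-made Future; a hypothetical version of that callback approach with concurrent.futures (parse_line is an invented stand-in for getUser/getMessage) might look like:

from concurrent.futures import ThreadPoolExecutor

pool = ThreadPoolExecutor(max_workers=1)

def handle_result(future):
    # Runs once the submitted recv() has completed.
    user, message = parse_line(future.result())  # parse_line is hypothetical
    print(user, message)

future = pool.submit(s.recv, 1024)  # s is the connected socket from above
future.add_done_callback(handle_result)

In a real client the callback would submit the next recv() itself, so receiving keeps going without blocking the main thread.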
Level: beginner. I have a confusion regarding the thread creation methods in Python. To be specific, is there any difference between the following two approaches?
In the first approach I am using the thread module and creating a thread with thread.start_new_thread(myfunction, ()), as myfunction() doesn't take any args.
In the second approach I am using from threading import Thread and creating threads by doing something like this: t = Thread(target=myfunction), then t.start().
The reason why I am asking is that my programme works fine with the second approach, but when I use the first approach it doesn't work as intended. I am working on a client-server programme. Thanks
The code is as below:
#!/usr/bin/env python
import socket
from threading import Thread
import thread

data = 'default'
tcpSocket = ''

def start_server():
    global tcpSocket
    tcpSocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    tcpSocket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    tcpSocket.bind(('', 1520))
    tcpSocket.listen(3)
    print "Server is up...."

def service():
    global tcpSocket
    (clientSocket, address) = tcpSocket.accept()
    print "Client connected with: ", address
    # data = 'default'
    send_data(clientSocket, "Server: This is server\n")
    global data
    while len(data):
        data = receive_data(clientSocket)
        send_data(clientSocket, "Client: " + data)
    print "Client exited....\nShutting the server"
    clientSocket.close()
    tcpSocket.close()

def send_data(socket, data):
    socket.send(data)

def receive_data(socket):
    global data
    data = socket.recv(2048)
    return data

start_server()

for i in range(2):
    t = Thread(target=service)
    t.start()
    #thread.start_new_thread(service,())
@immortal can you explain a bit more, please? I didn't get it, sorry. How can the main thread die? It should start service() in my code, and then the server waits for a client. I guess it should wait rather than die.
Your main thread calls:
start_server()
and that returns. Then your main thread executes this:
for i in range(2):
    t = Thread(target=service)
    t.start()
    #thread.start_new_thread(service,())
Those also complete almost instantly, and then your main thread ends.
At that point, the main thread is done. Python enters its interpreter shutdown code.
Part of the shutdown code is waiting to .join() all (non-daemon) threads created by the threading module. That's one of the reasons it's far better not to use thread unless you know exactly what you're doing. For example, if you're me ;-) But the only times I've ever used thread are in the implementation of threading, and to write test code for the thread module.
You're entirely on your own to manage all aspects of a thread module thread's life. Python's shutdown code doesn't wait for those threads. The interpreter simply exits, ignoring them completely, and the OS kills them off (well, that's really up to the OS, but on all major platforms I know of the OS does just kill them ungracefully in midstream).
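A tiny demonstration of the difference (my own sketch, using the Python 3 module names):

import _thread  # the Python 3 name for the old "thread" module
import time

def worker(label):
    time.sleep(0.5)
    print(label, "finished")

_thread.start_new_thread(worker, ("thread-module thread",))
# The main thread ends right here. Interpreter shutdown does not wait for
# thread-module threads, so "finished" is never printed.

# By contrast, this would keep the interpreter alive until worker() returns:
# threading.Thread(target=worker, args=("threading thread",)).start()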
I'm currently programming a Python class which acts as a client.
Because I don't want to block the main thread, receiving packets is done in another thread, and a callback function is called when a packet arrives.
The received packets are either broadcast messages or a reply to a command sent by the client. The function for sending commands is synchronous: it blocks until the reply arrives, so it can directly return the result.
Simplified example:
import socket
import threading

class SocketThread(threading.Thread):
    packet_received_callback = None
    _reply = None
    _reply_event = threading.Event()

    def run(self):
        self._initialize_socket()
        while True:
            # This function blocks until a packet arrives
            p = self._receive_packet()
            if self._is_reply(p):
                self._reply = p
                self._reply_event.set()
            else:
                self.packet_received_callback(p)

    def send_command(self, command):
        # Send command via socket
        self.sock.send(command)

        # Wait for reply
        self._reply_event.wait()
        self._reply_event.clear()
        return self._process_reply(self._reply)
The problem I'm facing now is that I can't send commands in the callback function, because that would end in a deadlock (send_command waits for a reply, but no packets can be received while the thread which receives packets is busy executing the callback function).
My current solution is to start a new thread each time to call the callback function. But that way a lot of threads are spawned, and it will be difficult to ensure that packets are processed in order in heavy-traffic situations.
Does anybody know a more elegant solution, or am I going the right way?
Thanks for your help!
A proper answer to this question depends a lot on the details of the problem you are trying to solve, but here is one solution:
Rather than invoking the callback function immediately upon receiving the packet, I think it would make more sense for the socket thread to simply store the packet that it received and continue polling for packets. Then when the main thread has time, it can check for new packets that have arrived and act on them.
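A minimal sketch of that idea (my own; the queue and handler names are invented for illustration): the socket thread stores packets in a thread-safe queue, and the main thread drains it whenever it has time:

import queue

received = queue.Queue()

# In SocketThread.run(), instead of self.packet_received_callback(p):
#     received.put(p)

def drain_packets(handle_packet):
    # Called from the main thread whenever it has time.
    while True:
        try:
            packet = received.get_nowait()
        except queue.Empty:
            return
        handle_packet(packet)  # handle_packet is a hypothetical handler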
Recently I had another idea; let me know what you think about it. It's just a general approach to solving such problems, in case someone else has a similar problem and needs to use multi-threading.
import threading
import queue

class EventBase(threading.Thread):
    ''' Class which provides a base for event-based programming. '''

    def __init__(self):
        super().__init__()  # required before the thread can be started
        self._event_queue = queue.Queue()

    def run(self):
        ''' Starts the event loop. '''
        while True:
            # Get next event
            e = self._event_queue.get()

            # If there is a "None" in the queue, someone wants to stop
            if e is None:
                break

            # Call event handler
            e[0](*e[1], **e[2])

            # Mark as done
            self._event_queue.task_done()

    def stop(self, join=True):
        ''' Stops processing events. '''
        if self.is_alive():
            # Put poison-pill to queue
            self._event_queue.put(None)

            # Wait until finished
            if join:
                self.join()

    def create_event_launcher(self, func):
        ''' Creates a function which can be used to call the passed func in the event-loop. '''
        def event_launcher(*args, **kwargs):
            self._event_queue.put((func, args, kwargs))

        return event_launcher
Use it like so:
event_loop = EventBase()
event_loop.start()

# Or any other callback
sock_thread.packet_received_callback = event_loop.create_event_launcher(my_event_handler)

# ...

# Finally
event_loop.stop()