Python Threading/Request issue

I have multi-threaded Python code (it fires several threads every second and closes them when they finish), and it used to work fine. Recently I added a new function (run in its own thread) that listens to a server for tables as they are streamed out, via a GET request with a 10-second timeout.
The issue is that the code works fine for about 1-2 hours and then I get the Python error "error: can't start new thread", even though only ~20 threads are active.
I tried using a singleton thread pool, but it did not help at all.
On a side note, removing the GET request from the function resolves the issue and the code runs perfectly.
Please let me know your opinions.
Thank you.
# assumes: import requests, threading

def getStreamData(self):
    if self.liveTablesTimer is None:
        self.startLiveTablesTimer()
        print("LiveTables timer started")
    self.voidTableCount += 1  # counting for connection refresh

    def separateThread():
        try:
            #return 0
            self.streamInConnection = requests.get(self.liveTablesUrl, stream=True, verify=False, timeout=10)
            #print("Live tables request sent as:", self.liveTablesUrl)
            if self.streamInConnection.encoding is None:
                self.streamInConnection.encoding = 'utf-8'
            for line in self.streamInConnection.iter_lines(decode_unicode=True):
                if line and self.userName is not None:
                    #print("Raw stream received", line)
                    self.streamData.emit(line)
        except:
            print("getLiveTables stream link timeout")
            self.streamInConnection.close()
            if self.voidTableCount > 6 * 5:  # 5 min
                try:
                    self.voidTableCount = 0
                except:
                    pass
        finally:
            return 0

    try:
        print("Starting thread for receiving liveTables data")
        #self.consCheck.threadExecutor.submit(separateThread)
        thread = threading.Thread(target=separateThread, args=[], daemon=True)
        thread.start()
    except Exception as err:
        print("liveTables stream error:", err)

Strangely, removing the verify parameter from the request resolved the issue:
requests.get(self.liveTablesUrl, stream=True, timeout=10)
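A possible explanation, though this is only my guess and not confirmed in the post: every bare requests.get() builds a fresh session, connection pool, and (with verify=False) TLS context, and if those resources are not released promptly while threads block in iter_lines(), threads and their stacks accumulate until the interpreter can no longer create new ones. A minimal sketch of one mitigation, sharing a single requests.Session across the polling threads (the URL and function names here are illustrative, not from the original code):

import requests
import threading

# Hypothetical sketch: share one Session so every polling thread reuses
# the same connection pool instead of rebuilding it on each requests.get.
session = requests.Session()
session.verify = False  # the setting the author eventually dropped

def poll_stream(url):
    # Same streaming GET as in the question, but through the shared session.
    with session.get(url, stream=True, timeout=10) as resp:
        for line in resp.iter_lines(decode_unicode=True):
            if line:
                print(line)

thread = threading.Thread(target=poll_stream,
                          args=("https://example.com/stream",), daemon=True)
thread.start()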


Terminating a thread that's waiting in Python

I am trying to learn how to use threading with networking and need to stop a thread when its connection is closed. I have a list of connections and a thread that checks whether they are still open.
If a connection has closed, I need to terminate the GetData() thread, but I don't know how to do that without checking for an exit event on every loop iteration. The problem is that the GetData() thread doesn't loop; it sits at the c.recv() call and waits for a response. If the connection has closed, it never gets a response and just keeps sitting there until I kill the program.
How do I kill a thread from outside that thread? I understand this is not easily done with threading, but is there maybe some other library that allows it? I also tried using multiprocessing instead, but I couldn't get it to work, so I gave up on that.
from threading import Thread
import socket
import time

def MakeSocket():
    try:
        global MainSocket
        MainSocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        print("Socket successfully created")
    except socket.error as err:
        print("socket creation failed with error %s" % (err))
    MainSocket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    MainSocket.bind(('', 1337))
    MainSocket.listen(5)
    global c
    global addr
    c, addr = MainSocket.accept()
    Thread(target=GetData, args=()).start()
    Thread(target=CheckIfOpen, args=()).start()

def GetData():
    while True:
        try:
            # Try to receive data from the connection.
            RecievedData = c.recv(4096).decode()
            print(RecievedData)
        except:
            print("\nError: GetData failed.\n")
            return

def CheckIfOpen():
    while True:
        # Wait 5 sec between each test.
        time.sleep(5)
        try:
            # Try to send the data "test".
            c.send("test".encode())
        except:
            # If it fails, the connection has been closed.
            # Don't know how to close the GetData thread here.
            MakeSocket()
I know this looks silly, but it isn't all my code; I changed it a bit and included only the important parts. It still has the same problem. I don't have all those global variables in my actual code.
I realized what was wrong.
https://reddit.com/r/learnpython/comments/kx925z/terminating_thread_thats_waiting/
u/oefd pointed out something I had missed in the comments.
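The linked comment isn't reproduced here, so as a hedged sketch of the standard approach: instead of killing the blocked thread, make its blocking recv() return by shutting the socket down from the watchdog thread. The names below are illustrative, not the original code:

import socket
import time

def get_data(conn):
    while True:
        try:
            data = conn.recv(4096)
            if not data:       # peer closed cleanly: recv() returns b''
                return
            print(data.decode())
        except OSError:        # raised once the watchdog shuts the socket down
            print("GetData: socket closed, exiting thread")
            return

def check_if_open(conn):
    while True:
        time.sleep(5)
        try:
            conn.send(b"test")
        except OSError:
            # Force the reader out of its blocking recv() instead of
            # trying to kill the thread from outside.
            try:
                conn.shutdown(socket.SHUT_RDWR)
            except OSError:
                pass
            conn.close()
            return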

How to capture or save all of a gRPC stream

I'm trying to write a Python client to listen to a gRPC stream (a fire hose). It streams constantly; there is no "on completion".
Proto:
rpc Start (StartParameters) returns (stream Progress) {}
In the client I tried the following, but since the Start RPC never completes, control never comes back from the for loop:
rsp = self.stub.Start(params)
for event in rsp:
    print(event)
Can somebody please help me with Python code to handle or capture all the events in rsp after a timeout (2 minutes) and then print each event?
I got this working; posting it in case somebody else is looking for an answer.
import queue
import threading
import time

def collect_responses(self, response_iterator, response_queue):
    for response in response_iterator:
        response_queue.put(response)

def call_rpc(self):
    response_stream = self.stub.Start(params)
    response_queue = queue.Queue()
    thread = threading.Thread(target=self.collect_responses,
                              args=(response_stream, response_queue))
    thread.start()
    time.sleep(120)  # or have a different trigger to say, cancel the stream
    response_stream.cancel()
    thread.join()
    while not response_queue.empty():
        item = response_queue.get()
        print(item)
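One caveat, which is my addition rather than part of the original answer: when the stream is cancelled, the iterator inside the collector thread normally raises grpc.RpcError with StatusCode.CANCELLED, so the thread dies with a traceback unless that is caught. A hedged variant of the collector:

import grpc

def collect_responses(self, response_iterator, response_queue):
    try:
        for response in response_iterator:
            response_queue.put(response)
    except grpc.RpcError:
        # cancel() ends the stream with StatusCode.CANCELLED;
        # treat that as the normal way this thread finishes.
        pass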

Tornado websocket client losing response messages?

I need to process frames from a webcam and send a few selected frames to a remote websocket server. The server answers immediately with a confirmation message (much like an echo server).
Frame processing is slow and CPU-intensive, so I want to do it in a separate thread pool (the producer) to use all the available cores; the client (the consumer) just sits idle until the pool has something to send.
My current implementation (see below) works fine only if I add a small sleep inside the producer's test loop. If I remove this delay, I stop receiving any answers from the server (both from the echo server and from my real server). Even the first answer is lost, so I do not think this is a flood-protection mechanism.
What am I doing wrong?
import tornado.ioloop
from tornado.websocket import websocket_connect
from tornado import gen, queues
import time

class TornadoClient(object):
    url = None
    onMessageReceived = None
    onMessageSent = None
    ioloop = tornado.ioloop.IOLoop.current()
    q = queues.Queue()

    def __init__(self, url, onMessageReceived, onMessageSent):
        self.url = url
        self.onMessageReceived = onMessageReceived
        self.onMessageSent = onMessageSent

    def enqueueMessage(self, msgData, binary=False):
        print("TornadoClient.enqueueMessage")
        self.ioloop.add_callback(self.addToQueue, (msgData, binary))
        print("TornadoClient.enqueueMessage done")

    @gen.coroutine
    def addToQueue(self, msgTuple):
        yield self.q.put(msgTuple)

    @gen.coroutine
    def main_loop(self):
        connection = None
        try:
            while True:
                while connection is None:
                    try:
                        print("Connecting...")
                        connection = yield websocket_connect(self.url)
                        print("Connected " + str(connection))
                    except Exception as e:
                        print("Exception on connection " + str(e))
                        connection = None
                        print("Retry in a few seconds...")
                        yield gen.Task(self.ioloop.add_timeout, time.time() + 3)
                try:
                    print("Waiting for data to send...")
                    msgData, binaryVal = yield self.q.get()
                    print("Writing...")
                    sendFuture = connection.write_message(msgData, binary=binaryVal)
                    print("Write scheduled...")
                finally:
                    self.q.task_done()
                yield sendFuture
                self.onMessageSent("Sent ok")
                print("Write done. Reading...")
                msg = yield connection.read_message()
                print("Got msg.")
                self.onMessageReceived(msg)
                if msg is None:
                    print("Connection lost")
                    connection = None
            print("main loop completed")
        except Exception as e:
            print("ExceptionExceptionException")
            print(e)
            connection = None
        print("Exit main_loop function")

    def start(self):
        self.ioloop.run_sync(self.main_loop)
        print("Main loop completed")

######### TEST METHODS #########

def sendMessages(client):
    time.sleep(2)  # TEST only: wait for client startup
    while True:
        client.enqueueMessage("msgData", binary=False)
        time.sleep(1)  # <--- comment this line to break it

def testPrintMessage(msg):
    print("Received: " + str(msg))

def testPrintSentMessage(msg):
    print("Sent: " + msg)

if __name__ == '__main__':
    from threading import Thread
    client = TornadoClient("ws://echo.websocket.org", testPrintMessage, testPrintSentMessage)
    thread = Thread(target=sendMessages, args=(client,))
    thread.start()
    client.start()
My real problem
In my real program I use a window-like mechanism to protect the consumer (an autobahn.twisted.websocket server): the producer can send up to a maximum number of unacknowledged messages (the webcam frames), then stops and waits for half of the window to free up.
The consumer sends a "PROCESSED" message back, acknowledging one or more messages (just a counter, not by id).
What I see in the consumer log is that the messages are processed and the answer is sent back, but these acks vanish somewhere in the network.
I have little experience with async I/O, so I wanted to be sure I'm not missing a yield, a decorator, or something else.
This is the consumer side log:
2017-05-13 18:59:54+0200 [-] TX Frame to tcp4:192.168.0.5:48964 : fin = True, rsv = 0, opcode = 1, mask = -, length = 21, repeat_length = None, chopsize = None, sync = False, payload = {"type": "PROCESSED"}
2017-05-13 18:59:54+0200 [-] TX Octets to tcp4:192.168.0.5:48964 : sync = False, octets = 81157b2274797065223a202250524f434553534544227d
This is neat code. I believe the reason you need a sleep in your sendMessages thread is that otherwise it keeps calling enqueueMessage as fast as possible, millions of times per second. Since enqueueMessage does not wait for the enqueued message to be processed, it keeps calling IOLoop.add_callback as fast as it can, without giving the loop enough opportunity to execute the callbacks.
The loop might make some progress running on the main thread, since you're not actually blocking it. But the sendMessages thread adds callbacks much faster than the loop can handle them. By the time the loop has popped one message from the queue and has begun to process it, millions of new callbacks are added already, which the loop must execute before it can advance to the next stage of message-processing.
Therefore, for your test code, I think it's correct to sleep between calls to enqueueMessage on the thread.
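If a fixed sleep feels too arbitrary, here is a sketch (my illustration, not part of the original answer) of how the producer thread could block until the IOLoop has actually queued each message, which gives natural backpressure:

import threading
from tornado import gen

def enqueueMessageBlocking(self, msgData, binary=False):
    # Hand the message to the IOLoop, then block this producer thread
    # until the loop has actually put it on the queue. This caps the
    # producer at one in-flight callback at a time.
    done = threading.Event()

    @gen.coroutine
    def _put():
        yield self.q.put((msgData, binary))
        done.set()  # runs on the IOLoop thread; wakes the producer

    self.ioloop.add_callback(_put)
    done.wait()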

Infinitely running server-side Python script?

I want to replace the cron job that keeps my program alive, because cron launches the script at every interval whether or not it is already running, creating duplicate entries.
I investigated the issue and had a few approaches. One was to modify the program to check whether it is already running and close itself if so. The one I went with was to detach it completely from cron by having it call itself over and over with execfile, which works exactly how I want, except for the following problem:
RuntimeError: maximum recursion depth exceeded
Is there a way to keep the program in an "infinite loop" without overflowing the stack?
Here is my code. It's a program that checks mails and converts them into MySQL DB entries.
import imaplib
import email
import time

imap = imaplib.IMAP4(hst)
try:
    imap.login(usr, pwd)
except Exception as e:
    errormsg = e
    time.sleep(30)
    print "IMAP error: " + str(errormsg)
    execfile('/var/www/html/olotool/converter.py')
    raise IOError(e)

# Authentication & fetch step
while True:
    time.sleep(5)
    '''
    The script will always raise an error when there
    are no mails left to check in the inbox. It then
    sleeps and relaunches itself to check whether new
    mails have arrived.
    '''
    try:
        imap.select("Inbox")  # Tell IMAP where to go
        result, data = imap.uid('search', None, "ALL")
        latest = data[0].split()[-1]
        result, data = imap.uid('fetch', latest, '(RFC822)')
        raw = data[0][1]  # This contains the mail data
        msg = email.message_from_string(raw)
    except Exception as e:
        disconnect(imap)
        time.sleep(60)
        execfile('/var/www/html/olotool/converter.py')
        raise IOError(e)
I solved the problem myself in the only way I can see right now.
First I changed the exception handler in the code above:
except Exception as e:
    disconnect(imap)
    print "Converter: No messages left"
    os._exit(0)
    # This is a special case: this exception is not an error,
    # so os._exit(0) gives no false positives
As you can see, I refrain from using execfile now. Instead I wrote a controller script that checks the status of converter.py and launches it if it is not already running:
import os
import subprocess
import time

# find() is a small regex helper defined elsewhere in the script
while True:
    presL = os.popen('pgrep -lf python').read()
    print "________________________________________"
    print "Starting PIDcheck"
    print "Current Processes: "
    print presL  # Check processes
    presRconverter = find(r'\d{7} python converter.py', presL)
    if presRconverter:
        # Store the PID
        convPID = find(r'\d{7}', presRconverter)
        print "Converter is running at PID: " + convPID
    else:
        print "PID Controller: Converter not running"
        try:
            print "PID Controller: Calling converter"
            subprocess.check_call('python converter.py', shell=True)
        except subprocess.CalledProcessError as e:
            errormsg = e
            print "Couldn't call Converter Module"
            sendMail(esender, ereceiver, esubject, etext, server)
            print "Error notification sent"
            raise IOError(e)
        # If we got this far without an error, the call was successful
        print "PID Controller: Call successful"
    print "________________________________________"
    time.sleep(60)
This method does not raise RuntimeError: maximum recursion depth exceeded. Also, running the controller under nohup gives you a nohup.out file where you can see any problems for error handling.
I hope I could help anyone running into the same issue.
Something along the lines of this should work without having to resort to subprocess checking and such:
import email
import imaplib
import logging
import time

def check_mail_loop():
    imap = imaplib.IMAP4(hst)
    # Build some function to log in and, in the event of an error,
    # sleep for n seconds and call the login function again.
    imap.login(usr, pwd)
    while True:
        try:
            imap.select("Inbox")
            result, data = imap.uid('search', None, "ALL")
            if result and data:
                latest = data[0].split()[-1]
                result, data = imap.uid('fetch', latest, '(RFC822)')
                raw = data[0][1]  # This contains the mail data
                msg = email.message_from_string(raw)
            time.sleep(5)
        except SomeRelevantException as e:
            logging.exception(e)
            time.sleep(60)
In the event of some random error that you didn't foresee, use a process control manager like supervisord or monit.
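For the duplicate-launch problem the question started with, there is also the approach the asker mentioned first: have the script check whether it is already running and exit. A minimal sketch using a file lock (the lock path is hypothetical), which lets cron fire as often as it likes without ever creating duplicates:

import fcntl
import sys

# Take an exclusive, non-blocking lock; if another instance already
# holds it, flock() raises and this copy exits immediately.
lock_file = open('/tmp/converter.lock', 'w')
try:
    fcntl.flock(lock_file, fcntl.LOCK_EX | fcntl.LOCK_NB)
except IOError:
    sys.exit(0)  # another instance is already running

# ... the rest of converter.py runs here, holding the lock until exit ...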

What is the best way to kill a looping Python thread on an exception?

I wrote a program that uses threads to keep connections alive while the main program loops until it either hits an exception or is manually closed. The program runs in one-hour intervals and the timeout for a connection is 20 minutes, so I spawn a thread for every connection element in my architecture. If there are two servers to connect to, it connects to both, stays connected, and loops through each server retrieving data.
The program works correctly, but I can't find a way to handle the case where the main program itself throws an exception; that is, I can't find an appropriate way to dispose of the threads when the main program fails. When that happens, the program just hangs open, because the threads never exit, and it has to be closed manually.
Any suggestions on how to clean up the threads on program exit?
This is my thread:
import logging
import time

def keep_vc_alive(vcenter, credentials, api):
    vm_url = str(vcenter._proxy.binding.url).split('/')[2]
    while True:
        try:
            logging.info('staying connected %s' % str(vm_url))
            vcenter.keep_session_alive()
        except:
            logging.info('unable to call current time of vcenter %s, attempting to reconnect.' % str(vm_url))
            try:
                vcenter = None
                connected, api_version, uuid, vcenter = vcenter_open(60, api, *credentials)
            except:
                logging.critical('unable to call current time of vcenter %s, killing application; please have an administrator restart the module.' % str(vm_url))
                break
        time.sleep(60 * 10)
And here is my exception cleanup code. Obviously I know .stop() doesn't work, but I honestly have no idea how to do what I'm trying to do:
except Abort:  # Exit without clearing the semaphore
    logging.exception('ApplicationError')
    try:
        config_values_vc = metering_config('VSphere', ['vcenter-ip', 'username', 'password', 'api-version'])
        for k in xrange(0, len(config_values_vc['username'])):  # Loop through each vcenter server
            vc_thread[config_values_vc['vcenter-ip'][k]].stop()
    except:
        pass
    # disconnect vcenter
    try:
        for vcenter in list_of_vc_connections:
            list_of_vc_connections[vcenter].disconnect()
    except:
        pass
    try:  # Close the db if it is open (db is defined)
        db.close()
    except:
        pass
    sys.exit(1)
except SystemExit:
    raise
except:
    logging.exception('ApplicationError')
    semaphore('ComputeLoader', False)
    logging.critical('Unexpected error: %s' % sys.exc_info()[0])
    raise
Instead of sleeping, wait on a threading.Event():
def keep_vc_alive(vcenter, credentials, api, event):  # event is a threading.Event()
    vm_url = str(vcenter._proxy.binding.url).split('/')[2]
    while not event.is_set():  # If the event got set, we exit the thread
        try:
            logging.info('staying connected %s' % str(vm_url))
            vcenter.keep_session_alive()
        except:
            logging.info('unable to call current time of vcenter %s, attempting to reconnect.' % str(vm_url))
            try:
                vcenter = None
                connected, api_version, uuid, vcenter = vcenter_open(60, api, *credentials)
            except:
                logging.critical('unable to call current time of vcenter %s, killing application; please have an administrator restart the module.' % str(vm_url))
                break
        event.wait(timeout=60 * 10)  # Wait until the timeout expires or the event is set.
Then, in your main thread, set the event in the exception handling code:
except Abort:  # Exit without clearing the semaphore
    logging.exception('ApplicationError')
    event.set()  # The keep_alive thread will wake up, see that the event is set, and exit
The generally accepted way to stop threads in python is to use the threading.Event object.
The algorithm followed usually is something like the following:
import threading
import time
...
threads = []

# in the main program
stop_event = threading.Event()

# create the thread and store the thread and stop_event together
thread = threading.Thread(target=keep_vc_alive, args=(stop_event,))
threads.append((thread, stop_event))

# execute the thread
thread.start()
...

# in the thread (i.e. keep_vc_alive):
# check is_set on the stop_event
while not stop_event.is_set():
    # receive data from server, etc.
    ...
...

# in the exception handler
except Abort:
    # set the stop events
    for thread, stop_event in threads:
        stop_event.set()
    # wait for the threads to stop
    while True:
        # check for any alive threads
        all_finished = True
        for thread, stop_event in threads:
            if thread.is_alive():
                all_finished = False
        if all_finished:
            break
        # keep CPU usage down
        time.sleep(1)
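As a small aside (my addition, not part of the original answer): the polling loop above can be replaced with join(), optionally with a timeout so shutdown cannot hang forever:

# Ask every worker to stop, then wait for each one to finish.
for thread, stop_event in threads:
    stop_event.set()
for thread, _ in threads:
    thread.join(timeout=30)  # bounded wait; a hung thread won't block exit forever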
