HTTP server keeping thread alive in nested thread scheme - Python

I have a little HTTPServer implementation I'm spinning up to listen for a callback from an API. In testing, this implementation is keeping the innermost thread alive. Here's the server:
import http
import uuid
from http import server

class Server(server.HTTPServer):
    RequestLog: list = []
    ErrorList: list = []
    Serve: bool = True

    def __init__(self, server_address, RequestHandlerClass):
        self.RequestLog = []
        self.ErrorList = []
        self.Serve: bool = True
        return super().__init__(server_address, RequestHandlerClass)

    def LogRequest(self, clientAddress, success, state, params: dict = {}):
        """docstring"""
        uid = uuid.uuid1()
        logItem = {"RequestID": uid,
                   "ClientAddress": clientAddress,
                   "Success": success,
                   "State": state,
                   "Params": params}
        self.RequestLog.append(logItem)

    def GetRequestItem(self, state):
        """docstring"""
        logItem = {}
        if self.RequestLog and len(self.RequestLog):
            logItem = [d for d in self.RequestLog if d["State"] == state][0]
        return logItem

    def service_actions(self):
        try:
            if not self.Serve:
                self.shutdown()
                self.server_close()
        except Exception as e:
            err = e
            raise e
        return super().service_actions()

    def handle_error(self, request, client_address):
        logItem = {"clientAddress": client_address,
                   "success": False,
                   "state": None,
                   "params": None}
        try:
            self.LogRequest(**logItem)
            x = request
        except Exception as e:
            self.shutdown()
            err = e
            raise e
        return super().handle_error(request, client_address)
So what the server implementation above does is log information about requests in the RequestLog list, and it provides a method, GetRequestItem, that can be used to poll for the existence of a logged request. In the test I'm throwing an error and catching it with the handle_error() override. Here is the calling function that spins up the server, polls for the request, and then shuts down the server by setting its Server.Serve attribute to False:
def AwaitCallback(self, server_class=Server,
                  handler_class=OAuthGrantRequestHandler):
    """docstring"""
    server_address = ("127.0.0.1", 8080)
    self.Httpd = server_class(server_address, handler_class)
    self.Httpd.timeout = 200
    t1 = threading.Thread(target=self.Httpd.serve_forever)
    try:
        t1.start()
        # poll for request result
        result = {}
        x = 0
        while x < self.Timeout:
            if len(self.Httpd.RequestLog) > 0:
                break
            time.sleep(.5)
    finally:
        # Terminate server
        if self.Httpd:
            self.Httpd.Serve = False
        if t1:
            t1.join()
    return
The above method sticks on the t1.join() call. Inspecting the self.Httpd object while it's hung tells me that the server's serve_forever() loop is shut down, but the thread still shows it's alive when calling t1.is_alive(). So what's going on? The only thing I can think of is that when self.shutdown() is called in the t1 thread it really yields the loop instead of shutting it down, and keeps the thread alive? The documentation on shutdown just says: shutdown(): "Tell the serve_forever() loop to stop and wait until it does." Nice and murky. Any ideas?
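For reference, here is a minimal sketch of the pattern the shutdown() documentation seems to intend (my own reduction, with a hypothetical handler class, not my actual handler): shutdown() is issued from a thread other than the one running serve_forever(), because shutdown() blocks until that loop has actually exited.
# Minimal sketch (hypothetical handler): call shutdown() from a thread other
# than the one running serve_forever(), since shutdown() blocks until the
# serve_forever() loop has exited.
import threading
from http import server

class Handler(server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()

httpd = server.HTTPServer(("127.0.0.1", 8080), Handler)
t1 = threading.Thread(target=httpd.serve_forever)
t1.start()
# ... poll for the callback here ...
httpd.shutdown()       # called from the main thread: the loop stops
httpd.server_close()   # then release the listening socket
t1.join()              # the worker thread now exits promptly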
Edit 1:
The answer suggested at How to stop BaseHTTPServer.serve_forever() in a BaseHTTPRequestHandler subclass? is entirely different. It suggests overriding all the native functionality of the socketserver.BaseServer.serve_forever() loop with a simpler implementation (roughly the pattern sketched below), whereas I'm trying to use the native implementation correctly. To the best of my understanding so far, my code above should achieve the same thing that answer suggests, but the child thread isn't terminating. Hence this question.
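For context, my paraphrase of what that linked answer proposes (hypothetical names, not code from that answer): replace serve_forever() with a handle_request() loop guarded by a flag, so no shutdown() call is needed at all.
# Paraphrase of the linked answer's approach (hypothetical names): a simple
# handle_request() loop that re-checks a flag between requests.
import threading
from http import server

class StoppableServer(server.HTTPServer):
    keep_running = True

    def serve_until_stopped(self):
        while self.keep_running:      # checked between requests
            self.handle_request()     # blocks for one request or self.timeout

httpd = StoppableServer(("127.0.0.1", 8080), server.BaseHTTPRequestHandler)
httpd.timeout = 1                     # so the loop re-checks the flag regularly
t1 = threading.Thread(target=httpd.serve_until_stopped)
t1.start()
# ... later, from the main thread:
httpd.keep_running = False
t1.join()                             # returns once the current handle_request() ends
httpd.server_close()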

Related

Break Main Calling Thread If Child Thread Throws An Exception

I'm using threading.Thread and t.start() with a list of callables to do long-running multithreaded processing. My main thread blocks until all threads have finished. However, I'd like t.start() to return immediately if one of the callables throws an exception, and to terminate the other threads.
Using t.join() to check that a thread executed provides no information about failures due to exceptions.
Here is the code:
import json
import threading
import requests

class ThreadServices:
    def __init__(self):
        self.obj = ""

    def execute_services(self, arg1, arg2):
        try:
            result = call_some_process(arg1, arg2)  # some method
            # save results somewhere
        except Exception, e:
            # raise exception
            print e

    def invoke_services(self, stubs):
        """
        Thread Spawning Function
        """
        try:
            p1 = ""  # some value
            p2 = ""  # some value
            # Call service 1
            t1 = threading.Thread(target=self.execute_services, args=(a, b,))
            # Start thread
            t1.start()
            # Block till thread completes execution
            t1.join()
            thread_pool = list()
            for stub in stubs:
                # Start parallel execution of threads
                t = threading.Thread(target=self.execute_services,
                                     args=(p1, p2))
                t.start()
                thread_pool.append(t)
            for thread in thread_pool:
                # Block till all the threads complete execution:
                # wait for all the parallel tasks to complete
                thread.join()
            # Start another process thread
            t2 = threading.Thread(target=self.execute_services,
                                  args=(p1, p2))
            t2.start()
            # Block till this thread completes execution
            t2.join()
            requests.post(url, data=json.dumps({"status_code": 200}))
        except Exception, e:
            print e
            requests.post(url, data=json.dumps({"status_code": 500}))
        # Don't return anything as this function is invoked as a thread from
        # the main calling function

class Service(ThreadServices):
    """
    Service Class
    """
    def main_thread(self, request, context):
        """
        Main Thread: Invokes Task Execution Sequence in ThreadedService
        :param request:
        :param context:
        :return:
        """
        try:
            main_thread = threading.Thread(target=self.invoke_services,
                                           args=(request,))
            main_thread.start()
            return True
        except Exception, e:
            return False
When I call Service().main_thread(request, context) and there is some exception executing t1, I need to get it raised in main_thread and return False. How can I implement that for this structure? Thanks!!
For one thing, you are complicating matters too much. I would do it this way:
from thread import start_new_thread as thread
from time import sleep

class Task:
    """One thread per task.
    This you should do by subclassing threading.Thread().
    This is just a conceptual example.
    """
    def __init__ (self, func, args=(), kwargs={}):
        self.func = func
        self.args = args
        self.kwargs = kwargs
        self.error = None
        self.done = 0
        self.result = None

    def _run (self):
        self.done = 0
        self.error = None
        self.result = None
        # So this is what you should do in the subclassed Thread():
        try: self.result = self.func(*self.args, **self.kwargs)
        except Exception, e:
            self.error = e
        self.done = 1

    def start (self):
        thread(self._run, ())

    def wait (self, retrexc=1):
        """Used in place of threading.Thread.join(), but it returns the result
        of the function self.func() and manages errors."""
        while not self.done: sleep(0.001)
        if self.error:
            if retrexc: return self.error
            raise self.error
        return self.result

# And this is how you should use your pool:
def do_something (tasknr):
    print tasknr-20
    if tasknr%7==0: raise Exception, "Dummy exception!"
    return tasknr**120/82.0

pool = []
for task in xrange(20, 50):
    t = Task(do_something, (task,))
    t.start()
    pool.append(t)

# And only then wait for each one:
results = []
for task in pool:
    results.append(task.wait())
print results
This way you can make task.wait() raise the error instead. The thread will already have stopped, so all you need to do is remove the references from the pool, or drop the whole pool, after you are done. You can even:
results = []
for task in pool:
    try: results.append(task.wait(0))
    except Exception, e:
        print task.args, "Error:", str(e)
print results
Now, do not use this Task() class exactly as it is, since it needs a lot of things added before it can be used for real.
Just subclass threading.Thread() and implement the same concept by overriding run() and join(), or by adding new functions like wait().
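A rough sketch of that suggestion (hypothetical class name, same Python 2 style as the answer above): capture the exception inside run() and re-raise it when the caller waits on the thread.
# Sketch only: store any exception raised in run() and surface it on wait().
import threading

class ExcThread(threading.Thread):
    def __init__(self, func, args=(), kwargs=None):
        threading.Thread.__init__(self)
        self.func, self.args, self.kwargs = func, args, kwargs or {}
        self.error = None
        self.result = None

    def run(self):
        try:
            self.result = self.func(*self.args, **self.kwargs)
        except Exception, e:          # remember the error instead of losing it
            self.error = e

    def wait(self, timeout=None):
        """join() and then either raise the stored error or return the result."""
        self.join(timeout)
        if self.error is not None:
            raise self.error
        return self.result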

dynamically adding a resource to a python coap server with coapthon library

I am trying to build a CoAP server in which I can add a new resource without needing to stop the server, recode it, and restart it. My server is supposed to host two types of resources, "sensors (Sens-Me)" and "actuators (Act-Me)". I want a new instance of an actuator to be added to the server when I press the A key, and likewise a sensor when I press S. Below is my code:
from coapthon.resources.resource import Resource
from coapthon.server.coap import CoAP

class Sensor(Resource):
    def __init__(self, name="Sensor", coap_server=None):
        super(Sensor, self).__init__(name, coap_server, visible=True,
                                     observable=True, allow_children=True)
        self.payload = "This is a new sensor"
        self.resource_type = "rt1"
        self.content_type = "application/json"
        self.interface_type = "if1"
        self.var = 0

    def render_GET(self, request):
        self.payload = "new sensor value ::{}".format(str(int(self.var+1)))
        self.var += 1
        return self

class Actuator(Resource):
    def __init__(self, name="Actuator", coap_server=None):
        super(Actuator, self).__init__(name, coap_server, visible=True,
                                       observable=True)
        self.payload = "This is an actuator"
        self.resource_type = "rt1"

    def render_GET(self, request):
        return self

class CoAPServer(CoAP):
    def __init__(self, host, port, multicast=False):
        CoAP.__init__(self, (host, port), multicast)
        self.add_resource('sens-Me/', Sensor())
        self.add_resource('act-Me/', Actuator())
        print "CoAP server started on {}:{}".format(str(host), str(port))
        print self.root.dump()

def main():
    ip = "0.0.0.0"
    port = 5683
    multicast = False
    server = CoAPServer(ip, port, multicast)
    try:
        server.listen(10)
        print "executed after listen"
    except KeyboardInterrupt:
        server.close()

if __name__ == "__main__":
    main()
I am not sure what exactly you want to do.
Is it just to replace a resource on the same route, or to add a new one?
Replace a resource
It is not possible according to the current CoAPthon version; see the source:
https://github.com/Tanganelli/CoAPthon/blob/b6983fbf48399bc5687656be55ac5b9cce4f4718/coapthon/server/coap.py#L279
try:
    res = self.root[actual_path]
except KeyError:
    res = None
if res is None:
    if len(paths) != i:
        return False
    resource.path = actual_path
    self.root[actual_path] = resource
Alternatively, you can solve it in the scope of the request.
Say, have a registry of handlers that the resources use and that can be changed on a user-input event (see the sketch below). You still won't be able to add new routes this way.
If you absolutely need that feature, you may request it from the developer or contribute to that project.
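A rough sketch of that handler-registry idea (hypothetical names, not part of the CoAPthon API): the resource looks up its current handler on every request, so its behaviour can be swapped from an input thread without registering new routes.
# Sketch only: swap behaviour at runtime via a shared registry of handlers.
from coapthon.resources.resource import Resource

HANDLERS = {"sensor": lambda: "initial sensor behaviour"}

class RegistryBackedSensor(Resource):
    def __init__(self, name="RegistrySensor", coap_server=None):
        super(RegistryBackedSensor, self).__init__(name, coap_server,
                                                    visible=True, observable=True)
        self.payload = ""

    def render_GET(self, request):
        self.payload = HANDLERS["sensor"]()   # delegate to the registered handler
        return self

# later, on a user-input event, swap the handler in place:
HANDLERS["sensor"] = lambda: "swapped-in behaviour"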
Add a new resource
I have extended your snippet a little bit.
I have little experience in Python, so I am not sure I've done everything properly, but it works.
There is a separate thread polling for user input and adding the same resource; add the code you need there.
from coapthon.resources.resource import Resource
from coapthon.server.coap import CoAP
from threading import Thread
import sys

class Sensor(Resource):
    def __init__(self, name="Sensor", coap_server=None):
        super(Sensor, self).__init__(name, coap_server, visible=True,
                                     observable=True, allow_children=True)
        self.payload = "This is a new sensor"
        self.resource_type = "rt1"
        self.content_type = "application/json"
        self.interface_type = "if1"
        self.var = 0

    def render_GET(self, request):
        self.payload = "new sensor value ::{}".format(str(int(self.var+1)))
        self.var += 1
        return self

class Actuator(Resource):
    def __init__(self, name="Actuator", coap_server=None):
        super(Actuator, self).__init__(name, coap_server, visible=True,
                                       observable=True)
        self.payload = "This is an actuator"
        self.resource_type = "rt1"

    def render_GET(self, request):
        return self

class CoAPServer(CoAP):
    def __init__(self, host, port, multicast=False):
        CoAP.__init__(self, (host, port), multicast)
        self.add_resource('sens-Me/', Sensor())
        self.add_resource('act-Me/', Actuator())
        print "CoAP server started on {}:{}".format(str(host), str(port))
        print self.root.dump()

def pollUserInput(server):
    while 1:
        user_input = raw_input("Some input please: ")
        print user_input
        server.add_resource('sens-Me2/', Sensor())

def main():
    ip = "0.0.0.0"
    port = 5683
    multicast = False
    server = CoAPServer(ip, port, multicast)
    thread = Thread(target=pollUserInput, args=(server,))
    thread.setDaemon(True)
    thread.start()
    try:
        server.listen(10)
        print "executed after listen"
    except KeyboardInterrupt:
        print server.root.dump()
        server.close()
        sys.exit()

if __name__ == "__main__":
    main()

Python Multiprocessing Manager - Client unable to reconnect

I am running an application which cannot sit and wait for the connection to a Python Manager to succeed or fail. The client application should try to send some info to the supposedly running server, and if that fails, another measure is taken. The problem is that whenever the server is down, the connection attempt takes a long time to return control to the client application, and the client cannot waste time waiting for it because it has other things to do.
I came up with a scheme where an intermediary object is in charge of the connection, but it only works once. The first time, when there is still no connection to the server, this intermediary object handles the connecting part without blocking the client application. If, for some reason, the server goes down and comes back again, I can't get it to work anymore.
Suppose I have the following server:
# server.py
from multiprocessing import Queue, managers
from multiprocessing.queues import Empty
import select
import threading

class RServer(object):
    def __init__(self, items_buffer):
        self.items_buffer = items_buffer

    def receive_items(self):
        while True:
            (_, [], []) = select.select([self.items_buffer._reader], [], [])
            while True:
                try:
                    item = self.items_buffer.get(block=False)
                    # do something with item
                    print('item received')
                except Empty:
                    break

class SharedObjectsManager(managers.BaseManager):
    pass

if __name__ == '__main__':
    items_buffer = Queue()
    remote_server = RServer(items_buffer)
    remote_server_th = threading.Thread(target=remote_server.receive_items)
    remote_server_th.start()
    SharedObjectsManager.register('items_buffer', callable=lambda: items_buffer)
    shared_objects_manager = SharedObjectsManager(address=('localhost', 5001),
                                                  authkey=str.encode('my_server'),
                                                  serializer='xmlrpclib')
    s = shared_objects_manager.get_server()
    s.serve_forever()
And here is the intermediary object to handle the connection:
# bridge.py
from multiprocessing.managers import BaseManager
import threading
import socket

class ConnectionManager():
    def __init__(self):
        self.remote_manager = BaseManager(address=('localhost', 5001),
                                          authkey=b'my_server',
                                          serializer='xmlrpclib')
        self.remote_manager.register('items_buffer')
        self.items_buffer = None
        self.items_buffer_lock = threading.Lock()
        self.connecting = False
        self.connecting_lock = threading.Lock()
        self.connection_started_condition = threading.Condition()

    def transmit_item(self, item):
        try:
            with self.items_buffer_lock:
                self.items_buffer.put(item)
        except (AttributeError, EOFError, IOError):
            with self.connection_started_condition:
                with self.connecting_lock:
                    if not self.connecting:
                        self.connecting = True
                        connect_th = threading.Thread(target=self.connect_to_server,
                                                      name='Client Connect')
                        connect_th.start()
                        self.connection_started_condition.notify()
            raise ConnectionError('Connection Error')

    def connect_to_server(self):
        with self.connection_started_condition:
            self.connection_started_condition.wait()
        try:
            self.remote_manager.connect()
        except socket.error:
            pass
        else:
            try:
                with self.items_buffer_lock:
                    self.items_buffer = self.remote_manager.items_buffer()
            except (AssertionError, socket.error):
                pass
        with self.connecting_lock:
            self.connecting = False

class ConnectionError(Exception):
    def __init__(self, value):
        self.value = value

    def __str__(self):
        return repr(self.value)
And finally the client application:
# client.py
import time
from bridge import ConnectionManager, ConnectionError

remote_buffer = ConnectionManager()
while True:
    try:
        remote_buffer.transmit_item({'rubish': None})
        print('item sent')
    except ConnectionError:
        # do something else
        print('item not sent')
    # do other stuff
    print('doing other stuff')
    time.sleep(15)
I am surely doing something wrong with the thread, but I can't figure out what. Any ideas?

logging seems to have memory leak for multi-thread usage

I've run into a memory-leak scenario in Python. I suspect it's related to using the logging module from multiple threads, but I can't work out why.
Version 1 (with memory leak, multi-threaded call)
campaign_id_queue = Queue.Queue()
campaign_worker = {}  # it has data inside, key is ID, value is Class object
for campaign_id, worker in campaign_worker.iteritems():
    campaign_id_queue.put(campaign_id)
thread_list = []
for n in range(THREAD_NUM):  # defined already
    thread_list.append(Thread(target=parallel_run,
                              args=(campaign_id_queue, now, n, logger)))
for thread in thread_list:
    thread.daemon = True
    thread.start()
campaign_id_queue.join()

# another file
def parallel_run(campaign_id_queue, now, n, logger):
    while True:
        try:
            campaign_id = campaign_id_queue.get()
        except Queue.Empty:
            logger.warning('Queue empty')
        else:
            try:
                if worker.open_clients(logger) < 0:
                    logger.error('error here')
                    continue
                worker.run(now, logger)
            except Exception, e:
                logger.exception(e)
            finally:
                campaign_id_queue.task_done()
Version 2 (without memory leak, single-threaded call)
campaign_worker = {}  # it has data inside, key is ID, value is Class object
for campaign_id, worker in campaign_worker.iteritems():
    if worker.open_clients(logger) < 0:
        logger.error('error here')
        continue
    worker.run(now, logger)
It turned out to be related to threads not being killed after use, not to the logging module. It's solved now, thanks for the attention.
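For readers hitting the same thing, here is a minimal sketch of the kind of fix implied above (my assumption of what "threads not killed after use" means, in the same Python 2 style as the snippet; the campaign_worker lookup is assumed): feed one sentinel per worker so each thread leaves its loop and dies instead of blocking on get() forever.
# Sketch only: a poison pill per worker lets the threads exit after the batch.
def parallel_run(campaign_id_queue, now, n, logger):
    while True:
        campaign_id = campaign_id_queue.get()
        if campaign_id is None:                    # sentinel: this worker is done
            campaign_id_queue.task_done()
            break
        try:
            worker = campaign_worker[campaign_id]  # assumed lookup by ID
            if worker.open_clients(logger) < 0:
                logger.error('error here')
                continue
            worker.run(now, logger)
        except Exception, e:
            logger.exception(e)
        finally:
            campaign_id_queue.task_done()

# after queueing the real campaign IDs:
for _ in range(THREAD_NUM):
    campaign_id_queue.put(None)                    # one poison pill per thread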

Using threads in the right way

I'm working on a server written in Python. When the client sends a cmd, the server calls a function with an unknown running time. So, to avoid blocking, I used threading. But when looking at the child threads, it seems they're not terminating, causing a lot of memory usage.
EDIT : Here is the tree of the directory : http://pastebin.com/WZDxLquC
Following answers I found on Stack Overflow, I implemented a custom Thread class:
sThreads.py :
import threading
class Thread(threading.Thread):
def __init__(self, aFun, args = ()):
super(Thread, self).__init__(None, aFun, None, args)
self.stopped = threading.Event()
def stop(self):
self.stopped.set()
def isStopped(self):
return self.stopped.isSet()
Then here is the server's loop:
somewhere in mainServer.py:
def serve_forever(self, aCustomClass, aSize=1024):
    while True:
        self.conn, self.addr = self.sock.accept()
        msg = self.recvMSG(4096)
        if(msg):
            self.handShake(msg)
            print 'Accepted !'
            while True:
                msg = self.recvMSG(aSize)
                if(msg):
                    t = sThreads.Thread(self.handle, (aCustomClass,))
                    t.start()
                    self.currentThreads.append(t)
                    if(self.workers > 0):
                        tt = sThreads.Thread(self.respond)
                        tt.start()
                if(self.workers == 0 and len(self.currentThreads) > 0):
                    for th in self.currentThreads:
                        th.stop()
Using a custom Thread class does not solve the issue; it still does not stop the finished threads!
EDIT: added the handle() and respond() methods:
def handle(self, aClass):
    self.workers += 1
    self.queue.put(aClass._onRecieve(self.decodeStream()))

def respond(self):
    while self.workers > 0:
        msgToSend, wantToSend = self.queue.get()
        self.workers -= 1
        if(wantToSend):
            print 'I want to send :', msgToSend
            continue  # Send is not yet implemented!
It seems that self.queue.get() was causing all the trouble ...
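For completeness, a minimal sketch of one way around that blocking call (my assumption, not the poster's confirmed fix): give the get() a timeout so respond() periodically re-checks self.workers and can return instead of hanging forever on an empty queue.
# Sketch only, Python 2 style to match the snippets above.
import Queue   # Python 2 module name

def respond(self):
    while self.workers > 0:
        try:
            msgToSend, wantToSend = self.queue.get(timeout=1)
        except Queue.Empty:
            continue                      # nothing queued yet; re-check workers
        self.workers -= 1
        if wantToSend:
            print 'I want to send :', msgToSend
            continue                      # send is not yet implemented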
