Restart a Python function in a script after n minutes

I have a script that uses a server-sent events (SSE) library to connect to a server that pushes events to me regularly. The problem is that the stream freezes after a long time and I have to restart the script manually, which is not maintainable. The structure of the current code looks like this:
def listen(self):
    print("listening to events .....")
    try:
        url = settings.EVENT_URL + "/v1/events"
        auth_key = settings.KEY
        headers = {
            "Authorization": "Basic " + auth_key,
            "Accept": "text/event-stream",
        }
        response = self.with_urllib3(url, headers)
        client = sseclient.SSEClient(response)
        for event in client.events():
            # the script freezes here.
            logger.info(event.data)
            process(event.data)
    except Exception:
        raise  # (error handling omitted in the question)
I have tried doing something like this:
def start(self):
    def wait():
        time.sleep(10 * 60)
    background = threading.Thread(name='background', target=self.listen)
    background.daemon = True
    background.start()
    wait()

try:
    self.start()
except:
    self.start()
finally:
    self.start()
But I don't know if this will work, mainly because the daemon thread will keep running in the background, which means I will end up with multiple copies of the task after a while.
What I need is a better way to call a function, return from it after some elapsed time, and immediately call it again. Thanks for any help.

You could consider a construction using the signal module, as shown below. Note, though, that the SIGALRM signal is not available on Windows.
import signal
import time

TIMEOUT = 5

def _handle_alarm(signum, frame):
    raise TimeoutError("Some useful message")

def listen():
    print("Starting to listen...")
    time.sleep(10)  # stand-in for the blocking event stream

# Install the handler once; signal.alarm() then schedules a SIGALRM
# that interrupts listen() after TIMEOUT seconds.
signal.signal(signal.SIGALRM, _handle_alarm)
while True:
    try:
        signal.alarm(TIMEOUT)
        listen()
    except TimeoutError:
        pass
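If you need something that also works on Windows, a cross-platform alternative is to run the listener in a child process and hard-restart it on a timer. Below is a minimal sketch of that idea; the function body and the timeout are illustrative stand-ins, not code from the question.

import multiprocessing
import time

RESTART_AFTER = 10 * 60  # seconds; adjust to taste

def listen():
    print("listening to events .....")
    time.sleep(10 ** 6)  # stand-in for the blocking SSE loop

if __name__ == "__main__":
    while True:
        worker = multiprocessing.Process(target=listen)
        worker.start()
        worker.join(RESTART_AFTER)  # wait up to RESTART_AFTER seconds
        if worker.is_alive():
            worker.terminate()      # kill a frozen listener outright
            worker.join()

Unlike a daemon thread, a terminated process cannot linger, so restarting never accumulates extra copies of the task.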

Related

Tornado websocket client losing response messages?

I need to process frames from a webcam and send a few selected frames to a remote websocket server. The server answers immediately with a confirmation message (much like an echo server).
Frame processing is slow and CPU-intensive, so I want to do it in a separate thread pool (producer) to use all the available cores. The client (consumer) just sits idle until the pool has something to send.
My current implementation, see below, works fine only if I add a small sleep inside the producer test loop. If I remove this delay I stop receiving any answers from the server (both the echo server and my real server). Even the first answer is lost, so I do not think this is a flood-protection mechanism.
What am I doing wrong?
import tornado.ioloop
from tornado.websocket import websocket_connect
from tornado import gen, queues
import time

class TornadoClient(object):
    url = None
    onMessageReceived = None
    onMessageSent = None
    ioloop = tornado.ioloop.IOLoop.current()
    q = queues.Queue()

    def __init__(self, url, onMessageReceived, onMessageSent):
        self.url = url
        self.onMessageReceived = onMessageReceived
        self.onMessageSent = onMessageSent

    def enqueueMessage(self, msgData, binary=False):
        print("TornadoClient.enqueueMessage")
        self.ioloop.add_callback(self.addToQueue, (msgData, binary))
        print("TornadoClient.enqueueMessage done")

    @gen.coroutine
    def addToQueue(self, msgTuple):
        yield self.q.put(msgTuple)

    @gen.coroutine
    def main_loop(self):
        connection = None
        try:
            while True:
                while connection is None:
                    try:
                        print("Connecting...")
                        connection = yield websocket_connect(self.url)
                        print("Connected " + str(connection))
                    except Exception, e:
                        print("Exception on connection " + str(e))
                        connection = None
                        print("Retry in a few seconds...")
                        yield gen.Task(self.ioloop.add_timeout, time.time() + 3)
                try:
                    print("Waiting for data to send...")
                    msgData, binaryVal = yield self.q.get()
                    print("Writing...")
                    sendFuture = connection.write_message(msgData, binary=binaryVal)
                    print("Write scheduled...")
                finally:
                    self.q.task_done()
                yield sendFuture
                self.onMessageSent("Sent ok")
                print("Write done. Reading...")
                msg = yield connection.read_message()
                print("Got msg.")
                self.onMessageReceived(msg)
                if msg is None:
                    print("Connection lost")
                    connection = None
            print("main loop completed")
        except Exception, e:
            print("ExceptionExceptionException")
            print(e)
            connection = None
        print("Exit main_loop function")

    def start(self):
        self.ioloop.run_sync(self.main_loop)
        print("Main loop completed")

######### TEST METHODS #########
def sendMessages(client):
    time.sleep(2)  # TEST only: wait for client startup
    while True:
        client.enqueueMessage("msgData", binary=False)
        time.sleep(1)  # <--- comment this line to break it

def testPrintMessage(msg):
    print("Received: " + str(msg))

def testPrintSentMessage(msg):
    print("Sent: " + msg)

if __name__ == '__main__':
    from threading import Thread
    client = TornadoClient("ws://echo.websocket.org", testPrintMessage, testPrintSentMessage)
    thread = Thread(target=sendMessages, args=(client,))
    thread.start()
    client.start()
My real problem
In my real program I use a "window-like" mechanism to protect the consumer (an autobahn.twisted.websocket server): the producer can send up to a maximum number of unacknowledged messages (the webcam frames), then stops, waiting for half of the window to free up.
The consumer sends a "PROCESSED" message back, acknowledging one or more messages (just a counter, not by id).
What I see in the consumer log is that the messages are processed and the answer is sent back, but these acks vanish somewhere in the network.
I have little experience with asyncio, so I wanted to be sure that I'm not missing any yield, annotation or something else.
This is the consumer-side log:
2017-05-13 18:59:54+0200 [-] TX Frame to tcp4:192.168.0.5:48964 : fin = True, rsv = 0, opcode = 1, mask = -, length = 21, repeat_length = None, chopsize = None, sync = False, payload = {"type": "PROCESSED"}
2017-05-13 18:59:54+0200 [-] TX Octets to tcp4:192.168.0.5:48964 : sync = False, octets = 81157b2274797065223a202250524f434553534544227d
This is neat code. I believe the reason you need a sleep in your sendMessages thread is that, otherwise, it keeps calling enqueueMessage as fast as possible, millions of times per second. Since enqueueMessage does not wait for the enqueued message to be processed, it keeps calling IOLoop.add_callback as fast as it can, without giving the loop enough opportunity to execute the callbacks.
The loop might make some progress running on the main thread, since you're not actually blocking it. But the sendMessages thread adds callbacks much faster than the loop can handle them. By the time the loop has popped one message from the queue and has begun to process it, millions of new callbacks are added already, which the loop must execute before it can advance to the next stage of message-processing.
Therefore, for your test code, I think it's correct to sleep between calls to enqueueMessage on the thread.
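If you want real backpressure instead of a sleep, one option is to make enqueueMessage block the producer thread until the loop has actually stored the message. This is a sketch of mine rather than part of the answer; it assumes Tornado 4.x and that TornadoClient uses a bounded queue such as queues.Queue(maxsize=100).

import threading
from tornado import gen

# Sketch: drop-in replacement for TornadoClient.enqueueMessage.
def enqueueMessage(self, msgData, binary=False):
    done = threading.Event()

    @gen.coroutine
    def _put():
        yield self.q.put((msgData, binary))  # waits while the queue is full
        done.set()

    self.ioloop.add_callback(_put)
    done.wait()  # the producer thread blocks here, so it cannot flood the loop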

How to stop Python Websocket client "ws.run_forever"

I'm starting my Python websocket client using "ws.run_forever". Another source stated that I should use "run_until_complete()", but that function only seems available with Python's asyncio.
How can I stop a websocket client? Or how can I start it without running forever?
In python websockets, you can use "ws.keep_running = False" to stop the "forever running" websocket.
This may be a little unintuitive, and you may choose another library which may work better overall.
The code below was working for me (using ws.keep_running = False).
import threading
import time
import websocket

class testingThread(threading.Thread):
    def __init__(self, threadID):
        threading.Thread.__init__(self)
        self.threadID = threadID

    # Minimal callback stubs; the original snippet omitted these handlers.
    def on_message(self, ws, message):
        print str(self.threadID) + " Received: " + str(message)

    def on_error(self, ws, error):
        print str(self.threadID) + " Error: " + str(error)

    def on_open(self, ws):
        pass

    def on_close(self, ws):
        pass

    def run(self):
        print str(self.threadID) + " Starting thread"
        self.ws = websocket.WebSocketApp("ws://localhost/ws", on_error=self.on_error, on_close=self.on_close, on_message=self.on_message, on_open=self.on_open)
        self.ws.keep_running = True
        self.wst = threading.Thread(target=self.ws.run_forever)
        self.wst.daemon = True
        self.wst.start()
        running = True
        testNr = 0
        time.sleep(0.1)
        while running:
            testNr = testNr + 1
            time.sleep(1.0)
            self.ws.send(str(self.threadID) + " Test: " + str(testNr))
            if testNr >= 10:  # stop after a few test messages so the line below is reached
                running = False
        self.ws.keep_running = False
        print str(self.threadID) + " Exiting thread"
There's also a close method on WebSocketApp which sets keep_running to False and also closes the socket.
The documentation says to use an asynchronous dispatcher like rel, which handles keyboard interrupts; you can then register a custom callback for the keyboard-interrupt event.
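A minimal sketch of that documented pattern, assuming the websocket-client package with the rel dispatcher installed (the URL and callback are placeholders):

import websocket
import rel

def on_message(ws, message):
    print(message)

ws = websocket.WebSocketApp("ws://localhost/ws", on_message=on_message)
ws.run_forever(dispatcher=rel)  # rel drives the socket asynchronously
rel.signal(2, rel.abort)        # SIGINT (Ctrl+C) aborts the dispatcher
rel.dispatch()                  # runs until rel.abort() is called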

Trying to implement 2 "threads" using `asyncio` module

I've played around with threading before in Python, but decided to give the asyncio module a try, especially since you can cancel a running task, which seemed like a nice detail. However, for some reason, I can't wrap my head around it.
Here's what I wanted to implement (sorry if I'm using incorrect terminology):
a downloader thread that downloads the same file every x seconds, checks its hash against the previous download and saves it if it's different.
a webserver thread that runs in the background, allowing control (pause, list, stop) of the downloader thread.
I used aiohttp for the webserver.
This is what I have so far:
class aiotest():
    def __init__(self):
        self._dl = None      # downloader future
        self._webapp = None  # web server future
        self.init_server()

    def init_server(self):
        print('Setting up web interface')
        app = web.Application()
        app.router.add_route('GET', '/stop', self.stop)
        print('added urls')
        self._webapp = app

    @asyncio.coroutine
    def _downloader(self):
        while True:
            try:
                print('Downloading and verifying file...')
                # Dummy sleep - to be replaced by actual code
                yield from asyncio.sleep(random.randint(3, 10))
                # Wait a predefined nr of seconds between downloads
                yield from asyncio.sleep(30)
            except asyncio.CancelledError:
                break

    @asyncio.coroutine
    def _supervisor(self):
        print('Starting downloader')
        self._dl = asyncio.async(self._downloader())

    def start(self):
        loop = asyncio.get_event_loop()
        loop.run_until_complete(self._supervisor())
        loop.close()

    @asyncio.coroutine
    def stop(self):
        print('Received STOP')
        self._dl.cancel()
        return web.Response(body=b"Stopping... ")
This class is called by:
t = aiotest()
t.start()
This doesn't work of course, and I feel that this is a horrible piece of code.
What's unclear to me:
I stop the downloader in the stop() method, but how would I go about stopping the webserver (e.g. in a shutdown() method)?
Does the downloader need a new event loop, or can I use the loop returned by asyncio.get_event_loop()?
Do I really need something like the supervisor for what I'm trying to implement? This seems so clunky. And how do I get supervisor to keep running instead of ending after a single execution as it does now?
One last, more general question: is asyncio supposed to replace the threading module (in the future)? Or does each have its own application?
I appreciate all the pointers, remarks and clarifications!
Why the current code is not working:
You're running the event loop until self._supervisor() completes. self._supervisor() creates the task (which happens immediately) and then finishes immediately.
You're trying to run the event loop until _supervisor completes, but how and when are you going to start the server? The event loop should be running until the server is stopped. _supervisor and other coroutines can be added as tasks to the same event loop. aiohttp already has a function that starts the server and the event loop, web.run_app, but we can also do it manually.
Your questions:
Your server will run until you stop it. You can start/stop different coroutines while the server is working.
You need only one event loop for all coroutines.
I don't think you need a supervisor.
More general question: asyncio helps you run different functions in parallel within a single thread in a single process. That's why asyncio is so cool and fast. Some of your sync threaded code can be rewritten using asyncio and its coroutines. Moreover, asyncio can interact with threads and processes, which can be useful in case you still need them; here's an example.
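For instance, here is a minimal sketch of mine (in the same pre-3.5 coroutine style as the code below) showing how run_in_executor hands blocking work to a thread pool from a coroutine:

import asyncio
import time
from concurrent.futures import ThreadPoolExecutor

def blocking_io():
    time.sleep(1)  # stands in for blocking work (file I/O, a C library, ...)
    return 'done'

@asyncio.coroutine
def main(loop):
    executor = ThreadPoolExecutor(max_workers=2)
    # run_in_executor bridges coroutines and threads:
    result = yield from loop.run_in_executor(executor, blocking_io)
    print(result)

loop = asyncio.get_event_loop()
loop.run_until_complete(main(loop))
loop.close()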
Useful notes:
It's better to use the term coroutine instead of thread when talking about asyncio coroutines, which are not threads.
If you use Python 3.5, you can use the async/await syntax instead of the coroutine decorator and yield from.
I rewrote your code to show the idea. How to check it: run the program, watch the console, open http://localhost:8080/stop, watch the console, open http://localhost:8080/start, watch the console, then press CTRL+C.
import asyncio
import random
from contextlib import suppress
from aiohttp import web

class aiotest():
    def __init__(self):
        self._webapp = None
        self._d_task = None
        self.init_server()

    # SERVER:
    def init_server(self):
        app = web.Application()
        app.router.add_route('GET', '/start', self.start)
        app.router.add_route('GET', '/stop', self.stop)
        app.router.add_route('GET', '/kill_server', self.kill_server)
        self._webapp = app

    def run_server(self):
        # Create server:
        loop = asyncio.get_event_loop()
        handler = self._webapp.make_handler()
        f = loop.create_server(handler, '0.0.0.0', 8080)
        srv = loop.run_until_complete(f)
        try:
            # Start downloader at server start.
            # (Using the controllers directly here and below to keep it short,
            #  but it's better to split the controller from the start func.)
            asyncio.async(self.start(None))
            # Start server:
            loop.run_forever()
        except KeyboardInterrupt:
            pass
        finally:
            # Stop downloader when server stopped:
            loop.run_until_complete(self.stop(None))
            # Cleanup resources:
            srv.close()
            loop.run_until_complete(srv.wait_closed())
            loop.run_until_complete(self._webapp.shutdown())
            loop.run_until_complete(handler.finish_connections(60.0))
            loop.run_until_complete(self._webapp.cleanup())
            loop.close()

    @asyncio.coroutine
    def kill_server(self, request):
        print('Server killing...')
        loop = asyncio.get_event_loop()
        loop.stop()
        return web.Response(body=b"Server killed")

    # DOWNLOADER
    @asyncio.coroutine
    def start(self, request):
        if self._d_task is None:
            print('Downloader starting...')
            self._d_task = asyncio.async(self._downloader())
            return web.Response(body=b"Downloader started")
        else:
            return web.Response(body=b"Downloader already started")

    @asyncio.coroutine
    def stop(self, request):
        if (self._d_task is not None) and (not self._d_task.cancelled()):
            print('Downloader stopping...')
            self._d_task.cancel()
            # cancel() just tells the task it should be cancelled;
            # to let the task handle CancelledError, await it:
            with suppress(asyncio.CancelledError):
                yield from self._d_task
            self._d_task = None
            return web.Response(body=b"Downloader stopped")
        else:
            return web.Response(body=b"Downloader already stopped or stopping")

    @asyncio.coroutine
    def _downloader(self):
        while True:
            print('Downloading and verifying file...')
            # Dummy sleep - to be replaced by actual code
            yield from asyncio.sleep(random.randint(1, 2))
            # Wait a predefined nr of seconds between downloads
            yield from asyncio.sleep(1)

if __name__ == '__main__':
    t = aiotest()
    t.run_server()
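For reference, a compact modern sketch of the same idea (my rewrite, assuming Python 3.7+ and aiohttp 3.x, where web.run_app drives the loop and handles Ctrl+C):

import asyncio
from aiohttp import web

async def downloader():
    while True:
        print('Downloading and verifying file...')
        await asyncio.sleep(2)  # dummy sleep, as in the original

async def start(request):
    if request.app.get('d_task') is None:
        request.app['d_task'] = asyncio.create_task(downloader())
        return web.Response(text='Downloader started')
    return web.Response(text='Downloader already started')

async def stop(request):
    task = request.app.get('d_task')
    if task is not None:
        task.cancel()  # the task sees CancelledError at its next await
        request.app['d_task'] = None
        return web.Response(text='Downloader stopped')
    return web.Response(text='Downloader already stopped')

app = web.Application()
app.router.add_get('/start', start)
app.router.add_get('/stop', stop)
web.run_app(app, host='0.0.0.0', port=8080)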

Tornado not running callback

This is a very simple client for a TCP chat. This is the main function:
@gen.coroutine
def main():
    factory = TCPClient()
    stream = yield factory.connect(af=socket.AF_INET, **options.options.group_dict("connect"))
    # Add notification callback
    ioloop.IOLoop.instance().add_callback(notification, stream)
    # Run application
    app = Application(stream)
    app.run()

if __name__ == '__main__':
    try:
        main()
        ioloop.IOLoop.instance().start()
    except KeyboardInterrupt:
        pass
The application runs; it reads data from the console and sends it over the socket. The main loop of the application:
@gen.coroutine
def run(self):
    while True:
        try:
            s = input('> ')
            command, text = self._parse_command(s)
            handler = self.handler(self._stream, self)
            yield handler.execute_command(command, text)
        except Exception as e:
            print(e)
And I have a console notification function, which reads the response from the socket and prints it to the console:
@gen.coroutine
def notification(stream):
    message_length = yield stream.read_bytes(2)
    length = struct.unpack("!H", message_length)[0]
    message = yield stream.read_bytes(length)
    # request = Message.unpack(message=message)
    sys.stdout.write('\r' + ' ' * (len(readline.get_line_buffer()) + 2) + '\r')
    print(message)
    sys.stdout.write('> ' + readline.get_line_buffer())
    sys.stdout.flush()
    ioloop.IOLoop.instance().add_callback(notification, stream)
I add this function as a callback to the ioloop, but it never runs. How can I run the notification in the background? Help me, please...
UPD:
I created a new thread and ran notification in it:

th = threading.Thread(target=notification, args=(self._stream, ))
th.run()

But it did not help...
input() is a blocking function; nothing else can happen while it is waiting for input. In order for the application to be responsive you must rework it to avoid blocking functions like input(), or to perform those functions on a separate thread.
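A minimal sketch of the second option, assuming Tornado 4.x: submit input() to a thread pool and yield the resulting future, so the IOLoop stays free to run the notification callback. This is my illustration, not the asker's code.

from concurrent.futures import ThreadPoolExecutor
from tornado import gen

executor = ThreadPoolExecutor(1)  # one worker is enough for console input

@gen.coroutine
def run(self):
    while True:
        try:
            # input() now blocks a worker thread instead of the IOLoop.
            s = yield executor.submit(input, '> ')
            command, text = self._parse_command(s)
            handler = self.handler(self._stream, self)
            yield handler.execute_command(command, text)
        except Exception as e:
            print(e)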

How to stop BaseHTTPServer.serve_forever() in a BaseHTTPRequestHandler subclass?

I am running my HTTPServer in a separate thread (using the threading module which has no way to stop threads...) and want to stop serving requests when the main thread also shuts down.
The Python documentation states that BaseHTTPServer.HTTPServer is a subclass of SocketServer.TCPServer, which supports a shutdown method, but it is missing in HTTPServer.
The whole BaseHTTPServer module has very little documentation :(
Another way to do it, based on http://docs.python.org/2/library/basehttpserver.html#more-examples, is the following: instead of serve_forever(), keep serving as long as a condition is met, with the server checking the condition before and after each request. For example:
import CGIHTTPServer
import BaseHTTPServer

KEEP_RUNNING = True

def keep_running():
    return KEEP_RUNNING

class Handler(CGIHTTPServer.CGIHTTPRequestHandler):
    cgi_directories = ["/cgi-bin"]

httpd = BaseHTTPServer.HTTPServer(("", 8000), Handler)

while keep_running():
    httpd.handle_request()
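For completeness, a hypothetical way to flip that flag from a request itself (my addition, not part of the linked example): override do_GET so that a special path stops the loop after the current request.

# Hypothetical extension: a GET on /stop flips the module-level flag,
# so the while-loop above exits once this request has been handled.
class StoppableHandler(CGIHTTPServer.CGIHTTPRequestHandler):
    cgi_directories = ["/cgi-bin"]

    def do_GET(self):
        global KEEP_RUNNING
        if self.path == "/stop":
            KEEP_RUNNING = False
            self.send_response(200)
            self.end_headers()
            self.wfile.write("Shutting down\n")
        else:
            CGIHTTPServer.CGIHTTPRequestHandler.do_GET(self)

To use it, pass StoppableHandler to BaseHTTPServer.HTTPServer in place of Handler above.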
I should start by saying "I probably wouldn't do this myself, but I have in the past". The serve_forever method (from SocketServer.py) looks like this:
def serve_forever(self):
    """Handle one request at a time until doomsday."""
    while 1:
        self.handle_request()

You could replace (in a subclass) while 1 with while self.should_be_running, and modify that value from a different thread. Something like:

def stop_serving_forever(self):
    """Stop handling requests"""
    self.should_be_running = 0
    # Make a fake request to the server, to really force it to stop.
    # Otherwise it will just stop on the next request.
    # (Exercise for the reader.)
    self.make_a_fake_request_to_myself()
Edit: I dug up the actual code I used at the time:
import SimpleXMLRPCServer
import xmlrpclib

class StoppableRPCServer(SimpleXMLRPCServer.SimpleXMLRPCServer):
    stopped = False
    allow_reuse_address = True

    def __init__(self, *args, **kw):
        SimpleXMLRPCServer.SimpleXMLRPCServer.__init__(self, *args, **kw)
        self.register_function(lambda: 'OK', 'ping')

    def serve_forever(self):
        while not self.stopped:
            self.handle_request()

    def force_stop(self):
        self.server_close()
        self.stopped = True
        self.create_dummy_request()

    def create_dummy_request(self):
        server = xmlrpclib.Server('http://%s:%s' % self.server_address)
        server.ping()
The event loop ends on SIGTERM, on Ctrl+C, or when shutdown() is called.
server_close() must be called after serve_forever() to close the listening socket.
import http.server

class StoppableHTTPServer(http.server.HTTPServer):
    def run(self):
        try:
            self.serve_forever()
        except KeyboardInterrupt:
            pass
        finally:
            # Clean up the server (close socket, etc.)
            self.server_close()

Simple server stoppable with user action (SIGTERM, Ctrl+C, ...):

server = StoppableHTTPServer(("127.0.0.1", 8080),
                             http.server.BaseHTTPRequestHandler)
server.run()
Server running in a thread:
import threading

server = StoppableHTTPServer(("127.0.0.1", 8080),
                             http.server.BaseHTTPRequestHandler)

# Start processing requests
thread = threading.Thread(None, server.run)
thread.start()

# ... do things ...

# Shutdown server
server.shutdown()
thread.join()
In my Python 2.6 installation, I can call it on the underlying TCPServer; it's still there inside your HTTPServer:
TCPServer.shutdown
>>> import BaseHTTPServer
>>> h=BaseHTTPServer.HTTPServer(('',5555), BaseHTTPServer.BaseHTTPRequestHandler)
>>> h.shutdown
<bound method HTTPServer.shutdown of <BaseHTTPServer.HTTPServer instance at 0x0100D800>>
>>>
I think you can use [serverName].socket.close()
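A short sketch of that suggestion (my illustration; note that yanking the socket away usually makes the serving thread exit via a socket error rather than cleanly):

import threading
import BaseHTTPServer

httpd = BaseHTTPServer.HTTPServer(("", 8000), BaseHTTPServer.BaseHTTPRequestHandler)
thread = threading.Thread(target=httpd.serve_forever)
thread.start()

# ... later, from the main thread:
httpd.socket.close()  # serve_forever() errors out of its accept/select
thread.join()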
In Python 2.7, calling shutdown() works, but only if you are serving via serve_forever, because it uses an async select and a polling loop. Running your own loop with handle_request() ironically excludes this functionality because it implies a dumb blocking call.
From SocketServer.py's BaseServer:
def serve_forever(self, poll_interval=0.5):
    """Handle one request at a time until shutdown.

    Polls for shutdown every poll_interval seconds. Ignores
    self.timeout. If you need to do periodic tasks, do them in
    another thread.
    """
    self.__is_shut_down.clear()
    try:
        while not self.__shutdown_request:
            # XXX: Consider using another file descriptor or
            # connecting to the socket to wake this up instead of
            # polling. Polling reduces our responsiveness to a
            # shutdown request and wastes cpu at all other times.
            r, w, e = select.select([self], [], [], poll_interval)
            if self in r:
                self._handle_request_noblock()
    finally:
        self.__shutdown_request = False
        self.__is_shut_down.set()
Here's part of my code for doing a blocking shutdown from another thread, using an event to wait for completion:
class MockWebServerFixture(object):
    def start_webserver(self):
        """
        start the web server on a new thread
        """
        self._webserver_died = threading.Event()
        self._webserver_thread = threading.Thread(
            target=self._run_webserver_thread)
        self._webserver_thread.start()

    def _run_webserver_thread(self):
        self.webserver.serve_forever()
        self._webserver_died.set()

    def _kill_webserver(self):
        if not self._webserver_thread:
            return
        self.webserver.shutdown()
        # wait a bit for the thread to die, then give up and raise an exception.
        if not self._webserver_died.wait(5):
            raise ValueError("couldn't kill webserver")
This is a simplified version of Helgi's answer for Python 3.7:
import threading
import time
from http.server import ThreadingHTTPServer, SimpleHTTPRequestHandler

class MyServer(threading.Thread):
    def run(self):
        self.server = ThreadingHTTPServer(('localhost', 8000), SimpleHTTPRequestHandler)
        self.server.serve_forever()

    def stop(self):
        self.server.shutdown()

if __name__ == '__main__':
    s = MyServer()
    s.start()
    print('thread alive:', s.is_alive())  # True
    time.sleep(2)
    s.stop()
    print('thread alive:', s.is_alive())  # False
This is a method I have used successfully (Python 3) to stop the server from the web application itself (a web page):
import http.server
import os
import re

class PatientHTTPRequestHandler(http.server.SimpleHTTPRequestHandler):
    stop_server = False
    base_directory = "/static/"
    # A file to use as a "server stopped user information" page.
    stop_command = "/control/stop.html"

    def send_head(self):
        self.path = os.path.normpath(self.path)
        if self.path == PatientHTTPRequestHandler.stop_command and self.address_string() == "127.0.0.1":
            # I wanted only the local machine to be able to stop the server.
            PatientHTTPRequestHandler.stop_server = True
            # Allow the stop page to be displayed.
            return http.server.SimpleHTTPRequestHandler.send_head(self)
        if self.path.startswith(PatientHTTPRequestHandler.base_directory):
            return http.server.SimpleHTTPRequestHandler.send_head(self)
        else:
            return self.send_error(404, "Not allowed", "The path you requested is forbidden.")

if __name__ == "__main__":
    httpd = http.server.HTTPServer(("127.0.0.1", 8080), PatientHTTPRequestHandler)
    # A timeout is needed for the server to check periodically for KeyboardInterrupt
    httpd.timeout = 1
    while not PatientHTTPRequestHandler.stop_server:
        httpd.handle_request()
This way, pages served via the base address http://localhost:8080/static/ (for example http://localhost:8080/static/styles/common.css) will be served by the default handler; an access to http://localhost:8080/control/stop.html from the server's own computer will display stop.html and then stop the server; and any other path will be forbidden.
I tried all of the possible solutions above and ended up with a "sometimes" issue: somehow they did not always work, so I ended up with a dirty solution that worked every time for me:
If all of the above fails, brute-force kill your thread using something like this:
import subprocess
cmdkill = "kill $(ps aux|grep '<name of your thread> true'|grep -v 'grep'|awk '{print $2}') 2> /dev/null"
subprocess.Popen(cmdkill, stdout=subprocess.PIPE, shell=True)
import http.server
import socketserver
import socket as sck
import os
import threading

class myserver:
    def __init__(self, PORT, LOCATION):
        self.thrd = threading.Thread(None, self.run)
        self.Directory = LOCATION
        self.Port = PORT
        hostname = sck.gethostname()
        ip_address = sck.gethostbyname(hostname)
        self.url = 'http://' + ip_address + ':' + str(self.Port)
        Handler = http.server.SimpleHTTPRequestHandler
        self.httpd = socketserver.TCPServer(("", PORT), Handler)
        print('Object created, use the start() method to launch the server')

    def run(self):
        print('listening on: ' + self.url)
        os.chdir(self.Directory)
        print('myserver object started')
        print("Use the object's stop() method to stop the server")
        self.httpd.serve_forever()
        print('Quit handling')
        print('Server stopped')
        print('Port ' + str(self.Port) + ' should be available again.')

    def stop(self):
        print('Stopping server')
        self.httpd.shutdown()
        self.httpd.server_close()
        print('Need just one more request before shutting down')

    def start(self):
        self.thrd.start()

def help():
    helpmsg = '''Create a new server object by initialising
    NewServer = webserver3.myserver(Port_number, Directory_String)
    Then start it using the NewServer.start() function
    Stop it using NewServer.stop()'''
    print(helpmsg)
Not an experienced Python programmer, just wanting to share my comprehensive solution, mostly based on snippets from here and there. I usually import this script in my console; it allows me to set up multiple servers for different locations using their specific ports, sharing my content with other devices on the network.
Here's a context-manager version for Python 3.7+, which I prefer because it cleans up automatically and lets you specify the directory to serve:
from contextlib import contextmanager
from functools import partial
from http.server import SimpleHTTPRequestHandler, ThreadingHTTPServer
from threading import Thread

@contextmanager
def http_server(host: str, port: int, directory: str):
    server = ThreadingHTTPServer(
        (host, port), partial(SimpleHTTPRequestHandler, directory=directory)
    )
    server_thread = Thread(target=server.serve_forever, name="http_server")
    server_thread.start()
    try:
        yield
    finally:
        server.shutdown()
        server_thread.join()

def usage_example():
    import time
    with http_server("127.0.0.1", 8087, "."):
        # now you can use the web server
        time.sleep(100)
