I have a simple HTTP server set up like this one. It processes a slow, 40-second request that opens and then closes gates (real metal gates). If a second HTTP request arrives while the first one is still executing, it is queued and executed after the first one finishes. I don't want this behavior: I need to reply with an error if a gate open/close cycle is already in progress.
How can I do that? There's a 'request_queue_size' parameter, but I'm not sure how to set it.
You need to follow a different strategy when designing your server. Keep the state of the gate either in memory or in a database. Then, each time you receive a request to act on the gate, check the gate's current state in your persistence layer and execute the action only if it is possible in the current state; otherwise return an error. Also, don't forget to update the stored state once an action completes.
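A minimal sketch of that idea, using an in-memory dict as the persistence layer (all names here are illustrative; swap in a database if the state must survive restarts):
import threading

# In-memory "persistence": the gate is either "idle" or "moving".
gate_state = {"status": "idle"}
state_lock = threading.Lock()  # guards concurrent access from handler threads

def try_start_cycle():
    """Atomically claim the gate; return False if a cycle is already running."""
    with state_lock:
        if gate_state["status"] == "moving":
            return False
        gate_state["status"] = "moving"
        return True

def finish_cycle():
    """Mark the gate idle again once the hardware action completes."""
    with state_lock:
        gate_state["status"] = "idle"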
'request_queue_size' seems to have no effect.
The solution was to make the server multithreaded and to implement a locking variable, 'busy':
from socketserver import ThreadingMixIn
from http.server import BaseHTTPRequestHandler, HTTPServer
import time
from gpiozero import DigitalOutputDevice
import logging
from time import sleep

logging.basicConfig(format='%(asctime)s %(levelname)s:%(message)s', level=logging.INFO)

hostName = ''
hostPort = 9001
busy = False

class ThreadingServer(ThreadingMixIn, HTTPServer):
    pass

class MyServer(BaseHTTPRequestHandler):
    def do_GET(self):
        global busy
        if self.path == '/gates':
            if busy:
                # send_error() writes its own status line and headers,
                # so nothing may be sent to the client before this point.
                self.send_error(503)
                return
            busy = True
            try:
                self.send_response(200)
                self.send_header("Content-type", "text/html")
                self.end_headers()
                self.wfile.write(bytes("Hello!<br>", "utf-8"))
                relay = DigitalOutputDevice(17)  # initialize GPIO 17
                relay.on()
                logging.info('Cycle started')
                self.wfile.write(bytes("Cycle started<br>", "utf-8"))
                sleep(2)
                relay.close()  # release the pin between the two pulses
                sleep(20)
                relay = DigitalOutputDevice(17)
                relay.on()
                sleep(2)
                relay.close()
                logging.info('Cycle finished')
                self.wfile.write(bytes("Cycle finished", "utf-8"))
            finally:
                busy = False  # reset the flag even if the cycle fails
        else:
            self.send_response(200)
            self.send_header("Content-type", "text/html")
            self.end_headers()
            self.wfile.write(bytes("Hello!<br>", "utf-8"))

myServer = ThreadingServer((hostName, hostPort), MyServer)
print(time.asctime(), "Server Starts - %s:%s" % (hostName, hostPort))

try:
    myServer.serve_forever()
except KeyboardInterrupt:
    pass

myServer.server_close()
print(time.asctime(), "Server Stops - %s:%s" % (hostName, hostPort))
In general, the idea you're looking for is called request throttling. There are plenty of implementations of this kind of thing that shouldn't be hard to dig up on the Web; here's one for Flask, my microframework of choice: https://flask-limiter.readthedocs.io/en/stable/
Quick usage example:
@app.route("/open_gate")
@limiter.limit("1 per minute")  # extra requests within the window get HTTP 429
def slow():
    gate_robot.open_gate()
    return ""
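For the decorators above to work, limiter must be a Flask-Limiter instance bound to the app. A minimal setup sketch (the constructor signature has changed across Flask-Limiter releases, so check the docs for your version):
from flask import Flask
from flask_limiter import Limiter
from flask_limiter.util import get_remote_address

app = Flask(__name__)
# Recent releases take the key function first;
# older ones used Limiter(app, key_func=get_remote_address).
limiter = Limiter(get_remote_address, app=app)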
I'm trying to run a Django development server from within a Kivy application. This has worked out quite well so far.
Now I want to allow the user to continue working with the program while the server is running. My idea was to create a multiprocessing.Process for httpd.serve_forever() to avoid completely locking up the main program. That worked well too. This is the code in my internal_django module:
import multiprocessing
import os
import time
from wsgiref.simple_server import make_server

def django_wsgi_application():
    PROJECT_ROOT = os.path.dirname(os.path.abspath(__file__))
    settings_module = "djangosettings"  # "%s.djangosettings" % PROJECT_ROOT.split(os.sep)[-1]
    os.environ.update({"DJANGO_SETTINGS_MODULE": settings_module})
    from django.core.wsgi import get_wsgi_application
    application = get_wsgi_application()
    return application

class Singleton(type):
    _instances = {}
    def __call__(cls, *args, **kwargs):
        if cls not in cls._instances:
            cls._instances[cls] = super(Singleton, cls).__call__(*args, **kwargs)
        return cls._instances[cls]

class DjangoServer():
    __metaclass__ = Singleton

    def start(self):
        self.httpd = make_server('', 8000, django_wsgi_application())
        self.server = multiprocessing.Process(target=self.httpd.serve_forever)
        self.server.start()
        print("Now serving on port 8000...")
        print("Server Process PID = %s" % self.server.pid)

    def stop(self):
        print("shutdown initiated")
        print("Server Process PID = %s" % self.server.pid)
        while self.server.is_alive():
            self.server.terminate()
            print("Server should have shut down")
            time.sleep(1)
            print("Server is_alive: %s" % self.server.is_alive())
        self.server.join()
        print("server process joined")

if __name__ == "__main__":
    server = DjangoServer()
    server.start()
    time.sleep(3)
    server.stop()
When I run this code, everything works as expected. This is what is printed to the console:
Now serving on port 8000...
Server Process PID = 1406
shutdown initiated
Server Process PID = 1406
Server should have shut down
Server is_alive: False
server process joined
The next step was to provide a way to stop the server from within the Kivy application. For that I just wanted to use my DjangoServer class as before:
from internal_django import DjangoServer

class StartScreen(Screen):
    def start_server(self):
        server = DjangoServer()
        server.start()

class StopScreen(Screen):
    def stop_server(self):
        server = DjangoServer()
        server.stop()
But when I do this, the process, once started, never quits. My first idea was that the Singleton did not work as expected and that I was trying to quit the wrong process, but as you can see in the output, the PIDs are identical. The server receives the terminate command but just continues to work. This is what the console looks like:
Now serving on port 8000...
Server Process PID = 1406
shutdown initiated
Server Process PID = 1406
Server should have shut down
Server should have shut down
Server should have shut down
Server should have shut down
Server should have shut down
Server should have shut down
Server should have shut down
Server should have shut down
(and so on, until I manually kill the server process)
Am I using multiprocessing in a completely wrong way? Is Kivy somehow interfering with the process?
I think there might be two problems here:
A signal handler is intercepting the TERM request sent by Process.terminate() and ignoring it. To verify that, simply call signal.getsignal(signal.SIGTERM) from within the new process and print the result (a minimal check is sketched after these two points). To circumvent the issue you can restore the default behavior with signal.signal(signal.SIGTERM, signal.SIG_DFL); nevertheless, keep in mind that there might be a reason why SIGTERM is silenced by the frameworks (I'm not familiar with either Django or Kivy).
If you're using Python 2, you must consider that the interpreter does not process signals while it's blocked on a synchronization primitive from the threading library (Locks, Semaphores...) or on a native C call. The serve_forever() function might fall into these cases (use the force of the source!). A quick check would be to run the code on Python 3 and see whether it works there.
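For the first point, a minimal check inside the child process might look like this (worker is a hypothetical process target, not code from the question):
import signal

def worker():
    # Print the current SIGTERM disposition; signal.SIG_DFL means the
    # default action (terminate the process) is still in place.
    print(signal.getsignal(signal.SIGTERM))
    # Restore the default behavior in case a framework overrode it.
    signal.signal(signal.SIGTERM, signal.SIG_DFL)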
A quick and dirty solution is to wait a small amount of time and send a SIGKILL if the process is still alive:
import os
import signal

process.terminate()
process.join(1)

if process.is_alive() and os.name != 'nt':
    try:
        os.kill(process.pid, signal.SIGKILL)
        process.join()
    except OSError:
        return  # process might have died while checking it
On Windows you cannot kill a process in such a simple way, which is why I test os.name.
It's a pretty raw approach, so I'd rather recommend finding the root cause of the issue.
What happens if you call terminate(), then join(), and skip the while loop? I also shuffled the code a little and factored the server creation into _create_server(). Please let me know if this works for you.
class DjangoServer():
    __metaclass__ = Singleton

    def _create_server(self):
        httpd = make_server('', 8000, django_wsgi_application())
        print("Now serving on port {}...".format(httpd.server_port))
        httpd.serve_forever()

    def start(self):
        self.server = multiprocessing.Process(target=self._create_server)
        self.server.start()
        print("Server Process PID = %s" % self.server.pid)

    def stop(self):
        print("shutdown initiated")
        print("Server Process PID = %s" % self.server.pid)
        self.server.terminate()
        self.server.join()
        print("server process terminated")
I have some integration-testing code that spawns an HTTP server in a separate process to make calls against. This server could potentially get polluted by activity, so I'd like the ability to start and stop new instances of it on demand.
Unfortunately, this isn't working. I'm running into a situation where the port my server was running on is still locked after my process exits (meaning if I run the test twice in quick succession, it fails the second time because the port is locked).
I've tried using atexit.register to bind the shutdown method, and that isn't working either.
Here's the code for the server:
from BaseHTTPServer import BaseHTTPRequestHandler
import SocketServer
import atexit

class RestHTTPRequestHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == '/sanitycheck':
            self.send_response(200)
            self.send_header('Content-type', 'application/json')
            self.end_headers()
            self.wfile.write("{ 'text': 'You are sane.' }")
        else:
            self.wfile.write(self.path)

def kill_server(httpd):
    open("/tmp/log", "w").write("KILLING")
    httpd.shutdown()

def start_simple_server(port):
    httpd = SocketServer.TCPServer(("", port), RestHTTPRequestHandler)
    atexit.register(kill_server, httpd)
    httpd.serve_forever()
    return httpd
Nothing ever gets written to /tmp/log, which makes me think the atexit handler isn't getting called.
Here's how I instantiate the server:
p = Process(target=start_simple_server, args=(port,))
p.start()
And then when I'm done, to terminate it, I just call:
p.terminate()
which does kill the process and should (to my understanding) trigger the atexit call -- but it doesn't :(
Any thoughts?
Python's atexit handlers don't run when you terminate a process: terminate() kills it with a signal, so the interpreter never gets to exit normally.
>>> import atexit
>>> def hook():
...     print "hook ran"
...
>>> atexit.register(hook)
<function hook at 0x100414aa0>
>>>
# in another terminal: kill <python process id>
>>> Terminated
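A common workaround (my sketch, not from the answer above) is to install a SIGTERM handler that calls sys.exit(); the interpreter then unwinds normally and the atexit hooks do run:
import atexit
import signal
import sys

def hook():
    print("hook ran")

atexit.register(hook)

# Convert SIGTERM into a normal interpreter exit so atexit hooks run.
signal.signal(signal.SIGTERM, lambda signum, frame: sys.exit(0))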
I wound up taking a slightly different approach, inspired by some code from David Beazley... server code:
from BaseHTTPServer import BaseHTTPRequestHandler
import SocketServer
import multiprocessing

class StoppableHTTPServerProcess(multiprocessing.Process):
    def __init__(self, address, handler):
        multiprocessing.Process.__init__(self)
        self.exit = multiprocessing.Event()
        self.server = StoppableHTTPServer(address, handler)

    def run(self):
        # Serve until the exit event is set; each handle_request() call
        # returns after at most `timeout` seconds, so the flag is re-checked.
        while not self.exit.is_set():
            self.server.handle_request()

    def shutdown(self):
        self.exit.set()

class RestHTTPRequestHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.wfile.write(self.path)

class StoppableHTTPServer(SocketServer.TCPServer):
    allow_reuse_address = True  # lets a new instance rebind the port immediately
    timeout = 0.5               # upper bound on how long handle_request() blocks

def start_simple_server(port):
    process = StoppableHTTPServerProcess(("", port), RestHTTPRequestHandler)
    return process
Calling code:
p = start_simple_server(port)
p.start()
And when I'm done...
p.shutdown()
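Since shutdown() only sets the event, the child's loop exits once its current handle_request() call returns (at most the 0.5-second timeout later). If the caller needs to block until the process is really gone, a join() after the shutdown would do it (my addition, not in the original):
p.shutdown()
p.join()  # block until run() notices the event and returns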
I have the following program:
import socket
import sys
import threading
import signal

class serve(threading.Thread):
    def __init__(self):
        super(serve, self).__init__()
        self.s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self.host = ''
        self.port = int(sys.argv[1])

    def run(self):
        self.s.bind((self.host, self.port))
        self.s.listen(1)
        conn, addr = self.s.accept()
        # Call blocks in the following recv
        data = conn.recv(1000000)
        conn.close()
        self.s.close()

def handler(signum, frame):
    print "I am the handler: "

signal.signal(signal.SIGHUP, handler)

background = serve()
background.start()
background.join()
There is a client program that connects to this server but does not send any data. The problem is that when a SIGHUP is sent, an "Interrupted system call" exception is thrown. Any idea why? It happens on Python 2.6+ on FreeBSD. I suspect it is related to http://bugs.python.org/issue1975.
If a system call is executing when a signal arrives, the system call is interrupted. I believe this is partly to prevent escalation attacks and partly to keep the process in a consistent state when the signal handler is invoked. It also lets you wake up a process that's hung on a system call.
To instead restart system calls after a signal is handled, use signal.siginterrupt after you set the signal handler:
signal.signal(signal.SIGHUP, handler)
# False = restart interrupted system calls instead of raising EINTR
signal.siginterrupt(signal.SIGHUP, False)
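Alternatively, if you can't rely on siginterrupt, you can retry the call yourself when it's interrupted; in Python 2 the interruption surfaces as socket.error with errno EINTR. A sketch:
import errno
import socket

def recv_retry(conn, nbytes):
    # recv() that restarts itself after being interrupted by a signal.
    while True:
        try:
            return conn.recv(nbytes)
        except socket.error as e:
            if e.errno != errno.EINTR:
                raise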
I am running my HTTPServer in a separate thread (using the threading module, which has no way to stop threads...) and want to stop serving requests when the main thread also shuts down.
The Python documentation states that BaseHTTPServer.HTTPServer is a subclass of SocketServer.TCPServer, which supports a shutdown method, but it is missing from HTTPServer.
The whole BaseHTTPServer module has very little documentation :(
Another way to do it, based on http://docs.python.org/2/library/basehttpserver.html#more-examples: instead of serve_forever(), keep serving as long as a condition is met, with the server checking the condition before and after each request. For example:
import CGIHTTPServer
import BaseHTTPServer

KEEP_RUNNING = True

def keep_running():
    return KEEP_RUNNING

class Handler(CGIHTTPServer.CGIHTTPRequestHandler):
    cgi_directories = ["/cgi-bin"]

httpd = BaseHTTPServer.HTTPServer(("", 8000), Handler)

while keep_running():
    httpd.handle_request()
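Something (another thread or a request handler) then has to flip KEEP_RUNNING to stop the loop; note the flag is only re-checked after the current handle_request() returns, so a final dummy request may be needed to nudge it. A sketch (my addition):
import threading
import urllib2

def stop_serving():
    global KEEP_RUNNING
    KEEP_RUNNING = False
    # Nudge the server so the blocking handle_request() returns
    # and the loop re-checks the flag.
    try:
        urllib2.urlopen("http://127.0.0.1:8000/").read()
    except Exception:
        pass

# e.g. stop after 30 seconds:
threading.Timer(30, stop_serving).start()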
I should start by saying "I probably wouldn't do this myself, but I have in the past". The serve_forever method (from SocketServer.py) looks like this:
def serve_forever(self):
    """Handle one request at a time until doomsday."""
    while 1:
        self.handle_request()
You could replace (in a subclass) while 1 with while self.should_be_running, and modify that value from a different thread. Something like:
def stop_serving_forever(self):
    """Stop handling requests"""
    self.should_be_running = 0
    # Make a fake request to the server, to really force it to stop.
    # Otherwise it will just stop on the next request.
    # (Exercise for the reader.)
    self.make_a_fake_request_to_myself()
Edit: I dug up the actual code I used at the time:
import SimpleXMLRPCServer
import xmlrpclib

class StoppableRPCServer(SimpleXMLRPCServer.SimpleXMLRPCServer):
    stopped = False
    allow_reuse_address = True

    def __init__(self, *args, **kw):
        SimpleXMLRPCServer.SimpleXMLRPCServer.__init__(self, *args, **kw)
        self.register_function(lambda: 'OK', 'ping')

    def serve_forever(self):
        while not self.stopped:
            self.handle_request()

    def force_stop(self):
        self.server_close()
        self.stopped = True
        self.create_dummy_request()

    def create_dummy_request(self):
        server = xmlrpclib.Server('http://%s:%s' % self.server_address)
        server.ping()
The event loop ends on SIGTERM, Ctrl+C, or when shutdown() is called.
server_close() must be called after serve_forever() to close the listening socket.
import http.server

class StoppableHTTPServer(http.server.HTTPServer):
    def run(self):
        try:
            self.serve_forever()
        except KeyboardInterrupt:
            pass
        finally:
            # Clean up the server (close the listening socket, etc.)
            self.server_close()
Simple server stoppable with user action (SIGTERM, Ctrl+C, ...):
server = StoppableHTTPServer(("127.0.0.1", 8080),
                             http.server.BaseHTTPRequestHandler)
server.run()
Server running in a thread:
import threading

server = StoppableHTTPServer(("127.0.0.1", 8080),
                             http.server.BaseHTTPRequestHandler)

# Start processing requests
thread = threading.Thread(None, server.run)
thread.start()

# ... do things ...

# Shutdown server
server.shutdown()
thread.join()
In my Python 2.6 installation, I can call it on the underlying TCPServer - it's still there inside your HTTPServer:
TCPServer.shutdown
>>> import BaseHTTPServer
>>> h=BaseHTTPServer.HTTPServer(('',5555), BaseHTTPServer.BaseHTTPRequestHandler)
>>> h.shutdown
<bound method HTTPServer.shutdown of <BaseHTTPServer.HTTPServer instance at 0x0100D800>>
>>>
I think you can use [serverName].socket.close()
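For example (my sketch, not from the original answer; note this is abrupt - closing the listening socket out from under a blocked handle_request() or serve_forever() typically makes the serving thread die with a socket error rather than exit cleanly):
import BaseHTTPServer

httpd = BaseHTTPServer.HTTPServer(('', 5555),
                                  BaseHTTPServer.BaseHTTPRequestHandler)
# ... later, e.g. from another thread:
httpd.socket.close()  # the blocking accept() fails and serving stops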
In Python 2.7, calling shutdown() works, but only if you are serving via serve_forever, because it uses a select-based polling loop. Running your own loop with handle_request() ironically excludes this functionality, because it implies a dumb blocking call.
From SocketServer.py's BaseServer:
def serve_forever(self, poll_interval=0.5):
    """Handle one request at a time until shutdown.

    Polls for shutdown every poll_interval seconds. Ignores
    self.timeout. If you need to do periodic tasks, do them in
    another thread.
    """
    self.__is_shut_down.clear()
    try:
        while not self.__shutdown_request:
            # XXX: Consider using another file descriptor or
            # connecting to the socket to wake this up instead of
            # polling. Polling reduces our responsiveness to a
            # shutdown request and wastes cpu at all other times.
            r, w, e = select.select([self], [], [], poll_interval)
            if self in r:
                self._handle_request_noblock()
    finally:
        self.__shutdown_request = False
        self.__is_shut_down.set()
Here's part of my code for doing a blocking shutdown from another thread, using an event to wait for completion:
class MockWebServerFixture(object):
    def start_webserver(self):
        """
        start the web server on a new thread
        """
        self._webserver_died = threading.Event()
        self._webserver_thread = threading.Thread(
            target=self._run_webserver_thread)
        self._webserver_thread.start()

    def _run_webserver_thread(self):
        self.webserver.serve_forever()
        self._webserver_died.set()

    def _kill_webserver(self):
        if not self._webserver_thread:
            return

        self.webserver.shutdown()

        # wait for thread to die for a bit, then give up raising an exception.
        if not self._webserver_died.wait(5):
            raise ValueError("couldn't kill webserver")
This is a simplified version of Helgi's answer for Python 3.7:
import threading
import time
from http.server import ThreadingHTTPServer, SimpleHTTPRequestHandler

class MyServer(threading.Thread):
    def run(self):
        self.server = ThreadingHTTPServer(('localhost', 8000), SimpleHTTPRequestHandler)
        self.server.serve_forever()

    def stop(self):
        self.server.shutdown()

if __name__ == '__main__':
    s = MyServer()
    s.start()
    print('thread alive:', s.is_alive())  # True
    time.sleep(2)
    s.stop()
    print('thread alive:', s.is_alive())  # False
This is a method I have used successfully (Python 3) to stop the server from the web application itself (a web page):
import http.server
import os

class PatientHTTPRequestHandler(http.server.SimpleHTTPRequestHandler):
    stop_server = False
    base_directory = "/static/"
    # A file to use as a "server stopped" user-information page.
    stop_command = "/control/stop.html"

    def send_head(self):
        self.path = os.path.normpath(self.path)
        if self.path == PatientHTTPRequestHandler.stop_command and self.address_string() == "127.0.0.1":
            # I wanted only the local machine to be able to stop the server.
            PatientHTTPRequestHandler.stop_server = True
            # Allow the stop page to be displayed.
            return http.server.SimpleHTTPRequestHandler.send_head(self)
        if self.path.startswith(PatientHTTPRequestHandler.base_directory):
            return http.server.SimpleHTTPRequestHandler.send_head(self)
        else:
            return self.send_error(404, "Not allowed", "The path you requested is forbidden.")

if __name__ == "__main__":
    httpd = http.server.HTTPServer(("127.0.0.1", 8080), PatientHTTPRequestHandler)
    # A timeout is needed for the server to check periodically for KeyboardInterrupt
    httpd.timeout = 1
    while not PatientHTTPRequestHandler.stop_server:
        httpd.handle_request()
This way, pages served under the base address http://localhost:8080/static/ (for example http://localhost:8080/static/styles/common.css) are served by the default handler, an access to http://localhost:8080/control/stop.html from the server's own machine displays stop.html and then stops the server, and any other path is forbidden.
I tried all of the above possible solutions and ended up with a "sometimes" issue - somehow it did not really do it - so I ended up making a dirty solution that has worked every time for me:
If all of the above fails, brute-force kill your thread with something like this:
import subprocess
cmdkill = "kill $(ps aux|grep '<name of your thread> true'|grep -v 'grep'|awk '{print $2}') 2> /dev/null"
subprocess.Popen(cmdkill, stdout=subprocess.PIPE, shell=True)
import http.server
import socketserver
import socket as sck
import os
import threading

class myserver:
    def __init__(self, PORT, LOCATION):
        self.thrd = threading.Thread(None, self.run)
        self.Directory = LOCATION
        self.Port = PORT
        hostname = sck.gethostname()
        ip_address = sck.gethostbyname(hostname)
        self.url = 'http://' + ip_address + ':' + str(self.Port)
        Handler = http.server.SimpleHTTPRequestHandler
        self.httpd = socketserver.TCPServer(("", PORT), Handler)
        print('Object created, use the start() method to launch the server')

    def run(self):
        print('listening on: ' + self.url)
        os.chdir(self.Directory)
        print('myserver object started')
        print("Use the object's stop() method to stop the server")
        self.httpd.serve_forever()
        print('Quit handling')
        print('Server stopped')
        print('Port ' + str(self.Port) + ' should be available again.')

    def stop(self):
        print('Stopping server')
        self.httpd.shutdown()
        self.httpd.server_close()
        print('Need just one more request before shutting down')

    def start(self):
        self.thrd.start()

def help():
    helpmsg = '''Create a new server object by initialising
NewServer = webserver3.myserver(Port_number, Directory_String)
Then start it using NewServer.start()
Stop it using NewServer.stop()'''
    print(helpmsg)
I'm not an experienced Python programmer, I just wanted to share my comprehensive solution, mostly based on snippets from here and there. I usually import this script in my console; it allows me to set up multiple servers for different locations, each on its own port, sharing my content with other devices on the network.
Here's a context-manager version for Python 3.7+, which I prefer because it cleans up automatically and you can specify the directory to serve:
from contextlib import contextmanager
from functools import partial
from http.server import SimpleHTTPRequestHandler, ThreadingHTTPServer
from threading import Thread

@contextmanager
def http_server(host: str, port: int, directory: str):
    server = ThreadingHTTPServer(
        (host, port), partial(SimpleHTTPRequestHandler, directory=directory)
    )
    server_thread = Thread(target=server.serve_forever, name="http_server")
    server_thread.start()

    try:
        yield
    finally:
        server.shutdown()
        server_thread.join()

def usage_example():
    import time

    with http_server("127.0.0.1", 8087, "."):
        # now you can use the web server
        time.sleep(100)