I use this Python HTTP web server script:
class PiFaceWebHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        [....]

if __name__ == "__main__":
    # get the port
    if len(sys.argv) > 1:
        port = int(sys.argv[1])
    else:
        port = DEFAULT_PORT
    # set up PiFace Digital
    PiFaceWebHandler.pifacedigital = pifacedigitalio.PiFaceDigital()
    print("Starting simple PiFace web control at:\n\n"
          "\thttp://{addr}:{port}\n\n"
          "Change the output_port with:\n\n"
          "\thttp://{addr}:{port}?output_port=0xAA\n"
          .format(addr=get_my_ip(), port=port))
    # run the server
    server_address = ('', port)
    try:
        httpd = http.server.HTTPServer(server_address, PiFaceWebHandler)
        httpd.serve_forever()
    except KeyboardInterrupt:
        print('^C received, shutting down server')
        httpd.socket.close()
It's working fine, but I want the script (or another one) to also check some I/O continuously, in a while loop for example.
Sometimes this I/O should also be able to change state on an HTTP request.
Currently the I/O changes state on an HTTP request, but I can't figure out how to change it on an external trigger (another input, for example).
How can I do this? Where should I put the polling loop?
Do I make myself clear?
Thanks,
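One possible approach (a minimal sketch, not an excerpt from the original script): run the HTTP server in a background thread so the main thread is free to poll the inputs in a loop. poll_inputs() below is a hypothetical placeholder for whatever check you need; PiFaceWebHandler, pifacedigitalio, http.server and DEFAULT_PORT are reused from the script above.

import threading
import time

def poll_inputs(pifacedigital):
    # Hypothetical placeholder: read an input here and change outputs as needed,
    # e.g. react to an external trigger on one of the PiFace inputs.
    pass

if __name__ == "__main__":
    PiFaceWebHandler.pifacedigital = pifacedigitalio.PiFaceDigital()

    httpd = http.server.HTTPServer(('', DEFAULT_PORT), PiFaceWebHandler)
    server_thread = threading.Thread(target=httpd.serve_forever, daemon=True)
    server_thread.start()

    try:
        while True:                              # the polling loop lives in the main thread
            poll_inputs(PiFaceWebHandler.pifacedigital)
            time.sleep(0.1)                      # avoid busy-waiting
    except KeyboardInterrupt:
        httpd.shutdown()                         # stop serve_forever() cleanly
        httpd.server_close()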
Related
I am working on a "simple" server using a threaded SocketServer in Python 3.
I am going through a lot of trouble implementing shutdown for this. I found the code below on the internet; shutdown works initially, but it stops working after I send a few commands from the client via telnet. Some investigation tells me it hangs in threading._shutdown ... threading._wait_for_tstate_lock, but so far this does not ring a bell.
My research tells me that there are ~42 different solutions, frameworks, etc. on how to do this in different Python versions. So far I could not find a working approach for Python 3. For example, I love telnetsrv
(https://pypi.python.org/pypi/telnetsrv/0.4) for Python 2.7 (it uses greenlets from gevent), but it does not work on Python 3. So if there is a more Pythonic, standard-library approach, or anything that works reliably, I would love to hear about it!
My bet is currently on socketserver, but I could not yet figure out how to deal with the hanging server. I removed all the log statements and most of the functionality so I can post this minimal server, which exposes the issue:
# -*- coding: utf-8 -*-
import socketserver
import threading

SERVER = None

def shutdown_cmd(request):
    global SERVER
    request.send(bytes('server shutdown requested\n', 'utf-8'))
    request.close()
    SERVER.shutdown()
    print('after shutdown!!')
    #SERVER.server_close()

class service(socketserver.BaseRequestHandler):
    def handle(self):
        while True:
            try:
                msg = str(self.request.recv(1024).strip(), 'utf-8')
                if msg == 'shutdown':
                    shutdown_cmd(self.request)
                else:
                    self.request.send(bytes("You said '{}'\n".format(msg), "utf-8"))
            except Exception as e:
                pass

class ThreadedTCPServer(socketserver.ThreadingMixIn, socketserver.TCPServer):
    pass

def run():
    global SERVER
    SERVER = ThreadedTCPServer(('', 1520), service)
    server_thread = threading.Thread(target=SERVER.serve_forever)
    server_thread.daemon = True
    server_thread.start()

    input("Press enter to shutdown")
    SERVER.shutdown()

if __name__ == '__main__':
    run()
It would be great to be able to stop the server from the handler, too (see shutdown_cmd).
shutdown() works as expected: the server stops accepting new connections, but Python is still waiting for the live handler threads to terminate.
By default, socketserver.ThreadingMixIn creates a new thread to handle each incoming connection, and by default those are non-daemon threads, so Python waits for all live non-daemon threads to terminate.
Of course, you could make the server spawn daemon threads; then Python will not wait:
The ThreadingMixIn class defines an attribute daemon_threads, which indicates whether or not the server should wait for thread termination. You should set the flag explicitly if you would like threads to behave autonomously; the default is False, meaning that Python will not exit until all threads created by ThreadingMixIn have exited.
class ThreadedTCPServer(socketserver.ThreadingMixIn, socketserver.TCPServer):
    daemon_threads = True
But that is not the ideal solution; you should check why the threads never terminate. Usually, a handler should stop processing a connection when no new data is available or when the client shuts the connection down:
import socketserver
import threading

shutdown_evt = threading.Event()

class service(socketserver.BaseRequestHandler):
    def handle(self):
        self.request.setblocking(False)
        while True:
            try:
                msg = self.request.recv(1024)
                if msg == b'shutdown':
                    shutdown_evt.set()
                    break
                elif msg:
                    self.request.send(b'you said: ' + msg)
                else:
                    break          # empty read: the client closed the connection
            except BlockingIOError:
                pass               # no data available yet on the non-blocking socket
            except Exception:
                break
            if shutdown_evt.wait(0.1):
                break              # shutdown was requested elsewhere

class ThreadedTCPServer(socketserver.ThreadingMixIn, socketserver.TCPServer):
    pass

def run():
    SERVER = ThreadedTCPServer(('127.0.0.1', 10000), service)
    server_thread = threading.Thread(target=SERVER.serve_forever)
    server_thread.daemon = True
    server_thread.start()

    input("Press enter to shutdown")
    shutdown_evt.set()
    SERVER.shutdown()

if __name__ == '__main__':
    run()
I tried two solutions to implement a TCP server that runs under Python 3 on both Linux and Windows (I tried Windows 7):
using socketserver (my question) - shutdown is not working
using asyncio (posted an answer for that) - does not work on Windows
Both solutions were based on search results from the web. In the end I had to give up on the idea of finding a proven solution, because I could not find one. Consequently I implemented my own solution (based on gevent). I post it here because I hope it will help others avoid struggling the way I did.
# -*- coding: utf-8 -*-
from gevent.server import StreamServer
from gevent.pool import Pool

class EchoServer(StreamServer):
    def __init__(self, listener, handle=None, spawn='default'):
        StreamServer.__init__(self, listener, handle=handle, spawn=spawn)

    def handle(self, socket, address):
        print('New connection from %s:%s' % address[:2])
        socket.sendall(b'Welcome to the echo server! Type quit to exit.\r\n')
        # using a makefile because we want to use readline()
        rfileobj = socket.makefile(mode='rb')
        while True:
            line = rfileobj.readline()
            if not line:
                print("client disconnected")
                break
            if line.strip().lower() == b'quit':
                print("client quit")
                break
            if line.strip().lower() == b'shutdown':
                print("client initiated server shutdown")
                self.stop()
                break
            socket.sendall(line)
            print("echoed %r" % line.decode().strip())
        rfileobj.close()

srv = EchoServer(('', 1520), spawn=Pool(20))
srv.serve_forever()
After more research I found a sample that works using asyncio:
# -*- coding: utf-8 -*-
import asyncio

# after further research I found this relevant europython talk:
# https://www.youtube.com/watch?v=pi49aiLBas8
# * protocols and transport are useful if you do not have tons of socket based code
# * event loop pushes data in
# * transport used to push data back to the client
# found decent sample in book by wrox "professional python"

class ServerProtocol(asyncio.Protocol):
    def connection_made(self, transport):
        self.transport = transport
        self.write('Welcome')

    def connection_lost(self, exc):
        self.transport = None

    def data_received(self, data):
        if not data:
            return
        message = data.decode('ascii')
        command = message.strip().split(' ')[0].lower()
        args = message.strip().split(' ')[1:]

        # sanity check
        if not hasattr(self, 'command_%s' % command):
            self.write('Invalid command: %s' % command)
            return

        # run command
        try:
            return getattr(self, 'command_%s' % command)(*args)
        except Exception as ex:
            self.write('Error: %s' % str(ex))

    def write(self, msg):
        self.transport.write((msg + '\n').encode('ascii', 'ignore'))

    def command_shutdown(self):
        self.write('Okay. shutting down')
        raise KeyboardInterrupt

    def command_bye(self):
        self.write('bye then!')
        self.transport.close()
        self.transport = None

if __name__ == '__main__':
    loop = asyncio.get_event_loop()
    coro = loop.create_server(ServerProtocol, '127.0.0.1', 8023)
    # asyncio.async() clashes with the async keyword in Python 3.7+;
    # asyncio.ensure_future() is the equivalent call
    asyncio.ensure_future(coro)
    try:
        loop.run_forever()
    except KeyboardInterrupt:
        pass
I understand that this is the most useful way to do this kind of network programming. If necessary, the performance could be improved by running the same code with uvloop (https://magic.io/blog/uvloop-blazing-fast-python-networking/).
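Switching to uvloop is then mostly a matter of installing it and selecting its event loop policy before the loop is created, roughly like this (a sketch, assuming uvloop is installed):

import asyncio
import uvloop

# Use uvloop's event loop implementation instead of the default one;
# the asyncio server code above stays unchanged.
asyncio.set_event_loop_policy(uvloop.EventLoopPolicy())
loop = asyncio.get_event_loop()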
Another way to shut down the server is to create a process/thread for the serve_forever call.
After serve_forever has started, simply wait for a custom flag to trigger, call server_close on the server, and terminate the process.
from multiprocessing import Process

# StreamingServer, StreamingHandler and FLAG_KEEP_ALIVE come from the surrounding
# code (FLAG_KEEP_ALIVE e.g. a multiprocessing.Value shared with other processes).
streaming_server = StreamingServer(('', 8000), StreamingHandler)
FLAG_KEEP_ALIVE.value = True
process_serve_forever = Process(target=streaming_server.serve_forever)
process_serve_forever.start()

while FLAG_KEEP_ALIVE.value:
    pass

streaming_server.server_close()
process_serve_forever.terminate()
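For reference, here is a self-contained sketch of the same pattern; it assumes a plain http.server.HTTPServer in place of the StreamingServer above, a multiprocessing.Value as the keep-alive flag, and the default fork start method (Linux), since StreamingServer and FLAG_KEEP_ALIVE are defined outside the posted snippet.

from http.server import HTTPServer, BaseHTTPRequestHandler
from multiprocessing import Process, Value
import time

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"hello\n")

if __name__ == '__main__':
    keep_alive = Value('b', 1)                 # shared flag; clear it to stop the server
    server = HTTPServer(('', 8000), Handler)
    proc = Process(target=server.serve_forever)
    proc.start()
    try:
        while keep_alive.value:                # wait until something clears the flag
            time.sleep(0.1)
    except KeyboardInterrupt:
        pass
    server.server_close()                      # release the listening socket in the parent
    proc.terminate()                           # stop the child running serve_forever
    proc.join()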
To create a Python syslog server for my network devices, I am using the code below, which comes from https://gist.githubusercontent.com/marcelom/4218010/raw/53b643bd056d03ffc21abcfe2e1b9f6a7de005f0/pysyslog.py
This would meet my needs, but I cannot seem to get any Python version of this syslog handler to run. I see this code is about five years old.
I am running an Ubuntu 16.04 system. Everything seems to hang on the try: block that starts the server.
#!/usr/bin/env python
## Tiny Syslog Server in Python.
##
## This is a tiny syslog server that is able to receive UDP based syslog
## entries on a specified port and save them to a file.
## That's it... it does nothing else...
## There are a few configuration parameters.

LOG_FILE = 'youlogfile.log'
HOST, PORT = "0.0.0.0", 514

#
# NO USER SERVICEABLE PARTS BELOW HERE...
#

import logging
import SocketServer

logging.basicConfig(level=logging.INFO, format='%(message)s', datefmt='', filename=LOG_FILE, filemode='a')

class SyslogUDPHandler(SocketServer.BaseRequestHandler):
    def handle(self):
        data = bytes.decode(self.request[0].strip())
        socket = self.request[1]
        print( "%s : " % self.client_address[0], str(data))
        logging.info(str(data))

if __name__ == "__main__":
    try:
        server = SocketServer.UDPServer((HOST,PORT), SyslogUDPHandler)
        server.serve_forever(poll_interval=0.5)
    except (IOError, SystemExit):
        raise
    except KeyboardInterrupt:
        print ("Crtl+C Pressed. Shutting down.")
Your code works for me. If I start the server like this:
sudo python server.py
And then send a message like this:
echo this is a test | nc -u localhost 514
I see output on stdout:
('127.0.0.1 : ', 'this is a test')
And the file youlogfile.log contains:
this is a test
I suspect your problems stem from trying to use a TCP tool to connect to a UDP server.
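If you want a quick test client without netcat, a few lines of Python can send the same kind of UDP message (a small sketch, not part of the original gist):

import socket

# Send a single syslog-style test message over UDP to the server above.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(b'<14>this is a test', ('127.0.0.1', 514))
sock.close()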
I have a small Python web server script for hosting my own website, complete with request handling and error returning. The script worked perfectly on my PC, but when I tried it on my Raspberry Pi, it would not restart every 3 minutes (the server would crash after 15 minutes, so restarting every 3 minutes seemed like a good idea).
So I rewrote my server script so that it checks things like whether it is booting up for the first time or restarting. I'll just show you the code.
#Handler class above here
...
...

class Server:
    global server_class, server_adress, httpd
    server_class = HTTPServer
    server_adress = ('localhost', 8080)
    httpd = server_class(server_adress, Handler)

    def __init__(self):
        self.status = False
        self.process()

    def process(self):
        print(self.status)
        process = threading.Timer(10, self.process)
        process.start()
        if self.status == True:
            httpd.socket.close()
            self.main()
        if self.status == False:
            self.main()

    def main(self):
        try:
            if self.status == False:
                print("Server online!")
                self.status = True
                httpd.serve_forever()
            if self.status == True:
                print("Server restarted!")
                httpd.serve_forever()
        except KeyboardInterrupt:
            print("Server shutting down...")
            httpd.socket.close()

if __name__ == "__main__":
    instance = Server()
After ten seconds of running (and it works, I can access my website at http://localhost:8080/index.html), it keeps giving the following error every ten seconds:
File "C:\Users\myname\Dropbox\Python\Webserver\html\server.py", line 187, in main httpd.serve_forever()
File "C:\Python33\lib\socketserver.py", line 237, in serve_forever poll_interval)
File "C:\Python33\lib\socketserver.py", line 155, in _eintr_retry return func(*args)
ValueError: file descriptor cannot be a negative integer (-1)
Basically, how do I fix this? I could just use a simple function with a threading timer to restart the function that runs the server, but somehow that doesn't work on my Raspberry Pi, even though it does on Windows.
EDIT:
I should also note that the first time I start the script I can access the website and it is fast. After 10 seconds (after the server restarts), I can still access it but it is very slow. After another 10 seconds I can no longer access my website.
The problem happens because you access the underlying socket of the server directly. Closing the socket is effectively like unplugging your network connection: the actual server sitting on top of the socket remains unaware that the socket was closed and tries to keep serving. Since the socket was closed, there is no longer a valid file descriptor available (this is the error you get).
So instead of cutting the server off from its connection, you should tell the server to shut down gracefully. This allows it to finish any ongoing connections and safely release whatever it might be doing in the background. You can do that with the shutdown method; calling it internally tells the server to stop the next time the loop inside serve_forever comes around.
If I remember correctly, serve_forever is a blocking method, meaning it does not return while the server is running. So the simplest way to make a server restart itself would be a single main thread doing this:
while True:
    httpd.serve_forever()
So whenever the server stops, for whatever reason, it immediately starts again. Of course, here you would add some status variable (instead of True) that lets you actually turn the server off. For example, in the body of a KeyboardInterrupt handler, you would first set that variable to False and then shut down the server using httpd.shutdown().
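Put together, that idea could look roughly like this (a sketch, not your exact script; Handler is the request handler class from your code above):

import threading
from http.server import HTTPServer

running = True
httpd = HTTPServer(('localhost', 8080), Handler)

def serve_loop():
    # Every time serve_forever() returns (because shutdown() was called),
    # start serving again, unless we were asked to stop for good.
    while running:
        httpd.serve_forever()

thread = threading.Thread(target=serve_loop)
thread.start()

try:
    while thread.is_alive():
        thread.join(0.5)          # short timeouts keep Ctrl-C responsive
except KeyboardInterrupt:
    running = False               # prevent the loop from restarting the server
    httpd.shutdown()              # make serve_forever() return
    thread.join()
    httpd.server_close()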
Basically, my idea was to write some sort of basic server where I could connect to my computer and then run a command remotely. That didn't seem to be much of a problem, but then I had the bright idea that the next logical step would be to add some sort of threading so I could handle multiple connections.
I read that, because of the GIL, multiprocessing.Process would be the best way to do this. I don't completely understand threading and it's hard to find good documentation on it, so I'm kind of just throwing things at it and trying to figure out how it works.
Well, it seems like I might be close to doing this right, but I have a feeling I'm just as likely nowhere near doing it correctly. My program now allows multiple connections, which it didn't when I first started working with threading; but once a connection is established and then another is established, the first connection is no longer able to send a command to the server. I would appreciate it if someone could give me any help, or point me in the right direction on what I need to learn and understand.
Here's my code:
class server:
    def __init__(self):
        self.s = socket.socket()
        try:
            self.s.bind(("",69696))
            self.s.listen(1)
        except socket.error,(value,message):
            if self.s:
                self.s.close()

    def connection(self):
        while True:
            client , address = self.s.accept()
            data = client.recv(5)
            password = 'hello'
            while 1:
                if data == password:
                    subprocess.call('firefox')
                    client.close()
                else:
                    client.send('wrong password')
                    data = client.recv(5)

p = Process(target=x.connection())
p.start()

x = server()

if __name__ == '__main':
    main()
Well, this answer only applies if you're on a Unix or Unix-like operating system (Windows does not have os.fork(), which we use).
One of the most common approaches for doing this on Unix platforms is to fork a new process to handle the client connection while the master process continues to listen for requests.
Below is code for a simple echo server that can handle multiple simultaneous connections. You just need to modify handle_client_connection() to fit your needs.
import socket
import os

class ForkingServer:
    def serve_forever(self):
        self.s = socket.socket()
        try:
            self.s.bind(("", 9000))
            self.s.listen(1)
        except socket.error, (value,message):
            print "error:", message
            if self.s:
                self.s.close()
            return

        while True:
            client,address = self.s.accept()
            pid = os.fork()
            # You should read the documentation for how fork() works if you don't
            # know it already
            # The short version is that at this point in the code, there are 2 processes
            # completely identical to each other which are simultaneously executing
            # The only difference is that the parent process gets the pid of the child
            # returned from fork() and the child process gets a value of 0 returned
            if pid == 0:
                # only the newly spawned process will execute this
                self.handle_client_connection(client, address)
                break
            # In the meantime the parent process will continue on to here
            # thus it will go back to the beginning of the loop and accept a new connection

    def handle_client_connection(self, client,address):
        #simple echo server
        print "Got a connection from:", address
        while True:
            data = client.recv(5)
            if not data:
                # client closed the connection
                break
            client.send(data)
        print "Connection from", address, "closed"

server = ForkingServer()
server.serve_forever()
Using the following example I can get a basic web server running, but my problem is that handle_request() blocks do_something_else() until a request comes in. Is there any way around this so that the web server can also do other background tasks?
def run_while_true(server_class=BaseHTTPServer.HTTPServer,
                   handler_class=BaseHTTPServer.BaseHTTPRequestHandler):
    server_address = ('', 8000)
    httpd = server_class(server_address, handler_class)
    while keep_running():
        httpd.handle_request()
        do_something_else()
You can use multiple threads of execution through the Python threading module. An example is below:
import threading

# ... your code here...

def run_while_true(server_class=BaseHTTPServer.HTTPServer,
                   handler_class=BaseHTTPServer.BaseHTTPRequestHandler):
    server_address = ('', 8000)
    httpd = server_class(server_address, handler_class)
    while keep_running():
        httpd.handle_request()

if __name__ == '__main__':
    background_thread = threading.Thread(target=do_something_else)
    background_thread.start()
    # ... web server start code here...
    background_thread.join()
This will cause a thread which executes do_something_else() to start before your web server. When the server shuts down, the join() call ensures do_something_else finishes before the program exits.
You should have a thread that handles HTTP requests, and a thread that does do_something_else().
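A sketch of that layout (written for Python 3, where BaseHTTPServer has become http.server; do_something_else() is assumed to contain your own background work):

import threading
import time
from http.server import HTTPServer, BaseHTTPRequestHandler

def serve(httpd):
    httpd.serve_forever()             # handles requests until shutdown() is called

def do_something_else():
    pass                              # placeholder for the background task

if __name__ == '__main__':
    httpd = HTTPServer(('', 8000), BaseHTTPRequestHandler)
    threading.Thread(target=serve, args=(httpd,), daemon=True).start()
    threading.Thread(target=do_something_else, daemon=True).start()
    try:
        while True:
            time.sleep(1)             # main thread just waits; Ctrl-C stops everything
    except KeyboardInterrupt:
        httpd.shutdown()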