I have a remotely installed BeagleBone Black that needs to control a measurement device, a pan/tilt head, upload measured data, host a telnet server,...
I'm using Python 2.7.
This is my first programming project, so a lot of questions come up.
Mostly, I'd like to know whether what I'm doing is a reasonable way of handling what I need, and why certain things don't do what I expect.
Certain modules need to work together and share data. Best example is the telnet module, when the telnet user requests the position of the pan/tilt head.
As I understand it, the server is blocking the program, so I use gevent/Greenlets to run it from the "main" script.
Stripped down versions:
teln module
from gevent import monkey; monkey.patch_all()  # patch functions to use gevent
import gevent
import gevent.server
from telnetsrv.green import TelnetHandler, command

__all__ = ["MyTelnetHandler", "start_server"]  # used when module is loaded as "from teln import *"


class MyTelnetHandler(TelnetHandler):
    """Telnet implementation."""

    def writeerror(self, text):
        """Write errors in red, preceded by 'ERROR: '."""
        TelnetHandler.writeerror(self, "\n\x1b[31;5;1mERROR: {}\x1b[0m\n".format(text))

    @command(["exit", "logout", "quit"], hidden=True)
    def dummy(self, params):
        """Disables these commands and gets them out of the "help" listing."""
        pass


def start_server():
    """Server constructor, starts server."""
    server = gevent.server.StreamServer(("", 2323), MyTelnetHandler.streamserver_handle)
    print("server created")
    try:
        server.serve_forever()
    finally:
        server.close()
        print("server finished")


"""Main loop"""
if __name__ == "__main__":
    start_server()
Main script:
#! /usr/bin/env python
# coding: utf-8
from gevent import monkey; monkey.patch_all()  # patch functions to gevent versions
import gevent
from gevent import Greenlet
import teln  # telnet handler
from time import sleep
from sys import exit

"""Main loop"""
if __name__ == "__main__":
    thread_telnet = Greenlet(teln.start_server)
    print("greenlet created")
    thread_telnet.start()
    print("started")
    sleep(10)
    print("done sleeping")
    i = 1
    try:
        while not thread_telnet.ready():
            print("loop running ({:03d})".format(i))
            i += 1
            sleep(1)
    except KeyboardInterrupt:
        print("interrupted")
    thread_telnet.kill()
    print("killed")
    exit()
The final main loop will need to run many more functions.
Questions:
Is this a reasonable way of running processes/functions at the same time?
How do I get a function in the telnet module to call functions from a third module, controlling the head?
How do I make sure that the head isn't being controlled by the telnet module as well as the main script (which runs some kind of schedule)?
In the start_server() function in the teln module, two print statements are executed when the server starts and stops. I do not see either appearing in the terminal. What could be happening?
When I open a telnet session from a remote machine, and then close it, I get the following output (program keeps running):
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/gevent/greenlet.py", line 536, in run
    result = self._run(*self.args, **self.kwargs)
  File "/usr/local/lib/python2.7/dist-packages/telnetsrv/telnetsrvlib.py", line 815, in inputcooker
    c = self._inputcooker_getc()
  File "/usr/local/lib/python2.7/dist-packages/telnetsrv/telnetsrvlib.py", line 776, in _inputcooker_getc
    ret = self.sock.recv(20)
  File "/usr/local/lib/python2.7/dist-packages/gevent/_socket2.py", line 283, in recv
    self._wait(self._read_event)
  File "/usr/local/lib/python2.7/dist-packages/gevent/_socket2.py", line 182, in _wait
    self.hub.wait(watcher)
  File "/usr/local/lib/python2.7/dist-packages/gevent/hub.py", line 651, in wait
    result = waiter.get()
  File "/usr/local/lib/python2.7/dist-packages/gevent/hub.py", line 898, in get
    return self.hub.switch()
  File "/usr/local/lib/python2.7/dist-packages/gevent/hub.py", line 630, in switch
    return RawGreenlet.switch(self)
cancel_wait_ex: [Errno 9] File descriptor was closed in another greenlet
Fri Sep 22 09:31:12 2017 <Greenlet at 0xb6987bc0L: <bound method MyTelnetHandler.inputcooker of <teln.MyTelnetHandler instance at 0xb69a1c38>>> failed with cancel_wait_ex
While trying out different things to get to understand how greenlets work, I have received similar ("cancel_wait_ex: [Errno 9] File descriptor was closed in another greenlet") error messages often.
I have searched around but can't find/understand what is happening and what I am supposed to do.
If something goes wrong while running a greenlet, I do not get the exception that points to the problem (for instance, when I try to print an integer), but a similar error message to the one above. How can I see the "original" raised exception?
Related
The following code, taken from the aiohttp docs (https://docs.aiohttp.org/en/stable/), does work:
from aiohttp import web

async def handle(request):
    name = request.match_info.get('name', "Anonymous")
    text = "Hello, " + name
    return web.Response(text=text)

app = web.Application()
app.add_routes([web.get('/', handle),
                web.get('/{name}', handle)])

if __name__ == '__main__':
    web.run_app(app)
But having the webserver hijack the main thread is not acceptable: the webserver should run on a separate, non-main thread, subservient to the main backend application.
I cannot determine how to run the webapp on a secondary thread. Here is what I have tried:
It is not possible to run the snippet in the IPython REPL. I tried running it with the __main__ guard commented out:

# if __name__ == '__main__':
web.run_app(app)

and am notified that there is no current event loop:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/IPython/core/interactiveshell.py", line 3293, in run_code
    async def run_code(self, code_obj, result=None, *, async_=False):
  File "<ipython-input-8-344f41746659>", line 13, in <module>
    web.run_app(app)
  File "/usr/local/lib/python3.8/site-packages/aiohttp/web.py", line 393, in run_app
    def run_app(app: Union[Application, Awaitable[Application]], *,
  File "/usr/local/Cellar/python@3.8/3.8.1/Frameworks/Python.framework/Versions/3.8/lib/python3.8/asyncio/events.py", line 628, in get_event_loop
    def get_event_loop(self):
RuntimeError: There is no current event loop in thread 'Thread-11'.
So then... can it only be run on the main thread? I'm missing something here.
I tried running in another standalone script but on a subservient thread:
def runWebapp():
    from aiohttp import web

    async def handle(request):
        name = request.match_info.get('name', "Anonymous")
        text = "Hello, " + name
        return web.Response(text=text)

    app = web.Application()
    app.add_routes([web.get('/', handle),
                    web.get('/{name}', handle)])

    web.run_app(app)


if __name__ == '__main__':
    from threading import Thread
    t = Thread(target=runWebapp)
    t.start()
    print("thread started, let's nap..")
    import time
    time.sleep(50)
But that gives basically the same error:
Exception in thread Thread-1:
Traceback (most recent call last):
  File "/usr/local/Cellar/python@3.8/3.8.1/Frameworks/Python.framework/Versions/3.8/lib/python3.8/threading.py", line 932, in _bootstrap_inner
    self.run()
  File "/usr/local/Cellar/python@3.8/3.8.1/Frameworks/Python.framework/Versions/3.8/lib/python3.8/threading.py", line 870, in run
    self._target(*self._args, **self._kwargs)
  File "/git/bluej/experiments/python/aio_thread.py", line 12, in runWebapp
    web.run_app(app)
  File "/usr/local/lib/python3.8/site-packages/aiohttp/web.py", line 409, in run_app
    loop = asyncio.get_event_loop()
  File "/usr/local/Cellar/python@3.8/3.8.1/Frameworks/Python.framework/Versions/3.8/lib/python3.8/asyncio/events.py", line 639, in get_event_loop
    raise RuntimeError('There is no current event loop in thread %r.'
RuntimeError: There is no current event loop in thread 'Thread-1'.
So how do I get this webapp off the main thread and make it play along with the other threads in my application?
Here you go:
import http.server
import threading
import socketserver

PORT = 8000
Handler = http.server.SimpleHTTPRequestHandler

def serve_forever():
    with socketserver.TCPServer(("", PORT), Handler) as httpd:
        httpd.serve_forever()

if __name__ == "__main__":
    threading.Thread(target=serve_forever).start()
    while 1:
        x = input("enter a number")
        print("You entered {}".format(x))
N.B. This is a neat party trick, but not necessarily useful for production work: the documentation for the http.server module says in flaming red letters at the top of the page not to use it in production. Also, almost all Python webserver frameworks operate as WSGI servers and aren't designed to work the way you seem to want them to: they generally expect to be run by something else, like gunicorn or Apache.
If you need an HTTP server for e.g. monitoring a running app, I strongly recommend using asyncio and coroutines for everything instead, but you can roll your own threading scenario as above if you really want to. You can see that you can still interact with the shell in the infinite input loop, while you can also curl localhost:8000 to get an HTML page containing a directory listing.
Just pass in a non-default handler of your own creation to do other things, like return app state in JSON format.
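If you do want the aiohttp app from the question on a secondary thread rather than http.server, here is a minimal sketch (not from the original answer; the port and handler are illustrative). The key point is that the thread must create and set its own event loop, because asyncio only provides one automatically on the main thread:

import asyncio
import threading
from aiohttp import web

async def handle(request):
    return web.Response(text="Hello from the background thread")

def run_webapp():
    # Each non-main thread needs its own event loop.
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
    app = web.Application()
    app.add_routes([web.get('/', handle)])
    runner = web.AppRunner(app)
    loop.run_until_complete(runner.setup())
    site = web.TCPSite(runner, 'localhost', 8080)
    loop.run_until_complete(site.start())
    loop.run_forever()

if __name__ == '__main__':
    threading.Thread(target=run_webapp, daemon=True).start()
    while 1:
        x = input("enter a number")
        print("You entered {}".format(x))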
I'm running scrapy as a AWS lambda function. Inside my function I need to have a timer to see whether it's running longer than 1 minute and if so, I need to run some logic. Here is my code:
def handler():
    x = 60
    watchdog = Watchdog(x)
    try:
        runner = CrawlerRunner()
        runner.crawl(MySpider1)
        runner.crawl(MySpider2)
        d = runner.join()
        d.addBoth(lambda _: reactor.stop())
        reactor.run()
    except Watchdog:
        print('Timeout error: process takes longer than %s seconds.' % x)
        # some other logic here
    watchdog.stop()
I took the Watchdog timer class from this answer. The problem is that the code never hits the except Watchdog block, but rather throws the exception outside:
Exception in thread Thread-1:
Traceback (most recent call last):
  File "/usr/lib/python3.6/threading.py", line 916, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.6/threading.py", line 1182, in run
    self.function(*self.args, **self.kwargs)
  File "./functions/python/my_scrapy/index.py", line 174, in defaultHandler
    raise self
functions.python.my_scrapy.index.Watchdog: 1
I need to catch the exception in the function. How would I go about that?
PS: I'm very new to Python.
Alright, this question had me going a little crazy; here is why that doesn't work:
The Watchdog object creates another thread where the exception is raised but not handled (the exception would only be caught in the main thread). Luckily, Twisted has some neat features.
You can do it by running the reactor in another thread:
import time
from threading import Thread

from twisted.internet import reactor

runner = CrawlerRunner()
runner.crawl(MySpider1)
runner.crawl(MySpider2)
d = runner.join()
d.addBoth(lambda _: reactor.stop())

Thread(target=reactor.run, args=(False,)).start()  # the reactor runs in a different thread, so it doesn't block the script here

time.sleep(60)  # block here instead

# Now check if it's still scraping
if reactor.running:
    pass  # do something
else:
    pass  # do something else
I'm using Python 3.7.0.
Twisted has scheduling primitives. For example, this program runs for about 60 seconds:
from twisted.internet import reactor
reactor.callLater(60, reactor.stop)
reactor.run()
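A hedged sketch of how that could replace the Watchdog in the handler from the question (CrawlerRunner and the spiders are assumed from the question; the timed_out flag is an illustrative name of mine): schedule a deadline with callLater, cancel it if the crawl finishes first, and check afterwards which one won.

from twisted.internet import reactor
from scrapy.crawler import CrawlerRunner

def handler():
    x = 60
    state = {'timed_out': False}

    def on_timeout():
        state['timed_out'] = True
        reactor.stop()

    deadline = reactor.callLater(x, on_timeout)  # fires after x seconds unless cancelled

    runner = CrawlerRunner()
    runner.crawl(MySpider1)
    runner.crawl(MySpider2)
    d = runner.join()

    def on_finished(_):
        if deadline.active():
            deadline.cancel()  # crawl finished before the deadline
            reactor.stop()

    d.addBoth(on_finished)
    reactor.run()

    if state['timed_out']:
        print('Timeout error: process takes longer than %s seconds.' % x)
        # some other logic here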
I'm trying to create a Python application in which one process (process 'A') receives a request and puts it into a ProcessPool (from concurrent.futures). In handling this request, a message may need to be passed to a second process (process 'B'). I'm using tornado's iostream module to help wrap the connections and get responses.
Process A is failing to successfully connect to process B from within the ProcessPool execution. Where am I going wrong?
The client, which makes the initial request to process A:
#!/usr/bin/env python
import socket
import tornado.iostream
import tornado.ioloop

def print_message(data):
    print 'client received', data

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM, 0)
stream = tornado.iostream.IOStream(s)
stream.connect(('localhost', 2001))
stream.read_until('\0', print_message)
stream.write('test message\0')

tornado.ioloop.IOLoop().instance().start()
Process A, that received the initial request:
#!/usr/bin/env python
import tornado.ioloop
import tornado.tcpserver
import tornado.iostream
import socket
import concurrent.futures
import functools

def handle_request(data):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM, 0)
    out_stream = tornado.iostream.IOStream(s)
    out_stream.connect(('localhost', 2002))
    future = out_stream.read_until('\0')
    out_stream.write(data + '\0')
    return future.result()

class server_a(tornado.tcpserver.TCPServer):
    def return_response(self, in_stream, future):
        in_stream.write(future.result() + '\0')

    def handle_read(self, in_stream, data):
        future = self.executor.submit(handle_request, data)
        future.add_done_callback(functools.partial(self.return_response, in_stream))

    def handle_stream(self, in_stream, address):
        in_stream.read_until('\0', functools.partial(self.handle_read, in_stream))

    def __init__(self):
        self.executor = concurrent.futures.ProcessPoolExecutor()
        tornado.tcpserver.TCPServer.__init__(self)

server = server_a()
server.bind(2001)
server.start(0)
tornado.ioloop.IOLoop().instance().start()
Process B, that should receive the relayed request from Process A:
#!/usr/bin/env python
import tornado.ioloop
import tornado.tcpserver
import functools

class server_b(tornado.tcpserver.TCPServer):
    def handle_read(self, in_stream, data):
        in_stream.write('server B read' + data + '\0')

    def handle_stream(self, in_stream, address):
        in_stream.read_until('\0', functools.partial(self.handle_read, in_stream))

server = server_b()
server.bind(2002)
server.start(0)
tornado.ioloop.IOLoop().instance().start()
And finally, the error returned by Process A, which is raised during the 'read_until' method:
ERROR:concurrent.futures:exception calling callback for <Future at 0x10654b890 state=finished raised OSError>
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/site-packages/concurrent/futures/_base.py", line 299, in _invoke_callbacks
    callback(self)
  File "./a.py", line 26, in return_response
    in_stream.write(future.result()+'\0')
  File "/usr/local/lib/python2.7/site-packages/concurrent/futures/_base.py", line 397, in result
    return self.__get_result()
  File "/usr/local/lib/python2.7/site-packages/concurrent/futures/_base.py", line 356, in __get_result
    raise self._exception
OSError: [Errno 9] Bad file descriptor
I'm not 100% sure why you're getting this "Bad file descriptor" error (concurrent.futures unfortunately lost backtrace information when it was backported to 2.7), but there is no IOLoop running in the ProcessPoolExecutor's worker processes, so you won't be able to use Tornado constructs like IOStream in this context (unless you spin up a new IOLoop for each task, but that may not make much sense unless you need compatibility with other asynchronous libraries).
I'm also not sure if it works to mix tornado's multi-process mode and ProcessPoolExecutor in this way. I think you may need to move the initialization of the ProcessPoolExecutor until after the start(0) call.
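For what it's worth, a minimal sketch of that last suggestion, reusing the names from the question (and assuming server_a no longer creates the executor in __init__): create the pool only after start(0) has forked the server processes, so each server process gets its own executor.

server = server_a()   # assumes __init__ no longer creates the executor
server.bind(2001)
server.start(0)       # fork the Tornado server processes first
server.executor = concurrent.futures.ProcessPoolExecutor()  # then create a pool in each process
tornado.ioloop.IOLoop().instance().start()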
OK, I have resolved the issue, by updating process A to have:
def stop_loop(future):
    tornado.ioloop.IOLoop.current().stop()

def handle_request(data):
    tornado.ioloop.IOLoop.clear_current()
    tornado.ioloop.IOLoop.clear_instance()
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM, 0)
    out_stream = tornado.iostream.IOStream(s)
    out_stream.connect(('localhost', 2002))
    future = out_stream.read_until('\0')
    future.add_done_callback(stop_loop)
    out_stream.write(data + '\0')
    tornado.ioloop.IOLoop.instance().start()
    return future.result()
Even though the IOLoop hadn't previously been started in the spawned process, it was returning its parent loop when calling for the current instance. Clearing out those references has allowed a new loop for the process to be started. I don't know formally what is happening here though.
For operations in my Tornado server that are expected to block (and can't be easily modified to use things like Tornado's asynchronous HTTP request client), I have been offloading the work to separate worker processes using the multiprocessing module. Specifically, I was using a multiprocessing Pool because it offers a method called apply_async, which works very well with Tornado since it takes a callback as one of its arguments.
I recently realized that a pool preallocates its number of processes, so if they all become blocked, operations that require a new process will have to wait. I do realize that the server can still take connections, since apply_async works by adding things to a task queue and returns more or less immediately itself, but I'm looking to spawn n processes for the n blocking tasks I need to perform.
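For context, a minimal sketch of the Pool/apply_async pattern described above (illustrative names only, not the real handler code, which is further down):

import multiprocessing

def blocking_task(seconds):
    import time
    time.sleep(seconds)          # stands in for the real blocking work
    return "slept %d seconds" % seconds

def on_done(result):
    # Called in the parent process (by the pool's result-handler thread)
    # once the worker finishes.
    print "worker finished:", result

pool = multiprocessing.Pool(processes=4)   # preallocated: at most 4 tasks run at once
pool.apply_async(blocking_task, (5,), callback=on_done)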
I figured that I could use the add_handler method for my Tornado server's IOLoop to add a handler for each new PID that I create to that IOLoop. I've done something similar before, but it was using popen and an arbitrary command. An example of such use of this method is here. I wanted to pass arguments into an arbitrary target Python function within my scope, though, so I wanted to stick with multiprocessing.
However, it seems that something doesn't like the PIDs that my multiprocessing.Process objects have. I get IOError: [Errno 9] Bad file descriptor. Are these processes restricted somehow? I know that the PID isn't available until I actually start the process, but I do start the process. Here's the source code of an example I've made that demonstrates this issue:
#!/usr/bin/env python
"""Creates a small Tornado program to demonstrate asynchronous programming.
Specifically, this demonstrates using the multiprocessing module."""

import tornado.httpserver
import tornado.ioloop
import tornado.web

import multiprocessing as mp
import random
import time

__author__ = 'Brian McFadden'
__email__ = 'brimcfadden@gmail.com'


def sleepy(queue):
    """Pushes a string to the queue after sleeping for 5 seconds.
    This sleeping can be thought of as a blocking operation."""
    time.sleep(5)
    queue.put("Now I'm awake.")
    return


def random_num():
    """Returns a string containing a random number.
    This function can be used by handlers to receive text for writing which
    facilitates noticing change on the webpage when it is refreshed."""
    n = random.random()
    return "<br />Here is a random number to show change: {0}".format(n)


class SyncHandler(tornado.web.RequestHandler):
    """Demonstrates handing a request synchronously.
    It executes sleepy() before writing some more text and a random number to
    the webpage. While the process is sleeping, the Tornado server cannot
    handle any requests at all."""

    def get(self):
        q = mp.Queue()
        sleepy(q)
        val = q.get()
        self.write(val)
        self.write('<br />Brought to you by SyncHandler.')
        self.write('<br />Try refreshing me and then the main page.')
        self.write(random_num())


class AsyncHandler(tornado.web.RequestHandler):
    """Demonstrates handing a request asynchronously.
    It executes sleepy() before writing some more text and a random number to
    the webpage. It passes the sleeping function off to another process using
    the multiprocessing module in order to handle more requests concurrently to
    the sleeping, which is like a blocking operation."""

    @tornado.web.asynchronous
    def get(self):
        """Handles the original GET request (normal function delegation).
        Instead of directly invoking sleepy(), it passes a reference to the
        function to the multiprocessing pool."""
        # Create an interprocess data structure, a queue.
        q = mp.Queue()
        # Create a process for the sleepy function. Provide the queue.
        p = mp.Process(target=sleepy, args=(q,))
        # Start it, but don't use p.join(); that would block us.
        p.start()
        # Add our callback function to the IOLoop. The async_callback wrapper
        # makes sure that Tornado sends an HTTP 500 error to the client if an
        # uncaught exception occurs in the callback.
        iol = tornado.ioloop.IOLoop.instance()
        print "p.pid:", p.pid
        iol.add_handler(p.pid, self.async_callback(self._finish, q), iol.READ)

    def _finish(self, q):
        """This is the callback for post-sleepy() request handling.
        Operation of this function occurs in the original process."""
        val = q.get()
        self.write(val)
        self.write('<br />Brought to you by AsyncHandler.')
        self.write('<br />Try refreshing me and then the main page.')
        self.write(random_num())
        # Asynchronous handling must be manually finished.
        self.finish()


class MainHandler(tornado.web.RequestHandler):
    """Returns a string and a random number.
    Try to access this page in one window immediately after (<5 seconds of)
    accessing /async or /sync in another window to see the difference between
    them. Asynchronously performing the sleepy() function won't make the client
    wait for data from this handler, but synchronously doing so will!"""

    def get(self):
        self.write('This is just responding to a simple request.')
        self.write('<br />Try refreshing me after one of the other pages.')
        self.write(random_num())


if __name__ == '__main__':
    # Create an application using the above handlers.
    application = tornado.web.Application([
        (r"/", MainHandler),
        (r"/sync", SyncHandler),
        (r"/async", AsyncHandler),
    ])
    # Create a single-process Tornado server from the application.
    http_server = tornado.httpserver.HTTPServer(application)
    http_server.listen(8888)
    print 'The HTTP server is listening on port 8888.'
    tornado.ioloop.IOLoop.instance().start()
Here is the traceback:
Traceback (most recent call last):
  File "/usr/local/lib/python2.6/dist-packages/tornado/web.py", line 810, in _stack_context
    yield
  File "/usr/local/lib/python2.6/dist-packages/tornado/stack_context.py", line 77, in StackContext
    yield
  File "/usr/local/lib/python2.6/dist-packages/tornado/web.py", line 827, in _execute
    getattr(self, self.request.method.lower())(*args, **kwargs)
  File "/usr/local/lib/python2.6/dist-packages/tornado/web.py", line 909, in wrapper
    return method(self, *args, **kwargs)
  File "./process_async.py", line 73, in get
    iol.add_handler(p.pid, self.async_callback(self._finish, q), iol.READ)
  File "/usr/local/lib/python2.6/dist-packages/tornado/ioloop.py", line 151, in add_handler
    self._impl.register(fd, events | self.ERROR)
IOError: [Errno 9] Bad file descriptor
The above code is actually modified from an older example that used process pools. I've had it saved for reference for my coworkers and myself (hence the heavy commenting) for quite a while. I constructed it so that I could open two small browser windows side by side to demonstrate to my boss that the /sync URI blocks connections while /async allows more connections. For the purposes of this question, all you need to do to reproduce the issue is try to access the /async handler. It errors immediately.
What should I do about this? How can the PID be "bad"? If you run the program, you can see it be printed to stdout.
For the record, I'm using Python 2.6.5 on Ubuntu 10.04. Tornado is 1.1.
add_handler takes a valid file descriptor, not a PID. As an example of what's expected, tornado itself uses add_handler normally by passing in a socket object's fileno(), which returns the object's file descriptor. PID is irrelevant in this case.
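For illustration, here is a hedged sketch (not from the original answer; it keeps the question's Tornado 1.x-era API and a sleepy-style worker) of registering a real file descriptor instead: a multiprocessing Pipe connection exposes fileno(), which is the kind of object add_handler expects.

import multiprocessing as mp
import time
import tornado.ioloop

def worker(conn):
    time.sleep(5)                   # stands in for the blocking work
    conn.send("Now I'm awake.")
    conn.close()

parent_conn, child_conn = mp.Pipe()
p = mp.Process(target=worker, args=(child_conn,))
p.start()

iol = tornado.ioloop.IOLoop.instance()

def on_readable(fd, events):
    print parent_conn.recv()        # the worker's message
    iol.remove_handler(fd)
    p.join()

# Register the pipe's file descriptor, not the PID.
iol.add_handler(parent_conn.fileno(), on_readable, iol.READ)
iol.start()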
Check out this project:
https://github.com/vukasin/tornado-subprocess
It allows you to start arbitrary processes from Tornado and get a callback when they finish (with access to their status, stdout and stderr).
I'm trying to create a COM Object from a dll in a new thread in Python - so I can run a message pump in that thread:
from comtypes.client import CreateObject
import threading

class MessageThread(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)
        self.daemon = True

    def run(self):
        print "Thread starting"
        connection = CreateObject("IDMessaging.IDMMFileConnection")
        print "connection created"

a = CreateObject("IDMessaging.IDMMFileConnection")
print "aConnection created"

t = MessageThread()
t.start()
This is the error trace I get:
aConnection created
Thread starting
>>> Exception in thread Thread-1:
Traceback (most recent call last):
  File "c:\python26\lib\threading.py", line 532, in __bootstrap_inner
    self.run()
  File "fred.py", line 99, in run
    self.connection = CreateObject("IDMessaging.IDMMFileConnection")
  File "c:\python26\lib\site-packages\comtypes\client\__init__.py", line 235, in CreateObject
    obj = comtypes.CoCreateInstance(clsid, clsctx=clsctx, interface=interface)
  File "c:\python26\lib\site-packages\comtypes\__init__.py", line 1145, in CoCreateInstance
    _ole32.CoCreateInstance(byref(clsid), punkouter, clsctx, byref(iid), byref(p))
  File "_ctypes/callproc.c", line 925, in GetResult
WindowsError: [Error -2147221008] CoInitialize has not been called
Any ideas?
You need to have called CoInitialize() (or CoInitializeEx()) on a thread before you can create COM objects on that thread.
from pythoncom import CoInitialize
CoInitialize()
As far as I remember (it's been a long time since I programmed a lot with COM components), you have to call CoInitialize on each thread if your COM object uses STA.
But I've no idea how to call that function in Python.
Here is the MSDN doc: http://msdn.microsoft.com/en-us/library/ms678543(VS.85).aspx
Just to update with current experience using PyCharm and Python 2.7:
You need to import:
from pythoncom import CoInitializeEx
from pythoncom import CoUninitialize
then for running the thread:
def run(self):
    res = CoInitializeEx(0)
    # <your code>
    CoUninitialize()
PyCharm gets confused by the STA apartment; you need to enable true multithreading.
It is important that each CoInitialize() is terminated with a CoUninitialize(), so be sure your code follows this rule in case of errors, too.
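A small sketch of that pairing in the context of the question (the ProgID is taken from the question; the try/finally is the point):

from pythoncom import CoInitializeEx, CoUninitialize
from comtypes.client import CreateObject
import threading

class MessageThread(threading.Thread):
    def run(self):
        CoInitializeEx(0)
        try:
            connection = CreateObject("IDMessaging.IDMMFileConnection")
            # ... work with the COM object ...
        finally:
            CoUninitialize()   # always paired with CoInitializeEx, even on error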
As another answer has said, you need to run
CoInitialize()
However, it is possible that the COM object cannot just be passed to the threads directly. You will have to use CoMarshalInterThreadInterfaceInStream() and CoGetInterfaceAndReleaseStream() to pass the instance between threads:
https://stackoverflow.com/a/27966218/18052428
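A hedged sketch of that marshaling pattern using the pywin32 pythoncom helpers (the ProgID is the one from the question; whether that particular object supports IDispatch marshaling is an assumption):

import threading
import pythoncom
import win32com.client

def worker(stream):
    pythoncom.CoInitialize()
    try:
        # Unmarshal the interface that was marshaled in the main thread.
        disp = pythoncom.CoGetInterfaceAndReleaseStream(stream, pythoncom.IID_IDispatch)
        obj = win32com.client.Dispatch(disp)
        # ... use obj from this thread ...
    finally:
        pythoncom.CoUninitialize()

pythoncom.CoInitialize()
obj = win32com.client.Dispatch("IDMessaging.IDMMFileConnection")
# Marshal the interface into a stream that another thread can consume.
stream = pythoncom.CoMarshalInterThreadInterfaceInStream(pythoncom.IID_IDispatch, obj)
threading.Thread(target=worker, args=(stream,)).start()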