I'm trying to write a Tornado web app which runs a local command asynchronously, as a coroutine. This is the stripped down example code:
#! /usr/bin/env python3
import shlex
import asyncio
import logging
from tornado.web import Application, url, RequestHandler
from tornado.httpserver import HTTPServer
from tornado.ioloop import IOLoop

logging.getLogger('asyncio').setLevel(logging.DEBUG)


async def run():
    command = "python3 /path/to/my/script.py"
    logging.debug('Calling command: {}'.format(command))
    process = asyncio.create_subprocess_exec(
        *shlex.split(command),
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.STDOUT
    )
    logging.debug(' - process created')
    result = await process
    stdout, stderr = result.communicate()
    output = stdout.decode()
    return output


def run_sync(self, path):
    command = "python3 /path/to/my/script.py"
    logging.debug('Calling command: {}'.format(command))
    try:
        result = subprocess.run(
            *shlex.split(command),
            stdout=subprocess.PIPE,
            stderr=subprocess.STDOUT,
            check=True
        )
    except subprocess.CalledProcessError as ex:
        raise RunnerError(ex.output)
    else:
        return result.stdout


class TestRunner(RequestHandler):
    async def get(self):
        result = await run()
        self.write(result)


url_list = [
    url(r"/test", TestRunner),
]

HTTPServer(Application(url_list, debug=True)).listen(8080)
logging.debug("Tornado server started at port {}.".format(8080))

IOLoop.configure('tornado.platform.asyncio.AsyncIOLoop')
IOLoop.instance().start()
When /path/to/my/script.py is called directly, it executes as expected. Likewise, when I implement TestRunner.get as a regular, synchronous method (see run_sync above), it works correctly. However, when running the above app and calling /test, the log shows:
DEBUG:asyncio:Using selector: EpollSelector
DEBUG:asyncio:execute program 'python3' stdout=stderr=<pipe>
DEBUG:asyncio:process 'python3' created: pid 21835
However, ps shows that the process has become defunct (a zombie):
$ ps -ef | grep 21835
berislav 21835 21834 0 19:19 pts/2 00:00:00 [python3] <defunct>
I have a feeling that I'm not implementing the right loop, or I'm doing it wrong, but all the examples I've seen show how to use asyncio.get_event_loop().run_until_complete(your_coro()), and I couldn't find much about combining asyncio and Tornado. All suggestions welcome!
Subprocesses are tricky because of the singleton SIGCHLD handler. In asyncio, this means that they only work with the "main" event loop. If you change tornado.ioloop.IOLoop.configure('tornado.platform.asyncio.AsyncIOLoop') to tornado.platform.asyncio.AsyncIOMainLoop().install(), then the example works. A few other cleanups were also necessary; here's the full code:
#! /usr/bin/env python3
import shlex
import asyncio
import logging

import tornado.platform.asyncio
from tornado.web import Application, url, RequestHandler
from tornado.httpserver import HTTPServer
from tornado.ioloop import IOLoop

logging.getLogger('asyncio').setLevel(logging.DEBUG)


async def run():
    command = "python3 /path/to/my/script.py"
    logging.debug('Calling command: {}'.format(command))
    process = await asyncio.create_subprocess_exec(
        *shlex.split(command),
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.STDOUT
    )
    logging.debug(' - process created')
    result = await process.wait()
    stdout, stderr = await process.communicate()
    output = stdout.decode()
    return output


tornado.platform.asyncio.AsyncIOMainLoop().install()
IOLoop.instance().run_sync(run)
Also note that tornado has its own subprocess interface in tornado.process.Subprocess, so if that's the only thing you need asyncio for, consider using the Tornado version instead. Be aware that combining Tornado and asyncio's subprocesses interfaces in the same process may produce conflicts with the SIGCHLD handler, so you should pick one or the other, or use the libraries in such a way that the SIGCHLD handler is unnecessary (for example by relying solely on stdout/stderr instead of the process's exit status).
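For illustration, here is a rough sketch of what a Tornado-only version of run() might look like, assuming a recent Tornado (4.3 or later) where Subprocess.STREAM, wait_for_exit() and awaiting Tornado futures from a native coroutine are all available; the command path is the same placeholder as above:

import shlex
import subprocess
from tornado.process import Subprocess

async def run_tornado():
    command = "python3 /path/to/my/script.py"
    # Subprocess.STREAM exposes the child's stdout as a stream we can read asynchronously
    proc = Subprocess(shlex.split(command),
                      stdout=Subprocess.STREAM,
                      stderr=subprocess.STDOUT)
    output = await proc.stdout.read_until_close()
    await proc.wait_for_exit()  # raises CalledProcessError on a non-zero exit status
    return output.decode()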
If I use asyncio to spawn a subprocess that runs another Python script, there is a warning at the end: RuntimeWarning: A loop is being detached from a child watcher with pending handlers, whenever the subprocess is terminated by terminate().
For example, a very simple dummy script (dummy.py):
import datetime
import time
import os

if __name__ == '__main__':
    for i in range(3):
        msg = 'pid({}) {}: continue'.format(os.getpid(), datetime.datetime.now())
        print(msg)
        time.sleep(1.0)
Then I spawn the dummy:
import asyncio
from asyncio import subprocess

T = 3  # if T=5, which allows the subprocess to finish, there is no such warning

async def handle_proc():
    p = None
    try:
        p = await subprocess.create_subprocess_exec(
            'python3', 'dummy.py',
            # 'dummy.sh'
        )
        await asyncio.sleep(T)
    finally:
        if p and p.returncode is None:
            p.terminate()
        print('handle_proc Done!')

if __name__ == '__main__':
    asyncio.run(handle_proc())
The warning appears whenever T is short enough that p.terminate() actually gets called.
If I spawn a bash script instead, there is no warning:
#!/usr/bin/env bash
set -e

N=10
T=1
for i in $(seq 1 $N)
do
    echo "loop=$i/$N, sleep $T"
    >&2 echo "msg in stderror: ($i/$N,$T)"
    sleep $T
done
What did I do wrong here?
To get rid of this warning, you need to await p.wait() after p.terminate().
The reason you need to do this is that p.terminate() only sends signal.SIGTERM to the process, which may still do some cleanup work after receiving the signal. If you don't wait for it to finish terminating, your script's event loop shuts down before the child has been reaped, and the process is cut off prematurely, which is exactly what the warning complains about.
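Concretely, the fix applied to handle_proc() from the question looks like this (same dummy.py as above):

import asyncio
from asyncio import subprocess

async def handle_proc():
    p = None
    try:
        p = await subprocess.create_subprocess_exec('python3', 'dummy.py')
        await asyncio.sleep(3)
    finally:
        if p and p.returncode is None:
            p.terminate()
            await p.wait()  # reap the terminated child; this is what silences the warning
        print('handle_proc Done!')

if __name__ == '__main__':
    asyncio.run(handle_proc())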
This is what I have so far...
from gpiozero import MotionSensor
import subprocess
import threading
import time

pir = MotionSensor(4)

while True:
    pir.wait_for_motion()
    print("Start Playing Music")
    subprocess.call(['mplayer', '-vo', 'null', '-ao', 'alsa', '-playlist', 'myplaylist', '-shuffle'])
The music-playing part works great, but as for the timing, I've tried threading and time, and all they seem to do is pause the code for a given amount of time. I want to run the subprocess for a given amount of time, then return to waiting for motion. I'm still learning. Thanks for your help.
Python 2.7 - 3.x
Create your subprocess command. I have chosen Popen.
Popen doesn't block, allowing you to interact with the process while it's running, or continue with other things in your Python program. The call to Popen returns a Popen object.
You can read the difference between subprocess.Popen and subprocess.call here
You can use the shlex module to split your command string, which is very convenient.
After that, you can run the command in a thread and manage the task from there. Here is a simple example of how to do it:
import logging
import shlex
import subprocess
import sys
import threading

logging.basicConfig(filename='log.log',
                    filemode='a',
                    format='%(asctime)s,%(msecs)d %(name)s %(levelname)s %(message)s',
                    datefmt='%H:%M:%S',
                    level=logging.INFO)
log = logging.getLogger(__name__)


def exec_cmd(command):
    try:
        cmd = subprocess.Popen(shlex.split(command),  # nosec
                               shell=False,
                               stdout=subprocess.PIPE,
                               stderr=subprocess.PIPE,
                               universal_newlines=True)
        _thread_command(cmd)
        out, err = cmd.communicate()
        log.error(err) if err else log.info(out)
    except subprocess.CalledProcessError as su_err:
        log.error('Calledprocerr: %s', su_err)
    except OSError as os_error:
        log.error('Could not execute command: %s', os_error)


def _thread_command(task, timeout=5):
    """
    Thread. If task is longer than <timeout> - kill.
    :param task: task to execute.
    """
    task_thread = threading.Thread(target=task.wait)
    task_thread.start()
    task_thread.join(timeout)
    if task_thread.is_alive():  # do whatever you want with your task, for example, kill:
        task.kill()
        logging.error('Timeout! Executed time is more than: %s', timeout)
        sys.exit(1)


if __name__ == '__main__':
    exec_cmd('sleep 10')  # put your string command here
Tested on CentOS:
[kchojnowski@zabbix4-worker1 ~]$ cat log.log
11:31:48,348 root ERROR Timeout! Executed time is more than: 5
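Applied to the motion-sensor loop from the original question, the same pattern might look roughly like this (a sketch: PLAY_SECONDS is an assumed value, and the mplayer arguments are copied from the question):

from gpiozero import MotionSensor
import subprocess
import threading

PLAY_SECONDS = 60  # assumed: how long the music should play after motion is detected

def play_for(seconds):
    player = subprocess.Popen(['mplayer', '-vo', 'null', '-ao', 'alsa',
                               '-playlist', 'myplaylist', '-shuffle'])
    waiter = threading.Thread(target=player.wait)
    waiter.start()
    waiter.join(seconds)
    if waiter.is_alive():  # still playing after the timeout
        player.kill()      # stop the player and fall through to wait for motion again

pir = MotionSensor(4)
while True:
    pir.wait_for_motion()
    print("Start Playing Music")
    play_for(PLAY_SECONDS)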
Hello, I am trying to write a simple administrative app which gives me access to a computer's shell through telnet (this is only a test, for Python programming practice). When I connect to my server I get only a black screen in the terminal (Windows telnet client), but the log of my program contains the output from the subprocess; it just never gets sent to the client.
I have searched Google for many solutions, but none of them worked properly with the Twisted library and the result was always the same.
My server code:
# -*- coding: utf-8 -*-
from subprocess import Popen, PIPE
from threading import Thread
from Queue import Queue  # Python 2

from twisted.internet import reactor
from twisted.internet.protocol import Factory
from twisted.protocols.basic import LineReceiver
import sys

log = 'log.tmp'


def reader(pipe, queue):
    try:
        with pipe:
            for line in iter(pipe.readline, b''):
                queue.put((pipe, line))
    finally:
        queue.put(None)


class Server(LineReceiver):

    def connectionMade(self):
        self.sendLine("Creating shell...")
        self.shell = Popen("cmd.exe", stdout=PIPE, stderr=PIPE, bufsize=1, shell=True)
        q = Queue()
        Thread(target=reader, args=[self.shell.stdout, q]).start()
        Thread(target=reader, args=[self.shell.stderr, q]).start()
        for _ in xrange(2):
            for pipe, line in iter(q.get, b''):
                if pipe == self.shell.stdout:
                    sys.stdout.write(line)
                else:
                    sys.stderr.write(line)
        self.sendLine("Shell created!")

    def lineReceived(self, line):
        print line
        # stdout_data = self.shell.communicate(line)[0]
        self.sendLine(line)


if __name__ == "__main__":
    ServerFactory = Factory.forProtocol(Server)
    reactor.listenTCP(8123, ServerFactory)  ##UndefinedVariable
    reactor.run()  ##UndefinedVariable
You mixed a blocking program with a non-blocking program. The non-blocking parts don't get to run because the blocking parts are blocking. The blocking parts don't work because they rely on the non-blocking parts running.
Get rid of the Popen, Queue, and Thread and use reactor.spawnProcess instead. Or get rid of Twisted and use more threads for the networking.
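A rough, untested sketch of the first approach, forwarding the shell's output to the telnet client with reactor.spawnProcess (ShellProtocol and its wiring are only illustrative):

from twisted.internet import protocol, reactor
from twisted.internet.protocol import Factory
from twisted.protocols.basic import LineReceiver

class ShellProtocol(protocol.ProcessProtocol):
    def __init__(self, client):
        self.client = client  # the telnet-side LineReceiver

    def outReceived(self, data):
        self.client.transport.write(data)  # forward the shell's stdout to the client

    def errReceived(self, data):
        self.client.transport.write(data)  # forward stderr as well

class Server(LineReceiver):
    def connectionMade(self):
        self.sendLine("Creating shell...")
        self.shell = ShellProtocol(self)
        reactor.spawnProcess(self.shell, "cmd.exe", ["cmd.exe"], env=None)
        self.sendLine("Shell created!")

    def lineReceived(self, line):
        # feed each received line to the shell's stdin
        self.shell.transport.write(line + "\r\n")

if __name__ == "__main__":
    reactor.listenTCP(8123, Factory.forProtocol(Server))
    reactor.run()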
I have the following situation:
I receive a request on a socketio server. I answer it (socket.emit(..)) and then start something with heavy computation load in another thread.
If the heavy computation is caused by subprocess.Popen (using subprocess.PIPE), it totally blocks every incoming request for as long as it is being executed, even though it happens in a separate thread.
No problem: in this thread it was suggested to asynchronously read the result of the subprocess with a buffer size of 1, so that between these reads other threads have the chance to do something. Unfortunately, this did not help in my case.
I also already monkeypatched eventlet and that works fine, as long as I don't use subprocess.Popen with subprocess.PIPE in the thread.
In the code sample below you can see that it only happens when using subprocess.Popen with subprocess.PIPE. When I uncomment functionWithSimulatedHeavyLoad() and instead comment out functionWithHeavyLoad(), everything works like a charm.
from flask import Flask
from flask.ext.socketio import SocketIO, emit
import eventlet
eventlet.monkey_patch()

app = Flask(__name__)
socketio = SocketIO(app)

import time
from threading import Thread


@socketio.on('client command')
def response(data, type=None, nonce=None):
    socketio.emit('client response', ['foo'])

    thread = Thread(target=testThreadFunction)
    thread.daemon = True
    thread.start()


def testThreadFunction():
    # functionWithSimulatedHeavyLoad()
    functionWithHeavyLoad()


def functionWithSimulatedHeavyLoad():
    time.sleep(5)


def functionWithHeavyLoad():
    from datetime import datetime
    import subprocess
    import sys
    from queue import Queue, Empty

    ON_POSIX = 'posix' in sys.builtin_module_names

    def enqueueOutput(out, queue):
        for line in iter(out.readline, b''):
            if line == '':
                break
            queue.put(line)
        out.close()

    # just anything that takes long to be computed
    shellCommand = 'find / test'

    p = subprocess.Popen(shellCommand, universal_newlines=True, shell=True, stdout=subprocess.PIPE, bufsize=1, close_fds=ON_POSIX)
    q = Queue()
    t = Thread(target=enqueueOutput, args=(p.stdout, q))
    t.daemon = True
    t.start()
    t.join()

    text = ''
    while True:
        try:
            line = q.get_nowait()
            text += line
            print(line)
        except Empty:
            break

    socketio.emit('client response', {'text': text})


socketio.run(app)
The client receives the message 'foo' after the blocking work in the functionWithHeavyLoad() function is completed. It should receive the message earlier, though.
This sample can be copied and pasted in a .py file and the behavior can be instantly reproduced.
I am using Python 3.4.3, Flask 0.10.1, flask-socketio 1.2, and eventlet 0.17.4.
Update
If I put this into the functionWithHeavyLoad function it actually works and everything's fine:
import shlex

shellCommand = shlex.split('find / test')
popen = subprocess.Popen(shellCommand, stdout=subprocess.PIPE)
lines_iterator = iter(popen.stdout.readline, b"")
for line in lines_iterator:
    print(line)
    eventlet.sleep()
The problem is: I used find for the heavy load in order to make the sample more easily reproducible for you. However, in my code I actually use tesseract "{0}" stdout -l deu as the shell command. This (unlike find) still blocks everything. Is this a tesseract issue rather than an eventlet one? But still: how can this block if it happens in a separate thread that reads line by line with context switches, when find does not block?
Thanks to this question I learned something new today. Eventlet does offer a greenlet friendly version of subprocess and its functions, but for some odd reason it does not monkey patch this module in the standard library.
Link to the eventlet implementation of subprocess: https://github.com/eventlet/eventlet/blob/master/eventlet/green/subprocess.py
Looking at the eventlet patcher, the modules that are patched are os, select, socket, thread, time, MySQLdb, builtins and psycopg2. There is absolutely no reference to subprocess in the patcher.
The good news is that I was able to work with Popen() in an application very similar to yours, after I replaced:
import subprocess
with:
from eventlet.green import subprocess
But note that the currently released version of eventlet (0.17.4) does not support the universal_newlines option in Popen; you will get an error if you use it. Support for this option is in master (here is the commit that added the option). You will either have to remove that option from your call, or else install the master branch of eventlet directly from GitHub.
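For reference, a minimal sketch of the adjusted call in functionWithHeavyLoad(), assuming eventlet 0.17.x where universal_newlines is not yet supported:

from eventlet.green import subprocess  # cooperative, greenlet-friendly Popen

shellCommand = 'find / test'
p = subprocess.Popen(shellCommand, shell=True,
                     stdout=subprocess.PIPE, bufsize=1)
for line in iter(p.stdout.readline, b''):
    print(line.decode())  # output is bytes because universal_newlines is removed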
I have a simple Python program that I'm using to test asyncio with subprocesses:
import sys, time

for x in range(100):
    print("processing (%s/100) " % x)
    sys.stdout.flush()

print("enjoy")
sys.stdout.flush()
Running this on the command line produces the desired results.
However, when called from asyncio, it never finishes:
process = yield from asyncio.create_subprocess_exec(
    *["python", "program.py"],
    stdout=async_subprocess.PIPE,
    stderr=async_subprocess.STDOUT,
    cwd=working_dir
)

# this never finishes
yield from process.communicate()
ps ax shows this process is <defunct>; I'm not sure what that means.
I suspect your issue is just related to how you're calling asyncio.create_subprocess_exec and process.communicate(). This complete example works fine for me:
import asyncio
from asyncio import subprocess

@asyncio.coroutine
def do_work():
    process = yield from asyncio.create_subprocess_exec(
        *["python", "program.py"],
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT
    )
    stdout, _ = yield from process.communicate()
    print(stdout)

if __name__ == "__main__":
    loop = asyncio.get_event_loop()
    loop.run_until_complete(do_work())
You have to place code that uses yield from inside a function decorated with @asyncio.coroutine, and then run it in an event loop (using loop.run_until_complete) for it to behave the way you want.
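For what it's worth, on Python 3.5 and later the same coroutine can be written with async def and await instead of the generator style; a sketch under that assumption:

import asyncio
from asyncio import subprocess

async def do_work():
    process = await asyncio.create_subprocess_exec(
        "python", "program.py",
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,
    )
    stdout, _ = await process.communicate()  # waits for exit and collects all output
    print(stdout)

if __name__ == "__main__":
    asyncio.get_event_loop().run_until_complete(do_work())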