I have the following Python code that hangs:
cmd = ["ssh", "-tt", "-vvv"] + self.common_args
cmd += [self.host]
cmd += ["cat > %s" % (out_path)]
p = subprocess.Popen(cmd, stdin=subprocess.PIPE,
                     stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout, stderr = p.communicate(in_string)
It is supposed to save a string (in_string) into a remote file over ssh.
The file is correctly saved but then the process hangs. If I use
cmd += ["echo"] instead of
cmd += ["cat > %s" % (out_path)]
the process does not hang, so I am pretty sure I misunderstand how communicate() decides that the process has exited.
Do you know how I should write the command so that the "cat > file" does not make communicate() hang?
The -tt option allocates a tty, which prevents the child process from exiting when .communicate() closes p.stdin (the EOF is ignored). This works:
import pipes
from subprocess import Popen, PIPE
cmd = ["ssh", self.host, "cat > " + pipes.quote(out_path)] # no '-tt'
p = Popen(cmd, stdin=PIPE, stdout=PIPE, stderr=PIPE)
stdout, stderr = p.communicate(in_string)
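Note that the pipes module is deprecated and removed in Python 3.13; a minimal sketch of the same fix using shlex.quote instead, assuming Python 3:
import shlex
from subprocess import Popen, PIPE

cmd = ["ssh", self.host, "cat > " + shlex.quote(out_path)]  # still no '-tt'
p = Popen(cmd, stdin=PIPE, stdout=PIPE, stderr=PIPE)
stdout, stderr = p.communicate(in_string.encode())  # communicate() expects bytes here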
You could use paramiko, a pure-Python SSH library, to write data to a remote file via ssh:
#!/usr/bin/env python
import os
import posixpath
import sys
from contextlib import closing
from paramiko import SSHConfig, SSHClient
hostname, out_path, in_string = sys.argv[1:] # get from command-line
# load parameters to setup ssh connection
config = SSHConfig()
with open(os.path.expanduser('~/.ssh/config')) as config_file:
    config.parse(config_file)
d = config.lookup(hostname)
# connect
with closing(SSHClient()) as ssh:
    ssh.load_system_host_keys()
    ssh.connect(d['hostname'], username=d.get('user'))
    with closing(ssh.open_sftp()) as sftp:
        makedirs_exists_ok(sftp, posixpath.dirname(out_path))
        with sftp.open(out_path, 'wb') as remote_file:
            remote_file.write(in_string)
where the makedirs_exists_ok() function mimics os.makedirs():
from functools import partial
from stat import S_ISDIR
def isdir(ftp, path):
    try:
        return S_ISDIR(ftp.stat(path).st_mode)
    except EnvironmentError:
        return None
def makedirs_exists_ok(ftp, path):
    def exists_ok(mkdir, name):
        """Don't raise an error if name is already a directory."""
        try:
            mkdir(name)
        except EnvironmentError:
            if not isdir(ftp, name):
                raise

    # from os.makedirs()
    head, tail = posixpath.split(path)
    if not tail:
        assert path.endswith(posixpath.sep)
        head, tail = posixpath.split(head)
    if head and tail and not isdir(ftp, head):
        exists_ok(partial(makedirs_exists_ok, ftp), head)  # recursive call
    # do create directory
    assert isdir(ftp, head)
    exists_ok(ftp.mkdir, path)
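Since the helper only calls stat() and mkdir() on the client object, the os module can stand in for the sftp client; a hypothetical local smoke test on a POSIX system (the /tmp path is just an example):
import os

makedirs_exists_ok(os, '/tmp/a/b/c')  # os.stat/os.mkdir play the role of the sftp client
assert isdir(os, '/tmp/a/b/c')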
It makes sense that the cat command hangs: it is waiting for an EOF. I tried sending an EOF in the string but couldn't get it to work. Upon researching this question, I found a great module for streamlining the use of SSH for command-line tasks like your cat example. It might not be exactly what you need for your use case, but it does do what your question asks.
Install fabric with
pip install fabric
Inside a file called fabfile.py, put:
from fabric.api import run
def write_file(in_string, path):
    run('echo {} > {}'.format(in_string, path))
Then run this from the command prompt with:
fab -H username@host write_file:in_string=test,path=/path/to/file
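Note that fabric.api is the Fabric 1.x interface; in Fabric 2.x it was removed in favour of explicit connections. A rough sketch of the same task under 2.x, written from the 2.x docs rather than tested here:
from fabric import Connection

def write_file(host, in_string, path):
    # Fabric 2.x style: run the remote command over an explicit Connection
    with Connection(host) as c:
        c.run('echo {} > {}'.format(in_string, path))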
It is explained in https://stackoverflow.com/a/18422264/7238575 how one can run a subprocess and read out the results live. However, it looks like it creates a file named test.log to do so. This makes me worry that if multiple scripts are using this trick in the same directory, the test.log file might well be corrupted. Is there a way that does not require a file to be created outside Python? Or can we make sure that each process uses a unique log file? Or am I completely misunderstanding the situation, and is there no risk of simultaneous writes by different programs to the same test.log file?
You don't need to write the live output to a file. You can simply write it to STDOUT with sys.stdout.write("your message").
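For instance, a minimal sketch:
import sys

sys.stdout.write("your message\n")
sys.stdout.flush()  # flush so the message shows up immediately, even when piped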
On the other hand you can generate unique log files for each process:
import os
import psutil
pid = psutil.Process(os.getpid())
process_name = pid.name()
path, extension = os.path.splitext(os.path.join(os.getcwd(), "my_basic_log_file.log"))
created_log_file_name = "{0}_{1}{2}".format(path, process_name, extension)
print(created_log_file_name)
Output:
>>> python3 test_1.py
/home/my_user/test_folder/my_basic_log_file_python3.log
As the example above shows, my process name was python3, so that name was inserted into the "basic" log file name. With this solution you can create unique log files for your processes.
You can set your process name with setproctitle.setproctitle("my_process_name").
Here is an example.
import os
import psutil
import setproctitle
setproctitle.setproctitle("milan_balazs")
pid = psutil.Process(os.getpid())
process_name = pid.name()
path, extension = os.path.splitext(os.path.join(os.getcwd(), "my_basic_log_file.log"))
created_log_file_name = "{0}_{1}{2}".format(path, process_name, extension)
print(created_log_file_name)
Output:
>>> python3 test_1.py
/home/my_user/test_folder/my_basic_log_file_milan_balazs.log
Previously I have written a quite complex and safe command caller which can produce live output (without writing to a file). You can check it:
import sys
import os
import subprocess
import select
import errno
def poll_command(process, realtime):
    """
    Watch for error or output from the process
    :param process: the process, running the command
    :param realtime: flag if realtime logging is needed
    :return: Return STDOUT and return code of the command processed
    """
    coutput = ""
    poller = select.poll()
    poller.register(process.stdout, select.POLLIN)
    fdhup = {process.stdout.fileno(): 0}
    while sum(fdhup.values()) < len(fdhup):
        try:
            r = poller.poll(1)
        except select.error as err:
            if err.args[0] != errno.EINTR:
                raise
            r = []
        for fd, flags in r:
            if flags & (select.POLLIN | select.POLLPRI):
                c = version_conversion(fd, realtime)
                coutput += c
            else:
                fdhup[fd] = 1
    return coutput.strip(), process.poll()
def version_conversion(fd, realtime):
    """
    There are some differences between Python 2/3, so this conversion is needed.
    """
    c = os.read(fd, 4096)
    if sys.version_info >= (3, 0):
        c = c.decode("ISO-8859-1")
    if realtime:
        sys.stdout.write(c)
        sys.stdout.flush()
    return c
def exec_shell(command, real_time_out=False):
    """
    Call commands.
    :param command: Command line.
    :param real_time_out: If this variable is True, the output of the command is logged in real time
    :return: Return STDOUT and return code of the command processed.
    """
    if not command:
        print("Command is not available.")
        return None, None
    print("Executing '{}'".format(command))
    rtoutput = real_time_out
    p = subprocess.Popen(command, shell=True,
                         stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    out, return_code = poll_command(p, rtoutput)
    if return_code:  # non-zero exit status: report the error instead of claiming success
        error_msg = "Return code: {ret_code} Error message: {err_msg}".format(
            ret_code=return_code, err_msg=out
        )
        print(error_msg)
    else:
        print("[OK] - The command calling was successful. CMD: '{}'".format(command))
    return out, return_code
exec_shell("echo test running", real_time_out=True)
Output:
>>> python3 test.py
Executing 'echo test running'
test running
[OK] - The command calling was successful. CMD: 'echo test running'
I hope my answer answers your question! :)
I want to redirect the output of shell commands to a file using the variable "path", but it is not working:
import os, socket, shutil, subprocess
host = os.popen("hostname -s").read().strip()
path = "/root/" + host
if os.path.exists(path):
    print(path, "Already exists")
else:
    os.mkdir(path)
    print("Directory", path, "Created")

os.system("uname -a" > path/'uname')  # I want to redirect the output of shell commands to a file using the variable "path" but it is not working
os.system("df -hP" > path/'df')
I think the problem is the bare > and / symbols in the os.system command...
Here is a Python 2.7 example with os.system that does what you want:
import os
path="./test_dir"
command_str="uname -a > {}/uname".format(path)
os.system(command_str)
Here's a very minimal example using subprocess.run. Also, search StackOverflow for "python shell redirect", and you'll get this result right away:
Calling an external command in Python
import subprocess
def run(filename, command):
    with open(filename, 'wb') as stdout_file:
        process = subprocess.run(command, stdout=subprocess.PIPE, shell=True)
        stdout_file.write(process.stdout)
        return process.returncode
run('test_out.txt', 'ls')
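As a variation, the open file object can be handed straight to subprocess.run as stdout, so the output never passes through a Python-side buffer; a minimal sketch of that alternative:
import subprocess

def run_direct(filename, command):
    # Let the child process write its stdout directly into the file
    with open(filename, 'wb') as stdout_file:
        process = subprocess.run(command, stdout=stdout_file, shell=True)
    return process.returncode

run_direct('test_out.txt', 'ls')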
I have a custom input method, and I have a Python module to communicate with it. I'm trying to control the shell with it, so that everything from the local stdout is printed on the remote device, and everything sent from the remote device goes into the local stdin. That way the remote device controls the input given to the program; if the program asks for input, the remote device can answer it too (like in ssh).
I used the Python subprocess module to control stdin and stdout:
#! /usr/bin/python
from subprocess import Popen, PIPE
import thread
from mymodule import remote_read, remote_write
def talk2proc(dap):
    while True:
        try:
            remote_write(dap.stdout.read())
            incmd = remote_read()
            dap.stdin.write(incmd)
        except Exception as e:
            print(e)
            break

while True:
    cmd = remote_read()
    if cmd != 'quit':
        p = Popen(['bash', '-c', '"%s"' % cmd], stdout=PIPE, stdin=PIPE, stderr=PIPE)
        thread.start_new_thread(talk2proc, (p,))
        p.wait()
    else:
        break
But it doesn't work, what should I do?
P.S. Is there a difference for Windows?
I had this problem; I used this for stdin:
from subprocess import call

call(['some_app', 'param'], stdin=open("a.txt", "rb"))
a.txt
:q
I used this for a git wrapper; it enters the data line by line whenever some_app stops and expects user input.
There is a difference for Windows. This line won't work in Windows:
p = Popen(['bash', '-c', '"%s"'%cmd], stdout=PIPE, stdin=PIPE, stderr=PIPE)
because the equivalent of 'bash' is 'cmd.exe'.
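A hedged sketch of the Windows counterpart, using cmd.exe's /C switch to run a command string (cmd here is the command variable from the question):
from subprocess import Popen, PIPE

# cmd.exe /C runs the given command string and then exits
p = Popen(['cmd.exe', '/C', cmd], stdout=PIPE, stdin=PIPE, stderr=PIPE)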
I am trying to use Sailfish, which takes multiple fastq files as arguments, in a ruffus pipeline. I execute Sailfish using the subprocess module in python, but <() in the subprocess call does not work even when I set shell=True.
This is the command I want to execute using python:
sailfish quant [options] -1 <(cat sample1a.fastq sample1b.fastq) -2 <(cat sample2a.fastq sample2b.fastq) -o [output_file]
or (preferably):
sailfish quant [options] -1 <(gunzip sample1a.fastq.gz sample1b.fastq.gz) -2 <(gunzip sample2a.fastq.gz sample2b.fastq.gz) -o [output_file]
A generalization:
someprogram <(someprocess) <(someprocess)
How would I go about doing this in python? Is subprocess the right approach?
To emulate the bash process substitution:
#!/usr/bin/env python
from subprocess import check_call
check_call('someprogram <(someprocess) <(anotherprocess)',
           shell=True, executable='/bin/bash')
In Python, you could use named pipes:
#!/usr/bin/env python
from subprocess import Popen
with named_pipes(n=2) as paths:
    someprogram = Popen(['someprogram'] + paths)

    processes = []
    for path, command in zip(paths, ['someprocess', 'anotherprocess']):
        with open(path, 'wb', 0) as pipe:
            processes.append(Popen(command, stdout=pipe, close_fds=True))

    for p in [someprogram] + processes:
        p.wait()
where named_pipes(n) is:
import os
import shutil
import tempfile
from contextlib import contextmanager
@contextmanager
def named_pipes(n=1):
    dirname = tempfile.mkdtemp()
    try:
        paths = [os.path.join(dirname, 'named_pipe' + str(i)) for i in range(n)]
        for path in paths:
            os.mkfifo(path)
        yield paths
    finally:
        shutil.rmtree(dirname)
Another, more preferable way (no need to create a named entry on disk) to implement the bash process substitution is to use /dev/fd/N filenames (if they are available), as suggested by @Dunes. On FreeBSD, fdescfs(5) (/dev/fd/#) creates entries for all file descriptors opened by the process. To test availability, run:
$ test -r /dev/fd/3 3</dev/null && echo /dev/fd is available
If it fails, try to symlink /dev/fd to proc(5), as is done on some Linuxes:
$ ln -s /proc/self/fd /dev/fd
Here's a /dev/fd-based implementation of the someprogram <(someprocess) <(anotherprocess) bash command:
#!/usr/bin/env python3
from contextlib import ExitStack
from subprocess import CalledProcessError, Popen, PIPE
def kill(process):
    if process.poll() is None:  # still running
        process.kill()

with ExitStack() as stack:  # for proper cleanup
    processes = []
    for command in [['someprocess'], ['anotherprocess']]:  # start child processes
        processes.append(stack.enter_context(Popen(command, stdout=PIPE)))
        stack.callback(kill, processes[-1])  # kill on someprogram exit

    fds = [p.stdout.fileno() for p in processes]
    someprogram = stack.enter_context(
        Popen(['someprogram'] + ['/dev/fd/%d' % fd for fd in fds], pass_fds=fds))
    for p in processes:  # close pipes in the parent
        p.stdout.close()
# exit stack: wait for processes
if someprogram.returncode != 0:  # errors shouldn't go unnoticed
    raise CalledProcessError(someprogram.returncode, someprogram.args)
Note: on my Ubuntu machine, the subprocess code works only in Python 3.4+, despite pass_fds being available since Python 3.2.
Whilst J.F. Sebastian has provided an answer using named pipes, it is possible to do this with anonymous pipes.
import shlex
from subprocess import Popen, PIPE
inputcmd0 = "zcat hello.gz" # gzipped file containing "hello"
inputcmd1 = "zcat world.gz" # gzipped file containing "world"
def get_filename(file_):
    return "/dev/fd/{}".format(file_.fileno())

def get_stdout_fds(*processes):
    return tuple(p.stdout.fileno() for p in processes)
# setup producer processes
inputproc0 = Popen(shlex.split(inputcmd0), stdout=PIPE)
inputproc1 = Popen(shlex.split(inputcmd1), stdout=PIPE)
# setup consumer process
# pass input processes pipes by "filename" eg. /dev/fd/5
cmd = "cat {file0} {file1}".format(file0=get_filename(inputproc0.stdout),
file1=get_filename(inputproc1.stdout))
print("command is:", cmd)
# pass_fds argument tells Popen to let the child process inherit the pipe's fds
someprogram = Popen(shlex.split(cmd), stdout=PIPE,
                    pass_fds=get_stdout_fds(inputproc0, inputproc1))
output, error = someprogram.communicate()
for p in [inputproc0, inputproc1, someprogram]:
    p.wait()
assert output == b"hello\nworld\n"
The code below is outdated as of Python 3.0, where it was replaced by subprocess.getstatusoutput().
import commands
(ret, out) = commands.getstatusoutput('some command')
print ret
print out
The real question is: what's the multiplatform alternative to this command in Python? The above code fails ugly under Windows, because getstatusoutput is supported only under Unix and Python does not tell you this; instead you get something like:
>test.py
1
'{' is not recognized as an internal or external command,
operable program or batch file.
This would be the multiplatform implementation for getstatusoutput():
def getstatusoutput(cmd):
    """Return (status, output) of executing cmd in a shell.

    This new implementation should work on all platforms.
    """
    import subprocess
    pipe = subprocess.Popen(cmd, stdout=subprocess.PIPE, shell=True,
                            universal_newlines=True)
    output = "".join(pipe.stdout.readlines())
    sts = pipe.wait()  # returncode stays None until the process is reaped, so wait() for it
    return sts, output
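For what it's worth, on Python 3 this helper lives in subprocess itself (the commands module is gone), so no reimplementation is needed there; a minimal sketch, assuming Python 3:
import subprocess

# subprocess.getstatusoutput() is the Python 3 home of commands.getstatusoutput()
status, output = subprocess.getstatusoutput('echo hello')
print(status, output)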
I wouldn't really consider this multiplatform, but you can use subprocess.Popen:
import subprocess
pipe = subprocess.Popen('dir', stdout=subprocess.PIPE, shell=True, universal_newlines=True)
output = pipe.stdout.readlines()
sts = pipe.wait()
print sts
print output
Here's a drop-in replacement for getstatusoutput:
def getstatusoutput(cmd):
    """Return (status, output) of executing cmd in a shell.

    This new implementation should work on all platforms.
    """
    import subprocess
    pipe = subprocess.Popen(cmd, shell=True, universal_newlines=True,
                            stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    output = "".join(pipe.stdout.readlines())
    sts = pipe.wait()
    if sts is None:
        sts = 0
    return sts, output
This snippet was proposed by the original poster. I made some changes since getstatusoutput duplicates stderr onto stdout.
The problem is that dir isn't really a multiplatform call, but subprocess.Popen allows you to execute shell commands on any platform. I would steer clear of using shell commands unless you absolutely need to. Investigate the contents of the os, os.path, and shutil modules instead.
import os
import os.path
for rel_name in os.listdir(os.curdir):
    abs_name = os.path.join(os.curdir, rel_name)
    if os.path.isdir(abs_name):
        print('DIR:  ' + rel_name)
    elif os.path.isfile(abs_name):
        print('FILE: ' + rel_name)
    else:
        print('UNK?  ' + rel_name)
getstatusoutput docs say it runs the command like so:
{ cmd ; } 2>&1
Which obviously doesn't work with cmd.exe (the 2>&1 works fine if you need it, though).
You can use Popen as above, but also include the parameter 'stderr=subprocess.STDOUT' to get the same behaviour as getstatusoutput.
My tests on Windows had returncode set to None, though, which is not ideal if you're counting on the return value.
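The None likely comes from reading returncode before the process has been reaped; calling communicate() (or wait()) first should make Popen fill it in. A minimal sketch of that pattern:
import subprocess

pipe = subprocess.Popen('dir', shell=True, universal_newlines=True,
                        stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
output = pipe.communicate()[0]  # waits for the process, so returncode is now set
print(pipe.returncode)
print(output)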