How to pipe the output of execvp to a variable in Python

I have an assignment where we are making a shell for the Linux OS, and I have a lot of questions!
I was allowed to do it in Python using some of the methods from the os library. The idea is that my program should talk directly to the Linux system calls. So this includes:
Create
Open
Close
Read
Write
Exit
Pipe
Exec
Fork
Dup2
Wait
So far I have made a working shell which can execute commands with execvp, but I am having trouble with the piping.
I was reading this Q&A and I felt that I almost understood what I have to do.
I guess I have to use dup2 to write (and maybe read later). I am also a little confused about whether I should use read() and write() at some point for the piping.
from os import (
    execvp,
    wait,
    fork,
    close,
    pipe,
    dup2,
)
from os import _exit as kill

STDIN = 0
STDOUT = 1
STDERR = 2
CHILD = 0

def piping(cmd):
    reading, writing = pipe()
    pid = fork()
    if pid > CHILD:
        # Parent: becomes the second command, reading from the pipe
        wait()
        close(writing)
        dup2(reading, STDIN)
        execvp(cmd[1][0], cmd[1])
        kill(127)
    elif pid == CHILD:
        # Child: becomes the first command, writing into the pipe
        close(reading)
        dup2(writing, STDOUT)
        execvp(cmd[0][0], cmd[0])
        kill(127)
    else:
        print('Command not found:', cmd)

piping([['ls', '-l', '/'], ['grep', 'var']])
If I run this code it works. But I don't understand some things:
How can execvp know that it gets extra arguments from the pipe?
Why should I kill at the end, and why is it 127?
How is it possible to run the execvp inside the parent? Is this also possible in C?
If I have a nested pipe, e.g. ls -l / | grep var | xclip -selection clipboard, should I create a new fork then? (Maybe some recursion? See the sketch at the end of this question.)
It is not part of the assignment to write to a file, but I might implement that later, once I get the piping to work.
Should I use dup2 for that as well, or maybe read/write?
Thank you in advance! :)
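For the nested-pipe case asked about above, here is a minimal sketch of how the same fork/pipe/dup2 pattern might chain more than two commands. run_pipeline is a hypothetical helper (not part of the assignment code), and it uses only the os calls listed above:

from os import execvp, wait, fork, close, pipe, dup2, _exit

def run_pipeline(cmds):
    # cmds is a list of argv lists, e.g. [['ls', '-l', '/'], ['grep', 'var']]
    prev_read = None
    for i, cmd in enumerate(cmds):
        last = (i == len(cmds) - 1)
        if not last:
            reading, writing = pipe()
        pid = fork()
        if pid == 0:
            # Child: stdin comes from the previous pipe, stdout goes into the next one
            if prev_read is not None:
                dup2(prev_read, 0)
                close(prev_read)
            if not last:
                close(reading)
                dup2(writing, 1)
                close(writing)
            execvp(cmd[0], cmd)
            _exit(127)
        # Parent: close the ends it no longer needs, keep only the read side
        if prev_read is not None:
            close(prev_read)
        if not last:
            close(writing)
            prev_read = reading
    for _ in cmds:
        wait()

run_pipeline([['ls', '-l', '/'], ['grep', 'var'], ['xclip', '-selection', 'clipboard']])

The design choice here is a loop rather than recursion: the parent keeps only the read end of the previous pipe, closes everything else so each command sees end-of-file when its upstream exits, and waits once per child at the end.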

Related

Python subprocess always waits for program [duplicate]

I'm trying to port a shell script to the much more readable Python version. The original shell script starts several processes (utilities, monitors, etc.) in the background with "&". How can I achieve the same effect in Python? I'd like these processes not to die when the Python script completes. I am sure it's related to the concept of a daemon somehow, but I couldn't find how to do this easily.
While jkp's solution works, the newer way of doing things (and the way the documentation recommends) is to use the subprocess module. For simple commands it's equivalent, but it offers more options if you want to do something complicated.
Example for your case:
import subprocess
subprocess.Popen(["rm","-r","some.file"])
This will run rm -r some.file in the background. Note that calling .communicate() on the object returned from Popen will block until it completes, so don't do that if you want it to run in the background:
import subprocess
ls_output = subprocess.Popen(["sleep", "30"])
ls_output.communicate()  # Will block for 30 seconds
See the documentation here.
Also, a point of clarification: "Background" as you use it here is purely a shell concept; technically, what you mean is that you want to spawn a process without blocking while you wait for it to complete. However, I've used "background" here to refer to shell-background-like behavior.
Note: This answer is less current than it was when posted in 2009. Using the subprocess module shown in other answers is now recommended in the docs.
(Note that the subprocess module provides more powerful facilities for spawning new processes and retrieving their results; using that module is preferable to using these functions.)
If you want your process to start in the background you can either use system() and call it in the same way your shell script did, or you can spawn it:
import os
os.spawnl(os.P_DETACH, 'some_long_running_command')
(or, alternatively, you may try the less portable os.P_NOWAIT flag).
See the documentation here.
You probably want the answer to "How to call an external command in Python".
The simplest approach is to use the os.system function, e.g.:
import os
os.system("some_command &")
Basically, whatever you pass to the system function will be executed the same as if you'd passed it to the shell in a script.
I found this here:
On Windows (Win XP), the parent process will not finish until longtask.py has finished its work. It is not what you want in a CGI script. The problem is not specific to Python; the PHP community has the same problem.
The solution is to pass the DETACHED_PROCESS process creation flag to the underlying CreateProcess function in the Win API. If you happen to have pywin32 installed you can import the flag from the win32process module; otherwise you should define it yourself:
DETACHED_PROCESS = 0x00000008
pid = subprocess.Popen([sys.executable, "longtask.py"],
                       creationflags=DETACHED_PROCESS).pid
Use subprocess.Popen() with the close_fds=True parameter, which will allow the spawned subprocess to be detached from the Python process itself and continue running even after Python exits.
https://gist.github.com/yinjimmy/d6ad0742d03d54518e9f
import os, time, sys, subprocess

if len(sys.argv) == 2:
    time.sleep(5)
    print('track end')
    if sys.platform == 'darwin':
        subprocess.Popen(['say', 'hello'])
else:
    print('main begin')
    subprocess.Popen(['python', os.path.realpath(__file__), '0'], close_fds=True)
    print('main end')
Capture output and run in the background with threading
As mentioned in this answer, if you capture the output with stdout= and then try to read(), the process blocks.
However, there are cases where you need this. For example, I wanted to launch two processes that talk over a port between them, and save their stdout both to a log file and to my own stdout.
The threading module allows us to do that.
First, have a look at how to do the output redirection part alone in this question: Python Popen: Write to stdout AND log file simultaneously
Then:
main.py
#!/usr/bin/env python3
import os
import subprocess
import sys
import threading

def output_reader(proc, file):
    while True:
        byte = proc.stdout.read(1)
        if byte:
            sys.stdout.buffer.write(byte)
            sys.stdout.flush()
            file.buffer.write(byte)
        else:
            break

with subprocess.Popen(['./sleep.py', '0'], stdout=subprocess.PIPE, stderr=subprocess.PIPE) as proc1, \
     subprocess.Popen(['./sleep.py', '10'], stdout=subprocess.PIPE, stderr=subprocess.PIPE) as proc2, \
     open('log1.log', 'w') as file1, \
     open('log2.log', 'w') as file2:
    t1 = threading.Thread(target=output_reader, args=(proc1, file1))
    t2 = threading.Thread(target=output_reader, args=(proc2, file2))
    t1.start()
    t2.start()
    t1.join()
    t2.join()
sleep.py
#!/usr/bin/env python3
import sys
import time

for i in range(4):
    print(i + int(sys.argv[1]))
    sys.stdout.flush()
    time.sleep(0.5)
After running:
./main.py
stdout gets updated every 0.5 seconds, two lines at a time, to contain:
0
10
1
11
2
12
3
13
and each log file contains the respective log for a given process.
Inspired by: https://eli.thegreenplace.net/2017/interacting-with-a-long-running-child-process-in-python/
Tested on Ubuntu 18.04, Python 3.6.7.
You probably want to start investigating the os module for forking child processes (open an interactive session and issue help(os)). The relevant functions are fork and the exec family. To give you an idea of how to start, put something like this in a function that performs the fork (the function needs to take a list or tuple 'args' as an argument, containing the program's name and its parameters; you may also want to define stdin, stdout and stderr for the new process):
try:
    pid = os.fork()
except OSError as e:
    ## some debug output
    sys.exit(1)
if pid == 0:
    ## eventually use os.putenv(..) to set environment variables
    ## os.execv strips off args[0] for the arguments
    os.execv(args[0], args)
You can use
import os

pid = os.fork()
if pid == 0:
    # continue with other code ...

This will make the Python process run in the background.
I haven't tried this yet, but using .pyw files instead of .py files should help. A .pyw file doesn't have a console, so in theory it should not show a window and should work like a background process.

How to pass SIGINT to child process with Python subprocess.Popen() using shell=True

I am currently trying to write (in Python 2.7.3) a kind of wrapper for GDB that will allow me to dynamically switch from scripted input to interactive communication with GDB.
So far I use
self.process = subprocess.Popen(["gdb vuln"], stdin = subprocess.PIPE, shell = True)
to start gdb within my script. (vuln is the binary I want to examine)
Since a key feature of gdb is to pause the execution of the attached process and allow the user to inspect registers and memory on receiving SIGINT (Ctrl+C), I need some way to pass a SIGINT signal to it.
Neither
self.process.send_signal(signal.SIGINT)
nor
os.kill(self.process.pid, signal.SIGINT)
or
os.killpg(self.process.pid, signal.SIGINT)
work for me.
When I use one of these functions there is no response. I suppose this problem arises from the use of shell=True. However, at this point I am really out of ideas.
Even my old friend Google couldn't really help me out this time, so maybe you can help me. Thanks in advance.
Cheers, Mike
Here is what worked for me:
import signal
import subprocess

try:
    p = subprocess.Popen(...)
    p.wait()
except KeyboardInterrupt:
    p.send_signal(signal.SIGINT)
    p.wait()
I looked deeper into the problem and found some interesting things. Maybe these findings will help someone in the future.
When calling gdb vuln using subprocess.Popen(), it does in fact create three processes, where the pid returned is that of sh (5180).
ps -a
5180 pts/0 00:00:00 sh
5181 pts/0 00:00:00 gdb
5183 pts/0 00:00:00 vuln
Consequently, sending a SIGINT to that pid will in fact send SIGINT to sh.
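One way to sidestep the intermediate sh is to drop shell=True and pass the command as an argument list, so the pid returned by Popen is gdb's own. A minimal sketch (the ptrace behaviour described below still applies, so this alone may not be enough):

import signal
import subprocess

# No shell=True, so there is no /bin/sh in between: process.pid is gdb's pid
process = subprocess.Popen(["gdb", "vuln"], stdin=subprocess.PIPE)
process.send_signal(signal.SIGINT)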
Besides, I continued looking for an answer and stumbled upon this post
https://bugzilla.kernel.org/show_bug.cgi?id=9039
To keep it short, what is mentioned there is the following:
When pressing Ctrl+C while using gdb normally, SIGINT is in fact sent to the examined program (in this case vuln); ptrace then intercepts it and passes it to gdb.
What this means is that if I use self.process.send_signal(signal.SIGINT), it will never reach gdb this way.
Temporary Workaround:
I managed to work around this problem by simply calling subprocess.Popen() as follows:
subprocess.Popen("killall -s INT " + self.binary, shell = True)
This is nothing more than a first workaround. When multiple applications with the same name are running, it might do some serious damage. Besides, it somehow fails if shell=True is not set.
If someone has a better fix (e.g. how to get the pid of the process started by gdb), please let me know.
Cheers, Mike
EDIT:
Thanks to Mark for pointing out to look at the ppid of the process.
I managed to narrow down the processes to which SIGINT is sent using the following approach:
out = subprocess.check_output(['ps', '-Aefj'])
for line in out.splitlines():
    if self.binary in line:
        l = line.split(" ")
        while "" in l:
            l.remove("")
        # Get sid and pgid of child process (/bin/sh)
        sid = os.getsid(self.process.pid)
        pgid = os.getpgid(self.process.pid)
        # only true for target process
        if l[4] == str(sid) and l[3] != str(pgid):
            # l[1] is the PID column in ps -Aefj output
            os.kill(int(l[1]), signal.SIGINT)
I have done something like the following in the past, and if I recollect correctly it seemed to work for me:
def detach_process_group():
    os.setpgrp()

subprocess.Popen(command,
                 stdout=subprocess.PIPE,
                 stderr=subprocess.PIPE,
                 preexec_fn=detach_process_group)

Python Multiprocessing - sending inputs to child processes

I am using the multiprocessing module in Python to launch a few processes in parallel. These processes are independent of each other. They generate their own output and write the results to different files. Each process calls an external tool using the subprocess.call method.
It was working fine until I discovered an issue in the external tool where, due to some error condition, it goes into a 'prompt' mode and waits for user input. Now in my Python script I use the join method to wait until all the processes finish their tasks. This causes the whole thing to wait on the erroneous subprocess call. I could put a timeout on each process, but I do not know in advance how long each one is going to run, so this option is ruled out.
How do I figure out if any child process is waiting for user input, and how do I send an 'exit' command to it? Any pointers or suggestions to relevant modules in Python will be really appreciated.
My code here:
import subprocess
import sys
import os
import multiprocessing

def write_script(fname, e):
    f = open(fname, 'w')
    f.write("Some useful command calling external tool")
    f.close()
    subprocess.call(['chmod', '+x', os.path.abspath(fname)])
    return os.path.abspath(fname)

def run_use(mname, script):
    print("ssh " + mname + " " + script)
    subprocess.call(['ssh', mname, script])

if __name__ == '__main__':
    dict1 = {}
    dict1['mod1'] = ['pp1', 'ext2', 'les3', 'pw4']
    dict1['mod2'] = ['aaa', 'bbb', 'ccc', 'ddd']
    machines = ['machine1', 'machine2', 'machine3', 'machine4']
    log_file.write(str(dict1.keys()))  # log_file is assumed to be opened elsewhere
    for key in dict1.keys():
        arr = []
        for mod in dict1[key]:
            d = {}
            arr.append(mod)
            if (mod == dict1[key][-1]) or (len(arr) % 4 == 0):
                for i in range(0, len(arr)):
                    e = arr.pop()
                    script = write_script(e + "_temp.sh", e)
                    d[i] = multiprocessing.Process(target=run_use, args=(machines[i], script))
                    d[i].daemon = True
                for pp in d:
                    d[pp].start()
                for pp in d:
                    d[pp].join()
Since you're writing a shell script to run your subcommands, can you simply tell them to read input from /dev/null?
#!/bin/bash
# ...
my_other_command -a -b arg1 arg2 < /dev/null
# ...
This may stop them blocking on input and is a really simple solution. If this doesn't work for you, read on for some other options.
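If you'd rather not touch the generated scripts, the same effect is available from the Python side by pointing the child's stdin at /dev/null. A sketch with a placeholder command (subprocess.DEVNULL exists from Python 3.3; on Python 2 you can open os.devnull yourself):

import os
import subprocess

# Python 3.3+: reads from stdin hit EOF immediately instead of blocking
subprocess.call(['some_command', 'arg1'], stdin=subprocess.DEVNULL)

# Python 2 equivalent
with open(os.devnull, 'r') as devnull:
    subprocess.call(['some_command', 'arg1'], stdin=devnull)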
The subprocess.call() function is simply shorthand for constructing a subprocess.Popen instance and then calling the wait() method on it. So, your worker processes could instead create their own subprocess.Popen instances and poll them with the poll() method (in a loop with a suitable delay) instead of calling wait(). This leaves them free to remain in communication with the main process, so you can, for example, allow the main process to tell the child process to terminate the Popen instance with the terminate() or kill() methods and then itself exit.
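A minimal sketch of that poll() loop, with a placeholder command and delay:

import subprocess
import time

proc = subprocess.Popen(['some_command', 'arg1'])
while proc.poll() is None:  # None means the subprocess is still running
    # Stay responsive here: check for a message from the main process,
    # and call proc.terminate() or proc.kill() if told to stop.
    time.sleep(0.5)
print("exited with %d" % proc.returncode)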
So, the question is how does the child process tell whether the subprocess is awaiting user input, and that's a trickier question. I would say perhaps the easiest approach is to monitor the output of the subprocess and search for the user input prompt, assuming that it always uses some string that you can look for. Alternatively, if the subprocess is expected to generate output continually then you could simply look for any output and if a configured amount of time goes past without any output then you declare that process dead and terminate it as detailed above.
Since you're reading the output, you don't actually need poll() or wait(); the process closing its output file descriptor is enough to know that it has terminated in this case.
Here's an example of a modified run_use() method which watches the output of the subprocess:
def run_use(mname, script):
    print("ssh " + mname + " " + script)
    proc = subprocess.Popen(['ssh', mname, script], stdout=subprocess.PIPE)
    for line in proc.stdout:
        if "UserPrompt>>>" in line:
            proc.terminate()
            break
In this example we assume that the process either gets hung up on UserPrompt>>> (replace with the appropriate string) or terminates naturally. If it were to get stuck in an infinite loop, for example, then your script would still not terminate; you can only really address that with an overall timeout, but you didn't seem keen to do that. Hopefully your subprocess won't misbehave in that way, however.
Finally, if you don't know in advance the prompt that will be given by your process, then your job is rather harder. Effectively what you're asking to do is monitor an external process and know when it's blocked reading on a file descriptor, and I don't believe there's a particularly clean solution to this. You could consider running the process under strace or similar, but that's quite an awful hack and I really wouldn't recommend it. Things like strace are great for manual diagnostics, but they really shouldn't be part of a production setup.

Python: fork, pipe and exec

I want to execute a program from a Python application; it will run in the background but eventually come to the foreground.
A GUI is used to interact with it, but control is offered via a console on stdin and stdout. I want to be able to control it using my application's GUI, so my first idea was:
Fork
in the parent, dup2 stdin and stdout in order to access them
exec the child
Is this easily implementable in python and how? Are there alternative ways to achieve what I want, what would that be?
First, the Python subprocess module is the correct answer.
As a subprocess example:
import subprocess
x = subprocess.check_output(["echo","one","two","three"])
Here x will be the output (a Python 3 bytes object; use x.decode('utf-8') for a string).
Note that this will NOT duplicate stderr. If you need stderr as well, you can do something like:
x = subprocess.check_output(["bash","-c", 'echo foo; echo bar >&2'],stderr=subprocess.STDOUT)
Of course, there are many other ways of capturing stderr, including to a different output variable.
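For instance, a minimal sketch that captures stdout and stderr into separate variables via subprocess.Popen:

import subprocess

proc = subprocess.Popen(["bash", "-c", "echo foo; echo bar >&2"],
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = proc.communicate()
# out == b'foo\n' and err == b'bar\n' (bytes in Python 3)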
Using direct control
However, if you are doing something tricky and need to have direct control, examine the code below:
import os
import sys

rside, wside = os.pipe()
if not os.fork():
    # Child
    os.close(rside)
    # Make stdout go to parent
    os.dup2(wside, 1)
    # Make stderr go to parent
    os.dup2(wside, 2)
    # Optionally make stdin come from nowhere
    devnull = os.open("/dev/null", os.O_RDONLY)
    os.dup2(devnull, 0)
    # Execute the desired program
    os.execve("/bin/bash", ["/bin/bash", "-c", "echo stdout; echo stderr >&2"], os.environ)
    print("Failed to exec program!")
    sys.exit(1)

# Parent
os.close(wside)
pyrside = os.fdopen(rside)
for line in pyrside:
    print("Child (stdout or stderr) said: <%s>" % line)

# Prevent zombies! Reap the child after exit
pid, status = os.waitpid(-1, 0)
print("Child exited: pid %d returned %d" % (pid, status))
Note: #Beginner's answer is flawed in a few ways: it includes os._exit(0), which immediately causes the child to exit, rendering everything else pointless; it has no os.execve(), missing the primary goal of the question; and it offers no way to access the child's stdout/stderr, another goal of the question.
This is reasonably easy using the standard Python subprocess module:
http://docs.python.org/py3k/library/subprocess.html
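For the console-controlled program described in the question, a minimal sketch might look like this ('some_console_program' is a placeholder; note that many programs block-buffer their output when not attached to a terminal, which is where pty-based approaches, as in the last question below, come in):

import subprocess

# Keep handles to the program's console
proc = subprocess.Popen(['some_console_program'],
                        stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE)
proc.stdin.write(b'command\n')   # send a control command
proc.stdin.flush()
reply = proc.stdout.readline()   # read its response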
It is not very complex to build. Check this example:
import os

if os.fork():
    os._exit(0)
os.setsid()
os.chdir("/")
fd = os.open("/dev/null", os.O_RDWR)
os.dup2(fd, 0)
os.dup2(fd, 1)
os.dup2(fd, 2)
if fd > 2:
    os.close(fd)

This Python code forks and exits the parent, starts a new session, changes the working directory, and redirects stdin, stdout and stderr to /dev/null, detaching the process.

Is there anything like Python's pty.fork for Ruby?

I'm trying to port some Python code like the following to Ruby:
import pty, os

pid, fd = pty.fork()
if pid == 0:
    # figure out what to launch
    cmd = get_command_based_on_user_input()
    # now replace the forked process with the command
    os.execvp(cmd[0], cmd)  # assuming cmd is an argv list
else:
    # read and write to fd like a terminal
    pass
Since I need to read and write to the subprocess like a terminal, I understand that I should use Ruby's PTY module in lieu of Kernel.fork. But it does not seem to have an equivalent fork method; I must pass a command as a string. This is the closest I can get to Python's functionality:
require 'pty'

# The Ruby executable, ready to execute some code
RUBY = %Q|/proc/#{Process.pid}/exe -e "%s"|

# A small Ruby program which will eventually replace itself with another program. Very meta.
cmd = "cmd = get_command_based_on_user_input(); exec(cmd)"

r, w, pid = PTY.spawn(RUBY % cmd)
# Read and write from r and w
Obviously some of that is Linux-specific, and that's fine. And obviously some is pseudo-code, but it's the only approach I can find, and I'm only 80% sure that it will work anyway. Surely Ruby has something cleaner?
The important thing is that "get_command_based_on_user_input()" not block the parent process, which is why I stuck it in the child process.
You're probably looking for http://ruby-doc.org/stdlib-1.9.2/libdoc/pty/rdoc/PTY.html, http://www.ruby-doc.org/core-1.9.3/Process.html#method-c-fork and Create a daemon with double-fork in Ruby.
I'd open a PTY in the master process, fork, and reattach the child to said PTY with STDIN.reopen.
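A sketch of that approach in Ruby (get_command_based_on_user_input is the asker's placeholder):

require 'pty'

master, slave = PTY.open
pid = fork do
  # Child: attach the slave side of the PTY as its terminal
  STDIN.reopen(slave)
  STDOUT.reopen(slave)
  STDERR.reopen(slave)
  master.close
  exec(get_command_based_on_user_input)
end
slave.close
# Parent: read and write to master like a terminal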
