How to eliminate standard output of subprocess.Popen in Python? - python

When I do something like this in Python:
ping = subprocess.Popen("ping -n 1 %s" % ip, stdout=subprocess.PIPE)
it always prints to the screen:
(subprocess.Popen object at 0x.... )
That is a bit annoying. Do you know how to avoid that output?

It looks like you're trying to get the stdout of the process by printing the ping variable in your example. That is not how subprocess.Popen objects are used; to read the output correctly, you would write:
ping_process = subprocess.Popen(['ping', '-c', '1', ip], stdout=subprocess.PIPE)
# This will block until ping exits
stdout = ping_process.stdout.read()
print stdout
I also changed the arguments you used for ping, because -n is an invalid argument on my machine's implementation.
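If you simply want to suppress the output rather than read it, a minimal sketch (Python 3.3+, where subprocess.DEVNULL is available; ip is the variable from your question) could look like this:
import subprocess

# Discard the child's output entirely instead of capturing it
ping_process = subprocess.Popen(['ping', '-c', '1', ip],
                                stdout=subprocess.DEVNULL,
                                stderr=subprocess.DEVNULL)
ping_process.wait()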

Related

What is the difference between subprocess.run & subprocess.check_output?

I am trying to run two simple commands using subprocess.run, store the result in a variable and then print it, but for one argument subprocess.run produces output while for the other it is empty.
The arguments are "help" and "adb devices".
This is the call that returns output:
result = subprocess.run("help", capture_output=True, text=True, universal_newlines=True)
print(result.stdout)
but the same call with the other argument returns nothing:
result = subprocess.run("adb devices", capture_output=True, text=True, universal_newlines=True)
print(result.stdout)
If I try the same command with subprocess.check_output, it returns the output. Can anyone explain what exactly is going on here?
Are there specific usage scenarios for these two functions, i.e. when to use which one?
c = subprocess.check_output(
    "adb devices", shell=True, stderr=subprocess.STDOUT)
print(c)
output - b'List of devices attached\r\n\r\n'
This is explained in the Python documentation for the run method:
run expects its first parameter to be a sequence of arguments, not a single string (unless you also pass shell=True).
So you can try passing the arguments as a list:
result = subprocess.run(['adb', 'devices'], capture_output=True, text=True, universal_newlines=True)
Also, your check_output call passes shell=True, so the whole string is handed to the shell, which is why the multi-word command works there.
If you want to use the run method without a list, add shell=True to its parameters as well. (I tried it with the "man ls" command and it worked.)
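A minimal sketch contrasting the two equivalent forms (Python 3.7+, assuming adb is on PATH):
import subprocess

# List of arguments, no shell involved
result = subprocess.run(['adb', 'devices'], capture_output=True, text=True)
print(result.stdout)

# Single string, handed to the shell
out = subprocess.check_output('adb devices', shell=True, text=True)
print(out)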

Multiple shell commands in python (Windows)

I'm working on a Windows machine and I want to set a variable in the shell and then use it in another shell command, like:
set variable = abc
echo %variable%
I know that I could do this using os.system('com1 && com2'), but I also know that this is considered bad style and that it should be possible using the subprocess module; I just don't get how.
Here is what I got so far:
proc = Popen('set variable=abc', shell=True, stdin=PIPE, stdout=PIPE, stderr=STDOUT)
proc.communicate(input=b'echo %variable%')
But neither line seems to work, both commands don't get executed. Also, if I type in nonexisting commands, I don't get an error. How is the proper way to do it?
Popen executes a single command or shell script. You can simply provide the whole shell script as a single argument, using ; to separate the different commands (on Windows cmd the separator is & rather than ;):
proc = Popen('set variable=abc;echo %variable%', shell=True)
Or you can actually just use a multiline string:
>>> from subprocess import call
>>> call('''echo 1
... echo 2
... ''', shell=True)
1
2
0
The final 0 is the return code of the process.
The communicate method is used to write to the stdin of the process. In your case the process ends immediately after running set variable, so the call to communicate doesn't really do anything.
You could spawn a shell and then use communicate to write the commands:
>>> proc = Popen(['sh'], stdin=PIPE, stdout=PIPE, stderr=PIPE)
>>> proc.communicate('echo 1; echo 2\n')
('1\n2\n', '')
Note that communicate also closes the streams when it is done, so you cannot call it multiple times. If you want an interactive session you have to write directly to proc.stdin and read from proc.stdout.
By the way: you can specify an env parameter to Popen so depending on the circumstances you may want to do this instead:
proc = Popen(['echo', '%variable%'], env={'variable': 'abc'})
Obviously this is going to use the echo executable and not the shell built-in, but it avoids using shell=True.
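On Windows specifically, a minimal sketch of the env-based approach (assuming the goal is just to make %variable% visible to the child shell) could look like this:
import os
import subprocess

# Pass the variable through the child's environment; cmd.exe expands %variable%
env = dict(os.environ, variable='abc')
subprocess.run('echo %variable%', shell=True, env=env)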

Python : Redirecting subprocess Popen stdout to log file

I have a Python process running with a logger object configured to write logs to a log file.
Now I am trying to call a Scala script from this Python process using the subprocess module.
subprocess.Popen(scala_run_command, stdout=subprocess.PIPE, shell=True)
The issue is that whenever the Python process exits, it hangs the shell, which comes back to life only after explicitly running the stty sane command. My guess is that this happens because the Scala script writes to the shell, and something in its stdout causes the shell to lose its sanity.
For the same reason, I wanted the output of the Scala script to be captured in my default log file, but I have not managed to make that happen despite trying several approaches.
So the question boils down to: how do I get the stdout of a shell command run through the subprocess module into a log file? Even if there is a better way to achieve this than subprocess.run, I would love to hear the ideas.
The current state of code looks like this.
__echo_command = 'echo ":load %s"'
__spark_console_command = 'spark;'

def run_scala_script(self, script):
    echo_command = self.__echo_command % script
    spark_console_command = self.__spark_console_command
    echo_result = subprocess.run(echo_command, stdout=subprocess.PIPE, shell=True)
    result = subprocess.run(spark_console_command, stdout=subprocess.PIPE, shell=True, input=echo_result.stdout)
    logger.info('Scala script %s completed successfully' % script)
    logger.info(result.stdout)
Use
p = subprocess.Popen(...)
followed by
stdout, stderr = p.communicate()
and then stdout and stderr will contain the output bytes from the subprocess's output streams (provided you created the process with stdout=subprocess.PIPE and, if you also want stderr, stderr=subprocess.PIPE). You can then log the stdout value.
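If the goal is purely to get the child's output into a log file, a minimal sketch (the file name scala_run.log is hypothetical; scala_run_command is the variable from your question) is to hand an open file object to the subprocess directly:
import subprocess

# Redirect the child's stdout and stderr straight into the log file,
# so nothing reaches the terminal
with open('scala_run.log', 'ab') as log_file:
    subprocess.run(scala_run_command, shell=True,
                   stdout=log_file, stderr=subprocess.STDOUT)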

Can Powershell read code from stdin?

I'm trying to run a Powershell subprocess from Python. I need to send Powershell code from Python to the child process. I've got this far:
import subprocess
import time
args = ["powershell", "-NoProfile", "-InputFormat None", "-NonInteractive"]
startTime = time.time()
process = subprocess.Popen(args, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
process.stdin.write("Write-Host 'FINISHED';".encode("utf-8"))
result = ''
while 'FINISHED' not in result:
    result += process.stdout.read(32).decode('utf-8')
    if time.time() > startTime + 5:
        raise TimeoutError(result)
print(result)
This times out, because nothing ever gets written to stdout. I think the Write-Host cmdlet never gets executed. Even the simple bash/Cygwin code echo "Write-Host 'FINISHED';" | powershell doesn't seem to do the job.
For comparison, sending the code block using the -Command flag works correctly.
How can I convince Powershell to run the code which I'm sending to stdin?
There are a couple of things you can consider:
Invoke PowerShell in a mode where you provide it with a script file which it should execute. Write this script file prior to calling the subprocess, and use the -File <FilePath> parameter for PowerShell (cf. the docs).
If you really want to go with the stdin technique, you might be missing a newline character after the command. If that does not help, you might need to send another control character that tells PowerShell that input EOF has been reached; consult the PowerShell docs to find out how to terminate commands on stdin. One thing you definitely need is the -Command - argument: the value of -Command can be "-", a string, or a script block, and if the value is "-", the command text is read from standard input. You may also want to look at this little hack: https://stackoverflow.com/a/13877874/145400
If you only want to execute one command, you can simplify your code by using out, err = process.communicate(input), as shown in the sketch below.
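A minimal sketch of that simplified approach (assuming powershell is on PATH), combining -Command - with communicate:
import subprocess

args = ["powershell", "-NoProfile", "-NonInteractive", "-Command", "-"]
proc = subprocess.Popen(args, stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
# communicate() writes the script, closes stdin (EOF), and waits for exit
out, err = proc.communicate(b"Write-Host 'FINISHED';\r\n")
print(out.decode("utf-8"))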
I had trouble with a similar task, but I was able to solve it.
First my example code:
import subprocess
args = ["powershell.exe", "-Command", r"-"]
process = subprocess.Popen(args, stdin = subprocess.PIPE, stdout = subprocess.PIPE)
process.stdin.write(b"$data = Get-ChildItem C:\\temp\r\n")
process.stdin.write(b"Write-Host 'Finished 1st command'\r\n")
process.stdin.write(b"$data | Export-Clixml -Path c:\\temp\state.xml\r\n")
process.stdin.write(b"Write-Host 'Finished 2nd command'\r\n")
output = process.communicate()[0]
print(output.decode("utf-8"))
print("done")
The main issue was getting the argument list args right. PowerShell has to be started with the -Command flag, followed by "-", as indicated by Jan-Philipp.
Another mystery was the end-of-line character that is required to get the commands executed. \r\n works quite well.
Getting the output of PowerShell in real time is still an issue, but if you don't care about that, you can collect the output after all commands have finished by calling
output = process.communicate()[0]
However, the PowerShell process will be terminated afterwards.

Python - pipelining subprocess in Windows

I'm using Windows 7, and I've tried this under Python 2.6.6 and Python 3.2.
So I'm trying to call this command line from Python:
netstat -ano | find ":80"
under Windows cmd, this line works perfectly fine.
So,
1st attempt:
output = subprocess.Popen(
    [r'netstat -ano | find ":80"'],
    stdout=subprocess.PIPE,
    shell=True
).communicate()
An error is raised saying that find did not receive a correct parameter (as if it had been called as find ":80" \):
Access denied - \
2nd attempt:
# calling netstat
cmd_netstat = subprocess.Popen(
    ['netstat', '-ano'],
    stdout=subprocess.PIPE
)
# pipelining netstat result into find
cmd_find = subprocess.Popen(
    ['find', '":80"'],
    stdin=cmd_netstat.stdout,
    stdout=subprocess.PIPE
)
Again, the same error is raised.
Access denied - \
What did I do wrong? :(
EDIT:
3rd attempt (As #Pavel Repin suggested):
cmd_netstat = subprocess.Popen(
    ['cmd.exe', '-c', 'netstat -ano | find ":80"'],
    stdout=subprocess.PIPE
).communicate()
Unfortunately, subprocess with ['cmd.exe', '-c'] results in something resembling a deadlock, or a blank cmd window. I assume '-c' is ignored by cmd, resulting in communicate() waiting indefinitely for cmd to terminate. Since this is Windows, my best bet is that cmd only accepts parameters starting with a slash (/). So I substituted '-c' with '/c':
cmd_netstat = subprocess.Popen(
    ['cmd.exe', '/c', 'netstat -ano | find ":80"'],
    stdout=subprocess.PIPE
).communicate()
And...back to the same error:
Access denied - \
EDIT:
I gave up, I'll just process the string returned by 'netstat -ano' in Python. Might this be a bug?
What I suggest is that you do as much as possible inside Python code. You can execute the following command:
# executing the command
import subprocess
output = subprocess.Popen(['netstat', '-ano'], stdout=subprocess.PIPE).communicate()
and then by parsing the output:
# filtering the output
valid_lines = [ line for line in output[0].split('\r\n') if ':80' in line ]
You will get a list of lines. On my computer, the output looks like this for port number 1900 (no HTTP connection active):
[' UDP 127.0.0.1:1900 *:* 1388', ' UDP 192.xxx.xxx.233:1900 *:* 1388']
In my opinion, this is easier to work with.
Note that:
the shell=True option is not mandatory, but a command-line window is opened and closed quickly. See what suits you best, but beware of command injection;
the list of Popen arguments should be a list of strings. Quoting the list parts is not necessary; subprocess takes care of it for you.
Hope this helps.
EDIT: oops, I missed the last line of the edit. Seems you've already got the idea on your own.
So I revisited this question and found two solutions. (I switched to Python 2.7 some time ago, so I'm not sure about Python 2.6, but it should be the same.)
Replace find with findstr, and remove the double quotes:
output = subprocess.Popen(['netstat', '-ano', '|', 'findstr', ':80'],
                          stdout=subprocess.PIPE,
                          shell=True).communicate()
But this doesn't explain why "find" cannot be used, so:
Use a string parameter instead of a list:
output = subprocess.Popen('netstat -ano | find ":80"',
                          stdout=subprocess.PIPE,
                          shell=True).communicate()
or
pipeout = subprocess.Popen(['netstat', '-ano'],
                           stdout=subprocess.PIPE)
output = subprocess.Popen('find ":80"',
                          stdin=pipeout.stdout,
                          stdout=subprocess.PIPE).communicate()
The problem arises from the fact that ['find', '":80"'] is actually translated into ['find', '\":80\"'].
Thus the following command is executed in the Windows command shell:
>find \":80\"
Access denied - \
Proof:
Running:
output = subprocess.Popen(['echo', 'find', '":80"'],
                          stdout=subprocess.PIPE,
                          shell=True).communicate()
print output[0]
returns:
find \":80\"
Running:
output = subprocess.Popen('echo find ":80"',
                          stdout=subprocess.PIPE,
                          shell=True).communicate()
print output[0]
returns:
find ":80"
New answer, after reading this old question again: this may be due to the two following facts:
The pipe operator executes the subsequent commands in a sub-shell (see for instance this interesting consequence).
Python itself uses the pipe as a way to get the results back:
Note that (...) to get anything other than None in the result tuple, you need to give stdout=PIPE and/or stderr=PIPE too.
Not sure if this 'conflict' is kind of a bug, or a design choice though.
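As an aside, a common way to replace a shell pipeline is to wire two Popen objects together yourself. A minimal sketch using findstr (which sidesteps the find quoting issue) could look like this:
import subprocess

# First process writes to a pipe, second process reads from it
p1 = subprocess.Popen(['netstat', '-ano'], stdout=subprocess.PIPE)
p2 = subprocess.Popen(['findstr', ':80'], stdin=p1.stdout, stdout=subprocess.PIPE)
p1.stdout.close()  # let p1 get a pipe error if p2 exits early
output = p2.communicate()[0]
print(output.decode(errors='replace'))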
