Piping a command with seemingly several separate outputs to a file - Python

I am using speedtest-cli to get my internet speed in a python script. I would like to run this command in shell via subprocess.Popen.
Here is the command in terminal:
`speedtest-cli --share > output.log`
speedtest-cli runs the test, whilst --share provides me with an additional link in the output, pointing to an image of the speedtest result. Here is the content of output.log:
Retrieving speedtest.net configuration...
Testing from M-net (xxx.xxx.xxx.xxx)...
Retrieving speedtest.net server list...
Selecting best server based on ping...
Hosted by XXXXXXXXXXXXXXXXXXXXXX [16.12 km]: 20.902 ms
Testing download speed................................................................................
Download: 48.32 Mbit/s
Testing upload speed......................................................................................................
Upload: 12.49 Mbit/s
Share results: http://www.speedtest.net/result/670483456.png
If I run the command in terminal, I get all the test results as well as the link in the target file as expected. I confirmed it is all stdout and not another channel by using this grep trick: command | grep .
I am trying to run it in Python as follows:
subprocess.Popen(['speedtest-cli', '--share', '>', 'output.log'],
                 stdout=subprocess.PIPE, shell=True)
...and I also tried putting the output into the file directly via python:
with open('output.log', 'w') as f:
    Popen(['speedtest-cli', '--json', '--share'], stdout=f, shell=True)
Neither of these work. I get a nice file created with the latter approach, BUT the link is not included! (the last line in the output above).
Against all the warnings about deprecation and the better safety of the subprocess module, I became desperate and tried os.system():
os.system('speedtest-cli --share > output.log')
Annoyingly, this works... the full output along with the link is captured in the file.
What is going on here? How do I get the link to be captured using Popen?
I'm using Python 3.5

When using shell=True, your argument to Popen needs to be a string, not a list:
subprocess.Popen('speedtest-cli --json --share > output.log',
                 stdout=subprocess.PIPE, shell=True)
Compare:
>>> subprocess.Popen('echo hello', shell=True)
>>> hello
And:
>>> subprocess.Popen(['echo', 'hello'], shell=True)
>>>
When you pass a list with shell=True, only the first item is used as the command; the remaining items are effectively ignored (they become arguments to the shell itself, not to your command).
If you want to collect the output yourself, consider subprocess.check_output:
>>> output = subprocess.check_output(['echo', 'hello'])
>>> output
b'hello\n'
Or:
>>> output = subprocess.check_output('echo hello', shell=True)
The check_output function works with both Python 2 and Python 3. In Python 3.5 and later you also have the run function available.
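Applied to your original goal, a minimal sketch without the shell might look like this (assuming speedtest-cli is on your PATH): pass the arguments as a list, drop shell=True, and let Python write stdout to the file itself, so no '>' redirection is needed and the --share flag actually reaches speedtest-cli.

import subprocess

# A sketch, not the only way: Python opens the log file and hands it
# to the child process as its stdout.
with open('output.log', 'w') as f:
    subprocess.run(['speedtest-cli', '--share'], stdout=f)

With subprocess.run (Python 3.5+) the call also waits for the test to finish before your script moves on.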

Related

SSH to remote system and run python script and get output

I am playing with ssh and running Python scripts with the help of these answers - Run local python script on remote server.
For connecting the server using ssh, I am currently using the subprocess from python
# ssh command: ssh user@machine python < script.py - arg1 arg2
output = subprocess.Popen(f"ssh user@machine python3 < script.py", shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE).communicate()
Now I get the output of the script as a tuple of byte strings, which is hard to decode; the output also contains other information like warnings.
I tried decoding the output, but the result does not look great.
Is there any other way to get the output back from ssh? For example, if script.py prints a Python dictionary or returns JSON data, can I get it back in the same format and save it in the output variable?
Thanks
There are a couple of ways to do this with subprocess, depending on how up to date your Python is.
In general, any byte string in Python can be turned into 'proper' text by decoding it. In your example, the command returns a tuple of (stdout, stderr), so we need to pick one of those tuple elements with output[0] or output[1]:
>>> output = subprocess.Popen(f"ssh git@github.com", shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE).communicate()
>>> output[1].decode('utf-8')
"PTY allocation request failed on channel 0\r\nHi Chris! You've successfully authenticated, but GitHub does not provide shell access.\nConnection to github.com closed.\r\n"
You can tidy that up further by including the universal_newlines=True option.
Assuming you are on Python 3.7 or later, you can simplify your command so that you capture decoded output straight away:
>>> subprocess.run(['ssh', 'git@github.com'], capture_output=True, text=True).stderr
"PTY allocation request failed on channel 0\nHi Chris! You've successfully authenticated, but GitHub does not provide shell access.\nConnection to github.com closed.\n"
If you are struggling with the escaped newlines (\n) one approach would be to split them so you get an array of lines instead:
>>> output = subprocess.run(['ssh', 'git@github.com'], capture_output=True, text=True, universal_newlines=True).stderr.split("\n")
>>> output
['PTY allocation request failed on channel 0', "Hi Chris! You've successfully authenticated, but GitHub does not provide shell access.", 'Connection to github.com closed.', '']
>>> for line in output:
...     print(line)
PTY allocation request failed on channel 0
Hi Chris! You've successfully authenticated, but GitHub does not provide shell access.
Connection to github.com closed.
print() will happily accept an array of terms to print, normally separated by a space. But you can override the separator:
print(*output, sep="\n")
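For the dictionary/JSON part of the question, one option is to have the remote script print JSON and parse it on the local side. A sketch, assuming the same placeholder host and script as above, and that script.py ends with something like print(json.dumps(result)):

import json
import subprocess

# Hypothetical placeholders: user@machine and script.py as in the question.
# capture_output/text need Python 3.7+.
proc = subprocess.run(
    "ssh user@machine python3 < script.py",
    shell=True, capture_output=True, text=True,
)

data = json.loads(proc.stdout)  # stdout should carry the JSON; ssh warnings usually go to stderr
print(data)

json.loads turns the printed JSON back into a real Python dict, which avoids parsing the raw byte strings yourself.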

Subprocess command throwing error while converting docx to pdf

I'm running the following code on my Windows machine using Python's subprocess module.
import re
import subprocess
args = ["abiword --to=pdf '{}'".format('test.docx')]
process = subprocess.run(args, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True, timeout=None)
print(process.stdout.decode())
filename = re.search('-> (.*?) using filter', process.stdout.decode())
print(filename.group(1))
But subprocess.run gives the following error:
b'\'"abiword --to=pdf \'test.docx\'"\' is not recognized as an internal or external command,\r\noperable program or batch file.\r\n'
How to resolve this error and how should I proceed now?
Also, is it correct to use abiword command in my windows machine? I want to convert my docx to pdf without installing any third party software like libreoffice.
You have 2 different problems here.
The first one is simple to solve: you are not giving the correct parameters to subprocess. The first parameter (args) can be either a string containing the full command line, or a list containing the command and its parameters as separate elements. So you should use either:
args = "abiword --to=pdf '{}'".format('test.docx')
(a simple string and not a list) or:
args = ["abiword", "--to=pdf", '{}'.format('test.docx')]
The second one is that until you can generate the PDF by entering the abiword --to=pdf command yourself in a CMD.exe console window, the same command launched with subprocess.run will not give better results...
Use the full path to abiword: /usr/bin/abiword
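Putting those points together, a sketch of the corrected call could look like this (assuming abiword itself is installed and runnable, whether found via PATH or given as a full path):

import re
import subprocess

# List form: no shell involved, each argument is its own element.
args = ["abiword", "--to=pdf", "test.docx"]
process = subprocess.run(args, stdout=subprocess.PIPE, stderr=subprocess.PIPE)

out = process.stdout.decode()
print(out)
match = re.search(r"-> (.*?) using filter", out)
if match:
    print(match.group(1))

Whether abiword's progress message actually appears on stdout or stderr is worth checking on your machine; if the regex finds nothing, try process.stderr.decode() as well.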

Use of '&' in a python subprocess for background process

Are there any differences between the following 2 lines:
subprocess.Popen(command + '> output.txt', shell=True)
subprocess.Popen(command +' &> output.txt', shell=True)
As Popen already runs the command in the background, should I use & at all? Does using & ensure that the command keeps running even after the Python script finishes executing?
Please let me know the difference between the two lines and also suggest which of the two is better.
Thanks.
&> redirects standard error to the same destination as standard output, which means both the command's normal output and its error output will be written to the output.txt file.
Using > alone redirects only standard output to output.txt; standard error can be redirected separately with command 2> error.txt.
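If the redirection is all you need, a hedged alternative is to let Python do it without the shell at all; a sketch, with command shown as a placeholder list of arguments:

import subprocess

command = ["some-command", "--with-args"]  # placeholder for your actual command

with open("output.txt", "w") as out:
    # stderr=subprocess.STDOUT merges stderr into the same file,
    # roughly what '&>' does in bash.
    subprocess.Popen(command, stdout=out, stderr=subprocess.STDOUT)

One thing to be aware of: &> is a bash extension, while shell=True typically runs /bin/sh, so doing the redirection in Python avoids relying on it.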

Python subprocess with ffmpeg gives no output

I want to extract the scene change timestamps using the scene change detection from ffmpeg. I have to run it on a few hundred videos, so I wanted to use a Python subprocess to loop over all the content of a folder.
My problem is that the command I was using to get these values on a single video involves piping the output to a file, which does not seem to be an option from inside a subprocess call.
This is my code:
p=subprocess.check_output(["ffmpeg", "-i", sourcedir+"/"+name+".mpg","-filter:v", "select='gt(scene,0.4)',showinfo\"","-f","null","-","2>","output"])
This one tells me ffmpeg needs an output.
output = "./result/"+name
p=subprocess.check_output(["ffmpeg", "-i", sourcedir+"/"+name+".mpg","-filter:v", "select='gt(scene,0.4)',metadata=print:file=output","-an","-f","null","-"])
This one gives me no error but doesn't create the file.
This is the original command that I use directly with ffmpeg:
ffmpeg -i input.flv -filter:v "select='gt(scene,0.4)',showinfo" -f null - 2> ffout
I just need the output of this command to be written to a file; does anyone see how I could make it work?
Is there a better way than subprocess? Or just another way? Would it be easier in C?
You can redirect the stderr output directly from Python, without any need for shell=True, which can lead to shell injection.
It's as simple as:
with open(output_path, 'w') as f:
    subprocess.check_call(cmd, stderr=f)
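Here cmd is just the ffmpeg invocation as a list; a sketch based on the command in your question, with sourcedir and name standing in as placeholders for the variables from your loop:

import subprocess

sourcedir = "/path/to/videos"  # placeholder, as in your loop
name = "input"                 # placeholder video name without extension

cmd = [
    "ffmpeg", "-i", sourcedir + "/" + name + ".mpg",
    "-filter:v", "select='gt(scene,0.4)',showinfo",
    "-f", "null", "-",
]

# showinfo lines go to ffmpeg's log, i.e. stderr, so that is the stream
# written to the file (the equivalent of '2> ffout' in your shell command).
with open("./result/" + name, "w") as f:
    subprocess.check_call(cmd, stderr=f)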
Things are easier in your case if you use the shell argument of subprocess, and it should behave the same. When using shell=True, you can pass in the command as a single string rather than a list of args.
cmd = "ffmpeg -i {0} -filter:v \"select='gt(scene,0.4)',showinfo\" -f {1} - 2> ffout".format(inputName, outputFile)
p=subprocess.check_output(cmd, shell=True)
If you want to pass in different arguments, you can easily format the string.

How do I let Python see what Terminal outputs when I enter a command?

I want to write a Python program on macOS Sierra that checks what Terminal outputs after I automatically enter a command into it. For example, I would write in Terminal:
$ pwd
and then Terminal would output something like:
/Users/username
How would I have Python scan what Terminal outputs and set it to a variable as a string?
>>> output = (whatever Terminal outputs)
>>> print(output)
"/Users/username"
By the way, the other forums do not explain in much detail how one would do this on macOS, so this is not a duplicate of any of them.
You could pipe the output to a file and read the file.
$ pwd > output.txt
Then read the file and take further actions based on its contents.
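A sketch of that approach from Python (assuming the shell does the redirection and the script then reads the file back in):

import subprocess

# Let the shell write the command's output to a file...
subprocess.call("pwd > output.txt", shell=True)

# ...then read it back into a Python string.
with open("output.txt") as f:
    output = f.read().strip()

print(output)  # e.g. /Users/username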
Use the subprocess module; it has some shortcut functions that make things easier and less complicated than using Popen.
>>> import subprocess
>>> output = subprocess.check_output("pwd")
>>> print(output)
b'L:\\\r\n'
You can decode this using output.decode("UTF-8") if you like, or you can use the universal_newlines keyword argument to have that done automatically, along with sorting out the newlines.
>>> subprocess.check_output("pwd", universal_newlines=True)
'L:\\\n'
Edit: Following @Silvio's sensible suggestion, when passing all the arguments yourself you can do the following:
subprocess.check_output(["ls", "-l"])
Or, if you have a string sourced from elsewhere, you can call .split(), which will generate a list of substrings split on whitespace.
subprocess.check_output("ls -l /".split())
Note: I'm using Python 3 on Windows with GNU on Windows, so I have \r\n line endings and a pwd command available.