Are there any differences between the following 2 lines:
subprocess.Popen(command + '> output.txt', shell=True)
subprocess.Popen(command +' &> output.txt', shell=True)
Since Popen already runs the command in the background, should I use & at all? Does using & ensure that the command keeps running even after the Python script finishes executing?
Please explain the difference between the two lines and suggest which one is better.
Thanks.
&> redirects standard error to the same destination as standard output, which means both the command's normal output and its error messages will be written to the output.txt file.
Using > alone sends only standard output to output.txt; standard error can be redirected separately with command > output.txt 2> error.txt
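For reference, the same redirections can be done without a shell by opening the files in Python and passing them as stdout/stderr. A minimal sketch ('command' and 'arg1' are placeholders, not from the question):
import subprocess

# "command > output.txt 2> error.txt": separate files, no shell needed
with open('output.txt', 'w') as out, open('error.txt', 'w') as err:
    subprocess.Popen(['command', 'arg1'], stdout=out, stderr=err)

# "command &> output.txt": both streams into the same file
with open('output.txt', 'w') as out:
    subprocess.Popen(['command', 'arg1'], stdout=out, stderr=subprocess.STDOUT)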
I want to extract scene-change timestamps using ffmpeg's scene change detection. I have to run it on a few hundred videos, so I wanted to use a Python subprocess to loop over all the contents of a folder.
My problem is that the command I was using to get these values for a single video involves redirecting the output to a file, which does not seem to be an option from inside a subprocess call.
This is my code:
p=subprocess.check_output(["ffmpeg", "-i", sourcedir+"/"+name+".mpg","-filter:v", "select='gt(scene,0.4)',showinfo\"","-f","null","-","2>","output"])
This one tells me ffmpeg needs an output.
output = "./result/"+name
p=subprocess.check_output(["ffmpeg", "-i", sourcedir+"/"+name+".mpg","-filter:v", "select='gt(scene,0.4)',metadata=print:file=output","-an","-f","null","-"])
This one gives me no error but doesn't create the file.
This is the original command that I use directly with ffmpeg:
ffmpeg -i input.flv -filter:v "select='gt(scene,0.4)',showinfo" -f null - 2> ffout
I just need the output of this command to be written to a file. Does anyone see how I could make it work?
Is there a better way than subprocess, or just another way? Would it be easier in C?
You can redirect the stderr output directly from Python, without any need for shell=True (which can lead to shell injection).
It's as simple as:
with open(output_path, 'w') as f:
    subprocess.check_call(cmd, stderr=f)
Things are easier in your case if you use the shell argument of subprocess, and it should behave the same. When using shell=True, you can pass the command as a string rather than a list of args.
cmd = "ffmpeg -i {0} -filter:v \"select='gt(scene,0.4)',showinfo\" -f {1} - 2> ffout".format(inputName, outputFile)
p=subprocess.check_output(cmd, shell=True)
If you want to pass arguments, you can easily format your string, as in the sketch below.
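A rough sketch of how this could look inside the loop over the folder (sourcedir, name and the ./result directory come from the question; the listdir loop and the .mpg filter are assumptions):
import os
import subprocess

for fname in os.listdir(sourcedir):  # sourcedir comes from the question
    name, ext = os.path.splitext(fname)
    if ext != ".mpg":
        continue
    logfile = "./result/" + name  # one log file per video; assumes ./result exists
    cmd = ("ffmpeg -i \"{0}\" -filter:v \"select='gt(scene,0.4)',showinfo\" "
           "-f null - 2> \"{1}\"").format(sourcedir + "/" + fname, logfile)
    subprocess.check_output(cmd, shell=True)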
I am using speedtest-cli to get my internet speed in a python script. I would like to run this command in shell via subprocess.Popen.
Here is the command in terminal:
`speedtest-cli --share > output.log`
speedtest-cli runs the test, whilst --share provides me with an additional link in the output, pointing to an image of the speedtest result. Here is the content of output.log:
Retrieving speedtest.net configuration...
Testing from M-net (xxx.xxx.xxx.xxx)...
Retrieving speedtest.net server list...
Selecting best server based on ping...
Hosted by XXXXXXXXXXXXXXXXXXXXXX [16.12 km]: 20.902 ms
Testing download speed................................................................................
Download: 48.32 Mbit/s
Testing upload speed......................................................................................................
Upload: 12.49 Mbit/s
Share results: http://www.speedtest.net/result/670483456.png
If I run the command in terminal, I get all the test results as well as the link in the target file as expected. I confirmed it is all stdout and not another channel by using this grep trick: command | grep .
I am trying to run it in Python as follows:
subprocess.Popen(['speedtest-cli', '--share', '>', 'output.log'],
                 stdout=subprocess.PIPE, shell=True)
...and I also tried putting the output into the file directly via python:
with open('output.log', 'w') as f:
    Popen(['speedtest-cli', '--json', '--share'], stdout=f, shell=True)
Neither of these work. I get a nice file created with the latter approach, BUT the link is not included! (the last line in the output above).
Despite all the warnings about deprecation and the better safety of the subprocess module, I became desperate and tried os.system():
os.system('speedtest-cli --share > output.log')
Annoyingly, this works... the full output along with the link is captured in the file.
What is going on here? How do I get the link to be captured using Popen?
I'm using Python 3.5
When using shell=True, your argument to Popen needs to be a string, not a list:
subprocess.Popen('speedtest-cli --json --share > output.log',
                 stdout=subprocess.PIPE, shell=True)
Compare:
>>> subprocess.Popen('echo hello', shell=True)
>>> hello
And:
>>> subprocess.Popen(['echo', 'hello'], shell=True)
>>>
When you pass a list with shell=True, only the first item is treated as the command; the remaining items are passed to the shell itself rather than to your command, so they are effectively ignored.
If you want to collect the output yourself, consider subprocess.check_output:
>>> output = subprocess.check_output(['echo', 'hello'])
>>> output
b'hello\n'
Or:
>>> output = subprocess.check_output('echo hello', shell=True)
check_output works with both Python 2 and Python 3. In Python 3.5+, the run function is also available.
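For the original problem, dropping shell=True and passing the list directly should also get everything into the file, since --share is then actually passed to speedtest-cli. A minimal sketch using run (Python 3.5+):
import subprocess

# The list form passes --share to speedtest-cli itself (no shell involved),
# so the share link printed on stdout ends up in the file as well.
with open('output.log', 'w') as f:
    subprocess.run(['speedtest-cli', '--share'], stdout=f)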
I'm trying to run a Perl script from Python. I know that if I run the Perl script in the terminal and want its output written to a file, I need to add > results.txt after perl myCode.pl. This works fine in the terminal, but when I try to do it from Python it doesn't work.
This is the code:
import shlex
import subprocess
args_str = "perl myCode.pl > results.txt"
args = shlex.split(args_str)
subprocess.call(args)
Despite the > results.txt, nothing is written to that file; the output goes to the command line instead.
subprocess.call("perl myCode.pl >results.txt", shell=True)
or
subprocess.call(["sh", "-c", "perl myCode.pl >results.txt"])
or
with open('results.txt', 'wb', 0) as file:
    subprocess.call(["perl", "myCode.pl"], stdout=file)
The first two invoke a shell to execute the shell command perl myCode.pl > results.txt. The last one executes perl directly by having call do the redirection itself. This is the more reliable solution.
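On Python 3.5+ the same approach can be written with subprocess.run, which can also raise if the Perl script exits with an error. A minimal sketch:
import subprocess

# Redirect the Perl script's stdout to results.txt and raise if it exits non-zero.
with open('results.txt', 'wb') as outfile:
    subprocess.run(['perl', 'myCode.pl'], stdout=outfile, check=True)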
So I know how to execute a Python script and have its output shown in the command window using os.system or subprocess. I also know that the output can be captured into a text file via os.system('my command > me.txt').
My question here is:
Is there a way to do both with one command, i.e. execute a Python script, capture the output to a text file, AND have it show in the command window?
If this is not possible here is another question:
The command I want to execute takes up to 8 minutes to finish and writes up to 300 lines to a text file. Can I access the text file while the command is still executing, in order to get some data without waiting for it to finish? For example, could I read the file every minute?
If neither is possible then this would also do the job:
When my command executes successfully it prints a "Done" statement in the cmd window, along with many other lines. Can I check in the cmd window whether that "Done" string was printed, or does it need to be captured in a text file for that search to happen?
The easiest way to save the output of a command while echoing it to stdout is to use tee:
$ python script.py | tee logfile.log
if you want to follow the output of a file while it is being written use tail:
$ tail -f logfile
You might want to unbuffer the command or flush its output immediately, so you can read it before a full line or buffer has accumulated:
$ unbuffer python script.py > logfile
$ tail -f logfile
or
print ("to file..", flush = True)
If you can do this from within your script rather than from the command line it would be quite easy.
with open("output.txt", "w") as output:
print>>output, "what you want as output" #prints to output file
print "what you want as output" #prints to screen
An easier way I devised is to create a function that prints both to the screen and to a file. The example below works when you pass the output file name as an argument:
OutputFile = args.Output_File
if args.Output_File:
    OF = open(OutputFile, 'w')

def printing(text):
    print text
    if args.Output_File:
        OF.write(text + "\n")

# To print a line_of_text both to screen and file, all you need to do is:
printing(line_of_text)
In Windows, I am running a bat script that currently ends with a 'pause' and prompts the user to 'Press any key to continue...'
I am unable to edit the file in this scenario and I need the script to terminate instead of hang waiting for input that will never come. Is there a way I can run this that will disable or circumvent the prompt?
I have tried piping in input and it does not seem to help. This script is being run from python via subprocess.Popen.
Try executing cmd.exe /c YourCmdFile < nul
where YourCmdFile is the full path to your batch script.
subprocess.call("mybat.bat", stdin=subprocess.DEVNULL)
This would call mybat.bat and redirect its input from nul on Windows (which disables the pause, as shown in the other answers).
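Since the question mentions the script is launched via subprocess.Popen, the same redirection should work there as well. A sketch (the batch path is a placeholder):
import subprocess

# Same idea with Popen: an empty stdin means "pause" hits EOF instead of waiting.
p = subprocess.Popen(["cmd.exe", "/c", r"C:\path\to\mybat.bat"],
                     stdin=subprocess.DEVNULL)
p.wait()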
This one turned out to be a bit of a pain. The redirect of nul from Maximus worked great, thanks!
As for getting that to work in python, it came down to the following. I started with:
BINARY = "C:/Program Files/foo/bar.exe"
subprocess.call([BINARY])
I tried to add the redirection, but subprocess.call escapes everything too well and we lose the redirect.
subprocess.call([BINARY + " < nul"])
subprocess.call([BINARY, " < nul"])
subprocess.call([BINARY, "<", "nul"])
Using shell=True didn't work because the space in BINARY made it choke trying to find the executable.
subprocess.call([BINARY + " < nul"], shell=True)
In the end, I had to fall back on os.system and do the quoting myself to get the redirection.
os.system(quote(BINARY) + " < nul")
Not ideal, but it gets the job done.
If anyone knows how to get the last subprocess example to work with the space in the binary path, it would be much appreciated!
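Two things that might work here, as untested sketches using the same BINARY placeholder as above: skip the shell entirely and hand the process an empty stdin, or keep the shell but quote the path yourself so the space doesn't split it.
import subprocess

BINARY = "C:/Program Files/foo/bar.exe"

# Option 1: no shell at all; DEVNULL plays the role of "< nul" (Python 3.3+).
subprocess.call([BINARY], stdin=subprocess.DEVNULL)

# Option 2: keep the shell, but quote the path so the space doesn't split it.
subprocess.call('"{}" < nul'.format(BINARY), shell=True)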
For me the following code works:
p = Popen("c:\script.bat <nul", cwd="c:\")