Python subprocess.call() doesn't write content to file

Using Python 2.7 on a Raspberry Pi B+, I want to run the command "raspistill -o image.jpg" from Python, and I found that this is the recommended way:
from subprocess import call
call(["raspistill","-o image.jpg"])
However, this doesn't work: image.jpg isn't created, even though outside Python
raspistill -o image.jpg
does create the file.
My next try was to first create the image file and then write to it:
f = open("image.jpg","w")
call(["raspistill","-o image.jpg"], stdout = f)
Now the image file is created, but nothing is written to it: its size remains 0. So how can I get this to work?
Thank you.

You are passing -o image.jpg as a single argument. You should pass them as two separate arguments. Here is how:
call(["raspistill", "-o", "image.jpg"])
The way you did it is like running raspistill "-o image.jpg" from the command line, which will likely result in an error.
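To see the difference, compare the argument vectors the two forms produce (the comments show the resulting argv):
call(["raspistill", "-o image.jpg"])     # argv: ['raspistill', '-o image.jpg']    -- one argument, wrong
call(["raspistill", "-o", "image.jpg"])  # argv: ['raspistill', '-o', 'image.jpg'] -- two arguments, right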

First, you're creating and truncating the file image.jpg:
f = open("image.jpg","w")
Then you're sending raspistill's stdout to that same file:
call(["raspistill","-o image.jpg"], stdout = f)
When you eventually get around to closing the file in Python, image.jpg will hold just whatever raspistill wrote to stdout. Or, if you never close it, it'll be that minus whatever was still sitting in the buffer, which may be nothing at all.
Meanwhile, you're also trying to get raspistill to create a file with the same name, by passing it as part of the -o argument. You're doing that wrong, as Ionut Hulub's answer explains. Some programs will treat "-o image.jpg", "-oimage.jpg", and "-o", "image.jpg" as meaning the same thing; some won't. But even if this one does, at best you've now got two programs fighting over what file gets created and written as image.jpg.
If raspistill has an option to write the still to stdout, then you can use that option, together with passing stdout=f, and making sure to close the file. Or, if it has an option to write to a filename, then you can use that option. But doing both is not going to work.
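A minimal sketch of the two non-conflicting approaches; the second variant assumes your build of raspistill accepts - as the output name to mean stdout (newer raspistill documentation describes this, but verify it on your build):
from subprocess import call

# Option 1 (preferred): let raspistill create the file itself.
rc = call(["raspistill", "-o", "image.jpg"])
if rc != 0:
    print("raspistill failed with exit code %d" % rc)

# Option 2: capture stdout yourself, and make sure the file gets closed.
with open("image.jpg", "wb") as f:
    call(["raspistill", "-o", "-"], stdout=f)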

If you don't know how to split the command, you can use shlex.split. For example,
>>> import shlex
>>> args = shlex.split('raspistill -o image.jpg')
>>> args
['raspistill', '-o', 'image.jpg']
>>> call(args)
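Note that shlex.split only tokenizes the string; it doesn't perform shell features such as glob expansion or redirection, so tokens like *.py or 2> output (both of which come up in the related questions below) still need separate handling.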

Related

How to put the output of ffmpeg into a pipe in Python? [duplicate]

I can successfully redirect my output to a file, however this appears to overwrite the file's existing data:
import subprocess
outfile = open('test','w') #same with "w" or "a" as opening mode
outfile.write('Hello')
subprocess.Popen('ls',stdout=outfile)
will remove the 'Hello' line from the file.
I guess a workaround is to store the output elsewhere as a string or something (it won't be too long) and append it manually with outfile.write(thestring), but I was wondering if I'm missing something in the module that facilitates this.
You sure can append the output of subprocess.Popen to a file, and I make daily use of it. Here's how I do it:
log = open('some file.txt', 'a') # so that data written to it will be appended
c = subprocess.Popen(['dir', '/p'], stdout=log, stderr=log, shell=True)
(of course, this is a dummy example, I'm not using subprocess to list files...)
By the way, other file-like objects (anything with a write() method, in particular) could replace this log object, so you can buffer the output and do whatever you want with it (write to a file, display it, etc.) [but this is not so easy in practice, see the comment below: subprocess needs a real file descriptor to write to].
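If you do want to buffer the output in Python rather than hand subprocess a real file, a sketch using a pipe (the command here is just illustrative):
import subprocess

p = subprocess.Popen(['ls', '-l'], stdout=subprocess.PIPE,
                     stderr=subprocess.STDOUT)
out, _ = p.communicate()                  # out is a bytes string

with open('some file.txt', 'ab') as log:  # binary append, since out is bytes
    log.write(b'some text, as header of the file\n')
    log.write(out)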
Note: what may be misleading is that the subprocess output can end up in the file before what you wrote from Python. The reason is buffering: Python's write() goes into a buffer, while the child process writes straight to the underlying file descriptor. So, here's the way to use this:
log = open('some file.txt', 'a')
log.write('some text, as header of the file\n')
log.flush() # <-- here's something not to forget!
c = subprocess.Popen(['dir', '/p'], stdout=log, stderr=log, shell=True)
So the hint is: do not forget to flush the output!
Well, the point is that if you want the header to actually be a header, you need to flush before the rest of the output is written to the file :D
Is the data in the file really overwritten? On my Linux host I see the following behavior:
1) running your code in a separate directory gives:
$ cat test
test
test.py
test.py~
Hello
2) if I add outfile.flush() after outfile.write('Hello'), the result is slightly different:
$ cat test
Hello
test
test.py
test.py~
But the output file contains Hello in both cases. Without an explicit flush() call, the file object's buffer is flushed only when the Python process terminates.
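For reference, a minimal version of the experiment (hypothetical file name 'test', as in the question):
import subprocess

outfile = open('test', 'w')
outfile.write('Hello\n')
outfile.flush()              # without this line, 'Hello' lands after the ls output
subprocess.call('ls', stdout=outfile)
outfile.close()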
Where is the problem?

python subprocess with ffmpeg gives no output

I want to extract scene-change timestamps using ffmpeg's scene change detection. I have to run it on a few hundred videos, so I wanted to use a Python subprocess to loop over all the contents of a folder.
My problem is that the command I was using to get these values on a single video involves redirecting the output to a file, which doesn't seem to be an option from inside a subprocess call.
This is my code:
p=subprocess.check_output(["ffmpeg", "-i", sourcedir+"/"+name+".mpg","-filter:v", "select='gt(scene,0.4)',showinfo\"","-f","null","-","2>","output"])
This one makes ffmpeg complain that it needs an output:
output = "./result/"+name
p=subprocess.check_output(["ffmpeg", "-i", sourcedir+"/"+name+".mpg","-filter:v", "select='gt(scene,0.4)',metadata=print:file=output","-an","-f","null","-"])
This one gives me no error but doesn't create the file.
This is the original command that I use directly with ffmpeg:
ffmpeg -i input.flv -filter:v "select='gt(scene,0.4)',showinfo" -f null - 2> ffout
I just need the output of this command to be written to a file. Does anyone see how I could make it work?
Is there a better way than subprocess? Or just another way? Would it be easier in C?
You can redirect the stderr output directly from Python, without any need for shell=True, which can lead to shell injection.
It's as simple as:
with open(output_path, 'w') as f:
    subprocess.check_call(cmd, stderr=f)
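A sketch of the full call, reusing sourcedir and name from the question and dropping the shell redirection (the showinfo lines arrive on stderr, which is exactly what gets redirected):
import subprocess

src = sourcedir + "/" + name + ".mpg"
out = "./result/" + name
cmd = ["ffmpeg", "-i", src,
       "-filter:v", "select='gt(scene,0.4)',showinfo",
       "-f", "null", "-"]
with open(out, "w") as f:
    subprocess.check_call(cmd, stderr=f)   # the showinfo log goes to stderr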
Things are easier in your case if you use the shell argument of subprocess, and it should behave the same. With shell=True, you can pass the command as a single string rather than a list of args.
cmd = "ffmpeg -i {0} -filter:v \"select='gt(scene,0.4)',showinfo\" -f {1} - 2> ffout".format(inputName, outputFile)
p=subprocess.check_output(cmd, shell=True)
If you want to vary the arguments, you can simply format the string, as shown above.

How does multitail buffer its output?

This may not be the best wording for the question. I am trying to see 2 files at once on my screen.
I run:
multitail ~/path/to/somefile.err ~/path/to/somefile.out
I have a python script with the following lines:
sys.stdout = open('~/path/to/somefile.out', 'a')
sys.stderr = open('~/path/to/somefile.err', 'a')
My multitail command seems to only output my .out file, regardless of which order I put the files in the command.
I verified that my script is indeed writing to the files. What is also interesting is that when I run the following command:
echo "text" >> ~/path/to/somefile.err
All of a sudden I see all the output from the .err file in the multitail screen (including that which didn't show up before)!
What is going on here that I can not see?
P.S. this is my first time using multitail so maybe I overlooked something simple. If it means anything, I am using CentOS 7.
You need to pass either buffering=0 (for unbuffered) or buffering=1 (for line-buffered - probably what you want) in your call to open.
The default is buffering=-1, which is equivalent to a system-dependent block size (something like buffering=512), so nothing is written to the file until that many bytes have accumulated.
Alternatively, you could leave buffering set to its default value, and call .flush() every time you want the data to appear in the file.
When you use >> in the shell, that will close the file when the command exits, and closing implies a flush. (You can defer the close by using exec >> file.txt)
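A minimal sketch of the line-buffered variant, reusing the paths from the question (note that open() does not expand ~, hence the expanduser calls):
import os
import sys

# buffering=1: each complete line is flushed to the file immediately,
# so multitail picks it up right away
sys.stdout = open(os.path.expanduser('~/path/to/somefile.out'), 'a', 1)
sys.stderr = open(os.path.expanduser('~/path/to/somefile.err'), 'a', 1)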

command wrapped in os.system is ignored

(On OS X 10.10.1.) I am trying to use a paired-end merger (Casper) within a Python script. I'm using os.system (I don't want to use the subprocess or pexpect modules). Here is the line in my script that doesn't work:
os.system("casper %s %s -o %s"%(filein[0],filein[1],fileout))
#filein[0]: input file 1
#filein[1]: input file 2
#fileout: output prefix (default==casper)
Once my script is launched, only the first two string parameters of this command are interpreted, not the third one, so the output file gets the default prefix name. Since my function iterates through a lot of fastq files, they all get merged into a single "casper.fastq" file.
I tried messing with the part of the command that doesn't work (right after -o), putting in a meaningless string, and it still executes with no error and the default output. Here is the "messed up" line:
os.system("casper %s %s -ldkfnlqdskgfno %s"%(filein[0],filein[1],fileout))
Could anybody help in understanding what the heck is going on?
Print the command before executing it, to check whether it was formatted correctly (e.g. file names may need to be quoted).
Then execute the printed command directly in a shell to see if it is misinterpreted.
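A sketch of that debugging step, reusing filein and fileout from the question; pipes.quote (shlex.quote on Python 3) guards against file names that need quoting:
import os
import pipes  # on Python 3, use shlex.quote instead

cmd = "casper %s %s -o %s" % (pipes.quote(filein[0]),
                              pipes.quote(filein[1]),
                              pipes.quote(fileout))
print(cmd)       # inspect the exact string the shell will run
os.system(cmd)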

Python long listing directory (ls -l), ls *

I'm trying to do an ls -l from Python, to check the last modification date of a file.
os.listdir doesn't show the long list format.
subprocess.call shows the long format, but it prints the listing to the terminal and returns 0. I want to be able to put it in a variable. Any ideas?
Also, I tried
subprocess.call("ls","*.py")
which answers
ls: cannot access *.py: No such file or directory
It works with shell=True, but I'd appreciate it if someone could explain why it doesn't work without it. If you know how to make it work, even better.
It doesn't work without shell=True because * is a shell expansion character: going from *.py to a list of files ending in .py is done by the shell itself, not by ls or Python.
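To see the difference (a sketch):
import subprocess

# Without a shell, *.py is passed to ls literally:
subprocess.call(["ls", "-l", "*.py"])      # ls: cannot access *.py: No such file or directory

# With shell=True, the shell expands the glob before ls runs:
subprocess.call("ls -l *.py", shell=True)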
If you want to get the output of a command invoked via subprocess, you should use subprocess.check_output() or subprocess.Popen.
ls_output = subprocess.check_output(['ls', '-l'])
With nice formatting:
import subprocess
print(subprocess.check_output(['ls', '-lh']).decode('utf-8'))
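Since the actual goal is the file's last modification date, you can also skip ls entirely; a sketch using os.stat and glob (the *.py pattern is just an example):
import glob
import os
import time

for path in glob.glob('*.py'):        # glob expansion done in Python, no shell needed
    mtime = os.stat(path).st_mtime    # modification time, seconds since the epoch
    print('%s  %s' % (path, time.strftime('%Y-%m-%d %H:%M:%S', time.localtime(mtime))))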
