Writing files into chroot environment - python

I'm trying to write data to files in a chroot environment. Since I'm a non-root user, the only way I can communicate with the chroot is via the schroot command.
Currently I'm using the following trick to write the data.
$ schroot -c chroot_session -r -d /tmp -- bash -c "echo \"$text\" > file.txt"
But I'm sure this will give me a lot of grief if $text contains special characters, quotes, etc. So what's a better way of sending $text into the chroot? Most probably I'll be using the above command through a Python script. Is there a simpler method?

Kinda hackish, but…
import os
import ConfigParser

c = ConfigParser.RawConfigParser()
c.readfp(open(os.path.join('/var/lib/schroot/session', chroot_session), 'r'))
chroot_basedir = c.get(chroot_session, 'mount-location')
# join with a relative path: os.path.join discards the base if the second part is absolute
with open(os.path.join(chroot_basedir, 'tmp/file.txt'), 'w') as fp:
    fp.write(text)
Okay, so privileges don't let you get in by any method other than schroot, huh?
import subprocess
import sys

p = subprocess.Popen(['schroot', '-c', name, '-r', 'tee', '/tmp/file.txt'],
                     stdin=subprocess.PIPE,
                     stdout=open('/dev/null', 'w'),
                     stderr=sys.stderr)
p.stdin.write(text)
p.stdin.close()
rc = p.wait()
assert rc == 0

You can use Python to write $text into a file it does have permission to write, then copy that file into the chroot as file.txt, as sketched below.
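A minimal sketch of that approach, assuming the session name is in chroot_session and that the host's /tmp is visible inside the session (schroot's default fstab usually bind-mounts it); the temporary-file handling is illustrative:
import subprocess
import tempfile

# write the text to a file we are allowed to write, outside the chroot
with tempfile.NamedTemporaryFile('w', delete=False) as tmp:
    tmp.write(text)
    tmp_path = tmp.name

# copy it into place through the existing schroot session
subprocess.check_call(['schroot', '-c', chroot_session, '-r', '--',
                       'cp', tmp_path, '/tmp/file.txt'])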

Related

stdout of subprocess.Popen not working correctly

I am unable to save the output of subprocess.Popen correctly. I get this in the file I chose. The directory specified is correct, as just above I told it to erase the text already existing in it, which worked. Any solutions to this?
Code is below
import subprocess

f = open("hunter_logs.txt", "w")
subp = subprocess.Popen(
    r'docker run -p 5001-5110:5001-5110/udp -v D:\Hunter\hunter\hunter-scenarios:/hunter-scenarios europe-west3-docker.pkg.dev/hunter-all/controller-repo/hunter_controller:latest -d /hunter-scenarios -s croatia -i OPFOR',
    stdout=f)
Probably the process is outputting some of its logs to stderr and some to stdout. Add stderr=f as another argument to Popen() in order to capture both streams to the same file.
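A minimal sketch of that fix, with the docker command abbreviated:
import subprocess

with open("hunter_logs.txt", "w") as f:
    # capture both streams in the same log file
    subp = subprocess.Popen('docker run ...', stdout=f, stderr=f)
    subp.wait()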

python subprocess with ffmpeg gives no output

I want to extract scene-change timestamps using ffmpeg's scene change detection. I have to run it on a few hundred videos, so I wanted to use a Python subprocess to loop over all the content of a folder.
My problem is that the command I was using to get these values on a single video involves redirecting the output to a file, which doesn't seem to be an option from inside a subprocess call.
This is my code:
p = subprocess.check_output(["ffmpeg", "-i", sourcedir + "/" + name + ".mpg",
                             "-filter:v", "select='gt(scene,0.4)',showinfo\"",
                             "-f", "null", "-", "2>", "output"])
This one tells me that ffmpeg needs an output.
output = "./result/" + name
p = subprocess.check_output(["ffmpeg", "-i", sourcedir + "/" + name + ".mpg",
                             "-filter:v", "select='gt(scene,0.4)',metadata=print:file=output",
                             "-an", "-f", "null", "-"])
This one gives me no error but doesn't create the file.
This is the original command that I use directly with ffmpeg:
ffmpeg -i input.flv -filter:v "select='gt(scene,0.4)',showinfo" -f null - 2> ffout
I just need the output of this command to be written to a file. Does anyone see how I could make it work?
Is there a better way than subprocess, or just another way? Would it be easier in C?
You can redirect the stderr output directly from Python, without any need for shell=True, which can lead to shell injection.
It's as simple as:
import subprocess

# cmd is the ffmpeg invocation as a list of arguments
with open(output_path, 'w') as f:
    subprocess.check_call(cmd, stderr=f)
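For the question's concrete command, that might look like the following sketch (sourcedir and name are the asker's variables; the output path is illustrative). Note that ffmpeg writes the showinfo log to stderr, which is why stderr is the stream to capture:
import subprocess

src = sourcedir + "/" + name + ".mpg"
out = "./result/" + name

with open(out, 'w') as f:
    subprocess.check_call(
        ["ffmpeg", "-i", src,
         "-filter:v", "select='gt(scene,0.4)',showinfo",
         "-f", "null", "-"],
        stderr=f)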
Things are easier in your case if you use the shell argument of subprocess, and it should behave the same. With the shell, you can pass in a string as the command rather than a list of args.
cmd = "ffmpeg -i {0} -filter:v \"select='gt(scene,0.4)',showinfo\" -f null - 2> {1}".format(inputName, outputFile)
p = subprocess.check_output(cmd, shell=True)
If you want to pass arguments, you can easily format the string, as shown above.

Running bash command on server

I am trying to run the bash command pdfcrack in Python on a remote server. This is my code:
bashCommand = "pdfcrack -f pdf123.pdf > myoutput.txt"
import subprocess
process = subprocess.Popen(bashCommand.split(), stdout=subprocess.PIPE)
output, error = process.communicate()
I, however, get the following error message:
Non-option argument myoutput2.txt
Error: file > not found
Can anybody see my mistake?
The first argument to Popen is a list containing the command name and its arguments. > is not an argument to the command, though; it is shell syntax. You could simply pass the entire line to Popen and instruct it to use the shell to execute it:
process = subprocess.Popen(bashCommand, shell=True)
(Note that since you are redirecting the output of the command to a file, though, there is no reason to set its standard output to a pipe, because there will be nothing to read.)
A better solution, though, is to let Python handle the redirection.
process = subprocess.Popen(['pdfcrack', '-f', 'pdf123.pdf'], stdout=subprocess.PIPE)
with open('myoutput.txt', 'wb') as fh:  # the pipe yields bytes, so open the file in binary mode
    for line in process.stdout:
        fh.write(line)
        # Do whatever else you want with line
Also, don't use str.split as a replacement for the shell's word splitting. A valid command line like pdfcrack -f "foo bar.pdf" would be split into the incorrect list ['pdfcrack', '-f', '"foo', 'bar.pdf"'], rather than the correct list ['pdfcrack', '-f', 'foo bar.pdf'].
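If you do need to split a command string the way the shell would, shlex.split from the standard library handles the quoting; a quick illustration:
import shlex

print(shlex.split('pdfcrack -f "foo bar.pdf"'))
# ['pdfcrack', '-f', 'foo bar.pdf']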
> is interpreted by the shell, but is not valid otherwise.
So, that would work (don't split, use as-is):
process = subprocess.Popen(bashCommand, shell=True)
(and stdout=subprocess.PIPE isn't useful since all output is redirected to the output file)
But it would be better to use native Python for the redirection to the output file, passing the arguments as a list (which handles quoting for you if needed):
with open("myoutput.txt","w") as f:
process = subprocess.Popen(["pdfcrack","-f","pdf123.pdf"], stdout=subprocess.PIPE)
f.write(process.read())
process.wait()
Your mistake is the > in the command.
It isn't treated as a redirection to a file, because normally bash does that, and here you run the command without bash.
Try shell=True if you want to use bash. Then you don't have to split the command into a list.
subprocess.Popen("pdfcrack -f pdf123.pdf > myoutput.txt", shell=True)

Bash process substitution in Python with Popen

I'm attempting to create a looped video file by calling ffmpeg from the python subprocess library. Here's the part that's giving me problems:
import subprocess as sp
sp.Popen(['ffmpeg', '-f', 'concat', '-i', "<(for f in ~/Desktop/*.mp4; do echo \"file \'$f\'\"; done)", "-c", "copy", "~/Desktop/sample3.mp4"])
With the above code I'm getting the following error:
<(for f in /home/delta/Desktop/*.mp4; do echo "file '$f'"; done): No such file or directory
I did find a similarly phrased question here, but I'm not sure how its solution might apply to my issue.
Following the advice in the comments and looking elsewhere, I ended up changing the code to this:
sp.Popen("ffmpeg -f concat -i <(for f in ~/Desktop/*.mp4; do echo \"file \'$f\'\"; done) -c copy ~/Desktop/sample3.mp4",
shell=True, executable="/bin/bash")
--which works fine. – moorej
If you need to parameterize input and output files, consider breaking out your parameters:
import os
import subprocess as sp

# sample variables
inputDirectory = os.path.expanduser('~/Desktop')
outputDirectory = os.path.expanduser('~/dest.mp4')

sp.Popen(['''ffmpeg -f concat -i <(for f in "$1"/*; do
              echo "file '$f'";
            done) -c copy "$2" ''',
          'bash',           # this becomes $0
          inputDirectory,   # this becomes $1
          outputDirectory,  # this becomes $2
          ], shell=True, executable="/bin/bash")
...as this ensures that your code won't do untoward things even when given an input directory with a hostile name like /uploads/$(rm -rf ~)'$(rm -rf ~)'. (ffmpeg is likely to fail to parse an input file with such a name, and if there's any video in the current working directory you don't want included, we'd need to know the escaping rules ffmpeg uses to keep it out; but it's far better for ffmpeg to fail than to execute arbitrary code.)

Any way to execute a piped command in Python using subprocess module, without using shell=True?

I want to run a piped command-line linux/bash command from Python, which first tars files and then splits the tar file. The command would look something like this in bash:
> tar -cvf - path_to_archive/* | split -b 20m -d -a 5 - "archive.tar.split"
I know that I could execute it using subprocess, by settings shell=True, and submitting the whole command as a string, like so:
import subprocess
subprocess.call("tar -cvf - path_to_archive/* | split -b 20m -d -a 5 - 'archive.tar.split'", shell=True)
...but for security reasons I would like to find a way to skip the shell=True part, passing a list of strings rather than a full command-line string; that form, however, cannot handle the pipe character. Is there any solution for this in Python? I.e., is it possible to set up linked pipes somehow, or some other solution?
If you want to avoid using shell=True, you can manually use subprocess pipes.
from subprocess import Popen, PIPE
p1 = Popen(["tar", "-cvf", "-", "path_to_archive"], stdout=PIPE)
p2 = Popen(["split", "-b", "20m", "-d", "-a", "5", "-", "archive.tar.split"],
           stdin=p1.stdout, stdout=PIPE)
p1.stdout.close()  # let p1 receive SIGPIPE if p2 exits first
output = p2.communicate()[0]
Note that if you do not use the shell, you will not have access to expansion of globbing characters like *. Instead you can use the glob module.
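A small sketch of that replacement, assuming the files to archive live under path_to_archive/:
import glob
from subprocess import Popen, PIPE

# expand the * ourselves, since no shell is involved
files = glob.glob('path_to_archive/*')
p1 = Popen(['tar', '-cvf', '-'] + files, stdout=PIPE)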
tar can split itself:
tar -cM -L 1000000 -F name-script.sh -f split.tar largefile1 largefile2 ...
name-script.sh
#!/bin/bash
echo "${TAR_ARCHIVE/_part*.tar/}"_part"${TAR_VOLUME}".tar >&"${TAR_FD}"
To re-assemble:
tar -xM -F name-script.sh -f split.tar
You can invoke this from your Python program, as sketched below.
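Invoked from Python, that could look like this sketch (file names as in the example above):
import subprocess

# create a multi-volume archive, switching volumes via name-script.sh
subprocess.check_call(['tar', '-cM', '-L', '1000000',
                       '-F', 'name-script.sh',
                       '-f', 'split.tar',
                       'largefile1', 'largefile2'])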
Is there any reason you can't use tarfile? http://docs.python.org/library/tarfile.html
import tarfile
tar = tarfile.open("sample.tar.gz")
tar.extractall()
tar.close()
Just write to a file-like object using tarfile rather than invoking subprocess.
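For the asker's direction (creating rather than extracting), a minimal sketch might be (the paths are the question's placeholders; splitting the result into 20 MB chunks would still need to be done separately, e.g. by reading the archive back in fixed-size blocks):
import tarfile

# build the archive purely in Python, no shell needed
with tarfile.open('archive.tar', 'w') as tar:
    tar.add('path_to_archive')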
Shameless plug: I wrote a subprocess wrapper for easier command piping in Python:
https://github.com/houqp/shell.py
Example:
shell.ex("tar -cvf - path_to_archive") | "split -b 20m -d -a 5 - 'archive.tar.split'"
