At the moment I am using two separate shell commands to get the job done.
1) Listing the current directory and saving it as a .html file (first listing only the root directory, followed by the complete listing)
tree -L 1 -dH ./ >> /Volumes/BD/BD-V1.html && tree -H ./ >> /Volumes/BD/BD-V1.html
2) Using sed to remove unwanted lines (I'm on a Mac)
sed -i '' '/by Francesc Rocher/d' /Volumes/BD/BD-V1.html && sed -i '' '/by Steve Baker/d' /Volumes/BD/BD-V1.html && sed -i '' '/by Florian Sesser/d' /Volumes/BD/BD-V1.html
Now I want to combine them into a single script that takes the file path as user input. I was trying to do it with Python, but with no success:
import subprocess
subprocess.call(["tree", "-d", "-L", "1"])
The above can list the directory, but I couldn't save the output (I have to do this inside Python). I tried something like this, but it didn't work:
import subprocess

file = open('out.txt', 'w')
variation_string = subprocess.call(["tree", "-d", "-L", "1"])
file.write(variation_string)  # fails here
file.close()
Also, I'm not sure how to implement the sed part :(
Edit: I'm a beginner.
You can simply redirect stdout to a file object:
from subprocess import check_call
with open("out.txt", "w") as f:
    check_call(["tree", "-d", "-L", "1"], stdout=f)
In your code you are essentially writing the return code to the file, since that is what call returns; that raises an error because write expects a string, not an integer. If you want to store the output of a command, use check_output.
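For example, check_output returns the command's stdout directly as bytes (a small sketch; the Python interpreter itself stands in for tree here so the snippet runs anywhere):

```python
import subprocess
import sys

# check_output runs the command and returns its stdout as bytes.
# A portable stand-in for ["tree", "-d", "-L", "1"]:
output = subprocess.check_output([sys.executable, "-c", "print('demo output')"])
print(output.decode())  # prints "demo output"
```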
You can do this with the subprocess module: create another process that runs your command, then communicate with it. This gives you the output.
import subprocess

# ...
command = "tree -d -L 1"
process = subprocess.Popen(command.split(), stdout=subprocess.PIPE)
output = process.communicate()[0]  # bytes on Python 3
# ...
with open('out.txt', 'wb') as file:  # binary mode to match the bytes
    file.write(output)
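Coming back to the original question: the sed step can be replaced by a plain-Python line filter, so the tree listing and the cleanup fit in one script (the author names come from the question; the actual tree call is left as a comment in case tree is not installed):

```python
import subprocess

# Lines containing any of these substrings are dropped,
# like the three sed /.../d calls in the question
UNWANTED = ("by Francesc Rocher", "by Steve Baker", "by Florian Sesser")

def strip_credits(html):
    return "\n".join(
        line for line in html.splitlines()
        if not any(name in line for name in UNWANTED)
    )

# Stand-in for tree -H output, so the sketch is runnable without tree:
sample = "<p>listing</p>\ntree v1.8 by Steve Baker\n<p>done</p>"
print(strip_credits(sample))

# In the real script you would capture tree's output first, e.g.:
# path = input("Directory to list: ")
# html = subprocess.check_output(["tree", "-H", path]).decode()
# open("/Volumes/BD/BD-V1.html", "w").write(strip_credits(html))
```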
Related
I'm new to Python. I have a list of Unix commands ("uname -a", "uptime", "df -h", "ifconfig -a", "chkconfig --list", "netstat -rn", "cat /proc/meminfo", "ls -l /dev") and I want to run them and redirect the entire output to a .txt file. I searched a lot but didn't find a proper solution, or I understood things wrongly.
I'm able to get the output on stdout with this for loop, but I can't redirect it to a file.
import os

def commandsoutput():
    command = ("uname -a", "uptime", "df -h", "ifconfig -a", "chkconfig --list", "netstat -rn", "cat /proc/meminfo", "ls -l /dev")
    for i in command:
        print(os.system(i))

commandsoutput()
os.system returns the exit code of the command, not its output. The documentation also recommends the subprocess module over it.
Use subprocess instead:
import subprocess

def commandsoutput():
    command = ("uname -a", "uptime", "df -h", "ifconfig -a", "chkconfig --list", "netstat -rn", "cat /proc/meminfo", "ls -l /dev")
    with open('log.txt', 'a') as outfile:
        for i in command:
            # call() expects an argument list, so split each string
            subprocess.call(i.split(), stdout=outfile)

commandsoutput()
This answer uses os.popen, which allows you to write the output of the command in your file:
import os

def commandsoutput():
    commands = ("uname -a", "uptime", "df -h", "ifconfig -a", "chkconfig --list", "netstat -rn", "cat /proc/meminfo", "ls -l /dev")
    with open('output.txt', 'a') as outfile:
        for command in commands:
            outfile.write(os.popen(command).read() + "\n")

commandsoutput()
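On Python 3.5+, subprocess.run can do the same while also labelling each command in the log, which helps when reading the combined output later (a sketch; the command tuple is shortened to two commands that exist on most Unix systems):

```python
import subprocess

commands = ("uname -a", "df -h")

with open("log.txt", "w") as outfile:
    for cmd in commands:
        # capture stdout and stderr together, then write a labelled section
        result = subprocess.run(cmd.split(), stdout=subprocess.PIPE,
                                stderr=subprocess.STDOUT)
        outfile.write("$ %s\n%s\n" % (cmd, result.stdout.decode()))
```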
I am invoking a shell script using os.execvp() in Python. My shell script has some echo statements which I want to redirect to a file.
Here is what I am trying:
cmd = "/opt/rpm/rpm_upgrade.sh >& /opt/rpm/upgrader.log"
cmdline = ["/bin/sh", cmd]
os.execvp(cmdline[0], cmdline)
Below is the error I am getting:
Error: /bin/sh: /opt/rpm/rpm_upgrade.sh >& /opt/rpm/upgrader.log: No such file or directory
Can anyone help?
This is happening because you are passing this entire string to sh as if it were a single file name to execute:
"/opt/rpm/rpm_upgrade.sh >& /opt/rpm/upgrader.log"
The easy way to fix this is to let a shell parse the redirection by using -c:
cmdline = ["/bin/sh", "-c",
           "/opt/rpm/rpm_upgrade.sh > /opt/rpm/upgrader.log 2>&1"]
os.execvp(cmdline[0], cmdline)
Now sh receives the whole command as one -c string and performs the redirection itself. (Note that >& is csh syntax; > file 2>&1 is the portable sh form. Passing >& as a separate argument, as in the original attempt, would not work, because sh would hand it to the script as a plain argument rather than perform a redirection.)
Or you can switch to the more full-featured subprocess module, which lets you redirect output in Python:
import subprocess

with open("/opt/rpm/upgrader.log", "wb") as outfile:
    # no shell=True: with an argument list it would be incorrect here
    subprocess.check_call(["/opt/rpm/rpm_upgrade.sh"],
                          stdout=outfile, stderr=subprocess.STDOUT)
Basically, it's Python code from our collaborator that is used to generate a mesh, developed under a Linux environment. I use Cygwin to run this code on Windows. The trouble part is as follows. BiV_temp.geo is a script as well, and the commands below substitute the <<...>> placeholders in BiV_temp.geo with a predefined number and the file names.
os.system('cp BiV_fiber.geo BiV_temp.geo')
cmd = "sed -i 's/<<Meshsize>>/"+"%5.2f"%(meshsize)+"/g' BiV_temp.geo"
os.system(cmd)
cmd = "sed -i 's/<<LVfilename>>/"+"\"%s\""%(LVendocutfilename)+"/g' BiV_temp.geo"
os.system(cmd)
cmd = "sed -i 's/<<RVfilename>>/"+"\"%s\""%(RVendocutfilename)+"/g' BiV_temp.geo"
os.system(cmd)
cmd = "sed -i 's/<<Epifilename>>/"+"\"%s\""%(epicutfilename)+"/g' BiV_temp.geo"
os.system(cmd)
cmd = "gmsh -3 BiV_temp.geo -o %s"%(mshfilename)
os.system(cmd)
cmd = "rm BiV_temp.geo"
os.system(cmd)
The sane solution is for your collaborator to write Python code that lets you pass these values in as parameters to a function call.
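Short of refactoring, the sed calls themselves are easy to replace with str.replace, which also sidesteps Cygwin's sed entirely (a sketch; the template line and file name below are made-up placeholders, not taken from the real BiV_fiber.geo):

```python
meshsize = 5.0
LVendocutfilename = "LV_endo.stl"  # placeholder name for illustration

# Stand-in for the contents of BiV_fiber.geo:
template = 'lengthfactor = <<Meshsize>>;\nMerge <<LVfilename>>;'

# Same substitutions the sed commands perform, done in Python
geo = (template
       .replace("<<Meshsize>>", "%5.2f" % meshsize)
       .replace("<<LVfilename>>", '"%s"' % LVendocutfilename))
print(geo)

# In the real script: read BiV_fiber.geo, apply the replacements,
# write BiV_temp.geo, then invoke gmsh via subprocess.
```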
Are you trying to execute shell commands from Python? Note that "%5.2f" % (meshsize) is already valid formatting (width 5, two decimal places). I would avoid the plus signs, though; they make it really hard to read and messy. I would write it all in one line like this:
meshsize = 100
cmd = "sed -i 's/<<Meshsize>>/%5.2f/g' BiV_temp.geo" % meshsize
print cmd
If your meshsize is a list of x, y, z then write it like this:
meshsize = (100, 50, 20)
cmd = "sed -i 's/<<Meshsize>>/%5.2f,%5.2f,%5.2f/g' BiV_temp.geo" % meshsize
print cmd
Hope that helps.
I am trying to use python in a unix style pipe.
For example, in unix I can use a pipe such as:
$ samtools view -h somefile.bam | python modifyStdout.py | samtools view -bh - > processed.bam
I can do this by using a for line in sys.stdin: loop in the python script and that appears to work without problems.
However I would like to internalise this unix command into a python script. The files involved will be large so I would like to avoid blocking behaviour, and basically stream between processes.
At the moment I am trying to use Popen to manage each command, and pass the stdout of the first process to the stdin of the next process, and so on.
In a separate Python script (sep_process.py) I have:
import sys

f = open("sentlines.txt", 'w')  # 'wr' is not a valid mode
f.write("hi")
for line in sys.stdin:
    print line
    f.write(line)
f.close()
And in my main python script I have this:
import sys
from subprocess import Popen, PIPE

# Generate an example file to use
f = open('sees.txt', 'w')
f.write('somewhere over the\nrainbow')
f.close()

if __name__ == "__main__":
    # Use grep as an example command
    p1 = Popen("grep over sees.txt".split(), stdout=PIPE)

    # Send to sep_process.py
    p2 = Popen("python ~/Documents/Pythonstuff/Bam_count_tags/sep_process.py".split(),
               stdin=p1.stdout, stdout=PIPE)

    # Send to final command
    p3 = Popen("wc", stdin=p2.stdout, stdout=PIPE)

    # Read output from wc
    result = p3.stdout.read()
    print result
The p2 process, however, fails with [Errno 2] No such file or directory even though the file exists.
Do I need to implement a Queue of some kind and/or open the python function using the multiprocessing module?
The tilde ~ is a shell expansion. You are not using a shell, so Python looks for a path that literally contains ~.
You could read the environment variable HOME and insert that. Use
os.environ['HOME']
Alternatively you could use shell=True if you can't be bothered to do your own expansion.
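For completeness, os.path.expanduser does the tilde expansion for you, so the argument list can be fixed without resorting to shell=True:

```python
import os.path

# Expands the leading ~ using HOME (or the platform equivalent)
script = os.path.expanduser("~/Documents/Pythonstuff/Bam_count_tags/sep_process.py")
print(script)  # the ~ is replaced by the home directory
```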
Thanks @cdarke, that solved the problem for simple commands like grep, wc, etc. However, I couldn't get subprocess.Popen to work when using an executable such as samtools to provide the data stream.
To fix the issue, I created a string containing the pipe exactly as I would write it in the command line, for example:
sam = '/Users/me/Documents/Tools/samtools-1.2/samtools'
home = os.environ['HOME']
inpath = "{}/Documents/Pythonstuff/Bam_count_tags".format(home)
stream_in = "{s} view -h {ip}/test.bam".format(s=sam, ip=inpath)
pyscript = "python {ip}/bam_tags.py".format(ip=inpath)
stream_out = "{s} view -bh - > {ip}/small.bam".format(s=sam, ip=inpath)
# Absolute paths, written as a pipe
fullPipe = "{inS} | {py} | {outS}".format(inS=stream_in,
py=pyscript,
outS=stream_out)
print fullPipe
# Translates to >>>
# samtools view -h test.bam | python ./bam_tags.py | samtools view -bh - > small.bam
I then used popen from the os module instead, and this worked as expected:
os.popen(fullPipe)
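subprocess can express the same pipeline without a shell by chaining Popen objects, which avoids blocking on large streams; a sketch with portable commands standing in for samtools (echo plays the producer, grep the filter, wc -l the consumer):

```python
from subprocess import Popen, PIPE

# producer | filter | consumer, wired stdout -> stdin
p1 = Popen(["echo", "somewhere over the\nrainbow"], stdout=PIPE)
p2 = Popen(["grep", "over"], stdin=p1.stdout, stdout=PIPE)
p1.stdout.close()   # lets p1 receive SIGPIPE if p2 exits early
p3 = Popen(["wc", "-l"], stdin=p2.stdout, stdout=PIPE)
p2.stdout.close()

result = p3.communicate()[0]   # waits for the pipeline to finish
print(result.decode().strip())  # one line matched "over", so this prints 1
```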
I'm asking a very similar question to this one. I'm creating a pdf using wkhtmltopdf on an Ubuntu server in Django.
from tempfile import *
from subprocess import Popen, PIPE
tempfile = gettempdir()+"/results.pdf"
papersize = 'Tabloid'
orientation = 'Landscape'
command_args = "wkhtmltopdf -O %s -s %s -T 0 -R 0 -B 0 -L 0 http://pdfurl %s" %(orientation, papersize, tempfile)
popen = Popen(command_args, stdout=PIPE, stderr=PIPE)
pdf_contents = popen.stdout().read()
popen.terminate()
popen.wait()
response = HttpResponse(pdf_contents, mimetype='application/pdf')
return response
This gives me a "no such file or directory" error on the popen = Popen... line. So I changed that line to
popen = Popen(["sh", "-c", command_args], stdout=PIPE, stderr=PIPE)
and now I get a "'file' object is not callable" error on the pdf_contents =... line.
I've also tried adding .communicate() to the popen =... line but I can't seem to locate the pdf output that way. I should add that typing the command_args line into the command line creates a pdf just fine. Can anyone point me in the right direction?
wkhtmltopdf is not writing the contents of the PDF to stdout, so there is nothing for Popen to read; pdf_contents correctly contains the command's (empty) output. You will need to read the contents of the output file if you want to return it to the client (see below), or skip the output file and make wkhtmltopdf write the PDF directly to standard output:
from tempfile import *
from subprocess import Popen, PIPE
tempfile = gettempdir()+"/results.pdf"
command_args = "/path/to/wkhtmltopdf -O %s -s %s -T 0 -R 0 -B 0 -L 0 http://pdfurl %s" % ('Landscape', 'Tabloid', tempfile)
popen = Popen(["sh", "-c", command_args])
popen.wait()
f = open(tempfile, 'rb')  # binary mode: a PDF is not text
pdf_contents = f.read()
f.close()
return HttpResponse(pdf_contents, mimetype='application/pdf')
Your first version fails because, without a shell, the whole string (arguments included) is treated as a single program name, which does not exist. Your second version hands the command to a shell, which parses it. You achieve the same effect by passing a shell=True argument.
The second problem (as others have noted) is that you call stdout() when you shouldn't.
The third problem is that your wkhtmltopdf command is wrong. You are doing:
wkhtmltopdf -O %s -s %s -T 0 -R 0 -B 0 -L 0 http://pdfurl tempfile/results.pdf
Instead you should pass
wkhtmltopdf -O %s -s %s -T 0 -R 0 -B 0 -L 0 http://pdfurl -
That way wkhtmltopdf will write the output to standard output and you can read it. If you pass another - as the source, you can send the html over the standard input.
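The resulting pattern looks like this (a sketch; a tiny Python command stands in for wkhtmltopdf so the snippet runs without it installed, and the real command is shown in the comment):

```python
from subprocess import Popen, PIPE
import sys

# Real command would end in "-" so the PDF goes to stdout, e.g.:
# cmd = ["wkhtmltopdf", "-O", "Landscape", "-s", "Tabloid",
#        "-T", "0", "-R", "0", "-B", "0", "-L", "0", "http://pdfurl", "-"]
# Stand-in that writes bytes to stdout, like wkhtmltopdf with "-":
cmd = [sys.executable, "-c",
       "import sys; sys.stdout.buffer.write(b'%PDF-1.4 fake')"]

popen = Popen(cmd, stdout=PIPE)
pdf_contents = popen.communicate()[0]  # waits for exit; no terminate() needed
```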
The reason you're getting 'file' object is not callable is because once you have your popen object, stdout is a filehandle, not a method. Don't call it, just use it:
popen = Popen(command_args, stdout=PIPE, stderr=PIPE)
pdf_contents = popen.stdout.read()
You might want to consider changing
popen = Popen(command_args, stdout=PIPE, stderr=PIPE)
pdf_contents = popen.stdout().read()
# ...
response = ...
to
pdf_contents = subprocess.check_output(command_args.split())
response = ...
or in older versions:
process = Popen(command_args.split(), stdout=PIPE, stderr=PIPE)
pdf_contents = process.stdout.read()
response = ...
I suggest you take a look at the check_output function.
EDIT: Also, don't call terminate(): it kills the process without waiting for it to complete, possibly resulting in a corrupted PDF. You pretty much only need wait(), which waits for the process to finish (and thus to output everything it has). When using check_output(), you need not worry about this, as it waits for the process to complete by default.
Other than that, naming a variable with the same name as a module (I'm talking about tempfile) is a bad idea. I suggest changing it to tmpfile and checking out NamedTemporaryFile, as it is safer than what you are doing right now.
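A minimal NamedTemporaryFile sketch of what that could look like (delete=False so an external command like wkhtmltopdf could write to the path; the write here is a stand-in for that command):

```python
import tempfile
import os

# A unique path that won't clash with other requests' output files
with tempfile.NamedTemporaryFile(suffix=".pdf", delete=False) as tmp:
    tmpfile = tmp.name

# ... here the real code would run wkhtmltopdf with tmpfile as output ...
with open(tmpfile, "wb") as f:
    f.write(b"%PDF-1.4 fake")  # stand-in for the generated PDF

pdf_contents = open(tmpfile, "rb").read()
os.remove(tmpfile)  # clean up the temporary file
```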