How to run a shell script in Python from memory?

The application I'm writing retrieves a shell script over HTTP. I already have its content in memory and want to execute it in Python without physically saving it to the hard drive. I have tried something like this:
import subprocess

script = retrieve_script()
popen = subprocess.Popen(scrpit, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)
stdOut, stdErr = popen.communicate()

def retrieve_script_content():
    # in reality I retrieve a shell script content from network,
    # but for testing purposes I will just hardcode some test content here
    return "echo command1" + "\n" + "echo command2" + " \n" + "echo command3"
This snippet will not work because subprocess.Popen expects you to provide only one command at a time.
Are there any alternatives to run a shell script from memory?

This snippet will not work because subprocess.Popen expects you to provide only one command at a time.
That is not the case. Instead, the reason why it doesn't work is:
The declaration of retrieve_script has to come before the call
You call it retrieve_script_content instead of retrieve_script
You misspelled script as scrpit
Just fix those and it's fine:
import subprocess

def retrieve_script():
    return "echo command1" + "\n" + "echo command2" + " \n" + "echo command3"

script = retrieve_script()
popen = subprocess.Popen(script, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)
stdOut, stdErr = popen.communicate()
print(stdOut.decode())
Result:
$ python foo.py
command1
command2
command3
However, note that this will ignore the shebang (if any) and run the script with the system's sh every time.
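A further option (my addition, not from the answer above): choose the interpreter explicitly and feed the script through its stdin, so nothing is written to disk and no long command-line argument is needed:

import subprocess

script = retrieve_script()  # assumes the helper from the question
popen = subprocess.Popen(["bash"], stdin=subprocess.PIPE, stdout=subprocess.PIPE)
stdOut, stdErr = popen.communicate(script.encode())
print(stdOut.decode())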

Are you using a Unix-like OS? If so, you should be able to use a virtual filesystem to make an in-memory file-like object at which you could point subprocess.Popen:
import subprocess
import tempfile
import os
import stat

def retrieve_script_content():
    # in reality I retrieve a shell script content from network,
    # but for testing purposes I will just hardcode some test content here
    return "echo command1" + "\n" + "echo command2" + " \n" + "echo command3"

content = retrieve_script_content()
with tempfile.NamedTemporaryFile(mode='w', delete=False, dir='/dev/shm') as f:
    f.write(content)

os.chmod(f.name, stat.S_IRUSR | stat.S_IXUSR)
# print(f.name)
popen = subprocess.Popen(f.name, stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                         shell=True)
stdOut, stdErr = popen.communicate()
print(stdOut.decode('ascii'))
# os.unlink(f.name)
prints
command1
command2
command3
Above I used /dev/shm as the virtual filesystem, since Glibc-based Linux systems always have a tmpfs mounted on /dev/shm.
If security is a concern you may wish to set up a ramfs.
One reason why you might want to use a virtual file instead of passing the script contents directly to subprocess.Popen is that on Linux the maximum size of a single string argument is limited to 131071 bytes.
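Not part of the original answer, but on the same theme: on Linux with Python 3.8+, os.memfd_create can replace the /dev/shm file entirely. A hedged sketch:

import os
import subprocess

script = "echo command1\necho command2\necho command3\n"

fd = os.memfd_create("script")                    # anonymous, memory-only file
os.write(fd, script.encode())
popen = subprocess.Popen(["sh", "/proc/self/fd/%d" % fd],
                         pass_fds=[fd],           # keep the descriptor open in the child
                         stdout=subprocess.PIPE)
stdOut, _ = popen.communicate()
os.close(fd)
print(stdOut.decode())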

You can execute a multi-command script with Popen. Popen only restricts you to a single command when the shell flag is False, in which case the command and its arguments are passed as a list. With shell=True you can pass a whole multi-command script as one string (this is considered insecure, though what you are doing, executing scripts from the web, is already very risky).
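For illustration, a minimal sketch of that distinction (the echo commands stand in for the downloaded script):

import subprocess

# shell=True: the single string is given to /bin/sh, so it may contain
# several commands, pipes and redirections
subprocess.call("echo command1; echo command2", shell=True)

# shell=False (the default): exactly one command, passed as an argv list;
# no shell syntax is interpreted
subprocess.call(["echo", "command1"])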

Related

call a command line from script using python, Ubuntu OS

I am facing difficulties calling a command line from my script. I run the script but I don't get any result. Through this command line in my script I want to run a tool which produces a folder containing the output files for each line. The inputpath is already defined. Can you please help me?
for line in inputFile:
    cmd = 'python3 CRISPRcasIdentifier.py -f %s/%s.fasta -o %s/%s.csv -st dna -co %s/' % (inputpath, line.strip(), outputfolder, line.strip(), outputfolder)
    os.system(cmd)
You really want to use the Python standard library module subprocess. Using functions from that module, you can construct your command line as a list of strings, where each element is processed as one file name, option or value. This bypasses the shell's escaping and eliminates the need to massage your script arguments before calling; see the sketch below.
Besides, your code would not work as posted, because the body of the for statement is not indented. Python would simply not accept this code (perhaps you pasted it into the question without the proper indentation).
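For example, a minimal sketch of the same loop with an argument list (assuming inputFile, inputpath and outputfolder are defined as in the question):

import subprocess

for line in inputFile:
    name = line.strip()
    subprocess.run(["python3", "CRISPRcasIdentifier.py",
                    "-f", "%s/%s.fasta" % (inputpath, name),
                    "-o", "%s/%s.csv" % (outputfolder, name),
                    "-st", "dna", "-co", outputfolder + "/"],
                   check=True)  # raises CalledProcessError on a non-zero exit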
As mentioned before, executing commands via os.system(command) is not recommended. Please use subprocess instead (read the Python docs about the subprocess module). See the code here:
import subprocess
import shlex

for command in input_file:
    # a command string must be split into an argv list when shell=False
    p = subprocess.Popen(shlex.split(command.strip()),
                         stdout=subprocess.PIPE, stdin=subprocess.PIPE, stderr=subprocess.PIPE)
    # use communicate() if you want to talk to the child process
    p.communicate()
    # --- do the rest
I usually do it like this for a static command:
from subprocess import check_output

def sh(command):
    return check_output(command, shell=True, universal_newlines=True)

output = sh('echo hello world | sed s/h/H/')
BUT THIS IS NOT SAFE!!! It's vulnerable to shell injection, so you should do:
from subprocess import check_output
from shlex import split

def sh(command):
    return check_output(split(command), universal_newlines=True)

output = sh('echo hello world')
The difference is subtle but important. shell=True will create a new shell, so pipes, etc. will work. I use this when I have a big command line with pipes that is static, by which I mean it does not depend on user input. This is because this variant is vulnerable to shell injection: a user could input something like ; rm -rf / and it would run.
The second variant only accepts one command; it will not spawn a shell, instead it runs the command directly. So pipes and other shell features will not work, and it is safer.
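To see why pipes cannot work without a shell, look at what shlex.split produces for the piped command above; the '|' becomes an ordinary argument to echo instead of a pipe operator:

from shlex import split
print(split('echo hello world | sed s/h/H/'))
# ['echo', 'hello', 'world', '|', 'sed', 's/h/H/']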
universal_newlines=True is for getting the output as a string instead of bytes. Use it for text output; if you need binary output, just omit it. The default is False.
So here is the full example
from subprocess import check_output
from shlex import split

def sh(command):
    return check_output(split(command), universal_newlines=True)

for line in inputFile:
    cmd = 'python3 CRISPRcasIdentifier.py -f %s/%s.fasta -o %s/%s.csv -st dna -co %s/' % (inputpath, line.strip(), outputfolder, line.strip(), outputfolder)
    sh(cmd)
PS: I didn't test this.

Python subprocess.Popen() not running command

I'm trying to use subprocess.Popen() to run a command in my script. The code is:
output = Popen(["hrun DAR_MeasLogDump " + log_file_name], stdout=subprocess.PIPE, stderr = subprocess.PIPE, executable="/bin/csh", cwd=cwdir, encoding='utf-8')
When I print the output, it's printing out the created shell output and not the actual command that's in the list. I tried getting rid of executable='/bin/csh', but then Popen wouldn't even run.
I also tried using subprocess.communicate(), but it didn't work either. I would also get the shell output and not the actual command run.
I want to completely avoid using shell=True because of security issues.
EDIT: In many different attempts, "hrun" is not being recognized. "hrun" is a Perl script that is being called, DAR_MeasLogDump is the action, and log_file_name is the file that the script will call its action on. Is there any sort of setup or configuration that needs to be done in order for "hrun" to be recognized?
I think the problem is that Popen requires a list of every part of the command (command + options), the documentation for Popen inside subprocess has an example for that. So for that line in your script to work, you would need to write it like this:
output = Popen(["/bin/csh", "hrun", "DAR_MeasLogDump", log_file_name], stdout=subprocess.PIPE, stderr = subprocess.PIPE)
I've removed the executable argument, but I guess it could work that way as well.
Try:
output = Popen(["-c", "hrun DAR_MeasLogDump " +log_file_name], stdout=subprocess.PIPE, stderr = subprocess.PIPE, executable="/bin/csh", cwd=cwdir, encoding='utf-8')
csh is expecting -c "full command here". Without -c I think it just tries to open it as a file.
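An equivalent, arguably clearer spelling (a sketch, untested against hrun) names the shell as argv[0] instead of using the executable argument:

output = subprocess.Popen(["/bin/csh", "-c", "hrun DAR_MeasLogDump " + log_file_name],
                          stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                          cwd=cwdir, encoding='utf-8')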
Specifying an odd shell and an explicit cwd seems completely out of place here (assuming cwdir is defined to the current directory).
If the first argument to subprocess is a list, no shell is involved.
result = subprocess.run(["hrun", "DAR_MeasLogDump", log_file_name],
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                        universal_newlines=True, check=True)
output = result.stdout
If you need this to be run under a legacy version of Python, maybe use check_output instead of run.
You generally want to avoid Popen unless you need to do something which the higher-level wrapper functions cannot do.
You are creating an instance of subprocess.Popen but never collecting its output.
You should try:
p = Popen(["hrun", "DAR_MeasLogDump", log_file_name], stdout=subprocess.PIPE, stderr=subprocess.PIPE, cwd=cwdir, encoding='utf-8')
out, err = p.communicate()  # this will get you the output
Args should be passed as a sequence if you do not use shell=True, and then using executable should not be required.
Note that if you are not using advanced features from Popen, the doc recommends using subprocess.run:
from subprocess import run

p = run(["hrun", "DAR_MeasLogDump", log_file_name], capture_output=True, cwd=cwdir, encoding='utf-8')
out, err = p.stdout, p.stderr  # run() has already waited for the process
This works with a cat example:
import subprocess

log_file_name = '-123.txt'
output = subprocess.Popen(['cat', 'DAR_MeasLogDump' + log_file_name],
                          stdout=subprocess.PIPE,
                          stderr=subprocess.STDOUT)
stdout, stderr = output.communicate()
print(stdout)
print(stderr)
I think you only need to change 'cat' to your 'hrun' command.
It seems like the same problem I had at the beginning of a project: Windows environment variables. When you open CMD or PowerShell, it does not recognize perl, java, etc. unless you first cd into the folder where the .exe, .py, .jar, etc. is located, or add that folder to the PATH environment variable.
In my ADB project, once I added the folder to my environment variables, I no longer needed to go to the folder where the .exe, .py or adb binary was located.
Now you can open a CMD and it will execute any command, even your perl script, because the interpreter the shell uses will find and recognize the command.
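As a hedged sketch of the same idea (the hrun directory below is a made-up placeholder, adjust it to your system), you can also extend PATH just for the child process instead of editing system settings:

import os
import subprocess

env = dict(os.environ)
# prepend the directory that contains hrun (placeholder path)
env["PATH"] = "/path/to/hrun" + os.pathsep + env["PATH"]
p = subprocess.Popen(["hrun", "DAR_MeasLogDump", log_file_name],
                     stdout=subprocess.PIPE, stderr=subprocess.PIPE, env=env)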

Run a python script inside a running program

I am running a python script that launches an executable called ./abc. This executable starts a program that waits for a command, like so:
$./abc
abc >    (waits for a command here)
What I would like to do is to enter a couple of commands like:
$./abc
abc > read_blif alu.blif
abc > resyn2
What I have so far is as follows:
import os
from array import *

os.system('./abc')
for file in os.listdir("ccts/"):
    print 'read_blif ' + file + '\n'
    print 'resyn2\n'
    print 'print_stats\n'
    print 'if -K 6\n'
    print 'print_stats\n'
    print 'write_blif ' + file.split('.')[0] + 'mapped.blif\n'
This however will do the following:
abc >    (stays idle and waits until I press ^C, and then it prints)
read ...blif
resyn2
...
It prints just to the terminal. How do I make it execute these commands inside the program, waiting until it sees the next abc > prompt before running the next command?
Thanks
I have done something similar using subprocess.
import subprocess
cmd = './command_to_execute'
pro = subprocess.Popen(cmd, stdout=subprocess.PIPE, shell=True)
result = pro.stdout.read()
This will execute the command specified by cmd, then read the result into result. It will wait until the process's output is complete before executing anything after the result assignment. I believe this might be what you want, though your description was a bit vague.
You may be looking for the pexpect module. Here is the basic example from pexpect's documentation
# This connects to the openbsd ftp site and
# downloads the recursive directory listing.
import pexpect
child = pexpect.spawn('ftp ftp.openbsd.org')
child.expect('Name .*: ')
child.sendline('anonymous')
child.expect('Password:')
child.sendline('noah@example.com')
child.expect('ftp> ')
child.sendline('lcd /tmp')
I think it will work the same way with abc >, if your OS is compatible with pexpect.
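Adapted to the abc prompt from the question, a minimal sketch (untested, since I don't have abc) might look like:

import pexpect

child = pexpect.spawn('./abc')
child.expect('abc > ')               # wait for the first prompt
child.sendline('read_blif alu.blif')
child.expect('abc > ')               # wait until the command has finished
child.sendline('resyn2')
child.expect('abc > ')
print(child.before)                  # everything resyn2 printed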
You need to spawn a new process using the subprocess library and create two pipes, one for stdin and one for stdout. Using these pipes (which are represented in Python as file objects) you can communicate with your process.
Here is an example:
import subprocess

cmd = './full/path/to/your/abc/executable'
pro = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                       stdin=subprocess.PIPE)
pro.stdin.write("read_blif alu.blif \n")
pro.stdout.read(3)
You can use pro.communicate, but I assume you need to take the output for every command you input, something like:
abc > command1
output1
abc > command2
output2 -part1
output2 -part2
output2 -part3
In this way I think the PIPE approach is more useful.
Use the dir function in Python to find more info about the pro object and what methods and attributes are available: dir(pro). Don't forget the built-in help, which will display the docstrings: help(pro) or help(pro.stdin).
You are making a mistake when you run os.system: it gives you no handle on the program's input/output streams, so you won't have any control over it. You may want to look into how input/output streams work.
If you want to pipe commands into the input of an executable, the easiest way is the subprocess module. You can write to the stdin of the executable and get its output with Popen.communicate.
import os
import subprocess

for f in os.listdir('ccts/'):
    p = subprocess.Popen('./abc', stdin=subprocess.PIPE,
                         stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    stdout, stderr = p.communicate('read_blif ' + f + '\n'
                                   'resyn2\n'
                                   'print_stats\n'
                                   'if -K 6\n'
                                   'print_stats\n'
                                   'write_blif ' + f.split('.')[0] + 'mapped.blif\n')
    # stdout will be what the program outputs and stderr will be any errors it outputs
This will, however, close stdin of the subprocess every time, but that is the only way to communicate reliably without deadlocking. According to https://stackoverflow.com/a/28616322/5754656, you should use pexpect for an "interactive session-like" program. That is avoided here by starting a fresh subprocess for each batch of commands, assuming the program can be run in separate children.
I assume that you only need the stdouts of print_stats, so you can do (as an example, you might want to handle errors):
import subprocess
import os

def helper_open(cmd='./abc'):
    return subprocess.Popen(cmd, stdin=subprocess.PIPE,
                            stdout=subprocess.PIPE, stderr=subprocess.PIPE)

for f in os.listdir('ccts/'):
    helper_open().communicate('read_blif ' + f + '\n'
                              'resyn2\n')
    stats, _ = helper_open().communicate('print_stats\n')
    # Do stuff with stats
    stats, _ = helper_open().communicate('if -K 6\nprint_stats\n')
    # Do more stuff with other stats
    helper_open().communicate('write_blif ' + f.split('.')[0] + 'mapped.blif\n')

How to execute '<(cat fileA fileB)' using python?

I am writing a python program that uses other software. I was able to pass the command using subprocess.Popen. I am facing a new problem: I need to concatenate multiple files into two files and use them as the input for the external program. The command line looks like this:
extersoftware --fq --f <(cat fileA_1 fileB_1) <(cat fileA_2 fileB_2)
I cannot use shell=True because there are other options I need to pass in via variables, such as --fq. (They are not limited to --fq; this is just an example.)
One possible solution is to generate an intermediate file.
This is what I have tried:
file_1 = ['cat', 'fileA_1', 'fileB_1']
p1 = Popen(file_1, stdout=PIPE)
p2 = Popen(['>', 'output_file'], stdin=p1.stdout, stdout=PIPE)
p1.stdout.close()
output = p2.communicate()
print output
I got the error message OSError: [Errno 2] No such file or directory. Which part did I do wrong?
It would be better if there were no intermediate file. For this reason I have been looking at named pipes, but I do not quite understand them.
I have looked at multiple questions that have been answered here; to me they are all somehow different from my question.
Thanks ahead for all your help.
The way bash handles <(..) is to:
Create a pipe
Fork a command that writes to the write end
Substitute the <(..) with /dev/fd/N, where N is the file descriptor of the read end of the pipe (try echo <(true))
Run the command
The command will then open /dev/fd/N, and the OS will cause that to duplicate the inherited read end of the pipe.
We can do the same thing in Python:
import subprocess
import os

# Open a pipe and run a command that writes to the write end
input_fd, output_fd = os.pipe()
subprocess.Popen(["cat", "foo.txt", "bar.txt"], shell=False, stdout=output_fd)
os.close(output_fd)

# Run a command that uses /dev/fd/* to read from the read end
proc = subprocess.Popen(["wc", "/dev/fd/" + str(input_fd)],
                        shell=False, stdout=subprocess.PIPE)

# Read that command's output
print proc.communicate()[0]
For example:
$ cat foo.txt
Hello
$ cat bar.txt
World
$ wc <(cat foo.txt bar.txt)
2 2 12 /dev/fd/63
$ python test.py
2 2 12 /dev/fd/4
Process substitution returns the device filename that is being used. You will have to assign the pipe to a higher FD (e.g. 20) by passing a function to preexec_fn that uses os.dup2() to copy it, and then pass the FD device filename (e.g. /dev/fd/20) as one of the arguments of the call.
def assignfd(fd, handle):
    def assign():
        os.dup2(handle, fd)
    return assign

...
p2 = Popen(['cat', '/dev/fd/20'], preexec_fn=assignfd(20, p1.stdout.fileno()))
...
It's actually possible to have it both ways: using a shell, while passing a list of arguments through unambiguously in a way that doesn't allow them to be shell-parsed.
Use bash explicitly rather than shell=True to ensure that you have support for <(), and use "$@" to refer to the additional argv array elements, like so:
subprocess.Popen(['bash', '-c',
                  'extersoftware "$@" --f <(cat fileA_1 fileB_1) <(cat fileA_2 fileB_2)',
                  '_',     # this is a dummy passed in as argv[0] of the interpreter
                  '--fq',  # this is substituted into the shell by the "$@"
                  ])
If you wanted to independently pass in all three arrays -- extra arguments, and the exact filenames to be passed to each cat instance:
BASH_SCRIPT = r'''
declare -a filelist1=( )
filelist1_len=$1; shift
while (( filelist1_len-- > 0 )); do
  filelist1+=( "$1" ); shift
done

declare -a filelist2=( )
filelist2_len=$1; shift
while (( filelist2_len-- > 0 )); do
  filelist2+=( "$1" ); shift
done

extersoftware "$@" --f <(cat "${filelist1[@]}") <(cat "${filelist2[@]}")
'''

subprocess.Popen(['bash', '-c', BASH_SCRIPT, '_'] +
                 [str(len(filelist1))] + filelist1 +
                 [str(len(filelist2))] + filelist2 +
                 ['--fq'])
You could put more interesting logic in the embedded shell script as well, were you so inclined.
In this specific case, we may use:
import subprocess
import os

if __name__ == '__main__':
    input_fd1, output_fd1 = os.pipe()
    subprocess.Popen(['cat', 'fileA_1', 'fileB_1'],
                     shell=False, stdout=output_fd1)
    os.close(output_fd1)

    input_fd2, output_fd2 = os.pipe()
    subprocess.Popen(['cat', 'fileA_2', 'fileB_2'],
                     shell=False, stdout=output_fd2)
    os.close(output_fd2)

    proc = subprocess.Popen(['extersoftware', '--fq', '--f',
                             '/dev/fd/' + str(input_fd1),
                             '/dev/fd/' + str(input_fd2)], shell=False)
Change log:
Reformatted the code so it should be easier to read now (and hopefully still syntactically correct). It's tested in Python 2.6.6 on Scientific Linux 6.5 and everything looks fine.
Removed unnecessary semicolons.

Use bat file in CGI Python on localhost

I'm quite stuck with a complex mix of files and languages! The problem:
My webform starts a python script, as a CGI script, on localhost (Apache). In this python script I want to execute a batch file. This batch file executes several commands, which I tested thoroughly.
If I execute the following python file in the python interpreter or in CMD, it does execute the bat file.
But when I start the python script from the webform it says it did it, but there are no results, so I guess something is wrong with the CGI part of the problem?!
The process is complicated, so if someone has a better way of doing this... pls reply ;). I'm using Windows, so that makes things even more annoying sometimes.
I think it's not the script, because I already tried subprocess.call, os.startfile and os.system!
It either does nothing or the webpage keeps loading (endless loop).
Python script:
import os
from subprocess import Popen, PIPE
import subprocess
print "Content-type:text/html\r\n\r\n"
p = subprocess.Popen(["test.bat"], stdout = subprocess.PIPE, stderr = subprocess.PIPE)
out, error = p.communicate()
print out
print "DONE!"
The bat file:
@echo off
::Preprocess the datasets
CMD /C java weka.filters.unsupervised.attribute.StringToWordVector -b -i data_new.arff -o data_new_std.arff -r tweetin.arff -s tweetin_std.arff
:: Make predictions with incoming tweets
CMD /C java weka.classifiers.functions.SMO -T tweetin_std.arff -t data_new_std.arff -p 2 -c first > result.txt
Thanks for your reply!!
Your bat file redirects the second program's output to a file, so p.communicate can only get the output of the first program. I'm assuming you want to return the content of result.txt?
I think you should skip the bat file and just do both java invocations in Python. You get more control of the execution, and you can check the return codes; there might also be problems with java not being on the PATH when run as CGI. The following is mostly equivalent with respect to getting the program's output back; capture the second program's output if your webservice is supposed to return the predictions.
import os
import shlex
from subprocess import Popen, PIPE
import subprocess

print "Content-type:text/html\r\n\r\n"
p = subprocess.Popen(shlex.split("java weka.filters.unsupervised.attribute.StringToWordVector -b -i data_new.arff -o data_new_std.arff -r tweetin.arff -s tweetin_std.arff"),
                     stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, error = p.communicate()
# "> result.txt" would be passed to java as literal arguments without a shell,
# so open the file in Python and hand it to the child as stdout instead
with open("result.txt", "w") as result_file:
    return_code = subprocess.call(shlex.split("java weka.classifiers.functions.SMO -T tweetin_std.arff -t data_new_std.arff -p 2 -c first"),
                                  stdout=result_file)
print out
print "DONE!"
A couple of things come to mind. You might want to try setting shell=True in your Popen call; sometimes I have noticed that solves my problem.
p = subprocess.Popen(["test.bat"], stdout = subprocess.PIPE, stderr = subprocess.PIPE, shell=True)
You may also want to take a look at Fabric, which is perfect for this kind of automation.
