I'm quite stuck with a complex mix of files and languages! The problem:
My webform starts a Python script, as a CGI script, on localhost (Apache). In this Python script I want to execute a batch file. This batch file executes several commands, which I tested thoroughly.
If I execute the following Python file in the Python interpreter or in CMD, it does execute the bat file.
But when I start the Python script from the webform, it reports success, but there are no results, so I guess something is wrong with the CGI part of the problem?!
The process is complicated, so if someone has a better way of doing this... please reply ;). I'm using Windows, so that makes things even more annoying sometimes.
I think it's not the script, because I already tried subprocess.call, os.startfile and os.system!
It either does nothing or the webpage keeps loading (endless loop).
Python script:
import os
from subprocess import Popen, PIPE
import subprocess
print "Content-type:text/html\r\n\r\n"
p = subprocess.Popen(["test.bat"], stdout = subprocess.PIPE, stderr = subprocess.PIPE)
out, error = p.communicate()
print out
print "DONE!"
The bat file:
@echo off
::Preprocess the datasets
CMD /C java weka.filters.unsupervised.attribute.StringToWordVector -b -i data_new.arff -o data_new_std.arff -r tweetin.arff -s tweetin_std.arff
:: Make predictions with incoming tweets
CMD /C java weka.classifiers.functions.SMO -T tweetin_std.arff -t data_new_std.arff -p 2 -c first > result.txt
Thanks for your reply!!
Your bat file is redirecting the second program's output to a file, so p.communicate can only get the output of the first program. I'm assuming you want to return the content of result.txt?
I think you should skip the bat file and just do both java invocations in Python. You get more control of the execution and you can check the return codes; there might be problems with java not being in the PATH environment variable when run as CGI. The following is mostly equivalent with respect to getting the program's output back; you'll want to capture the second program's output if your webservice is supposed to return the predictions.
import shlex
import subprocess

print "Content-type:text/html\r\n\r\n"

# Preprocess the datasets and capture the filter's output
p = subprocess.Popen(shlex.split("java weka.filters.unsupervised.attribute.StringToWordVector -b -i data_new.arff -o data_new_std.arff -r tweetin.arff -s tweetin_std.arff"),
                     stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, error = p.communicate()

# A ">" redirect only works through a shell, so open result.txt ourselves
with open("result.txt", "w") as result_file:
    return_code = subprocess.call(shlex.split("java weka.classifiers.functions.SMO -T tweetin_std.arff -t data_new_std.arff -p 2 -c first"),
                                  stdout=result_file)

print out
print "DONE!"
A couple of things come to mind. You might want to try setting Popen's shell=True; sometimes I have noticed that solves my problem.
p = subprocess.Popen(["test.bat"], stdout = subprocess.PIPE, stderr = subprocess.PIPE, shell=True)
You may also want to take a look at Fabric, which is perfect for this kind of automation.
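For example, a minimal sketch with Fabric 1.x's local() (assuming test.bat is reachable from the CGI process's working directory):
from fabric.api import local

# capture=True returns the command's stdout instead of echoing it
output = local("test.bat", capture=True)
print output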
Related
I am facing a problem: I have to pass an input after I run the command adb shell libtest_ip through Python:
import subprocess
command = 'adb shell libtest_ip'
p = subprocess.Popen(command, shell=True,
stdout=subprocess.PIPE, stderr=subprocess.PIPE)
After this I have to pass input like 1 or en_us etc., but as soon as the command to run the binary (libtest_ip is a binary) is executed, it gets stuck.
Please help me if anyone has an idea how to solve this.
I think your best bet is pexpect.
Specifically, you can take a look at script.py, which would help you create the interactive script.
Basically, you should end up with something like this (the 'Whatever' prompts are placeholders for whatever the binary actually prints):
import pexpect

child = pexpect.spawn('adb shell libtest_ip')
child.expect('Whatever')
child.sendline('1')
child.expect('Whatever 2')
child.sendline('en-us')
Update:
Your example should work; try
#! /usr/bin/env python3
import pexpect
print(pexpect.run('/bin/echo hello'))
and running it should output
% ./test-pexpect.py
b'hello\r\n'
I am facing difficulties calling a command line from my script. I run the script but I don't get any result. Through this command line in my script I want to run a tool which produces a folder that has the output files for each line. The inputpath is already defined. Can you please help me?
for line in inputFile:
cmd = 'python3 CRISPRcasIdentifier.py -f %s/%s.fasta -o %s/%s.csv -st dna -co %s/'%(inputpath,line.strip(),outputfolder,line.strip(),outputfolder)
os.system(cmd)
You really want to use the Python standard library module subprocess. Using functions from that module, you can construct your command line as a list of strings, and each would be processed as one file name, option or value. This bypasses the shell's escaping, and eliminates the need to massage your script arguments before calling.
Besides, your code would not work, because the body block of the for statement is not indented. Python would simply not accept this code (it could be that you pasted it into the question without the proper indentation).
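For example, a minimal sketch of that approach, reusing the names from the question (inputFile, inputpath, outputfolder) and assuming Python 3.5+ for subprocess.run:
import subprocess

for line in inputFile:
    name = line.strip()
    subprocess.run(["python3", "CRISPRcasIdentifier.py",
                    "-f", "%s/%s.fasta" % (inputpath, name),
                    "-o", "%s/%s.csv" % (outputfolder, name),
                    "-st", "dna",
                    "-co", outputfolder + "/"],
                   check=True)  # raise if the tool exits with a non-zero status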
As mentioned before, executing commands via os.system(command) is not recommended. Please use subprocess instead (read about this module in the Python docs). See the code here:
for command in input_file:
    # split the command string into an argument list, since no shell is used
    p = subprocess.Popen(command.split(), stdout=subprocess.PIPE, stdin=subprocess.PIPE, stderr=subprocess.PIPE)
    out, err = p.communicate()  # use this if you want to communicate with the child process
    # --- do the rest
I usually do it like this for a static command
from subprocess import check_output
def sh(command):
return check_output(command, shell=True, universal_newlines=True)
output = sh('echo hello world | sed s/h/H/')
BUT THIS IS NOT SAFE!!! It's vulnerable to shell injection, so you should do
from subprocess import check_output
from shlex import split
def sh(command):
return check_output(split(command), universal_newlines=True)
output = sh('echo hello world')
The difference is subtle but important. shell=True will create a new shell, so pipes, etc. will work. I use this when I have a big command line with pipes that is static, I mean, it does not depend on user input. This is because this variant is vulnerable to shell injection: a user can input something; rm -rf / and it will run.
The second variant only accepts one command; it will not spawn a shell, but instead run the command directly. So pipes and other shell features will not work, and it is safer.
universal_newlines=True is for getting the output as a string instead of bytes. Use it for text output; if you need binary output, just omit it. The default is False.
So here is the full example
from subprocess import check_output
from shlex import split
def sh(command):
return check_output(split(command), universal_newlines=True)
for line in inputFile:
    cmd = 'python3 CRISPRcasIdentifier.py -f %s/%s.fasta -o %s/%s.csv -st dna -co %s/' % (inputpath, line.strip(), outputfolder, line.strip(), outputfolder)
    sh(cmd)
PS: I didn't test this.
I'm trying to use subprocess.Popen() to run a command in my script. The code is:
output = Popen(["hrun DAR_MeasLogDump " + log_file_name], stdout=subprocess.PIPE, stderr = subprocess.PIPE, executable="/bin/csh", cwd=cwdir, encoding='utf-8')
When I print the output, it's printing out the created shell output and not the actual command that's in the list. I tried getting rid of executable='/bin/csh', but then Popen wouldn't even run.
I also tried using subprocess.communicate(), but it didn't work either. I would also get the shell output and not the actual command run.
I want to completely avoid using shell=True because of security issues.
EDIT: In many different attempts, "hrun" is not being recognized. "hrun" is a Perl script that is being called, DAR_MeasLogDump is the action and log_file_name is the file that the script will call its action on. Is there any sort of setup or configuration that needs to be done in order for "hrun" to be recognized?
I think the problem is that Popen requires a list of every part of the command (command + options), the documentation for Popen inside subprocess has an example for that. So for that line in your script to work, you would need to write it like this:
output = Popen(["/bin/csh", "hrun", "DAR_MeasLogDump", log_file_name], stdout=subprocess.PIPE, stderr = subprocess.PIPE)
I've removed the executable argument, but I guess it could work that way as well.
Try:
output = Popen(["-c", "hrun DAR_MeasLogDump " +log_file_name], stdout=subprocess.PIPE, stderr = subprocess.PIPE, executable="/bin/csh", cwd=cwdir, encoding='utf-8')
csh is expecting -c "full command here". Without -c I think it just tries to open it as a file.
Specifying an odd shell and an explicit cwd seems completely out of place here (assuming cwdir is defined as the current directory).
If the first argument to subprocess is a list, no shell is involved.
result = subprocess.run(["hrun", "DAR_MeasLogDump", log_file_name],
stdout=subprocess.PIPE, stderr = subprocess.PIPE,
universal_newlines=True, check=True)
output = result.stdout
If you need this to be run under a legacy version of Python, maybe use check_output instead of run.
You generally want to avoid Popen unless you need to do something which the higher-level wrapper functions cannot do.
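For example, a minimal sketch of the check_output variant under the same assumptions (hrun on the PATH, log_file_name defined):
import subprocess

output = subprocess.check_output(["hrun", "DAR_MeasLogDump", log_file_name],
                                 universal_newlines=True)  # raises CalledProcessError on failure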
You are creating an instance of subprocess.Popen but never collecting its output.
You should try:
p = Popen(["hrun", "DAR_MeasLogDump", log_file_name], stdout=subprocess.PIPE, stderr=subprocess.PIPE, cwd=cwdir, encoding='utf-8')
out, err = p.communicate()  # This will get you the output
Args should be passed as a sequence if you do not use shell=True, and then using executable should not be required.
Note that if you are not using advanced features from Popen, the doc recommends using subprocess.run:
from subprocess import run
p = run(["hrun", "DAR_MeasLogDump ", log_file_name], capture_output=True, cwd=cwdir, encoding='utf-8')
out, err = p.communicate() # This will get you output
This works with cat example:
import subprocess
log_file_name='-123.txt'
output = subprocess.Popen(['cat', 'DAR_MeasLogDump' + log_file_name],
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT)
stdout, stderr = output.communicate()
print (stdout)
print (stderr)
I think you only need to change this to your 'hrun' command.
It seems like the same problem that I had at the beginning of a project: you have to set the Windows environment variables. It turns out that when you open CMD or PowerShell, it does not recognize perl, java, etc. unless you go to the folder where the .exe, .py, .java, etc. is located and open the CMD there.
In my ADB project, once I added the folder to my environment variables, I no longer needed to go to the folder where the .exe, .py or adb code was located.
Now you can open a CMD and it will execute any command, even for your Perl script, since the interpreter that PowerShell uses will find and recognize the command.
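As an alternative to changing the system-wide settings, here is a minimal sketch of supplying an adjusted PATH just for the child process (C:\tools\hrun is a hypothetical placeholder for wherever hrun actually lives):
import os
import subprocess

env = os.environ.copy()
# prepend the directory that contains hrun (hypothetical location)
env["PATH"] = r"C:\tools\hrun" + os.pathsep + env["PATH"]

subprocess.run(["hrun", "DAR_MeasLogDump", log_file_name], env=env)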
I have a python process running, having a logger object configured to print logs in a log file.
Now, I am trying to call a scala script through this python process, by using subprocess module of Python.
subprocess.Popen(scala_run_command, stdout=subprocess.PIPE, shell=True)
The issue is, whenever the Python process exits, it hangs the shell, which comes back to life only after explicitly running the stty sane command. My guess is that this is caused by the scala script writing to the shell, and hence the shell hangs because of its stdout [something in its stdout causes the shell to lose its sanity].
For the same reason, I wanted to capture the output of the scala script in my default log file, which does not seem to be happening despite trying multiple approaches.
So, the query boils down to: how do I get the stdout of a shell command run through the subprocess module into a log file? Even if there is a better way to achieve this than subprocess.run, I would love to know the ideas.
The current state of code looks like this.
__echo_command = 'echo ":load %s"'
__spark_console_command = 'spark;'
def run_scala_script(self, script):
    echo_command = self.__echo_command % script
    spark_console_command = self.__spark_console_command
    echo_result = subprocess.run(echo_command, stdout=subprocess.PIPE, shell=True)
    result = subprocess.run(spark_console_command, stdout=subprocess.PIPE, shell=True, input=echo_result.stdout)
    logger.info('Scala script %s completed successfully' % script)
    logger.info(result.stdout)
Use
p = subprocess.Popen(...)
followed by
stdout, stderr = p.communicate()
and then stdout and stderr will contain the output bytes from the subprocess' output streams. You can then log the stdout value.
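For example, a minimal sketch that routes everything the child writes into the existing logger (assuming scala_run_command and logger from the question, and Python 3.3+ for DEVNULL):
import subprocess

p = subprocess.Popen(scala_run_command, shell=True,
                     stdout=subprocess.PIPE,
                     stderr=subprocess.STDOUT,  # merge stderr into stdout
                     stdin=subprocess.DEVNULL)  # keep the child away from the terminal's stdin
stdout, _ = p.communicate()
logger.info(stdout.decode())
Merging stderr into stdout and detaching the child's stdin may also help with the terminal being left in a broken state, since the child can no longer read from or echo to your terminal directly.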
I have looked at Calling an external command in Python and tried every possible way with subprocess and os.popen but nothing seems to work.
If I try
import os
stream = os.popen("program.ex -f file.dat | grep fish | head -4")
I get lines and lines of
grep: broken pipe
If I switch the grep and head commands around, it never gets to the grep command because the output from program.ex is prohibitively long (which is why I run with head -4).
Of course the following fails because of the pipes:
import subprocess as sp
cmd = "program.ex -f file.dat | grep fish | head -4"
proc = sp.Popen(cmd.split(),stdout=sp.PIPE,stderr=sp.PIPE)
stdout, stderr = proc.communicate()
So I tried breaking it down
cmd1 = "program.ex -f file.ex"
cmd2 = "head -4"
cmd3 = "grep fish"
proc1 = sp.Popen(cmd1.split(),stdout=sp.PIPE,stderr=sp.PIPE)
proc2 = sp.Popen(cmd2.split(),stdout=sp.PIPE,stdin=proc1.stdout)
proc3 = sp.Popen(cmd3.split(),stdout=sp.PIPE,stdin=proc2.stdout)
stdout, stderr = proc1.communicate()
which does run, except it gets stuck on cmd1 because the output from program.ex is prohibitively long.
Finally I tried hiding it in an external shell script and fortran program, but the fortran program does a
call system("program.ex -f file.dat | grep fish | head -4")
and I guess this messes up python again.
Note: If I do this directly in the terminal, it doesn't need to get the whole output from program.ex and the command finishes instantly.
So, my question is:
How can I get the above command to run in Python like it does in the terminal (i.e., head and grep the output from program.ex without waiting for all of program.ex's output)?
Help is greatly appreciated!
Edit:
I also tried with shell=True:
import subprocess as sp
cmd = "program.ex -f file.dat | head -4 | grep fish"
proc = sp.Popen(cmd.split(),stdout=sp.PIPE,stderr=sp.PIPE,shell=True)
stdout, stderr = proc.communicate()
which does run, and while stderr has expected (un-needed) content, stdout is empty. If I replace the above cmd variable with the name of a fortran program which calls the system command instead, then it hangs on program.ex again, probably waiting for all the output to finish.
You can use bash to handle the pipes.
It can only run script files; it won't run bare commands (bash -e echo gives /bin/echo: /bin/echo: cannot execute binary file):
bash -e <script to run>
If you put the commands in the script file, it will run them.
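For example, a minimal sketch of that approach from Python (pipeline.sh is a hypothetical file name):
import subprocess

# write the whole pipeline into a small script file
with open("pipeline.sh", "w") as f:
    f.write("program.ex -f file.dat | grep fish | head -4\n")

out = subprocess.check_output(["bash", "-e", "pipeline.sh"])
print(out.decode())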
This still gives me an error of sorts in the stderr from the first process, but maybe it's still good enough for your purposes? Using your multiple pipes example, but calling .communicate() on the output process:
import subprocess
cmd1 = ['yes', 'fishy'] # is this similar enough to your example program?
cmd2 = ['head', '-4']
cmd3 = ['grep', 'fish']
proc1 = subprocess.Popen(cmd1, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
proc2 = subprocess.Popen(cmd2, stdout=subprocess.PIPE, stderr=subprocess.PIPE,
stdin=proc1.stdout)
proc3 = subprocess.Popen(cmd3, stdout=subprocess.PIPE, stderr=subprocess.PIPE,
stdin=proc2.stdout)
result, err3 = proc3.communicate()
proc2.wait()
err2 = proc2.stderr.read()
proc1.stdout.close()
proc1.wait()
err1 = proc1.stderr.read() # 'yes: standard output: Broken pipe\nyes: write error\n'
This amazing but not well-known library might be what you're looking for:
https://github.com/kennethreitz/envoy
Be sure to use the Github version, not the one that gets installed with pip. It's only a single file by the way.
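A minimal sketch of what that might look like, assuming envoy's run() interface (which accepts pipes in the command string):
import envoy

r = envoy.run("program.ex -f file.dat | grep fish | head -4")
print(r.std_out)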