From Python:
import os

output = os.popen("ps -o cmd=1").read()
print output
Output:
1
/bin/bash
python myPython.pyc
sh -c ps -o cmd=1
ps -o cmd=1
But when I run that command from a terminal it returns what I want:
/sbin/init
Also, when I run the "ls -l" command from Python, it returns the correct output.
My main goal is to find the name of a process from its PID in Python.
What should I do?
This does not answer the question regarding why you get different output, but a better approach to solving the goal you are after is to either:
Open and read /proc/<pid>/cmdline, or
Read the symbolic link /proc/<pid>/exe
EDIT: Get rid of the popen call there and the subsequent "useless use of cat". Do this instead:
with open("/proc/" + data.get("pid") + "/cmdline") as cmd:
    cmdinfo = cmd.read()
command = cmdinfo.split("\0")
print command[0]
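For the second option (the /proc/<pid>/exe symlink), here is a minimal sketch, assuming a Linux /proc filesystem; note that the link target is the executable path rather than the full command line, and reading another user's /proc/<pid>/exe may require elevated privileges:
import os

pid = "1234"  # hypothetical PID, for illustration only
exe_path = os.readlink("/proc/" + pid + "/exe")  # e.g. "/usr/bin/python"
print os.path.basename(exe_path)                 # just the program name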
I know you have an answer now, but the reason that your original attempt did not work is probably because popen creates a brand new process, and therefore a different process environment.
When I run 'ps -o cmd=1' from my terminal, I get results similar to yours with popen.
1
bash
ps -o cmd=1
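For what it's worth, the = in -o cmd=1 makes ps treat 1 as the column header rather than as a PID, which is why every process gets listed. A minimal sketch of querying a single PID, assuming a procps-style ps where -p selects the process and an empty header suppresses the heading line:
import os

pid = 1
# "-o comm=" prints only the command name with no header line;
# "-p" restricts the output to the given PID
name = os.popen("ps -o comm= -p %d" % pid).read().strip()
print name  # e.g. "init"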
Is there a way to create a Python script which wraps an entire bash command, including the pipes?
For example, if I have the following simple script
import sys
print sys.argv
and call it like so (from bash or ipython), I get the expected outcome:
[pkerp@pendari trell]$ python test.py ls
['test.py', 'ls']
If I add a redirection, however, the output of the script gets sent to the redirection target instead:
[pkerp@pendari trell]$ python test.py ls > out.txt
And the > out.txt portion is not in sys.argv. I understand that the shell automatically processes this redirection, but I'm curious if there's a way to force the shell to ignore it and pass it to the process being called.
The point of this is to create something like a wrapper for the shell. I'd like to run the commands as usual, but keep track of the strace output for each command (including the pipes). Ideally I'd like to keep all of the bash features, such as tab completion, up/down-arrow history, and history search, and then just pass the completed command through a Python script which invokes a subprocess to handle it.
Is this possible, or would I have to write my own shell to do this?
Edit
It appears I'm asking the exact same thing as this question.
The only thing you can do is pass the entire shell command as a string, then let Python pass it back to a shell for execution.
$ python test.py "ls > out.txt"
Inside test.py, something like:
import subprocess
import sys

subprocess.call("strace " + sys.argv[1], shell=True, executable="/bin/bash")
to ensure the entire string is passed to the shell (and bash, specifically).
Well, I don't quite see what you are trying to do. The general approach would be to give the desired output destination to the script using command line options: python test.py ls --output=out.txt. Incidentally, strace writes to stderr. You could capture everything using strace python test.py > out 2> err if you want to save everything...
Edit: If your script writes to stderr as well you could use strace -o strace_out python test.py > script_out 2> script_err
Edit2: Okay, I understand better what you want. My suggestion is this: Write a bash helper:
function process_and_evaluate()
{
    strace -o /tmp/output/strace_output "$@"
    /path/to/script.py /tmp/output/strace_output
}
Put this in a file like ~/helper.sh. Then open a bash shell and source it using . ~/helper.sh.
Now you can run it like this: process_and_evaluate ls -lA.
Edit3:
To capture output / error you could extend the function like this:
function process_and_evaluate()
{
    out=$1
    err=$2
    shift 2
    strace -o /tmp/output/strace_output "$@" > "$out" 2> "$err"
    /path/to/script.py /tmp/output/strace_output
}
You would have to use the (less obvious) invocation process_and_evaluate out.txt err.txt ls -lA.
This is the best that I can come up with...
At least in your simple example, you could just run the python script as an argument to echo, e.g.
$ echo $(python test.py ls) > test.txt
$ more test.txt
['test.py', 'ls']
Enclosing a command in parentheses preceded by a dollar sign first executes the command, then passes its output as an argument to echo.
I want to pass the name of a file to a Python script while running it from the command line. I'm trying this command:
cat enwiki-latest-pages-articles.xml | python WikiExtractor.py -b 500K -o extracted
however, it gives an error:
'cat' is not recognized as an internal or external command, operable program or batch file.
Thanks in advance.
It seems like you're running the command on Windows. On Windows, there is no cat command unless you installed one yourself.
You can use the type command instead:
type enwiki-latest-pages-articles.xml | python WikiExtractor.py -b 500K -o extracted
The correct way would be python WikiExtractor.py -b5 -o extracted -f enwiki-latest-pages-articles.xml.
Then read the input arguments from the sys.argv list provided by the sys module.
This may help:
http://www.tutorialspoint.com/python/python_command_line_arguments.htm
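As a hedged sketch of that approach, assuming the -f flag suggested above is followed directly by the filename:
import sys

# e.g.: python WikiExtractor.py -b 500K -o extracted -f enwiki-latest-pages-articles.xml
filename = sys.argv[sys.argv.index("-f") + 1]
with open(filename) as f:
    for line in f:
        pass  # process each line of the XML dump here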
I am a newbie to Python, and for GUIs I use wxPython.
My Issue is this:
I have to create a Debian package for two types of products (say product1 and product2). That can be done by running the README.package.creation file. For product1, in .bashrc we have to set
Product = product1
After that we have to run "make clean" in a new terminal (otherwise the changes in .bashrc will not take effect, i.e. the product may not equal product1 if we don't follow this procedure), then we have to run ./README.package.creation. README.package.creation then automatically picks up the product type product1.
If I do this manually it works fine, but if I do it through the GUI, README.package.creation does not pick up the product type; a null value is sent from Python.
Please help me solve this issue.
My code is:
subprocess.call("sed -i '/export PRODUCT/d' .bashrc", shell=True)
subprocess.call("sed -i '/export BOARD=TYpe/ a\ export PRODUCT=product1' .bashrc", shell=True)
os.chdir("/home/x/y/z")
subprocess.call("make clean", shell=True)
os.chdir("/home/x/main/src/package")
subprocess.call("sed -i 's/re.build -f -gui -p all/re.build -gui -p all -svn no/' README.package.creation", shell=True)
subprocess.call("gksu debian", shell=True)
subprocess.Popen("xfce4-terminal -e 'bash -c \"./README.package.creation -u %s\";sleep 10'" % (str(u_name)),shell=True)
After that, I have to follow the same procedure for product2 as well.
EDIT:
What about os.environ in Python?
I have tried changing the value with os.putenv and then with os.environ, but it does not seem to work.
Try:
import os
import subprocess

os.environ['PRODUCT'] = 'product1'  # match the variable name exported in .bashrc
subprocess.call("make clean", shell=True)
and so on
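If editing and re-sourcing .bashrc proves fragile, here is a minimal sketch of an alternative: build the environment explicitly and hand it to the child process (the env argument replaces the child's entire environment, so copy the parent's first):
import os
import subprocess

env = os.environ.copy()        # inherit everything from the parent
env['PRODUCT'] = 'product1'    # assuming the build reads PRODUCT
subprocess.call("make clean", shell=True, env=env)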
Your problem is very simple, and so is the solution.
In the subprocess.Popen(...), change the call from:
subprocess.Popen("xfce4-terminal -e 'bash -c \"./README.package.creation -u %s\";sleep 10'" % (str(u_name)),shell=True)
to:
subprocess.Popen("xfce4-terminal -e 'bash -c \"source ~/.bashrc; ./README.package.creation -u %s\";sleep 10'" % (str(u_name)),shell=True)
Essentially, you're asking bash to source the .bashrc file before calling the package creation command.
Another illustration:
sgulati@precise:~$ cat /tmp/1.sh
export A=100
sgulati@precise:~$ python -c "import subprocess
print subprocess.Popen(['bash', '-c', 'source /tmp/1.sh; echo \$A'], stdout=subprocess.PIPE).stdout.read()"
100
In this example, I declare the variable A=100 in /tmp/1.sh, source it and then execute echo $A. Because of source /tmp/1.sh, the value of A is known when echo $A is executed.
Please note that the syntax I used in my example is Python 2.7.3 syntax, but the concept is pretty much identical no matter how you go about it.
I am using pexpect.run to execute a command. See below:
cmd = "grep -L killed /dir/dumps/*MAC-66.log"
output = pexpect.run(cmd)
When I run this, output is:
grep: /dir/dumps/*MAC-66.log: No such file or directory
But when I run the same command in my shell, it works every time. I don't see the problem. Any help is appreciated! Does pexpect.run require the command to be split in some fancy way?
Your shell is interpreting the glob, pexpect is not. You could either use python's glob.glob() function to evaluate the glob yourself, or run it through your shell, for example:
cmd = "bash -c 'grep -L killed /dir/dumps/*MAC-66.log'"
Also, if all you're after is output of this command, you ought to check out the subprocess module.
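A minimal sketch of the glob.glob() route, assuming the matched paths contain no spaces (otherwise passing a list through subprocess would be safer):
import glob
import pexpect

# Expand the wildcard ourselves, since pexpect.run() does not invoke a shell
logs = glob.glob("/dir/dumps/*MAC-66.log")
output = pexpect.run("grep -L killed " + " ".join(logs))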
I want to run a command like this
grep -w 1 pattern <(tail -f mylogfile.log)
Basically, from a Python script I want to monitor a log file for a specific string and continue with the script as soon as it is found.
I am using os.system(), but that is hanging. The same command works fine in bash.
I have a very old version of Python (2.3) and so don't have the subprocess module.
Is there a way to achieve this?
In Python 2.3, you need to use subprocess from SVN
import subprocess
import shlex
subprocess.call(shlex.split("/bin/bash -c 'grep -w 1 pattern <(tail -f mylogfile.log)'"))
To be explicit, you need to install it from the SVN link above.
You need to call this with /bin/bash -c because the process substitution (<(...)) you're using is a bash feature.
EDIT
If you want to solve this with os.system(), just wrap the command in /bin/bash -c, since you're using process substitution...
os.system("/bin/bash -c 'grep -w 1 pattern <(tail -f mylogfile.log)'")
First of all, the command I think you should be using is grep -w -m 1 'pattern' <(tail -f in)
For executing commands in Python, use the Popen constructor from the subprocess module. Read more at
http://docs.python.org/library/subprocess.html
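A minimal sketch of that approach, running the pipeline through bash because <(...) is a bash feature, with -m 1 so grep exits after the first match:
import subprocess

# stdout=subprocess.PIPE lets us read grep's output from Python
proc = subprocess.Popen(
    ["/bin/bash", "-c", "grep -w -m 1 'pattern' <(tail -f mylogfile.log)"],
    stdout=subprocess.PIPE)
match = proc.stdout.readline()  # blocks until the pattern appears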
If I understand correctly, you want to send the output to python like this -
tail -f mylogfile.log | grep -w 1 pattern | python yourscript.py
i.e., read all updates to the log file, and send matching lines to your script.
To read from standard input, you can use the file-like object: sys.stdin.
So your script would look like
import sys
# Iterate over sys.stdin rather than calling readlines(), which would
# block until EOF (and EOF never arrives from "tail -f")
for line in sys.stdin:
    # process each line here
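For the stated goal (continuing the script as soon as the string appears), here is a minimal sketch that uses readline() so each line is handed over as it arrives instead of waiting for a buffer to fill; the literal "pattern" is a stand-in for the real string:
import sys

while True:
    line = sys.stdin.readline()
    if not line:               # upstream pipe was closed
        break
    if "pattern" in line:      # the string we are waiting for
        break
# ... continue with the rest of the script here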