I'm encountering the following problem:
I have this simple script, called test2.sh:
#!/bin/bash
function hello() {
echo "hello world"
}
hello
When I run it from the shell, I get the expected result:
$ ./test2.sh
hello world
However, when I try to run it from Python (2.7.?), I get the following:
>>> import commands
>>> cmd="./test2.sh"
>>> commands.getoutput(cmd)
'./test2.sh: 3: ./test2.sh: Syntax error: "(" unexpected'
I believe it is somehow running the script with "sh" rather than bash, because when I run it with sh directly I get the same error message:
$ sh ./test2.sh
./test2.sh: 3: ./test2.sh: Syntax error: "(" unexpected
In addition, when I prefix the command with "bash" from Python, it works:
>>> cmd="bash ./test2.sh"
>>> commands.getoutput(cmd)
'hello world'
My question is: why does Python choose to run the script with sh instead of bash, even though I added the #!/bin/bash line at the beginning of the script? And how can I fix it? (I don't want to prepend 'bash' in Python, since my script is run from Python on distant machines which I can't control.)
Thanks!
There seems to be some other problem: the shebang and commands.getoutput should work properly as you show here. Change the shell script to just:
#!/bin/bash
sleep 100
and run the app again. Then check the actual process tree with ps f. It's true that getoutput calls sh -c ..., but that shouldn't change which shell executes the script itself.
From a minimal test as described in the question, I see the following process tree:
11500 pts/5 Ss 0:00 zsh
15983 pts/5 S+ 0:00 \_ python2 ./c.py
15984 pts/5 S+ 0:00 \_ sh -c { ./c.sh; } 2>&1
15985 pts/5 S+ 0:00 \_ /bin/bash ./c.sh
15986 pts/5 S+ 0:00 \_ sleep 100
So in isolation this works as expected: Python calls sh -c { ./c.sh; }, and the script itself is executed by the shell specified in its first line (bash).
Make sure you're executing the right script: since you're using ./test2.sh, double-check that you're in the right directory and executing the right file. (Does print open('./test2.sh').read() return what you expect?)
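For example, a quick sanity check from the same interpreter (a sketch reusing the Python-2-only commands module from the question; the first output shown is what you should see if the script really begins with the bash shebang):
>>> import commands
>>> commands.getoutput('head -1 ./test2.sh')
'#!/bin/bash'
>>> commands.getoutput('ls -l ./test2.sh')   # also confirm the execute bit is set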
Current Situation
I created a PHP script to start the Python script. The script is as follows:
$python_file = "/var/www/web/test.py 2>&1 | tee -a /tmp/mylog 2>/dev/null >/dev/null &";
$command = "nohup python3 ".$python_file;
exec($command);
Problem:
After triggering the PHP script, the request keeps running and finally returns a 504 error page.
Expected Solution
After triggering the above script, it needs to return immediately after the exec statement. Is that possible?
add & to run in the background
$python_file = "/var/www/web/test.py 2>&1 | tee -a /tmp/mylog 2>/dev/null >/dev/null &";
$command = "nohup python3 ".$python_file . " &";
exec($command);
I am trying to start a Python script on my VM from my local macOS machine.
I did
ssh -i /key/path/id_rsa root@111.11.1.0 "sleep 5s; cd /root/Server; pkill -f server.py; ./server.py;"
Result
It SSHes in, quickly runs those commands, and then logs me out. I was expecting the SSH session to stay open.
My script is NOT running ...
ps -aux | grep python
root 901 0.0 0.2 553164 18584 ? Ssl Jan19 20:37 /usr/bin/python -Es /usr/sbin/tuned -l -P
root 15444 0.0 0.0 112648 976 pts/0 S+ 19:16 0:00 grep --color=auto python
If I do this it works
ssh -i /key/path/id_rsa root@111.11.1.0 "sleep 5s; cd /root/Server"
Then
./server.py;
Then, it works.
Am I missing anything?
You might need to state the shell that starts your script explicitly, i.e. /bin/bash server.py:
ssh -i /key/path/id_rsa root@111.11.1.0 "sleep 5s; cd /root/Server; pkill -f server.py; /bin/bash ./server.py;"
If you would like to start the script and leave it running even after you end your ssh session, you could use nohup. Notice that you need to put the process in the background and redirect stdin, stdout, and stderr to completely detach from the remote process:
ssh -i /key/path/id_rsa root@111.11.1.0 "sleep 5s; cd /root/Server; nohup /bin/bash ./server.py < /dev/null > std.out 2> std.err &"
It seems the reason your ssh command returns immediately is that the call to pkill -f server.py also terminates the ssh session itself, since the session's command line also contains server.py.
I don't have my regular MacBook Pro here to test with, but I think that adding another semicolon and ending the command line with /bin/bash might do it.
I use the following snippet in a larger Python program to spawn a process in the background:
import subprocess
command = "/media/sf_SharedDir/FOOBAR"
subprocess.Popen(command, shell=True)
After that I wanted to check whether the process was running when my Python program returned.
Output of ps -ef | grep -v grep | grep FOOBAR:
ap 3396 937 0 16:08 pts/16 00:00:00 /bin/sh -c /media/sf_SharedDir/FOOBAR
ap 3397 3396 0 16:08 pts/16 00:00:00 /bin/sh /media/sf_SharedDir/FOOBAR
I was surprised to see two lines of output, and they have different PIDs. So are two processes running? Is there something wrong with my Popen call?
FOOBAR Script:
#!/bin/bash
while :
do
echo "still alive"
sleep 1
done
EDIT: When I start the script in a terminal, ps displays only one process.
Started via ./FOOBAR
ap#VBU:/media/sf_SharedDir$ ps -ef | grep -v grep | grep FOOBAR
ap 4115 3463 0 16:34 pts/5 00:00:00 /bin/bash ./FOOBAR
EDIT: shell=True is causing this issue (if it is one). But how would I fix it if I require shell=True to run bash commands?
There is nothing wrong; what you see is perfectly normal. There is no "fix".
Each of your processes has a distinct function. The top-level process is running the python interpreter.
The second process, /bin/sh -c /media/sf_SharedDir/FOOBAR, is the shell that interprets the command line (you specified shell=True because you want |, *, or $HOME to be interpreted).
The third process, /bin/sh /media/sf_SharedDir/FOOBAR, is the FOOBAR command itself. The /bin/sh comes from the #! line inside your FOOBAR program. If it were a C program, you'd just see /media/sf_SharedDir/FOOBAR here. If it were a Python program, you'd see /usr/bin/python /media/sf_SharedDir/FOOBAR.
If you are really bothered by the second process, you could modify your python program like so:
command = "exec /media/sf_SharedDir/FOOBAR"
subprocess.Popen(command, shell=True)
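Alternatively, if the command doesn't actually need any shell features, a minimal sketch that passes an argument list with shell=False (the default) avoids the intermediate shell entirely, since the kernel then runs the script's hashbang interpreter directly:
import subprocess
# No pipes, globs, or variable expansion needed, so skip the /bin/sh -c wrapper.
p = subprocess.Popen(['/media/sf_SharedDir/FOOBAR'])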
I have a python script to start a process which I want to monitor using Nagios. When I run that script and perform ps -ef on my Ubuntu EC2 instance, it shows the process as python <filename>.py --arguments. For Nagios to monitor that process using check_procs, we need to supply the process name, and here the process name becomes 'python'.
/usr/lib/nagios/plugins/check_procs -C python
It returns output saying that one python process is running. This is fine when I'm running a single Python process, but if I'm running multiple Python scripts and want to monitor only some of them, I have to supply a specific process name. If I give the Python script's filename in the above command instead, it throws an error. So I want to mask the whole python <filename>.py --arguments as some other name, so that when running check_procs I can give that new name.
If anyone has any idea, please let me know. I have checked other Stack Overflow questions, which suggest changing the Python process name using setproctitle, but I want to do it from the shell.
Regards,
Sanket
You can use the check_procs command to look at the arguments, which include the module name. The following command will let you know whether the Python module 'module.py' is running.
/usr/lib/nagios/plugins/check_procs -c 1:1 -a module.py -C python
The -c argument lets you set the critical range: 1:1 will trigger a critical status if there are more or fewer than exactly 1 matching process running.
The -a argument filters for processes whose arguments contain 'module.py' (change it to the name of the module you want to monitor).
The -C argument makes sure that the process is a python process.
If you need help figuring out how to create the service definition, I had to figure that out too. Just let me know.
Reference: check_procs plugin manpage: http://nagiosplugins.org/man/check_procs
You can't change the process name from pure Python, although you can use a wrapper (for example, written in C) to do so.
However, what you should do instead is make your program a daemon and use a pidfile. Have a look at the Python daemon API (PEP 3143) and its reference implementation, python-daemon.
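For illustration, a minimal sketch using the python-daemon package (the pidfile path and the main() entry point here are assumptions, not part of the original question; check the package docs for the exact module layout):
import time

import daemon
import daemon.pidfile

def main():
    # Placeholder for the real long-running work.
    while True:
        time.sleep(60)

if __name__ == '__main__':
    # Detach from the terminal; the pidfile path is an assumed example.
    with daemon.DaemonContext(
            pidfile=daemon.pidfile.TimeoutPIDLockFile('/var/run/myscript.pid')):
        main()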
check_procs already handles this situation.
check_procs can tell the difference between scripts launched as an argument to the interpreter and jobs run directly via a hashbang interpreter, even though both of these look the same in the ps output! The latter case will not be listed by check_procs -C python!
If you run your scripts explicitly via python, i.e. python <filename.py>, then you can monitor them with check_procs -C python -a filename.py.
If you put #!/usr/bin/python in your scripts and run them as ./filename.py, then you can monitor with check_procs -C filename.py.
Example command line session showing this behavior:
#make test.py directly executable. See code below
$ chmod a+x test.py
#launch via python explicitly:
$ /usr/bin/python ./test.py &
[1] 27094
$ check_procs -C python && check_procs -C test.py && check_procs -a test.py
PROCS OK: 1 process with command name 'python'
PROCS OK: 0 processes with command name 'test.py'
PROCS OK: 1 process with args 'test.py'
#launch via python implicitly
$ ./test.py &
[2] 27134
$ check_procs -C python && check_procs -C test.py && check_procs -a test.py
PROCS OK: 1 process with command name 'python'
PROCS OK: 1 process with command name 'test.py'
PROCS OK: 2 processes with args 'test.py'
#PS 'COMMAND' output looks the same
$ ps 27094 27134
PID TTY STAT TIME COMMAND
27094 pts/6 S 0:00 /usr/bin/python ./test.py
27134 pts/6 S 0:00 /usr/bin/python ./test.py
#kill the explicit test
$ kill 27094
[1] - terminated /usr/bin/python ./test.py
$ check_procs -C python && check_procs -C test.py && check_procs -a test.py
PROCS OK: 0 processes with command name 'python'
PROCS OK: 1 process with command name 'test.py'
PROCS OK: 1 process with args 'test.py'
#kill the implicit test
$ kill 27134
[2] + terminated ./test.py
$ check_procs -C python && check_procs -C test.py && check_procs -a test.py
PROCS OK: 0 processes with command name 'python'
PROCS OK: 0 processes with command name 'test.py'
PROCS OK: 0 processes with args 'test.py'
test.py is a python script that sleeps for 2 minutes. It is chmod +x and has a hashbang #! line invoking /usr/bin/python.
#!/usr/bin/python
import time
time.sleep(120)
Create a pidfile and use that file for the process lookup with Nagios.
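A minimal sketch of the Python side, assuming /var/run/myscript.pid as the pidfile path (the Nagios side then needs a pidfile-aware check; plain check_procs does not read pidfiles):
import os

PIDFILE = '/var/run/myscript.pid'  # assumed path; match whatever your check reads

def write_pidfile(path=PIDFILE):
    # Record our PID at startup so an external monitor can find the process.
    with open(path, 'w') as f:
        f.write('%d\n' % os.getpid())

if __name__ == '__main__':
    write_pidfile()
    # ... rest of the long-running script ...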
I'm not saying this is the best solution (it wouldn't scale well at all), but you can create a symbolic link to the python command and execute your script using that link, e.g.:
ln -s `which python` ~/mypython
~/mypython myscript.py
Scripts launched using the link should show up as mypython in ps.
You can use subprocess.Popen to change the name that appears in ps; normally you'd need a wrapper script (or some weird fork magic), but Popen's executable argument does the trick. The following code causes ps to list the process as kwyjibo /tmp/test.py instead of /usr/bin/python /tmp/test.py:
import subprocess
p = subprocess.Popen(['kwyjibo', '/tmp/test.py'], executable='/usr/bin/python')
I wonder if anyone has any insights into this. I have a bash script that should put my ssh key onto a remote machine. Adapted from here, the script reads:
#!/usr/bin/sh
REMOTEHOST=user@remote
KEY="$HOME/.ssh/id_rsa.pub"
KEYCODE=`cat $KEY`
ssh -q $REMOTEHOST "mkdir ~/.ssh 2>/dev/null; chmod 700 ~/.ssh; echo "$KEYCODE" >> ~/.ssh/authorized_keys; chmod 644 ~/.ssh/authorized_keys"
This works. The equivalent Python script should be:
#!/usr/bin/python
import os
os.system('ssh -q %(REMOTEHOST)s "mkdir ~/.ssh 2>/dev/null; chmod 700 ~/.ssh; echo "%(KEYCODE)s" >> ~/.ssh/authorized_keys; chmod 644 ~/.ssh/authorized_keys"' %
{'REMOTEHOST':'user@remote',
'KEYCODE':open(os.path.join(os.environ['HOME'],
'.ssh/id_rsa.pub'),'r').read()})
But in this case, I get:
sh: line 1: >> ~/.ssh/authorized_keys; chmod 644 ~/.ssh/authorized_keys: No
such file or directory
What am I doing wrong? I tried escaping the innermost quotes, but I got the same error message... Thank you in advance for your responses.
You have a genuine question (os.system isn't behaving the way you expect it to), but you should also seriously rethink the approach as a whole.
You're launching a Python interpreter, but then, via os.system, telling that Python interpreter to launch a shell! os.system shouldn't be used at all in modern Python (subprocess is a complete replacement), and using any Python call that starts a shell instance is especially silly in this kind of use case.
Now, for the actual, immediate problem: look at how your quotation marks nest. The double quote you open before mkdir is closed by the double quote in the echo, so your command gets split in a spot you don't intend.
The following fixes the immediate issue, but it's still awful and evil (it starts a subshell unnecessarily, doesn't properly check the exit status, and should be converted to use subprocess.Popen()):
os.system('''ssh -q %(REMOTEHOST)s "mkdir ~/.ssh 2>/dev/null; chmod 700 ~/.ssh; echo '%(KEYCODE)s' >> ~/.ssh/authorized_keys; chmod 644 ~/.ssh/authorized_keys"''' % {
'REMOTEHOST':'user@remote',
'KEYCODE':open(os.path.join(os.environ['HOME'], '.ssh/id_rsa.pub'),'r').read()
})
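For reference, a minimal sketch of the same key push using subprocess with an argument list, so no local shell is involved at all (the remote command string is still parsed by the remote shell; user@remote is the placeholder from the question):
import os
import subprocess

key = open(os.path.join(os.environ['HOME'], '.ssh/id_rsa.pub')).read().strip()
remote_cmd = ("mkdir ~/.ssh 2>/dev/null; chmod 700 ~/.ssh; "
              "echo '%s' >> ~/.ssh/authorized_keys; "
              "chmod 644 ~/.ssh/authorized_keys") % key
# ssh receives the remote command as a single argument: no local quoting maze.
status = subprocess.call(['ssh', '-q', 'user@remote', remote_cmd])
if status != 0:
    raise RuntimeError('ssh exited with status %d' % status)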