vim cmd to tee to a persistent python 'watchdog' process - python

I would like to run a command from vim to spawn a persistent Python watchdog process.
The normal command is pycco -w somefile.py.
I have tried running tee from vim.
:!pycco -w | tee % | w % runs the Python watchdog command but does not return to vim.
Reversed, :w tee % | !pycco -w % generates the error
E172: Only one file name allowed
A commenter in this thread mentions using sh -c ">" rather than tee.
(I get a trailing characters error when trying sh).
How do I tee a process using a vim {cmd}?

Just a suggestion, but you might benefit from vimux, a plugin for Vim that allows you to work with Tmux from inside of Vim.
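For example, a minimal sketch (assuming vimux is installed and you are running Vim inside tmux) would be :call VimuxRunCommand("pycco -w " . expand("%")), which starts the watchdog in a separate tmux pane and returns control to Vim immediately.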

Related

Script works differently when run from the terminal than when run from Python

I have a short bash script foo.sh
#!/bin/bash
cat /dev/urandom | tr -dc 'a-z1-9' | fold -w 4 | head -n 1
When I run it directly from the shell, it runs fine, exiting when it is done
$ ./foo.sh
m1un
$
but when I run it from Python
$ python -c "import subprocess; subprocess.call(['./foo.sh'])"
ygs9
it outputs the line but then just hangs forever. What is causing this discrepancy?
Adding the trap -p command to the bash script, stopping the hung python process and running ps shows what's going on:
$ cat foo.sh
#!/bin/bash
trap -p
cat /dev/urandom | tr -dc 'a-z1-9' | fold -w 4 | head -n 1
$ python -c "import subprocess; subprocess.call(['./foo.sh'])"
trap -- '' SIGPIPE
trap -- '' SIGXFSZ
ko5o
^Z
[1]+ Stopped python -c "import subprocess; subprocess.call(['./foo.sh'])"
$ ps -H -o comm
COMMAND
bash
  python
    foo.sh
      cat
      tr
      fold
  ps
Thus, subprocess.call() executes the command with the SIGPIPE signal ignored. When head does its job and exits, the remaining processes do not receive the broken pipe signal and do not terminate.
Having the explanation of the problem at hand, it was easy to find the bug in the python bugtracker, which turned out to be issue#1652.
The problem of Python 2 handling SIGPIPE in a non-standard way (i.e., ignoring it) is already covered in Leon's answer, and the fix is given in the link: set SIGPIPE back to the default handler (SIG_DFL) with, e.g.,
import signal
signal.signal(signal.SIGPIPE,signal.SIG_DFL)
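If you would rather not change the parent interpreter's signal handling globally, a minimal sketch of an alternative (same idea, just applied per call via preexec_fn) is:
import signal
import subprocess

# Reset SIGPIPE to its default action in the child, between fork() and
# exec(), so the pipeline in foo.sh terminates normally when head exits.
subprocess.call(
    ['./foo.sh'],
    preexec_fn=lambda: signal.signal(signal.SIGPIPE, signal.SIG_DFL),
)
(Python 3's subprocess already restores SIGPIPE to its default in the child via restore_signals=True, which is why the script does not hang there.)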
You can try to unset SIGPIPE from within your script with, e.g.,
#!/bin/bash
trap SIGPIPE # reset SIGPIPE
cat /dev/urandom | tr -dc 'a-z1-9' | fold -w 4 | head -n 1
but, unfortunately, it doesn't work, as per the Bash reference manual
Signals ignored upon entry to the shell cannot be trapped or reset.
A final comment: you have a useless use of cat here; it's better to write your script as:
#!/bin/bash
tr -dc 'a-z1-9' < /dev/urandom | fold -w 4 | head -n 1
Yet, since you're using Bash, you might as well use the read builtin as follows (this will advantageously replace fold and head):
#!/bin/bash
read -n4 a < <(tr -dc 'a-z1-9' < /dev/urandom)
printf '%s\n' "$a"
It turns out that with this version, you'll have a clear idea of what's going on (and the script will not hang):
$ python -c "import subprocess; subprocess.call(['./foo'])"
hcwh
tr: write error: Broken pipe
tr: write error
$
$ # script didn't hang
(Of course, it works well with no errors with Python3). And telling Python to use the default signal for SIGPIPE works well too:
$ python -c "import signal; import subprocess; signal.signal(signal.SIGPIPE,signal.SIG_DFL); subprocess.call(['./foo'])"
jc1p
$
(and also works with Python3).

`ps -ef` shows running process twice if started with `subprocess.Popen`

I use the following snippet in a larger Python program to spawn a process in background:
import subprocess
command = "/media/sf_SharedDir/FOOBAR"
subprocess.Popen(command, shell=True)
After that I wanted to check whether the process was running when my Python program returned.
Output of ps -ef | grep -v grep | grep FOOBAR:
ap 3396 937 0 16:08 pts/16 00:00:00 /bin/sh -c /media/sf_SharedDir/FOOBAR
ap 3397 3396 0 16:08 pts/16 00:00:00 /bin/sh /media/sf_SharedDir/FOOBAR
I was surprised to see two lines, and they have different PIDs, so are two processes running? Is there something wrong with my Popen call?
FOOBAR Script:
#!/bin/bash
while :
do
echo "still alive"
sleep 1
done
EDIT: When the script is started in a terminal, ps displays only one process.
Started via ./FOOBAR
ap#VBU:/media/sf_SharedDir$ ps -ef | grep -v grep | grep FOOBAR
ap 4115 3463 0 16:34 pts/5 00:00:00 /bin/bash ./FOOBAR
EDIT: shell=True is causing this issue (if it is one). But how would I fix that if I require shell to be True to run bash commands?
There is nothing wrong, what you see is perfectly normal. There is no "fix".
Each of your processes has a distinct function. The top-level process is running the python interpreter.
The second process, /bin/sh -c /media/sf_SharedDir/FOOBAR, is the shell that interprets the command line (because you want | or * or $HOME to be interpreted, you specified shell=True).
The third process, /bin/sh /media/sf_SharedDir/FOOBAR, is the FOOBAR command. The /bin/sh comes from the #! line inside your FOOBAR program. If it were a C program, you'd just see /media/sf_SharedDir/FOOBAR here. If it were a python program, you'd see /usr/bin/python /media/sf_SharedDir/FOOBAR.
If you are really bothered by the second process, you could modify your python program like so:
command = "exec /media/sf_SharedDir/FOOBAR"
subprocess.Popen(command, shell=True)
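If you don't actually need shell features such as |, * or $HOME expansion for this particular command, another option (a sketch, not from the original answer) is to drop shell=True and pass the command directly, which avoids the intermediate shell altogether:
import subprocess

# No /bin/sh -c wrapper is spawned; the script is exec'd directly, so
# ps shows a single /bin/bash /media/sf_SharedDir/FOOBAR process.
subprocess.Popen(["/media/sf_SharedDir/FOOBAR"])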

Kill python interpreter in Linux from the terminal

I want to kill the Python interpreter - the intention is that all the Python files that are running at this moment will stop (without any information about these files).
Obviously the processes should be closed.
Any idea such as deleting files in Python or destroying the interpreter is OK :D (I am working with a virtual machine).
I need it from the terminal because I write C code and I use Linux commands...
Hoping for help.
pkill -9 python
should kill any running python process.
There's a rather crude way of doing this, but be careful: first, this relies on python interpreter processes identifying themselves as python, and second, it has the concomitant effect of also killing any other processes identified by that name.
In short, you can kill all python interpreters by typing this into your shell (make sure you read the caveats above!):
ps aux | grep python | grep -v "grep python" | awk '{print $2}' | xargs kill -9
To break this down, this is how it works. The first bit, ps aux | grep python | grep -v "grep python", gets the list of all processes calling themselves python, with the grep -v making sure that the grep command you just ran isn't also included in the output. Next, we use awk to get the second column of the output, which has the process IDs. Finally, these processes are all (rather unceremoniously) killed by passing each of them to kill -9.
pkill with script path
pkill -9 -f path/to/my_script.py
is a short and selective method that is more likely to only kill the interpreter running a given script.
See also: https://unix.stackexchange.com/questions/31107/linux-kill-process-based-on-arguments
You can try the killall command:
killall python
pgrep -f <your process name> | xargs kill -9
This will kill the processes matching your process name.
In my case it is
pgrep -f python | xargs kill -9
pgrep -f youAppFile.py | xargs kill -9
pgrep returns the PIDs matching that specific file, so this will only kill that specific application.
If you want to list the processes and kill them with the kill command, I recommend using this one-liner to kill all running python3 processes and free up your RAM:
ps auxww | grep 'python3' | awk '{print $2}' | xargs kill -9
To kill a Python script on Ubuntu 20.04.2, instead of Ctrl + C just press Ctrl + D.
I have seen the pkill command as the top answer. While that is all great, I still try to tread carefully (since, I might be risking my machine whilst killing processes) and follow the below approach:
First list all the python processes using:
$ ps -ef | grep python
Just to have a look at what root user processes were running beforehand and to cross-check later, if they are still running (after I'm done! :D)
then using pgrep as :
$ pgrep -u <username> python -d ' ' #this gets me all the python processes running for user username
# eg output:
11265 11457 11722 11723 11724 11725
And finally, I kill these processes by using the kill command after cross-checking with the output of ps -ef| ...
kill -9 PID1 PID2 PID3 ...
# example
kill -9 11265 11457 11722 11723 11724 11725
Also, we can cross check the root PIDs by using :
pgrep -u root python -d ' '
and verifying with the output from ps -ef| ...
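If you ever want to drive the same pgrep/kill steps from a script instead of typing them interactively, a rough sketch (assuming a Linux pgrep, and with the same caveat that anything matching "python" gets killed) is:
import os
import signal
import subprocess

# Collect the PIDs of all processes whose command line contains "python" ...
pids = subprocess.check_output(["pgrep", "-f", "python"]).split()
for pid in pids:
    # ... and send each one SIGKILL, the equivalent of kill -9.
    os.kill(int(pid), signal.SIGKILL)
Note that if you run this snippet with a Python interpreter it will match (and kill) itself as well, so the shell one-liners above are usually the more convenient tool here.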

Have bash script execute multiple programs as separate processes

As the title suggests, how do I write a bash script that will execute, for example, 3 different Python programs as separate processes? And then, am I able to gain access to each of these processes to see what is being logged to the terminal?
Edit: Thanks again. I forgot to mention that I'm aware of appending &, but I'm not sure how to access what is being output to the terminal for each process. For example, I could run all 3 of these programs separately in different tabs and be able to see what each one outputs.
You can run a job in the background like this:
command &
This allows you to start multiple jobs in a row without having to wait for the previous one to finish.
If you start multiple background jobs like this, they will all share the same stdout (and stderr), which means their output is likely to get interleaved. For example, take the following script:
#!/bin/bash
# countup.sh
for i in `seq 3`; do
echo $i
sleep 1
done
Start it twice in the background:
./countup.sh &
./countup.sh &
And what you see in your terminal will look something like this:
1
1
2
2
3
3
But could also look like this:
1
2
1
3
2
3
You probably don't want this, because it would be very hard to figure out which output belonged to which job. The solution? Redirect stdout (and optionally stderr) for each job to a separate file. For example
command > file &
will redirect only stdout and
command > file 2>&1 &
will redirect both stdout and stderr for command to file while running command in the background. This page has a good introduction to redirection in Bash. You can view the command's output "live" by tailing the file:
tail -f file
I would recommend running background jobs with nohup or screen as user2676075 mentioned to let your jobs keep running after you close your terminal session, e.g.
nohup command1 > file1 2>&1 &
nohup command2 > file2 2>&1 &
nohup command3 > file3 2>&1 &
Try something like:
command1 2>&1 | tee commandlogs/command1.log ;
command2 2>&1 | tee commandlogs/command2.log ;
command3 2>&1 | tee commandlogs/command3.log
...
Then you can tail the files as the commands run. Remember, you can tail them all by being in the directory and doing a "tail *.log"
Alternatively, you can setup a script to generate a screen for each command with:
screen -S CMD1 -d -m command1 ;
screen -S CMD2 -d -m command2 ;
screen -S CMD3 -d -m command3
...
Then reconnect to them later with screen --list and screen -r [screen name]
Enjoy
Another option is to use a terminal emulator to run the three processes. You could use xterm (or rxvt etc.) if you are using X.
xterm -e <program1> [arg] ... &
xterm -e <program2> [arg] ... &
xterm -e <program3> [arg] ... &
Depends on what you want. This approach lets you pop up the terminal windows, so you can see the output in real time. You can also combine it with redirection to save the output as well.
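Since the three programs in question are Python programs, here is a rough Python equivalent of the nohup/redirect approach above (a sketch; prog1.py, prog2.py and prog3.py are placeholder names):
import subprocess

# Start each program as a separate background process, with stdout and
# stderr redirected to its own log file (like "nohup cmd > file 2>&1 &").
procs = []
for name in ("prog1", "prog2", "prog3"):
    log = open(name + ".log", "w")
    procs.append(subprocess.Popen(["python", name + ".py"],
                                  stdout=log, stderr=subprocess.STDOUT))
You can then follow each one live with tail -f prog1.log and so on, exactly as described earlier.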

Changing Process Name using Shell for nagios monitoring with check_procs

I have a python script to start a process which I want to monitor using Nagios. When I run that script and perform ps -ef on my Ubuntu EC2 instance, it shows the process as python <filename>.py --arguments. For Nagios to monitor that process using check_procs, we need to supply the process name. Here the process name becomes 'python'.
/usr/lib/nagios/plugins/check_procs -C python
It returns the output that one python process is running. This is fine when I'm running one python process. But if I'm running multiple python scripts and want to monitor only a few of them, then I have to give that particular process name. If, in the above command, I give the Python script name, it throws an error. So I want to mask the whole python <filename>.py --arguments as some other name, so that when running check_procs I can give that new name.
If anyone has any idea, please let me know. I have checked other Stack Overflow questions which suggest changing the Python process name using setproctitle, but I want to do it using the shell.
Regards,
Sanket
You can use the check_procs command to match on the process arguments, which include the module name. The following command will let you know if the python module 'module.py' is running.
/usr/lib/nagios/plugins/check_procs -c 1:1 -a module.py -C python
The -c argument lets you set the critical range. 1:1 will trigger a critical status if more or fewer than 1 matching process is running.
The -a argument will filter based on processes that contain the args 'module.py' (change it to the name of the module you want to monitor)
The -C argument will make sure that the process is a python process
If you need help figuring out how to create the service definition, I had to figure that out too. Just let me know.
REFERENCE:
check_procs plugin manpage
http://nagiosplugins.org/man/check_procs
You can't change the process name from pure Python, although you can use a wrapper (for example, written in C) to do so.
However, what you should do instead is make your program a daemon and use a pidfile. Have a look at the Python daemon API and its implementation, python-daemon.
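As a sketch of that suggestion (assuming the python-daemon package is installed; the pidfile path and the body of main_loop are placeholders):
import time
import daemon
import daemon.pidfile

def main_loop():
    # Placeholder for the script's real work.
    while True:
        time.sleep(60)

# Run as a proper daemon; the pidfile records the daemon's PID so a
# Nagios check can look the process up by its pidfile instead of by name.
with daemon.DaemonContext(
        pidfile=daemon.pidfile.TimeoutPIDLockFile('/tmp/myscript.pid')):
    main_loop()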
check_procs already handles this situation.
check_procs can tell the difference between scripts launched as an argument to the interpreter vs. jobs run directly via a hashbang interpreter, even though both of these look the same in the ps output! The latter case will not be listed by check_procs -C python.
If you run your scripts explicitly via python, as python <filename.py>, then you can monitor them with check_procs -C python -a filename.py.
If you put #!/usr/bin/python in your scripts and run them as ./filename.py, then you can monitor with check_procs -C filename.py.
Example command line session showing this behavior:
#make test.py directly executable. See code below
$ chmod a+x test.py
#launch via python explicitly:
$ /usr/bin/python ./test.py &
[1] 27094
$ check_procs -C python && check_procs -C test.py && check_procs -a test.py
PROCS OK: 1 process with command name 'python'
PROCS OK: 0 processes with command name 'test.py'
PROCS OK: 1 process with args 'test.py'
#launch via python implicitly
$ ./test.py &
[2] 27134
$ check_procs -C python && check_procs -C test.py && check_procs -a test.py
PROCS OK: 1 process with command name 'python'
PROCS OK: 1 process with command name 'test.py'
PROCS OK: 2 processes with args 'test.py'
#PS 'COMMAND' output looks the same
$ ps 27094 27134
PID TTY STAT TIME COMMAND
27094 pts/6 S 0:00 /usr/bin/python ./test.py
27134 pts/6 S 0:00 /usr/bin/python ./test.py
#kill the explicit test
$ kill 27094
[1] - terminated /usr/bin/python ./test.py
$ check_procs -C python && check_procs -C test.py && check_procs -a test.py
PROCS OK: 0 processes with command name 'python'
PROCS OK: 1 process with command name 'test.py'
PROCS OK: 1 process with args 'test.py'
#kill the implicit test
$ kill 27134
[2] + terminated ./test.py
$ check_procs -C python && check_procs -C test.py && check_procs -a test.py
PROCS OK: 0 processes with command name 'python'
PROCS OK: 0 processes with command name 'test.py'
PROCS OK: 0 processes with args 'test.py'
test.py is a python script that sleeps for 2 minutes. It is chmod +x and has a hashbang #! line invoking /usr/bin/python.
#!/usr/bin/python
import time
time.sleep(120)
Create a pid file and use that file for the process lookup with nagios.
I'm not saying this is the best solution (it wouldn't scale well at all), but you can create a symbolic link to the python command and execute your script using this link. e.g.
ln -s `which python` ~/mypython
~/mypython myscript.py
Scripts launched using the link should show up as mypython in ps.
You can use subprocess.Popen to change the executable name, but you'd have to use a wrapper script (or some weird fork magic). The following code causes ps to list the executable as kwyjibo /tmp/test.py instead of /usr/bin/python /tmp/test.py:
import subprocess
p = subprocess.Popen(['kwyjibo', '/tmp/test.py'], executable='/usr/bin/python')
