Specifically, I'm trying to use Fabric to run some tests which rely on a running MongoDB instance.
I have the following code:
db_cmd = 'mongod'
test_cmd = 'istanbul cover node_modules/mocha/bin/_mocha -- -R spec'
pid = os.spawnl(os.P_NOWAIT, db_cmd)
with shell_env(NODE_ENV='test'):
    local(test_cmd)
I plan to use the PID to kill the process after test_cmd has finished, though I haven't gotten that far yet.
Running test_cmd results in an error suggesting that db_cmd has exited and that MongoDB is no longer available:
Uncaught Error: failed to connect to [localhost:27017]
However, running mongod manually before running Fabric lets test_cmd run fine and interact with MongoDB.
I suspect I'm just not understanding os.spawnl. Note that this code needs to run on Windows, Linux, and OS X, so I think I'm somewhat restricted in which os.spawn* methods I can use. I'm also interested to know whether there's a Fabric method to achieve this.
I successfully use:
os.killpg(process.pid, signal.SIGTERM)
You probably need to use the subprocess module for that.
To run mongod in the background, use:
process = subprocess.Popen(
    command, stdout=subprocess.PIPE, stderr=subprocess.PIPE,
    shell=True, preexec_fn=os.setsid
)
To kill it after the tests, use the os.killpg command I wrote first.
Here command is a string containing your mongod start command, for example:
mongod --host localhost --port 27018
It works fine for me. If you have problems with the code, please let me know.
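Putting the two together, here is a minimal sketch (assuming a POSIX system, since preexec_fn=os.setsid and os.killpg don't exist on Windows, and Fabric 1.x's local and shell_env as in the question):
import os
import signal
import subprocess

from fabric.api import local, shell_env

# Start mongod in its own process group so the whole group can be
# signalled at once; with os.setsid the child's pid is also its pgid.
process = subprocess.Popen(
    'mongod --host localhost --port 27018',
    stdout=subprocess.PIPE, stderr=subprocess.PIPE,
    shell=True, preexec_fn=os.setsid
)
try:
    with shell_env(NODE_ENV='test'):
        local('istanbul cover node_modules/mocha/bin/_mocha -- -R spec')
finally:
    # Tear mongod down even if the tests fail.
    os.killpg(process.pid, signal.SIGTERM)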
You can also do this in straight bash with jobs and traps:
#!/bin/bash
trap "kill %1" SIGINT SIGTERM EXIT
mongod --host localhost --port 27018 &
istanbul cover node_modules/mocha/bin/_mocha -- -R spec
exit 0
What this is doing:
Set a trap on the signals SIGINT, SIGTERM, and EXIT to kill the first background job
Start a mongod instance and throw it into the background (the first job)
Run the tests
Trigger the EXIT signal with exit 0
So this will set up and tear down your mongod instance on completion, even on a TERM signal or exception.
Related
For my dissertation at university, I'm working on a coding leaderboard system where users can compile and run untrusted code in temporary Docker containers. The system seems to be working well so far, but one problem I'm facing is that when code containing an infinite loop is submitted, e.g.:
while True:
    print "infinite loop"
the system goes haywire. The problem is that when I create a new Docker container, the Python interpreter prevents Docker from killing the child container because data is still being printed to STDOUT (forever). This leads to the huge vulnerability of Docker eating up all available system resources until the machine running the system completely freezes.
So my question is: is there a better way of setting a timeout on a Docker container than my current method, one that will actually kill the container and make my system secure? (Code originally taken from here.)
#!/bin/bash
set -e
to=$1
shift
cont=$(docker run -d "$@")
code=$(timeout "$to" docker wait "$cont" || true)
docker kill $cont &> /dev/null
echo -n 'status: '
if [ -z "$code" ]; then
    echo timeout
else
    echo "exited: $code"
fi
echo output:
# pipe to sed for nicer indentation
docker logs $cont | sed 's/^/\t/'
docker rm $cont &> /dev/null
Edit: The default timeout in my application (passed to the $to variable) is "10s" / 10 seconds.
I've tried looking into adding a timer and sys.exit() to the python source directly, but this isn't really a viable option as it seems rather insecure because the user could submit code to prevent it from executing, meaning the problem would still persist. Oh the joys of being stuck on a dissertation... :(
You could set up your container with a ulimit on the max CPU time, which will kill the looping process (see the sketch after this answer). A malicious user can get around this, though, if they're root inside the container.
There's another S.O. question, "Setting absolute limits on CPU for Docker containers", that describes how to limit the CPU consumption of containers. This would allow you to reduce the effect of malicious users.
I agree with Abdullah, though, that you ought to be able to docker kill the runaway from your supervisor.
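For example, a sketch of the CPU-time cap, started from Python (sandbox-image is a placeholder for your own image name, and 10 seconds is an arbitrary limit):
import subprocess

# Sketch: the kernel kills the contained process once it has used
# 10 seconds of CPU time (RLIMIT_CPU), regardless of wall-clock time.
subprocess.call([
    'docker', 'run', '--rm',
    '--ulimit', 'cpu=10',
    'sandbox-image',
])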
If you want to run the containers without providing any protection inside them, you can use runtime constraints on resources.
In your case, -m 100M --cpu-quota 50000 might be reasonable.
That way it won't eat up the parent's system resources until you get around to killing it.
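A sketch of the same idea from Python (again, sandbox-image is a placeholder):
import subprocess

# Sketch: throttle rather than kill. 100 MB of memory and at most half
# of one CPU (quota 50000 out of the default 100000us period).
container_id = subprocess.check_output([
    'docker', 'run', '-d',
    '-m', '100M',
    '--cpu-quota', '50000',
    'sandbox-image',
]).strip()
# container_id can be handed to `docker kill` later.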
I arrived at a solution for this problem.
First, you must kill the Docker container when the time limit is reached:
#!/bin/bash
set -e
did=$(docker run -it -d -v "/my_real_path/$1":/usercode virtual_machine ./usercode/compilerun.sh 2>> $1/error.txt)
sleep 10 && docker kill $did &> /dev/null && echo -n "timeout" >> $1/error.txt &
docker wait "$did" &> /dev/null
docker rm -f "$did" &> /dev/null
The container runs in detached mode (the -d option), so it runs in the background.
The sleep && docker kill pipeline also runs in the background.
Then we wait for the container to stop. If it doesn't stop within 10 seconds (the sleep timer), the container is killed.
As you can see, the docker run process calls a script named compilerun.sh:
#!/bin/bash
gcc -o /usercode/file /usercode/file.c 2> /usercode/error.txt && ./usercode/file < /usercode/input.txt | head -c 1M > /usercode/output.txt
maxsize=1048576
actualsize=$(wc -c <"/usercode/output.txt")
if [ $actualsize -ge $maxsize ]; then
    echo -e "1MB file size limit exceeded\n\n$(cat /usercode/output.txt)" > /usercode/output.txt
fi
It starts by compiling and running a C program (that's my use case; I'm sure the same can be done for a Python interpreter).
This part:
command | head -c 1M > /usercode/output.txt
is responsible for limiting the output size: it allows at most 1 MB of output.
After that, I just check whether the file has reached 1 MB. If so, I write a message at the beginning of the output file.
The --stop-timeout option does not kill the container if the timeout is exceeded.
Instead, use --ulimit cpu=<timeout in seconds> to kill the container's process when the limit is exceeded.
This is based on the CPU time used by the process inside the container.
I guess you can use Unix-style signals in Python to set a timeout: set an alarm for a specific time, say 50 seconds, and catch it. The following link might help you.
signals in python
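A minimal sketch of the alarm approach (run_untrusted_code is a placeholder for however you invoke the submission):
import signal

class Timeout(Exception):
    pass

def handler(signum, frame):
    raise Timeout()

signal.signal(signal.SIGALRM, handler)
signal.alarm(50)  # deliver SIGALRM in 50 seconds
try:
    run_untrusted_code()  # placeholder for the real call
except Timeout:
    print 'time limit exceeded'
finally:
    signal.alarm(0)  # cancel the alarm if we finished early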
Use the --stop-timeout option when running your Docker container; this will send SIGKILL once the timeout has occurred.
I use a cloud server to test my small Django project. I type manage.py runserver and then log out of my cloud server, and I can visit my site normally. But when I log back in to my cloud server, I don't know how to stop the development server; I had to kill the process to stop it. Is there any way to stop the development server?
The answer is findable via Google and has been answered in other forums; an example solution is available on the Unix & Linux Stack Exchange site.
To be explicit, you could do:
ps auxw | grep runserver
This will return the process and its respective PID, such as:
de 7956 1.8 0.6 540204 55212 ? Sl 13:27 0:09 /home/de/Development/sampleproject/bin/python ./manage.py runserver
In this particular case, the PID is 7956. Now just run this to stop it:
kill 7956
And to be clear / address some of the comments, you have to do it this way because you're running the development server in the background (the & in your command). That's why there is no "built-in" Django stop option...
One-liner:
pkill -f runserver
Try this:
lsof -t -i tcp:8000 | xargs kill -9
(lsof -t prints only the PIDs of processes listening on TCP port 8000, which xargs passes to kill -9.)
Well, it seems like an oversight that Django doesn't provide a command to stop the development server. I thought it had one before.
Ctrl+C should work. If it doesn't, Ctrl+\ will force-kill the process.
As far as I know, Ctrl+C or killing the process are the only ways to do that on a remote machine.
If you use a Gunicorn server or something similar, you will be able to do that using Supervisor.
We can use the following command:
netstat -ntlp
This lists the running processes with their PIDs; find your Python server's PID, then kill the process:
kill -9 PID
From Task Manager you can end the Python tasks that are running.
Then run python manage.py runserver from your project directory and it will work.
This worked for me on Windows.
Use the command below to list all connections and listening ports (-a) along with their PIDs (-o).
netstat -a -o
Find the PID of the process, then use this to kill it:
taskkill /PID PUT_THE_PID_HERE /F
Programmatically using a .bat script in Command Prompt in Windows:
@ECHO OFF
SET /A port=8000
FOR /F "tokens=5" %%T IN ('netstat -ano ^| findstr :%port%') DO (
    SET /A processid=%%T
    TASKKILL /PID %%T /F
)
which gives:
SUCCESS: The process with PID 5104 has been terminated.
You can quit the server by hitting CTRL+BREAK.
I'm using Python Fabric to deploy binaries to an EC2 server and am attempting to run them in the background (a subshell).
All the Fabric commands for performing local actions, putting files, and executing remote commands without elevated privileges work fine. The issue I run into is when I attempt to run the binary.
with cd("deploy"):
    run('mkdir log')
    sudo('iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8080', user="root")
    result = sudo('./dbserver &', user="root")  # <---- This line
    print result
    if result.failed:
        print "Running dbserver failed"
    else:
        print "DBServer now running server"  # this gets printed despite the binary not running
After I log in to the server and run ps aux | grep dbserver, nothing shows up. How can I get Fabric to execute the binary? The same command, ./dbserver &, executed from the shell does exactly what I want. Thanks.
This is likely related to TTY issues, and/or to the fact that you're attempting to background a process.
Both of these are discussed in the FAQ under these two headings:
http://www.fabfile.org/faq.html#init-scripts-don-t-work
http://www.fabfile.org/faq.html#why-can-t-i-run-programs-in-the-background-with-it-makes-fabric-hang
Try making the sudo call like this:
sudo('nohup ./dbserver &', user="root", pty=False)
With pty=False, Fabric doesn't allocate a pseudo-terminal that dies with the session, and nohup keeps the process alive after the connection closes.
I am starting my script locally via:
sudo python run.py remote
This script also happens to open a subprocess (if that matters):
webcam = subprocess.Popen('avconv -f video4linux2 -s 320x240 -r 20 -i /dev/video0 -an -metadata title="OfficeBot" -f flv rtmp://6f7528a4.fme.bambuser.com/b-fme/xxx', shell = True)
I want to know how to terminate this script when I SSH in.
I understand I can do:
sudo pkill -f "python run.py remote"
or use:
ps -f -C python
to find the process ID and kill it that way.
However, none of these gracefully kills the process; I want to be able to trigger the equivalent of Ctrl/Cmd+C to register an exit command (I do lots of things on shutdown that aren't triggered when the process is simply killed).
Thank you!
You should use "signals" for it:
http://docs.python.org/2/library/signal.html
Example:
import signal, os

def handler(signum, frame):
    print 'Signal handler called with signal', signum

signal.signal(signal.SIGINT, handler)
# do your stuff
Then in a terminal:
kill -INT $PID
or Ctrl+C if your script is running in the current shell.
http://en.wikipedia.org/wiki/Unix_signal
Also, this might be useful:
How do you create a daemon in Python?
You can use signals for communicating with your process. If you want to emulate Ctrl+C, the signal is SIGINT (which you can raise with kill -INT and the process ID). You can also install a handler for SIGTERM, which would make your program shut down cleanly under a broader range of circumstances.
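A sketch of that (cleanup is a placeholder for whatever your shutdown work is):
import signal
import sys

def shutdown(signum, frame):
    cleanup()  # placeholder for your own teardown
    sys.exit(0)

# Run the same handler for Ctrl+C (SIGINT) and a plain `kill` (SIGTERM).
signal.signal(signal.SIGINT, shutdown)
signal.signal(signal.SIGTERM, shutdown)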
I want to kill a subprocess if its execution time is too long.
I know I have to use os.kill or os.killpg.
However, problems come up if I am not a root user. For example, in my designed GUI, I want to call a subprocess, and use os.kill or os.killpg to kill it. But my GUI is owned by apache, so when it comes to the os.kill call, I get this error:
type: exceptions.OSError, value: [Errno 1] Operation not permitted
Besides, my Python version is 2.4.3, so terminate() etc. can't be used.
Could anyone give me some ideas?
Thanks a lot!
P.S.
Related part of my code:
import datetime
import os
import signal
import subprocess
import time

timeout = 4
subp = subprocess.Popen('sudo %s' % commandtosend, shell=True, preexec_fn=os.setsid,
                        stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
start = datetime.datetime.now()
while subp.poll() is None:
    time.sleep(0.1)
    now = datetime.datetime.now()
    if (now - start).seconds > timeout:
        os.kill(subp.pid, signal.SIGKILL)
        #os.killpg(subp.pid, signal.SIGKILL)
        break
Remove sudo from the subprocess command if possible, and you should do so, because you shouldn't run a subprocess as a sudo user from your GUI; it's definitely a security hole:
subprocess.Popen(commandtosend, shell=True, preexec_fn=os.setsid, ...)
(note there is no sudo in front of commandtosend)
This way your subprocess will be launched as the www-data user (the Apache user), and you can kill it with os.kill(subp.pid, signal.SIGKILL).
If it's not possible to remove the sudo from the subprocess (which is bad), you will have to execute the kill like this:
os.system("sudo kill %s" % (subp.pid, ))
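As a sketch, a subprocess-based equivalent avoids formatting the PID into a shell string:
import subprocess

# Sketch: same effect as the os.system call above, without a shell.
subprocess.call(['sudo', 'kill', str(subp.pid)])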
Hope this can help :)
Your subprocess is running with superuser privileges (because you're starting it with sudo).
To kill it, you need to be superuser.
One option would be to not use os.kill but run 'sudo kill 5858' where 5858 would be the PID of the process spawned by subprocess.Popen.
It's also worth noting that if your program allows the user to control commandtosend, you will be giving the user superuser rights to the entire machine.