What I need:
Connect to a different wifi network on Arch Linux by calling a Python script.
What I am doing:
Executing the following statements from Python:
wpa_passphrase "MySSID" "MyPass" > /etc/wpa_supplicant/profile.conf
wpa_supplicant -B -i wlan0 -c /etc/wpa_supplicant/profile.conf
dhcpcd wlan0
It works only on the first attempt. When it is executed a second time, it says dhcpcd is already running.
I don't know how to switch to another network.
I have also tried wpa_cli and, again, don't know how to switch to another network.
Please suggest a fix or some (uncomplicated) alternatives.
Your concrete problem is that you start wpa_supplicant and the DHCP client instead of restarting them. I have a script that reads:
#shutdown dhc
dhclient -r
#shutdown wpa_supplicant
killall wpa_supplicant
#down interface
ifdown --force wlan0
sleep 1
#your wpa startup here:
wpa_supplicant -t -fYOUR_LOG_FILE -cYOUR_wpa_supplicant.conf -B -iwlan0
sleep 1
#restart dhc
dhclient -nw
I guess you can do this a little more nicely by configuring your /etc/network/interfaces appropriately.
By the way, in principle it should not be necessary to restart the DHCP client at all. After a while it should realize that it needs to fetch a new IP, but for me this takes too long. ;)
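Since the original goal was to drive this from Python, here is a minimal sketch (assumptions: it runs as root, the interface is wlan0, and the profile path is the one from the question) that wraps that restart sequence with subprocess:
import subprocess

def switch_network(conf, iface="wlan0"):
    # Release the current DHCP lease and stop any running wpa_supplicant first.
    subprocess.call(["dhclient", "-r", iface])
    subprocess.call(["killall", "wpa_supplicant"])
    # Start wpa_supplicant against the chosen profile, then request a new lease.
    subprocess.check_call(["wpa_supplicant", "-B", "-i", iface, "-c", conf])
    subprocess.check_call(["dhclient", "-nw", iface])

switch_network("/etc/wpa_supplicant/profile.conf")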
Edit /etc/wpa_supplicant.conf
nano /etc/wpa_supplicant.conf
Complete the file so it looks like this (replacing wifi_name and wifi_key with their real values, of course):
network={
ssid="wifi_name1"
psk="wifi_key1"
}
and
network={
ssid="wifi_name2"
psk="wifi_key2"
}
Then save and exit.
The wifi network is now configured; we must now tell wpa_supplicant to connect to it using this configuration file:
wpa_supplicant -B -i wlan0 -c /etc/wpa_supplicant.conf
If your interface isn't named wlan0 then replace wlan0 by the real name of your interface.
We must now request an IP address:
dhclient wlan0
If everything went well you should now see several lines containing IP addresses, and the ping command should work.
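Since the question also mentions wpa_cli: once both networks are in the same wpa_supplicant.conf you can switch between them without restarting anything, provided the config has a ctrl_interface line (an assumption here, and it needs root or membership in the configured group). A rough Python sketch:
import subprocess

def select_network(network_id, iface="wlan0"):
    # Show the numbered network blocks wpa_supplicant knows about, then pick one.
    print(subprocess.check_output(["wpa_cli", "-i", iface, "list_networks"]).decode())
    subprocess.check_call(["wpa_cli", "-i", iface, "select_network", str(network_id)])

select_network(1)  # ids are 0-based, so 1 is the second network block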
When you generate a new config with wpa_passphrase, put it in a different file than the last one to make things easier. So, for example, your home wifi could be in /etc/wpa_supplicant/home.conf and your work wifi could be in /etc/wpa_supplicant/work.conf.
Then when you need to connect to your home wifi you just run
killall wpa_supplicant # With root privileges
wpa_supplicant -B -i wlan0 -c /etc/wpa_supplicant/home.conf
And when you need to connect to your work wifi you run
killall wpa_supplicant # With root privileges
wpa_supplicant -B -i wlan0 -c /etc/wpa_supplicant/work.conf
Rinse and repeat for any new networks you want to use. If you don't want to keep a network, like a Starbucks wifi on a road trip, just save it to a conf that you plan on overwriting or deleting, like /etc/wpa_supplicant/temp.conf.
AFAIK, you never have to rerun dhcpcd. I have dhcpcd as a startup process and whenever I switch wifis I never need to touch it.
Edit: You also don't need to run this as a Python script. You can do it in the shell. If you need a script that quickly changes the wifi, I would recommend a shell script like the following, for example:
#!/bin/sh
[ -z "$1" ] && exit 1
[ ! -f "/etc/wpa_supplicant/${1}.conf" ] && echo "$1 is not a wpa_supplicant config file" && exit 1
sudo killall wpa_supplicant
sudo wpa_supplicant -B -i wlan0 -c "/etc/wpa_supplicant/${1}.conf"
Run like
wifichange home
or
wifichange work
The [ -z "$1" ] check says to quit if you didn't pass an argument, e.g. running just
wifichange
And the [ ! -f ... ] check says to quit if the argument isn't the name of a real config file.
I have now tested the script.
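If you do end up driving this from Python after all, a rough equivalent of that shell script (same assumed layout of one conf file per network under /etc/wpa_supplicant/) could be:
import os
import subprocess
import sys

def wifichange(name, iface="wlan0"):
    conf = "/etc/wpa_supplicant/%s.conf" % name
    if not os.path.isfile(conf):
        sys.exit("%s is not a wpa_supplicant config file" % name)
    subprocess.call(["killall", "wpa_supplicant"])  # with root privileges
    subprocess.check_call(["wpa_supplicant", "-B", "-i", iface, "-c", conf])

if __name__ == "__main__":
    if len(sys.argv) < 2:
        sys.exit(1)
    wifichange(sys.argv[1])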
This is my bash script used as the Docker CMD:
#!/bin/bash
set -eo pipefail
echo "Setting trap"
echo $$
echo $BASHPID
trap 'cleanup' TERM
trap 'cleanup' KILL
cleanup() {
echo "Cleaning up..."
kill -TERM `jobs -p`
}
# To start the essential services
service ntp start
service awslogs start
cd /app
python -m job_manager &
wait
The Dockerfile is not very interesting:
FROM ubuntu:16.04
RUN apt-get update --fix-missing && apt-get install -y \
git \
python \
python-pip \
ntp \
curl
ENV APP_HOME /app
RUN mkdir -p ${APP_HOME}
COPY src/ ${APP_HOME}/
# job-cmd.sh is kept here
COPY docker/helper-files/* /
CMD /job-cmd.sh
The idea is to trap the TERM signal inside job-cmd.sh and then pass it on to the Python task.
I have tried a number of times and it did not work. After I added these calls:
echo $$
echo $BASHPID
I realised the PID of the CMD process is actually 7 instead of 1, as I would expect.
My questions:
1) Why is the bash process assigned PID 7?
2) How can I fix my job script / Dockerfile?
I think this is happening because you are using the shell form of the CMD instruction. From https://docs.docker.com/engine/reference/builder/#cmd:
If you want to run your command without a shell then you must express the command as a JSON array and give the full path to the executable. This array form is the preferred format of CMD.
So, replace your CMD instruction in Dockerfile with:
CMD ["/job-cmd.sh"]
Then your Bash process will be assigned PID 1. Your TERM handler will work, but you can't trap the KILL signal. From man trap:
Trapping SIGKILL or SIGSTOP is syntactically accepted by some historical implementations, but it has no effect. Portable POSIX applications cannot attempt to trap these signals.
FYI, I explained more about the PID 1 problem here: https://serverfault.com/questions/869543/bash-script-entrypoint-pid-1-kills-tail-sub-process-only-if-a-fake-trap-whi/870872#870872
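For completeness, the Python side can also install its own handler once the signal actually reaches it; a minimal sketch (the loop is just a stand-in for whatever job_manager really does):
import signal
import sys
import time

def handle_term(signum, frame):
    # Clean up whatever the job manager holds open, then exit.
    print("received SIGTERM, shutting down")
    sys.exit(0)

signal.signal(signal.SIGTERM, handle_term)

while True:
    time.sleep(1)  # stand-in for the real job loop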
You could use the trap command in bash to do this.
#!/bin/bash
#
function gracefulShutdown {
echo "Shutting down!"
# do something..
}
trap gracefulShutdown SIGTERM TERM INT
./subprocess.sh &
tail --pid=${!} -f /dev/null &
wait "${!}"
The tail command just waits for the subprocess to complete, while the wait command waits for the tail to complete. Now the main process is the one doing the waiting, so any Docker signals directly reach the trap we set above.
Example is available at: https://github.com/iamdvr/docker-trap-subprocess
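The same idea works if the entrypoint is itself a Python process that spawns a child; a sketch along those lines (./subprocess.sh is just the placeholder name reused from the snippet above):
import signal
import subprocess

child = subprocess.Popen(["./subprocess.sh"])  # placeholder child process

def forward_term(signum, frame):
    print("Shutting down!")
    child.terminate()  # forward SIGTERM to the child

signal.signal(signal.SIGTERM, forward_term)
signal.signal(signal.SIGINT, forward_term)

child.wait()  # keep PID 1 alive and waiting on the child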
I need to get the IP address of a machine and check whether an NTP server is running on it. In addition, if it is not running, just start it.
I checked several posts and none of them worked for me.
You can use fabric for this. It's great! Here's a quick and dirty way:
from fabric.api import run
def restart_ntp_if_not_running():
run('if [[ $(netstat -p tcp -n | grep [your ip].123 | grep ESTABLISHED) ]]; then true; else [your command to restart here]; fi;')
Then execute it like:
fab -H [host name] restart_ntp_if_not_running
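If the netstat pattern proves fragile, a variation that checks for the ntpd process instead (assuming a Debian/Ubuntu-style service command on the target host) might look like this:
from fabric.api import run, settings

def ensure_ntp_running():
    # pgrep exits non-zero when no ntpd process exists; warn_only keeps
    # Fabric from aborting on that non-zero exit code.
    with settings(warn_only=True):
        result = run('pgrep ntpd')
    if result.failed:
        run('service ntp start')
Run it the same way: fab -H [host name] ensure_ntp_running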
For my dissertation at University, I'm working on a coding leaderboard system where users can compile / run untrusted code through temporary docker containers. The system seems to be working well so far, but one problem I'm facing is that when code for an infinite loop is submitted, E.g:
while True:
print "infinite loop"
the system goes haywire. The problem is that when I'm creating a new docker container, the Python interpreter prevents docker from killing the child container as data is still being printed to STDOUT (forever). This leads to the huge vulnerability of docker eating up all available system resources until the machine using the system completely freezes.
So my question is, is there a better way of setting a timeout on a docker container than my current method that will actually kill the docker container and make my system secure (code originally taken from here)?
#!/bin/bash
set -e
to=$1
shift
cont=$(docker run -d "$@")
code=$(timeout "$to" docker wait "$cont" || true)
docker kill $cont &> /dev/null
echo -n 'status: '
if [ -z "$code" ]; then
echo timeout
else
echo exited: $code
fi
echo output:
# pipe to sed simply for pretty nice indentation
docker logs $cont | sed 's/^/\t/'
docker rm $cont &> /dev/null
Edit: The default timeout in my application (passed to the $to variable) is "10s" / 10 seconds.
I've tried looking into adding a timer and sys.exit() to the python source directly, but this isn't really a viable option as it seems rather insecure because the user could submit code to prevent it from executing, meaning the problem would still persist. Oh the joys of being stuck on a dissertation... :(
You could set up your container with a ulimit on the max CPU time, which will kill the looping process. A malicious user can get around this, though, if they're root inside the container.
There's another S.O. question, "Setting absolute limits on CPU for Docker containers" that describes how to limit the CPU consumption of containers. This would allow you to reduce the effect of malicious users.
I agree with Abdullah, though, that you ought to be able to docker kill the runaway from your supervisor.
If you want to run the containers without providing any protection inside them, you can use runtime constraints on resources.
In your case, -m 100M --cpu-quota 50000 might be reasonable.
That way it won't eat up the parent's system resources until you get around to killing it.
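If the grading system already drives docker from Python, a sketch (Python 3, assuming the docker CLI is on PATH) that combines those limits with a hard timeout could look like this:
import subprocess

def run_untrusted(image, timeout=10):
    # Start the container detached with memory and CPU limits; -d prints the container id.
    cid = subprocess.check_output(
        ["docker", "run", "-d", "-m", "100M", "--cpu-quota", "50000", image]
    ).decode().strip()
    try:
        # docker wait blocks until the container exits; bound it with a timeout.
        subprocess.check_output(["docker", "wait", cid], timeout=timeout)
    except subprocess.TimeoutExpired:
        subprocess.call(["docker", "kill", cid])
    logs = subprocess.check_output(["docker", "logs", cid])
    subprocess.call(["docker", "rm", "-f", cid])
    return logs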
I arrived at a solution for this problem.
First you must kill the docker container when the time limit is reached:
#!/bin/bash
set -e
did=$(docker run -it -d -v "/my_real_path/$1":/usercode virtual_machine ./usercode/compilerun.sh 2>> $1/error.txt)
sleep 10 && docker kill $did &> /dev/null && echo -n "timeout" >> $1/error.txt &
docker wait "$did" &> /dev/null
docker rm -f $did &> /dev/null
The container runs in detached mode (-d option), so it runs in the background.
Then sleep is also run in the background.
Then we wait for the container to stop. If it doesn't stop within 10 seconds (the sleep timer), the container will be killed.
As you can see, the docker run process calls a script named compilerun.sh:
#!/bin/bash
gcc -o /usercode/file /usercode/file.c 2> /usercode/error.txt && ./usercode/file < /usercode/input.txt | head -c 1M > /usercode/output.txt
maxsize=1048576
actualsize=$(wc -c <"/usercode/output.txt")
if [ $actualsize -ge $maxsize ]; then
echo -e "1MB file size limit exceeded\n\n$(cat /usercode/output.txt)" > /usercode/output.txt
fi
It starts by compiling and running a C program (that's my use case; I am sure the same can be done for the Python interpreter).
This part:
command | head -c 1M > /usercode/output.txt
is responsible for limiting the maximum output size. It allows the output to be at most 1 MB.
After that, I just check if the file is 1 MB. If so, I write a message at the beginning of the output file.
The --stop-timeout option is not killing the container if the timeout is exceeded.
Instead, use --ulimit cpu=<timeout in seconds> to kill the container if the timeout is exceeded.
This is based on the CPU time for the process inside the container.
I guess you can use signals in Python, as on Unix, to set a timeout. You can use an alarm for a specific time, say 50 seconds, and catch it. The following link might help you:
signals in python
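A minimal sketch of that alarm approach (the busy loop is just a stand-in for the long-running work; as the question notes, this only helps if the untrusted code cannot tamper with it):
import signal

class Timeout(Exception):
    pass

def on_alarm(signum, frame):
    raise Timeout()

signal.signal(signal.SIGALRM, on_alarm)
signal.alarm(50)        # deliver SIGALRM after 50 seconds
try:
    while True:         # stand-in for the long-running work
        pass
except Timeout:
    print("timed out")
finally:
    signal.alarm(0)     # cancel any pending alarm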
Use the --stop-timeout option when running your docker container. This will send SIGKILL once the timeout has occurred.
I'm using Python Fabric to deploy binaries to an EC2 server and am attempting to run them in the background (a subshell).
All the fabric commands for performing local actions, putting files, and executing remote commands w/o elevated privileges work fine. The issue I run into is when I attempt to run the binary.
with cd("deploy"):
run('mkdir log')
sudo('iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8080', user="root")
result = sudo('./dbserver &', user="root") # <---- This line
print result
if result.failed:
print "Running dbserver failed"
else:
print "DBServer now running server" # this gets printed despite the binary not running
After I log in to the server and run ps aux | grep dbserver, nothing shows up. How can I get Fabric to execute the binary? The same command, ./dbserver &, executed from the shell does exactly what I want it to. Thanks.
This is likely related to TTY issues and/or the fact that you're attempting to background a process.
Both of these are discussed in the FAQ under these two headings:
http://www.fabfile.org/faq.html#init-scripts-don-t-work
http://www.fabfile.org/faq.html#why-can-t-i-run-programs-in-the-background-with-it-makes-fabric-hang
Try making the sudo call like this:
sudo('nohup ./dbserver &', user="root", pty=False)
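Per the Fabric FAQ linked above, it can also help to redirect the process's streams so the remote shell isn't left holding them open; a sketch along those lines (the log path is only illustrative):
from fabric.api import sudo

def start_dbserver():
    # No pseudo-tty, and stdin/stdout/stderr redirected away from the SSH
    # session, so Fabric doesn't hang waiting on the backgrounded process.
    sudo('nohup ./dbserver > dbserver.log 2>&1 < /dev/null &',
         user='root', pty=False)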
I have a Python script I'd like to start on startup on an Ubuntu EC2 instance, but I'm running into trouble.
The script runs in a loop and takes care of exiting when it's ready, so I shouldn't need to start or stop it after it's running.
I've read and tried a lot of approaches with varying degrees of success, and honestly I'm confused about what the best approach is. I've tried putting a shell script that starts the Python script in /etc/init.d, making it executable, and doing update-rc.d to try to get it to run, but it has failed at every stage.
Here's the contents of the script I've tried:
#!/bin/bash
cd ~/Dropbox/Render\ Farm\ 1/appleseed/bin
while :
do
python ./watchfolder18.py -t ./appleseed.cli -u ec2 ../../data/
done
I then did:
sudo chmod +x /etc/init.d/script_name
sudo update-rc.d script_name defaults
This doesn't seem to run on startup and I can't see why; if I run the command manually it works as expected.
I also tried adding a line to rc.local to start the script, but that doesn't seem to work either.
Can anybody share what they have found to be the simplest way to run a Python script in the background, with arguments, on startup of an EC2 instance?
UPDATE: ----------------------
I've since moved this code to a file called /home/ubuntu/bin/watch_folder_start
#!/bin/bash
cd /home/ubuntu/Dropbox/Render\ Farm\ 1/appleseed/bin
while :
do
python ./watchfolder18.py -t ./appleseed.cli -u ec2 ../../data/
done
and changed my rc.local file to this:
nohup /home/ubuntu/bin/watch_folder_start &
exit 0
This works when I manually run rc.local, but it won't fire on startup. I did chmod +x rc.local, but that didn't change anything.
Your /etc/init.d/script_name is missing the plumbing that update-rc.d and so on use, and won't properly handle stop, start, and other init-variety commands, so...
For initial experimentation, take advantage of the /etc/init.d/rc.local script (which should be linked to by default from /etc/rc2.d/S99rc.local). This gets you out of having to worry about the init.d conventions; just add things to /etc/rc.local before the exit 0 at its end.
Additionally, that ~ isn't going to be defined, so you'll need to use a full pathname; furthermore, the script will run as root. We'll address how to avoid this, if desired, in a bit. In any of these, you'll need to replace "whoeveryouare" with something more useful. Also be warned that you may need to prefix the python command with a su command and some arguments to get the process to run with the user ID you need.
You might try (in /etc/rc.local):
( if cd '/home/whoeveryouare/Dropbox/Render Farm 1/appleseed/bin' ; then
while : ; do
# This loop should respawn watchfolder18.py if it dies, but
# ideally one should fix watchfolder18.py and remove this loop.
python ./watchfolder18.py -t ./appleseed.cli -u ec2 ../../data/
done
else
echo warning: could not find watchfolder 1>&2
fi
) &
You could also put all that in a script and just call it from /etc/rc.local.
The first pass is roughly what you had, but if we assume that watchfolder18.py will arrange to avoid dying we can cut it down to:
( cd '/home/whoeveryouare/Dropbox/Render Farm 1/appleseed/bin' \
&& exec python ./watchfolder18.py -t ./appleseed.cli -u ec2 ../../data/ ) &
These aren't all that pretty, but they should let you get your daemon sorted out so you can debug it and so on, then come back to making a proper /etc/init.d or /etc/init script later. Something like this might work in /etc/init/watchfolder.conf, but I'm not yet facile enough to claim it is anything other than a rough stab at it:
# watchfolder - spawner for watchfolder18.py
description "watchfolder program"
start on runlevel [2345]
stop on runlevel [!2345]
script
if cd '/home/whoeveryouare/Dropbox/Render Farm 1/appleseed/bin' ; then
exec python ./watchfolder18.py -t ./appleseed.cli -u ec2 ../../data/
fi
end script
I found that the best solution in the end was to use upstart and create a file in /etc/init called myfile.conf that contained the following:
description "watch folder service"
author "Jonathan Topf"
start on startup
stop on shutdown
# Automatically Respawn:
respawn
respawn limit 99 5
script
HOST=`hostname`
chdir /home/ubuntu/Dropbox/Render\ Farm\ 1/appleseed/bin
exec /usr/bin/python ./watchfolder.py -t ./appleseed.cli -u $HOST ../../data/ >> /home/ubuntu/bin/ec2_server.log 2>&1
echo "watch_folder started"
end script
More info on using the upstart system here
http://upstart.ubuntu.com/
https://help.ubuntu.com/community/UbuntuBootupHowto
http://blog.joshsoftware.com/2012/02/14/upstart-scripts-in-ubuntu/