I made a few changes in my botMain.py file. CodeDeploy succeeds, but the changes don't take effect in the app. So I edited my RunMyBot.sh file, but there's still no change.
appspec.yml
version: 0.0
os: linux
files:
  - source: /
    destination: /home/mybot
hooks:
  AfterInstall:
    - location: scripts/install_dependencies.sh
      timeout: 300
      runas: root
  ApplicationStart:
    - location: scripts/RunMyBot.sh
      timeout: 300
      runas: root
RunMyBot.sh (new)
#!/bin/bash
sudo /usr/bin/pm2 restart myBot
nohup python3 botMain.py > /dev/null 2> /dev/null < /dev/null &
RunMyBot.sh (old)
serverfile="/lib/systemd/system/mypythonservice.service"
echo "[Unit]" > $serverfile
echo "Description=My Python Service" > $serverfile
echo "After=multi-user.target" >> $serverfile
echo "[Service]" >> $serverfile
echo "Type=idle" >> $serverfile
echo "ExecStart=/usr/bin/python /home/mybot/botMain.py" >> $serverfile
echo "Restart=on-failure" >> $serverfile
echo "[Install]" >> $serverfile
echo "WantedBy=multi-user.target" >> $serverfile
cat $serverfile
sudo chmod 644 /lib/systemd/system/mypythonservice.service
sudo systemctl daemon-reload
sudo systemctl enable mypythonservice.service
The same service-file script is also in my instance user data, so I removed it from RunMyBot.sh.
Before you deploy a new version of your app, you have to stop the existing nohup process. You can do this by adding an ApplicationStop section to your appspec.yml.
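A minimal sketch of what that could look like (the script name scripts/stop_bot.sh is an assumption, not something from your repo):

hooks:
  ApplicationStop:
    - location: scripts/stop_bot.sh
      timeout: 60
      runas: root

And scripts/stop_bot.sh could kill the backgrounded process by its command line:

#!/bin/bash
# pkill -f matches against the full command line; the || true keeps the
# hook from failing the deployment when no bot process is running.
pkill -f "python3 botMain.py" || true

Note that CodeDeploy runs ApplicationStop from the previously deployed revision, so this hook only takes effect starting with the deployment after the one that introduces it.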
I think the old setup was better, albeit seemingly more difficult to set up at first. With the old setup you would just restart your daemon.
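For example, assuming the mypythonservice unit from your old script is still installed, the whole ApplicationStart script could shrink to a restart:

#!/bin/bash
# Restart the systemd service so it picks up the freshly deployed botMain.py.
sudo systemctl restart mypythonservice.service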
You can use the BeforeInstall hook to stop the running services, then deploy your new code to the VM. After that, you can start your services.
AppSpec example : https://docs.aws.amazon.com/codedeploy/latest/userguide/reference-appspec-file.html
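A sketch of that layout, with hypothetical stop_services.sh and start_services.sh scripts in the bundle:

hooks:
  BeforeInstall:
    - location: scripts/stop_services.sh
      timeout: 300
      runas: root
  ApplicationStart:
    - location: scripts/start_services.sh
      timeout: 300
      runas: root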
Related
I had ApplicationStop in my appspec.yml file in the previous deployment but removed it. Now CodeDeploy is trying to find the script that was part of my previous deployment.
I found other answers, but none of them work:
sudo service codedeploy-agent stop
When I type this in CloudShell, I get the error: sudo: service: command not found
aws deploy create-deployment --application-name APPLICATION --deployment-group-name GROUP --ignore-application-stop-failures --s3-location bundleType=tar,bucket=BUCKET,key=KEY --description "Ignore ApplicationStop failures due to broken script"
After typing this and replacing APPLICATION, GROUP, BUCKET, and KEY, the deployment starts, but it still fails with an error.
My files:
appspec.yml:
version: 0.0
os: linux
files:
  - source: /
    destination: /home/discordbot
hooks:
  AfterInstall:
    - location: scripts/RunMyBot.sh
      timeout: 300
      runas: root
RunMyBot.sh
#!/bin/bash
easy_install pip
pip install -r /home/discordbot/requirements.txt
file="/lib/systemd/system/mypythonservice.service"
echo "[Unit]" > $file
echo "Description=My Python Service" > $file
echo "After=multi-user.target" >> $file
echo "[Service]" >> $file
echo "Type=idle" >> $file
echo "ExecStart=/usr/bin/python /home/discordbot/botMain.py" >> $file
echo "Restart=on-failure" >> $file
echo "[Install]" >> $file
echo "WantedBy=multi-user.target" >> $file
cat $file
sudo chmod 644 /lib/systemd/system/mypythonservice.service
sudo systemctl daemon-reload
sudo systemctl enable mypythonservice.service
I just can't get my code deployed successfully; 27 deployments have failed.
My Python code is simple and just needs to run all the time: it accepts commands from users and returns output. The code resides in GitHub.
I have a Django application and I use gunicorn to run it. My script to start gunicorn looks like this:
django_path=/path/to/your/manage.py
settingsfile=my_name
workers=2
cd $django_path
exec gunicorn --env DJANGO_SETTINGS_MODULE=app.$settingsfile app.wsgi --workers=$workers &
This works when I execute it. However, when I look at my database in my project folder (cd /path/to/your/manage.py && ll) I get this:
-rw-r--r-- 1 root root 55K Dec 2 13:33 db.sqlite3
This means I need root permission to do anything on the database (for example, do a createuser). So I looked around on Stack Overflow and tried a couple of things:
I had the whole script at the top of /etc/init.d/rc.local
Then I put the script as a script file gunicorn_script.sh in /etc/init.d and did a /usr/sbin/update-rc.d -f gunicorn_script.sh defaults
Lastly, I tried to put this command at the top of the rc.local file: su debian -c '/etc/init.d/gunicorn_script.sh start' to execute gunicorn_script.sh as the debian user
All of them started my app, but the problem with the database remained (root-only rights).
So how do I run that script as a non-root user?
I have a script in my project's folder which I use to run gunicorn. Here is a header:
#!/bin/bash
CUR_DIR=$(dirname $(readlink -f $0))
WORK_DIR=$CUR_DIR
USER=myusername
PYTHON=/usr/bin/python3
GUNICORN=/usr/local/bin/gunicorn
sudo -u $USER sh -c "cd $WORK_DIR; $PYTHON -W ignore $GUNICORN -c $WORK_DIR/config/gunicorn/gunicorn.conf.py --chdir $WORK_DIR myappname.wsgi:application"
Updated:
Put the code below in the file /etc/init.d/myservice, make root the owner, and give it +x permission.
#!/bin/bash
#chkconfig: 345 95 50
#description: Starts myservice
if [ -z "$1" ]; then
    echo "`basename $0` {start|stop}"
    exit
fi

case "$1" in
    start)
        sh /path/to/run_script.sh start &
        ;;
    stop)
        sh /path/to/run_script.sh stop
        ;;
esac
Now you can use sudo service myservice start
I'm sorry, I'm not familiar with systemd yet, but with it this can be even easier.
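For the record, here is a minimal sketch of a systemd unit for this case (the unit name, user, and paths are assumptions); the User= directive is what makes gunicorn, and therefore the files it creates such as db.sqlite3, belong to a non-root user. Save it as /etc/systemd/system/gunicorn_script.service:

[Unit]
Description=Gunicorn for my Django app
After=network.target

[Service]
Type=simple
# Run as a non-root user so the SQLite database is not owned by root.
User=debian
WorkingDirectory=/path/to/your
ExecStart=/usr/local/bin/gunicorn --env DJANGO_SETTINGS_MODULE=app.my_name app.wsgi --workers=2
Restart=on-failure

[Install]
WantedBy=multi-user.target

Then run sudo systemctl daemon-reload followed by sudo systemctl enable --now gunicorn_script.service.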
OK, so I found out that db.sqlite3 is created by Django through the makemigrations and migrate commands, which I had run as root.
Hence the problems with the permissions. I switched to the debian user and ran the commands from there, et voilà:
-rw-r--r-- 1 debian debian 55K Dec 2 13:33 db.sqlite3
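For a database that was already created as root, changing its owner should presumably work as well, e.g.:

# Hand the root-owned database over to the user gunicorn runs as.
sudo chown debian:debian /path/to/your/db.sqlite3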
I'm using Vagrant to set up a box with python, pip, virtualenv, virtualenvwrapper and some requirements. A provisioning shell script adds the required lines for virtualenvwrapper to .bashrc. It does a very basic check that they're not already there, so that it doesn't duplicate them with every provision:
if ! grep -Fq "WORKON_HOME" /home/vagrant/.bashrc; then
    echo 'export WORKON_HOME=/home/vagrant/.virtualenvs' >> /home/vagrant/.bashrc
    echo 'export PROJECT_HOME=/home/vagrant/Devel' >> /home/vagrant/.bashrc
    echo 'source /usr/local/bin/virtualenvwrapper.sh' >> /home/vagrant/.bashrc
    source /home/vagrant/.bashrc
fi
That seems to work fine; after provisioning is finished, the lines are in .bashrc, and I can ssh to the box and use virtualenvwrapper.
However, virtualenvwrapper doesn't work during provisioning. After the section above, this next part checks for a pip requirements file and tries to install with virtualenvwrapper:
if [[ -f /vagrant/requirements.txt ]]; then
    mkvirtualenv 'myvirtualenv' -r /vagrant/requirements.txt
fi
But that generates:
==> default: /tmp/vagrant-shell: line 50: mkvirtualenv: command not found
If I try and echo $WORKON_HOME from that shell script, nothing appears.
What am I missing to have those environment variables available, so virtualenvwrapper will run?
UPDATE: Further attempts... it seems that doing source /home/vagrant/.bashrc has no effect in my shell script - I can put echo "hello" in the .bashrc file, and that isn't output during provisioning (but it is if I run source /home/vagrant/.bashrc when logged in).
I've also tried su -c "source /home/vagrant/.bashrc" vagrant in the shell script but that is no different.
UPDATE 2: Removed the $BASHRC_PATH variable, which was confusing the issue.
UPDATE 3: In another question I got the answer as to why source /home/vagrant/.bashrc wasn't working: the first part of the .bashrc file prevented it from doing anything when run "not interactively" in that way.
The Vagrant script provisioner runs as root, so its home dir (~) will be /root. If you define BASHRC_PATH=/home/vagrant in your script, then I believe your steps will work: appending to, then sourcing from, /home/vagrant/.bashrc.
Update:
Scratching my earlier idea ^^ because BASHRC_PATH is already set correctly.
As an alternative we could use .profile or .bash_profile. Here's a simplified example which sets environment variable FOO, making it available during provisioning and after ssh login:
Vagrantfile
Vagrant.configure(2) do |config|
  config.vm.box = "hashicorp/precise32"

  $prov_script = <<SCRIPT
if ! grep -q "export FOO" /home/vagrant/.profile; then
  sudo echo "export FOO=bar" >> /home/vagrant/.profile
  echo "before source, FOO=$FOO"
  source /home/vagrant/.profile
  echo "after source, FOO=$FOO"
fi
SCRIPT

  config.vm.provision "shell", inline: $prov_script
end
Results
$ vagrant up
...
==> default: Running provisioner: shell...
default: Running: inline script
==> default: before source, FOO=
==> default: after source, FOO=bar
$ vagrant ssh -c 'echo $FOO'
bar
$ vagrant ssh -c 'tail -n 1 ~/.profile'
export FOO=bar
I found a solution, but I don't know if it's the best. It feels slightly wrong as it's repeating things, but...
I still append those lines to .bashrc, so that virtualenvwrapper will work if I ssh into the machine. But, because source /home/vagrant/.bashrc appears to have no effect during the running of the script, I have to explicitly repeat those three commands:
if ! grep -Fq "WORKON_HOME" $BASHRC_PATH; then
    echo 'export WORKON_HOME=$HOME/.virtualenvs' >> $BASHRC_PATH
    echo 'export PROJECT_HOME=$HOME/Devel' >> $BASHRC_PATH
    echo 'source /usr/local/bin/virtualenvwrapper.sh' >> $BASHRC_PATH
fi
WORKON_HOME=/home/vagrant/.virtualenvs
PROJECT_HOME=/home/vagrant/Devel
source /usr/local/bin/virtualenvwrapper.sh
(As an aside, I also realised that during vagrant provisioning $HOME is /root, not the /home/vagrant I was assuming.)
The .bashrc in the Ubuntu box does not work. You have to create a .bash_profile and add:
if [ -f ~/.bashrc ]; then
    . ~/.bashrc
fi
As mentioned in your other question, Vagrant prohibits interactive shells during provisioning - apparently only for some boxes (I still need a reference for this). For me, this affects the official Ubuntu Trusty and Xenial boxes.
However, you can simulate an interactive bash shell using sudo -H -u USER_HERE bash -i -c 'YOUR COMMAND HERE'
Answer taken from: https://stackoverflow.com/a/30106828/4186199
This has worked for me installing Ruby via rbenv and Node via nvm when provisioning the Ubuntu/trusty64 and xenial64 boxes.
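Applied to the provisioning step from the question, that might look like this (the vagrant user is an assumption for the box):

# Run mkvirtualenv inside a simulated interactive shell, so .bashrc
# (and with it virtualenvwrapper.sh) actually gets sourced.
sudo -H -u vagrant bash -i -c "mkvirtualenv 'myvirtualenv' -r /vagrant/requirements.txt"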
Does anyone have a script to autostart a Python script after reboot (CentOS)?
I tried this code, but it is not working:
#! /bin/sh
# chkconfig: 2345 95 20
# description: almagest
# What your script does (not sure if this is necessary though)
# processname: www-almagest
# /etc/init.d/www-almagest start
case "$1" in
start)
echo "Starting almagest"
# run application you want to start
python ~almagest_clinic/app.py &> /dev/null &
;;
stop)
echo "Stopping example"
# kill application you want to stop
kill -9 $(sudo lsof -t -i:8002)
;;
*)
echo "Usage: /etc/init.d/www-private{start|stop}"
exit 1
;;
esac
exit 0
chkconfig script on
I found a solution: https://github.com/frdmn/service-daemons/blob/master/centos
With an absolute path it worked for me.
The init process runs as root, and you have a relative path
python $HOME/almagest_clinic/app.py &> /dev/null &
in your script.
The root user may not be able to see that path. I would suggest changing that path to an absolute path.
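For example (the actual home directory is an assumption):

# Absolute path instead of the tilde-relative one:
python /home/almagest_clinic/app.py &> /dev/null &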
I've created a small web server using werkzeug and I'm able to run it the usual Python way with python my_server.py. Pages load and everything works fine. Now I want to start it when my PC boots. What's the easiest way to do that? I've been struggling with upstart, but the process doesn't seem to "live in the background": after I execute start my_server I immediately receive kernel: [ 8799.793942] init: my_server main process (7274) terminated with status 1
my_server.py:
...
if __name__ == '__main__':
    from werkzeug.serving import run_simple
    app = create_app()
    run_simple('0.0.0.0', 4000, app)
upstart configuration file my_server.conf:
description "My service"
author "Some Dude <blah#foo.com>"
start on runlevel [2345]
stop on runlevel [016]
exec /path/to/my_server.py
start on startup
Any ideas how to make it work? Or any other, better way to daemonize the script?
Update:
I believe the problem lies within my_server.py. It doesn't seem to start the web server (the run_simple() method) in the first place. What steps should be taken to make a .py file runnable by a task handler such as upstart?
Place the shebang as the first line: #!/usr/bin/env python
Allow execution permissions (chmod 755)
Start the daemon with superuser rights (to be absolutely sure no permission restrictions prevent it from starting)
Make sure all Python libraries are there!
Something else?
Solved:
The problem was with missing Python dependencies. When starting the script through a task manager (e.g. upstart or start-stop-daemon) no errors are thrown. You need to be absolutely sure that PYTHONPATH contains everything you need.
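In upstart, one way to make sure of that is an env stanza in the job file, e.g. (the site-packages path is an assumption):

# my_server.conf
# Make third-party packages visible to the daemonized script.
env PYTHONPATH=/path/to/your/site-packages
exec /path/to/my_server.py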
In addition to gg.kaspersky's method, you could also turn your script into a "service", so that you can start or stop it using:
$ sudo service myserver start
* Starting system myserver.py Daemon [ OK ]
$ sudo service myserver status
* /path/to/myserver.py is running
$ sudo service myserver stop
* Stopping system myserver.py Daemon [ OK ]
and define it as a startup service using:
$ sudo update-rc.d myserver defaults
To do this, you must create this file and save it in /etc/init.d/.
#!/bin/sh -e

DAEMON="/path/to/myserver.py"
DAEMONUSER="myuser"
DAEMON_NAME="myserver.py"

PATH="/sbin:/bin:/usr/sbin:/usr/bin"

test -x $DAEMON || exit 0

. /lib/lsb/init-functions

d_start () {
    log_daemon_msg "Starting system $DAEMON_NAME Daemon"
    start-stop-daemon --background --name $DAEMON_NAME --start --user $DAEMONUSER --exec $DAEMON
    log_end_msg $?
}

d_stop () {
    log_daemon_msg "Stopping system $DAEMON_NAME Daemon"
    start-stop-daemon --name $DAEMON_NAME --stop --retry 5
    log_end_msg $?
}

case "$1" in
    start|stop)
        d_${1}
        ;;
    restart|reload|force-reload)
        d_stop
        d_start
        ;;
    force-stop)
        d_stop
        killall -q $DAEMON_NAME || true
        sleep 2
        killall -q -9 $DAEMON_NAME || true
        ;;
    status)
        status_of_proc "$DAEMON_NAME" "$DAEMON" "system-wide $DAEMON_NAME" && exit 0 || exit $?
        ;;
    *)
        echo "Usage: /etc/init.d/$DAEMON_NAME {start|stop|force-stop|restart|reload|force-reload|status}"
        exit 1
        ;;
esac
exit 0
In this example, I assume you have a shebang like #!/usr/bin/python at the head of your Python file, so that you can execute it directly.
Last but not least, do not forget to give execution rights to your Python server and to the service script:
$ sudo chmod 755 /etc/init.d/myserver
$ sudo chmod 755 /path/to/myserver.py
Here's the page where I learned this originally (in French).
Cheers.
One simple way to do this is using crontab:
$ crontab -e
A crontab file will appear for editing; write this line at the end:
@reboot python myserver.py
and quit. Now, after each reboot, the cron daemon will run your myserver Python script.
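Keep in mind that cron starts jobs with a minimal PATH and from your home directory, so absolute paths are safer; a variant with assumed paths and a log file:

@reboot /usr/bin/python /path/to/myserver.py >> /tmp/myserver.log 2>&1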
If you have the supervisor service starting at boot, writing a supervisor program entry is much, much simpler.
You can even set autorestart so that supervisor relaunches your program if it fails.
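A minimal sketch of such an entry, e.g. in /etc/supervisor/conf.d/myserver.conf (the program name, user, and paths are assumptions):

[program:myserver]
command=/usr/bin/python /path/to/my_server.py
directory=/path/to
user=myuser
autostart=true
autorestart=true

After saving, sudo supervisorctl reread && sudo supervisorctl update picks it up.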