I've got the following multi-machine Vagrant setup:
Vagrant.configure(2) do |config|
  config.vm.define "xfcevm" do |xfcevm|
    xfcevm.vm.box = "generic/ubuntu1904"
    xfcevm.vm.hostname = "xfcevm"
    config.vm.provider "virtualbox" do |vb|
      vb.name = "ubuntu-xfce"
    end
  end

  config.vm.define "kdevm" do |kdevm|
    kdevm.vm.box = "generic/arch"
    kdevm.vm.hostname = "kdevm"
    config.vm.provider "virtualbox" do |vb|
      vb.name = "arch-kde"
    end
  end

  ## only Arch doesn't ship with Python installed
  config.vm.provision "shell", inline: "which python || sudo pacman --noconfirm -S python"

  config.vm.provider "virtualbox" do |vb|
    vb.gui = true
    vb.memory = "2048"
    vb.cpus = 1
    vb.customize ["modifyvm", :id, "--vram", "32"]
  end

  config.vm.provision "ansible" do |ansible|
    ansible.verbose = "v"
    ansible.compatibility_mode = "2.0"
    ansible.playbook = "setup.yml"
    ansible.inventory_path = "hosts"
  end
end
Since the Arch Vagrant box doesn't include Python, I've created an inline shell provisioning command that tests for the existence of Python (with which python) and, if that test fails, installs Python via pacman. If which python succeeds, the part after || shouldn't be evaluated, and that is exactly what happens when I run the command in a terminal.
But the shell provisioner evaluates the part after || anyway, no matter whether Python exists. On Ubuntu this raises an obvious error, since pacman isn't installed:
$ vagrant up --provision
Bringing machine 'xfcevm' up with 'virtualbox' provider...
Bringing machine 'kdevm' up with 'virtualbox' provider...
==> xfcevm: Checking if box 'generic/ubuntu1904' version '1.9.34' is up to date...
==> xfcevm: Running provisioner: shell...
xfcevm: Running: inline script
xfcevm: sudo
xfcevm: :
xfcevm: pacman: command not found
The SSH command responded with a non-zero exit status. Vagrant
assumes that this means the command failed. The output for this command
should be in the log above. Please read the output to determine what
went wrong.
The same thing happens with a simple if statement instead of ||:
config.vm.provision "shell", inline: "if [ ! `which python` ]; then sudo pacman --noconfirm -S python; fi"
The actual problem is a combination of two things: on Arch, python refers to Python 3 (whereas on Ubuntu, python means Python 2), and this Ubuntu box doesn't ship with Python 2 at all (which we don't need for Ansible anyway, since we use Python 3).
So the solution is to check for both python and python3:
config.vm.provision "shell", inline: "if [ ! `which python`] && [ ! `which python3` ]; then sudo pacman --noconfirm -S python; fi"
After a test on my machine with the following Vagrantfile:
Vagrant.configure(2) do |config|
  config.vm.box = "generic/ubuntu1904"
  config.vm.hostname = "test"
  config.vm.network "private_network", type: "dhcp"
  config.vm.synced_folder ".", "/vagrant", disabled: true

  config.vm.provider "virtualbox" do |v|
    v.memory = 2048
    v.cpus = 2
  end

  config.vm.provision "default", type: "shell", inline: "which python", run: "always"
end
this is the result of vagrant up (only the last lines):
==> default: Running provisioner: default (shell)...
default: Running: inline script
The SSH command responded with a non-zero exit status. Vagrant
assumes that this means the command failed. The output for this command
should be in the log above. Please read the output to determine what
went wrong.
Checking interactively:
$ vagrant ssh
Last login: Wed Oct 9 15:33:22 2019 from 10.0.2.2
vagrant@test:~$ which python
vagrant@test:~$ echo $?
1
vagrant@test:~$ which python3
/usr/bin/python3
vagrant@test:~$ echo $?
0
vagrant@test:~$
Conclusion: what you get is totally coherent. python does not exist in your Ubuntu image, so the rest of your command is run. Your scenario has a flaw and you need to find another way.
In your context, I would try to run everything in Ansible. Here is an example, just to give the idea, that I didn't test and that can surely be greatly improved:
- name: Make sure machine can run ansible
  hosts: all
  gather_facts: false
  tasks:
    - block:
        - name: Try to ansible-ping the host. Consider python is not installed otherwise
          ping:
      rescue:
        - name: No python available, install with low-level and dirty command
          become: true
          become_method: sudo
          raw: pacman --noconfirm -S python
So I have a Python script that I'm running via AWS CodeBuild. It uses the Flyway command-line Docker container to execute the following command:
cmd = 'flyway -user=' + connection_items['username'] + ' -password=' + connection_items['password'] + ' migrate'
os.system(cmd) # I know this is insecure... just trying to get a migration to run
What happens is that it executes flyway without any of the arguments, which just prints the help text and exits. Does anyone have suggestions as to what I'm doing wrong? I can't run it via the subprocess module yet (I'm having path issues).
Thanks!
It looks more like a shell expansion issue than a CodeBuild issue.
Your buildspec was confusing, so I rewrote it as follows; I hope this helps:
---
version: 0.2

phases:
  install:
    runtime-versions:
      python: 3.7
    commands:
      - echo "Installing flyway..."
      - nohup /usr/local/bin/dockerd --host=unix:///var/run/docker.sock --host=tcp://0.0.0.0:2375 --storage-driver=overlay&
      - timeout 15 sh -c "until docker info; do echo .; sleep 1; done"
      - echo "docker run --rm flyway/flyway:6.0.4 -url=jdbc:mysql://db -schemas=myschema -user=root -password=P#ssw0rd -connectRetries=60 migrate" > /usr/local/bin/flyway
      - chmod +x /usr/local/bin/flyway
  build:
    commands:
      - echo building...
      - /usr/local/bin/flyway
      - python MigrateDatabase.py
Also, I am sure you are already setting privileged mode to true for the project environment.
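As a side note that goes beyond the original answer: since the question mentions falling back to os.system because of subprocess path issues, here is an untested sketch of the same call made with subprocess.run and an argument list, which avoids both the string concatenation and any shell expansion. The credentials below are placeholders standing in for the question's connection_items dict, and flyway is assumed to be resolvable by name or absolute path.

import subprocess

# Placeholder values standing in for the question's connection_items dict.
connection_items = {"username": "example_user", "password": "example_password"}

# Build the command as a list: no shell is involved, so nothing in the password
# gets expanded or split, and every argument reaches flyway exactly as written.
cmd = [
    "flyway",                                     # or an absolute path if PATH is the issue
    "-user=" + connection_items["username"],
    "-password=" + connection_items["password"],
    "migrate",
]
subprocess.run(cmd, check=True)                   # check=True raises on a non-zero exit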
I'm using Vagrant to set up a box with python, pip, virtualenv, virtualenvwrapper and some requirements. A provisioning shell script adds the required lines for virtualenvwrapper to .bashrc. It does a very basic check that they're not already there, so that it doesn't duplicate them with every provision:
if ! grep -Fq "WORKON_HOME" /home/vagrant/.bashrc; then
    echo 'export WORKON_HOME=/home/vagrant/.virtualenvs' >> /home/vagrant/.bashrc
    echo 'export PROJECT_HOME=/home/vagrant/Devel' >> /home/vagrant/.bashrc
    echo 'source /usr/local/bin/virtualenvwrapper.sh' >> /home/vagrant/.bashrc
    source /home/vagrant/.bashrc
fi
That seems to work fine; after provisioning is finished, the lines are in .bashrc, and I can ssh to the box and use virtualenvwrapper.
However, virtualenvwrapper doesn't work during provisioning. After the section above, the next part of the script checks for a pip requirements file and tries to install from it with virtualenvwrapper:
if [[ -f /vagrant/requirements.txt ]]; then
    mkvirtualenv 'myvirtualenv' -r /vagrant/requirements.txt
fi
But that generates:
==> default: /tmp/vagrant-shell: line 50: mkvirtualenv: command not found
If I try to echo $WORKON_HOME from that shell script, nothing appears.
What am I missing to have those environment variables available, so virtualenvwrapper will run?
UPDATE: Further attempts... it seems that doing source /home/vagrant/.bashrc has no effect in my shell script. I can put echo "hello" in the .bashrc file, and it isn't output during provisioning (but it is if I run source /home/vagrant/.bashrc when logged in).
I've also tried su -c "source /home/vagrant/.bashrc" vagrant in the shell script but that is no different.
UPDATE 2: Removed the $BASHRC_PATH variable, which was confusing the issue.
UPDATE 3: In another question I got the answer as to why source /home/vagrant/.bashrc wasn't working: the first part of the .bashrc file prevented it from doing anything when run "not interactively" in that way.
The Vagrant script provisioner will run as root, so its home dir (~) will be /root. In your script, if you define BASHRC_PATH=/home/vagrant, then I believe your steps will work: appending to, then sourcing from, /home/vagrant/.bashrc.
Update:
Scratching my earlier idea ^^ because BASHRC_PATH is already set correctly.
As an alternative, we could use .profile or .bash_profile. Here's a simplified example that sets the environment variable FOO, making it available both during provisioning and after ssh login:
Vagrantfile
Vagrant.configure(2) do |config|
  config.vm.box = "hashicorp/precise32"

  $prov_script = <<SCRIPT
if ! grep -q "export FOO" /home/vagrant/.profile; then
  sudo echo "export FOO=bar" >> /home/vagrant/.profile
  echo "before source, FOO=$FOO"
  source /home/vagrant/.profile
  echo "after source, FOO=$FOO"
fi
SCRIPT

  config.vm.provision "shell", inline: $prov_script
end
Results
$ vagrant up
...
==> default: Running provisioner: shell...
default: Running: inline script
==> default: before source, FOO=
==> default: after source, FOO=bar
$ vagrant ssh -c 'echo $FOO'
bar
$ vagrant ssh -c 'tail -n 1 ~/.profile'
export FOO=bar
I found a solution, but I don't know if it's the best. It feels slightly wrong as it's repeating things, but...
I still append those lines to .bashrc, so that virtualenvwrapper will work if I ssh into the machine. But, because source /home/vagrant/.bashrc appears to have no effect during the running of the script, I have to explicitly repeat those three commands:
if ! grep -Fq "WORKON_HOME" $BASHRC_PATH; then
    echo 'export WORKON_HOME=$HOME/.virtualenvs' >> $BASHRC_PATH
    echo 'export PROJECT_HOME=$HOME/Devel' >> $BASHRC_PATH
    echo 'source /usr/local/bin/virtualenvwrapper.sh' >> $BASHRC_PATH
fi
WORKON_HOME=/home/vagrant/.virtualenvs
PROJECT_HOME=/home/vagrant/Devel
source /usr/local/bin/virtualenvwrapper.sh
(As an aside, I also realised that during vagrant provisioning $HOME is /root, not the /home/vagrant I was assuming.)
The .bashrc in the Ubuntu box does not work for this. You have to create a .bash_profile and add:
if [ -f ~/.bashrc ]; then
    . ~/.bashrc
fi
As mentioned in your other question, Vagrant prohibits interactive shells during provisioning - apparently only for some boxes (I still need a reference for this, though). For me, this affects the official Ubuntu Trusty and Xenial boxes.
However, you can simulate an interactive bash shell using sudo -H -u USER_HERE bash -i -c 'YOUR COMMAND HERE'
Answer taken from: https://stackoverflow.com/a/30106828/4186199
This has worked for me installing Ruby via rbenv and Node via nvm when provisioning the Ubuntu/trusty64 and xenial64 boxes.
I'm writing an Ansible playbook for deploying a Django app. As part of the process, I'd like to run the collectstatic command.
The issue I'm having is that the remote server has two Python interpreters, one Python 2.6 and one Python 2.7, with Python 2.6 being the default.
When I run the playbook, it runs using the Python 2.6 interpreter, and I need it to run against the Python 2.7 interpreter.
Any idea on how this can be achieved?
My playbook is as follows:
- hosts: xxxxxxxxx
  vars:
    hg_branch: dmv2
    django_dir: /opt/app/xxxx
    conf_file: /opt/app/conf/xx_uwsgi.ini
    django_env:
      STATIC_ROOT: /opt/app/serve/static
  remote_user: xxxxxx
  tasks:
    - name: Update the hg repo
      command: chdir=/opt/app/xxxxx hg pull -u --rev {{hg_branch}}
    - name: Collect static resources
      environment: django_env
      django_manage: command=collectstatic app_path={{django_dir}}
    - name: Restart the django service
      command: touch {{conf_file}}
    - name: check nginx is running
      service: name=nginx state=started
/usr/bin/python is just a symlink to either /usr/bin/python2.6 or /usr/bin/python2.7. You can just invoke a bash command to update the symlink.
- name: Remove python --> python2.6 symlink
  sudo: yes
  command: rm /usr/bin/python

- name: Add python --> python2.7 symlink
  sudo: yes
  command: ln -s /usr/bin/python2.7 /usr/bin/python
I had a similar situation at the day job and solved it with the ansible_python_interpreter host/group variable.
http://docs.ansible.com/faq.html#how-do-i-handle-python-pathing-not-having-a-python-2-x-in-usr-bin-python-on-a-remote-machine
The cool part is that you can define a different path for python on a host by host basis if you really need to.
I'm trying to run a command that I've installed in my home directory on a remote server. It's already been added to my $PATH in .bash_profile. I'm able to use it when logged in remotely via a normal ssh session, but Fabric doesn't seem to be pulling in my $PATH. Thus, I've tried adding it to my $PATH using Fabric's path context manager like so:
from fabric.api import env, path, run

def test_path():
    print('My env.path setting: %(path)s' % env)
    with path('/path/to/sources/drush'):
        run('echo $PATH')
        run('drush')
Fabric responds with:
Executing task 'test_path'
My env.path setting:
run: echo $PATH
out: /usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
out:
run: echo $PATH
out: /usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/path/to/sources/drush
out:
run: drush
out: /bin/bash: drush: command not found
out:
Fatal error: run() received nonzero return code 127 while executing!
Requested: drush
Executed: /bin/bash -l -c "export PATH=\"\$PATH:\"/path/to/sources/drush\" \" && drush"
Aborting.
Thanks for looking...
The problem is in the way the PATH variable gets set - there is an additional space character at the end of it:
/bin/bash -l -c "export PATH=\"\$PATH:\"/path/to/sources/drush\" \" && drush"
^HERE
The last directory in the search path is interpreted by bash as "/path/to/sources/drush " (with a trailing space) - an invalid directory.
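A minimal sketch (untested) of a possible fix, assuming the directory string simply picked up that stray space, plus the literal quote characters visible in the executed command, wherever it was defined: stripping them before handing the value to the path context manager keeps the appended PATH entry valid. DRUSH_DIR is a hypothetical variable used only for illustration.

from fabric.api import path, run, task

# Hypothetical variable holding the drush directory, including the stray space
# and quote characters that show up in the executed export command.
DRUSH_DIR = ' "/path/to/sources/drush" '

@task
def test_path():
    with path(DRUSH_DIR.strip(' "')):   # drop the surrounding spaces and quotes
        run('echo $PATH')
        run('drush')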
I'm going to install the check_mk plugin by writing a simple fabfile like this:
from fabric.api import env, run, roles, execute, parallel

env.roledefs = {
    'monitoring': ['192.168.3.118'],
    'mk-agent': ['192.168.3.230', '192.168.3.231', '192.168.3.232']
}

@roles('monitoring')
def mk():
    run('[ -f check_mk-1.1.12p7.tar.gz ] || wget http://mathias-kettner.de/download/check_mk-1.1.12p7.tar.gz')
    run('[ -d check_mk-1.1.12p7 ] || tar zxvf check_mk-1.1.12p7.tar.gz')
    run('cd check_mk-1.1.12p7 && sudo ./setup.sh')

@parallel
@roles('mk-agent')
def mk_agent():
    run('[ `rpm -qa | grep -c xinetd` -eq 0 ] && sudo yum -y install xinetd.x86_64')
    run('sudo rpm -ivh http://mathias-kettner.de/download/check_mk-agent-1.2.0b2-1.noarch.rpm')

def check_mk():
    execute(mk)
    execute(mk_agent)
But, as you can guess, if the xinetd package is already installed, Fabric stops with the error below:
Fatal error: run() received nonzero return code 1 while executing!
Requested: [ `rpm -qa | grep -c xinetd` -eq 0 ] && sudo yum -y install xinetd.x86_64
Executed: /bin/bash -l -c "[ \`rpm -qa | grep -c xinetd\` -eq 0 ] && sudo yum -y install xinetd.x86_64"
Aborting.
Is there any solution in this situation?
Since Stack Overflow doesn't let me upvote Morgan's answer without more rep, I'll contribute more detail from http://docs.fabfile.org/en/1.4.1/api/core/context_managers.html#fabric.context_managers.settings
Outside the 'with settings' block in the code below, behaviour returns to normal:
from fabric.api import hide, run, settings

def my_task():
    with settings(
        hide('warnings', 'running', 'stdout', 'stderr'),
        warn_only=True
    ):
        if run('ls /etc/lsb-release'):
            return 'Ubuntu'
        elif run('ls /etc/redhat-release'):
            return 'RedHat'
This is desirable since you can essentially 'catch' what would've been an error in one section without it being fatal, but leave errors fatal elsewhere.
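Applied to the fabfile from the question, an untested sketch of the mk_agent task could look like the following: the rpm/grep check runs under warn_only so its non-zero exit doesn't abort the run, and the decision to install is then made in Python. The second step is kept as in the question.

from fabric.api import parallel, roles, run, settings, sudo

@parallel
@roles('mk-agent')
def mk_agent():
    with settings(warn_only=True):
        count = run('rpm -qa | grep -c xinetd')   # prints 0 and exits non-zero when absent
    if count.strip() == '0':                      # nothing matched, so install the package
        sudo('yum -y install xinetd.x86_64')
    sudo('rpm -ivh http://mathias-kettner.de/download/check_mk-agent-1.2.0b2-1.noarch.rpm')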
Perhaps in 2020 this will be useful.
In Fabric 2.5, you just need to add warn=True to the command to avoid the interruption.
For example: connection.run('test -f /path/to/file && tail /path/to/file', warn=True)
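For completeness, a minimal Fabric 2.x sketch (untested; the host and user below are placeholders) showing how the returned Result can be inspected afterwards, since warn=True makes run() return instead of raising on a non-zero exit:

from fabric import Connection

connection = Connection('192.168.3.230', user='vagrant')   # placeholder host and user

result = connection.run('rpm -q xinetd', warn=True, hide=True)
if result.failed:                        # non-zero exit: the package is not installed
    connection.sudo('yum -y install xinetd.x86_64')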
You simply need to add "env.warn_only = True" to the def mk_agent(): task.
Fabric Failure handling
Once the task list has been constructed, Fabric will start executing them as outlined in Execution strategy, until all tasks have been run on the entirety of their host lists. However, Fabric defaults to a “fail-fast” behavior pattern: if anything goes wrong, such as a remote program returning a nonzero return value or your fabfile’s Python code encountering an exception, execution will halt immediately.
This is typically the desired behavior, but there are many exceptions to the rule, so Fabric provides env.warn_only, a Boolean setting. It defaults to False, meaning an error condition will result in the program aborting immediately. However, if env.warn_only is set to True at the time of failure – with, say, the settings context manager – Fabric will emit a warning message but continue executing.
def my_task():
    with settings(
        hide('warnings', 'running', 'stdout', 'stderr'),
        warn_only=True
    ):
        if run('ls /etc/lsb-release'):
            return 'Ubuntu'
        elif run('ls /etc/redhat-release'):
            return 'RedHat'