I'm using Ansible to configure some virtual machines. I wrote a Python script which retrieves the hosts from a REST service.
My VMs are organized in "Environments". For example I have the "Test", "Red" and "Integration" environments, each with a subset of VMs.
This Python script requires a custom --environment <ENV> parameter to retrieve the hosts of the desired environment.
The problem I'm having is passing the <ENV> to the ansible-playbook command.
In fact, the following command doesn't work:
ansible-playbook thePlaybook.yml -i ./inventory/FromREST.py --environment Test
I get the error:
Usage: ansible-playbook playbook.yml
ansible-playbook: error: no such option: --environment
What is the right syntax to pass variables to a dynamic inventory script?
Update:
To better explain, the FromREST.py script accepts the following parameters:
Either the --list parameter or the --host <HOST> parameter, as per the Dynamic Inventory guidelines
The --environment <ENVIRONMENT> parameter, which I added to the ones required by Ansible to manage the different Environments
I had a similar issue and didn't find any solution, so I just modified my dynamic inventory to fall back to an OS environment variable when the user does not pass --environment.
Read the environment variable in your inventory script like this:
import os

# Read the target environment from the ENV variable
print(os.environ['ENV'])
Then pass the environment variable when invoking Ansible:
export ENV=dev
ansible -i my_custom_inv.py all --list-hosts
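For illustration, a complete minimal inventory script along these lines might look like the sketch below (the environment-to-host mapping is invented for the example; a script like FromREST.py would query the REST service instead):

#!/usr/bin/env python3
import argparse
import json
import os

# Hypothetical mapping; a real script would fetch this from the REST service.
ENVIRONMENTS = {
    "dev": ["dev-vm-1", "dev-vm-2"],
    "Test": ["test-vm-1"],
}

parser = argparse.ArgumentParser()
parser.add_argument("--list", action="store_true")
parser.add_argument("--host")
args = parser.parse_args()

env = os.environ.get("ENV", "dev")  # fall back to a default environment
if args.list:
    print(json.dumps({"all": {"hosts": ENVIRONMENTS.get(env, [])}}))
else:
    print(json.dumps({}))  # no per-host variables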
A workaround: use $PPID to parse -e/--extra-vars out of a snapshot of the parent process's command line.
ansible-playbook -i inventory.sh deploy.yml -e cluster=cl_01
The inventory.sh file:
#!/bin/bash
# Ansible always invokes a dynamic inventory with --list; bail out otherwise
if [[ $1 != "--list" ]]; then exit 1; fi
# Recover the cluster=... extra-var from the parent ansible-playbook command line
extra_var=$(ps -f -p $PPID | grep ansible-playbook | grep -oh "\w*=\w*" | grep cluster | cut -f2 -d=)
./inventory.py --cluster "$extra_var"
inventory.py then returns the JSON inventory for the cl_01 cluster.
Not pretty, I know, but it works.
Let's say I have the following Python script:
import argparse
parser = argparse.ArgumentParser()
parser.add_argument("--host", required=True)
parser.add_argument("--enabled", default=False, action="store_true")
args = parser.parse_args()
print("host: " + args.host)
print("enabled: " + str(args.enabled))
$ python3 test.py --host test.com
host: test.com
enabled: False
$ python3 test.py --host test.com --enabled
host: test.com
enabled: True
Now the script is used in a Docker image and I want to pass the variables via docker run. For the host parameter it is quite easy:
FROM python:3.10-alpine
ENV MY_HOST=default.com
#ENV MY_ENABLED=
ENV TZ=Europe/Berlin
WORKDIR /usr/src/app
COPY test.py .
CMD ["sh", "-c", "python test.py --host ${MY_HOST}"]
But how can I make the --enabled flag work? When the environment variable is unset, or set to 0 or off or something similar, --enabled should be suppressed; otherwise it should be included in the CMD.
Is this possible without modifying the Python script?
For exactly the reasons you're showing here, I'd suggest modifying your script to accept command-line options from environment variables. If you add these lines
import os

parser.set_defaults(
    host=os.environ.get('MY_HOST'),
    enabled=(os.environ.get('MY_ENABLED') == 'true')
)
then you can use docker run -e options to provide these values, without the complexity of trying to reconstruct the command line based on which options are and aren't present. (You will also want to drop required=True from the --host argument, since the value may now come from the environment rather than the command line. Also see Setting options from environment variables when using argparse.)
CMD ["./test.py"] # a fixed string, environment variables specified separately
docker run -e MY_HOST=example.com -e MY_ENABLED=true my-image
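Putting it together, a sketch of the full modified test.py (parser-level defaults set via set_defaults override argument-level ones):

#!/usr/bin/env python3
import argparse
import os

parser = argparse.ArgumentParser()
parser.add_argument("--host")  # no longer required; may come from MY_HOST
parser.add_argument("--enabled", default=False, action="store_true")
parser.set_defaults(
    host=os.environ.get('MY_HOST'),
    enabled=(os.environ.get('MY_ENABLED') == 'true'),
)
args = parser.parse_args()

print("host: " + str(args.host))
print("enabled: " + str(args.enabled))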
Conversely, you can provide the entire command line and its options when you run the container. (But depending on the context you might just be pushing the "how to construct the command" question up a layer.)
docker run my-image \
./test.py --host=example.com --enabled
In principle you can construct this using a separate shell script without modifying your Python script, but it will be somewhat harder and significantly less safe. That script could look something like
#!/bin/sh
TEST_ARGS="--host $MY_HOST"
if [ -n "$MY_ENABLED" ]; then
TEST_ARGS="$TEST_ARGS --enabled"
fi
exec ./test.py $TEST_ARGS
# ^^^^^^^^^^ without double quotes (usually a bug)
Expanding $TEST_ARGS without putting it in double quotes causes the shell to split the string's value on whitespace. This is usually a bug since it would cause directory names like /home/user/My Files to get split into multiple words. You're still at some risk if the environment variable values happen to contain whitespace or other punctuation, intentionally or otherwise.
There are safer but more obscure ways to approach this in shells with extensions like GNU bash, but not all Docker images contain these. Rather than double-check that your image has bash, figure out bash array syntax, and write a separate script to do the argument handling, I suggest handling it exclusively at the Python layer.
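For completeness, a sketch of the bash-array variant mentioned above (assuming the image actually has bash):

#!/bin/bash
# Build the argument list as an array; the quoted expansion "${test_args[@]}"
# passes each element through intact even if it contains spaces.
test_args=(--host "$MY_HOST")
if [ -n "$MY_ENABLED" ]; then
  test_args+=(--enabled)
fi
exec ./test.py "${test_args[@]}"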
Here is the docker run output:
hausey@ubuntu:~/niso2-jxj934$ docker run niso2-jxj934
Test version: 15:59, Mar 24th 2020
Question 1: Evaluation of expression.
Command failed: /bin/bash -c "python /bin/jxj934.py -question 1 -expr \"(ifleq (ifleq -1.11298616747 1.63619642199 (sub -1.11298616747 -1.11298616747) 1.7699684348) (add (exp -0.822479932786) 1.39992604386) (add -1.11298616747 (exp 0.385042309638)) 0.205973267133)\" -n 10 -x \"-0.168958230447 -0.131749160548 0.0971246476126 1.8706205565 -0.464122426299 2.35887369763 -0.375948313434 -0.613901105864 0.411326743135 -0.149276696072\"" Exit status: exited with code 127 stderr: /bin/bash: python: command not found
Here is the Dockerfile:
FROM pklehre/niso2020-lab2-msc
ADD jxj934.py /bin
CMD ["-username","jxj934", "-submission", "python /bin/jxj934.py"]
Here is the check for python:
hausey@ubuntu:~/niso2-jxj934$ which python
/usr/bin/python
Is that related to the PATH of python?
Usually it is related to the value of PATH, but specifically that image only has python3. In other words, searching the filesystem for regular files named "python*" with
find / -type f -name "python*"
there were only python3 results:
...
/usr/bin/python3.8
/usr/bin/python3.7
...
A quick solution is to specify python3 in your CMD line (python3 /bin/jxj934.py). Another is to add a soft link (ln -s /usr/bin/python3.8 /usr/bin/python). The best solution is to fix it through the package manager. Then again, that depends on whether you're in control of the Dockerfile and image.
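For example, the quick fix applied to the Dockerfile above (only the submission command changes):

FROM pklehre/niso2020-lab2-msc
ADD jxj934.py /bin
CMD ["-username", "jxj934", "-submission", "python3 /bin/jxj934.py"]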
When you queried which python, you did so on your local machine. The container runs in a different filesystem namespace than yours, with a completely different set of files. The container will behave differently from your machine, and any such investigation will yield relevant results only when run within the container.
A little unrelated to your question, but it might serve you:
docker run has a --entrypoint option that allows you to override the image's entrypoint. You can ask for bash and explore the container.
docker run -it --entrypoint=bash pklehre/niso2020-lab2-msc
Note that bash has to be in the $PATH.
I am trying to create a Docker image/container that will run on Windows 10/Linux and test a REST API. Is it possible to embed the function (from my .bashrc file) inside the Dockerfile? The function pytest calls pylint before running the .py file. If the rating is not 10/10, it prompts the user to fix the code and exits. This works fine on Linux.
Basically, here is the pseudo-code for the Dockerfile from which I am attempting to build an image:
------------------------------------------
FROM ubuntu:x.xx
install python
install pytest
install pylint
copy test_file to the respective folder
execute pytest test_file_name.py
if the rating is not 10/10:
    prompt the user to resolve the rating issue and exit
------------ here is a partial code snippet from the function ------------------
function pytest () {
    argument1="$1"
    # Extract the path and file name for pylint when a method name is passed
    pathfilename=$(echo "${argument1}" | sed 's/::.*//')
    clear && printf '\e[3J'
    output=$(docker exec -t orch-$USER pylint -r n "${pathfilename}")
    if echo "${output}" | grep 'warning.*error' &>/dev/null ||
       echo "${output}" | egrep 'warning|convention' &>/dev/null
    then
        echo "${output}" | sed 's/\(warning\)/\o033[33m\1\o033[39m/;s/\(errors\|error\)/\o033[31m\1\o033[39m/'
        YEL='\033[0;1;33m'
        NC='\033[0m'
        echo -e "\n ${YEL}Fix module as per pylint/PEP8 messages to achieve a 10/10 rating before pushing to GitHub\n${NC}"
    fi
}
Another option I can think of:
Step 1] Build the image (using the Dockerfile) with all the required software
Step 2] In a .py file, add the call for execution of pytest with the logic from the function.
Your thoughts?
You can turn that function into a standalone shell script. (Pretty much by just removing the function wrapper, and taking out the docker exec part of the tool invocation.) Once you've done that, you can COPY the shell script into your image, and once you've done that, you can RUN it.
...
COPY pylint-enforcer.sh .
RUN chmod +x ./pylint-enforcer.sh \
&& ./pylint-enforcer.sh
...
It looks like pylint produces a non-zero exit code if it emits any messages. For the purposes of a Dockerfile, it may be enough to just RUN pylint -r n .; if it prints anything, it will return a non-zero exit code, which docker build will interpret as failure, and the build will not proceed.
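That could look something like the following fragment (assuming the sources were COPYed into the working directory; explicit *.py names are used here because plain pylint expects module or file names):

...
COPY . .
# docker build aborts here if pylint emits any messages (non-zero exit code)
RUN pylint -r n *.py
...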
You might consider whether you'll ever want the ability to build and push an image of code that isn't absolutely perfect (during a production-down event, perhaps), and whether you want to require root-level permissions to run simple code-validity tools (if you can run docker at all, you can edit arbitrary files on the host as root). I'd suggest running these tools in a non-Docker virtual environment during your CI process, and neither placing them in your Dockerfile nor depending on docker exec to run them.
I am trying to specify inventory file in Ansible.
The help command output:
-i INVENTORY, --inventory-file=INVENTORY
specify inventory host file
(default=/usr/local/etc/ansible/hosts)
I tried this:
ansible -i /Users/liu/personal/test_ansible/hosts
but it doesn't work and instead it outputs the help content once again:
➜ test_ansible ansible -i /Users/liu/personal/test_ansible/hosts
Usage: ansible <host-pattern> [options]
Options:
-a MODULE_ARGS, --args=MODULE_ARGS
module arguments
--ask-become-pass ask for privilege escalation password
-k, --ask-pass ask for SSH password
--ask-su-pass ask for su password (deprecated, use become)
-K, --ask-sudo-pass ask for sudo password (deprecated, use become)
--ask-vault-pass ask for vault password
-B SECONDS, --background=SECONDS
run asynchronously, failing after X seconds
(default=N/A)
.......
What am I missing here?
When you use the ansible command it runs ad-hoc Ansible modules rather than the more typical Ansible playbooks (which are run by the ansible-playbook executable instead).
The ansible executable requires a "host pattern", which matches a group of remote nodes defined in the inventory.
So if we supplied an inventory file (named inventory.ini for this example) that looked like this:
[web]
web-1.example.org
web-2.example.org
[app]
app-1.example.org
app-2.example.org
app-3.example.org
[database:children]
database-master
database-slave
[database-master]
database-master.example.org
[database-slave]
database-slave1.example.org
database-slave2.example.org
We could target just the web nodes by using ansible web -i /path/to/inventory.ini -m ping to get Ansible to run the ping module against web-1.example.org and web-2.example.org.
Alternatively we could target all of the database nodes including the master and the 2 slaves by using ansible database -i /path/to/inventory.ini -m ping.
And finally, we can target all of the servers in the inventory with the "magic" all group, which covers every group in the inventory file: ansible all -i /path/to/inventory.ini -m ping.
I found the solution:
export ANSIBLE_INVENTORY=/Users/liu/personal/test_ansible/hosts
and then it works!
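Note that the ad-hoc ansible command still requires a host pattern (and typically a module), so a full invocation would look something like:

export ANSIBLE_INVENTORY=/Users/liu/personal/test_ansible/hosts
ansible all -m ping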
I've been trying to pass an environment variable to a Docker container via the -e option. The variable is meant to be used in a supervisor script within the container. Unfortunately, the variable does not get resolved (i.e. it stays as, for instance, $INSTANCENAME). I tried ${var} and "${var}", but this didn't help either. Is there anything I can do, or is this just not possible?
The docker run command:
sudo docker run -d -e "INSTANCENAME=instance-1" -e "FOO=2" -v /var/app/tmp:/var/app/tmp -t myrepos/app:tag
and the supervisor file:
[program:app]
command=python test.py --param1=$FOO
stderr_logfile=/var/app/log/$INSTANCENAME.log
directory=/var/app
autostart=true
The variable is being passed to your container, but supervisor doesn't let you use environment variables like this inside its configuration files.
You should review the supervisor documentation, and specifically the parts about string expressions. For example, for the command option:
Note that the value of command may include Python string expressions, e.g. /path/to/programname --port=80%(process_num)02d might expand to /path/to/programname --port=8000 at runtime.
String expressions are evaluated against a dictionary containing the keys group_name, host_node_name, process_num, program_name, here (the directory of the supervisord config file), and all supervisord’s environment variables prefixed with ENV_.
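Applied to the configuration above, that means switching to the %(ENV_X)s form (assuming supervisor 3.2 or later, which exposes supervisord's environment variables with the ENV_ prefix):

[program:app]
command=python test.py --param1=%(ENV_FOO)s
stderr_logfile=/var/app/log/%(ENV_INSTANCENAME)s.log
directory=/var/app
autostart=true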