ansible molecule "python not found" - python

I have some ansible roles and I would like to use molecule testing with them.
When I execute the command molecule init scenario -r get_files_uid -d docker, I get the following file structure:
get_files_uid
├── molecule
│   └── default
│       ├── converge.yml
│       ├── molecule.yml
│       └── verify.yml
├── tasks
│   └── main.yml
└── vars
    └── main.yml
After that, I execute molecule test and I receive the following error:
PLAY [Converge] ****************************************************************
TASK [Gathering Facts] *********************************************************
fatal: [instance]: FAILED! => {"ansible_facts": {}, "changed": false, "failed_modules": {"ansible.legacy.setup": {"failed": true, "module_stderr": "/bin/sh: python: command not found\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 127}}, "msg": "The following modules failed to execute: ansible.legacy.setup\n"}
PLAY RECAP *********************************************************************
instance : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
My ansible.cfg looks like this:
[defaults]
roles_path = roles
ansible_python_interpreter = /usr/bin/python3
And I use MacOS with Ansible
ansible [core 2.13.3]
config file = None
configured module search path = ['/Users/scherevko/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /opt/homebrew/Cellar/ansible/6.3.0/libexec/lib/python3.10/site-packages/ansible
ansible collection location = /Users/scherevko/.ansible/collections:/usr/share/ansible/collections
executable location = /opt/homebrew/bin/ansible
python version = 3.10.6 (main, Aug 11 2022, 13:36:31) [Clang 13.1.6 (clang-1316.0.21.2.5)]
jinja version = 3.1.2
libyaml = True
molecule version:
molecule 4.0.1 using python 3.10
ansible:2.13.3
delegated:4.0.1 from molecule
docker:2.0.0 from molecule_docker requiring collections: community.docker>=3.0.0-a2
podman:2.0.2 from molecule_podman requiring collections: containers.podman>=1.7.0 ansible.posix>=1.3.0
When I run molecule --debug test I see
ANSIBLE_PYTHON_INTERPRETER: python not found
How to fix that?

The default scaffold created by molecule role/scenario initialization uses quay.io/centos/centos:stream8 as the test instance image (see molecule/default/molecule.yml).
This image does not have any /usr/bin/python3 file available:
$ docker run -it --rm quay.io/centos/centos:stream8 ls -l /usr/bin/python3
ls: cannot access '/usr/bin/python3': No such file or directory
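For reference, the scaffolded molecule/default/molecule.yml defines the platform roughly like this (a sketch; exact contents vary between molecule versions):
dependency:
  name: galaxy
driver:
  name: docker
platforms:
  - name: instance
    image: quay.io/centos/centos:stream8
    pre_build_image: true   # may differ in your scaffold
provisioner:
  name: ansible
verifier:
  name: ansible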
If you let ansible discover the available python by itself, you'll see that the interpreter it actually finds is /usr/libexec/platform-python, as in the following demo (no ansible.cfg in use):
$ docker run -d --rm --name instance quay.io/centos/centos:stream8 tail -f /dev/null
2136ad2e8b91f73d21550b2403a6b37f152a96c2373fcb5eb0491a323b0ed093
$ ansible instance -i instance, -e ansible_connection=docker -m setup | grep discovered
"discovered_interpreter_python": "/usr/libexec/platform-python",
$ docker stop instance
instance
Since your ansible.cfg only contains a default value for the roles path besides that wrong python interpreter path, I suggest you simply remove the file, which will fix your problem. At the very least, remove the line defining ansible_python_interpreter so that the default interpreter discovery applies.
Note that you should also make sure ANSIBLE_PYTHON_INTERPRETER is not set as a variable in your current shell (and remove that definition from your shell init file if it is).
Hardcoding the path of the python interpreter should in any case be your very last resort, reserved for a few edge cases.
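If you ever do hit one of those edge cases, prefer scoping the override to the molecule scenario rather than to every ansible invocation on your machine. A sketch (not needed here) using the provisioner inventory in molecule.yml, pointing at the interpreter that actually exists in the image:
provisioner:
  name: ansible
  inventory:
    host_vars:
      instance:
        ansible_python_interpreter: /usr/libexec/platform-python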

Related

Packaging shell action files with Oozie, retaining original directory structure

I have a PySpark application I would like to schedule with Oozie, using the shell action.
My submit-application.sh script simply initializes a Python virtualenv (present on all worker nodes) and calls the application.py Python application script.
The application.py script is a PySpark application that comes with its own local Python module, let's call it foobar, which is simply imported and used throughout the code.
So I have a directory structure similar to this:
.
├── foobar
│   ├── config.py
│   ├── foobar.py
│   └── __init__.py
├── application.DEV.ini
├── application.PROD.ini
├── application.py
├── requirements.txt
└── submit-application.sh
I am trying to use an Oozie workflow to package all script and local module files, but apparently they are always delivered flattened, dumped into the root directory of the container, regardless of any configuration I have tried. This prevents the Python script from loading the local modules, causing ModuleNotFoundError: No module named 'foobar' errors.
Isn't there any way to tell Oozie to place file artifacts in a sub-directory?
It seems that the # notation is just ignored.
This is my Oozie workflow.xml file
<workflow-app name="Data-Extraction-WF" xmlns="uri:oozie:workflow:0.5">
    <global>
        <job-tracker>${jobTracker}</job-tracker>
        <name-node>${nameNode}</name-node>
    </global>
    <start to="Data-Extraction"/>
    <action name="Data-Extraction">
        <shell xmlns="uri:oozie:shell-action:1.0">
            <exec>submit-application.sh</exec>
            <file>app/__init__.py#app/__init__.py</file>
            <file>app/config.py#app/config.py</file>
            <file>app/foobar.py#app/foobar.py</file>
            <file>application.DEV.ini#application.DEV.ini</file>
            <file>application.PROD.ini#application.PROD.ini</file>
            <file>application.py#application.py</file>
            <file>submit-application.sh#submit-application.sh</file>
            <capture-output/>
        </shell>
        <ok to="success"/>
        <error to="failure"/>
    </action>
    <kill name="failure">
        <message>Workflow failed, error message: [${wf:errorMessage(wf:lastErrorNode())}]</message>
    </kill>
    <end name="success"/>
</workflow-app>
I ended up creating a wrapper script that fetches the files from HDFS and simply using that within the Oozie workflow. In addition to the HDFS location of the workflow, the step (sub-directory) is passed to this script, which then downloads the whole directory and executes the run script inside it.
<workflow-app name="Data-Extraction-WF" xmlns="uri:oozie:workflow:0.5">
    <global>
        <job-tracker>${jobTracker}</job-tracker>
        <name-node>${nameNode}</name-node>
    </global>
    <start to="Data-Extraction"/>
    <action name="Data-Extraction">
        <shell xmlns="uri:oozie:shell-action:1.0">
            <exec>execute_workflow_step.sh</exec>
            <argument>-w</argument>
            <argument>${wf:conf('oozie.wf.application.path')}</argument>
            <argument>-s</argument>
            <argument>data-transformation</argument>
            <file>execute_workflow_step.sh</file>
        </shell>
        <ok to="success"/>
        <error to="failure"/>
    </action>
    <kill name="failure">
        <message>Workflow failed, error message: [${wf:errorMessage(wf:lastErrorNode())}]</message>
    </kill>
    <end name="success"/>
</workflow-app>
This is my execute_workflow_step.sh script: it downloads the step directory from the HDFS directory of the workflow and executes its run script:
#!/usr/bin/env bash
set -euo pipefail
IFS=$'\n\t'

err_trap() {
    echo "*** FAILED: Error on line $1"
    exit 1
}
trap 'err_trap $LINENO' ERR

usage() { echo "Usage: $0 [-w <workflow HDFS path>] [-s <step-directory>] [-p <step submit script parameters>]" 1>&2; exit 1; }

# Initialize to empty so the checks below work under "set -u"
WORKFLOW_PATH=""
STEP_DIRECTORY=""
PARAMETERS=""

while getopts ":w:s:p:" o; do
    case "${o}" in
        w)
            WORKFLOW_PATH=${OPTARG}
            ;;
        s)
            STEP_DIRECTORY=${OPTARG}
            ;;
        p)
            PARAMETERS=${OPTARG}
            ;;
        *)
            usage
            ;;
    esac
done
shift $((OPTIND-1))

if [ -z "${WORKFLOW_PATH}" ] || [ -z "${STEP_DIRECTORY}" ]; then
    usage
fi

# The step directory is expected to live next to workflow.xml in HDFS
HDFS_BASEDIR=$(dirname "${WORKFLOW_PATH}")
WORKFLOW_STEP_DIRECTORY="${HDFS_BASEDIR}/${STEP_DIRECTORY}"
echo "Getting: ${WORKFLOW_STEP_DIRECTORY}"
hdfs dfs -get "${WORKFLOW_STEP_DIRECTORY}"

STEP_SCRIPT="${STEP_DIRECTORY}/submit-application.sh"
chmod 755 "$STEP_SCRIPT"
echo "Step submit script: ${STEP_SCRIPT}"
echo "Parameters: ${PARAMETERS}"
echo "Invoking: ${STEP_SCRIPT} ${PARAMETERS}"
"${STEP_SCRIPT}" "${PARAMETERS}"

Ansible: How to change Python Version

I am trying to use GNS3 to practice Ansible scripting; there is a docker instance called "Network Automation" with built-in Ansible. However, it still uses Python 2.7 as the interpreter:
root@Network-Automation:~# ansible --version
ansible 2.7.11
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.12 (default, Nov 12 2018, 14:36:49) [GCC 5.4.0 20160609]
I understand I can use the "ansible-playbook --version -e 'ansible_python_interpreter=/usr/bin/python3'" command to run a playbook with Python version 3, or I can specify the var within the playbook:
- name: Common package
  hosts: all
  gather_facts: no
  vars:
    ansible_python_interpreter: /usr/bin/python3
  roles:
    - { role: python, tags: [ init, python, common, addusers] }
  ...
  ...
However, I would like to have a permanent way to force ansible to use Python3 version. How can I achieve this? Thanks.
Why not use the vars directory in your role...
├── defaults
│   └── main.yml
├── files
├── handlers
│   └── main.yml
├── meta
│   └── main.yml
├── README.md
├── tasks
│   └── main.yml
├── templates
├── tests
│   ├── inventory
│   └── test.yml
└── vars
    └── main.yml
In vars/main.yml, just add:
---
# vars file for XXXX
ansible_python_interpreter: /usr/bin/python3
Per https://docs.ansible.com/ansible/latest/reference_appendices/interpreter_discovery.html you could simply set it in the inventory for that host, or in your configuration file for ansible (which can also be shipped in the same directory as the playbooks and/or inventory):
To control the discovery behavior:
for individual hosts and groups, use the ansible_python_interpreter inventory variable
globally, use the interpreter_python key in the [defaults] section of ansible.cfg
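For example, the permanent, global variant via ansible.cfg would be (the interpreter path is an assumption; point it at whatever actually exists on your targets):
[defaults]
interpreter_python = /usr/bin/python3
or, per host, an inventory line such as (the hostname is illustrative):
myhost ansible_python_interpreter=/usr/bin/python3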
Adding some points that you might overlook, based on the comments above:
In the original post, Ansible was installed under the root account. In many other environments you won't be using root; in that case you need to sudo su and then install Ansible with pip3, otherwise it will end up installed for your account only, under ~/.local/bin.
With newer pip versions, it is recommended to run python3 -m pip install xxx rather than executing pip3 install xxx directly.
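For example, a system-wide install along those lines would be:
sudo python3 -m pip install ansible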

Best practice: run 10+ projects using ONE parameterised container

The idea is to have a single container which will contain all small projects and will run based on parameters.
What is the current situation:
I have folders with the project this way:
MAIN_PROJECT_FOLDER
├── PROJECT_SUB_CATEGORY1
│   ├── PROJECT_NAME_FOLDER1
│   │   ├── run.sh
│   │   ├── main.py
│   │   └── config.py
│   └── PROJECT_NAME_FOLDER2
│       ├── run.sh
│       ├── main.py
│       └── config.py
└── PROJECT_SUB_CATEGORY2
    ├── PROJECT_NAME_FOLDER1
    │   ├── run.sh
    │   ├── main.py
    │   └── config.py
    └── PROJECT_NAME_FOLDER2
        ├── run.sh
        ├── main.py
        └── config.py
Each run.sh file has prod/dev parameters which can be executed like this:
sudo ./run.sh prod = prod
sudo ./run.sh dev = dev
sudo ./run.sh = dev
What is the best way to create another .sh file or Dockerfile so that, in the end, it can be executed like this:
sudo docker run CONTAINER_NAME PROJECT_NAME PROD/DEV
sudo docker run test_container test_project1 prod
sudo docker run test_container test_project1 dev
sudo docker run test_container test_project2 prod
... and so on
Basically, each project is a parameter, and prod/dev becomes part of the run.sh execution somehow.
Looking for the best practice to make this happen.
The best practice is generally to have an image that does only one thing. In your example that would imply four separate Docker images; each directory would have its own Dockerfile.
It also tends to be easier to configure settings like this using environment variables than command-line parameters. Sites like https://12factor.net/ describe this and some other practices for building services. (In YAML specifications like Docker Compose or Kubernetes, it is easier to add another key/value environment pair than to build up a correct command line from multiple disparate parts, in my experience.)
This leads you to a sequence like
sudo docker build -t me/cat1proj1 CATEGORY_1/PROJECT_1
sudo docker run -e ENVIRONMENT=prod me/cat1proj1
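The same idea in Compose form might look roughly like this (a sketch; the service name, build path and image tag are illustrative):
version: '3'
services:
  cat1proj1:
    build: CATEGORY_1/PROJECT_1
    image: me/cat1proj1
    environment:
      ENVIRONMENT: prod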
Architecturally, the Docker container runs any single process, and absolutely nothing stops you from writing the wrapper script you describe. That single command is specified as a combination of an "entrypoint" and a "command"; if you specify both then the command is passed as arguments to the entrypoint. The "command" part can be specified in the Dockerfile CMD, but it can also be overridden at the docker run command line.
If you write no special scripts at all, you can run (assuming you've COPYd the projects to the right directories)
sudo docker run test_image ./test_project1/run.sh prod
(I have a couple of projects that are the same application with different scripts to start them in different ways – a Web server vs. an async job runner with the same code, for instance – and just launch them with alternate startup scripts this way.)
There is a pattern of making some other script be the ENTRYPOINT, and interpreting the "command" as just arguments to that script. The command just gets passed as arguments $1, $2, "$@". The problem with doing this is that it breaks some routine debugging paths.
# "test_project1" "prod" passed as arguments to entrypoint script
sudo docker run test_image test_project1 prod
# But that breaks getting a debug shell
sudo docker run --rm -it test_image bash
# More complex commands get awkward
sudo docker run --rm --entrypoint=/bin/ls test_image -l /app
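If you do go the ENTRYPOINT route anyway, the script itself can be a tiny dispatcher along these lines (a sketch; it assumes each project directory with its run.sh has been COPYd under the image's working directory):
#!/bin/sh
# hypothetical dispatcher: the first argument is the project directory
# (relative to the working directory); remaining arguments, e.g. prod/dev,
# are handed to that project's run.sh
set -e
project="$1"
shift
exec "./${project}/run.sh" "$@"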
I would personally use a tool like Supervisor, which can be run inside one docker container.
Installing supervisor on Ubuntu and Debian based distros:
sudo apt install supervisor
Starting supervisor daemon:
sudo service supervisor start
In /etc/supervisor/supervisord.conf you will find the place where configs for your projects are included:
[include]
files = /etc/supervisor/conf.d/*.conf
Now you can create configuration for supervisor and copy it to /etc/supervisor/conf.d/. Example supervisor config for project PROJECT_1:
project_1_supervisor.conf:
[program:project_1_app]
command=/usr/bin/bash /project_1_path/run.sh prod
directory=/project_1_path/
autostart=true
autorestart=true
startretries=3
stderr_logfile=/var/log/project_1.err.log
stdout_logfile=/var/log/project_1.out.log
After this, reload your supervisor configuration:
sudo supervisorctl reread
sudo supervisorctl update
After this you can check if your project program runs:
$ supervisorctl
project_1_app RUNNING pid 590, uptime 0:02:45
I think the best way to handle this is with ENV variables. Here is a complete example of what you are looking for.
Here is a Dockerfile that clones the demo app and does the smart part ;) It takes four ENV variables and, by default, runs project A.
ENV BASE_PATH="/opt/project"
This ENV sets the base path the project is cloned into.
ENV PROJECT_PATH="/main/sub_folder_a/project_a"
This ENV selects the project path, for example project B's folder.
ENV SCRIPT_NAME="hello.py"
This ENV names the actual file to run; it can be run.sh or main.py in your case.
ENV SYSTEM_ENV=dev
This ENV is passed to run.sh and can be either dev or prod.
FROM python:3.7.4-alpine3.10
WORKDIR /opt/project
# Required Tools
RUN apk add --no-cache supervisor git tree && \
mkdir -p /etc/supervisord.d/
# clone remote project or copy your own one
RUN echo "Starting remote clonning...."
RUN git clone https://github.com/Adiii717/python-demo-app.git /opt/project
RUN tree /opt/project
# ENV for start different project, can be overide at run time
ENV BASE_PATH="/opt/project"
ENV PROJECT_PATH="/main/sub_folder_a/project_a"
ENV SCRIPT_NAME="hello.py"
# possible dev or prod
ENV SYSTEM_ENV=dev
RUN chmod +x /opt/project/main/*/*/run.sh
# general config
RUN echo $'[supervisord] \n\
[unix_http_server] \n\
file = /tmp/supervisor.sock \n\
chmod = 0777 \n\
chown= nobody:nogroup \n\
[supervisord] \n\
logfile = /tmp/supervisord.log \n\
logfile_maxbytes = 50MB \n\
logfile_backups=10 \n\
loglevel = info \n\
pidfile = /tmp/supervisord.pid \n\
nodaemon = true \n\
umask = 022 \n\
identifier = supervisor \n\
[supervisorctl] \n\
serverurl = unix:///tmp/supervisor.sock \n\
[rpcinterface:supervisor] \n\
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface \n\
[include] \n\
files = /etc/supervisord.d/*.conf' >> /etc/supervisord.conf
# script supervisord Config
RUN echo $'[supervisord] \n\
nodaemon=true \n\
[program:run_project ] \n\
command= /run_project.sh \n\
stdout_logfile=/dev/fd/1 \n\
stdout_logfile_maxbytes=0MB \n\
stderr_logfile_maxbytes = 0 \n\
stderr_logfile=/dev/fd/2 \n\
redirect_stderr=true \n\
autorestart=false \n\
startretries=0 \n\
exitcodes=0 ' >> /etc/supervisord.d/run_project.conf
RUN echo $'#!/bin/ash \n\
echo -e "\x1B[31m starting project having name ${BASE_PATH}${PROJECT_PATH}/${SCRIPT_NAME} \x1B[0m" \n\
fullfilename=${BASE_PATH}${PROJECT_PATH}/${SCRIPT_NAME} \n\
filename=$(basename "$fullfilename") \n\
extension="${filename##*.}" \n\
if [[ ${extension} == "sh" ]];then \n\
sh ${BASE_PATH}${PROJECT_PATH}/${SCRIPT_NAME} ${SYSTEM_ENV} \n\
else \n\
python ${BASE_PATH}${PROJECT_PATH}/${SCRIPT_NAME} \n\
fi ' >> /run_project.sh
RUN chmod +x /run_project.sh
EXPOSE 9080 8000 9088 80
ENTRYPOINT ["supervisord", "--nodaemon", "--configuration", "/etc/supervisord.conf"]
Build the docker image
docker build -t multipy .
Run the docker container
docker run --rm -it multipy
This will run project A by default.
To run project B, the command will be:
docker run --rm -it --env PROJECT_PATH=/main/sub_folder_b/project_b --env SCRIPT_NAME=hello.py multipy
To run your run.sh bash file, the command will be:
docker run --rm -it --env SCRIPT_NAME=run.sh multipy

How to run multiple Python scripts and an executable files using Docker?

I want to create a container that contains two Python packages as well as a package consisting of an executable file.
Here's my main package (dockerized_project) tree:
dockerized_project
├── docker-compose.yml
├── Dockerfile
├── exec_project
│   ├── config
│   │   └── config.json
│   ├── config.json
│   └── gowebapp
├── pythonic_project1
│   ├── __main__.py
│   ├── requirements.txt
│   ├── start.sh
│   └── utility
│       └── utility.py
└── pythonic_project2
    ├── collect
    │   └── collector.py
    ├── __main__.py
    ├── requirements.txt
    └── start.sh
Dockerfile content:
FROM ubuntu:18.04
RUN apt update
RUN apt-get install -y python3.6 python3-pip python3-dev build-essential gcc \
libsnmp-dev snmp-mibs-downloader
RUN pip3 install --upgrade pip
RUN mkdir /app
WORKDIR /app
COPY . /app
WORKDIR /app/snmp_collector
RUN pip3 install -r requirements.txt
WORKDIR /app/proto_conversion
RUN pip3 install -r requirements.txt
WORKDIR /app/pythonic_project1
CMD python3 __main__.py
WORKDIR /app/pythonic_project2
CMD python3 __main__.py
WORKDIR /app/exec_project
CMD ["./gowebapp"]
docker-compose content:
version: '3'
services:
  proto_conversion:
    build: .
    image: pc:2.0.0
    container_name: proto_conversion
    # command:
    #   - "bash snmp_collector/start.sh"
    #   - "bash proto_conversion/start.sh"
    restart: unless-stopped
    ports:
      - 8008:8008
    tty: true
Problem:
When I run this project with docker-compose up --build, only the last CMD command runs. Hence, I think the previous CMD commands are overridden in the Dockerfile, because when I remove the last two CMDs, the first CMD works well.
Is there any approach to run multiple Python scripts and an executable file in the background?
I've also tried it with the bash files, without any success either.
As mentioned in the documentation, there can be only one CMD in the docker file and if there is more, the last one overrides the others and takes effect.
A key point of using docker is to isolate your programs, so at first glance you might want to move them to separate containers that talk to each other using a shared volume or a docker network. But if you really need them to run in the same container, putting them in a bash script and replacing the last CMD with CMD run.sh will run them alongside each other:
#!/bin/bash
# start the first script in the background, then hand the foreground over to the second
python3 /path/to/script1.py &
exec python3 /path/to/script2.py
Add a COPY run.sh line to the Dockerfile and use RUN chmod a+x run.sh to make it executable. The CMD should then be CMD ["./run.sh"].
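Concretely, the tail of that Dockerfile might look like this (a sketch assuming run.sh sits next to the Dockerfile and the WORKDIR is /app):
COPY run.sh /app/run.sh
RUN chmod a+x /app/run.sh
CMD ["./run.sh"]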
Try it via an entrypoint.sh script:
ENTRYPOINT ["/docker_entrypoint.sh"]
docker_entrypoint.sh
#!/bin/bash
set -e
python3 not__main__.py &
exec python3 __main__.py
The & symbol runs the first script as a daemon in the background; the exec'd second script becomes the container's foreground process.
Best practice is to launch these as three separate containers. That's doubly true since you're taking three separate applications, bundling them into a single container, and then trying to launch three separate things from them.
Create a separate Dockerfile in each of your project subdirectories. These can be simpler, especially for the one that just contains a compiled binary:
# execproject/Dockerfile
FROM ubuntu:18.04
WORKDIR /app
COPY . ./
CMD ["./gowebapp"]
Then in your docker-compose.yml file have three separate stanzas to launch the containers
version: '3'
services:
  pythonic_project1:
    build: ./pythonic_project1
    ports:
      - 8008:8008
    environment:
      PY2_URL: 'http://pythonic_project2:8009'
      GO_URL: 'http://execproject:8010'
  pythonic_project2:
    build: ./pythonic_project2
  execproject:
    build: ./execproject
If you really can't rearrange your Dockerfiles, you can at least launch three containers from the same image in the docker-compose.yml file:
services:
  pythonic_project1:
    build: .
    working_dir: /app/pythonic_project1
    command: ./__main__.py
  pythonic_project2:
    build: .
    working_dir: /app/pythonic_project2
    command: ./__main__.py
There are several good reasons to structure your project with multiple containers and images:
If you roll your own shell script and use background processes (as other answers have), it just won't notice if one of the processes dies; here you can use Docker's restart mechanism to restart individual containers.
If you have an update to one of the programs, you can update and restart only that single container and leave the rest intact.
If you ever use a more complex container orchestrator (Docker Swarm, Nomad, Kubernetes) the different components can run on different hosts and require a smaller block of CPU/memory resource on a single node.
If you ever use a more complex container orchestrator, you can individually scale up components that are using more CPU.

GCE module in Ansible cannot find apache-libcloud although gce.py works

I installed ansible and apache-libcloud with pip. Also, I can use the gcloud CLI, and ansible works for any non-GCE-related playbooks.
When using the gce module as a task to create instances in an ansible playbook, the following error occurs:
TASK: [Launch instances] ******************************************************
<127.0.0.1> REMOTE_MODULE gce instance_names=mm2 machine_type=f1-micro image=ubuntu-1204-precise-v20150625 zone=europe-west1-d service_account_email= pem_file=../pkey.pem project_id=fancystuff-11
<127.0.0.1> EXEC ['/bin/sh', '-c', 'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1437669562.03-233461447935889 && chmod a+rx $HOME/.ansible/tmp/ansible-tmp-1437669562.03-233461447935889 && echo $HOME/.ansible/tmp/ansible-tmp-1437669562.03-233461447935889']
<127.0.0.1> PUT /var/folders/v4/ll0_f8lj7yl7yghb645h95q9ckfc19/T/tmpyDoPt9 TO /Users/d046179/.ansible/tmp/ansible-tmp-1437669562.03-233461447935889/gce
<127.0.0.1> EXEC ['/bin/sh', '-c', u'LANG=en_US.UTF-8 LC_CTYPE=en_US.UTF-8 /usr/bin/python /Users/d046179/.ansible/tmp/ansible-tmp-1437669562.03-233461447935889/gce; rm -rf /Users/d046179/.ansible/tmp/ansible-tmp-1437669562.03-233461447935889/ >/dev/null 2>&1']
failed: [localhost -> 127.0.0.1] => {"failed": true, "parsed": false}
failed=True msg='libcloud with GCE support (0.13.3+) required for this module'
FATAL: all hosts have already failed -- aborting
And the site.yml of the playbook I wrote:
- name: Create a sandbox instance
  hosts: localhost
  vars:
    names: mm2
    machine_type: f1-micro
    image: ubuntu-1204-precise-v20150625
    zone: europe-west1-d
    service_account_email: xxx@developer.gserviceaccount.com
    pem_file: ../pkey.pem
    project_id: fancystuff-11
  tasks:
    - name: Launch instances
      local_action: gce instance_names={{names}} machine_type={{machine_type}}
                    image={{image}} zone={{zone}} service_account_email={{ service_account_email }}
                    pem_file={{ pem_file }} project_id={{ project_id }}
      register: gce
The gce cloud module fails with the error message "libcloud with GCE support (0.13.3+) required for this module".
However, running gce.py from the ansible github repo works. The python script finds the apache-libcloud library and prints a json with all running instances. Besides, pip install apache-libcloud states it is installed properly.
Is there anything I am missing like an environment variable that points to the python libraries (PYTHONPATH)?
UPDATE 1:
I included the following task before the gce task:
- name: install libcloud
  pip: name=apache-libcloud
This also does not affect the behavior nor prevent any error messages.
Update 2:
I added the following task to inspect the available PYTHONPATH:
- name: Getting PYTHONPATH
  local_action: shell python -c 'import sys; print(":".join(sys.path))'
  register: pythonpath

- debug:
    msg: "PYTHONPATH: {{ pythonpath.stdout }}"
The following is returned:
PYTHONPATH: :/usr/local/lib/python2.7/site-packages/setuptools-17.1.1-py2.7.egg:/usr/local/lib/python2.7/site-packages/pip-7.0.3-py2.7.egg:/usr/local/lib/python2.7/site-packages:/usr/local/Cellar/python/2.7.10/Frameworks/Python.framework/Versions/2.7/lib/python27.zip:/usr/local/Cellar/python/2.7.10/Frameworks/Python.framework/Versions/2.7/lib/python2.7:/usr/local/Cellar/python/2.7.10/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-darwin:/usr/local/Cellar/python/2.7.10/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac:/usr/local/Cellar/python/2.7.10/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac/lib-scriptpackages:/usr/local/Cellar/python/2.7.10/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-tk:/usr/local/Cellar/python/2.7.10/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-old:/usr/local/Cellar/python/2.7.10/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-dynload:/usr/local/lib/python2.7/site-packages:/Library/Python/2.7/site-packages
UPDATE 3:
I introduced my own test.py script as a task which executes the same apache-libcloud imports as the gce ansible module. The script imports just fine!!!
Setting the PYTHONPATH fixes the issue. For example:
$ export PYTHONPATH=/usr/local/lib/python2.7/site-packages/
I'm using OSX and I solved this for myself. Short answer: install ansible with pip (rather than e.g. brew).
I inspected the PYTHONPATH that Ansible sets at runtime and it looked like it had nothing to do with my normal system PYTHONPATH. E.g. for me, my system PYTHONPATH was empty, and setting it as e.g. mlazarov suggested didn't make any difference. I made ansible print the PYTHONPATH it uses at runtime, and it looked like this:
ok: [localhost] => {
"msg": "PYTHONPATH: :/usr/local/Cellar/ansible/1.9.4/libexec/lib/python2.7/site-packages:/usr/local/Cellar/ansible/1.9.4/libexec/vendor/lib/python2.7/site-packages:/Library/Frameworks/Python.framework/Versions/3.4/lib/python34.zip:/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4:/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/plat-darwin:/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/lib-dynload:/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages"
}
So there are only ansible's own site-packages and some strange Python 3 installations (I'm using python2.7).
Something in this discussion made me think it might be a problem with the ansible installation; my ansible was installed with brew. I reinstalled it globally with pip (simply running sudo pip install ansible), and that fixed the problem. Now the PYTHONPATH ansible prints looks much better, with my virtualenv python installation at the beginning, and no more "libcloud with GCE support (0.13.3+) required for this module".
I was able to resolve the issue by setting the PYTHONPATH environment variable (export PYTHONPATH=/path/to/site-packages) to point at the current site-packages folder. Apparently, ansible establishes its own environment during module execution and ignores any python paths except those supplied via the PYTHONPATH environment variable.
I find this peculiar behavior, which is not documented on the ansible website.
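If you would rather not export the variable from your shell, the same idea can in principle be expressed in the play itself with Ansible's environment keyword (untested for this exact case; the path is the site-packages directory from the output above):
- name: Create a sandbox instance
  hosts: localhost
  environment:
    PYTHONPATH: /usr/local/lib/python2.7/site-packages
  tasks:
    ...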
I have a similar environment setup. I found some information at the bottom of this section: https://github.com/jlund/streisand#prerequisites
Essentially there are some magic files you can update so the brew'd ansible will add a folder to its package search path:
mkdir -p ~/Library/Python/2.7/lib/python/site-packages
echo '/usr/local/lib/python2.7/site-packages' > ~/Library/Python/2.7/lib/python/site-packages/homebrew.pth
Hope that fixes it for you!
In my case it was simply a matter of:
pip install apache-libcloud
