Ansible: How to change Python Version

I'm using GNS3 to practice Ansible scripting; there is a Docker image called "Network Automation" with built-in Ansible. However, it still uses Python 2.7 as the interpreter:
root@Network-Automation:~# ansible --version
ansible 2.7.11
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.12 (default, Nov 12 2018, 14:36:49) [GCC 5.4.0 20160609]
I understand I can use the "ansible-playbook --version -e 'ansible_python_interpreter=/usr/bin/python3'" command to run a playbook with Python 3, or I can specify the var within the playbook:
- name: Common package
  hosts: all
  gather_facts: no
  vars:
    ansible_python_interpreter: /usr/bin/python3
  roles:
    - { role: python, tags: [ init, python, common, addusers ] }
...
...
However, I would like a permanent way to force Ansible to use Python 3. How can I achieve this? Thanks.

Why not use the vars directory in your role...
├── defaults
│   └── main.yml
├── files
├── handlers
│   └── main.yml
├── meta
│   └── main.yml
├── README.md
├── tasks
│   └── main.yml
├── templates
├── tests
│   ├── inventory
│   └── test.yml
└── vars
    └── main.yml
In vars/main.yml, just add:
---
# vars file for XXXX
ansible_python_interpreter: /usr/bin/python3

Per https://docs.ansible.com/ansible/latest/reference_appendices/interpreter_discovery.html you could simply set it in the inventory for that host, or in your configuration file for ansible (which can also be shipped in the same directory as the playbooks and/or inventory):
To control the discovery behavior:
- for individual hosts and groups, use the ansible_python_interpreter inventory variable
- globally, use the interpreter_python key in the [defaults] section of ansible.cfg
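A minimal sketch of both options (the host and group names here are hypothetical):

# inventory (INI format) — per host or group
[routers]
router1 ansible_python_interpreter=/usr/bin/python3

# ansible.cfg — global
[defaults]
interpreter_python = /usr/bin/python3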

Adding some points that you might otherwise overlook, based on the comments above:
- In the original post, Ansible was installed under the root account; in many other environments you won't use root. In that case you need to sudo su and then install Ansible with pip3, otherwise it will be installed for your account only, under ~/.local/bin.
- With newer pip versions, it's recommended to run python3 -m pip install xxx rather than executing pip3 install xxx directly.
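For example, a minimal sketch of a system-wide install against Python 3 (assuming sudo access):

sudo python3 -m pip install ansible
ansible --version    # should now report a Python 3 interpreter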

Related

ansible molecule "python not found"

I have some ansible roles and I would like to use molecule testing with them.
When I execute the command molecule init scenario -r get_files_uid -d docker, I get the following file structure:
get_files_uid
├── molecule
│   └── default
│       ├── converge.yml
│       ├── molecule.yml
│       └── verify.yml
├── tasks
│   └── main.yml
└── vars
    └── main.yml
After that, I execute molecule test and I receive the following error:
PLAY [Converge] ****************************************************************
TASK [Gathering Facts] *********************************************************
fatal: [instance]: FAILED! => {"ansible_facts": {}, "changed": false, "failed_modules": {"ansible.legacy.setup": {"failed": true, "module_stderr": "/bin/sh: python: command not found\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 127}}, "msg": "The following modules failed to execute: ansible.legacy.setup\n"}
PLAY RECAP *********************************************************************
instance : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
My ansible.cfg looks like this:
[defaults]
roles_path = roles
ansible_python_interpreter = /usr/bin/python3
And I use macOS with Ansible:
ansible [core 2.13.3]
config file = None
configured module search path = ['/Users/scherevko/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /opt/homebrew/Cellar/ansible/6.3.0/libexec/lib/python3.10/site-packages/ansible
ansible collection location = /Users/scherevko/.ansible/collections:/usr/share/ansible/collections
executable location = /opt/homebrew/bin/ansible
python version = 3.10.6 (main, Aug 11 2022, 13:36:31) [Clang 13.1.6 (clang-1316.0.21.2.5)]
jinja version = 3.1.2
libyaml = True
molecule version:
molecule 4.0.1 using python 3.10
ansible:2.13.3
delegated:4.0.1 from molecule
docker:2.0.0 from molecule_docker requiring collections: community.docker>=3.0.0-a2
podman:2.0.2 from molecule_podman requiring collections: containers.podman>=1.7.0 ansible.posix>=1.3.0
When I run molecule --debug test I see
ANSIBLE_PYTHON_INTERPRETER: python not found
How to fix that?
The default scaffold for molecule role initialization uses quay.io/centos/centos:stream8 as the test instance image (see molecule/default/molecule.yml).
This image does not have any /usr/bin/python3 file available:
$ docker run -it --rm quay.io/centos/centos:stream8 ls -l /usr/bin/python3
ls: cannot access '/usr/bin/python3': No such file or directory
If you let ansible discover the available python by itself, you'll see that the interpreter actually found is /usr/libexec/platform-python, as in the following demo (no ansible.cfg in use):
$ docker run -d --rm --name instance quay.io/centos/centos:stream8 tail -f /dev/null
2136ad2e8b91f73d21550b2403a6b37f152a96c2373fcb5eb0491a323b0ed093
$ ansible instance -i instance, -e ansible_connection=docker -m setup | grep discovered
"discovered_interpreter_python": "/usr/libexec/platform-python",
$ docker stop instance
instance
Since your ansible.cfg only contains a default value for the roles path besides that wrong Python interpreter path, I suggest you simply remove the file, which will fix your problem. At the very least, remove the line defining ansible_python_interpreter to fall back to the default settings.
Note that you should also make sure that ANSIBLE_PYTHON_INTERPRETER is not set as a variable in your current shell (and remove that definition from whatever shell init file sets it, if that is the case).
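A quick sanity check in the current shell (unset only affects the running session):

env | grep ANSIBLE_PYTHON_INTERPRETER    # prints the definition if it is set
unset ANSIBLE_PYTHON_INTERPRETER         # remove it for this session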
Hardcoding the path of the Python interpreter should in any case be your very last resort, needed only in a few edge cases.
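If you ever do hit such an edge case with molecule, the interpreter can be pinned per test instance through the provisioner inventory in molecule/default/molecule.yml instead of globally; a sketch, assuming the default instance name and the platform-python path discovered above:

provisioner:
  name: ansible
  inventory:
    host_vars:
      instance:
        ansible_python_interpreter: /usr/libexec/platform-python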

How to run multiple Python scripts and an executable file using Docker?

I want to create a container that contains two Python packages, as well as a package consisting of an executable file.
Here's my main package (dockerized_project) tree:
dockerized_project
├── docker-compose.yml
├── Dockerfile
├── exec_project
│   ├── config
│   │   └── config.json
│   ├── config.json
│   └── gowebapp
├── pythonic_project1
│   ├── __main__.py
│   ├── requirements.txt
│   ├── start.sh
│   └── utility
│       └── utility.py
└── pythonic_project2
    ├── collect
    │   └── collector.py
    ├── __main__.py
    ├── requirements.txt
    └── start.sh
Dockerfile content:
FROM ubuntu:18.04
RUN apt update
RUN apt-get install -y python3.6 python3-pip python3-dev build-essential gcc \
libsnmp-dev snmp-mibs-downloader
RUN pip3 install --upgrade pip
RUN mkdir /app
WORKDIR /app
COPY . /app
WORKDIR /app/snmp_collector
RUN pip3 install -r requirements.txt
WORKDIR /app/proto_conversion
RUN pip3 install -r requirements.txt
WORKDIR /app/pythonic_project1
CMD python3 __main__.py
WORKDIR /app/pythonic_project2
CMD python3 __main__.py
WORKDIR /app/exec_project
CMD ["./gowebapp"]
docker-compose content:
version: '3'
services:
  proto_conversion:
    build: .
    image: pc:2.0.0
    container_name: proto_conversion
    # command:
    #   - "bash snmp_collector/start.sh"
    #   - "bash proto_conversion/start.sh"
    restart: unless-stopped
    ports:
      - 8008:8008
    tty: true
Problem:
When I run this project with docker-compose up --build, only the last CMD runs. I therefore think the earlier CMD instructions in the Dockerfile get discarded, because when I remove the last two CMDs, the first one works well.
Is there any approach to run multiple Python scripts and an executable file in the background?
I've also tried with the bash files, without any success either.
As mentioned in the documentation, there can be only one CMD in a Dockerfile; if there are more, the last one overrides the others and takes effect.
A key point of using docker is to isolate your programs, so at first glance you might want to move them to separate containers and have them talk to each other using a shared volume or a docker network. But if you really need them to run in the same container, wrapping them in a bash script and replacing the last CMD with CMD ["./run.sh"] will run them alongside each other:
#!/bin/bash
# launch the first script in the background...
python3 /path/to/script1.py &
# ...and replace the shell with the second, keeping it as the container's main process
exec python3 /path/to/script2.py
Add COPY run.sh to the Dockerfile and use RUN chmod a+x run.sh to make it executable; CMD should then be CMD ["./run.sh"].
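Put together, the relevant Dockerfile lines would look roughly like this (a sketch assuming run.sh sits next to the Dockerfile and the final WORKDIR is already set):

COPY run.sh .
RUN chmod a+x run.sh
CMD ["./run.sh"]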
Try it via an entrypoint.sh:
ENTRYPOINT ["/docker_entrypoint.sh"]
docker_entrypoint.sh:
#!/bin/bash
set -e
python3 not__main__.py &
exec python3 __main__.py
The & symbol runs the first service in the background, as a daemon.
Best practice is to launch these as three separate containers. That's doubly true since you're taking three separate applications, bundling them into a single container, and then trying to launch three separate things from it.
Create a separate Dockerfile in each of your project subdirectories. These can be simpler, especially for the one that just contains a compiled binary:
# execproject/Dockerfile
FROM ubuntu:18.04
WORKDIR /app
COPY . ./
CMD ["./gowebapp"]
Then, in your docker-compose.yml file, have three separate stanzas to launch the containers:
version: '3'
services:
  pythonic_project1:
    build: ./pythonic_project1
    ports:
      - 8008:8008
    environment:
      PY2_URL: 'http://pythonic_project2:8009'
      GO_URL: 'http://execproject:8010'
  pythonic_project2:
    build: ./pythonic_project2
  execproject:
    build: ./execproject
If you really can't rearrange your Dockerfiles, you can at least launch three containers from the same image in the docker-compose.yml file:
services:
  pythonic_project1:
    build: .
    working_dir: /app/pythonic_project1
    command: ./__main__.py
  pythonic_project2:
    build: .
    working_dir: /app/pythonic_project2
    command: ./__main__.py
There are several good reasons to structure your project with multiple containers and images:
- If you roll your own shell script and use background processes (as other answers do), it just won't notice if one of the processes dies; here you can use Docker's restart mechanism to restart individual containers.
- If you have an update to one of the programs, you can update and restart only that single container and leave the rest intact.
- If you ever use a more complex container orchestrator (Docker Swarm, Nomad, Kubernetes), the different components can run on different hosts, each requiring a smaller block of CPU/memory resources on a single node.
- If you ever use a more complex container orchestrator, you can individually scale up the components that are using more CPU.

Non-existing path when setting up Flask to have separated configurations for each environment

I have separated configs for each environment and one single app; the directory tree looks like:
myapp
├── __init__.py        # empty
├── config
│   ├── __init__.py    # empty
│   ├── development.py
│   ├── default.py
│   └── production.py
├── instance
│   └── config.py
└── myapp
    ├── __init__.py
    └── myapp.py
Code
The relevant code, myapp/__init__.py:
from flask import Flask
app = Flask(__name__, instance_relative_config=True)
app.config.from_object('config.default')
app.config.from_pyfile('config.py')
app.config.from_envvar('APP_CONFIG_FILE')
myapp/myapp.py:
from myapp import app
# ...
Commands
Then I set the variable:
$ export FLASK_APP=myapp.py
And try to run the development server from the project root:
$ flask run
Usage: flask run [OPTIONS]
Error: The file/path provided (myapp.py) does not appear to exist. Please verify the path is correct. If app is not on PYTHONPATH, ensure the extension is .py
And from the project myapp folder:
$ cd myapp
$ flask run
Usage: flask run [OPTIONS]
Error: The file/path provided (myapp.myapp.myapp) does not appear to exist. Please verify the path is correct. If app is not on PYTHONPATH, ensure the extension is .py
With another FLASK_APP variable:
$ export FLASK_APP=myapp/myapp.py
# in project root
$ flask run
Usage: flask run [OPTIONS]
Error: The file/path provided (myapp.myapp.myapp) does not appear to exist. Please verify the path is correct. If app is not on PYTHONPATH, ensure the extension is .py
# moving to project/myapp
$ cd myapp
$ flask run
Usage: flask run [OPTIONS]
Error: The file/path provided (myapp/myapp.py) does not appear to exist. Please verify the path is correct. If app is not on PYTHONPATH, ensure the extension is .py
Other tests, without success:
$ python -c 'import myapp; print(myapp)'
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/user/myapp/myapp/__init__.py", line 6, in <module>
app.config.from_envvar('APP_CONFIG_FILE')
File "/home/user/.virtualenvs/myapp/lib/python3.5/site-packages/flask/config.py", line 108, in from_envvar
variable_name)
RuntimeError: The environment variable 'APP_CONFIG_FILE' is not set and as such configuration could not be loaded. Set this variable and make it point to a configuration file
$ export APP_CONFIG_FILE="/home/user/myapp/config/development.py"
$ python -c 'import myapp; print(myapp)'
<module 'myapp' from '/home/user/myapp/myapp/__init__.py'>
$ flask run
Usage: flask run [OPTIONS]
Error: The file/path provided (myapp.myapp) does not appear to exist. Please verify the path is correct. If app is not on PYTHONPATH, ensure the extension is .py
Notes:
- I am not using the PYTHONPATH variable; it is empty
- I have already seen other related questions (Flask: How to manage different environment databases?), but my problem is with the (relatively new) flask command
- Using Python 3.5.2+
It took me a while, but I finally found it:
Flask doesn't like projects with an __init__.py at root level; delete myapp/__init__.py. That is the one located in the root folder:
myapp
├── __init__.py <--- DELETE
...
└── myapp
    ├── __init__.py <--- keep
    └── myapp.py
Use $ export FLASK_APP=myapp/myapp.py
The environment variable specifying the configuration should be the absolute path to it: export APP_CONFIG_FILE="/home/user/myapp/config/development.py"
Now everything works \o/
$ flask run
* Serving Flask app "myapp.myapp"
* Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
$ flask shell
Python 3.5.2+ (default, Sep 22 2016, 12:18:14)
[GCC 6.2.0 20160927] on linux
App: myapp
Instance: /home/user/myapp/instance
>>>

Running supervisord from the host, celery from a virtualenv (Django app)

I'm trying to use celery and a redis queue to perform a task for my Django app. Supervisord is installed on the host via apt-get, whereas celery resides in a specific virtualenv on my system, installed via pip.
As a result, I can't seem to get the celery command to run via supervisord. If I run it from inside the virtualenv, it works fine; outside of it, it doesn't. How do I get it to run under my current setup? Is the solution simply to install celery via apt-get instead of inside the virtualenv? Please advise.
My celery.conf inside /etc/supervisor/conf.d is:
[program:celery]
command=/home/mhb11/.virtualenvs/myenv/local/lib/python2.7/site-packages/celery/bin/celery -A /etc/supervisor/conf.d/celery.conf -l info
directory = /home/mhb11/somefolder/myproject
environment=PATH="/home/mhb11/.virtualenvs/myenv/bin",VIRTUAL_ENV="/home/mhb11/.virtualenvs/myenv",PYTHONPATH="/home/mhb11/.virtualenvs/myenv/lib/python2.7:/home/mhb11/.virtualenvs/myenv/lib/python2.7/site-packages"
user=mhb11
numprocs=1
stdout_logfile = /etc/supervisor/logs/celery-worker.log
stderr_logfile = /etc/supervisor/logs/celery-worker.log
autostart = true
autorestart = true
startsecs=10
stopwaitsecs = 600
killasgroup = true
priority = 998
And the folder structure for my Django project is:
/home/mhb11/somefolder/myproject
├── myproject
│   ├── celery.py          # The Celery app file
│   ├── __init__.py        # The project module file (modified)
│   ├── settings.py        # Including Celery settings
│   ├── urls.py
│   └── wsgi.py
├── manage.py
├── celerybeat-schedule
└── myapp
    ├── __init__.py
    ├── models.py
    ├── tasks.py           # File containing tasks for this app
    ├── tests.py
    └── views.py
If I do a status check via supervisorctl, I get a FATAL error on the command I'm trying to run in celery.conf. Help!
p.s. note that user mhb11 does not have root privileges, in case it matters. Moreover, /etc/supervisor/logs/celery-worker.log is empty. And inside supervisord.log the relevant error I see is INFO spawnerr: can't find command '/home/mhb11/.virtualenvs/redditpk/local/lib/python2.7/site-packages/celery/bin/celery'.
The path to the celery binary is myenv/bin/celery, whereas you are using myenv/local/lib/python2.7/site-packages/celery/bin/celery.
So if you try on your terminal the command you are passing to supervisor (command=xxx), you should get the same error.
You need to replace your command=xxx in your celery.conf with
command=/home/mhb11/.virtualenvs/myenv/bin/celery -A myproject.celery -l info
Note that I have also replaced the -A parameter with the Celery app, instead of the supervisor configuration file. The Celery app path is resolved relative to your project directory, set in celery.conf with
directory = /home/mhb11/somefolder/myproject
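Putting it together, a minimal sketch of the corrected stanza (keeping your original paths; the remaining options from your file can stay as they are):

[program:celery]
command=/home/mhb11/.virtualenvs/myenv/bin/celery -A myproject.celery -l info
directory=/home/mhb11/somefolder/myproject
user=mhb11
autostart=true
autorestart=true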
On a side note, if you are using Celery with Django (through django-celery), you can manage celery with Django's manage.py; there is no need to invoke celery directly:
python manage.py celery worker
python manage.py celery beat
For details, please read the intro to Django Celery.

`docker run -v` doesn't work as expected

I'm experimenting with a Docker image repository cloned from https://github.com/amouat/example_app.git (which is based on another repository: https://github.com/mrmrcoleman/python_webapp).
The structure of this repository is:
├── Dockerfile
├── example_app
│   ├── app
│   │   ├── __init__.py
│   │   └── views.py
│   └── __init__.py
├── example_app.wsgi
After building this repository with the tag example_app, I try to mount a directory from the host into the container:
$ pwd
/Users/satoru/Projects/example_app
$ docker run -v $(pwd):/opt -i -t example_app bash
root@3a12236a1471:/# ls /opt/example_app/
root@3a12236a1471:/# exit
$ ls example_app
__init__.py app run.py
Note that when I tried to list files in /opt/example_app in the container it turned out to be empty.
What's wrong in my configuration?
Your Dockerfile looks like this:
FROM python_webapp
MAINTAINER amouat
ADD example_app.wsgi /var/www/flaskapp/flaskapp.wsgi
CMD service apache2 start && tail -F /var/log/apache2/error.log
So you won't find the files you mentioned, since they were not ADD-ed in the Dockerfile. Also, this is not going to work unless python_webapp installs apache, creates /var/www/flaskapp, and /var/log/apache2 exists. Without knowing what these other custom parts do, it is hard to know what to expect.
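If the intent is to bake the application code into the image rather than rely on a bind mount, a hypothetical fix is to ADD the package directory as well (the target path below is an assumption modeled on the existing .wsgi line):

ADD example_app /var/www/flaskapp/example_app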
