I'm writing an Ansible playbook for deploying a Django app. As part of the process, I'd like to run the collectstatic command.
The issue I'm having is that the remote server has two Python interpreters, one Python 2.6 and one Python 2.7, with Python 2.6 being the default.
When I run the playbook, it runs using the Python 2.6 interpreter, and I need it to run against the Python 2.7 interpreter.
Any idea how this can be achieved?
My playbook is as follows:
- hosts: xxxxxxxxx
  vars:
    hg_branch: dmv2
    django_dir: /opt/app/xxxx
    conf_file: /opt/app/conf/xx_uwsgi.ini
    django_env:
      STATIC_ROOT: /opt/app/serve/static
  remote_user: xxxxxx
  tasks:
    - name: Update the hg repo
      command: chdir=/opt/app/xxxxx hg pull -u --rev {{hg_branch}}
    - name: Collect static resources
      environment: django_env
      django_manage: command=collectstatic app_path={{django_dir}}
    - name: Restart the django service
      command: touch {{conf_file}}
    - name: check nginx is running
      service: name=nginx state=started
/usr/bin/python is just a symlink to either /usr/bin/python2.6 or /usr/bin/python2.7. You can just invoke a bash command to update the symlink.
- name: Remove python --> python2.6 symlink
  sudo: yes
  command: rm /usr/bin/python

- name: Add python --> python2.7 symlink
  sudo: yes
  command: ln -s /usr/bin/python2.7 /usr/bin/python
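Alternatively, both tasks could collapse into a single idempotent one; a sketch using ln -sf, which overwrites the existing link in place:

- name: Point python at python2.7
  sudo: yes
  command: ln -sf /usr/bin/python2.7 /usr/bin/python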
I had a similar situation at the day job and solved it with the ansible_python_interpreter host/group variable.
http://docs.ansible.com/faq.html#how-do-i-handle-python-pathing-not-having-a-python-2-x-in-usr-bin-python-on-a-remote-machine
The cool part is that you can define a different path for python on a host-by-host basis, if you really need to.
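For example, a minimal sketch based on the playbook above (the host pattern and interpreter path are placeholders for your own): setting the variable at play level makes every task on those hosts run under Python 2.7.

- hosts: xxxxxxxxx
  vars:
    ansible_python_interpreter: /usr/bin/python2.7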
I am attempting to perform an SQL query using oracle-instantclient-basic-21.5 through an Ubuntu 20.04.3 agent hosted by Azure DevOps. The query itself (invoked as python query_data) works when I run it on my own machine with these specs:
Windows 10
Path=C:\oracle\product\11.2.0.4\client_x64\bin;...;...
TNS_ADMIN=C:\oracle\product\tns
Python 3.8.5 using sqlalchemy with driver="oracle" and dialect = "cx_oracle"
I am running the following:
pool:
  vmImage: 'ubuntu-latest'

steps:
- script: |
    sudo apt install alien
  displayName: 'Install alien'
- script: |
    sudo alien -i oracle-instantclient-basic-21.5.0.0.0-1.x86_64.rpm
  displayName: 'Install oracle-instantclient-basic'
- script: |
    sudo sh -c 'echo /usr/lib/oracle/21/client64/ > /etc/ld.so.conf.d/oracle-instantclient.conf'
    sudo ldconfig
  displayName: 'Update the runtime link path'
- script: |
    sudo cp tns/TNSNAMES.ORA /usr/lib/oracle/21/client64/lib/network/admin
    sudo cp tns/ldap.ORA /usr/lib/oracle/21/client64/lib/network/admin
    sudo cp tns/SQLNET.ORA /usr/lib/oracle/21/client64/lib/network/admin
    sudo cp tns/krb5.conf /usr/lib/oracle/21/client64/lib/network/admin
  displayName: 'Copy and paste correct TNS content'
- task: UsePythonVersion@0
  inputs:
    versionSpec: '3.8'
- script: |
    export ORACLE_HOME=/usr/lib/oracle/21/client64
    export PATH=$ORACLE_HOME/bin:$PATH
    export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH
    export TNS_ADMIN=$ORACLE_HOME/lib/network/admin
    python query_data
  displayName: 'Attempt to run python script with locally valid environment variables'
This fails with the error TNS:could not resolve the connect identifier specified. What I have done:
Checked that the locations I am referring to match the actual oracle-instantclient-basic installation
Copied the TNSNAMES.ORA, ldap.ORA etc. that I am using on my own machine and verified that they are present in the desired location (/usr/lib/oracle/21/client64/lib/network/admin)
Checked that TNS_ADMIN points to the correct path (/usr/lib/oracle/21/client64/lib/network/admin)
The SQL query does not complain about a missing client, so it is aware of the installation. Why doesn't it read the TNS_ADMIN path or its contents correctly?
On Linux, change the file names to lowercase: tnsnames.ora, sqlnet.ora and ldap.ora. If you run, say, strace sqlplus a/b@c you can see that it looks for the lowercase names.
With Instant Client, don't set ORACLE_HOME.
There's no need to set LD_LIBRARY_PATH, since ldconfig is used.
There's no need to set TNS_ADMIN, since you have moved the configuration files to the default location.
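Putting the first and last points together, the copy step from the pipeline might become (a sketch; same admin directory as in the question, destinations renamed to lowercase):

- script: |
    sudo cp tns/TNSNAMES.ORA /usr/lib/oracle/21/client64/lib/network/admin/tnsnames.ora
    sudo cp tns/SQLNET.ORA /usr/lib/oracle/21/client64/lib/network/admin/sqlnet.ora
    sudo cp tns/ldap.ORA /usr/lib/oracle/21/client64/lib/network/admin/ldap.ora
    sudo cp tns/krb5.conf /usr/lib/oracle/21/client64/lib/network/admin
  displayName: 'Copy TNS content to lowercase file names'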
You can simplify your install by using alien -i --scripts oracle-instantclient-basic-21.5.0.0.0-1.x86_64.rpm. This will automatically do the ldconfig step for you.
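As a sketch, that collapses the separate install and ldconfig steps above into one:

- script: |
    sudo alien -i --scripts oracle-instantclient-basic-21.5.0.0.0-1.x86_64.rpm
  displayName: 'Install oracle-instantclient-basic (ldconfig runs automatically)'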
Hopefully you have installed the Python cx_Oracle module somehow.
I am using Docker 17.04.0-ce, build 4845c56 with docker-compose 1.12.0, build b31ff33 on Ubuntu 16.04.2 LTS. I simply want to pass an environment variable and display it from my script running in a container. I am doing this according to the documentation: https://docs.docker.com/compose/compose-file/#environment. The problem is that the variable is not passed to the container.
My docker-compose.yml file:
env-file-test:
  build: .
  dockerfile: Dockerfile
  environment:
    - DEMO_VAR
My Dockerfile:
FROM alpine
COPY docker-start.sh /
CMD ["/docker-start.sh"]
And the docker-start.sh file:
#!/bin/sh
echo "DEMO_VAR Var Passed in: $DEMO_VAR"
I try to set the variable in my current terminal session and pass it to the container:
$ export DEMO_VAR=aabbdd
$ echo $DEMO_VAR
aabbdd
$ sudo docker-compose up
Starting envfiletest_env-file-test_1
Attaching to envfiletest_env-file-test_1
env-file-test_1 | DEMO_VAR Var Passed in:
envfiletest_env-file-test_1 exited with code 0
So you can see that the variable DEMO_VAR is empty!
I also tried using variables in docker-compose.yml like this: DEMO_VAR=${DEMO_VAR}, but then when I run sudo docker-compose up, I get a warning: "WARNING: The DEMO_VAR variable is not set. Defaulting to a blank string.".
What am I doing wrong? What should I do to pass the variable to the container?
I found a solution. Answering my own question...
The problem was with the sudo command. It turned out that it does not pass environment variables by default. There are some possible solutions:
Use sudo -E. Demo:
$ export DEMO_VAR=aabbdd
$ echo $DEMO_VAR
aabbdd
$ sudo -E docker-compose up
env-file-test_1 | DEMO_VAR Var Passed in: aabbdd
Use sudo VAR=value:
sudo DEMO_VAR=$DEMO_VAR docker-compose up
Add environment variables to the sudoers file (https://stackoverflow.com/a/8636711)
Use docker without sudo (https://askubuntu.com/questions/477551/how-can-i-use-docker-without-sudo)
You should use ENV in your Dockerfile and avoid export.
See the doc:
https://docs.docker.com/engine/reference/builder/#env
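A minimal sketch reusing the Dockerfile from the question (the default value here is hypothetical; the environment: entry in docker-compose.yml still overrides it at runtime):

FROM alpine
# Bakes a default into the image; compose's environment: section
# overrides it when the container starts.
ENV DEMO_VAR=default_value
COPY docker-start.sh /
CMD ["/docker-start.sh"]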
I installed ansible and apache-libcloud with pip. Also, I can use the gcloud CLI, and ansible works for any non-gce-related playbooks.
When using the gce module as a task to create instances in an ansible playbook, the following error occurs:
TASK: [Launch instances] ******************************************************
<127.0.0.1> REMOTE_MODULE gce instance_names=mm2 machine_type=f1-micro image=ubuntu-1204-precise-v20150625 zone=europe-west1-d service_account_email= pem_file=../pkey.pem project_id=fancystuff-11
<127.0.0.1> EXEC ['/bin/sh', '-c', 'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1437669562.03-233461447935889 && chmod a+rx $HOME/.ansible/tmp/ansible-tmp-1437669562.03-233461447935889 && echo $HOME/.ansible/tmp/ansible-tmp-1437669562.03-233461447935889']
<127.0.0.1> PUT /var/folders/v4/ll0_f8lj7yl7yghb645h95q9ckfc19/T/tmpyDoPt9 TO /Users/d046179/.ansible/tmp/ansible-tmp-1437669562.03-233461447935889/gce
<127.0.0.1> EXEC ['/bin/sh', '-c', u'LANG=en_US.UTF-8 LC_CTYPE=en_US.UTF-8 /usr/bin/python /Users/d046179/.ansible/tmp/ansible-tmp-1437669562.03-233461447935889/gce; rm -rf /Users/d046179/.ansible/tmp/ansible-tmp-1437669562.03-233461447935889/ >/dev/null 2>&1']
failed: [localhost -> 127.0.0.1] => {"failed": true, "parsed": false}
failed=True msg='libcloud with GCE support (0.13.3+) required for this module'
FATAL: all hosts have already failed -- aborting
And the site.yml of the playbook I wrote:
- name: Create a sandbox instance
  hosts: localhost
  vars:
    names: mm2
    machine_type: f1-micro
    image: ubuntu-1204-precise-v20150625
    zone: europe-west1-d
    service_account_email: xxx@developer.gserviceaccount.com
    pem_file: ../pkey.pem
    project_id: fancystuff-11
  tasks:
    - name: Launch instances
      local_action: gce instance_names={{names}} machine_type={{machine_type}}
                    image={{image}} zone={{zone}} service_account_email={{ service_account_email }}
                    pem_file={{ pem_file }} project_id={{ project_id }}
      register: gce
The gce cloud module fails with the error message "libcloud with GCE support (0.13.3+) required for this module".
However, running gce.py from the ansible github repo works. The python script finds the apache-libcloud library and prints a json with all running instances. Besides, pip install apache-libcloud states it is installed properly.
Is there anything I am missing like an environment variable that points to the python libraries (PYTHONPATH)?
UPDATE 1:
I included the following task before the gce task:
- name: install libcloud
  pip: name=apache-libcloud
This does not change the behavior or prevent the error message either.
UPDATE 2:
I added the following task to inspect the available PYTHONPATH:
- name: Getting PYTHONPATH
  local_action: shell python -c 'import sys; print(":".join(sys.path))'
  register: pythonpath

- debug:
    msg: "PYTHONPATH: {{ pythonpath.stdout }}"
The following is returned:
PYTHONPATH: :/usr/local/lib/python2.7/site-packages/setuptools-17.1.1-py2.7.egg:/usr/local/lib/python2.7/site-packages/pip-7.0.3-py2.7.egg:/usr/local/lib/python2.7/site-packages:/usr/local/Cellar/python/2.7.10/Frameworks/Python.framework/Versions/2.7/lib/python27.zip:/usr/local/Cellar/python/2.7.10/Frameworks/Python.framework/Versions/2.7/lib/python2.7:/usr/local/Cellar/python/2.7.10/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-darwin:/usr/local/Cellar/python/2.7.10/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac:/usr/local/Cellar/python/2.7.10/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac/lib-scriptpackages:/usr/local/Cellar/python/2.7.10/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-tk:/usr/local/Cellar/python/2.7.10/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-old:/usr/local/Cellar/python/2.7.10/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-dynload:/usr/local/lib/python2.7/site-packages:/Library/Python/2.7/site-packages
UPDATE 3:
I introduced my own test.py script as a task which executes the same apache-libcloud imports as the gce ansible module. The script imports just fine!!!
Setting the PYTHONPATH fixes the issue. For example:
$ export PYTHONPATH=/usr/local/lib/python2.7/site-packages/
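The same can be done inside the playbook itself; a sketch attaching the environment keyword to the gce task from above (the path is the one from the PYTHONPATH dump in update 2; adjust to your own):

- name: Launch instances
  local_action: gce instance_names={{names}} machine_type={{machine_type}}
                image={{image}} zone={{zone}} service_account_email={{ service_account_email }}
                pem_file={{ pem_file }} project_id={{ project_id }}
  register: gce
  environment:
    PYTHONPATH: /usr/local/lib/python2.7/site-packages/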
I'm using OSX and I solved this for myself. Short answer: install ansible with pip (rather than e.g. brew).
I inspected the PYTHONPATH that Ansible sets at runtime and it looked like it had nothing to do with my normal system PYTHONPATH. E.g. for me, my system PYTHONPATH was empty, and setting that like e.g. mlazarov suggested didn't make any difference. I made ansible print the PYTHONPATH it uses at runtime, and it looked like this:
ok: [localhost] => {
"msg": "PYTHONPATH: :/usr/local/Cellar/ansible/1.9.4/libexec/lib/python2.7/site-packages:/usr/local/Cellar/ansible/1.9.4/libexec/vendor/lib/python2.7/site-packages:/Library/Frameworks/Python.framework/Versions/3.4/lib/python34.zip:/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4:/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/plat-darwin:/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/lib-dynload:/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages"
}
So there's only ansible's own site-packages and some strange Python 3 installations (I'm using python2.7).
Something in this discussion made me think it might be a problem with the ansible installation, my ansible was installed with brew. I reinstalled it globally with pip (simply running sudo pip install ansible), and that fixed the problem. Now the PYTHONPATH ansible prints looks much better, with my virtualenv python installation in the beginning, and no more "libcloud with GCE support (0.13.3+) required for this module".
I was able to resolve the issue by setting the PYTHONPATH environment variable (export PYTHONPATH=/path/to/site-packages) to the current site-packages folder. Apparently, ansible establishes its own environment during module execution and ignores any paths available in Python except those from the PYTHONPATH environment variable.
I find this peculiar behavior, which is not documented on the ansible website.
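So, as a sketch, the variable can be exported just for the playbook run (the site-packages path is the one from the update above; adjust to your own):

$ PYTHONPATH=/usr/local/lib/python2.7/site-packages/ ansible-playbook site.yml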
I have a similar environment setup. I found some information at the bottom of this section: https://github.com/jlund/streisand#prerequisites
Essentially there are some magic files you can update so the brew'd ansible will add a folder to search for packages:
mkdir -p ~/Library/Python/2.7/lib/python/site-packages
echo '/usr/local/lib/python2.7/site-packages' > ~/Library/Python/2.7/lib/python/site-packages/homebrew.pth
Hope that fixes it for you!
In my case, it was simply a matter of:
pip install apache-libcloud
How to run manage.py from AWS EB (Elastic Beanstalk) Linux instance?
If I run it from '/opt/python/current/app', it shows the below exception.
Traceback (most recent call last):
File "./manage.py", line 8, in <module>
from django.core.management import execute_from_command_line
ImportError: No module named django.core.management
I think it's related to virtualenv. Any hints?
How to run manage.py from AWS Elastic Beanstalk AMI.
SSH login to Linux (eb ssh)
(optional: you may need to run sudo su - to have proper permissions)
source /opt/python/run/venv/bin/activate
source /opt/python/current/env
cd /opt/python/current/app
python manage.py <commands>
Or, you can run the command like below:
cd /opt/python/current/app
/opt/python/run/venv/bin/python manage.py <command>
With the new version, the Python paths seem to have changed.
The app is in /var/app/current
The virtual environment is in /var/app/venv/[KEY]
So the instructions are:
SSH to the machine using eb ssh
Check the path of your environment with ls /var/app/venv/. The only folder there is the [KEY] for the next step.
Activate the environment with source /var/app/venv/[KEY]/bin/activate
Execute the command python3 /var/app/current/manage.py <command>
Of course Amazon can change it anytime.
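Put together (keeping [KEY] as the placeholder from step 2):

eb ssh
ls /var/app/venv/                         # note the [KEY] directory
source /var/app/venv/[KEY]/bin/activate
python3 /var/app/current/manage.py <command>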
TL;DR
This answer assumes you have installed EB CLI. Follow these steps:
Connect to your running instance using ssh.
eb ssh <environment-name>
Once you are inside your environment, load the environment variables (this is important for database configuration)
. /opt/python/current/env
If you wish you can see the environment variables using printenv.
Activate your virtual environment
source /opt/python/run/venv/bin/activate
Navigate to your project directory (this will depend on your latest deployment, so use the number of your latest deployment instead of XX)
cd /opt/python/bundle/XX/app/
Run the command you wish:
python manage.py <command_name>
Running example
Assuming that your environment name is my-env, your latest deployment number is 13, and you want to run the shell command:
eb ssh my-env # 1
. /opt/python/current/env # 2
source /opt/python/run/venv/bin/activate # 3
cd /opt/python/bundle/13/app/ # 4
python manage.py shell # 5
As of February 2022 the solution is as follows:
$ eb ssh
$ sudo su -
$ export $(cat /opt/elasticbeanstalk/deployment/env | xargs)
$ source /var/app/venv/*/bin/activate
$ python3 /var/app/current/manage.py <command name>
The export $(cat /opt/elasticbeanstalk/deployment/env | xargs) step is needed to import your environment variables if you have a database connection (most likely you will)
I am trying to run some Django management commands via Fabric on my staging server.
The problem is that Fabric does not seem to be able to activate the virtualenv, and thus uses the system Python/libs when executing the commands.
On the server the Django app is run using a virtualenv (no, I don't use virtualenvwrapper yet...)
Using Fabric (1.0.1) a command might look like this when run from my box:
The fabfile method:
def collectstatic():
    require('settings', provided_by=[production, staging])
    with settings(warn_only=True):
        run('source %(env_path)s/bin/activate && python %(repo_path)s/%(project_name)s/configs/%(settings)s/manage.py collectstatic --noinput -v0' % env)
The output:
$ fab staging master collectstatic
[myserver.no] Executing task 'master'
[myserver.no] Executing task 'collectstatic'
[myserver.no] run: source /home/newsapps/sites/mysite/env/bin/activate && python /home/newsapps/sites/mysite/repository/mysite/configs/staging/manage.py collectstatic --noinput -v0
[myserver.no] Login password:
[myserver.no] out: Unknown command: 'collectstatic'
[myserver.no] out: Type 'manage.py help' for usage.
I know of course that the Django command collectstatic does not exist in versions prior to 1.3, which leads me to think that the system Python (which has Django 1.2) is being used.
My fabfile/project layout is based on the great fabfile of the Tribapps guys
So I created a fabric method to test the Python version:
def pythonver():
    require('settings', provided_by=[production, staging])
    with settings(warn_only=True):
        run('source %(env_path)s/bin/activate && echo "import sys; print sys.path" | python' % env)
When run it gives the following output:
$ fab staging master pythonver
[myserver.no] Executing task 'master'
[myserver.no] Executing task 'pythonver'
[myserver.no] run: source /home/newsapps/sites/mysite/env/bin/activate && echo "import sys; print sys.path" | python
[myserver.no] Login password:
[myserver.no] out: ['', '/usr/lib/python2.6', '/usr/lib/python2.6/plat-linux2', '/usr/lib/python2.6/lib-tk', '/usr/lib/python2.6/lib-old', '/usr/lib/python2.6/lib-dynload', '/usr/lib/python2.6/dist-packages', '/usr/lib/pymodules/python2.6', '/usr/lib/pymodules/python2.6/gtk-2.0',
As you can see it uses the system Python and not my virtualenv located in /home/newsapps/sites/mysite/env
But if I run this command directly on the server
source /home/newsapps/sites/mysite/env/bin/activate && echo "import sys; print sys.path" | python
... then it outputs the right paths from the virtualenv.
What am I doing wrong since the commands are not run with the python from my virtualenv using Fabric?
You should call the python binary from your virtualenv's bin directory; then you can be sure it uses the virtualenv's version of Python.
/home/newsapps/sites/mysite/env/bin/python /home/newsapps/sites/mysite/repository/mysite/configs/staging/manage.py collectstatic --noinput -v0
I wouldn't bother with activating the virtualenv, just give the full path to the virtualenv's python interpreter. That will then use the correct PYTHONPATH, etc.
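Applied to the fabfile from the question, the collectstatic task might become (a sketch; the env keys are the ones already defined there):

def collectstatic():
    require('settings', provided_by=[production, staging])
    with settings(warn_only=True):
        # No activate needed: the venv's own interpreter picks up the
        # virtualenv's site-packages by itself.
        run('%(env_path)s/bin/python %(repo_path)s/%(project_name)s/configs/%(settings)s/manage.py collectstatic --noinput -v0' % env)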
I had the same problem. Couldn't solve it the easy way. So I just used the full path to the python bin file inside the virtualenv. I'm not a pro in Python, but I guess it's the same thing in the end.
It goes something like this in my fab file:
PYTHON = '/home/dudus/.virtualenvs/pai/bin/python'
PIP = '/home/dudus/.virtualenvs/pai/bin/pip'
def update_db():
    with cd(REMOTE_DIR + 'application/'):
        run('%s ./manage.py syncdb --settings="%s"' %
            (PYTHON, SETTINGS))  # syncdb
        run('%s ./manage.py migrate --settings="%s"' %
            (PYTHON, SETTINGS))  # south migrate
This will work perfectly :)
from __future__ import with_statement
from fabric.api import *
from contextlib import contextmanager as _contextmanager
env.hosts = ['servername']
env.user = 'username'
env.directory = '/path/to/virtualenvs/project'
env.activate = 'source /path/to/virtualenvs/project/bin/activate'
@_contextmanager
def virtualenv():
    with cd(env.directory):
        with prefix(env.activate):
            yield

def deploy():
    with virtualenv():
        run('pip freeze')
This approach worked for me; you can apply it too.
from fabric.api import run
# ... other code...
def install_pip_requirements():
    # Keep activate, pip and deactivate inside one shell invocation so the
    # virtualenv is still active when pip runs.
    run("/bin/bash -l -c 'source venv/bin/activate "
        "&& pip install -r requirements.txt "
        "&& deactivate'")
Assuming venv is your virtual env directory, add this method wherever appropriate.