pip does not log to file (using config option) - python

I edited the pip user config to log to a file using pip config set user.log ~/pip.log but it never writes to that file.
When I run pip with the --log ~/pip.log option it works though.
The output of pip config debug:
env_var:
env:
global:
/etc/xdg/pip/pip.conf, exists: False
/etc/pip.conf, exists: False
site:
/home/user/python/venv/speech/pip.conf, exists: False
user:
/home/user/.pip/pip.conf, exists: False
/home/user/.config/pip/pip.conf, exists: True
user.log: /home/user/pip.log

It turns out the option belongs in the [global] section of the config file. So unset the wrong key and set it properly:
pip config unset user.log
pip config set global.log ~/pip.log
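After this, the user config file (/home/user/.config/pip/pip.conf above) should contain the option under [global] — roughly like this, assuming the default user scope of pip config set:
[global]
log = /home/user/pip.log
and pip config debug should now report global.log instead of user.log.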

Related

Pip config settings not working for virtual environment

Studying https://pip.pypa.io/en/stable/topics/configuration/, I understand that I can have multiple pip.conf files (on a UNIX-based system), which are loaded in the described order.
My task is to write a bash script that automatically creates a virtual environment and sets pip configuration only for the virtual environment.
# my_bash_script.sh
...
python -m virtualenv .myvenv
....
touch pip.conf
# this will create path/to/.myvenv/pip.conf
# otherwise the following commands would end up in the user's pip.conf at ~/.config/pip/pip.conf
path/to/.myvenv/bin/python -m pip config set global.proxy "my-company-proxy.com"
# setting our company proxy here
path/to/.myvenv/bin/python -m pip config set global.trusted-host "pypi.org pypi.python.org files.pythonhosted.org"
# because of SSL issues from behind the company's firewall I need this to make pip work
...
My problem is that I want to set the configuration not for global but for site. If I change global.proxy and global.trusted-host to site.proxy and site.trusted-host, pip can no longer install packages, whereas everything works fine if I leave it at global. Changing them to install.proxy and install.trusted-host doesn't work either.
The pip.conf file looks like this afterwards:
# /path/to/.myvenv/pip.conf
[global]
proxy = "my-company-proxy.com"
trusted-host = "pypi.org pypi.python.org files.pythonhosted.org"
pip config debug yields the following:
env_var:
env:
global:
/etc/xdg/pip/pip.conf, exists: False
/etc/pip.conf, exists: False
site:
/path/to/.myvenv/pip.conf, exists: True
global.proxy: my-company-proxy.com
global.trusted-host: pypi.org pypi.python.org files.pythonhosted.org
user:
/path/to/myuser/.pip/pip.conf, exists: False
/path/to/myuser/.config/pip/pip.conf, exists: True
What am I missing here?
Thank you in advance for your help!
The [global] in the config file means that these settings apply to all pip commands; see this section of the manual. So you can do something like:
[global]
timeout = 60
[freeze]
timeout = 10
The global/site distinction, on the other hand, comes from the location of the config file: your file /path/to/.myvenv/pip.conf counts as the site config file purely because of where it lives. Inside it, you still need to have
[global]
proxy = "my-company-proxy.com"
trusted-host = "pypi.org pypi.python.org files.pythonhosted.org"
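As a side note for the script: if your pip version supports the --site scope flag on pip config (recent pip releases do; this is an assumption about your environment), you can write directly into the venv's pip.conf without the touch pip.conf workaround:
path/to/.myvenv/bin/python -m pip config --site set global.proxy "my-company-proxy.com"
path/to/.myvenv/bin/python -m pip config --site set global.trusted-host "pypi.org pypi.python.org files.pythonhosted.org"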

Python: Using .env (dotenv) file with tox

In my Python project, I'm reading environment variables from a .env file. I am actually using pydantic to read/verify the env vars.
When using tox, the .env file is completely ignored. I am wondering how to make tox acknowledge the existence of .env.
Here's my tox.ini
[tox]
envlist = py39
[testenv]
deps = -r requirements-dev.txt
commands = pytest {posargs}
My .env file:
ENV_STATE="prod" # dev or prod
At first, I thought maybe pydantic loads the content of the .env file as environment variables, that is why I wrote this as my first answer:
original answer
tox does some isolation work, so your builds / tests are more reproducible.
This means that e.g. environment variables are filtered out unless you whitelist them.
You probably need to set
passenv = YOUR_ENVIRONMENT_VARIABLE
See also the tox documentation.
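For reference, with the tox.ini from the question that would look roughly like this (ENV_STATE being the variable from the .env file):
[tox]
envlist = py39
[testenv]
passenv = ENV_STATE
deps = -r requirements-dev.txt
commands = pytest {posargs}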
updated answer
This does not seem to be a tox issue at all.
I just created a simple project with pydantic and dotenv, and it works like a charm with tox.
tox.ini
[tox]
envlist = py39
skipsdist = True
[testenv]
deps = pydantic[dotenv]
commands = pytest {posargs}
.env
ENVIRONMENT="production"
main.py
from pydantic import BaseSettings
class Settings(BaseSettings):
    environment: str

    class Config:
        env_file = ".env"
        env_file_encoding = "utf-8"
test_main.py
from main import Settings
def test_settings():
    settings = Settings(_env_file=".env")
    assert settings.environment == "production"
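To double-check that tox is not the variable here, the same test also passes when run directly, outside tox (assuming pydantic v1 with the dotenv extra, as in the deps above):
pip install "pydantic[dotenv]" pytest
pytest test_main.py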

Python: Flask loading venv with sh

I'm trying to launch a server and load a venv with the modules installed. So basically I made a start.sh file, which is:
export FLASK_APP=wsgi.py
export SECRET_KEY=test
export FLASK_DEBUG=1
export APP_CONFIG_FILE=config.py
export FLASK_RUN_PORT=80
which python3
flask run
The output of which python3 is /usr/bin/python3.
And my config.py file which is:
"""Set Flask configuration vars from .env file."""
# General Config
SECRET_KEY = os.environ.get('SECRET_KEY')
FLASK_APP = os.environ.get('FLASK_APP')
FLASK_ENV = os.environ.get('FLASK_ENV')
FLASK_DEBUG = os.environ.get('FLASK_DEBUG')
PERMANENT_SESSION_LIFETIME = timedelta(minutes=30)
SQLALCHEMY_DATABASE_URI = os.environ.get('SQLALCHEMY_DATABASE_URI')
SQLALCHEMY_TRACK_MODIFICATIONS = os.environ.get('SQLALCHEMY_TRACK_MODIFICATIONS')
So basically I launch my server with sudo sh start.sh, but when I try to connect to the server it says the modules are not found, which means it is using the other interpreter. When I do sudo pip3 install hello-world, I get past one module-not-found error. Btw, I have a .env folder with all the modules installed in my application.
You need to activate the venv:
#!/bin/bash
cd $(dirname $0)
. venv/bin/activate
export FLASK_APP=wsgi.py
export SECRET_KEY=test
export FLASK_DEBUG=1
export APP_CONFIG_FILE=config.py
export FLASK_RUN_PORT=80
which python3
flask run
^ Try this; you can run it from any directory. (If your virtual environment folder is named .env instead of venv, adjust the activate line accordingly.)
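For example, make the script executable and run it from the project directory:
chmod +x start.sh
sudo ./start.sh   # sudo is only needed here because FLASK_RUN_PORT=80 is a privileged port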

cassandra-snapshotter: not found

I installed cassandra-snapshotter using pip install cassandra_snapshotter. It works fine if I run it in a terminal with the command
sudo cassandra-snapshotter --s3-bucket-name=vivek-bucket \
    --s3-base-path=cassandra --aws-access-key-id=XXXX --aws-secret-access-key=XXX backup --hosts=172.31.2.85 --user ubuntu \
    --sshkey=/home/ubuntu/XXXX.pem --cassandra-conf-path=/etc/dse/cassandra --use-sudo=yes --new-snapshot
When I tried the same command with Ansible, it ended with this error:
"start": "2017-04-25 10:02:39.111333",
"stderr": "/bin/sh: 1: cassandra-snapshotter: not found",
"stderr_lines": [
"/bin/sh: 1: cassandra-snapshotter: not found"
]
- name: snapshot and backup
  hosts: localhost
  connection: local
  become: yes
  tasks:
    - name: taking snapshot
      shell: cassandra-snapshotter --s3-bucket-name=vivek-bucket --s3-base-path=cassandra --aws-access-key-id=XXXX --aws-secret-access-key=XXX backup --hosts=172.31.2.85 --user ubuntu --sshkey=/home/ubuntu/XXXX.pem --cassandra-conf-path=/etc/dse/cassandra --use-sudo=yes --new-snapshot
pip installs executables in its own location, and that location is probably not in Ansible's search path. You can either set the PATH environment variable in your Ansible task and extend it to include that location, or just run which cassandra-snapshotter on the command line and put the full path to the executable in your playbook.
Also: I don't think you are using any shell features in that cassandra-snapshotter call. It's better to use the command module (https://docs.ansible.com/ansible/command_module.html) when possible.
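A sketch of the PATH approach, assuming the executable landed in /usr/local/bin (verify with which cassandra-snapshotter; the directory here is an assumption):
- name: taking snapshot
  command: cassandra-snapshotter --s3-bucket-name=vivek-bucket --s3-base-path=cassandra --aws-access-key-id=XXXX --aws-secret-access-key=XXX backup --hosts=172.31.2.85 --user ubuntu --sshkey=/home/ubuntu/XXXX.pem --cassandra-conf-path=/etc/dse/cassandra --use-sudo=yes --new-snapshot
  environment:
    PATH: "/usr/local/bin:{{ ansible_env.PATH }}"
Alternatively, replace cassandra-snapshotter in the command with the full path that which reported and drop the environment block.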

GCE module in Ansible cannot find apache-libcloud although gce.py works

I installed ansible and apache-libcloud with pip. I can also use the gcloud CLI, and ansible works for any non-GCE-related playbooks.
When using the gce module as a task to create instances in an ansible playbook, the following error occurs:
TASK: [Launch instances] ******************************************************
<127.0.0.1> REMOTE_MODULE gce instance_names=mm2 machine_type=f1-micro image=ubuntu-1204-precise-v20150625 zone=europe-west1-d service_account_email= pem_file=../pkey.pem project_id=fancystuff-11
<127.0.0.1> EXEC ['/bin/sh', '-c', 'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1437669562.03-233461447935889 && chmod a+rx $HOME/.ansible/tmp/ansible-tmp-1437669562.03-233461447935889 && echo $HOME/.ansible/tmp/ansible-tmp-1437669562.03-233461447935889']
<127.0.0.1> PUT /var/folders/v4/ll0_f8lj7yl7yghb645h95q9ckfc19/T/tmpyDoPt9 TO /Users/d046179/.ansible/tmp/ansible-tmp-1437669562.03-233461447935889/gce
<127.0.0.1> EXEC ['/bin/sh', '-c', u'LANG=en_US.UTF-8 LC_CTYPE=en_US.UTF-8 /usr/bin/python /Users/d046179/.ansible/tmp/ansible-tmp-1437669562.03-233461447935889/gce; rm -rf /Users/d046179/.ansible/tmp/ansible-tmp-1437669562.03-233461447935889/ >/dev/null 2>&1']
failed: [localhost -> 127.0.0.1] => {"failed": true, "parsed": false}
failed=True msg='libcloud with GCE support (0.13.3+) required for this module'
FATAL: all hosts have already failed -- aborting
And the site.yml of the playbook I wrote:
- name: Create a sandbox instance
  hosts: localhost
  vars:
    names: mm2
    machine_type: f1-micro
    image: ubuntu-1204-precise-v20150625
    zone: europe-west1-d
    service_account_email: xxx@developer.gserviceaccount.com
    pem_file: ../pkey.pem
    project_id: fancystuff-11
  tasks:
    - name: Launch instances
      local_action: gce instance_names={{names}} machine_type={{machine_type}}
                    image={{image}} zone={{zone}} service_account_email={{ service_account_email }}
                    pem_file={{ pem_file }} project_id={{ project_id }}
      register: gce
The gce cloud module fails with the error message "libcloud with GCE support (0.13.3+) required for this module".
However, running gce.py from the ansible github repo works. The python script finds the apache-libcloud library and prints a json with all running instances. Besides, pip install apache-libcloud states it is installed properly.
Is there anything I am missing like an environment variable that points to the python libraries (PYTHONPATH)?
UPDATE 1:
I included the following task before the gce task:
- name: install libcloud
  pip: name=apache-libcloud
This does not change the behavior or prevent the error message either.
Update 2:
I added the following task to inspect the available PYTHONPATH:
- name: Getting PYTHONPATH
  local_action: shell python -c 'import sys; print(":".join(sys.path))'
  register: pythonpath

- debug:
    msg: "PYTHONPATH: {{ pythonpath.stdout }}"
The following is returned:
PYTHONPATH: :/usr/local/lib/python2.7/site-packages/setuptools-17.1.1-py2.7.egg:/usr/local/lib/python2.7/site-packages/pip-7.0.3-py2.7.egg:/usr/local/lib/python2.7/site-packages:/usr/local/Cellar/python/2.7.10/Frameworks/Python.framework/Versions/2.7/lib/python27.zip:/usr/local/Cellar/python/2.7.10/Frameworks/Python.framework/Versions/2.7/lib/python2.7:/usr/local/Cellar/python/2.7.10/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-darwin:/usr/local/Cellar/python/2.7.10/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac:/usr/local/Cellar/python/2.7.10/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac/lib-scriptpackages:/usr/local/Cellar/python/2.7.10/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-tk:/usr/local/Cellar/python/2.7.10/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-old:/usr/local/Cellar/python/2.7.10/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-dynload:/usr/local/lib/python2.7/site-packages:/Library/Python/2.7/site-packages
UPDATE 3:
I introduced my own test.py script as a task which executes the same apache-libcloud imports as the gce ansible module. The script imports just fine!!!
Setting the PYTHONPATH fixes the issue. For example:
$ export PYTHONPATH=/usr/local/lib/python2.7/site-packages/
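If you would rather keep that inside the playbook than in your shell profile, the same thing can be attached to the task itself, reusing the vars from the play above (a sketch; the site-packages path should be whatever your libcloud installation actually reports):
- name: Launch instances
  local_action: gce instance_names={{names}} machine_type={{machine_type}}
                image={{image}} zone={{zone}} service_account_email={{ service_account_email }}
                pem_file={{ pem_file }} project_id={{ project_id }}
  environment:
    PYTHONPATH: /usr/local/lib/python2.7/site-packages
  register: gce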
I'm using OSX and solved this for myself. Short answer: install ansible with pip (rather than e.g. brew).
I inspected the PYTHONPATH that Ansible sets at runtime and it looked like it had nothing to do with my normal system PYTHONPATH. E.g. for me, my system PYTHONPATH was empty, and setting it like e.g. mlazarov suggested didn't make any difference. I made ansible print the PYTHONPATH it uses at runtime, and it looked like this:
ok: [localhost] => {
"msg": "PYTHONPATH: :/usr/local/Cellar/ansible/1.9.4/libexec/lib/python2.7/site-packages:/usr/local/Cellar/ansible/1.9.4/libexec/vendor/lib/python2.7/site-packages:/Library/Frameworks/Python.framework/Versions/3.4/lib/python34.zip:/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4:/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/plat-darwin:/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/lib-dynload:/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages"
}
So there's only ansible's own site-packages and some strange Python 3 installations (I'm using python2.7).
Something in this discussion made me think it might be a problem with the ansible installation; my ansible was installed with brew. I reinstalled it globally with pip (simply running sudo pip install ansible), and that fixed the problem. Now the PYTHONPATH ansible prints looks much better, with my virtualenv python installation in the beginning, and no more "libcloud with GCE support (0.13.3+) required for this module".
I was able to resolve the issue by setting the PYTHONPATH environment variable (export PYTHONPATH=/path/to/site-packages) with the current site-packages folder. Apparently, ansible establishes its own environment during module execution and ignores any paths available in python except the paths from the environment variable PYTHONPATH.
I find this a peculiar behavior which is not documented on the ansible websites.
I have a similar environment setup. I found some information at the bottom of this section: https://github.com/jlund/streisand#prerequisites
Essentially there are some magic files you can update so the brew'd ansible will add a folder to search for packages:
mkdir -p ~/Library/Python/2.7/lib/python/site-packages
echo '/usr/local/lib/python2.7/site-packages' > ~/Library/Python/2.7/lib/python/site-packages/homebrew.pth
Hope that fixes it for you!
In my case, the fix was simply:
pip install apache-libcloud
