Studying https://pip.pypa.io/en/stable/topics/configuration/, I understand that I can have multiple pip.conf files (on a UNIX-based system), which are loaded in the described order.
My task is to write a bash script that automatically creates a virtual environment and sets pip configuration only for the virtual environment.
# my_bash_script.sh
...
python -m virtualenv .myvenv
....
touch path/to/.myvenv/pip.conf
# this creates path/to/.myvenv/pip.conf
# otherwise the following commands would end up in the user's pip.conf at ~/.config/pip/pip.conf
path/to/.myvenv/bin/python -m pip config set global.proxy "my-company-proxy.com"
# setting our company proxy here
path/to/.myvenv/bin/python -m pip config set global.trusted-host "pypi.org pypi.python.org files.pythonhosted.org"
# because of SSL issues from behind the company's firewall I need this to make pip work
...
My problem is that I want to set the configuration not for global but for site (i.e. only for the virtual environment). If I exchange global.proxy and global.trusted-host for site.proxy and site.trusted-host, pip is no longer able to install packages, whereas everything works fine if I leave it at global. Changing it to install.proxy and install.trusted-host doesn't work either.
The pip.conf file looks like this afterwards:
# /path/to/.myvenv/pip.conf
[global]
proxy = "my-company-proxy.com"
trusted-host = "pypi.org pypi.python.org files.pythonhosted.org"
pip config debug yields the following:
env_var:
env:
global:
  /etc/xdg/pip/pip.conf, exists: False
  /etc/pip.conf, exists: False
site:
  /path/to/.myvenv/pip.conf, exists: True
    global.proxy: my-company-proxy.com
    global.trusted-host: pypi.org pypi.python.org files.pythonhosted.org
user:
  /path/to/myuser/.pip/pip.conf, exists: False
  /path/to/myuser/.config/pip/pip.conf, exists: True
What am I missing here?
Thank you in advance for your help!
The [global] in the config file means that those settings apply to all pip commands. See this section of the manual. So you can do something like
[global]
timeout = 60
[freeze]
timeout = 10
The global/site distinction comes from the location of the config file. So your file /path/to/.myvenv/pip.conf is referred to as the site config file through its location. In it, you still need to have
[global]
proxy = "my-company-proxy.com"
trusted-host = "pypi.org pypi.python.org files.pythonhosted.org"
I am trying to dockerize this repo. After building it like so:
docker build -t layoutlm-v2 .
I try to run it like so:
docker run -d -p 5001:5000 layoutlm-v2
It downloads the necessary libraries and packages.
And then nothing... No errors, no endpoints generated, just radio silence.
What's wrong? And how do I fix it?
You appear to be expecting your application to offer a service on port 5000, but it doesn't appear as if that's how your code behaves.
Looking at your code, you seem to be launching a service using gradio. According to the quickstart, calling gr.Interface(...).launch() will launch a service on localhost:7860, and indeed, if you inspect a container booted from your image, we see:
root@74cf8b2463ab:/app# ss -tln
State    Recv-Q   Send-Q      Local Address:Port       Peer Address:Port   Process
LISTEN   0        2048            127.0.0.1:7860             0.0.0.0:*
There's no way to access a service listening on localhost from outside the container, so we need to figure out how to fix that.
Looking at these docs, it looks like you can control the listen address using the server_name parameter:
server_name
to make app accessible on local network, set this to "0.0.0.0". Can be set by environment variable GRADIO_SERVER_NAME. If None, will use "127.0.0.1".
So if we run your image like this:
docker run -p 7860:7860 -e GRADIO_SERVER_NAME=0.0.0.0 layoutlm-v2
Then we should be able to access the interface on the host at http://localhost:7860/, and indeed, that seems to work.
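As a quick sanity check from the host (hypothetical, assuming the container was started as above), a plain HTTP request should now get a response from gradio:
curl -I http://localhost:7860/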
Unrelated to your question:
You're setting up a virtual environment in your Dockerfile, but you're not using it, primarily because of a typo here:
ENV PATH="VIRTUAL_ENV/bin:$PATH"
You're missing a $ on $VIRTUAL_ENV.
You could optimize the order of operations in your Dockerfile. Right now, making a simple change to your Dockerfile (e.g., editing the CMD setting) will cause much of your image to be rebuilt. You could avoid that by restructuring the Dockerfile like this:
FROM python:3.9
# Install dependencies
RUN apt-get update && apt-get install -y tesseract-ocr
RUN pip install virtualenv && virtualenv venv -p python3
ENV VIRTUAL_ENV=/venv
ENV PATH="$VIRTUAL_ENV/bin:$PATH"
WORKDIR /app
COPY requirements.txt ./
RUN pip install -r requirements.txt
RUN git clone https://github.com/facebookresearch/detectron2.git
RUN python -m pip install -e detectron2
COPY . /app
# Run the application:
CMD ["python", "-u", "app.py"]
I edited the pip user config to log to a file using pip config set user.log ~/pip.log but it never writes to that file.
When I run pip with the --log ~/pip.log option it works though.
The output of the pip config debug:
env_var:
env:
global:
  /etc/xdg/pip/pip.conf, exists: False
  /etc/pip.conf, exists: False
site:
  /home/user/python/venv/speech/pip.conf, exists: False
user:
  /home/user/.pip/pip.conf, exists: False
  /home/user/.config/pip/pip.conf, exists: True
    user.log: /home/user/pip.log
The setting has to go into the [global] section of the config file; user.log puts it into a [user] section, which pip ignores (the part before the dot is the config-file section, not a scope). So unset it and set it properly:
pip config unset user.log
pip config set global.log ~/pip.log
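A quick way to confirm it landed where pip will read it (optional):
pip config list
# should now show something like: global.log='/home/user/pip.log'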
I have tried using pip with index-url in pip.conf. However, I cannot be sure that a single index provides all the necessary Python libraries. So I want to know whether pip supports specifying more than one index URL in the [global] section of pip.conf.
In your pip.conf, you will also have to add the index hosts as trusted, so it would look something like this:
[global]
index-url = http://download.zope.org/simple
trusted-host = download.zope.org
               pypi.org
               secondary.extra.host
extra-index-url = http://pypi.org/simple
                  http://secondary.extra.host/simple
In this example, you have a primary index and two extra index urls and all hosts are trusted.
If you don't specify the host as trusted, you will get the following error:
The repository located at secondary.extra.host is not a trusted or secure host and is being ignored. If this repository is available via HTTPS it is recommended to use HTTPS instead, otherwise you may silence this warning and allow it anyways with '--trusted-host secondary.extra.host'.
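The same thing can be done for a one-off install on the command line; both --extra-index-url and --trusted-host can be repeated (the package name is a placeholder):
pip install somepackage \
    --index-url http://download.zope.org/simple \
    --extra-index-url http://pypi.org/simple \
    --extra-index-url http://secondary.extra.host/simple \
    --trusted-host download.zope.org \
    --trusted-host pypi.org \
    --trusted-host secondary.extra.host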
Cheers!
If you want more than one package index, you have to use the --extra-index-url option.
From the pip man page:
-i, --index-url <url>
    Base URL of Python Package Index (default https://pypi.python.org/simple/).
--extra-index-url <url>
    Extra URLs of package indexes to use in addition to --index-url.
In pip.conf, the setting names are written without the leading --. From the documentation:
The names of the settings are derived from the long command line option, e.g. if you want to use a different package index (--index-url) and set the HTTP timeout (--default-timeout) to 60 seconds your config file would look like this:
[global]
timeout = 60
index-url = http://download.zope.org/ppix
So you can add this to your pip.conf:
extra-index-url = http://myserver.com/pip
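With that in place, pip announces which indexes it will consult when installing (the output line is approximate, and the URLs are the hypothetical ones from this example):
pip install somepackage
# Looking in indexes: http://download.zope.org/ppix, http://myserver.com/pip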
Updating radtek's answer with the new URL for PyPI.
It changed to https://pypi.org.
So for your pip to be able to fall back to the original PyPI server, you'll need to add "https://pypi.org/simple" as an extra-index-url while keeping your local server as index-url.
Don't forget to add both to your "trusted-host" list.
This update is based on the comment of onelaview: "Official PyPI now supports HTTPS so you can specify https://pypi.org/simple/ for extra-index-URL and avoid specifying pypi.org in trusted-host."
So your pip.conf needs to contain the following:
[global]
index-url = https://somedomain.org/simple
trusted-host = somedomain.org
               pypi.org
               secondary.extra.host
# either http://pypi.org/simple or https://pypi.org/simple is fine here
extra-index-url = https://pypi.org/simple
                  http://secondary.extra.host/simple
You can also do this by setting an environment variable:
export PIP_EXTRA_INDEX_URL=http://localhost:8080/simple/
which is equivalent to
[global]
extra-index-url = http://localhost:8080/simple/
but does not require a pip.conf file
I'd add to Tomasz Bartkowiak's answer: you can pass multiple URLs to PIP_TRUSTED_HOST and PIP_EXTRA_INDEX_URL by separating them with spaces:
export PIP_TRUSTED_HOST="somedomain.org pypi.org secondary.extra.host"
export PIP_EXTRA_INDEX_URL="http://pypi.org/simple https://pypi.org/simple http://secondary.extra.host/simple"
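The variables can also be scoped to a single command instead of being exported for the whole shell session (hosts are the placeholders from above):
PIP_TRUSTED_HOST="somedomain.org pypi.org secondary.extra.host" \
PIP_EXTRA_INDEX_URL="http://pypi.org/simple https://pypi.org/simple http://secondary.extra.host/simple" \
pip install somepackage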
I need to deploy a Python application to AWS Elastic Beanstalk, however this module requires dependencies from our private PyPi index. How can I configure pip (like what you do with ~/.pip/pip.conf) so that AWS can connect to our private index while deploying the application?
My last resort is to modify the dependency in requirements.txt to -i URL dependency before deployment, but there must be a cleaner way to achieve this goal.
In .ebextensions/files.config add something like this:
files:
  "/opt/python/run/venv/pip.conf":
    mode: "000755"
    owner: root
    user: root
    content: |
      [global]
      find-links = <URL>
      trusted-host = <HOST>
      index-url = <URL>
Or whatever other configurations you'd like to set in your pip.conf. This will place the pip.conf file in the virtual environment of your application, which will be activated before pip install -r requirements.txt is executed. Hopefully this helps!
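If you want to double-check that the file ends up where the application's pip reads it, one way (assuming the EB CLI is configured for the environment) is to SSH into an instance after a deploy:
eb ssh
cat /opt/python/run/venv/pip.conf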
I want to change directory, and when I run the command with cd('myApp') I get:
No hosts found. Please specify (single) host string for connection:
I have this code:
from fabric.api import cd, local, run
from fabric.colors import green

def example():
    local('sudo apt-get install python-dev libmysqlclient-dev')
    local('pip install MySQL-python')
    local('sudo apt-get install apache2')
    with cd('myApp'):
        run('pwd')
        run('python manage.py syncdb --no-initial-data')
        run('python manage.py migrate')
    print(green('DONE.'))
As per the official tutorial, the error means that you have not specified a host for Fabric to connect to in your fabfile. Please check here.
Other than that, in the cd() call (used alongside the with statement), use the full path, like
with cd('/path/to/directory/myApp')
rather than just 'myApp', even if it is just '/myApp'. It improves readability and also makes sure that it really is the path you want to end up in.
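As for the missing host, one simple option (a sketch; the user and hostname are placeholders) is to pass it on the command line when invoking the task:
fab -H deploy@example.com example
Hosts can also be defined inside the fabfile through Fabric's env settings if you prefer not to pass them each time.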