Local copy of PyPI for Python - python

I need to deploy a replica of PyPI on an internal network. The idea is to have all the PyPI packages in the local repository, avoiding connecting to the real PyPI repo all the time.
I used bandersnatch to mirror the files of PyPI according to PEP 381.
Then on the clients I dropped the following /etc/pip.conf:
[global]
index-url = http://www.myserver.com/repo/PyPI/web/simple
trusted-host = www.myserver.com
On the client machines the command:
pip install -v <some packages>
works using the local repo. However the command
pip search --index http://www.myserver.com/repo/PyPI/web/simple <some packages>
doesn't work and returns
raise HTTPError(http_error_msg, response=self)
pip._vendor.requests.exceptions.HTTPError: 404 Client Error: Not Found for url: http://www.myserver.com/repo/PyPI/web/
Here are two questions:
Is it possible to enable the pip search command without installing a local PyPI server such as pypiserver?
Moreover, is it possible to fall back to the official PyPI server if a local pip install command fails (e.g. the package is not present locally)?
Thanks
Charlie

bandersnatch just copies package files, which is not enough to have a replica of PyPI. You need a server-side program such as pypiserver, devpi, Artifactory, Nexus…

I've seen devpi use /+simple/; you might need your reverse proxy to rewrite the URL so that Artifactory can use it (/simple is required by the repo type in Artifactory).
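As for the fallback part of the question: pip itself can be pointed at more than one index. A minimal sketch of /etc/pip.conf that uses the local mirror plus the official index (the hostname is the one from the question; note that pip treats both indexes as equal candidates rather than a strict primary/fallback):
[global]
index-url = http://www.myserver.com/repo/PyPI/web/simple
extra-index-url = https://pypi.org/simple
trusted-host = www.myserver.com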

Related

How to configure my docker pypi server to use pypiserver[cache]?

I'm using the dockerized pypiserver as my internal pip server. I have thousands of requests and sometimes my server fails (i.e. reaches 5 timeouts).
pypiserver offers an option that can help with that: caching. How can I make my docker run with this option enabled (or is there another way to handle the request load better)?
The docker tutorial mentions a cache-related option, --cache-control AGE, but it has nothing to do with the package caching I want.
Here is my docker run command:
sudo docker run -p 80:8080 -v /home/bla/.pypi_server/packages:/data/packages pypiserver/pypiserver:latest
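One possible approach (a sketch, not something confirmed by the pypiserver docs quoted here) is to build a derived image that installs the caching extra, which pulls in watchdog; this assumes the official image lets you run pip as root during the build:
# Dockerfile (hypothetical derived image)
FROM pypiserver/pypiserver:latest
USER root
RUN pip install --no-cache-dir "pypiserver[cache]"
You would then build it (e.g. docker build -t my-pypiserver .) and use that tag in place of pypiserver/pypiserver:latest in the docker run command above.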

Unable to install packages with pip

My build machine doesn't have an internet connection, so I created a proxy repository in Nexus named "proxy_repo" which points to https://pypi.org/, and created ~/.pip/pip.conf on the build machine.
https://pypi.org/ is reachable from the build machine through Nexus.
The content of the pip.conf is as below:
[global]
trusted-host=MyPrivate-nexusrepo.com
index = https://MyPrivate-nexusrepo.com/content/repositories/proxy_repo/pypi
index-url = https://MyPrivate-nexusrepo.com/content/repositories/proxy_repo/simple
When I execute any pip command, say "pip -v install django", I always get the error below. Can someone please help?
Collecting django
1 location(s) to search for versions of django:
* https://MyPrivate-nexusrepo.com/content/repositories/proxy_repo/simple/django/
Getting page https://MyPrivate-nexusrepo.com/content/repositories/proxy_repo/simple/django/
Looking up "https://MyPrivate-nexusrepo.com/content/repositories/proxy_repo/simple/django/" in the cache
No cache entry available
Starting new HTTPS connection (1): MyPrivate-nexusrepo.com
"GET /content/repositories/proxy_repo/simple/django/ HTTP/1.1" 404 None
Could not fetch URL https://MyPrivate-nexusrepo.com/content/repositories/proxy_repo/simple/django/: 404 Client Error: Not Found for url: https://MyPrivate-nexusrepo.com/content/repositories/proxy_repo/simple/django/ - skipping
Could not find a version that satisfies the requirement django (from versions: )
Cleaning up...
No matching distribution found for django
I have encountered this problem before and solved mine by setting my date and time. If the computer's date and time are not current, it prevents fetching data from Python.org.
One thing that often trips up people running in restricted environments is that access to both of these servers must be allowed through your corporate firewall:
https://pypi.org/
https://files.pythonhosted.org/
The reason for this is that requests made to the first URL often redirect to the second one for the content requested.
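A quick way to verify this from the machine that actually talks to the internet (the Nexus host in this setup) is to hit both hosts directly; the curl commands below are just an illustrative connectivity check:
curl -I https://pypi.org/simple/django/
curl -I https://files.pythonhosted.org/
If the second host is blocked, index pages may still load while the actual package downloads, which redirect there, will fail.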

No module named 'flask' while using Vagrant

I am trying to set up Vagrant on my machine (Ubuntu 15.10 64-bit), and I followed the steps mentioned here: link
I am getting an error that Flask is not found when I run app.py.
Am I missing something here? It's mentioned that all packages from requirements will be installed automatically, but I am not able to make it work.
Steps are as follows:
Getting started
Install Vagrant
Clone this repo as your project name:
git clone git@github.com:paste/fvang.git NEW-PROJECT-NAME
Configure project name and host name in ansible/roles/common/vars/main.yml:
project_name: "fvang"
host_name: "fvang.local"
Modify your local /etc/hosts:
192.168.33.11 fvang.local
Build your Vagrant VM:
vagrant up
Log into the VM via SSH:
vagrant ssh
Start Flask development server:
cd ~/fvang
python app/app.py
I am the author of the FVANG repo, but I don't have the rep to join your chat. I posted a response on the github issue, see here:
https://github.com/paste/fvang/issues/2
I think the Ansible provisioning script failed to complete due to changes in Ansible 2.0 (otherwise Flask would have been installed from requirements.txt). You can check which version of Ansible was installed by running ansible --version. I will be upgrading the scripts to 2.0 shortly.
Edit --
I just updated the repo to work with Ansible 2.0 and simplified a few things. Everything should work as expected now, give it a shot. You'll probably want to just vagrant destroy and vagrant up again.
A Vagrant machine is as fresh as a newly installed operating system. You need to install every piece of software you need. Try this:
sudo pip install Flask
After installation, if you need to run the app, you need to uncomment Vagrant's IP (in the Vagrantfile) before accessing the Vagrant box; it generally turns out to be 192.168.33.10 on port 5000, as in the sketch below.
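A minimal sketch of the two pieces involved (the IP comes from the answer above; the app.run() line is an assumption about how app.py starts the dev server):
# Vagrantfile: uncomment/add a private network IP reachable from the host
config.vm.network "private_network", ip: "192.168.33.10"
# app/app.py: bind to all interfaces so the host machine can reach the dev server
app.run(host="0.0.0.0", port=5000)
The app should then be reachable from the host's browser at http://192.168.33.10:5000.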

How do I deploy a python application to an external server?

I have written a python script on my local laptop which uses several third party packages. I now want to run my script regularly (via a cron job) on an external server.
The external server most likely does not have all the dependencies installed. Is there a way to package and deploy my Python script and its dependencies in order to ensure that it will run?
I have already tried to package the script as an exe, but failed to do so.
It's not clear what kind of third-party packages you have, but for those that were installed with pip, you can do this in your dev environment:
$ pip freeze > requirements.txt
And then you can install these packages in your production environment:
$ pip install -r requirements.txt
Ideally, you will already have a virtualenv on your production box. If not, it may be well worth reading about virtualenvs before deploying your script.
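A minimal sketch of that workflow on the production box (the paths and script name are made up for illustration):
$ python3 -m venv /opt/myscript/venv
$ /opt/myscript/venv/bin/pip install -r requirements.txt
# crontab entry: run the script hourly with the venv's interpreter
0 * * * * /opt/myscript/venv/bin/python /opt/myscript/script.py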
Just turn your computer into a server. Set up your router for port forwarding so that your server's contents will be served when the router's IP is entered. You can of course purchase a DNS domain to give that IP a human-readable URL.

What ports does pip use?

This is hopefully a quick one to answer. I'm trying to provision a box on AWS with Puppet, and one of the steps involves a pip install from a requirements file. Something like this:
/usr/local/venv/ostcms/bin/pip install -r /vagrant/requirements.txt
The step basically fails because it can't find any of the packages in the requirements file, but when I open up the AWS box's security group to allow "All Traffic", the pip step works.
I'm trying to find the port that pip uses so I can have just that port, HTTP, and SSH open on the box and live happily ever after.
Pip talks to package indexes over standard HTTPS (port 443) and HTTP (port 80), so make sure outbound traffic on those ports is allowed in your AWS console; otherwise pip will get blocked when attempting to talk to PyPI (or anywhere else it cares to download from). If your environment routes traffic through an HTTP proxy (commonly Squid on port 3128), that proxy port needs to be reachable as well.
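If a proxy is involved, pip can also be pointed at it explicitly; the proxy host below is hypothetical:
/usr/local/venv/ostcms/bin/pip install --proxy http://proxy.example.com:3128 -r /vagrant/requirements.txt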
