I maintain a Python package on PyPI and I am having trouble testing changes. I would like to install it on my own computer and test it before updating my version on PyPI. I try to run "python setup.py install" but it doesn't reflect the changes that I have made. The only way I have been able to get it to reflect changes is to upload it to PyPI and update it with pip. Is there any way to update using the setup.py? Or any way to install it on my computer before uploading to PyPI? I would assume there has to be, but I have not been able to find anything for it yet.
You can use editable installs with pip to install from your source and have it update as you make changes:
$ pip install -e .
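If you just want a one-off local install rather than an editable one, installing straight from the project directory should also work (run this from the folder that contains your setup.py):
$ pip install .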
For some context, my project uses python version 3.8 and has some packages in a requirements.txt file. I'm trying to upgrade the python version to 3.10 and also all the packages in my requirements.txt file to the latest possible version such that there are no conflicts between dependencies. Is there a way to do this?
It is a bit hard to say what will work best for you, since you've given no info on your OS, and what you need is done differently on macOS and Windows.
If you're on macOS (this should also work for Linux, I guess), the best way to manage Python versions is with pyenv and to update Python packages with pip-review, both of which you can install via brew.
So first, you want to create a list of all the packages you have installed. To do that, type pip3 freeze > requirements.txt in the terminal. The command creates a txt file in your current directory. In the file, you can see the names of all your installed packages, and it will look like this:
astroid==2.11.6
async-generator==1.10
attrs==21.4.0
autopep8==1.6.0
beautifulsoup4==4.11.1
Secondly, to update your Python version, you'll need to type pyenv install -l in the terminal to get a list of all currently available versions of Python. To install the one you need, let's say the most recent at the moment, 3.10.5, type pyenv install 3.10.5. To set this version as the default one, type pyenv global 3.10.5. If you don't need your previous version of Python, just type pyenv uninstall followed by a space and the number of the version you want to delete (it's really just like installing one).
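Putting those pyenv steps together, the sequence looks roughly like this (3.10.5 and 3.8.13 are just example version numbers):
pyenv install -l        # list all Python versions pyenv can install
pyenv install 3.10.5    # install the version you need
pyenv global 3.10.5     # set it as the default version
pyenv uninstall 3.8.13  # optional: remove the previous version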
Thirdly, to move all your previous packages into the newly installed Python version, just type pip3 install -r requirements.txt in the terminal. The command will install the same package versions you had before.
Lastly, to upgrade the packages to their most recent versions, type pip-review --auto. It will fetch the latest versions available and install them with no further commands from you.
Now here's why I think pip-review works better than anything else I've tried so far: in case some dependencies are broken (i.e. a package you are using needs an older version of another package than the one you've just upgraded to), it will point that out in the terminal, and you can install the right one.
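If you'd rather approve each upgrade yourself instead of taking them all at once, pip-review also has an interactive mode, as far as I know:
pip-review --interactive   # asks before upgrading each package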
Yes, you can do that; you just need to update your requirements file and then install from it. It will automatically update all your packages.
You can update to Python 3.10 and drop the previous requirements.txt into your project root directory. From there you can run pip install -r requirements.txt to install all the packages.
You can also run pip install -U <package-name> to update any package to the latest version.
Also, to re-generate a new requirements.txt file, run pip freeze > requirements.txt.
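Put together, the whole sequence would be something like this (requests is just a stand-in for whatever package you want to upgrade individually):
pip install -r requirements.txt   # reinstall your pinned packages on the new Python
pip install -U requests           # upgrade a single package to its latest version
pip freeze > requirements.txt     # write the new versions back to requirements.txt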
Pretty sure there is no such thing as automatically updating dependencies in such a way that there are no conflicts. Some dependencies may have changes that are not backward compatible, and a complete test of all your project's features will be necessary whenever you change the version of anything.
There is also an open-source project called Renovate that can help you maintain your packages and update them to recent versions.
What is the main difference between installing a Python package with pip install and with python setup.py install, using the files from a GitHub repository?
From what I understand right now, I kind of have the feeling that with the second option you install the repo in some sort of developer mode, where you can make changes by directly operating on the files cloned from the git repo. Is this correct? I would like to find a proper explanation of this.
I have pip3, installed via the yum install of python3-pip.
I've done a pip3 global install of some modules I need, but python3 can't find them to import. After a little investigation I see that pip3 installed the modules to /usr/lib/python3.6/site-packages/pip/_vendor/
The problem is that python3 doesn't seem to know to look in pip/_vendor; it only finds modules installed directly under site-packages. If I just copy the modules from .../site-packages/pip/_vendor to .../site-packages, everything works fine.
The issue doesn't appear to be related to file permissions or ability to read the modules.
I'm wondering how to configure either pip to install directly to site-packages, or python3 to look in the pip/_vendor location.
I'm configuring this all with ansible and would like as modular an option as possible. For instance, I could manually use an argument to tell pip3 to install to the folder I want, but I don't want to hardcode the exact site-packages directory if I don't have to.
I recommend starting over with pip by downloading and running get-pip.py. This will not only install the latest version of pip, but pip will then install packages to a location that Python can actually read (specifically, the Python you use to run get-pip.py).
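Something along these lines should do it (get-pip.py is the official bootstrap script from pypa.io):
curl -O https://bootstrap.pypa.io/get-pip.py
python3 get-pip.py    # installs pip for the interpreter you run it with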
As an aside, I would avoid installing packages system-wide unless there is a specific need for them. At the very least, you should be installing them as a regular user, and even better you should be using a virtualenv.
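For reference, a minimal virtualenv workflow looks like this (.venv is just a conventional directory name):
python3 -m venv .venv        # create the environment
source .venv/bin/activate    # activate it (bash/zsh)
pip install <some-package>   # installs now go into the venv, not system-wide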
I am learning how to use venv here: https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/#installing-from-source
And it says I can install a source package by:
python3 -m pip install .
Which works, but now if I do pip freeze then I see:
my-package @ file:///Users/joesmith/my-package
The problem is that if I export this to a requirements.txt and try to install from it on another machine, it won't work, because the path to the source will obviously be different.
What is the proper way to use a source package locally like I did, but also export it afterwards so that another person can recreate the environment and run the code on another machine?
Pip has support for VCSs like Git. You can upload your code to a Git host (e.g. GitHub, GitLab, ...) and then reference it in your requirements.txt like this:
git+http://git.example.com/MyProject#egg=MyProject
https://pip.pypa.io/en/stable/cli/pip_install/#vcs-support
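So instead of the file:// line, your requirements.txt could contain an entry like this (the GitHub URL and tag are illustrative, reusing the names from your pip freeze output):
git+https://github.com/joesmith/my-package@v1.0.0#egg=my-package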
You could install the package from PyPI rather than from source.
e.g. pip install requests
That way, other developers will also be able to run your project easily.
I am looking to download a Python library (specifically this one) from GitHub.
I had already downloaded it using pip install espnff, but it appears changes have been made to it, and the only way to get the updated version is through GitHub. I should also mention that I use Python with the Anaconda distribution, if that affects anything.
How do I download and update what I already have?
First, you should make sure that pip actually uses your Anaconda Python distribution, and not e.g. the one that comes by default with your OS. You can use which pip to check that.
After that, it is as easy as
pip install espnff --upgrade
If the latest changes have not yet been made available on pip, you could also try to install it manually from source. Taken from the repository you linked:
git clone https://github.com/rbarton65/espnff
cd espnff
python setup.py install
To make sure that you're installing the latest version available, you should use git pull to fetch and merge the latest changes before installing.
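In other words, to update an existing clone before reinstalling:
cd espnff
git pull                  # fetch and merge the latest changes
python setup.py install   # reinstall from the updated source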
On some occasions, you might also have to delete the existing build directory first, or use
python setup.py build --force
python setup.py install