I used to be able to run the command runner.py from my package my-runner while it was installed with python setup.py develop. However, ever since I reinstalled it using python setup.py install, I get a pkg_resources.ResolutionError when calling runner.py.
This is the mini tree structure
.
├── bin
│ ├── some_other_file.py
│ ├── runner.py
├── setup.py
Here is my setup.py:
from setuptools import setup, find_packages

setup(
    name='my-runner',
    version='1.0.0',
    license='private',
    author='MyName',
    author_email='myname@myemail.com',
    description='My Runner',
    packages=find_packages(),
    scripts=['bin/runner.py', 'bin/some_other_file.py']
)
Running the command runner.py returns the error
pkg_resources.ResolutionError: Script 'scripts/runner.py' not found in metadata at '/home/myname/module/my-runner.egg-info'
I have no clue why install would break it. I am guessing it has to do with the fact that develop does not read the egg-info dir, but I would like a solution to this problem.
A possible way is to remove the package (pip3 uninstall my-runner) and reinstall it (python setup.py install).
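For example, from the project root containing setup.py:
pip3 uninstall my-runner
python setup.py install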
Related
Structure:
.
├── application
│   └── runner.py
└── dummyLibrary
    ├── helperFunctions.py
    ├── __init__.py
    └── setup.py
runner.py:
import dummyLibrary
dummyLibrary.foo()
dummyLibrary.bar()
__init__.py:
(empty file)
helperFunctions.py:
def foo():
    print("called foo()")

def bar():
    print("called bar()")
setup.py:
#!/usr/bin/env python
from distutils.core import setup
setup(name='dummyLibrary', version='0.0.1')
After cd'ing into dummyLibrary/,
I tried installing dummyLibrary with
pip3 install -e .
This was the output:
Defaulting to user installation because normal site-packages is not writeable
Obtaining file:///home/ubuntu/Documents/pythonTest/dummyLibrary
Preparing metadata (setup.py) ... done
Installing collected packages: dummyLibrary
Running setup.py develop for dummyLibrary
Successfully installed dummyLibrary-0.0.1
I tried installing dummyLibrary with
python3 -m pip install -e .
This was the output:
Defaulting to user installation because normal site-packages is not writeable
Obtaining file:///home/ubuntu/Documents/pythonTest/dummyLibrary
Preparing metadata (setup.py) ... done
Installing collected packages: dummyLibrary
Attempting uninstall: dummyLibrary
Found existing installation: dummyLibrary 0.0.1
Uninstalling dummyLibrary-0.0.1:
Successfully uninstalled dummyLibrary-0.0.1
Running setup.py develop for dummyLibrary
Successfully installed dummyLibrary-0.0.1
After cd'ing into application/ and running
python3 runner.py
I get:
Traceback (most recent call last):
  File "runner.py", line 1, in <module>
    import dummyLibrary
ModuleNotFoundError: No module named 'dummyLibrary'
no matter how I try to install my library. Why is this?
Additional Information:
Using Ubuntu; not using a virtual environment.
I tried rebooting after installing. Didn't help.
Welcome to Stack Overflow!
First thing: __init__.py and helperFunctions.py should be inside another nested folder with the same name as the package.
.
├── application
│   └── runner.py
└── dummyLibrary
    ├── dummyLibrary
    │   ├── __init__.py
    │   └── helperFunctions.py
    └── setup.py
Secondly, with the code in runner.py as how you'd like it, the __init__.py should include the following line to import all of the functions in helperFunctions.py:
from .helperFunctions import *
Finally, your setup.py should also include the packages parameter as a list of the package folder names; in this case it should be packages=['dummyLibrary'].
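Putting it together, a minimal sketch of the corrected dummyLibrary/setup.py (keeping the distutils style from the question) could look like:
#!/usr/bin/env python
from distutils.core import setup

# 'packages' lists the importable package folders to install
setup(name='dummyLibrary',
      version='0.0.1',
      packages=['dummyLibrary'])
After restructuring, reinstall with pip3 install -e ., and import dummyLibrary should resolve from application/runner.py.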
P.S.: it's a pythonic practice to name your files and packages in snake_case rather than camelCase.
Problem Statement: when I install my pip package, a specific file inside the package gets copied to the Temp directory.
Approach:
My package directory structure is the following:
my-app/
├─ app/
│ ├─ __init__.py
│ ├─ __main__.py
├─ folder-with-extra-stuff/
│ ├─ __init__.py
│ ├─ file_I_want_to_cppy.tar.gz
├─ setup.py
├─ MANIFEST.in
I'm tweaking my setup.py file to do the job. Following is my setup.py
#!/usr/bin/env python
from setuptools import setup, find_packages
from setuptools.command.install import install
import os
import sys
import shutil

rootDir = os.path.abspath(os.path.dirname(__file__))

def run_custom_install():
    print("--------Start running custom command -------")
    temp_dir = r'c:\temp' if sys.platform == "win32" else r'/tmp'
    temp_col_dir = temp_dir + os.sep + 'dump'
    os.makedirs(temp_dir, exist_ok=True)
    os.makedirs(temp_col_dir, exist_ok=True)
    print("----------locate the zip file ---------------")
    ColDirTests = os.path.abspath(os.path.join(rootDir, 'my-app', 'folder-with-extra-stuff'))
    _src_file = os.path.join(ColDirTests, 'file_I_want_to_cppy.tar.gz')
    print(f"******{_src_file}**********")
    if os.path.exists(_src_file):
        print(f"-----zip file has been located at {_src_file}")
        shutil.copy(_src_file, temp_col_dir)
    else:
        print("!!!!Couldn't locate the zip file for transfer!!!!")

class CustomInstall(install):
    def run(self):
        print("***********Custom run from install********")
        install.run(self)
        run_custom_install()

ver = "0.0.0"

setup(
    name='my_pkg',
    version=ver,
    packages=find_packages(),
    python_requires='>=3.6.0',
    install_requires=getRequirements(),
    include_package_data=True,
    cmdclass={
        'install': CustomInstall,
    }
)
MANIFEST.in
include README.md
include file_I_want_to_cppy.tar.gz
recursive-include my-app *
global-exclude *.pyc
include requirements.txt
prune test
Testing the build:
> python setup.py bdist_wheel
It works during the build: I can see the directory C:\temp\dump is created with file_I_want_to_cppy.tar.gz inside it. But when I release the package to PyPI and try to install it from pip, the folder remains empty!
Any idea what I might be doing wrong here?
After a lot of research I have figured out how to resolve this issue. Let me summarize my findings; they might be helpful for others who want to do post-pip-install processing.
setup.py
Different options to install the package: 1) pip install pkg_name, 2) python setup.py sdist.
If you want post-install processing to work either way, you need to override all three commands, install, egg_info and develop, as shown in the setup.py below.
If you create a *.whl file with python setup.py bdist_wheel, post-pip-install processing won't be executed! Please upload the .tar.gz generated using sdist to PyPI/Artifacts to make post-pip-install processing work. Again, please note: it will not work when installing from a binary wheel.
Upload the pip package: twine upload dist/*.tar.gz
import os
from setuptools import setup, find_packages
from setuptools.command.install import install
from setuptools.command.egg_info import egg_info
from setuptools.command.develop import develop

rootDir = os.path.abspath(os.path.dirname(__file__))

def run_post_processing():
    print("--------Start running custom command -------")
    # One can run any post-processing here that will be executed after pip install

class PostInstallCommand(install):
    def run(self):
        print("***********Custom run from install********")
        install.run(self)
        run_post_processing()

class PostEggCommand(egg_info):
    def run(self):
        print("***********Custom run from Egg********")
        egg_info.run(self)
        run_post_processing()

class PostDevelopCommand(develop):
    def run(self):
        print("***********Custom run from Develop********")
        develop.run(self)
        run_post_processing()

ver = "0.0.0"

setup(
    name='my_pkg',
    version=ver,
    packages=find_packages(),
    python_requires='>=3.6.0',
    install_requires=getRequirements(),
    include_package_data=True,
    cmdclass={
        'install': PostInstallCommand,
        'egg_info': PostEggCommand,
        'develop': PostDevelopCommand
    }
)
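For reference, the corresponding build, upload and install commands (sdist only, as explained above) are roughly:
python setup.py sdist
twine upload dist/*.tar.gz
pip install my_pkg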
A few more things from my research:
If you want to do pre-processing instead of post-processing, move install.run(self) to the end of run() (see the sketch after this list).
While pip installing, if you want to see the custom messages printed during pre/post installation, use -vvv. Example: pip install -vvv my_pkg
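As a rough sketch of the pre-processing variant mentioned above (reusing run_post_processing and the install import from the setup.py shown earlier):
class PreInstallCommand(install):
    def run(self):
        run_post_processing()  # custom step runs first (pre-processing)
        install.run(self)      # then the standard install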
I've restructured a project to the src directory structure. It looks like this:
root_dir/
    src/
        module1/
            __init__.py
            script1.py
            script2.py
        module2/
            __init__.py
            other_script1.py
            other_script2.py
    conftest.py
    setup.py
    tests/
        conftest.py
        some_tests/
            conftest.py
            test_some_parts.py
        some_other_tests/
            conftest.py
            test_these_other_parts.py
My setup.py looks like this:
from setuptools import setup, find_packages

setup(
    name='Project',
    version=0.0,
    author='Me',
    install_requires=['pyodbc'],
    tests_require=['pytest'],
    setup_requires=['pytest-runner'],
    test_suite='root_dir.Tests',
    entry_points={
        'console_scripts': ['load_data = module1.script1:main']
    },
    package_data={'Config': ['*.json']},
    packages=find_packages('src'),
    package_dir={'': 'src'}
)
I am running Anaconda3 on Windows 10. When I run python setup.py install, I am able to run the load_data script without any issue. However, from what I've been reading, it is preferable to use pip install . rather than python setup.py install. When I pip install the package and attempt to run load_data, I get ModuleNotFoundError: No module named 'module1.script1'. I've attempted adding 'src' to the front of this, but that doesn't work either. I don't understand what the differences are or how to troubleshoot this.
When building the source distribution for the package, not all files are included. Try creating a MANIFEST.in file with
recursive-include src/module1 *
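For the tree above, a MANIFEST.in along these lines (module2 added by analogy) should pull both packages into the source distribution:
recursive-include src/module1 *
recursive-include src/module2 *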
If I have a tree that looks like:
├── project
│   ├── package
│   │   ├── __init__.py
│   │   ├── setup.py
├── env
└── setup.py
Is there a way to include the nested setup.py in the install for the top setup.py? I want to avoid this:
pip install -e . ; cd project/package ; pip install -e .
The solution is to have two separate projects: a main project (usually an application) and a sub-project (usually a library). The main application has a dependency on the library.
Tree structure and setup.py
The main project can have the following structure:
your_app/
|-- setup.py
`-- src/
    `-- your_app/
        |-- __init__.py
        |-- module1.py
        `-- ...
The setup.py of your application can be:
from setuptools import find_packages
from setuptools import setup

setup(
    name='Your-App',
    version='0.1.0',
    install_requires=['Your-Library'],
    packages=find_packages('src'),
    package_dir={'': 'src'},
    url='https://github.com/your-name/your_app',
    license='MIT',
    author='Your NAME',
    author_email='your@email.com',
    description='Your main project'
)
You can notice that:
The name of your application can be slightly different from the name of your package;
This package has a dependency on "Your-Library", defined below;
You can put your source in the src directory, but it is optional. A lot of projects have none.
The sub-project can have the following structure:
your_library/
|-- setup.py
`-- src/
    `-- your_library/
        |-- __init__.py
        |-- lib1.py
        `-- ...
The setup.py of your library can be:
from setuptools import find_packages
from setuptools import setup

setup(
    name='Your-Library',
    version='0.1.0',
    packages=find_packages('src'),
    package_dir={'': 'src'},
    url='https://github.com/your-name/your_library',
    license='MIT',
    author='Your NAME',
    author_email='your@email.com',
    description='Your sub-project'
)
Putting all things together
Create a virtualenv for your application and activate it.
Go into the your_library/ directory and run:
pip install -e .
Then, go into the your_app/ directory and run:
pip install -e .
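At this point, code in your_app can import the library like any other installed dependency. A minimal sketch (some_function is a hypothetical helper in lib1):
# your_app/src/your_app/module1.py
from your_library import lib1  # resolved through the 'Your-Library' dependency

def main():
    lib1.some_function()  # hypothetical call into the library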
You are now ready to code. Have fun!
See the Hitchhiker's Guide to Python: “Structuring Your Project”.
I wonder whether, as with .deb packages for example, it is possible to configure the dependencies for my package in my setup.py, so that when I run:
$ sudo python setup.py install
they are installed automatically. I have already researched this on the internet, but everything I found just left me confused: things like "requires", "install_requires" and "requirements.txt".
Just create requirements.txt in your lib folder and add all dependencies like this:
gunicorn
docutils>=0.3
lxml==0.5a7
Then create a setup.py script and read the requirements.txt in:
import os
from setuptools import setup

lib_folder = os.path.dirname(os.path.realpath(__file__))
requirement_path = lib_folder + '/requirements.txt'
install_requires = []  # Here we'll get: ["gunicorn", "docutils>=0.3", "lxml==0.5a7"]
if os.path.isfile(requirement_path):
    with open(requirement_path) as f:
        install_requires = f.read().splitlines()

setup(name="mypackage", install_requires=install_requires, [...])
The execution of python setup.py install will install your package and all of its dependencies. As @jwodder said, it is not mandatory to create a requirements.txt file; you can just set install_requires directly in the setup.py script. But writing a requirements.txt file is a best practice.
In the setup function call you also have to set version, packages, author, etc.; read the docs for a complete example: https://docs.python.org/3/distutils/setupscript.html
Your package dir will look like this:
├── mypackage
│   ├── mypackage
│   │   ├── __init__.py
│   │   └── mymodule.py
│   ├── requirements.txt
│   └── setup.py
Another possible solution
try:
    # for pip >= 10
    from pip._internal.req import parse_requirements
except ImportError:
    # for pip <= 9.0.3
    from pip.req import parse_requirements

def load_requirements(fname):
    reqs = parse_requirements(fname, session="test")
    return [str(ir.req) for ir in reqs]

setup(name="yourpackage", install_requires=load_requirements("requirements.txt"))
You generate the egg info from your setup.py, then use the requires.txt from that egg info:
$ python setup.py egg_info
$ pip install -r <your_package_name>.egg-info/requires.txt
In Python 3.4+, it is possible to use the Path class from pathlib to do effectively the same thing as @hayj's answer.
from pathlib import Path
from typing import List

import setuptools

...

def get_install_requires() -> List[str]:
    """Returns requirements.txt parsed to a list"""
    fname = Path(__file__).parent / 'requirements.txt'
    targets = []
    if fname.exists():
        with open(fname, 'r') as f:
            targets = f.read().splitlines()
    return targets

...

setuptools.setup(
    ...
    install_requires=get_install_requires(),
    ...
)