ModuleNotFoundError with package from PyPI - Python

I uploaded a package to PyPI using twine, and it went fine.
Now I'm trying to install that package and import it into a script.
According to pip the module is already installed correctly:
PS C:\Users\alber> pip install ethbotutils
Requirement already satisfied: ethbotutils in c:\users\alber\appdata\local\programs\python\python39\lib\site-packages (1.1)
But when I try to import it in a script, in an IDE, or in Python IDLE I get:
>>> import ethbotutils
Traceback (most recent call last):
File "<pyshell#1>", line 1, in <module>
import ethbotutils
ModuleNotFoundError: No module named 'ethbotutils'
This is the pyproject.toml file (stored in the project root):
[build-system]
requires = ["setuptools", "wheel"]
build-backend = "setuptools.build_meta"
And this is the setup.py file (stored within the package directory):
from setuptools import setup

setup(
    name='ethbotutils',
    version=1.0,
    packages=["."],
    install_requires=["requests~=2.25.1", "PyYAML~=5.4.1"],
    python_requires=">=3.6"
)
EDIT:
What @a_guest suggested seems to be working: if I import a script present in the package, like "bot_utils", everything works, but it still doesn't work when I try to import the whole package by its name. How can I fix that?

The name of a distribution (the name parameter of setup) determines how a distribution (or project) is identified within the Python ecosystem; this includes the Python Package Index, where the distribution will be located at the URL https://pypi.org/project/<name>/.
This is distinct from the actual package(s) that a distribution contains (the packages parameter of setup). It is these packages that are made available when installing a distribution (e.g. via pip).
So as an example, if the setup.py file contains the following specification
setup(
    name='foo',
    packages=['bar'],
    ...
)
then this will create a distribution named foo which installs a package named bar; i.e. after doing pip install foo, the content of that distribution (the package) can be accessed via import bar. Typically the name of a distribution and the top-level package should coincide in order to avoid conflicts with other distributions which might get installed into the same virtual environment (see this answer for more information).
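To make the distinction concrete, here is a minimal usage sketch with the placeholder names foo and bar from the example above:

pip install foo           # installs the distribution named "foo"
python -c "import bar"    # works: the installed package is named "bar"
python -c "import foo"    # fails with ModuleNotFoundError, because no package "foo" was installed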
For the specific example of the OP, this means that the setup.py file should contain the following specification:
setup(
    name='ethbotutils',
    packages=['ethbotutils'],
    ...
)
In order for the setup to work, all relevant Python modules need to be placed inside a local folder ethbotutils which exists next to the setup.py file.
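As an illustration only (bot_utils is the module name mentioned in the question's edit; the remaining file names are assumptions), the project could be laid out like this, with setup.py and pyproject.toml at the repository root and the package directory next to them:

ethbotutils-project/
├── pyproject.toml
├── setup.py
└── ethbotutils/
    ├── __init__.py      # makes the folder an importable package
    └── bot_utils.py     # the module mentioned in the question

With this layout and packages=['ethbotutils'], both import ethbotutils and from ethbotutils import bot_utils should work after installation.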

Related

How do I specify the name of a Python package import?

I have a Python package fmdt-python that anyone can install with pip install fmdt-python. I want to configure this package so that I can call import fmdt anywhere. Despite my best efforts, after successfully installing fmdt-python, Python can't actually find the package fmdt. How do I configure the pyproject.toml of my PyPI project fmdt-python so that it can be imported as fmdt in Python?
For reference, the PyPI package ffmpeg-python is imported in Python as ffmpeg. Inspecting the local path pip uses to install packages shows that there is a long versioned name of the distribution alongside the shorter name used in the import statement, but for my package fmdt-python pip only installs the directory with the long name.
I would like to configure my package so that pip installs the proper fmdt folder alongside fmdt_python-0.0.12.dist-info.
I am using hatchling as the build system and a pyproject.toml file to configure this package. For reference, here's the GitHub of the package and this is the PyPI index.
The "problem" with my directory structure was that I had a Python package fmdt in which I had also placed all of the configuration files like pyproject.toml, LICENCE, setup.py, etc.
Rearranging the structure so that the config files sit outside of the fmdt folder (see the sketch below), I was able to configure my build to make the fmdt package available for import when installing the PyPI distribution fmdt-python.
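A rough sketch of the rearranged layout (illustrative; the actual file names in the repository may differ):

fmdt-python/
├── pyproject.toml
├── LICENCE
├── setup.py
└── fmdt/
    └── __init__.py

If hatchling does not pick the package up automatically, it can also be pointed at the folder explicitly in pyproject.toml (this assumes hatchling's wheel build-target options):

[tool.hatch.build.targets.wheel]
packages = ["fmdt"]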

setup.py file using requirements.txt

I've read a discussion in which it was suggested to read requirements.txt inside the setup.py file, to ensure the correct installation is available on multiple deployments without having to maintain both a requirements.txt and the list in setup.py.
However, when I'm trying to do an installation via pip install -e ., I get an error:
Obtaining file:///Users/myuser/Documents/myproject
Processing /home/ktietz/src/ci/alabaster_1611921544520/work
ERROR: Could not install packages due to an OSError: [Errno 2] No such file or directory:
'/System/Volumes/Data/home/ktietz/src/ci/alabaster_1611921544520/work'
It looks like pip is trying to find packages that are available on PyPI (alabaster) on my local machine. Why? What am I missing here? Why isn't pip looking for the required packages on the PyPI server?
I have done it before the other way around, maintaining the setup file and not the requirements file. For the requirements file, just save it as:
*
and for setup, do
from distutils.core import setup
from setuptools import find_packages

try:
    from Module.version import __version__
except ModuleNotFoundError:
    exec(open("Module/version.py").read())

setup(
    name="Package Name",
    version=__version__,
    packages=find_packages(),
    package_data={p: ["*"] for p in find_packages()},
    url="",
    license="",
    install_requires=[
        "numpy",
        "pandas"
    ],
    python_requires=">=3.8.0",
    author="First.Last",
    author_email="author@company.com",
    description="Description",
)
For reference, my version.py script looks like:
__build_number__ = "_LOCAL_"
__version__ = f"1.0.{__build_number__}"
Jenkins then replaces the build number with a tag during the build.
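A rough illustration of that replacement step (the shell command and the BUILD_NUMBER variable are assumptions, not taken from the question; a Jenkins job could use any equivalent mechanism):

sed -i "s/_LOCAL_/${BUILD_NUMBER}/" Module/version.py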
This question actually consists of two separate questions, since the rather philosophic choice of how to arrange setup requirements is unrelated to the installation error that you are experiencing.
First, about the error: it looks like the project you are trying to install depends on another library (alabaster) of which you apparently also made an editable install using pip3 install -e ., pointing to this directory:
/home/ktietz/src/ci/alabaster_1611921544520/work
What the error tells you is that the directory where that install is supposed to be located does not exist anymore. You should only install your project itself in editable mode; the dependencies should be installed normally into site-packages, i.e. without the -e option.
To clean up, I would suggest that you do the following:
# clean up references to the broken editable install
pip3 uninstall alabaster
# now do a proper non-editable install
pip3 install alabaster
Concerning the question of how to arrange setup requirements, you should primarily use the install_requires and extras_require options of setuptools:
# either in setup.py
setuptools.setup(
    install_requires=[
        'dep1>=1.2',
        'dep2>=2.4.1',
    ]
)

# or in setup.cfg
[options]
install_requires =
    dep1>=1.2
    dep2>=2.4.1

[options.extras_require]
extra_deps_a =
    dep3
    dep4>=4.2.3
extra_deps_b =
    dep5>=5.2.1
Optional requirements can be organised in groups. To include such an extra group with the install, you can do pip3 install .[extra_deps_name].
If you wish to define specific dependency environments with exact versions (e.g. for Continuous Integration), you may use requirements.txt files in addition (see the sketch below), but the general dependency and version constraint definitions should be done in setup.cfg or setup.py.
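For example, a pinned requirements.txt for CI could look like this (the package names are the placeholders from the snippets above; the exact versions are made up for illustration):

# requirements.txt -- exact versions for a reproducible CI environment
dep1==1.2.0
dep2==2.4.1

# install the pinned dependencies, then the project itself
pip3 install -r requirements.txt
pip3 install .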

How to create a shareable python distribution package that can be installed using pip?

I want to share my Python code with a colleague in such a way that they get the distribution package and can install it on their machine using pip.
I created a .whl file, which I thought could be installed directly through the pip install command.
Though it was installed successfully, when I start using it I get the error shown below.
Is this possible: I hand over the .whl file, and it can be used on another machine once installed through the pip install command?
I'm trying to do this on a Windows machine.
Here is the setup.py:
import setuptools

with open("README.md", "r") as fh:
    long_description = fh.read()

setuptools.setup(
    name='dokr',
    version='0.1',
    scripts=['dokr'],
    author="Debapritam Chakra",
    author_email="debapritam22@gmail.com",
    description="A Sample package",
    long_description=long_description,
    long_description_content_type="text/markdown",
    url="",
    packages=setuptools.find_packages(),
    classifiers=[
        "Programming Language :: Python :: 2.7",
        "",
        "Operating System :: OS Independent",
    ],
)
The package that I am trying to create is dokr, and I have a file named dokr in the same directory with the content shown below.
#!/usr/bin/env python
echo "hey there, this is a pip package"
I used the command python setup.py sdist bdist_wheel to generate the distribution package.
To install the package on my machine, I used the command:
python -m pip install dist/name-of-wheel-file.whl
It showed that it was installed successfully (I even checked using pip list).
It throws the error when I try to import the package as follows:
import dokr
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named dokr
Additional observations:
Python on my machine is installed under C:\Python27.
After installing the package from the whl file using pip, I can see that a directory named dokr-0.1.dist-info has been created under C:\Python27\Lib\site-packages\, which is why pip list shows the module as present.
But there is no folder containing the Python file dokr itself, which I want to import in another Python file; that is what causes the error during importing.
I am new to Python and to this platform as well. Any lead would be helpful.
Of course you can create a Python distribution package.
Can you show clearly what the error is?
In the meantime, you can look into the following link:
Packaging Python Projects
Hope this helps!
Fortunately, I found the cause of the problem.
I did not pass the packages argument (which takes the list of packages to be included) to the setup function, so the packages could not be copied.
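As a rough sketch of the kind of layout the packages argument expects (the module name inside the package is hypothetical; the question only mentions the dokr script):

dokr-project/
├── setup.py
└── dokr/
    ├── __init__.py   # makes dokr an importable package
    └── utils.py      # hypothetical module

With such a layout, packages=setuptools.find_packages() (or packages=['dokr']) picks up the dokr folder, and import dokr works after installing the wheel.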
The following article and an awesome answer helped me sort out the issue:
https://setuptools.readthedocs.io/en/latest/setuptools.html#developer-s-guide
pip install . creates only the dist-info not the package
And I thank everyone who came here to help. :)

Installing a single file package in conda/pip requires redundant import statement

I've created a Python package that is posted to pypi.org. The package consists of a single .py file, which has the same name as the package.
After installing the package via pip (pip install package_name) in a conda or standard Python environment, I must use the following statement to import a function from this module:
from package_name.package_name import function_x
How can I reorganise my package or adjust my installation command so that I may use the import statement
from package_name import function_x
which I have successfully used when I install via python setup.py install.
My setup.py is below
setup(
    name="package_name",
    version="...",
    packages=find_packages(exclude=['examples', 'docs', 'build', 'dist']),
)
Change your setup arguments from using packages to using py_modules, e.g.
setup(
    name="package_name",
    version="..",
    py_modules=['package_name'],
)
This is documented here https://docs.python.org/2/distutils/introduction.html#a-simple-example
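For a single-file project, a sketch of the layout that py_modules expects (illustrative; function_x is the name used in the question):

project_root/
├── setup.py
└── package_name.py    # contains function_x

After installing, package_name.py lands directly in site-packages as a top-level module, so from package_name import function_x works.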

setup.py: add dependencies required for installation [duplicate]

I'm developing a Python application and am in the process of branching off a release. I've got a PyPI server set up on a company server and I've copied a source distribution of my package onto it.
I checked that the package was being hosted on the server and then tried installing it on my local development machine. I ended up with this output:
$ pip3 install --trusted-host 172.16.1.92 -i http://172.16.1.92:5001/simple/ <my-package>
Collecting <my-package>
Downloading http://172.16.1.92:5001/packages/<my-package>-0.2.0.zip
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Users\<me>\AppData\Local\Temp\pip-build-ubb3jkpr\<my-package>\setup.py", line 9, in <module>
import appdirs
ModuleNotFoundError: No module named 'appdirs'
----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in C:\Users\<me>\AppData\Local\Temp\pip-build-ubb3jkpr\<my-package>\
The reason is that I'm trying to import a third-party library appdirs in my setup.py, which is necessary for me to compute the data_files argument to setup():
try:
    from setuptools import setup
except ImportError:
    from distutils.core import setup

import os
from collections import defaultdict

import appdirs

from <my-package>.version import __version__ as <my-package>_version

APP_NAME = '<my-app>'
APP_AUTHOR = '<company>'
SYSTEM_COMPONENT_PLUGIN_DIR = os.path.join(appdirs.user_data_dir(APP_NAME, APP_AUTHOR), 'components')

# ...

setup(
    # ...
    data_files=component_files,
)
However, I don't have appdirs installed on my local dev machine and I don't expect the end users to have it either.
Is it acceptable to rely on third-party libraries like this in setup.py, and if so what is the recommended approach to using them? Is there a way I can ensure appdirs gets installed before it's imported in setup.py, or should I just document that appdirs is a required package to install my package?
I'm ignoring licensing issues in this answer. You definitely need to take these into account before you really do a release.
Is it acceptable to rely on third-party libraries like this in setup.py
Yes, it is acceptable, but generally these should be minimized, especially if they are modules which have no obvious use for the end user. No one likes to have packages they don't need or use.
what is the recommended approach to using them?
There are basically 3 options:
Bootstrap them (for example, use pip to programmatically install packages). For instance, setuptools provides an ez_setup.py file that can be used to bootstrap setuptools. Maybe that can be customized to download and install appdirs.
Include them in your project (especially if it's a small package). For example, appdirs is basically just a single-file module, so it is pretty easy to copy and maintain in your project. Be very careful with licensing issues when you do that!
Fail gracefully when it's not possible to import them and let the user install them. For example:
try:
    import appdirs
except ImportError:
    raise ImportError('this package requires "appdirs" to be installed. '
                      'Install it first: "pip install appdirs".')
You could use pip to install the package programmatically if the import fails:
try:
    import appdirs
except ImportError:
    import pip
    pip.main(['install', 'appdirs'])
    import appdirs
In some circumstances you may need to use importlib or __import__ to import the package after pip.main, or refresh the PATH variable. It could also be worthwhile to ask the user whether they really want to install that package before doing so.
I used a lot of the examples from "Installing python module within code", and I haven't personally tried using this in setup.py files, but it looks like it could be a solution for your question.
You can mention install_requires with the list of dependencies. Please check the Python packaging guide here. Also, you can provide a requirements.txt file so that everything can be installed at once using pip install -r requirements.txt.
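A minimal sketch of that approach for this case (assuming appdirs is the dependency in question):

setup(
    # ...
    install_requires=["appdirs"],
)

Note that install_requires declares runtime dependencies of the installed package; as discussed above, it does not by itself make appdirs importable while setup.py is being executed.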
