how to run a script using pyproject.toml settings and poetry? - python

I am using poetry to create .whl files.
I have an FTP server running on a remote host.
I wrote a Python script (log_revision.py) which saves the git commit and a few more parameters in a database, and at the end sends the .whl (that poetry created) to the remote server (each .whl goes to a different path on the server; the path is saved in the db).
At the moment I run the script manually each time after I run the poetry build command.
I know the pyproject.toml has the [tool.poetry.scripts] section, but I don't get how I can use it to run a Python script.
I tried
[tool.poetry.scripts]
my-script = "my_package_name:log_revision.py
and then poetry run my-script, but I always get an error:
AttributeError: module 'my_package_namen' has no attribute 'log_revision'
1. Can someone please help me understand how to run the desired command?
As a short-term option (without git and params) I tried to use poetry publish -r http://192.168.1.xxx/home/whl -u hello -p world, but I get the following error:
[RuntimeError]
Repository http://192.168.1.xxx/home/whl is not defined
2. What am I doing wrong and how can I fix it?
Would appreciate any help, thanks!

At the moment the [tool.poetry.scripts] section is equivalent to setuptools console_scripts.
So the argument must be a valid module and method name. Let's imagine that within your package my_package you have log_revision.py, which has a method start(). Then you have to write:
[tool.poetry.scripts]
my-script = "my_package.log_revision:start"
Here's a complete example:
You should have this folder structure:
my_package
├── my_package
│   ├── __init__.py
│   └── log_revision.py
└── pyproject.toml
The content of pyproject.toml is:
[tool.poetry]
name = "my_package"
version = "0.1.0"
description = ""
authors = ["Your Name <you#example.com>"]
[tool.poetry.dependencies]
python = "^3.8"
[tool.poetry.scripts]
my-script = "my_package.log_revision:start"
[build-system]
requires = ["poetry_core>=1.0.0"]
build-backend = "poetry.core.masonry.api"
and of log_revision.py:
def start():
    print("Hello")
After you have run poetry install once you should be able to do this:
$ poetry run my-script
Hello
You cannot pass something to the start() method directly. Instead, you can use command line arguments and parse them, e.g. with Python's argparse.
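For example, here is a minimal sketch of how log_revision.py could read its parameters from the command line (the --commit and --whl-path option names are made up for illustration):
import argparse

def start():
    parser = argparse.ArgumentParser(description="Record a build revision")
    parser.add_argument("--commit", required=True, help="git commit hash")
    parser.add_argument("--whl-path", required=True, help="path of the built .whl file")
    args = parser.parse_args()
    print(f"Recording commit {args.commit} for {args.whl_path}")
You could then invoke it as poetry run my-script --commit abc123 --whl-path dist/my_package-0.1.0-py3-none-any.whl.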

Although the previous answers are correct, they are a bit complicated. The simplest way to run a Python script with poetry is as follows:
poetry run python myscript.py
If you are using a dev framework like streamlit, you can use
poetry run streamlit run myapp.py
Basically anything you put after poetry run will execute from the poetry virtual environment.
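For instance, these illustrative commands both run inside the project's virtualenv:
poetry run python -m pytest
poetry run python -c "import sys; print(sys.executable)"
The second one prints the interpreter path, which is a quick way to confirm which environment poetry is actually using.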

For future visitors, I think what OP is asking for (a post build hook?) isn't directly supported. But you might find satisfaction from using a tool I wrote called poethepoet which integrates with poetry to run arbitrary tasks defined in the pyproject.toml in terms of shell commands or by referencing python functions.
For example you could define something like the following in your pyproject.toml
[tool.poe.tasks.log_revision]
script = "my_package.log_revision:main" # where main is the name of the python function in the log_revision module
help = "Register this revision in the catalog db"
[tool.poe.tasks.build]
cmd = "poetry build"
help = "Build the project"
[tool.poe.tasks.shipit]
sequence = ["build", "log_revision"]
help = "Build and deploy"
And then execute any of the tasks with the poe CLI, like the following, which will run the other two tasks in sequence, thus building your project and running the deployment script in one go!
poe shipit
By default tasks are executed inside the poetry managed virtualenv (like using poetry run) so the python script can use dev dependencies.
If you need to define CLI arguments or load values into a task from a dotenv file (such as credentials) then this is also supported.
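As a rough sketch of what that could look like (the option names here are my reading of the poethepoet docs, so treat them as assumptions and check the documentation for the exact syntax):
[tool.poe.tasks.log_revision]
script = "my_package.log_revision:main"
args = ["commit"]    # exposes a --commit option on the task
envfile = ".env"     # load credentials from a dotenv file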
Update: poetry plugin support
poethepoet can now support post build hooks when used as a poetry plugin. For example when using poetry >=1.2.0b1 you can configure the following to run your log_revision task automatically after poetry build is run:
[tool.poe.poetry_hooks]
post_build = "log-revision"
[tool.poe.tasks.log-revision]
script = "scripts:log_revision"

I tinkered with such a problem for a couple of hours and found a solution.
I had a task to start the Django server via a poetry script.
Here are the directories. manage.py is in the test folder:
├── pyproject.toml
├── README.rst
├── runserver.py
├── test
│   ├── db.sqlite3
│   ├── manage.py
│   └── test
│       ├── asgi.py
│       ├── __init__.py
│       ├── __pycache__
│       │   ├── __init__.cpython-39.pyc
│       │   ├── settings.cpython-39.pyc
│       │   ├── urls.cpython-39.pyc
│       │   └── wsgi.cpython-39.pyc
│       ├── settings.py
│       ├── urls.py
│       └── wsgi.py
├── tests
│   ├── __init__.py
│   └── test_tmp.py
└── tmp
    └── __init__.py
I had to create a file runserver.py:
import subprocess

def djtest():
    cmd = ['python', 'test/manage.py', 'runserver']
    subprocess.run(cmd)
then add the script itself to pyproject.toml:
[tool.poetry.scripts]
dj = "runserver:djtest"
Only then did the command poetry run dj start working.

Related

Get app version from pyproject.toml inside python code

I am not very familiar with Python; I have only done automation with it, so I am new to packages and everything.
I am creating an API with Flask, Gunicorn and Poetry.
I noticed that there is a version number inside the pyproject.toml and I would like to create a route /version which returns the version of my app.
My app structure looks like this at the moment:
├── README.md
├── __init__.py
├── poetry.lock
├── pyproject.toml
├── tests
│   └── __init__.py
└── wsgi.py
Where wsgi.py is my main file which runs the app.
I saw people using importlib, but I didn't find how to make it work, as it is used with:
__version__ = importlib.metadata.version("__package__")
But I have no clue what this package means.
You should not use __package__: it is the name of the "import package" (or maybe the import module, depending on where this line of code is located), and that is not what importlib.metadata.version() expects. This function expects the name of the distribution package (the thing that you pip-install), which is what you write in pyproject.toml as name = "???".
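A minimal sketch of such a /version route, assuming the distribution name in your pyproject.toml is "my-app" (replace it with your own name field):
from importlib.metadata import version
from flask import Flask

app = Flask(__name__)
APP_VERSION = version("my-app")  # distribution name from pyproject.toml's "name" field

@app.route("/version")
def get_version():
    return {"version": APP_VERSION}
importlib.metadata is in the standard library from Python 3.8 onward.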
You can also extract the version from pyproject.toml by using the toml package to read the file and then display it on a webpage.
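A rough sketch of that approach (it assumes pyproject.toml is readable from the working directory, which is usually only true when running from the project root, not for an installed package):
import toml

pyproject = toml.load("pyproject.toml")
version = pyproject["tool"]["poetry"]["version"]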

Error: the command ( (from my_script) could not be found within PATH

While working on a Python FastAPI project using Pipenv and Pytest, I was asked to write a Pipenv script to run the tests.
The project has the following structure:
.
├── app
│   ├── main.py
│   ├── my_package
│   │   ├── __init__.py
│   │   ├── (project folders)
│   │   └── tests
│   │       └── tests.py
│   └── __pycache__
└── (deployment scripts and folders, Pipfile, Dockerfile, etc.)
I'd like to run Pytest on tests.py from the top structure (./app/.../tests.py). At least for now.
As recommended here, I currently run them from the top with:
( cd app ; python -m pytest my_package/tests/tests.py )
... which works as expected.
However, when I add that to my Pipfile scripts section:
[scripts]
my_script = "( cd app ; python -m pytest my_package/tests/tests.py )"
... and run it with:
pipenv run my_script
I get the error:
Error: the command ( (from my_script) could not be found within PATH.
I've also tried:
[scripts]
my_script = "cd app && python -m pytest my_package/tests/tests.py"
... which returns another similar error:
Error: the command cd (from my_script) could not be found within PATH.
So it's clear I'm wrong to use these as if they were bash aliases.
I've tried searching for more documentation on how the [scripts] section works, but I've had no luck (yet).
I'm not familiar with the tools you're using, but the error message suggests that it's looking for an executable. ( is part of shell syntax, and cd is a shell builtin, not an executable. Try this:
my_script = "bash -c 'cd app && python -m pytest my_package/tests/tests.py'"
Here bash is the executable, and -c makes it run your snippet.
BTW, keep in mind that cd can fail, so the script should bail out if it does.

running a package pytest with poetry

I am new to poetry and want to get it set up with pytest. I have a package mylib with the following setup:
├── dist
│   ├── mylib-0.0.1-py3-none-any.whl
│   └── mylib-0.0.1.tar.gz
├── poetry.lock
├── mylib
│   ├── functions.py
│   ├── __init__.py
│   └── utils.py
├── pyproject.toml
├── README.md
└── tests
    └── test_functions.py
in test_functions I have
import mylib
However, when I run
poetry run pytest
it complains about mylib not being included. I can run
pip install dist/mylib-0.0.1-py3-none-any.whl
but that clutters my python environment with mylib. I want to use that environment as well for other packages.
My question is: What is the proper way to work with poetry and pytest?
My underlying Python environment is a clean pyenv Python 3.8. Using pyproject.toml, I create a project-based virtual environment for mylib.
You need to run poetry install to set up your dev environment. It will install all package and development requirements, and once that is done it will do a dev-install of your source code.
You only need to run it once, code changes will propagate directly and do not require running the install again.
If you have set up the virtual env that you want already, take care that it is activated when you run the install command. If you don't, poetry will try to create a new virtual env and use that, which is probably not what you want.
FYI you also need pytest specified as a dev dependency in pyproject.toml.
If you don't have that, poetry run will find the pytest instance in your home env, but that instance won't find the venv. I don't think the documentation makes that very clear.
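For instance, something along these lines in pyproject.toml (Poetry 1.2+ uses the group syntax shown here; older versions use a [tool.poetry.dev-dependencies] section instead):
[tool.poetry.group.dev.dependencies]
pytest = "^7.0"
After that, poetry install pulls pytest into the project's virtualenv.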
There is a specific way to run pytest:
poetry run pytest
I couldn't run it by just running pytest with the virtual environment activated; nothing happens when I do.
It only works when I prefix the command with the poetry executable.
P.S.: Don't forget to add pytest as a dev dependency in your pyproject.toml file.

Proper ways to set the path of my app in Python

I have a question about how to properly set up paths in Python (Python 3.x).
I developed a small scraping app in Python with the following directory structure.
root
├── Dockerfile
├── README.md
├── tox.ini
├── src
│   └── myapp
│       ├── __init__.py
│       ├── do_something.py
│       └── do_something_else.py
└── tests
    ├── __init__.py
    ├── test_do_something.py
    └── test_do_something_else.py
When I want to run my code, I can go to the src directory and run
python do_something.py
But, because do_something.py has an import statement from do_something_else.py, it fails like:
Traceback (most recent call last):
  File "src/myapp/do_something.py", line 1, in <module>
    from src.myapp.do_something_else import do_it
ModuleNotFoundError: No module named 'src'
So, I eventually decided to use the following command to specify the python path:
PYTHONPATH=../../ python do_something.py
to make sure that the path is seen.
But, what are the better ways to feed the path so that my app can run?
I want to know this because when I run pytest via tox, I would run the tox command from the root so that tox.ini is seen by tox. If I do that, then I will most likely run into a similar problem because the Python path is not properly set.
Questions I want to ask specifically are:
Where should I run my main code from when creating a project like this? From root, as in python src/myapp/do_something.py? Or go to the src/myapp directory and run python do_something.py?
Once the directory where I should execute my program is determined, what is the correct way to import modules from another .py file? Is it OK to use from src.myapp.do_something_else import do_it (this means I must add the path from the src directory)? Or is there a different way to import?
What are ways I can have Python recognize the path? I am aware there are several ways to make the path accessible, as below:
a. write export PYTHONPATH=<path_of_my_choice>:$PYTHONPATH to make the path accessible temporarily, or write that line in my .bashrc to make it permanent (but it's hard to reproduce when I want to automate creating the Python environment via ansible or other automation tools)
b. write import sys; sys.path.append(<root>) to have the root as an accessible path
c. use pytest-pythonpath package (but this is not really a generic answer)
Thank you so much for your inputs!
my environment
OS: macOS and Amazon Linux 2
Python Version: 3.7
Dependency in Python: pytest, tox
I would suggest using setup.py to make this a Python package. Then you can install it in development mode with python setup.py develop. This way it will be available in your Python environment without needing to specify the PYTHONPATH.
For testing, you can simply install the package with python setup.py install.
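A minimal setup.py sketch for the src/ layout above (the project name and version are placeholders):
from setuptools import setup, find_packages

setup(
    name="myapp",
    version="0.1.0",
    package_dir={"": "src"},
    packages=find_packages(where="src"),
)
After that, python setup.py develop (or, equivalently, pip install -e .) makes myapp importable from anywhere in that environment.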
Hope that helps.
Two simple steps should make it happen. Python experts can comment if this is a good way to do it (especially going by the concluding caution raised towards the end of this post).
I would have done it like below.
First, I would have put an __init__.py in root so that the hierarchy looks like below. This way Python will treat the folder as a package.
root
├── Dockerfile
├── README.md
├── tox.ini
├── __init__.py
├── src
│   └── myapp
│       ├── __init__.py
│       ├── do_something.py
│       └── do_something_else.py
└── tests
    ├── __init__.py
    ├── test_do_something.py
    └── test_do_something_else.py
Then in "do_something.py", I would have added these lines at the top. In the second line please put the full path to the "root" directory.
import sys
sys.path += ['/home/SomeUserName/SomeFolderPath/root']
from src.myapp.do_something_else import do_it
Please note that the second line essentially modifies sys.path by adding the root folder path (which persists until the interpreter quits). If that is not acceptable in your case, this approach won't suit you.

Python package published with poetry is not found after install

Over the last few days, I have been working on a Python module. Until now, I have used poetry as a package management tool in many other projects, but this is my first time wanting to publish a package to PyPI.
I was able to run the poetry build and poetry publish commands. I was also able to install the published package:
$ pip3 install git-profiles
Collecting git-profiles
Using cached https://files.pythonhosted.org/packages/0e/e7/bac9027effd1e34a5b5718f2b35c0b28b3d67f3809e2f2981b6c7b58963e/git_profiles-1.1.0-py3-none-any.whl
Installing collected packages: git-profiles
Successfully installed git-profiles-1.1.0
However, right after the install, I am not able to run my package:
$ git-profiles --help
git-profiles: command not found
My project has the following structure:
git-profiles/
├── src/
│   ├── commands/
│   ├── executor/
│   ├── git_manager/
│   ├── profile/
│   ├── utils/
│   ├── __init__.py
│   └── git_profiles.py
└── tests
I tried to work with different scripts configurations in the pyproject.toml file but I've never been able to make it work after install.
[tool.poetry.scripts]
poetry = "src:git_profiles.py"
or
[tool.poetry.scripts]
git-profile = "src:git_profiles.py"
I don't know if this is a python/pip path/version problem or if I need to change something in the configuration file.
If it is helpful, this is the GitHub repository I'm talking about. The package is also published on PyPI.
Poetry's scripts section wraps around the console script definition of setuptools. As such, the entrypoint name and the call path you give it need to follow the exact same rules.
In short, a console script does more or less this from the shell:
import my_lib # the module isn't called src, that's just a folder name
# the right name to import is whatever you put at [tool.poetry].name
my_lib.my_module.function()
Which, if given the name my-lib-call (the name can be the same as your module, but it doesn't need to be) would be written like this:
[tool.poetry.scripts]
my-lib-call = "my_lib.my_module:function"
Adapted to your project structure, the following should do the job:
[tool.poetry.scripts]
git-profile = "git-profiles:main"
