I have a pyproject.toml with
[tool.poetry]
name = "my-project"
version = "0.1.0"
[tool.commitizen]
name = "cz_conventional_commits"
version = "0.1.0"
I add a new feature and commit it with the commit message:
feat: add parameter for new feature
That's one commit.
Then I call
commitizen bump
Commitizen will recognize a minor version increase, update the version in my pyproject.toml, commit that change, and tag it 0.2.0.
That's a second commit.
But now my pyproject.toml is "out of whack" (assuming I want my build version in sync with my git tags).
[tool.poetry]
name = "my-project"
version = "0.1.0"
[tool.commitizen]
name = "cz_conventional_commits"
version = "0.2.0"
I'm two commits in, one tagged, and things still aren't quite right. Is there a workflow to keep everything aligned?
Refer to support-for-pep621 and version_files in the Commitizen documentation.
You can add "pyproject.toml:^version" to the version_files list in pyproject.toml; on every bump, Commitizen will then also rewrite the lines of pyproject.toml matching the regex ^version, which keeps the [tool.poetry] version in sync:
[tool.commitizen]
name = "cz_conventional_commits"
version = "0.1.0"
version_files = [
    "pyproject.toml:^version"
]
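If you also want CI to verify that the two version fields stayed in sync after a bump, here is a minimal sketch (assuming Python 3.11+ for the standard-library tomllib; the check itself is my own, not part of Commitizen):
import sys
import tomllib

# Load pyproject.toml and compare the two version fields.
with open("pyproject.toml", "rb") as f:
    data = tomllib.load(f)

poetry_version = data["tool"]["poetry"]["version"]
cz_version = data["tool"]["commitizen"]["version"]

if poetry_version != cz_version:
    sys.exit(f"Version mismatch: poetry={poetry_version}, commitizen={cz_version}")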
Related
I am using the latest version of pip, 23.0.1. I have a pyproject.toml file with dependencies and optional dependency groups (aka "extras"). To avoid redundancy and make the optional dependency groups easier to manage, I would like to know how to have optional dependency groups require other optional dependency groups.
I have a pyproject.toml where the optional dependency groups have redundant overlaps in dependencies. I guess they could be described as "hierarchical". It looks like this:
[project]
name = 'my-package'
dependencies = [
'pandas',
'numpy>=1.22.0',
# ...
]
[project.optional-dependencies]
# development dependency groups
test = [
'my-package[chem]',
'pytest>=4.6',
'pytest-cov',
# ...
# Redundant overlap with chem and torch dependencies
'rdkit',
# ...
'torch>=1.9',
# ...
]
# feature dependency groups
chem = [
'rdkit',
# ...
# Redundant overlap with torch dependencies
'torch>=1.9',
# ...
]
torch = [
'torch>=1.9',
# ...
]
In the above example, pip install .[test] will include all of the chem and torch groups' packages, and pip install .[chem] will include the torch group's packages.
If I instead removed the overlaps and the references from one group to another, a user could still get the packages required for chem by running pip install .[chem,torch], but I work with data scientists who may not immediately realize that the torch group is a prerequisite for the chem group, and so on.
Therefore, I want a file that's something like this:
[project]
name = 'my-package'
dependencies = [
'pandas',
'numpy>=1.22.0',
# ...
]
[project.optional-dependencies]
# development dependency groups
test = [
'my-package[chem]',
'pytest>=4.6',
'pytest-cov',
# ...
]
# feature dependency groups
chem = [
'my-package[torch]',
'rdkit',
# ...
]
torch = [
'torch>=1.9',
# ...
]
This approach can't work as written because my-package is hosted in our private pip repository, so a reference like 'my-package[chem]' in the example above fetches the chem group's packages from the previously built and published version, not from the current source tree.
It appears that switching to Poetry and its pyproject.toml format/features could make this possible, but I would prefer not to change our build system too much. Is this possible with pip?
I think PDM has this solved.
It may be that PDM isn't doing anything special and the same approach works with plain pip: the trick is to make a third extra that depends on the other two. Example from the docs:
[project]
name = "foo"
version = "0.1.0"
[project.optional-dependencies]
socks = ["pysocks"]
jwt = ["pyjwt"]
all = ["foo[socks,jwt]"]
reference: PDM - Manage Dependencies
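To sanity-check what such self-referencing extras expand to without installing anything, here is a minimal sketch (assuming Python 3.11+ for the standard-library tomllib; expand_extra is a hypothetical helper, and it ignores any version specifier attached to the self-reference):
import tomllib

def expand_extra(extras, name, pkg_name):
    # Recursively expand one extra, following self-references like "foo[socks,jwt]".
    deps = set()
    for req in extras[name]:
        if req.startswith(pkg_name + "["):
            inner = req[len(pkg_name) + 1:-1]  # e.g. "socks,jwt"
            for sub in inner.split(","):
                deps |= expand_extra(extras, sub.strip(), pkg_name)
        else:
            deps.add(req)
    return deps

with open("pyproject.toml", "rb") as f:
    project = tomllib.load(f)["project"]

print(expand_extra(project["optional-dependencies"], "all", project["name"]))
# e.g. {'pysocks', 'pyjwt'}
Note that whether pip resolves the self-reference against the local source tree or against an index depends on the pip version and how the package is installed; with the private-index setup described in the question, older pips may fetch the previously published release, as observed.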
I'm trying to add the Seaborn dependency to my module using Poetry.
I've tried it in different ways, but always without success; maybe I'm doing it wrong.
Here's my current toml config file:
[tool.poetry]
name = "seaborn"
version = "0.1.0"
description = ""
authors = ["me"]
[tool.poetry.dependencies]
python = "3.9.6"
pandas = "^1.4.1"
jupyter = "^1.0.0"
scipy = "1.7.0"
numpy = "^1.22.3"
[tool.poetry.dev-dependencies]
pytest = "^5.2"
[build-system]
requires = ["poetry-core>=1.0.0"]
build-backend = "poetry.core.masonry.api"
I've tried on the CLI:
poetry add seaborn
But no success.
Here's the output:
poetry add seaborn
Using version ^0.11.2 for seaborn
Updating dependencies
Resolving dependencies... (0.0s)
AssertionError
at ~/.pyenv/versions/3.10.0/lib/python3.10/site-packages/poetry/mixology/incompatibility.py:111 in __str__
107│ )
108│
109│ def __str__(self):
110│ if isinstance(self._cause, DependencyCause):
→ 111│ assert len(self._terms) == 2
112│
113│ depender = self._terms[0]
114│ dependee = self._terms[1]
115│ assert depender.is_positive()
If I try adding it to the toml config file directly, like seaborn = "^0.0.1", the output is very similar:
poetry update
Updating dependencies
Resolving dependencies... (0.0s)
AssertionError
at ~/.pyenv/versions/3.10.0/lib/python3.10/site-packages/poetry/mixology/incompatibility.py:111 in __str__
107│ )
108│
109│ def __str__(self):
110│ if isinstance(self._cause, DependencyCause):
→ 111│ assert len(self._terms) == 2
112│
113│ depender = self._terms[0]
114│ dependee = self._terms[1]
115│ assert depender.is_positive()
Can anyone help me?
Thank you so much!
After a few hours of dropping modules, restarting PyCharm, and invalidating caches... my project is up to date without any issue!
For future note:
Do not name your modules/scripts after an already existing package (e.g. scipy, seaborn, and so on).
I cannot comment yet, so need to supply a new answer.
This issue has broad applicability beyond just the Seaborn module and should be renamed to something like "Cannot add package using Poetry, AssertionError incompatibility.py:111".
The existing answer by #diguex, which I upvoted, is exactly the fix needed. It helped me with the same problem when attempting to add 'flask-restx' to a demo project itself named 'flask-restx'.
Long story short: Poetry cannot add a package as a dependency of itself, and naming your module after an already existing package confuses Poetry into thinking it is doing just that. For more discussion, see: https://github.com/python-poetry/poetry/issues/3491
We've been using pipenv for dependency management for a while, and using micropipenv's protected functionality to check lock freshness - the idea here being that micropipenv is lightweight, so this is a cheap and cheerful way of ensuring that our dependencies haven't drifted during CI or during a docker build.
Alas, micropipenv has no such feature for poetry (it skips the hash check completely), and I am therefore left to "reverse-engineer" the feature on my own. Ostensibly this should be super easy - I've assembled the code posted later from what I traced through the poetry and poetry-core repos (Locker, Factory, core.Factory, and PyProjectTOML, primarily). This absolutely does not do the trick, and I'm at a loss as to why.
import json
from hashlib import sha256

# Only these sections of [tool.poetry] feed into poetry's content hash.
_relevant_keys = ["dependencies", "group", "source", "extras"]

def _get_content_hash(pyproject):
    # Mirrors Locker._get_content_hash from the poetry source.
    content = pyproject["tool"]["poetry"]
    print(content)
    relevant_content = {}
    for key in _relevant_keys:
        relevant_content[key] = content.get(key)
    print(json.dumps(relevant_content, sort_keys=True).encode())
    content_hash = sha256(
        json.dumps(relevant_content, sort_keys=True).encode()
    ).hexdigest()
    print(f"Calculated: {content_hash}")
    return content_hash

def is_fresh(lockfile, pyproject):
    metadata = lockfile.get("metadata", {})
    if "content-hash" in metadata:
        print(f"From file: {metadata['content-hash']}")
        return _get_content_hash(pyproject) == metadata["content-hash"]
    return False
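For reference, a minimal driver for the functions above (assuming Python 3.11+ for the standard-library tomllib; both files are plain TOML, so they load directly):
import tomllib

with open("pyproject.toml", "rb") as f:
    pyproject = tomllib.load(f)
with open("poetry.lock", "rb") as f:
    lockfile = tomllib.load(f)

print(is_fresh(lockfile, pyproject))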
Would love to figure out what exactly the heck I'm missing here - I'm guessing that the poetry Locker's _local_config gets changed at some point and I've failed to notice it.
References:
Locker: https://github.com/python-poetry/poetry/blob/a1a5bce96d85bdc0fdc60b8abf644615647f969e/poetry/packages/locker.py#L454
core.Factory: https://github.com/python-poetry/poetry-core/blob/afaa6903f654b695d9411fb548ad10630287c19f/poetry/core/factory.py#L24
Naturally, this ended up being a PEBKAC error. I was using the hash generation function from the master branch but using an earlier version of poetry on the command line. Once I used the function from the correct code version, everything was hunky dory.
I think this functionality actually exists in micropipenv now anyways lol
At work, we have a workflow where each branch is "named" by date. During the week, at least once, the latest branch gets pushed to production. What we need now is to get, via GitPython, the summary/commit messages of the changes between the latest branch in production and the new branch.
What I have tried to do:
import git
g = git.Git("pathToRepo")
r = git.Repo("pathToRepo")
g.pull() # get latest
b1commits = r.git.log("branch1")
b2commits = r.git.log("branch2")
This gives me all of the commit history from both branches, but I can't figure out how to compare them to get just the newest commit messages.
Is this possible to do in GitPython? Or is there a better solution?
I figured it out:
import git
g = git.Git(repoPath+repoName)
g.pull()
commitMessages = g.log('%s..%s' % (oldBranch, newBranch), '--pretty=format:%ad %an - %s', '--abbrev-commit')
Reading through the Git documentation, I found that I can compare two branches with the B1..B2 syntax (commits reachable from B2 but not from B1). I tried the same with GitPython and it worked; the other parameters are there for a custom format.
This solution uses GitPython:
import git

def get_commit_from_range(start_commit, end_commit):
    repo = git.Repo('path')
    # Three dots: commits reachable from either ref but not from both.
    commit_range = f"{start_commit}...{end_commit}"
    result = repo.iter_commits(commit_range)
    for commit in result:
        print(commit.message)
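A quick usage example (the date-style branch names are illustrative, matching the workflow in the question):
# Print the messages of commits that differ between the two branches.
get_commit_from_range("2023-01-02", "2023-01-09")
Note that this answer uses the three-dot form (B1...B2), which yields commits reachable from either branch but not both, rather than the two-dot range used in the previous answer.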
Well, the title is self-explanatory. What is the Python code equivalent of running git reset --hard (in the terminal) using the GitPython module?
You can use:
repo = git.Repo('c:/SomeRepo')
repo.git.reset('--hard')
Or if you need to reset to a specific branch:
repo.git.reset('--hard','origin/master')
Or in my case, if you want to just hard update a repo to origin/master (warning, this will nuke your current changes):
# blast any current changes
repo.git.reset('--hard')
# ensure master is checked out
repo.heads.master.checkout()
# blast any changes there (only if it wasn't checked out)
repo.git.reset('--hard')
# remove any extra non-tracked files (.pyc, etc)
repo.git.clean('-xdf')
# pull in the changes from the remote
repo.remotes.origin.pull()
I searched for reset in the documentation and found this:
class git.refs.head.HEAD(repo, path='HEAD')
reset(commit='HEAD', index=True, working_tree=False, paths=None, **kwargs)
Reset our HEAD to the given commit optionally synchronizing the index and working tree. The reference we refer to will be set to commit as well.
You can use:
repo = git.Repo('repo')
# ...
# Remove last commit
repo.head.reset('HEAD~1', index=True, working_tree=True)
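Based on that signature, a hard reset to a specific ref through the typed API looks like this (the ref name here is illustrative):
# Equivalent of `git reset --hard origin/master`:
# index=True and working_tree=True make both match the target commit.
repo.head.reset('origin/master', index=True, working_tree=True)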