Similar questions have been asked many times, but I was not able to find a solution to my specific problem.
I was playing around with setuptools_scm recently and at first thought it was exactly what I needed. I have it configured like this:
pyproject.toml
[build-system]
requires = ["setuptools_scm"]
build-backend = "setuptools.build_meta"
[project]
...
dynamic = ["version"]
[tool.setuptools_scm]
write_to = "src/hello_python/_version.py"
version_scheme = "python-simplified-semver"
and my __init__.py:
from ._version import __version__
from ._version import __version_tuple__
Relevant features it covers for me:
I can use semantic versioning
it is able to use *.*.*.devN version strings
it increments minor version in case of feature-branches
it increments patch/micro version in case of fix-branches
This is all cool. As long as I am on my feature-branch I am able to get the correct version strings.
What I particularly like is that the dev version string contains the commit hash and is thus unique across multiple branches.
My workflow now looks like this:
create feature or fix branch
commit, (push, ) publish
merge PR to develop-branch
As soon as I am on my feature-branch I am able to run python -m build, which generates a new _version.py with the correct version string according to the latest git tag found. If I add new commits, that is fine, as the devN part of the version string changes due to the commit hash. I could even run python -m twine upload dist/* now. My package is built with the correct version, so I simply publish it. This works perfectly fine locally and on CI, for both fix and feature branches alike.
The problem I am facing now is that I need slightly different behavior for my merged pull requests.
As soon as I merge, e.g. 0.0.1.dev####, I want to run my Jenkins job not on the feature-branch anymore, but on the develop-branch instead. The important part now is that I want to:
get develop-branch (done by CI)
update the version string to the same as on the branch, but without devN, so: 0.0.1
build and publish
In fact, setuptools_scm now changes the version to 0.0.2.dev###, but I would like to have 0.0.1.
I was tinkering a bit with creating git tags before running setuptools_scm or build, but I was not able to get the correct version string to put into the tag. At this point I am stuck.
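For what it's worth, the version that setuptools_scm would derive can be inspected without a full build via its documented get_version API; a minimal sketch, run from the repository root:
from setuptools_scm import get_version

# Same scheme as configured in pyproject.toml above
print(get_version(root=".", version_scheme="python-simplified-semver"))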
Is anyone aware of a solution that gives me:
minor increment on feature-branches + add .devN
patch/micro increment on fix-branches + add .devN
no increment on develop-branch, with the version string containing only major.minor.patch of the merged branch
TL;DR: turning off writing the version number to a file every time setuptools_scm runs could solve your problem; alternatively, add the version file to .gitignore.
Explanation:
I also just started using setuptools_scm, so I am not very confident with it yet.
But as far as I understand, the version number is derived and incremented according to the state of your repository (the detailed logic is documented here: https://github.com/pypa/setuptools_scm/#default-versioning-scheme).
If I am not mistaken, the tool does exactly what it is expected to: it does NOT derive the version from the tag alone, but also adds a devSomething suffix, because in your case the tag you set does not reference the most recent commit on the develop branch head.
I also had the problem that letting setuptools_scm generate a version and also write it to a file would itself change the worktree state since the last commit, again generating a dev version number.
To get a "clean" version number (e.g. v0.0.1) I had to do the tagging after merging (with a merge commit), since the merge commit is also taken into account by the version numbering logic.
Still, my setup is currently less complex than yours: just feature and fix branches and a main branch without develop, so fewer merge commits (I chose to do merge commits, so no linear history). Now, after merging with a commit, I create a tag manually and choose its name myself.
And this also only works for me if I opt out of writing the version number into a file, which I did by inserting the following into pyproject.toml:
[tool.setuptools_scm]
# intentionally left empty: the write_to option leads to an unclean workspace
# during build, which in turn makes setuptools_scm produce wheels with
# unclean (dev) version numbers
# write_to = "version.txt"
Since setuptools_scm runs during the build, a new version file is generated at that point as well, which pollutes your worktree. Because your worktree will never be clean this way, you always get a dev version number. To still have a version file but have it ignored during the build, add the file to your .gitignore.
My approach is not perfect (some manual steps remain), but for now it works for me.
Certainly not 100% applicable to your CI scenario, but maybe you could change the order of merges and tags. I hope this helps somehow.
Related
I'm automating changing files in GitHub using pygit2. Sometimes the files change in GitHub while I am processing a repo, so I want to pull() before I push().
Since this is automated, I would like to avoid conflicts by having either my local changes always override the remote, or vice versa. This seems like a very simple scenario, but after hours of scouring the internet I have found zero examples of someone doing this. The pygit2 source itself has some examples that get close, but the "handle conflicts" portion is just a "TODO" comment.
It looks like pygit2 should support it, but none of the APIs seem to do this.
For example,
Repository.merge_commits(ours, theirs, favor='normal', flags={}, file_flags={})
When I set favor="theirs" or favor="ours" and purposely force a conflict, I still get conflicts.
I tried this:
ancestor_id = repo.merge_base(repo.head.target, remote_master_id)
repo.merge_trees(ancestor_id, repo.head, remote_master_id, favor="theirs")
No conflict this time, but I somehow end up with the repo in a state where both changes (ours and theirs) are in the commit history, but the file itself is missing either change.
I'm just guessing here, since I have no clue what merge_trees does (except "merge trees"), and experimenting with values of ancestor_id.
Is there a way to get pygit2 to do what I want?
I cloned the 'Apache/tomcat' git repo to use some info about commits.
However, when I use git.Repo('repo local address').iter_commits(), I can't get some commits.
Besides, I can't find these commits with the GitHub search engine.
For example, commit 69c56080fb3355507e1b55d014ec0ee6767a6150 is in the 'Apache tomcat' repo; however, searching '69c56080fb3355507e1b55d014ec0ee6767a6150' with 'in this repository' returns nothing.
This is baffling to me.
It seems like the commit isn't in the master branch, so it can't be searched?
I want to know the reason behind this and how to get info about these 'missing' commits in Python.
Thanks.
repo.iter_commits(), with no arguments, gives you the commits which can be reached by tracing back through the parent(s) of the current commit. In other words, if you are in the master branch, it will only give you commits that are part of the master branch.
You can give it a rev argument which, among other things, can be a branch name. For example, iter_commits(rev='8.5.x') ought to give you all commits in the 8.5.x branch, which will include 69c5608. You can use the repo.branches property if you need to get a list of branches.
Alternatively, if you already know the hash of a single commit that you want to look up, you can use repo.commit(), again with a rev parameter which in this case is the full or abbreviated commit hash: commit(rev='69c5608').
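Putting both together, a minimal sketch (assuming GitPython and a local clone of the tomcat repo at path/to/tomcat; if 8.5.x was never checked out locally, rev='origin/8.5.x' may be needed):
from git import Repo

repo = Repo("path/to/tomcat")

# All commits reachable from the 8.5.x branch head
for commit in repo.iter_commits(rev="8.5.x"):
    print(commit.hexsha, commit.summary)

# Look up a single commit by full or abbreviated hash
c = repo.commit("69c5608")
print(c.authored_datetime, c.summary)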
I believe the issue here is that this commit is in the branch 8.5.x and not master. You can see this in the first link; it will show which branches include it. The GitHub search only covers the default (master/main/trunk) branch.
To find it via the GitPython library, try changing to that branch. See these instructions on how to switch branches: https://gitpython.readthedocs.io/en/stable/tutorial.html#switching-branches
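A minimal sketch of that approach (assuming GitPython and that the 8.5.x branch exists in the clone):
from git import Repo

repo = Repo("path/to/tomcat")
repo.git.checkout("8.5.x")  # switch the worktree to the 8.5.x branch

# With no rev argument, iter_commits() now walks the 8.5.x history
for commit in repo.iter_commits():
    print(commit.hexsha)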
I'm using drake as an external in another bazel project, and it's adding ...runfiles/drake as well as ...runfiles/drake/bindings to the PYTHONPATH. The latter pretty much only includes pydrake (which is what I want), but the former includes a bunch of other directories as modules, including common, examples, tools, and bindings, which results in name collisions with my own project. Is this expected behavior? What's the best way to deal with this? I tested the examples in drake-external-examples/drake_bazel_external and I'm seeing the same issue (relevant commit here).
TL;DR: The best way to handle this is to make sure your imports are scoped to your project; e.g. instead of from common import foo, do from drake_bazel_external.common import foo.
Here's a snippet from a sample Bazel project that does this with Python:
https://github.com/EricCousineau-TRI/repro/blob/39f79009a2e89b987f072276d1921a282f63e6a1/python/bazel_py_example/mid/py3_bin.py#L3
To root-cause this, here's my attempt to instrument your repro with some more output, pinned to drake#v0.18.0:
drake_bazel_external/apps/bar.py (branch)
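The gist of that instrumentation is just printing sys.path and checking which file a colliding module resolves to; a rough sketch (the import of common stands in for whichever module collides):
import sys

print("path:")
for p in sys.path:
    print("  " + p)

import common  # resolves to Drake's runfiles, not your own project
print("common:", common.__file__)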
Here's a preview of the Python paths, which corroborate what you're seeing:
path:
{source_tree}/apps
{runfiles}
{runfiles}/drake/bindings
{runfiles}/lcmtypes_bot2_core/lcmtypes
{runfiles}/lcmtypes_bot2_core
{runfiles}/lcmtypes_robotlocomotion/lcmtypes
{runfiles}/lcmtypes_robotlocomotion
{runfiles}/meshcat_python/src
{runfiles}/spdlog
{runfiles}/meshcat_python
{runfiles}/lcm
{runfiles}/ignition_math
{runfiles}/drake
{runfiles}/drake_external_examples
/usr/lib/python36.zip
/usr/lib/python3.6
/usr/lib/python3.6/lib-dynload
/usr/lib/python3/dist-packages
common: {runfiles}/drake/common/__init__.py
Ultimately, this is expected behavior. Here's the Drake issue, and a related bazelbuild issue:
https://github.com/RobotLocomotion/drake/issues/7871
https://github.com/bazelbuild/bazel/issues/7653
The best method is to use project-specific imports; for now, avoid the collision by using a more specific import.
I will re-open the Drake issue, but will keep it pegged at low priority since there's a better solution (IMO), and will require more infrastructure work to make it happen.
Thanks!
EDIT: To be specific, the thing that is most acutely tripping up your example is the fact that Bazel is generating @drake//common:__init__.py. It's only generated because of the legacy_create_init flag, as well as the fact that we want the file libdrake_marker.so.
There's still the fact that drake (among other repositories) is on the Python path at all.
EDIT 2: Filed a new issue on Jeremy's request: https://github.com/RobotLocomotion/drake/issues/13320
Has anyone used the GlideRecord library for Python? I can't seem to get it to perform some fairly basic functionality. I want to add a few sysparm_query parameters. This is just a code snippet; I had to manually edit it for security purposes. Hopefully I didn't introduce any typos.
for i in glide1, glide2:
    i.set_credentials('xxxx', 'xxxx')
    i.set_server("https://<instance>.service-now.com/")
    i.addQuery("active", "true")

def getIncidents(glide1):
    group = "mygroup"
    glide1.addQuery('assignment_group', group)
    print glide1.query_data['sysparm_query'] + '\n'
    print glide1.getQuery()[50:]  # just to avoid too much output
gives me the output:
active=true^assignment_group=mygroup
displayvalue=true&JSONv2&sysparm_record_count=100&sysparm_action=getRecords&sysparm_query=
I cannot get the query data to append. Perhaps I should look at doing the queries manually? Here is a link to the GlideRecord git repo:
https://github.com/bazizi/ServiceNow_GlideRecord_API/blob/master/GlideRecord/init.py
Cheers, Arthur
I just realized that the getQuery() member function I had defined only returned the base query URL (not including the query itself). I had initially added this function for testing purposes and wrongly added it to the documentation.
I just fixed this issue and committed to the GitHub repository. Please pull from the git repository again, or, if you installed using pip, run the following commands to reinstall it from scratch:
pip uninstall GlideRecord
pip install GlideRecord
In terms of setting the assignment group by name, however, I still need to find out how ServiceNow hashes the assignment_group, or whether there is another way this query can be added; that is, I have no fix for now.
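(One untested avenue, since ServiceNow encoded queries support dot-walking through reference fields, would be to query the group's name field directly with the same addQuery call:)
# assumption: dot-walked encoded queries are allowed on the instance
glide1.addQuery('assignment_group.name', 'mygroup')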
Thanks
Behnam
If I were to tag a bunch of images via XMP, in Python, what would be the best way? I've used Perl's Image::ExifTool and I am very much used to its reliability. I mean the thing never bricked on tens of thousands of images.
I found this, backed by some heavy-hitters like the European Space Agency, but it's clearly marked as unstable.
Now, assuming I am comfortable with C++, how easy is it to, say, use the Adobe XMP Toolkit directly in Python? Having never done this before, I am not sure what I'd be signing up for.
Update: I tried some libraries out there, including the aforementioned toolkit, and they are still pretty immature with glaring problems. I resorted to writing a Perl-based server that accepts XML requests to read and write metadata, using the combat-tested Image::EXIF. The amount of code is actually very light and definitely beats torturing yourself trying to get the Python libraries to work. The server solution is language-agnostic, so it's a twofer.
Well, the website says that python-xmp-toolkit uses Exempi, which is based on the Adobe XMP Toolkit, via ctypes. What I'm trying to say is that you're not likely to create a better wrapping of the C++ code yourself. If it's unstable (i.e. buggy), it's most likely still cheaper for you to contribute patches than to start from scratch.
However, in your specific situation it depends on how much functionality you need. If you just need a single function, then wrapping the C++ code into a small C extension library or with Cython is feasible. If you need all the functionality and flexibility, you have to create wrappers manually or with SWIG, basically repeating the work already done by other people.
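For a sense of scale, the ctypes route looks roughly like this; a sketch only, assuming libexempi is installed (xmp_init and xmp_terminate are part of Exempi's C API):
import ctypes
import ctypes.util

# Locate and load the Exempi shared library (name varies by platform)
libname = ctypes.util.find_library("exempi")
exempi = ctypes.CDLL(libname)

exempi.xmp_init()        # initialize the library
# ... call further xmp_* functions here, defining argtypes/restype as needed ...
exempi.xmp_terminate()   # clean up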
I struggled for several hours with python-xmp-toolkit, and eventually gave up and just wrapped calls to ExifTool.
There is a Ruby library that wraps ExifTool as well (albeit much better than what I created); I feel it'd be worth porting to Python as a simple way of dealing with XMP.
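A minimal version of such a wrapper might look like this; a sketch, assuming the exiftool binary is on PATH and using its documented -json output and XMP-dc:Subject tag:
import json
import subprocess

def read_xmp_subjects(path):
    # Ask exiftool for the XMP subject list as JSON
    result = subprocess.run(
        ["exiftool", "-json", "-XMP-dc:Subject", path],
        capture_output=True, text=True, check=True,
    )
    data = json.loads(result.stdout)
    # exiftool returns a bare string when there is only one subject
    return data[0].get("Subject", [])

def add_xmp_subject(path, subject):
    # -overwrite_original avoids exiftool's default backup copy
    subprocess.run(
        ["exiftool", f"-XMP-dc:Subject+={subject}", "-overwrite_original", path],
        check=True,
    )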
For Python 3.x there's py3exiv2, which supports editing XMP metadata.
With py3exiv2 you can read and write all standard metadata, create your own XMP namespace, or extract the thumbnail embedded in an image file.
One thing I like about py3exiv2 is that it's built on the (C++) exiv2 library, which seems well-maintained.
I did encounter a problem, though, when installing it on my system (Ubuntu 16.04). To get it working I first had to install the latest version of libexiv2-dev (sudo apt-get install libexiv2-dev), and only after that install py3exiv2 (sudo -H pip3 install py3exiv2).
Here's how I've used py3exiv2 to write a new tag:
import pyexiv2

metadata = pyexiv2.ImageMetadata("file_name.jpg")
metadata.read()                             # load existing metadata from the file
key = "Xmp.xmp.CustomTagKey"
value = "CustomTagValue"
metadata[key] = pyexiv2.XmpTag(key, value)  # attach the new XMP tag
metadata.write()                            # save changes back to the file
(There's also a tutorial in the documentation)
For people finding this thread in the future, I would like to share my solution. I put a package up on the Python Package Index (PyPI) called imgtag. It lets you do basic XMP subject field tag editing using python-xmp-toolkit, but abstracts away all of the frustrating nonsense of actually using python-xmp-toolkit into one-line commands.
Install exempi for your platform, then run:
python3 -m pip install imgtag
Now you can use it as such:
from imgtag import ImgTag
# Open image for tag editing
test = ImgTag(
filename="test.jpg", # The image file
force_case="lower", # Converts the case of all tags
# Can be `None`, `"lower"`, `"upper"`
# Default: None
strip=True, # Strips whitespace from the ends of all tags
# Default: True
no_duplicates=True # Removes all duplicate tags (case sensitive)
# Default: True
)
# Print existing tags
print("Current tags:")
for tag in test.get_tags():
print(" Tag:", tag)
# Add tags
test.add_tags(["sleepy", "happy"])
# Remove tags
test.remove_tags(["cute"])
# Set tags, removing all existing tags
test.set_tags(["dog", "good boy"])
# Save changes and close file
test.close()
# Re-open for tag editing
test.open()
# Remove all tags
test.clear_tags()
# Delete the ImgTag object, automatically saving and closing the file
del(test)
I haven't yet added methods for the other XMP fields like description, date, creator, etc. Maybe someday I will, but if you look at how the existing functions work in the source code, you can probably figure out how to add a method yourself. If you do add more methods, please make a pull request. :)
You can use ImageMagick convert; IIRC there's a Python module for it as well.