I have a GitHub repo that contains three protected branches: master, staging & uat. Anyone may create other branches to make changes, but I would like a way to make sure that people merge in this order:
users_branch -> uat -> staging -> master.
I have looked at pre-receive hooks using Python but can't seem to get the information I need about which branches are being merged to build this logic. The only arguments available in pre-receive are: base, commit & ref.
Is there any way to enforce that only uat may merge into staging and only staging may merge into master?
You could set up a workflow with git-flow.
Or you could set up a manual process where commit rights to those branches reside with one person who is responsible for pulling in changes and merging them in the right order.
One thing to remember with Git is that these controls will only apply at your 'central' repo. You can't control what happens in the individual cloned repos. Also, since hooks are not distributed with repositories for security reasons, you will not be able to enforce this order via hooks on the clones either.
I guess the best you could do is to check every merge commit among the newly pushed commits, verifying that it has exactly two parents and that the second parent is contained in the branch you want to enforce merging from.
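A rough sketch of that check as a pre-receive hook, assuming a plain Git server where you control the hooks (the ALLOWED_SOURCE mapping and the error messages are only illustrative):

#!/usr/bin/env python3
# Sketch of a pre-receive hook: reject pushes to staging/master whose merge
# commits do not take their second parent from the expected source branch.
import subprocess
import sys

ALLOWED_SOURCE = {
    "refs/heads/staging": "refs/heads/uat",
    "refs/heads/master": "refs/heads/staging",
}

def git(*args):
    return subprocess.check_output(("git",) + args, text=True).strip()

for line in sys.stdin:
    old, new, ref = line.split()
    source = ALLOWED_SOURCE.get(ref)
    if source is None:
        continue  # not one of the protected branches
    # every merge commit being pushed must have exactly two parents, and its
    # second parent must already be contained in the required source branch
    for merge in git("rev-list", "--merges", f"{old}..{new}").split():
        parents = git("rev-list", "--parents", "-n", "1", merge).split()[1:]
        if len(parents) != 2:
            sys.exit(f"{merge}: only simple two-parent merges are allowed")
        if subprocess.call(["git", "merge-base", "--is-ancestor", parents[1], source]) != 0:
            sys.exit(f"{ref} only accepts merges coming from {source}")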
But you cannot add arbitrary git hooks to GitHub repositories anyway, or can you?
Similar questions have been raised many times, but I was not able to find a solution for my specific problem.
I was playing around with setuptools_scm recently and at first thought it was exactly what I needed. I have it configured like this:
pyproject.toml
[build-system]
requires = ["setuptools_scm"]
build-backend = "setuptools.build_meta"
[project]
...
dynamic = ["version"]
[tool.setuptools_scm]
write_to = "src/hello_python/_version.py"
version_scheme = "python-simplified-semver"
and my __init__.py
from ._version import __version__
from ._version import __version_tuple__
Relevant features it covers for me:
I can use semantic versioning
it is able to use *.*.*.devN version strings
it increments minor version in case of feature-branches
it increments patch/micro version in case of fix-branches
This is all cool. As long as I am on my feature-branch I am able to get the correct version strings.
What I particularly like is that the dev version string contains the commit hash and is thus unique across multiple branches.
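For checking, I can query the computed version directly without building (small sketch, run from the project root):

from setuptools_scm import get_version

# prints the version setuptools_scm would assign to the current working tree
print(get_version(version_scheme="python-simplified-semver"))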
My workflow now looks like this:
create feature or fix branch
commit, (push, ) publish
merge PR to develop-branch
As soon as I am on my feature-branch I am able to run python -m build, which generates a new _version.py with the correct version string according to the latest git tag found. If I add new commits, that is fine, as the devN part of the version string changes due to the commit hash. I could even run python -m twine upload dist/* now. My package is built with the correct version, so I simply publish it. This works perfectly fine locally and on CI for both fix and feature branches alike.
The problem that I am facing now is that I need a slightly different behavior for my merged pull requests.
As soon as I merge, e.g. 0.0.1.dev####, I want to run my Jenkins job not on the feature-branch anymore, but on the develop-branch instead. And the important part now is, I want to:
get develop-branch (done by CI)
update the version string to the same as on the branch but without devN, so: 0.0.1
build and publish
In fact, setuptools_scm is changing the version to 0.0.2.dev### now, and I would like to have 0.0.1.
I was tinkering a bit with creating git tags before running setuptools_scm or build, but I was not able to get the correct version string to put into the tag. At this point I am stuck.
Is anyone aware of a solution that gives me:
minor increment on feature-branches + add .devN
patch/micro increment on fix-branches + add .devN
no increment on develop-branch and version string only containing major.minor.patch of merged branch
TLDR: turning off writing the version number to a file every time setuptools_scm runs could maybe solve your problem; alternatively, add the version file to .gitignore.
Explanation:
I also just started using setuptools_scm, so I am not very confident in using it yet.
But, as far as I understand, the version number is derived and incremented according to the state of your repository (the detailed logic is documented here: https://github.com/pypa/setuptools_scm/#default-versioning-scheme).
If I am not mistaken, the tool now does exactly what it is expected to: it does NOT produce a version derived only from the tag, but also adds a devSomething, because the tag you've set does not reference the most recent commit on the develop branch head in your case.
I also had the problem that letting setuptools_scm generate a version and configuring it to write that version to a file leads to yet another change since the last commit, again generating a dev version number.
To get a "clean" (e.g. v0.0.1) version number I hat to do the tagging after merging (with merge commit) since the merge commit was also taken into account for the version numbering logic.
Still, my setup is currently less complex than yours: just feature and fix branches and a main branch without develop, so fewer merge commits (I chose to do merge commits, so no linear history). Now, after merging with a merge commit, I create a tag manually and choose its name myself.
And this also only works for me if I opt out of writing the version number into a file. I have done this by inserting the following into pyproject.toml:
[tool.setuptools_scm]
# intentionally empty/commented out
# write_to option leads to an unclean workspace during build
# which in turn leads setuptools_scm to pick this up during the build and produce wheels with unclean version numbers
# write_to = "version.txt"
Since setuptools_scm runs during the build, a new version file is also generated, which pollutes your worktree. Since your worktree will never be clean this way, you always get a dev version number. To still have a version file but have it ignored during the build, add the file to your .gitignore.
My approach is not perfect and involves some manual steps, but for now it works for me.
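To make the order of steps concrete, here is a rough sketch of what my manual "merge first, tag afterwards, build last" flow amounts to (untested; the branch, remote and tag name are only illustrative, not a drop-in CI step):

import subprocess
from setuptools_scm import get_version

def run(*cmd):
    subprocess.check_call(list(cmd))

# 1. The PR has already been merged (with a merge commit) into the target branch.
run("git", "checkout", "main")
run("git", "pull")

# 2. Tag the merge commit with the release version I want; I choose the name by hand.
release_tag = "v0.0.1"
run("git", "tag", "-a", release_tag, "-m", "Release " + release_tag)

# 3. With HEAD sitting on the tag and a clean worktree (no write_to file),
#    setuptools_scm now reports the plain tag version ...
print(get_version())  # -> "0.0.1"

# 4. ... so building now produces a cleanly versioned package.
run("python", "-m", "build")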
Certainly not 100% applicable in your CI scenario, but maybe you could change the order of doing merges and tags. I hope this helps somehow.
I'm automating file changes in GitHub using pygit2. Sometimes the files have changed in GitHub while I am processing a repo, so I want to pull() before I push().
Since this is automated, I would like to avoid conflicts by either having my local changes always override the remote, or vice versa. This seems like a very simple scenario, but after hours of scouring the internet I have found zero examples of someone doing this. The pygit2 source itself has some examples that get close, but the "handle conflicts" portion is just a "TODO" comment.
It looks like pygit2 should support it, but none of the APIs seem to do this.
For example,
Repository.merge_commits(ours, theirs, favor='normal', flags={}, file_flags={})
When I set favor="theirs" or favor="ours" and purposely force a conflict, I still get conflicts.
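For completeness, this is roughly how I am calling it (simplified sketch; the repository path and remote/branch names are placeholders):

# Simplified sketch of my flow: fetch, then merge the remote head into my
# local head, hoping `favor` resolves any content conflicts.
import pygit2

repo = pygit2.Repository("/path/to/repo")          # placeholder path
repo.remotes["origin"].fetch()
remote_master_id = repo.lookup_reference("refs/remotes/origin/master").target

# merge_commits returns an in-memory Index; I expected favor="theirs" to
# auto-resolve content conflicts, but index.conflicts is still set.
index = repo.merge_commits(repo.head.target, remote_master_id, favor="theirs")
print(index.conflicts)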
I tried this:
ancestor_id = repo.merge_base(repo.head.target, remote_master_id)
repo.merge_trees(ancestor_id, repo.head, remote_master_id, favor="theirs")
No conflict now, but I somehow end up with the repo in a state where both changes (ours and theirs) are in the commit history, but the file itself is missing either change.
I'm just guessing here, since I have no clue what merge_trees does (except "merge trees"), and experimenting with values of ancestor_id.
Is there a way to get pygit2 to do what I want?
I cloned the 'Apache/tomcat' git repo to get some info about commits.
However, when I use git.Repo('local repo path').iter_commits(), I can't get some of the commits.
Besides, I can't find these commits with the GitHub search engine.
For example, commit 69c56080fb3355507e1b55d014ec0ee6767a6150 is in the 'Apache/tomcat' repo; however, searching '69c56080fb3355507e1b55d014ec0ee6767a6150' with 'in this repository' returns nothing.
This puzzles me.
It seems like the commit isn't in the master branch, so it can't be searched?
I want to know the theory behind this and how to get info about these 'missing' commits in Python.
Thanks.
repo.iter_commits(), with no arguments, gives you the commits which can be reached by tracing back through the parent(s) of the current commit. In other words, if you are in the master branch, it will only give you commits that are part of the master branch.
You can give it a rev argument which, among other things, can be a branch name. For example, iter_commits(rev='8.5.x') ought to give you all commits in the 8.5.x branch, which will include 69c5608. You can use repo.branches if you need to get a list of branches.
Alternatively, if you already know the hash of a single commit that you want to look up, you can use repo.commit(), again with a rev parameter which in this case is the full or abbreviated commit hash: commit(rev='69c5608').
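Putting both together, a minimal sketch (the local clone path is assumed; the branch name is from the Tomcat repo):

from git import Repo

repo = Repo("/path/to/local/tomcat")   # assumed local clone of apache/tomcat

# all commits reachable from the 8.5.x branch head
for commit in repo.iter_commits(rev="8.5.x"):
    print(commit.hexsha, commit.summary)

# or look a single commit up directly by its (abbreviated) hash
c = repo.commit("69c5608")
print(c.hexsha, c.authored_datetime)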
I believe the issue here is that this commit is in the 8.5.x branch and not in master. You can see this in the first link; it shows which branches include it. The GitHub search only searches the default (master/main/trunk) branch.
To find it via git python library, try changing to that branch. See these instructions on how to switch branches: https://gitpython.readthedocs.io/en/stable/tutorial.html#switching-branches
I need to modify an option of the accounting configuration (menu Accounting > Configuration > Accounting).
As you know, those options belong to a Transient Model named account.config.settings, which inherits from res.config.settings.
The problem is that even if I modify no option and click on Apply, Odoo keeps loading forever. I put the log in debug_sql mode and realised that, after clicking on Apply, Odoo starts making thousands of SQL queries, which is why it never stops loading.
I made a database backup and restored it in a newer instance of Odoo 8. In this instance, when I click on Apply, Odoo makes several SQL queries, but not as many as in the other instance, so it works perfectly.
My conclusion was that the problem could be in the instance code (not in the database), so I looked for all the modules inheriting from account.config.settings and updated their repositories to go back to the same commits as in the wrong instance (with git checkout xxx).
Afterwards I expected the newer instance to start failing when clicking on Apply, but it keeps working OK.
So I am running out of ideas. I am thinking about loading the backup database in the newer instance just to change the option I need, and afterwards restoring it again in the older instance, but I would prefer to avoid that since I think it is a bit risky.
Any ideas? What more can I try to find out the problem?
Finally I found the guilty module. It was account_due_list from the repository account-payment of the Odoo Community Association. The commit which fixes the problem is https://github.com/OCA/account-payment/commit/d7a09399982c80bb0f9465c44b9dc2a2b17e557a#diff-57131fd364915a56cbf8696d74e19478, merged on September 22nd, 2016. Its title is "check if currency id not changed per company, remove it from create values".
The computed field maturity_residual depended on company_id.currency_id. This dependency had to be removed because it was the cause of the whole problem: it triggered thousands of SQL queries, which made Odoo load forever.
Old and wrong code
@api.depends('date_maturity', 'debit', 'credit', 'reconcile_id',
             'reconcile_partial_id', 'account_id.reconcile',
             'amount_currency', 'reconcile_partial_id.line_partial_ids',
             'currency_id', 'company_id.currency_id')
def _maturity_residual(self):
    ...
New and right code
@api.depends('date_maturity', 'debit', 'credit', 'reconcile_id',
             'reconcile_partial_id', 'account_id.reconcile',
             'amount_currency', 'reconcile_partial_id.line_partial_ids',
             'currency_id')
def _maturity_residual(self):
    ...
I find it very risky to update repositories to the latest version, due to exactly what @CZoellner says: sometimes there are weird commits which can destroy database data. So these are the consequences of not doing that.
I would like to run the following command from a Python script using dulwich:
$ git branch --contains <myCommitSha> | wc -l
What I intend is to check whether a particular commit (sha) is contained in more than one branch.
Of course I thought that I could execute the above command from Python and parse the output (parse the number of branches), but that's a last-resort solution.
Any other ideas/comments? Thanks in advance.
Just in case someone was wondering how to do this now using gitpython:
repo.git.branch('--contains', YOURSHA)
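The call returns the plain text output of git branch, so to count the containing branches (the | wc -l part of the original command), you can do something like:

# count the branches containing the commit (equivalent of piping to `wc -l`)
out = repo.git.branch('--contains', YOURSHA)
num_branches = len([line for line in out.splitlines() if line.strip()])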
Since branches are just pointers to random commits and they don't "describe" trees in any way, there is nothing linking some random commit TO a branch.
The only sensible way to check whether a given commit is an ancestor of the commit a branch points to is to traverse the ancestor chain from the branch tip downwards.
In other words, in dulwich I would iterate over branches and traverse backwards to see if a sha is on the chain.
I am rather certain that's exactly what git branch --contains <myCommitSha> does as I am not aware of any other shortcut.
Since your choice is (a) make Python do the iteration or (b) make C do the same iteration, I'd just go with C. :)
There is no built-in function for this, but you can of course implement this yourself.
You can also just do something like this (untested):
branches = [ref for ref in repo.refs.keys(base=b"refs/heads/")
            if any(entry.commit.id == YOURSHA  # YOURSHA: full hex sha as bytes
                   for entry in repo.get_walker(include=[repo.refs[b"refs/heads/" + ref]]))]
This will give you a list of all the branch heads that contain the given commit, but it has a runtime of O(n*m), n being the number of commits in your repo and m being the number of branches. The git implementation probably has a runtime of O(n).
In case anyone uses GitPython and wants all branches
import git
gLocal = git.Git("<LocalRepoLocation>")
gLocal.branch('-a', '--contains', '<CommitSHA>').split('\n')