For a Python script I'm writing (using Python 2.7 on Windows 7), I need to be able to modify a branch with a given commit: adding it (cherry-pick) if the commit is missing, or reverting it if it's already present.
Apparently revert has not been wrapped in GitPython's Repo class, so I tried to use Git directly with:
repo.git.revert(reference)
where reference is one of the commits returned by repo.iter_commits("master")
What happens is that the script hangs on that command and becomes idle; I then have to kill the command prompt window.
If I go into the working directory and explore the repository, I can see (with git diff) that after the execution the changes have been applied, even though no new commit is visible if I git log.
Any ideas about what, if anything, I'm doing wrong?
I solved the mystery by trying to git commit the applied changes manually. Git complained about a swap file in the working directory.
So, the problem was that the command was being executed as if it had been run from a terminal, and it was waiting for me to edit the commit message! I needed to run the revert command with the no-edit option.
I changed the method invocation to:
repo.git.revert(reference.hexsha, no_edit=True)
(Notice that GitPython requires the underscore as a separator. Also, using the hexsha property explicitly is not required, since reference would be converted to its str() representation anyway.)
It seems to work.
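For completeness, here's a minimal sketch of the whole add-or-revert behavior described at the top, assuming GitPython; the repository path and target sha are hypothetical placeholders:

import git  # GitPython

repo = git.Repo(r"C:\path\to\repo")  # hypothetical repository path
target_sha = "0123abc"               # hypothetical commit to toggle

# Shas currently reachable from master
on_master = {c.hexsha for c in repo.iter_commits("master")}

if target_sha in on_master:
    # Already present: revert it. no_edit=True becomes --no-edit,
    # which stops git from waiting on a commit-message editor.
    repo.git.revert(target_sha, no_edit=True)
else:
    # Missing: cherry-pick it. GitPython turns the underscore into
    # a dash, so this invokes "git cherry-pick".
    repo.git.cherry_pick(target_sha)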
Related
I'm trying to write a simple Python script that:
checks if it is running inside a git folder
if so, fetch the latest commit, else skip
it should work under the three platforms: Linux, Windows, and Mac
I have this code that works correctly under Linux:
from subprocess import call, check_output, STDOUT
import os

if call(["git", "branch"], stderr=STDOUT, stdout=open(os.devnull, 'w')) != 0:
    # Not a git folder
    commit = ''
else:
    # Inside a git folder: fetch the latest commit
    commit = check_output(['git', 'rev-parse', 'HEAD'])

print(commit)
but I have no way of checking whether it will work under Windows and Mac.
Does it work? Is there any way of checking/knowing this sort of thing when one has no access to the other operating systems?
You don't want to run git branch to detect whether you're in a Git repository, because you may or may not have any branches. To detect whether you're able to use Git commands, you'll want to run something like git rev-parse --git-dir, which will exit non-zero if you're not within a Git repository.
However, there are a couple of other issues with your code. First of all, in a new repository (one created fresh with git init), there will be a .git directory and the above command will succeed, but HEAD will not point anywhere. Therefore, your git rev-parse HEAD command will fail and print HEAD and an error.
Finally, if you want to parse a revision, you should usually use --verify so that you don't print the dummy HEAD value on failure. So your invocation should look like git rev-parse --verify HEAD.
Ultimately, it's up to you to figure out what you want to do in a newly initialized repository, whether that's fail or fall back to an empty string.
The behaviors I've described here are consistent across platforms; they're built into Git and well defined.
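Putting these pieces together, a sketch of the check from Python could look like the following (latest_commit is just an illustrative name, not an existing API):

from subprocess import check_output, CalledProcessError, STDOUT

def latest_commit():
    """Return HEAD's sha, or '' outside a repo or in a fresh one."""
    try:
        # Exits non-zero (raising CalledProcessError) outside a Git repo.
        check_output(['git', 'rev-parse', '--git-dir'], stderr=STDOUT)
        # --verify fails cleanly in a newly initialized repository,
        # where HEAD does not point at any commit yet.
        out = check_output(['git', 'rev-parse', '--verify', 'HEAD'],
                           stderr=STDOUT)
        return out.decode().strip()
    except CalledProcessError:
        return ''

print(latest_commit())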
There's a check_output function in the subprocess library:
from subprocess import check_output

try:
    # Use Python to parse this log for info; this is your entire last commit
    logs = check_output(['git', 'log', '-1', '--stat']).decode("UTF-8")
except Exception as e:
    # Do whatever you want to do otherwise if not a git repository
    print(e)
Git has a command called "git log". "-1" indicates the last commit, and --stat will give you the files that were changed, the commit ID, the time, etc. You can then use Python to parse this log and retrieve any information you want.
Check this out for more info on git log
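As an example of the parsing step, the first line of that output starts with "commit" followed by the sha, so a minimal (and admittedly fragile) extraction could be:

from subprocess import check_output

log = check_output(['git', 'log', '-1', '--stat']).decode("UTF-8")

# The first line looks like "commit <sha>"; the remaining lines hold
# the author, date, message, and the --stat file summary.
commit_id = log.splitlines()[0].split()[1]
print(commit_id)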
With the Python SDK, the job seems to hang forever (I have to kill it manually at some point) if I use the extra_package option to run a custom ParDo.
Here is a job id for example : 2016-12-22_09_26_08-4077318648651073003
No explicit logs or errors are thrown...
I noticed that it seems related to the extra_package option, because if I use this option without actually triggering the ParDo (code commented out), it doesn't work either.
The initial BigQuery query with a simple output schema and no transform steps works.
Has this happened to anyone?
P.S.: I'm using Dataflow SDK version 0.4.3. I tested inside a venv and it seems to work with a DirectPipelineRunner.
As determined by thylong and jkff:
The extra_package was binary-incompatible with Dataflow's packages. The requirements.txt in the root directory and the one in the extra_package were different, causing exec.go in the Dataflow container to fail again and again. To fix it, we recreated the venv with the same frozen dependencies.
I have been stuck for two days trying to set up a small automatic deployment script.
The thing is: I have been using Git for some months now, but always locally and just by myself, with the purpose of easily saving versions of my code. All good until here.
Now I have to find a way to "publish" the code as soon as new functionalities are implemented and I think the code is stable enough.
Searching around, I've discovered these 'hooks', which are scripts that are executed by Git in certain situations. Basically the idea is to have my master branch synced with my published code, so that every time I merge a branch into master and 'push', the files are automatically copied into '/my/published/folder'.
That said, I've found this tutorial that explains how to do exactly what I want using a post-receive 'hook' script, which is written in Ruby. Since at my studio I don't have and don't want to use Ruby at this time, I've found a Python version of the same script.
I tested and tested, but I couldn't make it work. I keep getting the same error:
remote: GIT_WORK_TREE is not recognized as an internal or external command,
Note that this is based on the tutorial I've shared above: same project name, same structure, etc.
I even installed Ruby on my personal laptop and tried the original script, but it still doesn't work...
I'm using Windows, and the Git env variable is set and accessible. Nevertheless, it's not recognizing the GIT_WORK_TREE part of the command. If I run it from Git Bash it works just fine, but if I use the Windows shell I get the error message above.
I suppose that when my Python script uses the call() function, it runs the command using the Windows shell. That's my guess, but I don't really know how to solve it. Google didn't help, as if no one had ever had this problem before.
Maybe I'm just not seeing something obvious here, but I spent the whole day on this and I cannot get out of this bog!
Does anyone know how to solve it, or at least have an idea for a workaround?
Hope someone can help...
Thanks a lot!
The Ruby script you are talking about generates a "bash" command:
GIT_WORK_TREE=/deploy/path git checkout -f ...
It means: define the environment variable "GIT_WORK_TREE" with the value "/deploy/path" and execute "git checkout -f ...".
As I understand it, that syntax doesn't work for the Windows command line.
Try to use something like:
set GIT_WORK_TREE=c:\temp\deploy && git checkout -f ...
I've had this problem as well - the best solution I've found is to pass the work tree across as one of the parameters:
git --work-tree="/deploy/path" checkout -f ...
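Since the hook in question is a Python script, a third option is to sidestep shell syntax entirely and let subprocess set things up. A sketch, with hypothetical paths:

import os
import subprocess

# Option 1: pass the work tree as a git argument (portable).
subprocess.call(['git', '--work-tree', '/deploy/path', 'checkout', '-f'])

# Option 2: set the variable in the child process's environment,
# so neither cmd.exe nor bash syntax is involved.
env = dict(os.environ, GIT_WORK_TREE='/deploy/path')
subprocess.call(['git', 'checkout', '-f'], env=env)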
I am creating a post-commit script in Python and calling git commands using subprocess.
In my script I want to stash all changes before I run some commands and then pop them back. The problem is that if there was nothing to stash, stash pop returns a non-zero exit code, resulting in an exception in subprocess.check_output(). I know how I can ignore the error return code, but I don't want to do it this way.
So I have been thinking: is there any way to get the number of items currently in the stash? I know there is the command 'git stash list', but is there something better suited to my needs, or some easy and safe way to parse the output of git stash list?
I'd also appreciate other approaches to solving this problem.
Don't do that!
Suppose that git stash save saves nothing, but there are already some items in the stash. Then, when you're all done, you pop the most recent stash, which is not one you created.
What did you just do to the user?
One way to do this in shell script code is to check the result of git rev-parse refs/stash before and after git stash save. If it changes (from failure to something, or something to something-else), you have created a new stash, which you can then pop when you are done.
More recent versions of Git have git stash create, which creates the commit-pair as usual but does not put them into the refs/stash reference. If there is nothing to save, git stash create does nothing and outputs nothing. This is a better way to deal with the problem, but is Git-version-dependent.
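As a sketch, the before/after check could look like this from a Python post-commit script, using the commands described above:

from subprocess import check_output, CalledProcessError

def stash_ref():
    # Sha behind refs/stash, or None when the stash is empty.
    try:
        return check_output(['git', 'rev-parse', 'refs/stash'])
    except CalledProcessError:
        return None

before = stash_ref()
check_output(['git', 'stash', 'save'])
after = stash_ref()

# ... run the other commands here ...

if before != after:
    # A new stash entry was actually created, so popping is safe.
    check_output(['git', 'stash', 'pop'])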
You can simply try calling git stash show stash@{0}. If this returns successfully, there is something stashed.
I'm trying to use flymake to run pyflakes, as suggested here
This works fine for local files, and almost works with remote files with a bit of tweaking, but I'm left with a problem where flymake/pyflakes 'modifies' the buffer when it runs (although nothing actually seems to change), which renders it a bit useless in practice (e.g. saving a file runs flymake, which immediately modifies the buffer again).
Here's what I did to almost get it working:
1. Installed pyflakes on the remote box.
2. Customized my tramp-remote-process-environment variable so that pyflakes could be found in its PATH.
3. Used a variant of the code from the wiki link above. Obviously I excluded the check that disables it for remote buffers. Also, the (when (load "flymake" t) ...) construct didn't seem to work as I expected, but I'm not too worried about that.
4. Re-defined (for test purposes -- advice should be fine if this can be made to work) the flymake-start-syntax-check-process function so that it uses start-file-process (which works with Tramp) instead of start-process (which does not).
The change in #4 does not appear to cause any issues when processing a local file, and it now enables flymake to run the remote pyflakes for remote files (errors are highlighted as expected), but in this instance the buffer is 'modified' whenever flymake runs.
I'm guessing that start-file-process, for remote processes, results in some additional return value/data that does not occur for local processes.
Does anyone have any insight/advice?
Emacs 23.1 and 23.2 on Ubuntu
Python 2.4.6
Pyflakes 0.4.0 (via easy_install)
You need to tell flymake to create its copy of the buffer somewhere local; I prefer using the $TMP directory, since this also allows me to use Tramp on files in directories I don't have write permission to.
You may want to check out my fork of flymake-python, since it does all this.
I have this fixed in my fork of Flymake (https://github.com/illusori/emacs-flymake).
It will either run the syntax check on the remote machine via Tramp, without the buffer-modification issue you're seeing; or you can set flymake-run-in-place to nil and it will run the syntax check on the local machine, just like flymake on a regular non-Tramp buffer.
Since it's fixed at the Flymake level, this fix works for all languages and syntax checks rather than just pyflakes.
If you're interested in details of why it's happening, basically when the Tramp handler for start-file-process kicks in, it dumps the login message for the connection onto the end of the current buffer before any output filter can be attached to the process.
Usually this manifests as people seeing the contents of /etc/issue appear at the end of their file along with "You have mail." and so on.
In your case it may be that the login message is empty or just a newline, so you're not seeing any text being added, even though the buffer is being marked as modified.