Recently I ran into a strange problem.
My OS is Gentoo. I installed pip and layman, but the binaries in /usr/bin (/usr/bin/pip and /usr/bin/layman) are both symlinks to /usr/bin/python-exec.
% ll /usr/bin/{pip,layman}
lrwxrwxrwx 1 root root 11 Sep 18 23:51 /usr/bin/layman -> python-exec
lrwxrwxrwx 1 root root 11 Aug 16 08:14 /usr/bin/pip -> python-exec
The content of /usr/bin/python-exec:
#!/usr/bin/python2.7
# EASY-INSTALL-ENTRY-SCRIPT: 'Pygments==1.6','console_scripts','pygmentize'
__requires__ = 'Pygments==1.6'
import sys
from pkg_resources import load_entry_point
sys.exit(
    load_entry_point('Pygments==1.6', 'console_scripts', 'pygmentize')()
)
I found that this file belongs to dev-python/python-exec-0.3.1:
% equery belongs python-exec
* Searching for python-exec ...
dev-python/python-exec-0.3.1 (/usr/bin/python-exec)
and the package's details are:
* dev-python/python-exec
Latest version available: 0.3.1
Latest version installed: 0.3.1
Size of files: 72 kB
Homepage: https://bitbucket.org/mgorny/python-exec/
Description: Python script wrapper
License: BSD
What is the function of the /usr/bin/python-exec script,
and why are /usr/bin/pip and /usr/bin/layman symlinked to it?
Right now, if I want to use pip to install packages or layman to manage overlays, I have to use /usr/bin/pip-python2.7 and layman-python2.7 instead.
python-exec is a Gentoo-specific wrapper script that invokes an implementation of the requested script appropriate to the currently selected Python runtime. This enables Gentoo to support switching between Python runtimes without breaking scripts, as long as a compatible version of the script exists for the selected runtime.
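To illustrate the idea (this is only a simplified sketch of the dispatch logic, not the actual python-exec source, which is shown further down): the wrapper looks at the name it was invoked under, appends the suffix of the currently selected Python implementation, and hands control over to that script.
# Hypothetical sketch of the python-exec dispatch idea, for illustration only:
# /usr/bin/pip is a symlink to python-exec, which re-executes /usr/bin/pip-<EPYTHON>
import os
import sys

EPYTHON = os.path.basename(sys.executable)   # e.g. 'python2.7', the selected implementation
target = sys.argv[0] + '-' + EPYTHON         # e.g. '/usr/bin/pip-python2.7'
os.execv(target, [target] + sys.argv[1:])    # run the implementation-specific script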
I read the help output of the python-exec tool.
It looks like a Pygments wrapper: there are Formatters, Filters, and so on.
So if I run:
$ python-exec -l python -f html -o /tmp/test.file
#this is the input#
print 'hello world'
....
it writes the input to /tmp/test.file and uses Pygments to colorize the code.
But I still don't know why pip and layman are symlinked to it.
Update:
I have found the reason:
for some reason, another application overwrote /usr/bin/python-exec (I think it was probably Pygments).
The correct content is:
#!/usr/bin/python-exec-c
# vim:fileencoding=utf-8:ft=python
# (c) 2012 Michał Górny
# Released under the terms of the 2-clause BSD license.
#
# This is not the script you are looking for. This is just a wrapper.
# The actual scripts of this application were installed with -python*,
# -pypy* or -jython* suffixes. You are most likely looking for one
# of those.
from __future__ import with_statement
import os, os.path, sys
try:
    from epython import EPYTHON
except ImportError:
    EPYTHON = os.path.basename(sys.executable)
    if '' and EPYTHON.endswith(''):
        EPYTHON = EPYTHON[:-len('')]

# In the loop:
#  sys.argv[0] keeps the 'bare' name
#  __file__ keeps the 'full' name
while True:
    __file__ = sys.argv[0] + '-' + EPYTHON

    try:
        kwargs = {}
        if sys.version_info[0] >= 3:
            import tokenize
            # need to provide encoding
            with open(__file__, 'rb') as f:
                kwargs['encoding'] = tokenize.detect_encoding(f.readline)[0]
        with open(__file__, 'r', **kwargs) as f:
            data = f.read()
    except IOError:
        # follow symlinks (if supported)
        try:
            sys.argv[0] = os.path.join(os.path.dirname(sys.argv[0]),
                    os.readlink(sys.argv[0]))
        except (OSError, AttributeError):
            # no more symlinks? then it's time to fail.
            sys.stderr.write('This Python implementation (%s) is not supported by the script.\n'
                    % EPYTHON)
            sys.exit(127)
    else:
        break

sys.argv[0] = __file__
exec(data)
Many binaries, such as pip and layman, are symlinked to it; it is a simple wrapper.
There are tons of questions about installing packages, importing the resulting modules, and listing what packages are available, but there doesn't seem to be an equivalent of a "--what-provides" option for pip if you don't have a requirements.txt or pipenv. This question is similar to a previous one, but asks for the parent package rather than additional metadata. That said, those other questions did not get much attention or many accepted answers - e.g. How do you find python package metadata information given a module. So, forging ahead...
By way of example, there are (at least) two packages that will install a module called "serial" - namely "pyserial" and "serial". So assuming that one of those packages was installed, we might find it by using pip list:
python3 -m pip list | grep serial
However, the problem comes in when the name of the package does not match the name of the module, or when you just want to find out which package to install while working on a legacy server or dev machine.
You can check the path of the imported module - which can give you a clue. But continuing the example...
>>> import serial
>>> print(serial.__file__)
/usr/lib/python3.6/site-packages/serial/__init__.py
It is in a "serial" directory, but only pyserial is in fact installed, not serial:
> python3 -m pip list | grep serial
pyserial 3.4
The closest I can get is to generate a requirements.txt via "pipreqs ./", which may fail on a dependent child file (as it does for me), or to reverse-check dependencies via pipenv (which brings a whole set of new issues along just to get it set up):
> pipenv graph --reverse
cymysql==0.9.15
ftptool==0.7.1
netifaces==0.10.9
pip==20.2.2
PyQt5-sip==12.8.1
- PyQt5==5.15.0 [requires: PyQt5-sip>=12.8,<13]
setuptools==50.3.0
wheel==0.35.1
Does anyone know of a command that I have missed for a simple solution to finding what pip package provides a particular module?
Use the packages_distributions() function from importlib.metadata (or importlib-metadata). So for example, in your case where serial is the name of the "import package":
import importlib.metadata # or: `import importlib_metadata`
importlib.metadata.packages_distributions()['serial']
This should return a list containing pyserial, which is the name of the "distribution package" (the name that should be used to pip-install).
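As a quick check (plain dict usage on the mapping above; this requires Python 3.10+ for importlib.metadata.packages_distributions, otherwise use the importlib_metadata backport as mentioned, and note that stdlib or local modules simply won't appear in the mapping):
import importlib.metadata

mapping = importlib.metadata.packages_distributions()
# 'serial' should map to ['pyserial']; modules not provided by an installed
# distribution (e.g. stdlib modules) are absent from the mapping
print(mapping.get('serial', 'not provided by any installed distribution'))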
References
https://importlib-metadata.readthedocs.io/en/stable/using.html#package-distributions
https://github.com/python/importlib_metadata/pull/287/files
For older Python versions and/or older versions of importlib-metadata...
I believe something like the following should work:
#!/usr/bin/env python3
import importlib.util
import pathlib
import importlib_metadata
def get_distribution(file_name):
    result = None
    for distribution in importlib_metadata.distributions():
        try:
            relative = (
                pathlib.Path(file_name)
                .relative_to(distribution.locate_file(''))
            )
        except ValueError:
            pass
        else:
            if distribution.files and relative in distribution.files:
                result = distribution
                break
    return result

def alpha():
    file_name = importlib.util.find_spec('serial').origin
    distribution = get_distribution(file_name)
    print("alpha", distribution.metadata['Name'])

def bravo():
    import serial
    file_name = serial.__file__
    distribution = get_distribution(file_name)
    print("bravo", distribution.metadata['Name'])

if __name__ == '__main__':
    alpha()
    bravo()
This is just example code showing how to get the metadata of the installed project that a specific module belongs to.
The important bit is the get_distribution function, which takes a file name as an argument. It can be the file name of a module or of package data. If that file belongs to a project installed in the environment (via pip install, for example), then the corresponding importlib.metadata.Distribution object is returned.
Edit 2023/01/31: This issue is now solved via the importlib_metadata library. See Provide mapping from "Python packages" to "distribution packages"; "Note 2" there deals with this exact issue. As noted in the comments by @sinoroc, you can locate the package (e.g. package "pyserial" providing module "serial") with something like this:
>>> import importlib_metadata
>>> print(importlib_metadata.packages_distributions()['serial'])
['pyserial']
Building on @sinoroc's answer, I came up with the following code (incorporating the mentioned importlib.util.find_spec method, but with a bash-based search against the RECORD file in the path returned). I also tried to implement @sinoroc's version, but was not successful. Both methods are included for demonstration.
Run it as "python3 python_find-module-package.py -m [module-name-here] -d", which will also print debug output. Leave off the "-d" switch to get just the package name (and any errors).
TLDR: Code on github.
#!/usr/bin/python3
import sys
import os.path
import importlib.util
import importlib_metadata
import pathlib
import subprocess
import argparse
parser = argparse.ArgumentParser()
parser.add_argument("-m", "--module", help="Find matching package for the specified Python module",
                    type=str)
#parser.add_argument("-u", "--username", help="Database username",
#                    type=str)
#parser.add_argument("-p", "--password", help="Database password",
#                    type=str)
parser.add_argument("-d", "--debug", help="Debug messages are enabled",
                    action="store_true")
args = parser.parse_args()

TESTMODULE = 'serial'

def debugPrint(message="Nothing"):
    if args.debug:
        print("[DEBUG] %s" % str(message))

class application():
    def __init__(self, argsPassed):
        self.argsPassed = argsPassed
        debugPrint("Got these arguments:\n%s" % (argsPassed))

    def run(self):
        #debugPrint("Running with args:\n%s" % (self.argsPassed))
        try:
            if self.argsPassed.module is not None:
                self.moduleName = self.argsPassed.module  # i.e. the module that you're trying to match with a package.
            else:
                self.moduleName = TESTMODULE
                print("[WARN] No module name supplied - defaulting to %s!" % (TESTMODULE))
            self.location = importlib.util.find_spec(self.moduleName).origin
            debugPrint(self.location)
        except:
            print("[ERROR] Parsing module name!")
            exit(1)

        try:
            self.getPackage()
        except Exception as e:
            print("[ERROR] getPackage failed: %s" % str(e))

        try:
            distResult = self.getDistribution(self.location)
            self.packageStrDist = distResult.metadata['Name']
            print(self.packageStrDist)
        except Exception as e:
            print("[ERROR] getDistribution failed: %s" % str(e))

        debugPrint("Parent package for \"%s\" is: \"%s\"" % (self.moduleName, self.packageStr))
        return self.packageStr

    def getPackage(self):
        locationStr = self.location.split("site-packages/", 1)[1]
        debugPrint(locationStr)
        #serial/__init__.py
        locationDir = self.location.split(locationStr, 1)[0]
        debugPrint(locationDir)
        #/usr/lib/python3.6/site-packages
        cmd = 'find \"' + locationDir + '\" -type f -iname \'RECORD\' -printf \'\"%p\"\\n\' | xargs grep \"' + locationStr + '\" -l -Z'
        debugPrint(cmd)
        #find "/usr/lib/python3.6/site-packages" -type f -iname 'RECORD' -printf '"%p"\n' | xargs grep "serial/__init__.py" -l -Z
        #return_code = os.system(cmd)
        #return_code = subprocess.run([cmd], stdout=subprocess.PIPE, universal_newlines=True, shell=False)
        #findResultAll = return_code.stdout
        findResultAll = subprocess.check_output(cmd, shell=True)  # Returns stdout as byte array, null terminated.
        findResult = str(findResultAll.decode('ascii').strip().strip('\x00'))
        debugPrint(findResult)
        #/usr/lib/python3.6/site-packages/pyserial-3.4.dist-info/RECORD
        findDir = os.path.split(findResult)
        self.packageStr = findDir[0].replace(locationDir, "")
        debugPrint(self.packageStr)

    def getDistribution(self, fileName=TESTMODULE):
        result = None
        for distribution in importlib_metadata.distributions():
            try:
                relative = (pathlib.Path(fileName).relative_to(distribution.locate_file('')))
            #except ValueError:
            #except AttributeError:
            except:
                pass
            else:
                if relative in distribution.files:
                    result = distribution
        return result

if __name__ == '__main__':
    result = 1
    try:
        prog = application(args)
        result = prog.run()
    except Exception as E:
        print("[ERROR] Prog Exception: %s" % str(E))
    finally:
        sys.exit(result)

    # exit the program if we haven't already
    print("Shouldn't get here.")
    sys.exit(result)
Could anyone help me check, on Linux, whether an rpm version meets a requirement?
Below is the command I used to get the package version:
rpm -qi libstdc++6 | awk -F': ' '/Version/ {print $2}'
Using subprocess, I get the output below:
6.2.1+r239768
Now I want to check that 6.2.1+r239768 is greater than 5.1.1.
Below are the results I expect:
output = 6.2.1+r239768
print "This rpm version is supported"
output = 6.3+r23
print "This rpm version is supported"
output = 7.1.1+r57678
print "This rpm version is supported"
output = 5.1.1+r23677
print "This rpm version is not supported"
output = 4.1+r56888
print "This rpm version is not supported"
You can use the pkg_resources module, which you should already have as it is part of setuptools:
In []:
from pkg_resources import parse_version
parse_version('6.2.1+r239768') > parse_version('5.1.1')
Out[]:
True
So to get your output:
In []:
versions = ['6.2.1+r239768', '6.3+r23', '7.1.1+r57678', '5.1.1+r23677', '4.1+r56888']
base = parse_version('6.2.1+r239768')
for v in versions:
print("Output = {}".format(v))
print("This rpm version is {}supported".format('not ' if parse_version(v) < base else ''))
Out[]:
Output = 6.2.1+r239768
This rpm version is supported
Output = 6.3+r23
This rpm version is supported
Output = 7.1.1+r57678
This rpm version is supported
Output = 5.1.1+r23677
This rpm version is not supported
Output = 4.1+r56888
This rpm version is not supported
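As a side note, pkg_resources.parse_version delegates to the packaging library, and newer setuptools releases warn that pkg_resources is deprecated, so the same comparison can also be done with packaging directly (a small sketch, assuming the packaging package is installed):
from packaging.version import parse

# '+r239768' is a PEP 440 local version segment, so it parses and compares fine
print(parse('6.2.1+r239768') > parse('5.1.1'))  # True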
I feel like you would be better off using the Python API for rpm.
As I do not have an RPM-based system at hand right now, I could not test the code.
The following code, taken from Section 16.5, can be adapted accordingly:
#!/usr/bin/python
# Reads in package header, compares to installed package.
# Usage:
# python vercompare.py libstdc++6.rpm
#
import os
import sys
import rpm
def readRpmHeader(ts, filename):
    """ Read an rpm header. """
    fd = os.open(filename, os.O_RDONLY)
    try:
        h = ts.hdrFromFdno(fd)
    finally:
        os.close(fd)
    return h

ts = rpm.TransactionSet()
h = readRpmHeader(ts, sys.argv[1])
pkg_ds = h.dsOfHeader()
for inst_h in ts.dbMatch('name', h['name']):
    inst_ds = inst_h.dsOfHeader()
    if pkg_ds.EVR() >= inst_ds.EVR():
        print "Package file is same or newer, OK to upgrade."
    else:
        print "Package file is older than installed version."
$ rpmdev-vercmp 6.2.1+r239768 6.3+r23
6.2.1+r239768 < 6.3+r23
You can read the exit code:
rpmdev-vercmp <EVR1> <EVR2>
Exit status is 0 if the EVR's are equal, 11 if EVR1 is newer, and 12 if EVR2
is newer. Other exit statuses indicate problems.
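If you want to drive this from Python, here is a small sketch that wraps the exit codes documented above (it assumes rpmdev-vercmp, from the rpmdevtools package, is on the PATH):
import subprocess

def vercmp(evr1, evr2):
    """Return '=', '>' or '<' based on rpmdev-vercmp's documented exit codes."""
    rc = subprocess.call(['rpmdev-vercmp', evr1, evr2],
                         stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    if rc == 0:
        return '='   # EVRs are equal
    if rc == 11:
        return '>'   # the first EVR is newer
    if rc == 12:
        return '<'   # the second EVR is newer
    raise RuntimeError('rpmdev-vercmp failed with exit status %d' % rc)

print("This rpm version is", "supported" if vercmp('6.2.1+r239768', '5.1.1') == '>' else "not supported")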
The Python rpm library gives you a way to compare an EVR (Epoch, Version, Release), but doesn't seem to give you the code to split a string into those parts. It's trivial, though. Here's a little working program.
#! /usr/bin/env python
import re
import sys
import rpm
evr_re = re.compile(r'^(?:([^:]*):)?([^-]+)(?:-(.*))?$')
def str_to_evr(s):
    mo = evr_re.match(s)
    if not mo:
        raise ValueError("not a valid version string")
    return mo.groups()

def main():
    if len(sys.argv) != 3:
        raise SystemExit("usage: vercmp VER1 VER2")
    try:
        evr_a = str_to_evr(sys.argv[1].strip())
        evr_b = str_to_evr(sys.argv[2].strip())
        result = rpm.labelCompare(evr_a, evr_b)
    except Exception as exc:
        raise SystemExit("Problem converting or comparing versions: %s %s" % (
            type(exc).__name__, exc))
    print(result)

if __name__ == "__main__":
    main()
I want to create a start menu or Desktop shortcut for my Python Windows installer package. I am trying to follow https://docs.python.org/3.4/distutils/builtdist.html#the-postinstallation-script
Here is my script:
import sys
from os.path import dirname, join, expanduser
pyw_executable = sys.executable.replace('python.exe','pythonw.exe')
script_file = join(dirname(pyw_executable), 'Scripts', 'tklsystem-script.py')
w_dir = expanduser(join('~','lsf_files'))
print(sys.argv)
if sys.argv[1] == '-install':
    print('Creating Shortcut')
    create_shortcut(
        target=pyw_executable,
        description='A program to work with L-System Equations',
        filename='L-System Tool',
        arguments=script_file,
        workdir=wdir
    )
I also specified this script in the scripts setup option, as indicated by the aforementioned docs.
Here is the command I use to create my installer:
python setup.py bdist_wininst --install-script tklsystem-post-install.py
After I install my package using the created Windows installer, I can't find where my shortcut was created, nor can I confirm whether my script ran at all.
How can I make a setuptools-generated Windows installer create desktop or start menu shortcuts?
Like others have commented here and elsewhere, the support functions don't seem to work at all (at least not with setuptools). After a good day's worth of searching through various resources, I found a way to create at least the Desktop shortcut. I'm sharing my solution (basically an amalgam of code I found here and here). I should add that my case is slightly different from yasar's, because it creates a shortcut to an installed package (i.e. an .exe file in Python's Scripts directory) instead of a script.
In short, I added a post_install function to my setup.py, and then used the Python extensions for Windows to create the shortcut. The location of the Desktop folder is read from the Windows registry (there are other methods for this, but they can be unreliable if the Desktop is at a non-standard location).
#!/usr/bin/env python
import os
import sys
import sysconfig
if sys.platform == 'win32':
    from win32com.client import Dispatch
    import winreg

def get_reg(name, path):
    # Read variable from Windows Registry
    # From https://stackoverflow.com/a/35286642
    try:
        registry_key = winreg.OpenKey(winreg.HKEY_CURRENT_USER, path, 0,
                                      winreg.KEY_READ)
        value, regtype = winreg.QueryValueEx(registry_key, name)
        winreg.CloseKey(registry_key)
        return value
    except WindowsError:
        return None

def post_install():
    # Creates a Desktop shortcut to the installed software

    # Package name
    packageName = 'mypackage'

    # Scripts directory (location of launcher script)
    scriptsDir = sysconfig.get_path('scripts')

    # Target of shortcut
    target = os.path.join(scriptsDir, packageName + '.exe')

    # Name of link file
    linkName = packageName + '.lnk'

    # Read location of Windows desktop folder from registry
    regName = 'Desktop'
    regPath = r'Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders'
    desktopFolder = os.path.normpath(get_reg(regName, regPath))

    # Path to location of link file
    pathLink = os.path.join(desktopFolder, linkName)

    shell = Dispatch('WScript.Shell')
    shortcut = shell.CreateShortCut(pathLink)
    shortcut.Targetpath = target
    shortcut.WorkingDirectory = scriptsDir
    shortcut.IconLocation = target
    shortcut.save()

setup(name='mypackage',
      ...,
      ...)

if sys.argv[1] == 'install' and sys.platform == 'win32':
    post_install()
Here's a link to a full setup script in which I used this:
https://github.com/KBNLresearch/iromlab/blob/master/setup.py
If you want to confirm whether the script is running or not, you can print to a file instead of the console. It looks like text you print to the console in the post-install script won't show up.
Try this:
import sys
from os.path import expanduser, join
pyw_executable = join(sys.prefix, "pythonw.exe")
shortcut_filename = "L-System Toolsss.lnk"
working_dir = expanduser(join('~','lsf_files'))
script_path = join(sys.prefix, "Scripts", "tklsystem-script.py")
if sys.argv[1] == '-install':
    # Log output to a file (for test)
    f = open(r"C:\test.txt", 'w')
    print('Creating Shortcut', file=f)

    # Get paths to the desktop and start menu
    desktop_path = get_special_folder_path("CSIDL_COMMON_DESKTOPDIRECTORY")
    startmenu_path = get_special_folder_path("CSIDL_COMMON_STARTMENU")

    # Create shortcuts.
    for path in [desktop_path, startmenu_path]:
        create_shortcut(pyw_executable,
                        "A program to work with L-System Equations",
                        join(path, shortcut_filename),
                        script_path,
                        working_dir)
At least with Python 3.6.5, 32-bit on Windows, setuptools does work for this. But based on the accepted answer, by trial and error I found some issues that may have caused your script to fail to do what you wanted.
create_shortcut does not accept keyword arguments, only positional, so its usage in your code is invalid
You must add a .lnk extension for Windows to recognise the shortcut
I found sys.executable will be the name of the installer executable, not the python executable
As mentioned, you can't see stdout or stderr so you might want to log to a text file. I would suggest also redirecting sys.stdout and sys.stderr to the log file.
(Maybe not relevant) as mentioned in this question there appears to be a bug with the version string generated by bdist_wininst. I used the hexediting hack from an answer there to work around this. The location in the answer is not the same, you have to find the -32 yourself.
Full example script:
import sys
import os
import datetime
global datadir
datadir = os.path.join(get_special_folder_path("CSIDL_APPDATA"), "mymodule")
def main(argv):
    if "-install" in argv:
        desktop = get_special_folder_path("CSIDL_DESKTOPDIRECTORY")
        print("Desktop path: %s" % repr(desktop))

        if not os.path.exists(datadir):
            os.makedirs(datadir)
            dir_created(datadir)
            print("Created data directory: %s" % repr(datadir))
        else:
            print("Data directory already existed at %s" % repr(datadir))

        shortcut = os.path.join(desktop, "MyModule.lnk")
        if os.path.exists(shortcut):
            print("Remove existing shortcut at %s" % repr(shortcut))
            os.unlink(shortcut)

        print("Creating shortcut at %s...\n" % shortcut)
        create_shortcut(
            r'C:\Python36\python.exe',
            "MyModuleScript",
            shortcut,
            "",
            datadir)
        file_created(shortcut)
        print("Successful!")

    elif "-remove" in sys.argv:
        print("Removing...")
        pass

if __name__ == "__main__":
    logfile = r'C:\mymodule_install.log'  # Fallback location
    if os.path.exists(datadir):
        logfile = os.path.join(datadir, "install.log")
    elif os.environ.get("TEMP") and os.path.exists(os.environ.get("TEMP", "")):
        logfile = os.path.join(os.environ.get("TEMP"), "mymodule_install.log")

    with open(logfile, 'a+') as f:
        f.write("Opened\r\n")
        f.write("Ran %s %s at %s" % (sys.executable, " ".join(sys.argv), datetime.datetime.now().isoformat()))
        sys.stdout = f
        sys.stderr = f
        try:
            main(sys.argv)
        except Exception as e:
            raise
    f.close()
    sys.exit(0)
UPD: on the off chance that the client machine has pywin32 installed, we try in-process creation first. Somewhat cleaner that way.
Here is another take. This assumes the package is called myapp, and that also becomes the executable that you want a shortcut to. Substitute your own package name and your own shortcut text.
Uses a Windows Scripting Host COM class - in process if possible, inside a Powershell command line as a subprocess if not. Tested on Python 3.6+.
from setuptools import setup
from setuptools.command.install import install
import platform, sys, os, site
from os import path, environ
def create_shortcut_under(root, exepath):
    # Root is an env variable name -
    # either ALLUSERSPROFILE for the all users' Start menu,
    # or APPDATA for the current user specific one
    profile = environ[root]
    linkpath = path.join(profile, "Microsoft", "Windows", "Start Menu", "Programs", "My Python app.lnk")
    try:
        from win32com.client import Dispatch
        from pywintypes import com_error
        try:
            sh = Dispatch('WScript.Shell')
            link = sh.CreateShortcut(linkpath)
            link.TargetPath = exepath
            link.Save()
            return True
        except com_error:
            return False
    except ImportError:
        import subprocess
        s = "$s=(New-Object -COM WScript.Shell).CreateShortcut('" + linkpath + "');$s.TargetPath='" + exepath + "';$s.Save()"
        return subprocess.call(['powershell', s], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL) == 0

def create_shortcut(inst):
    try:
        exepath = path.join(path.dirname(sys.executable), "Scripts", "myapp.exe")
        if not path.exists(exepath):
            # Support for "pip install --user"
            exepath = path.join(path.dirname(site.getusersitepackages()), "Scripts", "myapp.exe")
        # If we can't modify the global menu, fall back to the
        # current user's one
        if not create_shortcut_under('ALLUSERSPROFILE', exepath):
            create_shortcut_under('APPDATA', exepath)
    except:
        pass

class my_install(install):
    def run(self):
        install.run(self)
        if platform.system() == 'Windows':
            create_shortcut(self)

#...

setup(
    #...
    cmdclass={'install': my_install},
    entry_points={"gui_scripts": ["myapp = myapp.__main__:main"]},
)
I would like to include the current git hash in the output of a Python script (as the version number of the code that generated that output).
How can I access the current git hash in my Python script?
No need to hack around getting data from the git command yourself. GitPython is a very nice way to do this and a lot of other git stuff. It even has "best effort" support for Windows.
After pip install gitpython you can do
import git
repo = git.Repo(search_parent_directories=True)
sha = repo.head.object.hexsha
Something to consider when using this library: the following is taken from gitpython.readthedocs.io.
Leakage of System Resources
GitPython is not suited for long-running processes (like daemons) as it tends to leak system resources. It was written in a time where destructors (as implemented in the __del__ method) still ran deterministically.
In case you still want to use it in such a context, you will want to search the codebase for __del__ implementations and call these yourself when you see fit.
Another way to assure proper cleanup of resources is to factor out GitPython into a separate process which can be dropped periodically.
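For short-lived scripts this is usually not an issue, but if you are worried about it, recent GitPython versions let you close the repository explicitly or use it as a context manager, e.g. (a sketch along the lines of the snippet above):
import git

# Using Repo as a context manager releases its resources when the block exits;
# repo.close() is the equivalent explicit call.
with git.Repo(search_parent_directories=True) as repo:
    sha = repo.head.object.hexsha
print(sha)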
This post contains the command; Greg's answer shows how to call it via subprocess.
import subprocess
def get_git_revision_hash() -> str:
return subprocess.check_output(['git', 'rev-parse', 'HEAD']).decode('ascii').strip()
def get_git_revision_short_hash() -> str:
return subprocess.check_output(['git', 'rev-parse', '--short', 'HEAD']).decode('ascii').strip()
when running
print(get_git_revision_hash())
print(get_git_revision_short_hash())
you get output:
fd1cd173fc834f62fa7db3034efc5b8e0f3b43fe
fd1cd17
The git describe command is a good way of creating a human-presentable "version number" of the code. From the examples in the documentation:
With something like git.git current tree, I get:
[torvalds#g5 git]$ git describe parent
v1.0.4-14-g2414721
i.e. the current head of my "parent" branch is based on v1.0.4, but since it has a few commits on top of that, describe has added the number of additional commits ("14") and an abbreviated object name for the commit itself ("2414721") at the end.
From within Python, you can do something like the following:
import subprocess
label = subprocess.check_output(["git", "describe"]).strip()
Here's a more complete version of Greg's answer:
import subprocess
print(subprocess.check_output(["git", "describe", "--always"]).strip().decode())
Or, if the script is being called from outside the repo:
import subprocess, os
print(subprocess.check_output(["git", "describe", "--always"], cwd=os.path.dirname(os.path.abspath(__file__))).strip().decode())
Or, if the script is being called from outside the repo and you like pathlib:
import subprocess
from pathlib import Path
print(subprocess.check_output(["git", "describe", "--always"], cwd=Path(__file__).resolve().parent).strip().decode())
numpy has a nice looking multi-platform routine in its setup.py:
import os
import subprocess
# Return the git revision as a string
def git_version():
    def _minimal_ext_cmd(cmd):
        # construct minimal environment
        env = {}
        for k in ['SYSTEMROOT', 'PATH']:
            v = os.environ.get(k)
            if v is not None:
                env[k] = v
        # LANGUAGE is used on win32
        env['LANGUAGE'] = 'C'
        env['LANG'] = 'C'
        env['LC_ALL'] = 'C'
        out = subprocess.Popen(cmd, stdout=subprocess.PIPE, env=env).communicate()[0]
        return out

    try:
        out = _minimal_ext_cmd(['git', 'rev-parse', 'HEAD'])
        GIT_REVISION = out.strip().decode('ascii')
    except OSError:
        GIT_REVISION = "Unknown"

    return GIT_REVISION
If subprocess isn't portable and you don't want to install a package to do something this simple, you can also do this.
import pathlib
def get_git_revision(base_path):
    git_dir = pathlib.Path(base_path) / '.git'
    with (git_dir / 'HEAD').open('r') as head:
        ref = head.readline().split(' ')[-1].strip()
    with (git_dir / ref).open('r') as git_hash:
        return git_hash.readline().strip()
I've only tested this on my repos but it seems to work pretty consistently.
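One caveat: after a git gc the branch ref may live in .git/packed-refs rather than in a loose file, and on a detached HEAD the HEAD file already contains the hash itself. A slightly more defensive variant could look like this (a sketch along the same lines, not tested on exotic layouts):
import pathlib

def get_git_revision(base_path):
    git_dir = pathlib.Path(base_path) / '.git'
    head = (git_dir / 'HEAD').read_text().strip()
    if not head.startswith('ref: '):
        return head                       # detached HEAD: the hash is stored directly
    ref = head[len('ref: '):]
    ref_file = git_dir / ref
    if ref_file.exists():                 # loose ref
        return ref_file.read_text().strip()
    # fall back to packed refs (written by `git gc` / `git pack-refs`)
    for line in (git_dir / 'packed-refs').read_text().splitlines():
        if line and line[0] not in '#^':  # skip comments and peeled-tag lines
            revision, _, name = line.partition(' ')
            if name == ref:
                return revision
    raise ValueError('ref %s not found' % ref)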
This is an improvement on Yuji 'Tomita' Tomita's answer.
import subprocess
def get_git_revision_hash():
    full_hash = subprocess.check_output(['git', 'rev-parse', 'HEAD'])
    full_hash = str(full_hash, "utf-8").strip()
    return full_hash

def get_git_revision_short_hash():
    short_hash = subprocess.check_output(['git', 'rev-parse', '--short', 'HEAD'])
    short_hash = str(short_hash, "utf-8").strip()
    return short_hash

print(get_git_revision_hash())
print(get_git_revision_short_hash())
If you want a bit more data than the hash, you can use git log:
import subprocess
def get_git_hash():
    return subprocess.check_output(['git', 'log', '-n', '1', '--pretty=tformat:%H']).strip()

def get_git_short_hash():
    return subprocess.check_output(['git', 'log', '-n', '1', '--pretty=tformat:%h']).strip()

def get_git_short_hash_and_commit_date():
    return subprocess.check_output(['git', 'log', '-n', '1', '--pretty=tformat:%h-%ad', '--date=short']).strip()
For a full list of formatting options, check out git log --help.
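Note that check_output returns bytes, so (as in the earlier answers) you may want to decode before printing:
# the helpers above return bytes; decode them for display
print(get_git_hash().decode('ascii'))                        # full hash
print(get_git_short_hash().decode('ascii'))                  # short hash
print(get_git_short_hash_and_commit_date().decode('ascii'))  # '<short hash>-YYYY-MM-DD'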
I ran across this problem and solved it by implementing this function.
https://gist.github.com/NaelsonDouglas/9bc3bfa26deec7827cb87816cad88d59
from pathlib import Path
def get_commit(repo_path):
    git_folder = Path(repo_path, '.git')
    head_name = Path(git_folder, 'HEAD').read_text().split('\n')[0].split(' ')[-1]
    head_ref = Path(git_folder, head_name)
    commit = head_ref.read_text().replace('\n', '')
    return commit
r = get_commit('PATH OF YOUR CLONED REPOSITORY')
print(r)
I had a problem similar to the OP's, but in my case I'm delivering the source code to my client as a zip file and, although I know they will have Python installed, I cannot assume they will have git. Since the OP didn't specify his operating system or whether he has git installed, I think I can contribute here.
To get only the hash of the commit, Naelson Douglas's answer was perfect, but to have the tag name, I'm using the dulwich python package. It's a simplified git client in python.
After installing the package with pip install dulwich --global-option="--pure" one can do:
from dulwich import porcelain
def get_git_revision(base_path):
    return porcelain.describe(base_path)
r = get_git_revision("PATH OF YOUR REPOSITORY's ROOT FOLDER")
print(r)
I've just run this code in one repository here and it showed the output v0.1.2-1-gfb41223, similar to what is returned by git describe, meaning that I'm 1 commit after the tag v0.1.2 and the 7-digit hash of the commit is fb41223.
It has some limitations: currently it doesn't have an option to show if a repository is dirty and it always shows a 7-digit hash, but there's no need to have git installed, so one can choose the trade-off.
Edit: in case of errors in the command pip install due to the option --pure (the issue is explained here), pick one of the two possible solutions:
Install Dulwich package's dependencies first:
pip install urllib3 certifi && pip install dulwich --global-option="--pure"
Install without the option pure: pip install dulwich. This will install some platform dependent files in your system, but it will improve the package's performance.
If you don't have Git available for some reason, but you have the git repo (a .git folder is present), you can fetch the commit hash from .git/refs/heads/[branch].
For example, I've used a following quick-and-dirty Python snippet run at the repository root to get the commit id:
git_head = '.git\\HEAD'

# Open .git\HEAD file:
with open(git_head, 'r') as git_head_file:
    # Contains e.g. "ref: refs/heads/master" if on "master"
    git_head_data = str(git_head_file.read())

# Open the correct file in .git\refs\heads\[branch]
git_head_ref = '.git\\%s' % git_head_data.split(' ')[1].replace('/', '\\').strip()

# Get the commit hash ([:7] used to get "--short")
with open(git_head_ref, 'r') as git_head_ref_file:
    commit_id = git_head_ref_file.read().strip()[:7]
If you are like me:
Multiplatform, so subprocess may crash one day
Using Python 2.7, so GitPython is not available
Don't want to use Numpy just for that
Already using Sentry (old deprecated version: raven)
Then (this will not work in a shell because the shell doesn't detect the current file path; replace BASE_DIR with your current file path):
import os
import raven
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
print(raven.fetch_git_sha(BASE_DIR))
That's it.
I was looking for another solution because I wanted to migrate to sentry_sdk and leave raven but maybe some of you want to continue using raven for a while.
Here is the discussion that got me to this Stack Overflow issue.
So using raven's code without raven is also possible (see the discussion):
from __future__ import absolute_import
import os.path
__all__ = 'fetch_git_sha'
def fetch_git_sha(path, head=None):
    """
    >>> fetch_git_sha(os.path.dirname(__file__))
    """
    if not head:
        head_path = os.path.join(path, '.git', 'HEAD')

        with open(head_path, 'r') as fp:
            head = fp.read().strip()

        if head.startswith('ref: '):
            head = head[5:]
            revision_file = os.path.join(
                path, '.git', *head.split('/')
            )
        else:
            return head
    else:
        revision_file = os.path.join(path, '.git', 'refs', 'heads', head)

    if not os.path.exists(revision_file):
        # Check for Raven .git/packed-refs' file since a `git gc` may have run
        # https://git-scm.com/book/en/v2/Git-Internals-Maintenance-and-Data-Recovery
        packed_file = os.path.join(path, '.git', 'packed-refs')
        if os.path.exists(packed_file):
            with open(packed_file) as fh:
                for line in fh:
                    line = line.rstrip()
                    if line and line[:1] not in ('#', '^'):
                        try:
                            revision, ref = line.split(' ', 1)
                        except ValueError:
                            continue
                        if ref == head:
                            return revision

    with open(revision_file) as fh:
        return fh.read().strip()
I named this file versioning.py and I import "fetch_git_sha" where I need it, passing the repository path as an argument.
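For example (assuming, as in the docstring above, that versioning.py lives at the root of the repository you want to inspect):
import os.path
from versioning import fetch_git_sha

# pass the path of the repository's working tree (here: the directory of this file)
print(fetch_git_sha(os.path.dirname(os.path.abspath(__file__))))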
Hope it will help some of you ;)
I am using the collective.python buildout.
I have the following .pythonrc (configured with export PYTHONSTARTUP=~/.pythonrc):
import readline
import rlcompleter
readline.parse_and_bind('tab: complete')
When I run Python in the shell, tab completion works. When I run Plone in debug mode it does not, unless I paste the contents of my .pythonrc into the Plone debug Python prompt. What am I missing here?
Note: Pasting the contents of my .pythonrc only works when I install Plone via python bootstrap.py (i.e. bootstrapping Plone buildout with collective.python Python). If I install Plone inside a virtualenv, nothing works. But at least in that scenario, the missing functionality makes sense to me (i.e. something is probably missing from the virtualenv that is required to make tab completion work.)
The instance controller uses two command-line switches: -i for interactive mode, and -c to load the Zope configuration and set up the app variable. The -c switch is what disables the PYTHONSTARTUP environment variable.
You could modify the plone.recipe.zope2instance package to run the script anyway.
In plone.recipe.zope2instance, find the plone/recipe/zope2instance/ctl.py file and alter the do_debug() method to:
def do_debug(self, arg):
    interactive_startup = ("import os;"
        "os.path.exists(os.environ.get('PYTHONSTARTUP', '')) "
        "and execfile(os.environ['PYTHONSTARTUP']); del os;"
        'import Zope2; app=Zope2.app()')
    cmdline = self.get_startup_cmd(self.options.python,
                                   interactive_startup,
                                   pyflags='-i', )
In fact, I like the idea of supporting PYTHONSTARTUP so much I committed that change to the recipe already, see rev 536f8fc1c4!
I do import user. This reads ~/.pythonrc.py; note the .py extension. I have set that file as my PYTHONSTARTUP.
I'll paste that file for good measure. I cobbled it together a few years ago; I'm not sure if it is still the best approach, as I see comments about 2006 and Python 2.3, but it does the trick.
$ cat ~/.pythonrc.py
# See http://blog.partecs.com/2006/02/27/source-inspector/
#import pydoc
import inspect
import rlcompleter, readline
readline.parse_and_bind('tab: complete')
# def source(obj):
# """source of the obj."""
# try:
# pydoc.pipepager(inspect.getsource(obj), 'less')
# except IOError:
# pass
# From /usr/local/lib/python2.3/user.py
import os
home = os.curdir # Default
if 'HOME' in os.environ:
    home = os.environ['HOME']
elif os.name == 'posix':
    home = os.path.expanduser("~/")
# Make sure home always ends with a directory separator:
home = os.path.realpath(home) + os.sep
# From http://wiki.python.org/moin/PdbRcIdea
# Command line history:
histfile = home + '.pyhist'
try:
    readline.read_history_file(histfile)
except IOError:
    pass
import atexit
atexit.register(readline.write_history_file, histfile)
readline.set_history_length(200)
# Cleanup namespace
# del atexit
# del home
# del histfile
# del os
# del readline
# del rlcompleter