Mercurial Alias in Python

I have an existing Mercurial alias (for closing and merging feature branches) written in bash. The problem is that my colleagues on Windows machines cannot use it. Mercurial already ships with Python, so the question is whether it is possible to call Python code in the alias. Then it would be OS-independent.
[alias]
close-feature = ![ -z "$1" ] && echo "You have to specify the issue number!" && exit 1; \
if hg branches | grep -q "fb-$1"; \
then $HG up fb-$1; $HG commit -m 'Close branch fb-$1.' --close-branch; $HG pull; $HG up default; $HG merge fb-$1; $HG commit -m 'Merge branch fb-$1 -> default.'; \
else echo "The branch fb-$1 does NOT exist!"; \
fi

question is whether it is possible to call python code in the alias
Without a standalone Python installation, no, AFAIK. You can, however, write a Mercurial extension in Python that adds the needed command; the extension is executed inside Mercurial's own Python environment.
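That said, if a Python interpreter does happen to be on the PATH of every machine, the alias's steps can also be scripted portably in plain Python and called from a shell alias such as `close-feature = !python close_feature.py $1`. Below is a minimal, hypothetical sketch (script name and error handling are illustrative, not from the answer above):

```python
# close_feature.py - hypothetical sketch of the alias's steps in plain Python,
# driving hg through subprocess so it works the same on Windows and Unix.
import subprocess

def plan(issue):
    """Return the hg command lines the alias runs, in order."""
    branch = "fb-%s" % issue
    return [
        ["hg", "up", branch],
        ["hg", "commit", "-m", "Close branch %s." % branch, "--close-branch"],
        ["hg", "pull"],
        ["hg", "up", "default"],
        ["hg", "merge", branch],
        ["hg", "commit", "-m", "Merge branch %s -> default." % branch],
    ]

def close_feature(issue):
    if not issue:
        print("You have to specify the issue number!")
        return 1
    # Check that the feature branch exists, like the grep in the bash alias.
    branches = subprocess.run(["hg", "branches"],
                              capture_output=True, text=True).stdout
    if ("fb-%s" % issue) not in branches:
        print("The branch fb-%s does NOT exist!" % issue)
        return 1
    for cmd in plan(issue):
        subprocess.run(cmd, check=True)  # abort on the first failing step
    return 0

print(plan("42")[0])  # ['hg', 'up', 'fb-42']
```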

Related

How to correctly zip the whole python standard library?

I recently successfully embedded a Python distribution with an application on Mac OS X, using a Homebrew-installed python3.7, per the methodology outlined in Joao Ventura's very useful two-part series, provided here for reference (http://joaoventura.net/blog/2016/embeddable-python-osx/) and (http://joaoventura.net/blog/2016/embeddable-python-osx-from-src/).
The only remaining issue for me was to reduce the size of the Python distribution in the application by zip-compressing the whole standard library minus lib-dynload, config-3.7m-darwin and site-packages.
My directory structure is as follows:
- python3.7/
  - include/
  - lib/
    - python3.7/
  - libpython3.7.dylib
  - python3.7 <executable>
The basic initial step is to move lib-dynload and config-3.7m-darwin out of lib/python3.7, so that I can compress the stdlib source files into lib/python37.zip, and then move lib-dynload and config-3.7m-darwin back into the now-empty lib/python3.7 to end up with the desired structure:
- python3.7/
  - include/
  - lib/
    - python3.7/
      - lib-dynload/
      - config-3.7m-darwin
    - python37.zip
  - libpython3.7.dylib
  - python3.7 <executable>
To test whether it worked or not, I would check sys.path from the executable and try to import a module and check its __file__ attribute to see if it came from the zip archive.
On this basis, I would cd into lib/python3.7 and try the following:
Select all files and folders and zip them using OS X Finder's Compress to generate python37.zip
Using the python zipfile module:
python -m zipfile -c python37.zip lib/python3.7/*
Using the zip method from How can you bundle all your python code into a single zip file?
cd lib/python3.7
zip -r9 ../python37.zip *
In all cases, I got it to work by setting PYTHONPATH to the zipped library, as in:
PYTHONPATH=lib/python37.zip ./python3.7
Doing so, I was able to successfully import from the zip archive and verify that the modules came from it. But without setting PYTHONPATH, it did not work.
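The verification approach described above can be reproduced end to end in a few lines; this is a self-contained sketch (module name and archive path are made up for the demo) that builds a zip with one module, puts it on sys.path, and checks where the import came from:

```python
# zip_import_check.py - build a zip containing one module, put it on
# sys.path, import it, and confirm __file__ points inside the archive.
import os
import sys
import tempfile
import zipfile

workdir = tempfile.mkdtemp()
archive = os.path.join(workdir, "lib.zip")

# Write a trivial module into the archive.
with zipfile.ZipFile(archive, "w") as zf:
    zf.writestr("demo_mod.py", "VALUE = 42\n")

sys.path.insert(0, archive)  # same effect as PYTHONPATH=lib.zip
import demo_mod

print(demo_mod.VALUE)                         # 42
print(demo_mod.__file__.startswith(archive))  # True: loaded from the zip
```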
Hence, I would very much appreciate some help to establish the correct and most straightforward way to zip the standard library so that it is recognized automatically from sys.path (without any extra steps such as setting the PYTHONPATH environment variable, which may not be possible on a user's machine).
Thanks in advance for any help provided.
Finally figured it out through a long process of elimination.
The only module you have to keep in site packages is os.py.
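For context on why this works without PYTHONPATH: as far as I understand CPython's startup path computation, the interpreter automatically probes <prefix>/lib/pythonXY.zip when building sys.path (which is why the archive must be named, e.g., python38.zip and sit in lib/), and a stray os.py on disk serves as the landmark file used to locate the prefix. You can inspect the zip entry your own interpreter expects:

```python
# Show the zip archive path this interpreter adds to sys.path by default;
# the entry is present even when no such archive exists on disk.
import sys

zip_name = "python%d%d.zip" % sys.version_info[:2]
zip_entries = [p for p in sys.path if p.endswith(zip_name)]
print(zip_entries)
```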
Here's a bash script for the whole process, which may or may not work for you. It assumes you have downloaded a Python source distribution from python.org. cd into the resulting source folder and run this script from its root:
#!/usr/bin/env bash
# build_python.sh
# NOTE: need os.py to remain in site-packages or it will fail
NAME=xpython
PWD=$(pwd)
PREFIX=${PWD}/${NAME}
VERSION=3.8
VER="${VERSION//./}"
LIB=${PREFIX}/lib/python${VERSION}
MAC_DEP_TARGET=10.13
remove() {
echo "removing $1"
rm -rf $1
}
rm_lib() {
echo "removing $1"
rm -rf ${LIB}/$1
}
clean() {
echo "removing __pycache__ .pyc/o from $1"
find $1 | grep -E "(__pycache__|\.pyc|\.pyo$)" | xargs rm -rf
}
clean_tests() {
echo "removing 'test' dirs from $1"
find $1 | grep -E "(tests|test)" | xargs rm -rf
}
clean_site_packages() {
echo "removing everything in $LIB/site-packages"
rm -rf $LIB/site-packages/*
}
rm_ext() {
echo "removing $LIB/lib-dynload/$1.cpython-${VER}-darwin.so"
rm -rf $LIB/lib-dynload/$1.cpython-${VER}-darwin.so
}
rm_bin() {
echo "removing $PREFIX/bin/$1"
rm -rf $PREFIX/bin/$1
}
./configure MACOSX_DEPLOYMENT_TARGET=${MAC_DEP_TARGET} \
--prefix=$PREFIX \
--enable-shared \
--with-universal-archs=64-bit \
--with-lto \
--enable-optimizations
make altinstall
clean $PREFIX
clean_tests $LIB
clean_site_packages
remove ${LIB}/site-packages
# remove what you want here...
rm_lib config-${VERSION}-darwin
rm_lib idlelib
rm_lib lib2to3
rm_lib tkinter
rm_lib turtledemo
rm_lib turtle.py
rm_lib ensurepip
rm_lib venv
remove $LIB/distutils/command/*.exe
remove $PREFIX/lib/pkgconfig
remove $PREFIX/share
# remove what you want here...
rm_ext _tkinter
rm_bin 2to3-${VERSION}
rm_bin idle${VERSION}
rm_bin easy_install-${VERSION}
rm_bin pip${VERSION}
mv $LIB/lib-dynload $PREFIX
cp $LIB/os.py $PREFIX
clean $PREFIX
python -m zipfile -c $PREFIX/lib/python${VER}.zip $LIB/*
remove $LIB
mkdir -p $LIB
mv $PREFIX/lib-dynload $LIB
mv $PREFIX/os.py $LIB
mkdir $LIB/site-packages
This is for a macOS user but can easily be adapted to other platforms. It's not very well tested, so post feedback if you encounter any issues.

~/virtualenvs/venv instead of ./venv

I am on a team of programmers. Since I work on several projects, I integrated virtualenvwrapper. My team's project assumes that the virtualenv is located directly inside the project, so there are many files where the path is hard-coded as ./venv/bin/python, and my supervisor didn't want to customize them. My virtualenv for that project is located in ~/.virtualenvs. Is there a clean way to link ./venv/bin/python to ~/.virtualenvs/bin/python without changing the files? To be clear, the path of my virtualenv is ~/virtualenvs/venv instead of ./venv. It works if I write ln -s ~/virtualenvs/venv ./venv, but it is not clean to do such things. That's why I wanted something cleaner.
Update
I have a shell command and it works perfectly well.
#!/bin/bash
cd ~
cd ./Projects/Work_Projects/24-django/
workon venv
I have a management command named reboot-db-local:
#!/bin/bash
source .mysql-context \
&& mysql -u $MYSQL_USER -p$MYSQL_PASSWORD -e "DROP DATABASE credit24h_dev;" \
&& mysql -u $MYSQL_USER -p$MYSQL_PASSWORD -e "CREATE DATABASE credit24h_dev CHARSET=utf8;" \
&& ./venv/bin/python manage.py migrate \
&& ./load-fixtures \
&& ./venv/bin/python manage.py check_permissions

Makefile: use $exe1 if exists else $exe2

In bash I can do something like this in order to check if a program exists:
if type -P vim > /dev/null; then
    echo "vim installed"
else
    echo "vim not installed"
fi
I would like to do the same thing in a Makefile.
In detail, I would like to choose "python3" if installed, else "python" (2). My Makefile looks like this:
PYTHON = python
TSCRIPT = test/test_psutil.py
test:
	$(PYTHON) $(TSCRIPT)
Is there anything I can do to use a conditional around that PYTHON = python line? I understand Makefiles can be told to use bash syntax somehow (SHELL:=/bin/bash?) but I'm no expert.
The easiest thing is probably to use $(shell) to figure out if python3 is callable:
ifeq ($(shell which python3),)
PYTHON = python
else
PYTHON = python3
endif
$(shell which python3) runs which python3 in a shell and expands to the output of that command: the path of python3 if it is available, and otherwise the empty string. This can be used in a conditional.
Addendum: About the portability concerns in the comments: the reason that $(shell type -P python3) does not work is that GNU make attempts to optimize away the shell call and fork/exec itself, which does not work with a shell builtin. I found this out from here. If your /bin/sh knows type -P, then
# note the semicolon -------v
ifeq ($(shell type -P python3;),)
works. My /bin/sh is dash, though, so that didn't work for me (it complained about -P not being a valid command). What did work was
ifeq ($(shell type python3;),)
because dash's type sends the error message about unavailable commands to stderr, not stdout (so the $(shell) expands to the empty string). If you can depend on which, I think doing that is the cleanest way. If you can depend on bash, then
ifeq ($(shell bash -c 'type -P python3'),)
also works. Alternatively,
SHELL = bash
ifeq ($(shell type -P python3;),)
has the same effect. If none of those are an option, desperate measures like #MadScientist's answer become attractive.
Or, if all else fails, you can resort to searching the path yourself:
PYTHON = $(shell IFS=:; for dir in $$PATH; do if test -f "$$dir/python3" && test -x "$$dir/python3"; then echo python3; exit 0; fi; done; echo python)
This is lifted from the way autoconf's AC_CHECK_PROG is implemented. I'm not sure whether I'd want this, though.
If you wanted to be more portable you can try invoking the command itself to see if it works or not:
PYTHON := $(shell python3 --version >/dev/null 2>&1 && echo python3 || echo python)
PYTHON := $(shell type -P python3 || echo "python")
You could use command -v:
PYTHON := $(shell command -v python3 2> /dev/null || echo python)
In Bash, command is a builtin command.
The example above is for GNU Make. Other Make programs may have a different syntax for running shell commands.
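Incidentally, the same PATH lookup that `which` and `command -v` perform is available in Python's standard library as shutil.which, which is handy if the interpreter-selection logic lives in a Python driver script instead of make (a sketch, not part of the answers above):

```python
# pick_python.py - choose "python3" if it is on the PATH, else "python",
# using shutil.which (returns the full path, or None when not found).
import shutil

def pick_python():
    return "python3" if shutil.which("python3") else "python"

print(pick_python())
```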

Any pointers on using Ropevim? Is it a usable library?

Rope is a refactoring library for Python and RopeVim is a Vim plugin which calls into Rope.
The idea of using RopeVim seems great to me, is there any documentation on "getting started" with RopeVim?
I've followed what documentation there is: https://bitbucket.org/agr/ropevim/src/tip/README.txt
I suppose I'm looking for:
- a "look at this blog post / article / link" that makes it all make sense, or
- alternate recommendations like "forget about RopeVim, it doesn't work very well" or "use this instead of ropevim".
For basic renaming, hover your vim cursor over the variable/method/etc that you wish to rename and then type:
:RopeRename <enter>
From there it should be self-explanatory. It asks for the root path to the project you wish to do the renaming in. Then it asks you for the new name. Then you can preview/confirm changes.
If you have tab-complete setup in your vim command-area you can look through the other rope features by typing:
:Rope<Tab>
The documentation you found only shows the Vim particulars. If you want to see what those rope functions can do, see the rope documentation. Note, it's incomplete and points to the unittests for a full overview of what it can do.
I use this script and it is the best way to automate the whole process:
https://gist.github.com/15067
#!/bin/bash
# Plant rope vim's plugin
# This is a script to install or update 'ropevim'
# Copyright Alexander Artemenko, 2008
# Contact me at svetlyak.40wt at gmail com
function create_dirs
{
mkdir -p src
mkdir -p pylibs
}
function check_vim
{
if vim --version | grep '\-python' > /dev/null
then
echo Your vim does not support python plugins.
echo Please install vim with python support.
echo On Debian or Ubuntu you can do this:
echo " sudo apt-get install vim-python"
exit 1
fi
}
function get_or_update
{
if [ -e $1 ]
then
cd $1
echo Pulling updates from $2
hg pull > /dev/null
cd ..
else
echo Cloning $2
hg clone $2 $1 > /dev/null
fi
}
function pull_sources
{
cd src
get_or_update rope http://bitbucket.org/agr/rope
get_or_update ropevim http://bitbucket.org/agr/ropevim
get_or_update ropemode http://bitbucket.org/agr/ropemode
cd ../pylibs
ln -f -s ../src/rope/rope
ln -f -s ../src/ropemode/ropemode
ln -f -s ../src/ropevim/ropevim.py
cd ..
}
function gen_vim_config
{
echo "let \$PYTHONPATH .= \":`pwd`/pylibs\"" > rope.vim
echo "source `pwd`/src/ropevim/ropevim.vim" >> rope.vim
echo "Now, just add \"source `pwd`/rope.vim\" to your .vimrc"
}
check_vim
create_dirs
pull_sources
gen_vim_config
If you can live without vim, use Eric, which has rope support.

Maintaining environment state between subprocess.Popen commands?

I'm writing a deployment engine for our system, where each project specifies its own custom deployment instructions.
The nodes are running on EC2.
One of the projects depends on a from source version of a 3rd party application.
Specifically:
cd /tmp
wget s3://.../tools/x264_20_12_2010.zip
unzip x264_20_12_2010.zip
cd x264_20_12_2010
./configure
make
checkinstall --pkgname=x264 --pkgversion "2:0.HEAD" --backup=no --deldoc=yes --fstrans=no --default
Currently I'm doing this with boto's ShellCommand (which uses subprocess.Popen internally), which looks something like this:
def deploy():
ShellCommand("apt-get remove ffmpeg x264 libx264-dev")
ShellCommand("apt-get update")
ShellCommand("apt-get install -y build-essential checkinstall yasm texi2html libfuse-dev fuse-utils libcurl4-openssl-dev libxml2-dev mime-support libfaac-dev libjack-jackd2-dev libmp3lame-dev libopencore-amrnb-dev libopencore-amrwb-dev libsdl1.2-dev libtheora-dev libvorbis-dev libvpx-dev libx11-dev libxfixes-dev libxvidcore-dev zlib1g-dev")
ShellCommand("cd /tmp")
s3cmd_sync("s3://.../tools/x264_20_12_2010.zip", "/tmp/x264_20_12_2010.zip")
ShellCommand("unzip x264_20_12_2010.zip")
ShellCommand("cd x264_20_12_2010")
ShellCommand("./configure")
ShellCommand("make")
ShellCommand(r'checkinstall --pkgname=x264 --pkgversion "2:0.HEAD" --backup=no --deldoc=yes --fstrans=no --default')
Sadly this fails, because cd /tmp applies only to that subprocess: once control returns to the parent process and I issue the second ShellCommand, the execution environment is inherited from the parent again. This leads me to think that I need some execution framework for shell commands that will run all the commands in the same subprocess without losing context.
What is the recommended solution to this problem? Please note that logging of the executed command lines is very important (how can you debug without it?), which is why I like ShellCommand... (see boto logging if interested).
Thank you,
Maxim.
Consider os.chdir("DIRECTORY") instead of Popen("cd DIRECTORY").
Maybe it's best here not to execute a new shell for each command: just write one multi-line shell script
deploy_commands = """apt-get foo
apt-get bar
cd baz ; boo bat"""
and execute it via Popen(deploy_commands, shell=True).
But please read the security warning in the Popen documentation about not escaping untrusted parameters.
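Alternatively, state can be carried between separate subprocess calls by passing cwd and env explicitly to each one. This is a hedged sketch using subprocess.run in place of boto's ShellCommand (the variable names are illustrative):

```python
# deploy_sketch.py - carry the working directory and environment
# explicitly between subprocess calls instead of relying on 'cd'.
import os
import subprocess
import tempfile

def run(cmd, cwd, env):
    # check=True aborts on the first failing step, like 'set -e' in a script.
    return subprocess.run(cmd, shell=True, cwd=cwd, env=env, check=True)

workdir = tempfile.mkdtemp()
env = dict(os.environ, BUILD_FLAG="1")  # extend a copy, not os.environ itself

run("echo building with flag $BUILD_FLAG", cwd=workdir, env=env)
run("touch artifact.txt", cwd=workdir, env=env)  # created inside workdir

print(os.path.exists(os.path.join(workdir, "artifact.txt")))  # True
```

Every step sees the same working directory and environment because both are passed in each call, rather than being mutated by a child process and lost.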
I ended up doing this
def shell_script(appname, *commands):
    workspace = tempfile.mkdtemp(prefix=appname + '-')
    installer = open(workspace + "/installer.sh", 'w')
    installer.write("#!/bin/bash\n")
    installer.write("cd " + workspace + "\n")
    for line in commands:
        installer.write(line + "\n")
    installer.close()  # flush the script before executing it
    ShellCommand("chmod u+x " + installer.name)
    ShellCommand(installer.name)
