Make always invokes some dependency rules each time it runs - python

I have a situation where I need to run some python scripts remotely, and need to select and copy a few files to a remote folder. I am doing this in two stages: I copy the files to a temp folder and then make an archive ready to send.
I created a makefile to automate the first stage, but it seems to behave a little strangely. The makefile looks as follows:
# Makefile and user paths
mkfile_path = $(dir $(realpath $(firstword $(MAKEFILE_LIST))))
user_path = $(shell echo $$HOME)
# Dependencies
ENTDIR = entropy
BINDIR = binary-files
MODDIR = modules
NORTH = $(BINDIR)/north
SOUTH = $(BINDIR)/south
WEST = $(BINDIR)/west
DISK = $(MODDIR)/disk
GEN = $(MODDIR)/general
PROB = $(MODDIR)/probability
NLTK = nltk_data
METAVAR = obj-meta-vars
# Target
TARGET=scripts-to-run-remotely.tar.gz
# Rules
all : $(TARGET)
	@echo "Complete"
$(TARGET) : $(NORTH)/north.obj \
	$(SOUTH)/south.obj \
	$(WEST)/west.obj \
	$(ENTDIR)/mifunction.py \
	$(ENTDIR)/miopt.py \
	$(ENTDIR)/miprint.py \
	$(ENTDIR)/run-logs.py \
	$(DISK)/%.py \
	$(GEN)/%.py \
	$(PROB)/%.py \
	$(METAVAR)/%.obj \
	$(NLTK)
	tar -czf $(TARGET) $(ENTDIR)/* $(BINDIR)/* $(MODDIR)/* $(NLTK)/* $(METAVAR)/*
# Files
$(NORTH)/north.obj: $(NORTH)
	cp /home/user/Documents/python/$(NORTH)/north.obj ./$(NORTH)
$(SOUTH)/south.obj: $(SOUTH)
	cp /home/user/Documents/python/$(SOUTH)/south.obj ./$(SOUTH)
$(WEST)/west.obj: $(WEST)
	cp /home/user/Documents/python/$(WEST)/west.obj ./$(WEST)
$(DISK)/%.py: $(DISK)
	cp /home/user/Documents/python/$(DISK)/*.py ./$(DISK)
$(GEN)/%.py: $(GEN)
	cp /home/user/Documents/python/$(GEN)/*.py ./$(GEN)
$(PROB)/%.py: $(PROB)
	cp /home/user/Documents/python/$(PROB)/*.py ./$(PROB)
$(ENTDIR)/mifunction.py: $(ENTDIR)
	cp /home/user/Documents/python/$(ENTDIR)/mifunction.py ./$(ENTDIR)
$(ENTDIR)/miopt.py: $(ENTDIR)
	cp /home/user/Documents/python/$(ENTDIR)/miopt.py ./$(ENTDIR)
$(ENTDIR)/miprint.py: $(ENTDIR)
	cp /home/user/Documents/python/$(ENTDIR)/miprint.py ./$(ENTDIR)
$(ENTDIR)/run-logs.py: $(ENTDIR)
	cp /home/user/Documents/python/$(ENTDIR)/run-logs.py ./$(ENTDIR)
$(METAVAR)/%.obj: $(METAVAR)
	cp /home/user/Dropbox/data/outputs/$(METAVAR)/*.obj ./$(METAVAR)
# Folders
$(NORTH):
	mkdir -p $@
$(SOUTH):
	mkdir -p $@
$(WEST):
	mkdir -p $@
$(ENTDIR):
	mkdir -p $@
$(DISK):
	mkdir -p $@
$(GEN):
	mkdir -p $@
$(PROB):
	mkdir -p $@
$(METAVAR):
	mkdir -p $@
$(NLTK):
	mkdir -p $@
	@python3 -m nltk.downloader wordnet wordnet_ic averaged_perceptron_tagger -d $(mkfile_path)/$(NLTK)
clean:
	@rm -rf ./$(TARGET) ./$(ENTDIR) ./$(BINDIR) ./$(MODDIR) ./$(METAVAR) ./$(NLTK)
	@echo "All files and folders removed"
# Always run those:
.PHONY: all
The first thing I'd like to ask is how to avoid redundancy: how, if possible, to avoid repeating parts of the code.
The second thing is that when I run make for the first time, it runs through all the rules where folders need to be created and then through all the rules where files need to be copied. But when I run make again, it still invokes the rules related to copying files:
cp /home/user/Documents/python/entropy/mifunction.py ./entropy
cp /home/user/Documents/python/entropy/miopt.py ./entropy
cp /home/user/Documents/python/entropy/miprint.py ./entropy
cp /home/user/Documents/python/entropy/run-logs.py ./entropy
cp /home/user/Documents/python/modules/disk/*.py ./modules/disk
cp /home/user/Documents/python/modules/general/*.py ./modules/general
cp /home/user/Documents/python/modules/probability/*.py ./modules/probability
cp /home/user/Dropbox/data/outputs/obj-meta-vars/*.obj ./obj-meta-vars
tar -czf scripts-to-run-remotely.tar.gz entropy/* binary-files/* modules/* nltk_data/* obj-meta-vars/*
Complete
I am guessing there must be something wrong with the dependencies related to checking for existing folders.
Thanks.

The problem is that your copying rules have this form:
$(ENTDIR)/mifunction.py: $(ENTDIR)
	cp /home/user/Documents/python/$(ENTDIR)/mifunction.py ./$(ENTDIR)
Notice that the destination directory is a prerequisite. Make will consider the target out of date if the directory has a later timestamp than the target, and the OS updates the timestamp of the directory when a file is added to it (or removed). Since this makefile copies other files to that directory, this target will appear to be out of date the next time you run Make.
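You can watch this happen in a shell (GNU stat shown; on macOS use stat -f %m instead):
$ mkdir d && touch d/a
$ stat -c %Y d     # note the directory's mtime
$ sleep 1 && touch d/b
$ stat -c %Y d     # mtime has advanced; the directory now looks newer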
There is more than one way to solve this. The simplest is to change the prerequisite to an order-only prerequisite by adding a pipe ('|'):
$(ENTDIR)/mifunction.py: | $(ENTDIR)
	cp /home/user/Documents/python/$(ENTDIR)/mifunction.py ./$(ENTDIR)
Once you confirm that this works, you can work on other improvements. You might consider using the original files as prerequisites:
$(ENTDIR)/mifunction.py: /home/user/Documents/python/$(ENTDIR)/mifunction.py | $(ENTDIR)
	cp /home/user/Documents/python/$(ENTDIR)/mifunction.py ./$(ENTDIR)
This looks ungainly until you introduce automatic variables:
$(ENTDIR)/mifunction.py: /home/user/Documents/python/$(ENTDIR)/mifunction.py | $(ENTDIR)
	cp $< $@
Whether or not you do that, you can introduce another variable:
PYTHON_DIR := /home/user/Documents/python
which will remove a lot of redundancy.
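For example, the four entropy copy rules could then collapse into a single pattern rule (a sketch, not tested against your tree):
PYTHON_DIR := /home/user/Documents/python

$(ENTDIR)/%.py: $(PYTHON_DIR)/$(ENTDIR)/%.py | $(ENTDIR)
	cp $< $@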
Further improvements are possible, but that's probably enough for now.

Trying to answer the avoid-repetition part of the question: first, use automatic variables to refer to targets or prerequisites inside rules, e.g.
$(WEST)/west.obj: $(WEST)
	cp /home/user/Documents/python/$@ $<
The rule will then look identical in quite a few places, which enables the next change: define the identical recipe as a variable:
COPY = cp /home/user/Documents/python/$@ $<
$(WEST)/west.obj: $(WEST)
	$(COPY)
Next, use variables for your paths, e.g.
PYTHON_SOURCE_PATH = /home/user/Documents/python/
and use this variable in all the rules that need it (or the COPY variable as shown before). You should then be able to change this path by editing only the line where the variable is defined. Next, collect the directories that may need to be created into a variable, too. Then a lot of rules can be replaced by a single one:
DIRECTORIES = $(NORTH) $(SOUTH) # ... the others
$(DIRECTORIES):
	mkdir -p $@
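Putting the pieces together might look like this (a sketch; note that if you also make the directories order-only prerequisites, as the other answer suggests, $< no longer names the directory, so the destination is derived from the target with $(dir ...) instead):
PYTHON_SOURCE_PATH = /home/user/Documents/python
COPY = cp $(PYTHON_SOURCE_PATH)/$@ $(dir $@)
DIRECTORIES = $(NORTH) $(SOUTH) $(WEST) $(ENTDIR) $(DISK) $(GEN) $(PROB) $(METAVAR) $(NLTK)

$(NORTH)/north.obj $(SOUTH)/south.obj $(WEST)/west.obj: | $(DIRECTORIES)
	$(COPY)

$(DIRECTORIES):
	mkdir -p $@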


warning: jobserver unavailable when calling make from external Python script

I was converting an external build script from Bash to Python when I ran into this error. First, I run make build -j4, which runs a Python script to generate a list of icons to build. I pass $(MAKE) to the Python script as well, which then runs [make command passed to it] [list of icons]. However, I get an error when doing this, which I didn't get in the Bash version.
The error I get when the Python script calls make is: make[1]: warning: jobserver unavailable: using -j1. Add '+' to parent make rule.
I have tried adding a + to the rule it's called from, as well as to the rules it calls, with no success.
Makefile extracts:
BUILD_DIR=argon
ICON_RESOLUTIONS=8 16 22 24 32 48 64 128 256
#Generates a list of svg files and png files
SVG_OBJS_ORIG = $(wildcard ./$(BUILD_DIR)/scalable/*/*.svg)
SVG_OBJS = $(SVG_OBJS_ORIG) $(wildcard ./$(BUILD_DIR)/scalable/*/*/*.svg)
PNG_OBJS = $(subst ./$(BUILD_DIR),./$(BUILD_DIR)/resolution,$(subst .svg,.png,$(SVG_OBJS)))
PNG_LIST = $(wildcard ./$(BUILD_DIR)/*/*/*.png*)
build: autoclean
	#Generate a list of icons to build, then call make with all the icon svgs
	./icon-builder.py --list "$(BUILD_DIR)" "$(ICON_RESOLUTIONS)" "$(MAKE)"
$(PNG_OBJS): ./$(BUILD_DIR)/resolution/%.png: ./$(BUILD_DIR)/%.svg
	mkdir -p "$(BUILD_DIR)"
	./make-helper.sh "-i" "$@" "$(ICON_RESOLUTIONS)" "$(BUILD_DIR)"
index:
	./generate-index.py "--index" "$(BUILD_DIR)"
Python extract:
#makeCommand is the value of $(MAKE), as that's passed to the script
#buildList is an array of each 'file' to pass to make
subprocess.run(makeCommand + buildList)
#this will usually evaluate to something like: subprocess.run(["make", "argon/resolution/scalable/apps/openjdk-9.png", "argon/resolution/scalable/apps/org.gnome.ArchiveManager.png", "index"])
Bash extract:
$makeCommand "${rebuildList[@]}"
#This would usually evaluate to something like: make argon/resolution/scalable/apps/openjdk-9.png argon/resolution/scalable/apps/org.gnome.ArchiveManager.png index
Reproducible example:
Makefile:
SHELL=bash
BUILD_DIR=argon
ICON_RESOLUTIONS=8 16 22 24 32 48 64 128 256
SVG_OBJS_ORIG = $(wildcard ./$(BUILD_DIR)/scalable/*/*.svg)
SVG_OBJS = $(SVG_OBJS_ORIG) $(wildcard ./$(BUILD_DIR)/scalable/*/*/*.svg)
PNG_OBJS = $(subst ./$(BUILD_DIR),./$(BUILD_DIR)/resolution,$(subst .svg,.png,$(SVG_OBJS)))
PNG_LIST = $(wildcard ./$(BUILD_DIR)/*/*/*.png*)
.PHONY: build autoclean index
build: autoclean
	#Generate a list of icons to build, then call make with all the icon svgs
	./icon-builder.py --list "$(BUILD_DIR)" "$(ICON_RESOLUTIONS)" "$(MAKE)"
autoclean:
	#Delete broken symlinks, left over pngs and the index
	find "./$(BUILD_DIR)" -type d -empty -delete
	#External script to autoclean, no issues here
	if [[ -f "$(BUILD_DIR)/index.theme" ]]; then \
		rm "$(BUILD_DIR)/index.theme"; \
	fi
$(PNG_OBJS): ./$(BUILD_DIR)/resolution/%.png: ./$(BUILD_DIR)/%.svg
	mkdir -p "$(BUILD_DIR)"
	#External script to build specific icon would be here, no issues there
index:
	echo "The script that goes here works fine, and doesn't call make"
icon-builder.py:
#!/usr/bin/python3
import subprocess, sys
#Code to generate this left out
buildList=['argon/resolution/scalable/apps/gnome-mines.png', 'argon/resolution/scalable/apps/org.gnome.Mines-symbolic.png', 'argon/resolution/scalable/apps/gnome-calculator-symbolic.png', 'argon/resolution/scalable/apps/org.gnome.Mines.png', 'argon/resolution/scalable/apps/gnome-mines-symbolic.png', 'argon/resolution/scalable/apps/google-chrome.png', 'argon/resolution/scalable/apps/gnome-photos.png', 'argon/resolution/scalable/apps/gnome-calculator.png', 'argon/resolution/scalable/apps/gnome-photos-symbolic.png']
buildList.append("index")
#Add make to make arguments
makeCommand = str(sys.argv[4])
makeCommand = makeCommand.split()
#Combine make command and icons to start build
subprocess.run(makeCommand + buildList)
print(makeCommand + buildList)
argon/scalable/[several dirs]/ are all filled with svgs
When running make build -j4, I get:
#Delete broken symlinks, left over pngs and the index
find "./argon" -type d -empty -delete
#External script to autoclean, no issues here
if [[ -f "argon/index.theme" ]]; then \
rm "argon/index.theme"; \
fi
#Generate a list of icons to build, then call make with all the icon svgs
./icon-builder.py --list "argon" "8 16 22 24 32 48 64 128 256" "make"
make[1]: warning: jobserver unavailable: using -j1. Add '+' to parent make rule.
make[1]: Entering directory '/data/ratus5/Projects/Code/test'
mkdir -p "argon"
#External script to build specific icon would be here, no issues there
mkdir -p "argon"
#External script to build specific icon would be here, no issues there
mkdir -p "argon"
#External script to build specific icon would be here, no issues there
mkdir -p "argon"
#External script to build specific icon would be here, no issues there
mkdir -p "argon"
#External script to build specific icon would be here, no issues there
mkdir -p "argon"
#External script to build specific icon would be here, no issues there
mkdir -p "argon"
#External script to build specific icon would be here, no issues there
mkdir -p "argon"
#External script to build specific icon would be here, no issues there
mkdir -p "argon"
#External script to build specific icon would be here, no issues there
echo "The script that goes here works fine, and doesn't call make"
The script that goes here works fine, and doesn't call make
make[1]: Leaving directory '/data/ratus5/Projects/Code/test'
['make', 'argon/resolution/scalable/apps/gnome-mines.png', 'argon/resolution/scalable/apps/org.gnome.Mines-symbolic.png', 'argon/resolution/scalable/apps/gnome-calculator-symbolic.png', 'argon/resolution/scalable/apps/org.gnome.Mines.png', 'argon/resolution/scalable/apps/gnome-mines-symbolic.png', 'argon/resolution/scalable/apps/google-chrome.png', 'argon/resolution/scalable/apps/gnome-photos.png', 'argon/resolution/scalable/apps/gnome-calculator.png', 'argon/resolution/scalable/apps/gnome-photos-symbolic.png', 'index']
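For what it's worth, a likely culprit here is the Python side rather than the Makefile: GNU make hands the jobserver to sub-makes as a pair of inherited pipe file descriptors (advertised via MAKEFLAGS), and Python's subprocess closes all file descriptors above 2 by default (close_fds=True since Python 3.2), whereas Bash leaves them open, which would explain why the Bash version worked. A minimal change to try in icon-builder.py:
#Let the child make inherit make's jobserver pipe FDs
subprocess.run(makeCommand + buildList, close_fds=False)
The + prefix on the recipe line (or the presence of $(MAKE) in it, which make treats the same way) is still needed so that make exports the jobserver information in the first place.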

How to run some configuration commands to allow for a Makefile to run properly in a data processing project?

I am using a Makefile to run relatively small data science processing projects (typically involving running around 10-15 python scripts).
I want to have my project folder set up properly at the beginning of the makefile, doing certain things that will allow another user to replicate the process, including:
create a substitution drive to avoid long file paths with spaces that are out of my control
create folders to house output data
create and setup a new python environment
I have tried to do this in the following manner:
config :
	subst A: "B:\Network Drive\with-lots\Of Spaces (and other issues)"
	mkdir output-data
	conda create -n new_env
	conda activate new_env
	pip install -r requirements.txt
output-data/file1.csv : script_one.py Q:/inputfile1.csv
	python $^ $@
...
The main issue here is that running this more than once will give an error causing make to stop, since the path substitution will have been done, the directory will have already been created, and the environment is already set up.
Is there a better approach that would allow the config phony target to be run only once, or at least have it run every command that it needs to run without shutting down due to an error?
Or is there a better approach for having a replicable environment set up for someone else to get it going on their own system, i.e. outside the Makefile?
You could create a marker empty file to indicate that the configuration step has already been done:
.config.done:
	subst A: "B:\Network Drive\with-lots\Of Spaces (and other issues)"
	mkdir output-data
	conda create -n new_env
	conda activate new_env
	pip install -r requirements.txt
	touch $@
output-data/file1.csv: script_one.py Q:/inputfile1.csv | .config.done
	python $^ $@
...
But a better solution would be to have a way to know whether each configuration step has been done by looking at a resulting file or directory. mkdir output-data, for instance, is easy: if output-data exists we know it has been done, so you can add output-data as an order-only prerequisite of the next step (order-only because it is a directory and you care only about its existence, not its last modification time). This way, if one step fails, the corresponding file or directory is not created, the complete make run fails, and after fixing whatever needs fixing you can restart the configuration from where it stopped.
If you do not have a resulting file or directory to look at for some steps, you can use the same empty-file trick as above.
You could then describe your configuration with a much finer grain:
.PHONY: config
config: .requirements.installed
.requirements.installed: .new_env.activated
	pip install -r requirements.txt
	touch $@
.new_env.activated: .new_env.created
	conda activate new_env
	touch $@
.new_env.created: | output-data
	conda create -n new_env
	touch $@
output-data: .subst-A.done
	mkdir $@
.subst-A.done:
	subst A: "B:\Network Drive\with-lots\Of Spaces (and other issues)"
	touch $@
output-data/file1.csv: script_one.py Q:/inputfile1.csv | config
	python $^ $@
By removing some dependencies you can even parallelize the configuration (if you run make with the -j option):
.PHONY: config
config: .subst-A.done .new_env.activated .requirements.installed | output-data
.requirements.installed:
	pip install -r requirements.txt
	touch $@
.new_env.activated: .new_env.created
	conda activate new_env
	touch $@
.new_env.created:
	conda create -n new_env
	touch $@
output-data:
	mkdir $@
.subst-A.done:
	subst A: "B:\Network Drive\with-lots\Of Spaces (and other issues)"
	touch $@
output-data/file1.csv: script_one.py Q:/inputfile1.csv | config
	python $^ $@
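With that layout, a fresh checkout could be brought up with something like (hypothetical session):
make -j4 config             # independent setup steps run in parallel
make output-data/file1.csv  # config steps are skipped once their markers exist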

How to correctly zip the whole python standard library?

I recently successfully embedded a python distribution with an application on Mac OS X, using a homebrew-installed python3.7, as per the methodology outlined in Joao Ventura's very useful two-part series, provided here for reference (http://joaoventura.net/blog/2016/embeddable-python-osx/) and (http://joaoventura.net/blog/2016/embeddable-python-osx-from-src/).
The only remaining issue for me was to reduce the size of the python distribution in the application by zip-compressing the whole standard library minus lib-dynload, config-3.7m-darwin and site-packages.
My directory structure is as follows:
- python3.7/
  - include/
  - lib/
    - python3.7/
  - libpython3.7.dylib
  - python3.7 <executable>
The basic initial step is to move lib-dynload and config-3.7m-darwin out of lib/python3.7, so that I can compress the stdlib source files into lib/python37.zip, and then move lib-dynload and config-3.7m-darwin back into the now-empty lib/python3.7 to end up with the desired structure:
- python3.7/
  - include/
  - lib/
    - python3.7/
      - lib-dynload/
      - config-3.7m-darwin
    - python37.zip
  - libpython3.7.dylib
  - python3.7 <executable>
To test whether it worked or not, I would check sys.path from the executable and try to import a module and check its __file__ attribute to see if it came from the zip archive.
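Concretely, the check might look like this from the embedded interpreter (json is just an arbitrary stdlib module):
import sys
print(sys.path)       # look for .../lib/python37.zip in the list
import json
print(json.__file__)  # e.g. .../lib/python37.zip/json/__init__.py if the zip is used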
On this basis, I would cd into lib/python3.7 and try the following:
Select all files and folders and zip using OS X's Finder's compress to generate python37.zip
Using the python zipfile module:
python -m zipfile -c python37.zip lib/python3.7/*
Using the zip method from How can you bundle all your python code into a single zip file?
cd lib/python3.7
zip -r9 ../python37.zip *
In all cases, I got it to work by setting PYTHONPATH to the zipped library, as in:
PYTHONPATH=lib/python37.zip ./python3.7
Doing so, I was able to successfully import from the zip archive and verify that the modules came from it. But without setting PYTHONPATH, it did not work.
Hence, I would very much appreciate some help establishing the correct and most straightforward way to zip the standard library such that it is recognized automatically from sys.path (without any extra steps such as setting the PYTHONPATH environment variable, which may not be possible on a user's machine).
Thanks in advance for any help provided.
Finally figured it out through a long process of elimination.
The only module you have to keep in site-packages is os.py.
Here's a bash script for the whole process, which may or may not work for you. It assumes you have downloaded a python source distribution from python.org.
Then cd into the resultant source folder and run this script from its root:
#!/usr/bin/env bash
# build_python.sh
# NOTE: need os.py to remain in site-packages or it will fail
NAME=xpython
PWD=$(pwd)
PREFIX=${PWD}/${NAME}
VERSION=3.8
VER="${VERSION//./}"
LIB=${PREFIX}/lib/python${VERSION}
MAC_DEP_TARGET=10.13
remove() {
    echo "removing $1"
    rm -rf $1
}
rm_lib() {
    echo "removing $1"
    rm -rf ${LIB}/$1
}
clean() {
    echo "removing __pycache__ .pyc/o from $1"
    find $1 | grep -E "(__pycache__|\.pyc|\.pyo$)" | xargs rm -rf
}
clean_tests() {
    echo "removing 'test' dirs from $1"
    find $1 | grep -E "(tests|test)" | xargs rm -rf
}
clean_site_packages() {
    echo "removing everything in $LIB/site-packages"
    rm -rf $LIB/site-packages/*
}
rm_ext() {
    echo "removing $LIB/lib-dynload/$1.cpython-${VER}-darwin.so"
    rm -rf $LIB/lib-dynload/$1.cpython-${VER}-darwin.so
}
rm_bin() {
    echo "removing $PREFIX/bin/$1"
    rm -rf $PREFIX/bin/$1
}
./configure MACOSX_DEPLOYMENT_TARGET=${MAC_DEP_TARGET} \
    --prefix=$PREFIX \
    --enable-shared \
    --with-universal-archs=64-bit \
    --with-lto \
    --enable-optimizations
make altinstall
clean $PREFIX
clean_tests $LIB
clean_site_packages
remove ${LIB}/site-packages
# remove what you want here...
rm_lib config-${VERSION}-darwin
rm_lib idlelib
rm_lib lib2to3
rm_lib tkinter
rm_lib turtledemo
rm_lib turtle.py
rm_lib ensurepip
rm_lib venv
remove $LIB/distutils/command/*.exe
remove $PREFIX/lib/pkgconfig
remove $PREFIX/share
# remove what you want here...
rm_ext _tkinter
rm_bin 2to3-${VERSION}
rm_bin idle${VERSION}
rm_bin easy_install-${VERSION}
rm_bin pip${VERSION}
mv $LIB/lib-dynload $PREFIX
cp $LIB/os.py $PREFIX
clean $PREFIX
python -m zipfile -c $PREFIX/lib/python${VER}.zip $LIB/*
remove $LIB
mkdir -p $LIB
mv $PREFIX/lib-dynload $LIB
mv $PREFIX/os.py $LIB
mkdir $LIB/site-packages
This is for a Mac user but can easily be adapted for other platforms. It's not very well tested, so post feedback if you encounter any issues.
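Once the script finishes, a quick sanity check along the lines of the original question (paths assume the NAME=xpython prefix above):
$ ./xpython/bin/python3.8 -c 'import sys, json; print(sys.path); print(json.__file__)'
If the zip is being picked up automatically, python38.zip should appear on sys.path without PYTHONPATH being set.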

No such file or directory in find running .sh

Running this on osx...
cd ${BUILD_DIR}/mydir && for DIR in $(find ./ '.*[^_].py' | sed 's/\/\//\//g' | awk -F "/" '{print $2}' | sort |uniq | grep -v .py); do
if [ -f $i/requirements.txt ]; then
pip install -r $i/requirements.txt -t $i/
fi
cd ${DIR} && zip -r ${DIR}.zip * > /dev/null && mv ${DIR}.zip ../../ && cd ../
done
cd ../
error:
(env) ➜ sh package_lambdas.sh
find: .*[^_].py: No such file or directory
why?
find takes as arguments a list of directories to search. You provided what appears to be a regular expression. Because there is no directory named (literally) .*[^_].py, find returns an error.
Below I have revised your script to correct that mistake (if I understand your intention). Because I see so many ill-written shell scripts these days, I've taken the liberty of "traditionalizing" it. Please see if you don't also find it more readable.
Changes:
use #!/bin/sh, guaranteed to be on a Unix-like system. Faster than bash, unless (as on OS X) it is bash.
use lower case for variable names to distinguish from system variables (and not hide them).
eschew braces for variables (${var}); they're not needed in the simple case
do not pipe output to /usr/bin/true; route it to /dev/null if that's what you mean
rm -f by definition cannot fail; if you meant || true, it's superfluous
put then and do on separate lines, easier to read, and that's how the Bourne shell language was meant to be used
Let && and || serve as line-continuation, so you can see what's happening step by step
Other changes I would suggest:
Use a subshell when changing the working directory temporarily. When it terminates, the working directory is restored automatically (retained by the parent), saving you the cd .. step, and errors.
Use set -e to cause the script to terminate on error. For expected errors, use || true explicitly.
Change grep .py to grep '\.py$', just for good measure.
To avoid Tilting Matchstick Syndrome, use something other than / as a sed substitute delimiter, e.g., sed 's://:/:g'. But sed could be avoided altogether with awk -F '/+' '{print $2}'.
Revised version:
#! /bin/sh
src_dir=lambdas
build_dir=bin
mkdir -p $build_dir/lambdas
rm -rf $build_dir/*.zip
cp -r $src_dir/* $build_dir/lambdas
#
# The sed is a bit complicated to be osx / linux cross compatible :
# ( .//run.sh vs ./run.sh
#
cd $build_dir/lambdas &&
for L in $(find . -exec grep -l '.*[^_].py' {} + |
           sed 's/\/\//\//g' |
           awk -F "/" '{print $2}' |
           sort |
           uniq |
           grep -v '\.py$')
do
    if [ -f $L/requirements.txt ]
    then
        echo "Installing requirements"
        pip install -r $L/requirements.txt -t $L/
    fi
    cd $L &&
        zip -r $L.zip * > /dev/null &&
        mv $L.zip ../../ &&
        cd ../
done
cd ../
The find(1) manpage says its args are [path ...] [expression], where "expression" consists of "primaries" and "operands" (-flags). '.*[^_].py' doesn't look like any expression, so it's being interpreted as a path, and find is reporting that there is no file named '.*[^_].py' in the working directory.
Perhaps you meant:
find ./ -regex '.*[^_].py'
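Or, if the intent was simply "Python files whose names don't end in an underscore before the extension", a plain -name test avoids the regex entirely (a guess at the intent):
find . -name '*.py' ! -name '*_.py'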

Gettext : How to update po and pot files after the source is modified

I've got a python project with internationalized strings.
I've modified the source code, and the line numbers of the strings have changed, i.e. in the pot and po files the string references no longer point to the correct lines.
So how do I update the po and pot files to the new string locations in the files?
You could have a look at this script to update your po files with the new code. It uses xgettext and msgmerge.
echo '' > messages.po # xgettext needs that file, and we need it empty
find . -type f -iname "*.py" | xgettext -j -f - # this modifies messages.po
msgmerge -N existing.po messages.po > new.po
mv new.po existing.po
rm messages.po
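If you update translations often, the same steps drop naturally into a phony make target (a sketch; existing.po stands in for your real po file):
.PHONY: update-po
update-po:
	echo '' > messages.po
	find . -type f -iname '*.py' | xgettext -j -f -
	msgmerge -N existing.po messages.po > new.po
	mv new.po existing.po
	rm messages.po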
Using autoconf and automake you can simply change into the po subdirectory and run:
make update-po
or:
make update-gmo
For those who use meson, the i18n module provides the targets
<project_id>-pot and <project_id>-update-po.
E.g. for iputils project:
$ dir="/tmp/build"
$ meson . $dir && ninja iputils-pot -C $dir && ninja iputils-update-po -C $dir
SOURCE: https://mesonbuild.com/i18n-module.html
