Implement Git hook - prePush and preCommit - python

Could you please show me how to implement a git hook?
Before committing, the hook should run a Python script. Something like this:
cd c:\my_framework & run_tests.py --project Proxy-Tests\Aeries \
--client Aeries --suite <Commit_file_Name> --dryrun
If the dry run fails, the commit should be stopped.

You need to tell us in what way the dry run will fail. Will there be an output .txt file with the errors? Will an error be displayed on the terminal?
In any case, you must name the pre-commit script pre-commit and save it in the .git/hooks/ directory.
Since your dry-run script seems to live at a different path than the pre-commit script, here's an example that finds and runs your script.
I assume from the backslash in your path that you are on a Windows machine, and I also assume that your dry-run script lives in the same project where you have Git installed, in a folder called tools (of course you can change this to your actual folder).
#!/bin/sh
# Directory containing your python script
FILE_PATH=tools/
# Get the relative path of the root directory of the project
rdir=$(git rev-parse --git-dir)
rel_path="$(dirname "$rdir")"
# cd to that path and run the file
cd "$rel_path/$FILE_PATH" || exit 1
echo "Running dryrun script..."
python run_tests.py
# From this point on you need to handle the dry run error/s.
# For demonstration purposes I'll assume that an output.txt file
# holding the result is produced.
# Extract the last non-empty line of the output file and look for errors
final_res=$(tac output.txt | grep -m 1 . | grep 'error')
echo -e "--------Dry run result---------\n${final_res}"
# If an error exists, abort the commit. (Testing this inside a 'cmd | while read'
# loop would run in a subshell, where 'exit 1' could not abort the hook.)
if [ -n "$final_res" ]; then
    echo -e "Dry run failed.\nAborting commit..."
    exit 1
fi
Now every time you run git commit -m the pre-commit script will run the dry-run file and abort the commit if any errors have occurred, keeping your files in the staging area.
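If you are setting this up for the first time, here is a minimal sketch of installing and exercising the hook (assuming the script above is saved as pre-commit in the current directory; the commit message is just an example):
# copy the hook into place and make it executable
cp pre-commit .git/hooks/pre-commit
chmod +x .git/hooks/pre-commit
# any commit now triggers the hook first; a non-zero exit aborts the commit
git commit -m "test hook"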

I have implemented this in my hook. Here is the code snippet.
#!/bin/sh
# Name of your python script
RUN_TESTS="run_tests.py"
FRAMEWORK_DIR="/my-framework/"
CUR_DIR=$(basename "$PWD")
# Get the full path of the root directory of the project
rDIR=$(git rev-parse --show-toplevel)
OneStepBack=/../
CD_FRAMEWORK_DIR="$rDIR$OneStepBack$FRAMEWORK_DIR"
# Find the list of staged .txt files - to be committed
LIST_OF_FILES=$(git diff --cached --name-only | grep '\.txt$')
for FILE in $LIST_OF_FILES; do
    cd "$CD_FRAMEWORK_DIR" || exit 1
    python "$RUN_TESTS" --dryrun --project "$CUR_DIR/$FILE"
    OUT=$?
    if [ $OUT -eq 0 ]; then
        continue
    else
        # 'return' is only valid inside a function; 'exit 1' aborts the commit
        exit 1
    fi
done

Related

Is it possible to compile microbit python code locally?

I am running Ubuntu 22.04 with xorg.
I need to find a way to compile microbit python code locally to a firmware hex file. Firstly, I followed the guide here https://microbit-micropython.readthedocs.io/en/latest/devguide/flashfirmware.html.
After a lot of debugging, I got to this point: https://pastebin.com/MGShD31N
However, the file platform.h does exist.
sawntoe@uwubuntu:~/Documents/Assignments/2022/TVP/micropython$ ls /home/sawntoe/Documents/Assignments/2022/TVP/micropython/yotta_modules/mbed-classic/api/platform.h
/home/sawntoe/Documents/Assignments/2022/TVP/micropython/yotta_modules/mbed-classic/api/platform.h
sawntoe@uwubuntu:~/Documents/Assignments/2022/TVP/micropython$
At this point, I gave up on this and tried using Mu editor with the AppImage. However, Mu requires wayland, and I am on xorg.
Does anyone have any idea if this is possible? Thanks.
Mu and the uflash command can flash your Python code to the micro:bit (and retrieve code back from .hex files). Using uflash, for example:
uflash my_script.py
I think that what you want is somehow possible to do, but it's harder than just using their web Python editor: https://python.microbit.org/v/2
Peter Till answers the original question. The addition below builds on his answer by showing how to automate the build and load process. I use Debian; the original question uses Ubuntu, which is built on Debian.
A script to find and mount the micro:bit
When code is loaded to the micro:bit, the board is dismounted from the system. So each time you have new code to load, you have to remount the board.
I modified a script to find and mount the micro:bit.
#!/bin/bash
BASEPATH="/media/$(whoami)/"
MICRO="MICROBIT"
if [ $# -eq 0 ]
then
    echo "no argument supplied, use 'mount' or 'unmount'"
    exit 1
fi
if [ "$1" == "--help" ]
then
    echo "mounts or unmounts a BBC micro:bit"
    echo "args: mount - mount the microbit, unmount - unmount the microbit"
fi
# how many MICRO found in udisksctl dump
RESULTS=$(udisksctl dump | grep IdLabel | grep -c -i $MICRO)
case "$RESULTS" in
    0 ) echo "no $MICRO found in 'udisksctl dump'"
        exit 0
        ;;
    1 ) DEVICELABEL=$(udisksctl dump | grep IdLabel | grep -i $MICRO | cut -d ":" -f 2 | sed 's/^[ \t]*//')
        DEVICE=$(udisksctl dump | grep -i "IdLabel: \+$DEVICELABEL" -B 12 | grep " Device:" | cut -d ":" -f 2 | sed 's/^[ \t]*//')
        DEVICEPATH="$BASEPATH""$DEVICELABEL"
        echo "found one $MICRO, device: $DEVICE"
        if [[ -z $(mount | grep "$DEVICE") ]]
        then
            echo "$DEVICELABEL was unmounted"
            if [ "$1" == "mount" ]
            then
                udisksctl mount -b "$DEVICE"
                exit 0
            fi
        else
            echo "$DEVICELABEL was mounted"
            if [ "$1" == "unmount" ]
            then
                udisksctl unmount -b "$DEVICE"
                exit 0
            fi
        fi
        ;;
    * ) echo "more than one $MICRO found"
        ;;
esac
echo "exiting without doing anything"
echo "exiting without doing anything"
I alias this script to mm in my .bashrc file.
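For reference, a minimal sketch of such an alias (the path ~/bin/microbit-mount.sh is a hypothetical name for the script above; adjust it to wherever you saved the script):
# in ~/.bashrc; the script path is an assumption
alias mm='~/bin/microbit-mount.sh mount'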
Automate mounting the micro:bit and flashing the python file
I use the inotifywait command to run mm and then uflash to load the .py file I am working on. Each time the Python file is saved, the aliased command mm runs, followed by the uflash command.
while inotifywait -e modify <your_file>.py ; do mm && uflash <your_file>.py ; done
Okay, so elaborating on Peter Till's answer.
Firstly, you can use uflash:
uflash path/to/your/code.py
Or, you can use microfs:
ufs put path/to/main.py
Working Ubuntu 22.04 host CLI setup with Carlos Atencio's Docker to build your own firmware
After trying to set up the toolchain for a while, I finally decided to Google for a Docker image with the toolchain, and found https://github.com/carlosperate/docker-microbit-toolchain at this commit from Carlos Atencio, a Micro:bit Foundation employee, and that just worked:
# Get examples.
git clone https://github.com/bbcmicrobit/micropython
cd micropython
git checkout 7fc33d13b31a915cbe90dc5d515c6337b5fa1660
# Get Docker image.
docker pull ghcr.io/carlosperate/microbit-toolchain:latest
# Build setup to be run once.
docker run -v $(pwd):/home --rm ghcr.io/carlosperate/microbit-toolchain:latest yt target bbc-microbit-classic-gcc-nosd@https://github.com/lancaster-university/yotta-target-bbc-microbit-classic-gcc-nosd
docker run -v $(pwd):/home --rm ghcr.io/carlosperate/microbit-toolchain:latest make all
# Build one example.
docker run -v $(pwd):/home --rm ghcr.io/carlosperate/microbit-toolchain:latest \
tools/makecombinedhex.py build/firmware.hex examples/counter.py -o build/counter.hex
# Build all examples.
docker run -v $(pwd):/home --rm ghcr.io/carlosperate/microbit-toolchain:latest \
bash -c 'for f in examples/*; do b="$(basename "$f")"; echo $b; tools/makecombinedhex.py build/firmware.hex "$f" -o "build/${b%.py}.hex"; done'
And you can then flash the example you want to run with:
cp build/counter.hex "/media/$USER/MICROBIT/"
Some further comments at: Generating micropython + python code `.hex` file from the command line for the BBC micro:bit

Git pre-commit hook that automatically grants execution permissions (+x) on all .sh files which are being committed

I'm trying to solve the exact same problem illustrated here:
How to commit executable shell scripts with Git on Windows
"If you develop software involving shell scripts on Windows, which should also run on UNIX, you have a problem.
Windows filesystems like NTFS do not support UNIX permission bits.
Whenever you create new shell scripts on Windows, or rename existing ones (which may have been executable at the time of check-out), these won’t be executable. When you push the code, these scripts won’t run on a UNIX-based machine."
The pre-commit hook script proposed as a solution to the aforementioned problem is written in Python.
#!/usr/bin/env python
import subprocess

if __name__ == '__main__':
    # an argument list does not need (and, on POSIX, must not use) shell=True
    output = subprocess.check_output(["git", "ls-files", "-s", "--", "*.sh"]).decode("utf-8")  # type: str
    files_to_fix = []
    for line in output.splitlines():
        # Example for "line": '100644 82f6a7d558e1b38c8b47ec5084fe20f970f09981 0 test-update.sh'
        entry = line.replace('\t', ' ').split(" ", maxsplit=3)
        mode = entry[0][3:]  # strips the first 3 chars ("100") which we don't care about
        filename = entry[3]
        if mode == "644":
            files_to_fix.append(filename)
    for file_path in files_to_fix:
        # git update-index --chmod=+x script.sh
        subprocess.check_call(["git", "update-index", "--chmod=+x", file_path])
I'm not proficient enough in bash to rewrite it. Is it possible to achieve this in bash at all?
With a bash script hook:
#!/usr/bin/env bash

files_to_fix=()
while read -r -d '' mode _ _ file_path; do
    [[ $mode == *644 ]] && files_to_fix+=("$file_path")
done < <(git ls-files --stage -z '*.sh')

git update-index --chmod=+x -- "${files_to_fix[@]}"
Or with a POSIX shell:
#!/usr/bin/env sh

git ls-files --stage '*.sh' | while read -r mode _ _ file_path; do
    case $mode in
        *644) git update-index --chmod=+x -- "$file_path" ;;
        *) ;;
    esac
done
This one-liner uses find to identify files that don't have the execute bit set, instead of looking for permission 644, so it also works with unusual modes like 640 or even 200 (write-only!):
find . ! -perm -u+x -type f -name "*.sh" -exec git update-index --chmod=+x -- {} \;
Save it in .git/hooks/pre-commit (and make your hook executable!)
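Putting the one-liner into a complete hook file, a minimal sketch of what .git/hooks/pre-commit could contain:
#!/usr/bin/env sh
# grant +x in the index to every .sh file that lacks the execute bit
find . ! -perm -u+x -type f -name "*.sh" -exec git update-index --chmod=+x -- {} \;
followed by chmod +x .git/hooks/pre-commit so the hook itself can run.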

How to run my python script in parallel with another Java application on the same Linux box in GitLab CI?

For one GitLab CI runner
I have a jar file which needs to be continuously running on the GitLab Linux box, but since this is an application which runs continuously, the python script on the next line is not getting executed. How do I run the jar application and then the python script simultaneously?
.gitlab-ci.yml file:
pwd && ls -l
unzip ZAP_2.8.0_Core.zip && ls -l
bash scan.sh
python3 Report.py
The scan.sh file contains the code java -jar app.jar.
Since this application is continuously running, the fourth line, python3 Report.py, is not getting executed.
How do I make both of these run simultaneously without the .jar application stopping?
The immediate solution would probably be:
pwd && ls -l
echo "ls OK"
unzip ZAP_2.8.0_Core.zip && ls -l
echo "unzip + ls OK"
bash scan.sh &
scanpid=$!
echo "started scanpid with pid $scanpid"]
ps axuf | grep $scanpid || true
echo "ps + grep OK"
( python3 Report.py ; echo $? > report_status.txt ) || true
echo "report script OK"
kill $scanpid
echo "kill OK"
echo "REPORT STATUS = $(cat report_status.txt)"
test $(cat report_status.txt) -eq 0
In short:
- Start the java process in the background.
- Run your python code, remember its return status, and always return true.
- Kill the background process after running the python script.
- Check the status code of the python script.
Perhaps some of this is not necessary, as I never checked how GitLab CI deals with background processes spawned by its runners, so I take a conservative approach here:
- I remember the process id of the bash script so that I can kill it later.
- I ensure that the line running the python script always returns a 0 exit code, so that GitLab CI does not stop executing the next lines, but I remember the status code.
- Then I kill the bash script.
- Then I check whether the exit code of the python script was 0 or not, so that GitLab CI can properly determine whether the job succeeded. A standalone sketch of this idiom follows below.
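The subshell-plus-status-file idiom can be tried in isolation; a minimal sketch, where status.txt is an illustrative name and false stands in for a failing python3 Report.py:
( false ; echo $? > status.txt ) || true   # the line as a whole still succeeds
echo "captured status: $(cat status.txt)"  # prints 1
test "$(cat status.txt)" -eq 0             # only this final check fails the job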
Another minor comment (not related to your question)
I don't really understand why you write
unzip ZAP_2.8.0_Core.zip && ls -l
instead of
unzip ZAP_2.8.0_Core.zip ; ls -l
If you expect the unzip command to fail, you could just write
unzip ZAP_2.8.0_Core.zip
ls -l
and GitLab CI would abort automatically before executing ls -l.
I also added many echo statements for better debugging and error analysis; you might remove them in your final solution.
To run the two scripts one after the other, you can add & to the end of the line that is blocking. That will make it run in the background.
Either do
bash scan.sh &, or add & to the end of the line calling the jar file within scan.sh, as sketched below.
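For instance, a hedged sketch of what scan.sh could look like with the jar backgrounded (app.jar is the name from the question; nothing else needs to change):
#!/bin/bash
# start the scanner in the background so the CI job can continue
# immediately with python3 Report.py
java -jar app.jar &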

Secure Copy (scp) the latest file which arrives at a given folder?

I need to write a script in bash/python to scp the latest file which arrives at a given folder. That is, I am continuously getting files into a folder, say /home/ram/Khopo/, and I need to scp each one to xxx@192.168.21.xxx in /home/xxx/khopo/.
I googled and got this result
file_to_copy=`ssh username@hostname 'ls -1r | head -1'`
echo copying $file_to_copy ...
scp username@hostname:$file_to_copy /local/path
But I want to know whether it is possible to do this so that it runs only when a new file arrives at the source (/home/ram/Khopo/), waiting for a file to reach the folder and copying it as soon as it arrives.
I would try to sync the remote directory. These should give you a good overview of how to do that:
rsync:
https://askubuntu.com/a/105860
https://www.atlantic.net/hipaa-compliant-cloud-storage/how-to-use-rsync-copy-sync-files-servers/
or other tools for syncing:
https://en.wikipedia.org/wiki/Comparison_of_file_synchronization_software
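For example, a minimal rsync sketch using the paths and host from the question (the -avz flags are a common choice, not the only one):
rsync -avz /home/ram/Khopo/ xxx@192.168.21.xxx:/home/xxx/khopo/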
As others have suggested, you can use inotifywait; below is an example of what you could do in bash:
#!/bin/bash
echo "Enter ssh password"
IFS= read -rs password # Read the password in a hidden way
inotifywait -m -e create "/folder_where_files_arrive" | while read line
do
    file_to_copy=$(echo $line | cut -d" " -f1,3 --output-delimiter="")
    echo copying $file_to_copy ...
    if [[ -d $file_to_copy ]]; then # is a directory: push it to the remote host
        sshpass -p "$password" scp -r "$file_to_copy" username@hostname:/remote/path
    elif [[ -f $file_to_copy ]]; then # is a file
        sshpass -p "$password" scp "$file_to_copy" username@hostname:/remote/path
    fi
done
Then you would ideally run this script in the background, e.g.:
nohup script.sh &
You can install sshpass on Ubuntu/Debian with:
apt install sshpass

Advanced Scripting inside a Dockerfile

I am trying to create a Docker image/container that will run on Windows 10/Linux and test a REST API. Is it possible to embed the function (from my .bashrc file) inside the Dockerfile? The pytest function calls pylint before running the .py file. If the rating is not 10/10, it prompts the user to fix the code and exits. This works fine on Linux.
Basically, here is the pseudo-code inside the Dockerfile from which I am attempting to build an image.
------------------------------------------
FROM ubuntu:x.xx
install python
install pytest
install pylint
copy test_file to the respective folder
execute pytest test_file_name.py
if the rating is not 10/10:
    prompt the user to resolve the rating issue and exit
------------here is the partial code snippet from the func------------------------
function pytest () {
    argument1="$1"
    # Extract the path and file name for pylint when a method name is passed
    pathfilename=$(echo "${argument1}" | sed 's/::.*//')
    clear && printf '\e[3J'
    output=$(docker exec -t orch-$USER pylint -r n "${pathfilename}")
    if echo "${output}" | grep 'warning.*error' &>/dev/null ||
       echo "${output}" | egrep 'warning|convention' &>/dev/null
    then
        echo "${output}" | sed 's/\(warning\)/\o033[33m\1\o033[39m/;s/\(errors\|error\)/\o033[31m\1\o033[39m/'
        YEL='\033[0;1;33m'
        NC='\033[0m'
        echo -e "\n ${YEL}Fix module as per pylint/PEP8 messages to achieve 10/10 rating before pushing to github\n${NC}"
    fi
}
Another option I can think of:
Step 1] Build the image (using the Dockerfile) with all the required software.
Step 2] In a .py file, add the call to execute pytest with the logic from the function.
Your thoughts?
You can turn that function into a standalone shell script. (Pretty much by just removing the function wrapper, and taking out the docker exec part of the tool invocation.) Once you've done that, you can COPY the shell script into your image, and once you've done that, you can RUN it.
...
COPY pylint-enforcer.sh .
RUN chmod +x ./pylint-enforcer.sh \
    && ./pylint-enforcer.sh
...
It looks like pylint will produce a non-zero exit code if it emits any messages. For the purposes of a Dockerfile, it may be enough to simply RUN pylint -r n .; if it prints anything, it will return a non-zero exit code, which docker build will interpret as failure and not proceed.
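In other words, the whole gate can be a single shell command executed by a RUN step; a minimal sketch (the *.py glob and the error message are assumptions):
# any pylint message yields a non-zero exit code, which fails the docker build
pylint -r n *.py || { echo "Fix pylint findings before building"; exit 1; }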
You might consider whether you'll ever want the ability to build and push an image of code that isn't absolutely perfect (during a production-down event, perhaps), and whether you want to require root-level permissions to run simple code-validity tools (if you can run docker commands at all, you can edit arbitrary files on the host as root). I'd suggest running these tools in a non-Docker virtual environment during your CI process, and neither placing them in your Dockerfile nor depending on docker exec to run them.
