Creating tests to maintain minimum coverage? - python

Is there a way to create a Makefile/script that will fail if the coverage reported by the coverage.py library falls below a certain threshold? Say 80%.

For coverage.py you can use the fail_under option.
First, gather coverage data by using the coverage run command, then run the coverage report command.
Option 1.
Run the coverage report command directly:
coverage report --fail-under=80
Option 2.
Use a configuration file by defining the value in the report section of .coveragerc
[report]
fail_under = 80
Then run the coverage report command
coverage report
It will return a non-zero exit code if the coverage is below the fail_under value.
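For the Makefile angle of the question, a minimal target tying both steps together might look like this sketch (it assumes tests are discoverable with unittest, as in the Makefile further down this thread; recipe lines must be indented with real tabs):
# Fail `make check-coverage` when total coverage drops below 80%.
check-coverage:
	coverage run -m unittest discover
	coverage report --fail-under=80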

If you want something more fine-grained than an overall percentage threshold, you can try this coverage goals program I threw together: https://nedbatchelder.com/blog/202111/coverage_goals.html

I figured it out with the following. Feel free to copy if it fits your needs!
The Makefile goes as follows:
test: $(TEST_DIRECTORY)/*/*
	rm -f $(TEST_OUTPUT_DIRECTORY)/coverage_tests.log
	cd $(TEST_DIRECTORY)/inference && coverage run -m unittest discover
	for file in $^ ; do \
		TEST_FILE=$$(echo $${file} | grep -E -o test_.*) ; \
		FILE=$$(echo $${TEST_FILE} | sed 's/test_//') ; \
		if [ -n "$${FILE}" ] ; then \
			cd $(TEST_DIRECTORY)/inference && coverage report --include $(INFERENCE_DIRECTORY)/$${FILE} > $(TEST_OUTPUT_DIRECTORY)/coverage_temp.log ; \
			cd $(TEST_DIRECTORY) && ./coverage_check.sh $${FILE} < $(TEST_OUTPUT_DIRECTORY)/coverage_temp.log >> $(TEST_OUTPUT_DIRECTORY)/coverage_tests.log ; \
			rm -f $(TEST_OUTPUT_DIRECTORY)/coverage_temp.log ; \
		fi ; \
	done
	rm -f $(TEST_DIRECTORY)/inference/.coverage
and it utilizes the following coverage check script:
#!/bin/bash
# bash (not plain sh) is needed for the [[ ]] pattern match below.
# $1 is the file being checked; the coverage report is read from stdin.
while read line; do
    for word in $line; do
        if [[ "$word" == *"%"* ]]; then
            COVERAGE=$(echo "$word" | sed 's/%//g')
            # -ge so that hitting the 80% threshold exactly still passes
            if [ "$COVERAGE" -ge 80 ]
            then
                echo "$1: PASSED $word"
                break 2
            else
                echo "$1: FAILED $word"
                break 2
            fi
        fi
    done
done
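As wired up in the Makefile above, the script takes the file name as its first argument and reads the coverage report on standard input; for example (my_module.py is a placeholder):
coverage report --include my_module.py > coverage_temp.log
./coverage_check.sh my_module.py < coverage_temp.log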


Is it possible to compile microbit python code locally?

I am running Ubuntu 22.04 with xorg.
I need to find a way to compile microbit python code locally to a firmware hex file. Firstly, I followed the guide here https://microbit-micropython.readthedocs.io/en/latest/devguide/flashfirmware.html.
After a lot of debugging, I got to this point: https://pastebin.com/MGShD31N
However, the file platform.h does exist.
sawntoe@uwubuntu:~/Documents/Assignments/2022/TVP/micropython$ ls /home/sawntoe/Documents/Assignments/2022/TVP/micropython/yotta_modules/mbed-classic/api/platform.h
/home/sawntoe/Documents/Assignments/2022/TVP/micropython/yotta_modules/mbed-classic/api/platform.h
sawntoe@uwubuntu:~/Documents/Assignments/2022/TVP/micropython$
At this point, I gave up on this and tried using Mu editor with the AppImage. However, Mu requires wayland, and I am on xorg.
Does anyone have any idea if this is possible? Thanks.
Mu and the uflash command both work with .hex files that embed your Python code. Using uflash you can do the following, for example:
uflash my_script.py
I think what you want is possible, but it's harder than just using their web Python editor: https://python.microbit.org/v/2
Peter Till answers the original question. The additional material below adds to this answer by showing how to automate the build-and-load process. I use Debian; the original question states that Ubuntu, which is built on Debian, is used.
A script to find and mount the micro:bit
When code is loaded to the micro:bit, the board is dismounted from the system. So each time you have new code to load, you have to remount the board.
I modified a script to find and mount the micro:bit.
#!/bin/bash
BASEPATH="/media/$(whoami)/"
MICRO="MICROBIT"

if [ $# -eq 0 ]
then
    echo "no argument supplied, use 'mount' or 'unmount'"
    exit 1
fi
if [ $1 == "--help" ]
then
    echo "mounts or unmounts a BBC micro:bit"
    echo "args: mount - mount the microbit, unmount - unmount the microbit"
fi

# how many MICRO found in udisksctl dump
RESULTS=$(udisksctl dump | grep IdLabel | grep -c -i $MICRO)

case "$RESULTS" in
    0 ) echo "no $MICRO found in 'udisksctl dump'"
        exit 0
        ;;
    1 ) DEVICELABEL=$(udisksctl dump | grep IdLabel | grep -i $MICRO | cut -d ":" -f 2 | sed 's/^[ \t]*//')
        DEVICE=$(udisksctl dump | grep -i "IdLabel: \+$DEVICELABEL" -B 12 | grep " Device:" | cut -d ":" -f 2 | sed 's/^[ \t]*//')
        DEVICEPATH="$BASEPATH""$DEVICELABEL"
        echo "found one $MICRO, device: $DEVICE"
        if [[ -z $(mount | grep "$DEVICE") ]]
        then
            echo "$DEVICELABEL was unmounted"
            if [ $1 == "mount" ]
            then
                udisksctl mount -b "$DEVICE"
                exit 0
            fi
        else
            echo "$DEVICELABEL was mounted"
            if [ $1 == "unmount" ]
            then
                udisksctl unmount -b "$DEVICE"
                exit 0
            fi
        fi
        ;;
    * ) echo "more than one $MICRO found"
        ;;
esac

echo "exiting without doing anything"
I alias this script to mm in my .bashrc file.
Automate mounting the micro:bit and flashing the python file
I use the inotifywait command to run mm and then uflash to load the .py file I am working on. Each time the python file is saved, the aliased command mm runs, followed by the uflash command.
while inotifywait -e modify <your_file>.py ; do mm && uflash <your_file>.py ; done
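Some editors save files in a way that fires several modify events per save; if you see repeated flashes, watching close_write instead may behave better (an assumption about your editor, not part of the original recipe):
while inotifywait -e close_write <your_file>.py ; do mm && uflash <your_file>.py ; done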
Okay, so elaborating on Peter Till's answer.
Firstly, you can use uflash:
uflash path/to/your/code.py
Or, you can use microfs:
ufs put path/to/main.py
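microfs can also list files on the board and pull them back off, which makes for a quick sanity check after copying:
ufs ls
ufs get main.py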
Working Ubuntu 22.04 host CLI setup with Carlos Atencio's Docker to build your own firmware
After trying to set up the toolchain for a while, I finally decided to Google for a Docker image with the toolchain, and found https://github.com/carlosperate/docker-microbit-toolchain at this commit from Carlos Atencio, a Micro:Bit Foundation employee, and that just worked:
# Get examples.
git clone https://github.com/bbcmicrobit/micropython
cd micropython
git checkout 7fc33d13b31a915cbe90dc5d515c6337b5fa1660
# Get Docker image.
docker pull ghcr.io/carlosperate/microbit-toolchain:latest
# Build setup to be run once.
docker run -v $(pwd):/home --rm ghcr.io/carlosperate/microbit-toolchain:latest yt target bbc-microbit-classic-gcc-nosd@https://github.com/lancaster-university/yotta-target-bbc-microbit-classic-gcc-nosd
docker run -v $(pwd):/home --rm ghcr.io/carlosperate/microbit-toolchain:latest make all
# Build one example.
docker run -v $(pwd):/home --rm ghcr.io/carlosperate/microbit-toolchain:latest \
tools/makecombinedhex.py build/firmware.hex examples/counter.py -o build/counter.hex
# Build all examples.
docker run -v $(pwd):/home --rm ghcr.io/carlosperate/microbit-toolchain:latest \
bash -c 'for f in examples/*; do b="$(basename "$f")"; echo $b; tools/makecombinedhex.py build/firmware.hex "$f" -o "build/${b%.py}.hex"; done'
And you can then flash the example you want to run with:
cp build/counter.hex "/media/$USER/MICROBIT/"
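Since this is a plain copy onto a mounted filesystem, following it with sync helps make sure the write has actually reached the board before you unplug it:
cp build/counter.hex "/media/$USER/MICROBIT/" && sync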
Some further comments at: Generating micropython + python code `.hex` file from the command line for the BBC micro:bit

Implement Git hook - prePush and preCommit

Could you please show me how to implement a git hook?
Before committing, the hook should run a python script. Something like this:
cd c:\my_framework & run_tests.py --project Proxy-Tests\Aeries \
--client Aeries --suite <Commit_file_Name> --dryrun
If the dry run fails then commit should be stopped.
You need to tell us in what way the dry run will fail. Will there be an output .txt with errors? Will there be an error displayed on the terminal?
In any case, you must name the pre-commit script pre-commit and save it in the .git/hooks/ directory.
Since your dry run script seems to be in a different path than the pre-commit script, here's an example that finds and runs your script.
I assume from the backslash in your path that you are on a Windows machine, and I also assume that your dry-run script is contained in the same project where you have git installed, in a folder called tools (of course you can change this to your actual folder).
#!/bin/sh
# Path of the folder containing your python script
FILE_PATH=tools/
# Get relative path of the root directory of the project
rdir=`git rev-parse --git-dir`
rel_path="$(dirname "$rdir")"
# Cd to that path and run the file.
cd $rel_path/$FILE_PATH
echo "Running dryrun script..."
python run_tests.py
# From that point on you need to handle the dry run error/s.
# For demonstration purposes I'll assume that an output.txt file that holds
# the result is produced.
# Extract the result from the output file
final_res="tac output.txt | grep -m 1 . | grep 'error'"
echo -e "--------Dry run result---------\n"${final_res}
# If a warning and/or error exists abort the commit
eval "$final_res" | while read -r line; do
    if [ "$line" != "0" ]; then
        echo -e "Dry run failed.\nAborting commit..."
        exit 1
    fi
done || exit 1  # the exit above happens in a pipeline subshell; propagate it so the hook really aborts
Now every time you run git commit -m the pre-commit script will run the dry run file and abort the commit if any errors have occurred, keeping your files in the staging area.
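One detail that is easy to miss: git only runs hooks that are executable, so after saving the script remember to run:
chmod +x .git/hooks/pre-commit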
I have implemented this in my hook. Here is the code snippet.
#!/bin/sh
# Path of your python script
RUN_TESTS="run_tests.py"
FRAMEWORK_DIR="/my-framework/"
CUR_DIR=`echo ${PWD##*/}`
# Get full path of the root directory of the project under RUN_TESTS_PY_FILE
rDIR=`git rev-parse --git-dir --show-toplevel | head -2 | tail -1`
OneStepBack=/../
CD_FRAMEWORK_DIR="$rDIR$OneStepBack$FRAMEWORK_DIR"
# Find list of modified files - to be committed
LIST_OF_FILES=`git status --porcelain | awk -F" " '{print $2}' | grep ".txt" `
for FILE in $LIST_OF_FILES; do
    cd $CD_FRAMEWORK_DIR
    python $RUN_TESTS --dryrun --project $CUR_DIR/$FILE
    OUT=$?
    if [ $OUT -eq 0 ];then
        continue
    else
        exit 1  # 'return' is only valid inside a function; exit aborts the commit
    fi
done

No such file or directory in find running .sh

Running this on osx...
cd ${BUILD_DIR}/mydir && for DIR in $(find ./ '.*[^_].py' | sed 's/\/\//\//g' | awk -F "/" '{print $2}' | sort | uniq | grep -v .py); do
    if [ -f $i/requirements.txt ]; then
        pip install -r $i/requirements.txt -t $i/
    fi
    cd ${DIR} && zip -r ${DIR}.zip * > /dev/null && mv ${DIR}.zip ../../ && cd ../
done
cd ../
error:
(env) ➜ sh package_lambdas.sh find: .*[^_].py: No such file or directory
why?
find takes as arguments a list of directories to search. You provided what appears to be a regular expression. Because there is no directory named (literally) .*[^_].py, find returns an error.
Below I have revised your script to correct that mistake (if I understand your intention). Because I see so many ill-written shell scripts these days, I've taken the liberty of "traditionalizing" it. Please see if you don't also find it more readable.
Changes:
- use #!/bin/sh, guaranteed to be on a Unix-like system. Faster than bash, unless (like OS X) it is bash.
- use lower case for variable names to distinguish from system variables (and not hide them).
- eschew braces for variables (${var}); they're not needed in the simple case
- do not pipe output to /usr/bin/true; route it to /dev/null if that's what you mean
- rm -f by definition cannot fail; if you meant || true, it's superfluous
- put then and do on separate lines, easier to read, and that's how the Bourne shell language was meant to be used
- Let && and || serve as line-continuation, so you can see what's happening step by step
Other changes I would suggest:
- Use a subshell when changing the working directory temporarily. When it terminates, the working directory is restored automatically (retained by the parent), saving you the cd .. step, and errors; see the sketch after the revised script below.
- Use set -e to cause the script to terminate on error. For expected errors, use || true explicitly.
- Change grep .py to grep '\.py$', just for good measure.
- To avoid Tilting Matchstick Syndrome, use something other than / as a sed substitute delimiter, e.g., sed 's://:/:g'. But sed could be avoided altogether with awk -F '/+' '{print $2}'.
Revised version:
#! /bin/sh
src_dir=lambdas
build_dir=bin

mkdir -p $build_dir/lambdas
rm -rf $build_dir/*.zip
cp -r $src_dir/* $build_dir/lambdas

#
# The sed is a bit complicated to be osx / linux cross compatible :
# ( .//run.sh vs ./run.sh
#
cd $build_dir/lambdas &&
for L in $(find . -exec grep -l '.*[^_].py' {} + |
           sed 's/\/\//\//g' |
           awk -F "/" '{print $2}' |
           sort |
           uniq |
           grep -v .py)
do
    if [ -f $L/requirements.txt ]
    then
        echo "Installing requirements"
        pip install -r $L/requirements.txt -t $L/
    fi
    cd $L &&
        zip -r $L.zip * > /dev/null &&
        mv $L.zip ../../ &&
        cd ../
done
cd ../
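To illustrate the subshell suggestion from the list above (a sketch, not part of the revised script), the per-directory work can run in parentheses so the trailing cd ../ disappears:
# The subshell gets its own working directory; the parent's is untouched.
( cd "$L" && zip -r "$L.zip" * > /dev/null && mv "$L.zip" ../../ )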
The find(1) manpage says its args are [path ...] [expression], where "expression" consists of "primaries" and "operands" (-flags). '.*[^_].py' doesn't look like any expression, so it's being interpreted as a path, and it's reporting that there is no file named '.*[^_].py' in the working directory.
Perhaps you meant:
find ./ -regex '.*[^_].py'
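If the intent behind the pattern is simply "Python files whose name doesn't end in an underscore", plain find tests may be easier to get right than a regex (an alternative reading, not what the original script did):
# Match *.py but skip names ending in _.py
find . -name '*.py' ! -name '*_.py'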

how to add getopt options in a bash script

I have written a bash script that consists of multiple Unix commands and Python scripts. The goal is to make a pipeline for detecting long non-coding RNA from a certain input. Ultimately I would like to turn this into an 'app' and host it on some bioinformatics website. One problem I am facing is using the getopt tools in bash. I couldn't find a good tutorial that I understand clearly. In addition, any other comments related to the code are appreciated.
#!/bin/bash
if [ "$1" == "-h" ]
then
    echo "Usage: sh $0 cuffcompare_output reference_genome blast_file"
    exit
else
    wget https://github.com/TransDecoder/TransDecoder/archive/2.0.1.tar.gz && tar xvf 2.0.1.tar.gz && rm 2.0.1.tar.gz
    makeblastdb -in $3 -dbtype nucl -out $3.blast.out
    grep '"u"' $1 | \
        gffread -w transcripts_u.fa -g $2 - && \
        python2.7 get_gene_length_filter.py transcripts_u.fa transcripts_u_filter.fa && \
        TransDecoder-2.0.1/TransDecoder.LongOrfs -t transcripts_u_filter.fa
    sed 's/ .*//' transcripts_u_filter.fa | grep ">" | sed 's/>//' > transcripts_u_filter.fa.genes
    cd transcripts_u_filter.fa.transdecoder_dir
    sed 's/|.*//' longest_orfs.cds | grep ">" | sed 's/>//' | uniq > longest_orfs.cds.genes
    grep -v -f longest_orfs.cds.genes ../transcripts_u_filter.fa.genes > longest_orfs.cds.genes.not.genes
    sed 's/^/>/' longest_orfs.cds.genes.not.genes > temp && mv temp longest_orfs.cds.genes.not.genes
    python ../extract_sequences.py longest_orfs.cds.genes.not.genes ../transcripts_u_filter.fa longest_orfs.cds.genes.not.genes.fa
    blastn -query longest_orfs.cds.genes.not.genes.fa -db ../$3.blast.out -out longest_orfs.cds.genes.not.genes.fa.blast.out -outfmt 6
    python ../filter_sequences.py longest_orfs.cds.genes.not.genes.fa.blast.out longest_orfs.cds.genes.not.genes.fa.blast.out.filtered
    grep -v -f longest_orfs.cds.genes.not.genes.fa.blast.out.filtered longest_orfs.cds.genes.not.genes.fa > lincRNA_final.fa
fi
Here is how I run it:
sh test.sh cuffcompare_out_annot_no_annot.combined.gtf /mydata/db/Brapa_sequence_v1.2.fa TE_RNA_transcripts.fa
If you wanted the call to be:
test -c cuffcompare_output -r reference_genome -b blast_file
You would have something like:
#!/bin/bash
while getopts ":b:c:hr:" opt; do
    case $opt in
        b)
            blastfile=$OPTARG
            ;;
        c)
            comparefile=$OPTARG
            ;;
        h)
            echo "USAGE : test -c cuffcompare_output -r reference_genome -b blast_file"
            ;;
        r)
            referencegenome=$OPTARG
            ;;
        \?)
            echo "Invalid option: -$OPTARG" >&2
            exit 1
            ;;
        :)
            echo "Option -$OPTARG requires an argument." >&2
            exit 1
            ;;
    esac
done
In the string ":b:c:hr:",
- the first ":" tells getopts that we'll handle any errors,
- subsequent letters are the allowable flags. If the letter is followed by a ':', then getopts will expect that flag to take an argument, and supply that argument as $OPTARG
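As a sketch of what typically follows the getopts loop (not shown above), you would discard the parsed options and check that the required values were all supplied:
# Drop the options getopts consumed, leaving positional arguments (if any).
shift $((OPTIND - 1))

# Refuse to run unless all three required values are present.
if [ -z "$blastfile" ] || [ -z "$comparefile" ] || [ -z "$referencegenome" ]; then
    echo "USAGE : test -c cuffcompare_output -r reference_genome -b blast_file" >&2
    exit 1
fi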

How to toggle comment on a line matching a pattern?

I'm working on a Django project where I've created a makefile task to reset my database setup (i.e. reset-db).
However, as I just want to syncdb the core application, I need to be able to toggle the comment on the legacy app line: commenting prior to the syncdb and uncommenting after (so other operations happen normally).
Default state
INSTALLED_APPS = (
    …
    'django_extensions',
    'core',
    'legacy'  # #reset-db
)
Goal state
INSTALLED_APPS = (
    …
    'django_extensions',
    'core',
    # 'legacy'  # #reset-db
)
Makefile reset-db
The task is currently
reset-db:
	# cmd to comment line
	DJANGO_SETTINGS_MODULE=${SETTINGS} sudo -u postgres -H dropdb evrpa \
	&& sudo -u postgres -H createdb evrpa -O elopez; \
	./manage.py syncdb --noinput --database=default;
	# cmd to UNcomment line
So what's the sed or awk command to do that?
Just take a copy of the original file, modify it, use it, then restore it:
mv ${SETTINGS_PY} ${SETTINGS_PY}.tmp &&
sed '/#reset-db/s/^/#/' ${SETTINGS_PY}.tmp > ${SETTINGS_PY} &&
./manage.py syncdb --noinput --database=default &&
mv ${SETTINGS_PY}.tmp ${SETTINGS_PY}
That way you don't have to come up with a script to try to get the modified file back to its original content since you have a copy of that original file to restore from.
I assume you have a good reason for not quoting your variables and so I have also left them unquoted.
I added && at the end of every line because you always want to test for the previous command succeeding before executing the next command. If that's not the right syntax to do that in your makefile, change it as appropriate.
Here is how I've done it
Commenting
awk '/#reset-db/{ $0="# " $0 } {print}' ${SETTINGS_PY} > ${SETTINGS_PY}.tmp
For lines matching the pattern /#reset-db/, I updated the line content by prefixing # to it using { $0="# " $0 }.
Then I print all lines with {print}.
Uncommenting
awk '/^# .*#reset-db/{ $0=gensub(/^#(.*)/, "\\1", "", $0) } {print}' ${SETTINGS_PY} > ${SETTINGS_PY}.tmp
For lines matching the pattern /^# .*#reset-db/, I updated the current line content by removing # using { $0=gensub(/^#(.*)/, "\\1", "", $0) }.
Then I print all lines with {print}.
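Note that gensub is a GNU awk (gawk) extension. If your awk lacks it, an equivalent pair of sed commands (my translation, not part of this answer) performs the same toggle:
# Comment: prefix '# ' to lines tagged #reset-db
sed 's/^\(.*#reset-db\)/# \1/' ${SETTINGS_PY} > ${SETTINGS_PY}.tmp
# Uncomment: strip the leading '# ' again
sed 's/^# \(.*#reset-db\)/\1/' ${SETTINGS_PY} > ${SETTINGS_PY}.tmp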
Full task
Note: You need to escape the $ in makefile scripts with another $ (e.g. $0 → $$0).
reset-db:
	DJANGO_SETTINGS_MODULE=${SETTINGS} sudo -u postgres -H dropdb evrpa \
	&& sudo -u postgres -H createdb evrpa -O elopez; \
	awk '/#reset-db/{ $$0="# " $$0 } {print}' ${SETTINGS_PY} > ${SETTINGS_PY}.tmp \
	&& mv ${SETTINGS_PY}{.tmp,}
	./manage.py syncdb --noinput --database=default;
	awk '/^# .*#reset-db/{ $$0=gensub(/^#(.*)/, "\\1", "", $$0) } {print}' ${SETTINGS_PY} > ${SETTINGS_PY}.tmp \
	&& mv ${SETTINGS_PY}{.tmp,}
