Subprocess bash script: command not found [duplicate] - python

This question already has answers here:
Getting "command not found" error in bash script
(6 answers)
Closed 1 year ago.
The following Python script:
import subprocess

def run_build(path):
    cmd = path + '/build.sh'
    p = subprocess.call(cmd)
The following bash script (build.sh) executes two other scripts and then docker-compose:
#!/bin/bash
cd "${0%/*}"
echo $(./create_env.sh)
echo $(./set_webhook.sh)
echo $(docker-compose up -d --build)
create_env.sh:
#!/bin/bash
PORT=$(comm -23 <(seq 7000 8000 | sort) <(ss -tan | awk '{print $4}' | cut -d':' -f2 | grep "[0-9]\{1,5\}" | sort -u) | head -n 1)
MONGODB_PORT=$(comm -23 <(seq 27017 27100 | sort) <(ss -tan | awk '{print $4}' | cut -d':' -f2 | grep "[0-9]\{1,5\}" | sort -u) | head -n 1)
destdir=$PWD/.env
echo >> "$destdir"
echo "APP_PORT=$PORT" >> "$destdir"
echo "MONGODB_PORT=$MONGODB_PORT" >> "$destdir"
The output is:
Path: /home/navka/Environments/teststartupservicebot/build.sh
./create_env.sh: line 2: head: command not found
./create_env.sh: line 2: comm: command not found
./create_env.sh: line 2: seq: command not found
...
Where is my problem? Thanks!

I would say your first step would be to place:
echo $PATH
as the first line following the #!/bin/bash shebang line in create_env.sh, to ensure the path is set up.
Make sure it contains the directory for those executables (probably /usr/bin), which you can find by running (for example) which comm or type comm from a command line.
If it doesn't contain the relevant directory, that explains why it cannot find the executables. In that case, you will need to discover why they're not there.
Perhaps the simplest fix would be to just add something like:
PATH="${PATH}:/usr/bin"
to your environment setup script. This will ensure the path does have the relevant entry.
And, as an aside, if those lines in build.sh are meant to be cumulative (so that, for example, set_webhook.sh requires the environment changes made by create_env.sh), you should be aware that these are currently run in sub-shells, meaning changes from one will not persist after the sub-shell exits.
That's not necessarily a problem here, since you persist them to a file, which may be read by the subsequent steps.
If you do need the changes in the environment for subsequent steps (as opposed to a file), you will need to source them in the context of the current shell, such as with:
. ./create_env.sh
As I said, this may not be necessary but you may want to look into it, just in case.
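Putting those suggestions together, build.sh might end up looking something like this (a sketch, assuming /usr/bin is the missing directory and that later steps want the variables in the environment rather than only in the .env file):
#!/bin/bash
cd "${0%/*}"
# Make sure standard utilities are reachable (assumption: they live in /usr/bin)
export PATH="${PATH}:/usr/bin"
# Source rather than execute, so any environment changes persist in this shell
. ./create_env.sh
./set_webhook.sh
docker-compose up -d --build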

Related

How to filter shell output to only the number with decimals?

I have a CI/CD config which requires the Python version to be set as the default via pyenv. I want the python2 -V output to show only, for example, 2.7.18. But rather than showing 2.7.18, it shows the full text Python 2.7.18.
However, when I use python -V with python3, it shows the correct and current python3 version (3.9.0).
I use this code to try showing numbers only : $(python -V | grep -Eo '[0-9]\.[0-9]\.[10-19]').
And to set default with pyenv : pyenv global $(python3 -V | grep -Eo '[0-9]\.[0-9]\.[10-19]') $(python -V | grep -Eo '[0-9]\.[0-9]\.[10-19]')
So pyenv $(python3 version) $(python2 version)
[Screenshot of the wrong output]
Thanks!
A simple way would be to just replace the string Python with the empty string, if it exists.
Here a quick one-liner
python -V 2>&1 | sed -e "s/Python//g" | xargs
That prints the python version: 2>&1 redirects stderr to stdout (Python 2 writes its version to stderr), sed replaces "Python" with the empty string, and xargs without parameters returns the trimmed input string.
Here are a few more ways to get the version number:
# Print 1 word per line, the select the last word:
python -V 2>&1 | xargs -n1 | tail -n1
# Print the last word:
python -V 2>&1 | perl -lane 'print $F[-1];'
# Print the first stretch of 1 or more { digits or periods }:
python -V 2>&1 | grep -Po '[\d.]+'
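Applied to the original goal, the cleaned-up version strings can be fed straight to pyenv global (a sketch; the 2>&1 matters because Python 2 prints its version to stderr):
pyenv global "$(python3 -V 2>&1 | grep -Po '[\d.]+')" "$(python2 -V 2>&1 | grep -Po '[\d.]+')"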

Implement Git hook - prePush and preCommit

Could you please show me how to implement a git hook?
Before committing, the hook should run a python script. Something like this:
cd c:\my_framework & run_tests.py --project Proxy-Tests\Aeries \
--client Aeries --suite <Commit_file_Name> --dryrun
If the dry run fails then commit should be stopped.
You need to tell us in what way the dry run will fail. Will there be an output .txt with errors? Will there be an error displayed on the terminal?
In any case you must name the pre-commit script pre-commit and save it in the .git/hooks/ directory.
Since your dry run script seems to be in a different path than the pre-commit script, here's an example that finds and runs your script.
I assume from the backslash in your path that you are on a Windows machine, and I also assume that your dry-run script is contained in the same project where you have git installed, in a folder called tools (of course you can change this to your actual folder).
#!/bin/sh
# Directory containing your python script
SCRIPT_DIR=tools/
# Get the relative path of the root directory of the project
rdir=$(git rev-parse --git-dir)
rel_path="$(dirname "$rdir")"
# cd to that path and run the file
cd "$rel_path/$SCRIPT_DIR"
echo "Running dryrun script..."
python run_tests.py
# From that point on you need to handle the dry run error/s.
# For demonstration purposes I'll assume that an output.txt file
# holding the result is produced.
# Extract the result (the last non-empty line mentioning 'error') from the output file
final_res=$(tac output.txt | grep -m 1 . | grep 'error')
echo -e "--------Dry run result---------\n${final_res}"
# If a warning and/or error exists, abort the commit.
# Test the variable directly: an exit inside a pipeline's while-loop
# would only leave the sub-shell, not abort the hook.
if [ -n "$final_res" ]; then
    echo -e "Dry run failed.\nAborting commit..."
    exit 1
fi
Now every time you run git commit the pre-commit script will run the dry run file and abort the commit if any errors have occurred, keeping your files in the staging area.
I have implemented this in my hook. Here is the code snippet.
#!/bin/sh
# Name of your python script and the framework directory
RUN_TESTS="run_tests.py"
FRAMEWORK_DIR="/my-framework/"
CUR_DIR=$(basename "$PWD")
# Get the full path of the root directory of the project
rDIR=$(git rev-parse --show-toplevel)
OneStepBack=/../
CD_FRAMEWORK_DIR="$rDIR$OneStepBack$FRAMEWORK_DIR"
# Find the list of modified files - to be committed
LIST_OF_FILES=$(git status --porcelain | awk -F" " '{print $2}' | grep ".txt")
for FILE in $LIST_OF_FILES; do
    cd "$CD_FRAMEWORK_DIR"
    python "$RUN_TESTS" --dryrun --project "$CUR_DIR/$FILE"
    OUT=$?
    if [ $OUT -eq 0 ]; then
        continue
    else
        # 'return' is only valid inside a function; exit aborts the hook
        exit 1
    fi
done
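Whichever variant you use, remember that Git only runs a hook if the file is executable:
chmod +x .git/hooks/pre-commit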

Script works differently when run from the terminal and when run from Python

I have a short bash script foo.sh
#!/bin/bash
cat /dev/urandom | tr -dc 'a-z1-9' | fold -w 4 | head -n 1
When I run it directly from the shell, it runs fine, exiting when it is done
$ ./foo.sh
m1un
$
but when I run it from Python
$ python -c "import subprocess; subprocess.call(['./foo.sh'])"
ygs9
it outputs the line but then just hangs forever. What is causing this discrepancy?
Adding the trap -p command to the bash script, stopping the hung python process and running ps shows what's going on:
$ cat foo.sh
#!/bin/bash
trap -p
cat /dev/urandom | tr -dc 'a-z1-9' | fold -w 4 | head -n 1
$ python -c "import subprocess; subprocess.call(['./foo.sh'])"
trap -- '' SIGPIPE
trap -- '' SIGXFSZ
ko5o
^Z
[1]+ Stopped python -c "import subprocess; subprocess.call(['./foo.sh'])"
$ ps -H -o comm
COMMAND
bash
python
foo.sh
cat
tr
fold
ps
Thus, subprocess.call() executes the command with the SIGPIPE signal ignored. When head does its job and exits, the remaining processes do not receive the broken pipe signal and do not terminate.
Having the explanation of the problem at hand, it was easy to find the bug in the python bugtracker, which turned out to be issue#1652.
The problem of Python 2 handling SIGPIPE in a non-standard way (i.e., ignoring it) is already covered in Leon's answer, and the fix is given in the link: set SIGPIPE to the default handler (SIG_DFL) with, e.g.,
import signal
signal.signal(signal.SIGPIPE,signal.SIG_DFL)
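If you only want the child process to see the default handler, leaving the parent interpreter untouched, the same reset can be done via subprocess's preexec_fn hook (a sketch; preexec_fn runs in the child just before exec):
import signal
import subprocess

# Reset SIGPIPE to its default handler in the child only
subprocess.call(['./foo.sh'],
                preexec_fn=lambda: signal.signal(signal.SIGPIPE, signal.SIG_DFL))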
You can try to unset SIGPIPE from within your script with, e.g.,
#!/bin/bash
trap - SIGPIPE # reset SIGPIPE to its default
cat /dev/urandom | tr -dc 'a-z1-9' | fold -w 4 | head -n 1
but, unfortunately, it doesn't work, as per the Bash reference manual
Signals ignored upon entry to the shell cannot be trapped or reset.
A final comment: you have a useless use of cat here; it's better to write your script as:
#!/bin/bash
tr -dc 'a-z1-9' < /dev/urandom | fold -w 4 | head -n 1
Yet, since you're using Bash, you might as well use the read builtin as follows (this will advantageously replace fold and head):
#!/bin/bash
read -n4 a < <(tr -dc 'a-z1-9' < /dev/urandom)
printf '%s\n' "$a"
It turns out that with this version, you'll have a clear idea of what's going on (and the script will not hang):
$ python -c "import subprocess; subprocess.call(['./foo'])"
hcwh
tr: write error: Broken pipe
tr: write error
$
$ # script didn't hang
(Of course, it works with no errors under Python 3.) And telling Python to use the default signal handling for SIGPIPE works well too:
$ python -c "import signal; import subprocess; signal.signal(signal.SIGPIPE,signal.SIG_DFL); subprocess.call(['./foo'])"
jc1p
$
(and this also works with Python 3).
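For reference, Python 3 behaves this way because subprocess passes restore_signals=True by default, which resets SIGPIPE (among others) to SIG_DFL in the child before exec:
# Python 3: the default restore_signals=True resets SIGPIPE in the child,
# so the script behaves exactly as it does in a terminal
subprocess.call(['./foo.sh'])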

md5 in linux and python [duplicate]

This question already has an answer here:
Why is an MD5 hash created by Python different from one created using echo and md5sum in the shell?
(1 answer)
Closed 6 years ago.
I am using the md5 algorithm to hash the same string in Python and in Linux, but I get different values. Can someone point out what's wrong?
in linux:
echo "logdir" | md5sum - | awk '{print $1}'
gives: aba76197efa97e6bd4e542846471b391
in python:
md5.new("logdir".encode('utf-8')).hexdigest()
gives: ee6da4c228cfaebfda7f14e4371a097d
echo will add a newline unless you explicitly tell it not to via echo -n.
$ echo -n "logdir" | md5sum - | awk '{print $1}'
ee6da4c228cfaebfda7f14e4371a097d
From man echo:
DESCRIPTION
Echo the STRING(s) to standard output.
-n do not output the trailing newline
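Equivalently, adding the newline on the Python side reproduces the shell's hash (shown here with the modern hashlib module rather than the legacy md5 one):
import hashlib
# echo's trailing newline is part of the hashed data
print(hashlib.md5("logdir\n".encode('utf-8')).hexdigest())
# -> aba76197efa97e6bd4e542846471b391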

Kill the python interpreter in Linux from the terminal

I want to kill the python interpreter - the intention is that all the python files that are running at this moment will stop (without any information about these files).
Obviously the processes should be closed.
Any idea such as deleting files in python or destroying the interpreter is ok :D (I am working with a virtual machine).
I need it from the terminal because I write C code and I use Linux commands...
Hope for help
pkill -9 python
should kill any running python process.
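A gentler sequence is to try SIGTERM first and only fall back to SIGKILL, which processes cannot catch or clean up after:
pkill python      # polite SIGTERM first
pkill -9 python   # force with SIGKILL only if that fails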
There's a rather crude way of doing this, but be careful: first, it relies on python interpreter processes identifying themselves as python, and second, it has the concomitant effect of also killing any other processes identified by that name.
In short, you can kill all python interpreters by typing this into your shell (make sure you read the caveats above!):
ps aux | grep python | grep -v "grep python" | awk '{print $2}' | xargs kill -9
To break this down, this is how it works. The first bit, ps aux | grep python | grep -v "grep python", gets the list of all processes calling themselves python, with the grep -v making sure that the grep command you just ran isn't also included in the output. Next, we use awk to get the second column of the output, which holds the process IDs. Finally, these processes are all (rather unceremoniously) killed by supplying each of them with kill -9.
pkill with script path
pkill -9 -f path/to/my_script.py
is a short and selective method that is more likely to only kill the interpreter running a given script.
See also: https://unix.stackexchange.com/questions/31107/linux-kill-process-based-on-arguments
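To preview what a pattern would match before sending any signal, pgrep accepts the same arguments (-a prints the PID along with the full command line):
pgrep -af path/to/my_script.py    # list candidate processes first
pkill -9 -f path/to/my_script.py  # then kill only the intended one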
You can try the killall command:
killall python
pgrep -f <your process name> | xargs kill -9
This will kill your process.
In my case it is:
pgrep -f python | xargs kill -9
pgrep -f yourAppFile.py | xargs kill -9
pgrep returns the PID(s) of the matching processes, so this will only kill the specific application.
If you want to see the names of the processes and kill them with the kill command, I recommend using this one-liner to kill all running python3 processes and free your RAM:
ps auxww | grep 'python3' | awk '{print $2}' | xargs kill -9
To kill a python script on Ubuntu 20.04.2, instead of Ctrl + C, just press:
Ctrl + D
I have seen the pkill command as the top answer. While that is all great, I still try to tread carefully (since I might be risking my machine by killing processes) and follow the approach below:
First list all the python processes using:
$ ps -ef | grep python
Just to have a look at what root user processes were running beforehand and to cross-check later, if they are still running (after I'm done! :D)
then using pgrep as :
$ pgrep -u <username> python -d ' ' #this gets me all the python processes running for user username
# eg output:
11265 11457 11722 11723 11724 11725
And finally, I kill these processes by using the kill command after cross-checking with the output of ps -ef | ...
kill -9 PID1 PID2 PID3 ...
# example
kill -9 11265 11457 11722 11723 11724 11725
Also, we can cross check the root PIDs by using :
pgrep -u root python -d ' '
and verifying with the output from ps -ef | ...
