Running shell commands in Python

I have this simple code for running shell scripts, and it sometimes works, sometimes not. When it doesn't work, the console log is:
Please edit the vars script to reflect your configuration, then
source it with "source ./vars". Next, to start with a fresh PKI
configuration and to delete any previous certificates and keys, run
"./clean-all". Finally, you can run this tool (pkitool) to build
certificates/keys.
This is strange to me, because when I run the commands in a console they work as they should.
import subprocess

def cmds(*args):
    cd1 = "cd /etc/openvpn/easy-rsa && source ./vars"
    cd2 = "cd /etc/openvpn/easy-rsa && ./clean-all"
    cd3 = "cd /etc/openvpn/easy-rsa && printf '\n\n\n\n\n\n\n\n\n' | ./build-ca"
    runcd1 = subprocess.Popen(cd1, shell=True)
    runcd2 = subprocess.Popen(cd2, shell=True)
    runcd3 = subprocess.Popen(cd3, shell=True)
    return (runcd1, runcd2, runcd3)
I've changed it like this:
def pass3Cmds(*args):
    commands = "cd /etc/openvpn/easy-rsa && source ./vars && ./clean-all && printf '\n\n\n\n\n\n\n\n\n' | ./build-ca"
    runCommands = subprocess.Popen(commands, shell=True, stdout=subprocess.PIPE)
    return runCommands
but the console prints:
source: not found

You need to combine the three commands into one.
The "source ./vars" only affects the shell from which it's run. When you use three separate Popen commands, you're getting three separate shells.
Run all the commands in one Popen with &&s between them.
The reason this works "sometimes" as written is that you're sometimes running Python in a shell where you have already sourced the vars script.
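As for the source: not found error: with shell=True, Popen runs the command string with /bin/sh, and on systems where /bin/sh is dash, source is not available (it is a bashism; the POSIX spelling is .). A minimal sketch that combines everything into one shell and forces bash:

import subprocess

# One Popen, one shell: the environment created by `source ./vars` is still
# in effect for the commands chained after it. Passing executable="/bin/bash"
# avoids the "source: not found" error seen when /bin/sh is dash.
commands = ("cd /etc/openvpn/easy-rsa && source ./vars && ./clean-all"
            " && printf '\n\n\n\n\n\n\n\n\n' | ./build-ca")
proc = subprocess.Popen(commands, shell=True, executable="/bin/bash")
proc.wait()  # block until the whole chain has finished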

Related

PHP script triggering Python script in background

Current Situation
I created a PHP script to start the Python script. The script is as follows:
$python_file = "/var/www/web/test.py 2>&1 | tee -a /tmp/mylog 2>/dev/null >/dev/null &";
$command = "nohup python3 ".$python_file;
exec($command);
Problem:
After triggering the PHP script, it keeps on running and finally returns a 504 error page.
Expected Solution
After triggering the above script, it needs to return immediately after the exec statement. Is this possible?
Add & to run the command in the background. Note that for exec() to return immediately, the program's output must also be redirected (as the tee and /dev/null redirections here already do); otherwise PHP waits for the program to finish:
$python_file = "/var/www/web/test.py 2>&1 | tee -a /tmp/mylog 2>/dev/null >/dev/null &";
$command = "nohup python3 ".$python_file . " &";
exec($command);

How to run my Python script in parallel with another Java application on the same Linux box in GitLab CI?

For one GitLab CI runner
I have a jar file which needs to be continuously running on the GitLab Linux box, but since this application runs continuously, the Python script on the next line never gets executed. How do I run the jar application and then execute the Python script at the same time?
.gitlab-ci.yml file:
pwd && ls -l
unzip ZAP_2.8.0_Core.zip && ls -l
bash scan.sh
python3 Report.py
The scan.sh file contains the code java -jar app.jar.
Since this application runs continuously, the fourth line, python3 Report.py, never gets executed.
How do I make both of these run simultaneously without the .jar application stopping?
The immediate solution would probably be:
pwd && ls -l
echo "ls OK"
unzip ZAP_2.8.0_Core.zip && ls -l
echo "unzip + ls OK"
bash scan.sh &
scanpid=$!
echo "started scanpid with pid $scanpid"]
ps axuf | grep $scanpid || true
echo "ps + grep OK"
( python3 Report.py ; echo $? > report_status.txt ) || true
echo "report script OK"
kill $scanpid
echo "kill OK"
echo "REPORT STATUS = $(cat report_status.txt)"
test $(cat report_status.txt) -eq 0
Start the Java process in the background,
run your Python code, remember its return status, and always return true,
kill the background process after running Python,
then check the status code of the Python script.
Perhaps this is not necessary, as I never checked how GitLab CI deals with background processes spawned by its runners.
I take a conservative approach here:
- I remember the process id of the bash script, so that I can kill it later.
- I ensure that the line running the Python script always returns a 0 exit code, so that GitLab CI does not stop executing the next lines, but I remember the status code.
- Then I kill the bash script.
- Then I check whether the exit code of the Python script was 0 or not, so that GitLab CI can properly report whether the job succeeded.
Another minor comment (not related to your question)
I don't really understand why you write
unzip ZAP_2.8.0_Core.zip && ls -l
instead of
unzip ZAP_2.8.0_Core.zip ; ls -l
If you expect the unzip command might fail, you could just write
unzip ZAP_2.8.0_Core.zip
ls -l
and GitLab CI would abort automatically before executing ls -l.
I also added many echo statements for easier debugging and error analysis; you might remove them in your final solution.
To run the two scripts at the same time, you can add & to the end of the line that is blocking. That will make it run in the background.
Either do
bash scan.sh & in the .gitlab-ci.yml, or add & to the end of the line calling the jar file within scan.sh.
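If you would rather drive this from Python instead of shell, the same background-then-foreground pattern looks roughly like this (a sketch only; the file names are taken from the question):

import subprocess

# Start the long-running jar in the background, like `bash scan.sh &`.
scan = subprocess.Popen(["java", "-jar", "app.jar"])
try:
    # Run the report in the foreground and remember its exit status,
    # like `( python3 Report.py ; echo $? > report_status.txt )`.
    status = subprocess.run(["python3", "Report.py"]).returncode
finally:
    scan.kill()  # like `kill $scanpid`
print("REPORT STATUS =", status)
raise SystemExit(status)  # fail the job if the report failed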

Best way to run a script via a /bin/bash shell script in multiple environments?

I have a Python script which I run on localhost and development from the command line with an argument: python script.py development on development, and python script.py localhost on localhost.
Now I want to run this script from a /bin/bash shell script.
I added #!/usr/bin/env python as the header of the sh script.
How can I achieve this?
do
if [ $1 == "local" ]; then
    python script.py $1
elif [ $1 == "development" ]; then
    python script.py $1
What can I do to improve this script?
Since $1 already contains what you want, the conditional is unnecessary.
If your script is a Bash script, you should put #!/bin/bash (or your local equivalent) in the shebang line. However, this particular script uses no Bash features, and so might usefully be coded to run POSIX sh instead.
#!/bin/sh
case $1 in
    local|development) ;;
    *) echo "Syntax: $0 local|development" >&2; exit 2;;
esac
exec python script.py "$1"
A more useful approach is to configure your local system to run the script directly with ./script.py or similar, and let the script itself take care of parsing its command-line arguments. How exactly to do that depends on your precise environment, but on most U*x-like systems, you would put #!/usr/bin/env python as the first line of script.py itself, and chmod +x the file.
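For illustration, script.py's own argument handling might then look something like this (a hypothetical sketch, not code from the question):

#!/usr/bin/env python
import sys

# Hypothetical sketch: validate the environment argument inside script.py
# itself, so the wrapper shell script becomes unnecessary.
ENVIRONMENTS = ("local", "development")

if len(sys.argv) != 2 or sys.argv[1] not in ENVIRONMENTS:
    sys.exit("Usage: script.py local|development")

env = sys.argv[1]
print("running against", env)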
I assume this is what you wanted...
#!/bin/bash
if [ "$#" -eq 0 ]; then
    echo "Usage: $0 (local|development)"
    exit
fi

if [ "$1" == "local" ]; then
    python script.py "$1"
    echo "$1"
elif [ "$1" == "development" ]; then
    python script.py "$1"
    echo "$1"
fi
Save the bash code above into a file named, let's say, script.sh. Then make it executable: chmod +x script.sh. Then run it:
./script.sh
If no argument is specified, the script will just print info about how to use it.
./script.sh local - executes python script.py local
./script.sh development - executes python script.py development
You can comment out the echo lines; they were left there just for debugging purposes (add a # in front of the echo lines to comment them out).

Using Python to run a sourced script, cd, and then run a command within the SAME shell

How can I run a sourced bash script, then change directories, and then run a command, all within the same shell (using Python)? Is this even possible?
My Attempt:
subprocess.check_call(["env -i bash -c 'source ./init-build ARG'", "cd ../myDir", "bitbake myBoard"], shell =True)
I would write this for you, but I would need to see the absolute paths. Here is an example:
subprocess.check_call(["""/usr/bin/env bash -c "cd /home/x/y/tools && source /home/x/y/venv/bin/activate && python asdf.py" >> /tmp/asdf.txt 2>&1"""], shell=True)

Multiple bash commands in Nomad

I have an application that runs multiple Python scripts in order. I can run them in docker-compose as follow:
command: >
bash -c "python -m module_a &&
python -m module_b &&
python -m module_c"
Now I'm scheduling the job in Nomad, and I added the command below under the configuration for the Docker driver:
command = "/bin/bash"
args = ["-c", "python -m module_a", "&&","
"python -m module_b", "&&",
"python -m module_c"]
But Nomad seems to escape the &&, runs just the first module, and issues exit code 0. Is there any way to run a multiline command similar to docker-compose?
The following is guaranteed to work with the exec driver:
command = "/bin/bash"
args = [
    "-c",  ## next argument is a shell script
    "for module; do python -m \"$module\" || exit; done",  ## this is that script
    "_",  ## passed as $0 to the script
    "module_a", "module_b", "module_c",  ## passed as $1, $2, and $3
]
Note that only a single argument is passed as a script -- the one immediately following -c. Subsequent arguments are arguments to that script, not additional scripts or script fragments.
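To see the same argument plumbing from Python (a hypothetical demo, not part of the Nomad config), you can invoke bash -c the same way with subprocess:

import subprocess

# The string after -c is the script; "_" becomes $0 and the module names
# become $1, $2, $3 inside it -- mirroring the Nomad args list above.
subprocess.run([
    "/bin/bash", "-c",
    'for module; do echo "would run: python -m $module"; done',
    "_", "module_a", "module_b", "module_c",
], check=True)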
Even simpler, you could run:
command = "/bin/bash"
args = ["-c", "python -m module_a && python -m module_b && python -m module_c" ]
