pssh freezes only inside shell script - python

I am trying to run a pssh command inside a shell script, but the script freezes and no connections are made, as verified with a ps -ef command (and easy to confirm, because there is only one host in the hosts file I am using).
At this point Control-C fails to kill the script, and it will not time out. Only a kill command works.
If I run the same command on the command line, there is no issue. A pscp command in the same script also causes no issues, so it seems the required libraries are being loaded.
$ cat /home/myusername/tmp/hosts
mysinglehostname
Here is the script being run:
$ cat /home/myusername/bin/testpssh
#!/bin/bash
source ~/.bashrc
$HOME/path/to/python-virtualenv/bin/pscp -h "/home/myusername/tmp/hosts" "/tmp/garbage" "/tmp/garbage"
$HOME/path/to/python-virtualenv/bin/pssh -h "/home/myusername/tmp/hosts" -l myusername -p 512 -t 3 -o "out" -O GSSAPIAuthentication=no -i "whoami"
Here is what happens when I run the script:
$ /home/myusername/bin/testpssh &
[1] 18553
$ [1] 14:51:12 [SUCCESS] mysinglehostname 22
$ ps -ef | grep pssh
myusername 18580 18553 0 14:33 pts/16 00:00:00 /home/myusername/path/to/python-virtualenv/bin/python /home/myusername/path/to/python-virtualenv/bin/pssh -h /home/myusername/tmp/hosts -l myusername -p 512 -t 3 -o out -O GSSAPIAuthentication=no -i whoami
$ ## The script above is hanging after completing the pscp, before pssh completes.
But if I copy and paste the process line, it works fine, as shown here:
$ /home/myusername/path/to/python-virtualenv/bin/python \
> /home/myusername/path/to/python-virtualenv/bin/pssh \
> -h /home/myusername/tmp/hosts -l myusername \
> -p 512 -t 3 -o out -O GSSAPIAuthentication=no -i whoami
[1] 14:59:03 [SUCCESS] mysinglehostname 22
myusername
$
The first [SUCCESS] above is for the pscp action; no subsequent [SUCCESS] comes from the pssh command unless it is run explicitly on the command line.
Why will the pssh command not work inside the bash shell script?
The script works fine if I use ksh instead of bash in the shebang line (and remove the line that sources ~/.bashrc).
I am on Red Hat 6.4, using Python 2.6.6.
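One thing worth checking (an assumption on my part, not something the output above proves): pssh may block reading the script's inherited stdin, and a background job that reads the terminal gets stopped with SIGTTIN, which would match an un-interruptible hang. Redirecting stdin from /dev/null rules that out:

$HOME/path/to/python-virtualenv/bin/pssh -h "/home/myusername/tmp/hosts" \
    -l myusername -p 512 -t 3 -o "out" -O GSSAPIAuthentication=no \
    -i "whoami" < /dev/null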

Related

Unexpected output of bash 'ps -p $$' command returned by 'subprocess.run()'

I am running Linux Mint 18.1 and Python 3.9. To find out which shell is executing my shell commands, I have started using ps -p $$, which is expected to report the shell in the CMD column.
When using subprocess.run() in Python without specifying the shell executable, or with executable='sh', the CMD value is sh for both commands passed (see code below), but when I specify executable='bash' I get different results (ps and bash).
The GNOME Terminal, which runs bash, prints bash as the CMD value when running ps -p $$.
What is the reason for the different CMD values printed by the code below: ps in the case of ps -p $$, but bash in the case of ps -p $$; echo $0?
from subprocess import run

print(run('ps -p $$', capture_output=True, shell=True,
          encoding='utf-8', executable='bash').stdout)
print(run('ps -p $$; echo $0', capture_output=True, shell=True,
          encoding='utf-8', executable='bash').stdout)
which prints:
PID TTY TIME CMD
22928 ? 00:00:00 ps
PID TTY TIME CMD
22929 ? 00:00:00 bash
bash
UPDATE to respond to the given answer and comments:
@Charles Duffy: Yes, without Python involved, running the commands in GNOME Terminal with bash -c gives the same behavior as subprocess.run() in Python, but ... I don't get it when running without the preliminary bash -c.
@Barmar: To check the explanation in your answer, I introduced a third command, echo $0; ps -p $$, to see whether the last command in the sequence would give a CMD value of ps. Below is the result of a terminal session:
$ bash -c 'ps -p $$'
PID TTY TIME CMD
23386 pts/1 00:00:00 ps
$ bash -c 'ps -p $$; echo $0'
PID TTY TIME CMD
23388 pts/1 00:00:00 bash
bash
$ bash -c 'echo $0;ps -p $$'
bash
PID TTY TIME CMD
23395 pts/1 00:00:00 bash
What have I misunderstood in your answer, given that I expected the third command to give ps as the CMD value?
This is a bash optimization. If the command line is just a single command, it is implemented by simply calling execv() rather than forking a child to execute the command. This replaces the shell process with the ps program, keeping the same PID. It's as if you had executed:
print(run('exec ps -p $$', ...))
You don't see it in the second attempt because ps is not the last command in the sequence: the shell has to fork a child process for it and keep running to wait for it to exit and execute the following commands.
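A quick way to see the optimization at the terminal, consistent with the transcripts above (a sketch; the no-op builtin : is just the simplest way to make ps no longer the only command):

$ bash -c 'ps -p $$'          # single command: bash execs ps in place, same PID, CMD shows ps
$ bash -c 'exec ps -p $$'     # the optimization is equivalent to this explicit exec
$ bash -c 'ps -p $$; :'       # ps is no longer alone: bash forks it, CMD shows bash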

Force flushing from inside bash script code to a stdout file

I'm trying to flush to stdout the output of a bioinformatics program written in Python (the ETE Toolkit). I tried the stdbuf command described in "Force flushing of output to a file while bash script is still running", but it does not work, because stdbuf can only be applied to a command executed from the shell and not to a bash function ("How to use stdbuf on a bash function").
Moreover, from Python I discovered the following call that might be of interest:
import sys
sys.stdout.flush()
But I don't know how to make use of it inside the bash script attached below.
The problem is that if I only use the -o and -e options in the batch script (as you can see below), the output is written to logs_40markers in a non-continuous manner, which does not let me see errors as they happen. It works when I run the command directly from the shell, but my internet connection is not stable: practically every night there is a power outage, and I have to restart the command, which takes a minimum of one week.
#!/bin/bash
#$ -N tree
#$ -o logs_40markers
#$ -e logs_40markers
#$ -q all.q#compute-0-3
#$ -l mf=100G
stdbuf -oL
module load apps/etetoolkit-3.1.2
export QT_QPA_PLATFORM='offscreen'
ete3 build -w mafft_default-none-none-none -m sptree_fasttree_all -o provaflush --cogs coglist_species_filtered.txt -a multifasta_speciesunique.fa --clearall --cpu 40
&> logs_40markers
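(Note that, as written above, stdbuf -oL on a line of its own is a no-op, and a bare &> redirection on its own line applies to an empty command. A hedged sketch of the prefixed form, assuming the same ete3 arguments, would be:

stdbuf -oL ete3 build -w mafft_default-none-none-none -m sptree_fasttree_all \
    -o provaflush --cogs coglist_species_filtered.txt \
    -a multifasta_speciesunique.fa --clearall --cpu 40 &> logs_40markers

though for a Python program the PYTHONUNBUFFERED fix shown in the answer below is the more direct lever, since Python does its own buffering on top of libc stdio.)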
Thanks in advance if someone can give me some guidance/advice.
Have a nice day,
Maggi
A colleague of mine in informatics solved the problem using the PYTHONUNBUFFERED environment variable.
#!/bin/bash
#$ -N tree
#$ -o logs_40markers
#$ -e logs_40markers
#$ -q all.q#compute-0-3
#$ -l mf=100G
module load apps/etetoolkit-3.1.2
export QT_QPA_PLATFORM='offscreen'
export PYTHONUNBUFFERED="TRUE"
ete3 build -w mafft_default-none-none-none -m sptree_fasttree_all -o provaflush --cogs coglist_species_filtered.txt -a multifasta_speciesunique.fa --clearall --cpu 40 --v 4
To check the current status of the process, run in the shell:
tail -f output.log
(-f means follow).
I hope someone finds this solution helpful.
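For what it's worth, PYTHONUNBUFFERED has a per-invocation equivalent in the interpreter's -u flag, so the same effect can be had without touching the environment (a sketch with a hypothetical script path, not the ete3 entry point itself):

python -u /path/to/script.py > output.log 2>&1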

Docker (or Python) - arguments passed to container are getting corrupted

I want to pass an argument to a Docker container, but it is getting corrupted:
$ docker run -it python python -c 'import os;import sys;print(sys.argv);print(os.fsencode(sys.argv[1]))' $'\xFF'
['-c', '�']
b'\xef\xbf\xbd'
I am able to pass the data through Docker's stdin, though:
$ echo -n $'\xFF' | docker run -i bash xxd
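One observation about the output above: b'\xef\xbf\xbd' is exactly the UTF-8 encoding of U+FFFD, the Unicode replacement character, which suggests the raw 0xFF byte (invalid as UTF-8) is being decoded and re-encoded somewhere between the CLI and the container. A quick check of that reading:

$ python3 -c 'print(b"\xef\xbf\xbd".decode("utf-8") == "\ufffd")'
True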

Start a remote script from a Mac OS X machine via SSH command

I am trying to start a Python script on my VM from my local Mac OS machine.
I did
ssh -i /key/path/id_rsa root@111.11.1.0 "sleep 5s; cd /root/Server; pkill -f server.py; ./server.py;"
Result
It SSHes in, quickly runs those commands, and quickly logs me out. I was expecting the SSH session to stay open.
My script is NOT running ...
ps -aux | grep python
root 901 0.0 0.2 553164 18584 ? Ssl Jan19 20:37 /usr/bin/pytho -Es /usr/sbin/tuned -l -P
root 15444 0.0 0.0 112648 976 pts/0 S+ 19:16 0:00 grep --color=auto python
However, if I do this:
ssh -i /key/path/id_rsa root@111.11.1.0 "sleep 5s; cd /root/Server"
and then, inside the SSH session, run
./server.py
it works.
Am I missing anything?
You might need to state the shell that starts your script, i.e. /bin/bash server.py:
ssh -i /key/path/id_rsa root@111.11.1.0 "sleep 5s; cd /root/Server; pkill -f server.py; /bin/bash ./server.py;"
If you would like to start the script and leave it running even after you end your SSH session, you can use nohup. Notice that you need to put the process in the background and redirect stdin, stdout, and stderr to completely detach from the remote process:
ssh -i /key/path/id_rsa root@111.11.1.0 "sleep 5s; cd /root/Server; nohup /bin/bash ./server.py < /dev/null > std.out 2> std.err &"
It seems the reason your ssh command returns immediately is that the call to pkill -f server.py also terminates the SSH session itself, since its command line also contains server.py.
I don't have my regular MacBook Pro here to test with, but I think that adding another semicolon and ending the command line with /bin/bash might do it.
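Before running a destructive pkill over ssh, pgrep (its listing twin, with the same -f full-command-line matching) can preview exactly which processes the pattern would hit (a sketch, same hypothetical key path and host):

ssh -i /key/path/id_rsa root@111.11.1.0 "pgrep -af server.py; true"

The trailing true keeps the remote shell alive alongside pgrep, so if the session's own command line matches the pattern it will show up in the list, confirming the explanation above. Any fix then has to keep the string server.py out of the session's own command line, or signal by PID instead.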

how to execute python script on remote machine using psexec?

I am trying to execute a Python script on a remote machine using PsExec. The script is already on the remote machine; I only want to execute it there. I am using the following command:
psexec -i -s -d \\123 -u xyz -p xyz C:/sample.py
But I get an error:
PsExec could not start C:\sample.py on 123:
The system cannot find the file specified
I also tried placing the python.exe path in the psexec command:
psexec -i -s -d \\123 -u xyz -p xyz C:\programs\python.exe C:/sample.py
Then it opens python.exe but does not execute sample.py. The paths are all correct, but I don't understand why psexec cannot find the script. Please suggest how I can execute the script on the remote machine using psexec.
Remove the -d option from the command, provide the path in quotes, and use backslashes in the path.
Try adding quotes (" ") around the exe filename:
psexec -i -s -d \\123 -u xyz -p xyz "C:\programs\python.exe" C:/sample.py
If that doesn't work, try adding quotes around the parameters as well:
psexec -i -s -d \\123 -u xyz -p xyz "C:\programs\python.exe" "C:/sample.py"
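If neither variant helps, another form worth trying (a sketch with the same hypothetical paths, assuming python.exe really is at C:\programs\python.exe on the remote machine) is to launch through cmd /c with backslashes throughout, which sidesteps any issue with forward slashes in the argument:

psexec -i -s -d \\123 -u xyz -p xyz cmd /c "C:\programs\python.exe C:\sample.py"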
