I have the following options:
python runscript.py -O start -a "-a "\"-o \\\"-f/dev/sda1 -b256k -Q8\\\" -l test -p maim\""
runscript.py takes -O and -a, then passes the remaining arguments to shell script 1.
shell script 1 takes option -a and should treat the remainder, \"-o \\\"-f/dev/sda1 -b256k -Q8\\\" -l test -p maim\", as the argument string for shell script 2.
shell script 2 takes options -o, -l, and -p.
Can anyone please help me with this scenario? I am stuck at the point where shell script 1 starts parsing the -o argument itself.
Is there a simple way to do this? The hierarchy of shell script 1 calling shell script 2 should be maintained.
Regards
Sai
The command you gave is a bit confusing, so I'll generalize the scenario. Is this what you meant?
python runscript.py -p1 v1 -p2 v2 -p3 v3
runscript.py will take all the given parameters,
call shellscript_1.sh with selected params, say -p2 v2,
and then call shellscript_2.sh with the remaining params, say -p3 v3.
We may need a more accurate explanation of the problem.
The conventional way to do this in UNIX is to split the argument list about a pivot (usually --) such that the main script consumes all the arguments to the left of the pivot and leaves the remaining arguments for the other script(s). If you have flexibility in your calling function, I'd recommend doing it this way.
So, if runscript.py and both shell scripts all need to consume a separate argument list, your command line would look something like this:
python runscript.py <args for runscript> -- <args for 1st script> -- <args for 2nd script>
For example (I'm just guessing at your hierarchy):
python runscript.py -O start -- -l test -p maim -- -f/dev/sda1 -b256k -Q8
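For instance, here is a minimal sketch of how runscript.py could split sys.argv on the -- pivots; the helper name split_on_pivots is just an illustrative assumption:
import sys

def split_on_pivots(argv, pivot="--"):
    # collect arguments into groups, starting a new group at each pivot
    groups, current = [], []
    for arg in argv:
        if arg == pivot:
            groups.append(current)
            current = []
        else:
            current.append(arg)
    groups.append(current)
    return groups

groups = split_on_pivots(sys.argv[1:])
# for the example above: groups[0] -> ['-O', 'start'],
# groups[1] -> ['-l', 'test', '-p', 'maim'],
# groups[2] -> ['-f/dev/sda1', '-b256k', '-Q8']
Each script then consumes its own list verbatim, so no nested quoting is needed at any level.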
Related
I would like some help on how to properly set up a complicated job on an HPC cluster. At some point in my Python code I want to submit a job using os.system("bsub -K < mama.sh"); I found that the -K flag makes the call wait for the job to end before continuing. Now I want this mama.sh script to call 5 other jobs (kid1.sh, kid2.sh ... kid5.sh) that run in parallel (to reduce computational time). Each of these 5 child scripts runs a piece of Python code. mama.sh should wait until all 5 jobs have finished before continuing.
I thought of something like that:
#!/bin/sh
#BSUB -q hpc
#BSUB -J kids[1-5]
#BSUB -n 5
#BSUB -W 10:00
#BSUB -R "rusage[mem=6GB]"
#BSUB -R "span[hosts=1]"
# -- end of LSF options --
module load python3/3.8
python3 script%I.py
ORRR
python3 script1.py
python3 script2.py
python3 script3.py
python3 script4.py
python3 script5.py
Maybe the above doesn't make sense at all though. Is there any way to actually do that?
Thanks in advance
As far as I know, you can accomplish this goal at different levels,
in two easy ways:
parallelize your Python code with the multiprocessing module (see the sketch after the shell example below)
parallelize your shell script with &, so that commands are executed in the background:
python3 script1.py &
python3 script2.py &
wait  # block until all background jobs have finished
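For the first approach, a minimal sketch, assuming the five scripts from the question can simply be launched as child processes:
import subprocess
from multiprocessing import Pool

def run(script):
    # each worker launches one child script and waits for it to finish
    subprocess.run(["python3", script], check=True)

if __name__ == "__main__":
    scripts = ["script%d.py" % i for i in range(1, 6)]
    with Pool(processes=5) as pool:
        pool.map(run, scripts)  # returns only once all five have finished
Like mama.sh with wait, the pool.map call does not return until every child has completed.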
From this stackoverflow thread https://stackoverflow.com/questions/4443...mmand-line, I have extracted this command line:
gimp-console -idf --batch-interpreter python-fu-eval -b "import sys;sys.path=['.']+sys.path;import batch;batch.run('./images')" -b "pdb.gimp_quit(1)"
It works perfectly well.
Now I would like to run this command from a Python script. I usually use subprocess.Popen, but this time it does not work and I get this message:
"batch command experienced an execution error"
How can I launch the GIMP command line from a Python script?
One easy way to resolve this is to just put your GIMP startup script into a bash script, say startgimp.sh
#!/bin/bash
#set your path to GIMP or cd into the folder where you installed GIMP
gimp-console -idf --batch-interpreter python-fu-eval -b "import sys;sys.path=['.']+sys.path;import batch;batch.run('./images')" -b "pdb.gimp_quit(1)"
then from Python simply call the bash script like so
import subprocess
subprocess.call(["bash","/path/to/your/script/startgimp.sh"])
If you are able to make the .sh script executable, e.g. chmod +x startgimp.sh then you can skip the bash part and just do subprocess.call("/path/to/your/script/startgimp.sh")
Some caveats
This is assuming you're on a UNIX based system
I used subprocess.call, so this WILL block while waiting for GIMP to complete. Use Popen as you did if you don't want this.
I don't have GIMP to try this out, but you could also try splitting your GIMP command into separate elements of the argument list passed to subprocess and see if that works,
e.g. subprocess.call(["gimp-console", "-idf", "--batch-interpreter", "python-fu-eval", "-b", "import sys;sys.path=['.']+sys.path;import batch;batch.run('./images')", "-b", "pdb.gimp_quit(1)"])
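And if you want the non-blocking behaviour together with a later synchronization point, a small sketch along the lines of what you already tried (the script path is the placeholder from above):
import subprocess

# launch GIMP without blocking, then continue with other work
proc = subprocess.Popen(["bash", "/path/to/your/script/startgimp.sh"])
# ... other work while GIMP runs ...
proc.wait()  # block here once you actually need GIMP to be done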
My .profile defines a function
myps () {
ps -aef|egrep "a|b"|egrep -v "c\-"
}
I'd like to execute it from my python script
import subprocess
subprocess.call("ssh user#box \"$(typeset -f); myps\"", shell=True)
Getting an error back
bash: -c: line 0: syntax error near unexpected token `;'
bash: -c: line 0: `; myps'
Escaping ; results in
bash: ;: command not found
script='''
. ~/.profile # load local function definitions so typeset -f can emit them
ssh user@box ksh -s <<EOF
$(typeset -f)
myps
EOF
'''
import subprocess
subprocess.call(['ksh', '-c', script]) # no shell=True
There are a few pertinent items here:
The dotfile defining this function needs to be locally invoked before you run typeset -f to dump the function's definition over the wire. By default, a noninteractive shell does not run the majority of dotfiles (any specified by the ENV environment variable is an exception).
In the given example, this is served by the . ~/.profile command within the script.
The shell needs to be one supporting typeset, so it has to be bash or ksh, not sh (as used by shell=True by default), which may be provided by ash or dash, both lacking this feature.
In the given example, this is served by passing ['ksh', '-c'] as the first two arguments of the argv array.
typeset needs to be run locally, so it can't be in an argv position other than the first with shell=True. (To provide an example: subprocess.Popen(['''printf '%s\n' "$#"''', 'This is just literal data!', '$(touch /tmp/this-is-not-executed)'], shell=True) evaluates only printf '%s\n' "$#" as a shell script; This is just literal data! and $(touch /tmp/this-is-not-executed) are passed as literal data, so no file named /tmp/this-is-not-executed is created.)
In the given example, this is mooted by not using shell=True.
Explicitly invoking ksh -s (or bash -s, as appropriate) ensures that the shell evaluating your function definitions matches the shell you wrote those functions against, rather than passing them to sh -c, as would happen otherwise.
In the given example, this is served by ssh user@box ksh -s inside the script.
I ended up using this.
import subprocess
import sys
import re
HOST = "user#" + box
COMMAND = 'my long command with many many flags in single quotes'
ssh = subprocess.Popen(["ssh", "%s" % HOST, COMMAND],
shell=False,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE)
result = ssh.stdout.readlines()
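Since stderr is piped as well, it is worth draining it when stdout comes back empty; the check below is my own hedged addition, not part of the snippet above:
if not result:
    # an empty stdout usually means the remote command failed
    error = ssh.stderr.readlines()
    print("ERROR:", error, file=sys.stderr)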
The original command was not interpreting the ; before myps properly. Using sh -c fixes that, but... (please see Charles Duffy's comments below).
Using a combination of single/double quotes sometimes makes the syntax easier to read and less prone to mistakes. With that in mind, a safe way to run the command (provided the functions in .profile are actually accessible in the shell started by the subprocess.Popen object):
subprocess.call('ssh user@box "$(typeset -f); myps"', shell=True)
An alternative (less safe) method would be to use sh -c for the subshell command:
subprocess.call('ssh user@box "sh -c $(echo typeset -f); myps"', shell=True)
# myps is treated as a command
This seemingly returned the same result:
subprocess.call('ssh user@box "sh -c typeset -f; myps"', shell=True)
There are definitely alternative methods for accomplishing these types of tasks; however, this might give you an idea of what the issue was with the original command.
I have found some code that I think will allow me to communicate with my Helios heat recovery unit. I am relatively new to Python (but not to coding in general) and I really cannot work out how to use this code. It is obviously written for smarthome.py, but I'd like to use it from the command line.
I can also see that the way this file is constructed is probably not the best way to construct an __init__.py but I'd like to try and use it first.
So, how do I run this code? https://github.com/mtiews/smarthomepy-helios
Cheers
After git clone https://github.com/mtiews/smarthomepy-helios.git: either
invoke python with the __init__.py script as argument:
python smarthomepy-helios/__init__.py
or
make the __init__.py executable and run it:
chmod u+x smarthomepy-helios/__init__.py
smarthomepy-helios/__init__.py
Running it either way gives me
2016-02-20 18:07:51,791 - root - ERROR - Helios: Could not open /dev/ttyUSB0.
Exception: Not connected
But when passing --help I get a nice synopsis:
$> python smarthomepy-helios/__init__.py --help
usage: __init__.py [-h] [-t PORT] [-r READ_VAR] [-w WRITE_VAR] [-v VALUE] [-d]
Helios ventilation system commandline interface.
optional arguments:
-h, --help show this help message and exit
-t PORT, --tty PORT Serial device to use
-r READ_VAR, --read READ_VAR
Read variables from ventilation system
-w WRITE_VAR, --write WRITE_VAR
Write variable to ventilation system
-v VALUE, --value VALUE
Value to write (required with option -v)
-d, --debug Prints debug statements.
Without arguments all readable values using default tty will be retrieved.
I am using python2.7 and argparse for my script. I am executing script as below:
python2.7 script.py -a valuefora -b valueforb -c valueforc -d valueford
Now what I want is:
if option -a is given, then and only then should options -b, -c, and -d be accepted.
In addition to the above, I also want to make the group -a -b -c -d an EITHER-OR with -e, i.e. ([-a -b -c -d] | -e).
Please correct me if I am wrong anywhere.
Your best choice is to test for the presence of various combinations after parse_args and use parser.error to issue an argparse compatible error message. And write your own usage line. And make sure the defaults clearly indicate whether an option has been parsed or not.
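A minimal sketch of that post-parse validation, assuming all five options take a value (the hand-written usage string and the None defaults are my assumptions):
import argparse

parser = argparse.ArgumentParser(
    usage="script.py (-a A [-b B] [-c C] [-d D] | -e E)")
for opt in "abcde":
    # default=None makes "not given" detectable after parsing
    parser.add_argument("-" + opt, default=None)
args = parser.parse_args()

group_used = any(v is not None for v in (args.a, args.b, args.c, args.d))
if args.e is not None and group_used:
    parser.error("-e cannot be combined with -a/-b/-c/-d")
if args.a is None and any(v is not None for v in (args.b, args.c, args.d)):
    parser.error("-b, -c and -d require -a")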
If you can change the -a and -e options to command names like cmda or build, you could use subparsers. In this case you might define a command_a subparser that accepts -b, -c, and -d, and another command_e subparser that has none of these. This is the closest argparse comes to 'required together' groups of arguments.
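A hedged sketch of that subparser layout (the command names cmda and cmde stand in for -a and -e):
import argparse

parser = argparse.ArgumentParser()
sub = parser.add_subparsers(dest="command")

cmda = sub.add_parser("cmda")  # replaces -a; -b, -c, -d exist only here
cmda.add_argument("-b")
cmda.add_argument("-c")
cmda.add_argument("-d")

cmde = sub.add_parser("cmde")  # replaces -e; accepts none of the above

args = parser.parse_args()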
Mutually exclusive groups can define something with a usage like [-a | -b | -c], but that just means -b cannot occur along with -a or -c. There's nothing fancy about that mechanism: it just constructs a dictionary of such exclusions and checks it each time it parses a new option. If there is a conflict, it issues the error message and quits. It is not set up to handle fancy combinations, such as your (-e | agroup).
Custom actions can also check for the absence or presence of non-default values in the namespace, much as you would after parsing. But doing so during parsing isn't any simpler, and it raises questions about order. Do you want to handle -b -c -a the same way as -a -c -b? Should -a check for the presence of the others, or should -b check that -a has already been parsed? Who checks for the presence or absence of -e?
There are a number of other Stack Overflow questions about argparse groups, exclusive and inclusive, but I think these are the essential issues.