To launch a Python script (it drives an OLED display) from the terminal, I have to use the following bash command: python demo_oled_v01.py --display ssd1351 --width 128 --height 128 --interface spi --gpio-data-command 20. The parameters after .py are important; without them the script runs with default settings, and in my case it will not launch with default settings.
The problem arises when I need to launch my script from another Python script instead of from the terminal. To launch one of my Python scripts from a parent script, I have used:
import subprocess # to use subprocess
p = subprocess.Popen(['python', 'demo_oled_v01.py --display ssd1351 --width 128 --height 128 --interface spi --gpio-data-command 20'])
in my parent script but I got an error stating:
python: can't open file 'demo_oled_v01.py --display ssd1351 --width 128 --height 128 --interface spi --gpio-data-command 20': [Errno 2] No such file or directory
I suspect that adding the parameters --display ssd1351 --width 128 --height 128 --interface spi --gpio-data-command 20 after .py may be causing the difficulty in launching the script. As mentioned, these parameters are essential when launching from the terminal. How can I use subprocess to launch this script with the required parameters?
The subprocess module is interpreting all of your arguments, including demo_oled_v01.py, as a single argument to python. That's why python complains that it cannot find a file with that name. Try running it as:
p = subprocess.Popen(['python', 'demo_oled_v01.py', '--display',
'ssd1351', '--width', '128', '--height', '128', '--interface', 'spi',
'--gpio-data-command', '20'])
See the Popen documentation for more information.
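If you would rather keep the command line as one string, the standard library's shlex.split can produce the argument list for you; a minimal sketch:

import shlex
import subprocess

# shlex.split turns the single command string into the list Popen expects
cmd = 'python demo_oled_v01.py --display ssd1351 --width 128 --height 128 --interface spi --gpio-data-command 20'
p = subprocess.Popen(shlex.split(cmd))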
This started as a comment thread, but got too long and complex.
Calling Python as a subprocess of Python is an antipattern. You can often fruitfully avoid this by refactoring your Python code so that your program can call the other program as a simple library (or module, or package, or what have you -- there is a bit of terminology here which you'll want to understand more properly ... eventually).
Having said that, there are scenarios where the subprocess needs to be a subprocess (perhaps it is designed to do its own signal handling, for example) so don't apply this blindly.
If you have a script like demo.py which contains something like
def really_demo(something, other, message='No message'):
    # .... some functionality here ...
    pass

def main():
    import argparse
    parser = argparse.ArgumentParser(description='Basic boilerplate, ignore the details.')
    parser.add_argument('--something', dest='something')  # store argument in args.something
    parser.add_argument('--other', dest='other')  # ends up in args.other
    parser.add_argument('--message', dest='message')  # ends up in args.message, used below
    # ... etc etc etc more options
    args = parser.parse_args()
    # This is the beef: once the arguments are parsed, pass them on
    really_demo(args.something, args.other, message=args.message)

if __name__ == '__main__':
    main()
Observe how, when you run the script from the command line, __name__ will be '__main__', so the script plunges into the main() function, which picks apart the command line and then calls some other function -- in this case, really_demo(). Now, if you are calling this code from an already running Python, there is no need to collect the arguments into a list and pass them to a new process. Just have your Python script import the function you want to call from the script, and call it with your arguments.
In other words, if you are currently doing
subprocess.call(['demo.py', '--something', 'foo', '--other', value, '--message', 'whatever'])
you can replace the subprocess call with
from demo import really_demo
really_demo('foo', value, message='whatever')
Notice how you are bypassing the main() function and all the ugly command-line parsing, and simply calling another Python function. (Pay attention to the order and names of the arguments; they may be quite different from what the command-line parser accepts.) The fact that it is defined in a different file is a minor detail which import handles for you, and the fact that the file contains other functions is something you can ignore (or perhaps exploit more fully if, for example, you want to access internal functions which are not exposed via the command-line interface in a way which is convenient for you).
As an optimization, Python won't import the same module twice, so you need to make sure the script's functionality does not run as a side effect of being imported (which is exactly what the if __name__ == '__main__' guard above prevents). Commonly, you import once, at the beginning of your script (though technically you can do it inside the def which needs it, for example if there is only one place in your code which depends on the import), and then call the imported functions as many or as few times as you need.
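To see the caching in action, here is a minimal sketch, assuming demo.py has a print statement at the top level:

import demo                    # runs demo.py's top-level code once
import demo                    # no output: Python returns the cached module
from demo import really_demo   # still the same cached module

really_demo('foo', 'bar')      # call it as often as you like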
This is a lightning recap of a very common question. If this doesn't get you started in the right direction, you should be able to find many existing questions on Stack Overflow about various aspects of this refactoring task.
Add the full path to the Python script and separate all parameters.
Example:
import subprocess
p = subprocess.Popen(['python', 'FULL_PATH_TO_FILE/demo_oled_v01.py', '--display', 'ssd1351', '--width', '128', '--height', '128', '--interface', 'spi', '--gpio-data-command', '20'])
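If you would rather not hard-code the path, you can build it from the location of the parent script, and sys.executable runs the child under the same interpreter as the parent; a sketch, assuming both scripts live in the same directory:

import os
import subprocess
import sys

# Directory of the currently running (parent) script
script_dir = os.path.dirname(os.path.abspath(__file__))
script_path = os.path.join(script_dir, 'demo_oled_v01.py')

p = subprocess.Popen([sys.executable, script_path,
                      '--display', 'ssd1351', '--width', '128', '--height', '128',
                      '--interface', 'spi', '--gpio-data-command', '20'])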
For Windows and Python 3.x, you could:
Use a Windows shell (cmd.exe, most probably the default on Windows):
result = subprocess.Popen('cd C:\\Users\\PathToMyPythonScript '
                          '&& python myPythonScript.py value1ofParam1 value2ofParam2',
                          shell=True, universal_newlines=True,
                          stdout=subprocess.PIPE, stderr=subprocess.PIPE)
output, error = result.communicate()
print(output)
Or stay in your Python environment (remove shell=True); you can write:
result = subprocess.Popen(["C:\Windows\System32\cmd.exe", "/k",
"cd", "C:\\Users\\PathToMyPythonScript",
"&&", "dir", "&&", "python", "myPythonScript.py",
"value1ofParam1", "value2ofParam2"], universal_newlines=True,
stdout=subprocess.PIPE, stderr=subprocess.PIPE)
output, error = result.communicate()
print(output)
Example of a script file you can call to try it out (the "MyPythonScript.py"):
# =============================================================================
# This script just outputs the arguments you've passed to it
import sys
print('Number of arguments:', len(sys.argv), 'arguments.')
print('Argument List:', str(sys.argv))
# =============================================================================
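As a side note, on Python 3.5 and later the same round trip is shorter with subprocess.run, which starts the child, waits for it, and collects its output in one call; a minimal sketch with the same (placeholder) paths:

import subprocess

# cwd replaces the explicit 'cd'; no shell is involved
result = subprocess.run(
    ['python', 'myPythonScript.py', 'value1ofParam1', 'value2ofParam2'],
    cwd=r'C:\Users\PathToMyPythonScript',
    stdout=subprocess.PIPE, stderr=subprocess.PIPE,
    universal_newlines=True)
print(result.stdout)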
I wrote a simple extension for vscode, which registers a command, namely commandName.
In my package.json file I have:
"activationEvents": [
"onLanguage:python",
"onCommand:commandName"
]
I meant this command to be executed only for Python. To my knowledge, however, the activationEvents in my package.json mean that the command is activated when a Python file is opened or when the command is executed; the command can still, however, be executed within any language. I searched the documentation but found no way to restrict the command to certain languages.
Is there any way to achieve this target?
I'm afraid this is impossible for now. However you can work around this to avoid any side effects on other files.
If you bind this command to some key combination or to the context menu, you can use a when clause to limit the command to specific types of files.
It will still be possible to execute the command from the command palette, though. To work around this you can simply ignore it when the active file is not Python:
vsc.commands.registerTextEditorCommand('testCommand', editor => {
    if (editor.document.languageId !== 'python') {
        return
    }
    // command logic
})
Notice I used registerTextEditorCommand instead of a regular command. The difference is that this one requires the user to be in a text editor context to work, which is what you probably want here.
I am invoking a bash script from a Python script.
I want the bash script to add an element to the dictionary "d" in the Python script.
abc3.sh:
#!/bin/bash
rank=1
echo "plugin"

function reg()
{
    if [ "$1" == "what" ]; then
        python -c 'from framework import data;data(rank)'
        echo "iamin"
    else
        plugin
    fi
}

plugin()
{
    echo "i am plugin one"
}

reg $1
Python file:
import sys, os, subprocess
from collections import *

subprocess.call(["./abc3.sh what"], shell=True, executable='/bin/bash')

def data(rank, check):
    d[rank]["CHECK"] = check
    print d[1]["CHECK"]
If I understand correctly, you have a Python script that runs a shell script, which in turn runs a new Python script. And you'd want the second Python script to update a dictionary in the first script. It will not work like that.
When you run your first Python script, it creates a new Python process, which interprets each instruction from your source script.
When it reaches the instruction subprocess.call(["./abc3.sh what"], shell=True, executable='/bin/bash'), it spawns a new shell (bash) process, which in turn interprets your shell script.
When the shell script reaches python -c <commands>, it invokes a new Python process. This process is independent from the initial Python process (even if you run the same script file).
Because each of these scripts runs in a different process, they don't have access to each other's data (the OS makes sure that each process is independent from the others, except for specific inter-process communication methods).
What you need to do: use some kind of inter-process mechanism so that the initial Python script gets data back from the shell script. You may, for example, read data from the shell's standard output, using https://docs.python.org/3/library/subprocess.html#subprocess.check_output
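For instance, here is a minimal sketch of the check_output route, reusing the abc3.sh script and the dictionary d from the question (the exact keys are placeholders):

import subprocess

# check_output runs the child, waits for it to finish,
# and returns everything it wrote to stdout (as bytes)
out = subprocess.check_output(['./abc3.sh', 'what'])

d = {1: {}}
d[1]["CHECK"] = out.decode().strip()  # store the shell's output in the dict
print(d[1]["CHECK"])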
Let's suppose that you have a shell plugin that echoes the value:
echo $1 12
The mockup Python script looks like this (I'm on Windows/MSYS2, BTW, hence the strange paths for a Linux user):
import subprocess

p = subprocess.Popen(args=[r'C:\msys64\usr\bin\sh.exe', "-c", "C:/users/jotd/myplugin.sh myarg"],
                     stdout=subprocess.PIPE, stderr=subprocess.PIPE)
o, e = p.communicate()
p.wait()
if len(e):
    print("Warning: error found: " + e.decode())
result = o.strip()

d = dict()
d["TEST"] = result
print(d)
It prints the dictionary, proving that the argument was passed to the shell and came back processed.
Note that stderr has been captured separately to avoid it being mixed up with the results; it is printed to the console if an error occurs.
{'TEST': b'myarg 12'}
I have a problem I am hoping someone can help with.
I am running a Perl script that calls a Python script with a set of arguments, as below:
my $outText = `sudo /usr/bin/python /usr/local/bin/tacms/scriptname.py $pNumber $crnNumber`;
The script is supposed to process information based on the arguments passed and then print output to the terminal, which is saved in the variable $outText.
There have been instances where this has failed, and I suspect it is a timeout. So how would I increase the timeout period of the backticks?
Yes, it was the script I was calling, so what I did was: if the result in $outText comes back empty, repeat the whole call, else continue, i.e.:
my $outText = `sudo /usr/bin/python /usr/local/bin/tacms/scriptname.py $pNumber $crnNumber`;
if ($outText eq "") {
    $outText = `sudo /usr/bin/python /usr/local/bin/tacms/scriptname.py $pNumber $crnNumber`;
    # use variable here
}
else {
    # use variable here
}
At least this way my system retries at least once before failing. That was the best solution I could come up with. The Python script being called was calling a SOAP web service, so sometimes it would time out.
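Another option, if the root cause really is the SOAP call hanging, is to bound the wait inside the Python script itself rather than retrying from Perl. This is only a sketch under that assumption; make_soap_call, p_number and crn_number are hypothetical stand-ins for the real web-service code and arguments:

import socket

# Any network I/O built on sockets (which includes most SOAP client
# libraries) will now raise socket.timeout instead of blocking forever.
socket.setdefaulttimeout(30)  # seconds

try:
    print(make_soap_call(p_number, crn_number))  # hypothetical helper
except socket.timeout:
    pass  # print nothing so the Perl side sees "" and retries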
I have this script below where I start a Python program.
The Python program outputs to stdout/the terminal, but I want the program to be started silently via the rc script.
I can start and stop the program perfectly, and it also creates the log file, but doesn't write anything to it. I tried a lot of different ways, even using daemon as a starter.
Where is my problem?
#!/bin/sh
# REQUIRE: DAEMON
# KEYWORD: shutdown
. /etc/rc.subr
location="/rpiVent"
name="rpiVentService"
rcvar=`set_rcvar`
command="$location/$name"
#command_args="> $location/$name.log"  # removed
command_interpreter="/usr/bin/python"
load_rc_config $name
run_rc_command "$1"
Redirection with > is a feature of the shell, not an actual part of the command line. When a command is invoked programmatically, the arguments given to it cannot contain shell directives (unless the parent process explicitly runs the command through a shell, as Python's subprocess.Popen(shell=True) does).
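For comparison, here is how the same redirection looks from Python without any shell: open the log file yourself and hand it to the child as its stdout. A minimal sketch using the paths from the question:

import subprocess

# The parent opens the file; the child just inherits it as stdout,
# so no '>' appears anywhere in the argument list.
with open('/rpiVent/rpiVentService.log', 'w') as log:
    subprocess.call(['/usr/bin/python', '/rpiVent/rpiVentService'], stdout=log)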
What you can do in this case is wrap your command (/rpiVent/rpiVentService) in a shell script, then invoke that shell script from the FreeBSD rc script:
Create /rpiVent/run.sh:
#!/bin/sh
/rpiVent/rpiVentService > /rpiVent/rpiVentService.log
and then use this as the command (no args needed).
The correct way to do this is probably by "overriding" the start command using the start_cmd variable, like this:
#!/bin/sh
# REQUIRE: DAEMON
# KEYWORD: shutdown
. /etc/rc.subr
location="/rpiVent"
name="rpiVentService"
rcvar=`set_rcvar`
load_rc_config $name
command="$location/$name"
command_interpreter="/usr/bin/python"
start_cmd=rpivent_cmd
rpivent_cmd()
{
    $command_interpreter $command > $location/$name.log
}
run_rc_command "$1"