Calling a python script with args from another python script

I am still a newbie to Python, so apologies in advance. I have found related topics on this but didn't find the best solution. (Run a python script from another python script, passing in args)
Basically, I have a Python script (scriptB.py) that takes a config file as an argument and does some stuff. I need to call this script from another Python script (scriptA.py).
If I had no arguments to pass, I could have just done
import scriptB
However, things got a little complicated because we need to pass the config file (myconfig.yml) as an argument.
One of the suggestions was to use;
os.system('python scriptB.py myconfig.yml')
But this is generally reported as not a recommended approach, and it often does not work.
Another suggestion was to use:
import subprocess
subprocess.Popen("scriptB.py myconfig.yaml", shell=True)
I am not very sure if this is common practice.
Just want to point out that neither script has a main() function inside it.
Please advise on the best way to handle this.
Thanks,

This should work just fine:
from subprocess import Popen, PIPE

Popen(['python', '/full_path/scriptB.py', 'myconfig.yaml'], stdout=PIPE, stderr=PIPE)
See https://docs.python.org/3/library/subprocess.html#replacing-os-popen-os-popen2-os-popen3
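If you also need to read what scriptB.py prints, you can wait for it and collect its output streams via communicate(). A minimal sketch along the same lines (the /full_path/ placeholder is kept from above):
from subprocess import Popen, PIPE

# launch scriptB.py and capture both output streams
p = Popen(['python', '/full_path/scriptB.py', 'myconfig.yaml'], stdout=PIPE, stderr=PIPE)
out, err = p.communicate()  # blocks until the script finishes
print(out.decode('utf-8'))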

If you really need to run a separate process, using the multiprocessing library is probably best. I would make an actual function inside scriptB.py that does the work. In the example below, config_handler is taken to be a function inside scriptB.py that actually takes the config file path argument.
1.) Create a function that will handle calling your external Python script; also import your script and the function inside it that takes arguments:
scriptA.py: importing config_handler from scriptB
import multiprocessing
from scriptB import config_handler

def other_process(**kwargs):
    # forward the keyword arguments (name, target, args) straight to multiprocessing.Process
    p = multiprocessing.Process(**kwargs)
    p.start()
2.) Then just call the process and feed your arguments to it:
scriptA.py: calling scriptB.py function, config_handler
other_process(name="config_process_name", target=config_handler, args=("myconfig.yml",))
Opinion:
From the information you have provided, I imagine you could manage to do this without separate processes. Just do things in sequence and make scriptB.py a library with a function you use in scriptA.py, as sketched below.
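A minimal sketch of that library-style scriptB.py, assuming the work lives in a config_handler function (the function body is an illustrative placeholder, not the real script):
# scriptB.py
import sys

def config_handler(config_path):
    # do the real work with the config file here (placeholder)
    print('Processing config:', config_path)

if __name__ == '__main__':
    # still usable from the command line: python scriptB.py myconfig.yml
    config_handler(sys.argv[1])
scriptA.py can then simply do from scriptB import config_handler and call it directly, with or without a separate process.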

It seems you got all your answers in the old thread, but if you really want to run it through the OS rather than through Python, this is what I do:
from subprocess import run, PIPE, DEVNULL
your_command = './scriptB.py myconfig.yaml'
run(your_command.split(), stdout=PIPE, stderr=DEVNULL)
In case you need the output:
output = run(your_command.split(), stdout=PIPE, stderr=DEVNULL).stdout.decode('utf-8')
If scriptB.py has a shebang line telling the shell it's a Python script (and is executable), it should run correctly.
The path can be either relative or absolute.
This is for Python 3.x.
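If you also want the call to fail loudly when scriptB.py exits with a non-zero status, run() accepts check=True; a sketch under the same assumptions:
from subprocess import run, PIPE, CalledProcessError

try:
    result = run(['./scriptB.py', 'myconfig.yaml'], stdout=PIPE, check=True)
    print(result.stdout.decode('utf-8'))
except CalledProcessError as e:
    print('scriptB.py failed with exit code', e.returncode)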

Related

How to run a .py file from a .py file in an entirely different project

For the life of me I can't figure this one out.
I have 2 applications built in Python, so 2 projects in different folders. Is there a command to say, in the first application, something like: run file2 from documents/project2/test2.py?
I tried something like os.system('') and exec(), but that only seems to work if it's in the same folder. How can I give a command a path like documents/project2 and then, for example:
exec(documents/project2 python test2.py) ?
short version:
Is there a command that runs python test2.py while that test2 is in a completely different folder/project?
Thanks for all feedback!
There's a number of approaches to take.
1 - Import the .py
If the path to the other Python script can be made relative to your project, you can simply import the .py. This will cause all the code at the 'root' level of the script to be executed and makes functions as well as type and variable definitions available to the script importing it.
Of course, this only works if you control how and where everything is installed. It's the most preferable solution, but only works in limited situations.
from ..other_package import myscript
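If the other project is not laid out as an importable package, a common workaround is to put its folder on sys.path before importing. A sketch, using the documents/project2 path and test2 name from the question:
import sys

# make the other project's folder importable (adjust the path to your setup)
sys.path.insert(0, 'documents/project2')

import test2  # runs test2.py's top-level code, just like any other import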
2 - Evaluate the code
You can load the contents of the Python file like any other text file and execute them. This is considered more of a security risk but, given the interpreted nature of Python, under normal circumstances it is not much worse than an import.
Here's how:
with open('/path/to/myscript.py', 'r') as f:
    exec(f.read())
Note that, if you need to pass values into the code inside the script, or get values out of it, you probably want to use files for the exchange in this case.
I'd consider this the least preferable solution, due to it being a bit inflexible and not very secure, but it's definitely very easy to set up.
3 - Call it like any other external program
From a Python script, you can call any other executable, that includes Python itself with another script.
Here's how:
from subprocess import run
run(['python', 'path/to/myscript.py'])
This is generally the preferable way to go about it. You can use the command line to interface with the script, and capture the output.
You can also pipe in text with stdin= or capture the output from the script with stdout=, using subprocess.Popen directly.
For example, take this script, called quote.py
import sys
text = sys.stdin.read()
print(f'In the words of the poet:\n"{text}"')
This takes any text from standard input and prints it, with some extra text, to standard output like any Python script. You could call it like this:
dir | python quote.py
To use it from another Python script:
from subprocess import Popen, PIPE
s_in = b'something to say\nright here\non three lines'
p = Popen(['python', 'quote.py'], stdin=PIPE, stdout=PIPE)
s_out, _ = p.communicate(s_in)
print('Here is what the script produced:\n\n', s_out.decode())
Try this:
exec(open("FilePath").read())
It should work if you got the file path correct.
Mac example:
exec(open("/Users/saudalfaris/Desktop/Test.py").read())
Windows example:
exec(open("C:\Projects\Python\Test.py").read())

Import results of a C++ program into Python

I'm currently dealing with some Python-based Squish GUI tests. Some of these tests call another tool, written in C++ and built as an executable. I have full access to that tool and I'm able to modify it. The tests call it via the command line and currently evaluate the exit code, creating a pass or fail depending on the exit code's value.
I think there is a better way to do it, right? One problem is that the exit code is limited to uint8 on Unix systems, and I would like to be able to share more than just an error code with my Python script.
My first idea was to print everything to a file in JSON or XML and read that file back. But this somehow feels wrong to me. Does anybody have a better idea?
When I first read the question, I immediately thought piping the output would work. Check out this link to get a better idea:
Linux Questions Piping
If this doesn't work, I do think writing your output to a file and reading it with your Python script would get the job done, as sketched below.
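A minimal sketch of the piping idea, assuming the C++ tool is modified to print its results as JSON on stdout (the ./mytool name and its output format are assumptions for illustration):
import json
import subprocess

# run the C++ tool and capture whatever it prints on stdout
proc = subprocess.run(['./mytool'], stdout=subprocess.PIPE, check=True)

# parse the JSON the tool printed; no uint8 exit-code limit here
results = json.loads(proc.stdout.decode('utf-8'))
print(results)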
You can capture the output of the external process via Python and process it as you see fit.
Here is a very simple variant:
import os
import subprocess

def main():
    s = os_capture(["ls"])
    if "ERROR" in s:
        test.fail("Executing 'ls' failed.")

def os_capture(args, cwd=None):
    if cwd is None:
        cwd = os.getcwd()
    stdout = subprocess.Popen(
        args=args,
        cwd=cwd,
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT).communicate()[0]
    return stdout.decode()  # decode the bytes so the "ERROR" in s check works on Python 3

Launch a python script from another script, with parameters in subprocess argument

To launch a Python script (it is needed for running an OLED display) from the terminal, I have to use the following bash command: python demo_oled_v01.py --display ssd1351 --width 128 --height 128 --interface spi --gpio-data-command 20. The parameters after .py are important; otherwise the script runs with default settings, and in my case it will not launch with default settings. Thus, those parameters are needed.
The problem arises when I need to launch my script from another Python script, instead of using bash commands in the terminal. To launch one of my Python scripts from a parent script, I have used:
import subprocess # to use subprocess
p = subprocess.Popen(['python', 'demo_oled_v01.py --display ssd1351 --width 128 --height 128 --interface spi --gpio-data-command 20'])
in my parent script but I got an error stating:
python: can't open file 'demo_oled_v01.py --display ssd1351 --width 128 --height 128 --interface spi --gpio-data-command 20': [Errno 2] No such file or directory
I suspect that adding the parameters --display ssd1351 --width 128 --height 128 --interface spi --gpio-data-command 20 after .py may be causing difficulty in launching the script. As mentioned, these parameters are otherwise essential to include when launching with bash commands in the terminal. How can I use subprocess with the required parameters to launch this script?
The subprocess library is interpreting all of your arguments, including demo_oled_v01.py, as a single argument to python. That's why python is complaining that it cannot locate a file with that name. Try running it as:
p = subprocess.Popen(['python', 'demo_oled_v01.py', '--display',
                      'ssd1351', '--width', '128', '--height', '128', '--interface', 'spi',
                      '--gpio-data-command', '20'])
See more information on Popen here.
This started as a comment thread, but got too long and complex.
Calling Python as a subprocess of Python is an antipattern. You can often fruitfully avoid this by refactoring your Python code so that your program can call the other program as a simple library (or module, or package, or what have you -- there is a bit of terminology here which you'll want to understand more properly ... eventually).
Having said that, there are scenarios where the subprocess needs to be a subprocess (perhaps it is designed to do its own signal handling, for example) so don't apply this blindly.
If you have a script like demo.py which contains something like
def really_demo(something, other, message='No message'):
    pass  # .... some functionality here ...

def main():
    import argparse
    parser = argparse.ArgumentParser(description='Basic boilerplate, ignore the details.')
    parser.add_argument('--something', dest='something')  # store argument in args.something
    parser.add_argument('--other', dest='other')  # ends up in args.other
    parser.add_argument('--message', dest='message', default='No message')  # used below
    # ... etc etc etc more options
    args = parser.parse_args()
    # This is the beef: once the arguments are parsed, pass them on
    really_demo(args.something, args.other, message=args.message)

if __name__ == '__main__':
    main()
Observe how when you run the script from the command line, __name__ will be '__main__' and so it will plunge into the main() function which picks apart the command line, then calls some other function -- in this case, really_demo(). Now, if you are calling this code from an already running Python, there is no need really to collect the arguments into a list and pass them to a new process. Just have your Python script load the function you want to call from the script, and call it with your arguments.
In other words, if you are currently doing
subprocess.call(['demo.py', '--something', 'foo', '--other', value, '--message', 'whatever'])
you can replace the subprocess call with
from demo import really_demo
really_demo('foo', value, message='whatever')
Notice how you are bypassing the main() function and all the ugly command-line parsing, and simply calling another Python function. (Pay attention to the order and names of the arguments; they may be quite different from what the command-line parser accepts.) The fact that it is defined in a different file is a minor detail which import handles for you, and the fact that the file contains other functions is something you can ignore (or perhaps exploit more fully if, for example, you want to access internal functions which are not exposed via the command-line interface in a way which is convenient for you).
As an optimization, Python won't import the same module twice, so you really need to make sure the functionality you need is not run at import time. Commonly, you import once, at the beginning of your script (though technically you can do it inside the def which needs it, for example, if there is only one place in your code which depends on the import) and then you call the functions you got from the import as many or as few times as you need them.
This is a lightning recap of a very common question. If this doesn't get you started in the right direction, you should be able to find many existing questions on Stack Overflow about various aspects of this refactoring task.
Add the full path to the Python script & separate all parameters.
EX:
import subprocess
p = subprocess.Popen(['python', 'FULL_PATH_TO_FILE/demo_oled_v01.py', '--display', 'ssd1351', '--width', '128', '--height', '128', '--interface', 'spi', '--gpio-data-command', '20'])
For Windows and Python 3.x, you could :
Use a Windows shell (cmd.exe, most probably the default on Windows):
import subprocess

result = subprocess.Popen('cd C:\\Users\\PathToMyPythonScript '
                          '&& python myPythonScript.py value1ofParam1 value2ofParam2',
                          shell=True, universal_newlines=True,
                          stdout=subprocess.PIPE, stderr=subprocess.PIPE)
output, error = result.communicate()
print(output)
Stay in your Python environment (remove shell=True); you can write:
result = subprocess.Popen(["C:\Windows\System32\cmd.exe", "/k",
"cd", "C:\\Users\\PathToMyPythonScript",
"&&", "dir", "&&", "python", "myPythonScript.py",
"value1ofParam1", "value2ofParam2"], universal_newlines=True,
stdout=subprocess.PIPE, stderr=subprocess.PIPE)
output, error = result.communicate()
print(output)
Example of a script file you can call to try this out (the myPythonScript.py above):
# =============================================================================
# This script just outputs the arguments you've passed to it
import sys
print('Number of arguments:', len(sys.argv), 'arguments.')
print('Argument List:', str(sys.argv))
# =============================================================================

execute python script with argparse inside python script

I have two different Python scripts; the first one uses argparse to get some additional arguments (I will call it arg.py) and the second one is my main script (main.py).
I want to call arg.py inside main.py but I don't know how to do so. I took a look at the execfile command but didn't manage to make it work.
I tried to put
execfile('arg.py -h')
in main.py, but Python tries to find the file 'arg.py -h', which doesn't exist. Does anyone know how to do that?
Thanks in advance
You can run it as a separate process using subprocess.call or subprocess.Popen. If you don't want to run it as a child process, then it starts to get more complicated (depending on the structure of your arg.py script).
execfile takes the file you pass to it and includes it in the current script -- much like a #include pre-processor directive in C/C++ (although this is done dynamically). Note that execfile exists only in Python 2. One really ugly option using execfile would be:
# completely untested.
import sys

def run_script(script_name, *args):
    _argv = sys.argv[:]  # save the real argv
    sys.argv = [script_name] + list(args)  # argparse expects argv[0] to be the script name
    execfile(script_name)
    sys.argv = _argv  # restore
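On Python 3, where execfile no longer exists, the standard-library runpy module gives an equivalent; a sketch of the same idea:
import runpy
import sys

def run_script(script_name, *args):
    _argv = sys.argv[:]  # save the real argv
    sys.argv = [script_name] + list(args)
    try:
        # run the script as if it were invoked as: python script_name args...
        runpy.run_path(script_name, run_name='__main__')
    finally:
        sys.argv = _argv  # restore even if the script raises
Note that arg.py's -h option will still call sys.exit(), which surfaces as a SystemExit exception in the calling script.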

when is 'commands' preferable to 'popen' subprocess?

I'm apprenticing into system administration without schooling, so sometimes I'm missing what is elementary information to many others.
I'm attempting to give my stdout line another argument before printing, but I'm not sure which process I should use, and I'm a bit fuzzy on the commands for subprocess if that's what I should be using.
My current code is:
f = open('filelist', 'r')
searchterm = f.readline()
f.close()
# takes a line from a separate file and binds it to a name so that it may be used later
import commands
commands.getoutput('print man searchterm')
This runs, but doesn't give me any output to the shell. My more important question, though, is: am I using the right command for my preferred process? Should I be using one of the subprocess commands instead? I tried playing around with popen, but I don't understand it fully enough to use it correctly.
I.e., I was running
subprocess.Popen('print man searchterm')
but I know without a doubt that's not how you're supposed to run it. Popen requires more arguments than I have given it, like the file location and where to send the output (stdout or stderr). But I was having trouble making these commands work. Would it be something like:
subprocess.Popen(pipe=stdout 'man' 'searchterm')
#am unsure how to give the program my arguments here.
I've been researching everywhere, but it is such a widely used process I seem to be suffering from a surplus of information rather than not enough. Any help would be appreciated, I'm quite new.
Preemptive thanks for any help.
The canonical way to get data from a separate process is to use subprocess (commands is deprecated):
import subprocess
p = subprocess.Popen(['print', 'man', 'searchterm'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdoutdata, stderrdata = p.communicate()
Note that some convenience functions exist for splitting strings into lists of arguments. Most notable is shlex.split, which will take a string and split it into a list the same way a shell does. (If nothing is quoted in the string, str.split() works just as well.)
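A quick illustration of the difference:
import shlex

print(shlex.split("man 'search term'"))   # ['man', 'search term'] -- quoting respected
print("man 'search term'".split())        # ['man', "'search", "term'"] -- naive whitespace split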
commands is deprecated in Python 2.6 and later, and has been removed in Python 3. There's probably no situation where it's preferable in new code, even if you are stuck with Python 2.5 or earlier.
From the docs:
Deprecated since version 2.6: The commands module has been removed in
Python 3. Use the subprocess module instead.
To run man searchterm in a separate process and display the result in the terminal, you could do this:
import subprocess
proc = subprocess.Popen('man searchterm'.split())
proc.communicate()
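Tying this back to the code in the question, the search term read from filelist can go straight into the argument list; a sketch (note the strip(), since readline() keeps the trailing newline):
import subprocess

with open('filelist') as f:
    searchterm = f.readline().strip()  # readline() keeps the trailing '\n'

proc = subprocess.Popen(['man', searchterm])
proc.communicate()  # wait for man to finish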
