I have a script (script1.py) of the following form:
#!/bin/python
import sys

def main():
    print("number of command line options: {numberOfOptions}".format(numberOfOptions = len(sys.argv)))
    print("list object of all command line options: {listOfOptions}".format(listOfOptions = sys.argv))
    for i in range(0, len(sys.argv)):
        print("option {i}: {option}".format(i = i, option = sys.argv[i]))

if __name__ == '__main__':
    main()
I want to import this script in another script (script2.py) and pass to it some arguments. The script script2.py could look something like this:
import script1
listOfOptions = ['option1', 'option2']
#script1.main(listOfOptions) insert magic here
How could I pass the arguments defined in script2.py to the main function of script1.py as though they were command line options?
So, for example, would it be Pythonic to do something like the following?
import script1
import sys
sys.argv = ['option1', 'option2']
script1.main()
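For instance, a minimal sketch of that idea that restores the global afterwards (note that sys.argv[0] conventionally holds the program name, so the real options start at index 1):
import sys
import script1

saved_argv = sys.argv
sys.argv = ['script1.py', 'option1', 'option2']  # argv[0] plays the program name
try:
    script1.main()
finally:
    sys.argv = saved_argv  # restore the global even if main() raises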
Separate command line parsing from the called function
For reusability of your code, it is practical to keep the acting function separate from the command line parsing.
scrmodule.py
def fun(a, b):
    # possibly do something here
    return a + b

def main():
    # process command line arguments
    a = 1  # will be read from command line
    b = 2  # will be read from command line
    # call fun()
    res = fun(a, b)
    print("a", a)
    print("b", b)
    print("result is", res)

if __name__ == "__main__":
    main()
Reusing it from another place
from scrmodule import fun

print("1 + 2 =", fun(1, 2))
#!/bin/python
# script1.py
import sys

# main() expects its argument from the test_passing_arg_to_module.py code.
def main(my_passing_arg):
    print("number of command line options: {numberOfOptions}".format(numberOfOptions = len(sys.argv)))
    print("list object of all command line options: {listOfOptions}".format(listOfOptions = my_passing_arg))
    print(my_passing_arg)

if __name__ == '__main__':
    main(sys.argv[1])  # when run directly, take the argument from the command line
# test_passing_arg_to_module.py
import script1

my_passing_arg = "Hello world"
# Call the main() function from script1.py,
# passing the my_passing_arg variable to main(my_passing_arg) in script1.py.
script1.main(my_passing_arg)
##################
# Execute script
# $ python3.7 test_passing_arg_to_module.py
# Results:
# number of command line options: 1
# list object of all command line options: Hello world
# Hello world
Related
I have a script test.py and I want it to execute another script this_other_script.py which will return a list object. test.py looks like:
if __name__ == '__main__':
    someValue = this_other_script
    print(len(someValue))
this_other_script.py looks like:
if __name__ == '__main__':
    data = [a, b, c, d]
    return(data)
When I run test.py I receive an error of SyntaxError: 'return' outside function.
If this is due to program scope, I would have thought it would be OK for the calling program to be given a return value by a program it calls. I wouldn't expect this_other_script to access a variable that test.py didn't give it, so I'm not sure why this error is displayed.
in test.py:
if __name__ == '__main__':
    import this_other_script
    someValue = this_other_script.get_data()
    print(len(someValue))
in this_other_script.py:
def get_data():
    data = [1, 2, 3, 4]
    return data
alternative answer:
in test.py
if __name__ == '__main__':
    import this_other_script
    someValue = this_other_script.get_data()
    print(len(someValue))
in this_other_script.py:
def get_data():
    data = [1, 2, 3, 4]
    return data

if __name__ == '__main__':
    get_data()
This is my main python script:
import time
import subprocess

def main():
    while(True):
        a=input("Please enter parameter to pass to subprocess:")
        subprocess.Popen(args="python child.py")
        print(f"{a} was started")
        time.sleep(5)

if __name__ == '__main__':
    main()
This is python child script named child.py:
def main(a):
    while(True):
        print(a)

if __name__ == '__main__':
    main(a)
How to pass value to argument a which is in the child subprocess?
You need to use command line arguments, like this:
import time
import subprocess

def main():
    while True:
        a = input("Please enter parameter to pass to subprocess:")
        subprocess.Popen(["python", "child.py", a])
        print(f"{a} was started")
        time.sleep(5)

if __name__ == '__main__':
    main()
child.py:
import sys

def main(a):
    while True:
        print(a)

if __name__ == '__main__':
    a = sys.argv[1]
    main(a)
You may use subprocess.PIPE to pass data between your main process and the spawned subprocess.
Main script:
import subprocess

def main():
    for idx in range(3):
        a = input(
            'Please enter parameter to pass to subprocess ({}/{}): '
            .format(idx + 1, 3))
        print('Child in progress...')
        pipe = subprocess.Popen(
            args=['python', 'child.py'],
            stdin=subprocess.PIPE,
            stdout=subprocess.PIPE)
        pipe.stdin.write(str(a).encode('UTF-8'))
        pipe.stdin.close()
        print('Child output is:')
        print(pipe.stdout.read().decode('UTF-8'))

if __name__ == '__main__':
    main()
Child script:
import sys
import time

def main(a):
    for dummy in range(3):
        time.sleep(.1)
        print(a)

if __name__ == '__main__':
    a = sys.stdin.read()
    main(a)
Output:
>>> python main.py
Please enter parameter to pass to subprocess (1/3): qwe
Child in progress...
Child output is:
qwe
qwe
qwe
Please enter parameter to pass to subprocess (2/3): qweqwe
Child in progress...
Child output is:
qweqwe
qweqwe
qweqwe
Please enter parameter to pass to subprocess (3/3): 123
Child in progress...
Child output is:
123
123
123
The easiest way to pass arguments to a child process is to use command line parameters.
The first step is to rewrite child.py so that it accepts command line arguments. There is detailed information about parsing command line arguments in this question: How to read/process command line arguments? For this simple example though, we will simply access the command line arguments through sys.argv.
import sys

def main(a):
    while True:
        print(a)

if __name__ == '__main__':
    # the first element in the sys.argv list is the name of our program ("child.py"),
    # so the argument the parent process sends to the child will be the 2nd element
    a = sys.argv[1]
    main(a)
Now child.py can be started with an argument like python child.py foobar and the text "foobar" will be used as the value for the a variable.
With that out of the way, all that's left is to rewrite parent.py and make it pass an argument to child.py. The recommended way to pass arguments with subprocess.Popen is as a list of strings, so we'll do just that:
import time
import subprocess

def main():
    while True:
        a = input("Please enter parameter to pass to subprocess:")
        subprocess.Popen(["python", "child.py", a])  # pass a as an argument
        print(f"{a} was started")
        time.sleep(5)

if __name__ == '__main__':
    main()
Summary: I'd like to write python scripts that act like bash scripts on the command line, but then I'd also like to pipe them together easily in python. Where I'm having trouble is the glue to make the latter happen.
So imagine I wrote two scripts, script1.py and script2.py and I can pipe them together like so:
echo input_string | ./script1.py -a -b | ./script2.py -c -d
How do I get this behavior from within another python file?
Here's the way I know, but I don't like:
arg_string_1 = convert_to_args(param_1, param_2)
arg_string_2 = convert_to_args(param_3, param_4)
output_string = subprocess.check_output(
    "echo " + input_string + " | ./script1.py " + arg_string_1 + " | ./script2.py " + arg_string_2,
    shell=True)  # a shell is needed to interpret the pipes
If I didn't want to take advantage of multithreading, I could do something like this (?):
input1 = StringIO(input_string)
output1 = StringIO()
script1.main(param_1, param_2, input1, output1)
input2 = StringIO(output1.getvalue())
output2 = StringIO()
script2.main(param_3, param_4, input2, output2)
Here's the approach I was trying, but I got stuck at writing the glue. I'd appreciate either learning how to finish my approach below, or suggestions for a better design/approach!
My approach: I wrote script1.py and script2.py to look like:
#!/usr/bin/python3
...  # import sys and define "parse_args"

def main(param_1, param_2, input, output):
    for line in input:
        ...
        print(stuff, file=output)

if __name__ == "__main__":
    parameter_1, parameter_2 = parse_args(sys.argv)
    main(parameter_1, parameter_2, sys.stdin, sys.stdout)
Then I wanted to write something like this, but don't know how to finish:
pipe_out, pipe_in = ????
output = StringIO()
thread_1 = Thread(target=script1.main, args=(param_1, param_2, StringIO(input_string), pipe_out))
thread_2 = Thread(target=script2.main, args=(param_3, param_4, pipe_in, output))
thread_1.start()
thread_2.start()
thread_1.join()
thread_2.join()
output_str = output.getvalue()
For the "pipe in", uses sys.stdin with the readlines() method. (Using method read() would read one character at a time.)
For passing information from one thread to another, you can use Queue. You must define one way to signal the end of data. In my example, since all data passed between threads are str, I simply use a None object to signal the end of data (since it cannot appear in the transmitted data).
One could also use more threads, or use different functions in threads.
I did not include the sys.argvin my example to keep it simple. Modifying it to get parameters (parameter1, ...) should be easy.
import sys
from threading import Thread
from Queue import Queue  # Python 2; on Python 3 this is "from queue import Queue"

def stdin_to_queue(output_queue):
    for inp_line in sys.stdin.readlines():  # input one line at a time
        output_queue.put(inp_line, True, None)  # blocking, no timeout
    output_queue.put(None, True, None)  # signal the end of data

def main1(input_queue, output_queue, arg1, arg2):
    do_loop = True
    while do_loop:
        inp_data = input_queue.get(True)
        if inp_data is None:
            do_loop = False
            output_queue.put(None, True, None)  # signal end of data
        else:
            out_data = arg1 + inp_data.strip('\r\n').upper() + arg2  # or whatever transformation...
            output_queue.put(out_data, True, None)

def queue_to_stdout(input_queue):
    do_loop = True
    while do_loop:
        inp_data = input_queue.get(True)
        if inp_data is None:
            do_loop = False
        else:
            sys.stdout.write(inp_data)

def main():
    q12 = Queue()
    q23 = Queue()
    q34 = Queue()
    t1 = Thread(target=stdin_to_queue, args=(q12,))
    t2 = Thread(target=main1, args=(q12, q23, '(', ')'))
    t3 = Thread(target=main1, args=(q23, q34, '[', ']'))
    t4 = Thread(target=queue_to_stdout, args=(q34,))
    t1.start()
    t2.start()
    t3.start()
    t4.start()

main()
Finally, I tested this program (python2) with a text file.
head sometextfile.txt | python script.py
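As mentioned above, wiring in command line parameters could look roughly like this (a sketch only; it reuses stdin_to_queue, main1 and queue_to_stdout from the program above, and the argument positions are arbitrary):
import sys

def main():
    # e.g.  head sometextfile.txt | python script.py '(' ')'
    arg1 = sys.argv[1] if len(sys.argv) > 1 else '('
    arg2 = sys.argv[2] if len(sys.argv) > 2 else ')'
    q12 = Queue()
    q23 = Queue()
    t1 = Thread(target=stdin_to_queue, args=(q12,))
    t2 = Thread(target=main1, args=(q12, q23, arg1, arg2))
    t3 = Thread(target=queue_to_stdout, args=(q23,))
    for t in (t1, t2, t3):
        t.start()

main()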
Redirect the return value to stdout depending on whether the script is being run from the command line:
#!/usr/bin/python3
import sys

# Example function
def main(input):
    # Do something with input producing stuff
    ...
    return multipipe(stuff)

if __name__ == '__main__':
    def multipipe(data):
        print(data)
    input = parse_args(sys.argv)
    main(input)
else:
    def multipipe(data):
        return data
Each other script will have the same two definitions of multipipe. Now, use multipipe for output.
If you call all the scripts together from the command line, $ ./script1.py | ./script2.py, each will have __name__ == '__main__' and so multipipe will print it all to stdout to be read as an argument by the next script (and return None, so each function returns None, but you're not looking at the return values anyway in this case).
If you call them within some other python script, each function will return whatever you passed to multipipe.
Effectively, you can use your existing functions, just replace print(stuff, file=output) with return multipipe(stuff). Nice and simple.
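For instance, a driver composing two such modules in-process might look like this (script1 and script2 here stand for any modules written in the pattern above):
# Hypothetical driver: imported modules get the "return data" version of
# multipipe, so the main() functions compose directly.
import script1
import script2

intermediate = script1.main("input_string")  # returns stuff instead of printing it
result = script2.main(intermediate)
print(result)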
To use it with multithreading or multiprocessing, set the functions up so that each function returns a single thing, and plug them into a simple function that adds data to a multithreading queue. For an example of such a queueing system, see the sample at the bottom of the Queue docs. With that example, just make sure that each step in the pipeline puts None (or other sentinel value of your choice - I like ... for that since it's extremely rare that you'd pass the Ellipsis object for any reason other than as a marker for its singleton-ness) in the queue to the next one to signify done-ness.
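A minimal sketch of that queue-and-sentinel wiring (the stage() helper is invented for illustration, with ... as the sentinel):
from queue import Queue
from threading import Thread

def stage(func, inbox, outbox):
    # apply func to each item until the sentinel arrives, then forward it
    while True:
        item = inbox.get()
        if item is ...:  # Ellipsis marks done-ness
            outbox.put(...)
            break
        outbox.put(func(item))

q1, q2, q3 = Queue(), Queue(), Queue()
Thread(target=stage, args=(str.upper, q1, q2)).start()
Thread(target=stage, args=(lambda s: "<{}>".format(s), q2, q3)).start()

for word in ("a", "b"):
    q1.put(word)
q1.put(...)  # signal done-ness

item = q3.get()
while item is not ...:
    print(item)  # prints <A> then <B>
    item = q3.get()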
There is a very simple solution using the standard Popen class.
Here's an example:
# this is the master python program
import subprocess
import sys

# note the use of the stdin and stdout arguments here
process1 = subprocess.Popen(['./script1.py'], stdin=sys.stdin, stdout=subprocess.PIPE)
process2 = subprocess.Popen(['./script2.py'], stdin=process1.stdout)
process1.wait()
process2.wait()
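If the master program should collect the final output itself rather than let it reach the terminal, a hedged variant of the same idea (the two scripts below are assumed unchanged) is:
import subprocess
import sys

process1 = subprocess.Popen(['./script1.py'], stdin=sys.stdin, stdout=subprocess.PIPE)
process2 = subprocess.Popen(['./script2.py'], stdin=process1.stdout, stdout=subprocess.PIPE)
process1.stdout.close()  # lets script1 receive SIGPIPE if script2 exits early
output, _ = process2.communicate()
print(output.decode())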
The two scripts are:
#!/usr/bin/env python
# script1.py
import sys

for line in sys.stdin:
    print(line.strip().upper())
Here's the second one:
#!/usr/bin/env python
# script2.py
import sys

for line in sys.stdin:
    print("<{}>".format(line.strip()))
I have a python module file (func.py) and a unit test file (f_test.py):
# func.py
def f(x):
    return x + 1

x = input("Enter x: ")
print("f(x): " + str(f(x)))
and
# f_test.py
import unittest
from func import f

class MyTest(unittest.TestCase):
    def test(self):
        self.assertEqual(f(1), 2)
When I run f_test.py I expect the test suite to be executed (successfully).
Instead, however, I see the following output:
Finding files... done.
Importing test modules ... Enter x:
If I comment out the input/output lines from func.py then I get the expected behaviour.
How do I achieve it without modifying func.py?
When you import func, all the code inside is run. Running the definition def f(x)... is what creates the function f.
You can distinguish between importing and running a file by using if __name__=='__main__'.
For instance:
# func.py
def f(x):
    return x + 1

if __name__ == '__main__':
    x = input("Enter x: ")
    print("f(x): " + str(f(x)))
When you run func.py, __name__=='__main__' will be true. When you import it, it will be false (__name__ will be 'func' instead).
Of course, the correct answer is to modify func.py. If you absolutely, positively, won't modify func.py, then you could redirect standard input:
# f_test.py
import io
import sys
import unittest

oldstdin, sys.stdin = sys.stdin, io.StringIO('7')
try:
    from func import f
finally:
    sys.stdin = oldstdin

class MyTest(unittest.TestCase):
    def test(self):
        self.assertEqual(f(1), 2)
You need to add to the bottom of f_test.py:
if __name__ == '__main__':
    unittest.main()
This way the tests will be executed when the file is run (I forget this more times than I would like to admit).
I tried to word the question right, but what I'm trying to do is check the list printed to stdout after the while statement. I mock the user input for two iterations and break during the third iteration.
Here is my run code:
def main():
    pirateList = []
    maxLengthList = 6
    while len(pirateList) < maxLengthList:
        item = input("Argh! Enter the item: ")
        if item == "exit":
            break
        else:
            pirateList.append(item)
            print(pirateList)
    print(pirateList)

main()
Here is my test code; I should be expecting [bow, arrow]:
import unittest
from unittest.mock import patch
import io
import sys
from RunFile import main

class GetInputTest(unittest.TestCase):

    @patch('builtins.input', side_effect=["bow", "arrow", "exit"])
    def test_output(self, m):
        saved_stdout = sys.stdout
        try:
            out = io.StringIO()
            sys.stdout = out
            main()
            output = out.getvalue().strip()
            assert output.endswith('[bow, arrow]')
        finally:
            sys.stdout = saved_stdout

if __name__ == "__main__":
    unittest.main()
When I run this code the program just hangs. No errors or tracebacks.
The import statement you have,
from RunFile import main
actually runs the top-level code in RunFile.py, so the unguarded main() call executes at import time and asks for input. You should have the standard if-clause there:
if __name__ == "__main__":
main()
You might also want to change the stdout handling; here is an example:
from io import StringIO
import unittest
from unittest.mock import patch
from RunFile import main

class GetInputTest(unittest.TestCase):

    @patch('builtins.input', side_effect=["bow", "arrow", "exit"])
    @patch('sys.stdout', new_callable=StringIO)
    def run_test_with_stdout_capture(self, mock_output, mock_input):
        main()
        return mock_output.getvalue()

    def test(self):
        print("GOT: " + self.run_test_with_stdout_capture())

if __name__ == "__main__":
    unittest.main()
Do note that you cannot print inside the @patch'd sys.stdout -- it will get captured!
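If you genuinely need debug output while sys.stdout is patched, one hedged escape hatch is the interpreter's saved original stream:
import sys

# sys.__stdout__ still refers to the original stdout even while sys.stdout
# is patched, so this print bypasses the capture.
print("debug message", file=sys.__stdout__)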