Running python program with arguments from another program - python

I want to call a Python program from my current program:

def multiply(a):
    return a * 5

mul = sys.argv[1]

I saved this file as test.py. From the current file I am calling it, but I want to run the code in parallel, like a multiprocessing queue, and it's not working.
What I tried so far:

import os
import sys
import numpy as np

cwd = os.getcwd()
l = [20, 5, 12, 24]
for i in np.arange(len(l)):
    os.system('python test.py multiply[i]')
I want to run the main script for all list items in parallel, like multiprocessing. How do I achieve that?

If you want to make your program work like that using os.system, you need to change the test.py file a bit:

import sys

def multiply(a):
    return a * 5

n = int(sys.argv[1])
print(multiply(n))

Written like this, the script takes the second element of sys.argv (the first one is just the name of the file), converts it to an integer, and passes it to your function (multiply by 5). Note that the calling loop also has to interpolate the actual value into the command, e.g. os.system('python test.py %d' % l[i]); as written, the string 'multiply[i]' is passed literally.
In these cases though, it's way better to just import this file as a module in your project.
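To get the parallelism the question asks for, here is a sketch using multiprocessing.Pool in the parent script, calling the function directly instead of shelling out with os.system (multiply is redefined inline to keep the sketch self-contained; in practice you would import it from test.py):

```python
from multiprocessing import Pool

def multiply(a):
    # same function as in test.py
    return a * 5

if __name__ == '__main__':
    l = [20, 5, 12, 24]
    with Pool() as pool:
        # each element is handed to a worker process;
        # results come back in the same order as the input
        results = pool.map(multiply, l)
    print(results)  # [100, 25, 60, 120]
```

pool.map blocks until all workers finish and preserves input order, so results lines up with l.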

Related

Run python file inside another

I am using python 3.5
I want to run a python script calling from another python script.
In particular, say I have script A, which ends with:

if __name__ == '__main__':
    args = argparser.parse_args()
    _main(args)

I am running script B, and inside script B it calls script A.
How do I do this simply by calling the main function of script A while running script B?
Please no os.system('python scriptA.py 1'); this is not what I want. Thanks.
Normally you can import it and call the main function, like:

import script_a
...
script_a._main()

Of course, it could be that script_a is not in your source tree, so you cannot simply import it, or script_a is somewhere else entirely.
Suppose the path of script_a is path_a. Then you can do:

import sys
sys.path.append(path_a)
import script_a
script_a._main()
If you want to pass the args to script_a, then in your script_b:

import script_a
...

if __name__ == '__main__':
    args = argparser.parse_args()
    script_a._main(args)
In script B, simply import script A:

import script_A

or

from script_A import *

Now you can access script A from script B.
Treat the file like a module and put import scriptA at the top of your file.
You can then use scriptA.main(1), where 1 is the argument you are passing to it.
N.B. when importing, do not put .py at the end.
If script A contains code that is not inside any function, then importing script A from script B will run that code immediately, before execution moves on to the __main__ block of script B. If you want to control when script A's code starts executing, put that code inside a function, such as def start().
Now import script A into script B as follows:
import ScriptA
And run the script A as
ScriptA.start()
NOTE: Make sure that script A and script B are in the same directory for this to work. Hope this solves your purpose. Cheers!

Program is running fine but cannot be imported (IndexError)

I'm using Python 2.7. The following is a simplified version of my script:

executor.py

import sys

def someCal(num):
    num = int(num)
    print num * num

someCal(sys.argv[1])

So python executor.py 13 prints 169; it works as expected.
And I have another script in which I want to use the someCal() function from executor.py, so I import it:
main.py
import executor
to_count = 999
executor.someCal(to_count)
I got the error message below when executing python main.py:

Traceback (most recent call last):
  File "main.py", line 3, in <module>
    import executor
  File "/Users/mac/executor.py", line 13, in <module>
    someCal(sys.argv[1])
IndexError: list index out of range
I don't know why it keeps mentioning line 13 of executor.py, because I am not using that part.
Thanks in advance!
from executor import *

This is a better method and will work as you wanted. There is no need for if __name__ == '__main__': with this method. You can also call your functions by their names, like:

from executor import *
print(someCal(10))
Edit, with an example:

executor.py

def someCal(num):
    num = int(num)
    return num * num

another.py

from executor import *
print(someCal(10))

Output:

>>>
100
>>>
If you are working with functions, you should return a value from the function rather than print it. If you return a value, you can print it later; but if you don't use return and just keep print num * num, you can't use the result later. You can try it and see. Returning a value is important in functions.
For your second question, check this one: What does if __name__ == "__main__": do?
Python is the best language for clear code, so you should keep it clear; sys is not necessary for you. And you don't need the if __name__ == '__main__': statement just to import things: remember that every .py file is a module, so just as you can import any module without that statement (like import random), you can import your own modules too. Just make sure they stay in the same directory so Python can find your own module/file. Keep it simple :-)
Another way to import a module is:

import executor as ChuckNorris
print(ChuckNorris.someCal(10))

The output is the same, of course. You can write whatever you want instead of ChuckNorris, but be sure that the name doesn't overlap with another name in your program. For example, say you have a file called Number.py that you import into another file, and you can't be sure whether that file already defines something called Number; you can write import Number as whatyouwanttocallit and avoid that problem.
When you import executor in main.py, it actually runs the top-level code of executor.py, as if you had executed python executor.py. I suggest you change your executor.py to:

if __name__ == '__main__':
    someCal(sys.argv[1])

Also, you might want to add defensive code like if len(sys.argv) > 1 before using sys.argv[1] directly.
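Combining the two suggestions, executor.py could be rewritten as follows (a sketch; the usage message is my own addition):

```python
import sys

def someCal(num):
    num = int(num)
    return num * num  # return the value so importers can use it

if __name__ == '__main__':
    # this block runs only when executed as a script, never on import
    if len(sys.argv) > 1:
        print(someCal(sys.argv[1]))
    else:
        print('usage: python executor.py <number>')
```

Now python executor.py 13 still prints 169, while import executor in main.py no longer touches sys.argv.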

Python subprocess.Popen : sys.stdout vs .txt file vs Cpickle.dump

I would like to know what is the best practice when you want to "return" something from a python script.
Here is my problem: I'm running a Python child script from a parent script using the subprocess.Popen method. I would like to get a tuple of two floats back from the execution of the first script.
Now, the first method I have seen is using sys.stdout and a pipe in the subprocess call, as follows:
child.py:
if __name__ == '__main__':
    myTuple = (x, y)
    sys.stdout.write(str(myTuple[0]) + ":" + str(myTuple[1]))
    sys.stdout.flush()
parent.py:
p = subprocess.Popen([python, "child.py"], stdout=subprocess.PIPE)
out, err = p.communicate()
Though I have read that this is not recommended in most cases, I don't know why...
The second way would be to write my tuple into a text file in script1.py and open it in script2.py. But I guess writing and reading a file takes some time, so I don't know if that is a better way.
Finally, I could use cPickle, dump my tuple, and load it from script2.py. I guess that would be a bit faster than using a text file, but would it be better than using sys.stdout?
What would be the proper way to do this?
EDIT:
I forgot to mention that I cannot use import, since parent.py actually generates child.py in a folder. Indeed, I am doing some multiprocessing.
parent.py creates, say, 10 directories, and child.py is copied into each of them. Then I run each of the child.py copies from parent.py on several processors, and I want parent.py to gather the results "returned" by all of them. So parent.py cannot import child.py, since it has not been generated yet; or maybe I can do some sort of dynamic import? I don't know...
EDIT 2:
Another edit to answer a question about why I proceed this way. child.py actually calls IronPython and another script to run a .NET assembly. The reason why I HAVE to copy all the child.py files into specific folders is that this assembly generates a resource file which it then uses itself. If I don't copy child.py (and the assembly, by the way) into each subfolder, the resource files are created at the root, which creates conflicts when I start several processes using the multiprocessing module. If you have suggestions about this overall architecture, they are more than welcome :).
Thanks
Ordinarily, you should use import other_module and call its functions:
import other_module
x, y = other_module.some_function(param='z')
If you can run the script, you also can import it.
If you want to use subprocess.Popen() then to pass a couple of floats, you could use json format: it is human readable, exact (in this case), and it is machine-readable. For example:
child.py:
#!/usr/bin/env python
import json
import sys
numbers = 1.2345, 1e-20
json.dump(numbers, sys.stdout)
parent.py:
#!/usr/bin/env python
import json
import sys
from subprocess import check_output
output = check_output([sys.executable, 'child.py'])
x, y = json.loads(output.decode())
Child.py actually calls ironpython and another script to run a .Net assembly. The reason why I HAVE to copy all the child.py files is because this assembly generates a resource file which is then used by it. If I don't copy child.py in each subfolders the resource files are copied at the root which creates conflicts when I call several processes using the multiprocessing module. If you have some suggestions about this overall architecture it is more than welcome :).
You can put the code from child.py into parent.py and call os.chdir() (after the fork) to execute each multiprocessing.Process in its own working directory, or use the cwd parameter (it sets the current working directory for the subprocess) if you run the assembly using the subprocess module:
#!/usr/bin/env python
import os
import shutil
import tempfile
from multiprocessing import Pool

def init(topdir='.'):
    dir = tempfile.mkdtemp(dir=topdir)  # parent is responsible for deleting it
    os.chdir(dir)

def child(n):
    return os.getcwd(), n * n

if __name__ == "__main__":
    pool = Pool(initializer=init)
    results = pool.map(child, [1, 2, 3])
    pool.close()
    pool.join()
    for dirname, _ in results:
        try:
            shutil.rmtree(dirname)
        except EnvironmentError:
            pass  # ignore errors

Main program variables in side programs (Python)

How do I use variables that exist in the main program from a side program? For example, if I had Var1 in the main program, how would I use it in the side program; how would I, for example, print it?
Here's what I have right now:
#Main program
Var1 = 1
#Side program
from folder import mainprogram
print(mainprogram.Var1)
I think this would work if importing the main program didn't also run it, because the main program executes other functions at import time. How do I import all of the main program's data without having it execute?
The only thing I thought of was to import that specific variable from the program, but I don't know how to do it. What I have in my head is:

from folder import mainprogram
from mainprogram import Var1

But it still executes mainprogram.
Your approach is basically correct (except for from folder import mainprogram, which looks a bit strange unless you want to import a function named mainprogram from a Python script named folder.py). You have also noticed that an imported module is executed on import. This is usually what you want.
But if there are parts of the module that you only want executed when it is run directly (as in python mainprogram.py) but not when doing import mainprogram, then wrap those parts of the program in an if block like this:

if __name__ == "__main__":
    # this code will not be run on import
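Here is a minimal sketch of the two files (the run function is a made-up stand-in for the main program's other work):

```python
# mainprogram.py
Var1 = 1

def run():
    print('doing the main work...')

if __name__ == '__main__':
    run()  # executed only when run directly, not on import
```

```python
# side program
from mainprogram import Var1  # imports the variable; run() is defined but not called
print(Var1)  # 1
```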

Initialization of Python program

I have a Python program that starts with a bunch of code where I basically import some modules, initialize some variables and call a few functions. Here's part of it:
import os
import math
import numpy as np
import scipy as sp
import scipy.optimize as opt
import scipy.constants as const
import random
import time

if os.name == 'nt': os.system('cls')
if os.name == 'posix': os.system('clear')

rows, columns = os.popen('stty size', 'r').read().split()

Inclination = math.radians(INCLINATION)
Period = PERIOD * const.day
Is there a way I can put all of this into a single module and just call it? I tried to put all of it into an external program and call that, but as far as I understood, everything gets done only locally in that module, not in the main code.
The idea would be to be able to reuse this "initialization module" in multiple programs.
Did you try putting all of that into some other .py file and then just doing from x import *? Then you should have all of those modules and constants in whatever file you called it from.
EDIT: If you're worried about performing all of that multiple times, don't be. On an import, Python checks to see if a module has already been loaded before it goes and loads that module again. For example say we have these files:
fileA.py => from initializer import *
fileB.py => import initializer
fileC.py => import fileA, fileB
When you run fileC.py, the code in initializer.py is only run once, even though both fileA and fileB successfully load it, and even though they do so in different ways.
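The caching can be observed directly via sys.modules, which maps module names to the already-loaded module objects:

```python
import sys
import random  # first import: random's code runs and the module is cached

first = sys.modules['random']
import random  # second import: no re-execution, the cached object is reused
assert random is first is sys.modules['random']
```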
You don't need any special mechanism. When you import this module, Python goes through it and all values are initialized, and then you can use them. Just import it; that is all.
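A minimal sketch of such an initializer module; the file name initializer.py and the INCLINATION value are illustrative assumptions:

```python
# initializer.py (hypothetical file name)
import math
import os

# clear the terminal once, at first import
if os.name == 'nt':
    os.system('cls')
elif os.name == 'posix':
    os.system('clear')

# shared constants, computed once per process
INCLINATION = 45.0  # assumed value for illustration
Inclination = math.radians(INCLINATION)
```

Every program can then start with from initializer import * (or import initializer) and use Inclination directly; later imports reuse the cached module, so the setup runs only once.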