I am trying to access a variable from a Python module which is being run as a script. The variable is defined inside the if __name__ == "__main__": block.
The code I am working with looks something like this:
MyCode.py
import os

cmd = 'python OtherCode.py'
os.system(cmd)  # Run the other module as a Python script
OtherCode.py
if __name__ == "__main__":
    var = 'This is the variable I want to access'
I was wondering if there was a way to access this variable while still running the OtherCode.py as a script.
You can use the runpy module to run the module as __main__ and then extract the variable from the dictionary it returns:
import runpy

module_globals = runpy.run_module("OtherCode", run_name="__main__")
desired_var = module_globals["var"]  # where "var" is the variable name you want
When you use os.system, it runs the specified command as a completely separate process. You would need to pass the variable through some sort of OS-level communication mechanism: stdout, a socket, shared memory, etc.
But since both scripts are Python, it would be much easier to just import OtherCode. (Note that OtherCode.py must be importable: on sys.path, e.g. in the same directory or part of a package.)
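For example, a minimal sketch of the import approach, assuming OtherCode.py is refactored so the value is exposed through a function (the name get_var is my invention):

# OtherCode.py (hypothetical refactor)
def get_var():
    return 'This is the variable I want to access'

if __name__ == "__main__":
    var = get_var()

# MyCode.py
import OtherCode

desired_var = OtherCode.get_var()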
While this fix might not be ideal (or what people are looking for if they google it), I ended up printing out the variable and then using subprocess to capture the stdout as a variable:
MyCode.py
import subprocess

cmd = 'python OtherCode.py'
cmdOutput = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE).stdout.read().decode().strip()
OtherCode.py
if __name__ == "__main__":
    var = 'This is the variable I want to access'
    print(var)
In this case, cmdOutput == var.
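As a side note, subprocess.check_output (available since Python 2.7) does the capture in one call; a sketch of the same idea:

import subprocess

var = subprocess.check_output(['python', 'OtherCode.py']).decode().strip()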
I'm working on cloning a Virtual Machine (VM) in a vCenter environment using this code. It takes command line arguments for the name of the VM, template, datastore, etc. (e.g. $ clone_vm.py -s <host_name> -p <password> -nossl ....)
I have another Python file where I've been able to list the datastore volumes in descending order of free storage. I have stored the datastore with the maximum available storage in a variable ds_max. (Let's call this file ds_info.py.)
I would like to use the ds_max variable from ds_info.py as the value of the datastore command line argument to clone_vm.py.
I tried importing the os module in ds_info.py and running os.system('python clone_vm.py ....arguments...'), but it did not take the ds_max variable as an argument.
I'm new to coding and am not confident to change the clone_vm.py to take in the Datastore with maximum free storage.
Thank you for taking the time to read through this.
I suspect there is something wrong in your os.system call, but you don't provide it, so I can't check.
Generally it is a good idea to use the current paradigm, and the received wisdom (TM) is that we use subprocess. See the docs, but the basic pattern is:
from subprocess import run
cmd = ["mycmd", "--arg1", "--arg2", "val_for_arg2"]
run(cmd)
Since this is just a list, you can easily drop arguments into it:
var = "hello"
cmd = ["echo", var]
run(cmd)
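Applied to your case, a sketch might look like the following; the flag names for clone_vm.py and the assumption that ds_info.py computes ds_max at import time are mine, so adjust to whatever clone_vm.py actually expects:

from subprocess import run

from ds_info import ds_max  # assumption: ds_info.py sets ds_max at import time

cmd = ["python", "clone_vm.py", "-s", "host_name", "--datastore", ds_max]
run(cmd)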
However, if your other command is in fact a Python script, it is more normal to refactor that script so that the main functionality is wrapped in a function, called main by convention:
# script 2
...

def main(arg1, arg2, arg3):
    do_the_work()

if __name__ == "__main__":
    args = get_sys_args()  # dummy fn
    main(*args)
Then you can simply import script2 from script1 and run the code directly:
# script 1
from script2 import main
args = get_args() # dummy fn
main(*args)
This is 'better' as it doesn't involve spawning a whole new Python process just to run Python code, and it generally results in neater code. But nothing stops you from calling a Python script the same way you'd call anything else.
I'm trying to run a program using Spyder instead of an IPython notebook because it currently runs faster.
The data is imported and extracted using
run util/file_reader.py C:/file_address
Obviously the run command doesn't work in normal Python, and I can't find an equivalent. I've looked at the various "how to replace IPython magic commands" Q&As on here and elsewhere, but I can't find one for the run command.
Is there a module or set of code that would work as an equivalent in normal python?
What you want is a bit weird. Precisely, the run magic runs the given file in the current IPython namespace as if it were the __main__ module. To get exactly the same effect requires a bit of effort:
with open("util/file_reader.py") as f:
    src = f.read()

# set command line arguments
import sys
sys.argv = ["file_reader.py", "C:/file_address"]

# use "__main__.py" as the file name, so the module thinks it's the main module
code = compile(src, "__main__.py", "exec")
exec(code)
It would be easier and better to define a main function in file_reader.py and then call it at the end of the file inside an if __name__ == "__main__": block.
eg.
util/file_reader.py
def main(filename):
    print(filename)

if __name__ == "__main__":
    import sys
    main(sys.argv[1])
So now you can easily run the code in this module by importing it and then calling the main function.
eg.
import util.file_reader
util.file_reader.main("C:/file_address")
I have a number of scripts that reference a Python program via the:
python -c "execfile('myfile.py'); readFunc(param='myParam', input='blahblah')"
interface. What I'd like to do is conceptually simple: Develop a more modular system with a "main" and a normal Python CLI interface that then calls these functions, but also MAINTAINS the existing interface, so the scripts built to use it still work.
Is this possible?
Ideally, if I were to call
python myFile readFunc myParam blabblah
It'd be something like:
def main(argv):
    readFunc(argv[2], argv[3])
I've tried something like that, but it hasn't quite worked. Is it possible to keep both interfaces/methods of invocation?
Thanks!
The first idea that comes to mind stems from the optional arguments to the execfile() function. You might be able to do something like this:
def main(args):
    results = do_stuff()
    return results

if __name__ == '__main__':
    import sys
    main(sys.argv[1:])

if __name__ == 'execfile':
    main(args)
... and then when you want to call it via execfile() you supply a dictionary for its optional globals argument:
python -c 'execfile("myfile.py", {"__name__": "execfile", "args": (1, 2, 3)}); ...'
This does require a little extra work when you're calling your functionality via -c, as you have to remember to pass that dictionary and override '__name__' ... though I suppose you could actually use any valid Python identifier. It's just that __name__ is closest to what you're actually doing.
The next idea feels a little dirty but relies on the apparent handling of the __file__ global identifier. That seems to be unset when calling python -c and set if the file is being imported or executed. So this works (at least for CPython 2.7.9):
#!/usr/bin/env python
foo = 'foo'

if __name__ == '__main__' and '__file__' not in globals():
    print "Under -c:", foo
elif __name__ == '__main__':
    print "Executed standalone:", foo
... and if you use that, please don't give me credit. It just looks ... WRONG.
If I understand this one
python myFile readFunc myParam blabblah
correctly, you want to parse argv[1] as a command name to be executed.
So just do
if __name__ == '__main__':
    import sys
    if len(sys.argv) < 2 or sys.argv[1].lower() == 'nop' or sys.argv[0] == '-c':  # old, legacy interface
        pass
    elif sys.argv[1].lower() == 'readfunc':  # new one
        readFunc(sys.argv[2:])
where the 2nd part gets executed on a direct execution of the file (either via python file.py readFunc myParam blabblah or via python -m file readFunc myParam blabblah)
The "nop" / empty argv branch comes to play when using the "legacy" interface: in this case, you most probably have given no cmdline arguments and thus can assume that you don't want to execute anything.
This leaves the situation as before: the readFunc identifier is exported and can still be used from within the -c script.
I need to interpret a few files (scripts) with an embedded Python interpreter, concurrently (to be more detailed: one script executes another script via Popen, and my app intercepts this and executes the script itself). I've found that this is called a sub-interpreter, and I'm going to use it. But I've read that a sub-interpreter does not have sys.argv:
The new environment has no sys.argv variable
I need to pass argv anyway, so how can I do it?
You might find it easier to modify each of the scripts to follow this pattern:
def run(*posargs, **argdict):
    """
    This does the work and can be called with:

    import scriptname
    scriptname.run(someargs)
    """
    # Code goes here and uses posargs[n] where it would use sys.argv[n+1]

if __name__ == "__main__":
    import sys
    run(*sys.argv[1:])
Then your main script can just call each of the subscripts in turn by simply calling the run method.
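For instance, a minimal sketch (script_a and script_b are placeholder names):

# main script, running each subscript in-process
import script_a
import script_b

script_a.run("first_arg", "second_arg")
script_b.run("other_arg", flag=True)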
You can use environment variables. Have the parent set them by updating the dict os.environ if it's in Python, or setenv() if in C or C++ etc. Then the children can read os.environ to get whatever strings they need.
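A rough sketch of that idea in Python (the variable name SCRIPT_ARGV and the file child.py are made up):

# parent: values put in os.environ are inherited by child processes
import os
import subprocess

os.environ["SCRIPT_ARGV"] = "arg1 arg2"
subprocess.call(["python", "child.py"])

# child.py: rebuild an argv-like list from the environment
import os

argv = os.environ.get("SCRIPT_ARGV", "").split()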
import os
import pdb

os.system("ToBuildOrNot.py MSS_sims")

for output in os.system:
    if ToBuildOrNot is True:
        print "The MSS_sims Needs To rebuilt"
    elif ToBuildOrNot is False:
        print "The MSS_sism does NOT Need to be Rebuilt"
    else:
        print "error"
Don't invoke a Python script from a Python script by using system, which spawns a whole other interpreter. Just import it. Like this:
import ToBuildOrNot
needsBuild = ToBuildOrNot.run() # or whatever you call your top-level function
Since ToBuildOrNot.py is a script now, make sure the "main" function is protected so it doesn't execute automatically on import. Most people do it this way in Python: What does if __name__ == "__main__": do?
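A minimal sketch of what that refactor could look like (the run function and its return value are assumptions about your script):

# ToBuildOrNot.py (hypothetical refactor)
def run():
    needs_build = True  # placeholder for the real decision logic
    return needs_build

if __name__ == "__main__":
    # still works when executed as a standalone script
    print(run())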