I have a script named test1.py which is not in a module. It just has code that should execute when the script itself is run. There are no functions, classes, methods, etc. I have another script which runs as a service. I want to call test1.py from the script running as a service.
For example:
File test1.py:
print "I am a test"
print "see! I do nothing productive."
File service.py:
# Lots of stuff here
test1.py # do whatever is in test1.py
I'm aware of one method which is opening the file, reading the contents, and basically evaluating it. I'm assuming there's a better way of doing this. Or at least I hope so.
The usual way to do this is something like the following.
test1.py
def some_func():
    print 'in test 1, unproductive'

if __name__ == '__main__':
    # test1.py executed as script
    # do something
    some_func()
service.py
import test1
def service_func():
    print 'service func'

if __name__ == '__main__':
    # service.py executed as script
    # do something
    service_func()
    test1.some_func()
This is possible in Python 2 using
execfile("test2.py")
See the documentation for the handling of namespaces, if important in your case.
In Python 3, this is possible using (thanks to @fantastory)
exec(open("test2.py").read())
However, you should consider using a different approach; your idea (from what I can see) doesn't look very clean.
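If you do go the exec route, here is a minimal sketch (Python 3; test2.py is the hypothetical target) that gives the executed code its own namespace instead of polluting the caller's globals:
# Run test2.py in an isolated namespace; setting __name__ makes any
# "if __name__ == '__main__':" guard inside test2.py fire as well.
script_globals = {"__name__": "__main__"}
with open("test2.py") as f:
    exec(f.read(), script_globals)
# Anything test2.py defined at top level is now available in script_globals.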
Another way:
File test1.py:
print "test1.py"
File service.py:
import subprocess
subprocess.call("test1.py", shell=True)
The advantage to this method is that you don't have to edit an existing Python script to put all its code into a subroutine.
Documentation: Python 2, Python 3
import os
os.system("python myOtherScript.py arg1 arg2 arg3")
Using os you can make calls directly to your terminal. If you want to be even more specific, you can concatenate your input string with local variables, i.e.:
import sys

command = 'python myOtherScript.py ' + sys.argv[1] + ' ' + sys.argv[2]
os.system(command)
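If the arguments may contain spaces or shell metacharacters, a rough equivalent (same hypothetical myOtherScript.py) passes them as a list and skips the shell entirely:
import subprocess
import sys

# Each argument is its own list element, so no quoting or escaping is needed.
subprocess.call([sys.executable, "myOtherScript.py", sys.argv[1], sys.argv[2]])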
If you want test1.py to remain executable with the same functionality as when it's called inside service.py, then do something like:
test1.py
def main():
    print "I am a test"
    print "see! I do nothing productive."

if __name__ == "__main__":
    main()
service.py
import test1
# lots of stuff here
test1.main() # do whatever is in test1.py
I prefer runpy:
#!/usr/bin/env python
# coding: utf-8
import runpy
runpy.run_path(path_name='script-01.py')
runpy.run_path(path_name='script-02.py')
runpy.run_path(path_name='script-03.py')
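run_path also returns the executed script's globals, and you can pass run_name='__main__' if the target guards code behind if __name__ == '__main__'. A small sketch (script and variable names are placeholders):
import runpy

# Execute the script as if it were the main program and keep its namespace.
result_globals = runpy.run_path('script-01.py', run_name='__main__')
print(result_globals.get('some_top_level_name'))  # hypothetical name defined by the script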
You should not be doing this. Instead, do:
test1.py:
def print_test():
    print "I am a test"
    print "see! I do nothing productive."
service.py
#near the top
from test1 import print_test
#lots of stuff here
print_test()
Use import test1 for the first use - it will execute the script. For later invocations, treat the script as an imported module and call reload(test1).
When reload(module) is executed:
Python modules’ code is recompiled and the module-level code reexecuted, defining a new set of objects which are bound to names in the module’s dictionary. The init function of extension modules is not called
A simple check of sys.modules can be used to invoke the appropriate action. To keep referring to the script name as a string ('test1'), use the __import__() builtin.
import sys
if 'test1' in sys.modules:
    reload(sys.modules['test1'])
else:
    __import__('test1')
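Note that the builtin reload is Python 2 only; under Python 3 the same idea can be written with importlib (a sketch, not from the original answer):
import importlib
import sys

if 'test1' in sys.modules:
    importlib.reload(sys.modules['test1'])  # re-run the module's top-level code
else:
    importlib.import_module('test1')        # first import executes the script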
As already mentioned, runpy is a nice way to run other scripts or modules from the current script.
By the way, it's quite common for a tracer or debugger to do this, and under such circumstances methods like importing the file directly or running the file in a subprocess usually do not work.
Care is also needed when using exec to run the code: you have to provide proper run_globals to avoid import errors and other issues. Refer to runpy._run_code for details.
Why not just import test1? Every Python script is a module. A better way would be to have a function, e.g. main/run, in test1.py, then import test1 and run test1.main(). Or you can execute test1.py as a subprocess.
I found the runpy standard library most convenient. Why? You have to consider the case where an error is raised in the test1.py script; with runpy you can handle it in the service.py code, getting both the traceback text (to write the error to a log file for later investigation) and the error object itself (to handle the error depending on its type). With the subprocess library I was not able to propagate the error object from test1.py to service.py, only the traceback output.
Also, compared to the "import test1.py as a module" solution, runpy is better because you don't need to wrap the code of test1.py in a main() function.
Example code, using the traceback module to capture the last error text:
import traceback
import runpy #https://www.tutorialspoint.com/locating-and-executing-python-modules-runpy
from datetime import datetime
try:
    runpy.run_path("./E4P_PPP_2.py")
except Exception as e:
    print("Error occurred during execution at " + str(datetime.now().date()) + " {}".format(datetime.now().time()))
    print(traceback.format_exc())
    print(e)
This process is somewhat unorthodox, but it works across all Python versions.
Suppose you want to execute a script named 'recommend.py' inside an 'if' condition; then use:
if condition:
    import recommend
The technique is different, but works!
Add this to your python script.
import os
os.system("exec /path/to/another/script")
This executes that command as if it were typed into the shell.
An example to do it using subprocess.
from subprocess import run
import sys
run([sys.executable, 'fullpathofyourfile.py'])
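If you also want the script's output, or want a non-zero exit status to raise an exception, the same call can be extended like this (a sketch; capture_output needs Python 3.7+):
from subprocess import run
import sys

# check=True raises CalledProcessError on failure; capture_output collects stdout/stderr.
result = run([sys.executable, 'fullpathofyourfile.py'],
             capture_output=True, text=True, check=True)
print(result.stdout)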
This is an example with subprocess library:
import subprocess
python_version = '3'
path_to_run = './'
py_name = '__main__.py'
# args = [f"python{python_version}", f"{path_to_run}{py_name}"] # works in python3
args = ["python{}".format(python_version), "{}{}".format(path_to_run, py_name)]
res = subprocess.Popen(args, stdout=subprocess.PIPE)
output, error_ = res.communicate()
if not error_:
    print(output)
else:
    print(error_)
According to the given example, this is the best way:
# test1.py
def foo():
    print("hello")

# test2.py
from test1 import foo  # might be different if in a different folder.
foo()
But according to the title, using os.startfile("path") is the best way, as it's small and it works (note that os.startfile is Windows-only). This would execute the file specified. My Python version is 3.x+.
Related
I have a Python program I'm building that can be run in either of 2 ways: the first is to call python main.py which prompts the user for input in a friendly manner and then runs the user input through the program. The other way is to call python batch.py -file- which will pass over all the friendly input gathering and run an entire file's worth of input through the program in a single go.
The problem is that when I run batch.py, it imports some variables/methods/etc from main.py, and when it runs this code:
import main
at the first line of the program, it immediately errors because it tries to run the code in main.py.
How can I stop Python from running the code contained in the main module which I'm importing?
Because this is just how Python works - keywords such as class and def are not declarations. Instead, they are real live statements which are executed. If they were not executed your module would be empty.
The idiomatic approach is:
# stuff to run always here such as class/def
def main():
    pass

if __name__ == "__main__":
    # stuff only to run when not called via 'import' here
    main()
It does require source control over the module being imported, however.
Due to the way Python works, it is necessary for it to run your modules when it imports them.
To prevent code in the module from being executed when imported, but only when run directly, you can guard it with this if:
if __name__ == "__main__":
# this won't be run when imported
You may want to put this code in a main() method, so that you can either execute the file directly, or import the module and call the main(). For example, assume this is in the file foo.py.
def main():
    print "Hello World"

if __name__ == "__main__":
    main()
This program can be run either by going python foo.py, or from another Python script:
import foo
...
foo.main()
Use the if __name__ == '__main__' idiom -- __name__ is a special variable whose value is '__main__' if the module is being run as a script, and the module name if it's imported. So you'd do something like
# imports
# class/function definitions
if __name__ == '__main__':
    # code here will only run when you invoke 'python main.py'
Unfortunately, you don't. That is part of how the import syntax works and it is important that it does so -- remember def is actually something executed, if Python did not execute the import, you'd be, well, stuck without functions.
Since you probably have access to the file, though, you might be able to look and see what causes the error. It might be possible to modify your environment to prevent the error from happening.
Put the code inside a function and it won't run until you call the function. You should have a main function in your main.py, guarded with the statement:
if __name__ == '__main__':
    main()
Then, if you call python main.py the main() function will run. If you import main.py, it will not. Also, you should probably rename main.py to something else for clarity's sake.
There was a Python enhancement proposal, PEP 299, which aimed to replace the if __name__ == '__main__': idiom with def __main__:, but it was rejected. It's still a good read to know what to keep in mind when using if __name__ == '__main__':.
You may write your "main.py" like this:
#!/usr/bin/env python
__all__=["somevar", "do_something"]
somevar=""
def do_something():
pass #blahblah
if __name__=="__main__":
do_something()
I did a simple test:
# test.py
x = 1
print("1, has it been executed?")

def t1():
    print("hello")
    print("2, has it been executed?")

def t2():
    print("world")
    print("3, has it been executed?")

def main():
    print("Hello World")
    print("4, has it been executed?")

print("5, has it been executed?")
print(x)

# while True:
#     t2()

if x == 1:
    print("6, has it been executed?")
#test2.py
import test
When executing or running test2.py, the result is:
1, has it been executed?
5, has it been executed?
1
6, has it been executed?
Conclusion: when the imported module does not use an if __name__ == "__main__": guard and the current module is run, the code in the imported module that is not inside a function is executed sequentially, while the code inside functions is not executed unless it is called.
In addition:
def main():
    # Put all the code you need to execute when this script is run directly.
    pass

if __name__ == '__main__':
    main()
else:
    # Put code you need to be executed only when the module is imported.
    pass
A minor error that can happen (at least it happened to me), especially when distributing Python scripts/functions that carry out a complete analysis, is calling the function directly at the end of the .py file that defines it.
The only things a user needed to modify were the input files and parameters.
Doing it that way, the function runs as soon as you import the file. For proper behavior, simply remove the call from inside the module and reserve it for the real calling file/function/portion of code.
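As a hypothetical before/after illustration (analysis.py and run_analysis are made-up names):
# analysis.py
def run_analysis(input_file):
    ...  # the complete analysis

# run_analysis("data.csv")      # BAD: runs on every import; remove this line

if __name__ == "__main__":      # GOOD: only runs when the file is executed directly
    run_analysis("data.csv")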
Another option is to use a binary environment variable, e.g. let's call it run_code. If run_code = 0 (False), structure main.py to bypass the code (the temporarily bypassed function will still be imported as a module). Later, when you are ready to use the imported function (now a module), set the environment variable run_code = 1 (True). Use os.environ to set and retrieve the variable, but be sure to convert it to an integer when retrieving (or restructure the if statement to read a string value).
in main.py:
import os

# set environment variable to 0 (False):
os.environ['run_code'] = '0'

def binary_module():
    # retrieve environment variable, convert to integer
    run_code_val = int(os.environ['run_code'])
    if run_code_val == 0:
        print('nope. not doing it.')
    if run_code_val == 1:
        print('executing code...')
        # [do something]
...in whatever script is loading main.py:
import os,main
main.binary_module()
OUTPUT: nope. not doing it.
# now flip the on switch!
os.environ['run_code'] = '1'
main.binary_module()
OUTPUT: executing code...
*Note: The above code presumes main.py and whatever script imports it exist in the same directory.
Although you cannot use import without running the code, there is a quick way to pass your variables across: use numpy.savez, which stores variables as NumPy arrays in a .npz file. Afterwards you can load the variables using numpy.load.
See the full description in the SciPy documentation.
Please note this only works for variables and arrays of variables, not for methods, etc.
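A minimal sketch of that idea (file and variable names are arbitrary):
import numpy as np

# writer script: persist the variables instead of exposing them via import
np.savez("shared_vars.npz", weights=np.arange(5), bias=np.array([0.1]))

# reader script: load them back without executing the writer's code
data = np.load("shared_vars.npz")
print(data["weights"], data["bias"])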
Try just importing the functions needed from main.py? So,
from main import SomeFunction
It could be that you've named a function in batch.py the same as one in main.py, and when you import main.py the program runs the main.py function instead of the batch.py function; doing the above should fix that. I hope.
I just want to get some ideas on how to do this...
I have a Python script that parses log files; I give it the log name as an argument, so running it looks like python myscript.py LOGNAME.
What I'd like to do is have two scripts: one that contains the functions, and another that has only the main function. I don't know how to pass the argument when I run it from the second script.
Here's my second script's code:
import sys
import os
path = "/myscript.py"
sys.path.append(os.path.abspath(path))
import myscript
mainFunction()
The error I have is:
script, name = argv
ValueError: need more than 1 value to unpack
Python (just as most languages) will share parameters across imports and includes.
Meaning that if you run:
python mysecondscript.py heeey
that will flow down into myscript.py as well, because sys.argv is the same for both.
So, check the arguments that you pass.
Script one
myscript = __import__('myscript')
myscript.mainfunction()
Script two
import sys

def mainfunction():
    print sys.argv
And do:
python script_one.py parameter
You should get:
["script_one.py", "parameter"]
You have several ways of doing it.
>>> execfile('filename.py')
Check the following link:
How to execute a file within the python interpreter?
I need to execute a python script from within another python-script multiple times with different arguments.
I know this sounds horrible but there are reasons for it.
Problem is however that the callee-script does not check if it is imported or executed (if __name__ == '__main__': ...).
I know I could use subprocess.Popen("python.exe callee.py -arg"), but that seems to be much slower than it should be, and I guess that's because python.exe is being started and terminated multiple times.
I can't import the script as a module regularly because of its design, as described in the beginning - upon import it will be executed without args because it's missing a main() method.
I can't change the callee script either.
As I understand it, I can't use execfile() either because it doesn't take arguments.
Found the solution for you. You can reload a module in Python and you can patch sys.argv.
Imagine echo.py is the callee script you want to call multiple times:
#!/usr/bin/env python
# file: echo.py
import sys
print sys.argv
You can do this in your caller script:
#!/usr/bin/env python
# file: test.py
import sys
sys.argv[1] = 'test1'
import echo
sys.argv[1] = 'test2'
reload(echo)
And call it, for example, with python test.py place_holder.
It will print out:
['test.py', 'test1']
['test.py', 'test2']
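Under Python 3 the reload builtin is gone, so the same trick would look roughly like this (assuming echo.py is adapted to use print(sys.argv)):
#!/usr/bin/env python3
# file: test.py
import importlib
import sys

sys.argv[1] = 'test1'
import echo                 # first run of echo.py

sys.argv[1] = 'test2'
importlib.reload(echo)      # re-executes echo.py with the patched argv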
Imagine a Python script that will take a long time to run. What will happen if I modify it while it's running? Will the result be different?
Nothing, because Python compiles your script into bytecode up front and runs that; the source file is not read again while the script runs.
However, if some kind of exception occurs, you may get a slightly misleading explanation, because line X may have different code than before you started the script.
When you run a python program and the interpreter is started up, the first thing that happens is the following:
the sys and builtins modules are initialized
the __main__ module is initialized, which is the file you gave as an argument to the interpreter; this causes your code to execute
When a module is initialized, its code is run, defining classes, variables, and functions in the process. The first step of your module (i.e. the main file) will probably be to import other modules, which will again be initialized in just the same way; their resulting namespaces are then made available for your module to use. The result of the importing process is, in part, a module (Python) object in memory. This object does have fields that point to the .py and .pyc content, but these are not evaluated anymore: module objects are cached and their source is never run twice. Hence, modifying the module afterwards on disk has no effect on the execution. It can have an effect when the source is read for introspective purposes, such as when exceptions are thrown, or via the module inspect.
This is why the check if __name__ == "__main__" is necessary when adding code that is not intended to run when the module is imported. Running the file as main is equivalent to that file being imported, with the exception of __name__ having a different value.
Sources:
What happens when a module is imported: The import system
What happens when the interpreter starts: Top Level Components
What's the __main__ module: __main__- Top-level code environment
This is a fun question. The answer is that "it depends".
Consider the following code:
"Example script showing bad things you can do with python."
import os
print('this is a good script')
with open(__file__, 'w') as fdesc:
fdesc.write('print("this is a bad script")')
import bad
Try saving the above as "/tmp/bad.py" then do "cd /tmp" and finally "python3 bad.py" and see what happens.
On my ubuntu 20 system I see the output:
this is a good script
this is a bad script
So again, the answer to your question is "it depends". If you don't do anything funky then the script is in memory and you are fine. But python is a pretty dynamic language so there are a variety of ways to modify your "script" and have it affect the output.
If you aren't trying to do anything funky, then probably one of the things to watch out for is imports inside functions.
Below is another example which illustrates the idea (save as "/tmp/modify.py" and do "cd /tmp" and then "python3 modify.py" to run). The fiddle function defined below simulates you modifying the script while it is running (if desired, you could remove the fiddle function, put in a time.sleep(300) at the second to last line, and modify the file yourself).
The point is that since the show function is doing an import inside the function instead of at the top of the module, the import won't happen until the function is called. If you have modified the script before you call show, then your modified version of the script will be used.
If you are seeing surprising or unexpected behavior from modifying a running script, I would suggest looking for import statements inside functions. There are sometimes good reasons to do that sort of thing so you will see it in people's code as well as some libraries from time to time.
Below is the demonstration of how an import inside a function can cause strange effects. You can try this as is vs commenting out the call to the fiddle function to see the effect of modifying a script while it is running.
"Example showing import in a function"
import time
def yell(msg):
"Yell a msg"
return f'#{msg}#'
def show(msg):
"Print a message nicely"
import modify
print(modify.yell(msg))
def fiddle():
orig = open(__file__).read()
with open(__file__, 'w') as fdesc:
modified = orig.replace('{' + 'msg' + '}', '{msg.upper()}')
fdesc.write(modified)
fiddle()
show('What do you think?')
No, the result will not reflect the changes once saved. The result will not change when running regular python files. You will have to save your changes and re-run your program.
If you run the following script:
from time import sleep

print("Printing hello world in: ")
for i in range(10, 0, -1):
    print(f"{i}...")
    sleep(1)
print("Hello World!")
Then change "Hello World!" to "Hello StackOverflow!" while it's counting down, it will still output "Hello World".
Nothing, as described in this answer. In addition, I experimented with the case where multiprocessing is involved. Save the script below as x.py:
import multiprocessing
import time

def f(x):
    print(x)
    time.sleep(10)

if __name__ == '__main__':
    with multiprocessing.Pool(2) as pool:
        for _ in pool.imap(f, ['hello'] * 5):
            pass
After running python3 x.py, and after the first two 'hello's were printed out, I modified ['hello'] to ['world'] and observed what happened. Nothing interesting happened. The result was still:
hello
hello
hello
hello
hello
Nothing happens. Once the script is loaded into memory and running, it will keep running as loaded.
An "auto-reloading" feature can be implemented anyway in your code, as Flask and other frameworks do.
This is slightly different from what you describe in your question, but it works:
my_string = "Hello World!"
line = input(">>> ")
exec(line)
print(my_string)
Test run:
>>> print("Hey")
Hey
Hello World!
>>> my_string = "Goodbye, World"
Goodbye, World
See, you can change the behavior of your "loaded" code dynamically.
It depends. If a Python script imports another file that has been modified, then it will of course load the newer version of that file when the import happens. But if the source doesn't point to any other file, it will just keep running the already-loaded script from memory for as long as it runs; the changes will be visible the next time you start it.
As for auto-applying changes as they are made: yes, @pcbacterio was correct, it's possible. A script that does this just remembers the last action it was doing and checks when the file is modified in order to rerun it (so it's almost invisible).