I was under the impression that a calling script can access the namespace of the called script. Following is a code section from my calling script:
x = 'python precision.py'
args = shlex.split(x)
print args
p=subprocess.Popen(args)
p.wait()
result.write("\tprecision = "+str(precision)+", recall = ")
where "precision" is a variable in the called script "precision.py".
But this gives a NameError. How could I fix this?
You can't access this. By the time you have arrived in the last line of your script, the called script has finished executing. Therefore its variables don't exist any more. You need to send this data to the calling script in some other way (such as the called script printing it on the standard output and the calling script getting it from there).
Even if it hadn't finished executing, I don't think you could access its variables. In other words, your impression is wrong :-)
subprocess.Popen() allows you to run a command and read from its standard output and/or write to its standard input. It doesn't make much sense to Popen a process and then wait for it to finish without communicating with it; that's pretty much like os.system().
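For example, here is a minimal sketch of that approach, assuming precision.py ends with something like print precision so the value appears on its standard output (result is the already-open output file from the snippet above):

import shlex
import subprocess

args = shlex.split('python precision.py')
p = subprocess.Popen(args, stdout=subprocess.PIPE)
out, _ = p.communicate()        # waits for precision.py to finish
precision = float(out.strip())  # parse whatever precision.py printed
result.write("\tprecision = " + str(precision) + ", recall = ")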
If you want a variable from precision.py, you can do something like the following:
import precision
print "precision variable value =", precision.precision
Of course, importing means executing any statements that are not inside classes or defs.
I am working on a program that requires to call another python script and truncate the execution of the current file. I tried doing the same using the os.close() function. As follows:
def call_otherfile(self):
    os.system("python file2.py")  # Execute new script
    os.close()                    # Close current script
Using the above code I am able to open the second file, but I am unable to close the current one. I know I'm making a silly mistake but I am unable to figure out what it is.
To do this you will need to spawn a subprocess directly. This can either be done with a more low-level fork and exec model, as is traditional in Unix, or with a higher-level API like subprocess.
import subprocess
import sys
def spawn_program_and_die(program, exit_code=0):
    """
    Start an external program and exit the script
    with the specified return code.
    Takes the parameter program, which is a list
    that corresponds to the argv of your command.
    """
    # Start the external program
    subprocess.Popen(program)
    # We have started the program, and can exit this interpreter
    sys.exit(exit_code)
spawn_program_and_die(['python', 'path/to/my/script.py'])
# Or, as in OP's example
spawn_program_and_die(['python', 'file2.py'])
Also, just a note on your original code: os.close corresponds to the Unix syscall close, which tells the kernel that your program no longer needs a given file descriptor. It is not meant to be used to exit the program.
If you don't want to define your own function, you could always just call subprocess.Popen directly like Popen(['python', 'file2.py'])
Use the subprocess module, which is the suggested way to do this kind of thing (executing a new script as a separate process). In particular, look at Popen for starting the new process; to terminate the current program you can use sys.exit().
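Put together, a minimal sketch of that suggestion (reusing the file name from the question):

import subprocess
import sys

subprocess.Popen(['python', 'file2.py'])  # start the new script
sys.exit(0)                               # then terminate the current one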
It's very simple: use os.startfile (Windows only) and after that use exit() or sys.exit(); it will work.
# file 1
os.startfile("file2.py")
exit()
Hi Python Community -
This is a basic terminology question about Argv and "invoke"
I'm new to Python and programming.
I was reading about the argv function in the sys module on openbookproject.com:
"The argv variable holds a list of strings read in from the command line when a Python script is run. These command line arguments can be used to pass information into a program at the same time it is invoked." http://openbookproject.net/thinkcs/python/english2e/ch10.html
It seems really clear from the definition, but I still wanted to double check: Does "at the time it is invoked" just mean, "when you run the program?" Would it be appropriate in a third way to say, "Argv can pass information into a program at runtime?"
Thank you.
Yes, that's what "invoked" means.
No, because "at runtime" covers the entire time window in which the process is running. It is precisely accurate to say that argv can pass information into a program at invocation.
Does "at the time it is invoked" just mean, "when you run the program?"
Yes. "at the same time it is invoked" implies that you can pass data to the program later while it is running too i.e., you can use command-line arguments (sys.argv) to pass data to the program "at the same time it is invoked" and some other means (IPC) to pass it later e.g., via standard input while it is running.
Would it be appropriate in a third way to say, "Argv can pass information into a program at runtime?"
No.
argv defines how the command line looks to the process; e.g., argv[0] sets one of the names of the process (another is derived from the path to the actual executable). On POSIX, argv is a parameter of the exec*() functions and is passed to C's main(argc, argv), which is the entry point of a C program.
In other words, argv is used to invoke (start/run) the process, but as @G Fetterman mentioned, "at runtime" may refer to the whole running time of the process, not only the invocation time. argv may be known even before the process is running, and argv usually stays the same after the process is started.
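A small sketch of the distinction (the script name is made up; run it as, say, python invoked_vs_runtime.py hello and then type lines at it):

import sys

# argv is fixed at invocation time:
print(sys.argv)

# data can still reach the program later, at runtime, e.g. via standard input:
for line in sys.stdin:
    print("received later: " + line.strip())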
Yes, that's correct. Consider the following code testargs.py:
import sys
print(sys.argv[1])
When you run this script as python testargs.py banana, you will see that it prints "banana". Note that argv[0] is the script name; any argument given after that is argv[1], argv[2], and so on. For more sophisticated use of command-line arguments, consider using the argparse module, which offers option handling, help text, and other features (a minimal sketch follows below).
Edit: I only covered the invoked portion, not the runtime question.
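As a minimal sketch of what testargs.py might look like with argparse (the argument name "word" is made up for illustration):

import argparse

parser = argparse.ArgumentParser(description="Print the given word.")
parser.add_argument("word", help="the value to print")
args = parser.parse_args()
print(args.word)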
I have a ROS command, rostopic pub toggle_led std_msgs/Empty, that basically starts once and keeps running until Ctrl+C is pressed.
Now, I would like to automate this command from Python. I checked Calling an external command in Python but it only shows how to start the command.
How would I start and stop running this process as and when I want?
How would I start and stop running this process as and when I want?
Well, you already know how to start it, as you said in the previous sentence.
How do you stop it? If you want to stop it exactly like a Ctrl-C,* you do that by calling send_signal on it, using CTRL_C_EVENT on Windows, or SIGTERM on Unix.** So:
import signal
import subprocess
try:
    sig = signal.CTRL_C_EVENT
except AttributeError:
    # CTRL_C_EVENT exists only on Windows; fall back to SIGTERM elsewhere
    sig = signal.SIGTERM
p = subprocess.Popen(['/path/to/prog', '-opt', '42', 'arg'])
# ... later
p.send_signal(sig)
If you only care about Linux (or *nix in general), you can make this even simpler: terminate is guaranteed to do the same thing as send_signal(SIGTERM). So:
import subprocess
p = subprocess.Popen(['/path/to/prog', '-opt', '42', 'arg'])
# ... later
p.terminate()
Since you asked in a comment "Could you please explain the various parameters to subprocess.Popen()": well, there are a whole lot of them (see Popen Constructor and Frequently Used Arguments in the docs), but I'm only using one of them here, the args parameter.
Normally, you pass a list to args, with the name of the program as the first element in the list, and each separate command-line argument as a separate element. But if you want to use the shell, you pass a string for args, and add a shell=True as another argument.
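For instance, reusing the placeholder command from above, a quick sketch of both forms:

import subprocess

# List form: no shell involved, each command-line argument is its own element.
subprocess.Popen(['/path/to/prog', '-opt', '42', 'arg'])

# String form with shell=True: the shell parses the command line.
subprocess.Popen('/path/to/prog -opt 42 arg', shell=True)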
* Note that "exactly like a Ctrl-C" may not actually be what you want on Windows, unless the program has a console and is a process group owner. This may mean you'll need to add creationflags=subprocess.CREATE_NEW_PROCESS_GROUP to the Popen call. Or it may not, e.g., if you use shell=True.
** In Python, you can usually ignore the platform differences between CTRL_C_EVENT and SIGTERM and always use the latter, but subprocess.send_signal is one of the few places you can't. On Windows, send_signal(SIGTERM) will call terminate instead of sending a Ctrl-C. If you don't actually care exactly how the process gets stopped, just that it gets stopped somehow, then of course you can use SIGTERM… but in that case, you might as well just call terminate.
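For completeness, a Windows-only sketch of the combination described in the first footnote (same placeholder command as above; both the flag and the signal exist only on Windows):

import signal
import subprocess

# Start the program in a new process group so a real Ctrl-C (CTRL_C_EVENT)
# can be delivered to it later.
p = subprocess.Popen(['/path/to/prog', '-opt', '42', 'arg'],
                     creationflags=subprocess.CREATE_NEW_PROCESS_GROUP)
# ... later
p.send_signal(signal.CTRL_C_EVENT)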
I am using a build system (waf) which is a wrapper around Python. There are some programs (Perl scripts, exes, etc.) calling the Python build system. When I execute the build scripts from cmd.exe, I need to find out the program that called it. My OS is Windows 7. I tried getting the parent PID in a Python module, and it returns "cmd" as the PPID and "python.exe" as the PID, so that approach did not help me find what I am looking for.
I believe I should be looking at some stack traces at the OS level, but I am not able to find out how to do it. Please help me with the approach I should take, or a possible code snippet. I just need to know the name of the script or program that called the system, e.g. caller.perl, callload.exe.
Thank you
I am not sure why this would be needed, but it is a fun problem in itself, so here are a few tips. Once you have the parent PID, loop through the processes and get the name, e.g.
using WMI
import wmi

c = wmi.WMI()
for process in c.Win32_Process():
    if process.ProcessId == ppid:
        print process.ProcessId, process.Name
I think you can do the same thing using the win32 API, e.g.
import win32api
import win32con
import win32process

processes = win32process.EnumProcesses()
for pid in processes:
    if pid == ppid:
        handle = win32api.OpenProcess(win32con.PROCESS_ALL_ACCESS,
                                      False, pid)
        exe = win32process.GetModuleFileNameEx(handle, 0)
This will work for simple cases where progA directly executes progB, but if there is a long chain of child processes in between, it may not be a good solution. The best way for the generic case would be for the calling program to announce its identity by passing it as an argument, e.g.
progB --calledfrom progA
Modify the Python script to add an argument to it, stating which file called it, then log it to a log file. All scripts calling it will have to identify themselves to the Python script via the argument vector.
For example:
foo.pl calls yourfile.py as:
yourfile.py /path/to/foo.pl
yourfile.py:
def main(argv):
    logger.print(argv[1])
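A runnable sketch of the same idea using the standard logging module (the log file name callers.log is made up):

import logging
import sys

logging.basicConfig(filename="callers.log", level=logging.INFO)

def main(argv):
    # argv[1] is the path of the script that invoked us
    logging.info("called from %s", argv[1])

if __name__ == "__main__":
    main(sys.argv)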
I was able to use Process Explorer to see the chain of processes called, and I retrieved the name by just traversing up to the parent. Thanks to all who replied.
First let me say that I know it's better to use the subprocess module, but I'm editing other people's code and I'm trying to make as few changes as possible, which includes avoiding the importing any new modules. So I'd like to stick to the currently-imported modules (os, sys, and paths) if at all possible.
The code is currently (in a file called postfix-to-mailman.py that some of you may be familiar with):
if local in ('postmaster', 'abuse', 'mailer-daemon'):
    os.execv("/usr/sbin/sendmail", ("/usr/sbin/sendmail", 'first@place.com'))
    sys.exit(0)
This works fine (though I think sys.exit(0) might never be called and thus be unnecessary).
I believe this replaces the current process with a call to /usr/sbin/sendmail, passing it the arguments /usr/sbin/sendmail (for argv[0], i.e. itself) and 'someaddress@someplace.com', and then passes the environment of the current process - including the email message on sys.stdin - to the new process.
What I'd like to do is essentially send another copy of the message before doing this. I can't use execv again because then execution will stop. So I've tried the following:
if local in ('postmaster', 'abuse', 'mailer-daemon'):
    os.spawnv(os.P_WAIT, "/usr/sbin/sendmail", ("/usr/sbin/sendmail", 'other@place.com'))
    os.execv("/usr/sbin/sendmail", ("/usr/sbin/sendmail", 'first@place.com'))
    sys.exit(0)
However, while it sends the message to other@place.com, it never sends it to first@place.com.
This surprised me because I thought using spawn would start a child process and then continue execution in the current process when it returns (or without waiting, if P_NOWAIT is used).
Incidentally, I tried os.P_NOWAIT first, but the message I got at other@place.com was empty, so at least with P_WAIT the message came through intact. But it still never got sent to first@place.com, which is a problem.
I'd rather not use os.system if I can avoid it, because I'd rather not go out to a shell environment (security issues, possible performance cost? I admit I'm being paranoid here, but if I can avoid os.system I'd still like to).
The only thing I can think of is that the call to os.spawnv is somehow consuming/emptying the contents of sys.stdin, but that doesn't really make sense either. Ideas?
While it might not make sense, that does appear to be the case:
import os

os.spawnv(os.P_WAIT, "/usr/bin/wc", ("/usr/bin/wc",))
os.execv("/usr/bin/wc", ("/usr/bin/wc",))
$ cat j.py | python j.py
4 6 106
0 0 0
In which case you might do something like this
import os
import sys
buf = sys.stdin.read()
wc = os.popen("usr/sbin/sendmail other#place.com","w")
wc.write(buf)
wc.close()
wc = os.popen("usr/sbin/sendmail first#place.com","w")
wc.write(buf)
wc.close()
sys.exit(0)
sys.stdin is a pipe, and pipes aren't seekable, so you can never rewind that file-like object to read its contents again. To actually invoke sendmail(1) twice, you need to save the contents of stdin, preferably in a temporary file, but if the data is guaranteed to have a limited size you could save it in memory instead.
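Here is a sketch of the temporary-file variant; note that it imports tempfile and subprocess, which the asker wanted to avoid, so take it only as an illustration of the idea:

import subprocess
import sys
import tempfile

# Save stdin once, then feed the same message to sendmail twice.
with tempfile.TemporaryFile(mode="w+") as tmp:
    tmp.write(sys.stdin.read())
    for recipient in ('other@place.com', 'first@place.com'):
        tmp.seek(0)
        subprocess.call(['/usr/sbin/sendmail', recipient], stdin=tmp)
sys.exit(0)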
But why go through the trouble? Do you specifically need the email copy to be a separately queued email (and if so, why)? Just add the wanted recipient in your original invocation of sendmail(1). The additional recipient will not be seen in the email headers.
if local in ('postmaster', 'abuse', 'mailer-daemon'):
    os.execv("/usr/sbin/sendmail", ("/usr/sbin/sendmail",
                                    'first@place.com',
                                    'otheruser@example.com'))
    sys.exit(0)
Oh, and the sys.exit(0) line will be executed if os.execv() for some reason fails. This'll happen if /usr/sbin/sendmail cannot be executed, e.g. if the executable file doesn't exist or isn't actually executable. In other words, this is an error condition that you should take care of.