Hi Python Community -
This is a basic terminology question about Argv and "invoke"
I'm new to Python and programming.
I was reading about the argv variable in the sys module on openbookproject.net:
"The argv variable holds a list of strings read in from the command line when a Python script is run. These command line arguments can be used to pass information into a program at the same time it is invoked." http://openbookproject.net/thinkcs/python/english2e/ch10.html
It seems really clear from the definition, but I still wanted to double-check: Does "at the time it is invoked" just mean "when you run the program"? Would it also be appropriate to say, "Argv can pass information into a program at runtime"?
Thank you.
Yes, that's what "invoked" means.
No, because "at runtime" covers the entire time window in which the process is running. It is precisely accurate to say that argv can pass information into a program at invocation.
Does "at the time it is invoked" just mean, "when you run the program?"
Yes. "at the same time it is invoked" means that the data is passed when the program starts. You can also pass data later, while the program is running, by other means, i.e., you use command-line arguments (sys.argv) to pass data "at the same time it is invoked" and some other mechanism (IPC), e.g., standard input, to pass it while it is running.
Would it be appropriate in a third way to say, "Argv can pass information into a program at runtime?"
No.
argv defines what the command line looks like to the process, e.g., argv[0] sets one of the names for the process (another is derived from the path to the actual executable). On POSIX, argv is a parameter of the exec*() functions and is passed to C main(argc, argv), the entry point of a C program.
In other words, argv is used to invoke (start/run) the process, but as @G Fetterman mentioned, "at runtime" may refer to the whole running time of the process, not only the invocation time. argv may be known even before the process is running, and argv usually stays the same after the process is started.
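That distinction can be sketched with a small subprocess example (the inline child script is just an illustration): one value arrives via argv at invocation, and another arrives via standard input while the child is already running.

```python
import subprocess
import sys

# Child: prints one value from argv (known at invocation),
# then one value read from stdin (supplied later, at runtime).
child = "import sys; print(sys.argv[1]); print(sys.stdin.readline().strip())"

p = subprocess.Popen([sys.executable, "-c", child, "at-invocation"],
                     stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True)
out, _ = p.communicate("while-running\n")
print(out)  # prints "at-invocation" then "while-running"
```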
Yes, that's correct. Consider the following code testargs.py:
import sys
print(sys.argv[1])
When you run this script as python testargs.py banana, you will see that it prints "banana". Note that argv[0] is the script name; any argument given after that is argv[1], argv[2], and so on. For more sophisticated handling of command-line arguments, consider the argparse module, which adds help text and other features.
Edit: I only covered the invoked portion, not the runtime question.
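A minimal argparse sketch, in case it helps; the "fruit" argument name is made up for illustration, and an explicit list is parsed instead of sys.argv so it runs standalone:

```python
import argparse

# Build a parser; "fruit" is a hypothetical positional argument.
parser = argparse.ArgumentParser(description="Print a fruit name.")
parser.add_argument("fruit", help="name of a fruit to print")

# Parse an explicit list instead of sys.argv so this example is self-contained.
args = parser.parse_args(["banana"])
print(args.fruit)  # prints "banana"
```

One nice side effect: argparse generates the -h/--help option for you automatically.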
Related
I am working on a program that needs to call another Python script and terminate the execution of the current file. I tried doing this using the os.close() function, as follows:
def call_otherfile(self):
    os.system("python file2.py")  # Execute new script
    os.close()  # Close current script
Using the above code I am able to open the second file, but I am unable to close the current one. I know I am making a silly mistake but can't figure out what it is.
To do this you will need to spawn a subprocess directly. This can either be done with a more low-level fork and exec model, as is traditional in Unix, or with a higher-level API like subprocess.
import subprocess
import sys

def spawn_program_and_die(program, exit_code=0):
    """
    Start an external program and exit the script
    with the specified return code.

    Takes the parameter program, which is a list
    that corresponds to the argv of your command.
    """
    # Start the external program
    subprocess.Popen(program)
    # We have started the program, and can exit this interpreter
    sys.exit(exit_code)

spawn_program_and_die(['python', 'path/to/my/script.py'])
# Or, as in OP's example
spawn_program_and_die(['python', 'file2.py'])
Also, just a note on your original code: os.close corresponds to the Unix syscall close, which tells the kernel that your program no longer needs a file descriptor. It is not meant to be used to exit the program.
If you don't want to define your own function, you could always just call subprocess.Popen directly like Popen(['python', 'file2.py'])
Use the subprocess module, which is the suggested way to do that kind of thing (executing a new script or process); in particular, look at Popen for starting a new process. To terminate the current program you can use sys.exit().
It's very simple: use os.startfile (Windows only) and after that use exit() or sys.exit().
# file 1
os.startfile("file2.py")
exit()
This question already has answers here:
How do I execute a program or call a system command?
I have a Python program that generates a string, say fileName. In fileName I hold something like "video1.avi" or "sound1.wav". Then I make an os call to start a program, ! program arg1 arg2, where my fileName is arg2. How can I achieve that on the fly, without making the whole program return a single string (the fileName) and then passing it to the shell line? How can I do that during execution? The script executes in Jupyter.
P.S. I am looping and changing the file name and I have to run that script at every loop.
If you want your script to run some outside program, passing in an argument, the way to do that is the subprocess module.
Exactly which function to call depends on exactly what you want to do. Just start it in the background and ignore the result? Wait for it to finish, ignore any output, but check that it returned success? Collect output so you can log it? I'm going to pick one of the many options arbitrarily, but read the linked docs to see how to do whichever one you actually want.
import subprocess

for thingy in your_loop:
    fileName = your_filename_creating_logic(thingy)
    try:
        subprocess.run(['program', 'arg1', fileName],
                       check=True)
        print(f'program ran on {fileName} successfully')
    except subprocess.CalledProcessError as e:
        print(f'program failed on {fileName} with #{e.returncode}')
Notice that I'm passing a list of arguments, with the program name (or full path) as the first one. You can throw in hardcoded strings like arg1 or --breakfast=spam, and variables like fileName. Because it's a list of strings, not one big string, and because it's not going through the shell at all (at least on Mac and Linux; things are a bit more complicated on Windows, but mostly it "just works" anyway), I don't have to worry about quoting filename in case it has spaces or other funky characters.
If you're using Python 3.4 or 2.7, you won't have that run function; just change it to check_call (and without that check=True argument).
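On those versions the loop body can be sketched like this; here the Python interpreter itself stands in for the hypothetical "program", so the example is runnable:

```python
import subprocess
import sys

fileName = "video1.avi"  # example value from the loop

try:
    # Stand-in for "program arg1 <fileName>": a child that just echoes its argument.
    # check_call raises CalledProcessError on a nonzero exit status.
    subprocess.check_call([sys.executable, "-c",
                           "import sys; print(sys.argv[1])", fileName])
    print('ran on', fileName, 'successfully')
except subprocess.CalledProcessError as e:
    print('failed on', fileName, 'with', e.returncode)
```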
I have a ROS code rostopic pub toggle_led std_msgs/Empty that basically starts once and keeps running until CTRL+C is pressed.
Now, I would like to automate this command from Python. I checked Calling an external command in Python but it only shows how to start the command.
How would I start and stop running this process as and when I want?
How would I start and stop running this process as and when I want?
Well, you already know how to start it, as you said in the previous sentence.
How do you stop it? If you want to stop it exactly like a Ctrl-C,* you do that by calling send_signal on it, using CTRL_C_EVENT on Windows, or SIGTERM on Unix.** So:
import signal
import subprocess

try:
    sig = signal.CTRL_C_EVENT  # Windows
except AttributeError:
    sig = signal.SIGTERM  # Unix

p = subprocess.Popen(['/path/to/prog', '-opt', '42', 'arg'])
# ... later
p.send_signal(sig)
If you only care about Linux (or *nix in general), you can make this even simpler: terminate is guaranteed to do the same thing as send_signal(SIGTERM). So:
import subprocess
p = subprocess.Popen(['/path/to/prog', '-opt', '42', 'arg'])
# ... later
p.terminate()
Since you asked in a comment "Could you please explain the various parameters to subprocess.Popen()": Well, there are a whole lot of them (see Popen Constructor and Frequently Used Arguments in the docs), but I'm only using one, the args parameter.
Normally, you pass a list to args, with the name of the program as the first element in the list, and each separate command-line argument as a separate element. But if you want to use the shell, you pass a string for args, and add a shell=True as another argument.
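Both forms can be sketched with the Python interpreter itself as a placeholder program:

```python
import subprocess
import sys

# List form: the program and each argument are separate list elements; no shell parsing.
subprocess.Popen([sys.executable, "-c", "print('list form')"]).wait()

# String form with shell=True: the shell parses the whole line, so quote carefully.
subprocess.Popen(f'"{sys.executable}" -c "print(42)"', shell=True).wait()
```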
* Note that "exactly like a Ctrl-C" may not actually be what you want on Windows, unless the program has a console and is a process group owner. This may mean you'll need to add creationflags=subprocess.CREATE_NEW_PROCESS_GROUP to the Popen call. Or it may not—e.g., if you use shell=True.
** In Python, you can usually ignore the platform differences between CTRL_C_EVENT and SIGTERM and always use the latter, but subprocess.send_signal is one of the few places you can't. On Windows, send_signal(SIGTERM) will call terminate instead of sending a Ctrl-C. If you don't actually care exactly how the process gets stopped, just that it gets stopped somehow, then of course you can use SIGTERM… but in that case, you might as well just call terminate.
In Linux. I have a c program that reads a 2048Byte text file as an input. I'd like to launch the c program from a Python script. I'd like the Python script to hand the c program the text string as an argument, instead of writing the text string to a file for the c program to then read.
How can a Python program launch a c program handing it a ~2K (text) data structure?
Also note, I cannot use "subprocess.check_output()". I have to use "os.system()". That's because the latter allows my c-program direct access to terminal input/output. The former does not.
You can pass it as an argument by just… passing it as an argument. Presumably you want to quote it rather than passing it as an arbitrary number of arguments that need to be escaped and so on, but that's easy with shlex.quote. For example:
import os
import shlex

with open('bigfile.txt') as infile:
    biginput = infile.read(2048)
os.system('cprogram {}'.format(shlex.quote(biginput)))
If you get an error about the argument or the command line being too long for the shell… then you can't do it. Python can't make the shell do things it can't do, and you refuse to go around the shell (I think because of a misunderstanding, but let's ignore that for the moment). So, you will need some other way to pass the data.
But that doesn't mean you have to store it in a file. You can use the shell from subprocess just as easily as from os.system, which means you can pass it to your child process's stdin:
import shlex
import subprocess

with subprocess.Popen('cprogram {}'.format(shlex.quote(biginput)),
                      shell=True, stdin=subprocess.PIPE) as p:
    p.communicate(biginput.encode())
Since you're using shell=True, and not replacing either stdout or stderr, it will get the exact same terminal that it would get with os.system. So, for example, if it's doing, say, isatty(fileno(stdout)), it will be true if your Python script is running in a tty, false otherwise.
As a side note, storing it in a tempfile.NamedTemporaryFile may not cost nearly as much as you expect it to. In particular, the child process will likely be able to read the data you wrote right out of the in-memory disk cache instead of waiting for it to be flushed to disk (and it may never get flushed to disk).
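If you do fall back to a temp file, a sketch of that approach; the inline child script is a stand-in for the C program, and it simply reports how many characters it read back:

```python
import os
import subprocess
import sys
import tempfile

biginput = "x" * 2048  # example payload

# Write the data to a named temp file the child can open by path.
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write(biginput)
    path = f.name

try:
    # Stand-in for the C program: read the file named by argv[1] and print its length.
    subprocess.check_call([sys.executable, "-c",
                           "import sys; print(len(open(sys.argv[1]).read()))", path])
finally:
    os.remove(path)
```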
I suspect that the reason you thought you couldn't use subprocess is that you were using check_output when you wanted check_call.
If you use check_output (or if you explicit pass stdout=PIPE to most other subprocess functions), the child process's stdout is the pipe that you're reading from, so it's obviously not a tty.
This makes sense: either you want to capture the output, in which case the C program can't output to the tty, or you want to let the C program output to the tty, in which case you can't capture it.* So, just don't capture the output, and everything will be fine.
If I'm right, this means you have no reason to use the shell in the first place, which makes everything a whole lot easier. Of course your data might still be larger than the maximum system argument size** or resource limits***, even without the shell. On most modern systems, you can count on at least 64KB, so definitely try it first:
subprocess.check_call(['cprogram', biginput])
But if you get an E2BIG error:
with subprocess.Popen(['cprogram', biginput], stdin=subprocess.PIPE) as p:
    p.communicate(biginput.encode())
* Unless, of course, you want to fake a tty for your child process, in which case you need to look at os.forkpty and related functions, or the pty module.
** On most *BSD and related systems, sysctl kern.argmax and/or getconf ARG_MAX will give you the system limit, or sysconf(_SC_ARG_MAX) from C. There may also be a constant ARG_MAX accessible through <limits.h>. On Linux, things are a bit more complicated, because there are a number of different limits (most of which are very, very high) rather than just one single limit. Check your platform's manpage for execve for the details.
*** On some platforms, including recent linux, RLIMIT_STACK affects the max arg size that you can pass. Again, see your platform's execve manpage.
I was under the impression that a calling script can access the namespace of the called script. Following is a code section from my calling script:
x= 'python precision.py'
args=shlex.split(x)
print args
p=subprocess.Popen(args)
p.wait()
result.write("\tprecision = "+str(precision)+", recall = ")
where "precision" is a variable in the called script "precision.py".
But this gives a NameError. How could I fix this?
You can't access this. By the time you have arrived in the last line of your script, the called script has finished executing. Therefore its variables don't exist any more. You need to send this data to the calling script in some other way (such as the called script printing it on the standard output and the calling script getting it from there).
Even if it hadn't finished executing, I don't think you could access its variables. In other words, your impression is wrong :-)
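One way to sketch that hand-off: have the called script print the value to standard output and capture it in the calling script with check_output. The inline child code here is a stand-in for precision.py:

```python
import subprocess
import sys

# Stand-in for precision.py: a child script that prints its result to stdout.
child_code = "precision = 0.95\nprint(precision)"

# The calling script captures the child's stdout and parses the value back out.
out = subprocess.check_output([sys.executable, "-c", child_code],
                              universal_newlines=True)
precision = float(out.strip())
print("precision =", precision)  # prints "precision = 0.95"
```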
subprocess.Popen() allows you to run a command and read from its standard output and/or write to its standard input. It doesn't make much sense to popen a process and then wait for it to finish without communicating with it. That's pretty much like os.system()
If you want a variable in precision.py you do something like the following:
import precision
print "precision variable value =", precision.precision
of course, importing means executing any statements not inside classes or defs.