I want to replace the current process with a new process using the same Python interpreter, but with a new script. I have tried using os.execl, which seemed like the most intuitive approach:
print(sys.executable, script_path, *args)
os.execl(sys.executable, script_path, *args)
The result is that this is printed to the screen (from the print function):
/home/tomas/.pyenv/versions/3.4.1/bin/python script.py arg1 arg2 arg3
And the Python interactive interpreter is launched. Entering this into the interpreter:
>>> import sys
>>> print(sys.argv)
['']
Shows that Python received no arguments.
If I copy the output of the print function and enter it into my terminal, it works as expected. I have also tried using execv and execlp with identical results.
Why doesn't the execl call pass the arguments to the Python executable?
The arguments after the first parameter (arg0, arg1, arg2, ...) are passed to the new program as its argv. If you pass script_path as the first of these, the new process treats script_path as argv[0] (its own program name) rather than as a script to run, so your real arguments never reach it.
Replacing the execl line as follows will solve your problem:
os.execl(sys.executable, sys.executable, script_path, *args)
                         ^^^^^^^^^^^^^^
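To make the convention concrete, here is a minimal sketch (child.py is a hypothetical target script): everything passed to execl after the executable path becomes the new process's argv, starting with argv[0].
import os
import sys

# Replace the current process with child.py, run by the same interpreter.
# The second argument is what the new process sees as its argv[0];
# inside child.py, sys.argv == ['child.py', 'arg1', 'arg2', 'arg3'].
os.execl(sys.executable, sys.executable, 'child.py', 'arg1', 'arg2', 'arg3')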
I would like to run a command in Python Shell to execute a file with an argument.
For example: execfile("abc.py"), but how do I add 2 arguments?
Actually, wouldn't we want to do this?
import sys
sys.argv = ['abc.py','arg1', 'arg2']
execfile('abc.py')
execfile runs a Python file by loading it into the current process, not as a separate script. You can only pass in variable bindings, not command-line arguments.
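As a Python 2 sketch of what "variable bindings" means here (the names arg1 and arg2 are made up), you can hand execfile a globals dictionary, and the executed file sees those names as globals rather than as command-line arguments:
# Python 2 only: execfile accepts an optional globals dict.
bindings = {'arg1': 'value1', 'arg2': 'value2'}
execfile('abc.py', bindings)
# Inside abc.py, arg1 and arg2 are ordinary global names; sys.argv is untouched.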
If you want to run a program from within Python, use subprocess.call. E.g.
import subprocess
subprocess.call(['./abc.py', arg1, arg2])
try this:
import sys
sys.argv = ['abc.py', 'arg1', 'arg2']
execfile('abc.py')
Note that when abc.py finishes, control will be returned to the calling program. Note too that abc.py can call quit() if indeed finished.
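Note that execfile only exists in Python 2; a rough Python 3 sketch of the same idea would be:
import sys

sys.argv = ['abc.py', 'arg1', 'arg2']
# Python 3 has no execfile, so read and exec the file in the current process.
with open('abc.py') as f:
    exec(f.read())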
import sys
import subprocess
subprocess.call([sys.executable, 'abc.py', 'argument1', 'argument2'])
For more interesting scenarios, you could also look at the runpy module. Since Python 2.7, it has the run_path function. E.g.:
import runpy
import sys
# argv[0] will be replaced by runpy
# You could also skip this if you get sys.argv populated
# via other means
sys.argv = ['', 'arg1', 'arg2']
runpy.run_path('./abc.py', run_name='__main__')
You're confusing loading a module into the current interpreter process and calling a Python script externally.
The former can be done by importing the file you're interested in. execfile is similar to importing, but it simply evaluates the file rather than creating a module out of it. Similar to "sourcing" in a shell script.
The latter can be done using the subprocess module. You spawn off another instance of the interpreter and pass whatever parameters you want to that. This is similar to shelling out in a shell script using backticks.
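A short sketch of the two approaches side by side (the module and script names are illustrative):
# 1) Load into the current interpreter process: the file's top-level code runs
#    once at import time, and you interact with it through its names.
import mymodule  # hypothetical mymodule.py on the import path

# 2) Run it externally as a separate interpreter process, with real argv:
import subprocess
import sys
subprocess.call([sys.executable, 'abc.py', 'arg1', 'arg2'])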
You can't pass command line arguments with execfile(). Look at subprocess instead.
If you set PYTHONINSPECT in the Python file you want to execute
[repl.py]
import os
import sys
from time import time

# Ask the interpreter to drop into interactive mode when this script finishes
os.environ['PYTHONINSPECT'] = 'True'
t = time()
argv = sys.argv[1:]
there is no need to use execfile; the interpreter will drop into an interactive prompt after the script finishes, and you can run the file with arguments as usual in the shell:
python repl.py one two 3
>>> t
1513989378.880822
>>> argv
['one', 'two', '3']
If you want to run the scripts in parallel and give them different arguments, you can do it as below (the & backgrounding syntax assumes a Unix-style shell):
import os
os.system("python script.py arg1 arg2 & python script.py arg11 arg22")
Besides subprocess.call, you can also use subprocess.Popen, like the following:
subprocess.Popen(['./script', arg1, arg2])
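Note that Popen returns immediately instead of waiting for the child to finish; a sketch of running two scripts in parallel and then waiting for both (the script names are placeholders):
import subprocess
import sys

# Start both scripts concurrently without going through the shell.
p1 = subprocess.Popen([sys.executable, 'script.py', 'arg1', 'arg2'])
p2 = subprocess.Popen([sys.executable, 'script.py', 'arg11', 'arg22'])
# Block until both child processes have finished.
p1.wait()
p2.wait()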
This works:
subprocess.call("python abc.py arg1 arg2", shell=True)
runfile('abc.py', ['arg1', 'arg2'])  (note that runfile is provided by Spyder, not by the standard library)
This works for me:
import subprocess
subprocess.call(['python.exe', './abc.py', arg1, arg2])
I have been working on a mixture of a bash script and a Python script. The bash script can receive an unknown number of input arguments. For example:
tinify.sh test1.jpg test2.jpg test3.jpg .....
After the bash script receives them all, it passes those arguments to tinify.py. Now I have come up with two ways to do that.
Loop in bash and call python tinify.py testx.jpg for each argument
In other words: python tinify.py test1.jpg, then python tinify.py test2.jpg, and finally python tinify.py test3.jpg
Pass all arguments to tinify.py then loop in python
But there is a problem: I want to filter out duplicate parameters. For example, if the user runs tinify.sh test1.jpg test1.jpg test1.jpg, I only want to process test1.jpg once. So I think it's easier to do it the second way, because that is convenient in Python.
How can I do to pass all arguments to python script? Thanks in advance!
In addition to Chepner's answer above:
#!/bin/bash
tinify.py "$#"
within python script, tinify.py:
import sys

inputArgs = sys.argv[1:]

def remove_duplicates(l):
    return list(set(l))

arguments = remove_duplicates(inputArgs)
The list arguments will contain the arguments passed to the Python script, with duplicates removed (a set cannot contain duplicate values in Python).
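Note that a set does not preserve the original argument order; if order matters, an order-preserving variant (a small sketch, not part of the original answer) could look like this:
def remove_duplicates_keep_order(items):
    # Keep the first occurrence of each argument, in the order given.
    seen = set()
    result = []
    for item in items:
        if item not in seen:
            seen.add(item)
            result.append(item)
    return result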
You use "$@" in tinify.sh:
#!/bin/bash
tinify.py "$@"
It will be far easier to eliminate duplicates inside the Python script than to filter them out in the shell. (Of course, this raises the question of whether you need a shell script at all.)
A Python program can accept any number of command line arguments via sys.argv; just remember that sys.argv[0] is the name of the script
and the actual arguments are contained in sys.argv[1:].
$ cat test_args.py
from sys import argv
prog_name = argv[0]
print('Program name:', prog_name)
for arg in argv[1:]:
    print(arg)
$ python test_args.py a b 'c d'
Program name: test_args.py
a
b
c d
$
Note that an argument containing spaces must be quoted according to the shell syntax.
Your file tinify.py should start with the following (if you have two arguments):
import sys
arg1, arg2 = sys.argv[1], sys.argv[2]
sys.argv[0] is the name of the script itself. You can of course loop over sys.argv. Personally, I like to pass all the arguments as a single JSON object, and then call json.loads() on the receiving side.
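As a sketch of that JSON approach (the keys and script name are made up), pass one JSON string on the command line and decode it on the receiving side:
import json
import sys

# Invoked as: python abc.py '{"arg1": "value1", "count": 3}'
params = json.loads(sys.argv[1])
print(params['arg1'], params['count'])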
Is it possible to run a python script with parameters in command line like this:
./hello(var=True)
or is it mandatory to do it like this:
python -c "from hello import *;hello(var=True)"
The first way is shorter and simpler.
Most shells use parentheses for grouping or sub-shells, so you can't call a command like command(arg) from a normal shell. But you can write a Python script (./hello.py) that takes an argument.
import optparse
parser = optparse.OptionParser()
parser.add_option('-f', dest="f", action="store_true", default=False)
options, remainder = parser.parse_args()
print ("Flag={}".format(options.f))
And then call it with python hello.py -f
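Note that optparse has been deprecated since Python 2.7 in favour of argparse; an equivalent sketch with argparse would be:
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('-f', dest='f', action='store_true', default=False)
options = parser.parse_args()
print("Flag={}".format(options.f))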
./hello(var=True) would be impossible from a normal shell. In some cases it can be useful to have a Python function available in your current shell session, so here is a workaround to make your Python functions available in your shell environment.
#!/usr/bin/env bash
# python-tools.sh
set -a  # allexport: export every variable defined below
function hello(){
    cd "/app/python/commands"
    python "test.py" "$@"
}
Content of the python script
#! /usr/bin/env python
# /app/python/commands/test.py script
import sys
def test(*args):
    print(args)

if __name__ == '__main__':
    if sys.argv[1] in globals().keys():
        print(sys.argv[1])
        globals()[sys.argv[1]](sys.argv[2:])
    else:
        print("%s Not known function" % sys.argv[1])
Then source python-tools.sh
source python-tools.sh
After that, the hello function is available:
$ hello test arg2 arg2
test
(['arg2', 'arg2'],)
I am writing a Python script which uses the os.system command to call a shell script. I need help understanding how I can pass arguments to the shell script. Below is what I am trying, but it doesn't work:
os.system('./script.sh arg1 arg2 arg3')
I do not want to use subprocess for calling the shell script. Any help is appreciated.
Place your script and its args into a string; see the example below.
HTH
#!/usr/bin/env python
import os
arg3 = 'arg3'
cmd = '/bin/echo arg1 arg2 %s' % arg3
print 'running "%s"' % cmd
os.system(cmd)
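If the arguments might contain spaces or other shell metacharacters, it is safer to quote them before building the string; a sketch using pipes.quote on Python 2 (shlex.quote on Python 3):
import os

try:
    from shlex import quote   # Python 3.3+
except ImportError:
    from pipes import quote   # Python 2

args = ['arg1', 'arg 2 with spaces', 'arg3']
cmd = './script.sh ' + ' '.join(quote(a) for a in args)
os.system(cmd)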
If you insert the following line before os.system(...), you will likely see your problem.
print './script.sh arg1 arg2 arg3'
When developing this type of thing, it usually is useful to make sure the command really is what you expect, before you actually try it.
example:
def Cmd():
    return "something"
print Cmd()
When you are satisfied, comment out the print Cmd() line and use os.system(Cmd()) or the subprocess version.
I'm testing some python code that parses command line input. Is there a way to pass this input in through IDLE? Currently I'm saving in the IDLE editor and running from a command prompt.
I'm running Windows.
It doesn't seem like IDLE provides a way to do this through the GUI, but you could do something like:
idle.py -r scriptname.py arg1 arg2 arg3
You can also set sys.argv manually, like:
import sys

try:
    __file__
except NameError:
    sys.argv = [sys.argv[0], 'argument1', 'argument2', 'argument2']
(Credit http://wayneandlayne.com/2009/04/14/using-command-line-arguments-in-python-in-idle/)
In a pinch, Seth's #2 worked....
2) You can add a test line in front of your main function call which supplies an array of arguments (or create a unit test which does the same thing), or set sys.argv directly.
For example...
sys.argv = ["wordcount.py", "--count", "small.txt"]
Here are a couple of ways that I can think of:
1) You can call your "main" function directly on the IDLE console with arguments if you want.
2) You can add a test line in front of your main function call which supplies an array of arguments (or create a unit test which does the same thing), or set sys.argv directly.
3) You can run python in interactive mode on the console and pass in arguments:
C:\> python.exe -i some.py arg1 arg2
Command-line arguments have been added to IDLE in Python 3.7.4+. To auto-detect (any and older) versions of IDLE, and prompt for command-line argument values, you may paste (something like) this into the beginning of your code:
#! /usr/bin/env python3
import sys
def ok(x=None):
    sys.argv.extend(e.get().split())
    root.destroy()

if 'idlelib.rpc' in sys.modules:
    import tkinter as tk
    root = tk.Tk()
    tk.Label(root, text="Command-line Arguments:").pack()
    e = tk.Entry(root)
    e.pack(padx=5)
    tk.Button(root, text="OK", command=ok,
              default=tk.ACTIVE).pack(pady=5)
    root.bind("<Return>", ok)
    root.bind("<Escape>", lambda x: root.destroy())
    e.focus()
    root.wait_window()
You would follow that with your regular code, i.e. print(sys.argv).
Note that with IDLE in Python 3.7.4+, when using the Run... Customized command, it is NOT necessary to import sys to access argv.
If used in python 2.6/2.7 then be sure to capitalize: import Tkinter as tk
For this example I've tried to strike a happy balance between features & brevity. Feel free to add or take away features, as needed!
Based on the post by danben, here is my solution that works in IDLE:
try:
    sys.argv = ['fibo3_5.py', '30']
    fibonacci(int(sys.argv[1]))
except:
    print('Then try some other way.')
Auto-detect IDLE and Prompt for Command-line Arguments
#! /usr/bin/env python3
import sys
# Prompt user for (optional) command line arguments, when run from IDLE:
if 'idlelib' in sys.modules: sys.argv.extend(input("Args: ").split())
Change "input" to "raw_input" for Python2.
This code works great for me, I can use "F5" in IDLE and then call again from the interactive prompt:
def mainf(*m_args):
    # Overrides argv when testing (interactive or below)
    if m_args:
        sys.argv = ["testing mainf"] + list(m_args)
    ...

if __name__ == "__main__":
    if False:  # not testing?
        sys.exit(mainf())
    else:
        # Test/sample invocations (can test multiple in one run)
        mainf("--foo=bar1", "--option2=val2")
        mainf("--foo=bar2")
Visual Studio 2015 has an addon for Python. You can supply arguments with that. VS 2015 is now free.
import sys
sys.argv = [sys.argv[0], '-arg1', 'val1', '-arg2', 'val2']
# If you're passing a command-line option such as 'help' or 'verbose', you can write:
sys.argv = [sys.argv[0], '-h']
IDLE now has a GUI way to add arguments to sys.argv! Under the 'Run' menu header select 'Run... Customized', or just press Shift+F5. A dialog will appear, and that's it!
The answer from veganaiZe produces a KeyError outside IDLE with Python 3.6.3. This can be solved by replacing if sys.modules['idlelib']: with if 'idlelib' in sys.modules:, as below.
import argparse
import sys

# parser is assumed to have been created earlier, e.g. parser = argparse.ArgumentParser()

# Check if we are using IDLE
if 'idlelib' in sys.modules:
    # IDLE is present ==> we are in test mode
    print("""====== TEST MODE =======""")
    args = parser.parse_args([list of args])
else:
    # It's command line, this is production mode.
    args = parser.parse_args()
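For reference, a self-contained sketch of the same pattern (the --count option and its test value are made up):
import argparse
import sys

parser = argparse.ArgumentParser()
parser.add_argument('--count', type=int, default=1)

if 'idlelib' in sys.modules:
    # Running under IDLE: supply hard-coded test arguments.
    args = parser.parse_args(['--count', '3'])
else:
    # Normal command line: parse the real arguments.
    args = parser.parse_args()

print(args.count)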
There seem to be as many ways to do this as there are users. Me being a noob, I just tested for the number of arguments: when IDLE starts from Windows Explorer, it has just one argument (that is, len(sys.argv) returns 1) unless you started IDLE with parameters. IDLE is just a .bat file on Windows that points to idle.py; on Linux, I don't use IDLE.
What I tend to do on startup is ...
if len(sys.argv) == 1:
    sys.argv = [sys.argv[0], arg1, arg2, arg3]  # <---- default arguments here
I realize that is using a sledgehammer, but if you are just bringing up IDLE by clicking it in the default install, it will work. Most of what I do is call Python from another language, so the only time it makes any difference is when I'm testing.
It is easy for a noob like me to understand.