How to run different Python functions from the command line

I'm trying to run different functions from a Python script (some with arguments and some without). So far I have:
import sys

def math(x):
    ans = 2 * x
    print(ans)

def function1():
    print("hello")

if __name__ == '__main__':
    globals()[sys.argv[1]]()
and on the command line, if I type python scriptName.py math(2), I get the error:
File "scriptName.py", line 28, in <module>
globals()[sys.argv[1]]()
KeyError: 'math(2)'
New to Python and programming, so any help would be appreciated. This is also a general example... my real script will have a lot more functions.
Thank you

Try this!
import argparse

def math(x):
    try:
        print(int(x) * 2)
    except ValueError:
        print(x, "is not a number!")

def function1(name):
    print("Hello!", name)

if __name__ == '__main__':
    # the description is shown if you type --help
    parser = argparse.ArgumentParser(description='Run some functions')
    # Add a command
    parser.add_argument('--math', help='multiply the integer by 2')
    parser.add_argument('--hello', help='say hello')
    # Get our arguments from the user
    args = parser.parse_args()
    if args.math:
        math(args.math)
    if args.hello:
        function1(args.hello)
You run it from your terminal like so:
python script.py --math 5 --hello ari
And you will get
>> 10
>> Hello! ari
You can use --help to describe your script and its options
python script.py --help
Will print out
Run some functions
optional arguments:
  -h, --help     show this help message and exit
  --math MATH    multiply the integer by 2
  --hello HELLO  say hello
Read More: https://docs.python.org/3/library/argparse.html
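If the script grows to many functions, argparse sub-commands are a natural extension of the same idea. Here is a rough sketch reusing the two functions above (the sub-command names are only illustrative, and required=True for add_subparsers needs Python 3.7+):

import argparse

def math(x):
    print(int(x) * 2)

def function1(name):
    print("Hello!", name)

if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='Run some functions')
    subparsers = parser.add_subparsers(dest='command', required=True)
    # one sub-command per function
    math_parser = subparsers.add_parser('math', help='multiply the integer by 2')
    math_parser.add_argument('x')
    hello_parser = subparsers.add_parser('hello', help='say hello')
    hello_parser.add_argument('name')
    args = parser.parse_args()
    if args.command == 'math':
        math(args.x)
    elif args.command == 'hello':
        function1(args.name)

With this, python script.py math 5 and python script.py hello ari each run only the requested function.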

Here is another approach you can take:
#!/usr/bin/env python3
"""Safely run Python functions from command line.
"""
import argparse
import ast
import operator

def main():
    # parse arguments
    parser = argparse.ArgumentParser(description=__doc__)
    parser.add_argument("function", help="Python function to run.")
    parser.add_argument("args", nargs='*')
    opt = parser.parse_args()
    # try to get the function from the operator module
    try:
        func = getattr(operator, opt.function)
    except AttributeError:
        raise AttributeError(f"The function {opt.function} is not defined.")
    # try to safely eval the arguments
    try:
        args = [ast.literal_eval(arg) for arg in opt.args]
    except SyntaxError:
        raise SyntaxError(f"The arguments to {opt.function} "
                          f"were not properly formatted.")
    # run the function and pass in the args, print the output to stdout
    print(func(*args))

if __name__ == "__main__":
    main()
Then you can execute this by doing the following:
./main.py pow 2 2
4
We use the argparse module from Python's Standard Library to facilitate the parsing of arguments here. The usage for the script is below:
usage: main.py [-h] function [args [args ...]]
function is the name of the function you want to run. The way this is currently structured is to pull functions from the operator module, but this is just an example. You can easily create your own file containing functions and use that instead, or just pull them from globals().
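For example, here is a minimal sketch of dispatching to functions defined in the script itself instead of operator (the function names below are just placeholders):

def double(x):
    return 2 * x

def greet(name):
    return "hello " + name

# an explicit whitelist of callables, rather than exposing everything in globals()
COMMANDS = {f.__name__: f for f in (double, greet)}

def get_function(name):
    try:
        return COMMANDS[name]
    except KeyError:
        raise SystemExit("Unknown function: " + name)

Swapping func = getattr(operator, opt.function) in the script above for func = get_function(opt.function) is the only change needed.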
Following the function name, you can supply any number of arguments. Those arguments are run through ast.literal_eval to safely parse them and get the corresponding types.
The cool thing about this is your arguments are not strictly limited to strings and numbers. You can pass in any literal. Here is an example with a tuple:
./main.py getitem '(1, 2, 3)' 1
2
These arguments are then passed to the selected function, and the output is printed to stdout. Overall, this gives you a pretty flexible framework in which you can easily expand the functionality. Plus, it avoids having to use eval, which greatly reduces the risk of running something malicious.
Why not use eval:
Here is a small example of why just using eval is so unsafe. If you were to simply use the following code to solve your issue:
import sys

def math(x):
    ans = 2 * x
    print(ans)

def function1():
    print("hello")

if __name__ == '__main__':
    print(eval(sys.argv[1]))  # DO NOT DO IT THIS WAY
Someone could pass in an argument like so:
python main.py 'import shutil; shutil.rmtree("/directory_you_really_dont_want_to_delete/")'
This would, in effect, import the shutil module and then call the rmtree function to remove a directory you really do not want to delete. Obviously this is a trivial example, but I am sure you can see the potential to do something really malicious. An even more malicious, yet easily accessible, example would be to import subprocess and use recursive calls to the script to fork-bomb the host, but I am not going to share that code here for obvious reasons. There is also nothing stopping the user from downloading a malicious third-party module and executing code from it here (a topical example would be jeilyfish, which has since been removed from PyPI). eval does not ensure that the code is "safe" before running it; it just arbitrarily runs any syntactically correct Python code given to it.

Related

Command line argument in python to run one of two scripts

My package has the following structure:
mypackage
|-__main__.py
|-__init__.py
|-model
|-__init__.py
|-modelfile.py
|-simulation
|-sim1.py
|-sim2.py
The content of the file __main__.py is
from mypackage.simulation import sim1
if __name__ == '__main__':
sim1
So that when I execute python -m mypackage, the script sim1.py runs.
Now I would like to add an argument to the command line, so that python -m mypackage sim1 runs sim1.py and python -m mypackage sim2 runs sim2.py.
I've tried the following:
import sys
from mypackage.simulation import sim1,sim2

if __name__ == '__main__':
    for arg in sys.argv:
        arg
But it runs both scripts instead of only the one passed as an argument.
In sim1.py and sim2.py I have the following code
from mypackage.model import modelfile
print('modelfile.ModelClass.someattr')
You can simply call __import__ with the module name as parameter, e.g.:
new_module = __import__(arg)
in your loop.
So, for example, you have your main program named example.py:
import sys

if __name__ == '__main__':
    for arg in sys.argv[1:]:
        module = __import__(arg)
        print(arg, module.foo(1))
Note that sys.argv[0] contains the program name.
You have your sim1.py:
print('sim1')
def foo(n):
    return n+1
and your sim2.py:
print('sim2')
def foo(n):
    return n+2
then you can call
python example.py sim1 sim2
output:
sim1
sim1 2
sim2
sim2 3
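As an aside, importlib.import_module is the documented way to do this kind of dynamic import and is generally preferred over calling __import__ directly; the same loop could be written as:

import importlib
import sys

if __name__ == '__main__':
    for arg in sys.argv[1:]:
        module = importlib.import_module(arg)
        print(arg, module.foo(1))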
Suppose you have your files with the following content.
sim1.py
def simulation1():
    print("This is simulation 1")

simulation1()
main.py
import sim1
sim1.simulation1()
output
This is simulation 1
This is simulation 1
When you import sim1 into main.py and call its function simulation1, This is simulation 1 gets printed 2 times, because simulation1 is called inside sim1.py and also in main.py.
If you want to run that function in sim1.py, but don't want to run when sim1 is imported, then you can place it inside if __name__ == "__main__":.
sim1.py
def simulation1():
    print("This is simulation 1")

if __name__ == "__main__":
    simulation1()
main.py
import sim1
sim1.simulation1()
output
This is simulation 1
Your code doesn't do what you want it to do. Just sim1 doesn't actually call the function; the syntax to do that is sim1().
You could make your Python script evaluate random strings from the command line as Python expressions, but that's really not a secure or elegant way to solve this. Instead, have the strings map to internal functions, which may or may not have the same name. For example,
if __name__ == '__main__':
    import sys
    for arg in sys.argv[1:]:
        if arg == 'sim1':
            sim1()
        elif arg == 'mustard':
            sim2()
        elif arg == 'ketchup':
            sim3(sausages=2, cucumber=user in cucumberlovers)
        else:
            raise ValueError('Anguish! Don\'t know how to handle %s' % arg)
As this should hopefully illustrate, the symbol you accept on the command line does not need to correspond to the name of the function you want to run. If you want that to be the case, you can simplify this to use a dictionary:
if __name__ == '__main__':
    import sys
    d = {fun.__name__: fun for fun in (sim1, sim2)}
    for arg in sys.argv[1:]:
        if arg in d:
            d[arg]()
        else:
            raise ValueError('Anguish! etc')
What's perhaps important to note here is that you select exactly which Python symbols you want to give the user access to from the command line, and allow no others to leak through. That would be a security problem (think what would happen if someone passed in 'import shutil; shutil.rmtree("/")' as the argument to run). This is similar in spirit to the many, many reasons to avoid eval, which you will find are easy to google (and you probably should if this is unfamiliar to you).
If sim1 is a module name you want to import only when the user specifically requests it, that's not hard to do either; see importing a module when the module name is in a variable but then you can't import it earlier on in the script.
if __name__ == '__main__':
    import sys
    modules = ['sim1', 'sim2']
    for arg in sys.argv[1:]:
        if arg in modules:
            globals()[arg] = __import__(arg)
        else:
            raise ValueError('Anguish! etc')
But generally speaking, modules should probably only define functions, and leave it to the caller to decide if and when to run them at some time after they import the module.
Perhaps tangentially, look into third-party libraries like click, which easily allow you to expose selected functions as "subcommands" of your Python script, vaguely similar to how git has the subcommands init, log, etc.
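For instance, a rough click-based sketch of the same idea might look like this (click is a third-party package, so pip install click first; the command bodies are placeholders):

import click

@click.group()
def cli():
    """Run selected simulations."""

@cli.command()
def sim1():
    click.echo('running sim1')

@cli.command()
def sim2():
    click.echo('running sim2')

if __name__ == '__main__':
    cli()

Running python script.py sim1 then executes only sim1, and python script.py --help lists the available subcommands.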

Passing directory paths as strings to argparse in Python

Scenario: I have a Python script that receives as inputs 2 directory paths (input and output folders) and a variable ID. With these, it performs a data-gathering procedure from xlsx and xlsm macros, modifies the data and saves it to a csv (from the input folder, the inner functions of the code run loops to get multiple files and process them one at a time).
Issue: Since the code was working fine when I was running it from the Spyder console, I decided to step it up and learn about the cmd caller, argparse and the main function. I am trying to implement that, but I get the following error:
Unrecognized arguments (the output path I pass from cmd)
Question: Any ideas on what I am doing wrong?
Note: If the full script is required, I can post it here, but since it works when run from Spyder, I believe the error is in my argparse function.
Code (argparse function and __main__):
# This is a function to parse arguments:
def parserfunc():
    import argparse
    parser = argparse.ArgumentParser(description='Process Files')
    parser.add_argument('strings', nargs=3)
    args = parser.parse_args()
    arguments = args.strings
    return arguments

# This is the main caller
def main():
    arguments = parserfunc()
    # this next function is where I do the processing for the files,
    # based on the paths and id provided:
    modifierfunc(arguments[0], arguments[1], arguments[2])

if __name__ == "__main__":
    main()
If you decided to use argparse, then make use of named arguments, not indexed ones. Here is some example code:
import argparse
parser = argparse.ArgumentParser()
parser.add_argument('input')
parser.add_argument('output')
parser.add_argument('id')
args = parser.parse_args()
print(args.input, args.output, args.id) # this is how you use them
In case you miss one of them on program launch, you will get human readable error message like
error: the following arguments are required: id
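You would then call the script like this (the paths and id are purely illustrative):

python myscript.py C:\data\input C:\data\output 1234

and read the values back as args.input, args.output and args.id.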
You could drop the entire parserfunc() function.
sys.argv does indeed contain all arguments (each one as a string), as mentioned by grapes.
So instead of this:
modifierfunc(arguments[0], arguments[1], arguments[2])
This should suffice:
import sys

modifierfunc(sys.argv[1], sys.argv[2], sys.argv[3])
Perhaps first do a print, to see if sys.argv holds the values you expect (remember that sys.argv[0] is the script name):
print('Argument 1=' + sys.argv[1])
print('Argument 2=' + sys.argv[2])
print('Argument 3=' + sys.argv[3])

Run Python function with input arguments from command line

My function convert.py is:
def convert(a,b)
    factor = 2194.2
    return (a-b)*factor
How do I run it from the command line with input arguments a and b?
I tried:
python convert.py 32 46
But got an error.
I did try to find the answer online, and I found related things but not the answer:
Run function from the command line (Stack Overflow)
How to read/process command line arguments? (Stack Overflow)
http://www.cyberciti.biz/faq/python-command-line-arguments-argv-example/
http://www.saltycrane.com/blog/2007/12/how-to-pass-command-line-arguments-to/
Also, where can I find the answer myself so that I can save this site for more non-trivial questions?
You could do:
import sys

def convert(a,b):
    factor = 2194.2
    return (a-b)*factor

print(convert(int(sys.argv[1]), int(sys.argv[2])))
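Running python convert.py 32 46 against this version then prints (32-46)*2194.2, i.e. roughly -30718.8.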
If that is all the script should do, you don't have to define a function:
import sys

factor = 2194.2
print((int(sys.argv[1]) - int(sys.argv[2])) * factor)
If you don't want to change your file (though you nonetheless have to add the colon after the function definition), you could follow your first linked approach:
python -c 'import convert, sys; print(convert.convert(int(sys.argv[1]), int(sys.argv[2])))' 32 46
There exists a Python module for this sort of thing called argparse, which allows you to do really fancy things around command line flags. You don't really need that - you've just got two numbers on the command line. This can be handled really naively.
Python allows you direct access to the command line arguments via an array called sys.argv - you'll need to import sys first. The first element in this array is always the program name, but the second and third will be the numbers you pass in i.e. sys.argv[1] and sys.argv[2]. For a more complete example:
if len(sys.argv) < 3:
    print("Didn't supply two numbers")
    sys.exit(1)
a = int(sys.argv[1])
b = int(sys.argv[2])
Of course you'll need some error checking to make sure they are actually integers/floats.
A bit of extra reading around sys.argv if you're interested here
To be complete, we can give an argparse example as well:
import argparse

parser = argparse.ArgumentParser(description='')
parser.add_argument('numbers', type=float, nargs=2,
                    help='Things to perform actions on')
args = parser.parse_args()
a = args.numbers[0]
b = args.numbers[1]
print(a, b)

Unittest with command-line arguments

From what I understand from another SO post, to unittest a script that takes command line arguments through argparse, I should do something like the code below, giving sys.argv[0] as arg.
import unittest
import match_loc

class test_method_main(unittest.TestCase):
    loc = match_loc.main()
    self.assertEqual(loc, [4])

if __name__ == '__main__':
    sys.argv[1] = 'aaaac'
    sys.argv[2] = 'ac'
    unittest.main(sys.argv[0])
This returns the error:
usage: test_match_loc.py [-h] text patterns [patterns ...]
test_match_loc.py: error: the following arguments are required: text, patterns
I would like to understand more deeply what is going on here. I understand
if __name__ == '__main__':
    main()
says that if this is being executed by the 'main', highest level, default interpreter, to just automatically run the 'main' method. I'm assuming
if __name__ == '__main__':
    unittest.main()
just happens to be the way you say this for running unittest scripts.
I understand when any script is run, it automatically has an argv object, a vector collecting all the items on the command line.
But I do not understand what unittest.main(sys.argv[0]) would do. What does 'unittest.main' do with arguments? How can I pre-set the values of sys.argv - doesn't it automatically reset every time you run a script? Furthermore, where does this object 'sys.argv' exist, if outside of any script? Finally, what is the correct way to implement tests of command-line arguments?
I am sorry if my questions are vague and misguided. I would like to understand all the components relevant here so I can actually understand what I am doing.
Thank you very much.
Just by playing around with a pair of simple files, I find that modifying sys.argv in the body of the caller module affects the sys.argv that the imported module sees:
import sys
import unittest

# slice assignment works even if the script was started with no extra arguments
sys.argv[1:] = ['aaaac', 'ac']

class test_method_main(unittest.TestCase):
    ...
But modifying sys.argv in the main block as you do, does not show up in the imported one. We could dig into the documentation (and code) to see exactly why, but I think it's enough to just identify what works.
Here's what I reconstructed of the imported module from your previous question, with a few diagnostic prints:
import argparse
import sys

def main():
    print(sys.argv)
    parser = argparse.ArgumentParser(
        description='Takes a series of patterns as fasta files'
        ' or strings and a text as fasta file or string and'
        ' returns the match locations by constructing a trie.')
    parser.add_argument('text')
    parser.add_argument('patterns', nargs='+')
    args = parser.parse_args()
    print(args)
    return 1
You could also test a parser with your own list of strings, recognising that parse_args uses sys.argv[1:] if its argument is missing or None:
def main(argv=None):
    print(argv)
    ...
    args = parser.parse_args(argv)
    print(args)
    return 1

loc = match_loc.main(['abc','ab'])  # and in the caller
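Putting that together, a self-contained test might look like this (assuming match_loc exposes main(argv=None) as sketched above; the expected value [4] is taken from your question):

import unittest
import match_loc

class TestMatchLoc(unittest.TestCase):
    def test_main(self):
        # pass the arguments explicitly instead of touching sys.argv
        loc = match_loc.main(['aaaac', 'ac'])
        self.assertEqual(loc, [4])

if __name__ == '__main__':
    unittest.main()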
Even though I was able to construct a working test case, you really should have given enough information that I didn't need to guess or dig around.

is there a way to clear python argparse?

Consider the following script:
import argparse
parser1 = argparse.ArgumentParser()
parser1.add_argument('-a')
args1 = parser1.parse_args()
parser2 = argparse.ArgumentParser()
parser2.add_argument('-b')
args2 = parser2.parse_args()
I have several questions:
1. Is parse_args a one-time method or is there a way to clear the arguments before adding new ones? (e.g. something like args1.clear() or parser1.clear())
2. The result of this script is unusable. Although this script accepts the -a argument, it does not accept any value for 'a', nor does it accept any -b argument. Is there some way to make any of the arguments really work?
3. This is my actual scenario: I have 2 scripts. Both import the same file, which has initialization code (load config files, create loggers, etc.); let's call it init.py. This init.py file also parses the arguments, only because it needs one value from them. The problem is that I need one of the scripts to accept other arguments as well. Since init.py does something with one argument, I cannot wait with parse_args. How can I make it work?
Edit:
Here is the output of my script:
[prompt]# python2.7 myscript.py -a
usage: a.py [-h] [-a A]
myscript.py: error: argument -a: expected one argument
[prompt]# python2.7 myscript.py -a 1
Namespace(a='1')
usage: a.py [-h] [-b B]
myscript.py: error: unrecognized arguments: -a 1
Your scenario is quite unclear, but I guess what you're looking for is parse_known_args
Here I guessed that you called init.py from the other files, say caller1.py and caller2.py
Also suppose that init.py only parses -a argument, while the original script will parse the rest.
You can do something like this:
In init.py, put this in the do_things method:
parser = argparse.ArgumentParser()
parser.add_argument('-a')
parsed, unknown = parser.parse_known_args(sys.argv)
print 'From init.py: %s' % parsed.a
In caller1.py:
init.do_things(sys.argv)

parser = argparse.ArgumentParser()
parser.add_argument('-b')
parsed, unknown = parser.parse_known_args(sys.argv)
print 'From caller1.py: %s' % parsed.b
If you call caller1.py as follows: python caller1.py -a foo -b bar, the result will be:
From init.py: foo
From caller1.py: bar
But if your scenario is not actually like this, I would suggest using @Michael0x2a's answer, which is just to use a single ArgumentParser object in caller1.py and pass the value appropriately to init.py.
This doesn't really make sense, because for all intents and purposes, the parser object is stateless. There's nothing to clear, since all it does is takes in the console arguments, and returns a Namespace object (a pseudo-dict) without ever modifying anything in the process.
Therefore, you can consider parse_args() to be idempotent. You can repeatedly call it over and over, and the same output will occur. By default, it will read the arguments from sys.argv, which is where the console arguments are stored.
However, note that you can pass in custom arguments by giving a list to the parse_args function, so that the parser will use something other than sys.argv as input.
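For example:

args1 = parser1.parse_args(['-a', '15'])  # parses the given list
args1 = parser1.parse_args()              # falls back to sys.argv[1:]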
I'm not sure what you mean. If you call python myscript.py -a 15, args1 will equal Namespace(a='15'). You can then use args1.a to obtain the value '15'. If you want to make the flag act as a toggle, call parser.add_argument('-a', action='store_true'). Here is a list of all available actions.
I would try and confine all the console/interface code into a single module and into a single parser. Basically, remove the code to parse the command line from init.py and the second file into an independent little section. Once you run the parser, which presents a unified interface for everything in your program, pass in the appropriate variables to functions inside init.py. This has the added advantage of keeping the UI separate and more easily interchangeable with the rest of the code.
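A rough sketch of that arrangement, with the file and function names borrowed from the question and the other answer (init.py, do_things) and everything else purely illustrative:

# cli.py - the only module that touches argparse
import argparse
import init

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument('-a', required=True, help='value the init code needs')
    parser.add_argument('-b', help='extra option only one of the scripts uses')
    args = parser.parse_args()
    init.do_things(args.a)   # init.py receives a plain value and does no parsing itself
    if args.b is not None:
        pass                 # script-specific work with args.b goes here

if __name__ == '__main__':
    main()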
