How to save a previous command line argument - python

I'm writing my first Python command-line tool using docopt and have run into an issue.
My structure is like this:
Usage:
  my-tool configure
  my-tool [(-o <option> | --option <option>)]
  ...
I'm trying to find a way to run my-tool -o foo-bar first, and then optionally pass the value 'foo-bar' into my configure function if I run my-tool configure next.
In pseudocode, that translates to this:
def configure(option=None):
    print(option)  # With the above inputs this should print 'foo-bar'

def main():
    if arguments['configure']:
        configure(option=arguments['<option>'])
        return
    ...
Is there a way to get this working without changing the argument structure?
I'm looking for a way to avoid my-tool configure [(-o <option> | --option <option>)]

Since you run this across two separate invocations, it might be best to store the values in some sort of config/JSON file that is cleared each time you run "configure".
import json
import sys
from optparse import OptionParser

def configure(config):
    print(config.get("option"))  # do something with the options from the file
    # clear the config file
    with open("CONFIG_FILE.JSON", "w") as f:
        json.dump({}, f)

def main():
    # load the config file (start empty if it does not exist yet)
    try:
        with open("CONFIG_FILE.JSON", "r") as f:
            config = json.load(f)
    except IOError:
        config = {}
    # run configure only when it was asked for, and hand it the stored config
    if len(sys.argv) > 1 and sys.argv[1] == 'configure':
        configure(config)
        return
    # parse options example (a bit raw, and should be done in a separate method anyway)
    parser = OptionParser()
    parser.add_option("-o", "--option", dest="option")
    opts, args = parser.parse_args()
    # save the parsed value so the next run can pick it up
    config["option"] = opts.option
    with open("CONFIG_FILE.JSON", "w") as f:
        json.dump(config, f)

I tried writing a script to help you but it was a bit beyond my (current) newbie skill level.
However, the tools/approach I started taking may be able to help. Try using sys.argv (which holds a list of all the arguments the script was run with), and then some regular expressions (import re...).
I hope this helps someone else help you. (:

Related

Is it possible to pass python behave command-line arguments from file

I wonder, is it possible to pass behave arguments, e.g. "-D environment", from a file? By default they are taken from the behave configuration file.
Maybe there is some way to keep each configuration in a different file? Maybe many behave files, or something like: behave "path to file with arguments"?
At this point I have figured out that I could use bash scripts containing the various configurations, e.g. "#!/bin/bash behave ...".
I'm asking because I want to easily manage my configurations when I run "behave ..." without editing many arguments.
I think you could take advantage of the custom JSON configuration files described in the behave "Advanced Cases" section: https://behave.readthedocs.io/en/latest/new_and_noteworthy_v1.2.5.html#advanced-cases
From the documentation:
# -- FILE: features/environment.py
import json
import os.path

def before_all(context):
    """Load and update userdata from JSON configuration file."""
    userdata = context.config.userdata
    configfile = userdata.get("configfile", "userconfig.json")
    if os.path.exists(configfile):
        assert configfile.endswith(".json")
        more_userdata = json.load(open(configfile))
        context.config.update_userdata(more_userdata)
        # -- NOTE: Reapplies userdata_defines from command-line, too.
So, if you would like to use a custom config by specifying it at the command line, you could run, for example:
behave -D conffile=myconfig.json
Then I would parametrize this line to something like:
    myconfigfile = context.config.userdata["conffile"]
    configfile = userdata.get("configfile", myconfigfile)
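Putting the two snippets together, before_all would then look something like this (my own combination of the documentation example and the conffile idea above; the "conffile" key matches the -D conffile=myconfig.json example):
# -- FILE: features/environment.py
import json
import os.path

def before_all(context):
    """Load and update userdata from a JSON config file chosen on the command line."""
    userdata = context.config.userdata
    # -D conffile=myconfig.json selects the file; fall back to userconfig.json
    configfile = userdata.get("conffile", "userconfig.json")
    if os.path.exists(configfile):
        assert configfile.endswith(".json")
        with open(configfile) as f:
            more_userdata = json.load(f)
        context.config.update_userdata(more_userdata)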

How to get input variable from other function while running pytest for SeleniumBase in Python?

The file I upload to a website changes with each run, and I'd like a way to change it in the test method. Currently, I have it set up like this:
functions.py
import subprocess
from shlex import split

def get_file():
    ...
    file = "file_ex1.doc"
    return file

def run_test1():
    cmd = "pytest -v webtests.py::EF --browser=chrome"
    subprocess.run(split(cmd))  # run the command line
    ...
webtests.py
from seleniumbase import BaseCase

file_path = file  # changes with each run, should come from get_file()

class EF(BaseCase):
    def test_now(self):
        self.open('https://www...')
        self.find_element('input[name="Query"]').send_keys(file_path)
I was wondering about the best way to change the file_path variable based on the output of certain functions in functions.py. For example, it could sometimes produce a file called file1 to upload, or another time file2.txt. What's the best way to connect the two scripts and run test_now() with the updated file path for each run? Or should they be set up in a different way?
Thanks so much in advance.
You can use the pytest args that come with SeleniumBase.
--data=DATA # (Extra test data. Access with "self.data" in tests.)
--var1=DATA # (Extra test data. Access with "self.var1" in tests.)
--var2=DATA # (Extra test data. Access with "self.var2" in tests.)
--var3=DATA # (Extra test data. Access with "self.var3" in tests.)
Source: Docs
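For example (a sketch reusing the names from the question; --data is the flag listed above): run_test1() passes the file path on the command line, and the test reads it back through self.data:
# functions.py (sketch)
import subprocess
from shlex import split

def run_test1():
    file_path = get_file()  # get_file() as defined in the question
    cmd = 'pytest -v webtests.py::EF --browser=chrome --data="%s"' % file_path
    subprocess.run(split(cmd))  # run the command line

# webtests.py (sketch)
from seleniumbase import BaseCase

class EF(BaseCase):
    def test_now(self):
        self.open('https://www...')
        # self.data holds whatever was passed via --data
        self.find_element('input[name="Query"]').send_keys(self.data)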

What is the best method for an adaptive file path in Python? e.g. using pd.read_csv()

I am trying to write a script which allows everyone on my remote team to run scripts from our shared Dropbox. However, I am struggling with paths when reading CSVs.
df = pd.read_csv('C:\\Users\\Joe\\Dropbox........')
For the above, I need to be able to automatically replace the user (in this case Joe) with whoever is trying to run the script. I have done this before, but it's always been clunky, so I was wondering if someone has a clean method for doing this?
Would setting up a script argument work for you?
https://docs.python.org/3/library/argparse.html
import argparse
import pandas as pd

# Creating the parser
parser = argparse.ArgumentParser(description="some description")
# Adding the path argument
parser.add_argument("-p", "--path", help="path/to/csv", required=True)
# Parsing the arguments
args = parser.parse_args()
# Reading the csv
df = pd.read_csv(args.path)
Then to run the script:
python my_script.py -p path/to/my/csv
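If the Dropbox folder always sits in the same place under each user's home directory, an alternative is to build the path from the home directory instead of hard-coding the user (a sketch; the Dropbox/data.csv layout is an assumed example):
from pathlib import Path
import pandas as pd

# Path.home() resolves to the current user's home directory
# (e.g. C:\Users\Joe on Windows), so no user name is hard-coded.
csv_path = Path.home() / "Dropbox" / "data.csv"  # hypothetical layout
df = pd.read_csv(csv_path)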

How to open a file using argparse.FileType only if a condition is satisfied?

I am using Python 3's argparse. I have multiple files passed as argparse.FileType, which I use to write some data. I want to check some conditions and open those files only if the conditions are met. However, argparse opens them immediately, and they are created even if I exit with an error code.
import argparse
from sys import exit

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument('--condition', action='store_true')
    parser.add_argument('out1', type=argparse.FileType('w'))
    parser.add_argument('out2', type=argparse.FileType('w'))
    args = parser.parse_args()
    if not args.condition:
        print('ERROR: please use --condition')
        exit(2)
    args.out1.write('hello\n')
    args.out2.write('world\n')

if __name__ == '__main__':
    main()
If I run this example without passing the --condition argument, it will still create 2 new files. I don't want them to be created in that case. Can I do that without taking plain filenames and opening the files manually?
The simplest thing is to just accept the filenames as the default string type, and open the files later.
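Applied to the example above, that looks something like this (a sketch of the same script with plain string arguments):
import argparse
from sys import exit

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument('--condition', action='store_true')
    parser.add_argument('out1')  # plain strings: nothing is opened yet
    parser.add_argument('out2')
    args = parser.parse_args()
    if not args.condition:
        print('ERROR: please use --condition')
        exit(2)
    # the files are only created once the condition has been checked
    with open(args.out1, 'w') as f1, open(args.out2, 'w') as f2:
        f1.write('hello\n')
        f2.write('world\n')

if __name__ == '__main__':
    main()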
http://bugs.python.org/issue13824 (argparse.FileType opens a file and never closes it) implements a FileContext type, which operates like FileType except that it returns a context that can be used later:
with args.input() as f:
    f.read()
    # etc.
But doing filename checking while parsing, without actually opening or creating a file, is not trivial. And handling stdin/stdout, which should not be closed, adds a complication.
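For illustration, a minimal sketch of such a deferred-opening type (my own approximation of the idea, not the patch from the bug issue; it skips the stdin/stdout special-casing mentioned above):
from contextlib import contextmanager

class FileContext:
    """Like argparse.FileType, but defers opening: parsing stores a
    callable, and calling it returns a context manager that opens the
    file on entry and closes it on exit."""
    def __init__(self, mode='r'):
        self.mode = mode

    def __call__(self, filename):
        @contextmanager
        def opener():
            f = open(filename, self.mode)
            try:
                yield f
            finally:
                f.close()
        return opener

# usage: parser.add_argument('input', type=FileContext('r'))
# later: with args.input() as f: ...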
In that bug issue, Steven Bethard, the argparse developer, notes that FileType was intended for quick scripts, not for larger projects where properly opening and closing files matters.

Run a function (from a .py file) from a Linux console

Maybe the title is not very clear; let me elaborate.
I have a Python script that opens a PPM file, applies a chosen filter (rotations...), and creates a new picture. Up to here everything works fine.
But I want to do the same thing through a Linux console, like:
ppmfilter.py ROTD /path/imageIn.ppm /path/imageOut.ppm
Here ROTD is the name of the function that applies a rotation.
I don't know how to do this; I'm looking for a library that will allow me to do it.
Looking forward to your help.
P.S.: I'm using Python 2.7.
There is a relatively easy way:
You can look up the global names (functions, variables, etc.) with globals(), which gives you a dictionary of all global symbols. You'll just need to check that the symbol you looked up is callable, and if it is, call it with the arguments from sys.argv:
import sys

def ROTD(infile, outfile):
    # do something
    pass

if __name__ == '__main__':
    symbol = globals().get(sys.argv[1])
    if hasattr(symbol, '__call__'):
        symbol(*sys.argv[2:])
This will pass the program arguments (excluding the script name and the command name) to the function.
EDIT: Please don't forget the error handling; I omitted it for reasons of clarity.
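With this in place, the invocation from the question works directly:
python ppmfilter.py ROTD /path/imageIn.ppm /path/imageOut.ppm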
Use a main() function:
def main():
    # call your function here
    pass

if __name__ == "__main__":
    main()
A nice way to do it would be to define a big dictionary {alias: function} inside your module. For instance:
actions = {
    'ROTD': ROTD,
    'REFL': reflect_image,
    'INVT': invIm,
}
You get the idea. Then take the first command-line argument and interpret it as a key of this dictionary, applying actions[k] to the rest of the arguments.
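A minimal sketch of that dispatch (assuming the actions dictionary above and functions that take input and output paths):
import sys

if __name__ == '__main__':
    key = sys.argv[1]        # e.g. 'ROTD'
    action = actions[key]    # look the function up by its alias
    action(*sys.argv[2:])    # pass the remaining arguments along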
You can define a main section in your ppmfilter.py that does this:
if __name__ == "__main__":
    import sys
    ROTD(sys.argv[1], sys.argv[2])  # change according to the signature of the function
and call it: python ppmfilter.py file1 file2
You can also run python -c in the directory that contains your *.py file:
python -c "import ppmfilter; ppmfilter.ROTD('/path/to/file1', '/path/to/file2')"
