I have the following CSV:
ip, arg1, arg2
1.2.3.4, foo, bar
1.3.4.5, baz, bub
I am trying to perform:
For each row in the CSV: SSH into the IP specified,
Execute commands such as apt-get with arg1 for that row
Execute custom commands that will utilize arg2 for that row
Fabric has execute(do_work, hosts=host_list) but I can't really specify the right context for that. So far, I have hacked together something:
from fabric.api import env, execute

arguments = {}

def _deploy():
    print env.host_string, arguments[env.host_string]

def deploy():
    global arguments
    arguments['1.2.3.4'] = ('foo', 'bar')
    arguments['2.3.4.5'] = ('baz', 'bub')
    execute(_deploy, hosts=arguments.keys())
This is printed:
[1.2.3.4] Executing task '_deploy'
1.2.3.4 ('foo', 'bar')
[2.3.4.5] Executing task '_deploy'
2.3.4.5 ('baz', 'bub')
Currently, this hasn't broken anything. Is there a better way or even a better lib for me to do this?
Note: I am not a fan of paramiko because it's too low level.
Not elegant per se, but this is the solution I settled on:
from fabric.api import env, execute, run

def special_echo(matrix):
    key = 'ubuntu@' + env.host
    name = matrix[key]
    run('echo %s `hostname --ip-address`' % name)

A = {}
A['ubuntu@54.219.171.62'] = 'first'
A['ubuntu@52.53.149.140'] = 'second'
A['ubuntu@54.183.255.58'] = 'third'

execute(special_echo, A, hosts=A.keys())
Results in:
[ubuntu@54.219.171.62] Executing task 'special_echo'
[ubuntu@54.219.171.62] run: echo first `hostname --ip-address`
[ubuntu@54.219.171.62] out: first 172.31.1.234
[ubuntu@54.219.171.62] out:
[ubuntu@54.183.255.58] Executing task 'special_echo'
[ubuntu@54.183.255.58] run: echo third `hostname --ip-address`
[ubuntu@54.183.255.58] out: third 172.31.15.36
[ubuntu@54.183.255.58] out:
[ubuntu@52.53.149.140] Executing task 'special_echo'
[ubuntu@52.53.149.140] run: echo second `hostname --ip-address`
[ubuntu@52.53.149.140] out: second 172.31.8.138
[ubuntu@52.53.149.140] out:
This is an edge case in fabric that comes up from time to time. I can show you a strategy I've used in the past, but I'll start with a safer variation of yours.
The env dictionary is designed to store environment data that can be used by other tasks. Any data that should be shared between tasks but that cannot be passed as arguments can go here, so I'd change your code to keep arguments in env:
from fabric.api import env, execute, task

def _deploy():
    print env.host_string, env.arguments[env.host_string]

@task
def deploy():
    env.arguments = {}
    env.arguments['1.2.3.4'] = ('foo', 'bar')
    env.arguments['2.3.4.5'] = ('baz', 'bub')
    execute(_deploy, hosts=env.arguments.keys())
The second strategy is one I've used myself in the past for work like this: pass all the arguments as part of the host string and let the task parse that into usable data. It keeps you from passing a large dictionary around globally, and it may be effective for your use-case:
hosts.csv
1.2.3.4, foo, bar
1.3.4.5, baz, bub
1.4.5.6, complex-args, maybe with spaces
fabfile.py
from fabric.api import env, execute, run, settings, task

def _parse_args(argument_string):
    # you should really use the csv module here to split properly (escapes, etc.)
    args = argument_string.split(', ')
    if len(args) != 3:
        raise ValueError("Only the host and 2 arguments are allowed")
    return args

def _deploy():
    host, arg1, arg2 = _parse_args(env.host_string)
    with settings(host_string=host):
        # run("apt-get '{}'".format(arg1))
        # run("service '{}' restart".format(arg2))
        print "[{}] apt-get '{}'".format(env.host_string, arg1)
        print "[{}] service '{}' restart".format(env.host_string, arg2)

@task
def deploy():
    with open('hosts.csv') as f:
        hosts = [line.strip() for line in f]
    execute(_deploy, hosts=hosts)
Here's the output you'd see using this strategy:
$ fab deploy
[1.2.3.4, foo, bar] Executing task '_deploy'
[1.2.3.4] apt-get 'foo'
[1.2.3.4] service 'bar' restart
[1.3.4.5, baz, bub] Executing task '_deploy'
[1.3.4.5] apt-get 'baz'
[1.3.4.5] service 'bub' restart
[1.4.5.6, complex-args, maybe with spaces] Executing task '_deploy'
[1.4.5.6] apt-get 'complex-args'
[1.4.5.6] service 'maybe with spaces' restart
Done.
Related
I need to run a Python test script against different environments (different URLs), and I need to choose which one to use from the command line. In the future this parameter will be used in a Jenkins job.
script.py:
class TestLogin(unittest.TestCase):
    @allure.step
    def test_LoginValidation(self):
        devURL = "http://url1/admin/login/"
        stagingURL = "http://url2/admin/login/"
        prodURL = "https://url3/admin/login"
        driver.maximize_window()
        driver.implicitly_wait(10)
        driver.get(url)
        lp = LoginPage(driver)
        lp.login("login", "password")
        time.sleep(2)
        driver.quit()
On the command line I need to write
python script.py stagingURL
As a result, the url used by driver.get(url) in test_LoginValidation should be the one I specified on the command line.
You can use argparse to do this:
import argparse

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description='Description')
    parser.add_argument('--dev',
                        dest='dev',
                        action='store_true',
                        help="Help message")
    parser.add_argument('--stage',
                        dest='stage',
                        action='store_true',
                        help="Help message")
    parser.add_argument('--prod',
                        dest='prod',
                        action='store_true',
                        help="Help message")
    parser.set_defaults(dev=True,
                        stage=False,
                        prod=False)
    args = parser.parse_args()

    url = None
    if args.dev:
        url = "http://url1/admin/login/"
    if args.stage:
        url = "http://url2/admin/login/"
    if args.prod:
        url = "https://url3/admin/login"
    # do something with the url
This is one way to do it. You create the arg parameters --dev, --stage, and --prod, and by default --dev is set to true. You can also have no default (just set dev=False).
So next time you can run:
python program.py --dev
python program.py --stage
python program.py --prod
You might want to handle the case where more than one flag is passed.
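One way to enforce that, as a sketch (not part of the original answer), is argparse's mutually exclusive groups, which make the parser itself reject conflicting flags:

```python
import argparse

parser = argparse.ArgumentParser(description='Description')
group = parser.add_mutually_exclusive_group()
group.add_argument('--dev', action='store_true', help="Use the dev URL")
group.add_argument('--stage', action='store_true', help="Use the staging URL")
group.add_argument('--prod', action='store_true', help="Use the prod URL")

# parse_args() with no argument reads sys.argv; an explicit list is handy for testing
args = parser.parse_args(['--stage'])
print('%s %s %s' % (args.dev, args.stage, args.prod))  # False True False
# passing two flags at once, e.g. --dev --prod, makes argparse exit with an error
```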
You can also do it this way:
import argparse

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description='Description')
    parser.add_argument("--env",
                        choices={"dev", "stage", "prod"},
                        help="Some help message.")
    args = parser.parse_args()

    url = None
    if args.env == "dev":
        url = "http://url1/admin/login/"
    elif args.env == "stage":
        url = "http://url2/admin/login/"
    elif args.env == "prod":
        url = "https://url3/admin/login"
    else:
        print("Please specify the environment using --env flag.")

    if url is not None:
        print(url)
Example:
$ python3 test2.py
Please specify the environment using --env flag.
$ python3 test2.py --env prod
https://url3/admin/login
$ python3 test2.py --env stage
http://url2/admin/login/
$ python3 test2.py --env dev
http://url1/admin/login/
$ python3 test2.py --env wrong
usage: test2.py [-h] [--env {stage,dev,prod}]
test2.py: error: argument --env: invalid choice: 'wrong' (choose from 'stage', 'dev', 'prod')
You can read more about argparse here.
I can recommend the click package for creating CLIs. It's really simple, well documented, has a lot of options, and is in my opinion much easier to use than argparse.
A dummy example:
import click

@click.command()
@click.option(
    '--count',
    default=1,
    help='Number of greetings.'
)
@click.option(
    '--name',
    prompt='Your name',
    help='The person to greet.'
)
def hello(**options):
    """Simple program that greets NAME for a total of COUNT times."""
    for x in range(options['count']):
        click.echo('Hello %s!' % options['name'])

if __name__ == '__main__':
    hello()
And what it looks like when run:
$ python hello.py --count=3
Your name: John
Hello John!
Hello John!
Hello John!
It automatically generates nicely formatted help pages:
$ python hello.py --help
Usage: hello.py [OPTIONS]
Simple program that greets NAME for a total of COUNT times.
Options:
--count INTEGER Number of greetings.
--name TEXT The person to greet.
--help Show this message and exit.
You can get the library directly from PyPI:
pip install click
If you want a CLI just to parametrize a unit test, you may consider using @pytest.mark.parametrize, which lets you define multiple sets of arguments and fixtures at the test function or class.
An example:
import pytest

class TestLogin(object):
    @pytest.mark.parametrize("url", [
        "http://url1/admin/login/",
        "http://url2/admin/login/",
        "https://url3/admin/login",
    ])
    def test_LoginValidation(self, url):
        driver.maximize_window()
        driver.implicitly_wait(10)
        driver.get(url)
        lp = LoginPage(driver)
        lp.login("login", "password")
        time.sleep(2)
        driver.quit()
What you're looking for is argparse. That should allow you to do exactly what you're looking for, for example:
import argparse
parser = argparse.ArgumentParser()
parser.add_argument('url', help = 'The URL to use for ...', type = str)
This sets up the url as a required argument to be passed to the function, and sets its type to str (this is the default behavior, but being explicit is good).
You can then extract the arguments using:
args = parser.parse_args()
specified_url = args.url
From here you can proceed as you normally would. If you wish to make the argument optional but with a default value, that is also possible using argparse.
Using environment variables works but is much harder to debug, especially if you expect this script to be run by another piece of software; argparse is much more reliable.
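As a sketch of that optional-with-default variant (the default URL here is just the dev URL from the question):

```python
import argparse

parser = argparse.ArgumentParser()
# nargs='?' makes the positional optional; the default is used when it is omitted
parser.add_argument('url', nargs='?', type=str,
                    default='http://url1/admin/login/',
                    help='The URL to use for the test run')

args = parser.parse_args([])                            # nothing passed
print(args.url)                                         # http://url1/admin/login/

args = parser.parse_args(['https://url3/admin/login'])  # explicit value
print(args.url)                                         # https://url3/admin/login
```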
It's almost certainly easier to do this in Jenkins than it is to do it in Python. Additionally it seems to make sense that your devops pipeline controls the location of dev, staging, and release URIs (at least as much as it is sensible to do that).
def targetUrl = ''
switch (env.TARGET) {
    case 'dev':
        targetUrl = "http://url1/admin/login/"
        break
    // etc
}
sh "python script.py ${targetUrl}"
then have the python script look at sys.argv[1] (which is the first argument passed to it) and use that URL directly.
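The receiving script can then be a minimal sketch like this (pick_url is an illustrative helper name, not from the original answer):

```python
import sys

def pick_url(argv):
    # argv[0] is the script name; argv[1] is the URL Jenkins passed in
    if len(argv) < 2:
        raise SystemExit("usage: script.py <url>")
    return argv[1]

# what Jenkins's `sh "python script.py ${targetUrl}"` amounts to:
print(pick_url(['script.py', 'http://url2/admin/login/']))  # http://url2/admin/login/
```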
demo.py
from fabric.api import env, run, execute

env.hosts = ['10.1.1.100', '10.1.1.200']
env.remotePath = {'10.1.1.100': '/home', '10.1.1.200': '/var'}
env.parallel = True

def mytask(remotePath):
    run('ls %s' % remotePath)

def test():
    execute(mytask, env.remotePath[env.host])
fab -f demo.py test
I want to execute ls /home on 10.1.1.100 and ls /var on 10.1.1.200 in parallel using the @parallel decorator. Is there any way to make this possible?
Use host_string to get the current host, then the command/argument you want to use.
from fabric.api import env, parallel, run

@parallel
def mytask():
    host_ip = env.host_string
    remote_path = env.remotePath[host_ip]
    run('ls %s' % remote_path)
According to Fabric's API doc: host_string
Defines the current user/host/port which Fabric will connect to when
executing run, put and so forth. This is set by fab when iterating
over a previously set host list, and may also be manually set when
using Fabric as a library.
Hope this helps :)
I need to write a command-line application, like a shell, so it will include commands etc. The thing is, I don't know how to pass parameters to the functions in a module. For example:
User writes: function1 folder1
The program should then pass the 'folder1' parameter to the function1 function and run it. It also has to support other functions with different parameters, e.g.:
User input: function2 folder2 --exampleparam
How to make this to work? I mean, I could just write a module, import it in python and just use the python console, but this is not the case. I need a script that takes command input and runs it.
I tried to use eval(), but that doesn't solve the problem with params. Or maybe it does but I don't see it?
The first part of your problem -- parsing the command line -- can be solved with argparse.
The second part -- converting the string name of a function into a function call -- can be done with exec or a dispatching dict which maps from strings to function objects.
I would recommend NOT using exec for this, since
allowing a user to call arbitrary Python functions from the command line might be dangerous. Instead, make a whitelist of allowable functions:
import argparse

def foo(path):
    print('Running foo(%r)' % (path, ))

def bar(path):
    print('Running bar(%r)' % (path, ))

dispatch = {
    'foo': foo,
    'bar': bar,
}

parser = argparse.ArgumentParser()
parser.add_argument('function')
parser.add_argument('arguments', nargs='*')
args = parser.parse_args()
dispatch[args.function](*args.arguments)
% test.py foo 1
Running foo('1')
% test.py bar 2
Running bar('2')
% test.py baz 3
KeyError: 'baz'
The above works when the command is typed into the command-line itself. If the command is being typed into stdin, then we'll need to do something a bit different.
A simple way would be to call raw_input to grab the string from stdin. We could then parse the string with argparse, as we did above:
shmod.py:
import argparse

def foo(path):
    print('Running foo(%r)' % (path, ))

def bar(path):
    print('Running bar(%r)' % (path, ))

dispatch = {
    'foo': foo,
    'bar': bar,
}

def parse_args(cmd):
    parser = argparse.ArgumentParser()
    parser.add_argument('function')
    parser.add_argument('arguments', nargs='*')
    args = parser.parse_args(cmd.split())
    return args
main.py:
import shmod

while True:
    cmd = raw_input('> ')
    args = shmod.parse_args(cmd)
    try:
        shmod.dispatch[args.function](*args.arguments)
    except KeyError:
        print('Invalid input: {!r}'.format(cmd))
Another, more sophisticated way to handle this is to use the cmd module, as @chepner mentioned in the comments.
from cmd import Cmd

class MyInterpreter(Cmd):
    prompt = '> '

    def do_prompt(self, line):
        "Change the interactive prompt"
        self.prompt = line + ': '

    def do_EOF(self, line):
        return True

    def do_foo(self, line):
        print('Running foo {l}'.format(l=line))

    def do_bar(self, line):
        print('Running bar {l}'.format(l=line))

if __name__ == '__main__':
    MyInterpreter().cmdloop()
For more information on how to use the cmd module, see Doug Hellman's excellent tutorial.
Running the code above yields a result like this:
% test.py
> foo 1
Running foo 1
> foo 1 2 3
Running foo 1 2 3
> bar 2
Running bar 2
> baz 3
*** Unknown syntax: baz 3
optparse has been deprecated since Python 2.7, and argparse is much more flexible anyway.
unutbu's approach is safe, but when you provide a whitelist, I would suggest letting the user know which functions are accepted:
dispatch = {
    'foo': foo,
    'bar': bar,
}

parser = argparse.ArgumentParser()
parser.add_argument('function', choices=dispatch.keys())
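Putting that together into a runnable sketch (foo and bar are the placeholder functions from the earlier answer, and returning strings instead of printing is just to make the result easy to check):

```python
import argparse

def foo(path):
    return "Running foo(%r)" % (path,)

def bar(path):
    return "Running bar(%r)" % (path,)

dispatch = {
    'foo': foo,
    'bar': bar,
}

parser = argparse.ArgumentParser()
# sorted() keeps the choice list stable in the usage/help text
parser.add_argument('function', choices=sorted(dispatch))
parser.add_argument('arguments', nargs='*')

args = parser.parse_args(['foo', '1'])
print(dispatch[args.function](*args.arguments))   # Running foo('1')
# an unknown name, e.g. 'baz', now produces a clean argparse error
# listing the allowed choices instead of a KeyError traceback
```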
FYI: if the parsing is not too complicated, docopt looks like a very nice package
How about sys.argv? For more advanced stuff check out argparse. optparse seems deprecated now, but there are a lot of answers here about this question.
Take a look at the optparse module in python. It's exactly what you would need:
http://docs.python.org/2/library/optparse.html
Or you can write your own custom opt-parser (minimalistic though)
def getopts(argv):
    opts = {}
    while argv:
        if argv[0][0] == '-':           # find "-name value" pairs
            opts[argv[0]] = argv[1]     # dict key is "-name" arg
            argv = argv[2:]
        else:
            argv = argv[1:]
    return opts

if __name__ == '__main__':
    from sys import argv  # example client code
    myargs = getopts(argv)
    # DO something based on your logic here
But if your script needs to run on Python 3 and beyond, you should consider the argparse module.
Hope that helps.
Take a look at optparse. It can help with passing and receiving shell-style parameters for Python scripts.
Update:
Apparently optparse is deprecated now and argparse is the preferred option for parsing command-line arguments.
import sys

def main(arg):
    return arg

print main(sys.argv[1])
Here sys.argv[0] is the .py file you're running, and every element after it is an argument. You can check the length of the list, iterate through it, parse the values as necessary, and pass the right things to each function.
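A minimal sketch of that iteration (handle_args is an illustrative name):

```python
import sys

def handle_args(args):
    # args plays the role of sys.argv[1:]: everything after the script name
    for position, value in enumerate(args, start=1):
        print('argument %d: %s' % (position, value))
    return len(args)

if __name__ == '__main__':
    handle_args(sys.argv[1:])
```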
When I run this fabfile.py...
from fabric.api import env, run, local, cd

def setenv(foo):
    env.hosts = ['myhost']

def mycmd(foo):
    setenv(foo)
    print(env.hosts)
    run('ls')
with this command fab mycmd:bar. I get this output...
['myhost']
No hosts found. Please specify (single) host string for connection:
What, what?! I don't get it? I've set the env.hosts and it seems to be valid "inside" the mycmd function, but for some reason that run command doesn't know about the hosts I've specified.
Color me confused. Any help would be appreciated!
@Chris, the reason you're seeing this behavior is that the host list is constructed before the task function is called. So even though you're changing env.hosts inside the function, it is too late to have any effect.
Whereas the command fab setenv:foo mycmd:bar, would have resulted in something you would have expected:
$ fab setenv:foo mycmd:bar
[myhost] Executing task 'mycmd'
['myhost']
[myhost] run: ls
This is the same as the accepted answer, but because of the way setenv is defined, an argument is needed.
Another example:
from fabric.api import env, run, local, cd

env.hosts = ['other_host']

def setenv(foo):
    env.hosts = ['myhost']

def mycmd(foo):
    setenv(foo)
    print('env.hosts inside mycmd: %s' % env.hosts)
    run('ls')
The output of this is:
$ fab mycmd:bar
[other_host] Executing task 'mycmd'
env.hosts inside mycmd: ['myhost']
[other_host] run: ls
Fatal error: Name lookup failed for other_host
Underlying exception:
(8, 'nodename nor servname provided, or not known')
Aborting.
As you can see, the host-list is already set to ['other_host', ] when fabric starts to execute mycmd.
The way you are doing it is not normally how I would use Fabric.
from fabric.api import *

def hostname():
    env.hosts = ['myhosts']

def mycmd():
    print env.hosts
    run('ls -l')
To run this I would then do
fab hostname mycmd
this allows you to separate which host/hosts you want to perform the command on.
hope it helps.
Have you tried to used the hosts decorator?
from fabric.api import env, run, hosts

@hosts('myhost')
def mycmd(foo):
    print(env.hosts)
    run('ls')
I know this question is super old, but in case someone stumbles across this: you don't need to run this as a fabfile per se (the file doesn't need to be called "fabfile.py", and the command doesn't need to be fab setenv:foo mycmd:bar). Since you are importing the needed fab elements, you can call the file anything you want (let's call it "testfile.py") and simply use the execute function in the file. That makes your command python testfile.py.
Inside the testfile, set everything up like you would normally, but start the function using the execute keyword. Your file would look something like this:
from fabric.api import env, execute, run

def setenv(foo):
    env.hosts = ['myhost']
    execute(mycmd, foo)

def mycmd(bar):
    run('ls')

setenv('bar')
It's important to note that execute looks like a regular function call: it calls the task and passes the arguments as a comma-separated list. You can find more information here.
So you'd start your program which would first call setenv, then setenv would execute the mycmd function. With this, you can also set multiple hosts within the same array. Something like:
env.hosts=['myhost1','myhost2','myhost3']
I have figured out how to make it work:
from fabric.api import env, run, local, cd

def setenv(foo):
    env.hosts = ['myhost']
    return env

def mycmd(foo):
    env = setenv(foo)
    print(env.hosts)
    run('ls')
How can I pass a parameter to a fabric task when calling "fab" from the command line? For example:
def task(something=''):
    print "You said %s" % something
$ fab task "hello"
You said hello
Done.
Is it possible to do this without prompting with fabric.operations.prompt?
Fabric 2 task arguments documentation:
http://docs.pyinvoke.org/en/latest/concepts/invoking-tasks.html#task-command-line-arguments
Fabric 1.X uses the following syntax for passing arguments to tasks:
fab task:'hello world'
fab task:something='hello'
fab task:foo=99,bar=True
fab task:foo,bar
You can read more about it in Fabric docs.
In Fabric 2, simply add the argument to your task function. For example, to pass the version argument to task deploy:
@task
def deploy(context, version):
    ...
Run it as follows:
fab -H host deploy --version v1.2.3
Fabric even documents the options automatically:
$ fab --help deploy
Usage: fab [--core-opts] deploy [--options] [other tasks here ...]
Docstring:
none
Options:
-v STRING, --version=STRING
Fabric 1.x arguments are understood with very basic string parsing, so you have to be a bit careful with how you send them.
Here are a few examples of different ways to pass arguments to the following test function:
@task
def test(*args, **kwargs):
    print("args:", args)
    print("named args:", kwargs)
$ fab "test:hello world"
('args:', ('hello world',))
('named args:', {})
$ fab "test:hello,world"
('args:', ('hello', 'world'))
('named args:', {})
$ fab "test:message=hello world"
('args:', ())
('named args:', {'message': 'hello world'})
$ fab "test:message=message \= hello\, world"
('args:', ())
('named args:', {'message': 'message = hello, world'})
I use double quotes here to take the shell out of the equation, but single quotes may be better on some platforms. Also note the escapes for characters that fabric considers delimiters.
More details in the docs:
http://docs.fabfile.org/en/1.14/usage/fab.html#per-task-arguments
You need to pass all Python variables as strings, especially if you are using subprocess to run the scripts, or you will get an error. You will need to convert the variables back to int/boolean types separately.
def print_this(var):
    print str(var)

fab print_this:'hello world'
fab print_this:'hello'
fab print_this:'99'
fab print_this:'True'
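Since every value arrives as a string, the task can convert explicitly; note that bool('False') is truthy in Python, so the text has to be compared. A sketch (to_bool is an illustrative helper, not a fabric API):

```python
def to_bool(value):
    # bool('False') is True, so parse the text explicitly
    return str(value).strip().lower() in ('true', '1', 'yes')

def print_this(var='', count='1', flag='False'):
    count = int(count)     # fabric hands '99' over as the string '99'
    flag = to_bool(flag)   # and 'True'/'False' as text
    for _ in range(count):
        print('%s (flag=%s)' % (var, flag))

# invoked as, e.g.: fab print_this:'hello',count='2',flag='True'
```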
If someone is looking to pass parameters from one task to another in fabric2, just use the environment dictionary for that:
from fabric import Connection, task

@task
def qa(ctx):
    ctx.config.run.env['counter'] = 22
    ctx.config.run.env['conn'] = Connection('qa_host')

@task
def sign(ctx):
    print(ctx.config.run.env['counter'])
    conn = ctx.config.run.env['conn']
    conn.run('touch mike_was_here.txt')
And run:
fab2 qa sign