I have a single-file script for operations automation (log file downloads, stopping/starting several containers; the user chooses what to do via command-line arguments) and want to keep the Fabric functions in the same script, along with an argument-parsing class and possibly some others. How do I call Fabric functions from within the same Python script? I do not want to use "fab" as it is.
And as a side note, I'd like to have these calls parallel as well.
This is a model class that would ideally contain all necessary fabric functions:
class fabricFuncs:
    def appstate(self):
        env.hosts = hosts
        run('sudo /home/user/XXX.sh state')
This is the launcher section:
if __name__ == "__main__":
    argParser().argParse()
    fabricFuncs().appstate()
argParser sets variables globally using the command-line arguments specified (just to clarify what that part does).
Sadly, this results in a failure where no hosts are defined (env.hosts should be set inside the function... or is it too late to declare them there?)
EDIT1:
I have tried launching the fabric function using this:
for h in env.hosts:
    with settings(host_string=user + "@" + h):
        fabricFuncs().appstate()
It kind of works. I had hoped, though, that I would be able to parallelize the whole process using the Fabric module as it is (via decorators), without wrapping the whole thing in threading code.
EDIT2:
I have tried this as well:
execute(fabricFuncs().appstate())
Which fails with:
Fatal error: Needed to prompt for the target host connection string (host: None)
Can I put the whole env.hosts list into "settings" above without iterating over it with a "for" loop?
EDIT3:
I have tried editing the Fabric function like this to see if env.hosts is set properly:
class fabricFuncs:
    def appstate(self):
        env.hosts = hosts
        print env.hosts
        run('sudo /home/user/XXX.sh state')
And it prints out correctly, but still the "run" command fails with:
Fatal error: Needed to prompt for the target host connection string (host: None)
Use the execute function:
from fabric.api import execute

argParser().argParse()            # parse the CLI arguments first
execute(fabricFuncs().appstate)   # pass the callable itself; don't call it
If you run the script without the fab command, env.hosts will not be populated.
So if you want to use 'execute', you also have to pass the 'hosts' parameter.
try this:
from fabric.api import execute, run

if __name__ == "__main__":
    hosts = ["host1", "host2"]
    execute(run, 'sudo /home/user/XXX.sh state', hosts=hosts)
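Since the original goal was to run this in parallel without hand-rolled threading, here is a minimal, untested sketch of how Fabric 1.x's parallel decorator combines with execute (the host strings are placeholders):
from fabric.api import execute, parallel, run

@parallel
def appstate():
    # same command as in the question
    run('sudo /home/user/XXX.sh state')

if __name__ == "__main__":
    # execute() runs the task once per host; @parallel makes those runs concurrent
    execute(appstate, hosts=["user@host1", "user@host2"])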
Related
I'm working on cloning a Virtual Machine (VM) in a vCenter environment using this code. It takes command-line arguments for the name of the VM, template, datastore, etc. (e.g. $ clone_vm.py -s <host_name> -p <password> -nossl ....)
I have another Python file where I've been able to list the Datastore volumes in descending order of free_storage. I have stored the datastore with maximum available storage in a variable ds_max. (Let's call this ds_info.py)
I would like to use the ds_max variable from ds_info.py as the value of the datastore command-line argument in clone_vm.py.
I tried importing the os module in ds_info.py and running os.system("python clone_vm.py ....arguments..."), but it did not take the ds_max variable as an argument.
I'm new to coding and am not confident about changing clone_vm.py to take in the datastore with the maximum free storage.
Thank you for taking the time to read through this.
I suspect there is something wrong in your os.system call, but you don't provide it, so I can't check.
Generally it is a good idea to use the current paradigm, and the received wisdom (TM) is that we use subprocess. See the docs, but the basic pattern is:
from subprocess import run
cmd = ["mycmd", "--arg1", "--arg2", "val_for_arg2"]
run(cmd)
Since this is just a list, you can easily drop arguments into it:
var = "hello"
cmd = ["echo", var]
run(cmd)
However, if your other command is in fact a Python script, it is more normal to refactor that script so that the main functionality is wrapped in a function, called main by convention:
# script 2
...

def main(arg1, arg2, arg3):
    do_the_work()

if __name__ == "__main__":
    args = get_sys_args()  # dummy fn
    main(*args)
Then you can simply import script2 from script1 and run the code directly:
# script 1
from script2 import main
args = get_args() # dummy fn
main(*args)
This is 'better' as it doesn't involve spawning a whole new Python process just to run Python code, and it generally results in neater code. But nothing stops you from calling a Python script the same way you'd call anything else.
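To make that concrete for this question, here is a rough sketch of the subprocess route; the "-ds" flag name is purely an assumption, so check which flag clone_vm.py actually defines for the datastore:
# ds_info.py -- hypothetical sketch; "-ds" and the other flag values are assumptions
import sys
from subprocess import run

ds_max = "datastore1"  # in reality, the datastore with the most free space

cmd = [sys.executable, "clone_vm.py", "-s", "host_name", "-nossl", "-ds", ds_max]
run(cmd)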
I’m using Python 3.6 and Fabric 2.4. I’m using Fabric to SSH into a server and run some commands. I need to set an environment variable for the commands being run on the remote server. The documentation indicates that something like this should work:
from fabric import task

@task(hosts=["servername"])
def do_things(c):
    c.run("command_to_execute", env={"KEY": "VALUE"})
But that doesn’t work. Something like this should also be possible:
from fabric import task

@task(hosts=["servername"])
def do_things(c):
    c.config.run.env = {"KEY": "VALUE"}
    c.run("command_to_execute")
But that doesn’t work either. I feel like I’m missing something. Can anyone help?
I was able to do it by setting inline_ssh_env=True and then explicitly setting the environment variable, e.g.:
from fabric import Connection

with Connection(host=hostname, user=username, inline_ssh_env=True) as c:
    c.config.run.env = {"MY_VAR": "this worked"}
    c.run('echo $MY_VAR')
As stated on the Fabric site:
The root cause of this is typically because the SSH server runs non-interactive commands via a very limited shell call: /path/to/shell -c "command" (for example, OpenSSH). Most shells, when run this way, are not considered to be either interactive or login shells; and this then impacts which startup files get loaded.
You can read more on this page link
So what you are trying to do won't work; the solution is to set the environment variable you want explicitly:
from fabric import task

@task(hosts=["servername"])
def do_things(c):
    c.config.run.env = {"KEY": "VALUE"}
    c.run('echo export %s >> ~/.bashrc' % 'ENV_VAR=VALUE')
    c.run('source ~/.bashrc')
    c.run('echo $ENV_VAR')  # to verify whether it's set or not!
    c.run("command_to_execute")
You can try this:
from fabric import Connection, task

@task
def qa(ctx):
    ctx.config.run.env['counter'] = 22
    ctx.config.run.env['conn'] = Connection('qa_host')

@task
def sign(ctx):
    print(ctx.config.run.env['counter'])
    conn = ctx.config.run.env['conn']
    conn.run('touch mike_was_here.txt')
And run:
fab2 qa sign
When creating the Connection object, try adding inline_ssh_env=True.
Quoting the documentation:
Whether to send environment variables “inline” as prefixes in front of command strings (export VARNAME=value && mycommand here), instead of trying to submit them through the SSH protocol itself (which is the default behavior). This is necessary if the remote server has a restricted AcceptEnv setting (which is the common default).
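Putting those two pieces together, a minimal sketch (the host name and command are placeholders from the question) might look like this:
from fabric import Connection

# inline_ssh_env=True makes Fabric prefix the command with
# "export KEY=VALUE && ..." instead of relying on the SSH server's AcceptEnv.
with Connection("servername", inline_ssh_env=True) as c:
    c.run("command_to_execute", env={"KEY": "VALUE"})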
According to that part of the official doc, the connect_kwargs attribute of the Connection object is intended to replace the env dict. I use it, and it works as expected.
I have a saved python script. I run this python script from the command prompt in Windows 10.
This is as simple as navigating to the directory where the script is located and then typing:
python myscript.py
and the script will run fine.
However, sometimes, I want to run this script such that a variable within that script is set to one value and sometimes to another. This variable tells the script which port to operate an API connection through (if this is relevant).
At the moment, I go into the script each time and change the variable to the one that I want and then run the script after that. Is there a way to set the variable at the time of running the script from the command prompt in Windows 10?
Or are there potentially any other efficient solutions to achieve the same flexibility at the time of running?
Thanks
The usual way to do this is with command-line arguments. In fact, passing a port number is, after passing a list of filenames, almost the paradigm case for command-line arguments.
For simple cases, you can handle this in your code with sys.argv:
port = int(sys.argv[1])
Or, if you want a default value:
port = int(sys.argv[1]) if len(sys.argv) > 1 else 12345
Then, to run the program:
python myscript.py 54321
For more complicated cases—when you have multiple flags, some with values, etc.—you usually want to use something like argparse. But you'll probably want to read up a bit on typical command-line interfaces, and maybe look at the arguments of tools you commonly use, before designing your first one. Because just looking at all of the options in argparse without knowing what you want in advance can be pretty overwhelming.
Another option is to use an environment variable. This is more tedious if you want to change it for each run, but if you want to set it once for an entire series of runs in a command-line session, or even set a computer-wide default, it's a lot easier.
In the code, you'd look in os.environ:
port = int(os.environ.get('MYSCRIPT_PORT', 12345))
And then, to set the port (in the Windows command prompt):
set MYSCRIPT_PORT=54321
python myscript.py
You can combine the two: use a command-line argument if present, otherwise fall back to the environment variable, otherwise fall back to a default. Or even add a config file and/or (if you only care about Windows) a registry setting. Python itself does something like this three-step fallback, as do many major servers, but it may be overkill for your simple use case.
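As a rough illustration of that fallback chain (the port numbers are just examples):
import os
import sys

# Command-line argument -> environment variable -> hard-coded default.
if len(sys.argv) > 1:
    port = int(sys.argv[1])
else:
    port = int(os.environ.get('MYSCRIPT_PORT', 12345))
print(port)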
You should look at argparse. Here's an example:
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("-m", help='message to be sent', type=str)
args = parser.parse_args()
print(args.m)
Each argument you create is stored as an attribute on the parsed args object, so you access it in your code the way I did in the print statement.
"args.m" <---- this is the passed argument you want to do stuff with
here was my input/output:
C:\Users\Vinny\Desktop>python argtest.py -m "Hi"
Hi
C:\Users\Vinny\Desktop>
More info on argparse: https://docs.python.org/3/library/argparse.html
You need the argparse library.
https://docs.python.org/3/library/argparse.html
https://docs.python.org/2/library/argparse.html
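For the port use case specifically, a minimal argparse sketch might look like this (the flag name and default are just examples):
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--port", type=int, default=12345, help="API port to use")
args = parser.parse_args()
print(args.port)

# Run as:  python myscript.py --port 54321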
I am new to python & fabric.
I have a Python module, module1.py, which needs to take a command-line parameter; we use argparse inside module1.py to require that parameter.
But we are trying to run the whole program through Fabric. If I specify the argument directly on the command line when running through fab, I get:
mycode.py: error: argument --config_yaml is required
How could I pass the argument through fab?
Thanks!
From what I can see here:
https://github.com/fabric/fabric/blob/master/fabric/main.py#L340
https://github.com/fabric/fabric/blob/master/fabric/main.py#L619
you CAN'T do it: you can't add things to env_options before main.py runs, because your code inside fabfile.py runs after main() has already been processed.
You can, however, do this:
Rename fabfile.py to whatever you want, as long as it's not fabric.py. I called mine fabricc.py:
from fabric.state import env_options, make_option

env_options.append(make_option('--myconfig', dest='config_file', default='default.ini'))

from fabric import main
from fabric.api import task, env

@task
def do_something():
    print('config file: {}'.format(env.config_file))

if __name__ == '__main__':
    main.find_fabfile = lambda x: __file__
    main.main()
now run it:
$ python fabricc.py do_something
config file: default.ini
Done.
or..
$ python fabricc.py do_something --myconfig=somethinelse.ini
config file: somethinelse.ini
Done.
A word of warning: DO NOT call it --config; that is a built-in parameter.
With this you still get everything you love about Fabric; nothing else has changed.
$ python fabricc.py do_something --help
Usage: fab [options] <command>[:arg1,arg2=val2,host=foo,hosts='h1;h2',...] ...
Options:
-h, --help show this help message and exit
-d NAME, --display=NAME
...
-z INT, --pool-size=INT
number of concurrent processes to use in parallel mode
--myconfig=CONFIG_FILE
PS: I don't suggest you do the Fabric override to handle it manually; I'm just showing that it can be done. (The reason I don't condone this is compatibility: if the version changes tomorrow, this might break.) It's best to use the product the way the author meant it to be used.
====================================================================
Also, Fabric is meant to be a fancy task runner -- for lack of a better word. You cannot have command-line arguments on a per-task basis; it wasn't designed to work like that. BUT it was designed to take in task-function arguments:
@task
def do_something(file=None):
    print('config file: {}'.format(file or 'default.ini'))
$ fab do_something:override.ini
config file: override.ini
Done.
and that is what fabric was created to do.
It's meant to be used like this:
@task
def environment(box_env=None):
    ...

@task
def deploy(branch='master'):
    ...

@task
def provision(recreate_if_existing=False, start_nginx=True):
    ...
fab environment:dev deploy:development
fab environment:dev provision:True,False
Or all together:
fab environment:dev provision deploy:development
I know argparse, but not Fabric. From the error it looks like your script defines a '--config_yaml' argparse argument (and makes it required). But Fabric apparently also uses that argument name.
I don't know if Fabric also uses argparse, but it is common for programs like that to strip off the command-line arguments they expect and pass the rest on to your code.
Do you need to use --config_yaml in your script? And why is it set to required=True? argparse allows you to specify a required parameter, but ideally flagged arguments like this are optional. If not given they should have a reasonable default.
I see a --config=... argument in the fabric API, but not a --config_yaml.
I wonder if this section about per-task arguments is relevant
http://docs.fabfile.org/en/1.10/usage/fab.html#per-task-arguments
It sounds like you need to attach the argument to the task name (task:arg=value); fabric doesn't just pass all unknown arguments to the task (which is what I was assuming above). But I don't have fabric installed, so I can't test it.
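For illustration, a rough, untested sketch of the per-task-argument style (the task and argument names here are hypothetical):
# fabfile.py -- hypothetical sketch: take the value as a task argument
# instead of requiring a --config_yaml flag via argparse
from fabric.api import task

@task
def deploy(config_yaml='default.yaml'):
    print('using config: {}'.format(config_yaml))

# Invoked as:
#   fab deploy:config_yaml=prod.yaml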
This question may be related to Use python subprocess module like a command line simulator.
I have written some infrastructure code called my_shell, to which you can pass shell commands of my application. It looks like this:
from subprocess import Popen, PIPE

class ApplicationTestShell(object):

    def __init__(self):
        '''
        Constructor
        '''
        self.play_ground_dir = "/var/tmp/MyAppDir"
        ensure_dir_exists_and_empty(self.play_ground_dir)

    def execute_command(self, command, on_success = None, on_failure = None):
        p = create_shell_process(self, self.play_ground_dir)
        sout, serr = p.communicate(input = command)
        if p.returncode == 0:
            on_success(sout)
        else:
            on_failure(serr)

    def create_shell_process(self, cwd):
        return Popen("/bin/bash", env={WHAT DO I DO HERE?}, cwd=test_dir, stdout=PIPE, stderr=PIPE, stdin=PIPE)
The interesting bit to me here is the env parameter. Python expects a 'map'-like data structure (a dict) of all environment variables. My application requires several variables to be set and exported. The script for setting and exporting them is generated by running, say, '/bin/appload myapp' (assume appload is always available on the path). What I do currently is, when I call p.communicate, the following:
p.communicate(input = "eval `/bin/appload myapp`;" + command)
So basically before running the command I call the infrastructure setup.
Is there any way to do this in a better fashion in Python? I somehow want to push the eval /bin/appload part into the env parameter of the Popen call, OR make it part of the shell creation process.
What are the problems with my current implementation? (I feel it is hacky but I may be wrong)
It depends on how /bin/appload myapp works. If it only guarantees that it will output bash syntax, then parsing that output in Python in order to construct the environment object there is almost certainly more trouble than it's worth (you might need to support parameter and variable expansion, subshells, process substitution, etc, etc). On the other hand, if you are sure that /bin/appload myapp will only ever output lines of the form "VARIABLENAME=someword", then that's pretty trivial to parse in Python and you could move it into your Python code if you like.
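In that simple VARIABLENAME=someword case, a rough, untested sketch of doing the parsing in Python (assuming appload prints one assignment per line and nothing else) might be:
import os
from subprocess import check_output

def appload_env(app):
    # Start from the current environment and layer the appload output on top.
    env = dict(os.environ)
    output = check_output(["/bin/appload", app], universal_newlines=True)
    for line in output.splitlines():
        name, sep, value = line.partition("=")
        if sep:  # skip anything that is not a simple assignment
            env[name.strip()] = value
    return env

# The result could then be passed straight to Popen(..., env=appload_env("myapp")).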
There are an awful lot of different directions you could go with these requirements.
You could capture the output of appload myapp into a tempfile and set the subprocess's $BASH_ENV to that filename; that would cause the shell to source your environment setup before running your command in a way that some might consider cleaner.
You could give your command (with the eval-ing prefix) as the first argument to Popen and pass shell=True, and let Popen do the bash invocation on its own (setting $SHELL explicitly to bash if necessary).
You could use bash's -c option to specify the code to run on the command line rather than via stdin.
You could have a multi-tiered approach by invoking a shell from Python which eval's the appload myapp environment and then exec's another shell underneath it, so that the first doesn't show up in ps listings and the command given to create_shell_process has the shell all to itself (although that shouldn't really matter).
You could do a lot of things, depending on what your concerns are with respect to how the shell is invoked, how it looks in ps listings, whether you want your command to still be run if the appload myapp output produces an error when eval'd, etc. But for a general solution, I think what you have is perfectly fine.
I don't see any real problems with the implementation, besides cosmetic things or minor things that probably only came from copying and pasting the code: create_shell_process doesn't use its cwd parameter, and the on_success and on_failure parameters look like they're optional but the defaults will break things (you can't call None).