I'm using fabric to execute some remote commands on several hosts by setting:
env.hosts = [host1, host2, ...]
There are several tasks I want to perform on some of the hosts, and some I don't.
Is there a way I can retrieve the current hostname the task is executing on?
Any help would be great.
Thanks,
Meny
Why not use different host roles?
from fabric.api import env, roles, run

env.roledefs['webservers'] = ['www1', 'www2', 'www3']

@roles('webservers')
def my_task():
    run('ls -l')
Additionally, you can get the current executing host from the env dictionary:
def my_task():
    print 'Currently executing on {0}'.format(env.host)
eclaird's answer definitely helped me adopt a better practice when executing tasks on several hosts with different roles.
Though, while playing with it, it turned out that within the task, env.host gives the current host name.
for example:
@parallel(pool_size=len(env.hosts))
def upload_to_s3(to):
    awscreds = 'some credentials...'
    cmd = '%s aws s3 sync /mnt/backup %s/%s/' % (awscreds, to, env.host)
    run(cmd)
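For the original question (running some tasks only on some of env.hosts), another option is to check env.host inside the task itself. A minimal sketch, with hypothetical host names and command:
from fabric.api import env, run

# hypothetical subset of env.hosts that this task should run on
BACKUP_HOSTS = ['host1', 'host2']

def backup_task():
    if env.host not in BACKUP_HOSTS:
        return  # skip hosts this task does not apply to
    run('uptime')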
Related
I want to use Fabric and run a command locally, without having to establish any additional connections.
How do I do this in fabric 2? ... the documentation doesn't seem to give any example.
The design decision to drop the local command in Fabric 2 makes this more difficult, but I was able to simulate it by using Context from Invoke instead of Connection:
from fabric import Connection, task
from invoke.context import Context

@task
def hostname(c):
    c.run('hostname')

@task
def test(c):
    conn = Connection('user@host')
    hostname(conn)

    local_ctx = Context(c.config)  # can be passed into @task;
                                   # Connection is a subclass of Context
    hostname(local_ctx)
After several different attempts and spending lots of time, I found this elegant solution for starting a server (pty=True) and running local commands.
fabfile.py
from fabric import task

@task
def env_test(c):
    c.run("env", replace_env=False)

@task
def go(c):
    c.run("manage.py runserver", replace_env=False, pty=True)
Please be aware again, these two commands are only meant for local development tasks!
Further Reading: Fabric2 Connections, Upgrading from 1.x
run, sudo, and local are done the same way:
from fabric import Connection

cn = Connection('scott@104.131.61.12')  # presumes ssh keys were exchanged
cn.run('ls -al')    # assuming ssh to linux server - as scott
cn.sudo('whoami')   # as root
cn.local('echo ---------- now from local')
cn.local('dir /w')  # assuming client is windows
I am adding @TheRealChx101's comment as an answer because I ran into trouble with Connection.local.
Not all environment variables got into the pty, so some of my scripts did not work properly.
With the from invoke import run as local stanza (Invoke's run aliased as local, instead of Fabric's local), everything worked fine.
# -*- coding: utf-8 -*-
from fabric import task
from invoke import run as local

@task(default=True)
def testwxmsg(c):
    local("pytest --reuse-db --no-migrations tests/weixin/test_release_accrual.py")
This is similar to the answer by @phoibos, but I wanted to show that @task is not needed.
import sys
from fabric import Connection
from invoke.context import Context

target_host = sys.argv[1]

if target_host == 'localhost':
    ctx = Context()
else:
    ctx = Connection(target_host)

ctx.run('hostname', echo=False, hide=None)

if isinstance(ctx, Connection):
    ctx.close()
Local:
> python demo.py localhost
MyComputerName
Remote:
> python demo.py demo.example.com
demo.example.com
I was looking for a way to execute tasks on multiple servers (yes, I'm aware of the roledefs -R and hosts -H options), but I need some extra parameters, like a different user for each host, and I wanted to keep it clean and tidy. I keep my hosts as function definitions (as suggested somewhere on Stack Overflow) so that I can also execute a task on just one of them, like this:
def test():
    env.user = 'root'
    env.host = 'myapp-test.com'
I started by modifying env.tasks, but it turned out they are iterated over by a generator, and access to them through the context manager is only for viewing (as in the fab docs).
I wanted to keep my "function-like hosts", so I ended up modifying env.hosts dynamically and writing a decorator which updates the server-specific data depending on the current env.host (it will override the default task decorator in the future). Working code below (I had to change names in the code for security reasons; I hope I didn't break any functionality):
APP_SERVERS = {
    'test': {
        'envname': 'test',
        'user': 'deploy_user',
        'host': 'myapp-test.com',
        'host_string': 'myapp-test.com',
        'path': '/opt/myapp/test/',
        'www_root': 'http://myapp-test.com/',
        'retries_before_killing': 3,
        'retry_sleep': 2
    },
    'staging': {
        'envname': 'staging',
        'user': 'deploy_user',
        'host': 'myapp-staging.com',
        'host_string': 'myapp-staging.com',
        'path': '/opt/myapp/staging/',
        'www_root': 'http://myapp-staging.com/',
        'retries_before_killing': 3,
        'retry_sleep': 2
    },
    'uat': {
        'envname': 'uat',
        'user': 'deploy_user',
        'host': 'myapp-uat.com',
        'host_string': 'myapp-uat.com',
        'path': '/opt/myapp/uat/',
        'www_root': 'http://myapp-uat.com/',
        'retries_before_killing': 3,
        'retry_sleep': 2
    },
    'live1': {
        'envname': 'live1',
        'user': 'deploy_user',
        'host': 'myapp-live1.com',
        'host_string': 'myapp-live1.com',
        'path': '/opt/myapp/live1/',
        'www_root': 'http://myapp-live1.com/',
        'retries_before_killing': 1,
        'retry_sleep': 1
    },
    'live2': {
        'envname': 'live2',
        'user': 'deploy_user',
        'host': 'myapp-live2.com',
        'host_string': 'myapp-live2.com',
        'path': '/opt/myapp/live2/',
        'www_root': 'http://myapp.com/',
        'retries_before_killing': 1,
        'retry_sleep': 1
    }
}
TEST_HOSTS = ['test','staging','uat']
LIVE_HOSTS = ['live1','live2']
def test():
    env.update(dict(APP_SERVERS['test']))

def staging():
    env.update(dict(APP_SERVERS['staging']))

def uat():
    env.update(dict(APP_SERVERS['uat']))

def live1():
    env.update(dict(APP_SERVERS['live1']))

def live2():
    env.update(dict(APP_SERVERS['live2']))
# GROUPS OF SERVERS DEFINITION
def live_servers():
    env['hosts'] = [APP_SERVERS[a]['host'] for a in LIVE_HOSTS]

def test_servers():
    env['hosts'] = [APP_SERVERS[a]['host'] for a in TEST_HOSTS]
def env_update(func):
    def func_wrapper(*args, **kwargs):
        if not len(env.hosts):
            return func(*args, **kwargs)
        else:
            env.update(dict(APP_SERVERS[filter(lambda x: APP_SERVERS[x]['host'] == env.host, APP_SERVERS)[0]]))
            func(*args, **kwargs)
    return func_wrapper

@env_update
def pull_commits():
    # some_code
    run('uptime')
This gives me the possibility to run a task on a group of servers, fab live_servers pull_commits, and also on a single one, fab live1 pull_commits.
I know I could also duplicate the task per server, as in fab live1 pull_commits live2 pull_commits, but I believe fabric was written for distributed systems which have different app paths, users, etc.
So my question is: is there an easier way to do this, something built into fabric (roledefs with extra dict keys didn't work for me either)? Or am I missing some fabric functionality?
I want to keep these simple single/multiple host deployment commands like: fab live_servers pull_commits, fab test pull_commits.
Rather than using your current approach, consider just using the host string as the key into the APP_SERVERS dictionary, and then load the information as required inside the tasks (including any required updates to the fabric environment). Making this adjustment means you should also be able to get roles working.
For example:
@task
def pull_commits():
    hostinfo = APP_SERVERS[env.host_string]
    # ... more code
    run(...)
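A slightly fuller sketch of that idea, assuming APP_SERVERS has been re-keyed by host string (e.g. 'myapp-test.com') and reusing run('uptime') and the stored path from the question:
from fabric.api import env, task, run, cd

@task
def pull_commits():
    hostinfo = APP_SERVERS[env.host_string]
    with cd(hostinfo['path']):
        run('uptime')
This could then be run with, for example, fab -H myapp-test.com pull_commits.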
One further suggestion is to consider using an ssh config file (~/.ssh/config) to define ssh aliases for all of your machines. This puts all of your host/host_string/username information in a central location, has the advantage that you can then refer to a host by a single meaningful name, and simplifies APP_SERVERS.
# contents of $HOME/.ssh/config

Host test
    HostName myapp-test.com
    User deploy-user

Host staging
    HostName myapp-staging.com
    User deploy-user

Host uat
    HostName myapp-uat.com
    User deploy-user

Host live1
    HostName myapp-live1.com
    User deploy-user

Host live2
    HostName myapp-live2.com
    User deploy-user
As per a more traditional fabric task model, you should now be able to do things like:
# single host
$ fab -H live1 pull_commits
# multiple hosts
$ fab -H test,staging pull_commits
Or roles (as per the fabric documentation).
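For example, a minimal sketch of the roles variant, reusing the ssh aliases above (the group names are arbitrary) and the run('uptime') body from the question:
from fabric.api import env, task, run

env.use_ssh_config = True   # so the aliases from ~/.ssh/config resolve
env.roledefs = {
    'test_servers': ['test', 'staging', 'uat'],
    'live_servers': ['live1', 'live2'],
}

@task
def pull_commits():
    run('uptime')
Then: fab -R live_servers pull_commits.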
I have some fabric tasks in my fabfile and I need to initialize the env variable before their execution. I'm trying to use a decorator; it works, but fabric always says "No hosts found. Please specify (single) host string for connection", even though if I print the contents of my env variable everything looks fine.
Also, I call my tasks from another python script.
from fabric.api import *
from instances import find_instances

def init_env(func):
    def wrapper(*args, **kwargs):
        keysfolder = 'keys/'
        env.user = 'admin'
        env.key_filename = '%skey_%s_prod.pem' % (keysfolder, args[0])
        env.hosts = find_instances(args[1])
        return func(args[0], args[1])
    return wrapper

@init_env
def restart_apache2(region, groupe):
    print(env.hosts)
    run('/etc/init.d/apache2 restart')
    return True
My script which calls the fabfile:
from fabfile import init_env, restart_apache2
restart_apache2('eu-west-1', 'apache2')
Output of the print in restart_apache2:
[u'10.10.0.1', u'10.10.0.2']
Any idea why my task restart_apache2 doesn't use the env variable?
Thanks
EDIT:
What's interesting is that if, in my script which calls the fabfile, I use settings from fabric.api and set a host IP, it works. This shows that my decorator has correctly initialized the env variable, because the key and user are passed to fabric. It's only env.hosts that is not read by fabric...
EDIT2:
I can reach my goal by using settings from fabric.api, like this:
@init_env
def restart_apache2(region, groupe):
    for i in env.hosts:
        with settings(host_string='%s@%s' % (env.user, i)):
            run('/etc/init.d/apache2 restart')
    return True
Bonus question: is there a solution to use env.hosts directly, without settings?
I'm guessing here a little, but I'm assuming you've got into trouble because you're trying to solve two problems at once.
The first issue relates to the issue of multiple hosts. Fabric includes the concept of roles, which are just groups of machines that you can issue commands to in one go. The information from the find_instances function could be used to populate this data.
from fabric.api import *
from something import find_instances

env.roledefs = {
    'eu-west-1': find_instances('eu-west-1'),
    'eu-west-2': find_instances('eu-west-2'),
}

@task
def restart_apache2():
    run('/etc/init.d/apache2 restart')
The second issue is that you have different keys for different groups of servers. One way to resolve this problem is to use an SSH config file, so that you don't have to mix key and user account details with your fabric code. You can either add an entry per instance to your ~/.ssh/config, or you can use a local SSH config (env.use_ssh_config and env.ssh_config_path).
Host instance00
    User admin
    IdentityFile keys/key_instance00_prod.pem

Host instance01
    User admin
    IdentityFile keys/key_instance01_prod.pem

# ...
On the command line, you should then be able to issue commands like:
fab restart_apache2 -R eu-west-1
Or, you can still do single hosts:
fab restart_apache2 -H instance00
In your script, these two are equivalent to the following execute calls:
from fabric.api import execute
from fabfile import restart_apache2

execute(restart_apache2, roles=['eu-west-1'])
execute(restart_apache2, hosts=['instance00'])
I am using fabric to write my rsync wrapper; the variable env.host_string is set by execute() when running a task. To get env.host_string populated, I run('pwd') first, and then run rsync.
Is it possible to make sure the user has set env.hosts before some checkpoint is reached, such as the condition src == env.host_string?
from fabric.api import run, env, task, abort
from string import Template

@task
def sync_to_same_dir(src, path):
    cmd_template = Template("""rsync --dry-run -e ssh -avr $user@$src_host:$path/ $path/""")
    if path.endswith("/"):
        path = path[:-1]
    run("pwd")  # it's a workaround
    if src == env.host_string:
        abort("cannot rsync to the same host")
    cmd = cmd_template.substitute(path=path, user=env.user, src_host=src)
    run(cmd)
I found the answer in fabric's source code. There is a simple idea behind it: how does run check for a host as needed?
@needs_host
def put(local_path=None, remote_path=None, use_sudo=False,
        mirror_local_mode=False, mode=None):
    """
    Upload one or more files to a remote host.
    """
Tracing needs_host: it will prompt to ask for a host when the user doesn't assign any hosts:
No hosts found. Please specify (single) host string for connection:
We can rewrite the task as:
from fabric.network import needs_host

@task
@needs_host
def the_task_needs_host(): pass
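Applied to the task from the question, it would look roughly like this; the run('pwd') workaround can then be dropped:
from fabric.api import run, env, task, abort
from fabric.network import needs_host
from string import Template

@task
@needs_host
def sync_to_same_dir(src, path):
    cmd_template = Template("rsync --dry-run -e ssh -avr $user@$src_host:$path/ $path/")
    if path.endswith("/"):
        path = path[:-1]
    # env.host_string is guaranteed to be set by @needs_host at this point
    if src == env.host_string:
        abort("cannot rsync to the same host")
    run(cmd_template.substitute(path=path, user=env.user, src_host=src))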
What are you trying to do? A task knows what host it's using without needing any other fabric calls:
fab -f host-test.py foo
[98.98.98.98] Executing task 'foo'
98.98.98.98
98.98.98.98
Done.
And here is the example script:
#!/usr/bin/env python
from fabric.api import *

env.user = 'mgoose'

@task
@hosts("98.98.98.98")
def foo():
    print(env.host)
    print(env.host_string)
So you don't have to do anything special to know what host your task is on.
How do you configure fabric to connect to remote hosts using SSH keyfiles (for example, Amazon EC2 instances)?
Finding a simple fabfile with a working example of SSH keyfile usage isn't easy for some reason. I wrote a blog post about it (with a matching gist).
Basically, the usage goes something like this:
from fabric.api import *

env.hosts = ['host.name.com']
env.user = 'user'
env.key_filename = '/path/to/keyfile.pem'

def local_uname():
    local('uname -a')

def remote_uname():
    run('uname -a')
The important part is setting the env.key_filename variable, so that the Paramiko configuration can look for it when connecting.
It's also worth mentioning here that you can use the command-line args for this:
fab command -i /path/to/key.pem [-H [user@]host[:port]]
Another cool feature available as of Fabric 1.4 - Fabric now supports SSH configs.
If you already have all the SSH connection parameters in your ~/.ssh/config file, Fabric will natively support it; all you need to do is add:
env.use_ssh_config = True
at the beginning of your fabfile.
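For example, with an entry like this in ~/.ssh/config (the alias and paths here are made up), Fabric can pick up the host name, user and, usually, the key from the SSH config:
# ~/.ssh/config
Host myec2
    HostName host.name.com
    User user
    IdentityFile /path/to/keyfile.pem
and the fabfile then only needs the alias:
from fabric.api import env, run

env.use_ssh_config = True
env.hosts = ['myec2']

def remote_uname():
    run('uname -a')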
For fabric2, use the following in your fabfile:
import os

from fabric import task, Connection

@task
def staging(ctx):
    ctx.name = 'staging'
    ctx.user = 'ubuntu'
    ctx.host = '192.1.1.1'
    ctx.connect_kwargs.key_filename = os.environ['ENV_VAR_POINTS_TO_PRIVATE_KEY_PATH']

@task
def do_something_remote(ctx):
    with Connection(ctx.host, ctx.user, connect_kwargs=ctx.connect_kwargs) as conn:
        conn.sudo('supervisorctl status')
and run it with:
fab staging do_something_remote
UPDATE:
For multiple hosts (it also works with just one host) you can use this:
import os

from fabric2 import task, SerialGroup

@task
def staging(ctx):
    conns = SerialGroup(
        'user@10.0.0.1',
        'user@10.0.0.2',
        connect_kwargs={
            'key_filename': os.environ['PRIVATE_KEY_TO_HOST']
        })
    ctx.CONNS = conns
    ctx.APP_SERVICE_NAME = 'google'

@task
def stop(ctx):
    for conn in ctx.CONNS:
        conn.sudo('supervisorctl stop ' + ctx.APP_SERVICE_NAME)
and run it with fab or fab2:
fab staging stop
For me, the following didn't work:
env.user=["ubuntu"]
env.key_filename=['keyfile.pem']
env.hosts=["xxx-xx-xxx-xxx.ap-southeast-1.compute.amazonaws.com"]
or
fab command -i /path/to/key.pem [-H [user@]host[:port]]
However, the following did:
env.key_filename=['keyfile.pem']
env.hosts=["ubuntu#xxx-xx-xxx-xxx-southeast-1.compute.amazonaws.com"]
or
env.key_filename=['keyfileq.pem']
env.host_string="ubuntu#xxx-xx-xxx-xxx.ap-southeast-1.compute.amazonaws.com"
I had to do this today. My .py file was as simple as possible, like the one posted in @YuvalAdam's answer, but I still kept getting prompted for a password...
Looking at the paramiko (the library used by fabric for ssh) log, I found the line:
Incompatible ssh peer (no acceptable kex algorithm)
I updated paramiko with:
sudo pip install paramiko --upgrade
And now it's working.
None of these answers worked for me on py3.7, fabric2.5.0 and paramiko 2.7.1.
However, using the pkey attribute described in the documentation does work: http://docs.fabfile.org/en/2.5/concepts/authentication.html#private-key-objects
from paramiko import RSAKey
from fabric import Connection

# inside a task, where ctx and user are already defined:
ctx.connect_kwargs.pkey = RSAKey.from_private_key_file('path_to_your_aws_key')
with Connection(ctx.host, user, connect_kwargs=ctx.connect_kwargs) as conn:
    ...  # etc.
As stated above, Fabric will support .ssh/config file settings after a fashion, but using a pem file for ec2 seems to be problematic. In other words, a properly set up .ssh/config file will work from the command line via 'ssh servername', but fail to work with 'fab sometask' when env.hosts = ['servername'].
This was overcome by specifying the env.key_filename='keyfile' in my fabfile.py and duplicating the IdentityFile entry already in my .ssh/config.
This could be an issue with either Fabric or paramiko; in my case it was Fabric 1.5.3 and Paramiko 1.9.0.
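For reference, the workaround amounts to something like this in fabfile.py (the key path is hypothetical):
from fabric.api import env

env.use_ssh_config = True
# duplicate the IdentityFile entry already listed in ~/.ssh/config,
# since the .pem file is not always picked up from there
env.key_filename = '/path/to/keyfile.pem'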