Does Python Fabric support setting env.hosts dynamically?

I want to change env.hosts dynamically because sometimes I want to deploy to one machine first, check that it's OK, and then deploy to many machines.
Currently I need to set env.hosts up front. How can I set env.hosts in a method rather than globally at script start?

Yes, you can set env.hosts dynamically. One common pattern we use is:

from fabric.api import env

def staging():
    env.hosts = ['XXX.XXX.XXX.XXX', ]

def production():
    env.hosts = ['YYY.YYY.YYY.YYY', 'ZZZ.ZZZ.ZZZ.ZZZ', ]

def deploy():
    # Do something...
    pass

You would then chain the tasks, e.g. fab staging deploy or fab production deploy.
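For completeness (not in the original answer), a minimal sketch of what deploy might contain; the commands are hypothetical, and run() executes once per host in env.hosts:

from fabric.api import run

def deploy():
    # Fabric calls this once for every host in env.hosts
    run('git pull')
    run('touch app.wsgi')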

Kind of late to the party, but I achieved this with EC2 like so. Note that in EC2 you generally do not know what the IP/hostname may be ahead of time, so you almost have to go dynamic to really account for how the environment/systems could come up (another option would be DynDNS, but this approach would still be useful):
from fabric.api import *
import datetime
import time
import urllib2
import ConfigParser
from platform_util import *

config = ConfigParser.RawConfigParser()

@task
def load_config(configfile=None):
    '''
    ***REQUIRED*** Pass in the configuration to use - usage load_config:</path/to/config.cfg>
    '''
    if configfile != None:
        # Load up our config file
        config.read(configfile)

        # Key/secret needed for aws interaction with boto
        # (anyone help figure out a better way to do this with sub modules, please don't say classes :-) )
        global aws_key
        global aws_sec
        aws_key = config.get("main", "aws_key")
        aws_sec = config.get("main", "aws_sec")

        # Stuff for fabric
        env.user = config.get("main", "fabric_ssh_user")
        env.key_filename = config.get("main", "fabric_ssh_key_filename")
        env.parallel = config.get("main", "fabric_default_parallel")

        # Load our role definitions for fabric
        for i in config.sections():
            if i != "main":
                hostlist = []
                if config.get(i, "use-regex") == 'yes':
                    for x in get_running_instances_by_regex(aws_key, aws_sec, config.get(i, "security-group"), config.get(i, "pattern")):
                        hostlist.append(x.private_ip_address)
                    env.roledefs[i] = hostlist
                else:
                    for x in get_running_instances(aws_key, aws_sec, config.get(i, "security-group")):
                        hostlist.append(x.private_ip_address)
                    env.roledefs[i] = hostlist

                if config.has_option(i, "base-group"):
                    if config.get(i, "base-group") == 'yes':
                        print "%s is a base group" % i
                        print env.roledefs[i]
                        # env["basegroups"][i] = True
where get_running_instances and get_running_instances_by_regex are utility functions that make use of boto (http://code.google.com/p/boto/), for example:
import logging
import re

from boto.ec2.connection import EC2Connection
from boto.ec2.securitygroup import SecurityGroup
from boto.ec2.instance import Instance
from boto.s3.key import Key

########################################
# B-O-F get_instances
########################################
def get_instances(access_key=None, secret_key=None, security_group=None):
    '''
    Get all instances, regardless of their state (running/stopped/etc).
    Only within a security group if one is specified.
    '''
    logging.debug('get_instances()')
    conn = EC2Connection(aws_access_key_id=access_key, aws_secret_access_key=secret_key)
    if security_group:
        sg = SecurityGroup(connection=conn, name=security_group)
        instances = sg.instances()
        return instances
    else:
        instances = conn.get_all_instances()
        return instances
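The running/regex variants weren't shown in the original post; a hedged sketch of how they might look, assuming the helpers return boto Instance objects and the instances carry a Name tag:

def get_running_instances(access_key=None, secret_key=None, security_group=None):
    '''Like get_instances(), but only instances in the running state (sketch).'''
    return [i for i in get_instances(access_key, secret_key, security_group)
            if i.state == 'running']

def get_running_instances_by_regex(access_key=None, secret_key=None,
                                   security_group=None, pattern=None):
    '''Running instances whose Name tag matches the pattern (sketch).'''
    regex = re.compile(pattern)
    return [i for i in get_running_instances(access_key, secret_key, security_group)
            if regex.search(i.tags.get('Name', ''))]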
Here is a sample of what my config looked like:
# Config file for fabric toolset
#
# This specific configuration is for <whatever> related hosts
#
#
[main]
aws_key = <key>
aws_sec = <secret>
fabric_ssh_user = <your_user>
fabric_ssh_key_filename = /path/to/your/.ssh/<whatever>.pem
fabric_default_parallel = 1
#
# Groupings - Fabric knows them as roledefs (check env dict)
#
# Production groupings
[app-prod]
security-group = app-prod
use-regex = no
pattern =
[db-prod]
security-group = db-prod
use-regex = no
pattern =
[db-prod-masters]
security-group = db-prod
use-regex = yes
pattern = mysql-[d-s]01
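Not in the original answer, but for context: once env.roledefs is populated this way, a Fabric task can target one of those groups. A sketch (the uptime task is hypothetical):

from fabric.api import roles, run, task

@task
@roles('app-prod')
def uptime():
    # runs on every host that load_config put into the app-prod roledef
    run('uptime')

You would then run fab load_config:/path/to/config.cfg uptime; Fabric resolves the roledef at execution time, after load_config has populated it.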

Yet another new answer to an old question. :) But I just recently found myself attempting to dynamically set hosts, and I really have to disagree with the main answer. My idea of dynamic, or at least what I was attempting to do, was to take an instance DNS name that had just been created by boto and access that instance with a fab command. I couldn't do fab staging deploy, because the instance doesn't exist at fabfile-editing time.
Fortunately, Fabric does support truly dynamic host assignment with execute. (It's possible this didn't exist when the question was first asked, of course, but now it does.) execute allows you to define both a function to be called and the env.hosts it should use for that command. For example:
def create_EC2_box(data=fab_base_data):
    conn = boto.ec2.connect_to_region(region)
    reservations = conn.run_instances(image_id=image_id, ...)
    ...
    return instance.public_dns_name

def _ping_box():
    run('uname -a')
    run('tail /var/log/cloud-init-output.log')

def build_box():
    box_name = create_EC2_box(fab_base_data)
    new_hosts = [box_name]
    # new_hosts = ['ec2-54-152-152-123.compute-1.amazonaws.com']  # testing
    execute(_ping_box, hosts=new_hosts)
Now I can do fab build_box, and it will fire one boto call that creates an instance, and another fabric call that runs on the new instance - without having to define the instance-name at edit-time.
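The same execute() mechanism also answers the original question's "deploy to one machine first, check it, then deploy to many" workflow. A minimal sketch, assuming Fabric 1.x (the host lists and deploy command are hypothetical placeholders):

from fabric.api import execute, prompt, run

def _deploy():
    run('/opt/app/deploy.sh')  # whatever your deploy step is

def staged_deploy():
    canary = ['XXX.XXX.XXX.XXX']
    fleet = ['YYY.YYY.YYY.YYY', 'ZZZ.ZZZ.ZZZ.ZZZ']
    # deploy to a single box first
    execute(_deploy, hosts=canary)
    # run whatever checks you like, then roll out to the rest
    if prompt('Canary looks good, continue? (y/n)', default='n') == 'y':
        execute(_deploy, hosts=fleet)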

Related

Updating class / instance variables in Python using methods

I have a set of config files initialized under a Config class and need them updated dynamically at run time.
config.py
# Package import
import reusable.common as common
import reusable.JSON_utils as JSON

class Config:
    def __init__(self):
        # run_config
        run_file = common.file_path('run_config.JSON')
        self.run_dict = JSON.read(run_file)
        self.env_list = list(self.run_dict.keys())
        # connection_config
        self.conn_file = common.file_path('connection_config.JSON')
        self.conn_dict = JSON.read(self.conn_file)
        # snapshot_config
        self.snap_file = common.file_path('snapshot_config.JSON')
        self.snap_dict = JSON.read(self.snap_file)
For example, I have to iterate through different environments (DEV, STAGE, QA, PROD) and want to update conn_dict['env'] from 'DEV' to 'QA' after the DEV tasks are completed. Currently I have the dict/JSON update code in main(), but I want to have it as a method inside the Config class.
# Package import
import reusable.config as config
import reusable.JSON_utils as JSON

config_obj = config.Config()

for env in config_obj.env_list:
    config_obj.conn_dict['env'] = env
    JSON.write(config_obj.conn_dict, config_obj.conn_file)
    src_id_list = config_obj.run_dict.get(env).get('snapshot')
    # do stuff in the current env
    for src_id in src_id_list:
        config_obj.snap_dict['source_id'] = str(src_id)
        JSON.write(config_obj.snap_dict, config_obj.snap_file)
        # do stuff for the current data source
Q1: Which is the optimal and conventional way to do this: a class, instance, or static method? Kindly explain, as I'm not completely clear on the differences.
Q2: Can we have a unified method inside the class that takes the dict variable and a function as parameters to update the dict and JSON file? If yes, how can it be achieved?
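Not from the original post, but one hedged sketch of an answer to Q2: an instance method fits naturally here, because the dicts and their file paths live on the instance. This assumes the JSON_utils.write(dict, path) helper used above:

class Config:
    # ... __init__ as above ...

    def update(self, name, key, value):
        """Update one key in the named config dict and persist it to its JSON file (sketch)."""
        target = getattr(self, name + '_dict')  # e.g. 'conn' -> self.conn_dict
        target[key] = value
        JSON.write(target, getattr(self, name + '_file'))  # e.g. self.conn_file

With this, config_obj.update('conn', 'env', env) replaces the two conn_dict lines in the loop above.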

Default value for a variable in a class

I need to parse the environment value from a config file or from OS environment variables in a class.
I am looking for a way to have a default for the env variable in case the environment is found in neither os.environ nor the parsed config file.
I have done this, but I am not sure it is the right place: is __init__ the right place to define those variables that are to be reused later on to establish DB connections?
import yaml
import os
from socket import gethostname

class wrapper(object):
    with open('config') as fd:
        config = yaml.safe_load(fd)
    hostname = gethostname()

    def __init__(self, env='prod'):
        self.db_server = None
        self.db_default_user = None
        self.db_connection = None
        for envmt, data in self.config.items():
            if self.hostname in data.get('host'):
                env = envmt
        # override by environment variable
        if 'CMS_ENV' in os.environ:
            env = os.environ['CMS_ENV']
        # Might be overwritten by ENV variables
        db_default_user = self.config[env]['db_default_user']
        db_server = self.config[env]['db_server']

    def db_conn(self):
        user = self.db_default_user
The question is how to define a default value for env so it falls back to 'prod': should this be defined at class level or while initializing the instance?
The variable env is used to get the right db_server and correct user_name to connect to and fetch data from an MSSQL DB, so I must make sure it is defined.
This is the content of the config file:
test:
    hosts: [vmtest, vmtest2]
    db_server: cmreplsta01.netdev.deutsche-boerse.de
    db_default_user: example\DB-user
prod:
    hosts: [vmprod, vmprod2]
    db_server: cmsdb.io.deutsche-boerse.de
    db_default_user: example\DB-userprod
I detect the hostname where the script runs, then load the db_server and db_default_user accordingly. If the script is running from a host not in the config, I default the env to 'prod' and use the prod values.
I have not been able to find any similar question that fits my use case.
Why not just use a dictionary to store the defaults, then update the values from the environment?
Something like:
import os
defaults = {'EDITOR': 'nano'}
defaults.update(...) # e.g. your config file
defaults.update(os.environ)
Edit
If you don't want to clutter up your defaults variable with all the system environment variables, you can filter it to only contain the variables which are common between the original defaults and the environment:
import os
defaults = {'EDITOR': 'nano'}
common_variables = os.environ.keys() & defaults.keys()  # set intersection
filtered_env = {k: os.environ[k] for k in common_variables}
defaults.update(filtered_env)
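Not from the original answer, but applying the same defaults idea back to the question's class, a minimal sketch of the 'prod' fallback (class renamed to Wrapper purely for convention; config layout as in the question):

import os
from socket import gethostname

import yaml

class Wrapper(object):
    def __init__(self, env=None):
        with open('config') as fd:
            self.config = yaml.safe_load(fd)
        if env is None:
            env = 'prod'  # default when the host matches no section
            for envmt, data in self.config.items():
                if gethostname() in data.get('hosts', []):
                    env = envmt
        # an environment variable overrides both
        env = os.environ.get('CMS_ENV', env)
        self.db_server = self.config[env]['db_server']
        self.db_default_user = self.config[env]['db_default_user']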

How to use Ansible Style dynamic inventory with Nornir?

I want to move from Ansible to Nornir. In Ansible I use a dynamic inventory, where I use this Python script to reference the host_vars folder:
import json
import yaml
import glob

groups = {}
for hostfilename in glob.glob('./host_vars/*.yml'):
    with open(hostfilename, 'r') as hostfile:
        host = yaml.load(hostfile, Loader=yaml.FullLoader)
        for hostgroup in host['host_groups']:
            if hostgroup not in groups.keys():
                groups[hostgroup] = {'hosts': []}
            groups[hostgroup]['hosts'].append(host['hostname'])
print(json.dumps(groups))
Question:
How can I use my existing Ansible inventory in Nornir?
nornir.plugins.inventory.ansible.AnsibleInventory can only be used with a single host.yaml file, not with many, at least that is my understanding.
Edit: The goal is to always create new inventory files on every run. The workflow would be to generate the inventory YAML files in host_vars and then use them during the play.
Can somebody please help me?
Thanks
F.
If I understood you correctly, you want each YAML file in the host_vars folder to be interpreted as one host and its data. This feature is not part of base Nornir, but it can be implemented via a custom inventory plugin.
The custom inventory plugin should implement a load() method that returns an Inventory-type object that Nornir can then use normally (see here for an example of the SimpleInventory implementation). I came up with this snippet adapted from the code you gave:
import os
import glob
import pathlib

import yaml
from nornir.core.inventory import (
    Inventory,
    Hosts,
    Host,
    Groups,
    Group,
)

def map_host_data(host_dict):
    return {
        'hostname': host_dict['hostname'],
        'port': host_dict.get('port', 22),
        'username': host_dict['username'],
        'password': host_dict['password'],
        'platform': host_dict['platform'],
        'data': host_dict.get('data', None),
    }

class DynamicInventory:
    def __init__(self, inventory_dir: str = "host_vars/") -> None:
        self.inventory_dir = pathlib.Path(inventory_dir).expanduser()

    def load(self):
        hosts = Hosts()
        groups = Groups()
        for hostfilename in glob.glob(f"{self.inventory_dir}/*.yaml"):
            with open(hostfilename, 'r') as hostfile:
                host_name = os.path.basename(hostfilename).replace('.yaml', '')
                host = yaml.load(hostfile, Loader=yaml.FullLoader)
                for hostgroup in host['host_groups']:
                    if hostgroup not in groups.keys():
                        group = Group(name=hostgroup)
                        groups[hostgroup] = group
                hosts[host_name] = Host(name=host_name, **map_host_data(host))
        return Inventory(hosts=hosts, groups=groups, defaults={})
I'm assuming you're using Nornir >= 3 (which you really should), so don't forget to register your plugin before using it in your configuration. Assuming you put the above code under plugins/inventory.py:
from nornir import InitNornir
from nornir.core.plugins.inventory import InventoryPluginRegister

from plugins.inventory import DynamicInventory

InventoryPluginRegister.register("DynamicInventoryPlugin", DynamicInventory)

nr = InitNornir(
    inventory={'plugin': 'DynamicInventoryPlugin'},
    runner={'plugin': 'threaded', 'options': {'num_workers': 20}},
)
This of course ignores some features (such as setting defaults), but can be modified to add more features that better match your current setup.
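For illustration, a host_vars file that this plugin would accept might look like the following (field names inferred from map_host_data above; all values are hypothetical):

# host_vars/router01.yaml
hostname: 10.0.0.1
port: 22
username: admin
password: secret
platform: ios
host_groups:
  - core
  - site-a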

Moto does not appear to be mocking aws interactions in a pytest

Say I want to mock the following:
session = boto3.Session(profile_name=profile)
resource = session.resource('iam')
iam_users = resource.users.all()
policies = resource.policies.filter(Scope='AWS', OnlyAttached=True, PolicyUsageFilter='PermissionsPolicy')
How do I go about starting to mock this in pytest? I could create mocked objects by writing a dummy class with the necessary attributes, but I suspect that's the wrong approach.
Some additional details, here's what I'm trying to test out:
def test_check_aws_profile(self, mocker):
    mocked_boto3 = mocker.patch('myapp.services.utils.boto3.Session')
    mocker.patch(mocked_boto3.client.get_caller_identity.get, return_value='foo-account-id')
    assert 'foo-account-id' == my_func('foo')

# in myapp.services.utils.py
def my_func(profile):
    session = boto3.Session(profile_name=profile)
    client = session.client('sts')
    aws_account_number = client.get_caller_identity().get('Account')
    return aws_account_number
But I can't quite seem to get this patched correctly. I'm trying to make it so that I can patch the session and the function calls in that method.
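Not part of the original post, but for comparison, a hedged sketch of how the pytest-mock attempt could be patched correctly: configure the return chain on the patched Session class instead of trying to patch attributes of the mock itself.

def test_check_aws_profile(mocker):
    mocked_session_cls = mocker.patch('myapp.services.utils.boto3.Session')
    # Session(...).client(...) resolves to this mock regardless of arguments
    mocked_client = mocked_session_cls.return_value.client.return_value
    mocked_client.get_caller_identity.return_value = {'Account': 'foo-account-id'}
    assert my_func('foo') == 'foo-account-id'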
I tried using moto and got this:
@mock_sts
def test_check_aws_profile(self):
    session = boto3.Session(profile_name='foo')
    client = session.client('sts')
    client.get_caller_identity().get('Account')
But I'm running into
> raise ProfileNotFound(profile=profile_name)
E botocore.exceptions.ProfileNotFound: The config profile (foo) could not be found
So it seems like it's not mocking anything :|
Edit:
Turns out you need to have the mocked credentials in a config and credentials file for this to work.
If you want to use moto, you can use the AWS_SHARED_CREDENTIALS_FILE environment variable, to point it to a dummy credentials file which can be kept in the tests folder.
You can define your profiles there. Example:
Files: test_stuff.py, dummy_aws_credentials
test_stuff.py:
import os
from pathlib import Path

import boto3
import pytest
from moto import mock_sts

@pytest.fixture(scope='module')
def aws_credentials():
    """Mocked AWS Credentials for moto."""
    moto_credentials_file_path = Path(__file__).parent.absolute() / 'dummy_aws_credentials'
    os.environ['AWS_SHARED_CREDENTIALS_FILE'] = str(moto_credentials_file_path)

@mock_sts
def test_check_aws_profile(aws_credentials):
    session = boto3.Session(profile_name='foo')
    client = session.client('sts')
    client.get_caller_identity().get('Account')
dummy_aws_credentials:
[foo]
aws_access_key_id = mock
aws_secret_access_key = mock
I'm not sure what exactly you want, so I'll give you something to start with.
You can let unittest.mock mock everything for you, for example. (Useful reading: https://docs.python.org/3/library/unittest.mock.html)
module.py:
import boto3

def function():
    session = boto3.Session(profile_name="foobar")
    client = session.resource("sts")
    return client.get_caller_identity().get('Account')
test_module.py:
from unittest.mock import patch

import module

@patch("module.boto3")  # this creates a mock which is passed to the test below
def test_function(mocked_boto):
    # mocks below are magically created by unittest.mock when they are accessed
    mocked_session = mocked_boto.Session()
    mocked_client = mocked_session.resource()
    mocked_identity = mocked_client.get_caller_identity()
    # now mock the return value of .get()
    mocked_identity.get.return_value = "foo-bar-baz"

    result = module.function()
    assert result == "foo-bar-baz"

    # we can make sure mocks were called properly, for example
    mocked_identity.get.assert_called_once_with("Account")
Results of pytest run:
$ pytest
================================ test session starts ================================
platform darwin -- Python 3.7.6, pytest-5.3.2, py-1.8.1, pluggy-0.13.1
rootdir: /private/tmp/one
collected 1 item
test_module.py . [100%]
================================= 1 passed in 0.09s =================================
I would also recommend installing pytest-socket and running pytest --disable-socket to make sure your tests do not talk to the network by accident.
Although there is nothing wrong with manually patching boto using mock.patch, you could also consider using a higher-level testing utility like moto.

How do I search for unpublished Plone content in an IPython debug shell?

I like to use IPython's zope profile to inspect my Plone instance, but a few annoying permissions differences come up compared to inserting a breakpoint and hitting it with the admin user.
For example, I would like to iterate over the content objects in an unpublished testing folder. This query will return no results in the shell, but works from a breakpoint.
$ bin/instance shell
$ ipython --profile=zope
from Products.CMFPlone.utils import getToolByName
catalog = getToolByName(context, 'portal_catalog')
catalog({'path':'Plone/testing'})
Can I authenticate as admin or otherwise rejigger the permissions to fully manipulate my site from ipython?
Here's the (very dirty) code I use to manage my Plone app from the debug shell. It may require some updates depending on your versions of Zope and Plone.
from sys import stdin, stdout, exit
import base64
from thread import get_ident

from ZPublisher.HTTPRequest import HTTPRequest
from ZPublisher.HTTPResponse import HTTPResponse
from ZPublisher.BaseRequest import RequestContainer
from ZPublisher import Publish
from AccessControl import ClassSecurityInfo, getSecurityManager
from AccessControl.SecurityManagement import newSecurityManager
from AccessControl.User import UnrestrictedUser

def loginAsUnrestrictedUser():
    """Example of use:
    old_user = loginAsUnrestrictedUser()
    # Manager stuff
    loginAsUser(old_user)
    """
    current_user = getSecurityManager().getUser()
    newSecurityManager(None, UnrestrictedUser('manager', '', ['Manager'], []))
    return current_user

def loginAsUser(user):
    newSecurityManager(None, user)

def makerequest(app, stdout=stdout, query_string=None, user_pass=None):
    """Make a request suitable for CMF sites & Plone
    - user_pass = "user:pass"
    """
    # copy from Testing.makerequest
    resp = HTTPResponse(stdout=stdout)
    env = {}
    env['SERVER_NAME'] = 'lxtools.makerequest.fr'
    env['SERVER_PORT'] = '80'
    env['REQUEST_METHOD'] = 'GET'
    env['REMOTE_HOST'] = 'a.distant.host'
    env['REMOTE_ADDR'] = '77.77.77.77'
    env['HTTP_HOST'] = '127.0.0.1'
    env['HTTP_USER_AGENT'] = 'LxToolsUserAgent/1.0'
    env['HTTP_ACCEPT'] = 'image/gif, image/x-xbitmap, image/jpeg, */* '
    if user_pass:
        env['HTTP_AUTHORIZATION'] = "Basic %s" % base64.encodestring(user_pass)
    if query_string:
        p_q = query_string.split('?')
        if len(p_q) == 1:
            env['PATH_INFO'] = p_q[0]
        elif len(p_q) == 2:
            (env['PATH_INFO'], env['QUERY_STRING']) = p_q
        else:
            raise TypeError, ''
    req = HTTPRequest(stdin, env, resp)
    req['URL1'] = req['URL']  # fix for CMFQuickInstaller
    #
    # copy/hacked from Localizer __init__ patches
    # first put the needed values in the request
    req['HTTP_ACCEPT_CHARSET'] = 'latin-9'
    #req.other['AcceptCharset'] = AcceptCharset(req['HTTP_ACCEPT_CHARSET'])
    #
    req['HTTP_ACCEPT_LANGUAGE'] = 'fr'
    #accept_language = AcceptLanguage(req['HTTP_ACCEPT_LANGUAGE'])
    #req.other['AcceptLanguage'] = accept_language
    # XXX For backwards compatibility
    #req.other['USER_PREF_LANGUAGES'] = accept_language
    #req.other['AcceptLanguage'] = accept_language
    #
    # Plone stuff
    #req['plone_skin'] = 'Plone Default'
    #
    # then store the request in Publish._requests
    # with the thread id
    id = get_ident()
    if hasattr(Publish, '_requests'):
        # we do not have _requests inside ZopeTestCase
        Publish._requests[id] = req
    # add a brainless session container
    req['SESSION'] = {}
    #
    # ok, let's wrap
    return app.__of__(RequestContainer(REQUEST=req))

def debug_init(app):
    loginAsUnrestrictedUser()
    app = makerequest(app)
    return app
This lives in a wshelpers Zope product. Once the debug shell is launched, it's just a matter of:
>> from Products.wshelpers import wsdebug
>> app = wsdebug.debug_init(app)
>> # now you're logged in as admin
Just use catalog.search({'path':'Plone/testing'}). It performs the same query as catalog() but does not filter the results based on the current user's permissions.
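To then iterate over the content objects themselves, as the original question wants, a minimal sketch building on the unrestricted search above (Python 2, matching the Zope-era code):

from Products.CMFPlone.utils import getToolByName

catalog = getToolByName(context, 'portal_catalog')
# search() skips the permission filtering that a plain catalog() call applies
for brain in catalog.search({'path': 'Plone/testing'}):
    obj = brain.getObject()  # wake up the actual content object
    print obj.getId(), obj.portal_type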
IPython's zope profile does provide a method utils.su('username') to change the current user, but it does not recognize the admin user (defined in /acl_users instead of /Plone/acl_users), and after calling it, subsequent calls to catalog() fail with AttributeError: 'module' object has no attribute 'checkPermission'.
