How to call salt-ssh (SSHClient) via Python API

I installed Salt in a Python 3 virtual environment and created a Salt configuration that uses a non-root folder for everything (/home/user/saltenv). When using the salt-ssh command inside the venv, e.g. salt-ssh '*' test.ping, everything works as expected. (Please note that the config dir is resolved via a Saltfile, so the -c option is omitted, but that should not matter.)
When calling the SSHClient directly via Python, however, I get no results. I have already figured out that the roster file is not read, which obviously results in an empty target list. I am stuck, and the documentation is not that helpful.
Here is the code:
import salt.config
from salt.client.ssh.client import SSHClient

def main():
    c_path = '/home/user/saltenv/etc/salt/master'
    master_opts = salt.config.client_config(c_path)
    c = SSHClient(c_path=c_path, mopts=master_opts)
    res = c.cmd(tgt='*', fun='test.ping')
    print(res)

if __name__ == '__main__':
    main()

It turns out that the processing of some options differs between the CLI and the client: salt-ssh does not use SSHClient at all; instead, the class salt.client.ssh.SSH is used directly.
While salt-ssh adds the config_dir from the Saltfile to the opts dictionary to resolve the master config file, SSHClient reads the config file passed to the constructor directly, and config_dir is never added to the options (so the roster file is not found).
My solution is to include config_dir in the master config file as well. The code from the question then works unchanged.
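For example, assuming the layout from the question, the master config file at /home/user/saltenv/etc/salt/master would gain a line like this:

config_dir: /home/user/saltenv/etc/salt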
Alternative 1: If you only have one Salt configuration, it is also possible to set the environment variable SALT_CONFIG_DIR.
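A minimal sketch of that (assumption: the variable has to be set before any Salt code reads its configuration, so set it at the very top of the script, before importing salt, or export it in the shell beforehand):

import os
os.environ['SALT_CONFIG_DIR'] = '/home/user/saltenv/etc/salt'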
Alternative 2: The mopts argument of SSHClient can be used to pass a custom configuration directory, but it requires more lines of code:
import os

import salt.config
from salt.client.ssh.client import SSHClient

config = '/home/user/saltenv/etc/salt/master'
defaults = dict(salt.config.DEFAULT_MASTER_OPTS)
defaults['config_dir'] = os.path.dirname(config)
master_opts = salt.config.client_config(config, defaults=defaults)
c = SSHClient(mopts=master_opts)


How to make path directory generic and dynamic for any user?

I have TRACE32 installed on the C drive and have hard-coded that directory in my code. If some other user runs this code on their system, the code does not work because they have installed the application in a different location. How can I make this directory generic and dynamic so that it works for all users?
You have multiple possibilities. Before explaining them, some generic tips:
Make the TRACE32 system path configurable, not a path inside the installation. In your case this would be r"C:\T32". This path is called t32sys or T32SYS.
Make sure you use os.path.join to concatenate your strings, so it works on the user's operating system: os.path.join(r"C:\T32", "bin/windows64")
Command line arguments using argparse. This is the simplest solution; it requires the user to start the Python script like this: python script.py --t32sys="C:\t32".
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--t32sys", help="TRACE32 system directory.")
args = parser.parse_args()
t32sys = args.t32sys
Instead of command line parameters you could also use a configuration file. For this you can use the built-in configparser module. This has the advantage that the user doesn't need to specify the directory as a command line argument, but the disadvantage that the user needs to be aware of the configuration file.
Configuration file (example.ini):
[DEFAULT]
t32sys = C:\T32
import configparser
parser = configparser.ConfigParser()
parser.read("example.ini")
args = parser["DEFAULT"]
t32sys = args["t32sys"]
Environment variables using os.environ. T32SYS is an environment variable often used for this, but it is not guaranteed to be set, so you have to tell users that they have to set the variable before using your tool. This approach has the advantage of working in the background, but in my opinion it is also a little bit obscure. I'd only use this in combination with argparse or configparser, as a fallback or override.
import os
t32sys = os.environ.get('T32SYS')
You can of course combine multiple ways with fallbacks / overrides.
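For example, a minimal sketch combining all three (reusing --t32sys, example.ini, and T32SYS from the snippets above), where the command line overrides the config file, which in turn overrides the environment variable:

import argparse
import configparser
import os

# Precedence: command line > example.ini > T32SYS environment variable.
parser = argparse.ArgumentParser()
parser.add_argument("--t32sys", help="TRACE32 system directory.")
args = parser.parse_args()

config = configparser.ConfigParser()
config.read("example.ini")  # silently skipped if the file does not exist

t32sys = (args.t32sys
          or config["DEFAULT"].get("t32sys")
          or os.environ.get("T32SYS"))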

Start new subprocess with 'default' environment variables

I'm writing a build script to resolve dependent shared libraries (and their shared libraries, etc.). These shared libraries do not exist in the normal PATH environment variable.
For the build process to work (for the compiler to find these libraries), the PATH has been changed to include the directories of these libraries.
The build process is thus:
Loader script (changes PATH) -> Python-based build script -> Configure -> Build -> Resolve Dependencies -> Install.
The Python instance inherits a changed PATH variable from its parent shell.
From within Python, I'm trying to get the default PATH (not the one inherited from its parent shell).
The idea:
The idea to resolve the 'default' PATH variable is to somehow 'signal' the OS to start a new process (running a script that prints PATH) that is NOT a child of the current Python process (and presumably won't inherit its modified environment variables).
The attempted implementation:
import os
import sys

print(os.environ["PATH"])
print("---")
os.spawnl(os.P_WAIT, sys.executable, "python",
          "-c", "import os; print(os.environ['PATH'])")
os.spawnl appears to use the same environment variables as the Python process that calls it. I've also tried this approach with subprocess.Popen, with no success.
Can this approach be implemented? If not, what is an alternative approach (given that the loader script and the overall process can't change)?
I'm currently using Windows but the build script is to be cross-platform.
EDIT:
The cross-platform constraint appears to be too restrictive. Different implementations of the same concept can now be considered.
As an example, using code from this answer, the Windows registry can be used to get the 'default' system PATH variable.
try:
    import _winreg as winreg
except ImportError:
    try:
        import winreg
    except ImportError:
        winreg = None

def env_keys(user=True):
    if user:
        root = winreg.HKEY_CURRENT_USER
        subkey = "Environment"
    else:
        root = winreg.HKEY_LOCAL_MACHINE
        subkey = r"SYSTEM\CurrentControlSet\Control\Session Manager\Environment"
    return root, subkey

def get_env(name, user=True):
    root, subkey = env_keys(user)
    key = winreg.OpenKey(root, subkey, 0, winreg.KEY_READ)
    try:
        value, _ = winreg.QueryValueEx(key, name)
    except WindowsError:
        return ""
    value = winreg.ExpandEnvironmentStrings(value)
    return value

print(get_env("PATH", False))
A consistent approach for *nix is needed.
Using subprocess.Popen, you can provide an environment for the child process to use:
import os
import subprocess

default_path = os.environ['PATH']  # save the default path before changing it
os.environ['PATH'] = ...           # whatever you want

child_env = os.environ.copy()
child_env['PATH'] = default_path   # give the child the saved default
subprocess.Popen(..., env=child_env)
The documentation states that the provided environment will be used instead of inheriting it from the parent:
If env is not None, it must be a mapping that defines the environment
variables for the new process; these are used instead of inheriting
the current process’ environment, which is the default behavior.
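A minimal self-contained sketch of that behavior (the PATH value here is an arbitrary placeholder):

import os
import subprocess
import sys

child_env = os.environ.copy()
child_env['PATH'] = '/usr/bin'  # placeholder for the PATH the child should see

# The child sees exactly the PATH passed via env=, not the parent's value.
subprocess.Popen(
    [sys.executable, '-c', "import os; print(os.environ['PATH'])"],
    env=child_env,
).wait()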
What do you really mean by 'default' value of PATH? The value it had when you logged in? Some system-wide default? The value the loader script started with before it made changes?
The easiest thing would be to wrap the loader script (if you really cannot change it) with one of your own that saves the current value of PATH in some other environment variable like OLD_PATH. Then you can use something like:
os.spawnle(..., {'PATH': os.environ['OLD_PATH']})
Or you could spawn a shell as a login or at least interactive shell and let it source the user's .bashrc (or other startup) before invoking python.
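A minimal sketch of that idea for *nix (assumptions: bash is available, and the startup files set PATH from scratch rather than appending to the inherited value, since a login shell still inherits the current environment):

import subprocess

# Start bash as a login shell (-l) so it sources the user's startup files,
# then print the PATH it ends up with.
default_path = subprocess.check_output(
    ['bash', '-lc', 'echo "$PATH"']
).decode().strip()
print(default_path)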
** Update ** for Windows, and assuming you just want to get the PATH:
Spawn CMD.EXE and have it execute the command echo %PATH%:
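A minimal sketch (note the assumption: without an explicit env= argument the CMD child still inherits this process's environment, so combine this with the env= technique above if a clean value is required):

import subprocess

# CMD.EXE expands %PATH% itself before echoing it.
output = subprocess.check_output(['cmd', '/c', 'echo', '%PATH%'])
print(output.decode().strip())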

Call python Fabric functions from within the same script

I have a single-file script for operations automation (log file downloads, stopping/starting several containers; the user chooses what to do via command-line arguments) and want to have the Fabric functions in the same script, as well as the argument-parsing class and possibly some others. How do I call Fabric functions from within the same Python script? I do not want to use "fab" as it is.
And as a side note, I'd like to have these calls run in parallel as well.
This is a model class that would ideally contain all necessary fabric functions:
from fabric.api import env, run

class fabricFuncs:
    def appstate(self):
        env.hosts = hosts  # hosts is set globally by argParser
        run('sudo /home/user/XXX.sh state')
This is the launcher section:
if __name__ == "__main__":
    argParser().argParse()
    fabricFuncs().appstate()
argParser sets variables globally using the command-line arguments specified (just to clarify what that part does).
Sadly, this results in a failure where no hosts are defined (env.hosts should be set inside the function... or is it too late to declare them there?).
EDIT1:
I have tried launching the fabric function using this:
for h in env.hosts:
    with settings(host_string=user + "@" + h):
        fabricFuncs().appstate()
It kind of works. I had hoped, though, that I would be able to parallelize the whole process using the fabric module as it is (via decorators) without wrapping the whole thing in threading code.
EDIT2:
I have tried this as well:
execute(fabricFuncs().appstate())
Which fails with:
Fatal error: Needed to prompt for the target host connection string (host: None)
Can I put the whole env.hosts variable into "settings" above without iterating over that list with a "for" statement?
EDIT3:
I have tried editing the fab function like this to see if env.hosts is set properly:
class fabricFuncs:
    def appstate(self):
        env.hosts = hosts
        print(env.hosts)
        run('sudo /home/user/XXX.sh state')
And it prints out correctly, but still the "run" command fails with:
Fatal error: Needed to prompt for the target host connection string (host: None)
Use the execute function, passing it the task callable itself rather than the result of calling it:

from fabric.api import execute

argParser().argParse()
execute(fabricFuncs().appstate)
If you run the script without the fab command, env.host will be set to None, so if you want to use execute you also have to pass the hosts parameter.
Try this:
from fabric.api import execute, run

def appstate():
    run('sudo /home/user/XXX.sh state')

if __name__ == "__main__":
    hosts = ["host1", "host2"]
    execute(appstate, hosts=hosts)
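And since the question also asks about running the calls in parallel: a minimal sketch, assuming Fabric 1.x, using the @parallel decorator so execute runs the task on all hosts concurrently (host names are placeholders):

from fabric.api import execute, parallel, run

@parallel
def appstate():
    # Executed concurrently on every host passed to execute().
    run('sudo /home/user/XXX.sh state')

if __name__ == "__main__":
    execute(appstate, hosts=["host1", "host2"])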

Get list of used configuration files from Nose

From the code that runs the tests using nose, how do I retrieve the list of config files that were passed on the command line (without parsing the args myself, since nose should expose these values somewhere), as in
nosetests -c default.ini -c staging.ini
which would then result in,
[default.ini, staging.ini]
I can't seem to find these values on the nose.config object.
It seems your problem is that your configuration files are named differently from what nose expects its default configuration files to be named.
From nose.config:
config_files = [
    # Linux users will prefer this
    "~/.noserc",
    # Windows users will prefer this
    "~/nose.cfg"
]

def user_config_files():
    """Return path to any existing user config files
    """
    return filter(os.path.exists,
                  map(os.path.expanduser, config_files))

def all_config_files():
    """Return path to any existing user config files, plus any setup.cfg
    in the current working directory.
    """
    user = user_config_files()
    if os.path.exists('setup.cfg'):
        return user + ['setup.cfg']
    return user
The short of this is that nose looks for default configuration files named ~/.noserc or ~/nose.cfg. If yours are not named like this, nose will not pick them up, and you will have to specify the configuration file names manually, as you are doing on the command line.
Now say, for instance, that you have some object config which is an instance of nose.config.Config; then the best way to get your config file names would be:
>>> from nose.config import Config
>>> c = Config()
>>> c.configure(argv=["nosetests", "-c", "foo.txt"])
>>> c.options.files
['foo.txt']

Change python-gnupg lock file location?

I'm trying to deploy a project which uses GPG to encrypt data being sent to a SOAP WebService. When I tried to encrypt the file, I found that python-gnupg was trying to put a lock file into my gnupghome directory, which is not writable by the user Apache is run under. I'd rather not give write access to "nobody", so is there a way to change the location python-gnupg uses to store lock files?
Clarification:
It was pointed out to me that I may not have made it clear that I am currently setting gnupghome when I initialize the object, but I do not want the lock files to be placed there, because I do not want "nobody" to have write access to that location.
The lock file is created by gnupg itself, not by the Python wrapper, and it is always created in the GNUPGHOME path, which defaults to ~/.gnupg.
You cannot prevent the lockfile, but you can set the directory to a temporary one. The disadvantage is that it'll not be able to load the default keyring so you'll need to pass it in explicitly, telling GNUPG to ignore the default file (it'll complain bitterly if you do not):
import shutil
import tempfile

import gnupg

home = tempfile.mkdtemp()
try:
    gpg = gnupg.GPG(gnupghome=home, keyring='/path/to/keyring/file',
                    options=['--no-default-keyring'])
    # ... use gpg here ...
finally:
    shutil.rmtree(home)
In fact, I've gone as far as using a temporary file for the keyring as well: use the tempfile.mkstemp() function to create an empty file in the temporary directory generated above, import the key (drawn from a database) into that keyring (using .import_keys()), then use the imported key to do the encryption, before cleaning up the whole temporary home.
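A minimal sketch of that flow (assumptions: key_data holds an ASCII-armoured key block drawn from a database, and recipient@example.com is a placeholder for the key's uid):

import os
import shutil
import tempfile

import gnupg

home = tempfile.mkdtemp()
keyring_fd, keyring_path = tempfile.mkstemp(dir=home)
os.close(keyring_fd)  # gnupg opens the keyring file by path itself

try:
    gpg = gnupg.GPG(gnupghome=home, keyring=keyring_path,
                    options=['--no-default-keyring'])
    gpg.import_keys(key_data)  # key_data: placeholder for the exported key
    encrypted = gpg.encrypt('payload to send', 'recipient@example.com')
finally:
    shutil.rmtree(home)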
Python-GnuPG Getting Started shows that you can set gnupghome like so:
gpg = gnupg.GPG(gnupghome='/path/to/home/directory')
If you're using python-gnupg version 0.3.1 or above and thus have the options parameter, you can use this solution, provided the keyrings you need are readable:
>>> g = gnupg.GPG(gnupghome='/path/to/gnupghome', options=['--lock-never'])
I'm using this in a scenario where a verifying user has no write permissions to the gnupghome at all--just read permissions on pubring.gpg and trustdb.gpg:
>>> v = g.verify(open('message.gpg', 'r').read())
>>> v.valid
True
