I'm managing two server environments that are configured differently. I access them by specifying different SSH configuration files on the command line, because each environment needs a different User, ProxyCommand, and a number of other SSH options.
e.g.
ssh oldserver.example.org -F config_legacy
ssh newserver.example.org -F config
To configure and maintain state on my servers, I've been using Ansible (version 1.9.0.1), which reads an SSH configuration file that is specified by a line in its ansible.cfg:
...
ssh_args = -F some_configuration_file
...
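For context, that line lives in the [ssh_connection] section of ansible.cfg; a minimal sketch of the relevant fragment (the file name after -F is the question's placeholder):
[ssh_connection]
ssh_args = -F some_configuration_file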
The ansible.cfg itself is located and loaded like this:
def load_config_file():
    ''' Load Config File order(first found is used): ENV, CWD, HOME, /etc/ansible '''

    p = configparser.ConfigParser()

    path0 = os.getenv("ANSIBLE_CONFIG", None)
    if path0 is not None:
        path0 = os.path.expanduser(path0)
    path1 = os.getcwd() + "/ansible.cfg"
    path2 = os.path.expanduser("~/.ansible.cfg")
    path3 = "/etc/ansible/ansible.cfg"

    for path in [path0, path1, path2, path3]:
        if path is not None and os.path.exists(path):
            try:
                p.read(path)
            except configparser.Error as e:
                print("Error reading config file: \n{0}".format(e))
                sys.exit(1)
            return p

    return None
I could use this behavior to set an environment variable before each command to load an entirely different ansible.cfg, but that seems messy when I only need to fiddle with ssh_args. Unfortunately, Ansible doesn't expose a command-line switch to specify an SSH config file.
I'd prefer not to maintain any modifications to Ansible, and I'd prefer not to wrap all calls to the ansible or ansible-playbook commands. To preserve the behavior of Ansible's commands, I believe my options are:
a) have the target of ssh_args = -F <<config_file>> be a script that's opened
b) have the target of p.read(path) be a script that gets expanded to generate a valid ansible.cfg
c) just maintain different ansible.cfg files and take advantage of the fact that Ansible picks the file up in a fixed order: environment variable, then cwd, and so on.
Option C is the only way I can see to accomplish this. Have your default/most-used ansible.cfg be the one that is picked up from the cwd, then optionally set/unset the ANSIBLE_CONFIG environment variable to point at the version that contains the ssh_args = -F config_legacy line you need.
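A hedged sketch of option C (the file names here are illustrative, not prescribed): keep the default ansible.cfg in the repo root so the cwd lookup finds it, keep a second file, say legacy_ansible.cfg, whose [ssh_connection] section carries ssh_args = -F config_legacy, and point ANSIBLE_CONFIG at it only when you need the legacy environment:
# normal runs pick up ./ansible.cfg automatically
ansible-playbook playbooks/why_i_am_doing_this.yml

# one-off run against the legacy environment
ANSIBLE_CONFIG=legacy_ansible.cfg ansible-playbook playbooks/why_i_am_doing_this.yml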
The reason for needing a separate ansible.cfg instead of just passing an environment variable with SSH options is that Ansible does not honor the User setting in an SSH configuration file -- it has already decided which user to run as by the time a command kicks off.
Dynamic inventory files (ec2.py) are incredibly poor places to hack in a change, for maintenance reasons, which is why it's typical to see --user=REMOTE_USER flags; these, coupled with setting an ANSIBLE_SSH_ARGS="-F some_ssh_config" environment variable, make for ugly commands to hand to a casual user of an Ansible repo.
e.g.
ANSIBLE_SSH_ARGS="-F other_ssh_config" ansible-playbook playbooks/why_i_am_doing_this.yml -u ubuntu
v.
ansible-playbook playbooks/why_i_am_doing_this.yml -F other_ansible.cfg
Option A doesn't work because the file is opened all at once for loading into Python, per the p.read() above; not that it matters, because if files could arbitrarily decide to open as scripts, we'd be living in a very scary world.
This is how the ansible.cfg loading looks from a system perspective:
$ sudo dtruss -a ansible ......
74947/0x11eadf: 312284 3 2 stat64("/Users/tfisher/code/ansible/ansible.cfg\0", 0x7FFF55D936C0, 0x7FD70207EA00) = 0 0
74947/0x11eadf: 312308 19 17 open_nocancel("/Users/tfisher/code/ansible/ansible.cfg\0", 0x0, 0x1B6) = 5 0
74947/0x11eadf: 312316 3 1 read_nocancel(0x5, "# ansible.cfg \n#\n# Config-file load order is:\n# envvar ANSIBLE_CONFIG\n# `pwd`/ansible.cfg\n# ~/.ansible.cfg\n# /etc/ansible/ansible.cfg\n\n# Some unmodified settings are left as comments to encourage research/suggest modific", 0x1000) = 3477 0
Option B doesn't work for the same reasons A doesn't -- even if you create a mock Python file object with proper read/readline/readlines signatures, the file is still being opened for reading only, not execution.
And if this is the correct repo for OpenSSH, the config file is specified like so:
#define _PATH_HOST_CONFIG_FILE SSHDIR "/ssh_config"
processed like so:
/* Read systemwide configuration file after user config. */
(void)read_config_file(_PATH_HOST_CONFIG_FILE, pw,
host, host_arg, &options,
post_canon ? SSHCONF_POSTCANON : 0);
and read here with an fopen, which leaves no room for "file as a script" shenanigans.
Another option is to set the environment variable ANSIBLE_SSH_ARGS to the arguments you want Ansible to pass to the ssh command.
I have been tasked with making a custom Python script (since I'm bad with Bash) to run on a remote NRPE client, which recursively counts the number of files in the /tmp directory. This is my script:
#!/usr/bin/python3.5
import os
import subprocess
import sys

file_count = sum([len(files) for r, d, files in os.walk("/tmp")])  # Recursive check of /tmp

if file_count < 1000:
    x = subprocess.Popen(['echo', 'OK -', str(file_count), 'files in /tmp.'], stdout=subprocess.PIPE)
    print(x.communicate()[0].decode("utf-8"))  # Converts from byte obj to str
    # subprocess.run('exit 0', shell=True, check=True)  # Service OK - exit 0
    sys.exit(0)
elif 1000 <= file_count < 1500:
    x = subprocess.Popen(['echo', 'WARNING -', str(file_count), 'files in /tmp.'], stdout=subprocess.PIPE)
    print(x.communicate()[0].decode("utf-8"))  # Converts from byte obj to str
    sys.exit(1)
else:
    x = subprocess.Popen(['echo', 'CRITICAL -', str(file_count), 'files in /tmp.'], stdout=subprocess.PIPE)
    print(x.communicate()[0].decode("utf-8"))  # Converts from byte obj to str
    sys.exit(2)
EDIT 1: I tried hardcoding file_count to 1300 and I got a WARNING: 1300 files in /tmp. It appears the issue is solely in the nagios server's ability to read files in the client machine's /tmp.
What I have done:
I have the script in the directory with the rest of the scripts.
I have edited /usr/local/nagios/etc/nrpe.cfg on the client machine with the following line:
command[check_tmp]=/usr/local/nagios/libexec/check_tmp.py
I have edited this /usr/local/nagios/etc/servers/testserver.cfg file on the nagios server as follows:
define service {
    use                     generic-service
    host_name               wp-proxy
    service_description     Files in /tmp
    check_command           check_nrpe!check_tmp
}
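For reference, this assumes the usual check_nrpe command definition already exists on the Nagios server (shown here only as an assumption; paths may differ on your install):
define command {
    command_name    check_nrpe
    command_line    $USER1$/check_nrpe -H $HOSTADDRESS$ -c $ARG1$
}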
The output:
correct output is: OK - 3 files in /tmp
When I run the script on the client machine as root, I get the correct output.
When I run the script on the client machine as the nagios user, I also get the correct output.
My output on the Nagios core APPEARS to be working, but it shows there are 0 files in /tmp when I know there are more. I made 2 files on the client machine and 1 file on the nagios server.
The server output for reference:
https://puu.sh/BioHW/838ba84c3e.png
(Ignore the bottom server; any issue solved for wp-proxy will also be fixed on wpreess-gkanc1.)
EDIT 2: I ran the following on the nagios server:
/usr/local/nagios/libexec/check_nrpe -H 192.168.1.59 -c check_tmp_folder
I indeed got a 0 file return. I still don't know how this can be fixed, however.
Check the systemd service file for NRPE -- maybe this variable is set to true :)
PrivateTmp= Takes a boolean argument. If true, sets up a new file system namespace for the executed processes and mounts private /tmp and /var/tmp directories inside it that is not shared by processes outside of the namespace.
This is useful to secure access to temporary files of the process, but makes sharing between processes via /tmp or /var/tmp impossible. If this is enabled, all temporary files created by a service in these directories will be removed after the service is stopped. Defaults to false. It is possible to run two or more units within the same private /tmp and /var/tmp namespace by using the JoinsNamespaceOf= directive, see systemd.unit(5) for details.
This setting is implied if DynamicUser= is set. For this setting the same restrictions regarding mount propagation and privileges apply as for ReadOnlyPaths= and related calls, see above. Enabling this setting has the side effect of adding Requires= and After= dependencies on all mount units necessary to access /tmp and /var/tmp.
Moreover an implicitly After= ordering on systemd-tmpfiles-setup.service(8) is added. Note that the implementation of this setting might be impossible (for example if mount namespaces are not available), and the unit should be written in a way that does not solely rely on this setting for security.
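To confirm whether that is what is happening on the client, something like this should show the effective value (assuming the unit is named nrpe.service, as below):
systemctl show -p PrivateTmp nrpe.service
# prints "PrivateTmp=yes" when a private /tmp is in effect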
SOLVED!
Solution:
Go to your systemd file for nrpe. Mine was found here:
/lib/systemd/system/nrpe.service
If not there, run:
find / -name "nrpe.service"
and ignore all system.slice results
Open the file with vi/nano
Find a line which says PrivateTmp= (usually second to last line)
If it is set to true, set it to false
Save and exit the file and run the following 2 commands:
systemctl daemon-reload
systemctl restart nrpe.service
Problem solved.
Short explanation: the main reason for the issue is that with Debian 9.x, some services managed by systemd enable private tmp directories by default. So if you have any other programs that have trouble searching or indexing in /tmp, this solution can be tailored to fit.
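A hedged alternative to editing the packaged unit file directly (a package upgrade may overwrite it): keep the change in a drop-in override instead. A sketch, assuming the unit is called nrpe.service:
# creates and opens /etc/systemd/system/nrpe.service.d/override.conf
systemctl edit nrpe.service

# contents of the override:
[Service]
PrivateTmp=false

# then, as above:
systemctl daemon-reload
systemctl restart nrpe.service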
I installed Salt in a Python 3 virtual environment and created a Salt configuration that uses a non-root folder for everything (/home/user/saltenv). When using the salt-ssh command inside the venv, e.g. salt-ssh '*' test.ping, everything works as expected. (Please note that the config dir is resolved via a Saltfile, so the -c option is omitted, but that should not matter.)
When calling the SSHClient directly via Python however, I get no results. I already figured out that the roster file is not read, obviously resulting in an empty target list. I am stuck somehow and the documentation is not that helpful.
Here is the code:
import salt.config
from salt.client.ssh.client import SSHClient


def main():
    c_path = '/home/user/saltenv/etc/salt/master'
    master_opts = salt.config.client_config(c_path)
    c = SSHClient(c_path=c_path, mopts=master_opts)
    res = c.cmd(tgt='*', fun='test.ping')
    print(res)


if __name__ == '__main__':
    main()
As it seems, the processing of some options differs between the CLI and the Client. salt-ssh does not use the SSHClient. Instead, the class salt.client.ssh.SSH is used directly.
While salt-ssh adds the config_dir from the Saltfile to the opts dictionary to resolve the master config file, the SSHClient reads the config file passed to the constructor directly and config_dir is not added to the options (resulting in the roster file not being found).
My solution is to include config_dir in the master config file as well. The code from the question will then work unchanged.
Alternative 1: If you only have one Salt configuration, it is also possible to set the environment variable SALT_CONFIG_DIR.
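A hedged illustration of both of the above, using the paths from the question (the config_dir key is the one the solution above refers to):
# in /home/user/saltenv/etc/salt/master
config_dir: /home/user/saltenv/etc/salt

# or, for Alternative 1, before running the script
export SALT_CONFIG_DIR=/home/user/saltenv/etc/salt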
Alternative 2: The mopts argument of SSHClient can be used to pass a custom configuration directory, but it requires more lines of code:
import os
import salt.config
from salt.client.ssh.client import SSHClient

config = '/home/user/saltenv/etc/salt/master'
defaults = dict(salt.config.DEFAULT_MASTER_OPTS)
defaults['config_dir'] = os.path.dirname(config)
master_opts = salt.config.client_config(config, defaults=defaults)
c = SSHClient(mopts=master_opts)
I'm trying to use CircleCI to run automated tests. I have a config.yml file that contains secrets that I don't want to upload to my repo for obvious reasons.
Thus I've created a set of environment variables in the Project Settings section:
VR_API_KEY = some_value
CLARIFAI_CLIENT_ID = some_value
CLARIFAI_CLIENT_SECRET = some_value
IMAGGA_API_KEY = some_value
IMAGGA_API_SECRET = some_value
In config.yml I've removed the actual values, so it looks like this:
visual-recognition:
  api-key: ${VR_API_KEY}

clarifai:
  client-id: ${CLARIFAI_CLIENT_ID}
  client-secret: ${CLARIFAI_CLIENT_SECRET}

imagga:
  api-key: ${IMAGGA_API_KEY}
  api-secret: ${IMAGGA_API_SECRET}
I have a test that basically creates the API client instances and configures everything. This test fails because it looks like CircleCI is not substituting the values. Here is the output of some prints (this is just after the values are read from config.yml):
-------------------- >> begin captured stdout << ---------------------
Checking tagger queries clarifai API
${CLARIFAI_CLIENT_ID}
${CLARIFAI_CLIENT_SECRET}
COULD NOT LOAD: 'UNAUTHORIZED'
--------------------- >> end captured stdout << ----------------------
The COULD NOT LOAD: 'UNAUTHORIZED' is expected, since invalid credentials make the OAuth dance fail.
Meaning there is no substitution and therefore all tests will fail. What am I doing wrong here? By the way, I don't have a circle.yml file yet; do I need one?
Any clues? Thanks!
EDIT: If anyone runs into the same problem, the solution was rather simple: I encrypted the config.yml file as described here:
https://github.com/circleci/encrypted-files
Then in circle.yml just add an instruction to decrypt it and name the output file config.yml... and that's it!
dependencies:
  pre:
    # update locally with:
    # openssl aes-256-cbc -e -in secret-env-plain -out secret-env-cipher -k $KEY
    - openssl aes-256-cbc -d -in config-cipher -k $KEY >> config.yml
CircleCI also supports environment variables directly (see the CircleCI Environment Variables documentation). Instead of putting the value in the code, you go to Project Settings -> Environment Variables and add a variable with a name and value. You then access the environment variable in the usual way, by name.
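The catch in the original setup appears to be that CircleCI only sets those variables in the build environment; it does not rewrite ${...} placeholders inside arbitrary files such as a config.yml that your own code parses. A minimal sketch of reading them at runtime instead (the variable names are the ones from the project settings above; the dictionary layout is illustrative):
import os

# read the secrets from the environment set in CircleCI project settings,
# instead of expecting ${...} to be substituted inside config.yml
config = {
    "visual-recognition": {"api-key": os.environ.get("VR_API_KEY")},
    "clarifai": {
        "client-id": os.environ.get("CLARIFAI_CLIENT_ID"),
        "client-secret": os.environ.get("CLARIFAI_CLIENT_SECRET"),
    },
    "imagga": {
        "api-key": os.environ.get("IMAGGA_API_KEY"),
        "api-secret": os.environ.get("IMAGGA_API_SECRET"),
    },
}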
Is it possible using Python 3.5 to create and update environment variables in Windows and Linux so that they get persisted?
At the moment I use this:
import os
os.environ["MY_VARIABLE"] = "TRUE"
However it seems as if this does not "store" the environment variable persistently.
I'm speaking for Linux here, not sure about Windows.
Environment variables don't work that way. They are a part of the process (which is what you modify by changing os.environ), and they will propagate to child processes of your process (and their children obviously). They are in-memory only, and there is no way to "set and persist" them directly.
There are however several configuration files which allow you to set the environment on a more granular basis. These are read by various processes, and can be system-wide, specific to a user, specific to a shell, to a particular type of process etc.
Some of them are:
/etc/environment for system-wide variables
/etc/profile for shells (and their children)
Several other shell-specific files in /etc
Various dot-files in a user's home directory such as .profile, .bashrc, .bash_profile, .tcshrc and so on. Read your shell's documentation.
I believe that there are also various ways to configure environment variables for applications launched from GUIs (e.g. from the GNOME panel or something like that).
Most of the time you'll want to set environment variables for the current user only. If you only care about shells, append them to ~/.profile in this format:
export NAME="VALUE"
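Tying that back to the original question, a minimal sketch of doing it from Python (this assumes a Bourne-style login shell that reads ~/.profile; the variable name is the question's example):
import os

def persist_env_var(name, value, profile=os.path.expanduser("~/.profile")):
    """Append an export line to ~/.profile and set the variable for this process too."""
    with open(profile, "a") as f:
        f.write('\nexport {0}="{1}"\n'.format(name, value))
    os.environ[name] = value  # takes effect immediately for this process and its children

persist_env_var("MY_VARIABLE", "TRUE")  # new login shells will pick it up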
The standard way to 'persist' an environment variable is with a configuration file. Write your application to open the configuration file and set every NAME=VARIABLE pair that it finds. Optionally this step could be done in a wrapper startup script.
If you wish to 'save' the state of a variable, you need to open the configuration file and modify its contents. Then when it's read in again, your application will set the environment accordingly.
You could of course store the configuration in some other way. For example in a configuration_settings class that you pickle/shelve. Then on program startup you read in the pickled class and set the environment. The important thing to understand is that when a process exits its environment is not saved. Environments are inherited from the parent process as an intentional byproduct of forking.
Config file could look like:
NAME=VALUE
NAME2=VALUE2
...
Or your config class could look like:
import os

class Configuration():
    def __init__(self):
        self.env_vars = {}

    def set(self, name, val):
        self.env_vars[name] = val
        os.environ[name] = val

    def unset(self, name):
        del self.env_vars[name]
        del os.environ[name]

    def init(self):
        for name in self.env_vars:
            os.environ[name] = self.env_vars[name]
Somewhere else in our application
import shelve
filename = "config.db"
d = shelve.open(filename)
# get our configuration out of shelve
config = d['configuration']
# initialize environment
config.init()
# setting an environment variable
config.set("MY_VARIABLE", "TRUE")
#unsetting
config.unset("MY_VARIABLE")
# save our configuration
d['configuration'] = config
The code is not tested, but I think you get the gist.
I would like to ssh to another server to run some script.
But before I run the script, I need to change directory to the path where the script is located and set some environment variables.
In my local host, it can be done by
os.chdir(path)
os.environ["xxx"] = "xxx"
But in paramiko, I am not sure if any method can accomplish the things above. The closest thing I found is
ssh.exec_command("cd /xxx/yyy;xxx.sh")
But I would prefer not to execute several commands chained together with ;.
Is there any other way to change directory or set environment variables when connecting over SSH with paramiko?
For environment variables, I could not get them to be set directly; however, using an interactive shell will load the user's environment variables, and those you can change in the .bashrc file.
For how to set up an interactive shell:
http://snipplr.com/view/12940/
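In case that link disappears, here is a minimal sketch of an interactive shell with paramiko (the host, credentials, paths and variable names are placeholders from the question):
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("host.example.org", username="user", password="secret")

# the shell request starts the user's login shell, so startup files like .bashrc/.profile are read
chan = client.invoke_shell()
chan.send(b"cd /xxx/yyy\n")
chan.send(b"export xxx=xxx\n")
chan.send(b"./xxx.sh\n")
chan.send(b"exit\n")

while not chan.exit_status_ready():
    if chan.recv_ready():
        print(chan.recv(4096).decode("utf-8"), end="")
client.close()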
I haven't found a solution yet for how to change the host directory; like you, I've been trying to use sshClient.exec_command("cd " + directory_name), but to no effect.
However, I can help with your question of issuing multiple commands. You could simply call sshClient.exec_command("command1; command2; command3"). Alternatively, you could create a helper method such as:
def execCmd(ssh_client, *commands):
    for command in commands:
        stdin, stdout, stderr = ssh_client.exec_command(command)
        for line in stdout.readlines():
            print(line)
        for line in stderr.readlines():
            print(line)
cmds = [command1,command2,command3]
execCmd(SSH_Client,*cmds)
You can use the '|' pipe to combine different commands. It will work with ssh.exec_command().
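If chaining in a single call turns out to be acceptable after all, the working directory and environment variables can also be set inline in one exec_command; a sketch using the placeholders from the question:
stdin, stdout, stderr = ssh.exec_command("cd /xxx/yyy && export xxx=xxx && ./xxx.sh")
print(stdout.read().decode("utf-8"))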