NRPE Python script output bug - python

I have been tasked with making a custom Python script (since I'm not comfortable with Bash) to run on a remote NRPE client; it recursively counts the number of files in the /tmp directory. This is my script:
#!/usr/bin/python3.5
import os
import subprocess
import sys

file_count = sum(len(files) for r, d, files in os.walk("/tmp"))  # Recursive count of files under /tmp

if file_count < 1000:
    x = subprocess.Popen(['echo', 'OK -', str(file_count), 'files in /tmp.'], stdout=subprocess.PIPE)
    print(x.communicate()[0].decode("utf-8"))  # Converts from bytes to str
    # subprocess.run('exit 0', shell=True, check=True)  # Service OK - exit 0
    sys.exit(0)
elif 1000 <= file_count < 1500:
    x = subprocess.Popen(['echo', 'WARNING -', str(file_count), 'files in /tmp.'], stdout=subprocess.PIPE)
    print(x.communicate()[0].decode("utf-8"))  # Converts from bytes to str
    sys.exit(1)
else:
    x = subprocess.Popen(['echo', 'CRITICAL -', str(file_count), 'files in /tmp.'], stdout=subprocess.PIPE)
    print(x.communicate()[0].decode("utf-8"))  # Converts from bytes to str
    sys.exit(2)
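As an aside, the subprocess/echo round trip isn't needed just to print a status line; print() alone produces the same plugin output. A minimal sketch of the same logic with the same thresholds:

#!/usr/bin/python3.5
import os
import sys

file_count = sum(len(files) for _, _, files in os.walk("/tmp"))

if file_count < 1000:
    print("OK - {} files in /tmp.".format(file_count))
    sys.exit(0)
elif file_count < 1500:
    print("WARNING - {} files in /tmp.".format(file_count))
    sys.exit(1)
else:
    print("CRITICAL - {} files in /tmp.".format(file_count))
    sys.exit(2)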
EDIT 1: I tried hardcoding file_count to 1300 and got "WARNING: 1300 files in /tmp.", so the script logic works. It appears the issue is solely in the Nagios server's ability to read files in the client machine's /tmp.
What I have done:
I have the script in the directory with the rest of the scripts.
I have edited /usr/local/nagios/etc/nrpe.cfg on the client machine with the following line:
command[check_tmp]=/usr/local/nagios/libexec/check_tmp.py
I have edited this /usr/local/nagios/etc/servers/testserver.cfg file on the nagios server as follows:
define service {
    use                 generic-service
    host_name           wp-proxy
    service_description Files in /tmp
    check_command       check_nrpe!check_tmp
}
The output:
The correct output is: OK - 3 files in /tmp
When I run the script on the client machine as root, I get the correct output.
When I run the script on the client machine as the nagios user, I get the correct output.
The output in Nagios Core APPEARS to be working, but it shows 0 files in /tmp when I know there are more. I made 2 files on the client machine and 1 file on the nagios server.
The server output for reference:
https://puu.sh/BioHW/838ba84c3e.png
(Ignore the bottom server; any issues solved for wp-proxy will also be fixed on wpreess-gkanc1.)
EDIT 2: I ran the following on the nagios server:
/usr/local/nagios/libexec/check_nrpe -H 192.168.1.59 -c check_tmp_folder
It indeed returned 0 files. I still don't know how this can be fixed, however.

Check the systemd service file for NRPE; this option may be set to true:
PrivateTmp= Takes a boolean argument. If true, sets up a new file system namespace for the executed processes and mounts private /tmp and /var/tmp directories inside it that are not shared by processes outside of the namespace.
This is useful to secure access to temporary files of the process, but makes sharing between processes via /tmp or /var/tmp impossible. If this is enabled, all temporary files created by a service in these directories will be removed after the service is stopped. Defaults to false. It is possible to run two or more units within the same private /tmp and /var/tmp namespace by using the JoinsNamespaceOf= directive, see systemd.unit(5) for details.
This setting is implied if DynamicUser= is set. For this setting the same restrictions regarding mount propagation and privileges apply as for ReadOnlyPaths= and related calls, see above. Enabling this setting has the side effect of adding Requires= and After= dependencies on all mount units necessary to access /tmp and /var/tmp.
Moreover, an implicit After= ordering on systemd-tmpfiles-setup.service(8) is added. Note that the implementation of this setting might be impossible (for example if mount namespaces are not available), and the unit should be written in a way that does not solely rely on this setting for security.

SOLVED!
Solution:
Go to your systemd file for nrpe. Mine was found here:
/lib/systemd/system/nrpe.service
If not there, run:
find / -name "nrpe.service"
and ignore all system.slice results
Open the file with vi/nano
Find a line which says PrivateTmp= (usually second to last line)
If it is set to true, set it to false
Save and exit the file, then run the following two commands:
systemctl daemon-reload
systemctl restart nrpe.service
Problem solved.
Short explanation: The root cause is that with Debian 9.x, some services managed by systemd enable private /tmp directories by default. So if any other program has trouble searching or indexing files in /tmp, this solution can be tailored to fit.
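As a side note, the same fix can be applied without editing the packaged unit file (which a package upgrade may overwrite) by using a systemd drop-in override; a minimal sketch, assuming the unit is named nrpe.service:
# /etc/systemd/system/nrpe.service.d/override.conf
[Service]
PrivateTmp=false
Then run systemctl daemon-reload and systemctl restart nrpe.service as above.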

Replicating ssh behavior with Jupyter notebook spawn

OBS1: this question is duplicated here, as suggested by Wayne in the comments, but still has no answer.
I have a remote machine running Ubuntu where I am configuring a JupyterHub notebook server. The server is already up and running; however, I noticed that it only works well for users that have previously logged in to the machine via SSH.
For users that have never logged in to the machine via SSH before, the server spawns a login screen, but after the login a red error bar appears (screenshot omitted).
It displayed a different directory path before (I mean different than /user/john.snow), but I configured the JupyterHub spawner class to create the directory by adding the lines:
if not os.path.exists('/home/FOLDER/' + env['JUPYTERHUB_USER']):
    os.system('mkdir /home/FOLDER/' + env['JUPYTERHUB_USER'])
(I append the complete spawner code at the end of the question, in case it's useful.)
Since I don't intend to test every single directory that Jupyter notebook looks for, my hope is to find the SSH configuration files on the computer and mimic, in the spawner, what SSH does for that particular user.
Is that possible? I tried looking at /etc/ssh/ssh_config and similar files, but almost all of them are commented out and the syntax is mysterious.
Thanks for any suggestions.
OBS: full spawner code:
import os, getpass
import yaml
from jupyterhub.spawner import Spawner, LocalProcessSpawner

class spawner(LocalProcessSpawner):
    def start(self):
        # get environment variables,
        # several of which are required for configuring the single-user server
        env = self.get_env()
        ret = super(spawner, self).start()
        if not os.path.exists('/home/FOLDER/' + env['JUPYTERHUB_USER']):
            os.system('mkdir /home/FOLDER/' + env['JUPYTERHUB_USER'])
        os.system('mkdir /home/FOLDER/' + env['JUPYTERHUB_USER'] + '/notebooks')
        os.system('cp -r /usr/local/scripts/notebooks/* /home/FOLDER/' + env['JUPYTERHUB_USER'] + '/notebooks/')
        os.system('chmod -R 777 /home/FOLDER/' + env['JUPYTERHUB_USER'] + '/notebooks/')
        return ret
I found a solution to the problem. Since the spawner code was trying to access folders normally created by an SSH login to the machine, the lines
if not os.path.exists('/home/FOLDER/' + env['JUPYTERHUB_USER']):
    os.system('mkdir /home/FOLDER/' + env['JUPYTERHUB_USER'])
were creating this folder when it didn't exist. However, there were other configurations that an SSH login generates which I couldn't figure out how to replicate. Instead, I found out that the default profile files seeded into a new home directory live in /etc/skel, so I removed those two lines from the spawner and added:
os.system('su ' + env['JUPYTERHUB_USER'])
os.system('source /etc/skel/.bashrc')
os.system('source /etc/skel/.profile')
os.system('exit')
The 'su ' + env['JUPYTERHUB_USER'] and 'exit' lines are there because the spawner seems to be executed as root. This solved the problem for new users, but old users who had already hit the red bar were still seeing it; deleting their home folders on the machine solved that.
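A note on that approach: each os.system() call spawns its own short-lived shell, so the chained su/source/exit calls don't persist any state between them; copying /etc/skel directly with Python's file APIs avoids relying on that. A minimal sketch, assuming the spawner runs as root and /home/FOLDER is the home root from the question:

import os
import pwd
import shutil

def ensure_home(username, home_root='/home/FOLDER'):
    # Create and seed a home directory for a first-time JupyterHub user
    home = os.path.join(home_root, username)
    if not os.path.exists(home):
        shutil.copytree('/etc/skel', home)  # seeds .bashrc, .profile, etc.
        pw = pwd.getpwnam(username)  # assumes the system account already exists
        for root, dirs, files in os.walk(home):
            os.chown(root, pw.pw_uid, pw.pw_gid)
            for name in files:
                os.chown(os.path.join(root, name), pw.pw_uid, pw.pw_gid)
    return home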

"/Library" directory permission denied on Mac - Python3

I'm trying to create a program that copies a directory into the Library directory on Mac (path: /Library). I use shutil, which works very well in other directories, but not in the Library directory...
I want to be able to compile my program, so I can't just run it as root.
Here is my code:
import shutil

def copy(src_path, dir_path):
    try:
        shutil.copytree(src_path, dir_path)
        print("Success!")
    except:
        print("Impossible to copy the folder...")
        print("Failed!")

copy("/Users/marinnagy/Desktop/Test", "Library/Test")
I think it's because the Library directory is protected and requires authentication to make changes.
Do I have to make an authentication request to the user? Or do I need to use another method than shutil?
Thanks for your help!
After a good deal of research and many attempts, I finally managed to copy a folder into my Library directory.
On macOS, writing to a protected directory like /Library is blocked for a Python program. Once compiled (I use PyInstaller), it seems to be impossible for a Python application to access this kind of folder, even if you give the app Full Disk Access in System Preferences.
So I used some AppleScript to manage this specific copy/paste task:
on run {scr_path, dir_path} # Run with arguments
    # Translate standard paths to their quoted form
    set formated_scr_path to quoted form of scr_path
    set formated_dir_path to quoted form of dir_path
    # Run a simple shell script to copy the directory into the destination
    do shell script "cp -R " & formated_scr_path & space & formated_dir_path ¬
        with administrator privileges # Ask for administrator privileges
end run
Then, in my Python program, I call the AppleScript when I want to copy/paste to a protected directory like Library:
import subprocess

def copy(scr_path, dir_path):
    # Use the osascript process to call the AppleScript,
    # passing both paths as arguments
    process = subprocess.call(['osascript', "path/to/applescript",
                               scr_path, dir_path])
    return process

copy("path/to/folder 1", "path/to/folder 2")
This method worked for me on protected directories. The AppleScript runs in the background, and an authentication window pops up asking the user to identify as an admin:
result screenshot

uwsgi is running my app as root, but shouldn't be

I have a Flask app run via uWSGI and served by nginx, all controlled by supervisord.
I have set the user parameter in /etc/supervisor.conf to user=webdev,
and in both ../myapp/uwsgi_app.ini and /etc/uwsgi/emperor.ini I have set uid=webdev and gid=www-data.
The problem is, I am having a permissions issue within my app. With the following print statements in one of my views, I discovered that the application appears to be running as root. This is causing issues in a function call that requires creating a directory.
All of the following print statements are located inside the Flask view.
print 'group!! {}'.format(os.getegid())
print 'user id!! {}'.format(os.getuid())
print 'user!! {}'.format(os.path.expanduser('~'))
results in...
group!! 1000
user id!! 1000
user!! /root
EDIT: I added the following print statements:
from subprocess import call
print 'here is user',
call('echo $USER', shell=True)
print 'here is home',
call('echo $HOME', shell=True)
This prints
here is user root
here is home /root
In a terminal on the server, when I type $ id, I get: uid=1000(webdev) gid=1000(webdev) groups=1000(webdev)
Here is the output from $ getent group
root:x:0:
...
webdev:x:1000:
...
www-data:x:1001:nginx
nginx:x:996:nginx
...
Here are some lines from /etc/passwd
webdev:x:1000:1000::/home/webdev:/bin/bash
That's strange, because normally you wouldn't have any permissions issues when running as root (the opposite, actually: you'd have more permissions than necessary in this case).
I have the feeling that you might be running the process as webdev and not root after all. Can you try calling os.getuid() instead of os.path.expanduser()?
The /root directory is often used as a default when there is no home directory set for a user. You can also check webdev's entry in /etc/passwd to see what its home directory is set to.
If you're not running as root, your permissions issue is probably related to something else (maybe webdev isn't the owner of the directory you're writing in?).
EDIT: If you want user webdev to have a proper home directory, run the following as root:
mkdir -p /home/webdev
usermod -m -d /home/webdev webdev
After that, os.path.expanduser() should display the correct home directory.
EDIT 2: I was wrongly assuming that webdev was not a normal user but just a minimally configured service username like www that you were using. My mistake.
In any case, as I mentioned in the comment, what matters is your uid value. You're not running as root because your uid is not 0. Nothing else matters in UNIX terms.
I think I figured it out, though. The way uWSGI works when you specify the uid and gid options is that it still starts as root but immediately calls setuid() to drop its privileges and switch to the uid and gid you provided. This would explain the behavior you're seeing: the environment is still configured for root, and even though uWSGI is now running as webdev, $USER and $HOME still point at root's.
You can try to test this by adding this line inside the Flask view:
open('/home/webdev/testfile', 'a').close()
This will create an empty file in webdev's home directory. Now log in afterwards as webdev, go to /home/webdev and do an ls -l. If the owner of testfile is webdev, you're running as webdev.
If you can establish that, then what you'll have to do is write all your code assuming that $HOME and $USER are wrongly set. I'm not sure how it will affect your code, but try, for instance, to avoid relative paths (it's possible something assumes the default destination is the wrong home directory).
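If that's the case, one way to sidestep the stale environment is to resolve the account from the effective uid rather than trusting $HOME or $USER; a minimal sketch:

import os
import pwd

# Look up the account for the effective uid instead of trusting the
# $HOME/$USER values inherited from uWSGI's root startup
entry = pwd.getpwuid(os.geteuid())
real_user = entry.pw_name  # e.g. 'webdev' even if $USER says 'root'
real_home = entry.pw_dir   # e.g. '/home/webdev' even if $HOME says '/root'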

Specify SSH configuration file at run time in Ansible 1.9

I'm managing two server environments that are configured differently. I access the two environments by specifying different SSH configurations on the command line because I need to specify a different User, ProxyCommand, and a list of other options for SSH.
e.g.
ssh oldserver.example.org -F config_legacy
ssh newserver.example.org -F config
To configure and maintain state on my servers, I've been using Ansible (version 1.9.0.1), which reads an SSH configuration file that is specified by a line in its ansible.cfg:
...
ssh_args = -F some_configuration_file
...
The ansible.cfg is loaded in a number of ways:
def load_config_file():
    ''' Load Config File order(first found is used): ENV, CWD, HOME, /etc/ansible '''

    p = configparser.ConfigParser()

    path0 = os.getenv("ANSIBLE_CONFIG", None)
    if path0 is not None:
        path0 = os.path.expanduser(path0)
    path1 = os.getcwd() + "/ansible.cfg"
    path2 = os.path.expanduser("~/.ansible.cfg")
    path3 = "/etc/ansible/ansible.cfg"

    for path in [path0, path1, path2, path3]:
        if path is not None and os.path.exists(path):
            try:
                p.read(path)
            except configparser.Error as e:
                print("Error reading config file: \n{0}".format(e))
                sys.exit(1)
            return p
    return None
I could use this behavior to set an environment variable before each command to load an entirely different ansible.cfg, but that seems messy, as I only need to fiddle with ssh_args. Unfortunately, Ansible doesn't expose a command-line switch to specify an SSH config.
I'd like to not maintain any modifications to Ansible, and I'd like to not wrap all calls to the ansible or ansible-playbook commands. To preserve the behavior of Ansible's commands, I believe my options are:
a) have the target of ssh_args = -F <<config_file>> be a script that's opened
b) have the target of p.read(path) be a script that gets expanded to generate a valid ansible.cfg
c) just maintain different ansible.cfg files and take advantage of the fact that Ansible picks this file in the order of environment variable, then cwd.
Option C is the only way I can see to accomplish this. You could have your default/most-used ansible.cfg be the one read from the cwd, then optionally set or unset an environment variable that points to the version that specifies the ssh_args = -F config_legacy line that you need (ANSIBLE_SSH_ARGS).
The reason for needing an ansible.cfg instead of just passing an environment variable with SSH options is that Ansible does not honor the User setting in an SSH configuration file; it has already decided who it wants to run as by the time a command kicks off.
Dynamic inventory (ec2.py) files are incredibly poor places to hack in a change, for maintenance reasons, which is why it's typical to see --user=REMOTE_USER flags, which, coupled with setting an ANSIBLE_SSH_ARGS="-F some_ssh_config" environment variable, make for ugly commands to give to a casual user of an Ansible repo.
e.g.
ANSIBLE_SSH_ARGS="-F other_ssh_config" ansible-playbook playbooks/why_i_am_doing_this.yml -u ubuntu
v.
ansible-playbook playbooks/why_i_am_doing_this.yml -F other_ansible.cfg
Option A doesn't work because the file is opened all at once for loading into Python, per the p.read() above; not that it matters, because if files could arbitrarily decide to open as scripts, we'd be living in a very scary world.
This is how the ansible.cfg loading looks from a system perspective:
$ sudo dtruss -a ansible ......
74947/0x11eadf: 312284 3 2 stat64("/Users/tfisher/code/ansible/ansible.cfg\0", 0x7FFF55D936C0, 0x7FD70207EA00) = 0 0
74947/0x11eadf: 312308 19 17 open_nocancel("/Users/tfisher/code/ansible/ansible.cfg\0", 0x0, 0x1B6) = 5 0
74947/0x11eadf: 312316 3 1 read_nocancel(0x5, "# ansible.cfg \n#\n# Config-file load order is:\n# envvar ANSIBLE_CONFIG\n# `pwd`/ansible.cfg\n# ~/.ansible.cfg\n# /etc/ansible/ansible.cfg\n\n# Some unmodified settings are left as comments to encourage research/suggest modific", 0x1000) = 3477 0
74947/0x11eadf: 312308 19 17 open_nocancel("/Users/tfisher/code/ansible/ansible.cfg\0", 0x0, 0x1B6) = 5 0
Option B doesn't work for the same reasons Option A doesn't: even if you create a mock Python file object with proper read/readline/readlines signatures, the file is still being opened for reading only, not execution.
And if this is the correct repo for OpenSSH, the config file is specified like so:
#define _PATH_HOST_CONFIG_FILE SSHDIR "/ssh_config"
processed like so:
/* Read systemwide configuration file after user config. */
(void)read_config_file(_PATH_HOST_CONFIG_FILE, pw,
host, host_arg, &options,
post_canon ? SSHCONF_POSTCANON : 0);
and read here with an fopen, which leaves no room for "file as a script" shenanigans.
Another option is to set the environment variable ANSIBLE_SSH_ARGS to the arguments you want Ansible to pass to the ssh command.
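For example (a sketch, reusing the config_legacy SSH configuration file from the question):
ANSIBLE_SSH_ARGS="-F config_legacy" ansible all -m ping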

python check free disk space for remote addresses

I have made a script like this:
import os
disk = os.statvfs("/home/")
print "~~~~~~~~~~calculation of disk usage:~~~~~~~~~~"
totalBytes = float(disk.f_bsize*disk.f_blocks)
print("Total space : {} GBytes".format(totalBytes/1024/1024/1024))
totalUsedSpace = float(disk.f_bsize*(disk.f_blocks-disk.f_bfree))
print("Used space : {} GBytes".format(totalUsedSpace/1024/1024/1024))
totalAvailSpace = float(disk.f_bsize*disk.f_bfree)
print("Available space : {} GBytes".format(totalAvailSpace/1024/1024/1024))
It checks all of this for my own computer, but I also want to check remote addresses from my computer by running this script. How can I do that? For example, how do I check the space on my server? Need help.
Check out Fabric, a tool that provides a high-level Python API for executing SSH commands on remote servers.
from fabric.api import run

def disk_free():
    run('df -h')
Then you can run this command on any server:
server:misc$ fab disk_free -H vagrant@192.168.1.7
Executing task 'disk_free'
run: df -h
out: Filesystem Size Used Avail Use% Mounted on
out: /dev/sda1 7.3G 3.3G 3.7G 47% /
out: tmpfs 927M 0 927M 0% /dev/shm
out: /vagrant 409G 339G 71G 83% /vagrant
You could write a simple XML-RPC server exposing your function and deploy it on each remote node you want to check. Then you write a collection script that iterates over all nodes and calls your remote function.
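A minimal sketch of that approach in Python 3 (the port, the host list, and the exported path are assumptions):

# Server side, run on each remote node: expose disk stats over XML-RPC
import os
from xmlrpc.server import SimpleXMLRPCServer

def disk_usage(path="/home"):
    st = os.statvfs(path)
    gib = 1024.0 ** 3
    return {"total_gb": st.f_bsize * st.f_blocks / gib,
            "free_gb": st.f_bsize * st.f_bfree / gib}

server = SimpleXMLRPCServer(("0.0.0.0", 8000))
server.register_function(disk_usage)
server.serve_forever()

# Collector side, run on your machine: poll every node in the list
import xmlrpc.client

for host in ["192.168.1.10", "192.168.1.11"]:  # example addresses
    proxy = xmlrpc.client.ServerProxy("http://%s:8000/" % host)
    print(host, proxy.disk_usage("/home"))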
For a large number of remote machines, I recommend Ansible. You have to predefine your list of hosts, but once you do, it's as simple as:
ansible all -m command -a 'df -h'
