I have made a script like this:
import os
disk = os.statvfs("/home/")
print "~~~~~~~~~~calculation of disk usage:~~~~~~~~~~"
totalBytes = float(disk.f_bsize*disk.f_blocks)
print("Total space : {} GBytes".format(totalBytes/1024/1024/1024))
totalUsedSpace = float(disk.f_bsize*(disk.f_blocks-disk.f_bfree))
print("Used space : {} GBytes".format(totalUsedSpace/1024/1024/1024))
totalAvailSpace = float(disk.f_bsize*disk.f_bfree)
print("Available space : {} GBytes".format(totalAvailSpace/1024/1024/1024))
This checks everything for my own computer, but I want to run the same check against a remote address from this script as well. How can I do that? For example, how would I check the free space on my server?
Check out Fabric, a tool that provides a high-level Python API for executing SSH commands on remote servers.
from fabric.api import run

def disk_free():
    run('df -h')
Then you can run this command on any server:
server:misc$ fab disk_free -H vagrant@192.168.1.7
Executing task 'disk_free'
run: df -h
out: Filesystem Size Used Avail Use% Mounted on
out: /dev/sda1 7.3G 3.3G 3.7G 47% /
out: tmpfs 927M 0 927M 0% /dev/shm
out: /vagrant 409G 339G 71G 83% /vagrant
You could write a simple XML-RPC server exposing your function and deploy it on each remote node you want to check. Then write a collection script that iterates over all the nodes and calls the remote function.
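A minimal sketch of that idea using the standard library's xmlrpc module might look like the following (the port number, function name, and node list here are assumptions, not part of the original suggestion):

# remote_node.py - runs on each node you want to check (port 8000 is an assumption)
import os
from xmlrpc.server import SimpleXMLRPCServer

def disk_free(path="/home/"):
    disk = os.statvfs(path)
    gib = 1024 ** 3
    total = disk.f_bsize * disk.f_blocks / gib
    avail = disk.f_bsize * disk.f_bfree / gib
    # return the figures in GBytes, like the script in the question
    return {"total": total, "used": total - avail, "available": avail}

server = SimpleXMLRPCServer(("0.0.0.0", 8000))
server.register_function(disk_free)
server.serve_forever()

# collector.py - runs on your machine, iterating over a hypothetical node list
# import xmlrpc.client
# for node in ["192.168.1.7", "192.168.1.8"]:
#     proxy = xmlrpc.client.ServerProxy("http://{}:8000/".format(node))
#     print(node, proxy.disk_free())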
For a large number of remote machines, I recommend Ansible. You have to predefine your list of hosts, but once you do, it's as simple as:
ansible all -m command -a 'df -h'
OBS1: this question is duplicated here as suggested by Wayne in the comments, but still with no answer.
I have a remote machine running Ubuntu where I am configuring a JupyterHub notebook server. The server is already up and running; however, I noticed that it only works well for users that have previously logged in to the machine via SSH.
For users that have never logged in to the machine via SSH before, the server spawns a login screen, but after the login comes the following image:
It displayed a different directory path before (I mean different than /user/john.snow), but I configured the JupyterHub spawner class to create the directory by adding the lines:
if not os.path.exists('/home/FOLDER/' + env['JUPYTERHUB_USER']):
    os.system('mkdir /home/FOLDER/' + env['JUPYTERHUB_USER'])
(I append the complete spawner code at the end of the question, in case it's useful.)
Since I don't intend to test every single directory that Jupyter Notebook looks for, my goal is to find the SSH configuration files on the machine and have the spawner mimic what SSH does for that particular user.
Is that possible? I tried looking at /etc/ssh/ssh_config and similar files, but almost all of the file is commented out and the syntax is cryptic.
Thanks for any suggestions.
OBS: full spawner code:
import os, getpass
import yaml
from jupyterhub.spawner import Spawner, LocalProcessSpawner

class spawner(LocalProcessSpawner):
    def start(self):
        # get environment variables,
        # several of which are required for configuring the single-user server
        env = self.get_env()
        ret = super(spawner, self).start()
        if not os.path.exists('/home/FOLDER/' + env['JUPYTERHUB_USER']):
            os.system('mkdir /home/FOLDER/' + env['JUPYTERHUB_USER'])
        os.system('mkdir /home/FOLDER/' + env['JUPYTERHUB_USER'] + '/notebooks')
        os.system('cp -r /usr/local/scripts/notebooks/* /home/FOLDER/' + env['JUPYTERHUB_USER'] + '/notebooks/')
        os.system('chmod -R 777 /home/FOLDER/' + env['JUPYTERHUB_USER'] + '/notebooks/')
        return ret
I found a solution to the problem. Since the spawner code was trying to access folders that are normally created by an SSH login into the machine, the lines
if not os.path.exists('/home/FOLDER/' + env['JUPYTERHUB_USER']):
    os.system('mkdir /home/FOLDER/' + env['JUPYTERHUB_USER'])
were trying to create that folder if it didn't exist. However, there were other configuration steps performed by an SSH login that I couldn't figure out how to replicate. Instead, I found out that the skeleton files a first login copies into a new home directory live in /etc/skel, so I removed these two lines from the spawner and added instead:
os.system('su ' + env['JUPYTERHUB_USER'])
os.system('source /etc/skel/.bashrc')
os.system('source /etc/skel/.profile')
os.system('exit')
The 'su ' + env['JUPYTERHUB_USER'] and 'exit' lines are there because the spawner seems to run as root. This solved it for new users, but old users who had already spawned the red bar were still seeing it; deleting their home folders on the machine seems to have fixed those cases.
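For reference, an alternative sketch (my own variation, not part of the original fix) is to copy the /etc/skel files into the new home directory directly from the spawner instead of chaining su/source calls through os.system, since each os.system call runs in its own shell and sourced settings don't persist between calls:

import os
import shutil

def populate_home(username):
    # hypothetical helper: mimic what a first SSH login does by copying
    # the skeleton profile files (.bashrc, .profile, ...) into the new home
    home = os.path.join('/home/FOLDER', username)
    os.makedirs(home, exist_ok=True)
    for name in os.listdir('/etc/skel'):
        src = os.path.join('/etc/skel', name)
        dst = os.path.join(home, name)
        if os.path.isdir(src):
            shutil.copytree(src, dst, symlinks=True)
        else:
            shutil.copy2(src, dst)
    # the spawner runs as root, so hand ownership of the home dir and its
    # top-level entries to the user
    shutil.chown(home, user=username)
    for name in os.listdir(home):
        shutil.chown(os.path.join(home, name), user=username)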
I have been tasked with writing a custom Python script (since I'm bad with Bash) to run on a remote NRPE client; it recursively counts the number of files in the /tmp directory. This is my script:
#!/usr/bin/python3.5
import os
import subprocess
import sys

file_count = sum([len(files) for r, d, files in os.walk("/tmp")])  # Recursive count of files under /tmp

if file_count < 1000:
    x = subprocess.Popen(['echo', 'OK -', str(file_count), 'files in /tmp.'], stdout=subprocess.PIPE)
    print(x.communicate()[0].decode("utf-8"))  # Converts from byte object to str
    # subprocess.run('exit 0', shell=True, check=True)  # Service OK - exit 0
    sys.exit(0)
elif 1000 <= file_count < 1500:
    x = subprocess.Popen(['echo', 'WARNING -', str(file_count), 'files in /tmp.'], stdout=subprocess.PIPE)
    print(x.communicate()[0].decode("utf-8"))  # Converts from byte object to str
    sys.exit(1)
else:
    x = subprocess.Popen(['echo', 'CRITICAL -', str(file_count), 'files in /tmp.'], stdout=subprocess.PIPE)
    print(x.communicate()[0].decode("utf-8"))  # Converts from byte object to str
    sys.exit(2)
EDIT 1: I tried hardcoding file_count to 1300 and I got a WARNING: 1300 files in /tmp. It appears the issue is solely in the nagios server's ability to read files in the client machine's /tmp.
What I have done:
I have the script in the directory with the rest of the scripts.
I have edited /usr/local/nagios/etc/nrpe.cfg on the client machine with the following line:
command[check_tmp]=/usr/local/nagios/libexec/check_tmp.py
I have edited this /usr/local/nagios/etc/servers/testserver.cfg file on the nagios server as follows:
define service {
    use                  generic-service
    host_name            wp-proxy
    service_description  Files in /tmp
    check_command        check_nrpe!check_tmp
}
The output:
The correct output is: OK - 3 files in /tmp
When I run the script on the client machine as root, I get the correct output.
When I run the script on the client machine as the nagios user, I get the correct output.
The output in Nagios Core APPEARS to be working, but it shows 0 files in /tmp when I know there are more. I made 2 files on the client machine and 1 file on the nagios server.
The server output for reference:
https://puu.sh/BioHW/838ba84c3e.png
(Ignore the bottom server; any issues solved for wp-proxy will also be applied to wpreess-gkanc1.)
EDIT 2: I ran the following on the nagios server:
/usr/local/nagios/libexec/check_nrpe -H 192.168.1.59 -c check_tmp_folder
It did indeed return 0 files. I still don't know how this can be fixed, however.
Check the systemd service file for NRPE; maybe this variable is set to true :)
PrivateTmp= Takes a boolean argument. If true, sets up a new file system namespace for the executed processes and mounts private /tmp and /var/tmp directories inside it that are not shared by processes outside of the namespace.
This is useful to secure access to temporary files of the process, but makes sharing between processes via /tmp or /var/tmp impossible. If this is enabled, all temporary files created by a service in these directories will be removed after the service is stopped. Defaults to false. It is possible to run two or more units within the same private /tmp and /var/tmp namespace by using the JoinsNamespaceOf= directive, see systemd.unit(5) for details.
This setting is implied if DynamicUser= is set. For this setting the same restrictions regarding mount propagation and privileges apply as for ReadOnlyPaths= and related calls, see above. Enabling this setting has the side effect of adding Requires= and After= dependencies on all mount units necessary to access /tmp and /var/tmp.
Moreover, an implicit After= ordering on systemd-tmpfiles-setup.service(8) is added. Note that the implementation of this setting might be impossible (for example if mount namespaces are not available), and the unit should be written in a way that does not solely rely on this setting for security.
SOLVED!
Solution:
Go to your systemd file for nrpe. Mine was found here:
/lib/systemd/system/nrpe.service
If not there, run:
find / -name "nrpe.service"
and ignore all system.slice results
Open the file with vi/nano
Find a line which says PrivateTmp= (usually second to last line)
If it is set to true, set it to false
Save and exit the file, then run the following 2 commands:
systemctl daemon-reload
systemctl restart nrpe.service
Problem solved.
Short explanation: the main reason for this issue is that, with Debian 9.x, some services that run under systemd force private /tmp directories by default. So if you have any other programs that have trouble searching or indexing in /tmp, this solution can be adapted to fit.
I want to get the MAC addresses of all the devices connected to the network [in a script - all Windows environment].
I decided to use Python for that, and I have used nmap for it.
import nmap
nm = nmap.PortScanner()
nm.scan('127.0.0.1', '22-443')
#nm.scan(hosts='192.168.1.0/24', arguments='-n -sP -PE -PA21,23,80,3389')
nm.scan(hosts='192.168.1.0/24', arguments='-n -sP -PE -T5')
for host in nm.all_hosts():
    mac = nm[host]['addresses']['mac']
    print("mac " + mac)
[The problem with this method is that it sometimes misses a few devices. For example, if there are 5 devices connected to the router, the first time I run the script it may return only one or two devices, the next time all 5, the third time maybe only one... like that.]
Whereas if I open a command prompt and run
arp -a
it works perfectly every time.
So my question is: is there a way I could parse the result of that command from the Python script?
I looked into os.popen(), but I am not able to understand exactly how to use it for this.
Is there any good library for this? https://pypi.python.org/pypi/arprequest/0.3
You should have a look at the subprocess module. You probably want to use the check_output function:
import subprocess
output = subprocess.check_output(("arp", "-a"))
# Parse output here
check_output returns a str object in Python 2 and a bytes object in Python 3, which you can convert with output.decode("ascii"), for example.
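As a rough sketch of the parsing step (the regex and loop are my own additions; note that arp -a prints MACs with dashes on Windows and colons on Linux):

import re
import subprocess

output = subprocess.check_output(("arp", "-a")).decode()
# match both 00-11-22-33-44-55 (Windows) and 00:11:22:33:44:55 (Linux) forms
macs = re.findall(r"(?:[0-9a-fA-F]{2}[:-]){5}[0-9a-fA-F]{2}", output)
for mac in macs:
    print("mac " + mac)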
I'm managing two server environments that are configured differently. I access the two environments by specifying different SSH configurations on the command line because I need to specify a different User, ProxyCommand, and a list of other options for SSH.
e.g.
ssh oldserver.example.org -F config_legacy
ssh newserver.example.org -F config
To configure and maintain state on my servers, I've been using Ansible (version 1.9.0.1), which reads an SSH configuration file that is specified by a line in its ansible.cfg:
...
ssh_args = -F some_configuration_file
...
The ansible.cfg file can be loaded from a number of places:
def load_config_file():
    ''' Load Config File order(first found is used): ENV, CWD, HOME, /etc/ansible '''

    p = configparser.ConfigParser()

    path0 = os.getenv("ANSIBLE_CONFIG", None)
    if path0 is not None:
        path0 = os.path.expanduser(path0)
    path1 = os.getcwd() + "/ansible.cfg"
    path2 = os.path.expanduser("~/.ansible.cfg")
    path3 = "/etc/ansible/ansible.cfg"

    for path in [path0, path1, path2, path3]:
        if path is not None and os.path.exists(path):
            try:
                p.read(path)
            except configparser.Error as e:
                print("Error reading config file: \n{0}".format(e))
                sys.exit(1)
            return p
    return None
I could use this behavior to set an environment variable before each command to load an entirely different ansible.cfg, but that seems messy when I only need to fiddle with ssh_args. Unfortunately, Ansible doesn't expose a command-line switch to specify an SSH config.
I'd like to not maintain any modifications to Ansible, and I'd like to not wrap all calls to the ansible or ansible-playbook commands. To preserve the behavior of Ansible's commands, I believe my options are:
a) have the target of ssh_args = -F <<config_file>> be a script that's opened
b) have the target of p.read(path) be a script that gets expanded to generate a valid ansible.cfg
c) just maintain different ansible.cfg files and take advantage of the fact that Ansible picks this file up in the order: environment variable, then cwd
Option C is the only way I can see of accomplishing this. You could have your default/most-used ansible.cfg be the one that is read from the cwd, then optionally set/unset an environment variable that supplies the ssh_args = -F config_legacy value you need (ANSIBLE_SSH_ARGS).
The reason for needing an ansible.cfg instead of just passing an environment variable with SSH options is that Ansible does not honor the User setting in an SSH configuration file -- it has already decided which user it wants to run as by the time a command kicks off.
Dynamic inventory files (ec2.py) are incredibly poor places to hack in a change, for maintenance reasons, which is why it's typical to see --user=REMOTE_USER flags; coupled with setting an ANSIBLE_SSH_ARGS="-F some_ssh_config" environment variable, that makes for ugly commands to give to a casual user of an Ansible repo.
e.g.
ANSIBLE_SSH_ARGS="-F other_ssh_config" ansible-playbook playbooks/why_i_am_doing_this.yml -u ubuntu
v.
ansible-playbook playbooks/why_i_am_doing_this.yml -F other_ansible.cfg
Option A doesn't work because the file is opened all at once for loading into Python, per the p.read() above -- not that it matters, because if files could arbitrarily decide to open as scripts, we'd be living in a very scary world.
This is how the ansible.cfg loading looks from a system perspective:
$ sudo dtruss -a ansible ......
74947/0x11eadf: 312284 3 2 stat64("/Users/tfisher/code/ansible/ansible.cfg\0", 0x7FFF55D936C0, 0x7FD70207EA00) = 0 0
74947/0x11eadf: 312308 19 17 open_nocancel("/Users/tfisher/code/ansible/ansible.cfg\0", 0x0, 0x1B6) = 5 0
74947/0x11eadf: 312316 3 1 read_nocancel(0x5, "# ansible.cfg \n#\n# Config-file load order is:\n# envvar ANSIBLE_CONFIG\n# `pwd`/ansible.cfg\n# ~/.ansible.cfg\n# /etc/ansible/ansible.cfg\n\n# Some unmodified settings are left as comments to encourage research/suggest modific", 0x1000) = 3477 0
74947/0x11eadf: 312308 19 17 open_nocancel("/Users/tfisher/code/ansible/ansible.cfg\0", 0x0, 0x1B6) = 5 0
Option B doesn't work for the same reasons A doesn't work -- even if you create a mock Python file object with proper read/readline/readlines signatures, the file is still being opened for reading only, not execution.
And if this is the correct repo for OpenSSH, the config file is specified like so:
#define _PATH_HOST_CONFIG_FILE SSHDIR "/ssh_config"
processed like so:
/* Read systemwide configuration file after user config. */
(void)read_config_file(_PATH_HOST_CONFIG_FILE, pw,
host, host_arg, &options,
post_canon ? SSHCONF_POSTCANON : 0);
and read here with an fopen, which leaves no room for "file as a script" shenanigans.
Another option is to set the environment variable ANSIBLE_SSH_ARGS to the arguments you want Ansible to pass to the ssh command.
I have a text file on my local machine that is generated by a daily Python script run in cron.
I would like to add a bit of code to have that file sent securely to my server over SSH.
To do this in Python (i.e. not wrapping scp through subprocess.Popen or similar) with the Paramiko library, you would do something like this:
import os
import paramiko
ssh = paramiko.SSHClient()
ssh.load_host_keys(os.path.expanduser(os.path.join("~", ".ssh", "known_hosts")))
ssh.connect(server, username=username, password=password)
sftp = ssh.open_sftp()
sftp.put(localpath, remotepath)
sftp.close()
ssh.close()
(You would probably want to deal with unknown hosts, errors, creating any directories necessary, and so on).
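As a hedged sketch of those extra steps (the host key policy and directory handling below are assumptions, not part of the answer above; server, username, password, localpath and remotedir are placeholders):

import os
import paramiko

ssh = paramiko.SSHClient()
ssh.load_host_keys(os.path.expanduser(os.path.join("~", ".ssh", "known_hosts")))
# accept unknown hosts automatically (convenient, but weaker security)
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(server, username=username, password=password)

sftp = ssh.open_sftp()
try:
    sftp.stat(remotedir)      # does the remote directory exist?
except IOError:
    sftp.mkdir(remotedir)     # create it if not
sftp.put(localpath, remotedir + "/" + os.path.basename(localpath))
sftp.close()
ssh.close()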
You can call the scp bash command (it copies files over SSH) with subprocess.run:
import subprocess
subprocess.run(["scp", FILE, "USER#SERVER:PATH"])
#e.g. subprocess.run(["scp", "foo.bar", "joe#srvr.net:/path/to/foo.bar"])
If you're creating the file that you want to send in the same Python program, you'll want to call subprocess.run command outside the with block you're using to open the file (or call .close() on the file first if you're not using a with block), so you know it's flushed to disk from Python.
You need to generate an SSH key (on the source machine) and install it (on the destination machine) beforehand so that scp is automatically authenticated with your public key (in other words, so your script doesn't ask for a password).
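A minimal sketch of that ordering (the file name and remote path are placeholders):

import subprocess

report = "report.txt"  # hypothetical file created by this script
with open(report, "w") as f:
    f.write("daily results\n")
# the with block has closed (and flushed) the file, so scp sees the full contents
subprocess.run(["scp", report, "user@server.example.com:/remote/path/"], check=True)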
You'd probably use the subprocess module. Something like this:
import subprocess
p = subprocess.Popen(["scp", myfile, destination])
sts = os.waitpid(p.pid, 0)
Where destination is probably of the form user@remotehost:remotepath. Thanks to
@Charles Duffy for pointing out the weakness in my original answer, which used a single string argument to specify the scp operation with shell=True -- that wouldn't handle whitespace in paths.
The module documentation has examples of error checking that you may want to perform in conjunction with this operation.
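For example, one hedged way to add that error checking to the snippet above (myfile and destination are the same placeholders):

import subprocess

p = subprocess.Popen(["scp", myfile, destination])
p.wait()  # wait for scp to finish
if p.returncode != 0:
    raise RuntimeError("scp failed with exit code {}".format(p.returncode))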
Ensure that you've set up proper credentials so that you can perform an unattended, passwordless scp between the machines. There is a stackoverflow question for this already.
There are a couple of different ways to approach the problem:
Wrap command-line programs
Use a Python library that provides SSH capabilities (e.g. Paramiko or Twisted Conch)
Each approach has its own quirks. You will need to set up SSH keys to enable password-less logins if you are wrapping system commands like "ssh", "scp" or "rsync". You can embed a password in a script using Paramiko or some other library, but you might find the lack of documentation frustrating, especially if you are not familiar with the basics of the SSH connection (e.g. key exchanges, agents, etc.). It probably goes without saying that SSH keys are almost always a better idea than passwords for this sort of thing.
NOTE: it's hard to beat rsync if you plan on transferring files via SSH, especially if the alternative is plain old scp.
I've used Paramiko with an eye towards replacing system calls but found myself drawn back to the wrapped commands due to their ease of use and immediate familiarity. You might be different. I gave Conch the once-over some time ago but it didn't appeal to me.
If opting for the system-call path, Python offers an array of options such as os.system or the commands/subprocess modules. I'd go with the subprocess module if using version 2.4+.
I reached the same problem, but instead of "hacking" or emulating the command line, I found this answer here.
from paramiko import SSHClient
from scp import SCPClient
ssh = SSHClient()
ssh.load_system_host_keys()
ssh.connect('example.com')
with SCPClient(ssh.get_transport()) as scp:
    scp.put('test.txt', 'test2.txt')
    scp.get('test2.txt')
You can do something like this to handle the host key checking as well:
import os
os.system("sshpass -p password scp -o StrictHostKeyChecking=no local_file_path username#hostname:remote_path")
Fabric could be used to upload files via SSH:
#!/usr/bin/env python
from fabric.api import execute, put
from fabric.network import disconnect_all
if __name__=="__main__":
import sys
# specify hostname to connect to and the remote/local paths
srcdir, remote_dirname, hostname = sys.argv[1:]
try:
s = execute(put, srcdir, remote_dirname, host=hostname)
print(repr(s))
finally:
disconnect_all()
You can use the vassal package, which is exactly designed for this.
All you need is to install vassal and do
from vassal.terminal import Terminal
shell = Terminal(["scp username#host:/home/foo.txt foo_local.txt"])
shell.run()
Also, it will save your authentication credentials, so you don't need to type them again and again.
Using the external resources paramiko and scp:
from paramiko import SSHClient
from scp import SCPClient
import os
ssh = SSHClient()
ssh.load_host_keys(os.path.expanduser(os.path.join("~", ".ssh", "known_hosts")))
ssh.connect(server, username='username', password='password')
with SCPClient(ssh.get_transport()) as scp:
    scp.put('test.txt', 'test2.txt')
I used sshfs to mount the remote directory via ssh, and shutil to copy the files:
$ mkdir ~/sshmount
$ sshfs user@remotehost:/path/to/remote/dst ~/sshmount
Then in python:
import os
import shutil

# '~' is not expanded automatically by shutil, so expand it explicitly
shutil.copy('a.txt', os.path.expanduser('~/sshmount'))
This method has the advantage that you can stream data over if you are generating data rather than caching locally and sending a single large file.
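For instance (a sketch; the mount point and file name are assumptions), you can write straight into the mounted path instead of building the whole file locally first:

import os

dst = os.path.join(os.path.expanduser("~/sshmount"), "results.csv")
with open(dst, "w") as f:
    for i in range(1000):
        # each write travels over the sshfs mount as it happens
        f.write("row-{}\n".format(i))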
Try this if you want to authenticate with a private key (.pem file):
import subprocess

try:
    # Set scp and ssh data.
    connUser = 'john'
    connHost = 'my.host.com'
    connPath = '/home/john/'
    connPrivateKey = '/home/user/myKey.pem'
    # Use scp to send file from local to host; check=True raises on failure.
    scp = subprocess.run(['scp', '-i', connPrivateKey, 'myFile.txt', '{}@{}:{}'.format(connUser, connHost, connPath)], check=True)
except subprocess.CalledProcessError:
    print('ERROR: Connection to host failed!')
A very simple approach is the following:
import os
os.system('sshpass -p "password" scp user#host:/path/to/file ./')
No Python library is required (only os), and it works; however, this method relies on sshpass and an ssh client being installed, which could result in undesired behavior if run on another system.
Calling the scp command via subprocess doesn't let you receive the progress report inside the script. pexpect could be used to extract that info:
import pipes
import re
import pexpect # $ pip install pexpect
def progress(locals):
    # extract percents
    print(int(re.search(br'(\d+)%$', locals['child'].after).group(1)))
command = "scp %s %s" % tuple(map(pipes.quote, [srcfile, destination]))
pexpect.run(command, events={r'\d+%': progress})
See python copy file in local network (linux -> linux)
Kind of hacky, but the following should work :)
import os
filePath = "/foo/bar/baz.py"
serverPath = "/blah/boo/boom.py"
os.system("scp "+filePath+" user#myserver.com:"+serverPath)