I'm making a script that changes your DNS and then pings a website to test latency. I've created a list with all the DNS servers, and I want to use an external batch script to apply each one. However, I'm reasonably new to Python and I don't know how to make Python take an address from the list and substitute it into the batch file. Any help would be much appreciated!
**Python script**
from tcp_latency import measure_latency

host = input("Enter host: ")

def pinger():
    latency = sum(measure_latency(host, port=80, runs=10, timeout=2.5))
    latency = latency / 10
    print("Your average latency is", latency)

dns = ["1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4", "9.9.9.9", "149.112.112.112", "208.67.222.222", "208.67.220.220", "8.26.56.26", "8.20.247.20", "185.228.168.9", "185.228.169.9"]
**Batch script**
@echo off
cls
for /F "skip=3 tokens=1,2,3* delims= " %%G in ('netsh interface show interface') DO (
IF "%%H"=="Disconnected" netsh interface set interface "%%J" enabled
IF "%%H"=="Connected" netsh interface set interface "%%J" enabled
echo %%J
netsh interface ip set dns %%J static 1.1.1.1
)
I haven't tried any approaches just yet
Simple string replacement should work nicely:
dns = ["1.1.1.1","1.0.0.1","8.8.8.8","8.8.4.4","9.9.9.9","149.112.112.112","208.67.222.222","208.67.220.220","8.26.56.26","8.20.247.20","185.228.168.9","185.228.169.9"]
# Assumes .bat and .py scripts are in the same directory
bat_file = "tester.bat"
# Read original .bat file
with open(bat_file, "r") as fs:
bat_str = fs.read()
base_name = bat_file.split(".")[0]
for dns_ip in dns:
new_bat_str = bat_str.replace("1.1.1.1", dns_ip)
# Parse new name for .bat file
new_bat_file = f"{base_name}_dns_{dns_ip.replace('.', '')}.bat"
with open(new_bat_file, "w") as fs:
fs.write(new_bat_str)
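To tie this back to the latency test, here's a minimal sketch (my addition, untested) of how you might run each generated .bat from the question's script and then call pinger(); note that netsh needs an elevated (administrator) prompt:
import subprocess

for dns_ip in dns:
    bat = f"{base_name}_dns_{dns_ip.replace('.', '')}.bat"
    subprocess.run(["cmd", "/c", bat], check=True)  # apply this DNS server
    pinger()  # measure average latency with the new DNS in effect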
Related
I have a Python app which creates containers for the project and the project database using Docker. By default, it uses port 80, and if we want to create multiple instances of the app, I can explicitly provide the port number:
# port 80 is already used, so try another port
$ bin/butler.py setup --port=82
However, it can also happen that the port provided (using --port) is already used by another instance of the same app. So it would be better to know which ports are already in use by the app and avoid them.
How do I know which ports the app has used so far? I would like to determine that from within Python.
You can always use the subprocess module: run ps -elf | grep bin/butler.py for example, parse the output with a regex or simple string manipulation, then extract the used ports.
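A rough sketch of that idea (my illustration; it assumes each instance was started with an explicit --port=N flag that shows up in its command line):
import re
import subprocess

# List processes, filter for the app, then pull out the --port values
out = subprocess.run("ps -elf | grep 'bin/butler.py'", shell=True,
                     stdout=subprocess.PIPE).stdout.decode()
used_ports = sorted({int(p) for p in re.findall(r'--port=(\d+)', out)})
print(used_ports)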
psutil might be the package you need. You can use net_connections and grab the listening ports from there:
import psutil

[conn.laddr.port for conn in psutil.net_connections() if conn.status == 'LISTEN']
[8000, 80, 22, 1298]
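If you want only the ports held by the app's own processes, here's a rough variant (my sketch, not from the original answer; it assumes the process command line mentions bin/butler.py):
import psutil

# Collect listening ports owned by processes whose command line mentions the app
ports = []
for proc in psutil.process_iter(['cmdline']):
    try:
        cmdline = proc.info['cmdline'] or []
        if any('bin/butler.py' in part for part in cmdline):
            ports += [c.laddr.port for c in proc.connections() if c.status == 'LISTEN']
    except (psutil.AccessDenied, psutil.NoSuchProcess):
        pass
print(sorted(set(ports)))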
I wrote a solution where you can get all the ports used by Docker from the Python code:
import re
import subprocess

from termcolor import colored  # pip install termcolor

def cmd_ports_info(self, args=None):
    cmd = "docker ps --format '{{.Ports}}'"
    try:
        cp = subprocess.run(cmd,
                            shell=True,
                            check=True,
                            stdout=subprocess.PIPE)
        cp = cp.stdout.decode("utf-8").strip()
        lines = str(cp).splitlines()
        ports = []
        for line in lines:
            items = line.split(",")
            for item in items:
                port = re.findall(r'\d+(?!.*->)', item)
                ports.extend(port)
        # create a unique list of ports utilized
        ports = list(set(ports))
        print(colored(f"List of ports utilized till now {ports}\n"
                      "Please, use another port to start the project",
                      'green', attrs=['reverse', 'blink']))
    except Exception as e:
        print(f"Docker exec failed command {e}")
        return None
I tried using (going from memory, this may not be 100% accurate):
import socket
socket.sethostname("NewHost")
I got a permissions error.
How would I approach this entirely from within the Python program?
If you only need to change the hostname until the next reboot, many Linux systems can change it with:
import subprocess
subprocess.call(['hostname', 'newhost'])
or with less typing but some potential pitfalls:
import os
os.system('hostname %s' % 'newhost')
I wanted to change the hostname permanently, which required making changes in a few places, so I made a shell script:
#!/bin/bash
# /usr/sbin/change_hostname.sh - program to permanently change hostname. Permissions
# are set so that www-user can `sudo` this specific program.
# args:
# $1 - new hostname, should be a legal hostname
sed -i "s/$HOSTNAME/$1/g" /etc/hosts
echo $1 > /etc/hostname
/etc/init.d/hostname.sh
hostname $1 # this is to update the current hostname without restarting
In Python, I ran the script with subprocess.run:
subprocess.run(
    ['sudo', '/usr/sbin/change_hostname.sh', newhostname])
This was happening from a webserver which was running as www-data, so I allowed it to sudo this specific script without a password. You can skip this step and run the script without sudo if you're running as root or similar:
# /etc/sudoers.d/099-www-data-nopasswd-hostname
www-data ALL = (root) NOPASSWD: /usr/sbin/change_hostname.sh
Here is a different approach
import os

def setHostname(newhostname):
    with open('/etc/hosts', 'r') as file:
        # read a list of lines into data
        data = file.readlines()

    # the hostname is on the 6th line, following the IP address,
    # so this replaces that line with the new hostname
    data[5] = '127.0.1.1 ' + newhostname + '\n'  # keep the trailing newline

    # save the file temporarily because /etc/hosts is protected
    with open('temp.txt', 'w') as file:
        file.writelines(data)

    # use the sudo command to overwrite the protected file
    os.system('sudo mv temp.txt /etc/hosts')

    # repeat the process with the other file
    with open('/etc/hostname', 'r') as file:
        data = file.readlines()
    data[0] = newhostname + '\n'
    with open('temp.txt', 'w') as file:
        file.writelines(data)
    os.system('sudo mv temp.txt /etc/hostname')

# Then call the function
setHostname('whatever')
At the next reboot, the hostname will be set to the new name.
I have developed the following Python script to help me upload NX-OS images to Cisco Nexus switches.
The script runs just fine with small files: I tried files under 100 MB and they work fine. However, I also have NX-OS images that are about 600 MB. At some point while the script is running and the TFTP upload is in progress, the transfer stops when the file on the Cisco flash disk reaches 205987840 bytes. The program freezes, and when I type show users in the Cisco console I can see that the user performing the upload has already disconnected.
I am thinking that maybe it is something related to the SSH session timing out? Or maybe something is wrong in my script? I am new to Python.
I am posting only the relevant parts of the script:
def ssh_connect_no_shell(command):
    global output
    ssh_no_shell = paramiko.SSHClient()
    ssh_no_shell.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh_no_shell.connect(device, port=22, username=myuser, password=mypass)
    ssh_no_shell.exec_command('terminal length 0\n')
    stdin, stdout, stder = ssh_no_shell.exec_command(command)
    output = stdout.readlines()
    ssh_no_shell.close()
def upload_file():
    cmd_1 = "copy tftp:" + "//" + tftp_server + "/" + image + " " + "bootflash:" + " vrf " + my_vrf
    ssh_connect_no_shell(cmd_1)
    print '\n##### Device Output Start #####'
    print '\n'.join(output)
    print '\n##### Device Output End #####'
def main():
    print 'Program starting...\n'
    time.sleep(1)
    variables1()
    check_if_file_present()
    check_if_enough_space()
    upload_file()
    check_file_md5sum()
    are_you_sure(perform_upgrade)
    perform_upgrade_and_reboot()

if __name__ == '__main__':
    clear_screen()
    main()
My experience is:
don't use TFTP
...it's incredibly slow for large files
...it doesn't work well with some firewalls
...it depends on the server implementation to handle large files
=> I'd guess your script would run just fine using a different TFTP server software...
Rather than troubleshooting TFTP, I'd suggest you
go with SCP
...it requires an open SSH port on your Nexus device
...if SSH is possible through your firewall, SCP is too - no extra rule required
+++ you can "push" the images from your laptop to your device without having to log in to the device
for example - use "putty scp" => pscp.exe
:: pscp # Windows client
cd d:\DOWNLOADS
start pscp n7000-s1-kickstart.6.2.12.bin admin@10.10.10.11:bootflash:
start pscp n7000-s1-dk9.6.2.12.bin admin@10.10.10.11:bootflash:
This copies the NX-OS and kickstart images to a device in parallel!
...it's easy to loop over several devices to add more parallel transfers
btw. some "IOS"-based devices require additional flags:
pscp -2 -scp ...
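If you would rather stay inside the original Python script than shell out to pscp, here is a minimal sketch using paramiko's SFTP client to push the image (my sketch, untested on a Nexus; it assumes the switch has the SSH/SFTP server features enabled and reuses the question's device, myuser, mypass, and image variables):
import paramiko

def push_image_over_sftp():
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh.connect(device, port=22, username=myuser, password=mypass)
    # keepalives help prevent the session from being dropped mid-transfer
    ssh.get_transport().set_keepalive(30)
    sftp = ssh.open_sftp()
    try:
        # local image -> the SFTP user's home directory (bootflash: on NX-OS)
        sftp.put(image, image)
    finally:
        sftp.close()
        ssh.close()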
I'm trying to create a scheduled task using the Unix at command. I wanted to run a Python script, but quickly realized that at is configured to run whatever file I give it with sh. In an attempt to circumvent this, I created a file that contained the command python mypythonscript.py and passed that to at instead.
I have set the permissions on the Python file to executable by everyone (chmod a+x), but when the at job runs, I am told python: can't open file 'mypythonscript.py': [Errno 13] Permission denied.
If I run source myshwrapperscript.sh, the shell script invokes the Python script fine. Is there some obvious reason why I'm having permission problems with at?
Edit: I got frustrated with the Python script, so I went ahead and made an sh script version of the thing I wanted to run. I am now finding that the sh script returns rm: cannot remove <filename>: Permission denied (this was a temporary file I was creating to store intermediate data). Is there any way I can authorize these operations with my own credentials, despite not having sudo access? All of this works perfectly when I run it myself, but everything falls apart when I have at do it.
Start the script using python, not the actual script name, e.g.: python path/to/script.py.
at tries to run everything as an sh script.
EDIT: The at command tries running everything as a list of shell commands, so you should feed it the command like this:
echo "python mypythonscript.py" | at now + 1 minute
In this case, the #! line at the beginning of the script is not necessary.
I have been working on task scheduling between servers and clients recently, and I just abstracted out my scheduling code and put it up on GitHub. It was meant to schedule several simulations across multiple machines that each have all the simulations in their filesystems. The idea is that since each machine has a different processor, each would compute a simulation, scp the results back to the server, and request the next simulation from the server. The server responds by scheduling a task on the client to run the next unrun simulation.
Hope this will help you.
NOTE: Since I only abstracted and uploaded the files about 5 minutes ago, I haven't had the chance to test the abstractions. However, if you come across any bugs, please let me know and I'll debug them as soon as I can.
GitHub seems to be down right now, so here are the files that you'll need:
On the server:
serverside
#!/bin/bash
projectDir=~/
minute=`atq | sort -t" " -k1 -nr | head -n1 | cut -d' ' -f4 | cut -d":" -f1,2`
curr=`date | cut -d' ' -f4 | cut -d':' -f1,2`
time=`python -c "import sys; hour,minute=map(int,max(sys.argv[1:]).split(':')); minute += 2; hour, minute = [(hour,minute), ((hour+1)%24,minute%60)][minute>=60]; print '%d:%02d'%(hour, minute)" "$minute" "$curr"`
cat <<EOF | at "$time"
python $projectDir/serverside.py $1
EOF
serverside.py
import sys
import time
import smtplib
import subprocess
import os
import itertools

IP = sys.argv[1].strip()
PROJECT_DIR = ""  # relative path (from the home directory) to the root directory of the project, which contains all subdirs containing simulation files
USERS = {  # keys are IPs of the clients, values are usernames on those clients
}
HOMES = {  # keys are the IPs of clients, values are the absolute paths to the home directories on those clients for the usernames identified in USERS
}
HOME = None  # absolute path to the home directory on the server
SMTP_SERVER = ""
SMTP_PORT = None
FROM_ADDR = None  # the email address from which notification emails will be sent
TO_ADDR = None  # the email address to which notification emails will be sent

def get_next_simulation():
    """ This function returns a list.
    The list contains N>0 elements.
    Each of the first N-1 elements are names of directories (not paths), which when joined together form a relative path (relative from PROJECT_DIR).
    The Nth element is the name of the file - the simulation to be run.
    Before the end user implements this function, it is assumed that N=3.
    Once this function has been implemented, if N!=3, change the code in the lines annotated with "Change code for N in this line"
    Also look for this annotation in clientside.py and clientsideexec """
    pass

done = False
DIR1, DIR2, FILENAME = get_next_simulation()  # Change code for N in this line
while not done:
    try:
        subprocess.check_call("""ssh %(user)s@%(host)s 'sh %(home)s/%(project)s/clientside %(dir1)s %(dir2)s %(filename)s %(host)s' """ % {'user': USERS[IP], 'host': IP, 'home': HOMES[IP], 'project': PROJECT_DIR, 'dir1': DIR1, 'dir2': DIR2, 'filename': FILENAME}, shell=True)  # Change code for N in this line
        done = True
        os.remove("%(home)s/%(project)s/%(dir1)s/%(dir2)s/%(filename)s" % {'home': HOME, 'project': PROJECT_DIR, 'dir1': DIR1, 'dir2': DIR2, 'filename': FILENAME})  # Change code for N in this line
        sm = smtplib.SMTP(SMTP_SERVER, SMTP_PORT)
        sm.sendmail(FROM_ADDR, TO_ADDR, "running %(project)s/%(dir1)s/%(dir2)s/%(filename)s on %(host)s" % {'project': PROJECT_DIR, 'dir1': DIR1, 'dir2': DIR2, 'filename': FILENAME, 'host': IP})  # Change code for N in this line
    except:
        pass
On the client:
clientside
#!/bin/bash
projectpath=~/
python $projectpath/clientside.py "$@"
clientside.py
import subprocess
import sys
import datetime
import os

DIR1, DIR2, FILENAME, IP = sys.argv[1:]

try:
    subprocess.check_call("sh ~/cisdagp/clientsideexec %(dir1)s %(dir2)s %(filename)s %(ip)s" % {'dir1': DIR1, 'dir2': DIR2, 'filename': FILENAME, 'ip': IP}, shell=True, executable='/bin/bash')  # Change code for N in this line
except:
    pass
clientsideexec
#!/bin/bash
projectpath=~/
user=''
serverIP=''
SMTP_SERVER=''
SMTP_PORT=''
FROM_ADDR=''
TO_ADDR=''
MESSAGE=''
cat <<EOF | at now + 2 minutes
cd $projectpath/$1/$2 # Change code for N in this line
sh $3
# copy the logfile back to the server
scp logfile$3 $user@$serverIP:$projectpath/$1/$2/
cd $projectpath
python -c "import smtplib; sm = smtplib.SMTP('$SMTP_SERVER', $SMTP_PORT); sm.sendmail('$FROM_ADDR', '$TO_ADDR', '$MESSAGE')"
python clientsiderequest.py
EOF
Could you try: echo 'python mypythonscript.py' | at ...
This is my first post on Stack Overflow, so I hope I'm doing it the right way! :)
I have a task for my new job that needs to connect to several servers and execute a Python script on all of them. I'm not very familiar with servers (and just started using paramiko), so I apologize for any big mistakes!
The script I want to run on them modifies the authorized_keys file, but to start, I'm trying it with only one server and not yet using the aforementioned script (I don't want to make a mistake and block the server on my first task!).
I'm just trying to list the directory on the remote machine with a very simple function called get_dir(). So far, I've been able to connect to the server with paramiko using the basics (I'm using pdb to debug the script, by the way):
try_paramiko.py
#!/usr/bin/python
import paramiko
from getDir import get_dir
import pdb

def try_this(server):
    pdb.set_trace()
    ssh = paramiko.SSHClient()
    ssh.load_host_keys("pth/to/known_hosts")
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    my_key = paramiko.RSAKey.from_private_key_file("pth/to/id_rsa")
    ssh.connect(server, username="root", pkey=my_key)
    i, o, e = ssh.exec_command(get_dir())
This is the function to get the directory list:
getDir.py
#!/usr/bin/python
import os
import pdb

def get_dir():
    pdb.set_trace()
    print "Current dir list is:"
    for item in os.listdir(os.getcwd()):
        print item
While debugging, I got the directory list of my local machine instead of the one from the remote machine... Is there a way to pass a Python function as a parameter through paramiko? I would like to just have the script locally and run it remotely, like when you do it with a bash file over ssh:
ssh -i pth/to/key username@domain.com 'bash -s' < script.sh
so as to actually avoid copying the Python script to every machine and then running it from them (I guess with the above command the script would also be copied to the remote machine and then deleted, right?). Is there a way to do that with paramiko.SSHClient()?
I have also tried to modify the code and use the standard output of the channel that exec_command creates to list the directory, leaving the scripts like this:
try_paramiko.py
#!/usr/bin/python
import paramiko
from getDir import get_dir
import pdb

def try_this(server):
    pdb.set_trace()
    ssh = paramiko.SSHClient()
    ssh.load_host_keys("pth/to/known_hosts")
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    my_key = paramiko.RSAKey.from_private_key_file("pth/to/id_rsa")
    ssh.connect(server, username="root", pkey=my_key)
    i, o, e = ssh.exec_command(get_dir())
    for line in o.readlines():
        print line
    for line in e.readlines():
        print line
getDir.py
import os

def get_dir():
    return ', '.join(os.listdir(os.getcwd()))
But with this, it actually tries to run the local directory list as commands (which actually makes sense the way I have it). I had to convert the list to a string because I was getting a TypeError saying it expects a string or a read-only character buffer, not a list... I know this was a desperate attempt to pass the function... Does anyone know how I could do such a thing (pass a local function through paramiko to execute it on a remote machine)?
If you have any corrections or tips on the code, they are very much welcome (actually, any kind of help would be very much appreciated!).
Thanks a lot in advance! :)
You cannot just execute a Python function through ssh. ssh is just a tunnel with your code on one side (the client) and a shell on the other (the server). You have to execute shell commands on the remote side.
If using raw ssh code is not critical, I suggest Fabric as a library for writing administration tools. It contains tools for easy ssh handling, file transfer, sudo, parallel execution, and more.
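For instance, a minimal sketch with the modern Fabric (2.x) API (my example, not the answerer's; the host name and key path are placeholders):
from fabric import Connection

# Connect once, then run shell commands on the remote side
conn = Connection(host="server.example.com", user="root",
                  connect_kwargs={"key_filename": "pth/to/id_rsa"})
result = conn.run("ls -la", hide=True)
print(result.stdout)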
I think you might want to change the parameters you're passing into ssh.exec_command. Here's an idea:
Instead of doing:
def get_dir():
    return ', '.join(os.listdir(os.getcwd()))

i, o, e = ssh.exec_command(get_dir())
You might want to try:
i, o, e = ssh.exec_command('pwd')
print o.readlines()
And other things to explore:
Writing a bash or Python script that lives on your servers. You can use Paramiko to log onto the server and execute the script with ssh.exec_command('some_script.sh') or ssh.exec_command('python some_script.py').
Paramiko has some FTP/SFTP utilities, so you can actually use it to put the script on the server and then execute it, as in the sketch below.
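A rough sketch of that second suggestion (my illustration, reusing the connected ssh client from the question; the remote path is an assumption, and the script must print something when executed):
# upload the local script, then run it with the remote interpreter
sftp = ssh.open_sftp()
sftp.put("getDir.py", "/tmp/getDir.py")
sftp.close()
i, o, e = ssh.exec_command("python /tmp/getDir.py")
print o.readlines()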
It is possible to do this by using a here document to feed a module into the remote server's python interpreter.
remotepypath = "/usr/bin/"
# open the module as a text file
with open("getDir.py", "r") as f:
mymodule = f.read()
# setup from OP code
ssh = paramiko.SSHClient()
ssh.load_host_keys("pth/to/known_hosts")
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
my_key = paramiko.RSAKey.from_private_key_file("pth/to/id_rsa")
ssh.connect(server, username = "root", pkey = my_key)
# use here document to feed module into python interpreter
stdin, stdout, stderr = ssh.exec_command("{p}python - <<EOF\n{s}\nEOF".format(p=remotepypath, s=mymodule))
print("stderr: ", stderr.readlines())
print("stdout: ", stdout.readlines())