I have a Python app which creates containers for the project and the project database using Docker. By default it uses port 80, and if we want to create multiple instances of the app, I can explicitly provide a port number:
# port 80 is already used, so, try another port
$ bin/butler.py setup --port=82
However, it can also happen that the port provided via --port is already used by another instance of the same app. So it would be better to know which ports are already in use by the app and avoid choosing any of them.
How do I find out which ports the app has used so far? I would like to do this from within Python.
You can always use the subprocess module: run ps -elf | grep bin/butler.py for example, parse the output with a regex or simple string manipulation, and extract the ports that are in use.
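A minimal sketch of that approach (my illustration; it assumes each running instance was started with an explicit --port=NN visible in its command line):

import re
import subprocess

# list all processes, keep the lines mentioning bin/butler.py,
# and pull out any explicit --port=NN value
ps_output = subprocess.run(["ps", "-elf"], stdout=subprocess.PIPE, text=True).stdout
used_ports = set()
for line in ps_output.splitlines():
    if "bin/butler.py" in line:
        used_ports.update(re.findall(r"--port=(\d+)", line))
print(used_ports)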
psutil might be the package you need. You can use net_connections() and grab the listening ports from there.
[conn.laddr.port for conn in psutil.net_connections() if conn.status=='LISTEN']
[8000,80,22,1298]
I wrote a solution where you can get all the ports used by Docker from the Python code:
import re
import subprocess

from termcolor import colored

def cmd_ports_info(self, args=None):
    cmd = "docker ps --format '{{.Ports}}'"
    try:
        cp = subprocess.run(cmd,
                            shell=True,
                            check=True,
                            stdout=subprocess.PIPE)
        cp = cp.stdout.decode("utf-8").strip()
        lines = str(cp).splitlines()
        ports = []
        for line in lines:
            items = line.split(",")
            for item in items:
                # keep digit runs that are not followed by '->' later in the item
                port = re.findall(r'\d+(?!.*->)', item)
                ports.extend(port)
        # create a unique list of ports utilized
        ports = list(set(ports))
        print(colored(f"List of ports utilized till now {ports}\n" + "Please, use another port to start the project", 'green',
                      attrs=['reverse', 'blink']))
    except Exception as e:
        print(f"Docker exec failed command {e}")
        return None
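As a small follow-up sketch (my own addition, not part of the original answer): once the list of used ports is known, the first free port at or above the app's default of 80 can be picked automatically:

def next_free_port(used_ports, start=80):
    # cmd_ports_info() collects the ports as strings, so compare on str
    port = start
    while str(port) in used_ports:
        port += 1
    return port

# then run e.g.: bin/butler.py setup --port=<the returned value>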
I want to send text files via ssh to 2 servers. My servers have the same name and IP but different ports.
I can do it with 1 server but not with 2. How do I do this? (Normally there should be a port next to -p.)
import subprocess

with open("hosts_and_ports.txt") as hp_fh:
    hp_contents = hp_fh.readlines()
    for hp_pair in hp_contents:
        with open("commands.txt") as fh:
            completed = subprocess.run("ssh ubuntussh@127.0.0.1 -p ", capture_output=True, text=True, shell=True, stdin=hp_pair)
My text file hosts_and_ports.txt contains the ports of my servers
2222;
2224;
exit;
My text file commands.txt contains the files I want to forward via ssh
touch demofile1.txt;
touch demofile2.txt;
exit;
ssh always connects to one (1) single port only. In your scenario you need to define the port with -p 2222 OR -p 2224, e.g. ssh user@192.168.1.1 -p 2224 for one (1) connection, and the same again with the other port for the second connection.
ssh user@192.168.1.1 -p 2224 "command1 && command2" # executes a remote command.
To send a local file: scp -P 2224 local_file user@192.168.1.1:/remote/directory (note that scp uses an uppercase -P for the port).
Your attempt obviously doesn't pass in the port number at all.
As a simplification, I'll assume that you can remove the silly exit; line from both files, and just keep on reading as long as there are lines in both files. Also, trim the semicolon from the end of each line; it is simply in the way. (It's not hard to ignore in the Python program, either, but why put such chaff in the file in the first place?)
import subprocess

with open("commands.txt") as cmd:
    cmds = cmd.readlines()
with open("hosts_and_ports.txt") as hp_fh:
    for line in hp_fh:
        port = line.rstrip('\n')
        for cmd in cmds:
            completed = subprocess.run(
                ["ssh", "ubuntussh@127.0.0.1", "-p", port, cmd],
                capture_output=True, text=True, check=True)
We don't need a shell here, and we are better off without it.
Actually probably also rename the file which only contains port numbers, as its name is currently misleading.
Tangentially, touch demofile1.txt demofile2.txt will create both files with a single remote SSH command. I'm guessing you will have other commands you want to add to the file later on, so this runs all the commands in the file on all the servers in the other file. Generally speaking, you will probably want to minimize the number of remote connections because there is a fair bit of overhead with each login ... so in fact it would make more sense to send the entire commands.txt to each server in one go:
import subprocess

with open("commands.txt") as cmd:
    cmds = cmd.read()
with open("hosts_and_ports.txt") as hp_fh:
    for line in hp_fh:
        port = line.rstrip('\n')
        completed = subprocess.run(
            ["ssh", "ubuntussh@127.0.0.1", "-p", port, cmds],
            capture_output=True, text=True, check=True)
I am using Python 3 with the Docker SDK and using
containers.run in order to create a container and run my code.
When I use the command argument with one command as a string, it works fine;
see the code:
client = docker.from_env()
container = client.containers.run(image=image, command="echo 1")
When I try to use a list of commands (which is fine according to the docs)
client = docker.from_env()
container = client.containers.run(image=image, command=["echo 1", "echo 2"])
I am getting this error
OCI runtime create failed: container_linux.go:345: starting container
process caused "exec: \"echo 1\": executable file not found in $PATH
The same happens when using one string such as
"echo 1; echo 2"
I am using Ubuntu 19 with Docker
Docker version 18.09.9, build 1752eb3
It used to work just fine with a list of commands. Is there anything wrong with the new version of Docker, or am I missing something here?
You can use this:
client = docker.from_env()
# detach=True makes run() return a Container object (instead of the container logs),
# and tty=True keeps /bin/sh alive so exec_run() calls can be made against it
container = client.containers.run(image=image, command='/bin/sh', tty=True, detach=True)
result = container.exec_run('echo 1')
result = container.exec_run('echo 2')
container.stop()
container.remove()
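As a usage note (my addition, based on the Docker SDK behaviour as I understand it): exec_run returns an (exit_code, output) tuple, so each command's result can be checked:

# output is bytes; decode it before printing
exit_code, output = container.exec_run('echo 1')
print(exit_code, output.decode())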
try this:
container = client.containers.run(image="alpine:latest", command=["/bin/sh", "-c", 'echo 1 && echo 2'])
This is my first post on StackOverflow, so I hope I'm doing it the right way! :)
I have this task to do for my new job that needs to connect to several servers and execute a python script in all of them. I'm not very familiar with servers (and just started using paramiko), so I apologize for any big mistakes!
The script I want to run on them modifies the authorized_keys file, but to start, I'm trying it with only one server and not yet using the aforementioned script (I don't want to make a mistake and block the server in my first task!).
I'm just trying to list the directory in the remote machine with a very simple function called getDir(). So far, I've been able to connect to the server with paramiko using the basics (I'm using pdb to debug the script by the way):
try_paramiko.py
#!/usr/bin/python
import paramiko
from getDir import get_dir
import pdb

def try_this(server):
    pdb.set_trace()
    ssh = paramiko.SSHClient()
    ssh.load_host_keys("pth/to/known_hosts")
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    my_key = paramiko.RSAKey.from_private_key_file("pth/to/id_rsa")
    ssh.connect(server, username = "root", pkey = my_key)
    i, o, e = ssh.exec_command(get_dir())
This is the function to get the directory list:
getDir.py
#!/usr/bin/python
import os
import pdb

def get_dir():
    pdb.set_trace()
    print "Current dir list is:"
    for item in os.listdir(os.getcwd()):
        print item
While debugging, I got the directory list of my local machine instead of the one from the remote machine... Is there a way to pass a Python function as a parameter through paramiko? I would like to just have the script locally and run it remotely, like when you do it with a bash file from ssh with:
ssh -i pth/to/key username@domain.com 'bash -s' < script.sh
so as to avoid copying the Python script to every machine and then running it from there (I guess with the above command the script would also be copied to the remote machine and then deleted, right?). Is there a way to do that with paramiko.SSHClient()?
I have also tried to modify the code and use the standard output of the channel that exec_command creates to list the directory, leaving the scripts like this:
try_paramiko.py
#!/usr/bin/python
import paramiko
from getDir import get_dir
import pdb

def try_this(server):
    pdb.set_trace()
    ssh = paramiko.SSHClient()
    ssh.load_host_keys("pth/to/known_hosts")
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    my_key = paramiko.RSAKey.from_private_key_file("pth/to/id_rsa")
    ssh.connect(server, username = "root", pkey = my_key)
    i, o, e = ssh.exec_command(get_dir())
    for line in o.readlines():
        print line
    for line in e.readlines():
        print line
getDir.py
import os

def get_dir():
    return ', '.join(os.listdir(os.getcwd()))
But with this, it actually tries to run the local directory listing as commands (which actually makes sense the way I have it). I had to convert the list to a string because I was getting a TypeError saying that it expects a string or a read-only character buffer, not a list... I know this was a desperate attempt to pass the function... Does anyone know how I could do such a thing (pass a local function through paramiko to execute it on a remote machine)?
If you have any corrections or tips on the code, they are very much welcome (actually, any kind of help would be very much appreciated!).
Thanks a lot in advance! :)
You cannot just execute a Python function through ssh. ssh is just a tunnel with your code on one side (the client) and a shell on the other (the server). You have to execute shell commands on the remote side.
If using raw ssh code is not critical, I suggest Fabric as a library for writing administration tools. It contains tools for easy ssh handling, file transfer, sudo, parallel execution and more.
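A minimal illustrative sketch of what a fabfile looks like with the Fabric 1.x API (host names and the key path are placeholders/assumptions, not from the original answer):

# fabfile.py
from fabric.api import env, run

env.hosts = ["root@server1.example.com", "root@server2.example.com"]
env.key_filename = "pth/to/id_rsa"

def list_dir():
    # runs on every host in env.hosts when invoked as: fab list_dir
    run("ls -la")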
I think you might want to change the parameters you're passing into ssh.exec_command. Here's an idea:
Instead of doing:
def get_dir():
    return ', '.join(os.listdir(os.getcwd()))

i, o, e = ssh.exec_command(get_dir())
You might want to try:
i, o, e = ssh.exec_command('pwd')
print o.readlines()
And other things to explore:
Writing a bash script or a Python script that lives on your servers. You can use Paramiko to log onto the server and execute the script with ssh.exec_command(some_script.sh) or ssh.exec_command(some_script.py).
Paramiko has some FTP/SFTP utilities, so you can actually use it to put the script on the server and then execute it; a rough sketch of that follows.
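For illustration only (reusing the ssh client from the question's code; the /tmp paths are assumptions, and the script must call its function itself, e.g. under an if __name__ == '__main__': guard, for anything to be printed):

sftp = ssh.open_sftp()
sftp.put("getDir.py", "/tmp/getDir.py")   # upload the local script
sftp.close()
stdin, stdout, stderr = ssh.exec_command("python /tmp/getDir.py")
print(stdout.read())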
It is possible to do this by using a here document to feed a module into the remote server's python interpreter. (Note that the module fed this way needs to end up calling get_dir() itself; merely defining the function remotely produces no output.)
remotepypath = "/usr/bin/"
# open the module as a text file
with open("getDir.py", "r") as f:
mymodule = f.read()
# setup from OP code
ssh = paramiko.SSHClient()
ssh.load_host_keys("pth/to/known_hosts")
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
my_key = paramiko.RSAKey.from_private_key_file("pth/to/id_rsa")
ssh.connect(server, username = "root", pkey = my_key)
# use here document to feed module into python interpreter
stdin, stdout, stderr = ssh.exec_command("{p}python - <<EOF\n{s}\nEOF".format(p=remotepypath, s=mymodule))
print("stderr: ", stderr.readlines())
print("stdout: ", stdout.readlines())
I'm trying to write a Python script that can ssh into a remote server and execute simple commands like ls and cd from the Python client. However, I'm not able to read the output from the pseudo-terminal after successfully ssh'ing into the server. Could anyone please help me here so that I can execute some commands on the server?
Here is the sample code:
#!/usr/bin/python2.6
import os,sys,time,thread

pid,fd = os.forkpty()
if pid == 0:
    os.execv('/usr/bin/ssh',['/usr/bin/ssh','user@host',])
    sys.exit(0)
else:
    output = os.read(fd,1024)
    print output
    data = output
    os.write(fd,'password\n')
    time.sleep(1)
    output = os.read(fd,1024)
    print output
    os.write(fd,'ls\n')
    output = os.read(fd,1024)
    print output
Sample output:
user@host's password:
Last login: Wed Aug 24 03:16:57 2011 from 1x.x.x.xxxx
-bash: ulimit: open files: cannot modify limit: Operation not permitted
host: /home/user>ls
I'd suggest trying the module pexpect, which is built exactly for this sort of thing (interfacing with other applications via pseudo-TTYs), or Fabric, which is built for this sort of thing more abstractly (automating system administration tasks on remote servers using SSH).
pexpect: http://pypi.python.org/pypi/pexpect/
Fabric: http://docs.fabfile.org/en/1.11/
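For a flavour of the pexpect route, here is a minimal sketch driving the same ssh + ls interaction as the question (my illustration; the host, password, and prompt patterns are assumptions based on the sample output above):

import pexpect

child = pexpect.spawn('ssh user@host')
child.expect("password:")        # matches "user@host's password:"
child.sendline('password')
child.expect('>')                # the sample output shows a '>'-terminated prompt
child.sendline('ls')
child.expect('>')
print(child.before)              # everything printed between the two prompts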
As already stated, it is better to use public keys. Since I use them normally, I have changed your program so that it works here.
#!/usr/bin/python2.6
import os,sys,time,thread

pid,fd = os.forkpty()
if pid == 0:
    os.execv('/usr/bin/ssh',['/usr/bin/ssh','localhost',])
    sys.exit(0)
else:
    output = os.read(fd,1024)
    print output
    os.write(fd,'ls\n')
    time.sleep(1) # this is new!
    output = os.read(fd,1024)
    print output
With the added sleep(1), I give the remote host (or, in my case, not-so-remote host) time to process the ls command and produce its output.
If you send ls and read immediately, you only read what is currently present. Maybe you should read in a loop, as sketched below.
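A rough illustration of such a read loop (my sketch, not from the original answer), reusing the fd from the forkpty() example above; the half-second timeout is an arbitrary assumption:

import os
import select

# keep reading from the pty as long as more data arrives within 0.5 seconds
chunks = []
while True:
    ready, _, _ = select.select([fd], [], [], 0.5)
    if not ready:
        break
    data = os.read(fd, 1024)
    if not data:
        break
    chunks.append(data)
output = b''.join(chunks)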
Or you should just do it this way:
import subprocess
sp = subprocess.Popen(("ssh", "localhost", "ls"), stdout=subprocess.PIPE)
print sp.stdout.read()
I've just become the system admin for my research group's cluster and, in this respect, am a novice. I'm trying to make a few tools to monitor the network and need help getting started implementing them with python (my native tongue).
For example, I would like to view who is logged onto remote machines. By hand, I'd ssh in and run who, but how would I get this info into a script for manipulation? Something like:
import remote_info as ri
ri.open("foo05.bar.edu")
ri.who()
Out[1]:
hutchinson tty7 2009-08-19 13:32 (:0)
hutchinson pts/1 2009-08-19 13:33 (:0.0)
Similarly for things like cat /proc/cpuinfo to get the processor information of a node. A starting point would be really great. Thanks.
Here's a simple, cheap solution to get you started
from subprocess import *
p = Popen('ssh servername who', shell=True, stdout=PIPE)
p.wait()
print p.stdout.readlines()
returns (eg)
['usr pts/0 2009-08-19 16:03 (kakapo)\n',
'usr pts/1 2009-08-17 15:51 (kakapo)\n',
'usr pts/5 2009-08-17 17:00 (kakapo)\n']
and for cpuinfo:
p = Popen('ssh servername cat /proc/cpuinfo', shell=True, stdout=PIPE)
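As an illustrative follow-up (not part of the original answer), the lines returned by readlines() can be split into fields so the who output can actually be manipulated rather than just printed:

from subprocess import Popen, PIPE

p = Popen('ssh servername who', shell=True, stdout=PIPE)
p.wait()
for line in p.stdout.readlines():
    fields = line.split(None, 2)     # user, tty, rest of the line
    print fields[0], fields[1], fields[2]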
I've been using Pexpect, which lets you ssh into machines, send commands, read the output, and react to it, with success. I even started an open-source project around it, Proxpect - which hasn't been updated in ages, but I digress...
The pexpect module can help you interface with ssh. More or less, here is what your example would look like.
import pexpect
child = pexpect.spawn('ssh servername')
child.expect('Password:')
child.sendline('ABCDEF')
child.sendline('who')
child.expect('\$')      # wait for the shell prompt (assumes a '$'-style prompt)
output = child.before   # sendline() only returns a byte count; the command output accumulates here
If your needs outgrow a simple "ssh remote-host.example.org who", then there is an awesome Python library called RPyC. It has a so-called "classic" mode which allows you to almost transparently execute Python code over the network with a few lines of code. It's a very useful tool for trusted environments.
Here's an example from Wikipedia:
import rpyc
# assuming a classic server is running on 'hostname'
conn = rpyc.classic.connect("hostname")

# runs os.listdir() and os.stat() remotely, printing results locally
def remote_ls(path):
    ros = conn.modules.os
    for filename in ros.listdir(path):
        stats = ros.stat(ros.path.join(path, filename))
        print "%d\t%d\t%s" % (stats.st_size, stats.st_uid, filename)

remote_ls("/usr/bin")
If you're interested, there's a good tutorial on their wiki.
But, of course, if you're perfectly fine with ssh calls using Popen or just don't want to run a separate "RPyC" daemon, then this is definitely overkill.
This covers the bases. Notice the use of sudo for things that needed more privileges. We configured sudo to allow those commands for that user without needing a password typed.
Also, keep in mind that you should run ssh-agent to make this "make sense". But all in all, it works really well. Running deploy-control httpd configtest will check the apache configuration on all the remote servers.
#!/usr/local/bin/python

import subprocess
import sys

# The user@host: for the SourceURLs (NO TRAILING SLASH)
RemoteUsers = [
    "deploy@host1.example.com",
    "deploy@host2.appcove.net",
    ]

###################################################################################################
# Global Variables
Arg = None

# Implicitly verified below in if/else
Command = tuple(sys.argv[1:])

ResultList = []
###################################################################################################
for UH in RemoteUsers:
    print "-"*80
    print "Running %s command on: %s" % (Command, UH)
    #----------------------------------------------------------------------------------------------
    if Command == ('httpd', 'configtest'):
        CommandResult = subprocess.call(('ssh', UH, 'sudo /sbin/service httpd configtest'))
    #----------------------------------------------------------------------------------------------
    elif Command == ('httpd', 'graceful'):
        CommandResult = subprocess.call(('ssh', UH, 'sudo /sbin/service httpd graceful'))
    #----------------------------------------------------------------------------------------------
    elif Command == ('httpd', 'status'):
        CommandResult = subprocess.call(('ssh', UH, 'sudo /sbin/service httpd status'))
    #----------------------------------------------------------------------------------------------
    elif Command == ('disk', 'usage'):
        CommandResult = subprocess.call(('ssh', UH, 'df -h'))
    #----------------------------------------------------------------------------------------------
    elif Command == ('uptime',):
        CommandResult = subprocess.call(('ssh', UH, 'uptime'))
    #----------------------------------------------------------------------------------------------
    else:
        print
        print "#"*80
        print
        print "Error: invalid command"
        print
        HelpAndExit()  # usage helper; not shown in this snippet
    #----------------------------------------------------------------------------------------------
    ResultList.append(CommandResult)
    print

###################################################################################################
if any(ResultList):
    print "#"*80
    print "#"*80
    print "#"*80
    print
    print "ERRORS FOUND. SEE ABOVE"
    print
    sys.exit(0)
else:
    print "-"*80
    print
    print "Looks OK!"
    print
    sys.exit(1)
Fabric is a simple way to automate some simple tasks like this. The version I'm currently using allows you to wrap up commands like so:
run('whoami', fail='ignore')
You can specify config options (config.fab_user, config.fab_password) for each machine you need (if you want to automate username/password handling).
More info on Fabric here:
http://www.nongnu.org/fab/
There is a new version which is more Pythonic - I'm not sure whether that is going to be better for you in this case... works fine for me at present...