(Please, if you have an answer, post a tested script; I've tried the "perfect" theoretical script and it didn't work.)
I have a cron job in Python that connects to an external server through an SSH tunnel to read and write a MySQL database there.
The Python program opens an SSH tunnel and binds the remote MySQL port to a local one, then connects to the external DB through the pymysql library as if it were local.
Everything works fine, but... I couldn't make the tunnel be opened and closed by the same program. So I have to open a tunnel manually and leave it open, using the -f -N -T parameters to ssh.
It works OK, but I'd like the program to manage the tunnel itself, without needing to leave a tunnel open. I couldn't get the right parameters for this.
I've read some answers on the Internet that involve waiting for the tunnel to open and then shutting it down by killing the process in the OS. I don't like those solutions. In fact, I could do this in the past with these commands:
sys_command = "ssh -T -f user#external.server -i /home/user/.ssh/id_rsa -p 22222 -L 3310:localhost:3306"
prog = subprocess.Popen(sys_command, stdout=subprocess.PIPE , stderr=subprocess.PIPE , shell = True)
out,err = prog.communicate()
and then I connected to the db with:
con_test = pymysql.connect(user=conf.dbUser, passwd=conf.dbPass, host='localhost', port=3310, db=conf.database)
It worked in another environment, but when I try it now I get:
Cannot fork into background without a command to execute.
I've tried a lot of combinations of parameters. The only one that worked was:
ssh -f -N -T .....
But, as I said, this opens a tunnel indefinitely.
I've also tried using Popen as it should be used (separating the parameters with quotes and commas, and with shell=False):
sys_command = ["ssh","-f","-T","user#external.server","-i","/home/user/.ssh/id_rsa","-p","22222","-L","3310:localhost:3306"]
prog = subprocess.Popen( sys_command, stdout=subprocess.PIPE , stderr=subprocess.PIPE , shell = False)
but I get the same error.
But, surprise: if I use a trick I read somewhere and add "sleep","10" to the end:
sys_command = ["ssh","-f","-T","user#external.server","-i","/home/user/.ssh/id_rsa","-p","22222","-L","3310:localhost:3306","sleep","10"]
...I get the greeting from the other server in out/err!
and, below, the error:
bind: Cannot assign requested address
So, if someone knows a correct way to do it, please answer with a tested script, because many solutions I found on the Internet just didn't work in my environment.
Thanks in advance.
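A minimal sketch of one way the program could manage the tunnel itself (not a tested answer for this environment; it assumes key-based authentication and reuses the host, ports, and conf object from the question): start ssh without -f, wait until the forwarded port accepts connections, do the database work, then terminate the ssh process so no tunnel is left open.

import socket
import subprocess
import time

import pymysql
import conf  # hypothetical: the same configuration module used in the question

tunnel_cmd = [
    "ssh", "-N", "-T",
    "-i", "/home/user/.ssh/id_rsa",
    "-p", "22222",
    "-L", "3310:localhost:3306",
    "user@external.server",
]
tunnel = subprocess.Popen(tunnel_cmd)

try:
    # Poll until the forwarded port accepts connections (or give up after ~30 s).
    for _ in range(30):
        try:
            socket.create_connection(("127.0.0.1", 3310), timeout=1).close()
            break
        except OSError:
            time.sleep(1)
    else:
        raise RuntimeError("tunnel did not come up")

    con = pymysql.connect(user=conf.dbUser, passwd=conf.dbPass,
                          host="127.0.0.1", port=3310, db=conf.database)
    # ... read/write the database here ...
    con.close()
finally:
    tunnel.terminate()  # close the tunnel once the job is done
    tunnel.wait()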
The following is the skeleton code for a script that addresses servers that are down on the network. The script does the job, but I would like it to operate faster/better.
The script does the following:
Determine whether the machine is reachable via SSH.
If it is not reachable, install a recovery image.
If it is reachable, send a script that takes the server name as a command-line argument and does a quick diagnosis to determine why the server went down.
Problems:
Some servers that are reachable over the network get stuck when is_reachable() is called. The diagnosis_script.py uses Linux commands to find hardware issues and logging errors. The script hangs for up to 30 minutes until the SSH connection is terminated. It will continue to the next reachable server in the for loop, but this is a huge time sink.
Is there a way to put a timer on this? To exit the ssh connection and continue to the next server if the current server takes too long?
I believe a queue-based multiprocessing approach could also speed this script up. Does anyone have experience with, or an example of, how to implement something like this? (See the sketch after the skeleton code below.)
Example Skeleton Code:
import os

server_list = [machine1, machine2, machine3, machine4, machine5, machine6, ... , machine100]
reachable = []
unreachable = []

def is_sshable(server_list):
    for server in server_list:
        ssh_tester = 'ssh -o ConnectTimeout=3 -T root@{}'.format(server)
        ssh = os.popen(ssh_tester).read()
        if "0" not in ssh:
            unreachable.append(server)
        else:
            reachable.append(server)

def is_unreachable(servername):
    # "recover server" is an internal Linux command
    for server in unreachable:
        os.system('recover server {}'.format(server))

def is_reachable(servername):
    for server in reachable:
        os.system('python3 diagnosis_script.py {}'.format(server))
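Not part of the original skeleton, but a hedged sketch of how the reachability check could be bounded with hard timeouts and run in parallel (the helper names, worker count, and 30-second cap are invented for illustration; it tests the ssh exit code rather than parsing output):

import subprocess
from concurrent.futures import ThreadPoolExecutor

def check_server(server, timeout=30):
    # Never block longer than the timeouts, so one stuck host cannot stall the run.
    cmd = ['ssh', '-o', 'ConnectTimeout=3', '-o', 'BatchMode=yes',
           'root@{}'.format(server), 'true']
    try:
        result = subprocess.run(cmd, capture_output=True, timeout=timeout)
        return server, result.returncode == 0
    except subprocess.TimeoutExpired:
        return server, False

def classify(server_list, workers=10):
    # Check many servers concurrently instead of one at a time.
    reachable, unreachable = [], []
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for server, ok in pool.map(check_server, server_list):
            (reachable if ok else unreachable).append(server)
    return reachable, unreachable

The same timeout= argument can be passed to subprocess.run when launching diagnosis_script.py, so a hung diagnosis is killed instead of waiting for the SSH connection to drop.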
I am trying to connect to a remote server using Paramiko and send some files to another remote server. I tried the code below, but it didn't work. I checked all connection, username, and password parameters; they don't have any problems. Also, the file I want to transfer exists on the first remote server at the proper path.
The reason I don't download the files to my local computer and upload them to the second server is that the connection between the two remote servers is a lot faster.
Things that I tried:
I set the Paramiko log level to DEBUG, but couldn't find any useful information.
I tried the same scp command from the first server to the second server on the command line; it worked fine.
I tried to log with data = stdout.readlines() after the stdin.flush() line, but that didn't log anything.
import paramiko

s = paramiko.SSHClient()
s.set_missing_host_key_policy(paramiko.AutoAddPolicy())
s.connect("10.10.10.10", 22, username='oracle', password='oracle', timeout=4)
stdin, stdout, stderr = s.exec_command(
    "scp /home/oracle/myFile.txt oracle@10.10.10.20:/home/oracle/myFile.txt")
stdin.write('password\n')
stdin.flush()
s.close()
You cannot write a password to the standard input of OpenSSH scp.
Try it in a shell, it won't work either:
echo password | scp /home/oracle/myFile.txt oracle@10.10.10.20:/home/oracle/myFile.txt
OpenSSH tools (including scp) read the password from a terminal only.
You can emulate the terminal by setting the get_pty parameter of SSHClient.exec_command:
stdin, stdout, stderr = s.exec_command("scp ...", get_pty=True)
stdin.write('password\n')
stdin.flush()
Though enabling terminal emulation can bring unwanted side effects.
A way better solution is to use public key authentication. There are also other workarounds; see How to pass password to scp? (though they internally have to do something similar to get_pty=True anyway).
Other issues:
You have to wait for the command to complete. Calling s.close() will likely terminate the transfer. Using stdout.readlines() will do in most cases, but it may hang; see Paramiko ssh die/hang with big output.
Do not use AutoAddPolicy – you are losing protection against MITM attacks by doing so. For a correct solution, see Paramiko "Unknown Server".
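Putting those points together, a minimal sketch might look like this (a sketch only, reusing the hosts, paths, and password from the question; it assumes the server's host key is already in known_hosts so AutoAddPolicy is not needed):

import paramiko

s = paramiko.SSHClient()
s.load_system_host_keys()  # rely on known_hosts instead of AutoAddPolicy
s.connect("10.10.10.10", 22, username='oracle', password='oracle', timeout=4)

# get_pty=True so scp can read the password from a (pseudo-)terminal.
stdin, stdout, stderr = s.exec_command(
    "scp /home/oracle/myFile.txt oracle@10.10.10.20:/home/oracle/myFile.txt",
    get_pty=True)
stdin.write('password\n')
stdin.flush()

# Wait for scp to finish before closing the connection.
exit_status = stdout.channel.recv_exit_status()
print('scp exit status:', exit_status)
s.close()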
This question already has an answer here:
Executing command using Paramiko exec_command on device is not working
I'm working on a GPS position retrieval project. I have to connect over SSH to routers, then run commands to retrieve latitude and longitude.
I recently received new routers. When we connect to one of these routers, we receive an "OK" message confirming the connection is working; then we run the command we want and get the data as in the example below, always followed by an "OK" message indicating that the command worked:
AT*GNSSSTATUS?
Location Fix=1
Number of satellites = 14
Latitude=+49.17081
Longitude=-123.06970
Date=2016/02/29
Time= 18:55:28
TTFF=9449 milliSeconds
OK
When I connect over SSH with PuTTY, it works, but when my Python script sends the same command (AT*GNSSSTATUS?) through the Paramiko library, the result is just "OK", which only indicates that the connection is active. It's as if the command line opened by the script doesn't receive the "ENTER" that should follow.
To test this, I tried a command that returns "ERROR" when I use PuTTY, but even in this case the Python script returns "OK".
To try to fix this, I tried different options, such as adding:
stdin, stdout, stderr = client.exec_command('AT*GNSSSTATUS? \r\n')
or
stdin, stdout, stderr = client.exec_command('AT*GNSSSTATUS? <CR>')
But in no case does this change the result.
My data list contains only one string marked "OK".
The connection to the router itself works fine.
Anyone have any ideas?
Thanks a lot!
Sorry if there are spelling mistakes ahah.
Thanks Martin Prikryl!
So I looked at the link you sent me and it worked:
Executing command using Paramiko exec_command on device is not working.
So I changed my code to use a shell and send my commands through it.
Here is my code:
shell = client.invoke_shell()
shell.send('AT*GNSSSTATUS? \r')
Thank you very much and have a nice day
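For completeness, a slightly fuller sketch of the shell approach (hedged: the host, credentials, and the fixed two-second wait are placeholders, and it assumes the router's host key is already in known_hosts):

import time
import paramiko

client = paramiko.SSHClient()
client.load_system_host_keys()
client.connect('192.168.1.1', username='admin', password='admin')  # placeholders

# Send the command through an interactive shell, ending it with \r as the router expects.
shell = client.invoke_shell()
shell.send('AT*GNSSSTATUS?\r')

# Give the router a moment to answer, then read whatever is available.
time.sleep(2)
output = ''
while shell.recv_ready():
    output += shell.recv(4096).decode('utf-8', errors='replace')
print(output)

client.close()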
I'm working on a little Python program to speed up managing various Raspberry Pi servers over SSH. It's all done except for one thing: interacting with an SSH session isn't working the way I want it to.
I can interact with a command, but some commands (specifically apt full-upgrade) which ask, or can potentially ask, questions while they're running aren't working. When it reaches the point where it asks "Do you want to continue? [Y/n]", it falls over. I believe it's because apt can't read from stdin, so it aborts.
I know I could run the apt command with the -y flag and bypass the question, but ideally I'd like to capture such requests and ask the user for input. I've been using Paramiko to manage my SSH sessions. What I'm doing is capturing stdout and passing it to a find function to look for things like [Y/n]; if it finds one, the user is redirected to an input prompt, which works. But because there's no stdin available when apt asks the question, apt aborts, and when I send the user's input back to the SSH session I get a socket closed error.
I've been looking for alternatives or ways around the issue, but apart from seeing Fabric mentioned as an alternative to Paramiko I can't see many other options out there. Does anyone know of any alternatives to Paramiko I can try? I don't think Fabric will work for me, given it's based on Paramiko, so I assume I'd hit the same error there. I'd appreciate any recommendations or pointers if there are other parts of Paramiko I can try (I've stuck to using exec_command). I have tried channels, which work to a point, but I don't think keeping the SSH session open is the issue; I think I need some way to keep stdin open/accessible to the apt command on the remote machine so it doesn't abort the command.
At the minute, the best idea I have to get around it is to run the command, let it potentially abort, look in stdout for the relevant phrases, then run the command again after giving the user a chance to set their inputs, and pass the whole lot to stdin.
EDIT:
My program in steps:
login to the remote host
issue a command
use .find on the command to check for the use of 'sudo'
if sudo is present additionally send the user password to stdin along with the command
read the stdout to check for keywords/phrases like '[Y/n]' which are present when the user is being asked for input when running a command
if a keyword is found then ask the user for their input which can then be sent back to stdin to continue the command.
Steps 5 and 6 are where it fails, returning a socket closed error. Looking online, I don't think the issue is with Paramiko as such but with the command running on the remote host, in my case sudo apt full-upgrade.
When I run that command, it runs up to the "Would you like to continue" point and then automatically aborts. I believe the issue is that there is nothing present on stdin at that point (that's what I'm asking the user for), so apt automatically aborts.
This is the part of my code where I'm running the commands:
admin = issue_cmd.find('sudo')
connect.connect(ip_addr, port, uname, passwd)
stdin, stdout, stderr = connect.exec_command(issue_cmd, get_pty=True)
if admin != -1:
    print('Sudo detected. Attempting to elevate privileges...')
    stdin.write(passwd + '\n')
    stdin.flush()
else:
    pass

output = stdout.read()
search = str(output).find('[Y/n]')
if search != -1:
    resp = input(': ')
    print(resp)
    stdin.write(resp + '\n')
    stdin.flush()
else:
    pass

print(stdout.read().decode('utf-8').strip("\n"))
print(stderr.read().decode('utf-8').strip("\n"))
connect.close()
and here's the error message I'm seeing:
OSError: Socket is closed
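One hedged sketch of a direction that avoids the blocking stdout.read(): work with the underlying channel and read incrementally with recv_ready(), so prompts such as [Y/n] can be detected and answered while apt is still running. The host, credentials, and prompt strings below are placeholders, not from the original post.

import time
import paramiko

client = paramiko.SSHClient()
client.load_system_host_keys()
client.connect('pi.example.org', username='pi', password='secret')  # placeholders

# Use a raw channel with a pty so apt believes it has a terminal.
chan = client.get_transport().open_session()
chan.get_pty()
chan.exec_command('sudo apt full-upgrade')

buf = ''
while not chan.exit_status_ready():
    if not chan.recv_ready():
        time.sleep(0.1)  # avoid busy-waiting
        continue
    chunk = chan.recv(4096).decode('utf-8', errors='replace')
    print(chunk, end='')
    buf += chunk
    # Answer prompts as they appear instead of after the command ends.
    if '[Y/n]' in buf:
        chan.send(input(': ') + '\n')
        buf = ''
    elif 'password for' in buf:
        chan.send('secret\n')
        buf = ''

print('exit status:', chan.recv_exit_status())
client.close()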
I am running a script that telnets to a terminal server. Occasionally the script is launched while one instance is already running, which causes the already running script to fail with
EOFError: telnet connection closed
Is there a quick, easy, and Pythonic way to check whether the required socket is already in use on the client computer before I try to open a connection with telnetlib?
SOLUTION:
I wanted to avoid making a subprocess call, but since I do not control the software on the client computers, and other programs may be using the same socket, the file lock suggestion below (a good idea) wouldn't work for me. I ended up using SSutrave's suggestion. Here is my working code, which uses netstat on Windows 7:
# make sure the socket is not already in use (assumes subprocess has been imported)
try:
    netstat = subprocess.Popen(['netstat', '-nao'], stdout=subprocess.PIPE)
except OSError:
    raise ValueError("couldn't launch netstat to check sockets. exiting")
ports = netstat.communicate()[0]
if (ip + ':' + port) in ports:
    print 'socket ' + ip + ':' + port + ' in use on this computer already. exiting'
    return
You can check for open ports by running the Linux command netstat | grep 'port number' | wc -l from Python using the subprocess library.
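A minimal sketch of that idea (the port number is a placeholder, and it assumes a Linux-style netstat is on the PATH):

import subprocess

port = '3310'  # placeholder: the port you want to check

# Count the netstat lines that mention the port; anything above zero means it is in use.
result = subprocess.run(
    "netstat -an | grep '{}' | wc -l".format(port),
    shell=True, capture_output=True, text=True)

in_use = int(result.stdout.strip() or 0) > 0
print('port in use' if in_use else 'port free')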
There is no standard way to know whether a server has other open connections before you attempt to connect to it. You must either ask another service on the server that checks it, or ask the other clients, if you know all of them.
That said, telnet servers should be able to handle more than one connection at a time, so it should not matter if there are more clients connected.