No output when traversing SOCKS5 proxy with Fabric/Paramiko - python

When trying to traverse a SOCKS5 proxy to a RHEL5 Linux host using Fabric 1.6, the command returns, but no output is written to stdout.
$> fab -H myhost -f ./fabfile.py remote_test --show=debug
Using fabfile '/home/myuser/fabric/fabfile.py'
Commands to run: remote_test
Parallel tasks now using pool size of 1
[myhost] Executing task 'remote_test'
[myhost] run: echo testing
Enter SOCKS5 password for myuser:
[myhost] Login password for 'myuser':
$> echo $?
0
$>
The remote_test function is:
def remote_test():
    run('echo testing')
If I run the command against a non SOCKS5 host it works fine.
I am running the latest builds, although I have not yet gotten this to work:
Python 2.7.3
Paramiko == 1.10.0
pycrypto == 2.6
fabric == 1.6.0
RHEL5.9
openssh-4.3p2-82.el5
My ~/.ssh/config looks like the following:
Host *.domain
ProxyCommand connect -S socksproxy.domain:1080 %h %p
And using the connect binary built from http://www.meadowy.org/~gotoh/ssh/connect.c
I don't have access to GitHub from the company network, so I will ask there when I get a chance as well.
Has anyone got any ideas why this could be occurring?
Thanks
Matt

I use connect rather than Fabric, but the answer is surely the same. There is an explanation in connect.c that the SOCKS5_PASSWORD, HTTP_PROXY_PASSWORD, and CONNECT_PASSWORD environment variables do what you want. I have a script called ssh-tbb that goes as follows.
#!/bin/bash
export CONNECT_PASSWORD=""
exec ssh -o ProxyCommand="connect -5 -S 127.0.0.1:9150 %h %p" $*
Ideally, one should call this script ssh-tor and detect if tor lives on port 9050 or 9150 of course.
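The same idea should carry over to Fabric: export the password before the proxied connection is opened, so connect does not prompt on stdin (the prompt is what swallows the task output). A minimal sketch of the fabfile, assuming connect inherits the Python process's environment and that the password value shown is a placeholder:
import os
from fabric.api import run

# Assumption: connect (spawned through the ProxyCommand) inherits this
# process's environment, so setting the variable here avoids the prompt.
os.environ['CONNECT_PASSWORD'] = 'my-socks-password'  # hypothetical value

def remote_test():
    run('echo testing')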

Related

Finding which port a given server is running on from within a Python program

I am developing a Python 3.11 program which will run on a few different servers and needs to connect to the local Redis server. On each machine Redis might run on a different port, sometimes the default 6379 but not always.
On the commandline I can issue the following command which on both my Linux and MacOS servers works well:
(base) bob@Roberts-Mac-mini ~ % sudo lsof -n -i -P | grep LISTEN | grep IPv4 | grep redis
redis-ser 60014 bob 8u IPv4 0x84cd01f56bf0ee21 0t0 TCP *:9001 (LISTEN)
What's the better way to get the running port using python functions/libraries?
What if you run your command within a Python script using the os library:
import os
cmd = 'ls -l'  # change this to the command you want to run
os.system(cmd)
or else you could also use the subprocess library:
import subprocess
print(subprocess.check_output(['ls', '-l']))
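If you would rather not shell out to lsof, a library such as psutil can report the listening port directly. A minimal sketch, assuming psutil is installed and the server process is named redis-server (both assumptions):
import psutil

def find_redis_port():
    # Assumption: the server process is called "redis-server".
    for proc in psutil.process_iter(['name']):
        try:
            if proc.info['name'] == 'redis-server':
                for conn in proc.connections(kind='inet'):
                    if conn.status == psutil.CONN_LISTEN:
                        return conn.laddr.port
        except (psutil.AccessDenied, psutil.NoSuchProcess):
            continue
    return None

print(find_redis_port())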

Script hangs while using Fabric's Connection.run() in the background

Overview
I'm trying to use Python Fabric to run an SSH command as root on a remote server.
The command: nohup ./foo &
foo is expected to run for several days. I must be able to disassociate foo from Fabric's remote SSH session and put foo in the background.
The Fabric FAQ says you should use something like screen or tmux when you run your Fabric script (which runs the backgrounded command). I tried that, but my Fabric script still hung. foo itself is not hanging.
Question
How do I use fabric to run this command on a remote server without the script hanging: nohup ./foo &
Details
This is my script:
#!/bin/sh
# Credit: https://unix.stackexchange.com/a/20895/6766
if "true" : '''\'
then
exec "/nfs/it/network_python/$OSREL/bin/python" "$0" "$#"
exit 127
fi
'''
from getpass import getpass
import os
from fabric import Connection, Config
assert os.geteuid()==0, "ERROR: Must run as root"
for host in ['host1.foo.local', 'host2.foo.local']:
    # Make an ssh connection to the host...
    conn = Connection(host)
    # The script always hangs at this line
    result = conn.run('nohup ./foo &', warn=True, hide=True)
I always open a tmux session to run the aforementioned script in; even doing so, the script hangs when I get to conn.run(), above.
I'm running the script on a vanilla CentOS 6.5 VM; it runs under python 2.7.10 and fabric 2.1.
The Fabric FAQ is unclear... I thought the FAQ wanted tmux used on the local side when I executed the Fabric script.
The correct way to fix this problem is to replace nohup in the remote command with screen -d -m <command>. Now I can run the whole script locally with no hangs (and I don't have to use tmux in the local terminal).
Explicitly, I have to rewrite the last line of my script in my question as:
# Remove &, and nohup...
result = conn.run('screen -d -m ./foo', warn=True, hide=True)
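For completeness, the loop from the question with that one change applied looks like this (a sketch; the host names and ./foo path are the placeholders from the question):
from fabric import Connection

for host in ['host1.foo.local', 'host2.foo.local']:
    conn = Connection(host)
    # screen -d -m detaches foo from the remote session, so run() returns promptly
    result = conn.run('screen -d -m ./foo', warn=True, hide=True)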

How to write a function which will run a command until a certain condition appears in its output?

I have a curl command which is having some issues sending data through a proxy; sometimes it sends the data perfectly, other times it fails. The command looks like
curl --tlsv1 --cipher ALL --connect-timeout 90 -T nint.txt ftps://ftp.box.com/Backup/nants.txt --user "admin:pass" -x socks4://10.21.0.10:1080 -v
Now I need to write Python code which executes this command until the output contains Connection #0 to host ftp.box.com left intact
something like
def send_to_box(zip_name):
    curl_cmd_grep = 'curl --tlsv1 --cipher ALL --connect-timeout 90 -T nint.txt ftps://ftp.box.com/Backup/nants.txt --user "admin:pass" -x socks4://10.21.0.10:1080 2>&1 | grep -q "Connection #0 to host ftp.box.com left intact"'
    while "Connection #0 to host ftp.box.com left intact" in os.system(curl_cmd_grep):
        print("successfully sent the data")
Not sure what is the best way to achieve this.
You can use ftputil, which has TLS support. See an example here: Does ftputil support SSL/TLS?
To use a proxy, consult this SO question: Proxies in Python FTP application
You can also search in the Python documentation: ftplib module.
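If you want to keep the existing curl command, a retry loop around subprocess is one way to do it. A minimal sketch, assuming Python 3 and reusing the command and success string from the question (curl -v writes its connection details to stderr, so both streams are checked):
import subprocess
import time

CURL_CMD = ('curl --tlsv1 --cipher ALL --connect-timeout 90 -T nint.txt '
            'ftps://ftp.box.com/Backup/nants.txt --user "admin:pass" '
            '-x socks4://10.21.0.10:1080 -v')
SUCCESS = 'Connection #0 to host ftp.box.com left intact'

def send_to_box(max_attempts=5):
    for _ in range(max_attempts):
        # Capture both stdout and stderr, since -v output goes to stderr
        proc = subprocess.run(CURL_CMD, shell=True, capture_output=True, text=True)
        if SUCCESS in proc.stdout + proc.stderr:
            print('successfully sent the data')
            return True
        time.sleep(5)  # brief pause before retrying
    return False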

How to check if NTP server is running on a Linux machine by a python script

I need to get the IP address of a machine and check if an NTP server is running on it. In addition, if it is not running, just start it.
I checked several posts and none of them worked for me.
You can use Fabric for this. It's great! Here's a quick and dirty way:
from fabric.api import run
def restart_ntp_if_not_running():
    run('if [[ $(netstat -p tcp -n | grep [your ip].123 | grep ESTABLISHED) ]]; then true; else [your command to restart here]; fi;')
Then execute this like:
fab -H [host name] restart_ntp_if_not_running
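An alternative that avoids parsing netstat output is to query the NTP daemon directly with the ntplib package and only restart it when the query fails. A minimal sketch, assuming ntplib is installed and an init script named ntpd (both assumptions; adjust for your distribution):
import socket
import ntplib
from fabric.api import run

def restart_ntp_if_not_responding(host='127.0.0.1'):
    try:
        # Ask the NTP daemon for the time; raises if nothing answers on UDP 123
        ntplib.NTPClient().request(host, version=3, timeout=5)
        print('NTP is responding on %s' % host)
    except (ntplib.NTPException, socket.error):
        run('service ntpd restart')  # assumed init-script name; adjust as needed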

Python fabric unable to start process

I'm using Python Fabric to deploy binaries to an EC2 server and am attempting to run them in the background (a subshell).
All the Fabric commands for performing local actions, putting files, and executing remote commands without elevated privileges work fine. The issue I run into is when I attempt to run the binary.
with cd("deploy"):
run('mkdir log')
sudo('iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8080', user="root")
result = sudo('./dbserver &', user="root") # <---- This line
print result
if result.failed:
print "Running dbserver failed"
else:
print "DBServer now running server" # this gets printed despite the binary not running
After I log in to the server and run ps aux | grep dbserver, nothing shows up. How can I get Fabric to execute the binary? The same command ./dbserver & executed from the shell does exactly what I want it to. Thanks.
This is likely related to TTY issues, and/or the fact that you're attempting to background a process.
Both of these are discussed in the FAQ under these two headings:
http://www.fabfile.org/faq.html#init-scripts-don-t-work
http://www.fabfile.org/faq.html#why-can-t-i-run-programs-in-the-background-with-it-makes-fabric-hang
Try making the sudo call like this:
sudo('nohup ./dbserver &', user="root", pty=False)
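If it still hangs, the FAQ also suggests redirecting the command's streams so the remote shell can close them; a hedged variant of the same call (the log path is just an example):
from fabric.api import sudo

sudo("nohup ./dbserver >/home/ubuntu/dbserver.log 2>&1 </dev/null &",
     user="root", pty=False)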
