Python pxssh execute iptables not working

I'm using pxssh to establish an SSH connection to a server. The connection can be established and I can run simple commands such as ls -l.
What I need now is to create iptable entries via that SSH connection.
I've tried the following:
from pexpect import pxssh  # on older pexpect versions: import pxssh

s = pxssh.pxssh()
print(ip)
if not s.login(ip, username, auth_password):
    Log("SSH session failed on login")
    Log(str(s))
else:
    Log("SSH session login successful")
    cmd = 'sudo iptables -I INPUT -p udp -m udp --dport 53 -j ACCEPT;'
    s.sendline(cmd)
    s.prompt()
    print(s.before)
    s.logout()
which runs without error, but when I connect to the server, no iptables entry has been created!?

Try modifying your Python script like this:
cmd = '/usr/bin/sudo /usr/sbin/iptables -I INPUT -p udp -m udp --dport 53 -j ACCEPT'
s.sendline(cmd)
You should change the sudo and iptables paths if they're different on your OS.
Also try printing cmd before sending it, to see what is actually executed on the server via the Python script, just to be sure that the correct iptables command runs.
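For example, a minimal sketch of the suggested change (assuming passwordless sudo is configured for this user; if sudo prompts for a password, s.prompt() will time out instead):
cmd = '/usr/bin/sudo /usr/sbin/iptables -I INPUT -p udp -m udp --dport 53 -j ACCEPT'
print(cmd)                 # confirm exactly what is sent
s.sendline(cmd)
s.prompt()
print(s.before)            # any sudo/iptables error message lands here
s.sendline('/usr/bin/sudo /usr/sbin/iptables -L INPUT -n')
s.prompt()
print(s.before)            # verify the rule now appears in the INPUT chain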

Related

Establishing SSH connection using Python

This is the first time I'm asking a question on stackoverflow, so do inform me if I should ask questions differently or provide additional data.
I have an ssh command that I run from my command prompt to connect to a PostgreSQL server:
ssh -L 5433:<host>:5432 -i <public_key> <username>@<IP address> -p 1024 -N
I try to run the same through a Python script. When I run it using
os.system("ssh -L 5433:<host>:5432 -i <public_key> <username>@<IP address> -p 1024 -N")
conn1 = None
conn1 = psycopg2.connect(dbname="db", user="username",
                         host="localhost", password="xxxx", port="5433")
it throws the following error:
could not connect to server: Connection refused (0x0000274D/10061)
Is the server running on host "localhost" (::1) and accepting
TCP/IP connections on port 5433?
I have tried using paramiko to connect, but I'm unable to translate my command line into the required parameters; for example, I don't understand port1:xxx:port2 and how to implement that in paramiko, and the same goes for the public key. Given below is the code I use for that:
import paramiko
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect('<hostname>', username='<username>', key_filename='<public_key>')
I've looked through a lot of documentation but can't find a solution. Any help is massively appreciated.
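For the record, in -L 5433:<host>:5432 the first number (5433) is the local port to listen on, and <host>:5432 is the destination as seen from the SSH server. One way to express this in Python is the third-party sshtunnel package (built on top of paramiko, not part of paramiko itself); a minimal sketch, reusing the placeholders from the command above:
from sshtunnel import SSHTunnelForwarder
import psycopg2

tunnel = SSHTunnelForwarder(
    ('<IP address>', 1024),                  # SSH server and port (-p 1024)
    ssh_username='<username>',
    ssh_pkey='<public_key>',                 # -i <public_key>
    remote_bind_address=('<host>', 5432),    # destination, as seen from the server
    local_bind_address=('127.0.0.1', 5433),  # local end of the tunnel (-L 5433:...)
)
tunnel.start()

conn1 = psycopg2.connect(dbname="db", user="username",
                         host="localhost", password="xxxx", port="5433")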

Unstable connection to remote server in Docker

I have created a docker container to run my python program inside.
My program needs to read the known_hosts file under my .ssh folder:
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.load_host_keys(os.path.expanduser(os.path.join("~", ".ssh", "known_hosts")))
ssh.connect(server, username=username, password=password)
I have mounted it into the docker container using:
docker run --name test_cntr --rm \
-v $SCRIPT_DIR:/home/ \
-v $DATA_DIR:/home/data \
-v $HOME/.ssh/known_hosts:/root/.ssh/known_hosts \
-e PYTHONPATH=/home/sciprt_dir:/home/sciprt_dir/lib \
-e INDEX=0 \
dummy_image python /home/run.py
I found that my program can sometimes retrieve the known_hosts file successfully, but sometimes not, and the error below is shown:
Exception is [Errno -2] Name or service not known
I didn't re-run the container during the run.py execution. I assumed that with known_hosts mounted into the container at the start, run.py would be able to use it throughout the whole run.
In the end I found that one of the servers used by this program was not registered on the DNS server, so my program works when it hits a registered server and fails when it hits the unregistered one. Thanks all for the help!
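Since "Name or service not known" is a DNS resolution failure rather than a mounting problem, a quick standard-library check like this sketch can confirm whether a given server name resolves before paramiko tries to connect:
import socket

try:
    socket.getaddrinfo(server, 22)  # the same lookup paramiko performs internally
except socket.gaierror as e:
    print("cannot resolve %s: %s" % (server, e))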

Access Ipython Notebook Running on Remote Server

I have installed Anaconda Python on a remote Linux machine.
I used PuTTY on my local Windows machine to log in to the remote Linux machine and start the IPython Notebook remotely. It's started on port 8888.
remote_user@remote_host$ ipython notebook --no-browser --port=8888
Now I need to access this Notebook on local browser.
I have tried creating an SSH tunnel:
C:\Users\windowsUser> ssh -N -f -L localhost:8888:localhost:8888 remote_user@remote_host
ssh: connect to host remote_host port 22: Bad file number
But I'm not able to get it right; I'm getting the above error.
Note: the user windowsUser does not exist on remote_host(linux). Remote user account is remote_user.
Where am I going wrong? Please help.
It appears you have a typo. In your ssh command you should not have "localhost" twice.
The corrected command is:
ssh -N -f -L 8888:localhost:8888 remote_user@remote_host
Because the syntax for the command is:
ssh -L <Local Port>:<Target Host>:<Target Port> <Target Machine>
(where <Target Host> is resolved from the target machine's point of view, so localhost here means the remote machine itself)
(see http://www.slashroot.in/ssh-port-forwarding-linux-configuration-and-examples)
Furthermore, you could instead modify your ssh config file (in ~/.ssh/config or /etc/ssh_config) to include port forwarding:
Host remote_host
Hostname PUT_REMOTE_IP_HERE
Port 22
User remote_user
LocalForward 8888 localhost:8888
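With that in place, a plain ssh -N -f remote_host brings up the same tunnel.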
I don't think Windows has an ssh command. If your local machine has a standard ssh client, use:
C:\Users\windowsUser> ssh -N -f -L 8888:localhost:8888 remote_user@remote_host
BTW, you can also bind the notebook to the remote host's IP:
ipython notebook --ip=remote_host_ip
Then you can access it at http://remote_host_ip:8888/tree

Process dies if it is run via a paramiko SSH session with "&" at the end

I just want to run tcpdump in the background using paramiko.
Here is the part of the code:
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(host, username=login, password=password)
transport = ssh.get_transport()
channel = transport.open_session()
channel.get_pty()
channel.set_combine_stderr(True)
cmd = "(nohup tcpdump -i eth1 port 443 -w /tmp/dump20150317183305940107.pcap) &"
channel.exec_command(cmd)
status = channel.recv_exit_status()
After I execute this code, pgrep tcpdump returns nothing.
If I remove the & sign, tcpdump runs correctly, but my SSH shell is blocked.
How can I run tcpdump in background correctly?
Commands I've tried:
cmd = 'nohup tcpdump -i eth1 port 443 -w /tmp/dump20150317183305940107.pcap &\n'
cmd = "screen -d -m 'tcpdump -i eth1 port 443 -w /tmp/dump20150317183305940107.pcap'"
cmd = 'nohup sleep 5 && echo $(date) >> "test.log" &'
With & you make your remote command exit instantly. The remote sshd will therefore likely kill all processes that were started from your command invocation (this depends on the implementation, but openssh does). In your case, you just spawn a new process nohup tcpdump which returns instantly due to the & at the end. channel.recv_exit_status() only blocks until the exit code for the & operation is ready, which it instantly is. Your code then terminates, closing your SSH session, which makes the remote sshd kill the spawned nohup tcpdump process. That's why you end up with no tcpdump process.
Here's what you can do:
Since exec_command is going to spawn a new thread for your command, you can just leave it open and proceed with other tasks. But make sure to empty buffers every now and then (for verbose remote commands) to prevent paramiko from stalling.
import time
import paramiko

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(host, username=login, password=password)
transport = ssh.get_transport()
channel_tcpdump = transport.open_session()
channel_tcpdump.get_pty()
channel_tcpdump.set_combine_stderr(True)
cmd = "tcpdump -i eth1 port 443 -w /tmp/dump20150317183305940107.pcap"  # command will never exit
channel_tcpdump.exec_command(cmd)  # will return instantly due to new thread being spawned

# do something else
time.sleep(15)  # wait 15 seconds
_, stdout, _ = ssh.exec_command("pgrep tcpdump")  # or explicitly pkill tcpdump
print(stdout.read())  # other command, different shell
channel_tcpdump.close()  # close channel and let remote side terminate your proc
time.sleep(10)
You just need to add a sleep command:
cmd = "(nohup tcpdump -i eth1 port 443 -w /tmp/dump20150317183305940107.pcap) &"
change to
cmd = "(nohup tcpdump -i eth1 port 443 -w /tmp/dump20150317183305940107.pcap) &; sleep 1"

Permission denied on git repository with Fabric

I'm writing a fab script to do a git pull on a remote server, but I get Permission denied (publickey,keyboard-interactive) when Fabric runs the command.
If I ssh to the server and then do the pull, it works. (I've already setup the keys on the server, so it doesn't ask for passphrases, etc.)
Here's my fabric task:
import fabric.api as fab
from fabric import colors

def update():
    '''
    update workers code
    '''
    with fab.cd('~/myrepo'):
        # pull changes
        print(colors.cyan('Pulling changes...'))
        fab.run('git pull origin master')
How do I get it to work with Fabric?
Edit: My server is a Google Compute instance, and it provides a gcutil tool to ssh to the instance. This is the command it runs to connect to the server:
ssh -o UserKnownHostsFile=/dev/null -o CheckHostIP=no -o StrictHostKeyChecking=no -i /Users/John/.ssh/google_compute_engine -A -p 22 John@123.456.789.101
The script is able to connect to the server AFAICT (it can run commands on the server like cd, supervisor, and git status); it's just git pull that fails.
You need to edit your fabfile like this to enable the SSH agent forwarding option:
from fabric.api import *
from fabric import colors

env.hosts = ['123.456.789.101']
env.user = 'John'
env.key_filename = '/Users/John/.ssh/google_compute_engine'
env.forward_agent = True

def update():
    '''
    update workers code
    '''
    with cd('~/myrepo'):
        # pull changes
        print(colors.cyan('Pulling changes...'))
        run('git pull origin master')
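Note that agent forwarding only helps if the key is actually loaded in your local ssh-agent; if git pull still fails, try running ssh-add /Users/John/.ssh/google_compute_engine locally first.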
