Unstable connection to remote server in Docker - python

I have created a Docker container to run my Python program inside.
My program needs to read the known_hosts file under my .ssh folder:
import os
import paramiko
ssh = paramiko.SSHClient()
# AutoAddPolicy auto-accepts unknown hosts, largely bypassing known_hosts checks
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.load_host_keys(os.path.expanduser(os.path.join("~", ".ssh", "known_hosts")))
ssh.connect(server, username=username, password=password)
I have mounted it into the docker container using:
docker run --name test_cntr --rm \
-v $SCRIPT_DIR:/home/ \
-v $DATA_DIR:/home/data \
-v $HOME/.ssh/known_hosts:/root/.ssh/known_hosts \
-e PYTHONPATH=/home/sciprt_dir:/home/sciprt_dir/lib \
-e INDEX=0 \
dummy_image python /home/run.py
I found that my program can sometimes read the known_hosts file successfully, but sometimes it fails with the error below:
Exception is [Errno -2] Name or service not known
I didn't restart the container while run.py was executing. I assume known_hosts is mounted into the container at startup, so run.py should be able to use it for the whole run.

In the end I found that one of the servers used by this program was not registered on the DNS server, so the program worked when it connected to a registered server and failed when it hit the unregistered one. Thanks all for the help!
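For reference, [Errno -2] Name or service not known is a DNS resolution failure (socket.gaierror), not a known_hosts problem. A minimal sketch, assuming server holds the target hostname, that surfaces the DNS error before paramiko gets involved:
import socket

try:
    # gethostbyname raises socket.gaierror ([Errno -2]) for names the DNS server doesn't know
    addr = socket.gethostbyname(server)
    print("resolved %s -> %s" % (server, addr))
except socket.gaierror as exc:
    print("DNS lookup failed for %s: %s" % (server, exc))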

Related

Running mkdir -p remotely via ssh results in is not a valid local path or glob error

I'm using the Fabric (http://www.fabfile.org) framework, which connects via SSH to a VPS (a Droplet on DigitalOcean) to push some bash commands.
Running a simple bash command like mkdir fails with:
ValueError: 'mkdir -p /opt/create_this_dir' is not a valid local path or glob.
What could be the problem here? When I log into the VPS via ssh as root, I'm able to run
"mkdir -p /opt/create_this_dir"
and the directory gets created under /opt/ without the error I get when running the command remotely with the Fabric script.
It turned out I needed to use
run("sudo mkdir -p /opt/reimaginedworks")
instead of
put("sudo mkdir -p /opt/reimaginedworks")
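For context, a short sketch of the distinction in the Fabric 1.x API: run() and sudo() execute shell commands on the remote host, while put() copies a local file to the remote host, which is why handing it a command string triggers the "not a valid local path or glob" error:
from fabric.api import run, sudo, put

def make_dirs():
    run('mkdir -p /tmp/some_dir')          # executes a command remotely
    sudo('mkdir -p /opt/reimaginedworks')  # same, but with sudo
    # put() takes paths, not commands: put(local_path, remote_path)
    put('local_file.txt', '/opt/reimaginedworks/file.txt')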

pyftpdlib Network protocol Error

I am using pyftpdlib and pymongo to build an FTP server with GridFS.
Locally everything is working great.
Now I want to run the server using Docker. I am using the python:3.6-alpine Docker image and a mongo:latest image.
I run the FTP server with:
docker run -it --rm -p 21:21 ftpimage
And the mongo image with:
docker run -it --rm mongo
Then I connect with:
ftp localhost
Login is working and the pwd command as well. But when I run ls I get the following error:
522 Network protocol not supported (use 1).
500 Command "LPRT" not understood.
ftp: bind: Address already in use
I was looking through the RFCs, and "use 1" means IPv4. But I don't use anything else.
The FTP server doesn't list any errors, just my FTP client. And I don't know why it uses IPv6.
When I enter sudo netstat -lptu I get this:
tcp6 0 0 [::]:ftp [::]:* LISTEN 4972/docker-proxy
Can anybody tell me where this comes from? I haven't set up any IPv6 stuff.
Thanks for any help :)
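One plausible direction, assuming the culprit is docker-proxy exposing port 21 on an IPv6 socket: bind the server explicitly to an IPv4 address, pin a passive port range, and publish that range too (e.g. -p 60000-60009:60000-60009), then connect with an IPv4-only client (ftp -4 localhost). A minimal pyftpdlib sketch where the credentials and home directory are placeholders:
from pyftpdlib.authorizers import DummyAuthorizer
from pyftpdlib.handlers import FTPHandler
from pyftpdlib.servers import FTPServer

authorizer = DummyAuthorizer()
authorizer.add_user("user", "password", "/srv/ftp", perm="elradfmw")

handler = FTPHandler
handler.authorizer = authorizer
handler.passive_ports = range(60000, 60010)  # must match the ports published by docker run

server = FTPServer(("0.0.0.0", 21), handler)  # IPv4 socket only
server.serve_forever()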

Python pxssh execute iptables not working

I'm using pxssh to establish an SSH connection to a server. The connection can be established and I can run simple commands such as ls -l.
What I need now is to create iptable entries via that SSH connection.
I've tried the following:
from pexpect import pxssh

s = pxssh.pxssh()
print(ip)
if not s.login(ip, username, auth_password):
    Log("SSH session failed on login")
    Log(str(s))
else:
    Log("SSH session login successful")
    cmd = 'sudo iptables -I INPUT -p udp -m udp --dport 53 -j ACCEPT;'
    s.sendline(cmd)
    s.prompt()
    print(s.before)
    s.logout()
which runs without error, but when connecting to the server, no iptables entry has been created!?
Try modifying your Python script like this:
cmd = '/usr/bin/sudo /usr/sbin/iptables -I INPUT -p udp -m udp --dport 53 -j ACCEPT'
s.sendline(cmd)
You should change the sudo and iptables paths if they differ on your OS.
Also print what you send (and s.before after s.prompt()) to see what is actually executed on the server via the Python script, just to be sure the correct iptables command runs.
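If sudo prompts for a password on the server, sendline() will just sit at that prompt and the rule never gets added. A hedged sketch of catching that case with pexpect's expect(); the password pattern and timeout are assumptions:
s.sendline(cmd)
# sudo may ask for a password before the shell prompt returns
i = s.expect([r'(?i)password.*:', s.PROMPT], timeout=10)
if i == 0:
    s.sendline(auth_password)  # answer sudo's password prompt
    s.prompt()
print(s.before)  # output of the iptables command, for verification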

/var/run/docker.sock: permission denied while running docker within Python CGI script

I am trying to run a Python CGI script inside which I need to run a docker image.
I am using Docker version 1.6.2. The user is "www-data", which has been added to the docker group:
www-data : www-data sudo docker
On the machine, as www-data, I am able to execute docker commands:
www-data#mytest:~/html/new$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
I am getting the following error while running the docker image from the Python CGI script:
fatal msg="Get http:///var/run/docker.sock/v1.18/images/json: dial unix /var/run/docker.sock: permission denied. Are you trying to connect to a TLS-enabled daemon without TLS?"
Is there anything I am missing here?
Permission denied on a default install indicates you are trying to access the socket from a user other than root or one that is not in the docker group. You should be able to run:
sudo usermod -a -G docker $username
on your desired $username to add them to the group. You'll need to log out and back in for this to take effect (use newgrp docker in an existing shell, or restart the daemon if an external service is accessing docker, like your CGI scripts).
Note that doing this effectively gives that user full root access on your host, so do this with care.
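To confirm the CGI process actually picked up the group membership, a quick diagnostic sketch (purely illustrative, printed as a plain-text CGI response):
#!/usr/bin/env python
import grp
import os
import subprocess

print("Content-Type: text/plain")
print("")
print("uid: %s, groups: %s" % (os.getuid(), os.getgroups()))
print("docker gid: %s" % grp.getgrnam("docker").gr_gid)  # should appear in the groups above
proc = subprocess.Popen(["docker", "ps"], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = proc.communicate()
print(out or err)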

Permission denied on git repository with Fabric

I'm writing a fab script to do a git pull on a remote server, but I get Permission denied (publickey,keyboard-interactive) when Fabric runs the command.
If I ssh to the server and then do the pull, it works. (I've already set up the keys on the server, so it doesn't ask for passphrases, etc.)
Here's my fabric task:
import fabric.api as fab
from fabric import colors

def update():
    '''
    update workers code
    '''
    with fab.cd('~/myrepo'):
        # pull changes
        print colors.cyan('Pulling changes...')
        fab.run('git pull origin master')
How do I get it to work with Fabric?
Edit: My server is a Google Compute instance, and it provides a gcutil tool to ssh to the instance. This is the command it runs to connect to the server:
ssh -o UserKnownHostsFile=/dev/null -o CheckHostIP=no -o StrictHostKeyChecking=no -i /Users/John/.ssh/google_compute_engine -A -p 22 John@123.456.789.101
The script is able to connect to the server AFAICT (it's able to run commands on the server like cd and supervisor and git status); it's just git pull that fails.
You need to edit your fabfile like this in order to enable the SSH agent forwarding option:
from fabric.api import *
from fabric import colors

env.hosts = ['123.456.789.101']
env.user = 'John'
env.key_filename = '/Users/John/.ssh/google_compute_engine'
env.forward_agent = True

def update():
    '''
    update workers code
    '''
    with cd('~/myrepo'):
        # pull changes
        print colors.cyan('Pulling changes...')
        run('git pull origin master')
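Note that env.forward_agent = True only helps if the key is loaded into your local ssh-agent first (for example via ssh-add /Users/John/.ssh/google_compute_engine); git pull on the server then authenticates through the forwarded agent instead of looking for a key there.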
