Python docker-py Connection Refused

I am having trouble accessing the Docker daemon from a client using docker-py in Python. I started a Docker daemon with the command
sudo docker -d & and the output was [1] 4894. Then I tried to access the daemon from Python, as root, using the code that I got from here:
from docker import Client
cli = Client(base_url='unix://var/run/docker.sock')
cli.containers()
This gave me the error:
requests.exceptions.ConnectionError: ('Connection aborted.', error(111, 'Connection refused'))
I also tried
cli = Client(base_url='tcp://127.0.0.1:4894')
but it gave me the same error.

This suggests that the /var/run/docker.sock file has incorrect permissions. As the Docker daemon is started as root, the permissions are probably too restrictive.
If you change the permissions to allow other users to access it, you should have more success (e.g. o=rwx).
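A minimal sketch of that fix, assuming the default socket path (see the next answer for why this is risky on a shared machine):
sudo chmod o=rwx /var/run/docker.sock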

The issue is indeed that /var/run/docker.sock has the incorrect permissions.
To fix it, you need to give the current user access to this file.
However, on Linux, giving o=rwx rights to /var/run/docker.sock is very dangerous, as it allows any user and service on the system to run commands as root. Indeed, access to /var/run/docker.sock implies full root access to the machine. See https://docs.docker.com/engine/security/#docker-daemon-attack-surface
A less dangerous approach is to create the group docker and add the current user to this group. See https://docs.docker.com/engine/install/linux-postinstall/#manage-docker-as-a-non-root-user
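The linked page boils down to commands along these lines (a sketch; the group change only takes effect after logging out and back in):
sudo groupadd docker
sudo usermod -aG docker $USER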
However, this approach is still potentially dangerous, as it gives the current user full root access without the protections that sudo offers (i.e., asking for the user's password from time to time and logging sudo calls).
See also What is the Docker security risk of /var/run/docker.sock?
(I unfortunately cannot comment, hence I write my comment as an answer.)

Related

Running Python code in a Docker image on a server where I don't have root access

So, I have access by ssh to a server with some GPUs where I can run some Python code. I need to do that using a Docker container; however, if I try to do anything with Docker on the server I get permission denied, as I don't have root access (and I am not in the list of sudoers). What am I missing here?
Btw, I am totally new to Docker (and quite new to Linux itself), so it might be that I am missing something fundamental.
I solved my problem. It turns out I simply had to ask the server administrator to add me to a group and everything worked.
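Given the context, the group in question is most likely the docker group, so the administrator presumably ran something like the following (username is a placeholder):
sudo usermod -aG docker username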

Fabric 2 automating deployment error when git pulling on remote server. Repository not found

I'm trying to automate my deployment with Fabric 2.
When I manually do a git pull through the command line on the remote server everything works fine.
When I try to do the same with my Fabric/Invoke script it does not allow me to pull.
It does though allow me to do git status and other commands.
The code:
# Imports
from fabric import Connection
from fabric.tasks import task
import os

# Here I pass my local passphrase:
kwargs = {'passphrase': os.environ["SSH_PASSPHRASE"]}

@task
def serverdeploy(c, branch="Staging"):
    con = Connection('myuser@myhost', connect_kwargs=kwargs)
    with con.cd("/home/user/repository/"):
        # Activate the virtual environment:
        with con.prefix("source ENV/bin/activate"):
            con.run("git pull origin {}".format(branch))
The results are:
git@bitbucket.org: Permission denied (publickey).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
Notes:
I don't even get asked for a passphrase while doing the pull.
I have tried doing the pull without activating the environment but that didn't work either.
What could possibly be the problem?
Please place con.run("git pull origin {}".format(branch)) outside the with con.prefix("source ENV/bin/activate"): block.
The pull itself has nothing to do with the interpreter or the virtual env! Try that and it should work!
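A sketch of that suggested restructuring, reusing the connection setup from the question (the pip install line is only a hypothetical example of a command that genuinely needs the virtual environment):
with con.cd("/home/user/repository/"):
    # The pull itself does not need the virtualenv:
    con.run("git pull origin {}".format(branch))
    # Commands that do need it stay inside the prefix:
    with con.prefix("source ENV/bin/activate"):
        con.run("pip install -r requirements.txt")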
The most likely issue is that the user you log in as manually has the proper ssh key set up for bitbucket.org, but the user Fabric connects as is different. You can test whether the setup is correct by running these two commands as the user that Fabric connects as:
ssh -T git@bitbucket.org
ssh -T -i /path/to/private_key git@bitbucket.org
In order to fix this issue, copy the private key to the /home/myuser/.ssh directory and add an ssh config entry for bitbucket.org to /home/myuser/.ssh/config:
Host bitbucket.org
    IdentityFile /path/to/private_key
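If the working key lives on your local machine instead, another option (an assumption about your setup, not part of the fix above) is to let Fabric forward your local ssh agent so the remote git pull can use it:
con = Connection('myuser@myhost', forward_agent=True, connect_kwargs=kwargs)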

nginx permission denied while reading upstream - even when run as root

I have a flask app running under uWSGI behind nginx.
*1 readv() failed (13: Permission denied) while reading upstream, client: 10.0.3.1, server: , request: "GET /some/path/constants.js HTTP/1.1", upstream: "uwsgi://unix:/var/uwsgi.sock:", host: "dev.myhost.com"
The permissions on the socket are okay (666, and set to the same user as nginx); in fact, even when I run nginx as root I still get this error.
The flask app/uwsgi is sending the request properly. But it's just not being read by Nginx. This is on Ubuntu Utopic Unicorn.
Any idea where the permission might be getting denied if the nginx process has full access to the socket?
As a complicating factor this server is running in a container that has Ubuntu 14.04 installed in it. And this setup used to work... but I recently upgraded the host to 14.10... I can fully understand that this could be the cause of the problem. But before I downgrade the host or upgrade the container I want to understand why.
When I run strace on a worker that's generating this error I see the call it's making is something like this:
readv(14, 0x7fffb3d16a80, 1) = -1 EACCES (Permission denied)
14 seems to be the file descriptor created by this system call
socket(PF_LOCAL, SOCK_STREAM, 0) = 14
So it can't read from a local socket that it has just created?
Okay! So the problem was, I think, related to this bug. It seems that even though apparmor wasn't configured to prevent access to sockets inside the containers it was actually doing something to prevent reading from them (though not creation...) so turning off apparmor for the container (following these instructions) worked to fix it.
The two relevant lines were:
sudo apparmor_parser -R /etc/apparmor.d/usr.bin.lxc-start
sudo ln -s /etc/apparmor.d/usr.bin.lxc-start /etc/apparmor.d/disabled/
and adding
lxc.aa_profile = unconfined
to the container's config file.
NB: These errors were not recorded in any apparmor logs.
This problem was probably introduced in kernel 3.16, because it does not reproduce on 14.04 with the 3.13 kernel. A strange apparmor bug was indeed responsible for it.
Unfortunately @aychedee's solution didn't work for me. In my case I had to add the following parameter to the docker run command to get rid of the issue:
docker run --security-opt apparmor:unconfined ...
If someone's aware what is the current state of the issue, please consider adding comment under this answer :)

different behavior in Python shell and program

I'm using subprocess.Popen to instantiate an ssh-agent, add a key and push a git repository to a remote. To do this I string them together with &&. The code I'm using is
import subprocess

subprocess.Popen("eval $(ssh-agent) && ssh-add /root/.ssh/test_rsa && git push target HEAD", shell=True)
When I run this as a .py file I am prompted for the key's password. This seems to work, as I get:
Identity added: /root/.ssh/test_rsa (/root/.ssh/test_rsa).
But when it tries to push the repository to the remote, an error occurs.
ssh: connect to host ***.***.***.*** port 22: Connection refused
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
However, if I simply run the same command in the interactive shell, it works. What causes this difference in behaviour, and what can I do to fix this?
The git server was on an AWS instance that was being started earlier in the script. There was a check to make sure it was running, but AWS seems to report an instance as running once boot has begun. This means that there is a brief window in which the instance is running but an ssh daemon doesn't exist yet. Because the script moved very quickly into trying to push, it was falling within this window and the server was refusing its connection attempt. By the time I tried anything in the interactive shell, the instance had been running long enough that it worked.
In short, AWS says instances are running before the OS has started its services.
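A minimal sketch of one way to close that window: poll port 22 until the ssh daemon actually accepts connections before attempting the push (host, port, and timeout values are assumptions):
import socket
import time

def wait_for_ssh(host, port=22, timeout=300):
    # Retry until a TCP connection to the ssh port succeeds or we give up.
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            with socket.create_connection((host, port), timeout=5):
                return True
        except OSError:
            time.sleep(5)
    return False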

How to clone a mercurial repository over an ssh connection initiated by fabric when http authorization is required?

I'm attempting to use fabric for the first time and I really like it so far, but at a certain point in my deployment script I want to clone a mercurial repository. When I get to that point I get an error:
err: abort: http authorization required
My repository requires http authorization and fabric doesn't prompt me for the user and password. I can get around this by changing my repository address from:
https://hostname/repository
to:
https://user:password#hostname/repository
But for various reasons I would prefer not to go this route.
Are there any other ways in which I could bypass this problem?
Here are four options with various security trade-offs and requiring various amounts of sys admin mojo:
With newer versions of Mercurial you could put the password in the [auth] section of the local user's .hgrc file (see the sketch after this list). The password will still be on disk in plaintext, but at least not in the URL.
Or
You could locally set up an HTTP proxy that presents as no-auth locally and does the auth for you when communicating with the remote.
Or
If you're able to alter the configuration on the hosting server, you could set it (Apache?) to not require a user/pass when accessed from localhost, and then use an SSH tunnel to make the local machine look like it's coming from localhost when it accesses the server:
ssh -L 8080:localhost:80 user@hostname # run in background and leave running
and then have fabric connect to http://localhost:8080/repository
Or
Newer versions of Mercurial support client-side certificates for authentication, so you could configure your Apache to honor those as authorization/authentication and then tweak your local hg to provide the certificate.
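For the first option, the [auth] entry in the local user's .hgrc might look roughly like this (the section key, hostname, and credentials are placeholders):
[auth]
repo.prefix = hostname/repository
repo.username = user
repo.password = password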
Depending on your fabfile, you might be able to reframe the problem. Instead of doing an hg clone on the remote system, you could do your Mercurial commands on your local system and then ship the artifact you've constructed across with Fabric.
Specifically, you could clone the Mercurial repository using Fabric's local() commands, and run an 'hg archive' command to prepare a tarball. Then you can use Fabric's put() to upload that tarball, and Fabric's run() to unpack it in the correct location.
A code snippet for the clone, pack, put might look a bit like the following:
from fabric.api import local, lcd, put, run

def task():
    # Clone the repository locally and pack it into a tarball:
    local("hg clone ssh://hg@host/repo tmpdir")
    with lcd("tmpdir"):
        local("hg archive ../repo.tgz")
    local("rm -rf tmpdir")
    # Upload the tarball (to the remote user's home directory by default)
    # and unpack it in place:
    put("repo.tgz")
    run("tar xzf repo.tgz")
