I've just started using Docker and got an error.
I use PyCharm on macOS. In my project I cloned a GitHub repo (a simple LogisticRegression from sklearn) that includes a Dockerfile.
I expected that all I needed was
docker build . -t servername
But I got this error:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Where should I run the Docker daemon?
Thank you for your help!
You need to install and run Docker first. Here is the link for Docker Desktop for macOS.
Alternatively, run the build with sudo: sudo docker build . -t servername
On Linux, you can also try sudo systemctl enable docker.service && sudo systemctl enable containerd.service
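Once Docker Desktop is running (the whale icon appears in the macOS menu bar), you can confirm the daemon is reachable before building, for example:
docker version   # shows both client and server versions when the daemon is up
docker info      # prints the same "Cannot connect" error if the daemon is not running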
I have a React/Flask app running within a Docker container. There is no issue with me building the project using docker-compose and running the app itself in the container. Where I am running into issues is a particular API route that is supposed to fetch user profiles from the DB, encrypt the values in a text file, and return it to the frontend for download. The encryption script is written in C, though the API route is written in Python. When I try to encrypt through the app running in Docker, I am given the following error message:
OSError: [Errno 8] Exec format error: './app/Crypto/encrypt.exe'
I know the following command works in the CLI if invoked outside of the Docker container (still invoked at the same directory level as it would be in the app):
./app/Crypto/encrypt.exe -k ./app/Crypto/secretkey -i ./profile.txt -o ./profile.encr
I am using the following Python code to invoke the command in the API route which is where it fails:
proc = subprocess.Popen(f"./app/Crypto/encrypt.exe -k ./app/Crypto/secretkey -i ./{profile.profile_name}.txt -o ./{profile.profile_name}.encr", shell=True)
The Dockerfile for my backend is pasted below:
FROM python:3
WORKDIR /app
ENV FLASK_APP=main.py
COPY ./requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "main.py"]
I have tried to tackle the issue a few different ways:
By default my Docker container was built with an ARM64 architecture. I read that the OSError was caused by the architecture not being AMD64, so I rebuilt the container as AMD64, and it gave me the same error.
In case this was a permissions error, I ran chmod +rwx on encrypt.exe in the Dockerfile when building the container. I'm pretty sure it has nothing to do with permissions, especially as it still failed.
I added a shebang (#!/bin/bash) to the script as well as to the Dockerfile.
At the end of the day I know I am failing when using subprocess.Popen, so I am positive I must be missing something when invoking the script using Python, or there is a configuration in my Docker container that is preventing this from working. My machine is a MacBook Pro, and the script runs fine on it directly. The script has also been used successfully on a machine running Linux.
Any chances folks have seen similar issues arise with this error? Thanks in advance!
So thanks to David Maze's comment on this, I followed the lead that maybe the executable I wanted to run needed to be built within the Dockerfile. I destroyed my original container, added a step to run the Makefile that generates the executable, and finally ran the program through the app running in the Docker container. This did the trick! I'm not sure exactly why the executable needed to be compiled within the Docker container (presumably the binary built on macOS targets a different OS/architecture than the Linux container), but running make on the Makefile within the Dockerfile solved it.
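For reference, a minimal sketch of what that change might look like in the backend Dockerfile from the question (the Makefile location under ./app/Crypto is an assumption based on the paths above):
FROM python:3
WORKDIR /app
ENV FLASK_APP=main.py
COPY ./requirements.txt .
RUN pip install -r requirements.txt
COPY . .
# Build encrypt.exe inside the image so the binary matches the container's OS/architecture.
# The full python:3 image already ships with gcc and make; a slim base would need build-essential installed first.
RUN make -C ./app/Crypto
CMD ["python", "main.py"]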
I'm trying to host a Python HTTP server, and this works fine:
FROM python:latest
COPY index.html /
CMD python3 -m http.server
But when trying it with a Python virtualenv, I'm facing issues:
FROM python:3
COPY index.html .
RUN pip install virtualenv
RUN virtualenv --python="python3" .virtualenv
RUN .virtualenv/bin/pip install boto3
RUN python3 -m http.server &
CMD ["/bin/bash"]
Please help.
I just want to point out that using virtualenv within a Docker container might be redundant.
With Docker, you are encapsulating one specific application along with its dependencies (libraries, frameworks, boto3 in your case), as opposed to your local machine, where you might have several applications being developed, each with different dependencies.
Thus, I would recommend reconsidering whether you need virtualenv inside Docker at all.
Second, this line:
RUN python3 -m http.server &
is a problem: RUN executes at image build time, so the server would not be running in the final container anyway, and putting it in the background is bad practice here. You want to start the server with CMD in the foreground, so it runs as the first process (PID 1), receives all Docker signals, and starts automatically when the container starts:
CMD ["virtualenv/bin/python3", "-m", "http.server"]
I use PyCharm Professional to develop in Python.
I am able to connect the PyCharm run/debug GUI to a local Docker image's Python interpreter and run local code using the Docker container's Python environment and libraries, e.g. via the procedure described here: Configuring Remote Interpreter via Docker.
I am also able to SSH into AWS instances with PyCharm and connect to remote Python interpreters there, which maps files from my local project into a remote directory and again allows me to run a GUI stepping through remote code as though it were local, e.g. via the procedure described here: Configuring Remote Interpreters via SSH.
I have a Docker image on Docker Hub that I would like to deploy to an AWS instance, and then connect my local PyCharm GUI to the environment inside the remote container, but I can't see how to do this. Can anybody help me?
[EDIT] One proposal that has been made is to put an SSH server inside the remote container and connect my local PyCharm directly to the container via SSH, for example as described here. It's one solution, but it has been extensively criticised elsewhere - is there a more canonical solution?
After doing a bit of research, I came to the conclusion that installing an SSH server inside my container and logging in via the PyCharm SSH remote interpreter was the best thing to do, despite concerns raised elsewhere. I managed it as follows.
The Dockerfile below will create an image with an SSH server inside that you can SSH into. It also has anaconda/python, so it's possible to run a notebook server inside and connect to that in the usual way for Jupyter debugging. Note that it uses a plain-text password (screencast); you should definitely enable key login if you're using this for anything sensitive.
It will take local libraries and install them into your package library inside the container, and optionally you can pull repos from GitHub as well (register for an API token on GitHub if you want to do this, so you don't need to enter a plain-text password). It also requires you to create a plain-text requirements.txt containing all of the other packages you will need pip-installed.
Then run the build command to create the image, and run to create a container from that image. In the Dockerfile we expose SSH through the container's port 22, so let's hook that up to an unused port on the AWS instance - this is the port we will SSH through. Also add another port pairing if you want to use Jupyter from your local machine at any point:
docker build -t your_image_name .
don't miss the . at the end - it's important!
docker run -d -p 5001:22 -p 8889:8889 --name=your_container_name your_image_name
N.B. you will need to bash into the container (docker exec -it xxxxxxxxxx bash) and turn Jupyter on with jupyter notebook.
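For example (the flags here are a reasonable guess for running Jupyter as root inside a container and reaching it through the 8889 mapping above):
docker exec -it your_container_name bash
# then, inside the container:
jupyter notebook --ip=0.0.0.0 --port=8889 --allow-root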
Dockerfile:
FROM python:3.6
RUN apt-get update && apt-get install -y openssh-server
# Load an ssh server. Change root username and password. By default in debian, password login is prohibited,
# go into the file that controls this and make a change to allow password login
RUN mkdir /var/run/sshd
RUN echo 'root:screencast' | chpasswd
RUN sed -i 's/#PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
RUN /etc/init.d/ssh restart
# Install git, so we can pull in some repos
RUN apt-get update && apt-get upgrade -y && apt-get install -y git
# SSH login fix. Otherwise user is kicked off after login
RUN sed 's#session\s*required\s*pam_loginuid.so#session optional pam_loginuid.so#g' -i /etc/pam.d/sshd
ENV NOTVISIBLE "in users profile"
RUN echo "export VISIBLE=now" >> /etc/profile
# Install the requirements and the libraries we need (from a requirements.txt file)
COPY requirements.txt /tmp/
RUN python3 -m pip install -r /tmp/requirements.txt
# These are local libraries, add them (assuming a setup.py)
ADD your_libs_directory /your_libs_directory
RUN python3 -m pip install /your_libs_directory
RUN python3 your_libs_directory/setup.py install
# Adding git repos (optional - assuming a setup.py)
RUN git clone https://git_user_name:git_API_token@github.com/YourGit/git_repo.git
RUN python3 -m pip install /git_repo
RUN python3 git_repo/setup.py install
# Cleanup
RUN apt-get update && apt-get upgrade -y && apt-get autoremove -y
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
I installed Docker and tested it with hello-world just to make sure it's fine. Docker is set to Windows Containers. Then I used the following:
docker run -it gcr.io/tensorflow/tensorflow:latest-devel
I got this error:
C:\Users\pubud>docker run -it gcr.io/tensorflow/tensorflow:latest-devel
Unable to find image 'gcr.io/tensorflow/tensorflow:latest-devel' locally
docker: Error response from daemon: manifest for gcr.io/tensorflow/tensorflow:latest-devel not found.
See 'docker run --help'.
Any help?
I am following this link for setup. I am using Docker for Windows since I am using Windows 10 Pro.
You don't need the gcr.io/ prefix; just use tensorflow/tensorflow in your docker run command.
docker run -it -p 8888:8888 tensorflow/tensorflow
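You can check that the image resolves from Docker Hub before running it. Note that the tensorflow/tensorflow images are Linux images, so Docker for Windows needs to be switched from Windows Containers to Linux Containers for this to work:
docker pull tensorflow/tensorflow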
I am new to Docker and trying to run multiple Python processes in a single Docker container.
Though it's not recommended, it should work, as suggested here: https://docs.docker.com/engine/admin/multi-service_container/
My Dockerfile :
FROM custom_image
MAINTAINER Shubham
RUN apt-get update -y
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
ENTRYPOINT ["/bin/bash"]
CMD ["start.sh"]
start.sh :
nohup python flask-app.py &
nohup python sink.py &
nohup python faceConsumer.py &
nohup python classifierConsumer.py &
nohup python demo.py &
echo lastLine
Run command:
docker run --runtime=nvidia -p 5000:5000 out_image
The same shell script works when I run it from a terminal directly.
I tried without nohup; it didn't work.
I also tried using Python's subprocess to start the other processes; that didn't work either.
Is it possible to run multiple processes without supervisord or docker-compose?
Update: I'm not getting any error; only "lastLine" is printed and then the Docker container exits.
The Docker docs have examples of how to do this. If you're using Python, then supervisord is a good option.
FROM ubuntu:latest
RUN apt-get update && apt-get install -y supervisor
RUN mkdir -p /var/log/supervisor
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
COPY my_first_process my_first_process
COPY my_second_process my_second_process
CMD ["/usr/bin/supervisord"]
The advantage of this over running a bunch of background processes is that you get better job control, and processes that exit prematurely will be restarted.
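The Dockerfile above expects a supervisord.conf next to it; a minimal sketch adapted to your case might look like this (program names and script paths are placeholders based on your start.sh and the /app WORKDIR in your Dockerfile):
[supervisord]
nodaemon=true

[program:flask-app]
command=python /app/flask-app.py

[program:sink]
command=python /app/sink.py

[program:demo]
command=python /app/demo.py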
Your problem is putting everything in the background. Your container starts, executes all the commands, and then exits when the CMD process finishes; the background processes are still running, but Docker does not know that, since it only watches the main process.
You could try running everything else in the background but then running
python demo.py
as the last line, in the foreground. This would keep the container alive, assuming demo.py does not exit.
You can also run the container in detached mode, or redirect nohup's output to a log file explicitly; because Docker runs the command attached to a socket rather than a terminal, nohup's default redirection to nohup.out does not happen.
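A minimal sketch of start.sh along those lines (script names are taken from the question; the key point is that the last process runs in the foreground so the container stays alive):
#!/bin/bash
# background the helper processes, redirecting their output to log files explicitly
nohup python flask-app.py > flask-app.log 2>&1 &
nohup python sink.py > sink.log 2>&1 &
nohup python faceConsumer.py > faceConsumer.log 2>&1 &
nohup python classifierConsumer.py > classifierConsumer.log 2>&1 &
# run the last process in the foreground so the script (and the container) stays alive until it exits
python demo.py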