I am trying to deploy a falcon app with Docker. Here is my Dockerfile:
FROM python:2-onbuild
# Set the working directory to /app
WORKDIR /app
# Copy the current directory contents into the container at /app
ADD . /app
RUN pip install -r ./requirements.txt
RUN pip install uwsgi
EXPOSE 8000
CMD ["uwsgi", "--http”, " :8000" , "—wsgi-file”, "falconapp.wsgi"]
However, I keep getting an error saying:
/bin/sh: 1: [uwsgi,: not found
I've tried running uwsgi in the local directory and it works well with the command:
uwsgi --http :8000 --wsgi-file falconapp.wsgi
Why is Docker not working in this case?
Here is the log, I'm pretty sure uwsgi is installed:
Step 5/7 : RUN pip install uwsgi
---> Running in 2df7c8e018a9
Collecting uwsgi
Downloading uwsgi-2.0.17.tar.gz (798kB)
Building wheels for collected packages: uwsgi
Running setup.py bdist_wheel for uwsgi: started
Running setup.py bdist_wheel for uwsgi: finished with status 'done'
Stored in directory: /root/.cache/pip/wheels/94/c9/63/e7aef2e745bb1231490847ee3785e3d0b5f274e1f1653f89c5
Successfully built uwsgi
Installing collected packages: uwsgi
Successfully installed uwsgi-2.0.17
Removing intermediate container 2df7c8e018a9
---> cb71648306bd
Step 6/7 : EXPOSE 8000
---> Running in 40daaa0d5efa
Removing intermediate container 40daaa0d5efa
---> 39c75687a157
Step 7/7 : CMD ["uwsgi", "--http”, " :8000" , "—wsgi-file”, "falconapp.wsgi"]
---> Running in 67e6eb29f3e0
Removing intermediate container 67e6eb29f3e0
---> f33181adbcfa
Successfully built f33181adbcfa
Successfully tagged image_heatmap:latest
dan#D-MacBook-Pro:~/Documents/falconapp_api$ docker run -p 8000:80 small_runner
/bin/sh: 1: [uwsgi,: not found
Very often you have to write the full path to the executable. If you go into your container and run the command whereis uwsgi, it will tell you it is at /usr/local/bin/uwsgi, so your CMD should be in the same form:
CMD ["/usr/local/bin/uwsgi", "--http", ":8000", "--wsgi-file", "falconapp.wsgi"]
I see some incorrect syntax in the CMD; please use this:
CMD ["uwsgi", "--http", ":8000", "--wsgi-file", "falconapp.wsgi"]
Some of the double quotes are typographic ("curly") quotes, and wsgi-file is preceded by an em dash instead of --. Because of those characters Docker cannot parse the CMD as a JSON array, so it falls back to the shell form, and /bin/sh tries to execute the literal text [uwsgi, ..., which is exactly the error you see.
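To make the failure mode concrete: Docker only uses the exec form when the CMD value parses as a JSON array; otherwise it silently falls back to the shell form, which is what happened here. A small sketch illustrating this with Python's json module (the \u escapes stand for the typographic quote and em dash characters from the question):

```python
import json

# CMD with plain ASCII quotes: valid JSON, Docker uses exec form
good = '["uwsgi", "--http", ":8000", "--wsgi-file", "falconapp.wsgi"]'
print(json.loads(good))

# CMD as written in the question, with curly quotes (\u201d) and an
# em dash (\u2014): not valid JSON, so Docker treats the whole line as
# a shell command and /bin/sh chokes on the literal "[uwsgi,"
bad = '["uwsgi", "--http\u201d, " :8000" , "\u2014wsgi-file\u201d, "falconapp.wsgi"]'
try:
    json.loads(bad)
except json.JSONDecodeError as e:
    print("not valid JSON:", e)
```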
Related
Hi guys, I need some help.
I created a custom Docker image and pushed it to Docker Hub, but when I run it in CI/CD it gives me this error:
exec /usr/bin/sh: exec format error
Where:
Dockerfile
FROM ubuntu:20.04
RUN apt-get update
RUN apt-get install -y software-properties-common
RUN apt-get install -y python3-pip
RUN pip3 install robotframework
.gitlab-ci.yml
robot-framework:
  image: rethkevin/rf:v1
  allow_failure: true
  script:
    - ls
    - pip3 --version
Output
Running with gitlab-runner 15.1.0 (76984217)
on runner zgjy8gPC
Preparing the "docker" executor
Using Docker executor with image rethkevin/rf:v1 ...
Pulling docker image rethkevin/rf:v1 ...
Using docker image sha256:d2db066f04bd0c04f69db1622cd73b2fc2e78a5d95a68445618fe54b87f1d31f for rethkevin/rf:v1 with digest rethkevin/rf#sha256:58a500afcbd75ba477aa3076955967cebf66e2f69d4a5c1cca23d69f6775bf6a ...
Preparing environment
00:01
Running on runner-zgjy8gpc-project-1049-concurrent-0 via 1c8189df1d47...
Getting source from Git repository
00:01
Fetching changes with git depth set to 20...
Reinitialized existing Git repository in /builds/reth.bagares/test-rf/.git/
Checking out 339458a3 as main...
Skipping Git submodules setup
Executing "step_script" stage of the job script
00:00
Using docker image sha256:d2db066f04bd0c04f69db1622cd73b2fc2e78a5d95a68445618fe54b87f1d31f for rethkevin/rf:v1 with digest rethkevin/rf#sha256:58a500afcbd75ba477aa3076955967cebf66e2f69d4a5c1cca23d69f6775bf6a ...
exec /usr/bin/sh: exec format error
Cleaning up project directory and file based variables
00:01
ERROR: Job failed: exit code 1
any thoughts on this to resolve the error?
The problem is that you built this image for arm64/v8, but your runner is using a different architecture.
If you run:
docker image inspect rethkevin/rf:v1
You will see this in the output:
...
"Architecture": "arm64",
"Variant": "v8",
"Os": "linux",
...
Try building and pushing your image from your GitLab CI runner so the architecture of the image will match your runner's architecture.
Alternatively, you can build for multiple architectures using docker buildx, for example:
docker buildx build --platform linux/amd64,linux/arm64 -t rethkevin/rf:v1 --push .
Alternatively still, you could run a GitLab runner on ARM architecture so that it can run the image for the architecture you built it for.
In my case, I was building it using buildx:
docker buildx build --platform linux/amd64 -f ./Dockerfile -t image .
However, the problem was in AWS Lambda.
I'm creating a project which needs to make a connection from Python running in a docker container to a MySQL database running in another container. Currently, my docker-compose file looks like this:
version: "3"
services:
  login:
    build:
      context: ./services/login
      dockerfile: docker/Dockerfile
    ports:
      - "80:80"
    # Need to remove this volume - this is only for dev work
    volumes:
      - ./services/login/app:/app
    # Need to remove this command - this is only for dev work
    command: /start-reload.sh
  db_users:
    image: mysql
    volumes:
      - ./data/mysql/users_data:/var/lib/mysql
      - ./databases/users:/docker-entrypoint-initdb.d/:ro
    restart: always
    ports:
      - 3306:3306
    # Remove 'expose' below for prod
    expose:
      - 3306
    environment:
      MYSQL_ROOT_PASSWORD: password
      MYSQL_DATABASE: users
      MYSQL_USER: user
      MYSQL_PASSWORD: password
And my Dockerfile for the login service looks like this:
# Note: this needs to be run from parent service directory
FROM tiangolo/uvicorn-gunicorn-fastapi:python3.8
# Install Poetry
RUN curl -sSL https://raw.githubusercontent.com/python-poetry/poetry/master/get-poetry.py | POETRY_HOME=/opt/poetry python && \
cd /usr/local/bin && \
ln -s /opt/poetry/bin/poetry && \
poetry config virtualenvs.create false
# Copy using poetry.lock* in case it doesn't exist yet
COPY ./app/pyproject.toml ./app/poetry.lock* /app/
RUN poetry install --no-root --no-dev
COPY ./app /app
I am trying to connect my login service to db_users, and want to make use of mysqlclient, but when I run poetry add mysqlclient, I get an error which includes the following lines:
/bin/sh: mysql_config: command not found
/bin/sh: mariadb_config: command not found
/bin/sh: mysql_config: command not found
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/private/var/folders/33/5yy7bny964bb0f3zggd1b4440000gn/T/pip-req-build-lak6lqu7/setup.py", line 15, in <module>
metadata, options = get_config()
File "/private/var/folders/33/5yy7bny964bb0f3zggd1b4440000gn/T/pip-req-build-lak6lqu7/setup_posix.py", line 70, in get_config
libs = mysql_config("libs")
File "/private/var/folders/33/5yy7bny964bb0f3zggd1b4440000gn/T/pip-req-build-lak6lqu7/setup_posix.py", line 31, in mysql_config
raise OSError("{} not found".format(_mysql_config_path))
OSError: mysql_config not found
mysql_config --version
mariadb_config --version
mysql_config --libs
----------------------------------------
ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.
I'm assuming this has something to do with the fact that I need the mysql-connector-c library, but I'm not sure how to go about getting it in Poetry.
I was looking at following this tutorial, but since I'm not running MySQL locally but rather in Docker, I'm not sure how to translate those steps to work in Docker.
So essentially, my question is two-fold:
How do I add mysqlclient to my pyproject.toml file?
How do I get this working in my docker env?
I was forgetting that my dev environment is also in Docker, so I didn't really need to worry about the local Poetry environment.
With that said, I edited the Dockerfile to look like the below:
# Note: this needs to be run from parent service directory
FROM tiangolo/uvicorn-gunicorn-fastapi:python3.8
RUN apt-get update && apt-get install -y default-libmysqlclient-dev
# Install Poetry
RUN curl -sSL https://raw.githubusercontent.com/python-poetry/poetry/master/get-poetry.py | POETRY_HOME=/opt/poetry python && \
cd /usr/local/bin && \
ln -s /opt/poetry/bin/poetry && \
poetry config virtualenvs.create false
# Copy using poetry.lock* in case it doesn't exist yet
COPY ./app/pyproject.toml ./app/poetry.lock* /app/
RUN poetry install --no-root --no-dev
COPY ./app /app
Which now has everything working as expected.
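For the first half of the question (getting mysqlclient into pyproject.toml): once the system library is present in the image, poetry add mysqlclient succeeds and records an entry roughly like the following (the caret version range is an assumption; it depends on what Poetry resolves at the time):

```toml
[tool.poetry.dependencies]
python = "^3.8"
mysqlclient = "^2.0"
```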
I have been trying for a long time to find a solution to the scrapyd error message: pkg_resources.DistributionNotFound: The 'idna<3,>=2.5' distribution was not found and is required by requests
What I have done:
$ docker pull ceroic/scrapyd
$ docker build -t scrapyd .
Dockerfile:
FROM ceroic/scrapyd
RUN pip install "idna==2.5"
$ docker build -t scrapyd .
Sending build context to Docker daemon 119.3kB
Step 1/2 : FROM ceroic/scrapyd
---> 868dca3c4d94
Step 2/2 : RUN pip install "idna==2.5"
---> Running in c0b6f6f73cf1
Downloading/unpacking idna==2.5
Installing collected packages: idna
Successfully installed idna
Cleaning up...
Removing intermediate container c0b6f6f73cf1
---> 849200286b7a
Successfully built 849200286b7a
Successfully tagged scrapyd:latest
I run the container:
$ docker run -d -p 6800:6800 scrapyd
Next:
scrapyd-deploy demo -p tutorial
And get error:
pkg_resources.DistributionNotFound: The 'idna<3,>=2.5' distribution was not found and is required by requests
I'm not a Docker expert, and I don't understand the logic. If idna==2.5 has been successfully installed inside the container, why does the error message require version 'idna<3,>=2.5'?
The answer is very simple, and it ended my three days of torment. When I ran
scrapyd-deploy demo -p tutorial
I was doing it not inside the created container, but outside it.
The problem was solved by:
pip uninstall idna
pip install "idna==2.5"
This had to be done on the virtual server, not in the container. I can't believe I didn't understand it right away.
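As a side note, idna==2.5 really does satisfy the requirement 'idna<3,>=2.5' declared by requests, so the version was never the problem, only which environment it was installed into. This can be checked with pkg_resources itself, the same module that raised the DistributionNotFound error:

```python
import pkg_resources

# The requirement string from the error message
req = pkg_resources.Requirement.parse("idna<3,>=2.5")

# Version strings can be tested for membership directly
print("2.5" in req)   # True: the installed version satisfies the spec
print("3.0" in req)   # False: too new
```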
What I want to do
I have been trying to follow instructions from my travis-ci environment on using-a-docker-image-from-a-repository-in-a-build.
In my case, and forgive me if I misspeak because I'm not too familiar with Docker, I want to start a Docker container with a MySQL instance that I can use during pytest.
What I've tried
.travis.yml
language: python
python:
  - "3.7"
cache:
  directories:
    - "$HOME/google-cloud-sdk/"
services:
  - docker
before_install:
  ...
install:
  ...
  - pip install -r requirements.txt
script:
  - docker pull mysql/mysql-server
  - docker run -d -p 127.0.0.1:3306:3306 mysql/mysql-server /bin/sh -c "cd /root/mysql; pip install -r requirements.txt;"
  - docker run mysql/mysql-server /bin/sh -c "ls -l /root; cd /root/mysql; pytest"
travis-ci logging
$ docker pull mysql/mysql-server
Using default tag: la[secure]
la[secure]: Pulling from mysql/mysql-server
0e690826fc6e: Pulling fs layer
0e6c49086d52: Pulling fs layer
862ba7a26325: Pulling fs layer
7731c802ed08: Pulling fs layer
7731c802ed08: Waiting
862ba7a26325: Verifying Checksum
862ba7a26325: Download complete
7731c802ed08: Verifying Checksum
7731c802ed08: Download complete
0e690826fc6e: Verifying Checksum
0e690826fc6e: Download complete
0e6c49086d52: Verifying Checksum
0e6c49086d52: Download complete
0e690826fc6e: Pull complete
0e6c49086d52: Pull complete
862ba7a26325: Pull complete
7731c802ed08: Pull complete
Digest: sha256:a82ff720911b2fd40a425fd7141f75d7c68fb9815ec3e5a5a881a8eb49677087
Status: Downloaded newer image for mysql/mysql-server:la[secure]
The command "docker pull mysql/mysql-server" exited with 0.
2.49s$ docker run -d -p 127.0.0.1:3306:3306 mysql/mysql-server /bin/sh -c "cd /root/mysql; pip install -r requirements.txt;"
bfba9cb26b8902682903d8a5576e805e86823096220e723da0df6a6a881c1ef7
The command "docker run -d -p 127.0.0.1:3306:3306 mysql/mysql-server /bin/sh -c "cd /root/mysql; pip install -r requirements.txt;"" exited with 0.
0.74s$ docker run mysql/mysql-server /bin/sh -c "ls -l /root; cd /root/mysql; py[secure]"
[Entrypoint] MySQL Docker Image 8.0.20-1.1.16
total 0
/bin/sh: line 0: cd: /root/mysql: No such file or directory
/bin/sh: py[secure]: command not found
The command "docker run mysql/mysql-server /bin/sh -c "ls -l /root; cd /root/mysql; py[secure]"" exited with 127.
So it seems that for whatever reason my use case for MySQL differs from the example provided by travis-ci. The specific issue seems to be that the directory /root/mysql does not exist, so when I try the second docker run I get No such file or directory.
To be perfectly honest, I don't know much about what is happening, so any help with dockerizing my pytests would be great! Also, if it's possible, I'm curious whether the Docker logic could be moved into a Dockerfile of some sort.
Here is my main script where I've set it up to connect to a mysql database, so the environment variables would just need to be set appropriately, which is why I thought a Dockerfile might be helpful.
main.py
elif env == "test":
    return sqlalchemy.create_engine(
        sqlalchemy.engine.url.URL(
            drivername="mysql+pymysql",
            username=os.environ.get("DB_USER"),
            password=os.environ.get("DB_PASS"),
            host=os.environ.get("DB_HOST"),
            port=3306,
            database=PRIMARY_TABLE_NAME
        ),
        pool_size=5,
        max_overflow=2,
        pool_timeout=30,
        pool_recycle=1800
    )
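On the Dockerfile question: the mysql/mysql-server image contains only the MySQL server, which is why /root/mysql does not exist inside it. The usual split is to keep MySQL as a plain service container and build a separate image for the test code. A sketch of such a Dockerfile, assuming the tests and requirements.txt live in the repository root and pytest is listed in requirements.txt (file name and layout are assumptions):

```dockerfile
# Dockerfile.test (name assumed)
FROM python:3.7
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
# DB_USER, DB_PASS, DB_HOST are read by main.py at runtime;
# supply them with `docker run -e DB_HOST=... test-image`
CMD ["pytest"]
```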
My docker file is as follows:
#Use python 3.6 image
FROM python:3.6
ENV PYTHONUNBUFFERED 1
#install required packages
RUN apt-get update
RUN apt-get install libsasl2-dev libldap2-dev libssl-dev python3-dev psmisc -y
#install a pip package
#Note: This pip package has a completely configured django project in it
RUN pip install <pip-package>
#Run a script
#Note: Here appmanage.py is a file inside the pip installed location(site-packages), but it will be accessible directly without cd to the folder
RUN appmanage.py appconfig appadd.json
#The <pip-package> installed comes with a built-in Django package, so running it with the following CMD
#Note: Here manage.py is present inside the pip package folder but it is accessible directly
CMD ["manage.py","runserver","0.0.0.0:8000"]
When I run:
sudo docker build -t test-app .
The steps in the Dockerfile up to RUN appmanage.py appconfig run successfully as expected, but after that I get the error:
The command '/bin/sh -c appmanage.py appconfig ' returned a non-zero code: 137
When I google the error, I get suggestions that the memory is not sufficient. But I have verified that the system (CentOS) has enough memory.
Additional info
The command-line output during the execution of RUN appmanage.py appconfig is:
Step 7/8 : RUN appmanage.py appconfig
---> Running in 23cffaacc81f
======================================================================================
configuring katana apps...
Please do not quit (or) kill the server manually, wait until the server closes itself...!
======================================================================================
Performing system checks...
System check identified no issues (0 silenced).
February 08, 2020 - 12:01:45
Django version 2.1.2, using settings 'katana.wui.settings'
Starting development server at http://127.0.0.1:9999/
Quit the server with CONTROL-C.
9999/tcp:
20Killed
As described, the command RUN appmanage.py appconfig appAdd.json ran successfully as expected and reported that System check identified no issues (0 silenced).
However, the command then starts a development server that never exits on its own, so the build environment eventually kills it, and the step returns exit code 137 (128 + 9, i.e. SIGKILL). The minimum change for the build to work is to update your Dockerfile like this:
...
#Run a script
#Note: Here appmanage.py is a file inside the pip installed location(site-packages), but it will be accessible directly without cd to the folder
RUN appmanage.py appconfig appAdd.json || true
...
This forcefully ignores the exit code of the previous command and carries on with the build.
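The exit code itself is informative: shells report death by signal N as 128 + N, and 137 = 128 + 9, meaning the process was killed with SIGKILL, which matches the "Killed" line in the build output. A small POSIX-only sketch demonstrating the convention:

```python
import signal
import subprocess

# Start a long-running child, then SIGKILL it, as the build did to the
# never-ending Django dev server.
proc = subprocess.Popen(["sleep", "60"])
proc.send_signal(signal.SIGKILL)
proc.wait()

# Popen reports death-by-signal as a negative return code...
print(proc.returncode)        # -9
# ...which a shell would surface as 128 + 9 = 137
print(128 + signal.SIGKILL)   # 137
```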