How to add mysqlclient to a Poetry environment - python

I'm creating a project which needs to make a connection from Python running in a docker container to a MySQL database running in another container. Currently, my docker-compose file looks like this:
version: "3"
services:
  login:
    build:
      context: ./services/login
      dockerfile: docker/Dockerfile
    ports:
      - "80:80"
    # Need to remove this volume - this is only for dev work
    volumes:
      - ./services/login/app:/app
    # Need to remove this command - this is only for dev work
    command: /start-reload.sh
  db_users:
    image: mysql
    volumes:
      - ./data/mysql/users_data:/var/lib/mysql
      - ./databases/users:/docker-entrypoint-initdb.d/:ro
    restart: always
    ports:
      - 3306:3306
    # Remove 'expose' below for prod
    expose:
      - 3306
    environment:
      MYSQL_ROOT_PASSWORD: password
      MYSQL_DATABASE: users
      MYSQL_USER: user
      MYSQL_PASSWORD: password
And my Dockerfile for the login service looks like this:
# Note: this needs to be run from parent service directory
FROM tiangolo/uvicorn-gunicorn-fastapi:python3.8
# Install Poetry
RUN curl -sSL https://raw.githubusercontent.com/python-poetry/poetry/master/get-poetry.py | POETRY_HOME=/opt/poetry python && \
    cd /usr/local/bin && \
    ln -s /opt/poetry/bin/poetry && \
    poetry config virtualenvs.create false
# Copy using poetry.lock* in case it doesn't exist yet
COPY ./app/pyproject.toml ./app/poetry.lock* /app/
RUN poetry install --no-root --no-dev
COPY ./app /app
I am trying to connect my login service to db_users, and want to make use of mysqlclient, but when I run poetry add mysqlclient, I get an error which includes the following lines:
/bin/sh: mysql_config: command not found
/bin/sh: mariadb_config: command not found
/bin/sh: mysql_config: command not found
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/private/var/folders/33/5yy7bny964bb0f3zggd1b4440000gn/T/pip-req-build-lak6lqu7/setup.py", line 15, in <module>
    metadata, options = get_config()
  File "/private/var/folders/33/5yy7bny964bb0f3zggd1b4440000gn/T/pip-req-build-lak6lqu7/setup_posix.py", line 70, in get_config
    libs = mysql_config("libs")
  File "/private/var/folders/33/5yy7bny964bb0f3zggd1b4440000gn/T/pip-req-build-lak6lqu7/setup_posix.py", line 31, in mysql_config
    raise OSError("{} not found".format(_mysql_config_path))
OSError: mysql_config not found
mysql_config --version
mariadb_config --version
mysql_config --libs
----------------------------------------
ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.
I'm assuming this has something to do with the fact that mysqlclient needs the mysql-connector-c library, but I'm not sure how to go about getting this through Poetry.
I was looking at following this tutorial, but since I'm running MySQL in Docker rather than locally, I'm not sure how to translate those steps to my Docker setup.
So essentially, my question is two-fold:
How do I add mysqlclient to my pyproject.toml file?
How do I get this working in my docker env?
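The error above can be reproduced and checked directly. As a minimal diagnostic sketch (my own, not from the question), this mirrors what mysqlclient's build does when it shells out to find compiler flags:

```python
import shutil

# mysqlclient's setup shells out to mysql_config (or mariadb_config) to
# discover compiler/linker flags; if neither is on PATH, the build fails
# with the "mysql_config: command not found" errors shown above.
for tool in ("mysql_config", "mariadb_config"):
    path = shutil.which(tool)
    print(f"{tool}: {path if path else 'not found'}")
```

Running this inside the build container shows whether the MySQL client development package is actually installed.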

I was forgetting that my dev environment is also in Docker, so I didn't really need to care about the local Poetry environment.
With that said, I edited the Dockerfile to look like the below:
# Note: this needs to be run from parent service directory
FROM tiangolo/uvicorn-gunicorn-fastapi:python3.8
RUN apt-get update && apt-get install -y default-libmysqlclient-dev
# Install Poetry
RUN curl -sSL https://raw.githubusercontent.com/python-poetry/poetry/master/get-poetry.py | POETRY_HOME=/opt/poetry python && \
    cd /usr/local/bin && \
    ln -s /opt/poetry/bin/poetry && \
    poetry config virtualenvs.create false
# Copy using poetry.lock* in case it doesn't exist yet
COPY ./app/pyproject.toml ./app/poetry.lock* /app/
RUN poetry install --no-root --no-dev
COPY ./app /app
Which now has everything working as expected.
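For the first half of the question: running poetry add mysqlclient writes the dependency into pyproject.toml itself. The resulting entry looks roughly like this (the version constraints are illustrative, not from the original thread):

```toml
[tool.poetry.dependencies]
python = "^3.8"
mysqlclient = "^2.0"
```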

Related

Got ModuleNotFoundError while running app from docker-compose. How can I solve this?

I got the following error while running from docker-compose, but it works fine when I run docker run. Can somebody help me debug this?
Error:
  File "/home/desktop/.local/bin/docker-compose", line 5, in <module>
    from compose.cli.main import main
  File "/usr/lib/python3.10/site-packages/compose/cli/main.py", line 19, in <module>
    from ..config import ConfigurationError
  File "/usr/lib/python3.10/site-packages/compose/config/__init__.py", line 3, in <module>
    from .config import ConfigurationError
  File "/usr/lib/python3.10/site-packages/compose/config/config.py", line 48, in <module>
    from .validation import match_named_volumes
  File "/usr/lib/python3.10/site-packages/compose/config/validation.py", line 8, in <module>
    from jsonschema import Draft4Validator
  File "/usr/lib/python3.10/site-packages/jsonschema/__init__.py", line 21, in <module>
    from jsonschema._types import TypeChecker
  File "/usr/lib/python3.10/site-packages/jsonschema/_types.py", line 3, in <module>
    from pyrsistent import pmap
ModuleNotFoundError: No module named 'pyrsistent'
My Dockerfile:
FROM python:3.9-alpine
ENV PYTHONUNBUFFERED=1
RUN apk update \
    && apk add --no-cache --virtual .build-deps
RUN pip install --upgrade pip
ENV APP_DIR /home/myapp
WORKDIR ${APP_DIR}
ADD requirements.txt ${APP_DIR}/
RUN pip install -r ${APP_DIR}/requirements.txt
COPY . .
EXPOSE 8000
ENTRYPOINT sh -c "python manage.py runserver 0.0.0.0:8000"
my docker compose file:
version: "3.9"
services:
  web:
    build:
      context: .
    volumes:
      - .:/home/myapp
    ports:
      - "8000:8000"
    command: python manage.py runserver 0.0.0.0:8000
    container_name: django_myapp
    restart: always
    env_file: .env
When I run docker-compose build I get the above error. I have tried adding pyrsistent to requirements.txt but the error is still the same. How can I solve this?
This is a common Python error traceback; you need some basic knowledge of Python to understand it, but I'll try to explain it briefly here.
The error starts with /home/desktop/.local/bin/docker-compose which means it comes from docker-compose on the host machine, not from inside Docker. The call stack indicates a compose -> jsonschema -> pyrsistent call path and pyrsistent is not found (ModuleNotFoundError), which means there's a missing dependency on your host machine.
Try pip3 install docker-compose --user and pip3 install pyrsistent --user:
/home/desktop/.local/bin/docker-compose indicates your compose was installed to your home directory instead of the system path, so use --user to try a local installation.
If everything works fine, pip3 install docker-compose --user will resolve the whole dependency tree and install pyrsistent automatically.
If it doesn't work, try the second command to manually fix the pyrsistent package.
If it still fails, try pip3 install --force-reinstall --user pyrsistent to reinstall the pyrsistent package.
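Before reinstalling, you can confirm the diagnosis by asking the interpreter that runs docker-compose which modules it can see. A small sketch (a hypothetical diagnostic of my own, using the module names from the traceback):

```python
import importlib.util

def module_available(name: str) -> bool:
    """Return True if `name` is importable by this interpreter."""
    return importlib.util.find_spec(name) is not None

# The traceback fails along compose -> jsonschema -> pyrsistent; run this
# with the same python3 that owns ~/.local/bin/docker-compose.
for dep in ("jsonschema", "pyrsistent"):
    print(dep, "available:", module_available(dep))
```

If pyrsistent comes back unavailable here, the --user reinstall above is the right fix.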

Docker compose - python: can't open file - "No such file or directory" on Windows

I'm a beginner at working with Docker, especially Docker Compose. I am working on a Windows machine (Windows 8.1 Single Language), so I am using Docker Toolbox.
All of my files are on the Windows machine at the path E:\xyz\docker.
Following is my docker-compose.yml:
version: '3'
services:
  tweet_collector:
    build: tweet_collector/
    volumes:
      - ./tweet_collector/:/app
  etl_job:
    build: etl_job/
    volumes:
      - ./etl_job/:/app2
  mongodb:
    image: mongo
    ports:
      - "27021:27017"
  postgresdb:
    image: postgres
    environment:
      - POSTGRES_USER=admin
      - POSTGRES_PASSWORD=admin
      - POSTGRES_DB=twitter_database
    ports:
      - "5554:5432"
  slack_bot:
    build: slack_bot/
    volumes:
      - ./slack_bot/:/app3
And my Dockerfile 1:
FROM python:3.6-slim
WORKDIR /app
ADD . /app
RUN pip install --trusted-host pypi.python.org -r requirements.txt
CMD ["python","get_tweets_for_mongo.py"]
my Dockerfile 2:
FROM python:3.6-slim
WORKDIR /app3
ADD . /app3
RUN pip install --trusted-host pypi.python.org -r requirements.txt
CMD ["python", "slack_bot.py"]
And my Dockerfile 3:
FROM python:3.6-slim
WORKDIR /app2
ADD . /app2
RUN pip install --trusted-host pypi.python.org -r requirements.txt
CMD ["python", "scheduler.py"]
I am running docker-compose from E:\xyz\docker on my Windows machine, but I am getting the error below:
slack_bot_1 | python: can't open file 'slack_bot.py': [Errno 2] No such file or directory
data-pipeline-twitter-master_slack_bot_1 exited with code 2
tweet_collector_1 | python: can't open file 'get_tweets_for_mongo.py': [Errno 2] No such file or directory
data-pipeline-twitter-master_tweet_collector_1 exited with code 2
etl_job_1 | python: can't open file 'scheduler.py': [Errno 2] No such file or directory
data-pipeline-twitter-master_etl_job_1 exited with code 2
postgresdb_1 | The files belonging to this database system will be owned by user "postgres".
I am not sure why I am getting this error; any suggestions on how to fix it would be greatly appreciated. Thank you.

How to run ansible inventory script with python3

I'm running a Docker container with Alpine, and running an Ansible script to get a dynamic inventory from AWS. It works great with Python 2, but changing it to Python 3 is causing me issues; I'm getting warnings and Ansible is unable to parse the inventory.
With Python 2 I was able to run the inventory script this way: ./ec2.py
Now with Python 3, I'm getting this error: env: can't execute 'python': No such file or directory
[WARNING]: * Failed to parse ci/ec2.py with script
plugin: Inventory script (ci/ec2.py) had an execution
error: env: can't execute 'python': No such file or directory
[WARNING]: * Failed to parse ci/ec2.py with ini plugin:
ci/ec2.py:3: Error parsing host definition ''''': No
closing quotation
[WARNING]: Unable to parse ci/ec2.py as an inventory
source
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that
the implicit localhost does not match 'all'
Python3
apk --update --no-cache add python3 py3-setuptools
pip3 install --upgrade pip
pip3 install awscli ansible boto
chmod 755 ec2.py
ansible-playbook provisioning/ec2New.yml -i ec2.py --private-key ssh-key.pem -e "type_inventory=${TYPE_INVENTORY}"
ansible.cfg
[defaults]
host_key_checking = False
stdout_callback = yaml
ansible_python_interpreter = /usr/bin/python3
My old configuration with python 2
apk --update --no-cache add python py-pip
pip install --upgrade pip
pip install awscli ansible botocore boto
chmod 755 ec2.py
ansible-playbook provisioning/ec2New.yml -i ec2.py --private-key ssh-key.pem -e "type_inventory=${TYPE_INVENTORY}"
old ansible.cfg
[defaults]
host_key_checking = False
stdout_callback = yaml
I had the same issue described above. If you change the first line in your ec2.py file to:
#!/usr/bin/env python3
Then it should parse and work as expected.
One comment reported: "If I replace it I'm getting this: /usr/bin/python3: can't open file 'python': [Errno 2] No such file or directory" – Diego, Apr 10, 2020 at 3:43
I noticed that comment, and it seems python3 was substituted in the wrong place in the shebang. So, if you follow the solution above, it should work.
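The fix is a one-line change to the file header, but if you have several inventory scripts you can script it. A sketch (the helper name is my own, not from the answer):

```python
from pathlib import Path

def fix_shebang(script: str, interpreter: str = "/usr/bin/env python3") -> None:
    """Point a script's shebang at python3, replacing any existing shebang."""
    p = Path(script)
    lines = p.read_text().splitlines(keepends=True)
    shebang = f"#!{interpreter}\n"
    if lines and lines[0].startswith("#!"):
        lines[0] = shebang          # replace an existing shebang line
    else:
        lines.insert(0, shebang)    # or add one if the file has none
    p.write_text("".join(lines))

# Usage: fix_shebang("ci/ec2.py")
```

This replaces the whole first line, which avoids the partial-substitution mistake described in the comment above.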

How to fix Import Error raised by Docker Compose in a CI tool?

I had to change services: docker in my GitLab-CI.yml file to services: docker:19.03.5-dind because I was dealing with some compatibility issues, but now the GitLab CI runner is having problems importing enum for Python 2.7 within the container:
Running after script...
00:01
$ docker-compose down
Traceback (most recent call last):
  File "/usr/bin/docker-compose", line 7, in <module>
    from compose.cli.main import main
  File "/usr/lib/python2.7/site-packages/compose/cli/main.py", line 35, in <module>
    from ..project import get_image_digests
  File "/usr/lib/python2.7/site-packages/compose/project.py", line 11, in <module>
    import enum
ImportError: No module named enum
ERROR: Job failed: exit code 1
This is my GitLab-CI.yml file:
image: docker:stable

services:
  - docker:19.03.5-dind

stages:
  - build

before_script:
  - export REACT_APP_USERS_SERVICE_URL=http://127.0.0.1

compile:
  stage: build
  script:
    - apk add --no-cache py-pip python-dev libffi-dev openssl-dev gcc libc-dev make
    - pip install docker-compose
    - docker-compose up -d --build
    - docker-compose exec -T users python manage.py recreate_db
    - docker-compose exec -T users python manage.py seed_db
    - docker-compose exec -T users python manage.py test
    - docker-compose exec -T users flake8 project
  after_script:
    - docker-compose down
It's quite obvious that docker-compose, running under the container's Python 2.7, is trying to import Python's enum module and failing. What should I do to resolve this?
Edit:
I've added python3-dev to the build script so that it reads:
- apk add --no-cache py-pip python-dev python3-dev libffi-dev openssl-dev gcc libc-dev make
Surprisingly, that didn't help either; the error is the same.
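For background on why the import fails: enum entered the standard library in Python 3.4, and on Python 2.7 it only exists via the enum34 backport, so a compose installed under the container's Python 2.7 needs that backport (or a Python 3 pip). A quick check, sketched for a Python 3 interpreter:

```python
import sys
import importlib.util

# On Python 3.4+ `enum` is part of the standard library; on 2.7 it
# requires the `enum34` backport, which explains the
# "ImportError: No module named enum" in the traceback above.
has_enum = importlib.util.find_spec("enum") is not None
print(sys.version_info[:2], "has enum:", has_enum)
```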

How to maintain glibc and libmusl Python wheels in the same pip repository?

Previously we've used our internal pip repository for source distributions only. Moving forward we want to host wheels as well to accomplish two things:
serve our own code to both (local) developer machines and Alpine Docker environments
create wheels for packages that don't have Alpine wheels
Unfortunately the wheels built with different libraries share the same artifact name and the second one gets rejected by the pip repository:
docker-compose.yml
version: '3'
services:
  build-alpine:
    build: alpine
    image: build-alpine-wheels
    volumes:
      - $PWD/cython:/build
    working_dir: /build
    command: sh -c 'python setup.py bdist_wheel && twine upload --repository-url http://pypi:8080 -u admin -p admin dist/*'
  build-debian:
    build: debian
    image: build-debian-wheels
    volumes:
      - $PWD/cython-debian:/build
    working_dir: /build
    command: bash -c 'sleep 10s && python setup.py bdist_wheel && twine upload --repository-url http://pypi:8080 -u admin -p admin dist/*'
  pypi:
    image: stevearc/pypicloud:1.0.2
    volumes:
      - $PWD/pypi:/etc/pypicloud/
  alpine-test:
    image: build-alpine-wheels
    depends_on:
      - build-alpine
    command: sh -c 'while ping -c1 build-alpine &>/dev/null; do sleep 1; done; echo "build container finished" && pip install -i http://pypi:8080/pypi --trusted-host pypi cython && cython --version'
  debian-test:
    image: python:3.6
    depends_on:
      - build-debian
    command: bash -c 'while ping -c1 build-debian &>/dev/null; do sleep 1; done; echo "build container finished" && pip install -i http://pypi:8080/pypi --trusted-host pypi cython && cython --version'
alpine/Dockerfile
FROM python:3.6-alpine
RUN apk add --update --no-cache build-base
RUN pip install --upgrade pip
RUN pip install twine
debian/Dockerfile
FROM python:3.6-slim
RUN apt-get update && apt-get install -y \
      build-essential \
    && rm -rf /var/lib/apt/lists/*
RUN pip install --upgrade pip
RUN pip install twine
pypi/config.ini
[app:main]
use = egg:pypicloud
pyramid.reload_templates = False
pyramid.debug_authorization = false
pyramid.debug_notfound = false
pyramid.debug_routematch = false
pyramid.default_locale_name = en
pypi.default_read =
everyone
pypi.default_write =
everyone
pypi.storage = file
storage.dir = %(here)s/packages
db.url = sqlite:///%(here)s/db.sqlite
auth.admins =
admin
user.admin = $6$rounds=535000$sFuRqMc5PbRccW1J$OBCsn8szlBwr4yPP243JPqomapgInRCUavv/p/UErt7I5FG4O6IGSHkH6H7ZPlrMXO1I8p5LYCQQxthgWZtxe1
# For beaker
session.encrypt_key = s0ETvuGG9Z8c6lK23Asxse4QyuVCsI2/NvGiNvvYl8E=
session.validate_key = fJvHQieaa0g3XsdgMF5ypE4pUf2tPpkbjueLQAAHN/k=
session.secure = False
session.invalidate_corrupt = true
###
# wsgi server configuration
###
[uwsgi]
paste = config:%p
paste-logger = %p
master = true
processes = 20
reload-mercy = 15
worker-reload-mercy = 15
max-requests = 1000
enable-threads = true
http = 0.0.0.0:8080
virtualenv = /env
###
# logging configuration
# http://docs.pylonsproject.org/projects/pyramid/en/latest/narr/logging.html
###
[loggers]
keys = root, botocore, pypicloud
[handlers]
keys = console
[formatters]
keys = generic
[logger_root]
level = INFO
handlers = console
[logger_pypicloud]
level = DEBUG
qualname = pypicloud
handlers =
[logger_botocore]
level = WARN
qualname = botocore
handlers =
[handler_console]
class = StreamHandler
args = (sys.stderr,)
level = NOTSET
formatter = generic
[formatter_generic]
format = %(levelname)s %(asctime)s [%(name)s] %(message)s
Setup and execution
git clone https://github.com/cython/cython
git clone https://github.com/cython/cython cython-debian
docker-compose build
docker-compose up
At the end I would like both test containers to be able to execute cython --version, which works for the Alpine container:
alpine-test_1 | Collecting cython
alpine-test_1 | Downloading http://pypi:8080/api/package/cython/Cython-0.29.12-cp36-cp36m-linux_x86_64.whl (5.0MB)
alpine-test_1 | Installing collected packages: cython
alpine-test_1 | Successfully installed cython-0.29.12
alpine-test_1 | Cython version 0.29.12
But doesn't work for the Debian container:
debian-test_1 | Downloading http://pypi:8080/api/package/cython/Cython-0.29.12-cp36-cp36m-linux_x86_64.whl (5.0MB)
debian-test_1 | Installing collected packages: cython
debian-test_1 | Successfully installed cython-0.29.12
debian-test_1 | Traceback (most recent call last):
debian-test_1 | File "/usr/local/bin/cython", line 6, in <module>
debian-test_1 | from Cython.Compiler.Main import setuptools_main
debian-test_1 | File "/usr/local/lib/python3.6/site-packages/Cython/Compiler/Main.py", line 28, in <module>
debian-test_1 | from .Scanning import PyrexScanner, FileSourceDescriptor
debian-test_1 | ImportError: libc.musl-x86_64.so.1: cannot open shared object file: No such file or directory
I find it particularly curious that both environments try to pull this wheel, because there are all sorts of packages that don't work with Alpine (e.g. pandas), in which case pip goes straight for the source distribution. I suppose I must be doing something wrong in that regard as well.
So now I'm wondering how I can create these wheels such that for each version of the software package two different wheels can live in the pip repository and have pip automatically download and install the correct one.
There is currently no support for musl in the manylinux standard: your options are to always build from source, or target a different, glibc-based platform.
Update: PEP 656 now defines a musllinux platform tag: https://www.python.org/dev/peps/pep-0656/
I would suggest not using Alpine at all—you can get images almost as small with multi-stage builds (https://pythonspeed.com/articles/smaller-python-docker-images/), and musl doesn't just mean lack of binary wheels. There's a whole bunch of production bugs people have had due to musl (Python crashes, timestamp formatting problems—see https://pythonspeed.com/articles/base-image-python-docker-images/ for references).
Most of the known musl bugs have been fixed, but it's different enough that it doesn't seem worth the production risk (not to mention your very expensive developer time!) just to get an image that's ~100MB smaller.
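The ambiguity is visible in the wheel filename itself: both installs above downloaded Cython-0.29.12-cp36-cp36m-linux_x86_64.whl, whose linux_x86_64 platform tag says nothing about glibc versus musl. A small sketch (helper name is my own) that pulls the tag out of a wheel name:

```python
def wheel_platform_tag(filename: str) -> str:
    """Extract the platform tag from a wheel filename.

    Wheel names follow {dist}-{version}(-{build})?-{python}-{abi}-{platform}.whl,
    so the platform tag is the last dash-separated field.
    """
    stem = filename[: -len(".whl")]
    return stem.split("-")[-1]

# The generic tag that caused the clash in the question:
print(wheel_platform_tag("Cython-0.29.12-cp36-cp36m-linux_x86_64.whl"))
# → linux_x86_64

# With PEP 656, musl wheels carry an unambiguous tag instead:
print(wheel_platform_tag("Cython-0.29.12-cp36-cp36m-musllinux_1_1_x86_64.whl"))
# → musllinux_1_1_x86_64
```

With distinct manylinux/musllinux tags, both wheels can coexist in the repository and pip selects the one matching the installing platform.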
