I am trying to use LibreOffice in my Django app to convert a .docx file to a PDF using Python's subprocess module.
I have included LibreOffice in my Dockerfile:
Dockerfile:
FROM python:3.8-alpine
ENV PYTHONUNBUFFERED 1
COPY ./requirements.txt /requirements.txt
COPY ./behavioursolutiondjango /behavioursolutiondjango
COPY ./scripts /scripts
WORKDIR /behavioursolutiondjango
EXPOSE 8000
RUN python -m venv /py && \
/py/bin/pip install --upgrade pip && \
apk add --update python3-dev \
xmlsec xmlsec-dev \
gcc \
libc-dev \
libreoffice \
libffi-dev && \
apk add --update --no-cache postgresql-client && \
apk add --update --no-cache --virtual .tmp-deps \
build-base postgresql-dev musl-dev linux-headers && \
/py/bin/pip install -r /requirements.txt && \
apk del .tmp-deps && \
adduser --disabled-password --no-create-home app && \
mkdir -p /vol/web/static && \
mkdir -p /vol/web/media && \
chown -R app:app /vol && \
chmod -R 755 /vol && \
chmod -R +x /scripts
ENV PATH="/scripts:/py/bin:$PATH"
USER app
CMD ["run.sh"]
And I run the following to do the conversion:
subprocess.call(["soffice", "--headless", "--convert-to", "pdf", new_cert.cert.path])
But I am running into the following error:
LibreOffice 7.2 - Fatal Error: The application cannot be started.
User installation could not be completed.
I have spent hours on this and cannot figure out what I'm missing.
I would be more than happy to use something other than LibreOffice, but I cannot find anything else that works.
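For what it's worth, "User installation could not be completed" usually means the soffice process cannot write its user profile directory (the `app` user in the Dockerfile is created with `--no-create-home`, so `$HOME` is not writable). A hedged sketch of the common workaround, pointing the profile at a writable location via `-env:UserInstallation` (the profile path here is illustrative):

```python
import subprocess

def build_soffice_cmd(docx_path, profile_dir, out_dir="."):
    # "-env:UserInstallation" tells LibreOffice where to create its user
    # profile; giving it a writable directory avoids the fatal
    # "User installation could not be completed" error.
    return [
        "soffice", "--headless",
        f"-env:UserInstallation=file://{profile_dir}",
        "--convert-to", "pdf", "--outdir", out_dir, docx_path,
    ]

def convert_to_pdf(docx_path, profile_dir="/tmp/lo_profile", out_dir="."):
    # Thin wrapper around the original subprocess.call invocation.
    return subprocess.call(build_soffice_cmd(docx_path, profile_dir, out_dir))
```

Alternatively, creating a home directory for the container user (dropping `--no-create-home`) should have the same effect.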
In my Dockerfile, I was previously using FROM python:3.9-alpine, on top of which librdkafka 1.9.2 was built, and this was successful. But today, with the same Dockerfile, the build failed with the error below:
#error "confluent-kafka-python requires librdkafka v2.0.2 or later. Install the latest version of librdkafka from the Confluent repositories, see http://docs.confluent.io/current/installation.html".
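This version gate comes from confluent-kafka's build, which compares against librdkafka's hex-encoded version constant (`0xMMmmrrpp`, with `0xff` in the low byte for releases). A small sketch of how that number decodes, useful for checking what version apk actually installed:

```python
def decode_rdkafka_version(v: int) -> str:
    # librdkafka encodes its version as 0xMMmmrrpp:
    # major, minor, revision, pre-release (0xff = final release).
    major = (v >> 24) & 0xFF
    minor = (v >> 16) & 0xFF
    rev = (v >> 8) & 0xFF
    return f"{major}.{minor}.{rev}"
```

So the required v2.0.2 is `0x020002ff`, while Alpine v3.9's repositories ship a much older librdkafka.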
When I searched on the internet, alpine:edge seemed to have the newest version of the librdkafka package, so I changed the Dockerfile to FROM python:3.9-alpine:edge. But on building, this threw an error:
Step 1/41 : FROM python:3.9-alpine:edge
build 25-Jan-2023 10:25:20 invalid reference format
An error occurred when executing task
I am new to Docker and I used https://www.docker.com/blog/how-to-use-the-alpine-docker-official-image/ for the format. Please help me with this.
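The "invalid reference format" error is because an image reference allows only one `:tag` segment: `python:3.9-alpine` is the image `python` with tag `3.9-alpine`, so `python:3.9-alpine:edge` has two tags and cannot be parsed (`edge` is a tag of the separate `alpine` image, not of `python`). A deliberately simplified sketch of the rule (the real grammar also allows registries with ports and `@digest` suffixes):

```python
import re

# Simplified: name[:tag], where neither part may contain a colon.
_REF = re.compile(r"^[\w.\-/]+(:[\w.\-]+)?$")

def is_valid_reference(ref: str) -> bool:
    # Rejects references with more than one ":tag" segment.
    return bool(_REF.match(ref))
```

To get a newer librdkafka, the usual routes are pinning the edge apk repository inside the Dockerfile, or basing the image on `alpine:edge` and installing Python via apk.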
This is my Dockerfile currently:
FROM python:3.9-alpine:edge
RUN adduser -D pythonwebapi
WORKDIR /home/pythonwebapi
COPY requirements.txt requirements.txt
COPY logger_config.py logger_config.py
# COPY kong.ini kong.ini
# COPY iot.ini iot.ini
# COPY project.ini project.ini
# COPY eom.ini eom.ini
# COPY notify.ini notify.ini
RUN echo 'http://dl-3.alpinelinux.org/alpine/v3.9/main' >> /etc/apk/repositories
RUN apk update \
&& apk upgrade \
&& apk add --no-cache build-base \
autoconf \
bash \
bison \
boost-dev \
cmake \
flex \
zlib-dev
RUN apk add make gcc g++
RUN apk add libffi-dev
RUN apk update && apk --no-cache add librdkafka-dev
RUN apk add postgresql-dev gcc python3-dev musl-dev
RUN pip install --upgrade pip && pip install -r requirements.txt && pip install gunicorn
RUN apk del gcc g++ make
RUN pip install --no-cache-dir six pytest numpy cython
RUN pip install --no-cache-dir pandas
RUN pip install --no-cache-dir confluent-kafka
ARG ARROW_VERSION=3.0.0
ARG ARROW_SHA1=c1fed962cddfab1966a0e03461376ebb28cf17d3
ARG ARROW_BUILD_TYPE=release
ENV ARROW_HOME=/usr/local \
PARQUET_HOME=/usr/local
#Download and build apache-arrow
RUN mkdir -p /arrow \
&& wget -q https://github.com/apache/arrow/archive/apache-arrow-${ARROW_VERSION}.tar.gz -O /tmp/apache-arrow.tar.gz \
&& echo "${ARROW_SHA1} *apache-arrow.tar.gz" | sha1sum /tmp/apache-arrow.tar.gz \
&& tar -xvf /tmp/apache-arrow.tar.gz -C /arrow --strip-components 1 \
&& mkdir -p /arrow/cpp/build \
&& cd /arrow/cpp/build \
&& cmake -DCMAKE_BUILD_TYPE=$ARROW_BUILD_TYPE \
-DOPENSSL_ROOT_DIR=/usr/local/ssl \
-DCMAKE_INSTALL_LIBDIR=lib \
-DCMAKE_INSTALL_PREFIX=$ARROW_HOME \
-DARROW_WITH_BZ2=ON \
-DARROW_WITH_ZLIB=ON \
-DARROW_WITH_ZSTD=ON \
-DARROW_WITH_LZ4=ON \
-DARROW_WITH_SNAPPY=ON \
-DARROW_PARQUET=ON \
-DARROW_PYTHON=ON \
-DARROW_PLASMA=ON \
-DARROW_BUILD_TESTS=OFF \
.. \
&& make -j$(nproc) \
&& make install \
&& cd /arrow/python \
&& python setup.py build_ext --build-type=$ARROW_BUILD_TYPE --with-parquet \
&& python setup.py install \
&& rm -rf /arrow /tmp/apache-arrow.tar.gz
COPY app app
COPY init_app.py ./
ENV FLASK_APP init_app.py
RUN chown -R pythonwebapi:pythonwebapi ./
RUN chown -R 777 ./
USER pythonwebapi
EXPOSE 8000 7000
ENTRYPOINT ["gunicorn","--timeout", "7000","init_app:app","-k","uvicorn.workers.UvicornWorker","-b","0.0.0.0"]
I am trying to build a Python application which requires the confluent-kafka package, but while building in Bamboo I got the error below:
fatal error: librdkafka/rdkafka.h: No such file or directory
   23 | #include <librdkafka/rdkafka.h>
      |          ^~~~~~~~~~~~~~~~~~~~~~
My Dockerfile is as follows:
FROM python:3.9-alpine
RUN adduser -D pythonwebapi
WORKDIR /home/pythonwebapi
COPY requirements.txt requirements.txt
COPY logger_config.py logger_config.py
RUN echo 'http://dl-3.alpinelinux.org/alpine/v3.9/main' >> /etc/apk/repositories
RUN apk update \
&& apk upgrade \
&& apk add --no-cache build-base \
autoconf \
bash \
bison \
boost-dev \
cmake \
flex \
# libressl-dev \
zlib-dev
RUN apk add make gcc g++
RUN apk add libffi-dev
RUN apk update && apk --no-cache add librdkafka-dev
RUN apk add postgresql-dev gcc python3-dev musl-dev
RUN pip install --upgrade pip && pip install -r requirements.txt && pip install gunicorn
RUN apk del gcc g++ make
RUN pip install --no-cache-dir six pytest numpy cython
RUN pip install --no-cache-dir pandas
RUN pip install --no-cache-dir confluent-kafka
ARG ARROW_VERSION=3.0.0
ARG ARROW_SHA1=c1fed962cddfab1966a0e03461376ebb28cf17d3
ARG ARROW_BUILD_TYPE=release
ENV ARROW_HOME=/usr/local \
PARQUET_HOME=/usr/local
#Download and build apache-arrow
RUN mkdir -p /arrow \
&& wget -q https://github.com/apache/arrow/archive/apache-arrow-${ARROW_VERSION}.tar.gz -O /tmp/apache-arrow.tar.gz \
&& echo "${ARROW_SHA1} *apache-arrow.tar.gz" | sha1sum /tmp/apache-arrow.tar.gz \
&& tar -xvf /tmp/apache-arrow.tar.gz -C /arrow --strip-components 1 \
&& mkdir -p /arrow/cpp/build \
&& cd /arrow/cpp/build \
&& cmake -DCMAKE_BUILD_TYPE=$ARROW_BUILD_TYPE \
-DOPENSSL_ROOT_DIR=/usr/local/ssl \
-DCMAKE_INSTALL_LIBDIR=lib \
-DCMAKE_INSTALL_PREFIX=$ARROW_HOME \
-DARROW_WITH_BZ2=ON \
-DARROW_WITH_ZLIB=ON \
-DARROW_WITH_ZSTD=ON \
-DARROW_WITH_LZ4=ON \
-DARROW_WITH_SNAPPY=ON \
-DARROW_PARQUET=ON \
-DARROW_PYTHON=ON \
-DARROW_PLASMA=ON \
-DARROW_BUILD_TESTS=OFF \
.. \
&& make -j$(nproc) \
&& make install \
&& cd /arrow/python \
&& python setup.py build_ext --build-type=$ARROW_BUILD_TYPE --with-parquet \
&& python setup.py install \
&& rm -rf /arrow /tmp/apache-arrow.tar.gz
COPY app app
COPY init_app.py ./
ENV FLASK_APP init_app.py
RUN chown -R pythonwebapi:pythonwebapi ./
RUN chown -R 777 ./
USER pythonwebapi
EXPOSE 8000 7000
ENTRYPOINT ["gunicorn","--timeout", "7000","init_app:app","-k","uvicorn.workers.UvicornWorker","-b","0.0.0.0"]
I am unable to gauge why the error occurs, since librdkafka is already installed. My requirement is to use an Alpine image. Can anyone please help me with this?
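One way to narrow this down: the header `librdkafka/rdkafka.h` from `librdkafka-dev` is only needed while pip compiles the C extension, so it must be installed before any `pip install` that builds confluent-kafka, and any `apk del` of dev packages must come after. As a quick runtime sanity check (it only confirms the shared library, not the build-time header, is visible to the loader), using just the standard library:

```python
import ctypes.util

def librdkafka_visible() -> bool:
    # True if the dynamic loader can locate librdkafka on its search
    # path; run inside the built image to verify the runtime library.
    return ctypes.util.find_library("rdkafka") is not None
```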
A strange issue with permissions occurred when pushing to GitHub. I have a test job which runs tests with coverage and then pushes the results to Codecov on every push and pull request. However, this scenario only works as the root user.
If run as the digitalshop user, it throws an error:
Couldn't use data file '/digital-shop-app/.coverage': unable to open database file
My question is: how do I run coverage in a Docker container so it won't throw this error? My guess is that it's a permissions issue.
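The guess is plausible: coverage.py stores its data in a SQLite file, resolved from the `COVERAGE_FILE` environment variable and otherwise written as `.coverage` in the working directory, which must be writable by the user running the tests. A small sketch (paths illustrative) of where the file would land and whether that location is writable:

```python
import os

def coverage_data_path(default_dir="."):
    # coverage.py honours COVERAGE_FILE; otherwise it writes ".coverage"
    # into the working directory.
    return os.environ.get("COVERAGE_FILE",
                          os.path.join(default_dir, ".coverage"))

def data_dir_writable(path):
    # The *directory* holding the data file must be writable, or SQLite
    # fails with "unable to open database file".
    return os.access(os.path.dirname(path) or ".", os.W_OK)
```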
docker-compose.yml:
version: '3.9'
services:
test:
build: .
command: >
sh -c "
python manage.py wait_for_db &&
coverage run --source='.' manage.py test mainapp.tests &&
coverage report &&
coverage xml
"
volumes:
- ./digital-shop-app:/digital-shop-app
env_file: .env
depends_on:
- db
db:
image: postgres:13-alpine
environment:
- POSTGRES_DB=${DB_NAME}
- POSTGRES_USER=${DB_USER}
- POSTGRES_PASSWORD=${DB_PASS}
Dockerfile:
FROM python:3.9-alpine3.13
ENV PYTHONUNBUFFERED 1
COPY ./requirements.txt /requirements.txt
COPY ./digital-shop-app /digital-shop-app
COPY ./scripts /scripts
WORKDIR /digital-shop-app
RUN python -m venv /py && \
/py/bin/pip install --upgrade pip && \
apk add --no-cache bash && \
apk add --update --no-cache postgresql-client && \
apk add --update --no-cache --virtual .tmp-deps \
build-base jpeg-dev postgresql-dev musl-dev linux-headers \
zlib-dev libffi-dev openssl-dev python3-dev cargo && \
apk add --update --no-cache libjpeg && \
/py/bin/pip install -r /requirements.txt && \
apk del .tmp-deps && \
adduser --disabled-password --no-create-home digitalshop && \
chown -R digitalshop:digitalshop /py/lib/python3.9/site-packages && \
chmod -R +x /scripts
ENV PATH="/scripts:/py/bin:/py/lib:$PATH"
USER digitalshop
CMD ["run.sh"]
So I ended up creating another Dockerfile called Dockerfile.test with pretty much the same configuration, except for the non-admin user creation. Here's the final variant:
Running code as the root user is not recommended, so please read the UPDATE section.
Dockerfile.test:
FROM python:3.9-alpine3.13
ENV PYTHONUNBUFFERED 1
COPY ./requirements.txt /requirements.txt
COPY ./digital-shop-app /digital-shop-app
WORKDIR /digital-shop-app
RUN python -m venv /py && \
/py/bin/pip install --upgrade pip && \
apk add --no-cache bash curl gnupg coreutils && \
apk add --update --no-cache postgresql-client libjpeg && \
apk add --update --no-cache --virtual .tmp-deps \
build-base jpeg-dev postgresql-dev musl-dev linux-headers \
zlib-dev libffi-dev openssl-dev python3-dev cargo && \
/py/bin/pip install -r /requirements.txt && \
apk del .tmp-deps
ENV PATH="/py/bin:/py/lib:$PATH"
docker-compose.yml:
version: '3.9'
services:
test:
build:
context: .
dockerfile: Dockerfile.test
command: >
sh -c "
python manage.py wait_for_db &&
coverage run --source='.' manage.py test mainapp.tests &&
coverage report &&
coverage xml
"
volumes:
- ./digital-shop-app:/digital-shop-app
env_file: .env
depends_on:
- db
I don't know whether this is good practice. If not, please tell me how to do it correctly.
UPDATE:
Thanks to @β.εηοιτ.βε for giving me food for thought.
After some local debugging I found out that coverage needs the user to own the directory where the .coverage file is located. So I created a subdirectory named cov/ inside the project folder and made the digitalshop user its owner, including everything inside. Finally, I specified the path to the .coverage file by setting the env variable COVERAGE_FILE=/digital-shop-app/cov/.coverage, where digital-shop-app is the project root folder, and specified the same path for the coverage.xml report in docker-compose.yml. Here's the code:
docker-compose.yml (added the -o flag to the coverage xml command):
version: '3.9'
services:
test:
build:
context: .
command: >
sh -c "
python manage.py wait_for_db &&
coverage run --source='.' manage.py test mainapp.tests &&
coverage xml -o /digital-shop-app/cov/coverage.xml
"
env_file: .env
depends_on:
- db
db:
image: postgres:13-alpine
environment:
- POSTGRES_DB=${DB_NAME}
- POSTGRES_USER=${DB_USER}
- POSTGRES_PASSWORD=${DB_PASS}
Dockerfile:
FROM python:3.9-alpine3.13
ENV PYTHONUNBUFFERED 1
COPY ./requirements.txt /requirements.txt
COPY ./digital-shop-app /digital-shop-app
COPY ./scripts /scripts
WORKDIR /digital-shop-app
RUN python -m venv /py && \
/py/bin/pip install --upgrade pip && \
apk add --no-cache bash && \
apk add --update --no-cache postgresql-client && \
apk add --update --no-cache --virtual .tmp-deps \
build-base jpeg-dev postgresql-dev musl-dev linux-headers \
zlib-dev libffi-dev openssl-dev python3-dev cargo && \
apk add --update --no-cache libjpeg && \
/py/bin/pip install -r /requirements.txt && \
apk del .tmp-deps && \
adduser --disabled-password --no-create-home digitalshop && \
chown -R digitalshop:digitalshop /py/lib/python3.9/site-packages && \
chmod -R +x /scripts && \
# New code here
mkdir -p /digital-shop-app/cov && \
chown -R digitalshop:digitalshop /digital-shop-app/cov
ENV PATH="/scripts:/py/bin:/py/lib:$PATH"
USER digitalshop
CMD ["run.sh"]
I'm trying to build a Docker image, but it returns an error:
DNS lookup error
Dockerfile:
FROM python:3.7-alpine
LABEL maintainer="r.ofc@hotmail.com"
ENV PROJECT_ROOT /app
WORKDIR $PROJECT_ROOT
RUN apk update \
&& apk add mariadb-dev \
gcc\
python3-dev \
pango-dev \
cairo-dev \
libtool \
linux-headers \
musl-dev \
libffi-dev \
openssl-dev \
jpeg-dev \
zlib-dev
RUN pip install --upgrade pip
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
COPY . .
CMD python manage.py runserver 0.0.0.0:8000
I'm running Kubernetes locally using minikube.
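A DNS lookup error during `apk update` generally points at the build environment rather than the Dockerfile: with minikube, builds may run against minikube's Docker daemon, whose DNS configuration can differ from the host's. A small sketch for checking name resolution from whichever environment runs the build (the mirror hostname is illustrative):

```python
import socket

def can_resolve(host="dl-cdn.alpinelinux.org"):
    # True if the current environment can resolve `host`; run this where
    # the image is actually built to test the daemon's DNS setup.
    try:
        socket.gethostbyname(host)
        return True
    except socket.gaierror:
        return False
```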
I got this error when I ran the command:
sudo docker-compose up
Docker File:
FROM alpine
ARG AWS_RDS_USER
ARG AWS_RDS_PASSWORD
ARG AWS_RDS_HOST
ARG AWS_RDS_DATABASE
ARG LOCALE_SERVICE_URL
ARG CRYPTO_KEY
ENV APP_DIR=/app
ENV APP_ENV=production
ENV DATABASE_CONNECTION_STRING=mysql://${AWS_RDS_USER}:${AWS_RDS_PASSWORD}@${AWS_RDS_HOST}/${AWS_RDS_DATABASE}
ENV LOCALE_SERVICE_URL=$LOCALE_SERVICE_URL
ENV CRYPTO_KEY=$CRYPTO_KEY
COPY build/requirements.txt build/app.ini ${APP_DIR}/
COPY build/nginx.conf /etc/nginx/nginx.conf
COPY api ${APP_DIR}/api
RUN apk add --no-cache curl python pkgconfig python-dev openssl-dev libffi-dev musl-dev make gcc
RUN curl -sS https://bootstrap.pypa.io/get-pip.py | python
RUN apk update && \
apk add --virtual .build-deps autoconf gcc make g++ python-dev && \
apk add nginx uwsgi uwsgi-python py2-pip py-mysqldb && \
chown -R nginx:nginx ${APP_DIR} && \
chmod 777 /run/ -R && \
chmod 777 /root/ -R && \
pip2 install --upgrade pip && \
pip2 install -r ${APP_DIR}/requirements.txt && \
apk del .build-deps && \
rm -fR tmp/* && \
pw_migrate migrate --database=$DATABASE_CONNECTION_STRING --directory=$APP_DIR/api/migrations -v
EXPOSE 80
CMD nginx && uwsgi --ini ${APP_DIR}/app.ini
As a solution, I tried installing the packages below:
1) the gcc package
2) libffi packages
3) pip openssl packages
But the error is still not resolved. Any help would be appreciated.
Try the solution suggested here:
This is because you need a working compiler; the easiest way around this is to install the build-base package like so:
apk add --no-cache --virtual .pynacl_deps build-base python3-dev libffi-dev
This will install the various tools required to compile pynacl, and pip install pynacl will now succeed.
Note that the --virtual flag is optional, but it makes it easy to trim the image: you can run apk del .pynacl_deps later in your Dockerfile once the packages are no longer needed, reducing the overall size of the image.