Unable to find ssl cert or key file in docker build - python

I have a Dockerfile that runs a Flask app with Gunicorn. For my purposes I need to use HTTPS, so I'm setting up SSL with OpenSSL. However, I keep running into this error:
[2020-02-24 17:01:18 +0000] [1] [INFO] Starting gunicorn 20.0.4
Traceback (most recent call last):
File "/usr/local/bin/gunicorn", line 11, in <module>
sys.exit(run())
File "/usr/local/lib/python3.6/dist-packages/gunicorn/app/wsgiapp.py", line 58, in run
WSGIApplication("%(prog)s [OPTIONS] [APP_MODULE]").run()
File "/usr/local/lib/python3.6/dist-packages/gunicorn/app/base.py", line 228, in run
super().run()
File "/usr/local/lib/python3.6/dist-packages/gunicorn/app/base.py", line 72, in run
Arbiter(self).run()
File "/usr/local/lib/python3.6/dist-packages/gunicorn/arbiter.py", line 198, in run
self.start()
File "/usr/local/lib/python3.6/dist-packages/gunicorn/arbiter.py", line 155, in start
self.LISTENERS = sock.create_sockets(self.cfg, self.log, fds)
File "/usr/local/lib/python3.6/dist-packages/gunicorn/sock.py", line 162, in create_sockets
raise ValueError('certfile "%s" does not exist' % conf.certfile)
ValueError: certfile "server.crt" does not exist
Here is my Dockerfile:
FROM ubuntu:latest
RUN apt-get update && apt-get install python3-pip -y && \
apt-get install python3-dev openssl
RUN openssl req -nodes -new -x509 -keyout server.key -out server.cert -subj "/C=US/ST=MD/L=Columbia/O=Example/OU=ExampleOU/CN=example.com/emailAddress=seanbrhn3@gmail.com"
COPY ./requirements.txt /app/requirements.txt
WORKDIR /app
RUN pip3 install -r requirements.txt
COPY . /app
ENV PORT 8080
CMD ["gunicorn", "--certfile=server.crt","--keyfile=server.key","app:app", "--config=config.py"]
All help is most appreciated!

I solved the problem: instead of creating the cert and key in the Dockerfile, I used the COPY command to take the cert and key from my local directory and put them into the image. My problem was just my lack of knowledge of Docker.
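Roughly, the relevant part of the Dockerfile ends up looking like this (a sketch, not my exact file; the filenames just have to match what gunicorn is told via --certfile/--keyfile, and the files have to end up under the WORKDIR the container runs from):
FROM ubuntu:latest
RUN apt-get update && apt-get install -y python3-pip python3-dev
# server.crt and server.key were generated locally with openssl and sit in the build context
COPY ./server.crt ./server.key /app/
COPY ./requirements.txt /app/requirements.txt
WORKDIR /app
RUN pip3 install -r requirements.txt
COPY . /app
ENV PORT 8080
CMD ["gunicorn", "--certfile=server.crt", "--keyfile=server.key", "--config=config.py", "app:app"]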

This probably won't be the case for most people, but I wanted to put this out there just in case. I had this issue and realized that I had forgotten that I was mounting a volume in my docker run command.
So even though I was downloading files in my Dockerfile (during docker build), as soon as I did docker run with the volume mounted (and the volume did not contain those downloaded files), the downloaded files were no longer visible: the mount shadows whatever the image has at that path.
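A quick way to see this effect (the image name and paths here are made up for illustration):
# files baked into the image at build time are visible without a mount
docker run --rm myimage ls /data
# but mounting a host directory over the same path shadows them,
# so only the host directory's contents are visible
docker run --rm -v "$(pwd)/empty-dir:/data" myimage ls /data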

Related

How do you set up a docker container that depends on multiple python libraries being installed?

I am trying to create a Docker container to always run mypy in the same environment. The library I want to run mypy on has multiple dependencies, so I have to install those first and have access to them while evaluating the library that was passed in. This is what it currently looks like; in this example I am only installing scipy as an external dependency, and later I would install from a regular requirements.txt file instead:
FROM ubuntu:22.04 as builder
RUN apt-get update && apt-get install -y \
bc \
gcc \
musl-dev \
python3-pip \
python3 \
python3-dev
RUN python3.10 -m pip install --no-cache-dir --no-compile scipy && \
python3.10 -m pip install --no-cache-dir --no-compile mypy
FROM ubuntu:22.04 as production
RUN apt-get update && apt-get install -y \
python3 \
COPY --from=builder /usr/local/lib/python3.10/dist-packages /usr/local/lib/python3.10/dist-packages
COPY --from=builder /usr/local/bin/mypy /usr/local/bin/mypy
WORKDIR /data
ENTRYPOINT ["python3.10", "-m", "mypy"]
I build and run my container with:
docker build -t my-package-mypy . && docker run -v $(pwd):/data my-package-mypy main.py
Where main.py is a simple one line script that only imports scipy.
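That is, the entire main.py is just:
import scipy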
This returns the following output:
main.py:1: error: Cannot find implementation or library stub for module named "scipy" [import]
main.py:1: note: See https://mypy.readthedocs.io/en/stable/running_mypy.html#missing-imports
Traceback (most recent call last):
File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/usr/local/lib/python3.10/dist-packages/mypy/__main__.py", line 37, in <module>
console_entry()
File "/usr/local/lib/python3.10/dist-packages/mypy/__main__.py", line 15, in console_entry
main()
File "mypy/main.py", line 95, in main
File "mypy/main.py", line 174, in run_build
File "mypy/build.py", line 193, in build
File "mypy/build.py", line 302, in _build
File "mypy/build.py", line 3579, in record_missing_stub_packages
PermissionError: [Errno 13] Permission denied: '.mypy_cache/missing_stubs'
Most importantly, the first line says that it cannot find the installation for scipy, even though scipy was installed alongside mypy. How can I adjust my Dockerfile to get it to work as described?

Cannot install private dependency from artifact registry inside docker build when pulling from Github

I am trying to deploy a Cloud Run application containing a private Python package.
The Cloud Run code is hosted on GitHub, and when I push code, it triggers a Cloud Build that builds the Docker image, pushes it to Container Registry, and creates a Cloud Run service from the image.
Unfortunately, in the docker build stage, the build cannot access the private Python package that is available in Artifact Registry.
I have successfully used that package in a Cloud Function in the past, so I am sure the package works. I have also given the Cloud Build that builds this Docker image the same permissions as the Cloud Builds that build functions using that package, and those work.
I have created this issue in the past here, and had possible solutions using the JSON key file of a service account with the Owner role on the project, following that tutorial from the Google Cloud documentation. But I would like to avoid using a key, as the key should not be saved on GitHub. I am sure this is a permissions issue, but I could not figure it out.
cloudbuild.yaml
steps:
- name: 'gcr.io/cloud-builders/docker'
args: [ 'build', '-t', 'gcr.io/${_PROJECT}/${_SERVICE_NAME}:$SHORT_SHA', '--network=cloudbuild', '.', '--progress=plain']
Dockerfile
FROM python:3.8.6-slim-buster
ENV APP_PATH=/usr/src/app
ENV PORT=8080
# Copy requirements.txt to the docker image and install packages
RUN apt-get update && apt-get install -y cython
RUN pip install --upgrade pip
# Set the WORKDIR to be the folder
RUN mkdir -p $APP_PATH
COPY / $APP_PATH
WORKDIR $APP_PATH
RUN pip install -r requirements.txt --no-color
RUN pip install --extra-index-url https://us-west1-python.pkg.dev/my-project/my-package/simple/ my-package==0.2.3 # This line is where the bug occurs
# Expose port
EXPOSE $PORT
# Use gunicorn as the entrypoint
CMD exec gunicorn --bind 0.0.0.0:8080 app:app
The permissions I added are:
cloudbuild default service account (project-number@cloudbuild.gserviceaccount.com): Artifact Registry Reader
service account running the cloudbuild : Artifact Registry Reader
service account running the app: Artifact Registry Reader
The cloudbuild error:
Step 10/12 : RUN pip install --extra-index-url https://us-west1-python.pkg.dev/my-project/my-package/simple/ my-package==0.2.3
---> Running in b2ead00ccdf4
Looking in indexes: https://pypi.org/simple, https://us-west1-python.pkg.dev/muse-speech-devops/gcp-utils/simple/
User for us-west1-python.pkg.dev: ERROR: Exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/pip/_internal/cli/base_command.py", line 167, in exc_logging_wrapper
status = run_func(*args)
File "/usr/local/lib/python3.8/site-packages/pip/_internal/cli/req_command.py", line 205, in wrapper
return func(self, options, args)
File "/usr/local/lib/python3.8/site-packages/pip/_internal/commands/install.py", line 340, in run
requirement_set = resolver.resolve(
File "/usr/local/lib/python3.8/site-packages/pip/_internal/resolution/resolvelib/resolver.py", line 94, in resolve
result = self._result = resolver.resolve(
File "/usr/local/lib/python3.8/site-packages/pip/_vendor/resolvelib/resolvers.py", line 481, in resolve
state = resolution.resolve(requirements, max_rounds=max_rounds)
File "/usr/local/lib/python3.8/site-packages/pip/_vendor/resolvelib/resolvers.py", line 348, in resolve
self._add_to_criteria(self.state.criteria, r, parent=None)
File "/usr/local/lib/python3.8/site-packages/pip/_vendor/resolvelib/resolvers.py", line 172, in _add_to_criteria
if not criterion.candidates:
File "/usr/local/lib/python3.8/site-packages/pip/_vendor/resolvelib/structs.py", line 151, in __bool__

“Could not run curl-config: [Errno 2] No such file or directory” when installing pycurl on Alpine Linux

I'm trying to install pycurl via:
sudo pip install pycurl
It downloaded fine, but when it runs setup.py I get the following traceback:
Downloading/unpacking pycurl
Running setup.py egg_info for package pycurl
Traceback (most recent call last):
File "<string>", line 16, in <module>
File "/tmp/pip-build-root/pycurl/setup.py", line 563, in <module>
ext = get_extension()
File "/tmp/pip-build-root/pycurl/setup.py", line 368, in get_extension
ext_config = ExtensionConfiguration()
File "/tmp/pip-build-root/pycurl/setup.py", line 65, in __init__
self.configure()
File "/tmp/pip-build-root/pycurl/setup.py", line 100, in configure_unix
raise ConfigurationError(msg)
__main__.ConfigurationError: Could not run curl-config: [Errno 2] No such file or directory
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "<string>", line 16, in <module>
File "/tmp/pip-build-root/pycurl/setup.py", line 563, in <module>
ext = get_extension()
File "/tmp/pip-build-root/pycurl/setup.py", line 368, in get_extension
ext_config = ExtensionConfiguration()
File "/tmp/pip-build-root/pycurl/setup.py", line 65, in __init__
self.configure()
File "/tmp/pip-build-root/pycurl/setup.py", line 100, in configure_unix
raise ConfigurationError(msg)
__main__.ConfigurationError: Could not run curl-config: [Errno 2] No such file or directory
Any idea why this is happening and how to get around it using Alpine Linux?
Found it. I believe this works.
# Install packages
RUN apk add --no-cache libcurl
# Needed for pycurl
ENV PYCURL_SSL_LIBRARY=openssl
# Install packages only needed for building, install and clean on a single layer
RUN apk add --no-cache --virtual .build-dependencies build-base curl-dev \
&& pip install pycurl \
&& apk del .build-dependencies
I had the same issue building a Tornado app based on the python:3.7.2-alpine3.9 image. I was able to get past this error by using the curl-dev package, as noted by pycURL's install instructions.
Under the pycURL Install header:
NOTE: You need Python and libcurl installed on your system to use or build pycurl. Some RPM distributions of curl/libcurl do not include everything necessary to build pycurl, in which case you need to install the developer specific RPM which is usually called curl-dev.
Here is the relevant part of the Dockerfile
RUN apk add --no-cache libcurl
RUN apk update \
&& apk add --virtual .build-deps \
curl-dev \
&& pip install -e ./ \
&& apk --purge del .build-deps
If you want to verify the features available through curl, I did the following:
docker exec -it <container_name> sh
apk add curl
curl --version
The output of curl --version is similar to
curl 7.64.0 (x86_64-alpine-linux-musl) libcurl/7.64.0 OpenSSL/1.1.1b zlib/1.2.11 libssh2/1.8.1 nghttp2/1.35.1
Release-Date: 2019-02-06
Protocols: dict file ftp ftps gopher http https imap imaps pop3 pop3s rtsp scp sftp smb smbs smtp smtps telnet tftp
Features: AsynchDNS IPv6 Largefile NTLM NTLM_WB SSL libz TLS-SRP HTTP2 UnixSockets HTTPS-proxy
Specifically, I was interested in AsynchDNS being present so I could use Tornado's curl_httpclient.
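For reference, switching Tornado over to the curl-backed client is a one-line configure call (a minimal sketch; it assumes pycurl imports cleanly inside the container):
from tornado.httpclient import AsyncHTTPClient

# Use the pycurl-backed client instead of the default simple_httpclient
AsyncHTTPClient.configure("tornado.curl_httpclient.CurlAsyncHTTPClient")
client = AsyncHTTPClient()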

airflow webserver -p 8080 results in OSError: [Errno 13] Permission denied

After installing the airflow package on an AWS EC2 instance, I am trying to start the Airflow webserver. It is failing with a permission denied error, and I cannot tell which file or folder it is trying to create or modify to cause this error.
[root@ip-172-31-62-1 airflow]# /usr/local/bin/airflow webserver -p 8080
[2017-06-13 04:24:35,692] {__init__.py:57} INFO - Using executor SequentialExecutor
/usr/local/lib/python2.7/site-packages/flask/exthook.py:71: ExtDeprecationWarning: Importing flask.ext.cache is deprecated, use flask_cache instead.
.format(x=modname), ExtDeprecationWarning
[2017-06-13 04:24:36,093] [4053] {models.py:167} INFO - Filling up the DagBag from /home/ec2-user/airflow/dags
Running the Gunicorn Server with:
Workers: 4 sync
Host: 0.0.0.0:8080
Timeout: 120
Logfiles: - -
=================================================================
Traceback (most recent call last):
File "/usr/local/bin/airflow", line 28, in <module>
args.func(args)
File "/usr/local/lib/python2.7/site-packages/airflow/bin/cli.py", line 791, in webserver
gunicorn_master_proc = subprocess.Popen(run_args)
File "/usr/lib64/python2.7/subprocess.py", line 711, in __init__
errread, errwrite)
File "/usr/lib64/python2.7/subprocess.py", line 1343, in _execute_child
raise child_exception
OSError: [Errno 13] Permission denied
------------------------------------
The value of run_args in the above error message is:
['gunicorn', '-w', '4', '-k', 'sync', '-t', '120', '-b', '0.0.0.0:8080', '-n', 'airflow-webserver', '-p', '/home/ec2-user/airflow/airflow-webserver.pid', '-c', 'airflow.www.gunicorn_config', '--access-logfile', '-', '--error-logfile', '-', 'airflow.www.app:cached_app()']
I had this same issue. It got resolved when I installed the whole setup in sudo mode. Please find the commands I used:
sudo apt-get update && sudo apt-get -y upgrade
sudo apt-get install python-pip
sudo -H pip install airflow
sudo airflow initdb
sudo airflow webserver -p 8080
I had this same issue and the existing answer wouldn't work for me because the user didn't have sudo permissions. What worked for me was adding bin to the path:
export PATH=$PATH:/usr/local/bin
Or in my case:
export PATH=$PATH:~/.local/bin
Then I could just run:
airflow webserver -p 8080
Instead of:
.local/bin/airflow webserver -p 8080
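If you want the PATH change to survive new shell sessions, one option (assuming a bash login shell) is to append it to your shell profile:
# persist the PATH addition, then reload the profile and start the webserver
echo 'export PATH=$PATH:~/.local/bin' >> ~/.bashrc
source ~/.bashrc
airflow webserver -p 8080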

Headless Chrome in Docker with Python: Chrome failed to start: crashed

I want to run this simple script inside a docker container:
def hi_chrome():
from xvfbwrapper import Xvfb
from splinter import Browser
vdisplay = Xvfb()
vdisplay.start()
print "spawning connector"
oBrowser = Browser('chrome')
oBrowser.visit("http://google.co.za")
assert oBrowser.title == "Google"
print "yay"
vdisplay.stop()
if __name__ == '__main__':
hi_chrome()
I've gotten the script to run in a virtual environment by doing all the pip and apt-get installs listed in my Dockerfile and just running the script. But when I try to run it inside a container I get:
Traceback (most recent call last):
File "app.py", line 19, in <module>
hi_chrome()
File "app.py", line 10, in hi_chrome
oBrowser = Browser('chrome')
File "/usr/local/lib/python2.7/dist-packages/splinter/browser.py", line 63, in Browser
return driver(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/splinter/driver/webdriver/chrome.py", line 31, in __init__
self.driver = Chrome(chrome_options=options, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/selenium/webdriver/chrome/webdriver.py", line 69, in __init__
desired_capabilities=desired_capabilities)
File "/usr/local/lib/python2.7/dist-packages/selenium/webdriver/remote/webdriver.py", line 92, in __init__
self.start_session(desired_capabilities, browser_profile)
File "/usr/local/lib/python2.7/dist-packages/selenium/webdriver/remote/webdriver.py", line 179, in start_session
response = self.execute(Command.NEW_SESSION, capabilities)
File "/usr/local/lib/python2.7/dist-packages/selenium/webdriver/remote/webdriver.py", line 236, in execute
self.error_handler.check_response(response)
File "/usr/local/lib/python2.7/dist-packages/selenium/webdriver/remote/errorhandler.py", line 192, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.WebDriverException: Message: unknown error: Chrome failed to start: crashed
(Driver info: chromedriver=2.27.440175 (9bc1d90b8bfa4dd181fbbf769a5eb5e575574320),platform=Linux 4.8.0-34-generic x86_64)
I've had similar problems when trying to run my script using other containers from Docker Hub. I've tried using Chrome instead of Chromium, and I've tried some containers I found on Docker Hub, but I keep finding broken nonsense. This should be simple.
My main suspicion is that it's a versioning thing. But it works in the venv, so that doesn't make too much sense. Or Docker just needs something fancy to get the Chrome webdriver to run.
Can someone please point out my obvious and noobish mistake?
My Dockerfile looks like
FROM ubuntu:16.04
RUN apt-get update -y && \
apt-get install -y python-pip python-dev xvfb chromium-browser && \
pip install --upgrade pip setuptools
RUN pip install chromedriver_installer
COPY ./requirements.txt /app/requirements.txt
WORKDIR /app
RUN pip install -r requirements.txt
COPY . /app
ENTRYPOINT [ "python" ]
CMD [ "app.py" ]
And requirements.txt:
splinter==0.7.5
xvfbwrapper==0.2.8
I found an image that sorta worked and then beat it into submission... The nice thing about this solution is that it doesn't need xvfbwrapper, so it's nice and simple.
app.py
def hi_chrome():
# from xvfbwrapper import Xvfb
from splinter import Browser
# vdisplay = Xvfb()
# vdisplay.start()
print "spawning connector"
oBrowser = Browser('chrome')
oBrowser.visit("http://google.co.za")
assert oBrowser.title == "Google"
print "yay"
# vdisplay.stop()
if __name__ == '__main__':
hi_chrome()
requirements:
splinter==0.7.5
Dockerfile
FROM markadams/chromium-xvfb
RUN apt-get update && apt-get install -y \
python python-pip curl unzip libgconf-2-4
ENV CHROMEDRIVER_VERSION 2.26
RUN curl -SLO "https://chromedriver.storage.googleapis.com/$CHROMEDRIVER_VERSION/chromedriver_linux64.zip" \
&& unzip "chromedriver_linux64.zip" -d /usr/local/bin \
&& rm "chromedriver_linux64.zip"
COPY requirements.txt /usr/src/app/requirements.txt
WORKDIR /usr/src/app
RUN pip install -r requirements.txt
COPY . /usr/src/app
ENTRYPOINT [ "python" ]
CMD [ "app.py" ]
