Keep postgres Docker container running in an Azure pipeline job - python

I'm rather new to Azure and currently playing around with the pipelines. My goal is to run a postgres alpine docker container in the background, so I can perform tests through my python backend.
This is my pipeline config:
trigger:
- main
pool:
  vmImage: ubuntu-latest
variables:
  POSTGRE_CONNECTION_STRING: postgresql+psycopg2://postgres:passw0rd@localhost/postgres
resources:
  containers:
  - container: postgres
    image: postgres:13.6-alpine
    trigger: true
    env:
      POSTGRES_PASSWORD: passw0rd
    ports:
    - 1433:1433
    options: --name postgres
stages:
- stage: QA
  jobs:
  - job: test
    services:
      postgres: postgres
    steps:
    - task: UsePythonVersion@0
      inputs:
        versionSpec: $(PYTHON_VERSION)
    - task: Cache@2
      inputs:
        key: '"$(PYTHON_VERSION)" | "$(Agent.OS)" | requirements.txt'
        path: $(PYTHON_VENV)
        cacheHitVar: 'PYTHON_CACHE_RESTORED'
    - task: CmdLine@2
      displayName: Wait for db to start
      inputs:
        script: |
          sleep 5
    - script: |
        python -m venv .venv
      displayName: create virtual environment
      condition: eq(variables.PYTHON_CACHE_RESTORED, 'false')
    - script: |
        source .venv/bin/activate
        python -m pip install --upgrade pip
        pip install -r requirements.txt
      displayName: pip install
      condition: eq(variables.PYTHON_CACHE_RESTORED, 'false')
    - script: |
        source .venv/bin/activate
        python -m pytest --junitxml=test-results.xml --cov=app --cov-report=xml tests
      displayName: run pytest
    - task: PublishTestResults@2
      condition: succeededOrFailed()
      inputs:
        testResultsFormat: 'JUnit'
        testResultsFiles: 'test-results.xml'
        testRunTitle: 'Publish FastAPI test results'
    - task: PublishCodeCoverageResults@1
      inputs:
        codeCoverageTool: 'Cobertura'
        summaryFileLocation: 'coverage.xml'
But the pipeline always fails at the step "Initialize Containers", giving this error:
Error response from daemon: Container <containerID> is not running. It reads as if the container just shut down because there was nothing to do, which seems plausible, but I don't know how to keep it running until my tests are done; the backend just runs pytest against the database. I also tried adding that resource as the job's container using the container property, but then the pipeline crashes at the same step, saying that the container ran for less than a second.
I'm thankful for any ideas!

I doubt that your container is stopping because "there is nothing to do": the postgres image is configured to act as a service. Your container is probably stopping because of an error.
There is certainly something to improve: you have to add the PGPORT env var to your container and set it to 1433, because that port is not the default for the postgres Docker image, so opening that port on your container like you are doing with ports does not accomplish much by itself.
Also, your trigger: true property means that you are watching the official Docker Hub repository for postgres and, in case of a new image release, running your pipeline. I don't think that makes much sense; you should remove it, just in case, although this is a marginal problem from the perspective of your question.
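For illustration, a minimal sketch of what the container resource could look like after those changes (a sketch, not a verified config: it keeps your port 1433 and tells postgres to actually listen there via PGPORT; note your connection string would then also need the port, e.g. localhost:1433, since psycopg2 otherwise assumes the default 5432):
resources:
  containers:
  - container: postgres
    image: postgres:13.6-alpine
    env:
      POSTGRES_PASSWORD: passw0rd
      # PGPORT makes the server listen on 1433 instead of the default 5432
      PGPORT: 1433
    ports:
    - 1433:1433
    options: --name postgres
Alternatively, drop PGPORT and map 5432:5432 so the image's defaults line up with your connection string.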

Related

How to setup psycopg2 in a docker container running on a droplet?

I'm trying to wrap a scraping project in a Docker container to run it on a droplet. The spider scrapes a website and then writes the data to a Postgres database. The Postgres database is already running and managed by DigitalOcean.
When I run the command locally to test, everything is fine:
docker compose up
I can see the spider writing to the database.
Then I use a GitHub Actions workflow to build and push my Docker image to a registry each time I push the code:
name: CI
# 1
# Controls when the workflow will run.
on:
  # Triggers the workflow on push events but only for the master branch
  push:
    branches: [ master ]
  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:
    inputs:
      version:
        description: 'Image version'
        required: true
# 2
env:
  REGISTRY: "registry.digitalocean.com/*****-registery"
  IMAGE_NAME: "******-scraper"
  POSTGRES_USERNAME: ${{ secrets.POSTGRES_USERNAME }}
  POSTGRES_PASSWORD: ${{ secrets.POSTGRES_PASSWORD }}
  POSTGRES_HOSTNAME: ${{ secrets.POSTGRES_HOSTNAME }}
  POSTGRES_PORT: ${{ secrets.POSTGRES_PORT }}
  POSTGRES_DATABASE: ${{ secrets.POSTGRES_DATABASE }}
  SPLASH_URL: ${{ secrets.SPLASH_URL }}
# 3
jobs:
  build-compose:
    name: Build docker-compose
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Install doctl
        uses: digitalocean/action-doctl@v2
        with:
          token: ${{ secrets.DIGITALOCEAN_ACCESS_TOKEN }}
      - name: Login to DO Container Registry with short-lived creds
        run: doctl registry login --expiry-seconds 1200
      - name: Remove all old images
        run: if [ ! -z "$(doctl registry repository list | grep "****-scraper")" ]; then doctl registry repository delete-manifest ****-scraper $(doctl registry repository list-tags ****-scraper | grep -o "sha.*") --force; else echo "No repository"; fi
      - name: Build compose
        run: docker compose -f docker-compose.yaml up -d
      - name: Push to Digital Ocean registry
        run: docker compose push
  deploy:
    name: Deploy from registry to droplet
    runs-on: ubuntu-latest
    needs: build-compose
Then I ssh root@ipv4 manually into my droplet to install Docker and Docker Compose, and I run the image from the registry with:
# Login to registry
docker login -u DO_TOKEN -p DO_TOKEN registry.digitalocean.com
# Stop running container
docker stop ****-scraper
# Remove old container
docker rm ****-scraper
# Run a new container from a new image
docker run -d --restart always --name ****-scraper registry.digitalocean.com/****-registery/****-scraper
As soon as the Python script starts on the droplet, I get the error:
psycopg2.OperationalError: could not connect to server: No such file
or directory Is the server running locally and accepting connections
on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
It seems like I'm doing something wrong, and I can't figure out how to fix this so far.
I would appreciate some help and explanations.
Thanks.
My Dockerfile:
# As Scrapy runs on Python, I run the official Python 3 Docker image.
FROM python:3.9.7-slim
# Set the working directory to /usr/src/app.
WORKDIR /usr/src/app
# Install libpq-dev for psycopg2 python package
RUN apt-get update \
&& apt-get -y install libpq-dev gcc
# Copy the file from the local host to the filesystem of the container at the working directory.
COPY requirements.txt ./
# Install Scrapy specified in requirements.txt.
RUN pip3 install --no-cache-dir -r requirements.txt
# Copy the project source code from the local host to the filesystem of the container at the working directory.
COPY . .
# For Splash
EXPOSE 8050
# Run the crawler when the container launches.
CMD [ "python3", "./****/launch_spiders.py" ]
My docker-compose.yaml:
version: "3"
services:
  splash:
    image: scrapinghub/splash
    restart: always
    command: --maxrss 2048 --max-timeout 3600 --disable-lua-sandbox --verbosity 1
    ports:
      - "8050:8050"
  launch_spiders:
    restart: always
    build: .
    volumes:
      - .:/usr/src/app
    image: registry.digitalocean.com/****-registery/****-scraper
    depends_on:
      - splash
Try installing the prebuilt binary package psycopg2-binary instead of psycopg2 in requirements.txt; then you don't need gcc and libpq-dev. You probably have mixed PostgreSQL versions.
Problem solved!
The .env file with all my credentials was listed in .dockerignore, so it was impossible to locate this .env when building the image.
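As a side note, one pattern that sidesteps this class of problem is to keep .env in .dockerignore on purpose and inject the credentials at runtime instead, for example via Compose's env_file (a sketch against the compose file above; docker run --env-file .env would achieve the same on the droplet):
services:
  launch_spiders:
    build: .
    image: registry.digitalocean.com/****-registery/****-scraper
    env_file:
      - .env   # read on the host at container start, never copied into the image
    depends_on:
      - splash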

GitHub Actions not picking up Django tests

I want to write a simple GitHub Action that runs my Django app's tests when I push to GitHub. GitHub runs the workflow on push, but for some reason, it doesn't pick up any of the tests, even though running python ./api/manage.py test locally works.
The Run tests section of the Job summary shows this:
Run python ./api/manage.py test
System check identified no issues (0 silenced).
----------------------------------------------------------------------
Ran 0 tests in 0.000s
OK
For background, my local setup uses docker-compose, with a Dockerfile for each app; the Django app is the API. All I want to do is run the Django tests on push.
I've come across GitHub service containers, and I thought they might be necessary, since Django needs a Postgres DB connection to run its tests.
I'm new to GitHub Actions, so any direction would be appreciated. My hunch is that it should be simpler than this, but below is my current .github/workflows/django.yml file:
name: Django CI
on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]
jobs:
  tests:
    runs-on: ubuntu-latest
    container: python:3
    services:
      # Label used to access the service container
      db:
        # Docker Hub image
        image: postgres
        # Provide the password for postgres
        env:
          POSTGRES_PASSWORD: password
        # Set health checks to wait until postgres has started
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
    steps:
      # Downloads a copy of the code in your repository before running CI tests
      - name: Check out repository code
        uses: actions/checkout@v2
      # Performs a clean installation of all dependencies
      - name: Install Dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -r api/requirements.txt
      - name: Run Tests
        run: |
          python ./api/manage.py test
        env:
          # The hostname used to communicate with the PostgreSQL service container
          POSTGRES_HOST: postgres
          # The default PostgreSQL port
          POSTGRES_PORT: 5432
I don't know if you have solved it by yourself; if so, please share your solution. However, what I ran into when doing the same thing as you is that I had to specify the app that I wanted to test for it to work.
For example, if you have your app called "testapp", you need to do:
run: |
  python ./api/manage.py test testapp
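Putting that into the workflow above, the test step would look something like this (a sketch; testapp is a placeholder for your actual app label). One additional caveat: since the job itself runs inside container: python:3, GitHub Actions makes the service container reachable via its label, db here, so POSTGRES_HOST may need to point at db rather than postgres:
      - name: Run Tests
        run: |
          python ./api/manage.py test testapp
        env:
          POSTGRES_HOST: db
          POSTGRES_PORT: 5432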

How to test a dockerized application in an Azure DevOps (Server) pipeline?

I have a simple python dockerized application whose structure is
/src
- server.py
- test_server.py
Dockerfile
requirements.txt
in which the docker base image is Linux-based, and server.py exposes a FastAPI endpoint.
For completeness, server.py looks like this:
from fastapi import FastAPI
from pydantic import BaseModel

class Item(BaseModel):
    number: int

app = FastAPI(title="Sum one", description="Get a number, add one to it", version="0.1.0")

@app.post("/compute")
async def compute(input: Item):
    return {'result': input.number + 1}
Tests are meant to be done with pytest (following https://fastapi.tiangolo.com/tutorial/testing/) with a test_server.py:
from fastapi.testclient import TestClient
from server import app
import json

client = TestClient(app)

def test_endpoint():
    """test endpoint"""
    response = client.post("/compute", json={"number": 1})
    values = json.loads(response.text)
    assert values["result"] == 2
Dockerfile looks like this:
FROM tiangolo/uvicorn-gunicorn:python3.7
COPY . /app
RUN pip install -r requirements.txt
WORKDIR /app/src
EXPOSE 8000
CMD ["uvicorn", "server:app", "--host", "0.0.0.0", "--port", "8000", "--reload"]
At the moment, if I want to run the tests on my local machine within the container, one way to do this is:
1) Build the Docker container
2) Run the container and get its name via docker ps
3) Run docker exec -it <mycontainer> bash and execute pytest to see the tests passing.
Now, I would like to run tests in Azure DevOps (Server) before pushing the image to my Docker registry and triggering a release pipeline. If this sounds like an OK thing to do, what's the proper way to do it?
So far, I hoped that something along the lines of adding a "PyTest" step in the build pipeline would magically work (screenshot omitted).
I am currently using a Linux agent, and the step fails (error screenshot omitted).
The failure is not surprising, as (I think) the container is not run after being built, and therefore pytest can't run within it either :(
Another way to solve this is to include pytest commands in the Dockerfile and deal with the tests in a release pipeline. However, I would like to decouple the testing from the container that is ultimately pushed to the registry and deployed.
Update your azure-pipelines.yml file as follows to run the tests in Azure Pipelines
Method-1 (using docker)
trigger:
- master
pool:
  vmImage: 'ubuntu-latest'
steps:
- task: Docker@2
  inputs:
    command: 'build'
    Dockerfile: '**/Dockerfile'
    arguments: '-t fast-api:$(Build.BuildId)'
- script: |
    docker run fast-api:$(Build.BuildId) python -m pytest
  displayName: 'Run PyTest'
Successful pipeline screenshot
Method-2 (without docker)
trigger:
- master
pool:
  vmImage: 'ubuntu-latest'
strategy:
  matrix:
    Python37:
      python.version: '3.7'
steps:
- task: UsePythonVersion@0
  inputs:
    versionSpec: '$(python.version)'
  displayName: 'Use Python $(python.version)'
- script: |
    python -m pip install --upgrade pip
    pip install -r requirements.txt
  displayName: 'Install dependencies'
- script: |
    pip install pytest pytest-azurepipelines
    python -m pytest
  displayName: 'pytest'
BTW, I have a simple FastAPI project that you can reference if you want.
Test your docker script using pytest-azurepipelines:
- script: |
    python -m pip install --upgrade pip
    pip install pytest pytest-azurepipelines
    pip install -r requirements.txt
    pip install -e .
  displayName: 'Install dependencies'
- script: |
    python -m pytest /src/test_server.py
  displayName: 'pytest'
Running pytest with the plugin pytest-azurepipelines will let you see your test results in the Azure Pipelines UI.
https://pypi.org/project/pytest-azurepipelines/
You can run your unit tests directly from within your Docker container using pytest-azurepipelines (which you need to have installed in the Docker image beforehand):
- script: |
    docker run --mount type=bind,source="$(pwd)",target=/results \
      --entrypoint /bin/bash my_docker_image \
      -c "cd results && pytest"
  displayName: 'tests'
  continueOnError: true
pytest will create an XML file containing the test results, which is made available to the Azure DevOps pipeline thanks to the --mount flag in the docker run command. pytest-azurepipelines then publishes the results directly to Azure DevOps.
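If you would rather not bake the plugin into the image, a variant of the same idea (a sketch, assuming the image already contains pytest) is to have pytest emit a JUnit file into the mounted directory and publish it with the standard task:
- script: |
    docker run --mount type=bind,source="$(pwd)",target=/results \
      --entrypoint /bin/bash my_docker_image \
      -c "cd /results && pytest --junitxml=test-results.xml"
  displayName: 'tests'
  continueOnError: true
- task: PublishTestResults@2
  condition: succeededOrFailed()
  inputs:
    testResultsFormat: 'JUnit'
    testResultsFiles: 'test-results.xml'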

View Docker Swarm CMD Line Output

I am trying to incorporate a Python container and a DynamoDB container into one stack file to experiment with Docker Swarm. I have followed Docker Swarm tutorials and seen web apps running across multiple nodes before, but I have never built anything independently. I am able to run docker-compose up with no issues, but I am struggling with swarm.
My docker-compose.yml looks like:
version: '3.3'
services:
  dynamodb:
    image: "amazon/dynamodb-local"
    ports:
      - "8000:8000"
  track-count:
    image: "my-app"
    links:
      - "dynamodb:localhost"
Running docker stack deploy -c docker-compose.yml trial_stack brings up no errors; however, the 'hello world' printed by the first line of my Python code never appears in the terminal. I get the following as command-line output:
Ignoring unsupported options: links
Creating network trial_stack_default
Creating service trial_stack_dynamodb
Creating service trial_stack_track-count
My question is:
1) Why is the deploy service ignoring the links? I have noticed this is mentioned in the docs https://docs.docker.com/engine/reference/commandline/stack_deploy/ but I am unsure whether it will cause my stack to fail.
2) Assuming the links issue is fixed, where will any command line output be shown, to confirm the system is running? Currently I only have one node, my local machine, which is the manager.
For reference, my python image is being built by the following Dockerfile:
FROM python:3.8-slim-buster
RUN mkdir /app
WORKDIR /app
RUN pip install --upgrade pip
COPY ./requirements.txt ./
RUN pip install -r ./requirements.txt
COPY / /
COPY /resources/secrets.py /resources/secrets.py
CMD [ "python", "/main.py" ]
You can update docker-compose.yaml to enable tty for the services whose stdout you want to see on the console.
Updated docker-compose.yaml should look like this:
version: '3.3'
services:
  dynamodb:
    image: "amazon/dynamodb-local"
    ports:
      - "8000:8000"
  track-count:
    image: "my-app"
    tty: true
    links:
      - "dynamodb:localhost"
and then once you have the task deployed, to check the service logs you can run:
# get the service name
docker stack services <STACK_NAME>
# display the service logs
docker service logs --follow --raw <SERVICE_NAME>

Docker container is not created

I work with a Python tool that uses Docker for project management. I run the setup process with the command:
$ bin/butler.py setup
The setup went through seamlessly, but when I try to install new PHP plugins using Composer, the tool doesn't find the container itself.
So, my conclusion is the tool is not creating the container properly in the first place.
I describe the setup process below. After the initial configuration, this is where it starts:
# all done
print("pull doker images images")
self.docker.compose_pull(self.local_yml)
print("create containers")
self.docker.compose_setup(self.local_yml)
print("setup completed")
This is the general command for the Docker execution. I know it has a security bug, but at this moment that is not the concern.
def compose(self, params, yaml_path="docker-compose.yml"):
    """Execute a docker-compose command."""
    cmd = f"docker-compose -f {yaml_path} {params}"
    print(cmd)
    try:
        subprocess.run(cmd, shell=True, check=True)
    except Exception:
        pass

def compose_pull(self, yaml_path):
    self.compose("pull --ignore-pull-failures", yaml_path)

def compose_setup(self, yaml_path):
    self.compose(f"--project-name {self.project_name} up --no-start ", yaml_path)
The printout provides the commands:
pull doker images images
# We use a docker-compose.yml and perform the pull operation
docker-compose -f /Users/chaklader/PycharmProjects/Welance-Craft-Starter/build/docker-compose.yml pull --ignore-pull-failures
Pulling database ...
Pulling craft ...
create containers
# We use a docker-compose.yml and perform the up operation for the project
docker-compose -f /Users/chaklader/PycharmProjects/Welance-Craft-Starter/build/docker-compose.yml --project-name p13-27 up --no-start
Creating network "p13-27_default" with the default driver
Creating p13-27_database ...
Creating p13-27_craft ...
setup completed
The docker-compose.yml file is provided:
services:
  craft:
    container_name: p13-27_craft
    environment:
      CRAFT_ALLOW_UPDATES: 'false'
      CRAFT_DEVMODE: 1
      CRAFT_EMAIL: admin@welance.de
      CRAFT_ENABLE_CACHE: 0
      CRAFT_LOCALE: en_us
      CRAFT_PASSWORD: welance
      CRAFT_SITENAME: Welance
      CRAFT_SITEURL: //localhost
      CRAFT_USERNAME: admin
      DB_DATABASE: craft
      DB_DRIVER: mysql
      DB_PASSWORD: craft
      DB_PORT: '3306'
      DB_SCHEMA: public
      DB_SERVER: database
      DB_TABLE_PREFIX: craft_
      DB_USER: craft
      ENVIRONMENT: dev
      HTTPD_OPTIONS: ''
      LANG: C.UTF-8
      SECURITY_KEY: some_key_:)
    image: welance/craft:3.1.17.2
    links:
      - database
    ports:
      - 80:80
    volumes:
      - /var/log
      - ./docker/craft/conf/apache2/craft.conf:/etc/apache2/conf.d/craft.conf
      - ./docker/craft/conf/php/php.ini:/etc/php7/php.ini
      - ./docker/craft/logs/apache2:/var/log/apache2
      - ./docker/craft/adminer:/data/adminer
      - ../config:/data/craft/config
      - ../templates:/data/craft/templates
      - ../web:/data/craft/web
  database:
    command: mysqld --character-set-server=utf8 --collation-server=utf8_unicode_ci
      --init-connect='SET NAMES UTF8;'
    container_name: p13-27_database
    environment:
      MYSQL_DATABASE: xyz
      MYSQL_PASSWORD: xyz
      MYSQL_ROOT_PASSWORD: xyz
      MYSQL_USER: xyz
    image: mysql:5.7
    volumes:
      - /var/lib/mysql
version: '3.1'
In summary, my base image is welance/craft:3.1.17.2, and I use it to create the container named p13-27_craft. The additional configuration is provided in the docker-compose.yml file, and I run the pull and up commands with Docker.
I think the container itself is not created. For example, I provided the data for customer ID 15 and project ID 55, and the printout informs me: Creating p15-55_craft ... done.
When I run the command from the terminal to see whether the container was created, I find:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
cf2ea4638772 welance/craft:3.1.17.1 "/data/scripts/run-c…" 37 minutes ago Up 37 minutes 0.0.0.0:80->80/tcp p13-17_craft
4504ae62035f mysql:5.7 "docker-entrypoint.s…" About an hour ago Up About an hour 3306/tcp, 33060/tcp p13-17_database
518e3535859b mysql:5.7
So the information from the printout is not correct, and the container is not created in the first place.
How do I investigate what the issue is and why the container is not being created?
Thank you.
Get rid of the --no-start option, and add the -d flag to run as a daemon (background process). If I run my own setup:
docker-compose up --no-start
Creating alerts-cache ... done
Creating mongoClientTemp ... done
Creating apilayer_alerts-api_1 ... done
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
Nothing is found, even though my containers are created.
docker-compose up -d
Starting alerts-cache ... done
Starting mongoClientTemp ... done
Starting apilayer_alerts-api_1 ... done
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
af557a2add73 bdsdev.azurecr.io/rva_flask "python app.py alert…" 2 minutes ago Up 1 second 0.0.0.0:5000->5000/tcp apilayer_alerts-api_1
829da0fabe62 bdsdev.azurecr.io/temp_mongo "docker-entrypoint.s…" 2 minutes ago Up 2 seconds 27017/tcp mongoClientTemp
cdb67a305233 mongo
How do I investigate what is the issue here and why the container is
not creating?
Your configuration seems correct, and docker-compose does not report any error; it's probably that your container was created but either was not started or exited right after starting. You are using docker ps, which only shows running containers; you will probably see your missing container by running docker ps -a.
docker-compose won't report any error if the container is created (and started) successfully but exits right after starting. If you can see your container with docker ps -a, try running docker logs <container name> to see why it exited. The steps to solve the issue afterward will depend on how your container works.
