Run Selenium tests via Jenkins on Docker Django app - python

I would like to run Selenium integration tests on a Development server.
Our App is Django app and it is deployed via Jenkins and Docker.
We know how to write and run Selenium tests locally.
We know how to run tests with Jenkins and present Cobertura and JUnit reports.
The problem we have is:
In order to run Selenium tests (unlike unit tests), the server needs to be running,
so we can't run them before we build the Docker image.
How do we run tests inside Docker containers? (This could potentially be achieved via a script called inside the Dockerfile...)
But even more important: how can Jenkins get the reports from inside the Docker containers? (See the sketch after the deployment steps below.)
What are the best practices here?
The deployment structure:
Jenkins gets the code from Git
Jenkins builds the Docker image
Jenkins pushes the image to a (private) Docker registry
Jenkins logs in to the remote server
the remote server pulls the image from the registry
the remote server runs the image
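One possible approach to the report question, as a sketch rather than a definitive setup: have Jenkins start the freshly built image, run the test suite inside it, and copy the JUnit/coverage output back out of the container so Jenkins can publish it. The snippet below uses the Docker SDK for Python; the image tag, test command and report path are assumptions for illustration only.

import docker  # Docker SDK for Python (docker-py)

client = docker.from_env()

# Assumed names: the image tag, the test command and the directory
# the test runner writes its JUnit/Cobertura XML into.
container = client.containers.run(
    "registry.example.com/myapp:latest",
    command="python manage.py test --noinput",
    detach=True,
)
container.wait()

# Stream the report directory out of the container as a tar archive;
# Jenkins can then unpack and publish the reports it contains.
stream, _stat = container.get_archive("/app/reports")
with open("reports.tar", "wb") as f:
    for chunk in stream:
        f.write(chunk)

container.remove()

The command-line equivalent is docker cp <container>:/app/reports ., which a Jenkins shell step can run after the test command exits.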

Related

Create docker container from flask request in uWSGI instance

I have a Docker container that is set up to perform some given actions with Selenium. My goal is to have the Docker container be created when a request is received at a certain endpoint created using Flask. The Flask app has been set up with uWSGI and Nginx using this tutorial.
When the endpoint receives a request, it is supposed to run the bash script ./run.sh:
#!/bin/bash
# Launch the selenium image with the project code and the host's /etc/hosts
# mounted, passing the first script argument through to main.py.
ID=$1
docker run --rm \
    -v $(pwd)/code:/code \
    -v /etc/hosts:/etc/hosts \
    selenium \
    python3 \
    /code/main.py ${ID}
I can successfully make a call to the endpoint using the IP given by DigitalOcean, but when it gets to the point where it needs to run Docker, it says:
docker: command not found
Note: if I go into the virtualenv manually, run python app.py, and send a request to the Flask endpoint, the Docker container is created and everything works great.
You probably need to add a PATH variable to your bash script which includes the location of your docker executable. The user running NGINX likely doesn't have a path set.
PATH=$PATH:/usr/local/bin:/usr/bin
Also you'll need to ensure that the user running NGINX has permission to use docker, so add them to the docker group.
If this is a public service, then I would think carefully about whether you really want internet users to be launching containers on your server. Does $1 come from user input?
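As a rough illustration of both points (the route name, script path and ID format below are assumptions, not taken from the question), the Flask endpoint can pass an explicit PATH to the script and validate the ID before it ever reaches docker run:

# Hypothetical endpoint sketch; names are for illustration only.
import os
import re
import subprocess

from flask import Flask, abort

app = Flask(__name__)

@app.route("/run/<job_id>")
def run_job(job_id):
    # Reject anything that is not a simple token so user input cannot
    # smuggle extra arguments into the shell script.
    if not re.fullmatch(r"[A-Za-z0-9_-]+", job_id):
        abort(400)
    # Make sure the docker binary is on PATH even under the uWSGI/NGINX user.
    env = dict(os.environ)
    env["PATH"] = env.get("PATH", "") + ":/usr/local/bin:/usr/bin"
    subprocess.run(["./run.sh", job_id], env=env, check=True)
    return "started", 202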

How can I get the host ip in a docker container using Python?

Context: I implemented tests which use docker-py to create Docker networks and run Docker containers. The test runner used to execute the tests is pytest. The test setup depends on Python (a Python package on my dev machine), on my dev machine's Docker daemon, and on my dev machine's static IP address. In my dev machine's runtime context the tests run just fine (via plain invocation of the test runner pytest). Now I would like to migrate the tests to GitLab CI. GitLab CI runs the job in a Docker container which accesses the CI server's Docker daemon via a /var/run/docker.sock mount. The IP address of the Docker container that GitLab CI runs the job in is not static, unlike on my dev machine. But I need that IP address to create the Docker networks in the tests.
Question: How can I get the appropriate IP address of the Docker container the tests are executed in with Python?
When you run Docker you can share the host's network with the container.
Add --network host to the run command.
With that parameter, the Python networking code behaves the same as if you ran it outside a container.
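If --network host is not an option (for example because GitLab CI controls how the job container is started), another approach, sketched below, is to let the OS report which address the container would use for outbound traffic. The 8.8.8.8 target is arbitrary; no packet is actually sent for a UDP connect.

import socket

def get_own_ip():
    # Opening a UDP "connection" only selects a route; nothing is transmitted.
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect(("8.8.8.8", 80))
        return s.getsockname()[0]
    finally:
        s.close()

print(get_own_ip())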

Can't retrieve images from application in Docker container

I'm trying to dockerize a Python/Django application. When Docker runs the build script in a web container, it is unable to retrieve the images from the application.
It shows an error:
and the Google Chrome console shows:
But what is interesting is that I don't get any errors when I do the same on my local machine.
Everything goes as expected.
Docker version 18.03.1-ce, build 9ee9f40
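Without the actual error message it is hard to say, but one common cause when images load locally and not in the container is that static/media files are never collected or served inside the image. A minimal sketch of the relevant Django settings is below; the paths are assumptions, and you would also need to run collectstatic during the image build and serve STATIC_ROOT (for example via WhiteNoise or the web server).

# settings.py (sketch; BASE_DIR layout is an assumption)
import os

BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))

STATIC_URL = "/static/"
STATIC_ROOT = os.path.join(BASE_DIR, "staticfiles")  # target for collectstatic

MEDIA_URL = "/media/"
MEDIA_ROOT = os.path.join(BASE_DIR, "media")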

Azure msdeploy Python App Service

I'm trying to deploy an Azure App Service running Flask on Python 3.4. When I deploy from within Visual Studio (2015) via Web Deploy, everything works nicely. But when I attempt to deploy from my CI/CD server (TeamCity 10.0.3 on Windows Server 2012 R2) using an MSBuild step, the deployment succeeds without errors, yet my app is apparently missing some crucial components and just throws HTTP errors on every request. My logging isn't able to capture the actual errors because the app is apparently totally hosed at this point. I'm deploying numerous C# applications from this TeamCity instance using Web Deploy without fail. My build has the following steps:
Command Line Runner - Copy publish profile (because msdeploy looks for it at ~/__profiles for some unknown reason and I can't find a flag or configuration setting to change):
mkdir __profiles
copy *.pubxml __profiles
Command Line Runner - Create venv at top level folder:
c:\python34\python.exe -m venv env
Command Line Runner - Install from requirements.txt:
env\scripts\pip install -r requirements.txt
Powershell Runner - Stop Azure App Service
MSBuild Runner - Deploy (Build file path points to the .pyproj file):
/p:DeployOnBuild=true
/p:PublishProfile="My Publish Profile"
/p:Configuration=Release
/p:AllowUntrustedCertificate=True
/p:UserName=%WebDeployUserName%
/p:Password=%WebDeployPassword%
Powershell Runner - Start Azure App Service
Related GitHub Issue

I'm having trouble using docker-py in a development environment on OSX

I am creating Python code that will be built into a docker image.
My intent is that the docker image will have the capability of running other docker images on the host.
Let's call these docker containers "daemon" and "workers," respectively.
I've proven that this concept works by running "daemon" using
-v /var/run/docker.sock:/var/run/docker.sock
I'd like to be able to write the code so that it will work anywhere that there exists a /var/run/docker.sock file.
Since I'm working on an OSX machine I have to use the Docker Quickstart terminal. As such, on my system there is no docker.sock file.
The docker-py documentation shows this as the way to capture the docker client:
from docker import Client
cli = Client(base_url='unix://var/run/docker.sock')
Is there some hackery I can do on my system so that I can instantiate the docker client that way?
Can I create the docker.sock file on my file system and have it sym-linked to the VM docker host?
I really don't want to have to build my Docker image every time I want to test a single-line code change... help!!!
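One way to avoid hard-coding the Unix socket is to build the client from the environment instead, a sketch based on docker-py's kwargs_from_env helper, which reads the DOCKER_HOST / DOCKER_CERT_PATH / DOCKER_TLS_VERIFY variables that the Docker Quickstart terminal (docker-machine) exports:

from docker import Client
from docker.utils import kwargs_from_env

# On OSX with the Quickstart terminal there is no local /var/run/docker.sock,
# so point the client at the VM using the docker-machine environment variables.
# assert_hostname=False is needed because the VM's TLS cert doesn't match its IP.
cli = Client(**kwargs_from_env(assert_hostname=False))
print(cli.version())

On a Linux host where /var/run/docker.sock does exist and DOCKER_HOST is unset, the same call falls back to the local socket, so the code stays portable between your image and your dev machine.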
