visibility of python output from bash - python

I have Python code that uses tqdm.
The bash script below builds the Docker image and runs the container, however I can't see any output from the container (in the CLI).
#!/bin/sh
docker build . -t traffic
docker run -d --name traffic_con traffic
docker wait traffic_con
docker cp -a traffic_con:/usr/TrafficMannager/out/data/. ./out/data/
docker rm traffic_con
docker rmi traffic
I've tried to run the container in interactive mode (-it), however it throws an error.
[EDIT:]
Dockerfile:
FROM cityflowproject/cityflow
# Create a folder we'll work in
WORKDIR /usr/TrafficMannager
# Upgrade installed packages
RUN apt-get update && apt-get upgrade -y && apt-get clean
# Install vim to open & edit code/text files
RUN apt-get install -y vim
# Install all Python code dependencies
RUN pip install gym && \
pip install numpy && \
pip install IPython && \
pip install torch && \
python -m pip install python-dotenv &&\
pip install tqdm
COPY . .
CMD chmod u+x script/container_instructions.sh; ./script/container_instructions.sh
container_instructions.sh:
#!/bin/sh
pip install lib/extern/CityFlow/.
python main.py

You run the Docker container in the background, then immediately docker wait for it. If you run the container in the foreground instead, you'll see its output on stdout, and the docker run command will complete when the container exits.
docker run --name traffic_con traffic # without -d
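If you do prefer to keep -d, a small hedged alternative is to stream the container's output with docker logs before waiting on it; this assumes the program writes to stdout/stderr (tqdm draws its progress bar on stderr):

docker run -d --name traffic_con traffic
docker logs -f traffic_con   # stream stdout/stderr; exits when the container stops
docker wait traffic_con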
Given the wrapper script you show, you may find this setup much easier to run in a Python virtual environment. Ignore all the Docker parts and run:
python3 -m venv venv
./venv/bin/pip install gym numpy IPython torch python-dotenv tqdm lib/extern/CityFlow
./venv/bin/python3 main.py
The script will directly write to ./out/data on the host system, without the long-winded privileged script to copy data out.
If you really do need a container here, you can also mount the output directory into the container to avoid the manual copy step.
#!/bin/sh
docker build . -t traffic
docker run --rm -v "$PWD/out/data:/usr/TrafficMannager/out/data" traffic
docker rmi traffic

Related

How to install dbt-core or any adapter in Windows 10 with Python (pip)

For the last couple of days I've struggled to install dbt on my Windows 10 box. It seems the best way is to emulate Linux with WSL.
So, in order to help others save their time and a few neurons, I decided to post a quick recipe in this thread. I summarized the whole process in the steps below, together with a nice and complete tutorial for each one (a short command sketch follows the list).
Enable WSL
https://learn.microsoft.com/en-us/windows/wsl/install
Install Linux Ubuntu
https://ubuntu.com/tutorials/install-ubuntu-on-wsl2-on-windows-10#1-overview
Install Python
As python3 comes with Ubuntu by default, you won't need to do anything in this step. Otherwise, you can always go to:
https://packaging.python.org/en/latest/tutorials/installing-packages/#requirements-for-installing-packages
Install Pip
https://packaging.python.org/en/latest/guides/installing-using-pip-and-virtual-environments/#creating-a-virtual-environment
Install VirtualEnv
https://docs.python.org/3/library/venv.html
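For reference, here is a minimal sketch of what the steps above boil down to inside the Ubuntu (WSL) shell; the environment name and the dbt-snowflake adapter are just examples, so swap in whichever adapter you need:

sudo apt-get update && sudo apt-get install -y python3-venv python3-pip
python3 -m venv dbt-env                 # create the virtual environment
source dbt-env/bin/activate             # activate it
pip install --upgrade pip
pip install dbt-core dbt-snowflake      # or dbt-postgres, dbt-bigquery, ...
dbt --version                           # confirm the install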
I hope it helps. If not, you can always post a message in this thread!
Best wishes,
I
Another way you can run dbt-core on Windows is with Docker. I'm currently on Windows 10 and use a Docker image for my dbt project without needing WSL. Below are my Dockerfile and requirements.txt with dbt-core and dbt-snowflake, but feel free to swap in the packages you need.
In my repo, my dbt project is in a folder at the root level named dbt.
requirements.txt
dbt-core==1.1.0
dbt-snowflake==1.1.0
Dockerfile
FROM public.ecr.aws/docker/library/python:3.8-slim-buster
COPY . /dbt
# Update and install system packages
RUN apt-get update -y && \
apt-get install --no-install-recommends -y -q \
git libpq-dev python-dev && \
apt-get clean && \
rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
# Install dbt
RUN pip install -U pip
RUN pip install -r dbt/requirements.txt
# TEMP FIX due to dependency updates. See https://github.com/dbt-labs/dbt-core/issues/4745
RUN pip install --force-reinstall MarkupSafe==2.0.1
# Install dbt dependencies
WORKDIR /dbt
RUN dbt deps
# Specify profiles directory
ENV DBT_PROFILES_DIR=.dbt
# Expose port for dbt docs
EXPOSE 8080
And then you can build and run it (I personally put both of these commands in a dbt_run.sh file and run with bash dbt_run.sh):
docker build -t dbt_image .
docker run \
-p 8080:8080 \
--env-file .env \
-it \
--mount type=bind,source="$(pwd)",target=/dbt \
dbt_image bash
If you make changes to your dbt project while the container is running, they will be reflected in the container, which makes it great for developing locally. Hope this helps!
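Once you are in the container's bash shell, a typical session might look like the sketch below; it assumes a valid profiles.yml sits in the .dbt directory referenced by DBT_PROFILES_DIR and that .env supplies the Snowflake credentials:

dbt debug                                        # verify profile and connection
dbt run                                          # build the models
dbt test                                         # run the tests
dbt docs generate && dbt docs serve --port 8080  # serve docs on the exposed port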

How to build docker to run Python3 from Node.js child_process in Google App Engine?

I'm deploying a Node.js web application on a custom runtime in the flex environment. I'm calling child_process in Node.js to spawn python3, like so:
const spawn = require("child_process").spawn;
pythonProcess = spawn('python3');
This runs fine locally, but when deployed to GAE it gives me an error like this:
Error: spawn python3 ENOENT
at Process.ChildProcess._handle.onexit (child_process.js:240)
at onErrorNT (internal/child_process.js:415)
at process._tickCallback (next_tick.js:63)
However, when I run python2, it works fine.
After doing some research and digging, I came across this question on Stack Overflow:
How to install Python3 in Google Cloud Platform for a Node app
It seems I have to build a custom runtime from a Dockerfile to allow multiple runtimes (something like that).
I've tried countless things with the Dockerfile, such as:
# Trying to install python3
FROM ubuntu as stage0
WORKDIR /python/
RUN apt-get update || : && apt-get install --yes python3;
RUN apt-get install python3-pip -y
# My main node.js docker stuff
FROM gcr.io/google_appengine/nodejs
COPY . /app/
... etc
and
# From google app engine python runtime docker repo
FROM gcr.io/google-appengine/python
RUN virtualenv /env
ENV VIRTUAL_ENV /env
ENV PATH /env/bin:$PATH
ADD requirements.txt /app/requirements.txt
RUN pip install -r /app/requirements.txt
ADD . /app
# My main node.js docker stuff
FROM gcr.io/google_appengine/nodejs
COPY . /app/
... etc
but none of it worked.
What is the correct approach of doing this and how can I do it?
Thank you.
Google's image is based on Ubuntu but only has Python 2 (2.7). This answer showed how to use Python 3.6, but here we're going to get Python 3.5 via software-properties-common. Putting things together, you get:
FROM launcher.gcr.io/google/nodejs
# same as
# FROM gcr.io/google-appengine/nodejs
RUN apt-get update && apt-get install software-properties-common -y
# RUN unlink /usr/bin/python
# RUN ln -sv /usr/bin/python3.5 /usr/bin/python
# RUN python -V
RUN python3 -V
# Copy application code.
COPY . /app/
# Install dependencies.
RUN npm --unsafe-perm install
If you're just going to call python3 from your spawn, you don't need the unlink/symlink steps (the commented lines), which I included only so that you can call plain python.
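As a hedged sanity check after building (the image tag is just an example; depending on the base image's entrypoint you may need to add --entrypoint /bin/bash):

docker build -t node-python3 .
docker run --rm -it node-python3 /bin/bash
# inside the container:
python3 -V    # should print a Python 3.x version
node -v       # the Node runtime is still available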

venv directory not being created inside Docker container/image

I am relatively new to Docker and, as an experiment, I am trying to create just a generic Django development container with the following Dockerfile:
FROM python
ENV PYTHONUNBUFFERED 1
RUN apt-get update && apt-get dist-upgrade -y
RUN mkdir /code
WORKDIR /code
RUN python3 -m venv djangoProject
RUN /bin/bash -c "source /code/djangoProject/bin/activate && python3 -m pip install --upgrade pip && pip install django"
EXPOSE 8000
The image seems to build okay, but when I go to run the container:
docker container run -v /home/me/dev/djangoRESTreact/code:/code -it --rm djangodev /bin/bash
My local mount, /home/me/dev/djangoRESTreact/code, is not populated with the djangoProject venv directory I was expecting from this Dockerfile and mount. The docker container also has an empty directory at /code. If I run python3 -m venv djangoProject directly inside the container, the venv directory is created and I can see it both on the host and within the container.
Any idea why my venv is not being created in the image and subsequent container?
I'm pulling my hair out.
Thanks in advance!
You don't need venvs in a Docker container at all, so don't bother with one.
FROM python
ENV PYTHONUNBUFFERED 1
RUN apt-get update && apt-get dist-upgrade -y
RUN mkdir /code
WORKDIR /code
RUN python3 -m pip install --upgrade pip
RUN python3 -m pip install django
EXPOSE 8000
To answer your question, though, you're misunderstanding how -v mounts work; they mount a thing from your host onto a directory in the container. The /code/... created in your dockerfile is essentially overridden by the volume mount, which is why you don't see the venv at all.
When you mount a volume into a container, the volume covers up anything that was already in the container at that location. This is exactly how every other mount on Linux works. Also, volumes are only mounted when running containers, not when building the image. Thus, the venv you created at that location during the build is hidden once the container runs with the mount. If you want your venv to be visible, you need to put it in the volume (on the host side), not just in the container at the same place.
Mounting the volume with -v causes /home/me/dev/djangoRESTreact/code on the host to be mounted at /code in the container. This mounts over anything that was placed there during the build (your venv).
If you run the container without the -v flag, you'll probably find the venv directory exists.
You should probably avoid creating a venv within the container anyway, since the container is already an isolated environment.
Instead just copy your requirements.txt into the container, and install them directly in the container. Something like:
COPY ./requirements.txt /requirements.txt
RUN pip install -U pip && pip install -r /requirements.txt
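Putting that together, a minimal sketch of the build-and-run workflow; the manage.py runserver invocation is an assumption on my part (it presumes a standard Django project at the root of the mounted code):

docker build -t djangodev .
# dependencies live in the image; only the source code is bind-mounted
docker run --rm -it -p 8000:8000 \
    -v /home/me/dev/djangoRESTreact/code:/code \
    djangodev python manage.py runserver 0.0.0.0:8000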

debugging containerised python web app

I have made my first Docker container, and it works as per the Dockerfile below.
FROM python:3.5-slim
RUN apt-get update && \
apt-get -y install gcc mono-mcs && \
apt-get -y install vim && \
apt-get -y install nano && \
rm -rf /var/lib/apt/lists/*
RUN mkdir -p /statics/js
VOLUME ["/statics/"]
WORKDIR /statics/js
COPY requirements.txt /opt/requirements.txt
RUN pip install -r /opt/requirements.txt
EXPOSE 8080
CMD ["python", "/statics/js/app.py"]
after running this command:
docker run -it -p 8080:8080 \
    -v ~/Development/my-Docker-builds/pythonReact/statics/:/statics/ \
    -d ciasto/pythonreact:v2
and when I open the page localhost:8080 I get an error:
A server error occurred. Please contact the administrator.
But if I run this application normally, i.e. not containerised, directly on my host machine, it works fine.
So I want to know what is causing the server error. How do I debug a Python app that runs in a container to find out why it doesn't work, or what I am doing wrong?
Mainly, this:
config.paths['static_files'] = 'statics'
Should be:
config.paths['static_files'] = '/statics'
I've got your application up and running with your 'Hello World'.
I made these changes:
1) The mentioned config.paths['static_files'] = '/statics'
2) This Dockerfile (removed VOLUME)
FROM python:3.5-slim
RUN apt-get update && \
apt-get -y install gcc mono-mcs && \
apt-get -y install vim && \
apt-get -y install nano && \
rm -rf /var/lib/apt/lists/*
COPY requirements.txt /opt/requirements.txt
RUN pip install -r /opt/requirements.txt
COPY ./statics/ /statics/
COPY app.py /app/app.py
WORKDIR /statics/js
EXPOSE 8080
CMD ["python", "/app/app.py"]
3) Moved the non-static app.py to a proper place: root of the project.
4) Run with: docker build . -t pyapp, then docker run -p 8080:8080 -it pyapp
You should see Serving on port 8080... from terminal output. And Hello World in browser.
I've forked your Github project and did a pull-request.
Edit:
If you need to make changes as you develop, run the container with a volume to override the app that is packed into the image. For example:
docker run -v "$(pwd)/statics/js:/statics/js" -p 8080:8080 -it pyapp
You can have as many volumes as you want, but the app is already packed in the image and ready to push somewhere.
You can use pdb to debug Python code from the CLI. To do this, just import pdb and call pdb.set_trace() where you would like a breakpoint in your Python code. In other words, insert the following line where you want to pause:
import pdb; pdb.set_trace()
Then you have to run your Python code interactively.
You could do that by running bash interactively in your container with
docker run -it -p 8080:8080 -v ~/Development/my-Docker-builds/pythonReact/statics/:/statics/ ciasto/pythonreact:v2 /bin/bash
and then running manually your app with
root@5910f24d0d8a:/statics/js# python /statics/js/app.py
When the code will reach the breakpoint, it will pause and a prompt will be shown where you can type commands to inspect your execution.
For more detail about the available commands, you can give a look at the pdb commands documentation.
Also, I noted that you are building your image from the python:3.5-slim base image, which is a (very) light Python image that does not include everything commonly included in a Python distribution.
From the Python images page:
This image does not contain the common packages contained in the default tag and only contains the minimal packages needed to run python. Unless you are working in an environment where only the python image will be deployed and you have space constraints, we highly recommend using the default image of this repository.
Maybe using the standard python:3.5 image instead would solve your issue.
As a quick tip for debugging containerized applications: if your application is failing and the container crashes or stops, just launch the container image with /bin/bash as the CMD/ENTRYPOINT and start the application manually. Once you have a shell in the container, you can debug the application as on a normal Linux system. CMD is straightforward to override; for ENTRYPOINT, use the --entrypoint flag with the docker run command.
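For example, with the image from this question (a sketch only; adjust the image name and mounts to your setup):

# drop into a shell instead of the image's normal CMD/ENTRYPOINT
docker run -it -p 8080:8080 \
    -v ~/Development/my-Docker-builds/pythonReact/statics/:/statics/ \
    --entrypoint /bin/bash ciasto/pythonreact:v2
# then start the app by hand and watch its output
python /statics/js/app.py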

Run py.test in a docker container as a service

I am working on setting up a dockerised Selenium grid. I can send my Python tests (run with pytest) from a pytest container (see below) by attaching to it.
But I have set up another LAMP container that is going to control pytest.
So I want to make the pytest container standalone, running idle and waiting for commands from the LAMP container.
I have this Dockerfile:
# Starting from base image
FROM ubuntu
#-----------------------------------------------------
# Set the Github personal token
ENV GH_TOKEN blablabla
# Install Python & pip
RUN apt-get update
RUN apt-get upgrade -y
RUN apt-get install -y python python-pip python-dev && pip install --upgrade pip
# Install nano for #debugging
RUN apt-get install -y nano
# Install xvfb
RUN apt-get install -y xvfb
# Install GIT
RUN apt-get update -y && apt-get install git -y
# [in the / folder]
RUN git clone https://$GH_TOKEN:x-oauth-basic@github.com/user/project.git /project
# Install dependencies via pip
WORKDIR /project
RUN pip install -r dependencies.txt
#-----------------------------------------------------
#
CMD ["/bin/bash"]
I start the pytest container manually [for development] with this:
docker run -dit -v /project --name pytest repo/user:py
The thing is that I finished development and I want to have the pytest container launched from docker-compose and connected to other containers (with link and volume).
I just cannot make it stay up.
I used this :
pytest:
image: repo/user:py
volumes:
- "/project"
command: "/bin/bash tail -f /dev/null"
but it didn't work.
So, inside the Dockerfile, should I use a specific CMD or ENTRYPOINT ?
Should I use some command from the docker-compose file?
I just set this up on one of my projects recently. I use a multi-stage build. At present I put the tests in the same folder as the source (test_*.py). From my experience with this, it doesn't feel natural; I prefer tests to be in their own folder that is excluded by default.
FROM python:3.7.6 AS build
WORKDIR /app
COPY requirements.txt .
RUN pip3 install --compile -r requirements.txt && rm -rf /root/.cache
COPY src /app
# TODO precompile
# Build stage test - run tests
FROM build AS test
RUN pip3 install pytest pytest-cov && rm -rf /root/.cache
RUN pytest --doctest-modules \
--junitxml=xunit-reports/xunit-result-all.xml \
--cov \
--cov-report=xml:coverage-reports/coverage.xml \
--cov-report=html:coverage-reports/
# Build stage 3 - Complete the build setting the executable
FROM build AS final
CMD [ "python", "./service.py" ]
In order to exclude the test files from coverage, a .coveragerc must be present:
[run]
omit = test_*
The test target runs the required tests and generates coverage and execution reports. These are NOT suitable for Azure DevOps and SonarQube as-is. To make them suitable:
sed -i~ 's#/app#$(Build.SourcesDirectory)/app#' $(Pipeline.Workspace)/b/coverage-reports/coverage.xml
To run the tests:
#!/usr/bin/env bash
set -e
DOCKER_BUILDKIT=1 docker build . --target test --progress plain
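If the test stage succeeds, the final stage is built and run the usual way (the image name here is illustrative):

DOCKER_BUILDKIT=1 docker build . --target final -t myservice
docker run --rm myservice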
I am not exactly sure how your tests execute, but I think I have a similar use case. You can see how I do this in my Envoy project in cmd.sh, and a sample test.
Here is how I run my tests. I'm using pytest as well, but that's not important:
1. Use docker-compose to bring up the stack, without the tests.
2. Wait for the stack to be ready for requests; for me this means polling for a 200 response.
3. Run the test container separately, but make sure it uses the same network as the compose stack.
This can be done in several ways. You can put it all in a Bash script and control everything from your host.
In my case I do all of this from a Python container. It's a little hard to wrap your head around, but the idea is that there is a Python test container which the host starts. That container then uses compose to bring the stack up back on the host (dockerception). Then, in the test container, we run the pytest tests. When it's done, it composes the stack down and propagates the return code.
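As a rough host-side sketch of that flow, assuming a compose service answering on port 8080, a test image called tests, and the default compose network name derived from the project directory (all of these names are placeholders):

#!/usr/bin/env bash
set -e
docker-compose up -d                                     # 1. bring up the stack
until curl -fs http://localhost:8080/ > /dev/null; do    # 2. poll until it responds
    sleep 1
done
rc=0
docker run --rm --network "$(basename "$PWD")_default" \
    tests pytest || rc=$?                                # 3. run tests on the stack's network
docker-compose down
exit $rc                                                 # propagate the test result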
First, get the list of your images with docker images.
Check the list and make sure your image exists.
Then run your image with docker run.
One note: you need a CMD in the Dockerfile so that pytest runs inside your container.
my Dockerfile
FROM python:3.6-slim
COPY . /python-test-calculator
WORKDIR /python-test-calculator
RUN pip freeze > requirements.txt
RUN pip install --no-cache-dir -r requirements.txt
RUN mkdir reports
CMD cd reports
CMD ["python", "-m", "pytest", "--junitxml=reports/result.xml"]
CMD tail -f /dev/null
