Docker-built container that I cannot open - Python

I wanted to turn my Python script into an API using Docker.
This is what the Dockerfile looks like:
FROM python:3.9-slim
WORKDIR /app
RUN apt-get update && apt-get install -y \
    build-essential \
    software-properties-common \
    git \
    && apt-get install -y poppler-utils \
    && apt-get install -y tesseract-ocr \
    && apt-get update \
    && apt-get install -y ffmpeg libsm6 libxext6 \
    && apt-get install -y default-libmysqlclient-dev \
    && rm -rf /var/lib/apt/lists/*
COPY . .
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "80"]
Docker builds the container just fine, and it is running with no errors inside Docker.
I click the link and it's just http://0.0.0.0/.
I am using PyCharm and Python, so I bound the port in PyCharm too.
Did anyone run into a similar problem? I am new to Docker and might've missed something obvious, sorry.
I tried adding the port manually, http://0.0.0.0:80/ and http://0.0.0.0:80/docs, but nothing shows up.
I built a similar project with exactly the same parameters, but this one doesn't work.

What your program is showing you is the address it has bound to. 0.0.0.0 means that it will accept connections on any interface; 0.0.0.0 is not the actual address you need to talk to to reach your program.
You've mapped port 80 in the container to port 80 on the host, so you should be able to reach your program at http://localhost:80/. Since port 80 is the default for http, you can also just use http://localhost/.
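To see why 0.0.0.0 works as a bind address but not as a destination, here is a small stdlib-only sketch (no Docker involved); the same distinction applies to the uvicorn server in the container:

```python
import socket
import threading

# 0.0.0.0 is a *bind* address meaning "all interfaces"; it is not an
# address you browse to. Bind a toy server to it, then connect to it
# through a concrete address such as 127.0.0.1.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("0.0.0.0", 0))   # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

def serve_once():
    conn, _ = server.accept()
    conn.sendall(b"hello")
    conn.close()

t = threading.Thread(target=serve_once)
t.start()

# Clients must dial a real address; 127.0.0.1 reaches the same socket.
client = socket.create_connection(("127.0.0.1", port))
reply = b""
while True:
    part = client.recv(16)
    if not part:
        break
    reply += part
client.close()
t.join()
server.close()
print(reply.decode())  # hello
```

The server never "listens on 0.0.0.0" as a reachable address; the wildcard only tells the kernel which interfaces may deliver connections to it.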

Related

Can't connect to ActiveMQ Console running in Docker container

I made a Dockerfile to run an ActiveMQ service from, and when I try to connect to the console on the host machine using http://127.0.0.1:8161/ in my web browser, Google Chrome says "127.0.0.1 didn't send any data." This is when running the Docker image using docker run -p 61613:61613 -p 8161:8161 -it service_test bash.
However, when I run it using docker run --net host -it service_test bash, Google Chrome says "127.0.0.1 refused to connect.", which leads me to believe I'm changing something by adding the --net flag, but I'm not sure why it can't connect. Maybe a port-forwarding issue?
My Dockerfile is as follows
FROM <...>/library/ubuntu:20.04
ADD <proxy certs>
RUN apt-get update && \
    DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends software-properties-common && \
    update-ca-certificates && \
    add-apt-repository -y ppa:deadsnakes/ppa && \
    apt-get update && \
    apt-get install -y --no-install-recommends \
        curl \
        git \
        python3.8 \
        python3.8-venv \
        python3.8-dev \
        openjdk-11-jdk \
        make \
    && apt-get clean && \
    rm -rf /var/lib/apt/lists/*
WORKDIR /opt
RUN <point pip to certs>
RUN echo "timeout = 300" >> /etc/pip.conf
RUN curl -O https://bootstrap.pypa.io/get-pip.py && \
python3.8 get-pip.py
# Run python in a venv
ENV VIRTUAL_ENV=/opt/venv
RUN python3.8 -m venv $VIRTUAL_ENV
ENV PATH="$VIRTUAL_ENV/bin:$PATH"
# Update pip before continuing
RUN pip install --upgrade pip
# Get wheel
RUN pip install wheel
# add extra index url
RUN echo "extra-index-url = <url>" >> /etc/pip.conf
# Install ActiveMQ
ENV JAVA_HOME="/usr/lib/jvm/java-11-openjdk-amd64"
ENV PATH="$JAVA_HOME/bin:$PATH"
RUN mkdir -p /opt/amq
RUN curl -kL \
http://archive.apache.org/dist/activemq/5.16.3/apache-activemq-5.16.3-bin.tar.gz \
>> /opt/amq/apache-activemq-5.16.3-bin.tar.gz && \
tar -xzf /opt/amq/apache-activemq-5.16.3-bin.tar.gz --directory /opt/amq
ENV PATH="/opt/amq/apache-activemq-5.16.3/bin:$PATH"
# Expose ports 61613 and 8161 to other containers
EXPOSE 61613
EXPOSE 8161
COPY <package>.whl <package>.whl
RUN pip install <package>
Note: Some sensitive info was removed, anything surrounded by <> has been hidden.
For context, I am running activemq from the container using activemq console, and trying to connect to it from my host OS using Google Chrome.
Got it to work!
For those having the same issue, I resolved it by changing the IP address in jetty.xml from 127.0.0.1 to 0.0.0.0. I am now able to connect to my containerized AMQ instance from my host OS.
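For reference, the relevant section of conf/jetty.xml in ActiveMQ 5.16 looks roughly like the following (exact attributes may vary between versions); the edit described above is changing the host property:

```xml
<bean id="jettyPort" class="org.apache.activemq.web.WebConsolePort" init-method="start">
    <!-- bind to 0.0.0.0 instead of the default 127.0.0.1 so the web
         console is reachable through Docker's published ports -->
    <property name="host" value="0.0.0.0"/>
    <property name="port" value="8161"/>
</bean>
```

With the default of 127.0.0.1, the console only accepts connections originating inside the container itself, which is why the port mapping appeared to do nothing.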

Docker: how to see which driver I'm using

I'm new to the Docker world.
I have a Dockerfile that builds a Linux image, which I use to connect to a Microsoft SQL Server.
FROM ubuntu:20.04
WORKDIR /app
ADD . /app
RUN apt dist-upgrade
RUN apt-get clean
RUN apt-get -y update
RUN apt-get -y install unixodbc unixodbc-dev openssl libkrb5-3 tdsodbc build-essential gcc curl coinor-cbc
RUN apt-get -y install python3.7 python3-pip python3-dev python3-tzlocal
# driver "ODBC Driver 17 for SQL Server"
RUN curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add -
RUN curl https://packages.microsoft.com/config/ubuntu/19.04/prod.list > /etc/apt/sources.list.d/mssql-release.list
RUN apt-get -y update
RUN ACCEPT_EULA=Y apt-get install msodbcsql17
RUN apt-get clean
RUN pip3 install -r requirements.txt
RUN chmod -R 777 ./
EXPOSE 8080
CMD python3 app.py
With RUN apt-get -y install tdsodbc I install a driver called FreeTDS (documentation: https://www.freetds.org/),
while here I install the Microsoft ODBC driver:
RUN curl https://packages.microsoft.com/config/ubuntu/19.04/prod.list > /etc/apt/sources.list.d/mssql-release.list
RUN apt-get -y update
RUN ACCEPT_EULA=Y apt-get install msodbcsql17
RUN apt-get clean
Which driver do I actually use? What can I run in the shell to check that?
Thank you.
My problem is: I need to run an app (named app in the Dockerfile) that does a lot of queries, and I need to be able to run concurrent queries. Meaning, at least, that if I open two connections to Database1 and run one query on each connection, the two are evaluated at the same time, rather than the second waiting for the first to finish (which is the situation I'm in right now, and I don't know why).
Thank you.
EDIT:
I tried docker info in the shell. No information about ODBC, SQL, or Microsoft is given.
The command below gives you information about the storage driver you are using:
docker info
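Note that docker info reports Docker's own storage driver, not database drivers. unixODBC registers ODBC drivers in /etc/odbcinst.ini, and running odbcinst -q -d in the shell lists them. As a sketch, that file can also be inspected with Python's configparser (the file contents below are assumed sample data, not read from a real system):

```python
import configparser

# Sample odbcinst.ini contents; on a real system, read /etc/odbcinst.ini.
# Driver paths here are illustrative and vary by installation.
sample = """
[ODBC Driver 17 for SQL Server]
Description=Microsoft ODBC Driver 17 for SQL Server
Driver=/opt/microsoft/msodbcsql17/lib64/libmsodbcsql-17.so

[FreeTDS]
Description=FreeTDS driver
Driver=/usr/lib/x86_64-linux-gnu/odbc/libtdsodbc.so
"""

parser = configparser.ConfigParser()
parser.read_string(sample)

# Section names are the driver names you reference in connection
# strings, e.g. "DRIVER={ODBC Driver 17 for SQL Server};..."
drivers = list(parser.sections())
print(drivers)  # ['ODBC Driver 17 for SQL Server', 'FreeTDS']
```

Whichever name appears in your connection string is the driver you are actually using; installing both tdsodbc and msodbcsql17 only makes both available.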

Couldn't find any package by regex in python:3.8.3 docker image

I'm new to Docker. I created a Docker image, and this is what my Dockerfile looks like.
FROM python:3.8.3
RUN apt-get update \
    && apt-get install -y --no-install-recommends \
        postgresql-client \
    && rm -rf /var/lib/apt/lists/* \
    && apt-get install -y gcc libtool-ltdl-devel xmlsec1-1.2.20 xmlsec1-devel-1.2.20 xmlsec1 openssl-1.2.20 xmlsec1-openssl-devel-1.2.20 \
    && apt-get -y install curl gnupg \
    && curl -sL https://deb.nodesource.com/setup_14.x | bash - \
    && apt-get -y install nodejs
WORKDIR /app/
COPY . /app
RUN pip install -r production_requirements.txt \
&& front_end/noa-frontend/npm install
This image is used by the app service in docker-compose.yml. When I run docker-compose build, I get the error below saying it couldn't find the package. Those are a few dependencies I need in order to install a Python package.
At the beginning, I ran apt-get update to update the package lists.
Can anyone please help me with this issue?
Updated Dockerfile
FROM python:3.8.3
RUN apt-get update
RUN apt-get install -y postgresql-client \
    && apt-get install -y gcc libtool-ltdl-devel xmlsec1-1.2.20 xmlsec1-devel-1.2.20 xmlsec1 openssl-1.2.20 xmlsec1-openssl-devel-1.2.20 \
    && apt-get -y install curl gnupg \
    && curl -sL https://deb.nodesource.com/setup_14.x | bash - \
    && apt-get -y install nodejs
WORKDIR /app/
COPY . /app
RUN pip install -r production_requirements.txt \
&& front_end/noa-frontend/npm install
You are trying to use apt-get install after doing rm -rf /var/lib/apt/lists/*. That is guaranteed not to end well. Try removing the rm command initially to see if that helps. If you really need to keep the size of the image down, then put the rm command as the very last command in the RUN statement.
If you really want to reduce your image size, then try switching to python:3.8-slim or python:3.8-alpine. Alpine is a different OS from the Debian base of the default image, but its package manager can be told not to cache files locally, e.g.:
FROM python:3.8-alpine
RUN apk add --no-cache postgresql-client
RUN apk add --no-cache gcc libtool-ltdl-devel xmlsec1-1.2.20 xmlsec1-devel-1.2.20 xmlsec1 \
openssl-1.2.20 xmlsec1-openssl-devel-1.2.20
RUN apk add --no-cache curl gnupg
RUN apk add --no-cache nodejs npm
WORKDIR /app/
COPY . /app
RUN pip install -r production_requirements.txt \
&& front_end/noa-frontend/npm install
Certain bits of software might be available under different package names, so you'll have to check that out. In particular, names like libtool-ltdl-devel and the version-suffixed xmlsec1-1.2.20 are RPM-style names; on Debian-based images, the equivalents are closer to libltdl-dev and libxmlsec1-dev.
The instruction rm -rf /var/lib/apt/lists/* more or less negates apt-get update: APT is no longer able to access the list of available packages after that. Move the rm to the end of the RUN statement (and consider running apt-get clean as well).
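Putting both answers together, a sketch of the intended ordering on the Debian-based python:3.8.3 image (install everything first, clean the lists last; the xmlsec packages are omitted here because, as noted above, their Debian names likely differ from the question's):

```dockerfile
FROM python:3.8.3
RUN apt-get update \
    && apt-get install -y --no-install-recommends \
        postgresql-client \
        gcc \
        curl \
        gnupg \
    && curl -sL https://deb.nodesource.com/setup_14.x | bash - \
    && apt-get install -y nodejs \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*
```

Keeping the update, the installs, and the cleanup in a single RUN also prevents a stale package-list layer from being cached between builds.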

I am using Docker for Flask and pytesseract; the container is running, but I cannot access the page in the browser

Using this Dockerfile, and running it with:
docker run -p 5000:5000 flask_app:1.0
It runs, but the browser shows "127.0.0.1 refused to connect."
RUN apt-get update \
    && apt-get install tesseract-ocr -y \
        python3 \
        #python-setuptools \
        python3-pip \
    && apt-get clean \
    && apt-get autoremove
ADD . /home/App
WORKDIR /home/App
COPY requirements.txt ./
COPY . .
RUN pip3 install -r requirements.txt
VOLUME ["/data"]
EXPOSE 5000
ENTRYPOINT [ "python3" ]
CMD [ "app.py" ]
You are probably listening on interface 127.0.0.1. You need to listen on 0.0.0.0, e.g. app.run(host="0.0.0.0", port=5000).
Basically, the container and your host each have their own 127.0.0.1, so you need to bind to external interfaces. For more details, and diagrams, see https://pythonspeed.com/articles/docker-connection-refused/
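The same point can be demonstrated with only the standard library: a server bound to 0.0.0.0 is reachable through any interface, including 127.0.0.1 (Flask's app.run(host="0.0.0.0") does the equivalent of this bind):

```python
import http.server
import threading
import urllib.request

class Hello(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):  # keep request logging quiet
        pass

# Bind to all interfaces, as the answer recommends for containers;
# port 0 lets the OS pick a free port.
server = http.server.HTTPServer(("0.0.0.0", 0), Hello)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# A client on the same machine reaches it via 127.0.0.1; a client in
# another network namespace (e.g. the Docker host) can too, because
# the socket is not pinned to loopback.
body = urllib.request.urlopen(f"http://127.0.0.1:{port}/").read()
server.shutdown()
print(body.decode())  # ok
```

Had the server bound to ("127.0.0.1", 0) instead, only clients inside the same network namespace could connect, which is exactly the "refused to connect" symptom seen from the host.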

"http://127.0.0.1:8000/ might be temporarily down" after dockerized badgr-server

I'm trying to dockerize a Django/Python project, badgr-server, from here:
I succeeded in deploying it on localhost on Ubuntu 18.04 without Docker.
Then I tried to dockerize it; the build went well. When I did:
docker container run -it -p 8000:8000 badgr python root/badgr/code/manage.py runserver
there was nothing on localhost:8000.
Note: docker container run -it -p 8000:8000 badgr python ./manage.py won't work.
output:
?: (rest_framework.W001) You have specified a default PAGE_SIZE pagination rest_framework setting,without specifying also a DEFAULT_PAGINATION_CLASS.
HINT: The default for DEFAULT_PAGINATION_CLASS is None. In previous versions this was PageNumberPagination. If you wish to define PAGE_SIZE globally whilst defining pagination_class on a per-view basis you may silence this check.
System check identified 1 issue (0 silenced).
August 06, 2019 - 10:01:22
Django version 1.11.21, using settings 'mainsite.settings_local'
Starting development server at http://127.0.0.1:8000/
Quit the server with CONTROL-C.
I changed ALLOWED_HOSTS in settings_local.py to:
ALLOWED_HOSTS = ['*']
Thanks!
**Extra advice is more than welcome!**
this is the Dockerfile :
FROM ubuntu:18.04
# Preparation
RUN apt-get update
# Install server dependencies
RUN apt-get install -y curl git git-core python-virtualenv gcc python-pip python-dev libjpeg-turbo8 libjpeg-turbo8-dev zlib1g-dev libldap2-dev libsasl2-dev swig libxslt-dev automake autoconf libtool libffi-dev libcairo2-dev libssl-dev
RUN pip install virtualenv --upgrade
#RUN apt install libjpeg8-dev zlib1g-dev -y libcairo2
RUN pip install pillow
# Install database
RUN apt-get install -y libmariadbclient-dev zlib1g-dev libssl-dev
# Install main dependencies
RUN apt-get install -y libffi-dev libxslt-dev libsasl2-dev libldap2-dev
RUN apt-get install -y libmariadbclient-dev zlib1g-dev python-dev libssl-dev python-virtualenv
# Install other useful tools
RUN apt-get install -y git vim sudo curl unzip
RUN apt-get install -y sqlite3
# Cleaning
RUN apt-get clean
RUN apt-get purge
# ADD settings.py /root/settings.py
ADD settings_local.py /root/settings_local.py
# Install the backend
RUN mkdir ~/badgr \
&& cd ~/badgr \
&& git clone https://github.com/concentricsky/badgr-server.git code \
&& cd code \
&& pip install -r requirements.txt \
&& cp /root/settings_local.py apps/mainsite/ \
&& ./manage.py migrate \
&& ./manage.py dist
EXPOSE 8000
I ran docker container run --net=host -it -p 8000:8000 badgrrr python root/badgr/code/manage.py runserver and it worked!
Does anyone know why it doesn't work on the default network?
Is it wrong to run it like this?
Tx.
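The runserver output above shows the development server starting at http://127.0.0.1:8000/, i.e. bound to the container's loopback, which port publishing on the default bridge network cannot reach from the host; with --net=host the container shares the host's loopback, which is why that variant worked. A hedged sketch of the more usual fix is to tell runserver to bind to all interfaces instead:

```shell
docker container run -it -p 8000:8000 badgr \
  python root/badgr/code/manage.py runserver 0.0.0.0:8000
```

With the server listening on 0.0.0.0, the -p 8000:8000 mapping can forward host traffic into the container without needing --net=host.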
