Mongodb error [pymongo.errors.ServerSelectionTimeoutError] in flask web-service - python

So I made a Flask web service that connects to a MongoDB; the web service will be in one Docker container and the MongoDB in another. When I run the web service locally on my computer it connects to the MongoDB container and everything works fine. When I have them in two separate containers as described above, GET methods work just fine, but POST methods never work: every time I try to change or insert something in the db, the program just freezes for a few seconds and then I get the timeout error. One of the endpoints of the web service that has a problem is this (I will also add the beginning of the file so you can see the connection with the db):
from flask import Flask, request, Response, jsonify
import json
from pymongo import MongoClient
from datetime import date

client = MongoClient("mongodb://localhost:27017")
db = client["DigitalNotes"]
users = db["Users"]
admins = db["Admins"]
notes = db["Notes"]

app = Flask(__name__)

@app.route("/sign_up", methods=["POST"])
def sign_up():
    try:
        data = json.loads(request.data)
    except Exception:
        return Response("Bad json content", status=500, mimetype='application/json')
    if data is None:
        return Response("Bad request", status=500, mimetype='application/json')
    if "email" not in data or "username" not in data or "name" not in data or "password" not in data:
        return Response("Information incomplete", status=500, mimetype='application/json')
    # only insert when neither the email nor the username is already taken
    if users.count_documents({"email": data["email"]}) == 0 and users.count_documents({"username": data["username"]}) == 0:
        user = {"email": data["email"], "username": data["username"], "name": data["name"], "password": data["password"]}
        users.insert_one(user)
        return Response(data["username"] + " was added to the DataBase", status=200, mimetype='application/json')
    else:
        return Response("A user with the given username or email already exists", status=200, mimetype='application/json')
and the Dockerfile that creates the container is this:
FROM ubuntu:20.04
RUN apt-get update
RUN apt-get install -y python3 python3-pip
RUN pip3 install flask pymongo
RUN mkdir /app
COPY app.py /app/app.py
EXPOSE 5000
WORKDIR /app
ENTRYPOINT ["python3","-u","app.py"]
The MongoDB container is created in a docker-compose file, which is this:
version: '2'
services:
  mongodb:
    image: mongo
    restart: always
    container_name: mongodb
    ports:
      - 27017:27017
    volumes:
      - ./mongodb/data:/data/db
  flask-service:
    build:
      context: ./flask-service
    restart: always
    container_name: flask
    depends_on:
      - mongodb
    ports:
      - 5000:5000
    environment:
      - "MONGO_HOSTNAME=mongodb"
I just don't understand why it works fine locally and not in the containers. Can anyone please help??

The connection string to MongoDB in the Flask container should be mongodb://mongodb:27017, because containers on the same Docker network reach each other by service name, not by localhost.
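Since the compose file already sets MONGO_HOSTNAME=mongodb for the flask-service, a minimal sketch of the connection setup could read the host from that variable, with a localhost fallback for runs outside Docker (the fallback is an addition for illustration, not something in the question's code):

import os
from pymongo import MongoClient

# MONGO_HOSTNAME is set to "mongodb" in the docker-compose file;
# fall back to localhost when running outside the containers
mongo_host = os.environ.get("MONGO_HOSTNAME", "localhost")
client = MongoClient(f"mongodb://{mongo_host}:27017")
db = client["DigitalNotes"]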

Related

sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) could not translate host name "postgres" to address: nodename nor servname provided

I am following a YouTube tutorial to create an API using FastAPI. I am new to both Docker and FastAPI. I have managed to create a Docker container for the API and another one for the Postgres database using this Dockerfile and docker-compose:
Dockerfile:
FROM python:3.9.7
WORKDIR /usr/src/app
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
docker-compose:
version: '3'
services:
  api:
    build: .
    depends_on:
      - postgres
    ports:
      - 80:8000
    volumes:
      - ./:/usr/src/app:ro
    env_file:
      - ./.env
    command: bash -c "uvicorn app.main:app --host 0.0.0.0 --port 8000 --reload"
  postgres:
    image: postgres
    ports:
      - 5432:5432
    env_file:
      - ./.env
    volumes:
      - postgres-db:/var/lib/postgresql/data
volumes:
  postgres-db:
I am also using Alembic, so after setting up both containers, I run
docker-compose run api alembic upgrade head
in order to create the corresponding tables.
Everything runs smoothly when testing with Postman. I can run GET and POST queries that interact with the database and there are no issues. The problem comes when trying to test the API code using TestClient from FastAPI. I am running two tests:
from fastapi.testclient import TestClient
from app.main import app

client = TestClient(app)

def test_root():
    response = client.get("/")
    assert response.json().get('message') == "Welcome to my API"
    assert response.status_code == 200

def test_create_user():
    response = client.post("/users/", json = {"email": "hashed_email1222@gmail.com", "password": "test"})
    assert response.status_code == 201
The first one passes without a problem, but the second one fails with the following error message:
FAILED tests/test_users.py::test_create_user - sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) could not translate host name "postgres" to address: nodename nor servname provided, or not known
I cannot figure out what I am doing wrong. Since there are no issues when testing using Postman, I am not sure if there are any configurations that need to be set in order to use TestClient.
Could someone help me?

Implement pytest over FastAPI app running in Docker

I've created a FastAPI app with a Postgres DB, which lives in a Docker container.
So now I have docker-compose.yml file with my app and postgres DB:
version: '3.9'
services:
  app:
    container_name: app_container
    build: .
    volumes:
      - .:/code
    ports:
      - '8000:8000'
    depends_on:
      - my_database
    #networks:
    #  - postgres
  my_database:
    container_name: db_container
    image: postgres
    environment:
      POSTGRES_DB: dbf
      POSTGRES_USER: myuser
      POSTGRES_PASSWORD: password
    volumes:
      - postgres:/data/postgres
    ports:
      - '5432:5432'
    restart: unless-stopped
volumes:
  postgres:
And now I want to run pytest over my DB, testing both the endpoints and the DB itself.
BUT, when I run the python -m pytest command I get the error "could not translate host name "my_database"", because in my database.py file I have set DATABASE_URL = 'postgresql://myuser:password@my_database'. According to the user guide, when I build the docker-compose file, I must put the name of the service in DATABASE_URL instead of the hostname.
Does anyone have an idea how to solve it?!
The problem is that if you use docker-compose to run your app in one container and the database in another, and then run pytest from the host machine, it is as if your DB were not launched: pytest can't connect to it. Running pytest this way is the wrong approach!
To run pytest correctly you should:
In DATABASE_URL you must write the name of the service instead of the name of the host! In my case my_database is the name of the service in the docker-compose.yml file, so I should set it as the hostname, like: DATABASE_URL = postgresql://<username>:<password>@<name of service> (see the sketch after these steps).
pytest must be run in the app container! What does that mean? First of all, start your containers: docker-compose up --build, where --build is optional (it just rebuilds your images if you made some changes to the code in your program files). After this, you should jump into the app container. It can be done from the Docker Desktop application on your computer or through the terminal. To do it from a terminal window:
run docker exec -it <name of container with your application> bash. You will dive into the container, and after this you can simply run pytest or python -m pytest. And your tests will run as always.
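A minimal sketch of what database.py could look like under these assumptions (the service name my_database and the credentials come from the docker-compose.yml above; the port is the Postgres default, and the database name dbf is taken from the compose environment):

from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

# "my_database" is the compose service name, resolvable from inside the
# app container; user, password and db follow the POSTGRES_* values above
DATABASE_URL = "postgresql://myuser:password@my_database:5432/dbf"

engine = create_engine(DATABASE_URL)
SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)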
If you have any questions you can write to me anytime)))
So, the reason for this error was that I ran pytest and it tried to connect to the DATABASE_URL host which, em... had not been launched yet (as I understand it).

Persisting mysql database with docker

I am trying to containerise a Python script and MySQL database using Docker. The python script interacts with a program running on the host machine using a TCP connection, so I've set up a "host" network for the Docker containers to allow this. The python script is currently speaking to the program on the host machine fine (TCP comms are as expected). The python script is also communicating with the MySQL database running in the other container fine (no errors from pymysql). When I use the Docker Desktop CLI interface I can see the timestamps on the files in /var/lib/mysql/donuts/*.ibd on the database container updating as the python code pushes info into the tables.
However, my problem is that when I bring both containers down using docker compose down and then bring them up again using docker compose up, the information in the database does not persist. Actually, if I enter the database container through the CLI and use mysql -u donuts to manually inspect the tables while the containers are running, both tables are completely empty. I've been going in circles trying to find out why I cannot see the data in the tables even though I see the files in /var/lib/mysql/donuts/*.ibd updating at the same instant the Python container is inserting rows. The data is being stored somewhere while the containers are running, at least temporarily, as the Python container is reading from one of the tables and using that information while the containers are alive.
Below are my Dockerfile and docker-compose.yml files and the entire project can be found here. The python code that interacts with the database is here, but I think the issue must be with the Docker setup, rather than the Python code.
Any advice on making the database persistent would be much appreciated, thanks.
version: '3.1'
services:
  db:
    image: mysql:8.0.25
    container_name: db
    restart: always
    secrets:
      - mysql_root
    environment:
      MYSQL_ROOT_PASSWORD_FILE: /run/secrets/mysql_root
      MYSQL_DATABASE: donuts
    volumes:
      - mysql-data:/var/lib/mysql
      - ./mysql-init.sql:/docker-entrypoint-initdb.d/mysql-init.sql
    network_mode: "host"
  voyager_donuts:
    container_name: voyager_donuts
    build:
      context: .
      dockerfile: Dockerfile
    image: voyager_donuts
    network_mode: "host"
    volumes:
      - c:/Users/user/Documents/Voyager/DonutsCalibration:/voyager_calibration
      - c:/Users/user/Documents/Voyager/DonutsLog:/voyager_log
      - c:/Users/user/Documents/Voyager/DonutsData:/voyager_data
      - c:/Users/user/Documents/Voyager/DonutsReference:/voyager_reference
volumes:
  mysql-data:
secrets:
  mysql_root:
    file: ./secrets/mysql_root
# get a basic python image
FROM python:3.9-slim-buster
# set up Tini to handle zombie processes etc
ENV TINI_VERSION="v0.19.0"
ADD https://github.com/krallin/tini/releases/download/${TINI_VERSION}/tini /tini
RUN chmod +x /tini
# keep setup tools up to date
RUN pip install -U \
pip \
setuptools \
wheel
# set a working directory
WORKDIR /donuts
# make a new user
RUN useradd -m -r donuts && \
chown donuts /donuts
# install requirements first to help with caching
COPY requirements.txt ./
RUN pip install -r requirements.txt
# copy from current dir to workdir
COPY . .
# stop things running as root
USER donuts
# add entry points
ENTRYPOINT ["/tini", "--"]
# start the code once the container is running
CMD python voyager_donuts.py
And of course as soon as I post this I figure out the answer. My database connection context manager was missing the commit() line. Le sigh, I've spent much longer than I care to admit on figuring this out...
@contextmanager
def db_cursor(host='127.0.0.1', port=3306, user='donuts',
              password='', db='donuts'):
    """
    Grab a database cursor
    """
    with pymysql.connect(host=host, port=port, user=user,
                         password=password, db=db) as conn:
        with conn.cursor() as cur:
            yield cur
should have been:
@contextmanager
def db_cursor(host='127.0.0.1', port=3306, user='donuts',
              password='', db='donuts'):
    """
    Grab a database cursor
    """
    with pymysql.connect(host=host, port=port, user=user,
                         password=password, db=db) as conn:
        with conn.cursor() as cur:
            yield cur
        # commit once the caller is done with the cursor
        conn.commit()
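A hypothetical usage sketch of the fixed context manager (the table and column names here are made up for illustration):

# conn.commit() now runs after the cursor block exits,
# so this insert actually persists across container restarts
with db_cursor(host='127.0.0.1', user='donuts', db='donuts') as cur:
    cur.execute("INSERT INTO example_table (example_col) VALUES (%s)", ("value",))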

Create a docker image of the mongoDB- atlas version

Hello, I want to containerize my Flask app and my Mongo and connect them. I have already containerized the Flask app.
my current code:
Dockerfile for flask container
FROM python:3.8-buster
WORKDIR /ergasiav3
ADD . /ergasiav3
RUN pip install -r requirements.txt
CMD ["python","app.py"]
I have a db.py with the connection to MongoDB; here is the code:
from flask_pymongo import pymongo
CONNECTION_STRING = "mongodb+srv://admin:admin@cluster0.sqowy.mongodb.net/InfoCinemas?retryWrites=true&w=majority"
client = pymongo.MongoClient(CONNECTION_STRING)
db = client.get_database('InfoCinemas')
Users = pymongo.collection.Collection(db, 'Users')
Movies = pymongo.collection.Collection(db, 'Movies')
I also created this docker-compose.yml, which seems to work, but I don't know how to get the Mongo as an image too.
version: '3'
services:
  infocinemas:
    build: .
    volumes:
      - ./ergasiav3
    ports:
      - 5000:5000
Do I need to make a second Dockerfile, or is the docker-compose.yml enough for the containerization of MongoDB?
Thank you in advance!
You don't need a separate Mongo container; your data is in Atlas.
https://www.mongodb.com/compatibility/docker
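A minimal sketch of the idea, keeping the Atlas URI out of the image by passing it in as an environment variable (MONGO_URI is a made-up variable name for illustration, not something the question's compose file already sets):

import os
from flask_pymongo import pymongo

# read the Atlas connection string from the environment instead of
# hard-coding the credentials in db.py
CONNECTION_STRING = os.environ["MONGO_URI"]
client = pymongo.MongoClient(CONNECTION_STRING)
db = client.get_database('InfoCinemas')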
I also had this same question today, as I have just started with Docker.
There is no need for a separate container if you are using Atlas.
The information is located at the end of the article.

Connection Refused on MongoDB Docker Container from Flask Docker Container

I have two Docker containers:
Flask app
MongoDB
The Flask app has a Dockerfile that looks like this.
FROM alpine:latest
RUN apk add --no-cache python3-dev py3-pip \
    && pip3 install --upgrade pip
WORKDIR /app
COPY . /app
RUN pip3 --no-cache-dir install -r requirements.txt
EXPOSE 5000
ENTRYPOINT ["python3"]
CMD ["app.py"]
This is how I am connecting to my local Mongo (not a container) from Flask:
mongo_uri = "mongodb://host.docker.internal:27017/myDB"
appInstance.config["MONGO_URI"] = mongo_uri
mongo = PyMongo(appInstance)
MongoDB is running in its container at mongodb://0.0.0.0:2717/myDB.
Obviously, when I run the Flask container with the local mongo URI, mongodb://host.docker.internal:27017/myDB, everything works. But it doesn't work when I try to connect to the Mongo container in the same way, because the Flask container doesn't know anything about that Mongo container.
My question is: how do I connect this Mongo container with the Flask container so that I can query the Mongo container from the Flask container?
Thanks in advance.
If I were you, I would use docker-compose.
Solution just using docker
You'd have to find out the IP address of your mongo container and put this IP in the flask configuration file (see the sketch after the command below). Keep in mind that the IP address of the container can change, for example if you use a newer image.
Find IP address:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container_name_or_id
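A hypothetical sketch of the Flask side under this approach (172.17.0.2 is only an example address; substitute whatever docker inspect printed):

from flask import Flask
from flask_pymongo import PyMongo

appInstance = Flask(__name__)
# example only: the inspected address changes whenever the container is
# recreated, which is exactly why the docker-compose solution below is nicer
appInstance.config["MONGO_URI"] = "mongodb://172.17.0.2:27017/myDB"
mongo = PyMongo(appInstance)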
Solution using docker-compose
In your docker-compose file you'd define two services - one for flask and one for mongo. In the flask configuration file you can then access the mongo container with its service name as both services run in the same network.
docker-compose.yml:
services:
  mongo:
    ...
  flask:
    ...
flask configuration:
mongo_uri = "mongodb://mongo/myDB"
In this example mongo is the name for your mongo service.
