Error when deploying Flask & Socket.IO script with Docker - python

First of all, to clarify some things:
This Python script works perfectly on my Windows machine (without Docker).
I am also using virtualenv on my local machine.
While running on my machine, I can easily connect to the socket server from my Android phone (a websocket tester app).
So now I am trying to run this websocket script (Flask & SocketIO) with Docker on my Ubuntu server in the cloud (DigitalOcean).
My Docker commands for deploying this:
docker build -t websocketserver .
docker run -d -p 5080:8000 --restart always --name my_second_docker_running websocketserver
The script runs fine, BUT when I try to connect to it from my phone, I see errors when running: docker logs --tail 500 my_second_docker_running
The error is:
Traceback (most recent call last):
  File "/opt/company/project/venv/lib/python3.8/site-packages/gunicorn/workers/sync.py", line 134, in handle
    self.handle_request(listener, req, client, addr)
  File "/opt/company/project/venv/lib/python3.8/site-packages/gunicorn/workers/sync.py", line 175, in handle_request
    respiter = self.wsgi(environ, resp.start_response)
TypeError: __call__() takes 1 positional argument but 3 were given
My requirements.txt:
Flask==1.1.1
Flask-SocketIO==3.0.1
aiohttp-cors==0.7.0
asyncio==3.4.3
gunicorn==20.0.4
My Dockerfile:
FROM ubuntu:latest
MAINTAINER raxor2k "xxx.com"
RUN apt-get update -y
#RUN apt-get install -y python3-pip build-essential python3-dev
RUN apt-get install -y build-essential python3-dev python3-venv
COPY . /app
WORKDIR /app
RUN python3 -m venv /opt/company/project/venv
RUN /opt/company/project/venv/bin/python -m pip install -r requirements.txt
#ENTRYPOINT ["gunicorn"]
ENTRYPOINT ["/opt/company/project/venv/bin/gunicorn"]
CMD ["main:app", "-b", "0.0.0.0"]
And finally, my main.py file:
from aiohttp import web
import socketio
import aiohttp_cors
import asyncio
import logging

# creates a new Async Socket IO Server
sio = socketio.AsyncServer()
# creates a new aiohttp web application and attaches the Socket.IO server to it
app = web.Application()
sio.attach(app)

# AIOSerial now logs! uncomment below for debugging
logging.basicConfig(level=logging.DEBUG)

async def index(request):
    with open('index.html') as f:
        print("Somebody entered the server from the browser!")
        return web.Response(text=f.read(), content_type='text/html')

@sio.on("android-device")
async def message(sid, data):
    print("message: ", data)

@sio.on("device-id")
async def message(sid, android_device_id):
    print("DEVICE ID: ", android_device_id)

@sio.on("disconnected-from-socket")
async def message(sid, disconnected_device):
    print("Message from client: ", disconnected_device)

async def send_message_to_client():
    print("this method got called!")
    await sio.emit("SuperSpecialMessage", {"Message from server:": "MESSAGE FROM SENSOR"})

# We bind our aiohttp endpoint to our app router
cors = aiohttp_cors.setup(app)
app.router.add_get('/', index)

# We kick off our server
if __name__ == '__main__':
    print("websocket server is running!")
    the_asyncio_loop = asyncio.get_event_loop()
    run_the_websocket = asyncio.gather(web.run_app(app))
    run_both_loops_together = asyncio.gather(run_the_websocket)
    results = the_asyncio_loop.run_until_complete(run_both_loops_together)
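(As an aside, web.run_app() is a blocking call that creates and runs its own event loop, so wrapping it in asyncio.gather() and run_until_complete() is redundant; gather() expects awaitables, while web.run_app() returns None. When not running under gunicorn, a minimal sketch of the usual pattern is:

if __name__ == '__main__':
    print("websocket server is running!")
    web.run_app(app, port=8000)
)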
Could someone please help me solve this issue? Could someone perhaps try running this code yourself to see if you get the same error?
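A note on the traceback: gunicorn's default sync worker assumes a WSGI application and calls it as app(environ, start_response), but main.py builds an aiohttp web.Application, which is not a WSGI callable; hence "__call__() takes 1 positional argument but 3 were given". If the aiohttp-based app is kept, one hedged fix is to run it under aiohttp's own gunicorn worker (and to make sure python-socketio and aiohttp are actually in requirements.txt, which currently lists only the Flask stack), for example:

CMD ["main:app", "-b", "0.0.0.0:8000", "--worker-class", "aiohttp.GunicornWebWorker"]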

I decided to follow this example instead: https://github.com/miguelgrinberg/Flask-SocketIO
It works pretty much the same way as my code, and everything is fine now.
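For reference, a minimal sketch of that pattern (the event names are borrowed from the script above; the rest is illustrative, based on the Flask-SocketIO README):

from flask import Flask
from flask_socketio import SocketIO, emit

app = Flask(__name__)
socketio = SocketIO(app)

@socketio.on("android-device")
def handle_android_device(data):
    print("message: ", data)
    emit("SuperSpecialMessage", {"Message from server:": "MESSAGE FROM SENSOR"})

if __name__ == '__main__':
    socketio.run(app, host='0.0.0.0')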

Related

Sending a file from a Docker container to an SFTP server raises the error: sock.connect(addr)

I have the following Python code that works locally:
import paramiko
import os
import subprocess
from flask import Flask, request

app = Flask(__name__)

@app.route("/")
def hello():
    return "Le service export dump est accessible dans K8S!"

@app.route("/get", methods=['GET'])
def get_ano():
    print("Test liveness")
    return "Pod is alive !"

@app.route("/run", methods=['POST'])
def run_dump_generation():
    print("---- Fetching parameters from flowfile attributes ! ----")
    rules_str = request.headers.get('database')
    print(rules_str)
    postgres_bin = r"/usr/bin/"
    dump_file = "database_dump.sql"
    os.environ['PGPASSWORD'] = 'XXX'
    print('Before dump generation')
    with open(dump_file, "w") as f:
        result = subprocess.call([
            os.path.join(postgres_bin, "pg_dump"),
            "-Fp",
            "-d", "X",
            "-U", "pgsqladmin",
            "-h", "hostname_string_value",
            "-p", "X"
        ],
        stdout=f
        )
    print('After dump generation')
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    print("1 ---- ", ssh)
    session = ssh.connect(hostname="hostname_string_value", port=X, username='X', password="X")
    print("2 ---- Auth OK")
    sftp_connection = ssh.open_sftp()
    print('3 ----', sftp_connection)
    sftp_connection.put("database_dump.sql", "/data/database_dump.sql")
    print("After SSH")
    print("---- Dump generated ! ----")
    return "Dump generated and loaded to SFTP"

if __name__ == "__main__":
    app.run(host='0.0.0.0', debug=True)
It is a Flask app where, with a POST request, I generate data and send it to an SFTP server.
My final goal is to have a Docker container that this app runs in, so I can call it from an ETL pipeline.
So I wrote the following Dockerfile to build an image:
FROM python:3.8-alpine
USER root
WORKDIR /dump_generator_api
COPY requirements.txt ./
RUN python3 -m pip install --upgrade pip
RUN apk add --no-cache --update python3-dev gcc libc-dev libffi-dev && pip3 install --no-cache-dir -r requirements.txt
RUN apk add postgresql openssh sshpass expect curl
ADD . /dump_generator_api
EXPOSE 5000
CMD ["python", "/dump_generator_api/app.py"]
The build works well.
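Since the error surfaces at sock.connect(addr), a first step is usually to confirm that the SFTP host is reachable from inside the container at all; DNS and routing can differ from the machine where the code worked locally. A minimal probe, with the hostname and port as placeholders since the question elides the real values:

import socket

# hypothetical probe: substitute the real SFTP host and port from the question
try:
    socket.create_connection(("hostname_string_value", 22), timeout=5).close()
    print("TCP connection OK")
except OSError as exc:
    print("Cannot reach SFTP server:", exc)

If this fails inside the container but works on the host, the problem is network configuration rather than paramiko.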

Cron job not running in a dockerized FastAPI app

I'm working on an app for my homelab, where I have an Intel NUC that I use for some web scraping tasks. The NUC is accessible on my home network via 192.xxx.x.xx.
On the NUC I've set up nginx to proxy incoming HTTP requests to a Docker container. In that container I've got a basic FastAPI app to handle the requests.
app/main.py
import os
from pathlib import Path
from fastapi import FastAPI

app = FastAPI()
cron_path = Path(os.getcwd(), "app", "cron.log")

@app.get("/cron")
def cron():
    with cron_path.open("rt") as cron:
        return {"cron_state": cron.read().split("\n")}
app/cronjob.py
import os
from pathlib import Path
from datetime import datetime

cron_path = Path(os.getcwd(), "app", "cron.log")

def append_time():
    with cron_path.open("rt") as filein:
        text = filein.read()
    text += f"\n{datetime.utcnow().strftime('%Y-%m-%dT%H:%M:%SZ')}"
    with cron_path.open("wt") as fileout:
        fileout.write(text)

if __name__ == "__main__":
    append_time()
cron-job
* * * * * python3 /code/app/cronjob.py
# An empty line is required at the end of this file for a valid cron file.
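(As an aside, cron runs jobs with a minimal environment, so python3 may not resolve the way it does in an interactive shell; in the official python:3.10-slim images the interpreter lives at /usr/local/bin/python3. A hedged variant of the crontab line that also captures errors for debugging:

* * * * * /usr/local/bin/python3 /code/app/cronjob.py >> /code/app/cron.log 2>&1
)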
Dockerfile
FROM python:3.10-slim-buster
#
WORKDIR /code
COPY ./cron-job /etc/cron.d/cron-job
COPY ./app /code/app
COPY ./requirements.txt /code/requirements.txt
# Give execution rights on the cron job
RUN chmod 0644 /etc/cron.d/cron-job
#Install Cron
RUN apt-get update
RUN apt-get -y install cron
# Apply cron job
RUN crontab /etc/cron.d/cron-job
#
RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt
#
EXPOSE 8080
CMD crontab ; uvicorn app.main:app --host 0.0.0.0 --port 8080
I can access the app without issues, but I can't seem to get the cron job to run while FastAPI is running. Is what I'm attempting better suited to a pure Python solution like from fastapi_utils.tasks import repeat_every, or is there something I'm missing?
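One likely culprit is the CMD line itself: crontab with no arguments just waits to read a new crontab from stdin; it never starts the cron daemon, so no jobs fire. A hedged sketch of a Dockerfile ending that starts the daemon before uvicorn, and creates the log file (both main.py and cronjob.py open it for reading, so it has to exist):

# Ensure the log file exists, since both main.py and cronjob.py open it with "rt"
RUN touch /code/app/cron.log
# Start the cron daemon (it forks to the background), then the API server
CMD cron && uvicorn app.main:app --host 0.0.0.0 --port 8080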

How to deploy FastAPI as a backend in a Docker container to Heroku

I am trying to deploy a FastAPI todo app to Heroku via the container registry. When I build the Docker image and run it locally, I can access Swagger at http://localhost:8001/docs. But I cannot access it once deployed to Heroku, where I get this error:
Error: Exec format error
Here is my main.py
from fastapi import FastAPI, Depends
from fastapi.middleware.cors import CORSMiddleware
from db.db import get_db
from bson.objectid import ObjectId
from models.todo import Todo

app = FastAPI()
app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)

@app.get("/")
def root():
    return {"message": "Hello World"}

@app.get("/api/todo", response_model=Todo)
async def get_todo(db=Depends(get_db)):
    data = await db.todo.find()
    result: Todo = Todo(**data)
    return result

@app.get("/api/todo/{todo_id}", response_model=Todo)
async def get_todo(todo_id: str, db=Depends(get_db)):
    data = await db.todo.find_one({"_id": ObjectId(todo_id)})
    result: Todo = Todo(**data)
    return result

@app.post("/api/todo")
async def create_todo(db=Depends(get_db), payload: Todo = None):
    result = await db.todo.insert_one(payload.dict())
    return {"message": "Todo created successfully", "todo_id": str(result.inserted_id)}

@app.put("/api/todo/{todo_id}")
async def update_todo(todo_id: str, db=Depends(get_db), payload: Todo = None):
    result = db.todo.update_one({"_id": ObjectId(todo_id)}, {"$set": payload.dict()})
    return {"message": "Todo updated successfully", "todo_id": str(result.inserted_id)}

@app.delete("/api/todo/{todo_id}")
async def delete_todo(todo_id: str, db=Depends(get_db)):
    result = db.todo.delete_one({"_id": ObjectId(todo_id)})
    return {"message": "Todo deleted successfully", "todo_id": str(result.inserted_id)}
and my Dockerfile
FROM python:3.8
WORKDIR /code
COPY requirements.txt /code/
RUN pip install -r requirements.txt
COPY . /code/
# EXPOSE 8000
RUN chmod +x run.sh
ENTRYPOINT ["/bin/bash", "-c" ,"./run.sh"]
I tried using CMD instead of ENTRYPOINT, which did not work. I also tried using:
CMD ["uvicorn", "main:app","--proxy-headers", "--host", "${HOST}", "--port", "${PORT}"]
run.sh
#!/bin/sh
export APP_MODULE=${APP_MODULE-main:app}
export HOST=${HOST:-0.0.0.0}
export PORT=${PORT:-8001}
# run gunicorn
gunicorn --bind $HOST:$PORT "$APP_MODULE" -k uvicorn.workers.UvicornWorker
requirements.txt
fastapi
uvicorn
motor
gunicorn
You are building the image locally (on a Mac?) for a platform that is not compatible with Heroku (linux/amd64).
Set the platform when you build/push the image to the Heroku registry:
DOCKER_DEFAULT_PLATFORM=linux/amd64 heroku container:push web -a myapp
You can also set DOCKER_DEFAULT_PLATFORM as an environment variable (to avoid setting it every time; note that all images will then be linux/amd64):
export DOCKER_DEFAULT_PLATFORM=linux/amd64
Try using a heroku.yml file instead of CMD to start the server when it's deployed to Heroku.
I was unable to connect to my server until I switched to heroku.yml.
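A minimal heroku.yml for this setup might look like the following (assuming the Dockerfile sits at the repository root and run.sh stays the start script; Heroku injects $PORT at runtime, which run.sh already respects):

build:
  docker:
    web: Dockerfile
run:
  web: ./run.sh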

Using RabbitMQ from a Docker container fails with pika.exceptions.AMQPConnectionError

I am trying to learn how to use Docker and RabbitMQ at the same time.
# Specify the base image
FROM python:3.10
# Add the python file that we want to run in docker and define its location
ADD ./task.py ./home/
# Install the dependencies that task.py needs
RUN pip install requests celery pika
# Lastly, specify the entry command; this line simply runs the script with python
CMD [ "python3", "/home/task.py" ]
This is what my Dockerfile looks like. I then set up another container with RabbitMQ using the command:
docker run -it --rm --name rabbitmq -p 5672:5672 -p 15672:15672 rabbitmq:3.9-management
This is what task.py looks like :
from celery import Celery
import pika
from time import sleep

connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost', port=5672))
channel = connection.channel()
channel.queue_declare(queue='hello')
channel.basic_publish(exchange='',
                      routing_key='hello',
                      body='Hello World!')
connection.close()

app = Celery('task', broker="localhost")

@app.task()
def reverse(text):
    sleep(5)
    return text[::-1]
And when I run the docker run command, I keep getting this error:
PS C:\Users\xyz\PycharmProjects\Sellerie> docker run sellerie
Traceback (most recent call last):
  File "/home/task.py", line 5, in <module>
    connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost', port=5672))
  File "/usr/local/lib/python3.10/site-packages/pika/adapters/blocking_connection.py", line 360, in __init__
    self._impl = self._create_connection(parameters, _impl_class)
  File "/usr/local/lib/python3.10/site-packages/pika/adapters/blocking_connection.py", line 451, in _create_connection
    raise self._reap_last_connection_workflow_error(error)
pika.exceptions.AMQPConnectionError
Can anyone help me better understand where the problem is, and maybe how to connect RabbitMQ to the other Docker container where my Python file is located?
Thank you so much in advance.
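A common cause of this error: inside the sellerie container, localhost refers to that container itself, not to the RabbitMQ container or to the Docker host. One fix is to put both containers on the same user-defined network (named sellerie-net here purely for illustration) and use the rabbitmq container name as the hostname:

docker network create sellerie-net
docker run -d --name rabbitmq --network sellerie-net -p 5672:5672 -p 15672:15672 rabbitmq:3.9-management
docker run --network sellerie-net sellerie

with the connection line in task.py changed accordingly:

connection = pika.BlockingConnection(pika.ConnectionParameters(host='rabbitmq', port=5672))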

Can't access Swagger server inside Docker container

I have a Swagger server API in Python that I can run on my PC and easily access through the web UI. I'm now trying to run this API in a Docker container on a remote server. After running the docker run command on the remote server everything seems to be working fine, but when I try to connect I get an ERR_CONNECTION_REFUSED response. The funny thing is that if I enter the container, the Swagger server is working and answers my requests.
Here is my Dockerfile:
FROM python:3
MAINTAINER Me
ADD . /myprojectdir
WORKDIR /myprojectdir
RUN pip install -r requirements.txt
RUN ["/bin/bash", "-c", "chmod 777 {start.sh,stop.sh,restart.sh,test.sh}"]
Here are my commands to build/run:
sudo docker build -t mycontainer .
sudo docker run -d -p 33788:80 mycontainer ./start.sh
Here is the start.sh script:
#!/bin/bash
echo $'\r' >> log/server_log_`date +%Y%m`.dat
python3 -m swagger_server >> log/server_log_`date +%Y%m`.dat 2>&1
And the main.py of the swagger server:
#!/usr/bin/env python3
import connexion
from .encoder import JSONEncoder

if __name__ == '__main__':
    app = connexion.App(__name__, specification_dir='./swagger/')
    app.app.json_encoder = JSONEncoder
    app.add_api('swagger.yaml', arguments={'title': 'A title'})
    app.run(port=80, threaded=True, debug=False)
Does anyone know why I can't access myremoteserver:33788/myservice/ui, and what to change to solve it?
Thanks in advance.
I finally managed to find the solution: you need to tell the Flask server behind Connexion to run on 0.0.0.0, so that connections from outside the container are accepted, and to change the URL in swagger.yaml to the name of the server where the Docker container is located:
app.run(port=80, threaded=True, debug=False, host='0.0.0.0')
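For reference, the amended __main__ block then reads:

if __name__ == '__main__':
    app = connexion.App(__name__, specification_dir='./swagger/')
    app.app.json_encoder = JSONEncoder
    app.add_api('swagger.yaml', arguments={'title': 'A title'})
    app.run(port=80, threaded=True, debug=False, host='0.0.0.0')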
