Sentry aiohttp not working when started from Docker - python

I started Sentry using the recommended method for aiohttp as follows. When I start my script with "python [script name]", it works like a charm. However, when I start the same server inside a minimal Docker environment (based on python:3.8), it never captures errors. Is there a problem with Sentry's official recommended setup?
import sentry_sdk
from sentry_sdk.integrations.aiohttp import AioHttpIntegration

# Sentry
sentry_sdk.init(
    dsn="https://xxxx.ingest.sentry.io/12345",
    integrations=[AioHttpIntegration()]
)
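For reference, a minimal sketch of what such an aiohttp server might look like with this setup (the route, handler, and port below are my assumptions; the asker's actual script is not shown):

import sentry_sdk
from aiohttp import web
from sentry_sdk.integrations.aiohttp import AioHttpIntegration

sentry_sdk.init(
    dsn="https://xxxx.ingest.sentry.io/12345",
    integrations=[AioHttpIntegration()]
)

# Hypothetical handler that raises, so Sentry has something to capture
async def boom(request):
    raise RuntimeError("test error")

app = web.Application()
app.add_routes([web.get("/boom", boom)])

if __name__ == "__main__":
    # Bind to 0.0.0.0 so the server is reachable from outside a container
    web.run_app(app, host="0.0.0.0", port=5000)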
The server is running correctly, so it can't be that the library is missing. Indeed, it's in requirements.txt:
sentry-sdk==0.14.3
The Dockerfile couldn't be simpler:
FROM python:3.8
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
EXPOSE 5000
CMD [ "python", "file.py" ]

Related

Docker containers crashed: /bin/sh: 1: [uvicorn,: not found

I am new to Docker and trying to Dockerize my FastAPI application.
First I created a Dockerfile:
FROM python:3.9.9
WORKDIR /usr/src/app
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"]
Then ran the following command:
docker build -t fastapi .
The command ran successfully.
After that I created the following docker-compose.yml:
version: "3"
services:
api:
build: .
ports:
- 8000:8000
env_file:
./.env
Then ran the following command:
docker-compose up -d
Ran successfully:
Network fastapi_default Created 0.7s
- Container fastapi_api_1 Started
Then, to check if it's running properly, I ran the following command:
docker ps -a
And it showed that the container exited a few seconds after it was created.
Then I ran this command:
docker logs fastapi_api_1
And it says:
/bin/sh: 1: [uvicorn,: not found
Not sure what the reason is. I tried some solutions that I found online, but nothing worked out. I do have uvicorn in my requirements.txt file.
Help will be appreciated. Please let me know if additional information is required.
Note: You don't need to run docker build -t fastapi . manually. docker-compose will do it for you (because you set build: .). But you must run the up command with the --build parameter (docker-compose up --build) to force a rebuild of the image even if it already exists.
And about your problem:
Here is a very good article (and one more) about RUN, ENTRYPOINT and CMD
Here are the three forms of CMD:
CMD ["executable","param1","param2"] (exec form, preferred)
CMD ["param1","param2"] (sets additional default parameters for ENTRYPOINT in exec form)
CMD command param1 param2 (shell form)
According to the error, it looks like Docker is interpreting your CMD as the shell form, or as additional parameters for a default ENTRYPOINT.
I'm actually still not sure why this happens, but changing CMD to
CMD uvicorn app.main:app --host 0.0.0.0 --port 8000
or
ENTRYPOINT ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"]
should solve your problem
Also, it may be better to use the full path to the uvicorn executable (/usr/bin/uvicorn, or wherever it is installed by default). This is just my opinion, but that may be the reason why CMD is interpreted as parameters instead of a command.
PS: In addition, here is a note from the Docker docs:
Note
The exec form is parsed as a JSON array, which means that you must use double-quotes (“) around words not single-quotes (‘).
So the exec form syntax must be valid JSON.
So, basically there was something wrong with Docker. I had created multiple images. I removed all of them, ran the same commands again, and it worked. I don't know the exact reason, but it's working now.
What I think was happening is that instead of deleting the old images and creating a new one, I was just doing
docker-compose down
and then
docker-compose up -t
I think that command was not taking the changes into consideration.
Then I ran:
docker-compose up --build
and I think that created a new image and it worked.
Then I noticed that there were at least 10 images created. I deleted all of them and ran the same commands:
docker build .
docker-compose up -t
and it worked fine again.
So basically, instead of creating a new image, it was using the old one, which was not created correctly:
docker-compose up --build
In short, you should use docker-compose up --build whenever you make changes to your Dockerfile or docker-compose.yml, instead of docker-compose up -t.
It might be confusing but I am also very new to Docker.
Thanks for the help everyone!
I've had the same issue with a Dockerfile in my docker-compose environment containing
COPY ./requirements.txt /app/requirements.txt
RUN pip install -r /app/requirements.txt
RUN pip install uvicorn==0.20.0
CMD ["uvicorn", "--host", "0.0.0.0", "--port", "6000", "app:app"]
So I don't need an extra command: line in my docker-compose.yml.
It turned out that if you install uvicorn via your requirements.txt, as I like to do for testing purposes, then it gets installed there first, and
RUN pip install uvicorn==0.20.0
is skipped, which means there is no /usr/bin/uvicorn 'executable' available, just the package somewhere in site-packages, and CMD will fail.
So, if you have uvicorn in your requirements.txt and in the Dockerfile as well, you can force the reinstallation with
RUN pip install --ignore-installed uvicorn==0.20.0
in the Dockerfile, or set the PATH so the executable is found somewhere in the guts of Python, or - what I find is a better solution to keep the image size small - simply remove uvicorn from requirements.txt.
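As a quick sanity check (a diagnostic sketch of my own, not part of the original answer), you can open python inside the built image and see where, if anywhere, the console script and the package actually ended up:

import shutil
import uvicorn

# Path of the uvicorn console-script wrapper, or None if it is not on PATH
print(shutil.which("uvicorn"))

# Location of the package itself, typically .../site-packages/uvicorn/__init__.py
print(uvicorn.__file__)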

Could not locate a Flask application when creating a Docker Image

I hope someone can help me with this. The following error comes up when I try to run docker run todolist-docker:
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: off
Error: Could not locate a Flask application. You did not provide the "FLASK_APP" environment variable, and a "wsgi.py" or "app.py" module was not found in the current directory.
My folder structure is as follows:
Folder name: todoflaskappfinal
__pycache__
static
templates
venv
App.py
Dockerfile
requirements.txt
todo.db
ToDoList.db
Within the todoflaskappfinal folder, I have a Dockerfile:
# syntax=docker/dockerfile:1
FROM python:3.7.8
WORKDIR /tododocker
COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt
COPY . .
CMD [ "python3", "-m" , "flask", "run", "--host=0.0.0.0"]
And within App.py, I've set up everything (I assume) correctly, obviously with more code between these lines.
# Website configuration
from flask import Flask

app = Flask(__name__)

if __name__ == "__main__":
    app.run(debug=True, port=5000, host='0.0.0.0')
I've set FLASK_APP as App.py, made the virtual environment with venv, etc. When I type flask run in the terminal, it loads the website up correctly and displays it on 127.0.0.1. However, when I try to use the docker build --tag todolist-docker command and then docker run todolist-docker, the error message above appears. Can anyone see what I'm doing wrong?
Is FLASK_APP defined in the Docker container? There's no ENV statement in the Dockerfile, and you didn't mention using docker's -e or --env command option. Your container won't inherit environment variables from the host environment.
# syntax=docker/dockerfile:1
FROM python:3.7.8
WORKDIR /tododocker
COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt
COPY . .
# Add this:
ENV FLASK_APP=App.py
CMD [ "python3", "-m" , "flask", "run", "--host=0.0.0.0"]

docker stuck on django runserver

Dockerfile:
FROM python:3.6-slim
ENV root=/test
ENV django=$root/test
COPY ./code $root
WORKDIR $django
RUN pip install -r requirements.txt --no-cache-dir
CMD ["python3", "manage.py", "runserver", "--noreload"]
Without --noreload it gets stuck on
Watching for file changes with StatReloader
FYI, "docker run hello-world" is working fine.
FYI, running ubuntu on virtualbox on windows 10 home(as dev env)
UPDATE:
I have changed the base image to
FROM python:3.6
and it works, but the question remains: why does it not work with slim?
What is your DEBUG value in settings? Can you change it to False?
It's not related to the slim Docker image or any other image per se. Django is watching for changes in the code so it can hot-reload, which is used for development purposes. But inside Docker it's not required, as, I believe, you aren't changing your code there.
Also, use a WSGI/ASGI server for deployments - Gunicorn, Uvicorn, etc.
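If you do want DEBUG turned off only inside the container, one common pattern (my suggestion, not something taken from the answer above) is to read it from an environment variable in settings.py:

# settings.py (sketch): read DEBUG from the environment so the same image can
# run with DEBUG disabled in the container and enabled locally
import os

DEBUG = os.environ.get("DJANGO_DEBUG", "False").lower() in ("1", "true", "yes")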

How to run a scrapy spider in a flask app, from a docker container?

When I run my Flask app from a Docker container and call the appropriate endpoints defined in my Flask app, I receive the error message ERR_EMPTY_RESPONSE. The app uses Python's subprocess module to run Scrapy within Flask, as specified here (How to integrate Flask & Scrapy?). When I execute the Flask app outside of my Docker container (python app.py, where app.py has my Flask code), everything works as intended and my spiders are called via subprocess within the Flask app.
Instead of using Flask & subprocess to call my spiders within a web app, I tried using the twisted & twisted-klein Python libraries, with the same result when called from a Docker container. I have also created a new, clean Scrapy project, meaning no specific code of my own, just the standard Scrapy code and project structure upon creation. This resulted in the same error. I am not quite certain whether my approach is an anti-pattern, since Flask and Scrapy are bundled into one image, resulting in one container serving two purposes.
Here is my server.py code. When executing it outside a container (using the Python interpreter), everything works as intended.
When running it from a container, I receive the error message (ERR_EMPTY_RESPONSE).
# server.py
import subprocess
from flask import Flask
from medien_crawler.spiders.firstclassspider import FirstClassSpider

app = Flask(__name__)

@app.route("/")
def return_hello():
    return "Hello!"

@app.route("/firstclass")
def return_firstclass_comments():
    spider_name = "firstclass"
    response = subprocess.call(['scrapy', 'crawl', spider_name, '-a', 'start_url=https://someurl.com'])
    return "OK!"

if __name__ == "__main__":
    app.run(debug=True)
FROM python:3
WORKDIR /usr/src/app
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 5000
CMD [ "python", "./server.py" ]
Finally I run docker run -p 5000:5000 . It does not work. Any ideas?
Please try this Dockerfile:
FROM python:3.6
RUN apt-get update && apt-get install -y wget
WORKDIR /usr/src/app
ADD . /usr/src/app
RUN pip install --no-cache-dir -r requirements.txt
EXPOSE 5000
CMD [ "python", "./server.py" ]

What is a good way to add python dependencies to a Docker container?

I am trying to integrate Docker into my Django workflow, and I have everything set up except one really annoying issue. If I want to add dependencies to my requirements.txt file, I basically have to rebuild the entire container image for those dependencies to stick.
For example, I followed the docker-compose example for Django here. The YAML file is set up like this:
db:
  image: postgres
web:
  build: .
  command: python manage.py runserver 0.0.0.0:8000
  volumes:
    - .:/code
  ports:
    - "8000:8000"
  links:
    - db
and the Dockerfile used to build the web container is set up like this:
FROM python:2.7
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
ADD requirements.txt /code/
RUN pip install -r requirements.txt
ADD . /code/
So when the image for this container is built, requirements.txt is installed with whatever dependencies are initially in it.
If I am using this as my development environment it becomes very difficult to add any new dependencies to that requirements.txt file because I will have to rebuild the container for the changes in requirements.txt to be installed.
Is there some sort of best practice out there in the django community to deal with this? If not, I would say that docker looks very nice for packaging up an app once it is complete, but is not very good to use as a development environment. It takes a long time to rebuild the container so a lot of time is wasted.
I appreciate any insight. Thanks.
You could mount requirements.txt as a volume when using docker run (untested, but you get the gist):
docker run -v $(pwd)/requirements.txt:/code/requirements.txt container:tag
Then you could bundle a script with your container which will run pip install -r requirements.txt before starting your application, and use that as your ENTRYPOINT. I love the custom entrypoint script approach; it lets me do a little extra work without needing to make a new container.
That said, if you're changing your dependencies, you're probably changing your application and you should probably make a new container and tag it with a later version, no? :)
So I changed the yaml file to this:
db:
  image: postgres
web:
  build: .
  command: sh startup.sh
  volumes:
    - .:/code
  ports:
    - "8000:8000"
  links:
    - db
I made a simple shell script startup.sh:
#!/bin/bash
#restart this script as root, if not already root
[ `whoami` = root ] || exec sudo $0 $*
pip install -r dev-requirements.txt
python manage.py runserver 0.0.0.0:8000
and then made a dev-requirements.txt that is installed by the above shell script as sort of a dependency staging environment.
When I am satisfied with a dependency in dev-requirements.txt, I just move it over to requirements.txt to be committed to the next build of the image. This gives me the flexibility to play with adding and removing dependencies while developing.
I think the best way is to ignore what's currently the most common way to install Python dependencies (pip install -r requirements.txt) and specify your requirements directly in the Dockerfile, effectively getting rid of the requirements.txt file. Additionally, you get Docker's layer caching for free.
FROM python:2.7
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
# make sure you install requirements before the ADD, since everything after ADD is not cached
RUN pip install flask==0.10.1
RUN pip install sqlalchemy==1.0.6
...
ADD . /code/
If the Docker container is the only way your application is ever run, then I would suggest you do it this way. If you want to support other means of setting up your code (e.g. virtualenv), then this is of course not for you and you should fall back to either a requirements file or a setup.py routine. Either way, I found this approach to be the most simple and straightforward, without dealing with all the messed-up Python package distribution issues.
