I'm trying to use Heroku to deploy a simple Python script that I have written and dockerized, and I have added a heroku.yml. Since this is a script that I would like to run daily, not an API or a website, I concluded that it should be deployed as a Heroku worker container rather than a web container.
The containerized script works locally on my own PC, but on Heroku the logs say:
2022-02-25T06:09:13.899148+00:00 heroku[worker.1]: State changed from up to crashed
2022-02-25T06:09:13.930822+00:00 heroku[worker.1]: State changed from crashed to starting
2022-02-25T06:09:15.148500+00:00 heroku[worker.1]: Starting process with command `python -u main.py`
2022-02-25T06:09:15.790174+00:00 heroku[worker.1]: State changed from starting to up
2022-02-25T06:09:16.033414+00:00 app[worker.1]: Traceback (most recent call last):
2022-02-25T06:09:16.033431+00:00 app[worker.1]: File "/app/main.py", line 3, in <module>
2022-02-25T06:09:16.033533+00:00 app[worker.1]: from venmo_api import Client
2022-02-25T06:09:16.033552+00:00 app[worker.1]: ModuleNotFoundError: No module named 'venmo_api'
2022-02-25T06:09:16.155312+00:00 heroku[worker.1]: Process exited with status 1
2022-02-25T06:09:16.232899+00:00 heroku[worker.1]: State changed from up to crashed
I don't understand how the module can be missing when my Dockerfile contains COPY requirements.txt . and RUN pip install -r requirements.txt.
For context, here is my Dockerfile:
FROM python:3.9.10-alpine3.15
RUN addgroup app && adduser -S -G app app
USER app
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "-u", "main.py"]
my requirements.txt:
certifi==2021.10.8
charset-normalizer==2.0.11
idna==3.3
requests==2.27.1
urllib3==1.26.8
venmo-api==0.3.1
get-docker-secret==1.0.1
and my heroku.yml:
build:
  docker:
    worker: Dockerfile
All three of these files are in the same directory, and you can see that venmo-api is listed in my requirements.txt.
Prior to deploying, I set the Heroku stack to container with: heroku stack:set container
I used heroku config:set [redacted] to set runtime environment variables such as my API key, and finally I ran heroku container:push worker and heroku container:release worker to build the image, push it to Heroku's container registry, and deploy it.
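Putting those steps together, the full sequence of commands I ran looked roughly like this (the config:set values are redacted, as above):
heroku stack:set container
heroku config:set [redacted]
heroku container:push worker
heroku container:release worker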
Can someone please tell me where I went wrong? I just want this script to run in the cloud so that I can schedule it to run daily with Heroku Scheduler.
Thanks so much!!
I have the following Python command and I'm not sure how to pass it in a Dockerfile.
Command
python3 app/main.py start --config config.yml
I am writing a Dockerfile but I'm not sure how to pass the above command in it. In my main.py file I have defined start and stop conditions in the form of actions.
config.yaml file
host: 127.0.0.1
port: 9000
db: elastic
elastic:
  port: 9200
  host: localhost
  user: null
  secret: null
  ssl: false
sqlserver:
  port: 1433
  host: localhost
  instance: MSSQLSERVER
  ssl: false
  user: null
  password: null
kafka:
  port: null
  host: null
api-docs: true
rocketchat:
  host: null
  port: null
auth-backend:
  - basic
  - bearer
  - oauth
name: Smartapp
Dockerfile
FROM python:3.8
WORKDIR /usr/src/app
COPY requirements.txt ./
RUN pip install -r requirements.txt
COPY . .
EXPOSE 8000
RUN python3 app/main.py start --config config.yml
When I build the image from this Dockerfile, it gets stuck in an infinite loop at the RUN step.
Step 7/7 : RUN python3 smartinsights/main.py start --config config.yml
---> Running in 8a81bfe608d6
/usr/src/app/smartinsights/system/DB.py:27: SyntaxWarning: "is" with a literal. Did you mean "=="?
if self.database_name is 'elastic':
/usr/src/app/smartinsights/system/DB.py:29: SyntaxWarning: "is" with a literal. Did you mean "=="?
elif self.database_name is 'sqlserver':
Setting /usr/src/app/smartinsights as project folder
Running...
registering ActionIncident
registering ActionIncidentTag
registering MemoryCount
registering MemoryCloseCount
registering MemoryOpenCount
registering AutoCloseCount
registering AgeingAnalysisData
[2021-04-14 09:57:19 +0000] [9] [INFO] Goin' Fast @ http://127.0.0.1:8000
[2021-04-14 09:57:19 +0000] [9] [INFO] Starting worker [9]
The error below can also be seen at server startup:
[2021-04-14 10:17:37 +0000] [9] [INFO] Goin' Fast @ http://localhost:8000
[2021-04-14 10:17:37 +0000] [9] [ERROR] Unable to start server
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/sanic/server.py", line 891, in serve
http_server = loop.run_until_complete(server_coroutine)
File "uvloop/loop.pyx", line 1494, in uvloop.loop.Loop.run_until_complete
File "uvloop/loop.pyx", line 1768, in create_server
OSError: [Errno 99] error while attempting to bind on address ('::1', 8000, 0, 0): cannot assign requested address
[2021-04-14 10:17:37 +0000] [9] [INFO] Server Stopped
The Dockerfile 'builds' an image -- you should not run your application during the build process. You want your application to run only when the container runs.
Change your dockerfile to look like this:
FROM python:3.8
WORKDIR /pyapp/
COPY app/* app/
COPY . .
RUN pip install -r requirements.txt
CMD ["python3", "app/main.py", "start", "--config", "config.yml"]
This CMD line tells docker that when it runs the container, it should run this command within it. You can build it like this:
docker build --tag myPythonApp .
And run it like this
docker run -it --rm myPythonApp
You have added some output in the comments that suggests that this container is listening on port 9000. You can expose this port on the host like this:
docker run -it --rm -p 9000:9000 myPythonApp
And maybe access it in your browser at `http://localhost:9000/`.
That command will run the container in the current shell process. When you hit ctrl+c then the process will stop and the container will exit. If you want to keep the container running in the background try this:
docker run -it --rm -p 9000:9000 -d myPythonApp
And, if you're sure that you'll only be running one container at a time, it may help to give it a name.
docker run -it --rm -p 9000:9000 -d --name MyPythonApp myPythonApp
That will allow you to kill a background container with:
docker rm -f MyPythonApp
Btw, if you're in a mess, and you're running bash, you can remove all running and stopped containers with:
docker rm -f $(docker ps -qa)
1. Create any Python script.
2. Create the Dockerfile using the following code:
FROM python:3
WORKDIR /usr/src/app
COPY . .
CMD ["test.py"]
ENTRYPOINT ["python3"]
3. Build the Docker image:
docker build -t hello .
4. Run the Docker container:
docker run -it hello test.py
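Because the image sets ENTRYPOINT ["python3"] with CMD ["test.py"] as the default argument, the plain form below runs python3 test.py as well; the trailing test.py in the command above simply overrides the default CMD:
docker run -it hello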
I am trying to dockerize Airflow; my Dockerfile looks like this:
FROM python:3.5.2
RUN mkdir -p /src/airflow
RUN mkdir -p /src/airflow/logs
RUN mkdir -p /src/airflow/plugins
WORKDIR /src
COPY . .
RUN pip install psycopg2
RUN pip install -r requirements.txt
COPY airflow.cfg /src/airflow
ENV AIRFLOW_HOME /src/airflow
ENV PYTHONPATH "${PYTHONPATH}:/src"
RUN airflow initdb
EXPOSE 8080
ENTRYPOINT ./airflow-start.sh
while my docker-compose.yml looks like this
version: "3"
services:
  airflow:
    container_name: airflow
    network_mode: host
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - 8080:8080
The output of $ docker-compose build looks normal; every step executes, and then:
Step 12/14 : RUN airflow initdb
---> Running in 8b7ebe406978
[2020-04-21 10:34:21,419] {__init__.py:45} INFO - Using executor LocalExecutor
Traceback (most recent call last):
File "/usr/local/bin/airflow", line 17, in <module>
from airflow.bin.cli import CLIFactory
File "/usr/local/lib/python3.5/site-packages/airflow/bin/cli.py", line 59, in <module>
from airflow.www.app import cached_app
File "/usr/local/lib/python3.5/site-packages/airflow/www/app.py", line 20, in <module>
from flask_cache import Cache
File "/usr/local/lib/python3.5/site-packages/flask_cache/__init__.py", line 24, in <module>
from werkzeug import import_string
ImportError: cannot import name 'import_string'
ERROR: Service 'airflow' failed to build: The command '/bin/sh -c airflow initdb' returned a non-zero code: 1
Postgres is running on the host system.
I have tried multiple approaches, but this keeps happening.
I even tried the puckel/docker-airflow image and the same error occurred.
Can someone tell me what I am doing wrong?
Project Structure:
root
-airflow_dags
-Dockerfile
-docker-compose.yml
-airflow-start.sh
-airflow.cfg
In case it's relevant: airflow-start.sh
In airflow.cfg:
dags_folder = /src/airflow_dags/
sql_alchemy_conn = postgresql://airflow:airflow@localhost:5432/airflow
If possible, get your code running without touching Docker at all: run it directly on your host. Of course, this means your host (your laptop, or wherever you are executing your commands, which could be a remote Debian VPS) must have the same OS as your Dockerfile; in this case FROM python:3.5.2 is actually based on Debian 8.
Short of doing the above, launch a toy container which does nothing, yet runs and lets you log in to it so you can execute your commands manually to aid troubleshooting. Use the following as this toy container's Dockerfile:
FROM python:3.5.2
CMD ["/bin/bash"]
So now issue this:
docker build --tag saadi_now . # creates image saadi_now
Now launch that image:
docker run -d saadi_now sleep infinity # launches container
docker ps # lets say its container_id is b91f8cba6ed1
Now log in to that running container:
docker exec -ti b91f8cba6ed1 bash
Cool, so you are now inside the Docker container; run the commands that were originally in the real Dockerfile. This sometimes makes troubleshooting easier.
One by one, add your actual commands from the real Dockerfile to this toy Dockerfile and repeat the steps above until you discover the underlying issue.
Most likely this is related either to a bug in airflow involving the werkzeug package, or to your requirements clobbering something.
I recommend checking the versions of airflow, flask, and werkzeug that are used in the environment. It may be that you need to pin the version of flask or werkzeug as discussed here.
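As a hedged sketch only (the exact bound is an assumption you should verify against your airflow release), such a pin in requirements.txt might look like:
# Werkzeug 1.0 removed the top-level import_string import that flask_cache relies on
werkzeug<1.0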
I'm working on a Flask application based on the Microblog app from Miguel Grinberg's mega-tutorial. Code lives here: https://github.com/dnilasor/quickgig . I have a working Docker implementation with a linked MySQL 5.7 container. Today I added an admin view function using the Flask-Admin module. It works beautifully when served locally (OSX) via flask run, but when I build and run the new Docker image (based on python:3.8-alpine), it crashes on boot with an OSError: libc not found error, whose code seems to indicate an unknown library.
It looks to me like Gunicorn is unable to serve the app following my additions. My classmate and I are stumped!
I originally got the error using the python:3.6-alpine base image, so I tried 3.7 and 3.8 to no avail. I also noticed that I was redundantly adding PyMySQL: once in requirements.txt with a version specified, and again explicitly in the Dockerfile with no version spec, so I removed the requirements.txt entry. I also tried incrementing the Flask-Admin version number up and down, and cleaning up my database migrations, as I have seen multiple migration files cause the container to fail to boot (admittedly that was when using SQLite). Now there is only a single migration file, and based on the stack trace it seems like flask db upgrade works just fine.
One thing I have yet to try is a different (less minimal?) base image; I can try that soon and update this. But the issue is so mysterious to me that I thought it was time to ask if anyone else has seen it : )
I did find this socket bug which seemed potentially relevant but it was supposed to be fully fixed in python 3.8.
Also FYI I followed some of the advice here on circular imports and imported my admin controller function inside create_app.
Dockerfile:
FROM python:3.8-alpine
RUN adduser -D quickgig
WORKDIR /home/quickgig
COPY requirements.txt requirements.txt
RUN python -m venv venv
RUN venv/bin/pip install -r requirements.txt
RUN venv/bin/pip install gunicorn pymysql
COPY app app
COPY migrations migrations
COPY quickgig.py config.py boot.sh ./
RUN chmod +x boot.sh
ENV FLASK_APP quickgig.py
RUN chown -R quickgig:quickgig ./
USER quickgig
EXPOSE 5000
ENTRYPOINT ["./boot.sh"]
boot.sh:
#!/bin/sh
source venv/bin/activate
while true; do
    flask db upgrade
    if [[ "$?" == "0" ]]; then
        break
    fi
    echo Upgrade command failed, retrying in 5 secs...
    sleep 5
done
# flask translate compile
exec gunicorn -b :5000 --access-logfile - --error-logfile - quickgig:app
Implementation in __init__.py:
from flask_admin import Admin
app_admin = Admin(name='Dashboard')
def create_app(config_class=Config):
    app = Flask(__name__)
    app.config.from_object(config_class)
    ...
    app_admin.init_app(app)
    ...
    from app.admin import add_admin_views
    add_admin_views()
    ...
    return app

from app import models
admin.py:
from flask_admin.contrib.sqla import ModelView
from app.models import User, Gig, Neighborhood
from app import db
# Add views to app_admin
def add_admin_views():
    from . import app_admin
    app_admin.add_view(ModelView(User, db.session))
    app_admin.add_view(ModelView(Neighborhood, db.session))
    app_admin.add_view(ModelView(Gig, db.session))
requirements.txt:
alembic==0.9.6
Babel==2.5.1
blinker==1.4
certifi==2017.7.27.1
chardet==3.0.4
click==6.7
dominate==2.3.1
elasticsearch==6.1.1
Flask==1.0.2
Flask-Admin==1.5.4
Flask-Babel==0.11.2
Flask-Bootstrap==3.3.7.1
Flask-Login==0.4.0
Flask-Mail==0.9.1
Flask-Migrate==2.1.1
Flask-Moment==0.5.2
Flask-SQLAlchemy==2.3.2
Flask-WTF==0.14.2
guess-language-spirit==0.5.3
idna==2.6
itsdangerous==0.24
Jinja2==2.10
Mako==1.0.7
MarkupSafe==1.0
PyJWT==1.5.3
python-dateutil==2.6.1
python-dotenv==0.7.1
python-editor==1.0.3
pytz==2017.2
requests==2.18.4
six==1.11.0
SQLAlchemy==1.1.14
urllib3==1.22
visitor==0.1.3
Werkzeug==0.14.1
WTForms==2.1
When I run the container in an interactive terminal, I see the following stack trace:
(venv) ****s-MacBook-Pro:quickgig ****$ docker run -ti quickgig:v7
INFO [alembic.runtime.migration] Context impl SQLiteImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
INFO [alembic.runtime.migration] Running upgrade -> 1f5feeca29ac, test
Traceback (most recent call last):
File "/home/quickgig/venv/bin/gunicorn", line 6, in <module>
from gunicorn.app.wsgiapp import run
File "/home/quickgig/venv/lib/python3.8/site-packages/gunicorn/app/wsgiapp.py", line 9, in <module>
from gunicorn.app.base import Application
File "/home/quickgig/venv/lib/python3.8/site-packages/gunicorn/app/base.py", line 12, in <module>
from gunicorn.arbiter import Arbiter
File "/home/quickgig/venv/lib/python3.8/site-packages/gunicorn/arbiter.py", line 16, in <module>
from gunicorn import sock, systemd, util
File "/home/quickgig/venv/lib/python3.8/site-packages/gunicorn/sock.py", line 14, in <module>
from gunicorn.socketfromfd import fromfd
File "/home/quickgig/venv/lib/python3.8/site-packages/gunicorn/socketfromfd.py", line 26, in <module>
raise OSError('libc not found')
OSError: libc not found
I'd like the app to boot and be served by gunicorn inside the container so I can continue developing with my team using the Docker implementation, leveraging dockerized MySQL rather than the pain of local MySQL for development. Can you advise?
In your Dockerfile:
RUN apk add binutils libc-dev
Yes, Gunicorn 20.0.0 requires the libc-dev package.
So this works for me:
RUN apk --no-cache add libc-dev
This was an issue with gunicorn 20.0.0, tracked here:
https://github.com/benoitc/gunicorn/issues/2160
The issue is fixed in 20.0.1 and forward. So, change this:
RUN venv/bin/pip install gunicorn pymysql
to this:
RUN venv/bin/pip install 'gunicorn>=20.0.1,<21' pymysql
If upgrading is not an option, as a workaround you can add the following line:
RUN apk --no-cache add binutils musl-dev
Unfortunately this adds about 20MB to the resulting docker container, but there isn't any other known workaround at the moment.
This problem seems related to the new version of Gunicorn, 20.0.0. Try using a previous one, such as 19.9.0.
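In the Dockerfile from the question, a sketch of that downgrade (the exact version choice is an assumption) would be:
RUN venv/bin/pip install 'gunicorn==19.9.0' pymysql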
I have solved this problem:
Dockerfile: remove this installation line: RUN venv/bin/pip install gunicorn
requirements.txt: add this line: gunicorn==19.7.1
I'm developing a Python app using Flask and I'm getting this error while trying to deploy it to Heroku:
Error R10 (Boot timeout) -> Web process failed to bind to $PORT within 60 seconds of launch
I can see this line in the Heroku logs.
A few possibilities that I have tried:
In my Procfile I have written this web: python hello-mysql.py
I have also tried web: python hello-mysql.py runserver 0.0.0.0=$PORT
Replace "web" with "worker" in your Procfile.
To @damien's point, it looks like you're not binding to the $PORT env var. Here's some documentation that may help: https://devcenter.heroku.com/articles/getting-started-with-python#define-a-procfile and https://devcenter.heroku.com/articles/dynos#web-dynos
Also, do not rename your process to "worker" since only processes named web will be accessible via http/https.
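For illustration only, here is a minimal sketch of binding the Flask dev server to $PORT, assuming hello-mysql.py creates a standard Flask app object (the route is just a placeholder):
import os
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "Hello"

if __name__ == "__main__":
    # Heroku tells the dyno which port to bind to via the PORT env var
    port = int(os.environ.get("PORT", 5000))
    app.run(host="0.0.0.0", port=port)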
Alternatively, use gunicorn to ease the burden.
Within the project directory, with the virtual environment activated, install gunicorn as follows:
pip install gunicorn
If you're using pipenv you can try:
pipenv install gunicorn
Update the requirements.txt file to include the newly installed gunicorn module by running:
pip freeze > requirements.txt
Update the Procfile as follows:
web: gunicorn your_django_project_name.wsgi --log-file -
N.B.:
There should be a space between web: and gunicorn, as well as between --log-file and the - next to it.
Lastly, add, commit, and push the changes.
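Since the question is about a Flask app rather than Django, a hedged equivalent Procfile line (the module and app object names are placeholders for wherever your Flask instance lives) would be:
web: gunicorn app:app --log-file -
On Heroku, gunicorn defaults to binding 0.0.0.0:$PORT when the PORT environment variable is set, so no explicit -b flag should be needed.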
I am trying to get a Django project that I have built to run on Docker, and to create an image and container for my project so that I can push it to my Docker Hub profile.
Now I have everything set up and I've created the initial image of my project. However, when I run it, no port number is attached to the container. I need this to test whether the container is actually working.
Here is what I have:
Successfully built a047506ef54b
Successfully tagged test_1:latest
(MySplit) omars-mbp:mysplit omarjandali$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
test_1 latest a047506ef54b 14 seconds ago 810MB
(MySplit) omars-mbp:mysplit omarjandali$ docker run --name testing_first -d -p 8000:80 test_1
01cc8173abfae1b11fc165be3d900ee0efd380dadd686c6b1cf4ea5363d269fb
(MySplit) omars-mbp:mysplit omarjandali$ docker container ls -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
01cc8173abfa test_1 "python manage.py ru…" 13 seconds ago Exited (1) 11 seconds ago testing_first
(MySplit) omars-mbp:mysplit omarjandali$ Successfully built a047506ef54b
You can see there is no port number, so I don't know how to access the container from my local machine in my web browser.
Dockerfile:
FROM python:3
WORKDIR tab/
COPY requirements.txt ./
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "manage.py", "runserver", "0.0.0.0"]
This line from the question helps reveal the problem:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
01cc8173abfa test_1 "python manage.py ru…" 13 seconds ago Exited (1) 11 seconds ago testing_first
Exited (1) (from the STATUS column) means that the main process has already exited with a status code of 1 - usually meaning an error. This would have freed up the ports, as the docker container stops running when the main process finishes for any reason.
You need to view the logs in order to diagnose why.
docker logs 01cc will show the logs of the docker container that has the ID starting with 01cc. You should find that reading these will help you on your way. Knowing this command will help you immensely in debugging weirdness in docker, whether the container is running or stopped.
An alternative 'quick' way is to drop the -d in your run command. This will make your container run inline rather than as a daemon.
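For example, the run command from the question without -d (after removing the old container, or picking a new name) would be:
docker run --name testing_first -p 8000:80 test_1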
Create a Dockerized Django seed project:
django-admin.py startproject djangoapp
You need a requirements.txt file outlining the Python dependencies:
cd djangoapp/
Run the following commands to create the files required for dockerization:
cat <<EOF > requirements.txt
Django
psycopg2
EOF
Dockerfile
cat <<EOF > Dockerfile
FROM python:3.6
ENV PYTHONUNBUFFERED 1
RUN mkdir /app
WORKDIR /app
ADD requirements.txt /app/
RUN pip install -r requirements.txt
ADD . /app/
EXPOSE 8000
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
EOF
docker-compose.yml
cat <<EOF > docker-compose.yml
version: "3.2"
services:
web:
image: djangoapp
command: python manage.py runserver 0.0.0.0:8000
ports:
- "8000:8000"
EOF
Run the application with
docker-compose up -d
When you created the container, you published the ports. Your container would be accessible via port 8000 if it had started successfully. However, as Shadow pointed out, your container exited with an error. That is why you must add the -a flag to your docker container ls command; without -a, docker container ls only shows running containers.
I recommend forgoing the detached flag -d to see what is causing the error, then creating a new container after you have successfully launched the one you are working on. Or simply run the following commands once you fix the issue: docker stop testing_first, then docker container rm testing_first, and finally the same command you ran before: docker run --name testing_first -d -p 8000:80 test_1
I ran into similar problems with the first docker instances I attempted to run as well.