Use included XML files in deployed server - python

I have XML files that reside in a directory structure like this:
src
|---lib
|   |---folder
|   |   |---XML files
|   |---script.py
|---app.py
The app.py file runs the code in script.py, and script.py requires the XML files. When I run the server locally (Windows) I can just use the relative path "lib\folder\'xml files'". But when I deploy my server to Cloud Run, it says the files don't exist.
I've tried to specify the absolute path by doing this in script.py
package_directory = os.path.dirname(os.path.abspath(__file__))
path = os.path.join(package_directory, "folder\'xml files")
and tried changing all backslashes to forward slashes, but the error still occurs.
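For context, on Linux a backslash is an ordinary filename character rather than a path separator, which is why a Windows-style relative path fails inside the container. A small illustration (the paths here are only examples):
import os

# os.path.join uses the separator of the OS it runs on:
# 'lib\\folder' on Windows, 'lib/folder' on Linux.
print(os.path.join("lib", "folder"))

# On Linux this looks for a single entry literally named 'lib\folder',
# not a folder 'folder' inside 'lib', so the lookup fails.
print(os.path.exists("lib\\folder"))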
In the dockerfile, I had this:
ENV APP_HOME /app
WORKDIR $APP_HOME
COPY . ./
which I believe to copy everything in the src folder except things specified in .dockerignore, which I had these insides:
Dockerfile
README.md
*.pyc
*.pyo
*.pyd
__pycache__
.pytest_cache

Because Cloud Run requires a container, a good test for you would be to create the container and run it locally. I suspect that it's your container that's incorrect rather than Cloud Run.
I created the following repro of your code:
.
├── app.py
├── Dockerfile
└── lib
    ├── folder
    │   └── XML files
    │       └── test
    ├── __init__.py
    └── script.py
app.py:
from lib import script
script.foo()
script.py:
import os

def foo():
    package_directory = os.path.dirname(os.path.abspath(__file__))
    path = os.path.join(package_directory, "folder/XML files")
    for f in os.listdir(path):
        if os.path.isfile(os.path.join(path, f)):
            print(f)
Dockerfile:
FROM docker.io/python:3.9.9
ENV APP_HOME /app
WORKDIR ${APP_HOME}
COPY . ./
ENTRYPOINT ["python","app.py"]
And when I build and run the container, it correctly reports test:
Q="70734734"
podman build \
  --tag=${Q} \
  --file=./Dockerfile \
  .
podman run \
  --interactive --tty \
  localhost/${Q}
I'm confident that, if I were to push it to Cloud Run, it would work correctly there too.
NOTE
Try to avoid spaces in directory names, even though os.path.join accommodates them (see the pathlib sketch below)
You describe XML files but your code references xml files; Linux filesystems are case-sensitive
You don't include a full repro of your issue, which makes it more difficult to help you
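As an aside, pathlib sidesteps the separator question entirely. A minimal sketch of what script.py could look like along those lines (same layout as the repro above):
from pathlib import Path

def foo():
    # Directory containing this script (lib/), as an absolute path.
    package_directory = Path(__file__).resolve().parent

    # The "/" operator joins components portably on Windows and Linux.
    xml_dir = package_directory / "folder" / "XML files"

    for f in xml_dir.iterdir():
        if f.is_file():
            print(f.name)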

Related

Auto-updating the host's files and the files passed to the container synchronously (Dockerfile)

This is a concept question about images and containers. I have a Python 3.9 image on Ubuntu, and a main.py that will be altered and rewritten. However, when I change the contents of main.py on my host computer, the main.py within the container is not kept one-to-one: its contents do not change unless I edit it inside the container itself.
Is there a way to have the files within the container pull the latest updates whenever there is an update on the host computer, so the files stay one-to-one? I obviously want to alter main.py from my host computer; if I did it the other way around, the container's changes wouldn't be visible outside it.
Host Computer:
.
├── files/
│   ├── main.py
│   └── text.csv
└── Dockerfile
Container Directory:
test-poetry/
├─tests/
│ └─__init__.py
├─README.md
├─pyproject.toml
├─test_poetry/
│ ├─__init__.py
│ ├─main.py
│ └─text.csv
└─poetry.lock
Dockerfile contents:
#Python - 3.9 - ubuntu
FROM python:3.9-slim
ENTRYPOINT [ "/bin/bash" ]
WORKDIR /src/test-poetry/test_poetry
COPY files .
You need to use files as a container volume.
Dockerfile:
FROM python:3.9-slim
# makes no sense to have this as a long path
WORKDIR /project
# keep this towards the end of the file for clarity
ENTRYPOINT [ "/bin/bash" ]
Then build the image and run your container with:
docker build -t test/poetry:0.1 .
docker container run --rm -ti -v $(pwd)/files:/project test/poetry:0.1
NOTE
For your purposes you can even skip the build completely and run a simple container like:
docker run \
  --rm -ti \
  -v $(pwd)/files:/project \
  --workdir /project \
  python:3.9-slim bash
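To convince yourself the mount is live, here is a throwaway sketch you could run inside such a container (the /project path matches the mount above; the polling is purely illustrative):
import time
from pathlib import Path

target = Path("/project/main.py")
last_mtime = None

# With a bind mount, edits made on the host are visible in the
# container immediately; this just prints whenever main.py changes.
while True:
    mtime = target.stat().st_mtime
    if mtime != last_mtime:
        print(f"main.py modified at {mtime}")
        last_mtime = mtime
    time.sleep(1)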

How to import python file into another python file so that it works in Docker?

When I run my app.py file I can access it on localhost and it works.
After running (and having no issues): docker build -t flask-container .
When I run: docker run -p 5000:5000 flask-container
I get: from helpers import apology, login_required, usd
ModuleNotFoundError: No module named 'helpers'
In app.py I have: from helpers import apology, login_required, usd
I have tried putting an empty __init__.py file in the main folder; it still doesn't work.
Question: How do I fix the Module Not Found Error when trying to run docker?
Dockerfile
FROM python:3.8-alpine
# By default, listen on port 5000
EXPOSE 5000/tcp
# Set the working directory in the container
WORKDIR /app
# Copy the dependencies file to the working directory
COPY requirements.txt .
# Install any dependencies
RUN pip install -r requirements.txt
# Copy the content of the local src directory to the working directory
COPY app.py .
# Specify the command to run on container start
CMD [ "python", "./app.py" ]
requirements.txt
flask===2.1.0
flask_session
Python Version: 3.10.5
Please copy helpers.py into the working directory as well:
COPY helpers.py .
OR
ADD . /app
# This will add all files in the current local dir to /app
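For background: Python resolves from helpers import ... against sys.path, whose first entry is the directory of the script being run (/app in this image), so helpers.py has to end up there. A quick diagnostic you could run inside the container (a hypothetical check script, not part of the question):
import os
import sys

# sys.path[0] is the directory of the script Python was started with;
# for CMD [ "python", "./app.py" ] that is /app.
print("import search starts at:", sys.path[0])

# helpers.py must appear here for `from helpers import ...` to work.
print("contents:", os.listdir(sys.path[0]))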

How to run multiple Python scripts and an executable files using Docker?

I want to create a container that contains two Python packages as well as a package consisting of an executable file.
Here's my main package (dockerized_project) tree:
dockerized_project
├── docker-compose.yml
├── Dockerfile
├── exec_project
│   ├── config
│   │   └── config.json
│   ├── config.json
│   ├── gowebapp
├── pythonic_project1
│   ├── __main__.py
│   ├── requirements.txt
│   ├── start.sh
│   └── utility
│       └── utility.py
└── pythonic_project2
    ├── collect
    │   └── collector.py
    ├── __main__.py
    ├── requirements.txt
    └── start.sh
Dockerfile content:
FROM ubuntu:18.04
RUN apt update
RUN apt-get install -y python3.6 python3-pip python3-dev build-essential gcc \
libsnmp-dev snmp-mibs-downloader
RUN pip3 install --upgrade pip
RUN mkdir /app
WORKDIR /app
COPY . /app
WORKDIR /app/snmp_collector
RUN pip3 install -r requirements.txt
WORKDIR /app/proto_conversion
RUN pip3 install -r requirements.txt
WORKDIR /app/pythonic_project1
CMD python3 __main__.py
WORKDIR /app/pythonic_project2
CMD python3 __main__.py
WORKDIR /app/exec_project
CMD ["./gowebapp"]
docker-compose content:
version: '3'
services:
  proto_conversion:
    build: .
    image: pc:2.0.0
    container_name: proto_conversion
    # command:
    #   - "bash snmp_collector/start.sh"
    #   - "bash proto_conversion/start.sh"
    restart: unless-stopped
    ports:
      - 8008:8008
    tty: true
Problem:
When I run this project with docker-compose up --build, only the last CMD runs, so I think the earlier CMD instructions are discarded: when I remove the last two CMDs, the first one works well.
Is there any approach to running multiple Python scripts and an executable file in the background?
I've also tried bash wrapper files, without any success.
As mentioned in the documentation, there can be only one CMD in a Dockerfile; if there are more, the last one overrides the others and takes effect.
A key point of using Docker is to isolate your programs, so at first glance you might want to move them to separate containers and have them talk to each other through a shared volume or a Docker network. But if you really need them to run in the same container, wrapping them in a bash script and replacing the last CMD with one that runs the script will launch them alongside each other:
#!/bin/bash
# Start the first script in the background, then replace the shell
# with the second so it receives the container's signals.
python3 /path/to/script1.py &
exec python3 /path/to/script2.py
Add COPY run.sh to the Dockerfile and use RUN chmod a+x run.sh to make it executable; CMD should then be CMD ["./run.sh"].
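A Python variant of the same wrapper, if you want it to notice when either process dies (the script paths are illustrative, not taken from the question):
import subprocess
import sys
import time

procs = [
    subprocess.Popen([sys.executable, "/app/pythonic_project1/__main__.py"]),
    subprocess.Popen([sys.executable, "/app/pythonic_project2/__main__.py"]),
]

# Block until any child exits.
while all(p.poll() is None for p in procs):
    time.sleep(1)

# Remember the finished child's exit code, stop the rest, and exit
# with that code so a Docker restart policy can react to it.
code = next(p.returncode for p in procs if p.returncode is not None)
for p in procs:
    if p.poll() is None:
        p.terminate()
        p.wait()
sys.exit(code)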
Try it via an entrypoint.sh:
ENTRYPOINT ["/docker_entrypoint.sh"]
docker_entrypoint.sh
#!/bin/bash
set -e
python3 not__main__.py &
exec python3 __main__.py
The & symbol runs the first command in the background, as a daemon.
Best practice is to launch these as three separate containers. That's doubly true since you're taking three separate applications, bundling them into a single container, and then trying to launch three separate things from it.
Create a separate Dockerfile in each of your project subdirectories. These can be simpler, especially for the one that just contains a compiled binary:
# execproject/Dockerfile
FROM ubuntu:18.04
WORKDIR /app
COPY . ./
CMD ["./gowebapp"]
Then in your docker-compose.yml file have three separate stanzas to launch the containers:
version: '3'
services:
  pythonic_project1:
    build: ./pythonic_project1
    ports:
      - 8008:8008
    environment:
      PY2_URL: 'http://pythonic_project2:8009'
      GO_URL: 'http://execproject:8010'
  pythonic_project2:
    build: ./pythonic_project2
  execproject:
    build: ./execproject
If you really can't rearrange your Dockerfiles, you can at least launch three containers from the same image in the docker-compose.yml file:
services:
  pythonic_project1:
    build: .
    working_dir: /app/pythonic_project1
    command: ./__main__.py
  pythonic_project2:
    build: .
    working_dir: /app/pythonic_project2
    command: ./__main__.py
There are several good reasons to structure your project with multiple containers and images:
If you roll your own shell script and use background processes (as other answers have), it just won't notice if one of the processes dies; here you can use Docker's restart mechanism to restart individual containers.
If you have an update to one of the programs, you can update and restart only that single container and leave the rest intact.
If you ever use a more complex container orchestrator (Docker Swarm, Nomad, Kubernetes) the different components can run on different hosts and require a smaller block of CPU/memory resource on a single node.
If you ever use a more complex container orchestrator, you can individually scale up components that are using more CPU.

Non-existing path when setting up Flask with separate configurations for each environment

I have separate configs for each environment and a single app; the
directory tree looks like:
myapp
├── __init__.py # empty
├── config
│   ├── __init__.py # empty
│   ├── development.py
│   ├── default.py
│   └── production.py
├── instance
│   └── config.py
└── myapp
    ├── __init__.py
    └── myapp.py
Code
The relevant code, myapp/__init__.py:
from flask import Flask
app = Flask(__name__, instance_relative_config=True)
app.config.from_object('config.default')
app.config.from_pyfile('config.py')
app.config.from_envvar('APP_CONFIG_FILE')
myapp/myapp.py:
from myapp import app
# ...
Commands
Then I set the variable:
$ export FLASK_APP=myapp.py
And try to run the development server from the project root:
$ flask run
Usage: flask run [OPTIONS]
Error: The file/path provided (myapp.py) does not appear to exist. Please verify the path is correct. If app is not on PYTHONPATH, ensure the extension is .py
And from the project myapp folder:
$ cd myapp
$ flask run
Usage: flask run [OPTIONS]
Error: The file/path provided (myapp.myapp.myapp) does not appear to exist. Please verify the path is correct. If app is not on PYTHONPATH, ensure the extension is .py
With another FLASK_APP variable:
$ export FLASK_APP=myapp/myapp.py
# in project root
$ flask run
Usage: flask run [OPTIONS]
Error: The file/path provided (myapp.myapp.myapp) does not appear to exist. Please verify the path is correct. If app is not on PYTHONPATH, ensure the extension is .py
# moving to project/myapp
$ cd myapp
$ flask run
Usage: flask run [OPTIONS]
Error: The file/path provided (myapp/myapp.py) does not appear to exist. Please verify the path is correct. If app is not on PYTHONPATH, ensure the extension is .py
Other tests, without success:
$ python -c 'import myapp; print(myapp)'
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/home/user/myapp/myapp/__init__.py", line 6, in <module>
    app.config.from_envvar('APP_CONFIG_FILE')
  File "/home/user/.virtualenvs/myapp/lib/python3.5/site-packages/flask/config.py", line 108, in from_envvar
    variable_name)
RuntimeError: The environment variable 'APP_CONFIG_FILE' is not set and as such configuration could not be loaded. Set this variable and make it point to a configuration file
$ export APP_CONFIG_FILE="/home/user/myapp/config/development.py"
$ python -c 'import myapp; print(myapp)'
<module 'myapp' from '/home/user/myapp/myapp/__init__.py'>
$ flask run
Usage: flask run [OPTIONS]
Error: The file/path provided (myapp.myapp) does not appear to exist. Please verify the path is correct. If app is not on PYTHONPATH, ensure the extension is .py
Notes:
I am not using the PYTHONPATH variable; it is empty
I have already seen other related questions (Flask: How to manage different environment databases?) but my problem is with the (relatively new) flask command
Using Python 3.5.2+
It took me a while but I finally found it:
Flask doesn't like projects with an __init__.py at root level; delete the root-level myapp/__init__.py:
myapp
├── __init__.py <--- DELETE
...
└── myapp
    ├── __init__.py <--- keep
    └── myapp.py
Use $ export FLASK_APP=myapp/myapp.py
The environment variable specifying the configuration should be the absolute path to it: export APP_CONFIG_FILE="/home/user/myapp/config/development.py"
Now everything works \o/
$ flask run
* Serving Flask app "myapp.myapp"
* Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
$ flask shell
Python 3.5.2+ (default, Sep 22 2016, 12:18:14)
[GCC 6.2.0 20160927] on linux
App: myapp
Instance: /home/user/myapp/instance
>>>

`docker run -v` doesn't work as expected

I'm experimenting with a Docker image repository cloned from https://github.com/amouat/example_app.git (which is based on another repository: https://github.com/mrmrcoleman/python_webapp).
The structure of this repository is:
├── Dockerfile
├── example_app
│   ├── app
│   │   ├── __init__.py
│   │   └── views.py
│   └── __init__.py
└── example_app.wsgi
After building this repository with the tag example_app, I try to mount a directory from the host into the container:
$ pwd
/Users/satoru/Projects/example_app
$ docker run -v $(pwd):/opt -i -t example_app bash
root@3a12236a1471:/# ls /opt/example_app/
root@3a12236a1471:/# exit
$ ls example_app
__init__.py app run.py
Note that when I tried to list files in /opt/example_app in the container it turned out to be empty.
What's wrong in my configuration?
Your Dockerfile looks like this:
FROM python_webapp
MAINTAINER amouat
ADD example_app.wsgi /var/www/flaskapp/flaskapp.wsgi
CMD service apache2 start && tail -F /var/log/apache2/error.log
So you won't find the files you mentioned, since they were never ADD-ed in the Dockerfile. Also, this is not going to work unless the python_webapp base image installs Apache, creates /var/www/flaskapp, and /var/log/apache2 exists. Without knowing what these other custom parts do, it is hard to know what to expect.
