I'm attempting to run lektor within a docker container and have hit a problem.
If I 'ADD' (or 'COPY') my source code folder within my Dockerfile, everything works perfectly but, of course, the container is then not dynamic and doesn't respond to changes in the code.
If, instead, I use a volume, the container becomes dynamic and lektor successfully rebuilds and serves my site as I make changes.
However, when I come to publish the site, an error appears in the container's log and it enters a never-ending loop:
Started build
Debugging middleware caught exception in streamed response at a point where response headers were already sent.
Traceback (most recent call last):
  File "/usr/local/lib/lektor/lib/python2.7/site-packages/lektor/admin/utils.py", line 18, in generate
    for event in chain(f(*args, **kwargs), (None,)):
  File "/usr/local/lib/lektor/lib/python2.7/site-packages/lektor/admin/modules/api.py", line 309, in generator
    for event in event_iter:
  File "/usr/local/lib/lektor/lib/python2.7/site-packages/lektor/publisher.py", line 639, in publish
    self.link_artifacts(path)
  File "/usr/local/lib/lektor/lib/python2.7/site-packages/lektor/publisher.py", line 602, in link_artifacts
    link(full_path, dst)
OSError: [Errno 18] Invalid cross-device link
Minimal Dockerfile:
FROM python:2.7.11
# Install Lektor non-interactively: strip the confirmation prompt from install.sh
RUN curl -sf https://www.getlektor.com/install.sh | \
    sed '/stdin/d;s/input = .*/return/' | \
    sh
I'm actually using docker-compose.
Minimal docker-compose.yml:
version: '2'
services:
  web:
    build: .
    ports:
      - "5000:5000"
    volumes:
      - .:/project
    working_dir: /project/source
    command: ['lektor', 'server', '--host', '0.0.0.0']
(My project folder is structured such that the lektor project file and all the expected lektor folders are in the 'source' sub-folder).
The Lektor build process uses hard links and a temporary folder for the built files. If the source code sits on a mounted volume (as it does with a Docker volume or bind mount), the source and the temporary folder are on different filesystems, and the linking fails as above.
Building and deploying via the command line with an explicit output path can work around the problem (described here: https://www.getlektor.com/docs/deployment/), but it's not a great solution inside a Docker container, where the aim is to keep life as simple as possible.
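For reference, that command-line workaround looks roughly like this; picking an output path on the container's own filesystem (here /tmp) keeps the output and Lektor's temporary folder on the same device ("production" stands in for whatever server name your project file defines):
lektor build --output-path /tmp/lektor-build
lektor deploy --output-path /tmp/lektor-build production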
The method that does the linking within Lektor already falls back to copying in some circumstances. I've created an issue (https://github.com/lektor/lektor/issues/315) suggesting that the fallback should also trigger when the project and output folders are on different filesystems. I suspect that would solve the problem properly.
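The change would amount to something like this (a sketch of the proposed fallback, not current Lektor code; errno 18 is EXDEV):
import errno
import os
import shutil

def link_or_copy(src, dst):
    # Hard links only work within one filesystem; when the source and
    # destination sit on different devices (e.g. a mounted volume),
    # os.link raises OSError with errno EXDEV, so copy instead.
    try:
        os.link(src, dst)
    except OSError as exc:
        if exc.errno != errno.EXDEV:
            raise
        shutil.copy2(src, dst)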
Related
I currently have a Python application running in a Docker container on Ubuntu 20.04.
In this Python application I want to create a text file every few minutes for use in other applications on the Ubuntu server. However, I am finding it challenging to create a file and save it on the server from inside a containerised Python application.
The application's Dockerfile/start.sh/main.py files reside in /var/www/my_app_name/, and I would like the output.txt file that main.py creates to end up in that same folder, the location of the Dockerfile/main.py source.
The text file is created in Python using a simple line:
text_file = open("my_text_file.txt", "wt")
I have seen that the best way to do this is to use a volume. My current docker run command, which is called by the shell script start.sh, includes the line:
docker run -d --name=${app} -v $PWD:/app ${app}
However, I am not having much luck, and the file is not created in the working directory where main.py resides.
A bind mount is the way to go and your -v $PWD:/app should work.
If you run a command like
docker run --rm -v $PWD:/app alpine /bin/sh -c 'echo Hello > /app/hello.txt'
you'll get a file in your current directory called hello.txt.
The command runs an Alpine image, maps the current directory to /app in the container and then echoes 'Hello' to /app/hello.txt in the container.
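The same check with the official Python image, closer to the question's use case (a one-liner sketch):
docker run --rm -v $PWD:/app python:3 python -c "open('/app/hello.txt', 'w').write('Hello')"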
Ended up solving this myself; thanks to those who answered for getting me on the right path.
For those finding this question in the future: creating a bind mount using the -v flag in docker run is indeed the correct way to go, just ensure that you have the correct path.
To get the file my_text_file.txt into the folder where my main.py and Dockerfile are located, I changed the line above to:
-v $PWD:/src/output ${app}
The /src/output is a folder within the container where my_text_file.txt is created, so in the Python code you should save the file using:
text_file = open("/src/output/my_text_file.txt", "wt")
${PWD} is where my_text_file.txt ends up on the local machine.
In short, I was not saving my file to the correct folder within the container in the Python code; it should have been saved under /app in the container, and then the command in my original question would have worked.
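Put together, a consistent pairing looks like this (a sketch; /app matches the mount target from the original command):
docker run -d --name=${app} -v $PWD:/app ${app}
and, inside main.py:
text_file = open("/app/my_text_file.txt", "wt")
With the container writing to /app and the host mounting $PWD there, the file appears next to main.py on the host.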
I have a Python program. The code checks folder A every 15 seconds and, if a CSV file appears there, processes it and generates an HTML report with the same name. I want to run this code in a Docker container. The code works without Docker; however, when I run it in a Docker container, I get a path error.
(My Dockerfile and the error output are not reproduced here.)
A Docker container cannot access the host file system by default. You need to mount your folder into the container (note that the -v flag must come before the image name):
docker run -v "C:\path\host:/path/inside/container" docker-hw
Then use "/path/inside/container" in the Python script.
https://docs.docker.com/storage/bind-mounts/
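As a sketch of the container side, assuming the folder is mounted at /path/inside/container as above (WATCH_DIR and the report contents are placeholders):
import glob
import os
import time

WATCH_DIR = "/path/inside/container"  # container side of the bind mount

while True:
    for csv_path in glob.glob(os.path.join(WATCH_DIR, "*.csv")):
        html_path = os.path.splitext(csv_path)[0] + ".html"
        if not os.path.exists(html_path):
            # real report generation goes here; this just shows the paths resolving
            with open(html_path, "w") as report:
                report.write("<html><body>report for %s</body></html>" % os.path.basename(csv_path))
    time.sleep(15)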
You are trying to load a local file on your host machine from the script, which is running in the Docker instance. Use a COPY instruction in your Dockerfile to copy that file into the Docker file system, and then just reference it from there.
I've seen other posts with similar questions but I can't find a solution for my case scenario. I hope someone can help.
Here is the thing: I have a python script that listens for UDP traffic and stores the messages in a log file. I've put this script in a docker image so I can run it in a container.
I need to map the generated logs (python script logs) FROM inside the container TO a folder outside the container, on the host machine (Windows host).
If I use docker-compose, everything works fine! But I can't find the way to make it work using a "docker run ..." command.
Here is my docker-compose.yml file
version: '3.3'
services:
  '20001':
    image: udplistener
    container_name: '20001'
    ports:
      - 20001:20001/udp
    environment:
      - UDP_PORT=20001
      - BUFFER_SIZE=1024
    volumes:
      - ./UDPLogs/20001:/usr/src/app/logs
And here is the corresponding docker run command:
docker run -d --name 20001 -e UDP_PORT=20001 -e BUFFER_SIZE=1024 -v "/C/Users/kgonzale/Desktop/UDPLogs/20001":/usr/src/app/logs -p 20001:20001 udplistener
I think the problem may be related to the way I'm creating the volumes. I know the two forms differ (docker-compose takes a relative path; the docker command takes an absolute path), but I can't find a way to use relative paths with the docker run command.
To summarize: the Python script creates logs inside the container, and I want to map those logs outside the container. I can see the logs on the host machine if I use "docker-compose up -d", but I need the corresponding "docker run ..." command.
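For what it's worth, docker run has no relative-path shorthand, but expanding the current directory explicitly gives the same mapping (a sketch; PowerShell expands ${PWD}, in cmd use %cd% instead, and note the /udp suffix so the port matches the compose file):
docker run -d --name 20001 -e UDP_PORT=20001 -e BUFFER_SIZE=1024 -v "${PWD}/UDPLogs/20001:/usr/src/app/logs" -p 20001:20001/udp udplistener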
Container: python:3.7-alpine
Host: Windows 10
Thanks in advance for your help!
I'm trying to get a Docker Compose script to test a Python program. The following is the .sh script that runs the docker-compose command:
#!/usr/bin/env bash
script_path=$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )
base_path=${script_path}/src
container_path=${base_path}/deploy/
docker_compose_path=${container_path}docker-compose.yml
sudo nano ${base_path}/tests/scrap/test_r.py
docker-compose -f "${docker_compose_path}" run service_name pytest -x ${base_path}/tests/scrap/test_r.py
docker-compose -f "${docker_compose_path}" down
The problem when running it is that an error shows up with the following message:
ERROR: file not found: /correct/path/to/test_r.py
NOTE: I have checked many times and the path (absolute) is correct.
The odd thing about the error is that the shell can find the file with no problem. The nano command right above the docker-compose one works perfectly, loading the file with the correct content. Still, with the same exact path, docker-compose seems unable to find and execute it (I already tried chmod +x test_r.py, but it did not help; Docker Compose simply doesn't find it).
If you need any extra info, just ask. Thanks :P
NOTE: I'm not considering as a source of the problem the fact that the project worked perfectly on another computer running macOS; my PC runs the scripts in an Ubuntu environment. If taking that into account is a step in the right direction, I don't know how to move toward solving the issue.
Edit: adding info of the file docker-compose.yml:
version: '3.5'
services:
  service_name:
    image: image_name
    build:
      context: ../
      dockerfile: Dockerfile
    tty: true
    volumes:
      - ../../../workers:/opt/workers/
    working_dir: /opt/workers/src
    env_file:
      - ../local/worker.env
Turns out all I had to do was update the working_dir path in the docker-compose.yml file to make Compose find the corresponding .py file. The problem was that I had no idea I had to pay attention to that setting. Thanks to @David Maze for asking me all of the questions; sometimes asking the right ones points one in the right direction.
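In other words, the path handed to pytest is resolved inside the container, not on the host, and ${base_path} expands to a host path that doesn't exist there. With the mount and working_dir above, an invocation along these lines works (a sketch, assuming the tests sit under /opt/workers/src):
docker-compose -f "${docker_compose_path}" run service_name pytest -x tests/scrap/test_r.py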
I want to dockerize my Discord bot written in Python for the development process, but I can't get it done. In docker-compose it currently looks like this:
discord_bot:
  build: ./discord
  volumes:
    - ./discord:/usr/src/discord
  depends_on:
    - mongo
    - node
Is there a way I can hot reload this code while still using discord.py?
If you want it to auto-reload on code change for local development, what you have is mostly correct. The one thing you're missing is launching the main process via some sort of file watcher. You could use nodemon (it can supervise non-Node processes, including Python) or find a Python-specific equivalent.
Changes you need to make:
The image you build needs to contain a file watcher executable. You could use nodemon for this (it works for Python too), or some Python equivalent (see the sketch after the snippet below).
You should override the default command of the image so that it launches via your file watcher.
discord_bot:
  build: ./discord    # image must include the file watcher executable (nodemon or a Python equivalent)
  command: nodemon --exec python3 /usr/src/discord/bot.py    # add this line; bot.py stands in for your actual entry point
  volumes:
    - ./discord:/usr/src/discord
  depends_on:
    - mongo
    - node
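A Python-native alternative to nodemon is the watchdog package's watchmedo helper, which restarts a command whenever matching files change. A sketch, assuming the image runs pip install watchdog and that bot.py is the (hypothetical) entry point:
discord_bot:
  build: ./discord
  command: watchmedo auto-restart --directory=/usr/src/discord --patterns='*.py' --recursive -- python3 /usr/src/discord/bot.py
  volumes:
    - ./discord:/usr/src/discord
  depends_on:
    - mongo
    - node
Because the source is bind-mounted from ./discord, edits on the host trigger a restart inside the container without rebuilding the image.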