I have been working with docker-py to build images and launch containers from a single script. So far it has been very smooth. However, I am currently having issues with the ADD/COPY commands in the Dockerfile string variable.
I need to add a file from the source directory directly into the image. With standard Dockerfiles I have been able to do this using the ADD command, but via docker-py it throws the exception:
Exception: Error building docker image: lstat simrun.py: no such file or directory
The script simrun.py is stored in the same directory as the docker-py script, so I cannot understand why I would be receiving this exception. The relevant line in dockerpy.py is:
ADD ./simrun.py /opt
Is there something I've missed, or does this functionality just not work in docker-py yet?
You need to set the build context directory via the path parameter, so that the files your ADD instruction references are included in the context.
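When docker-py is given only a Dockerfile string (via fileobj), there is no build context, so ADD ./simrun.py has nothing to copy from. A minimal sketch of the fix, assuming docker-py's low-level APIClient: either pass path= pointing at the directory containing both the Dockerfile and simrun.py, or send a custom tar context that includes both files. The base image and file contents below are placeholders:

```python
import io
import tarfile

# The build context must contain every file that ADD/COPY refers to.
dockerfile = b"FROM python:2.7\nADD ./simrun.py /opt/\n"
script = b"print('hello from simrun')\n"

# Pack Dockerfile + simrun.py into an in-memory tar archive (a build context).
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    for name, data in (("Dockerfile", dockerfile), ("simrun.py", script)):
        info = tarfile.TarInfo(name=name)
        info.size = len(data)
        tar.addfile(info, io.BytesIO(data))
buf.seek(0)

# With docker-py this context would be sent as:
#   import docker
#   client = docker.APIClient()
#   client.build(fileobj=buf, custom_context=True, tag="simrun:latest")
```

The simpler route is client.build(path="/dir/with/dockerfile", tag=...), which lets docker-py tar the directory for you.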
I've hit another problem. I'm now trying to set up continuous deployment to Google Cloud Run from my GitHub repository, and the build can't find my Dockerfile. I've tried various combinations of file paths, but it still fails with:
Already have image (with digest): gcr.io/cloud-builders/docker
unable to prepare context: unable to evaluate symlinks in Dockerfile path: lstat /workspace/Dockerfile: no such file or directory
This is my file structure in VSCode; if I run the build command locally from the first Exeplore folder, it finds the Dockerfile.
The repository is public, so I just can't figure out why it's not finding the Dockerfile:
https://github.com/Pierre-siddall/exeplore
Any advice at all would be greatly appreciated, thank you!
EDIT 1:
For some reason, the file path's capitalisation on GitHub differed from what I had in VSCode; I suspect something on GitHub's end. Now the continuous deployment is redeploying, but not actually updating the page.
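For reference, a minimal cloudbuild.yaml sketch that tells Cloud Build where the Dockerfile lives when it is not at the repository root. The directory name Exeplore and the image name are assumptions based on my repo; note that paths are case-sensitive on Cloud Build's Linux workers, which is what bit me here:

```yaml
steps:
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build',
           '-t', 'gcr.io/$PROJECT_ID/exeplore',
           '-f', 'Exeplore/Dockerfile',   # Dockerfile path, relative to the repo root
           'Exeplore']                    # build context directory
images:
  - 'gcr.io/$PROJECT_ID/exeplore'
```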
Today I tried to build a Docker image of the final version of my project (about 3.2 GB) with the command:
docker build --tag my-python-app .
I had several problems along the way: I ran the build command several times, because in the Dockerfile I had specified the wrong python and the wrong pip. Anyway, now I get "no space left on device" and my screen is flickering.
For the moment I've tried an autoremove, but there is still not enough space.
Does anyone have an idea of what is happening?
Have you tried to remove unused Docker images or containers?
You can see the list of images by running:
docker images
And you can remove them with:
docker rmi <id>
The process is fairly similar for the containers:
docker ps -a
And:
docker rm <id>
Be careful not to remove used containers or images.
Run docker image ls to see if you have any images that you can delete to free up space.
If you find images that you do not need, you can remove them individually or all at once. Run docker image prune -h to display the options for removing images.
I found the problem:
When a docker build fails, Docker does not delete the intermediate containers it created, and their layers pile up under /var/lib/docker/aufs/diff (my error was in the Dockerfile, where I had specified the wrong python and the wrong pip). That directory lives on the root partition, so once it fills up, the whole system starts to misbehave (screen, keyboard, ...) and only the terminal remains usable.
To resolve the problem you have to delete those folders.
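Rather than deleting folders under /var/lib/docker by hand, docker system prune removes stopped containers, dangling images, and the build cache in one sweep. To verify the cleanup actually reclaims space, a quick check from Python (standard library only; "/" is assumed to be the partition holding /var/lib/docker):

```python
import shutil

# Free space on the partition holding /var/lib/docker (usually the root fs).
total, used, free = shutil.disk_usage("/")
print(f"total: {total // 2**30} GiB, used: {used // 2**30} GiB, free: {free // 2**30} GiB")
```

Run it before and after the prune to see how much was recovered.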
I have a problem with Docker. I have a Docker image, kakadadroid/python27-talib, and I want to execute a simple Python script on it. The script is located at /home/Elise/technical_indicator.py.
but when I try:
docker run --volume=$(pwd):/workspace kakadadroid/python27-talib:latest python home/Elise/technical_indicator
I have the following error:
python: can't open file 'home/Elise/technical_indicator.py': [Errno 2] No such file or directory
Can someone help me? I'm not an expert in Docker.
Kind regards,
Emmanuel
Looking at your command line docker run --volume=$(pwd):/workspace kakadadroid/python27-talib:latest python home/Elise/technical_indicator, the error could have several causes. Try the following:
Your file path is not correct. It should be /home/Elise/technical_indicator.py instead of home/Elise/technical_indicator.py; you've forgotten the leading / at the start of the path.
The volume does not exist.
If the file exists at that path inside the container, the solution would be:
docker run --volume=$(pwd):/workspace kakadadroid/python27-talib:latest python /home/Elise/technical_indicator.py
Note: you should add your Dockerfile to the question so we can understand your problem better.
Regards :)
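One more possibility worth checking: --volume=$(pwd):/workspace mounts the host's current directory at /workspace inside the container, so if technical_indicator.py exists only on the host, the container-side path is under the mount point, not under /home/Elise. A small sketch of the path logic (the host directory is an assumption based on the question):

```python
from pathlib import PurePosixPath

host_dir = PurePosixPath("/home/Elise")    # directory where `docker run` is issued
mount_point = PurePosixPath("/workspace")  # container side of --volume=$(pwd):/workspace
script = "technical_indicator.py"

# Inside the container, the host file appears under the mount point:
container_path = mount_point / script
print(container_path)  # /workspace/technical_indicator.py
```

So, when run from /home/Elise, the invocation would be docker run --volume="$(pwd)":/workspace kakadadroid/python27-talib:latest python /workspace/technical_indicator.py.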
My goal is to create an Amazon Lambda Function to compile .tex files into .pdf using the pdflatex tool through python.
I've built an EC2 instance using Amazon's AMI and installed pdflatex using yum:
yum install texlive-collection-latex.noarch
This way, I can use the pdflatex and my python code works, compiling my .tex into a .pdf the way I want.
Now, I need to create a .zip file bundle containing the pdflatex tool; latexcodec (a python library I've used, no problem with this one); and my python files: handler (lambda function handler) and worker (which compiles my .tex file).
This bundle is the deployment package needed to upload my code and libraries to Amazon Lambda.
The problem is: pdflatex has a lot of dependencies, and I'd have to gather everything in one place. I've found a script which does that for me:
http://www.metashock.de/2012/11/export-binary-with-lib-dependencies/
I've set my PATH to find the pdflatex binary at the new directory so I can use it and I had an issue: pdflatex couldn't find some dependencies. I was able to fix it by setting an environment variable to the folder where the script moved everything to:
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/home/ec2-user/lambda/local/lib64:/home/ec2-user/lambda/local/usr/lib64"
At this point, I was running pdflatex directly through bash. But my Python script was throwing an error when invoking pdflatex:
mktexfmt: No such file or directory
I can't find the format file `pdflatex.fmt'!
I was also able to solve this by moving the pdflatex.fmt and texmf.cnf files into my bundle folder and setting another environment variable as well:
export TEXFORMATS=/home/ec2-user/lambda/local/usr/bin
And now, my current problem, the python script keeps throwing the following error:
---! /home/ec2-user/lambda/local/usr/bin/pdflatex.fmt doesn't match pdftex.pool
(Fatal format file error; I'm stymied)
I've found some possible solutions: deleting a .texmf-var folder, which in my case does not exist, or using fmtutil, which my AMI image doesn't have...
1 - Was I missing any environment variable?
2 - Or did I move my pdflatex binary and its dependencies the wrong way?
3 - Is there a correct way to move a binary and all its dependencies so it can be used on another machine (considering the env variables)?
The Lambda environment is a container, not a regular EC2 instance. All files in your .zip are deployed to /var/task/ inside the container. Also, everything is mounted read-only except the /tmp directory, so it's impossible to run yum, for example.
For your case, I'd recommend putting the binaries in your zip and invoking them as /var/task/<binary name>. Remember to use a statically compiled binary built for a Linux compatible with the container's kernel.
samoconnor is doing pretty much exactly what you want in https://github.com/samoconnor/lambdalatex. Note that he sets environment variables in his handler function:
os.environ['PATH'] += ":/var/task/texlive/2017/bin/x86_64-linux/"
os.environ['HOME'] = "/tmp/latex/"
os.environ['PERL5LIB'] = "/var/task/texlive/2017/tlpkg/TeXLive/"
That might do the trick for you as well.
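Building on that, a hedged sketch of a handler that shells out to the bundled pdflatex. The paths follow the lambdalatex layout above, and the pdflatex_cmd helper name is mine, not part of any library:

```python
import os
import subprocess

def pdflatex_cmd(tex_path, out_dir="/tmp/latex"):
    # -interaction=nonstopmode keeps pdflatex from blocking on errors;
    # /tmp is the only writable directory in the Lambda container.
    return ["pdflatex", "-interaction=nonstopmode",
            "-output-directory", out_dir, tex_path]

def handler(event, context):
    # Assumed bundle layout: texlive shipped inside the deployment package.
    os.environ["PATH"] += ":/var/task/texlive/2017/bin/x86_64-linux/"
    os.environ["HOME"] = "/tmp/latex/"
    os.environ["PERL5LIB"] = "/var/task/texlive/2017/tlpkg/TeXLive/"
    os.makedirs("/tmp/latex", exist_ok=True)
    subprocess.check_call(pdflatex_cmd("/tmp/latex/document.tex"))
```

The .tex file would have to be written to /tmp first (for example, downloaded from S3 by the handler), since /var/task is read-only.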
I have a Docker container that, when run, starts a single Python script. On my local machine the script executes without issue, but inside the container it is unable to find the relevant library files (not external libraries, but modules from my own repo), even though I've confirmed they do exist within the container.
Error:
Directory Structure Inside Container:
Dockerfile:
Import Statements:
The repository does contain multiple Dockerfiles in different directories for easier deployment, but removing them did not change this behavior.
Adding sys.path.append("/tmp/") fixed the issue (/tmp/ being the directory that contains the uppermost directory referenced in the import statements).
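For reference, the fix amounts to putting the directory that contains the top-level package on the module search path before the imports run. The /tmp/ location is specific to my container layout:

```python
import sys

# The repo was copied to /tmp/ in the image, so the top-level package
# referenced by the import statements lives directly under /tmp/.
if "/tmp/" not in sys.path:
    sys.path.append("/tmp/")
```

An equivalent fix without touching the code is setting ENV PYTHONPATH=/tmp/ in the Dockerfile.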