I'm trying to deploy a custom machine learning library on Lambda using a Lambda docker image.
The image looks approximately as follows:
FROM public.ecr.aws/lambda/python:3.9
RUN mkdir -p /workspace
WORKDIR /workspace
# Copy necessary files (e.g. setup.py and library/)
COPY ...
# Install library and requirements
RUN pip3 install . --target "${LAMBDA_TASK_ROOT}"
# Copy lambda scripts
COPY ... "${LAMBDA_TASK_ROOT}/"
# CMD [ "my_script.my_handler" ]
Thus, it installs a local Python package, including dependencies, to LAMBDA_TASK_ROOT (/var/task). The CMD (handler) is overridden in AWS, e.g. preprocessing.lambda_handler.
The container works fine for handlers that DO NOT use the custom Python library (on AWS and locally). However, when a handler uses the custom library on AWS, it fails with Runtime.ImportModuleError, claiming "errorMessage": "Unable to import module 'MY_HANDLER': No module named 'MY_LIBRARY'".
Everything works when running the container locally with the runtime interface emulator. The file-level permissions should be ok as well.
Is there anything wrong with this approach?
Answering my own question here: there was no problem with the Docker container itself. The problem was that Lambda references Docker images by their SHA digest, not by tag, so pushing a new image under the same tag did not update the functions' containers. To update the container image, you have to run something like
aws lambda update-function-code --function-name $MY_FN --image-uri $MY_URI
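For reference, the full update flow can be sketched like this (same placeholder variables; it assumes you are already logged in to ECR):
docker build -t $MY_URI .
docker push $MY_URI
# Lambda resolves the tag to the new SHA digest at this moment:
aws lambda update-function-code --function-name $MY_FN --image-uri $MY_URI
# optionally block until the new image is active:
aws lambda wait function-updated --function-name $MY_FN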
I am using Node.js version 14.15.1 on my Mac. I installed the AWS CDK using
sudo npm install -g aws-cdk
When I check my cdk version, the output is just "cdk", without the version:
% cdk --version
cdk
When I try to initialize a sample app in Python, I get this result rather than the expected result from the tutorial I am following:
% cdk init sample-app --language python
Usage:
cdk [-vbo] [--toc] [--notransition] [--logo=<logo>] [--theme=<theme>] [--custom-css=<cssfile>] FILE
cdk --install-theme=<theme>
cdk --default-theme=<theme>
cdk --generate=<name>
Likely there is something else called cdk ahead of the Node.js aws-cdk package in your PATH. You can use the which command to figure out which binary is actually being called when you run cdk. On my system, the Node.js aws-cdk package gets installed to /usr/local/bin/cdk.
Try running which cdk; if your shell tells you it's running a different cdk binary, uninstall whatever package provides it and retry.
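For example, the diagnosis could look like this (the paths shown are illustrative):
which cdk                 # e.g. /opt/homebrew/bin/cdk instead of /usr/local/bin/cdk
ls -l "$(which cdk)"      # inspect what the binary actually is or links to
# after uninstalling the conflicting package (or fixing your PATH):
hash -r                   # make the shell forget the cached location
cdk --version             # should now print a real version string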
I have a Python Azure function which executes locally. It is deployed to Azure, and I selected the free app plan. The Python code depends on various modules, such as requests. The modules are not loaded into the app like they are locally on my machine, and the function fails when triggered.
I have tried installing the dependencies using the Kudu console from my site; this hangs with the message cleaning up >> every time.
I have tried installing the dependencies using the SSH terminal from my site; the installations succeed, but I cannot see the modules when I run pip list in Kudu, and the app still fails. I also cannot navigate the directories: ls does nothing.
I tried to install extensions using the portal, but this option is greyed out under Development Tools.
You can find a requirements.txt in your local function folder.
If you want the function on Azure to install requests, your requirements.txt should look like this (Azure will install the dependencies based on this file):
azure-functions
requests
All these packages will be bundled into a new package on Azure, so you cannot list them with pip list. Also, keep in mind that Kudu on Linux is limited and you cannot install packages through it.
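If you are unsure what to list there, one common approach is to generate the file from the virtual environment in which the function works locally (a sketch):
pip freeze > requirements.txt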
The problem seems to come from VS Code; you can deploy your function app from the command line instead.
For example, my function app on Azure is named 423PythonBowman, so this is my command:
func azure functionapp publish 423PythonBowman --build remote
I import requests in the code, and with the command-line deploy my function works fine in the portal with no errors.
Have a look at the official doc:
https://learn.microsoft.com/en-us/azure/azure-functions/functions-run-local?tabs=macos%2Ccsharp%2Cbash#publish
I am using a docker image (not mine) created through this dockerfile.
ROS Kinetic, ROS2 and some important packages are already installed on this image.
When I run the image with docker run -it <image-hash-code>, ROS Kinetic works well and the packages, like gym, can be found by python3.
So, all in all the docker image is a great starting point for my own project.
However, I would like to change the python scripts, which are stored on the docker image. The python scripts use the installed packages and interact with ROS Kinetic as well as with ROS2.
I do not want to install all these programs and packages, which are already installed on the docker image, on my Ubuntu system just to test my own python scripts.
Is there a way to mount the docker image so that I can test my python scripts?
Of course, I can use vim to edit python scripts, but I am thinking more of IntelliJ.
So, how can an IDE (e.g. IntelliJ) access and run a python script that is stored on the docker image, with the same result as if I executed the script directly in the running container?
The method suggested by Lord Johar (mounting the container, editing the scripts with an IDE, saving the image and then running it) works, but it is not what I would like to achieve.
My goal is to use the docker container as a development environment that an IDE can access, using the programs and packages installed on it.
In other words: I would like to use an IDE on my host system in order to test my python scripts in the same way as the IDE would be installed on the docker image.
You can use docker commit.
Run docker commit <your python container>.
Now type docker images to see the new image.
You should then rename and tag the image, like this: docker tag <image ID> mypython:v1.
Use docker run and enjoy your code.
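Put together, the flow might look like this (the container ID and image name are placeholders):
docker ps                                  # find your running container's ID
docker commit <container ID> mypython:v1   # snapshot it directly as a tagged image
docker images                              # verify the new image is listed
docker run -it mypython:v1                 # start a container from the snapshot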
It's better to mount a volume into your container to persist your code and data; see Docker volumes.
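For example, a bind mount that makes a host directory with your scripts visible inside the container could look like this (both paths are hypothetical):
docker run -it -v ~/my_ros_scripts:/workspace/scripts <image-hash-code>
# edits made on the host under ~/my_ros_scripts are immediately visible inside
# the container at /workspace/scripts and survive container removal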
However, I would like to change the python scripts, which are stored on the docker image. The python scripts use the installed packages and interact with ROS Kinetic as well as with ROS2.
You must mount a volume into your container and edit your files there.
A better way is to make your own image:
Install Docker on your Ubuntu system, pull a Python base image, and use a Dockerfile to create your image. Every time you change your code, build a new image with a new tag, then run the image and enjoy your Docker container.
For this second way:
Copy your python app to /path/to/your/app (my main file is index.py)
Change your directory to /path/to/your/app
Create a file named Dockerfile:
FROM python:alpine3.7
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
EXPOSE 5000
CMD ["python", "./index.py"]
Also note the RUN directive, which calls pip and points it at the requirements.txt file. This file contains a list of the dependencies that the application needs to run.
Build your image:
docker build --tag my-app .
Note the dot at the end of the command; it is important. Also, you must run the command from /path/to/your/app, the directory that contains the Dockerfile.
Now you can run your container:
docker run --name python-app -p 5000:5000 my-app
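To get closer to the live-editing workflow the question describes, you can combine this with a bind mount so the container runs whatever code is currently on your host (a sketch; it assumes you run it from your app directory):
docker run --name python-app -p 5000:5000 -v "$(pwd)":/app my-app
# the mounted host directory shadows the /app baked into the image, so changes
# made in your IDE take effect without rebuilding the image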
What you are looking for is tooling which can communicate with a local or remote Docker daemon.
I know that Eclipse can do that. The tooling for this is called Docker Tooling. It can explore docker images and containers on a machine running a Docker daemon in your network. It can start and stop containers, commit containers to images, and create images.
What you require (as I understand it) is the ability to commit containers, since you are asking about changing scripts inside your container. If you want to persist your work on those docker containers, committing is indispensable.
Since I am not familiar with IntelliJ, I would suggest having a look at Eclipse's Docker Tooling wiki to get a clue whether it is what you are looking for. Then, once you have an idea, look for the analogous features in your favorite IDE, such as IntelliJ.
Another IDE which supports Docker exploration is CLion.
I'm new to using Docker, so I'm either looking for direct help or a link to a relevant guide. I need to train some deep learning models on my school's Linux server, but I can't manually install pytorch and other python packages since I don't have root access (sudo). Another student said that he uses docker and has everything ready to go in his container.
I'm wondering how to wrap up my code and relevant packages into a container that I can push to the linux server and then run.
To address your specific problem: the easiest way I have found to get code into a container is to use git.
Start the container in interactive mode, or SSH to it if it's attached to a network.
git clone <your awesome deep learning code>. Have a requirements.txt file in your git repo. Change directories into your local clone of the repo and run pip install -r requirements.txt.
Run whatever script you need to run your code; a sketch of the whole flow follows below. Note you can easily put your pip install command in one of your run scripts.
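A minimal sketch of that flow, assuming a stock PyTorch image from Docker Hub and a hypothetical repository URL:
docker run -it --name dl-box pytorch/pytorch bash   # add --gpus all if the host has the NVIDIA toolkit set up
# inside the container (install git first if the image does not ship it):
git clone https://github.com/<you>/<your-dl-project>.git
cd <your-dl-project>
pip install -r requirements.txt
python train.py   # or whatever your entry script is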
It's important to remember that docker containers are stateless/ephemeral. You should not expect the container or its contents to persist in any durable fashion. This specific issue is addressed by mapping a directory on the host system to a directory in the container.
Side note: I recommend starting with the docker tutorial first. You can easily skip over the installation parts if you are working on a system that already has docker installed and where you have permission to build, start, and stop containers.
I don't have root access (sudo). Another student said that he uses docker
I would like to point out that docker itself requires root privileges: you need to be root, or in the docker group, to talk to the Docker daemon.
Instead, I think you should look at using something like Google Colab or JupyterLab. This gives you the added benefit of code that is backed up on a remote server.
I would like to deploy a Python Flask application on beanstalk.
The application depends on external packages (e.g. geopy) and internal packages (e.g. adam_geography).
The manual says:
Create a requirements.txt file and place it in the top-level directory
of your source bundle.
This would probably fetch geopy and its dependencies, but would not fetch adam_geography, which is available from a custom repo inside my VPC.
How do I specify/upload private, internal Python package dependencies in a Beanstalk application?
1) Copy the internal Python package to the server.
2) Use pip's "editable installs" feature to install the private package:
pip install -e path/to/SomeProject
http://pip.readthedocs.org/en/latest/reference/pip_install.html#editable-installs
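One way to do step 1 without extra infrastructure is to ship the package inside your source bundle and point requirements.txt at it (a sketch; the vendor/ directory name is hypothetical):
# requirements.txt, with the internal package vendored into
# a vendor/ directory inside the source bundle
geopy
-e ./vendor/adam_geography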
Use ebextensions to specify custom commands to run on all your EC2 instances; these can be used to run pip, as #shavenwarthog suggested in his answer.
Create a directory called .ebextensions in your app source root directory. Inside this directory, create a file with a .config extension, say 01-custom-files.config.
This file can contain custom Unix commands you want to run on each EC2 instance.
You can run your own scripts here.
You can also use container_commands, which are executed after your app source is unzipped on the EC2 instance.
Read more about commands and container_commands here. You can also find examples here:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html#customize-containers-format-commands
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html#customize-containers-format-container_commands
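For example, a hypothetical .ebextensions/01-custom-files.config that installs the internal package from a private index inside the VPC could look like this (the index URL is a placeholder):
container_commands:
  01_install_internal_packages:
    command: "pip install adam_geography --index-url https://pypi.internal.example.com/simple"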