AWS CDK version not displaying in Terminal - python

I am using Node.js version 14.15.1 on my Mac. I installed the AWS CDK using
sudo npm install -g aws-cdk
When I check my cdk version, the output is just "cdk" rather than a version number:
% cdk --version
cdk
When I try to initialize a sample app in Python, I get this output rather than the expected result in the tutorial I am following:
% cdk init sample-app --language python
Usage:
cdk [-vbo] [--toc] [--notransition] [--logo=<logo>] [--theme=<theme>] [--custom-css=<cssfile>] FILE
cdk --install-theme=<theme>
cdk --default-theme=<theme>
cdk --generate=<name>

Most likely there is something else called cdk ahead of the Node.js aws-cdk package in your PATH. You can use the which command to figure out which binary is actually being run when you type cdk. On my system, the Node.js aws-cdk package is installed to /usr/local/bin/cdk.
Try running which cdk, and if your shell reports a different cdk binary, uninstall whichever package provides it and retry.
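For example, on a system where another cdk shadows the AWS one (the Anaconda path below is hypothetical, just to illustrate a shadowing binary):
% which cdk
/usr/local/anaconda3/bin/cdk
% /usr/local/bin/cdk --version
Once the other cdk is removed (or /usr/local/bin is moved ahead of it in $PATH), plain cdk --version should report the AWS CDK version again.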

Changes are not being deployed to AWS console

I am deploying changes to the AWS console through the command
cdk deploy --all
Previously it worked well and created the stack on the AWS console, but now, after creating another stack, when I tried to run the same command cdk deploy all, rather than deploying the code to AWS it just shows the following four lines:
Usage:
cdk [-vbo] [--toc] [--notransition] [--logo=<logo>] [--theme=<theme>] [--custom-css=<cssfile>] FILE
cdk --install-theme=<theme>
cdk --default-theme=<theme>
cdk --generate=<name>
Something changed in your environment and now cdk is pointing to the Courseware Development Kit instead of the aws-cdk.
You can confirm this by studying the output of which cdk.
To fix this, uninstall the Courseware Development Kit, or move it further down in your $PATH and create a shell alias for it.
Also, cdk deploy all is not the right command - you're looking for cdk deploy --all.
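A sketch of the cleanup (this assumes the conflicting package was installed with pip, which is how the Courseware Development Kit is typically distributed):
% which cdk
% pip uninstall cdk    # remove the conflicting package, if pip-installed
% hash -r              # make the shell drop its cached lookup of cdk
% cdk deploy --all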

AWS Lambda Docker Custom Python Library (Runtime.ImportModuleError)

I'm trying to deploy a custom machine learning library on Lambda using a Lambda docker image.
The Dockerfile looks approximately as follows:
FROM public.ecr.aws/lambda/python:3.9
RUN mkdir -p /workspace    # RUN, not CMD: CMD would only set the container's default command
WORKDIR /workspace
# Copy necessary files (e.g. setup.py and library/)
COPY ...
# Install library and requirements into the Lambda task root
RUN pip3 install . --target "${LAMBDA_TASK_ROOT}"
# Copy lambda scripts
COPY ... "${LAMBDA_TASK_ROOT}/"
# CMD [ "my_script.my_handler" ]  # the handler is set in the function configuration instead
Thus, it installs a local Python package, including its dependencies, to LAMBDA_TASK_ROOT (/var/task). The CMD (handler) is overridden in AWS, e.g. preprocessing.lambda_handler.
The container works fine for handlers that DO NOT use the custom Python library (on AWS and locally). However, when a handler tries to use the custom library on AWS, it fails with Runtime.ImportModuleError, claiming "errorMessage": "Unable to import module 'MY_HANDLER': No module named 'MY_LIBRARY'".
Everything works when running the container locally with the runtime interface emulator (see the sketch below). The file-level permissions should be OK as well.
Is there anything wrong with this approach?
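For reference, this is roughly how I run it locally (the image name is a placeholder; the AWS base image ships with the runtime interface emulator):
docker build -t my-lambda-image .
docker run -p 9000:8080 my-lambda-image preprocessing.lambda_handler
# in a second shell, invoke the handler through the emulator:
curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{}'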
Answering my own question here: there was no problem with the Docker container itself. The problem was that Lambda references Docker image versions by their SHA digest, not by tag, so pushing an update to the same tag did not update the functions' containers. To update the container image, you have to run something like
aws lambda update-function-code --function-name $MY_FN --image-uri $MY_URI
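A fuller sketch of the update flow ($MY_FN and $MY_URI are placeholders for the function name and the ECR image URI):
docker build -t "$MY_URI" .
docker push "$MY_URI"                                      # same tag, new digest
aws lambda update-function-code --function-name "$MY_FN" --image-uri "$MY_URI"
aws lambda wait function-updated --function-name "$MY_FN"  # block until the update finishes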

Is there any way to run and deploy Ubuntu packages on Azure Functions startup?

In my Azure Function app, I have some Ubuntu packages, such as the Azure CLI and kubectl, that I need to install on the host whenever it starts a new container. I have already tried startup commands and also going into Bash; the former doesn't work, and the latter tells me permission is denied and the resource is locked. Is there any way to install these packages on function startup in Azure Functions?
Installing the packages via Bash will not work. When you write functions in Python and deploy them to Linux on Azure, the platform installs packages according to requirements.txt at deployment time and merges them into a single bundle; when the function runs on Azure, it runs from that bundle. Trying to install packages after deployment is therefore ineffective: specify the packages you need in requirements.txt before deployment, then deploy to Azure.
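As a sketch, where a tool happens to be distributed on PyPI (the Azure CLI is; kubectl is not), it can simply be listed in requirements.txt alongside the function's other dependencies:
azure-functions
azure-cli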

Cannot install packages for Python Azure Function

I have a Python Azure Function which executes locally. It is deployed to Azure, and I selected the free app plan. The Python code depends on various modules, such as requests. The modules are not loaded into the app like they are locally on my machine, and the function fails when triggered.
I have tried installing the dependencies using the Kudu console from my site; this hangs with the message "cleaning up >>" every time.
I have tried installing the dependencies using an SSH terminal from my site; the installations succeed, but I cannot see the modules when I run pip list in Kudu, and the app still fails. I cannot navigate the directories; ls does nothing.
I tried to install extensions using the portal, but this option is greyed out under Development Tools.
You can find a requirements.txt in your local function folder.
If you want the function on Azure to install requests, your requirements.txt should look like this (Azure will install the packages based on this file):
azure-functions
requests
All these packages are bundled into a single package on Azure, so you cannot list them with pip list. Also, keep in mind that Kudu on Linux is limited, and you cannot install packages through it.
The problem seems to come from VS Code; you can use the command line to deploy your function app instead.
For example, my function app on Azure is named 423PythonBowman, so this is my command:
func azure functionapp publish 423PythonBowman --build remote
I import requests in the code, and deployed from the command line my function works fine in the portal with no errors.
Have a look at the official doc:
https://learn.microsoft.com/en-us/azure/azure-functions/functions-run-local?tabs=macos%2Ccsharp%2Cbash#publish
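For illustration, a minimal handler that exercises requests after a remote build (a sketch; the trigger binding and names are assumptions):
import azure.functions as func
import requests

def main(req: func.HttpRequest) -> func.HttpResponse:
    # succeeds only if the remote build installed requests from requirements.txt
    r = requests.get("https://example.com")
    return func.HttpResponse(f"requests is available, status {r.status_code}")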

Deploying dash app to AWS Elastic Beanstalk, setuptools version error

I've developed and tested a Dash app; it works as expected. The next step is to deploy the app to AWS Elastic Beanstalk using a preconfigured Docker container.
I am currently trying to set up a local Docker environment for testing, as described here.
Running the command (via PowerShell):
docker build -t dash-app -f Dockerfile .
successfully downloads the preconfigured image, then proceeds to install the Python modules specified in requirements.txt, until it gets to the cryptography module, where it throws a runtime error saying it requires setuptools version 18.5 or newer.
My Dockerfile has this line in it:
FROM amazon/aws-eb-python:3.4.2-onbuild-3.5.1
I've tried adding a line to the Dockerfile to force-upgrade pip and setuptools within the container, as suggested here and here, but nothing seems to work.
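For reference, the kind of line those answers suggest looks like this (a sketch against my base image):
FROM amazon/aws-eb-python:3.4.2-onbuild-3.5.1
# upgrade build tooling before building anything else in this Dockerfile
RUN pip install --upgrade pip setuptools
One plausible reason this has no effect: onbuild base images fire their ONBUILD triggers (presumably including pip install -r requirements.txt here) immediately after the FROM line, before any RUN in the child Dockerfile, so cryptography is already built against the old setuptools by the time the upgrade runs.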
