I've developed and tested a Dash app. It works as expected. The next step is to deploy the app to AWS Elastic Beanstalk using a preconfigured Docker container.
I am currently trying to set up a local Docker environment for testing, as described here.
Running the command (via PowerShell):
docker build -t dash-app -f Dockerfile .
successfully downloads the preconfigured image and then installs the Python modules specified in requirements.txt, until it gets to the cryptography module, which throws a runtime error saying it requires setuptools version 18.5 or newer.
My Dockerfile has this line in it:
FROM amazon/aws-eb-python:3.4.2-onbuild-3.5.1
I've tried adding a line to the Dockerfile to force-upgrade pip and setuptools within the container, as suggested here and here, but nothing seems to work.
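For reference, this is the shape of the suggested fix. Note that with an onbuild base image, the ONBUILD triggers (including the pip install of requirements.txt) execute immediately after the FROM line, before any later instruction in the downstream Dockerfile, which may be why the upgrade appears to have no effect (the RUN line below is a sketch of the suggestion, not a verified fix):

```dockerfile
FROM amazon/aws-eb-python:3.4.2-onbuild-3.5.1

# This runs only AFTER the base image's ONBUILD triggers have already
# executed pip install -r requirements.txt, so it comes too late to
# help the cryptography build.
RUN pip install --upgrade pip "setuptools>=18.5"
```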
I am creating Docker containers to deploy my apps via an API: one container for app_manager, which exposes the API, and one container for the application itself. Both app_manager and the deployed applications are written in Python. I am testing deployment locally prior to deploying on the server. After successfully installing app_manager, I sent a request in a web browser to deploy a specific version of my application via the app_manager API (which uses subprocess.run). However, the response is null, and by checking the Docker logs I found the following error:
docker: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.32' not found (required by docker)
I do not use Jenkins, and adding the libc6 library to the Dockerfile does not change anything. I can still install my app in a Docker container manually with a couple of bash commands. I can even deploy my app from the VS Code editor via subprocess.run() directly. How can I fix this so that I can deploy by sending an API request?
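For context, a minimal sketch of the subprocess.run pattern such an API handler typically uses (the helper name and the example image tag are placeholders, not the actual app_manager code); capturing stderr this way is also how errors like the GLIBC one above end up in the logs:

```python
import subprocess

def run_checked(cmd):
    """Run a command and return its stdout; raise with captured stderr on failure."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode != 0:
        # Surface the subprocess's error text instead of silently returning null.
        raise RuntimeError(result.stderr.strip())
    return result.stdout

# In the deploy endpoint this would wrap the docker invocation, e.g.:
# run_checked(["docker", "run", "--rm", "myapp:1.0.0"])
```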
I am using Ubuntu 22.04.1 LTS, Python 3.8, Docker version 20.10.12, build 20.10.12-0ubuntu4
Thank you
I am using Node.js version 14.15.1 on my Mac. I installed the AWS CDK using
sudo npm install -g aws-cdk
When I check my cdk version, the output is just "cdk" without telling me the version:
% cdk --version
cdk
When I try to initialize a sample app in Python, I get this result rather than the expected result in the tutorial I am following:
% cdk init sample-app --language python
Usage:
cdk [-vbo] [--toc] [--notransition] [--logo=<logo>] [--theme=<theme>] [--custom-css=<cssfile>] FILE
cdk --install-theme=<theme>
cdk --default-theme=<theme>
cdk --generate=<name>
Most likely there is something else called cdk ahead of the Node.js aws-cdk package in your PATH. You can use the which command to figure out which binary is actually being invoked when you run cdk. On my system, the Node.js aws-cdk package is installed to /usr/local/bin/cdk.
Try running which cdk, and if your shell tells you it's running a different cdk binary, uninstall whatever that package is and retry.
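The shadowing itself is easy to reproduce: whichever directory appears first in PATH wins the lookup. A small demo (the directory names and stub scripts are made up for illustration):

```shell
# Two fake "cdk" executables in two directories.
mkdir -p /tmp/shadow-demo/a /tmp/shadow-demo/b
printf '#!/bin/sh\necho first\n'  > /tmp/shadow-demo/a/cdk
printf '#!/bin/sh\necho second\n' > /tmp/shadow-demo/b/cdk
chmod +x /tmp/shadow-demo/a/cdk /tmp/shadow-demo/b/cdk

# The first PATH entry wins, just as a stray cdk earlier in PATH
# shadows the AWS CDK binary.
PATH="/tmp/shadow-demo/a:/tmp/shadow-demo/b:$PATH" command -v cdk
# → /tmp/shadow-demo/a/cdk
```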
I have a Python Azure Function which executes locally. It is deployed to Azure, and I selected the free app plan. The Python code depends on various modules, such as requests. The modules are not loaded into the app as they are locally on my machine, and the function fails when triggered.
I have tried installing the dependencies using the Kudu console from my site; this hangs with the message cleaning up >> every time.
I have tried installing the dependencies using the SSH terminal from my site; the installations succeed, but I cannot see the modules when I run pip list in Kudu, and the app still fails. I cannot navigate the directories; ls does nothing.
I tried to install extensions using the portal, but this option is greyed out under Development Tools.
You can find a requirements.txt in your local function folder.
If you want the function on Azure to install requests, your requirements.txt should look like this (Azure installs the extensions based on this file):
azure-functions
requests
All these packages will be bundled into a single package on Azure, so you cannot display them with pip list. Also, please keep in mind that Kudu on Linux is limited and you cannot install packages through it.
The problem seems to come from VS Code; you can use the command line to deploy your function app instead.
For example, my function app on Azure is named 423PythonBowman2, so this is my command:
func azure functionapp publish 423PythonBowman --build remote
I import requests in the code, and after deploying from the command line my function works fine in the portal with no errors.
Have a look at the official doc:
https://learn.microsoft.com/en-us/azure/azure-functions/functions-run-local?tabs=macos%2Ccsharp%2Cbash#publish
I am trying to deploy a Flask app to an Azure Web App (Linux, python3.7 runtime) using FTP.
I copied application.py and a requirements.txt over, but I can see in the logs that nothing is being installed.
The Web App is using an 'antenv' virtual environment but it won't install anything. How do I add libraries to this 'antenv' virtual environment?
Yes, I see that you have resolved the issue. You must use Git to deploy Python apps to App Service on Linux so that the dependencies in requirements.txt (in the root folder) are installed.
To install Django and any other dependencies, you must provide a requirements.txt file and deploy to App Service using Git.
The antenv folder is where App Service creates a virtual environment with your dependencies. If you expand this node, you can verify that the packages you named in requirements.txt are installed in antenv/lib/python3.7/site-packages. Refer to this document for more details.
Additionally, although the container can run Django and Flask apps automatically, provided the app matches an expected structure, you can also provide a custom startup command file through which you have full control over the Gunicorn command line. A custom startup command is typically required for Flask apps, but not for Django apps.
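For instance, if the Flask entry module is application.py exposing an object named app (as in the question above), a custom startup command along these lines is typical (the port and worker count here are illustrative, not required values):

```shell
gunicorn --bind=0.0.0.0:8000 --workers=2 application:app
```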
It turns out I had to run these commands and do a git push while my local venv was activated. At that point I saw Azure start downloading all the libraries in my requirements.txt.
Has anyone managed to deploy the Python ZeroMQ bindings on a vanilla AWS Elastic Beanstalk instance? Specifically I am using 64bit Amazon Linux 2016.09 v2.2.0 running Python 3.4
In my requirements.txt I have pyzmq listed. However, when I deploy to AWS, the logs show that the deployment first attempts to link against an installed libzmq (there isn't one in the standard AMI image) and, once that fails, tries to compile libzmq from scratch, which fails at a step using cc1plus, presumably because g++ is also not part of the standard AMI image.
So my question is, how do I get either libzmq or g++ to be installed on my EC2 instance on deployment?
I read somewhere that you can make a .ebextensions folder inside your deployment and put a "configuration file" in it, which I attempted with:
packages:
  yum:
    g++: []
However, this changes nothing. I am also only guessing at what to name the configuration file in that folder, e.g. test.config.
Or am I going about this wrong, and do I instead need to fiddle with the instance, install things like this myself, and then create a custom AMI image?
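For what it's worth, the yum package name for the GNU C++ compiler is gcc-c++ rather than g++, so a file such as .ebextensions/packages.config (any filename ending in .config works) might look like the sketch below; the zeromq-devel name is a guess and depends on which repositories are enabled on the instance:

```yaml
packages:
  yum:
    gcc-c++: []
    zeromq-devel: []
```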