Has anyone managed to deploy the Python ZeroMQ bindings on a vanilla AWS Elastic Beanstalk instance? Specifically I am using 64bit Amazon Linux 2016.09 v2.2.0 running Python 3.4
In my requirements.txt I have pyzmq listed. However, when I deploy to AWS, the logs show that the deployment first attempts to link against an installed libzmq (there isn't one in the standard AMI image) and then, once that fails, tries to compile libzmq from scratch, which fails at a step using cc1plus, presumably because g++ is also not part of the standard AMI image.
So my question is, how do I get either libzmq or g++ to be installed on my EC2 instance on deployment?
I read somewhere that you can put an .ebextensions folder inside your deployment and place a "configuration file" in it, which I attempted to do with:
packages:
  yum:
    g++: []
However, this changes nothing. I am also guessing at what to name the configuration file in that folder, e.g. test.config.
Or am I going about this wrong, and do I instead need to fiddle with the instance, install things like this myself, and then create a custom AMI image?
Related
In my Azure Function app, I have some Ubuntu packages, like the Azure CLI and kubectl, that I need to install on the host whenever it starts a new container. I have already tried start-up commands and also going into Bash. The former doesn't work, and the latter tells me permission is denied and the resource is locked. Is there any way to install these packages on function start-up in Azure Functions?
Installing the packages via Bash is not possible and will not help. When you write functions in Python and deploy them to Linux on Azure, the platform installs the packages listed in requirements.txt and merges them into a single bundle; when your function runs on Azure, it runs against that bundle. So trying to install packages after deployment is the wrong approach: specify the packages to be installed in requirements.txt before deployment, and then deploy to Azure.
I have a Python Azure function which executes locally. It is deployed to Azure and I selected the free app plan. The Python code depends on various modules, such as requests. The modules are not loaded into the app like they are locally on my machine, and the function fails when triggered.
I have tried installing the dependencies using the Kudu console from my site; this hangs with the message cleaning up >> every time.
I have tried installing the dependencies using the SSH terminal from my site; the installations succeed, but I cannot see the modules when I run pip list in Kudu, and the app still fails. I also cannot navigate the directories: ls does nothing.
I tried to install extensions using the portal, but this option is greyed out under Development Tools.
You can find a requirements.txt in your local function folder.
If you want the function on Azure to install requests, your requirements.txt should look like this (Azure will install the packages based on this file):
azure-functions
requests
All of these packages are packaged into a single bundle on Azure, so you cannot list the individual packages with pip list. Also, keep in mind that Kudu on Linux is limited and you cannot install packages through it.
The problem seems to come from VS Code; you can use the command line to deploy your function app instead.
For example, my function app on Azure is named 423PythonBowman2, so this is my command:
func azure functionapp publish 423PythonBowman --build remote
I import requests in the code, and when deployed from the command line my function works fine in the portal with no errors.
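For reference, this is roughly how requests is used in the function (a minimal sketch only; the trigger type and URL are illustrative, not the actual code):

import azure.functions as func
import requests

def main(req: func.HttpRequest) -> func.HttpResponse:
    # Call an external service with requests; the URL is just a placeholder.
    resp = requests.get("https://httpbin.org/get")
    return func.HttpResponse(f"Upstream returned {resp.status_code}")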
Have a look at the official doc:
https://learn.microsoft.com/en-us/azure/azure-functions/functions-run-local?tabs=macos%2Ccsharp%2Cbash#publish
I've developed and tested a dash app. It works as expected. Next step is to deploy the app to AWS Elastic Beanstalk using a preconfigured Docker container.
I am currently trying to set up a local docker environment for testing as described here
Running the command (via PowerShell):
docker build -t dash-app -f Dockerfile .
successfully downloads the preconfigured image, then proceeds to install the Python modules specified in requirements.txt, until it gets to the cryptography module, where it throws a runtime error saying it requires setuptools version 18.5 or newer.
My Dockerfile has this line in it:
FROM amazon/aws-eb-python:3.4.2-onbuild-3.5.1
I've tried adding a line to the Dockerfile to force-upgrade pip and setuptools within the container, as suggested here and here, but nothing seems to work.
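For concreteness, this is roughly the change I attempted (a sketch only; my guess is that the base image's ONBUILD triggers run the requirements install immediately after FROM, before any later RUN line can take effect):

FROM amazon/aws-eb-python:3.4.2-onbuild-3.5.1
# Attempted fix: upgrade packaging tools before cryptography is built
RUN pip install --upgrade pip setuptools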
My Python App Engine Flex application needs to connect to an external Oracle database. Currently I'm using the cx_Oracle Python package which requires me to install the Oracle Instant Client.
I have successfully run this locally (on macOS) by following the Instant Client installation steps. The steps required me to do the following:
Make a directory called /opt/oracle
Create a symlink from /opt/oracle/instantclient_12_2/libclntsh.dylib.12.1 to ~/lib/
However, I am confused about how to do the same thing in App Engine Flex (instructions). Specifically, here's what I'm confused about:
The instructions say I should run sudo yum install libaio to install the libaio package. How do I do this on GAE Flex? Or is this package already available?
I think I can add the Instant Client files to GAE (a whopping ~100MB!), then set the LD_LIBRARY_PATH environment variable in app.yaml to export LD_LIBRARY_PATH=/opt/oracle/instantclient_12_2:$LD_LIBRARY_PATH (rough sketch below, after these questions). Will this work?
Is this even feasible without using custom Docker containers on App Engine Flex?
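For concreteness, here is roughly the app.yaml change I had in mind (untested, a sketch only; I believe env_variables only takes literal values, so the existing $LD_LIBRARY_PATH probably can't be referenced there):

runtime: python
env: flex
env_variables:
  LD_LIBRARY_PATH: /opt/oracle/instantclient_12_2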
Overall I'm not sure if I'm on the right track. Would love to hear from someone who has managed this before :)
If any of your dependencies is not available in the base GAE flex images provided by Google and cannot be installed via pip (because it's not a Python package, it's not available on PyPI, or for whatever other reason), then you can't use the requirements.txt file to get it installed in your GAE flex app.
The proper way to satisfy such dependencies would be to build your own custom runtime. From About Custom Runtimes:
Custom runtimes allow you to define new runtime environments, which might include additional components like language interpreters or application servers.
Yes, that means providing a custom Dockerfile. In your particular case you'd be installing the Instant Client and libaio inside this Dockerfile. See also Building Custom Runtimes.
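A rough sketch of what such a Dockerfile could look like, assuming the Instant Client files are bundled with the app source and the app is served with gunicorn (the base image, paths, package version, and entrypoint here are illustrative, not a verified recipe):

FROM gcr.io/google-appengine/python
# OS-level dependency needed by the Oracle Instant Client
RUN apt-get update && apt-get install -y libaio1 && rm -rf /var/lib/apt/lists/*
# Instant Client files shipped alongside the app source (hypothetical layout)
COPY instantclient_12_2/ /opt/oracle/instantclient_12_2/
ENV LD_LIBRARY_PATH=/opt/oracle/instantclient_12_2
# Application code and Python dependencies
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
# Start the app; the module:app entrypoint name is illustrative
CMD gunicorn -b :$PORT main:app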
Answering your first question, I think that the instructions on the Oracle website just show that you have to install that library for your application to work.
In the case of App Engine flex, the way to ensure that libraries are present in the deployment is with the requirements.txt file. There is a documentation page which explains how to do so.
On the other hand, I will assume that the "Instant Client files" are not libraries, but data your app needs in order to run. You could use Google Cloud Storage to serve them, or any other storage option within Google Cloud.
I believe that, if this is all you need for your app to work, pushing your own custom container should not be necessary.
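If you go the Cloud Storage route, a minimal sketch of fetching such a file at startup with the google-cloud-storage client (the bucket and object names are hypothetical):

from google.cloud import storage

# Download one Instant Client file from a bucket to the local filesystem.
client = storage.Client()
bucket = client.bucket("my-instantclient-bucket")
blob = bucket.blob("instantclient_12_2/libclntsh.so.12.1")
blob.download_to_filename("/opt/oracle/instantclient_12_2/libclntsh.so.12.1")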
I've grown tired of trying to get elastic beanstalk to run python 3.5. Instead, I want to create a custom ami which establishes a separate virtualenv for the application (with python 3.5) and knows enough to launch the application using that virtualenv.
The problem is that once I ssh into the ec2 instance in order to create my custom ami, I am left wondering where the scripts are which govern the elastic beanstalk deployment behavior.
For example, when deploying via Travis to Elastic Beanstalk, EB knows enough to look in a specific folder for the file application.py and to execute it using a specific virtualenv (or maybe even, shudder, the machine's root Python installation). It even knows to run pip install -r requirements. Can anyone point me to the script(s) which govern this behavior?
UPDATE
For those referencing the .ebextensions option, please see Elastic beanstalk require python 3.5. So far, it has not proved able to handle this problem due to the interdependency between the EB image's operating system and the Python environment used to run the application.
All of the EB files can be found in /opt/elasticbeanstalk - /opt/elasticbeanstalk/hooks is probably most relevant for what you're looking for.
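If you want to poke around, something like this should work on the older Amazon Linux platform versions (the exact subdirectory layout may vary by platform version):

ls /opt/elasticbeanstalk/hooks/appdeploy
# stages such as pre, enact and post; scripts in each run in alphabetical order
cat /opt/elasticbeanstalk/hooks/appdeploy/pre/*.sh
# shows what runs before your app starts (virtualenv setup, dependency install, etc.)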
You can use .ebextensions to run scripts you want when your AMI starts.
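For example, a minimal sketch of an .ebextensions config file (the file name is arbitrary as long as it ends in .config, e.g. .ebextensions/01_setup.config) that runs a command on deployment; on Amazon Linux the yum package for the C++ compiler is typically gcc-c++ rather than g++:

commands:
  01_install_gcc:
    command: yum install -y gcc-c++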