I have a Python script for provisioning infrastructure in Azure (IaC). The script mostly uses the Python SDK, but it also runs several Azure CLI commands for the cases where I couldn't find an equivalent call in the SDK.
My goal is to trigger this script on demand using Azure Functions. When I test the function locally everything works fine, since I have the Azure CLI installed on my machine. However, once I publish it to Azure Functions, I run into this error: /bin/sh: 1: az: not found
Below is the sample Python code I trigger from the Azure Function (note that the rest of the script works fine, so I can create the resource group, SQL server, etc.; the problem is only the az commands). Is it possible to install the Azure CLI on the Azure Function, and if so, how, so that I can run these CLI commands?
Here is the Python code causing the error:
from subprocess import call, check_output

# Log in to Azure
call("az login --service-principal -u '%s' -p '%s' --tenant '%s'" % (client_id, client_secret, tenant_id), shell=True)
# Look up the B2C directory resource and capture its ID
b2c_id = check_output("az resource show -g '<rg_name>' -n '<b2c_name>' --resource-type 'Microsoft.AzureActiveDirectory/b2cDirectories' --query id --output tsv", shell=True)
print("The B2C ID is: %s" % b2c_id)
I tried to create a simple Azure Function with an HttpTrigger that invokes the Azure CLI in several different ways, but none of them work in the cloud after publishing.
It seems the only way to make that work is to add the azure-cli package to the requirements.txt file and publish the function as a Docker image with the --build-native-deps option of func azure functionapp publish <your function app name>, as the error below says:
There was an error restoring dependencies. ERROR: cannot install antlr4-python3-runtime-4.7.2 dependency: binary dependencies without wheels are not supported. Use the --build-native-deps option to automatically build and configure the dependencies using a Docker container. More information at https://aka.ms/func-python-publish
Since I don't have Docker installed locally, I could not successfully run func azure functionapp publish <your function app name> --build-native-deps.
However, shelling out to az is not the only way to use the Azure CLI's functionality. The az command is just a runnable script, not a compiled binary. After reviewing az and some of the source code of the azure-cli package, I think you can import the package directly via from azure.cli.core import get_default_cli and use it to perform the same operations, as in the code below.
import sys

from azure.cli.core import get_default_cli

az_cli = get_default_cli()
exit_code = az_cli.invoke(args)  # args is a list of CLI arguments, e.g. ["group", "list"]
sys.exit(exit_code)
The code above is adapted from the source of azure/cli/__main__.py in the azure-cli package; you can find it under the lib/python3.x/site-packages path of your virtual environment.
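For example, the az resource show lookup from the question could be rewritten along these lines (a sketch only; <rg_name> and <b2c_name> are the same placeholders as in the question, and it assumes you are already logged in):
from azure.cli.core import get_default_cli

az_cli = get_default_cli()
# Same lookup as the original subprocess call, but executed in-process
az_cli.invoke(["resource", "show",
               "-g", "<rg_name>",
               "-n", "<b2c_name>",
               "--resource-type", "Microsoft.AzureActiveDirectory/b2cDirectories"])
b2c_id = az_cli.result.result["id"]  # invoke() stores the command result on the CLI object
print("The B2C ID is: %s" % b2c_id)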
Hope it helps.
Thanks Peter, I used something similar in the end and got it working. I wrote this helper function that runs az CLI commands and returns the result when I need to store the output. For example, if I want to know the object ID of a service principal, I can capture the result as shown further below:
from azure.cli.core import get_default_cli

def az_cli(args):
    cli = get_default_cli()
    cli.invoke(args)
    if cli.result.result:
        return cli.result.result
    elif cli.result.error:
        raise cli.result.error
    return True
Now, I can make the call like this (client_id is the ServicePrincipal ID):
ob_id = az_cli(['ad', 'sp', 'show', '--id', client_id])
print(ob_id["objectId"])
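The same helper also replaces the original subprocess-based login, for example (using the service principal variables from the question):
# Log in with the service principal before running any other commands
az_cli(['login', '--service-principal',
        '-u', client_id, '-p', client_secret,
        '--tenant', tenant_id])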
I am experimenting with running Jenkins on a Kubernetes cluster. I have managed to run Jenkins on the cluster using the Helm chart. However, I'm unable to run any test cases, since my code base requires Python and MongoDB.
In my Jenkinsfile, I have tried the following:
1.
withPythonEnv('python3.9') {
    pysh 'pip3 install pytest'
}
stage('Test') {
    sh 'python --version'
}
But it says java.io.IOException: error=2, No such file or directory.
It is not feasible to always run the pip install command hardcoded in the Jenkinsfile. After some research, I found that I would have to tell Kubernetes to install Python while the pod is being provisioned, but there seems to be no PreStart hook/lifecycle for pods; there is only PostStart and PreStop.
I'm not sure how to install Python and MongoDB and use that as a template for the Kubernetes pods.
This is the default values file that I used for the Helm chart: jenkins-values.yaml
Also, I'm not sure whether I even need to use Helm.
You should create a new container image with the packages installed. In this case, the Dockerfile could look something like this:
FROM jenkins/jenkins
USER root
RUN apt-get update && apt-get install -y appname
USER jenkins
Then build the image, push it to a container registry, and replace the "Image: jenkins/jenkins" reference in your Helm chart with the registry path of the image you built. This way, your applications are installed in the container every time it runs.
The second way, which works but isn't ideal, is to override the container's command, along the lines of what is described here:
https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/
The issue with this method is that some deployments already rely on their startup command, and by redefining the entrypoint you can prevent the container's original start command from ever running, causing the container to fail.
(This should work if added to the helm chart in the deployment section, as they should share roughly the same format)
Otherwise, there's a quick-and-dirty way of installing programs in a running pod: use kubectl exec -it deployment.apps/jenkins -- bash and then run your installation commands inside the pod itself.
That being said, it's a poor idea because if the pod restarts, it will revert to the original image without the required applications installed, whereas with a custom image your apps remain installed across restarts. This approach should basically never be used, except perhaps for a temporary pod in a testing environment.
I need a bit of help with deploying Dagster projects in AWS; unfortunately, I couldn't find this in the official documentation.
A bit of context: simple solids and pipelines defined in repo.py work perfectly fine. The problem starts in AWS when I restructure the solids and pipelines into a new project directory, so the objective is to stop using repo.py to trigger the pipelines (here is the example I am referring to). Line 65 of the docker-compose.yaml file uses this command:
dagster api grpc -h 0.0.0.0 -p 4000 -f repo.py
and the same command is used in our AWS infrastructure to trigger the pipelines. What I am looking for instead is to use the workspace.yaml file (where I can add multiple Python packages).
Does anyone know whether the command can be used like this? (Presently there is no '-w' parameter on dagster api.)
dagster api grpc -h 0.0.0.0 -p 4000 -w workspace.yaml
If not, another idea is to load a module instead of 'repo.py' in the main directory. Dagit works really well with the module:
dagit -m project-01
but is this also possible with dagster, so that the command would become dagster api grpc -h 0.0.0.0 -p 4000 -m project-01? (Presently it throws an error that project-01 does not exist.)
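For reference, the kind of repository module inside the package looks roughly like this (a simplified, illustrative sketch using the solid/pipeline APIs mentioned above; note that the importable module name uses an underscore, project_01, since a hyphen is not valid in a Python module name):
# project_01/repo.py - illustrative only
from dagster import pipeline, repository, solid

@solid
def say_hello(context):
    context.log.info("hello")

@pipeline
def hello_pipeline():
    say_hello()

@repository
def project_01():
    return [hello_pipeline]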
How can I deploy a Flask app on an AWS Linux/UNIX EC2 instance?
Either approach would work for me:
1. using Gunicorn
2. using an Apache server
It's absolutely possible, but it's not the quickest process! You'll probably want to use Docker to containerize your Flask app before you deploy it, so it boils down to these steps:
Install Docker (if you don't have it), build an image for your application, and make sure you can start the container locally and the app works as intended. You'll need to write a Dockerfile that sets your runtime, copies all your directories, and exposes port 80 (this will come in handy for AWS later).
The command to build an image is docker build -t your-app-name .
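For reference, the app being containerized can be as small as this (a hypothetical app.py; inside the container you would typically serve it with gunicorn, e.g. gunicorn --bind 0.0.0.0:80 app:app, rather than the built-in development server):
# app.py - minimal Flask app used for this walkthrough (illustrative)
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "Hello from EC2!"

if __name__ == "__main__":
    # Development server only; port 80 matches the port exposed in the Dockerfile
    app.run(host="0.0.0.0", port=80)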
Once you're ready to deploy the container, head over to AWS and launch an EC2 instance with the Amazon Linux 2 image. You'll be required to create a security key (.pem file) and move it somewhere on your computer; this acts as your credential for logging in to your instance. This is where things differ depending on your OS. On Mac, you need to cd into the directory where the key is and modify its permissions by running chmod 400 key-file-name.pem. On Windows, you have to go into the file's security settings and make sure only your account (ideally the owner of the computer) can use it, basically setting it to private. At this point, you can connect to your instance from your command prompt with the command AWS gives you when you click "Connect to instance" on the EC2 dashboard.
Once you're logged in, you can configure your instance to install Docker and allow you to use it by running the following:
sudo amazon-linux-extras install docker
sudo yum install docker
sudo service docker start
sudo usermod -a -G docker ec2-user
Great, now you need to copy all your files from your local directory to your instance using SCP (secure copy protocol). The long way is to use this command for each file: scp -i /path/my-key-pair.pem file-to-copy ec2-user@public-dns-name:/home/ec2-user. Another route is to install FileZilla or WinSCP to speed up this process.
Now that all your files are on the instance, build the Docker image using the same command from the first step and run the container. If you go to the URL that AWS gives you for the instance, your app should be running on AWS!
Here's a reference I used when I did this for the first time; it might be helpful for you to look at too.
I am currently running a Django app under Python 3 on Kubernetes via skaffold dev. I have hot reload working for the Python source code. Is it currently possible to do interactive debugging with Python on Kubernetes?
For example,
def index(request):
import pdb; pdb.set_trace()
return render(request, 'index.html', {})
Usually, outside a container, hitting the endpoint will drop me in the (pdb) shell.
In the current setup, I have set stdin and tty to true in the Deployment file. The code does stop at the breakpoint but it doesn't give me access to the (pdb) shell.
There is a kubectl command that allows you to attach to a running container in a pod:
kubectl attach <pod-name> -c <container-name> [-n namespace] -i -t
-i (default:false) Pass stdin to the container
-t (default:false) Stdin is a TTY
It should allow you to interact with the debugger in the container.
You may need to adjust your pod to use a debugger, so the following article might be helpful:
How to use PDB inside a docker container.
There is also the telepresence tool, which enables a different approach to application debugging:
Using telepresence allows you to use custom tools, such as a debugger and IDE, for a local service and provides the service full access to ConfigMap, secrets, and the services running on the remote cluster.
Use the --swap-deployment option to swap an existing deployment with the Telepresence proxy. Swapping allows you to run a service locally and connect to the remote Kubernetes cluster. The services in the remote cluster can now access the locally running instance.
It might be worth looking into Rookout, which allows in-production live debugging of Python on Kubernetes pods without restarts or redeploys. You lose path-forcing etc., but you gain a lot of flexibility for effectively simulating breakpoint-style stack traces on the fly.
This doesn't use Skaffold, but you can attach the VSCode debugger to any running Python pod with an open source project I wrote.
There is some setup involved to install it on your cluster, but after installation you can debug any pod with one command:
robusta playbooks trigger python_debugger name=myapp namespace=default
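For context, attach-style debugging like this generally relies on a debug server running inside the pod. Below is a minimal hand-rolled sketch using the debugpy package; it only illustrates the general mechanism and is not necessarily what the project above does internally:
# Illustrative only: start a debug server inside the container so an IDE can attach.
# Requires the debugpy package in the image; port 5678 is an arbitrary choice.
import debugpy

debugpy.listen(("0.0.0.0", 5678))  # start the debug adapter
debugpy.wait_for_client()          # optionally block until an IDE attaches
debugpy.breakpoint()               # acts like a breakpoint once a client is attached
You would then expose port 5678 (for example with kubectl port-forward) and attach to it from your IDE.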
You can take a look at okteto/okteto. There's a good tutorial which explains how you can develop and debug directly on Kubernetes.
I'm building a Docker container to submit ML training jobs using gcloud. The entry point is actually a Python program, and gcloud is executed via subprocess.check_output. Running the program outside a Docker container works just fine, which makes me wonder whether some dependency is missing inside the container, but gcloud simply outputs no useful logs at all.
When running gcloud ml-engine jobs submit training, the executable returns exit status 1 and simply outputs Internal Error. The logs available in the Google Cloud Console are always five entries of "Validating job requirements..." with no further information.
The Docker container has the following installed dependencies (some are not relevant to Google Cloud ML but are used by other features in the program):
Via apt-get: python, python-pip, python-dev, libmysqlclient-dev, curl
Via pip install: flask, MySQL-python, configparser, pandas, tensorflow
The gcloud tool itself is installed by downloading the SDK and installing it through command line:
RUN curl https://dl.google.com/dl/cloudsdk/release/google-cloud-sdk.tar.gz > /tmp/google-cloud-sdk.tar.gz
RUN mkdir -p /usr/local/gcloud
RUN tar -C /usr/local/gcloud -xvf /tmp/google-cloud-sdk.tar.gz
RUN /usr/local/gcloud/google-cloud-sdk/install.sh
ENV PATH $PATH:/usr/local/gcloud/google-cloud-sdk/bin
Account credentials are set up via:
RUN gcloud auth activate-service-account --key-file path-to-keyfile-in-docker-container
RUN gsutil version -l
The last gsutil version command is pretty much just there to make sure the SDK installation is working.
Does anyone have any clue what might be happening, or how to further debug what might be causing an Internal Error from gcloud?
Thanks in advance! :)
Please make sure all the parameters are set properly and that all your dependencies are packaged and uploaded correctly.
If everything is done and you still can't run the job, you will need more than just "Internal Error" to pinpoint the issue. Please either contact Google Cloud Platform support or file a bug on the Public Issue Tracker to get further assistance.
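One way to get more than just "Internal Error" out of the Python wrapper is to capture gcloud's stderr as well, since check_output only captures stdout by default. A minimal sketch (the job name and flags are illustrative, not taken from the question):
import subprocess

# Illustrative submission command; replace the job name and flags with your own.
cmd = ["gcloud", "ml-engine", "jobs", "submit", "training", "my_job_001",
       "--module-name", "trainer.task",
       "--package-path", "trainer/",
       "--region", "us-central1"]

try:
    # Merge stderr into the captured output so gcloud's real error text is visible
    output = subprocess.check_output(cmd, stderr=subprocess.STDOUT)
    print(output)
except subprocess.CalledProcessError as exc:
    print("gcloud exited with status %d" % exc.returncode)
    print(exc.output)  # usually contains the detailed error message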