This is a clone of my post in the Robocorp forum.
I’m trying to get the trigger process GitHub Action to work, but I’m getting:
Error: Failed to start process - {"error":{"code":"NOT_AUTHORIZED","subCode":"","message":"Not authorized to read process !"}}
The above link says the API key should have the trigger_processes permission and the action’s repo says it should have the read_runs and trigger_processes permissions. In any case, the API key has those two permissions, as well as read_processes.
I can see that the key is being used, and there is only one .github/workflow file. The workflow is basically the same as the one in the tutorial but here it is for completeness:
name: Trigger a process in Control Room
on:
  pull_request:
    branches:
      - dev
jobs:
  run-process:
    runs-on: ubuntu-latest
    name: Trigger process
    steps:
      - name: Trigger Control Room process run
        uses: robocorp/action-trigger-process@v1
        with:
          api-key: ${{ secrets.ROBOCORP_WORKSPACE_KEY_TRIGGER }}
          workspace-id: ${{ secrets.ROBOCORP_WORKSPACE_ID }}
          process-id: ${{ secrets.ROBOCORP_PROCESS_ID }}
          payload: '{"foo":"bar"}'
          await-complete: true
I’m also getting a warning (Please update the following actions to use Node.js 16: robocorp/action-trigger-process@v1) but I don’t imagine that’s the issue.
What am I missing?
Thanks
I followed the tutorial, even added all of the permissions to the API key. I removed the other workflow .yml to help isolate any issues.
So I am not a good Python coder or a Kubernetes expert, but I have a project that needs to do this:
In Python, I want to connect to the BMC (the iLO interface of a bare-metal node) to get some hardware info.
My goal is to create a DaemonSet so the code can run on every node of the k8s cluster and retrieve some hardware info. I need the code to detect which node the daemon is currently running on, so I can use that to connect to the node's BMC interface with some API calls (for example, if the detected node is node1.domain.com, I can then query node1.bmc.domain.com).
If my question is not clear enough, please let me know. If you can give me some code sample that could achieve this, it would be very appreciated :)
Thanks!
Right now, I only have Python code that connects to the K8s API and gets the list of nodes in a cluster, but I have not found a way to detect, while running as a pod, which node the pod is currently running on. I found some info here https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/CoreV1Api.md#read_namespaced_pod but I'm not sure how to combine running the code in a pod with getting the pod's own info.
I also saw this: how to get the host name of the node where a POD is running from within POD, but I'm not sure whether I have to add something to the pod or whether the info already comes as an environment variable in the pod.
You can use the downward API to expose pod details to a specific container, as below:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: my-daemon-set
  namespace: my-namespace
spec:
  selector:
    matchLabels:
      name: app-name
  template:
    metadata:
      labels:
        name: app-name
    spec:
      containers:
      - name: my-image-name
        image: my-image:v1
        env:
        - name: MY_NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
More info in Expose Pod Information to Containers Through Environment Variables
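Since the question asks for a Python sample, here is a minimal sketch of how the container could use that variable; MY_NODE_NAME comes from the DaemonSet above, and the node1.domain.com -> node1.bmc.domain.com naming rule is just the convention assumed in the question, not something Kubernetes provides.

import os

def bmc_host_for_this_node():
    # MY_NODE_NAME is injected by the downward API env entry above (spec.nodeName).
    node_name = os.environ["MY_NODE_NAME"]  # e.g. "node1.domain.com"

    # Assumed naming convention from the question:
    # node1.domain.com -> node1.bmc.domain.com
    short_name, _, domain = node_name.partition(".")
    return f"{short_name}.bmc.{domain}" if domain else f"{short_name}.bmc"

if __name__ == "__main__":
    print(bmc_host_for_this_node())
    # ...then call the BMC/iLO API (e.g. Redfish) against this host.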
I'm trying to port to Python an ECS services deployment that at the moment is done with a bunch of bash scripts containing commands like the following:
ecs-cli compose -f foo.yml -p foo --cluster bar --ecs-params "dir/ecs-params.yml" service up
I thought that the easiest/fastest way could be using boto3 (which I already use extensively elsewhere, so it's a safe spot), but I couldn't work out from the documentation what the equivalent of the command above would be.
Thanks in advance.
UPDATE: this is the content of foo.yml:
version: '3'
services:
  my-service:
    image: ecr-image:version
    env_file:
      - ./some_envs.env
      - ./more_envs.env
    command: python3 src/main.py param1 param2
    logging:
      driver: awslogs
      options:
        awslogs-group: /my-service-log-group
        awslogs-region: my-region
        awslogs-stream-prefix: my-prefix
UPDATE2: this is the content of dir/ecs-params.yml:
version: 1
task_definition:
  task_role_arn: my-role
  services:
    my-service:
      cpu_shares: my-cpu-shares
      mem_reservation: my-mem-reservation
The ecs-cli is a high-level construct that wraps many lower-level API calls into one workflow. It is NOT the same thing, but you can think of the ecs-cli compose ... service up command as the trigger to deploy what's described in your foo.yml file. Based on what's in your foo.yml file, you can walk backwards and try to map it to single atomic ECS API calls.
None of this answers your question but, for background, the ecs-cli is no longer what we suggest using for deploying on ECS. Its evolution is Copilot (if you are not starting from a Docker Compose story) OR the new Docker Compose integration with ECS (if Docker Compose is your jam).
If you want / can post the content of your foo.yml file I can take a stab at how many lower level API calls you'd need to make to do the same (or suggest some other alternatives).
[UPDATE]
Based on the content of your two files, you could try this single Docker Compose file:
services:
  my-service:
    image: ecr-image:version
    env_file:
      - ./some_envs.env
      - ./more_envs.env
    x-aws-policies:
      - <my-role>
    deploy:
      resources:
        limits:
          cpus: '0.5'
          memory: 2048M
Some of the ECS parameters are interpreted off the Compose spec (e.g. resource limits). Others do not have a specific Compose-to-ECS mapping, so they are managed through x-aws extensions (e.g. the IAM role). Please note that the Compose integration only deploys to Fargate, so CPU shares do not make much sense and you'd need to use limits (to pick the right Fargate task size). As a reminder, this is an alternative CLI way to deploy the service to ECS, but it does not solve how you translate ALL the API calls to boto3.
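If you still want to go the boto3 route, here is a rough sketch of the two main calls hiding behind ecs-cli compose ... service up; the family name, CPU/memory numbers, environment entries and desired count are placeholders (the question's ecs-params values are themselves placeholders), while the image, command and log options are lifted from foo.yml:

import boto3

ecs = boto3.client("ecs", region_name="my-region")

# 1. Register a task definition (roughly what ecs-cli builds from foo.yml + ecs-params.yml).
task_def = ecs.register_task_definition(
    family="foo",                                   # ecs-cli derives this from the project name (-p foo)
    taskRoleArn="my-role",                          # task_role_arn from ecs-params.yml
    containerDefinitions=[{
        "name": "my-service",
        "image": "ecr-image:version",
        "command": ["python3", "src/main.py", "param1", "param2"],
        "cpu": 128,                                 # cpu_shares placeholder
        "memoryReservation": 512,                   # mem_reservation placeholder (MiB)
        "environment": [                            # env_file contents must be expanded by hand
            {"name": "SOME_VAR", "value": "some-value"},
        ],
        "logConfiguration": {
            "logDriver": "awslogs",
            "options": {
                "awslogs-group": "/my-service-log-group",
                "awslogs-region": "my-region",
                "awslogs-stream-prefix": "my-prefix",
            },
        },
    }],
)

# 2. Create the service pointing at that task definition.
ecs.create_service(
    cluster="bar",                                  # --cluster bar
    serviceName="my-service",
    taskDefinition=task_def["taskDefinition"]["taskDefinitionArn"],
    desiredCount=1,
)
# On subsequent deploys you'd call ecs.update_service(cluster="bar", service="my-service",
# taskDefinition=<new arn>) instead of create_service.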
I am using a self-hosted agent for the pipeline I am trying to execute. Assume the name of the self-hosted agent is "private-hosted-linux-nonproduction". The YAML file has the following lines of code:
"
steps:
- checkout: self
- task: UsePythonVersion#0
displayName: 'Use Python 3.x'
- task: riserrad.azdo-databricks.azdo-databricks-
configuredatabricks.configuredatabricks#0
displayName: 'Configure Databricks CLI'
inputs:
url: $(databricks_host)
token: $(databricks_token)
"
When the pipeline starts executing the 'Configure Databricks CLI' task, even though I have included the Python installation as the first task, I get an error with the message "You must add "Use python version 3.x" as the very first task for this pipeline." I have attached a screenshot of the error message.
However, pipeline execution is successful if I use the Microsoft-hosted agent pool; it works totally fine there. Could anyone suggest what exactly I am missing here?
Thanks Daniel,
I was trying to set the Python path, but the agent was still picking up a different path, so I had to create a symbolic link for it instead, and that resolved the issue.
I have a Deployment Manager script as follows:
cluster.py creates a Kubernetes cluster, and when the script was run for the k8s cluster creation alone, it was successful -- so cluster.py has no issues creating a k8s cluster.
cluster.py also exposes outputs:
A small snippet of cluster.py is as follows:
outputs.append({
    'name': 'v1endpoint',
    'value': type_name + type_suffix})
return {'resources': resources, 'outputs': outputs}
If I try to access the exposed output inside the dmnginxservice resource below as $(ref.dmcluster.v1endpoint), I get a "resource not found" error:
imports:
- path: cluster.py
- path: nodeport.py

resources:
- name: dmcluster
  type: cluster.py
  properties:
    zone: us-central1-a
- name: dmnginxservice
  type: nodeport.py
  properties:
    cluster: $(ref.dmcluster.v1endpoint)
    image: gcr.io/pr1/nginx:latest
    port: 342
    nodeport: 32123
ERROR: (gcloud.deployment-manager.deployments.create) Error in Operation [operation-1519960432614-566655da89a70-a2f917ad-69eab05a]: errors:
- code: CONDITION_NOT_MET
message: Referenced resource yaml%dmcluster could not be found. At resource
gke-cluster-dmnginxservice.
I tried to reproduce a similar implementation and I was able to deploy it with no issues, making use of your very same syntax for the output.
I deployed 2 VMs and a new network. I will post my code; maybe you will find some interesting hints concerning the outputs.
The first VM passes the name for the second VM as an output and uses a reference to the network.
The second VM takes its name from the properties that have been populated from the output of the first VM.
The network, thanks to the references, is the first resource to be created.
Keep in mind that:
This can get tricky because the order of creation for resources is important; you cannot add virtual machine instances to a network that does not exist, or attach non-existent persistent disks. Furthermore, by default, Deployment Manager creates all resources in parallel, so there is no guarantee that dependent resources are created in the correct order.
I will skip the parts that are the same. If you provide your code I could try to help you debug it, but from the error code it seems that DM is not aware that the first element has been created, though from the info provided it is not clear why.
Moreover, if I were you, I would give it a shot and explicitly set that dmnginxservice depends on dmcluster by making use of the metadata (sketched below). In this way you can double check whether it is actually waiting for the first resource.
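For reference, an explicit dependency in the top-level config could look roughly like this; the resource names and properties are copied from your config above, and only the metadata block is new:

- name: dmnginxservice
  type: nodeport.py
  metadata:
    dependsOn:
    - dmcluster
  properties:
    cluster: $(ref.dmcluster.v1endpoint)
    image: gcr.io/pr1/nginx:latest
    port: 342
    nodeport: 32123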
UPDATE
I have been able to reproduce the bug with a simpler configuration. Basically, depending on how I reference the variables, the behaviour is different, and for some reason the property gets expanded to $(ref.yaml%vm-1.paolo); it seems that the combination of project and cluster references causes trouble.
# 'name': context.properties["debug"],                          # WORKING
# 'name': context.env["project"],                               # WORKING
'name': context.properties["debug"] + context.env["project"],   # NOT WORKING
You can check the configuration here, if you need it.
I'm looking for the most modular way to use Ansible to provision a server that will host multiple Node.js applications. The current setup I have is not scalable.
The roles I have are common, nginx, nodejs, mongodb, apps.
The apps hash/dictionary
I maintain a dict called apps in roles/apps/defaults/main.yml and this is both the solution and problem:
- apps:
    shark:
      repo: git@example.com:shark.git
      subdomain: shark
      port: 3001
    tiger:
      repo: git@example.com:tiger.git
      subdomain: tiger
      port: 3002
Example of how the apps dict is used in roles/apps/tasks/main.yml:
- name: clone repos
  git: repo={{ item.value.repo }}
  with_dict: apps

- name: create vhost
  template: src=vhost.j2 dest=/etc/nginx/sites-available/{{ item.value.subdomain }}
  with_dict: apps
  sudo: yes
playbook.yml
Both staging.yml and production.yml playbooks are set to run all of the aforementioned roles. That means that both servers will install all of the apps configured in the apps dict.
I need to figure out a more modular pattern for roles/apps so that I can install different apps on the different environments. Ideally, I'd like to be able to specify which apps go to which machine directly in staging.yml and production.yml.
Any idea how this can be accomplished? Feel free to suggest entirely different methods of configuring multiple apps.
Another solution I figured out was to create dictionaries for the various apps in roles/apps/defaults/main.yml:
shark:
  repo: git@example.com:shark.git
  subdomain: shark
  port: 3001

tiger:
  repo: git@example.com:tiger.git
  subdomain: tiger
  port: 3002
Note the fact that they're not enclosed inside an apps dict.
Then, specify the apps dict contents in staging.yml or production.yml instead:
- hosts: example
  vars:
    apps:
      - '{{ shark }}'
      - '{{ tiger }}'
This allows you to direct which applications are included in which playbook.
If you're willing to maintain another dict, you could parameterize the with_dict to:
- name: clone repos
  git: repo={{ item.value.repo }}
  with_dict: "{{ env_app_dict }}"
and specify your env_app_dict in either the inventory file or via the command line.
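For example, a hypothetical invocation passing the dict as JSON extra vars on the command line (app values copied from the question):

ansible-playbook staging.yml -e '{"env_app_dict": {"shark": {"repo": "git@example.com:shark.git", "subdomain": "shark", "port": 3001}}}'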
edit: Alternatively, try the lookup plugin that I wrote and specify folders for apps_common, apps_production, and apps_staging.
With this plugin, you'd put a collection of common items in apps_common and:
- name: clone common repos
  git: repo="{{ item.repo }}"
  fileglob_to_dict:
    - "{{ 'apps_common/*' }}"
and then simply specify the apps_env file parameter (as in the original answer) that you want by either targeting with a host pattern (e.g. - hosts: staging in the play) or by specifying the parameter on the command line or inventory file.