I am trying to pass JSON as an input parameter through an Azure pipeline so it can be passed as an argument to a Python script, and I am having issues with escaping the double quotes.
e.g.:
{
"name": JohnDoe,
"age": 50
}
code snippet:
import sys
import json

userInput = sys.argv[2]
data = json.loads(userInput)
Azure Pipeline YAML input parameter
- name: userInput
  displayName: Enter the json
  type: object
  default: ' '
Command from the Azure Python task when executed:
/usr/bin/python /home/vsts/work/1/s/python/scripts/create.py https://localhost/api { name: JohnDoe, age: 50 }
Error:
JSONDecodeError at line 22 of /home/vsts/work/1/s/python/scripts/create.py: Expecting property name enclosed in double quotes: line 1 column 2 (char 1)
An alternate solution I found is to use a JSON-to-one-line converter to put the JSON on a single line and escape the double quotes with a backslash:
'{\"name\":JohnDoe,\"age\":50}'
However, I would like to achieve this within the Python script.
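One way to sidestep the shell stripping the quotes is to hand the JSON to the script through an environment variable rather than a positional argument, and parse it inside the script. A minimal sketch, assuming the pipeline maps the parameter to a hypothetical USER_INPUT_JSON environment variable (the variable name and the fallback are illustrative, not part of the original pipeline):

# create.py -- minimal sketch: read the JSON payload from an environment
# variable instead of a positional argument, so the shell never re-tokenizes it
import json
import os
import sys

raw = os.environ.get("USER_INPUT_JSON")  # assumed variable name, set by the pipeline task
if not raw or not raw.strip():
    raw = sys.argv[2]  # fall back to the original positional argument

data = json.loads(raw)
print(data["name"], data["age"])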
I have created a job in Databricks. Now I'm trying to run it using the Databricks CLI. See part of the job JSON:
"tasks": [
{
"task_key": "task_key",
"python_wheel_task": {
"package_name": "package_name",
"entry_point": "entry_point",
"parameters": [
"command",
"parameter1",
]
},
"libraries": [
{
"whl": "dbfs:/wheels/anme.whl"
}
]
Command I'm using to run the job (this command works):
databricks jobs run-now --job-id 1111
Then, I try to change parameter1 by running
databricks jobs run-now --job-id 1111 --python-params '"["value1", "value2"]"'
But I get an error:
Error: JSONDecodeError: Expecting value: line 1 column 2 (char 1)
How can I provide new parameters to python_wheel_task?
On top of the answer from Alex, I found that we have to use escape symbols for the command to work.
The solution is:
databricks jobs run-now --job-id 1111 --python-params '[\"value1\", \"value2\"]'
The value for --python-params should be a JSON array of strings. In your case you don't need the extra double quotes around the array - just use '["value1", "value2"]'.
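If the run is being driven from Python anyway, one way to avoid hand-escaping entirely is to let json.dumps build the --python-params value and pass the argument list to subprocess without a shell. A minimal sketch with placeholder job ID and parameter values:

# minimal sketch: let json.dumps build --python-params and call the
# Databricks CLI without a shell, so no manual escaping is needed
import json
import subprocess

params = ["value1", "value2"]  # placeholder parameter values

subprocess.run(
    [
        "databricks", "jobs", "run-now",
        "--job-id", "1111",                     # placeholder job ID
        "--python-params", json.dumps(params),  # produces ["value1", "value2"]
    ],
    check=True,
)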
I am trying to run a YAML SSM document from a Python AWS Lambda, using boto3 ssm.send_command with parameters, but even if I'm just trying to run the sample "Hello World", I get:
"errorMessage": "An error occurred (InvalidParameters) when calling the SendCommand operation: document TestMessage does not support parameters.
JSON Run Documents work without an issue, so it seems like the parameters are being passed in JSON format. However, the document I intend this for contains a relatively long PowerShell script, and JSON would require running it all on a single line, which would be awkward; I am also hoping to avoid running it from an S3 bucket. Can anyone suggest a way to run a YAML Run Document with parameters from the Lambda?
As far as I know, AWS Lambda always gets its events as JSON. My suggestion would be to declare a new variable in the lambda_handler.py file like this:
import yaml

def handler_name(event, context):
    # event is already a Python dict (Lambda parses the JSON payload for you)
    yaml_event = yaml.dump(event)
    # rest of the code...
This way the event will be in YAML format and you can use that variable instead of the event, which is in JSON format.
Here is an example of running a YAML Run Command document using boto3 ssm.send_command in a Lambda running Python 3.8. Variables are passed to the Lambda using either environment variables or SSM Parameter Store. The script is retrieved from S3 and accepts a single parameter formatted as a JSON string which is passed to the bash script running on Linux (sorry I don't have one for PowerShell).
The SSM Document is deployed using CloudFormation but you could also create it through the console or CLI. Based on the error message you cited, perhaps verify the Document Type is set as "Command".
SSM Document (wrapped in CloudFormation template, refer to the Content property)
Neo4jLoadQueryDocument:
  Type: AWS::SSM::Document
  Properties:
    DocumentType: "Command"
    DocumentFormat: "YAML"
    TargetType: "/AWS::EC2::Instance"
    Content:
      schemaVersion: "2.2"
      description: !Sub "Load Neo4j for ${AppName}"
      parameters:
        sourceType:
          type: "String"
          description: "S3"
          default: "S3"
        sourceInfo:
          type: "StringMap"
          description: !Sub "Downloads all files under the ${AppName} scripts prefix"
          default:
            path: !Sub 'https://{{resolve:ssm:/${AppName}/${Stage}/${AWS::Region}/DataBucketName}}.s3.amazonaws.com/config/scripts/'
        commandLine:
          type: "String"
          description: "These commands are invoked by a Lambda script which sets the correct parameters (Refer to documentation)."
          default: 'bash start_task.sh'
        workingDirectory:
          type: "String"
          description: "Working directory"
          default: "/home/ubuntu"
        executionTimeout:
          type: "String"
          description: "(Optional) The time in seconds for a command to complete before it is considered to have failed. Default is 3600 (1 hour). Maximum is 28800 (8 hours)."
          default: "86400"
      mainSteps:
        - action: "aws:downloadContent"
          name: "downloadContent"
          inputs:
            sourceType: "{{ sourceType }}"
            sourceInfo: "{{ sourceInfo }}"
            destinationPath: "{{ workingDirectory }}"
        - action: "aws:runShellScript"
          name: "runShellScript"
          inputs:
            runCommand:
              - ""
              - "directory=$(pwd)"
              - "export PATH=$PATH:$directory"
              - " {{ commandLine }} "
              - ""
            workingDirectory: "{{ workingDirectory }}"
            timeoutSeconds: "{{ executionTimeout }}"
Lambda function
import json
import logging
import os

import boto3

logger = logging.getLogger()

neo4j_load_query_document_name = os.environ["NEO4J_LOAD_QUERY_DOCUMENT_NAME"]
# neo4j_database_instance_id = os.environ["NEO4J_DATABASE_INSTANCE_ID"]
neo4j_database_instance_id_param = os.environ["NEO4J_DATABASE_INSTANCE_ID_SSM_PARAM"]
load_neo4j_activity = os.environ["LOAD_NEO4J_ACTIVITY"]
app_name = os.environ["APP_NAME"]

# Get SSM Document Neo4jLoadQuery
ssm = boto3.client('ssm')
response = ssm.get_document(Name=neo4j_load_query_document_name)
neo4j_load_query_document_content = json.loads(response["Content"])

# Get Instance ID
neo4j_database_instance_id = ssm.get_parameter(Name=neo4j_database_instance_id_param)["Parameter"]["Value"]

# Extract document parameters
neo4j_load_query_document_parameters = neo4j_load_query_document_content["parameters"]
command_line_default = neo4j_load_query_document_parameters["commandLine"]["default"]
source_info_default = neo4j_load_query_document_parameters["sourceInfo"]["default"]


def lambda_handler(event, context):
    params = {
        "params": {
            "app_name": app_name,
            "activity_arn": load_neo4j_activity,
        }
    }

    # Include params JSON as a single-quoted command-line argument
    cmd = f"{command_line_default} '{json.dumps(params)}'"
    try:
        response = ssm.send_command(
            InstanceIds=[
                neo4j_database_instance_id,
            ],
            DocumentName=neo4j_load_query_document_name,
            Parameters={
                "commandLine": [cmd],
                "sourceInfo": [json.dumps(source_info_default)]
            },
            MaxConcurrency='1')
        if response['ResponseMetadata']['HTTPStatusCode'] != 200:
            # DatetimeEncoder is a custom json.JSONEncoder defined elsewhere (not shown)
            logger.error(json.dumps(response, cls=DatetimeEncoder))
            raise Exception("Failed to send command")
        else:
            logger.info(f"Command `{cmd}` invoked on instance {neo4j_database_instance_id}")
    except Exception as err:
        logger.error(err)
        raise err
    return
Parameters in a JSON document are not necessarily JSON themselves; they can just as easily be string or numeric values (more likely, in my opinion). If you want to pass a parameter in JSON format (not the same thing as a JSON document), pay attention to quotes and escaping.
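To illustrate the quoting point, here is a minimal sketch (instance ID, document name, and payload are placeholders) contrasting a plain string parameter with a JSON-formatted value built with json.dumps, along the lines of the Lambda above:

# minimal sketch: plain string parameter vs. a JSON-formatted parameter value
import json
import boto3

ssm = boto3.client("ssm")

payload = {"app_name": "demo"}  # hypothetical payload to hand to the script

ssm.send_command(
    InstanceIds=["i-0123456789abcdef0"],    # placeholder instance ID
    DocumentName="Neo4jLoadQueryDocument",  # placeholder document name
    Parameters={
        # plain string parameter: no special quoting needed
        "workingDirectory": ["/home/ubuntu"],
        # JSON-formatted value: serialize once with json.dumps and wrap it in
        # single quotes so the shell on the instance receives it as one argument
        "commandLine": [f"bash start_task.sh '{json.dumps(payload)}'"],
    },
)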
Okay, this is a bit convoluted, but I've got a Python script that digests a JSON file and prints a string representation of that file like so:
secret = "\'{"  # start the single-quoted, brace-wrapped string
for id in pwds.keys():
    secret += f"\'{id}\' : \'{pwds[id]['username']},{pwds[id]['pswd']}\',"
secret = secret[:-1] + "}\'"  # drop the trailing comma and close the string
print(secret)
This is taken in by a Jenkins pipeline so it can be passed to a bash script:
def secret_string = sh (script: "python3 syncToSecrets.py", returnStdout: true)
sh label: 'SYNC', script: "bash sync.sh ${ENVIRONMENT} ${secret_string}"
I can see that when Python prints the output it looks like:
'{"key" : "value", "key" : "value"...}'
But by the time it gets to secret_string, and then to the bash script, it looks like:
{key : value, key : value}
This is how the bash script is calling it
ENV=$1; SECRET_STRING=$2;
aws secretsmanager create-secret --name NAME --secret-string "${SECRET_STRING}"
Which technically works, it just uploads the whole thing as a string instead of discrete KV-pairs.
I'm trying to run some stuff with the AWS CLI, and it requires that the data be wrapped in quotes, but so far, I've been totally unable to keep the quotes in between processes. Any advice?
Sample pwds dict data:
import json

pwds = {
    'id001': {
        'username': 'user001',
        'pswd': 'pwd123'
    },
    'id002': {
        'username': 'user002',
        'pswd': 'pwd123'
    }
}
As suggested by SuperStormer, it's better to use Python types (dict, list, etc.) instead of building your own JSON.
secrets = [{id: f"{val['username']}, {val['pswd']}"} for id, val in pwds.items()]
json.dumps(secrets)
'[{"id001": "user001, pwd123"}, {"id002": "user002, pwd123"}]'
The JSON string should be usable within Jenkins script blocks.
Try experimenting with single quotes or --secret-string file://secrets.json as alternatives.
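One way to keep the quotes out of the shell's hands altogether is to write the JSON to a file from Python and point the CLI at it with file://, or to invoke the CLI from Python with an argument list so nothing is re-parsed. A minimal sketch combining both ideas (the secret name and file path are placeholders):

# minimal sketch: write the secret payload to a file and pass it to the
# AWS CLI via file:// so no shell quoting is involved
import json
import subprocess

pwds = {
    "id001": {"username": "user001", "pswd": "pwd123"},
    "id002": {"username": "user002", "pswd": "pwd123"},
}

secret = {id_: f"{val['username']},{val['pswd']}" for id_, val in pwds.items()}

with open("secrets.json", "w") as f:
    json.dump(secret, f)

subprocess.run(
    [
        "aws", "secretsmanager", "create-secret",
        "--name", "NAME",                        # placeholder secret name
        "--secret-string", "file://secrets.json",
    ],
    check=True,
)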
I am transferring variables from one function to the other, where the final step is to use them as part of a boto3 call.
I am getting the needed variable and converting it to a str type.
I have checked the type of the output I'm getting, and also tried changing it to Unicode (I'm familiar with the difference between str and Unicode),
checked with hexdump to see if there are any hidden characters,
tried saving it to a file and reading it back,
compared the string I am generating with the one from the AWS console - they are the same!
tried changing the formatting method of the variables ({}.format vs %s),
and tried updating the AWS CLI / Python versions.
The annoying part is that when using the string from the AWS console, the boto3 call works perfectly (so it is not a permissions issue).
import boto3

client = boto3.client('dynamodb')

def describe_table():
    response = client.list_tables()
    tablename = response['TableNames']
    for n in tablename:
        if 'tname' in n:
            print n
            return n

def tag_dynamo(AID):
    AID = '111222333'
    response = client.tag_resource(
        ResourceArn="arn:aws:dynamodb:region:%s:table/%s" % (AID, n),
        Tags=[
            {
                'Key': 'Key_name',
                'Value': 'value_state'
            }
        ]
    )
This is the error I'm getting:
botocore.exceptions.ClientError: An error occurred (ValidationException) when calling the
TagResource operation: Invalid TableArn: Invalid ResourceArn provided as input
arn:aws:dynamodb:region:AID:table/TABLE_NAME
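Since the string looks identical to the one from the console, one quick check is to print the repr() of each piece before building the ARN; a stray trailing newline in a value returned by another function is a common culprit and is invisible in a normal print. A minimal sketch with placeholder values:

# minimal sketch: expose hidden whitespace/newlines before building the ARN
aid = "111222333"            # placeholder account ID
region = "us-east-1"         # placeholder region
table_name = "tname-dev\n"   # example of a value carrying a stray trailing newline

# repr() makes characters like '\n' visible, unlike a plain print
print(repr(aid), repr(region), repr(table_name))

arn = "arn:aws:dynamodb:%s:%s:table/%s" % (region, aid, table_name.strip())
print(arn)  # arn:aws:dynamodb:us-east-1:111222333:table/tname-dev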
Change code to inject values through console arguments (python script)
At the moment I have adapted the code to inject values that are defined in a JSON file which the script loads and reads, but I need to change this approach to read the values directly from console arguments. How can I do this?
Ex: "py login.py username: ' ' password: ' ' "
Python script code (login.py):
import json
import sys

with open(sys.argv[1]) as json_data:
    data = json.load(json_data)
    susername = data["log_username"]
    spassword = data["log_password"]

log_username = susername
log_password = spassword
JSON file:
{
    "log_username": "blalala",
    "log_password": "blalala"
}