Failed to parse while creating index in Elasticsearch using Python Lambda - python

Error:
Failed to parse: http://[https://search--kibana-************.eu-west-1..amazonaws.com/]:9200/example_index: InvalidURL
Details:
NOTE: The URL is passed normally, like connect_es("https://kibana-endpoint.com"). Also, my Elasticsearch connection was successful, but I got this error while creating the index.
def connect_es(endpoint):
    # Handle AWS auth for ES
    session = boto3.Session()
    credentials = session.get_credentials()
    awsauth = AWS4Auth(credentials.access_key, credentials.secret_key, session.region_name, 'es',
                       session_token=credentials.token)
    print('Connecting to the ES Endpoint: {endpoint}'.format(endpoint=endpoint))
    #try:
    es_client = Elasticsearch(host=endpoint,
                              port=9200,
                              connection_class=RequestsHttpConnection,
                              timeout=60)
    #print(es_client.hosts)
    print('Connected to elasticsearch')
    request_body = {
        "settings": {
            "number_of_shards": 5,
            "number_of_replicas": 1
        },
        "mappings": {
            "properties": {
                "address": {
                    "index": "not_analyzed",
                    "type": "string"
                },
                "date_of_birth": {
                    "index": "not_analyzed",
                    "format": "dateOptionalTime",
                    "type": "date"
                },
                "some_PK": {
                    "index": "not_analyzed",
                    "type": "string"
                },
                "fave_colour": {
                    "index": "analyzed",
                    "type": "string"
                },
                "email_domain": {
                    "index": "not_analyzed",
                    "type": "string"
                }
            }
        }
    }
    print("creating 'example_index' index...")
    es_client.indices.create(index='example_index', body=request_body)
    mapping = es_client.indices.get_mapping(index="example_index")
If there is any mistake, please let me know.
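The InvalidURL comes from how the client is constructed: a full https:// URL is passed as host while port=9200 is given separately, so the library assembles http://[https://...]:9200/example_index. The connection only appears to succeed because the Elasticsearch() constructor sends no request; the first real call is indices.create. A minimal sketch of the usual fix inside connect_es, assuming an AWS-managed domain that serves HTTPS on port 443 (it also wires in the awsauth object, which the code above builds but never uses):

es_host = endpoint.replace('https://', '')  # bare hostname, no scheme
es_client = Elasticsearch(hosts=[{'host': es_host, 'port': 443}],
                          http_auth=awsauth,
                          use_ssl=True,
                          verify_certs=True,
                          connection_class=RequestsHttpConnection,
                          timeout=60)

Note also that "string" with "analyzed"/"not_analyzed" is pre-5.x mapping syntax; on newer clusters the equivalent fields would be "text" and "keyword".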

Related

Why is boto3 Python AWS SSM not returning my document?

I'm trying to run a command on an AWS ECS docker container using the Python boto3 library, and get the output using SSM.
It looks like this:
session = boto3.Session(profile_name="default")
client = session.client("ecs", region_name=MY_REGION)
ssm_client = session.client("ssm", region_name=MY_REGION)

result = client.execute_command(
    cluster=MY_CLUSTER,
    task=MY_TASK,
    container=MY_CONTAINER,
    command='python -c "for i in range(30_000): print(i)"',
    interactive=True,
)
session_id = result["session"]["sessionId"]
session = ssm_client.describe_sessions(State="Active", Filters=[{"key": "SessionId", "value": session_id}])["Sessions"][0]
document_name = session["DocumentName"]
document_response = ssm_client.get_document(
    Name=document_name,
    DocumentFormat='JSON',
)
print(document_response["Content"])
Instead of the output of my command, I get this:
{
    "schemaVersion": "1.0",
    "description": "This document holds parameterized settings for starting a session with Session Manager.",
    "sessionType": "InteractiveCommands",
    "parameters": {
        "s3BucketName": {
            "type": "String",
            "description": "S3 bucket for logging",
            "default": ""
        },
        "s3KeyPrefix": {
            "type": "String",
            "description": "S3 prefix for logging",
            "default": ""
        },
        "s3EncryptionEnabled": {
            "type": "String",
            "description": "Enable S3 Encryption",
            "allowedValues": [
                "true",
                "false"
            ],
            "default": "false"
        },
        "cloudWatchLogGroupName": {
            "type": "String",
            "description": "Cloud watch log group name for logging",
            "default": ""
        },
        "cloudWatchEncryptionEnabled": {
            "type": "String",
            "description": "Enable Cloudwatch Encryption",
            "allowedValues": [
                "true",
                "false"
            ],
            "default": "false"
        },
        "kmsKeyId": {
            "type": "String",
            "description": "KMS key for encryption",
            "default": ""
        },
        "command": {
            "type": "String",
            "description": "The command to run on the instance"
        }
    },
    "inputs": {
        "s3BucketName": "{{s3BucketName}}",
        "s3KeyPrefix": "{{s3KeyPrefix}}",
        "s3EncryptionEnabled": "{{s3EncryptionEnabled}}",
        "cloudWatchLogGroupName": "{{cloudWatchLogGroupName}}",
        "cloudWatchEncryptionEnabled": "{{cloudWatchEncryptionEnabled}}",
        "kmsKeyId": "{{kmsKeyId}}"
    },
    "properties": {
        "linux": {
            "commands": "{{command}}",
            "runAsElevated": false
        },
        "windows": {
            "commands": "{{command}}",
            "runAsElevated": false
        }
    }
}
What am I doing wrong? Do I need to define a different session type or something?
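What you are printing is expected: ssm.get_document returns the definition of the SSM document (the parameterized template above), not the output of your session. One common way to read the command's output programmatically is to enable logging in the cluster's executeCommandConfiguration and fetch it back from CloudWatch Logs. A rough sketch under that assumption (LOG_GROUP is a hypothetical name for whatever group the cluster is configured to log to):

import boto3

logs = boto3.client("logs", region_name=MY_REGION)
LOG_GROUP = "/ecs/exec-output"  # assumed: the group set in executeCommandConfiguration

# Scan the most recent streams for the session's output.
streams = logs.describe_log_streams(
    logGroupName=LOG_GROUP,
    orderBy="LastEventTime",
    descending=True,
    limit=5,
)["logStreams"]
for stream in streams:
    events = logs.get_log_events(
        logGroupName=LOG_GROUP,
        logStreamName=stream["logStreamName"],
        startFromHead=True,
    )["events"]
    for event in events:
        print(event["message"])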

Flasgger - Add bearer authorization

I am running a Flask app and using flasgger to generate Swagger specs as well as a Swagger UI. My API requires requests to be authenticated with a bearer token. I am able to get the button on the page and set the token, but it is not sent with the requests. I am using OpenAPI 3.0.3. Below is my code:
from flasgger import Swagger

swagger_template = {
    'components': {
        'securitySchemes': {
            'bearerAuth': {
                'type': 'http',
                'scheme': 'bearer',
                'bearerFormat': 'JWT'
            }
        },
        'security': {
            'bearerAuth': []
        }
    }
}
# Register controllers
api = Api(app)
swagger = Swagger(app=app, config={
    'headers': [],
    'title': 'Model Design Application API',
    'specs': [
        {
            'endpoint': 'apispec',
            'route': '/apispec.json'
        }
    ],
    'openapi': '3.0.3'
}, template=swagger_template)
(Screenshots omitted: the token set in the Swagger UI, and the resulting Swagger UI.)
This is the apispec.json that is generated:
{
    "definitions": {
        "User": {
            "properties": {
                "username": {
                    "default": "Steven Wilson",
                    "description": "The name of the user",
                    "type": "string"
                }
            }
        }
    },
    "info": {
        "description": "powered by Flasgger",
        "termsOfService": "/tos",
        "title": "Model Design Application API",
        "version": "0.0.1"
    },
    "openapi": "3.0.3",
    "paths": {
        "/profile": {
            "get": {
                "description": "It works also with swag_from, schemas and spec_dict<br/>",
                "responses": {
                    "200": {
                        "description": "A single user item",
                        "schema": {
                            "$ref": "#/definitions/User"
                        }
                    }
                },
                "summary": "This examples uses FlaskRESTful Resource"
            }
        }
    },
    "security": {
        "bearerAuth": []
    },
    "securitySchemes": {
        "bearerAuth": {
            "bearerFormat": "JWT",
            "scheme": "bearer",
            "type": "http"
        }
    }
}
Please advise. Any help is appreciated.
To add a header in a Flasgger API, make the following changes:
SWAGGER_TEMPLATE = {"securityDefinitions": {"APIKeyHeader": {"type": "apiKey", "name": "x-access-token", "in": "header"}}}
swagger = Swagger(app, template=SWAGGER_TEMPLATE)
Here, x-access-token is the key name in the header; you can change this name according to your requirements.
After this, we need to add the header in our .yml file, which will look like this:
summary: "Put your summery here."
description: "Put your description here."
consumes:
- "application/json"
produces:
- "application/json"
security:
- APIKeyHeader: ['x-access-token']
responses:
200:
description: "Success"
Check the working code below:
template = {
    "swagger": "2.0",
    "info": {
        "title": "XYZ API Docs",
        "description": "API Documentation for XYZ Application",
        "contact": {
            "responsibleOrganization": "",
            "responsibleDeveloper": "",
            "email": "XYZ@XYZ.com",
            "url": "XYZ.com",
        },
        "termsOfService": "XYZ.com",
        "version": "1.0"
    },
    "basePath": "/api/v1",  # base path for blueprint registration
    "schemes": [
        "http",
        "https"
    ],
    "securityDefinitions": {
        "Bearer": {
            "type": "apiKey",
            "name": "Authorization",
            "in": "header",
            "description": "JWT Authorization header using the Bearer scheme. Example: \"Authorization: Bearer {token}\""
        }
    },
    "security": [
        {
            "Bearer": []
        }
    ]
}
swagger_config = {
    "headers": [],
    "specs": [
        {
            "endpoint": 'apispec',
            "route": '/apispec.json',
            "rule_filter": lambda rule: True,   # all in
            "model_filter": lambda tag: True,   # all in
        }
    ],
    "static_url_path": "/flasgger_static",
    "swagger_ui": True,
    "specs_route": "/api/v1/apispec"
}

Load an Avro file in BigQuery - Unexpected type for default value. Expected null, but found string: "null"

I need to transfer the result of this query to BigQuery. As you can see, I decode the data I get from Cloud Storage and create an Avro file to load into a BigQuery table, but I receive this error:
BadRequest Traceback (most recent call last)
<ipython-input-8-78860f4800c4> in <module>
110 bucket_name1 = 'gs://new_bucket/insert_transfer/*.avro'
111
--> 112 insert_bigquery_avro(bucket_name1, dataset1, tabela1)
<ipython-input-8-78860f4800c4> in insert_bigquery_avro(target_uri, dataset_id, table_id)
103 )
104 print('Starting job {}'.format(load_job.job_id))
--> 105 load_job.result()
106 print('Job finished.')
107
c:\users\me\appdata\local\programs\python\python37\lib\site-packages\google\cloud\bigquery\job.py in result(self, timeout)
695 self._begin()
696 # TODO: modify PollingFuture so it can pass a retry argument to done().
--> 697 return super(_AsyncJob, self).result(timeout=timeout)
698
699 def cancelled(self):
c:\users\me\appdata\local\programs\python\python37\lib\site-packages\google\api_core\future\polling.py in result(self, timeout)
125 # pylint: disable=raising-bad-type
126 # Pylint doesn't recognize that this is valid in this case.
--> 127 raise self._exception
128
129 return self._result
BadRequest: 400 Error while reading data, error message: The Apache Avro library failed to parse the header with the following error: Unexpected type for default value. Expected null, but found string: "null"
This is the script process:
import csv
import base64
import json
import io
import avro.schema
import avro.io
from avro.datafile import DataFileReader, DataFileWriter
import math
import os
import gcloud
from gcloud import storage
from google.cloud import bigquery
from oauth2client.client import GoogleCredentials
from datetime import datetime, timedelta
import numpy as np

try:
    script_path = os.path.dirname(os.path.abspath(__file__)) + "/"
except:
    script_path = "C:\\Users\\me\\Documents\\Keys\\key.json"

# BigQuery credentials and settings
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = script_path
folder = str((datetime.now() - timedelta(days=1)).strftime('%Y-%m-%d'))
bucket_name = 'gs://new_bucket/table/*.csv'
dataset = 'dataset'
tabela = 'table'

schema = avro.schema.Parse(open("C:\\Users\\me\\schema_table.avsc", "rb").read())
writer = DataFileWriter(open("C:\\Users\\me\\table_register.avro", "wb"), avro.io.DatumWriter(), schema)

def insert_bigquery(target_uri, dataset_id, table_id):
    bigquery_client = bigquery.Client()
    dataset_ref = bigquery_client.dataset(dataset_id)
    job_config = bigquery.LoadJobConfig()
    job_config.schema = [
        bigquery.SchemaField('id', 'STRING', mode='REQUIRED')
    ]
    job_config.source_format = bigquery.SourceFormat.CSV
    job_config.field_delimiter = ";"
    uri = target_uri
    load_job = bigquery_client.load_table_from_uri(
        uri,
        dataset_ref.table(table_id),
        job_config=job_config
    )
    print('Starting job {}'.format(load_job.job_id))
    load_job.result()
    print('Job finished.')

#insert_bigquery(bucket_name, dataset, tabela)

def get_data_from_bigquery():
    """query bigquery to get data to import to PSQL"""
    bq = bigquery.Client()
    # Fetch the IDs
    query = """SELECT id FROM dataset.base64_data"""
    query_job = bq.query(query)
    data = query_job.result()
    rows = list(data)
    return rows

a = get_data_from_bigquery()
length = len(a)
line_count = 0
for row in range(length):
    bytes = base64.b64decode(str(a[row][0]))
    bytes = bytes[5:]
    buf = io.BytesIO(bytes)
    decoder = avro.io.BinaryDecoder(buf)
    rec_reader = avro.io.DatumReader(avro.schema.Parse(open("C:\\Users\\me\\schema_table.avsc").read()))
    out = rec_reader.read(decoder)
    writer.append(out)
writer.close()

def upload_blob(bucket_name, source_file_name, destination_blob_name):
    storage_client = storage.Client()
    bucket = storage_client.get_bucket(bucket_name)
    blob = bucket.blob("insert_transfer/" + destination_blob_name)
    blob.upload_from_filename(source_file_name)
    print('File {} uploaded to {}'.format(
        source_file_name,
        destination_blob_name
    ))

upload_blob('new_bucket', 'C:\\Users\\me\\table_register.avro', 'table_register.avro')

def insert_bigquery_avro(target_uri, dataset_id, table_id):
    bigquery_client = bigquery.Client()
    dataset_ref = bigquery_client.dataset(dataset_id)
    job_config = bigquery.LoadJobConfig()
    job_config.autodetect = True
    job_config.source_format = bigquery.SourceFormat.AVRO
    time_partitioning = bigquery.table.TimePartitioning(type_=bigquery.TimePartitioningType.DAY, field="date")
    job_config.time_partitioning = time_partitioning
    uri = target_uri
    load_job = bigquery_client.load_table_from_uri(
        uri,
        dataset_ref.table(table_id),
        job_config=job_config
    )
    print('Starting job {}'.format(load_job.job_id))
    load_job.result()
    print('Job finished.')

dataset1 = 'dataset'
tabela1 = 'table'
bucket_name1 = 'gs://new_bucket/insert_transfer/*.avro'
insert_bigquery_avro(bucket_name1, dataset1, tabela1)
(Screenshots omitted: the CSV file as it arrives in Cloud Storage, and the decoded records produced by the script.)
I want to create a routine to put the decoded information into BigQuery.
The schema file:
{
    "namespace": "transfers",
    "type": "record",
    "name": "Transfer",
    "doc": "Represents the transfer request",
    "fields": [
        {
            "name": "id",
            "type": "string",
            "doc": "the transfer request id"
        },
        {
            "name": "date",
            "type": {
                "type": "long",
                "logicalType": "timestamp-millis"
            },
            "doc": "the date where the transaction happened"
        },
        {
            "name": "merchant",
            "type": "string",
            "doc": "the merchant who owns the payment"
        },
        {
            "name": "amount",
            "type": ["null", {
                "type": "bytes",
                "logicalType": "decimal",
                "precision": 4,
                "scale": 2
            }],
            "default": "null",
            "doc": "the foreign amount for the payment"
        },
        {
            "name": "status",
            "type": {
                "type": "enum",
                "name": "transfer_status",
                "symbols": [
                    "RECEIVED",
                    "WAITING_TRANSFER",
                    "ON_PROCESSING",
                    "EXECUTED",
                    "DENIED"
                ]
            },
            "default": "DENIED"
        },
        {
            "name": "correlation_id",
            "type": ["null", "string"],
            "default": "null",
            "doc": "the correlation id of the request"
        },
        {
            "name": "transfer_period",
            "type": ["null", "string"],
            "default": "null",
            "doc": "The transfer period spec"
        },
        {
            "name": "payments",
            "type": {
                "type": "array",
                "items": "string"
            }
        },
        {
            "name": "metadata",
            "type": {
                "type": "map",
                "values": "string"
            }
        },
        {
            "name": "events",
            "type": {
                "type": "array",
                "items": {
                    "name": "event",
                    "type": "record",
                    "fields": [
                        {
                            "name": "id",
                            "type": "string"
                        },
                        {
                            "name": "type",
                            "type": {
                                "type": "enum",
                                "name": "event_type",
                                "symbols": [
                                    "REQUEST",
                                    "VALIDATION",
                                    "TRANSFER_SCHEDULE",
                                    "TRANSFERENCE"
                                ]
                            }
                        },
                        {
                            "name": "amount",
                            "type": ["null", {
                                "type": "bytes",
                                "logicalType": "decimal",
                                "precision": 4,
                                "scale": 2
                            }],
                            "doc": "the original currency amount",
                            "default": "null"
                        },
                        {
                            "name": "date",
                            "type": {
                                "type": "long",
                                "logicalType": "timestamp-millis"
                            },
                            "doc": "the moment where this request was received by the platform"
                        },
                        {
                            "name": "status",
                            "type": {
                                "type": "enum",
                                "name": "event_status",
                                "symbols": [
                                    "SUCCESS",
                                    "DENIED",
                                    "ERROR",
                                    "TIMEOUT",
                                    "PENDING"
                                ]
                            }
                        },
                        {
                            "name": "metadata",
                            "type": {
                                "type": "map",
                                "values": "string"
                            }
                        },
                        {
                            "name": "internal_metadata",
                            "type": {
                                "type": "map",
                                "values": "string"
                            }
                        },
                        {
                            "name": "error",
                            "type": {
                                "type": "record",
                                "name": "Error",
                                "fields": [
                                    {
                                        "name": "code",
                                        "type": ["null", "string"],
                                        "default": "null"
                                    },
                                    {
                                        "name": "message",
                                        "type": ["null", "string"],
                                        "default": "null"
                                    }
                                ]
                            }
                        },
                        {
                            "name": "message",
                            "type": ["null", "string"],
                            "default": "null"
                        }
                    ]
                }
            }
        }
    ]
}
Try changing the "default" values from "null" to null.
Reference.
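In Avro, when a field's type is a union whose first branch is "null", the default must be the JSON value null, not the string "null". For example, the amount field above would become:

{
    "name": "amount",
    "type": ["null", {
        "type": "bytes",
        "logicalType": "decimal",
        "precision": 4,
        "scale": 2
    }],
    "default": null,
    "doc": "the foreign amount for the payment"
},

The same change applies to correlation_id, transfer_period, the event-level amount and message, and the Error record's code and message fields.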

Not able to extract my friends' images from the Facebook API using Python

I have got this response from the Facebook Graph API:
{
    "taggable_friends": {
        "data": [
            {
                "name": "Friend1 Name",
                "picture": {
                    "data": {
                        "url": "https://fb-s-c-a.akamaihd.net/h-ak-fbx/v/t1.0-1/p200x200/completeUrl1"
                    }
                },
                "id": "response1d"
            },
            {
                "name": "Friend2 name",
                "picture": {
                    "data": {
                        "url": "https://fb-s-a-a.akamaihd.net/h-ak-fbx/v/t1.0-1/p200x200/completeURL2"
                    }
                },
                "id": "responseid2"
            }
        ],
        "paging": {
            "cursors": {
                "before": "xyz",
                "after": "abc"
            },
            "next": "NextpageURl"
        }
    },
    "id": "xxxxxxxxx"
}
I want to extract the URL part of the Graph API response under the taggable_friends field.
I have tried something like this:
for friends in data_json_liked_pages['taggable_friends']['data']:
    friend_url = friends['picture']['data']['url']
    print friend_url
I am getting the following error:
Exception Type: TypeError
Exception Value: list indices must be integers, not str
What can I do to improve this?
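That TypeError means data_json_liked_pages is a list at the point you index it with 'taggable_friends', not the dict shown above; typically the raw response text was never parsed into a dict, or an outer list wraps the result. A sketch of the parse-then-iterate flow (raw_body is a hypothetical variable holding the raw response text):

import json

response = json.loads(raw_body)  # raw_body: the Graph API response text (assumed name)
for friend in response['taggable_friends']['data']:
    print(friend['picture']['data']['url'])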

API.ai Actions on Google API Version 2: Failed to parse JSON response string with 'INVALID_ARGUMENT' error: ": Cannot find field."

I am using Python to create a webhook for an Assistant app. I am able to ask the user for location permission, but as soon as the user gives consent, I receive the following error:
UnparseableJsonResponse
API Version 2: Failed to parse JSON response string with 'INVALID_ARGUMENT' error: ": Cannot find field.".
I have checked my webhook server and no request reaches it. This looks like an issue on the API.ai side. Below is the debug response from the Actions console when using the Python client:
{
    "assistantToAgentDebug": {
        "curlCommand": "curl -v '<URL>'{\"user\":{\"userId\":\"<USED_ID>\",\"locale\":\"en-US\"},\"conversation\":{\"conversationId\":\"1504592665563\",\"type\":\"ACTIVE\",\"conversationToken\":\"[\\\"defaultwelcomeintent-followup\\\"]\"},\"inputs\":[{\"intent\":\"actions.intent.PERMISSION\",\"rawInputs\":[{\"inputType\":\"VOICE\",\"query\":\"yes\"}],\"arguments\":[{\"name\":\"PERMISSION\",\"textValue\":\"true\"}]}],\"surface\":{\"capabilities\":[{\"name\":\"actions.capability.AUDIO_OUTPUT\"},{\"name\":\"actions.capability.SCREEN_OUTPUT\"}]},\"device\":{\"location\":{\"coordinates\":{\"latitude\":37.4219806,\"longitude\":-122.0841979}}},\"isInSandbox\":true}'",
        "assistantToAgentJson": {
            "user": {
                "userId": "<USED_ID>",
                "locale": "en-US"
            },
            "conversation": {
                "conversationId": "1504592665563",
                "type": "ACTIVE",
                "conversationToken": "[\"defaultwelcomeintent-followup\"]"
            },
            "inputs": [
                {
                    "intent": "actions.intent.PERMISSION",
                    "rawInputs": [
                        {
                            "inputType": "VOICE",
                            "query": "yes"
                        }
                    ],
                    "arguments": [
                        {
                            "name": "PERMISSION",
                            "textValue": "true"
                        }
                    ]
                }
            ],
            "surface": {
                "capabilities": [
                    {
                        "name": "actions.capability.AUDIO_OUTPUT"
                    },
                    {
                        "name": "actions.capability.SCREEN_OUTPUT"
                    }
                ]
            },
            "device": {
                "location": {
                    "coordinates": {
                        "latitude": 37.4219806,
                        "longitude": -122.0841979
                    }
                }
            },
            "isInSandbox": true
        }
    },
    "agentToAssistantDebug": {
        "agentToAssistantJson": {
            "message": "Unexpected apiai response format: Empty speech response",
            "apiResponse": {
                "id": "<ID>",
                "timestamp": "2017-09-05T06:24:41.711Z",
                "lang": "en",
                "result": {},
                "status": {
                    "code": 200,
                    "errorType": "success"
                },
                "sessionId": "1504592665563"
            }
        }
    },
    "sharedDebugInfo": [
        {
            "name": "GOOGLE_SYSTEM_ACTION",
            "debugInfo": "Your query was handled by Actions on Google."
        },
        {
            "name": "GOOGLE_SYSTEM_ACTION",
            "debugInfo": "Your query was handled by Actions on Google."
        },
        {
            "name": "ResponseValidation",
            "subDebugEntry": [
                {
                    "name": "UnparseableJsonResponse",
                    "debugInfo": "API Version 2: Failed to parse JSON response string with 'INVALID_ARGUMENT' error: \": Cannot find field.\"."
                }
            ]
        }
    ]
}
I am using the Python library Flask-Assistant.
How can I resolve this issue?
UPDATE
The Node.js client works... what is the issue with the Python client?
Actions Console debug response:
{
    "assistantToAgentDebug": {
        "curlCommand": "curl -v '<URL>'{\"user\":{\"userId\":\"<USER_ID>\",\"locale\":\"en-US\"},\"conversation\":{\"conversationId\":\"<ID>\",\"type\":\"ACTIVE\",\"conversationToken\":\"[\\\"_actions_on_google_\\\",\\\"defaultwelcomeintent-followup\\\"]\"},\"inputs\":[{\"intent\":\"actions.intent.PERMISSION\",\"rawInputs\":[{\"inputType\":\"VOICE\",\"query\":\"yes\"}],\"arguments\":[{\"name\":\"PERMISSION\",\"textValue\":\"true\"}]}],\"surface\":{\"capabilities\":[{\"name\":\"actions.capability.AUDIO_OUTPUT\"}]},\"device\":{\"location\":{\"coordinates\":{\"latitude\":37.4219806,\"longitude\":-122.0841979},\"formattedAddress\":\"Googleplex, Mountain View, CA 94043, United States\",\"zipCode\":\"94043\",\"city\":\"Mountain View\"}},\"isInSandbox\":true}'",
        "assistantToAgentJson": {
            "user": {
                "userId": "<USER_ID>",
                "locale": "en-US"
            },
            "conversation": {
                "conversationId": "<ID>",
                "type": "ACTIVE",
                "conversationToken": "[\"_actions_on_google_\",\"defaultwelcomeintent-followup\"]"
            },
            "inputs": [
                {
                    "intent": "actions.intent.PERMISSION",
                    "rawInputs": [
                        {
                            "inputType": "VOICE",
                            "query": "yes"
                        }
                    ],
                    "arguments": [
                        {
                            "name": "PERMISSION",
                            "textValue": "true"
                        }
                    ]
                }
            ],
            "surface": {
                "capabilities": [
                    {
                        "name": "actions.capability.AUDIO_OUTPUT"
                    }
                ]
            },
            "device": {
                "location": {
                    "coordinates": {
                        "latitude": 37.4219806,
                        "longitude": -122.0841979
                    },
                    "formattedAddress": "Googleplex, Mountain View, CA 94043, United States",
                    "zipCode": "94043",
                    "city": "Mountain View"
                }
            },
            "isInSandbox": true
        }
    },
    "agentToAssistantDebug": {
        "agentToAssistantJson": {
            "conversationToken": "[\"_actions_on_google_\",\"defaultwelcomeintent-followup\"]",
            "expectUserResponse": true,
            "expectedInputs": [
                {
                    "inputPrompt": {
                        "richInitialPrompt": {
                            "items": [
                                {
                                    "simpleResponse": {
                                        "textToSpeech": "Given permission"
                                    }
                                }
                            ]
                        }
                    },
                    "possibleIntents": [
                        {
                            "intent": "assistant.intent.action.TEXT"
                        }
                    ]
                }
            ],
            "responseMetadata": {
                "status": {
                    "code": 14
                },
                "queryMatchInfo": {
                    "queryMatched": true,
                    "intent": "Default Welcome Intent - fallback"
                }
            }
        }
    }
}
Request from the Actions server to my Node.js webhook server:
{ originalRequest:
    { source: 'google',
      version: '2',
      data:
        { isInSandbox: true,
          surface: [Object],
          inputs: [Array],
          user: [Object],
          device: [Object],
          conversation: [Object] } },
  id: '<ID>',
  timestamp: '2017-09-06T05:43:21.342Z',
  lang: 'en',
  result:
    { source: 'agent',
      resolvedQuery: 'actions_intent_PERMISSION',
      speech: '',
      action: 'DefaultWelcomeIntent.DefaultWelcomeIntent-fallback',
      actionIncomplete: false,
      parameters: {},
      contexts: [ [Object], [Object], [Object], [Object], [Object] ],
      metadata:
        { intentId: '<ID>',
          webhookUsed: 'true',
          webhookForSlotFillingUsed: 'false',
          nluResponseTime: 2,
          intentName: 'Default Welcome Intent - fallback' },
      fulfillment: { speech: 'Given permission', messages: [Array] },
      score: 1 },
  status: { code: 200, errorType: 'success' },
  sessionId: '<SID>'
}
(Screenshot omitted: the API.ai intent settings.)
The most likely reason you're not getting any hits on your webhook is that you don't have an intent registered to get the reply.
You can do this by creating an Intent with the Event set to actions_intent_PERMISSION.
See also the following answers on SO:
Unable to accept the permission prompt on Actions on Google
Permission response not handled correctly
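To make that concrete, here is a minimal sketch with Flask-Assistant, assuming an API.ai intent named 'permission-granted' whose Event field is set to actions_intent_PERMISSION (the intent name is hypothetical):

from flask import Flask
from flask_assistant import Assistant, tell

app = Flask(__name__)
assist = Assistant(app, route='/')

# Fires when the Assistant sends the follow-up request after the permission prompt.
@assist.action('permission-granted')
def handle_permission():
    return tell("Thanks, I received your location.")

if __name__ == '__main__':
    app.run(debug=True)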
