Elasticsearch - Authentication error when checking if an index exists - Python

I have Elasticsearch with Cognito authentication enabled for Kibana, and that part works as expected.
In my Python script I connect to Elasticsearch by passing the username/password to http_auth when creating the connection object. But when I attempt to check whether an index exists, I get an authentication error. Can someone please help? Here's a sample piece of code you can use to reproduce the issue:
from __future__ import print_function
from elasticsearch import Elasticsearch

esEndpoint = ""   # Elasticsearch endpoint
uname = ""        # username
pwd = ""          # password
indexName = 'index_1'

mappings_rds = {
    "settings": {
        "number_of_shards": 2,
        "number_of_replicas": 1
    },
    "mappings": {
        "properties": {
            "tableName": {
                "type": "keyword"
            },
            "tableRows": {
                "type": "integer"
            },
            "updatedTime": {
                "type": "date",
                "format": "date_optional_time||yyyy-MM-dd'T'HH:mm:ss"
            },
            "created_timestamp": {
                "type": "date",
                "format": "date_optional_time||yyyy-MM-dd'T'HH:mm:ss"
            }
        }
    }
}

esClient = Elasticsearch([esEndpoint], http_auth=(uname, pwd))
try:
    # This is the call that raises the authentication error
    res = esClient.indices.exists(indexName)
    print(res)
    if res is False:
        r = esClient.indices.create(indexName, body=mappings_rds, ignore=400)
except Exception as E:
    print("Unable to Create Index {0}".format(indexName))

Thank you all for your comments. I had a call with the AWS Support team regarding this. The issue was the username and password I had used: they were user credentials created with AWS Cognito. The AWS support team confirmed that those credentials are only meant for Kibana access and cannot be used to connect to Elasticsearch from Databricks or a Python script; requests have to be signed with AWS credentials instead.
More info is available here: https://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-request-signing.html
Once I generated an access/secret key pair and used it from my Python script, I was able to connect to Elasticsearch and create the index.
Sharing sample code for future reference:
import boto3
from elasticsearch import Elasticsearch, RequestsHttpConnection
from requests_aws4auth import AWS4Auth

service = 'es'
session = boto3.session.Session(aws_access_key_id=akey, aws_secret_access_key=skey, region_name=region)
credentials = session.get_credentials()
# Sign every request with SigV4 instead of HTTP basic auth
awsauth = AWS4Auth(credentials.access_key, credentials.secret_key, region, service, session_token=credentials.token)
# print(credentials.access_key, credentials.secret_key, credentials.token)  # avoid logging secrets
es = Elasticsearch(
    hosts=[{'host': esEndpoint, 'port': 443}],
    http_auth=awsauth,
    use_ssl=True,
    verify_certs=True,
    connection_class=RequestsHttpConnection
)
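With the signed client in place, the index-existence check from the question should work unchanged; for example (a minimal sketch reusing indexName and mappings_rds from above):
if not es.indices.exists(indexName):
    es.indices.create(indexName, body=mappings_rds, ignore=400)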
Thanks

Related

InfluxDB Unauthorized 401 - with localhost access

When I try to write data into InfluxDB using the InfluxDB client, I get the error below. I was able to log in to the InfluxDB web UI at http://localhost:8086 with the same credentials provided in the code, but I get the unauthorized message when using the Python code. Any help would be appreciated.
Error:
raise InfluxDBClientError(err_msg, response.status_code)
influxdb.exceptions.InfluxDBClientError: 401: {"code":"unauthorized","message":"Unauthorized"}
Code:
from influxdb import InfluxDBClient
from datetime import datetime

# df and columns are defined elsewhere in my script (a pandas DataFrame and its column names)
client = InfluxDBClient('localhost', 8086, 'username', 'password', 'bucket_name')

def startProcess():
    for row in df.iterrows():
        influxJson = [
            {
                "measurement": "testing123",
                "time": datetime.utcnow().isoformat() + "Z",
                "tags": {
                    'ResiliencyTier': 'targetResiliencyTier',
                    'lob': 'abcdefgh'
                },
                "fields": {
                    columns[0][0]: str(row[1][0]),
                    columns[1][0]: str(row[1][1]),
                }
            }
        ]
        client.write_points(influxJson)
    print("InfluxDB injection DONE")

startProcess()
Thanks
A 401 (unauthorized) error can be avoided in a dev environment by enabling HTTP access in the InfluxDB config file:
[http]
  # Determines whether HTTP endpoint is enabled.
  enabled = true
Generally the config file can be found at:
/etc/influxdb/influxdb.conf
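After enabling the endpoint and restarting InfluxDB, it can help to verify connectivity and credentials before writing points; a minimal sketch using the same client as in the question (ping() returns the server version string, and the database listing raises a 401 error if the credentials are still rejected):
from influxdb import InfluxDBClient

client = InfluxDBClient('localhost', 8086, 'username', 'password', 'bucket_name')
print(client.ping())
print(client.get_list_database())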

Configure diagnostic setting for Azure database using Python SDK

I want to configure a diagnostic setting for an Azure database using Python. I know that I have to use the DiagnosticSettingsOperations class, the MonitorManagementClient client, and the create_or_update method to start. I am fairly new to Python development, and I am struggling to put the pieces together.
However, there are no proper examples of what parameters to pass to the DiagnosticSettingsOperations class.
Sample code:
from azure.mgmt.monitor import MonitorManagementClient
from azure.identity import ClientSecretCredential

####### FUNCTION TO CREATE AZURE AUTHENTICATION USING SERVICE PRINCIPAL #######
def authenticateToAzureUsingServicePrincipal():
    # Authenticate to Azure using Service Principal credentials
    client_id = 'client_id'
    client_secret = 'client_secret'
    client_tenant_id = 'client_tenant_id'
    # Create Azure credential object
    servicePrincipalCredentialObject = ClientSecretCredential(tenant_id=client_tenant_id, client_id=client_id, client_secret=client_secret)
    return servicePrincipalCredentialObject

azureCredential = authenticateToAzureUsingServicePrincipal()
monitorManagerClient = MonitorManagementClient(azureCredential, subscription_id)
I want to configure a diagnostic setting for an Azure SQL database that selects ALL metrics and logs by default and sends them to a Log Analytics workspace. Does anyone know how to proceed?
The code looks like this:
#other code
monitorManagerClient = MonitorManagementClient(azureCredential, subscription_id)

# Creates or updates the diagnostic setting [put]
BODY = {
    "workspace_id": "the resource id of the log analytics workspace",
    "metrics": [
        {
            "category": "Basic",
            "enabled": True,
            "retention_policy": {
                "enabled": False,
                "days": 0
            }
        }
        #other categories
    ],
    "logs": [
        {
            "category": "SQLInsights",
            "enabled": True,
            "retention_policy": {
                "enabled": False,
                "days": 0
            }
        }
        #other categories
    ],
    # "log_analytics_destination_type": "Dedicated"
}
diagnostic_settings = monitorManagerClient.diagnostic_settings.create_or_update(RESOURCE_URI, INSIGHT_NAME, BODY)
There is an example on GitHub that you can take a look at. If you want to select ALL metrics and logs, you should add them one by one to the metrics / logs lists in the BODY above; a sketch of how that can be automated follows.
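If typing out every category is tedious, the supported categories for the resource can be discovered at runtime and the BODY assembled from them. A minimal, untested sketch, assuming the diagnostic_settings_category operations group of azure-mgmt-monitor and the same RESOURCE_URI:
metrics, logs = [], []
for cat in monitorManagerClient.diagnostic_settings_category.list(RESOURCE_URI).value:
    entry = {
        "category": cat.name,
        "enabled": True,
        "retention_policy": {"enabled": False, "days": 0}
    }
    # category_type distinguishes metric categories from log categories
    if str(cat.category_type) == "Metrics":
        metrics.append(entry)
    else:
        logs.append(entry)
BODY = {"workspace_id": "the resource id of the log analytics workspace", "metrics": metrics, "logs": logs}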

How Do I Find Errors and Debug A Cloud Function?

I have created a Cloud Function that, when triggered, is supposed to create a VM instance and run a Python script.
However, the VM is not being created.
When I look at the Cloud Functions log, this is the filter applied for my deployment:
resource.type = "cloud_function"
resource.labels.region = "europe-west2"
severity>=DEFAULT
severity=DEBUG
...
However, for the life of me, I can't see where to go to actually view the error itself.
I then googled and found the following thread about an issue where Cloud Functions shows no logs at all.
Thinking it might be the same issue, I added the recommended environment variables to my deployment, but I still can't find the error anywhere in the logs.
Can anyone point me in the right direction?
Here is my cloud function code as well:
import os

from google.oauth2 import service_account
from googleapiclient import discovery

scopes = ["https://www.googleapis.com/auth/cloud-platform"]
sa_file = "key.json"
zone = "europe-west2-c"
project_id = "<<proj id>>"  # Project ID, not Project Name

credentials = service_account.Credentials.from_service_account_file(
    sa_file, scopes=scopes
)

# Create the Cloud Compute Engine service object
service = discovery.build("compute", "v1", credentials=credentials)

def create_instance(compute, project, zone, name):
    # Get the latest Debian 9 image.
    image_response = (
        compute.images()
        .getFromFamily(project="debian-cloud", family="debian-9")
        .execute()
    )
    source_disk_image = image_response["selfLink"]

    # Configure the machine
    machine_type = "zones/%s/machineTypes/n1-standard-1" % zone
    config = {
        "name": name,
        "machineType": machine_type,
        # Specify the boot disk and the image to use as a source.
        "disks": [
            {
                "kind": "compute#attachedDisk",
                "type": "PERSISTENT",
                "boot": True,
                "mode": "READ_WRITE",
                "autoDelete": True,
                "deviceName": "instance-1",
                "initializeParams": {
                    "sourceImage": "projects/my_account/global/images/instance-image3",
                    "diskType": "projects/my_account/zones/europe-west2-c/diskTypes/pd-standard",
                    "diskSizeGb": "10",
                },
                "diskEncryptionKey": {},
            }
        ],
        "metadata": {
            "kind": "compute#metadata",
            "items": [
                {
                    "key": "startup-script",
                    "value": "sudo apt-get -y install python3-pip\npip3 install -r /home/will_charles/requirements.txt\ncd /home/will_peebles/\npython3 /home/will_charles/main.py",
                }
            ],
        },
        "serviceAccounts": [
            {
                "email": "837516068454-compute@developer.gserviceaccount.com",
                "scopes": ["https://www.googleapis.com/auth/cloud-platform"],
            }
        ],
        "networkInterfaces": [
            {
                "network": "global/networks/default",
                "accessConfigs": [{"type": "ONE_TO_ONE_NAT", "name": "External NAT"}],
            }
        ],
        "tags": {"items": ["http-server", "https-server"]},
    }
    return compute.instances().insert(project=project, zone=zone, body=config).execute()

def run(data, context):
    create_instance(service, project_id, zone, "pagespeed-vm-4")
The reason you do not see anything in the Cloud Functions logs is that your code executes but does not log the results of its API calls.
Your code succeeds in calling the API to create a compute instance, but that only means the call itself succeeded, not the creation. The API returns an operation handle that you are expected to poll to check on status. You are not doing that, so your Cloud Function has no idea that the instance creation failed; a sketch of such polling follows below.
To see logs for the create-instance API, go to Operations Logging -> VM Instance and select "All instance_id". If the API call to create an instance does not succeed, there will be no instance id to select, so you have to select all instances and then find the logs related to the API call.
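A sketch of what that polling could look like inside the function, reusing the service object and create_instance from the question (zoneOperations().get is the standard way to check a zonal operation; the error field only appears on failure):
import time

def run(data, context):
    operation = create_instance(service, project_id, zone, "pagespeed-vm-4")
    # Poll the zonal operation until Compute Engine reports it finished
    while operation.get("status") != "DONE":
        time.sleep(2)
        operation = (
            service.zoneOperations()
            .get(project=project_id, zone=zone, operation=operation["name"])
            .execute()
        )
    if "error" in operation:
        print("Instance creation failed: %s" % operation["error"])  # lands in the CF logs
    else:
        print("Instance created: %s" % operation.get("targetLink"))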

Firebase Auth + Python backend

I am going to use the Firebase Auth and Database modules to create my web app. However, not everything I want the app to do can be achieved on the front end alone, so I also want a backend: Python's Bottle framework to handle requests, and Pyrebase to access the Firebase Database.
Let's say that after logging in I need to go to the main page and see personalized content, for example my notes. They are structured this way in the DB:
{
    "notes": [
        {
            "id": "1",
            "title": "X",
            "author": "user1"
        },
        {
            "id": "2",
            "title": "Y",
            "author": "user2"
        }
        ... and so on
    ]
}
So how can I show only my notes on the main page?
I understand that I need to filter the notes based on the author value, but how do I let Bottle know who is currently logged in?
I've read that I should send a unique token to the backend server to authenticate the current user, but how? Inserting the token into every link as a GET parameter seems silly, but I see no other way to implement it.
Start by organizing your database so that each note becomes a child object keyed by its id:
{
    "notes": {
        "id1": {
            "id": "id1",
            "title": "X",
            "author": "user1"
        },
        "id2": {
            ...
        }
    }
}
Then this particular interaction can be implemented entirely on the client side. Just execute a query to filter the notes you want. For example, in a JS client:
var uid = firebase.auth().currentUser.uid;
var query = ref.orderByChild('author').equalTo(uid);
// Listen for query value events, e.g. query.on('value', ...)
If you want to run this on a backend server, and you want to ensure that only logged-in users are allowed to execute it, then you must pass the ID token from the client app to the server on each request. Here's how to implement the server-side logic using the Python Admin SDK:
import firebase_admin
from firebase_admin import auth
from firebase_admin import db

# Assumes firebase_admin.initialize_app() has been called with a service
# account credential and the databaseURL of your project
token = '....'  # Extract from the client request
try:
    decoded = auth.verify_id_token(token)
    uid = decoded['uid']  # verify_id_token returns a dict of claims
    ref = db.reference('path/to/notes')
    notes = ref.order_by_child('author').equal_to(uid).get()
    # Process notes response
except ValueError as ex:
    print(ex)
    # Send error to client
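As for transporting the token without appending it to every link: send it in an Authorization header instead of a GET parameter. A minimal sketch of a Bottle route doing that (the Bearer scheme is just the common convention here, not something Bottle or Firebase mandates):
from bottle import abort, request, route

@route('/notes')
def list_notes():
    header = request.headers.get('Authorization', '')
    if not header.startswith('Bearer '):
        abort(401, 'Missing ID token')
    token = header[len('Bearer '):]
    # ...then verify with auth.verify_id_token(token) as shown above...

On the client side, firebase.auth().currentUser.getIdToken() returns a promise for the token to put in that header.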

Error while creating a runbook as draft using the Azure REST API in Python

I need to create an Azure Automation account, and I want to create a runbook under the automation account for auto-scheduling VMs.
Steps I followed for creating the Azure Automation account:
First I create a cloud service using the API:
https://management.core.windows.net/sdjgsdgj-abcd-2323-98cd-3bd6bcf93702/cloudServices/cloudsername
Next, I create the Azure Automation account under the created cloud service using the API below:
https://management.core.windows.net/sdjgsdgj-abcd-2323-98cd-3bd6bcf93702/cloudServices/cloudsername/resources/automation/AutomationAccount/testacc2?resourceType=AutomationAccount&detailLevel=Full&resourceProviderNamespace=automation
After that, I want to create a runbook under the created automation account. For this I am using the API below, from Python:
import adal
import requests
import json

token_response = adal.acquire_token_with_username_password(
    'https://login.windows.net/rapiddirectory.onmicrosoft.com',
    'test@xyz.onmicrosoft.com',
    'abcd'
)
access_token = token_response.get('accessToken')

create_run_draft = 'https://management.core.windows.net/sdjgsdgj-abcd-2323-98cd-3bd6bcf93702/cloudServices/cloudsername/resources/automation/~/automationAccounts/testacc2/runbooks/write-helloworld/draft?api-version=2014-12-08'

param3 = {
    "tags": {
        "Testing": "show value",
        "Source": "TechNet Script Center"
    },
    "properties": {
        "description": "Hello world",
        "runbookType": "Script",
        "logProgress": "false",
        "logVerbose": "false",
        "draft": {
            "draftContentLink": {
                "uri": "https://gallery.technet.microsoft.com/scriptcenter/The-Hello-World-of-Windows-81b69574/file/111354/1/Write-HelloWorld.ps1",
                "contentVersion": "1.0.0.0",
                "contentHash": {
                    "algorithm": "sha256",
                    "value": "EqdfsYoVzERQZ3l69N55y1RcYDwkib2+2X+aGUSdr4Q="
                }
            }
        }
    }
}

headers2 = {'x-ms-version': '2013-06-01', 'Content-Type': 'application/json', "Authorization": 'Bearer ' + access_token}
# Serialize the body as JSON; passing the dict directly would form-encode it
output = requests.put(create_run_draft, headers=headers2, data=json.dumps(param3)).text
print(output)
I am using the Python programming language to call the Azure REST API, and I am getting the error below:
<Error xmlns="http://schemas.microsoft.com/windowsazure" xmlns:i="http://www.w3.org/2001/XMLSchema-instance"><Code>InternalError</Code><Message>The server encountered an internal error. Please retry the request.</Message></Error>
Please help me out with this problem; I am struggling with this error.
This could be because you are passing the values of logProgress and logVerbose as strings ("false") instead of as booleans (false).
The following worked for me:
Create runbook:
PUT https://management.core.windows.net/90751b51-7cb6-4480-8dbd-e199395b296f/cloudservices/OaaSCS/resources/automation/~/automationAccounts/JoeAutomationAccount/runbooks/testabc?api-version=2014-12-08
Request body:
{
    "properties": {
        "logVerbose": false,
        "logProgress": false,
        "runbookType": "Script",
        "draft": {
            "inEdit": false,
            "creationTime": "0001-01-01T00:00:00+00:00",
            "lastModifiedTime": "0001-01-01T00:00:00+00:00"
        }
    },
    "name": "testabc"
}
Upload draft content:
PUT https://management.core.windows.net/90751b51-7cb6-4480-8dbd-e199395b296f/cloudservices/OaaSCS/resources/automation/~/automationAccounts/JoeAutomationAccount/runbooks/testabc/draft/content?api-version=2014-12-08
Request body:
workflow testabc {
    "hello"
}
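Translated back to the question's Python, the two calls above might look like this; a minimal, untested sketch reusing headers2 from the question (the Content-Type for the raw script upload is a guess):
create_url = 'https://management.core.windows.net/sdjgsdgj-abcd-2323-98cd-3bd6bcf93702/cloudServices/cloudsername/resources/automation/~/automationAccounts/testacc2/runbooks/testabc?api-version=2014-12-08'
body = {
    "properties": {
        "logVerbose": False,   # Python booleans serialize to JSON true/false
        "logProgress": False,
        "runbookType": "Script",
        "draft": {
            "inEdit": False,
            "creationTime": "0001-01-01T00:00:00+00:00",
            "lastModifiedTime": "0001-01-01T00:00:00+00:00"
        }
    },
    "name": "testabc"
}
print(requests.put(create_url, headers=headers2, data=json.dumps(body)).text)

# Upload the draft content as the raw request body
content_url = create_url.replace('?api-version', '/draft/content?api-version')
script = 'workflow testabc {\n    "hello"\n}'
headers_text = dict(headers2, **{'Content-Type': 'text/plain'})
print(requests.put(content_url, headers=headers_text, data=script).text)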
