GCP structured logging, works locally but not on kubernetes - python

I'm trying to create structured logging on GCP from my service. When I run it locally I manage to get a jsonPayload in the correct format, as shown below:
jsonPayload: {
  exception: {
    Message: ""
    StackTrace: ""
    TargetSite: ""
    Type: "value_error"
  }
  logging.googleapis.com/labels: {2}
  logging.googleapis.com/spanId: "94ecf8a83efd7f34"
  logging.googleapis.com/trace: "dc4696d790ab643b058f87dbeebf19a3"
  message: "Bad Request"
  severity: "ERROR"
  time: "2022-10-05T14:38:52.965749Z"
}
but when I run the service on Kubernetes I only get the following in the logs:
jsonPayload: {
  exception: {
    Message: ""
    StackTrace: ""
    TargetSite: ""
    Type: "value_error"
  }
  message: "Bad Request"
}
Why is GCP removing logging.googleapis.com/labels, logging.googleapis.com/spanId, logging.googleapis.com/trace, and severity from the jsonPayload when I run the service on GCP Kubernetes?

This may be working-as-intended (WAI) but it's unclear from your question.
Google Structured Logging attempts to parse log entries as JSON and extracts specific fields including logging.googleapis.com/labels.
However (!) when it does this, some of these fields including logging.googleapis.com/labels are relocated from the jsonPayload field to another LogEntry field.
See:
- labels
- spanId
- trace
So, you should not look for these values in jsonPayload in Cloud Logging but in e.g. labels, spanId and trace:
PROJECT=...
# Filter by entries that contain `jsonPayload`
gcloud logging read "jsonPayload:*" \
  --project=${PROJECT} \
  --format="value(jsonPayload,labels,spanId,trace)"
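For reference, a minimal sketch of emitting such an entry from Python (hedged: the project ID, label and trace/span values below are placeholders, not from the original question). The special logging.googleapis.com/* keys are the ones the structured-logging agent parses and promotes out of jsonPayload:

import json
import sys

def log_error(message, trace_id, span_id):
    # The special keys below are recognized by the agent and moved from
    # jsonPayload to the corresponding top-level LogEntry fields.
    entry = {
        "severity": "ERROR",
        "message": message,
        "logging.googleapis.com/trace": f"projects/my-project/traces/{trace_id}",
        "logging.googleapis.com/spanId": span_id,
        "logging.googleapis.com/labels": {"service": "my-service"},
    }
    print(json.dumps(entry), file=sys.stdout, flush=True)

log_error("Bad Request", "dc4696d790ab643b058f87dbeebf19a3", "94ecf8a83efd7f34")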

Related

InfluxDB Unauthorized 401 - with localhost access

When I try to write data into InfluxDB using the InfluxDB client, I get the error below. I was able to log in to the InfluxDB web UI at http://localhost:8086 with the same credentials provided in the code, but I get an unauthorized message when using the Python code. Any help would be appreciated.
Error:
raise InfluxDBClientError(err_msg, response.status_code)
influxdb.exceptions.InfluxDBClientError: 401: {"code":"unauthorized","message":"Unauthorized"}
Code:
from influxdb import InfluxDBClient
from datetime import datetime

client = InfluxDBClient('localhost', 8086, 'username', 'password', 'bucket_name')

def startProcess():
    # df and columns are defined elsewhere in the original script
    for row in df.iterrows():
        influxJson = [
            {
                "measurement": "testing123",
                "time": datetime.utcnow().isoformat() + "Z",
                "tags": {
                    'ResiliencyTier': 'targetResiliencyTier',
                    'lob': 'abcdefgh'
                },
                "fields": {
                    columns[0][0]: str(row[1][0]),
                    columns[1][0]: str(row[1][1]),
                }
            }
        ]
        client.write_points(influxJson)
    print("InfluxDB injection DONE")

startProcess()
Thanks
Error code 401 (unauthorized) can be avoided in a dev environment by enabling the HTTP endpoint in the InfluxDB config file:
[http]
  # Determines whether HTTP endpoint is enabled.
  enabled = true
Generally the config file can be found at:
/etc/influxdb/influxdb.conf
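As a quick sanity check (a sketch, not part of the original answer), you can also verify the credentials from Python before attempting any writes; a bad username/password pair raises the same 401:

from influxdb import InfluxDBClient

client = InfluxDBClient('localhost', 8086, 'username', 'password', 'bucket_name')
# Listing databases requires valid credentials, so this fails fast with
# InfluxDBClientError 401 if the username/password pair is wrong.
print(client.get_list_database())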

After migrating to pure websocket, I get a GraphQL subscription error on AWS Appsync

Due to some recent AWS restrictions, I can no longer use MQTT over WebSockets for GraphQL subscriptions on AppSync.
Based on this link from AWS, I changed my Python codebase.
It seems to connect correctly, but when subscribing, I receive the following error:
{'message': "Cannot return null for non-nullable type: 'ID' within parent 'Command' (/onCommandCreated/commandId)"}
This error is repeated for each field of the Command object.
This is the schema:
type Command {
  commandId: ID!
  createdAt: AWSDateTime!
  command: String!
  arguments: [String]!
  deviceId: ID!
  status: Int!
}

type Mutation {
  submitCommand(command: NewCommand!): Command
}

input NewCommand {
  command: String!
  arguments: [String]!
  deviceId: ID!
}

type Subscription {
  onCommandCreated(deviceId: ID!): Command
    @aws_subscribe(mutations: ["submitCommand"])
}

schema {
  mutation: Mutation
  subscription: Subscription
}
And this is my subscription:
{
    "query": """subscription onCommand($deviceId: ID!) {
        onCommandCreated(deviceId: $deviceId) {
            commandId
            deviceId
            command
            arguments
            status
        }
    }""",
    "variables": {"deviceId": <a device id>}
}
What is absolutely unclear to me is that I only started having this issue after switching to pure WebSockets.
I compared the CloudWatch logs between pure WebSockets and MQTT, but I did not notice any relevant difference except for request IDs and timing logs.
I forgot to mention that I am using the OIDC authentication method.
The codebase is available here.
Thanks,
In the end I removed the not-null requirements for the Command object.
As a result, the new Command object is the following:
type Command {
  commandId: ID
  createdAt: AWSDateTime
  command: String
  arguments: [String]
  deviceId: ID
  status: Int
}
This is not to be taken as a solution but a workaround.
Any more decent solutions are welcome.
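One thing worth checking (an observation beyond the original answer, hedged): with AppSync, the payload pushed to subscribers is derived from the mutation's selection set, so if submitCommand does not select every field the subscription requests, those fields arrive as null and violate the non-null (!) constraints. A sketch of a mutation request that selects all subscribed fields (the variable values below are placeholders):

# Hypothetical mutation payload; it selects every field that subscribers
# request, so none of them arrive as null on the subscription side.
mutation = {
    "query": """mutation submit($command: NewCommand!) {
        submitCommand(command: $command) {
            commandId
            createdAt
            command
            arguments
            deviceId
            status
        }
    }""",
    "variables": {"command": {
        "command": "reboot",
        "arguments": [],
        "deviceId": "<a device id>",
    }},
}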

Azure Function Read & Write to Blobs Storage throwing internal 500 error

By no means am I an expert when it comes to Python, but I have tried my best. I have written a short piece of code in Python that reads a text file from blob storage, appends some column names, and outputs it to a target folder. The code executes correctly when I run it from VS Code.
When I try to run it as an Azure Function, it throws an error.
Could it be that I have the wrong parameters in my function.json bindings file?
{
    "statusCode": 500,
    "message": "Internal server error"
}
This is my original code:
import logging
import azure.functions as func
from azure.storage.blob import BlobClient, BlobServiceClient

def main():
    logging.info('start Python Main function processed a request.')
    # CONNECTION STRING
    blob_service_client = BlobServiceClient.from_connection_string(CONN_STR)
    # MAP SOURCE FILE
    blob_client = blob_service_client.get_blob_client(container="source", blob="source.txt")
    # SOURCE CONTENTS
    content = blob_client.download_blob().content_as_text()
    # WRITE HEADER TO AN OUTPUT FILE
    output_file_dest = blob_service_client.get_blob_client(container="target", blob="target.csv")
    # INITIALIZE OUTPUT
    output_str = ""
    # STORE COLUMN HEADERS
    data = list()
    data.append(list(["column1", "column2", "column3", "column4"]))
    output_str += ('"' + '","'.join(data[0]) + '"\n')
    output_file_dest.upload_blob(output_str, overwrite=True)
    logging.info('END OF FILE UPLOAD')

if __name__ == "__main__":
    main()
This is my function.json
{
  "scriptFile": "__init__.py",
  "bindings": [
    {
      "type": "blob",
      "direction": "in",
      "name": "blob_client",
      "path": "source/source.txt",
      "connection": "STORAGE_CONN_STR"
    },
    {
      "type": "blob",
      "direction": "out",
      "name": "output_file_dest",
      "path": "target/target.csv",
      "connection": "STORAGE_CONN_STR"
    }
  ]
}
This is my requirements.txt:
azure-functions==1.7.2
azure-storage-blob==12.8.1
Try adding your connection string to the Application settings of your Function App. The Application settings can be found under Configuration (under Settings). Click on New application setting, enter a name, and specify the value. Then connect to the blob with the following code:
import os
from azure.storage.blob import BlobClient, BlobServiceClient

# CONNECTION STRING
blob_service_client = BlobServiceClient.from_connection_string(
    os.getenv('the name of your connection string in the application settings'))
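A slightly fuller sketch of that approach (hedged: it assumes the setting is named STORAGE_CONN_STR, as in the question's function.json):

import os
import logging
from azure.storage.blob import BlobServiceClient

def main():
    # Application settings are exposed to the function process as
    # environment variables, so os.getenv picks the value up at runtime.
    conn_str = os.getenv('STORAGE_CONN_STR')
    blob_service_client = BlobServiceClient.from_connection_string(conn_str)
    source = blob_service_client.get_blob_client(container="source", blob="source.txt")
    content = source.download_blob().content_as_text()
    logging.info('Downloaded %d characters from source.txt', len(content))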
There are two possible ways to get a 500 Internal Server Error when you Test/Run the code from the Azure Portal:
1. You need to add all the settings mentioned in local.settings.json to the Application Settings (Function App -> Configuration -> Application Settings). Mostly we need to add StorageConnections and AzureWebJobsStorage, as below:
[screenshot: AppSettings]
2. Check CORS; if possible, add your respective storage link for access (Function App -> CORS).
[screenshot: CORS]
I tried reproducing the 500 Internal Server Error when we Test/Run the code (I did this by removing "*" in CORS):
[screenshot: TestBeforeFix]
The procedure for tracing this error is as follows:
Go to Log Stream, which shows the live metrics of the function app.
Test/Run the function and, in parallel, watch the log stream for any errors.
The screenshot below is from after fixing CORS:
[screenshot: TestAfterFix]
Also, according to your code and function.json, you should test the scenario by uploading the blob source.txt to your source container; as soon as you upload it, the target.csv file will be created in the target container.
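One further thing worth checking (an observation beyond the answers above, hedged): every Azure Functions function needs exactly one trigger binding, and the function.json in the question declares only blob input/output bindings. A sketch of the same file with an HTTP trigger added (the names req and $return are conventional placeholders, not from the original post):
{
  "scriptFile": "__init__.py",
  "bindings": [
    {
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "methods": ["get", "post"],
      "authLevel": "function"
    },
    {
      "type": "http",
      "direction": "out",
      "name": "$return"
    },
    {
      "type": "blob",
      "direction": "in",
      "name": "blob_client",
      "path": "source/source.txt",
      "connection": "STORAGE_CONN_STR"
    },
    {
      "type": "blob",
      "direction": "out",
      "name": "output_file_dest",
      "path": "target/target.csv",
      "connection": "STORAGE_CONN_STR"
    }
  ]
}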

How can I hide my exception stack trace in python code which is deployed on aws lambda and integrated with API gateway

I use raise Exception("message") and it returns this in the browser:
{"errorMessage": "{"httpStatus": 200, "message": "Exception: {\"httpStatus\": 200, \"message\": \"Exception: [InternalServerError] Project not found\"}"}", "errorType": "Exception", "stackTrace": [the stack trace]}
Exposing the stack trace is a security issue.
If you are using API Gateway in front of Lambda, you can set a response mapping template like the one below; this will override the response error message and the response code as well.
## $inputRoot is assumed to hold the integration response body
#set($inputRoot = $input.path('$'))
#if($inputRoot.toString().contains('InternalServerError'))
{
    "message": "Internal Server Error"
}
#set($context.responseOverride.status = 500)
#end
Alternatively, you can also catch all the exceptions in the Lambda and return whatever you like. However, this will not override the status code, and you would still get 200 even in case of an error.
def handler(event, context):
    try:
        dosomething(event)
    except Exception:
        return {"message": "Internal Server error"}

def dosomething(event):
    ...  # your business logic
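If the API uses Lambda proxy integration (an assumption; the original question does not say which integration type is in use), the handler itself can control the status code while keeping the details in CloudWatch only:

import json
import logging

logger = logging.getLogger()

def handler(event, context):
    try:
        return dosomething(event)
    except Exception:
        # The full stack trace goes to CloudWatch for operators; the
        # caller only ever sees the generic body below.
        logger.exception("Unhandled error")
        return {
            "statusCode": 500,
            "body": json.dumps({"message": "Internal Server Error"}),
        }

def dosomething(event):
    ...  # your business logic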

Firebase custom auth token with debug set to true not being verbose

I am trying to get the verbosity level that the Admin Console Simulator gives, but using Python on a server. Using the firebase_token_generator suggested in the Firebase docs, I wrote some tests.
from firebase_token_generator import create_token

create_token("<secret>", { "uid": "simplelogin:test" },
             { "debug": True, "simulate": True })
Running the token with curl results in the simple "Permission denied" error with no details about which rule failed.
$ curl https://<myapp>.firebaseio.com/.json?auth=<token>
{
"error" : "Permission denied"
}
To make sure that my secret key was correct and that I was setting the options in the correct place, I generated a token with admin set to true, and it was successful.
create_token("<secret>", { "uid": "simplelogin:test" }, { "admin": True })
Why can't I get the verbosity level that is in the simulator?
You must be using a Firebase client library in order to receive verbose security rule logging when using a token with the debug flag set, whether that client be the JS client (Web or Node.js), ObjC (iOS or OS X), or Java (Android or JVM). Alas, the REST API is not supported.
