I'm stuck on a problem with Google Cloud Logging and Google Cloud Trace on Google Kubernetes Engine.
I have an application that consumes a Google Cloud Pub/Sub topic, and I want to unify the logs of every Pub/Sub message handler call under a single trace.
My Cloud Logging handler code:
import os
from typing import Any, Dict

from google.cloud.logging.handlers import CloudLoggingHandler
from google.cloud.logging.resource import Resource


class GCLHandler(CloudLoggingHandler):

    def emit(self, record):
        message = super(GCLHandler, self).format(record)
        resource = Resource(
            type='k8s_container',
            labels={
                'cluster_name': os.environ['CLUSTER_NAME'],
                'container_name': os.environ['POD_APP_NAME'],
                'location': os.environ['CLUSTER_LOCATION'],
                'namespace_name': os.environ['POD_NAMESPACE'],
                'pod_name': os.environ['POD_NAME'],
                'project_id': _settings.PROJECT_NAME
            }
        )
        labels: Dict[str, Any] = {
            'k8s-pod/app': os.environ['POD_APP_NAME'],
            'k8s-pod/app_kubernetes_io/managed-by': os.environ['POD_MANAGED_BY'],
            'k8s-pod/pod-template-hash': os.environ['POD_TEMPLATE_HASH']
        }
        trace = getattr(record, 'traceId', None)
        if trace is not None:
            trace = f'projects/{_settings.PROJECT_NAME}/traces/{trace}'
        self.transport.send(
            record,
            message,
            resource=resource,
            labels=labels,
            trace=trace,
            span_id=getattr(record, 'spanId', None)
        )
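For context, the handler is wired up at startup roughly like this (a hypothetical sketch: the client and name arguments follow the stock CloudLoggingHandler constructor, and the log name 'pubsub-consumer' is made up):

import logging

import google.cloud.logging

# hypothetical wiring: CloudLoggingHandler subclasses take the Cloud
# Logging client as their first argument; the log name is an example
client = google.cloud.logging.Client(project=_settings.PROJECT_NAME)
handler = GCLHandler(client, name='pubsub-consumer')
logging.getLogger().addHandler(handler)
logging.getLogger().setLevel(logging.DEBUG)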
I use the OpenCensus integration with Cloud Trace and logging, so I can get the traceId and spanId and pass them to the Cloud Logging transport. That part works fine: the LogEntry in the Logs Viewer contains the proper traceId and spanId.
My code using Cloud Trace looks like this:
import logging

from opencensus.ext.stackdriver import trace_exporter as stackdriver_exporter
from opencensus.trace import config_integration
from opencensus.trace.samplers import AlwaysOnSampler
from opencensus.trace.tracer import Tracer

config_integration.trace_integrations(['logging'])
logger = logging.getLogger(__name__)
exporter = stackdriver_exporter.StackdriverExporter(
    project_id=settings.PROJECT_NAME
)


async def handle_message(message: Message) -> None:
    tracer = Tracer(exporter=exporter, sampler=AlwaysOnSampler())
    with tracer.span(name=f'Message#{message.message_id}'):
        logger.debug('debug')
        logger.info('info')
        logger.warning('warning')
So I can see these logs in the Logs Viewer, but they aren't grouped into a single trace. If I open the Cloud Trace viewer and search by traceId, however, I do find the trace with its connected logs.
Q: Is there any way to display the trace in the Logs Viewer the way it is displayed for App Engine services under appengine.googleapis.com/request_log?
As confirmed by @Nikita Davydov in the comment section, there's a workaround: you can attach a fake http_request payload to group the logs.
If that doesn't work for you, you can file a feature request on the Google Public Issue Tracker to change the current behavior.
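For reference, a minimal sketch of that workaround (hedged: all field values are arbitrary placeholders used purely as grouping keys, and it assumes a google-cloud-logging version whose transport forwards extra keyword arguments such as http_request to the LogEntry; older 1.x transports do not):

# inside GCLHandler.emit: pass a synthetic http_request so the Logs
# Viewer groups entries the way it does for App Engine request logs;
# the values below are fake, chosen only to make grouping kick in
fake_http_request = {
    'requestMethod': 'POST',
    'requestUrl': 'pubsub://message',  # placeholder grouping key
    'status': 200,
}
self.transport.send(
    record,
    message,
    resource=resource,
    labels=labels,
    trace=trace,
    span_id=getattr(record, 'spanId', None),
    http_request=fake_http_request,
)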
Related
I am trying to set up Application Insights similar to how it is done here.
I am using Python 3.9.13 and the following packages: opencensus==0.11.0, opencensus-ext-azure==1.1.7, opencensus-context==0.1.3.
My code looks something like this:
import logging
import time

from opencensus.ext.azure.log_exporter import AzureLogHandler

# create the logger
app_insights_logger = logging.getLogger(__name__)

# set the handler
app_insights_logger.addHandler(AzureLogHandler(
    connection_string='InstrumentationKey=00000000-0000-0000-0000-000000000000')
)

# set the logging level
app_insights_logger.setLevel(logging.INFO)

# this prints 'logging level = 20'
print('logging level = ', app_insights_logger.getEffectiveLevel())

# try to log an exception
try:
    result = 1 / 0
except Exception:
    app_insights_logger.exception('Captured a math exception.')
    app_insights_logger.handlers[0].flush()
    time.sleep(5)
However, the exception does not get logged. I tried adding the explicit flush mentioned in this post.
Additionally, I tried passing just the instrumentation key as mentioned in the docs; when that didn't work, I tried the entire connection string (the one with the ingestion endpoint).
So:
How can I debug whether my app is actually sending requests to Azure?
How can I check on the Azure portal whether it is a permission issue?
You can set the severity level before logging any kind of telemetry information to Application Insights.
Note: by default, the root logger is configured with WARNING severity. If you want other severities captured as well, you have to lower the level explicitly, e.g. logger.setLevel(logging.INFO).
I am using the code below to log telemetry information to Application Insights:
import logging

from opencensus.ext.azure.log_exporter import AzureLogHandler

AI_conn_string = '<Your AI Connection string>'
handler = AzureLogHandler(connection_string=AI_conn_string)

logger = logging.getLogger()
logger.addHandler(handler)

# by default, the root logger is configured with WARNING severity
logger.warning('python console app warning log in AI')

# lower the severity so INFO-level records are captured too
logger.setLevel(logging.INFO)
logger.info('Test Information log')
logger.info('python console app information log in AI')

try:
    logger.warning('python console app Try block warning log in AI')
    result = 1 / 0
except Exception:
    logger.setLevel(logging.ERROR)
    logger.exception('python console app error log in AI')
Results:
Warning and information logs (screenshot)
Error log in AI (screenshot)
How can I debug whether my app is actually sending requests to Azure?
You cannot step through whether the telemetry data reaches Application Insights or not, but you can watch the process, as shown below.
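If you also want a local signal, one option (a sketch; it assumes opencensus-ext-azure reports transport problems through the standard logging module under its own package name) is to surface the exporter's internal diagnostics on stderr while you test:

import logging

# assumption: the Azure exporter logs retries, rejected payloads and
# network errors under the 'opencensus.ext.azure' logger namespace
internal_logger = logging.getLogger('opencensus.ext.azure')
internal_logger.setLevel(logging.DEBUG)
internal_logger.addHandler(logging.StreamHandler())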
How can I check on the Azure portal whether it is a permission issue?
The instrumentation key and connection string themselves carry the permission to access the Application Insights resource.
I'm struggling to find out why my log entries are being duplicated in Cloud Logging.
I use a custom dummy handler that does nothing, and I'm also using a named logger.
Here's my code:
import logging

import google.cloud.logging


class MyLogHandler(logging.StreamHandler):
    def emit(self, record):
        pass


# Setting up App Engine's logging
client = google.cloud.logging.Client()
client.get_default_handler()
client.setup_logging()

# Setting up my custom logger/handler
my_handler = MyLogHandler()
logging.getLogger('my_logger').addHandler(my_handler)
logging.getLogger('my_logger').setLevel(logging.DEBUG)

# note that I'm logging to the 'my_logger' logger, not the root logger
logging.getLogger('my_logger').debug('Why is this message being duplicated?')
In the first place, I think this message shouldn't even show up in Cloud Logging, because I'm using a named logger called 'my_logger' and Cloud Logging is attached to the root logger only. But anyway...
The above code is imported into my app.py, which bootstraps a Flask app on App Engine.
Here is a screenshot of the issue:
This question describes a similar issue: Duplicate log entries with Google Cloud Stackdriver logging of Python code on Kubernetes Engine.
But I tried every workaround suggested in that topic, and none of them worked either.
Is there something I'm missing here? Thanks in advance.
The fix that worked here is to keep only the App Engine handlers on the root logger:

import logging

from google.cloud.logging.handlers import AppEngineHandler

root_logger = logging.getLogger()
# keep the GCP App Engine handlers only, to prevent logs from also
# being written to STDERR and ingested a second time
root_logger.handlers = [handler for handler in root_logger.handlers
                        if isinstance(handler, AppEngineHandler)]
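As for why records logged to 'my_logger' reach Cloud Logging at all: named loggers propagate their records to the root logger by default, and client.setup_logging() attaches its handler to the root logger. If you want a named logger kept out of Cloud Logging entirely, a minimal standard-library sketch:

# stop 'my_logger' records from bubbling up to the root logger's
# handlers (including the one installed by setup_logging())
logging.getLogger('my_logger').propagate = False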
When using Python (3.8) in Azure Functions, is there a way to send structured logs to Application Insights? More specifically, I'm trying to send custom dimensions with a log message. All I could find about logging is this very brief section.
Update 0127:
It was solved, per this GitHub issue. Here is the sample code:
# Change the instrumentation key and ingestion endpoint before you run this function app
import logging

import azure.functions as func
from opencensus.ext.azure.log_exporter import AzureLogHandler

logger_opencensus = logging.getLogger('opencensus')
logger_opencensus.addHandler(
    AzureLogHandler(
        connection_string='InstrumentationKey=aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee;IngestionEndpoint=https://eastus-6.in.applicationinsights.azure.com/'
    )
)


def main(req: func.HttpRequest) -> func.HttpResponse:
    properties = {
        'custom_dimensions': {
            'key_1': 'value_1',
            'key_2': 'value_2'
        }
    }
    logger_opencensus.info('logger_opencensus.info Custom Dimension', extra=properties)
    logger_opencensus.info('logger_opencensus.info Statement')
    return func.HttpResponse("OK")
Please try the OpenCensus Python SDK.
The example code is in the Logs section, step 5:
Description: You can also add custom properties to your log messages in the extra keyword argument by using the custom_dimensions field. These properties appear as key-value pairs in customDimensions in Azure Monitor.
The sample:
import logging

from opencensus.ext.azure.log_exporter import AzureLogHandler

logger = logging.getLogger(__name__)

# TODO: replace the all-zero GUID with your instrumentation key.
logger.addHandler(AzureLogHandler(
    connection_string='InstrumentationKey=00000000-0000-0000-0000-000000000000')
)

properties = {'custom_dimensions': {'key_1': 'value_1', 'key_2': 'value_2'}}

# Use properties in logging statements
logger.warning('action', extra=properties)
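The same extra pattern also applies to exception telemetry; a short sketch (the failure and the reuse of properties here are made up for illustration):

try:
    raise ValueError('example failure')  # made-up failure
except ValueError:
    # custom_dimensions ride along with the exception record as well
    logger.exception('Captured an exception', extra=properties)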
Stack:
Google App Engine Flexible
Python 2.7
I cannot seem to write info messages using the Python logging module after configuring it to use Stackdriver. In the main function for my service I have:
import logging

from paste import httpserver

import google.cloud.logging

client = google.cloud.logging.Client('my-project-id')
client.setup_logging(logging.INFO)
logging.getLogger().setLevel(logging.INFO)

logging.info('in main - info')
httpserver.serve(app, host='127.0.0.1', port='8080')
But I do not see the text 'in main - info' in my Stackdriver logs. I also have the following code that responds to a request for the main page of my app:
class MainPage(webapp2.RequestHandler):
    def get(self):
        logging.info('in MainPage - info')
        logging.error('in MainPage - error')
        print('in MainPage - print')
        self.response.write('nothing to see.')
This code returns 'nothing to see.' to the browser, writes 'in MainPage - error' to the Stackdriver logs via stderr, and writes 'in MainPage - print' to the Stackdriver logs via stdout. However, 'in MainPage - info' does not appear anywhere in the Stackdriver logs. It is as if setting the log level of the logger has no effect and it is stuck at ERROR.
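One check worth running (a sketch using only the standard library, nothing Stackdriver-specific): dump the root logger's level and the levels of its attached handlers, since a handler whose own level is above INFO drops info records even when the logger allows them:

import logging

root = logging.getLogger()
print('root logger level:', root.level)
for h in root.handlers:
    # any handler with a level above logging.INFO (20) drops info records
    print(type(h).__name__, 'handler level:', h.level)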
I've tried many variations of this and searched for examples, but I'm coming up empty. I am sure there must be something I am missing about Stackdriver logging; I'm hoping someone can point me to an example that successfully writes info log messages to Stackdriver using Python 2.7 on Google App Engine Flexible.
Thanks in advance!
I'm using Zappa to deploy a Python/Django WSGI app to AWS API Gateway and Lambda.
I have all of these in my environment:
NEW_RELIC_CONFIG_FILE: /var/task/newrelic.ini
NEW_RELIC_LICENSE_KEY: redacted
NEW_RELIC_ENVIRONMENT: dev-zappa
NEW_RELIC_STARTUP_DEBUG: "on"
NEW_RELIC_ENABLED: "on"
I'm doing a "manual agent start" in my wsgi.py, as documented:
import logging

import newrelic.agent

# Will collect NEW_RELIC_CONFIG_FILE and NEW_RELIC_ENVIRONMENT from the environment
# Dear god why??!?!
# NB: Looks like this IS what makes it go
newrelic.agent.global_settings().enabled = True
newrelic.agent.initialize('/var/task/newrelic.ini', 'dev-zappa',
                          log_file='stderr', log_level=logging.DEBUG)
I'm not using @newrelic.agent.wsgi_application, since Django should be auto-magically detected.
I've added a middleware to shut down the agent before the Lambda gets frozen, but the logging suggests that only the first request is being sent to New Relic. Without the shutdown, I get no logging from the New Relic agent, and there are no events in APM.
class NewRelicShutdownMiddleware(MiddlewareMixin):
    """Simple middleware that shuts down the NR agent at the end of a request."""

    def process_request(self, request):
        pass
        # really wait for the agent to register with the collector;
        # enabling this causes more log messages about starting data
        # samplers, but only on the first request
        # newrelic.agent.register_application(timeout=10)

    def process_response(self, request, response):
        newrelic.agent.shutdown_agent(timeout=2.5)
        return response

    def process_exception(self, request, exception):
        newrelic.agent.shutdown_agent(timeout=2.5)
In my newrelic.ini I have the following. But when I log newrelic.agent.global_settings(), it contains the default app name (which did get created in APM) and enabled = False, which led to some of the hacks above (the environment variables, and editing newrelic.agent.global_settings() before initialize):
[newrelic:dev-zappa]
app_name = DEV APP zappa
monitor_mode = true
TL;DR - two questions:
How do I get New Relic to read its ini file when it doesn't want to?
How do I get New Relic to record data for all requests in AWS Lambda?
Zappa does not use your wsgi.py file (currently), so the hooks there aren't happening. Take a look at this PR which allows for it: https://github.com/Miserlou/Zappa/pull/1251
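Until that lands, one possible workaround (a sketch, not confirmed by the Zappa docs) is to move the agent bootstrap out of wsgi.py and into a module Zappa does import on cold start, such as the Django settings module:

# settings.py -- hypothetical placement: this runs because Zappa's
# handler imports the Django settings when the Lambda container starts
import logging

import newrelic.agent

newrelic.agent.initialize('/var/task/newrelic.ini', 'dev-zappa',
                          log_file='stderr', log_level=logging.DEBUG)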