Stackdriver Python log RPC error - python

I am having an RPC issue when logging an error to GCP Stackdriver. This is the error message:
grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with (StatusCode.DEADLINE_EXCEEDED, Deadline Exceeded)>
Here is the python code for logging:
import logging
import logging.handlers
import os
import config
import google.cloud.logging as gcp_logging
from google.oauth2 import service_account
logger = logging.getLogger('my_logger')
# Using Google Stackdriver Logging
#client = gcp_logging.Client(project=config.project, credentials=config.credentials_gcp_ml)
#client = gcp_logging.Client.from_service_account_json('./cred.json')
cred = service_account.Credentials.from_service_account_file('./cred.json')
client = gcp_logging.Client(project=config.project, credentials=cred)
hdlr = client.get_default_handler()
logger = logging.getLogger('cloudLogger')
formatter = logging.Formatter('%(asctime)s %(levelname)s %(message)s')
hdlr.setFormatter(formatter)
logger.addHandler(hdlr)
logger.setLevel(logging.INFO)
I run this code on my local machine, connected to my GCP account, with google-auth 1.2.0 and google-cloud-logging 1.4.0.

I have tried your code (slightly modified to adapt it to my project, with a last line added to write a log entry to the Stackdriver Logging console) and it is working for me.
Here I share the code I have used:
import logging
import logging.handlers
import os
import google.cloud.logging as gcp_logging
from google.oauth2 import service_account
logger = logging.getLogger('my_logger')
cred = service_account.Credentials.from_service_account_file('./private-key.json')
client = gcp_logging.Client(project="<YOUR_PROJECT_ID>", credentials=cred)
hdlr = client.get_default_handler()
logger = logging.getLogger('cloudLogger')
formatter = logging.Formatter('%(asctime)s %(levelname)s %(message)s')
hdlr.setFormatter(formatter)
logger.addHandler(hdlr)
logger.setLevel(logging.INFO)
logger.info('This is a Logging Test.')
After running python test.py on my local machine, this is the output I get:
Program shutting down, attempting to send 1 queued log entries to Stackdriver Logging...
Waiting up to 5 seconds.
Sent all pending logs.
Then, if I go to the Stackdriver Logging console and filter on the Global resource type (resource.type="global"), the test log entry shows up as expected.
As stated by @A.Queue, I am running google-auth 1.3.0 and google-cloud-logging 1.4.0, so you should try upgrading google-auth with:
pip install --upgrade google-auth
Try that and come back to us if the same error keeps coming up.
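To confirm which versions are actually active in your environment (a quick check, assuming the packages were installed with pip):
pip show google-auth google-cloud-logging
Or, from within Python, google-auth exposes its version directly:
import google.auth
print(google.auth.__version__)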

Related

Syslog not sending logs to Papertrail

This code snippet, linked here, should be sending a log message to Papertrail; unfortunately, nothing happens at all (no error message either). I did replace the "N" and the "XXXXX" with my own values. The environment is Python 3.10.6, inside a virtual environment created by Poetry.
I would expect calling an undefined function to log an exception to Papertrail.
Telnet works fine and my other applications work fine as well.
import logging
import socket
import sys
from logging.handlers import SysLogHandler

syslog = SysLogHandler(address=('logsN.papertrailapp.com', XXXXX))
format = '%(asctime)s YOUR_APP: %(message)s'
formatter = logging.Formatter(format, datefmt='%b %d %H:%M:%S')
syslog.setFormatter(formatter)
logger = logging.getLogger()
logger.addHandler(syslog)
logger.setLevel(logging.INFO)

def my_handler(type, value, tb):
    # Pass exc_info explicitly: outside an except block, logger.exception()
    # has no active exception to pick up, so the traceback would be lost
    logger.error('Uncaught exception: {0}'.format(str(value)), exc_info=(type, value, tb))

# Install exception handler
sys.excepthook = my_handler

logger.info('This is a message')
nofunction()  # log an uncaught exception
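One thing worth ruling out: SysLogHandler sends over UDP by default, and outbound UDP is often blocked or silently dropped. If your Papertrail log destination accepts plain-text TCP (an assumption; check the destination's settings), a minimal sketch of forcing a TCP socket looks like this:

import socket
from logging.handlers import SysLogHandler

# SysLogHandler defaults to SOCK_DGRAM (UDP); request a TCP stream instead
syslog = SysLogHandler(address=('logsN.papertrailapp.com', XXXXX),
                       socktype=socket.SOCK_STREAM)

If the TCP variant gets through, the original failure was at the network layer rather than in the logging setup.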

Celery task log using google-cloud-logging

I'm currently managing my API using Celery tasks and a Kubernetes cluster on Google Cloud Platform.
Celery automatically logs the input and output of each task. This is something I want, but I would like to use google-cloud-logging so that the input and output are logged as a jsonPayload.
For all my other logs I use the following:
from google.cloud.logging.handlers import CloudLoggingHandler
from google.cloud.logging_v2.handlers import setup_logging
# Imports the Cloud Logging client library
import google.cloud.logging
# Instantiates a client
client = google.cloud.logging.Client()
handler = CloudLoggingHandler(client)
setup_logging(handler)
import logging
logger = logging.getLogger(__name__)
data_dict = {"my": "data"}
logger.info("this is an example", extra={"json_fields": data_dict})
And I use Celery with the following template:
app = Celery(**my_params)
@app.task
def task_test(data):
    # Update dictionary with new data
    data["key1"] = "value1"
    return data
...
detection_task = celery.signature('tasks.task_test', args=([[{"hello": "world"}]]))
r = detection_task.apply_async()
data = r.get()
Here's an example of a log entry I receive from Celery (screenshot omitted); the blurred part corresponds to the dict/JSON that I would like to appear as a jsonPayload instead of a textPayload.
(Also note that this entry is marked as ERROR on GCP but as INFO by Celery.)
Any idea how I could connect Python's built-in logging, the Celery logger, and the GCP logger?
To connect your Python logger with GCP Logger:
import logging
import google.cloud.logging
from google.cloud.logging.handlers import CloudLoggingHandler, setup_logging
client = google.cloud.logging.Client()
handler = CloudLoggingHandler(client, name="your_log_name")
cloud_logger = logging.getLogger('cloudLogger')
# configure cloud_logger
cloud_logger.addHandler(handler)
To connect this logger to Celery's logger:
def initialize_log(logger=None, loglevel=logging.DEBUG, **kwargs):
    logger = logging.getLogger('celery')
    # Reuse the GCP client/handler defined above
    handler = CloudLoggingHandler(client, name="your_log_name")
    handler.setLevel(loglevel)
    logger.addHandler(handler)
    return logger
from celery.signals import after_setup_logger, after_setup_task_logger
after_setup_logger.connect(initialize_log)
after_setup_task_logger.connect(initialize_log)
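With both signals connected, here is a sketch of reusing the json_fields mechanism from above inside a task (an assumption on my side: the CloudLoggingHandler is attached to the 'celery' logger as shown, and propagation is left enabled so loggers from get_task_logger reach it):

from celery.utils.log import get_task_logger

task_logger = get_task_logger(__name__)

@app.task
def task_test(data):
    data["key1"] = "value1"
    # json_fields is picked up by CloudLoggingHandler and becomes the jsonPayload
    task_logger.info("task_test result", extra={"json_fields": {"result": data}})
    return data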

Streaming structlog events to Logstash

I'm attempting to use structlog in an AWS Lambda function and stream log events directly to our Logstash server. To this end, I'm using the logstash-async library with the Beats transport.
The code:
import logging
import sys

from constants import LOGSTASH_HOST, LOGSTASH_PORT
from logstash_async.handler import AsynchronousLogstashHandler
from logstash_async.transport import BeatsTransport
from s4utils.exceptions import CatConfigNotFoundException
import structlog
from structlog.stdlib import LoggerFactory, BoundLogger

def get_log():
    logstash_handler = AsynchronousLogstashHandler(
        LOGSTASH_HOST,
        LOGSTASH_PORT,
        database_path=None,  # no on-disk cache; queued events live in memory only
        transport=BeatsTransport)
    logging.basicConfig(
        level=logging.INFO,
        format='%(message)s',
        handlers=[logstash_handler, logging.StreamHandler(sys.stdout)])
    structlog.configure(
        wrapper_class=BoundLogger,
        logger_factory=LoggerFactory(),
        processors=[structlog.processors.TimeStamper(fmt='iso', utc=True),
                    structlog.processors.JSONRenderer()])
    return structlog.get_logger().new(fields={'type': 'aws-lambda'})
I find that while I can see the stdout logging in CloudWatch, the Logstash server doesn't appear to be receiving events.
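One thing I would rule out first (an assumption, not a confirmed diagnosis): AWS Lambda freezes the execution environment, including background threads, as soon as the handler returns, so the asynchronous worker inside AsynchronousLogstashHandler may never get a chance to drain its queue. A minimal sketch of forcing a flush before returning, using the standard library's logging.shutdown() (note it also closes the handlers, so they would have to be re-created on a warm start):

import logging

def lambda_handler(event, context):
    log = get_log()  # the factory defined above
    log.info('handler invoked')
    # Flush and close every registered handler before Lambda freezes the
    # environment, giving logstash-async a chance to send queued events
    logging.shutdown()
    return {'statusCode': 200}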

Error using Google Stackdriver Logging in App Engine Standard Python

My Stack:
Google App Engine Standard
Python (2.7)
Goal:
To create named logs in Google Stackdriver Logging, https://console.cloud.google.com/logs/viewer
Docs - Stackdriver Logging:
https://google-cloud-python.readthedocs.io/en/latest/logging/usage.html
Code:
from google.cloud import logging as stack_logging
from google.cloud.logging.resource import Resource
import threading
import webapp2

class StackdriverLogging:
    def __init__(self, resource=Resource(type='project', labels={'project_id': 'project_id'}), project_id='project_id'):
        self.resource = resource
        self.client = stack_logging.Client(project=project_id)

    def delete_logger(self, logger_name):
        logger = self.client.logger(logger_name)
        logger.delete()

    def async_log(self, logger_name, sev, msg):
        t = threading.Thread(target=self.log, args=(logger_name, sev, msg,))
        t.start()

    def log(self, logger_name, sev, msg):
        logger = self.client.logger(logger_name)
        if isinstance(msg, str):
            logger.log_text(msg, severity=sev, resource=self.resource)
        elif isinstance(msg, dict):
            logger.log_struct(msg, severity=sev, resource=self.resource)

class hLog(webapp2.RequestHandler):
    def get(self):
        stackdriver_logger = StackdriverLogging()
        stackdriver_logger.async_log("my_new_log", "WARNING", msg="Hello")
        stackdriver_logger.async_log("my_new_log", "INFO", msg="world")
ERROR:
Found 1 RPC request(s) without matching response
If this is not possible in Google App Engine Standard (Python), is there any way to get this code to work:
from google.cloud import logging
client = logging.Client()
# client = logging.Client.from_service_account_json('credentials.json')
logger = client.logger("my_new_log")
logger.log_text("hello world")
If credentials are required, I'd like to use the project's service account.
Any help would be appreciated. Thank you.
I usually tie the Python logging module directly into Google Stackdriver Logging.
To do this, I create a log_helper module:
from google.cloud import logging as gc_logging
import logging
logging_client = gc_logging.Client()
logging_client.setup_logging(logging.INFO)
from logging import *
Which I then import into other files like this:
import log_helper as logging
After which you can use the module like you would the default python logging module.
To create named logs with the default Python logging module, use different loggers for different namespaces:
import log_helper as logging
test_logger = logging.getLogger('test')
test_logger.setLevel(logging.INFO)
test_logger.info('is the name of this logger')
Output:
INFO:test:is the name of this logger

Celery logs to Papertrail

I am building an application with Flask and Celery, and I am trying to send my application logs to Papertrail. This works fine for my regular (synchronous) application logs. The configuration looks like this:
import logging
from logging.handlers import SysLogHandler
import socket
class ContextFilter(logging.Filter):
    hostname = socket.gethostname()

    def filter(self, record):
        record.hostname = ContextFilter.hostname
        return True
f = ContextFilter()
logger = logging.getLogger()
logger.setLevel(logging.INFO)
logger.addFilter(f)
formatter = logging.Formatter(
    "%(asctime)s - %(name)s - %(levelname)s - %(message)s"
)
syslog = SysLogHandler(address=('<myapp>.papertrailapp.com', <port>))
syslog.setFormatter(formatter)
logger.addHandler(syslog)
I have tried adding this logger to my Celery tasks, but all I see is output on stdout and nothing in Papertrail. Does Celery do something to get around the normal logging flow?
I realize Celery has a task-specific logger, but I cannot find any documentation on how it could be configured to send logs to Papertrail.
If I read this correctly, the secret is to call the function redirect_stdouts_to_logger to send stdout to your SysLogHandler instance. Celery's docs have more.
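A minimal sketch of wiring that up with the handler from the question (assuming app is your Celery instance and syslog is the SysLogHandler configured above; redirect_stdouts_to_logger lives on app.log):

from celery.signals import after_setup_logger

@after_setup_logger.connect
def setup_celery_logging(logger=None, loglevel=None, **kwargs):
    # Attach the Papertrail handler to Celery's logger and redirect
    # anything tasks print to stdout/stderr into it as well
    logger.addHandler(syslog)
    app.log.redirect_stdouts_to_logger(logger, loglevel)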
