How to suppress exit message for google cloud logging - python

Google Cloud Logging prints this message when my Python program exits:
Program shutting down, attempting to send 1 queued log entries to Stackdriver Logging...
Waiting up to 5 seconds.
Sent all pending logs.
I would like to suppress that message. Is there a config setting that prevents it from being printed when the program exits? Thank you.

Use SyncTransport instead of the default BackgroundThreadTransport. The background transport queues entries and flushes them from a worker thread, which is what prints that shutdown message; the synchronous transport sends each entry as it is logged, so there is nothing left to flush at exit.

import google.cloud.logging
from google.cloud.logging_v2.handlers import CloudLoggingHandler
from google.cloud.logging_v2.handlers.transports import SyncTransport

client = google.cloud.logging.Client()
handler = CloudLoggingHandler(client, name="your_log_name", transport=SyncTransport)
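For completeness, a minimal sketch of attaching that handler to a standard Python logger (the logger name and message are placeholders):

import logging

cloud_logger = logging.getLogger("cloudLogger")  # placeholder logger name
cloud_logger.setLevel(logging.INFO)
cloud_logger.addHandler(handler)  # the SyncTransport-backed handler created above
# each record is sent to Cloud Logging immediately, so there is no queue to flush at exit
cloud_logger.info("Hello from SyncTransport")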

Related

Azure app insights not working with Python console app

I am trying to set up Application Insights (AI) similarly to how it is done here.
I am using Python 3.9.13 and the following packages: opencensus==0.11.0, opencensus-ext-azure==1.1.7, opencensus-context==0.1.3.
My code looks something like this:
import logging
import time

from opencensus.ext.azure.log_exporter import AzureLogHandler

# create the logger
app_insights_logger = logging.getLogger(__name__)

# set the handler
app_insights_logger.addHandler(AzureLogHandler(
    connection_string='InstrumentationKey=00000000-0000-0000-0000-000000000000')
)

# set the logging level
app_insights_logger.setLevel(logging.INFO)

# this prints 'logging level = 20'
print('logging level = ', app_insights_logger.getEffectiveLevel())

# try to log an exception
try:
    result = 1 / 0
except Exception:
    app_insights_logger.exception('Captured a math exception.')
    app_insights_logger.handlers[0].flush()
    time.sleep(5)
However, the exception does not get logged. I tried adding the explicit flush as mentioned in this post.
Additionally, I tried adding the instrumentation key as mentioned in the docs; when that didn't work, I tried the entire connection string (the one with the ingestion key).
So:
How can I debug whether my app is indeed sending requests to Azure?
How can I check on the Azure portal whether it is a permission issue?
You can set the severity level before logging any kind of telemetry information to Application Insights.
Note: by default the root logger is configured with WARNING severity. If you want to log at other severities you have to set the level explicitly, e.g. logger.setLevel(logging.INFO).
I am using the code below to log telemetry information to Application Insights:
import logging

from opencensus.ext.azure.log_exporter import AzureLogHandler

AI_conn_string = '<Your AI Connection string>'
handler = AzureLogHandler(connection_string=AI_conn_string)

logger = logging.getLogger()
logger.addHandler(handler)

# by default the root logger is configured with WARNING severity
logger.warning('python console app warning log in AI')

# set the severity for information-level logging
logger.setLevel(logging.INFO)
logger.info('Test Information log')
logger.info('python console app information log in AI')

try:
    logger.warning('python console app Try block warning log in AI')
    result = 1 / 0
except Exception:
    logger.setLevel(logging.ERROR)
    logger.exception('python console app error log in AI')
Results:
Warning and Information log (screenshot)
Error log in AI (screenshot)
How can I debug if my app is indeed sending requests to Azure?
We cannot directly debug whether the telemetry data was sent to Application Insights or not, but we can watch the process as in the screenshot below.
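One client-side way to confirm that telemetry is actually leaving the app is to attach a telemetry processor to the handler; a minimal sketch, assuming opencensus-ext-azure's add_telemetry_processor hook and a placeholder connection string:

from opencensus.ext.azure.log_exporter import AzureLogHandler

handler = AzureLogHandler(connection_string='<Your AI Connection string>')

def debug_processor(envelope):
    # called for every telemetry item just before it is exported
    print('Exporting telemetry item:', envelope.name)
    return True  # returning False would drop the item

handler.add_telemetry_processor(debug_processor)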
How can I check on the Azure portal if it is a permission issue?
The instrumentation key and connection string already carry the permission to access the Application Insights resource.

psubscribe scheduled to be closed ASAP for overcoming of output buffer limits

This error message has been logged. It has only happened once in about two months. We have a pub/sub client listening for Redis events. What kind of problem could be causing it? What actions can we take to mitigate it?
Here is the detailed error:
Client id=599475 addr=10.250.0.50:44012 fd=1370 name= age=18039 idle=1 flags=N db=0 sub=0 psub=1 multi=-1 qbuf=0 qbuf-free=0 obl=0 oll=555 omem=9085669 events=rw cmd=psubscribe scheduled to be closed ASAP for overcoming of output buffer limits.
Redis is configured to publish events to pub/sub by setting notify-keyspace-events to "AKE". The (Python) code used is like this:
import time

from redis import StrictRedis

# ... (config provides REDIS_HOST and REDIS_PORT elsewhere)
redis: StrictRedis = StrictRedis(config.REDIS_HOST, config.REDIS_PORT)
pubsub = redis.pubsub()
# keyspace-notification event channels have the form __keyevent@<db>__:<event>
pubsub.psubscribe("__keyevent@0__:*")

PAUSE = True
while PAUSE:
    message = pubsub.get_message()
    if message:
        # .. do something
        pass
    time.sleep(0.001)
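For reference, the limit in that error is Redis's client-output-buffer-limit for pub/sub clients (default hard limit 32 MB, soft limit 8 MB over 60 seconds), which disconnects subscribers that cannot keep up. A sketch of inspecting and raising it with redis-py; the host, port and values here are illustrative, and the setting can also be changed in redis.conf:

from redis import StrictRedis

r = StrictRedis('localhost', 6379)  # placeholder host/port

# show the current limits, e.g. '... pubsub 33554432 8388608 60' by default
print(r.config_get('client-output-buffer-limit'))

# give slow pub/sub consumers more headroom before they are disconnected
r.config_set('client-output-buffer-limit', 'pubsub 64mb 16mb 120')

The other mitigation is on the consumer side: drain messages faster, for example by polling get_message() more often or by using the blocking pubsub.listen() generator, so the output buffer never grows that large.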

How to intercept messages about reboot or logoff?

I tried to find information on this, but only found the WM_QUERYENDSESSION function. How can I use this to intercept reboot / shutdown messages?
import win32gui, win32con

msg = win32gui.GetMessage(None, 0, 0)
if msg and msg.message == win32con.WM_QUERYENDSESSION:
    print('EXIT')
Here is an example of my code, but when I run it, it doesn't handle any actions and does not intercept shutdown messages.
According to the WM_QUERYENDSESSION documentation: "The WM_QUERYENDSESSION message is sent when the user chooses to end the session or when an application calls one of the system shutdown functions. A window receives this message through its WindowProc function."
So the message only takes effect when it is delivered to a window you have created and handled in that window's WindowProc function.
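As an illustration, here is a minimal sketch of that idea with pywin32: a hidden window whose window procedure handles WM_QUERYENDSESSION (the class and window names are placeholders):

import win32api, win32con, win32gui

def wnd_proc(hwnd, msg, wparam, lparam):
    if msg == win32con.WM_QUERYENDSESSION:
        print('Shutdown / logoff requested')
        return True  # allow the session to end
    if msg == win32con.WM_ENDSESSION:
        print('Session is ending')
        return 0
    return win32gui.DefWindowProc(hwnd, msg, wparam, lparam)

wc = win32gui.WNDCLASS()
wc.hInstance = win32api.GetModuleHandle(None)
wc.lpszClassName = 'ShutdownListener'  # placeholder class name
wc.lpfnWndProc = wnd_proc
class_atom = win32gui.RegisterClass(wc)

hwnd = win32gui.CreateWindow(class_atom, 'ShutdownListener', 0,
                             0, 0, 0, 0, 0, 0, wc.hInstance, None)

win32gui.PumpMessages()  # blocks and dispatches messages to wnd_proc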

Google Cloud Logging malfunctioning from cloud function

I am currently trying to deploy a Cloud Function triggered by Pub/Sub, written in Python. Previously we used loguru for logging; I am now switching to Cloud Logging. I thought it would be rather simple, but I am quite puzzled. Here is the code I deployed in a Cloud Function, just to try logging:
import base64
import logging

import google.cloud.logging as google_logging

def hello_pubsub(event, context):
    client = google_logging.Client()
    client.setup_logging()
    logging.debug("Starting function")
    logging.info("Hello")
    logging.warning("warning ! ")
    pubsub_message = base64.b64decode(event['data']).decode('utf-8')
    logging.info(pubsub_message)
    logging.error("Exit function")
I followed the documentation I could find on the subject (but the pages show various methods and are not very clear). Here is the result in the Logging interface (screenshot):
These are the "Global" logs. Two questions here: why are the debug logs not shown, even though I explicitly set the log level to "debug" in the interface? And why are the logs shown 1, 2 or 3 times, randomly?
Now I try to display the logs for my Cloud Function only (screenshot):
This is getting worse: now the logs are displayed up to 5 times (and not even the same number of times as in the "Global" tab), and the severity levels are all wrong (logging.info results in one info line and one error line; error and warning result in two error lines...).
I imagine I must be doing something wrong, but I can't see what, as what I am trying to do is fairly simple. Can somebody please help me? Thanks!
EDIT: I made the mistake of initializing the client inside the function, which explains why the logs were displayed more than once. One problem is left: the warnings are displayed as errors in the "Cloud Function" tab, but correctly in the "Global" tab. Does someone have an idea about this?
Try moving your setup outside of the function:
import base64
import logging

import google.cloud.logging as google_logging

client = google_logging.Client()
client.setup_logging()

def hello_pubsub(event, context):
    logging.debug("Starting function")
    logging.info("Hello")
    logging.warning("warning ! ")
    pubsub_message = base64.b64decode(event['data']).decode('utf-8')
    logging.info(pubsub_message)
    logging.error("Exit function")
As-is, you're adding new handlers on every request per instance.
You should use the "Integration with Python logging module" approach:
import base64
import logging

import google.cloud.logging  # don't conflict with standard logging
from google.cloud.logging.handlers import CloudLoggingHandler

client = google.cloud.logging.Client()
handler = CloudLoggingHandler(client)

cloud_logger = logging.getLogger('cloudLogger')
cloud_logger.setLevel(logging.INFO)  # defaults to WARN
cloud_logger.addHandler(handler)

def hello_pubsub(event, context):
    cloud_logger.debug("Starting function")
    cloud_logger.info("Hello")
    cloud_logger.warning("warning ! ")
    pubsub_message = base64.b64decode(event['data']).decode('utf-8')
    cloud_logger.info(pubsub_message)
    cloud_logger.error("Exit function")
    return 'OK', 200

Producing persistent message using stompest for python

I am not able to send a persistent message to an ActiveMQ queue using stompest and Python. I don't know what header to use.
Below is the source code:
from stompest.config import StompConfig
from stompest.sync import Stomp

CONFIG = StompConfig('tcp://localhost:61613')
QUEUE = '/queue/myQueue'

if __name__ == '__main__':
    try:
        client = Stomp(CONFIG)
        client.connect({'login': '#####', 'passcode': '#####'})
        for i in range(10):
            msg = "Test Message" + str(i)
            client.send(QUEUE, msg.encode())  # stompest expects a bytes body on Python 3
        client.disconnect()
    except Exception as e:
        print(e)
If you go persistent, you may also want to send your message in a transaction:

from stompest.protocol import StompSpec

with client.transaction(receipt='important') as transaction:
    client.send(QUEUE, b'test', {'persistent': 'true', StompSpec.TRANSACTION_HEADER: transaction})
This way, you can ensure all or none of a set of messages ends up on a queue. If there is an error raised within the transaction block, the message(s) won't be committed to the queue. The same goes for reading messages.
You have to change the send line to this:
client.send(QUEUE, msg.encode(), headers={'persistent': 'true'})
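Putting that header together with the loop from the question, a minimal sketch (client and QUEUE as defined in the question):

for i in range(10):
    msg = "Test Message " + str(i)
    # 'persistent': 'true' asks ActiveMQ to store the message so it survives a broker restart
    client.send(QUEUE, msg.encode(), headers={'persistent': 'true'})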
