Getting exceptions easily with Azure Application Insights using Python Flask

I tried the code below, from https://learn.microsoft.com/en-us/azure/azure-monitor/app/opencensus-python, for capturing an exception:
import logging

from opencensus.ext.azure.log_exporter import AzureLogHandler

logger = logging.getLogger(__name__)
# TODO: replace the all-zero GUID with your instrumentation key.
logger.addHandler(AzureLogHandler(
    connection_string='InstrumentationKey=00000000-0000-0000-0000-000000000000')
)
properties = {'custom_dimensions': {'key_1': 'value_1', 'key_2': 'value_2'}}

# Use properties in exception logs
try:
    result = 1 / 0  # generate a ZeroDivisionError
except Exception:
    logger.exception('Captured an exception.', extra=properties)
This works, and I can capture the exception.
However, is there an easy way to capture exceptions automatically in a Python Flask app?
When I try the code below, it only gives me a request record, not the exception.
app = Flask(__name__)
app.logger.addHandler(file_handler)

handler = AzureEventHandler(
    connection_string="InstrumentationKey={}".format(app.config['APPINSIGHTS_INSTRUMENTATIONKEY']))
handler.setFormatter(logging.Formatter(
    '%(asctime)s %(levelname)s: %(message)s '
    '[in %(pathname)s:%(lineno)d]'))
handler.setLevel(logging.ERROR)
app.logger.addHandler(handler)
Thank you for your help.

Your code should look like the sample below. For more details, you can check the official sample code:
Flask "To-Do" Sample Application
import logging
import sys

from flask import Flask

sys.path.append('..')
from config import Config
from flask_sqlalchemy import SQLAlchemy
from opencensus.ext.azure import metrics_exporter
from opencensus.ext.azure.log_exporter import AzureLogHandler
from opencensus.ext.flask.flask_middleware import FlaskMiddleware
from opencensus.trace import config_integration

logger = logging.getLogger(__name__)

app = Flask(__name__)
app.config.from_object(Config)
db = SQLAlchemy(app)

# Import here to avoid circular imports
from app import routes  # noqa isort:skip

# Trace integrations for the sqlalchemy library
config_integration.trace_integrations(['sqlalchemy'])

# Trace integrations for the requests library
config_integration.trace_integrations(['requests'])

# FlaskMiddleware will track requests for the Flask application and send
# request/dependency telemetry to Azure Monitor
middleware = FlaskMiddleware(app)

# Processor function for changing the role name of the app
def callback_function(envelope):
    envelope.tags['ai.cloud.role'] = "To-Do App"
    return True

# Adds the telemetry processor to the trace exporter
middleware.exporter.add_telemetry_processor(callback_function)

# Exporter for metrics, will send metrics data
exporter = metrics_exporter.new_metrics_exporter(
    enable_standard_metrics=False,
    connection_string='InstrumentationKey=' + Config.INSTRUMENTATION_KEY)

# Exporter for logs, will send logging data
logger.addHandler(
    AzureLogHandler(
        connection_string='InstrumentationKey=' + Config.INSTRUMENTATION_KEY
    )
)

if __name__ == '__main__':
    app.run(host='localhost', port=5000, threaded=True, debug=True)
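As a follow-up, here is a minimal sketch (the /fail route and the all-zero instrumentation key are placeholders) of how exceptions can reach Application Insights with this setup: FlaskMiddleware with an AzureExporter records the failed request, while an AzureLogHandler attached to app.logger turns logger.exception(...) calls into exception telemetry, just like the first snippet in the question.
import logging

from flask import Flask
from opencensus.ext.azure.log_exporter import AzureLogHandler
from opencensus.ext.azure.trace_exporter import AzureExporter
from opencensus.ext.flask.flask_middleware import FlaskMiddleware
from opencensus.trace.samplers import ProbabilitySampler

app = Flask(__name__)

# Tracks every request (including failed ones) and sends request telemetry
# to Azure Monitor.
middleware = FlaskMiddleware(
    app,
    exporter=AzureExporter(connection_string='InstrumentationKey=00000000-0000-0000-0000-000000000000'),
    sampler=ProbabilitySampler(rate=1.0),
)

# Any logger.exception(...) call on app.logger becomes exception telemetry.
app.logger.addHandler(AzureLogHandler(
    connection_string='InstrumentationKey=00000000-0000-0000-0000-000000000000'))

@app.route('/fail')  # hypothetical route for illustration
def fail():
    try:
        return str(1 / 0)  # raises ZeroDivisionError
    except Exception:
        app.logger.exception('Captured an exception in the /fail route.')
        return 'error', 500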

Related

Logging in Python with Flask in multiprocessing WSGI server

I have a web app running on CloudLinux with a LiteSpeed server.
Logging was working fine, but it recently started to garble the log files, and most of the logged messages are lost when the log file is rotated at midnight. The information I got is that this is a multiprocessing-related issue.
Here is my main app (app.py):
from log_module import logger
from flask import Flask

app = Flask(__name__)

@app.route('/')
def index():
    logger.debug('User arrived at index page')
    logger.critical('Wow, we have a visitor!')
    return 'Index page'

if __name__ == '__main__':
    app.run()
The logging code is in a separate module; it logs DEBUG and WARNING to separate files, and each log file is rotated at midnight UTC, creating a single file per day for DEBUG and another for WARNING (log_module.py):
import logging
import logging.handlers as handlers
import time

logger = logging.getLogger()
logger.setLevel(logging.DEBUG)

logging.Formatter.converter = time.gmtime
formatter = logging.Formatter('[%(asctime)s] %(name)-30s %(levelname)-8s %(message)s [%(funcName)s:%(lineno)d]')

logHandler = handlers.TimedRotatingFileHandler('logs/UTC_time__DEBUG.log',
                                               encoding='utf-8',
                                               when='midnight',
                                               interval=1,
                                               backupCount=0,
                                               utc=True)
logHandler.setLevel(logging.DEBUG)
logHandler.setFormatter(formatter)
logHandler.suffix = "%Y-%m-%d__%H_%M_%S.log"
logHandler.encoding = 'utf-8'

errorLogHandler = handlers.TimedRotatingFileHandler('logs/UTC_time__WARNING.log',
                                                    encoding='utf-8',
                                                    when='midnight',
                                                    interval=1,
                                                    backupCount=0,
                                                    utc=True)
errorLogHandler.setLevel(logging.WARNING)
errorLogHandler.setFormatter(formatter)
errorLogHandler.suffix = "%Y-%m-%d__%H_%M_%S.log"
errorLogHandler.encoding = 'utf-8'

logger.addHandler(logHandler)
logger.addHandler(errorLogHandler)
How can I adapt my code to make it work properly in a multiprocessing environment, that is, on a server with multiple WSGI "workers"?
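There is no accepted answer in this thread, but one common pattern for multi-worker WSGI logging, sketched here with a placeholder host and the default logging port, is to let a single separate listener process own the rotating file handlers and have every worker forward records to it over a socket (the approach from the Python logging cookbook):
# log_module.py -- sketch: each worker only forwards records over TCP;
# a single listener process (not shown here) owns the TimedRotatingFileHandler
# and performs the midnight rotation, so no two processes write the same file.
import logging
import logging.handlers as handlers

logger = logging.getLogger()
logger.setLevel(logging.DEBUG)

# 'localhost' and the default port are assumptions; point this at the
# machine and port where the listener process runs.
socket_handler = handlers.SocketHandler('localhost',
                                        handlers.DEFAULT_TCP_LOGGING_PORT)
logger.addHandler(socket_handler)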

Celery task log using google-cloud-logging

I'm currently managing my API using Celery tasks and a Kubernetes cluster on Google Cloud Platform.
Celery automatically logs the input and output of each task. This is something I want, but I would like to use google-cloud-logging's ability to log the input and output as a jsonPayload.
For all other logs I use the following:
from google.cloud.logging.handlers import CloudLoggingHandler
from google.cloud.logging_v2.handlers import setup_logging
# Imports the Cloud Logging client library
import google.cloud.logging
# Instantiates a client
client = google.cloud.logging.Client()
handler = CloudLoggingHandler(client)
setup_logging(handler)
import logging
logger = logging.getLogger(__name__)
data_dict = {"my": "data"}
logger.info("this is an example", extra={"json_fields": data_dict})
And I use Celery with the following template:
app = Celery(**my_params)

@app.task
def task_test(data):
    # Update dictionary with new data
    data["key1"] = "value1"
    return data

...

detection_task = celery.signature('tasks.task_test', args=([[{"hello": "world"}]]))
r = detection_task.apply_async()
data = r.get()
Here's an example of a log I receive from Celery (a screenshot, not reproduced here):
the blurred part corresponds to the dict/JSON I would like to have in a jsonPayload instead of a textPayload.
(Also note that this log is marked as ERROR on GCP but as INFO by Celery.)
Any idea how I could connect Python's built-in logging, the Celery logger, and the GCP logger?
To connect your Python logger with GCP Logger:
import logging
import google.cloud.logging
from google.cloud.logging.handlers import CloudLoggingHandler, setup_logging
client = google.cloud.logging.Client()
handler = CloudLoggingHandler(client, name="your_log_name")
cloud_logger = logging.getLogger('cloudLogger')
# configure cloud_logger
cloud_logger.addHandler(handler)
To connect this logger to Celery's logger:
def initialize_log(logger=None, loglevel=logging.DEBUG, **kwargs):
    logger = logging.getLogger('celery')
    handler = CloudLoggingHandler(client, name="your_log_name")  # the GCP handler defined above
    handler.setLevel(loglevel)
    logger.addHandler(handler)
    return logger

from celery.signals import after_setup_task_logger
after_setup_task_logger.connect(initialize_log)

from celery.signals import after_setup_logger
after_setup_logger.connect(initialize_log)
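A minimal usage sketch (the task body and field names are hypothetical) of how a task can then emit structured entries through the same json_fields extra used in the question, so that the payload arrives as jsonPayload rather than textPayload:
import logging

from celery import Celery

app = Celery('tasks')  # broker/backend configuration omitted

@app.task
def task_test(data):
    logger = logging.getLogger('celery')
    # With the CloudLoggingHandler attached via the signals above,
    # the "json_fields" extra becomes the entry's jsonPayload.
    logger.info("task input", extra={"json_fields": {"input": data}})
    data["key1"] = "value1"
    logger.info("task output", extra={"json_fields": {"output": data}})
    return data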

How to use pymongo.monitoring in a Flask project with mongoengine

I am trying to add some monitoring to a simple REST web service built with Flask and mongoengine, and I have come across what I think is a gap in my understanding of how imports and mongoengine work in Flask applications.
I'm following pymongo's documentation on monitoring : https://pymongo.readthedocs.io/en/3.7.2/api/pymongo/monitoring.html
I defined the following CommandListener in a separate file:
import logging

from pymongo import monitoring

log = logging.getLogger('my_logger')

class CommandLogger(monitoring.CommandListener):
    def started(self, event):
        log.debug("Command {0.command_name} with request id "
                  "{0.request_id} started on server "
                  "{0.connection_id}".format(event))

monitoring.register(CommandLogger())
I made an application_builder.py file to create my Flask app; the code looks something like this:
from flask_restful import Api
from flask import Flask
from command_logger import CommandLogger  # <----
from db import initialize_db
from routes import initialize_routes

def create_app():
    app = Flask(__name__)
    api = Api(app)
    initialize_db(app)
    initialize_routes(api)
    return app
The monitoring only seems to work if I import CommandLogger in application_builder.py. I'd like to understand what is going on here: how does the import affect the monitoring registration?
I'd also like to extract monitoring.register(CommandLogger()) into a function and call it at a later stage in my code, something like def register(): monitoring.register(CommandLogger()).
But this doesn't seem to work; registration only works when it is in the same file as the CommandLogger class...
From MongoEngine's docs, it seems important that the listener gets registered before connecting MongoEngine:
To use pymongo.monitoring with MongoEngine, you need to make sure that
you are registering the listeners before establishing the database
connection (i.e calling connect)
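A minimal sketch of that ordering (the database name is a placeholder, and CommandLogger is the listener class from the question):
from mongoengine import connect
from pymongo import monitoring

from command_logger import CommandLogger  # the listener class from the question

# Register listeners first...
monitoring.register(CommandLogger())

# ...and only then establish the MongoEngine connection
connect('my_database')  # placeholder database name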
This worked for me. I'm just initializing/registering it the same way as I did other modules to avoid circular imports.
# admin/logger.py
import logging

from pymongo import monitoring

log = logging.getLogger()
log.setLevel(logging.DEBUG)
logging.basicConfig(level=logging.DEBUG)

class CommandLogger(monitoring.CommandListener):
    # def methods...

class ServerLogger(monitoring.ServerListener):
    # def methods

class HeartbeatLogger(monitoring.ServerHeartbeatListener):
    # def methods

def initialize_logger():
    monitoring.register(CommandLogger())
    monitoring.register(ServerLogger())
    monitoring.register(HeartbeatLogger())
    monitoring.register(TopologyLogger())

# /app.py
from flask import Flask
from admin.toolbar import initialize_debugtoolbar
from admin.admin import initialize_admin
from admin.views import initialize_views
from admin.logger import initialize_logger
from database.db import initialize_db
from flask_restful import Api
from resources.errors import errors

app = Flask(__name__)

# imports requiring app
from resources.routes import initialize_routes

api = Api(app, errors=errors)

# Logger before db
initialize_logger()

# Database and Routes
initialize_db(app)
initialize_routes(api)

# Admin and Development
initialize_admin(app)
initialize_views()
initialize_debugtoolbar(app)

# /run.py
from app import app

app.run(debug=True)
Then, in any module:
from admin.logger import log
from db.models import User
# inside some class/view/queryset or however your objects are written...
log.info('Saving an item through MongoEngine...')
User(name='Foo').save()
What I'm trying to figure out now is how to integrate Flask-DebugToolbar's Logging panel with the monitoring messages from these listeners...

Unable to log messages while running a Flask app

I am trying to tie Flask's logger to a FileHandler so I can save custom log messages to a file. Any time I try to log a message when a POST request hits /foo, nothing happens at all. How can I work around this?
import logging
from logging import FileHandler
from time import strftime

from flask import Flask, Response

app = Flask(__name__)

@app.route('/foo', methods=['POST'])
def bar():
    app.logger.info('post request')
    ...
    return Response(), 200

if __name__ == "__main__":
    file_date = strftime('%d_%m_%Y')
    handler = FileHandler(f'./log/{file_date}.log')
    handler.setLevel(logging.INFO)
    app.logger.setLevel(logging.INFO)
    app.logger.addHandler(handler)
    app.run()
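This question has no answer in the thread, but one common cause worth checking (an assumption on my part, not confirmed by the asker) is that the handler is only configured inside the if __name__ == "__main__": block, which never runs when the app is served by flask run or a WSGI server. A sketch that configures the handler at module level instead:
import logging
from logging import FileHandler
from time import strftime

from flask import Flask, Response

app = Flask(__name__)

# Configure the handler at import time so it is attached no matter how
# the app is launched (python app.py, flask run, gunicorn, ...).
file_date = strftime('%d_%m_%Y')
handler = FileHandler(f'./log/{file_date}.log')
handler.setLevel(logging.INFO)
app.logger.setLevel(logging.INFO)
app.logger.addHandler(handler)

@app.route('/foo', methods=['POST'])
def bar():
    app.logger.info('post request')
    return Response(), 200

if __name__ == "__main__":
    app.run()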

How to log to socket in Flask

The rest of my program logs through a SocketHandler, which is consumed using http://code.activestate.com/recipes/577025-loggingwebmonitor-a-central-logging-server-and-mon/.
How do I set up my Flask app to log to it as well?
I tried the following, based on the docs (http://flask.pocoo.org/docs/errorhandling/#logging-to-a-file):
class HelloWorld(Resource):
    def get(self):
        resp = "HELLO!"
        app.logger.info("GET HelloWorld %s", resp)
        return resp

if __name__ == '__main__':
    import logging
    import logging.handlers  # you NEED this line

    socketHandler = logging.handlers.SocketHandler('localhost',
                                                   logging.handlers.DEFAULT_TCP_LOGGING_PORT)
    socketHandler.setLevel(logging.INFO)
    app.logger.addHandler(socketHandler)
    app.run('0.0.0.0')
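This one also has no answer here; one detail worth checking (an assumption, not confirmed in the thread) is that app.logger's own level admits INFO records, since setting the level on the handler alone is not enough. A sketch, using a plain Flask route instead of the flask_restful resource for brevity:
import logging
import logging.handlers

from flask import Flask

app = Flask(__name__)

socketHandler = logging.handlers.SocketHandler('localhost',
                                               logging.handlers.DEFAULT_TCP_LOGGING_PORT)
socketHandler.setLevel(logging.INFO)

# The logger itself must accept INFO records, not just the handler.
app.logger.setLevel(logging.INFO)
app.logger.addHandler(socketHandler)

@app.route('/')
def index():
    app.logger.info('GET index')
    return 'HELLO!'

if __name__ == '__main__':
    app.run('0.0.0.0')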
