I am trying to tie Flask's logger to a FileHandler so I can save custom log messages to a file. But any time I try to log a message when a POST request hits /foo, nothing happens at all. How can I work around this?
import logging
from logging import FileHandler
from time import strftime
from flask import Flask, Response
app = Flask(__name__)
@app.route('/foo', methods=['POST'])
def bar():
    app.logger.info('post request')
    ...
    return Response(), 200

if __name__ == "__main__":
    file_date = strftime('%d_%m_%Y')
    handler = FileHandler(f'./log/{file_date}.log')
    handler.setLevel(logging.INFO)
    app.logger.setLevel(logging.INFO)
    app.logger.addHandler(handler)
    app.run()
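Two things commonly make this kind of setup fail silently: the logger's effective level defaults to WARNING, so INFO records are dropped before any handler sees them, and FileHandler will not create the ./log/ directory for you. A stdlib-only sketch of the working mechanics (paths here are illustrative):

```python
import logging
import os
import tempfile
from logging import FileHandler

# FileHandler raises FileNotFoundError if the directory does not exist,
# so create the log directory (here: a temp dir) first.
log_dir = os.path.join(tempfile.gettempdir(), "flask_log_demo")
os.makedirs(log_dir, exist_ok=True)
log_path = os.path.join(log_dir, "app.log")

logger = logging.getLogger("demo")
logger.setLevel(logging.INFO)  # without this, the default WARNING level silently drops INFO
handler = FileHandler(log_path, mode="w")
handler.setLevel(logging.INFO)
logger.addHandler(handler)

logger.info("post request")
handler.flush()

with open(log_path) as f:
    print("post request" in f.read())  # True
```

The same two checks apply to app.logger: call app.logger.setLevel(logging.INFO) and make sure the target directory exists before the handler is constructed.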
Related
I have a web app running on CloudLinux with a LiteSpeed server.
Logging was working fine, but it recently started to garble the log files, losing most logged messages when the log file is rotated at midnight. The information I got is that this is a multiprocessing-related issue.
Here is my main app (app.py):
from log_module import logger
from flask import Flask
app = Flask(__name__)
@app.route('/')
def index():
    logger.debug('User arrived at index page')
    logger.critical('Wow, we have a visitor!')
    return 'Index page'

if __name__ == '__main__':
    app.run()
The logging code is in a separate module. It logs DEBUG and WARNING to separate files, and each log file is rotated at midnight UTC, creating one file per day for DEBUG and another for WARNING (log_module.py):
import logging
import logging.handlers as handlers
import time
logger = logging.getLogger()
logger.setLevel(logging.DEBUG)
logging.Formatter.converter = time.gmtime
formatter = logging.Formatter('[%(asctime)s] %(name)-30s %(levelname)-8s %(message)s [%(funcName)s:%(lineno)d]')
logHandler = handlers.TimedRotatingFileHandler('logs/UTC_time__DEBUG.log',
                                               encoding='utf-8',
                                               when='midnight',
                                               interval=1,
                                               backupCount=0,
                                               utc=True)
logHandler.setLevel(logging.DEBUG)
logHandler.setFormatter(formatter)
logHandler.suffix = "%Y-%m-%d__%H_%M_%S.log"
logHandler.encoding = 'utf-8'
errorLogHandler = handlers.TimedRotatingFileHandler('logs/UTC_time__WARNING.log',
                                                    encoding='utf-8',
                                                    when='midnight',
                                                    interval=1,
                                                    backupCount=0,
                                                    utc=True)
errorLogHandler.setLevel(logging.WARNING)
errorLogHandler.setFormatter(formatter)
errorLogHandler.suffix = "%Y-%m-%d__%H_%M_%S.log"
errorLogHandler.encoding = 'utf-8'
logger.addHandler(logHandler)
logger.addHandler(errorLogHandler)
How can I adapt my code to work properly in a multiprocessing environment, i.e. on a server with multiple WSGI workers?
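TimedRotatingFileHandler is not safe with multiple processes: each worker rotates the file independently, clobbering the others. One standard fix is to funnel every worker's records through a queue to a single listener that alone owns the rotating handler, so rotation happens in exactly one place. A minimal sketch of the QueueHandler/QueueListener pattern (a MemoryHandler stands in for TimedRotatingFileHandler so the example is self-contained; in a real multi-worker deployment you would use a multiprocessing.Queue and run the listener in one dedicated process):

```python
import logging
import logging.handlers
import queue

# Workers only enqueue records; they never touch the log file.
log_queue = queue.Queue()  # use multiprocessing.Queue() across real worker processes

# The listener is the single writer, so rotation happens in exactly one place.
target = logging.handlers.MemoryHandler(capacity=100)  # stand-in for TimedRotatingFileHandler
listener = logging.handlers.QueueListener(log_queue, target)
listener.start()

worker_logger = logging.getLogger("worker")
worker_logger.setLevel(logging.DEBUG)
worker_logger.addHandler(logging.handlers.QueueHandler(log_queue))

worker_logger.debug("message from a worker")
listener.stop()  # drains the queue and joins the listener thread

print(len(target.buffer))  # 1
```

Alternatively, a SocketHandler pointed at a single log-server process (as in the last question below) achieves the same single-writer property.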
I tried the code below, from https://learn.microsoft.com/en-us/azure/azure-monitor/app/opencensus-python, for capturing exceptions:
from opencensus.ext.azure.log_exporter import AzureLogHandler
logger = logging.getLogger(__name__)
# TODO: replace the all-zero GUID with your instrumentation key.
logger.addHandler(AzureLogHandler(
    connection_string='InstrumentationKey=00000000-0000-0000-0000-000000000000')
)

properties = {'custom_dimensions': {'key_1': 'value_1', 'key_2': 'value_2'}}

# Use properties in exception logs
try:
    result = 1 / 0  # generate a ZeroDivisionError
except Exception:
    logger.exception('Captured an exception.', extra=properties)
This works; I can capture the exception.
However, is there an easy way to capture exceptions automatically in a Python Flask app?
When I try the code below, it only gives me a request record, not the exception.
app = Flask(__name__)
app.logger.addHandler(file_handler)
handler = AzureEventHandler(
    connection_string="InstrumentationKey={}".format(app.config['APPINSIGHTS_INSTRUMENTATIONKEY']))
handler.setFormatter(logging.Formatter(
    '%(asctime)s %(levelname)s: %(message)s '
    '[in %(pathname)s:%(lineno)d]'))
handler.setLevel(logging.ERROR)
app.logger.addHandler(handler)
Thank you for helping
Your code should look like the example below. For more details, you can check the official sample code:
Flask "To-Do" Sample Application
import logging
import sys
from flask import Flask
sys.path.append('..')
from config import Config
from flask_sqlalchemy import SQLAlchemy
from opencensus.ext.azure import metrics_exporter
from opencensus.ext.azure.log_exporter import AzureLogHandler
from opencensus.ext.flask.flask_middleware import FlaskMiddleware
from opencensus.trace import config_integration
logger = logging.getLogger(__name__)
app = Flask(__name__)
app.config.from_object(Config)
db = SQLAlchemy(app)
# Import here to avoid circular imports
from app import routes # noqa isort:skip
# Trace integrations for sqlalchemy library
config_integration.trace_integrations(['sqlalchemy'])
# Trace integrations for requests library
config_integration.trace_integrations(['requests'])
# FlaskMiddleware will track requests for the Flask application and send
# request/dependency telemetry to Azure Monitor
middleware = FlaskMiddleware(app)
# Processor function for changing the role name of the app
def callback_function(envelope):
    envelope.tags['ai.cloud.role'] = "To-Do App"
    return True
# Adds the telemetry processor to the trace exporter
middleware.exporter.add_telemetry_processor(callback_function)
# Exporter for metrics, will send metrics data
exporter = metrics_exporter.new_metrics_exporter(
    enable_standard_metrics=False,
    connection_string='InstrumentationKey=' + Config.INSTRUMENTATION_KEY)
# Exporter for logs, will send logging data
logger.addHandler(
    AzureLogHandler(
        connection_string='InstrumentationKey=' + Config.INSTRUMENTATION_KEY
    )
)
if __name__ == '__main__':
    app.run(host='localhost', port=5000, threaded=True, debug=True)
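On the "capture exceptions automatically" part: Flask's own hooks (`@app.errorhandler(Exception)`, the `got_request_exception` signal, or the FlaskMiddleware used above) are the idiomatic answers. The underlying pattern is simply a wrapper that logs the exception and re-raises, applied to every view. A framework-free sketch of that pattern (names here are illustrative):

```python
import functools
import logging

logging.basicConfig(level=logging.ERROR)
logger = logging.getLogger("exception_demo")
captured = []  # records which views raised, for demonstration

def log_exceptions(view):
    """Decorator: log any exception escaping the view, then re-raise it."""
    @functools.wraps(view)
    def wrapper(*args, **kwargs):
        try:
            return view(*args, **kwargs)
        except Exception:
            logger.exception("Unhandled exception in %s", view.__name__)
            captured.append(view.__name__)
            raise  # let the framework's normal error handling continue
    return wrapper

@log_exceptions
def broken_view():
    return 1 / 0

try:
    broken_view()
except ZeroDivisionError:
    pass

print(captured)  # ['broken_view']
```

With an AzureLogHandler attached to the logger, each `logger.exception(...)` call would be exported as an exception telemetry record rather than a plain request record.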
I am new to Python. I have a POST API in Python, deployed on AWS using the steps from the link below.
https://www.digitalocean.com/community/tutorials/how-to-serve-flask-applications-with-uwsgi-and-nginx-on-ubuntu-16-04
The API is giving an internal server error and the logs are not getting printed. Is there a directory I'm missing, or can I print the logs some other way?
Also, could someone help with what the possible issue with the API might be?
#!/usr/bin/env python2.7
#!flask/bin/python
#!/usr/bin/python
import numpy
import sys,os
import codecs
import json
import phonetics
import transliteration
import jellyfish
import traceback
from pyjarowinkler import distance
from flask import Flask, jsonify, request
import panphon.distance
from datetime import datetime
from flask_cors import CORS
def obj_dict(obj):
    return obj.__dict__
# logFile = open('log.txt','a')
app = Flask(__name__)
cors = CORS(app, resources={r"/todo/api/*": {"origins": "*"}})
@app.route('/todo/api/v1.0/tasks', methods=['GET''POST'])
def get_tasks():
    if request.method == 'GET':
        return "done with get request"
    elif request.method == 'POST':
        try:
            content = request.get_json()
            with codecs.open('words.txt', 'a') as file:
                for line in content['words']:
                    file.write("\n" + line.encode('utf-8'))
        except Exception:
            with open('log.txt', 'a') as logger:
                logger.write("At " + str(datetime.now()) + " " + traceback.format_exc())
                logger.write("\n\n")
    return "done"

if __name__ == '__main__':
    import logging
    logging.basicConfig(filename='log.txt', level=logging.DEBUG)
    app.run(host='0.0.0.0', debug=False)
NOTE: The other imports are for some other purpose. I need the POST API to work.
Try setting debug = True
For example
if __name__ == '__main__':
    import logging
    logging.basicConfig(filename='log.txt', level=logging.DEBUG)
    app.run(host='0.0.0.0', debug=True)
Also, you don't need to import logging, since you can use Flask's logger, which is accessible as app.logger.
If this is production, debug=True is not recommended.
Flask considers ERROR the default logging level if you do not set one,
so you might try setting the log level with app.logger.setLevel(logging.DEBUG):
@app.route('/todo/api/v1.0/tasks', methods=['GET', 'POST'])
def get_tasks():
    if request.method == 'GET':
        return "done with get request"
    elif request.method == 'POST':
        try:
            content = json.loads(request.get_data())
            with codecs.open('words.txt', 'a') as file:
                for line in content['words']:
                    file.write("\n" + line.encode('utf-8'))
        except Exception:
            with open('log.txt', 'a') as logger:
                logger.write("At " + str(datetime.now()) + " " + traceback.format_exc())
                logger.write("\n\n")
    return "done"
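The level point can be checked with the stdlib alone: a logger whose level is left unset inherits an effective level from its ancestors (WARNING for the root default), so DEBUG and INFO records are dropped before any handler sees them. A small sketch (the ListHandler is a test helper for this example, not part of Flask):

```python
import logging

records = []

class ListHandler(logging.Handler):
    """Collects formatted messages in a list so the effect is observable."""
    def emit(self, record):
        records.append(record.getMessage())

logger = logging.getLogger("level_demo")
logger.addHandler(ListHandler(level=logging.DEBUG))

logger.debug("dropped")        # effective level is still WARNING: filtered at the logger
logger.setLevel(logging.DEBUG)
logger.debug("kept")           # now it reaches the handler

print(records)  # ['kept']
```

The same applies to app.logger: both the logger's level and each handler's level must admit the record.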
I am attempting to get an email sent to me any time an error occurs in my Flask application. The email is not being sent despite the handler being registered. I used smtplib to verify that my SMTP login details are correct. The error is displayed in Werkzeug's debugger, but no emails are sent. How do I log exceptions that occur in my app?
import logging
from logging.handlers import SMTPHandler
from flask import Flask

app = Flask(__name__)
app.debug = True
app.config['PROPAGATE_EXCEPTIONS'] = True

if app.debug:
    logging.basicConfig(level=logging.INFO)

# all of the $ names have actual values
handler = SMTPHandler(
    mailhost='smtp.mailgun.org',
    fromaddr='Application Bug Reporter <$mailgun_email_here>',
    toaddrs=['$personal_email_here'],
    subject='Test Application Logging Email',
    credentials=('$mailgun_email_here', '$mailgun_password_here')
)
handler.setLevel(logging.ERROR)
app.logger.addHandler(handler)

@app.route('/')
def index():
    raise Exception('Hello, World!')  # should trigger an email

app.run()
The issue was which logger the handler was added to. Flask uses the werkzeug logger to log exceptions during view functions, not the base app.logger. I had to register my handler with the werkzeug logger:
logging.getLogger('werkzeug').addHandler(handler)
In addition, I had to include the port in mailhost:
handler = SMTPHandler(
    mailhost=('smtp.mailgun.org', 587),
    fromaddr='Application Bug Reporter <$mailgun_email_here>',
    toaddrs=['$personal_email_here'],
    subject='Test Application Logging Email',
    credentials=('$mailgun_email_here', '$mailgun_password_here')
)
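The fix works because a handler only sees records that pass through the logger (or a descendant of the logger) it is attached to; the request-time exceptions here travel through the 'werkzeug' logger, not through app.logger. A stdlib sketch of that behaviour (logger names are illustrative, and the ListHandler is a test helper for this example):

```python
import logging

seen = []

class ListHandler(logging.Handler):
    """Records (logger name, message) pairs so handler routing is visible."""
    def emit(self, record):
        seen.append((record.name, record.getMessage()))

handler = ListHandler(level=logging.ERROR)

# Attaching the handler to 'myapp' alone...
logging.getLogger("myapp").addHandler(handler)

# ...means records logged through the separate 'werkzeug' logger bypass it entirely.
logging.getLogger("werkzeug").error("exception during request")
logging.getLogger("myapp").error("explicit app log")

print(seen)  # [('myapp', 'explicit app log')]
```

Attaching the handler to the root logger instead would catch both, at the cost of also receiving records from every other library.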
I have the rest of my program logging via a SocketHandler, which is consumed by the central logging server from http://code.activestate.com/recipes/577025-loggingwebmonitor-a-central-logging-server-and-mon/.
How do I set up my Flask app to log to it?
I tried the following, based on the docs (http://flask.pocoo.org/docs/errorhandling/#logging-to-a-file):
class HelloWorld(Resource):
    def get(self):
        resp = "HELLO!"
        app.logger.info("GET HelloWorld %s", resp)
        return resp

if __name__ == '__main__':
    import logging
    import logging.handlers  # you NEED this line
    socketHandler = logging.handlers.SocketHandler(
        'localhost', logging.handlers.DEFAULT_TCP_LOGGING_PORT)
    socketHandler.setLevel(logging.INFO)
    app.logger.addHandler(socketHandler)
    app.run('0.0.0.0')
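For reference, the recipe's monitor just reads what SocketHandler emits on the wire: a 4-byte big-endian length prefix followed by a pickled LogRecord dict. A self-contained sketch of both ends, useful for verifying the handler works before pointing it at the real monitor (the dynamic port and logger name are illustrative; the recipe listens on DEFAULT_TCP_LOGGING_PORT instead):

```python
import logging
import logging.handlers
import pickle
import socketserver
import struct
import threading
import time

received = []

class LogRecordStreamHandler(socketserver.StreamRequestHandler):
    """Receives SocketHandler's wire format: 4-byte length, then a pickled record dict."""
    def handle(self):
        size = struct.unpack(">L", self.rfile.read(4))[0]
        payload = pickle.loads(self.rfile.read(size))
        received.append(logging.makeLogRecord(payload))

# Port 0 lets the OS pick a free port, standing in for DEFAULT_TCP_LOGGING_PORT.
server = socketserver.TCPServer(("localhost", 0), LogRecordStreamHandler)
threading.Thread(target=server.handle_request, daemon=True).start()

logger = logging.getLogger("socket_demo")
logger.setLevel(logging.INFO)
logger.addHandler(logging.handlers.SocketHandler("localhost", server.server_address[1]))

logger.info("GET HelloWorld %s", "HELLO!")

# handle_request serves exactly one connection; give the server thread a moment.
for _ in range(50):
    if received:
        break
    time.sleep(0.1)
server.server_close()

print(received[0].getMessage())  # GET HelloWorld HELLO!
```

If records from the Flask app still don't appear, check app.logger's level as well as the handler's: both must admit the record before it is ever sent over the socket.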