How to set up logging with Cherrypy? - python

I am trying to set up logging with CherryPy on my OpenShift Python 3.3 app. The 'appserver.log' file only updates until the actual server starts; after that, nothing gets added to the log file. I have read and followed (as far as I know) the documentation at the links below. Still no logging.
CherryPy server errors log
http://docs.cherrypy.org/dev/refman/_cplogging.html
My python code snippet:
import os

def run_cherrypy_server(app, ip, port=8080):
    from cherrypy import wsgiserver
    from cherrypy import config

    # log.screen: Set this to True to have both "error" and "access" messages printed to stdout.
    # log.access_file: Set this to an absolute filename where you want "access" messages written.
    # log.error_file: Set this to an absolute filename where you want "error" messages written.
    appserver_error_log = os.path.join(os.environ['OPENSHIFT_HOMEDIR'], 'python', 'logs', 'appserver_error.log')
    appserver_access_log = os.path.join(os.environ['OPENSHIFT_HOMEDIR'], 'python', 'logs', 'appserver_access.log')
    config.update({
        'log.screen': True,
        'log.error_file': appserver_error_log,
        'log.access_file': appserver_access_log
    })

    server = wsgiserver.CherryPyWSGIServer(
        (ip, port), app, server_name='www.cherrypy.example')
    server.start()
The 'appserver_error.log' and 'appserver_access.log' files actually get created in the proper OpenShift python directory. However, neither file ever receives any log entries.
Everything runs fine, but there is no logging.
Any ideas what I am doing wrong?

The WSGI server itself does not do any logging. The CherryPy engine (which controls process startup and shutdown) writes to the "error" log, and only CherryPy applications (that use CherryPy's Request and Response objects) write to the access log. If you're passing your own WSGI application, you'll have to do your own logging.
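If you need request logging for a bare WSGI app served this way, one option is to wrap it in a small logging middleware. The sketch below is plain WSGI, not a CherryPy API; the logger name, format, and file path are illustrative only:
import logging

access_logger = logging.getLogger('wsgi.access')          # illustrative name
access_logger.setLevel(logging.INFO)
access_logger.addHandler(logging.FileHandler('appserver_access.log'))

def logging_middleware(app):
    def wrapped(environ, start_response):
        def logged_start_response(status, headers, exc_info=None):
            # log method, path and response status for every request
            access_logger.info('%s %s -> %s',
                               environ.get('REQUEST_METHOD'),
                               environ.get('PATH_INFO'),
                               status)
            return start_response(status, headers, exc_info)
        return app(environ, logged_start_response)
    return wrapped

# then serve the wrapped app instead of the bare one:
# server = wsgiserver.CherryPyWSGIServer((ip, port), logging_middleware(app))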

Related

Flask API Not Writing Out Logs on Elastic Beanstalk

I have a Flask API and I am attempting to implement logging. Everything works fine on my local machine, but nothing seems to work when deployed on AWS Elastic Beanstalk.
I have a file logger.py that creates and configures the logger that looks like this:
import logging
import os

if not os.path.isdir(os.environ.get('LOG_PATH', 'log')):
    os.mkdir(os.environ.get('LOG_PATH', 'log'))

# Configure logger
logger = logging.getLogger(__name__)
formatter = logging.Formatter('[%(asctime)s] %(levelname)s in %(module)s: %(message)s')

debug_file_handler = logging.FileHandler(f'{os.environ.get("LOG_PATH", "log")}/debug.log')
info_file_handler = logging.FileHandler(f'{os.environ.get("LOG_PATH", "log")}/info.log')
error_file_handler = logging.FileHandler(f'{os.environ.get("LOG_PATH", "log")}/error.log')

debug_file_handler.setLevel(logging.DEBUG)
info_file_handler.setLevel(logging.INFO)
error_file_handler.setLevel(logging.ERROR)

debug_file_handler.setFormatter(formatter)
info_file_handler.setFormatter(formatter)
error_file_handler.setFormatter(formatter)

logger.addHandler(debug_file_handler)
logger.addHandler(info_file_handler)
logger.addHandler(error_file_handler)
I import this file into my app factory and add the handlers to my app.
# Configure logger
app.logger.removeHandler(default_handler)
app.logger.addHandler(debug_file_handler)
app.logger.addHandler(info_file_handler)
app.logger.addHandler(error_file_handler)
Then, still in the app factory, I have an @app.before_request hook so that logging occurs on every request (for testing purposes).
@app.before_request
def before_request():
    if request.method == 'GET':
        app.logger.debug(f'{request.method} {request.base_url}: parameters {dict(request.args)}')
    else:
        app.logger.debug(f'{request.method} {request.base_url}: parameters {request.json}')
This all works fine on my local machine without a problem. However when I try to deploy to AWS EB, nothing is written to the log files. The log files appear to be created and present when I pull them down via the cli or gui, but nothing is being written to it. I've followed a few tutorials and have added to my .ebextensions file but am still having no luck. Currently my .ebextensions includes a logging.config that looks like this:
files:
  "/opt/elasticbeanstalk/tasks/bundlelogs.d/applogs.conf":
    mode: "000777"
    owner: root
    group: root
    content: |
      /tmp/*.log

  "/opt/elasticbeanstalk/tasks/taillogs.d/applogs.conf":
    mode: "000777"
    owner: root
    group: root
    content: |
      /tmp/*.log
The log files appear to be included when I request the logs (both tail and full), but the logs are completely empty even after making multiple requests. I've also seen a few answers on here saying I need to include chown wsgi:wsgi in this file, but that has not worked either. I've also tried writing the logs to var/app/current/log and that doesn't work either. The log files are always created, but never written to.
For context, the Elastic Beanstalk platform is Python 3.8 running on Amazon Linux 2.
Any help would be greatly appreciated.
Turns out all I needed to do was set the level of the logger to logging.DEBUG so I could see debug logs.
in __init__.py:
app.logger.setLevel(logging.DEBUG)
And since my app name is src when I configure it, I can grab this logger in other modules with:
import logging
logger = logging.getLogger('src')
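For context on why that one line fixes it, here is a minimal sketch (not the poster's exact code): handler levels only filter records that already reach the handlers, while the logger itself inherits an effective level of WARNING from the root logger, so DEBUG and INFO records were dropped before the file handlers ever saw them.
import logging

logger = logging.getLogger('src')
file_handler = logging.FileHandler('debug.log')
file_handler.setLevel(logging.DEBUG)
logger.addHandler(file_handler)

logger.debug('dropped')                  # effective level is still WARNING
logger.setLevel(logging.DEBUG)           # lower the logger's own level
logger.debug('now written to debug.log')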

Tornado substituting custom logger on server (not on local computer)

I have a tornado application and a custom logger method. My code to build and use the custom logger is the following:
def create_logger():
    """
    This function creates the logger functionality to be used throughout the Python application
    :return: bool - true if successful
    """
    # Configuring the logger
    filename = "PythonLogger.log"
    # Change the current working directory to the logs folder, so that the log files are written in it.
    os.chdir(os.path.normpath(os.path.normpath(os.path.dirname(os.path.abspath(__file__)) + os.sep + os.pardir + os.sep + os.pardir + os.sep + 'logs')))
    # Create the logs file
    logging.basicConfig(filename=filename, format='%(asctime)s %(message)s', filemode='w')
    # Creating the logger
    logger = logging.getLogger()
    # Setting the threshold of logger to DEBUG
    logger.setLevel(logging.NOTSET)
    logger.log(0, 'The logger is initialized')
    return True

def log_info_message(msg):
    """
    Utility for message logging with code 20
    :param msg:
    :return:
    """
    return logging.getLogger().log(20, msg)
In the code, I initialize the logger and already write a message to it before the Tornado application initialization:
if __name__ == '__main__':
    # Logger initialization
    create_logger()

    # First log message
    log_info_message('Initiating Python application')

    # Starting Tornado
    tornado.options.parse_command_line()

    # Specifying what app exactly is being started
    server = tornado.httpserver.HTTPServer(test.app)
    server.listen(options.port)

    try:
        if 'Windows_NT' not in os.environ.values():
            server.start(0)
        tornado.ioloop.IOLoop.instance().start()
    except KeyboardInterrupt:
        tornado.ioloop.IOLoop.instance().stop()
Then, say, my GET request handler looks like this (only the interesting lines):
class API(tornado.web.RequestHandler):
    def get(self):
        self.write('Get request ')
        logging.getLogger("tornado.access").log(20, 'Hello')
        logging.getLogger("tornado.application").log(20, '1')
        logging.getLogger("tornado.general").log(20, '2')
        log_info_message('Received a GET request at: ' + datetime.datetime.now().strftime("%d-%b-%Y (%H:%M:%S.%f)"))
What I see is a difference between local testing and testing on the server.
A) Locally, I can see the log message from the initial script run and the log messages for requests (after the Tornado app initialises), both in my log file and in Tornado's logs.
B) On the server, I only see that first message; my log messages for accepted GET requests never appear, and neither do the messages produced by Tornado's own loggers, although Tornado's loggers do show up when there's an error. I guess that means Tornado is somehow re-initialising logging and making my logger and its three loggers write to some other file (and somehow that does not apply when errors happen??).
I am aware that Tornado uses its own three loggers, but I would like to use mine as well, keep Tornado's, and have them all write to the same file. Basically, I want to reproduce that local behaviour on the server, and keep it when an error happens, of course.
How could I achieve this?
Thanks in advance!
P.S.: if I add a name to the logger, say logging.getLogger('Example'), and change the log_info_message function to return logging.getLogger('Example').log(20, msg), Tornado's loggers fail and raise errors. So that option destroys its own loggers...
It seems the only problem was that, on the server side, Tornado was setting the minimum level for a log message to be written to the log file higher (a minimum of 40 was required), so logging.getLogger().log(20, msg) would not write to the log file, but logging.getLogger().log(40, msg) would.
I would like to understand why, so if anybody knows, your knowledge would be more than welcome. For the time being that solution is working, though.
tornado.log defines options that can be used to customise logging via the command line (check tornado.options) - one of them is logging, which defines the log level used. You are likely using this on the server and setting it to error.
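For reference, a minimal sketch of how that command-line option ends up controlling the level, assuming the server is started with something like python app.py --logging=error:
import logging
import tornado.options

# parse_command_line() applies the --logging option, which sets the ROOT
# logger's level; with --logging=error, level-20 (INFO) records are dropped
# before any handler sees them
tornado.options.parse_command_line()

logging.getLogger().log(20, 'visible only with --logging=info or debug')
logging.getLogger().log(40, 'visible even with --logging=error')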
When debugging logging I suggest you create a RequestHandler that will log or return the structure of the existing loggers by inspecting the root logger. When you see the structure it is much easier to understand why it works the way it works.
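A rough sketch of such a debugging handler (the class name and route are illustrative, not part of the original code):
import logging
import tornado.web

class LoggerDumpHandler(tornado.web.RequestHandler):
    def get(self):
        lines = []
        root = logging.getLogger()
        lines.append('root: level=%s handlers=%r' % (logging.getLevelName(root.level), root.handlers))
        # every logger created so far is registered in the manager's loggerDict
        for name in sorted(logging.root.manager.loggerDict):
            lg = logging.getLogger(name)
            lines.append('%s: level=%s propagate=%s handlers=%r'
                         % (name, logging.getLevelName(lg.level), lg.propagate, lg.handlers))
        self.set_header('Content-Type', 'text/plain')
        self.write('\n'.join(lines))

# e.g. add (r'/debug/loggers', LoggerDumpHandler) to the application's handler list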

Python Pyramid Shutdown Event

I'm new-ish to Python and Pyramid so apologies if I'm trying to do the wrong thing here.
I'm currently running a Pyramid application inside of a Docker container, with the following entrypoint:
pipenv run pserve development.ini --reload
This serves my application correctly and I can then edit code directly inside the container. This all works fine. I then attempted to register this service to an instance of Netflix's Eureka Service Registry so I could then proxy to this service with a gateway (such as Netflix Zuul). I used Eureka's REST API to achieve this and again, this all worked fine.
However, when I go to shutdown the Pyramid service, I would like to send an additional HTTP request to Eureka to DELETE the registered service - This is ideal so I don't have to wait for expiry on Eureka and there will never be a window where Zuul might be proxying requests to a downed service.
The problem is I cannot reliably find a way to run a shutdown event in Pyramid. Basically, when I stop the Docker container, the service receives exit code 137 (which I believe is the result of a kill -9) and nothing ever happens. I've attempted using atexit as well as signal handlers for SIGKILL, SIGTERM, SIGINT, etc., and nothing ever happens. I've also tried running pserve without the --reload flag, but that still doesn't work.
Is there any way for me to reliably get this DELETE request to send right before the server and Docker container shut down?
This is the development.ini file I'm using:
[app:main]
use = egg:my-app
pyramid.reload_templates = true
pyramid.includes =
    pyramid_debugtoolbar
    pyramid_redis_sessions
    pyramid_tm
debugtoolbar.hosts = 0.0.0.0/0
sqlalchemy.url = mysql://root:root@mysql/keyblade
my-app.secret = secretkey
redis.sessions.secret = secretkey
redis.sessions.host = redis
redis.sessions.port = 6379
[server:main]
use = egg:waitress#main
listen = 0.0.0.0:8000
# Logging Configuration
[loggers]
keys = root, debug, sqlalchemy.engine.base.Engine
[logger_debug]
level = DEBUG
handlers =
qualname = debug
[handlers]
keys = console
[formatters]
keys = generic
[logger_root]
level = INFO
handlers = console
[logger_sqlalchemy.engine.base.Engine]
level = INFO
handlers =
qualname = sqlalchemy.engine.base.Engine
[handler_console]
class = StreamHandler
args = (sys.stderr,)
level = NOTSET
formatter = generic
[formatter_generic]
format = %(asctime)s %(levelname)-5.5s [%(name)s][%(threadName)s] %(message)s
There is no shutdown protocol/api for a WSGI application (there technically isn't one for startup either despite people using/hoping that application-creation is close to the same time the server starts handling requests). You may be able to find a WSGI server that provides some hooks (for example gunicorn provides http://docs.gunicorn.org/en/stable/settings.html#worker-exit), but the better approach is to have your upstream handle your servers disappearing via health checks. Expecting that you'll be able to send a DELETE reliably when things go wrong is very unlikely to be a robust solution.
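If you do switch to a server with such hooks, a rough sketch of the gunicorn worker_exit idea could look like the following (the Eureka URL and instance id are placeholders, and deregistration still cannot be guaranteed on a hard kill -9):
# gunicorn.conf.py
import urllib.request

EUREKA_INSTANCE_URL = 'http://eureka:8761/eureka/apps/MY-APP/instance-id'  # placeholder

def worker_exit(server, worker):
    # best-effort deregistration when a worker shuts down cleanly
    try:
        req = urllib.request.Request(EUREKA_INSTANCE_URL, method='DELETE')
        urllib.request.urlopen(req, timeout=2)
    except OSError:
        pass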
However, when I go to shutdown the Pyramid service, I would like to send an additional HTTP request to Eureka to DELETE the registered service - This is ideal so I don't have to wait for expiry on Eureka and there will never be a window where Zuul might be proxying requests to a downed service.
This is web server specific, and Pyramid cannot provide an abstraction for it, as your mileage may vary. Web server workers themselves cannot know when they are killed, because the kill is forced externally.
I would take an approach where an external process monitors the web server and performs the clean-up action when it detects that the web server is no longer running. The definition of "no longer running" could be "not a single process alive". You could have a scheduled background job (cron) check for this condition, as in the sketch below. Or even better, run it on a separate monitoring instance that sits on a different server and can act even if the server itself goes down.
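A rough sketch of that external watchdog; the health URL, Eureka URL, and polling interval are illustrative placeholders:
import time
import urllib.request

HEALTH_URL = 'http://app-host:8000/health'                          # placeholder
EUREKA_URL = 'http://eureka:8761/eureka/apps/MY-APP/instance-id'    # placeholder

def app_is_up():
    try:
        with urllib.request.urlopen(HEALTH_URL, timeout=2) as resp:
            return resp.status == 200
    except OSError:
        return False

while True:
    if not app_is_up():
        # the web server is gone, so deregister it from Eureka
        req = urllib.request.Request(EUREKA_URL, method='DELETE')
        urllib.request.urlopen(req, timeout=2)
        break
    time.sleep(30)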

Why cherrypy server logs error messages even though it seems to work?

I'm using CherryPy to start a server whose computation engine is Apache Spark. It logs this weird output:
[06/Dec/2017:13:25:42] ENGINE Bus STARTING
INFO:cherrypy.error:[06/Dec/2017:13:25:42] ENGINE Bus STARTING
[06/Dec/2017:13:25:42] ENGINE Started monitor thread 'Autoreloader'.
INFO:cherrypy.error:[06/Dec/2017:13:25:42] ENGINE Started monitor thread 'Autoreloader'.
[06/Dec/2017:13:25:42] ENGINE Serving on http://0.0.0.0:5432
INFO:cherrypy.error:[06/Dec/2017:13:25:42] ENGINE Serving on http://0.0.0.0:5432
[06/Dec/2017:13:25:42] ENGINE Bus STARTED
INFO:cherrypy.error:[06/Dec/2017:13:25:42] ENGINE Bus STARTED
My question is: what is this INFO:cherrypy.error: that it logs?
This is how I run the server:
def run_server(app):
    # Enable WSGI access logging via Paste
    app_logged = TransLogger(app)

    # Mount the WSGI callable object (app) on the root directory
    cherrypy.tree.graft(app_logged, '/')

    # Set the configuration of the web server
    cherrypy.config.update({
        'engine.autoreload.on': True,
        'log.screen': True,
        'server.socket_port': 5432,
        'server.socket_host': '0.0.0.0'
    })

    # Start the CherryPy WSGI web server
    cherrypy.engine.start()
    cherrypy.engine.block()
There's absolutely nothing wrong with what you're seeing in the log file. I see the same bus and serving statements when I run CherryPy. I'd say CherryPy hasn't used the term 'error' very effectively there, much like some people call HTTP status codes 'error codes' when in fact code 200 means everything is fine.
I think in your case the listeners (for activity logging) are essentially wired up via the _buslog function in cherrypy/__init__.py, and they eventually call the error() function in cherrypy/_cplogging.py
and according to the description there:
""" Write the given ``msg`` to the error log.
This is not just for errors! Applications may call this at any time
to log application-specific information.
If ``traceback`` is True, the traceback of the current exception
(if any) will be appended to ``msg``.
"""
So, yeah, this is not just for errors...
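A quick illustration of that point: cherrypy.log() is CherryPy's standard application logging call (the message text below is made up), and it writes through that same "error" log at severity INFO even though nothing is wrong.
import cherrypy

# appears in the "error" log with severity INFO
cherrypy.log('Spark engine initialised')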

How to collect error message from stderr using a daemonized uwsgi?

I run my uwsgi with --daemonize=~/uwsgi.log.
I use Flask. In my Flask app, if I print a message to stdout, it shows up in uwsgi.log. If I print to stderr, uwsgi.log won't show those messages. How do I get uWSGI to collect messages from stderr?
The main problem is that I cannot get uwsgi.log to collect the exception traceback after I catch an exception in my Flask app.
Flask is catching your exceptions; make sure you set PROPAGATE_EXCEPTIONS in the config.
from flask import Flask

application = Flask(__name__)
application.config['PROPAGATE_EXCEPTIONS'] = True

@application.route('/')
def hello_world():
    return 'Hello World!'
Uwsgi logging can be set with
--logto /var/log/uwsgi/app.log
or use the logto2 flag if you want to separate stdout from stderr.
There's also the possibility of using logger plugins (forwarding to syslog, etc.); however, these plugins have to be compiled into uWSGI.
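For the traceback problem specifically, a generic sketch (not from this answer): once you catch an exception yourself, neither Flask nor uWSGI ever sees it, so you have to write the traceback out explicitly. logging.exception() records the message plus the full traceback to whatever stream or file the logging setup points at, which --daemonize/--logto then capture. risky_operation below is a hypothetical stand-in for your own code.
import logging
import sys

# send log records to stdout, which uWSGI captures via --daemonize/--logto
logging.basicConfig(stream=sys.stdout, level=logging.INFO)

try:
    risky_operation()   # hypothetical stand-in for application code
except Exception:
    logging.exception('risky_operation failed')   # message + full traceback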
