How to log Flask Python endpoint actions in Spring Boot Admin? - python

I'm working in PyCharm and building endpoints with Flask, and I use Pyctuator as well. It is connected to a Spring Boot Admin server, and I was seeing a lot of logs that were not useful to me, like:
2022-07-26 10:28:32,212 INFO 1 -- [Thread-17317] _internal: 172.18.0.1 - - [26/Jul/2022 10:28:32] "GET /actuator/loggers HTTP/1.1" 200 -
I turned off those loggers on the site, and instead I want to set things up so that whenever the server runs a process through an endpoint, it sends back messages like 'System started xy process', 'System stopped xy process', etc.
Could you please help me set this up in PyCharm?
Thanks!
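A minimal sketch of how this could look with the standard logging module, assuming Pyctuator is already registered as in the question; the logger name, endpoint, and messages below are placeholders, not from the original project. Since Pyctuator serves /actuator/loggers (visible in the log line above), loggers created this way should also be manageable from the Spring Boot Admin UI:

import logging

from flask import Flask

app = Flask(__name__)
# ... Pyctuator registration as already configured in the project ...

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("xy-process")  # hypothetical logger name

@app.route("/xy/start")  # hypothetical endpoint
def start_xy():
    logger.info("System started xy process")
    # ... the actual work of the process would go here ...
    logger.info("System stopped xy process")
    return "OK"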

Related

What do these numbers mean in my Flask application's log output?

I have a Flask application that runs with Flask-SocketIO. I recently installed eventlet in order to improve performance and utilise the web socket protocol.
My HTTP logs started having 2 additional parameters at the end (after the status code):
127.0.0.1 - - [26/Sep/2019 15:27:58] "GET /supported_countries HTTP/1.1" 200 488 0.019999
127.0.0.1 - - [26/Sep/2019 15:27:58] "GET /specializations HTTP/1.1" 200 381 0.003003
In this case it's the numbers 488 0.019999 and 381 0.003003.
I am assuming it's the size of the response and the time it took to complete the request?
What are they? (and can I configure what request info is logged?)
Here is my application.py
from my_app import create_app, socketio

app = create_app()

if __name__ == '__main__':
    socketio.run(app, host=app.config.get('APP_HOST'),
                 log_output=app.config.get('LOGGING', False))
Again, please note that this was not happening prior to the installation of eventlet. Flask-SocketIO automatically detects that I have it installed and selects it (emphasis mine):
The extension automatically detects which asynchronous framework to use based on what is installed. Preference is given to eventlet, followed by gevent. For WebSocket support in gevent, uWSGI is preferred, followed by gevent-websocket. If neither eventlet nor gevent are installed, then the Flask development server is used.
So the Flask dev server does not output these numbers, whilst the eventlet-configured server does.
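For what it's worth, eventlet's WSGI server has a default access-log format that appends the response body length and the wall-clock seconds spent on the request after the status code, which matches the two extra numbers above. A hedged sketch of overriding it by calling eventlet.wsgi.server directly (this bypasses socketio.run, so treat it as illustrative only):

import eventlet
import eventlet.wsgi

from my_app import create_app  # the factory from the question

app = create_app()

# eventlet's default format ends with %(body_length)s and
# %(wall_seconds).6f; dropping them reproduces the plain
# Flask-dev-server-style access log line.
log_format = ('%(client_ip)s - - [%(date_time)s] "%(request_line)s" '
              '%(status_code)s')

eventlet.wsgi.server(eventlet.listen(('127.0.0.1', 5000)), app,
                     log_format=log_format)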

Python Pyramid Shutdown Event

I'm new-ish to Python and Pyramid so apologies if I'm trying to do the wrong thing here.
I'm currently running a Pyramid application inside of a Docker container, with the following entrypoint:
pipenv run pserve development.ini --reload
This serves my application correctly and I can then edit code directly inside the container. This all works fine. I then attempted to register this service to an instance of Netflix's Eureka Service Registry so I could then proxy to this service with a gateway (such as Netflix Zuul). I used Eureka's REST API to achieve this and again, this all worked fine.
However, when I go to shutdown the Pyramid service, I would like to send an additional HTTP request to Eureka to DELETE the registered service - This is ideal so I don't have to wait for expiry on Eureka and there will never be a window where Zuul might be proxying requests to a downed service.
The problem is that I cannot reliably find a way to run a shutdown event in Pyramid. Basically, when I stop the Docker container, the service receives exit code 137 (which I believe is the result of a kill -9) and nothing ever happens. I've attempted using atexit as well as signal handlers for SIGKILL, SIGTERM, SIGINT, etc., and nothing ever happens. I've also tried running pserve without the --reload flag, but that still doesn't work.
Is there any way for me to reliably get this DELETE request to send right before the server and the Docker container shut down?
This is the development.ini file I'm using:
[app:main]
use = egg:my-app
pyramid.reload_templates = true
pyramid.includes =
    pyramid_debugtoolbar
    pyramid_redis_sessions
    pyramid_tm
debugtoolbar.hosts = 0.0.0.0/0
sqlalchemy.url = mysql://root:root@mysql/keyblade
my-app.secret = secretkey
redis.sessions.secret = secretkey
redis.sessions.host = redis
redis.sessions.port = 6379
[server:main]
use = egg:waitress#main
listen = 0.0.0.0:8000
# Logging Configuration
[loggers]
keys = root, debug, sqlalchemy.engine.base.Engine
[logger_debug]
level = DEBUG
handlers =
qualname = debug
[handlers]
keys = console
[formatters]
keys = generic
[logger_root]
level = INFO
handlers = console
[logger_sqlalchemy.engine.base.Engine]
level = INFO
handlers =
qualname = sqlalchemy.engine.base.Engine
[handler_console]
class = StreamHandler
args = (sys.stderr,)
level = NOTSET
formatter = generic
[formatter_generic]
format = %(asctime)s %(levelname)-5.5s [%(name)s][%(threadName)s] %(message)s
There is no shutdown protocol/API for a WSGI application (there technically isn't one for startup either, despite people hoping that application creation happens close to the time the server starts handling requests). You may be able to find a WSGI server that provides some hooks (for example, gunicorn provides http://docs.gunicorn.org/en/stable/settings.html#worker-exit), but the better approach is to have your upstream handle your server disappearing via health checks. Expecting that you'll be able to send a DELETE reliably when things go wrong is very unlikely to be a robust solution.
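If switching to gunicorn is an option, here is a minimal sketch of that worker-exit hook; the Eureka URL and instance id below are placeholders:

# gunicorn.conf.py
import requests

# Hypothetical Eureka instance URL; adjust the app name and instance id.
EUREKA_INSTANCE_URL = "http://eureka:8761/eureka/apps/MY-APP/my-instance-id"

def worker_exit(server, worker):
    # Called in the worker process just after the worker exits.
    # Best effort only: a kill -9 (exit code 137) skips this entirely.
    try:
        requests.delete(EUREKA_INSTANCE_URL, timeout=2)
    except requests.RequestException:
        server.log.warning("Failed to deregister from Eureka")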
However, when I go to shutdown the Pyramid service, I would like to send an additional HTTP request to Eureka to DELETE the registered service - This is ideal so I don't have to wait for expiry on Eureka and there will never be a window where Zuul might be proxying requests to a downed service.
This is web-server specific, and Pyramid cannot provide an abstraction for it, as your mileage may vary. Web server workers themselves cannot know when they are killed, as it is forced externally.
I would take an approach where an external process monitors the web server and performs the clean-up actions once it detects the web server is no longer running. The definition of "no longer running" could be "not a single process alive". You could have a background scheduled job (cron) check for this condition. Or even better, run the check on a separate monitoring instance that sits on a different server and can still act when the web server's machine itself goes down.
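A sketch of such an external check, runnable from cron; the process pattern and the Eureka endpoint are assumptions:

# deregister_watchdog.py
import subprocess

import requests

EUREKA_INSTANCE_URL = "http://eureka:8761/eureka/apps/MY-APP/my-instance-id"

def pserve_running():
    # pgrep exits non-zero when no process matches the pattern.
    result = subprocess.run(["pgrep", "-f", "pserve"], capture_output=True)
    return result.returncode == 0

if not pserve_running():
    try:
        requests.delete(EUREKA_INSTANCE_URL, timeout=2)
    except requests.RequestException:
        pass  # Eureka may also be unreachable; the next cron run retries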

Can't see application log in Google Cloud Logs

How can I view log messages on Google Cloud (https://console.cloud.google.com/logs)?
This is what I see in the terminal when I run dev_appserver.py (locally running):
INFO 2016-05-16 14:00:45,118 module.py:787] default: "GET /static/images/contact.png HTTP/1.1" 304 -
INFO 2016-05-16 14:00:45,128 module.py:787] default: "GET /static/images/email.png HTTP/1.1" 304 -
INFO 2016-05-16 14:00:45,136 module.py:787] default: "GET /static/images/phone.png HTTP/1.1" 304 -
INFO 2016-05-16 14:00:45,487 basehandler.py:19] entering basehandler.py
INFO 2016-05-16 14:00:45,516 module.py:787] default: "GET /static/images/logo-349x209.png HTTP/1.1" 304 -
INFO 2016-05-16 14:00:45,562 requesthandlers.py:26] entering requesthandlers.py
INFO 2016-05-16 14:00:45,563 app.py:28] entering app.py
INFO 2016-05-16 14:00:45,563 app.py:198] Using development database
Both application log messages and request logs are displayed.
However when I view the log of the same code deployed I can only see the requests being logged:
The code I'm using to generate application log messages is something like:
import logging
logger = logging.getLogger("someLogger")
logger.info("entering app.py")
But I've also tried using logging.info(...) directly with the same results.
I've tried finding an answer to this in various resources, but I've come up empty-handed; most refer to how to set the log level when developing locally.
I'm guessing that I need to enable some setting in order to view application logs on Google Cloud Logs.
Resources that I've looked at:
https://cloud.google.com/logging/docs/view/logs_viewer
https://cloud.google.com/appengine/docs/python/logs/
How to change the logging level of dev_appserver
How do I write to the console in Google App Engine?
Google App Engine - Can not find my logging messages
https://docs.python.org/3/howto/logging.html
App Engine groups the logs by request. You need to expand the log entry using the triangle/pointer to the left of the request in the 'new' GAE log viewer.
Personally I prefer using the old GAE log viewer, but I am unsure how much longer it will be around:
https://appengine.google.com/logs?app_id=s~xxx
(This viewer shows request + logs and allows log expansion)
An easy way to integrate Google Cloud Platform logging into your Python code is to create a subclass of logging.StreamHandler. This way logging levels will also match those of Google Cloud Logging, enabling you to filter based on severity. This solution also works within Cloud Run containers.
You can also just add this handler to any existing logger configuration, without needing to change current logging code.
import json
import logging
import os
import sys
from logging import StreamHandler

from flask import request

class GoogleCloudHandler(StreamHandler):
    def __init__(self):
        StreamHandler.__init__(self)

    def emit(self, record):
        msg = self.format(record)
        # Get project_id from Cloud Run environment
        project = os.environ.get('GOOGLE_CLOUD_PROJECT')
        # Build structured log messages as an object.
        global_log_fields = {}
        trace_header = request.headers.get('X-Cloud-Trace-Context')
        if trace_header and project:
            trace = trace_header.split('/')
            global_log_fields['logging.googleapis.com/trace'] = (
                f"projects/{project}/traces/{trace[0]}")
        # Complete a structured log entry, merging in the trace fields.
        entry = dict(severity=record.levelname, message=msg,
                     **global_log_fields)
        print(json.dumps(entry))
        sys.stdout.flush()
A way to configure and use the handler could be:
def get_logger():
    logger = logging.getLogger(__name__)
    if not logger.handlers:
        gcp_handler = GoogleCloudHandler()
        gcp_handler.setLevel(logging.DEBUG)
        gcp_formatter = logging.Formatter(
            '%(levelname)s %(asctime)s [%(filename)s:%(funcName)s:%(lineno)d] %(message)s')
        gcp_handler.setFormatter(gcp_formatter)
        logger.addHandler(gcp_handler)
        # Without this, the logger inherits the root level (WARNING) and
        # drops INFO/DEBUG records before they reach the handler.
        logger.setLevel(logging.DEBUG)
    return logger
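Usage in a request handler would then look something like this (names taken from the snippets above):

logger = get_logger()
logger.info("Handling request")   # appears in Cloud Logging with severity INFO
logger.error("Something failed")  # appears with severity ERROR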

Flask: getting random repeated flash messages when flash() is in @app.before_request

When I use flash() in @app.before_request, I get what seems like a random number of repeated entries. Refreshing the page over and over will give me between 1 and 4 repeated messages.
There aren't any redirects.
My code is simply:
if app.config['INSTANCE'] == 'DEV':
    flash("This data is from the development DB")
Alternatively, I wasn't able to figure out how to access or modify the list of messages that flash() appends to, other than in the template via get_flashed_messages(). Does anyone know how?
You can access the list of waiting messages via flashes = session.get('_flashes', []). You can view the code on GitHub.
On the note of why you're getting a few messages flashing: it's because you're making multiple requests (but probably don't realise it). Your web browser is probably asking for favicon.ico, which is a request, so it causes a flash, and so on. If you're running in debug mode, your console window will show all the requests being handled. For example, loading a simple Flask example in Chrome causes this to show:
127.0.0.1 - - [21/Jun/2013 16:35:05] "GET / HTTP/1.1" 200 -
127.0.0.1 - - [21/Jun/2013 16:35:05] "GET /favicon.ico HTTP/1.1" 404 -
One is my request to view the homepage, the other is Chrome asking for the favicon (and it being told it doesn't exist).
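A hedged sketch combining both points: skip asset requests in the before_request hook, and inspect the queued messages via session['_flashes'] (a list of (category, message) tuples) to avoid duplicates. The guard paths are assumptions about what is triggering the extra requests:

from flask import Flask, flash, request, session

app = Flask(__name__)
app.secret_key = "dev"  # sessions are required for flashing

@app.before_request
def dev_banner():
    # Ignore asset requests such as /favicon.ico so the banner is only
    # flashed once per real page view.
    if request.path == '/favicon.ico' or request.path.startswith('/static'):
        return
    queued = [msg for _category, msg in session.get('_flashes', [])]
    if app.config.get('INSTANCE') == 'DEV' and \
            "This data is from the development DB" not in queued:
        flash("This data is from the development DB")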

Django. Business Logic Bottlenecks.

Is there any smart way to find bottlenecks in business logic? For example, in a big project we have one view that does nothing but HttpResponse('1'). We are sure that no SQL queries run in the middleware, yet the view is still really slow (50 rps vs 200 rps on a clean Django project).
What could the reasons be?
How can we find the bottlenecks in this case?
We also know that a clean project uses less than 1 MB of memory for objects on each request, while our project uses more than 2 MB. How can we find these objects?
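For the memory question specifically, here is a rough sketch of counting object growth per request with a gc-based middleware (old-style middleware, matching the Django 1.3-era code later in this thread; the counts are approximate):

import gc

class ObjectCountMiddleware(object):
    """Log how many gc-tracked objects each request leaves behind."""

    def process_request(self, request):
        gc.collect()
        request._object_count = len(gc.get_objects())

    def process_response(self, request, response):
        gc.collect()
        grown = len(gc.get_objects()) - getattr(request, '_object_count', 0)
        if grown > 0:
            print('Request grew gc-tracked objects by %d' % grown)
        return response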
The debug toolbar works well, but I also like running django-devserver. It can give you more information than you can process sometimes.
DEVSERVER_MODULES = (
    'devserver.modules.sql.SQLRealTimeModule',
    'devserver.modules.sql.SQLSummaryModule',
    'devserver.modules.profile.ProfileSummaryModule',

    # Modules not enabled by default
    'devserver.modules.ajax.AjaxDumpModule',
    # 'devserver.modules.profile.MemoryUseModule',
    'devserver.modules.cache.CacheSummaryModule',
    # 'devserver.modules.profile.LineProfilerModule',
)
This is what modules I have turned on, and one hit to the admin page after start:
Django version 1.3.1, using settings 'myproject.settings'
Running django-devserver 0.3.1
Threaded django server is running at http://127.0.0.1:8000/
Quit the server with CTRL-BREAK.
[sql] SELECT ...
FROM "auth_message"
WHERE "auth_message"."user_id" = 1
[sql] SELECT ...
FROM "django_admin_log"
INNER JOIN "auth_user" ON ("django_admin_log"."user_id" = "auth_user"."id")
LEFT OUTER JOIN "django_content_type" ON ("django_admin_log"."content_type_id" = "django_content_type"."id")
WHERE "django_admin_log"."user_id" = 1
ORDER BY "django_admin_log"."action_time" DESC LIMIT 10
[sql] 4 queries with 0 duplicates
[profile] Total time to render was 0.54s
[cache] 0 calls made with a 100% hit percentage (0 misses)
[30/Nov/2011 08:36:34] "GET /admin/ HTTP/1.1" 200 21667 (time: 0.69s; sql: 0ms (4q))
[sql] SELECT ...
FROM "django_flatpage"
INNER JOIN "django_flatpage_sites" ON ("django_flatpage"."id" = "django_flatpage_sites"."flatpage_id")
WHERE ("django_flatpage"."url" = '/favicon.ico/'
AND "django_flatpage_sites"."site_id" = 1)
[sql] 1 queries with 0 duplicates
[profile] Total time to render was 0.02s
[cache] 0 calls made with a 100% hit percentage (0 misses)
[30/Nov/2011 08:36:34] "GET /favicon.ico/ HTTP/1.1" 404 2587 (time: 0.89s; sql: 0ms (1q))
Do you use the Django Debug Toolbar? You could find out what queries are run with it, middleware or not.
How do you monitor the performance of the view?
Are there many more users in the big project than in the fresh one?
I guess your bottleneck is not in your code or in Django's code. What web server do you use, and how many requests are handled by the worker processes?
If you use mod_wsgi, be sure to have enough worker processes and that maximum-requests is set high.
And of course be sure that settings.DEBUG is not set.
Apache logs can include the request processing time in microseconds: http://httpd.apache.org/docs/current/mod/mod_log_config.html (check for %D).
Check in your middleware how long the interpreter spends inside your code plus Django's code.
# Middleware to check how long the request was in the wsgi queue:
import logging
import time

class FooMiddleware:
    def process_request(self, request):
        ...
        queue_start = request.META.get('HTTP_X_QUEUE_START', None)
        if queue_start is not None:
            # How long was the request waiting in the wsgi queue?
            # In Apache config (inside <VirtualHost>):
            #   RequestHeader add X-Queue-Start "%t"
            # The header value looks like "t=<microseconds>".
            queue_start = int(queue_start[2:]) / 1000000.0
            wait_in_queue = time.time() - queue_start
            if wait_in_queue > 1:
                logging.error('Request was too long (%.3fs) in wsgi-queue: %s' % (
                    wait_in_queue, request.build_absolute_uri()))
You could try New Relic and see if it helps in narrowing down the problem area.
http://newrelic.com/python
http://blog.newrelic.com/2011/11/08/new-relic-supports-python/
The good thing is that you can use it on a production application, where you can't use the Django Debug Toolbar.
