I'm looking for a way to make the Python logging module log to a database, falling back to the file system when the database is down.
So basically two things: how to make the logger log to a database, and how to make it fall back to file logging when the database is down.
I recently managed to write my own database logger in Python. Since I couldn't find any examples, I thought I'd post mine here. It works with MS SQL.
A database table could look like this:
CREATE TABLE [db_name].[log](
[id] [bigint] IDENTITY(1,1) NOT NULL,
[log_level] [int] NULL,
[log_levelname] [char](32) NULL,
[log] [char](2048) NOT NULL,
[created_at] [datetime2](7) NOT NULL,
[created_by] [char](32) NOT NULL,
) ON [PRIMARY]
The class itself:
class LogDBHandler(logging.Handler):
    '''
    Customized logging handler that puts logs to the database.
    pymssql required.
    '''
    def __init__(self, sql_conn, sql_cursor, db_tbl_log):
        logging.Handler.__init__(self)
        self.sql_cursor = sql_cursor
        self.sql_conn = sql_conn
        self.db_tbl_log = db_tbl_log

    def emit(self, record):
        # Set current time
        tm = time.strftime("%Y-%m-%d %H:%M:%S", time.localtime(record.created))
        # Clean the log message so it can be put into the db via SQL (escape quotes)
        self.log_msg = record.msg
        self.log_msg = self.log_msg.strip()
        self.log_msg = self.log_msg.replace('\'', '\'\'')
        # Build the SQL insert
        sql = 'INSERT INTO ' + self.db_tbl_log + ' (log_level, ' + \
              'log_levelname, log, created_at, created_by) ' + \
              'VALUES (' + \
              '' + str(record.levelno) + ', ' + \
              '\'' + str(record.levelname) + '\', ' + \
              '\'' + str(self.log_msg) + '\', ' + \
              '(convert(datetime2(7), \'' + tm + '\')), ' + \
              '\'' + str(record.name) + '\')'
        try:
            self.sql_cursor.execute(sql)
            self.sql_conn.commit()
        # If there's an error, print it on screen. Since the DB is not working,
        # there's no point logging the failure to the database :)
        except pymssql.Error:
            print(sql)
            print('CRITICAL DB ERROR! Logging to database not possible!')
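Note that building the statement by string concatenation with hand-rolled quote escaping is fragile (and open to injection through log messages). A safer variant of emit(), sketched below as a drop-in replacement method for the same class and cursor, passes the values as query parameters so the driver handles the quoting:

```python
def emit(self, record):
    # Sketch: parameterized variant of emit(); pymssql accepts
    # %s-style placeholders and escapes the values itself.
    tm = time.strftime("%Y-%m-%d %H:%M:%S", time.localtime(record.created))
    sql = ('INSERT INTO ' + self.db_tbl_log +
           ' (log_level, log_levelname, log, created_at, created_by) '
           'VALUES (%s, %s, %s, convert(datetime2(7), %s), %s)')
    try:
        self.sql_cursor.execute(sql, (record.levelno, record.levelname,
                                      record.getMessage().strip(), tm,
                                      record.name))
        self.sql_conn.commit()
    except pymssql.Error:
        print('CRITICAL DB ERROR! Logging to database not possible!')
```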
And a usage example:
import pymssql
import time
import logging
db_server = 'servername'
db_user = 'db_user'
db_password = 'db_pass'
db_dbname = 'db_name'
db_tbl_log = 'log'
log_file_path = 'C:\\Users\\Yourname\\Desktop\\test_log.txt'
log_error_level = 'DEBUG' # LOG error level (file)
log_to_db = True # LOG to database?
class LogDBHandler(logging.Handler):
[...]
# Main settings for the database logging use
if log_to_db:
# Make the connection to database for the logger
log_conn = pymssql.connect(db_server, db_user, db_password, db_dbname, 30)
log_cursor = log_conn.cursor()
logdb = LogDBHandler(log_conn, log_cursor, db_tbl_log)
# Set logger
logging.basicConfig(filename=log_file_path)
# Set db handler for root logger
if log_to_db:
logging.getLogger('').addHandler(logdb)
# Register MY_LOGGER
log = logging.getLogger('MY_LOGGER')
log.setLevel(log_error_level)
# Example variable
test_var = 'This is test message'
# Log the variable contents as an error
log.error('This error occurred: %s' % test_var)
The above will log both to the database and to the file. If the file is not needed, skip the logging.basicConfig(filename=log_file_path) line. Everything logged through log will be logged as MY_LOGGER. If some external error appears (i.e. in an imported module), the error will appear as root, since the root logger is also active and uses the database handler.
Write yourself a handler that directs the logs to the database in question. When it fails, you can remove it from the logger's handler list. There are many ways to deal with the failure modes.
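A minimal sketch of that idea (assuming a db_write(record) callable that raises on database errors; the name is hypothetical): the handler tries the database first and hands the record to a file handler when the write fails.

```python
import logging

class FallbackDBHandler(logging.Handler):
    """Try the database first; fall back to a file on failure."""

    def __init__(self, db_write, fallback_path='fallback.log'):
        super().__init__()
        self.db_write = db_write  # hypothetical: raises on DB errors
        self.fallback = logging.FileHandler(fallback_path)

    def emit(self, record):
        try:
            self.db_write(record)
        except Exception:
            # DB is down: write this record to the file instead.
            self.fallback.emit(record)
```

Instead of falling back per record, the except branch could also call logging.getLogger('').removeHandler(self) so that subsequent records skip the database entirely.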
Python logging to a database with a backup logger
Problem
I had the same problem when I ran a Django project on a server, since I sometimes needed to check the logs remotely.
Solution
First, there needs to be a handler for the logger that inserts logs into the database. Since my SQL is not good, I used an ORM; I chose SQLAlchemy.
The model:
# models.py
from sqlalchemy import Column, Integer, String, DateTime, Text
from sqlalchemy.ext.declarative import declarative_base
import datetime
base = declarative_base()
class Log(base):
__tablename__ = "log"
id = Column(Integer, primary_key=True, autoincrement=True)
time = Column(DateTime, nullable=False, default=datetime.datetime.now)
level_name = Column(String(10), nullable=True)
module = Column(String(200), nullable=True)
thread_name = Column(String(200), nullable=True)
file_name = Column(String(200), nullable=True)
func_name = Column(String(200), nullable=True)
line_no = Column(Integer, nullable=True)
process_name = Column(String(200), nullable=True)
message = Column(Text)
last_line = Column(Text)
This is the CRUD class for inserting into the database:
#crud.py
import sqlalchemy
import sqlalchemy.orm  # needed for sqlalchemy.orm.Session below
from .models import base
from traceback import print_exc
class Crud:
    def __init__(self, connection_string='sqlite:///log_db.sqlite3',
encoding='utf-8',
pool_size=10,
max_overflow=20,
pool_recycle=3600):
self.connection_string = connection_string
self.encoding = encoding
self.pool_size = pool_size
self.max_overflow = max_overflow
self.pool_recycle = pool_recycle
self.engine = None
self.session = None
def initiate(self):
self.create_engine()
self.create_session()
self.create_tables()
def create_engine(self):
self.engine = sqlalchemy.create_engine(self.connection_string)
def create_session(self):
self.session = sqlalchemy.orm.Session(bind=self.engine)
def create_tables(self):
base.metadata.create_all(self.engine)
    def insert(self, instances):
        try:
            self.session.add(instances)
            self.session.commit()
            self.session.flush()
        except Exception:
            self.session.rollback()
            raise

    def __del__(self):
        self.close_session()
        self.close_all_connections()

    def close_session(self):
        try:
            self.session.close()
        except Exception:
            print_exc()
        else:
            self.session = None

    def close_all_connections(self):
        try:
            self.engine.dispose()
        except Exception:
            print_exc()
        else:
            self.engine = None
The handler:
# handler.py
from logging import Handler, getLogger
from traceback import print_exc
from .crud import Crud
from .models import Log
my_crud = Crud(
connection_string=<connection string to reach your db>,
encoding='utf-8',
pool_size=10,
max_overflow=20,
pool_recycle=3600)
my_crud.initiate()
class DBHandler(Handler):
backup_logger = None
def __init__(self, level=0, backup_logger_name=None):
super().__init__(level)
if backup_logger_name:
self.backup_logger = getLogger(backup_logger_name)
    def emit(self, record):
        try:
            message = self.format(record)
            try:
                last_line = message.rsplit('\n', 1)[-1]
            except Exception:
                last_line = None
            try:
                new_log = Log(module=record.module,
                              thread_name=record.threadName,
                              file_name=record.filename,
                              func_name=record.funcName,
                              level_name=record.levelname,
                              line_no=record.lineno,
                              process_name=record.processName,
                              message=message,
                              last_line=last_line)
                my_crud.insert(instances=new_log)
            except Exception:
                # The DB insert failed: route the record to the backup logger.
                if self.backup_logger:
                    try:
                        getattr(self.backup_logger,
                                record.levelname.lower())(record.message)
                    except Exception:
                        print_exc()
                else:
                    print_exc()
        except Exception:
            print_exc()
Test to check the logger:
# test.py
from logging import basicConfig, getLogger, DEBUG, FileHandler, Formatter
from .handler import DBHandler
basicConfig(format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
datefmt='%d-%b-%y %H:%M:%S',
level=DEBUG)
format = Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
backup_logger = getLogger('backup_logger')
file_handler = FileHandler('file.log')
file_handler.setLevel(DEBUG)
file_handler.setFormatter(format)
backup_logger.addHandler(file_handler)
db_logger = getLogger('logger')
db_handler = DBHandler(backup_logger_name='backup_logger')
db_handler.setLevel(DEBUG)
db_handler.setFormatter(format)
db_logger.addHandler(db_handler)
if __name__ == "__main__":
db_logger.debug('debug: hello world!')
db_logger.info('info: hello world!')
db_logger.warning('warning: hello world!')
db_logger.error('error: hello world!')
db_logger.critical('critical: hello world!!!!')
You can see that the handler accepts a backup logger, which it uses when the database insertion fails.
A good improvement would be to do the database writes on a separate thread, as sketched below.
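A minimal sketch of that improvement using the standard library's QueueHandler and QueueListener (Python 3.2+): application threads only enqueue records, and the blocking DB insert runs on the listener's worker thread. DBHandler is the class defined above.

```python
from logging import getLogger
from logging.handlers import QueueHandler, QueueListener
from queue import Queue

log_queue = Queue(-1)  # unbounded queue of LogRecords

db_handler = DBHandler(backup_logger_name='backup_logger')
listener = QueueListener(log_queue, db_handler)
listener.start()  # DB inserts now happen on the listener's thread

db_logger = getLogger('logger')
db_logger.addHandler(QueueHandler(log_queue))  # non-blocking for callers

# ... at shutdown:
# listener.stop()
```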
I am digging this out again.
There is a solution with SQLAlchemy (Pyramid is NOT required for this recipe):
https://docs.pylonsproject.org/projects/pyramid-cookbook/en/latest/logging/sqlalchemy_logger.html
You could also improve logging by adding extra fields; here is a guide: https://stackoverflow.com/a/17558764/1115187
Fallback to FS
Not sure that this is 100% correct, but you could have two handlers:
database handler (writes to the DB)
file handler (writes to a file or stream)
Just wrap the DB commit in a try-except. But be aware: the file will contain ALL log entries, not only the ones for which the DB write failed.
Old question, but dropping this here for others. If you want to use Python logging, you can add two handlers. One for writing to file: a rotating file handler. This is robust, and works regardless of whether the DB is up.
The other one can write to another service/module, like a pymongo integration.
Look up logging.config for how to set up your handlers from code or JSON; a dictConfig sketch follows.
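A minimal dictConfig sketch of that setup. The DB handler's import path (myapp.handlers.DBHandler) is a hypothetical placeholder for whatever database handler you write; the rotating file handler comes from the standard library:

```python
import logging.config

LOGGING = {
    'version': 1,
    'formatters': {
        'default': {
            'format': '%(asctime)s - %(name)s - %(levelname)s - %(message)s',
        },
    },
    'handlers': {
        'file': {
            'class': 'logging.handlers.RotatingFileHandler',
            'filename': 'app.log',
            'maxBytes': 10 * 1024 * 1024,
            'backupCount': 5,
            'formatter': 'default',
        },
        'db': {
            # Hypothetical import path for your custom DB handler.
            'class': 'myapp.handlers.DBHandler',
            'formatter': 'default',
        },
    },
    'root': {'level': 'DEBUG', 'handlers': ['file', 'db']},
}

logging.config.dictConfig(LOGGING)
```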
Related
I use PyCharm to write a Python 3 web app project using the Tornado web framework.
The listing service has been built already. I need to build the remaining two components: the user service and the public API layer. The implementation of the listing service can serve as a good starting point for learning how to structure a web application using the Tornado web framework.
I am required to use Tornado's built-in framework for HTTP requests.
The error occurs at listening (app.listen(options.port)) when I try to run the program:
Traceback (most recent call last):
File "D:/Bill/python/Tornado/99-python-exercise-master/listing_service.py", line 203, in <module>
app.listen(options.port)
File "C:\Program Files\Python38\lib\site-packages\tornado\web.py", line 2116, in listen
server.listen(port, address)
File "C:\Program Files\Python38\lib\site-packages\tornado\tcpserver.py", line 152, in listen
self.add_sockets(sockets)
File "C:\Program Files\Python38\lib\site-packages\tornado\tcpserver.py", line 165, in add_sockets
self._handlers[sock.fileno()] = add_accept_handler(
File "C:\Program Files\Python38\lib\site-packages\tornado\netutil.py", line 279, in add_accept_handler
io_loop.add_handler(sock, accept_handler, IOLoop.READ)
File "C:\Program Files\Python38\lib\site-packages\tornado\platform\asyncio.py", line 100, in add_handler
self.asyncio_loop.add_reader(fd, self._handle_events, fd, IOLoop.READ)
File "C:\Program Files\Python38\lib\asyncio\events.py", line 501, in add_reader
raise NotImplementedError
NotImplementedError
code:
import tornado.ioloop
import tornado.web
import tornado.log
import tornado.options
import sqlite3
import logging
import json
import time
class App(tornado.web.Application):
def __init__(self, handlers, **kwargs):
super().__init__(handlers, **kwargs)
# Initialising db connection
self.db = sqlite3.connect("listings.db")
self.db.row_factory = sqlite3.Row
self.init_db()
def init_db(self):
cursor = self.db.cursor()
# Create table
cursor.execute(
"CREATE TABLE IF NOT EXISTS 'listings' ("
+ "id INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT,"
+ "user_id INTEGER NOT NULL,"
+ "listing_type TEXT NOT NULL,"
+ "price INTEGER NOT NULL,"
+ "created_at INTEGER NOT NULL,"
+ "updated_at INTEGER NOT NULL"
+ ");"
)
self.db.commit()
class BaseHandler(tornado.web.RequestHandler):
def write_json(self, obj, status_code=200):
self.set_header("Content-Type", "application/json")
self.set_status(status_code)
self.write(json.dumps(obj))
# /listings
class ListingsHandler(BaseHandler):
@tornado.gen.coroutine
def get(self):
# Parsing pagination params
page_num = self.get_argument("page_num", 1)
page_size = self.get_argument("page_size", 10)
try:
page_num = int(page_num)
except:
logging.exception("Error while parsing page_num: {}".format(page_num))
self.write_json({"result": False, "errors": "invalid page_num"}, status_code=400)
return
try:
page_size = int(page_size)
except:
logging.exception("Error while parsing page_size: {}".format(page_size))
self.write_json({"result": False, "errors": "invalid page_size"}, status_code=400)
return
# Parsing user_id param
user_id = self.get_argument("user_id", None)
if user_id is not None:
try:
user_id = int(user_id)
except:
self.write_json({"result": False, "errors": "invalid user_id"}, status_code=400)
return
# Building select statement
select_stmt = "SELECT * FROM listings"
# Adding user_id filter clause if param is specified
if user_id is not None:
select_stmt += " WHERE user_id=?"
# Order by and pagination
limit = page_size
offset = (page_num - 1) * page_size
select_stmt += " ORDER BY created_at DESC LIMIT ? OFFSET ?"
# Fetching listings from db
if user_id is not None:
args = (user_id, limit, offset)
else:
args = (limit, offset)
cursor = self.application.db.cursor()
results = cursor.execute(select_stmt, args)
listings = []
for row in results:
fields = ["id", "user_id", "listing_type", "price", "created_at", "updated_at"]
listing = {
field: row[field] for field in fields
}
listings.append(listing)
self.write_json({"result": True, "listings": listings})
@tornado.gen.coroutine
def post(self):
# Collecting required params
user_id = self.get_argument("user_id")
listing_type = self.get_argument("listing_type")
price = self.get_argument("price")
# Validating inputs
errors = []
user_id_val = self._validate_user_id(user_id, errors)
listing_type_val = self._validate_listing_type(listing_type, errors)
price_val = self._validate_price(price, errors)
time_now = int(time.time() * 1e6) # Converting current time to microseconds
# End if we have any validation errors
if len(errors) > 0:
self.write_json({"result": False, "errors": errors}, status_code=400)
return
# Proceed to store the listing in our db
cursor = self.application.db.cursor()
cursor.execute(
"INSERT INTO 'listings' "
+ "('user_id', 'listing_type', 'price', 'created_at', 'updated_at') "
+ "VALUES (?, ?, ?, ?, ?)",
(user_id_val, listing_type_val, price_val, time_now, time_now)
)
self.application.db.commit()
# Error out if we fail to retrieve the newly created listing
if cursor.lastrowid is None:
self.write_json({"result": False, "errors": ["Error while adding listing to db"]}, status_code=500)
return
listing = dict(
id=cursor.lastrowid,
user_id=user_id_val,
listing_type=listing_type_val,
price=price_val,
created_at=time_now,
updated_at=time_now
)
self.write_json({"result": True, "listing": listing})
def _validate_user_id(self, user_id, errors):
try:
user_id = int(user_id)
return user_id
except Exception as e:
logging.exception("Error while converting user_id to int: {}".format(user_id))
errors.append("invalid user_id")
return None
def _validate_listing_type(self, listing_type, errors):
if listing_type not in {"rent", "sale"}:
errors.append("invalid listing_type. Supported values: 'rent', 'sale'")
return None
else:
return listing_type
def _validate_price(self, price, errors):
# Convert string to int
try:
price = int(price)
except Exception as e:
logging.exception("Error while converting price to int: {}".format(price))
errors.append("invalid price. Must be an integer")
return None
if price < 1:
errors.append("price must be greater than 0")
return None
else:
return price
# /listings/ping
class PingHandler(tornado.web.RequestHandler):
@tornado.gen.coroutine
def get(self):
self.write("pong!")
def make_app(options):
return App([
(r"/listings/ping", PingHandler),
(r"/listings", ListingsHandler),
], debug=options.debug)
if __name__ == "__main__":
# Define settings/options for the web app
# Specify the port number to start the web app on (default value is port 6000)
tornado.options.define("port", default=6000)
# Specify whether the app should run in debug mode
# Debug mode restarts the app automatically on file changes
tornado.options.define("debug", default=True)
# Read settings/options from command line
tornado.options.parse_command_line()
# Access the settings defined
options = tornado.options.options
# Create web app
app = make_app(options)
app.listen(options.port)
logging.info("Starting listing service. PORT: {}, DEBUG: {}".format(options.port, options.debug))
# Start event loop
tornado.ioloop.IOLoop.instance().start()
How to fix this problem?
Python 3.8 made a backwards-incompatible change to the asyncio package used by Tornado: on Windows, the default event loop became the proactor event loop, which does not implement add_reader, hence the NotImplementedError. Applications that use Tornado on Windows with Python 3.8 must call asyncio.set_event_loop_policy(asyncio.WindowsSelectorEventLoopPolicy()) at the beginning of their main file/function (as documented on the home page of tornadoweb.org).
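Applied to the script above (reusing its imports and make_app), the fix would look roughly like this; only the guarded startup block changes:

```python
import asyncio
import sys

if __name__ == "__main__":
    # On Windows with Python 3.8+, switch back to the selector event loop,
    # which supports the add_reader call Tornado relies on.
    if sys.platform == 'win32':
        asyncio.set_event_loop_policy(asyncio.WindowsSelectorEventLoopPolicy())
    tornado.options.define("port", default=6000)
    tornado.options.define("debug", default=True)
    tornado.options.parse_command_line()
    options = tornado.options.options
    app = make_app(options)
    app.listen(options.port)
    tornado.ioloop.IOLoop.instance().start()
```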
I have read Flask - How to store logs and add additional information, and so on.
But I don't want to write code like extra={} everywhere.
I tried customizing the Flask app's logger using an AppFormatter, but it doesn't work. Here is the code sample:
import logging
from flask import session, Flask
from logging.handlers import RotatingFileHandler
class AppFormatter(logging.Formatter):
def format(self, record):
# fixme: AppFormatter.format is not called
s = super(AppFormatter, self).format(record)
user_id = session.get('user_id', '?')
        username = session.get('fullname', '??')
msg = '{} - {} - {}'.format(s, user_id, username)
return msg
LOG_FORMAT = '[%(asctime)s]%(module)s - %(funcName)s - %(message)s'
defaultFormat = AppFormatter(LOG_FORMAT)
def initLogger(logger, **kwargs):
file = kwargs.pop('file', 'debug.log')
fmt = kwargs.pop('format', defaultFormat)
level = kwargs.pop('level', logging.DEBUG)
maxBytes = kwargs.pop('maxBytes', 10 * 1024 * 1024)
backupCount = kwargs.pop('backupCount', 5)
hdl_file = RotatingFileHandler(file, maxBytes=maxBytes, backupCount=backupCount)
hdl_file.setLevel(level)
logger.addHandler(hdl_file)
for hdl in logger.handlers:
hdl.setFormatter(fmt)
app = Flask(__name__)
initLogger(app.logger)
app.run()
Why is AppFormatter.format not called, while app.logger still writes the messages to stdout?
Try this out
class AppFormatter(logging.Formatter):
def format(self, record):
user_id = session.get('user_id', '?')
        username = session.get('fullname', '??')
record.msg = '{} - {} - {}'.format(record.getMessage(), user_id, username)
return super(AppFormatter, self).format(record)
I am trying to save some columns (e.g. tags, models) as JSON-encoded strings,
and I would like to keep them decoded while in use.
I have read several references about adding configs to disable autocommit and autoflush, but it doesn't work.
When the instance has been added to db.session and a value is changed afterwards, the ORM still tries to commit an UPDATE operation and then raises TypeError.
Here is my code.
```python
import json
from sqlalchemy import orm
from flask_sqlalchemy import SQLAlchemy
session_options = dict(
bind=None,
autoflush=False,
autocommit=False,
expire_on_commit=False,
)
db = SQLAlchemy(session_options=session_options)
class Sample(db.Model):
id = db.Column(db.Integer, primary_key=True, autoincrement=True)
# tags, models : string of json.dumps(array)
tags = db.Column(db.String(128), default='')
models = db.Column(db.String(128), default='')
def __init__(self, **kwargs):
cls = self.__class__
super(cls, self).__init__(**kwargs)
self.formatting()
    @orm.reconstructor
def init_on_load(self):
self.formatting()
def formatting(self):
self.tags = json.loads(self.tags)
self.models = json.loads(self.models)
def save(self):
self.tags = json.dumps(self.tags)
self.models = json.dumps(self.models)
db.session.add(self)
db.session.commit()
self.formatting()
## fixme !!!
## formatting after saved will cause auto-commit and raise TypeError
```
Thank you :)
ps: Flask-SQLAlchemy==2.3.2
This error was raised because db.session.close() was not called after db.session.commit().
I was told that db.session.close() is automatically called inside db.session.commit(), but reality has proven that belief wrong.
I tried to read the source code of SQLAlchemy, and I found that db.session is an instance of sqlalchemy.orm.scoping.scoped_session, NOT of sqlalchemy.orm.SessionTransaction.
Here is the source code of sqlalchemy.orm.SessionTransaction:
def commit(self):
self._assert_active(prepared_ok=True)
if self._state is not PREPARED:
self._prepare_impl()
if self._parent is None or self.nested:
for t in set(self._connections.values()):
t[1].commit()
self._state = COMMITTED
self.session.dispatch.after_commit(self.session)
if self.session._enable_transaction_accounting:
self._remove_snapshot()
self.close()
return self._parent
It’s really confusing.
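Based on that, a minimal sketch of a fixed save() for the Sample model above, with an explicit close so that re-decoding the attributes afterwards cannot trigger another (failing) UPDATE on a later flush:

```python
def save(self):
    # Serialize the lists for storage.
    self.tags = json.dumps(self.tags)
    self.models = json.dumps(self.models)
    db.session.add(self)
    db.session.commit()
    # Detach the instance; expire_on_commit=False keeps the
    # attributes loaded, and mutations below are no longer tracked.
    db.session.close()
    self.formatting()
```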
If you want to reproduce this error, here is the test code:
"""
# snippet for testing <class:Sample>
"""
from flask import Flask
app = Flask(__name__)
app.config.from_mapping(
SQLALCHEMY_ECHO=True,
SQLALCHEMY_TRACK_MODIFICATIONS=False,
SQLALCHEMY_DATABASE_URI='sqlite:///test_orm.sqlite.db',
)
db.init_app(app=app)
db.app = app
db.create_all()
d1 = dict(
tags='["python2","flask"]',
models='["m1"]'
)
m1 = Sample(**d1)
print(1111, type(m1.tags), m1.tags)
m1.save()
print(1112, type(m1.tags), m1.tags)
dm1 = Sample.query.filter(Sample.id == m1.id).all()[0]
print(1113, dm1, type(dm1.tags), dm1.tags)
## fixme[Q1] !!!
## if not continue with $d2, it won't raise error of UPDATE $d1
d2 = dict(
tags='["python3","flask"]',
models='["m2", "m3"]'
)
m2 = Sample(**d2)
print(2221, type(m2.tags), m2.tags)
## fixme[Q1] !!!
# db.session.close()
## If session was not closed, error raise here.
m2.save()
print(2222, type(m2.tags), m2.tags)
dm2 = Sample.query.filter(Sample.id == m2.id).all()[0]
print(2223, dm2, type(dm2.tags), dm2.tags)
Thank you for reading; I hope this resolves the same confusion for you.
I wrote a small CLI todo app with docopt, but when I run it with python t.py I get this exception at the end, although everything seems to work fine. When I pass a command to the app, I get no exceptions at all. One more thing: if I remove the __del__ method, no exception appears, but I think we need to close the sqlite db connection. Any suggestions?
Exception AttributeError: "'Todo' object has no attribute 'db'" in <bound method Todo.__del__ of <__main__.Todo object at 0x1038dac50>> ignored
App code:
"""t, a unix command-line todo application
Usage:
t add <task>
t check <id>
t uncheck <id>
t clear
t ls [--all]
t -h | --help
t --version
Commands:
add Add a new task
check Check a new task as done
uncheck Uncheck a task as done
clear Refresh the database
ls List all tasks
Options:
-h --help Show this screen.
--version Show version.
--all List all tasks
"""
import sqlite3
import os
import datetime
from docopt import docopt
from termcolor import colored
from prettytable import PrettyTable
SMILEY = "\U0001F603"  # Smiley emoji
GRIN = "\U0001F601"    # Grinning face emoji
def echo(msg, err=False):
"""
A simple function for printing to terminal with colors and emoji's
"""
    if err:
        print(colored(msg + " " + GRIN, "red"))
    else:
        print(colored(msg + " " + SMILEY, "cyan"))
class Todo(object):
def __init__(self):
"""
Set up the db and docopt upon creation of object
"""
self.arg = docopt(__doc__, version=0.10)
# Create a path to store the database file
db_path = os.path.expanduser("~/")
self.db_path = db_path + "/" + ".t-db"
self.init_db()
def init_db(self):
self.db = sqlite3.connect(self.db_path)
self.cursor = self.db.cursor()
self.cursor.execute('''
CREATE TABLE IF NOT EXISTS todo(id INTEGER PRIMARY KEY, task TEXT,
done INT, date_added TEXT, date_completed TEXT)
''')
self.db.commit()
def run(self):
"""
Parse the arg's using docopt and route to the respoctive methods
"""
if self.arg['add']:
self.add_task()
elif self.arg['check']:
self.check_task()
elif self.arg['uncheck']:
self.uncheck_task()
elif self.arg['clear']:
self.clear_task()
else:
if self.arg['--all']:
self.list_task()
else:
self.list_pending_tasks()
def _record_exists(self, id):
"""
Checks if the record exists in the db
"""
self.cursor.execute('''
SELECT * FROM todo WHERE id=?
''', (id,))
record = self.cursor.fetchone()
if record is None:
return False
return True
def _is_done(self, id):
"""
Checks if the task has already been marked as done
"""
self.cursor.execute('''
SELECT done FROM todo WHERE id=?
''', (id,))
record = self.cursor.fetchone()
if record == 0:
return False
return True
def add_task(self):
"""
Add a task todo to the db
"""
task = self.arg['<task>']
date = datetime.datetime.now()
date_now = "%s-%s-%s" % (date.day, date.month, date.year)
self.cursor.execute('''
INSERT INTO todo(task, done, date_added)
VALUES (?, ?, ?)
''', (str(task), 0, date_now))
self.db.commit()
echo("The task has been been added to the list")
def check_task(self):
"""
Mark a task as done
"""
task_id = self.arg['<id>']
date = datetime.datetime.now()
date_now = "%s-%s-%s" % (date.day, date.month, date.year)
if self._record_exists(task_id):
self.cursor.execute('''
UPDATE todo SET done=?, date_completed=? WHERE Id=?
''', (1, date_now, int(task_id)))
echo("Task %s has been marked as done" % str(task_id))
self.db.commit()
else:
echo("Task %s doesn't exist" % (str(task_id)), err=True)
def uncheck_task(self):
"""
Mark as done task as undone
"""
task_id = self.arg['<id>']
if self._record_exists(task_id):
self.cursor.execute('''
UPDATE todo SET done=? WHERE id=?
''', (0, int(task_id)))
echo("Task %s has been unchecked" % str(task_id))
self.db.commit()
else:
echo("Task %s doesn't exist" % str(task_id), err=True)
def list_task(self):
"""
Display all tasks in a table
"""
tab = PrettyTable(["Id", "Task Todo", "Done ?", "Date Added",
"Date Completed"])
tab.align["Id"] = "l"
tab.padding_width = 1
self.cursor.execute('''
SELECT id, task, done, date_added, date_completed FROM todo
''')
records = self.cursor.fetchall()
for each_record in records:
if each_record[2] == 0:
done = "Nop"
else:
done = "Yup"
if each_record[4] is None:
status = "Pending..."
else:
status = each_record[4]
tab.add_row([each_record[0], each_record[1], done,
each_record[3], status])
        print(tab)
def list_pending_tasks(self):
"""
Display all pending tasks in a tabular form
"""
tab = PrettyTable(["Id", "Task Todo", "Date Added"])
tab.align["Id"] = "l"
tab.padding_width = 1
self.cursor.execute('''
SELECT id, task, date_added FROM todo WHERE done=?
''', (int(0),))
records = self.cursor.fetchall()
for each_record in records:
tab.add_row([each_record[0], each_record[1], each_record[2]])
        print(tab)
def clear_task(self):
"""
Delete the table to refresh the app
"""
self.cursor.execute('''
DROP TABLE todo
''')
self.db.commit()
def __del__(self):
self.db.close()
def main():
"""
Entry point for console script
"""
app = Todo()
app.run()
if __name__ == "__main__":
main()
My debugging session tells me that docopt immediately bails out if it can't parse the given options (in your case, for example, when no options are given at all).
So in your __init__, before self.init_db() gets called to set up self.db, docopt() is called, fails to parse the (not) given options, and immediately tries to do something like exit(1) (I'm guessing here), which in turn tries to tear down the Todo object via the __del__ method; but the self.db member variable is not there yet.
So the "best" fix would probably be to set up the database before calling docopt, or to tell docopt that no options are OK as well.
Avoid the use of __del__. If you want to be sure everything is closed, I suggest you explicitly call self.db.close() in or after your run method; alternatively, register the close using the atexit module, as sketched below (see also this similar post).
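A minimal sketch of the atexit approach, applied to the Todo class above:

```python
import atexit

class Todo(object):
    def __init__(self):
        self.arg = docopt(__doc__, version=0.10)
        db_path = os.path.expanduser("~/")
        self.db_path = db_path + "/" + ".t-db"
        self.init_db()
        # Close the connection at interpreter exit instead of in __del__;
        # by this point self.db is guaranteed to exist.
        atexit.register(self.db.close)
```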
I'm having a few problems using the HTTPHandler of the Python logging module to push messages to a custom Django app. I have a separate daemon that is part of my infrastructure, and I would like it to push logs to Django so I've got everything in one place.
I'm using:
Ubuntu 10.04
Django 1.2.4
PostgreSQL 8.4
python 2.6.5
This is the model:
from django.db import models
# Create your models here.
class application(models.Model):
app_name = models.CharField(max_length= 20)
description = models.CharField(max_length = 500, null=True)
date = models.DateField()
def __unicode__(self):
return ("%s logs - %s") % (self.app_name, self.date.strftime("%d-%m-%Y"))
class log_entry(models.Model):
application = models.ForeignKey(application)
thread_name = models.CharField(max_length = 200,null = True)
name = models.CharField(max_length = 200,null = True)
thread = models.CharField(max_length=50, null = True)
created = models.FloatField(null = True)
process = models.IntegerField(null = True)
args = models.CharField(max_length = 200,null = True)
module = models.CharField(max_length = 256,null = True)
filename = models.CharField(max_length = 256,null = True)
levelno = models.IntegerField(null = True)
msg = models.CharField(max_length = 4096,null = True)
pathname = models.CharField(max_length = 1024,null = True)
lineno = models.IntegerField(null = True)
exc_text = models.CharField(max_length = 200, null = True)
exc_info = models.CharField(max_length = 200, null = True)
func_name = models.CharField(max_length = 200, null = True)
relative_created = models.FloatField(null = True)
levelname = models.CharField(max_length=10,null = True)
msecs = models.FloatField(null = True)
def __unicode__(self):
return self.levelname + " - " + self.msg
This is the view:
# Create your views here.
from django.shortcuts import render_to_response, get_list_or_404, get_object_or_404
from django.http import HttpResponse, HttpResponseRedirect
from django.views.decorators.csrf import csrf_protect, csrf_exempt
from inthebackgroundSite.log.models import log_entry, application
import datetime
@csrf_exempt
def log(request):
print request.POST
for element in request.POST:
print ('%s : %s') % (element, request.POST[element])
data = request.POST
today = datetime.date.today()
print today
app = application.objects.filter(app_name__iexact = request.POST["name"], date__iexact=today)
if not app:
print "didnt find a matching application. adding one now.."
print data["name"]
print today
app = application.objects.create(app_name = data["name"],
description = None,
date = today)
app.save()
if not app:
print "after save you cant get at it!"
newApplication = app
print app
print "found application"
newEntry = log_entry.objects.create(application = app,
thread_name = data["threadName"] ,
name = data["name"],
thread = data["thread"],
created = data["created"],
process = data["process"],
args = "'" + data["args"] + "'",
module = data["module"],
filename = data["filename"],
levelno = data["levelno"],
msg = data["msg"],
pathname = data["pathname"],
lineno = data["lineno"],
exc_text = data["exc_text"],
exc_info = data["exc_info"],
func_name = data["funcName"],
relative_created = data["relativeCreated"],
levelname = data["levelname"],
msecs = data["msecs"],
)
print newEntry
#newEntry.save()
return HttpResponse("OK")
And this is the call in the Python code that sends a message:
import os
import logging
import logging.handlers
import time
if __name__ == '__main__':
formatter = logging.Formatter("%(name)s %(levelno)s %(levelname)s %(pathname)s %(filename)s%(module)s %(funcName)s %(lineno)d %(created)f %(asctime)s %(msecs)d %(thread)d %(threadName)s %(process)d %(processName)s %(message)s ")
log = logging.getLogger("ShoutGen")
#logLevel = "debug"
#log.setLevel(logLevel)
http = logging.handlers.HTTPHandler("192.168.0.5:9000", "/log/","POST")
http.setFormatter(formatter)
log.addHandler(http)
log.critical("Finished MountGen init")
time.sleep(20)
http.close()
Now, the first time I send a message with empty tables, it works fine: a new app row gets created and a new log message gets created. But the second time I call it, I get
<QueryDict: {u'msecs': [u'224.281072617'], u'args': [u'()'], u'name': [u'ShoutGen'], u'thread': [u'140445579720448'], u'created': [u'1299046203.22'], u'process': [u'16172'], u'threadName': [u'MainThread'], u'module': [u'logtest'], u'filename': [u'logtest.py'], u'levelno': [u'50'], u'processName': [u'MainProcess'], u'pathname': [u'logtest.py'], u'lineno': [u'19'], u'exc_text': [u'None'], u'exc_info': [u'None'], u'funcName': [u'<module>'], u'relativeCreated': [u'7.23600387573'], u'levelname': [u'CRITICAL'], u'msg': [u'Finished MountGen init']}>
msecs : 224.281072617
args : ()
name : ShoutGen
thread : 140445579720448
created : 1299046203.22
process : 16172
threadName : MainThread
module : logtest
filename : logtest.py
levelno : 50
processName : MainProcess
pathname : logtest.py
lineno : 19
exc_text : None
exc_info : None
funcName : <module>
relativeCreated : 7.23600387573
levelname : CRITICAL
msg : Finished MountGen init
2011-03-02
[sql] SELECT ...
FROM "log_application"
WHERE (UPPER("log_application"."date"::text) = UPPER(2011-03-02)
AND UPPER("log_application"."app_name"::text) = UPPER(ShoutGen))
[sql] (5.10ms) Found 1 matching rows
[<application: ShoutGen logs - 02-03-2011>]
found application
[sql] SELECT ...
FROM "log_log_entry" LIMIT 21
[sql] (4.05ms) Found 2 matching rows
[sql] (9.14ms) 2 queries with 0 duplicates
[profile] Total time to render was 0.44s
Traceback (most recent call last):
File "/usr/local/lib/python2.6/dist-packages/django/core/servers/basehttp.py", line 281, in run
self.finish_response()
File "/usr/local/lib/python2.6/dist-packages/django/core/servers/basehttp.py", line 321, in finish_response
self.write(data)
File "/usr/local/lib/python2.6/dist-packages/django/core/servers/basehttp.py", line 417, in write
self._write(data)
File "/usr/lib/python2.6/socket.py", line 300, in write
self.flush()
File "/usr/lib/python2.6/socket.py", line 286, in flush
self._sock.sendall(buffer)
error: [Errno 32] Broken pipe
and no extra rows are inserted into the log_log_entry table. So I don't really know why this is happening at this point.
I've looked around, and apparently the broken-pipe traceback isn't a problem, just something that browsers do. But I'm not using a browser, so I'm not sure what the issue is.
It may be that the exception is causing a transaction to roll back and undo your changes. Are you using TransactionMiddleware? You could try the transaction.autocommit decorator on your view.
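With Django 1.2, that would look roughly like this (a sketch; transaction.autocommit belongs to the old transaction API and was removed in later Django versions):

```python
from django.db import transaction
from django.http import HttpResponse
from django.views.decorators.csrf import csrf_exempt

@csrf_exempt
@transaction.autocommit  # commit each ORM write immediately
def log(request):
    # ... same body as the view above ...
    return HttpResponse("OK")
```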
If the "broken pipe" error keeps happening, it's worth finding out why. The HTTPHandler does a normal POST and waits for the response ("OK" from your view) in its emit() call, and it shouldn't break the connection until after this.
You could try doing an equivalent post to your view from a test script, using httplib and urllib as HTTPHandler itself does. Basically, just urlencode a dict for the POST data, as if it were a LogRecord's dict.
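A minimal sketch of such a test script (host, port, and path are taken from the question; the field values are plausible stand-ins copied from the QueryDict above). Written with the Python 3 module names; on Python 2.6 the equivalents are httplib and urllib:

```python
import http.client   # 'httplib' on Python 2, as used by HTTPHandler
import urllib.parse  # 'urllib' on Python 2

# Mimic what HTTPHandler.emit() sends: the LogRecord's attribute dict,
# form-encoded in a POST body.
record_dict = {
    'name': 'ShoutGen', 'msg': 'Finished MountGen init',
    'levelname': 'CRITICAL', 'levelno': 50, 'lineno': 19,
    'pathname': 'logtest.py', 'filename': 'logtest.py',
    'module': 'logtest', 'funcName': '<module>',
    'created': 1299046203.22, 'msecs': 224.28, 'relativeCreated': 7.23,
    'thread': 140445579720448, 'threadName': 'MainThread',
    'process': 16172, 'processName': 'MainProcess',
    'args': '()', 'exc_text': 'None', 'exc_info': 'None',
}

body = urllib.parse.urlencode(record_dict)
conn = http.client.HTTPConnection('192.168.0.5', 9000)
conn.request('POST', '/log/', body,
             {'Content-Type': 'application/x-www-form-urlencoded'})
resp = conn.getresponse()
print(resp.status, resp.read())  # expect 200 / b'OK'
conn.close()
```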