How to store RSA-encrypted data in PostgreSQL using PyCrypto? - python

I want to use a public/private key pair to secure my UserInfo data. I'm new to PyCrypto and PostgreSQL.
I have some items to clarify:
Are the public key and private key constant values?
If they are constant, how can I store them properly?
Lastly, and most importantly, how can I store my encrypted data in PostgreSQL, and retrieve it for verification?
Would you guide me on how to deal with Crypto.PublicKey.RSA as a way to secure my data?
Environment: Python 2.5, PyCrypto 2.3, PostgreSQL 8.3 with UTF-8 encoding
UserInfo model:
class UserInfo(models.Model):
    userid = models.TextField(primary_key = True)
    password = models.TextField(null = True)
    keyword = models.TextField(null = True)
    key = models.TextField(null = True, blank = True)
    date = models.DateTimeField(null = True, blank = True)
UPDATE 1
tests.py:
# -*- encoding:utf-8 -*-
import os
from os.path import abspath, dirname
import sys

from py23.service.models import UserInfo
from Crypto import Random

# Set up django
project_dir = abspath(dirname(dirname(__file__)))
sys.path.insert(0, project_dir)
os.environ['DJANGO_SETTINGS_MODULE'] = 'py23.settings'

from django.test.testcases import TestCase

class AuthenticationTestCase(TestCase):
    def test_001_registerUserInfo(self):
        import Crypto.PublicKey.RSA
        import Crypto.Util.randpool

        #pool = Crypto.Util.randpool.RandomPool()
        rng = Random.new().read

        # create RSA object with a random key
        # 1024 bit
        #rsa = Crypto.PublicKey.RSA.generate(1024, pool.get_bytes)
        rsa = Crypto.PublicKey.RSA.generate(1024, rng)

        # retrieve the public key
        pub_rsa = rsa.publickey()

        # create RSA object from a tuple
        # rsa.n is the public key?, rsa.d is the private key?
        priv_rsa = Crypto.PublicKey.RSA.construct((rsa.n, rsa.e, rsa.d))

        # encryption
        enc = pub_rsa.encrypt("hello", "")
        # decryption
        dec = priv_rsa.decrypt(enc)

        print "private: n=%d, e=%d, d=%d, p=%d, q=%d, u=%d" % (rsa.n, rsa.e, rsa.d, rsa.p, rsa.q, rsa.u)
        print "public: n=%d, e=%d" % (pub_rsa.n, pub_rsa.e)
        print "encrypt:", enc
        print "decrypt:", dec

        # text to be signed
        text = "hello"
        signature = priv_rsa.sign(text, "")
        # check that the text has not changed
        print pub_rsa.verify(text, signature)
        print pub_rsa.verify(text+"a", signature)

        # userid = models.TextField(primary_key = True)
        # password = models.TextField(null = True)
        # keyword = models.TextField(null = True)
        # key = models.TextField(null = True, blank = True) is it correct to store the public key here?
        # date = models.DateTimeField(null = True, blank = True)
        userInfo = UserInfo(userid='test1', password=enc[0], key=pub_rsa.n)
        userInfo.save()

        print "ok"
The result (failed):
======================================================================
ERROR: test_001_registerUserInfo (py23.service.auth.tests.AuthenticationTestCase)
----------------------------------------------------------------------
Traceback (most recent call last):
File "C:\PIDevelopment\workspace37_pydev\pyh23\py23\service\auth\tests.py", line 64, in test_001_registerUserInfo
userInfo.save()
File "C:\Python25\lib\site-packages\django\db\models\base.py", line 458, in save
self.save_base(using=using, force_insert=force_insert, force_update=force_update)
File "C:\Python25\lib\site-packages\django\db\models\base.py", line 551, in save_base
result = manager._insert(values, return_id=update_pk, using=using)
File "C:\Python25\Lib\site-packages\django\db\models\manager.py", line 195, in _insert
return insert_query(self.model, values, **kwargs)
File "C:\Python25\lib\site-packages\django\db\models\query.py", line 1524, in insert_query
return query.get_compiler(using=using).execute_sql(return_id)
File "C:\Python25\lib\site-packages\django\db\models\sql\compiler.py", line 788, in execute_sql
cursor = super(SQLInsertCompiler, self).execute_sql(None)
File "C:\Python25\lib\site-packages\django\db\models\sql\compiler.py", line 732, in execute_sql
cursor.execute(sql, params)
File "C:\Python25\lib\site-packages\django\db\backends\util.py", line 15, in execute
return self.cursor.execute(sql, params)
File "C:\Python25\lib\site-packages\django\db\backends\postgresql_psycopg2\base.py", line 44, in execute
return self.cursor.execute(query, args)
DatabaseError: invalid byte sequence for encoding "UTF8": 0x97
HINT: This error can also happen if the byte sequence does not match the encoding expected by the server, which is controlled by "client_encoding".
----------------------------------------------------------------------
Ran 1 test in 90.047s
FAILED (errors=1)

Your problem is that you are trying to store binary data in a text field. Try armoring the data (for example, base64-encoding it) or use a bytea column (with proper encoding/decoding).
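A minimal sketch of the armoring route, assuming you keep the existing TextField columns from your model (the field names and the 'test1' row are taken from your test, and this is untested against your schema):

import base64

# pub_rsa.encrypt() returns a tuple; the first element is the raw ciphertext bytes
ciphertext = pub_rsa.encrypt("hello", "")[0]

# base64 turns the binary ciphertext into plain ASCII, which a UTF-8 TextField accepts
armored = base64.b64encode(ciphertext)
UserInfo(userid='test1', password=armored, key=str(pub_rsa.n)).save()

# reverse the armoring before decrypting/verifying later
stored = UserInfo.objects.get(userid='test1')
plaintext = priv_rsa.decrypt(base64.b64decode(stored.password))

The bytea route stores the raw bytes directly (via psycopg2's Binary adapter) and avoids the encoding round trip, but base64 in a TextField is the smaller change to your current model.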

Related

f-string in Python with the atlassian-python package gives Bad Request

I'm trying to fetch all issues in JIRA for all projects. When doing the call one project at a time, it works perfectly. When trying to run it in a for loop, I get a 400 Client Error.
The way that works:
results = jira_instance.jql("project = FJA", limit = 100, fields=["issuetype", "status", "summary"])
The way that does not work:
projects = ["ADV", "WS", "FJA", "FOIJ", "QW", "UOI"]
for key in projects:
    results = jira_instance.jql(f"project = {key})", limit = 100, fields=["issuetype", "status", "summary"])
The error:
Traceback (most recent call last):
File "C:\jira-api-python\jira.py", line 24, in <module>
results = jira_instance.jql("project = {key}", limit = 100, fields=["issuetype", "status", "summary"])
File "C:\.virtualenvs\jira-api-python-rouJrYa4\lib\site-packages\atlassian\jira.py", line 2271, in jql
return self.get("rest/api/2/search", params=params)
File "C:\.virtualenvs\jira-api-python-rouJrYa4\lib\site-packages\atlassian\rest_client.py", line 264, in get
response = self.request(
File "C:\.virtualenvs\jira-api-python-rouJrYa4\lib\site-packages\atlassian\rest_client.py", line 236, in request
response.raise_for_status()
File "C:\.virtualenvs\jira-api-python-rouJrYa4\lib\site-packages\requests\models.py", line 943, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 400 Client Error: Bad Request for url: https://stuff.atlassian.net/rest/api/2/search?startAt=0&maxResults=100&fields=issuetype%2Cstatus%2Csummary&jql=project+%3D+%7Bkey%7D
My guess is that I'm not using the f-string correctly. But when I print the value of {key} it is correct.
Any pointers would be greatly appreciated.
Thank you for your time.
Edit:
Added the full traceback; I only removed the path to my machine and changed the URL of the endpoint. Below is the full file with credentials and endpoint redacted. The idea is to create a CSV for each project.
The full code:
from atlassian import Jira
import pandas as pd
import time

jira_instance = Jira(
    url = "https://stuff.atlassian.net/",
    username = "user",
    password = "pass",
)

projects = ["ADV", "WS", "FJA", "FOIJ", "QW", "UOI"]
FIELDS_OF_INTEREST = ["key", "fields.summary", "fields.status.name"]
timestamp = time.strftime("%Y%m%d-%H%M%S")
file_ending = ".csv"

for key in projects:
    print(f"stuff = {key}")
    results = jira_instance.jql(f"project = {key})", limit = 1000, fields=["issuetype", "status", "summary"])
I found a very simple solution.
In this snippet: results = jira_instance.jql(f"project = {key})", limit = 1000, fields=["issuetype", "status", "summary"])
the ) after {key} was not supposed to be there.
Thank you for the help.
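For reference, the corrected call is the same apart from dropping that stray parenthesis inside the JQL string:

results = jira_instance.jql(f"project = {key}", limit=1000, fields=["issuetype", "status", "summary"])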

Insert data in MongoDB with Python

This is my first shot at using databases and I'm having some trouble with the basics. I tried to look online but couldn't find answers to simple questions. When I try to add some info to my DB, I get a whole bunch of errors.
import pymongo

def get_db():
    from pymongo import MongoClient
    client = MongoClient("mongodb://xxxxxx:xxxxxx@ds029735.mlab.com:29735/xxxxxxx")
    db = client.myDB
    return db

def add_country(db):
    db.countries.insert({"name": "Canada"})

def get_country(db):
    return db.contries.find_one()

db = get_db()
add_country(db)
I got this error message:
File "/Users/vincentfortin/Desktop/Python_code/mongo.py", line 21, in <module>
add_country(db)
File "/Users/vincentfortin/Desktop/Python_code/mongo.py", line 11, in add_country
db.countries.insert({"name": "Canada"})
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pymongo/collection.py", line 2212, in insert
check_keys, manipulate, write_concern)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pymongo/collection.py", line 535, in _insert
check_keys, manipulate, write_concern, op_id, bypass_doc_val)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pymongo/collection.py", line 516, in _insert_one
check_keys=check_keys)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pymongo/pool.py", line 239, in command
read_concern)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pymongo/network.py", line 102, in command
helpers._check_command_response(response_doc, None, allowable_errors)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pymongo/helpers.py", line 205, in _check_command_response
raise OperationFailure(msg % errmsg, code, response)
pymongo.errors.OperationFailure: not authorized on myDB to execute command { insert: "countries", ordered: true, documents: [ { _id: ObjectId('579a6c6ed51bef1274162ff4'), name: "Canada" } ] }
Check twice whether the xxxxxxx in ds029735.mlab.com:29735/xxxxxxx is the same as the myDB in db = client.myDB. I mean, if your connection string is mongodb://username:password@ds029735.mlab.com:29735/xyz then your code should be db = client.xyz and not db = client.zyx (or some other name).
Check in the mLab control panel whether your user is read-only: http://i.imgur.com/It32S1d.png
Both of these issues produce errors like yours, so I don't know which one you are facing.
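A minimal sketch of the first point, with placeholder credentials; the database name at the end of the URI has to be the same one you access on the client object:

from pymongo import MongoClient

# the last path segment of the URI is the database your mLab user was created for ...
client = MongoClient("mongodb://user:password@ds029735.mlab.com:29735/myDB")

# ... so access that same database here, not some other name
db = client.myDB
db.countries.insert({"name": "Canada"})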
from pymongo import MongoClient
#import json

client = MongoClient('localhost', 27017)
mydb = client.db_University

def add_client(student_name, student_age, student_roll, student_branch):
    unique_client = mydb.students.find_one({"name": student_name}, {"_id": 0})
    if unique_client:
        return " already exists"
    else:
        mydb.students.insert(
            {
                "name": student_name,
                "age": student_age,
                "roll no": student_roll,
                "branch": student_branch,
            })
        return "student added successfully"

student_name = raw_input("Enter student name: ")
student_age = raw_input("Enter student age: ")
student_roll = raw_input("Enter student roll no: ")
student_branch = raw_input("Enter student branch: ")
print add_client(student_name, student_age, student_roll, student_branch)

def view_client(student_name):
    user = mydb.students.find_one({"name": student_name}, {"_id": 0})
    if user:
        name = user["name"]
        age = user["age"]
        rollno = user["roll no"]
        branch = user["branch"]
        return {"name": name, "age": age, "roll no": rollno, "branch": branch}
    else:
        return "Sorry, no such student exists"

user = raw_input("Enter student name to find: ")
user_data = view_client(user)
print user_data

Create an instance from a volume in OpenStack with python-novaclient

I am trying to create an instance from a bootable volume in OpenStack using python-novaclient.
The steps I am taking are the following:
Step 1: create a 100 GB volume from a CentOS image.
Step 2: create an instance from the volume created in step 1.
However, I must be doing something wrong or missing some information, because it is not able to complete the task.
Here are my commands in the Python shell.
import time, getpass
from cinderclient import client
from novaclient.client import Client
project_name = 'project'
region_name = 'region'
keystone_link = 'https://keystone.net:5000/v2.0'
network_zone = "Public"
key_name = 'key_pair'
user = 'user'
pswd = getpass.getpass('Password: ')
# create a connection
cinder = client.Client('1', user, pswd, project_name, keystone_link, region_name = region_name)
# get the volume id that we will attach
print(cinder.volumes.list())
[<Volume: 1d36203e-b90d-458f-99db-8690148b9600>, <Volume: d734f5fc-87f2-41dd-887e-c586bf76d116>]
vol1 = cinder.volumes.list()[1]
vol1.id
block_device_mapping = {'device_name': vol1.id, 'mapping': '/dev/vda'}
### +++++++++++++++++++++++++++++++++++++++++++++++++++++ ###
# now create a connection with nova and then create the instance object
nova = Client(2, user, pswd, project_name, keystone_link, region_name = region_name)
# find the image
image = nova.images.find(name="NETO CentOS 6.4 x86_64 v2.2")
# get the flavor
flavor = nova.flavors.find(name="m1.large")
#get the network and attach
network = nova.networks.find(label=network_zone)
nics = [{'net-id': network.id}]
# get the keyname and attach
key_pair = nova.keypairs.get(key_name)
s1 = 'nova-vol-test'
server = nova.servers.create(name = s1, image = image.id, block_device_mapping = block_device_mapping, flavor = flavor.id, nics = nics, key_name = key_pair.name)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python2.6/site-packages/novaclient/v1_1/servers.py", line 902, in create
**boot_kwargs)
File "/usr/lib/python2.6/site-packages/novaclient/v1_1/servers.py", line 554, in _boot
return_raw=return_raw, **kwargs)
File "/usr/lib/python2.6/site-packages/novaclient/base.py", line 100, in _create
_resp, body = self.api.client.post(url, body=body)
File "/usr/lib/python2.6/site-packages/novaclient/client.py", line 490, in post
return self._cs_request(url, 'POST', **kwargs)
File "/usr/lib/python2.6/site-packages/novaclient/client.py", line 465, in _cs_request
resp, body = self._time_request(url, method, **kwargs)
File "/usr/lib/python2.6/site-packages/novaclient/client.py", line 439, in _time_request
resp, body = self.request(url, method, **kwargs)
File "/usr/lib/python2.6/site-packages/novaclient/client.py", line 433, in request
raise exceptions.from_response(resp, body, url, method)
novaclient.exceptions.BadRequest: Block Device Mapping is Invalid: failed to get volume /dev/vda. (HTTP 400) (Request-ID: req-2b9db4e1-f24f-48c6-8660-822741ca52ad)
>>>
I tried to find documentation so that I could solve this on my own; however, I was not able to.
If anyone has tried this before, I would appreciate their help with this.
Thanks,
Murtaza
I was able to get it to work by using this dictionary:
block_dev_mapping = {'vda':'uuid of the volume you want to use'}
I then called it in the create method like this:
instance = nova.servers.create(name="python-test3", image='', block_device_mapping=block_dev_mapping,
                               flavor=flavor, key_name="my-keypair", nics=nics)
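Putting that together with the objects already created in the question, a rough (untested) sketch of the whole boot-from-volume call might look like this; vol1.id, flavor, nics and key_pair are the ones defined above:

# map the root device name to the UUID of the bootable volume
block_dev_mapping = {'vda': vol1.id}

# image is passed as '' because the operating system already lives on the volume
instance = nova.servers.create(name='nova-vol-test',
                               image='',
                               flavor=flavor.id,
                               block_device_mapping=block_dev_mapping,
                               nics=nics,
                               key_name=key_pair.name)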

Python with SQLAlchemy DB retrieval

I am quite new to Python and to SQLAlchemy. I have written the following network program.
class SourcetoPort(Base):
    """"""
    __tablename__ = 'source_to_port'
    id = Column(Integer, primary_key=True)
    port_no = Column(Integer)
    src_address = Column(String)

    #----------------------------------------------------------------------
    def __init__(self, src_address, port_no):
        """"""
        self.src_address = src_address
        self.port_no = port_no

#
def act_like_switch(self, packet, packet_in):
    """
    Implement switch-like behavior.
    """
    # Learn the port for the source MAC
    print "RECIEVED FROM PORT ", packet_in.in_port, "SOURCE ", packet.src
    Session = sessionmaker(bind=engine)
    session = Session()
    self.mac_to_port[packet.src] = packet_in.in_port
    if(self.matrix.get((packet.src, packet.dst)) == None):
        # create a Session
        #print "creating a db session"
        #Session = sessionmaker(bind=engine)
        #session = Session()
        self.matrix[(packet.src, packet.dst)] = 0
        # Create an entry with address and port
        print "creating a new db entry"
        entry = SourcetoPort(src_address="packet.src", port_no=packet_in.in_port)
        # Add the record to the session object
        session.add(entry)
        session.commit()
    self.matrix[(packet.src, packet.dst)] += 1
    #if self.mac_to_port.get(packet.dst)!=None:
    if session.query(SourcetoPort).filter_by(src_address=packet.dst).all():
        #send this packet
        self.send_packet(packet_in.buffer_id, packet_in.data, self.mac_to_port[packet.dst], packet_in.in_port)
        #create a flow modification message
        msg = of.ofp_flow_mod()
        #set the fields to match from the incoming packet
        msg.match = of.ofp_match.from_packet(packet)
        #print "SENDING TO PORT " + str(self.mac_to_port[packet.dst]), packet.dst
        # send the rule to the switch so that it does not query the controller again.
        msg.actions.append(of.ofp_action_output(port=self.mac_to_port[packet.dst]))
        # push the rule
        self.connection.send(msg)
    else:
        #print 'flooding the packet '
        # Flood this packet out as we don't know about this node.
        self.send_packet(packet_in.buffer_id, packet_in.data,
                         of.OFPP_FLOOD, packet_in.in_port)
In the above code the line
if session.query(SourcetoPort).filter_by(src_address='packet.dst').all():
does not work as I expect. What I expect is that it should retrieve the entry from the SQLAlchemy database and, if it succeeds (i.e. the result is not empty), execute the following code.
When I try to print that line using print
print "session query", session.query(SourcetoPort).filter_by(src_address='packet.dst').all()
The output that I get is
session query []
Am I doing something wrong? It would be great if someone could point this out.
If I change the above line, based on the suggestion, to
print session.query(SourcetoPort).filter_by(src_address=packet.dst).count()
I get the following error:
creating a new db entry
RECIEVED FROM PORT 2 SOURCE 96:74:ba:a9:92:b9
creating a new db entry
ERROR:core:Exception while handling Connection!PacketIn...
Traceback (most recent call last):
File "ws_thesis/pox/pox/lib/revent/revent.py", line 234, in raiseEventNoErrors
return self.raiseEvent(event, *args, **kw)
File "ws_thesis/pox/pox/lib/revent/revent.py", line 281, in raiseEvent
rv = event._invoke(handler, *args, **kw)
File "ws_thesis/pox/pox/lib/revent/revent.py", line 159, in _invoke
return handler(self, *args, **kw)
File "ws_thesis/pox/tutorial.py", line 137, in _handle_PacketIn
self.act_like_switch(packet, packet_in)
File "ws_thesis/pox/tutorial.py", line 101, in act_like_switch
print session.query(SourcetoPort).filter_by(src_address=packet.dst).count()
File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2400, in count
return self.from_self(col).scalar()
File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2045, in scalar
ret = self.one()
File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2014, in one
ret = list(self)
File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2057, in __iter__
return self._execute_and_instances(context)
File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2072, in _execute_and_instances
result = conn.execute(querycontext.statement, self._params)
File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1405, in execute
If you want to filter items by the contents of a variable, you need to use that variable directly:
session.query(SourcetoPort).filter_by(src_address=packet.dst)
You were instead trying to filter by the string 'packet.dst'.
You made the same mistake when creating the SourcetoPort entry; you probably wanted to store the result of the packet.src expression, not the string 'packet.src':
SourcetoPort(src_address=packet.src, port_no=packet_in.in_port)
Note that calling .all() retrieves all matching entries from the database, but you are only using it in an if test. It would be more efficient (potentially much more so) to use .count() instead of .all() and let the database tell us how many items match:
if session.query(SourcetoPort).filter_by(src_address=packet.dst).count():
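Putting those together, a sketch of how the affected lines in act_like_switch could read (everything else in the method stays as in the question):

# store the actual source address, not the literal string "packet.src"
entry = SourcetoPort(src_address=packet.src, port_no=packet_in.in_port)
session.add(entry)
session.commit()

# let the database count matching rows instead of fetching them all
if session.query(SourcetoPort).filter_by(src_address=packet.dst).count():
    self.send_packet(packet_in.buffer_id, packet_in.data,
                     self.mac_to_port[packet.dst], packet_in.in_port)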

Python logging HTTPHandler Broken pipe error with django

I'm having a few problems using the HTTPHandler of Python logging to push messages to a custom Django app. I have a separate daemon that is part of my infrastructure, and I would like it to push its logs to Django so I have everything in one place.
I'm using:
Ubuntu 10.04
Django 1.2.4
PostgreSQL 8.4
python 2.6.5
This is the model
from django.db import models

# Create your models here.
class application(models.Model):
    app_name = models.CharField(max_length= 20)
    description = models.CharField(max_length = 500, null=True)
    date = models.DateField()

    def __unicode__(self):
        return ("%s logs - %s") % (self.app_name, self.date.strftime("%d-%m-%Y"))

class log_entry(models.Model):
    application = models.ForeignKey(application)
    thread_name = models.CharField(max_length = 200, null = True)
    name = models.CharField(max_length = 200, null = True)
    thread = models.CharField(max_length=50, null = True)
    created = models.FloatField(null = True)
    process = models.IntegerField(null = True)
    args = models.CharField(max_length = 200, null = True)
    module = models.CharField(max_length = 256, null = True)
    filename = models.CharField(max_length = 256, null = True)
    levelno = models.IntegerField(null = True)
    msg = models.CharField(max_length = 4096, null = True)
    pathname = models.CharField(max_length = 1024, null = True)
    lineno = models.IntegerField(null = True)
    exc_text = models.CharField(max_length = 200, null = True)
    exc_info = models.CharField(max_length = 200, null = True)
    func_name = models.CharField(max_length = 200, null = True)
    relative_created = models.FloatField(null = True)
    levelname = models.CharField(max_length=10, null = True)
    msecs = models.FloatField(null = True)

    def __unicode__(self):
        return self.levelname + " - " + self.msg
This is the view
# Create your views here.
from django.shortcuts import render_to_response, get_list_or_404, get_object_or_404
from django.http import HttpResponse, HttpResponseRedirect
from django.views.decorators.csrf import csrf_protect, csrf_exempt
from inthebackgroundSite.log.models import log_entry, application
import datetime

@csrf_exempt
def log(request):
    print request.POST
    for element in request.POST:
        print ('%s : %s') % (element, request.POST[element])
    data = request.POST
    today = datetime.date.today()
    print today
    app = application.objects.filter(app_name__iexact = request.POST["name"], date__iexact=today)
    if not app:
        print "didnt find a matching application. adding one now.."
        print data["name"]
        print today
        app = application.objects.create(app_name = data["name"],
                                         description = None,
                                         date = today)
        app.save()
        if not app:
            print "after save you cant get at it!"
    newApplication = app
    print app
    print "found application"
    newEntry = log_entry.objects.create(application = app,
                                        thread_name = data["threadName"],
                                        name = data["name"],
                                        thread = data["thread"],
                                        created = data["created"],
                                        process = data["process"],
                                        args = "'" + data["args"] + "'",
                                        module = data["module"],
                                        filename = data["filename"],
                                        levelno = data["levelno"],
                                        msg = data["msg"],
                                        pathname = data["pathname"],
                                        lineno = data["lineno"],
                                        exc_text = data["exc_text"],
                                        exc_info = data["exc_info"],
                                        func_name = data["funcName"],
                                        relative_created = data["relativeCreated"],
                                        levelname = data["levelname"],
                                        msecs = data["msecs"],
                                        )
    print newEntry
    #newEntry.save()
    return HttpResponse("OK")
And this is the call in the Python code that sends a message:
import os
import logging
import logging.handlers
import time

if __name__ == '__main__':
    formatter = logging.Formatter("%(name)s %(levelno)s %(levelname)s %(pathname)s %(filename)s%(module)s %(funcName)s %(lineno)d %(created)f %(asctime)s %(msecs)d %(thread)d %(threadName)s %(process)d %(processName)s %(message)s ")
    log = logging.getLogger("ShoutGen")
    #logLevel = "debug"
    #log.setLevel(logLevel)
    http = logging.handlers.HTTPHandler("192.168.0.5:9000", "/log/", "POST")
    http.setFormatter(formatter)
    log.addHandler(http)
    log.critical("Finished MountGen init")
    time.sleep(20)
    http.close()
Now, the first time I send a message with empty tables, it works fine: a new app row gets created and a new log entry gets created. But the second time I call it, I get
<QueryDict: {u'msecs': [u'224.281072617'], u'args': [u'()'], u'name': [u'ShoutGen'], u'thread': [u'140445579720448'], u'created': [u'1299046203.22'], u'process': [u'16172'], u'threadName': [u'MainThread'], u'module': [u'logtest'], u'filename': [u'logtest.py'], u'levelno': [u'50'], u'processName': [u'MainProcess'], u'pathname': [u'logtest.py'], u'lineno': [u'19'], u'exc_text': [u'None'], u'exc_info': [u'None'], u'funcName': [u'<module>'], u'relativeCreated': [u'7.23600387573'], u'levelname': [u'CRITICAL'], u'msg': [u'Finished MountGen init']}>
msecs : 224.281072617
args : ()
name : ShoutGen
thread : 140445579720448
created : 1299046203.22
process : 16172
threadName : MainThread
module : logtest
filename : logtest.py
levelno : 50
processName : MainProcess
pathname : logtest.py
lineno : 19
exc_text : None
exc_info : None
funcName : <module>
relativeCreated : 7.23600387573
levelname : CRITICAL
msg : Finished MountGen init
2011-03-02
[sql] SELECT ...
FROM "log_application"
WHERE (UPPER("log_application"."date"::text) = UPPER(2011-03-02)
AND UPPER("log_application"."app_name"::text) = UPPER(ShoutGen))
[sql] (5.10ms) Found 1 matching rows
[<application: ShoutGen logs - 02-03-2011>]
found application
[sql] SELECT ...
FROM "log_log_entry" LIMIT 21
[sql] (4.05ms) Found 2 matching rows
[sql] (9.14ms) 2 queries with 0 duplicates
[profile] Total time to render was 0.44s
Traceback (most recent call last):
File "/usr/local/lib/python2.6/dist-packages/django/core/servers/basehttp.py", line 281, in run
self.finish_response()
File "/usr/local/lib/python2.6/dist-packages/django/core/servers/basehttp.py", line 321, in finish_response
self.write(data)
File "/usr/local/lib/python2.6/dist-packages/django/core/servers/basehttp.py", line 417, in write
self._write(data)
File "/usr/lib/python2.6/socket.py", line 300, in write
self.flush()
File "/usr/lib/python2.6/socket.py", line 286, in flush
self._sock.sendall(buffer)
error: [Errno 32] Broken pipe
and no extra rows are inserted into the log_log_entry table. So I don't really know why this is happening at this point.
I've looked around and apparently the broken-pipe traceback isn't a problem, just something that browsers do. But I'm not using a browser, so I'm not sure what the issue is.
It may be that the exception is causing a transaction to roll back and undo your changes. Are you using TransactionMiddleware? You could try the transaction.autocommit decorator on your view.
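A minimal sketch of that suggestion, assuming the view body otherwise stays as posted (transaction.autocommit is the decorator available in the Django 1.2 era):

from django.db import transaction
from django.views.decorators.csrf import csrf_exempt

@csrf_exempt
@transaction.autocommit
def log(request):
    # ... body unchanged from your current view ...
    return HttpResponse("OK")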
If the "broken pipe" error keeps happening, it's worth finding out why. The HTTPHandler does a normal POST and waits for the response ("OK" from your view) in its emit() call, and it shouldn't break the connection until after this.
You could try doing an equivalent POST to your view from a test script, using httplib and urllib as HTTPHandler itself does. Basically, just urlencode a dict for the POST data, as if it were a LogRecord's __dict__.
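For example, a rough test script along those lines; the host, port and path come from your HTTPHandler call, and the field values are placeholders modelled on your log output, so adjust as needed:

import httplib
import urllib

# roughly what HTTPHandler("192.168.0.5:9000", "/log/", "POST") does internally
record = {
    'name': 'ShoutGen',
    'levelname': 'CRITICAL',
    'levelno': 50,
    'msg': 'Finished MountGen init',
    'args': '()',
    'exc_text': 'None',
    'exc_info': 'None',
    'threadName': 'MainThread',
    'thread': 0,
    'created': 0.0,
    'process': 0,
    'module': 'logtest',
    'filename': 'logtest.py',
    'pathname': 'logtest.py',
    'lineno': 19,
    'funcName': '<module>',
    'relativeCreated': 0.0,
    'msecs': 0.0,
}
data = urllib.urlencode(record)
conn = httplib.HTTPConnection("192.168.0.5:9000")
conn.request("POST", "/log/", data,
             {"Content-type": "application/x-www-form-urlencoded",
              "Content-length": str(len(data))})
print conn.getresponse().read()   # expect "OK" from the Django view
conn.close()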
