Can I assign values in a RowProxy using SQLAlchemy? - python

When I want to display some data on the web, the data needs some massaging first, and I don't know how to achieve that. Here is the code:
from sqlalchemy import create_engine

engine = create_engine('mysql://root:111@localhost/test?charset=utf8')
conn = engine.connect()
articles = conn.execute('SELECT * FROM article')
articles = articles.fetchall()
for r in articles:
    r['Tags'] = r['Keywords']
It raises: 'RowProxy' object does not support item assignment.
What should I do for that?
The table 'article' contains the column 'Keywords' but does not contain the column 'Tags'.

You can make a dict out of your RowProxy, which would support item assignment.
For example:
result_proxy = query.fetchall()
for row in result_proxy:
    d = dict(row.items())
    d['Tags'] = d['Keywords']
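Since each dict is built fresh inside the loop, you probably want to collect them so they can be handed to whatever renders the page. A minimal sketch of that idea, applied to the question's own query (the variable names are just illustrative):

articles = conn.execute('SELECT * FROM article').fetchall()

rows = []
for r in articles:
    d = dict(r.items())        # copy the RowProxy into a plain, mutable dict
    d['Tags'] = d['Keywords']  # item assignment works on the copy
    rows.append(d)

# 'rows' is now a list of dicts you can pass on for display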

One nice trick with this is to use a subclass of a dict:
class DBRow(dict):
    def __getattr__(self, key):
        """make values available as attributes"""
        try:
            return self[key]
        except KeyError as error:
            raise AttributeError(str(error))

    @property
    def something_calculated(self):
        return self.a + self.b

row = DBRow(result_proxy_row, additional_value=123)
row["b"] = 2 * row.b
print(row.something_calculated)
The benefit of this is that you can still access the values as attributes, plus you can define properties, which is a nice way to clean up and massage the data coming from the database.
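Applied to the question's data, a wrapper pass could look roughly like this (a sketch only; it reuses the DBRow class above and the conn/article query from the question):

rows = [DBRow(r.items()) for r in conn.execute('SELECT * FROM article').fetchall()]
for row in rows:
    row['Tags'] = row['Keywords']  # item assignment works on the DBRow copy
    print(row.Tags)                # values are also reachable as attributes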

Related

How to make a query with whereIn with json values sqlalchemy

I want to make a query in SQLAlchemy with JSON, but the query should be of the whereIn type, where the column values are among the values I give it.
For example, I have:
class Product(Base):
    def __init__(self):
        super(Product, self).__init__()
        self.product = self.Base.classes.products
        self.session = Session(self.engine)

    def get_products(self, data):
        print(data)
        query = self.session.query(self.product)
        product = query.filter(self.product.attributes.like(data['attributes']))
        print(product.get())
In my database the attributes are stored like this:
"[{\"Brand\":\"Kanu\"},{\"Shade\":\"Red, Yellow\"},{\"Material\":\"Artificial Leather\"}]"
How could I proceed?
Can SQLAlchemy make this distinction, or should I do the query manually?
This is the error I got:
LINE 3: WHERE products.attributes LIKE '["{''Ideal For'': ''Women''}...
^
HINT: No operator matches the given name and argument types. You might need to add explicit type casts.
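The HINT in the error suggests an explicit type cast. One way that might look in SQLAlchemy (a sketch only, assuming the attributes column maps to a JSON type and that a plain substring match is acceptable):

from sqlalchemy import cast, String

def get_products(self, data):
    query = self.session.query(self.product)
    # cast the JSON column to text so LIKE applies to its string form;
    # the surrounding '%' turns it into a substring match
    pattern = '%' + data['attributes'] + '%'
    return query.filter(cast(self.product.attributes, String).like(pattern)).all()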

TypeError sqlite python

I am new to sqlite in python and I am trying to do the following:
Extract a certain value from a row in a table and compare it to 100 (it's normally an INT type).
school is a table where I have the following attributes: id, class, nbstudent, nbteachers, nbrepresentative
I use the following function:
def select_school_value(conn, class, m):
    """
    Query school by class
    """
    cur = conn.cursor()
    cur.execute("SELECT * FROM school WHERE class=?", (class,))
    record = cur.fetchone()
    return record[m]
The function's parameter m is just a number that depends on which attribute I want to extract for the comparison: nbstudent is m=2, nbteachers is m=3, and so on.
When I use my function select_school_value() and compare the returned value with 100, I get a TypeError: the return value is of NoneType.
How can I get an integer return value (the type of the attribute I need)?
Thank you in advance.
I guess the issue is the class parameter that you pass to your function and query: class is a reserved keyword in Python, so using it as a parameter name is a syntax error. Rename it to something else.
The class=? placeholder itself is fine; just pass the renamed variable as the query parameter.
I suggest trying this:
def select_school_value(conn, cl, m):
    """
    Query school by class
    """
    cur = conn.cursor()
    cur.execute("SELECT * FROM school WHERE class=?", (cl,))
    record = cur.fetchone()
    return record[m]
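As for the TypeError itself, one likely cause (an assumption, since the question doesn't show the traceback) is that fetchone() returns None when no row matches the class, so indexing the result fails. A small guard makes that explicit; the database file and class name below are made up for illustration:

import sqlite3

def select_school_value(conn, cl, m):
    """Query school by class and return column m of the first matching row."""
    cur = conn.cursor()
    cur.execute("SELECT * FROM school WHERE class=?", (cl,))
    record = cur.fetchone()
    if record is None:
        # no matching row: fail loudly instead of returning None
        raise ValueError("no row found for class %r" % cl)
    return record[m]

conn = sqlite3.connect("school.db")           # hypothetical database file
if select_school_value(conn, "5A", 2) > 100:  # m=2 is nbstudent
    print("more than 100 students")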

Django-tables2: Provide a list of dictionaries, how to generate a column for each dictionary entry

I know that if we have a model class, we can generate a table and use:
class Meta:
    model = MyModel
to display every field.
Now, if I have a list of dictionaries instead of a model, is there a similar way to do so?
(Since there are so many different dictionaries, which might be created dynamically, I don't want to create a custom table class each time :-))
You can create your own class that inherits from Table and define the fields you want there.
class JsonTable(Table):
    json_key_1 = Column()
    json_key_2 = Column()
django-tables2 also has a fields attribute, but you can't use it if your data is a list of dicts.
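For completeness, a small usage sketch (the key names and data are made up; django-tables2 tables accept a list of dicts directly):

import django_tables2 as tables

class JsonTable(tables.Table):
    json_key_1 = tables.Column()
    json_key_2 = tables.Column()

data = [
    {"json_key_1": "a", "json_key_2": 1},
    {"json_key_1": "b", "json_key_2": 2},
]
table = JsonTable(data)  # render in a template with {% render_table table %}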
I've been doing this too; here's a rough sketch.
Piggy-backing on top of django_tables2 is miles ahead of rolling your own!
Plus, I hook the results up to the jQuery FooTable plugin.
import django_tables2 as tables

counter = 0

def generate(li_dict):
    # unique classname
    global counter
    counter += 1
    table_classname = "MyTableClass%s" % (counter)

    class Meta:
        # ahhh... Bootstrap
        attrs = {"class": "table table-striped"}

    # generate a class dynamically
    cls = type(table_classname, (tables.Table,), dict(Meta=Meta))

    # grab the first dict's keys
    li = li_dict[0].keys()
    for colname in li:
        column = tables.Column()
        cls.base_columns[colname] = column
    return cls

# now, to make use of it...
li_dict = [dict(a=11, b=12, c=13), dict(a=21, b=22, c=23)]
cls = generate(li_dict)
table = cls(li_dict)

# below didn't work, wanted a whole bunch of django setup done first.
# but I'm fairly confident it would...
print table.as_html()
>>> django.core.exceptions.ImproperlyConfigured: {% querystring %} requires django.core.context_processors.request to be in your settings.TEMPLATE_CONTEXT_PROCESSORS in order for the included template tags to function correctly.

# this did...
print "%s" % table
>>> <django_tables2.tables.MyTableClass1 object at 0x1070e1090>
Sorry for my poor English :), but here is something that can help: with this we can transform a NumPy array (matrix) into a generic django-tables2 table. By the way, thanks Pyeret for your help.
def convert_array_list_dict(arr):
    _list = []
    for i in xrange(0, arr.shape[0]):
        _list.append(dict(enumerate(arr[i, :])))
    for i in xrange(0, len(_list)):
        for key in _list[i].keys():
            _list[i]["col_" + str(key)] = _list[i].pop(key)
    return _list
The function above converts a NumPy array into a list of dicts.
counter = 0

def list_dict(dict_):
    global counter
    counter += 1
    table_classname = "MyTableClass%s" % (counter)

    class Meta:
        attrs = {"class": "paleblue", 'width': '150%'}

    cls = type(table_classname, (tables.Table,), dict(Meta=Meta))
    list_ = dict_[0].keys()
    for colname in list_:
        column = tables.Column()
        cls.base_columns[colname] = column
    return cls
This code builds a generic table class, and:
t = np.loadtxt(doc.document)
tab = convert_array_list_dict(t)
table = list_dict(tab)
table_content = table(tab)
RequestConfig(request, paginate={'per_page': 30}).configure(table_content)
return render(request,'app/snippets/upload_file.html',{'document':document,'table_content':table_content})
Above you can see how all the code is used.

What is the proper way to delineate modules and classes in Python?

I am new to Python, and I'm starting to learn the basics of the code structure. I've got a basic app that I'm working on up on my Github.
For my simple app, I'm creating a basic "Evernote-like" service that allows the user to create and edit a list of notes. In the early design, I have a Note object and a Notepad object, which is effectively a list of notes. Presently, I have the following file structure:
Notes.py
|
|------ Notepad (class)
|------ Note (class)
From my current understanding and implementation, this translates into the "Notes" module having a Notepad class and Note class, so when I do an import, I'm saying "from Notes import Notepad / from Notes import Note".
Is this the right approach? I feel, out of Java habit, that I should have a folder for Notes and the two classes as individual files.
My goal here is to understand what the best practice is.
As long as the classes are rather small, put them into one file.
You can still move them later if necessary.
Actually, it is rather common for larger projects to have a fairly deep internal hierarchy but expose a flatter one to the user. So if you move things later but would still like to have notes.Note available, even though the class Note moved deeper, it is simple to import notes.path.to.module.Note into notes and let the user get it from there. You don't have to do that, but you can. So even if you change your mind later, you can still keep the API.
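A minimal sketch of that kind of re-export, assuming notes later becomes a package and the classes move into submodules (the submodule names are made up):

# notes/__init__.py
# keep the flat public API stable even though the classes moved deeper
from notes.storage.note import Note        # hypothetical new location
from notes.storage.notepad import Notepad  # hypothetical new location

__all__ = ["Note", "Notepad"]

Callers can keep writing "from notes import Note" no matter where the class actually lives.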
I've been working on a similar application myself. I can't say this is the best possible approach, but it served me well. The classes are meant to interact with the database (context) when the user makes a request (an HTTP request; this is a web app).
# -*- coding: utf-8 -*-
import json
import datetime


class Note():
    """A note. This class is part of the data model and is instantiated every
    time there is access to the database"""

    def __init__(self, noteid=0, note="", date=datetime.datetime.now(), context=None):
        self.id = noteid
        self.note = note
        self.date = date
        self.ctx = context  # context holds the db connection and some globals

    def get(self):
        """Get the current object from the database. This function needs the
        instance to have an id"""
        if self.id == 0:
            raise self.ctx.ApplicationError(404, ("No note with id 0 exists"))
        cursor = self.ctx.db.conn.cursor()
        cursor.execute("select note, date from %s.notes where id=%s" %
                       (self.ctx.db.DB_NAME, str(self.id)))
        data = cursor.fetchone()
        if not data:
            raise self.ctx.ApplicationError(404, ("No note with id "
                                                  + str(self.id) + " was found"))
        self.note = data[0]
        self.date = data[1]
        return self

    def insert(self, user):
        """This function inserts the object into the database. It can be an empty
        note. User must be authenticated to add notes (authentication handled
        elsewhere)"""
        cursor = self.ctx.db.conn.cursor()
        query = ("insert into %s.notes (note, owner) values ('%s', '%s')" %
                 (self.ctx.db.DB_NAME, str(self.note), str(user['id'])))
        cursor.execute(query)
        return self

    def put(self):
        """Modify the current note in the database"""
        cursor = self.ctx.db.conn.cursor()
        query = ("update %s.notes set note = '%s' where id = %s" %
                 (self.ctx.db.DB_NAME, str(self.note), str(self.id)))
        cursor.execute(query)
        return self

    def delete(self):
        """Delete the current note, by id"""
        if self.id == 0:
            raise self.ctx.ApplicationError(404, "No note with id 0 exists")
        cursor = self.ctx.db.conn.cursor()
        query = ("delete from %s.notes where id = %s" %
                 (self.ctx.db.DB_NAME, str(self.id)))
        cursor.execute(query)

    def toJson(self):
        """Returns a json string of the note object's data attributes"""
        return json.dumps(self.toDict())

    def toDict(self):
        """Returns a dict of the note object's data attributes"""
        return {
            "id": self.id,
            "note": self.note,
            "date": self.date.strftime("%Y-%m-%d %H:%M:%S")
        }


class NotesCollection():
    """This class handles the notes as a collection"""
    collection = []

    def get(self, user, context):
        """Populate the collection object and return it"""
        cursor = context.db.conn.cursor()
        cursor.execute("select id, note, date from %s.notes where owner=%s" %
                       (context.db.DB_NAME, str(user["id"])))
        note = cursor.fetchone()
        while note:
            self.collection.append(Note(note[0], note[1], note[2]))
            note = cursor.fetchone()
        return self

    def toJson(self):
        """Return a json string of the current collection"""
        return json.dumps([note.toDict() for note in self.collection])
I personally use Python as a "get it done" language and don't bother myself with details, which shows in the code above. However, one piece of advice: there are no private variables or methods in Python, so don't bother trying to create them. Make your life easier, code fast, get it done.
Usage example:
class NotesCollection(BaseHandler):

    @tornado.web.authenticated
    def get(self):
        """Retrieve all notes from the current user and return a json object"""
        allNotes = Note.NotesCollection().get(self.get_current_user(), settings["context"])
        json = allNotes.toJson()
        self.write(json)

    @protected
    @tornado.web.authenticated
    def post(self):
        """Handles all post requests to /notes"""
        requestType = self.get_argument("type", "POST")
        ctx = settings["context"]
        if requestType == "POST":
            Note.Note(note=self.get_argument("note", ""),
                      context=ctx).insert(self.get_current_user())
        elif requestType == "DELETE":
            Note.Note(noteid=self.get_argument("id"), context=ctx).delete()
        elif requestType == "PUT":
            Note.Note(noteid=self.get_argument("id"),
                      note=self.get_argument("note"),
                      context=ctx).put()
        else:
            raise ApplicationError(405, "Method not allowed")
By using decorators I'm getting user authentication and error handling out of the main code. This makes it clearer and easier to maintain.
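For illustration, such an error-handling decorator might look roughly like this (a sketch only; the attribute names on ApplicationError and the exact response format are guesses, not taken from the answer above):

import functools

def protected(method):
    """Turn ApplicationError raised inside a handler method into an HTTP
    error response instead of an unhandled exception."""
    @functools.wraps(method)
    def wrapper(self, *args, **kwargs):
        try:
            return method(self, *args, **kwargs)
        except ApplicationError as err:     # the answer's error class
            self.set_status(err.code)       # assumed attribute: HTTP status code
            self.write({"error": str(err)}) # JSON error body
    return wrapper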

Can Django models use MySQL functions?

Is there a way to force Django models to pass a field through a MySQL function every time the model data is loaded or saved? To clarify what I mean in SQL, I want the Django model to produce something like the following:
On model load: SELECT AES_DECRYPT(fieldname, password) FROM tablename
On model save: INSERT INTO tablename VALUES (AES_ENCRYPT(userinput, password))
Instead of on model load, you can create a property on your model, and when the property is accessed, it can read the database:
def _get_foobar(self):
    if not hasattr(self, '_foobar'):
        cursor = connection.cursor()
        cursor.execute('SELECT AES_DECRYPT(fieldname, password) FROM tablename')
        self._foobar = cursor.fetchone()[0]
    return self._foobar

foobar = property(_get_foobar)
Now after loading, you can refer to mything.foobar, and the first access will retrieve the decrypted value from the database, holding onto it for later accesses.
This also has the advantage that if some of your code has no use for the decryption, it won't happen.
I would define a custom modelfield for the column you want encrypted/decrypted. Override the to_python method to run the decryption when the model is loaded, and get_db_prep_value to run the encryption on saving.
Remember to set the field's metaclass to models.SubfieldBase otherwise these methods won't be called.
Here is a working solution, based in part on http://www.djangosnippets.org/snippets/824/:
class Employee(models.Model):
    social_security_number = models.CharField(max_length=32)

    def _get_ssn(self):
        cursor = connection.cursor()
        cursor.execute("SELECT AES_DECRYPT(UNHEX(social_security_number), %s) as ssn FROM tablename WHERE id=%s", [settings.SECRET_KEY, self.id])
        return cursor.fetchone()[0]

    def _set_ssn(self, ssn_value):
        cursor = connection.cursor()
        cursor.execute("SELECT HEX(AES_ENCRYPT(%s, %s)) as ssn", [ssn_value, settings.SECRET_KEY])
        self.social_security_number = cursor.fetchone()[0]

    ssn = property(_get_ssn, _set_ssn)
And the results:
>>> from foo.bar.models import Employee
>>> p=Employee.objects.create(ssn='123-45-6789')
>>> p.ssn
'123-45-6789'
mysql> select * from foo_employee;
+----+----------------------------------+
| id | social_security_number          |
+----+----------------------------------+
| 31 | 41DF2D946C9186BEF77DD3307B85CC8C |
+----+----------------------------------+
1 row in set (0.00 sec)
It's definitely hackish, but it seems Django won't let you do it any other way at the moment. It's also worth noting that to_python will be called every time you change the value in python in addition to when it is first loaded.
from django.conf import settings
from django.db import connection, models
import re


class EncryptedField(models.TextField):
    __metaclass__ = models.SubfieldBase

    def to_python(self, value):
        if not re.match('^*some pattern here*$', value):
            cursor = connection.cursor()
            cursor.execute('SELECT AES_DECRYPT(%s, %s)', [value, settings.SECRET_KEY])
            return cursor.fetchone()[0]
        return value

    def get_db_prep_value(self, value):
        cursor = connection.cursor()
        cursor.execute('SELECT AES_ENCRYPT(%s, %s)', [value, settings.SECRET_KEY])
        return cursor.fetchone()[0]


class Encrypt(models.Model):
    encrypted = EncryptedField(max_length=32)
After digging deep into the implementation of the Django ORM,
I found that it can be solved with something like this:
import binascii

from Crypto.Cipher import AES  # assumed import; KEY and self._iv are defined elsewhere
from django.db import models
from django.utils.encoding import force_bytes


class EncryptedField(models.BinaryField):

    @staticmethod
    def _pad(value):
        return value + (AES.block_size - len(value) % AES.block_size) * b'\x00'

    def _encrypt(self, data):
        if not data:
            return None
        return self.cipher.encrypt(self._pad(data.encode('utf8')))

    def _decrypt(self, data):
        if not data:
            return None
        return self.cipher.decrypt(force_bytes(data)).rstrip(b'\x00').decode('utf8')

    @property
    def cipher(self):
        return AES.new(KEY, mode=AES.MODE_CBC, IV=self._iv)

    def get_db_prep_value(self, value, connection, prepared=False):
        if value is not None:
            value = self._encrypt(value)
            if value:
                value = binascii.hexlify(value)
        return value

    def get_placeholder(self, value, compiler, connection):
        return 'unhex(%s)'
Using Django signals you can do things when a model instance is saved, but as far as I know you can't trigger anything on read.
EDIT: My bad, it seems you can do things when a model instance is initialized.
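A short sketch of that signal-based idea, reusing the Employee model from the earlier answer (the my_decrypt/my_encrypt helpers are placeholders you would have to supply):

from django.db.models.signals import post_init, pre_save
from django.dispatch import receiver

@receiver(post_init, sender=Employee)
def decrypt_on_load(sender, instance, **kwargs):
    # fires right after an instance is initialized, e.g. when loaded from the DB
    instance.social_security_number = my_decrypt(instance.social_security_number)

@receiver(pre_save, sender=Employee)
def encrypt_on_save(sender, instance, **kwargs):
    # fires just before the instance is written to the DB
    instance.social_security_number = my_encrypt(instance.social_security_number)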
