Array type in SQLite - Python

I'm in the middle of developing a small site in Python, using Flask and a venv.
I'm currently writing the database models, and here is one of my tables:
class Message(db.Model):
    message_id = db.Column(db.Integer, primary_key=True)
    session_id = db.Column(db.String(30), unique=True)
    application_id = db.Column(db.Integer)
    participants = db.Column(db.Array())
    content = db.Column(db.String(200))
The problem is the participants line:
"Array".
There is no such column type.
I want to store a list of message recipients. Is there an array or list column type in SQLite?
If so, what is it and how is it used?
And if not, how can I store a list of recipients anyway?
Thank you very much!

SQLite does not support arrays directly. Its storage classes are limited to NULL, INTEGER, REAL, TEXT, and BLOB; see the SQLite documentation on datatypes for the full list.
To accomplish what you need, you have to use a custom encoding, or use an FK, i.e. create another table where each item in the array is stored as a row. This can get tedious in my opinion.
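The FK approach mentioned above can be sketched with the stdlib sqlite3 module. This is only an illustration: the table and column names (`message`, `message_recipient`) are made up, not from the question.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE message (
        message_id INTEGER PRIMARY KEY,
        content    TEXT
    );
    -- one row per recipient instead of an array column
    CREATE TABLE message_recipient (
        message_id INTEGER REFERENCES message(message_id),
        recipient  TEXT
    );
""")
conn.execute("INSERT INTO message (message_id, content) VALUES (1, 'hello')")
conn.executemany(
    "INSERT INTO message_recipient (message_id, recipient) VALUES (?, ?)",
    [(1, "alice"), (1, "bob")],
)
recipients = [r[0] for r in conn.execute(
    "SELECT recipient FROM message_recipient WHERE message_id = ?", (1,)
)]
print(recipients)  # ['alice', 'bob']
```

The upside of this layout is that SQL can query individual recipients; the downside is the extra join on every read.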
Alternatively, it can be done in SQLAlchemy, where you will want to have a look at the PickleType:
array = db.Column(db.PickleType(mutable=True))
Please note that you will have to use the mutable=True parameter to be able to edit the column; SQLAlchemy will then detect changes automatically and save them as soon as you commit. (In recent SQLAlchemy versions this flag is gone and mutation tracking is handled by the sqlalchemy.ext.mutable extension instead.)
Also, have a look at the ScalarListType in SQLAlchemy-Utils for saving multiple values in a column.
Update:
In SQLAlchemy you can use an ARRAY column.
For example:
class Example(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    my_array = db.Column(db.ARRAY(db.Integer()))

# You can easily find records:
# Example.query.filter(Example.my_array.contains([1, 2, 3])).all()
# You can also use text items in the array:
# db.Column(db.ARRAY(db.Text()))
Update: This doesn't work in SQLite; SQLAlchemy's ARRAY type is for PostgreSQL databases only. The best alternative for most people would be something involving JSON, or switching to PostgreSQL if possible. I'll be attempting JSON myself. Credit to the replier in the comments.
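The JSON route mentioned in the update needs nothing beyond the stdlib: serialize the list with json.dumps into a TEXT column and json.loads it on the way out. A minimal sketch (the `message`/`participants` names are illustrative, not from the question):

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE message (message_id INTEGER PRIMARY KEY, participants TEXT)")

participants = ["alice", "bob", "carol"]
conn.execute(
    "INSERT INTO message (message_id, participants) VALUES (?, ?)",
    (1, json.dumps(participants)),  # store the list as a JSON string
)

row = conn.execute("SELECT participants FROM message WHERE message_id = 1").fetchone()
print(json.loads(row[0]))  # ['alice', 'bob', 'carol']
```

Unlike pickle, the stored value stays human-readable, and SQLite's built-in JSON functions can query inside it if needed.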

How to rename a column in a table with Flask + SQLAlchemy

I created two tables for my truck scheduling application:
class appts_db(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    carrier = db.Column(db.String(100))
    material = db.Column(db.String(10))
    pickup_date = db.Column(db.String(10))

class carriers_db(db.Model):
    carrier_id = db.Column(db.Integer, primary_key=True)
    carrier = db.Column(db.String(100))
    phone_number = db.Column(db.String(15))
How can I rename the column carrier to carrier_name in both tables, to make it clearer what the columns contain? I tried it from the command prompt:
>python3
>db.create_all()
But the column name doesn't update. Is there some command that I'm missing that can update the column name in the db?
(1.) This seems to be a question about "how to migrate a single table?", twice. That is, whatever answer works for appts_db will also need to be applied to carriers_db -- I don't see a FK relation so I think most technical solutions would need to be manually told about that 2nd rename.
(2.) There are many nice "version my schema!" approaches, including the usual ruby-on-rails approach. Here, I recommend alembic. It takes some getting used to, but once implemented it lets you roll forward / roll back in time, and table schemas will match the currently-checked-out source code's expectations. It is specifically very good at column renames.
(3.) The simplest possible thing you could do here is a pair of DROP TABLE and then re-run the db.create_all(). The existing table is preventing create_all from having any effect, but after the DROP it will do just what you want. Of course, if you care about the existing rows you will want to tuck them away somewhere before you get too adventurous.
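The "tuck them away" step in (3.) can be done entirely in SQL: rename the old table aside, create the new schema, copy the rows across, and drop the old table. A sketch with the stdlib sqlite3 module, using a simplified version of the carriers_db table from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE carriers_db (carrier_id INTEGER PRIMARY KEY, carrier TEXT)")
conn.execute("INSERT INTO carriers_db VALUES (1, 'Acme Trucking')")

conn.executescript("""
    ALTER TABLE carriers_db RENAME TO carriers_db_old;   -- tuck existing rows away
    CREATE TABLE carriers_db (
        carrier_id   INTEGER PRIMARY KEY,
        carrier_name TEXT                                -- the renamed column
    );
    INSERT INTO carriers_db (carrier_id, carrier_name)
        SELECT carrier_id, carrier FROM carriers_db_old;
    DROP TABLE carriers_db_old;
""")

row = conn.execute("SELECT carrier_name FROM carriers_db").fetchone()
print(row[0])  # Acme Trucking
```

In a Flask app you would run the CREATE via db.create_all() instead of raw SQL, but the rename-copy-drop shape is the same.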
I ended up using DB Browser for SQLite (I had downloaded it previously) and ran this code in the "Execute SQL" tab:
ALTER TABLE carriers_db
RENAME COLUMN carrier TO carrier_name;
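The same ALTER TABLE ... RENAME COLUMN statement (supported by SQLite 3.25 and newer) can also be run from Python with the stdlib sqlite3 module rather than DB Browser, e.g.:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE carriers_db (carrier_id INTEGER PRIMARY KEY, carrier TEXT)")
conn.execute("INSERT INTO carriers_db VALUES (1, 'Acme Trucking')")

# Requires the underlying SQLite library to be version 3.25+
conn.execute("ALTER TABLE carriers_db RENAME COLUMN carrier TO carrier_name")

# PRAGMA table_info lists one row per column; index 1 is the column name
cols = [r[1] for r in conn.execute("PRAGMA table_info(carriers_db)")]
print(cols)  # ['carrier_id', 'carrier_name']
```

The model class still needs to be updated by hand to match; neither route touches the Python code.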

flask-mongoengine and document not accepting unique or primary_key arguments

I'm trying out flask-mongoengine and MongoHQ and I'm having some difficulty getting it to declare my documents correctly.
I've declared a db document like so:
class numbers(nodb.Document):
    numbers = nodb.StringField(required=True)
    simple_date = nodb.DateTimeField(required=True, unique=True, primary_key=True)
    date = nodb.DateTimeField(default=datetime.now, required=True)
now when I add an entry to the document it's not taking my _id or even acknowledging that I've put in the unique or primary_key requirement.
test = numbers(
    _id=datetime.strptime(currentdate, "%m/%d/%Y").date(),
    simple_date=datetime.strptime(currentdate, "%m/%d/%Y").date(),
    numbers='12345'
)
test.save()
Now if I run those lines again, it creates another identical entry in the db, and the requirements on simple_date appear to be ignored. Am I hitting a bug here, or just doing something wrong?
MongoEngine only creates indexes when the collection does not exist yet; it does not take care of data migration. So if you first created the collection without an index and only later described the index in the model, the index is not created automatically. In your case you must either create the indexes manually, or, for a development database where the data is not needed, simply drop your numbers collection and let it be recreated.

How can I insert arrays into columns on a database table for my pyramid app?

I am trying to create an SQL (SQLite) database where users upload an STL file and the data (all the points, triangles, etc.) is stored in a database so that it is permanently available. Is it possible to do this with a database with only two columns (three with the key): name (title for the url), and data (the array data)?
The data array is in the format: [[[x1,y1,z1],....],[[v1,v2,v3],...]]. All points are given first, and the triangles are then defined through the ordering of the points. Can this array be stored in the database, and if so, what data type would it be (integer, string, etc.)?
Upon reading into this issue more, it seems that pickling would be a good way to go: http://docs.python.org/2/library/pickle.html
I am having trouble figuring out how to implement this. Should I just add pickle(data)?
Edit: upon further review, it seems like pickling introduces some security holes that do not exist when using JSON. Is it possible to simply call jsondatastring=JSON.stringify(data) and then save that to the database? If so, what would be the appropriate column type?
If your intention is only to store the array in DB and work with it in your webapp code, SQLAlchemy's PickleType is a good choice. Pickling and unpickling will be done transparently for you:
from sqlalchemy.types import PickleType

class Foo(Base):
    __tablename__ = 'foo'
    id = Column(Integer, primary_key=True)
    name = Column(String)
    array = Column(PickleType)

foo = Foo(name=u'bar', array=[1, 2, 3])
session.add(foo)
session.commit()

foo = session.query(Foo).filter_by(name=u'bar').one()
print(foo.array)
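Under the hood, PickleType simply runs the value through the stdlib pickle module and stores the resulting bytes. The same round trip done by hand with sqlite3, as an illustrative sketch (not the actual PickleType implementation):

```python
import pickle
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE foo (id INTEGER PRIMARY KEY, name TEXT, array BLOB)")

# serialize the Python list to bytes and store it in a BLOB column
conn.execute(
    "INSERT INTO foo (name, array) VALUES (?, ?)",
    ("bar", pickle.dumps([1, 2, 3])),
)

blob = conn.execute("SELECT array FROM foo WHERE name = 'bar'").fetchone()[0]
print(pickle.loads(blob))  # [1, 2, 3]
```

As the question's edit notes, pickle bytes should never be loaded from untrusted sources; for user-supplied data a JSON TEXT column is the safer equivalent.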

Proper use of MySQL full text search with SQLAlchemy

I would like to be able to full text search across several text fields of one of my SQLAlchemy mapped objects. I would also like my mapped object to support foreign keys and transactions.
I plan to use MySQL to run the full text search. However, I understand that MySQL can only run full text search on a MyISAM table, which does not support transactions and foreign keys.
In order to accomplish my objective I plan to create two tables. My code will look something like this:
class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    name = Column(String(50))
    description = Column(Text)

users_myisam = Table('users_myisam', Base.metadata,
    Column('id', Integer),
    Column('name', String(50)),
    Column('description', Text),
    mysql_engine='MyISAM')

conn = Base.metadata.bind.connect()
conn.execute("CREATE FULLTEXT INDEX idx_users_ftxt \
    on users_myisam (name, description)")
Then, to search I will run this:
q = 'monkey'
ft_search = users_myisam.select("MATCH (name,description) AGAINST ('%s')" % q)
result = ft_search.execute()
for row in result:
    print(row)
This seems to work, but I have a few questions:
Is my approach of creating two tables to solve my problem reasonable? Is there a standard/better/cleaner way to do this?
Is there a SQLAlchemy way to create the fulltext index, or am I best to just directly execute "CREATE FULLTEXT INDEX ..." as I did above?
Looks like I have a SQL injection problem in my search/match against query. How can I do the select the "SQLAlchemy way" to fix this?
Is there a clean way to join the users_myisam select/match against right back to my user table and return actual User instances, since this is what I really want?
In order to keep my users_myisam table in sync with my mapped object user table, does it make sense for me to use a MapperExtension on my User class, and set the before_insert, before_update, and before_delete methods to update the users_myisam table appropriately, or is there some better way to accomplish this?
Thanks,
Michael
Is my approach of creating two tables to solve my problem reasonable?
Is there a standard/better/cleaner way to do this?
I've not seen this use case attempted before, as developers who value transactions and constraints tend to use Postgresql in the first place. I understand that may not be possible in your specific scenario.
Is there a SQLAlchemy way to create the fulltext index, or am I best
to just directly execute "CREATE FULLTEXT INDEX ..." as I did above?
conn.execute() is fine, though if you want something slightly more integrated you can use the DDL() construct; read through http://docs.sqlalchemy.org/en/rel_0_8/core/schema.html?highlight=ddl#customizing-ddl for details.
Looks like I have a SQL injection problem in my search/match against query. How can I do the
select the "SQLAlchemy way" to fix this?
Note: this recipe is only for MATCH against multiple columns simultaneously; if you have just one column, use the simpler match() operator.
Most basically, you could use the text() construct:
from sqlalchemy import text, bindparam

users_myisam.select(
    text("MATCH (name,description) AGAINST (:value)",
         bindparams=[bindparam('value', q)])
)
More comprehensively, you could define a custom construct:
from sqlalchemy.ext.compiler import compiles
from sqlalchemy.sql.expression import ClauseElement
from sqlalchemy import literal

class Match(ClauseElement):
    def __init__(self, columns, value):
        self.columns = columns
        self.value = literal(value)

@compiles(Match)
def _match(element, compiler, **kw):
    return "MATCH (%s) AGAINST (%s)" % (
        ", ".join(compiler.process(c, **kw) for c in element.columns),
        compiler.process(element.value)
    )

my_table.select(Match([my_table.c.a, my_table.c.b], "some value"))
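As a sanity check, the custom construct can be compiled to SQL without any database connection at all, by calling str() on it. A self-contained sketch (table and column names here are placeholders based on the question, and inherit_cache is set only to quiet the caching warning on SQLAlchemy 1.4+):

```python
from sqlalchemy import Column, Integer, MetaData, String, Table, literal
from sqlalchemy.ext.compiler import compiles
from sqlalchemy.sql.expression import ClauseElement

class Match(ClauseElement):
    inherit_cache = False  # avoid SQLAlchemy 1.4+ cache warnings for custom elements

    def __init__(self, columns, value):
        self.columns = columns
        self.value = literal(value)  # becomes a bound parameter, not inlined text

@compiles(Match)
def _match(element, compiler, **kw):
    return "MATCH (%s) AGAINST (%s)" % (
        ", ".join(compiler.process(c, **kw) for c in element.columns),
        compiler.process(element.value),
    )

metadata = MetaData()
t = Table("users_myisam", metadata,
          Column("name", String(50)),
          Column("description", String))

sql = str(Match([t.c.name, t.c.description], "monkey"))
print(sql)
```

Because the search value goes through literal(), it is emitted as a bound parameter, which is exactly what closes the injection hole in the original string-interpolated query.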
docs:
http://docs.sqlalchemy.org/en/rel_0_8/core/compiler.html
Is there a clean way to join the users_myisam select/match against right back
to my user table and return actual User instances, since this is what I really want?
You should probably create a UserMyISAM class, map it just like User, and use relationship() to link the two classes together; then simple operations like this are possible:
query(User).join(User.search_table).\
    filter(Match([UserSearch.x, UserSearch.y], "some value"))
In order to keep my users_myisam table in sync with my mapped object
user table, does it make sense for me to use a MapperExtension on my
User class, and set the before_insert, before_update, and
before_delete methods to update the users_myisam table appropriately,
or is there some better way to accomplish this?
MapperExtensions are deprecated, so you'd at least use the event API, and in most cases we want to apply object mutations outside of the flush process. In this case, I'd use the constructor for User, or alternatively the init event, as well as a basic @validates decorator which will receive values for the target attributes on User and copy those values into User.search_table.
Overall, if you've been learning SQLAlchemy from another source (like the O'Reilly book), it's really out of date by many years, and I'd focus on the current online documentation.
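The validates-decorator idea above can be sketched as follows. This is a simplified assumption of the setup (SQLite instead of MySQL, invented UserSearch/users_search names, one mirrored column) just to show the copy-on-assignment mechanics:

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base, relationship, validates

Base = declarative_base()

class UserSearch(Base):
    __tablename__ = "users_search"
    # shares its primary key with the parent User row
    id = Column(Integer, ForeignKey("users.id"), primary_key=True)
    name = Column(String(50))

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String(50))
    search_row = relationship(UserSearch, uselist=False,
                              cascade="all, delete-orphan")

    @validates("name")
    def _copy_to_search(self, key, value):
        # mirror every assignment to User.name into the search table row
        if self.search_row is None:
            self.search_row = UserSearch()
        self.search_row.name = value
        return value

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(User(name="monkey"))   # cascade also saves the UserSearch row
    session.commit()
    copied = session.query(UserSearch).one().name

print(copied)  # monkey
```

In the real setup the mirrored table would carry mysql_engine='MyISAM' and all the searched columns, but the validator pattern is the same.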

In Elixir or SQLAlchemy, is there a way to also store a comment for a/each field in my entities?

Our project is basically a web interface to several systems of record. We have many tables mapped, and the names of each column aren't as well named and intuitive as we'd like... The users would like to know what data fields are available (i.e. what's been mapped from the database). But, it's pointless to just give them column names like: USER_REF1, USER_REF2, etc.
So, I was wondering, is there a way to provide a comment in the declaration of my field?
E.g.
class SegregationCode(Entity):
    using_options(tablename="SEGREGATION_CODES")
    segCode = Field(String(20), colname="CODE", ...
                    primary_key=True)  # Have a comment attr too?
If not, any suggestions?
Doing some research through the SQLAlchemy documentation, my buddy and I found that the Column object has a dictionary attribute called info that is a space to store "application-specific data." So, in my case, I can just do something like:
class SegregationCode(Entity):
    using_options(tablename="SEGREGATION_CODES")
    segCode = Field(String(20), colname="CODE", ...
                    primary_key=True, info={'description': 'Segregation Code'})
