I want to create a column (Id) of type uniqueidentifier in sqlalchemy in a table called Staging.Transactions. Also, I want the column to automatically generate new guids for inserts.
What I want to accomplish is the following (expressed in SQL):
ALTER TABLE [Staging].[Transactions] ADD DEFAULT (newid()) FOR [Id]
GO
The code in sqlalchemy is currently:
from sqlalchemy import Column, Float, Date
import uuid
from database.base import base
from sqlalchemy_utils import UUIDType

class Transactions(base):
    __tablename__ = 'Transactions'
    __table_args__ = {'schema': 'Staging'}

    Id = Column(UUIDType, primary_key=True, default=uuid.uuid4)
    AccountId = Column(UUIDType)
    TransactionAmount = Column(Float)
    TransactionDate = Column(Date)

    def __init__(self, account_id, transaction_amount, transaction_date):
        self.Id = uuid.uuid4()
        self.AccountId = account_id
        self.TransactionAmount = transaction_amount
        self.TransactionDate = transaction_date
When I create the schema from the Python code, it does not generate the DEFAULT constraint that I want in SQL - that is, to auto-generate new GUIDs/uniqueidentifiers for the column [Id].
If I try to make a manual insert I get error message: "Cannot insert the value NULL into column 'Id', table 'my_database.Staging.Transactions'; column does not allow nulls. INSERT fails."
Would appreciate tips on how I can change the python/sqlalchemy code to fix this.
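For reference, here is a minimal sketch (untested against your database) of one way to get create_all to emit the DEFAULT (newid()) constraint: use server_default with the mssql dialect's UNIQUEIDENTIFIER type. A local declarative_base stands in for database.base here:

```python
import uuid

import sqlalchemy as sa
from sqlalchemy.dialects import mssql
from sqlalchemy.dialects.mssql import UNIQUEIDENTIFIER
from sqlalchemy.orm import declarative_base
from sqlalchemy.schema import CreateTable

Base = declarative_base()  # stand-in for database.base.base

class Transactions(Base):
    __tablename__ = 'Transactions'
    __table_args__ = {'schema': 'Staging'}

    # server_default=text('newid()') puts the DEFAULT into the emitted DDL;
    # default=uuid.uuid4 still generates values for ORM-side inserts.
    Id = sa.Column(UNIQUEIDENTIFIER, primary_key=True,
                   default=uuid.uuid4, server_default=sa.text('newid()'))
    AccountId = sa.Column(UNIQUEIDENTIFIER)
    TransactionAmount = sa.Column(sa.Float)
    TransactionDate = sa.Column(sa.Date)

# Inspect the DDL that create_all would emit for SQL Server:
ddl = str(CreateTable(Transactions.__table__).compile(dialect=mssql.dialect()))
print(ddl)  # contains "UNIQUEIDENTIFIER" and "DEFAULT newid()"
```

With the server_default in place, INSERTs that omit [Id] no longer fail with the NULL error, because SQL Server fills the column itself.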
I've found two ways:
1)
Do not use uuid.uuid4() in the __init__ of your table class; keep it simple:
class Transactions(base):
    __tablename__ = 'Transactions'
    __table_args__ = {'schema': 'Staging'}

    Id = Column(String, primary_key=True)
    ...

    def __init__(self, Id, account_id, transaction_amount, transaction_date):
        self.Id = Id
        ...
Instead, use it when creating a new record:

import uuid
...
new_transac = Transactions(Id=uuid.uuid4(),
                           ...
                           )
db.session.add(new_transac)
db.session.commit()
Here, db is my SQLAlchemy(app) instance.
2)
Without uuid, you can use raw SQL to do the job (see SQLAlchemy : how can I execute a raw INSERT sql query in a Postgres database?).
Well... session.execute is a SQLAlchemy solution...
In your case, it should be something like this:
table = "[Staging].[Transactions]"
columns = ["[Id]", "[AccountId]", "[TransactionAmount]", "[TransactionDate]"]
values = ["(NEWID(), '" + str(form.AccountId.data) + "', " +
          str(form.TransactionAmount.data) + ", '" +
          str(form.TransactionDate.data) + "')"]
s_cols = ', '.join(columns)
s_vals = ', '.join(values)
insSQL = db.session.execute(f"INSERT INTO {table} ({s_cols}) VALUES {s_vals}")
print(insSQL)  # to see if the SQL command is OK
db.session.commit()
You have to check where single quotes are really needed.
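To sidestep the quoting question entirely, a hedged alternative is bound parameters: the driver handles all quoting, and it also prevents SQL injection. form and db are assumed to exist as in the snippet above; NEWID() still runs server-side:

```python
from sqlalchemy import text

# Placeholders (:account_id etc.) are filled in by the driver,
# which handles all quoting; NEWID() is evaluated by SQL Server.
stmt = text(
    "INSERT INTO [Staging].[Transactions] "
    "([Id], [AccountId], [TransactionAmount], [TransactionDate]) "
    "VALUES (NEWID(), :account_id, :amount, :date)"
)

# Usage (db and form as in the answer above):
# db.session.execute(stmt, {"account_id": str(form.AccountId.data),
#                           "amount": form.TransactionAmount.data,
#                           "date": form.TransactionDate.data})
# db.session.commit()
```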
Related
I'm creating a table using SQLAlchemy. The table includes a unique index, and I would like the table to be set up to ignore attempts to insert a row that is a duplicate of an existing index value. This is what my code looks like:
import os
import sqlalchemy
from sqlalchemy import (Column, String, Integer, Float, DateTime, Index)
from sqlalchemy.ext.declarative import declarative_base

db_user = 'USERNAME'
db_password = 'PASSWORD'
address = '000.000.0.0'
port = '1111'
database = 'my_db'

engine = sqlalchemy.create_engine(
    f"mariadb+mariadbconnector://{db_user}:{db_password}@{address}:{port}/{database}")

Base = declarative_base()

class Pet(Base):
    __tablename__ = 'pets'
    id = Column(Integer, primary_key=True)
    name = Column(String(8))
    weight = Column(Float)
    breed = Column(Float)
    birthday = Column(DateTime)
    __table_args__ = (Index('breed_bd', "breed", "birthday", unique=True), )

Base.metadata.create_all(engine)
Session = sqlalchemy.orm.sessionmaker()
Session.configure(bind=engine)
session = Session()
I've seen that in straight SQL, you can do things like
CREATE TABLE dbo.foo (bar int PRIMARY KEY WITH (IGNORE_DUP_KEY = ON))
or
CREATE UNIQUE INDEX UNQ_CustomerMemo ON CustomerMemo (MemoID, CustomerID)
WITH (IGNORE_DUP_KEY = ON);
I'm wondering what I should change/add in my code to accomplish something similar.
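As far as I know, SQLAlchemy has no portable flag for this (IGNORE_DUP_KEY is a SQL Server option), but on MariaDB/MySQL you can get a similar per-statement effect with INSERT IGNORE via prefix_with(). A sketch, using a minimal stand-in for the Pet model:

```python
from datetime import datetime

import sqlalchemy as sa
from sqlalchemy.dialects import mysql
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Pet(Base):  # minimal stand-in for the model in the question
    __tablename__ = 'pets'
    id = sa.Column(sa.Integer, primary_key=True)
    name = sa.Column(sa.String(8))
    breed = sa.Column(sa.Float)
    birthday = sa.Column(sa.DateTime)
    __table_args__ = (sa.Index('breed_bd', 'breed', 'birthday', unique=True),)

# prefix_with('IGNORE') renders INSERT IGNORE, which silently skips
# rows that would violate the unique index instead of raising an error.
stmt = (Pet.__table__.insert()
        .values(name='Rex', breed=1.0, birthday=datetime(2020, 1, 1))
        .prefix_with('IGNORE'))

sql = str(stmt.compile(dialect=mysql.dialect()))
print(sql)  # INSERT IGNORE INTO pets (name, breed, birthday) VALUES (...)
```

Note this is per-statement rather than baked into the table definition, so every insert that should tolerate duplicates needs the prefix.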
I have an object
class Summary():
    __tablename__ = 'employeenames'
    name = Column('employeeName', String(128, collation='utf8_bin'))
    date = Column('dateJoined', Date)
I want to patch Summary with a mock object
class Summary():
    __tablename__ = 'employeenames'
    name = Column('employeeName', String)
    date = Column('dateJoined', Date)
or just patch the name field to name = Column('employeeName', String).
The reason I'm doing this is that I run my tests in SQLite, and some queries that are MySQL-only are interfering with my tests.
I think it would be difficult to mock the column. However, you could instead conditionally compile the String type for SQLite, removing the collation.
import sqlalchemy as sa
from sqlalchemy import orm
from sqlalchemy.ext.compiler import compiles
from sqlalchemy.types import String

@compiles(String, 'sqlite')
def compile_varchar(element, compiler, **kw):
    type_expression = kw['type_expression']
    type_expression.type.collation = None
    return compiler.visit_VARCHAR(element, **kw)

Base = orm.declarative_base()

class Summary(Base):
    __tablename__ = 'employeenames'
    id = sa.Column(sa.Integer, primary_key=True)
    name = sa.Column('employeeName', sa.String(128, collation='utf8_bin'))
    date = sa.Column('dateJoined', sa.Date)

urls = ['mysql:///test', 'sqlite://']

for url in urls:
    engine = sa.create_engine(url, echo=True, future=True)
    Base.metadata.drop_all(engine)
    Base.metadata.create_all(engine)
This script produces the expected output for MySQL:
CREATE TABLE employeenames (
    id INTEGER NOT NULL AUTO_INCREMENT,
    `employeeName` VARCHAR(128) COLLATE utf8_bin,
    `dateJoined` DATE,
    PRIMARY KEY (id)
)
but removes the collation for Sqlite:
CREATE TABLE employeenames (
    id INTEGER NOT NULL,
    "employeeName" VARCHAR(128),
    "dateJoined" DATE,
    PRIMARY KEY (id)
)
I have defined my models as:
class Row(Base):
    __tablename__ = "row"
    id = Column(Integer, primary_key=True)
    key = Column(String(32))
    value = Column(String(32))
    status = Column(Boolean, default=True)
    parent_id = Column(Integer, ForeignKey("table.id"))

class Table(Base):
    __tablename__ = "table"
    id = Column(Integer, primary_key=True)
    name = Column(String(32), nullable=False, unique=True)
    rows = relationship("Row", cascade="all, delete-orphan")
To read a table from the DB, I can simply query Table and it loads all the rows owned by the table. But if I want to filter rows by status == True, it does not work. I know this is not a valid query, but I want to do something like:

    session.query(Table).filter(Table.name == name, Table.row.status == True).one()
As I was not able to make the above query work, I came up with a new solution: query the table first without loading any rows, then use the id to query Rows with filters, and then assign the results to the Table object:

    table_res = session.query(Table).options(noload('rows')).filter(Table.name == 'test').one()
    rows_res = session.query(Row).filter(Row.parent_id == 1, Row.status == True)
    table_res.rows = rows_res
But I believe there has to be a better way to do this in one shot. Suggestions?
You could try this SQLAlchemy query:
from sqlalchemy.orm import contains_eager

result = session.query(Table)\
    .options(contains_eager(Table.rows))\
    .join(Row)\
    .filter(Table.name == 'abc', Row.status == True).one()

print(result)
print(result.rows)
Which leads to this SQL:
SELECT "row".id AS row_id,
       "row"."key" AS row_key,
       "row".value AS row_value,
       "row".status AS row_status,
       "row".parent_id AS row_parent_id,
       "table".id AS table_id,
       "table".name AS table_name
FROM "table" JOIN "row" ON "table".id = "row".parent_id
WHERE "table".name = ?
  AND "row".status = 1
It does a join but also includes the contains_eager option to do it in one query. Otherwise the rows would be fetched on demand in a second query (you could specify this in the relationship as well, but this is one method of solving it).
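If you are on SQLAlchemy 1.4+, another option is with_loader_criteria, which attaches the filter to the relationship load itself so .rows only ever contains matching rows. A self-contained sketch with simplified stand-in models:

```python
import sqlalchemy as sa
from sqlalchemy import orm

Base = orm.declarative_base()

class Row(Base):  # simplified version of the model in the question
    __tablename__ = 'row'
    id = sa.Column(sa.Integer, primary_key=True)
    status = sa.Column(sa.Boolean, default=True)
    parent_id = sa.Column(sa.Integer, sa.ForeignKey('table.id'))

class Table(Base):
    __tablename__ = 'table'
    id = sa.Column(sa.Integer, primary_key=True)
    name = sa.Column(sa.String(32), nullable=False, unique=True)
    rows = orm.relationship('Row', cascade='all, delete-orphan')

engine = sa.create_engine('sqlite://')
Base.metadata.create_all(engine)
session = orm.Session(engine)
session.add(Table(name='abc', rows=[Row(status=True), Row(status=False)]))
session.commit()

# with_loader_criteria filters the relationship load itself, so the
# status == False row is never attached to result.rows.
result = (session.query(Table)
          .options(orm.with_loader_criteria(Row, Row.status == True))
          .filter(Table.name == 'abc')
          .one())
print(len(result.rows))  # 1
```

Unlike the contains_eager approach, this avoids the explicit join, though it issues the relationship load as its own (filtered) query.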
I created a table with a primary key and a sequence, but via the debugger and later looking at the table design, I see the sequence isn't applied, just created.
from sqlalchemy import create_engine, MetaData, Table, Column, Integer, String, Boolean, Sequence
from sqlalchemy.orm import mapper, sessionmaker
from sqlalchemy.ext.declarative import declarative_base
import json

class Bookmarks(object):
    pass

#----------------------------------------------------------------------
engine = create_engine('postgresql://iser:p@host/sconf', echo=True)

Base = declarative_base()

class Tramo(Base):
    __tablename__ = 'tramos'
    __mapper_args__ = {'column_prefix': 'tramos'}
    id = Column(Integer, Sequence('seq_tramos_id', start=1, increment=1), primary_key=True)
    nombre = Column(String)
    tramo_data = Column(String)
    estado = Column(Boolean, default=True)

    def __init__(self, nombre, tramo_data):
        self.nombre = nombre
        self.tramo_data = tramo_data

    def __repr__(self):
        return '[id:%d][nombre:%s][tramo:%s]' % (self.id, self.nombre, self.tramo_data)

Session = sessionmaker(bind=engine)
session = Session()

tabla = Tramo.__table__
metadata = Base.metadata
metadata.create_all(engine)
The table is just created like this:

CREATE TABLE tramos (
    id INTEGER NOT NULL,
    nombre VARCHAR,
    tramo_data VARCHAR,
    estado BOOLEAN,
    PRIMARY KEY (id)
)
I was hoping to see the declaration of the default nextval of the sequence,
but it isn't there.
I also used __mapper_args__, but it looks like it has been ignored.
Am I missing something?
I realize this is an old thread, but I stumbled on it with the same problem and was unable to find a solution anywhere else.
After some experimenting I was able to solve this with the following code:
TABLE_ID = Sequence('table_id_seq', start=1000)

class Table(Base):
    __tablename__ = 'table'
    id = Column(Integer, TABLE_ID, primary_key=True, server_default=TABLE_ID.next_value())
This way the sequence is created and is used as the default value for column id, with the same behavior as if created implicitly by SQLAlchemy.
I ran into a similar issue with composite multi-column primary keys. The SERIAL is only implicitly applied to a single column primary key. However, this behaviour can be controlled via the autoincrement argument (defaults to "auto"):
id = Column(Integer, primary_key=True, autoincrement=True)
You specified an explicit Sequence() object with a name. If you were to omit that, then SERIAL would be used for the id primary key column:

CREATE TABLE tramos (
    id SERIAL NOT NULL,
    nombre VARCHAR,
    tramo_data VARCHAR,
    estado BOOLEAN,
    PRIMARY KEY (id)
)
A DEFAULT is only generated if the column is not a primary key.
When inserting, SQLAlchemy will issue a select nextval(..) as needed to create a next value. See the PostgreSQL documentation for details.
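To see the difference without a live database, you can compile the table's DDL against the postgresql dialect. A sketch with the model trimmed to two columns:

```python
import sqlalchemy as sa
from sqlalchemy.dialects import postgresql
from sqlalchemy.orm import declarative_base
from sqlalchemy.schema import CreateTable

Base = declarative_base()

class Tramo(Base):
    __tablename__ = 'tramos'
    # No explicit Sequence(): a single-column integer primary key
    # gets SERIAL on PostgreSQL.
    id = sa.Column(sa.Integer, primary_key=True, autoincrement=True)
    nombre = sa.Column(sa.String)

ddl = str(CreateTable(Tramo.__table__).compile(dialect=postgresql.dialect()))
print(ddl)  # id SERIAL NOT NULL, ...
```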
I'm making a WebService that sends specific tables in JSON.
I use SQLAlchemy to communicate with the database.
I'd want to retrieve just the columns the user has the right to see.
Is there a way to tell SQLAlchemy not to retrieve some columns?
It's not valid SQL, but something like this:

    SELECT * EXCEPT column1 FROM table
I know it is possible to specify just some columns in the SELECT statement but it's not exactly what I want because I don't know all the table columns. I just want all the columns but some.
I also tried to get all the columns and delete the column attribute I don't want, like this:

    result = db_session.query(Table).all()
    for row in result:
        row.__delattr(column1)

but it seems SQLAlchemy doesn't allow this.
I get the warning:

    Warning: Column 'column1' cannot be null
    cursor.execute(statement, parameters)
ok
What would be the most optimized way to do this?
Thank you
You can pass in all columns in the table, except the ones you don't want, to the query method.
    session.query(*[c for c in User.__table__.c if c.name != 'password'])
Here is a runnable example:
#!/usr/bin/env python
from sqlalchemy import create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import Column, Integer, String
from sqlalchemy.orm import Session

Base = declarative_base()

class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    name = Column(String)
    fullname = Column(String)
    password = Column(String)

    def __init__(self, name, fullname, password):
        self.name = name
        self.fullname = fullname
        self.password = password

    def __repr__(self):
        return "<User('%s','%s', '%s')>" % (self.name, self.fullname, self.password)

engine = create_engine('sqlite:///:memory:', echo=True)
Base.metadata.create_all(engine)
session = Session(bind=engine)

ed_user = User('ed', 'Ed Jones', 'edspassword')
session.add(ed_user)
session.commit()

result = session.query(*[c for c in User.__table__.c if c.name != 'password']).all()
print(result)
You can make the column a deferred column. This feature allows particular columns of a table to be loaded only upon direct access, instead of when the entity is queried.
See Deferred Column Loading
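A minimal sketch of deferral (model names are made up for illustration): the deferred column is skipped in the initial SELECT and fetched by a second SELECT on first attribute access.

```python
import sqlalchemy as sa
from sqlalchemy import orm

Base = orm.declarative_base()

class User(Base):
    __tablename__ = 'users'
    id = sa.Column(sa.Integer, primary_key=True)
    name = sa.Column(sa.String)
    # deferred(): excluded from the initial SELECT, loaded on access.
    password = orm.deferred(sa.Column(sa.String))

engine = sa.create_engine('sqlite://')
Base.metadata.create_all(engine)
session = orm.Session(engine)
session.add(User(name='ed', password='secret'))
session.commit()
session.expunge_all()  # start fresh so nothing is cached

user = session.query(User).first()
was_deferred = 'password' not in user.__dict__  # True: not fetched yet
print(user.password)  # triggers a second SELECT; prints: secret
```

This keeps the column out of every query automatically, rather than having to list the wanted columns each time.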
This worked for me:

    users = db.query(models.User).filter(models.User.email != current_user.email).all()
    return users