sqlalchemy and auto increments with postgresql - python

I created a table with a primary key and a sequence, but via the debug and later looking at the table design, the sequence isn't applied, just created.
from sqlalchemy import create_engine, MetaData, Table, Column, Integer, String, Boolean, Sequence
from sqlalchemy.orm import mapper, sessionmaker
from sqlalchemy.ext.declarative import declarative_base
import json

class Bookmarks(object):
    pass

#----------------------------------------------------------------------
engine = create_engine('postgresql://iser:p#host/sconf', echo=True)
Base = declarative_base()

class Tramo(Base):
    __tablename__ = 'tramos'
    __mapper_args__ = {'column_prefix': 'tramos'}
    id = Column(Integer, Sequence('seq_tramos_id', start=1, increment=1), primary_key=True)
    nombre = Column(String)
    tramo_data = Column(String)
    estado = Column(Boolean, default=True)

    def __init__(self, nombre, tramo_data):
        self.nombre = nombre
        self.tramo_data = tramo_data

    def __repr__(self):
        return '[id:%d][nombre:%s][tramo:%s]' % (self.id, self.nombre, self.tramo_data)

Session = sessionmaker(bind=engine)
session = Session()
tabla = Tramo.__table__
metadata = Base.metadata
metadata.create_all(engine)
The table is just created like this:
CREATE TABLE tramos (
id INTEGER NOT NULL,
nombre VARCHAR,
tramo_data VARCHAR,
estado BOOLEAN,
PRIMARY KEY (id)
)
I was hoping to see the declaration of the default nextval of the sequence,
but it isn't there.
I also used __mapper_args__, but it looks like it's been ignored.
Am I missing something?

I realize this is an old thread, but I stumbled on it with the same problem and was unable to find a solution anywhere else.
After some experimenting I was able to solve this with the following code:
TABLE_ID = Sequence('table_id_seq', start=1000)

class Table(Base):
    __tablename__ = 'table'
    id = Column(Integer, TABLE_ID, primary_key=True, server_default=TABLE_ID.next_value())
This way the sequence is created and is used as the default value for column id, with the same behavior as if created implicitly by SQLAlchemy.

I ran into a similar issue with composite, multi-column primary keys. SERIAL is only applied implicitly to a single-column primary key. However, this behaviour can be controlled via the autoincrement argument (which defaults to "auto"):
id = Column(Integer, primary_key=True, autoincrement=True)
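For illustration, here is a minimal sketch of that (the model and column names are hypothetical, not from the question): with a composite primary key, no column gets SERIAL automatically, so the column that should auto-increment is marked explicitly.
import sqlalchemy as sa
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Measurement(Base):
    __tablename__ = 'measurements'
    # With a multi-column primary key, SQLAlchemy will not pick a SERIAL
    # column on its own; autoincrement=True marks this one explicitly.
    id = sa.Column(sa.Integer, primary_key=True, autoincrement=True)
    region = sa.Column(sa.String(50), primary_key=True)
    value = sa.Column(sa.Integer)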

You specified an explicit Sequence() object with a name. If you were to omit that, the id primary key column would be rendered as SERIAL:
CREATE TABLE tramos (
id SERIAL NOT NULL,
nombre VARCHAR,
tramo_data VARCHAR,
estado BOOLEAN,
PRIMARY KEY (id)
)
A DEFAULT is only generated if the column is not a primary key.
When inserting, SQLAlchemy will issue a SELECT nextval(...) as needed to fetch the next value. See the PostgreSQL documentation for details.
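As a minimal sketch of that, assuming the question's model with the named Sequence simply omitted, the single-column integer primary key is enough for the PostgreSQL dialect to emit SERIAL:
from sqlalchemy import Column, Integer, String, Boolean
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Tramo(Base):
    __tablename__ = 'tramos'
    # No explicit Sequence: on PostgreSQL this column is rendered as
    # "id SERIAL NOT NULL", which creates and owns its own sequence.
    id = Column(Integer, primary_key=True)
    nombre = Column(String)
    tramo_data = Column(String)
    estado = Column(Boolean, default=True)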

Related

etl-process: from python-dataframe to postgres with SQLAlchemy

I want to create tables in a Postgres database using Python's SQLAlchemy package and insert data from a dataframe into them. I also want to assign foreign keys and primary keys.
The following code creates the two tables, but in the schema "public" instead of the schema "mein_schema".
Can anyone find the error?
from sqlalchemy import create_engine, Column, Integer, String, ForeignKey, MetaData
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship

# Define the base class for all tables
metadata_obj = MetaData(schema="mein_schema")
Base = declarative_base(metadata_obj)

# Define table 2
class Tabelle2(Base):
    __tablename__ = 'tabelle2'
    id = Column(Integer, primary_key=True)
    name = Column(String)

# Define table 1
class Tabelle1(Base):
    __tablename__ = 'tabelle1'
    id = Column(Integer, primary_key=True)
    name = Column(String)
    tabelle2_id = Column(Integer, ForeignKey('tabelle2.id'))
    tabelle2 = relationship(Tabelle2)

# Create all tables in the database
Base.metadata.create_all(engine)
You are passing metadata_obj as a positional argument to declarative_base instead of assigning it to the metadata keyword parameter:
Base = declarative_base(metadata=metadata_obj)
Without the keyword, declarative_base never knows to use the specified schema and falls back to the default schema of the connection.
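A minimal sketch of the corrected setup, with the model list abbreviated and a placeholder connection URL:
from sqlalchemy import create_engine, Column, Integer, String, MetaData
from sqlalchemy.ext.declarative import declarative_base

metadata_obj = MetaData(schema="mein_schema")
Base = declarative_base(metadata=metadata_obj)  # keyword argument, not positional

class Tabelle2(Base):
    __tablename__ = 'tabelle2'
    id = Column(Integer, primary_key=True)
    name = Column(String)

engine = create_engine("postgresql://user:password@host/dbname")  # placeholder URL
# Now emitted as CREATE TABLE mein_schema.tabelle2 (...)
Base.metadata.create_all(engine)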

Patch unwanted column attribute

I have an object
class Summary():
    __tablename__ = 'employeenames'
    name = Column('employeeName', String(128, collation='utf8_bin'))
    date = Column('dateJoined', Date)
I want to patch Summary with a mock object
class Summary():
    __tablename__ = 'employeenames'
    name = Column('employeeName', String)
    date = Column('dateJoined', Date)
or just patch the name field to name = Column('employeeName', String).
The reason I'm doing this is that my tests run in SQLite, and some queries that are only for MySQL interfere with them.
I think it would be difficult to mock the column. However you could instead conditionally compile the String type for Sqlite, removing the collation.
import sqlalchemy as sa
from sqlalchemy import orm
from sqlalchemy.ext.compiler import compiles
from sqlalchemy.types import String

@compiles(String, 'sqlite')
def compile_varchar(element, compiler, **kw):
    type_expression = kw['type_expression']
    type_expression.type.collation = None
    return compiler.visit_VARCHAR(element, **kw)

Base = orm.declarative_base()

class Summary(Base):
    __tablename__ = 'employeenames'
    id = sa.Column(sa.Integer, primary_key=True)
    name = sa.Column('employeeName', sa.String(128, collation='utf8_bin'))
    date = sa.Column('dateJoined', sa.Date)

urls = ['mysql:///test', 'sqlite://']

for url in urls:
    engine = sa.create_engine(url, echo=True, future=True)
    Base.metadata.drop_all(engine)
    Base.metadata.create_all(engine)
This script produces the expected output for MySQL:
CREATE TABLE employeenames (
id INTEGER NOT NULL AUTO_INCREMENT,
`employeeName` VARCHAR(128) COLLATE utf8_bin,
`dateJoined` DATE,
PRIMARY KEY (id)
)
but removes the collation for Sqlite:
CREATE TABLE employeenames (
id INTEGER NOT NULL,
"employeeName" VARCHAR(128),
"dateJoined" DATE,
PRIMARY KEY (id)
)

Change SQLAlchemy Primary Key after it has been defined

Problem: Simply put, I am trying to redefine a SQLAlchemy ORM table's primary key after it has already been defined.
Example:
class Base:
    @declared_attr
    def __tablename__(cls):
        return f"{cls.__name__}"

    @declared_attr
    def id(cls):
        return Column(Integer, cls.seq, unique=True,
                      autoincrement=True, primary_key=True)

Base = declarative_base(cls=Base)

class A_Table(Base):
    newPrimaryKeyColumnsDerivedFromAnotherFunction = []
    # Please note: as the variable name tries to say,
    # these columns are auto-generated and not known until after all
    # ORM classes (models) are defined

# OTHER CLASSES

def changePriKeyFunc(model):
    pass  # DO STUFF

# Then do
Base.metadata.create_all(bind=arbitraryEngine)
# After everything has been altered and tied into a little bow
*Please note, this is a simplification of the true problem I am trying to solve.
Possible Solution: Your first thought might have been to do something like this:
def possibleSolution(model):
    for pricol in model.__table__.primary_key:
        pricol.primary_key = False
    model.__table__.primary_key = PrimaryKeyConstraint(
        *model.newPrimaryKeyColumnsDerivedFromAnotherFunction,
        # TODO: ADD all the columns that are in the model that are also a primary key
        # *[col for col in model.__table__.c if col.primary_key]
    )
But, this doesn't work, because when trying to add, flush, and commit, an error gets thrown:
InvalidRequestError: Instance <B_Table at 0x104aa1d68> cannot be refreshed -
it's not persistent and does not contain a full primary key.
Even though this:
In [2]: B_Table.__table__.primary_key
Out[2]: PrimaryKeyConstraint(Column('a_TableId', Integer(),
ForeignKey('A_Table.id'), table=<B_Table>,
primary_key=True, nullable=False))
as well as this:
In [3]: B_Table.__table__
Out[3]: Table('B_Table', MetaData(bind=None),
Column('id', Integer(), table=<B_Table>, nullable=False,
default=Sequence('test_1', start=1, increment=1,
metadata=MetaData(bind=None))),
Column('a_TableId', Integer(),
ForeignKey('A_Table.id'), table=<B_Table>,
primary_key=True, nullable=False),
schema=None)
and finally:
In [5]: b.a_TableId
Out[5]: 1
Also note that the database actually reflects the changed (and true) primary key, so I know that there's something going on with the ORM/SQLAlchemy.
Question: In summary, how can I change the model's primary key after the model has already been defined?
edit: See below for full code (same type of error, just in SQLite)
from sqlalchemy import Column, Integer, ForeignKey
from sqlalchemy.orm import relationship, sessionmaker
from sqlalchemy.ext.declarative import declared_attr, declarative_base
from sqlalchemy.schema import PrimaryKeyConstraint
from sqlalchemy import Sequence, create_engine

class Base:
    @declared_attr
    def __tablename__(cls):
        return f"{cls.__name__}"

    @declared_attr
    def seq(cls):
        return Sequence("test_1", start=1, increment=1)

    @declared_attr
    def id(cls):
        return Column(Integer, cls.seq, unique=True, autoincrement=True, primary_key=True)

Base = declarative_base(cls=Base)

def relate(model, x):
    """Model is the original class, x is what class needs to be as
    an attribute for model"""
    attributeName = x.__tablename__
    idAttributeName = "{}Id".format(attributeName)
    setattr(model, idAttributeName,
            Column(ForeignKey(x.id)))
    setattr(model, attributeName,
            relationship(x,
                         foreign_keys=getattr(model, idAttributeName),
                         primaryjoin=getattr(
                             model, idAttributeName) == x.id,
                         remote_side=x.id
                         )
            )
    return model.__table__.c[idAttributeName]

def possibleSolution(model):
    if len(model.defined):
        newPriCols = []
        for x in model.defined:
            newPriCols.append(relate(model, x))
        for priCol in model.__table__.primary_key:
            priCol.primary_key = False
            priCol.nullable = True
        model.__table__.primary_key = PrimaryKeyConstraint(
            *newPriCols
            # TODO: ADD all the columns that are in the model that are also a primary key
            # *[col for col in model.__table__.c if col.primary_key]
        )

class A_Table(Base):
    pass

class B_Table(Base):
    defined = [A_Table]

possibleSolution(B_Table)

engine = create_engine('sqlite://')
Base.metadata.create_all(bind=engine)

Session = sessionmaker(bind=engine)
session = Session()

a = A_Table()
b = B_Table(A_TableId=a.id)

print(B_Table.__table__.primary_key)

session.add(a)
session.commit()
session.add(b)
session.commit()
Originally, the error you say the PK reassignment is causing is:
InvalidRequestError: Instance <B_Table at 0x104aa1d68> cannot be refreshed -
it's not persistent and does not contain a full primary key.
I don't get that running your MCVE; instead, I get a pretty helpful warning first:
SAWarning: Column 'B_Table.A_TableId' is marked as a member of the
primary key for table 'B_Table', but has no Python-side or server-side
default generator indicated, nor does it indicate 'autoincrement=True'
or 'nullable=True', and no explicit value is passed. Primary key
columns typically may not store NULL.
And a very detailed exception message when the script fails:
sqlalchemy.orm.exc.FlushError: Instance has
a NULL identity key. If this is an auto-generated value, check that
the database table allows generation of new primary key values, and
that the mapped Column object is configured to expect these generated
values. Ensure also that this flush() is not occurring at an
inappropriate time, such as within a load() event.
So assuming that the example accurately describes your problem, the answer is straightforward. A primary key cannot be null.
A_Table inherits off Base:
class A_Table(Base):
    pass
Base gives A_Table an autoincrement PK through declared_attr id():
@declared_attr
def id(cls):
    return Column(Integer, cls.seq, unique=True, autoincrement=True, primary_key=True)
Similarly, B_Table is defined off Base but the PK is overwritten in possibleSolution() such that it becomes a ForeignKey to A_Table:
PrimaryKeyConstraint(Column('A_TableId', Integer(), ForeignKey('A_Table.id'), table=<B_Table>, primary_key=True, nullable=False))
Then, we instantiate an instance of A_Table without any kwargs and immediately allocate the id attribute of instance a to field A_TableId when constructing b:
a = A_Table()
b = B_Table(A_TableId=a.id)
At this point we can stop and inspect the attribute values of each:
print(a.id, b.A_TableId)
# None None
a.id is None because it's an autoincrement which needs to be populated by the database, not the ORM. So SQLAlchemy doesn't know its value until after the instance is flushed to the database.
So what happens if we include a flush() operation after adding instance a to the session:
a = A_Table()
session.add(a)
session.flush()
b = B_Table(A_TableId=a.id)
print(a.id, b.A_TableId)
# 1 1
So by issuing the flush first, we've got a value for a.id, meaning that we also have a value for b.A_TableId.
session.add(b)
session.commit()
# no error

How to correctly add Foreign Key constraints to SQLite DB using SQLAlchemy [duplicate]

This question already has answers here:
Sqlite / SQLAlchemy: how to enforce Foreign Keys?
(9 answers)
Closed 3 years ago.
I'm very new to SQLAlchemy and I'm trying to figure it out.
Please consider the following test setup:
# (imports and declarative Base assumed by the snippet)
import sqlalchemy
import sqlalchemy.dialects.sqlite
import sqlalchemy.orm
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Nine(Base):
    __tablename__ = 'nine'
    __table_args__ = (sqlalchemy.sql.schema.UniqueConstraint('nine_b', name='uq_nine_b'), )
    nine_a = sqlalchemy.Column(sqlalchemy.dialects.sqlite.INTEGER(), primary_key=True, autoincrement=False, nullable=False)
    nine_b = sqlalchemy.Column(sqlalchemy.String(20), nullable=False)

class Seven(Base):
    __tablename__ = 'seven'
    __table_args__ = (sqlalchemy.sql.schema.PrimaryKeyConstraint('seven_a', 'seven_b'),
                      sqlalchemy.sql.schema.Index('fk_seven_c_nine_a_idx', 'seven_c'),)
    seven_a = sqlalchemy.Column(sqlalchemy.dialects.sqlite.INTEGER(), nullable=False)
    seven_b = sqlalchemy.Column(sqlalchemy.dialects.sqlite.INTEGER(), nullable=False)
    seven_c = sqlalchemy.Column(sqlalchemy.dialects.sqlite.INTEGER(), sqlalchemy.ForeignKey('nine.nine_a'), nullable=False)
    seven_d = sqlalchemy.Column(sqlalchemy.dialects.sqlite.INTEGER(), nullable=False)
    nine = sqlalchemy.orm.relationship(Nine, backref=sqlalchemy.orm.backref('seven'), uselist=False)

class Three(Base):
    __tablename__ = 'three'
    __table_args__ = (sqlalchemy.sql.schema.UniqueConstraint('three_b', 'three_c', name='uq_three_b_c'),
                      sqlalchemy.sql.schema.Index('fk_three_c_seven_a_idx', 'three_c'), )
    three_a = sqlalchemy.Column(sqlalchemy.dialects.sqlite.INTEGER(), primary_key=True, autoincrement=True, nullable=False)
    three_b = sqlalchemy.Column(sqlalchemy.dialects.sqlite.INTEGER(), nullable=False)
    three_c = sqlalchemy.Column(sqlalchemy.dialects.sqlite.INTEGER(), sqlalchemy.ForeignKey('seven.seven_a'), nullable=False)
    seven = sqlalchemy.orm.relationship(Seven, backref=sqlalchemy.orm.backref('three'), uselist=False)
That translates into the following DDLs:
CREATE TABLE nine (
nine_a INTEGER NOT NULL,
nine_b VARCHAR(20) NOT NULL,
PRIMARY KEY (nine_a),
CONSTRAINT uq_nine_b UNIQUE (nine_b)
);
CREATE TABLE seven (
seven_a INTEGER NOT NULL,
seven_b INTEGER NOT NULL,
seven_c INTEGER NOT NULL,
seven_d INTEGER NOT NULL,
PRIMARY KEY (seven_a, seven_b),
FOREIGN KEY(seven_c) REFERENCES nine (nine_a)
);
CREATE INDEX fk_seven_c_nine_a_idx ON seven (seven_c);
CREATE TABLE three (
three_a INTEGER NOT NULL,
three_b INTEGER NOT NULL,
three_c INTEGER NOT NULL,
PRIMARY KEY (three_a),
CONSTRAINT uq_three_b_c UNIQUE (three_b, three_c),
FOREIGN KEY(three_c) REFERENCES seven (seven_a)
);
CREATE INDEX fk_three_c_seven_a_idx ON three (three_c);
All tables are empty. Then the following statements are executed:
session.add(Nine(nine_a=1, nine_b='something'))
session.add(Nine(nine_a=2, nine_b='something else'))
session.commit()
session.add(Seven(seven_a=7, seven_b=7, seven_c=7, seven_d=7))
session.commit()
session.add(Three(three_a=3, three_b=3, three_c=3))
session.commit()
Can somebody please explain why the above code snippet executes without errors? Shouldn't the FK constraints prevent inserting a new row into seven or three? I assume there is something wrong with how the FKs are described in the classes themselves, but I don't know where the problem is (and how to fix it).
[Edit 1]
Adding __table_args__ for all classes (forgot to include them).
[Edit 2]
Adding DDLs for further reference.
SQLite by default does not enforce foreign key constraints (see http://www.sqlite.org/pragma.html#pragma_foreign_keys).
To enable them, follow the docs here: http://docs.sqlalchemy.org/en/latest/dialects/sqlite.html#foreign-key-support
Here's a copy-paste of the official documentation:
SQLite supports FOREIGN KEY syntax when emitting CREATE statements for tables, however by default these constraints have no effect on the operation of the table.
Constraint checking on SQLite has three prerequisites:
At least version 3.6.19 of SQLite must be in use
The SQLite library must be compiled without the SQLITE_OMIT_FOREIGN_KEY or SQLITE_OMIT_TRIGGER symbols enabled.
The PRAGMA foreign_keys = ON statement must be emitted on all connections before use.
SQLAlchemy allows for the PRAGMA statement to be emitted automatically for new connections through the usage of events:
from sqlalchemy.engine import Engine
from sqlalchemy import event

@event.listens_for(Engine, "connect")
def set_sqlite_pragma(dbapi_connection, connection_record):
    cursor = dbapi_connection.cursor()
    cursor.execute("PRAGMA foreign_keys=ON")
    cursor.close()
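With that listener registered, a rough sketch (reusing Base, Nine, and Seven from the question; the in-memory SQLite URL is just for illustration) shows the foreign key now being enforced:
from sqlalchemy import create_engine
from sqlalchemy.exc import IntegrityError
from sqlalchemy.orm import sessionmaker

engine = create_engine('sqlite://')  # the "connect" listener fires for this engine too
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

session.add(Nine(nine_a=1, nine_b='something'))
session.commit()

try:
    # seven_c=7 matches no existing nine.nine_a, so SQLite should now reject it
    session.add(Seven(seven_a=7, seven_b=7, seven_c=7, seven_d=7))
    session.commit()
except IntegrityError as exc:
    session.rollback()
    print("rejected:", exc.orig)  # FOREIGN KEY constraint failed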

Problem with sqlalchemy, reflected table and defaults for string fields

Hmm, is there any reason why SA tries to add Nones for varchar columns that have defaults set in the database schema? It doesn't do that for floats or ints (I'm using reflection).
So when I try to add a new row, like:
u = User()
u.foo = 'a'
u.bar = 'b'
SA issues a query that has a lot more columns, with None values assigned to them, and the DB obviously barfs and doesn't perform default substitution.
What version do you use, and what is the actual code? Below is sample code showing that the server_default parameter works fine for string fields:
from sqlalchemy import *
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

metadata = MetaData()
Base = declarative_base(metadata=metadata)

class Item(Base):
    __tablename__ = "items"
    id = Column(String, primary_key=True)
    int_val = Column(Integer, nullable=False, server_default='123')
    str_val = Column(String, nullable=False, server_default='abc')

engine = create_engine('sqlite://', echo=True)
metadata.create_all(engine)

session = sessionmaker(engine)()
item = Item(id='foo')
session.add(item)
session.commit()
print(item.int_val, item.str_val)
The output is:
<...>
<...> INSERT INTO items (id) VALUES (?)
<...> ['foo']
<...>
123 abc
I've found it's a bug in SA: this happens only for string fields, which don't get the server_default property for some unknown reason. I've already filed a ticket for it.
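If you want to verify this on your side, a small diagnostic sketch (the connection URL and table name are placeholders) prints the server_default that reflection picked up for each column; affected string columns show up with None:
from sqlalchemy import MetaData, create_engine

engine = create_engine('postgresql://user:password@host/dbname')  # placeholder URL
metadata = MetaData()
metadata.reflect(bind=engine, only=['users'])  # placeholder table name

for column in metadata.tables['users'].columns:
    # Columns whose defaults were not reflected show server_default=None
    print(column.name, repr(column.server_default))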
