Replace integer id field with uuid - python

I need to replace the default integer id in my model with a UUID. The problem is that it's being used in another model (as a foreign key).
Any idea on how to perform this operation without losing data?
class A(Base):
    __tablename__ = 'a'
    b_id = Column(
        GUID(), ForeignKey('b.id'), nullable=False,
        server_default=text("uuid_generate_v4()")
    )

class B(Base):
    __tablename__ = 'b'
    id = Column(
        GUID(), primary_key=True,
        server_default=text("uuid_generate_v4()")
    )
Unfortunately it doesn't work, and I'm afraid I'll break the relation:
sqlalchemy.exc.ProgrammingError: (psycopg2.ProgrammingError) default for column "id" cannot be cast automatically to type uuid
The Alembic migration I've tried looks similar to this:
op.execute('ALTER TABLE a ALTER COLUMN b_id SET DATA TYPE UUID USING (uuid_generate_v4())')

Add an id_tmp column to b with autogenerated UUID values, and a b_id_tmp column to a. Update a joining b on the foreign key to fill a.b_id_tmp with the corresponding UUIDs. Then drop a.b_id and b.id, rename the added columns, and reestablish the primary key and foreign key.
CREATE TABLE a(id int PRIMARY KEY, b_id int);
CREATE TABLE b(id int PRIMARY KEY);
ALTER TABLE a ADD CONSTRAINT a_b_id_fkey FOREIGN KEY(b_id) REFERENCES b(id);
INSERT INTO b VALUES (1), (2), (3);
INSERT INTO a VALUES (1, 1), (2, 2), (3, 2);
ALTER TABLE b ADD COLUMN id_tmp UUID NOT NULL DEFAULT uuid_generate_v1mc();
ALTER TABLE a ADD COLUMN b_id_tmp UUID;
UPDATE a SET b_id_tmp = b.id_tmp FROM b WHERE b.id = a.b_id;
ALTER TABLE a DROP COLUMN b_id;
ALTER TABLE a RENAME COLUMN b_id_tmp TO b_id;
ALTER TABLE b DROP COLUMN id;
ALTER TABLE b RENAME COLUMN id_tmp TO id;
ALTER TABLE b ADD PRIMARY KEY (id);
ALTER TABLE a ADD CONSTRAINT b_id_fkey FOREIGN KEY(b_id) REFERENCES b(id);
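Translated into an Alembic upgrade, the same sequence might look roughly like this. This is only a sketch based on the models above: it assumes the PostgreSQL uuid-ossp extension is installed and simply mirrors the SQL steps, so adjust names and nullability to your schema.

import sqlalchemy as sa
from alembic import op
from sqlalchemy.dialects.postgresql import UUID


def upgrade():
    # Add the replacement UUID columns next to the old integer ones.
    op.add_column('b', sa.Column('id_tmp', UUID(),
                                 server_default=sa.text('uuid_generate_v1mc()'),
                                 nullable=False))
    op.add_column('a', sa.Column('b_id_tmp', UUID()))

    # Copy the new UUIDs across the existing integer foreign key.
    op.execute('UPDATE a SET b_id_tmp = b.id_tmp FROM b WHERE b.id = a.b_id')

    # Swap the old columns for the new ones and restore the constraints.
    op.drop_column('a', 'b_id')
    op.alter_column('a', 'b_id_tmp', new_column_name='b_id')
    op.drop_column('b', 'id')
    op.alter_column('b', 'id_tmp', new_column_name='id')
    op.create_primary_key('b_pkey', 'b', ['id'])
    op.create_foreign_key('b_id_fkey', 'a', 'b', ['b_id'], ['id'])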
Just as an aside, it's more efficient to index v1 UUIDs than v4 since they contain some reproducible information, which you'll notice if you generate several in a row. That's a minor savings unless you need the higher randomness for external security reasons.

Related

Can we do autoincrement strings in sqlite3?

Can we do autoincrement strings in sqlite3? If not, how can we do that?
Example:
RY001
RY002
...
With Python, I can do it easily with print("RY" + str(rowid + 1)). But what about its performance?
Thank you
If your version of SQLite is 3.31.0+ you can have a generated column, stored or virtual:
CREATE TABLE tablename(
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    str_id TEXT GENERATED ALWAYS AS (printf('RY%03d', id)),
    <other columns>
);
The column id is declared as the primary key of the table and AUTOINCREMENT makes sure that no missing id value (because of deletions) will ever be reused.
The column str_id is generated for each new row as the concatenation of 'RY' and the value of id left-padded with zeros.
As it is, str_id will be VIRTUAL, meaning that it will be created every time you query the table.
If you add STORED to its definition:
str_id TEXT GENERATED ALWAYS AS (printf('RY%03d', id)) STORED
it will be stored in the table.
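A minimal sketch of what this looks like from Python's sqlite3 module (assuming the underlying SQLite library is 3.31.0+; the items table and name column are made up for the example):

import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute("""
    CREATE TABLE items(
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        name TEXT,
        str_id TEXT GENERATED ALWAYS AS (printf('RY%03d', id))
    )
""")
conn.executemany("INSERT INTO items(name) VALUES (?)",
                 [('first',), ('second',), ('third',)])
for row in conn.execute("SELECT str_id, name FROM items"):
    print(row)   # ('RY001', 'first'), ('RY002', 'second'), ('RY003', 'third')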
Something like this?

select printf('RY%03d', rowid) as "id", *
from myTable

sqlalchemy: order of query result unexpected

I'm using SQLAlchemy with MySQL and have a table with two foreign keys:
class CampaignCreativeLink(db.Model):
    __tablename__ = 'campaign_creative_links'
    campaign_id = db.Column(db.Integer, db.ForeignKey('campaigns.id'),
                            primary_key=True)
    creative_id = db.Column(db.Integer, db.ForeignKey('creatives.id'),
                            primary_key=True)
Then I use a for loop to insert 3 items into the table like this:
session.add(CampaignCreativeLink(campaign_id=8, creative_id=3))
session.add(CampaignCreativeLink(campaign_id=8, creative_id=2))
session.add(CampaignCreativeLink(campaign_id=8, creative_id=1))
session.commit()
But when I check the table, the rows are in reverse order:
8 1
8 2
8 3
And a query returns them in reverse order too. What's the reason for this, and how can I keep the order the same as when they were added?
A table is a set of rows, and rows are therefore not guaranteed to come back in any particular order unless you specify ORDER BY.
In MySQL (InnoDB), the primary key acts as the clustered index. This means the rows are physically stored in the order specified by the primary key, in this case (campaign_id, creative_id), regardless of the order of insertion. That is usually the order the rows are returned in if you don't specify an ORDER BY.
If you need your rows returned in a certain order, specify ORDER BY when you query.
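For example, to get the rows back in the order they were inserted in the snippet above (descending creative_id), a sketch using the question's model and session:

links = (
    session.query(CampaignCreativeLink)
    .filter_by(campaign_id=8)
    .order_by(CampaignCreativeLink.creative_id.desc())
    .all()
)

If the insertion order doesn't correspond to any existing column, you would need to add one (an autoincrementing id or a timestamp, for instance) and order by that instead.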

Store array in mySql

I have a table in MySQL with restaurants, and I want to store an array of the categories each restaurant falls under. How should I do this, since MySQL doesn't have an array type? What I want is something like this:
id | name     | categories
 1 | chipotle | [mexican, fast_food]
How can I do this?
Use separate food and category tables plus a junction table with foreign keys:
create table food
(
    id int auto_increment primary key,
    name varchar(100) not null
);

create table category
(
    id int auto_increment primary key,
    name varchar(100) not null
);

create table fc_junction
(   -- Food/Category junction table:
    -- if a row exists here, then the food and category intersect
    id int auto_increment primary key,
    foodId int not null,
    catId int not null,

    -- the unique key below makes certain there are no duplicates for the combo
    -- duplicates = trash
    unique key uk_blahblah (foodId, catId),

    -- Below are the foreign key (FK) constraints, part of referential integrity (RI).
    -- A row cannot be inserted or updated here with a faulty foodId or catId,
    -- and the parent rows (in food and category) cannot be deleted while children
    -- (rows here in fc_junction) still reference them, which would orphan them.
    CONSTRAINT fc_food FOREIGN KEY (foodId) REFERENCES food(id),
    CONSTRAINT fc_cat FOREIGN KEY (catId) REFERENCES category(id)
);
So you are free to add foods and categories and hook them up later via the junction table. You can create chipotle, burritos, hotdogs, lemonade, etc. And in this model (the generally accepted way; don't do it any other way), you do not need to know what categories the foods are in until whenever you feel like it.
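For example, hooking foods up to categories and reading them back might look like this from Python with SQLAlchemy (a sketch; the connection URL, driver, and ids are placeholders):

from sqlalchemy import create_engine, text

engine = create_engine('mysql+pymysql://user:password@localhost/mydb')

with engine.begin() as conn:
    # Put chipotle (foodId 1) into the mexican category (catId 2).
    conn.execute(text("INSERT INTO fc_junction(foodId, catId) VALUES (:food, :cat)"),
                 {"food": 1, "cat": 2})

    # List every food with all of its categories.
    rows = conn.execute(text("""
        SELECT f.name AS food, GROUP_CONCAT(c.name) AS categories
        FROM food f
        JOIN fc_junction j ON j.foodId = f.id
        JOIN category c ON c.id = j.catId
        GROUP BY f.id, f.name
    """))
    for row in rows:
        print(row.food, row.categories)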
In the original comma-separated way (a.k.a. the wrong way), you have zero RI and you can bet there will be no use of fast indexes. Plus getting to your data, modifying, deleting a category, adding one, all of that is a kludge and there is much snarling and gnashing of teeth.
See How to store arrays in MySQL? You need to create a separate table and use a join.
Or you can use Postgres, which has a native array type: http://www.postgresql.org/docs/9.4/static/arrays.html

Creating partial unique index with sqlalchemy on Postgres

SQLAlchemy supports creating partial indexes in postgresql.
Is it possible to create a partial unique index through SQLAlchemy?
Imagine a table/model as so:
class ScheduledPayment(Base):
    invoice_id = Column(Integer)
    is_canceled = Column(Boolean, default=False)
I'd like a unique index where there can be only one "active" ScheduledPayment for a given invoice.
I can create this manually in postgres:
CREATE UNIQUE INDEX only_one_active_invoice on scheduled_payment
(invoice_id, is_canceled) where not is_canceled;
I'm wondering how I can add that to my SQLAlchemy model using SQLAlchemy 0.9.
class ScheduledPayment(Base):
    __tablename__ = 'scheduled_payment'

    id = Column(Integer, primary_key=True)
    invoice_id = Column(Integer)
    is_canceled = Column(Boolean, default=False)

    __table_args__ = (
        Index('only_one_active_invoice', invoice_id, is_canceled,
              unique=True,
              postgresql_where=(~is_canceled)),
    )
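A quick sketch of how the index then behaves at runtime (the connection URL is a placeholder; it assumes the table and index were created, e.g. with Base.metadata.create_all()):

from sqlalchemy import create_engine
from sqlalchemy.exc import IntegrityError
from sqlalchemy.orm import Session

engine = create_engine('postgresql+psycopg2://user:password@localhost/mydb')
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(ScheduledPayment(invoice_id=1))                    # active
    session.add(ScheduledPayment(invoice_id=1, is_canceled=True))  # canceled, allowed
    session.commit()

    session.add(ScheduledPayment(invoice_id=1))  # second active payment for invoice 1
    try:
        session.commit()
    except IntegrityError:
        # Rejected by the partial unique index only_one_active_invoice.
        session.rollback()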
In case someone stops by looking to set up a partial unique constraint with a column that can optionally be NULL, here's how:
__table_args__ = (
    db.Index(
        'uk_providers_name_category',
        'name', 'category',
        unique=True,
        postgresql_where=(user_id.is_(None))),
    db.Index(
        'uk_providers_name_category_user_id',
        'name', 'category', 'user_id',
        unique=True,
        postgresql_where=(user_id.isnot(None))),
)
where user_id is a column that can be NULL and I want a unique constraint enforced across all three columns (name, category, user_id) with NULL just being one of the allowed values for user_id.
To add to the answer by sas, postgresql_where does not seem to be able to accept multiple booleans. So in the situation where you have two nullable columns (say an additional price column), it is not possible to have four partial indexes covering all combinations of NULL/NOT NULL.
One workaround is to use default values which would never be 'valid' (e.g. -1 for price or '' for a Text column). These compare as equal, so no more than one row is allowed per combination of these default values.
Obviously, you will also need to set this default value on all existing rows of data (if applicable).
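A sketch of that workaround in the same Flask-SQLAlchemy style (the Provider model, price column, and sentinel values are hypothetical): with sentinels instead of NULLs, one ordinary unique index covers every combination.

class Provider(db.Model):
    __tablename__ = 'providers'

    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.Text, nullable=False)
    category = db.Column(db.Text, nullable=False)
    # Sentinel defaults stand in for NULL so the unique index always applies.
    user_id = db.Column(db.Integer, nullable=False, server_default='-1')
    price = db.Column(db.Numeric, nullable=False, server_default='-1')

    __table_args__ = (
        db.Index('uk_providers_name_category_user_id_price',
                 'name', 'category', 'user_id', 'price',
                 unique=True),
    )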

sqlalchemy create a foreign key?

I have a composite PK in table Strings (integer id, varchar(2) lang)
I want to create an FK from other tables to ONLY the id half of the PK. This means I'd potentially have many rows in the Strings table (translations) matching the FK. I just need to store the id and have referential integrity maintained by the DB.
Is this possible? If so, how?
This is from Wikipedia:

The columns in the referencing table must reference the primary key or another candidate key in the referenced table. The values in one row of the referencing columns must occur in a single row in the referenced table.
Let's say you have this:
id | var
1 | 10
1 | 11
2 | 10
A foreign key must reference exactly one row in the referenced table, which is why it usually references the primary key.
In your case you need to make another table, Table1(id), that stores the ids, with that column as a unique key or primary key. The id column in your current table is not unique, so you can't reference it directly. Make id the primary key of Table1, make the id in your current table a foreign key to Table1, and point the foreign keys from your other tables at Table1.id; the composite primary key in your current table stays as it is.
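A sketch of that arrangement in SQLAlchemy (class and table names here are hypothetical, not from the question):

from sqlalchemy import Column, ForeignKey, Integer, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class StringId(Base):
    # Parent table holding just the ids, so other tables can reference them alone.
    __tablename__ = 'string_ids'
    id = Column(Integer, primary_key=True)

class StringTranslation(Base):
    # The original composite-PK table; its id now also references string_ids.
    __tablename__ = 'strings'
    id = Column(Integer, ForeignKey('string_ids.id'), primary_key=True)
    lang = Column(String(2), primary_key=True)
    text = Column(String, nullable=False)

class Article(Base):
    # Any other table stores only the id half, and the DB keeps it consistent.
    __tablename__ = 'articles'
    id = Column(Integer, primary_key=True)
    title_id = Column(Integer, ForeignKey('string_ids.id'), nullable=False)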
