I want to create 200+ tables on the fly using the declarative base. I learnt it's not possible, so my idea was to create a common table and rename it 200+ times.
class Movie(Base):
    __tablename__ = 'titanic'

    id = Column(Integer, primary_key=True)
    title = Column(String)
    release_date = Column(Date)
    name = Column(String)

    def __init__(self, newname, title, release_date):
        # 'newname' is the intended new table name; it is unused so far
        self.title = title
        self.release_date = release_date
What is the code to change the table name from "titanic" to "wild"?
In PostgreSQL it is:
ALTER TABLE table_name
RENAME TO new_table_name;
I am not finding a solution in sqlalchemy.
There are no foreign keys to this table.
The objective of this question is to rename an existing table through a solution (if one is available) in SQLAlchemy, not in a purely Python way (as mentioned in the other question).
The easiest way to rename a table is to create a new table and dump the data into it with an INSERT INTO ... SELECT statement.
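A minimal sketch of that copy-based approach, assuming a PostgreSQL database and an existing titanic table (the connection URL is a placeholder):

from sqlalchemy import create_engine, text

engine = create_engine("postgresql://user:pass@localhost/mydb")  # placeholder URL

with engine.begin() as conn:  # run all three statements in one transaction
    conn.execute(text("CREATE TABLE wild (LIKE titanic INCLUDING ALL)"))  # PostgreSQL-specific
    conn.execute(text("INSERT INTO wild SELECT * FROM titanic"))
    conn.execute(text("DROP TABLE titanic"))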
You must issue the appropriate ALTER statements to your database to change the name of the table. As far as the Table metadata itself, you can attempt to set table.name = 'newname', and re-place the Table object within metadata.tables with its new name, but this may have lingering side effects regarding foreign keys that reference the old name. In general, the pattern is not supported - its intended that a SQLAlchemy application runs with a fixed database structure (only new tables can be added on the fly).
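In practice, then, the rename is issued as raw DDL through SQLAlchemy; a hedged sketch, reusing the engine from the previous snippet:

from sqlalchemy import text

with engine.begin() as conn:
    conn.execute(text("ALTER TABLE titanic RENAME TO wild"))

# Optionally re-point the Python-side Table object at the new name; as the
# quote above notes, this pattern is not officially supported and may leave
# stale references behind.
Movie.__table__.name = 'wild'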
I created two tables for my truck scheduling application:
class appts_db(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    carrier = db.Column(db.String(100))
    material = db.Column(db.String(10))
    pickup_date = db.Column(db.String(10))

class carriers_db(db.Model):
    carrier_id = db.Column(db.Integer, primary_key=True)
    carrier = db.Column(db.String(100))
    phone_number = db.Column(db.String(15))
How can I rename the column carrier to carrier_name in both tables to make it clearer what the columns contain? I tried using the command prompt
>python3
>db.create_all()
But the column name doesn't update. Is there some command that I'm missing that can update the column name in the db?
(1.) This seems to be a question about "how to migrate a single table?", twice. That is, whatever answer works for appts_db will also need to be applied to carriers_db -- I don't see a FK relation so I think most technical solutions would need to be manually told about that 2nd rename.
(2.) There are many nice "version my schema!" approaches, including the usual Ruby-on-Rails one. Here, I recommend alembic. It takes some getting used to, but once implemented it lets you roll forward / roll back in time, and table schemas will match the currently-checked-out source code's expectations. It is specifically very good at column renames; see the migration sketch after this list.
(3.) The simplest possible thing you could do here is a pair of DROP TABLE statements followed by re-running db.create_all(). The existing tables are preventing create_all from having any effect, but after the DROP it will do just what you want. Of course, if you care about the existing rows, you will want to tuck them away somewhere before you get too adventurous.
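For point (2.), a hypothetical alembic migration could look like the following; since the database here turns out to be SQLite, batch mode is used (alembic rebuilds the table behind the scenes):

from alembic import op

def upgrade():
    with op.batch_alter_table('appts_db') as batch_op:
        batch_op.alter_column('carrier', new_column_name='carrier_name')
    with op.batch_alter_table('carriers_db') as batch_op:
        batch_op.alter_column('carrier', new_column_name='carrier_name')

def downgrade():
    with op.batch_alter_table('appts_db') as batch_op:
        batch_op.alter_column('carrier_name', new_column_name='carrier')
    with op.batch_alter_table('carriers_db') as batch_op:
        batch_op.alter_column('carrier_name', new_column_name='carrier')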
I ended up using DB Browser for SQLite (I had downloaded it previously) and ran this code in the "Execute SQL" tab:
ALTER TABLE carriers_db
RENAME COLUMN carrier TO carrier_name;
I have data for a particular entity partitioned across multiple identical tables, often separated chronologically or by numeric range. For instance, I may have a table called mytable for current data, a mytable_2013 for last year's data, mytable_2012, and so on.
Only the current table is ever written to. The others are only consulted. With SQLAlchemy, is there any way I can specify the table to query from when using the declarative model?
Use a mixin and set the table name per class (a plain subclass of a mapped class would trigger joined-table inheritance, so the shared columns go on an unmapped mixin):
class NodeMixin(object):
    nid = Column(Integer, primary_key=True)
    uuid = Column(String(128))
    vid = Column(Integer)

class Node(NodeMixin, Base):
    __tablename__ = 'node'

class Node1(NodeMixin, Base):
    __tablename__ = 'node_1'

class Node2(NodeMixin, Base):
    __tablename__ = 'node_2'
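Picking which table to query is then just picking the class; a short sketch, assuming session is a configured SQLAlchemy Session:

current = session.query(Node).filter(Node.vid == 3).all()     # reads 'node'
year_one = session.query(Node1).filter(Node1.vid == 3).all()  # reads 'node_1'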
As requested, re-posting as answer:
Please take a look at this answer to Mapping lots of similar tables in SQLAlchemy, specifically the Concrete Table Inheritance section.
In your case you can query MyTable only when working with the current data, and do a polymorphic search on all tables when you need the whole history.
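A hedged sketch of what that could look like with SQLAlchemy's AbstractConcreteBase (class names are illustrative; columns mirror the Node example above):

from sqlalchemy import Column, Integer, String
from sqlalchemy.ext.declarative import AbstractConcreteBase

class AnyNode(AbstractConcreteBase, Base):
    pass  # abstract; querying it UNIONs all concrete subclass tables

class NodeCurrent(AnyNode):
    __tablename__ = 'mytable'
    nid = Column(Integer, primary_key=True)
    uuid = Column(String(128))
    __mapper_args__ = {'polymorphic_identity': 'mytable', 'concrete': True}

class Node2013(AnyNode):
    __tablename__ = 'mytable_2013'
    nid = Column(Integer, primary_key=True)
    uuid = Column(String(128))
    __mapper_args__ = {'polymorphic_identity': 'mytable_2013', 'concrete': True}

# session.query(NodeCurrent) touches only the current table;
# session.query(AnyNode) performs the polymorphic search across all of them.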
Please help me figure it out. I created this class for a table in a database.
class ProfileAdditionalField(peewee.Model):
    profile = peewee.ForeignKeyField(RpProfile, on_delete='cascade')
    item = peewee.ForeignKeyField(AdditionalField, on_delete='cascade')
    is_allowed = peewee.BooleanField(default=True)

    class Meta:
        database = db
        primary_key = peewee.CompositeKey('profile', 'item')
When I try to modify the RpProfile table, all the entries from the ProfileAdditionalField table get deleted. I think the problem is the on_delete='cascade' setting:
from playhouse.migrate import SqliteMigrator, migrate

migrator = SqliteMigrator(db)
migrate(
    migrator.add_column('RpProfile', 'show_link',
                        peewee.BooleanField(default=False)),
)
I use SQLite, and the migrator.alter_column_type command does not work with it. I can't even change the setting so that the data is no longer deleted automatically.
How to add a new field to the RpProfile table without deleting data from the ProfileAdditionalField table?
SQLite has limited support for altering columns. This is well documented:
https://sqlite.org/lang_altertable.html
You might try disabling foreign-key constraints in SQLite (PRAGMA foreign_keys = 0) before making any changes. To do anything beyond simply adding a new column, peewee has to create a temp table with the desired schema, migrate everything over, then delete the old table and rename the temp table.
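A hedged sketch of that sequence, reusing the db and model names from the question (the 'RpProfile' table name is assumed to match the actual table):

import peewee
from playhouse.migrate import SqliteMigrator, migrate

migrator = SqliteMigrator(db)  # 'db' is the SqliteDatabase from the question

db.execute_sql('PRAGMA foreign_keys = 0;')  # keep cascades from firing during the rebuild
with db.atomic():
    migrate(
        migrator.add_column('RpProfile', 'show_link',
                            peewee.BooleanField(default=False)),
    )
db.execute_sql('PRAGMA foreign_keys = 1;')  # re-enable enforcement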
I have the following model where TableA and TableB have 1 to 1 relationship:
class TableA(db.Model):
    id = Column(db.BigInteger, primary_key=True)
    title = Column(String(1024))
    table_b = relationship('TableB', uselist=False, back_populates="a")

class TableB(db.Model):
    id = Column(BigInteger, ForeignKey(TableA.id), primary_key=True)
    a = relationship('TableA', back_populates='table_b')
    name = Column(String(1024))
When I insert 1 record, everything goes fine:
rec_a = TableA(title='hello')
rec_b = TableB(a=rec_a, name='world')
db.session.add(rec_b)
db.session.commit()
But when I try to do this for a bulk of records:
bulk_ = []
for title, name in zip(titles, names):
    rec_a = TableA(title=title)
    bulk_.append(TableB(a=rec_a, name=name))
db.session.bulk_save_objects(bulk_)
db.session.commit()
I get the following exception:
sqlalchemy.exc.InternalError: (pymysql.err.InternalError) (1364, "Field 'id' doesn't have a default value")
Am I doing something wrong? Did I configure the model wrong?
Is there a way to bulk commit this type of data?
The error you see is thrown by MySQL: the INSERT into table_b supplies no value for the primary-key column id. bulk_save_objects does not cascade relationships, so the id that would normally be copied over from the related TableA row is never populated.
One technique could be to write all the titles in one bulk statement, then write all the names in a 2nd bulk statement. Also, I've never passed relationships successfully to bulk operations, so this method relies on inserting simple values.
bulk_titles = [TableA(title=title) for title in titles]
session.bulk_save_objects(bulk_titles, return_defaults=True)

bulk_names = [TableB(id=title.id, name=name) for title, name in zip(bulk_titles, names)]
session.bulk_save_objects(bulk_names)
return_defaults=True is needed above because we need title.id in the 2nd bulk operation. But this greatly reduces the performance gains of the bulk operation
To avoid the performance degradation due to return_defaults=True, you could generate the primary keys from the application rather than the database, e.g. using UUIDs, or by fetching the max id in each table and generating a range from that start value.
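For example, a hedged sketch of the max-id variant (only safe if nothing else writes to these tables concurrently):

from sqlalchemy import func

# Pre-assign ids so neither bulk call needs return_defaults.
start_id = db.session.query(func.coalesce(func.max(TableA.id), 0)).scalar() + 1

bulk_titles = [TableA(id=start_id + i, title=t) for i, t in enumerate(titles)]
bulk_names = [TableB(id=start_id + i, name=n) for i, n in enumerate(names)]

db.session.bulk_save_objects(bulk_titles)
db.session.bulk_save_objects(bulk_names)
db.session.commit()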
Another technique might be to write your bulk insert statement using SQLAlchemy Core or plain textual SQL.
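A sketch of the Core variant, assuming the pre-assigned ids from the previous snippet and that engine is the Engine behind db.session:

from sqlalchemy import insert

with engine.begin() as conn:
    conn.execute(
        insert(TableA.__table__),
        [{'id': start_id + i, 'title': t} for i, t in enumerate(titles)],
    )
    conn.execute(
        insert(TableB.__table__),
        [{'id': start_id + i, 'name': n} for i, n in enumerate(names)],
    )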