Our current project relies heavily on SQLAlchemy for table creation and data insertion. We would like to switch to TimescaleDB's hypertables, but it seems the recommended way to create hypertables is by executing a create_hypertable command. I need to be able to dynamically create tables, so manually doing this for every table created is not really an option. One way of handling the conversion is to run a Python script that sends psycopg2 commands to convert all newly created tables into hypertables, but this seems a little clumsy. Does TimescaleDB offer any integration with SQLAlchemy with regard to creating hypertables?
We currently do not offer any specific integrations with SQLAlchemy (broadly, or specifically for creating hypertables). We are always interested in hearing new feature requests, so if you post your issue/use case on our GitHub, it will help us keep better track of it for future work.
One thing that might work for your use case is to create an event trigger that executes on table creation. You'd have to check that it's in the correct schema since TimescaleDB creates its own chunk tables dynamically and you don't want to have them converted to hypertables.
See this answer for more info on event triggers:
execute a trigger when I create a table
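For concreteness, here is a hedged sketch of what such an event trigger could look like, installed from Python. All names are illustrative, it assumes every new table in the public schema has a time column, and creating event triggers requires superuser privileges:

from sqlalchemy import create_engine, text

engine = create_engine("postgresql://user:password@host:5432/dbname")  # placeholder URL

# PL/pgSQL event trigger: convert each table created in 'public' into a
# hypertable. Filtering on schema_name skips TimescaleDB's internal chunk tables.
ddl = """
CREATE OR REPLACE FUNCTION make_hypertable() RETURNS event_trigger
LANGUAGE plpgsql AS $$
DECLARE
    obj record;
BEGIN
    FOR obj IN SELECT * FROM pg_event_trigger_ddl_commands()
               WHERE command_tag = 'CREATE TABLE' AND schema_name = 'public'
    LOOP
        PERFORM create_hypertable(obj.object_identity::regclass, 'time');  -- assumes a 'time' column
    END LOOP;
END;
$$;

CREATE EVENT TRIGGER make_hypertable_on_create
    ON ddl_command_end
    WHEN TAG IN ('CREATE TABLE')
    EXECUTE FUNCTION make_hypertable();  -- EXECUTE PROCEDURE on PostgreSQL < 11
"""

with engine.begin() as conn:
    conn.execute(text(ddl))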
Here is a practical example that uses SQLAlchemy's event hooks to create a hypertable right after table creation:
from sqlalchemy import Column, DateTime, Integer, DDL, event, orm

Base = orm.declarative_base()

class ExampleModel(Base):
    __tablename__ = 'example_model'

    id = Column(Integer, primary_key=True)
    time = Column(DateTime)

# Convert the table into a hypertable as soon as SQLAlchemy creates it
event.listen(
    ExampleModel.__table__,
    'after_create',
    DDL(f"SELECT create_hypertable('{ExampleModel.__tablename__}', 'time');")
)
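With the listener registered, the conversion happens automatically whenever SQLAlchemy creates the table. A minimal usage sketch, reusing the definitions above (the connection URL is a placeholder):

from sqlalchemy import create_engine

engine = create_engine("postgresql://user:password@host:5432/dbname")  # placeholder URL
Base.metadata.create_all(engine)  # emits CREATE TABLE, then the create_hypertable DDL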
Related
I was using the SQLAlchemy ORM to connect to an in-memory database when I decided to implement version tracking for the DB schema. To do this I've been following the tutorial on how to set up versioning using SQLAlchemy, but now I'm wondering: is there a way to get my upgrade and downgrade scripts to also update/create my SQLAlchemy ORM classes?
I ask this because I now don't know how to write code using only SQLAlchemy Migrate, since a developer might not know of the most recent change made to the database. Currently the developer just has to look at the file containing the class that maps to a table in the DB to know what is available, but from my understanding, Migrate would not synchronize these classes with the changes applied in an upgrade/downgrade script. This synchronization would need to be done manually. I looked at reflect, but it doesn't seem to require prior knowledge of the table's structure.
I know I must be missing something. I could keep my DB open in HeidiSQL and [ALT + TAB] every time my memory needs to confirm something in the DB, but this slows me down a lot compared to just using auto-complete on classes as I type. (Note: I'm heavily dyslexic and prone to many spelling mistakes, which is why auto-complete drastically improves my productivity.) Is there a way for the upgrade scripts to create/update/delete the files containing the ORM classes?
e.g.
class ExtractionEvent(Base):
    __tablename__ = 'ExtractionEvents'
    Id = Column(Integer, primary_key=True, autoincrement=True)
    ...
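For reference, here is a minimal sketch of what runtime reflection looks like with SQLAlchemy's automap (the database URL is a placeholder, and the ExtractionEvents table name is taken from the snippet above). Note that classes built this way exist only at runtime, so editor auto-complete on them is limited:

from sqlalchemy import create_engine
from sqlalchemy.ext.automap import automap_base

engine = create_engine("sqlite:///example.db")  # placeholder database URL

Base = automap_base()
Base.prepare(autoload_with=engine)  # reflect tables and build mapped classes

# One mapped class per reflected table, keyed by table name
ExtractionEvent = Base.classes.ExtractionEvents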
Is there a way to delete or modify a table using flask-sqlalchemy?
I am working on a Flask-based web app. I switched to flask-sqlalchemy because my project is on Heroku and I had to connect my tables to Heroku PostgreSQL. I made a flask-sqlalchemy table and created it using the db.create_all() command.
Now, for my app to fulfill its purpose, it is of utmost importance to save images, and the best way I found to do that is to add them to the database.
Now I want to change the table class to store a column called image, as image = db.Column(db.Text, nullable=False), but I cannot. The former schema is unchanged, and I get an error saying that the column image does not exist every time I try to access the table or add something to it.
How can I do this?
You can use drop():
from sqlalchemy import create_engine

engine = create_engine("...")  # your database URL
my_table.__table__.drop(engine)  # drops the table and all of its data
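In flask-sqlalchemy terms, as used in the question, a hedged sketch might look like this (the Picture model name is hypothetical, and dropping the table deletes its data, so back up anything you need first):

from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = '...'  # your Heroku PostgreSQL URL
db = SQLAlchemy(app)

class Picture(db.Model):  # hypothetical model name
    id = db.Column(db.Integer, primary_key=True)
    image = db.Column(db.Text, nullable=False)  # the newly added column

with app.app_context():
    Picture.__table__.drop(db.engine)  # drop the old version of the table
    db.create_all()                    # recreate it with the new column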
I have a table with 30k clients, with ClientID as the primary key.
I'm getting data from API calls and inserting it into the table using Python.
I'd like to find a way to insert rows for new clients and, if the ClientID that comes with the API call already exists in the table, update the existing record with the client's updated information.
Thanks!!
A snippet of code would be nice to show us what exactly you are doing right now. I presume you are using an ORM like SQLAlchemy? If so, then you are looking at doing an UPSERT type of operation.
That is already answered HERE.
Alternatively, if you are executing raw queries without an ORM, you could write a custom procedure and pass the required parameters. HERE is a good write-up on how that is done in MSSQL under high concurrency. You could use it as a starting point for understanding, and then rewrite it for PostgreSQL.
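For PostgreSQL specifically, here is a minimal sketch of an UPSERT with SQLAlchemy Core (the clients table and its columns are illustrative, not taken from the question):

from sqlalchemy import Column, Integer, MetaData, String, Table, create_engine
from sqlalchemy.dialects.postgresql import insert

engine = create_engine("postgresql://user:password@host:5432/dbname")  # placeholder URL

metadata = MetaData()
clients = Table(  # illustrative table, keyed on the client ID
    'clients', metadata,
    Column('client_id', Integer, primary_key=True),
    Column('name', String),
)

def upsert_client(conn, row):
    stmt = insert(clients).values(**row)
    # On primary-key conflict, update the remaining columns instead of failing
    stmt = stmt.on_conflict_do_update(
        index_elements=[clients.c.client_id],
        set_={'name': stmt.excluded.name},
    )
    conn.execute(stmt)

with engine.begin() as conn:
    upsert_client(conn, {'client_id': 123, 'name': 'Acme'})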
I need to write a complex query that retrieves a lot of data from a bunch of tables. Basically, I need to find all instances of the models
Customer
Payment
Invoice
where the relationships intersect in a specific way. In SQLAlchemy, I would be able to do something like
for c, p, i in session.query(Customer, Payment, Invoice).\
        filter(Customer.id == Payment.customer_id).\
        filter(Invoice.id == Payment.invoice_id).\
        filter(Payment.date == ...).\
        filter(Customer.some_property == ...).\
        all():
    # Do stuff ...
This would allow me to set several constraints and retrieve it all at once. In Django, I currently do something stupid like
customers = Customer.objects.filter(...)
payments = Payment.objects.filter(customer__in=customers)
invoices = Invoice.objects.filter(customer__in=customers, payment__in=payments)
Now, we already have three different queries (some details are left out to keep it simple). Could I reduce it to one? Well, I could have done something like
customers = Customer.objects.filter(...).prefetch_related(
'payments', 'payments__invoices'
)
but now I have to traverse a crazy tree of data instead of having it all laid out neatly in rows, like with SQLAlchemy. Is there any way Django can do something like that? Or would I have to drop down to custom SQL directly?
After reading up on different solutions, I decided to use SQLAlchemy on top of my Django models. Some people try to completely replace the Django ORM with SQLAlchemy, but this almost completely defeats the purpose of using Django, since most of the framework relies on the ORM.
Instead, I use SQLAlchemy simply for querying the tables defined by the Django ORM. I follow a recipe similar to this:
# Setup sqlalchemy bindings
import sqlalchemy as s
from sqlalchemy.orm import sessionmaker
engine = s.create_engine('postgresql://<user>:<password>@<host>:<port>/<db_name>')
# Automatically read the database tables and create metadata
meta = s.MetaData()
meta.reflect(bind=engine)
Session = sessionmaker(bind=engine)
# Create a session, which can query the tables
session = Session()
# Build table instances without hardcoding tablenames
s_payment = meta.tables[models.Payment()._meta.db_table]
s_allocation = meta.tables[models.Allocation()._meta.db_table]
s_customer = meta.tables[models.Customer()._meta.db_table]
s_invoice = meta.tables[models.Invoice()._meta.db_table]
report = session.query(s_payment.c.amount, ...).all()
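To make that last line more concrete, here is a hedged elaboration that joins two of the reflected tables (the amount, customer_id, id, and name columns are illustrative):

report = (
    session.query(s_payment.c.amount, s_customer.c.name)
    .filter(s_payment.c.customer_id == s_customer.c.id)  # join payments to customers
    .all()
)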
There is room for a few improvements in this recipe, e.g. it is not very elegant to create an empty instance of each Django model just to find its table name. However, with a few lines of code, I get the full flexibility of SQLAlchemy without compromising the Django ORM layer. This means both can live happily alongside each other.
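Incidentally, that particular improvement is easy: Django exposes _meta on the model class itself, so the throwaway instances can be avoided:

# _meta is available on the class, so no instance is needed
s_payment = meta.tables[models.Payment._meta.db_table]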
One caveat is that SQLAlchemy will not use the same connection as the Django ORM, which means that the view of things may not appear consistent if I use both approaches in the same context. This won't be a problem for me, though, since I just want to read a bunch of data from the database.
I'm working with SQLAlchemy to run SQL queries against an Oracle database. I have read access to the database, but the user I have does not own any of the tables I'm working with.
The database updates on a regular basis, so rather than explicitly listing the MetaData, I was hoping to use reflection. I found this question, which describes an issue similar to mine. However, I don't have a way to change ownership of the tables, nor to modify the database in any way; I just have read access.
Is there a way to reflect Oracle tables in SQLAlchemy if I don't have ownership of those tables?
(Edit)
Example Code:
from sqlalchemy import create_engine, MetaData, Table

engine = create_engine('ORACLE CONNECTION STRING')  # placeholder connection string
metadata = MetaData()
students = Table('students', metadata, autoload=True, autoload_with=engine)
I receive an exception of sqlalchemy.exc.NoSuchTableError: students
However, when I run the following:
results = engine.execute('SELECT * FROM students')
for r in results:
    print(r)
I receive the output that I expected from the table, which is a tuple of all the fields for each row.
So instead of trying to reflect a single table, I try to reflect all of them:
metadata.reflect(bind=engine)
print(metadata.tables)
The output is immutabledict({}).
So essentially it's empty. All of these tables are owned by user A, whereas I'm logging in with a read-only user, B.
You might have better luck reflecting someone else's tables if you specify the schema (account) you're targeting:
metadata.reflect(bind=engine, schema='userA')
This way, you'll reflect all readable tables belonging to 'userA'. As for why engine.execute('SELECT * FROM students') works anyway, one likely explanation is that a synonym (or your session's current schema) resolves the unqualified name at query time, while reflection only looks in the connecting user's own schema by default.
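If the schema-qualified reflection works, the tables end up keyed by 'schema.table' in metadata.tables, and a single table can be loaded the same way. A sketch, reusing the question's engine and metadata:

# Tables reflected with an explicit schema are keyed 'schema.table'
students = metadata.tables['userA.students']

# Or reflect just the one table, naming the owner's schema
students = Table('students', metadata, autoload=True, autoload_with=engine, schema='userA')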