Is it possible to use raw SQL rather than the Table construct for creating tables in SQLAlchemy? I would still like to use the rest of SQLAlchemy, such as the object mapper and the session module; I'm just not fond of the SQLAlchemy syntax for creating tables (I've spent too long mired in SAS and SQL to learn another!).
Many thanks, Rich
Yes.
connection.execute("""
CREATE TABLE ...
""")
You can then reflect all the tables: MetaData(dsn, reflect=True) or metadata_instance.reflect().
You can use the autoload parameter of the Table constructor to have it load the table definition automatically; the SQLAlchemy reflection documentation has examples.
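Putting it together, a minimal sketch assuming SQLAlchemy 1.x; the DSN, table, and column names are hypothetical placeholders:
from sqlalchemy import create_engine, MetaData, Table
from sqlalchemy.orm import mapper, sessionmaker

engine = create_engine("sqlite:///example.db")  # hypothetical DSN

# Create the table with plain SQL instead of the Table construct
engine.execute("""
CREATE TABLE IF NOT EXISTS sensors (
    id INTEGER PRIMARY KEY,
    name VARCHAR(50)
)
""")

# Reflect the definition back, then map a class to it classically,
# so the rest of SQLAlchemy (mapper, Session) works as usual
meta = MetaData()
sensors = Table("sensors", meta, autoload=True, autoload_with=engine)

class Sensor(object):
    pass

mapper(Sensor, sensors)
Session = sessionmaker(bind=engine)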
I am working on a new project with Flask that will read heavily from a Redshift database, particularly from materialized views. I'm fairly new to Flask/SQLAlchemy/ORMs. I want to abstract the database layer with an ORM by using Flask-SQLAlchemy. While reading the documentation, I noticed that SQLAlchemy requires the underlying database source to have a primary key. However, I am worried that having a materialized view without any primary key will cause a problem.
I found out that there are workarounds to designate columns as a primary key even when they are not, but I'm not sure whether that will cause an issue when I perform a join on materialized views. I'm sure there is a workaround for that as well, but I wonder whether using an ORM with workarounds is actually a good idea when most of my operations will be heavy reads from materialized views. So I have two questions:
1) Is it possible to use SQLAlchemy with Redshift materialized views? (I wasn't able to find enough resources on this one.)
2) If it is possible, is it a good idea to use SQLAlchemy, or should I stick to raw SQL queries with my own PostgreSQL pooling logic?
Thank you.
P.S.: I have no primary keys in Redshift, but I do have dist/sort keys.
References/Links I used:
How to define a table without primary key with SQLAlchemy?
sqlalchemy materialized relationships
I noticed that SQLAlchemy requires the underlying database source to have a primary key.
This is not true: you can use synthetic primary keys, declared on the SQLAlchemy side rather than in the database. I am using them with TimescaleDB hypertables that do not have single-column primary keys.
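A minimal sketch of such a synthetic key in a declarative mapping; the view and column names are hypothetical, and the primary_key=True flags live only on the Python side (the column combination just has to identify rows uniquely):
from sqlalchemy import Column, Date, Integer
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class DailySales(Base):
    # Hypothetical materialized view with no primary key in the database;
    # the ORM only needs these flags to tell rows apart
    __tablename__ = 'daily_sales_mv'
    day = Column(Date, primary_key=True)
    product_id = Column(Integer, primary_key=True)
    total = Column(Integer)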
Is it possible to use SQLAlchemy with Redshift materialized views? (I wasn't able to find enough resources on this one.)
SQLAlchemy does not care what the underlying database object is, as long as the SQL wire protocol and dialect are compatible (PostgreSQL, MySQL, etc.); for Redshift there is a dedicated dialect package, sqlalchemy-redshift.
If it is possible, is it a good idea to use SQLAlchemy, or should I stick to raw SQL queries with my own PostgreSQL pooling logic?
Using SQLAlchemy improves the readability of your code, which reduces maintenance costs in the long term.
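For heavy read workloads you can also skip the ORM session entirely and query through Core against a reflected view. A sketch, assuming SQLAlchemy 1.x with the sqlalchemy-redshift dialect installed; the DSN and view name are hypothetical, and whether reflection picks up your particular view is worth verifying:
from sqlalchemy import MetaData, Table, create_engine, select

engine = create_engine('redshift+psycopg2://user:pass@cluster:5439/db')  # hypothetical DSN
meta = MetaData()
# Reflect the materialized view like an ordinary table (name hypothetical)
mv = Table('daily_sales_mv', meta, autoload=True, autoload_with=engine)

with engine.connect() as conn:
    rows = conn.execute(select([mv.c.day, mv.c.total])).fetchall()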
Hey, I am pretty new to PynamoDB.
I want to know how to use a pre-existing table in PynamoDB without rewriting the model definition.
from pynamodb.connection import Connection
# Get a connection
conn = Connection(host='http://localhost:8000')
# print(conn)
# List tables
print(conn.list_tables()['TableNames'])
So here I can see the table names in my database, but I can't use them. Please help me: how can I use the tables without defining models?
Thanks
Okay, so PynamoDB's only advantage over the native DynamoDB Python API is its models. If you don't need models, consider using the native API.
If the table has already been created in the database, then we have to use the boto3 library, as shown below.
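A minimal sketch with boto3 against an existing table, assuming DynamoDB Local on port 8000 as in the question; the table name and key schema are hypothetical:
import boto3

dynamodb = boto3.resource('dynamodb',
                          endpoint_url='http://localhost:8000',
                          region_name='us-east-1')  # a region is required even locally
table = dynamodb.Table('my-existing-table')  # hypothetical table name

# Read and write without defining any model class
table.put_item(Item={'id': '42', 'name': 'example'})
response = table.get_item(Key={'id': '42'})
print(response.get('Item'))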
thanks
Is it possible to create a new database from Pony ORM? We couldn't find it in the Pony ORM docs; it is always assumed that the database already exists or that an SQLite file is used.
We would like to create a testing database and drop it afterwards.
No. See "Supported databases" in the API reference:
https://docs.ponyorm.org/api_reference.html#sqlite
If you look at the .bind() API for the various databases, SQLite is the only one with create_db. This is because in SQLite creating a database just means creating a single file; the other engines have to go through their own tooling to initialize a database. You will need an independent script that creates the database before Pony binds to it.
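A sketch of such a script for PostgreSQL, assuming psycopg2 and hypothetical credentials; CREATE DATABASE cannot run inside a transaction, hence the autocommit:
import psycopg2
from pony.orm import Database

# Connect to the maintenance database first to create the test database
admin = psycopg2.connect(dbname='postgres', user='postgres',
                         password='secret', host='localhost')
admin.autocommit = True
with admin.cursor() as cur:
    cur.execute('CREATE DATABASE test_db')
admin.close()

# Pony can now bind to the freshly created database
db = Database()
db.bind(provider='postgres', user='postgres', password='secret',
        host='localhost', database='test_db')

For teardown, run DROP DATABASE test_db the same way once all connections are closed.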
If you already have an SQLite database file, you can try using pgloader to migrate it.
I'm developing an aiohttp server application, and I just saw that apparently it isn't able to use SQLAlchemy's ORM layer. So, I was wondering: if my application will only be able to use SQLAlchemy's core, is it still protected against SQL injection attacks?
My code is the following:
async def add_sensor(db_engine, name):
    async with db_engine.acquire() as connection:
        query = model.Sensor.__table__.insert().values(name=name)
        await connection.execute(query)
A comment on the accepted answer in this related question makes me doubt:
you can still use execute() or other literal data that will NOT be escaped by SQLAlchemy.
So, with the execute() used in my code, does the above quote mean that my code is unsafe? And in general: is protection against SQL injection only possible with the SQLAlchemy ORM layer, given that with the Core layer you'll end up calling execute()?
In your example the name value is supplied through values(), so SQLAlchemy passes it to the database as a bound parameter; it is never interpolated into the SQL string, and no SQL injection is possible there.
More generally, even with user-supplied values, as long as you don't hand-write SQL strings and instead use the SQLAlchemy expression constructs (model.Sensor.__table__.insert(), as in your example), you are secure against SQL injection, with Core just as much as with the ORM.
In the end it's all about telling SQLAlchemy explicitly which columns and tables to select from or insert into, and keeping that separate from the data being inserted or selected. Never combine the data string with the query string; always describe the query with SQLAlchemy constructs.
Bad way (SQL injectable):
Session.execute("select * from users where name = '%s'" % request.GET['name'])
Good way (not SQL injectable):
Session.execute(model.users.__table__.select().where(model.users.name == request.GET['name']))
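If you do need handwritten SQL, the safe middle ground is text() with bound parameters, which the driver escapes; a short sketch:
from sqlalchemy import text

# The :name placeholder is sent as a bound parameter, never concatenated
Session.execute(
    text('select * from users where name = :name'),
    {'name': request.GET['name']},
)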
I have a relatively complex MySQL database (60+ tables) that I need to populate regularly. There are a lot of foreign key constraints on most of the tables. I started writing my import engine using SQLAlchemy.
Do I need to reconstruct my entire database with SQLAlchemy classes in order to do this? Does anyone have any better suggestions? Only 8 of the tables actually accept new raw data; the rest are populated from those tables.
You can use SQLAlchemy reflection to create Table objects mapped to the MySQL table structure. See Reflecting Database Objects; there is a subsection there showing how to reflect all tables at once.
Code copied from the above link to reflect one table:
messages = Table('messages', meta, autoload=True, autoload_with=engine)
All tables:
meta = MetaData()
meta.reflect(bind=someengine)
users_table = meta.tables['users']
addresses_table = meta.tables['addresses']
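From there, a sketch of the import itself, assuming hypothetical DSN, table, and column names; inserting through the reflected Table objects lets MySQL enforce the foreign key constraints for you:
from sqlalchemy import create_engine, MetaData

engine = create_engine('mysql://user:pass@localhost/mydb')  # hypothetical DSN
meta = MetaData()
meta.reflect(bind=engine)

# Only the tables that accept raw data need to be written to directly
staging = meta.tables['staging_orders']  # hypothetical table name
with engine.begin() as conn:  # commits on success, rolls back on error
    conn.execute(staging.insert(), [
        {'order_id': 1, 'amount': 10},  # rows from your import source
    ])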