I have a FastAPI app using SQLAlchemy. The application connects to our legacy databases, and the architecture is a bit unusual.
For every client we have a separate database. Each database has a table called "prefill_data" (I didn't name these), which is basically a representation of that client's employee data, so the table's columns are completely different from client to client.
Our application receives the name of the database the client wants to connect to and builds the connection string from it.
The issue we are facing is actually querying data from that table, given that it's completely dynamic. We have a somewhat working setup using DeferredReflection. However, the problem we are seeing is this:
Customer A connects to their database and everything works fine.
Customer B then connects to their database and makes a request that selects from the "prefill_data" table, and the query fails with AttributeError: type object 'DynamicPrefillData' has no attribute 'zone'.
I can reproduce this locally by connecting to one database, then logging out and logging in as another user who connects to a different database. If I stop and restart the server each time, everything works as expected. So it seems to me that DeferredReflection caches the metadata and doesn't reflect the table again.
This is problematic for us. We need to reflect the table each time the database connection changes.
I'm a Ruby developer who got assigned to this project, so I have very minimal experience with SQLAlchemy. I'm praying someone can point me in the direction of a fix.
database_url = f"mysql+mysqlconnector://{user}:{password}#{host}:{port}/{database}?auth_plugin=mysql_native_password"
engine = create_engine(
database_url, isolation_level="READ UNCOMMITTED", pool_recycle=300
)
Reflected.prepare(engine)
return scoped_session(
sessionmaker(
autocommit=False, autoflush=False, bind=engine, expire_on_commit=False
)
)
class Reflected(DeferredReflection):
    __abstract__ = True


class DynamicPrefillData(Reflected, Base):
    __tablename__ = "prefill_data"
    __table_args__ = {"extend_existing": True}

    id = Column("sequence", Integer, primary_key=True)
It turned out the connection was closing, which caused the unexpected behavior.
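If anyone else hits this, the underlying need (re-reflecting the table whenever the target database changes) can also be handled at the Core level by building a fresh MetaData per connection instead of relying on DeferredReflection's cached one. A minimal sketch, assuming SQLAlchemy 1.4+; the helper name and the bare create_engine call are mine, not from the app above:

from sqlalchemy import MetaData, Table, create_engine, select

def load_prefill_rows(database_url):
    engine = create_engine(database_url)
    # A fresh MetaData means nothing reflected for another client's database is reused.
    metadata = MetaData()
    prefill = Table("prefill_data", metadata, autoload_with=engine)
    with engine.connect() as conn:
        return conn.execute(select(prefill)).fetchall()

Each call reflects the client's actual columns, so client-specific columns like 'zone' are never stale.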
I am in the process of developing a Python 3/tkinter application whose database features I want to base on a remote MySQL server. I have found SQLAlchemy and I am trying to understand what the best practices are in terms of remote access and user authentication. There will be a client application that asks for a username and password and then gets, inserts and updates data in the remote database.
Right now I am starting from a clean slate, following the steps in a tutorial:
from sqlalchemy import create_engine, ForeignKey
from sqlalchemy import Column, Date, Integer, String
from sqlalchemy.ext.declarative import declarative_base

engine = create_engine('mysql+pymysql://DB_USER:DB_PASSWORD@DATABASE_URL:PORT/DATABASENAME', pool_recycle=3600, echo=True)
Base = declarative_base()
connection = engine.connect()

class Process(Base):
    __tablename__ = "processes"

    id = Column(Integer, primary_key=True)
    name = Column(String(50))  # MySQL needs an explicit length for String/VARCHAR columns

Base.metadata.create_all(engine)
Assuming this is the right way to do it, my first question about this code is:
Isn't there a potential security problem in sending an unencrypted username and password over the Internet? Should some kind of measures be taken to prevent password theft? If so, what should I be doing instead?
The other question that I have right now is about users:
Should each application user correspond to a different MySQL database user, or is it more correct to have the database client use a single user and then add my client users to a table (user id, password, ...), defining and managing them in the application code?
Isn't there a potential security problem in sending an unencrypted username and password over the Internet? Should some kind of measures be taken to prevent password theft? If so, what should I be doing instead?
There is a fundamental issue with having a (MySQL) database exposed directly to the web. With MySQL you can require either SSH tunnels or SSL certificates, both of which prevent sending passwords in clear text. Generally, though, you'll have to write both your client software and a piece of server software that sits on a machine close to the database (and the protocol between client and server).
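As an illustration of the SSL route (not something from the original post), a connection that uses TLS with PyMySQL could look roughly like this; the CA path is a placeholder and depends on how the MySQL server was configured:

from sqlalchemy import create_engine

engine = create_engine(
    'mysql+pymysql://DB_USER:DB_PASSWORD@DATABASE_URL:PORT/DATABASENAME',
    # The 'ssl' dict is passed through to PyMySQL; point 'ca' at your server's CA bundle.
    connect_args={'ssl': {'ca': '/path/to/ca.pem'}},
    pool_recycle=3600,
)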
Should each application user correspond to a different MySQL database user, or is it more correct to have the database client use a single user and then add my client users to a table (user id, password, ...), defining and managing them in the application code?
Neither is more correct than the other, but depending on your database (and your client machines) it might influence licensing costs.
Normally your client would authenticate users against the server software you'll be writing, and that server software would then use a single database login to talk to the database.
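To illustrate the single-database-login approach, the application-side user table could look roughly like the sketch below, reusing the Base from the question's snippet; the table name, column sizes and hashing scheme are my own choices, not something prescribed here:

import hashlib
import os

from sqlalchemy import Column, Integer, LargeBinary, String

class AppUser(Base):
    __tablename__ = 'app_users'

    id = Column(Integer, primary_key=True)
    username = Column(String(50), unique=True, nullable=False)
    password_salt = Column(LargeBinary(16), nullable=False)
    password_hash = Column(LargeBinary(32), nullable=False)

def hash_password(password, salt=None):
    # PBKDF2 from the standard library; store the salt alongside the hash.
    salt = salt or os.urandom(16)
    return salt, hashlib.pbkdf2_hmac('sha256', password.encode(), salt, 100000)

The server software then checks a submitted password by recomputing the hash with the stored salt and comparing.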
I'm working on a Flask project and I am using Flask-SQLAlchemy.
I need to work with multiple already existing databases.
I created the "app" object and the SQLAlchemy one:
from flask import Flask
from flask_sqlalchemy import SQLAlchemy
app = Flask(__name__)
db = SQLAlchemy(app)
In the configuration I set the default connection and the additional binds:
SQLALCHEMY_DATABASE_URI = 'postgresql://pg_user:pg_pwd@pg_server/pg_db'
SQLALCHEMY_BINDS = {
    'oracle_bind': 'oracle://oracle_user:oracle_pwd@oracle_server/oracle_schema',
    'mssql_bind': 'mssql+pyodbc://mssql_user:mssql_pwd@mssql_server/mssql_schema?driver=FreeTDS'
}
Then I created the table models using the declarative system and, where needed, I set the
__bind_key__ parameter to indicate in which database the table is located.
For example:
class MyTable(db.Model):
    __bind_key__ = 'mssql_bind'
    __tablename__ = 'my_table'

    id = db.Column(db.Integer, nullable=False, primary_key=True)
    val = db.Column(db.String(50), nullable=False)
In this way everything works correctly: when I run a query, it goes to the right database.
Reading the SQLAlchemy documentation and the Flask-SQLAlchemy documentation, I understand the following (I'm writing it down to check that I understand correctly):
You can handle the transactions through the session.
In SQLAlchemy you can bind a session with a specific engine.
Flask-SQLAlchemy automatically creates the session (a scoped_session) at the start of the request and destroys it at the end of the request
so I can do:
record = MyTable(id=1, val='some text')
db.session.add(record)
db.session.commit()
What I can't understand is what happens with the session in Flask-SQLAlchemy when multiple databases are involved.
I verified that the system binds each table to the right database through the __bind_key__ parameter, so I can insert data into different databases through db.session and, at commit time, everything is saved.
I can't, however, tell whether Flask-SQLAlchemy creates multiple sessions (one per engine) or manages things differently.
In either case, how is it possible to refer to the session/transaction of a specific database?
If I use db.session.commit(), the system commits on all the databases involved, but how can I commit only on a single database?
I would do something like:
db.session('mssql_bind').commit()
but I cannot figure out how to do this.
I also saw a Flask-SQLAlchemy implementation which should ease the management of these situations:
Issue: https://github.com/mitsuhiko/flask-sqlalchemy/issues/107
Implementation: https://github.com/mitsuhiko/flask-sqlalchemy/pull/249
but I cannot figure out how to use it.
In Flask-SQLAlchemy how can I manage sessions specifically for each single engine?
Flask-SQLAlchemy uses a customized session that handles bind routing according to the __bind_key__ attribute given on the mapped class. Under the hood it actually stores that key in the info of the created table. In other words, Flask-SQLAlchemy does not create multiple sessions, one for each bind, but a single session that routes to the correct connectable (engine/connection) according to the bind key. Note that vanilla SQLAlchemy has similar functionality out of the box.
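For comparison, here is a minimal sketch of the vanilla SQLAlchemy equivalent using the Session's binds argument; the model and engine names are made up, and the URLs are just the ones from the question:

from sqlalchemy import Column, Integer, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

PgBase = declarative_base()
MssqlBase = declarative_base()

class PgThing(PgBase):
    __tablename__ = 'pg_thing'
    id = Column(Integer, primary_key=True)

class MssqlThing(MssqlBase):
    __tablename__ = 'mssql_thing'
    id = Column(Integer, primary_key=True)

pg_engine = create_engine('postgresql://pg_user:pg_pwd@pg_server/pg_db')
mssql_engine = create_engine('mssql+pyodbc://mssql_user:mssql_pwd@mssql_server/mssql_schema?driver=FreeTDS')

# One session; each declarative base (and all classes mapped on it) is routed to its own engine.
Session = sessionmaker(binds={PgBase: pg_engine, MssqlBase: mssql_engine})
session = Session()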
In either case, how is it possible to refer to the session/transaction of a specific database?
If I use db.session.commit(), the system commits on all the databases involved, but how can I commit only on a single database?
It might not be a good idea to subvert the session and issue commits to specific databases mid-session using the connections it owns. The session is a whole and keeps track of state for object instances, flushing changes to the databases when needed, and so on. That means the transaction handled by the session is not just the database transactions, but the session's own transaction as well. All of that should commit and roll back as one.
You could on the other hand create new SQLAlchemy (or Flask-SQLAlchemy) sessions that possibly join the ongoing transaction in one of the binds:
session = db.create_scoped_session(
    options=dict(bind=db.get_engine(app, 'oracle_bind'), binds={})
)
This is what the pull request is about. It allows using an existing transactional connection as the bind for a new Flask-SQLAlchemy session. That is very useful, for example, in testing, as can be seen in the rationale for that pull request: you can have a "master" transaction that rolls back everything done during the tests.
Note that the SignallingSession always consults db.get_engine() if a bind_key is present. This means that the example session cannot query tables that have no bind key and don't exist in your Oracle DB, but it will still work for tables carrying your mssql_bind key.
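As a rough sketch of that testing pattern (adapting the usual "join an external transaction" idea to the binds above; this is not code taken from the pull request itself):

# One "master" transaction on the oracle bind.
connection = db.get_engine(app, 'oracle_bind').connect()
trans = connection.begin()

# A new session bound to that connection; everything it does stays inside `trans`.
session = db.create_scoped_session(options=dict(bind=connection, binds={}))

# ... exercise the code under test using `session` ...

session.remove()
trans.rollback()   # undo everything the test did
connection.close()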
The issue you linked to on the other hand does list ways to issue SQL to specific binds:
rows = db.session.execute(
    query, params, bind=db.get_engine(app, 'oracle_bind')
)
There were other less explicit methods listed as well, but explicit is better than implicit.
I'm struggling with SQLAlchemy and py.test.
In my __init__.py I create an engine and a session using:
engine = create_engine('sqlite://')
Session = sessionmaker(bind=engine)
session = Session()
I also have an entity.py and a test_entity.py. In both files I import the session:
from __init__ import session
In conftest.py I defined a function which sets up the database and creates the schema from Base.metadata.
The point is that all transactions inside my test module pass, but all transactions in the class being tested fail with errors like Object already bound to session (when adding and committing an object) or OperationalError: no such table (when fetching an object).
How do I fix it?
After some trial and error I found out that everything works fine when I use a database on disk:
engine = create_engine('sqlite:////path/to/db')
It is documented:
Pysqlite’s default behavior is to prohibit the usage of a single connection in more than one thread. [...] Pysqlite does include a now-undocumented flag known as check_same_thread which will disable this check, however note that pysqlite connections are still not safe to use concurrently in multiple threads.
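If an in-memory database is still wanted, keep in mind that each new pysqlite connection otherwise gets its own empty :memory: database, which is another way to end up with "no such table". A commonly documented workaround is to pin everything to a single shared connection; a sketch:

from sqlalchemy import create_engine
from sqlalchemy.pool import StaticPool

engine = create_engine(
    'sqlite://',
    connect_args={'check_same_thread': False},  # relax pysqlite's thread check
    poolclass=StaticPool,                       # every checkout reuses the same connection
)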
I'm using SQLAlchemy in a PyQt desktop application. When I execute an update to the database, the changes are not reflected in the database. If I inspect the session object ('sqlalchemy.orm.scoping.ScopedSession'), it tells me that my object is not present in the session, but if I try to add it, it tells me it is already present.
The way I'm managing connections is the following: when the application starts I open a connection and keep it open for the whole user session; when the application is closed I close the connection. Thus all queries are performed over a single open connection.
selected_mine = Mine.query.filter_by(mine_name="some_name").first()
# updating the object attributes
selected_mine.region = self.ui.region.text()
self.sqlite_model.conn.commit()
I've inspected the sessions, and there are two different objects (I don't know why).
>>> s = self.sqlite_model.conn()
>>> s
<sqlalchemy.orm.session.Session object at 0x30fb210>
>>> s.object_session(selected_mine)
<sqlalchemy.orm.session.Session object at 0x30547d0>
How do I solve this? Why is the commit not working?
I'm creating the session in the SqliteModel class (the class of the self.sqlite_model object):
Session = scoped_session(sessionmaker(autocommit=False, bind=self.engine))
self.conn = Session()
Base.query = Session.query_property()

# connect to database
Base.metadata.create_all(bind=self.engine)
You don't show what the Mine class is, but I believe your issue lies with this line:
selected_mine = Mine.query.filter_by(mine_name="some_name").first()
You are not actually using the session that you originally set up to query the database, so a new session is being created for you. I believe your code should look like this instead:
selected_mine = self.conn.query(Mine).filter_by(mine_name="some_name").first()
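Putting it together with the update from the question, the query, the change and the commit would all go through the same session object (a sketch reusing the attribute names from the question):

# Query, modify and commit through the same Session instance.
selected_mine = self.conn.query(Mine).filter_by(mine_name="some_name").first()
selected_mine.region = self.ui.region.text()
self.conn.commit()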
If you haven't done so yet, you should definitely glance through the excellent documentation that SQLAlchemy has made available explaining what a Session is and how to use it.
I have an SQLAlchemy application that currently uses a local database. The code for the application is given below.
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

log = core.getLogger()  # `core` comes from the surrounding application

engine = create_engine('sqlite:///nwtopology.db', echo=False)
Base = declarative_base()
Session = sessionmaker(bind=engine)
session = Session()

class SourcetoPort(Base):
    """"""
    __tablename__ = 'source_to_port'

    id = Column(Integer, primary_key=True)
    port_no = Column(Integer)
    src_address = Column(String, index=True)

    #-----------------------------------------
    def __init__(self, src_address, port_no):
        """"""
        self.src_address = src_address
        self.port_no = port_no
I want to create the database itself on a remote machine. I came across this document:
http://www.sqlalchemy.org/doc_pdfs/sqlalchemy_0_6_3.pdf
In the explanation they mention the line given below.
engine = create_engine('postgresql://scott:tiger@localhost:5432/mydatabase')
My questions are:
1) Does SQLite support remote database creation?
2) How do I keep the connection to the remote machine open at all times? I don't want to initiate an SSH connection every time I have to insert an entry or run a query.
These questions may sound stupid, but I am very new to Python and SQLAlchemy. Any help is appreciated.
Answering your questions:
SQLite doesn't support remote database connections. You would have to work around this yourself, for example by putting the SQLite database file on a network-shared filesystem, but that would make your solution less reliable.
My suggestion: do not try to use a remote SQLite database, but switch to a traditional RDBMS. Please see below for more details.
It sounds like your application has outgrown SQLite, and it is a good time to switch to a traditional RDBMS like MySQL or PostgreSQL, where network connections are supported out of the box.
SQLite is a local database. SQLite has a page explaining when to use it. It says:
If you have many client programs accessing a common database over a
network, you should consider using a client/server database engine
instead of SQLite.
The good thing is that your application may already be database agnostic, since you are using SQLAlchemy to generate the queries.
So I would do the following:
install database system to machine (it doesn't matter - local or
remote, you can always repeat move your database to remote machine) and configure permissions for your user (create database, alter, select, update and insert)
create database schema and populate data - to clone your existing. There are some tools available for doing so - i.e. Copying Databases across Platforms with SQLAlchemy
sqlite database.
update db connection string in your application from using sqlite to use remote database of your choice
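Because SQLAlchemy hides the dialect behind the engine, step 3) usually comes down to changing only the URL passed to create_engine. A sketch, with a placeholder host and placeholder credentials:

from sqlalchemy import create_engine

# Before: local SQLite file.
engine = create_engine('sqlite:///nwtopology.db', echo=False)

# After: a remote server (PostgreSQL here); the rest of the application stays the same.
engine = create_engine('postgresql://scott:tiger@dbserver.example.com:5432/nwtopology', echo=False)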
Times have changed.
If one wishes to make a SQLite database available over the web, one option would be to use CubeSQL as a server, and SQLiteManager for SQLite as its client. For details, see e.g. https://www.sqlabs.com/
Another option might be to use Valentina Server similarly: see https://www.valentina-db.com/en/valentina-server-overview
(These options will probably only be suitable if there is at most one client with write-access at a time.)
Are there any others?