Remote database creation in SQLAlchemy via SQLite - Python

I have an SQLAlchemy application that currently uses a local database. The code for the application is given below.
from pox.core import core  # POX controller import for the logger below
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

log = core.getLogger()
engine = create_engine('sqlite:///nwtopology.db', echo=False)
Base = declarative_base()
Session = sessionmaker(bind=engine)
session = Session()

class SourcetoPort(Base):
    """Maps a source address to a switch port."""
    __tablename__ = 'source_to_port'
    id = Column(Integer, primary_key=True)
    port_no = Column(Integer)
    src_address = Column(String, index=True)
    #-----------------------------------------
    def __init__(self, src_address, port_no):
        self.src_address = src_address
        self.port_no = port_no
I want to create the database itself on a remote machine. I came across this document.
http://www.sqlalchemy.org/doc_pdfs/sqlalchemy_0_6_3.pdf
In the explanation they mention the line given below.
engine = create_engine('postgresql://scott:tiger@localhost:5432/mydatabase')
My questions are:
1) Does SQLite support remote database creation?
2) How do I keep the connection to the remote machine open at all times? I don't want to initiate an SSH connection every time I have to insert an entry or make a query.
These questions may sound stupid, but I am very new to Python and SQLAlchemy. Any help is appreciated.

Answering your questions:
SQLite doesn't support remote database connections. You would have to implement this yourself, e.g. by putting the SQLite database file on a network-shared filesystem, but that would make your solution less reliable.
My suggestion: do not try to use a remote SQLite database, but switch to a traditional RDBMS instead. Please see below for more details.
It sounds like your application has outgrown SQLite, and it is a good time to switch to a traditional RDBMS like MySQL or PostgreSQL, where network connections are supported out of the box.
SQLite is a local database. SQLite has a page explaining when to use it. It says:
If you have many client programs accessing a common database over a
network, you should consider using a client/server database engine
instead of SQLite.
The good thing is that your application might already be database-agnostic, as you are using SQLAlchemy to generate queries.
So I would do the following:
1) Install a database system on some machine (local or remote, it doesn't matter; you can always move your database to a remote machine later) and configure permissions for your user (create database, alter, select, update and insert).
2) Create the database schema and populate the data, cloning your existing SQLite database. There are some tools available for doing so, i.e. Copying Databases across Platforms with SQLAlchemy.
3) Update the database connection string in your application from SQLite to the remote database of your choice, as sketched below.
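For that last step, a minimal sketch of the change (the PostgreSQL credentials, host and database name below are placeholders):

# Before: local SQLite file
# engine = create_engine('sqlite:///nwtopology.db', echo=False)

# After: remote PostgreSQL server (placeholder URL)
engine = create_engine(
    'postgresql://myuser:mypassword@db.example.com:5432/nwtopology',
    echo=False,
)

Everything else (models, session, queries) stays the same; that is the benefit of going through SQLAlchemy.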

Times have changed.
If one wishes to make a SQLite database available over the web, one option would be to use CubeSQL as a server, and SQLiteManager for SQLite as its client. For details, see e.g. https://www.sqlabs.com/
Another option might be to use Valentina Server similarly: see https://www.valentina-db.com/en/valentina-server-overview
(These options will probably only be suitable if there is at most one client with write-access at a time.)
Are there any others?

Related

I want to send the outputs of python to the database

I am trying to send the outputs I get in Python to the database. I am reading values from my PLC via an OPC UA server in Python, and I can read my variables every second. Now I want to send those values to my PostgreSQL database. Any help would be appreciated. Thank you very much again.
I created a connection between Python and PostgreSQL by using the psycopg module, and I am able to create the tables using my Python script.
Rather than scripting directly with psycopg, I usually recommend using SQLAlchemy, which is a nice wrapper around psycopg that makes editing data in the database much easier.
E.g., using the same psycopg engine you have already set up, you can do:
# Assumes the User and Address models from the SQLAlchemy ORM tutorial
# are already defined and mapped.
from sqlalchemy.orm import Session

with Session(engine) as session:
    spongebob = User(
        name="spongebob",
        fullname="Spongebob Squarepants",
        addresses=[Address(email_address="spongebob@sqlalchemy.org")],
    )
    sandy = User(
        name="sandy",
        fullname="Sandy Cheeks",
        addresses=[
            Address(email_address="sandy@sqlalchemy.org"),
            Address(email_address="sandy@squirrelpower.org"),
        ],
    )
    patrick = User(name="patrick", fullname="Patrick Star")
    session.add_all([spongebob, sandy, patrick])
    session.commit()
I created a connection between Python and the PostgreSQL database by using:
psycopg2.connect(dbname="dbname", port=6543, user="postgres", password="password", host="host.docker.internal")
I give the host details for creating the Docker image and use it.
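For what it's worth, the same connection parameters can be expressed as a SQLAlchemy URL, so the ORM examples above work against this database too (the values below are the placeholders from the snippet above):

from sqlalchemy import create_engine

# Same parameters as the psycopg2.connect() call, as a SQLAlchemy URL.
engine = create_engine(
    'postgresql+psycopg2://postgres:password@host.docker.internal:6543/dbname'
)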

Connect to sqlite3.Connection using sqlalchemy

I am using a library that creates an SQLite database in-memory by calling sqlite3.connect(':memory:'). I would like to connect to this database using SQLAlchemy to use its ORM and other nice bells and whistles. Is there, in the depths of SQLAlchemy's API, a way to pass the resulting sqlite3.Connection object through, so that I can re-use it?
I cannot just re-connect with connection = sqlalchemy.create_engine('sqlite:///:memory:').connect() – as the SQLite documentation states: “The database ceases to exist as soon as the database connection is closed. Every :memory: database is distinct from every other. So, opening two database connections each with the filename ":memory:" will create two independent in-memory databases.” (Which makes sense. I also tried it, and the behaviour is as expected.)
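To see that concretely, a small demo of the quoted behaviour:

import sqlite3

a = sqlite3.connect(':memory:')
b = sqlite3.connect(':memory:')
a.execute('CREATE TABLE t (x INTEGER)')
b.execute('SELECT * FROM t')  # sqlite3.OperationalError: no such table: t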
I have tried to follow SQLAlchemy's source code to find the low-level location where the database connection is established and SQLite is actually called, but so far I have found nothing. It looks like SQLAlchemy uses far too much obscure alchemy for me to understand when and where that happens.
Here's a way to do that:
import sqlite3
from sqlalchemy import create_engine

# some connection is created - by you or someone else
conn = sqlite3.connect(':memory:')
...

def get_connection():
    # just a debug print to verify that it's indeed getting called:
    print("returning the connection")
    return conn

# create a SQLAlchemy engine that reuses the same in-memory sqlite connection
engine = create_engine('sqlite://', creator=get_connection)
From this point on, just use the engine as you wish.
Here's a link to the documentation of this feature.
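A quick sanity check (a sketch; the table here is hypothetical) that the engine really sees the same in-memory database:

from sqlalchemy import text

conn.execute('CREATE TABLE t (x INTEGER)')  # via the raw sqlite3 connection
conn.execute('INSERT INTO t VALUES (42)')
conn.commit()

with engine.connect() as sa_conn:
    print(sa_conn.execute(text('SELECT x FROM t')).fetchall())  # [(42,)]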

How to connect securely to a MySQL database using SQLalchemy?

I am in the process of developing a Python3/tkinter application that I want to have its database features based on a remote MySQL server. I have found SQLalchemy and I am trying to understand what the best practices are in terms of remote access and user authentication. There will be a client application that will ask for a username and password and then it will get, insert and update data in the remote database.
Right now, I am starting with a clean slate, following the steps in some tutorial:
from sqlalchemy import create_engine, ForeignKey
from sqlalchemy import Column, Date, Integer, String
from sqlalchemy.ext.declarative import declarative_base
engine = create_engine('mysql+pymysql://DB_USER:DB_PASSWORD@DATABASE_URL:PORT/DATABASENAME', pool_recycle=3600, echo=True)
Base = declarative_base()
connection = engine.connect()
class Process(Base):
    __tablename__ = "processes"
    id = Column(Integer, primary_key=True)
    name = Column(String)
Base.metadata.create_all(engine)
Assuming this is the right way to do it, my first question about this code is:
Isn't there a potential security problem in sending an unencrypted username and password over the Internet? Should some kind of measure be taken to prevent password theft? If so, what should I be doing instead?
The other question that I have right now is about users:
Should each application user correspond to a different MySQL database user, or is it more correct to have the database client use a single user and then add my client users to a table (user id, password, ...), defining and managing them in the application code?
Isn't there a potential security problem in sending an unencrypted
username and password over the Internet? Should some kind of measure
be taken to prevent password theft? If so, what should I be doing
instead?
There is a fundamental issue with having a (MySQL) database directly available to the web. With MySQL you can configure it to require SSH tunnels or SSL certificates, both of which prevent sending passwords in clear text. Generally you'll have to write both your client software and a piece of server software that sits on a server close to the database (plus the protocol between client and server).
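For the SSL route, a minimal sketch with PyMySQL (the CA file path is a placeholder, and the server must be configured for SSL):

from sqlalchemy import create_engine

# Driver-level SSL options go through connect_args; PyMySQL accepts an
# "ssl" dict with, e.g., the CA certificate used to verify the server.
engine = create_engine(
    'mysql+pymysql://DB_USER:DB_PASSWORD@DATABASE_URL:PORT/DATABASENAME',
    connect_args={'ssl': {'ca': '/path/to/server-ca.pem'}},
    pool_recycle=3600,
)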
Should each application user correspond to a different MySQL database
user, or is it more correct to have the database client use a single
user and then add my client users to a table (user id, password, ...),
defining and managing them in the application code?
Neither is more correct than the other, but depending on your database (and your client machines) it might influence licensing costs.
Normally your client would authenticate users against the server software you'll be writing, and the server software would then use a single database login to contact the database.

Flask-SQLAlchemy - how do sessions work with multiple databases?

I'm working on a Flask project and I am using Flask-SQLAlchemy.
I need to work with multiple already existing databases.
I created the "app" object and the SQLAlchemy one:
from flask import Flask
from flask_sqlalchemy import SQLAlchemy
app = Flask(__name__)
db = SQLAlchemy(app)
In the configuration I set the default connection and the additional binds:
SQLALCHEMY_DATABASE_URI = 'postgresql://pg_user:pg_pwd@pg_server/pg_db'
SQLALCHEMY_BINDS = {
    'oracle_bind': 'oracle://oracle_user:oracle_pwd@oracle_server/oracle_schema',
    'mssql_bind': 'mssql+pyodbc://mssql_user:mssql_pwd@mssql_server/mssql_schema?driver=FreeTDS'
}
Then I created the table models using the declarative system and, where needed, I set the
__bind_key__ parameter to indicate in which database the table is located.
For example:
class MyTable(db.Model):
    __bind_key__ = 'mssql_bind'
    __tablename__ = 'my_table'
    id = db.Column(db.Integer, nullable=False, primary_key=True)
    val = db.Column(db.String(50), nullable=False)
In this way everything works correctly: when I run a query, it is made against the right database.
Reading the SQLAlchemy documentation and the Flask-SQLAlchemy documentation, I understand these things
(I write them down to check that I understand correctly):
You can handle the transactions through the session.
In SQLAlchemy you can bind a session to a specific engine.
Flask-SQLAlchemy automatically creates the session (scoped_session) at the request start and destroys it at the request end
so I can do:
record = MyTable(1, 'some text')
db.session.add(record)
db.session.commit()
What I can't understand is what happens with the session in Flask-SQLAlchemy when we use multiple databases.
I verified that the system binds each table to the right database through the __bind_key__ parameter;
I can, therefore, insert data into different databases through db.session, and on commit everything is saved.
I can't, however, understand whether Flask-SQLAlchemy creates multiple sessions (one for each engine) or manages things in a different way.
In either case, how is it possible to refer to the session/transaction of a specific database?
If I use db.session.commit(), the system commits on all involved databases, but what do I do if I want to commit only a single database?
I would do something like:
db.session('mssql_bind').commit()
but I cannot figure out how to do this.
I also saw a Flask-SQLAlchemy implementation which should ease the management of these situations:
Issue: https://github.com/mitsuhiko/flask-sqlalchemy/issues/107
Implementation: https://github.com/mitsuhiko/flask-sqlalchemy/pull/249
but I cannot figure out how to use it.
In Flask-SQLAlchemy, how can I manage sessions specifically for each engine?
Flask-SQLAlchemy uses a customized session that handles bind routing according to the __bind_key__ attribute given on the mapped class. Under the hood it actually adds that key as info to the created table. In other words, Flask does not create multiple sessions, one for each bind, but a single session that routes to the correct connectable (engine/connection) according to the bind key. Note that vanilla SQLAlchemy has similar functionality out of the box, as sketched below.
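A sketch of the plain-SQLAlchemy equivalent (the model classes and engines here are placeholders):

from sqlalchemy.orm import sessionmaker

# Route each mapped class to its own engine; the session picks the
# right connectable per query/flush based on this mapping.
Session = sessionmaker(binds={
    SomePgModel: pg_engine,
    SomeMssqlModel: mssql_engine,
})
session = Session()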
In both cases, how is it possible refer to the session/transaction of a specific database?
If I use db.session.commit() the system does the commit on all involved databases, but how can I do if I want to commit only for a single database?
It might not be a good idea to subvert and issue commits to specific databases mid session using the connections owned by the session. The session is a whole and keeps track of state for object instances, flushing changes to databases when needed etc. That means that the transaction handled by the session is not just the database transactions, but the session's own transaction as well. All that should commit and rollback as one.
You could on the other hand create new SQLAlchemy (or Flask-SQLAlchemy) sessions that possibly join the ongoing transaction in one of the binds:
session = db.create_scoped_session(
    options=dict(bind=db.get_engine(app, 'oracle_bind'),
                 binds={}))
This is what the pull request is about. It allows using an existing transactional connection as the bind for a new Flask-SQLAlchemy session. This is very useful for example in testing, as can be seen in the rationale for that pull request. That way you can have a "master" transaction that can, for example, roll back everything done during testing.
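In plain SQLAlchemy terms, that testing pattern looks roughly like this (a sketch of "joining a session into an external transaction" from the SQLAlchemy docs):

from sqlalchemy.orm import Session

# Open a connection and an outer transaction that the test controls.
connection = engine.connect()
trans = connection.begin()

# Bind a session to that connection so its work joins our transaction.
session = Session(bind=connection)
# ... exercise the code under test with `session` ...

session.close()
trans.rollback()  # undo everything done during the test
connection.close()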
Note that the SignallingSession always consults the db.get_engine() method if a bind_key is present. This means that the example session is unable to query tables without a bind key and which don't exist on your oracle DB, but would still work for tables with your mssql_bind key.
The issue you linked to on the other hand does list ways to issue SQL to specific binds:
rows = db.session.execute(query, params,
                          bind=db.get_engine(app, 'oracle_bind'))
There were other less explicit methods listed as well, but explicit is better than implicit.

Select a single active database connection when starting Plone

I have a Plone 4 site which uses an additional Postgres database via a Z Psycopg 2 Database Connection object. Since the ZODB is sometimes replicated for testing and development purposes, there are a few fellow database connection objects, following a project_suffix naming scheme; this way, I can select one of the existing database adapters via the buildout configuration script.
However, I noticed that all existing database connection objects are apparently opened when Plone starts up. I don't know whether this is a real problem (e.g. when applying changes to the schema of the database of another instance), but I'd rather have Plone open only the single database which is actually used. How can I achieve this?
(Plone 4.2.4, Postgres 9.1.9, psycopg2 2.5.1, Debian Linux)
Update:
I added some code to the __init__.py of my product, which looks roughly like this:
from Shared.DC.ZRDB.Connection import Connection
...
dbname = env['DATABASE']
db = None
for id, obj in portalfolder.objectItems():
    if isinstance(obj, Connection):
        if id == dbname:
            db = obj
        else:
            print 'before:', obj._v_connected
            obj._v_database_connection.close()
            print 'after: ', obj._v_connected
However, this seems not to work: there are no exceptions that I'm aware of, but I get a timestamp both before and after the close, and when looking in the ZMI afterwards the connections still seem to be open.
Any ideas, please?
