SQLite multiple writes at the same time - Python

I'm making a Python Flask app with an SQLite database.
Is there a way to make a queue for write requests so that it can run smoothly, since SQLite doesn't support multiple concurrent writes or commits?
this is my connection string
engine = create_engine('sqlite:///IT_DataBase.db',
                       connect_args={'check_same_thread': False})
Base.metadata.bind = engine
DBSession = sessionmaker(bind=engine)
session = DBSession()
and this is the commit code, as an example:
@app.route('/NewRequest', methods=['GET', 'POST'])
@login_required
def NewRequest():
    connUser = session.query(User).filter(User.id == Session.get('user_id')).one()
    if request.method == 'GET':
        Types = session.query(Req_Type.id, Req_Type.Type_name)
        Pr = session.query(Req_Priorities.id, Req_Priorities.Priority_name)
        return render_template('NewRequest.html', conn=connUser, name=current_user.name, items=Types, priorities=Pr)
    else:
        name = request.form['Name']
        Description = request.form['Description']
        Type = request.form.get('Type')
        Priority = request.form.get('Priority')
        newRequest = Requests(name=name, Record_Created=datetime.now().strftime("%Y-%m-%d %H:%M"), Description=Description, Assigned_To=None, Type_Name=str(Type), Priority_Name=str(Priority), Status_Name='Opened', User_ID=Session.get('user_id'))
        session.add(newRequest)
        flash('New Request With Name %s Successfully Created' % newRequest.name)
        session.commit()
        UserRequests = session.query(Requests).filter_by(User_ID=Session.get('user_id')).filter(Requests.Status_Name != 'Solved').all()
        return render_template('ReqData.html', conn=connUser, title='User Requests', rows=UserRequests)
I think that if we don't change the database engine, the solution is either to queue the commits (but I don't know how) or to make Flask wait a random time before committing, but I think that would make performance poor.
What should I do?

You need to serialize the commits. Create a lock like below.
from threading import RLock
sql_lock = RLock()
Wrap session.add and session.commit like the following. lock.acquire() will block while another thread has acquired the lock and has not yet released it. This ensures that at most one thread is running between acquire() and release() at any time.
sql_lock.acquire()
try:
    session.add(newRequest)
    session.commit()
finally:
    sql_lock.release()
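Since RLock also works as a context manager, the same serialization can be written more compactly, for example:
with sql_lock:  # acquired on entry, released on exit even if commit raises
    session.add(newRequest)
    session.commit()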

Related

scoped_session.close() in sqlalchemy

I am using scoped_session from SQLAlchemy (Python) for my APIs:
class DATABASE():
    def __init__(self):
        engine = create_engine(
            'mssql+pyodbc:///?odbc_connect=%s' % (
                urllib.parse.quote_plus(
                    'DRIVER={/usr/lib/x86_64-linux-gnu/odbc/libtdsodbc.so};SERVER=localhost;'
                    'DATABASE=db1;UID=sa;PWD=admin;port=1433;'
                )), isolation_level='READ COMMITTED',
            connect_args={'options': '-c lock_timeout=30 -c statement_timeout=30', 'timeout': 40},
            max_overflow=10, pool_size=30, pool_timeout=60)
        session = sessionmaker(bind=engine)
        self.Session = scoped_session(session)

    def calculate(self, book_id):
        session = self.Session
        output = None
        try:
            result = session.query(Book).get(book_id)
            if result:
                output = result.pages
        except:
            session.rollback()
        finally:
            session.close()
        return output

    def generate(self):
        session = self.Session
        try:
            result = session.query(Order).filter(Order.product_name == 'book').first()
            pages = self.calculate(result.product_id)
            if not output:
                result.product_details = str(pages)
                session.commit()
        except:
            session.rollback()
        finally:
            session.close()
        return output

database = DATABASE()
database.generate()
Here, the session is not committing. Going through the code: the generate function calls the calculate function, and there, after the calculation is complete, the session is closed. Because of this, the changes made in the generate function are not committed to the database.
If I remove session.close() from the calculate function, the changes made in generate are committed to the database.
Blogs recommend closing the session after the API has finished accessing the database.
How do I resolve this, and what is the flow in SQLAlchemy?
Thanks
Scoped sessions default to being thread-local, so as long as you are in the same thread, the factory (self.Session in this case) will always return the same session. So calculate and generate are both using the same session, and closing it in calculate will roll back the changes made in generate.
Moreover, scoped sessions should not be closed; they should be removed from the session registry (by calling self.Session.remove()), and they will be closed automatically in that case.
You should work out where in your code you will have finished with your session, and remove it there and nowhere else. It will probably be best to commit or rollback in the same place. In the code in the question I'd remove rollback and close from calculate.
The docs on When do I construct a Session, when do I commit it, and when do I close it? and Contextual/Thread-local Sessions should be helpful.
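As a rough illustration of that advice (a sketch only, assuming the same Book/Order models and DATABASE class from the question), calculate would stop touching the session lifecycle and generate would commit and then remove the scoped session in one place:
    def calculate(self, book_id):
        # uses the thread-local session but does not close or remove it
        session = self.Session
        result = session.query(Book).get(book_id)
        return result.pages if result else None

    def generate(self):
        session = self.Session
        try:
            result = session.query(Order).filter(Order.product_name == 'book').first()
            pages = self.calculate(result.product_id)
            if pages:
                result.product_details = str(pages)
                session.commit()
            return pages
        except Exception:
            session.rollback()
            raise
        finally:
            # remove (not close) the scoped session once this unit of work is done
            self.Session.remove()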

How to yield a db connection in a python sqlalchemy function similar to how it is done in FastAPI?

In FastAPI I had the following function that I used to open and close a DB session:
def get_db():
    try:
        db = SessionLocal()
        yield db
    finally:
        db.close()
And within the routes of my API I would do something like this:
@router.get("/")
async def read_all_events(user: dict = Depends(get_current_user), db: Session = Depends(get_db)):
    logger.info("API read_all_events")
    if user is None:
        raise http_user_credentials_not_valid_exception()
    return db.query(models.Events).all()
You can see that I am injecting the session into the API call.
So now I want to do something similar within a plain Python function:
def do_something():
    # get person data from database
    # play with person data
    # save new person data in database
    # get cars data from database
So I am wondering if I should use the same approach as in FastAPI (I do not know how) or if I should just be opening and closing the connection manually, like this:
def do_something():
    try:
        db = SessionLocal()
        yield db
        # get person data from database
        # play with person data
        # save new person data in database
        # get cars data from database
    finally:
        db.close()
Thanks
The usage of yield in this case is so that Depends(get_db) returns the db session instance, so that it can be used in the fastapi route, and as soon as the fastapi route returns response to user, the finally clause (db.close()) will be executed. This is good because every request will be using a separate db session, and db connections will be closed after every route response.
If you want to use the db session normally in a function, just get the db instance using db = SessionLocal(), and proceed to use the db instance in the function.
Example:
def do_something():
    db = SessionLocal()
    event = db.query(models.Events).first()
    db.delete(event)
    db.commit()
    db.close()
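If you prefer to keep the yield-based pattern outside FastAPI, contextlib.contextmanager gives roughly the same behaviour; here is a sketch, assuming the same SessionLocal factory (models.Events is reused only as an example):
from contextlib import contextmanager

@contextmanager
def get_db():
    # yields a session and guarantees it is closed afterwards
    db = SessionLocal()
    try:
        yield db
    finally:
        db.close()

def do_something():
    with get_db() as db:
        events = db.query(models.Events).all()
        # work with events here; the session is closed when the block exits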

SQL Flask methods objects in same thread

SQLite objects created in a thread can only be used in that same thread.
app = Flask(__name__)

@app.route("/test/")
def test():
    conn = sqlite3.connect("god_attributes.db")
    c = conn.cursor()
    c.execute("SELECT * FROM god_icon_table")
    all = c.fetchall()
    return render_template("test.html", all=all)
I'm making a Flask app and I have a lot of methods that need to pull data from a db using SQL calls. I am wondering if I can store those methods somewhere else and call them by importing, to keep things organized. Basically I want the entire app route for test to look like this:
app = Flask(__name__)

@app.route("/test/")
def test():
    all = get_all()
    return render_template("test.html", all=all)
where get_all() does everything from conn to fetchall in the first code sample
So I figured out a structural workaround. I wasn't able to place a get_all() method that runs the SQL commands directly, so I generated a new JSON file from the database and used that JSON to get the information.
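For what it's worth, the original error usually disappears if each call opens its own connection instead of sharing one across threads, so a directly importable get_all() should also work. A sketch (the db_helpers module name is made up; the database file and table are the ones from the question):
# db_helpers.py (hypothetical module)
import sqlite3

def get_all():
    # a fresh connection per call keeps the SQLite objects in the calling thread
    conn = sqlite3.connect("god_attributes.db")
    try:
        c = conn.cursor()
        c.execute("SELECT * FROM god_icon_table")
        return c.fetchall()
    finally:
        conn.close()
The route can then do from db_helpers import get_all and call it exactly as in the second snippet.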

How to use cx_Oracle session pool with Flask gracefuly?

I'm a newbie to Python and Flask, and I use Oracle. While working through the Flask tutorial I wrote the code below, but it smells really bad. Please help me with these questions, thanks a lot!
1) Do I need to release the connection back to the pool explicitly?
2) How can I implement pool acquire and release gracefully?
def get_dbpool():
    if not hasattr(g, 'db_pool'):
        g.db_pool = connect_db()
    return g.db_pool

@app.teardown_appcontext
def close_db(error):
    if hasattr(g, 'db_pool'):
        g.db_pool.close()

@app.route('/')
def hello_world():
    db = get_dbpool().acquire()
    cursor = db.cursor()
    sql = ''
    cursor.execute(sql)
    rows = cursor.fetchall()
    cursor.close()
    get_dbpool().release(db)
    return json.jsonify(combines=rows)
There is no need to release the connection to the pool explicitly unless you intend to keep processing for some time and no longer need the connection. cx_Oracle automatically releases the connection back to the pool when the connection goes out of scope (at the end of the function), provided that you haven't created a circular reference to the connection, of course! In that case you would have to wait until garbage collection runs. Hopefully that answers your questions!
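For a more explicit acquire/release pattern, one option (a sketch only; the credentials and DSN below are placeholders) is to create the pool once at startup and acquire connections inside a with block, which returns them to the pool automatically when the block exits:
import cx_Oracle
from flask import Flask, jsonify

app = Flask(__name__)

# create the pool once at startup (placeholder credentials/DSN)
pool = cx_Oracle.SessionPool(user="user", password="password",
                             dsn="localhost/orclpdb1",
                             min=2, max=10, increment=1,
                             threaded=True)  # recommended when shared across request threads

@app.route('/')
def hello_world():
    # the connection is released back to the pool when the with block exits
    with pool.acquire() as db:
        cursor = db.cursor()
        cursor.execute("SELECT sysdate FROM dual")
        rows = cursor.fetchall()
        cursor.close()
    return jsonify(combines=[str(r[0]) for r in rows])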

SQLAlchemy ThreadPoolExecutor "Too many clients"

I wrote a script with this sort of logic in order to insert many records into a PostgreSQL table as they are generated.
#!/usr/bin/env python3
import asyncio
from concurrent.futures import ProcessPoolExecutor as pool
from functools import partial

import sqlalchemy as sa
from sqlalchemy.ext.declarative import declarative_base

metadata = sa.MetaData(schema='stackoverflow')
Base = declarative_base(metadata=metadata)

class Example(Base):
    __tablename__ = 'example'
    pk = sa.Column(sa.Integer, primary_key=True)
    text = sa.Column(sa.Text)

sa.event.listen(Base.metadata, 'before_create',
                sa.DDL('CREATE SCHEMA IF NOT EXISTS stackoverflow'))

engine = sa.create_engine(
    'postgresql+psycopg2://postgres:password@localhost:5432/stackoverflow'
)

Base.metadata.create_all(engine)

session = sa.orm.sessionmaker(bind=engine, autocommit=True)()

def task(value):
    engine.dispose()
    with session.begin():
        session.add(Example(text=value))

async def infinite_task(loop):
    spawn_task = partial(loop.run_in_executor, None, task)
    while True:
        await asyncio.wait([spawn_task(value) for value in range(10000)])

def main():
    loop = asyncio.get_event_loop()
    with pool() as executor:
        loop.set_default_executor(executor)
        asyncio.ensure_future(infinite_task(loop))
        loop.run_forever()
    loop.close()

if __name__ == '__main__':
    main()
This code works just fine, creating a pool of as many processes as I have CPU cores, and happily chugging along forever. I wanted to see how threads would compare to processes, but I could not get a working example. Here are the changes I made:
from concurrent.futures import ThreadPoolExecutor as pool

session_maker = sa.orm.sessionmaker(bind=engine, autocommit=True)
Session = sa.orm.scoped_session(session_maker)

def task(value):
    engine.dispose()
    # create new session per thread
    session = Session()
    with session.begin():
        session.add(Example(text=value))
    # remove session once the work is done
    Session.remove()
This version runs for a while before a flood of "too many clients" exceptions:
sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) FATAL: sorry, too many clients already
What am I missing?
It turns out that the problem is engine.dispose(), which, in the words of Mike Bayer (zzzeek) "is leaving PG connections lying open to be garbage collected."
Source: https://groups.google.com/forum/#!topic/sqlalchemy/zhjCBNebnDY
So the updated task function looks like this:
def task(value):
    # create new session per thread
    session = Session()
    with session.begin():
        session.add(Example(text=value))
    # remove the session from the registry once the work is done
    Session.remove()
It looks like you're opening a lot of new connections without closing them; try adding engine.dispose() at the end:
from concurrent.futures import ThreadPoolExecutor as pool

session_maker = sa.orm.sessionmaker(bind=engine, autocommit=True)
Session = sa.orm.scoped_session(session_maker)

def task(value):
    engine.dispose()
    # create new session per thread
    session = Session()
    with session.begin():
        session.add(Example(text=value))
    # remove session once the work is done
    Session.remove()
    engine.dispose()
Keep in mind the cost of a new connection, so ideally you should have one connection per process/thread, but I'm not sure how ThreadPoolExecutor works; connections are probably not being closed when a thread finishes executing.
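If you want to put a hard bound on how many connections each process can open, the engine's pool sizing arguments are one way to do it (a sketch, assuming the same PostgreSQL database as above):
import sqlalchemy as sa

# at most pool_size + max_overflow connections are opened per engine/process
engine = sa.create_engine(
    'postgresql+psycopg2://postgres:password@localhost:5432/stackoverflow',
    pool_size=5,       # connections kept open in the pool
    max_overflow=5,    # additional connections allowed under load
    pool_timeout=30,   # seconds to wait for a free connection before raising
)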
