I'm writing a Flask API using Flask-RESTful, SQLAlchemy, Postgres, nginx, and uWSGI. I'm a newbie to Python. These are my configurations:
database.py
from cuewords import app
from flask.ext.sqlalchemy import SQLAlchemy
from sqlalchemy.pool import NullPool
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import create_engine
from sqlalchemy import Column, Integer, String , Text , Boolean , DateTime, MetaData, Table ,Sequence
from sqlalchemy.dialects.postgresql import JSON
Base = declarative_base()
db_name = "postgresql+psycopg2://admin:password@localhost:5434/database_name"
from sqlalchemy.orm import sessionmaker
engine = create_engine(db_name,poolclass=NullPool ,echo=True)
Session = sessionmaker(autocommit=False ,bind=engine)
connection = engine.connect()
metadata = MetaData()
api.py
class Webcontent(Resource):
    def post(self):
        session = Session()
        # ...assign some params...
        try:
            insert_data = Websitecontent(site_url, publisher_id)
            session.add(insert_data)
            session.commit()
            Websitecontent.update_url(insert_data.id, session)
        except:
            session.rollback()
            raise
        finally:
            return "Data created "
            session.close()
        else:
            return "some value"
Here I'm first saving just the URL, then saving all the content of the site using boilerpipe later. The idea is to move this to a queue later.
model.py
class Websitecontent(Base):

    @classmethod
    def update_url(self, id, session):
        existing_record = session.query(Websitecontent).filter_by(id=int(id)).first()
        data = Processing.processingContent(str(existing_record.url))
        # boilerpipe processing the content here
        # assigning some data to existing record in session
        session.add(existing_record)
        session.commit()
        Websitecontent.processingWords(existing_record, session)

    @classmethod
    def processingWords(self, record, session):
        # ...processing
        Websitecontent.saveKeywordMapping(session, keyword_url, record)

    @classmethod
    def saveKeywordMapping(session, keyword_url, record):
        session.commit()
        session.close()
So this code works perfectly locally, but it doesn't work in production. When I check pg_stat_activity it shows the state "idle in transaction". The app hangs, and then I have to restart the servers. I don't get why session.close() does not release the pooled connection, and why it keeps the Postgres transaction state busy. Guys, any help would be really appreciated.
You are returning before closing the session:
return "Data created "
session.close()
I think returning inside finally might swallow the exception, as well.
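To make both points concrete, here is a minimal, library-free sketch (DummySession is a stand-in, not part of the question's code): a return inside finally runs unconditionally, discards any in-flight exception, and makes everything after it unreachable, so the session is never closed. Closing inside finally and returning after the block fixes both problems.

```python
class DummySession:
    """Stand-in for a SQLAlchemy session, just to trace close()."""
    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True

def broken(session):
    try:
        raise RuntimeError("db error")
    finally:
        return "Data created "   # swallows the RuntimeError...
        session.close()          # ...and this line is unreachable

def fixed(session):
    try:
        pass  # do the work; a failure would raise and still propagate
    finally:
        session.close()          # always runs, before the return below
    return "Data created "

s1, s2 = DummySession(), DummySession()
print(broken(s1), s1.closed)  # exception swallowed, session never closed
print(fixed(s2), s2.closed)   # session closed before returning
```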
I'm using fastapi to create a basic API to do some statistics from a postgres database.
I have just started using SQLAlchemy, as I want to do connection pooling, and based on my googling it seems to be the route to go down.
I've implemented this in my main.py file,
def get_db():
    try:
        db = SessionLocal()
        yield db
    finally:
        db.close()
Then using depends from fastapi with my URL params,
async def get_data(xxx, db: SessionLocal = Depends(get_db)):
    conn = db()
With the sessions function being,
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from environment.config import settings
SQLALCHEMY_DATABASE_URL = settings.DATABASE_URL
engine = create_engine(SQLALCHEMY_DATABASE_URL)
SessionLocal = sessionmaker(autocommit=False,autoflush=False,bind=engine)
I'm receiving a TypeError saying SessionLocal is not callable; wondering what I'm missing here?
The issue I was having: when testing the API, multiple calls were essentially each recreating a connection to the database, which was super laggy even just testing locally - so wanting to make that... well, work :)
Imports for Main.py
from plistlib import UID
import string
from typing import Optional
from pandas import concat
from fastapi import FastAPI, HTTPException, Header,Depends
from fastapi.middleware.cors import CORSMiddleware
from pydantic import BaseModel
from db.session import SessionLocal
from biometrics.TaskGet import GetSentiment, GetScheduled, GetAllActualSubstance, GetEmotions
import aggregate.biometric_aggregators as biometric_aggregators
import os
Based on the answer below, I just let it use db directly. Now, weirdly, I get this error:
web_1 | AttributeError: 'Session' object has no attribute 'cursor'
I made SQLAlchemy functions to do the same calls, and now I'm getting the same error as before.
I originally tried to just return the function without using pd.read_sql, but it didn't work - any idea on what I've done wrong here?
from sqlalchemy.orm import Session
import pandas as pd
from . import models
def sentimentDataframe(db: Session, user_id: str):
    Sentiment = pd.read_sql(get_sentiment(db, user_id), con=db)
    Sentiment['created'] = pd.to_datetime(Sentiment['created'], unit='s')
    return Sentiment.set_index('created')

def get_sentiment(db: Session, user_id: str, skip: int = 0, limit: int = 100):
    return db.query(models.Sentiment).filter(models.Sentiment.user_id == user_id).order_by(models.Sentiment.created.desc()).offset(skip).limit(limit).all()
You should not be doing this
conn = db()
as db is already an instance of Session.
You can already use it like so
db.add(<SQLAlchemy Base Instance>)
db.commit()
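A small sketch of that distinction, using an in-memory SQLite engine as a stand-in for the real DATABASE_URL: SessionLocal is the factory, and it is called exactly once, inside get_db. What Depends then hands your endpoint is already a Session, so you call query methods on it directly rather than calling it again.

```python
from sqlalchemy import create_engine, text
from sqlalchemy.orm import sessionmaker

engine = create_engine("sqlite://")  # stand-in for the real DATABASE_URL
SessionLocal = sessionmaker(bind=engine)

db = SessionLocal()  # the factory is called once, as get_db() does
# calling db() here would raise a TypeError, since a Session isn't callable
value = db.execute(text("SELECT 1")).scalar()  # use the session directly
db.close()
print(value)
```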
I'm working on an async FastAPI project and I want to connect to the database during tests. Coming from Django, my instinct was to create pytest fixtures that take care of creating/dropping the test database. However, I couldn't find much documentation on how to do this. The most complete instructions I could find were in this tutorial, but they don't work for me because they are all synchronous. I'm somewhat new to async development so I'm having trouble adapting the code to work async. This is what I have so far:
import pytest
from sqlalchemy.ext.asyncio import create_async_engine, session
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker
from sqlalchemy_utils import database_exists, create_database
from fastapi.testclient import TestClient
from app.core.db import get_session
from app.main import app
Base = declarative_base()
@pytest.fixture(scope="session")
def db_engine():
    default_db = (
        "postgresql+asyncpg://postgres:postgres@postgres:5432/postgres"
    )
    test_db = "postgresql+asyncpg://postgres:postgres@postgres:5432/test"

    engine = create_async_engine(default_db)
    if not database_exists(test_db):  # <- Getting error on this line
        create_database(test_db)

    Base.metadata.create_all(bind=engine)
    yield engine


@pytest.fixture(scope="function")
def db(db_engine):
    connection = db_engine.connect()
    # begin a non-ORM transaction
    connection.begin()
    # bind an individual Session to the connection
    Session = sessionmaker(bind=connection)
    db = Session()
    # db = Session(db_engine)
    yield db

    db.rollback()
    connection.close()


@pytest.fixture(scope="function")
def client(db):
    app.dependency_overrides[get_session] = lambda: db
    PREFIX = "/api/v1/my-endpoint"
    with TestClient(PREFIX, app) as c:
        yield c
And this is the error I'm getting:
E sqlalchemy.exc.MissingGreenlet: greenlet_spawn has not been called; can't call await_() here. Was IO attempted in an unexpected place? (Background on this error at: https://sqlalche.me/e/14/xd2s)
/usr/local/lib/python3.9/site-packages/sqlalchemy/util/_concurrency_py3k.py:67: MissingGreenlet
Any idea what I have to do to fix it?
You're trying to use a sync engine with an async session. Try to use:
from sqlalchemy.ext.asyncio import AsyncSession
Session = sessionmaker(bind=connection, class_=AsyncSession)
https://docs.sqlalchemy.org/en/14/orm/extensions/asyncio.html
I'm utilizing Flask and SQLAlchemy. The database I've created for SQLAlchemy seems to mess up when I try to run my website, popping up with an error stating that there's a thread error. I'm wondering if it's because I haven't dropped my table from my previous schema. I'm using a Linux server to run "python3" and the file that sets up my database.
I've tried to physically delete the table from my local drive and re-run it, but I still get this error.
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from sqlalchemy.orm import scoped_session
from database_setup import Base, Category, Item
engine = create_engine('sqlite:///database_tables.db')
Base.metadata.bind = engine
Session = sessionmaker()
Session.bind = engine
session = Session()
brushes = Category(id = 1, category_name = 'Brushes')
session.add(brushes)
session.commit()
pencils = Category(id = 2, category_name = 'Pencils')
session.add(pencils)
session.commit()
When I am in debug mode using Flask, I click the links I've made using these rows, but after three clicks I get the error
"(sqlite3.ProgrammingError) SQLite objects created in a thread can only be used in that same thread.The object was created in thread id 140244909291264 and this is thread id 140244900898560 [SQL: SELECT category.id AS category_id, category.category_name AS category_category_name FROM category] [parameters: [{}]] (Background on this error at: http://sqlalche.me/e/f405)"
You can use a separate session for each thread, by indexing them with the thread id from _thread.get_ident():
import _thread
engine = create_engine('sqlite:///history.db', connect_args={'check_same_thread': False})
...
Base.metadata.create_all(engine)
sessions = {}

def get_session():
    thread_id = _thread.get_ident()  # get thread id
    if thread_id in sessions:
        return sessions[thread_id]
    session_factory = sessionmaker(bind=engine)
    Session = scoped_session(session_factory)
    sessions[thread_id] = Session()
    return sessions[thread_id]
then use get_session() where it is needed, in your case:
get_session().add(brushes)
get_session().commit()
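Worth noting: scoped_session already does this per-thread bookkeeping internally, using thread-local storage, so a single module-level Session = scoped_session(session_factory) is usually enough on its own. A library-free sketch of the mechanism (SessionRegistry is a hypothetical name, and plain object() stands in for a real session factory):

```python
import threading

class SessionRegistry:
    """Hands each thread its own session, as scoped_session does."""
    def __init__(self, factory):
        self._factory = factory
        self._local = threading.local()  # separate attributes per thread

    def get(self):
        if not hasattr(self._local, "session"):
            self._local.session = self._factory()
        return self._local.session

registry = SessionRegistry(factory=object)  # object() stands in for Session()

# repeated calls in one thread return the same "session"
print(registry.get() is registry.get())  # True

# a different thread gets its own
seen = {}
t = threading.Thread(target=lambda: seen.update(other=registry.get()))
t.start(); t.join()
print(seen["other"] is registry.get())  # False
```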
I have the following set up for which on session.query() SqlAlchemy returns stale data:
Web application running on Flask with Gunicorn + supervisor.
one of the services is composed in this way:
app.py:
@app.route('/api/generatepoinvoice', methods=["POST"])
@auth.login_required
def generate_po_invoice():
    try:
        po_id = request.json['po_id']
        email = request.json['email']
        return jsonify(response=POInvoiceGenerator.get_invoice(po_id, email))
    except Exception as ex:
        app.logger.error("generate_po_invoice(): " + ex.message)
In another folder I have the database-related stuff:
DatabaseModels (folder)
|-->Model.py
|-->Connection.py
That's what is contained in the Connection.py file:
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker, scoped_session
from sqlalchemy.ext.declarative import declarative_base
engine = create_engine(DB_BASE_URI, isolation_level="READ COMMITTED")
Session = scoped_session(sessionmaker(bind=engine))
session = Session()
Base = declarative_base()
and thats an extract of the model.py file:
from DatabaseModels.Connection import Base
from sqlalchemy import Column, String, etc...
class Po(Base):
    __tablename__ = 'PLC_PO'
    id = Column("POId", Integer, primary_key=True)
    code = Column("POCode", String(50))
    etc...
Then I have another file, POInvoiceGenerator.py, that contains the call to the database for fetching some data:
import DatabaseModels.Connection as connection
import DatabaseModels.model as model
def get_invoice(po_code, email):
    try:
        po_code = po_code.strip()
        PLCConnection.session.expire_all()
        po = connection.session.query(model.Po).filter(model.Po.code == po_code).first()
    except Exception as ex:
        logger.error("get_invoice(): " + ex.message)
In subsequent user calls to this service I sometimes start to get errors like: could not find data in the db for that specific code, and so on. As if the data were stale.
My first approach was to add isolation_level="READ COMMITTED" to the engine declaration and then to create a scoped session, but the stale-data reads keep happening.
Does anyone have any idea whether my setup is wrong (the session and the model are reused among multiple methods and files)?
Thanks in advance.
Even if the solution pointed out by @TonyMountax seems valid, and made me discover something I didn't know about SQLAlchemy, in the end I opted for something different.
I figured out that the connection established by SQLAlchemy was durable, since it was created from a pool of connections every time; this somehow was causing the data to be stale.
I added a NullPool to my code:
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker, scoped_session
from sqlalchemy.pool import NullPool
engine = create_engine(DB_URI, isolation_level="READ COMMITTED", poolclass=NullPool)
Session = scoped_session(sessionmaker(bind=engine))
session = Session()
and then I'm calling session.close() for every query that I make:
session.query("some query..")
session.close()
This will cause SQLAlchemy to create a new connection every time and fetch fresh data from the db.
I hope this is the correct way to use it and that it might be useful to someone else.
The way you instantiate your database connections means that they are reused for the next request, and they have some state left from the previous request. SQLAlchemy uses a concept of sessions to interact with the database, so that your data does not abruptly change in a single request even if you happen to perform the same query twice. This makes sense when you are using the ORM query features. For instance, if you were to query len(User.friendlist) twice during the same session, but a friend request was accepted during the request, then it will still show the same number in both locations.
To fix this, you must set up the session on first request, then you must tear it down when the request is finished. To do so is not trivial, but there is a well-established project that does it already: Flask-SQLAlchemy. It's from Pallets, the people behind Flask itself and Jinja2.
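The request-scoped pattern such projects implement can be sketched roughly like this (an in-memory SQLite engine stands in for the real database, and handle_request/teardown_request are hypothetical stand-ins for a view function and the framework's teardown hook):

```python
from sqlalchemy import create_engine, text
from sqlalchemy.orm import scoped_session, sessionmaker

engine = create_engine("sqlite://")  # stand-in for the real DB URI
Session = scoped_session(sessionmaker(bind=engine))

def handle_request():
    # every call to Session() during one "request" returns the same session
    return Session().execute(text("SELECT 1")).scalar()

def teardown_request():
    # roughly what Flask-SQLAlchemy wires into the app's teardown hook:
    # discard the session so the next request starts with a fresh one
    Session.remove()

result = handle_request()
teardown_request()
print(result)
```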
I'm getting this error sometimes (sometimes it's OK, sometimes it's wrong):
sqlalchemy.exc.OperationalError: (OperationalError) MySQL Connection not available.
while using session.query
I'm writing a simple server with Flask and SQLAlchemy (MySQL). My app.py is like this:
Session = sessionmaker(bind=engine)
session = Session()

@app.route('/foo')
def foo():
    try:
        session.query(Foo).all()
    except Exception:
        session.rollback()
Update
I also create new session in another file and call it in app.py
Session = sessionmaker(bind=engine)
session = Session()

def foo_helper():  # called in app.py
    session.query(Something).all()
Update 2
My engine:
engine = create_engine('path')
How can I avoid that error?
Thank you!
Make sure the value of the pool_recycle option is less than your MySQL's wait_timeout value when using SQLAlchemy's create_engine function.
engine = create_engine("mysql://username:password@localhost/myDatabase", pool_recycle=3600)
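A sketch of where the option goes (SQLite stands in for the MySQL URL so this runs without a server; the mysql URL in the comment mirrors the answer's). A related engine option, pool_pre_ping, additionally tests each pooled connection with a lightweight ping on checkout, so connections dropped by the server are replaced transparently:

```python
from sqlalchemy import create_engine, text

engine = create_engine(
    "sqlite://",         # e.g. mysql://username:password@localhost/myDatabase
    pool_recycle=3600,   # keep this below MySQL's wait_timeout
    pool_pre_ping=True,  # ping connections on checkout, replace dead ones
)

with engine.connect() as conn:
    print(conn.execute(text("SELECT 1")).scalar())
```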
Try to use scoped_session to make your session:
from sqlalchemy.orm import scoped_session, sessionmaker
session = scoped_session(sessionmaker(autocommit=False, autoflush=False, bind=engine))
and close/remove your session after retrieving your data.
session.query(Foo).all()
session.close()