I have stored my database in Snowflake, and using https://pypi.org/project/snowflake-connector-python/ I am accessing it and running queries from a Flask server, as below:
import snowflake.connector

ctx = snowflake.connector.connect(
    user='username',
    password='pass',
    account='account',
    client_session_keep_alive=True
)
cs = ctx.cursor()
try:
    cs.execute("SELECT current_version()")
    one_row = cs.fetchone()
    print("Successfully connected to snowflake version: {}".format(one_row[0]))
    cs.close()
except Exception as e:
    print("Snowflake connection error: {}".format(e))
    cs.close()
I have defined the Snowflake connection as a global ctx variable so that it is accessible from any Flask API function.
After starting the Flask server everything works fine, but if no API calls are made for a few hours it throws the error 'snowflake.connector.errors.ProgrammingError: 390114 (08001): Authentication token has expired. The user must authenticate again.'
As you can see, I have passed the 'client_session_keep_alive=True' parameter to the Snowflake connect API to keep the session alive, but it still fails.
I looked into this issue but did not find any conclusive information.
So I want to know: how can I keep the database connection session alive, or do I have to create a new connection session for each query?
Any suggestions will be very helpful.
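For context, the alternative I am considering is to stop relying on a single long-lived global connection and instead fetch the connection through a small helper that reconnects when the cached session has gone stale. A rough sketch, assuming the same connection parameters as above (get_connection is just an illustrative helper name, not something from the connector):

import snowflake.connector

_ctx = None  # cached connection, recreated when it stops working

def get_connection():
    """Return a usable Snowflake connection, reconnecting if the token has expired."""
    global _ctx
    if _ctx is not None:
        try:
            # cheap probe; raises if the session/token is no longer valid
            _ctx.cursor().execute("SELECT 1")
            return _ctx
        except Exception:
            pass  # fall through and reconnect
    _ctx = snowflake.connector.connect(
        user='username',
        password='pass',
        account='account',
        client_session_keep_alive=True,
    )
    return _ctx

Each Flask view would then call get_connection() instead of touching the global ctx directly.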
I'm trying to test my FastAPI app. It seems to me that all the settings are correct.
test_users.py
engine = create_engine(
    f"postgresql"
    f"://{settings.database_username}"
    f":{settings.database_password}"
    f"@{settings.database_hostname}"
    f":{settings.database_port}"
    f"/test_{settings.database_name}"
)
TestingSessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)
Base.metadata.create_all(bind=engine)
def override_get_db():
    try:
        db = TestingSessionLocal()
        yield db
    finally:
        db.close()
app.dependency_overrides[get_db] = override_get_db
client = TestClient(app)
def test_create_user():
    response = client.post(
        "/users/",
        json={"email": "nikita@gmail.com", "password": "password"}
    )
    new_user = schemas.UserOutput(**response.json())
    assert response.status_code == 201
    assert new_user.email == "nikita@gmail.com"
When I run pytest, I get this error:
sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) connection to server at "localhost" (::1), port 5432 failed: FATAL: database "test_social_media_api" does not exist
Why is the code not creating the database?
With engine = create_engine("postgresql://...") you define a connection to an existing PostgreSQL database.
And with Base.metadata.create_all(bind=engine) you create the tables - according to your models - in the existing database.
So the code that you have written does not create a database; it expects that you give it an already existing database.
And that has to do with PostgreSQL itself.
PostgreSQL runs as a server, and a PostgreSQL server can run multiple databases. And each database has to be created explicitly.
Just telling SQLAlchemy the connection string is not enough.
It's possible to create a new database from Python itself by connecting to the PostgreSQL server (see https://www.tutorialspoint.com/python_data_access/python_postgresql_create_database.htm), or alternatively you can create it manually before you run your script. E.g. by running CREATE DATABASE databasename; inside psql (or any other database tool).
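For the programmatic route, a rough sketch with psycopg2 (the host and credentials are illustrative, and the account must be allowed to create databases; the database name is taken from the error message above):

import psycopg2

# connect to the default "postgres" maintenance database, not the one we want to create
conn = psycopg2.connect(
    host="localhost", port=5432,
    user="postgres", password="postgres",
    dbname="postgres",
)
conn.autocommit = True  # CREATE DATABASE cannot run inside a transaction block
with conn.cursor() as cur:
    cur.execute("CREATE DATABASE test_social_media_api;")
conn.close()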
However, if you want to test against a running database, I would suggest using testcontainers. They will spawn a new PostgreSQL server with an empty database every time you run the tests.
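A rough sketch of that approach with the testcontainers package (the image tag and fixture name are illustrative, not part of your code):

import pytest
from sqlalchemy import create_engine
from testcontainers.postgres import PostgresContainer

@pytest.fixture(scope="session")
def engine():
    # starts a throwaway PostgreSQL server in Docker for the test session
    with PostgresContainer("postgres:15") as postgres:
        engine = create_engine(postgres.get_connection_url())
        # Base is the declarative base from your models, as in your question
        Base.metadata.create_all(bind=engine)
        yield engine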
Notice that the example from the FastAPI documentation works differently.
They just use
SQLALCHEMY_DATABASE_URL = "sqlite:///./test.db"
engine = create_engine(
    SQLALCHEMY_DATABASE_URL, connect_args={"check_same_thread": False}
)
Base.metadata.create_all(bind=engine)
which creates the database.
This works, because SQLite doesn't run as a server. It's just one file that represents the full database, and if the file doesn't exist, the sqlite database adapter will assume that the database is just empty, and create a new file for you. PostgreSQL doesn't work like this though.
I am trying to build a web service app that will let me connect to a database on Azure with Python code. I've tried using SQLAlchemy and pyodbc, and I can successfully connect to the database from my machine on localhost and perform all the actions I want there. What I want is to set this code up so that specific routes can be hit from an AJAX call and perform certain actions on my database, like flipping a user's active flag to false. The problem is that when I upload the Python code to Azure, using this guide (https://learn.microsoft.com/en-us/azure/app-service/quickstart-python?tabs=bash&pivots=python-framework-flask), it just returns a 500 server error, and I can't find anything in the trace as to why it isn't working. I thought it might just be that only my local machine is whitelisted in the database's IP allow list, but even after adding the App Service's IP address to the allowed IPs it still returns a server error. Here is the setup of the code:
from flask import Flask
from sqlalchemy import create_engine
import pyodbc

app = Flask(__name__)

@app.route('/')
def connection():
    Driver = "{ODBC Driver 13 for SQL Server}"
    Server = "server string"
    Port = 1433
    Database = "dbname"
    Uid = "user"
    Pwd = "pass"
    try:
        cnxn = pyodbc.connect(f'DRIVER={Driver};SERVER={Server};DATABASE={Database};Uid={Uid};Pwd={Pwd};Encrypt=yes;TrustServerCertificate=no;Connection Timeout=30;')
        cursor = cnxn.cursor()
        cursor.execute(f"SELECT * FROM (Database Table) where id = 999999999;")
        for row in cursor:
            print('row = %r' % (row,))
        return "Connection to database successful"
    except Exception as e:
        # surface the error to the client (this is what the try was for)
        return "Connection to database failed: {}".format(e)
I omitted certain details, but the syntax should remain intact. Again, on my local machine I can connect to the database and return data, but once the code is deployed to Azure it no longer works.
Excuse the try statement. I was trying to catch the error and send it to the client in hopes of gathering more information from the 500 error, but it didn't work.
**Edit:** it's worth mentioning that if I remove the actual connection string and anything to do with connecting to the database, the code returns "Connection to database successful". This leads me to believe that it isn't making a connection to the database at all, and that's why it's erroring out. However, the question remains: why can I connect in my local environment but not in Azure?
Have you heard about remote debugging in Azure? It is a good tool for tracing errors from Visual Studio Code or Visual Studio when your web app otherwise works locally. You can use multiple methods to debug line by line or cluster by cluster. Kindly check it out here:
https://learn.microsoft.com/en-us/visualstudio/debugger/remote-debugging?view=vs-2019
So the short answer to this is that the App Service was missing the driver {ODBC Driver 13 for SQL Server}. It wasn't erroring out locally because I have that driver installed locally, but the driver isn't available in the App Service. To get it to work as-is I would have had to install the app on a Virtual Machine so I could install the driver myself.
Fortunately, Azure web apps built on a Linux base already come with {ODBC Driver 17 for SQL Server}, so flipping the driver from 13 to 17 allowed the app to successfully connect to my database.
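In other words, the only change needed in the snippet above is the driver name; a rough sketch of the adjusted connection, using the same placeholders as in my question:

import pyodbc

# Azure App Service on Linux already ships ODBC Driver 17, so point pyodbc at it
Driver = "{ODBC Driver 17 for SQL Server}"
cnxn = pyodbc.connect(
    f'DRIVER={Driver};SERVER=server string;DATABASE=dbname;Uid=user;Pwd=pass;'
    'Encrypt=yes;TrustServerCertificate=no;Connection Timeout=30;'
)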
I'm running a setup of FastAPI and SQLAlchemy to have a web server that runs basic CRUD operations. After the server has been up in a Docker container for a certain time, I get the following error message:
sqlalchemy.exc.StatementError: (sqlalchemy.exc.InvalidRequestError) Can't reconnect until invalid transaction is rolled back
FROM users
WHERE users.email = %s]
[parameters: [immutabledict({})]]
INFO:- "POST /routename HTTP/1.0" 500 Internal Server Error
2020-11-23T11:54:38.764592599Z ERROR: Exception in ASGI application
Now to my knowledge, the issue is related to sessions and sessions closing.
Here's the route in question
@app.post('/routename')
async def getLogin(request: GetLoginRequest):
    ...
    # fetches the current user and verifies some parameters
    ...
    session.close()
    return JSONResponse(content=currentUser)
I don't know why it crashes despite having the session.close() at the end.
I define my session in the root of the application
SQLALCHEMY_DATABASE_URL = settings.connection_string
engine = create_engine(SQLALCHEMY_DATABASE_URL)
Session = sessionmaker(autocommit=False, autoflush=False, bind=engine)
session = Session()
Base = automap_base()
I honestly believe this is just a SQLAlchemy session issue and not a FastAPI issue, but I can't work out what's wrong and why it keeps crashing.
I had the same error with a long-running session waiting for data.
My solution:
try:
    session.close()
except:
    session.rollback()
    session.close()
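Since the underlying problem is a single module-level session shared across requests, a related option (not part of my fix above) is to give each request its own session and always roll back on failure. A minimal sketch, assuming the Session factory defined in the question:

def get_session():
    # one session per request; always release it, rolling back if anything failed
    db = Session()
    try:
        yield db
        db.commit()
    except Exception:
        db.rollback()  # clears the invalid transaction so the connection can be reused
        raise
    finally:
        db.close()

Each route would then take the session via FastAPI's Depends(get_session) instead of using the module-level session object.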
I'm using Python to try to connect to a DB. This code used to work, then something in my environment changed so that the host is no longer present/accessible. That is expected. What I'm trying to work out is why I can't seem to catch the error when this happens. This is my code:
import os
import mysql.connector

# message() and log are logging helpers defined elsewhere in my code
def create_db_connection(self):
    try:
        message('try...')
        DB_HOST = os.environ['DB_HOST']
        DB_USERNAME = os.environ['DB_USERNAME']
        DB_PASSWORD = os.environ['DB_PASSWORD']
        message('connecting...')
        db = mysql.connector.connect(
            host=DB_HOST,
            user=DB_USERNAME,
            password=DB_PASSWORD,
            auth_plugin='mysql_native_password'
        )
        message('connected...')
        return db
    except mysql.connector.Error as err:
        log.info('bad stuff happened...')
        log.info("Something went wrong: {}".format(err))
        message('exception connecting...')
    except Exception as ex:
        log.info('something bad happened')
        message("Exception: {}".format(ex))
    message('returning false connection...')
    return False
I see output up to the message('connecting...') call, but nothing after it. Also, I don't see any of the except messages/logs at all.
Is there something else I need to catch/check in order to know that a DB connection attempt has failed?
This is running inside an AWS Lambda and was working until I changed some subnets/etc. The key thing is that I want to catch when it can no longer connect.
The issue is most likely that your Lambda function is timing out before the database connection times out.
First, modify the Lambda function's timeout to 60 seconds and test. You should find that after about 30 seconds the connection to the database times out.
To resolve this issue, modify the security group on the database instance to include the security group configured for the Lambda, with an entry that opens the correct port, 3306.
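Separately, you can make the failure surface before the Lambda times out by giving the connector its own, shorter timeout. A small sketch, reusing the DB_HOST/DB_USERNAME/DB_PASSWORD variables from your question (the 5-second value is just an illustrative choice):

import mysql.connector

db = mysql.connector.connect(
    host=DB_HOST,
    user=DB_USERNAME,
    password=DB_PASSWORD,
    auth_plugin='mysql_native_password',
    connection_timeout=5,  # fail fast so the except block runs before the Lambda timeout
)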
I have a project deployed on Google App Engine exposing a Google API (Python). Every request to any of the APIs makes a database connection, executes a procedure, returns data, and closes the connection. I was not able to access any of the APIs, as they were showing
"Process terminated because the request deadline was exceeded. (Error code 123)" and "This request caused a new process to be started for your application, and thus caused your application code to be loaded for the first time. This request may thus take longer and use more CPU than a typical request for your application."
The database is also on the cloud (Google Cloud SQL). When I checked, there were 900 connections and more than 150 instances up, but no API request was being handled. This happens frequently, so I restart the database server and redeploy the API code to work around it. What is the issue, and how can I solve it permanently? Here is my Python code for database connectivity:
import logging
import traceback
import os
import MySQLdb
from warnings import filterwarnings

filterwarnings('ignore', category=MySQLdb.Warning)

class TalkWithDB:
    def callQueries(self, query, req_args):
        try:
            if (os.getenv('SERVER_SOFTWARE') and os.getenv('SERVER_SOFTWARE').startswith('Google App Engine/')):
                db = MySQLdb.connect(unix_socket=UNIX_SOCKET + INSTANCE_NAME, host=HOST, db=DB, user=USER, charset='utf8', use_unicode=True)
            else:
                db = MySQLdb.connect(host=HOST, port=PORT, db=DB, user=USER, passwd=PASSWORD, charset='utf8', use_unicode=True)
            cursor = db.cursor()
            cursor.connection.autocommit(True)
            try:
                sql = query + str(req_args)
                logging.info("QUERY = " + sql)
                cursor.execute(sql)
                procedureResult = cursor.fetchall()
                if str(procedureResult) == '()':
                    logging.info("Procedure Returned 0 Record")
                    procedureResult = []
                    procedureResult.append({0: "NoRecord", 1: "Error"})
                    # procedureResult = (("NoRecord","Error",),)
                elif procedureResult[0][0] == 'Session Expired'.encode(encoding='unicode-escape', errors='strict'):
                    procedureResult = []
                    procedureResult.append({0: "SessionExpired", 1: "Error"})
            except Exception as err:
                logging.info("ConnectDB.py : - Error in Procedure Calling : " + traceback.format_exc())
                # procedureResult = (('ProcedureCallError','Error',),)
                procedureResult = []
                procedureResult.append({0: "ProcedureCallError", 1: "Error"})
        except Exception as err:
            logging.info("Error In DataBase Connection : " + traceback.format_exc())
            # procedureResult = (('DataBaseConnectionError','Error',),)
            procedureResult = []
            procedureResult.append({0: "DataBaseConnectionError", 1: "Error"})
        # disconnect from server
        finally:
            try:
                cursor.close()
                db.close()
            except Exception as err:
                logging.info("Error In Closing Connection : " + traceback.format_exc())
        return procedureResult
Two possible improvements:
Your startup code for instances may take too long. Check what the startup time is and, if possible, use warmup requests to reduce it. Since increasing your idle instances seems to help, your startup time probably takes too long.
A better approach would be to call external services (e.g. talking to Google Calendar) from a Task Queue, outside of the user request scope. This gives you a 10-minute deadline instead of the 60-second deadline for user requests, as shown in the sketch below.
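As a rough illustration of the second point, on the legacy App Engine standard environment a request handler can push the slow external call onto a task queue instead of doing it inline (the worker URL and parameter names here are illustrative, not from your code):

from google.appengine.api import taskqueue

# enqueue the slow work; a separate /calendar-worker handler processes it later
# with a 10-minute deadline instead of the 60-second user-request deadline
taskqueue.add(
    url='/calendar-worker',
    params={'user_id': user_id},
)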