I recently found out that the FTS5 extension has been released.
What is the best way to check whether my application can use it on a user's system?
Just a python3 version check, or an sqlite3.sqlite_version check according to the release page? Or something else?
/ this was previously an edit of the OP's post, but I moved it down here to keep the question clear
So this feels like it could work; I found it in another question here:
import sqlite3

con = sqlite3.connect(':memory:')
cur = con.cursor()
cur.execute('pragma compile_options;')
available_pragmas = cur.fetchall()
con.close()
print(available_pragmas)

if ('ENABLE_FTS5',) in available_pragmas:
    print('YES')
else:
    print('NO')
But what is strange is that I ran it in a few virtual machines and none of them had FTS4 enabled, yet I had been using FTS4 happily like nothing was wrong... maybe it's FTS3/FTS4 being essentially the same, or maybe it's just all wrong.
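A more direct probe, instead of parsing compile options, is simply to try creating a throwaway FTS4 table and see whether SQLite complains (a minimal sketch, not from the original post):

import sqlite3

con = sqlite3.connect(':memory:')
try:
    con.execute('CREATE VIRTUAL TABLE fts4test USING fts4(data)')
    print('FTS4 available')
except sqlite3.OperationalError:
    print('FTS4 not available')
finally:
    con.close()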
/edit
From the documentation:
Note that enabling FTS3 also makes FTS4 available. There is not a separate SQLITE_ENABLE_FTS4 compile-time option. A build of SQLite either supports both FTS3 and FTS4 or it supports neither.
The documentation you link to mentions that FTS5 is disabled by default. Did you enable it when compiling SQLite?
One quick way to know is to use the peewee ORM:
from playhouse.sqlite_ext import FTS5Model
FTS5Model.fts5_installed()
The above will return True if you can use FTS5. You can install peewee with pip install peewee.
You could also use the apsw wrapper, which has included FTS5 by default since version 3.11.0-r1. See the build instructions and use the --enable-all-extensions flag; the apsw wrapper uses the SQLite amalgamation.
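With apsw the same try-to-create-a-table probe works as well (a sketch, assuming an FTS5-enabled apsw build; the table name is arbitrary):

import apsw

con = apsw.Connection(':memory:')
try:
    con.cursor().execute('CREATE VIRTUAL TABLE fts5test USING fts5(data)')
    print('FTS5 available')
except apsw.SQLError:
    print('FTS5 not available')
finally:
    con.close()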
EDIT:
Here is the code from the peewee source demonstrating how this is done:
@classmethod
def fts5_installed(cls):
    if sqlite3.sqlite_version_info[:3] < FTS5_MIN_VERSION:
        return False

    # Test in-memory DB to determine if the FTS5 extension is installed.
    tmp_db = sqlite3.connect(':memory:')
    try:
        tmp_db.execute('CREATE VIRTUAL TABLE fts5test USING fts5 (data);')
    except:
        try:
            sqlite3.enable_load_extension(True)
            sqlite3.load_extension('fts5')
        except:
            return False
        else:
            cls._meta.database.load_extension('fts5')
    finally:
        tmp_db.close()

    return True
The to_sql() function in pandas is now producing a SADeprecationWarning.
df.to_sql(name=tablename, con=c, if_exists='append', index=False )
[..]/lib/python3.8/site-packages/pandas/io/sql.py:1430: SADeprecationWarning: The Connection.run_callable() method is deprecated and will be removed in a future release. Use a context manager instead. (deprecated since: 1.4)
I was getting this even with pd.read_sql() when running SQL SELECT statements. Changing it to pd.read_sql_query(), which it wraps, got rid of the warning, so I suspect there is some linkage there.
So, the question is: how do I write a dataframe to SQL without it being deprecated in a future release? What does "use a context manager" mean, and how can I implement that?
Versions:
pandas: 1.1.5 | SQLAlchemy: 1.4.0 | pyodbc: 4.0.30 | Python: 3.8.0
Working with a mssql database.
OS: Linux Mint Xfce, 18.04. Using a python virtual environment.
If it matters, the connection is created like so:
conn_str = r'mssql+pyodbc:///?odbc_connect={}'.format(dbString).strip()
sqlEngine = sqlalchemy.create_engine(conn_str,echo=False, pool_recycle=3600)
c = sqlEngine.connect()
And after the db operation,
c.close()
Doing so keeps the engine sqlEngine "alive" between API calls and lets me use a pooled connection rather than having to connect anew.
Update: according to the pandas team, this will be fixed in pandas 1.2.4, which as of the time of writing has not been released yet.
Adding this as an answer since Google led here but the accepted answer is not applicable.
Our surrounding code that uses Pandas does use a context manager:
with get_engine(dbname).connect() as conn:
    df = pd.read_sql(stmt, conn, **kwargs)
    return df
In my case, this error is being thrown from within pandas itself, not in the surrounding code that uses pandas:
/Users/tommycarpenter/Development/python-indexapi/.tox/py37/lib/python3.7/site-packages/pandas/io/sql.py:1430: SADeprecationWarning: The Engine.run_callable() method is deprecated and will be removed in a future release. Use the Engine.connect() context manager instead. (deprecated since: 1.4)
The snippet from pandas itself is:
def has_table(self, name, schema=None):
    return self.connectable.run_callable(
        self.connectable.dialect.has_table, name, schema or self.meta.schema
    )
I raised an issue: https://github.com/pandas-dev/pandas/issues/40825
You could try...
connection_string = r'mssql+pyodbc:///?odbc_connect={}'.format(dbString).strip()
engine = sqlalchemy.create_engine(connection_string, echo=False, pool_recycle=3600)

with engine.connect() as connection:
    df.to_sql(name=tablename, con=connection, if_exists='append', index=False)
This approach uses a context manager. The engine's connect() context manager returns a connection and automatically invokes connection.close() on it when the block exits. Another useful thing to know is that a connection is a context manager as well and handles transactions for you: it begins and ends a transaction, and in case of an error it automatically issues a rollback.
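If you want the transaction handling without nesting two context managers, engine.begin() combines both: it checks out a connection, begins a transaction, commits on success and rolls back on error. A minimal sketch, reusing the names from the snippet above:

with engine.begin() as connection:
    df.to_sql(name=tablename, con=connection, if_exists='append', index=False)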
I'm trying to execute many (~1000) MERGE INTO statements against an Oracle 11.2.0.4.0 (64-bit) database using Python 3.9.2 (64-bit) and pyodbc 4.0.30 (64-bit). However, all the statements raise an exception:
HY000: The driver did not supply an error
I've tried everything I can think of to solve this problem, but no luck. I tried changing the code, the encodings/decodings, and the ODBC driver from Oracle home 12.1 (64-bit) to Oracle home 19.1 (64-bit). I also tried pyodbc 4.0.22, in which case the error just changed to:
<class 'pyodbc.ProgrammingError'> returned a result with an error set
This is no more helpful than the first error. I assume the issue cannot be the MERGE INTO statement itself, because when I run the statements directly in the database shell, they complete without issue.
Below is my code. I should also mention that the commands and parameters are read from stdin before being executed, and the database uses the UTF-8 character set.
cmds = sys.stdin.readlines()
comms = json.loads(cmds[0])
conn = pyodbc.connect(connstring)
conn.setencoding(encoding="utf-8")
cursor = conn.cursor()
cursor.execute("""ALTER SESSION SET NLS_DATE_FORMAT='YYYY-MM-DD"T"HH24:MI:SS.....'""")

for comm in comms:
    params = [None if str(x) == 'None' or str(x) == 'NULL' else x for x in comm["params"]]
    try:
        cursor.execute(comm["sql"], params)
    except Exception as e:
        print(e)

conn.commit()
conn.close()
Edit: another thing worth mentioning: this issue began after updating from Python 2.7 to 3.9.2. The code itself didn't require any changes at all in this particular location, though.
I've had my share of HY000 errors in the past. It almost always came down to a syntax error in the SQL query. Double-check all your double and single quotes, and make sure the query works when run independently in an SQL session against your database.
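One way to rule out quoting problems entirely is to bind every value as a parameter rather than interpolating it into the SQL string. A hedged sketch (the table and column names are made up for illustration; pyodbc uses ? placeholders):

sql = ("MERGE INTO target t USING dual ON (t.id = ?) "
       "WHEN MATCHED THEN UPDATE SET t.val = ? "
       "WHEN NOT MATCHED THEN INSERT (id, val) VALUES (?, ?)")
cursor.execute(sql, (1, 'x', 1, 'x'))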
I have created a stored procedure, usuarios_get. I tested it in the Oracle console and it works fine. This is the code of the stored procedure:
create or replace PROCEDURE USUARIOS_GET(
    text_search in VARCHAR2,
    usuarios_list out sys_refcursor
)
AS
--Variables
BEGIN
    open usuarios_list for select * from USUARIO;
END USUARIOS_GET;
The Python code is this:
with connection.cursor() as cursor:
    listado = cursor.var(cx_Oracle.CURSOR)
    l_query = cursor.callproc('usuarios_get', ('', listado))  # this statement raises the error
    l_results = l_query[1]
The error is the following:
NotSupportedError: Variable_TypeByValue(): unhandled data type VariableWrapper
I've also tried another stored procedure with an out parameter of NUMBER type, modifying the Python code to listado = cursor.var(cx_Oracle.NUMBER), and I get the same error:
NotSupportedError: Variable_TypeByValue(): unhandled data type VariableWrapper
I work with
python 2.7.12
Django 1.10.4
cx_Oracle 5.2.1
Oracle 12c
Can anyone help me with this?
Thanks
The problem is that Django's wrapper is incomplete. As such, you need to make sure you have a "raw" cx_Oracle cursor instead. You can do that using the following code:
django_cursor = connection.cursor()             # Django's wrapped cursor
raw_cursor = django_cursor.connection.cursor()  # underlying cx_Oracle cursor
out_arg = raw_cursor.var(int)                   # or raw_cursor.var(float)
raw_cursor.callproc("<procedure_name>", (in_arg, out_arg))
out_val = out_arg.getvalue()
Then use the "raw" cursor to create the variable and call the stored procedure.
Looking at the definition of the variable wrapper in Django, it also looks like you can access the "var" property on the wrapper and pass that directly to the stored procedure instead, as sketched below -- but I don't know whether that is a better long-term option or not!
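A hedged sketch of that alternative (untested; it relies on Django's VariableWrapper exposing the underlying cx_Oracle variable through its var attribute):

with connection.cursor() as cursor:
    listado = cursor.var(cx_Oracle.CURSOR)              # Django hands back a VariableWrapper
    cursor.callproc('usuarios_get', ('', listado.var))  # pass the raw cx_Oracle variable
    l_results = listado.var.getvalue()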
Anthony's solution works for me with Django 2.2 and Oracle 12c. Thanks! Couldn't find this solution anywhere else on the web.
import cx_Oracle

dcursor = connection.cursor()
cursor = dcursor.connection.cursor()
out_arg = cursor.var(cx_Oracle.NUMBER)
ret = cursor.callproc("<procedure_name>", (in_arg, out_arg))
I have some code written with pyodbc on Windows x64 using Python 2.6, and it runs without problems.
When I switch the same code to MySQLdb, I get errors.
For example: 'long' object is not iterable.
What's the difference between pyodbc and MySQLdb?
EDIT
import csv, pyodbc, os
import numpy as np
import MySQLdb

# pyodbc connection:
cxn = pyodbc.connect('DSN=MySQL;PWD=me')

# MySQLdb connection:
cxn = MySQLdb.connect(host="localhost", user="root", passwd="me")

csr = cxn.cursor()
try:
    csr.execute('Call spex.updtop')
    cxn.commit()
except:
    pass
csr.close()
cxn.close()
del csr, cxn
Without seeing code, it's not obvious why you're getting errors. You can connect to MySQL databases using either one, and they both implement version 2.x of the Python DB API, though their underlying workings are totally different, as Ignacio Vazquez-Abrams commented.
Some things to consider:
Are you using extensions to the Python DB API that might not be implemented in both? (One concrete difference is sketched after this list.)
Are the two libraries translating MySQL datatypes to Python datatypes the same way?
Is there example code you could post?
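For instance, the two drivers use different parameter placeholder styles, so a parameterised query written for one will fail with the other. A small illustration (both attributes are standard DB API module globals):

import pyodbc
import MySQLdb

print pyodbc.paramstyle   # 'qmark'  -> "SELECT * FROM t WHERE id = ?"
print MySQLdb.paramstyle  # 'format' -> "SELECT * FROM t WHERE id = %s"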
The new version of SQLite has the ability to enforce Foreign Key constraints, but for the sake of backwards-compatibility, you have to turn it on for each database connection separately!
sqlite> PRAGMA foreign_keys = ON;
I am using SQLAlchemy -- how can I make sure this always gets turned on?
What I have tried is this:
engine = sqlalchemy.create_engine('sqlite:///:memory:', echo=True)
engine.execute('pragma foreign_keys=on')
...but it is not working! What am I missing?
EDIT:
I think my real problem is that I have more than one version of SQLite installed, and Python is not using the latest one!
>>> import sqlite3
>>> print sqlite3.sqlite_version
3.3.4
But I just downloaded 3.6.23 and put the exe in my project directory!
How can I figure out which .exe it's using, and change it?
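A quick probe (a sketch): note that sqlite3.exe is the standalone command-line shell, and Python never loads it. The SQLite library is linked into the _sqlite3 extension module, so dropping an .exe into the project directory has no effect.

import sqlite3
import _sqlite3

print sqlite3.sqlite_version  # version of the SQLite library Python linked against
print _sqlite3.__file__       # path of the extension module actually loaded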
For recent versions (SQLAlchemy ~0.7) the SQLAlchemy homepage says:
PoolListener is deprecated. Please refer to PoolEvents.
Then the example by CarlS becomes:
from sqlalchemy import event

engine = create_engine(database_url)

def _fk_pragma_on_connect(dbapi_con, con_record):
    dbapi_con.execute('pragma foreign_keys=ON')

event.listen(engine, 'connect', _fk_pragma_on_connect)
Building on the answers from conny and shadowmatter, here's code that will check if you are using SQLite3 before emitting the PRAGMA statement:
from sqlalchemy import event
from sqlalchemy.engine import Engine
from sqlite3 import Connection as SQLite3Connection
@event.listens_for(Engine, "connect")
def _set_sqlite_pragma(dbapi_connection, connection_record):
    if isinstance(dbapi_connection, SQLite3Connection):
        cursor = dbapi_connection.cursor()
        cursor.execute("PRAGMA foreign_keys=ON;")
        cursor.close()
I now have this working:
Download the latest sqlite and pysqlite2 builds as described above, and make sure the correct versions are being used at runtime by Python:
import sqlite3
import pysqlite2
print sqlite3.sqlite_version # should be 3.6.23.1
print pysqlite2.__path__ # eg C:\\Python26\\lib\\site-packages\\pysqlite2
Next add a PoolListener:
from sqlalchemy.interfaces import PoolListener

class ForeignKeysListener(PoolListener):
    def connect(self, dbapi_con, con_record):
        db_cursor = dbapi_con.execute('pragma foreign_keys=ON')

engine = create_engine(database_url, listeners=[ForeignKeysListener()])
Then be careful how you test whether foreign keys are working; I had some confusion here. When using the sqlalchemy ORM to add() things, my import code was implicitly handling the relation hookups, so it could never fail. Adding nullable=False to some ForeignKey() statements helped me here.
The way I test that sqlalchemy's sqlite foreign-key support is enabled is to do a manual insert from a declarative ORM class:
# example
ins = Coverage.__table__.insert().values(id=99,
                                         description='Wrong',
                                         area=42.0,
                                         wall_id=99,  # invalid fkey id
                                         type_id=99)  # invalid fkey id
session.execute(ins)
Here wall_id and type_id are both ForeignKey()s, and sqlite now correctly throws an exception when trying to hook up invalid fkeys. So it works! If you remove the listener, sqlalchemy will happily add invalid entries.
I believe the main problem may be multiple sqlite3.dll's (or .so) lying around.
As a simpler approach, if your session creation is centralised behind a Python helper function (rather than exposing the SQLA engine directly), you can just issue session.execute('pragma foreign_keys=on') before returning the freshly created session; see the sketch below.
You only need the pool-listener approach if arbitrary parts of your application may create SQLA sessions against the database.
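A minimal sketch of such a helper (the names are illustrative, not from the original answer):

from sqlalchemy.orm import sessionmaker

Session = sessionmaker(bind=engine)

def make_session():
    session = Session()
    session.execute('pragma foreign_keys=on')  # applies to this session's connection
    return session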
From the SQLite dialect page:
SQLite supports FOREIGN KEY syntax when emitting CREATE statements for tables, however by default these constraints have no effect on the operation of the table.
Constraint checking on SQLite has three prerequisites:
At least version 3.6.19 of SQLite must be in use
The SQLite library must be compiled without the SQLITE_OMIT_FOREIGN_KEY or SQLITE_OMIT_TRIGGER symbols enabled.
The PRAGMA foreign_keys = ON statement must be emitted on all connections before use.
SQLAlchemy allows for the PRAGMA statement to be emitted automatically for new connections through the usage of events:
from sqlalchemy.engine import Engine
from sqlalchemy import event

@event.listens_for(Engine, "connect")
def set_sqlite_pragma(dbapi_connection, connection_record):
    cursor = dbapi_connection.cursor()
    cursor.execute("PRAGMA foreign_keys=ON")
    cursor.close()
One-liner version of conny's answer:
from sqlalchemy import event
event.listen(engine, 'connect', lambda c, _: c.execute('pragma foreign_keys=on'))
I had the same problem before (scripts with foreign key constraints were going through, but the actual constraints were not enforced by the sqlite engine); I solved it by:
downloading, building and installing the latest version of sqlite from here: sqlite-sqlite-amalgamation; before this I had sqlite 3.6.16 on my ubuntu machine, which didn't support foreign keys yet; it needs to be 3.6.19 or higher for them to work.
installing the latest version of pysqlite from here: pysqlite-2.6.0
After that I started getting exceptions whenever a foreign key constraint failed.
Hope this helps, regards.
If you need to execute something for setup on every connection, use a PoolListener.
Enforce Foreign Key constraints for sqlite when using Flask + SQLAlchemy.
import logging

from flask import Flask
from flask_sqlalchemy import SQLAlchemy

logger = logging.getLogger(__name__)
db = SQLAlchemy()

def create_app(config: str = None):
    app = Flask(__name__, instance_relative_config=True)
    if config is None:
        app.config.from_pyfile('dev.py')
    else:
        logger.debug('Using %s as configuration', config)
        app.config.from_pyfile(config)
    db.init_app(app)

    # Ensure FOREIGN KEY for sqlite3
    if 'sqlite' in app.config['SQLALCHEMY_DATABASE_URI']:
        def _fk_pragma_on_connect(dbapi_con, con_record):  # noqa
            dbapi_con.execute('pragma foreign_keys=ON')

        with app.app_context():
            from sqlalchemy import event
            event.listen(db.engine, 'connect', _fk_pragma_on_connect)

    return app
Source:
https://gist.github.com/asyd/a7aadcf07a66035ac15d284aef10d458