sqlite3 unknown database schema when using a checkpoint? - python

I am getting:
sqlite3.OperationalError: unknown database schema
import sqlite3

conn = sqlite3.connect('tick.db', detect_types=sqlite3.PARSE_DECLTYPES, timeout=20, isolation_level=None)
# conn1 = sqlite3.connect('nifty_tick.db', detect_types=sqlite3.PARSE_DECLTYPES, timeout=20, isolation_level=None)
c = conn.cursor()
# c1 = conn1.cursor()
c.execute('PRAGMA journal_mode=wal')

def tick_entry1(inst, timestamp, ltp, bid, ask):
    if inst == 12335874:
        c.execute('INSERT INTO niftyfut (timestamp, close, bid, ask) VALUES (?,?,?,?)',
                  (timestamp, ltp, bid, ask))

def on_ticks(ws, ticks):
    global c, conn
    for t in ticks:
        if t['instrument_token'] == 12335874:
            timestamp = t['timestamp']
            ltp = t['last_price']
            inst = t['instrument_token']
            try:
                # note: bid and ask are never extracted or passed, so this call
                # raises a TypeError that the bare except below silently swallows
                tick_entry1(inst, timestamp, ltp)
            except:
                # print('problem with db')
                pass
    c.execute('PRAGMA schema.wal_checkpoint(FULL);')  # <- this line raises the OperationalError
I have tried:
c.execute('PRAGMA schema.wal_checkpoint(FULL);')
and
c.execute('PRAGMA schema.wal_checkpoint(FULL)')
Edit
I just tried:
c.execute('PRAGMA wal_checkpoint(FULL)')
It seems to work. Now I am wondering if the following should be executed at the start:
c.execute("PRAGMA wal_autocheckpoint = 0")

The error is saying that no attached database has the schema name schema; in the PRAGMA syntax, schema is a placeholder, not a literal keyword.
In the documentation, schema is italicised, indicating that it is symbolic rather than an actual value: it is to be replaced with an appropriate name, and that name is used to distinguish between attached databases.
e.g. if you used
ATTACH DATABASE the_database_path AS database2 /*<<<<<<<<<< THE SCHEMA is database2 */;
then you could use
PRAGMA database2.wal_checkpoint(FULL);
to checkpoint the attached database.
The initial database's schema is main, but the prefix isn't required, as main is the default when no schema is supplied.
Hence why c.execute('PRAGMA wal_checkpoint(FULL)') worked (as would c.execute('PRAGMA main.wal_checkpoint(FULL)')); i.e. they are effectively the same.
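As an illustration, here is a minimal sketch of both forms (tick.db is the file from the question; other.db is a hypothetical second database):

import sqlite3

conn = sqlite3.connect('tick.db', isolation_level=None)
conn.execute('PRAGMA journal_mode=wal')

# 'main' is the default schema, so these two are equivalent:
conn.execute('PRAGMA wal_checkpoint(FULL)')
conn.execute('PRAGMA main.wal_checkpoint(FULL)')

# An attached database is checkpointed via the schema name given in ATTACH:
conn.execute("ATTACH DATABASE 'other.db' AS database2")
conn.execute('PRAGMA database2.wal_checkpoint(FULL)')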
If you use c.execute("PRAGMA wal_autocheckpoint = 0") then you will have to manage all the checkpointing yourself, as auto-checkpointing will be turned off (note that closing all connections to the database also triggers a checkpoint).
You may wish to consider:
Disabling the automatic checkpoint mechanism. In its default
configuration, SQLite will checkpoint the WAL file at the conclusion
of any transaction when the WAL file is more than 1000 pages long.
However, compile-time and run-time options exist that can disable or
defer this automatic checkpoint. If an application disables the
automatic checkpoint, then there is nothing to prevent the WAL file
from growing excessively.
(Source: Write-Ahead Logging, section 6: Avoiding Excessively Large WAL Files)
I'd suggest not using PRAGMA wal_autocheckpoint = 0. Auto-checkpointing does not hinder forced checkpointing: at worst, if a forced checkpoint happens right after an auto-checkpoint (so all pages have already been written) and nothing has been updated since, it will simply do nothing (gracefully); otherwise more pages get written to the database file.
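If you do disable it anyway, a minimal sketch of managing checkpoints yourself might look like this (file name taken from the question):

import sqlite3

conn = sqlite3.connect('tick.db', isolation_level=None)
conn.execute('PRAGMA journal_mode=wal')
conn.execute('PRAGMA wal_autocheckpoint = 0')  # checkpointing is now entirely manual

# ... inserts happen here ...

# wal_checkpoint returns a row (busy, log_pages, checkpointed_pages);
# busy == 0 means the checkpoint ran to completion.
busy, log_pages, ckpt_pages = conn.execute('PRAGMA wal_checkpoint(FULL)').fetchone()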

Related

Python SQLite - fuse_hidden not deleted

I am trying to set up a Python script to get some data and store it in an SQLite database. However, when I run the script, a .fuse_hidden file is created.
On Windows no .fuse_hidden file is observed, but on Ubuntu one is generated on each call. The .fuse_hidden file seems to contain some form of SQL query with inputs and tables.
I can delete the files without error during runtime, but they are not deleted automatically. I make sure to close my connection to the db when I am finished with the query.
lsof gives no information.
I am out of ideas on what to try next to get the files removed automatically. Any suggestions?
Testing
To confirm that there is nothing wrong with the code, I made a simple script
(Assume there is an empty error.db)
import sqlite3

conn = sqlite3.connect("error.db")
cur = conn.cursor()

create_query = """
CREATE TABLE Errors (
    name TEXT
);"""

try:
    cur.execute(create_query)
except sqlite3.OperationalError:
    pass  # table already exists

cur.execute("INSERT INTO Errors (name) VALUES(?)", ["Test2"])
conn.commit()
cur.close()
conn.close()

Using SQLite with WAL

I've been following the SQLite tutorial in the Python documentation, and I managed to create an Employee table and write to it.
import sqlite3

conn = sqlite3.connect('employee.db')
c = conn.cursor()

firstname = "Ann Marie"
lastname = "Smith"
email = "ams@cia.com"

employee = (email, firstname, lastname)
c.execute('INSERT INTO Employee VALUES (?,?,?)', employee)
conn.commit()

# Print the table contents
for row in c.execute("select * from Employee"):
    print(row)

conn.close()
I've been reading about Write-Ahead Logging, but I can't find a tutorial that explains how to enable it. Can someone provide an example?
I notice Firefox, which uses SQLite, locks its file in such a way that if you attempt to delete the SQLite file while Firefox is running, the deletion fails with "file is open or being used" (or something similar). How do I achieve this? I'm running Python under Windows 10.
conn = sqlite3.connect('app.db', isolation_level=None)
Set journal mode to WAL:
conn.execute('pragma journal_mode=wal')
Or, the other way round (this shows how to turn WAL mode off):
cur = conn.cursor()
cur.execute('pragma journal_mode=DELETE')
The PRAGMA journal_mode documentation says:
If the journal mode could not be changed, the original journal mode is returned. […]
Note also that the journal_mode cannot be changed while a transaction is active.
So you have to ensure that the database library does not try to be clever and automatically start a transaction.
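A minimal sketch of that check (app.db as above): the PRAGMA returns the journal mode actually in effect, so you can assert that the switch took.

import sqlite3

# isolation_level=None puts the connection in autocommit mode, so no
# implicit transaction is open when the PRAGMA runs.
conn = sqlite3.connect('app.db', isolation_level=None)

# The PRAGMA returns the journal mode now in effect; verify it changed.
mode = conn.execute('pragma journal_mode=wal').fetchone()[0]
assert mode == 'wal', 'journal mode is still %s' % mode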

PyHDB gets "Could not find table/view" from SAP HANA Express

I have the following Python code [pyhdb] to connect to SAP HANA Express.
Is there an error in my code, or does it have something to do with the SYSTEM user?
The error message is:
Could not find table/view TABLE in schema APP: line 1 col 19 (at pos 18)
import os
import random
import platform
from constant import *
import pyhdb

def is_rpi():
    return 'arm' in platform.uname()[4]

if is_rpi():
    import Adafruit_DHT

def read_dht():
    if is_rpi():
        sensor = Adafruit_DHT.DHT22
        humidity, temperature = Adafruit_DHT.read_retry(sensor, DHT_PIN)
        if humidity is not None and temperature is not None:
            print('Temp={0:0.1f}*C Humidity={1:0.1f}%'.format(temperature, humidity))
            return int(humidity), int(temperature)
        else:
            return None, None
    else:
        return random.randint(20, 30), random.randint(40, 70)

if __name__ == '__main__':
    connection = pyhdb.connect(host=SAP_HOST, port=39015, user=SAP_USER, password=SAP_PWD)
    cursor = connection.cursor()
    temp, humi = read_dht()
    query = "INSERT INTO \"{}\".\"{}\" VALUES(\'{}\', {}, {}, \'{}\')".format(
        SAP_SCHEMA, SAP_TABLE, DEVICE_ID, temp, humi, ROOM_NAME)
    print("Executing query:", query)  # was: print("Executing query: "), query
    cursor.execute(query)
    print("New row count:", cursor.rowcount)
    connection.close()
And here is the constant code:
DHT_PIN = 4
DEVICE_ID = '0ada9de4-bc4f-4e53-990a-cbcfccaed4c4'
ROOM_NAME = 'room 101'
SAP_HOST = 'hxehost'
SAP_USER = 'SYSTEM'
SAP_PWD = 'XXXXXXXXXXXX'
SAP_SCHEMA = 'APP'
SAP_TABLE = 'TABLE'
The error message
Could not find table/view TABLE in schema APP
points to the fact that the table does not exist. To check whether the table is known to the system, you could, for example, run the SQL statement
SELECT * FROM TABLES WHERE SCHEMA_NAME='APP' AND TABLE_NAME='TABLE';
which would lead to an empty result set for a non-existing table.
In case of an authorization problem you could rather expect an error like
insufficient privilege: Not authorized
Regarding the question about checking the authorization, you might want to take a look at the system views EFFECTIVE_PRIVILEGES and EFFECTIVE_ROLES, or GRANTED_PRIVILEGES and GRANTED_ROLES (refer to the SAP HANA Security Guide). Generally, a privilege can be granted either directly to a user or via a role. Roles can contain other roles, which can make tracing an authorization a bit more complex.
However, in your specific case, you could probably try the rather simple SQL query:
SELECT * FROM "PUBLIC"."EFFECTIVE_PRIVILEGES"
WHERE USER_NAME='SYSTEM' AND SCHEMA_NAME='APP' AND PRIVILEGE='INSERT';
(Depending on your scenario, you might also want to check for the UPDATE privilege.)
Please allow me to add the remark that the INSERT statement in your example probably needs to be committed explicitly to take effect, as the connection defaults to autocommit=False, if I remember correctly.
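A minimal sketch of that, reusing the constants from the question (that autocommit=False is the pyhdb default is my recollection, not verified; the 25 and 50 stand in for real sensor readings):

import pyhdb
from constant import SAP_HOST, SAP_USER, SAP_PWD, SAP_SCHEMA, SAP_TABLE, DEVICE_ID, ROOM_NAME

connection = pyhdb.connect(host=SAP_HOST, port=39015, user=SAP_USER, password=SAP_PWD)
cursor = connection.cursor()
query = 'INSERT INTO "{}"."{}" VALUES(\'{}\', {}, {}, \'{}\')'.format(
    SAP_SCHEMA, SAP_TABLE, DEVICE_ID, 25, 50, ROOM_NAME)
cursor.execute(query)
connection.commit()  # without this, the INSERT is rolled back when the connection closes
connection.close()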
The user SYSTEM did not have enough privileges to insert into the table. Solved, thanks to everyone.

DB-API with Python

I'm trying to insert some data into a local MySQL database using MySQL Connector/Python -- apparently the only way to integrate MySQL into Python 3 without breaking out the C compiler.
I tried all the examples that come with the package; the ones that insert data run just fine. Unfortunately, my own attempts to write anything into my tables fail.
Here is my code:
import mysql.connector

def main(config):
    db = mysql.connector.Connect(**config)
    cursor = db.cursor()

    stmt_drop = "DROP TABLE IF EXISTS urls"
    cursor.execute(stmt_drop)

    stmt_create = """
    CREATE TABLE urls (
        id TINYINT UNSIGNED NOT NULL AUTO_INCREMENT,
        str VARCHAR(50) DEFAULT '' NOT NULL,
        PRIMARY KEY (id)
    ) CHARACTER SET 'utf8'"""
    cursor.execute(stmt_create)

    cursor.execute("""
        INSERT INTO urls (str)
        VALUES
            ('reptile'),
            ('amphibian'),
            ('fish'),
            ('mammal')
    """)
    print("Number of rows inserted: %d" % cursor.rowcount)
    db.close()

if __name__ == '__main__':
    import config
    config = config.Config.dbinfo().copy()
    main(config)
OUTPUT:
Number of rows inserted: 4
I modeled my code strictly on what was given in the examples and can't, for the life of me, figure out what the problem is. What am I doing wrong here?
Fetching table data with the script works just fine, so I am not worried about the configuration files. I'm root on the database, so rights shouldn't be a problem either.
You need to add a db.commit() to commit your changes before you call db.close()!
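A sketch of the fix in place, trimmed to the relevant part of main() from the question:

import mysql.connector

def main(config):
    db = mysql.connector.Connect(**config)
    cursor = db.cursor()
    cursor.execute("INSERT INTO urls (str) VALUES ('reptile')")
    print("Number of rows inserted: %d" % cursor.rowcount)
    db.commit()  # Connector/Python turns autocommit off by default
    db.close()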

Using pyodbc on Ubuntu to insert an image field on SQL Server

I am using Ubuntu 9.04
I have installed the following package versions:
unixodbc and unixodbc-dev: 2.2.11-16build3
tdsodbc: 0.82-4
libsybdb5: 0.82-4
freetds-common and freetds-dev: 0.82-4
python2.6-dev
I have configured /etc/unixodbc.ini like this:
[FreeTDS]
Description = TDS driver (Sybase/MS SQL)
Driver = /usr/lib/odbc/libtdsodbc.so
Setup = /usr/lib/odbc/libtdsS.so
CPTimeout =
CPReuse =
UsageCount = 2
I have configured /etc/freetds/freetds.conf like this:
[global]
tds version = 8.0
client charset = UTF-8
text size = 4294967295
I have grabbed pyodbc revision 31e2fae4adbf1b2af1726e5668a3414cf46b454f from http://github.com/mkleehammer/pyodbc and installed it using "python setup.py install"
I have a Windows machine with Microsoft SQL Server 2000 installed on my local network, up and listening on the local IP address 10.32.42.69. I have an empty database created with the name "Common". I have the user "sa" with password "secret" with full privileges.
I am using the following Python code to set up the connection:
import pyodbc

odbcstring = "SERVER=10.32.42.69;UID=sa;PWD=secret;DATABASE=Common;DRIVER=FreeTDS"
con = pyodbc.connect(odbcstring)
cur = con.cursor()

cur.execute("""
    IF EXISTS(SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES
              WHERE TABLE_NAME = 'testing')
        DROP TABLE testing
""")
cur.execute('''
    CREATE TABLE testing (
        id INTEGER NOT NULL IDENTITY(1,1),
        myimage IMAGE NULL,
        PRIMARY KEY (id)
    )
''')
con.commit()
Everything WORKS up to this point. I have used SQL Server's Enterprise Manager on the server and the new table is there.
Now I want to insert some data on the table.
cur = con.cursor()
# using web data for exact reproduction of the error by all.
# I'm actually reading a local file in my real code.
url = 'http://www.forestwander.com/wp-content/original/2009_02/west-virginia-mountains.jpg'
data = urllib2.urlopen(url).read()
sql = "INSERT INTO testing (myimage) VALUES (?)"
Here in my original question I was having trouble using cur.execute(sql, (data,)), but I have since edited the question because, following Vinay Sajip's answer below (THANKS), I changed it to:
cur.execute(sql, (pyodbc.Binary(data),))
con.commit()
And insertion is working perfectly. I can confirm the size of the inserted data using the following test code:
cur.execute('SELECT DATALENGTH(myimage) FROM testing WHERE id = 1')
data_inside = cur.fetchone()[0]
assert data_inside == len(data)
Which passes perfectly!!!
Now the problem is on retrieval of the data back.
I am trying the common approach:
cur.execute('SELECT myimage FROM testing WHERE id = 1')
result = cur.fetchone()
returned_data = str(result[0]) # transforming buffer object
print 'Original: %d; Returned: %d' % (len(data), len(returned_data))
assert data == returned_data
However that fails!!
Original: 4744611; Returned: 4096
Traceback (most recent call last):
File "/home/nosklo/devel/teste_mssql_pyodbc_unicode.py", line 53, in <module>
assert data == returned_data
AssertionError
I've put all the code above in a single file here, for easy testing of anyone that wants to help.
Now for the question:
I want python code to insert an image file into mssql. I want to query the image back and show it to the user.
I don't care about the column type in MSSQL. I am using the "IMAGE" column type in the example, but any binary/blob type would do, as long as I get back the binary data of the file I inserted, unspoiled. Vinay Sajip said below that this is the preferred data type for this in SQL Server 2000.
The data is now being inserted without errors; however, when I retrieve the data, only 4K is returned (the data is truncated at 4096 bytes).
How can I make that work?
EDITS: Vinay Sajip's answer below gave me a hint to use pyodbc.Binary on the field. I have updated the question accordingly. Thanks Vinay Sajip!
Alex Martelli's comment gave me the idea of using the DATALENGTH MS SQL function to test if the data is fully loaded on the column. Thanks Alex Martelli !
Huh, just after offering the bounty, I've found out the solution.
You have to use SET TEXTSIZE 2147483647 in the query, in addition to the text size configuration option in /etc/freetds/freetds.conf.
I have used
cur.execute('SET TEXTSIZE 2147483647 SELECT myimage FROM testing WHERE id = 1')
And everything worked fine.
Strangely, the FreeTDS documentation says this about the text size configuration option:
default value of TEXTSIZE, in bytes. For text and image datatypes, sets the maximum width of any returned column. Cf. set TEXTSIZE in the T-SQL documentation for your server.
The documentation also says that the maximum value (and the default) is 4,294,967,295. However, when trying to use that value in the query I get an error; the maximum number I could use in the query is 2,147,483,647 (half of that).
From that explanation I thought that only setting this configuration option would be enough. It turns out that I was wrong, setting TEXTSIZE in the query fixed the issue.
Below is the complete working code:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import pyodbc
import urllib2

odbcstring = "SERVER=10.32.42.69;UID=sa;PWD=secret;DATABASE=Common;DRIVER=FreeTDS"
con = pyodbc.connect(odbcstring)
cur = con.cursor()

cur.execute("""
    IF EXISTS(SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES
              WHERE TABLE_NAME = 'testing')
        DROP TABLE testing
""")
cur.execute('''
    CREATE TABLE testing (
        id INTEGER NOT NULL IDENTITY(1,1),
        myimage IMAGE NULL,
        PRIMARY KEY (id)
    )
''')
con.commit()

cur = con.cursor()
url = 'http://www.forestwander.com/wp-content/original/2009_02/west-virginia-mountains.jpg'
data = urllib2.urlopen(url).read()
sql = "INSERT INTO testing (myimage) VALUES (?)"
cur.execute(sql, (pyodbc.Binary(data),))
con.commit()

cur.execute('SELECT DATALENGTH(myimage) FROM testing WHERE id = 1')
data_inside = cur.fetchone()[0]
assert data_inside == len(data)

cur.execute('SET TEXTSIZE 2147483647 SELECT myimage FROM testing WHERE id = 1')
result = cur.fetchone()
returned_data = str(result[0])
print 'Original: %d; Returned: %d' % (len(data), len(returned_data))
assert data == returned_data
I think you should be using a pyodbc.Binary instance to wrap the data:
cur.execute('INSERT INTO testing (myimage) VALUES (?)', (pyodbc.Binary(data),))
Retrieving should be
cur.execute('SELECT myimage FROM testing')
print "image bytes: %r" % str(cur.fetchall()[0][0])
UPDATE: The problem is in insertion. Change your insertion SQL to the following:
"""DECLARE #txtptr varbinary(16)
INSERT INTO testing (myimage) VALUES ('')
SELECT #txtptr = TEXTPTR(myimage) FROM testing
WRITETEXT testing.myimage #txtptr ?
"""
I've also updated the mistake I made in using the value attribute in the retrieval code.
With this change, I'm able to insert and retrieve a 320K JPEG image into the database (retrieved data is identical to inserted data).
N.B. The image data type is deprecated, replaced by varbinary(max) in later versions of SQL Server. The same insertion/retrieval logic should, however, apply to the newer column type.
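For instance, a sketch of the same table using the newer type (assuming SQL Server 2005 or later, and reusing con, cur and data from the question; the pyodbc calls are unchanged):

cur.execute('''
    CREATE TABLE testing (
        id INTEGER NOT NULL IDENTITY(1,1),
        myimage VARBINARY(MAX) NULL,
        PRIMARY KEY (id)
    )
''')
cur.execute('INSERT INTO testing (myimage) VALUES (?)', (pyodbc.Binary(data),))
con.commit()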
I had a similar 4096-byte truncation issue on TEXT fields, which SET TEXTSIZE 2147483647 fixed, but setting the TDS version via the environment also fixed it for me:
import os
os.environ['TDSVER'] = '8.0'
