sqlite3.DatabaseError: file is encrypted or is not a database - python

I have created a sqlite db and uploaded it to a hosting.
Then I retrieve it from my script and try to insert some data, but execute() raises a
DatabaseError (file is encrypted or is not a database).
import sqlite3
import urllib

urllib.urlretrieve('http://%s/%s' % (HOST, NAME_DB), NAME_DB)
con = sqlite3.connect(NAME_DB)
cur = con.cursor()
cur.execute('insert into log(date, count, average) values(date("now"), ?, ?)', (1, 1.2))
con.commit()
con.close()
Traceback (most recent call last):
File "mylog.py", line 17, in <module>
cur.execute('insert into log(date, count, average) values(date("now"), ?, ?)', (1, 1.2))
sqlite3.DatabaseError: file is encrypted or is not a database
This error doesn't happen if I use the sqlite CLI to insert data. Could you please help me?
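
Before suspecting the Python side, it is worth checking that the downloaded file really is a SQLite 3 database; a minimal diagnostic sketch, reusing NAME_DB from the snippet above:
with open(NAME_DB, 'rb') as f:
    header = f.read(16)
print(repr(header))
# A SQLite 3 file starts with the bytes 'SQLite format 3\x00'; anything else
# (an HTML error page from the host, or an old SQLite 2 file) would explain
# the "file is encrypted or is not a database" error.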

Version mismatch between the sqlite CLI and the Python sqlite API? I created my db again from the script instead of from the CLI. Now insert and select work from the script, but not from the CLI. $ sqlite -version returns 2.8.17, while the Python version is 2.7.3.
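
In other words, the sqlite binary at version 2.8.17 is the old SQLite 2 command-line tool, and its file format is not readable by the SQLite 3 library that Python's sqlite3 module wraps (and vice versa). A quick way to confirm which library versions are in play on the Python side:
import sqlite3
print(sqlite3.sqlite_version)  # version of the SQLite C library Python is linked against (a 3.x release)
print(sqlite3.version)         # version of the sqlite3 DB-API module itself
Creating and editing the file with the sqlite3 command-line tool (rather than the SQLite 2 sqlite binary) should let both the CLI and the script work on the same database.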

I had the same problem with a database created by C++ code using the SQLite3 library, which was later accessed by the Python 2.7 version of sqlite3. I was unable to query the database from Python scripts. To solve the problem on my computer, I replaced:
C:\Python27\DLLs\sqlite3.dll
with the version found in the C++ SQLite library directory.

Okay, I faced the same problem. As Visionnaire said, just replace the sqlite3.dll in pythonXX\DLLs with the sqlite3.dll from the CLI SQLite folder (the one that originally contains sqlite3.exe), and the problem was solved.

I had the same problem, and I thought something was wrong with the sqlite3 db I was working on. But it turned out that I had accidentally overwritten another sqlite3 db file (one that was present in my project) as ASCII. I was not aware that the error was coming from the other db.
Pay attention :)

Related

Is there a function under pyodbc that can replace cursor.copy_expert

I have code that opens a CSV file in order to store its contents in a database; I use SQL Server.
After the file has been opened and processed in memory, we want to store it in the database.
Under PostgreSQL we use the following code, but I want an equivalent for SQL Server:
# upload to db
SQL_STATEMENT = """
COPY %s FROM STDIN WITH
CSV
HEADER
DELIMITER AS ','
"""
cursor.copy_expert(sql=SQL_STATEMENT % tbl_name, file=my_file)
I have no idea how to adapt this block for SQL Server without changing the rest of the code.
psycopg2 is a Postgres-specific DB-API that maintains extended methods like copy_expert, copy_from, and copy_to, which are only supported in Postgres; pyodbc, by contrast, is a generalized DB-API that interfaces with any ODBC driver, including SQL Server, Teradata, MS Access, and even PostgreSQL ODBC drivers. Therefore, it is unlikely that an SQL Server-specific convenience command exists to replace copy_expert.
However, consider submitting an SQL Server-specific SQL command such as BULK INSERT, which can read from flat files, and then running cursor.execute. The example below uses f-strings (introduced in Python 3.6) for string formatting:
# upload to db
SQL_STATEMENT = (
f"BULK INSERT {tbl_name} "
f"FROM '{my_file}' "
"WITH (FORMAT='CSV');"
)
cur.execute(SQL_STATEMENT)
conn.commit()
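
One caveat: BULK INSERT reads the file from the database server's file system, and the FORMAT='CSV' option needs a reasonably recent SQL Server (2017 or later, if I remember correctly). If the data only exists client-side, as in the in-memory case described in the question, a fallback sketch with a parameterized batch insert through pyodbc (assuming rows is a list of tuples already parsed from the CSV, and my_table / col_a, col_b, col_c are hypothetical names):
# client-side batch insert; cursor and conn are the existing pyodbc cursor/connection
cursor.fast_executemany = True  # lets pyodbc send the batch in fewer round trips where the driver supports it
cursor.executemany(
    "INSERT INTO my_table (col_a, col_b, col_c) VALUES (?, ?, ?)",
    rows,
)
conn.commit()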

Sqlite Database Access : No such table (Within Django no models)

I have a Django & Docker server running on my computer, and I have created a database with code from outside this server. I am trying to access this database ('test.sqlite3') from within the server.
I made sure the path and the file name are correct. When I open the database with DB Browser, I can see the tables and all my data. But I still get the following error:
OperationalError: no such table: NAMEOFTABLE
When I use the exact same code from another Python IDE (Spyder), it works fine. I'm guessing there's something weird going on with Django?
Here is some of the code:
conn = sqlite3.connect("../test.sqlite3")
c = conn.cursor()
c.execute("SELECT firstName, lastName FROM RESOURCES")
conn.close()
(Yes, I have also tried using the absolute path and I get the same error.)
Also to be noted: I get the same error when I try to create the database file & table from within the Django code (the path should then be the same, but I still get the error in this case).
Update: it seems I have a problem with my path, because I can't even open a text file with Python using its absolute path. So if anyone has any idea why, that'd be great.
try:
    f = open("/Users/XXXXX/OneDrive/XXXXX/XXXX/Autres/argon-dashboard-django/toto.txt")
    # Do something with the file
except IOError:
    q = "File not accessible"
finally:
    f.close()
This always raises 'f referenced before assignment' (open() fails, so f is never assigned and the finally clause then trips over it) and q is set to "File not accessible", which means I can't even find the text file.
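
Since the same code works in Spyder but fails inside the Django/Docker container, it helps to print what the container process actually sees; host paths like the /Users/... one above generally do not exist inside a container unless they are mounted. A small diagnostic sketch, reusing the relative path from the question:
import os
from pathlib import Path

print(os.getcwd())                        # directory that relative paths like "../test.sqlite3" resolve against
print(Path("../test.sqlite3").resolve())  # absolute path Python will actually try to open
print(Path("../test.sqlite3").exists())   # if False, sqlite3.connect() silently creates a new empty
                                          # database there, which then has "no such table"
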
To solve this problem, I did two things:
I moved the sqlite3 file into the app folder and used '/app/db.sqlite3' as the path
I added ; at the end of my SQL statements:
c.execute("SELECT firstName, lastName FROM RESOURCES;")
Not sure which one solved the problem, but everything works for me now.
I had a similar issue, possibly caused by leaving the Django model metadata outside of the image. I needed to synchronize the models with the DB using --run-syncdb in the Dockerfile:
RUN ["python", "manage.py", "migrate"]
RUN ["python", "manage.py", "migrate", "--run-syncdb"]
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]

Python - the SQLite database file was loaded in a wrong encoding 'UTF-8'

I wrote a query to create an SQLite database and the query itself is correct. The database file is created in my project files, but when I try to open it (in PyCharm), this message shows up:
The file was loaded in the wrong encoding: 'UTF-8'
This is the code causing this problem :
import sqlite3
connection = sqlite3.connect("./one_database.db")
cursor = connection.cursor()
sql = """
CREATE TABLE IF NOT EXISTS User (
user_NAME VARCHAR (60),
user_CHATID FLOAT (20),
user_PHONENUMBER VARCHAR )
"""
cursor.execute(sql)
connection.commit()
connection.close()
So far, I have tried to download and update all drivers and dependencies for SQLite3, and everything is up to date. I've tried all the solutions I found on Google (including the official JetBrains documentation, Stack Overflow, etc.), but none of them work and the result is the same.
I'm using: Python 3.8 | PyCharm 2021.1
By the way, I fixed my problem using DB Browser for SQLite, which is a tool for working with SQLite databases.
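
For what it's worth, the PyCharm message is expected: a .db file is binary, so the editor cannot decode it as UTF-8 text; it does not mean the database is broken. A quick way to confirm the table really exists, using the same one_database.db as above:
import sqlite3

connection = sqlite3.connect("./one_database.db")
cursor = connection.cursor()
cursor.execute("SELECT name FROM sqlite_master WHERE type='table'")  # sqlite_master lists every table in the file
print(cursor.fetchall())  # should include ('User',)
connection.close()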

disk I/O error with SQLite3 in Python 3 when writing to a database

I am a student just starting out with Python, and I was tasked with creating a relational database management system. I think I came pretty far, but I seem to have hit a wall. This is my code:
import csv
import sqlite3
conn = sqlite3.connect('unfccc.db')
c = conn.cursor()
c.execute('''CREATE TABLE unfccc (
Country TEXT,
CodeCountryFormat TEXT,
NamePollutant TEXT,
NameYearSector TEXT,
NameParent TEXT,
Sector TEXT,
CodeSector TEXT,
CNUEDSPD TEXT
)''')
def insert_row(Country, CodeCountryFormat, NamePollutant, NameYearSector, NameParent, Sector, CodeSector, CNUEDSPD):
    c.execute("INSERT INTO unfccc VALUES (?, ?, ?, ?, ?, ?, ?, ?)", (Country, CodeCountryFormat, NamePollutant, NameYearSector, NameParent, Sector, CodeSector, CNUEDSPD))
    conn.commit()

with open('UNFCCC_v20.csv') as csvfile:
    readCSV = csv.reader(csvfile, delimiter='\t')
    counter = 0
    for row in readCSV:
        insert_row(row[0], row[1], row[2], row[3], row[4], row[5], row[6], row[7])
        counter = counter + 1
        print('%d down, more to go' % counter)

conn.close()
When I run it with line 4 pointing the connection at :memory:, it works perfectly and I have what I think is a relational database.
However, when I try to run the code like this, writing the data to a db file, I get this error:
File "<ipython-input-13-4c50216842bc>", line 19, in insert_row
c.execute("INSERT INTO unfccc VALUES (?, ?, ?, ?, ?, ?, ?, ?)", (Country, CodeCountryFormat, NamePollutant, NameYearSector, NameParent, Sector, CodeSector, CNUEDSPD))
OperationalError: disk I/O error
I've searched Stack Overflow and used Google, but I don't think any of the cases I found match what I'm trying to do here (or I don't have the knowledge to figure out what's going on). One other thing I noticed about my code: it inserts the data into memory super fast, but when I write to a db file it is really slow. It shouldn't be a hardware limit, as I am using an SSD. Any help will be greatly appreciated!
Setting Backup and Sync to pause (via its system tray icon) while working on a project stored in Google Drive will prevent disk I/O errors.
When the file is written to or changed, Backup & Sync attempts to upload the new version to your Google Drive, and while it is doing this the file becomes read-only.
While sync is paused, your Google Drive folder acts more like a normal directory.
(click -> settings -> pause/resume)
Another cause for this problem is if the journal file is not writable, but the SQLite data file is writable. If the SQLite data file isn't writable, it will tell you you're trying to write to a read-only database. But if the database file is writable, but the journal file (filename same as the SQLite data file, but ending in -journal) isn't writable, it will give you an I/O error instead.
I know I might be answering really late, but for others... I had the same problem, and the cause turned out to be that another Python process was already using that database. When I stopped that code and ran the main code again, it worked.
It's too late for an answer, but I had the same error in a Python script. For me, the fix was simply to save the db file in the other program where I was updating the data.
Not the issue in your case, but I was getting this error because of uppercase characters in my db file name. Changing it fixed the issue... worth a try for others!
sqlite3.connect('lowercase_name.db')
The answer was simple: I was working from my Google Drive directory on my personal computer. I was offline at the time of the error and the Backup and Sync program was not running, so it took me a while to realize that I still didn't have full permissions over the folder.
Moving my script and the necessary files to a folder in my Documents fixed the error. Thanks for the help Marco and Josh!
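
A side note on the slowness mentioned in the question: committing after every row forces a disk flush per insert. A sketch of the usual fix, batching the inserts and committing once (assuming the unfccc table has already been created as above):
import csv
import sqlite3

conn = sqlite3.connect('unfccc.db')
c = conn.cursor()
with open('UNFCCC_v20.csv') as csvfile:
    readCSV = csv.reader(csvfile, delimiter='\t')
    # send all rows in one executemany call and commit once at the end
    c.executemany(
        "INSERT INTO unfccc VALUES (?, ?, ?, ?, ?, ?, ?, ?)",
        (row[:8] for row in readCSV),
    )
conn.commit()
conn.close()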

Python MySQLdb execution of multiple statements from a file without parsing

Is it possible to apply a MySQL batch file using the Python MySQLdb library? So far I have tried to "execute" the content of the file:
cur = connection.cursor()
cur.execute(file(filename).read())
cur.commit() # and without commit
This works only for a single statement. Otherwise I get the error:
Failed to apply content. Error 2014: Commands out of sync; you can't run this command now
I intend to support any kind of MySQL schema and table changes, so parsing the file line by line is not an option. Is there any solution other than calling the mysql client from Python?
I suppose you are using cx_Oracle?
The issue is due to calling a nonexistent method on the cursor, whereas it should have been called on the connection.
It should have been:
cur = connection.cursor()
cur.execute(file(filename).read())
connection.commit()
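
If the goal is still to run a whole multi-statement SQL file through MySQLdb without parsing it, one option (hedged: this assumes the MULTI_STATEMENTS client flag is available in your MySQLdb build and permitted by the server) is to enable it and then drain every result set with nextset(), which is exactly what the "Commands out of sync" error is complaining about. The host, user, password and database names below are placeholders:
import MySQLdb
from MySQLdb.constants import CLIENT

conn = MySQLdb.connect(host="localhost", user="me", passwd="secret", db="mydb",
                       client_flag=CLIENT.MULTI_STATEMENTS)  # placeholder credentials
cur = conn.cursor()
with open(filename) as f:
    cur.execute(f.read())         # sends every statement in the file
while cur.nextset() is not None:  # consume all pending result sets so the
    pass                          # connection does not end up "out of sync"
conn.commit()
conn.close()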
