SQLite database access: no such table (within Django, no models) - python

I have a Django & Docker server running on my computer, and I have created a database with code outside this server. I am trying to access this database ('test.sqlite3') from within the server.
I made sure the path and the file name are correct. When I open the database with DB Browser, I can see the tables and all my data, but I still get the following error:
OperationalError: no such table: NAMEOFTABLE
When I run the exact same code from another Python IDE (Spyder) it works fine, so I'm guessing something odd is going on with Django?
Here is some of the code:
import sqlite3

conn = sqlite3.connect("../test.sqlite3")  # path is relative to the process's working directory
c = conn.cursor()
c.execute("SELECT firstName, lastName FROM RESOURCES")
conn.close()
(Yes, I have also tried using the absolute path and I get the same error.)
Also to be noted: I get this same error when I try to create the database file & table from within the Django code (the path should then be the same, but I still get the error in this case).
Update: it seems I have a problem with my path, because I can't even open a text file with Python using its absolute path. So if anyone has any idea why, that'd be great.
try:
    f = open("/Users/XXXXX/OneDrive/XXXXX/XXXX/Autres/argon-dashboard-django/toto.txt")
    # Do something with the file
except IOError:
    q = "File not accessible"
finally:
    f.close()
This always raises 'f referenced before assignment' and sets q = "File not accessible", so it seems I can't even find the text file.
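As an aside, that 'referenced before assignment' message only means open() itself raised, so f was never bound and the finally block then fails on f.close(). A minimal sketch of the with-statement form that avoids the secondary error (same placeholder path as above):
try:
    with open("/Users/XXXXX/OneDrive/XXXXX/XXXX/Autres/argon-dashboard-django/toto.txt") as f:
        pass  # do something with the file; it is closed automatically when the block exits
except IOError:
    q = "File not accessible"  # open() failed, so there is nothing to close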

To solve this problem, I did two things:
I moved the sqlite3 file into the app folder and used '/app/db.sqlite3' as the path
I added ; at the end of my SQL statements:
c.execute("SELECT firstName, lastName FROM RESOURCES;")
Not sure which one solved the problem, but everything works for me now.
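Putting the two changes together, a minimal sketch of what the connection ends up looking like inside the container (the /app/db.sqlite3 path and the RESOURCES table come from the question and answer above):
import sqlite3

conn = sqlite3.connect("/app/db.sqlite3")  # absolute path inside the container
c = conn.cursor()
c.execute("SELECT firstName, lastName FROM RESOURCES;")
rows = c.fetchall()
conn.close()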

I had a similar issue, possibly caused by leaving the Django model metadata out of the image. I needed to synchronize the models with the DB using --run-syncdb in the Dockerfile:
RUN ["python", "manage.py", "migrate"]
RUN ["python", "manage.py", "migrate", "--run-syncdb"]
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]


sqlite3 OperationalError(disk I/O error) when accessing a DB with Python inside a Virtual Machine

I have already looked at these (and others):
disk I/O error with SQLite3 in Python 3 when writing to a database
Error: disk I/O error on a newly created database
Might be similar/related to SQLite disk I/O error (3850)
I am parsing an iOS full file-system extraction (about 61 GB) from a zip file. The database file is copied from the zip into a temp folder and is read from there. I do not create the database or do anything else to it.
Running this code errors inside VMware 16.2.1 with a Windows 10 host and a Windows 10 virtual machine. If I run it directly on the host, it works fine -- changing "Z:\" to "D:\VM Shares" plus the rest of the path.
>>> import sqlite3
>>> from pathlib import Path
>>> file_path = Path('Z:\\Forensics\\CTF21_Marsha_iPhoneX_FFS_Premium_2021_07_29\\xLEAPP_Reports_2021-11-23_Tuesday_115054\\temp\\filesystem2\\containers\\Shared\\SystemGroup\\30CCFC92-08E6-458A-B78B-DA920EF2EF82\\Library\\Database\\com.apple.MobileBluetooth.ledevices.other.db')
>>> db = sqlite3.connect(f'file:{file_path}?mode=ro', uri=True)
>>> cursor = db.cursor()
# The line below checks that what was opened really is a SQLite DB and errors if not.
# connect() will happily "open" a text file and hand back a cursor, so this check runs
# before any other code.
>>> cursor.execute("PRAGMA page_count").fetchone()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
sqlite3.OperationalError: disk I/O error
Now, if I change the connect() statement to the following:
>>> db = sqlite3.connect(f'file:{file_path}?mode=ro&immutable=1', uri=True)
You can read about the immutable parameter in SQLite's Uniform Resource Identifiers documentation (section 3.3). This seems to work, which leads me to believe the DB is locked somehow.
I have rebooted the virtual machine and rebooted the host. I did notice that copying the DB within the same directory (sometimes) works, but copying it to another directory (even one level up) always works.
I thought it might be due to the length of the path name. I've tried adding "\?", "\\?\", and other variations for the longer paths, but this does not help. The path length is under 256 characters anyway.
This is related to xLEAPP with the problematic code located in abstract.py.
Added an issue on GitHub for it - xleapp#5
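For reference, a minimal sketch of the copy-before-open workaround described above, reusing file_path from the snippet earlier (the temp-directory destination is an assumption; any local, non-shared folder should do):
import shutil
import sqlite3
import tempfile
from pathlib import Path

# Copy the DB out of the shared folder into a local temp directory, then open the copy read-only.
local_copy = Path(tempfile.mkdtemp()) / file_path.name
shutil.copy2(file_path, local_copy)
db = sqlite3.connect(f'file:{local_copy}?mode=ro', uri=True)
print(db.cursor().execute("PRAGMA page_count").fetchone())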

pandas to_csv() format issue (?) resulting in error when using psycopg2 copy_from

Overview:
I have a function in an external file that returns a dataframe that I then save to a csv with:
df.to_csv('filepath.csv', na_rep="0", index=False)
I then try to import the csv into a Postgres table using the psycopg2 function copy_from:
import psycopg2

try:
    connect = psycopg2.connect(database="", user="", password="", host="", port="")
except:
    print("Could not connect to database")

cursor = connect.cursor()
with open("filepath", 'r') as open_csv:
    next(open_csv)  # skip the header row
    try:
        cursor.copy_from(open_csv, sep=",")
        connect.commit()
        print("Copy Complete")
    except:
        print("Copy Error")
cursor.close()
This results in a copy error exception in the code above (so no real detail) but there are some weird caveats:
For some reason, if I open the csv in LibreOffice and manually save it as a text CSV and then run just the above psycopg2 copy_from process, the copy works and there are no issues. So for whatever reason, in the eyes of psycopg2 copy_from, something is off with the to_csv() write that gets fixed if I manually re-save the file. Manually saving the csv does not result in any visual changes, so what is happening here?
Also, the above psycopg2 code snippet works without error in another file in which all dataframe manipulation is contained within the single file where the to_csv() call happens. So something about returning a dataframe from a function in an external file is off?
FWIW, when debugging, the issue came up on the .copy_from() line, so the problem has something to do with CSV formatting and I cannot figure it out. I found a workaround with SQLAlchemy but would like to know what I am missing here instead of ignoring the problem.
In the Postgres error log, the error is: "invalid input syntax for type integer: "-1.0"". This error occurs in the last column of my table, where the value is defined as an INT; in the csv the value is -1, but it is being interpreted as -1.0. What confuses me is that if I use a COPY query to load the csv file into Postgres directly, it does not have a problem. Why does it interpret the value as -1.0 through psycopg2 but not directly in Postgres?
This is my first post so if more detail is needed let me know - thanks in advance for the help
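One thing worth checking (an assumption, not something confirmed in the post): if the integer column contains any NaN values before to_csv() is called, pandas silently upcasts the whole column to float, so -1 is written out as -1.0 even though na_rep fills the blanks with "0". A minimal sketch of forcing the column back to integers before writing, using a hypothetical column name last_col:
import pandas as pd

# df is the dataframe returned by the external function; "last_col" is a placeholder name
df["last_col"] = df["last_col"].fillna(0).astype(int)  # fill NaN first, then cast so -1 stays -1
df.to_csv('filepath.csv', na_rep="0", index=False)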

Python & Sqlite3 Changes not saving to database

I am using Docker while I develop a web app, and I am using an SQLite3 database to store all the data I need.
import sqlite3
from datetime import datetime

conn = sqlite3.connect(path)
c = conn.cursor()
today = str(datetime.today()).split(' ')[0]  # keep only the YYYY-MM-DD part
c.execute('UPDATE CONTACTS SET lastUpdate=? WHERE id=?;', (today, conId))
conn.commit()
conn.close()
I know the path is correct because I can easily retrieve information. When I execute this function the changes persist while the Docker container is still running, but when I restart the container, the data reverts to what it was previously.
Any ideas as to why this is, and how I can fix it?
OK, so to fix this I simply added a few lines to the docker-compose.yml file:
volumes:
  - ./app/database.sqlite3:/app/database.sqlite3
where the host side is the relative path to my database file.
This works perfectly.

Flask-SQLAlchemy raw SQL update command does not provide any response

I'm trying to perform an update using Flask-SQLAlchemy, but when it gets to the update statement it does not return anything. It seems the script is hanging or not doing anything.
I tried wrapping a try/except around the code that does not complete, but there are no errors.
I gave it 10 minutes to complete the update statement, which only updates 1 record, and it still does not do anything.
When I cancel the script, it reports the error Communication link failure (0) (SQLEndTran), but I don't think this is the root cause, because the same script runs other SQL statements fine, so the connection to the DB is good.
What my script does is get a list of filenames that I need to process (I have no issues with this). Then, using the retrieved list of filenames, it looks in a directory to check whether each file exists. If a file does not exist, it updates the database to tag the file as not found. This is where I get the issue: it does not perform the update, nor does it provide an error message of any sort.
I even tried to create a new engine just for the update statement, but I still get the same behavior.
I also tried printing out the SQL statement in Python before executing it; I ran the printed SQL command in my SQL browser and it worked OK.
The code is very simple, I'm not really sure why it's having the issue.
#!/usr/bin/env python3
from flask_sqlalchemy import sqlalchemy
import glob

files_directory = "/files_dir/"
sql_string = """
select *
from table
where status is null
"""
# omitted conn_string
engine1 = sqlalchemy.create_engine(conn_string)
result = engine1.execute(sql_string)
for r in result:
    engine2 = sqlalchemy.create_engine(conn_string)
    filename = r[11]
    match = glob.glob(f"{files_directory}/**/{filename}.wav")
    if not match:
        print('no match')
        script = "update table set status = 'not_found' where filename = '" + filename + "' "
        engine2.execute(script)
        engine2.dispose()
        continue
engine1.dispose()
It appears that if I try to loop through 26k records, the script doesn't work, but when I process batches of 2k records per run, it does. So my SQL string becomes (with top 2000 added to the query):
sql_string = """
select top 2000 *
from table
where status is null
"""
It's manual, yeah, but it works for me since I just need to run this script once (I mean 13 times).
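As a side note, a sketch of one way to avoid both the per-row engine creation and the manual batching: do everything over a single connection and use bound parameters for the update (the conn_string, table and column names are the same placeholders used above):
from flask_sqlalchemy import sqlalchemy
import glob

files_directory = "/files_dir/"
engine = sqlalchemy.create_engine(conn_string)
with engine.begin() as conn:  # one connection, committed when the block exits
    rows = conn.execute(sqlalchemy.text("select * from table where status is null")).fetchall()
    for r in rows:
        filename = r[11]
        # recursive=True so the ** pattern actually descends into subdirectories
        if not glob.glob(f"{files_directory}/**/{filename}.wav", recursive=True):
            conn.execute(
                sqlalchemy.text("update table set status = 'not_found' where filename = :fn"),
                {"fn": filename},
            )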

Use python and psycopg2 to execute a sql file that contains a DROP DATABASE statement

I am trying to use a python function to execute a .sql file.
The sql file begins with a DROP DATABASE statement.
The first lines of the .sql file look like this:
DROP DATABASE IF EXISTS myDB;
CREATE DATABASE myDB;
The rest of the .sql file defines all the tables and views for 'myDB'.
Python Code:
import psycopg2

def connect():
    conn = psycopg2.connect(dbname='template1', user='user01')
    conn.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)
    cursor = conn.cursor()
    sqlfile = open('/path/to/myDB-schema.sql', 'r')
    cursor.execute(sqlfile.read())
    db = psycopg2.connect(dbname='myDB', user='user01')
    cursor = db.cursor()
    return db, cursor
When I run the connect() function, I get an error on the DROP DATABASE statement.
ERROR:
psycopg2.InternalError: DROP DATABASE cannot be executed from a function or multi-command string
I spent a lot of time googling this error, and I can't find a solution.
I also tried adding an AUTOCOMMIT statement to the top of the .sql file, but it didn't change anything.
SET AUTOCOMMIT TO ON;
I am aware that PostgreSQL doesn't allow you to drop a database that you are currently connected to, but I didn't think that was the problem here, because I begin the connect() function by connecting to the template1 database and from that connection create the cursor that executes the .sql file.
Has anyone else run into this error? Is there any way to execute the .sql file from a Python function?
This worked for me for a file with one SQL query per line:
sql_file = open('file.sql','r')
cursor.execute(sql_file.read())
You are reading in the entire file and passing the whole thing to PostgreSQL as one string (as the error message says, a "multi-command string"). Is that what you intend? If so, it isn't going to work.
Try this:
cursor.execute(sqlfile.readline())
Or, shell out to psql and let it do the work.
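If you go the psql route, a minimal sketch of shelling out from Python (assuming psql is on the PATH, and reusing the user and file path from the question):
import subprocess

# psql sends the file's statements one at a time, so DROP DATABASE is allowed
subprocess.run(
    ["psql", "-U", "user01", "-d", "template1", "-f", "/path/to/myDB-schema.sql"],
    check=True,
)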
In order to deploy ETL scripts via cron that rely on .sql files, we had to change how we locate and call the SQL file itself:
import os

# Build the path relative to this script so cron's working directory doesn't matter
sql_file = os.path.join(os.path.dirname(__file__), "../sql/ccd_parcels.sql")
sqlcurr = open(sql_file, mode='r').read()
curDest.execute(sqlcurr)
connDest.commit()
This seemed to please the CRON job...
