I am currently working on a script to import a .db file into a PostgreSQL database, including the data. Is there any way to do this in Python, without using third-party tools?
You can do it with Django for sure.
python manage.py dumpdata > db.json
Change the database settings in settings.py to point at the new database, e.g. PostgreSQL.
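For example, a minimal DATABASES block in settings.py might look like this (the database name and credentials below are placeholders, adjust them to your setup):
# settings.py - placeholder names and credentials
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'mydb',
        'USER': 'postgres',
        'PASSWORD': 'secret',
        'HOST': 'localhost',
        'PORT': '5432',
    }
}
Then run the migrations: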
python manage.py migrate
python manage.py shell
Enter the following in the shell:
from django.contrib.contenttypes.models import ContentType
ContentType.objects.all().delete()
python manage.py loaddata db.json
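A commonly used variant is to exclude the content types at dump time instead, so you don't need the ContentType cleanup in the shell (a sketch; these flags exist in modern Django):
python manage.py dumpdata --natural-foreign --exclude contenttypes --exclude auth.permission > db.json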
Otherwise, if you want to do it by hand, you need to install psycopg2:
$ pip install psycopg2
Then you connect to Postgres.
import psycopg2
conn = psycopg2.connect("host=localhost dbname=postgres user=postgres")
This is how you insert values.
cur = conn.cursor()
insert_query = "INSERT INTO users VALUES {}".format("(10, 'hello@dataquest.io', 'Some Name', '123 Fake St.')")
cur.execute(insert_query)
conn.commit()
Now with the built-in sqlite3 module you can easily open an SQLite file.
import sqlite3
conn = sqlite3.connect('database.db')
Fetch the data.
r = conn.execute("""SELECT * FROM books""")
r.fetchall()
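Putting the two together, a minimal sketch that copies the SQLite rows into Postgres (assumptions: a books table with three columns exists on both sides; adjust the table name and the placeholder count to your schema):
import sqlite3
import psycopg2

# read everything from the SQLite file
src = sqlite3.connect('database.db')
rows = src.execute("SELECT * FROM books").fetchall()

# write it into Postgres with a parameterized query
dst = psycopg2.connect("host=localhost dbname=postgres user=postgres")
cur = dst.cursor()
cur.executemany("INSERT INTO books VALUES (%s, %s, %s)", rows)
dst.commit()
dst.close()
src.close()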
Here is how to fetch all the table names within your SQLite database:
SELECT name FROM sqlite_master WHERE type = 'table'
sqlite_master can be thought of as a table that contains information about your database (metadata).
A quick but most likely inefficient way (because it will run 700 queries with 700 separate result sets) to get the list of table names, loop through those tables, and return rows where columnA = "-":
for row in connection.execute("SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name").fetchall():
    for result in connection.execute('SELECT * FROM "' + row[0] + '" WHERE columnA = ?', ('-',)).fetchall():
        # do something with each matching row
        print(result)
Here is another approach:
import sqlite3
try:
    conn = sqlite3.connect('/home/rolf/my.db')
except sqlite3.Error as e:
    print('Db not found', str(e))

db_list = []
mycursor = conn.cursor()
for db_name in mycursor.execute("SELECT name FROM sqlite_master WHERE type = 'table'"):
    db_list.append(db_name)
for x in db_list:
    print("Searching", x[0])
    try:
        mycursor.execute('SELECT * FROM ' + x[0] + ' WHERE columnA = ?', ('-',))
        stats = mycursor.fetchall()
        for stat in stats:
            print(stat, "found in", x)
    except sqlite3.Error:
        continue
conn.close()
I'm a bit of an amateur IT professional who has been getting to grips with Python and Django. This question is just about Python and SQLite3.
So I have the following code, which is meant to take an input from the user, and then pass it to a function I have created.
from dbadd import inputdetails
import sqlite3
conn = sqlite3.connect('torndata.db')
c = conn.cursor()
print('Enter Name')
u_name = str(input())
print('Enter Age')
u_age = int(input())
print('Enter Gender')
u_gender = str(input())
inputdetails(u_name, u_age, u_gender)
conn.close()
And this is the function it is calling:
import sqlite3

conn = sqlite3.connect('torndata.db')
cursor = conn.cursor()

def inputdetails(u_name, u_age, u_gender):
    cursor.execute("""
        INSERT INTO userdata(name, age, gender)
        VALUES (?,?,?)
    """, (u_name, u_age, u_gender))
conn.commit()
I get no errors when it runs, but when I run the following, it shows no data has been moved to the table I have specified.
c.execute("SELECT * FROM userdata")
print(c.fetchall())
conn.commit()
conn.close()
The database is already created within SQLite3 and the table has been set up, I can query it directly.
Bad indent. The conn.commit() is not part of the function, so it runs once at import time, before any insert; the rows you insert later are never committed.
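A corrected sketch of that module, with the commit moved inside the function:
import sqlite3

conn = sqlite3.connect('torndata.db')
cursor = conn.cursor()

def inputdetails(u_name, u_age, u_gender):
    cursor.execute("""
        INSERT INTO userdata(name, age, gender)
        VALUES (?,?,?)
    """, (u_name, u_age, u_gender))
    conn.commit()  # now inside the function, so every insert is committed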
print ('Files in Drive:')
!ls drive/AI
Files in Drive:
database.sqlite
Reviews.csv
Untitled0.ipynb
fine_food_reviews.ipynb
Titanic.csv
When I run the above code in Google Colab, my sqlite file is clearly present in my drive. But whenever I run a query on this file with the code below, it says:
# using the SQLite Table to read data.
con = sqlite3.connect('database.sqlite')
#filtering only positive and negative reviews i.e.
# not taking into consideration those reviews with Score=3
filtered_data = pd.read_sql_query("SELECT * FROM Reviews WHERE Score !=3",con)
DatabaseError: Execution failed on sql 'SELECT * FROM Reviews WHERE Score != 3': no such table: Reviews
Below you will find code that addresses the db setup on the Colab VM, table creation, data insertion and data querying. Execute all code snippets in individual notebook cells.
Note however that this example only shows how to execute the code on a non-persistent Colab VM. If you want to save your database to GDrive, you will have to mount your GDrive first:
from google.colab import drive
drive.mount('/content/gdrive')
and navigate to the appropriate file directory after.
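For example, once the drive is mounted you can connect straight to a file under /content/gdrive (the path below is a placeholder; copy the real one from the Colab file browser):
import sqlite3

# placeholder path on the mounted drive
conn = sqlite3.connect('/content/gdrive/My Drive/database.sqlite')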
Step 1: Create DB
import sqlite3
conn = sqlite3.connect('SQLite_Python.db') # You can create a new database by changing the name within the quotes
c = conn.cursor() # The database will be saved in the current working directory of the Colab VM
# Create table - SqliteDb_developers
c.execute('''CREATE TABLE SqliteDb_developers
             ([id] INTEGER PRIMARY KEY, [name] text, [email] text, [joining_date] date, [salary] integer)''')
conn.commit()
Test whether the DB was created successfully:
!ls
Output:
sample_data SQLite_Python.db
Step 2: Insert Data Into DB
import sqlite3
try:
    sqliteConnection = sqlite3.connect('SQLite_Python.db')
    cursor = sqliteConnection.cursor()
    print("Successfully Connected to SQLite")
    sqlite_insert_query = """INSERT INTO SqliteDb_developers
                             (id, name, email, joining_date, salary)
                             VALUES (1, 'Python', 'MakesYou@Fly.com', '2020-01-01', 1000)"""
    count = cursor.execute(sqlite_insert_query)
    sqliteConnection.commit()
    print("Record inserted successfully into SqliteDb_developers table ", cursor.rowcount)
    cursor.close()
except sqlite3.Error as error:
    print("Failed to insert data into sqlite table", error)
finally:
    if sqliteConnection:
        sqliteConnection.close()
        print("The SQLite connection is closed")
Output:
Successfully Connected to SQLite
Record inserted successfully into SqliteDb_developers table 1
The SQLite connection is closed
Step 3: Query DB
import sqlite3
conn = sqlite3.connect("SQLite_Python.db")
cur = conn.cursor()
cur.execute("SELECT * FROM SqliteDb_developers")
rows = cur.fetchall()
for row in rows:
    print(row)
conn.close()
Output:
(1, 'Python', 'MakesYou@Fly.com', '2020-01-01', 1000)
Try this instead, to see what tables are there:
"SELECT name FROM sqlite_master WHERE type='table'"
Give your database file a shareable id, just like you did with Reviews.csv:
database_file=drive.CreateFile({'id':'your_sharable_id for sqlite file'})
database_file.GetContentFile('database.sqlite')
If you are trying to access the files from your google drive, you need to mount the drive first:
from google.colab import drive
drive.mount('/content/drive')
After you do this, right click on the file that you intend to read in the Colab session, select 'Copy Path', and paste it into the connection string.
con = sqlite3.connect('/content/database.sqlite')
You can now read the file.
con = sqlite3.connect('database.sqlite')
filtered_data = pd.read_sql_query("SELECT * FROM Reviews WHERE Score !=3",con)
If you are executing it twice you will likely end up with this type of error. Execute it exactly once, without any failure. If you do get an error, remove the database.sqlite file and extract it again, then run the code once more without errors. This worked for me.
The end goal is to modify Firefox cookies.sqlite db before starting Firefox.
At present, I want to display what tables are in the db. I copied the cookies.sqlite db to my desktop. I'm working with the db on my desktop.
This is my first time using sqlite. I copied some code, but I'm not able to read what tables are in the db.
List of tables, db schema, dump etc using the Python sqlite3 API
Here is what I get in the terminal. I can see the tables listed.
me $ ls cookies.sqlite
cookies.sqlite
me $ sqlite3 cookies.sqlite
SQLite version 3.8.10.2 2015-05-20 18:17:19
Enter ".help" for usage hints.
sqlite> .tables
moz_cookies
sqlite> .exit
me $ # I run my python script
me $ ./sqlTutorial.bash
SQLite version: 3.8.10.2
table records...
[]
more table records...
me $ cat dump.sql
BEGIN TRANSACTION;
COMMIT;
me $
Python code. All I want to do is display the tables in the cookie.sqlite db.
#!/bin/python
import sqlite3 as lite
import sys

con = lite.connect('cookie.sqlite')
with con:
    cur = con.cursor()
    cur.execute('SELECT SQLITE_VERSION()')
    data = cur.fetchone()
    print("SQLite version: %s" % data)
    print("table records...")
    cursor = con.cursor()
    cursor.execute("SELECT name FROM sqlite_master WHERE type='table';")
    print(cursor.fetchall())
    print("more table records...")

with open('dump.sql', 'w') as f:
    for line in con.iterdump():
        f.write('%s\n' % line)
As you noticed, the database name is cookies.sqlite not cookie.sqlite.
Why didn't I get an error? I thought Python would error out on con = lite.connect('cookie.sqlite').
Connecting to a sqlite database either opens an existing database file, or creates a new one if the database didn't exist.
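If you want connect() to fail instead of silently creating an empty database, the standard sqlite3 module supports URI filenames (a sketch):
import sqlite3

# mode=rw opens an existing database read-write and raises
# sqlite3.OperationalError if the file does not exist
con = sqlite3.connect('file:cookies.sqlite?mode=rw', uri=True)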
I am trying to write into my localhost MySQL database.
I have created a database named "test", a table called "price_update" and a column called "model".
When I run the script below I get no errors, however, I also get nothing written to my database.
I am not sure where to start looking for the problem. The column is varchar(10) with collation utf8_general_ci.
import MySQLdb
conn = MySQLdb.connect(host="127.0.0.1",user="someUser",passwd="somePassword",db="test")
query = "INSERT INTO price_update (model) values ('12345')"
x = conn.cursor()
x.execute(query)
row = x.fetchall()
You have to commit the changes:
conn.commit()
Also, I'd make your query safer:
query = "INSERT INTO price_update (model) values (%s)"
...
x.execute(query, ('12345',))
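Putting it together, a minimal sketch:
import MySQLdb

conn = MySQLdb.connect(host="127.0.0.1", user="someUser", passwd="somePassword", db="test")
x = conn.cursor()
x.execute("INSERT INTO price_update (model) VALUES (%s)", ('12345',))
conn.commit()  # without this, the INSERT is discarded when the connection closes
conn.close()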
I have a .sql file with multiple INSERT statements (1000+) and I want to run the statements in this file against my Oracle database.
For now, I'm using Python with ODBC to connect to my database with the following:
import sys
import pyodbc
from ConfigParser import SafeConfigParser

def db_call(cfgFile, sql):
    parser = SafeConfigParser()
    parser.read(cfgFile)
    dsn = parser.get('odbc', 'dsn')
    uid = parser.get('odbc', 'user')
    pwd = parser.get('odbc', 'pass')
    con = None
    cur = None
    try:
        con = pyodbc.connect('DSN=' + dsn + ';PWD=' + pwd + ';UID=' + uid)
        cur = con.cursor()
        cur.execute(sql)
        con.commit()
    except pyodbc.DatabaseError, e:
        print 'Error %s' % e
        sys.exit(1)
    finally:
        if con and cur:
            cur.close()
            con.close()
with open('theFile.sql', 'r') as f:
    cfgFile = 'c:\\dbinfo\\connectionInfo.cfg'
    # here goes the code to insert the contents into the database using db_call_many
    statements = f.read()
    db_call(cfgFile, statements)
But when I run it I receive the following error:
pyodbc.Error: ('HY000', '[HY000] [Oracle][ODBC][Ora]ORA-00911: invalid character\n (911) (SQLExecDirectW)')
But the contents of the file are only lines like:
INSERT INTO table (movie,genre) VALUES ('moviename','horror');
Edit
Adding print '<{}>'.format(statements) before the db_call(cfgFile, statements) call, I get the following result (100+ statements):
<INSERT INTO table (movie,genre) VALUES ('moviename','horror');INSERT INTO table (movie,genre) VALUES ('moviename_b','horror');INSERT INTO table (movie,genre) VALUES ('moviename_c','horror');>
Thanks for your time reading this.
Now it's somewhat clarified - you have a lot of separate SQL statements such as INSERT INTO table (movie,genre) VALUES ('moviename','horror');
Then you're effectively after something like cur.executescript() rather than a single execute (I have no idea whether pyodbc supports that part of the DB API). Otherwise, is there any reason you can't split the file and execute each statement separately?
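If pyodbc has no executescript equivalent, a simple workaround is to split the file on semicolons and run each statement separately; ORA-00911 is typically caused by the trailing ';' itself, which Oracle does not accept through ODBC. A sketch, assuming no semicolons occur inside the quoted values:
with open('theFile.sql', 'r') as f:
    statements = f.read()

for stmt in statements.split(';'):
    stmt = stmt.strip()
    if stmt:  # skip the empty tail after the final ';'
        db_call(cfgFile, stmt)
Note this reuses the question's db_call, which opens and closes a connection per statement; batching all statements inside one connection would be faster.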
When you read a file using the read() function, the newline (\n) at the end of the file is read too. I think you should use db_call(cfgFile, statements[:-1]) to strip that trailing newline.