How do I execute an SQLite script from within python?

My SQLite script works fine when I type:
.read 'dummy.sql'
from within the SQLite shell.
However, the following Python code does not work; I get a syntax error on line 5, the cursor.execute call.
import sqlite3
db = sqlite3.connect('scheduling.db')
cursor=db.cursor()
a='''.read "scheduling.sql"'''
cursor.execute(a)
db.commit
db.close()
I know I'm doing something wrong with the quotes. How do I make this work?

The workaround I would recommend is to read the contents of the .sql file into a Python string variable, as you would read any other text file, and then call executescript. Unlike execute, executescript can run many statements in one call. For example, it will work correctly if your .sql file contains the following:
CREATE TABLE contacts (
    contact_id INTEGER PRIMARY KEY,
    first_name TEXT NOT NULL,
    last_name TEXT NOT NULL
);
INSERT INTO contacts (contact_id, first_name, last_name)
VALUES (1, 'John', 'Smith');
Here's the full Python snippet that you'll need:
import sqlite3

# read the whole script in as a single string
with open('scheduling.sql', 'r') as sql_file:
    sql_script = sql_file.read()

db = sqlite3.connect('scheduling.db')
cursor = db.cursor()
cursor.executescript(sql_script)
db.commit()
db.close()

You cannot. The sqlite3 command-line program can be seen as split into two parts:
externally, it parses lines of input into SQL statements
internally, it passes those SQL statements to the engine
externally again, it displays the results of the SQL statements
.read is a kind of meta-command: the parser opens the file and reads lines from it. AFAIK, nothing in the sqlite3 library can emulate that parser part, so you would have to do the line parsing into SQL statements by hand, and then execute the SQL statements one at a time. For example:
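As a rough illustration, here is a minimal sketch of that by-hand approach. It naively splits on semicolons, so it assumes the script contains only complete statements, with no semicolons inside string literals, no triggers, and no dot-commands:
import sqlite3
db = sqlite3.connect('scheduling.db')
cursor = db.cursor()
with open('scheduling.sql', 'r') as f:
    script = f.read()
# naive split; a real parser would handle quoted semicolons and .commands
for statement in script.split(';'):
    statement = statement.strip()
    if statement:
        cursor.execute(statement)
db.commit()
db.close()
The standard library's sqlite3.complete_statement() can serve as a building block for a more robust splitter.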

Try this: you can read the query from a file using the open function (this replaces the .read functionality, since an SQL script is just a text file containing a query) and then run pandas' read_sql_query. Note that read_sql_query expects a single SELECT statement, not a multi-statement script:
import sqlite3
import pandas as pd

sqlite_file = 'scheduling.db'
conn = sqlite3.connect(sqlite_file)
with open('scheduling.sql', 'r') as f:
    sql = f.read()
print(pd.read_sql_query(sql, conn))

Related

Is there a function under pyodbc that can replace cursor.copy_expert?

I use some code that opens a CSV file in order to store it in a database; I am using SQL Server.
Once the file is open in RAM, after some preprocessing that happens first, we want to store it in the database.
Under PostgreSQL we use the following code, but I want an equivalent for SQL Server:
# upload to db
SQL_STATEMENT = """
COPY %s FROM STDIN WITH
CSV
HEADER
DELIMITER AS ','
"""
cursor.copy_expert(sql=SQL_STATEMENT % tbl_name, file=my_file)
I have no idea how to adapt this code block for SQL Server without changing the rest of the code.
psycopg2 is a Postgres-specific DB-API implementation that maintains extended methods like copy_expert, copy_from, and copy_to, which are only supported in Postgres. pyodbc, by contrast, is a generalized DB-API that interfaces with any ODBC driver, including SQL Server, Teradata, MS Access, and even PostgreSQL ODBC drivers! Therefore, it is unlikely that an SQL Server-specific convenience command exists to replace copy_expert.
However, consider submitting an SQL Server-specific command such as BULK INSERT, which can read from flat files, and running it with cursor.execute. The snippet below uses f-strings (introduced in Python 3.6) for string formatting:
# upload to db
SQL_STATEMENT = (
    f"BULK INSERT {tbl_name} "
    f"FROM '{my_file}' "
    "WITH (FORMAT='CSV');"
)
cursor.execute(SQL_STATEMENT)
conn.commit()
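One caveat: BULK INSERT resolves the file path on the database server, so the file must be readable from the SQL Server machine. If it is not, a driver-side fallback is a plain parameterized insert. A minimal sketch, reusing the question's tbl_name and my_file names and assuming a three-column table plus a hypothetical connection string:
import csv
import pyodbc
# hypothetical connection details; adjust driver/server/database for your setup
conn = pyodbc.connect('DRIVER={ODBC Driver 17 for SQL Server};'
                      'SERVER=myserver;DATABASE=mydb;Trusted_Connection=yes')
cursor = conn.cursor()
with open(my_file, newline='') as f:
    reader = csv.reader(f)
    next(reader)  # skip the CSV header row
    rows = list(reader)
cursor.fast_executemany = True  # batch the round trips to the server
cursor.executemany(f"INSERT INTO {tbl_name} VALUES (?, ?, ?)", rows)  # assumes three columns
conn.commit()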

Connect / Store sqlite3 database in specified directory other than default -- "conn = sqlite3.connect('name_of_database')"

I'm working with an SQLite3 database, trying to access its data from a different directory than the one where it was originally created.
The Python script (test case) that I run through our Squish GUI automation IDE is located in the directory
C:\Squish_Automation_Functional_nVP2\suite_Production_Checklist\tst_dashboard_functional_setup
There, inside that same script, I create a database with the following table:
def get_var_value_pass():
    conn = sqlite3.connect('test_result.db')
    c = conn.cursor()
    c.execute('CREATE TABLE IF NOT EXISTS test_result_pass (pass TEXT, result INTEGER)')
    c.execute("INSERT INTO test_result_pass VALUES ('Pass', 0)")
    conn.commit()
    c.execute("SELECT * FROM test_result_pass WHERE pass='Pass'")
    pass_result = c.fetchone()
    test.log(str(pass_result[1]))
    test_result = pass_result[1]
    c.close()
    conn.close()
    return test_result
Now I'd like to access the same database, which I previously created via "conn = sqlite3.connect('test_result.db')", inside another test case located in a different directory:
C:\Squish_Automation_Functional_nVP2\suite_Production_Checklist\tst_Calling_In-Call-Options_Text_Share-Text_Update-Real-Time
The problem is that when I try a SELECT statement against the same database inside a different script (test case), like so:
def get_var_value_pass(pass_value):
    conn = sqlite3.connect('test_result.db')
    c = conn.cursor()
    c.execute("SELECT * FROM test_result_pass WHERE pass='Pass'")
my test fails as soon as it reaches the c.execute() call, because the table can't be found. The most recent "conn = sqlite3.connect('test_result.db')" has simply created a new, empty database instead of referring to the one I originally created. I've therefore concluded that I should store the original database somewhere both test cases can use it as a test-suite resource, in a directory the other test cases can reference. Ideally here:
C:\Squish_Automation_Functional_nVP2\suite_Production_Checklist
Is there an sqlite3 function that lets you specify where to connect to / store your database? Something like sqlite3.connect('test_result.db'), except that you define its path?
BTW, I have tried the second snippet of code inside the same test script, and it runs perfectly. I have also tried an in-memory approach via sqlite3.connect(':memory:'), still no luck. Please help! Thanks.
It's not clear to me that you received an answer. You have a number of options. I happen to have an SQLite database stored in one directory which I can open from that directory or another by specifying the path in one of the following ways.
import sqlite3
# option 1: a raw string, so backslashes are not treated as escapes
conn = sqlite3.connect(r'C:\Television\programs.sqlite')
conn.close()
print('Hello')
# option 2: each backslash escaped with another backslash
conn = sqlite3.connect('C:\\Television\\programs.sqlite')
conn.close()
print('Hello')
# option 3: forward slashes, which Windows paths also accept
conn = sqlite3.connect('C:/Television/programs.sqlite')
conn.close()
print('Hello')
All three connection attempts succeed; I see three Hellos as output. In short:
Stick an r ahead of the string (see the documentation on raw string literals for the reason).
Replace each backslash with a pair of backslashes.
Replace each backslash with a forward slash.
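To make both test cases open the same file regardless of their working directory, one option is to build an absolute path before connecting. A minimal sketch, using the suite directory named in the question (exactly how a Squish test script should locate that directory may differ in your setup):
import os
import sqlite3
# shared location for the whole test suite, taken from the question
SUITE_DIR = r'C:\Squish_Automation_Functional_nVP2\suite_Production_Checklist'
db_path = os.path.join(SUITE_DIR, 'test_result.db')
conn = sqlite3.connect(db_path)  # every test script now opens the same database file
conn.close()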

Use python and psycopg2 to execute a sql file that contains a DROP DATABASE statement

I am trying to use a python function to execute a .sql file.
The sql file begins with a DROP DATABASE statement.
The first lines of the .sql file look like this:
DROP DATABASE IF EXISTS myDB;
CREATE DATABASE myDB;
The rest of the .sql file defines all the tables and views for 'myDB'
Python Code:
import psycopg2

def connect():
    conn = psycopg2.connect(dbname='template1', user='user01')
    conn.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)
    cursor = conn.cursor()
    sqlfile = open('/path/to/myDB-schema.sql', 'r')
    cursor.execute(sqlfile.read())
    db = psycopg2.connect(dbname='myDB', user='user01')
    cursor = db.cursor()
    return db, cursor
When I run the connect() function, I get an error on the DROP DATABASE statement.
ERROR:
psycopg2.InternalError: DROP DATABASE cannot be executed from a function or multi-command string
I spent a lot of time googling this error, and I can't find a solution.
I also tried adding an AUTOCOMMIT statement to the top of the .sql file, but it didn't change anything.
SET AUTOCOMMIT TO ON;
I am aware that PostgreSQL doesn't allow you to drop a database you are currently connected to, but I didn't think that was the problem here, because the connect() function begins by connecting to the template1 database, and the cursor that executes the .sql file belongs to that connection.
Has anyone else run into this error? Is there any way to execute the .sql file from a Python function?
This worked for me for a file consisting of one SQL query per line:
sql_file = open('file.sql','r')
cursor.execute(sql_file.read())
You are reading in the entire file and passing the whole thing to PostgreSQL as one string (as the error message says, a "multi-command string"). Is that what you intend to do? If so, it isn't going to work.
Try this:
cursor.execute(sqlfile.readline())
Or, shell out to psql and let it do the work.
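For the shell-out route, a minimal sketch using the standard library's subprocess module (assuming psql is on the PATH, with the user01/template1 details from the question):
import subprocess
# psql sends the file's statements one at a time, so DROP DATABASE is allowed
subprocess.run(
    ['psql', '-U', 'user01', '-d', 'template1', '-f', '/path/to/myDB-schema.sql'],
    check=True,  # raise CalledProcessError if psql exits non-zero
)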
In order to deploy cron-scheduled ETL scripts that are driven by .sql files, we had to change how we locate the SQL file itself:
# build the path relative to this script, so cron's working directory doesn't matter
sql_file = os.path.join(os.path.dirname(__file__), "../sql/ccd_parcels.sql")
sqlcurr = open(sql_file, mode='r').read()
curDest.execute(sqlcurr)
connDest.commit()
This seemed to please the CRON job...

Python - printing the SQL statement record count in the log file

I am currently using a Python program to insert records, with the statement below. The issue is that when I try to print the number of records inserted to the log file, it prints only 0, although I can see the inserted record count in the console while the program runs. Can you help me print the record count to the log file?
Also, I know that redirecting the Python program's output with > file would capture the record count, but I want all the details in the same log file after each insert statement completes, since I'm looping over different statements.
log="/fs/logfile.txt"
log_file = open(log,'w')
_op = os.system('psql ' + db_host_connection + ' -c "insert into emp select * from emp1;"')
print date , "printing" , _op
You should probably switch to a proper Python module for PostgreSQL interactions.
I haven't used PostgreSQL from Python before, but one of the first search-engine hits is psycopg2:
http://initd.org/psycopg/docs/usage.html
You could then do something along the following lines:
import psycopg2
conn = psycopg2.connect("dbname=test user=postgres")
# create a cursor for interaction with the database
cursor = conn.cursor()
# execute your sql statement
cursor.execute("insert into emp select * from emp1")
# retrieve the number of rows inserted
number_rows_inserted = cursor.rowcount
# commit the changes
conn.commit()
This should also be significantly faster than os.system calls, especially if you plan to execute multiple statements.
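To get that count into the log file, which was the original goal, a short sketch continuing from the snippet above and reusing the /fs/logfile.txt path from the question:
from datetime import date
# append, so counts from earlier loop iterations are kept
with open('/fs/logfile.txt', 'a') as log_file:
    log_file.write('%s printing %d\n' % (date.today(), number_rows_inserted))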

python with sqlite3

At the moment I am trying to use sqlite3 with Python. My question is: I don't know how to read an existing 'abc.db' file with Python.
I know that abc.db is an SQLite3 file, but I don't know its structure, and I need to get the information out of it.
I used:
import sqlite3
try:
    sqlite_conn = sqlite3.connect('abc.db')
except sqlite3.Error as e:
    print('connect sqlite database failed.')
    sqlite_logger.error("connect sqlite database failed, ret = %s" % e.args[0])
So, what can I do next? I need to read abc.db and, if possible, output its content directly to the terminal, because I need to analyse the data in this file. Thanks a lot!
With your sqlite_conn object you can run the following commands:
cur = sqlite_conn.cursor()
cur.execute("SELECT name FROM sqlite_master WHERE type='table'")
rows = cur.fetchall()
for row in rows:
    print(row[0])
Then you can do a SELECT * FROM <tablename> for each of those tables. sqlite_master is SQLite's built-in metadata table.
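Putting the two steps together, a short sketch that lists every user table and dumps its rows to the terminal (assuming the file is named abc.db, as in the question):
import sqlite3
conn = sqlite3.connect('abc.db')
cur = conn.cursor()
# enumerate user tables from SQLite's metadata
cur.execute("SELECT name FROM sqlite_master WHERE type='table'")
tables = [row[0] for row in cur.fetchall()]
for table in tables:
    print('===', table, '===')
    for row in cur.execute('SELECT * FROM "%s"' % table):
        print(row)
conn.close()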
Using the command-line sqlite3 client, you can see the schema of an unknown db using:
.schema
then poke around a bit with SQL to get a better idea of the data within.
