python with sqlite3

I am trying to use Python with sqlite3. My question is: I don't know how to read an existing file, 'abc.db', with Python.
I only know that abc.db is an SQLite3 file. I don't know its structure, and I need to get the information stored in it.
I used:
import sqlite3

try:
    sqlite_conn = sqlite3.connect('abc')
except sqlite3.Error, e:
    print 'connect sqlite database failed.'
    sqlite_logger.error("connect sqlite database failed, ret = %s" % e.args[0])
So, what can I do next? I need to read abc.db and, if possible, output its contents directly to the terminal, because I need to analyse the data in this file. Is that possible? Thanks a lot!

With your sqlite_conn object you could run the following commands to list the tables in the database:
cur = sqlite_conn.cursor()
cur.execute("SELECT name FROM sqlite_master WHERE type='table'")
rows = cur.fetchall()
for row in rows:
    print(row[0])
Then you could run SELECT * FROM <tablename> for each of those tables. sqlite_master here is SQLite's built-in metadata table; it lists every table, index, view, and trigger in the database.
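For example, a minimal sketch that walks every table and prints its column names and rows to the terminal (reusing the sqlite_conn object from the question; table names are taken from the database itself) might look like this:
# reusing sqlite_conn from the question above
cur = sqlite_conn.cursor()
cur.execute("SELECT name FROM sqlite_master WHERE type='table'")
for (table_name,) in cur.fetchall():
    print('--- %s ---' % table_name)
    rows = sqlite_conn.execute('SELECT * FROM "%s"' % table_name)
    # cursor.description holds one entry per column of the last SELECT
    print([col[0] for col in rows.description])
    for row in rows:
        print(row)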

Using the command-line sqlite3 client, you can see the schema of an unknown db using:
.schema
then poke around a bit with SQL to get a better idea of the data within.
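The same information is also available from Python itself: the sql column of sqlite_master stores the CREATE statement for every object in the database, so a small sketch like the following (assuming the file is named abc.db) prints the whole schema:
import sqlite3

conn = sqlite3.connect('abc.db')
# each row of sqlite_master carries the CREATE statement that built the object
for name, sql in conn.execute("SELECT name, sql FROM sqlite_master WHERE sql IS NOT NULL"):
    print(sql)
conn.close()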

Related

Is there a function under pyodbc that can replace cursor.copy_expert

I have code that opens a CSV file and stores it in a database. I use SQL Server.
Once the file has been loaded into memory and processed, we want to store it in the database.
Under PostgreSQL we use the following code, but I need an equivalent for SQL Server:
# upload to db
SQL_STATEMENT = """
COPY %s FROM STDIN WITH
CSV
HEADER
DELIMITER AS ','
"""
cursor.copy_expert(sql=SQL_STATEMENT % tbl_name, file=my_file)
I have no idea how to change this code block for SQL Server without rewriting the rest of the code.
psycopg2 is a Postgres-specific DB-API implementation that provides extension methods such as copy_expert, copy_from, and copy_to, which are only supported by Postgres. pyodbc, by contrast, is a generalized DB-API that interfaces with any ODBC driver, including SQL Server, Teradata, MS Access, and even PostgreSQL ODBC drivers. Therefore, it is unlikely that an SQL Server-specific convenience command exists to replace copy_expert.
However, consider submitting an SQL Server-specific command such as BULK INSERT, which can read from flat files, via cursor.execute. The snippet below uses f-strings (introduced in Python 3.6) for string formatting:
# upload to db
SQL_STATEMENT = (
    f"BULK INSERT {tbl_name} "
    f"FROM '{my_file}' "
    "WITH (FORMAT='CSV');"
)
cur.execute(SQL_STATEMENT)
conn.commit()
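A fuller sketch of the pyodbc side might look like the following; the connection string, table name, and file path are placeholders to substitute with your own, and note that FORMAT='CSV' requires SQL Server 2017 or later and that the file path must be visible to the machine running SQL Server:
import pyodbc

# placeholder connection details -- substitute your own server/database/credentials
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver;DATABASE=mydb;UID=myuser;PWD=mypassword"
)
cur = conn.cursor()

tbl_name = "my_table"             # hypothetical target table
my_file = r"C:\data\my_file.csv"  # path as seen by the SQL Server machine

SQL_STATEMENT = (
    f"BULK INSERT {tbl_name} "
    f"FROM '{my_file}' "
    "WITH (FORMAT='CSV', FIRSTROW=2);"  # FIRSTROW=2 skips the CSV header row
)
cur.execute(SQL_STATEMENT)
conn.commit()
conn.close()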

pandas.to_csv() format issue (?) resulting in error when using psycopg2 copy_from

Overview:
I have a function in an external file that returns a dataframe that I then save to a csv with:
df.to_csv('filepath.csv', na_rep="0", index=False)
I then try to import the csv into a postgres table using the psycopg2 function copy_from:
try:
    connect = psycopg2.connect(database="", user="", password="", host="", port="")
except:
    print("Could not connect to database")
cursor = connect.cursor()
with open("filepath", 'r') as open_csv:
    next(open_csv)
    try:
        cursor.copy_from(open_csv, sep=",")
        connect.commit()
        print("Copy Complete")
    except:
        print("Copy Error")
cursor.close()
This results in a copy error exception in the code above (so no real detail) but there are some weird caveats:
For some reason, if I open the csv in LibreOffice and manually save it as a text CSV and then run just the above psycopg2 copy_from process, the copy works and there are no issues. So, for whatever reason, in the eyes of psycopg2's copy_from, something is off with the .to_csv() output that gets fixed if I just manually re-save the file. Manually saving the csv does not result in any visual changes, so what is happening here?
Also, the above psycopg2 code snippet works without error in another file in which all dataframe manipulation is contained within the single file where the .to_csv() call is made. So is something about returning a dataframe from a function in an external file off?
FWIW, when debugging, the issue came up on the .copy_from() line, so the issue has something to do with CSV formatting and I cannot figure it out. I found a workaround with sqlalchemy but would like to know what I am missing here instead of ignoring the problem.
In the postgres error log, the error is: "invalid input syntax for type integer: "-1.0". This error is occurring in the last column of my table where the value is set as an INT and in the csv, the value is -1 but it is being interpreted as -1.0. Where I am confused is that if I use a COPY query to directly input the csv file into postgres, it does not have a problem. Why does it interpret the value as -1.0 through psycopg2 but not directly in postgres?
This is my first post so if more detail is needed let me know - thanks in advance for the help
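One likely explanation (an assumption here, since the dataframe itself isn't shown): if the integer column contains any missing values, pandas stores it as float64, so to_csv writes -1.0 instead of -1, which Postgres then rejects for an INT column. A small sketch of the effect and one way around it:
import numpy as np
import pandas as pd

# a column holding a NaN is upcast to float64, so whole numbers gain a ".0"
df = pd.DataFrame({"flag": [-1, np.nan, 2]})
print(df["flag"].dtype)         # float64
print(df.to_csv(index=False))   # writes -1.0 and 2.0

# the nullable integer dtype (or filling NaNs and casting to int)
# keeps the values integral in the CSV output
df["flag"] = df["flag"].astype("Int64")
print(df.to_csv(index=False, na_rep="0"))   # writes -1, 0, 2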

How do I execute an SQLite script from within python?

My SQLite script works fine, when I type:
.read 'dummy.sql'
from within the SQLite shell.
However, the following Python code is not doing it properly. I'm getting a syntax error in line 5.
import sqlite3
db = sqlite3.connect('scheduling.db')
cursor=db.cursor()
a='''.read "scheduling.sql"'''
cursor.execute(a)
db.commit
db.close()
I know I'm doing something wrong with the quotes. How do I make this work?
The workaround I would recommend is to read the contents of the .sql file into a Python string variable, as you would read any other text file, and then call executescript. Unlike execute, executescript can execute many statements in one call. For example, it will work correctly if your .sql contains the following:
CREATE TABLE contacts (
    contact_id INTEGER PRIMARY KEY,
    first_name TEXT NOT NULL,
    last_name TEXT NOT NULL
);

INSERT INTO contacts (contact_id, first_name, last_name)
VALUES (1, 'John', 'Smith');
Here's the full Python snippet that you'll need:
import sqlite3

with open('scheduling.sql', 'r') as sql_file:
    sql_script = sql_file.read()

db = sqlite3.connect('scheduling.db')
cursor = db.cursor()
cursor.executescript(sql_script)
db.commit()
db.close()
You cannot. The sqlite3 program can be seen as split into two layers:
externally, it parses lines of input into SQL commands
internally, it passes those SQL commands to the engine
externally again, it displays the results of the SQL commands.
.read is a kind of meta-command: the parser opens the file and reads lines from it. AFAIK, nothing in the sqlite3 library can emulate that parser part, so you would have to do the line parsing into SQL statements by hand and then execute the statements one at a time.
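A minimal sketch of that by-hand approach (assuming the script contains only plain SQL statements separated by semicolons and no dot meta-commands) could look like:
import sqlite3

db = sqlite3.connect('scheduling.db')
cursor = db.cursor()

with open('scheduling.sql', 'r') as sql_file:
    script = sql_file.read()

# naive split on ';' -- fine for simple scripts, but it will break on
# semicolons inside string literals or trigger bodies
for statement in script.split(';'):
    statement = statement.strip()
    if statement:
        cursor.execute(statement)

db.commit()
db.close()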
Try this: you can read the query from the file using the open function, which replaces the .read functionality (SQL scripts are just text files containing queries), and then run read_sql_query.
import sqlite3
import pandas as pd

sqlite_file = 'scheduling.db'
conn = sqlite3.connect(sqlite_file)
c = conn.cursor()

# read the query text from the file; read_sql_query expects a single SELECT
with open('scheduling.sql', 'r') as f:
    sql = f.read()

print(pd.read_sql_query(sql, conn))

Connect / Store sqlite3 database in specified directory other than default -- "conn = sqlite3.connect('name_of_database')"

I'm working with an SQLite3 database, trying to access its data from a different directory than the one in which it was originally created.
The Python script (test case) that I run through our Squish GUI Automation IDE is located in the directory
C:\Squish_Automation_Functional_nVP2\suite_Production_Checklist\tst_dashboard_functional_setup
There I create a database with the following table, inside that same script:
def get_var_value_pass():
    conn = sqlite3.connect('test_result.db')
    c = conn.cursor()
    c.execute('CREATE TABLE IF NOT EXISTS test_result_pass (pass TEXT, result INTEGER)')
    c.execute("INSERT INTO test_result_pass VALUES('Pass', 0)")
    conn.commit()
    c.execute("SELECT * FROM test_result_pass WHERE pass='Pass'")
    pass_result = c.fetchone()
    test.log(str(pass_result[1]))
    test_result = pass_result[1]
    c.close()
    conn.close()
    return test_result
Now I'd like to access the same database, which I created earlier with "conn = sqlite3.connect('test_result.db')", inside another test case located in a different directory:
C:\Squish_Automation_Functional_nVP2\suite_Production_Checklist\tst_Calling_In-Call-Options_Text_Share-Text_Update-Real-Time
The problem is, when I try a SELECT statement against the same database from inside a different script (test case), like so:
def get_var_value_pass(pass_value):
    conn = sqlite3.connect('test_result.db')
    c = conn.cursor()
    c.execute("SELECT * FROM test_result_pass WHERE pass='Pass'")
my test fails as soon as the c.execute() statement runs, because the table can't be found. Instead, the most recent "conn = sqlite3.connect('test_result.db')" has just created a new, empty database rather than referring to my original one. Therefore, I've concluded that I should store my original database where both test cases can use it as a test-suite resource, basically inside another directory that the other test cases can reference. Ideally here:
C:\Squish_Automation_Functional_nVP2\suite_Production_Checklist
Is there a sqlite3 function that lets you declare the place where you'd like to Connect / Store your database? Similar to sqlite3.connect('test_result.db') except you define its path?
BTW, I have tried the second snippet of code inside the same test script and it runs perfectly. I have also tried an in-memory approach with sqlite3.connect(':memory:'), still no luck. Please help! Thanks.
It's not clear to me that you received an answer. You have a number of options. I happen to have an SQLite database stored in one directory which I can open from that or any other directory by specifying its path in one of the following ways.
import sqlite3
conn = sqlite3.connect(r'C:\Television\programs.sqlite')
conn.close()
print('Hello')
conn = sqlite3.connect('C:\\Television\\programs.sqlite')
conn.close()
print('Hello')
conn = sqlite3.connect('C:/Television/programs.sqlite')
conn.close()
print('Hello')
All three connection attempts succeed; I see three Hellos as output. The three variants are:
Stick an r ahead of the string to make it a raw string literal (see the documentation on string literals for the reason).
Replace each backslash with a pair of backslashes.
Replace each backslash with a forward slash.
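If you want both test cases to resolve the same file without hard-coding an absolute path, another hedged option, assuming the suite directory C:\Squish_Automation_Functional_nVP2\suite_Production_Checklist is the parent of each test-case folder and that __file__ is available in your Squish Python environment, is to build the path relative to the script itself:
import os
import sqlite3

# __file__ is the current test script; its parent's parent is assumed to be
# the suite_Production_Checklist folder shared by all test cases
suite_dir = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
db_path = os.path.join(suite_dir, 'test_result.db')

conn = sqlite3.connect(db_path)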

Python - printing the SQL statement record count in the log file

I am currently using a Python program to insert records, with the statement below. The issue is that when I try to print the number of records inserted into the log file, it only prints 0, although I can see the inserted record count on the console while running the program. Can you help me print the record count in the log file?
I also know that redirecting the Python program's output to > file would capture the record count, but I want to bring all the details into the same log file right after the insert statement is done, as I am using a loop for different statements.
log="/fs/logfile.txt"
log_file = open(log,'w')
_op = os.system('psql ' + db_host_connection + ' -c "insert into emp select * from emp1;"')
print date , "printing" , _op
You should probably switch to a "proper" python module for postgresql interactions.
Haven't used postgresql in python before, but one of the first search engine hits leads to:
http://initd.org/psycopg/docs/usage.html
You could then do something along the following lines:
import psycopg2
conn = psycopg2.connect("dbname=test user=postgres")
# create a cursor for interaction with the database
cursor = conn.cursor()
# execute your sql statement
cursor.execute("insert into emp select * from emp1")
# retrieve the number of selected rows
number_rows_inserted = cursor.rowcount
# commit the changes
conn.commit()
This should also make things significantly faster than using os.system calls, especially if you're planning to execute multiple statements.
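To get that count into the asker's log file rather than only on the console, a small follow-on sketch (reusing the log path from the question; the message format is just an example) could be:
from datetime import date

log = "/fs/logfile.txt"

# append the rowcount reported by the cursor to the same log file
with open(log, 'a') as log_file:
    log_file.write("%s inserted %d rows into emp\n" % (date.today(), number_rows_inserted))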
