I want to extract table information from an SQLite file.
I could list all the table names by following this page, and I tried to extract table information using the query method on the session instance. But I got the following error.
sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) no such column: ComponentSizes [SQL: 'SELECT ComponentSizes']
Does anyone know how I should revise the following code in order to extract a table by specifying the table name?
class read():
    def __init__(self, path):
        engine = create_engine("sqlite:///" + sqlFile)
        inspector = inspect(engine)
        for table_name in inspector.get_table_names():
            for column in inspector.get_columns(table_name):
                #print("Column: %s" % column['name'])
                print(table_name + " : " + column['name'])
        Session = sessionmaker(bind=engine)
        self.session = Session()

    def getTable(self, name):
        table = self.session.query(name).all()
        return table

if __name__ == '__main__':
    test = read(sqlFile)
    test.getTable('ComponentSizes')
The error you are getting is suggestive of what is going wrong: your code translates into the SQL SELECT ComponentSizes, which is incomplete, because session.query() interprets a bare string as a column expression, not a table name. It's not clear what your end goal is. If you want to extract the contents of a table into CSV, you could do this:
import sqlite3
import csv

con = sqlite3.connect('mydatabase.db')
cursor = con.execute('select * from ComponentSizes')

with open('mydump.csv', 'w', newline='') as outfile:  # on Python 2, use open('mydump.csv', 'wb') instead
    outcsv = csv.writer(outfile)
    # dump column titles (optional)
    outcsv.writerow(x[0] for x in cursor.description)
    # dump rows
    outcsv.writerows(cursor.fetchall())
Otherwise, if you want the contents of the table in a pandas DataFrame for further analysis, you could do this:
import sqlite3
import pandas as pd
# Create your connection.
cnx = sqlite3.connect('file.db')
df = pd.read_sql_query("SELECT * FROM ComponentSizes", cnx)
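If you specifically want to keep a getTable-style method that takes a table name, one approach is to reflect the table instead of passing a string to session.query(). This is a minimal sketch, not from the original answer, assuming SQLAlchemy 1.3+ and a placeholder database path:

from sqlalchemy import create_engine, MetaData, Table

engine = create_engine("sqlite:///file.db")  # placeholder path
meta = MetaData()
# Reflect the table definition from the database by name
component_sizes = Table("ComponentSizes", meta, autoload_with=engine)
with engine.connect() as conn:
    rows = conn.execute(component_sizes.select()).fetchall()
print(rows)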
Hope it helps. Happy coding!
Related
I have a database file (.db) in SQLite3 format, and I was attempting to open it to look at the data inside it. Below is my attempt at the code, using Python.
import sqlite3

# Create a SQL connection to our SQLite database
con = sqlite3.connect(dbfile)
cur = con.cursor()

# The result of a "cursor.execute" can be iterated over by row
for row in cur.execute("SELECT * FROM "):
    print(row)

# Be sure to close the connection
con.close()
For the line "SELECT * FROM ", I understand that you have to put the name of the table after the word "FROM"; however, since I can't even open up the file in the first place, I have no idea what table name to put. How can I write the code so that I can open the database file and read its contents?
So, you analyzed it all right: after the FROM you have to put in the table names. You can find them out like this:
SELECT name FROM sqlite_master WHERE type = 'table'
In code this looks like this:
# loading in modules
import sqlite3
# creating file path
dbfile = '/home/niklas/Desktop/Stuff/StockData-IBM.db'
# Create a SQL connection to our SQLite database
con = sqlite3.connect(dbfile)
# creating cursor
cur = con.cursor()
# reading all table names
table_list = [a for a in cur.execute("SELECT name FROM sqlite_master WHERE type = 'table'")]
# here is your table list
print(table_list)
# Be sure to close the connection
con.close()
That worked very well for me. You have already done the reading of the data correctly; just paste in the table names.
If you want to see the data as a pandas DataFrame for visual analysis, the approach below can also be used.
import pandas as pd
import sqlite3

try:
    conn = sqlite3.connect("file.db")
except Exception as e:
    print(e)

# Now, in order to read the table into a pandas DataFrame, we need to know the table name
cursor = conn.cursor()
cursor.execute("SELECT name FROM sqlite_master WHERE type='table';")
print(f"Table Name : {cursor.fetchall()}")

df = pd.read_sql_query('SELECT * FROM Table_Name', conn)  # replace Table_Name with one of the names printed above
conn.close()
Another option is to reflect the table with SQLAlchemy, for example inside a Flask app:

from flask import Flask
app = Flask(__name__)

from sqlalchemy import create_engine, select, MetaData, Table
from sqlalchemy.sql import and_, or_

# SQLite connection URLs take a file path; there is no username/password/host
engine = create_engine('sqlite:///databasename.db')

class UserModel():
    def __init__(self):
        try:
            self.meta = MetaData()
            self.users = Table("users", self.meta, autoload=True, autoload_with=engine)
        except Exception as e:
            print(e)

    def get(self):
        stmt = select([self.users.c.name, self.users.c.email, self.users.c.password])
        print(stmt)
        result = engine.execute(stmt)
        temp = [dict(r) for r in result] if result else None
        print(temp)
        return temp
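A hypothetical usage, assuming a users table with name, email, and password columns exists in the database:

um = UserModel()
users = um.get()  # a list of dicts, one per row, or None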
import mysql.connector

def view_empdetails():  # this is my function: it works great
    conn = mysql.connector.connect(host="localhost", user="root", passwd="#####", database="#DB")
    cursor = conn.cursor()  # this is the database connection
    viw = """select * from employees"""
    cursor.execute(viw)
    # fetch all data from the employees table in the DB
    for emp_no, first_name, last_name, gender, DOB, street, city, state, zipcode, email, phone, hire_date in cursor.fetchall():
        print('-' * 50)
        print(emp_no)
        print(first_name)
        print(last_name)
        print(gender)
        print(DOB)
        print(street)
        print(city)
        print(state)    # I need all these outputs to be in a table or organized format,
        print(zipcode)  # not only a list of records
        print(email)
        print(phone)
        print(hire_date)
        print('-' * 50)
    conn.commit()
    conn.close()
    return menu2()
I need all the records in one table. The code brings the data from the database line by line, without formatting; I need it in a table.
I'm not sure if you are familiar with the pandas library, but I believe it is helpful here. I have never used it with MySQL, but I have used it with psycopg2 and pyodbc, so the basic idea should work:
data = pd.DataFrame(cur.fetchall(), columns=colnames)

This creates a DataFrame (think Python spreadsheet or Python table) that uses the column names from the table you're querying.
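The snippet above assumes a colnames variable. A minimal sketch of building it from the cursor in the question's code, using the DB API's cursor.description (the column name is the first item of each entry):

import pandas as pd

cursor.execute("select * from employees")
colnames = [desc[0] for desc in cursor.description]
data = pd.DataFrame(cursor.fetchall(), columns=colnames)
print(data)  # pandas prints the rows as an aligned table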
I'm a fresh user of Python. I'm working with SQL, and I would like to migrate data from an Excel file to SQL.
Here is my code:
import xlrd
import sqlite3

conn = sqlite3.connect("student.db")
crs = conn.cursor()

file_path = "student.xlsx"
file = xlrd.open_workbook(file_path)
sheet = file.sheet_by_index(0)
data = [[sheet.cell_value(r, c) for c in range(sheet.ncols)] for r in range(sheet.nrows)]

for i in range(sheet.ncols):
    colm = data[0][i]
    crs.execute(("CREATE TABLE IF NOT EXISTS student([%s] VARCHAR(100))") % colm)
First, I tried to create the table using the columns in the Excel file, but I could not do it. After that, I would like to insert each row into those columns. I am probably doing something wrong somewhere, but I could not figure out where.
Why don't you just add the whole table to your SQLite database?
import pandas as pd
import sqlite3

conn = sqlite3.connect('student.db')
cur = conn.cursor()
# include the path of the file if it is not in your working directory
file = pd.read_excel("student.xlsx")
file.to_sql("yourtablename", conn, if_exists="replace")
conn.commit()
'yourtablename' should be replaced by whatever name you want to give the table in your database. You can then use the code below to verify that your table is in the database:
con = sqlite3.connect('student.db')
cur = con.cursor()
cur.execute("SELECT name FROM sqlite_master WHERE type='table' ORDER BY name;")
print(cur.fetchall())
import xlrd
import sqlite3
import pandas as pd

conn = sqlite3.connect("student.db")
crs = conn.cursor()

file_path = "C:/Users/LANDGIS10/PycharmProjects/sqlite/student.xlsx"
file = xlrd.open_workbook(file_path)
sheet = file.sheet_by_index(0)
data = [[sheet.cell_value(r, c) for c in range(sheet.ncols)] for r in range(sheet.nrows)]

df = pd.DataFrame(data)
df.to_sql("student1", conn, if_exists="replace")
conn.commit()
crs.fetchall()
However, it is not what I want, as the Excel columns are migrated into SQLite as a data row instead of as the table header. Here is the result:
"0" "ID" "NO" "NAME" "SURNAME"
"1" "1.0" "1234.0" "Ali" "ARSA"
"2" "2.0" "23234.0" "Mehmet" "KOT"
"3" "3.0" "234412.0" "Adem" "SAR"
I want the "ID" "NO" "NAME" "SURNAME" part to be the table header.
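One possible fix (a sketch, not from the original thread): make pandas use the first row as the column names, either by slicing the data list built above or by reading the sheet directly with pd.read_excel:

import sqlite3
import pandas as pd

conn = sqlite3.connect("student.db")

# Option 1: build the DataFrame with the first row as the header
# df = pd.DataFrame(data[1:], columns=data[0])

# Option 2: let pandas read the sheet; row 0 becomes the header by default
df = pd.read_excel("student.xlsx")

df.to_sql("student1", conn, if_exists="replace", index=False)
conn.commit()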
Using Python, I am trying to import a CSV into an SQLite table and use the headers in the CSV file as the headers in the SQLite table. The code runs, but the table "MyTable" does not appear to be created. Here is the code:
with open('dict_output.csv', 'r') as f:
    reader = csv.reader(f)
    columns = next(reader)
    # Strips whitespace in header
    columns = [h.strip() for h in columns]
    #reader = csv.DictReader(f, fieldnames=columns)
    for row in reader:
        print(row)
    con = sqlite3.connect("city_spec.db")
    cursor = con.cursor()
    # Inserts data from csv into table in sql database.
    query = 'insert into MyTable({0}) values ({1})'
    query = query.format(','.join(columns), ','.join('?' * len(columns)))
    print(query)
    cursor = con.cursor()
    for row in reader:
        cursor.execute(query, row)
    #cursor.commit()
    con.commit()
con.close()
Thanks in advance for any help.
You can use Pandas to make this easy (you may need to pip install pandas first):
import sqlite3
import pandas as pd
# load data
df = pd.read_csv('dict_output.csv')
# strip whitespace from headers
df.columns = df.columns.str.strip()
con = sqlite3.connect("city_spec.db")
# drop data into database
df.to_sql("MyTable", con)
con.close()
Pandas will do all of the hard work for you, including creating the actual table!
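One detail worth knowing: by default, to_sql also writes the DataFrame's index as an extra column, and it raises an error if the table already exists. Both behaviors are controlled by keyword arguments:

df.to_sql("MyTable", con, if_exists="replace", index=False)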
You haven't marked your question as solved yet, so here goes.
Connect to the database just once, and create a cursor just once.
You can read the CSV records only once.
I've added code that creates a crude form of the database table based on the column names alone. Again, this is done just once, guarded by a flag.
Your insertion code works fine.
import sqlite3
import csv

con = sqlite3.connect("city_spec.sqlite")  ## these statements belong outside the loop
cursor = con.cursor()                      ## execute them just once

first = True
with open('dict_output.csv', 'r') as f:
    reader = csv.reader(f)
    columns = next(reader)
    columns = [h.strip() for h in columns]

    if first:
        sql = 'CREATE TABLE IF NOT EXISTS MyTable (%s)' % ', '.join(['%s text' % column for column in columns])
        print(sql)
        cursor.execute(sql)
        first = False

    #~ for row in reader:  ## we will read the rows later in the loop
        #~ print(row)

    query = 'insert into MyTable({0}) values ({1})'
    query = query.format(','.join(columns), ','.join('?' * len(columns)))
    print(query)

    for row in reader:
        cursor.execute(query, row)

con.commit()
con.close()
You can also do it easily with the peewee ORM. For this you only need an extension of peewee, the playhouse.csv_loader:
from playhouse.csv_loader import *
db = SqliteDatabase('city_spec.db')
Test = load_csv(db, 'dict_output.csv')
This creates the database city_spec.db with the headers as fields and the data from dict_output.csv.
If you don't have peewee you can install it with
pip install peewee
I have an SQLite 3 and/or MySQL table named "clients".
Using Python 2.6, how do I create a CSV file named Clients100914.csv with headers?
excel dialect...
The SQL execute select * only gives the table data, but I would like the complete table with headers.
How do I create a record set to get the table headers? The table headers should come directly from SQL, not be written in Python.
w = csv.writer(open(Fn,'wb'),dialect='excel')
#w.writelines("header_row")
#Fetch into sqld
w.writerows(sqld)
This code leaves me with the file open and no headers. I also can't figure out how to use the file as a log.
import csv
import sqlite3
from glob import glob; from os.path import expanduser
conn = sqlite3.connect( # open "places.sqlite" from one of the Firefox profiles
glob(expanduser('~/.mozilla/firefox/*/places.sqlite'))[0]
)
cursor = conn.cursor()
cursor.execute("select * from moz_places;")
with open("out.csv", "w", newline='') as csv_file: # Python 3 version
#with open("out.csv", "wb") as csv_file: # Python 2 version
csv_writer = csv.writer(csv_file)
csv_writer.writerow([i[0] for i in cursor.description]) # write headers
csv_writer.writerows(cursor)
PEP 249 (DB API 2.0) has more information about cursor.description.
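For reference, PEP 249 specifies that cursor.description is a sequence of 7-item sequences, one per result column, and that the first item of each is the column name; that is why [i[0] for i in cursor.description] produces the header row.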
Using the csv module is very straightforward and made for this task.
import csv

with open("out.csv", 'w', newline='') as f:  # newline='' avoids blank lines on Windows
    writer = csv.writer(f)
    writer.writerow(['name', 'address', 'phone', 'etc'])
    writer.writerow(['bob', '2 main st', '703', 'yada'])
    writer.writerow(['mary', '3 main st', '704', 'yada'])
Creates exactly the format you're expecting.
You can easily create it manually, writing a file with a chosen separator. You can also use the csv module.
If it's from a database, you can also just use a query from your sqlite client:
sqlite3 <db params> < queryfile.sql > output.csv
This will create a file using your client's configured separator (pipe-separated by default for sqlite3).
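For a proper CSV with a header row, the sqlite3 command-line shell's dot-commands can also do it directly (the database, table, and file names here are examples):

sqlite3 mydatabase.db
.headers on
.mode csv
.output Clients100914.csv
SELECT * FROM clients;
.output stdout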
How to extract the column headings from an existing table:
You don't need to parse an SQL "create table" statement. This is fortunate, as the "create table" syntax is neither nice nor clean, it is warthog-ugly.
You can use the table_info pragma. It gives you useful information about each column in a table, including the name of the column.
Example:
>>> #coding: ascii
... import sqlite3
>>>
>>> def get_col_names(cursor, table_name):
... results = cursor.execute("PRAGMA table_info(%s);" % table_name)
... return [row[1] for row in results]
...
>>> def wrong_way(cur, table):
... import re
... cur.execute("SELECT sql FROM sqlite_master WHERE name=?;", (table, ))
... sql = cur.fetchone()[0]
... column_defs = re.findall("[(](.*)[)]", sql)[0]
... first_words = (line.split()[0].strip() for line in column_defs.split(','))
... columns = [word for word in first_words if word.upper() != "CONSTRAINT"]
... return columns
...
>>> conn = sqlite3.connect(":memory:")
>>> curs = conn.cursor()
>>> _ignored = curs.execute(
... "create table foo (id integer, name text, [haha gotcha] text);"
... )
>>> print get_col_names(curs, "foo")
[u'id', u'name', u'haha gotcha']
>>> print wrong_way(curs, "foo")
[u'id', u'name', u'[haha'] <<<<<===== WHOOPS!
>>>
Other problems with the now-deleted "parse the create table SQL" answer:
Stuffs up with e.g. create table test (id1 text, id2 int, msg text, primary key(id1, id2)) ... needs to ignore not only CONSTRAINT, but also keywords PRIMARY, UNIQUE, CHECK and FOREIGN (see the create table docs).
Needs to specify re.DOTALL in case there are newlines in the SQL.
In line.split()[0].strip() the strip is redundant.
This is simple and works fine for me.
Let's say you have already connected to your database and also have a cursor object. So, following on from that point:
import csv
import sqlite3

conn.row_factory = sqlite3.Row  # make rows behave like dicts; set this before creating the cursor
curs = conn.cursor()
curs.execute("select * from orders")
rows = [dict(row) for row in curs.fetchall()]

with open("mycsvfile.csv", "w", newline='') as f:  # use "wb" on Python 2
    w = csv.DictWriter(f, rows[0].keys())
    w.writeheader()
    w.writerows(rows)
Unless I'm missing something, you just want to do something like so:

f = open("somefile.csv", "w")
f.write("header_row\n")
# logic to write lines to the file (you may need to organize values and add commas or pipes etc.)
f.close()
It can be done easily using pandas and sqlite3, as an extension of the answer from Cristian Ciupitu.
import pandas as pd
import sqlite3
from glob import glob; from os.path import expanduser

conn = sqlite3.connect(glob(expanduser('data/clients_data.sqlite'))[0])
cursor = conn.cursor()
Now use pandas to read the table and write it to CSV:
clients = pd.read_sql('SELECT * FROM clients' ,conn)
clients.to_csv('data/Clients100914.csv', index=False)
This is more direct and works all the time.
The code below works for Oracle with Python 3.6:
import cx_Oracle
import csv
# Create tns
dsn_tns = cx_Oracle.makedsn('<host>', '<port>', service_name='<service_name>')
# Connect to the DB using user, password and TNS settings
conn = cx_Oracle.connect(user='<user>', password='<pass>', dsn=dsn_tns)
c = conn.cursor()
#Execute the Query
c.execute("select * from <table>")
# Write results into CSV file
with open("<file>.csv", "w", newline='') as csv_file:
csv_writer = csv.writer(csv_file)
csv_writer.writerow([i[0] for i in c.description]) # write headers
csv_writer.writerows(c)