I'm a new user of Python, working with SQL. I would like to migrate data from an Excel file to SQL.
Here is my code:
import xlrd
import sqlite3
conn = sqlite3.connect("student.db")
crs = conn.cursor()
file_path = "student.xlsx"
file = xlrd.open_workbook(file_path)
sheet = file.sheet_by_index(0)
data = [[sheet.cell_value(r, c) for c in range(sheet.ncols)] for r in range(sheet.nrows)]
for i in range(sheet.ncols):
    colm = data[0][i]
    crs.execute("CREATE TABLE IF NOT EXISTS student([%s] VARCHAR(100))" % colm)
First, I tried to create the table using the columns from the Excel file, but I could not get it to work. After that, I wanted to insert each row into those columns. I am probably doing something wrong somewhere, but I cannot figure out what.
Why don't you just add the whole table to your SQLite database?
import pandas as pd
import sqlite3

conn = sqlite3.connect('student.db')
cur = conn.cursor()
# Read the Excel file into a pandas DataFrame.
# Include the path of the file if it is not in your working directory.
file = pd.read_excel("student.xlsx")
file.to_sql("yourtablename", conn, if_exists="replace")
conn.commit()
'yourtablename' should be replaced by whatever name you want to give the table in your database. You can then use the code below to verify that your table is in the database:
con = sqlite3.connect('student.db')
cur = con.cursor()
cur.execute("SELECT name FROM sqlite_master WHERE type='table' ORDER BY name;")
cur.fetchall()
import xlrd
import sqlite3
import pandas as pd
conn = sqlite3.connect("student.db")
crs = conn.cursor()
file_path = "C:/Users/LANDGIS10/PycharmProjects/sqlite/student.xlsx"
file = xlrd.open_workbook(file_path)
sheet = file.sheet_by_index(0)
data = [[sheet.cell_value(r, c) for c in range(sheet.ncols)] for r in range(sheet.nrows)]
df = pd.DataFrame(data)
df.to_sql("student1", conn, if_exists="replace")
conn.commit()
crs.fetchall()
However, it is not what I want: the column headers from Excel are migrated into SQLite as a data row instead of as the table head. Here is the result:
"0" "ID" "NO" "NAME" "SURNAME"
"1" "1.0" "1234.0" "Ali" "ARSA"
"2" "2.0" "23234.0" "Mehmet" "KOT"
"3" "3.0" "234412.0" "Adem" "SAR"
I want the "ID" "NO" "NAME" "SURNAME" part to be the table head.
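One way to get that (a sketch, assuming, as in the output above, that the first row of the sheet holds the column names): split the header row off before building the DataFrame, and pass index=False so the pandas index doesn't become an extra column. An in-memory database stands in for student.db here:

```python
import sqlite3
import pandas as pd

# rows as read from the sheet; the first row is the header
data = [["ID", "NO", "NAME", "SURNAME"],
        [1.0, 1234.0, "Ali", "ARSA"],
        [2.0, 23234.0, "Mehmet", "KOT"]]

conn = sqlite3.connect(":memory:")  # stands in for student.db
df = pd.DataFrame(data[1:], columns=data[0])
df.to_sql("student1", conn, if_exists="replace", index=False)

# the column names are now the table head, not a data row
cols = [row[1] for row in conn.execute("PRAGMA table_info(student1)")]
print(cols)
```

With a real file, pd.read_excel("student.xlsx") gives the same result directly, since read_excel treats the first row as the header by default.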
Related
I was trying to read some data from a text file and write it into a SQL Server table using the pandas module and a FOR loop. Below is my code:
import pandas as pd
import pyodbc
driver = '{SQL Server Native Client 11.0}'
conn = pyodbc.connect(
    Trusted_Connection = 'Yes',
    Driver = driver,
    Server = '***********',
    Database = 'Sullins_Data'
)

def createdata():
    cursor = conn.cursor()
    cursor.execute(
        'insert into Sullins_Datasheet(Part_Number,Web_Link) values(?,?);',
        (a, j))
    conn.commit()
a = pd.read_csv('check9.txt',header=None, names=['Part_Number','Web_Links'] ) # 2 Columns, 8 rows
b = pd.DataFrame(a)
p_no = (b['Part_Number'])
w_link = (b['Web_Links'])
# print(p_no)
for i in p_no:
    a = i
for l in w_link:
    j = l
createdata()
As you can see from the code, I created two variables, a and j, to hold the values of the two columns of the text file one by one and write them into the SQL table.
But after running the code I got only the last row's values in the table, out of 8 rows.
When I call the createdata function inside the w_link for loop, it writes duplicate values into the table.
Please suggest where I am going wrong.
Here is a sample of how your code is working:
a = 0
b = 0
ptr=['s','d','f','e']
pt=['a','b','c','d']
for i in ptr:
    a = i
    print(a, end='')
for j in pt:
    b = j
    print(b, end='')
I have a database file (.db) in SQLite3 format and I was attempting to open it to look at the data inside. Below is my attempt at the code in Python:
import sqlite3
# Create a SQL connection to our SQLite database
con = sqlite3.connect(dbfile)
cur = con.cursor()
# The result of a "cursor.execute" can be iterated over by row
for row in cur.execute("SELECT * FROM "):
    print(row)
# Be sure to close the connection
con.close()
For the line ("SELECT * FROM "), I understand that you have to put the name of the table after the word "FROM"; however, since I can't even open the file in the first place, I have no idea what name to put. How can I write the code so that I can open the database file and read its contents?
You analyzed it correctly: after the FROM you have to put the table name. You can find the table names like this:
SELECT name FROM sqlite_master WHERE type = 'table'
In code this looks like this:
# loading in modules
import sqlite3
# creating file path
dbfile = '/home/niklas/Desktop/Stuff/StockData-IBM.db'
# Create a SQL connection to our SQLite database
con = sqlite3.connect(dbfile)
# creating cursor
cur = con.cursor()
# reading all table names
table_list = [a for a in cur.execute("SELECT name FROM sqlite_master WHERE type = 'table'")]
# here is you table list
print(table_list)
# Be sure to close the connection
con.close()
That worked very well for me. You have already written the reading of the data correctly; just paste in the table names.
If you want to see the data as a pandas DataFrame for visual analysis, the approach below can also be used:
import pandas as pd
import sqlite3
try:
    conn = sqlite3.connect("file.db")
except Exception as e:
    print(e)
#Now in order to read in pandas dataframe we need to know table name
cursor = conn.cursor()
cursor.execute("SELECT name FROM sqlite_master WHERE type='table';")
print(f"Table Name : {cursor.fetchall()}")
df = pd.read_sql_query('SELECT * FROM Table_Name', conn)
conn.close()
from flask import Flask
app = Flask(__name__)
from sqlalchemy import create_engine, select, MetaData, Table
from sqlalchemy.sql import and_, or_

# SQLite URLs take a file path, not username/password/host
engine = create_engine('sqlite:///databasename.db')

class UserModel():
    def __init__(self):
        try:
            self.meta = MetaData()
            self.users = Table("users", self.meta, autoload=True, autoload_with=engine)
        except Exception as e:
            print(e)

    def get(self):
        stmt = select([self.users.c.name, self.users.c.email, self.users.c.password])
        print(stmt)
        result = engine.execute(stmt)
        temp = [dict(r) for r in result] if result else None
        print(temp)
        return temp
I tried to fill a SQL Server table using Python by executing the script below:
import pyodbc
import pandas as pd
from pandas import ExcelWriter
from pandas import ExcelFile
df = pd.read_excel('C:/Users/Username/Desktop/file1.xlsx', sheet_name='Sheet1')
cnxn = pyodbc.connect("Driver={SQL Server Native Client 11.0};"
                      "Server=MYSERVERNAME;"
                      "Database=DB;"
                      "uid=sa;pwd=MYPWD;"
                      "Trusted_Connection=yes;")
print("Column headings:")
print(df.columns)
'''
for i in df.index:
    print(df['Last Name'][i], df['First Name'][i])
'''
cursor = cnxn.cursor()
for i in df.index:
    cursor.execute("insert into pyperson (id,firstname,lastname) values (df['ID'][i],df['First Name'][i],df['Last Name'][i])")
    cnxn.commit()
PS:
If I only read the data from the Excel file and then print it, it works fine.
If I insert directly with an INSERT INTO statement using Python, it also works fine.
But when I combine them, it shows me the error message below:
IndentationError: expected an indented block
Any ideas? Any help :)
I am using the following code to add data from a txt file to SQL Server using Python; hope that helps:
import pymssql
host = 'YourHostName'
username = 'USERNAME'
password = 'PASSWORD'
database = 'TestDB'
conn = pymssql.connect(host, username, password, database)
cursor = conn.cursor()
cursor.execute("Delete from color_type")
with open("Your file path\\filename.csv", "r") as ins:
    array = []
    for line in ins:
        array.append(line)
        data = line.split('|')
        fst = data[0]
        lst = data[1]
        cursor.execute("insert into color_type values(%s, %s)", (fst, lst))

cursor.execute("select * from color_type")
rows = cursor.fetchall()
conn.commit()
print(rows)
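A slightly tidier variant of the same loop (a sketch, assuming a pipe-delimited file with two fields per line; the file name and table are hypothetical stand-ins) collects the rows first and inserts them in one call with executemany:

```python
# Parse pipe-delimited lines and batch the rows for insertion.
rows = []
lines = ["red|warm\n", "blue|cool\n"]  # stand-in for open("filename.csv")
for line in lines:
    fst, lst = line.strip().split('|')
    rows.append((fst, lst))

# with a real connection you would then run:
# cursor.executemany("insert into color_type values (%s, %s)", rows)
print(rows)
```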
I want to extract table information from an SQLite file.
I could list all the table names by following this page, and I tried to extract the table contents using the query method on the session instance, but I got the following error:
sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) no such column: ComponentSizes [SQL: 'SELECT ComponentSizes']
Does anyone know how I should revise the following code so that it extracts a table by specifying the table name?
class read():
    def __init__(self, path):
        engine = create_engine("sqlite:///" + sqlFile)
        inspector = inspect(engine)
        for table_name in inspector.get_table_names():
            for column in inspector.get_columns(table_name):
                #print("Column: %s" % column['name'])
                print(table_name + " : " + column['name'])
        Session = sessionmaker(bind=engine)
        self.session = Session()

    def getTable(self, name):
        table = self.session.query(name).all()
        return table
if __name__ == '__main__':
test=read(sqlFile)
test.getTable('ComponentSizes')
The error you are getting is suggestive of what is going wrong: your code translates into the SQL statement SELECT ComponentSizes, which is incomplete. It's not clear what your end goal is. If you want to extract the contents of a table into a CSV, you could do this:
import sqlite3
import csv

con = sqlite3.connect('mydatabase.db')
outfile = open('mydump.csv', 'w', newline='')
outcsv = csv.writer(outfile)
cursor = con.execute('select * from ComponentSizes')
# dump column titles (optional)
outcsv.writerow(x[0] for x in cursor.description)
# dump rows
outcsv.writerows(cursor.fetchall())
outfile.close()
Otherwise, if you want the contents of the table in a pandas DataFrame for further analysis, you could do this:
import sqlite3
import pandas as pd
# Create your connection.
cnx = sqlite3.connect('file.db')
df = pd.read_sql_query("SELECT * FROM ComponentSizes", cnx)
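If the goal is instead to keep the original getTable idea of querying by name, a plain-sqlite3 sketch is to validate the name against sqlite_master before interpolating it into the SQL text, since table names cannot be bound as ? parameters. The table below is a hypothetical stand-in created for the demo:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE ComponentSizes (name TEXT, size INTEGER)")
con.execute("INSERT INTO ComponentSizes VALUES ('widget', 3)")

def get_table(con, name):
    # Look the name up in sqlite_master first, so only a real
    # table name is ever placed into the SQL string.
    known = {r[0] for r in con.execute(
        "SELECT name FROM sqlite_master WHERE type='table'")}
    if name not in known:
        raise ValueError("no such table: %s" % name)
    return con.execute("SELECT * FROM %s" % name).fetchall()

rows = get_table(con, "ComponentSizes")
print(rows)  # [('widget', 3)]
```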
Hope it helps. Happy coding!
Using Python, I am trying to import a CSV into an SQLite table and use the headers in the CSV file as the headers of the SQLite table. The code runs, but the table "MyTable" does not appear to be created. Here is the code:
with open('dict_output.csv', 'r') as f:
    reader = csv.reader(f)
    columns = next(reader)
    #Strips white space in header
    columns = [h.strip() for h in columns]
    #reader = csv.DictReader(f, fieldnames=columns)
    for row in reader:
        print(row)
        con = sqlite3.connect("city_spec.db")
        cursor = con.cursor()
        #Inserts data from csv into table in sql database.
        query = 'insert into MyTable({0}) values ({1})'
        query = query.format(','.join(columns), ','.join('?' * len(columns)))
        print(query)
        cursor = con.cursor()
        for row in reader:
            cursor.execute(query, row)
        #cursor.commit()
    con.commit()
    con.close()
Thanks in advance for any help.
You can use Pandas to make this easy (you may need to pip install pandas first):
import sqlite3
import pandas as pd
# load data
df = pd.read_csv('dict_output.csv')
# strip whitespace from headers
df.columns = df.columns.str.strip()
con = sqlite3.connect("city_spec.db")
# drop data into database
df.to_sql("MyTable", con)
con.close()
Pandas will do all of the hard work for you, including create the actual table!
You haven't marked your answer solved yet so here goes.
Connect to the database just once, and create a cursor just once.
You can read the csv records only once.
I've added code that creates a crude form of the database table based on the column names alone. Again, this is done just once in the loop.
Your insertion code works fine.
import sqlite3
import csv
con = sqlite3.connect("city_spec.sqlite")  ## these statements belong outside the loop
cursor = con.cursor()                      ## execute them just once

first = True
with open('dict_output.csv', 'r') as f:
    reader = csv.reader(f)
    columns = next(reader)
    columns = [h.strip() for h in columns]

    if first:
        sql = 'CREATE TABLE IF NOT EXISTS MyTable (%s)' % ', '.join(['%s text' % column for column in columns])
        print(sql)
        cursor.execute(sql)
        first = False

    #~ for row in reader:  ## we will read the rows later in the loop
    #~     print(row)

    query = 'insert into MyTable({0}) values ({1})'
    query = query.format(','.join(columns), ','.join('?' * len(columns)))
    print(query)

    for row in reader:
        cursor.execute(query, row)

con.commit()
con.close()
You can also do it easily with the peewee ORM. For this you only need an extension from peewee, the playhouse.csv_loader:
from playhouse.csv_loader import *
db = SqliteDatabase('city_spec.db')
Test = load_csv(db, 'dict_output.csv')
This creates the database city_spec.db with the headers as fields and the data from dict_output.csv.
If you don't have peewee, you can install it with:
pip install peewee