Fill a SQL Server table from an Excel file - Python

I tried to fill a SQL Server table using Python by executing the script below:
import pyodbc
import pandas as pd
from pandas import ExcelWriter
from pandas import ExcelFile

df = pd.read_excel('C:/Users/Username/Desktop/file1.xlsx', sheet_name='Sheet1')

cnxn = pyodbc.connect("Driver={SQL Server Native Client 11.0};"
                      "Server=MYSERVERNAME;"
                      "Database=DB;"
                      "uid=sa;pwd=MYPWD;"
                      "Trusted_Connection=yes;")

print("Column headings:")
print(df.columns)

'''
for i in df.index:
    print(df['Last Name'][i], df['First Name'][i])
'''

cursor = cnxn.cursor()
for i in df.index:
    cursor.execute("insert into pyperson (id,firstname,lastname) values (df['ID'][i],df['First Name'][i],df['Last Name'][i])")
cnxn.commit()
PS:
If I only read data from the Excel file and print it, it works fine.
If I insert directly with a hard-coded INSERT INTO statement, that also works fine.
But when I combine the two, it shows me the error message below:
IndentationError: expected an indented block
Any ideas? Any help :)
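One thing worth noting about the code in the question: the df[...] lookups are quoted inside the SQL string, so even with the indentation fixed the server would receive the literal text df['ID'][i] rather than the values. A parameterized query avoids that; a minimal sketch, assuming the same columns:

cursor = cnxn.cursor()
for i in df.index:
    cursor.execute(
        "insert into pyperson (id, firstname, lastname) values (?, ?, ?)",
        (df['ID'][i], df['First Name'][i], df['Last Name'][i])
    )
cnxn.commit()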

I am using the following code to add data from a text file to SQL Server using Python; hope that helps:
import pymssql
import numpy as np

host = 'YourHostName'
username = 'USERNAME'
password = 'PASSWORD'
database = 'TestDB'

conn = pymssql.connect(host, username, password, database)
cursor = conn.cursor()
cursor.execute("Delete from color_type")

with open("Your file path\\filename.csv", "r") as ins:
    array = []
    for line in ins:
        array.append(line)
        data = line.split('|')
        fst = data[0]
        lst = data[1]
        cursor.execute("insert into color_type values(%s, %s)", (fst, lst))

cursor.execute("select * from color_type")
rows = cursor.fetchall()
conn.commit()
print(rows)
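For the original Excel case, pandas can also push the whole DataFrame to SQL Server in one call through a SQLAlchemy engine. A sketch, assuming the mssql+pyodbc dialect; the driver name in the URL depends on what is installed on your machine:

import pandas as pd
from sqlalchemy import create_engine

df = pd.read_excel('C:/Users/Username/Desktop/file1.xlsx', sheet_name='Sheet1')

# URL format: mssql+pyodbc://user:password@server/database?driver=...
engine = create_engine(
    "mssql+pyodbc://sa:MYPWD@MYSERVERNAME/DB?driver=ODBC+Driver+17+for+SQL+Server"
)
df.to_sql('pyperson', engine, if_exists='append', index=False)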

Related

Open database files (.db) using python

I have a database file (.db) in SQLite3 format and I was attempting to open it to look at the data inside. Below is my attempt using Python.
import sqlite3

# Create a SQL connection to our SQLite database
con = sqlite3.connect(dbfile)
cur = con.cursor()

# The result of a "cursor.execute" can be iterated over by row
for row in cur.execute("SELECT * FROM "):
    print(row)

# Be sure to close the connection
con.close()
For the line cur.execute("SELECT * FROM "), I understand that you have to put the name of the table after the word "FROM"; however, since I can't even open the file in the first place, I have no idea what table name to put. How can I open the database file and read its contents?
So, you analyzed it all right. After the FROM you have to put the table name, and you can find the table names like this:
SELECT name FROM sqlite_master WHERE type = 'table'
In code this looks like this:
# loading in modules
import sqlite3
# creating file path
dbfile = '/home/niklas/Desktop/Stuff/StockData-IBM.db'
# Create a SQL connection to our SQLite database
con = sqlite3.connect(dbfile)
# creating cursor
cur = con.cursor()
# reading all table names
table_list = [a for a in cur.execute("SELECT name FROM sqlite_master WHERE type = 'table'")]
# here is your table list
print(table_list)
# Be sure to close the connection
con.close()
That worked very well for me. You already have the reading of the data right; just paste in the table names.
If you want to see the data as a pandas DataFrame for visual analysis, the approach below could also be used.
import pandas as pd
import sqlite3
import sqlalchemy

try:
    conn = sqlite3.connect("file.db")
except Exception as e:
    print(e)

# Now in order to read into a pandas DataFrame we need to know the table name
cursor = conn.cursor()
cursor.execute("SELECT name FROM sqlite_master WHERE type='table';")
print(f"Table Name : {cursor.fetchall()}")

df = pd.read_sql_query('SELECT * FROM Table_Name', conn)
conn.close()
A SQLAlchemy variant (here inside a small Flask app) reflects the existing table and reads rows from it:

from flask import Flask
app = Flask(__name__)

from sqlalchemy import create_engine, select, MetaData, Table
from sqlalchemy.sql import and_, or_

# SQLite URLs take no credentials; the file path comes after sqlite:///
engine = create_engine('sqlite:///databasename.db')

class UserModel():
    def __init__(self):
        try:
            self.meta = MetaData()
            self.users = Table("users", self.meta, autoload=True, autoload_with=engine)
        except Exception as e:
            print(e)

    def get(self):
        stmt = select([self.users.c.name, self.users.c.email, self.users.c.password])
        print(stmt)
        result = engine.execute(stmt)
        temp = [dict(r) for r in result] if result else None
        print(temp)
        return temp
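Used like this, the class simply reflects the existing users table and returns its rows as a list of dicts:

users = UserModel()
rows = users.get()   # e.g. [{'name': ..., 'email': ..., 'password': ...}]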

How can I store a SQL table in python to work with it?

I am looking to work in Python with a table that I have in SQL. I want to store the entire table in a matrix called 'mat', and then get the output after the Python code so I can read the table with SQL again. This is how I started:
import pyodbc
import pandas as pd

server = 'myserver'
database = 'mydatabase'
username = 'myuser'
password = 'mypassword'
cnxn = pyodbc.connect('DRIVER={ODBC Driver 13 for SQL Server};SERVER='+server+';DATABASE='+database+';UID='+username+';PWD='+password)

# ****** Python code ******

mat = pd.read_sql('select * from mytable order by time', con=cnxn)
How should I read the table to store it in mat and then how do I send it back to SQL?
You have already read the data into a DataFrame. If you want to convert the DataFrame to a matrix, use mat.values. If you want to write the data to a SQL table, you will have to create a cursor and use it to insert the data.
cursor = cnxn.cursor()
cursor.execute(''' INSERT INTO myTable (FirstName, LastName) VALUES ('Wilsamson', 'Shiphrah') ''')
If you have multiple rows to insert, you should use the executemany command:
values = list(zip(mat['FirstName'].values.tolist(), mat['LastName'].values.tolist()))
cursor.executemany('''INSERT INTO myTable (FirstName, LastName) VALUES (?, ?)''', values);
After the INSERT statements, you will need to commit the inserts before closing your cursor and connection.
cursor.commit()
cursor.close()
cnxn.close()
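If mat is large, recent pyodbc versions can speed up that same executemany call considerably by batching the parameters. A sketch; fast_executemany requires pyodbc 4.0.19 or later:

cursor = cnxn.cursor()
cursor.fast_executemany = True   # send parameter batches in bulk instead of row by row
cursor.executemany('''INSERT INTO myTable (FirstName, LastName) VALUES (?, ?)''', values)
cnxn.commit()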
This is how I do it.
import mysql.connector
import pandas as pd
import numpy as np

# use this to display ALL columns...useful, but definitely not required
pd.set_option('display.max_columns', None)

mydb = mysql.connector.connect(
    host="localhost",
    user="user_name",
    passwd="pswd",
    database="db_name"
)

mycursor = mydb.cursor()
mycursor.execute("SELECT * FROM YourTable")
myresult = mycursor.fetchall()

df = pd.DataFrame(myresult)
df.to_csv('C:\\path_here\\test.csv', sep=',')
You can easily convert a dataframe to a matrix.
np.array(df.to_records().view(type=np.matrix))
But I'm not sure why you want to do that; I think DataFrames are a lot more practical for most people's needs.
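On recent pandas versions (0.24+) the conversion to a plain NumPy array is a one-liner:

mat = df.to_numpy()   # 2-D ndarray; df.values still works as well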

impyla - as_pandas - empty dataframe

I have a simple impyla script, and I would like to create a pandas DataFrame from my cursor. My code runs, but the DataFrame is always empty.
If I run the query directly on Impala, the result is not empty. This is what my code looks like:
from impala.dbapi import connect
from impala.util import as_pandas

conn = connect(host='impala_server', port=21051,
               user='user', password='pass',
               use_ssl=True,
               auth_mechanism='PLAIN')

cursor = conn.cursor()
cursor.execute("SELECT * FROM TABLE")
results = cursor.fetchall()

df = as_pandas(cursor)
print(df.head())
Help me please, what am I doing wrong?
Just remove:
results = cursor.fetchall()
from your code. fetchall() consumes the result set, so as_pandas(cursor) is left with an exhausted cursor and returns an empty DataFrame. It should work.
Delete the line results = cursor.fetchall() and it will be OK:
from impala.dbapi import connect
from impala.util import as_pandas

conn = connect(host='****.com', port=****, database='****')
cursor = conn.cursor()
cursor.execute('select * from table limit 10')

df = as_pandas(cursor)
df.head()
I ran the code above, and it ran well.
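If you do need the raw rows for something else, you can also build the DataFrame yourself from the standard DB-API cursor metadata instead of calling as_pandas; a sketch:

import pandas as pd

cursor.execute('select * from table limit 10')
rows = cursor.fetchall()
# cursor.description holds one (name, type, ...) tuple per result column
df = pd.DataFrame(rows, columns=[col[0] for col in cursor.description])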

How to import mysql data to .txt file using python 3.5?

I am trying to export MySQL data to a .txt file using Python 3.x, but it looks like I'm missing something. The expectation is that the data should be written to the file in tabular/column format. I tried my level best to find a solution but I'm not getting what I need.
Below is my code:
import pymysql.cursors
import pymysql
import sys
import os

# Connect to the database
connection = pymysql.connect(host='localhost',
                             user='root',
                             password="",
                             db='jmeterdb',
                             cursorclass=pymysql.cursors.DictCursor)

try:
    with connection.cursor() as cursor:
        # Select all records
        sql = "select * from emp"
        cursor.execute(sql)

        # connection is not autocommit by default. So you must commit to save
        # your changes.
        result = cursor.fetchall()
        newfile = open("db-data.txt", "a+")
        for row in result:
            newfile.writelines(row)
        print(result)
        newfile.close()
finally:
    connection.close()
In the terminal, Python shows the data when print(result) is executed, but the db-data.txt file contains only the column names.
Expected result :
Column_Name1 Column_Name2 Column_Name3
data1 data2 data3
data1 data2 data3
The code below produces the expected output for the above question:
import pymysql.cursors
import pymysql
import sys
import os

# Open database connection
connection = pymysql.connect(host='localhost',
                             user='root',
                             password="",
                             db='jmeterdb',
                             cursorclass=pymysql.cursors.DictCursor)

# prepare a cursor object using cursor() method
with connection.cursor() as cursor:
    # Prepare SQL query to select records from the database.
    try:
        sql = "SELECT * FROM EMP order by ename asc"
        # Execute the SQL command
        cursor.execute(sql)
        # Fetch all the rows as a list of dicts (DictCursor)
        results = cursor.fetchall()
        if results:
            newfile = open("db-data.txt", "a+")
            # header line
            newfile.write('ename'+"\t"+'jobs'+"\t"+'salary'+"\t"+'comm'+"\t"+'manager'+"\t"+'hiredate'+"\t"+'deptno'+"\t"+'empno'+"\n")
            for index in results:
                ltr = []
                ltr.append(index['ename'])
                ltr.append(index['job'])
                ltr.append(index['sal'])
                ltr.append(index['comm'])
                ltr.append(index['mgr'])
                ltr.append(index['hiredate'])
                ltr.append(index['deptno'])
                ltr.append(index['empno'])
                # write one tab-separated line per row
                for i in range(len(ltr)):
                    newfile.write('{}'.format(ltr[i]))
                    newfile.write("\t")
                    print(ltr[i])
                newfile.write("\n")
            newfile.close()
    except:
        print("Error: unable to fetch data")

# disconnect from server
connection.close()
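The csv module can also take care of the delimiters and line endings instead of writing tabs by hand. A minimal sketch of the same export, assuming the same dict keys as the rows above:

import csv

fields = ['ename', 'job', 'sal', 'comm', 'mgr', 'hiredate', 'deptno', 'empno']
with open('db-data.txt', 'w', newline='') as newfile:
    writer = csv.writer(newfile, delimiter='\t')
    writer.writerow(fields)                            # header line
    for row in results:                                # DictCursor rows are dicts
        writer.writerow([row[key] for key in fields])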

csv into sqlite table python

Using Python, I am trying to import a CSV into an SQLite table and use the headers in the CSV file as the headers in the SQLite table. The code runs but the table "MyTable" does not appear to be created. Here is the code:
import csv
import sqlite3

with open('dict_output.csv', 'r') as f:
    reader = csv.reader(f)
    columns = next(reader)
    # Strips white space in header
    columns = [h.strip() for h in columns]
    #reader = csv.DictReader(f, fieldnames=columns)
    for row in reader:
        print(row)

    con = sqlite3.connect("city_spec.db")
    cursor = con.cursor()

    # Inserts data from csv into table in sql database.
    query = 'insert into MyTable({0}) values ({1})'
    query = query.format(','.join(columns), ','.join('?' * len(columns)))
    print(query)

    cursor = con.cursor()
    for row in reader:
        cursor.execute(query, row)
    #cursor.commit()
    con.commit()
    con.close()
Thanks in advance for any help.
You can use Pandas to make this easy (you may need to pip install pandas first):
import sqlite3
import pandas as pd
# load data
df = pd.read_csv('dict_output.csv')
# strip whitespace from headers
df.columns = df.columns.str.strip()
con = sqlite3.connect("city_spec.db")
# drop data into database
df.to_sql("MyTable", con)
con.close()
Pandas will do all of the hard work for you, including creating the actual table!
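to_sql also lets you control what happens when the table already exists and whether the DataFrame index is written as a column:

# replace any existing table and skip the index column
df.to_sql("MyTable", con, if_exists="replace", index=False)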
You haven't marked your answer solved yet, so here goes.
Connect to the database just once, and create a cursor just once.
You can read the csv records only once.
I've added code that creates a crude form of the database table based on the column names alone. Again, this is done just once in the loop.
Your insertion code works fine.
import sqlite3
import csv

con = sqlite3.connect("city_spec.sqlite")  ## these statements belong outside the loop
cursor = con.cursor()                      ## execute them just once

first = True
with open('dict_output.csv', 'r') as f:
    reader = csv.reader(f)
    columns = next(reader)
    columns = [h.strip() for h in columns]

    if first:
        sql = 'CREATE TABLE IF NOT EXISTS MyTable (%s)' % ', '.join(['%s text' % column for column in columns])
        print(sql)
        cursor.execute(sql)
        first = False

    #~ for row in reader:   ## we will read the rows later in the loop
    #~     print(row)

    query = 'insert into MyTable({0}) values ({1})'
    query = query.format(','.join(columns), ','.join('?' * len(columns)))
    print(query)

    cursor = con.cursor()
    for row in reader:
        cursor.execute(query, row)

con.commit()
con.close()
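Since csv.reader yields one list per row and the query already uses ? placeholders, the insert loop can also be collapsed into a single executemany call, assuming every row has the same number of fields as the header:

cursor.executemany(query, reader)   # reader yields one parameter list per remaining row
con.commit()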
You can also do it easily with the peewee ORM. For this you only use an extension from peewee, playhouse.csv_loader:
from playhouse.csv_loader import *
db = SqliteDatabase('city_spec.db')
Test = load_csv(db, 'dict_output.csv')
This creates the database city_spec.db with the headers as fields and the data from dict_output.csv.
If you don't have peewee you can install it with
pip install peewee
