Python: Inserting data into SQL Server

Struggling to figure out why this isn't working. I don't get any errors, but it will not write to the table.
import pyodbc

connprod = pyodbc.connect('DRIVER={SQL Server};SERVER=server;DATABASE=master;Trusted_Connection=yes')
cursorprod = connprod.cursor()
conndev = pyodbc.connect('DRIVER={SQL Server};SERVER=server;DATABASE=master;Trusted_Connection=yes')
cursordev = conndev.cursor()
connlocal = pyodbc.connect('DRIVER={SQL Server};SERVER=server;DATABASE=DBA;Trusted_Connection=yes')
cursorlocal = connlocal.cursor()

cursorprod.execute("SELECT Servername = @@servername, Date = getdate(), wait_type, waiting_tasks_count, wait_time_ms, max_wait_time_ms, signal_wait_time_ms FROM sys.dm_os_wait_stats")
rows = cursorprod.fetchall()
for row in rows:
    cursorlocal.execute('insert into dba.dbo.dm_os_wait_stats values (?,?,?,?,?,?,?)', row)
cursorlocal.commit

If your example is accurate, you're not calling the commit method:
cursorlocal.commit()
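With that one change the tail of the script commits the inserted rows; a minimal sketch, reusing the names from the question (in pyodbc, calling commit on either the cursor or the connection commits the open transaction):

for row in rows:
    cursorlocal.execute('insert into dba.dbo.dm_os_wait_stats values (?,?,?,?,?,?,?)', row)
connlocal.commit()  # or cursorlocal.commit(); both commit the pending inserts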

Related

Printing data results from PostgreSQL to a pandas dataframe

I am trying to print the results of a joined table from PostgreSQL in Python. However, when I try to print the results, the table shows up but I get NaN data. Can someone help?
conn = psy.connect(dbname="funda_project", host="localhost", user="postgres", password="ledidhima2021.")
cursor = conn.cursor()
conn.commit()
createjointable2 = '''SELECT(
distance_data."Municipality",
distance_data."Childcare/Nursery",
distance_data."Leisure/Culture/Library",
sales_details."Purchase_price",
sales_details."Publication_date",
sales_details."Date_of_signature",
house_details."Type_of_house",
house_details."Object_categorie",
house_details."Construction_year",
house_details."Energy_label_class",
demo_data."Age_Group_Relation_(15-20)",
demo_data."Age_Group_Relation_(20-25)",
demo_data."Age_Group_Relation_(25-45)")
FROM "distance_data"
INNER JOIN "zip_data"
ON "distance_data"."Municipality" = "zip_data"."Municipality"
INNER JOIN "demo_data"
ON "zip_data"."Municipality" = "demo_data"."Municipality"
INNER JOIN "sales_details"
ON "zip_data"."globalId" = "sales_details"."GlobalID"
INNER JOIN "house_details"
ON "zip_data"."globalId" = "house_details"."GlobalID"
;'''
cursor.execute(createjointable2)
import pandas as pd
eri = pd.DataFrame(cursor.fetchall())
datalist = list(eri)
results = pd.DataFrame(eri, columns=["Municipality", "Childcare/Nursery",
    "Leisure/Culture/Library", "Purchase_price", "Publication_date", "Date_of_signature",
    "Type_of_house", "Object_categorie", "Construction_year", "Energy_label_class",
    "Age_Group_Relation_(15-20)", "Age_Group_Relation_(20-25)", "Age_Group_Relation_(25-45)"])
results
Pandas has a built-in SQL query reading function, pd.read_sql_query(query, connection), which returns the queried table as a dataframe.
dataframe = pd.read_sql_query("SELECT * FROM table;", conn)
Here conn is the connection object you already create in your code.
Another way is close to what you already tried:
from pandas import DataFrame

df = DataFrame(cursor.fetchall())
df.columns = [desc[0] for desc in cursor.description]  # column names from the cursor
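Putting the first approach together with the connection and query from the question, a minimal sketch would be (assuming psy is psycopg2, as in the question, and that the join statement in createjointable2 selects the columns individually rather than wrapping them in parentheses):

import pandas as pd
import psycopg2 as psy

conn = psy.connect(dbname="funda_project", host="localhost",
                   user="postgres", password="ledidhima2021.")

# read_sql_query runs the statement and builds the dataframe,
# column names included, in one step
results = pd.read_sql_query(createjointable2, conn)
print(results.head())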

Inserting huge pandas dataframe into SQL Server table

I am looking for a way to insert a big set of data into a SQL Server table in Python. The problem is that my dataframe in Python has over 200 columns, currently I am using this code:
import pyodbc
import pandas as pd
server = 'yourservername'
database = 'AdventureWorks'
username = 'username'
password = 'yourpassword'
cnxn = pyodbc.connect('DRIVER={SQL Server};SERVER=' + server + ';DATABASE=' + database + ';UID=' + username + ';PWD=' + password)
cursor = cnxn.cursor()
for index, row in df.iterrows():
    cursor.execute("INSERT INTO dbo.mytable (A,B,C) values(?,?,?)", row.A, row.B, row.C)
cnxn.commit()
cursor.close()
The problem is the INSERT INTO dbo.mytable (A,B,C) values(?,?,?) part: I need to insert data with over 200 columns, and spelling out each of those columns is not really time efficient :(
I would appreciate any help!
Create the connection with SQLAlchemy and use df.to_sql() with the chunksize parameter. Link to doc.
P.S. In my case, a connection that was not created through SQLAlchemy did not work with to_sql.
Ok, I finally found a way:
import urllib
import sqlalchemy
from sqlalchemy.pool import NullPool

serverName = 'xxx'
dataBase = 'zzz'
conn_str = urllib.parse.quote_plus(r'DRIVER={SQL Server};SERVER=' + serverName + r';DATABASE=' + dataBase + r';TRUSTED_CONNECTION=yes')
conn = 'mssql+pyodbc:///?odbc_connect={}'.format(conn_str)

engine = sqlalchemy.create_engine(conn, poolclass=NullPool)
connection = engine.connect()
df.to_sql("TableName", engine, schema='SchemaName', if_exists='append', index=True, chunksize=200)
connection.close()
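If the append is still slow, pyodbc's fast_executemany option usually speeds up to_sql considerably; with a recent SQLAlchemy version and the mssql+pyodbc dialect it can be enabled when creating the engine (a sketch under those assumptions):

# assumes SQLAlchemy 1.3+ with the mssql+pyodbc dialect
engine = sqlalchemy.create_engine(conn, fast_executemany=True, poolclass=NullPool)
df.to_sql("TableName", engine, schema='SchemaName', if_exists='append', index=True, chunksize=200)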

Convert SQL query output into pandas dataframe

I have been looking since yesterday for a way to convert the output of a SQL query into a pandas dataframe.
For example, code that does this:
data = select * from table
I've tried a lot of code I found on the internet, but nothing seems to work.
Note that my database is stored in Azure Databricks and I can only access the table using its URL.
Thank you so much!
Hope this helps you out. Both insertion and selection are in this code for reference.
import urllib
import pyodbc
from sqlalchemy import create_engine

def db_insert_user_level_info(table_name):
    # Call your dataframe here, as an argument of the function or pass it directly
    df = df_parameter
    params = urllib.parse.quote_plus("DRIVER={SQL Server};SERVER=DESKTOP-ITAJUJ2;DATABASE=githubAnalytics")
    engine = create_engine("mssql+pyodbc:///?odbc_connect=%s" % params)
    engine.connect()
    table_row_count = select_row_count(table_name)
    df_row_count = df.shape[0]
    if table_row_count == df_row_count:
        print("Data Cannot Be Inserted Because The Row Count is the Same")
    else:
        df.to_sql(name=table_name, con=engine, index=False, if_exists='append')
        print("********** DONE, EXECUTED SUCCESSFULLY **********")

def select_row_count(table_name):
    cnxn = pyodbc.connect("Driver={SQL Server Native Client 11.0};"
                          "Server=DESKTOP-ITAJUJ2;"
                          "Database=githubAnalytics;"
                          "Trusted_Connection=yes;")
    cur = cnxn.cursor()
    try:
        db_cmd = "SELECT count(*) FROM " + table_name
        res = cur.execute(db_cmd)
        # Return the first value of the result set (the row count)
        for x in res:
            return x[0]
    except:
        print("Table is not Available, Please Wait...")
Using sqlalchemy to connect to the database, and the built-in method read_sql_query from pandas to go straight to a DataFrame:
import pandas as pd
from sqlalchemy import create_engine
engine = create_engine(url)
connection = engine.connect()
query = "SELECT * FROM table"
df = pd.read_sql_query(query, connection)
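Here url is the SQLAlchemy connection URL for your database. As an illustration only, for a SQL Server database reached through pyodbc it might look like this (placeholder user, server, database, and driver names):

from sqlalchemy import create_engine

# placeholder values; substitute your own credentials, server, database, and ODBC driver
url = "mssql+pyodbc://user:password@myserver/mydatabase?driver=ODBC+Driver+17+for+SQL+Server"
engine = create_engine(url)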

Writing data into a SQL table from a text file using the pandas module

I was trying to read some data from a text file and write it into a SQL Server table using the pandas module and a for loop. Below is my code:
import pandas as pd
import pyodbc

driver = '{SQL Server Native Client 11.0}'
conn = pyodbc.connect(
    Trusted_Connection='Yes',
    Driver=driver,
    Server='***********',
    Database='Sullins_Data'
)

def createdata():
    cursor = conn.cursor()
    cursor.execute(
        'insert into Sullins_Datasheet(Part_Number,Web_Link) values(?,?);',
        (a, j))
    conn.commit()

a = pd.read_csv('check9.txt', header=None, names=['Part_Number', 'Web_Links'])  # 2 columns, 8 rows
b = pd.DataFrame(a)
p_no = (b['Part_Number'])
w_link = (b['Web_Links'])
# print(p_no)

for i in p_no:
    a = i
for l in w_link:
    j = l
createdata()
As you can see from the code, I created two variables, a and j, to hold the values from the two columns of the text file one by one and write them to the SQL table.
But after running the code I got only the last row's values in the table, out of 8 rows.
When I called the createdata function inside the w_link for loop, it wrote duplicate values to the table.
Please suggest where I am going wrong.
Here is a sample of how your code is working:
a = 0
b = 0
ptr = ['s', 'd', 'f', 'e']
pt = ['a', 'b', 'c', 'd']
for i in ptr:
    a = i
    print(a, end='')
for j in pt:
    b = j
    print(b, end='')
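One way to fix the original script, sketched with the names from the question, is to walk the two columns in step (e.g. with zip()) and insert each pair as you go:

cursor = conn.cursor()
for p, w in zip(b['Part_Number'], b['Web_Links']):
    cursor.execute(
        'insert into Sullins_Datasheet(Part_Number,Web_Link) values(?,?);',
        (p, w))
conn.commit()  # one commit after all 8 rows are queued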

Unable to insert all rows into SQL table using python script

I have two data sets in my JSON API. I am unable to insert both into SQL Server. The iteration using a for loop doesn't seem to pick up the second data set. Can someone please help me understand how to fix this? This is new for me, so I am not able to find out what's wrong, since the coding is a bit different from SQL.
import urllib, json
import pyodbc

# read data from API
url = "http://nagiosdatagateway.vestas.net/esq/ITE1452552/logstash-2018.12.16/2/desc"
response = urllib.urlopen(url)
data = json.loads(response.read())

# define db connection
cnxn = pyodbc.connect("Driver={SQL Server Native Client 11.0};"
                      "Server=DKCDCVDCP42\DPA;"
                      "Database=VPDC;"
                      "Trusted_Connection=yes;")
cursor = cnxn.cursor()

i = 0
j = len(data)
print j
for i in range(i, j - 1):
    # print data[1]["_source"]["utc_timestamp"]
    print i
    print data[i]["_source"]["nagios_comment"]
    print data[i]["_source"]["nagios_author"]
cursor.execute("insert into vpdc.pa.Pythontable(nagios_comment,nagios_author) values (?,?)",
               (data[i]["_source"]["nagios_comment"], data[i]["_source"]["nagios_author"]))
i += 1
print i
cnxn.commit()
Both of these sets of values should end up in the SQL table, in the columns nagios_comment and nagios_author:
307262828 Alex Christopher Ramos
307160348 Alex Christopher Ramos
The issue was resolved by correctly indenting the cursor.execute statement in the script, as below. In my original script there was no indentation on this line, so it was called outside the loop.
import urllib, json
import pyodbc

# read data from API
url = "http://nagiosdatagateway.vestas.net/esq/ITE1452552/logstash-2018.12.16/2/desc"
response = urllib.urlopen(url)
data = json.loads(response.read())

# define db connection
cnxn = pyodbc.connect("Driver={SQL Server Native Client 11.0};"
                      "Server=DKCDCVDCP42\DPA;"
                      "Database=VPDC;"
                      "Trusted_Connection=yes;")
cursor = cnxn.cursor()

i = 0
j = len(data)
print j
for i in range(0, 2):
    # print data[1]["_source"]["utc_timestamp"]
    print data[i]["_source"]["nagios_comment"]
    print data[i]["_source"]["nagios_author"]
    cursor.execute("insert into vpdc.pa.Pythontable(nagios_comment,nagios_author) values (?,?)",
                   (data[i]["_source"]["nagios_comment"], data[i]["_source"]["nagios_author"]))
cnxn.commit()
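As a small follow-up, the same loop can be written without index arithmetic by iterating over the list directly, which avoids hard-coding the range; a sketch with the same names:

for item in data:
    cursor.execute("insert into vpdc.pa.Pythontable(nagios_comment,nagios_author) values (?,?)",
                   (item["_source"]["nagios_comment"], item["_source"]["nagios_author"]))
cnxn.commit()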
