I am trying to validate Excel data: if a value is longer than 12 characters, I need to insert it into a SQL Server table using Python.
I tried the code below and I am getting this error:
('The SQL contains 0 parameter markers, but 1 parameters were supplied', 'HY000')
The value in Excel already comes wrapped in brackets, like ('12ewrr334dgdgskngk'),
and when I run the query in SSMS it works fine:
INSERT INTO #finalresultset1 ( VIN ) VALUES ('12ewrr334dgdgskngk')
import xlrd
import pyodbc

book = xlrd.open_workbook(r'excelpath')
sheet = book.sheet_by_name(r'Sheet')
cnxn = pyodbc.connect('database connection')
cursor = cnxn.cursor()
query = """ INSERT INTO #finalresultset1 ( VIN ) Values """
VINSheet = sheet.ncols

for row in range(0, sheet.nrows):
    for col in range(0, VINSheet):
        cell_VIN = sheet.cell(row, col)
        if len(cell_VIN.value) >= 12:
            cursor.execute(query, cell_VIN.value)
        else:
            print('VIN Length must be greater than 17')
I also tried cursor.execute(query, (cell_VIN.value, ))
and this time I got a different error:
pyodbc.ProgrammingError: ('42S02',
"[42S02] [Microsoft][ODBC SQL Server Driver][SQL Server]
Invalid object name '#finalresultset1'. (208) (SQLExecDirectW);
[42S02] [Microsoft][ODBC SQL Server Driver][SQL Server]
Statement(s) could not be prepared. (8180)")
and I verified that the temp table exists in my DB.
EDIT
I added the (?) after Values:
query = """ INSERT INTO #finalresultset1 ( VIN ) Values (?)"""
cursor.execute(" INSERT INTO #finalresultset1 ( product ) Values (?) ",
               cell_VIN.value)
but I still get:
Invalid object name '#finalresultset1'
#finalresultset1 is a local temporary table because its name begins with #. You are opening your connection and then trying to insert into that table without creating it first. That will never work because local temporary tables only exist for the current session, and your session (created by the connect call) has not created that table.
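To illustrate, here is a minimal sketch that creates the temp table and inserts on the same connection, so the session doing the inserts is the session that owns the # table. It assumes a live pyodbc connection is passed in; the varchar(50) width is my own guess, and load_vins / valid_vin are illustrative names, not anything from the question.

```python
# Table and column names come from the question; the width is an assumption.
CREATE_SQL = "CREATE TABLE #finalresultset1 (VIN varchar(50))"
INSERT_SQL = "INSERT INTO #finalresultset1 (VIN) VALUES (?)"

def valid_vin(value):
    # The question's length check: only insert strings of 12+ characters.
    return isinstance(value, str) and len(value) >= 12

def load_vins(cnxn, vins):
    # Create the temp table and insert on the SAME connection, because a
    # local temp table is only visible to the session that created it.
    cursor = cnxn.cursor()
    cursor.execute(CREATE_SQL)
    for vin in vins:
        if valid_vin(vin):
            cursor.execute(INSERT_SQL, vin)
    cnxn.commit()
```

The key point is that CREATE and INSERT share one cursor/connection; running the CREATE in SSMS does not help the Python session.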
Related
I am trying to insert data from my CSV file into SQL Server, but I am getting this error, which makes no sense to me, because I am creating the table; shouldn't I be able to specify the data type? Also, even if I remove the CREATE TABLE part of the code, the data is still not being inserted into an already existing table.
ProgrammingError: ('42000', '[42000] [Microsoft][ODBC SQL Server Driver][SQL Server]The incoming tabular data stream (TDS) remote procedure call (RPC) protocol stream is incorrect. Parameter 12 (""): The supplied value is not a valid instance of data type float. Check the source data for invalid values. An example of an invalid value is data of numeric type with scale greater than precision. (8023) (SQLExecDirectW)')
import pandas as pd
import numpy as np  # needed for np.nan below
import pyodbc

# Import CSV
data = pd.read_csv(r'C:\Users\Empyz\Desktop\Options_Data_Combined.csv')
df = pd.DataFrame(data)
df.replace('', np.nan, inplace=True)  # inplace=True returns None, so don't assign the result

# Connect to SQL Server
conn = pyodbc.connect('Driver={SQL Server};'
                      'Server=localhost;'
                      'Database=Stocks;'
                      'Trusted_Connection=yes;')
print('Connected Successfully to SQL Server')
cursor = conn.cursor()

# Insert DataFrame to Table
for row in df.itertuples():
    cursor.execute('''
        INSERT INTO OPTIONS_TEST2 (contractSymbol, lastTradeDate, strike, lastPrice, bid, ask, change, percentChange, volume, openInterest, impliedVolatility, inTheMoney, contractSize, currency)
        VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?,?)
        ''',
        row.contractSymbol,
        row.lastTradeDate,
        row.strike,
        row.lastPrice,
        row.bid,
        row.ask,
        row.change,
        row.percentChange,
        row.volume,
        row.openInterest,
        row.impliedVolatility,
        row.inTheMoney,
        row.contractSize,
        row.currency)
conn.commit()
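For what it's worth, the "not a valid instance of data type float" error often comes from NaN values (produced here by the replace('', np.nan) step): pyodbc cannot send NaN to SQL Server, but it sends None as SQL NULL. A minimal sketch of the conversion, where nan_to_none is my own helper name:

```python
import math

def nan_to_none(values):
    # pyodbc sends None as SQL NULL; float('nan') is rejected by the server.
    return [None if isinstance(v, float) and math.isnan(v) else v
            for v in values]
```

Under that assumption, you would convert each row's values with nan_to_none before passing them to cursor.execute.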
I would like to insert an entire row from a dataframe into SQL Server using pandas.
I can insert using the command below; however, I have 46+ columns and do not want to type out all 46 column names.
server = 'server'
database = 'db'
cnxn = pyodbc.connect('DRIVER={SQL Server};SERVER='+server+';DATABASE='+database)
cursor = cnxn.cursor()

for index, row in df.iterrows():
    cursor.execute("INSERT INTO HumanResources.DepartmentTest (DepartmentID,Name,GroupName) values(?,?,?)", row.DepartmentID, row.Name, row.GroupName)
cnxn.commit()
cursor.close()
Is there a way I can insert the entire row without giving column names?
Something like this?
insert into table1
select * from df
I tried the command below and it is failing:
for index, row in df.iterrows():
    cursor.execute("INSERT INTO dbo.Staging select row")
Error:('42S22', "[42S22] [Microsoft][ODBC SQL Server Driver][SQL Server]Invalid column name 'row'. (207) (SQLExecDirectW); [42S22] [Microsoft][ODBC SQL Server Driver][SQL Server]Column name or number of supplied values does not match table definition. (213)")
I can't use to_sql as I cannot import sqlalchemy in UAT or prod.
Can anyone help me with this?
Along with the statement below, mention the column names:
insert into table1 (col1, col2, ... colN)
select col1, col2, ... colN from df
Ignore identity columns.
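You don't have to type the 46 names by hand, though: the column list and the placeholders can be generated from df.columns. A sketch, where build_insert is an illustrative helper and the identity column name is whatever your table actually uses:

```python
def build_insert(table, columns, identity_cols=()):
    # Skip identity columns, then emit one '?' placeholder per remaining column.
    cols = [c for c in columns if c not in identity_cols]
    placeholders = ", ".join("?" for _ in cols)
    sql = "INSERT INTO {} ({}) VALUES ({})".format(
        table, ", ".join(cols), placeholders)
    return sql, cols

# Usage sketch with a pyodbc cursor (table/column names are assumptions):
# sql, cols = build_insert("HumanResources.DepartmentTest",
#                          list(df.columns), identity_cols=("DepartmentID",))
# cursor.executemany(sql, df[cols].values.tolist())
```

executemany sends all rows in one call, which also avoids the per-row iterrows loop.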
Trying to load a pandas df into an already-created table in a SQL Server database. I am able to connect and create a new table, but I am unable to load the df.
My code is here:
# Dependencies
from sqlalchemy import create_engine
import urllib

# Variables
server = r'My_Server\SQLEXPRESS'
database = 'My_db'

# Connect to sql db
conn_str = (
    r'Driver=ODBC Driver 17 for SQL Server;'
    r'Server=My_Server\SQLEXPRESS;'
    r'Database=My_db;'
    r'Trusted_Connection=yes;'
)
quoted_conn_str = urllib.parse.quote_plus(conn_str)
engine = create_engine(f'mssql+pyodbc:///?odbc_connect={quoted_conn_str}')
cnxn = engine.connect()

# Load df to sql db
My_df.to_sql(name='myTable1', con=cnxn, if_exists='append', index=False)
cnxn.close()
Here's the error I get:
ProgrammingError: (pyodbc.ProgrammingError) ('42000', "[42000] [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]String or binary data would be truncated in table 'representation_v1.dbo.RepresentationTable1', column 'Country'. Truncated value: '\xa0'. (2628) (SQLExecDirectW); [42000] [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]The statement has been terminated. (3621)")
[SQL: INSERT INTO [RepresentationTable1] ([Country], [countryCode], [Population], [LHR], [UHR], [CPRLH], [CPRUH], [Groups]) VALUES (?, ?, ?, ?, ?, ?, ?, ?)].......
The error message that says "...column 'Country'. Truncated value: '\xa0'. (2628)..." indicates that the number of characters in my data for the 'Country' column exceeded the width I had set up for that column, so the value was at risk of being truncated, hence the error.
Deleting the table and recreating it with a wider column (VARCHAR(128)) fixed the problem.
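One way to pick a safe width up front is to measure the longest value before the table is created. A sketch, where needed_varchar_width is my own helper name, not part of pandas:

```python
def needed_varchar_width(values, minimum=1):
    # The widest value in the column is the smallest VARCHAR(n) that
    # will hold every row without truncation.
    return max([len(str(v)) for v in values] + [minimum])
```

If you let to_sql create the table, you can also pass an explicit type, e.g. dtype={'Country': sqlalchemy.types.VARCHAR(width)}, so the column is created wide enough in the first place.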
I am trying to upload a dataframe to a temporary table (using the pandas to_sql method) in SQL Server, but I am having problems. I am able to upload dataframes to 'normal' tables in SQL fine.
The error I get is below; it tells me that a temporary table called #d already exists.
ProgrammingError: (pyodbc.ProgrammingError) ('42S01', "[42S01] [Microsoft][ODBC SQL Server Driver][SQL Server]There is already an object named '#d' in the database. (2714) (SQLExecDirectW)")
[SQL:
CREATE TABLE [#d] (
However, if I run the DROP TABLE #d (in my code below), I get the error below, even though I do have permissions to create and drop tables:
ProgrammingError: (pyodbc.ProgrammingError) ('42S02', "[42S02] [Microsoft][ODBC SQL Server Driver][SQL Server]Cannot drop the table '#d', because it does not exist or you do not have permission. (3701) (SQLExecDirectW)")
[SQL: DROP TABLE #d]
(Background on this error at: http://sqlalche.me/e/f405)
The errors seem conflicting to me
My code is below.
engine = create_engine("mssql+pyodbc:///?odbc_connect={}".format(params))
cnxn = engine.connect()

# q = """DROP TABLE #d"""
# cnxn.execute(q)

q = """
CREATE TABLE #d(id int,
                time_stamp datetime,
                pressure float)
"""
cnxn.execute(q)

# upload data into temp table
df.to_sql('#d', cnxn, if_exists='append', index=False)
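The "conflicting" errors are likely because pandas' table-existence check cannot see a local temp table, so to_sql tries to CREATE TABLE [#d] again even though the session already created it. A possible workaround is to skip to_sql and insert with plain executemany on the same connection that created #d; this is a sketch, and temp_insert_sql / df_to_temp_table are my own names:

```python
def temp_insert_sql(table, columns):
    # One '?' placeholder per column, matching the temp table created earlier.
    placeholders = ", ".join("?" for _ in columns)
    return "INSERT INTO {} ({}) VALUES ({})".format(
        table, ", ".join(columns), placeholders)

def df_to_temp_table(cnxn, df, table="#d"):
    # Insert the DataFrame on the SAME session that created the temp table,
    # bypassing to_sql (and its existence check) entirely.
    cursor = cnxn.cursor()
    cursor.executemany(temp_insert_sql(table, list(df.columns)),
                       df.values.tolist())
    cnxn.commit()
```

This assumes the DataFrame columns match the temp table's columns in name and order.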
I've been stuck on this problem for a long time; I hope somebody can enlighten me. I have a SQL database I would like to update. Here are some pieces of the code. I extracted data from SQL into Python, then applied the functions hex_to_string and slicing_bin, and I plan to update the SQL database. I don't have any ID column in the database, but I have the DATETIME column, which differentiates the entries.
query = """ select P from Table """
cnxn = pyodbc.connect(conn_str)
cnxn.add_output_converter(pyodbc.SQL_VARBINARY, hexToString)
cursor = cnxn.cursor()
cursor.execute(query)
dtbs = cursor.fetchall()
row_list = []
ln = len(dtbs)
cursor.execute(query)
for i in range(ln):
    row = cursor.fetchval()
    result = slicing_bin(row)
    result_float = [float("{0:.2f}".format(i)) for i in result]
    row_list.append(result_float)

crsr = cnxn.cursor()
crsr.execute(query)
aList = [item[0] for item in crsr.fetchall()]
for aValue in aList:
    crsr.execute("""UPDATE Table SET P=? WHERE DATETIME=?""", (row_list, aValue))
crsr.close()
cnxn.commit()
When I run this code, I get this error message:
File
"C:/Users/r/.PyCharmCE2018.3/config/scratches/Finalcombined2.py",
line 64, in
crsr.execute("""UPDATE Access.dbo.M_PWA SET P_PULSES=? WHERE DATETIME=?""", (row_list, aValue)) pyodbc.ProgrammingError: ('42000',
"[42000] [Microsoft][ODBC Driver 11 for SQL Server][SQL Server]Column,
parameter, or variable #1: Cannot find data type READONLY. (2715)
(SQLExecDirectW); [42000] [Microsoft][ODBC Driver 11 for SQL
Server][SQL Server]Statement(s) could not be prepared. (8180); [42000]
[Microsoft][ODBC Driver 11 for SQL Server][SQL Server]Parameter or
variable '#P1' has an invalid data type. (2724)")
Please help, thanks.
Hmm, I would have guessed that Gord was right; that's certainly the first thing I'd look at. OK, here is a small sample of how I do updates in MS Access from Python.
#import pypyodbc
import pyodbc

# MS ACCESS DB CONNECTION
pyodbc.lowercase = False
conn = pyodbc.connect(
    r"Driver={Microsoft Access Driver (*.mdb, *.accdb)};" +
    r"Dbq=C:\\path_here\\Northwind.mdb;")

# OPEN CURSOR AND EXECUTE SQL
cur = conn.cursor()

# Option 1 - no error and no update
cur.execute("UPDATE dbo_test SET Location = 'New York' Where Status = 'Scheduled'")
conn.commit()

cur.close()
conn.close()
Can you adapt this to your specific scenario?
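Adapting that to the question's code: the original error ("Cannot find data type READONLY") apparently appears because row_list, a whole Python list, was passed as one parameter, which pyodbc interprets as a table-valued parameter. Each parameter must be a scalar, so update row by row, pairing each new value with the DATETIME that identifies its row. A sketch, where paired_updates is my own name and serializing each result_float list to a string is an assumption about how P is stored:

```python
def paired_updates(cursor, sql, new_values, keys):
    # One parameterized UPDATE per row: each scalar value is paired with
    # the DATETIME key that identifies its row, instead of passing the
    # whole row_list as a single parameter.
    for value, key in zip(new_values, keys):
        cursor.execute(sql, (value, key))

# Usage sketch (str(...) serialization is an assumption):
# paired_updates(crsr, "UPDATE Table SET P=? WHERE DATETIME=?",
#                [str(r) for r in row_list], aList)
```

Note that zip stops at the shorter sequence, so row_list and the DATETIME list must line up one-to-one.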