Upload pandas dataframe to a temporary table in SQL Server - python

I am trying to upload a dataframe to a temporary table in SQL Server (using the pandas to_sql method) but I am running into problems. Uploading dataframes to 'normal' tables works fine.
The error I get is below; it tells me that a temporary table called #d already exists.
ProgrammingError: (pyodbc.ProgrammingError) ('42S01', "[42S01] [Microsoft][ODBC SQL Server Driver][SQL Server]There is already an object named '#d' in the database. (2714) (SQLExecDirectW)")
[SQL:
CREATE TABLE [#d] (
However, if I run the DROP TABLE #d (commented out in my code below) I get the error below, even though I do have permission to create and drop tables:
ProgrammingError: (pyodbc.ProgrammingError) ('42S02', "[42S02] [Microsoft][ODBC SQL Server Driver][SQL Server]Cannot drop the table '#d', because it does not exist or you do not have permission. (3701) (SQLExecDirectW)")
[SQL: DROP TABLE #d]
(Background on this error at: http://sqlalche.me/e/f405)
The errors seem contradictory to me.
My code is below:
engine = create_engine("mssql+pyodbc:///?odbc_connect={}".format(params))
cnxn = engine.connect()
# q = """DROP TABLE #d"""
# cnxn.execute(q)
q = """
CREATE TABLE #d(id int,
                time_stamp datetime,
                pressure float)
"""
cnxn.execute(q)
# upload data into temp table
df.to_sql('#d', cnxn, if_exists='append', index=False)
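For reference, a variant that keeps everything on a single connection and a single transaction would look like the sketch below (assuming params is the same ODBC connection string as above and df the same dataframe; this is a way to rule out session mix-ups, not a confirmed fix):
from sqlalchemy import create_engine

engine = create_engine("mssql+pyodbc:///?odbc_connect={}".format(params))

# Run the CREATE and the insert on one connection, inside one transaction,
# so the temp table belongs to the same session that to_sql writes on.
with engine.begin() as cnxn:
    cnxn.execute("""CREATE TABLE #d(id int,
                                    time_stamp datetime,
                                    pressure float)""")
    df.to_sql('#d', cnxn, if_exists='append', index=False)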

Related

How to store a data frame in a sql server database with to_sql?

I'm writing code to store a dataframe in my SQL Server database. The code is as follows:
from sqlalchemy import create_engine
server='my_server'
database='my_database'
driver='ODBC Driver 17 for SQL Server'
database_con=f'mssql+pyodbc://#{server}/{database}?driver={driver}'
engine=create_engine(database_con)
con=engine.connect()
data = [ 'test','test1', 'tests']
df_data = pd.DataFrame(data)
df_data.to_sql('dbo.test', con, index=False)
However, when I run the code I get an error:
ProgrammingError: (pyodbc.ProgrammingError) ('42000', '[42000] [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]The specified schema name "name#email.nl" either does not exist or you do not have permission to use it. (2760) (SQLExecDirectW)')
[SQL:
CREATE TABLE [dbo.hoi] (
[0] VARCHAR(max) NULL
)
]
(Background on this error at: http://sqlalche.me/e/f405)
Does somebody know how to solve the problem?
Cheers,
Vincent
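For comparison, a sketch of the usual to_sql pattern, with no stray # in the connection URL and the schema passed separately from the table name (server and database names are the question's placeholders; this is not a verified fix for the exact error above):
import pandas as pd
from sqlalchemy import create_engine

server = 'my_server'
database = 'my_database'
driver = 'ODBC+Driver+17+for+SQL+Server'  # spaces URL-encoded as +

# Note: no '#' between mssql+pyodbc:// and the server name.
engine = create_engine(f'mssql+pyodbc://{server}/{database}?driver={driver}')

df_data = pd.DataFrame(['test', 'test1', 'tests'])
# Pass the schema via the schema argument instead of embedding it in the table name.
df_data.to_sql('test', engine, schema='dbo', index=False)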

Create table and Insert to SQL Server using Python

I have a huge table (147 columns) and I would like to know if it is possible to create the table in SQL Server from my pandas dataframe, so I don't have to write a CREATE TABLE statement for 147 columns.
I am trying to follow what is suggested in most of the related questions:
params = urllib.parse.quote_plus("DRIVER={ODBC Driver 17 for SQL Server};SERVER=DESKTOP-LFOSSEF;DATABASE=test;UID=xxxx;PWD=xxx")
engine = create_engine("mssql+pyodbc:///?odbc_connect=%s" % params)
connection = engine.raw_connection()
df.to_sql("table_name", connection,index=False)
The user and password work because they are what I use to sign in to SQL Server.
When I run this, I get:
Execution failed on sql 'SELECT name FROM sqlite_master WHERE type='table' AND name=?;': ('42S02', "[42S02] [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Invalid object name 'sqlite_master'. (208) (SQLExecDirectW); [42S02] [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Statement(s) could not be prepared. (8180)")
If I remove connection = engine.raw_connection() and pass the engine directly, I get:
AttributeError: 'Engine' object has no attribute 'cursor'
Any idea what is wrong? Do I need to create an empty table first? I've been troubleshooting for hours and can't find any reason why this isn't working.
Do it like this:
from sqlalchemy import create_engine

# Build a real Engine object; note the & (not a second ?) before trusted_connection,
# and the + signs URL-encoding the spaces in the driver name.
engine = create_engine("mssql+pyodbc://Your_Server_Name/Your_DB_Name"
                       "?driver=SQL+Server+Native+Client+11.0&trusted_connection=yes")
df.to_sql(x, engine, if_exists='append', index=True)
Here df is your dataframe and x is the name of your table in SQL Server.

Interface Error when importing Pandas data frame into SQL Server

I have a pandas data frame called : data
I am trying to read this pandas dataframe into a table in sql server.
I am able to read data into Python from SQL, but I am experiencing problems loading the dataframe into a table.
I have tried a few examples but keep getting the same error:
DatabaseError: Execution failed on sql 'SELECT name FROM sqlite_master WHERE type='table' AND name=?;': ('42S02', "[42S02] [Microsoft][ODBC SQL Server Driver][SQL Server]Invalid object name 'sqlite_master'. (208) (SQLExecDirectW); [42S02] [Microsoft][ODBC SQL Server Driver][SQL Server]Statement(s) could not be prepared. (8180)")
At the moment I have the following code:
PASSW = 'test'
SERVER = '111.111.11'
DB = 'Database'
Table = 'TableName'
data = pd.read_csv('2019_Prediction_24_10.csv')
cnxn = pyodbc.connect(DRIVER='{ODBC Driver 13 for SQL Server}', SERVER=SERVER,
                      DATABASE=DB, User='User', Password=PASSW)
data.to_sql(con=cnxn, name='Predictions', schema='PA', if_exists='replace')
I am new to pyodbc and to using Python together with SQL Server, so I am not quite sure what is going wrong, let alone how to fix it.
Can someone please assist me, or point me in the right direction?
As noted in the to_sql documentation:
con : sqlalchemy.engine.Engine or sqlite3.Connection
You have supplied to_sql with a (pyodbc) Connection object, so pandas is treating it like a SQLite connection. To use to_sql with SQL Server you'll need to install SQLAlchemy, create an Engine object, and pass that to to_sql.
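A minimal sketch of that approach, reusing the placeholders from the question (the URL format shown is one common way to build a SQL Server engine; adjust to your setup):
import pandas as pd
from sqlalchemy import create_engine

# An Engine instead of the raw pyodbc connection; the credentials, server,
# and driver are the placeholders from the question.
engine = create_engine(
    'mssql+pyodbc://User:test@111.111.11/Database?driver=ODBC+Driver+13+for+SQL+Server'
)

data = pd.read_csv('2019_Prediction_24_10.csv')
data.to_sql(name='Predictions', con=engine, schema='PA', if_exists='replace', index=False)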

Validating Excel cell value and inserting into SQL error HY000 Python

I am trying to validate the Excel data: if a value is more than 12 characters long, I need to insert it into a (SQL) table with Python code.
I have tried the code below and I am getting this error:
('HY000', 'The SQL contains 0 parameter markers, but 1 parameters were supplied')
The value in Excel already comes wrapped in quotes and parentheses, like ('12ewrr334dgdgskngk'), and when I run the query in SSMS it works fine:
INSERT INTO #finalresultset1 ( VIN ) Values ('12ewrr334dgdgskngk')
import xlrd
import pyodbc
book = xlrd.open_workbook(r'excelpath')
sheet = book.sheet_by_name(r'Sheet')
cnxn = pyodbc.connect('database connection')
cursor = cnxn.cursor()
query = """ INSERT INTO #finalresultset1 ( VIN ) Values """
VINSheet = sheet.ncols
for row in range(0, sheet.nrows):
    for col in range(0, VINSheet):
        cell_VIN = sheet.cell(row, col)
        if len(cell_VIN.value) >= 12:
            cursor.execute(query, cell_VIN.value)
        else:
            print('VIN Length must be greater than 17')
I tried cursor.execute(query, (cell_VIN.value,)) and this time I got a different error:
pyodbc.ProgrammingError: ('42S02', "[42S02] [Microsoft][ODBC SQL Server Driver][SQL Server]Invalid object name '#finalresultset1'. (208) (SQLExecDirectW); [42S02] [Microsoft][ODBC SQL Server Driver][SQL Server]Statement(s) could not be prepared. (8180)")
and I verified that the temp table exists in my DB.
EDIT
cursor.execute(" INSERT INTO #finalresultset1 ( product ) Values (?) ",
cell_VIN.value)
query = """ INSERT INTO #finalresultset1 ( VIN ) Values (?)"""
(Add the (?) after values)
Invalid object name '#finalresultset1'
#finalresultset1 is a local temporary table because its name begins with #. You are opening your connection and then trying to insert into that table without creating it first. That will never work because local temporary tables only exist for the current session, and your session (created by the connect call) has not created that table.
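A minimal sketch of that point, assuming the question's placeholder connection string and a single varchar column (the column type is an assumption):
import pyodbc

cnxn = pyodbc.connect('database connection')  # placeholder connection string from the question
cursor = cnxn.cursor()

# Create the temp table on this same session first; a local temp table
# (# prefix) vanishes as soon as this connection closes.
cursor.execute("CREATE TABLE #finalresultset1 (VIN varchar(50))")

cursor.execute("INSERT INTO #finalresultset1 (VIN) VALUES (?)", '12ewrr334dgdgskngk')
cnxn.commit()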

read_sql failing on table name

Apologies if it is a repeat, I could not find a related question based on my search.
I am trying to load data from an MS SQL Server database, and the following works:
connection = pyodbc.connect(driver='SQL Server',
                            server=server_name,
                            database=database_name,
                            trusted_connection='yes')
df = pd.read_sql('SELECT * FROM MyTables.Table1', connection)
However, this fails:
df = pd.read_sql('MyTables.Table1', connection)
with error:
DatabaseError: Execution failed on sql 'MyTables.Table1': ('42000', "[42000] [Microsoft][ODBC SQL Server Driver][SQL Server]The request for procedure 'Table1' failed because 'Table1' is a table object. (2809) (SQLExecDirectW)")
I understand that read_sql_table() needs an SQLAlchemy connectable to make this work, but I thought read_sql() would work with a pyodbc connection?
How would reading from table_name work with read_sql()?
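A minimal sketch of the SQLAlchemy route, reusing the question's placeholder names (the URL format is one common way to build a trusted-connection engine, not the only one):
import pandas as pd
from sqlalchemy import create_engine

server_name = 'my_server'      # placeholder
database_name = 'my_database'  # placeholder
engine = create_engine(
    f'mssql+pyodbc://{server_name}/{database_name}'
    '?driver=SQL+Server&trusted_connection=yes'
)

# read_sql_table accepts a bare table name plus a schema, but only with an
# SQLAlchemy connectable; a plain pyodbc connection supports full queries only.
df = pd.read_sql_table('Table1', engine, schema='MyTables')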
