I am using pandas to_sql() to insert a pandas dataframe into a SQL Server database, using the following snippet:
params = quote("DRIVER={SQL Server};SERVER=%s;DATABASE=%s;UID=%s;PWD=%s" % (config.server, config.database, config.user_id, config.password))
self.engine = create_engine("mssql+pyodbc:///?odbc_connect=%s" % params)
Connection is working fine.
dataframe.to_sql("InvoiceStandardization_InvoiceExtractTemp", con=self.engine, if_exists="append", index=False)
When I run this, it fails with:
('42S22', "[42S22] [Microsoft][ODBC SQL Server Driver][SQL Server]Invalid column name 'None'. (207) (SQLExecDirectW)")
I have checked that all the columns in the database table are present in my dataframe, in the same order.
What could be the possible solution?
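One possible cause, offered as a guess rather than a certain diagnosis: "Invalid column name 'None'" often means the dataframe contains a column whose name is literally None (for example, from a reset index or an unnamed CSV column), which ends up sent to SQL Server as the string 'None'. A sketch of checking for and dropping such columns before calling to_sql; the column names here are made up:

```python
import pandas as pd

# A tiny frame standing in for the real one; the stray column simulates
# a dataframe column whose name is literally None.
df = pd.DataFrame({"InvoiceId": [1, 2], "Amount": [10.0, 20.0]})
df[None] = ["x", "y"]

# Find and remove columns with no usable name before handing the frame
# to to_sql.
bad = [c for c in df.columns if c is None or str(c).strip() == ""]
df = df.drop(columns=bad)
```

Comparing `list(df.columns)` against the table's column list (e.g. from INFORMATION_SCHEMA.COLUMNS) is another quick way to spot the mismatch.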
I would like to insert an entire row from a dataframe into SQL Server using pandas.
I can insert using the command below; however, I have 46+ columns and do not want to type out all 46 column names.
import pyodbc

server = 'server'
database = 'db'
cnxn = pyodbc.connect('DRIVER={SQL Server};SERVER=' + server + ';DATABASE=' + database)
cursor = cnxn.cursor()
for index, row in df.iterrows():
    cursor.execute("INSERT INTO HumanResources.DepartmentTest (DepartmentID, Name, GroupName) VALUES (?, ?, ?)",
                   row.DepartmentID, row.Name, row.GroupName)
cnxn.commit()
cursor.close()
Is there a way I can insert an entire row without giving the column names? Something like this?
insert into table1
select * from df
I tried the command below and it is failing:
for index, row in df.iterrows():
    cursor.execute("INSERT INTO dbo.Staging SELECT row")
Error:('42S22', "[42S22] [Microsoft][ODBC SQL Server Driver][SQL Server]Invalid column name 'row'. (207) (SQLExecDirectW); [42S22] [Microsoft][ODBC SQL Server Driver][SQL Server]Column name or number of supplied values does not match table definition. (213)")
I can't use to_sql because I cannot import sqlalchemy in UAT or prod.
Can anyone help me with this?
Along with the statement below, mention the column names:
insert into table1 (col1, col2, ..., colN)
select col1, col2, ..., colN from df
Ignore identity columns.
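Since listing 46+ names by hand is the pain point, the column list can be generated from the dataframe itself with plain pyodbc, no sqlalchemy needed. A sketch, assuming every dataframe column exists in the target table (identity columns would have to be dropped from df first); the stand-in table and data are illustrative:

```python
import pandas as pd

# Stand-in dataframe; the real one has 46+ columns.
df = pd.DataFrame({"DepartmentID": [1], "Name": ["Sales"], "GroupName": ["Ops"]})

# Derive the column list and parameter placeholders from the dataframe
# so none of the names have to be typed out.
cols = ", ".join(f"[{c}]" for c in df.columns)
placeholders = ", ".join("?" for _ in df.columns)
sql = f"INSERT INTO HumanResources.DepartmentTest ({cols}) VALUES ({placeholders})"
rows = list(df.itertuples(index=False, name=None))  # rows as plain tuples

# With a live pyodbc cursor one would then run:
# cursor.fast_executemany = True
# cursor.executemany(sql, rows)
# cnxn.commit()
```

executemany with fast_executemany is also much faster than calling execute once per row in a loop.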
I have a huge table (147 columns) and I would like to know if it is possible to create the table in SQL server from my pandas dataframe so I don't have to CREATE TABLE for 147 columns.
I am trying to follow what is answered in most of the related questions:
params = urllib.parse.quote_plus("DRIVER={ODBC Driver 17 for SQL Server};SERVER=DESKTOP-LFOSSEF;DATABASE=test;UID=xxxx;PWD=xxx")
engine = create_engine("mssql+pyodbc:///?odbc_connect=%s" % params)
connection = engine.raw_connection()
df.to_sql("table_name", connection, index=False)
The user and password work because that's what I am using to sign in into sqlserver.
When I run this, I get:
Execution failed on sql 'SELECT name FROM sqlite_master WHERE type='table' AND name=?;': ('42S02', "[42S02] [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Invalid object name 'sqlite_master'. (208) (SQLExecDirectW); [42S02] [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Statement(s) could not be prepared. (8180)")
If I remove the connection = engine.raw_connection() line and use engine directly, I get:
AttributeError: 'Engine' object has no attribute 'cursor'
Any idea what is wrong? Do I need to create an empty table first? I've been troubleshooting for hours and can't find any reason why this isn't working.
Do it like this:
from sqlalchemy import create_engine

engine = create_engine(
    "mssql+pyodbc://Your_Server_Name/Your_DB_Name"
    "?driver=SQL+Server+Native+Client+11.0&trusted_connection=yes"
)
df.to_sql(x, engine, if_exists='append', index=True)
Here df is your dataframe and x is the name of your table in SQL Server.
I am trying to upload a dataframe to a temporary table (using pandas to_sql method) in SQL Server but having problems. I am able to upload dataframes to 'normal' tables in SQL fine.
The error I get is below; it tells me that a temporary table called #d already exists.
ProgrammingError: (pyodbc.ProgrammingError) ('42S01', "[42S01] [Microsoft][ODBC SQL Server Driver][SQL Server]There is already an object named '#d' in the database. (2714) (SQLExecDirectW)")
[SQL:
CREATE TABLE [#d] (
However, if I uncomment and run the DROP TABLE #d in my code below, I get the error below, even though I do have permissions to create and drop tables:
ProgrammingError: (pyodbc.ProgrammingError) ('42S02', "[42S02] [Microsoft][ODBC SQL Server Driver][SQL Server]Cannot drop the table '#d', because it does not exist or you do not have permission. (3701) (SQLExecDirectW)")
[SQL: DROP TABLE #d]
(Background on this error at: http://sqlalche.me/e/f405)
The errors seem contradictory to me.
My code is below.
engine = create_engine("mssql+pyodbc:///?odbc_connect={}".format(params))
cnxn = engine.connect()
# q = """DROP TABLE #d"""
# cnxn.execute(q)
q = """
CREATE TABLE #d(id int,
time_stamp datetime,
pressure float)
"""
cnxn.execute(q)
# upload data into temp table
df.to_sql('#d', cnxn, if_exists='append', index=False)
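A likely explanation for the "contradictory" errors (my reading, not something stated in the question): temp tables are scoped to the database session that created them. to_sql's table-existence check does not see #d, so it issues its own CREATE TABLE on the session that already has one (error 2714), while the DROP may run on a different pooled connection where #d never existed (error 3701). The sketch below uses sqlite3 purely to make the per-session scoping self-contained to demonstrate; SQL Server #temp tables behave analogously per session:

```python
import sqlite3

# Two separate connections, as a pooled SQLAlchemy engine may hand out
# for successive execute() calls.
a = sqlite3.connect(":memory:")
b = sqlite3.connect(":memory:")

# Connection a creates a temp table; it exists only in a's session.
a.execute("CREATE TEMP TABLE d (id INTEGER)")

def sees_temp_d(conn):
    """Return True if this connection's session can see the temp table."""
    try:
        conn.execute("SELECT * FROM temp.d")
        return True
    except sqlite3.OperationalError:
        return False

a_sees = sees_temp_d(a)
b_sees = sees_temp_d(b)
```

The usual workaround is to keep every statement, the CREATE TABLE, the inserts, and any DROP, on one and the same connection (e.g. a single engine.connect() session or a raw DBAPI connection) rather than letting to_sql manage the temp table.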
I have a pandas dataframe called data.
I am trying to write this dataframe to a table in SQL Server.
I am able to read data into Python from SQL, but I am experiencing problems loading the dataframe into a table.
I have tried a few examples but keep getting the same error:
DatabaseError: Execution failed on sql 'SELECT name FROM sqlite_master WHERE type='table' AND name=?;': ('42S02', "[42S02] [Microsoft][ODBC SQL Server Driver][SQL Server]Invalid object name 'sqlite_master'. (208) (SQLExecDirectW); [42S02] [Microsoft][ODBC SQL Server Driver][SQL Server]Statement(s) could not be prepared. (8180)")
At the moment I have the following code:
import pandas as pd
import pyodbc

PASSW = 'test'
SERVER = '111.111.11'
DB = 'Database'
Table = 'TableName'

data = pd.read_csv('2019_Prediction_24_10.csv')
cnxn = pyodbc.connect(DRIVER='{ODBC Driver 13 for SQL Server}', SERVER=SERVER,
                      DATABASE=DB, User='User', Password=PASSW)
data.to_sql(con=cnxn, name='Predictions', schema='PA', if_exists='replace')
I am new to pyodbc and to using Python together with SQL Server, and I'm not quite sure what is going wrong, let alone how to fix it.
Can someone please assist me, or point me in the right direction?
As noted in the to_sql documentation:
con : sqlalchemy.engine.Engine or sqlite3.Connection
You have supplied to_sql with a (pyodbc) Connection object, so pandas is treating it like a SQLite connection. To use to_sql with SQL Server you'll need to install SQLAlchemy, create an Engine object, and pass that to to_sql.
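A sketch of that fix using the values from the question; the UID/PWD keywords are my assumption about how the User/Password credentials map into the ODBC string:

```python
import urllib.parse

# Rebuild the question's connection details as an ODBC string,
# percent-encoded so it can be embedded in an SQLAlchemy URL.
odbc_str = (
    "DRIVER={ODBC Driver 13 for SQL Server};SERVER=111.111.11;"
    "DATABASE=Database;UID=User;PWD=test"
)
params = urllib.parse.quote_plus(odbc_str)

# With SQLAlchemy installed, one would then create the engine and pass
# it (not a pyodbc connection) to to_sql:
# from sqlalchemy import create_engine
# engine = create_engine("mssql+pyodbc:///?odbc_connect=%s" % params)
# data.to_sql(con=engine, name='Predictions', schema='PA', if_exists='replace')
```

With an Engine, pandas uses the SQL Server dialect for its existence checks instead of querying sqlite_master, which is exactly the failing statement in the error above.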
Apologies if it is a repeat, I could not find a related question based on my search.
I am trying to load data from a MS SQL Server and the following works:
connection = pyodbc.connect(driver='SQL Server',
                            server=server_name,
                            database=database_name,
                            trusted_connection='yes')
df = pd.read_sql('SELECT * FROM MyTables.Table1', connection)
However, this fails:
df = pd.read_sql('MyTables.Table1', connection)
with error:
DatabaseError: Execution failed on sql 'MyTables.Table1': ('42000', "[42000] [Microsoft][ODBC SQL Server Driver][SQL Server]The request for procedure 'Table1' failed because 'Table1' is a table object. (2809) (SQLExecDirectW)")
I understand that read_sql_table() needs an SQLAlchemy connectable to make this work, but I thought read_sql() would work with a pyodbc connection?
How would reading from a table name work with read_sql()?
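For context: read_sql is a convenience wrapper that dispatches to read_sql_table only when it is given both a bare table name and an SQLAlchemy connectable; with a raw DBAPI connection it always executes the string as a statement, which is why SQL Server tries to EXEC 'MyTables.Table1' as a procedure (error 2809). A small sketch of the same behavior, using sqlite3 only so the example is self-contained:

```python
import sqlite3

import pandas as pd

# With a plain DBAPI connection, read_sql treats the string as a
# statement to execute, never as a table name.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (x INTEGER)")
con.executemany("INSERT INTO t VALUES (?)", [(1,), (2,)])
con.commit()

df = pd.read_sql("SELECT * FROM t", con)  # query form works on any DBAPI connection

# pd.read_sql("t", con) would send the bare statement "t" to the driver,
# which fails. With an SQLAlchemy engine (URL below is an assumption),
# the table-name form dispatches to read_sql_table instead:
# engine = create_engine("mssql+pyodbc:///?odbc_connect=...")
# df = pd.read_sql("MyTables.Table1", engine)
```

So with pyodbc alone, stick to the 'SELECT * FROM MyTables.Table1' form; reading by table name requires an SQLAlchemy connectable.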