Create table and Insert to SQL Server using Python

I have a huge table (147 columns) and I would like to know if it is possible to create the table in SQL Server from my pandas dataframe, so I don't have to write a CREATE TABLE statement for 147 columns.
I am trying to follow what is answered in most of the related questions:
params = urllib.parse.quote_plus("DRIVER={ODBC Driver 17 for SQL Server};SERVER=DESKTOP-LFOSSEF;DATABASE=test;UID=xxxx;PWD=xxx")
engine = create_engine("mssql+pyodbc:///?odbc_connect=%s" % params)
connection = engine.raw_connection()
df.to_sql("table_name", connection,index=False)
The user and password work because they are what I use to sign in to SQL Server.
When I run this, I get:
Execution failed on sql 'SELECT name FROM sqlite_master WHERE type='table' AND name=?;': ('42S02', "[42S02] [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Invalid object name 'sqlite_master'. (208) (SQLExecDirectW); [42S02] [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Statement(s) could not be prepared. (8180)")
If I remove the connection = engine.raw_connection() and use engine directly, I get:
AttributeError: 'Engine' object has no attribute 'cursor'
Any idea what is wrong? Do I need to create an empty table first? I've been troubleshooting for hours and can't find any reason why this isn't working.

Do it like this:
from sqlalchemy import create_engine
engine = create_engine("mssql+pyodbc://Your_Server_Name/Your_DB_Name?driver=SQL+Server+Native+Client+11.0&trusted_connection=yes")
df.to_sql(x, engine, if_exists='append', index=True)
Here df is your dataframe and x is the name of the table in SQL Server. Note that the connection string must be wrapped in create_engine() (a bare string is not an engine), and the second URL parameter is joined with & rather than a second ?.
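To the original 147-column concern: to_sql issues the CREATE TABLE itself, inferring one column per DataFrame column, so no hand-written DDL is needed. A minimal sketch, using an in-memory SQLite engine as a stand-in (for SQL Server, substitute the mssql+pyodbc engine from the answer above):

```python
import pandas as pd
from sqlalchemy import create_engine, inspect

def upload_dataframe(df, engine, table_name):
    # to_sql creates the table from the DataFrame's own schema,
    # so a wide (e.g. 147-column) table needs no CREATE TABLE.
    df.to_sql(table_name, engine, if_exists='replace', index=False)

engine = create_engine("sqlite://")  # stand-in for the mssql+pyodbc engine
df = pd.DataFrame({"a": [1, 2], "b": ["x", "y"]})
upload_dataframe(df, engine, "table_name")
print(inspect(engine).get_table_names())  # ['table_name']
```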

Related

How to store a data frame in a sql server database with to_sql?

I'm writing code to store a dataframe in my SQL Server database. The code is as follows:
import pandas as pd
from sqlalchemy import create_engine
server='my_server'
database='my_database'
driver='ODBC Driver 17 for SQL Server'
database_con=f'mssql+pyodbc://#{server}/{database}?driver={driver}'
engine=create_engine(database_con)
con=engine.connect()
data = [ 'test','test1', 'tests']
df_data = pd.DataFrame(data)
df_data.to_sql('dbo.test', con, index=False)
However, when I run the code I get an error:
ProgrammingError: (pyodbc.ProgrammingError) ('42000', '[42000] [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]The specified schema name "name#email.nl" either does not exist or you do not have permission to use it. (2760) (SQLExecDirectW)')
[SQL:
CREATE TABLE [dbo.hoi] (
[0] VARCHAR(max) NULL
)
]
(Background on this error at: http://sqlalche.me/e/f405)
Does somebody know how to solve the problem?
Cheers,
Vincent
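A likely fix, assuming the stray # in mssql+pyodbc://#{server} is leftover interpolation syntax from another language: drop it, and pass the schema to to_sql via the schema parameter instead of gluing it onto the table name (pandas otherwise creates a table literally named "dbo.test"). A sketch with the question's placeholder names:

```python
# Corrected connection string: no "#" before the server name, and the
# schema is passed to to_sql separately rather than prefixed onto the
# table name. Placeholder values are the question's.
server = 'my_server'
database = 'my_database'
driver = 'ODBC Driver 17 for SQL Server'

database_con = f'mssql+pyodbc://{server}/{database}?driver={driver}'
print(database_con)

# The actual upload needs a live server, so only sketched here:
# from sqlalchemy import create_engine
# engine = create_engine(database_con)
# df_data.to_sql('test', engine, schema='dbo', index=False)
```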

Pandas to ODBC connection with to_sql

I'm trying to export a pandas DataFrame into an MS Access table through pyodbc.
conn = pyodbc.connect(r'Driver={Microsoft Access Driver (*.mdb, *.accdb)};DBQ=my_db.accdb;')
df.to_sql('test', conn, index=False)
DatabaseError: Execution failed on sql 'SELECT name FROM sqlite_master WHERE type='table' AND name=?;':
('42S02', "[42S02] [Microsoft][ODBC Microsoft Access Driver]
The Microsoft Access database engine cannot find the input table or query 'sqlite_master'.
Make sure it exists and that its name is spelled correctly. (-1305) (SQLExecDirectW)")
sqlite_master? Where does that come from?
.to_sql() expects the second argument to be either a SQLAlchemy Connectable object or a DBAPI Connection object. If it is the latter then pandas assumes that it is a SQLite connection.
You need to use the sqlalchemy-access dialect.
(Disclosure: I maintain that dialect.)
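A sketch of what that looks like, assuming the sqlalchemy-access package is installed (pip install sqlalchemy-access registers the access+pyodbc dialect; the engine/to_sql lines are commented out since they need the Access driver present):

```python
import urllib.parse

# Same ODBC connection string as the question, URL-encoded for SQLAlchemy.
connection_string = (
    r'Driver={Microsoft Access Driver (*.mdb, *.accdb)};'
    r'DBQ=my_db.accdb;'
)
url = "access+pyodbc:///?odbc_connect=" + urllib.parse.quote_plus(connection_string)
print(url)

# import sqlalchemy as sa  # requires: pip install sqlalchemy-access
# engine = sa.create_engine(url)
# df.to_sql('test', engine, index=False)
```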

Upload pandas dataframe to a temporary table in SQL Server

I am trying to upload a dataframe to a temporary table (using pandas to_sql method) in SQL Server but having problems. I am able to upload dataframes to 'normal' tables in SQL fine.
The error I get is below; it tells me that a temporary table called #d already exists.
ProgrammingError: (pyodbc.ProgrammingError) ('42S01', "[42S01] [Microsoft][ODBC SQL Server Driver][SQL Server]There is already an object named '#d' in the database. (2714) (SQLExecDirectW)")
[SQL:
CREATE TABLE [#d] (
However, if I uncomment and run the DROP TABLE #d (in my code below), I get the error below, even though I do have permission to create and drop tables:
ProgrammingError: (pyodbc.ProgrammingError) ('42S02', "[42S02] [Microsoft][ODBC SQL Server Driver][SQL Server]Cannot drop the table '#d', because it does not exist or you do not have permission. (3701) (SQLExecDirectW)")
[SQL: DROP TABLE #d]
(Background on this error at: http://sqlalche.me/e/f405)
These two errors seem contradictory to me.
My code is below.
engine = create_engine("mssql+pyodbc:///?odbc_connect={}".format(params))
cnxn = engine.connect()
# q = """DROP TABLE #d"""
# cnxn.execute(q)
q = """
CREATE TABLE #d(id int,
time_stamp datetime,
pressure float)
"""
cnxn.execute(q)
# upload data into temp table
df.to_sql('#d', cnxn, if_exists='append', index=False)
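One possible workaround, since pandas' table-existence check apparently cannot see session-scoped temp tables (so to_sql re-issues CREATE TABLE and collides with the table you just made): create and fill the temp table yourself on a single DBAPI connection, bypassing to_sql. A sketch using plain DBAPI calls, demonstrated with SQLite as a stand-in; the comment shows the SQL Server variant with the question's columns:

```python
import sqlite3

def fill_temp_table(cnxn, create_sql, insert_sql, rows):
    # Everything runs on one DBAPI connection, so the temporary table
    # stays visible for the whole create-and-insert sequence.
    cur = cnxn.cursor()
    cur.execute(create_sql)
    cur.executemany(insert_sql, rows)
    cnxn.commit()

# Against SQL Server you would pass engine.raw_connection() and use:
#   "CREATE TABLE #d (id int, time_stamp datetime, pressure float)"
#   "INSERT INTO #d (id, time_stamp, pressure) VALUES (?, ?, ?)"
#   list(df.itertuples(index=False, name=None))
conn = sqlite3.connect(":memory:")
fill_temp_table(
    conn,
    "CREATE TEMP TABLE d (id int, time_stamp text, pressure real)",
    "INSERT INTO d VALUES (?, ?, ?)",
    [(1, "2020-01-01 00:00:00", 1.5)],
)
print(conn.execute("SELECT COUNT(*) FROM d").fetchone()[0])  # 1
```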

Interface Error when importing Pandas data frame into SQL Server

I have a pandas data frame called: data
I am trying to write this pandas dataframe to a table in SQL Server.
I am able to read data from SQL Server into Python, but I am experiencing problems loading the dataframe into a table.
I have tried a few examples but keep getting the same error:
DatabaseError: Execution failed on sql 'SELECT name FROM sqlite_master WHERE type='table' AND name=?;': ('42S02', "[42S02] [Microsoft][ODBC SQL Server Driver][SQL Server]Invalid object name 'sqlite_master'. (208) (SQLExecDirectW); [42S02] [Microsoft][ODBC SQL Server Driver][SQL Server]Statement(s) could not be prepared. (8180)")
At the moment I have the following code:
import pandas as pd
import pyodbc

PASSW = 'test'
SERVER = '111.111.11'
DB = 'Database'
Table = 'TableName'
data = pd.read_csv('2019_Prediction_24_10.csv')
cnxn = pyodbc.connect(DRIVER='{ODBC Driver 13 for SQL Server}', SERVER=SERVER,
                      DATABASE=DB, User='User', Password=PASSW)
data.to_sql(con=cnxn, name='Predictions',schema = 'PA' ,if_exists='replace')
I am new to pyodbc and to using Python together with SQL Server, and I am not quite sure what is going wrong, let alone how to fix it.
Can someone please assist me, or point me in the right direction?
As noted in the to_sql documentation:
con : sqlalchemy.engine.Engine or sqlite3.Connection
You have supplied to_sql with a (pyodbc) Connection object, so pandas is treating it like a SQLite connection. To use to_sql with SQL Server you'll need to install SQLAlchemy, create an Engine object, and pass that to to_sql.
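A sketch of that fix, building the engine URL from the question's ODBC settings (server, database, and credentials are the question's placeholders; the engine/to_sql lines are commented out since they need a live server):

```python
import urllib.parse

# Question's connection settings, URL-encoded into an odbc_connect URL.
PASSW = 'test'
SERVER = '111.111.11'
DB = 'Database'

params = urllib.parse.quote_plus(
    f"DRIVER={{ODBC Driver 13 for SQL Server}};"
    f"SERVER={SERVER};DATABASE={DB};UID=User;PWD={PASSW}"
)
url = "mssql+pyodbc:///?odbc_connect=" + params
print(url)

# from sqlalchemy import create_engine
# engine = create_engine(url)
# data.to_sql('Predictions', engine, schema='PA', if_exists='replace', index=False)
```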

read_sql failing on table name

Apologies if it is a repeat, I could not find a related question based on my search.
I am trying to load data from a MS SQL Server and the following works:
connection = pyodbc.connect(driver='SQL Server',
server=server_name,
database=database_name,
trusted_connection='yes')
df = pd.read_sql('SELECT * FROM MyTables.Table1', connection)
However, this fails:
df = pd.read_sql('MyTables.Table1', connection)
with error:
DatabaseError: Execution failed on sql 'MyTables.Table1': ('42000', "[42000] [Microsoft][ODBC SQL Server Driver][SQL Server]The request for procedure 'Table1' failed because 'Table1' is a table object. (2809) (SQLExecDirectW)")
I understand that read_sql_table() needs a SQLAlchemy connectable to make this work, but I thought read_sql() would work with a pyodbc connection?
How would reading from a table name work with read_sql()?
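The likely explanation: with a plain DBAPI connection, read_sql always treats the string as a query to execute, and SQL Server runs a bare name as a stored procedure call (hence error 2809). With a SQLAlchemy connectable, read_sql recognizes a table name and dispatches to read_sql_table. A sketch with an in-memory SQLite engine as a stand-in:

```python
import pandas as pd
from sqlalchemy import create_engine

# Stand-in engine; for SQL Server this would be an mssql+pyodbc URL.
engine = create_engine("sqlite://")
pd.DataFrame({"a": [1, 2, 3]}).to_sql("Table1", engine, index=False)

# With an Engine, a bare table name dispatches to read_sql_table:
df = pd.read_sql("Table1", engine)
print(len(df))  # 3

# For a schema-qualified table like the question's MyTables.Table1,
# read_sql_table takes the schema separately:
# df = pd.read_sql_table("Table1", engine, schema="MyTables")
```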
