Context: I'd like to send a concatenated data frame (I joined several dataframes of individual stock data) into a MySQL database; however, I can't seem to create a table and send the data there.
Problem: When I run this code df.to_sql(name='stockdata', con=con, if_exists='append', index=False) (source: Writing a Pandas Dataframe to MySQL), I keep getting this error: pandas.io.sql.DatabaseError: Execution failed on sql 'SELECT name FROM sqlite_master WHERE type='table' AND name=?;': not all arguments converted during string formatting.
I'm new to MySQL as well so any help is very welcome! Thank you
from __future__ import print_function
import pandas as pd
from datetime import date, datetime, timedelta
import numpy as np
import yfinance as yf
import mysql.connector
import pymysql as pymysql
import pandas_datareader.data as web
from sqlalchemy import create_engine
import yahoo_fin.stock_info as si
######################################################
# PyMySQL configuration
user = '...'
passw = '...'
host = '...'
port = 3306
database = 'stockdata'
con = pymysql.connect(host=host,
                      port=port,
                      user=user,
                      passwd=passw,
                      db=database,
                      charset='utf8')
con.cursor().execute("CREATE DATABASE IF NOT EXISTS {0}".format(database))
df.to_sql(name='stockdata', con=con, if_exists='append', index=False)
.to_sql() expects its con argument to be either a SQLAlchemy Connectable (an Engine or Connection) or a DBAPI connection. If it is the latter, pandas assumes it is a SQLite connection, which is why your traceback shows a query against sqlite_master.
You need to use SQLAlchemy to create an engine object
engine = create_engine("mysql+pymysql://…")
and pass that to to_sql()
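A minimal sketch of that approach, reusing the credentials defined above (and assuming the stockdata database already exists on the server):

from sqlalchemy import create_engine

# build the engine from the same connection details used for pymysql above
engine = create_engine(f"mysql+pymysql://{user}:{passw}@{host}:{port}/{database}")

# pandas now talks to MySQL through SQLAlchemy instead of assuming SQLite
df.to_sql(name='stockdata', con=engine, if_exists='append', index=False)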
Thanks for reading this. I’m learning python and have run into a bit of a blocker.
I’m trying to set up a basic database to join some data together and query it.
I've entered the code below but get this error when I run it:
no such table: customer
There is more to the error message, but I think if I solve this the rest will go away… hopefully.
import pandas as pd
from sqlalchemy import create_engine
engine = create_engine(f"sqlite:///HFSS.db")
conn = engine.connect()
customer = pd.read_excel('customer_orders.xlsx', index_col='CU_GTIN')
customer ## running this shows me a table
When I try to run a simple query in SQLite I get the error.
query = '''SELECT EccCustomerLevel3Name
FROM customer'''
pd.read_sql_query(query, conn)
Thanks for your time
D
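For context, a likely cause: pd.read_excel only loads the spreadsheet into a DataFrame in memory, so HFSS.db never receives a customer table; it has to be written out (for example with to_sql) before the query can find it. A minimal sketch, reusing the objects from the question:

# write the DataFrame into the SQLite file so the 'customer' table exists
customer.to_sql('customer', conn, if_exists='replace')

# the query from the question should now succeed
pd.read_sql_query('SELECT EccCustomerLevel3Name FROM customer', conn)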
I am creating the necessary connections followed by creating a dataframe which I want to send to Azure SQL Database.
I am getting stuck at the last part.
Any help will be greatly appreciated.
#The last line of code gives me the programming error as stated in the question
#Please, please try to help me with this, I will be eternally grateful
#Creating connections
import pandas as pd
from sqlalchemy import create_engine, MetaData, Table, select
from six.moves import urllib
params = urllib.parse.quote_plus(
    r'Driver={ODBC Driver 17 for SQL Server};'
    r'Server=tcp:abcd.sql.azuresynapse.net,1433;'
    r'Database=xxx;Uid=yyy;Pwd={zzz};'
    r'Encrypt=yes;TrustServerCertificate=yes;Connection Timeout=30;')
conn_str = 'mssql+pyodbc:///?odbc_connect={}'.format(params)
engine = create_engine(conn_str,connect_args={'autocommit': True})
engine.connect()
#Create dataframe
df = pd.DataFrame({'Name': ['A', 'B', 'C', 'D', 'E'],
                   'Subject': ['IUDI', 'KDBJSCJ', 'SJIJSABCIBSA', 'DCOSANNOA', 'SDOISD'],
                   'Marks': [659 for i in range(0, 5)],
                   'GPA': [8.0 for i in range(0, 5)]})
#Export Dataframe to sql (Problem code)
df.to_sql(name='demo_table',con=engine,index=False)
I got the same error creating / replacing a table on Azure Synapse until I manually specified dtypes as sqlalchemy column types (see the pandas documentation):
from sqlalchemy.dialects.mssql import NVARCHAR, INTEGER, FLOAT
col_types = {"Name": NVARCHAR(1),
"Subject": NVARCHAR(12),
"Marks": INTEGER,
"GPA": FLOAT}
df.to_sql(name="demo_table", con=engine, dtype=col_types, if_exists="replace")
It is not the first time Azure Synapse turns out to have poorer support than plain SQL Server. I imagine it simply has problems inferring the data type.
I'm unable to understand why my SQL query is throwing the exception [Oracle][ODBC][Ora]ORA-00936: missing expression.
The case is that the code seems to be working fine when I'm using
select * from reports.ORDERS_NOW.
So it's letting me pull all the data, but for my case, I want only specific columns for which I'm writing the query. Please look at the code below and let me know what's wrong with it.
import pyodbc
import pandas as pd
conn = pyodbc.connect('DSN=abcd;UID=xxxxxx;PWD=xxxxxx')
if conn:
    print("Connection is successful")
# db query
sql = '''
select [QUANTITY] from reports.ORDERS_NOW
'''
df = pd.read_sql(sql,conn)
I think [] is not allowed in Oracle (square brackets are SQL Server identifier quoting), so remove them:
select QUANTITY from reports.ORDERS_NOW
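Applied to the code above, the corrected query would then be:

sql = '''
select QUANTITY from reports.ORDERS_NOW
'''
df = pd.read_sql(sql, conn)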
I am trying to understand how Python could pull data from an FTP server into pandas and then move it into SQL Server. My code here is very rudimentary, to say the least, and I am looking for any advice or help at all. I have tried to load the data from the FTP server first, which works fine. If I then remove this code and change it to a select from MS SQL Server, it is fine, so the connection string works; it is the insertion into SQL Server that seems to be causing problems.
import pyodbc
import pandas
from ftplib import FTP
from StringIO import StringIO
import csv
ftp = FTP ('ftp.xyz.com','user','pass' )
ftp.set_pasv(True)
r = StringIO()
ftp.retrbinary('RETR filname.csv', r.write)
df = pandas.read_table(StringIO(r.getvalue()), delimiter=',')
connStr = ('DRIVER={SQL Server Native Client 10.0};SERVER=localhost;DATABASE=TESTFEED;UID=sa;PWD=pass')
conn = pyodbc.connect(connStr)
cursor = conn.cursor()
cursor.execute("INSERT INTO dbo.tblImport(Startdt, Enddt, x,y,z,)" "VALUES (x,x,x,x,x,x,x,x,x,x.x,x)")
conn.commit()
cursor.close()
conn.close()
print "Script has successfully run!"
When I remove the ftp code this runs perfectly, but I do not understand how to make the next jump to get this into Microsoft SQL server, or even if it is possible without saving into a file first.
For the 'write to sql server' part, you can use the convenient to_sql method of pandas (so no need to iterate over the rows and do the insert manually). See the docs on interacting with SQL databases with pandas: http://pandas.pydata.org/pandas-docs/stable/io.html#io-sql
You will need at least pandas 0.14 to have this working, and you also need sqlalchemy installed. An example, assuming df is the DataFrame you got from read_table:
import sqlalchemy
import pyodbc
engine = sqlalchemy.create_engine("mssql+pyodbc://<username>:<password>#<dsnname>")
# write the DataFrame to a table in the sql database
df.to_sql("table_name", engine)
See also the documentation page of to_sql.
More info on how to create the connection engine with sqlalchemy for SQL Server with pyodbc can be found here: http://docs.sqlalchemy.org/en/rel_1_1/dialects/mssql.html#dialect-mssql-pyodbc-connect
But if your goal is to just get the csv data into the SQL database, you could also consider doing this directly from SQL. See eg Import CSV file into SQL Server
Python3 version using a LocalDB SQL instance:
from sqlalchemy import create_engine
import urllib
import pyodbc
import pandas as pd
df = pd.read_csv("./data.csv")
quoted = urllib.parse.quote_plus("DRIVER={SQL Server Native Client 11.0};SERVER=(localDb)\ProjectsV14;DATABASE=database")
engine = create_engine('mssql+pyodbc:///?odbc_connect={}'.format(quoted))
df.to_sql('TargetTable', schema='dbo', con = engine)
result = engine.execute('SELECT COUNT(*) FROM [dbo].[TargetTable]')
result.fetchall()
Yes, the bcp utility seems to be the best solution for most cases.
If you want to stay within Python, the following code should work.
from sqlalchemy import create_engine
import urllib
import pyodbc
quoted = urllib.parse.quote_plus("DRIVER={SQL Server};SERVER=YOUR\ServerName;DATABASE=YOur_Database")
engine = create_engine('mssql+pyodbc:///?odbc_connect={}'.format(quoted))
df.to_sql('Table_Name', schema='dbo', con = engine, chunksize=200, method='multi', index=False, if_exists='replace')
Don't skip method='multi'; it significantly reduces the task execution time.
Sometimes you may encounter the following error.
ProgrammingError: ('42000', '[42000] [Microsoft][ODBC SQL Server
Driver][SQL Server]The incoming request has too many parameters. The
server supports a maximum of 2100 parameters. Reduce the number of
parameters and resend the request. (8003) (SQLExecDirectW)')
In such a case, determine the number of columns in your dataframe: df.shape[1]. Divide the maximum supported number of parameters by this value and use the result's floor as a chunk size.
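For example, a quick way to derive a chunk size from that rule:

# SQL Server allows at most 2100 parameters per statement;
# a multi-values insert uses one parameter per column per row
max_params = 2100
chunksize = max_params // df.shape[1]

df.to_sql('Table_Name', schema='dbo', con=engine, chunksize=chunksize,
          method='multi', index=False, if_exists='replace')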
I found that using the bcp utility (https://learn.microsoft.com/en-us/sql/tools/bcp-utility) works best when you have a large dataset. I have 2.7 million rows that insert at 80K rows/sec. You can store your data frame as a csv file (use tabs as the separator if your data doesn't contain tabs, and utf8 encoding). With bcp, I've used the "-c" format and it has worked without issues so far.
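A rough sketch of that workflow (the server, database, table name and credentials below are placeholders, not values from the question; -c selects character format and -t sets the field terminator):

import subprocess

# write the frame as tab-separated text with no header row, which is what bcp will load
df.to_csv('data.tsv', sep='\t', index=False, header=False, encoding='utf-8')

# placeholder server, database, table and credentials
subprocess.run([
    'bcp', 'MyDatabase.dbo.MyTable', 'in', 'data.tsv',
    '-S', 'myserver', '-U', 'myuser', '-P', 'mypwd',
    '-c', '-t', '\t',
], check=True)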
This worked for me on Python 3.5.2:
import sqlalchemy as sa
import urllib
import pyodbc
conn = urllib.parse.quote_plus('DRIVER={ODBC Driver 17 for SQL Server};SERVER='+server+';DATABASE='+database+';UID='+username+';PWD='+password)
engine = sa.create_engine('mssql+pyodbc:///?odbc_connect={}'.format(conn))
frame.to_sql("myTable", engine, schema='dbo', if_exists='append', index=False, index_label='myField')
"As the Connection represents an open resource against the database, we want to always limit the scope of our use of this object to a specific context, and the best way to do that is by using Python context manager form, also known as the with statement."
https://docs.sqlalchemy.org/en/14/tutorial/dbapi_transactions.html
The example would then be:
from sqlalchemy import create_engine
import urllib
import pyodbc
connection_string = (
    "Driver={SQL Server Native Client 11.0};"
    "Server=myserver;"
    "UID=myuser;"
    "PWD=mypwd;"
    "Database=mydb;"
)
quoted = urllib.parse.quote_plus(connection_string)
engine = create_engine(f'mssql+pyodbc:///?odbc_connect={quoted}')
with engine.connect() as cnn:
    df.to_sql('mytable', con=cnn, if_exists='replace', index=False)
The following is what worked for me using sqlalchemy. Pay attention to the last part, '?driver=SQL+Server'.
import sqlalchemy
import pyodbc
engine = sqlalchemy.create_engine('mssql+pyodbc://MyUser:MyPWD@dataserver.sandbox.myserver/MY_DB?driver=SQL+Server')
dt.to_sql("PatientResultTest", engine,if_exists='append')
The SQL table needs an index column at the beginning to store the index value of the dataframe.
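If you don't want that extra index column in the table, pandas can also skip writing it, e.g.:

dt.to_sql("PatientResultTest", engine, if_exists='append', index=False)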
# using class function
import pandas as pd
import pyodbc
import sqlalchemy
import urllib
class data_frame_to_sql():
    def __init__(self, dataFrame, sql_table_name):
        self.dataFrame = dataFrame
        self.sql_table_name = sql_table_name

    def conversion(self):
        params = urllib.parse.quote_plus("DRIVER={SQL Server};"
                                         "SERVER=######;"
                                         "DATABASE=####;"
                                         "UID=#####;"
                                         "PWD=###;")
        try:
            engine = sqlalchemy.create_engine("mssql+pyodbc:///?odbc_connect={}".format(params))
            return f"Table '{self.sql_table_name}' added successfully to the database", self.dataFrame.to_sql(self.sql_table_name, engine)
        except Exception as e:
            e = str(e).replace(".", "")
            print(f"{e} in Database.")

data = {"BusinessEntityID": ["1", "2", "3"],
        "FirstName": ["raj", "abhi", "amir"],
        "LastName": ["kapoor", "bachn", "khhan"]}
df = pd.DataFrame(data, columns=['BusinessEntityID', 'FirstName', 'LastName'])
ab = data_frame_to_sql(df, "ab").conversion()
print(ab)
It's not necessary to use sqlalchemy; one could create a connection with pyodbc directly and use it with pandas, as below:
with pyodbc.connect('DRIVER={ODBC Driver 18 for SQL Server};SERVER='+server
                    +';DATABASE='+database+';UID='+username+';PWD='+password) as newconn:
    df = pd.read_sql(query, newconn)  # query is your SQL string