I have never used Python before. I want to use a procedure already written in a SQL Server script, and then write a Python application that connects to the database (the connection already works) rather than a Java one, because this is much easier.
The user gives an input for the @Departments parameter, and the application should calculate that department's average salary.
My SQL Server procedure code:
CREATE PROC aaatest @Departments varchar(40)
AS
BEGIN
    SELECT AVG(P.Salary)
    FROM Company P
    WHERE P.Department = @Departments
END
Please write a Python application to get the input and pass it to @Departments. (conn = pymssql ... is already done!)
I guess this should give you a hint of how to script your requirement:
import pyodbc

# Use either Trusted_Connection=yes (Windows authentication) or UID/PWD
# (SQL Server authentication), not both mixed together.
cnxn = pyodbc.connect(r'Driver={SQL Server};Server=<servername>;UID=<username>;PWD=<password>', autocommit=True)
cursor = cnxn.cursor()

department = raw_input('Enter department: ')
# Pass the value as a parameter instead of concatenating it into the
# query string, which would be open to SQL injection.
cursor.execute('EXEC [master].[dbo].[aaatest] @Departments = ?', department)
row = cursor.fetchone()
if row:
    print 'Average salary:', row[0]
cnxn.close()
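Since the question says the pymssql connection is already done, here is a minimal sketch of the same call with pymssql instead (assuming conn is the already-open connection and the procedure lives in the current database; pymssql uses %s placeholders):
# Minimal pymssql sketch; `conn` is assumed to be the existing connection.
cursor = conn.cursor()
department = raw_input('Enter department: ')
cursor.execute('EXEC aaatest %s', (department,))
row = cursor.fetchone()
print 'Average salary:', row[0]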
I think I'm going mad here... again :). I'm trying to do the simplest thing on the planet and it doesn't work for some reason unknown to me. I have a Python script that connects to an MS SQL database using pypyodbc and does stuff. When I insert data into the database, it works. When I try to extract it, it fails miserably. What am I doing wrong?
import pypyodbc as mssql

msConnErr = None
try:
    msconn = mssql.connect('DRIVER={SQL Server};SERVER=server_name;DATABASE=database;TRUSTED_CONNECTION=True')
    print('Source server connected')
    srcCursor = msconn.cursor()
except:
    print('Source server error')
    msConnErr = True

srcCursor.execute("SELECT * FROM schema.table")
srcResult = srcCursor.fetchall()
print(srcResult)
The connection works, as I'm given a success message. I can also see, using SQL Server Management Studio, that my script is connected to the correct database, so I know I'm working in the right environment. The error I'm getting is:
UndefinedTable: relation "schema.table" does not exist
LINE 1: SELECT * FROM schema.table
The table exists, and I must specify the schema because I have the same table name in different schemas (data lifecycle). I can extract data from it using SQL Server Management Studio, yet Python fails miserably. It doesn't fail to insert 35 million rows into it using the same driver. No other query works: even SELECT @@VERSION fails, SELECT TOP (10) * FROM schema.table fails, etc.
Any ideas?
Basically, I had a piece of code that would rebind the srcCursor variable to a cursor from another connection, and obviously that relation wouldn't be present on the other server. Apologies!
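For anyone who hits the same symptom, here is a minimal hypothetical sketch of the pitfall; the psycopg2 connection and its parameters are illustrative assumptions, not from my actual script:
import pypyodbc as mssql
import psycopg2  # hypothetical second driver; the error text above is PostgreSQL-style

msconn = mssql.connect('DRIVER={SQL Server};SERVER=server_name;DATABASE=database;TRUSTED_CONNECTION=True')
srcCursor = msconn.cursor()     # cursor on the SQL Server connection

pgconn = psycopg2.connect('dbname=other_db host=other_server')
srcCursor = pgconn.cursor()     # bug: the name now points at another server

# This now runs against the wrong server, which raises
# UndefinedTable: relation "schema.table" does not exist
srcCursor.execute("SELECT * FROM schema.table")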
I am trying to use a local database on my computer via XAMPP. The modules are MySQL and Apache; as a YouTube video instructed, I had to start both of them. Now my IDLE doesn't recognize MySQL, but when I wrote sqlite3 it worked, although it had no place to take the database from. What can I do for my program to accept MySQL?
Thanks.
My code looks like this:
import sqlite3

print "Welcome To The Royal Bank Of The Thomean Kingdom"

db = sqlite3.connect(host = "127.0.0.1",
                     user = "root",
                     passwd = "",
                     db = "dbpython")
query = db.cursor()

loop = 'true'
while (loop == 'true'):
    username = raw_input("Username:")
    password = raw_input("Password:")
    if (query.execute("SELECT * FROM `USERS` WHERE `username`='" + username + "' AND `password`='" + password + "'")):
        db.commit()
        print "Logged In"
    else:
        db.commit()
        print "Failure"
sqlite3 is a built-in Python module, which is why it works out of the box; MySQL libraries will not import until you install them.
SQLite does not connect to MySQL or even to remote/local network addresses and ports. The SQLite protocol and data formats are not even the same as MySQL's.
You'll need to separately install a MySQL module, such as mysql-python.
There's no specific reason why a local XAMPP server couldn't use SQLite, and you can refer to the Python documentation for how to open/connect to a specific database file.
Similarly, I see no specific reason for XAMPP here. You could use Flask or Django, run entirely in Python, and only later add a database.
Also, please don't write SQL queries using "string + string" concatenation; it allows for SQL injection attacks. The solution is to use prepared statements (see the sketch after these notes).
Also, storing passwords in your database exactly as the user entered them (i.e. in plain text) is just asking for trouble.
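A minimal sketch combining the last two points, using the sqlite3 ? paramstyle (a MySQL driver would use %s placeholders), with sha256 purely for illustration; a real application should use a dedicated password hash such as bcrypt:
import hashlib

# Hash the password before it ever touches the database (sha256 shown
# only for illustration; prefer bcrypt/scrypt/argon2 in real code).
hashed = hashlib.sha256(password).hexdigest()

# Prepared statement: the driver escapes the values, so user input
# cannot break out of the SQL string.
query.execute("SELECT * FROM USERS WHERE username = ? AND password = ?",
              (username, hashed))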
I'm trying to migrate data from a MySQL DB to HANA using Python. At work we currently do this migration manually, but the plan is to run a script every day that collects the prior day's data (stored in MySQL) and moves it to HANA to use its analytics tools. I have written a script with two functions: one connects to MySQL and temporarily stores the data from the query in a Pandas DataFrame; the second uses the sqlalchemy-hana connector to create an engine that I feed into Pandas' to_sql function to store the data in HANA.
Below is the first function, which calls MySQL:
import pandas
import mysql.connector as myscon
from mysql.connector import errorcode

def connect_to_mysql(query):
    df = None
    stagedb = None
    try:
        # connect to the db
        stagedb = myscon.connect(
            user = 'user-name',
            password = 'password',
            host = 'awshost.com',
            database = 'sampletable',
            raise_on_warnings = True)
        df = pandas.read_sql(query, stagedb)
    except myscon.Error as err:
        if err.errno == errorcode.ER_ACCESS_DENIED_ERROR:
            print('Incorrect user name or password')
        elif err.errno == errorcode.ER_BAD_DB_ERROR:
            print('Database does not exist')
        else:
            print(err)
    finally:
        # close the connection whether or not the query succeeded
        if stagedb is not None:
            stagedb.close()
    return df
This is the second function, which connects to HANA:
from sqlalchemy import create_engine

def connect_to_hana(query):
    # connect to the HANA db
    try:
        engine = create_engine('hana://username:password@host:port')
        # get the dataframe from the first function
        to_df = connect_to_mysql(query)
        to_df.to_sql('sample_data', engine, if_exists = 'append',
                     index = False, chunksize = 20000)
    except:
        raise
My HANA DB has several schemas in the catalog folder, many of them "SYS"- or "_SYS"-related. I have created a separate schema, with the same name as my username, to test my code on and play around in.
My questions are as follows: 1) Is there a more efficient way to load data from MySQL to HANA without a go-between like a CSV file or, in my case, a Pandas DataFrame? Using VS Code, it takes around 90 seconds for the script to complete. 2) When using the sqlalchemy-hana connector, how does it know which schema to create the table in and store/append the data to? The read-me file didn't really explain. Luckily it's storing it in the right schema (the one with my username), but I created another one as a test and of course the table didn't show up under that one. If I try to specify the database in the create_engine line like so:
engine = create_engine('hana://username:password@host:port/Username')
I get this error: TypeError: connect() got an unexpected keyword argument 'database'.
Also, I noticed that if I run my script twice and count the number of rows in the created table, it adds the rows twice, essentially creating duplicates. Because of this, 3) would it be better to iterate through the rows of the DataFrame and insert them one by one using the pyhdb package?
Any advice/suggestions/answers will be very much appreciated! Thank you!
Gee... that seems like a rather complicated workflow. Alternatively, you may want to check out the HANA features Smart Data Access (SDA) and Smart Data Integration (SDI). With these, you could either establish "virtual" data access in SAP HANA, that is, read the data from the MySQL DB into the HANA process only when you run your analytics query, or actually load the data into HANA, making it a data mart.
If it is really just about the "piping" for this data transfer, I probably wouldn't put 3rd party tools into the scenario. This only makes the setup more complicated than necessary.
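If you do stay with the Python/pandas route, one detail relevant to question 2: to_sql accepts a schema argument, so the target schema can be named explicitly instead of relying on the connection's default. A minimal sketch ('MYSCHEMA' is a placeholder):
# Explicitly target a schema rather than the user's default one.
to_df.to_sql('sample_data', engine, schema = 'MYSCHEMA',
             if_exists = 'append', index = False, chunksize = 20000)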
I'm trying to enter data from Python 3.4 into a MySQL database, with both of these entities living within PythonAnywhere. In other words, I'm writing a Python 3.4 program in PythonAnywhere and need to connect to a MySQL db also within PythonAnywhere. I've checked other answers here and haven't quite figured it out. I've tried the (-u -h -p) syntax mentioned in a few posts, but I'm not sure if that is just for gaining access from outside of PythonAnywhere.
Any help would be appreciated.
++++++++++++++++++++
Actually, I figured it out (with kudos to Tony Darnell at the Scripting MySQL website, from whom I plagiarized most of this):
import MySQLdb

# Everything below starting with 'Your' refers to your personal account
# info with PythonAnywhere; everything else gets listed exactly as shown.
# Watch that $ following your user name as part of the database name (db = etc.).
db = MySQLdb.connect(
    host = 'Your_User_Name.mysql.pythonanywhere-services.com',
    user = 'Your_User_Name',
    passwd = 'Your_pythonAnywhere_password',
    db = 'Your_User_Name$Your_Data_Base')

cursor = db.cursor()

# Execute the SQL query using the execute() method.
cursor.execute("Enter any MySQL query here. Use the quotes, no semi-colon.")

# Fetch a single row from the query using the fetchone() method.
row = cursor.fetchone()
print(row)

# Fetch all the rest of the query using the fetchall() method.
data = cursor.fetchall()
print(data)

# Close the cursor object.
cursor.close()

# Close the connection.
db.close()
This isn't a question so much as a pre-emptive answer. (I have gotten lots of help from this website and wanted to give back.)
I was struggling with a large bit of SQL query that failed when I tried to run it from Python using pymssql, but ran fine when run directly against MS SQL. (E.g., in my case, I was using MS SQL Server Management Studio to run it outside of Python.)
Then I finally discovered the problem: pymssql cannot handle temporary tables. At least not my version, which is still 1.0.1.
As proof, here is a snippet of my code, slightly altered to protect any IP issues:
conn = pymssql.connect(host=sqlServer, user=sqlID, password=sqlPwd,
                       database=sqlDB)
cur = conn.cursor()
cur.execute(testQuery)
The above code FAILS if testQuery is defined as below (to be specific, it returns no data, and spits the error "pymssql.OperationalError: No data available." if you call cur.fetchone()):
testQuery = """
CREATE TABLE #TEST (
[sample_id] varchar (256)
,[blah] varchar (256) )
INSERT INTO #TEST
SELECT DISTINCT
[sample_id]
,[blah]
FROM [myTableOI]
WHERE [Shipment Type] in ('test')
SELECT * FROM #TEST
"""
However, it works fine if testQuery is defined as below.
testQuery = """
SELECT DISTINCT
[sample_id]
,[blah]
FROM [myTableOI]
WHERE [Shipment Type] in ('test')
"""
I did a Google search as well as a search within Stack Overflow, and couldn't find any information regarding the particular issue. I also looked under the pymssql documentation and FAQ, found at http://code.google.com/p/pymssql/wiki/FAQ, and did not see anything mentioning that temporary tables are not allowed. So I thought I'd add this "question".
Update: July 2016
The previously-accepted answer is no longer valid. The second "will NOT work" example does indeed work with pymssql 2.1.1 under Python 2.7.11 (once conn.autocommit(1) is replaced with conn.autocommit(True) to avoid "TypeError: Cannot convert int to bool").
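A minimal sketch of that change, with the same placeholder variables as the examples below:
conn = pymssql.connect(host=sqlServer, user=sqlID, password=sqlPwd,
                       database=sqlDB)
conn.autocommit(True)  # pymssql 2.x expects a bool here, not an int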
For those who run across this question and might have similar problems, I thought I'd pass on what I'd learned since the original post. It turns out that you CAN use temporary tables in pymssql, but you have to be very careful in how you handle commits.
I'll first explain by example. The following code WILL work:
testQuery = """
CREATE TABLE #TEST (
[name] varchar(256)
,[age] int )
INSERT INTO #TEST
values ('Mike', 12)
,('someone else', 904)
"""
conn = pymssql.connect(host=sqlServer, user=sqlID, password=sqlPwd, \
database=sqlDB) ## obviously setting up proper variables here...
conn.autocommit(1)
cur = conn.cursor()
cur.execute(testQuery)
cur.execute("SELECT * FROM #TEST")
tmp = cur.fetchone()
tmp
This will then return the first item (a subsequent fetch will return the other):
('Mike', 12)
But the following will NOT work:
testQuery = """
CREATE TABLE #TEST (
[name] varchar(256)
,[age] int )
INSERT INTO #TEST
values ('Mike', 12)
,('someone else', 904)
SELECT * FROM #TEST
"""
conn = pymssql.connect(host=sqlServer, user=sqlID, password=sqlPwd, \
database=sqlDB) ## obviously setting up proper variables here...
conn.autocommit(1)
cur = conn.cursor()
cur.execute(testQuery)
tmp = cur.fetchone()
tmp
This will fail, saying "pymssql.OperationalError: No data available." The reason, as best I can tell, is that whether you have autocommit on or not, and whether you explicitly issue a commit yourself or not, all tables must be created AND COMMITTED before you try to read from them.
In the first case, you'll notice that there are two cur.execute(...) calls. The first one creates the temporary table; when that cur.execute() finishes, since autocommit is turned on, the SQL batch is committed and the temporary table exists. Then another cur.execute() is called to read from that table. In the second case, I attempt to create and read from the table "simultaneously" (at least in the mind of pymssql; it works fine in MS SQL Server Management Studio). Since the table has not previously been created and committed, I cannot query it.
Wow... that was a hassle to discover, and it will be a hassle to adjust my code (developed in MS SQL Server Management Studio at first) so that it will work within a script. Oh well...