Pythonic way to mock the pyodbc.Row - python

What is the pythonic way to properly unit test a function that depends on a SQL query made through pyodbc? As I understand it, the best approach is to mock the function that returns the output from the SQL server. The problem is: what should the mock return?
My setup:
In lib1:
def selectSQL(connection, query):
    cursor = connection.cursor()
    cursor.execute(query)
    return cursor.fetchall()
In lib2:
def function_to_be_tested(cxnx):
    my_query = "SELECT r1, r2 FROM t1"
    rows = lib1.selectSQL(cxnx, my_query)
    # do something with the rows, like:
    a = 0
    for row in rows:
        a += row.r1 * row.r2
    return a
I have come up with the following solution:
Print the output of lib1.selectSQL(cxnx, my_query) to a file.
Insert the printed data into a namedtuple:
from collections import namedtuple

out_tuple = namedtuple('out1', ["r1", "r2"])
printed_data = [(1,2),(2,3)]
out = [out_tuple(*row) for row in printed_data]
def test_mockSelectSQL(self):
    lib1.selectSQL = MagicMock()
    lib1.selectSQL.side_effect = [out]
    self.assertEqual(lib2.function_to_be_tested(True), 8)
My only concern is that the mock returns a namedtuple, not a pyodbc.Row like the original function. I have checked the following sites in search of information on how to properly create a pyodbc.Row:
https://github.com/mkleehammer/pyodbc/blob/master/tests2/informixtests.py
https://github.com/mkleehammer/pyodbc/wiki/Row
In the pyodbc unit tests there is no constructor for it, nor have I found one in the source code (but I am a novice, so I might have missed it)... However, I have found the following information in the Row documentation:
However, there are some pyodbc additions that make them very convenient:
Values can be accessed by column name.
The Cursor.description values can be accessed even after the cursor is closed.
Values can be replaced.
Rows from the same select statement share memory.
So it seems that the namedtuple is in fact behaving in the same way as pyodbc.Row (at least when it comes to accessing the values). Is there a more pythonic way to unit test code that consumes pyodbc.Row objects? Can one assume that this is a good mock?

Further to the suggestion from #Nullman in a comment to the question, if you wanted to use an in-memory database you might try using the SQLite ODBC driver so you can return actual pyodbc.Row objects like so:
import pyodbc
conn_str = 'Driver=SQLite3 ODBC Driver;Database=:memory:'
cnxn = pyodbc.connect(conn_str, autocommit=True)
crsr = cnxn.cursor()
# create test data
crsr.execute("CREATE TABLE table1 (id INTEGER PRIMARY KEY, dtm DATETIME)")
crsr.execute("INSERT INTO table1 (dtm) VALUES ('2017-07-26 08:08:08')")
# test retrieval
crsr.execute("SELECT * FROM table1")
print(crsr.fetchall())
# prints:
# [(1, datetime.datetime(2017, 7, 26, 8, 8, 8))]
crsr.close()
cnxn.close()
I just tested it and it worked for me in PyCharm on Windows.
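Building on that, here is a minimal sketch of wiring the in-memory connection into a unittest for function_to_be_tested (assuming the SQLite3 ODBC driver above is installed and the lib2 module name from the question), so the function receives real pyodbc.Row objects without any mocking:
import unittest
import pyodbc
import lib2  # module under test, as named in the question

class TestFunctionToBeTested(unittest.TestCase):
    def setUp(self):
        # In-memory database, so no real SQL Server is needed
        self.cnxn = pyodbc.connect(
            'Driver=SQLite3 ODBC Driver;Database=:memory:', autocommit=True)
        crsr = self.cnxn.cursor()
        crsr.execute("CREATE TABLE t1 (r1 INT, r2 INT)")
        crsr.execute("INSERT INTO t1 (r1, r2) VALUES (1, 2), (2, 3)")
        crsr.close()

    def tearDown(self):
        self.cnxn.close()

    def test_function_to_be_tested(self):
        # rows come back as real pyodbc.Row objects, so row.r1 / row.r2 work
        self.assertEqual(lib2.function_to_be_tested(self.cnxn), 8)

if __name__ == '__main__':
    unittest.main()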

Related

Python MySql.Connector fetchall() is not, in fact, fetching all rows

This question has been asked a dozen times on this site with no real answer.
I use mysql.connector all the time for work, but recently I've discovered that it does not consistently return all results.
sql = ("""SELECT cp.location, cpt.created_ts, cpt.amount FROM
customer_plan_transactions cpt
JOIN customer_plans cp on cp.id = cpt.customer_plan_id
WHERE cpt.created_ts like "2022-09%" """)
cursor = my_db.cursor()
cursor.execute(sql)
rows = cursor.fetchall()
print(len(rows))
4395
Though, if I run this query through phpMyAdmin:
Any ideas? Is there another library I should be using for MySql?
edit: It must be a bug with mysql.connector. If I simply re-order the fields in the select statement, I suddenly get all the rows I am expecting.
sql = ("""SELECT cpt.created_ts, cp.location, cpt.amount FROM customer_plan_transactions cpt
JOIN customer_plans cp on cp.id = cpt.customer_plan_id
WHERE cpt.created_ts between "2022-09-01" and "2022-10-01" """)
cursor = jax_uc.cursor()
cursor.execute(sql)
rows = cursor.fetchall()
print(len(rows))
63140
So, it's a bug, right?

How to escape a #/# (for example 6/8) in the name of a table from a database

I am currently trying to get a list of values from a table inside an SQL database. The problem is appending the values, because of the name of the table, which I can't change. The table's name is something like Value123/123.
I tried making a variable with the name like
x = 'Value123/123'
then doing
row.append(x)
but that just prints Value123/123 and not the values from the database
cursor = conn.cursor()
cursor.execute("select Test, Value123/123 from db")
Test = []
Value = []
Compiled_Dict = {}
for row in cursor:
    Test.append(row.Test)
    Value.append(row.Value123/123)
Compiled_Dict = {'Date&Time': Test}
Compiled_Dict['Value'] = Value
conn.close()
df = pd.DataFrame(Compiled_Dict)
The problem occurs in this line
Value.append(row.Value123/123)
When I run it, I get an error that the database doesn't have a table named 'Value123', since I think it's trying to divide 123 by 123. Unfortunately, the table in the database is named like this and I cannot change it, so how do I pull the values from this table?
Edit:
cursor.execute("select Test, Value123/123 as newValue from db")
I tried this and it worked, thanks for the solutions. Suggested by Yu Jiaao.
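For completeness, a minimal sketch of the aliasing approach from the edit above (assuming pyodbc and the table/column names from the question; the connection string and the newValue alias are illustrative). The alias gives the result column a name that is usable as a Python attribute; alternatively, row[1] accesses the value by position:
import pandas as pd
import pyodbc

# Placeholder connection string -- substitute your own driver/DSN details
conn = pyodbc.connect('DSN=mydatasource')
cursor = conn.cursor()
# Alias the awkwardly named column so the pyodbc.Row attribute has a valid Python name
cursor.execute("select Test, Value123/123 as newValue from db")

Test = []
Value = []
for row in cursor:
    Test.append(row.Test)
    Value.append(row.newValue)  # or row[1] to access by position

df = pd.DataFrame({'Date&Time': Test, 'Value': Value})
conn.close()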

Pandas to_sql fails on duplicate primary key

I'd like to append to an existing table, using pandas df.to_sql() function.
I set if_exists='append', but my table has primary keys.
I'd like to do the equivalent of insert ignore when trying to append to the existing table, so I would avoid a duplicate entry error.
Is this possible with pandas, or do I need to write an explicit query?
There is unfortunately no option to specify "INSERT IGNORE". This is how I got around that limitation to insert rows into that database that were not duplicates (dataframe name is df)
from sqlalchemy.exc import IntegrityError

for i in range(len(df)):
    try:
        df.iloc[i:i+1].to_sql(name="Table_Name", if_exists='append', con=Engine)
    except IntegrityError:
        pass  # or any other action
You can do this with the method parameter of to_sql:
from sqlalchemy.dialects.mysql import insert

def insert_on_duplicate(table, conn, keys, data_iter):
    insert_stmt = insert(table.table).values(list(data_iter))
    on_duplicate_key_stmt = insert_stmt.on_duplicate_key_update(insert_stmt.inserted)
    conn.execute(on_duplicate_key_stmt)

df.to_sql('trades', dbConnection, if_exists='append', chunksize=4096, method=insert_on_duplicate)
For older versions of SQLAlchemy, you need to pass a dict to on_duplicate_key_update, i.e. on_duplicate_key_stmt = insert_stmt.on_duplicate_key_update(dict(insert_stmt.inserted)).
Please note that if_exists='append' relates to whether the table exists and what to do if it does not; it does not relate to the content of the table (see the short sketch after the list below).
See the docs here: https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_sql.html
if_exists : {‘fail’, ‘replace’, ‘append’}, default ‘fail’
fail: If table exists, do nothing.
replace: If table exists, drop it, recreate it, and insert data.
append: If table exists, insert data. Create if does not exist.
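Here is a short sketch of that point (using an in-memory SQLite engine purely for illustration): if_exists='append' happily targets the existing table, and the duplicate primary key still surfaces as an IntegrityError:
import pandas as pd
import sqlalchemy
from sqlalchemy.exc import IntegrityError

engine = sqlalchemy.create_engine('sqlite://')  # in-memory database
with engine.begin() as conn:
    conn.execute(sqlalchemy.text('CREATE TABLE demo (id INTEGER PRIMARY KEY, val TEXT)'))

df = pd.DataFrame({'id': [1], 'val': ['a']})
df.to_sql('demo', engine, if_exists='append', index=False)      # first insert works

try:
    df.to_sql('demo', engine, if_exists='append', index=False)  # same key again
except IntegrityError:
    print('duplicate primary key still raises, despite if_exists="append"')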
Pandas has no option for it currently, but here is the Github issue. If you need this feature too, just upvote for it.
The for-loop method above slows things down significantly. There's a method parameter you can pass to pandas.DataFrame.to_sql to customize the insert statement:
https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.to_sql.html#pandas.DataFrame.to_sql
The code below should work for Postgres and do nothing if there's a conflict with the primary key "unique_code". Change the insert dialect for your database.
def insert_do_nothing_on_conflicts(sqltable, conn, keys, data_iter):
    """
    Execute SQL statement inserting data

    Parameters
    ----------
    sqltable : pandas.io.sql.SQLTable
    conn : sqlalchemy.engine.Engine or sqlalchemy.engine.Connection
    keys : list of str
        Column names
    data_iter : Iterable that iterates the values to be inserted
    """
    from sqlalchemy.dialects.postgresql import insert
    from sqlalchemy import table, column

    columns = []
    for c in keys:
        columns.append(column(c))

    if sqltable.schema:
        table_name = '{}.{}'.format(sqltable.schema, sqltable.name)
    else:
        table_name = sqltable.name

    mytable = table(table_name, *columns)
    insert_stmt = insert(mytable).values(list(data_iter))
    do_nothing_stmt = insert_stmt.on_conflict_do_nothing(index_elements=['unique_code'])
    conn.execute(do_nothing_stmt)

df.to_sql('mytable', con=sql_engine, if_exists='append', method=insert_do_nothing_on_conflicts)
Pandas doesn't support editing the actual SQL syntax of the .to_sql method, so you might be out of luck. There are some experimental programmatic workarounds (say, read the DataFrame into a SQLAlchemy object with CALCHIPAN and use SQLAlchemy for the transaction), but you may be better served by writing your DataFrame to a CSV and loading it with an explicit MySQL function (see the sketch after the repo link below).
CALCHIPAN repo: https://bitbucket.org/zzzeek/calchipan/
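If you go the CSV route, a sketch of that idea might look like the following (assuming mysql.connector, a table named trades with a primary key, and a server configured to allow LOAD DATA LOCAL INFILE; all of these are illustrative assumptions). The IGNORE keyword makes MySQL skip rows that collide with an existing key:
import mysql.connector
import pandas as pd

# Illustrative data and connection details -- substitute your own
df = pd.DataFrame({'id': [1, 2], 'price': [10.0, 12.5]})
df.to_csv('trades.csv', index=False)

cnx = mysql.connector.connect(
    host='localhost', user='user', password='secret',
    database='mydb', allow_local_infile=True)
cur = cnx.cursor()
# IGNORE makes MySQL skip rows whose primary key already exists
cur.execute("""
    LOAD DATA LOCAL INFILE 'trades.csv'
    IGNORE INTO TABLE trades
    FIELDS TERMINATED BY ','
    LINES TERMINATED BY '\\n'
    IGNORE 1 LINES
""")
cnx.commit()
cnx.close()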
I had trouble where I was still getting the IntegrityError
...strange but I just took the above and worked it backwards:
for i, row in df.iterrows():
    sql = "SELECT * FROM `Table_Name` WHERE `key` = '{}'".format(row.Key)
    found = pd.read_sql(sql, con=Engine)
    if len(found) == 0:
        df.iloc[i:i+1].to_sql(name="Table_Name", if_exists='append', con=Engine)
In my case, I was trying to insert new data into an empty table, but some of the rows were duplicated, which is almost the same issue as here. I thought about fetching the existing data, merging it with the new data and continuing from there, but that is not optimal and may only work for small data sets, not huge tables.
As pandas does not provide any handling for this situation right now, I was looking for a suitable workaround, so I made my own. I'm not sure whether it will work for you, but I decided to take control of my data first instead of hoping for the best: I remove the duplicates before calling .to_sql, so if any error happens I know more about my data and what is going on:
import pandas as pd

def write_to_table(table_name, data):
    df = pd.DataFrame(data)
    # Sort by price, so that after dropping duplicates only the lowest price is kept
    df = df.sort_values('price')
    df.drop_duplicates(subset=['id_key'], keep='first', inplace=True)
    df.to_sql(table_name, engine, index=False, if_exists='append', schema='public')
So in my case, I wanted to keep the lowest price among the rows (by the way, I was passing an array of dicts as data), and for that I did the sorting first. It is not strictly necessary, but it is an example of what I mean by controlling the data I want to keep.
I hope this helps someone in almost the same situation as mine.
When you use SQL Server you'll get a SQL error when you enter a duplicate value into a table that has a primary key constraint. You can fix it by altering your table:
CREATE TABLE [dbo].[DeleteMe](
    [id] [uniqueidentifier] NOT NULL,
    [Value] [varchar](max) NULL,
    CONSTRAINT [PK_DeleteMe]
        PRIMARY KEY ([id] ASC)
        WITH (IGNORE_DUP_KEY = ON));  -- <-- add this option
Taken from https://dba.stackexchange.com/a/111771.
Now your df.to_sql() should work again.
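If the table already exists, a sketch of the same idea using SQLAlchemy (assuming a SQL Server engine and that the primary key index is named PK_DeleteMe; the connection string, data and names are illustrative) is to rebuild the primary key index with IGNORE_DUP_KEY = ON and then append as usual:
import pandas as pd
import sqlalchemy

# Placeholder connection string -- substitute your own server/driver details
engine = sqlalchemy.create_engine(
    'mssql+pyodbc://user:password@myserver/mydb?driver=ODBC+Driver+17+for+SQL+Server')

with engine.begin() as conn:
    # Rebuild the existing primary key index so duplicate keys are silently skipped
    conn.execute(sqlalchemy.text(
        'ALTER INDEX [PK_DeleteMe] ON [dbo].[DeleteMe] REBUILD WITH (IGNORE_DUP_KEY = ON)'))

# Duplicate rows are now ignored instead of raising an IntegrityError
df = pd.DataFrame({'id': ['3F2504E0-4F89-11D3-9A0C-0305E82C3301'], 'Value': ['example']})
df.to_sql('DeleteMe', engine, schema='dbo', if_exists='append', index=False)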
The solutions by Jayen and Huy Tran helped me a lot, but they didn't work straight out of the box. The problem I faced with Jayen's code is that it requires the DataFrame columns to be exactly those of the database. This was not true in my case, as there were some DataFrame columns that I wouldn't write to the database.
I modified the solution so that it considers the column names.
from sqlalchemy.dialects.mysql import insert
import itertools
def insertWithConflicts(sqltable, conn, keys, data_iter):
    """
    Execute SQL statement inserting data, whilst taking care of conflicts.
    Used to handle duplicate key errors during database population.

    This is my modification of the code snippet from
    https://stackoverflow.com/questions/30337394/pandas-to-sql-fails-on-duplicate-primary-key
    The help page at
    https://docs.sqlalchemy.org/en/14/core/dml.html#sqlalchemy.sql.expression.Insert.values
    proved useful.

    Parameters
    ----------
    sqltable : pandas.io.sql.SQLTable
    conn : sqlalchemy.engine.Engine or sqlalchemy.engine.Connection
    keys : list of str
        Column names
    data_iter : Iterable that iterates the values to be inserted. It is a zip object.
        Its length is equal to the chunksize passed to df.to_sql()
    """
    vals = [dict(zip(z[0], z[1])) for z in zip(itertools.cycle([keys]), data_iter)]
    insertStmt = insert(sqltable.table).values(vals)
    doNothingStmt = insertStmt.on_duplicate_key_update(dict(insertStmt.inserted))
    conn.execute(doNothingStmt)
I faced the same issue and I adopted the solution provided by #Huy Tran for a while, until my tables started to have schemas.
I had to improve his answer a bit and this is the final result:
from sqlalchemy import table, column
from sqlalchemy.dialects.postgresql import insert

def do_nothing_on_conflicts(sql_table, conn, keys, data_iter):
    """
    Execute SQL statement inserting data

    Parameters
    ----------
    sql_table : pandas.io.sql.SQLTable
    conn : sqlalchemy.engine.Engine or sqlalchemy.engine.Connection
    keys : list of str
        Column names
    data_iter : Iterable that iterates the values to be inserted
    """
    columns = []
    for c in keys:
        columns.append(column(c))

    if sql_table.schema:
        my_table = table(sql_table.name, *columns, schema=sql_table.schema)
        # table_name = '{}.{}'.format(sql_table.schema, sql_table.name)
    else:
        my_table = table(sql_table.name, *columns)
        # table_name = sql_table.name
    # my_table = table(table_name, *columns)

    insert_stmt = insert(my_table).values(list(data_iter))
    do_nothing_stmt = insert_stmt.on_conflict_do_nothing()
    conn.execute(do_nothing_stmt)
How to use it:
history.to_sql('history', schema=schema, con=engine, method=do_nothing_on_conflicts)
The idea is the same as #Nfern's, but it uses a recursive function to divide the df in half in each iteration, skipping the row or rows that cause the integrity violation.
def insert(df):
    try:
        # inserting into backup table
        df.to_sql("table", con=engine, if_exists='append', index=False, schema='schema')
    except:
        rows = df.shape[0]
        if rows > 1:
            df1 = df.iloc[:int(rows/2), :]
            df2 = df.iloc[int(rows/2):, :]
            insert(df1)
            insert(df2)
        else:
            print(f"{df} not inserted. Integrity violation, duplicate primary key/s")

How to retrieve data from SQLite faster in python

I have the following info in my database (example):
longitude (real): 70.74
userid (int): 12
This is how I fetch it:
import sqlite3 as lite
con = lite.connect(dbpath)
with con:
    cur = con.cursor()
    cur.execute('SELECT latitude, userid FROM message')
    con.commit()
    print "executed"
    while True:
        tmp = cur.fetchone()
        if tmp != None:
            info.append([tmp[0], tmp[1]])
        else:
            break
to get the same info in the form [70.74, 12].
What else can I do to speed up this process? At 10,000,000 rows this takes approx. 50 seconds, and since I'm aiming for 200,000,000 rows I never get through it, possibly due to a memory leak or something like that?
From the sqlite3 documentation:
A Row instance serves as a highly optimized row_factory for Connection objects. It tries to mimic a tuple in most of its features.
Since a Row closely mimics a tuple, depending on your needs you may not even need to unpack the results.
However, since your numerical types are stored as strings, we do need to do some processing. As #Jon Clements pointed out, the cursor is an iterable, so we can just use a comprehension, obtaining the floats and ints at the same time.
import sqlite3 as lite

with lite.connect(dbpath) as conn:
    cur = conn.execute('SELECT latitude, userid FROM message')
    items = [[float(x[0]), int(x[1])] for x in cur]
EDIT: We're not making any changes, so we don't need to call commit.
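If holding 200,000,000 converted rows in one Python list is itself the problem, here is a sketch of processing the cursor in chunks instead (fetchmany is part of the standard sqlite3 API; the chunk size and the process() callback are illustrative assumptions):
import sqlite3 as lite

def process(chunk):
    # Placeholder for whatever you actually do with the rows
    pass

with lite.connect(dbpath) as conn:  # dbpath as in the question
    cur = conn.execute('SELECT latitude, userid FROM message')
    while True:
        chunk = cur.fetchmany(100000)  # tune the chunk size to your memory budget
        if not chunk:
            break
        process([[float(lat), int(uid)] for lat, uid in chunk])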

don't get duplicated values when exporting sqlite3 file to csv using python?

I have a sqlite3 database that has multiple (six) tables and I need it to be exported to CSV, but when I try to export it, I get a duplicated value if a column (in one table) is longer than another (in another table).
i.e. this is what my sqlite3 database file looks like:
column on table1 column on table2 column on table3
25 30 20
30
this is the result in the .csv file (using this script as an example):
25,30,20
30,30,20
and this is the result I need it to show:
25,30,20
30
EDIT: OK, this is how I add the values to each table, based on the Python documentation example (executed each time a value entry is used):
import sqlite3
conn = sqlite3.connect('database.db')
c = conn.cursor()
# Create table
c.execute('''CREATE TABLE table
(column int)''')
# Insert a row of data
c.execute("INSERT INTO table VALUES (value)")
# Save (commit) the changes
conn.commit()
# We can also close the cursor if we are done with it
c.close()
any help?
-Regards...
This is how you could do this.
import sqlite3
con = sqlite3.connect('database')
cur = con.cursor()
cur.execute('select table1.column, table2.column, table3.column from table1, table2, table3')
# result should look like ((25, 30, 20), (30,)), if I remember correctly
results = cur.fetchall()
output = '\n'.join(','.join(str(i) for i in line) for line in results)
Note: this code is untested, written out of my head, but I hope you get the idea.
UPDATE: apparently I made some mistakes in the script and somehow sql 'magically' pads the result (you might have guessed now that I'm not a sql guru :D). Another way to do it would be:
import sqlite3
conn = sqlite3.connect('database.db')
cur = conn.cursor()
tables = ('table1', 'table2', 'table3')
results = [list(cur.execute('select column from %s' % table)) for table in tables]
def try_list(lst, i):
    try:
        return lst[i]
    except IndexError:
        return ('',)
maxlen = max(len(i) for i in results)
results2 = [[try_list(line, i) for line in results] for i in xrange(maxlen)]
output = '\n'.join(','.join(str(i[0]) for i in line) for line in results2)
print output
which produces
25,30,20
30,,
This is probably an overcomplicated way to do it, but it is 0:30 right now for me, so I'm not at my best...
At least it gets the desired result.
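For what it's worth, here is a more compact sketch of the same padding idea for Python 3, using itertools.zip_longest and the csv module (assuming the table/column names above; zip_longest fills the shorter columns with empty strings, producing the desired output):
import csv
import sqlite3
from itertools import zip_longest

conn = sqlite3.connect('database.db')
cur = conn.cursor()

tables = ('table1', 'table2', 'table3')
# One flat list of values per table
columns = [[row[0] for row in cur.execute('select column from %s' % table)]
           for table in tables]

with open('output.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    # zip_longest pads the shorter columns with '' instead of repeating values
    for row in zip_longest(*columns, fillvalue=''):
        writer.writerow(row)

conn.close()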
