Volatile table cannot be created in Python using pyodbc

The code is below. By the way, the database I use is Teradata, on a Windows 7 operating system, with Python version 2.7.
import pyodbc
cnxn = pyodbc.connect('DSN=thisIsAbsolutelyCorrect;UID=cannottellyou;PWD=iamsosorry')
cursor1 = cnxn.cursor()
cursor1=cursor1.execute(
################## OR put your SQL directly between here ################
'''
create volatile table table1
(
field1 integer
,field2 integer
)on commit preserve rows;
--insert into table1
--values(12,13);
--select * from table1;
''')
######################### and here ########################
cnxn.commit()
for row in cursor1:
    print row
raw_input()
But I get an error like this:
Traceback (most recent call last):
File "C:\Users\issuser\Desktop\py\test.py", line 25, in <module>
for row in cursor1:
ProgrammingError: No results. Previous SQL was not a query.
How can I solve this error?

A cursor object by itself has no rows to iterate through. What I think you want is to iterate through the results of an execute:
rows = curs.execute(""" sql code """).fetchall()
for row in rows:
    print row
Here is a template for uploading to a volatile table in Teradata from Python 2.7 using pyodbc:
import pyodbc
cnxn = pyodbc.connect('your_connection_string')
curs = cnxn.cursor()
curs.execute("""
CREATE VOLATILE TABLE TABLE_NAME
(
c_0 dec(10,0),
...
c_n dec(10,0)
) PRIMARY INDEX (c_0)
ON COMMIT PRESERVE ROWS;
END TRANSACTION;
""")
curs.execute("""
INSERT INTO TABLE_NAME (c_0,...,c_n) VALUES (%s);
"""%value_string)
Depending on your settings in Teradata, you may have to explicitly END TRANSACTION.
You can add your loop around the INSERT to upload information line by line.
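For example, a minimal sketch of that loop, assuming rows is a list of value tuples, and using parameter markers instead of string formatting (the column names are placeholders):
# Hypothetical rows to load; in practice these might come from a csv file or a DataFrame.
rows = [(1, 2), (3, 4)]
for r in rows:
    curs.execute("INSERT INTO TABLE_NAME (c_0, c_1) VALUES (?, ?)", r)
cnxn.commit()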

Have you considered the following:
import pyodbc
cnxn = pyodbc.connect('DSN=thisIsAbsolutelyCorrect;UID=cannottellyou;PWD=iamsosorry')
cursor1 = cnxn.cursor()
RowCount=cursor1.execute(
'''
create volatile table table1
(
field1 integer
,field2 integer
)on commit preserve rows;
''').rowcount
RowCount=cursor1.execute('''insert into table1 values(12,13);''').rowcount
cnxn.commit()
for row in cursor1:
    print row
raw_input()
I believe the issue is that the execute() method, as you have written it, is expected to return a result set. DDL and DML statements like INSERT, UPDATE, and DELETE do not return result sets. You may have more success using the rowcount attribute with the execute() method to process the volatile table creation and its population.
You may also have to issue a commit between the creation of the volatile table and populating the table.
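For illustration, a rough sketch of that ordering, reusing the connection from the question (whether the extra commit is actually required depends on your session and transaction settings):
cursor1.execute('''create volatile table table1
(field1 integer, field2 integer) on commit preserve rows;''')
cnxn.commit()          # commit after the DDL, before populating the table
cursor1.execute('insert into table1 values (?, ?);', (12, 13))
cnxn.commit()
rows = cursor1.execute('select * from table1;').fetchall()
for row in rows:
    print row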

Related

How to drop all tables in sqlite3 using Python?

I made a project which collects data from the user and stores it in different tables. The application has a delete function: the first option is to delete a specific table, which I already did, and the second one is to delete all existing tables.
How can I drop all tables inside my database?
These are my variables:
conn = sqlite3.connect('main.db')
cursor = conn.execute("DROP TABLE")
cursor.close()
According to sqlitetutorial.net
SQLite allows you to drop only one table at a time. To remove multiple
tables, you need to issue multiple DROP TABLE statements.
You can do it by querying all table names (https://www.sqlite.org/faq.html#q7).
Then you can use the result to delete the tables one by one.
Here is the code; the function delete_all_tables does that:
TABLE_PARAMETER = "{TABLE_PARAMETER}"
DROP_TABLE_SQL = f"DROP TABLE {TABLE_PARAMETER};"
GET_TABLES_SQL = "SELECT name FROM sqlite_schema WHERE type='table';"

def delete_all_tables(con):
    tables = get_tables(con)
    delete_tables(con, tables)

def get_tables(con):
    cur = con.cursor()
    cur.execute(GET_TABLES_SQL)
    tables = cur.fetchall()
    cur.close()
    return tables

def delete_tables(con, tables):
    cur = con.cursor()
    for table, in tables:
        sql = DROP_TABLE_SQL.replace(TABLE_PARAMETER, table)
        cur.execute(sql)
    cur.close()
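A minimal usage sketch, assuming the main.db file from the question:
import sqlite3

con = sqlite3.connect('main.db')
delete_all_tables(con)
con.commit()
con.close()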
SQLite3 code to issue multiple DROP TABLE statements based on TEMP_% name wildcard:
.output droptables.sql
SELECT "DROP TABLE """|| sqlite_master.name ||""";" FROM sqlite_master
WHERE type = "table" AND sqlite_master.name LIKE 'TEMP_%';
.read droptables.sql
Example result in .sql output file:
DROP TABLE "TEMP_table1";
DROP TABLE "TEMP_table2";
DROP TABLE "TEMP_table3";
...
Python 3 to paste the SQL into:
import sqlite3

conn = sqlite3.connect("main.db")
conn.row_factory = sqlite3.Row
dbc = conn.cursor()
dbc.execute("DROP TABLE 'TEMP_table1';")
conn.commit()
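If you would rather keep the whole thing in Python instead of pasting the generated statements, a hedged sketch along the same lines (the TEMP_ prefix is just the example wildcard from above):
import sqlite3

conn = sqlite3.connect("main.db")
dbc = conn.cursor()
# Query the catalog for matching table names, then drop them one by one.
dbc.execute("SELECT name FROM sqlite_master WHERE type = 'table' AND name LIKE 'TEMP_%';")
for (name,) in dbc.fetchall():
    dbc.execute('DROP TABLE "{}";'.format(name))
conn.commit()
conn.close()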

Python SQL script inserts only the last row of the dataframe to SQL Server [pyodbc]

I am trying to import a pandas dataframe to Microsoft SQL Server. Even though the command is correct and executes successfully, when I select the first 1000 rows of the table in SQL Server only the last row of the imported dataframe is shown.
I have the latest version of SQL Server (downloaded yesterday)
Do you know what's causing this?
SQL Code (Stored procedure)
USE [Movies_Dataset]
GO
/****** Object: StoredProcedure [dbo].[store_genres] Script Date: xxx ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER PROCEDURE [dbo].[store_genres]
@Movie_Title NVARCHAR(MAX),
@Genres NVARCHAR(MAX)
AS
BEGIN
DROP TABLE IF EXISTS [dbo].[table_genres];
CREATE TABLE [dbo].[table_genres]([@Movie_Title] NVARCHAR(MAX),
[@Genres] NVARCHAR(MAX)
)
INSERT INTO [dbo].[table_genres]
VALUES
( @Movie_Title,
@Genres
)
END;
Python script to import the data
connStr = pyodbc.connect("DRIVER={SQL Server Native Client 11.0};"
"SERVER=LAPTOP-IFTEP7AL;"
"DATABASE=Movies_Dataset;"
"Trusted_Connection=yes")
cursor = connStr.cursor()
delete_table = """
IF dbo.TableExists('table_genres') = 1
DELETE FROM table_genres
"""
insert_values = """
EXEC [dbo].[store_genres] @Movie_Title = ?, @Genres = ?;
"""
cursor.execute(delete_table)
for index, row in genres.iterrows():
    params = (row['title'], row['genres'])
    cursor.execute(insert_values, params)
connStr.commit()
cursor.close()
connStr.close()
Dataframe I want to import
SQL Result
"I select the first 1000 rows of the table in SQL server only the last row of the imported dataframe is shown."
Because your stored procedure drops the table every time before inserting a row.
So don't do that. Also don't use @ in your column names. That's just for parameters, arguments, and local variables.
EG:
USE [Movies_Dataset]
GO
SET QUOTED_IDENTIFIER ON
GO
DROP TABLE IF EXISTS [dbo].[table_genres];
CREATE TABLE [dbo].[table_genres]([Movie_Title] NVARCHAR(MAX),
[Genres] NVARCHAR(MAX)
)
GO
/****** Object: StoredProcedure [dbo].[store_genres] Script Date: xxx ******/
SET ANSI_NULLS ON
GO
CREATE OR ALTER PROCEDURE [dbo].[store_genres]
@Movie_Title NVARCHAR(MAX),
@Genres NVARCHAR(MAX)
AS
BEGIN
INSERT INTO [dbo].[table_genres](Movie_Title,Genres)
VALUES
( @Movie_Title,
@Genres
)
END;
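On the Python side the call stays essentially the same; a rough sketch, reusing the connection, cursor, and genres DataFrame from the question:
insert_values = """
EXEC [dbo].[store_genres] @Movie_Title = ?, @Genres = ?;
"""
for index, row in genres.iterrows():
    cursor.execute(insert_values, (row['title'], row['genres']))
connStr.commit()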

Python pyodbc Insert Into SQL Server DB with Parameters

I am currently trying to use pyodbc to insert data from a .csv into an Azure SQL Server database. I found a majority of this syntax on Stack Overflow, however for some reason I keep getting one of two different errors.
1) Whenever I use the following code, I get an error that states 'The SQL contains 0 parameter markers, but 7 parameters were supplied'.
import pyodbc
import csv
cnxn = pyodbc.connect('driver', user='username', password='password', database='database')
cnxn.autocommit = True
cursor = cnxn.cursor()
csvfile = open('CSV File')
csv_data = csv.reader(csvfile)
SQL="insert into table([Col1],[Col2],[Col3],[Col4],[Col5],[Col6],[Col7]) values ('?','?','?','?','?','?','?')"
for row in csv_data:
    cursor.execute(SQL, row)
    time.sleep(1)
cnxn.commit()
cnxn.close()
2) In order to get rid of that error, I am defining the parameter markers by adding '=?' to each of the columns in the insert statement (see code below). However, this then gives the following error: ProgrammingError: ('42000', "[42000] [Microsoft] [ODBC SQL Server Driver][SQL Server] Incorrect syntax near '='").
import pyodbc
import csv
cnxn = pyodbc.connect('driver', user='username', password='password', database='database')
cnxn.autocommit = True
cursor = cnxn.cursor()
csvfile = open('CSV File')
csv_data = csv.reader(csvfile)
SQL="insert into table([Col1]=?,[Col2]=?,[Col3]=?,[Col4]=?,[Col5]=?,[Col6]=?,[Col7]=?) values ('?','?','?','?','?','?','?')"
for row in csv_data:
    cursor.execute(SQL, row)
    time.sleep(1)
cnxn.commit()
cnxn.close()
This is the main error I am having trouble with. I have searched all over Stack Overflow and can't seem to find a solution. I know this error is probably very trivial; however, I am new to Python and would greatly appreciate any advice or help.
Since SQL Server can import your entire CSV file with a single statement, this is a reinvention of the wheel:
BULK INSERT my_table FROM 'CSV_FILE'
WITH ( FIELDTERMINATOR=',', ROWTERMINATOR='\n');
If you want to persist with using Python, just execute the above query with pyodbc!
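For instance, a minimal sketch of running that statement through pyodbc (the connection string and table name are placeholders, and the file path must be visible to the SQL Server machine, not just to the Python client):
import pyodbc

cnxn = pyodbc.connect('your_connection_string')   # placeholder connection string
cnxn.autocommit = True
cursor = cnxn.cursor()
# Raw string so the \n row terminator reaches SQL Server unchanged.
cursor.execute(r"""
BULK INSERT my_table FROM 'CSV_FILE'
WITH (FIELDTERMINATOR=',', ROWTERMINATOR='\n');
""")
cnxn.close()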
If you would still prefer to execute thousands of statements instead of just one:
SQL="insert into table([Col1],[Col2],[Col3],[Col4],[Col5],[Col6],[Col7]) values (?,?,?,?,?,?,?)"
Note that the quotes surrounding the ? placeholders shouldn't be there.
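Putting that together with the original loop, a hedged sketch of the corrected insert (connection string, file name, and table/column names as in the question):
import csv
import pyodbc

cnxn = pyodbc.connect('your_connection_string')   # placeholder connection string
cursor = cnxn.cursor()
SQL = "insert into table([Col1],[Col2],[Col3],[Col4],[Col5],[Col6],[Col7]) values (?,?,?,?,?,?,?)"
with open('CSV File') as csvfile:
    for row in csv.reader(csvfile):
        cursor.execute(SQL, row)
cnxn.commit()
cnxn.close()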
# creating column list for insertion
colsInsert = "[" + "],[".join([str(i) for i in mydata.columns.tolist()]) + "]"
# Insert DataFrame records one by one.
for i, row in mydata.iterrows():
    sql = "INSERT INTO Test (" + colsInsert + ") VALUES (" + "?," * (len(row) - 1) + "?)"
    cursor.execute(sql, tuple(row))
# the connection is not autocommitted by default, so we must commit to save our changes
c.commit()  # c is the pyodbc connection object
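If the DataFrame is large, a hedged alternative to the row-by-row loop is a single executemany call over the same parameterized statement (mydata, cursor, and c are the names used in the snippet above):
placeholders = ",".join("?" * len(mydata.columns))
sql = "INSERT INTO Test (" + colsInsert + ") VALUES (" + placeholders + ")"
cursor.executemany(sql, [tuple(row) for _, row in mydata.iterrows()])
c.commit()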

Not able to insert into mysql database

I have just started using MySQLdb in Python. I am able to create a table, but when I try to insert, no rows are inserted into the table and it shows that the table is still empty.
import MySQLdb
db = MySQLdb.connect("localhost","root","shivam","test")
cursor = db.cursor()
s = "DROP TABLE IF EXISTS batting"
cursor.execute(s)
s = """create table batting (
name varchar(50) primary key,
matches integer(5),
innings integer(5),
runs integer(5),
highest integer(3),
strikerate integer(3),
hundreds integer(3),
fifties integer(3)
)"""
cursor.execute(s)
s = """insert into batting(name) values(
"shivam")"""
cursor.execute(s)
db.close()
Where could I be going wrong?
You forgot to commit your connection. Simply add:
cursor.execute(s)
db.commit()
Have a look at this; it explains why you need to commit.
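For completeness, a minimal sketch of the end of the script with the commit in place (same connection and table as above; the %s placeholder is MySQLdb's parameter style):
s = "insert into batting(name) values (%s)"
cursor.execute(s, ("shivam",))
db.commit()   # flush the INSERT before closing, otherwise the table stays empty
db.close()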

Python write to MySQL - no error but no writing

I am trying to write into my localhost MySQL database.
I have created a database named "test", a table called "price_update", and a column called "model".
When I run the script below I get no errors, however, I also get nothing written to my database.
I am not sure where to start looking for the problem. The column is varchar(10) with collation utf8_general_ci.
import MySQLdb
conn = MySQLdb.connect(host="127.0.0.1",user="someUser",passwd="somePassword",db="test")
query = "INSERT INTO price_update (model) values ('12345')"
x = conn.cursor()
x.execute(query)
row = x.fetchall()
You have to commit the changes:
conn.commit()
Also, I'd make your query safer:
query = "INSERT INTO price_update (model) values (%s)"
...
x.execute(query, ('12345',))
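A rough sketch of the whole corrected write, using the same connection details as the question:
import MySQLdb

conn = MySQLdb.connect(host="127.0.0.1", user="someUser", passwd="somePassword", db="test")
x = conn.cursor()
x.execute("INSERT INTO price_update (model) values (%s)", ('12345',))
conn.commit()   # without this the INSERT is discarded when the connection closes
conn.close()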
