Is it possible to apply a MySQL batch file using the Python MySQLdb library? So far I have tried to execute the content of the file:
cur = connection.cursor()
cur.execute(open(filename).read())
cur.commit() # and without commit
This works only when the file contains a single statement. Otherwise I get the error:
Failed to apply content. Error 2014: Commands out of sync; you can't run this command now
I intend to support any kind of MySQL schema and table changes, so parsing the file line by line is not an option. Is there any solution other than calling the mysql client from Python?
I suppose you are using cx_Oracle? The issue is that a nonexistent method is being called on the cursor, whereas it should have been called on the connection. It should have been:

cur = connection.cursor()
cur.execute(open(filename).read())
connection.commit()
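Note that fixing the commit alone won't clear the "Commands out of sync" error when the file contains many statements. A sketch of one way to handle that with MySQLdb is to enable the multi-statements client flag and drain every result set before issuing the next command (connection details here are placeholders, and filename is assumed to hold the path from the question):

import MySQLdb
from MySQLdb.constants import CLIENT

# Placeholder credentials; the flag allows several ;-separated
# statements in a single execute() call.
connection = MySQLdb.connect(
    host="localhost", user="user", passwd="secret", db="mydb",
    client_flag=CLIENT.MULTI_STATEMENTS,
)
cur = connection.cursor()
cur.execute(open(filename).read())

# Drain every pending result set; leaving them unread is what
# triggers "Commands out of sync" on the next command.
while cur.nextset() is not None:
    pass

connection.commit()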
I use code that opens a CSV file in order to store it in a database; I use SQL Server. Once the file has been loaded into RAM and the earlier processing is done, we want to store it in the database. Under PostgreSQL we use the following code, but I want an equivalent for SQL Server:
# upload to db
SQL_STATEMENT = """
COPY %s FROM STDIN WITH
    CSV
    HEADER
    DELIMITER AS ','
"""
cursor.copy_expert(sql=SQL_STATEMENT % tbl_name, file=my_file)
I have no idea how to change this upload block without changing the rest of the code.
psycopg2 is a Postgres-specific DB-API that maintains extended methods like copy_expert, copy_from, and copy_to, which are only supported in Postgres. pyodbc, by contrast, is a generalized DB-API that interfaces with any ODBC driver, including SQL Server, Teradata, MS Access, and even PostgreSQL ODBC drivers. Therefore, it is unlikely that an SQL Server-specific convenience command exists to replace copy_expert.
However, consider submitting an SQL Server-specific SQL command such as BULK INSERT, which can read from flat files, and then running cursor.execute. Below uses f-strings (introduced in Python 3.6) for string formatting:
# upload to db
SQL_STATEMENT = (
    f"BULK INSERT {tbl_name} "
    f"FROM '{my_file}' "
    "WITH (FORMAT='CSV');"
)
cur.execute(SQL_STATEMENT)
conn.commit()
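For completeness, a minimal sketch of the surrounding pyodbc boilerplate (driver name, server, and credentials are placeholder assumptions):

import pyodbc

# Adjust the driver name to whatever ODBC driver is installed locally.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver;DATABASE=mydb;UID=user;PWD=secret"
)
cur = conn.cursor()
cur.execute(SQL_STATEMENT)  # the BULK INSERT statement built above
conn.commit()

One caveat: BULK INSERT resolves the file path on the SQL Server machine, not the client, so my_file must point to a location the server can read.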
Below is a sample of the code that I am using to push data from one Postgres server to another Postgres server. I am trying to move 28 million records. This worked perfectly with SQL Server to Postgres, but now that it's Postgres to Postgres it hangs on this line:
sourcecursor.execute('select * from "schema"."reallylargetable"; ')
It never reaches any of the other statements or gets to the iterator. I get this message at the select statement:
psycopg2.DatabaseError: out of memory for query result
# cursors for aiods and ili
sourcecursor = sourceconn.cursor()
destcursor = destconn.cursor()

# name of temp csv file
filename = 'filename.csv'

# generator that uses fetchmany to iterate through data in batches;
# the default batch size is 1000
def ResultIterator(cursor, arraysize=1000):
    'iterator using fetchmany; consumes less memory'
    while True:
        results = cursor.fetchmany(arraysize)
        if not results:
            break
        for result in results:
            yield result

# set data for the cursor
print("start get data")
# it does not get past the line below; it errors with
# "out of memory for query result"
sourcecursor.execute('select * from "schema"."reallylargetable";')

print("iterator")
dataresults = ResultIterator(sourcecursor)

# ... do something with dataresults ...
Please change this line:
sourcecursor = sourceconn.cursor()
to name your cursor (use whatever name pleases you):
sourcecursor = sourceconn.cursor('mysourcecursor')
What this does is direct psycopg2 to open a PostgreSQL server-side named cursor for your query. Without a named cursor on the server side, psycopg2 attempts to grab all rows at once when executing the query.
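Putting it together, a sketch of the full streaming pattern (the itersize value is an arbitrary choice):

# A server-side (named) cursor streams rows in batches instead of
# materializing the whole result set in client memory.
sourcecursor = sourceconn.cursor('mysourcecursor')
sourcecursor.itersize = 10000  # rows fetched per server round trip

sourcecursor.execute('select * from "schema"."reallylargetable";')
for row in sourcecursor:  # iterating fetches itersize rows at a time
    ...  # do something with each row
sourcecursor.close()

With a named cursor you may not even need the ResultIterator helper, since psycopg2 already fetches in batches as you iterate.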
My SQLite script works fine, when I type:
.read 'dummy.sql'
from within the SQLite shell.
However, the following Python code is not doing it properly. I'm getting a syntax error in line 5.
import sqlite3
db = sqlite3.connect('scheduling.db')
cursor = db.cursor()
a = '''.read "scheduling.sql"'''
cursor.execute(a)
db.commit()
db.close()
I know I'm doing something wrong with the quotes. How do I make this work?
The workaround I would recommend is to read the contents of the .sql file into a Python string variable, as you would read any other text file, and then call executescript. Unlike execute, executescript can run many statements in one call. For example, it will work correctly if your .sql file contains the following:
CREATE TABLE contacts (
    contact_id INTEGER PRIMARY KEY,
    first_name TEXT NOT NULL,
    last_name TEXT NOT NULL
);

INSERT INTO contacts (contact_id, first_name, last_name)
VALUES (1, 'John', 'Smith');
Here's the full Python snippet that you'll need:
import sqlite3

with open('scheduling.sql', 'r') as sql_file:
    sql_script = sql_file.read()

db = sqlite3.connect('scheduling.db')
cursor = db.cursor()
cursor.executescript(sql_script)
db.commit()
db.close()
You cannot. The sqlite3 program can be seen as split into parts:
externally, it parses lines of input into SQL commands;
internally, it passes those SQL commands to the engine;
externally again, it displays the results of the SQL commands.
.read is a kind of meta-command: the parser opens the file and reads lines from it. AFAIK, nothing in the sqlite3 library can emulate that parser part, so you would have to do the line parsing into SQL statements by hand, and then execute the SQL statements one at a time.
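That said, a rough approximation of the parser part is possible in plain Python: sqlite3.complete_statement() reports whether a buffered string ends in a complete SQL statement, so you can accumulate lines and execute statement by statement. A minimal sketch, assuming no two statements share a line:

import sqlite3

def run_sql_file(conn, path):
    # Accumulate lines until they form a complete statement, then run it.
    buffer = ""
    with open(path) as f:
        for line in f:
            buffer += line
            if sqlite3.complete_statement(buffer):
                conn.execute(buffer)
                buffer = ""
    conn.commit()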
Try this. You can read the query from a file using the open function, which replaces the .read functionality; SQL scripts are just text files containing queries. Then run read_sql_query:
import sqlite3
import pandas as pd

sqlite_file = 'scheduling.db'
conn = sqlite3.connect(sqlite_file)
c = conn.cursor()

with open('scheduling.sql', 'r') as f:
    sql = f.read()

print(pd.read_sql_query(sql, conn))
I am trying to use a Python function to execute a .sql file.
The sql file begins with a DROP DATABASE statement.
The first lines of the .sql file look like this:
DROP DATABASE IF EXISTS myDB;
CREATE DATABASE myDB;
The rest of the .sql file defines all the tables and views for 'myDB'
Python Code:
import psycopg2

def connect():
    conn = psycopg2.connect(dbname='template1', user='user01')
    conn.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)
    cursor = conn.cursor()
    sqlfile = open('/path/to/myDB-schema.sql', 'r')
    cursor.execute(sqlfile.read())
    db = psycopg2.connect(dbname='myDB', user='user01')
    cursor = db.cursor()
    return db, cursor
When I run the connect() function, I get an error on the DROP DATABASE statement.
ERROR:
psycopg2.InternalError: DROP DATABASE cannot be executed from a function or multi-command string
I spent a lot of time googling this error, and I can't find a solution.
I also tried adding an AUTOCOMMIT statement to the top of the .sql file, but it didn't change anything.
SET AUTOCOMMIT TO ON;
I am aware that PostgreSQL doesn't allow you to drop a database you are currently connected to, but I didn't think that was the problem here, because I begin the connect() function by connecting to the template1 database, and from that connection create the cursor object which opens the .sql file.
Has anyone else run into this error? Is there any way to execute the .sql file using a Python function?
This worked for me for a file consisting of one SQL query per line:
sql_file = open('file.sql','r')
cursor.execute(sql_file.read())
You are reading in the entire file and passing the whole thing to PostgreSQL as one string (as the error message says, a "multi-command string"). Is that what you are intending to do? If so, it isn't going to work.
Try this:
cursor.execute(sqlfile.readline())
Or, shell out to psql and let it do the work.
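For the shelling-out option, something along these lines should work (user name and path taken from the question; a sketch, untested against your setup):

import subprocess

# Let psql do the statement splitting, including DROP/CREATE DATABASE.
subprocess.run(
    ["psql", "-U", "user01", "-d", "template1",
     "-f", "/path/to/myDB-schema.sql"],
    check=True,  # raise if psql exits with a non-zero status
)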
In order to deploy scripts via cron that serve as ETL jobs running .sql files, we had to be explicit about how we locate the SQL file itself:
sql_file = os.path.join(os.path.dirname(__file__), "../sql/ccd_parcels.sql")
sqlcurr = open(sql_file, mode='r').read()
curDest.execute(sqlcurr)
connDest.commit()
This seemed to please the cron job: cron runs jobs from its own working directory, so building an absolute path from __file__ avoids relative-path failures.
I'm getting the following error with this code:
conn = sqlite3.connect('./mydb.db')
c = conn.cursor()
c.execute('.output ./mytable.sql')
conn.close()
c.execute('.output ./mytable.sql')
sqlite3.OperationalError: near ".": syntax error
That's because .output is a command of the sqlite3 command-line tool. It is not a valid SQL command, so it cannot be used when you access sqlite through a library, only interactively at the command prompt.
None of the shell commands listed at https://www.sqlite.org/cli.html will work here, as they are something entirely separate from sqlite itself. You can think of them as if they were part of a GUI program: it would not make sense to access a GUI program's features through the library.
What you have to do is fetch the data yourself, format it yourself, and output it in the way you want.
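If the goal is specifically a SQL dump, the sqlite3 library does expose one helper: Connection.iterdump() yields the whole database as SQL statements. It is the library-level cousin of the shell's .dump, not a per-table .output, but a minimal sketch looks like this:

import sqlite3

conn = sqlite3.connect('./mydb.db')
# iterdump() yields the entire database, schema and data, as SQL text.
with open('./mytable.sql', 'w') as f:
    for line in conn.iterdump():
        f.write(line + '\n')
conn.close()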
Another option is to call the sqlite3 shell and pipe in the commands you want it to execute. Something like:
printf '.output ./mytable.sql\nSELECT * FROM mytable;\n' | sqlite3 ./mydb.db
(this is untested...)