MySQL connector reports a syntax error even though there is none - Python

I've been trying to insert an image (here sample.txt) into a MySQL database using mysql.connector. I have created a table (OUTPUT) in my "test" database, with a LONGBLOB field for the image.
I converted the image into a binary file by opening it in rb mode, encoded the result with base64, and tried to insert it with the execute command, but I ran into the error shown below. I have tried several approaches and got nowhere.
# PROGRAM
import mysql.connector as m
import base64

conn = m.connect(host="localhost", database="test", user="root", password="root", port=3306)
cur = conn.cursor()
file = open("photo.jpg", "rb").read()
file = base64.b64encode(file)
arg = (file)
query = "INSERT INTO OUTPUT VALUES(%s)"
cur.execute(query, arg)
conn.commit()
Output:
File "C:\Users\Thanos\AppData\Local\Programs\Python\Python311\Lib\site-packages\mysql\connector\connection.py", line 395, in _handle_result
    raise errors.get_exception(packet)
mysql.connector.errors.ProgrammingError: 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '%s)' at line 1
I suspect the reason for the error is that converting to binary produces a value wrapped in b''. I thought the quotes in that b'' prefix were prematurely closing the quotes in my query, so I tried switching to double quotes, but the error persisted.
I also tried str.format and an f-string instead of "%s", but the same error continues.
I want to get rid of the error. I have seen YouTube videos using the same logic as mine, but I don't know why it happens only to me. I tried running the same code on another PC, but no luck :(
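For what it's worth, the usual cause of this exact error is that (file) is not a tuple: without a trailing comma the parentheses are just grouping, so execute() never receives the parameter sequence it expects and the raw %s reaches the server as SQL, which matches the "near '%s)'" in the message above. A minimal sketch of the fix, assuming OUTPUT has a single LONGBLOB column:

import base64
import mysql.connector as m

conn = m.connect(host="localhost", database="test",
                 user="root", password="root", port=3306)
cur = conn.cursor()

with open("photo.jpg", "rb") as f:
    data = base64.b64encode(f.read())

# A one-element tuple needs the trailing comma; (data) is just data itself.
cur.execute("INSERT INTO OUTPUT VALUES (%s)", (data,))

conn.commit()
conn.close()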

Related

pandas.to_csv() format issue (?) resulting in error when using psycopg2 copy_from

Overview:
I have a function in an external file that returns a dataframe, which I then save to a csv with:
df.to_csv('filepath.csv', na_rep="0", index=False)
I then try to import the csv into a postgres table using the psycopg2 function copy_from:
try:
    connect = psycopg2.connect(database="", user="", password="", host="", port="")
except:
    print("Could not connect to database")

cursor = connect.cursor()
with open("filepath", 'r') as open_csv:
    next(open_csv)
    try:
        cursor.copy_from(open_csv, sep=",")
        connect.commit()
        print("Copy Complete")
    except:
        print("Copy Error")
cursor.close()
This results in the copy error exception in the code above (so no real detail), but there are some weird caveats:
For some reason, if I open the csv in LibreOffice and manually save it as a text csv, then run just the above psycopg2 copy_from process, the copy works with no issues. So, in the eyes of psycopg2 copy_from, something is off with the to_csv() write that gets fixed if I simply resave the file manually. Manually saving the csv does not result in any visible changes, so what is happening here?
Also, the above psycopg2 code snippet works without error in another file in which all dataframe manipulation is contained within the single file where the to_csv() is done. So is something off about returning a dataframe from a function in an external file?
Fwiw, when debugging, the issue came up on the .copy_from() line, so it has something to do with csv formatting, and I cannot figure it out. I found a workaround with sqlalchemy but would like to know what I am missing here instead of ignoring the problem.
In the postgres error log, the error is: invalid input syntax for type integer: "-1.0". It occurs in the last column of my table, which is typed INT; in the csv the value is -1, but it is being interpreted as -1.0. What confuses me is that if I use a COPY query to load the csv file into postgres directly, there is no problem. Why is the value interpreted as -1.0 through psycopg2 but not directly in postgres?
This is my first post, so if more detail is needed, let me know - thanks in advance for the help
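One plausible explanation, for what it's worth: pandas upcasts an integer column to float64 as soon as it contains a NaN, so -1 gets written out as -1.0; resaving in LibreOffice rewrites the numbers without the trailing .0, which would be why the manual step "fixes" the file. A sketch of the idea (the column name is made up):

import pandas as pd

# A NaN anywhere in an int column silently upcasts it to float64,
# so to_csv writes -1 as "-1.0" and COPY into an INT column fails.
df = pd.DataFrame({'a': [1, 2, 3], 'last_col': [-1, None, 5]})
print(df['last_col'].dtype)  # float64

# Fill the NaNs (mirroring na_rep="0"), cast back to int, then write.
df['last_col'] = df['last_col'].fillna(0).astype(int)
df.to_csv('filepath.csv', index=False)  # now writes -1, not -1.0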

Inserting data into postgres using python

Below is a sample of the code I am using to push data from one postgres server to another. I am trying to move 28 million records. This worked perfectly from sql server to postgres, but now that it's postgres to postgres, it hangs on the line
sourcecursor.execute('select * from "schema"."reallylargetable"; ')
and never reaches any of the other statements to get to the iterator.
I get this message at the select query statement:
psycopg2.DatabaseError: out of memory for query result
# cursors for aiods and ili
sourcecursor = sourceconn.cursor()
destcursor = destconn.cursor()

# name of temp csv file
filenme = 'filename.csv'

# definition that uses fetchmany to iterate through data in batches; default value is 10000
def ResultIterator(cursor, arraysize=1000):
    'iterator using fetchmany and consumes less memory'
    while True:
        results = cursor.fetchmany(arraysize)
        if not results:
            break
        for result in results:
            yield result

# set data for the cursor
print("start get data")
# it is not going past the line below; it errors with out of memory for query result
sourcecursor.execute('select * from "schema"."reallylargetable"; ')

print("iterator")
dataresults = ResultIterator(sourcecursor)
# *****do something with dataresults*****
Please change this line:
sourcecursor = sourceconn.cursor()
to name your cursor (use whatever name pleases you):
sourcecursor = sourceconn.cursor('mysourcecursor')
This directs psycopg2 to open a PostgreSQL server-side named cursor for your query. Without a named cursor on the server side, psycopg2 attempts to grab all rows at once when executing the query.
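Putting it together, a minimal sketch (connection parameters are placeholders):

import psycopg2

sourceconn = psycopg2.connect(database="sourcedb", user="user",
                              password="secret", host="localhost")

# Naming the cursor makes psycopg2 declare a server-side cursor, so rows
# are streamed in batches instead of all being materialized client-side.
sourcecursor = sourceconn.cursor('mysourcecursor')
sourcecursor.itersize = 1000  # rows fetched per network round trip

sourcecursor.execute('select * from "schema"."reallylargetable";')
for row in sourcecursor:
    pass  # do something with each row

sourcecursor.close()
sourceconn.close()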

Error with .output in sqlite3

I'm getting the following error
conn = sqlite3.connect('./mydb.db')
c = conn.cursor()
c.execute('.output ./mytable.sql')
conn.close()
c.execute('.output ./mytable.sql')
sqlite3.OperationalError: near ".": syntax error
That's because .output is a command for the sqlite3 command-line tool, not a valid SQL statement. Hence it cannot be used when you access sqlite through a library, only interactively through the command prompt.
None of the shell commands listed at https://www.sqlite.org/cli.html will work, as they are something entirely separate from sqlite itself. You can think of them as if they were part of a GUI program - it would not make sense to access a GUI feature through the library.
What you have to do is fetch the data yourself, format it yourself, and output it the way you want.
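For instance, if the goal is an SQL dump, the standard library's iterdump() does roughly what .output plus .dump would do in the shell. A sketch, reusing the file names from the question:

import sqlite3

conn = sqlite3.connect('./mydb.db')

# iterdump() yields the whole database as SQL statements, roughly what
# ".output mytable.sql" followed by ".dump" would produce in the CLI.
with open('./mytable.sql', 'w') as out:
    for line in conn.iterdump():
        out.write(line + '\n')

conn.close()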
Another option is to invoke the sqlite3 shell and pipe it the commands you want executed. Something like:
printf '.output mytable.sql\nSELECT * FROM mytable;\n' | sqlite3 mydb.db
(this is untested...)

Python pymssql connection string using variables

I'm writing a script that connects to an MSSQL DB using the pymssql module.
I couldn't find a way to make the connect method work using variables.
This works:
a = pymssql.connect(host='sqlserver', port=3183,user='admin',password='pass',database='master')
This does not (b1-b5 are variables):
a = pymssql.connect(b1, b2, b3, b4, b5)
(As shown in the first example at www.pymssql.org/en/latest/pymssql_examples.html)
I'm getting this error:
File "pymssql.pyx", line 636 in pymssql. connect (pymssql. c:10178)
pymssql.OperationalError: (20009, 'DB-Lib error message 20009,severity
9:\nUnable to connect: Adaptive Server is unavailable or does not
exist\nNet-Lib error during Unknown error (10035)\n')
The database is fine: I can log in manually, and the literal connection string works.
My variables (b1-b5) contain no single or double quotes.
When I use single quotes, I get:
Connection to database failed for an unknown reason.
Do you have an idea what the problem could be?
You should write:
a = pymssql.connect(host=b1, port=b2, user=b3, password=b4, database=b5)
where b1 is actually a HOST, b2 is a PORT, and so on. Passed positionally, the arguments are matched against pymssql.connect's own parameter order (which begins with server, user, password), so the values land in the wrong slots and the connection fails.
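A small illustration of the mismatch (all values are placeholders):

import pymssql

b1, b2, b3, b4, b5 = 'sqlserver', 3183, 'admin', 'pass', 'master'

# Positionally, pymssql.connect matches arguments against its own parameter
# order (server, user, password, database, ...), so the port lands in the
# user slot and the connection fails:
# a = pymssql.connect(b1, b2, b3, b4, b5)

# Keyword arguments bind each variable to the intended parameter:
a = pymssql.connect(host=b1, port=b2, user=b3, password=b4, database=b5)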

Python MySQLdb execution of multiple statements from a file without parsing

Is it possible to apply a MySQL batch file using the Python MySQLdb library? So far I have tried to "execute" the content of the file:
cur = connection.cursor()
cur.execute(file(filename).read())
cur.commit() # and without commit
This works only for a single statement. Otherwise I get the error:
Failed to apply content. Error 2014: Commands out of sync; you can't run this command now
I intend to support any kind of MySQL schema and table changes, so parsing the file line by line is not an option. Is there a solution other than calling the mysql client from Python?
I suppose you are using cx_Oracle?
The issue is due to calling a nonexistent method on the cursor, whereas it should be called on the connection.
It should be:
cur = connection.cursor()
cur.execute(file(filename).read())
connection.commit()
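If the goal really is to run a whole multi-statement file through MySQLdb, one approach (an untested sketch; it assumes your MySQLdb build exposes the MULTI_STATEMENTS client flag) is to enable multi-statements on the connection and then drain every result set with nextset(), which is what "Commands out of sync" is complaining about:

import MySQLdb
from MySQLdb.constants import CLIENT

connection = MySQLdb.connect(host='localhost', user='root', passwd='root',
                             db='test', client_flag=CLIENT.MULTI_STATEMENTS)
cur = connection.cursor()

with open(filename) as f:  # filename as in the question
    cur.execute(f.read())  # several ;-separated statements in one call

# Consume every result set the server returns, one per statement,
# otherwise the next command raises "Commands out of sync".
while cur.nextset() is not None:
    pass

connection.commit()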
