I am using Maya 2011 (64-bit) and MySQL 5.5 (64-bit) on a Windows 7 (64-bit) machine. I tried to connect Maya to MySQL through Python's MySQLdb, so I copied the connector files into maya\python\lib\site-packages.
I was able to import the MySQLdb module without any error, but when I tried to call the cursor object (for querying), I found that Maya was not recognizing it.
Here is my sample code:
import MySQLdb as mb
import maya.cmds as cmds

def mysql_connect(hostname, username, password, dbname):
    db = mb.connect(host=hostname, user=username, passwd=password, db=dbname)

db = mysql_connect("localhost", "root", "test", "mydbt")
dbcursor = db.cursor()
dbcursor.execute("select * from maya")
But the code throws the following error :
Error: AttributeError: 'NoneType' object has no attribute 'cursor' #
I tried verifying the environment path variables and replacing the connector files, but the problem persists.
Being a beginner, I am unable to identify the exact issue.
Any suggestions would be much appreciated.
You are not returning anything from the mysql_connect function, so it returns None. When you do:
db = mysql_connect("localhost", "root", "test", "mydbt")
db becomes None. Try changing:
db = mb.connect(host=hostname, user=username, passwd=password, db=dbname)
to
return mb.connect(host=hostname, user=username, passwd=password, db=dbname)
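The None behavior is easy to reproduce without Maya or MySQL at all; a minimal sketch (the names here are made up for illustration):

```python
# A function with no return statement implicitly returns None,
# even if it successfully creates an object internally.
def make_connection_no_return():
    conn = "pretend-connection"   # assigned to a local name, then discarded

def make_connection():
    conn = "pretend-connection"
    return conn                   # the caller actually receives the object

db = make_connection_no_return()
print(db)                         # None -> db.cursor() would raise AttributeError

db = make_connection()
print(db)                         # pretend-connection
```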
That being said, I'm not sure wrapping a single call in a function like this buys you much. Better to have something like this:
import MySQLdb as mb
import maya.cmds as cmds
db = mb.connect(host="localhost", user="root", passwd="test", db="mydbt")
dbcursor = db.cursor()
dbcursor.execute("select * from maya")
Here, you have two assignments feeding into db. It appears that mysql_connect("localhost", "root", "test", "mydbt") is returning None (the function never returns the connection), so when you call db.cursor() later, you get that error.
Make sure you're assigning the db variable correctly (in this case, it looks like you aren't).
I'm running code whose first line is
import psycopg
and the ImportError "no pq wrapper available" immediately pops up.
I couldn't find a solution anywhere, so I'm asking here.
There is a new way.
I was having the exact same issue, and I really couldn't be bothered to hunt it down: the code works on WSL with IntelliJ on Windows, and on WSL with Ubuntu, but not in Jupyter Lab on Windows (I'll reply back if I find the cause). So I took this as a good opportunity to use pg8000 instead. pg8000 is a pure-Python implementation of the PostgreSQL protocol, and so far it seems similar in operation (I tested it with a cursor against a local Postgres). Install it with pip3 install pg8000, then...
Basic Ingredients
#!/usr/bin/env python3
import pg8000 as pg

def db_conn():
    conn = pg.Connection(host='localhost',
                         database='dev_database',
                         user='postgres',
                         password='yourPassword!')
    return conn

# Now you have a connection; query your data with a cursor
conn = db_conn()
cur = conn.cursor()
cur.execute('SELECT * FROM tab1')
data = cur.fetchall()

# Now 'data' contains your data, or use the newer conn.run interface
if __name__ == '__main__':
    print('Grabbing data from tab1...')
    for row in conn.run('SELECT * FROM tab1'):
        print(row)
    conn.close()
The conn.run method returns a list with one inner list per row; it reads into a pandas DataFrame with no trouble, and since it's iterable (no, I don't recommend this, but it's a lazy Saturday 😄):
df = pd.DataFrame([row for row in conn.run('SELECT * FROM journal;')])
I'll get my coat 😅
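If you go the pg8000 route, its native interface also supports named parameters in conn.run, which avoids building SQL strings by hand. A sketch reusing the connection details from the example above (the import sits inside the function only so the snippet stays loadable without pg8000 installed):

```python
def fetch_rows(query, **params):
    """Open a connection, run a parameterized query, return the rows."""
    import pg8000 as pg  # hypothetical placement; normally a top-level import
    conn = pg.Connection(host='localhost',
                         database='dev_database',
                         user='postgres',
                         password='yourPassword!')
    try:
        # conn.run accepts named ':param' placeholders as keyword arguments
        return list(conn.run(query, **params))
    finally:
        conn.close()

# usage: fetch_rows('SELECT * FROM tab1 WHERE id = :id', id=1)
```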
I'm trying to execute many (~1000) MERGE INTO statements against Oracle 11.2.0.4.0 (64-bit) using Python 3.9.2 (64-bit) and pyodbc 4.0.30 (64-bit). However, every statement raises an exception:
HY000: The driver did not supply an error
I've tried everything I can think of to solve this problem, with no luck. I tried changing the code, the encodings/decodings, and the ODBC driver from Oracle home 12.1 (64-bit) to Oracle home 19.1 (64-bit). I also tried pyodbc 4.0.22, in which case the error just changed to:
<class 'pyodbc.ProgrammingError'> returned a result with an error set
which is no more helpful than the first one. I assume the issue cannot be the MERGE INTO statements themselves, because when I run them directly in the database shell, they complete without issue.
Below is my code. I should also mention that the commands and parameters are read from stdin before being executed, and the Oracle database uses the UTF-8 character set.
cmds = sys.stdin.readlines()
comms = json.loads(cmds[0])

conn = pyodbc.connect(connstring)
conn.setencoding(encoding="utf-8")
cursor = conn.cursor()
cursor.execute("""ALTER SESSION SET NLS_DATE_FORMAT='YYYY-MM-DD"T"HH24:MI:SS.....'""")

for comm in comms:
    params = [None if str(x) == 'None' or str(x) == 'NULL' else x for x in comm["params"]]
    try:
        cursor.execute(comm["sql"], params)
    except Exception as e:
        print(e)

conn.commit()
conn.close()
Edit: Another thing worth mentioning: this issue began after updating from Python 2.7 to 3.9.2. The code itself didn't require any changes at all in this particular location, though.
I've had my share of HY000 errors in the past. It almost always came down to a syntax error in the SQL query. Double-check all your double and single quotes, and make sure the query works when run independently in an SQL session against your database.
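To narrow down which of the ~1000 statements is actually at fault, one approach is to execute them one at a time and record the failures, so the offending SQL can be tested in isolation. A minimal, database-agnostic sketch (the stub cursor below stands in for a real pyodbc cursor and is purely for demonstration):

```python
def run_statements(cursor, statements):
    """Execute (sql, params) pairs one by one; return the list of
    successful statements and a list of (statement, error) failures."""
    ok, failed = [], []
    for sql, params in statements:
        try:
            cursor.execute(sql, params)
            ok.append(sql)
        except Exception as exc:
            failed.append((sql, str(exc)))
    return ok, failed

# Demo with a stub cursor, no database needed: the second statement fails.
class StubCursor:
    def execute(self, sql, params=None):
        if "BAD" in sql:
            raise RuntimeError("HY000: The driver did not supply an error")

ok, failed = run_statements(StubCursor(),
                            [("MERGE INTO t USING dual ON (1=1)", None),
                             ("BAD STATEMENT", None)])
print(failed)   # pinpoints exactly which SQL triggered the driver error
```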
I am able to connect to AWS Redshift with psycopg2 using Python; I can query tables and get data back, etc.
However, when I try to run a CREATE FUNCTION (UDF) statement through psycopg2, nothing happens: no error is returned, but nothing gets created.
Here's my code:
def _applyFunctionToDB():
    con = psycopg2.connect(dbname=redhsiftDatabase, host=redshiftHost, port='5439', user=redshiftUser, password=redshiftPwd)
    cur = con.cursor()
    udf = _fileOpenWrite(udfFile)
    size = os.stat(udfFile).st_size
    udfCode = udf.read(size)
    cur.execute(udfCode)
    con.close()
I have run it through the debugger and all the pieces are there, but nothing happens when the execute method is invoked on the cursor.
If anyone has any advice or ideas on what might be going on here, I'd appreciate it.
Thanks!
Found the answer just after posting here: Copying data from S3 to AWS redshift using python and psycopg2
I need to invoke a commit, so add con.commit() in the above code after the execute call.
I have created a stored procedure USUARIOS_GET and tested it in the Oracle console, where it works fine. This is the code of the stored procedure:
create or replace PROCEDURE USUARIOS_GET(
    text_search in VARCHAR2,
    usuarios_list out sys_refcursor
)
AS
--Variables
BEGIN
    open usuarios_list for select * from USUARIO;
END USUARIOS_GET;
The Python code is this:
with connection.cursor() as cursor:
    listado = cursor.var(cx_Oracle.CURSOR)
    l_query = cursor.callproc('usuarios_get', ('', listado))  # this line produces the error
    l_results = l_query[1]
The error is the following:
NotSupportedError: Variable_TypeByValue(): unhandled data type VariableWrapper
I've also tried another stored procedure with an out parameter of type NUMBER, modifying the Python code to listado = cursor.var(cx_Oracle.NUMBER), and I get the same error:
NotSupportedError: Variable_TypeByValue(): unhandled data type VariableWrapper
I'm working with:
python 2.7.12
Django 1.10.4
cx_Oracle 5.2.1
Oracle 12c
Can anyone help me with this?
Thanks
The problem is that Django's cursor wrapper is incomplete, so you need to make sure you have a "raw" cx_Oracle cursor instead. You can get one using the following code:
django_cursor = connection.cursor()
raw_cursor = django_cursor.connection.cursor()
out_arg = raw_cursor.var(int) # or raw_cursor.var(float)
raw_cursor.callproc("<procedure_name>", (in_arg, out_arg))
out_val = out_arg.getvalue()
Then use the "raw" cursor to create the variable and call the stored procedure.
Looking at the definition of the variable wrapper in Django, it also looks like you can access the "var" property on the wrapper and pass that directly to the stored procedure instead -- but I don't know whether that is a better long-term option!
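For the original question's sys_refcursor out parameter (rather than a number), the same raw-cursor trick applies. A sketch assuming the usuarios_get procedure above (the import sits inside the function only so the snippet stays loadable without cx_Oracle installed):

```python
def call_usuarios_get(connection, text_search=""):
    """Call usuarios_get through a raw cx_Oracle cursor and return its rows."""
    import cx_Oracle  # hypothetical placement; normally a top-level import
    django_cursor = connection.cursor()
    raw_cursor = django_cursor.connection.cursor()  # the underlying cx_Oracle cursor
    ref_cursor = raw_cursor.var(cx_Oracle.CURSOR)   # holds the sys_refcursor out param
    raw_cursor.callproc("usuarios_get", (text_search, ref_cursor))
    return ref_cursor.getvalue().fetchall()         # rows from the USUARIO table
```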
Anthony's solution works for me with Django 2.2 and Oracle 12c. Thanks! I couldn't find this solution anywhere else on the web.
import cx_Oracle

dcursor = connection.cursor()
cursor = dcursor.connection.cursor()
out_arg = cursor.var(cx_Oracle.NUMBER)
ret = cursor.callproc("<procedure_name>", (in_arg, out_arg))
I have some code written with pyodbc on Windows x64 using Python 2.6, and it runs with no problems.
Using the same code switched to MySQLdb, I get errors.
For example: 'long' object is not iterable.
What's the difference between pyodbc and MySQLdb?
EDIT
import csv, pyodbc, os
import numpy as np

cxn = pyodbc.connect('DSN=MySQL;PWD=me')

import MySQLdb
cxn = MySQLdb.connect(host="localhost", user="root", passwd="me")

csr = cxn.cursor()
try:
    csr.execute('Call spex.updtop')
    cxn.commit
except: pass
csr.close()
cxn.close()
del csr, cxn
Without seeing code, it's not obvious why you're getting errors. You can connect to MySQL databases using either one, and they both implement version 2.x of the Python DB API, though their underlying workings are totally different, as Ignacio Vazquez-Abrams commented.
Some things to consider:
Are you using extensions to the Python DB API that might not be implemented in both?
Are the two libraries translating MySQL datatypes to Python datatypes the same way?
Is there example code you could post?
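One concrete difference that bites when porting code between the two: they use different DB-API parameter styles. pyodbc uses qmark placeholders ('?') while MySQLdb uses format placeholders ('%s'), so a parameterized query written for one fails under the other. A naive sketch of the translation (table and column names are made up; it does not handle '?' inside quoted string literals):

```python
def qmark_to_format(sql):
    """Translate pyodbc-style '?' placeholders to MySQLdb's '%s' style.
    Naive: blindly replaces every '?', including any inside string literals."""
    return sql.replace("?", "%s")

print(qmark_to_format("SELECT * FROM t WHERE id = ? AND name = ?"))
# SELECT * FROM t WHERE id = %s AND name = %s
```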