Can I set a user-defined variable in Python MySQLdb?

So my problem is this: I have a query that uses a MySQL user-defined variable, like:
SET @X:=0; SELECT @X:=@X+1 FROM some_table
which returns a column numbered 1 to 1000.
However, this query doesn't work if I send it through MySQLdb in Python:
connection = MySQLdb.Connect(host='xxx', user='xxx', passwd='xxx', db='xxx')
cursor = connection.cursor()
cursor.execute("""SET @X:=0; SELECT @X:=@X+1 FROM some_table""")
rows = cursor.fetchall()
print rows
It prints an empty tuple.
How can I solve this?
Thanks

Try to execute one query at a time:
cursor.execute("SET #X:=0;");
cursor.execute("SELECT #X:=#X+1 FROM some_table");

Try it as two queries.
If you want it to be one query, the examples in the comments to the MySQL User Variables documentation look like this:
SELECT @rownum:=@rownum+1 rownum, t.* FROM (SELECT @rownum:=1) r, mytable t;
or
SELECT if(@a, @a:=@a+1, @a:=1) as rownum
See http://dev.mysql.com/doc/refman/5.1/en/user-variables.html
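If you prefer the single-statement form, here is a hedged sketch of running that row-numbering pattern through MySQLdb (connection details and table name are placeholders; initializing the variable to 0 in the derived table makes the numbering start at 1):
import MySQLdb

connection = MySQLdb.Connect(host='xxx', user='xxx', passwd='xxx', db='xxx')
cursor = connection.cursor()

# The derived table (SELECT @rownum:=0) initializes the variable inside the
# same statement, so no separate SET is needed.
cursor.execute(
    "SELECT @rownum:=@rownum+1 AS rownum, t.* "
    "FROM (SELECT @rownum:=0) r, mytable t"
)
print cursor.fetchall()[:5]   # first few numbered rows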

Related

SELECT statement in MySQL returning wrong value using Python

I am trying to retrieve a value from a table stored in a MySQL database using Python (PyCharm), but instead of outputting the stored value it outputs the number of rows.
import pymysql

connection = pymysql.connect(host='localhost',
                             user='root',
                             password='passw',
                             database='database1',
                             charset='utf8',
                             port=3306)
x = connection.cursor()
select = x.execute('''SELECT
                          update_id
                      FROM
                          telegram;
                      ''')
print(select)
Output: 1
^Wrong output (the output equals the number of rows). As I keep adding rows, the output changes to the row count but never returns the stored value.
The command works perfectly from the MySQL client:
SELECT
update_id
FROM
telegram;
Output:233
^This is the correct output.
Why is this happening? What changes should I make in my python code?
According to the PyMySQL documentation, this is how you are supposed to print the result:
sql = "SELECT `id`, `password` FROM `users` WHERE `email`=%s"
cursor.execute(sql, ('webmaster@python.org',))
result = cursor.fetchone()
print(result)
You are missing cursor.fetchone().
I hope that helps.
According to the documentation, cursor.execute() returns the number of affected rows. You then need to fetch the content with fetchone() or fetchall(). See the example in the PyMySQL documentation.
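Applied to the code in the question, a minimal sketch of the fix might look like this (credentials, table, and column names are the ones shown above):
import pymysql

connection = pymysql.connect(host='localhost',
                             user='root',
                             password='passw',
                             database='database1',
                             charset='utf8',
                             port=3306)
cursor = connection.cursor()

row_count = cursor.execute('SELECT update_id FROM telegram')
print(row_count)         # 1 -> execute() only reports how many rows matched

row = cursor.fetchone()  # actually retrieve the first row
print(row[0])            # 233 -> the stored update_id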

PostgreSQL query gives unexpected result

I'm trying to do something extremely simple that works, but not the way I expect it to. I have a database with various tables, and for each of those tables I'm trying to extract the column names from the information schema. I'm using the code below and everything works like a charm (Python):
import pandas as pd
import psycopg2 as pgsql
# code to connect and generate cursor
table = 'some_table_name'
query = 'SELECT column_name FROM information_schema.columns WHERE table_name = %s'
cursor.execute(query, (table,))
result = pd.DataFrame(cursor.fetchall())
print(result)
So far, so good. The problem arises when I replace the query variable with the following:
import psycopg2 as pgsql
# code to connect and generate cursor
table = 'some_table_name'
query = 'SELECT column_name FROM information_schema.columns WHERE table_name=' + table
cursor.execute(query)
result = pd.DataFrame(cursor.fetchall())
print(result)
If I print the statement, it's correct:
SELECT column_name FROM information_schema.columns WHERE table_name=some_table_name
However, when I run the query, I'm getting this error message:
UndefinedColumn: column "some_table_name" does not exist
LINE 1: ... FROM information_schema.columns WHERE table_name=some_tabl...
some_table_name is a table name passed as a parameter to the WHERE clause, not a column name. How is this even possible?
Thanks!
Your problem is that you haven't put some_table_name in quotes, so it is treated as a column name rather than a string literal. Why not stick with the first method, which both works and is in line with the psycopg2 documentation?
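For completeness, a minimal sketch of the two variants side by side (connection parameters are placeholders); the parameterized form lets psycopg2 quote and escape the value for you:
import pandas as pd
import psycopg2 as pgsql

# placeholders for the real connection details
conn = pgsql.connect(host='localhost', dbname='mydb', user='me', password='secret')
cursor = conn.cursor()
table = 'some_table_name'

# Safe: the driver sends table_name = 'some_table_name' (a quoted literal).
cursor.execute(
    'SELECT column_name FROM information_schema.columns WHERE table_name = %s',
    (table,),
)
print(pd.DataFrame(cursor.fetchall()))

# The concatenated version sends table_name=some_table_name with no quotes,
# which the parser reads as a column reference, hence the UndefinedColumn error.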

python: how to query multiple mysql databases using pandas.read_sql_query

As mentioned above, I need to query several databases that all share the same schema using pandas.read_sql_query. I've tried to loop over all the DBs and build the SQL statement on the fly, which results in something like this:
USE db_test_1; SELECT * from test
That's what I've done so far:
cursor = conn.cursor()
cursor.execute("SHOW DATABASES LIKE '%test_%'")
dbs = [v.replace("'", "") for (v,) in cursor]
cursor.close()
for db in dbs[:100]:
    temp = "USE " + db + ";"
    fd = open('my_query.sql')
    query = fd.read()
    fd.close()
    sql = temp + query
    data = pd.read_sql_query(sql, conn)
    print(data)
This gives an error saying that the MySQL syntax is wrong. Do you have any idea how to handle it, or can you point me to the error?
Many thanks
Your problem lies with your my_query.sql file.
SELECT (SELECT * from tab1), (SELECT * from tab2)
The above is not valid SQL; a subselect in the select list can only return a single column. To do this, you would need to join the two subselects in the FROM clause, as sketched below. Which columns you join on will depend entirely on your schema and the relation you need.
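Here is a hedged sketch of what a corrected my_query.sql could look like when read through pandas; the join column id is hypothetical and would need to be replaced by whatever actually relates tab1 and tab2:
import pandas as pd

# conn is the existing MySQL connection from the question's code.
query = """
    SELECT t1.*, t2.*
    FROM (SELECT * FROM tab1) AS t1
    JOIN (SELECT * FROM tab2) AS t2 ON t1.id = t2.id
"""
data = pd.read_sql_query(query, conn)
print(data)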
Update:
Okay, so the problem here seems to be more about how you're dealing with the query. Your cursor object is connected to a single database, not to the entire database server.
That means that your cursor object cannot use the use keyword here. You need to create a new connection and cursor object for each database you want to connect to.
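A hedged sketch of that approach, assuming a PyMySQL-style driver (credentials are placeholders and dbs is the list collected from SHOW DATABASES above):
import pandas as pd
import pymysql

with open('my_query.sql') as fd:
    query = fd.read()

frames = []
for db in dbs:
    # one fresh connection per database instead of issuing USE
    db_conn = pymysql.connect(host='localhost', user='root',
                              password='pass', database=db)
    frames.append(pd.read_sql_query(query, db_conn))
    db_conn.close()

data = pd.concat(frames, ignore_index=True)
print(data)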

jaydebeapi Getting column alias names

Is there a way to return the aliased column names from a SQL query run through JayDeBeApi?
For example, I have the following query:
sql = """ SELECT visitorid AS id_alias FROM table LIMIT 1 """
I then run the following (connect_to_vdm() establishes a connection to my DB):
curs = connect_to_vdm().cursor()
curs.execute(sql)
vals = curs.fetchall()
I normally retrieve column names like so:
desc = curs.description
column_names = [col[0] for col in desc]
This returns the original column name "visitorid" and not the alias specified in the query "id_alias".
I know I could swap the names for the values in Python, but I am hoping to have this done within the query, since the alias is already defined in the SELECT statement. This behaves as expected in a SQL client, but I cannot seem to get the aliases back when using Python/JayDeBeApi. Is there a way to do this using JayDeBeApi?
EDIT:
I have discovered that structuring my query with a CTE seems to help fix the problem, but still wondering if there is a more straightforward solution out there. Here is how I rewrote the same query:
sql = """ WITH cte (id_alias) AS (SELECT visitorid AS id_alias FROM table LIMIT 1) SELECT id_alias from cte"""
I was able to fix this using a CTE (Common Table Expression):
sql = """ WITH cte (id_alias) AS (SELECT visitorid AS id_alias FROM table LIMIT 1) SELECT id_alias from cte"""
Hat tip to pybokeh on GitHub; this worked for me.
According to IBM (here and here), the behavior of JDBC drivers changed at some point. Bizarrely, the column aliases display just fine in a tool like DbVisualizer, but not when querying through JayDeBeApi.
To fix, add the following to the end of your DB URL:
:useJDBC4ColumnNameAndLabelSemantics=false;
Example:
jdbc:db2://[DBSERVER]:[PORT]/[DBNAME]:useJDBC4ColumnNameAndLabelSemantics=false;
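A hedged sketch of wiring that URL into a JayDeBeApi connection (the driver class, credentials, jar path, and query are placeholders for your environment):
import jaydebeapi

url = ('jdbc:db2://dbserver:50000/dbname'
       ':useJDBC4ColumnNameAndLabelSemantics=false;')
conn = jaydebeapi.connect('com.ibm.db2.jcc.DB2Driver',   # DB2 JDBC driver class
                          url,
                          ['db_user', 'db_password'],
                          '/path/to/db2jcc4.jar')
curs = conn.cursor()
curs.execute('SELECT visitorid AS id_alias FROM table LIMIT 1')
print([col[0] for col in curs.description])   # should now report 'id_alias'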

Python MySQLdb doesn't wait for the result

I am trying to run some queries that need to create temporary tables and then return a result set, but I am unable to do that with the MySQLdb API.
I have already dug around this issue, like here, but without success.
My query is like this:
create temporary table tmp1
select * from table1;
alter table tmp1 add index(somefield);
create temporary table tmp2
select * from table2;
select * from tmp1 inner join tmp2 using(somefield);
This immediately returns an empty result set. If I go to the mysql client and do a SHOW FULL PROCESSLIST, I can see my queries executing; they take some minutes to complete.
Why does the cursor return immediately instead of waiting for the query to run?
If I try to run another query I get a "Commands out of sync; you can't run this command now" error.
I have already tried putting my connection into autocommit mode:
db = MySQLdb.connect(host='ip',
                     user='root',
                     passwd='pass',
                     db='mydb',
                     use_unicode=True)
db.autocommit(True)
I also tried putting every statement in its own cursor.execute() with db.commit() between them, but without success either.
Can you help me figure out what the problem is? I know MySQL doesn't support transactions for some operations like ALTER TABLE, but why doesn't the API wait until everything is finished, like it does with a SELECT?
By the way, I'm trying to do this in an IPython notebook.
I suspect that you're passing your multi-statement SQL string directly to the cursor.execute function. The thing is, each of the statements is a query in its own right so it's unclear what the result set should contain.
Here's an example to show what I mean. The first case passes a semicolon-joined set of statements to execute, which is what I presume you have currently.
def query_single_sql(cursor):
    print 'query_single_sql'
    sql = []
    sql.append("""CREATE TEMPORARY TABLE tmp1 (id int)""")
    sql.append("""INSERT INTO tmp1 VALUES (1)""")
    sql.append("""SELECT * from tmp1""")
    cursor.execute(';'.join(sql))
    print list(cursor.fetchall())
Output:
query_single_sql
[]
You can see that nothing is returned, even though there is clearly data in the table and a SELECT is used.
The second case is where each statement is executed as an independent query, and the results printed for each query.
def query_separate_sql(cursor):
    print 'query_separate_sql'
    sql = []
    sql.append("""CREATE TEMPORARY TABLE tmp3 (id int)""")
    sql.append("""INSERT INTO tmp3 VALUES (1)""")
    sql.append("""SELECT * from tmp3""")
    for query in sql:
        cursor.execute(query)
        print list(cursor.fetchall())
Output:
query_separate_sql
[]
[]
[(1L,)]
As you can see, we consumed the results of the cursor for each query and the final query has the results we expect.
I suspect that even though you've issued multiple queries, the API only has a handle to the first query executed, and so it returns immediately when the CREATE TABLE is done. I'd suggest serializing your queries as described in the second example above.
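Applied to the temporary-table workflow from the question, a hedged sketch of that serialization (connection details are the placeholders from the question; the intermediate statements return no rows, so only the final SELECT is fetched):
import MySQLdb

db = MySQLdb.connect(host='ip', user='root', passwd='pass', db='mydb',
                     use_unicode=True)
cursor = db.cursor()

# Run each statement on its own so the driver stays in sync.
cursor.execute("CREATE TEMPORARY TABLE tmp1 SELECT * FROM table1")
cursor.execute("ALTER TABLE tmp1 ADD INDEX (somefield)")
cursor.execute("CREATE TEMPORARY TABLE tmp2 SELECT * FROM table2")

cursor.execute("SELECT * FROM tmp1 INNER JOIN tmp2 USING (somefield)")
rows = cursor.fetchall()   # the join result is available once this returns
print len(rows)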
