python psycopg2 unable to select data from AWS Redshift

I have a Postgres database on AWS Redshift. Currently I use Python psycopg2 to interact with the database. I find that I can run:
cursor.execute("INSERT INTO datatype VALUES (%s, %s)", ("humidity", "some description"))
connect.commit()
but when I do:
for row in cursor.execute("SELECT * FROM datatype"):
    print(row)
whatever I do, it always gives me None (a NoneType). Can anyone advise me on the correct way to interact with Redshift/Postgres?
Thank you
As requested, here's the whole code:
##encoding=utf8
from __future__ import print_function
import psycopg2

def connect():
    conn = psycopg2.connect(host="wbh1.cqdmrqjbi4nz.us-east-1.redshift.amazonaws.com",
                            port=5439,
                            dbname="dbname",
                            user="user",
                            password="password")
    c = conn.cursor()
    return conn, c

conn, c = connect()
c.execute("INSERT INTO table netatmo VALUES (%s, %s)", (1, 10.5))
conn.commit()  # this works, and I can see the data in other db client software

for row in c.execute("SELECT * FROM netatmo").fetchall():  # this is not working
    print(row)  # AttributeError: 'NoneType' object has no attribute 'fetchall'

You missed fetchall(). When updating you don't need it, but when selecting you have to fetch the results:
http://initd.org/psycopg/docs/cursor.html
Your code should look like this:
cursor.execute("SELECT * FROM datatype;")
for row in cursor.fetchall():
print(row)
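Putting the fix together with the connection helper from the question, a minimal end-to-end sketch could look like the following (the connection values are the placeholders from the question; in psycopg2, cursor.execute() returns None, so the rows have to come from fetchall(), fetchone(), or from iterating the cursor itself):

import psycopg2

# placeholder credentials copied from the question
conn = psycopg2.connect(host="wbh1.cqdmrqjbi4nz.us-east-1.redshift.amazonaws.com",
                        port=5439,
                        dbname="dbname",
                        user="user",
                        password="password")
c = conn.cursor()

c.execute("SELECT * FROM netatmo")  # execute() returns None, not a result set
for row in c.fetchall():            # fetch the rows from the cursor afterwards
    print(row)

# the cursor is also iterable, so this works too:
# c.execute("SELECT * FROM netatmo")
# for row in c:
#     print(row)

c.close()
conn.close()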

Related

MySQL cursor is not fetching any data

I want to make a SELECT query from Python to a MySQL DB, but I'm getting an empty list when I call cursor.fetchall().
I have already tested the connection and the query with DBeaver, and they work fine.
I tried following the tutorials on https://dev.mysql.com but it didn't work.
Here is my function:
import mysql.connector
from mysql.connector import connect

def execute_query_2(self, query):
    connection = connect(
        host="config.host",
        database="config.db_name",
        user="config.db_user",
        password="config.db_user"
    )
    print(connection)
    cursor = connection.cursor()
    cursor.execute(query)
    result = cursor.fetchall()
    print(result)
    for row in result:
        print(row)
The print(connection) gives me <mysql.connector.connection_cext.CMySQLConnection object at 0x7f6a725bfca0>.
Also, the query is being loaded successfully as 'SELECT * from occUserManager.addressInformation'.
The result of this should be 44 rows.
Any help is more than welcome.
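For reference, a minimal mysql.connector round trip that returns rows looks roughly like the sketch below; the connection values are placeholders, not the real configuration. Two things worth checking against the function above: the configuration values should be passed as Python expressions such as config.host rather than the literal strings "config.host", and rows inserted from another session are only visible once that session has committed them.

import mysql.connector

# placeholder connection values -- substitute the real configuration
connection = mysql.connector.connect(
    host="localhost",
    database="occUserManager",
    user="dbuser",
    password="dbpassword",
)
cursor = connection.cursor()
cursor.execute("SELECT * FROM occUserManager.addressInformation")
rows = cursor.fetchall()   # fetchall() returns a list of tuples
print(len(rows))           # the asker expects 44 rows here
for row in rows:
    print(row)
cursor.close()
connection.close()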

How to query Snowflake using Python (SSO authentication)?

I tried to connect to Snowflake (SSO authentication) and get data from a table.
When I run the code, I can log in with my credentials in the pop-up browser window and it connects to Snowflake, but there is no response after that (the program neither terminates nor returns a result).
I'm not sure where I'm making a mistake. Please help.
import snowflake.connector

# Connecting to Snowflake using SAML 2.0-compliant IdP federated authentication
conn = snowflake.connector.connect(
    user='G*****K',
    account='abcdf',
    authenticator='externalbrowser',
    warehouse='abcdf',
    database='abcdf',
    schema='abcdf'
)
cur = conn.cursor()
sql = "select * from abcdf.ACCT limit 10"
x = cur.execute(sql)
cur.close()
print(x)
I believe you are closing the cursor before the print;
try:
    cur.execute("SELECT col1, col2 FROM test_table ORDER BY col1")
    for (col1, col2) in cur:
        print('{0}, {1}'.format(col1, col2))
finally:
    cur.close()
Details: https://docs.snowflake.com/en/user-guide/python-connector-example.html#using-cursor-to-fetch-values
The results of the query are stored in the cursor; the contents of the cursor may then be stored in a local variable.
Also, it is best practice to close the connection at the end.
https://www.psycopg.org/docs/cursor.html
import snowflake.connector

# Connecting to Snowflake using SAML 2.0-compliant IdP federated authentication
conn = snowflake.connector.connect(
    user='G*****K',
    account='abcdf',
    authenticator='externalbrowser',
    warehouse='abcdf',
    database='abcdf',
    schema='abcdf'
)
cur = conn.cursor()
sql = "select * from abcdf.ACCT limit 10"
cur.execute(sql)
print(cur.fetchall())
cur.close()
conn.close()
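If you prefer not to manage the close() calls by hand, the same flow can be written with contextlib.closing, which closes the cursor and the connection even if the query raises. This is only a sketch reusing the placeholder connection parameters from the question:

from contextlib import closing
import snowflake.connector

with closing(snowflake.connector.connect(
        user='G*****K',
        account='abcdf',
        authenticator='externalbrowser',
        warehouse='abcdf',
        database='abcdf',
        schema='abcdf')) as conn:
    with closing(conn.cursor()) as cur:
        cur.execute("select * from abcdf.ACCT limit 10")
        for row in cur:  # the cursor is iterable once execute() has run
            print(row)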

Access specific table from MySQL database with Python

I have a MySQL database named my_database, and in that database there are a lot of tables. I want to connect to MySQL from Python and work with a specific table named my_table from that database.
This is the code that I have for now:
import json
import pymysql
connection = pymysql.connect(user = "root", password = "", host = "127.0.0.1", port = "", database = "my_database")
cursor = connection.cursor()
print(cursor.execute("SELECT * FROM my_database.my_table"))
This code returns the number of rows, but I want to get all columns and rows (all the values from that table).
I have also tried SELECT * FROM my_table, but the result is the same.
Did you read the documentation? You need to fetch the results after executing, with fetchone(), fetchall(), or something like this:
import json
import pymysql

connection = pymysql.connect(user = "root", password = "", host = "127.0.0.1", port = "", database = "my_database")
with connection.cursor(pymysql.cursors.DictCursor) as cursor:
    cursor.execute("SELECT * FROM my_database.my_table")
    rows = cursor.fetchall()
    for row in rows:
        print(row)
You probably also want a DictCursor, as the results are then returned as dicts.
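For illustration only (the column names below are made up), the difference looks like this:

# default cursor: each row is a tuple, accessed by position
#   first_row = rows[0]; print(first_row[0], first_row[1])
# pymysql.cursors.DictCursor: each row is a dict keyed by column name
for row in rows:
    print(row["id"], row["name"])  # hypothetical columns "id" and "name"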

Updating MySQL DB Using Python For Loop Does not Work

I am trying to update the data from 'Active' to 'Retired' by looping through a list of devices from a specific text file.
Somehow, however, it does not filter the list of devices from the text file and update the corresponding data; it makes no changes to the database at all.
Could it have something to do with my for statement, or with the SQL statement that I came up with? Regardless of how many times I fix the SQL, it still gives the same result.
What could be the problem?
Please take a look at the code below and see if there is any mistake I have made, SQL-wise or Python-wise.
Thank you in advance for your great help. Much appreciated.
import pyodbc

conn = pyodbc.connect('Driver={SQL Server};'
                      'Server=############;'
                      'Database=########;'
                      'Trusted_Connection=yes;')
cursor = conn.cursor()
cursor.execute('SELECT id, device_id, model_number, serial_number_1,\
    status_1, user_name_1 FROM [Footprint].[fpscdb001_cmdb_004].[desktop]')

results = []
with open('H:\list.txt') as inputfile:
    results = inputfile.read().splitlines()

SQL = """UPDATE [Footprint].[fpscdb001_cmdb_004].[desktop]
         SET status_1 = "Retired"
         WHERE device_id == %s"""

try:
    for i in results:
        cursor.execute(SQL, results[i])
        cursor.commit()
        # print(rowcount)
except:
    conn.rollback()
finally:
    conn.close()
It looks like the problem is both your SQL and your Python.
There is a problem with your SQL at this part: WHERE device_id == %s. In SQL there is no ==; a single = is used both to set and to check values, so it should be WHERE device_id = ?.
In addition, you're using %s as a placeholder in your query. I'm not familiar with pyodbc, but a quick check of the docs suggests you should be using ? as the placeholder.
So try this:
SQL = """UPDATE [Footprint].[fpscdb001_cmdb_004].[desktop]
SET status_1 = "Retired"
WHERE device_id = ?"""
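A hedged sketch of how the corrected statement could be used inside the loop; note the single quotes around 'Retired' (SQL Server treats double-quoted "Retired" as an identifier by default) and the parameter tuple passed to pyodbc, followed by a single commit:

SQL = """UPDATE [Footprint].[fpscdb001_cmdb_004].[desktop]
         SET status_1 = 'Retired'
         WHERE device_id = ?"""

for device_id in results:              # iterate over the values, not the indexes
    cursor.execute(SQL, (device_id,))  # pyodbc binds the ? placeholder from the tuple
conn.commit()                          # persist the updates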
Building on the answer that @RToyo wrote, you may be able to do this a little more quickly.
We can build a list of ? placeholders in the SQL and then pass each item safely to the ODBC driver, using the * notation to explode the array of device IDs into the execute() function. This allows you to execute only one query, and to do it securely, too:
import pyodbc

conn = pyodbc.connect('Driver={SQL Server};'
                      'Server=############;'
                      'Database=########;'
                      'Trusted_Connection=yes;')
cursor = conn.cursor()
cursor.execute('SELECT id, device_id, model_number, serial_number_1,\
    status_1, user_name_1 FROM [Footprint].[fpscdb001_cmdb_004].[desktop]')

results = []
with open('H:\list.txt') as inputfile:
    results = inputfile.read().splitlines()

SQL = """UPDATE [Footprint].[fpscdb001_cmdb_004].[desktop]
         SET status_1 = 'Retired'
         WHERE device_id in ({})""".format(("?, " * len(results))[0:-2])

try:
    if len(results) > 0:
        cursor.execute(SQL, *results)
        conn.commit()  # commit the update so the changes persist
except:
    conn.rollback()
finally:
    conn.close()
Hope this helps someone.
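As a further sketch (not part of either answer), pyodbc's executemany() runs the same single-row update once per device ID, which avoids building the IN list dynamically; parameters are passed as a sequence of tuples:

SQL = """UPDATE [Footprint].[fpscdb001_cmdb_004].[desktop]
         SET status_1 = 'Retired'
         WHERE device_id = ?"""

if results:
    cursor.executemany(SQL, [(device_id,) for device_id in results])
    conn.commit()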

Can't see table created by pyodbc in MS Access

I am accessing an MS Access database in Python 3.6 using the pyodbc library. I can read a table, no problems. Then I created a simple table (Employee) and inserted records. I was able to fetch those records too by reading the table, no problems.
I also listed the tables in the MS Access DB, and the Employee table shows up in the list.
But when I open the MS Access database itself, I do not find the table. I changed MS Access to show hidden and system objects, but the Employee table still doesn't show up.
What am I doing wrong?
Thanks
Here is the code:
import pyodbc

db_file = r'''C:\TickData2018\StooqDataAnalysis.accdb'''
user = 'admin'
password = ''
odbc_conn_str = 'DRIVER={Microsoft Access Driver (*.accdb)};DBQ=%s;UID=%s;PWD=%s' %\
    (db_file, user, password)
# Or, for newer versions of the Access drivers:
odbc_conn_str = 'DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};DBQ=%s;UID=%s;PWD=%s' %\
    (db_file, user, password)

conn = pyodbc.connect(odbc_conn_str)
print("connection made")
c = conn.cursor()

c.execute("SELECT * FROM 5MtsBaseForAnalysisSorted")
list1 = c.fetchmany(2)
print(list1[0][0])
print(list1[0][1])
print(list1[0][2])

try:
    c.execute("""CREATE TABLE employee(
        first text,
        last text,
        pay integer
        );""")
except Exception as e:
    print(e)
conn.commit

c.execute("INSERT INTO employee VALUES ('Krishna', 'Sundar', 50000)")
c.execute("INSERT INTO employee VALUES ('Divya', 'Sundar', 70000)")
c.execute("INSERT INTO employee VALUES ('Panka', 'Sundar', 70000)")
conn.commit

c.execute("SELECT * FROM employee")
print(c.fetchall())

c.tables()
rows = c.fetchall()
for row in rows:
    print(row)

c.close()
del c
conn.close()
This is a general Python object-model issue: you need to call the actual method rather than just reference its bound name. Specifically, your commit lines are not correct, where
conn.commit
should be written with open/close parentheses:
conn.commit()
Another way to see the difference is by reviewing the object's type:
type(conn.commit)
# <built-in method commit of pyodbc.Connection object at 0x000000000B772E40>
type(conn.commit())
# NoneType
I reproduced your issue with the exact code, and adding parentheses resolved it.
An additional solution to committing manually is to set autocommit=True when the connection instance is created, e.g.:
conn = pyodbc.connect(odbc_conn_str, autocommit = True)
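Putting the two points together, a minimal corrected sketch of the original flow might look like this; use either an explicit conn.commit() or autocommit=True, whichever you prefer:

import pyodbc

conn = pyodbc.connect(odbc_conn_str)  # or: pyodbc.connect(odbc_conn_str, autocommit=True)
c = conn.cursor()

c.execute("INSERT INTO employee VALUES ('Krishna', 'Sundar', 50000)")
conn.commit()  # note the parentheses -- this call actually performs the commit

c.execute("SELECT * FROM employee")
print(c.fetchall())

c.close()
conn.close()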
