I have a MySQL database named my_database, and in that database there are a lot of tables. I want to connect to MySQL from Python and work with a specific table named my_table from that database.
This is the code that I have for now:
import json
import pymysql

# port must be an int; 3306 is the MySQL default
connection = pymysql.connect(user="root", password="", host="127.0.0.1",
                             port=3306, database="my_database")
cursor = connection.cursor()
print(cursor.execute("SELECT * FROM my_database.my_table"))
This code returns the number of rows, but I want to get all columns and rows (all the values from that table).
I have also tried SELECT * FROM my_table, but the result is the same.
Did you read the documentation? cursor.execute() returns only the number of affected rows; you need to fetch the results after executing, with fetchone(), fetchall(), or something like this:
import json
import pymysql

connection = pymysql.connect(user="root", password="", host="127.0.0.1",
                             port=3306, database="my_database")
with connection.cursor(pymysql.cursors.DictCursor) as cursor:
    cursor.execute("SELECT * FROM my_database.my_table")
    rows = cursor.fetchall()
    for row in rows:
        print(row)
You probably also want a DictCursor, so each row comes back as a dict keyed by column name instead of a plain tuple.
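Since the question already imports json, here is a minimal sketch of dumping the fetched rows as JSON; it assumes the table's values are JSON-serializable (default=str is a fallback for dates, Decimals, and similar types):
import json

# DictCursor rows are plain dicts, so the list serializes directly
print(json.dumps(rows, indent=2, default=str))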
I am trying to get data from an Excel spreadsheet into a MySQL database using Python. I can do it fine if I add my records to a list of tuples manually in my SQL query, but when I try to loop through the records I get an error stating the tuple index is out of range. I am not sure what the issue is here or how to remedy it. Any help would be appreciated. Thank you.
import mysql.connector
import xlrd

mydb = mysql.connector.connect(
    host="localhost",
    user="root",
    passwd="*******",
    database="testdb",
)
mycursor = mydb.cursor()

loc = "E:\\SOX Reports\\2021 Reports\\populateData.xlsx"
l = list()
a = xlrd.open_workbook(loc)
sheet = a.sheet_by_index(0)
sheet.cell_value(0, 0)
for i in range(1, 418):
    l.append(tuple(sheet.row_values(i)))

addRows = "insert into emplist (ID, SITE, OU, firstName) values (%s,%s,%s,%s)"
mycursor.executemany(addRows, l)
mydb.commit()
mydb.close()
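A common cause of "tuple index out of range" with executemany is a row tuple holding fewer values than the four %s placeholders, e.g. a blank or short trailing row in the sheet. A minimal sketch of the loop guarded against that, assuming the first four cells of each row map to ID, SITE, OU and firstName:
l = list()
for i in range(1, sheet.nrows):  # sheet.nrows avoids the hard-coded 418
    row = sheet.row_values(i)
    if len(row) >= 4:  # skip rows too short for the placeholders
        l.append(tuple(row[:4]))  # keep exactly as many values as placeholders
mycursor.executemany(addRows, l)
mydb.commit()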
I am trying to create tables out of JSON files containing the field names and types of each table of a database downloaded from BigQuery. The SQL request seemed fine to me, but no table was created, according to the psql command-line interpreter when typing \d.
So, to begin, I've just tried with a simpler SQL request that doesn't work either.
Here is the code:
import pandas as pd
import psycopg2

# information used to create a database connection
sqluser = 'postgres'
dbname = 'testdb'
pwd = 'postgres'

# Connect to the postgres database
con = psycopg2.connect(dbname=dbname, user=sqluser, password=pwd)
curs = con.cursor()
q = """set search_path to public, public;
CREATE TABLE tab1(
    i INTEGER
);
"""
curs.execute(q)
q = """
SELECT table_name
FROM information_schema.tables
WHERE table_schema='public'
AND table_type='BASE TABLE';
"""
df = pd.read_sql_query(q, con)
print(df.head())
print("End of test")
The code written above displays this new table tab1, but the new table doesn't actually appear when typing \d within the psql command-line interpreter. If I type in the psql interpreter:
SELECT table_name
FROM information_schema.tables
WHERE table_type='BASE TABLE';
it doesn't get listed either; it seems the table is not actually created. Thanks in advance for your help.
A commit() call was missing; it must be issued after the table-creation SQL request.
This code works:
import pandas as pd
import psycopg2

# information used to create a database connection
sqluser = 'postgres'
dbname = 'testdb'
pwd = 'postgres'

# Connect to the postgres database
con = psycopg2.connect(dbname=dbname, user=sqluser, password=pwd)
curs = con.cursor()
q = """set search_path to public, public;
CREATE TABLE tab1(
    i INTEGER
);
"""
curs.execute(q)
con.commit()  # the missing call: persist the CREATE TABLE before querying
q = """
SELECT table_name
FROM information_schema.tables
WHERE table_schema='public'
AND table_type='BASE TABLE';
"""
df = pd.read_sql_query(q, con)
print(df.head())
print("End of test")
I have a column called REQUIREDCOLUMNS in a SQL database which contains the names of the columns I need to select in my Python script below.
Excerpt of Current Code:
from pandas import json_normalize  # import needed for json_normalize below

db = mongo_client.get_database(asqldb_row.SCHEMA_NAME)
coll = db.get_collection(asqldb_row.TABLE_NAME)
table = list(coll.find())
root = json_normalize(table)
The REQUIREDCOLUMNS column in SQL contains the values reportId, siteId, price, location.
So instead of explicitly typing:
print(root[["reportId","siteId","price","location"]])
Is there a way to do print(root[REQUIREDCOLUMNS])?
Note: I'm already connected to the SQL database in my Python script.
You will have to use cursors whether you are using mysql.connector or pymysql; the syntax is almost identical for both. Below I will show mysql.connector:
import mysql.connector

db = mysql.connector.connect(
    host="localhost",
    user="root",
    passwd=" ",
    database=" "
)
cursor = db.cursor()

sql = "select REQUIREDCOLUMNS from table_name"
cursor.execute(sql)
# fetchall() returns 1-tuples, e.g. [("reportId",), ("siteId",), ...],
# so unpack the first element of each row before joining
required_cols = [row[0] for row in cursor.fetchall()]
cols_as_string = ','.join(required_cols)
new_sql = 'select ' + cols_as_string + ' from table_name'
cursor.execute(new_sql)
result = cursor.fetchall()
This should probably work; I intentionally split the logic across several lines for clarity.
The syntax could be slightly different for pymysql.
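For reference, a minimal pymysql sketch of the same idea; the connection values are placeholders, and it assumes REQUIREDCOLUMNS stores one column name per row:
import pymysql

db = pymysql.connect(host="localhost", user="root",
                     password="", database="mydb")  # placeholder credentials
cursor = db.cursor()
cursor.execute("SELECT REQUIREDCOLUMNS FROM table_name")
required_cols = [row[0] for row in cursor.fetchall()]  # unpack the 1-tuples
cursor.execute("SELECT " + ",".join(required_cols) + " FROM table_name")
result = cursor.fetchall()
With the names in a Python list, the pandas selection from the question also works directly: print(root[required_cols]).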
I am using SQLAlchemy to connect to my Oracle 11g database residing on a Linux machine. I am writing a script in Python to connect to the database and retrieve values from a particular table.
The code I wrote is:
import ConfigParser  # Python 2 module, matching the print statements below
from sqlalchemy import create_engine

def db_connection():
    config = ConfigParser.RawConfigParser()
    config.read('CMDC_Analyser.cfg')
    USER = config.get('DB_Connector', 'db.user_name')
    PASSWORD = config.get('DB_Connector', 'db.password')
    SID = config.get('DB_Connector', 'db.sid')
    IP = config.get('DB_Connector', 'db.ip')
    PORT = config.get('DB_Connector', 'db.port')
    # note the @ between the credentials and the host in the URL
    engine = create_engine('oracle://{user}:{pwd}@{ip}:{port}/{sid}'.format(
        user=USER, pwd=PASSWORD, ip=IP, port=PORT, sid=SID), echo=False)
    global connection
    connection = engine.connect()
    p = connection.execute("select * from ssr.bouquet")
    for columns in p:
        print columns
    connection.close()
All the values from the table are printed out here. I wanted to select values from one particular column only, so I used the following code:
for columns in p:
    print columns['BOUQUET_ID']
But here I am getting the following error:
sqlalchemy.exc.NoSuchColumnError: "Could not locate column in row for column 'BOUQUET_ID'"
How to fix this?
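A likely cause, assuming BOUQUET_ID was created as an unquoted (case-insensitive) identifier: SQLAlchemy's Oracle dialect exposes such names in lowercase, so the row key is 'bouquet_id' rather than 'BOUQUET_ID'. A minimal sketch under that assumption:
for columns in p:
    print columns['bouquet_id']  # lowercase key matches SQLAlchemy's normalized name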
I have a Postgres database on AWS Redshift. Currently I use Python's psycopg2 to interact with the database. I find that I can run:
cursor.execute("INSERT INTO datatype VALUES (%s, %s)", ("humidity", "some description"))
connect.commit()
but when I do:
for row in cursor.execute("SELECT * FROM datatype"):
    print(row)
whatever I do, it always returns a NoneType. Can anyone give me advice on the correct way to interact with Redshift Postgres?
Thank you
As requested, here's the whole code:
##encoding=utf8
from __future__ import print_function
import psycopg2

def connect():
    conn = psycopg2.connect(host="wbh1.cqdmrqjbi4nz.us-east-1.redshift.amazonaws.com",
                            port=5439,
                            dbname="dbname",
                            user="user",
                            password="password")
    c = conn.cursor()
    return conn, c

conn, c = connect()
c.execute("INSERT INTO netatmo VALUES (%s, %s)", (1, 10.5))
conn.commit()  # this works, and I can see the data in other db client software

for row in c.execute("SELECT * FROM netatmo").fetchall():  # this is not working
    print(row)  # AttributeError: 'NoneType' object has no attribute 'fetchall'
You missed fetchall().
When updating you don't need it, but when selecting you have to fetch the results:
http://initd.org/psycopg/docs/cursor.html
Your code should look like this:
cursor.execute("SELECT * FROM datatype;")
for row in cursor.fetchall():
    print(row)
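A psycopg2 cursor is also iterable itself, so the loop from the question works once execute() and the iteration are separated; a minimal sketch:
cursor.execute("SELECT * FROM datatype;")
for row in cursor:  # iterating the cursor yields rows without an explicit fetchall()
    print(row)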