import mysql.connector

def view_empdetails():  # this is my function: it works great
    conn = mysql.connector.connect(host="localhost", user="root", passwd="#####", database="#DB")
    cursor = conn.cursor()  # this is the database connection
    viw = """SELECT * FROM employees"""
    cursor.execute(viw)
    for emp_no, first_name, last_name, gender, DOB, street, city, state, zipcode, email, phone, hire_date in cursor.fetchall():  # fetch all rows from the employees table in the DB
        print('-' * 50)
        print(emp_no)
        print(first_name)
        print(last_name)
        print(gender)
        print(DOB)
        print(street)
        print(city)
        print(state)    # I need all of these outputs in a table or organized format,
        print(zipcode)  # not only a list of records
        print(email)
        print(phone)
        print(hire_date)
        print('-' * 50)
    conn.commit()
    conn.close()
    return menu2()
I need all records in one table: the code above brings data from the database line by line, without formatting. I need them in a table.
I'm not sure if you are familiar with the pandas library, but I believe it is helpful here. I have never used it with MySQL, but I have used it with psycopg2 and pyodbc, so I think the basic idea should work:
data = pd.DataFrame(cur.fetchall(), columns=colnames)
This creates a DataFrame (think of it as a Python spreadsheet or table) that uses the column names from the table you're querying.
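As a minimal sketch applied to the function above (assuming the same employees table and connection settings; the column names are taken from cursor.description, so nothing is hard-coded):

import mysql.connector
import pandas as pd

conn = mysql.connector.connect(host="localhost", user="root", passwd="#####", database="#DB")
cursor = conn.cursor()
cursor.execute("SELECT * FROM employees")

colnames = [desc[0] for desc in cursor.description]  # column names from the cursor metadata
df = pd.DataFrame(cursor.fetchall(), columns=colnames)
conn.close()

print(df.to_string(index=False))  # prints every record as one aligned table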
I have an SQLite database with a table that includes a geo column. When I add this table into QGIS as a layer, it shows a map of Chicago drawn as polygons. I think the polygon points are stored in the column named geo.
I am trying to plot the same in Python, to be able to add more things on top of this layout using Matplotlib. To begin with, I could load the table named "Zone" in Python using the following reader function (that I wrote):
import sqlite3  # Package for SQLite

### BEGIN DEFINING A READER FUNCTION ###
def Conditional_Sqdb_reader(Sqdb, Tablename, Columns, Condition):
    conn = sqlite3.connect(Sqdb)  # Connects the file to Python
    print("\nConnected to %s.\n" % (Sqdb))
    conn.execute('pragma foreign_keys = off')  # Allows making changes to the SQLite file
    print("SQLite Foreign_keys are unlocked...\n")
    c = conn.cursor()  # Assigns c as the cursor
    print("Importing columns: %s \nin table %s from %s.\n" % (Columns, Tablename, Sqdb))
    c.execute('''SELECT {columns}
                 FROM {table}
                 {condition}'''.format(table=Tablename,
                                       columns=Columns,
                                       condition=Condition))  # Selects the rows to read/fetch
    Sql_headers = [description[0] for description in c.description]
    Sql_columns = c.fetchall()  # Reads the table and saves it into memory as Sql_columns
    print("Importing completed...\n")
    conn.commit()  # Commits all the changes made
    conn.execute('pragma foreign_keys = on')  # Locks the SQLite file
    print("SQLite Foreign_keys are locked...\n")
    conn.close()  # Closes the SQLite file
    print("Disconnected from %s.\n" % (Sqdb))
    return Sql_headers, Sql_columns
### END DEFINING A READER FUNCTION ###
Sqdb = '/mypath/myfile.sqlite'
Tablename = "Zone"   # Change this to your desired table to play with
Columns = """*"""    # Change this to your desired columns to import
Condition = ''       # Add your condition, or leave blank if no condition
headings, data = Conditional_Sqdb_reader(Sqdb, Tablename, Columns, Condition)
The data of the table is stored in "data" as a list, so data[0][-1] yields the geo entry of the first row's polygon, which looks something like:
b'\x00\x01$i\x00\x00#\xd9\x94\x8b\xd6<\x1bAb\xda7\xb6]\xb1QA\xf0\xf7\x8b\x19UC\x1bA\x9c\xde\xc5\r\xc3\xb1QA|\x03\x00\x00\x00\x01\x00\x00\x00\x06\x00\x00\x00Hlw\xef-C\x1bA\x9c\xde\xc5\r\xc3\xb1QA\xf0\xf7\x8b\x19UC\x1bAv\xc0u)^\xb1QA\xbcw\xd4\x88\xf1<\x1bAb\xda7\xb6]\xb1QA\xa5\xdc}n\xd7<\x1bA\x84.\xe1r\xbe\xb1QA#\xd9\x94\x8b\xd6<\x1bA\xce\x8eT\xef\xc1\xb1QAHlw\xef-C\x1bA\x9c\xde\xc5\r\xc3\xb1QA\xfe'
I do not know how to decode this and convert it into a meaningful series of points, but that is what it is, and QGIS apparently can do it with no hassle. How can I plot all these polygons in Python while being able to add other things within the Matplotlib world later on?
After spending quite a few hours and learning a lot of things, I found the solution. Basically, using mod_spatialite in sqlite3 was the key. When I loaded this extension, it allowed me to use SpatiaLite functions such as ST_AsText, which converts the binary blob into a WKT string starting with POLYGON((...., the sort of geometry text that the GeoPandas/Shapely world understands. There are plenty of sources explaining how to plot such data. In essence, here is my code (compare it to the one in my question):
import sqlite3  # Package for SQLite

### BEGIN DEFINING A READER FUNCTION ###
def Conditional_Sqdb_reader(Sqdb, Tablename, Columns, Condition):
    conn = sqlite3.connect(Sqdb)  # Connects the file to Python
    conn.enable_load_extension(True)
    # mod_spatialite (recommended)
    conn.execute('SELECT load_extension("mod_spatialite.so")')
    conn.execute('SELECT InitSpatialMetaData(1);')
    print("\nConnected to %s.\n" % (Sqdb))
    conn.execute('pragma foreign_keys = off')  # Allows making changes to the SQLite file
    print("SQLite Foreign_keys are unlocked...\n")
    c = conn.cursor()  # Assigns c as the cursor
    print("Importing columns: %s \nin table %s from %s.\n" % (Columns, Tablename, Sqdb))
    c.execute('''SELECT {columns}
                 FROM {table}
                 {condition}'''.format(table=Tablename,
                                       columns=Columns,
                                       condition=Condition))  # Selects the rows to read/fetch
    Sql_headers = [description[0] for description in c.description]
    Sql_columns = c.fetchall()  # Reads the table and saves it into memory as Sql_columns
    print("Importing completed...\n")
    conn.commit()  # Commits all the changes made
    conn.execute('pragma foreign_keys = on')  # Locks the SQLite file
    print("SQLite Foreign_keys are locked...\n")
    conn.close()  # Closes the SQLite file
    print("Disconnected from %s.\n" % (Sqdb))
    return Sql_headers, Sql_columns
### END DEFINING A READER FUNCTION ###
Sqdb = '/Users/tanercokyasar/Desktop/Qgis/chicago2018-Supply.sqlite'
Tablename = "Zone"   # Change this to your desired table to play with
Columns = """*,
             ST_AsText(GEO) as GEO"""   # Change this to your desired columns to import
Condition = ''       # Add your condition, or leave blank if no condition
headings, data = Conditional_Sqdb_reader(Sqdb, Tablename, Columns, Condition)
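From there, a minimal plotting sketch (assuming the shapely and matplotlib packages are installed, that each geometry is a simple POLYGON rather than a MULTIPOLYGON, and that the WKT text is the last field of each row):

import matplotlib.pyplot as plt
from shapely import wkt

fig, ax = plt.subplots()
for row in data:
    poly = wkt.loads(row[-1])   # parse the 'POLYGON((...))' text into a geometry
    x, y = poly.exterior.xy     # coordinates of the exterior ring
    ax.fill(x, y, edgecolor='black', linewidth=0.3)
ax.set_aspect('equal')
plt.show()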
I am trying to calculate the mode value of each row and store it in the juiz column; however, it updates only the first record and then leaves the loop.
PS: Analisador is my table and resultado_2 is my database.
import sqlite3
import statistics

conn = sqlite3.connect("resultado_2.db")
cursor = conn.cursor()
data = cursor.execute("SELECT Bow, FastText, Glove, Wordvec, Python, juiz, id FROM Analisador")
for x in data:
    list = [x[0], x[1], x[2], x[3], x[4], x[5], x[6]]
    mode = statistics.mode(list)
    try:
        cursor.execute(f"UPDATE Analisador SET juiz={mode} where id={row[6]}")  #row[6] == id
        conn.commit()
    except:
        print("Error")
conn.close()
You have to fetch your records after the SQL is executed:
cursor.execute("SELECT Bow, FastText, Glove, Wordvec, Python, juiz, id FROM Analisador")
data = cursor.fetchall()
A SELECT like that is different from the UPDATE you are also using in your code: an UPDATE doesn't need an additional fetch step after it is executed.
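Putting it together, a minimal sketch of the corrected loop (assuming the same table; the UPDATE is parameterized, the loop variable mismatch is fixed to x[6], and the id column is left out of the mode calculation):

import sqlite3
import statistics

conn = sqlite3.connect("resultado_2.db")
cursor = conn.cursor()
cursor.execute("SELECT Bow, FastText, Glove, Wordvec, Python, juiz, id FROM Analisador")
data = cursor.fetchall()   # materialize the rows so the UPDATEs cannot disturb the loop

for x in data:
    mode = statistics.mode([x[0], x[1], x[2], x[3], x[4], x[5]])
    cursor.execute("UPDATE Analisador SET juiz = ? WHERE id = ?", (mode, x[6]))  # x[6] == id

conn.commit()
conn.close()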
Here's a rundown of what I'd like to do: I have a list of table names, and I want to run SQL against an Oracle database and pull back the table name and row count for every table in my table list. However, not every table name in my list is necessarily in the database, which causes my code to throw a database error. What I would like to do is, whenever I come to a table name that is not in the database, create a dataframe that contains the table name and, instead of count(*), some text that says 'table not found' or similar. At the end of the loop I'm concatenating all of the dataframes into one dataframe. The overall goal here is to validate that certain tables exist and that they have the expected row counts.
query_list = []
df_List = []
connstr = '%s/%s@%s' % (username, password, server)
conn = cx_Oracle.connect(connstr)

with conn:
    query_list = ["SELECT '%s' as tbl, count(*) FROM %s." % (elm, database) + elm for elm in table_list]
    df_List = [pd.read_sql(elm, conn) for elm in query_list]
    df = pd.concat(df_List)
Consider try/except handling to return either the query output or a "table not found" output:
def get_table_count(sql, conn, elm):
    try:
        return pd.read_sql(sql, conn)
    except Exception:
        return pd.DataFrame({'tbl': elm, 'note': 'table not found'}, index=[0])

with conn:
    sql = "SELECT '{t}' as tbl, count(*) as table_count FROM {d}.{t}"
    df_List = [get_table_count(sql.format(t=elm, d=database), conn, elm)
               for elm in table_list]
    df = pd.concat(df_List, ignore_index=True)
Get a list of all the table names that are in the DB, then create a loop to query each table for its row count.
Here is a SQL statement to get a list of all tables in an Oracle DB:
SQL:
SELECT DISTINCT TABLE_NAME FROM ALL_TAB_COLUMNS ORDER BY TABLE_NAME ASC;
Python (to make a list of the tables you want row counts for and which actually exist in the DB):
list(set(tables_that_exist_in_DB) - (set(tables_that_exist_in_DB) - set(list_of_tables_you_want)))
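That double set difference is just an intersection; an equivalent, more direct sketch:

tables_to_count = list(set(tables_that_exist_in_DB) & set(list_of_tables_you_want))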
I am working with a SQL database in Python. After making the connection, I want to use the output of one query in another query.
Example: query1 gives me a list of all tables in a schema. I want to use each table name from query1 in my query2.
query2 = "SELECT TOP 200 * FROM db.schema.table ORDER BY ID"
I want to run this query for each of the tables in the output of query1.
Can someone help me with the Python code for it?
Here is a working example of how to do what you are looking for. I didn't look up the schemas for the table list; I just 'faked it' by unioning a SELECT of 2 table names. There are plenty of other answers on that SQL code and I don't want to clutter this answer:
How do I get list of all tables in a database using TSQL?
It looks like the key part you may have been missing was the join step to build the second SQL statement. This should be enough of a starting point to craft exactly what you are looking for.
import pypyodbc

def main():
    table_list = get_table_list()
    for table in table_list:
        print_table(table)

def print_table(table):
    thesql = " ".join(["SELECT TOP 10 businessentityid FROM", table])
    connection = get_connection()
    cursor = connection.cursor()
    cursor.execute(thesql)
    for row in cursor:
        print(row["businessentityid"])
    cursor.close()
    connection.close()

def get_table_list():
    table_list = []
    thesql = ("""
        SELECT 'Sales.SalesPerson' AS thetable
        UNION
        SELECT 'Person.BusinessEntity' thetable
        """)
    connection = get_connection()
    cursor = connection.cursor()
    cursor.execute(thesql)
    for row in cursor:
        table_list.append(row["thetable"])
    cursor.close()
    connection.close()
    return table_list

def get_connection():
    '''setup connection depending on which db we are going to write to in which environment'''
    connection = pypyodbc.connect(
        "Driver={SQL Server};"
        "Server=YOURSERVER;"
        "Database=AdventureWorks2014;"
        "Trusted_Connection=yes"
    )
    return connection

main()
I'm quite new to MySQL, as in manipulating the database itself. I succeeded in storing new rows in a table, but my next endeavor will be a little more complex.
I'd like to fetch the column names from an existing MySQL database and save them to an array in Python. I'm using the official MySQL connector.
I'm thinking I can achieve this through information_schema.columns, but I have no idea how to build the query and store the information in an array. It will be around 100-200 columns, so performance might become an issue; I don't think it's wise to iterate my way through it one column at a time.
The base code to insert data into MySQL using the connector is:
from mysql.connector import MySQLConnection, Error
# read_db_config() is assumed to be your own helper that returns a dict of connection settings

def insert(data):
    query = "INSERT INTO templog(data) " \
            "VALUES(%s,%s,%s,%s,%s)"
    args = (data)
    try:
        db_config = read_db_config()
        conn = MySQLConnection(**db_config)  # unpack the config dict into keyword arguments
        cursor = conn.cursor()
        cursor.execute(query, args)
        #if cursor.lastrowid:
        #    print('last insert id', cursor.lastrowid)
        #else:
        #    print('last insert id not found')
        conn.commit()
        cursor.close()
        conn.close()
    except Error as error:
        print(error)
As said, the above code needs to be modified in order to get data from the SQL server. Thanks in advance!
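For reference, a minimal sketch of the information_schema approach the question describes (the schema and table names here are placeholders; one parameterized query returns all column names at once, so there is no per-column iteration):

import mysql.connector

conn = mysql.connector.connect(host="localhost", user="root", passwd="#####", database="#DB")
cursor = conn.cursor()
query = ("SELECT COLUMN_NAME FROM information_schema.columns "
         "WHERE table_schema = %s AND table_name = %s "
         "ORDER BY ordinal_position")
cursor.execute(query, ("#DB", "price_usd"))   # placeholder schema/table names
column_names = [row[0] for row in cursor.fetchall()]
conn.close()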
Thanks for the help!
I got this as working code:
from sqlalchemy import create_engine

def GetNames(web_data, counter):
    # get all column names from the database
    connection = create_engine('mysql+pymysql://user:pwd@server:3306/db').connect()
    result = connection.execute('select * from price_usd')
    a = 0
    sql_matrix = [0 for x in range(counter + 1)]
    for v in result:
        while a == 0:
            for column, value in v.items():
                a = a + 1
                if a > 1:
                    sql_matrix[a] = str('{0}'.format(column))

This will get all column names from the existing SQL database.
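A shorter sketch of the same idea (same placeholder connection string; on SQLAlchemy the result object already carries the column names, so no row iteration is needed):

from sqlalchemy import create_engine

connection = create_engine('mysql+pymysql://user:pwd@server:3306/db').connect()
result = connection.execute('select * from price_usd')
column_names = list(result.keys())   # column names straight from the result metadata
connection.close()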