I am looking for a way to get a row from a database and every value in it. For example, by searching by Name or ItemNumber, the code should retrieve the value for Price or any other column in that row. I have tried the code below, but it returns every row and its values instead of just the one row I am looking for.
import sqlite3

sqliteConnection = sqlite3.connect('ItemLookup.db')  # assumed filename; the original snippet omitted the connect call
cursor = sqliteConnection.cursor()
sqlite_select_query = """SELECT 374932 from ItemLookup"""
cursor.execute(sqlite_select_query)
rows = cursor.fetchall()
for row in rows:
    print(row)
I have also used cursor.fetchone(), and this returns the 374932 provided in the SELECT 374932 from ItemLookup query multiple times. Using fetchmany() and indexing with row[0] or row[1] raises a 'tuple index out of range' error. I have searched the internet and cannot find an explanation for this. Any help would be appreciated!
SQL queries generally follow the syntax of
SELECT field FROM table WHERE condition
for example, this is a query for Joe's information: SELECT * FROM ItemLookup WHERE Name = 'Joe'
You can use this to query the information you want based on specific conditions.
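For instance, here is a minimal sketch of fetching a single row by ItemNumber with a parameterized query (the database filename is an assumption; the table and column names are taken from your example):

import sqlite3

connection = sqlite3.connect('ItemLookup.db')  # assumed filename
cursor = connection.cursor()

# The ? placeholder binds the value safely instead of pasting it into the SQL
cursor.execute("SELECT * FROM ItemLookup WHERE ItemNumber = ?", (374932,))
row = cursor.fetchone()  # one matching row as a tuple, or None if no match

if row is not None:
    print(row)  # every column value of the matching row, including Price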
Related
I am trying to get row and column counts for tables that are present in different schemas in Redshift, using Python's redshift_connector. I tried the code below and am receiving a ProgrammingError that says the following. Not sure what the issue is. Basically, I am trying to substitute the schemas and tables present in the list tables and get a row count for each table.
{'S': 'ERROR', 'C': 123489, 'M': 'Syntax error at or near "{"', 'P': '22...', 'L': '714', 'R': 'yyerror'}
Thanks in advance for your time and efforts!
Python Code:
import redshift_connector

tables = ['schema1.tablename', 'schema2.table2']
conn = redshift_connector.connect(
    host='my_host',
    port="my_port",
    database='my_db',
    user="user",
    password='password')
cur = conn.cursor()
# No f prefix on the string below, so the braces reach Redshift literally
cur.execute('select count(*) from {",".join(tables)}')
results = cur.fetchall()
# Printing row counts along with table names
print("The table {} contained".format(tables[0]), *results[0], "rows\n")
cur.close()
conn.close()
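A likely fix, sketched under these assumptions: you want one count per table, the identifiers come from a trusted list (table names cannot be bound as query parameters, so they are formatted into the SQL directly), and port 5439 is the usual Redshift default:

import redshift_connector

tables = ['schema1.tablename', 'schema2.table2']
conn = redshift_connector.connect(
    host='my_host',
    port=5439,  # assumed default Redshift port
    database='my_db',
    user='user',
    password='password')
cur = conn.cursor()
for table in tables:
    # Note the f prefix: the braces are now evaluated by Python, not sent to Redshift
    cur.execute(f'select count(*) from {table}')
    row_count = cur.fetchone()[0]
    print(f"The table {table} contained {row_count} rows")
cur.close()
conn.close()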
Having a strange issue with pymysql and Python. I have a table where date_rec is one of the 3 columns composing the primary key. If I do this select, it takes forever to get the result:
query = f"SELECT * FROM string WHERE date_rec BETWEEN {date_before} AND {date_after} ORDER BY date_rec"
with connection.cursor() as cursor:
cursor.execute(query)
result = cursor.fetchone()
for row in result:
print(row)
However, if I add a limit of 5000, it works super fast, even though there are only 1290 records to be found. The exact number doesn't matter... 50,000 fixes the problem in exactly the same way (just as fast). As long as it's more than 1290, I get all the records.
query = f"SELECT * FROM string WHERE date_rec BETWEEN {date_before} AND {date_after} ORDER BY date_rec LIMIT 5000"
with connection.cursor() as cursor:
cursor.execute(query)
result = cursor.fetchone()
for row in result:
print(row)
Can someone explain what's happening here and how to make the first case work as fast as the second? Thanks.
EDIT:
3 columns compose primary key:
date_rec
customer_number
order_number
So I ran EXPLAIN in SQL Workbench and compared the plans for the limit-less query and the query with the 5000-row LIMIT (EXPLAIN screenshots omitted).
So MySQL wasn't using the index for whatever reason. Putting "USE INDEX (PRIMARY)" inside the query fixed the problem.
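For reference, a sketch of the query with that index hint applied (same placeholder dates as above):

query = (
    f"SELECT * FROM string USE INDEX (PRIMARY) "
    f"WHERE date_rec BETWEEN {date_before} AND {date_after} "
    f"ORDER BY date_rec"
)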
I don't have an explanation as to why adding a LIMIT clause speeds up the query, but if you want to tune the query, then consider adding the following index:
CREATE INDEX idx ON string (date_rec);
This index will let MySQL quickly filter out records not inside the date range, and it also provides the ordering needed by the ORDER BY clause.
I am using Python to loop through a number of SQL Server databases, creating tables using SELECT ... INTO, but when I run the script nothing happens, i.e. no error messages and the tables have not been created. Below is an example extract of what I am doing. Can anyone advise?
import pandas as pd

df = pd.DataFrame({'Database': ['Db1', 'Db2']})  # example dataframe of database names
for i, x in df.iterrows():
    SQL = """
    Drop table if exists {x}..table
    Select
        Name
    Into
        {y}..table
    From
        MainDatabase..Details
    """.format(x=x['Database'], y=x['Database'])
    cursor.execute(SQL)
    conn.commit()
Looks like your DB driver doesn't support executing multiple statements in one call. Try splitting your query into two single statements, one with the drop and the other with the select:
for i, x in df.iterrows():
    drop_sql = """
    Drop table if exists {x}..table
    """.format(x=x['Database'])
    select_sql = """
    Select
        Name
    Into
        {y}..table
    From
        MainDatabase..Details
    """.format(y=x['Database'])
    cursor.execute(drop_sql)
    cursor.execute(select_sql)
    cursor.commit()
And a second tip: your x=x['Database'] and y=x['Database'] are the same value. Is this intended?
I am trying to upload data from a CSV file (it's on my local desktop) to my remote SQL database. This is my code:
dsn = "dsnname";pwd="password"
import pyodbc
csv_data =open(r'C:\Users\folder\Desktop\filename.csv')
def func(dsn):
cnnctn=pyodbc.connect(dsn)
cnnctn.autocommit =True
cur=cnnctn.cursor()
for rows in csv_data:
cur.execute("insert into database.tablename (colname) value(?)", rows)
cur.commit()
cnnctn.commit()
cur.close()
cnnctn.close()
return()
c=func(dsn)
The problem is that all of my data gets uploaded into the one column that I specified. If I don't specify a column name, it won't run. I have 9 columns in my database table and I want to upload this data into separate columns.
When you insert with SQL, you need to make sure you are telling it which columns you want to insert into. For example, when you execute:
INSERT INTO table (column_name) VALUES (val);
You are letting SQL know that you want to map column_name to val for that specific row. So you need to make sure that the number of columns in the first set of parentheses matches the number of values in the second set of parentheses.
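Applied to the CSV upload above, a minimal sketch (the nine column names are placeholders, and it assumes the file has no header row; call next(reader) first to skip one if it does):

import csv
import pyodbc

cnnctn = pyodbc.connect("dsnname")  # same DSN as above
cur = cnnctn.cursor()

with open(r'C:\Users\folder\Desktop\filename.csv', newline='') as f:
    reader = csv.reader(f)   # splits each CSV line into a list of fields
    for row in reader:       # row holds the 9 values for one record
        cur.execute(
            "insert into database.tablename "
            "(col1, col2, col3, col4, col5, col6, col7, col8, col9) "
            "values (?, ?, ?, ?, ?, ?, ?, ?, ?)",
            row)

cnnctn.commit()
cur.close()
cnnctn.close()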
I have a table in SQLite, accessed with Python. The table always has 3 columns and could have many rows. Each of the cells is a string. Here is an example table:
serial_num    date_measured    status
1234A         1-1-2015         passed
4321B         6-21-2015        failed
1423C         12-25-2015       passed
......
My program prompts me for a serial number. This is saved as a variable called serialNum. How can I delete (or overwrite) an entire row if serialNum equals any of the strings in the serial_num column in my table?
I've seen many examples of how to delete (or overwrite) a row in a table if I know all the values in each cell of that row, but my trouble is that the only cell I can rely on to identify a row is the serial number. I need to do a search through the serial_num column and, if any string in that column equals the current value of my serialNum variable, delete (or overwrite) that row.
import sqlite3
conn = sqlite3.connect('example.db')
c = conn.cursor()
c.execute('''CREATE TABLE test (serial_num text, date_measured text, status text)''')
c.execute("INSERT INTO test VALUES ('1234A', '1-1-2015', 'passed')")
c.execute("INSERT INTO test VALUES ('4321B', '6-21-2015', 'failed')")
c.execute("INSERT INTO test VALUES ('1423C', '12-25-2015', 'passed')")
conn.commit()
Does anyone know a simple way to do this? I've seen others say that an ID column or a temporary table must be used, but I would hope there's an easier way to accomplish my task. Any advice would be great.
SQL supports this: simply use DELETE
"delete from test where serial_num=<some input>;"
or in this case
c.execute("delete from test where serial_num=%s;", serialNum);
There's no need to search through the list when using SQL. SQL is declarative: you tell it what to do using your query, not how to do it. Don't loop through all your rows to check which to delete: tell it what to delete and the database engine will find the best/fastest way to satisfy that goal.
Hope I interpreted your question correctly:
for row in c.execute('SELECT * FROM test WHERE serial_num = ?', (serialNum,)):
    # do whatever you want with the row
    print(row)
I was able to figure out a working solution:
sql = "DELETE FROM test WHERE serial_num = ?"
c.execute(sql, (serialNum,))
The comma after serialNum has to be there because execute() expects a sequence of parameters; (serialNum,) is a one-element tuple rather than a bare string. Thank you @Michiel Arien for the head start.
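For the record, execute() accepts any sequence of parameters, so a one-element list works the same way as the tuple:

sql = "DELETE FROM test WHERE serial_num = ?"
c.execute(sql, [serialNum])  # equivalent to (serialNum,)
conn.commit()  # persist the deletion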