Redshift - Passing columns as a list - python

I have a set of columns in a Python list. I am trying to see if I can pass this list as part of the SELECT statement in Redshift.
list_name = ['col_a', 'col_b']
Trying to pass this list into the below query:
cur.execute("""select {} from table""".format(list_name))
I get the below message:
ProgrammingError: syntax error at or near "'col_a'"
The above SQL should be equivalent to
select col_a, col_b from table

You can convert a list into a string by using join(), specifying the text to put between entries. For example this:
','.join(['col_a', 'col_b'])
would return:
'col_a,col_b'
Therefore, you can use it when creating the SQL query:
cur.execute("select {} from table".format(','.join(list_name)))
Or using an f-string:
cur.execute(f"select {','.join(list_name)} from table")
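Note that interpolating column names directly into SQL is only safe when the list is trusted. A minimal sketch of validating each name against an identifier pattern before interpolation (the helper name `safe_column_list` is illustrative, not part of any library):

```python
import re

def safe_column_list(columns):
    # Hypothetical helper: reject anything that is not a plain SQL
    # identifier before it is interpolated into the query string.
    for col in columns:
        if not re.fullmatch(r"[A-Za-z_][A-Za-z0-9_]*", col):
            raise ValueError(f"unsafe column name: {col!r}")
    return ",".join(columns)

query = "select {} from table".format(safe_column_list(["col_a", "col_b"]))
print(query)  # select col_a,col_b from table
```

This keeps the convenience of `join()` while refusing names containing quotes, spaces, or semicolons.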

Related

How to execute a query with SQL Alchemy and store the result into a variable type int [duplicate]

I have:
res = db.engine.execute('select count(id) from sometable')
The returned object is sqlalchemy.engine.result.ResultProxy.
How do I get count value from res?
res is not accessible by index, but I have figured this out as:
count = None
for i in res:
    count = i[0]
    break
There must be an easier way, right? What is it? I haven't discovered it yet.
Note: The db is a postgres db.
While the other answers work, SQLAlchemy provides a shortcut for scalar queries as ResultProxy.scalar():
count = db.engine.execute('select count(id) from sometable').scalar()
scalar() fetches the first column of the first row and closes the result set, or returns None if no row is present. There's also Query.scalar(), if using the Query API.
What you are asking for is called unpacking. ResultProxy is an iterable, so we can do:
# there will be a single record
record, = db.engine.execute('select count(id) from sometable')
# this record consists of a single value
count, = record
The ResultProxy in SQLAlchemy (documented at http://docs.sqlalchemy.org/en/latest/core/connections.html?highlight=execute#sqlalchemy.engine.ResultProxy) is an iterable of the rows returned from the database. For a count() query, simply fetch the first row and take its first (and only) column:
result = db.engine.execute('select count(id) from sometable')
count = result.fetchone()[0]
If you happened to be using the ORM of SQLAlchemy, I would suggest using the Query.count() method on the appropriate model as shown here: http://docs.sqlalchemy.org/en/latest/orm/query.html?highlight=count#sqlalchemy.orm.query.Query.count
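For comparison, the same scalar pattern in the plain DB-API (shown here with the stdlib sqlite3 module, purely as a self-contained illustration) is fetchone()[0]:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table sometable (id integer)")
conn.executemany("insert into sometable values (?)", [(1,), (2,), (3,)])

# DB-API equivalent of .scalar(): first column of the first row
count = conn.execute("select count(id) from sometable").fetchone()[0]
print(count)  # 3
```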

Best way to inject a variable column retrieval list into an SQL query (via psycopg2 execution)

I have a query as such:
SELECT_DATA = """select *
from schema.table tb
order by tb.created_time
"""
However, instead of selecting for all the columns in this table, I want to retrieve by a specified column list that I supply via psycopg2 injection in Python. The supplied column list string would look like this:
'col1, col2, col3'
Simple enough, except I also need to append the table alias "tb" to the beginning of each column name, so it needs to look like:
'tb.col1, tb.col2, tb.col3'
The resulting query is therefore:
SELECT_DATA = """select tb.col1, tb.col2, tb.col3
from schema.table tb
order by tb.created_time
"""
The most straightforward way I'm thinking in my head would be to parse the given string into a comma-separated list, append "tb." to the beginning of each column name, then parse the list back to a string for injection. But that seems pretty messy and hard to follow, so I was wondering if there might a better way to handle this?
Consider a list comprehension of sql.Identifier objects after splitting the comma-separated string (note that each name is stripped of surrounding whitespace before quoting):
from psycopg2 import sql

commas_sep_str = "col1, col2, col3"
field_identifiers = [sql.Identifier(s.strip()) for s in commas_sep_str.split(',')]
query = (sql.SQL("select {fields} from {schema}.{table}")
         .format(
             fields=sql.SQL(',').join(field_identifiers),
             schema=sql.Identifier('my_schema'),
             table=sql.Identifier('my_table'),
         ))
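The table-alias prefix the question asks about can be handled with plain string processing before any quoting; a minimal sketch (the alias tb is taken from the question, the column string is the example input):

```python
col_csv = "col1, col2, col3"

# Strip whitespace around each name, then prepend the table alias.
prefixed = ", ".join("tb." + c.strip() for c in col_csv.split(","))
print(prefixed)  # tb.col1, tb.col2, tb.col3
```

If the names are not trusted, combine this with identifier quoting as in the sql.Identifier approach.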

How to solve "Incorrect number of bindings supplied. The current statement uses 1, and there are 2 supplied" on Delete and executemany? [duplicate]

Say I have a list of following values:
listA = [1,2,3,4,5,6,7,8,9,10]
I want to put each value of this list in a column named formatteddate in my SQLite database using executemany command rather than loop through the entire list and inserting each value separately.
I know how to do it if I had multiple columns of data to insert. For instance, if I had to insert listA,listB,listC then I could create a tuple like (listA[i],listB[i],listC[i]). Is it possible to insert one list of values without a loop. Also assume the insert values are integers.
UPDATE:
Based on the answer provided I tried the following code:
def excutemanySQLCodewithTask(sqlcommand, task, databasefilename):
    # create a database connection
    conn = create_connection(databasefilename)
    with conn:
        cur = conn.cursor()
        cur.executemany(sqlcommand, [(i,) for i in task])
        return cur.lastrowid
tempStorage = [19750328, 19750330, 19750401, 19750402, 19750404, 19750406, 19751024, 19751025, 19751028, 19751030]
excutemanySQLCodewithTask("""UPDATE myTable SET formatteddate = (?) ;""",tempStorage,databasefilename)
It still takes too long (roughly 10 hours). I have 150,000 items in tempStorage. I tried INSERT INTO and that was slow as well. It seems like it isn't possible to make a list of tuples of integers.
As you say, you need a list of tuples. So you can do:
cursor.executemany("INSERT INTO my_table VALUES (?)", [(a,) for a in listA])
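A self-contained run of this pattern against an in-memory SQLite database (table and column names are made up for the demo):

```python
import sqlite3

listA = [1, 2, 3, 4, 5]
conn = sqlite3.connect(":memory:")
conn.execute("create table my_table (formatteddate integer)")

# executemany expects one tuple per row, so wrap each value in a 1-tuple
conn.executemany("insert into my_table values (?)", [(a,) for a in listA])

rows = conn.execute(
    "select formatteddate from my_table order by formatteddate"
).fetchall()
print(rows)  # [(1,), (2,), (3,), (4,), (5,)]
```

Note, too, that the UPDATE in the question has no WHERE clause, so each of the 150,000 parameter sets rewrites every row in the table, which likely explains the slowness.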

python mysqldb query with where

I use MySQLdb to query some data from a database. When using LIKE, I am confused about the SQL statement.
Since I use LIKE, I constructed the SQL below, which returns the correct result:
cur.execute("SELECT a FROM table WHERE b like %s limit 0,10", ("%"+"ccc"+"%",))
Now I want to make column b a variable, as below, but it returns nothing:
cur.execute("SELECT a FROM table WHERE %s like %s limit 0,10", ("b", "%"+"ccc"+"%"))
I searched many websites but couldn't find an answer. I am a bit lost.
In the DB-API, parameters are for values only, not for columns or other parts of the query. You'll need to insert the column name using normal string substitution.
column = 'b'
query = "SELECT a FROM table WHERE {} like %s limit 0,10".format(column)
cur.execute(query, ("%"+"ccc"+"%",))
You could make this a bit nicer by using format for the parameter too:
cur.execute(query, ("%{}%".format("ccc"),))
The reason that the second query does not work is that the query that results from the substitution in the parameterised query looks like this:
select a from table where 'b' like '%ccc%' limit 0,10
'b' does not refer to a table, but to the static string 'b'. If you instead passed the string abcccba into the query you'd get a query that selects all rows:
cur.execute("SELECT a FROM table WHERE %s like %s limit 0,10", ("abcccba", "%"+"ccc"+"%"))
generates query:
SELECT a FROM table WHERE 'abcccba' like '%ccc%' limit 0,10
From this you should now be able to see why the second query returns an empty result set: the string b is not like %ccc%, so no rows will be returned.
Therefore you cannot set table or column names using parameterised queries; you must use normal Python string substitution:
cur.execute("SELECT a FROM table WHERE {} like %s limit 0,10".format('b'), ("%"+"ccc"+"%",))
which will generate and execute the query:
SELECT a FROM table WHERE b like '%ccc%' limit 0,10
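The effect is easy to reproduce with the stdlib sqlite3 module (which uses ? placeholders instead of %s; the table and data here are made up for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table t (a text, b text)")
conn.execute("insert into t values ('hit', 'xcccx')")

# Column name passed as a parameter: compares the literal string 'b'
rows_param = conn.execute(
    "select a from t where ? like ?", ("b", "%ccc%")
).fetchall()
print(rows_param)  # [] -- 'b' is not like '%ccc%'

# Column name interpolated into the SQL text: compares the column's value
rows_literal = conn.execute(
    "select a from t where {} like ?".format("b"), ("%ccc%",)
).fetchall()
print(rows_literal)  # [('hit',)]
```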
You probably need to rewrite your variable substitution from
cur.execute("SELECT a FROM table WHERE b like %s limit 0,10", ("%"+"ccc"+"%"))
to
cur.execute("SELECT a FROM table WHERE b like %s limit 0,10", ("%"+"ccc"+"%",))
Note the trailing comma, which makes this a one-element tuple rather than just a parenthesised string; execute expects a sequence of parameters. In this example the string concatenation isn't even necessary, so this is equivalent:
cur.execute("SELECT a FROM table WHERE b like %s limit 0,10", ("%ccc%",))

cannot insert None value in postgres using psycopg2

I have a PostgreSQL database with more than 100 columns and rows. Some cells in the table are empty; I am scripting in Python, so None is placed in the empty cells, but I get the following error when I try to insert into the table:
"psycopg2.ProgrammingError: column "none" does not exist"
I am using psycopg2 as the Python-Postgres interface. Any suggestions?
Thanks in advance.
Here is my code:-
list1 = [None if str(x) == 'nan' else x for x in list1]
cursor.execute("""INSERT INTO table VALUES %s""" % list1)
Do not use % string interpolation, use SQL parameters instead. The database adapter can handle None just fine, it just needs translating to NULL, but only when you use SQL parameters will that happen:
list1 = [(None,) if str(x) == 'nan' else (x,) for x in list1]
cursor.executemany("""INSERT INTO table VALUES (%s)""", list1)
I am assuming that you are trying to insert multiple rows here. For that, you should use the cursor.executemany() method and pass in a list of rows to insert; each row is a tuple with one column here.
If list1 is just one value, then use:
param = list1[0]
if str(param) == 'nan':
    param = None
cursor.execute("""INSERT INTO table VALUES (%s)""", (param,))
which is a little more explicit and readable.
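The str(x) == 'nan' check works for float NaNs; a slightly more direct sketch of the same cleanup using math.isnan (the data values here are made up):

```python
import math

raw = [1.5, float("nan"), "text", None]

def clean(value):
    # Replace float NaN with None so the driver can send SQL NULL;
    # leave every other value untouched.
    if isinstance(value, float) and math.isnan(value):
        return None
    return value

print([clean(v) for v in raw])  # [1.5, None, 'text', None]
```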
