Converting PYODBC output into an int - python

I'm pulling data from a SQL query that returns one number, stored in a pyodbc row. I want to then add that number to another variable in the file. Pseudocode below:
prev = 5
cursor.execute("select statement")
pulledNumber = cursor.fetchall()
value = [row[2] for row in pulledNumber]
final = prev + value
I'm getting a type error (a list and int operation). I tried to cast the list to an int a few different ways but could not get it to work.
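For what it's worth, a minimal sketch of one fix, assuming the query returns a single row with the number at index 2 (as the comprehension above suggests):
prev = 5
cursor.execute("select statement")
row = cursor.fetchone()   # one row is enough when the query returns a single number
final = prev + row[2]     # index into the row instead of adding the whole list; wrap in int() if the driver returns a Decimal
Equivalently, keep fetchall() and take the first element of the comprehension's result: final = prev + value[0].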

Related

psycopg2 Syntax errors at or near "' '"

I have a dataframe named Data2 and I wish to insert its values into a PostgreSQL table. I cannot use to_sql because some of the values in Data2 are numpy arrays.
This is Data2's schema:
cursor.execute(
    """
    DROP TABLE IF EXISTS Data2;
    CREATE TABLE Data2 (
        time timestamp without time zone,
        u bytea,
        v bytea,
        w bytea,
        spd bytea,
        dir bytea,
        temp bytea
    );
    """
)
My code segment:
for col in Data2_mcw.columns:
    for row in Data2_mcw.index:
        value = Data2_mcw[col].loc[row]
        if type(value).__module__ == np.__name__:
            value = pickle.dumps(value)
        cursor.execute(
            """
            INSERT INTO Data2_mcw(%s)
            VALUES (%s)
            """,
            (col.replace('\"', ''), value)
        )
Error generated:
psycopg2.errors.SyntaxError: syntax error at or near "'time'"
LINE 2: INSERT INTO Data2_mcw('time')
How do I rectify this error?
Any help would be much appreciated!
There are two problems I see with this code.
The first problem is that you cannot use bind parameters for column names, only for values, so the first of the two %s placeholders in your SQL string is invalid. You will have to use string interpolation to set column names, something like the following (assuming you are using Python 3.6+ for f-strings):
cursor.execute(
    f"""
    INSERT INTO Data2_mcw({col})
    VALUES (%s)
    """,
    (value,))
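As an aside, interpolating column names straight into the SQL string is injection-prone if they can ever come from untrusted input. psycopg2 (2.7+) provides the sql module for composing identifiers safely; a sketch of the same insert using it:
from psycopg2 import sql

cursor.execute(
    sql.SQL("INSERT INTO Data2_mcw({}) VALUES (%s)").format(sql.Identifier(col)),
    (value,))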
The second problem is that a SQL INSERT statement inserts an entire row. It does not insert a single value into an already-existing row, as you seem to be expecting it to.
Suppose your dataframe Data2_mcw looks like this:
   a  b  c
0  1  2  7
1  3  4  9
Clearly, this dataframe has six values in it. If you were to run your code on this dataframe, then it would insert six rows into your database table, one for each value, and the data in your table would look like the following:
a     b     c
1     NULL  NULL
3     NULL  NULL
NULL  2     NULL
NULL  4     NULL
NULL  NULL  7
NULL  NULL  9
I'm guessing you don't want this: you'd rather your database table contained the following two rows instead:
a  b  c
1  2  7
3  4  9
Instead of inserting one value at a time, you will have to insert one entire row at a time. This means you have to swap your two loops around, build the SQL string once beforehand, and collect all the values for a row before passing them to the database. Something like the following should hopefully work (please note that I don't have a Postgres database to test this against):
column_names = ",".join(Data2_mcw.columns)
placeholders = ",".join(["%s"] * len(Data2_mcw.columns))
sql = f"INSERT INTO Data2_mcw({column_names}) VALUES ({placeholders})"
for row in Data2_mcw.index:
    values = []
    for col in Data2_mcw.columns:
        value = Data2_mcw[col].loc[row]
        if type(value).__module__ == np.__name__:
            value = pickle.dumps(value)  # serialize numpy arrays for the bytea columns
        values.append(value)
    cursor.execute(sql, values)  # one INSERT per dataframe row
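If the dataframe is large, the per-row execute() calls can be batched with cursor.executemany(), which takes the same statement plus a sequence of parameter lists (psycopg2.extras.execute_values is usually faster still). A sketch of that variant:
all_rows = []
for row in Data2_mcw.index:
    values = []
    for col in Data2_mcw.columns:
        value = Data2_mcw[col].loc[row]
        if type(value).__module__ == np.__name__:
            value = pickle.dumps(value)
        values.append(value)
    all_rows.append(values)
cursor.executemany(sql, all_rows)  # one call instead of one execute() per row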

Setting variable to 0 using if statement

Using sqlite3. Let's say I have this empty table with no rows. Not one row of data has been inserted; it's completely empty.
c = conn.cursor()
test
-----------------
amount | date
query = "SELECT SUM (column1) FROM test WHERE date BETWEEN '"+blah+"' AND '"+blah+"'"
c.execute(query)
data = c.fetchall()
if data == None:
    amountsum = 0
else:
    amountsum = data
print(amountsum)
output = (100 - amountsum)
print(output)
When I print amountsum, all I get is (None,), which is not 0. For output, it gives me "TypeError: Unsupported operand type(s) for +: 'int' and 'NoneType'".
How do I assign 0 to amountsum if the query result is a 'NoneType'?
Because you use fetchall(), data is a list with one element, the tuple (None,), when no rows fall between the specified dates. So basically data is [(None,)].
That is why your if check does not work.
Use fetchone() to get just the one tuple:
query = "SELECT SUM(column1) FROM test WHERE date BETWEEN '"+blah+"' AND '"+blah+"'"
c.execute(query)
data = c.fetchone()
if data[0] is None:
amountsum = 0
else:
amountsum = data[0]
It's better, though, to change your query and use COALESCE:
query = "SELECT COALESCE(SUM(column1), 0) FROM test WHERE date BETWEEN '"+blah+"' AND '"+blah+"'"
c.execute(query)
data = c.fetchone()
amountsum = data[0]
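As an aside, building the WHERE clause by string concatenation invites quoting bugs and SQL injection. sqlite3 supports ? placeholders with bound parameters; the same query written that way, as a sketch:
query = "SELECT COALESCE(SUM(column1), 0) FROM test WHERE date BETWEEN ? AND ?"
c.execute(query, (blah, blah))
amountsum = c.fetchone()[0]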
Your code is not behaving as desired because sqlite3's cursor.fetchall() returns a list of tuples representing the records the query produced; when a query produces no records it returns an empty list, and for your aggregate query it returns [(None,)].
Either way, the statement if data == None: will never pass in your script, because no list, not even an empty one, equals None.
You could apply the "amountsum is the query result, or 0 if no result is present" logic right after the assignment of data using the following line. Note the extra check on the value itself: an aggregate like SUM still returns one row, whose value is None for an empty table.
amountsum = 0 if not data or data[0][0] is None else data[0][0]
Note how the above accesses the first value within the first tuple in the list data. If we simply wrote amountsum = 0 if not data else data, we would assign the entire list whenever results were returned, instead of the single value you seem to be trying to work with.
fetchone()
Since you are only seeking to access a single query result (at least as far as I can tell from the code you've shared), it would probably be a good idea to use cursor.fetchone() instead of fetchall(). This returns a single tuple for the first query result (or None if there are no results) instead of the list of tuples you get from fetchall().
data = c.fetchone()
amountsum = 0 if data is None or data[0] is None else data[0]

Array outputting a result set with the same number of rows in a SQL database

I have a query that reaches into a MySQL database and grabs rows whose column "cab" matches a variable passed on from a previous HTML page. That variable is cabwrite.
The SQL side works just fine: the query matches the column 'cab' and returns all of the rows with that cab id.
Once that happens, I remove the data I don't need (the line identifier and cab).
The output from that is result_set.
However, when I print the data to verify it's what I expect, I'm met with the same data for every row I have.
Example data:
The query finds 4 matching rows.
This is currently what I'm getting:
data =
["(g11,none,tech11)","(g2,none,tech13)","(g3,none,tech15)","(g4,none,tech31)"]
["(g11,none,tech11)","(g2,none,tech13)","(g3,none,tech15)","(g4,none,tech31)"]
["(g11,none,tech11)","(g2,none,tech13)","(g3,none,tech15)","(g4,none,tech31)"]
["(g11,none,tech11)","(g2,none,tech13)","(g3,none,tech15)","(g4,none,tech31)"]
Code:
cursor = connection1.cursor(MySQLdb.cursors.DictCursor)
cursor.execute("SELECT * FROM devices WHERE cab=%s", [cabwrite])
result_set = cursor.fetchall()
data = []
for row in result_set:
    localint = "('%s','%s','%s')" % (row["localint"], row["devicename"], row["hostname"])
    l = str(localint)
    data.append(l)
print(data)
This is what I want it to look like:
data = [(g11,none,tech11),(g2,none,tech13),(g3,none,tech15),(g4,none,tech31)]
["('Gi3/0/13','None','TECH2_HELP')", "('Gi3/0/7','None','TECH2_1507')", "('Gi1/0/11','None','TECH2_1189')", "('Gi3/0/35','None','TECH2_4081')", "('Gi3/0/41','None','TECH2_5625')", "('Gi3/0/25','None','TECH2_4598')", "('Gi3/0/43','None','TECH2_1966')", "('Gi3/0/23','None','TECH2_2573')", "('Gi3/0/19','None','TECH2_1800')", "('Gi3/0/39','None','TECH2_1529')"]
Thanks Tripleee, I did what you recommended and found my issue: a legacy FOR clause upstream in my code was causing the issue.
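For the record, here is a hypothetical reconstruction of that kind of bug (the actual legacy code isn't shown): a leftover outer loop reruns the whole build-and-print block once per matching row, so the complete four-element list is printed four times.
for _ in result_set:  # stray legacy FOR clause: one extra pass per matching row
    data = []
    for row in result_set:
        data.append("('%s','%s','%s')" % (row["localint"], row["devicename"], row["hostname"]))
    print(data)  # prints the full list on every pass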

Python csv_writer: Change the output format of an Oracle date column

I have a table with a date column and want to format that column as DD.MM.YYYY in a CSV file, but ALTER SESSION does not affect the Python csv_writer.
Is there a way to handle all date columns without using to_char in the SQL code?
file_handle = open("test.csv", "w")
csv_writer = csv.writer(file_handle, dialect="excel", lineterminator='\n', delimiter=';', quoting=csv.QUOTE_NONNUMERIC)
conn = cx_Oracle.connect(connectionstring)
cur = conn.cursor()
cur.execute("ALTER SESSION SET NLS_DATE_FORMAT = 'DD.MM.YYYY HH24:MI:SS'")
cur.execute("select attr4, to_char(attr4, 'DD.MM.YYYY') from aTable")
rows = cur.fetchmany(16000)
while len(rows) > 0:
    csv_writer.writerows(rows)
    rows = cur.fetchmany(16000)
cur.close()
result:
"1943-04-21 00:00:00";"21.04.1943"
"1955-12-22 00:00:00";"22.12.1955"
"1947-11-01 00:00:00";"01.11.1947"
"1960-01-07 00:00:00";"07.01.1960"
"1979-12-01 00:00:00";"01.12.1979"
The output you see comes from the fact that the result of a query is converted to the corresponding Python datatypes: the values of the first column are datetime objects, and those of the second are strings (due to the to_char() cast you do in the query). NLS_DATE_FORMAT controls the output only for regular (user) clients.
Thus the output in the CSV is just the default representation of Python's datetime; if you want the output in a different form, you need to change it yourself.
As the query response is a list of tuples, you can't just change it in place; it has to be copied and modified. Alternatively, you could write it row by row, modifying each row as you go.
Here's just the write part with the 2nd approach:
import datetime
# the rest of your code
while len(rows) > 0:
    for row in rows:
        value = (row[0].strftime('%d.%m.%Y'), row[1])  # format the datetime; keep the string column as-is
        csv_writer.writerow(value)
    rows = cur.fetchmany(16000)
For reference, the Python documentation has a short list of the strftime directives.
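To address the "without to_char" part of the question: cx_Oracle supports an output type handler, which can tell the driver to fetch DATE columns as strings; Oracle then formats them with the session's NLS_DATE_FORMAT, so the ALTER SESSION from the question takes effect. A sketch, untested against a real Oracle instance:
def date_as_string(cursor, name, default_type, size, precision, scale):
    # Fetch DATE columns as str; the session NLS_DATE_FORMAT does the formatting.
    if default_type == cx_Oracle.DATETIME:
        return cursor.var(str, arraysize=cursor.arraysize)

conn.outputtypehandler = date_as_string
cur = conn.cursor()
cur.execute("ALTER SESSION SET NLS_DATE_FORMAT = 'DD.MM.YYYY'")
cur.execute("select attr4 from aTable")  # no to_char() needed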

Converting a tuple into an integer in python 3?

I have a tuple with a single value that's the result of a database query (it gives me the max ID currently in the database). I need to add 1 to that value to use it in my subsequent query, which creates a new profile associated with the next ID.
I'm having trouble converting the tuple into an integer so that I can add 1 (I tried the roundabout way here of turning the values into a string and then into an int). Help, please.
sql = """
SELECT id
FROM profiles
ORDER BY id DESC
LIMIT 1
"""
cursor.execute(sql)
results = cursor.fetchall()
maxID = int(','.join(str(results)))
newID = maxID + 1
If you are expecting just the one row, then use cursor.fetchone() instead of fetchall() and simply index into the one row that that method returns:
cursor.execute(sql)
row = cursor.fetchone()
newID = row[0] + 1
Rather than use an ORDER BY, you can ask the database directly for the maximum value:
sql = """SELECT MAX(id) FROM profiles"""
