Trying to get the last value from MySQL on a Raspberry Pi. No idea why my simple code won't work; it raises an "execute() first" error at row = cursor.fetchone().
Here is my code:
# External module imports
import time
import os
import datetime
import MySQLdb
# Connect to mysql
db=MySQLdb.connect("localhost","zikmir","gforce","temp_database")
# Prepare a cursor
cursor=db.cursor()
# Select three columns, id, time and temp from table time_temp
cursor.execute = ("SELECT id, time, temp FROM time_temp")
# ID is autoincremented value, time is in TIME and temp is float
row = cursor.fetchone()
# Trying to store the last result in variable row
# Close cursor and database
cursor.close()
db.close()
Watch the = in cursor.execute = ("SELECT id, time, temp FROM time_temp"). Assigning to execute replaces the method instead of calling it, so no query is ever run. It should read cursor.execute("SELECT id, time, temp FROM time_temp").
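A minimal corrected sketch (connection parameters as in the question; the ORDER BY id DESC LIMIT 1 is an assumption, added so fetchone() returns the latest row rather than the first):
import MySQLdb

db = MySQLdb.connect("localhost", "zikmir", "gforce", "temp_database")
cursor = db.cursor()
# Call execute() as a method; assigning to it is what triggers
# "execute() first", because no query was ever executed.
cursor.execute("SELECT id, time, temp FROM time_temp ORDER BY id DESC LIMIT 1")
row = cursor.fetchone()  # the latest row, or None if the table is empty
cursor.close()
db.close()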
I'm using a function inside a Python program that doesn't work as expected.
I would like to call a sqlite3 function every 2 seconds that gives me the last record registered.
It works fine until midnight; after that it keeps reading values from the same day and doesn't switch when the new day arrives.
The function is (data is today's date, ora is the current hour):
import sqlite3
from sqlite3 import Error
import time
def leggi_tmp():
    try:
        time.sleep(2)
        conn = sqlite3.connect('DB.db')
        cursor = conn.cursor()
        cursor.execute('''SELECT * FROM tmp_hr WHERE data = date('now') ORDER BY ora DESC LIMIT 1''')
        # Fetching the 1st row from the table
        result = cursor.fetchone()
        tmpe = result[0]
        print(result)
        # Closing the connection
        conn.close()
    except Error as e:
        print(e)
    return tmpe
When I do:
while data.tm_hour in fase_1 and func2_letture.leggi_tmp() <= temp_min_giorno :
func2_letture.leggi_tmp() only reads the day on which it was first called (it works as expected during that day), but it doesn't pick up the new date when a new day arrives.
I can't understand where my mistake is...
I suspect that this is a timezone problem.
Add the 'localtime' modifier to the date() function:
WHERE data = date('now', 'localtime')
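SQLite evaluates date('now') in UTC, which is why the query keeps returning the previous day until UTC midnight. A sketch of the fixed query inside the question's function:
cursor.execute('''SELECT * FROM tmp_hr
                  WHERE data = date('now', 'localtime')
                  ORDER BY ora DESC LIMIT 1''')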
I am trying to calculate the mode value of each row and store it in the juiz (judge) column; however, it updates only the first record and then leaves the loop.
PS: Analisador is my table and resultado_2 is my database.
import sqlite3
import statistics
conn = sqlite3.connect("resultado_2.db")
cursor = conn.cursor()
data = cursor.execute("SELECT Bow, FastText, Glove, Wordvec, Python, juiz, id FROM Analisador")
for x in data:
    list = [x[0], x[1], x[2], x[3], x[4], x[5], x[6]]
    mode = statistics.mode(list)
    try:
        cursor.execute(f"UPDATE Analisador SET juiz={mode} where id={row[6]}")  # row[6] == id
        conn.commit()
    except:
        print("Error")
conn.close()
You have to fetch your records after the SQL is executed:
cursor.execute("SELECT Bow, FastText, Glove, Wordvec, Python, juiz, id FROM Analisador")
data = cursor.fetchall()
That type of SQL query (a SELECT) is different from the UPDATE you're also using in your code; an UPDATE doesn't need an additional fetch step after it is executed.
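Putting that together, a sketch of the corrected loop (it binds values with ? placeholders instead of an f-string, and uses x[6] for the id since row is undefined in the question's loop; leaving the id out of the mode calculation is an assumption):
cursor.execute("SELECT Bow, FastText, Glove, Wordvec, Python, juiz, id FROM Analisador")
data = cursor.fetchall()  # materialize all rows so the cursor is free for the UPDATEs
for x in data:
    values = [x[0], x[1], x[2], x[3], x[4], x[5]]  # assumption: x[6] (the id) is left out of the mode
    mode = statistics.mode(values)
    cursor.execute("UPDATE Analisador SET juiz = ? WHERE id = ?", (mode, x[6]))
conn.commit()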
I'm trying to insert a variable inside an SQL query using "+uid+", but the variable does not seem to be taken into account:
import datetime
import sqlite3 as lite
import sys
con = lite.connect('user.db')
uid = raw_input("UID?: ")
time = datetime.datetime.now()
with con:
    cur = con.cursor()
    cur.execute("SELECT Name FROM Users WHERE Id = "+uid+";")
    rows = cur.fetchall()
    for row in rows:
        print row
print time
Do not use string manipulation to build SQL. It can result in SQL injection.
Change
cur.execute("SELECT Name FROM Users WHERE Id = "+uid+";")
to (using prepared statements)
cur.execute("SELECT Name FROM Users WHERE Id=?;", (uid,))
cur.execute("SELECT Name FROM Users where %s=?" % (Id), (uid,))
also check out this answer
Python sqlite3 string variable in execute
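For completeness, a Python 3 sketch of the question's script with the query parameterized:
import datetime
import sqlite3 as lite

con = lite.connect('user.db')
uid = input("UID?: ")  # raw_input() on Python 2
time = datetime.datetime.now()
with con:
    cur = con.cursor()
    # the ? placeholder lets sqlite3 quote the value safely
    cur.execute("SELECT Name FROM Users WHERE Id = ?", (uid,))
    rows = cur.fetchall()
    for row in rows:
        print(row)
print(time)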
I am using PostgreSQL with Python, and rows are added to the database regularly. At present the Python program does not know when new data has been added (I use psycopg2 to read rows, but it reads to the end of the rows and stops). How can I have my Python program continually check whether new data has been added? Or can I have PostgreSQL trigger Python when a new row is inserted?
This is what I have currently:
def get_data():
    try:
        connect = psycopg2.connect(database="yardqueue", user="postgres", password="abcd", host="localhost", port="5432")
    except:
        print "Could not open database"
    cur = connect.cursor()
    cur.execute("SELECT id,position FROM container")
    rows = cur.fetchall()
    for row in rows:
        print "ID = ", row[0]
        print "Position = ", row[1]
As you see, when I run this, it stops once variable 'row' reaches the last row.
EDIT: Is there a way I can keep my python code running for a specified amount of time? If so, I can make it go through the database until I kill it.
If you want to check for new records, we can write (assuming there are no deletions in the container table):
from time import sleep
import psycopg2
IDLE_INTERVAL_IN_SECONDS = 2
def get_data():
    try:
        connect = psycopg2.connect(database="yardqueue", user="postgres",
                                   password="abcd", host="localhost",
                                   port="5432")
    except:
        print "Could not open database"
        # TODO: maybe we should raise new exception?
        # or leave default exception?
        return
    cur = connect.cursor()
    previous_rows_count = 0
    while True:
        cur.execute("SELECT id, position FROM container")
        rows_count = cur.rowcount
        if rows_count > previous_rows_count:
            rows = cur.fetchall()
            for row in rows:
                print "ID = ", row[0]
                print "Position = ", row[1]
            previous_rows_count = rows_count
        sleep(IDLE_INTERVAL_IN_SECONDS)
If we want to process only new records, we can add ordering by id and an offset, like:
from time import sleep
import psycopg2
IDLE_INTERVAL_IN_SECONDS = 2
def get_data():
    try:
        connect = psycopg2.connect(database="yardqueue", user="postgres",
                                   password="abcd", host="localhost",
                                   port="5432")
    except:
        # TODO: maybe we should raise new exception?
        # or leave default exception?
        print "Could not open database"
        return
    cur = connect.cursor()
    rows_count = 0
    while True:
        cur.execute("SELECT id, position FROM container "
                    # sorting records by id to get new records data,
                    # assuming that "id" column values are increasing for new records
                    "ORDER BY id "
                    # skipping records that we have already processed
                    "OFFSET {offset}"
                    .format(offset=rows_count))
        new_rows_count = cur.rowcount
        if new_rows_count > 0:
            rows = cur.fetchall()
            for row in rows:
                print "ID = ", row[0]
                print "Position = ", row[1]
            # accumulate, so the OFFSET keeps growing past all processed rows
            rows_count += new_rows_count
        sleep(IDLE_INTERVAL_IN_SECONDS)
Unfortunately, a database has no notion of insertion order, so you as the designer must provide an explicit order. If you do not, the order of the rows you fetch (using a new cursor) may change at any time.
A possible way here is to have a serial field in your table. PostgreSQL implements a serial field through a sequence, which guarantees that each newly inserted row gets a serial number greater than all currently existing ones. But:
there can be holes if a transaction requests a serial number and is aborted
if multiple concurrent transactions insert rows, the order of the serial field will be the order of the INSERT commands, not the order of the COMMIT commands. That means race conditions can produce a wrong order. It is fine, though, if you have only one writer in the database.
An alternative way is to use an insertion-date field: the inserting application has to manage it explicitly, or you can use a trigger to set it transparently. PostgreSQL timestamps have microsecond precision, which means many rows can share the same insertion-date value if they are inserted at the same time. Your Python script should read the time before opening a cursor and fetch all rows with an insertion time greater than its last run time. But here again you should beware of race conditions...
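For illustration, a Python 3 sketch of the serial-field approach, polling with the question's connection parameters (assuming id is a serial primary key):
from time import sleep
import psycopg2

POLL_INTERVAL_IN_SECONDS = 2

def follow_new_rows():
    connect = psycopg2.connect(database="yardqueue", user="postgres",
                               password="abcd", host="localhost", port="5432")
    cur = connect.cursor()
    last_seen_id = 0  # largest serial id processed so far
    while True:
        # only fetch rows inserted since the last poll
        cur.execute("SELECT id, position FROM container WHERE id > %s ORDER BY id",
                    (last_seen_id,))
        for row_id, position in cur.fetchall():
            print("ID =", row_id)
            print("Position =", position)
            last_seen_id = row_id
        sleep(POLL_INTERVAL_IN_SECONDS)
Tracking the largest id seen avoids the OFFSET bookkeeping above and tolerates holes in the sequence, though the commit-order race condition still applies with multiple writers.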
My goal is to take two variables, xdate and xtime, and store them in an sqlite database in two separate columns using a Python script. My code is:
from datetime import datetime
import sqlite3 as mydb
import sys
con = mydb.connect('testTime.db')
def logTime():
    i = datetime.now()
    xdate = i.strftime('%Y-%m-%d')
    xtime = i.strftime('%H-%M-%S')
    return xdate, xtime
z=logTime()
This is where I get hung up. I tried:
try:
    with con:
        cur = con.cursor()
        cur.execute('INSERT INTO DT(Date, Time) Values (?,?)', (z[0], z[1]))
        data = cur.fetchone()
        print(data)
        con.commit()
except:
    with con:
        cur = con.cursor()
        cur.execute('CREATE TABLE DT(Date, Time)')
        con.commit()
I keep getting None when I try to fetch the data.
Any tips or recommended reading?
You are executing an INSERT query; its result has nothing to fetch. You should run a SELECT query and then fetch the data.
fetchone()
Fetches the next row of a query result set, returning a single sequence, or None when no more data is available.
An example -
>>> cur.execute('INSERT INTO DT(Date, Time) Values (?,?)', (z[0],z[1]))
<sqlite3.Cursor object at 0x0353DF60>
>>> print cur.fetchone()
None
>>> cur.execute('SELECT Date, Time from DT')
<sqlite3.Cursor object at 0x0353DF60>
>>> print cur.fetchone()
(u'2016-02-25', u'12-46-16')
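For reference, a Python 3 sketch of the whole flow without the try/except dance (CREATE TABLE IF NOT EXISTS is standard SQLite and avoids using the exception path to create the table):
from datetime import datetime
import sqlite3 as mydb

con = mydb.connect('testTime.db')

def logTime():
    i = datetime.now()
    return i.strftime('%Y-%m-%d'), i.strftime('%H-%M-%S')

z = logTime()
with con:  # the with block commits automatically on success
    cur = con.cursor()
    cur.execute('CREATE TABLE IF NOT EXISTS DT(Date, Time)')
    cur.execute('INSERT INTO DT(Date, Time) Values (?,?)', (z[0], z[1]))
    cur.execute('SELECT Date, Time FROM DT')  # SELECT first, then fetch
    print(cur.fetchone())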