I'm using a function inside a Python program that doesn't work as expected.
I would like to call a sqlite3 function that gives me the last record registered, polling every 2 seconds.
It works fine until midnight; then it keeps reading values from the same day and doesn't change when the new day arrives.
The function is (data is today's date, ora is the current hour):
import sqlite3
from sqlite3 import Error
import time

def leggi_tmp():
    try:
        time.sleep(2)
        conn = sqlite3.connect('DB.db')
        cursor = conn.cursor()
        cursor.execute('''SELECT * FROM tmp_hr WHERE data = date('now') ORDER BY ora DESC LIMIT 1''')
        # Fetching the first row from the table
        result = cursor.fetchone()
        tmpe = result[0]
        print(result)
        # Closing the connection
        conn.close()
    except Error as e:
        print(e)
    return tmpe
when I do:
while data.tm_hour in fase_1 and func2_letture.leggi_tmp() <= temp_min_giorno:
func2_letture.leggi_tmp() only reads the date from the first time it is called (it works as expected during the day); it doesn't read the new date when the new day arrives.
I can't understand where my mistake is...
I suspect that this is a timezone problem. Add the 'localtime' modifier to the date() function:
WHERE data = date('now', 'localtime')
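SQLite evaluates date('now') in UTC, so if the data column stores local dates, the query keeps matching the previous day's rows after local midnight until UTC midnight has also passed. For the function in the question, that means changing the query to:

cursor.execute('''SELECT * FROM tmp_hr WHERE data = date('now', 'localtime') ORDER BY ora DESC LIMIT 1''')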
I have searched extensively before asking this seemingly simple question. I have a Python project with a SQLite DB and some code to insert into it and work with it. All was good until I decided to pull the DB functions out of the main file, pull out the DB file, and put both into a folder called db. So now both the function file and the DB are in the same folder, one level deep. The error starts immediately, but the code still runs, albeit without actually doing anything. I searched the internet, and all I see are people saying to delete the DB file and make it again in place, which usually solves the issue; I did that twice but no luck. What am I missing here? The code runs without an error but does not actually work, and the error I am reporting here is from the PyCharm hover box.
import datetime
import sqlite3

import requests

def add_symbols_to_list(symbols_to_add) -> None:
    """This will add symbols to the current symbols list, but leave the previous entries.
    :param: a list of user provided symbols as comma separated strings."""
    conn = sqlite3.connect('database.db')
    c = conn.cursor()
    time_now = datetime.datetime.now()  # get current time for the int conversion below
    this_month_int = time_now.month  # get the current month and set it to an int
    # gets the current number of rows so new additions have the correct rowid
    # c.execute("SELECT * FROM currentMonthStocks")
    # current_row_number = c.execute("SELECT COUNT(*) FROM currentMonthStocks")
    # rows = int(current_row_number)
    # # https://www.sqlitetutorial.net/sqlite-count-function/
    # db_row_id = rows + 1  # set the first row number
    extra_symbols = symbols_to_add
    for i in range(len(extra_symbols)):
        c.execute("""INSERT INTO currentMonthStocks
                     (symbol, month)
                     VALUES (?, ?)""", (extra_symbols[i], this_month_int))
        # db_row_id += 1
        print("Added a symbol")
    print("Symbols successfully populated into currentMonthStocks table in database.db")

new_symbols = ['tsla', 'dis', 'pltr']
add_symbols_to_list(new_symbols)

def get_symbols_at_month_start() -> None:
    """Function inserts a list of symbols to trade every month into the currentMonthStocks table in database.db.
    This is called once at the start of the month; it deletes the current symbols and adds the new ones.
    :return: None."""
    # edited out the url info for brevity
    response = requests.request("POST", url, headers=headers, data=payload)
    symbols = response.json()['content']['allInstrumentRows']
    this_months_symbols = []
    for symbol in symbols:
        this_months_symbols.append(symbol['Symbol'])
    # print(this_months_symbols)
    # file = "database.db"
    try:
        conn = sqlite3.connect('database.db')  # setup database connection
        c = conn.cursor()
        print("Database Connected")
        # c.execute("""CREATE TABLE currentMonthStocks (
        #                 id INT PRIMARY KEY,
        #                 symbol TEXT,
        #                 month INT)""")
        # print("table created successfully")
        # checks to see if there is at least 1 row in the db, if yes it deletes all rows.
        if c.execute("SELECT EXISTS(SELECT 1 FROM currentMonthStocks WHERE id=1 LIMIT 2);"):
            # for i in range(len(this_months_symbols)):
            c.execute("DELETE FROM currentMonthStocks")
            print("Delete all rows successful")
        time_now = datetime.datetime.now()  # get current time for the int conversion below
        this_month_int = time_now.month  # get the current month and set it to an int
        db_row_id = 1  # set the first row number
        for i in range(len(this_months_symbols)):
            c.execute("""INSERT INTO currentMonthStocks
                         (id, symbol, month)
                         VALUES (?, ?, ?)""", (db_row_id, this_months_symbols[i], this_month_int))
            db_row_id += 1
            # print("one more entry")
        print("Symbols successfully populated into currentMonthStocks table in database.db")
        conn.commit()  # commits the current transaction.
        print("Entries committed to database.db")
        # c.close()  # closes the connection to the db.
        conn.close()
    except sqlite3.Error as e:
        print("sqlite3 error", e)
    finally:
        if conn:
            conn.close()
            print("Database.db Closed")
There was no problem, or at least no solution: PyCharm is still not recognizing the table, but I wrote 5 CRUD functions and they all work. So the answer is don't worry about it; just check whether the DB is updating correctly.
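One thing worth checking in a setup like this: sqlite3.connect() resolves a relative filename against the current working directory, not against the file that contains the call, so after moving everything into a db/ folder the code can silently create and use a fresh, empty database.db wherever the script happens to be run from. A sketch of anchoring the path to the module's own location (the get_connection helper name is mine, not from the question):

import sqlite3
from pathlib import Path

# database.db lives next to this module, regardless of the working directory
DB_PATH = Path(__file__).resolve().parent / "database.db"

def get_connection():
    return sqlite3.connect(DB_PATH)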
I am using PostgreSQL with Python, and the SQL database is such that rows are added regularly. At present, the Python program does not know when new data has been added (I use psycopg2 to read rows, but it reads to the end of the rows and stops). How can I have my Python program continually check whether new data has been added? Or can I have PostgreSQL trigger Python when a new row is added?
This is what I have currently:
def get_data():
    try:
        connect = psycopg2.connect(database="yardqueue", user="postgres", password="abcd", host="localhost", port="5432")
    except:
        print "Could not open database"
    cur = connect.cursor()
    cur.execute("SELECT id,position FROM container")
    rows = cur.fetchall()
    for row in rows:
        print "ID = ", row[0]
        print "Position = ", row[1]
As you can see, when I run this, it stops once the variable 'row' reaches the last row.
EDIT: Is there a way I can keep my python code running for a specified amount of time? If so, I can make it go through the database until I kill it.
If you want to check for new records, we can write (assuming there are no deletions in the container table):
from time import sleep

import psycopg2

IDLE_INTERVAL_IN_SECONDS = 2

def get_data():
    try:
        connect = psycopg2.connect(database="yardqueue", user="postgres",
                                   password="abcd", host="localhost",
                                   port="5432")
    except:
        print "Could not open database"
        # TODO: maybe we should raise new exception?
        # or leave default exception?
        return
    cur = connect.cursor()
    previous_rows_count = 0
    while True:
        cur.execute("SELECT id, position FROM container")
        rows_count = cur.rowcount
        if rows_count > previous_rows_count:
            rows = cur.fetchall()
            for row in rows:
                print "ID = ", row[0]
                print "Position = ", row[1]
            previous_rows_count = rows_count
        sleep(IDLE_INTERVAL_IN_SECONDS)
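A small note on cost: this variant re-fetches the whole table every 2 seconds just to see whether it grew. A cheaper probe (my suggestion, not part of the original answer) is to ask for the count alone and only fetch rows when it changes:

cur.execute("SELECT count(*) FROM container")
rows_count = cur.fetchone()[0]  # compare against previous_rows_count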
If we want to process only new records, we can add ordering by id and an offset, like:
from time import sleep

import psycopg2

IDLE_INTERVAL_IN_SECONDS = 2

def get_data():
    try:
        connect = psycopg2.connect(database="yardqueue", user="postgres",
                                   password="abcd", host="localhost",
                                   port="5432")
    except:
        # TODO: maybe we should raise new exception?
        # or leave default exception?
        print "Could not open database"
        return
    cur = connect.cursor()
    processed_rows_count = 0
    while True:
        cur.execute("SELECT id, position FROM container "
                    # sorting records by id to get new records data
                    # assuming that "id" column values are increasing for new records
                    "ORDER BY id "
                    # skipping records that we have already processed
                    "OFFSET {offset}"
                    .format(offset=processed_rows_count))
        new_rows_count = cur.rowcount
        if new_rows_count > 0:
            rows = cur.fetchall()
            for row in rows:
                print "ID = ", row[0]
                print "Position = ", row[1]
            # accumulate, so the next OFFSET skips everything already seen
            processed_rows_count += new_rows_count
        sleep(IDLE_INTERVAL_IN_SECONDS)
Unfortunately, a database has no notion of insertion order, so you as the designer must provide an explicit order. If you do not, the order of the rows you fetch (using a new cursor) may change at any time.
A possible way here is to have a serial field in your table. PostgreSQL implements a serial field through a sequence, which guarantees that each newly inserted row gets a serial number greater than all currently existing ones. But:
there can be holes if a transaction requests a serial number and is aborted
if multiple concurrent transactions insert rows, the order of the serial field will be the order of the insert commands, not the order of the commit commands. That means race conditions can result in a wrong order. It is fine, though, if you have only one writer in the database.
An alternative way is to use an insertion date field: the inserting application has to manage it explicitly, or you can use a trigger to set it transparently. PostgreSQL timestamps have microsecond precision, which means that many rows can carry the same insertion date if they are inserted at the same time. Your Python script should read the time before opening a cursor and fetch all rows with an insertion time greater than its last run time. But here again you should watch out for race conditions...
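For the "can PostgreSQL trigger Python" part of the question, there is also PostgreSQL's LISTEN/NOTIFY mechanism, which avoids polling entirely. A sketch with psycopg2 (the channel name new_container and the trigger are my assumptions and must be created in the database separately):

# On the PostgreSQL side (assumed setup, run once):
#   CREATE FUNCTION notify_new_container() RETURNS trigger AS $$
#   BEGIN
#     PERFORM pg_notify('new_container', NEW.id::text);
#     RETURN NEW;
#   END;
#   $$ LANGUAGE plpgsql;
#   CREATE TRIGGER container_insert AFTER INSERT ON container
#     FOR EACH ROW EXECUTE PROCEDURE notify_new_container();
import select
import psycopg2
import psycopg2.extensions

connect = psycopg2.connect(database="yardqueue", user="postgres",
                           password="abcd", host="localhost", port="5432")
connect.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)
cur = connect.cursor()
cur.execute("LISTEN new_container;")
while True:
    # block for up to 5 seconds waiting for a notification
    if select.select([connect], [], [], 5) == ([], [], []):
        continue
    connect.poll()
    while connect.notifies:
        notify = connect.notifies.pop(0)
        print "New row id:", notify.payload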
Trying to get the last value from MySQL on a Raspberry Pi. No idea why my simple code won't work; it gives the error "execute() first" at row = cursor.fetchone().
Here is my code:
# External module imports
import time
import os
import datetime
import MySQLdb

# Connect to mysql
db = MySQLdb.connect("localhost", "zikmir", "gforce", "temp_database")

# Prepare a cursor
cursor = db.cursor()

# Select three columns, id, time and temp from table time_temp
cursor.execute = ("SELECT id, time, temp FROM time_temp")

# ID is an autoincremented value, time is TIME and temp is a float
row = cursor.fetchone()
# Trying to store the last result in variable row

# Close cursor and database
cursor.close()
db.close()
Watch the = in cursor.execute = ("SELECT id, time, temp FROM time_temp"). That assigns the string to the execute attribute instead of calling the method, so no query ever runs. It should read cursor.execute("SELECT id, time, temp FROM time_temp").
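Since the goal is the last value and id is autoincremented, ordering by id descending is one way to get it once the call is fixed (a sketch; the ORDER BY ... LIMIT 1 is my addition, not part of the original code):

cursor.execute("SELECT id, time, temp FROM time_temp ORDER BY id DESC LIMIT 1")
row = cursor.fetchone()  # the most recent row, or None if the table is empty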
I'm using a Raspberry Pi with the Raspbian Wheezy distribution, running headless. I currently have a cron job that runs a Python script to put the current temperature and datetime into a MySQL database
(table: tempLog, attributes: datetime, temperature float(5,2)).
I want to delete rows that are, say, 5 days old (the number of days is arbitrary), and I'm having trouble accomplishing this in Python. Here is the code; it's not long.
import os
import time
import datetime
import glob
import MySQLdb
from time import strftime
from datetime import timedelta
from datetime import date

# Variables for MySQL
db = MySQLdb.connect(host="localhost", user="root", passwd="password", db="temp_database")
cur = db.cursor()

del_basedate = datetime.datetime.today() - timedelta(1)
# DATE_SUB(NOW() , INTERVAL 1 DAY)

try:
    cur.execute("DELETE FROM tempLog WHERE datetime.date.day = del_basedate")
    print "Delete successful"
except:
    print "An error occurred in: deleteRows.py"
finally:
    cur.close()
    db.close()
I had to do several from ... imports, because it kept throwing errors about objects not existing.
You need to learn about coding context:

del_basedate = datetime.datetime.today() - timedelta(1)
^^^^^^^^^^^^ ---- Python variable

cur.execute("DELETE FROM tempLog WHERE datetime.date.day = del_basedate")
   text string with the letters "d", "e", "l", "_", etc... ---- ^^^^^^^^^^^^

Python isn't magical, and will NOT rummage around in a string to see if any of the text in that string LOOKS like a Python variable. So you're effectively telling your DB to compare datetime.date.day against some unknown/undefined del_basedate field.
Try
cur.execute("DELETE FROM tempLog WHERE datetime.date.day = '" + str(del_basedate) + "'")
Note the extra quotes within the string, and the str() call, since a datetime cannot be concatenated to a string directly. Without the quotes, you'd be comparing against = 2016-02-18, which is a math operation and is parsed/executed by the database as = 1996.
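A safer variant is a parameterized query, which lets the driver quote and convert the Python datetime for you and avoids building SQL by hand. A minimal sketch (the `datetime` column name is the one from the table definition in the question; 5 days is the arbitrary cutoff mentioned there):

import datetime
from datetime import timedelta
import MySQLdb

db = MySQLdb.connect(host="localhost", user="root", passwd="password", db="temp_database")
cur = db.cursor()

cutoff = datetime.datetime.today() - timedelta(days=5)
# %s is MySQLdb's placeholder; the driver quotes and converts the datetime
cur.execute("DELETE FROM tempLog WHERE `datetime` < %s", (cutoff,))
db.commit()  # DELETEs are not visible to other connections until committed
cur.close()
db.close()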
I have to get recently updated data from a database. To solve this, I save the last-read row number into a Python shelve. The following code works for a simple query like select * from rows. My code is:
from pyodbc import connect
from peewee import *
import random
import shelve
import connection

d = shelve.open("data.shelve")
db = SqliteDatabase("data.db")

class Rows(Model):
    valueone = IntegerField()
    valuetwo = IntegerField()

    class Meta:
        database = db

def CreateAndPopulate():
    db.connect()
    db.create_tables([Rows], safe=True)
    with db.atomic():
        for i in range(100):
            row = Rows(valueone=random.randrange(0, 100), valuetwo=random.randrange(0, 100))
            row.save()
    db.close()

def get_last_primary_key():
    return d.get('max_row', 0)

def doWork():
    query = "select * from rows"  # could be anything
    conn = connection.Connection("localhost", "", "SQLite3 ODBC Driver", "data.db", "", "")
    max_key_query = "SELECT MAX(%s) from %s" % ("id", "rows")
    max_primary_key = conn.fetch_one(max_key_query)[0]
    print "max_primary_key " + str(max_primary_key)
    last_primary_key = get_last_primary_key()
    print "last_primary_key " + str(last_primary_key)
    if max_primary_key == last_primary_key:
        print "no new records"
    elif max_primary_key > last_primary_key:
        print "There are some new records"
        optimizedQuery = query + " where id>" + str(last_primary_key)
        print optimizedQuery
        for data in conn.fetch_all(optimizedQuery):
            print data
        d['max_row'] = max_primary_key
        # print d['max_row']

# CreateAndPopulate()  # to populate data
doWork()
The code works for a simple query without a where clause, but the query can be anything from simple to complex, with joins and multiple where clauses. In that case, the portion where I append where will fail. How can I get only the most recently added data from the database, whatever the query is?
PS: I cannot modify the database. I just have to fetch from it.
Use an OFFSET clause. For example:
SELECT * FROM [....] WHERE [....] LIMIT -1 OFFSET 1000
In your query, replace 1000 with a parameter bound to your shelve variable. That will skip the first "shelve" number of rows and only grab newer ones. You may want to consider a more robust refactor eventually, but good luck.
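A minimal sketch of wiring that into the shelve bookkeeping from the question (conn.fetch_all is the question's own wrapper; in SQLite, LIMIT -1 means "no limit", so only the OFFSET takes effect, and it applies to the whole result set even for queries with joins and where clauses):

last_row = d.get('max_row', 0)  # rows already processed, from the shelve
paged_query = query + " LIMIT -1 OFFSET " + str(last_row)
new_rows = conn.fetch_all(paged_query)
for data in new_rows:
    print data
d['max_row'] = last_row + len(new_rows)  # remember how far we got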