I am working on a temperature sensor network using 1-Wire temperature sensors that runs on a Raspberry Pi 2. I am following this tutorial, and as I went along I realized that its setup is for a single temperature sensor, whereas my setup needs to work with multiple sensors.
Because I have multiple sensors, I also need to be able to tell them apart. To do this, I want three columns in the SQLite table: the readout from the sensor, the date and time, and the sensor name.
Here is the problem: when I configure the Python script to write these three values to the table, I get an error.
Here is the code that raises the error when I execute it:
#!/usr/bin/env python
import sqlite3
import os
import time
import glob

# global variables
speriod = (15 * 60) - 1
dbname = '/var/www/templog.db'

# store the temperature in the database
def log_temperature(temp):
    conn = sqlite3.connect(dbname)
    curs = conn.cursor()
    sensor1 = 'Sensor1'
    curs.execute("INSERT INTO temps values(datetime('now'), (?,?))" (temp, sensor1))
    # commit the changes
    conn.commit()
    conn.close()
"INSERT INTO temps values(datetime('now'), (?,?))" (temp, sensor1)
Breaking this down, you can see that this first creates a string, and the parentheses that follow look to Python like a function call. That is nonsensical: you have a string that you are trying to call as if it were a function. Hence the error about str not being callable, which is admittedly a bit cryptic if you are not experienced with Python. Essentially, you are missing a comma:
curs.execute("INSERT INTO temps VALUES (datetime('now'), ?, ?)", (temp, sensor1))
Now the ? placeholders are correctly filled in. Note that the extra parentheses around ?,? have also been dropped: SQLite expects the two placeholders as separate values in the VALUES list, not wrapped in a nested row value.
Often the "str is not callable" error is the result of a typo like this, or of a duplicated name (you think you are calling a function, but the variable actually holds a string), so start by looking for those problems when you see this type of error.
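For reference, here is a minimal corrected version of the whole logging function, sketched under the assumption that the temps table has three columns (timestamp, reading, sensor name); the column layout and the sensor_name parameter are illustrative, not part of the tutorial:

import sqlite3

dbname = '/var/www/templog.db'

def log_temperature(temp, sensor_name):
    # assumes a table like: CREATE TABLE temps (tdate TEXT, temp REAL, sensor TEXT)
    conn = sqlite3.connect(dbname)
    curs = conn.cursor()
    curs.execute("INSERT INTO temps VALUES (datetime('now'), ?, ?)",
                 (temp, sensor_name))
    conn.commit()
    conn.close()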
You need to add a comma there:
curs.execute("INSERT INTO temps VALUES (datetime('now'), ?, ?)", (temp, sensor1))
From the documentation:
Put ? as a placeholder wherever you want to use a value, and then provide a tuple of values as the second argument to the cursor’s execute() method
As you can see, you need to provide the tuple of values as the second argument to execute().
Related
I am trying to automatically process data in Python using a direct query to our SQL database. That part is done, but the code hard-codes the runnumber, so each time a new batch starts I have to look up the newest runnumber and re-enter it in the code before I can proceed - a recurring waste of time!
df = pd.read_sql(f"Select time, temp from datatable where machineid=84207 and runnumber=1616862158", conn)
I'd like the code to pick up the most recent runnumber automatically (the most recent runnumber will always be the maximum value in the database for each machine) without having to look it up and type it in. There are many different machines all collecting data into the SQL database, hence one dataframe per machineid. Again, that part of the code is finished and I can replicate it for all machines; I just don't want to maintain the runnumber for each one, as it will change over time.
So I'm trying to find a way to determine the max runnumber automatically, such as this:
pp1 = pd.read_sql(f"Select Max(runnumber) from datatable where machineid=84207", conn)
print(pp1)
max
0 1616862158
Is there a way to substitute the value contained in pp1 into the query line below, or to set runnumber equal to that maximum? I'm not familiar enough with the syntax of the WHERE clause or the Python options to set this up. Can someone help?
df = pd.read_sql(f"Select time, temp from datatable where machineid=84207 and runnumber=1616862158", conn)
Thank you in advance!
You could add a subquery in the WHERE clause, like this (note that the subquery repeats the machineid filter, so the maximum is taken per machine rather than over the whole table):
select time, temp from datatable where machineid=84207 and runnumber=(select max(runnumber) from datatable where machineid=84207)
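From Python, the whole thing fits in a single pd.read_sql call. A sketch, reusing the question's conn and table names; the parameter style (here %(mid)s) depends on your database driver, so adjust to ? or %s if needed:

import pandas as pd

machineid = 84207  # example machine from the question
df = pd.read_sql(
    "SELECT time, temp FROM datatable "
    "WHERE machineid = %(mid)s "
    "AND runnumber = (SELECT MAX(runnumber) FROM datatable "
    "                 WHERE machineid = %(mid)s)",
    conn,
    params={"mid": machineid},
)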
I am very much a newbie to Python programming. I am trying to call a stored procedure from Flask.
This is the part of the code from which I am calling the stored procedure:
cursor.execute(insertquery)
database.commit()
cursor2.execute("select id from users where userid='"+username1+"'")
data2 = cursor2.fetchall()
dictionary1 = [dict(user=row[0]) for row in data2]
for data in dictionary1:
    user_store = int(data.get('user'))
    cursor2.callproc('insertPrediction', user_store)
database.commit()
database.close()
and this is the stored procedure:
DROP PROCEDURE IF EXISTS `S3UploadDB`.`insertPrediction`;
CREATE PROCEDURE `S3UploadDB`.`insertPrediction`(IN id int(20))
BEGIN
    INSERT INTO prediction(id, matchno, winner, points)
    VALUES (id, 1, 'na', -500);
END;
Whenever I invoke the procedure from Toad it works fine and the data gets updated, but whenever it is invoked from the Python code I get:
TypeError: 'int' object is not iterable
It is failing at this line of the code:
cursor2.callproc('insertPrediction',user_store)
I have converted the value to int before passing it, since the stored procedure's input is of type int. I am using MySQL.
Please help me with this error.
callproc requires a sequence of arguments, such as a list or tuple, even if there is only one argument. Use:
cursor2.callproc('insertPrediction', [user_store])
Link to docs.
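Applied to the loop from the question, only the callproc line changes; a sketch:

for data in dictionary1:
    user_store = int(data.get('user'))
    # callproc expects a sequence of parameters, even for a single one
    cursor2.callproc('insertPrediction', [user_store])
database.commit()
database.close()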
I've hit a strange inconsistency problem with SQL Server inserts using a stored procedure. I'm calling a stored procedure from Python via pyodbc by running a loop to call it multiple times for inserting multiple rows in a table.
It seems to work normally most of the time, but after a while it will just stop working in the middle of the loop. At that point even if I try to call it just once via the code it doesn't insert anything. I don't get any error messages in the Python console and I actually get back the incremented identities for the table as though the data were actually inserted, but when I go look at the data, it isn't there.
If I call the stored procedure from within SQL Server Management Studio and pass in data, it inserts it and shows the incremented identity number as though the other records had been inserted even though they are not in the database.
It seems I reach a certain limit on the number of times I can call the stored procedure from Python and it just stops working.
I'm making sure to disconnect after I finish looping through the inserts, and other stored procedures written in the same way and sent via the same database connection still work as usual.
I've tried restarting the computer with SQL Server and sometimes it will let me call the stored procedure from Python a few more times, but that eventually stops working as well.
I'm wondering if it has something to do with calling the stored procedure in a loop too quickly, but that doesn't explain why, even after restarting the computer, it doesn't allow any more inserts from the stored procedure.
I've done lots of searching online, but haven't found anything quite like this.
Here is the stored procedure:
USE [Test_Results]
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROCEDURE [dbo].[insertStepData]
    @TestCaseDataId int,
    @StepNumber nchar(10),
    @StepDateTime nvarchar(50)
AS
SET NOCOUNT ON;
BEGIN TRANSACTION
DECLARE @newStepId int
INSERT INTO TestStepData (
    TestCaseDataId,
    StepNumber,
    StepDateTime
)
VALUES (
    @TestCaseDataId,
    @StepNumber,
    @StepDateTime
)
SET @newStepId = SCOPE_IDENTITY();
SELECT @newStepId
FROM TestStepData
COMMIT TRANSACTION
Here is the method I use to call a stored procedure and get back the id number ('conn' is an active database connection via pyodbc):
def CallSqlServerStoredProc(self, conn, procName, *args):
    sql = """DECLARE @ret int
             EXEC @ret = %s %s
             SELECT @ret""" % (procName, ','.join(['?'] * len(args)))
    return int(conn.execute(sql, args).fetchone()[0])
Here is where I'm passing in the stored procedure to insert:
....
for testStep in testStepData:
    testStepId = self.CallSqlServerStoredProc(conn, "insertStepData", testCaseId, testStep["testStepNumber"], testStep["testStepDateTime"])
    conn.commit()
    time.sleep(1)
....
SET @newStepId = SCOPE_IDENTITY();
SELECT @newStepId
FROM TestStepData
looks mighty suspicious to me:
SCOPE_IDENTITY() returns numeric(38,0) which is larger than int. A conversion error may occur after some time. Update: now that we know the IDENTITY column is int, this is not an issue (SCOPE_IDENTITY() returns the last value inserted into that column in the current scope).
SELECT into a variable doesn't guarantee its value if more than one record is returned. Besides, I don't get the idea behind overwriting the identity value we already have. In addition, the number of values returned by the last statement is equal to the number of rows in that table, which is growing quickly - this is a likely cause of the degradation. In brief, the last statement is not just useless, it's detrimental.
The 2nd statement also makes these statements misbehave:
EXEC @ret = %s %s
SELECT @ret
Since the procedure doesn't RETURN anything but SELECTs a single time, this chunk actually returns two data sets: 1) a single @newStepId value (from the EXEC, yielded by the SELECT @newStepId <...>); 2) a single NULL (from SELECT @ret). fetchone() reads the 1st data set by default, so you don't notice this, but it doesn't help performance or correctness either.
Bottom line
Replace the 2nd statement with RETURN @newStepId.
Data not in the database problem
I believe it's caused by the RETURN coming before COMMIT TRANSACTION. Make it the other way round.
In the original form, I believe it was caused by the long-working SELECT and/or possible side-effects from the SELECT not-to-a-variable being inside a transaction.
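Putting both fixes together, the procedure might look like this - a sketch only, assuming the identity column of TestStepData is the step id the caller wants back:

CREATE PROCEDURE [dbo].[insertStepData]
    @TestCaseDataId int,
    @StepNumber nchar(10),
    @StepDateTime nvarchar(50)
AS
SET NOCOUNT ON;
DECLARE @newStepId int
BEGIN TRANSACTION
INSERT INTO TestStepData (TestCaseDataId, StepNumber, StepDateTime)
VALUES (@TestCaseDataId, @StepNumber, @StepDateTime)
SET @newStepId = SCOPE_IDENTITY();
COMMIT TRANSACTION
-- RETURN only after the COMMIT, and no table-wide SELECT
RETURN @newStepId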
I have a database with roughly 30 million entries, which is a lot, and I expect nothing but trouble when working with result sets that large.
But using py-postgresql and its .prepare() statement, I was hoping I could fetch entries on a "yield" basis and thus avoid filling up my memory with the entire result set from the database, which apparently I can't?
This is what I've got so far:
import postgresql

user = 'test'
passwd = 'test'
db = postgresql.open('pq://' + user + ':' + passwd + '@192.168.1.1/mydb')
results = db.prepare("SELECT time FROM mytable")
uniqueue_days = []
with db.xact():
    for row in results():
        if not row['time'] in uniqueue_days:
            uniqueue_days.append(row['time'])
print(uniqueue_days)
Before even getting to if not row['time'] in uniqueue_days: I run out of memory, which isn't so strange considering results() probably fetches the entire result set before looping through it?
Is there a way to get the py-postgresql library to "page" or batch down the results, say 60k per round, or perhaps to rework the query so the database does more of the work?
Thanks in advance!
Edit: I should mention that the dates in the database are Unix timestamps, and I intend to convert them into %Y-%m-%d format prior to adding them to the uniqueue_days list.
If you were using the better-supported psycopg2 extension, you could use a loop over the client cursor, or fetchone, to get just one row at a time, as psycopg2 uses a server-side portal to back its cursor.
If py-postgresql doesn't support something similar, you could always explicitly DECLARE a cursor on the database side and FETCH rows from it progressively. I don't see anything in the documentation that suggests py-postgresql can do this for you automatically at the protocol level like psycopg2 does.
Usually you can switch between database drivers fairly easily, but py-postgresql doesn't seem to follow the Python DB-API, so switching will take a few more changes. I still recommend it.
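For illustration, a minimal sketch of the psycopg2 approach with a server-side (named) cursor; the connection details are placeholders taken from the question:

import psycopg2

conn = psycopg2.connect(host='192.168.1.1', dbname='mydb',
                        user='test', password='test')
# naming the cursor makes psycopg2 back it with a server-side portal,
# so rows stream from the server in batches as you iterate
cur = conn.cursor(name='day_scan')
cur.itersize = 60000  # rows fetched per network round trip
cur.execute("SELECT time FROM mytable")
uniqueue_days = set()  # a set avoids the O(n) membership test of a list
for (time_value,) in cur:
    uniqueue_days.add(time_value)
cur.close()
conn.close()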
You could let the database do all the heavy lifting.
For example: instead of reading all the data into Python and then calculating the unique dates, why not try something like this:
SELECT DISTINCT DATE(to_timestamp(time)) AS UNIQUE_DATES FROM mytable;
If you want to strictly enforce a sort order on the unique dates returned, then do the following:
SELECT DISTINCT DATE(to_timestamp(time)) AS UNIQUE_DATES
FROM mytable
order by 1;
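Run from Python, this pushes the de-duplication to PostgreSQL. A sketch using the db handle from the question (the lowercase alias is deliberate, since PostgreSQL folds unquoted identifiers to lowercase):

get_days = db.prepare(
    "SELECT DISTINCT DATE(to_timestamp(time)) AS unique_date "
    "FROM mytable ORDER BY 1")
uniqueue_days = [row['unique_date'] for row in get_days()]
print(uniqueue_days)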
Useful references for the functions used above:
Date/Time Functions and Operators
Data Type Formatting Functions
If you would like to read data in chunks you could use the dates you get from above query to subset your results further down the line:
Ex:
'SELECT * FROM mytable WHERE time BETWEEN ' + UNIQUE_DATES[i] + ' AND ' + UNIQUE_DATES[j]
where UNIQUE_DATES[i] and UNIQUE_DATES[j] are parameters you pass from Python.
I will leave it to you to figure out how to convert the dates back into Unix timestamps.
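Since building SQL by string concatenation is fragile, a safer sketch uses py-postgresql's $-style parameters; lo and hi are hypothetical Unix-timestamp bounds you would compute from the dates above:

get_chunk = db.prepare(
    "SELECT * FROM mytable WHERE time BETWEEN $1 AND $2")
for row in get_chunk(lo, hi):  # lo, hi: placeholder timestamp bounds
    process(row)  # process() stands in for your own handling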
So I'm using MySQLdb to grab data from a MySQL database and feed it into a Python function. I import MySQLdb, connect to the database and run a query like this:
conn.query('SELECT info FROM bag')
x = conn.store_result()
for row in x.fetch_row(100):
    print row
but my problem is that my data comes out like this: (1.234234,)(1.12342,)(3.123412,)
when I really want it to come out like this: 1.23424, 1.1341234, 5.1342314 (i.e. without parentheses). I need it this way to feed it into a Python function. Does anyone know how I can grab data from the database in a way that doesn't include the parentheses?
Rows are returned as tuples, even if there is only one column in the query. You can access the first and only item as row[0].
The first time around in the for loop, row does indeed refer to the first row. The second time around, it refers to the second row, and so on.
By the way, you say that you are using MySQLdb, but the methods you are using are from the underlying _mysql library (low level, scarcely portable)... why?
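For example, to collect the bare values into a flat list, a sketch built on the question's own query:

conn.query('SELECT info FROM bag')
x = conn.store_result()
values = [row[0] for row in x.fetch_row(100)]  # unwrap the 1-tuples
print values  # e.g. [1.234234, 1.12342, 3.123412]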
You could also simply use this as your for loop:
for (info, ) in x.fetch_row(100):
    print info