Inserting to Mariadb from multiple scripts - python

So currently I have one Python script inserting sensor data into my weather database every hour.
I have now added a second script that adds rainfall data to the same table, also every hour.
Now the problem: when the second script inserts, all the other values get 'zeroed', as displayed in Grafana.
Am I overwriting something somewhere? If someone could point me in the right direction, I'd appreciate it.
Weather sensors insert statement
sql=("INSERT INTO WEATHER_MEASUREMENT (AMBIENT_TEMPERATURE, AIR_PRESSURE, HUMIDITY) VALUES ({},{},{})".format(temperature,pressure,humidity))
mycursor.execute(sql)
weatherdb.commit()
Rainfall sensors insert
sql=("INSERT INTO WEATHER_MEASUREMENT (RAINFALL) VALUES ({})".format(rainfall))
mycursor.execute(sql)
weatherdb.commit()

Tell me if I understand it right:
Your table “WEATHER_MEASUREMENT” has 4 columns (apart from the ID): AMBIENT_TEMPERATURE, AIR_PRESSURE, HUMIDITY and RAINFALL.
When you add a RAINFALL value it creates a new row in your table with the other columns left at NULL, and this is the problem?
If this is the case, you probably want to update the existing row with a query like:
sql = """
UPDATE WEATHER_MEASUREMENT
SET RAINFALL = {}
WHERE id_of_the_row = {}
""".format(rainfall, id)
mycursor.execute(sql)
You will need to find a way to figure out the ID of the row you just created with your weather sensor insert statement (maybe search for the last inserted row, if you are sure of the timings).
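For example, a minimal sketch of what the rainfall script could do instead of its INSERT, assuming the table has an auto-increment column called ID and that the weather script has already inserted this hour's row:
mycursor.execute("SELECT MAX(ID) FROM WEATHER_MEASUREMENT")  # most recently inserted row
last_id = mycursor.fetchone()[0]
sql = "UPDATE WEATHER_MEASUREMENT SET RAINFALL = {} WHERE ID = {}".format(rainfall, last_id)
mycursor.execute(sql)
weatherdb.commit()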


I want to add one value to one value in one row in sqlite3

I have this code in sqlite:
self.cursor.execute('update inventory set cost=cost+? where account=?',(value,account))
I just want to add the value to the cost in one row only, whichever row that is (first or last), but the problem is that when I execute it, it adds the value to every row that has that account name.
Just so you know, it doesn't raise any error; it's just an issue with the code.
Use a subquery in the WHERE clause to get the max rowid of the account that you want and update only that:
UPDATE inventory
SET cost = cost + ?
WHERE rowid = (SELECT MAX(rowid) FROM inventory WHERE account = ?);
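From Python this can be run with the same parameter style as the original code; a short sketch (assuming the connection object is available as self.connection):
self.cursor.execute(
    'UPDATE inventory SET cost = cost + ? '
    'WHERE rowid = (SELECT MAX(rowid) FROM inventory WHERE account = ?)',
    (value, account))
self.connection.commit()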

Why is this sql statement super slow?

I am writing large amounts of data to a sqlite database. I am using a temporary dataframe to find unique values.
This sql code takes forever in conn.execute(sql)
if upload_to_db == True:
    print(f'########################################WRITING TO TEMP TABLE: {symbol} #######################################################################')
    master_df.to_sql(name='tempTable', con=engine, if_exists='replace')
    with engine.begin() as cn:
        sql = """INSERT INTO instrumentsHistory (datetime, instrumentSymbol, observation, observationColName)
                 SELECT t.datetime, t.instrumentSymbol, t.observation, t.observationColName
                 FROM tempTable t
                 WHERE NOT EXISTS
                     (SELECT 1 FROM instrumentsHistory f
                      WHERE t.datetime = f.datetime
                        AND t.instrumentSymbol = f.instrumentSymbol
                        AND t.observation = f.observation
                        AND t.observationColName = f.observationColName)"""
        print(f'##############################################WRITING TO FINAL TABLE: {symbol} #################################################################')
        cn.execute(sql)
Running this takes forever to write to the database. Can someone help me understand how to speed it up?
Edit 1:
How many rows, roughly? About 15,000 at a time. Basically it is pulling data into a pandas dataframe, making some transformations, and then writing it to a sqlite database. There are probably 600 different instruments, each with around 15,000 rows, so roughly 9M rows ultimately, give or take a million.
Depending on your SQL database, you could try using something like INSERT IGNORE (MySQL) or MERGE (e.g. on Oracle), which would do the insert only if it would not violate a primary key or unique constraint. This assumes that such a constraint exists on the four columns which you are checking.
In the absence of merge, you could try adding the following index to the instrumentsHistory table:
CREATE INDEX idx ON instrumentsHistory (datetime, instrumentSymbol, observation, observationColName);
This index would allow for rapid lookup of each incoming record, coming from the tempTable, and so might speed up the insert process.
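Since the question uses SQLite, the equivalent of INSERT IGNORE there is INSERT OR IGNORE backed by a UNIQUE index. A rough sketch, assuming the four columns together really do identify a row (creating the unique index will fail if duplicates already exist) and reusing the same engine object as in the question:
with engine.begin() as cn:
    # one-time setup: a unique index both speeds up the lookup and enforces the constraint
    cn.execute("""CREATE UNIQUE INDEX IF NOT EXISTS idx_hist
                  ON instrumentsHistory (datetime, instrumentSymbol,
                                         observation, observationColName)""")
    # rows that would violate the unique index are silently skipped
    cn.execute("""INSERT OR IGNORE INTO instrumentsHistory
                      (datetime, instrumentSymbol, observation, observationColName)
                  SELECT datetime, instrumentSymbol, observation, observationColName
                  FROM tempTable""")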
This subquery
WHERE NOT EXISTS
(SELECT 1 FROM instrumentsHistory f
WHERE t.datetime = f.datetime
AND t.instrumentSymbol = f.instrumentSymbol
AND t.observation = f.observation
AND t.observationColName = f.observationColName)
has to check every row in the table - and match four columns - until a match is found. In the worst case, there is no match and a full table scan must be completed. Therefore, the performance of the query will deteriorate as the table grows in size.
The solution, as mentioned in Tim's answer, is to create an index over the four columns so that the db can quickly determine whether a match exists.

IntegrityError: NOT NULL constraint failed PYTHON, SQLite3

I'm trying to store my JSON file in the popNames DB, but this error pops up.
My JSON file is a dictionary with the country as the key and the person names as the value. In my DB I want the country in the first column as the primary key and the names in the subsequent columns of the table.
Could anyone help me with this?
Every INSERT call creates a new row in the PopNamesDB table. Your code creates many such rows: the first row has a country but NULL for all the other columns. The next N rows each have a null country, a value for colName, and NULL for all the other columns.
An easy way to fix your code is to change your follow-up INSERT calls (on line 109) so that they update the row you created earlier instead of creating new rows. The query will look something like:
cur.execute(''' UPDATE PopNamesDB SET ''' + colName + ''' = ? WHERE country = ?''', (y, c))
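Putting it together, a rough sketch of the whole loop (not the asker's exact code; it assumes the JSON was loaded into a dict popNames mapping country -> {column name: value}, that every such column already exists in the table, and that the connection object is called con):
for c, names in popNames.items():
    cur.execute('INSERT INTO PopNamesDB (country) VALUES (?)', (c,))
    for colName, y in names.items():
        # column names cannot be bound as parameters, hence the concatenation
        cur.execute('UPDATE PopNamesDB SET ' + colName + ' = ? WHERE country = ?', (y, c))
con.commit()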

How to insert data in MySQL as previous and next record

How do I write this in SQL (MySQL) or Python? I was trying to do it on the basis of row_number; how do I bring the next row's value into the current row? Can you throw some light on this?
Input
symbol,timestamp,close,open,status,row_number
ABCLTD,2015-01-16,43.25,33.81,Bullish,1
ABCLTD,2015-02-28,29.891,34.22,Bearish,2
ABCLTD,2015-03-05,35.562,34.28,Bullish,3
ABCLTD,2015-03-27,34.23,34.47,Bearish,4
ABCLTD,2015-03-31,35.833,34.53,Bullish,5
ABCLTD,2015-04-30,34.1,34.77,Bearish,6
ABCLTD,2015-05-08,35.029,34.83,Bullish,7
ABCLTD,2015-05-15,33.609,34.87,Bearish,8
ABCLTD,2016-08-12,38.719,36.2,Bullish,9
ABCLTD,2016-10-14,36.233,36.41,Bearish,10
ABCLTD,2016-10-21,38.809,36.45,Bullish,11
ABCLTD,2016-11-18,35.212,36.57,Bearish,12
ABCLTD,2017-01-20,40.81,36.48,Bullish,13
XYZLTD,2018-07-20,1171.31,1172.21,Bearish,1
XYZLTD,2018-08-03,1209.99,1177.21,Bullish,2
Expected output
symbol,timestamp,close,bb_dt,bb_close
ABCLTD,2015-01-16,43.25,2015-02-28,29.891
ABCLTD,2015-03-05,35.562,2015-03-27,34.23
ABCLTD,2015-03-31,35.833,2015-04-30,34.1
ABCLTD,2015-05-08,35.029,2015-05-15,33.609
ABCLTD,2016-08-12,38.719,2016-10-14,36.233
ABCLTD,2016-10-21,38.809,2016-11-18,35.212
ABCLTD,2017-01-20,40.81,Null,Null
XYZLTD,2018-07-20,1171.31,2018-08-03,1209.99
The row number depends on how the data is retrieved. Row 1 ordered by date ascending is different from row 1 ordered by date descending.
You need to forget about the row number. If you want a new record between rows 1 and 2, just make sure the value you insert is between the values in the column you order by.
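If the goal is only to pull the next row's date and close onto the current row when selecting (rather than at insert time), MySQL 8+ window functions can do it. A sketch, where signals is a hypothetical name for the table holding the input above:
SELECT symbol, `timestamp`, `close`, bb_dt, bb_close
FROM (
    SELECT symbol, `timestamp`, `close`,
           ROW_NUMBER() OVER (PARTITION BY symbol ORDER BY `timestamp`) AS rn,
           LEAD(`timestamp`) OVER (PARTITION BY symbol ORDER BY `timestamp`) AS bb_dt,
           LEAD(`close`)     OVER (PARTITION BY symbol ORDER BY `timestamp`) AS bb_close
    FROM signals
) numbered
WHERE rn % 2 = 1;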

psycopg2 - fastest way to insert rows to multiple tables?

I am currently using this code:
while True:
    col_num = 0
    for table in table_names:
        cursor.execute("INSERT INTO public.{0} VALUES(CURRENT_TIMESTAMP, 999999)".format(table))
        cursor.connection.commit()
        col_num += 1
    row_num += 1
And this is pretty slow. One of the problems I see is that it's committing multiple times, once per table. If I could commit for all tables at once, I think that would improve performance. How should I go about this?
You can commit outside the loop:
for table in table_names:
    cursor.execute("INSERT INTO public.{0} VALUES(CURRENT_TIMESTAMP, 999999)".format(table))
cursor.connection.commit()
However, there is a side effect: the first column (the timestamp) will have a different value per table when each insert is committed separately, but the same value for all of them when they are committed together. This is because CURRENT_TIMESTAMP gives the time of the start of the transaction.
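If the per-statement round trips matter as well, the inserts can be built with psycopg2's sql module and sent in a single execute() call (one round trip, one transaction); a sketch, reusing the cursor and table_names from the question:
from psycopg2 import sql

stmts = [
    sql.SQL("INSERT INTO public.{} VALUES (CURRENT_TIMESTAMP, 999999)").format(sql.Identifier(table))
    for table in table_names
]
cursor.execute(sql.SQL("; ").join(stmts))  # psycopg2 allows several statements in one execute
cursor.connection.commit()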
