This question already has answers here:
Can SQLite support multiple users?
(3 answers)
Closed 7 years ago.
I am trying to write two Python processes that need access to the same SQLite database table. One process updates the table by inserting new data, and the other retrieves information from the same table. However, when the second process runs its select query, the database is already locked and I get
"OperationalError: database is locked". The code for both processes is below. I'd appreciate any help.
process : 1
------------
import sqlite3
import time

while True:
    print "Updating"
    try:
        conn = sqlite3.connect('test.db')
        c = conn.cursor()
        c.execute('PRAGMA journal_mode = WAL')
        # insertdata is prepared elsewhere
        c.executemany('INSERT INTO test_table VALUES (?,?,?,?,?)', insertdata)
        conn.commit()
        conn.close()
    except:
        pass
    time.sleep(60)
process : 2
------------
import sqlite3
import datetime

conn = sqlite3.connect('test.db')
c = conn.cursor()
c.execute("select * from test_table where id='{}' and date = '{}' order by time desc".format(recv_data, datetime.datetime.now().date().isoformat()))
data = c.fetchall()
conn.close()
The link given above only says that multiple connections to SQLite are possible. However, it doesn't explain how to do it.
The database stays locked for a short time (some milliseconds) after any write, so you can try a time.sleep(1) in process 2 to avoid this collision, at least the first time.
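As a sketch only (query_with_retry is a made-up helper, not part of either snippet above): the timeout argument of sqlite3.connect makes the reader wait for the writer's lock, and a small retry loop covers the remaining collisions.

import sqlite3
import time

def query_with_retry(sql, params=(), retries=5, delay=1):
    # timeout=10 makes SQLite wait up to 10 seconds for the lock instead of
    # raising "database is locked" immediately; the loop retries on top of that.
    last_error = None
    for attempt in range(retries):
        try:
            conn = sqlite3.connect('test.db', timeout=10)
            try:
                c = conn.cursor()
                c.execute(sql, params)
                return c.fetchall()
            finally:
                conn.close()
        except sqlite3.OperationalError as e:
            last_error = e
            time.sleep(delay)
    raise last_error

Passing the id and date as ? parameters is also safer than building the query with format().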
This question already has an answer here:
Python sqlite3 parameterized drop table
(1 answer)
Closed 5 years ago.
I'm trying to use a variable for a table name. I get the error "... near ''myTable'' at line 1".
I must not be escaping this right. The doubled '' in the error seems to be a clue, but I don't get it.
import MySQLdb

db = MySQLdb.connect("localhost", "user", "pw", "database")
table = "myTable"

def geno_order(db, table):
    cursor = db.cursor()  # prepare a cursor object using cursor() method
    sql = "SELECT * FROM %s"
    cursor.execute(sql, table)
    results = cursor.fetchall()
You can't use a parameter for the table name in the execute call. You'll need to use normal Python string interpolation for that:
sql = "SELECT * FROM %s" % table
cursor.execute(sql)
Naturally, you'll need to be extra careful if the table name is coming from user input. To mitigate SQL injection, validate the table name against a list of valid names.
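A minimal sketch of that validation, assuming a hypothetical VALID_TABLES whitelist (the table names in it are placeholders):

# Hypothetical whitelist; replace with the tables your application actually uses.
VALID_TABLES = {"myTable", "otherTable"}

def geno_order(db, table):
    if table not in VALID_TABLES:
        raise ValueError("unexpected table name: %r" % table)
    cursor = db.cursor()
    # The interpolation is safe only because the name was checked above;
    # column values should still be passed as execute() parameters.
    sql = "SELECT * FROM %s" % table
    cursor.execute(sql)
    return cursor.fetchall()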
This question already has an answer here:
Why isn't the 'insert' function adding rows using MySQLdb?
(1 answer)
Closed 5 years ago.
I want to insert data into a MySQL database using Python. Here is my code:
import mysql.connector

def insert_db(time_spent):
    global user_id
    global total_time_spent
    user = user_id
    project = get_project_id()
    timesingle = time_spent
    timeall = total_time_spent
    # connect to the database
    connect_db = mysql.connector.connect(user='root', password='', host='127.0.0.1', database='python_test')
    cursor = connect_db.cursor()
    # insert new row into the table
    cursor.execute("INSERT INTO time (user_id,project_id,spent_time,total_time) VALUES (%s,%s,%s,%s)", (user, project, timesingle, timeall))
    if cursor.lastrowid:
        print('last insert id', cursor.lastrowid)
    else:
        print('last insert id not found')
    # close the cursor and database connection
    cursor.close()
    connect_db.close()
The problem is that when I execute this function, even though 'last insert id' shows an id, the data is not inserted into the database. I checked all the variables in this function and they are not empty.
How can I fix this issue?
Every time you modify data in your database, you need to commit the changes with:
connect_db.commit()
This ensures that you only save changes where no errors happened.
You need to commit to mark the end of a successful transaction.
connect_db.commit()
For further information: https://dev.mysql.com/doc/connector-python/en/connector-python-example-cursor-transaction.html
Please ensure that you use connect_db.commit() after every insert/update or other such statements.
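A minimal sketch of where the commit could go, based on the question's code (insert_time_row is just a placeholder name; the connection settings and INSERT are copied from the question):

import mysql.connector

def insert_time_row(user, project, timesingle, timeall):
    connect_db = mysql.connector.connect(user='root', password='', host='127.0.0.1', database='python_test')
    cursor = connect_db.cursor()
    cursor.execute("INSERT INTO time (user_id,project_id,spent_time,total_time) VALUES (%s,%s,%s,%s)",
                   (user, project, timesingle, timeall))
    connect_db.commit()  # without this, the INSERT is discarded when the connection closes
    print('last insert id', cursor.lastrowid)
    cursor.close()
    connect_db.close()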
I'm trying to do multiple inserts on a MySQL db like this:
p = 1
orglist = buildjson(buildorgs(p, p))
while (orglist is not None):
    for org in orglist:
        sid = org['sid']
        try:
            sql = "INSERT INTO `Orgs` (`sid`) VALUES (\"{0}\");".format(sid)
            cursor.execute(sql)
            print("Added {0}".format(org['title']))
        except Exception as bug:
            print(bug)
        conn.commit()
        conn.close()
    p += 1
    orglist = buildjson(buildorgs(p, p))
However, I keep getting a bunch of 2055: Lost connection to MySQL server at 'localhost:3306', system error: 9 Bad file descriptor errors.
How can I correctly do multiple inserts at once so I don't have to commit after every single insert? Also, should I only do conn.close() after the while loop, or is it better to keep it where it is?
This may be related to this question and/or this question. A couple of ideas from the answers to those questions which you might try:
Try closing the cursor before closing the connection (cursor.close() before conn.close(); I don't know if you should close the cursor before or after conn.commit(), so try both.)
If you're using the Oracle MySQL connector, try using PyMySQL instead; several people said that that fixed this problem for them.
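A minimal sketch of that teardown order, combined with one commit per batch instead of one per row (buildjson, buildorgs and the Orgs table come from the question; PyMySQL and the connection settings are assumptions):

import pymysql

conn = pymysql.connect(host='localhost', user='username', password='password', database='database')
cursor = conn.cursor()

p = 1
orglist = buildjson(buildorgs(p, p))
while orglist is not None:
    for org in orglist:
        try:
            # Parameter binding avoids hand-quoting the sid value.
            cursor.execute("INSERT INTO `Orgs` (`sid`) VALUES (%s)", (org['sid'],))
        except Exception as bug:
            print(bug)
    conn.commit()  # one commit per batch of orgs
    p += 1
    orglist = buildjson(buildorgs(p, p))

cursor.close()  # close the cursor first...
conn.close()    # ...then the connection, only after all batches are done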
This question already has answers here:
Python's MySqlDB not getting updated row
(2 answers)
MySQL-python connection does not see changes to database made on another connection even after change is committed
(2 answers)
Closed 7 years ago.
I'm using MySQLdb on two computers, both connected to the same MySQL server. The server has a database named dbtest with a table test containing 50 rows, defined as follows:
CREATE TABLE test (
    id int PRIMARY KEY AUTO_INCREMENT,
    val varchar(255)
)
I use the interpreter to set up the problem:
# Run this on the 2 computers Python interpreter :
import MySQLdb
connection = MySQLdb.connect(host="<database hostname>", user="root", passwd="<root password>")
cursor = connection.cursor()
cursor.execute("USE dbtest")
print cursor.execute("SELECT * FROM test") # will print 50.
All right. The two connections are set up. Now let's add a row with the first client:
# Run this on computer 1 :
cursor.execute("INSERT INTO test (val) VALUES ('gosh stackoverflow is so good !')")
connection.commit()
print cursor.execute("SELECT * FROM test") # will print 51.
Sounds good; Wireshark shows the 51 rows returned by the database. But on computer 2...
# Run this on computer 2 :
print cursor.execute("SELECT * FROM test") # will print 50, expected 51 because of the insert.
Why do I get 50 rows here? How can I avoid this issue?
Sorry if it's a duplicate, I probably didn't find the right keywords.
EDIT: If I reset the connection on computer 2, I get the 51 rows.
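A sketch based on the edit's observation, assuming InnoDB's default REPEATABLE READ isolation: the connection on computer 2 is still inside the transaction that began with its first SELECT, so ending that transaction shows the new row without reconnecting.

# Run this on computer 2 instead of resetting the connection:
connection.commit()  # or connection.rollback(); either ends the old read snapshot
print cursor.execute("SELECT * FROM test")  # should now print 51

If it fits your use case, enabling autocommit on the reading connection (connection.autocommit(True) in MySQLdb) avoids holding a long-lived snapshot in the first place.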
I have a table called "unprocessed" where I want to read 2000 rows, send them over HTTP to another server and then insert the rows into a "processed" table and remove them from the "unprocessed" table.
My python code roughly looks like this:
import sys
import MySQLdb

db = MySQLdb.connect("localhost", "username", "password", "database")
# prepare a cursor object using cursor() method
cursor = db.cursor()
# Select all the records not yet sent
sql = "SELECT * from unprocessed where SupplierIDToUse = 'supplier1' limit 0, 2000"
cursor.execute(sql)
results = cursor.fetchall()
for row in results:
    id = row[0]
    # <code is here for sending to other server - it takes about 1/2 a second>
    if sentcorrectly == "1":
        sql = "INSERT into processed (id, dateprocessed) VALUES ('%s', NOW())" % (id)
        try:
            inserted = cursor.execute(sql)
        except:
            print "Failed to insert"
        if inserted:
            print "Inserted"
            sql = "DELETE from unprocessed where id = '%s'" % (id)
            try:
                deleted = cursor.execute(sql)
            except:
                print "Failed to delete id from the unprocessed table, even though it was saved in the processed table."
db.close()
sys.exit(0)
I want to be able to run this code concurrently so that I can increase the speed of sending these records to the other server over HTTP.
At the moment, if I try to run the code concurrently, I get multiple copies of the same data sent to the other server and saved into the "processed" table, as the select query is fetching the same ids in multiple instances of the code.
How can I lock the records when I select them and then process each record as a row before moving them to the "processed" table?
The table was MyISAM but I've converted it to InnoDB today, as I realise there's probably a better way of locking the records with InnoDB.
Based on your comment reply:
One of the two solutions would be a client-side Python master process that collects the record IDs for all 2000 records and then splits them into chunks to be processed by sub-workers.
Short version: your choices are to delegate the work or to rely on a possibly tricky record-locking mechanism. I would recommend the former approach, as it can scale up with the aid of a message queue.
The delegation logic would use multiprocessing:
import multiprocessing
records = get_all_unprocessed_ids()
pool = multiprocessing.Pool(5) #create 5 workers
pool.map(process_records, records)
That would create 2000 tasks and run 5 at a time. Alternatively, you can split records into chunks, using a solution outlined here:
How do you split a list into evenly sized chunks?
pool.map(process_records, chunks(records, 100))
which would create 20 lists of 100 records, processed in batches of 5.
Edit:
Fixed a syntax error - the signature is map(func, iterable[, chunksize]) and I had left out the argument for func.
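A minimal sketch of the delegation approach, assuming a hypothetical chunks helper and process_records worker; get_all_unprocessed_ids is the placeholder from the snippet above, and the connection settings are copied from the question. Each worker opens its own connection, since a MySQLdb connection can't be shared across processes.

import multiprocessing
import MySQLdb

def chunks(lst, n):
    # Split lst into lists of at most n items.
    return [lst[i:i + n] for i in range(0, len(lst), n)]

def process_records(id_chunk):
    # Each worker opens its own connection.
    db = MySQLdb.connect("localhost", "username", "password", "database")
    cursor = db.cursor()
    for record_id in id_chunk:
        # <send the record over HTTP here, as in the question>
        cursor.execute("INSERT INTO processed (id, dateprocessed) VALUES (%s, NOW())", (record_id,))
        cursor.execute("DELETE FROM unprocessed WHERE id = %s", (record_id,))
    db.commit()
    cursor.close()
    db.close()

if __name__ == '__main__':
    record_ids = get_all_unprocessed_ids()  # collected once by the master process
    pool = multiprocessing.Pool(5)  # 5 workers
    pool.map(process_records, chunks(record_ids, 100))
    pool.close()
    pool.join()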