I'm creating a Python program that connects to MySQL.
I need to check whether a table contains the number 1 to show that it has connected successfully. This is my code thus far:
xcnx.execute('CREATE TABLE settings(status INT(1) NOT NULL)')
xcnx.execute('INSERT INTO settings(status) VALUES(1)')
cnx.commit()
sqlq = "SELECT * FROM settings WHERE status = '1'"
xcnx.execute(sqlq)
results = xcnx.fetchall()
if results == '1':
    print 'yep its connected'
else:
    print 'nope not connected'
What have I missed? I'm an SQL noob. Thanks, guys.
I believe the most efficient "does it exist" query is just to do a count:
sqlq = "SELECT COUNT(1) FROM settings WHERE status = '1'"
xcnx.execute(sqlq)
if xcnx.fetchone()[0]:
    # exists
Instead of asking the database to perform any count operations on fields or rows, you are just asking it to return a 1 or 0 if the result produces any matches. This is much more efficient than returning actual records and counting them client side, because it saves serialization and deserialization on both sides, plus the data transfer.
In [22]: c.execute("select count(1) from settings where status = 1")
Out[22]: 1L # rows
In [23]: c.fetchone()[0]
Out[23]: 1L # count found a match
In [24]: c.execute("select count(1) from settings where status = 2")
Out[24]: 1L # rows
In [25]: c.fetchone()[0]
Out[25]: 0L # count did not find a match
count(*) is going to be the same as count(1). In your case, because you are just creating a new table, it is going to show 1 result. If you had 10,000 matches it would be 10000. But all you care about in your test is whether it is NOT 0, so you can perform a bool truth test.
Update
Actually, it is even faster to just use the rowcount, and not even fetch results:
In [15]: if c.execute("select (1) from settings where status = 1 limit 1"):
             print True
True

In [16]: if c.execute("select (1) from settings where status = 10 limit 1"):
             print True

In [17]:
This is also how Django's ORM implements queryObject.exists().
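For completeness, here is a small helper wrapping the rowcount trick above; it assumes the MySQLdb driver (where execute() returns the number of matched rows, as in the session above) and the settings table from the question:

def row_exists(cursor, status):
    # LIMIT 1 makes this a cheap existence check; MySQLdb's execute()
    # returns the number of rows the SELECT matched.
    return bool(cursor.execute(
        "SELECT 1 FROM settings WHERE status = %s LIMIT 1", (status,)))

Calling row_exists(xcnx, 1) then gives True exactly when a matching row is present.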
If all you want to do is check if you have successfully established a connection then why are you trying to create a table, insert a row, and then retrieve data from it?
You could simply do the following...
sqlq = "SELECT * FROM settings WHERE status = '1'"
xcnx.execute(sqlq)
results = xcnx.fetchone()
if results == '1':
    print 'yep its connected'
else:
    print 'nope not connected'
In fact, if your program has not thrown an exception so far, that already indicates that you have established the connection successfully. (Do check the code above; I'm not sure whether fetchone will return a tuple, a string, or an int in this case.)
By the way, if for some reason you do need to create the table, I would suggest dropping it before you exit so that your program runs successfully the second time.
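For example, a minimal cleanup at the end of the script (reusing the cursor and connection names from the question) could be:

xcnx.execute('DROP TABLE IF EXISTS settings')
cnx.commit()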
When you run results = xcnx.fetchall(), the return value is a sequence of tuples that contain the row values. Therefore, when you check if results == '1', you are comparing a sequence to a constant, which will return False. In your case, a single row will be returned, and its value will be 0 when there is no match, so you could try this:
results = xcnx.fetchall()
# Get the value of the returned row, which will be 0 with a non-match
if results[0][0]:
    print 'yep its connected'
else:
    print 'nope not connected'
You could alternatively use a DictCursor (when creating the cursor, use .cursor(MySQLdb.cursors.DictCursor)), which would make things a bit easier to interpret code-wise, but the result is the same:
if results[0]['COUNT(*)']:
    # Continues...
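For reference, a rough sketch of creating and using that DictCursor with MySQLdb (the connection parameters are placeholders):

import MySQLdb
import MySQLdb.cursors

# placeholder credentials - substitute your own
cnx = MySQLdb.connect(host='localhost', user='user', passwd='secret', db='test')
xcnx = cnx.cursor(MySQLdb.cursors.DictCursor)
xcnx.execute("SELECT COUNT(*) FROM settings WHERE status = 1")
results = xcnx.fetchall()
if results[0]['COUNT(*)']:
    print('yep its connected')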
Also, not a big deal in this case, but you are comparing an integer value to a string. MySQL will do the type conversion, but you could use SELECT COUNT(*) FROM settings WHERE status = 1 and save a (very small) bit of processing.
I recently improved my efficiency by skipping the SELECT query: add a primary/unique index on the column that must stay unique, and then just insert. MySQL will only add the row if it doesn't already exist.
So instead of 2 statements:
Query MySQL for existence
Query MySQL to insert the data
just do 1, which only succeeds if the value is unique:
Query MySQL to insert the data
1 query is better than 2.
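The exact statement isn't named above; one way to get this "insert only if new" behaviour in MySQL is INSERT IGNORE against a column that has a UNIQUE (or primary key) index. The table and column below are just illustrative:

# skips the insert silently when the unique index would be violated
cursor.execute("INSERT IGNORE INTO settings (status) VALUES (%s)", (1,))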
Related
This is the code:
import mysql.connector as mariadb
import time
import random
mariadb_connection = mariadb.connect(user='root', password='xxx', database='UniqueCode',
port='3306', host='192.168.xx.xx')
cursor = mariadb_connection.cursor()
FullChar = 'CFLMNPRTVWXYK123456789'  # just the characters I need
total = 5000
count = 10
limmit = 0
count = int(count)
entries = []
uq_id = 0
total_all = 0
def inputDatabase(data):
    try:
        maria_insert_query = "INSERT INTO SN_UNIQUE_CODE(unique_code) VALUES (%s)"
        cursor.executemany(maria_insert_query, data)
        mariadb_connection.commit()
        print("Committing " + str(total) + " entries..")
    except Exception:
        maria_alter_query = "ALTER TABLE UniqueCode.SN_UNIQUE_CODE AUTO_INCREMENT=0"
        cursor.execute(maria_alter_query)
        print("UniqueCode Increment Altered")

while (0 < 1):
    for i in range(total):
        unique_code = ''.join(random.sample(FullChar, count))
        entry = (unique_code,)  # one-element tuple, as executemany expects
        entries.append(entry)

    inputDatabase(entries)
    #print(entries)
    entries.clear()
    time.sleep(0.1)
Output:
Id unique_code
1 N5LXK2V7CT
2 7C4W3Y8219
3 XR9M6V31K7
The code above runs well and generating the codes is fast. The problem I face is when the unique_code values stored in the tuples are inserted into MariaDB: to avoid data redundancy, I added a unique index on the unique_code column.
The more data that has already been inserted, the more checking each new unique_code requires, which makes inserting into the database slower and slower.
Given that problem, how can I insert 1 billion rows into the database in a short time?
Note: the process slows down noticeably once there are more than 150 million unique_codes in the database.
Thanks a lot.
The quick way
If you want to insert many records into the database, you can bulk-insert them as you do now.
I would recommend disabling the keys on the table before inserting and skipping the unique check. Otherwise you will have a bad time, as @CryptoFool mentioned.
ALTER TABLE SN_UNIQUE_CODE DISABLE KEYS;
<run code>
ALTER TABLE SN_UNIQUE_CODE ENABLE KEYS;
If I were you, I would try to play around with the maximum you can insert at once, as sketched below. Try changing the max_allowed_packet variable in MariaDB if necessary.
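As a sketch of "playing around with the maximum", you could batch the entries list from the question into fixed-size chunks so that each executemany() call stays under the packet limit; the batch size here is an arbitrary starting point to tune:

insert_query = "INSERT INTO SN_UNIQUE_CODE(unique_code) VALUES (%s)"
batch_size = 10000  # tune this against max_allowed_packet
for i in range(0, len(entries), batch_size):
    cursor.executemany(insert_query, entries[i:i + batch_size])
mariadb_connection.commit()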
The table
It seems like your unique_code could be a natural key. Therefore you could remove the auto-incremented column; it won't bring much performance on its own, but it is a start.
Hey, I need some help. I'm making an order system.
Here is what I need it to do: read the value in the rows, so if the order is complete the value is 2 and it prints out that the order is complete, and if the order is still active the value should be 1 and it prints out that the order is still active.
Long story short, I need help retrieving the number and determining whether the order is active or complete.
Here is what I have:
sqx = "SELECT Customer_Status FROM Customer WHERE Username = 'Test' AND order_Status ='1'"
cursor.execute(sqx)
result = cursor.fetchall()
if result == '2':
    print('Order is complete')
else:
    print('order is active')
At its core, you want to use fetchone() and pull the value out of the returned tuple. Also, you really should start using parameter substitution to prevent any security issues from appearing in the future.
# It's always a good idea to use parameter substitution to prevent
# security issues. There aren't any here, but this is generally
# good practice.
sqx = "SELECT Customer_Status FROM Customer WHERE Username = ? AND order_Status = ?"
cursor.execute(sqx, ('Test', '1'))

# We only want to look at one row, so use fetchone instead of
# fetchall. And in that row, go ahead and pull the first
# value out of the tuple that's returned.
result = cursor.fetchone()

if result is None:
    # Either the user or order status is invalid
    print("No match for query")
else:
    result = result[0]
    # And note here: is the database storing '1' and '2' as strings instead of integers?
    # If it is, this is fine, but if this field is an integer,
    # use "result == 2" instead of "result == '2'".
    if result == '2':
        print('Order is complete')
    else:
        print('order is active')
conn = database_connect()
if conn is None:
    return None
cur = conn.cursor()
try:
    # Try executing the SQL and get from the database
    sql = """SELECT *
             FROM user
             WHERE user_id = %s AND password = %s"""
    cur.execute(sql, (employee_id, password))
    r = cur.fetchone()   # Fetch the first row
    rr = cur.fetchall()
    cur.close()    # Close the cursor
    conn.close()   # Close the connection to the db
except:
    # If there were any errors, return a NULL row printing an error to the debug
    print("Error Invalid Login")
    cur.close()    # Close the cursor
    conn.close()   # Close the connection to the db
    return None

user_info = []
if rr is None:
    print("worry")
    return []
for n in rr:
    user_info.append(n)

test = {
    'info1': user_info[0],
    'info2': user_info[1],
    'info3': user_info[2],
    'info4': user_info[3],
}
return test
Here is my code. It first implements the login function and then gets the user information, but there is an IndexError: list index out of range. How do I fix this?
Here:
r = cur.fetchone()# Fetch the first row
rr = cur.fetchall()
The call to fetchone() will consume the first row, so rr will contain n-1 rows.
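To make that concrete (reusing the names from the question, and assuming the query matches exactly one row):

cur.execute(sql, (employee_id, password))
r = cur.fetchone()    # consumes the single matching row
rr = cur.fetchall()   # rr is now an empty list
print(rr)             # prints []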
Also, if your database allows duplicated user_id values then you have a serious db design issue - whether user_id is (supposed to be) the primary key or the "username" (login name), it should really be unique. If it isn't, you want to change your schema; and if it is indeed unique, then obviously your query can only return (at most!) one single row, in which case rr is guaranteed to always be empty (since you consumed the first row with the fetchone() call).
As a side note, here are some possible improvements:
This:
for n in rr:
    user_info.append(n)
is completely useless - you are just building a shallow copy of rr, so just work directly with rr instead.
Then do not assume you will always have four rows in your query result, so at least build your test dict dynamically:
test = {}
# enumerate(seq, 1) yields an (index, value) tuple
# for each value in `seq`, starting the index at 1.
for num, row in enumerate(rr, 1):
    key = "info{}".format(num)
    test[key] = row
or more succinctly:
test = {"info{}".format(num):row for num, row in enumerate(rr, 1)}
Note that this dict doesn't add much value compared to rr - you have "info" keys instead of numeric indexes and that's about all, so you could just directly use the rr list instead.
Last but not least, your try/except clause is too broad (too much code in the try block), the except clause will eat very valuable debugging information (the exact error message and the full traceback), and even worse, the error message you display assumes way too much about what really happened. Actually, you probably shouldn't even have an except clause here (at least not until you really know which errors can happen here AND can be properly handled at this point), so it's better to let the error propagate (so you get the full error message and traceback) and use a finally clause to close your connection instead:
sql = """SELECT *
FROM user
WHERE user_id**strong text** =%s AND password =%s"""
try:
cur.execute(sql, (employee_id, password))
r = cur.fetchone()# Fetch the first row
rr = cur.fetchall()
finally:
# this will always be executed whatever happens in the try block
cur.close() # Close the cursor
conn.close() # Close the connection to the db
I want to force the for loop to always start with the same row, so that the order of all the rows of the dataset stays the same.
In other words, when I search for the index of the row with id=15, I always get a different result.
Here is my code:
import psycopg2 as p

conn = p.connect("dbname=Chicago user=postgres password=admin host=localhost")
cur = conn.cursor()
cur.execute("select * from chicago_2po_4pgr")
nbrows = cur.rowcount
rows = cur.fetchall()
for r in range(0, nbrows):
    id = rows[r][0]
    if id == 15:
        print(r, rows[r][0], rows[r][1])
The result of the first run is:
`56153 15 4271616`
(r is 56153)
The result of the second run (of the same code) is:
`126523 15 4271616`
(r is 126523)
Any suggestion on how I can edit my code so that the rows always come back in the same order?
Add an ORDER BY clause. SQL queries without ORDER BY can return results in any arbitrary order.
If the data being queried can be changed (insert, update or delete) then the record at position 15 can change. You could query a specific key value, or grab the result set and index it by a key, to get a consistent result.
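For example, reusing the cursor from the question (and assuming id is the first column, as the output suggests), either of these gives a stable result:

cur.execute("select * from chicago_2po_4pgr order by id")
# or query only the row you actually care about:
cur.execute("select * from chicago_2po_4pgr where id = %s", (15,))
row = cur.fetchone()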
query = "SELECT serialno from registeredpcs where ipaddress = "
usercheck = query + "'%s'" %fromIP
#print("query"+"-"+usercheck)
print(usercheck)
rs = cursor.execute(usercheck)
print(rs)
row = rs
#print(row)
#rs = cursor.rowcount()
if int(row) == 1:
    query = "SELECT report1 from registeredpcs where serialno = "
    firstreport = query + "'%s'" % rs
    result = cursor.execute(firstreport)
    print(result)
elif int(row) == 0:
    query_new = "SELECT * from registeredpcs"
    cursor.execute(query_new)
    newrow = cursor.rowcount()+1
    print(newrow)
What I am trying to do here is fetch the serialno values from the db when they match a certain ipaddress. This query is working fine. As it should be, the query result set rs is 0. Now I am trying to use that value and do something else in the if/else construct. Basically I am trying to check for unique values in the db based on the ipaddress value. But I am getting this error:
error: uncaptured python exception, closing channel smtpd.SMTPChannel connected
192.168.1.2:3630 at 0x2e47c10 (**class 'TypeError':'int' object is not
callable** [C:\Python34\lib\asyncore.py|read|83]
[C:\Python34\lib\asyncore.py|handle_read_event|442]
[C:\Python34\lib\asynchat.py|handle_read|171]
[C:\Python34\lib\smtpd.py|found_terminator|342] [C:/Users/Dev-
P/PycharmProjects/CR Server Local/LRS|process_message|43])
I know I am making some very basic mistake. I think it's the part in bold that's causing the error, but I just can't put my finger on it. I tried using the rowcount() method, but it didn't help.
rowcount is an attribute, not a method; you shouldn't call it.
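In other words, with the names from the question:

rs = cursor.execute(usercheck)
print(cursor.rowcount)        # attribute access - no parentheses
newrow = cursor.rowcount + 1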
"I know I am making some very basic mistake" : well, Daniel Roseman alreay adressed the cause of your main error, but there are a couple other mistakes in your code:
query = "SELECT serialno from registeredpcs where ipaddress = "
usercheck = query + "'%s'" % fromIP
rs = cursor.execute(usercheck)
This part is hard to read (you're using both string concatenation and string formatting for no good reason), brittle (try this with fromIP = "'foo'"), and very, very unsafe. You want to use parameterized queries instead, i.e.:
# nb check your exact db-api module for the correct placeholder,
# MySQLdb uses '%s' but some others use '?' instead
query = "SELECT serialno from registeredpcs where ipaddress=%s"
params = [fromIP,]
rs = cursor.execute(query, params)
"As it should the query result set rs is 0"
This is actually plain wrong. cursor.execute() returns the number of rows affected (selected, created, updated, deleted) by the query. The "result set" is really the cursor itself. You can fetch results using cursor.fetchone(), cursor.fetchall(), or more simply (and more efficiently if you want to work on the whole result set with constant memory use) by iterating over the cursor, i.e.:
for row in cursor:
    print(row)
Let's continue with your code:
row = rs
if int(row) == 1:
    # ...
elif int(row) == 0:
    # ...
The first line is useless - it only makes row an alias of rs - and badly named: it's not a "row" (one line of results from your query), it's an int. Since it's already an int, converting it to int is also useless. And finally, unless ipaddress is a unique key in your table, your query might return more than one row.
If what you want is the effective value(s) for the serialno field for records matching fromIP, you have to fetch the row(s):
row = cursor.fetchone() # first row, as a tuple
then get the value, which in this case will be the first item in row:
serialno = row[0]
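Putting those pieces together, a minimal sketch (still assuming the %s placeholder style discussed above) could look like:

cursor.execute("SELECT serialno from registeredpcs where ipaddress = %s", [fromIP])
row = cursor.fetchone()
if row is not None:
    serialno = row[0]
else:
    serialno = None   # no machine registered for this IP address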