Python SQLite3 MAC address with colons lookup

I'm busy with a little project that keeps track of MAC addresses on certain networks. Everything works great except when I do a lookup in the DB (SQLite3) for the MAC address captured by pcapy in Python 2.7.
Here's the function in the code:
import sqlite3 as lite

def db_check(_loc, _mac):
    con = lite.connect('database.db')
    con.text_factory = str
    cur = con.cursor()
    cur.execute("SELECT * FROM " + _loc + " WHERE mac_addr=?", (_mac,))
    # _loc = table name
    # _mac = variable holding the MAC address
    # mac_addr = column name in the SQLite3 table
    # MAC address format: xx:xx:xx:xx:xx:xx
    _data = cur.fetchall()
    if len(_data) == 0:
        print('There is no mac %s' % _mac)
    else:
        print('Component %s found with rowids %s' % (_mac, ','.join(map(str, zip(*_data)[0]))))
I want to grab the row number of the row where the MAC address was seen. But when I print the fetchall() result I only get [] as the output.
When I run the same SQL query in the sqlite3 shell I get the whole row as expected.
Can anyone provide some guidance, please?

OK, I managed to get the result I wanted:
SELECT mac_addr, timestamp, (SELECT COUNT(*) FROM mytblname AS t2 WHERE t2.mac_addr <= t1.mac_addr) AS row_Num FROM mytblname AS t1 ORDER BY timestamp, mac_addr;
This gave me the row number required.
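For reference, a minimal sketch of running that query through the same sqlite3 cursor (the table name mytblname comes from the query above; everything else is standard sqlite3 usage):
import sqlite3 as lite

con = lite.connect('database.db')
cur = con.cursor()
cur.execute("SELECT mac_addr, timestamp, "
            "(SELECT COUNT(*) FROM mytblname AS t2 WHERE t2.mac_addr <= t1.mac_addr) AS row_Num "
            "FROM mytblname AS t1 ORDER BY timestamp, mac_addr;")
for mac_addr, timestamp, row_num in cur.fetchall():
    print('%s seen at %s -> row %s' % (mac_addr, timestamp, row_num))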


Store MySQL column names in an array using the Python MySQL connector

I'm quite new to MySQL, as in manipulating the database itself. I have succeeded in storing new lines in a table, but my next endeavor will be a little more complex.
I'd like to fetch the column names from an existing MySQL database and save them to an array in Python. I'm using the official MySQL connector.
I'm thinking I can achieve this through information_schema.columns, but I have no idea how to build the query and store the information in an array. There will be around 100-200 columns, so performance might become an issue; I don't think it's wise to just iterate my way through them one by one.
The base code to insert data into MySQL using the connector is:
from mysql.connector import MySQLConnection, Error
# read_db_config() returns the connection settings (defined elsewhere)

def insert(data):
    query = "INSERT INTO templog(data) " \
            "VALUES(%s,%s,%s,%s,%s)"
    args = (data)
    try:
        db_config = read_db_config()
        conn = MySQLConnection(**db_config)
        cursor = conn.cursor()
        cursor.execute(query, args)
        #if cursor.lastrowid:
        #    print('last insert id', cursor.lastrowid)
        #else:
        #    print('last insert id not found')
        conn.commit()
        cursor.close()
        conn.close()
    except Error as error:
        print(error)
As said, the above code needs to be modified in order to get data from the SQL server. Thanks in advance!
Thanks for the help!
Got this as working code:
from sqlalchemy import create_engine

def GetNames(web_data, counter):
    # get all column names from the database
    connection = create_engine('mysql+pymysql://user:pwd@server:3306/db').connect()
    result = connection.execute('select * from price_usd')
    a = 0
    sql_matrix = [0 for x in range(counter + 1)]
    for v in result:
        while a == 0:
            for column, value in v.items():
                a = a + 1
                if a > 1:
                    sql_matrix[a] = str('{0}'.format(column))
This will get all column names from the existing sql database
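If you'd rather stay with the official connector from the question, here is a minimal sketch using cursor.description, the standard PEP 249 field the driver populates after a SELECT (the table name price_usd is taken from the code above, and read_db_config is the same config helper; both are assumptions):
from mysql.connector import MySQLConnection

def get_column_names(table='price_usd'):
    conn = MySQLConnection(**read_db_config())
    cursor = conn.cursor()
    # LIMIT 0 returns no rows but still populates the result metadata
    cursor.execute("SELECT * FROM {0} LIMIT 0".format(table))
    names = [desc[0] for desc in cursor.description]  # first field of each entry is the column name
    cursor.close()
    conn.close()
    return names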

Real-time Python app that works on a database

I am using PostgreSQL with Python, and rows are added to the database regularly. At present, the Python program does not know when new data has been added (I used psycopg2 to read rows, but it reads to the end of the rows and stops). How can I have my Python program continually check whether new data has been added? Or can I have PostgreSQL trigger the Python code when a new row is added?
This is what I have currently:
import psycopg2

def get_data():
    try:
        connect = psycopg2.connect(database="yardqueue", user="postgres", password="abcd", host="localhost", port="5432")
    except:
        print "Could not open database"
    cur = connect.cursor()
    cur.execute("SELECT id,position FROM container")
    rows = cur.fetchall()
    for row in rows:
        print "ID = ", row[0]
        print "Position = ", row[1]
As you can see, when I run this it stops once the variable row reaches the last row.
EDIT: Is there a way I can keep my Python code running for a specified amount of time? If so, I can make it go through the database until I kill it.
If you want to check for new records, we can write (assuming there are no deletions in the container table):
from time import sleep

import psycopg2

IDLE_INTERVAL_IN_SECONDS = 2

def get_data():
    try:
        connect = psycopg2.connect(database="yardqueue", user="postgres",
                                   password="abcd", host="localhost",
                                   port="5432")
    except:
        print "Could not open database"
        # TODO: maybe we should raise new exception?
        # or leave default exception?
        return
    cur = connect.cursor()
    previous_rows_count = 0
    while True:
        cur.execute("SELECT id, position FROM container")
        rows_count = cur.rowcount
        if rows_count > previous_rows_count:
            rows = cur.fetchall()
            for row in rows:
                print "ID = ", row[0]
                print "Position = ", row[1]
            previous_rows_count = rows_count
        sleep(IDLE_INTERVAL_IN_SECONDS)
If we want to process only new records, we can add ordering by id and an offset, like:
from time import sleep

import psycopg2

IDLE_INTERVAL_IN_SECONDS = 2

def get_data():
    try:
        connect = psycopg2.connect(database="yardqueue", user="postgres",
                                   password="abcd", host="localhost",
                                   port="5432")
    except:
        # TODO: maybe we should raise new exception?
        # or leave default exception?
        print "Could not open database"
        return
    cur = connect.cursor()
    rows_count = 0
    while True:
        cur.execute("SELECT id, position FROM container "
                    # sorting records by id to get new records data
                    # assuming that "id" column values are increasing for new records
                    "ORDER BY id "
                    # skipping records that we have already processed
                    "OFFSET {offset}"
                    .format(offset=rows_count))
        new_rows_count = cur.rowcount
        if new_rows_count > 0:
            rows = cur.fetchall()
            for row in rows:
                print "ID = ", row[0]
                print "Position = ", row[1]
            # accumulate, so the OFFSET always points past everything processed
            rows_count += new_rows_count
        sleep(IDLE_INTERVAL_IN_SECONDS)
Unfortunately, a database has no notion of insertion order, so you as the designer must provide an explicit order. If you do not, the order of the rows you fetch (using a new cursor) may change at any time.
One possible way is to have a serial field in your table. PostgreSQL implements a serial field through a sequence, which guarantees that each newly inserted row gets a serial number greater than all currently existing ones. But:
there can be holes if a transaction requests a serial number and is then aborted
if multiple concurrent transactions insert rows, the serial values follow the order of the INSERT commands, not the order of the COMMIT commands, so race conditions can produce a wrong order; it is fine, though, if you have only one writer in the database (a minimal sketch of this approach follows below)
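A minimal sketch of that serial-field approach, reusing the connection settings and table from the question and assuming id is a SERIAL primary key (both are assumptions):
from time import sleep

import psycopg2

def poll_new_rows():
    connect = psycopg2.connect(database="yardqueue", user="postgres",
                               password="abcd", host="localhost", port="5432")
    cur = connect.cursor()
    last_id = 0
    while True:
        # fetch only rows inserted since the last pass, in serial (id) order
        cur.execute("SELECT id, position FROM container WHERE id > %s ORDER BY id",
                    (last_id,))
        for row in cur.fetchall():
            print "ID = ", row[0]
            print "Position = ", row[1]
            last_id = row[0]
        sleep(2)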
An alternative way is to use an insertion date field; the inserting application has to manage it explicitly, or you can use a trigger to set it transparently. PostgreSQL timestamps have microsecond precision, which means that many rows can share the same insertion date value if they are inserted at the same time. Your Python script should read the time before opening a cursor and fetch all rows with an insertion time greater than its last run time. But here again you must take care of race conditions...

Failing to fetch data from MySQLdb database

I am currently making a project for school, and the time for hitting the wall has come.
I am trying to fetch data from the USB port on a Raspberry Pi 3+. I have connected an Arduino Nano, and I am sending a string (the UID number, in decimal, of an RFID card) from it to the Pi via the USB port. Everything works fine here; I can print the string (ID) without a problem. I am comparing the ID from the card with the one in my database, and if I put in a static number (commented below in the code) it prints the data. However, if I try with the serial line, nothing happens; it seems like it doesn't fetch the data at all. The layout of my database is underneath, and the Python code as well.
Thanks in advance!
card_id | serial_no  | LastName | FirstName
1       | 2136106133 | Hansen   | Peter
2       | 117254270  | Larsen   | Thompson
#!/usr/bin/env python
import MySQLdb
import serial

ser = serial.Serial('/dev/ttyUSB0', 9600)

db = MySQLdb.connect(host="localhost",  # your host, usually localhost
                     user="root",       # your username
                     passwd="root",     # your password
                     db="RFID")         # name of the database

cur = db.cursor()

CardID = 0
LastName = ""
FirstName = ""

while True:
    CardID = ser.readline()
    print "pweasda"
    print CardID
    print "pewpew"
    # CardID = 117254270 - this works. The problem is that I have many RFID
    # cards/tags with different IDs, of course. With this statement it prints
    # everything correctly.
    cur.execute('SELECT * FROM cards WHERE serial_no=%s', (CardID))
    results = cur.fetchall()
    for row in results:
        FirstName = row[3]
        LastName = row[2]
        serial_no = row[1]
        card_id = row[0]
        # print the fetched results
        print "FirstName=%s,LastName=%s,serial_no=%s,card_id=%s" % \
              (FirstName, LastName, serial_no, card_id)
    db.commit()
    print "Data committed"
Output image (no errors): http://postimg.org/image/jf2doogrv/
A possible solution could be:
import sqlite3

conn = sqlite3.connect("users.db")  # path to your sqlite db file!
cursor = conn.cursor()

CardID = ser.readline().strip()
sql = "SELECT * FROM cards WHERE serial_no=?"
cursor.execute(sql, [CardID])
try:
    results = cursor.fetchall()[0]  # just fetching the first row; you can loop through all rows here!
    FirstName = results[3]
    LastName = results[2]
    serial_no = results[1]
    card_id = results[0]
    print "FirstName=%s,LastName=%s,serial_no=%s,card_id=%s" % \
          (FirstName, LastName, serial_no, card_id)
except IndexError as e:
    print 'CardID Not Exist' + str(e)
except Exception as e2:
    print(str(e2))
In the above code I am assuming the database is a SQLite DB; I have also handled the exceptions, so you can figure out the runtime error, if any.
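For the MySQLdb setup the question actually uses, the key part of that fix is stripping the trailing newline that ser.readline() returns before binding the value. A minimal sketch against the question's own connection and cursor (untested on the actual hardware):
CardID = ser.readline().strip()  # drop the trailing "\r\n" from the serial line
cur.execute("SELECT * FROM cards WHERE serial_no=%s", (CardID,))
for row in cur.fetchall():
    print "FirstName=%s,LastName=%s,serial_no=%s,card_id=%s" % \
          (row[3], row[2], row[1], row[0])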

Python not able to modify MySQL but user can

Sorry if this question is stupid; I am two days into learning Python.
I have been beating my head against a wall trying to understand why my Python script can run SELECT statements but not UPDATE or DELETE statements.
I believe this is a MySQL issue and not a Python issue, but I am no longer able to troubleshoot it myself.
pcheck.py
import re
import time
import json
import MySQLdb
import requests
from array import *

conn = MySQLdb.connect([redacted])
cur = conn.cursor()

sql1 = "SELECT pkey,pmeta FROM table1 WHERE proced = 0 LIMIT 1"
cur.execute(sql1)
row = cur.fetchone()
while row is not None:
    print "row is: ", row[0]
    rchk = [
        r"(SHA256|MD5)",
        r"(abc|def)"
    ]
    for trigger in rchk:
        regexp = re.compile(trigger)
        pval = row[1]
        if regexp.search(pval) is not None:
            print "matched on: ", row[0]
            sql2 = """INSERT INTO table2 (drule,dval,dmeta) VALUES('%s', '%s', '%s')"""
            try:
                args2 = (trigger, pval, row[1])
                cur.execute(sql2, args2)
                print(cur._last_executed)
            except UnicodeError:
                print "pass-uni"
            break
        else:
            pass
    sql3 = """UPDATE table1 SET proced=1 WHERE pkey=%s"""
    args3 = row[0]
    cur.execute(sql3, args3)
    print(cur._last_executed)
    row = cur.fetchone()

sql3 = """DELETE FROM table1 WHERE proced=1 AND last_update < (NOW() - INTERVAL 6 MINUTE)"""
cur.execute(sql3)
print(cur._last_executed)

cur.close()
conn.close()
print "Finished"
And the actual (and surprisingly expected) output:
OUTPUT
scrape#:~/python$ python pcheck.py
row is: 0GqQ0d6B
UPDATE table1 SET proced=1 WHERE pkey='0GqQ0d6B'
DELETE FROM table1 WHERE proced=1 AND last_update < (NOW() - INTERVAL 6 MINUTE)
Finished
However, the database is not being updated. I checked that the query was making it to MySQL:
MySQL Log
"2015-12-14 22:53:56","localhost []","110","0","Query","SELECT `pkey`,`pmeta` FROM `table1` WHERE `proced`=0 LIMIT 200"
"2015-12-14 22:53:57","localhost []","110","0","Query","UPDATE `table1` SET `proced`=1 WHERE `pkey`='0GqQ0d6B'"
"2015-12-14 22:53:57","localhost []","110","0","Query","DELETE FROM table1 WHERE proced=1 AND last_update < (NOW() - INTERVAL 6 MINUTE)"
However, the proced value for row 0GqQ0d6B is still NOT 1.
If I run the same queries via SQLyog (logged in as the same user), they work as expected.
These kinds of issues can be very frustrating. Are you sure there are no extra spaces here?
print "row is:*" + row[0] + "*"
Perhaps comment out the
for trigger in rchk:
section and sprinkle some print statements around?
As the commenter Bob Dylan was able to deduce, the connection needed to be committed after the changes.
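A minimal sketch of that fix (connection parameters are placeholders, since the question redacts them): MySQLdb disables autocommit by default, so UPDATE and DELETE statements run inside a transaction that is discarded unless the connection commits it.
import MySQLdb

conn = MySQLdb.connect(host="localhost", user="user", passwd="pass", db="db")  # placeholder credentials
cur = conn.cursor()
cur.execute("UPDATE table1 SET proced=1 WHERE pkey=%s", ("0GqQ0d6B",))
conn.commit()  # without this, the UPDATE never becomes visible outside this session
cur.close()
conn.close()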

Why can't I fetch SQL results in Python?

I have a very large table (374,870 rows), and when I run the following code, timestamps just ends up being a long int with the value 374870... I want to grab all the timestamps in the table, but all I get is a long int :S
import MySQLdb

db = MySQLdb.connect(
    host="Some Host",
    user="SOME USER",
    passwd="SOME PASS",
    db="SOME DB",
    port=3306
)
sql = "SELECT `timestamp` from `table`"
timestamps = db.cursor().execute(sql)
Try this:
cur = db.cursor()
cur.execute(sql)
timestamps = []
for rec in cur:
    timestamps.append(rec[0])
You need to call fetchmany() on the cursor to fetch more than one row, or call fetchone() in a loop until it returns None.
Consider the possibility that the not-very-long integer you are getting is the number of rows in your query result.
Consider reading the docs (PEP 249)... (1) The return value of cursor.execute() is not defined; what you are seeing is particular to your database driver and, for portability's sake, should not be relied on. (2) You need to do results = cursor.fetch{one|many|all}() or iterate over the cursor: for row in cursor: do_something(row)
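Putting those pieces together, a minimal sketch (in MySQLdb specifically, execute() happens to return the row count, which is exactly the 374870 the question describes):
cur = db.cursor()
cur.execute("SELECT `timestamp` FROM `table`")
timestamps = [row[0] for row in cur.fetchall()]  # the column values themselves, not the row count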
